id stringlengths 2 115 | lastModified stringlengths 24 24 | tags list | author stringlengths 2 42 ⌀ | description stringlengths 0 68.7k ⌀ | citation stringlengths 0 10.7k ⌀ | cardData null | likes int64 0 3.55k | downloads int64 0 10.1M | card stringlengths 0 1.01M |
|---|---|---|---|---|---|---|---|---|---|
jonathan-roberts1/SIRI-WHU | 2023-03-31T17:18:08.000Z | [
"task_categories:image-classification",
"task_categories:zero-shot-image-classification",
"license:other",
"region:us"
] | jonathan-roberts1 | null | null | null | 0 | 12 | ---
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': agriculture
'1': commercial
'2': harbor
'3': idle_land
'4': industrial
'5': meadow
'6': overpass
'7': park
'8': pond
'9': residential
'10': river
'11': water
splits:
- name: train
num_bytes: 158215614.4
num_examples: 2400
download_size: 147702566
dataset_size: 158215614.4
license: other
task_categories:
- image-classification
- zero-shot-image-classification
---
# Dataset Card for "SIRI-WHU"
## Dataset Description
- **Paper** [Dirichlet-derived multiple topic scene classification model for high spatial resolution remote sensing imagery](https://ieeexplore.ieee.org/iel7/36/4358825/07329997.pdf)
- **Paper** [The Fisher kernel coding framework for high spatial resolution scene classification](https://www.mdpi.com/2072-4292/8/2/157/pdf)
- **Paper** [Bag-of-visual-words scene classifier with local and global features for high spatial resolution remote sensing imagery](https://ieeexplore.ieee.org/iel7/8859/7473942/07466064.pdf)
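As a quick illustration of the schema declared in the YAML header above, a minimal loading sketch might look like this (repository id and split name are taken from the header; this is not part of the original card):
```python
from datasets import load_dataset

# Single train split, as declared in dataset_info above
ds = load_dataset("jonathan-roberts1/SIRI-WHU", split="train")

label_names = ds.features["label"].names  # ['agriculture', 'commercial', ...]
example = ds[0]
print(example["image"].size, label_names[example["label"]])
```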
### Licensing Information
CC BY-NC-ND
## Citation Information
[Dirichlet-derived multiple topic scene classification model for high spatial resolution remote sensing imagery](https://ieeexplore.ieee.org/iel7/36/4358825/07329997.pdf)
[The Fisher kernel coding framework for high spatial resolution scene classification](https://www.mdpi.com/2072-4292/8/2/157/pdf)
[Bag-of-visual-words scene classifier with local and global features for high spatial resolution remote sensing imagery](https://ieeexplore.ieee.org/iel7/8859/7473942/07466064.pdf)
```
@article{zhao2015dirichlet,
title={Dirichlet-derived multiple topic scene classification model for high spatial resolution remote sensing imagery},
author={Zhao, Bei and Zhong, Yanfei and Xia, Gui-Song and Zhang, Liangpei},
journal={IEEE Transactions on Geoscience and Remote Sensing},
volume={54},
number={4},
pages={2108--2123},
year={2015},
publisher={IEEE}
}
@article{zhao2016fisher,
title={The Fisher kernel coding framework for high spatial resolution scene classification},
author={Zhao, Bei and Zhong, Yanfei and Zhang, Liangpei and Huang, Bo},
journal={Remote Sensing},
volume={8},
number={2},
pages={157},
year={2016},
publisher={MDPI}
}
@article{zhu2016bag,
title={Bag-of-visual-words scene classifier with local and global features for high spatial resolution remote sensing imagery},
author={Zhu, Qiqi and Zhong, Yanfei and Zhao, Bei and Xia, Gui-Song and Zhang, Liangpei},
journal={IEEE Geoscience and Remote Sensing Letters},
volume={13},
number={6},
pages={747--751},
year={2016},
publisher={IEEE}
}
``` |
jbarat/plant_species | 2023-01-22T14:03:45.000Z | [
"task_categories:image-classification",
"size_categories:10K<n<100K",
"language:en",
"license:unknown",
"region:us"
] | jbarat | null | null | null | 1 | 12 | ---
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': aechmea_fasciata
'1': agave_americana
'2': agave_attenuata
'3': agave_tequilana
'4': aglaonema_commutatum
'5': albuca_spiralis
'6': allium_cepa
'7': allium_sativum
splits:
- name: train
num_bytes: 82083349.0
num_examples: 800
download_size: 82004194
dataset_size: 82083349.0
license: unknown
task_categories:
- image-classification
language:
- en
pretty_name: Plant Species
size_categories:
- 10K<n<100K
---
# Dataset Card for "plant_species"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
nglaura/koreascience-summarization | 2023-04-11T10:23:00.000Z | [
"task_categories:summarization",
"language:fr",
"license:apache-2.0",
"region:us"
] | nglaura | null | null | null | 1 | 12 | ---
license: apache-2.0
task_categories:
- summarization
language:
- fr
pretty_name: KoreaScience
---
# LoRaLay: A Multilingual and Multimodal Dataset for Long Range and Layout-Aware Summarization
A collaboration between [reciTAL](https://recital.ai/en/), [MLIA](https://mlia.lip6.fr/) (ISIR, Sorbonne Université), [Meta AI](https://ai.facebook.com/), and [Università di Trento](https://www.unitn.it/)
## KoreaScience dataset for summarization
KoreaScience is a dataset for summarization of research papers written in Korean, for which layout information is provided.
### Data Fields
- `article_id`: article id
- `article_words`: sequence of words constituting the body of the article
- `article_bboxes`: sequence of corresponding word bounding boxes
- `norm_article_bboxes`: sequence of corresponding normalized word bounding boxes
- `abstract`: a string containing the abstract of the article
- `article_pdf_url`: URL of the article's PDF
### Data Splits
This dataset has 3 splits: _train_, _validation_, and _test_.
| Dataset Split | Number of Instances |
| ------------- | --------------------|
| Train | 35,248 |
| Validation | 1,125 |
| Test | 1,125 |
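Based on the fields and splits listed above, a minimal loading sketch might look as follows (the repository id is this dataset's id; whether extra loading arguments are needed depends on how the repository is set up):
```python
from datasets import load_dataset

ds = load_dataset("nglaura/koreascience-summarization", split="train")

sample = ds[0]
# Pair each word of the article body with its normalized bounding box
words_with_boxes = list(zip(sample["article_words"], sample["norm_article_bboxes"]))
print(sample["article_id"], sample["article_pdf_url"])
print(sample["abstract"][:200])
print(words_with_boxes[:3])
```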
## Citation
``` latex
@article{nguyen2023loralay,
title={LoRaLay: A Multilingual and Multimodal Dataset for Long Range and Layout-Aware Summarization},
author={Nguyen, Laura and Scialom, Thomas and Piwowarski, Benjamin and Staiano, Jacopo},
journal={arXiv preprint arXiv:2301.11312},
year={2023}
}
``` |
qwedsacf/competition_math | 2023-01-28T20:28:01.000Z | [
"task_categories:text2text-generation",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:en",
"license:mit",
"explanation-generation",
"arxiv:2103.03874",
"region:us"
... | qwedsacf | null | null | null | 6 | 12 | ---
annotations_creators:
- expert-generated
language_creators:
- expert-generated
language:
- en
license:
- mit
multilinguality:
- monolingual
pretty_name: Mathematics Aptitude Test of Heuristics (MATH)
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- text2text-generation
task_ids: []
tags:
- explanation-generation
---
# Dataset Card for Mathematics Aptitude Test of Heuristics (MATH) dataset
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://github.com/hendrycks/math
- **Repository:** https://github.com/hendrycks/math
- **Paper:** https://arxiv.org/pdf/2103.03874.pdf
- **Leaderboard:** N/A
- **Point of Contact:** Dan Hendrycks
### Dataset Summary
The Mathematics Aptitude Test of Heuristics (MATH) dataset consists of problems
from mathematics competitions, including the AMC 10, AMC 12, AIME, and more.
Each problem in MATH has a full step-by-step solution, which can be used to teach
models to generate answer derivations and explanations.
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
A data instance consists of a competition math problem and its step-by-step solution written in LaTeX and natural language. The step-by-step solution contains the final answer enclosed in LaTeX's `\boxed` tag.
An example from the dataset is:
```
{'problem': 'A board game spinner is divided into three parts labeled $A$, $B$ and $C$. The probability of the spinner landing on $A$ is $\\frac{1}{3}$ and the probability of the spinner landing on $B$ is $\\frac{5}{12}$. What is the probability of the spinner landing on $C$? Express your answer as a common fraction.',
'level': 'Level 1',
'type': 'Counting & Probability',
'solution': 'The spinner is guaranteed to land on exactly one of the three regions, so we know that the sum of the probabilities of it landing in each region will be 1. If we let the probability of it landing in region $C$ be $x$, we then have the equation $1 = \\frac{5}{12}+\\frac{1}{3}+x$, from which we have $x=\\boxed{\\frac{1}{4}}$.'}
```
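Because the final answer is wrapped in `\boxed{...}`, it can be pulled out of a `solution` string with a small helper; the following is an illustrative sketch, not part of the dataset's own tooling:
```python
def extract_boxed_answer(solution):
    """Return the contents of the last \\boxed{...} in a solution string, handling nested braces."""
    start = solution.rfind("\\boxed{")
    if start == -1:
        return None
    i = start + len("\\boxed{")
    depth, chars = 1, []
    while i < len(solution):
        ch = solution[i]
        if ch == "{":
            depth += 1
        elif ch == "}":
            depth -= 1
            if depth == 0:
                break
        chars.append(ch)
        i += 1
    return "".join(chars)

print(extract_boxed_answer("we have $x=\\boxed{\\frac{1}{4}}$."))  # \frac{1}{4}
```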
### Data Fields
* `problem`: The competition math problem.
* `solution`: The step-by-step solution.
* `level`: The problem's difficulty level from 'Level 1' to 'Level 5', where a subject's easiest problems for humans are assigned to 'Level 1' and a subject's hardest problems are assigned to 'Level 5'.
* `type`: The subject of the problem: Algebra, Counting & Probability, Geometry, Intermediate Algebra, Number Theory, Prealgebra and Precalculus.
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
https://github.com/hendrycks/math/blob/main/LICENSE
### Citation Information
```bibtex
@article{hendrycksmath2021,
title={Measuring Mathematical Problem Solving With the MATH Dataset},
author={Dan Hendrycks
and Collin Burns
and Saurav Kadavath
and Akul Arora
and Steven Basart
and Eric Tang
and Dawn Song
and Jacob Steinhardt},
journal={arXiv preprint arXiv:2103.03874},
year={2021}
}
``` |
qwertyforce/scenery_watermarks | 2023-01-31T16:58:17.000Z | [
"task_categories:image-classification",
"size_categories:10K<n<100K",
"license:cc-by-nc-4.0",
"watermark",
"doi:10.57967/hf/0313",
"region:us"
] | qwertyforce | null | null | null | 3 | 12 | ---
license: cc-by-nc-4.0
task_categories:
- image-classification
tags:
- watermark
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': no_watermark
'1': watermark
splits:
- name: train
num_bytes: 1094841327.222
num_examples: 22762
download_size: 1057455120
dataset_size: 1094841327.222
pretty_name: Scenery Watermarks
size_categories:
- 10K<n<100K
---
Dataset for watermark classification (`no_watermark` / `watermark`).
~22k images, 512x512, manually annotated.
Additional info: https://github.com/qwertyforce/scenery_watermarks |
Kaludi/data-food-category-classification | 2023-02-03T02:09:07.000Z | [
"task_categories:image-classification",
"region:us"
] | Kaludi | null | null | null | 0 | 12 | ---
task_categories:
- image-classification
---
# Dataset for project: food-category-classification
## Dataset Description
This dataset is for project food-category-classification.
### Languages
The BCP-47 code for the dataset's language is unk.
## Dataset Structure
### Data Instances
A sample from this dataset looks as follows:
```json
[
{
"image": "<512x512 RGB PIL image>",
"target": 0
},
{
"image": "<512x512 RGB PIL image>",
"target": 0
}
]
```
### Dataset Fields
The dataset has the following fields (also called "features"):
```json
{
"image": "Image(decode=True, id=None)",
"target": "ClassLabel(names=['Bread', 'Dairy product', 'Dessert', 'Egg', 'Fried food', 'Meat', 'Noodles-Pasta', 'Rice', 'Seafood', 'Soup', 'Vegetable-Fruit'], id=None)"
}
```
### Dataset Splits
This dataset is split into a train and validation split. The split sizes are as follow:
| Split name | Num samples |
| ------------ | ------------------- |
| train | 1210 |
| valid | 275 |
|
kasnerz/wikitabletext | 2023-03-14T15:09:16.000Z | [
"region:us"
] | kasnerz | null | null | null | 0 | 12 | Entry not found |
pszemraj/HC3-textgen-qa | 2023-02-11T22:56:14.000Z | [
"task_categories:text-generation",
"source_datasets:Hello-SimpleAI/HC3",
"language:en",
"license:apache-2.0",
"chatgpt",
"conversation",
"region:us"
] | pszemraj | null | null | null | 0 | 12 | ---
license: apache-2.0
task_categories:
- text-generation
language:
- en
tags:
- chatgpt
- conversation
source_datasets: Hello-SimpleAI/HC3
pretty_name: HC3 for QA textgen
---
# HC3-textgen-qa
- the `Hello-SimpleAI/HC3` dataset reformatted for text generation
- special tokens for question/answer; see the dataset preview |
jonathan-roberts1/Million-AID | 2023-03-31T15:46:07.000Z | [
"task_categories:image-classification",
"task_categories:zero-shot-image-classification",
"license:other",
"region:us"
] | jonathan-roberts1 | null | null | null | 0 | 12 | ---
dataset_info:
features:
- name: image
dtype: image
- name: label_1
dtype:
class_label:
names:
'0': unutilized land
'1': commercial land
'2': public service land
'3': transportation land
'4': industrial land
'5': water area
'6': residential land
'7': agriculture land
- name: label_2
dtype:
class_label:
names:
'0': dam
'1': religious land
'2': rock land
'3': sparse shrub land
'4': arable land
'5': factory area
'6': detached house
'7': desert
'8': lake
'9': power station
'10': beach
'11': ice land
'12': bare land
'13': island
'14': woodland
'15': mobile home park
'16': railway area
'17': river
'18': grassland
'19': apartment
'20': special land
'21': port area
'22': commercial area
'23': highway area
'24': mining area
'25': sports land
'26': airport area
'27': leisure land
- name: label_3
dtype:
class_label:
names:
'0': dam
'1': parking lot
'2': greenhouse
'3': pier
'4': bridge
'5': mine
'6': rock land
'7': baseball field
'8': apron
'9': tennis court
'10': sparse shrub land
'11': works
'12': oil field
'13': meadow
'14': ground track field
'15': detached house
'16': golf course
'17': forest
'18': desert
'19': lake
'20': beach
'21': paddy field
'22': ice land
'23': bare land
'24': storage tank
'25': basketball court
'26': island
'27': substation
'28': mobile home park
'29': cemetery
'30': quarry
'31': solar power plant
'32': helipad
'33': roundabout
'34': runway
'35': wastewater plant
'36': river
'37': apartment
'38': dry field
'39': intersection
'40': swimming pool
'41': commercial area
'42': church
'43': road
'44': orchard
'45': terraced field
'46': stadium
'47': train station
'48': railway
'49': viaduct
'50': wind turbine
splits:
- name: train
num_bytes: 871962498
num_examples: 10000
download_size: 871644115
dataset_size: 871962498
license: other
task_categories:
- image-classification
- zero-shot-image-classification
---
# Dataset Card for "Million-AID"
## Dataset Description
- **Paper** [On creating benchmark dataset for aerial image interpretation: Reviews, guidances, and million-aid](https://ieeexplore.ieee.org/iel7/4609443/9314330/09393553.pdf)
- **Split** Train
## Split Information
This HuggingFace dataset repository contains just the Train split.
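The YAML header above declares three label columns of increasing granularity (`label_1`, `label_2`, `label_3`). A minimal sketch for reading them back as class names (illustrative only; repository id and column names are taken from the header):
```python
from datasets import load_dataset

ds = load_dataset("jonathan-roberts1/Million-AID", split="train")

sample = ds[0]
for column in ("label_1", "label_2", "label_3"):
    names = ds.features[column].names      # ClassLabel names from dataset_info
    print(column, "->", names[sample[column]])
```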
### Licensing Information
[CC BY-NC-ND 4.0](https://competitions.codalab.org/competitions/35974#learn_the_details-terms-and-conditions)
## Citation Information
[On creating benchmark dataset for aerial image interpretation: Reviews, guidances, and million-aid](https://ieeexplore.ieee.org/iel7/4609443/9314330/09393553.pdf)
```
@article{long2021creating,
title = {On creating benchmark dataset for aerial image interpretation: Reviews, guidances, and million-aid},
author = {Long, Yang and Xia, Gui-Song and Li, Shengyang and Yang, Wen and Yang, Michael Ying and Zhu, Xiao Xiang and Zhang, Liangpei and Li, Deren},
year = 2021,
journal = {IEEE Journal of selected topics in applied earth observations and remote sensing},
publisher = {IEEE},
volume = 14,
pages = {4205--4230}
}
``` |
Finnish-NLP/Reddit_fi_2006_2022 | 2023-05-19T18:32:54.000Z | [
"region:us"
] | Finnish-NLP | null | null | null | 1 | 12 | ---
dataset_info:
features:
- name: subreddit
dtype: string
- name: created_utc
dtype: int64
- name: score
dtype: int64
- name: body
dtype: string
- name: predicted_language
dtype: string
- name: probability
dtype: float64
- name: year
dtype: float64
- name: day
dtype: float64
- name: month
dtype: float64
- name: time
dtype: string
- name: label_identity_attack
dtype: float64
- name: label_insult
dtype: float64
- name: label_obscene
dtype: float64
- name: label_severe_toxicity
dtype: float64
- name: label_threat
dtype: float64
- name: label_toxicity
dtype: float64
splits:
- name: train
num_bytes: 1852133546
num_examples: 4476667
download_size: 1024144293
dataset_size: 1852133546
---
# Dataset Card for "Reddit_fi_2006_2022"
## Dataset Description
- **Point of Contact:** [RASMUS](https://www.linkedin.com/in/rasmustoivanen/)
- **Size of the CSV files on disk:** 1542.75 MB
- **Size of the generated parquet files:** 970 MB
### Dataset Summary
Reddit_fi is a filtered and post-processed corpus consisting of comments from [Reddit](https://reddit.com/).
Some words of caution at this stage, however: unlike ScandiReddit, no specific subreddits were filtered out, so subreddits containing hate speech, toxic, or biased content may be present. Be careful when training language models with this data and curate your dataset properly.
All Reddit comments from January 2006 up until December 2022 were downloaded through [PushShift](https://files.pushshift.io/reddit/comments/), after which they were filtered with the FastText language detection model, using a confidence score of 70% as the threshold.
We also filtered out messages shorter than 30 characters, based on the `body` field.
This project was inspired by https://huggingface.co/datasets/alexandrainst/scandi-reddit, created by https://www.saattrupdan.com/. Kudos to you!
### Filtering disclaimer: toxicity and bias
The dataset is provided as is and very likely includes toxic, biased, and otherwise problematic material. You should carefully curate this dataset for your needs. To label toxic messages, we used the Finnish toxicity classifier [TurkuNLP/bert-large-finnish-cased-toxicity](https://huggingface.co/TurkuNLP/bert-large-finnish-cased-toxicity) released by TurkuNLP. This dataset includes 6 different toxicity labels with their predicted scores for each message. You can use those labels and scores to filter out toxic messages (see the sketch after the list below).
We evaluated subreddits with over 500 messages and provide a list of subreddits that, based on our quick analysis, should be filtered out:
[FinlandOnlyfans,
Warframe,
Finnishbitches,
vitunluurangot,
WTF,
SaatananTeletapit,
FinnishWhores,
pics,
iidapiiroinen123,
okkamuretardi,
FinnishGenderCritical,
onlyfanssuomi,
SuomiBannatut,
jumalattaret,
jumalattaret2,
jumalattaretPro,
HommaInAction,
snappisensuroimaton]
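As mentioned above, the toxicity scores and the subreddit list can be used together to filter the corpus. A minimal sketch (illustrative only; the 0.8 toxicity threshold is an arbitrary assumption you should tune for your use case):
```python
from datasets import load_dataset

BLOCKED_SUBREDDITS = {
    "FinlandOnlyfans", "Warframe", "Finnishbitches", "vitunluurangot", "WTF",
    "SaatananTeletapit", "FinnishWhores", "pics", "iidapiiroinen123",
    "okkamuretardi", "FinnishGenderCritical", "onlyfanssuomi", "SuomiBannatut",
    "jumalattaret", "jumalattaret2", "jumalattaretPro", "HommaInAction",
    "snappisensuroimaton",
}

ds = load_dataset("Finnish-NLP/Reddit_fi_2006_2022", split="train")

def keep(example, toxicity_threshold=0.8):
    # Drop comments from the blocked subreddits and comments scored as toxic
    return (
        example["subreddit"] not in BLOCKED_SUBREDDITS
        and example["label_toxicity"] < toxicity_threshold
    )

filtered = ds.filter(keep)
print(len(ds), "->", len(filtered))
```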
### Supported Tasks and Leaderboards
Training language models is the intended task for this dataset.
You can also use this dataset for various data analysis tasks.
### Languages
The dataset is available in Finnish
### Data Instances
An example from the dataset looks as follows.
```
{
"subreddit": "arkisuomi",
"created_utc": 1671152007,
"score": 1,
"body": "oatlyn iKaffe on maitoa parempaa kahvissa, en jois pelkästään kuitenkaan",
"predicted_language": "__label__fi",
"probability": 0.9783772230148317,
"year": 2022.0,
"day": 16.0,
"month": 12.0,
"time": "00:53:27",
"label_identity_attack": 0.00018978118896484375,
"label_insult": 0.00058746337890625,
"label_obscene": 0.00142669677734375,
"label_severe_toxicity": 6.723403930664062e-05,
"label_threat": 0.0004100799560546875,
"label_toxicity": 0.01025390625
}
```
### Data Fields
The data fields are the same among all splits.
- `subreddit`: `string`
- `created_utc`: `int64`
- `score`: `int64`
- `body`: `string`
- `predicted_language`: `string`
- `probability`: `float64`
- `year`: `float64`
- `day`: `float64`
- `month`: `float64`
- `time`: `string`
- `label_identity_attack`: `float64`
- `label_insult`: `float64`
- `label_obscene`: `float64`
- `label_severe_toxicity`: `float64`
- `label_threat`: `float64`
- `label_toxicity`: `float64`
### Language Distribution
- fi: 4,476,667
### Top-5 Subreddit Distribution
- Suomi: 3 546 657
- snappijuorut: 469 592
- LakkoPostaukset: 64 198
- snappisensuroimaton: 6 248
- mina_irl: 2 828
## Dataset Creation
### Curation Rationale
The Finnish language does not have that many open source social media datasets. One notable dataset is Suomi24 but it has restricted access.
### Source Data
The raw Reddit data was collected through [PushShift](https://files.pushshift.io/reddit/comments/).
## Additional Information
### Dataset Curators
[Rasmus Toivanen](https://www.linkedin.com/in/rasmustoivanen/)
curated this dataset.
### Licensing Information
The dataset is licensed under the [CC BY 4.0
license](https://creativecommons.org/licenses/by/4.0/). |
t0mmy/livedoor_news_corpus | 2023-03-12T02:25:37.000Z | [
"task_categories:text-classification",
"size_categories:1K<n<10K",
"language:ja",
"license:cc",
"region:us"
] | t0mmy | This corpus is from news stories in “livedoor news” administered by NHN Japan and only the following ones that are governed by Creative Commons license were collected and had as many HTML tags as possible deleted. | null | null | 1 | 12 | ---
license: cc
task_categories:
- text-classification
language:
- ja
pretty_name: livedoor News Corpus
size_categories:
- 1K<n<10K
---
# Dataset Card for "livedoor_news_corpus"
## Dataset Description
- **Homepage:** [ダウンロード - 株式会社ロンウイット](http://www.rondhuit.com/download.html#ldcc)
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:** [RONDHUIT](mailto:sales@rondhuit.com)
### Dataset Summary
The livedoor News Corpus is a collection of 7k human-written Japanese news stories.
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
The language in the dataset is Japanese. The BCP-47 code for Japanese is ja.
## Dataset Structure
### Data Instances
For each instance, there is a string for the URL, a datetime for the date, a string for the title, a string for the text, and an integer for the label.
```
{'url': 'http://news.livedoor.com/article/detail/6601535/',
'date': '2012-05-28T12:55:00+0900',
'title': 'NTTドコモ、2012夏モデル新商品内覧会を東京・名古屋・大阪で開催!DCMXおよびプレミアステージ会員向け',
'text': '2012夏モデル新商品内覧会が開催! \n\nNTTドコモは28日、この夏以降に発売予定の新商品を発売前に体験できる「2012 夏モデル新商品内覧会」を東京や名古屋、大阪にてDCMX会員およびプレミアステージ会員(ドコモプレミアクラブ)を対象に実施することをお知らせしています。\n\n事前お申込みは不要で、当日、入場の際にDCMXカードもしくはドコミプレミアクラブ・サイト画面を提示することで、入場できます。\n\nまた、1人の対象者がいれば、知り合いや友だちを連れていっても大丈夫とのことです。なお、DCMX mini会員は対象外となるということです。\n\n開催日時および開催会場は、以下の通りです。ただし、時間帯によっては混雑のために入場制限をする場合があるとのことですので、ご注意ください。\n\n【開催日】\n・東京会場\n2012年6月8日(金)〜10日(日)\n・名古屋会場\n2012年6月15日(金)〜17日(日)\n・大阪会場\n2012年6月16日(土)〜17日(日)\n\n※時間帯によっては混雑のため、入場制限させていただく場合があります。あらかじめご了承願います。\n※お連れ様は何名でもご来場いただけます。\n※会場までの交通費等はお客様ご負担となります。\n※ご来場の際は、公共交通機関をご利用ください。\n\n【東京会場】\n■会場\n東京ドームシティ プリズムホール 1F\n大好評の各機種のメーカー担当者によるプレゼンテーション、スマートフォン講座の他、20周年の感謝の気持ちを込めて、約60機種の歴代ケータイの展示や、歴代ドコモダケ展示など、特別企画も盛りだくさん!ご家族、お友達をお誘いの上、是非ご来場ください。\n\nステージスケジュールは6月1日(金)公開予定!\n■日時\n2012年6月8日(金)午後5:00〜午後9:00\n※最終入場時間:午後8:30\n2011年6月9日(土)・10日(日)午前10:30〜午後6:00\n※最終入場時間:午後5:30\n\n※途中入場可\n※開場時間にご注意ください。\n※当日の様子を取材しホームページ等に掲載する場合があります。なお、当日取材させていただいた画像、コメントなどの肖像権は弊社に帰属するものとさせていただきます。\n■混雑状況\n当日の混雑状況についてご確認いただけます。\n詳しくはこちら\n■住所\n東京都文京区後楽1-3-61\n東京ドームシティ プリズムホール 1F\n■交通アクセス\n・JR中央線・総武線・都営三田線「水道橋駅」徒歩約1分\n・東京メトロ丸ノ内線・南北線「後楽園駅」徒歩約3分\n・都営大江戸線「春日駅」徒歩約5分\n\n\n【名古屋会場】\n■会場\n栄ガスビル5F ガスホール\nスマートフォンのステージイベントを実施予定!モバイルアスキー・アスキードットPC編集部presentsで定番のアプリからおすすめの人気アプリなどを紹介します。\n\nステージスケジュールは6月1日(金)公開予定!\n\nDCMXのカードをご提示いただいた方に抽選で粗品をプレゼントいたします。DCMX会員の皆様は、是非DCMXのカードをご持参ください。\n※6月15日(金)は内覧会は開催されますが、ステージはございません。\n■日時\n2012年6月15日(金)午後6:00〜午後9:00\n※最終入場時間:午後8:30\n2012年6月16日(土)・17日(日)午前11:00〜午後6:00\n※最終入場時間:午後5:30\n\n※途中入場可\n※開催時間にご注意ください。\n■住所\n愛知県名古屋市中区栄3-15-33\n栄ガスホール 5F 栄ガスホール\n■交通アクセス\n・地下鉄東山線・名城線「栄駅」サカエチカ6番出口より徒歩約5分\n・地下鉄名城線「矢場町駅」6番出口より徒歩約2分\n\n\n【大阪会場】\n■会場\nハービスOSAKA B2F ハービスHALL\nスペシャルステージを実施予定! 各機種のメーカー担当者によるプレゼンテーションの他、メーカー担当者が一堂に会する「スマートフォンサミット」、その他お楽しみ企画もあるよ!\nステージスケジュールは6月1日(金)公開予定!\n\n■日時\n2012年6月16日(土)・17日(日)午前11:00〜午後6:00\n※最終入場時間:午後5:30\n※途中入場可\n※当日の様子を取材しホームページ等に掲載する場合があります。なお、当日取材させていただいた画像、コメントなどの肖像権は弊社に帰属するものとさせていただきます。\n■住所\n大阪府大阪市北区梅田2-5-25\nハービスOSAKA B2F ハービスHALL\n■交通アクセス\n・阪神電車「梅田駅」西改札より徒歩約6分\n・JR線「大阪駅」桜橋口より徒歩約7分\n・地下鉄御堂筋線「梅田駅」南改札より徒歩約10分\n・阪急電車「梅田駅」より徒歩約15分\n\n記事執筆:memn0ck\n\n■関連リンク\n・エスマックス(S-MAX)\n・エスマックス(S-MAX) smaxjp on Twitter\n・DCMX|ドコモのケータイクレジット\n',
'label': 6}
```
### Data Fields
- `url`: a string containing the URL of the article
- `date`: a datetime string for the publication date
- `title`: a string containing the article title
- `text`: a string containing the article body
- `label`: an integer category id: 0 = Topic News, 1 = Sports Watch, 2 = IT Life Hack, 3 = Appliance Channel, 4 = MOVIE ENTER, 5 = Single Woman Report, 6 = Smax, 7 = livedoor HOMME, 8 = Peachy
### Data Splits
The livedoor News Corpus has 1 split: *train*.
| Dataset Split | Number of Instances in Split |
| ------------- | ---------------------------- |
| Train | 7,367 |
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
The livedoor News Corpus was developed by [RONDHUIT](https://www.rondhuit.com/en.html).
### Licensing Information
The livedoor News Corpus is licensed under a [Creative Commons Attribution-NoDerivs 2.1 Japan License](https://creativecommons.org/licenses/by-nd/2.1/jp/)
### Citation Information
```
@misc{livedoornewscorpus,
title={livedoor News Corpus},
author={RONDHUIT},
year={2012},
howpublished={\url{http://www.rondhuit.com/download.html#ldcc}}
}
```
### Contributions
Thanks to [@rondhuit](https://github.com/RONDHUIT) for adding this dataset. |
hezarai/sentiment-dksf | 2023-09-02T10:33:35.000Z | [
"task_categories:text-classification",
"language:fa",
"region:us"
] | hezarai | Sentiment analysis dataset extracted and labeled from Digikala and Snapp Food comments | null | null | 0 | 12 | ---
task_categories:
- text-classification
language:
- fa
pretty_name: Digikala/SnappFood comments sentiment analysis
---
The Sentiment DKSF (Digikala/Snappfood comments) is a dataset for sentiment analysis. |
sedthh/cmu_wiki_qa | 2023-02-28T20:46:45.000Z | [
"task_categories:question-answering",
"task_categories:summarization",
"size_categories:1K<n<10K",
"language:en",
"license:mit",
"Carnegie Mellon University",
"University of Pittsburgh",
"Wikipedia",
"Q&A",
"region:us"
] | sedthh | null | null | null | 1 | 12 | ---
dataset_info:
features:
- name: INSTRUCTION
dtype: string
- name: RESPONSE
dtype: string
- name: SOURCE
dtype: string
- name: METADATA
dtype: string
splits:
- name: train
num_bytes: 410246
num_examples: 1610
download_size: 105516
dataset_size: 410246
license: mit
task_categories:
- question-answering
- summarization
language:
- en
tags:
- Carnegie Mellon University
- University of Pittsburgh
- Wikipedia
- Q&A
pretty_name: Question-Answer Dataset
size_categories:
- 1K<n<10K
---
# Dataset Card for "cmu_wiki_qa"
A filtered / cleaned version of the http://www.cs.cmu.edu/~ark/QA-data/ Q&A dataset, which provides manually-generated factoid questions from Wikipedia articles.
**Acknowledgments**
These data were collected by Noah Smith, Michael Heilman, Rebecca Hwa, Shay Cohen, Kevin Gimpel, and many students at Carnegie Mellon University and the University of Pittsburgh between 2008 and 2010.
Their research project was supported by NSF IIS-0713265 (to Smith), an NSF Graduate Research Fellowship (to Heilman), NSF IIS-0712810 and IIS-0745914 (to Hwa), and Institute of Education Sciences, U.S. Department of Education R305B040063 (to Carnegie Mellon).
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
totuta/youtube_subs_howto100M | 2023-03-04T01:38:37.000Z | [
"task_categories:conversational",
"size_categories:10M<n<100M",
"language:en",
"license:apache-2.0",
"arxiv:1906.03327",
"region:us"
] | totuta | null | null | null | 3 | 12 | ---
dataset_info:
features:
- name: instruction
dtype: string
- name: response
dtype: string
- name: source
dtype: string
splits:
- name: train
num_bytes: 1260882571
num_examples: 309136
download_size: 668637627
dataset_size: 1260882571
license: apache-2.0
task_categories:
- conversational
language:
- en
pretty_name: 'YouTube Subtitles of Instructions: HowTo100M'
size_categories:
- 10M<n<100M
---
# Dataset Card for youtube_subs_howto100M
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [HowTo100M homepage](https://www.di.ens.fr/willow/research/howto100m/)
- **Repository:** [HowTo100M repository](https://github.com/antoine77340/howto100m)
- **Paper:** [HowTo100M: Learning a Text-Video Embedding by Watching Hundred Million Narrated Video Clips](https://arxiv.org/abs/1906.03327)
### Dataset Summary
The `youtube_subs_howto100M` dataset is an English-language dataset of instruction-response pairs extracted from 309136 YouTube videos. The dataset was originally inspired by and sourced from the HowTo100M dataset, which was developed for natural language search for video clips.
### Supported Tasks and Leaderboards
- `conversational`: The dataset can be used to train a model to generate long-form responses to instructions (requests). This dataset is originally prepared for the [Open Assistant](https://github.com/LAION-AI/Open-Assistant), which is an open-source chat-based large language model.
### Languages
Currently, all text in the dataset is in English.
## Dataset Structure
### Data Instances
A typical data point comprises an `instruction`, a `response`, and a `source`.
An example from the youtube_subs_howto100M looks as follows:
```
{"instruction": "Please explain how to remove plaque without going to the dentist 2016", "response": "mineral deposit on teeth is known as tartar or plaque as time passes by the amount of tartar increases and if you don't take care it can cause periodontitis of course the best way to remove tartar is paying a visit to your dentist but another way is to remove plaque at your home in this video you will learn how to remove plaque at home to do so you will need baking soda toothbrush salt you hydrogen peroxide cup you gentle pick you water anti septic mouthwash you step one first mix one tablespoon of bacon soda with TSP of salt into the cup after you at the toothbrush with warm water dip it into the mixture scrub teeth with an in spit continue the same process for five minutes step to mix a cup full with hydrogen peroxide with cup of warm water and rinse your mouth for one minute then spit and rinse with cup of cool water step 3 rub the yellow tartar from teeth with a dental pick be careful not to scrape the gums it may irritate and damage them step 4 rinse mouth with an antiseptic mouthwash and repeat every second day here are some other advice is to help you keep your beautiful smile tomatoes and strawberries tomatoes and strawberries are rich in vitamin C which is excellent for oral health you can rub these fruits directly onto your teeth and let it sit for five minutes this way the tartar buildup will soften cheese being a Swiss or cheddar before meals helps neutralize the acids that involve black creation an ingredient in a cheese works as a barrier agent guava both guava fruit and leaves are considered excellent anti black agents to help remove plaque accumulated on the teeth and gums gloss they have anti-inflammatory and analgesic properties that help reduce swelling and pain in the gums brush your teeth regularly with a soft brush and make vertical movements pay attention on the space between gums and teeth floss regularly consuming spicy food stimulates syllabary glands that way saliva cleans mouth in a natural way five bacteria with an orange peel before going to bed and don't rinse mouth", "source": "YouTube"}
```
### Data Fields
- `instruction`: a request for an explanation.
- `response`: a long text of response sentences, currently not punctuated.
- `source`: the source of the datapoint, currently all `YouTube`.
### Data Splits
The dataset does not have train/valid/eval splits now.
## Dataset Creation
### Curation Rationale
The original HowTo100M dataset was developed for natural language search for video clips, not necessarily for conversational or chat-based training. However, the long monologue response can be regarded as a sequence of answers to a question, which can be induced from the video title. Therefore, a good number of high-quality request-response (long-form) pairs can be extracted from HowTo100M YouTube videos.
Concretely, this dataset is curated like below:
```
for each video in the HowTo100M dataset
if video_title starts with `how to`
add `Please explain` to the title to make an `instruction`
extract subtitles from the video to make a `response`
```
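An illustrative Python version of the loop above (a sketch only; the exact handling of the title casing is an assumption based on the example instance, and reading titles and subtitles from a local HowTo100M dump is left out):
```python
def to_instruction_pair(video_title, subtitles):
    """Turn one HowTo100M video (title + subtitle text) into an (instruction, response) pair."""
    if not video_title.lower().startswith("how to"):
        return None  # only "how to ..." videos are kept
    return {
        "instruction": "Please explain " + video_title[0].lower() + video_title[1:],
        "response": subtitles,
        "source": "YouTube",
    }

pair = to_instruction_pair(
    "How to remove plaque without going to the dentist 2016",
    "mineral deposit on teeth is known as tartar or plaque ...",
)
print(pair["instruction"])
# -> Please explain how to remove plaque without going to the dentist 2016
```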
### Source Data
#### Initial Data Collection and Normalization
Refer to the [Curation Rationale](#curation-rationale)
#### Who are the source language producers?
The language producers are YouTube users of the videos in HowTo100M dataset.
### Annotations
#### Annotation process
Refer to the [Curation Rationale](#curation-rationale)
#### Who are the annotators?
[N/A]
### Personal and Sensitive Information
[N/A]
## Considerations for Using the Data
### Social Impact of Dataset
[N/A]
### Discussion of Biases
[N/A]
### Other Known Limitations
[N/A]
## Additional Information
### Dataset Curators
The youtube_subs_howto100M dataset was created by [@totuta](https://github.com/totuta). The original HowTo100M dataset was created by Antoine Miech, Dimitri Zhukov, Jean-Baptiste Alayrac, Makarand Tapaswi, Ivan Laptev, and Josef Sivic.
### Licensing Information
Apache License 2.0
### Citation Information
@inproceedings{miech19howto100m,
title={How{T}o100{M}: {L}earning a {T}ext-{V}ideo {E}mbedding by {W}atching {H}undred {M}illion {N}arrated {V}ideo {C}lips},
author={Miech, Antoine and Zhukov, Dimitri and Alayrac, Jean-Baptiste and Tapaswi, Makarand and Laptev, Ivan and Sivic, Josef},
booktitle={ICCV},
year={2019},
}
### Contributions
Thanks to [@totuta](https://github.com/totuta) for adding this dataset. |
HuggingFaceH4/helpful_instructions | 2023-03-27T22:25:58.000Z | [
"task_categories:text-generation",
"language:en",
"license:apache-2.0",
"instruct",
"human-feedback",
"region:us"
] | HuggingFaceH4 | Helpful Instructions is a dataset of (prompt, completion) pairs that are derived from a variety of public datasets. As the name suggests, it focuses on instructions that are "helpful", i.e. the kind of questions or tasks a human user might instruct an AI assistant to perform. | """
_DESCRIPTION = | null | 7 | 12 | ---
license: apache-2.0
task_categories:
- text-generation
language:
- en
tags:
- instruct
- human-feedback
pretty_name: Helpful Instructions
dataset_info:
- config_name: self_instruct
features:
- name: prompt
dtype: string
- name: completion
dtype: string
- name: meta
struct:
- name: source
dtype: string
- name: config
dtype: string
splits:
- name: train
num_bytes: 24378246
num_examples: 82612
download_size: 12589487
dataset_size: 24378246
- config_name: super_natural_instructions
features:
- name: prompt
dtype: string
- name: completion
dtype: string
- name: meta
struct:
- name: source
dtype: string
- name: config
dtype: string
splits:
- name: train
num_bytes: 43352923
num_examples: 50000
download_size: 22605900
dataset_size: 43352923
- config_name: prompt_source
features:
- name: prompt
dtype: string
- name: completion
dtype: string
- name: meta
struct:
- name: source
dtype: string
- name: config
dtype: string
splits:
- name: train
num_bytes: 59843768
num_examples: 52657
download_size: 23607134
dataset_size: 59843768
- config_name: all
features:
- name: prompt
dtype: string
- name: completion
dtype: string
- name: meta
struct:
- name: source
dtype: string
- name: config
dtype: string
splits:
- name: train
num_bytes: 127574937
num_examples: 185269
download_size: 58901460
dataset_size: 127574937
---
# Dataset Card for Helpful Instructions
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:** Lewis Tunstall
### Dataset Summary
Helpful Instructions is a dataset of `(instruction, completion)` pairs that are derived from public datasets. As the name suggests, it focuses on instructions that are "helpful", i.e. the kind of questions or tasks a human user might instruct an AI assistant to perform. You can load the dataset as follows:
```python
from datasets import load_dataset
# Load all subsets
helpful_instructions = load_dataset("HuggingFaceH4/helpful_instructions", name="all")
# Load a single subset
helpful_instructions_subset = load_dataset("HuggingFaceH4/helpful_instructions", name="self_instruct")
```
### Supported Tasks and Leaderboards
This dataset can be used to fine-tune pretrained language models to follow instructions.
### Changelog
* March 5, 2023: `v1.0.0` release, with subsets from `HuggingFaceH4/self_instruct` (`self_instruct`, `super_natural_instructions`, `prompt_source`) |
tatiana-merz/cyrillic_turkic_langs | 2023-03-15T19:41:05.000Z | [
"task_categories:text-classification",
"size_categories:10K<n<100K",
"language:ba",
"language:cv",
"language:sah",
"language:tt",
"language:ky",
"language:kk",
"language:tyv",
"language:krc",
"language:ru",
"license:cc",
"wiki",
"region:us"
] | tatiana-merz | null | null | null | 0 | 12 | ---
license: cc
task_categories:
- text-classification
language:
- ba
- cv
- sah
- tt
- ky
- kk
- tyv
- krc
- ru
tags:
- wiki
size_categories:
- 10K<n<100K
---
# Cyrillic dataset of 8 Turkic languages spoken in Russia and former USSR
## Dataset Description
The dataset is a part of the [Leipzig Corpora (Wiki) Collection](https://corpora.uni-leipzig.de/).
For the text-classification comparison, Russian has been included in the dataset.
**Paper:**
Dirk Goldhahn, Thomas Eckart and Uwe Quasthoff (2012): Building Large Monolingual Dictionaries at the Leipzig Corpora Collection: From 100 to 200 Languages. In: Proceedings of the Eighth International Conference on Language Resources and Evaluation (LREC'12), 2012.
### Dataset Summary
### Supported Tasks and Leaderboards
### Languages
- ba - Bashkir
- cv - Chuvash
- sah - Sakha
- tt - Tatar
- ky - Kyrgyz
- kk - Kazakh
- tyv - Tuvinian
- krc - Karachay-Balkar
- ru - Russian
### Data Splits
Each split has the features `['text', 'label']`:
| Dataset Split | Number of Instances |
| ------------- | ------------------- |
| train | 72,000 |
| test | 9,000 |
| validation | 9,000 |
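A minimal loading sketch (illustrative only; the repository id is this dataset's id, and the `label` feature is assumed to encode the language of each text):
```python
from datasets import load_dataset

ds = load_dataset("tatiana-merz/cyrillic_turkic_langs")

sample = ds["train"][0]
print(sample["text"][:100], sample["label"])

# If `label` is stored as a ClassLabel, the language name can be recovered directly
label_feature = ds["train"].features["label"]
if hasattr(label_feature, "int2str"):
    print(label_feature.int2str(sample["label"]))
```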
## Dataset Creation
[Link to the notebook](https://github.com/tatiana-merz/YakuToolkit/blob/main/CyrillicTurkicCorpus.ipynb)
### Curation Rationale
[More Information Needed]
### Source Data
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] |
katarinagresova/Genomic_Benchmarks_human_enhancers_ensembl | 2023-03-13T19:36:04.000Z | [
"region:us"
] | katarinagresova | null | null | null | 2 | 12 | ---
dataset_info:
features:
- name: seq
dtype: string
- name: label
dtype: int64
splits:
- name: train
num_bytes: 34821392
num_examples: 123872
- name: test
num_bytes: 8668172
num_examples: 30970
download_size: 4077057
dataset_size: 43489564
---
# Dataset Card for "Genomic_Benchmarks_human_enhancers_ensembl"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
JBJoyce/DENTAL_CLICK | 2023-03-17T16:32:57.000Z | [
"region:us"
] | JBJoyce | null | null | null | 0 | 12 | ---
dataset_info:
features:
- name: audio
dtype: audio
- name: label
dtype: int64
splits:
- name: train
num_bytes: 126813363.116
num_examples: 3956
- name: test
num_bytes: 32697248.72
num_examples: 1020
download_size: 149989127
dataset_size: 159510611.836
---
# Dataset Card for "DENTAL_CLICK"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
saier/unarXive_citrec | 2023-04-02T01:28:05.000Z | [
"task_categories:text-classification",
"task_ids:multi-class-classification",
"annotations_creators:machine-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1M<n<10M",
"source_datasets:extended|10.5281/zenodo.7752615",
"language:en",
"license:cc-by-sa-4.0",
"a... | saier | null | null | null | 3 | 12 | ---
annotations_creators:
- machine-generated
language:
- en
language_creators:
- found
license:
- cc-by-sa-4.0
multilinguality:
- monolingual
pretty_name: unarXive citation recommendation
size_categories:
- 1M<n<10M
tags:
- arXiv.org
- arXiv
- citation recommendation
- citation
- reference
- publication
- paper
- preprint
- section
- physics
- mathematics
- computer science
- cs
task_categories:
- text-classification
task_ids:
- multi-class-classification
source_datasets:
- extended|10.5281/zenodo.7752615
dataset_info:
features:
- name: _id
dtype: string
- name: text
dtype: string
- name: marker
dtype: string
- name: marker_offsets
sequence:
sequence: int64
- name: label
dtype: string
config_name: .
splits:
- name: train
num_bytes: 5457336094
num_examples: 2043192
- name: test
num_bytes: 551012459
num_examples: 225084
- name: validation
num_bytes: 586422261
num_examples: 225348
download_size: 7005370567
dataset_size: 6594770814
---
# Dataset Card for unarXive citation recommendation
## Dataset Description
* **Homepage:** [https://github.com/IllDepence/unarXive](https://github.com/IllDepence/unarXive)
* **Paper:** [unarXive 2022: All arXiv Publications Pre-Processed for NLP, Including Structured Full-Text and Citation Network](https://arxiv.org/abs/2303.14957)
### Dataset Summary
The unarXive citation recommendation dataset contains 2.5 million paragraphs from computer science papers, each with an annotated citation marker. The paragraphs and citation information are derived from [unarXive](https://github.com/IllDepence/unarXive).
Note that citation information is only given as the [OpenAlex](https://openalex.org/) ID of the cited paper. An important consideration for models is therefore whether the data is used *as is*, or whether additional information on the cited papers (metadata, abstracts, full-text, etc.) is used.
The dataset can be used as follows.
```
from datasets import load_dataset
citrec_data = load_dataset('saier/unarXive_citrec')
citrec_data = citrec_data.class_encode_column('label') # assign target label column
citrec_data = citrec_data.remove_columns('_id') # remove sample ID column
```
## Dataset Structure
### Data Instances
Each data instance contains the paragraph’s text as well as information on one of the contained citation markers, in the form of a label (cited document OpenAlex ID), citation marker, and citation marker offset. An example is shown below.
```
{'_id': '7c1464bb-1f0f-4b38-b1a3-85754eaf6ad1',
'label': 'https://openalex.org/W3115081393',
'marker': '[1]',
'marker_offsets': [[316, 319]],
'text': 'Data: For sentiment analysis on Hindi-English CM tweets, we used the '
'dataset provided by the organizers of Task 9 at SemEval-2020.\n'
'The training dataset consists of 14 thousand tweets.\n'
'Whereas, the validation dataset as well as the test dataset contain '
'3 thousand tweets each.\n'
'The details of the dataset are given in [1]}.\n'
'For this task, we did not use any external dataset.\n'}
```
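The `marker_offsets` are character offsets into `text`; assuming the usual half-open `[start, end)` convention, the marker substring can be recovered directly. An illustrative sketch:
```python
from datasets import load_dataset

citrec_data = load_dataset('saier/unarXive_citrec', split='test')

sample = citrec_data[0]
for start, end in sample['marker_offsets']:
    # Assumes half-open [start, end) character offsets into `text`
    print(sample['text'][start:end], '->', sample['label'])
```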
### Data Splits
The data is split into training, development, and testing data as follows.
* Training: 2,043,192 instances
* Development: 225,084 instances
* Testing: 225,348 instances
## Dataset Creation
### Source Data
The paragraph texts are extracted from the data set [unarXive](https://github.com/IllDepence/unarXive).
#### Who are the source language producers?
The paragraphs were written by the authors of the arXiv papers. In the file `license_info.jsonl`, author and text licensing information can be found for all samples. An example is shown below.
```
{'authors': 'Yusuke Sekikawa, Teppei Suzuki',
'license': 'http://creativecommons.org/licenses/by/4.0/',
'paper_arxiv_id': '2011.09852',
'sample_ids': ['cc375518-347c-43d0-bfb2-f88564d66df8',
'18dc073e-a48e-488e-b34c-e5fc3cb8a4ca',
'0c2e89b3-d863-4bc2-9e11-8f6c48d867cb',
'd85e46cf-b11d-49b6-801b-089aa2dd037d',
'92915cea-17ab-4a98-aad2-417f6cdd53d2',
'e88cb422-47b7-4f69-9b0b-fbddf8140d98',
'4f5094a4-0e6e-46ae-a34d-e15ce0b9803c',
'59003494-096f-4a7c-ad65-342b74eed561',
'6a99b3f5-217e-4d3d-a770-693483ef8670']}
```
### Annotations
Citation information in unarXive is automatically determined ([see implementation](https://github.com/IllDepence/unarXive/blob/master/src/match_references_openalex.py)).
<!--
## Considerations for Using the Data
### Discussion and Biases
TODO
### Other Known Limitations
TODO
-->
## Additional Information
### Licensing information
The dataset is released under the Creative Commons Attribution-ShareAlike 4.0.
### Citation Information
```
@inproceedings{Saier2023unarXive,
author = {Saier, Tarek and Krause, Johan and F\"{a}rber, Michael},
title = {{unarXive 2022: All arXiv Publications Pre-Processed for NLP, Including Structured Full-Text and Citation Network}},
booktitle = {Proceedings of the 23rd ACM/IEEE Joint Conference on Digital Libraries},
year = {2023},
series = {JCDL '23}
}
```
|
YuanPJ/icsi_summ | 2023-03-30T01:31:29.000Z | [
"license:cc-by-4.0",
"region:us"
] | YuanPJ | \ | \ | null | 0 | 12 | ---
license: cc-by-4.0
---
|
Francesco/apex-videogame | 2023-03-30T09:10:05.000Z | [
"task_categories:object-detection",
"annotations_creators:crowdsourced",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:en",
"license:cc",
"rf100",
"region:us"
] | Francesco | null | null | null | 0 | 12 | ---
dataset_info:
features:
- name: image_id
dtype: int64
- name: image
dtype: image
- name: width
dtype: int32
- name: height
dtype: int32
- name: objects
sequence:
- name: id
dtype: int64
- name: area
dtype: int64
- name: bbox
sequence: float32
length: 4
- name: category
dtype:
class_label:
names:
'0': apex-game
'1': avatar
'2': object
annotations_creators:
- crowdsourced
language_creators:
- found
language:
- en
license:
- cc
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- object-detection
task_ids: []
pretty_name: apex-videogame
tags:
- rf100
---
# Dataset Card for apex-videogame
**The original COCO dataset is stored at `dataset.tar.gz`**
## Dataset Description
- **Homepage:** https://universe.roboflow.com/object-detection/apex-videogame
- **Point of Contact:** francesco.zuppichini@gmail.com
### Dataset Summary
apex-videogame
### Supported Tasks and Leaderboards
- `object-detection`: The dataset can be used to train a model for Object Detection.
### Languages
English
## Dataset Structure
### Data Instances
A data point comprises an image and its object annotations.
```
{
'image_id': 15,
'image': <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=640x640 at 0x2373B065C18>,
'width': 964043,
'height': 640,
'objects': {
'id': [114, 115, 116, 117],
'area': [3796, 1596, 152768, 81002],
'bbox': [
[302.0, 109.0, 73.0, 52.0],
[810.0, 100.0, 57.0, 28.0],
[160.0, 31.0, 248.0, 616.0],
[741.0, 68.0, 202.0, 401.0]
],
'category': [4, 4, 0, 0]
}
}
```
### Data Fields
- `image_id`: the image id
- `image`: `PIL.Image.Image` object containing the image. Note that when accessing the image column: `dataset[0]["image"]` the image file is automatically decoded. Decoding of a large number of image files might take a significant amount of time. Thus it is important to first query the sample index before the `"image"` column, *i.e.* `dataset[0]["image"]` should **always** be preferred over `dataset["image"][0]`
- `width`: the image width
- `height`: the image height
- `objects`: a dictionary containing bounding box metadata for the objects present on the image
- `id`: the annotation id
- `area`: the area of the bounding box
- `bbox`: the object's bounding box (in the [coco](https://albumentations.ai/docs/getting_started/bounding_boxes_augmentation/#coco) format)
- `category`: the object's category.
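Since `bbox` uses the COCO convention `[x_min, y_min, width, height]`, a small conversion to corner coordinates can be handy (an illustrative sketch, not part of the dataset tooling):
```python
def coco_to_corners(bbox):
    """Convert a COCO-style [x_min, y_min, width, height] box to [x_min, y_min, x_max, y_max]."""
    x, y, w, h = bbox
    return [x, y, x + w, y + h]

# First box from the example instance above
print(coco_to_corners([302.0, 109.0, 73.0, 52.0]))  # [302.0, 109.0, 375.0, 161.0]
```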
#### Who are the annotators?
Annotators are Roboflow users
## Additional Information
### Licensing Information
See original homepage https://universe.roboflow.com/object-detection/apex-videogame
### Citation Information
```
@misc{ apex-videogame,
title = { apex videogame Dataset },
type = { Open Source Dataset },
author = { Roboflow 100 },
howpublished = { \url{ https://universe.roboflow.com/object-detection/apex-videogame } },
url = { https://universe.roboflow.com/object-detection/apex-videogame },
journal = { Roboflow Universe },
publisher = { Roboflow },
year = { 2022 },
month = { nov },
note = { visited on 2023-03-29 },
}
```
### Contributions
Thanks to [@mariosasko](https://github.com/mariosasko) for adding this dataset. |
rcds/swiss_criticality_prediction | 2023-07-20T07:39:07.000Z | [
"task_categories:text-classification",
"annotations_creators:machine-generated",
"language_creators:expert-generated",
"multilinguality:multilingual",
"size_categories:100K<n<1M",
"source_datasets:original",
"language:de",
"language:fr",
"language:it",
"license:cc-by-sa-4.0",
"arxiv:2306.09237",... | rcds | This dataset contains Swiss federal court decisions for the legal criticality prediction task | @InProceedings{huggingface:dataset,
title = {A great new dataset},
author={huggingface, Inc.
},
year={2020}
} | null | 0 | 12 | ---
annotations_creators:
- machine-generated
language:
- de
- fr
- it
language_creators:
- expert-generated
license:
- cc-by-sa-4.0
multilinguality:
- multilingual
pretty_name: Legal Criticality Prediction
size_categories:
- 100K<n<1M
source_datasets:
- original
tags: []
task_categories:
- text-classification
---
# Dataset Card for Criticality Prediction
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
Legal Criticality Prediction (LCP) is a multilingual, diachronic dataset of 139K Swiss Federal Supreme Court (FSCS) cases annotated with two criticality labels. The bge_label is a binary label (critical, non-critical), while the citation_label has 5 classes (critical-1, critical-2, critical-3, critical-4, non-critical). The critical classes of the citation_label are distinct subsets of the critical class of the bge_label. This dataset creates a challenging text classification task. We also provide additional metadata, such as the publication year, the law area and the canton of origin per case, to promote robustness and fairness studies in the critical area of legal NLP.
### Supported Tasks and Leaderboards
LCP can be used as a text classification task.
### Languages
Switzerland has four official languages, three of which (German, French and Italian) are represented in this dataset. The decisions are written by the judges and clerks in the language of the proceedings.
German (91k), French (33k), Italian (15k)
## Dataset Structure
```
{
"decision_id": "008d8a52-f0ea-4820-a18c-d06066dbb407",
"language": "fr",
"year": "2018",
"chamber": "CH_BGer_004",
"region": "Federation",
"origin_chamber": "338.0",
"origin_court": "127.0",
"origin_canton": "24.0",
"law_area": "civil_law",
"law_sub_area": ,
"bge_label": "critical",
"citation_label": "critical-1",
"facts": "Faits : A. A.a. Le 17 août 2007, C.X._, née le 14 février 1944 et domiciliée...",
"considerations": "Considérant en droit : 1. Interjeté en temps utile (art. 100 al. 1 LTF) par les défendeurs qui ont succombé dans leurs conclusions (art. 76 LTF) contre une décision...",
"rulings": "Par ces motifs, le Tribunal fédéral prononce : 1. Le recours est rejeté. 2. Les frais judiciaires, arrêtés à 10'000 fr., sont mis solidairement à la charge des recourants...",
}
```
### Data Fields
```
decision_id: (str) a unique identifier of the for the document
language: (str) one of (de, fr, it)
year: (int) the publication year
chamber: (str) the chamber of the case
region: (str) the region of the case
origin_chamber: (str) the chamber of the origin case
origin_court: (str) the court of the origin case
origin_canton: (str) the canton of the origin case
law_area: (str) the law area of the case
law_sub_area:(str) the law sub area of the case
bge_label: (str) critical or non-critical
citation_label: (str) critical-1, critical-2, critical-3, critical-4, non-critical
facts: (str) the facts of the case
considerations: (str) the considerations of the case
rulings: (str) the rulings of the case
```
### Data Instances
[More Information Needed]
### Data Splits
The dataset split is date-stratified:
- Train: 2002-2015
- Validation: 2016-2017
- Test: 2018-2022
| Language | Subset | Number of Documents (Training/Validation/Test) |
|------------|------------|--------------------------------------------|
| German | **de** | 81'264 (56592 / 19601 / 5071) |
| French | **fr** | 49'354 (29263 / 11117 / 8974) |
| Italian | **it** | 7913 (5220 / 1901 / 792) |
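A minimal loading sketch for the binary task (illustrative only; depending on how the repository is configured, `load_dataset` may additionally require a configuration name):
```python
from datasets import load_dataset

lcp = load_dataset("rcds/swiss_criticality_prediction")

# Use the facts section as input text and the binary criticality label as target
train = lcp["train"].class_encode_column("bge_label")
sample = train[0]
print(sample["facts"][:200], "->", sample["bge_label"])
```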
## Dataset Creation
### Curation Rationale
The dataset was created by Stern (2023).
### Source Data
#### Initial Data Collection and Normalization
The original data are published from the Swiss Federal Supreme Court (https://www.bger.ch) in unprocessed formats (HTML). The documents were downloaded from the Entscheidsuche portal (https://entscheidsuche.ch) in HTML.
#### Who are the source language producers?
The decisions are written by the judges and clerks in the language of the proceedings.
### Annotations
#### Annotation process
bge_label:
1. all bger_references in the bge header were extracted (for bge see rcds/swiss_rulings).
2. bger file_names are compared with the found references
citation_label:
1. count all citations of all bger cases and weight them
2. divide the cited cases into four classes, depending on the number of citations
#### Who are the annotators?
Stern processed the data and introduced the bge_label and citation_label.
Metadata is published by the Swiss Federal Supreme Court (https://www.bger.ch).
### Personal and Sensitive Information
The dataset contains publicly available court decisions from the Swiss Federal Supreme Court. Personal or sensitive information has been anonymized by the court before publication according to the following guidelines: https://www.bger.ch/home/juridiction/anonymisierungsregeln.html.
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
We release the data under CC-BY-4.0 which complies with the court licensing (https://www.bger.ch/files/live/sites/bger/files/pdf/de/urteilsveroeffentlichung_d.pdf)
© Swiss Federal Supreme Court, 2002-2022
The copyright for the editorial content of this website and the consolidated texts, which is owned by the Swiss Federal Supreme Court, is licensed under the Creative Commons Attribution 4.0 International licence. This means that you can re-use the content provided you acknowledge the source and indicate any changes you have made.
Source: https://www.bger.ch/files/live/sites/bger/files/pdf/de/urteilsveroeffentlichung_d.pdf
### Citation Information
Please cite our [ArXiv-Preprint](https://arxiv.org/abs/2306.09237)
```
@misc{rasiah2023scale,
title={SCALE: Scaling up the Complexity for Advanced Language Model Evaluation},
author={Vishvaksenan Rasiah and Ronja Stern and Veton Matoshi and Matthias Stürmer and Ilias Chalkidis and Daniel E. Ho and Joel Niklaus},
year={2023},
eprint={2306.09237},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
### Contributions
Thanks to [@Stern5497](https://github.com/stern5497) for adding this dataset. |
Yulong-W/squadpara | 2023-04-01T10:28:25.000Z | [
"region:us"
] | Yulong-W | Stanford Question Answering Dataset (SQuAD) is a reading comprehension dataset, consisting of questions posed by crowdworkers on a set of Wikipedia articles, where the answer to every question is a segment of text, or span, from the corresponding reading passage, or the question might be unanswerable. | @article{2016arXiv160605250R,
author = {{Rajpurkar}, Pranav and {Zhang}, Jian and {Lopyrev},
Konstantin and {Liang}, Percy},
title = "{SQuAD: 100,000+ Questions for Machine Comprehension of Text}",
journal = {arXiv e-prints},
year = 2016,
eid = {arXiv:1606.05250},
pages = {arXiv:1606.05250},
archivePrefix = {arXiv},
eprint = {1606.05250},
} | null | 0 | 12 | Entry not found |
relbert/analogy_questions_private | 2023-04-02T15:07:46.000Z | [
"multilinguality:monolingual",
"size_categories:n<1K",
"language:en",
"license:other",
"region:us"
] | relbert | [Analogy Question](https://aclanthology.org/2021.acl-long.280/) | @inproceedings{ushio-etal-2021-bert,
title = "{BERT} is to {NLP} what {A}lex{N}et is to {CV}: Can Pre-Trained Language Models Identify Analogies?",
author = "Ushio, Asahi and
Espinosa Anke, Luis and
Schockaert, Steven and
Camacho-Collados, Jose",
booktitle = "Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)",
month = aug,
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.acl-long.280",
doi = "10.18653/v1/2021.acl-long.280",
pages = "3609--3624",
abstract = "Analogies play a central role in human commonsense reasoning. The ability to recognize analogies such as {``}eye is to seeing what ear is to hearing{''}, sometimes referred to as analogical proportions, shape how we structure knowledge and understand language. Surprisingly, however, the task of identifying such analogies has not yet received much attention in the language model era. In this paper, we analyze the capabilities of transformer-based language models on this unsupervised task, using benchmarks obtained from educational settings, as well as more commonly used datasets. We find that off-the-shelf language models can identify analogies to a certain extent, but struggle with abstract and complex relations, and results are highly sensitive to model architecture and hyperparameters. Overall the best results were obtained with GPT-2 and RoBERTa, while configurations using BERT were not able to outperform word embedding models. Our results raise important questions for future work about how, and to what extent, pre-trained language models capture knowledge about abstract semantic relations.",
} | null | 0 | 12 | ---
language:
- en
license:
- other
multilinguality:
- monolingual
size_categories:
- n<1K
pretty_name: Analogy Question
---
# Dataset Card for "relbert/analogy_questions"
## Dataset Description
- **Repository:** [RelBERT](https://github.com/asahi417/relbert)
- **Paper:** [https://aclanthology.org/2021.acl-long.280/](https://aclanthology.org/2021.acl-long.280/)
- **Dataset:** Analogy Questions
### Dataset Summary
This dataset contains 5 different word analogy questions used in [Analogy Language Model](https://aclanthology.org/2021.acl-long.280/).
- original analogy questions
| name | Size (valid/test) | Num of choice | Num of relation group | Original Reference |
|-----------|------------------:|--------------:|----------------------:|:--------------------------------------------------------------------------:|
| `sat_full`| -/374 | 5 | 2 | [Turney (2005)](https://arxiv.org/pdf/cs/0508053.pdf) |
| `sat` | 37/337 | 5 | 2 | [Turney (2005)](https://arxiv.org/pdf/cs/0508053.pdf) |
| `u2` | 24/228 | 5,4,3 | 9 | [EnglishForEveryone](https://englishforeveryone.org/Topics/Analogies.html) |
| `u4` | 48/432 | 5,4,3 | 5 | [EnglishForEveryone](https://englishforeveryone.org/Topics/Analogies.html) |
| `google` | 50/500 | 4 | 2 | [Mikolov et al., (2013)](https://www.aclweb.org/anthology/N13-1090.pdf) |
| `bats` | 199/1799 | 4 | 3 | [Gladkova et al., (2016)](https://www.aclweb.org/anthology/N18-2017.pdf) |
- extra analogy questions
| name | Size (valid/test) | Num of choice (valid/test) | Num of relation group (valid/test) | Original Reference |
|:------------------------------------|:--------------------|:-----------------------------|:-------------------------------------|:-----------------------------------------------------------------------------------------------------------------------|
| `semeval2012_relational_similarity` | 79/- | 3/- | 79/- | [relbert/semeval2012_relational_similarity](https://huggingface.co/datasets/relbert/semeval2012_relational_similarity) |
| `t_rex_relational_similarity` | 496/183 | 74/48 | 60/19 | [relbert/t_rex_relational_similarity](https://huggingface.co/datasets/relbert/t_rex_relational_similarity) |
| `conceptnet_relational_similarity` | 1112/1192 | 19/17 | 18/16 | [relbert/conceptnet_relational_similarity](https://huggingface.co/datasets/relbert/conceptnet_relational_similarity) |
| `nell_relational_similarity` | 400/600 | 5/7 | 4/6 | [relbert/nell_relational_similarity](https://huggingface.co/datasets/relbert/nell_relational_similarity) |
| `scan` | 178/1616 | 3,36,136,10,45,78,15,21,55,120,153,91,28/3,36,136,10,45,78,15,21,55,120,153,91,28 | 2/2 | [relbert/scientific_and_creative_analogy](https://huggingface.co/datasets/relbert/scientific_and_creative_analogy) |
## Dataset Structure
### Data Instances
An example of `test` looks as follows.
```
{
"stem": ["raphael", "painter"],
"answer": 2,
"choice": [["andersen", "plato"],
["reading", "berkshire"],
["marx", "philosopher"],
["tolstoi", "edison"]]
}
```
The `stem` is the query word pair, `choice` contains the candidate word pairs,
and `answer` indicates the index of the correct candidate, starting from `0`.
All data is lowercased except the Google dataset.
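A minimal loading sketch with the `datasets` library, assuming the dataset names in the tables above double as configuration names (this mirror's Hub ID differs from the original `relbert/analogy_questions`):
```python
from datasets import load_dataset

# "sat" is one of the dataset names listed in the tables above
analogy = load_dataset("relbert/analogy_questions_private", "sat")
print(analogy)  # shows the available splits

sample = analogy["test"][0]
gold_pair = sample["choice"][sample["answer"]]  # the correct candidate word pair
```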
### Citation Information
```
@inproceedings{ushio-etal-2021-bert-is,
title ={{BERT} is to {NLP} what {A}lex{N}et is to {CV}: {C}an {P}re-{T}rained {L}anguage {M}odels {I}dentify {A}nalogies?},
author={Ushio, Asahi and
Espinosa-Anke, Luis and
Schockaert, Steven and
Camacho-Collados, Jose},
booktitle={Proceedings of the {ACL}-{IJCNLP} 2021 Main Conference},
year={2021},
publisher={Association for Computational Linguistics}
}
```
### LICENSE
The LICENSE of all the resources is [CC-BY-NC-4.0](./LICENSE). Thus, they are freely available for academic purposes or individual research, but restricted for commercial use.
|
pythainlp/thailaw | 2023-05-21T14:34:49.000Z | [
"task_categories:text-generation",
"size_categories:10K<n<100K",
"language:th",
"license:cc0-1.0",
"legal",
"region:us"
] | pythainlp | null | null | null | 3 | 12 | ---
dataset_info:
features:
- name: sysid
dtype: string
- name: title
dtype: string
- name: txt
dtype: string
splits:
- name: train
num_bytes: 825923852
num_examples: 42755
download_size: 190585391
dataset_size: 825923852
license: cc0-1.0
task_categories:
- text-generation
language:
- th
tags:
- legal
size_categories:
- 10K<n<100K
---
# Dataset Card for "thailaw"
## English
Thai Law Dataset (Act of Parliament)
- Data source from Office of the Council of State, Thailand. [https://www.krisdika.go.th/](https://www.krisdika.go.th/)
- This is part of the PyThaiNLP Project.
- The dataset is released into the public domain.
Download [https://github.com/PyThaiNLP/thai-law/releases](https://github.com/PyThaiNLP/thai-law/releases)
This Hub version is based on [Thailaw v0.2](https://github.com/PyThaiNLP/thai-law/releases/tag/v0.2).
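A minimal loading sketch with the `datasets` library (the field and split names follow the `dataset_info` above):
```python
from datasets import load_dataset

thailaw = load_dataset("pythainlp/thailaw", split="train")
print(thailaw[0]["title"])       # title of the act
print(thailaw[0]["txt"][:200])   # first characters of the full text
```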
## Thai
คลังข้อมูลกฎหมายไทย (พระราชบัญญัติ)
- ข้อมูลเก็บรวบรวมมาจากเว็บไซต์สำนักงานคณะกรรมการกฤษฎีกา [https://www.krisdika.go.th/](https://www.krisdika.go.th/)
- โครงการนี้เป็นส่วนหนึ่งในแผนพัฒนา [PyThaiNLP](https://github.com/PyThaiNLP/)
- ข้อมูลที่รวบรวมในคลังข้อความนี้เป็นสาธารณสมบัติ (public domain) ตามพ.ร.บ.ลิขสิทธิ์ พ.ศ. 2537 มาตรา 7 (สิ่งต่อไปนี้ไม่ถือว่าเป็นงานอันมีลิขสิทธิ์ตามพระราชบัญญัตินี้ (1) ข่าวประจำวัน และข้อเท็จจริงต่างๆ ที่มีลักษณะเป็นเพียงข่าวสารอันมิใช่งานในแผนกวรรณคดี แผนกวิทยาศาสตร์ หรือแผนกศิลปะ [...] (3) ระเบียบ ข้อบังคับ ประกาศ คำสั่ง คำชี้แจง และหนังสือตอบโต้ของกระทรวง ทบวง กรม หรือหน่วยงานอื่นใดของรัฐหรือของท้องถิ่น [...])
ดาวน์โหลดได้ที่ [https://github.com/PyThaiNLP/thai-law/releases](https://github.com/PyThaiNLP/thai-law/releases)
This dataset is Thai Law dataset v0.2.
Dataset size: 42,755 rows
GitHub: [https://github.com/PyThaiNLP/thai-law/releases/tag/v0.2](https://github.com/PyThaiNLP/thai-law/releases/tag/v0.2) |
mstz/chess_rock_vs_pawn | 2023-04-16T17:01:23.000Z | [
"task_categories:tabular-classification",
"size_categories:1K<n<10K",
"language:en",
"license:cc",
"chess",
"tabular_classification",
"binary_classification",
"multiclass_classification",
"UCI",
"region:us"
] | mstz | null | @misc{misc_chess_(king-rook_vs._king-pawn)_22,
title = {{Chess (King-Rook vs. King-Pawn)}},
year = {1989},
howpublished = {UCI Machine Learning Repository},
note = {{DOI}: \\url{10.24432/C5DK5C}}
} | null | 0 | 12 | ---
language:
- en
tags:
- chess
- tabular_classification
- binary_classification
- multiclass_classification
- UCI
pretty_name: Chess Rook vs. Pawn
size_categories:
- 1K<n<10K
task_categories:
- tabular-classification
configs:
- chess
license: cc
---
# Chess Rook vs. Pawn
The [Chess King-Rook vs. King-Pawn dataset](https://archive-beta.ics.uci.edu/dataset/22/chess+king+rook+vs+king+pawn) from the [UCI ML repository](https://archive.ics.uci.edu/ml/datasets).
# Configurations and tasks
| **Configuration** | **Task** | **Description** |
|-------------------|---------------------------|--------------------------|
| chess | Binary classification | Can the white piece win? |
# Usage
```python
from datasets import load_dataset
dataset = load_dataset("mstz/chess_rock_vs_pawn")["train"]
``` |
mstz/promoters | 2023-04-16T17:58:13.000Z | [
"task_categories:tabular-classification",
"size_categories:n<1K",
"language:en",
"license:cc",
"promoters",
"tabular_classification",
"binary_classification",
"UCI",
"region:us"
] | mstz | null | @misc{misc_molecular_biology_(promoter_gene_sequences)_67,
author = {Harley,C., Reynolds,R. & Noordewier,M.},
title = {{Molecular Biology (Promoter Gene Sequences)}},
year = {1990},
howpublished = {UCI Machine Learning Repository},
note = {{DOI}: \\url{10.24432/C5S01D}}
} | null | 0 | 12 | ---
language:
- en
tags:
- promoters
- tabular_classification
- binary_classification
- UCI
pretty_name: Promoters
size_categories:
- n<1K
task_categories:
- tabular-classification
configs:
- promoters
license: cc
---
# Promoters
The [Promoters dataset](https://archive.ics.uci.edu/ml/datasets/Promoters) from the [UCI ML repository](https://archive.ics.uci.edu/ml/datasets).
# Configurations and tasks
| **Configuration** | **Task** | **Description** |
|-------------------|---------------------------|---------------------------------|
| promoters | Binary classification | Is this DNA string a promoter? |
# Usage
```python
from datasets import load_dataset
dataset = load_dataset("mstz/promoters")["train"]
``` |
mstz/monks | 2023-04-16T17:34:32.000Z | [
"task_categories:tabular-classification",
"size_categories:n<1K",
"language:en",
"license:cc",
"student performance",
"tabular_classification",
"UCI",
"region:us"
] | mstz | null | @misc{misc_monk's_problems_70,
author = {Wnek,J.},
title = {{MONK's Problems}},
year = {1992},
howpublished = {UCI Machine Learning Repository},
note = {{DOI}: \\url{10.24432/C5R30R}}
} | null | 0 | 12 | ---
language:
- en
tags:
- student performance
- tabular_classification
- UCI
pretty_name: Monk
size_categories:
- n<1K
task_categories:
- tabular-classification
configs:
- monks1
- monks2
- monks3
license: cc
---
# Monks
The [Monk dataset](https://archive-beta.ics.uci.edu/dataset/70/monk+s+problems) from UCI.
# Configurations and tasks
| **Configuration** | **Task** |
|-------------------|---------------------------|
| monks1 | Binary classification |
| monks2 | Binary classification |
| monks3 | Binary classification |
# Usage
```python
from datasets import load_dataset
dataset = load_dataset("mstz/monks", "monks1")["train"]
``` |
climatebert/climate_specificity | 2023-04-18T16:02:48.000Z | [
"task_categories:text-classification",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:en",
"license:cc-by-nc-sa-4.0",
"region:us"
] | climatebert | null | null | null | 1 | 12 | ---
annotations_creators:
- expert-generated
language_creators:
- found
language:
- en
license: cc-by-nc-sa-4.0
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- text-classification
task_ids: []
pretty_name: ClimateSpecificity
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': non-specific
'1': specific
splits:
- name: train
num_bytes: 492077
num_examples: 1000
- name: test
num_bytes: 174265
num_examples: 320
download_size: 373454
dataset_size: 666342
---
# Dataset Card for climate_specificity
## Dataset Description
- **Homepage:** [climatebert.ai](https://climatebert.ai)
- **Repository:**
- **Paper:** [papers.ssrn.com/sol3/papers.cfm?abstract_id=3998435](https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3998435)
- **Leaderboard:**
- **Point of Contact:** [Nicolas Webersinke](mailto:nicolas.webersinke@fau.de)
### Dataset Summary
We introduce an expert-annotated dataset for classifying the specificity of climate-related paragraphs in corporate disclosures.
### Supported Tasks and Leaderboards
The dataset supports a binary classification task of whether a given climate-related paragraph is specific or not.
### Languages
The text in the dataset is in English.
## Dataset Structure
### Data Instances
```
{
'text': '− Scope 3: Optional scope that includes indirect emissions associated with the goods and services supply chain produced outside the organization. Included are emissions from the transport of products from our logistics centres to stores (downstream) performed by external logistics operators (air, land and sea transport) as well as the emissions associated with electricity consumption in franchise stores.',
'label': 1
}
```
### Data Fields
- text: a climate-related paragraph extracted from corporate annual reports and sustainability reports
- label: the label (0 -> non-specific, 1 -> specific)
### Data Splits
The dataset is split into:
- train: 1,000
- test: 320
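A minimal loading sketch with the `datasets` library (the split and field names follow the description above):
```python
from datasets import load_dataset

dataset = load_dataset("climatebert/climate_specificity")

train, test = dataset["train"], dataset["test"]
example = train[0]
print(example["text"][:100], example["label"])  # label: 0 = non-specific, 1 = specific
```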
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
Our dataset contains climate-related paragraphs extracted from financial disclosures by firms. We collect text from corporate annual reports and sustainability reports.
For more information regarding our sample selection, please refer to the Appendix of our paper (see [citation](#citation-information)).
#### Who are the source language producers?
Mainly large listed companies.
### Annotations
#### Annotation process
For more information on our annotation process and annotation guidelines, please refer to the Appendix of our paper (see [citation](#citation-information)).
#### Who are the annotators?
The authors and students at Universität Zürich and Friedrich-Alexander-Universität Erlangen-Nürnberg with majors in finance and sustainable finance.
### Personal and Sensitive Information
Since our text sources contain public information, no personal and sensitive information should be included.
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
- Julia Anna Bingler
- Mathias Kraus
- Markus Leippold
- Nicolas Webersinke
### Licensing Information
This dataset is licensed under the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International license (cc-by-nc-sa-4.0). To view a copy of this license, visit [creativecommons.org/licenses/by-nc-sa/4.0](https://creativecommons.org/licenses/by-nc-sa/4.0/).
If you are interested in commercial use of the dataset, please contact [markus.leippold@bf.uzh.ch](mailto:markus.leippold@bf.uzh.ch).
### Citation Information
```bibtex
@techreport{bingler2023cheaptalk,
title={How Cheap Talk in Climate Disclosures Relates to Climate Initiatives, Corporate Emissions, and Reputation Risk},
author={Bingler, Julia and Kraus, Mathias and Leippold, Markus and Webersinke, Nicolas},
type={Working paper},
institution={Available at SSRN 3998435},
year={2023}
}
```
### Contributions
Thanks to [@webersni](https://github.com/webersni) for adding this dataset. |
BramVanroy/alpaca-cleaned-dutch | 2023-07-07T12:16:39.000Z | [
"task_categories:question-answering",
"task_categories:text-generation",
"size_categories:10K<n<100K",
"language:nl",
"license:cc-by-nc-4.0",
"alpaca",
"instruct",
"instruction",
"doi:10.57967/hf/0530",
"region:us"
] | BramVanroy | null | null | null | 0 | 12 | ---
license: cc-by-nc-4.0
task_categories:
- question-answering
- text-generation
language:
- nl
tags:
- alpaca
- instruct
- instruction
pretty_name: Alpaca Cleaned Dutch
size_categories:
- 10K<n<100K
---
# Dataset Card for Alpaca Cleaned Dutch
## Dataset Description
- **Homepage:** N/A
- **Repository:** N/A
- **Paper:** N/A
- **Leaderboard:** N/A
- **Point of Contact:** Bram Vanroy
### Dataset Summary
This dataset contains 51,712 conversations between an AI assistant and a (simulated) human, in Dutch. They are translations of the [Alpaca Cleaned Dataset](https://huggingface.co/datasets/yahma/alpaca-cleaned).
☕ [**Want to help me out?**](https://www.buymeacoffee.com/bramvanroy) Translating the data with the OpenAI API, and prompt testing, cost me 💸$57.99💸. If you like this dataset, please consider [buying me a coffee](https://www.buymeacoffee.com/bramvanroy) to offset a portion of this cost, I appreciate it a lot! ☕
### Languages
- Dutch
## Dataset Structure
### Data Instances
```python
{
'id': 7,
'instruction': 'Leg uit waarom de volgende breuk gelijk is aan 1/4',
'input': '4/16',
'output': 'De breuk 4/16 is gelijk aan 1/4 omdat zowel de teller als de '
'noemer deelbaar zijn door 4. Door zowel de teller als de noemer '
'door 4 te delen, krijgen we de breuk 1/4.'
}
```
### Data Fields
- **id**: the ID of the item. The following ID is not included because it could not be translated: `[23019]`
- **instruction**: the given instruction
- **input**: optional input to accompany the instruction. Can be empty.
- **output**: the "answer" to the instruction
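For illustration, a minimal loading sketch with the `datasets` library (assuming a single `train` split; adjust if the repository exposes different splits):
```python
from datasets import load_dataset

dataset = load_dataset("BramVanroy/alpaca-cleaned-dutch", split="train")
example = dataset[0]
print(example["instruction"])
print(example["input"])    # may be empty
print(example["output"])
```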
## Dataset Creation
The instructions, inputs and outputs were translated with OpenAI's API for `gpt-3.5-turbo`, using `max_tokens=1024` and `temperature=0` as parameters.
The prompt template to translate is (where `src_lang` is English and `tgt_lang` is Dutch):
```python
TRANSLATION_PROMPT = """You are asked to translate a task's instruction, optional input to the task, and the output of the task, from {src_lang} into {tgt_lang}.
Here are the requirements that you should adhere to:
1. maintain the format: the task consists of a task instruction (marked `instruction: `), optional input to the task (marked `input: `) and output for the task marked with `output: `;
2. do not translate the identifiers `instruction: `, `input: `, and `output: ` but instead copy them to your output;
3. make sure that text is fluent to read and does not contain grammatical errors. Use standard {tgt_lang} without regional bias;
4. translate the instruction and input text using informal, but standard, language;
5. make sure to avoid biases (such as gender bias, grammatical bias, social bias);
6. if the instruction is to correct grammar mistakes or spelling mistakes then you have to generate a similar mistake in the input in {tgt_lang}, and then also generate a corrected output version in the output in {tgt_lang};
7. if the instruction is to translate text from one language to another, then you do not translate the text that needs to be translated in the instruction or the input, nor the translation in the output (just copy them as-is);
8. do not translate code fragments but copy them to your output. If there are English examples, variable names or definitions in code fragments, keep them in English.
Now translate the following task with the requirements set out above. Do not provide an explanation and do not add anything else.\n\n"""
```
This prompt is concatenated with the instruction, optionally the input, and the output. In code, that last part looks like this:
```python
text = f'instruction: "{instruction}"\n\n'
if inputstr:
text += f'input: "{inputstr}"\n\n'
text += f'output: "{outputstr}"'
```
The system message was:
```
You are a helpful assistant that translates English to Dutch to the requirements that are given to you.
```
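For illustration only, here is a sketch of how these pieces could be combined into a single chat-completion call with the pre-v1 `openai` Python client. This is a reconstruction, not the author's actual script; the function name and API-key handling are assumptions:
```python
import openai

openai.api_key = "sk-..."  # assumption: supply the key via your own configuration

SYSTEM_MESSAGE = (
    "You are a helpful assistant that translates English to Dutch "
    "to the requirements that are given to you."
)

def translate_item(instruction: str, inputstr: str, outputstr: str) -> str:
    # Build the user message exactly as described above
    text = f'instruction: "{instruction}"\n\n'
    if inputstr:
        text += f'input: "{inputstr}"\n\n'
    text += f'output: "{outputstr}"'

    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        max_tokens=1024,
        temperature=0,
        messages=[
            {"role": "system", "content": SYSTEM_MESSAGE},
            {"role": "user", "content": TRANSLATION_PROMPT.format(src_lang="English", tgt_lang="Dutch") + text},
        ],
    )
    return response["choices"][0]["message"]["content"]
```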
Note that 1 item (≈0.002%) was not successfully translated. The translation was missing the input, instruction, or output keywords where those were expected. The ID for the missing item is `[23019]`.
### Source Data
#### Initial Data Collection and Normalization
Initial data creation by [Tatsu lab](https://huggingface.co/datasets/tatsu-lab/alpaca) and cleaned by [Yahma](https://huggingface.co/datasets/yahma/alpaca-cleaned).
#### Who are the source language producers?
The original dataset was generated with OpenAI's `text-davinci-003`.
## Considerations for Using the Data
Note that the translations in this new dataset have not been verified by humans.
### Discussion of Biases
As with any machine-generated texts, users should be aware of potential biases that are included in this dataset. Although the prompt specifically includes `make sure to avoid biases (such as gender bias, grammatical bias, social bias)`, of course the impact of such command is not known. It is likely that biases remain in the dataset so use with caution.
### Other Known Limitations
The translation quality has not been verified. Use at your own risk!
### Licensing Information
As per OpenAI's terms of use, this dataset cannot be used to build [a commercial system that competes with OpenAI's services](https://openai.com/policies/terms-of-use). Similar to the original Alpaca dataset, this dataset is released under CC BY-NC 4.0.
This text was generated (either in part or in full) with GPT-3 (`gpt-3.5-turbo`), OpenAI’s large-scale language-generation model. Upon generating draft language, the author reviewed, edited, and revised the language to their own liking and takes ultimate responsibility for the content of this publication.
If you use this dataset, you must also follow the [Sharing](https://openai.com/policies/sharing-publication-policy) and [Usage](https://openai.com/policies/usage-policies) policies.
As clearly stated in their [Terms of Use](https://openai.com/policies/terms-of-use), specifically 2c.iii, "[you may not] use output from the Services to develop models that compete with OpenAI". That means that you cannot use this dataset to build models that are intended to commercially compete with OpenAI. [As far as I am aware](https://law.stackexchange.com/questions/93308/licensing-material-generated-with-chatgpt), that is a specific restriction that should serve as an addendum to the current license.
### Citation Information
If you use this data set, please cite :
Vanroy, B. (2023). Alpaca Cleaned Dutch [Data set]. Hugging Face. https://doi.org/10.57967/HF/0530
```bibtex
@misc{https://doi.org/10.57967/hf/0530,
doi = {10.57967/HF/0530},
url = {https://huggingface.co/datasets/BramVanroy/alpaca-cleaned-dutch},
author = {Vanroy, Bram},
title = {{A}lpaca {C}leaned {D}utch},
publisher = {Hugging Face},
year = {2023}
}
```
### Contributions
Thanks to [Tatsu lab](https://huggingface.co/datasets/tatsu-lab/alpaca) for the initial machine-generated dataset and yahma for [cleaning it](https://huggingface.co/datasets/yahma/alpaca-cleaned). |
alpindale/light-novels | 2023-04-14T18:46:15.000Z | [
"license:creativeml-openrail-m",
"region:us"
] | alpindale | null | null | null | 8 | 12 | ---
license: creativeml-openrail-m
---
|
qbao775/PARARULE-Plus | 2023-06-05T03:56:52.000Z | [
"task_categories:text-classification",
"task_categories:question-answering",
"size_categories:100K<n<1M",
"language:en",
"license:mit",
"Reasoning",
"Multi-Step-Deductive-Reasoning",
"Logical-Reasoning",
"region:us"
] | qbao775 | null | null | null | 4 | 12 | ---
license: mit
task_categories:
- text-classification
- question-answering
language:
- en
tags:
- Reasoning
- Multi-Step-Deductive-Reasoning
- Logical-Reasoning
size_categories:
- 100K<n<1M
---
# PARARULE-Plus
This is a branch which includes the dataset from PARARULE-Plus Depth=2, Depth=3, Depth=4 and Depth=5. PARARULE-Plus is a deep multi-step reasoning dataset over natural language. It can be seen as an improvement on the PARARULE dataset (Peter Clark et al., 2020). Both PARARULE and PARARULE-Plus follow the closed-world assumption and negation as failure. The motivation is to generate deeper PARARULE training samples. We add more training samples for the cases where the depth is greater than or equal to two to explore whether Transformers have reasoning ability. PARARULE-Plus is a combination of two types of entities, animals and people, and corresponding relationships and attributes. From depth 2 to depth 5, we have around 100,000 samples at each depth, and there are nearly 400,000 samples in total.
Here is the original links for PARARULE-Plus including paper, project and data.
Paper: https://www.cs.ox.ac.uk/isg/conferences/tmp-proceedings/NeSy2022/paper15.pdf
Project: https://github.com/Strong-AI-Lab/Multi-Step-Deductive-Reasoning-Over-Natural-Language
Data: https://github.com/Strong-AI-Lab/PARARULE-Plus
PARARULE-Plus has been collected and merged by [LogiTorch.ai](https://www.logitorch.ai/), [ReasoningNLP](https://github.com/FreedomIntelligence/ReasoningNLP), [Prompt4ReasoningPapers](https://github.com/zjunlp/Prompt4ReasoningPapers) and [OpenAI/Evals](https://github.com/openai/evals/pull/651).
In this huggingface version, we pre-processed the dataset and use `1` to represent `true` and `0` to represent `false` to better help user train model.
## How to load the dataset?
```
from datasets import load_dataset
dataset = load_dataset("qbao775/PARARULE-Plus")
```
## How to train a model using the dataset?
We provide an [example](https://github.com/Strong-AI-Lab/PARARULE-Plus/blob/main/README.md#an-example-script-to-load-pararule-plus-and-fine-tune-bert) that you can `git clone` the project and fine-tune the dataset locally.
## Citation
```
@inproceedings{bao2022multi,
title={Multi-Step Deductive Reasoning Over Natural Language: An Empirical Study on Out-of-Distribution Generalisation},
author={Qiming Bao and Alex Yuxuan Peng and Tim Hartill and Neset Tan and Zhenyun Deng and Michael Witbrock and Jiamou Liu},
year={2022},
publisher={The 2nd International Joint Conference on Learning and Reasoning and 16th International Workshop on Neural-Symbolic Learning and Reasoning (IJCLR-NeSy 2022)}
}
``` |
snipaid/snippet-mlsum-500-v2 | 2023-04-19T18:26:42.000Z | [
"task_categories:summarization",
"task_categories:text2text-generation",
"size_categories:n<1K",
"language:de",
"license:mit",
"news",
"headline",
"teaser",
"keywords",
"tweet",
"serp",
"summary",
"news snippets",
"region:us"
] | snipaid | null | null | null | 0 | 12 | ---
license: mit
language: de
tags:
- news
- headline
- teaser
- keywords
- tweet
- serp
- summary
- news snippets
task_categories:
- summarization
- text2text-generation
size_categories:
- n<1K
---
# Dataset Card for Snippet-MLSUM-500-V2
### Dataset Summary
This dataset is a sample of ~500 news articles from the [MLSUM](https://huggingface.co/datasets/mlsum) dataset, augmented with machine generated news snippets.
### Supported Tasks
This dataset was created to support the task of generating news snippets such as title, teaser, keywords, serp and tweet for news articles in German language.
### Languages
de - German
## Dataset Structure
- text: a string feature.
- title: a string feature.
- teaser: a string feature.
- keywords: a string feature.
- summary: a string feature.
- serp: a string feature.
- tweet: a string feature.
- url: a string feature.
- date: a string feature.
- topic: a string feature.
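A minimal loading sketch with the `datasets` library (assuming the snippets are exposed as a single default `train` split; adjust the split name if needed):
```python
from datasets import load_dataset

dataset = load_dataset("snipaid/snippet-mlsum-500-v2", split="train")
example = dataset[0]
print(example["title"])
print(example["teaser"])
print(example["tweet"])
```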
## Dataset Creation
The news articles in this dataset are a random sample of ~500 news articles from MLSUM balanced by topic.
Features text, title, teaser (originally summary in MLSUM), url, date and topic are copied from MLSUM.
Features keywords, serp, summary and tweet are machine generated with GPT-3.5.
Generated features comply with length limits in place for SERPs and Tweets at the time of publishing.
## Considerations for Using the Data
### Known Limitations
Part of the snippet data is machine generated. Be aware that these features (specifically: keywords, serp, summary and tweet) may exhibit signs of model hallucination, stereotypes and toxicity.
## Additional Information
### Licensing Information
This dataset is licensed under MIT license. |
lighteval/mutual_plus | 2023-04-21T13:35:31.000Z | [
"region:us"
] | lighteval | null | null | null | 0 | 12 | Entry not found |
Frorozcol/recetas-cocina | 2023-09-18T16:40:48.000Z | [
"task_categories:text-generation",
"task_categories:conversational",
"size_categories:1K<n<10K",
"language:es",
"license:mit",
"region:us"
] | Frorozcol | null | null | null | 1 | 12 | ---
license: mit
task_categories:
- text-generation
- conversational
language:
- es
pretty_name: recetas de cocina
size_categories:
- 1K<n<10K
---
## Dataset Summary
This is a dataset of food recipes in Spanish. Several recipe websites in Spanish were scraped, extracting around 30k entries, which are divided into train, test and valid splits.
## Supported Tasks and Leaderboards
task-generation: given the ingredients, generate the recipe.
## Languages
The dataset contains Spanish from different parts of the world, especially Latin America.
## Dataset Structure
### Data Instances
An example instance from the dataset is shown below:
```json
{
'title': 'Smoothie bicolor de leche KLIM® y MILO®',
'url': "https://www.recetasnestle.com.co/recetas/smoothie-chocolate-leche-bicolor"
'ingredients ': "2 cucharadas de MILO® (25 g) 1 taza de hielo 3 cucharadas de Leche en polvo KLIM® Clásica (24 g)",
'steps': ' 1. Licúa las cucharadas de MILO® con media taza de hielo hasta que lo veas frapeado y pon la mezcla en un vaso, lleva al congelador mientras preparas la leche. 2. Aparte, en el mismo vaso de la licuadora añade la media taza de hielo restante y las cucharadas de leche en polvo KLIM® Clásica, licúa por 5 segundos hasta que lo veas frapeado. 3. Retira el vaso del congelador y sirve encima el licuado de la leche, así tendrás los dos colores, decora con fruta de tu preferencia.',
'uuid': 'ca4fa322-a38d-4f6a-8c06-79f68fe729f4.'
}
```
## Data Fields
+ title: Title of the recipe
+ url: URL the recipe was scraped from
+ ingredients: The ingredients needed to make the recipe
+ steps: The steps to prepare the recipe
+ uuid: Identifier code for the entry.
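A minimal loading sketch with the `datasets` library (the split names are printed rather than assumed, and the field names follow the example instance above):
```python
from datasets import load_dataset

recetas = load_dataset("Frorozcol/recetas-cocina")
print(recetas)  # shows the available splits (train / test / valid)

example = recetas["train"][0]
print(example["title"])
print(example["steps"])
```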
|
jlbaker361/anime_faces_10k | 2023-06-05T19:52:13.000Z | [
"region:us"
] | jlbaker361 | null | null | null | 0 | 12 | ---
dataset_info:
features:
- name: image
dtype: image
- name: split
dtype: string
- name: src
dtype: string
- name: style
dtype: string
splits:
- name: train
num_bytes: 556740020.0
num_examples: 10000
download_size: 547181705
dataset_size: 556740020.0
---
# Dataset Card for "anime_faces_10k"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
jxu124/llava_complex_reasoning_77k | 2023-05-20T18:45:44.000Z | [
"region:us"
] | jxu124 | null | null | null | 1 | 12 | ---
dataset_info:
features:
- name: global_image_id
dtype: string
- name: image_path
dtype: string
- name: dialog
sequence:
sequence: string
- name: anns_id
dtype: string
splits:
- name: train
num_bytes: 71300555
num_examples: 76643
download_size: 36685003
dataset_size: 71300555
---
# Dataset Card for "llava_complex_reasoning_77k"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
stevendevoe/news-article-summary | 2023-04-26T18:53:09.000Z | [
"region:us"
] | stevendevoe | null | null | null | 1 | 12 | ---
dataset_info:
features:
- name: article
dtype: string
- name: summary
dtype: string
splits:
- name: train
num_bytes: 296759
num_examples: 99
download_size: 184628
dataset_size: 296759
---
# Dataset Card for "news-article-summary"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
DeadPixels/DPhi_Sprint_25_Flowers | 2023-04-29T10:34:03.000Z | [
"task_categories:image-classification",
"size_categories:1K<n<10K",
"license:cc-by-2.0",
"region:us"
] | DeadPixels | null | null | null | 0 | 12 | ---
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': daisy
'1': dandelion
'2': rose
'3': sunflower
'4': tulip
splits:
- name: train
num_bytes: 123964921.405
num_examples: 2589
- name: test
num_bytes: 47588262
num_examples: 864
- name: validation
num_bytes: 47493769
num_examples: 864
download_size: 237386772
dataset_size: 219046952.405
license: cc-by-2.0
task_categories:
- image-classification
pretty_name: 'Data Sprint #25: Flower Recognition Dataset'
size_categories:
- 1K<n<10K
---
# Dataset Card for "DPhi_Sprint_25_Flowers"
All images in this archive are licensed under the Creative Commons By-Attribution License, available at:
https://creativecommons.org/licenses/by/2.0/
The photographers are listed in LICENSE.txt, thanks to all of them for making their work available.
However, you will observe that the image file names in this archive differ from those originally provided. The file names were changed solely for the purpose of the data sprint. |
cr7Por/ffhq_controlnet_5_2_23 | 2023-05-02T23:27:54.000Z | [
"region:us"
] | cr7Por | null | null | null | 1 | 12 | ---
dataset_info:
features:
- name: image
dtype: image
- name: image_crop
dtype: image
- name: image_caption
dtype: string
splits:
- name: train
num_bytes: 15826187729.91
num_examples: 39641
download_size: 15842739047
dataset_size: 15826187729.91
---
# Dataset Card for "ffhq_controlnet_5_2_23"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
PaulAdversarial/all_news_finance_sm_1h2023 | 2023-05-04T21:16:11.000Z | [
"license:afl-3.0",
"region:us"
] | PaulAdversarial | null | null | null | 9 | 12 | ---
license: afl-3.0
---
|
sharad/chatgpt-paraphrases-simple | 2023-05-08T09:09:04.000Z | [
"size_categories:1M<n<10M",
"language:en",
"license:apache-2.0",
"paraphrase",
"region:us"
] | sharad | null | null | null | 2 | 12 | ---
license: apache-2.0
language:
- en
tags:
- paraphrase
size_categories:
- 1M<n<10M
dataset_info:
features:
- name: s1
dtype: string
- name: s2
dtype: string
splits:
- name: train
num_bytes: 1283650386
num_examples: 6286314
download_size: 211207464
dataset_size: 1283650386
pretty_name: ChatGPT Paraphrase
---
This dataset is a simplified version of [ChatGPT Paraphrases](https://huggingface.co/datasets/humarin/chatgpt-paraphrases). It aims to take away the pain of expanding the original dataset into unique paraphrase pairs.
# Structure:
The dataset is not divided into train/test splits. It contains 6.3 million unique paraphrase pairs (6×5×420,000/2 = 6.3 million). The dataset contains the following 2 columns-
1. s1 - Sentence
2. s2 - Paraphrase
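A minimal loading sketch with the `datasets` library (the single `train` split follows the `dataset_info` above):
```python
from datasets import load_dataset

pairs = load_dataset("sharad/chatgpt-paraphrases-simple", split="train")
print(pairs[0]["s1"])  # sentence
print(pairs[0]["s2"])  # paraphrase
```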
**Original Dataset Structure:**
The original dataset has following 4 columns-
1. text - 420k Unique sentence
2. paraphrases - List of 5 unique paraphrases generated by ChatGPT
3. category - Questions / Sentence
4. source - Quora/CNN/Others
For more information, usage rights, and legal disclaimer, check out [original dataset](https://huggingface.co/datasets/humarin/chatgpt-paraphrases). |
abhijitgayen/user_admin_chat | 2023-05-08T11:47:05.000Z | [
"region:us"
] | abhijitgayen | null | null | null | 0 | 12 | Entry not found |
yangwang825/marc-ja | 2023-05-19T02:08:33.000Z | [
"task_categories:text-classification",
"language:ja",
"region:us"
] | yangwang825 | null | null | null | 1 | 12 | ---
task_categories:
- text-classification
language:
- ja
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': positive
'1': negative
--- |
yangwang825/klue-ynat | 2023-05-19T02:07:06.000Z | [
"task_categories:text-classification",
"language:ko",
"region:us"
] | yangwang825 | null | null | null | 0 | 12 | ---
task_categories:
- text-classification
language:
- ko
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': IT과학
'1': 경제
'2': 사회
'3': 생활문화
'4': 세계
'5': 스포츠
'6': 정치
--- |
Fredithefish/GPTeacher-for-RedPajama-Chat | 2023-05-18T11:29:04.000Z | [
"license:apache-2.0",
"region:us"
] | Fredithefish | null | null | null | 1 | 12 | ---
license: apache-2.0
---
|
Dzeniks/BBC-article | 2023-05-19T09:10:13.000Z | [
"region:us"
] | Dzeniks | null | null | null | 0 | 12 | Entry not found |
tasksource/tasksource-instruct-v0 | 2023-06-12T15:14:23.000Z | [
"task_categories:text2text-generation",
"task_categories:conversational",
"task_categories:text-generation",
"task_categories:text-classification",
"task_categories:token-classification",
"task_categories:zero-shot-classification",
"size_categories:1M<n<10M",
"language:en",
"license:apache-2.0",
"... | tasksource | null | null | null | 16 | 12 | ---
dataset_info:
features:
- name: inputs
dtype: string
- name: targets
dtype: string
- name: task
dtype: string
splits:
- name: train
num_bytes: 2839591299.0
num_examples: 4894553
- name: test
num_bytes: 97972920.0
num_examples: 151829
- name: validation
num_bytes: 96766748.0
num_examples: 148634
download_size: 1631162334
dataset_size: 3034330967.0
license: apache-2.0
task_categories:
- text2text-generation
- conversational
- text-generation
- text-classification
- token-classification
- zero-shot-classification
language:
- en
tags:
- instructions
- instruction-tuning
- instruction-finetuning
- flan
- promptsource
- tasksource
pretty_name: tasksource-instruct
size_categories:
- 1M<n<10M
---
# Dataset Card for "tasksource-instruct-v0" (TSI)
Multi-task instruction-tuning data recast from 485 of the [tasksource](https://github.com/sileod/tasksource) datasets.
Dataset size is capped at 30k examples per task to foster task diversity.
```python
!pip install tasksource pandit
import tasksource, pandit
df = tasksource.list_tasks(instruct=True).sieve(id=lambda x: 'mmlu' not in x)
for task in df.id:
    yield tasksource.load_task(task, instruct=True, max_rows=30_000, max_rows_eval=200)
```
https://github.com/sileod/tasksource
## How it differs from flan-v2
TSI is HuggingFace-centric and based on tasksource, a curated collection of HF datasets. It can be scaled to many more examples.
tasksource is focused on discriminative tasks (Classification/TokenClassification/MultipleChoice). The coverage on discriminative tasks is greater than flan.
List of tasks [here](https://github.com/sileod/tasksource/blob/main/tasks.md). Examples of tasks not in Flan V2 include Dynasent (adversarial sentiment analysis), Dynahate (adversarial hate speech detection), discriminative bAbI, epistemic logic, RuleTaker, veridicality, discourse relation prediction, and dozens of interesting natural language inference datasets...
TSI answers are mostly short answers to multiple-choice questions, but they target a wide array of problems.
TSI is reasoning intensive, while some flan tasks are not necessarily specific (e.g. generating hypothesis based on premise for NLI).
We explicitly mention that answers should not have explanations, to prevent biasing models toward short answers when using other instruction datasets.
`flan-v2` and `tasksource-instruct` can be combined to improve the reasoning capabilities of LLM.
## Contact and citation:
damien.sileo@inria.fr
https://arxiv.org/abs/2301.05948
```
@article{sileo2023tasksource,
title={tasksource: Structured Dataset Preprocessing Annotations for Frictionless Extreme Multi-Task Learning and Evaluation},
author={Sileo, Damien},
url= {https://arxiv.org/abs/2301.05948},
journal={arXiv preprint arXiv:2301.05948},
year={2023}
}
``` |
wanng/midjourney-kaggle-clean | 2023-05-26T14:09:30.000Z | [
"task_categories:image-to-text",
"task_categories:text-to-image",
"language:en",
"license:cc0-1.0",
"midjourney",
"kaggle",
"region:us"
] | wanng | null | null | null | 5 | 12 | ---
license: cc0-1.0
task_categories:
- image-to-text
- text-to-image
language:
- en
tags:
- midjourney
- kaggle
---
# midjourney-v5-202304-clean
## 简介 Brief Introduction
非官方的,对Kaggle (Midjourney User Prompts & Generated Images (250k))[https://www.kaggle.com/datasets/succinctlyai/midjourney-texttoimage?select=general-01_2022_06_20.json] 上的数据集进行了清理,一共有 248,167对。
Unofficially, a cleanup of the dataset on Kaggle (Midjourney User Prompts & Generated Images (250k))[https://www.kaggle.com/datasets/succinctlyai/midjourney-texttoimage?select=general-01_2022_06_20.json] yielded 248,167 pairs.
## 数据集信息 Dataset Information
我做了一些清洗,清理出了两个文件:
- ori.parquet (145,918对,midjourney的四格图)
- upscaled.parquet (102,249对,使用了高清指令的图,这意味着这个图更受欢迎。)
I did some cleaning and cleaned out two files:
- ori_prompts_df.parquet (145,918 pairs, midjourney's four-frame diagrams)
- upscaled_prompts_df.parquet (102,249 pairs, graphs that use the Upscale command, which means this one is more popular.)
## 列信息 Column Information
1. `content` (内容): 这一列包含了消息的主要内容,可能包括文本、链接、或者其他元素。
This column contains the main content of the message, which may include text, links, or other elements.
2. `url` (网址): 这一列包含了附件的URL,通常是图片或者其他文件。
This column contains the URL of the attachment, usually an image or other file.
3. `proxy_url` (代理网址): 这一列包含了附件的代理URL,这个URL可以用来访问附件,即使在原始URL不可用的情况下。
This column contains the proxy URL of the attachment, which can be used to access the attachment even when the original URL is not available.
4. `width` (宽度): 这一列包含了附件的宽度,通常是图片的宽度。
This column contains the width of the attachment, usually the width of an image.
5. `height` (高度): 这一列包含了附件的高度,通常是图片的高度。
This column contains the height of the attachment, usually the height of an image.
6. `date` (日期): 这一列包含了消息的发送日期和时间。
This column contains the date and time the message was sent.
7. `message_type` (消息类型): 这一列包含了消息的类型,例如是否是初始消息、变体请求或者是放大请求。
This column contains the type of the message, such as whether it is an initial message, a variation request, or an upscale request.
8. `content_links` (内容链接): 这一列包含了消息内容中的所有链接。
This column contains all the links in the message content.
9. `prompt` (提示): 这一列包含了消息中的主要提示,通常是用户输入的文本。
This column contains the main prompt in the message, usually the text input by the user.
10. `prompt_additions` (提示补充): 这一列包含了消息中的提示补充,这些补充可能包括额外的信息或者指示。
This column contains the prompt additions in the message, these additions may include extra information or instructions.
11. `user_name` (用户名): 这一列包含了发送消息的用户的用户名。
This column contains the username of the user who sent the message.
12. `aspect` (宽高比): 这一列包含了附件的宽高比,通常是图片的宽高比。
This column contains the aspect ratio of the attachment, usually the aspect ratio of an image.
13. `clean_prompts` (清理后的提示): 这一列包含了清理后的提示,其中已经删除了所有的链接和奇怪的字符。
This column contains the cleaned prompts, where all links and weird characters have been removed.
|
HOXSEC/csgo-maps | 2023-05-30T20:39:07.000Z | [
"task_categories:image-classification",
"size_categories:1K<n<10K",
"license:mit",
"region:us"
] | HOXSEC | null | null | null | 4 | 12 | ---
license: mit
task_categories:
- image-classification
pretty_name: Counter Strike Maps
size_categories:
- 1K<n<10K
---
# Counter Strike Map Dataset
This dataset consists of Counter Strike map images along with their corresponding labels and x-y coordinates. The dataset is suitable for image classification tasks and includes the necessary information for each image.
## Dataset Details
- Total Images: 1424
- Classes: 5
- Image Size: 1920x1080
- Format: png
## Files
The dataset includes the following files:
- **maps/train/**: This folder contains the Counter Strike map images. The images are named in a consistent format, typically with a prefix or unique identifier followed by the file extension.
- **metadata.csv**: This CSV file contains the annotations for each image in the dataset. It has the following columns:
- `file_name`: The relative or absolute path to the image file.
- `label`: The label or class of the image.
- `x`: The x-coordinate of a specific point of interest within the image.
- `y`: The y-coordinate of the same point of interest within the image.
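Since the layout described above (an image folder plus a `metadata.csv` with `file_name`, `label`, `x` and `y` columns) follows the Hugging Face image-folder convention, the dataset can most likely be loaded directly with the `datasets` library — a hedged sketch:
```python
from datasets import load_dataset

maps = load_dataset("HOXSEC/csgo-maps", split="train")
example = maps[0]
print(example["image"].size)                       # PIL image of the map screenshot
print(example["label"], example["x"], example["y"])
```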
|
9wimu9/eli5_mult_answers_en_no_answer_in_context | 2023-05-30T10:28:09.000Z | [
"region:us"
] | 9wimu9 | null | null | null | 0 | 12 | ---
dataset_info:
features:
- name: question
dtype: string
- name: contexts
sequence: string
- name: gold_answer
dtype: string
splits:
- name: train
num_bytes: 308894070
num_examples: 71236
- name: test
num_bytes: 34558419
num_examples: 7916
download_size: 209630607
dataset_size: 343452489
---
# Dataset Card for "eli5_mult_answers_en_no_answer_in_context"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
eastwind/semeval-2016-absa-reviews-arabic | 2023-06-07T13:09:16.000Z | [
"task_categories:text-classification",
"size_categories:1K<n<10K",
"language:ar",
"license:mit",
"region:us"
] | eastwind | null | null | null | 0 | 12 | ---
license: mit
task_categories:
- text-classification
language:
- ar
pretty_name: SemEval 2016 Aspect Based Sentiment Analysis on Hotel Reviews
size_categories:
- 1K<n<10K
---
# Dataset Card for SemEval 2016 ABSA Hotel Reviews (Arabic)
## Dataset Description
- **Repository:** https://github.com/msmadi/ABSA-Hotels/tree/master
### Dataset Summary
Aspect based sentiment analysis dataset using hotel reviews in Arabic.
### Languages
Arabic
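### How to use
A hedged loading sketch with the `datasets` library (the configuration and split names are not documented here, so they are printed rather than assumed):
```python
from datasets import load_dataset

reviews = load_dataset("eastwind/semeval-2016-absa-reviews-arabic")
print(reviews)  # shows the available splits and features
```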
### Licensing Information
The original dataset was licensed under MIT, so this version is also released under MIT.
### Citation Information
Cite this and the original authors if you want to.
|
pain/MASC | 2023-06-12T19:48:45.000Z | [
"task_categories:automatic-speech-recognition",
"language:ar",
"license:cc-by-4.0",
"region:us"
] | pain | MASC is a dataset that contains 1,000 hours of speech sampled at 16 kHz and crawled from over 700 YouTube channels. The dataset is multi-regional, multi-genre, and multi-dialect intended to advance the research and development of Arabic speech technology with a special emphasis on Arabic speech recognition. | @INPROCEEDINGS{10022652,
author={Al-Fetyani, Mohammad and Al-Barham, Muhammad and Abandah, Gheith and Alsharkawi, Adham and Dawas, Maha},
booktitle={2022 IEEE Spoken Language Technology Workshop (SLT)},
title={MASC: Massive Arabic Speech Corpus},
year={2023},
volume={},
number={},
pages={1006-1013},
doi={10.1109/SLT54892.2023.10022652}}
} | null | 1 | 12 | ---
license:
- cc-by-4.0
size_categories:
ar:
- n==1k
task_categories:
- automatic-speech-recognition
task_ids: []
pretty_name: MASC dataset
extra_gated_prompt: >-
By clicking on “Access repository” below, you also agree to not attempt to
determine the identity of speakers in the MASC dataset.
language:
- ar
---
# Dataset Card for MASC: Massive Arabic Speech Corpus
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Languages](#languages)
- [How to use](#how-to-use)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Additional Information](#additional-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** https://ieee-dataport.org/open-access/masc-massive-arabic-speech-corpus
- **Paper:** https://ieeexplore.ieee.org/document/10022652
### Dataset Summary
MASC is a dataset that contains 1,000 hours of speech sampled at 16 kHz and crawled from over 700 YouTube channels.
The dataset is multi-regional, multi-genre, and multi-dialect intended to advance the research and development of Arabic speech technology with a special emphasis on Arabic speech recognition.
### Supported Tasks
- Automatic Speech Recognition
### Languages
```
Arabic
```
## How to use
The `datasets` library allows you to load and pre-process your dataset in pure Python, at scale. The dataset can be downloaded and prepared in one call to your local drive by using the `load_dataset` function.
```python
from datasets import load_dataset
masc = load_dataset("pain/MASC", split="train")
```
Using the datasets library, you can also stream the dataset on-the-fly by adding a `streaming=True` argument to the `load_dataset` function call. Loading a dataset in streaming mode loads individual samples of the dataset at a time, rather than downloading the entire dataset to disk.
```python
from datasets import load_dataset
masc = load_dataset("pain/MASC", split="train", streaming=True)
print(next(iter(masc)))
```
*Bonus*: create a [PyTorch dataloader](https://huggingface.co/docs/datasets/use_with_pytorch) directly with your own datasets (local/streamed).
### Local
```python
from datasets import load_dataset
from torch.utils.data import DataLoader
from torch.utils.data.sampler import BatchSampler, RandomSampler
masc = load_dataset("pain/MASC", split="train")
batch_sampler = BatchSampler(RandomSampler(masc), batch_size=32, drop_last=False)
dataloader = DataLoader(masc, batch_sampler=batch_sampler)
```
### Streaming
```python
from datasets import load_dataset
from torch.utils.data import DataLoader
masc = load_dataset("pain/MASC", split="train")
dataloader = DataLoader(masc, batch_size=32)
```
To find out more about loading and preparing audio datasets, head over to [hf.co/blog/audio-datasets](https://huggingface.co/blog/audio-datasets).
### Example scripts
Train your own CTC or Seq2Seq Automatic Speech Recognition models on MASC with `transformers` - [here](https://github.com/huggingface/transformers/tree/main/examples/pytorch/speech-recognition).
## Dataset Structure
### Data Instances
A typical data point comprises the `path` to the audio file and its `sentence`.
```python
{'video_id': 'OGqz9G-JO0E', 'start': 770.6, 'end': 781.835, 'duration': 11.24,
'text': 'اللهم من ارادنا وبلادنا وبلاد المسلمين بسوء اللهم فاشغله في نفسه ورد كيده في نحره واجعل تدبيره تدميره يا رب العالمين',
'type': 'c', 'file_path': '87edeceb-5349-4210-89ad-8c3e91e54062_OGqz9G-JO0E.wav',
'audio': {'path': None,
'array': array([
0.05938721,
0.0539856,
0.03460693, ...,
0.00393677,
0.01745605,
0.03045654
]), 'sampling_rate': 16000
}
}
```
### Data Fields
- `video_id` (`string`): an ID for the YouTube video from which the chunk was extracted
- `start` (`float64`): the start time of the audio chunk
- `end` (`float64`): the end time of the audio chunk
- `duration` (`float64`): the duration of the chunk
- `text` (`string`): the transcription of the chunk
- `type` (`string`): the subset the chunk belongs to, either clean or noisy ("c": clean, "n": noisy)
- `file_path` (`string`): the file path of the audio chunk
- `audio` (`dict`): a dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate. Note that when accessing the audio column: `dataset[0]["audio"]` the audio file is automatically decoded and resampled to `dataset.features["audio"].sampling_rate`. Decoding and resampling of a large number of audio files might take a significant amount of time. Thus it is important to first query the sample index before the `"audio"` column, *i.e.* `dataset[0]["audio"]` should **always** be preferred over `dataset["audio"][0]`.
### Data Splits
The speech material has been subdivided into portions for train, dev, and test.
The dataset splits contain both clean and noisy data, which can be distinguished by the `type` field.
### Citation Information
```
@INPROCEEDINGS{10022652,
author={Al-Fetyani, Mohammad and Al-Barham, Muhammad and Abandah, Gheith and Alsharkawi, Adham and Dawas, Maha},
booktitle={2022 IEEE Spoken Language Technology Workshop (SLT)},
title={MASC: Massive Arabic Speech Corpus},
year={2023},
volume={},
number={},
pages={1006-1013},
doi={10.1109/SLT54892.2023.10022652}}
}
``` |
afkfatih/turkishdataset | 2023-06-10T11:44:18.000Z | [
"region:us"
] | afkfatih | null | null | null | 1 | 12 | Deneme sürümdür lütfen kullanmayınız.
---
license: apache-2.0
--- |
Kamaljp/medium_articles | 2023-06-11T09:48:58.000Z | [
"region:us"
] | Kamaljp | null | null | null | 0 | 12 | ---
dataset_info:
features:
- name: title
dtype: string
- name: text
dtype: string
- name: url
dtype: string
- name: authors
dtype: string
- name: timestamp
dtype: string
- name: tags
dtype: string
splits:
- name: train
num_bytes: 1044746687
num_examples: 192368
download_size: 601519297
dataset_size: 1044746687
---
# Dataset Card for "medium_articles"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Dublikunt/hent | 2023-06-17T17:32:07.000Z | [
"task_categories:image-classification",
"size_categories:10K<n<100K",
"license:bsd-3-clause",
"art",
"region:us"
] | Dublikunt | null | null | null | 0 | 12 | ---
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': anal
'1': armpits
'2': ass
'3': bdsm
'4': belly
'5': blowjob
'6': darkskin
'7': ecchi
'8': feet
'9': femboys
'10': femdom
'11': hips
'12': kemonomimi
'13': kissing
'14': lactation
'15': lingerie
'16': milf
'17': nakadashi
'18': oppai
'19': paizuri
'20': pussy
'21': rim-job
'22': slime-girls
'23': spreading
'24': squirting
'25': tomboy
'26': yuri
splits:
- name: train
num_bytes: 5624302009.552
num_examples: 42083
- name: validation
num_bytes: 280993818.54
num_examples: 2060
- name: test
num_bytes: 306970526.884
num_examples: 2262
download_size: 6112211487
dataset_size: 6212266354.976
license: bsd-3-clause
task_categories:
- image-classification
tags:
- art
pretty_name: Hent
size_categories:
- 10K<n<100K
---
# Dataset Card for "hent"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
nanyy1025/pubmed_rct_200k | 2023-06-17T08:34:24.000Z | [
"license:openrail",
"region:us"
] | nanyy1025 | null | null | null | 0 | 12 | ---
license: openrail
---
|
Csplk/video-images | 2023-06-18T07:43:33.000Z | [
"task_categories:image-classification",
"task_categories:image-segmentation",
"size_categories:n<1K",
"language:en",
"license:openrail",
"webcams",
"outdoor",
"indoor",
"region:us"
] | Csplk | null | null | null | 0 | 12 | ---
license: openrail
task_categories:
- image-classification
- image-segmentation
language:
- en
tags:
- webcams
- outdoor
- indoor
pretty_name: vdoimgs
size_categories:
- n<1K
--- |
wwydmanski/USPS | 2023-06-20T08:34:38.000Z | [
"task_categories:tabular-classification",
"size_categories:1K<n<10K",
"tabular",
"region:us"
] | wwydmanski | null | null | null | 0 | 12 | ---
task_categories:
- tabular-classification
tags:
- tabular
pretty_name: A database for handwritten text recognition research
size_categories:
- 1K<n<10K
--- |
KaiLv/UDR_CR | 2023-06-21T12:22:14.000Z | [
"region:us"
] | KaiLv | null | null | null | 0 | 12 | ---
dataset_info:
features:
- name: idx
dtype: int64
- name: label
dtype: int64
- name: sentence
dtype: string
splits:
- name: train
num_bytes: 204336
num_examples: 1772
- name: test
num_bytes: 233558
num_examples: 1996
download_size: 252165
dataset_size: 437894
---
# Dataset Card for "UDR_CR"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
KaiLv/UDR_CosmosQA | 2023-06-21T12:35:02.000Z | [
"region:us"
] | KaiLv | null | null | null | 0 | 12 | ---
dataset_info:
features:
- name: idx
dtype: int64
- name: question
dtype: string
- name: label
dtype: string
- name: choices
dtype: string
- name: len_question
dtype: int64
- name: max_len_choices
dtype: int64
splits:
- name: train
num_bytes: 11188271
num_examples: 18770
- name: test
num_bytes: 3979297
num_examples: 6030
- name: validation
num_bytes: 1722925
num_examples: 2603
- name: debug
num_bytes: 2985534
num_examples: 5000
download_size: 11095169
dataset_size: 19876027
---
# Dataset Card for "UDR_CosmosQA"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
KaiLv/UDR_DART | 2023-06-21T12:36:09.000Z | [
"region:us"
] | KaiLv | null | null | null | 0 | 12 | ---
dataset_info:
features:
- name: idx
dtype: int64
- name: question
dtype: string
- name: target
dtype: string
- name: references
dtype: string
- name: len_question
dtype: int64
- name: len_target
dtype: int64
splits:
- name: train
num_bytes: 8360993
num_examples: 30123
- name: validation
num_bytes: 1657570
num_examples: 2718
- name: test
num_bytes: 2532366
num_examples: 4159
- name: debug
num_bytes: 1396342
num_examples: 5000
download_size: 4740566
dataset_size: 13947271
---
# Dataset Card for "UDR_DART"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
KaiLv/UDR_Go | 2023-06-21T12:39:08.000Z | [
"region:us"
] | KaiLv | null | null | null | 0 | 12 | ---
dataset_info:
features:
- name: idx
dtype: int64
- name: question
dtype: string
- name: target
dtype: string
- name: len_question
dtype: int64
- name: len_target
dtype: int64
splits:
- name: train
num_bytes: 89583705
num_examples: 167137
- name: validation
num_bytes: 3547138
num_examples: 7320
- name: test
num_bytes: 4244257
num_examples: 8115
- name: debug
num_bytes: 53690904
num_examples: 100000
download_size: 66725220
dataset_size: 151066004
---
# Dataset Card for "UDR_Go"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
KaiLv/UDR_Python | 2023-06-21T12:45:54.000Z | [
"region:us"
] | KaiLv | null | null | null | 0 | 12 | ---
dataset_info:
features:
- name: idx
dtype: int64
- name: question
dtype: string
- name: target
dtype: string
- name: len_question
dtype: int64
- name: len_target
dtype: int64
splits:
- name: train
num_bytes: 153748508
num_examples: 250818
- name: validation
num_bytes: 8561595
num_examples: 13841
- name: test
num_bytes: 9299006
num_examples: 14840
- name: debug
num_bytes: 61463442
num_examples: 100000
download_size: 107210496
dataset_size: 233072551
---
# Dataset Card for "UDR_Python"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
KaiLv/UDR_SNLI | 2023-06-21T12:49:04.000Z | [
"region:us"
] | KaiLv | null | null | null | 0 | 12 | ---
dataset_info:
features:
- name: idx
dtype: int64
- name: label
dtype:
class_label:
names:
'0': entailment
'1': neutral
'2': contradiction
- name: sentence
dtype: string
- name: len_sentence
dtype: int64
splits:
- name: test
num_bytes: 747502
num_examples: 3262
- name: train
num_bytes: 28963424
num_examples: 131062
- name: validation
num_bytes: 750070
num_examples: 3272
- name: debug
num_bytes: 22092624
num_examples: 100000
download_size: 17825058
dataset_size: 52553620
---
# Dataset Card for "UDR_SNLI"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
tianleliphoebe/DreamEditBench | 2023-06-23T05:05:09.000Z | [
"task_categories:image-to-image",
"task_categories:text-to-image",
"size_categories:n<1K",
"language:en",
"license:cc-by-4.0",
"arxiv:2306.12624",
"region:us"
] | tianleliphoebe | null | null | null | 6 | 12 | ---
license: cc-by-4.0
task_categories:
- image-to-image
- text-to-image
language:
- en
size_categories:
- n<1K
---
## DreamEditBench for Subject Replacement task and Subject Addition task.
## Dataset Description
- **Homepage:** https://dreameditbenchteam.github.io
- **Repository:** https://github.com/DreamEditBenchTeam/DreamEdit
<!-- **Paper:** https://arxiv.org/abs/2306.12624 -->
The goal of subject replacement is to replace a subject in a source image with a customized subject. In contrast, the aim of the subject addition task is to add a customized subject at a desired position in the source image. To standardize the evaluation of the two proposed tasks, we curate a new benchmark, DreamEditBench, consisting of 22 subjects aligned with DreamBooth, with 20 images for each subject. For the subject replacement task, we collect 10 images for each type, which include same-typed source subjects in diverse environments. The images are retrieved from the internet with the search query “a photo of [Class name]”, and the source subject should be the main subject of the image, dominating a major part of the photo. For the subject addition task, we collect 10 reasonable backgrounds for each type of subject and manually designate, with a bounding box, the specific location in the background where the target subject should be placed. To collect the specific backgrounds for each subject, we first brainstorm and list the possible common environments of the subjects, then search the listed keywords on the internet to retrieve and pick the backgrounds.
## Data Structure
There are 22 subject folders in each task folder. Each subject folder contains 10 source images. For the Subject Addition task, there is an additional bbox.json file recording the manually labeled bounding box for each background.
The replacement_subset.csv and addition_subset.csv files record the easy/hard subset division for each task, respectively.
## Citation Information
If you find this dataset useful, please consider citing our paper:
```
@misc{li2023dreamedit,
title={DreamEdit: Subject-driven Image Editing},
author={Tianle Li and Max Ku and Cong Wei and Wenhu Chen},
year={2023},
eprint={2306.12624},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
``` |
shinonomelab/cleanvid-15m_map | 2023-07-02T04:22:55.000Z | [
"task_categories:text-to-video",
"task_categories:video-classification",
"size_categories:10M<n<100M",
"language:en",
"license:cc-by-4.0",
"captions",
"metadata",
"region:us"
] | shinonomelab | null | null | null | 7 | 12 | ---
license: cc-by-4.0
dataset_info:
features:
- name: id
dtype: int64
- name: description
dtype: string
- name: duration
dtype: float64
- name: aspectratio
dtype: string
- name: videourl
dtype: string
- name: author
dtype: string
- name: categories
dtype: string
- name: framerate
dtype: float64
- name: r18
dtype: int64
splits:
- name: train
num_bytes: 16755833083
num_examples: 14394510
download_size: 5410262648
dataset_size: 16755833083
task_categories:
- text-to-video
- video-classification
language:
- en
tags:
- captions
- metadata
pretty_name: CleanVid Map (15M)
size_categories:
- 10M<n<100M
---
# CleanVid Map (15M) 🎥
### TempoFunk Video Generation Project
CleanVid-15M is a large-scale dataset of videos with multiple metadata entries such as:
- Textual Descriptions 📃
- Recording Equipment 📹
- Categories 🔠
- Framerate 🎞️
- Aspect Ratio 📺
CleanVid aims to improve on the WebVid-10M dataset by adding more data and by dewatermarking the videos in it.
This dataset includes only the map with the URLs and metadata, and has 3,694,510 more entries than the original WebVid-10M dataset.
Note that the videos are low-resolution, ranging from 240p to 480p, but this shouldn't be a problem as resolution scaling is difficult in Text-To-Video models.
More Datasets to come for high-res use cases.
CleanVid is the foundation dataset for the TempoFunk Video Generation project.
Built from a crawl of Shutterstock from June 25, 2023.
## Format 📊
- id: Integer (int64) - Shutterstock video ID
- description: String - Description of the video
- duration: Float(64) - Duration of the video in seconds
- aspectratio: String - Aspect Ratio of the video separated by colons (":")
- videourl: String - Video URL for the video in the entry, in MP4 format. A WEBM version is also available most of the time (by changing the extension at the end of the URL; see the sketch after this list).
- author: String - JSON-String containing information of the author such as `Recording Equipment`, `Style`, `Nationality` and others.
- categories: String - JSON-String containing the categories of the videos. (Values from shutterstock, not by us.)
- framerate: Float(64) - Framerate of the video
- r18: Bit (int64) - Whether the video is marked as mature content. 0 = Safe For Work; 1 = Mature Content
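Below is a small illustrative sketch (not taken from the project's own code) showing how a single map entry could be filtered for SFW content, how the WEBM variant can be derived as described above, and how the JSON-string fields can be decoded; whether `author` and `categories` arrive as strings or already-decoded objects may depend on how the map is read:
```python
import json

def parse_json_field(value):
    # `author` and `categories` are documented as JSON-strings, but may already be decoded
    return json.loads(value) if isinstance(value, str) else value

def webm_url(entry):
    # the WEBM variant is usually reachable by swapping the file extension
    return entry["videourl"].rsplit(".", 1)[0] + ".webm"

def is_sfw(entry):
    return entry["r18"] == 0
```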
## Code 👩💻
If you want to re-create this dataset on your own, code is available here:
https://github.com/chavinlo/tempofunk-scrapper/tree/refractor1/sites/shutterstock
Due to rate-limitations, you might need to obtain a proxy. Functionality for proxies is included in the repository.
## Sample 🧪
```json
{
"id": 1056934082,
"description": "Rio, Brazil - February 24, 2020: parade of the samba school Mangueira, at the Marques de Sapucai Sambodromo",
"duration": 9.76,
"aspectratio": "16:9",
"videourl": "https://www.shutterstock.com/shutterstock/videos/1056934082/preview/stock-footage-rio-brazil-february-parade-of-the-samba-school-mangueira-at-the-marques-de-sapucai.mp4",
"author": {
"accountsId": 101974372,
"contributorId": 62154,
"bio": "Sempre produzindo mais",
"location": "br",
"website": "www.dcpress.com.br",
"contributorTypeList": [
"photographer"
],
"equipmentList": [
"300mm f2.8",
"24-70mm",
"70-200mm",
"Nikon D7500 ",
"Nikon Df",
"Flashs Godox"
],
"styleList": [
"editorial",
"food",
"landscape"
],
"subjectMatterList": [
"photographer",
"people",
"nature",
"healthcare",
"food_and_drink"
],
"facebookUsername": "celso.pupo",
"googlePlusUsername": "celsopupo",
"twitterUsername": "celsopupo",
"storageKey": "/contributors/62154/avatars/thumb.jpg",
"cdnThumbPath": "/contributors/62154/avatars/thumb.jpg",
"displayName": "Celso Pupo",
"vanityUrlUsername": "rodrigues",
"portfolioUrlSuffix": "rodrigues",
"portfolioUrl": "https://www.shutterstock.com/g/rodrigues",
"instagramUsername": "celsopupo",
"hasPublicSets": true,
"instagramUrl": "https://www.instagram.com/celsopupo",
"facebookUrl": "https://www.facebook.com/celso.pupo",
"twitterUrl": "https://twitter.com/celsopupo"
},
"categories": [
"People"
],
"framerate": 29.97,
"r18": 0
}
```
## Credits 👥
### Main
- Lopho - Part of TempoFunk Video Generation
- Chavinlo - Part of TempoFunk Video Generation & CleanVid Crawling, Scraping and Formatting
```
@InProceedings{Bain21,
author = "Max Bain and Arsha Nagrani and G{\"u}l Varol and Andrew Zisserman",
title = "Frozen in Time: A Joint Video and Image Encoder for End-to-End Retrieval",
booktitle = "IEEE International Conference on Computer Vision",
year = "2021",
}
```
### Extra
- Salt - Base Threading Code (2022) |
intertwine-expel/knowledge-base | 2023-07-01T09:55:08.000Z | [
"region:us"
] | intertwine-expel | null | null | null | 0 | 12 | ---
pretty_name: Expel Knowledge Base Articles
---
# Expel Knowledge Base Articles |
Ngadou/social-engineering-convo | 2023-07-03T01:36:04.000Z | [
"task_categories:text-classification",
"size_categories:n<1K",
"language:en",
"language:fr",
"license:apache-2.0",
"region:us"
] | Ngadou | null | null | null | 1 | 12 | ---
license: apache-2.0
task_categories:
- text-classification
language:
- en
- fr
pretty_name: Social Engineering Conversation modelling
size_categories:
- n<1K
---
Social Engineering Conversation modelling
## Rationale
LLMs are few-shot learners. |
Feanix/gtzan-5-sec | 2023-07-03T19:23:04.000Z | [
"task_categories:audio-classification",
"size_categories:1K<n<10K",
"music",
"region:us"
] | Feanix | GTZAN is a dataset for musical genre classification of audio signals. The dataset consists of 1,000 audio tracks, each of 30 seconds long. It contains 10 genres, each represented by 100 tracks. The tracks are all 22,050Hz Mono 16-bit audio files in WAV format. The genres are: blues, classical, country, disco, hiphop, jazz, metal, pop, reggae, and rock. | @misc{tzanetakis_essl_cook_2001,
author = "Tzanetakis, George and Essl, Georg and Cook, Perry",
title = "Automatic Musical Genre Classification Of Audio Signals",
url = "http://ismir2001.ismir.net/pdf/tzanetakis.pdf",
publisher = "The International Society for Music Information Retrieval",
year = "2001"
} | null | 0 | 12 | ---
pretty_name: GTZAN
task_categories:
- audio-classification
tags:
- music
size_categories:
- 1K<n<10K
---
# Dataset Card for GTZAN
## Table of Contents
- [Dataset Card for GTZAN](#dataset-card-for-gtzan)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Initial Data Collection and Normalization](#initial-data-collection-and-normalization)
- [Who are the source language producers?](#who-are-the-source-language-producers)
- [Annotations](#annotations)
- [Annotation process](#annotation-process)
- [Who are the annotators?](#who-are-the-annotators)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [http://marsyas.info/downloads/datasets.html](http://marsyas.info/downloads/datasets.html)
- **Paper:** [http://ismir2001.ismir.net/pdf/tzanetakis.pdf](http://ismir2001.ismir.net/pdf/tzanetakis.pdf)
- **Point of Contact:**
### Dataset Summary
GTZAN is a dataset for musical genre classification of audio signals. The dataset consists of 1,000 audio tracks, each of 30 seconds long. It contains 10 genres, each represented by 100 tracks. The tracks are all 22,050Hz Mono 16-bit audio files in WAV format. The genres are: blues, classical, country, disco, hiphop, jazz, metal, pop, reggae, and rock.
*** THIS VERSION OF THE DATASET CONTAINS THE ORIGINAL AUDIO TRACKS SEGMENTED INTO 5 SECOND LONG FILES ***
### Languages
English
## Dataset Structure
GTZAN is distributed as a single dataset without a predefined training and test split. The information below refers to the single `train` split that is assigned by default.
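A minimal loading sketch (the repository id below is assumed from this card, and the field names follow the Data Fields section further down):
```python
from datasets import load_dataset

gtzan = load_dataset("Feanix/gtzan-5-sec")        # single default "train" split
sample = gtzan["train"][0]                        # index the row before touching "audio"
genre_name = gtzan["train"].features["genre"].int2str(sample["genre"])
print(genre_name, sample["audio"]["sampling_rate"])
```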
### Data Instances
An example of GTZAN looks as follows:
```python
{
"file": "/path/to/cache/genres/blues/blues.00000.wav",
"audio": {
"path": "/path/to/cache/genres/blues/blues.00000.wav",
"array": array(
[
0.00732422,
0.01660156,
0.00762939,
...,
-0.05560303,
-0.06106567,
-0.06417847,
],
dtype=float32,
),
"sampling_rate": 22050,
},
"genre": 0,
}
```
### Data Fields
The types associated with each of the data fields is as follows:
* `file`: a `string` feature.
* `audio`: an `Audio` feature containing the `path` of the sound file, the decoded waveform in the `array` field, and the `sampling_rate`.
* `genre`: a `ClassLabel` feature.
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
```
@misc{tzanetakis_essl_cook_2001,
author = "Tzanetakis, George and Essl, Georg and Cook, Perry",
title = "Automatic Musical Genre Classification Of Audio Signals",
url = "http://ismir2001.ismir.net/pdf/tzanetakis.pdf",
publisher = "The International Society for Music Information Retrieval",
year = "2001"
}
```
### Contributions
Thanks to [@lewtun](https://github.com/lewtun) for adding this dataset. |
Senem/Nostalgic_Sentiment_Analysis_of_YouTube_Comments_Data | 2023-10-03T12:49:45.000Z | [
"task_categories:text-classification",
"size_categories:1K<n<10K",
"language:en",
"license:afl-3.0",
"youtube comments",
"nostalgia",
"nlp",
"music",
"sentiment analysis",
"region:us"
] | Senem | null | null | null | 1 | 12 | ---
language:
- en
license: afl-3.0
task_categories:
- text-classification
tags:
- youtube comments
- nostalgia
- nlp
- music
- sentiment analysis
size_categories:
- 1K<n<10K
paper:
- Comparison of Neural Network Models for Nostalgic Sentiment Analysis of YouTube Comments
---
# Dataset Summary
+ The dataset is a collection of YouTube comments captured using the YouTube Data API.
+ The dataset consists of 1,500 nostalgic and non-nostalgic comments in English.
# Languages
The language of the data is English.
# Citation
If you find this dataset useful for your study, please cite the paper as follows:
```bibtex
@article{postalcioglu2020comparison,
title={Comparison of Neural Network Models for Nostalgic Sentiment Analysis of YouTube Comments},
author={Postalcioglu, Seda and Aktas, Senem},
journal={Hittite Journal of Science and Engineering},
volume={7},
number={3},
pages={215--221},
year={2020},
publisher={Hitit University}
}
``` |
visual-layer/vl-celeba-hq | 2023-07-04T10:59:12.000Z | [
"license:other",
"region:us"
] | visual-layer | null | null | null | 0 | 12 | ---
license: other
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': female
'1': male
splits:
- name: train
num_bytes: 2674924572.386
num_examples: 27412
- name: validation
num_bytes: 192330079.038
num_examples: 1959
download_size: 2704339198
dataset_size: 2867254651.4240003
---
[](https://www.visual-layer.com)
# Description
`vl-celeba-hq` is a sanitized version of the original CelebA-HQ dataset.
The following issues were found in the original dataset and removed from this one:
<table>
<thead>
<tr>
<th style="text-align: left;">Category</th>
<th style="text-align: left;">Percentage</th>
<th style="text-align: left;">Count</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: left;">Duplicates</td>
<td style="text-align: left;"><div>1.67%</div></td>
<td style="text-align: left;"><div>3,389</div></td>
</tr>
<tr>
<td style="text-align: left;">Outliers</td>
<td style="text-align: left;"><div>0.08%</div></td>
<td style="text-align: left;"><div>157</div></td>
</tr>
<tr>
<td style="text-align: left;">Blur</td>
<td style="text-align: left;"><div>0.51%</div></td>
<td style="text-align: left;"><div>1,037</div></td>
</tr>
<tr>
<td style="text-align: left;">Dark</td>
<td style="text-align: left;"><div>0.001%</div></td>
<td style="text-align: left;"><div>2</div></td>
</tr>
<tr>
<td style="text-align: left;">Mislabels</td>
<td style="text-align: left;"><div>0.01%</div></td>
<td style="text-align: left;"><div>13</div></td>
</tr>
<tr>
<td style="text-align: left;">Leakage</td>
<td style="text-align: left;"><div>0.09%</div></td>
<td style="text-align: left;"><div>188</div></td>
</tr>
<tr>
<td style="text-align: left; font-weight: bold;">Total</td>
<td style="text-align: left; font-weight: bold;"><div>2.362%</div></td>
<td style="text-align: left; font-weight: bold;"><div>4,786</div></td>
</tr>
</tbody>
</table>
Learn more - https://docs.visual-layer.com/docs/available-datasets#vl-celeba-hq
# About Visual-Layer
<div align="center">
<a href="https://www.visual-layer.com">
<img alt="Visual Layer Logo" src="https://github.com/visual-layer/visuallayer/blob/main/imgs/vl_horizontal_logo.png?raw=true" alt="Logo" width="400">
</a>
</div>
Visual Layer was founded by the authors of [XGBoost](https://github.com/dmlc/xgboost), [Apache TVM](https://github.com/apache/tvm) & [Turi Create](https://github.com/apple/turicreate) - [Danny Bickson](https://www.linkedin.com/in/dr-danny-bickson-835b32), [Carlos Guestrin](https://www.linkedin.com/in/carlos-guestrin-5352a869) and [Amir Alush](https://www.linkedin.com/in/amiralush).
Learn more about Visual Layer [here](https://visual-layer.com). |
crumb/flan-ul2-tinystories-complex | 2023-07-08T05:54:03.000Z | [
"language:en",
"license:mit",
"region:us"
] | crumb | null | null | null | 4 | 12 | ---
license: mit
language:
- en
---
Around a quarter of a million examples generated from Flan-UL2 (20B) with the prompt "Write a complex short story using the vocabulary of a third-grader.", intended for use in an experimental curriculum learning setting. I had to checkpoint every 1024 examples to mitigate the program slowing down due to memory usage. This was run in bf16 on an RTX A6000 with the following settings:
```
top_k = random between (40, 128)
temperature = random between (0.6, 0.95)
max_length = 128
batch_size = 32
```
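A minimal sketch of how these settings might map onto a standard `transformers` generation loop; this is an illustration under assumptions (the `google/flan-ul2` checkpoint and the batching shown here), not the exact script that produced the dataset:
```python
import random
import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("google/flan-ul2")
model = AutoModelForSeq2SeqLM.from_pretrained(
    "google/flan-ul2", torch_dtype=torch.bfloat16, device_map="auto"
)

prompt = "Write a complex short story using the vocabulary of a third-grader."
inputs = tokenizer([prompt] * 32, return_tensors="pt").to(model.device)  # batch_size = 32

outputs = model.generate(
    **inputs,
    do_sample=True,
    top_k=random.randint(40, 128),           # modulated per batch, as described below
    temperature=random.uniform(0.6, 0.95),
    max_length=128,
)
stories = tokenizer.batch_decode(outputs, skip_special_tokens=True)
```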
I wanted a less uniform, less boring set that doesn't repeat the same exact patterns, so I randomly modulate the temperature and top_k values to get a good mix. This cost ~$6 USD to create on RunPod. |
bgglue/bgglue | 2023-08-06T15:22:26.000Z | [
"task_categories:text-classification",
"task_categories:token-classification",
"task_categories:question-answering",
"task_categories:multiple-choice",
"task_ids:multiple-choice-qa",
"task_ids:named-entity-recognition",
"task_ids:natural-language-inference",
"task_ids:part-of-speech",
"task_ids:sent... | bgglue | The Bulgarian General Language Understanding Evaluation (bgGLUE) benchmark is a collection of resources for
training, evaluating, and analyzing natural language understanding systems in Bulgarian. | @inproceedings{hardalov-etal-2023-bgglue,
title = "{bgGLUE}: A Bulgarian General Language Understanding Evaluation Benchmark",
author = "Hardalov, Momchil and
Atanasova, Pepa and
Mihaylov, Todor and
Angelova, Galia and
Simov, Kiril and
Osenova, Petya and
Stoyanov, Ves and
Koychev, Ivan and
Nakov, Preslav and
Radev, Dragomir",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
    month = jul,
    year = "2023",
    publisher = "Association for Computational Linguistics",
    address = "Toronto, Canada",
    url = "https://arxiv.org/abs/2306.02349"
} | null | 0 | 12 | ---
task_categories:
- text-classification
- token-classification
- question-answering
- multiple-choice
language:
- bg
pretty_name: Bulgarian GLUE
size_categories:
- n<1K
- 1K<n<10K
- 10K<n<100K
- 100K<n<1M
license:
- mit
- cc-by-3.0
- cc-by-sa-4.0
- other
- cc-by-nc-4.0
- cc-by-nc-3.0
task_ids:
- multiple-choice-qa
- named-entity-recognition
- natural-language-inference
- part-of-speech
- sentiment-analysis
source_datasets:
- bsnlp
- wikiann
- exams
- ct21.t1
- fakenews
- crediblenews
- universal_dependencies
tags:
- check-worthiness-estimation
- fake-new-detection
- humor-detection
- regression
- ranking
---
# Dataset Card for "bgGLUE: A Bulgarian General Language Understanding Evaluation Benchmark"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://bgglue.github.io/](https://bgglue.github.io/)
- **Repository:** [https://github.com/bgGLUE](https://github.com/bgGLUE)
- **Paper:** [bgGLUE: A Bulgarian General Language Understanding Evaluation Benchmark](https://arxiv.org/abs/2306.02349)
- **Point of Contact:** [bulgarianglue [at] gmail [dot] com](mailto:bulgarianglue@gmail.com)

### Dataset Summary
bgGLUE (Bulgarian General Language Understanding Evaluation) is a benchmark for evaluating language models on Natural Language Understanding (NLU) tasks in Bulgarian. The benchmark includes NLU tasks targeting a variety of NLP problems (e.g., natural language inference, fact-checking, named entity recognition, sentiment analysis, question answering, etc.) and machine learning tasks (sequence labeling, document-level classification, and regression).
### Supported Tasks and Leaderboards
List of supported tasks: [Tasks](https://bgglue.github.io/tasks/).
Leaderboard: [bgGLUE Leaderboard](https://bgglue.github.io/leaderboard/).
### Languages
Bulgarian
## Dataset Structure
### Data Instances
<div id="container">
<table id="table-tasks" class="table table-striped table-bordered">
<thead>
<tr>
<th scope="col">Name</th>
<th scope="col">Task type</th>
<th scope="col">Identifier</th>
<th scope="col" data-toggle="tooltip" data-placement="top" title="Tooltip on right">Download</th>
<th scope="col">More Info</th>
<th scope="col">Metrics</th>
<th scope="col">Train / Val / Test</th>
</tr>
</thead>
<tbody>
<tr>
<td>Balto-Slavic NLP Shared Task</td>
<td>Named Entity Recognition</td>
<td>BSNLP</td>
<td class="text-center"><a href="https://github.com/bgGLUE/bgglue/raw/main/data/bsnlp.tar.gz" target="_blank" rel="noopener">URL</a> </td>
<td class="text-center"><a href="https://bgglue.github.io/tasks/task_info/bsnlp/">Info</a> </td>
<td>F1</td>
<td>724 / 182 / 301</td>
</tr>
<tr>
<td>CheckThat! (2021), Task 1A </td>
<td>Check-Worthiness Estimation</td>
<td>CT21.T1</td>
<td class="text-center"><a href="https://gitlab.com/checkthat_lab/clef2021-checkthat-lab/-/tree/master/task1" target="_blank" rel="noopener">URL</a> </td>
<td class="text-center"><a href="https://bgglue.github.io/tasks/task_info/ct21-t1/">Info</a> </td>
<td>Average Precision</td>
<td>2,995 / 350 / 357</td>
</tr>
<tr>
<td>Cinexio Movie Reviews</td>
<td>Sentiment Analysis</td>
<td>Cinexio</td>
<td class="text-center"><a href="https://github.com/bgGLUE/bgglue/raw/main/data/cinexio.tar.gz" target="_blank" rel="noopener">URL</a> </td>
<td class="text-center"><a href="https://bgglue.github.io/tasks/task_info/cinexio/">Info</a> </td>
<td>Pearson-Spearman Corr</td>
<td>8,155 / 811 / 861</td>
</tr>
<tr>
<td>Hack the News Datathon (2019)</td>
<td>Fake News Detection</td>
<td>Fake-N</td>
<td class="text-center"><a href="https://github.com/bgGLUE/bgglue/raw/main/data/fakenews.tar.gz" target="_blank" rel="noopener">URL</a> </td>
<td class="text-center"><a href="https://bgglue.github.io/tasks/task_info/fakenews/">Info</a> </td>
<td>Binary F1</td>
<td>1,990 / 221 / 701</td>
</tr>
<tr>
<td>In Search of Credible News</td>
<td>Humor Detection</td>
<td>Cred.-N</td>
<td class="text-center"><a href="https://forms.gle/Z7PYHMAvFvFusWT37" target="_blank" rel="noopener">URL</a> </td>
<td class="text-center"><a href="https://bgglue.github.io/tasks/task_info/crediblenews/">Info</a> </td>
<td>Binary F1</td>
<td>19,227 / 5,949 / 17,887</td>
</tr>
<tr>
<td>Multi-Subject High School Examinations Dataset</td>
<td>Multiple-choice QA</td>
<td>EXAMS</td>
<td class="text-center"><a href="https://github.com/bgGLUE/bgglue/raw/main/data/exams.tar.gz" target="_blank" rel="noopener">URL</a> </td>
<td class="text-center"><a href="https://bgglue.github.io/tasks/task_info/exams/">Info</a> </td>
<td>Accuracy</td>
<td>1,512 / 365 / 1,472</td>
</tr>
<tr>
<td>Universal Dependencies</td>
<td>Part-of-Speech Tagging</td>
<td>U.Dep</td>
<td class="text-center"><a href="https://universaldependencies.org/#bulgarian-treebanks" target="_blank" rel="noopener">URL</a> </td>
<td class="text-center"><a href="https://bgglue.github.io/tasks/task_info/udep/">Info</a> </td>
<td>F1</td>
<td>8,907 / 1,115 / 1,116</td>
</tr>
<tr>
<td>Cross-lingual Natural Language Inference</td>
<td>Natural Language Inference</td>
<td>XNLI</td>
<td class="text-center"><a href="https://github.com/facebookresearch/XNLI#download" target="_blank" rel="noopener">URL</a> </td>
<td class="text-center"><a href="https://bgglue.github.io/tasks/task_info/xnli/">Info</a> </td>
<td>Accuracy</td>
<td>392,702 / 5,010 / 2,490</td>
</tr>
<tr>
<td>Cross-lingual Name Tagging and Linking (PAN-X / WikiAnn)</td>
<td>Named Entity Recognition</td>
<td>PAN-X</td>
<td class="text-center"><a href="https://github.com/bgGLUE/bgglue/raw/main/data/wikiann_bg.tar.gz">URL</a> </td>
<td class="text-center"><a href="https://bgglue.github.io/tasks/task_info/wikiann/">Info</a> </td>
<td>F1</td>
<td>16,237 / 7,029 / 7,263 </td>
</tr>
</tbody>
</table>
</div>
## Dataset Creation
### Source Data
#### Initial Data Collection and Normalization
Here, we describe the pre-processing steps we took to prepare the datasets before including them in the bgGLUE benchmark. Our main goal was to ensure that the setup evaluated the language understanding abilities of the models in a principled way and in a diverse set of domains. Since all of the datasets were publicly available, we preserved the original setup as much as possible. Nevertheless, we found that some datasets contained duplicate examples across their train/dev/test splits, or that all of the splits came from the same domain, which may overestimate the model's performance. Hence, *we removed data leaks* and proposed new topic-based or temporal-based (i.e., timestamp-based) data splits where needed. We deduplicated examples based on a complete word overlap between two normalized texts, i.e., lowercased and with all stop words excluded.
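For illustration only (the stop-word list and the exact matching code are not part of this card, so the details below are assumptions), the deduplication criterion described above can be sketched as follows:
```python
# Sketch of the "complete word overlap" criterion: two texts are treated as duplicates
# when their normalized word sets (lowercased, stop words removed) are identical.
STOP_WORDS = {"и", "в", "на", "с", "за", "по", "от"}  # placeholder list; the real one is unspecified

def normalize(text: str) -> frozenset:
    return frozenset(word for word in text.lower().split() if word not in STOP_WORDS)

def is_duplicate(text_a: str, text_b: str) -> bool:
    return normalize(text_a) == normalize(text_b)
```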
## Considerations for Using the Data
### Discussion of Biases
The datasets included in bgGLUE were annotated by human annotators, who could be subject to potential biases in their annotation process. Hence, the datasets in bgGLUE could potentially be misused to develop models that make predictions that are unfair to individuals or groups. Therefore, we ask users of bgGLUE to be aware of such potential biases and risks of misuse. We note that any biases that might exist in the original resources gathered in this benchmark are unintentional and do not aim to cause harm.
### Other Known Limitations
#### Tasks in bgGLUE
The bgGLUE benchmark comprises nine challenging NLU tasks, including three token classification tasks, one ranking task, and five text classification tasks. While we cover three different types of tasks in the benchmark, we are restricted by the available resources for Bulgarian, and thus we could not include some other NLP tasks, such as language generation. We also consider only NLP tasks and do not include tasks with other/multiple modalities. Finally, some of the tasks are of a similar nature, e.g., we include two datasets for NER and two for credibility/fake news classification.
#### Domains in bgGLUE
The tasks included in bgGLUE span multiple domains such as social media posts, Wikipedia, and news articles, and can test both short and long document understanding. However, each task is limited to one domain, and the topics within that domain do not necessarily cover all possible topics. Moreover, some of the tasks have overlapping domains, e.g., the documents in both Cred.-N and Fake-N are news articles.
## Additional Information
### Licensing Information
The primary bgGLUE tasks are built on and derived from existing datasets.
We refer users to the original licenses accompanying each dataset.
For each dataset the license is listed on its ["Tasks" page](https://bgglue.github.io/tasks/) on the bgGLUE website.
### Citation Information
```
@inproceedings{hardalov-etal-2023-bgglue,
title = "bg{GLUE}: A {B}ulgarian General Language Understanding Evaluation Benchmark",
author = "Hardalov, Momchil and
Atanasova, Pepa and
Mihaylov, Todor and
Angelova, Galia and
Simov, Kiril and
Osenova, Petya and
Stoyanov, Veselin and
Koychev, Ivan and
Nakov, Preslav and
Radev, Dragomir",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-long.487",
pages = "8733--8759",
}
```
### Contributions
[List of bgGLUE contributors](https://bgglue.github.io/people/) |
rcds/MultiLegalNeg | 2023-09-03T11:45:22.000Z | [
"task_categories:token-classification",
"size_categories:1K<n<10K",
"license:cc-by-nd-4.0",
"legal",
"arxiv:2306.02069",
"region:us"
] | rcds | null | null | null | 0 | 12 | ---
license: cc-by-nd-4.0
viewer: true
task_categories:
- token-classification
tags:
- legal
pretty_name: Multilingual Negation Scope Resolution
size_categories:
- 1K<n<10K
---
# Dataset Card for MultiLegalNeg
### Dataset Summary
This dataset consists of German, French, and Italian court documents annotated for negation cues and negation scopes. It also includes reformatted versions of ConanDoyle-neg ([Morante and Blanco, 2012](https://aclanthology.org/S12-1035/)), SFU Review ([Konstantinova et al. 2012](http://www.lrec-conf.org/proceedings/lrec2012/pdf/533_Paper.pdf)), BioScope ([Szarvas et al. 2008](https://aclanthology.org/W08-0606/)) and Dalloux ([Dalloux et al. 2020](https://clementdalloux.fr/?page_id=28)).
### Languages
| Language | Subset | Number of sentences | Negated sentences |
|----------------------|-----------------|----------------------|-------------------|
| French | **fr** | 1059 | 382 |
| Italian | **it** | 1001 | 418 |
| German(Germany) | **de(DE)** | 1068 | 1098 |
| German (Switzerland) | **de(CH)** | 206 | 208 |
| English | **SFU Review** | 17672 | 3528 |
| English | **BioScope** | 14700 | 2095 |
| English | **ConanDoyle-neg**| 5714 | 5714 |
| French | **Dalloux** | 11032 | 1817 |
## Dataset Structure
### Data Fields
- text (string): full sentence
- spans (list): list of annotated cues and scopes
- start (int): offset of the beginning of the annotation
- end (int): offset of the end of the annotation
- token_start(int): id of the first token in the annotation
- token_end(int): id of the last token in the annotation
- label (string): CUE or SCOPE
- tokens (list): list of tokens in the sentence
- text (string): token text
- start (int): offset of the first character
- end (int): offset of the last character
- id (int): token id
- ws (boolean): indicates if the token is followed by a white space
### Data Splits
For each subset a train (70%), test (20%), and validation (10%) split is available.
#### How to use this dataset
To load all data use ```'all_all'```, or specify which dataset to load as the second argument. The available configurations are
```'de', 'fr', 'it', 'swiss', 'fr_dalloux', 'fr_all', 'en_bioscope', 'en_sherlock', 'en_sfu', 'en_all', 'all_all'```
```
from datasets import load_dataset
dataset = load_dataset("rcds/MultiLegalNeg", "all_all")
dataset
```
```
DatasetDict({
train: Dataset({
features: ['text', 'spans', 'tokens'],
num_rows: 26440
})
test: Dataset({
features: ['text', 'spans', 'tokens'],
num_rows: 7593
})
validation: Dataset({
features: ['text', 'spans', 'tokens'],
num_rows: 4053
})
})
```
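As a small usage sketch (not part of the original card), the character offsets in `spans` can be used to pull out the annotated cue and scope text from a sentence:
```python
# Assumes `dataset` is the DatasetDict loaded above
example = dataset["train"][0]
for span in example["spans"]:
    # label is either CUE or SCOPE; start/end are character offsets into the sentence text
    print(span["label"], "->", example["text"][span["start"]:span["end"]])
```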
### Source Data
| Subset | Source |
|-------------------|----------------------|
| **fr** | [Niklaus et al. 2021](https://aclanthology.org/2021.nllp-1.3/), [Niklaus et al. 2023](https://arxiv.org/abs/2306.02069) |
| **it** | [Niklaus et al. 2021](https://aclanthology.org/2021.nllp-1.3/), [Niklaus et al. 2023](https://arxiv.org/abs/2306.02069) |
| **de(DE)** | [Glaser et al. 2021](https://www.scitepress.org/Link.aspx?doi=10.5220/0010246308120821) |
| **de(CH)** | [Niklaus et al. 2021](https://aclanthology.org/2021.nllp-1.3/) |
| **SFU Review** | [Konstantinova et al. 2012](http://www.lrec-conf.org/proceedings/lrec2012/pdf/533_Paper.pdf) |
| **BioScope** | [Szarvas et al. 2008](https://aclanthology.org/W08-0606/) |
| **ConanDoyle-neg**| [Morante and Blanco. 2012](https://aclanthology.org/S12-1035/) |
| **Dalloux** | [Dalloux et al. 2020](https://clementdalloux.fr/?page_id=28) |
### Annotations
The data is annotated for negation cues and their scopes. Annotation guidelines are available [here](https://github.com/RamonaChristen/Multilingual_Negation_Scope_Resolution_on_Legal_Data/blob/main/Annotation_Guidelines.pdf)
#### Annotation process
Each language was annotated by one native-speaking annotator following strict annotation guidelines.
### Citation Information
```
TBD
```
|
TinyPixel/open-assistant | 2023-09-03T02:26:32.000Z | [
"region:us"
] | TinyPixel | null | null | null | 2 | 12 | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 9599234
num_examples: 8274
download_size: 5137419
dataset_size: 9599234
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "open-assistant"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
CarperAI/pickapic_v1_no_images_training_sfw | 2023-07-18T00:02:29.000Z | [
"license:mit",
"region:us"
] | CarperAI | null | null | null | 1 | 12 | ---
license: mit
---
### Dataset Information
This is an SFW, sanitized, prompt-only version of the [PickAPic dataset](https://huggingface.co/datasets/yuvalkirstain/pickapic_v1), with 335,000 prompts and image URLs.
### Citation Information
If you find this work useful, please cite:
```bibtex
@inproceedings{Kirstain2023PickaPicAO,
title={Pick-a-Pic: An Open Dataset of User Preferences for Text-to-Image Generation},
author={Yuval Kirstain and Adam Polyak and Uriel Singer and Shahbuland Matiana and Joe Penna and Omer Levy},
year={2023}
}
```
### LICENSE
MIT License
Copyright (c) 2021
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE. |
agie-ai/lmsys-chatbot_arena_conversations | 2023-07-22T04:52:36.000Z | [
"region:us"
] | agie-ai | null | null | null | 0 | 12 | ---
dataset_info:
features:
- name: question_id
dtype: string
- name: model_a
dtype: string
- name: model_b
dtype: string
- name: winner
dtype: string
- name: judge
dtype: string
- name: conversation_a
list:
- name: content
dtype: string
- name: role
dtype: string
- name: conversation_b
list:
- name: content
dtype: string
- name: role
dtype: string
- name: turn
dtype: int64
- name: anony
dtype: bool
- name: language
dtype: string
- name: tstamp
dtype: float64
- name: openai_moderation
struct:
- name: categories
struct:
- name: harassment
dtype: bool
- name: harassment/threatening
dtype: bool
- name: hate
dtype: bool
- name: hate/threatening
dtype: bool
- name: self-harm
dtype: bool
- name: self-harm/instructions
dtype: bool
- name: self-harm/intent
dtype: bool
- name: sexual
dtype: bool
- name: sexual/minors
dtype: bool
- name: violence
dtype: bool
- name: violence/graphic
dtype: bool
- name: category_scores
struct:
- name: harassment
dtype: float64
- name: harassment/threatening
dtype: float64
- name: hate
dtype: float64
- name: hate/threatening
dtype: float64
- name: self-harm
dtype: float64
- name: self-harm/instructions
dtype: float64
- name: self-harm/intent
dtype: float64
- name: sexual
dtype: float64
- name: sexual/minors
dtype: float64
- name: violence
dtype: float64
- name: violence/graphic
dtype: float64
- name: flagged
dtype: bool
- name: toxic_chat_tag
struct:
- name: roberta-large
struct:
- name: flagged
dtype: bool
- name: probability
dtype: float64
- name: t5-large
struct:
- name: flagged
dtype: bool
- name: score
dtype: float64
splits:
- name: train
num_bytes: 81159839
num_examples: 33000
download_size: 41572997
dataset_size: 81159839
---
# Dataset Card for "lmsys-chatbot_arena_conversations"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
qwerty8409/digesion_Ayurveda | 2023-07-22T06:41:00.000Z | [
"region:us"
] | qwerty8409 | null | null | null | 1 | 12 | ---
# For reference on model card metadata, see the spec: https://github.com/huggingface/hub-docs/blob/main/datasetcard.md?plain=1
# Doc / guide: https://huggingface.co/docs/hub/datasets-cards
{}
---
# Dataset Card for Dataset Name
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1).
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] |
shlomihod/civil-comments-wilds | 2023-07-28T17:27:14.000Z | [
"task_categories:text-classification",
"size_categories:100K<n<1M",
"language:en",
"license:cc0-1.0",
"toxicity",
"arxiv:2012.07421",
"arxiv:1903.04561",
"arxiv:1808.07231",
"arxiv:1911.08731",
"arxiv:2211.09110",
"region:us"
] | shlomihod | In this dataset, given a textual dialogue i.e. an utterance along with two previous turns of context, the goal was to infer the underlying emotion of the utterance by choosing from four emotion classes - Happy, Sad, Angry and Others. | @inproceedings{wilds2021,
title = {{WILDS}: A Benchmark of in-the-Wild Distribution Shifts},
author = {Pang Wei Koh and Shiori Sagawa and Henrik Marklund and Sang Michael Xie and Marvin Zhang and
Akshay Balsubramani and Weihua Hu and Michihiro Yasunaga and Richard Lanas Phillips and Irena Gao and
Tony Lee and Etienne David and Ian Stavness and Wei Guo and Berton A. Earnshaw and Imran S. Haque and
Sara Beery and Jure Leskovec and Anshul Kundaje and Emma Pierson and Sergey Levine and Chelsea Finn
and Percy Liang},
booktitle = {International Conference on Machine Learning (ICML)},
year = {2021}
}
@inproceedings{borkan2019nuanced,
title={Nuanced metrics for measuring unintended bias with real data for text classification},
author={Borkan, Daniel and Dixon, Lucas and Sorensen, Jeffrey and Thain, Nithum and Vasserman, Lucy},
booktitle={Companion Proceedings of The 2019 World Wide Web Conference},
pages={491--500},
year={2019}
}
@article{DBLP:journals/corr/abs-2211-09110,
author = {Percy Liang and
Rishi Bommasani and
Tony Lee and
Dimitris Tsipras and
Dilara Soylu and
Michihiro Yasunaga and
Yian Zhang and
Deepak Narayanan and
Yuhuai Wu and
Ananya Kumar and
Benjamin Newman and
Binhang Yuan and
Bobby Yan and
Ce Zhang and
Christian Cosgrove and
Christopher D. Manning and
Christopher R{\'{e}} and
Diana Acosta{-}Navas and
Drew A. Hudson and
Eric Zelikman and
Esin Durmus and
Faisal Ladhak and
Frieda Rong and
Hongyu Ren and
Huaxiu Yao and
Jue Wang and
Keshav Santhanam and
Laurel J. Orr and
Lucia Zheng and
Mert Y{\"{u}}ksekg{\"{o}}n{\"{u}}l and
Mirac Suzgun and
Nathan Kim and
Neel Guha and
Niladri S. Chatterji and
Omar Khattab and
Peter Henderson and
Qian Huang and
Ryan Chi and
Sang Michael Xie and
Shibani Santurkar and
Surya Ganguli and
Tatsunori Hashimoto and
Thomas Icard and
Tianyi Zhang and
Vishrav Chaudhary and
William Wang and
Xuechen Li and
Yifan Mai and
Yuhui Zhang and
Yuta Koreeda},
title = {Holistic Evaluation of Language Models},
journal = {CoRR},
volume = {abs/2211.09110},
year = {2022},
url = {https://doi.org/10.48550/arXiv.2211.09110},
doi = {10.48550/arXiv.2211.09110},
eprinttype = {arXiv},
eprint = {2211.09110},
timestamp = {Wed, 23 Nov 2022 18:03:56 +0100},
biburl = {https://dblp.org/rec/journals/corr/abs-2211-09110.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
} | null | 0 | 12 | ---
license: cc0-1.0
task_categories:
- text-classification
language:
- en
tags:
- toxicity
pretty_name: CivilComments WILDS
size_categories:
- 100K<n<1M
---
# Dataset Card for CivilComments WILDS
## Dataset Description
- **Homepage:** https://wilds.stanford.edu/datasets/#civilcomments
- **Repository:**
- **Paper:** https://arxiv.org/abs/2012.07421 | https://arxiv.org/abs/1903.04561
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary

Automatic review of user-generated text—e.g., detecting toxic comments—is an important tool for moderating the sheer volume of text written on the Internet. Unfortunately, prior work has shown that such toxicity classifiers pick up on biases in the training data and spuriously associate toxicity with the mention of certain demographics ([Park et al., 2018](https://arxiv.org/abs/1808.07231); [Dixon et al., 2018](https://research.google/pubs/pub46743/)). These types of spurious correlations can significantly degrade model performance on particular subpopulations ([Sagawa et al.,2020](https://arxiv.org/abs/1911.08731)).
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
English
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
This dataset is in the public domain and is distributed under [CC0](https://creativecommons.org/publicdomain/zero/1.0/).
### Citation Information
@inproceedings{wilds2021,
title = {{WILDS}: A Benchmark of in-the-Wild Distribution Shifts},
author = {Pang Wei Koh and Shiori Sagawa and Henrik Marklund and Sang Michael Xie and Marvin Zhang and
Akshay Balsubramani and Weihua Hu and Michihiro Yasunaga and Richard Lanas Phillips and Irena Gao and
Tony Lee and Etienne David and Ian Stavness and Wei Guo and Berton A. Earnshaw and Imran S. Haque and
Sara Beery and Jure Leskovec and Anshul Kundaje and Emma Pierson and Sergey Levine and Chelsea Finn
and Percy Liang},
booktitle = {International Conference on Machine Learning (ICML)},
year = {2021}
}
@inproceedings{borkan2019nuanced,
title={Nuanced metrics for measuring unintended bias with real data for text classification},
author={Borkan, Daniel and Dixon, Lucas and Sorensen, Jeffrey and Thain, Nithum and Vasserman, Lucy},
booktitle={Companion Proceedings of The 2019 World Wide Web Conference},
pages={491--500},
year={2019}
}
@article{DBLP:journals/corr/abs-2211-09110,
author = {Percy Liang and
Rishi Bommasani and
Tony Lee and
Dimitris Tsipras and
Dilara Soylu and
Michihiro Yasunaga and
Yian Zhang and
Deepak Narayanan and
Yuhuai Wu and
Ananya Kumar and
Benjamin Newman and
Binhang Yuan and
Bobby Yan and
Ce Zhang and
Christian Cosgrove and
Christopher D. Manning and
Christopher R{\'{e}} and
Diana Acosta{-}Navas and
Drew A. Hudson and
Eric Zelikman and
Esin Durmus and
Faisal Ladhak and
Frieda Rong and
Hongyu Ren and
Huaxiu Yao and
Jue Wang and
Keshav Santhanam and
Laurel J. Orr and
Lucia Zheng and
Mert Y{\"{u}}ksekg{\"{o}}n{\"{u}}l and
Mirac Suzgun and
Nathan Kim and
Neel Guha and
Niladri S. Chatterji and
Omar Khattab and
Peter Henderson and
Qian Huang and
Ryan Chi and
Sang Michael Xie and
Shibani Santurkar and
Surya Ganguli and
Tatsunori Hashimoto and
Thomas Icard and
Tianyi Zhang and
Vishrav Chaudhary and
William Wang and
Xuechen Li and
Yifan Mai and
Yuhui Zhang and
Yuta Koreeda},
title = {Holistic Evaluation of Language Models},
journal = {CoRR},
volume = {abs/2211.09110},
year = {2022},
url = {https://doi.org/10.48550/arXiv.2211.09110},
doi = {10.48550/arXiv.2211.09110},
eprinttype = {arXiv},
eprint = {2211.09110},
timestamp = {Wed, 23 Nov 2022 18:03:56 +0100},
biburl = {https://dblp.org/rec/journals/corr/abs-2211-09110.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
### Contributions
[More Information Needed] |
jeffnyman/emotions | 2023-07-29T18:10:20.000Z | [
"task_categories:text-classification",
"task_ids:multi-class-classification",
"size_categories:10K<n<100K",
"language:en",
"license:cc-by-sa-4.0",
"emotion-classification",
"region:us"
] | jeffnyman | Emotion is a dataset of English Twitter messages with six basic emotions:
anger, fear, joy, love, sadness, and surprise. For more detailed information
please refer to the paper. | @inproceedings{saravia-etal-2018-carer,
title = "{CARER}: Contextualized Affect Representations for Emotion Recognition",
author = "Saravia, Elvis and
Liu, Hsien-Chi Toby and
Huang, Yen-Hao and
Wu, Junlin and
Chen, Yi-Shin",
booktitle = "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing",
month = oct # "-" # nov,
year = "2018",
address = "Brussels, Belgium",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/D18-1404",
doi = "10.18653/v1/D18-1404",
pages = "3687--3697",
abstract = "Emotions are expressed in nuanced ways, which varies by collective or individual experiences, knowledge, and beliefs. Therefore, to understand emotion, as conveyed through text, a robust mechanism capable of capturing and modeling different linguistic nuances and phenomena is needed. We propose a semi-supervised, graph-based algorithm to produce rich structural descriptors which serve as the building blocks for constructing contextualized affect representations from text. The pattern-based representations are further enriched with word embeddings and evaluated through several emotion recognition tasks. Our experimental results demonstrate that the proposed method outperforms state-of-the-art techniques on emotion recognition tasks.",
} | null | 0 | 12 | ---
pretty_name: Emotions
license: cc-by-sa-4.0
language:
- en
size_categories:
- 10K<n<100K
task_categories:
- text-classification
task_ids:
- multi-class-classification
tags:
- emotion-classification
dataset_info:
- config_name: split
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
"0": sadness
"1": joy
"2": love
"3": anger
"4": fear
"5": surprise
splits:
- name: train
num_bytes: 1741597
num_examples: 16000
- name: validation
num_bytes: 214703
num_examples: 2000
- name: test
num_bytes: 217181
num_examples: 2000
download_size: 740883
dataset_size: 2173481
- config_name: unsplit
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
"0": sadness
"1": joy
"2": love
"3": anger
"4": fear
"5": surprise
splits:
- name: train
num_bytes: 45445685
num_examples: 416809
download_size: 15388281
dataset_size: 45445685
train-eval-index:
- config: default
task: text-classification
task_id: multi_class_classification
splits:
train_split: train
eval_split: test
col_mapping:
text: text
label: target
metrics:
- type: accuracy
name: Accuracy
- type: f1
name: F1 macro
args:
average: macro
- type: f1
name: F1 micro
args:
average: micro
- type: f1
name: F1 weighted
args:
average: weighted
- type: precision
name: Precision macro
args:
average: macro
- type: precision
name: Precision micro
args:
average: micro
- type: precision
name: Precision weighted
args:
average: weighted
- type: recall
name: Recall macro
args:
average: macro
- type: recall
name: Recall micro
args:
average: micro
- type: recall
name: Recall weighted
args:
average: weighted
---
# Dataset Card for "emotions"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Additional Information](#additional-information)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Paper:** [CARER: Contextualized Affect Representations for Emotion Recognition](https://aclanthology.org/D18-1404/)
- **Size of downloaded dataset files:** 16.13 MB
- **Size of the generated dataset:** 47.62 MB
- **Total amount of disk used:** 63.75 MB
### Dataset Summary
Emotions is a dataset of English Twitter messages with six basic emotions: anger, fear, joy, love, sadness, and surprise. For more detailed information, please refer to the paper. Note that the paper also describes a larger dataset in which eight emotions are considered.
## Dataset Structure
### Data Instances
An example bit of data looks like this:
```
{
"text": "im feeling quite sad and sorry for myself but ill snap out of it soon",
"label": 0
}
```
### Data Fields
The data fields are:
- `text`: a `string` feature.
- `label`: a classification label, with possible values including `sadness` (0), `joy` (1), `love` (2), `anger` (3), `fear` (4), `surprise` (5).
### Data Splits
The dataset has two configurations.
- split: with a total of 20,000 examples split into train, validation and test.
- unsplit: with a total of 416,809 examples in a single train split.
| name | train | validation | test |
| ------- | -----: | ---------: | ---: |
| split | 16000 | 2000 | 2000 |
| unsplit | 416809 | n/a | n/a |
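Assuming the dataset is published on the Hugging Face Hub under an ID such as `dair-ai/emotion` (the exact repository ID is an assumption here), a minimal loading sketch for the two configurations could look like this:
```python
from datasets import load_dataset

# Repository ID below is an assumption; substitute the actual Hub ID if it differs.
split_ds = load_dataset("dair-ai/emotion", "split")      # train / validation / test
unsplit_ds = load_dataset("dair-ai/emotion", "unsplit")  # single train split

print(split_ds["train"][0])  # {'text': '...', 'label': 0}
label_names = split_ds["train"].features["label"].names
print(label_names)           # ['sadness', 'joy', 'love', 'anger', 'fear', 'surprise']
```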
## Additional Information
### Licensing Information
The dataset should be used for educational and research purposes only. It is licensed under Attribution-ShareAlike 4.0 International (CC BY-SA 4.0).
### Citation Information
If you use this dataset, please cite:
```
@inproceedings{saravia-etal-2018-carer,
title = "{CARER}: Contextualized Affect Representations for Emotion Recognition",
author = "Saravia, Elvis and
Liu, Hsien-Chi Toby and
Huang, Yen-Hao and
Wu, Junlin and
Chen, Yi-Shin",
booktitle = "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing",
month = oct # "-" # nov,
year = "2018",
address = "Brussels, Belgium",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/D18-1404",
doi = "10.18653/v1/D18-1404",
pages = "3687--3697",
abstract = "Emotions are expressed in nuanced ways, which varies by collective or individual experiences, knowledge, and beliefs. Therefore, to understand emotion, as conveyed through text, a robust mechanism capable of capturing and modeling different linguistic nuances and phenomena is needed. We propose a semi-supervised, graph-based algorithm to produce rich structural descriptors which serve as the building blocks for constructing contextualized affect representations from text. The pattern-based representations are further enriched with word embeddings and evaluated through several emotion recognition tasks. Our experimental results demonstrate that the proposed method outperforms state-of-the-art techniques on emotion recognition tasks.",
}
```
|
OrionZheng/MNBVC_ready | 2023-08-26T02:48:00.000Z | [
"task_categories:text-generation",
"task_categories:fill-mask",
"task_ids:language-modeling",
"task_ids:masked-language-modeling",
"annotations_creators:other",
"language_creators:other",
"multilinguality:monolingual",
"size_categories:unknown",
"source_datasets:original",
"language:zh",
"licens... | OrionZheng | MNBVC: Massive Never-ending BT Vast Chinese corpus | \ | null | 0 | 12 | ---
annotations_creators:
- other
language:
- zh
language_creators:
- other
license:
- mit
multilinguality:
- monolingual
pretty_name: MNBVC_ready
size_categories:
- unknown
source_datasets:
- original
task_categories:
- text-generation
- fill-mask
task_ids:
- language-modeling
- masked-language-modeling
---
## Dataset Description
- **Homepage:** http://mnbvc.253874.net/
- **Repository:** https://github.com/esbatmop/MNBVC
- **Paper:** N/A
- **Leaderboard:** N/A
- **Point of Contact:** N/A
(Work in progress) This repository aims to process several of the high-quality subsets already published on Hugging Face into plain-text format, so that they can be used directly when training models.
The text data in the original MNBVC repository is not organized as plain text that can be used directly for training. Instead, each line of text within a file is stored separately (to make deduplication and similar operations easier) in a jsonl format, as shown below:
```json
{
"文件名": datasets.Value("string"),
"是否待查文件": datasets.Value("bool"),
"是否重复文件": datasets.Value("bool"),
"文件大小": datasets.Value("int32"),
"simhash": datasets.Value("uint64"),
"最长段落长度": datasets.Value("int32"),
"段落数": datasets.Value("int32"),
"去重段落数": datasets.Value("int32"),
"低质量段落数": datasets.Value("int32"),
"段落": [
datasets.Features(
{
"行号": datasets.Value("int32"),
"是否重复": datasets.Value("bool"),
"是否跨文件重复": datasets.Value("bool"),
"md5": datasets.Value("string"),
"内容": datasets.Value("string"),
}
)
]
}
```
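As an illustration of the plain-text conversion this repository aims to provide, the sketch below rebuilds a single plain-text document from one record in the jsonl format above by sorting paragraphs by `行号` and concatenating their `内容`; skipping duplicate paragraphs and the file name are assumptions for illustration, not rules from the original repository:
```python
import json

def record_to_plain_text(line: str) -> str:
    # Parse one jsonl record and rebuild the original document as plain text.
    record = json.loads(line)
    paragraphs = sorted(record["段落"], key=lambda p: p["行号"])
    # Assumption: drop paragraphs flagged as duplicates, keep everything else.
    kept = [p["内容"] for p in paragraphs if not p["是否重复"]]
    return "\n".join(kept)

# Hypothetical file name, for illustration only.
with open("example.jsonl", encoding="utf-8") as f:
    texts = [record_to_plain_text(line) for line in f]
```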
### Q&A data
Q&A data is organized in the following format:
```json
{
"id": datasets.Value("int32"),
"问": datasets.Value("string"),
"答": datasets.Value("string"),
"来源": datasets.Value("string"),
"元数据": {
"create_time": datasets.Value("string"),
"问题明细": datasets.Value("string"),
"回答明细": datasets.Value("string"),
"扩展字段": datasets.Value("string"),
}
}
```
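A corresponding sketch for the Q&A format, reading each record into a (question, answer, source) tuple, again assuming one JSON object per line and a hypothetical file name:
```python
import json

with open("qa_example.jsonl", encoding="utf-8") as f:
    qa_triples = [(r["问"], r["答"], r["来源"]) for r in map(json.loads, f)]
print(qa_triples[0])
```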
Data uploaded in the early stages of the project used the following format; this format will be deprecated, and the corresponding data will be re-uploaded:
```json
{
"text": datasets.Value("string"),
"meta": datasets.Value("string")
}
``` |
HydraLM/partitioned_v3_standardized_02 | 2023-08-01T17:59:45.000Z | [
"region:us"
] | HydraLM | null | null | null | 0 | 12 | ---
dataset_info:
features:
- name: message
dtype: string
- name: message_type
dtype: string
- name: message_id
dtype: int64
- name: conversation_id
dtype: int64
- name: dataset_id
dtype: string
- name: unique_id
dtype: string
splits:
- name: train
num_bytes: 44285832.42473647
num_examples: 82359
download_size: 22082643
dataset_size: 44285832.42473647
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "partitioned_v3_standardized_02"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
plncmm/spanish-alpaca | 2023-08-01T19:46:33.000Z | [
"region:us"
] | plncmm | null | null | null | 0 | 12 | ---
dataset_info:
features:
- name: instruction
dtype: string
- name: input
dtype: string
- name: output
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 51456148
num_examples: 51942
download_size: 26649183
dataset_size: 51456148
---
# Dataset Card for "spanish-alpaca"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
pourmand1376/isna-news | 2023-08-19T11:56:01.000Z | [
"task_categories:text-generation",
"size_categories:1M<n<10M",
"language:fa",
"license:apache-2.0",
"region:us"
] | pourmand1376 | null | null | null | 0 | 12 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: TEXT
dtype: string
- name: SOURCE
dtype: string
- name: METADATA
dtype: string
splits:
- name: train
num_bytes: 8078800930
num_examples: 2104859
download_size: 2743795907
dataset_size: 8078800930
license: apache-2.0
task_categories:
- text-generation
language:
- fa
pretty_name: Isna News
size_categories:
- 1M<n<10M
---
# Dataset Card for "isna-news"
This is a converted version of [Isna-news](https://www.kaggle.com/datasets/amirpourmand/isna-news), reformatted to comply with Open-Assistant standards.
The `METADATA` column contains:
- title
- link: short link to news
- language: fa
- jalali-time: time in jalali calendar
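Since `METADATA` is stored as a string, parsing it is left to the user; assuming it is JSON-encoded (an assumption, not confirmed by this card), a minimal sketch would be:
```python
import json
from datasets import load_dataset

# Streaming avoids downloading the full ~8 GB split up front.
ds = load_dataset("pourmand1376/isna-news", split="train", streaming=True)

example = next(iter(ds))
meta = json.loads(example["METADATA"])  # assumption: METADATA is a JSON string
print(meta.get("title"), meta.get("link"), meta.get("jalali-time"))
```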
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
jxie/lipop | 2023-08-04T22:25:41.000Z | [
"region:us"
] | jxie | null | null | null | 0 | 12 | ---
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype: float64
splits:
- name: train_0
num_bytes: 200193
num_examples: 3360
- name: val_0
num_bytes: 24928
num_examples: 420
- name: test_0
num_bytes: 24770
num_examples: 420
- name: train_1
num_bytes: 199909
num_examples: 3360
- name: val_1
num_bytes: 25212
num_examples: 420
- name: test_1
num_bytes: 24770
num_examples: 420
- name: train_2
num_bytes: 200080
num_examples: 3360
- name: val_2
num_bytes: 24726
num_examples: 420
- name: test_2
num_bytes: 25085
num_examples: 420
download_size: 387383
dataset_size: 749673
---
# Dataset Card for "lipop"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
jxie/bace | 2023-08-04T22:25:50.000Z | [
"region:us"
] | jxie | null | null | null | 0 | 12 | ---
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype: int64
splits:
- name: train_0
num_bytes: 91921
num_examples: 1210
- name: val_0
num_bytes: 11796
num_examples: 151
- name: test_0
num_bytes: 13118
num_examples: 152
- name: train_1
num_bytes: 91921
num_examples: 1210
- name: val_1
num_bytes: 11796
num_examples: 151
- name: test_1
num_bytes: 13118
num_examples: 152
- name: train_2
num_bytes: 91921
num_examples: 1210
- name: val_2
num_bytes: 11796
num_examples: 151
- name: test_2
num_bytes: 13118
num_examples: 152
download_size: 118857
dataset_size: 350505
---
# Dataset Card for "bace"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
jxie/hiv | 2023-08-04T22:26:08.000Z | [
"region:us"
] | jxie | null | null | null | 0 | 12 | ---
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype: int64
splits:
- name: train_0
num_bytes: 1869578
num_examples: 32901
- name: val_0
num_bytes: 256545
num_examples: 4113
- name: test_0
num_bytes: 232200
num_examples: 4113
- name: train_1
num_bytes: 1869578
num_examples: 32901
- name: val_1
num_bytes: 256545
num_examples: 4113
- name: test_1
num_bytes: 232200
num_examples: 4113
- name: train_2
num_bytes: 1869578
num_examples: 32901
- name: val_2
num_bytes: 256545
num_examples: 4113
- name: test_2
num_bytes: 232200
num_examples: 4113
download_size: 2758764
dataset_size: 7074969
---
# Dataset Card for "hiv"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
TariqJamil/guanaco-llama2-1k | 2023-08-05T13:09:17.000Z | [
"region:us"
] | TariqJamil | null | null | null | 0 | 12 | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 1655208
num_examples: 1000
download_size: 966969
dataset_size: 1655208
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "guanaco-llama2-1k"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
loubnabnl/code_reviews_sample | 2023-08-08T10:55:29.000Z | [
"region:us"
] | loubnabnl | null | null | null | 0 | 12 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: pull_request_info
struct:
- name: public
dtype: bool
- name: pull_request.body
dtype: string
- name: pull_request.closed_at
dtype: string
- name: pull_request.created_at
dtype: string
- name: pull_request.id
dtype: int64
- name: pull_request.merged_at
dtype: string
- name: pull_request.number
dtype: int64
- name: pull_request.state
dtype: string
- name: pull_request.title
dtype: string
- name: pull_request.user.id
dtype: float64
- name: pull_request.user.login
dtype: string
- name: repo.name
dtype: string
- name: head_repo_info
struct:
- name: pull_request.head.label
dtype: string
- name: pull_request.head.ref
dtype: string
- name: pull_request.head.repo.license.name
dtype: string
- name: pull_request.head.repo.owner.login
dtype: string
- name: pull_request.head.repo.owner.type
dtype: string
- name: pull_request.head.sha
dtype: string
- name: pull_request.head.user.login
dtype: string
- name: pull_request.head.user.type
dtype: string
- name: base_repo_info
struct:
- name: pull_request.base.label
dtype: string
- name: pull_request.base.ref
dtype: string
- name: pull_request.base.repo.default_branch
dtype: string
- name: pull_request.base.repo.description
dtype: string
- name: pull_request.base.repo.forks_count
dtype: int64
- name: pull_request.base.repo.language
dtype: string
- name: pull_request.base.repo.license.name
dtype: string
- name: pull_request.base.repo.open_issues_count
dtype: int64
- name: pull_request.base.repo.owner.login
dtype: string
- name: pull_request.base.repo.owner.type
dtype: string
- name: pull_request.base.repo.watchers_count
dtype: int64
- name: pull_request.base.sha
dtype: string
- name: pull_request.base.user.login
dtype: string
- name: pull_request.base.user.type
dtype: string
- name: events
list:
- name: action
dtype: string
- name: actor.id
dtype: float64
- name: actor.login
dtype: string
- name: bucket
dtype: string
- name: comment.author_association
dtype: string
- name: comment.body
dtype: string
- name: comment.commit_id
dtype: string
- name: comment.created_at
dtype: string
- name: comment.diff_hunk
dtype: string
- name: comment.id
dtype: float64
- name: comment.in_reply_to_id
dtype: float64
- name: comment.line
dtype: float64
- name: comment.original_commit_id
dtype: string
- name: comment.original_line
dtype: float64
- name: comment.original_position
dtype: float64
- name: comment.original_start_line
dtype: float64
- name: comment.path
dtype: string
- name: comment.position
dtype: float64
- name: comment.pull_request_review_id
dtype: float64
- name: comment.side
dtype: string
- name: comment.start_line
dtype: float64
- name: comment.start_side
dtype: string
- name: comment.updated_at
dtype: string
- name: created_at
dtype: string
- name: org.id
dtype: float64
- name: org.login
dtype: string
- name: pull_request.head.label
dtype: string
- name: pull_request.head.ref
dtype: string
- name: pull_request.head.sha
dtype: string
- name: pull_request.title
dtype: string
- name: review.author_association
dtype: string
- name: review.body
dtype: string
- name: review.commit_id
dtype: string
- name: review.id
dtype: float64
- name: review.state
dtype: string
- name: review.submitted_at
dtype: string
- name: type
dtype: string
- name: user.id
dtype: float64
- name: user.login
dtype: string
- name: user.type
dtype: string
splits:
- name: train
num_bytes: 318967665
num_examples: 35937
download_size: 85061035
dataset_size: 318967665
---
# Dataset Card for "code_reviews_sample"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
wwydmanski/biodataome | 2023-08-10T08:31:47.000Z | [
"task_categories:tabular-classification",
"size_categories:n<1k",
"size_categories:1K<n<10K",
"license:afl-3.0",
"biology",
"region:us"
] | wwydmanski | null | null | 2 | 12 | ---
license: afl-3.0
task_categories:
- tabular-classification
pretty_name: BioDataome
size_categories:
- n<1k
- 1K<n<10K
tags:
- biology
---
# BioDataome
This is an aggregate dataset which allows you to download any and all data from the [BioDataome project](http://dataome.mensxmachina.org/).
## What is BioDataome?
BioDataome is a collection of uniformly preprocessed and automatically annotated datasets for data-driven biology. The processed data can be accessed in .csv format via the BioDataome website, and the BioDataome package is available on GitHub. The BioDataome package contains all the functions used to download, preprocess, and annotate gene expression and methylation microarray data from Gene Expression Omnibus, as well as RNA-Seq data from recount.
## Usage
```python
import datasets
ds = datasets.load_dataset("wwydmanski/biodataome", "GSE24849")['train']
split_ds = ds.train_test_split(test_size=0.1)
train_ds, test_ds = split_ds['train'], split_ds['test']
# there is probably a better way to do this, but this seems to work the fastest
y_train = train_ds.to_pandas()['metadata'].apply(lambda x: x['class'])
X_train = pd.DataFrame.from_records(train_ds.to_pandas()['data'])
y_test = test_ds.to_pandas()['metadata'].apply(lambda x: x['class'])
X_test = pd.DataFrame.from_records(test_ds.to_pandas()['data'])
```
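Since the dataset targets tabular classification, a natural follow-up (a sketch only, reusing the frames built above; the model choice is arbitrary, not a recommendation) is to fit a scikit-learn classifier:
```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score

# Hyperparameters are illustrative defaults, not tuned values.
clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_train, y_train)
print("test accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```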
Please refer to the [original metadata](http://dataome.mensxmachina.org/) for the list of available datasets.
## Disclaimer
BioDataome and its content are provided as is without any warranty of any kind, that BioDataome or any documents available from this server will be error free. In no event will its members be liable for any damages, arising out of, resulting from, or in any way connected with the use of BioDataome or documents available from this server.
BioDataome is restricted to research and educational use. The information you may retrieve and recover from BioDataome is not designed to diagnose, prevent, or treat any condition or disease
Part of research that led to the development of BioDataome has received funding from the European Research Council under the European Union's Seventh Framework Programme (FP/2007-2013) / ERC Grant Agreement n. 617393.
Part of the analyses results and the implementation of the web interface were funded by the “ELIXIR-GR: Managing and Analysing Life Sciences Data (MIS: 5002780)” Project, co-financed by Greece and the European Union - European Regional Development Fund. | |
Falah/arabic_magical_fantasy_prompts_sdxl_refiner | 2023-08-13T08:11:24.000Z | [
"region:us"
] | Falah | null | null | null | 0 | 12 | ---
dataset_info:
features:
- name: prompts
dtype: string
splits:
- name: train
num_bytes: 1615984596
num_examples: 2000000
download_size: 172865349
dataset_size: 1615984596
---
# Dataset Card for "arabic_magical_fantasy_prompts_sdxl_refiner"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Warlord-K/parti-prompts-sdxl-1.0 | 2023-08-12T06:29:47.000Z | [
"region:us"
] | Warlord-K | null | null | null | 0 | 12 | ---
dataset_info:
features:
- name: Prompt
dtype: string
- name: Category
dtype: string
- name: Challenge
dtype: string
- name: Note
dtype: string
- name: model_name
dtype: string
- name: seed
dtype: int64
- name: images
dtype: image
splits:
- name: train
num_bytes: 2617808054.24
num_examples: 1632
download_size: 2616607357
dataset_size: 2617808054.24
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "parti-promtps-sdxl-1.0"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
FreedomIntelligence/sharegpt-korean | 2023-08-13T16:46:20.000Z | [
"license:apache-2.0",
"region:us"
] | FreedomIntelligence | null | null | null | 0 | 12 | ---
license: apache-2.0
---
Korean ShareGPT data translated by gpt-3.5-turbo.
The dataset is used in the research related to [MultilingualSIFT](https://github.com/FreedomIntelligence/MultilingualSIFT). |
amitness/logits-arabic-512 | 2023-09-21T19:14:54.000Z | [
"region:us"
] | amitness | null | null | null | 0 | 12 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: input_ids
sequence: int32
- name: token_type_ids
sequence: int8
- name: attention_mask
sequence: int8
- name: labels
sequence: int64
- name: teacher_logits
sequence:
sequence: float64
- name: teacher_indices
sequence:
sequence: int64
- name: teacher_mask_indices
sequence: int64
splits:
- name: train
num_bytes: 19256694548
num_examples: 1059535
download_size: 6841674965
dataset_size: 19256694548
---
# Dataset Card for "logits-arabic-512"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Deepakvictor/tanglish-tamil | 2023-08-15T12:44:47.000Z | [
"task_categories:translation",
"task_categories:text-classification",
"size_categories:1K<n<10K",
"language:ta",
"language:en",
"license:openrail",
"region:us"
] | Deepakvictor | null | null | null | 0 | 12 | ---
license: openrail
task_categories:
- translation
- text-classification
language:
- ta
- en
size_categories:
- 1K<n<10K
---
Translation of Tanglish to Tamil.
Source: karky.in
To use:
```python
import datasets
s = datasets.load_dataset('Deepakvictor/tanglish-tamil')
print(s)
"""DatasetDict({
train: Dataset({
features: ['Movie', 'FileName', 'Song', 'Tamillyrics', 'Tanglishlyrics', 'Mood', 'Genre'],
num_rows: 597
})
})"""
```
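For example, Tanglish-to-Tamil translation pairs can be built from the columns shown above (a minimal sketch, reusing `s` from the snippet above):
```python
# Build (tanglish, tamil) lyric pairs from the loaded dataset.
pairs = [(row["Tanglishlyrics"], row["Tamillyrics"]) for row in s["train"]]
print(pairs[0][0][:100])  # Tanglish source
print(pairs[0][1][:100])  # Tamil target
```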
Credits and Source: https://karky.in/
---
For a simpler version, see the dataset "Deepakvictor/tan-tam".
ceadar-ie/AIVision360-8k | 2023-08-17T22:04:53.000Z | [
"task_categories:question-answering",
"task_categories:conversational",
"task_categories:text-generation",
"size_categories:1K<n<10K",
"language:en",
"license:apache-2.0",
"LLM",
"Generative AI",
"Finetune",
"Domain Specific Data",
"doi:10.57967/hf/0998",
"region:us"
] | ceadar-ie | null | null | null | 2 | 12 | ---
license: apache-2.0
task_categories:
- question-answering
- conversational
- text-generation
language:
- en
tags:
- LLM
- Generative AI
- Finetune
- Domain Specific Data
size_categories:
- 1K<n<10K
---
# Dataset Card for AIVision360-8k
## Dataset Description
AIVision360 is the pioneering domain-specific dataset tailor-made for media and journalism, designed expressly for the instruction fine-tuning of Large Language Models (LLMs).\
The AIVision360-8k dataset is a curated collection sourced from "ainewshub.ie", a platform dedicated to Artificial Intelligence news from quality-controlled publishers. It is designed to provide a comprehensive representation of AI-related discussions, highlighting current developments and trends in the field. Each entry in the dataset contains three columns: "question", "response", and "context". These columns offer a structured view of AI news interactions, where the "question" and "response" provide insights on AI subjects, and the "context" column gives additional background information.
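As an illustration, here is a hedged sketch of turning the three columns into prompts for instruction fine-tuning; the split name and the prompt template are assumptions, not formats prescribed by the dataset authors.
```python
from datasets import load_dataset

ds = load_dataset("ceadar-ie/AIVision360-8k", split="train")  # assumes a single train split

def to_prompt(example):
    # Assumed prompt layout for instruction tuning; adapt to your training setup.
    return {
        "prompt": (
            f"Context: {example['context']}\n\n"
            f"Question: {example['question']}\n\n"
            f"Answer: {example['response']}"
        )
    }

formatted = ds.map(to_prompt)
print(formatted[0]["prompt"][:300])
```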
### Key Features
• Domain Specificity: The dataset is focused on AI news, catering to researchers, developers, and specialists in the domain.\
• Source Reliability: Data is sourced from established publishers featured on "ainewshub.ie", ensuring content reliability.\
• Licensing: It is distributed under the Apache 2.0 open-source license, facilitating its use and modification.\
• Accessibility: Intended for public use to support collaboration and analysis in the AI community.\
• Volume: Contains over 8,000 entries, making it a significant resource for AI news analysis.
### Intended Use Cases
• Model Training: Suitable for training language models, enhancing their capacity in AI news discussions.\
• Research: Useful for AI trend analysis, sentiment analysis, and linguistic pattern study.
### Limitations
• Despite careful curation, potential biases from AI news sources may persist in the dataset.\
• Its focus is on AI news, which may reflect specific perspectives of this niche.
## Language
English
### Data Privacy
The dataset comprises publicly available news articles and does not include private identifiers or sensitive information.
### License/Attribution
Copyright © 2023 CeADAR Connect Group. Developed by CeADAR (ceadar.ie); its use is governed by the Apache 2.0 license.
### Sources
Curated exclusively from ainewshub.ie, a recognized platform for AI news.
## Annotator Guidelines
• Question: Represents a query derived from the news article.\
• Response: Provides an answer based on the article's content.\
• Context: Offers background information for the query-answer pair.
### Feedback
For any questions or feedback related to the dataset, please direct your communications to ahtsham.zafar@ucd.ie
### Disclaimer
This dataset is provided "as is" without any guarantees or warranty. Although the data has been processed with care, CeADAR Connect Group is not responsible for any errors, omissions, or discrepancies within the data. Users are advised to use this dataset at their discretion and assume any risks associated with its use. |
Photolens/MedText-llama-2 | 2023-08-19T18:26:13.000Z | [
"license:cc-by-4.0",
"region:us"
] | Photolens | null | null | null | 5 | 12 | ---
license: cc-by-4.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 971728
num_examples: 1412
download_size: 499669
dataset_size: 971728
---
This is the shuffled version of medtext_1, so the datapoints are in random order and not sorted by category. This is to prevent catastrophic forgetting by category.
This is a medical diagnosis dataset containing over 1000 top-notch, textbook-quality patient presentations and diagnoses/treatments. The 100 most common diseases and the 30 most common injuries that bring people to the hospital are, among others, fully captured in the dataset, with multiple datapoints for each, ranging from mild to complicated to severe (full list below). The dataset also contains completions about the nature of the AI itself, stating that it can never replace a doctor and always emphasizing that the user should see a professional, as well as some nonsensical or doubtful presentations. A model trained on this dataset explicitly says when it CANNOT answer with confidence or when the presentation is insufficient. This is to prevent hallucinations.
Medtext is a free to use (CC BY 4.0) dataset of over 1000 patient presentations and their diagnosis/treatment plans.
This is original data, converted into uniform datapoints using GPT-4.
We then pulled 10 random examples from the dataset and showed them to 3 different doctors, 2 of them involved and 1 uninvolved, and they all categorized the quality as "textbook quality".
Its content includes:
NOISE/DATA POLLUTION
*Dismissing of non-medical or non-psychological issues
*specifically asking for more information / admitting no possible diagnosis with confidence if insufficient data
*conflicting/contradicting and irrelevant information
*cases where symptoms are misleading to seemingly obvious diagnosis but actually being something different
*information about the model (What are you? What can you do? Are you able to replace a doctor? This is to make the model humble and always emphasize that it can never replace a professional and it is just there to do some substitute analysis)
MISC
*emergency cases / first aid / almost fatal injuries that require emergency surgery
*injuries from crimes
*sexual injuries and STDs
*Infant specific cases
*Gynecological and urological cases
*genetic anomalies
*Previous medical mishandling
*Abuse/Overdosing/Misuse of drugs
*Cross side effects of drugs
ANALYSIS
*Textual analysis of blood tests, ultrasound, CT, MRI and X-ray examinations.
INJURIES:
* Sprains and strains
* Fractures
* Contusions (bruises)
* Cuts and lacerations
* Concussions
* Burns
* Dislocations
* Abrasions (scrapes)
* Whiplash injuries
* Eye injuries
* Puncture wounds
* Bites and stings
* Back injuries
* Broken nose
* Knee injuries
* Ankle injuries
* Shoulder injuries
* Wrist injuries
* Chest injuries
* Head injuries
DISEASES:
* Acne
* Allergies
* Alzheimer's Disease
* Anemia
* Angina
* Anxiety Disorders
* Arthritis
* Asthma
* Atherosclerosis
* Athlete's Foot
* Attention Deficit Hyperactivity Disorder (ADHD)
* Autism Spectrum Disorder
* Back Pain
* Bipolar Disorder
* Bronchitis
* Cataracts
* Chickenpox
* Chronic Obstructive Pulmonary Disease (COPD)
* Common Cold
* Conjunctivitis (Pink Eye)
* Constipation
* Coronary Heart Disease
* Cystitis
* Dementia
* Depression
* Diabetes Type 1
* Diabetes Type 2
* Diarrhea
* Diverticulitis
* Dizziness (Vertigo)
* Ear Infections
* Eczema
* Endometriosis
* Erectile Dysfunction
* Fibromyalgia
* Flu (Influenza)
* Food Poisoning
* Gallstones
* Gastroenteritis
* Gastroesophageal Reflux Disease (GERD)
* Gout
* Hay Fever (Allergic Rhinitis)
* Headaches
* Heart Failure
* Hemorrhoids
* Hepatitis B
* Hepatitis C
* Herpes Simplex Virus (HSV)
* High Blood Pressure (Hypertension)
* High Cholesterol (Hypercholesterolemia)
* HIV/AIDS
* Hyperthyroidism (Overactive Thyroid)
* Hypothyroidism (Underactive Thyroid)
* Inflammatory Bowel Disease (Including Crohn's and Ulcerative Colitis)
* Insomnia
* Iron Deficiency Anemia
* Irritable Bowel Syndrome (IBS)
* Kidney Stones
* Lactose Intolerance
* Lyme Disease
* Macular Degeneration
* Malaria
* Menopause
* Migraine
* Multiple Sclerosis
* Obesity
* Osteoarthritis
* Osteoporosis
* Otitis Media (Middle Ear Infection)
* Pancreatitis
* Parkinson's Disease
* Peptic Ulcers
* Periodontal Disease
* Pneumonia
* Polycystic Ovary Syndrome (PCOS)
* Prostate Enlargement (Benign Prostatic Hyperplasia)
* Psoriasis
* Pulmonary Embolism
* Restless Legs Syndrome
* Rheumatoid Arthritis
* Rosacea
* Schizophrenia
* Sciatica
* Scoliosis
* Seasonal Affective Disorder (SAD)
* Sinusitis
* Skin Cancer
* Sleep Apnea
* Strokes
* Tendonitis
* Tonsillitis
* Tuberculosis
* Urinary Tract Infection (UTI)
* Varicose Veins
* Vitiligo
* Yeast Infection (Candidiasis)
* Zika Virus
# Dataset card from [BI55/MedText](https://huggingface.co/datasets/BI55/MedText) |
Matias12f/cats_and_dogs | 2023-08-20T01:38:22.000Z | [
"license:apache-2.0",
"region:us"
] | Matias12f | null | null | null | 0 | 12 | ---
license: apache-2.0
---
|
zake7749/chinese-speech-corpus | 2023-08-30T16:19:14.000Z | [
"task_categories:conversational",
"size_categories:1K<n<10K",
"language:zh",
"license:cc",
"region:us"
] | zake7749 | null | null | null | 0 | 12 | ---
language:
- zh
license: cc
size_categories:
- 1K<n<10K
task_categories:
- conversational
dataset_info:
features:
- name: sentences
list:
- name: speaker
dtype: string
- name: speech
dtype: string
- name: source_url
dtype: string
splits:
- name: train
num_bytes: 77964319
num_examples: 1739
download_size: 43895652
dataset_size: 77964319
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
pretty_name: s
---
# Chinese Speech Corpus
This dataset has been sourced from [SayIt](https://sayit.pdis.nat.gov.tw/), a website dedicated to preserving transcripts and meeting notes. It currently comprises 1,739 dialogues, encompassing approximately 340,000 sentences along with their respective speakers.
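Based on the schema in the metadata above (each example carries a `sentences` list of `{speaker, speech}` turns plus a `source_url`), a minimal loading sketch might be:
```python
from datasets import load_dataset

ds = load_dataset("zake7749/chinese-speech-corpus", split="train")

dialogue = ds[0]
print(dialogue["source_url"])
for turn in dialogue["sentences"][:5]:
    print(f'{turn["speaker"]}: {turn["speech"]}')
```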
## License
[CC0 License](https://creativecommons.org/share-your-work/public-domain/cc0/) |