id stringlengths 2 115 | lastModified stringlengths 24 24 | tags list | author stringlengths 2 42 ⌀ | description stringlengths 0 68.7k ⌀ | citation stringlengths 0 10.7k ⌀ | cardData null | likes int64 0 3.55k | downloads int64 0 10.1M | card stringlengths 0 1.01M |
|---|---|---|---|---|---|---|---|---|---|
heegyu/namuwiki | 2022-10-01T02:40:40.000Z | [
"task_categories:other",
"language_creators:other",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"language:ko",
"license:cc-by-nc-sa-2.0",
"region:us"
] | heegyu | null | null | null | 2 | 9 | ---
license: cc-by-nc-sa-2.0
language:
- ko
language_creators:
- other
multilinguality:
- monolingual
size_categories:
- 100K<n<1M
task_categories:
- other
---
# namu.wiki database dump
Dump of the https://namu.wiki/ database, taken 2022-03-01.<br/>
- 867,024 rows
- download size: 3 GB
## Usage
```bash
pip install datasets
```
```python
from datasets import load_dataset
dataset = load_dataset("heegyu/namuwiki")
print(dataset["train"][0])
```
```
{'title': '!!아앗!!',
'text': '\n[목차]\n\n\'\'\'{{{+1 !!ああっと!!}}}\'\'\'\n\n== 개요 ==\n[[파일:3444050440.jpg|width=60%]]\n▲[[신 세계수의 미궁 2 파프니르기사|신 세계수의 미궁 2]]에서 뜬 !!아앗!!\n\n[[세계수의 미궁 시리즈]]에 전통으로 등장하는 대사. [[세계수의 미궁 2 제왕의 성배|2편]]부터 등장했으며 훌륭한 [[사망 플래그]]의 예시이다.\n\n세계수의 모험가들이 탐험하는 던전인 수해의 구석구석에는 채취/벌채/채굴 포인트가 있으며, 이를 위한 채집 스킬에 투자하면 제한된 채집 기회에서 보다 큰 이득을 챙길 수 있다. 그러나 분배할 수 있는 스킬 포인트는 한정되어 있기 때문에 채집 스킬에 투자하는 만큼 전투 스킬 레벨은 낮아지게 된다.[* 다만 채집 시스템은 신 세계수 시리즈의 그리모어 복제, 복합 채집 스킬인 야생의 감, 5편의 종족 특유 스킬, 크로스의 1레벨이 만렙인 채집 스킬 등으로 편의성이 점차 나아져서 채집 스킬 때문에 스킬 트리가 내려가는 일은 점점 줄어들었다.] !!아앗!!이 발생하는 과정을 요약하면 다음과 같다.\n\n 1. 채집용 캐릭터들로 이루어진 약한 파티(ex: [[레인저(세계수의 미궁 2)|레인저]] 5명)가 수해에 입장한다.\n 1. 필드 전투를 피해 채집 포인트에 도착한 후 열심히 아이템을 캐는 중에...\n 1. \'\'\'!!아앗!!\'\'\' ~~라플레시아가 나타났다!~~\n 이때 등장하는 것은 [[FOE(세계수의 미궁 시리즈)|FOE]]는 아니지만 \'\'\'훨씬 위층에 등장하는 강력한 필드 몬스터이며 선제 공격을 당하게 된다!\'\'\'\n 1. \'\'\'으앙 죽음\'\'\'(hage)\n\n여담으로 !!아앗!!의 유래는 1인칭 던전 크롤러의 원조 [[위저드리]]에서 함정을 건드렸을 때 나오는 대사 Oops!(おおっと!)라고 한다.\n\n== 각 작품에서의 모습 ==\n=== [[세계수의 미궁 2 제왕의 성배]] ===\n!!아앗!!의 악랄함은 첫 등장한 작품이자 시리즈 중에서도 불친절하기로 정평이 난 2편이 절정이었다. 그야말로 위의 !!아앗!! 시퀀스 그대로, 묻지도 따지지도 않고 채집할 때마다 일정 확률로 \'\'\'강제로\'\'\' 전투에 돌입해야 했다. 게다가 이럴 때 쓰라고 있는 레인저의 스킬 \'위험 감지(중간 확률로 적의 선제 공격을 무효화)\'는 정작 작동하지 않는다!\n\n참고로 2편에서 채집 도중 !!아앗!!이 뜰 확률은 [[http://www.atlusnet.jp/topic/detail/910|고작 1%다.]] [[던파확률의 법칙|낮아 보이는 확률이어도 플레이 중 한 번이라도 일어나는 것]]을 경험하는 체감 확률을 고려하여 확률을 설정한다고.\n\n=== [[세계수의 미궁 3 성해의 내방자]] ===\n다행히 채집 중 낮은 확률로 "좋은 아이템을 얻을 수 있을 것 같지만... 주변에서 몬스터들의 기척이 느껴진다."는 메시지가 뜨고 이때 운이 좋으면 레어 아이템을 얻을 수 있지만 반대의 경우 적과 싸우게 되는 것으로 조정되었다.\n\n=== [[세계수의 미궁 4 전승의 거신]] ===\n기본적인 것은 3편과 같지만, 4편에서는 움직이지 않고 채집할 때도 턴이 경과하도록 조정되었기 때문에 주변에 있는 FOE를 잊고 채집에 몰두하다가 FOE와 부딪히면 FOE 버전 !!아앗!!이 뜬다. 그리고 난이도 CASUAL로 플레이시, FOE로 인한 !!아앗!!을 제외하면 절대로 발생하지 않는다.\n\n=== [[신 세계수의 미궁 밀레니엄의 소녀|신 세계수의]] [[신 세계수의 미궁 2 파프니르기사|미궁 시리즈]] ===\n채집 방식이 한 턴으로 끝나는 구조[* 채집으로 한 번 아이템을 획득하면 "다시, (채집 스킬)에 의해..."가 뜨면서 한꺼번에 획득되는 구조.]로 바뀐 덕분인지 강제 조우로 다시 회귀해버렸다(...). 그나마 위험 감지 먹통과 같은 버그성 난점들은 수정되었다. 그 이후에 나온 [[세계수의 미궁 5 오랜 신화의 끝]]과 시리즈의 집대성 작품이자 3DS 마지막 작품인 [[세계수의 미궁 X]]도 마찬가지.\n\n=== [[세계수의 미궁 X]] ===\n본작의 채집은 신 세계수 시리즈와 같은 매커니즘이라 굳이 언급할 필요는 없으나, 퀘스트중에 2편의 !!아앗!! 시퀀스를 재현하면서 \'\'\'라플레시아\'\'\'가 등장하는 퀘스트가 존재한다.(...) 깨알같이 시스템 메세지 창이 아니라 대화창을 이용해서 완벽 재현한 것이 포인트.\n\n=== [[페르소나 Q 섀도우 오브 더 래버린스]] ===\n세계수 시스템을 기반으로 한 [[페르소나 시리즈]]와의 콜라보 작품인 페르소나 Q에서도 등장한다. 3, 4편과 같이 파워 스폿에서 채집 도중 메시지가 뜨며, 실패하면 파티에 참가하고 있는 멤버 중 한 명의 [[http://nico.ms/sm25683358|!!아앗!! 하는 음성]] ~~또는 [[코로마루|개소리]]~~과 함께 그 던전의 \'강적\'인 거대 [[섀도(페르소나 시리즈)|섀도우]]가 나타난다.\n\n그러나 내비 전용 스킬인 뱀눈 노려보기(위험 감지와 같은 효과)와 채집 보조 스킬은 파티의 전투력에 전혀 지장을 주지 않으며, \'대안심\'을 달면 거의 볼 일이 없어져서 초중반 이후에는 존재감이 급격히 줄어든다.\n[[분류:세계수의 미궁 시리즈]]',
'contributors': '110.46.34.123,kirby10,max0243,218.54.117.149,ruby3141,121.165.63.239,iviyuki,1.229.200.194,anatra95,kiri47,175.127.134.2,nickchaos71,chkong1998,kiwitree2,namubot,huwieblusnow',
'namespace': ''}
``` |
krm/for-ULPGL-Dissertation | 2022-10-16T07:53:00.000Z | [
"task_categories:summarization",
"task_ids:news-articles-summarization",
"annotations_creators:other",
"language_creators:other",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:extended|orange_sum",
"language:fr",
"license:other",
"krm",
"ulpgl",
"orange",
"reg... | krm | null | null | null | 0 | 9 | ---
annotations_creators:
- other
language:
- fr
language_creators:
- other
license:
- other
multilinguality:
- monolingual
pretty_name: for-ULPGL-Dissertation
size_categories:
- 10K<n<100K
source_datasets:
- extended|orange_sum
tags:
- krm
- ulpgl
- orange
task_categories:
- summarization
task_ids:
- news-articles-summarization
---
# Dataset Card for [for-ULPGL-Dissertation]
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** krm/for-ULPGL-Dissertation
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
This dataset is essentially based on the *GEM/Orange_sum* dataset, which is dedicated to the summarization of French-language articles. It consists of the abstract data from that dataset (Orange_sum), to which a number of summaries generated by **David Krame**'s **Mon Résumeur** system were added.
### Supported Tasks and Leaderboards
Automatic summarization
### Languages
French
## Dataset Structure
### Data Fields
The dataset has two fields, *summary* and *text*:
- **text**: the source texts
- **summary**: the corresponding summaries
### Data Splits
As of October 16, 2022, the dataset consists of:
> **21,721** training examples (split named **train**)
> **1,545** validation examples (split named **validation**)
> **1,581** test examples (split named **test**)
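A minimal loading sketch, assuming the split names and the `text`/`summary` fields described above:
```python
from datasets import load_dataset

# Assumes the repository exposes train/validation/test splits,
# each with `text` and `summary` fields.
dataset = load_dataset("krm/for-ULPGL-Dissertation")
sample = dataset["train"][0]
print(sample["text"][:200])
print(sample["summary"])
```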
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
|
arbml/AQAD | 2022-10-14T22:35:38.000Z | [
"region:us"
] | arbml | null | null | null | 1 | 9 | ---
dataset_info:
features:
- name: id
dtype: string
- name: title
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: text
dtype: string
- name: answer_start
dtype: int32
splits:
- name: train
num_bytes: 23343014
num_examples: 17911
download_size: 3581662
dataset_size: 23343014
---
# Dataset Card for "AQAD"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Bingsu/laion2b_multi_korean_subset_with_image | 2022-11-03T05:10:40.000Z | [
"task_categories:feature-extraction",
"annotations_creators:crowdsourced",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:1M<n<10M",
"source_datasets:extended|laion/laion2B-multi",
"language:ko",
"license:cc-by-4.0",
"region:us"
] | Bingsu | null | null | null | 0 | 9 | ---
annotations_creators:
- crowdsourced
language:
- ko
language_creators:
- crowdsourced
license:
- cc-by-4.0
multilinguality:
- monolingual
pretty_name: 'laion2b multi korean subset with image'
size_categories:
- 1M<n<10M
source_datasets:
- extended|laion/laion2B-multi
tags: []
task_categories:
- feature-extraction
task_ids: []
---
# laion2b_multi_korean_subset_with_image
## Dataset Description
- **Download Size** 342 GB
This dataset collects the images from [Bingsu/laion2B-multi-korean-subset](https://huggingface.co/datasets/Bingsu/laion2B-multi-korean-subset) that were successfully downloaded with img2dataset.
It contains 9,800,137 images.
Each image was resized so that its shorter side is 256 pixels and was saved as a quality-100 WebP file.
## Usage
### 1. datasets
```python
>>> from datasets import load_dataset
>>> dataset = load_dataset("Bingsu/laion2b_multi_korean_subset_with_image", streaming=True, split="train")
>>> dataset.features
{'image': Image(decode=True, id=None),
'text': Value(dtype='string', id=None),
'width': Value(dtype='int32', id=None),
'height': Value(dtype='int32', id=None)}
>>> next(iter(dataset))
{'image': <PIL.WebPImagePlugin.WebPImageFile image mode=RGB size=256x256>,
'text': '소닉기어 에어폰5 휴대용 스테레오 블루투스 헤드폰',
'width': 256,
'height': 256}
```
### 2. webdataset
This dataset is organized so that it can be used with [webdataset](https://github.com/webdataset/webdataset). If you process the data by streaming instead of downloading it, this is much faster than method 1.
Note: the method below raises an error on Windows.
```python
>>> import webdataset as wds
>>> url = "https://huggingface.co/datasets/Bingsu/laion2b_multi_korean_subset_with_image/resolve/main/data/{00000..02122}.tar"
>>> dataset = wds.WebDataset(url).shuffle(1000).decode("pil").to_tuple("webp", "json")
```
```python
>>> next(iter(dataset))
...
```
As of this writing (2022-10-18), webdataset does not support automatic decoding of WebP images ([PR #215](https://github.com/webdataset/webdataset/pull/215)), so you have to decode them yourself.
```python
import io

import webdataset as wds
from PIL import Image


def preprocess(data):
    webp, jsn = data
    # Decode the raw WebP bytes manually (webdataset cannot do it yet).
    img = Image.open(io.BytesIO(webp))
    out = {
        "image": img,
        "text": jsn["caption"],
        "width": jsn["width"],
        "height": jsn["height"],
    }
    return out


url = "https://huggingface.co/datasets/Bingsu/laion2b_multi_korean_subset_with_image/resolve/main/data/{00000..02122}.tar"
dataset = wds.WebDataset(url).shuffle(1000).decode("pil").to_tuple("webp", "json").map(preprocess)
```
```python
>>> next(iter(dataset))
{'image': <PIL.WebPImagePlugin.WebPImageFile image mode=RGB size=427x256>,
'text': '[따블리에]유아동 미술가운, 미술 전신복',
'width': 427,
'height': 256}
```
## Note

Each tar file is structured as shown above.
Images that failed to download were skipped, so the file names are not fully consecutive.
Each json file looks like the following:
```json
{
"caption": "\ub514\uc790\uc778 \uc53d\ud0b9\uacfc \ub514\uc9c0\ud138 \ud2b8\ub79c\uc2a4\ud3ec\uba54\uc774\uc158",
"url": "https://image.samsungsds.com/kr/insights/dt1.jpg?queryString=20210915031642",
"key": "014770069",
"status": "success",
"error_message": null,
"width": 649,
"height": 256,
"original_width": 760,
"original_height": 300,
"exif": "{}"
}
```
Each txt file contains the "caption" field of the corresponding json file.
|
gigant/ted_descriptions | 2022-10-18T11:16:29.000Z | [
"task_categories:text-generation",
"task_ids:language-modeling",
"annotations_creators:no-annotation",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:en",
"license:cc-by-4.0",
"region:us"
] | gigant | null | null | null | 0 | 9 | ---
annotations_creators:
- no-annotation
language:
- en
language_creators:
- found
license:
- cc-by-4.0
multilinguality:
- monolingual
pretty_name: TED descriptions
size_categories:
- 1K<n<10K
source_datasets:
- original
tags: []
task_categories:
- text-generation
task_ids:
- language-modeling
dataset_info:
features:
- name: url
dtype: string
- name: descr
dtype: string
splits:
- name: train
num_bytes: 2617778
num_examples: 5705
download_size: 1672988
dataset_size: 2617778
---
# Dataset Card for TED descriptions
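A minimal loading sketch, assuming the single `train` split and the `url`/`descr` fields declared in the metadata above:
```python
from datasets import load_dataset

ted = load_dataset("gigant/ted_descriptions", split="train")
print(ted[0]["url"])
print(ted[0]["descr"][:100])
```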
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
faruk/bengali-names-vs-gender | 2022-10-22T07:48:50.000Z | [
"license:afl-3.0",
"doi:10.57967/hf/0053",
"region:us"
] | faruk | null | null | null | 0 | 9 | ---
license: afl-3.0
---
# Bengali Female VS Male Names Dataset
An NLP dataset containing 2,030 samples of Bengali names and their corresponding gender, both female and male. This is a very small and simple toy dataset that NLP beginners can use to practice sequence classification and other NLP problems such as gender recognition from names.
# Background
In the Bengali language, a person's name depends largely on their gender. Female names normally end with certain suffixes such as "A", "I", "EE" ["আ", "ই", "ঈ"], while male names differ significantly from female names in terms of phoneme patterns and ending suffixes. So, in my observation, there is a significant possibility that these differences in patterns can be used for gender classification based on names.
Find the full documentation here:
[Documentation and dataset specifications](https://github.com/faruk-ahmad/bengali-female-vs-male-names)
## Dataset Format
The dataset is in CSV format. There are two columns:
1. Name
2. Gender
Each row has two attributes: the first is the name and the second is the gender. The name attribute is in ```utf-8``` encoding, and the gender attribute is encoded as 0 or 1:
| gender | label |
|---|---|
| male | 0 |
| female | 1 |
## Dataset Statistics
The number of samples per class is as below:
| gender | count |
|---|---|
| male | 1,029 |
| female | 1,001 |
## Possible Use Cases
1. Sequence classification using RNNs, LSTMs, etc.
2. Sequence modeling using other types of machine learning algorithms
3. Gender recognition based on names (see the sketch below)
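A minimal sketch of use case 3: the suffix patterns described in the Background section can be captured with character n-grams. This assumes a local copy of the CSV (the file name `bengali_names.csv` is hypothetical) with the `Name` and `Gender` columns described above:
```python
import pandas as pd
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Hypothetical local copy of the dataset CSV ("Name", "Gender" columns).
df = pd.read_csv("bengali_names.csv")

# Character n-grams capture the suffix patterns noted in the Background section.
vectorizer = CountVectorizer(analyzer="char", ngram_range=(1, 3))
X = vectorizer.fit_transform(df["Name"])
X_train, X_test, y_train, y_test = train_test_split(
    X, df["Gender"], test_size=0.2, random_state=0
)

classifier = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("held-out accuracy:", classifier.score(X_test, y_test))
```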
## Disclaimer
The names were collected from the internet using different sources such as Wikipedia, baby name suggestion websites, blogs, etc. If someone's name is in the dataset, that is totally unintentional. |
arbml/LK_Hadith | 2022-10-23T17:25:24.000Z | [
"region:us"
] | arbml | null | null | null | 0 | 9 | Entry not found |
pacovaldez/stackoverflow-questions | 2022-11-10T00:14:37.000Z | [
"task_categories:text-classification",
"task_ids:multi-class-classification",
"annotations_creators:machine-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1M<n<10M",
"source_datasets:original",
"language:en",
"license:apache-2.0",
"stackoverflow",
"technic... | pacovaldez | null | null | null | 30 | 9 | ---
annotations_creators:
- machine-generated
language:
- en
language_creators:
- found
license:
- apache-2.0
multilinguality:
- monolingual
pretty_name: stackoverflow_post_questions
size_categories:
- 1M<n<10M
source_datasets:
- original
tags:
- stackoverflow
- technical questions
task_categories:
- text-classification
task_ids:
- multi-class-classification
---
# Dataset Card for [Stackoverflow Post Questions]
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Source Data](#source-data)
- [Contributions](#contributions)
## Dataset Description
Companies that sell open-source software tools usually hire an army of customer representatives to try to answer every question asked about their tool. The first step in this process
is the prioritization of the question. The classification scale usually consists of 4 values: P0, P1, P2, and P3, with different meanings across every participant in the industry. On
the other hand, every software developer in the world has dealt with Stack Overflow (SO); the amount of shared knowledge there is incomparable to any other website. Questions on SO are
usually annotated and curated by thousands of people, providing metadata about the quality of the question. This dataset aims to provide an accurate prioritization for programming
questions.
### Dataset Summary
The dataset contains the title and body of Stack Overflow questions and a label value (0, 1, 2, 3) that was calculated using thresholds defined by SO badges.
### Languages
English
## Dataset Structure
- `title`: string
- `body`: string
- `label`: int
### Data Splits
The split is 40/40/20, where classes have been balanced to be around the same size.
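A minimal loading sketch (the split names are an assumption, as the card does not list them explicitly):
```python
from datasets import load_dataset

# Assumes a standard `train` split with the `title`, `body`, and
# `label` fields described above.
ds = load_dataset("pacovaldez/stackoverflow-questions")
print(ds)  # shows the actual split names and sizes
example = ds["train"][0]
print(example["title"], example["label"])
```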
## Dataset Creation
The data set was extracted and labeled with the following query in BigQuery:
```sql
SELECT
title,
body,
CASE
WHEN score >= 100 OR favorite_count >= 100 OR view_count >= 10000 THEN 0
WHEN score >= 25 OR favorite_count >= 25 OR view_count >= 2500 THEN 1
WHEN score >= 10 OR favorite_count >= 10 OR view_count >= 1000 THEN 2
ELSE 3
END AS label
FROM `bigquery-public-data`.stackoverflow.posts_questions
```
### Source Data
The data was extracted from the Big Query public dataset: `bigquery-public-data.stackoverflow.posts_questions`
#### Initial Data Collection and Normalization
The original dataset contained high class imbalance:
| label | count |
|---|---|
| 0 | 977,424 |
| 1 | 2,401,534 |
| 2 | 3,418,179 |
| 3 | 16,222,990 |
| **Grand Total** | 23,020,127 |
The data was sampled from each class to have around the same amount of records on every class.
### Contributions
Thanks to [@pacofvf](https://github.com/pacofvf) for adding this dataset.
|
pszemraj/text2image-multi-prompt | 2023-10-06T23:00:38.000Z | [
"multilinguality:monolingual",
"source_datasets:bartman081523/stable-diffusion-discord-prompts",
"source_datasets:succinctly/midjourney-prompts",
"source_datasets:Gustavosta/Stable-Diffusion-Prompts",
"language:en",
"license:apache-2.0",
"text generation",
"region:us"
] | pszemraj | null | null | null | 8 | 9 | ---
language:
- en
license: apache-2.0
multilinguality:
- monolingual
source_datasets:
- bartman081523/stable-diffusion-discord-prompts
- succinctly/midjourney-prompts
- Gustavosta/Stable-Diffusion-Prompts
pretty_name: multi text2image prompts a dataset collection
tags:
- text generation
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
- config_name: original
data_files:
- split: train
path: original/train-*
- split: test
path: original/test-*
dataset_info:
- config_name: default
features:
- name: text
dtype: string
- name: src_dataset
dtype: string
splits:
- name: train
num_bytes: 262736830
num_examples: 1677221
- name: test
num_bytes: 56294291
num_examples: 292876
download_size: 151054782
dataset_size: 319031121
- config_name: original
features:
- name: text
dtype: string
- name: src_dataset
dtype: string
splits:
- name: train
num_bytes: 741427383
num_examples: 3551734
- name: test
num_bytes: 83615440
num_examples: 399393
download_size: 402186258
dataset_size: 825042823
---
# text2image multi-prompt(s): a dataset collection
- collection of several text2image prompt datasets
- data was cleaned/normalized with the goal of removing model-specific API flags such as `--ar` for Midjourney
- data de-duplicated on a basic level: exactly duplicate prompts were dropped (_after cleaning and normalization_)
## updates
- Oct 2023: the `default` config has been updated with better deduplication. It was deduplicated with minhash (_params: n-gram size set to 3, deduplication threshold at 0.6, hash function chosen as xxh3 with 32-bit hash bits, and 128 permutations with a batch size of 10,000._) which drops 2+ million rows.
- original version is still available under `config_name="original"`
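A minimal loading sketch for both configs (the config and split names are taken from the metadata above):
```python
from datasets import load_dataset

# Deduplicated default config
prompts = load_dataset("pszemraj/text2image-multi-prompt", split="train")
# Pre-deduplication version
original = load_dataset("pszemraj/text2image-multi-prompt", "original", split="train")

print(prompts[0]["text"], "| from:", prompts[0]["src_dataset"])
```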
## contents
default:
```
DatasetDict({
train: Dataset({
features: ['text', 'src_dataset'],
num_rows: 1677221
})
test: Dataset({
features: ['text', 'src_dataset'],
num_rows: 292876
})
})
```
For `original` config:
```
DatasetDict({
train: Dataset({
features: ['text', 'src_dataset'],
num_rows: 3551734
})
test: Dataset({
features: ['text', 'src_dataset'],
num_rows: 399393
})
})
```
_NOTE: as the other two datasets did not have a `validation` split, the validation split of `succinctly/midjourney-prompts` was merged into `train`._ |
andyyang/stable_diffusion_prompts_2m | 2022-11-10T06:38:10.000Z | [
"license:cc0-1.0",
"region:us"
] | andyyang | null | null | null | 8 | 9 | ---
license: cc0-1.0
---
# Stable Diffusion Prompts 2M
Because the DiffusionDB dataset is too big, I extracted the prompts for prompt study.
The files:
- sd_promts_2m.txt: the main dataset.
- sd_top5000.keywords.tsv: the top 5,000 most frequent keywords or phrases.
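A minimal loading sketch, assuming the plain-text file can be read with the generic `text` loader (the file name is taken from the list above; the resolve URL follows the standard Hub layout):
```python
from datasets import load_dataset

prompts = load_dataset(
    "text",
    data_files="https://huggingface.co/datasets/andyyang/stable_diffusion_prompts_2m/resolve/main/sd_promts_2m.txt",
    split="train",
)
print(prompts[0]["text"])  # assuming one prompt per line
```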
|
Ziyang/CC4M | 2022-11-12T06:33:23.000Z | [
"region:us"
] | Ziyang | null | null | null | 0 | 9 | The training and validation files of the conceptual captions dataset (4M). |
bigbio/muchmore | 2022-12-22T15:45:43.000Z | [
"multilinguality:multilingual",
"language:en",
"language:de",
"license:unknown",
"region:us"
] | bigbio | The corpus used in the MuchMore project is a parallel corpus of English-German scientific
medical abstracts obtained from the Springer Link web site. The corpus consists of
approximately 1 million tokens per language. Abstracts are from 41 medical
journals, each of which constitutes a relatively homogeneous medical sub-domain (e.g.
Neurology, Radiology, etc.). The corpus of downloaded HTML documents is normalized in
various ways, in order to produce a clean, plain text version, consisting of a title, abstract
and keywords. Additionally, the corpus was aligned on the sentence level.
Automatic (!) annotation includes: Part-of-Speech; Morphology (inflection and
decomposition); Chunks; Semantic Classes (UMLS: Unified Medical Language System,
MeSH: Medical Subject Headings, EuroWordNet); Semantic Relations from UMLS. | @inproceedings{buitelaar2003multi,
title={A multi-layered, xml-based approach to the integration of linguistic and semantic annotations},
author={Buitelaar, Paul and Declerck, Thierry and Sacaleanu, Bogdan and Vintar, {\v{S}}pela and Raileanu, Diana and Crispi, Claudia},
booktitle={Proceedings of EACL 2003 Workshop on Language Technology and the Semantic Web (NLPXML'03), Budapest, Hungary},
year={2003}
} | null | 0 | 9 |
---
language:
- en
- de
bigbio_language:
- English
- German
license: unknown
multilinguality: multilingual
bigbio_license_shortname: UNKNOWN
pretty_name: MuchMore
homepage: https://muchmore.dfki.de/resources1.htm
bigbio_pubmed: True
bigbio_public: True
bigbio_tasks:
- TRANSLATION
- NAMED_ENTITY_RECOGNITION
- NAMED_ENTITY_DISAMBIGUATION
- RELATION_EXTRACTION
---
# Dataset Card for MuchMore
## Dataset Description
- **Homepage:** https://muchmore.dfki.de/resources1.htm
- **Pubmed:** True
- **Public:** True
- **Tasks:** TRANSL,NER,NED,RE
The corpus used in the MuchMore project is a parallel corpus of English-German scientific
medical abstracts obtained from the Springer Link web site. The corpus consists of
approximately 1 million tokens per language. Abstracts are from 41 medical
journals, each of which constitutes a relatively homogeneous medical sub-domain (e.g.
Neurology, Radiology, etc.). The corpus of downloaded HTML documents is normalized in
various ways, in order to produce a clean, plain text version, consisting of a title, abstract
and keywords. Additionally, the corpus was aligned on the sentence level.
Automatic (!) annotation includes: Part-of-Speech; Morphology (inflection and
decomposition); Chunks; Semantic Classes (UMLS: Unified Medical Language System,
MeSH: Medical Subject Headings, EuroWordNet); Semantic Relations from UMLS.
## Citation Information
```
@inproceedings{buitelaar2003multi,
title={A multi-layered, xml-based approach to the integration of linguistic and semantic annotations},
author={Buitelaar, Paul and Declerck, Thierry and Sacaleanu, Bogdan and Vintar, {{S}}pela and Raileanu, Diana and Crispi, Claudia},
booktitle={Proceedings of EACL 2003 Workshop on Language Technology and the Semantic Web (NLPXML'03), Budapest, Hungary},
year={2003}
}
```
|
fewshot-goes-multilingual/cs_czech-named-entity-corpus_2.0 | 2022-12-05T22:44:28.000Z | [
"task_categories:token-classification",
"task_ids:named-entity-recognition",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:cs",
"license:cc-by-nc-sa-3.0",
"czech NER",
"CNEC",
... | fewshot-goes-multilingual | null | null | null | 1 | 9 | ---
annotations_creators:
- expert-generated
language:
- cs
language_creators:
- found
license:
- cc-by-nc-sa-3.0
multilinguality:
- monolingual
pretty_name: Czech Named Entity Corpus 2.0
size_categories:
- 1K<n<10K
source_datasets:
- original
tags:
- czech NER
- CNEC
task_categories:
- token-classification
task_ids:
- named-entity-recognition
---
# Dataset Card for Czech Named Entity Corpus 2.0
## Dataset Description
The dataset contains Czech sentences and annotated named entities. Total number of sentences is around 9,000 and total number of entities is around 34,000. (Total means train + validation + test)
## Dataset Features
Each sample contains:
- `text`: source sentence
- `entities`: list of selected entities. Each entity contains:
- `category_id`: string identifier of the entity category
- `category_str`: human-friendly category name in Czech (verbalizer)
- `start`: index on which the entity starts in the source sentence
- `end`: index on which the entity ends in the source sentence
- `content`: entity content, it was created as `text[start:end]`
- `entity_id`: unique entity string identifier
- `parent_id`: If entity was selected inside another entity (e.g. house number inside address), `parent_id` is the identifier of the parent entity. None otherwise.
The `entity_id` field was checked to be globally unique (across data samples and dataset splits).
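A minimal usage sketch (assuming a `train` split; whether `entities` materializes as a list of dicts or a dict of lists depends on how the features were declared, so inspect the first sample):
```python
from datasets import load_dataset

cnec = load_dataset(
    "fewshot-goes-multilingual/cs_czech-named-entity-corpus_2.0", split="train"
)
sample = cnec[0]
print(sample["text"])
# Entity annotations with category_id, start/end offsets, and content;
# by construction, content == text[start:end].
print(sample["entities"])
```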
## Entity categories
The list of the recognized entities (`category_id`, `category_str` pairs):
```python3
{
'A': 'číslo v adrese / kontaktním údaji',
'ah': 'číslo domu',
'at': 'telefonní číslo / fax',
'az': 'PSČ (poštovní směrovací číslo)',
'C': 'reference/bibliografie',
'f': 'cizí výraz',
'g_': 'geografický název - jiný',
'gc': 'stát/země',
'gh': 'jméno vodstva',
'gl': 'přírodní oblast/útvar',
'gq': 'městská čtvrť',
'gr': 'území',
'gs': 'ulice/náměstí',
'gt': 'kontinent',
'gu': 'město/zámek',
'i_': 'instituce - jiná',
'ia': 'konference/soutěž',
'ic': 'kulturní/vzdělávací/vědecká instituce',
'if': 'komerční instituce',
'io': 'vládní/politická instituce',
'me': 'emailová adresa',
'mi': 'URL / internetový odkaz',
'mn': 'časopis',
'ms': 'radio/televizní stanice',
'n_': 'číselný výraz - jiný',
'na': 'věk',
'nb': 'číslo stránky/kapitoly/sekce/objektu',
'nc': 'množství/počet',
'ni': 'číslo položky',
'no': 'pořadí',
'ns': 'sportovní skóre',
'o_': 'artefakt - jiný',
'oa': 'umělecké dílo / kulturní artefakt',
'oe': 'jednotka',
'om': 'měna',
'op': 'produkt/výrobek',
'or': 'zákon/směrnice/listina',
'P': 'celé jméno',
'p_': 'jméno - jiné',
'pc': 'národnost',
'pd': '(akademický) titul',
'pf': 'křestní jméno',
'pm': 'prostřední jméno',
'pp': 'mýtická/historická postava',
'ps': 'příjmení',
's': 'zkratka',
'T': 'čas/datum',
'td': 'den',
'tf': 'svátky',
'th': 'hodiny/minuty',
'tm': 'měsíc',
'ty': 'rok',
}
```
## Dataset Source
The dataset is a preprocessed adaptation of the existing CNEC 2.0 dataset [project info](https://ufal.mff.cuni.cz/cnec/cnec2.0), [link to data](https://lindat.mff.cuni.cz/repository/xmlui/handle/11858/00-097C-0000-0023-1B22-8). This adaptation contains (almost) the same data, converted to a more convenient format. In addition, we inspected and decided to remove the entity categories `?`, `segm`, `cap`, `lower`, `upper`, which were either undocumented and/or carried little semantic meaning.
The category names (verbalizers) are not in the original dataset. They were added by a Czech native speaker using the available [documentation](https://ufal.mff.cuni.cz/cnec/cnec2.0) and by looking at several occurrences in the data.
## Citation
Cite authors of the [original dataset](https://lindat.mff.cuni.cz/repository/xmlui/handle/11858/00-097C-0000-0023-1B22-8):
```bibtex
@misc{11858/00-097C-0000-0023-1B22-8,
title = {Czech Named Entity Corpus 2.0},
author = {{\v S}ev{\v c}{\'{\i}}kov{\'a}, Magda and {\v Z}abokrtsk{\'y}, Zden{\v e}k and Strakov{\'a}, Jana and Straka, Milan},
url = {http://hdl.handle.net/11858/00-097C-0000-0023-1B22-8},
note = {{LINDAT}/{CLARIAH}-{CZ} digital library at the Institute of Formal and Applied Linguistics ({{\'U}FAL}), Faculty of Mathematics and Physics, Charles University},
copyright = {Attribution-{NonCommercial}-{ShareAlike} 3.0 Unported ({CC} {BY}-{NC}-{SA} 3.0)},
year = {2014}
}
```
|
language-and-voice-lab/samromur_asr | 2023-02-24T22:02:05.000Z | [
"task_categories:automatic-speech-recognition",
"annotations_creators:crowdsourced",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:original",
"language:is",
"license:cc-by-4.0",
"crowd-sourced icelandic",
"samrómur",
"icelandic spe... | language-and-voice-lab | Samrómur Icelandic Speech 1.0. | @misc{mollbergsamromur2022,
title={Samrómur Icelandic Speech 1.0},
ldc_catalog_no={LDC2022S05},
DOI={https://doi.org/10.35111/thx3-f170},
author={Mollberg, David Erik and Jónsson, Ólafur Helgi and Þorsteinsdóttir, Sunneva and Guðmundsdóttir, Jóhanna Vigdís and Steingrimsson, Steinthor and Magnusdottir, Eydis Huld and Fong, Judy Y. and Borsky, Michal and Guðnason, Jón},
publisher={Reykjavík University}
journal={Linguistic Data Consortium, Philadelphia},
year={2022},
url={https://catalog.ldc.upenn.edu/LDC2022S05},
} | null | 0 | 9 | ---
annotations_creators:
- crowdsourced
language:
- is
language_creators:
- crowdsourced
license:
- cc-by-4.0
multilinguality:
- monolingual
pretty_name: "Samrómur Icelandic Speech 1.0."
size_categories:
- 100K<n<1M
source_datasets:
- original
tags:
- crowd-sourced icelandic
- "samrómur"
- icelandic speech
- samromur
- iceland
task_categories:
- automatic-speech-recognition
task_ids: []
---
# Dataset Card for samromur_asr
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [Samrómur 21.05]
- **Repository:** [OpenSLR](http://www.openslr.org/112/)
- **Paper:** [Samrómur: Crowd-sourcing Data Collection for Icelandic Speech Recognition](https://aclanthology.org/2020.lrec-1.425.pdf)
- **Point of Contact:** [Jón Guðnason](mailto:jg@ru.is)
### Dataset Summary
This is the first release of the Samrómur Icelandic Speech corpus, which contains 100,000 validated utterances.
The corpus is a result of the crowd-sourcing effort run by the Language and Voice Lab at the Reykjavik University, in cooperation with Almannarómur, Center for Language Technology.
### Example Usage
The Samrómur Corpus is divided into 3 splits: train, validation, and test. To load the whole dataset:
```python
from datasets import load_dataset
samromur_asr = load_dataset("language-and-voice-lab/samromur_asr")
```
To load a specific split (for example, the validation split), pass its name via the `split` argument:
```python
from datasets import load_dataset
samromur_asr = load_dataset("language-and-voice-lab/samromur_asr",split="validation")
```
### Supported Tasks
automatic-speech-recognition: The dataset can be used to train a model for Automatic Speech Recognition (ASR). The model is presented with an audio file and asked to transcribe the audio file to written text. The most common evaluation metric is the word error rate (WER).
### Languages
The audio is in Icelandic.
The reading prompts were gathered from a variety of sources, mainly from the [Icelandic Gigaword Corpus](http://clarin.is/en/resources/gigaword). The corpus includes text from novels, news, plays, and from a list of location names in Iceland. The prompts also came from the [Icelandic Web of Science](https://www.visindavefur.is/).
## Dataset Structure
### Data Instances
```python
{
'audio_id': '009123-0150695',
'audio': {
'path': '/home/david/.cache/HuggingFace/datasets/downloads/extracted/cb428a7f1e46b058d76641ef32f36b49d28b73aea38509983170495408035a10/dev/009123/009123-0150695.flac',
'array': array([0., 0., 0., ..., 0., 0., 0.], dtype=float32),
'sampling_rate': 16000
},
'speaker_id': '009123',
'gender': 'female',
'age': '18-19',
'duration': 3.299999952316284,
'normalized_text': 'það skipti heldur engu'
}
```
### Data Fields
* `audio_id` (string) - id of audio segment
* `audio` (datasets.Audio) - a dictionary containing the path to the audio, the decoded audio array, and the sampling rate. In non-streaming mode (default), the path points to the locally extracted audio. In streaming mode, the path is the relative path of an audio inside its archive (as files are not downloaded and extracted locally).
* `speaker_id` (string) - id of speaker
* `gender` (string) - gender of speaker (male or female)
* `age` (string) - range of age of the speaker.
* `duration` (float32) - duration of the audio file in seconds.
* `normalized_text` (string) - normalized audio segment transcription.
### Data Splits
The corpus is split into train, validation, and test subsets with no speaker overlap. Each subset contains folders that correspond to speaker IDs, and the audio files inside use the following naming convention: {speaker_ID}-{utterance_ID}.flac. The lengths of the portions are: train=114h34m, test=15h51m, validation=15h16m.
To load a specific portion, please see the section "Example Usage" above.
## Dataset Creation
### Curation Rationale
* The recording has started in October 2019 and continues to this day (May 2021).
* This release has been authorized for release in May 2021.
* The aim is to create an open-source speech corpus to enable research and development for Icelandic Language Technology.
* The corpus contains audio recordings and a metadata file that contains the prompts the participants read.
* A Kaldi-based script using this data can be found on the Language and Voice Lab GitHub page: https://github.com/cadia-lvl/samromur-asr
### Source Data
#### Initial Data Collection and Normalization
* The utterances were recorded by a smartphone or the web app.
* The data was collected using the website https://samromur.is, code of which is available at https://github.com/cadia-lvl/samromur.
* Each recording contains one read sentence from a script.
* The script contains 85,080 unique sentences and 90,838 unique tokens.
### Annotations
#### Annotation process
Prompts were pulled from these corpora if they met the criteria of having only letters which are present in the Icelandic alphabet, and if they are listed in the [DIM: Database Icelandic Morphology](https://aclanthology.org/W19-6116.pdf).
There are also synthesised prompts consisting of a name followed by a question or a demand, in order to simulate a dialogue with a smart-device.
#### Who are the annotators?
The audio files' content was manually verified against the prompts by one or more listeners (mainly summer students).
### Personal and Sensitive Information
The dataset consists of people who have donated their voice. You agree to not attempt to determine the identity of speakers in this dataset.
## Considerations for Using the Data
### Social Impact of Dataset
This contribution describes an ongoing project of speech data collection, using the web application Samrómur which is built upon Common Voice, Mozilla Foundation's web platform for open-source voice collection. The goal of the project is to build a large-scale speech corpus for Automatic Speech Recognition (ASR) for Icelandic. Upon completion, Samrómur will be the largest open speech corpus for Icelandic collected from the public domain.
### Discussion of Biases
* The participants are aged between 18 and 90; 59,782 recordings are from female speakers and 40,218 from male speakers, recorded with a smartphone or the web app.
* Participants self-reported their age group, gender, and the native language.
* The corpus contains 100,000 utterances from 8,392 speakers, totalling 145 hours.
### Other Known Limitations
"Samromur 21.05" by the Language and Voice Laboratory (LVL) at the Reykjavik University is licensed under a Creative Commons Attribution 4.0 International (CC BY 4.0) License with the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
## Additional Information
### Dataset Curators
The corpus is a result of the crowd-sourcing effort run by the Language and Voice Lab at the Reykjavik University, in cooperation with Almannarómur, Center for Language Technology.
### Licensing Information
[CC-BY-4.0](https://creativecommons.org/licenses/by/4.0/)
### Citation Information
```
@inproceedings{mollberg-etal-2020-samromur,
title = "{S}amr{\'o}mur: Crowd-sourcing Data Collection for {I}celandic Speech Recognition",
author = "Mollberg, David Erik and
J{\'o}nsson, {\'O}lafur Helgi and
{\TH}orsteinsd{\'o}ttir, Sunneva and
Steingr{\'\i}msson, Stein{\th}{\'o}r and
Magn{\'u}sd{\'o}ttir, Eyd{\'\i}s Huld and
Gudnason, Jon",
booktitle = "Proceedings of the 12th Language Resources and Evaluation Conference",
month = may,
year = "2020",
address = "Marseille, France",
publisher = "European Language Resources Association",
url = "https://aclanthology.org/2020.lrec-1.425",
pages = "3463--3467",
language = "English",
ISBN = "979-10-95546-34-4",
}
```
### Contributions
This project was funded by the Language Technology Programme for Icelandic 2019-2023. The programme, which is managed and coordinated by Almannarómur, is funded by the Icelandic Ministry of Education, Science and Culture.
The verification of the dataset was funded by the Icelandic Directorate of Labour's Student Summer Job Program.
Special thanks to the summer students for all the hard work.
|
argilla/banking_sentiment_zs_gpt3 | 2022-12-07T22:16:43.000Z | [
"region:us"
] | argilla | null | null | null | 0 | 9 | ---
dataset_info:
features:
- name: text
dtype: 'null'
- name: inputs
struct:
- name: explanation
dtype: string
- name: text
dtype: string
- name: prediction
list:
- name: label
dtype: string
- name: score
dtype: float64
- name: prediction_agent
dtype: string
- name: annotation
dtype: string
- name: annotation_agent
dtype: string
- name: multi_label
dtype: bool
- name: explanation
dtype: 'null'
- name: id
dtype: string
- name: metadata
dtype: 'null'
- name: status
dtype: string
- name: event_timestamp
dtype: 'null'
- name: metrics
struct:
- name: text_length
dtype: int64
splits:
- name: train
num_bytes: 11113
num_examples: 36
download_size: 10768
dataset_size: 11113
---
# Dataset Card for "banking_sentiment_zs_gpt3"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Alegzandra/REDv2 | 2022-12-22T15:14:54.000Z | [
"license:mit",
"region:us"
] | Alegzandra | null | null | null | 0 | 9 | ---
license: mit
---
# Dataset Card for [REDv2]
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
This is the second version of the Romanian Emotions Dataset (RED) containing 5449 tweets annotated in a multi-label fashion with the following 7 emotions: Anger (Furie), Fear (Frică), Joy (Bucurie), Sadness (Tristețe), Surprise (Surpriză), Trust (Încredere) and Neutral (Neutru).
### Supported Tasks and Leaderboards
This dataset is intended for multi-class & multi-label emotion classification.
### Languages
The data is in Romanian.
## Dataset Structure
### Data Instances
Each instance is a tweet with a corresponding ID and one or more emotion annotations (or neutral).
### Data Fields
The simplified configuration includes:
```
text: the tweet
text_id: unique identifier of the tweet (can be used to look up the entry in the raw dataset)
agreed_labels: the agreed emotion annotations vector (each value of 1 means that at least two annotators recognized that specific emotion)
procentual_labels: vector whose entries take one of three values: 0.33 if one annotator recognised the emotion, 0.66 if two annotators did, and 0.99 if all three annotators did
```
In addition to the above, the raw data includes:
```
Anger, Fear, Joy, Neutral, Sadness, Surprise, Trust: boolean values - True if the specific emotion is found in the agreed_labels vector
annotator1, annotator2, annotator3: vectors of zeros of ones - 1 means the annotator recognized the emotion on the corresponding vector index
sum_labels: the sum of annotator1, annotator2 and annotator3 vectors
```
The arrays of 7 values correspond to the following emotions: ['Sadness', 'Surprise', 'Fear', 'Anger', 'Neutral', 'Trust', 'Joy'].
### Data Splits
This dataset includes train/validation/test splits with 4,088, 818, and 543 examples, respectively.
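A minimal loading sketch (the split names and the simplified fields above are assumed to match the Hub layout):
```python
from datasets import load_dataset

red = load_dataset("Alegzandra/REDv2", split="train")
sample = red[0]
print(sample["text"])
# Emotion order: Sadness, Surprise, Fear, Anger, Neutral, Trust, Joy
print(sample["agreed_labels"])
print(sample["procentual_labels"])
```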
## Dataset Creation
### Curation Rationale
From the paper introduction:
> Interpreting correctly one's own emotions, as well as other people's emotional states, is a central aspect of emotional intelligence. Today, people can automate the process of emotion detection by creating machine learning models, provided by the fact that the model training was done on qualitative and sufficient data. With the constant increase of social media usage there is also an increase in online public data, freely available for model creation. Thus, analyzing emotions in online content naturally has became more and more of a topic of interest in the recent years.
### Source Data
#### Initial Data Collection and Normalization
Data was collected from Twitter (for more information see Chapter 3.1 of the [paper](https://aclanthology.org/2022.lrec-1.149.pdf)).
#### Who are the source language producers?
Romanian-speaking Twitter users.
### Annotations
#### Annotation process
See Chapter 3.2. in the [paper](https://aclanthology.org/2022.lrec-1.149.pdf).
#### Who are the annotators?
Annotations were produced by 66 Cognitive Science students at the University of Bucharest, Faculty of Psychology and Educational Sciences.
### Personal and Sensitive Information
All tweets in this dataset are anonymized by removing usernames and proper nouns.
## Additional Information
### Dataset Curators
Researchers at the University of Bucharest and Adobe (see the authors of the paper [here](http://www.lrec-conf.org/proceedings/lrec2022/pdf/2022.lrec-1.149.pdf)).
### Licensing Information
The [GitHub repository](https://github.com/Alegzandra/RED-Romanian-Emotions-Dataset/tree/main/REDv2) of this dataset has an MIT license.
### Citation Information
If you are using this dataset in your research, please cite:
```
@inproceedings{redv2,
author = "Alexandra Ciobotaru and
Mihai V. Constantinescu and
Liviu P. Dinu and
Stefan Daniel Dumitrescu",
title = "{RED} v2: {E}nhancing {RED} {D}ataset for {M}ulti-{L}abel {E}motion {D}etection",
journal = "Proceedings of the 13th Language Resources and Evaluation Conference (LREC 2022)",
pages = "1392–1399",
year = "2022",
address = "Marseille, France",
publisher = "European Language Resources Association (ELRA)",
url = "http://www.lrec-conf.org/proceedings/lrec2022/pdf/2022.lrec-1.149.pdf",
language = "English"
}
```
### Contributions
Thanks to [@Alegzandra](https://github.com/Alegzandra) for adding this dataset.
|
parambharat/kannada_asr_corpus | 2022-12-13T09:49:00.000Z | [
"task_categories:automatic-speech-recognition",
"annotations_creators:found",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:extended|openslr",
"language:kn",
"license:cc-by-4.0",
"region:us"
] | parambharat | The corpus contains roughly 360 hours of audio and transcripts in the Kannada language. The transcripts have been de-duplicated using exact-match deduplication. | @misc{https://doi.org/10.48550/arxiv.2211.09536,
doi = {10.48550/ARXIV.2211.09536},
url = {https://arxiv.org/abs/2211.09536},
author = {Kumar, Gokul Karthik and S, Praveen and Kumar, Pratyush and Khapra, Mitesh M. and Nandakumar, Karthik},
keywords = {Computation and Language (cs.CL), Machine Learning (cs.LG), Sound (cs.SD), Audio and Speech Processing (eess.AS), FOS: Computer and information sciences, FOS: Computer and information sciences, FOS: Electrical engineering, electronic engineering, information engineering, FOS: Electrical engineering, electronic engineering, information engineering},
title = {Towards Building Text-To-Speech Systems for the Next Billion Users},
publisher = {arXiv},
year = {2022},
copyright = {arXiv.org perpetual, non-exclusive license}
}
@inproceedings{commonvoice:2020,
author = {Ardila, R. and Branson, M. and Davis, K. and Henretty, M. and Kohler, M. and Meyer, J. and Morais, R. and Saunders, L. and Tyers, F. M. and Weber, G.},
title = {Common Voice: A Massively-Multilingual Speech Corpus},
booktitle = {Proceedings of the 12th Conference on Language Resources and Evaluation (LREC 2020)},
pages = {4211--4215},
year = 2020
}
@misc{https://doi.org/10.48550/arxiv.2205.12446,
doi = {10.48550/ARXIV.2205.12446},
url = {https://arxiv.org/abs/2205.12446},
author = {Conneau, Alexis and Ma, Min and Khanuja, Simran and Zhang, Yu and Axelrod, Vera and Dalmia, Siddharth and Riesa, Jason and Rivera, Clara and Bapna, Ankur},
keywords = {Computation and Language (cs.CL), Machine Learning (cs.LG), Sound (cs.SD), Audio and Speech Processing (eess.AS), FOS: Computer and information sciences, FOS: Computer and information sciences, FOS: Electrical engineering, electronic engineering, information engineering, FOS: Electrical engineering, electronic engineering, information engineering},
title = {FLEURS: Few-shot Learning Evaluation of Universal Representations of Speech},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
} | null | 0 | 9 | ---
annotations_creators:
- found
language:
- kn
language_creators:
- found
license:
- cc-by-4.0
multilinguality:
- monolingual
pretty_name: Kannada ASR Corpus
size_categories:
- 100K<n<1M
source_datasets:
- extended|openslr
tags: []
task_categories:
- automatic-speech-recognition
task_ids: []
---
# Dataset Card for [Kannada Asr Corpus]
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
[More Information Needed]
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@parambharat](https://github.com/parambharat) for adding this dataset. |
laion/laion2b-en-vit-l-14-embeddings | 2022-12-17T02:00:01.000Z | [
"region:us"
] | laion | null | null | null | 10 | 9 | Entry not found |
NeelNanda/c4-code-20k | 2022-12-26T23:25:12.000Z | [
"region:us"
] | NeelNanda | null | null | null | 1 | 9 | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 101351288
num_examples: 20000
download_size: 42778874
dataset_size: 101351288
---
# Dataset Card for "c4-code-10k"
10K elements of C4 and 10K elements of code parrot clean (Python code).
Note that these are the datasets used to train my interpretability-friendly models, but is *not* of the correct mixture. Those models were trained on 83% C4 and 17% Python Code (ish) by tokens. This dataset has 10K strings of each, and by tokens is about 22M of code and 5M of C4 (code is longer and harder to compress!)
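A minimal loading sketch (the single `train` split and `text` field are taken from the metadata above):
```python
from datasets import load_dataset

c4_code = load_dataset("NeelNanda/c4-code-20k", split="train")
print(len(c4_code))  # 20,000 strings: 10K C4 + 10K Python code
print(c4_code[0]["text"][:200])
```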
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
keremberke/garbage-object-detection | 2023-01-05T11:30:08.000Z | [
"task_categories:object-detection",
"roboflow",
"region:us"
] | keremberke | null | @misc{ garbage-classification-3_dataset,
title = { GARBAGE CLASSIFICATION 3 Dataset },
type = { Open Source Dataset },
author = { Material Identification },
howpublished = { \\url{ https://universe.roboflow.com/material-identification/garbage-classification-3 } },
url = { https://universe.roboflow.com/material-identification/garbage-classification-3 },
journal = { Roboflow Universe },
publisher = { Roboflow },
year = { 2022 },
month = { mar },
note = { visited on 2023-01-02 },
} | null | 4 | 9 | ---
task_categories:
- object-detection
tags:
- roboflow
---
### Roboflow Dataset Page
[https://universe.roboflow.com/material-identification/garbage-classification-3/dataset/2](https://universe.roboflow.com/material-identification/garbage-classification-3/dataset/2?ref=roboflow2huggingface)
### Dataset Labels
```
['biodegradable', 'cardboard', 'glass', 'metal', 'paper', 'plastic']
```
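A minimal loading sketch (the config name is an assumption, not confirmed by this card; check the repository if it fails):
```python
from datasets import load_dataset

# Config/split layout is an assumption; Roboflow exports on the Hub
# are often split into configs such as "full" and "mini".
ds = load_dataset("keremberke/garbage-object-detection", name="full")
print(ds["train"][0])
```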
### Citation
```
@misc{ garbage-classification-3_dataset,
title = { GARBAGE CLASSIFICATION 3 Dataset },
type = { Open Source Dataset },
author = { Material Identification },
howpublished = { \\url{ https://universe.roboflow.com/material-identification/garbage-classification-3 } },
url = { https://universe.roboflow.com/material-identification/garbage-classification-3 },
journal = { Roboflow Universe },
publisher = { Roboflow },
year = { 2022 },
month = { mar },
note = { visited on 2023-01-02 },
}
```
### License
CC BY 4.0
### Dataset Summary
This dataset was exported via roboflow.com on July 27, 2022 at 5:44 AM GMT
Roboflow is an end-to-end computer vision platform that helps you
* collaborate with your team on computer vision projects
* collect & organize images
* understand unstructured image data
* annotate, and create datasets
* export, train, and deploy computer vision models
* use active learning to improve your dataset over time
It includes 10464 images.
The GARBAGE-GARBAGE-CLASSIFICATION annotations are in COCO format.
The following pre-processing was applied to each image:
* Auto-orientation of pixel data (with EXIF-orientation stripping)
* Resize to 416x416 (Stretch)
The following augmentation was applied to create 1 versions of each source image:
* 50% probability of horizontal flip
* 50% probability of vertical flip
* Equal probability of one of the following 90-degree rotations: none, clockwise, counter-clockwise, upside-down
|
irds/clinicaltrials_2021_trec-ct-2022 | 2023-01-05T02:54:20.000Z | [
"task_categories:text-retrieval",
"source_datasets:irds/clinicaltrials_2021",
"region:us"
] | irds | null | null | null | 1 | 9 | ---
pretty_name: '`clinicaltrials/2021/trec-ct-2022`'
viewer: false
source_datasets: ['irds/clinicaltrials_2021']
task_categories:
- text-retrieval
---
# Dataset Card for `clinicaltrials/2021/trec-ct-2022`
The `clinicaltrials/2021/trec-ct-2022` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package.
For more information about the dataset, see the [documentation](https://ir-datasets.com/clinicaltrials#clinicaltrials/2021/trec-ct-2022).
# Data
This dataset provides:
- `queries` (i.e., topics); count=50
- For `docs`, use [`irds/clinicaltrials_2021`](https://huggingface.co/datasets/irds/clinicaltrials_2021)
## Usage
```python
from datasets import load_dataset
queries = load_dataset('irds/clinicaltrials_2021_trec-ct-2022', 'queries')
for record in queries:
    record  # {'query_id': ..., 'text': ...}
```
Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the
data in 🤗 Dataset format.
|
irds/wapo_v3_trec-news-2020 | 2023-01-05T03:56:49.000Z | [
"task_categories:text-retrieval",
"region:us"
] | irds | null | null | null | 0 | 9 | ---
pretty_name: '`wapo/v3/trec-news-2020`'
viewer: false
source_datasets: []
task_categories:
- text-retrieval
---
# Dataset Card for `wapo/v3/trec-news-2020`
The `wapo/v3/trec-news-2020` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package.
For more information about the dataset, see the [documentation](https://ir-datasets.com/wapo#wapo/v3/trec-news-2020).
# Data
This dataset provides:
- `queries` (i.e., topics); count=50
- `qrels`: (relevance assessments); count=17,764
## Usage
```python
from datasets import load_dataset
queries = load_dataset('irds/wapo_v3_trec-news-2020', 'queries')
for record in queries:
    record  # {'query_id': ..., 'doc_id': ..., 'url': ...}
qrels = load_dataset('irds/wapo_v3_trec-news-2020', 'qrels')
for record in qrels:
    record  # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...}
```
Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the
data in 🤗 Dataset format.
|
amphora/korfin-asc | 2023-01-16T15:26:46.000Z | [
"task_categories:text-classification",
"task_ids:topic-classification",
"task_ids:sentiment-classification",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:klue",
"language:ko",
"license:cc-... | amphora | null | null | null | 1 | 9 | ---
annotations_creators:
- expert-generated
language:
- ko
language_creators:
- expert-generated
license: cc-by-sa-4.0
multilinguality:
- monolingual
pretty_name: KorFin-ABSA
size_categories:
- 1K<n<10K
source_datasets:
- klue
tags:
- sentiment analysis
- aspect based sentiment analysis
- finance
task_categories:
- text-classification
task_ids:
- topic-classification
- sentiment-classification
---
# Dataset Card for KorFin-ABSA
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
### Dataset Summary
KorFin-ASC is an extension of KorFin-ABSA comprising 8,818 samples annotated with (aspect, polarity) pairs.
The samples were collected from [KLUE-TC](https://klue-benchmark.com/tasks/66/overview/description) and
analyst reports from [Naver Finance](https://finance.naver.com).
Annotation of the dataset is described in the paper [Removing Non-Stationary Knowledge From Pre-Trained Language Models for Entity-Level Sentiment Classification in Finance](https://arxiv.org/abs/2301.03136).
### Supported Tasks and Leaderboards
This dataset supports the following tasks:
* Aspect-Based Sentiment Classification
### Languages
Korean
## Dataset Structure
### Data Instances
Each instance consists of a single sentence, aspect, and corresponding polarity (POSITIVE/NEGATIVE/NEUTRAL).
```
{
  "title": "LGU+ 1분기 영업익 1천706억원…마케팅 비용 감소",
  "aspect": "LG U+",
  "sentiment": "NEUTRAL",
  "url": "https://news.naver.com/main/read.nhn?mode=LS2D&mid=shm&sid1=105&sid2=227&oid=001&aid=0008363739",
  "annotator_id": "A_01",
  "Type": "single"
}
```
### Data Fields
* title: the sentence (news headline or analyst report excerpt)
* aspect: the target entity towards which the sentiment is expressed
* sentiment: the polarity label (POSITIVE/NEGATIVE/NEUTRAL)
* url: link to the source article
* annotator_id: identifier of the annotator
* Type: the annotation type (e.g., `single`)
### Data Splits
The dataset currently does not contain standard data splits.
## Additional Information
You can download the data via:
```
from datasets import load_dataset
dataset = load_dataset("amphora/KorFin-ASC")
```
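Since no standard splits are provided, a reproducible split has to be created locally; a minimal sketch, assuming the data loads as a single `train` split (the 90/10 ratio and seed are arbitrary):
```python
from datasets import load_dataset

dataset = load_dataset("amphora/KorFin-ASC")

# Create a reproducible 90/10 train/test split locally.
split = dataset["train"].train_test_split(test_size=0.1, seed=42)
train_ds, test_ds = split["train"], split["test"]
```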
Please find more information about the code and how the data was collected in the paper [Removing Non-Stationary Knowledge From Pre-Trained Language Models for Entity-Level Sentiment Classification in Finance](https://arxiv.org/abs/2301.03136).
The best-performing model on this dataset can be found at [link](https://huggingface.co/amphora/KorFinASC-XLM-RoBERTa).
### Licensing Information
KorFin-ASC is licensed under the terms of the [cc-by-sa-4.0](https://creativecommons.org/licenses/by-sa/4.0/) license.
### Citation Information
Please cite this data using:
```
@article{son2023removing,
title={Removing Non-Stationary Knowledge From Pre-Trained Language Models for Entity-Level Sentiment Classification in Finance},
author={Son, Guijin and Lee, Hanwool and Kang, Nahyeon and Hahm, Moonjeong},
journal={arXiv preprint arXiv:2301.03136},
year={2023}
}
```
### Contributions
Thanks to [@Albertmade](https://github.com/h-albert-lee), [@amphora](https://github.com/guijinSON) for making this dataset. |
poolrf2001/FaceMask | 2023-01-17T22:58:52.000Z | [
"task_categories:image-classification",
"task_ids:multi-class-image-classification",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:en",
"license:mit",
"region:us"
] | poolrf2001 | MaskFace is a dataset of images of people with and without face masks. It consists of 3 classes: one class where the person is wearing the mask,
another class where the person is not wearing the mask, and one class where the person is wearing the mask incorrectly. | @ONLINE {masksdata,
author="Pool_rf",
title="Mask face dataset",
month="January",
year="2023",
url="https://github.com/poolrf2001/maskFace"
} | null | 0 | 9 | ---
annotations_creators:
- expert-generated
language_creators:
- expert-generated
language:
- en
license:
- mit
multilinguality:
- monolingual
pretty_name: FaceMask
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- image-classification
task_ids:
- multi-class-image-classification
dataset_info:
features:
- name: image
dtype: image
- name: labels
dtype:
class_label:
names:
0: mask_weared_incorrect
1: with_mask
2: without_mask
splits:
- name: train
num_bytes: 38806014
num_examples: 1500
- name: validation
num_bytes: 4758962
num_examples: 180
- name: test
num_bytes: 4693735
num_examples: 180
download_size: 48258711
dataset_size: 49140913
---
# Dataset Card for FaceMask
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** (https://huggingface.co/datasets/poolrf2001/FaceMask)
- **Repository:** (https://huggingface.co/datasets/poolrf2001/FaceMask)
- **Paper:** N/A
- **Leaderboard:** N/A
- **Point of Contact:** N/A
### Dataset Summary
FaceMask is a dataset of face images of people wearing masks correctly, wearing them incorrectly, or not wearing them at all.
### Supported Tasks and Leaderboards
- `image-classification`: Based on a face image, the goal of this task is to predict whether the person is wearing a mask correctly, wearing it incorrectly, or not wearing one.
### Languages
English
## Dataset Structure
### Data Instances
A sample from the training set is provided below:
```
{
'image': <PIL.PngImagePlugin.PngImageFile image mode=RGB size=128x128 at 0x16BAA72A4A8>,
'labels': 1
}
```
### Data Fields
The data instances have the following fields:
- `image`: A `PIL.Image.Image` object containing the image. Note that when accessing the image column: `dataset[0]["image"]` the image file is automatically decoded. Decoding of a large number of image files might take a significant amount of time. Thus it is important to first query the sample index before the `"image"` column, *i.e.* `dataset[0]["image"]` should **always** be preferred over `dataset["image"][0]`.
- `labels`: an `int` classification label.
Class Label Mappings:
```json
{
"mask_weared_incorrect": 0,
"with_mask": 1,
"without_mask": 2,
}
```
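A sketch of recovering the class name from the integer label (the `train` split and `labels` feature follow from the `dataset_info` above):
```python
from datasets import load_dataset

dataset = load_dataset("poolrf2001/FaceMask", split="train")

sample = dataset[0]
# Map the integer label back to its class name, e.g. 1 -> "with_mask".
print(dataset.features["labels"].int2str(sample["labels"]))
```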
### Data Splits
| |train|validation|test|
|-------------|----:|---------:|---:|
|# of examples|1500 |180 |180 |
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
```
@ONLINE {masksdata,
author="Pool",
title="FaceMask dataset",
month="January",
year="2023",
url="https://github.com/poolrf2001/maskFace"
}
```
### Contributions
|
jonathan-roberts1/NWPU-RESISC45 | 2023-03-31T16:57:43.000Z | [
"task_categories:image-classification",
"task_categories:zero-shot-image-classification",
"license:other",
"region:us"
] | jonathan-roberts1 | null | null | null | 0 | 9 | ---
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': airplane
'1': airport
'2': baseball diamond
'3': basketball court
'4': beach
'5': bridge
'6': chaparral
'7': church
'8': circular farmland
'9': cloud
'10': commercial area
'11': dense residential
'12': desert
'13': forest
'14': freeway
'15': golf course
'16': ground track field
'17': harbor
'18': industrial area
'19': intersection
'20': island
'21': lake
'22': meadow
'23': medium residential
'24': mobile home park
'25': mountain
'26': overpass
'27': palace
'28': parking lot
'29': railway
'30': railway station
'31': rectangular farmland
'32': river
'33': roundabout
'34': runway
'35': sea ice
'36': ship
'37': snowberg
'38': sparse residential
'39': stadium
'40': storage tank
'41': tennis court
'42': terrace
'43': thermal power station
'44': wetland
splits:
- name: train
num_bytes: 381151705
num_examples: 31500
download_size: 424827902
dataset_size: 381151705
license: other
task_categories:
- image-classification
- zero-shot-image-classification
---
# Dataset Card for "NWPU-RESISC45"
## Dataset Description
- **Paper** [Remote sensing image scene classification: Benchmark and state of the art](https://ieeexplore.ieee.org/iel7/5/8045830/07891544.pdf)
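A minimal usage sketch (the single `train` split and the `image`/`label` columns follow from the `dataset_info` above):
```python
from datasets import load_dataset

dataset = load_dataset("jonathan-roberts1/NWPU-RESISC45", split="train")

sample = dataset[0]
# Print the image dimensions and the human-readable class name.
print(sample["image"].size, dataset.features["label"].int2str(sample["label"]))
```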
### Licensing Information
[CC-BY-SA]
## Citation Information
[Remote sensing image scene classification: Benchmark and state of the art](https://ieeexplore.ieee.org/iel7/5/8045830/07891544.pdf)
```
@article{cheng2017remote,
title = {Remote sensing image scene classification: Benchmark and state of the art},
author = {Cheng, Gong and Han, Junwei and Lu, Xiaoqiang},
year = 2017,
journal = {Proceedings of the IEEE},
publisher = {IEEE},
volume = 105,
number = 10,
pages = {1865--1883}
}
``` |
eastwind/self-instruct-base | 2023-01-24T17:44:49.000Z | [
"license:apache-2.0",
"arxiv:2212.10560",
"region:us"
] | eastwind | null | null | null | 3 | 9 | ---
license: apache-2.0
---
# Dataset Card for Dataset Name
## Dataset Description
- **Repository: [Self-Instruct](https://github.com/yizhongw/self-instruct)**
- **Paper: [Self-Instruct: Aligning Language Model with Self Generated Instructions](https://arxiv.org/abs/2212.10560)**
### Dataset Summary
This dataset is a copy of yizhongw's data from the GitHub repository above; note that this copy was created on 24 January 2023.
## Dataset Structure
GPT3-finetuning format (prompt + completion)
### Data Fields
Prompt
"Task: [Instruction] Output:"
Completion
"[Answer]<|endoftext|>"
### Data Splits
No splits
## Dataset Creation
### Curation Rationale
Efficiently create a large dataset by using GPT-3 to generate the data
### Annotations
The dataset was generated and annotated by GPT-3
### Dataset Curators
yizhongw
### Licensing Information
Apache 2.0
### Citation Information
I am not the creator of this dataset; please see the GitHub link above.
|
jonathan-roberts1/RSSCN7 | 2023-03-31T17:20:53.000Z | [
"task_categories:image-classification",
"task_categories:zero-shot-image-classification",
"license:other",
"region:us"
] | jonathan-roberts1 | null | null | null | 0 | 9 | ---
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': field
'1': forest
'2': grass
'3': industry
'4': parking
'5': resident
'6': river or lake
splits:
- name: train
num_bytes: 345895442.4
num_examples: 2800
download_size: 367257922
dataset_size: 345895442.4
license: other
task_categories:
- image-classification
- zero-shot-image-classification
---
# Dataset Card for "RSSCN7"
## Dataset Description
- **Paper** [Deep Learning Based Feature Selection for Remote Sensing Scene Classification](https://ieeexplore.ieee.org/iel7/8859/7305891/07272047.pdf)
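Given the `zero-shot-image-classification` task category, a minimal zero-shot sketch (the CLIP checkpoint is an assumption, not part of this dataset):
```python
from datasets import load_dataset
from transformers import pipeline

dataset = load_dataset("jonathan-roberts1/RSSCN7", split="train")
classes = dataset.features["label"].names  # field, forest, grass, ...

clf = pipeline("zero-shot-image-classification", model="openai/clip-vit-base-patch32")
print(clf(dataset[0]["image"], candidate_labels=classes)[0])
```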
### Licensing Information
For research and academic purposes.
## Citation Information
[Deep Learning Based Feature Selection for Remote Sensing Scene Classification](https://ieeexplore.ieee.org/iel7/8859/7305891/07272047.pdf)
```
@article{7272047,
title = {Deep Learning Based Feature Selection for Remote Sensing Scene Classification},
author = {Zou, Qin and Ni, Lihao and Zhang, Tong and Wang, Qian},
year = 2015,
journal = {IEEE Geoscience and Remote Sensing Letters},
volume = 12,
number = 11,
pages = {2321--2325},
doi = {10.1109/LGRS.2015.2475299}
}
``` |
matchbench/books2 | 2023-02-12T10:18:25.000Z | [
"region:us"
] | matchbench | null | null | null | 0 | 9 | Entry not found |
cahya/instructions_indonesian | 2023-02-09T17:03:53.000Z | [
"license:mit",
"region:us"
] | cahya | null | null | null | 0 | 9 | ---
license: mit
---
# Indonesian Instructions Dataset
|
dirtycomputer/ChnSentiCorp_htl_all | 2023-02-17T06:46:13.000Z | [
"region:us"
] | dirtycomputer | null | null | null | 1 | 9 | Entry not found |
happycute/yelp_dataset | 2023-02-19T07:20:27.000Z | [
"license:other",
"region:us"
] | happycute | null | null | null | 0 | 9 | ---
license: other
---
|
dmayhem93/toolformer_raw_v0 | 2023-02-28T19:12:34.000Z | [
"region:us"
] | dmayhem93 | null | null | null | 3 | 9 | A first try at data generation for Toolformer with retrieval, calculator, and calendar tasks. Don't expect too much magic.
The C4 `en` variant was used to generate this data.
How to parse these:
Each item in the dataset comes with three components:
- file_index: index of c4 en streamed file
- text: complete text input to generation
- x_outputs: list of [score, token index, API call, API return]; the token index refers to the GPT-J tokenizer. |
BuroIdentidadDigital/recibos_telmex | 2023-09-29T21:53:42.000Z | [
"license:c-uda",
"region:us"
] | BuroIdentidadDigital | null | null | null | 1 | 9 | ---
license: c-uda
---
|
rmcpantoja/Ald_Mexican_Spanish_speech_dataset | 2023-03-13T05:59:04.000Z | [
"license:unlicense",
"region:us"
] | rmcpantoja | null | null | null | 1 | 9 | ---
license: unlicense
---
This dataset can be used to fine-tune Speech-to-Text as well as Text-to-Speech models.
## Dataset information
* Speaker: Aldo
* Dataset size: 535 audio files
* Audio duration: 4-15 seconds per file (1:33:15 total)
## Dataset structure
This dataset has been structured in the LJSpeech format:
* wavs/
* 1.wav
* 2.wav
* 3.wav
* ---
* 535.wav
* transcript.csv |
TurkuNLP/jigsaw_toxicity_pred_fi | 2023-09-25T09:56:33.000Z | [
"task_categories:text-classification",
"task_ids:multi-label-classification",
"multilinguality:translation",
"size_categories:100K<n<1M",
"source_datasets:extended|jigsaw_toxicity_pred",
"language:fi",
"license:cc-by-sa-4.0",
"toxicity, multi-label",
"region:us"
] | TurkuNLP | null | null | null | 0 | 9 | ---
license: cc-by-sa-4.0
task_categories:
- text-classification
task_ids:
- multi-label-classification
language:
- fi
multilinguality:
- translation
tags:
- toxicity, multi-label
source_datasets:
- extended|jigsaw_toxicity_pred
size_categories:
- 100K<n<1M
---
### Dataset Summary
This dataset is a DeepL-based machine-translated version of the Jigsaw toxicity dataset for Finnish. The dataset is originally from a Kaggle competition https://www.kaggle.com/c/jigsaw-toxic-comment-classification-challenge/data.
The dataset poses a multi-label text classification problem and includes the labels `identity_attack`, `insult`, `obscene`, `severe_toxicity`, `threat` and `toxicity`.
#### Example data
```
{
"label_identity_attack": 0,
"label_insult": 0,
"label_obscene": 0,
"label_severe_toxicity": 0,
"label_threat": 0,
"label_toxicity": 0,
"lang": "fi-deepl",
"text": "\" \n\n Hei Pieter Pietersen, ja tervetuloa Wikipediaan! \n\n Tervetuloa Wikipediaan! Toivottavasti viihdyt tietosanakirjassa ja haluat jäädä tänne. Ensimmäiseksi voit lukea johdannon. \n\n Jos sinulla on kysyttävää, voit kysyä minulta keskustelusivullani - autan mielelläni. Tai voit kysyä kysymyksesi Uusien avustajien ohjesivulla. \n\n - \n Seuraavassa on lisää resursseja, jotka auttavat sinua tutkimaan ja osallistumaan maailman suurinta tietosanakirjaa.... \n\n Löydät perille: \n\n \n * Sisällysluettelo \n\n * Osastohakemisto \n\n \n Tarvitsetko apua? \n\n \n * Kysymykset - opas siitä, mistä voi esittää kysymyksiä. \n * Huijausluettelo - pikaohje Wikipedian merkintäkoodeista. \n\n * Wikipedian 5 pilaria - yleiskatsaus Wikipedian perustaan. \n * The Simplified Ruleset - yhteenveto Wikipedian tärkeimmistä säännöistä. \n\n \n Miten voit auttaa: \n\n \n * Wikipedian avustaminen - opas siitä, miten voit auttaa. \n\n * Yhteisöportaali - Wikipedian toiminnan keskus. \n\n \n Lisää vinkkejä... \n\n \n * Allekirjoita viestisi keskustelusivuilla neljällä tildillä (~~~~). Tämä lisää automaattisesti \"\"allekirjoituksesi\"\" (käyttäjänimesi ja päivämääräleima). Myös Wikipedian tekstinmuokkausikkunan yläpuolella olevassa työkalupalkissa oleva painike tekee tämän. \n\n * Jos haluat leikkiä uusilla Wiki-taidoillasi, Hiekkalaatikko on sinua varten. \n\n \n Onnea ja hauskaa. \""
}
```
### Data Fields
Fields prefixed with `label_` contain either `0`, to convey that that category of toxicity is *not* present in the text, or `1`, to convey that it is present.
- `label_identity_attack`: a `int64` feature.
- `label_insult`: a `int64` feature.
- `label_obscene`: a `int64` feature.
- `label_severe_toxicity`: a `int64` feature.
- `label_threat`: a `int64` feature.
- `label_toxicity`: a `int64` feature.
- `lang`: a `string` feature.
- `text`: a `string` feature.
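A sketch of loading the data and filtering for comments with at least one toxicity label set:
```python
from datasets import load_dataset

dataset = load_dataset("TurkuNLP/jigsaw_toxicity_pred_fi")

label_columns = ["label_identity_attack", "label_insult", "label_obscene",
                 "label_severe_toxicity", "label_threat", "label_toxicity"]
# Keep only comments flagged with at least one toxicity category.
toxic = dataset["train"].filter(lambda ex: any(ex[c] == 1 for c in label_columns))
```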
### Data Splits
The splits are the same as in the original English data.
| dataset | train | test |
| -------- | -----: | ---------: |
| TurkuNLP/jigsaw_toxicity_pred_fi| 159571 | 63978 |
### Evaluation Results
Results from fine-tuning [TurkuNLP/bert-large-finnish-cased-v1](https://huggingface.co/TurkuNLP/bert-large-finnish-cased-v1) for multi-label toxicity detection. The fine-tuned model can be found
| dataset | F1-micro | Precision | Recall |
| -------------------- | ----: | ---: | ----: |
| TurkuNLP/jigsaw_toxicity_pred_fi | 0.66 | 0.58 | 0.76 |
<!--- Base results from fine-tuning [bert-large-cased](https://huggingface.co/bert-large-cased) on the original English data for multi-label toxicity detection.
| dataset | F1-micro | Precision | Recall |
| -------------------- | ----: | ---: | ----: |
| jigsaw_toxicity_pred | 0.69 | 0.59 | 0.81 | --->
### Considerations for Using the Data
Due to DeepL terms and conditions, this dataset **must not be used for any machine translation work**, namely machine translation
system development and evaluation of any kind. In general, we wish you do not pair the original English data with the translations
except when working on research unrelated to machine translation, so as not to infringe on the terms and conditions.
### Licensing Information
Contents of this repository are distributed under the
[Creative Commons Attribution-ShareAlike 4.0 International License (CC BY-SA 4.0)](https://creativecommons.org/licenses/by-sa/4.0/).
Copyright of the dataset contents belongs to the original copyright holders.
### Citing
To cite this dataset use the following bibtex.
```
@inproceedings{eskelinen-etal-2023-toxicity,
title = "Toxicity Detection in {F}innish Using Machine Translation",
author = "Eskelinen, Anni and
Silvala, Laura and
Ginter, Filip and
Pyysalo, Sampo and
Laippala, Veronika",
booktitle = "Proceedings of the 24th Nordic Conference on Computational Linguistics (NoDaLiDa)",
month = may,
year = "2023",
address = "T{\'o}rshavn, Faroe Islands",
publisher = "University of Tartu Library",
url = "https://aclanthology.org/2023.nodalida-1.68",
pages = "685--697",
abstract = "Due to the popularity of social media platforms and the sheer amount of user-generated content online, the automatic detection of toxic language has become crucial in the creation of a friendly and safe digital space. Previous work has been mostly focusing on English leaving many lower-resource languages behind. In this paper, we present novel resources for toxicity detection in Finnish by introducing two new datasets, a machine translated toxicity dataset for Finnish based on the widely used English Jigsaw dataset and a smaller test set of Suomi24 discussion forum comments originally written in Finnish and manually annotated following the definitions of the labels that were used to annotate the Jigsaw dataset. We show that machine translating the training data to Finnish provides better toxicity detection results than using the original English training data and zero-shot cross-lingual transfer with XLM-R, even with our newly annotated dataset from Suomi24.",
}
``` |
CATIE-AQ/frenchQA | 2023-08-24T07:43:59.000Z | [
"task_categories:question-answering",
"size_categories:100K<n<1M",
"language:fr",
"license:cc-by-4.0",
"doi:10.57967/hf/0862",
"region:us"
] | CATIE-AQ | One French QA Dataset to rule them all, One French QA Dataset to find them, One French QA Dataset to bring them all, and in the darkness bind them. | \ | null | 0 | 9 | ---
task_categories:
- question-answering
language:
- fr
size_categories:
- 100K<n<1M
license: cc-by-4.0
---
# Dataset information
Dataset concatenating all QA datasets with context available in French and open-source.
In addition, an augmented version of these datasets has been added (same context but different questions to create data in SQuADv2 format).
In total, there are 221,348 training rows, **910** validation rows and 6,376 test rows (the first 3,188 rows correspond to SQuAD v1 format, the remaining 3,188 to SQuAD v2 format).
In practice, due to the restrictive license for the FQUAD 1.0 dataset, we can only share **179,886** rows of the 221,348 training data and not the test dataset.
# Usage
```
from datasets import load_dataset
dataset = load_dataset("CATIE-AQ/frenchQA", sep=";")
```
```
dataset
DatasetDict({
train: Dataset({
features: ['context', 'question', 'answer', 'answer_start', 'dataset'],
num_rows: 179886
})
validation: Dataset({
features: ['context', 'question', 'answer', 'answer_start', 'dataset'],
num_rows: 910
})
})
```
# Dataset
## Dataset details
| Dataset | Format | Train split | Dev split | Test split | Available in frenchQA |
| ----------- | ----------- | ----------- | ----------- | ----------- | ------------------------ |
| [piaf](https://www.data.gouv.fr/en/datasets/piaf-le-dataset-francophone-de-questions-reponses/)| SQuAD 1.0 | 9 224 Q & A | X | X | Yes |
| piaf_v2| SQuAD 2.0 | 9 224 Q & A | X | X | Yes |
| [fquad](https://fquad.illuin.tech/)| SQuAD 1.0 | 20 731 Q & A | 3 188 Q & A (is not used for training, but as a test dataset) | 2 189 Q & A (not freely available)| No due to the license |
| fquad_v2 | SQuAD 2.0 | 20 731 Q & A | 3 188 Q & A (is not used for training, but as a test dataset) | X | No due to the license |
| [lincoln/newsquadfr](https://huggingface.co/datasets/lincoln/newsquadfr) | SQuAD 1.0 | 1 650 Q & A | 455 Q & A | X | Yes |
| lincoln/newsquadfr_v2 | SQuAD 2.0 | 1 650 Q & A | 455 Q & A | X | Yes |
| [pragnakalp/squad_v2_french_translated](https://huggingface.co/datasets/pragnakalp/squad_v2_french_translated)| SQuAD 2.0 | 79 069 Q & A | X | X | Yes |
| pragnakalp/squad_v2_french_translated_v2| SQuAD 2.0 | 79 069 Q & A | X | X | Yes |
## Columns
```
dataset_train = dataset['train'].to_pandas()
dataset_train.head()
context question answer answer_start dataset
0 Beyoncé Giselle Knowles-Carter (/ biːˈjɒnseɪ /... Quand Beyonce a-t-elle commencé à devenir popu... à la fin des années 1990 269 pragnakalp/squad_v2_french_translated
1 Beyoncé Giselle Knowles-Carter (/ biːˈjɒnseɪ /... Quand Beyonce a-t-elle quitté Destiny's Child ... 2003 549 pragnakalp/squad_v2_french_translated
2 Beyoncé Giselle Knowles-Carter (/ biːˈjɒnseɪ /... Qui a dirigé le groupe Destiny's Child ? Mathew Knowles 376 pragnakalp/squad_v2_french_translated
3 Beyoncé Giselle Knowles-Carter (/ biːˈjɒnseɪ /... Quand Beyoncé a-t-elle sorti Dangerously in Lo... 2003 549 pragnakalp/squad_v2_french_translated
4 Beyoncé Giselle Knowles-Carter (/ biːˈjɒnseɪ /... Combien de Grammy Awards Beyoncé a-t-elle gagn... cinq 629 pragnakalp/squad_v2_french_translated
```
- the `context` column contains the context
- the `question` column contains the question
- the `answer` column contains the answer (has been replaced by `no_answer` for rows in SQuAD v2 format)
- the `answer_start` column contains the start position of the answer in the context (has been replaced by `-1` for rows in SQuAD v2 format)
- the `dataset` column identifies the row's original dataset (if you wish to apply filters to it, rows in SQuAD v2 format are indicated with the suffix `_v2` in the dataset name)
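For example, a sketch of keeping only the SQuAD v1-style rows by filtering on the `dataset` column, as described above:
```python
from datasets import load_dataset

dataset = load_dataset("CATIE-AQ/frenchQA", sep=";")

# Rows in SQuAD v2 format carry the "_v2" suffix in their `dataset` name.
squad_v1_only = dataset["train"].filter(lambda ex: not ex["dataset"].endswith("_v2"))
```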
## Split
- `train` corresponds to the concatenation of the training dataset from `pragnakalp/squad_v2_french_translated` + `lincoln/newsquadfr` + `PIAFv1.2` + the augmented version of each dataset in SQuADv2 format (no shuffle has been performed)
- `validation` corresponds to the concatenation of the newsquadfr validation dataset + this same dataset expanded in SQuAD v2 format (= newsquadfr_v2) (no shuffle performed)
# Question type statistics
The question type distribution is as follows:
| Type of question | Frequency in percent |
| ----------- | ----------- |
|What (que) |55.02|
|Who (qui) |15.96|
|How much (combien)|7.92|
|When (quand) |6.90|
|Where (où) |3.15|
|How (comment) |3.76|
|What (quoi) |2.60|
|Why (pourquoi) |1.25|
|Other |3.44|
The number of questions containing a negation, e.g. "What was the name of Chopin's first music teacher who was not an amateur musician?", is estimated at 3.55% of the total questions.
For information, the distribution of the complete dataset (containing FQUAD 1.0 and FQUAD 1.0 data in SQUAD 2.0 format) is as follows:
| Type of question | Frequency in percent |
| ----------- | ----------- |
|What (que) |55.12|
|Who (qui) |16.24|
|How much (combien)|7.56|
|When (quand) |6.85|
|Where (où) |3.98|
|How (comment) |3.76|
|What (quoi) |2.94|
|Why (pourquoi) |1.41|
|Other |2.14|
The number of questions containing a negation, e.g. "What was the name of Chopin's first music teacher who was not an amateur musician?", is estimated at 3.07% of the total questions.
# Citation
```
@misc {centre_aquitain_des_technologies_de_l'information_et_electroniques_2023,
author = { {Centre Aquitain des Technologies de l'Information et Electroniques} },
title = { frenchQA (Revision 6249cd5) },
year = 2023,
url = { https://huggingface.co/datasets/CATIE-AQ/frenchQA },
doi = { 10.57967/hf/0862 },
publisher = { Hugging Face }
}
```
# License
[cc-by-4.0](https://creativecommons.org/licenses/by/4.0/deed.en) |
musabg/commoncrawl-tr | 2023-05-09T20:04:43.000Z | [
"region:us"
] | musabg | null | null | null | 1 | 9 | ---
dataset_info:
features:
- name: id
dtype: int64
- name: text
dtype: string
- name: meta
struct:
- name: warc_headers
struct:
- name: warc-record-id
dtype: string
- name: warc-date
dtype: string
- name: content-type
dtype: string
- name: content-length
dtype: int32
- name: warc-type
dtype: string
- name: warc-identified-content-language
dtype: string
- name: warc-refers-to
dtype: string
- name: warc-target-uri
dtype: string
- name: warc-block-digest
dtype: string
- name: identification
struct:
- name: label
dtype: string
- name: prob
dtype: float32
- name: harmful_pp
dtype: float32
- name: tlsh
dtype: string
- name: quality_warnings
sequence: string
- name: categories
sequence: string
- name: sentence_identifications
list:
- name: label
dtype: string
- name: prob
dtype: float32
splits:
- name: train
num_bytes: 85952224217
num_examples: 13327165
download_size: 46952332972
dataset_size: 85952224217
---
# Dataset Card for "commoncrawl-tr"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
nlpcloud/instructions-dataset-adapted-from-stanford-alpaca-for-gpt-j | 2023-03-16T12:18:55.000Z | [
"license:gpl-3.0",
"region:us"
] | nlpcloud | null | null | null | 10 | 9 | ---
license: gpl-3.0
---
This dataset is an adaptation of the [Stanford Alpaca dataset](https://github.com/tatsu-lab/stanford_alpaca/blob/main/alpaca_data.json) in order to turn a text generation model like GPT-J into an "instruct" model.
The initial dataset was slightly reworked in order to match the GPT-J fine-tuning format with Mesh Transformer Jax on TPUs. |
thewall/tokenizer | 2023-05-16T09:53:53.000Z | [
"license:openrail",
"region:us"
] | thewall | null | null | null | 0 | 9 | ---
license: openrail
---
|
Deysi/sentences-and-emotions | 2023-03-21T22:54:16.000Z | [
"task_categories:text-classification",
"size_categories:100K<n<1M",
"language:en",
"region:us"
] | Deysi | null | null | null | 2 | 9 | ---
dataset_info:
features:
- name: utterance
dtype: string
- name: emotion
dtype: string
splits:
- name: test
num_bytes: 62487
num_examples: 816
- name: valid
num_bytes: 39971
num_examples: 493
- name: train
num_bytes: 188423
num_examples: 2405
download_size: 36170
dataset_size: 290881
task_categories:
- text-classification
language:
- en
pretty_name: Sentences and emotions
size_categories:
- 1K<n<10K
---
# Dataset Card for "sentences-and-emotions"
Recognizing Emotion Cause in Conversations. Soujanya Poria, Navonil Majumder, Devamanyu Hazarika, Deepanway Ghosal, Rishabh Bhardwaj, Samson Yu Bai Jian, Pengfei Hong, Romila Ghosh, Abhinaba Roy, Niyati Chhaya, Alexander Gelbukh, Rada Mihalcea. Cognitive Computation (2021). |
dvilasuero/somos-alpaca-es | 2023-03-23T23:41:11.000Z | [
"region:us"
] | dvilasuero | null | null | null | 0 | 9 | ---
dataset_info:
features:
- name: text
dtype: 'null'
- name: inputs
dtype: 'null'
- name: prediction
dtype: 'null'
- name: prediction_agent
dtype: 'null'
- name: annotation
dtype: 'null'
- name: annotation_agent
dtype: 'null'
- name: vectors
dtype: 'null'
- name: multi_label
dtype: 'null'
- name: explanation
dtype: 'null'
- name: id
dtype: 'null'
- name: metadata
dtype: 'null'
- name: status
dtype: 'null'
- name: event_timestamp
dtype: 'null'
- name: metrics
dtype: 'null'
splits:
- name: train
download_size: 0
dataset_size: 0
---
# Dataset Card for "somos-alpaca-es"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
bharat-raghunathan/indian-foods-dataset | 2023-03-26T08:58:10.000Z | [
"task_categories:image-classification",
"task_categories:text-to-image",
"size_categories:1K<n<10K",
"language:en",
"license:cc0-1.0",
"region:us"
] | bharat-raghunathan | null | null | null | 1 | 9 | ---
license: cc0-1.0
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': biryani
'1': cholebhature
'2': dabeli
'3': dal
'4': dhokla
'5': dosa
'6': jalebi
'7': kathiroll
'8': kofta
'9': naan
'10': pakora
'11': paneer
'12': panipuri
'13': pavbhaji
'14': vadapav
splits:
- name: train
num_bytes: 611741947.222
num_examples: 3809
- name: test
num_bytes: 153961285
num_examples: 961
download_size: 688922167
dataset_size: 765703232.222
task_categories:
- image-classification
- text-to-image
language:
- en
pretty_name: indian-foods
size_categories:
- 1K<n<10K
---
# Dataset Card for Indian Foods Dataset
## Dataset Description
- **Homepage:** https://www.kaggle.com/datasets/anshulmehtakaggl/themassiveindianfooddataset
- **Repository:** https://www.kaggle.com/datasets/anshulmehtakaggl/themassiveindianfooddataset
- **Paper:**
- **Leaderboard:**
- **Point of Contact:** https://www.kaggle.com/anshulmehtakaggl
### Dataset Summary
This is a multi-class (multi-category) Indian food image classification dataset showcasing [The-massive-Indian-Food-Dataset](https://www.kaggle.com/datasets/anshulmehtakaggl/themassiveindianfooddataset).
This card has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1).
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
English
## Dataset Structure
```json
{
"image": "Image(decode=True, id=None)",
"target": "ClassLabel(names=['biryani', 'cholebhature', 'dabeli', 'dal', 'dhokla', 'dosa', 'jalebi', 'kathiroll', 'kofta', 'naan', 'pakora', 'paneer', 'panipuri', 'pavbhaji', 'vadapav'], id=None)"
}
```
### Dataset Splits
This dataset is split into a train and test split. The split sizes are as follows:
| Split name | Num samples |
| ------------ | ------------------- |
| train | 3809 |
| test | 961 |
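A sketch of loading the data and mapping an integer label back to its food name:
```python
from datasets import load_dataset

dataset = load_dataset("bharat-raghunathan/indian-foods-dataset")

sample = dataset["train"][0]
# e.g. 0 -> "biryani", 5 -> "dosa"
print(dataset["train"].features["label"].int2str(sample["label"]))
```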
### Data Instances
Each instance is a picture of the Indian food item, along with the category it belongs to.
#### Initial Data Collection and Normalization
Collection by Scraping data from Google Images + Leveraging some JS Functions.
All the images are resized to (300,300) to maintain size uniformity.
### Dataset Curators
[Anshul Mehta](https://www.kaggle.com/anshulmehtakaggl)
### Licensing Information
[CC0: Public Domain](https://creativecommons.org/publicdomain/zero/1.0/)
### Citation Information
[The Massive Indian Foods Dataset](https://www.kaggle.com/datasets/anshulmehtakaggl/themassiveindianfooddataset) |
bbaaaa/iwslt14-de-en-preprocess | 2023-03-28T16:19:35.000Z | [
"task_categories:translation",
"annotations_creators:crowdsourced",
"language_creators:expert-generated",
"multilinguality:translation",
"source_datasets:original",
"language:de",
"language:en",
"license:cc-by-nc-nd-4.0",
"region:us"
] | bbaaaa | The IWSLT 2017 Multilingual Task addresses text translation, including zero-shot translation, with a single MT system across all directions including English, German, Dutch, Italian and Romanian. As unofficial task, conventional bilingual text translation is offered between English and Arabic, French, Japanese, Chinese, German and Korean. | @inproceedings{cettolo-etal-2017-overview,
title = "Overview of the {IWSLT} 2017 Evaluation Campaign",
author = {Cettolo, Mauro and
Federico, Marcello and
Bentivogli, Luisa and
Niehues, Jan and
St{\\"u}ker, Sebastian and
Sudoh, Katsuhito and
Yoshino, Koichiro and
Federmann, Christian},
booktitle = "Proceedings of the 14th International Conference on Spoken Language Translation",
month = dec # " 14-15",
year = "2017",
address = "Tokyo, Japan",
publisher = "International Workshop on Spoken Language Translation",
url = "https://aclanthology.org/2017.iwslt-1.1",
pages = "2--14",
} | null | 0 | 9 | ---
annotations_creators:
- crowdsourced
language:
- de
- en
language_creators:
- expert-generated
license:
- cc-by-nc-nd-4.0
multilinguality:
- translation
pretty_name: IWSLT 2014 with fairseq preprocess
source_datasets:
- original
task_categories:
- translation
task_ids: []
paperswithcode_id: iwslt-2014
---
# Dataset Card for IWSLT 2014 with fairseq preprocess
## Dataset Description
- **Homepage:** [https://sites.google.com/site/iwsltevaluation2014](https://sites.google.com/site/iwsltevaluation2014)
dataset_info:
- config_name: de-en
features:
- name: translation
languages:
- de
- en
splits:
- name: train
num_examples: 160239
- name: test
num_examples: 6750
- name: validation
num_examples: 7283
|
Francesco/cable-damage | 2023-03-30T09:29:47.000Z | [
"task_categories:object-detection",
"annotations_creators:crowdsourced",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:en",
"license:cc",
"rf100",
"region:us"
] | Francesco | null | null | null | 1 | 9 | ---
dataset_info:
features:
- name: image_id
dtype: int64
- name: image
dtype: image
- name: width
dtype: int32
- name: height
dtype: int32
- name: objects
sequence:
- name: id
dtype: int64
- name: area
dtype: int64
- name: bbox
sequence: float32
length: 4
- name: category
dtype:
class_label:
names:
'0': cable-damage
'1': break
'2': thunderbolt
annotations_creators:
- crowdsourced
language_creators:
- found
language:
- en
license:
- cc
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- object-detection
task_ids: []
pretty_name: cable-damage
tags:
- rf100
---
# Dataset Card for cable-damage
**The original COCO dataset is stored at `dataset.tar.gz`**
## Dataset Description
- **Homepage:** https://universe.roboflow.com/object-detection/cable-damage
- **Point of Contact:** francesco.zuppichini@gmail.com
### Dataset Summary
cable-damage
### Supported Tasks and Leaderboards
- `object-detection`: The dataset can be used to train a model for Object Detection.
### Languages
English
## Dataset Structure
### Data Instances
A data point comprises an image and its object annotations.
```
{
'image_id': 15,
'image': <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=640x640 at 0x2373B065C18>,
    'width': 640,
'height': 640,
'objects': {
'id': [114, 115, 116, 117],
'area': [3796, 1596, 152768, 81002],
'bbox': [
[302.0, 109.0, 73.0, 52.0],
[810.0, 100.0, 57.0, 28.0],
[160.0, 31.0, 248.0, 616.0],
[741.0, 68.0, 202.0, 401.0]
],
'category': [4, 4, 0, 0]
}
}
```
### Data Fields
- `image_id`: the image id
- `image`: `PIL.Image.Image` object containing the image. Note that when accessing the image column: `dataset[0]["image"]` the image file is automatically decoded. Decoding of a large number of image files might take a significant amount of time. Thus it is important to first query the sample index before the `"image"` column, *i.e.* `dataset[0]["image"]` should **always** be preferred over `dataset["image"][0]`
- `width`: the image width
- `height`: the image height
- `objects`: a dictionary containing bounding box metadata for the objects present on the image
- `id`: the annotation id
- `area`: the area of the bounding box
- `bbox`: the object's bounding box (in the [coco](https://albumentations.ai/docs/getting_started/bounding_boxes_augmentation/#coco) format)
- `category`: the object's category.
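A sketch of rendering the COCO-format boxes onto an image with Pillow (the `train` split name is an assumption):
```python
from datasets import load_dataset
from PIL import ImageDraw

dataset = load_dataset("Francesco/cable-damage", split="train")
sample = dataset[0]

image = sample["image"].copy()
draw = ImageDraw.Draw(image)
# COCO boxes are [top-left x, top-left y, width, height].
for x, y, w, h in sample["objects"]["bbox"]:
    draw.rectangle([x, y, x + w, y + h], outline="red", width=2)
image.save("annotated.png")
```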
#### Who are the annotators?
Annotators are Roboflow users
## Additional Information
### Licensing Information
See original homepage https://universe.roboflow.com/object-detection/cable-damage
### Citation Information
```
@misc{ cable-damage,
title = { cable damage Dataset },
type = { Open Source Dataset },
author = { Roboflow 100 },
howpublished = { \url{ https://universe.roboflow.com/object-detection/cable-damage } },
url = { https://universe.roboflow.com/object-detection/cable-damage },
journal = { Roboflow Universe },
publisher = { Roboflow },
year = { 2022 },
month = { nov },
note = { visited on 2023-03-29 },
}"
```
### Contributions
Thanks to [@mariosasko](https://github.com/mariosasko) for adding this dataset. |
htriedman/wiki-sparql | 2023-05-09T15:50:05.000Z | [
"license:cc-by-sa-3.0",
"region:us"
] | htriedman | null | null | null | 4 | 9 | ---
license: cc-by-sa-3.0
dataset_info:
features:
- name: instruction
dtype: string
- name: input
dtype: string
- name: output
dtype: string
splits:
- name: test
num_bytes: 11923963
num_examples: 15103
- name: validation
num_bytes: 5137278
num_examples: 6474
- name: train
num_bytes: 189447618
num_examples: 240000
download_size: 54649053
dataset_size: 206508859
---
|
RyokoAI/CNNovel125K | 2023-04-04T11:38:03.000Z | [
"task_categories:text-classification",
"task_categories:text-generation",
"size_categories:100K<n<1M",
"language:zh",
"license:apache-2.0",
"novel",
"training",
"region:us"
] | RyokoAI | null | null | null | 14 | 9 | ---
license: apache-2.0
language:
- zh
tags:
- novel
- training
task_categories:
- text-classification
- text-generation
pretty_name: CNNovel125K
size_categories:
- 100K<n<1M
---
# Dataset Card for CNNovel125K
*The BigKnow2022 dataset and its subsets are not yet complete. Not all information here may be accurate or accessible.*
## Dataset Description
- **Homepage:** (TODO)
- **Repository:** <https://github.com/RyokoAI/BigKnow2022>
- **Paper:** N/A
- **Leaderboard:** N/A
- **Point of Contact:** Ronsor/undeleted <ronsor@ronsor.com>
### Dataset Summary
CNNovel125K is a dataset composed of approximately 125,000 novels downloaded from the Chinese novel hosting site <http://ibiquw.com>.
### Supported Tasks and Leaderboards
This dataset is primarily intended for unsupervised training of text generation models; however, it may be useful for other purposes.
* text-classification
* text-generation
### Languages
* Simplified Chinese
## Dataset Structure
### Data Instances
```json
{
"text": "\n------------\n\n全部章节\n\n\n------------\n\n第一章 她肯定做梦呢!\n\n HT国际大酒店总统套房。\n\n 清晨的第一缕阳光照射进圣地亚哥地板上,洒落在凌乱的床单上,突然地,床上睡的正熟的人睁开眼睛,
猛然惊醒!\n\n ...",
"meta": {
"subset": "cnnovel.ibiquw",
"id": "100067",
"q": 0.9,
"lang": "zh_cn",
"title": "为爱入局:嫁给秦先生",
"author": "奥德萨"
}
}
{
"text": "\n------------\n\n全部章节\n\n\n------------\n\n第1章:出狱就大婚\n\n 凉城第一监狱,大门缓缓打开,秦峰仰起头,贪婪的呼吸了一口空气。\n\n 三年了,终于又闻到了自由的味道。\n\n 他回过头,看着目
送他出来的那群人道:...",
"meta": {
"subset": "cnnovel.ibiquw",
"id": "100059",
"q": 0.9,
"lang": "zh_cn",
"title": "绝世弃婿",
"author": "绷带怪"
}
}
```
### Data Fields
* `text`: the actual novel text, all chapters
* `meta`: entry metadata
* `subset`: dataset tag: `cnnovel.ibiquw`
* `id`: novel ID
* `q`: quality score, fixed at 0.9
* `lang`: always `zh_cn` (Simplified Chinese)
* `title`: novel title
* `author`: novel author
### Data Splits
No splitting of the data was performed.
## Dataset Creation
### Curation Rationale
TODO
### Source Data
#### Initial Data Collection and Normalization
TODO
#### Who are the source language producers?
The authors of each novel.
### Annotations
#### Annotation process
Titles were collected alongside the novel text and IDs.
#### Who are the annotators?
There were no human annotators.
### Personal and Sensitive Information
The dataset contains only works of fiction, and we do not believe it contains any PII.
## Considerations for Using the Data
### Social Impact of Dataset
This dataset is intended to be useful for anyone who wishes to train a model to generate "more entertaining" content in Chinese.
It may also be useful for other languages depending on your language model.
### Discussion of Biases
This dataset is composed of fictional works by various authors. Because of this fact, the contents of this dataset will reflect
the biases of those authors. Beware of stereotypes.
### Other Known Limitations
N/A
## Additional Information
### Dataset Curators
Ronsor Labs
### Licensing Information
Apache 2.0, for all parts of which Ronsor Labs or the Ryoko AI Production Committee may be considered authors. All other material is
distributed under fair use principles.
### Citation Information
```
@misc{ryokoai2023-bigknow2022,
title = {BigKnow2022: Bringing Language Models Up to Speed},
author = {Ronsor},
year = {2023},
howpublished = {\url{https://github.com/RyokoAI/BigKnow2022}},
}
```
### Contributions
Thanks to @ronsor (GH) for gathering this dataset. |
InstaDeepAI/multi_species_genomes | 2023-06-02T14:02:42.000Z | [
"DNA",
"Genomics",
"Nucleotide",
"region:us"
] | InstaDeepAI | Dataset made of diverse genomes available on NCBI and coming from ~850 different species.
Test and validation are made of 50 species each. The rest of the genomes are used for training.
Default configuration "6kbp" yields chunks of 6.2kbp (100bp overlap on each side). Similarly,
the "12kbp"configuration yields chunks of 12.2kbp. The chunks of DNA are cleaned and processed so that
they can only contain the letters A, T, C, G and N. | @article{o2016reference,
title={Reference sequence (RefSeq) database at NCBI: current status, taxonomic expansion, and functional annotation},
author={O'Leary, Nuala A and Wright, Mathew W and Brister, J Rodney and Ciufo, Stacy and Haddad, Diana and McVeigh, Rich and Rajput, Bhanu and Robbertse, Barbara and Smith-White, Brian and Ako-Adjei, Danso and others},
journal={Nucleic acids research},
volume={44},
number={D1},
pages={D733--D745},
year={2016},
publisher={Oxford University Press}
} | null | 6 | 9 | ---
tags:
- DNA
- Genomics
- Nucleotide
pretty_name: Multi-species genomes
---
# Dataset Card for the Multi-species genome
## Dataset Description
- **Repository:** [Nucleotide Transformer](https://github.com/instadeepai/nucleotide-transformer)
- **Paper:** [The Nucleotide Transformer: Building and Evaluating Robust Foundation Models for Human Genomics](https://www.biorxiv.org/content/10.1101/2023.01.11.523679v1)
### Dataset Summary
The Multi-species dataset was constructed by parsing the genomes available on [NCBI](https://www.ncbi.nlm.nih.gov/), before arbitrarily selecting only one species from each genus. Plant and virus genomes were not taken into account, as their regulatory elements differ from those of interest in the paper's tasks. The resulting collection of genomes was downsampled to a total of 850 species, in which several genomes that are heavily studied in the literature have been incorporated. The collection represents 174B nucleotides, resulting in roughly 29B tokens. The distribution of each genomics class in the dataset is displayed below:
| Class | Number of species | Number of nucleotides (B) |
| ---------------------| -------------------| --------------------------|
| Bacteria | 667 | 17.1 |
| Fungi | 46 | 2.3 |
| Invertebrate | 39 | 20.8 |
| Protozoa | 10 | 0.5 |
| Mammalian Vertebrate | 31 | 69.8 |
| Other Vertebrate | 57 | 63.4 |
### Supported Tasks and Leaderboards
This dataset has been used as a pre-training corpus for the Nucleotide Transformer models. Depending on the configuration used, each sequence is 6,200 or 12,200 base pairs long. If the dataset is iterated without being shuffled, the first 100 nucleotides of a sequence are the same as the last 100 base pairs of the previous sequence, and the last 100 nucleotides are the same as the first 100 base pairs of the next sequence. During training, this allows randomly selecting a nucleotide among the first 200 nucleotides of the sequence and starting the tokenization from that nucleotide. That way, the whole chromosome is covered and the model sees different tokens for a given sequence at each epoch.
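A sketch of that random-start trick (illustrative only; the tokenization step itself is omitted):
```python
import random

def crop_with_random_start(sequence: str, max_start: int = 200) -> str:
    # Start tokenizing from a random nucleotide within the first 200 bases:
    # the 100 bp overlaps between consecutive chunks keep the chromosome
    # covered while yielding different tokens for the same chunk each epoch.
    start = random.randrange(max_start)
    return sequence[start:]
```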
### Languages
DNA
## Dataset Structure
[N/A]
### Data Instances
For each instance, there is a string representing the sequence, a string indicating the description of the sequence, two integers representing the index of the first and last nucleotide respectively and the link to the genome's fasta URL. An instance is shown below:
```python
{'sequence': 'AAACTACCACTGGCTAAATTTCGACCATCTGGGCTAATAGCAACTGACCGCACCCAATATTTATGTCCTTTAAGTGTGCGAATTAGCTTTCCTGTGCCTAAATTCCAAACTTTGAGAGTGTTGTCATCGCTACCACTCACCAAAATTTTCCCATTAGGACTAATTGTTAATGCTTGAATGGAGTCAGTATGTCCTGTTAATGTGTAGACTATTTTACCTGTTGCCAAATTCCAGGCTTTAATAGTTTGATCATCACTCCCGCTAACCAAAGTTTTGCCATTGGGACTGATAGCCACAGCATTAACTTTTTGCGAATGTCCACTCAGGGTTAGTATTTCTTTTCCTGTGGTCAGATTCCACATTTTAATTATGCGTTCCCCTTCGCCACTACTAGCAATTGTCTGCCCATCGGGACTAATGGCGACAGAGACAACAGATTTTGCCCCACCTTTGAGGGTGTTAGCTAAGGAAATATTTTTAACTGGAACATTGGGTGACTGACCAAAAACAACTTCACCCTGAGTAGGACTGTAATTTCCTGGCTTTAGTCTCGATAACAAACTGGTTTGAATTTGGTGATATTTTTGATACCAAGTATCACTAAAACCAAATAACAAAATGAAAGCAGCGCCTAAAACTAAACTTTTGACAAAAGCATATTTAAAGGAGAACTTTGCACTCGGTTGAGTTACGGTGAATTTTCCTGATGATTGTCCGGCGGCTGGTAAGGCGCGTGGGAGTGATGGAATCAAATCTTTAATCACTTCATCGGCTGACTGGTAGCGTTGACTTAAGTCTTTTTGCAACAGCTTCGTCATCACCCCTTCCAATTCTGGCGACAAAGGACTACGCAAATATTCCCGCCAACTGTTCGCCCAGCCATAGCCATGTTCCATCCACAATTGAAAAGGGGATGTTCCTGTTAAGAGATGAAAACAGGTAGCCCCCAAACTGAACAAATCACTAGCTGGGTAAGCTTTACCGTCTCTGATTTGTTCCAGTGGAGAATAACCATGCGAACCAATGGATGTACCATTTTTATTCTTGACTTTTTCGGTTAATTGCTTAGAAGAACCAAAATCAATCAAGCTAAGTCGCCCATCATAACGACAGCGAATTAAATTTTCTGGTTTAATGTCTCGGTGAATCACACCGCGATCGTGAATGAATTTGAGTACAGGCAGTAAATCAAGTAAAATTGCTTGAATTTCATTCGCTTTATAGACTTTGCGCTGTTGTAATTCTTTTAACAAGTTCTGCCCATTAATAAACTGTTGTACCAAATAAAGGCAGTTATCTTGTTCAAAGTAAGCAATCAGTGTAGGAATTTGCGGATGTTCGCCGAGTTCTTGCAGTCGCTTGGCTTCTTCTGCAAATAACTCCATTGCTTTTTTCTGCGACCAAGTTCCTTGAAATTTCGGTGCTAATTGCTTAATTACACACAGTTCATTGAGTTTATCGGTATCTTCAGATAAATAAGTTCTGCCAAATCCCCCCTCATCGGAAAGCACCCGAATCACTCGAAAGCGATTTCTTAATAGTGGCACCAAGGGGGTGCTACAAGTTTGGCATGACTGCTTTCCTTTGGGATTTAGGGGATTTGGACAATCGGGATTTAAGCAGCAGATCATTATCTGACAGGCGCAACTGCATAAAAATTTTTACTAAATTAACCCCGATATTTCCCTAGATGATGATTGACTCTCACGTATTGATGGTAGATCCCGCTGGTAGTGGGGAGTGGGGAATCAATTATATAGTCAATTTTGGTAAATGCTCATAAGTTTTCTTCAATGCAGGAAAACTACGAGAGTCATCAGCTGAATTTTATCGATTATAGCAGCAGGCAAAAGTAGCAGACAGGTTAAGAGTGTCATTAGTCAAGACAAATGACTCATGACTAATGACTCATGACTAATAACTAAGGCTTTTGGGTGGCGATCGCTAATTTTGCCCCCTGGACTTGTCTGACTTGATCCATCACTGCCACTACTTTACCGTGGGTGACTGTTGCATCAGCATTCACAATTACTAATGCTTCTTGGTTATCGCCTACCAAGGTACGCAATTGTCCGGCTAAACCGTCAACAGTGCTTGGTTGACGGTTAACACTTACTATTCCATCTTTATCTACTGTGACGGTAATTTTGGCTGGAACTTGCTGCTGTTTGGCTGTCGCCGCTTTGGGTAAGTTGACGGGTAAACCTTCTGAGCGAGTTAAAAATAACGTTGACATGATAAAAAATGTCAAAATCGCAAATATCACATCAATCATTGGCACGATGTTGATTTGCGGTGGTAAATCTGGCTCATCTTGTAGACGCATAGGTTCTGTCTCCTCGTTCAAAGCGGCGGCGATAGAGCAGTTCTAATTGTCCACCATATTCTTGTATTGCGGCAATCTGTCGTTGATATAACCCTCGAAAGGTATTAGCAAATAAAAGTATAAAAATAGCCACAATTAAACCTGAAGCTGTAGATACCAGCGCTTCACTAATACCTGCGGTAACTCCTGCGGTTTTTGTCCCGCCTACATCACCCAAGTTTAATGATGCAAAAGAAGCAATCAAACCTAATACAGTACCCAGTAGACCTAAAAGTGGTGCAAGACCAATAATTGTGTCAAACATATTTTGAAAACGTTTGAGAACTGGGATTTCGGCTTGCGCTTCACTTTCTAGTGCAAGCCGAAATTCTTCTGGGGTTGGTTCTTCTAATTGCAACGCCGCTAAAAAAATCCGTGTCATGGGCAAATCTGCATTCTTTTGCAATTTATCCAACGCGCCAACAACATTATCAAGGCGGTAAAGATTCAACACTTCTCTGACTATGCGGTTTTGCCGAGTATTGATGCGATACCAAAAGCGGACTCGCTCGATAATTAAAGCAATTCCCACCACACTAAACGCCAGCAGGGGCCACATGACTACGCCACCTGCTACAAACAACTCATACATGGGCAATATCTCTAGGAACTAAATGGACAACGTTACAGTTAGACTAGCAGTTTACGGTACTAAATGATATATCTTATCAATAAGGAGTAGACAAAATAAAAAGCTATGTCAAATTCGGTTGAGTTTTGATGACATAATTATTCATTCTTGTTCAAGGCTTGATTCGCTACAATCCTGATGATGAAAGTATTTGTGTAAGTATACAGTTGATGAAAGCTAACTCAGGAATTTTTTTCTTTATTGCTTGACTTTTGCGAGAGATGGTTTTGAACAGAGTAATTACTAATAAGAACTTGCAATAAATTTAAACAGAACAGTAGTTTGTAGCTTTGCTTGAGAAGCGATCGCCCGACGTTGAGAGTTAAAGTATATTTTGCGTACTAACTTACCCAACGCCCAAAAAATTACATCATTTGAATATCGTCAATTTGTACTCTTAATCATCTATGGCTAAACTATTTGACTCAATCACAGAAGAACTGCAAGAGTTTATTGCAGCCCAAAACCTTTTCTTTGTAGGAACCGCGCCTCTGAGTGCTACAGGTCACGTTAATTTATCTCCCAAAGGTCTCGATTGCTTGCGG
ATTTTATCACCCCACAAAGTCGCCTATCTCGATCTCACAGGTAGCGGTAACGAAACTTCAGCCCATCTGCAAGAAAATGGTCGCATTACCTTCATGTTTTGCGCCTTCACTGAACCAGCGCGCATCTTGCGACTTTACGGTCAAGGACACGTAATTTTACCTAGCTATCCTGATTGGGATTCTGTATATTCAGTGTTTCCGCCGCTACCAGGAACTCGTCAAATTATCGTAGCTGATATTGAGATTGTGCAAAGTTCCTGTGGTTTCGGCGTTCCTCTTTACGAATACCAAGGTCAACGCCAAACACTAGTAAATTGGGCTGCTAAAAAAGGCGAACAGGGAGTCCGAGAATATCAACAACAAAAAAACAGCATCAGCATTGATGGTTTACCGACACCATTAGGCCAATTATCTGACGGTTAAAGCGGCGTTTCATATATTTTTAGTTAATCTGAACCAAAAAATCTCAAATTTTTTGTCAATAGTCTCTAGTCCAAAGAAGCTTGATTTTTGACCATAGATTGTAGGCTTTTGACAAAAATAACCTTTATAGAGAAAATTTATCCTTGCTGACACTCTATAACTAAGTTTATAAAACATAGCGTCAAAAATCGATACATATCAGTTCTATTTTCTGCCTCTATTCCTAATTAAATTTGGTGTAAAGGAACTATTATGCGGTTTCCGTGTCTTGACGTAATGATTTGCAACGAATTATGATTCGAGTTTAGTCCGGATCAACCGAGACATCCTCGAAAATTGGTGCAAGTAAATTCAACTTTCGCTCTACATAATCACACGCATGAGATTACGCTTATTTCTGTTTAGCGTTGTCAGTATTGTCCTGCTTTCTTCTCCAGTAAGAGCATCTCGCTTAGAATCTTGGAGCTTTGACACCGCACAAAATCAACTGAATATTACTACTGTATCTGGTGTTAAACCAAGAGCATTTTTAATTCAAAATCCCACGCGGTTAGTTATCGATCTTCCTGGTACACAACTGAACACAAATACAGTTCGGAAAAACTTTGGTTCCACAGTACGTGAAATCCGTGTTGGTAAGGTTGACGATAACACAACAAGATTAGTAGTTGAATTAGCACCTGGATACACTGTAGACCCTAACAAGTTACTGCTGCAAGGTGATTCTTCCACTCATTGGATAGTGAAATTTCCATCGGTAGAACGGGTTCAAAATCCTGTTGATAATAATTTTTCTTTATCTAGTGAAGAGCAAATTCCGGTTTCTGTGAGTGATGTTTCTTTGTTTGCGGGAGTTGTACCGTTAGGTAAGGAAATACCACAATTGCGATCGCAGGTACAAGCCTTAGCTGCTCGTTATCGTTCCCTGGATGCAGGAATGTTCTTTTTAGATTTAGATACTGGTAACTATCTAGATTTAAATGGTGAGAAAGTCTTTCCTGCTGCTAGTACAATAAAGTTTCCCATTTTAGTAGCGTTATTTCAAGAAGTAGATGCAGGTAGAGTCAAACTGAATGAAACCTTAGTTATGCGGCGCGACTTAATAACTGGAGGTTCTGGAGAATTTCAATACAAGCGTGCAGGAAGTCGTTTTAGTCTGATAGAAACCGTGACTAAGATGATTACCATCAGCGACAACACAGCTACCAATATGGTAATTGACCGATTAGGTGGTAAAGCTAAGTTAAATCAGCGTTTTCGTGGTTGGGGTCTGCAAAACACCGTTGTGCGGAATTTACTCGGCGACTTTAAGGGAACGAATACAACTAGCGCCAAAGATTTAGTCAGGCTGTCTGCGTTGGTTGCAAAAAATCAATTATTGACTGATTCCAGCCGTAGCAAAGTTTTGGATATTATGCAGCGTGTTCACAACACCAAGTTATTACCTGCTGGTTTGGGTAAAGGTGCGGTAATTGCTCACAAAACCGGAACTCTAGGCATTGTACTAGGTGATGCCGGGATTATTCAAATGCCATCTGGTAAGCGCTACTTAGCCGGAATTTTTGTCAGAAGACCTTTTAATGATTTAAAAGCGCGAGATTTTATCAATCAAGTTTCTCGAATTGTTTACGGCTATTTAGACCAACCAAGAGTCGCCAGCAAGCCTTAATACTCCTGATGTAAAAAAGAAAAATTTTAATTGACGTAAGCCCCTGATATTCATTAATATCTAGGGGTTTTTGCATATCTATTTATAGCAGTGCTTAACGCACCCTATCTCTCAGTGCGTTACGGCTAATCCTTATTCTCTTAAACTAACAAATTCTTGCATAGCCGTAACACATTCTAATTCATATTGGCTTTGAAGGATATTGACTGTATTCCTGCCAAGTTGGCTACATATACCTAAGCCGCACTGCTAAATTATGAATGGGAAATAACTTGCGGGCTTGATAAACCAACTTTTACTACACTAAACATGCTAAAGCATTAACAACGGACGGATTTAGGTTAGTTGCTTATTTTGCTCACTCTTGTGAGAGATTGCTGCTGTTTTTATTGTAGCGATCGACATCAAACTTCTTTATCTCTAAAAGGACAAATATAACAGGAAGTCCTCATTGATTACTCCTATCCTCACCTCGTTCATCGCAAAATGTACGAGGGCTTTTTTTATTTGGCAGAATTTACCCCTATTACGCCAATGATAATTAAAGCTATCGAGAAAAGTTTGGTAAGAGACATTGATTCACGAAACCAAATTACCCCAATAGTAGCGATTACAGTTGTGCCTAAACCTGACCAAACAGCATACGCAATGCTGACTTCAATTTTTTTAAGAGCTAAAGTTAAAAAACTAAAACAAATTCCATAACAGATAAAAATTAAAACCGAGGGAATAGTTCTTGTAAACCCCTCAGACAATTTCATGGAAGTTGTACCAGCGACTTCAAATAAGATTGCTGCAATGAGATAAAGCCAACTATTTACCATGTTTATTGATTGATTATAAGGTGATGATGGGAATATGATTTTTCGACAAGCATAATGAGTCAAAATTCTATATTTAATCTATTAACTAATTCTGCTATTTTGACAACATTTATAGTTAGCTGATGAGATAGGCAAAAATCAAAATATTCATATTTCCGAATTAGTAAAGAAGTTGGTAATCTCTAAAGTTCAGTTTACCACACCAATATTATGGGGGTTTACCGTACTAATACTAAGGTTCGGAAATCATGATGTAATTGGTGATAAAAACCGAATTTACACTGTACTGGATTGTGAATACTATAAAAACAACGCAAATGATTTAAACCTAAATCAACTACACAAAATTAGAAATTAAACGAGGTGGAGACATGACATTAGTGCGTTGGAATCCTTGGCAAGAAATGAACACTCTCCAAAGACAAATCAACAATTTATTTGCAGACGAAATGCTCCCATCTACTTTACTTGAAAGAAGCCTTACAAAAGTTCCGGCGGCTGAATTACACGAATCTGAAGAAGCTATTCATCTCAAGCTAGAATTACCAGGAATTGAAGCCAAAGACCTAGATGTGCAAGTTACAGAAAAAGCTGTGTATATCAGCGGTGAACGGAAATCTGAAACTAAAACAGAAGGGA
AAGGTGTAACCAAGAGTGAATTTCATTATGGGAAATTCCAACGTTTGATTCCTTTACCAACTCGCATTCAAAATACCAATGTTACTGCTGATTATAAAGATGGTATTTTGACTCTGACTTTGCCTAAAGCCGAAGAAGAAAAGAAAAAGGTTGTCAAGCTGAATCTTGAATCTATTGGCTAATATCAATTTTGGATTAGCGCTAAAATACCCGACTTCTTTAAGAAGTCGGGTATTTTGTTGTTCACTAATGATTTAAAATTGCTATAAGCTGCGATTTCTGCCTGTTGATTGTTGTCTGTCTACGGGAAAAACGTCAAAATCGAAAGTTGCAATTAGACGCTCATCAACGTATACCTGTATTTTATGCTTACCAGGAGGATCACCTGCGGCGATCGTCCAATAGTTTTCAATTACACCATCATTAGCTATAGTTTTGCGCCTCATTACCGACTCTGTACCGTCAGCGGAGACTGTGAAGTTTTCACCATCATCTGTAGCCCAAGTTTCTGGGGGTTTTGGTAAGCGTAGGACTTCTCGCCATGTAACTTCGCCTTGGTAGTCTTTGAGTTGAATTCGCCACCCATATTTACTACCTTCTTGTAGTGGGACTCTGAATGTGGGGATGAAGTTAACTTTACCTCTAGCATCGACTCTCGCTATGCCAAACTCAGCTTTGTCGATCGCTACCGACTTTTTAGTATTGTTTGCTTGAGAAATTGACCCTGATGATGCTATTTTTTCGTCGGAGATCGCTACTGTAGCATTGATTGGCTGAGACGCTACCAACCCGGAAACTAGCCAAGAAGAAGTTAGTACAACTATTGCAGTCCAAATTCTCATCAGCAAAATTTTTGGTCATTTACTAGTACTTATTCCCGCCTTCCCATTGGCTTCCGGGTACAGTCCCGATAAATAGCCAAGTTGGCAGAATAAAAGTTGCAGAATTAATAGTCAGTTTATAGTTAAATCGGCAACACCAGATCAAGCCACTCAAACTACTTTACTCTCGGGCCAGTTGCCAGAACTGCGAAAACTATCATCGCAGGTTTTCGGTGTAGGTGCTAAATATGCGTTTATTCTTAACTATTTTGTGTTCAATACGGAATTTTTAATATGTAAGCAATTGCTGACAGTCGGCTATTTGATCAATTGTCATTTCCTAGAGTTTCATCCCCTTGAGGGGAAGGAGTTTGGGAAATGTCAAAAACTGTCAAATGCTTAATGCAAAGATTAACAGTTGTGCCTAAGTGCGATCGCACTTAGGCATGACAAAGCATCAAAAATTAGCATTGGAGAACCGATATTTTCCTATTACCTGACTGCTATATATTGATAGTGAGGCGTTTTTGAGCAGCAAACAGCATGGCAGATATTCCAAATTCCATCGCATCATACCGTGCCTTAGCACTGCAAGTTACCTGTCATGCTGTGAATCAAGCGAGCGATCGCCACGCTGTCCAAGAAATCATTCATCATACTATCAACCGCCTGGCGCAACAAATCGCCGCCAGTATTGCTTTTATTGGTTTTGACTGTCGTTTAATTGTTTTACCAGAATATTTTCTGACAGGTTTCCCGATGGGTGAACCTTTGGCTGTTTGGGGAGAAAAGGCTTGTATAGAAATGCACGGTGCCGAGTATGAAGCCCTCAGTAAAATTGCTCAAAAACATCAGATATTTTTAGCTGGTAACGCCTACGAACTCGACCCCAATTTTCCTGGCTTATACTTTCAAACTTGCTTTGTGATTGACCCGGCTGGTGCTATTGTCTTGCGGTATCGGCGGCTAAATTCGTTATTTGCACCCACACCTCATGATGTTTGGGATAAATATCTTGATTGTTACGGCCTAGAAGGGGTGTTTCCTGTAGCGAAAACTGCAATTGGCAATTTAGCCGCTTTAGCTTCCGAAGAAATTTTGTATCCAGAAGTAGCGCGGTGTTTAGCAATGCGTGGTGCAGAAATTTTTCTGCATTCCACTTCTGAAATTTATAGCAAAAACCTCACACCTAAAGATGCGGCGAAAATTTCTCGCGCTGTGGAAAATATGGCTTACGTTGTGTCTGCGAATACCGCAGGTCTAGCTAATAGTTCTATACCCAGCGCTTCTGTTGATGGTGGCTCAAAAATAGTTGACTATCGCGGTATCGTATTAGCAGAAACAGGTGCAGGCGAAAGTATGGCAGCTTTTGCAGAGATAGATTTAACTGCTTTAAGACGCGATCGCCGTCGTCCAGGGTTAAATAATTTACTGTCTCGCCAGCGATTTGAACTCTACGCCCAAAGCTACAGCCAGTCACAATTTTATCCAGCAAACACTATGCTAAATCAAGAATGCGATCGCCAACACTTCATCCAAACACAGCAACAAACCATAGAACGTCTATCTCAGTTAGGAGTGATTTAAAAGTCTAAAGTCTGAAATTAGATTCTTTTGACCATTGACTATTGACAAATGACAAATGACAAAACCAATCGAAGTCCGTAACCCGCGAACGGGAAAATATGATTATGTAATTATCCCACCGCCGCCGAAACTGCTGGCGCAGCAATGTAACCGAGCGCGAAGGGCGCAAGTGCGTTGGCAAAAACTGGGCGTAGAAGGGAGAGTTGCAGCTTTAAAAGAATGGAAGCAAGCAGTTTTGGCTGGACGCGAAAAGCTCACAGATGCTTTGGTCAATGATACGGGTAGATTATCTATATCAGTGATGGAAATCGACTCATTCCTTTCTAGCATCGATCGCTGGTGTGGATTAGCGCCAGATTTATTACAAGATTCGGCCAAAAATACATCAATTCCGTTCATCGCCTTACAACAAACATCAACGCCTTACCCTGTAGTTGGGGTAATTAGTCCTTGGAATTTCCCTCTGTTGCTGTCTACGATAGATACCATTCCCGCACTGTTGGCGGGTTGTGCTGTAGTTGTCAAACCCAGTGAAATTGCACCGCGTTTCATCGCCCCACTGATAGCTGCAATTAATCAAGTACCCGCCTTGCGCGATGTTTTCAGTTTTGTGGAAGGTGCGGGAGAAACTGGCGCGGCTTTGATGGAGAATGTAGATTTAGTTTGTTTTACCGGTAGTGTCGCTACTGGACGCAAAGTTGCAGAAGTCGCCGCACAAAGATTTATCCCCGCTTTTTTGGAATTGGGCGGGAAAGATCCGGCGATCGTGTTGGAATCTGCCGATTTAGAATTAGCCACATCAGCGATTTTATGGGGTTCCGTCGTTAACACCGGACAGTCTTGTTTATCAATTGAGCGTATTTACGTTGCCGAATCTATCTTTGAAAAGTTTTATCATCAGTTAGTAGCCAAAGCACATCGCCTACAACTAGCCCATCCCACCATTGAAAGTGGCGAAATCGGCCCCATTATTGCTGAAAGACAAGCTGGCATAATTAACGAGCATATCTCCGATGCAGTGCAAAAAGGTGCAGTAATTCATTGTGGCGGTAAAGTTGAAGAGTTAGGCGGTGGTTGGTGGTGTCATCCCACAGTGCTGACTCATGTTAACCATACAATGAAAGTCATGACCGAAGAGACTTTTGGC
CCGATCATGCCAATCATGCCTTTTGCCACAGTAGAGGAAGCTGTTAACTTAGCCAACGATTCAATTTATGGACTGAGTGCGGCGGTGTTTGCGGAAACCGAAACTGAAGCGTTAACAGTTGCCCAGCAAATAGATGCAGGTGCTATCAGTATTAATGATGCCGCCCTCACCGCCATTATGCACGAAGGTGAAAAAAACGCTTTCAAATTATCCGGTTTAGGCGGTTCACGTATGGGTGCAGCCGCCATCAAACGATTTTTGCGGAAAAAAGCGTTTTTGATTAAAACCAACTCAAATCAAGACCCTTGGTGGTTTGAGCCTAAAGTGTAGTGCAATCTTCTCTCAGCGACCTCTGCGTCTCTGTAGTTCGTTAAAAACCGTATTAGATTCTGTTTGTTGGGTTTCGCTGTCGCTTCACCCAACCTACTTTCCTTAAACCCCTACTACAGATTCATTCACAGTTTCACTAGCCGCAACACCATTAGTCAAAATCGCTTGCCGAGTTTTCAGGTTAAATTTATAACCATGTGGCAAAATATGCAGCTTCGCACCACAAATTGCCAAAGGTTCATCCCGGAGAATTGTATCTGCGTTGTTATATGTAGATTCAGACTCATCCACAATGGTGACTGAACCTTCACCAATAATTTCGATTTGGTCATCAGTCACGGCGATCGCTGTATTCTCATCAATCCCAAATCCTAACACCGCAGGTTCATGAATTAAAGCTGTAATTAAACGCCCTAAGCGTCCCCGTTGTAAGAAATGTTGGTCAATCACCACCCCTGGGAGAAAACCCATACCAGGCCCCATTTCCACAATTTCCATCCGTGGTGTACTTTGAGAATCACCCTCAACAATCATTTTATCGGGCATCACAGCCGCACCCGCACTAGTACCTGCAATTACTGCACCTTCAGCATAGCGTTGGTGAATAGCCGCATCGATTTCGGTATCCTTGAGGATACTAGTAATTCGCGCTTGGTCTCCTCCAGTAAAAAATATCCCAGTCGCCTTAGCAATAGCTTCTAAAGCCGTAGAAGACCTAGCATCTTCACGAGTTTCTGTATCAATAATGCGAACGTGTTCTGCACCTAGCCGTTCAAAAACTCTAATATAATTTTCCCCCACTTCTCTAGGCAGTTCTGTGGCGGCCGTCATAATTACAATATTGGCTTTTGTACCCCCAGCCCGACGGACAAATTCTCGCAGAATCACACAATCTCCTTCTTTATCTTCTGCGCCACCAATAATTACCAACTGGCGTTTATGTGCAGTTTCTGTCATAATGCCCCCCGGATAACCGGATTAGAATTTAATTTAGATTAATTTCAATAAAACATGACAATTATCACAATCAAATCATCCATTTGATAGATTAATTTTTAATGGCAAAAGTTAAATTATATATAACTTTATGTATATATAAACTCTTGCCAAATTTAGCATTTTTAATAATTGGTAATTCATTTAGCAGAATTACCAATTACTTATACAGTAATAATTTATGTATAACTCTTCTCAAGTAATAGCACTAAAATCTCATAGT',
'description': 'NZ_AP018174.1 Anabaenopsis circularis NIES-21 DNA, nearly complete genome',
'start_pos': 1824000,
'end_pos': 1836200,
'fasta_url': 'https://ftp.ncbi.nlm.nih.gov/genomes/refseq/bacteria/Anabaenopsis_circularis/latest_assembly_versions/GCF_002367975.1_ASM236797v1/GCF_002367975.1_ASM236797v1_genomic.fna.gz'}
```
### Data Fields
- `sequence`: a string containing a DNA sequence from one of the sampled species' genomes
- `description`: a string indicating the species of the sequence as well as the NCBI id.
- `start_pos`: an integer indicating the index of the sequence's first nucleotide
- `end_pos`: an integer indicating the index of the sequence's last nucleotide
- `fasta_url`: a string indicating the URL of the FASTA file from which the sequence was taken.
### Data Splits
The Multi-species dataset has 3 splits: train, validation, and test.
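As a quick sanity check, the splits and fields above can be consumed as in the following sketch (the repository id is a placeholder, since it is not shown on this card; that `end_pos - start_pos` tracks the sequence length is also an assumption):

```python
from datasets import load_dataset

# Placeholder repository id -- substitute the actual dataset name.
dataset = load_dataset("org/multi-species-genomes")

for split in ("train", "validation", "test"):
    record = dataset[split][0]
    # end_pos - start_pos should roughly track len(record["sequence"]) (assumption).
    print(split, record["description"], record["end_pos"] - record["start_pos"])
```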
## Dataset Creation
[N/A]
### Curation Rationale
[N/A]
### Source Data
#### Initial Data Collection and Normalization
The data consists of sequences cut from the whole genome sequences of the 850 sampled species, which can be found in the `urls.csv` file of this dataset's repository.
#### Who are the source language producers?
[N/A]
### Annotations
The dataset does not contain any additional annotations.
#### Annotation process
[N/A]
#### Who are the annotators?
[N/A]
### Personal and Sensitive Information
[N/A]
## Considerations for Using the Data
### Social Impact of Dataset
[N/A]
### Discussion of Biases
[N/A]
### Other Known Limitations
[N/A]
## Additional Information
### Dataset Curators
[N/A]
### Licensing Information
[N/A]
### Citation Information
```bibtex
@article{dalla2023nucleotide,
title={The Nucleotide Transformer: Building and Evaluating Robust Foundation Models for Human Genomics},
author={Dalla-Torre, Hugo and Gonzalez, Liam and Mendoza Revilla, Javier and Lopez Carranza, Nicolas and Henryk Grywaczewski, Adam and Oteri, Francesco and Dallago, Christian and Trop, Evan and Sirelkhatim, Hassan and Richard, Guillaume and others},
journal={bioRxiv},
pages={2023--01},
year={2023},
publisher={Cold Spring Harbor Laboratory}
}
``` |
gigant/tib | 2023-09-25T12:05:25.000Z | [
"task_categories:summarization",
"size_categories:1K<n<10K",
"language:en",
"region:us"
] | gigant | null | null | null | 0 | 9 | ---
dataset_info:
features:
- name: doi
dtype: string
- name: title
dtype: string
- name: url
dtype: string
- name: video_url
dtype: string
- name: license
dtype: string
- name: subject
dtype: string
- name: genre
dtype: string
- name: release_year
dtype: string
- name: author
dtype: string
- name: contributors
dtype: string
- name: abstract
dtype: string
- name: transcript
dtype: string
- name: transcript_segments
sequence:
- name: id
dtype: int32
- name: seek
dtype: int32
- name: start
dtype: float32
- name: end
dtype: float32
- name: text
dtype: string
- name: tokens
sequence: int32
- name: temperature
dtype: float32
- name: avg_logprob
dtype: float32
- name: compression_ratio
dtype: float32
- name: no_speech_prob
dtype: float32
- name: keyframes
sequence:
- name: slide
dtype: string
- name: frames
sequence: int32
- name: timestamp
sequence: float32
- name: language
dtype: string
splits:
- name: valid
num_bytes: 101380279
num_examples: 910
- name: train
num_bytes: 827555875
num_examples: 7282
- name: test
num_bytes: 102396941
num_examples: 911
download_size: 502166165
dataset_size: 1031333095
task_categories:
- summarization
language:
- en
pretty_name: "TIB: A Dataset for Abstractive Summarization of Long Multimodal Videoconference Records"
size_categories:
- 1K<n<10K
pinned: True
---
# Dataset Card for "TIB: A Dataset for Abstractive Summarization of Long Multimodal Videoconference Records"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Dataset Description
- **Homepage:** [Dataset page](https://huggingface.co/datasets/gigant/tib)
- **Repository:** [Dataset page](https://huggingface.co/datasets/gigant/tib)
- **Paper:** [TIB: A Dataset for Abstractive Summarization of Long Multimodal Videoconference Records
](https://hal.science/hal-04168911)
- **Point of Contact:** [Théo Gigant](mailto:theo.gigant@l2s.centralesupelec.fr)
## Dataset Summary
TIB is an English dataset for abstractive summarization of multimodal presentations, introduced in [*TIB: A Dataset for Abstractive Summarization of Long Multimodal Videoconference Records*
](https://hal.science/hal-04168911).
It is a collection of 9,103 videoconference records extracted from the German National Library of Science and Technology (TIB) archive, along with their metadata, an abstract and automatically processed transcripts and key frames.
### Supported Tasks and Leaderboards
- `summarization`
### Languages
The text in the dataset is in English, both for the transcribed audio and the abstracts.
## Usage
To use within the [`datasets`](https://github.com/huggingface/datasets) library:
```python
from datasets import load_dataset
dataset = load_dataset("gigant/tib")
```
## Dataset Structure
### Data Instances
A typical data point represents a videoconference record: the `transcript` and `keyframes` are the textual and visual modalities, processed from the video found at `video_url`, and the `abstract` is used as the target abstractive summary.
### Data Fields
Each record consists of the following attributes:
* `doi`: digital object identifier (DOI) of the record or the associated paper
* `title`: title of the presentation
* `url`: URL of the record in the TIB archive
* `video_url`: URL of the video file
* `license`: license of the record
* `subject`: academic field (*e.g.* Computer Science, Mathematics, ...)
* `genre`: type of presentation (*e.g.* Lecture, Conference, ...)
* `release_year`: year the record was released
* `author`: name of the author
* `contributors`: name of the contributors
* `abstract`: the abstract of the presentation, which serves as the target summary
* `transcript`: the automatically extracted transcript
* `transcript_segments`: the automatically extracted transcript with time codes, output of the speech recognition system
* `keyframes`: the time codes of the automatically extracted key frames
`doi`, `title`, `url`, `video_url`, `license`, `subject`, `genre`, `release_year`, `author`, `contributors` and `abstract` are provided as found in the TIB archive. The length, style, quality and content of the abstracts can differ from video to video, as each was likely provided by its author. For instance, some abstracts provide very short title-like summaries, introductions to the conference, the lecture or the speaker, or longer descriptions of the content. We provide examples of transcripts and summaries in the paper's Appendix.
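For example, a minimal sketch (using the field names listed above) that prints the first few transcript segments of a record along with their time codes:

```python
from datasets import load_dataset

dataset = load_dataset("gigant/tib")

record = dataset["train"][0]
print(record["title"])

segments = record["transcript_segments"]
# A sequence-of-structs feature is returned as a dict of parallel lists.
for start, end, text in zip(segments["start"][:3], segments["end"][:3], segments["text"][:3]):
    print(f"[{start:.1f}s - {end:.1f}s] {text}")
```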
### Data Splits
The data is split into a training, validation and test set.
* Train: 7,282 (80%)
* Validation: 910 (10%)
* Test: 911 (10%)
## Dataset Creation
### Source Data
#### Initial Data Collection and Normalization
The dataset was first assembled by crawling the [TIB-AV portal](https://av.tib.eu/) which is a large archive of videos, developed by the German National Library of Science and Technology: *Technische Informationsbibliothek* (TIB).
Entries with missing abstracts or abstracts that were too short (less than 30 characters) were filtered out.
We also filtered out records whose abstract or transcript is in a language other than English.
In order to keep only the abstracts that are relevant to their associated record, we removed documents whose abstract was identical to that of another video. This got rid of all the abstracts that were written for a whole set of records, such as conferences, instead of being written specifically for a single presentation.
More information about the dataset collection and filtering can be found in [TIB: A Dataset for Abstractive Summarization of Long Multimodal Videoconference Records
](https://hal.science/hal-04168911).
### Dataset Curators
The dataset was initially created by Théo Gigant, Frédéric Dufaux, Camille Guinaudeau and Marc Decombas.
### Citation Information
```
@inproceedings{gigant:hal-04168911,
TITLE = {{TIB: A Dataset for Abstractive Summarization of Long Multimodal Videoconference Records}},
AUTHOR = {GIGANT, Th{\'e}o and Dufaux, Fr{\'e}d{\'e}ric and Guinaudeau, Camille and Decombas, Marc},
URL = {https://hal.science/hal-04168911},
BOOKTITLE = {{Proc. 20th International Conference on Content-based Multimedia Indexing (CBMI 2023)}},
ADDRESS = {Orl{\'e}ans, France},
ORGANIZATION = {{ACM}},
YEAR = {2023},
MONTH = Sep,
KEYWORDS = {multimedia dataset, multimodal documents, automatic summarization},
HAL_ID = {hal-04168911},
HAL_VERSION = {v1},
}
``` |
pythainlp/final_training_set_v1 | 2023-04-29T07:06:04.000Z | [
"task_categories:conversational",
"task_categories:text-generation",
"language:en",
"region:us"
] | pythainlp | null | null | null | 1 | 9 | ---
dataset_info:
features:
- name: text
dtype: string
- name: metadata
struct:
- name: source
dtype: string
- name: nb_token
dtype: int64
splits:
- name: train
num_bytes: 337155434.9768474
num_examples: 405760
- name: test
num_bytes: 1277960.0231525812
num_examples: 1538
download_size: 191404581
dataset_size: 338433395
task_categories:
- conversational
- text-generation
language:
- en
---
# Dataset Card for "final_training_set_v1"
Finetuning datasets for [WangChanGLM](https://github.com/pythainlp/wangchanglm) sourced from [LAION OIG chip2 and infill_dbpedia](https://huggingface.co/datasets/laion/OIG) ([Apache-2.0](https://github.com/pythainlp/wangchanglm/blob/main/LICENSE)), [DataBricks Dolly v2](https://github.com/databrickslabs/dolly) ([Apache-2.0](https://github.com/pythainlp/wangchanglm/blob/main/LICENSE)), [OpenAI TL;DR](https://github.com/openai/summarize-from-feedback) ([MIT](https://opensource.org/license/mit/)), and [Hello-SimpleAI HC3](https://huggingface.co/datasets/Hello-SimpleAI/HC3) ([CC-BY SA](https://creativecommons.org/licenses/by-sa/4.0/)) |
seanghay/kmcs | 2023-05-03T04:38:54.000Z | [
"license:apache-2.0",
"region:us"
] | seanghay | null | null | null | 0 | 9 | ---
license: apache-2.0
dataset_info:
features:
- name: audio
dtype: audio
- name: transcription
dtype: string
splits:
- name: train
num_bytes: 1226373371.915
num_examples: 5565
download_size: 1064307923
dataset_size: 1226373371.915
---
# ⚠️ Migration Notice
Moved to [seanghay/km-speech-corpus](https://huggingface.co/datasets/seanghay/km-speech-corpus)
## Khmer Common Speech 1.0
This dataset contains 5,565 samples of Khmer speech downloaded from public YouTube videos. 4.83 hours in total.
This dataset was made by this project: https://github.com/seanghay/subtitle-demuxer
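A minimal loading sketch based on the features above (`audio`, `transcription`); since this card notes the dataset has moved, loading the new repository id may be preferable:

```python
from datasets import load_dataset

dataset = load_dataset("seanghay/kmcs")

example = dataset["train"][0]
print(example["transcription"])
audio = example["audio"]  # decoded by the Audio feature into array + sampling_rate
print(audio["sampling_rate"], len(audio["array"]))
```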
## References
- [Chanty Sothy](https://github.com/chantysothy) - the initial idea and YouTube links with Khmer subtitles. |
may-ohta/tatoeba | 2023-04-24T14:38:50.000Z | [
"license:cc-by-2.0",
"region:us"
] | may-ohta | This is a collection of translated sentences from Tatoeba
359 languages, 3,403 bitexts
total number of files: 750
total number of tokens: 65.54M
total number of sentence fragments: 8.96M | @InProceedings{TIEDEMANN12.463,
author = {J{\"o}rg Tiedemann},
title = {Parallel Data, Tools and Interfaces in OPUS},
booktitle = {Proceedings of the Eight International Conference on Language Resources and Evaluation (LREC'12)},
year = {2012},
month = {may},
date = {23-25},
address = {Istanbul, Turkey},
editor = {Nicoletta Calzolari (Conference Chair) and Khalid Choukri and Thierry Declerck and Mehmet Ugur Dogan and Bente Maegaard and Joseph Mariani and Jan Odijk and Stelios Piperidis},
publisher = {European Language Resources Association (ELRA)},
isbn = {978-2-9517408-7-7},
language = {english}
} | null | 0 | 9 | ---
license: cc-by-2.0
---
---
annotations_creators:
- found
language_creators:
- found
language:
- ab
- acm
- ady
- af
- afb
- afh
- aii
- ain
- ajp
- akl
- aln
- am
- an
- ang
- aoz
- apc
- ar
- arq
- ary
- arz
- as
- ast
- avk
- awa
- ayl
- az
- ba
- bal
- bar
- be
- ber
- bg
- bho
- bjn
- bm
- bn
- bo
- br
- brx
- bs
- bua
- bvy
- bzt
- ca
- cay
- cbk
- ce
- ceb
- ch
- chg
- chn
- cho
- chr
- cjy
- ckb
- ckt
- cmn
- co
- code
- cpi
- crh
- crk
- cs
- csb
- cv
- cy
- da
- de
- dng
- drt
- dsb
- dtp
- dv
- dws
- ee
- egl
- el
- emx
- en
- enm
- eo
- es
- et
- eu
- ext
- fi
- fj
- fkv
- fo
- fr
- frm
- fro
- frr
- fuc
- fur
- fuv
- fy
- ga
- gag
- gan
- gbm
- gcf
- gd
- gil
- gl
- gn
- gom
- gos
- got
- grc
- gsw
- gu
- gv
- ha
- hak
- haw
- hbo
- he
- hi
- hif
- hil
- hnj
- hoc
- hr
- hrx
- hsb
- hsn
- ht
- hu
- hy
- ia
- iba
- id
- ie
- ig
- ii
- ike
- ilo
- io
- is
- it
- izh
- ja
- jam
- jbo
- jdt
- jpa
- jv
- ka
- kaa
- kab
- kam
- kek
- kha
- kjh
- kk
- kl
- km
- kmr
- kn
- ko
- koi
- kpv
- krc
- krl
- ksh
- ku
- kum
- kw
- kxi
- ky
- la
- laa
- lad
- lb
- ldn
- lfn
- lg
- lij
- liv
- lkt
- lld
- lmo
- ln
- lo
- lt
- ltg
- lut
- lv
- lzh
- lzz
- mad
- mai
- max
- mdf
- mfe
- mg
- mgm
- mh
- mhr
- mi
- mic
- min
- mk
- ml
- mn
- mni
- mnw
- moh
- mr
- mt
- mvv
- mwl
- mww
- my
- myv
- na
- nah
- nan
- nb
- nch
- nds
- ngt
- ngu
- niu
- nl
- nlv
- nn
- nog
- non
- nov
- npi
- nst
- nus
- nv
- ny
- nys
- oar
- oc
- ofs
- ood
- or
- orv
- os
- osp
- ota
- otk
- pa
- pag
- pal
- pam
- pap
- pau
- pcd
- pdc
- pes
- phn
- pi
- pl
- pms
- pnb
- ppl
- prg
- ps
- pt
- qu
- quc
- qya
- rap
- rif
- rm
- rn
- ro
- rom
- ru
- rue
- rw
- sa
- sah
- sc
- scn
- sco
- sd
- sdh
- se
- sg
- sgs
- shs
- shy
- si
- sjn
- sl
- sm
- sma
- sn
- so
- sq
- sr
- stq
- su
- sux
- sv
- swg
- swh
- syc
- ta
- te
- tet
- tg
- th
- thv
- ti
- tig
- tk
- tl
- tlh
- tly
- tmr
- tmw
- tn
- to
- toi
- tok
- tpi
- tpw
- tr
- ts
- tt
- tts
- tvl
- ty
- tyv
- tzl
- udm
- ug
- uk
- umb
- ur
- uz
- vec
- vep
- vi
- vo
- vro
- wa
- war
- wo
- wuu
- xal
- xh
- xqa
- yi
- yo
- yue
- zlm
- zsm
- zu
- zza
license:
- cc-by-2.0
multilinguality:
- multilingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- translation
task_ids: []
paperswithcode_id: tatoeba
pretty_name: Tatoeba
dataset_info:
- config_name: en-mr
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- en
- mr
splits:
- name: train
num_bytes: 6190484
num_examples: 53462
download_size: 1436200
dataset_size: 6190484
- config_name: eo-nl
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- eo
- nl
splits:
- name: train
num_bytes: 8150048
num_examples: 93650
download_size: 3020382
dataset_size: 8150048
- config_name: es-pt
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- es
- pt
splits:
- name: train
num_bytes: 6180464
num_examples: 67782
download_size: 2340361
dataset_size: 6180464
- config_name: fr-ru
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- fr
- ru
splits:
- name: train
num_bytes: 19775390
num_examples: 195161
download_size: 5509784
dataset_size: 19775390
- config_name: es-gl
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- es
- gl
splits:
- name: train
num_bytes: 287683
num_examples: 3135
download_size: 128506
dataset_size: 287683
---
# Dataset Card for Tatoeba
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** http://opus.nlpl.eu/Tatoeba.php
- **Repository:** None
- **Paper:** http://www.lrec-conf.org/proceedings/lrec2012/pdf/463_Paper.pdf
- **Leaderboard:** [More Information Needed]
- **Point of Contact:** [More Information Needed]
### Dataset Summary
Tatoeba is a collection of sentences and translations.
To load a language pair which isn't part of the config, all you need to do is specify the language codes of the pair.
You can find the valid pairs in the Homepage section of the Dataset Description: http://opus.nlpl.eu/Tatoeba.php
E.g.
`dataset = load_dataset("tatoeba", lang1="en", lang2="he")`
The default date is v2021-07-22, but you can also change the date with
`dataset = load_dataset("tatoeba", lang1="en", lang2="he", date="v2020-11-09")`
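A loaded pair can then be accessed through the `translation` field, e.g. for the `en`-`mr` config (a sketch; the printed example is illustrative):

```python
from datasets import load_dataset

dataset = load_dataset("tatoeba", lang1="en", lang2="mr")

pair = dataset["train"][0]["translation"]
print(pair["en"], "->", pair["mr"])
```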
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
The languages in the dataset are:
- ab
- acm
- ady
- af
- afb
- afh
- aii
- ain
- ajp
- akl
- aln
- am
- an
- ang
- aoz
- apc
- ar
- arq
- ary
- arz
- as
- ast
- avk
- awa
- ayl
- az
- ba
- bal
- bar
- be
- ber
- bg
- bho
- bjn
- bm
- bn
- bo
- br
- brx
- bs
- bua
- bvy
- bzt
- ca
- cay
- cbk
- ce
- ceb
- ch
- chg
- chn
- cho
- chr
- cjy
- ckb
- ckt
- cmn
- co
- code
- cpi
- crh
- crk
- cs
- csb
- cv
- cy
- da
- de
- dng
- drt
- dsb
- dtp
- dv
- dws
- ee
- egl
- el
- emx
- en
- enm
- eo
- es
- et
- eu
- ext
- fi
- fj
- fkv
- fo
- fr
- frm
- fro
- frr
- fuc
- fur
- fuv
- fy
- ga
- gag
- gan
- gbm
- gcf
- gd
- gil
- gl
- gn
- gom
- gos
- got
- grc
- gsw
- gu
- gv
- ha
- hak
- haw
- hbo
- he
- hi
- hif
- hil
- hnj
- hoc
- hr
- hrx
- hsb
- hsn
- ht
- hu
- hy
- ia
- iba
- id
- ie
- ig
- ii
- ike
- ilo
- io
- is
- it
- izh
- ja
- jam
- jbo
- jdt
- jpa
- jv
- ka
- kaa
- kab
- kam
- kek
- kha
- kjh
- kk
- kl
- km
- kmr
- kn
- ko
- koi
- kpv
- krc
- krl
- ksh
- ku
- kum
- kw
- kxi
- ky
- kzj: Coastal Kadazan (deprecated tag; preferred value: Kadazan Dusun; Central Dusun (`dtp`))
- la
- laa
- lad
- lb
- ldn
- lfn
- lg
- lij
- liv
- lkt
- lld
- lmo
- ln
- lo
- lt
- ltg
- lut
- lv
- lzh
- lzz
- mad
- mai
- max
- mdf
- mfe
- mg
- mgm
- mh
- mhr
- mi
- mic
- min
- mk
- ml
- mn
- mni
- mnw
- moh
- mr
- mt
- mvv
- mwl
- mww
- my
- myv
- na
- nah
- nan
- nb
- nch
- nds
- ngt
- ngu
- niu
- nl
- nlv
- nn
- nog
- non
- nov
- npi
- nst
- nus
- nv
- ny
- nys
- oar
- oc
- ofs
- ood
- or
- orv
- os
- osp
- ota
- otk
- pa
- pag
- pal
- pam
- pap
- pau
- pcd
- pdc
- pes
- phn
- pi
- pl
- pms
- pnb
- ppl
- prg
- ps
- pt
- qu
- quc
- qya
- rap
- rif
- rm
- rn
- ro
- rom
- ru
- rue
- rw
- sa
- sah
- sc
- scn
- sco
- sd
- sdh
- se
- sg
- sgs
- shs
- shy
- si
- sjn
- sl
- sm
- sma
- sn
- so
- sq
- sr
- stq
- su
- sux
- sv
- swg
- swh
- syc
- ta
- te
- tet
- tg
- th
- thv
- ti
- tig
- tk
- tl
- tlh
- tly
- tmr
- tmw
- tn
- to
- toi
- tok
- tpi
- tpw
- tr
- ts
- tt
- tts
- tvl
- ty
- tyv
- tzl
- udm
- ug
- uk
- umb
- ur
- uz
- vec
- vep
- vi
- vo
- vro
- wa
- war
- wo
- wuu
- xal
- xh
- xqa
- yi
- yo
- yue
- zlm
- zsm
- zu
- zza
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
[More Information Needed]
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
[More Information Needed]
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@abhishekkrthakur](https://github.com/abhishekkrthakur) for adding this dataset.
|
amitness/wikipedia_mt | 2023-08-14T09:44:46.000Z | [
"language:mt",
"region:us"
] | amitness | null | null | null | 0 | 9 | ---
language: mt
dataset_info:
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 26154083
num_examples: 5326
download_size: 15314612
dataset_size: 26154083
---
# Dataset Card for "wikipedia_mt"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
h2oai/h2ogpt-oig-oasst1-instruct-cleaned-v2 | 2023-04-25T16:43:40.000Z | [
"language:en",
"license:apache-2.0",
"gpt",
"llm",
"large language model",
"open-source",
"region:us"
] | h2oai | null | null | null | 4 | 9 | ---
license: apache-2.0
language:
- en
thumbnail: https://h2o.ai/etc.clientlibs/h2o/clientlibs/clientlib-site/resources/images/favicon.ico
tags:
- gpt
- llm
- large language model
- open-source
---
# h2oGPT Data Card
## Summary
H2O.ai's `h2ogpt-oig-oasst1-instruct-cleaned-v2` is an open-source instruct-type dataset for fine-tuning of large language models, licensed for commercial use.
- Number of rows: `350581`
- Number of columns: `3`
- Column names: `['input', 'source', 'prompt_type']`
## Source
- [Original LAION OIG Dataset](https://github.com/LAION-AI/Open-Instruction-Generalist)
- [LAION OIG data detoxed and filtered down by scripts in h2oGPT repository](https://github.com/h2oai/h2ogpt/blob/main/FINETUNE.md#high-quality-oig-based-instruct-data)
- [Original Open Assistant data in tree structure](https://huggingface.co/datasets/OpenAssistant/oasst1)
- [This flattened dataset created by script in h2oGPT repository](https://github.com/h2oai/h2ogpt/blob/0e70c2fbb16410bd8e6992d879b4c55cd981211f/create_data.py#L1375-L1415)
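A minimal sketch of inspecting the three columns listed in the summary above (the split name `train` is an assumption):

```python
from datasets import load_dataset

dataset = load_dataset("h2oai/h2ogpt-oig-oasst1-instruct-cleaned-v2")

example = dataset["train"][0]  # split name is an assumption
print(example["input"][:300])
print(example["source"], example["prompt_type"])
```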
|
jaydenccc/AI_Storyteller_Dataset | 2023-04-26T19:52:33.000Z | [
"region:us"
] | jaydenccc | null | null | null | 9 | 9 | ---
dataset_info:
features:
- name: synopsis
dtype: string
- name: short_story
dtype: string
splits:
- name: train
num_bytes: 204642
num_examples: 100
download_size: 129691
dataset_size: 204642
---
# Dataset Card for "AI_Storyteller_Dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
arvisioncode/donut-funsd | 2023-04-28T09:16:03.000Z | [
"region:us"
] | arvisioncode | null | null | null | 0 | 9 | ---
dataset_info:
features:
- name: ground_truth
dtype: string
- name: image
dtype: image
splits:
- name: train
num_bytes: 25994868.0
num_examples: 147
- name: test
num_bytes: 9129119.0
num_examples: 47
- name: validation
num_bytes: 9129119.0
num_examples: 47
download_size: 44182619
dataset_size: 44253106.0
---
# Dataset Card for "donut-funsd"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
miladfa7/5-Flower-Types-Classification-Dataset | 2023-05-02T04:15:51.000Z | [
"region:us"
] | miladfa7 | null | null | null | 0 | 9 | Entry not found |
mirfan899/phoneme_asr | 2023-06-17T12:32:48.000Z | [
"task_categories:automatic-speech-recognition",
"size_categories:1K<n<10K",
"language:en",
"license:bsd",
"region:us"
] | mirfan899 | null | null | null | 0 | 9 | ---
license: bsd
task_categories:
- automatic-speech-recognition
language:
- en
pretty_name: timit phoneme data
size_categories:
- 1K<n<10K
---
This dataset contains phonetic transcriptions of audio recordings as well as English transcripts. The phonetic transcriptions are based on a g2p (grapheme-to-phoneme) model. It can be used to train a phoneme recognition model using wav2vec2. |
thomasavare/waste-classification-v2 | 2023-05-23T14:30:38.000Z | [
"size_categories:10K<n<100K",
"language:en",
"region:us"
] | thomasavare | null | null | null | 0 | 9 | ---
language:
- en
size_categories:
- 10K<n<100K
---
# Dataset Card for Waste Classification v2
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
Dataset used to train a language model to classify user phrases into 50 different waste classes.
### Languages
English
## Dataset Structure
### Data Instances
Phrase | Class | Index
-------|-------|-------
"I have this apple phone charger to throw, where should I put it ?" | PHONE CHARGER | 26
"Should I recycle a disposable cup ?" | Plastic Cup | 32
"I have a milk brick" | Tetrapack | 45
### Data Fields
- Phrase
- Class
- Class_index
### Data Splits
train: 12.5K rows
test: 5.38K rows
additional data: 7.24K rows (unseen_phrases.csv)
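A minimal loading sketch (the exact column casing is an assumption based on the field list above):

```python
from datasets import load_dataset

dataset = load_dataset("thomasavare/waste-classification-v2")

example = dataset["train"][0]
# Expected fields per this card: Phrase, Class, Class_index (casing is an assumption).
print(example)
```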
## Dataset Creation
Created manually from objects and phrase templates.
### Annotations
#### Annotation process
Each object was annotated first, and then each phrase was annotated according to the annotation of the object it refers to.
#### Who are the annotators?
Myself
### Personal and Sensitive Information
None
## Considerations for Using the Data
### Social Impact of Dataset
None
### Discussion of Biases
Some classes are more present than others, but the dataset is balanced overall. Because it was created using phrase patterns, it might not be very robust.
### Other Known Limitations
Phrase patterns are repeated, so model performance should be verified on external phrases to check robustness.
|
thu-coai/augesc | 2023-05-16T08:57:04.000Z | [
"language:en",
"license:cc-by-nc-4.0",
"arxiv:2202.13047",
"region:us"
] | thu-coai | null | null | null | 2 | 9 | ---
license: cc-by-nc-4.0
language:
- en
---
AugESC is an augmented dialogue dataset for emotional support conversation, distilled from a fine-tuned GPT-J model.
[GitHub repo](https://github.com/thu-coai/AugESC). [Original paper](https://arxiv.org/abs/2202.13047).
Please kindly cite our papers if you use this resource:
```bib
@inproceedings{zheng-etal-2023-augesc,
title={AugESC: Dialogue Augmentation with Large Language Models for Emotional Support Conversation},
author={Zheng, Chujie and
Sabour, Sahand and
Wen, Jiaxin and
Zhang, Zheng and
Huang, Minlie},
booktitle={Findings of ACL},
year={2023}
}
@inproceedings{liu-etal-2021-towards,
title={Towards Emotional Support Dialog Systems},
author={Liu, Siyang and
Zheng, Chujie and
Demasi, Orianna and
Sabour, Sahand and
Li, Yu and
Yu, Zhou and
Jiang, Yong and
Huang, Minlie},
booktitle={ACL},
year={2021}
}
```
|
d0rj/dialogsum-ru | 2023-05-13T06:27:30.000Z | [
"task_categories:summarization",
"task_categories:text2text-generation",
"task_categories:text-generation",
"annotations_creators:expert-generated",
"language_creators:translated",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:knkarthick/dialogsum",
"language:ru",
"... | d0rj | null | null | null | 1 | 9 | ---
annotations_creators:
- expert-generated
language_creators:
- translated
language:
- ru
license:
- mit
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- knkarthick/dialogsum
task_categories:
- summarization
- text2text-generation
- text-generation
task_ids: []
pretty_name: DIALOGSum Corpus (ru)
tags:
- conversations-summarization
- dialogue-summarization
dataset_info:
features:
- name: id
dtype: string
- name: dialogue
dtype: string
- name: summary
dtype: string
- name: topic
dtype: string
splits:
- name: train
num_bytes: 19115158
num_examples: 12460
- name: validation
num_bytes: 746312
num_examples: 500
- name: test
num_bytes: 2282379
num_examples: 1500
download_size: 10144708
dataset_size: 22143849
train-eval-index:
- config: samsum
task: summarization
task_id: summarization
splits:
eval_split: test
col_mapping:
dialogue: text
summary: target
---
# Dataset Card for DIALOGSum Corpus
## Dataset Description
### Links
- **Homepage:** https://aclanthology.org/2021.findings-acl.449
- **Repository:** https://github.com/cylnlp/dialogsum
- **Paper:** https://aclanthology.org/2021.findings-acl.449
### Dataset Summary
DialogSum is a large-scale dialogue summarization dataset, consisting of 13,460 (Plus 100 holdout data for topic generation) dialogues with corresponding manually labeled summaries and topics.
### Languages
Russian (translated from English by Google Translate).
## Dataset Structure
### Data Fields
- dialogue: text of dialogue.
- summary: human written summary of the dialogue.
- topic: human written topic/one liner of the dialogue.
- id: unique file id of an example.
### Data Splits
- train: 12460
- val: 500
- test: 1500
- holdout: 100 [Only 3 features: id, dialogue, topic]
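A minimal sketch of accessing the fields listed above:

```python
from datasets import load_dataset

dataset = load_dataset("d0rj/dialogsum-ru")

example = dataset["train"][0]
print(example["topic"])
print(example["dialogue"][:200])
print(example["summary"])
```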
## Dataset Creation
### Curation Rationale
In paper:
We collect dialogue data for DialogSum from three public dialogue corpora, namely Dailydialog (Li et al., 2017), DREAM (Sun et al., 2019) and MuTual (Cui et al., 2019), as well as an English speaking practice website. These datasets contain face-to-face spoken dialogues that cover a wide range of daily-life topics, including schooling, work, medication, shopping, leisure, travel. Most conversations take place between friends, colleagues, and between service providers and customers.
Compared with previous datasets, dialogues from DialogSum have distinct characteristics:
- Under rich real-life scenarios, including more diverse task-oriented scenarios;
- Have clear communication patterns and intents, which is valuable to serve as summarization sources;
- Have a reasonable length, which comforts the purpose of automatic summarization.
We ask annotators to summarize each dialogue based on the following criteria:
- Convey the most salient information;
- Be brief;
- Preserve important named entities within the conversation;
- Be written from an observer perspective;
- Be written in formal language.
### Who are the source language producers?
linguists
### Who are the annotators?
language experts
## Licensing Information
MIT License
## Citation Information
```
@inproceedings{chen-etal-2021-dialogsum,
title = "{D}ialog{S}um: {A} Real-Life Scenario Dialogue Summarization Dataset",
author = "Chen, Yulong and
Liu, Yang and
Chen, Liang and
Zhang, Yue",
booktitle = "Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021",
month = aug,
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.findings-acl.449",
doi = "10.18653/v1/2021.findings-acl.449",
pages = "5062--5074",
```
## Contributions
Thanks to [@cylnlp](https://github.com/cylnlp) for adding this dataset. |
davanstrien/amazonian_fish_classifier_data | 2023-05-09T14:56:52.000Z | [
"task_categories:image-classification",
"size_categories:1K<n<10K",
"license:cc-by-4.0",
"biology",
"lam",
"region:us"
] | davanstrien | TODO | TODO | null | 0 | 9 | ---
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': Ancistrus
'1': Apistogramma
'2': Astyanax
'3': Bario
'4': Bryconops
'5': Bujurquina
'6': Bunocephalus
'7': Characidium
'8': Charax
'9': Copella
'10': Corydoras
'11': Creagrutus
'12': Curimata
'13': Doras
'14': Erythrinus
'15': Gasteropelecus
'16': Gymnotus
'17': Hemigrammus
'18': Hyphessobrycon
'19': Knodus
'20': Moenkhausia
'21': Otocinclus
'22': Oxyropsis
'23': Phenacogaster
'24': Pimelodella
'25': Prochilodus
'26': Pygocentrus
'27': Pyrrhulina
'28': Rineloricaria
'29': Sorubim
'30': Tatia
'31': Tetragonopterus
'32': Tyttocharax
splits:
- name: train
num_bytes: 1068363405
num_examples: 3068
download_size: 330399200
dataset_size: 1068363405
task_categories:
- image-classification
pretty_name: cc
license: cc-by-4.0
tags:
- biology
- lam
size_categories:
- 1K<n<10K
--- |
Abrumu/Fashion_controlnet_dataset | 2023-05-16T00:45:16.000Z | [
"region:us"
] | Abrumu | null | null | null | 4 | 9 | ---
dataset_info:
features:
- name: target
dtype: image
- name: prompt
dtype: string
- name: control
dtype: image
- name: CLIP_captions
dtype: string
splits:
- name: train
num_bytes: 9533440093.0
num_examples: 11647
download_size: 9530317166
dataset_size: 9533440093.0
---
# Dataset Card for "Fashion_controlnet_dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
lucasmccabe-lmi/FLAN_CoT_alpaca_style | 2023-05-15T18:28:43.000Z | [
"arxiv:2210.11416",
"region:us"
] | lucasmccabe-lmi | null | null | null | 3 | 9 | ---
dataset_info:
features:
- name: output
dtype: string
- name: instruction
dtype: string
- name: input
dtype: string
splits:
- name: train
num_bytes: 37140971
num_examples: 74771
download_size: 14062550
dataset_size: 37140971
---
# Dataset Card for "FLAN_CoT_alpaca_style"
We provide a dataset representing the 9 chain-of-thought (reasoning) fine-tuning tasks from [FLAN](https://arxiv.org/pdf/2210.11416.pdf). Minor formatting has been applied:
- We apply an Alpaca-style format (i.e. instruction/input/output fields), as sketched below
- If the question is multiple-choice, the options are provided in the input field
- The phrase "Explain your reasoning step-by-step before providing the correct answer." is added to the end of the instruction field.
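A sketch of assembling these fields into a single Alpaca-style prompt string (the exact template wording below is an assumption; only the instruction/input/output field names come from this card):

```python
from datasets import load_dataset

dataset = load_dataset("lucasmccabe-lmi/FLAN_CoT_alpaca_style")

example = dataset["train"][0]
# Template wording is an assumption; the card only fixes the three field names.
prompt = (
    "### Instruction:\n" + example["instruction"] + "\n\n"
    "### Input:\n" + example["input"] + "\n\n"
    "### Response:\n"
)
print(prompt + example["output"])
```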
Numbers:
- Prompts: 74,771
- Tokens: 9,016,176 using the EleutherAI/gpt-neox-20b tokenizer (counting instruction + input + output) |
J-Mourad/MNAD.v1 | 2023-05-16T12:23:40.000Z | [
"region:us"
] | J-Mourad | null | null | null | 0 | 9 | # About the MNAD Dataset
The MNAD corpus is a collection of over **1 million Moroccan news articles** written in the modern Arabic language. These news articles have been gathered from 11 prominent electronic news sources. The dataset is made available to the academic community for research purposes, such as data mining (clustering, classification, etc.), information retrieval (ranking, search, etc.), and other non-commercial activities.
## Dataset Fields
- Title: The title of the article
- Body: The body of the article
- Category: The category of the article
- Source: The Electronic Newspaper source of the article
## About Version 1 of the Dataset (MNAD.v1)
Version 1 of the dataset comprises **418,563** articles classified into 19 categories. The data was collected from well-known electronic news sources, namely Akhbarona.ma, Hespress.ma, Hibapress.com, and Le360.com. The articles were stored in four separate CSV files, each corresponding to the news website source. Each CSV file contains three fields: Title, Body, and Category of the news article.
The dataset is rich in Arabic vocabulary, with approximately 906,125 unique words. It has been utilized as a benchmark in the research paper:
*"A Moroccan News Articles Dataset (MNAD) For Arabic Text Categorization"*. In 2021 International Conference on Decision Aid Sciences and Application (DASA).
This dataset is available for download from the following sources:
- Kaggle Datasets : [MNADv1](https://www.kaggle.com/datasets/jmourad100/mnad-moroccan-news-articles-dataset)
- Huggingface Datasets: [MNADv1](https://huggingface.co/datasets/J-Mourad/MNAD.v1)
## About Version 2 of the Dataset (MNAD.v2)
Version 2 of the MNAD dataset includes an additional **653,901** articles, bringing the total number of articles to over 1 million (**1,069,489**), classified into the same 19 categories as in version 1. The new documents were collected from seven additional prominent Moroccan news websites, namely al3omk.com, medi1news.com, alayam24.com, anfaspress.com, alyaoum24.com, barlamane.com, and SnrtNews.com.
The newly collected articles have been merged with the articles from the previous version into a single CSV file named `MNADv2.csv`. This file includes an additional column called "Source" to indicate the source of each news article.
Furthermore, MNAD.v2 incorporates improved pre-processing techniques and data-cleaning methods. These enhancements involve removing duplicates, eliminating multiple spaces, discarding rows with NaN values, replacing new lines with "\n", excluding very long and very short articles, and removing non-Arabic articles. These additions and improvements aim to enhance the usability and value of the MNAD dataset for researchers and practitioners in the field of Arabic text analysis.
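As a sketch, once `MNADv2.csv` has been obtained from one of the links below, the merged file described above can be inspected with pandas (the local path and exact column order are assumptions):

```python
import pandas as pd

# Local path is an assumption; MNADv2.csv is the merged file described above.
df = pd.read_csv("MNADv2.csv")
print(df.columns.tolist())         # expected: Title, Body, Category, Source
print(df["Source"].value_counts())
```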
This dataset is available for download from the following sources:
- Kaggle Datasets : [MNADv2](https://huggingface.co/datasets/J-Mourad/MNAD.v2)
- Huggingface Datasets: [MNADv2](https://huggingface.co/datasets/J-Mourad/MNAD.v2)
## Citation
If you use our data, please cite the following paper:
```bibtex
@inproceedings{MNAD2021,
author = {Mourad Jbene and
Smail Tigani and
Rachid Saadane and
Abdellah Chehri},
title = {A Moroccan News Articles Dataset ({MNAD}) For Arabic Text Categorization},
year = {2021},
publisher = {{IEEE}},
booktitle = {2021 International Conference on Decision Aid Sciences and Application ({DASA})},
doi = {10.1109/dasa53625.2021.9682402},
url = {https://doi.org/10.1109/dasa53625.2021.9682402},
}
``` |
camenduru/hdvila_test | 2023-05-22T14:39:53.000Z | [
"region:us"
] | camenduru | null | null | null | 0 | 9 | ---
dataset_info:
features:
- name: video_id
dtype: string
- name: url
dtype: string
- name: clip
list:
- name: clip_id
dtype: string
- name: span
sequence: string
splits:
- name: train
num_bytes: 764532309
num_examples: 300000
download_size: 236034101
dataset_size: 764532309
---
# Dataset Card for "hdvila_test"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
HumanCompatibleAI/ppo-seals-Swimmer-v0 | 2023-05-29T09:49:19.000Z | [
"region:us"
] | HumanCompatibleAI | null | null | null | 0 | 9 | ---
dataset_info:
features:
- name: obs
sequence:
sequence: float64
- name: acts
sequence:
sequence: float32
- name: infos
sequence: string
- name: terminal
dtype: bool
- name: rews
sequence: float64
splits:
- name: train
num_bytes: 128625365
num_examples: 104
download_size: 23073060
dataset_size: 128625365
---
# Dataset Card for "ppo-seals-Swimmer-v0"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
HumanCompatibleAI/ppo-seals-Walker2d-v0 | 2023-05-29T09:51:20.000Z | [
"region:us"
] | HumanCompatibleAI | null | null | null | 0 | 9 | ---
dataset_info:
features:
- name: obs
sequence:
sequence: float64
- name: acts
sequence:
sequence: float32
- name: infos
sequence: string
- name: terminal
dtype: bool
- name: rews
sequence: float64
splits:
- name: train
num_bytes: 60728770
num_examples: 104
download_size: 21507130
dataset_size: 60728770
---
# Dataset Card for "ppo-seals-Walker2d-v0"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
xmj2002/tang_poems | 2023-05-30T02:45:48.000Z | [
"region:us"
] | xmj2002 | null | null | null | 2 | 9 | ---
dataset_info:
features:
- name: author
dtype: string
- name: paragraphs
dtype: string
- name: title
dtype: string
splits:
- name: train
num_bytes: 6199272.5686873095
num_examples: 36000
- name: test
num_bytes: 908193.4313126908
num_examples: 5274
download_size: 5867663
dataset_size: 7107466.0
---
# Dataset Card for "tang_poems"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
sadFaceEmoji/english-poems | 2023-06-03T15:45:56.000Z | [
"task_categories:text-generation",
"size_categories:10K<n<100K",
"language:en",
"region:us"
] | sadFaceEmoji | null | null | null | 2 | 9 | ---
task_categories:
- text-generation
language:
- en
size_categories:
- 10K<n<100K
---
This dataset contains 93,265 English poems. |
macrocosm/arxiv_abstracts | 2023-06-04T19:09:46.000Z | [
"size_categories:1M<n<10M",
"language:en",
"license:mit",
"region:us"
] | macrocosm | null | null | null | 6 | 9 | ---
license: mit
language:
- en
size_categories:
- 1M<n<10M
---
All 2.3 million papers on arXiv, embedded via their abstracts with the InstructorXL model.
No claims are made about the copyright or license of contained materials. We assume no responsibility for, and are not liable under any circumstances for, damages. Use at your own risk.
Good luck, have fun. |
kaist-ai/Flan-Collection_subset_multilingual | 2023-06-05T09:41:38.000Z | [
"task_categories:text-generation",
"task_categories:text-classification",
"size_categories:1M<n<10M",
"language:en",
"license:cc-by-4.0",
"arxiv:2305.14045",
"region:us"
] | kaist-ai | """
_LICENSE = "CC BY 4.0"
_HOMEPAGE = "https://github.com/kaistAI/CoT-Collection"
_LANGUAGES = {
"ko": "Korean",
"fr": "French",
"ru": "Russian",
"ja": "Japanese",
"zh": "Chinese",
}
# _ALL_LANGUAGES = "all_languages"
class FlanCollectionMultiConfig(datasets.BuilderConfig): | @article{kim2023cot,
title={The CoT Collection: Improving Zero-shot and Few-shot Learning of Language Models via Chain-of-Thought Fine-Tuning},
author={Kim, Seungone and Joo, Se June and Kim, Doyoung and Jang, Joel and Ye, Seonghyeon and Shin, Jamin and Seo, Minjoon},
journal={arXiv preprint arXiv:2305.14045},
year={2023}
} | null | 1 | 9 | ---
license: cc-by-4.0
task_categories:
- text-generation
- text-classification
language:
- en
size_categories:
- 1M<n<10M
---
# Dataset Card for Flan-Collection_subset_multilingual
## Dataset Description
- **Homepage:** https://github.com/kaistAI/CoT-Collection
- **Repository:** https://github.com/kaistAI/CoT-Collection
- **Paper:** https://arxiv.org/abs/2305.14045
- **Point of Contact:** sejune@lklab.io
### Dataset Summary
This dataset is a multilingual subset of the Flan Collection, released as part of the [CoT Collection](https://github.com/kaistAI/CoT-Collection) project; the accompanying loader script covers Korean, French, Russian, Japanese, and Chinese.
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
English; the loader script also defines Korean, French, Russian, Japanese, and Chinese configurations.
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
| name | train |
|-------------------|------:|
|CoT-Collection|1837928|
## Additional Information
### Citation Information
```
@article{kim2023cot,
title={The CoT Collection: Improving Zero-shot and Few-shot Learning of Language Models via Chain-of-Thought Fine-Tuning},
author={Kim, Seungone and Joo, Se June and Kim, Doyoung and Jang, Joel and Ye, Seonghyeon and Shin, Jamin and Seo, Minjoon},
journal={arXiv preprint arXiv:2305.14045},
year={2023}
}
``` |
Cubpaw/voxelgym_5c_42x42_100000 | 2023-06-11T21:06:32.000Z | [
"region:us"
] | Cubpaw | null | null | null | 0 | 9 | ---
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype: image
- name: rgb_label
dtype: image
- name: path_label
dtype: image
- name: path_rgb_label
dtype: image
splits:
- name: train
num_bytes: 73637120.0
num_examples: 80000
- name: validation
num_bytes: 18495820.0
num_examples: 20000
download_size: 70037769
dataset_size: 92132940.0
---
# Dataset Card for "voxelgym_5c_42x42_100000"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
jdabello/yt_transcriptions | 2023-06-16T18:27:22.000Z | [
"license:apache-2.0",
"region:us"
] | jdabello | null | null | null | 0 | 9 | ---
license: apache-2.0
---
|
mtkinit/Example-Dataset-Super-2 | 2023-06-20T16:34:39.000Z | [
"keyword1",
"keyword2",
"region:us"
] | mtkinit | null | null | null | 0 | 9 | ---
pretty_name: Example-Dataset-Super-2
tags:
- keyword1
- keyword2
---
# Example-Dataset-Super-2
Created from AIOD platform |
KaiLv/UDR_AGNews | 2023-06-21T12:22:33.000Z | [
"region:us"
] | KaiLv | null | null | null | 0 | 9 | ---
dataset_info:
features:
- name: idx
dtype: int64
- name: label
dtype: int64
- name: headline
dtype: string
- name: sentence
dtype: string
splits:
- name: train
num_bytes: 7767906
num_examples: 29914
- name: test
num_bytes: 779115
num_examples: 3000
- name: debug
num_bytes: 1301480
num_examples: 5000
download_size: 6889819
dataset_size: 9848501
---
# Dataset Card for "UDR_AGNews"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
ChanceFocus/flare-fiqasa | 2023-08-18T16:24:08.000Z | [
"region:us"
] | ChanceFocus | null | null | null | 1 | 9 | ---
dataset_info:
features:
- name: id
dtype: string
- name: query
dtype: string
- name: answer
dtype: string
- name: text
dtype: string
- name: choices
sequence: string
- name: gold
dtype: int64
splits:
- name: train
num_bytes: 248828
num_examples: 750
- name: valid
num_bytes: 61667
num_examples: 188
- name: test
num_bytes: 77672
num_examples: 235
download_size: 0
dataset_size: 388167
---
# Dataset Card for "flare-fiqasa"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
numind/C4-generic-sentiment_analysis-v1 | 2023-06-30T14:32:43.000Z | [
"task_categories:text-classification",
"task_categories:zero-shot-classification",
"task_ids:sentiment-analysis",
"size_categories:100K<n<1M",
"language:en",
"license:mit",
"region:us"
] | numind | null | null | null | 3 | 9 | ---
license: mit
task_categories:
- text-classification
- zero-shot-classification
task_ids:
- sentiment-analysis
language:
- en
size_categories:
- 100K<n<1M
---
250k sentences (40k after balancing the classes) from the C4 dataset (a clean version of Common Crawl), with sentiment annotations (Positive, Negative, Neutral) automatically produced by GPT-3.5. Can be used to train a generic (domain-independent) sentiment analysis model.

Labels:
- 0: Positive
- 1: Negative
- 2: Neutral
|
AFFFPupu/Maths_competition_questions | 2023-06-28T18:01:42.000Z | [
"license:unknown",
"region:us"
] | AFFFPupu | null | null | null | 0 | 9 | ---
license: unknown
---
|
chromadb/paul_graham_essay | 2023-07-01T14:27:10.000Z | [
"region:us"
] | chromadb | null | null | null | 0 | 9 | ---
dataset_info:
features:
- name: id
dtype: string
- name: embedding
sequence: float64
- name: metadata
struct:
- name: author
dtype: string
- name: document
dtype: string
splits:
- name: data
num_bytes: 1359141
num_examples: 104
download_size: 1270436
dataset_size: 1359141
---
# Dataset Card for "paul_graham_essay"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
CCOM/wikimt_plus | 2023-10-10T10:43:28.000Z | [
"task_categories:text-classification",
"task_categories:zero-shot-classification",
"task_categories:text-generation",
"task_categories:token-classification",
"task_categories:feature-extraction",
"task_categories:text2text-generation",
"size_categories:1K<n<10K",
"language:en",
"license:mit",
"mus... | CCOM | We plan to publish WikiMT++ onto TISMIR... | @dataset{Zhou2023WikiMTPlus,
title = {WikiMT++ Dataset Card},
author = {Monan Zhou, Shangda Wu, Yuan Wang, Wei Li},
journal = {ArXiv},
month = {Sep},
year = {2023},
publisher = {HuggingFace},
version = {2.2},
volume = {abs/2309.13259},
url = {https://arxiv.org/abs/2309.13259}
} | null | 1 | 9 | ---
license: mit
task_categories:
- text-classification
- zero-shot-classification
- text-generation
- token-classification
- feature-extraction
- text2text-generation
language:
- en
tags:
- music
- art
pretty_name: WikiMT++
size_categories:
- 1K<n<10K
---
# Dataset Card for WikiMT++
## Dataset Description
- **Homepage:** [CCOM/wikimt_plus](https://huggingface.co/datasets/CCOM/wikimt_plus)
- **Repository:** `git@hf.co:datasets/CCOM/wikimt_plus`
- **Paper:** [WikiMT++ Dataset Card](https://arxiv.org/pdf/2309.13259.pdf)
- **Leaderboard:** [arxiv:2309.13259](https://arxiv.org/abs/2309.13259)
- **Point of Contact:** This dataset has not been officially released yet and is still undergoing validation and evaluation. The data publicly available during the validation process is solely intended for verification and cannot be used for other purposes.
## Dataset Summary
WikiMT++ is an expanded and refined version of WikiMusicText (WikiMT), featuring 1010 curated lead sheets in ABC notation. To expand the application scenarios of WikiMT, we add both objective attributes (album, lyrics, video) and subjective emotion attributes (12 emotion adjectives and emo\_4q, the Russell 4Q model), enhancing its usability for music information retrieval, conditional music generation, automatic composition, emotion classification, and more. Additionally, CLaMP is used to correct the attributes inherited from WikiMT, reducing errors introduced during the original data collection and enhancing the accuracy and completeness of the dataset.
## Supported Tasks and Leaderboards
Audio classification, music generation, music information retrieval (MIR), etc.
## Languages
Multilingual
## Usage
```python
from datasets import load_dataset

dataset = load_dataset("CCOM/wikimt_plus")
for item in dataset['train']:
    print(item['midi_score'])
```
## Requirements
librosa, soundfile
## Dataset Structure
### Data Instances
.json, .zip(.jpg, .mid, .mp3, .musicxml, .mxl, .pdf)
### Data Fields
audio music, text, video url, symbolic music, image, etc
### Data Splits
Train, validation, test
## Dataset Creation
### Curation Rationale
Promoting the development of AI in the music industry
### Source Data
#### Initial Data Collection and Normalization
Monan Zhou, Shangda Wu, Yuan Wang, et al.
#### Who are the source language producers?
Composers of the songs in dataset
### Annotations
#### Annotation process
1. MTurk workers and scripts were used to verify WikiMT v2 into v2.2.
2. Annotators then fixed WikiMT v2.2 into v2.3.
#### Who are the annotators?
Annotators from mturk, CCOM and Fudan University
### Personal and Sensitive Information
None
## Considerations for Using the Data
### Social Impact of Dataset
Promoting the development of AI in the music industry
### Discussion of Biases
Most are English songs
### Other Known Limitations
Samples are not balanced enough
## Additional Information
### Dataset Curators
Monan Zhou
### Licensing Information
```
MIT License
Copyright (c) CCOM
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
```
### Citation Information
```
@article{DBLP:journals/corr/abs-2309-13259,
author = {Monan Zhou and
Shangda Wu and
Yuan Wang and
Wei Li},
title = {WikiMT++ Dataset Card},
journal = {CoRR},
volume = {abs/2309.13259},
year = {2023},
url = {https://doi.org/10.48550/arXiv.2309.13259},
doi = {10.48550/arXiv.2309.13259},
eprinttype = {arXiv},
eprint = {2309.13259},
timestamp = {Wed, 27 Sep 2023 11:44:46 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2309-13259.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
### Contributions
Provides a dataset for music classification, generation, etc. |
jjzha/kompetencer | 2023-10-08T15:16:13.000Z | [
"language:da",
"license:cc-by-4.0",
"region:us"
] | jjzha | null | null | null | 0 | 9 | ---
license: cc-by-4.0
language: da
---
This is the Kompetencer dataset created by:
```
@inproceedings{zhang-etal-2022-kompetencer,
title = "Kompetencer: Fine-grained Skill Classification in {D}anish Job Postings via Distant Supervision and Transfer Learning",
author = "Zhang, Mike and
Jensen, Kristian N{\o}rgaard and
Plank, Barbara",
booktitle = "Proceedings of the Thirteenth Language Resources and Evaluation Conference",
month = jun,
year = "2022",
address = "Marseille, France",
publisher = "European Language Resources Association",
url = "https://aclanthology.org/2022.lrec-1.46",
pages = "436--447",
}
```
There are document delimiters indicated by `idx`.
Number of samples (sentences):
- train: 778
- dev: 346
- test: 262
Sources:
- STAR (house)
Type of tags:
- Generic BIO tags with keys `tags_skill` and `tags_knowledge`
Sample:
```
{
"idx": 1,
"tokens": ["Du", "skal", "s\u00e6tte", "dagsordenen", "v\u00e6re", "v\u00e6rdiskabende", "og", "levere", "skarpt", "fagligt", "og", "strategisk", "med-", "og", "modspil", "."],
"tags_skill": ["O", "O", "B", "I", "B", "I", "O", "B", "I", "I", "I", "I", "I", "I", "I", "I"],
"tags_knowledge": ["O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O"]
}
``` |
richardr1126/spider-natsql-context-instruct | 2023-07-06T15:25:36.000Z | [
"source_datasets:spider",
"language:en",
"license:cc-by-4.0",
"sql",
"spider",
"natsql",
"text-to-sql",
"sql finetune",
"arxiv:1809.08887",
"arxiv:2109.05153",
"region:us"
] | richardr1126 | null | null | null | 0 | 9 | ---
language:
- en
license:
- cc-by-4.0
source_datasets:
- spider
tags:
- sql
- spider
- natsql
- text-to-sql
- sql finetune
dataset_info:
features:
- name: db_id
dtype: string
- name: text
dtype: string
---
# Dataset Card for Spider NatSQL Context Instruct
### Dataset Summary
[Spider](https://arxiv.org/abs/1809.08887) is a large-scale complex and cross-domain semantic parsing and text-to-SQL dataset annotated by 11 Yale students
The goal of the Spider challenge is to develop natural language interfaces to cross-domain databases.
This dataset was created to finetune LLMs on the Spider dataset with database context using NatSQL.
### NatSQL
[NatSQL](https://arxiv.org/abs/2109.05153) is an intermediate representation for SQL that simplifies the queries and reduces the mismatch between
natural language and SQL. NatSQL preserves the core functionalities of SQL, but removes some clauses and keywords
that are hard to infer from natural language descriptions. NatSQL also makes schema linking easier by reducing the
number of schema items to predict. NatSQL can be easily converted to executable SQL queries and can improve the
performance of text-to-SQL models.
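A minimal sketch of inspecting the instruct-formatted examples (the split name is an assumption, since no splits are listed in the metadata):

```python
from datasets import load_dataset

dataset = load_dataset("richardr1126/spider-natsql-context-instruct")

example = dataset["train"][0]  # split name is an assumption
print(example["db_id"])
print(example["text"][:300])   # instruction-style text with schema context and NatSQL target
```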
### Yale Lily Spider Leaderboards
The leaderboard can be seen at https://yale-lily.github.io/spider
### Languages
The text in the dataset is in English.
### Licensing Information
The spider dataset is licensed under
the [CC BY-SA 4.0](https://creativecommons.org/licenses/by-sa/4.0/legalcode)
### Citation
```
@article{yu2018spider,
title={Spider: A large-scale human-labeled dataset for complex and cross-domain semantic parsing and text-to-sql task},
author={Yu, Tao and Zhang, Rui and Yang, Kai and Yasunaga, Michihiro and Wang, Dongxu and Li, Zifan and Ma, James and Li, Irene and Yao, Qingning and Roman, Shanelle and others},
journal={arXiv preprint arXiv:1809.08887},
year={2018}
}
```
```
@inproceedings{gan-etal-2021-natural-sql,
title = "Natural {SQL}: Making {SQL} Easier to Infer from Natural Language Specifications",
author = "Gan, Yujian and
Chen, Xinyun and
Xie, Jinxia and
Purver, Matthew and
Woodward, John R. and
Drake, John and
Zhang, Qiaofu",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2021",
month = nov,
year = "2021",
address = "Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.findings-emnlp.174",
doi = "10.18653/v1/2021.findings-emnlp.174",
pages = "2030--2042",
}
``` |
fhirfly/medicalquestions | 2023-07-15T23:23:45.000Z | [
"task_categories:text-classification",
"size_categories:10K<n<100K",
"language:en",
"license:mit",
"medical",
"region:us"
] | fhirfly | null | null | null | 3 | 9 | ---
license: mit
task_categories:
- text-classification
language:
- en
tags:
- medical
pretty_name: FhirFly Medical Questions
size_categories:
- 10K<n<100K
---
# 🤗 Dataset Card: fhirfly/medicalquestions
## Dataset Overview
- Dataset name: fhirfly/medicalquestions
- Dataset size: 23,297 questions
- Labels: 1 (medical), 0 (non-medical)
- Distribution: Evenly distributed between medical and non-medical questions
## Dataset Description
The fhirfly/medicalquestions dataset is a collection of 23,297 questions labeled as either medical or non-medical. The dataset aims to provide a diverse range of questions covering various medical and non-medical domains.
The questions in the dataset have been manually labeled by domain experts based on the context and content of each question. Each question is assigned a label of 1 if it is determined to be a medical question and a label of 0 if it is classified as a non-medical question.
## Dataset Structure
The dataset consists of a single file containing the following columns:
- **Text**: The text of the question.
- **Label**: The label assigned to each question, either 1 (medical) or 0 (non-medical).
The questions are evenly distributed between medical and non-medical categories, ensuring a balanced dataset for training and evaluation.
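A minimal sketch of loading the data and splitting it by label, assuming a `train` split and the column names used above (`Text`, `Label`):
```python
from datasets import load_dataset

# Assumed: a "train" split and the "Text"/"Label" columns from this card.
ds = load_dataset("fhirfly/medicalquestions", split="train")

medical = ds.filter(lambda row: row["Label"] == 1)
non_medical = ds.filter(lambda row: row["Label"] == 0)
print(len(medical), len(non_medical))  # should be roughly balanced
```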
## Potential Biases
Efforts have been made to ensure that the dataset is representative of various medical and non-medical topics. However, it is important to acknowledge that biases may exist in the dataset due to the subjective nature of labeling questions. Biases could be present in terms of the types of questions included, the representation of certain medical conditions or non-medical topics, or the labeling process itself.
It is recommended to perform thorough evaluation and analysis of the dataset to identify and mitigate potential biases during model training and deployment. Care should be taken to address any biases to ensure fair and unbiased predictions.
## Dataset Quality
The fhirfly/medicalquestions dataset has undergone manual labeling by domain experts, which helps maintain a high level of quality and accuracy. However, human labeling is not entirely immune to errors or subjectivity.
To ensure the quality of the dataset, a thorough review process has been conducted to minimize errors and maintain consistency in labeling. Nonetheless, it is advisable to validate and verify the data as part of your specific use case to ensure it meets your requirements.
## Data License
The fhirfly/medicalquestions dataset is released under the MIT license. Please refer to the license file accompanying the dataset for more information on its usage and any restrictions that may apply.
## Dataset Citation
If you use the fhirfly/medicalquestions dataset in your work, please cite it as:
```
@dataset{fhirfly/medicalquestions,
title = {fhirfly/medicalquestions},
author = {fhirfly},
year = {2023},
publisher = {Hugging Face},
version = {1.0.0},
url = {https://huggingface.co/datasets/fhirfly/medicalquestions}
}
``` |
PedroCJardim/QASports | 2023-07-28T19:21:37.000Z | [
"task_categories:question-answering",
"size_categories:1M<n<10M",
"language:en",
"license:mit",
"sports",
"open-domain-qa",
"extractive-qa",
"region:us"
] | PedroCJardim | null | null | null | 2 | 9 | ---
license: mit
task_categories:
- question-answering
language:
- en
tags:
- sports
- open-domain-qa
- extractive-qa
size_categories:
- 1M<n<10M
---
### Dataset Summary
QASports is the first large sports-themed question answering dataset. Its documents come from the fan wikis of three of the world's most popular sports: soccer, football, and basketball.
Each sport can be downloaded individually as a subset with train, test, and validation splits, or all three can be downloaded together.
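A minimal loading sketch; the per-sport config name is an assumption based on the description above:
```python
from datasets import load_dataset

# The config name ("basketball") is an assumption based on this card;
# each subset ships train/validation/test splits.
basketball = load_dataset("PedroCJardim/QASports", "basketball")

sample = basketball["train"][0]
print(sample["question"])
print(sample["answer"]["text"])
```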
### Supported Tasks and Leaderboards
Extractive Question Answering.
### Languages
English.
## Dataset Structure
### Data Instances
An example of 'train' looks as follows.
```
{
"answer": {
"offset": [42,44],
"text": "16"
},
"context": "The following is a list of squads for all 16 national teams competing at the Copa América Centenario. Each national team had to submit a squad of 23 players, 3 of whom must be goalkeepers. The provisional squads were announced on 4 May 2016. A final selection was provided to the organisers on 20 May 2016." ,
"qa_id": "61200579912616854316543272456523433217",
"question": "How many national teams competed at the Copa América Centenario?",
"context_id": "171084087809998484545703642399578583178",
"context_title": "Copa América Centenario squads | Football Wiki | Fandom",
"url": "https://football.fandom.com/wiki/Copa_Am%C3%A9rica_Centenario_squads"
}
```
### Data Fields
The data fields are the same among all splits.
- `qa_id`: a `string` feature.
- `context_id`: a `string` feature.
- `context_title`: a `string` feature.
- `url`: a `string` feature.
- `context`: a `string` feature.
- `question`: a `string` feature.
- `answer`: a dictionary feature containing:
- `text`: a `string` feature.
- `offset`: a list feature containing:
- 2 `int32` features for start and end. |
sudy-super/dialogsum-ja | 2023-07-15T10:27:58.000Z | [
"task_categories:summarization",
"language:ja",
"license:mit",
"region:us"
] | sudy-super | null | null | null | 10 | 9 | ---
license: mit
task_categories:
- summarization
language:
- ja
---
**dialogsum-ja**
This dataset is a Japanese dialogue summarization dataset created by translating dialogsum, CSDS, and other datasets into Japanese.
**Original datasets**
knkarthick/dialogsum https://huggingface.co/datasets/knkarthick/dialogsum
xiaolinAndy/CSDS https://github.com/xiaolinAndy/CSDS |
nRuaif/book2-lite-cleaned | 2023-07-18T06:25:50.000Z | [
"region:us"
] | nRuaif | null | null | null | 1 | 9 | Entry not found |
mtc/frank-test-set-with-faithfulness-annotation | 2023-07-19T07:39:10.000Z | [
"region:us"
] | mtc | null | null | null | 0 | 9 | ---
dataset_info:
features:
- name: hash
dtype: string
- name: model_name
dtype: string
- name: article
dtype: string
- name: summary
dtype: string
- name: reference
dtype: string
- name: summary_sentences
sequence: string
- name: summary_sentences_annotations
list:
- name: annotator_0
sequence: string
- name: annotator_1
sequence: string
- name: annotator_2
sequence: string
- name: split
dtype: string
- name: Factual
dtype: bool
- name: NoE_count
dtype: int64
- name: __index_level_0__
dtype: int64
splits:
- name: validation
num_bytes: 2555126
num_examples: 671
- name: test
num_bytes: 5688685
num_examples: 1575
download_size: 1477522
dataset_size: 8243811
---
# Dataset Card for "frank-test-set-with-faithfulness-annotation"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
TrainingDataPro/spam-text-messages-dataset | 2023-09-19T19:33:39.000Z | [
"task_categories:text-classification",
"language:en",
"license:cc-by-nc-nd-4.0",
"code",
"finance",
"region:us"
] | TrainingDataPro | The SMS spam dataset contains a collection of text messages. The dataset
includes a diverse range of spam messages, including promotional offers,
fraudulent schemes, phishing attempts, and other forms of unsolicited
communication.
Each SMS message is represented as a string of text, and each entry in the
dataset also has a link to the corresponding screenshot. The dataset's content
represents real-life examples of spam messages that users encounter in their
everyday communication. | @InProceedings{huggingface:dataset,
title = {spam-text-messages-dataset},
author = {TrainingDataPro},
year = {2023}
} | null | 1 | 9 | ---
language:
- en
license: cc-by-nc-nd-4.0
task_categories:
- text-classification
tags:
- code
- finance
dataset_info:
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 4134124
num_examples: 25
download_size: 3713381
dataset_size: 4134124
---
# Spam Text Messages Dataset
The SMS spam dataset contains a collection of text messages. The dataset includes a diverse range of spam messages, including *promotional offers, fraudulent schemes, phishing attempts, and other forms of unsolicited communication*.
Each SMS message is represented as a string of text, and each entry in the dataset also has a link to the corresponding screenshot. The dataset's content represents real-life examples of spam messages that users encounter in their everyday communication.
### The dataset's possible applications:
- spam detection
- fraud detection
- customer support automation
- trend and sentiment analysis
- educational purposes
- network security
# Get the dataset
### This is just an example of the data
Leave a request on [**https://trainingdata.pro/data-market**](https://trainingdata.pro/data-market?utm_source=huggingface&utm_medium=cpc&utm_campaign=spam-text-messages-dataset) to discuss your requirements, learn about the price and buy the dataset.
# Content
- **images**: includes screenshots of spam messages
- **.csv** file: contains information about the dataset
### File with the extension .csv
includes the following information (a minimal loading sketch follows this list):
- **image**: link to the screenshot with the spam message,
- **text**: text of the spam message
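A minimal loading sketch with the `datasets` library; the `image`/`text` features and the `train` split follow the `dataset_info` header above:
```python
from datasets import load_dataset

# Features ("image", "text") and the "train" split follow dataset_info above.
ds = load_dataset("TrainingDataPro/spam-text-messages-dataset", split="train")

entry = ds[0]
print(entry["text"])   # text of the spam message
entry["image"].show()  # PIL screenshot of the message
```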
# Spam messages might be collected in accordance with your requirements.
## [TrainingData](https://trainingdata.pro/data-market?utm_source=huggingface&utm_medium=cpc&utm_campaign=spam-text-messages-dataset) provides high-quality data annotation tailored to your needs
More datasets in TrainingData's Kaggle account: **https://www.kaggle.com/trainingdatapro/datasets**
TrainingData's GitHub: **https://github.com/trainingdata-pro** |
HeshamHaroon/Arabic_fake_news_dataset | 2023-07-23T10:22:59.000Z | [
"task_categories:text-classification",
"language:ar",
"license:apache-2.0",
"fake-news",
"arabic",
"web-scraping",
"region:us"
] | HeshamHaroon | null | null | null | 2 | 9 | ---
language: ar
pretty_name: "Arabic Fake News Dataset"
tags:
- "fake-news"
- "arabic"
- "web-scraping"
task_categories:
- "text-classification"
- "natural-language-processing"
- "web-scraping"
license: "apache-2.0"
---
# Arabic_fake_news_dataset
### Please note that this dataset needs more preprocessing.
## Introduction
This repository contains the `Arabic_fake_news_dataset`, a collection of news articles scraped from the Egyptian platform [متصدقش (Matsda2sh)](https://matsda2sh.com/). The dataset is intended for studying and addressing the spread of fake news within the Egyptian community. It includes news articles classified as either fake or true, along with their corresponding titles.
## Dataset Details
- The dataset is provided in the form of a JSON file named `arabic_fake_news_dataset.json`.
- The JSON file contains a list of dictionaries, where each dictionary represents a news article and has the following key-value pairs (a reading sketch follows this list):
- `link`: The URL of the news article.
- `fakes`: A list of strings representing fake news titles associated with the article.
- `trues`: A list of strings representing true news titles associated with the article.
- The data was collected through web scraping from the [متصدقش (Matsda2sh)](https://matsda2sh.com/) platform, which focuses on the dissemination of fake news within the Egyptian context.
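A minimal sketch of reading the JSON file described above:
```python
import json

# Read the list of article dictionaries described above.
with open("arabic_fake_news_dataset.json", encoding="utf-8") as f:
    articles = json.load(f)

for article in articles[:3]:
    print(article["link"], len(article["fakes"]), len(article["trues"]))
```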
## Preprocessing Note
It is essential to note that the dataset may require preprocessing steps to ensure the data's quality and consistency for use in various natural language processing (NLP) tasks. The suggested preprocessing steps include, but are not limited to:
- Removing duplicate entries to avoid data redundancy (a minimal sketch follows this list).
- Handling missing or incorrect data, if any.
- Removing noise or irrelevant information that might have been introduced during web scraping.
- Tokenization and text normalization to prepare the text for downstream NLP tasks.
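Continuing from the loading snippet above, a minimal deduplication sketch for the first step, keyed on the article link:
```python
# Drop duplicate articles, keyed on their URL.
seen, unique = set(), []
for article in articles:
    if article["link"] not in seen:
        seen.add(article["link"])
        unique.append(article)
print(f"{len(articles) - len(unique)} duplicate entries removed")
```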
## Citation
If you use this dataset in your research or projects, we kindly request you to cite the source:
```
@misc{Arabic_fake_news_dataset,
  title = {Arabic_fake_news_dataset},
  author = {Hesham Haroon},
  year = {2023}
}
```
## Disclaimer
This dataset is provided for research purposes only. The authors and contributors of this repository do not guarantee the accuracy or reliability of the information in the dataset. Users are advised to exercise caution and conduct their own verification and validation processes.
## Acknowledgments
Special thanks to the team at [متصدقش (Matsda2sh)](https://matsda2sh.com/) for providing the data and making it accessible for research purposes.
### Happy NLP Research!
|
iamtarun/code_contest_python3_alpaca | 2023-07-27T15:44:05.000Z | [
"task_categories:question-answering",
"task_categories:text2text-generation",
"task_categories:text-generation",
"size_categories:1K<n<10K",
"code",
"region:us"
] | iamtarun | null | null | null | 0 | 9 | ---
dataset_info:
features:
- name: id
dtype: string
- name: description
dtype: string
- name: code
dtype: string
- name: test_samples
sequence:
- name: input
dtype: string
- name: output
dtype: string
- name: source
dtype:
class_label:
names:
'0': UNKNOWN_SOURCE
'1': CODECHEF
'2': CODEFORCES
'3': HACKEREARTH
'4': CODEJAM
'5': ATCODER
'6': AIZU
- name: prompt
dtype: string
splits:
- name: train
num_bytes: 624564538
num_examples: 8139
- name: valid
num_bytes: 32348022
num_examples: 95
- name: test
num_bytes: 20764786
num_examples: 122
download_size: 192319894
dataset_size: 677677346
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: valid
path: data/valid-*
- split: test
path: data/test-*
task_categories:
- question-answering
- text2text-generation
- text-generation
tags:
- code
size_categories:
- 1K<n<10K
---
# Dataset Card for Code Contest Processed
## Dataset Summary
This dataset contains coding contest questions and their solutions written in Python 3.
It was created by processing the [code_contest dataset from DeepMind](https://huggingface.co/datasets/deepmind/code_contests), a competitive programming dataset for machine learning. Read more about the dataset at the [original source](https://huggingface.co/datasets/deepmind/code_contests).
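A minimal loading sketch; split and column names follow the `dataset_info` header above:
```python
from datasets import load_dataset

# Splits ("train", "valid", "test") and columns follow dataset_info above.
ds = load_dataset("iamtarun/code_contest_python3_alpaca", split="train")

row = ds[0]
print(row["description"][:200])  # problem statement
print(row["code"][:200])         # one correct Python 3 solution
print(row["prompt"][:200])       # alpaca-style prompt
```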
## Columns Description
- `id` : unique string associated with a problem
- `description` : problem description
- `code` : one correct code for the problem
- `test_samples` : contains inputs and their corresponding outputs for the problem
- `source` : source of problem
- `prompt` : alpaca style generated prompt for text generation |
Azure99/blossom-chat-v1 | 2023-07-28T07:41:32.000Z | [
"task_categories:text-generation",
"task_categories:text2text-generation",
"size_categories:10K<n<100K",
"language:zh",
"language:en",
"license:apache-2.0",
"region:us"
] | Azure99 | null | null | null | 2 | 9 | ---
license: apache-2.0
task_categories:
- text-generation
- text2text-generation
language:
- zh
- en
size_categories:
- 10K<n<100K
---
# BLOSSOM CHAT V1
### Introduction
Blossom Chat V1 is a bilingual Chinese-English conversation dataset derived from ShareGPT 90K, suitable for multi-turn dialogue fine-tuning.
The dataset extracts the multi-turn dialogue instructions from ShareGPT, translates only the instructions, and then iteratively calls gpt-3.5-turbo-0613 on the multi-turn instructions.
Compared with the original ShareGPT data, this mainly addresses the scarcity of Chinese dialogue data and the output truncation caused by ChatGPT's generation length limit.
This release covers 20% of the full data, about 30K records.
### Languages
Mainly Chinese and English, mixed at a ratio of roughly 5:1 (Chinese to English).
### Dataset Structure
Each record represents a complete multi-turn conversation and contains two fields, id and conversations (a minimal loading sketch follows this list).
- id: a string holding the original ShareGPT conversation id; the original conversation can be accessed at https://sharegpt.com/c/id.
- conversations: an array of objects, each with two fields, role and content; role is either user or assistant, marking user input and assistant output respectively, while content holds the corresponding text.
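A minimal sketch of iterating over one conversation, following the structure above (the `train` split name is an assumption):
```python
from datasets import load_dataset

# The "train" split name is an assumption; the record structure
# (id plus conversations with role/content) follows this card.
ds = load_dataset("Azure99/blossom-chat-v1", split="train")

for turn in ds[0]["conversations"]:
    print(f'{turn["role"]}: {turn["content"][:80]}')
```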
### Dataset Limitations
Because only the inputs of the original multi-turn conversations were extracted, dialogues that involve randomness (e.g., guessing a random number) may be incoherent across turns.
In addition, all responses in this dataset were generated by gpt-3.5-turbo-0613 and have not undergone rigorous validation; they may contain inaccurate or even seriously wrong answers.
izumi-lab/wikinews-en-20230728 | 2023-07-29T03:07:56.000Z | [
"language:en",
"license:cc-by-2.5",
"region:us"
] | izumi-lab | null | null | null | 1 | 9 | ---
dataset_info:
features:
- name: text
dtype: string
- name: title
dtype: string
- name: url
dtype: string
splits:
- name: train
num_bytes: 114757457
num_examples: 43246
download_size: 38557626
dataset_size: 114757457
license: cc-by-2.5
language:
- en
---
# Dataset Card for "wikinews-en-20230728"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
DynamicSuperb/DialogueActClassification_DailyTalk | 2023-08-02T08:51:06.000Z | [
"region:us"
] | DynamicSuperb | null | null | null | 0 | 9 | ---
dataset_info:
features:
- name: file
dtype: string
- name: audio
dtype: audio
- name: instruction
dtype: string
- name: label
dtype: string
splits:
- name: test
num_bytes: 1379347051.504
num_examples: 4758
download_size: 1292848993
dataset_size: 1379347051.504
---
# Dataset Card for "DialogueActClassification_DailyTalk"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
universeTBD/arxiv-astro-abstracts-all | 2023-07-30T05:49:39.000Z | [
"license:mit",
"region:us"
] | universeTBD | null | null | null | 2 | 9 | ---
license: mit
dataset_info:
features:
- name: arxiv_id
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 321324568.42730767
num_examples: 264252
- name: valid
num_bytes: 35703540.4771302
num_examples: 29362
- name: test
num_bytes: 39670060.09556214
num_examples: 32624
download_size: 230417103
dataset_size: 396698169.0
---
|
ChanceFocus/flare-convfinqa | 2023-07-31T03:49:30.000Z | [
"region:us"
] | ChanceFocus | null | null | null | 2 | 9 | ---
dataset_info:
features:
- name: id
dtype: string
- name: query
dtype: string
- name: answer
dtype: string
- name: turn
dtype: int64
- name: dialogue_id
dtype: int64
splits:
- name: train
num_bytes: 44382083
num_examples: 8891
- name: valid
num_bytes: 11171617
num_examples: 2213
- name: test
num_bytes: 7116753
num_examples: 1490
download_size: 11803908
dataset_size: 62670453
---
# Dataset Card for "flare-convfinqa"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
raicrits/Orca_ITA_200k | 2023-07-31T08:17:51.000Z | [
"license:other",
"region:us"
] | raicrits | null | null | null | 1 | 9 | ---
license: other
---
# OpenOrca ITA 200k
Google Translate Italian translations of 200k random entries of the dataset [Open-Orca/OpenOrca](https://huggingface.co/datasets/Open-Orca/OpenOrca). The entries were selected at random:
100k from those generated with gpt-3.5-turbo and the other 100k from those generated with gpt-4 (indicated in the "gpt_version" column of this dataset). The ids are the ones present in the original dataset.
AtlasUnified/atlas-math-sets | 2023-08-01T18:24:15.000Z | [
"task_categories:question-answering",
"size_categories:10M<n<100M",
"language:en",
"license:mit",
"math",
"region:us"
] | AtlasUnified | null | null | null | 0 | 9 | ---
license: mit
task_categories:
- question-answering
language:
- en
tags:
- math
pretty_name: Atlas Math Sets
size_categories:
- 10M<n<100M
---
# ATLAS MATH SETS

This dataset consists of mathematical computations. Simple in nature, as it is derived from Python scripts, it covers addition, subtraction, multiplication, division, fractions, decimals, square roots, cube roots, exponents, and factors.
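A hedged sketch of the kind of generator script described here (illustrative only, not the authors' actual code); it emits one addition record in the JSONL format shown below:
```python
import json
import random

# Illustrative generator for a single addition record; not the authors' script.
a, b = random.randint(1, 100), random.randint(1, 100)
equation = f"{a} + {b}"
answer = str(a + b)
record = {
    "answer": answer,
    "input": equation,
    "output": answer,
    "instruction": f"Calculate the following expression: {equation}",
}
print(json.dumps(record))
```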
Format of the JSONL is as follows:
```
{"answer": "[num]", "input": "[equation]", "output": "[num]", "instruction": "[pre-generated_instruction] [equation]"}
``` |
argilla/llama-2-banking-preference | 2023-08-03T15:01:01.000Z | [
"size_categories:1K<n<10K",
"rlfh",
"argilla",
"human-feedback",
"region:us"
] | argilla | null | null | null | 0 | 9 | ---
size_categories: 1K<n<10K
tags:
- rlfh
- argilla
- human-feedback
---
# Dataset Card for llama-2-banking-preference
This dataset has been created with [Argilla](https://docs.argilla.io).
As shown in the sections below, this dataset can be loaded into Argilla as explained in [Load with Argilla](#load-with-argilla), or used directly with the `datasets` library in [Load with `datasets`](#load-with-datasets).
## Dataset Description
- **Homepage:** https://argilla.io
- **Repository:** https://github.com/argilla-io/argilla
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
This dataset contains:
* A dataset configuration file conforming to the Argilla dataset format named `argilla.yaml`. This configuration file will be used to configure the dataset when using the `FeedbackDataset.from_huggingface` method in Argilla.
* Dataset records in a format compatible with HuggingFace `datasets`. These records will be loaded automatically when using `FeedbackDataset.from_huggingface` and can be loaded independently using the `datasets` library via `load_dataset`.
* The [annotation guidelines](#annotation-guidelines) that have been used for building and curating the dataset, if they've been defined in Argilla.
### Load with Argilla
To load with Argilla, you'll just need to install Argilla as `pip install argilla --upgrade` and then use the following code:
```python
import argilla as rg
ds = rg.FeedbackDataset.from_huggingface("argilla/llama-2-banking-preference")
```
### Load with `datasets`
To load this dataset with `datasets`, you'll just need to install `datasets` as `pip install datasets --upgrade` and then use the following code:
```python
from datasets import load_dataset
ds = load_dataset("argilla/llama-2-banking-preference")
```
### Supported Tasks and Leaderboards
This dataset can contain [multiple fields, questions and responses](https://docs.argilla.io/en/latest/guides/llms/conceptual_guides/data_model.html) so it can be used for different NLP tasks, depending on the configuration. The dataset structure is described in the [Dataset Structure section](#dataset-structure).
There are no leaderboards associated with this dataset.
### Languages
[More Information Needed]
## Dataset Structure
### Data in Argilla
The dataset is created in Argilla with: **fields**, **questions**, **suggestions**, and **guidelines**.
The **fields** are the dataset records themselves; for the moment, only text fields are supported. These are the ones that will be used to provide responses to the questions.
| Field Name | Title | Type | Required | Markdown |
| ---------- | ----- | ---- | -------- | -------- |
| request | User request | TextField | True | False |
| response-1 | Assistant response 1 | TextField | True | True |
| response-2 | Assistant response 2 | TextField | True | True |
The **questions** are the questions that will be asked to the annotators. They can be of different types, such as rating, text, single choice, or multiple choice.
| Question Name | Title | Type | Required | Description | Values/Labels |
| ------------- | ----- | ---- | -------- | ----------- | ------------- |
| preference | Order response based on your preference | RankingQuestion | True | 1 = most accurate and helpful, 2 = worst. Ties are allowed. | ['response-1', 'response-2'] |
| correct-response | If both responses are innacurate, complete the response: | TextQuestion | True | N/A | N/A |
**✨ NEW** Additionally, we also have **suggestions**, which are linked to the existing questions and named by appending "-suggestion" and "-suggestion-metadata" to them; they contain the value(s) of the suggestion and its metadata, respectively. The possible values are the same as in the table above.
Finally, the **guidelines** are just a plain string that can be used to provide instructions to the annotators. Find those in the [annotation guidelines](#annotation-guidelines) section.
### Data Instances
An example of a dataset instance in Argilla looks as follows:
```json
{
"external_id": null,
"fields": {
"request": "Do I get charged if i am getting money?",
"response-1": " Of course! Thank you for reaching out to us at the Bank of Tillamook. We\u0027re here to help you with any questions or concerns you may have about your bank account. When it comes to fees associated with withdrawals, we do charge a small fee for certain types of transactions. However, please note that there is no charge for accessing your own funds in our bank through one of our ATMs or by using your debit card for purchases. If you could provide me with more details on what kind of transaction you were trying to make, I would be happy to assist you further and explain any potential fees associated with those services. Is there anything else I can help you with today?",
"response-2": " Of course! Thank you for choosing the Bank of Tillamook for your financial needs. We appreciate your business and value your trust in us. To answer your question, no, you will not be charged for withdrawing money from your own account at the Bank of Tillamook. Our bank is committed to providing convenient and fee-free services to our customers, so you can access your funds without worrying about additional charges. If you have any further questions or concerns, please don\u0027t hesitate to ask. Is there anything else I can help you with today?"
},
"id": null,
"metadata": {},
"responses": [],
"suggestions": []
}
```
While the same record in HuggingFace `datasets` looks as follows:
```json
{
"correct-response": null,
"correct-response-suggestion": null,
"correct-response-suggestion-metadata": {
"agent": null,
"score": null,
"type": null
},
"external_id": null,
"metadata": null,
"preference": null,
"preference-suggestion": null,
"preference-suggestion-metadata": {
"agent": null,
"score": null,
"type": null
},
"request": "Do I get charged if i am getting money?",
"response-1": " Of course! Thank you for reaching out to us at the Bank of Tillamook. We\u0027re here to help you with any questions or concerns you may have about your bank account. When it comes to fees associated with withdrawals, we do charge a small fee for certain types of transactions. However, please note that there is no charge for accessing your own funds in our bank through one of our ATMs or by using your debit card for purchases. If you could provide me with more details on what kind of transaction you were trying to make, I would be happy to assist you further and explain any potential fees associated with those services. Is there anything else I can help you with today?",
"response-2": " Of course! Thank you for choosing the Bank of Tillamook for your financial needs. We appreciate your business and value your trust in us. To answer your question, no, you will not be charged for withdrawing money from your own account at the Bank of Tillamook. Our bank is committed to providing convenient and fee-free services to our customers, so you can access your funds without worrying about additional charges. If you have any further questions or concerns, please don\u0027t hesitate to ask. Is there anything else I can help you with today?"
}
```
### Data Fields
Among the dataset fields, we differentiate between the following:
* **Fields:** These are the dataset records themselves; for the moment, only text fields are supported. These are the ones that will be used to provide responses to the questions.
* **request** is of type `TextField`.
* **response-1** is of type `TextField`.
* **response-2** is of type `TextField`.
* **Questions:** These are the questions that will be asked to the annotators. They can be of different types, such as `RatingQuestion`, `TextQuestion`, `LabelQuestion`, `MultiLabelQuestion`, and `RankingQuestion`.
* **preference** is of type `RankingQuestion` with the following allowed values ['response-1', 'response-2'], and description "1 = most accurate and helpful, 2 = worst. Ties are allowed.".
* (optional) **correct-response** is of type `TextQuestion`.
* **✨ NEW** **Suggestions:** As of Argilla 1.13.0, the suggestions have been included to provide the annotators with suggestions to ease or assist during the annotation process. Suggestions are linked to the existing questions, are always optional, and contain not just the suggestion itself, but also the metadata linked to it, if applicable.
* (optional) **preference-suggestion** is of type `ranking` with the following allowed values ['response-1', 'response-2'].
* (optional) **correct-response-suggestion** is of type `text`.
Additionally, we also have one more field which is optional and is the following:
* **external_id:** This is an optional field that can be used to provide an external ID for the dataset record. This can be useful if you want to link the dataset record to an external resource, such as a database or a file.
### Data Splits
The dataset contains a single split, which is `train`.
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation guidelines
Please, read the question carefully and try to answer it as accurately as possible.
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] |
GalaktischeGurke/emails_5500_to_6000 | 2023-08-04T11:35:53.000Z | [
"region:us"
] | GalaktischeGurke | null | null | null | 0 | 9 | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 12006721.928708866
num_examples: 6537
download_size: 20864531
dataset_size: 12006721.928708866
---
# Dataset Card for "emails_5500_to_6000"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
JohnTeddy3/midjourney-v5-202304 | 2023-08-07T13:02:27.000Z | [
"task_categories:text-to-image",
"task_categories:image-to-text",
"language:en",
"license:apache-2.0",
"midjourney",
"region:us"
] | JohnTeddy3 | null | null | null | 2 | 9 | ---
license: apache-2.0
task_categories:
- text-to-image
- image-to-text
language:
- en
tags:
- midjourney
---
# midjourney-v5-202304-clean
## Brief Introduction
Reposted from wanng/midjourney-v5-202304-clean.
Unofficial, crawled from midjourney v5 for April 2023, 1,701,420 pairs in total.
## Dataset Information
Original project address: https://huggingface.co/datasets/tarungupta83/MidJourney_v5_Prompt_dataset
I did some cleaning and produced two files (a loading sketch follows):
- ori_prompts_df.parquet (1,255,812 pairs, midjourney's four-image grids)

- upscaled_prompts_df.parquet (445,608 pairs, images produced with the Upscale command, which means these prompts were more popular)

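A minimal sketch of loading the two parquet files with pandas:
```python
import pandas as pd

# Load the two cleaned files described above (requires pyarrow or fastparquet).
ori = pd.read_parquet("ori_prompts_df.parquet")
upscaled = pd.read_parquet("upscaled_prompts_df.parquet")
print(len(ori), len(upscaled))  # expected: 1,255,812 and 445,608 rows
```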
|