id stringlengths 2 115 | lastModified stringlengths 24 24 | tags list | author stringlengths 2 42 ⌀ | description stringlengths 0 68.7k ⌀ | citation stringlengths 0 10.7k ⌀ | cardData null | likes int64 0 3.55k | downloads int64 0 10.1M | card stringlengths 0 1.01M |
|---|---|---|---|---|---|---|---|---|---|
RyokoAI/Syosetu711K | 2023-04-05T01:13:44.000Z | [
"task_categories:text-classification",
"task_categories:text-generation",
"size_categories:100K<n<1M",
"language:ja",
"license:apache-2.0",
"novel",
"training",
"region:us"
] | RyokoAI | null | null | null | 6 | 7 | ---
license: apache-2.0
language:
- ja
tags:
- novel
- training
task_categories:
- text-classification
- text-generation
pretty_name: Syosetuka ni Narou 711K
size_categories:
- 100K<n<1M
---
# Dataset Card for Syosetu711K
*The BigKnow2022 dataset and its subsets are not yet complete. Not all information here may be accurate or accessible.*
## Dataset Description
- **Homepage:** (TODO)
- **Repository:** <https://github.com/RyokoAI/BigKnow2022>
- **Paper:** N/A
- **Leaderboard:** N/A
- **Point of Contact:** Ronsor/undeleted <ronsor@ronsor.com>
### Dataset Summary
Syosetu711K is a dataset composed of approximately 711,700 novels scraped from the Japanese novel self-publishing
website Syosetuka ni Narou (JA: 小説家になろう, lit. "Let's Become a Novelist") between March 26 and March 27, 2023.
The dataset contains most if not all novels published on the site, regardless of length or quality; however, we
include metadata so users of this dataset can filter and evaluate its contents.
Syosetu711Kは、日本の小説投稿サイト「小説家になろう」から2023年3月26日から27日にかけてスクレイプされた約711,700冊の小説から
構成されるデータセットです。このデータセットには、長さや品質に関係なく、サイトに掲載されているほとんどの小説が含まれています。ただし、
各小説のIDも含まれているため、小説家になろうAPIを使ってその情報を検索することができます。
### Supported Tasks and Leaderboards
This dataset is primarily intended for unsupervised training of text generation models; however, it may be useful for other purposes.
* text-classification
* text-generation
### Languages
* Japanese
## Dataset Structure
### Data Instances
```json
{
"text": "【小説タイトル】\n焼けて爛れる恋よりも、微睡む優しい愛が欲しい\n【Nコード】\nN5029ID\n【作者名】\n秋暁秋季\n【あらすじ】\n俺の彼女は物凄く気の多い人だった。\nお眼鏡に適う奴が居れば、瞳孔を蕩
けさせる人だった。\nその癖照れ屋で、すぐに目を逸らす。\nな...",
"meta": {
"subset": "syosetu",
"q": 0.6,
"id": "N5029ID",
"author": "秋暁秋季",
"userid": 719797,
"title": "焼けて爛れる恋よりも、微睡む優しい愛が欲しい",
"length": 871,
"points": 0,
"lang": "ja",
"chapters": 1,
"keywords": ["気が多い", "浮気性", "無愛想", "照れる", "嫉妬", "好みではない", "クソデカ感情", "空気のような安心感"],
"isr15": 0,
"genre": 102,
"biggenre": 1
}
}
{
"text": "【小説タイトル】\n【能力者】\n【Nコード】\nN9864IB\n【作者名】\n夢音いちご\n【あらすじ】\n私立アビリティ学園。\n小・中・高・大が一貫となった、大規模な名門校。\nそして、ここは規模の大きさだけ
でなく、ある特殊な制度を設けて\nいることでも有名だ。\nそれ...",
"meta": {
"subset": "syosetu",
"q": 0.6,
"id": "N9864IB",
"author": "夢音いちご",
"userid": 1912777,
"title": "【能力者】",
"length": 2334,
"points": 0,
"lang": "ja",
"chapters": 2,
"keywords": ["ガールズラブ", "身分差", "伝奇", "日常", "青春", "ラブコメ", "女主人公", "学園", "魔法", "超能力"],
"isr15": 0,
"genre": 202,
"biggenre": 2
}
}
```
### Data Fields
* `text`: the actual novel text, all chapters
* `meta`: novel metadata
* `subset`: dataset tag: `syosetu`
* `lang`: dataset language: `ja` (Japanese)
* `id`: novel ID/ncode
* `author`: author name
* `userid`: author user ID
* `title`: novel title
* `length`: novel length in words
* `points`: global points (corresponds to `global_point` from the Syosetu API)
* `q`: q-score (quality score) calculated based on `points`
* `chapters`: number of chapters (corresponds to `general_all_no` from the Syosetu API)
* `keywords`: array of novel keywords (corresponds to `keyword` from the Syosetu API, split on spaces)
* `isr15`: whether the novel is rated R15+
* `genre`: novel genre ID (optional, see Syosetu API documentation)
* `biggenre`: general novel genre ID (optional, see Syosetu API documentation)
* `isr18`: whether the novel is rated R18+
* `nocgenre`: novel genre ID (optional, only available if `isr18` is true, see Syosetu API documentation)
*For further reference, see the Syosetuka ni Narou API documentation: <https://dev.syosetu.com/man/api/> (JA).*
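As a sketch of how the metadata fields above can be used to filter the corpus (the field names come from the schema above; the helper function and thresholds are illustrative, not part of the dataset):

```python
# Illustrative quality/length/rating filter over Syosetu711K-style records.
# Each record has a "text" field and a "meta" dict as documented above.
def keep_record(record, min_q=0.6, min_length=1000, allow_r15=False):
    """Return True if a record passes the quality, length, and rating filters."""
    meta = record["meta"]
    if meta["q"] < min_q:
        return False
    if meta["length"] < min_length:
        return False
    if not allow_r15 and meta["isr15"]:
        return False
    return True

# Lengths taken from the two sample instances above.
records = [
    {"text": "...", "meta": {"q": 0.6, "length": 871, "isr15": 0}},
    {"text": "...", "meta": {"q": 0.6, "length": 2334, "isr15": 0}},
]
filtered = [r for r in records if keep_record(r)]
print(len(filtered))  # 1: only the 2334-word novel passes min_length=1000
```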
#### Q-Score Distribution
```
0.00: 0
0.10: 0
0.20: 0
0.30: 0
0.40: 0
0.50: 213005
0.60: 331393
0.70: 101971
0.80: 63877
0.90: 1542
1.00: 2
```
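A distribution like the one above can be reproduced by bucketing each novel's `q` value to one decimal place; a minimal sketch (the exact bucketing rule is an assumption about how the table was produced):

```python
from collections import Counter

def q_bucket(q):
    """Bucket a q-score down to its 0.1-wide bin, e.g. 0.63 -> 0.6."""
    return round(int(q * 10) / 10, 1)

def q_distribution(q_scores):
    """Count q-scores per bucket, reporting all buckets from 0.0 to 1.0."""
    counts = Counter(q_bucket(q) for q in q_scores)
    return {b / 10: counts.get(b / 10, 0) for b in range(11)}

dist = q_distribution([0.5, 0.6, 0.63, 0.71, 1.0])
print(dist[0.6])  # 2
```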
### Data Splits
No splitting of the data was performed.
## Dataset Creation
### Curation Rationale
Syosetuka ni Narou is the most popular website in Japan for authors wishing to self-publish their novels online. Many works on
the site have been picked up by large commercial publishers. Because of this, we believe that this dataset provides a large corpus
of high-quality, creative content in the Japanese language.
### Source Data
#### Initial Data Collection and Normalization
*More information about any referenced scripts, commands, or programs used may be found in the BigKnow2022 GitHub repository.*
First, metadata for all novels on the site was gathered into a JSON lines (JSONL) file. The Syosetuka ni Narou API was used to
obtain this information.
Second, this listing was used to create a secondary text file containing a list of only the novel "ncodes," or IDs. This
secondary file was distributed to downloader nodes.
Third, the sister site <https://pdfnovels.net> was queried with each novel ID, and the resulting PDF was saved for later processing.
Fourth, the `pdftotext` tool was used to convert the PDF files to text documents. A few other scripts were then used to clean up
the resulting text files.
Finally, the text files and other metadata were converted into the specified data field schema above, and the resulting JSON entries
were concatenated into the Syosetu711K dataset. The version uploaded to this repository, however, is split into multiple files,
numbered 00 through 20 inclusive.
#### Who are the source language producers?
The authors of each novel.
### Annotations
#### Annotation process
Titles and general genre were collected alongside the novel text and IDs.
#### Who are the annotators?
There were no human annotators.
### Personal and Sensitive Information
The dataset contains only works of fiction, and we do not believe it contains any PII.
## Considerations for Using the Data
### Social Impact of Dataset
This dataset is intended to be useful for anyone who wishes to train a model to generate "more entertaining" content in Japanese.
It may also be useful for other languages depending on your language model.
### Discussion of Biases
This dataset is composed of fictional works by various authors. Because of this fact, the contents of this dataset will reflect
the biases of those authors. **Additionally, this dataset contains NSFW material and was not filtered. Beware of stereotypes.**
### Other Known Limitations
N/A
## Additional Information
### Dataset Curators
Ronsor Labs
### Licensing Information
Apache 2.0, for all parts of which Ronsor Labs or the Ryoko AI Production Committee may be considered authors. All other material is
distributed under fair use principles.
### Citation Information
```
@misc{ryokoai2023-bigknow2022,
title = {BigKnow2022: Bringing Language Models Up to Speed},
author = {Ronsor},
year = {2023},
howpublished = {\url{https://github.com/RyokoAI/BigKnow2022}},
}
```
### Contributions
Thanks to @ronsor (GH) for gathering this dataset. |
Francesco/liver-disease | 2023-03-30T09:11:15.000Z | [
"task_categories:object-detection",
"annotations_creators:crowdsourced",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:en",
"license:cc",
"rf100",
"region:us"
] | Francesco | null | null | null | 1 | 7 | ---
dataset_info:
features:
- name: image_id
dtype: int64
- name: image
dtype: image
- name: width
dtype: int32
- name: height
dtype: int32
- name: objects
sequence:
- name: id
dtype: int64
- name: area
dtype: int64
- name: bbox
sequence: float32
length: 4
- name: category
dtype:
class_label:
names:
'0': diseases
'1': ballooning
'2': fibrosis
'3': inflammation
'4': steatosis
annotations_creators:
- crowdsourced
language_creators:
- found
language:
- en
license:
- cc
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- object-detection
task_ids: []
pretty_name: liver-disease
tags:
- rf100
---
# Dataset Card for liver-disease
**The original COCO dataset is stored at `dataset.tar.gz`.**
## Dataset Description
- **Homepage:** https://universe.roboflow.com/object-detection/liver-disease
- **Point of Contact:** francesco.zuppichini@gmail.com
### Dataset Summary
liver-disease, an object detection dataset from the Roboflow 100 (RF100) benchmark.
### Supported Tasks and Leaderboards
- `object-detection`: The dataset can be used to train a model for Object Detection.
### Languages
English
## Dataset Structure
### Data Instances
A data point comprises an image and its object annotations.
```
{
'image_id': 15,
'image': <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=640x640 at 0x2373B065C18>,
'width': 964043,
'height': 640,
'objects': {
'id': [114, 115, 116, 117],
'area': [3796, 1596, 152768, 81002],
'bbox': [
[302.0, 109.0, 73.0, 52.0],
[810.0, 100.0, 57.0, 28.0],
[160.0, 31.0, 248.0, 616.0],
[741.0, 68.0, 202.0, 401.0]
],
'category': [4, 4, 0, 0]
}
}
```
### Data Fields
- `image_id`: the image id
- `image`: `PIL.Image.Image` object containing the image. Note that when accessing the image column: `dataset[0]["image"]` the image file is automatically decoded. Decoding of a large number of image files might take a significant amount of time. Thus it is important to first query the sample index before the `"image"` column, *i.e.* `dataset[0]["image"]` should **always** be preferred over `dataset["image"][0]`
- `width`: the image width
- `height`: the image height
- `objects`: a dictionary containing bounding box metadata for the objects present on the image
- `id`: the annotation id
- `area`: the area of the bounding box
- `bbox`: the object's bounding box (in the [coco](https://albumentations.ai/docs/getting_started/bounding_boxes_augmentation/#coco) format)
- `category`: the object's category.
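The COCO-format `bbox` above is `[x_min, y_min, width, height]`. A minimal sketch of converting it to corner coordinates, using the sample annotation above (the conversion is the standard COCO convention; nothing here is specific to this dataset's loader):

```python
def coco_to_corners(bbox):
    """Convert a COCO [x, y, w, h] box to [x_min, y_min, x_max, y_max]."""
    x, y, w, h = bbox
    return [x, y, x + w, y + h]

# Two annotations copied from the data instance above.
objects = {
    "id": [114, 115],
    "area": [3796, 1596],
    "bbox": [[302.0, 109.0, 73.0, 52.0], [810.0, 100.0, 57.0, 28.0]],
    "category": [4, 4],
}
corners = [coco_to_corners(b) for b in objects["bbox"]]
print(corners[0])  # [302.0, 109.0, 375.0, 161.0]
```

Note that `area` equals `width * height` for these boxes, which is a quick sanity check on the format.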
#### Who are the annotators?
Annotators are Roboflow users
## Additional Information
### Licensing Information
See original homepage https://universe.roboflow.com/object-detection/liver-disease
### Citation Information
```
@misc{ liver-disease,
title = { liver disease Dataset },
type = { Open Source Dataset },
author = { Roboflow 100 },
howpublished = { \url{ https://universe.roboflow.com/object-detection/liver-disease } },
url = { https://universe.roboflow.com/object-detection/liver-disease },
journal = { Roboflow Universe },
publisher = { Roboflow },
year = { 2022 },
month = { nov },
note = { visited on 2023-03-29 },
}
```
### Contributions
Thanks to [@mariosasko](https://github.com/mariosasko) for adding this dataset. |
Francesco/halo-infinite-angel-videogame | 2023-03-30T10:07:59.000Z | [
"task_categories:object-detection",
"annotations_creators:crowdsourced",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:en",
"license:cc",
"rf100",
"region:us"
] | Francesco | null | null | null | 0 | 7 | ---
dataset_info:
features:
- name: image_id
dtype: int64
- name: image
dtype: image
- name: width
dtype: int32
- name: height
dtype: int32
- name: objects
sequence:
- name: id
dtype: int64
- name: area
dtype: int64
- name: bbox
sequence: float32
length: 4
- name: category
dtype:
class_label:
names:
'0': halo-infinite-angel-videogame
'1': enemy
'2': enemy-head
'3': friendly
'4': friendly-head
annotations_creators:
- crowdsourced
language_creators:
- found
language:
- en
license:
- cc
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- object-detection
task_ids: []
pretty_name: halo-infinite-angel-videogame
tags:
- rf100
---
# Dataset Card for halo-infinite-angel-videogame
**The original COCO dataset is stored at `dataset.tar.gz`.**
## Dataset Description
- **Homepage:** https://universe.roboflow.com/object-detection/halo-infinite-angel-videogame
- **Point of Contact:** francesco.zuppichini@gmail.com
### Dataset Summary
halo-infinite-angel-videogame, an object detection dataset from the Roboflow 100 (RF100) benchmark.
### Supported Tasks and Leaderboards
- `object-detection`: The dataset can be used to train a model for Object Detection.
### Languages
English
## Dataset Structure
### Data Instances
A data point comprises an image and its object annotations.
```
{
'image_id': 15,
'image': <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=640x640 at 0x2373B065C18>,
'width': 964043,
'height': 640,
'objects': {
'id': [114, 115, 116, 117],
'area': [3796, 1596, 152768, 81002],
'bbox': [
[302.0, 109.0, 73.0, 52.0],
[810.0, 100.0, 57.0, 28.0],
[160.0, 31.0, 248.0, 616.0],
[741.0, 68.0, 202.0, 401.0]
],
'category': [4, 4, 0, 0]
}
}
```
### Data Fields
- `image_id`: the image id
- `image`: `PIL.Image.Image` object containing the image. Note that when accessing the image column: `dataset[0]["image"]` the image file is automatically decoded. Decoding of a large number of image files might take a significant amount of time. Thus it is important to first query the sample index before the `"image"` column, *i.e.* `dataset[0]["image"]` should **always** be preferred over `dataset["image"][0]`
- `width`: the image width
- `height`: the image height
- `objects`: a dictionary containing bounding box metadata for the objects present on the image
- `id`: the annotation id
- `area`: the area of the bounding box
- `bbox`: the object's bounding box (in the [coco](https://albumentations.ai/docs/getting_started/bounding_boxes_augmentation/#coco) format)
- `category`: the object's category.
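The integer `category` values correspond to the `class_label` names in the YAML header above; a small sketch of decoding them (the name list is copied from that header, the helper itself is illustrative):

```python
# Class names as declared in the dataset's class_label feature.
CLASS_NAMES = [
    "halo-infinite-angel-videogame",  # 0
    "enemy",                          # 1
    "enemy-head",                     # 2
    "friendly",                       # 3
    "friendly-head",                  # 4
]

def decode_categories(category_ids):
    """Map integer class ids to their string labels."""
    return [CLASS_NAMES[i] for i in category_ids]

# Category ids from the sample data instance above.
print(decode_categories([4, 4, 0, 0]))
```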
#### Who are the annotators?
Annotators are Roboflow users
## Additional Information
### Licensing Information
See original homepage https://universe.roboflow.com/object-detection/halo-infinite-angel-videogame
### Citation Information
```
@misc{ halo-infinite-angel-videogame,
title = { halo infinite angel videogame Dataset },
type = { Open Source Dataset },
author = { Roboflow 100 },
howpublished = { \url{ https://universe.roboflow.com/object-detection/halo-infinite-angel-videogame } },
url = { https://universe.roboflow.com/object-detection/halo-infinite-angel-videogame },
journal = { Roboflow Universe },
publisher = { Roboflow },
year = { 2022 },
month = { nov },
note = { visited on 2023-03-29 },
}
```
### Contributions
Thanks to [@mariosasko](https://github.com/mariosasko) for adding this dataset. |
Ekimetrics/ipcc-ar6 | 2023-04-03T10:45:57.000Z | [
"region:us"
] | Ekimetrics | null | null | null | 3 | 7 | Entry not found |
philschmid/sharegpt-raw | 2023-04-04T08:52:59.000Z | [
"license:other",
"region:us"
] | philschmid | null | null | null | 70 | 7 | ---
license: other
duplicated_from: jeffwan/sharegpt_vicuna
---
## Preparation
```
pip3 install -r requirements.txt
```
## Data Cleaning
1. Merge the two raw JSON files and beautify the merged JSON
```
python merge.py sharegpt_90k_raw_dataset/sg_90k_part1.json sharegpt_90k_raw_dataset/sg_90k_part2.json sharegpt_20230401_html_unformatted.json
python pretty_json.py --in sharegpt_20230401_html_unformatted.json --out sharegpt_20230401_html.json
```
2. (Optional) Verify the JSON files
```
if jq empty sharegpt_20230401_html.json 2>/dev/null; then
echo "JSON is valid"
else
echo "JSON is invalid"
fi
jq length sharegpt_90k_raw_dataset/sg_90k_part1.json
jq length sharegpt_90k_raw_dataset/sg_90k_part2.json
jq length sharegpt_20230401_html.json
```
3. Clean the data: remove HTML tags, etc.
```
python3 clean_sharegpt.py --in sharegpt_20230401_html.json --out sharegpt_20230401_clean.json
....
100%|███████████████████████████████████████████████████████████████████| 90665/90665 [06:32<00:00, 230.98it/s]
total: 90665, skip: 13745, new: 76920
```
4. Filter dataset by language
```
python3 optional_clean.py --in sharegpt_20230401_clean.json --out sharegpt_20230401_clean_lang_zh.json --lang zh
....
return 6240 out of 76920, start dump ...
python3 optional_clean.py --in sharegpt_20230401_clean.json --out sharegpt_20230401_clean_lang_en.json --lang en
...
return 55413 out of 76920, start dump ...
```
> Note: the code itself doesn't support a language list, and I didn't adapt it to add one. You can change the code to support more languages. Instead, I just filtered the two languages I need and merged `sharegpt_20230401_clean_lang_zh.json` and `sharegpt_20230401_clean_lang_en.json` into `sharegpt_20230401_clean_lang.json`.
5. Split the long conversation
```
python3 split_long_conversation.py --in sharegpt_20230401_clean_lang.json --out sharegpt_20230401_clean_lang_split.json --model-name /home/ubuntu/llama-13b-hf/
...
total: 61653, new: 126032
```
Now we have the cleaned dataset `sharegpt_20230401_clean_lang_split.json`, which should be used for finetuning.
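As a final sanity check before finetuning, the cleaned file can be summarized in a few lines of Python. This sketch assumes the usual ShareGPT layout (a list of entries, each with a `conversations` list of turns); that structure is an assumption, not something verified against the actual file:

```python
import json

def summarize(path_or_data):
    """Count conversations and total turns in a ShareGPT-style list.

    Accepts either an already-loaded list or a path to a JSON file.
    """
    if isinstance(path_or_data, list):
        data = path_or_data
    else:
        with open(path_or_data) as f:
            data = json.load(f)
    n_turns = sum(len(entry.get("conversations", [])) for entry in data)
    return {"conversations": len(data), "turns": n_turns}

# Toy entry in the assumed ShareGPT format.
sample = [
    {"id": "abc123", "conversations": [
        {"from": "human", "value": "Hello"},
        {"from": "gpt", "value": "Hi there!"},
    ]},
]
print(summarize(sample))  # {'conversations': 1, 'turns': 2}
```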
|
FronkonGames/steam-games-dataset | 2023-04-06T01:31:39.000Z | [
"size_categories:10K<n<100K",
"license:apache-2.0",
"games",
"steam",
"python",
"json",
"csv",
"video games",
"doi:10.57967/hf/0511",
"region:us"
] | FronkonGames | null | null | null | 2 | 7 | ---
license: apache-2.0
tags:
- games
- steam
- python
- json
- csv
- video games
pretty_name: Steam Games Dataset
size_categories:
- 10K<n<100K
---
This dataset has been created with [this code (MIT)](https://github.com/FronkonGames/Steam-Games-Scraper) and uses the API provided by Steam, the largest gaming platform on PC. Data is also collected from Steam Spy.
Only currently released games (no DLCs, episodes, music, videos, etc.) have been added. |
PaulTran/vietnamese_spelling_error_detection | 2023-04-07T09:31:12.000Z | [
"region:us"
] | PaulTran | null | null | null | 1 | 7 | Entry not found |
hackathon-somos-nlp-2023/suicide-comments-es | 2023-04-10T09:26:54.000Z | [
"task_categories:text-classification",
"size_categories:1K<n<10K",
"language:es",
"license:apache-2.0",
"region:us"
] | hackathon-somos-nlp-2023 | null | null | null | 5 | 7 | ---
task_categories:
- text-classification
language:
- es
size_categories:
- 1K<n<10K
license: apache-2.0
---
# Dataset Description
* Example model using the dataset: https://huggingface.co/hackathon-somos-nlp-2023/roberta-base-bne-finetuned-suicide-es
* Example space using the dataset: https://huggingface.co/spaces/hackathon-somos-nlp-2023/suicide-comments-es
* Language: Spanish
## Dataset Summary
The dataset consists of comments from Reddit, Twitter, and inputs/outputs of the Alpaca dataset translated into Spanish and classified as suicidal ideation/behavior or non-suicidal.
# Dataset Structure
The dataset has 10,050 rows (777 labeled as suicidal ideation/behavior and 9,273 as not suicidal).
## Dataset fields
* `Text`: User comment.
* `Label`: 1 if suicidal ideation/behavior; 0 if not suicidal comment.
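Given the `Text`/`Label` fields above, the class balance can be checked with a few lines. The helper below is illustrative; the row list is built from the counts stated above (777 positive, 9,273 negative):

```python
from collections import Counter

def label_balance(rows):
    """Return per-label counts and the positive-class ratio."""
    counts = Counter(row["Label"] for row in rows)
    total = sum(counts.values())
    return counts, counts.get(1, 0) / total

# Synthetic rows matching the dataset's stated class counts.
rows = [{"Text": "...", "Label": 1}] * 777 + [{"Text": "...", "Label": 0}] * 9273
counts, pos_ratio = label_balance(rows)
print(counts[1], counts[0], round(pos_ratio, 4))  # 777 9273 0.0773
```

The roughly 7.7% positive rate means the classes are heavily imbalanced, which is worth accounting for (e.g. with class weights) when training a classifier on this data.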
# Dataset Creation
## Suicidal Ideation/Behavior
* 90 rows from Columbia Suicide Severity Rating Scale (C-SSRS)
https://zenodo.org/record/2667859#.ZDGnX-xBxYi
C-SSRS is a gold dataset for suicidal comments detection on Reddit.
We use `Helsinki-NLP/opus-mt-en-es` to translate the dataset. We also explode on paragraphs, filter messages less than 240 characters, and we filter the positive ones validating against the [Moderation API of OpenAI](https://platform.openai.com/docs/guides/moderation).
* 519 rows from https://github.com/laxmimerit/twitter-suicidal-intention-dataset/tree/master
The dataset contains the tweet data of suicidal intention and no intention data.
We use `Helsinki-NLP/opus-mt-en-es` to translate the dataset. We filter the positive ones validating against the [Moderation API of OpenAI](https://platform.openai.com/docs/guides/moderation).
* 168 rows added manually from public forums and public blogs.
## Non Suicidal
* 5000 rows from instructions of https://huggingface.co/datasets/somosnlp/somos-clean-alpaca-es
* 2000 rows from output of https://huggingface.co/datasets/somosnlp/somos-clean-alpaca-es
* 2000 rows from Columbia Suicide Severity Rating Scale (C-SSRS)
* 100 rows from https://huggingface.co/datasets/ziq/depression_advice. We use `Helsinki-NLP/opus-mt-en-es` to translate the dataset.
* 100 rows added manually from public forums, blogs and podcasts.
# Considerations for Using the Data
## Social Impact of Dataset
The dataset could contain some patterns to detect suicidal ideation/behavior.
## Discussion of Biases
No measures have been taken to estimate the bias and toxicity embedded in the dataset. However, most of the data was collected from Reddit, Twitter, and ChatGPT, so there is probably an age bias because [the Internet is used more by younger people](https://www.statista.com/statistics/272365/age-distribution-of-internet-users-worldwide).
# Additional Information
## Team
* [dariolopez](https://huggingface.co/dariolopez)
* [diegogd](https://huggingface.co/diegogd)
## Licensing
This work is licensed under the [Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0).
|
letinnghia/student-feedbacks | 2023-04-10T13:01:27.000Z | [
"license:gpl-2.0",
"region:us"
] | letinnghia | null | null | null | 0 | 7 | ---
license: gpl-2.0
---
|
aaqibsaeed/databricks-dolly-15k-ur | 2023-04-14T13:24:03.000Z | [
"license:cc-by-3.0",
"region:us"
] | aaqibsaeed | null | null | null | 1 | 7 | ---
license: cc-by-3.0
---
This dataset was created by translating "databricks-dolly-15k.jsonl" into Urdu. It is licensed under CC BY 3.0.
.اس ڈیٹا سیٹ کو "ڈیٹابرکس-ڈولی" کو اردو میں ترجمہ کرکے تیار کیا گیا تھا
databricks-dolly-15k https://github.com/databrickslabs/dolly/tree/master/data |
treadon/dolly-15k | 2023-04-14T14:46:03.000Z | [
"license:cc-by-3.0",
"region:us"
] | treadon | null | null | null | 1 | 7 | ---
license: cc-by-3.0
dataset_info:
features:
- name: instruction
dtype: string
- name: context
dtype: string
- name: response
dtype: string
- name: category
dtype: string
splits:
- name: train
num_bytes: 12208856
num_examples: 14863
- name: validation
num_bytes: 117314
num_examples: 151
download_size: 7866269
dataset_size: 12326170
---
# Dataset Card for "dolly-15k"
# Summary
This is the dataset supplied by Databricks for training Dolly V2. This set is split 99% training / 1% validation, should you want to set aside some records for evaluation purposes.
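The split sizes declared in the YAML header above can be used to verify the stated 99%/1% split; a quick check:

```python
# Example counts from the dataset_info header above.
train_examples = 14863
validation_examples = 151
total = train_examples + validation_examples  # 15014

val_fraction = validation_examples / total
print(f"{val_fraction:.2%}")  # 1.01%
```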
## Special thanks to ❤️ Databricks for creating and making this set available.
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
may-ohta/iwslt14 | 2023-04-26T09:55:06.000Z | [
"license:cc-by-nc-nd-4.0",
"region:us"
] | may-ohta | The IWSLT 2014 Evaluation Campaign includes the MT track on TED Talks. In this edition, the official language pairs are five:
from English to French
from English to German
from German to English
from English to Italian
from Italian to English
Optional tasks are proposed with English paired in both directions with other twelve languages:
from/to English to/from Arabic, Spanish, Farsi, Hebrew, Dutch, Polish, Portuguese-Brazil, Romanian, Russian, Slovenian, Turkish and Chinese
Submitted runs on additional pairs will be evaluated as well, in the hope of stimulating the MT community to evaluate systems on common benchmarks and to share achievements on challenging translation tasks. | @inproceedings{cettoloEtAl:EAMT2012,
Address = {Trento, Italy},
Author = {Mauro Cettolo and Christian Girardi and Marcello Federico},
Booktitle = {Proceedings of the 16$^{th}$ Conference of the European Association for Machine Translation (EAMT)},
Date = {28-30},
Month = {May},
Pages = {261--268},
Title = {WIT$^3$: Web Inventory of Transcribed and Translated Talks},
Year = {2012}} | null | 0 | 7 | ---
license: cc-by-nc-nd-4.0
dataset_info:
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- de
- en
splits:
- name: train
num_bytes: 39120226
num_examples: 171721
- name: validation
num_bytes: 492473
num_examples: 2082
- name: test
num_bytes: 1058859
num_examples: 4782
download_size: 23758217
dataset_size: 40671558
---
|
liyucheng/zhihu_26k | 2023-04-15T20:41:37.000Z | [
"license:cc-by-2.0",
"region:us"
] | liyucheng | null | null | null | 20 | 7 | ---
license: cc-by-2.0
---
|
chrisociepa/wikipedia-pl-20230401 | 2023-04-17T20:41:24.000Z | [
"task_categories:text-generation",
"task_categories:fill-mask",
"task_ids:language-modeling",
"task_ids:masked-language-modeling",
"size_categories:1M<n<10M",
"language:pl",
"license:cc-by-sa-3.0",
"pretraining",
"language modelling",
"wikipedia",
"web",
"region:us"
] | chrisociepa | null | null | null | 0 | 7 | ---
license: cc-by-sa-3.0
dataset_info:
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 2883878741
num_examples: 1562327
download_size: 1761971402
dataset_size: 2883878741
task_categories:
- text-generation
- fill-mask
task_ids:
- language-modeling
- masked-language-modeling
language:
- pl
pretty_name: Polish Wikipedia 2023-04-01
size_categories:
- 1M<n<10M
tags:
- pretraining
- language modelling
- wikipedia
- web
---
# Dataset Card for April 2023 Polish Wikipedia
A Wikipedia dataset containing cleaned Polish-language articles.
The dataset has been built from the Wikipedia dump (https://dumps.wikimedia.org/)
using the [OLM Project](https://github.com/huggingface/olm-datasets).
Each example contains the content of one full Wikipedia article with cleaning to strip
markdown and unwanted sections (references, etc.).
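A minimal sketch of the kind of section stripping described above (the section titles and regex are illustrative only; the actual cleaning was performed by the OLM Project tooling linked above):

```python
import re

# Typical trailing sections of Polish Wikipedia articles:
# references, bibliography, external links (illustrative list).
UNWANTED_SECTIONS = ("Przypisy", "Bibliografia", "Linki zewnętrzne")

def strip_unwanted_sections(text):
    """Drop everything from the first unwanted '== Section ==' heading onward."""
    pattern = r"^==\s*(" + "|".join(UNWANTED_SECTIONS) + r")\s*==\s*$"
    match = re.search(pattern, text, flags=re.MULTILINE)
    return text[:match.start()].rstrip() if match else text

article = "Warszawa jest stolicą Polski.\n\n== Przypisy ==\n1. ...\n"
print(strip_unwanted_sections(article))  # Warszawa jest stolicą Polski.
```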
### Licensing Information
Most of Wikipedia's text and many of its images are co-licensed under the
[Creative Commons Attribution-ShareAlike 3.0 Unported License](https://en.wikipedia.org/wiki/Wikipedia:Text_of_Creative_Commons_Attribution-ShareAlike_3.0_Unported_License)
(CC BY-SA) and the [GNU Free Documentation License](https://en.wikipedia.org/wiki/Wikipedia:Text_of_the_GNU_Free_Documentation_License)
(GFDL) (unversioned, with no invariant sections, front-cover texts, or back-cover texts).
Some text has been imported only under CC BY-SA and CC BY-SA-compatible license and cannot be reused under GFDL; such
text will be identified on the page footer, in the page history, or on the discussion page of the article that utilizes
the text.
### Citation Information
```
@ONLINE{wikidump,
author = "Wikimedia Foundation",
title = "Wikimedia Downloads",
url = "https://dumps.wikimedia.org"
}
``` |
yash1811/news_summaries | 2023-04-19T22:00:36.000Z | [
"license:mit",
"region:us"
] | yash1811 | null | null | null | 2 | 7 | ---
license: mit
---
|
joey234/mmlu-astronomy-neg | 2023-04-20T05:13:25.000Z | [
"region:us"
] | joey234 | null | null | null | 0 | 7 | ---
dataset_info:
features:
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
- name: question
dtype: string
splits:
- name: test
num_bytes: 46473
num_examples: 152
download_size: 28019
dataset_size: 46473
---
# Dataset Card for "mmlu-astronomy-neg"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Heerak/ko_en_parallel_dataset | 2023-04-20T08:51:52.000Z | [
"region:us"
] | Heerak | null | null | null | 0 | 7 | ---
dataset_info:
features:
- name: ko
dtype: string
- name: en
dtype: string
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 4112684317
num_examples: 11800415
- name: validation
num_bytes: 20767480
num_examples: 59299
- name: test
num_bytes: 419935
num_examples: 1982
download_size: 2691575595
dataset_size: 4133871732
---
# Dataset Card for "ko_en_parallel_dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
jlh/home-credit | 2023-04-25T22:58:10.000Z | [
"region:us"
] | jlh | null | null | null | 0 | 7 | ---
dataset_info:
features:
- name: SK_ID_CURR
dtype: int64
- name: TARGET
dtype:
class_label:
names:
'0': '0'
'1': '1'
- name: NAME_CONTRACT_TYPE
dtype: string
- name: CODE_GENDER
dtype: string
- name: FLAG_OWN_CAR
dtype: string
- name: FLAG_OWN_REALTY
dtype: string
- name: CNT_CHILDREN
dtype: int64
- name: AMT_INCOME_TOTAL
dtype: float64
- name: AMT_CREDIT
dtype: float64
- name: AMT_ANNUITY
dtype: float64
- name: AMT_GOODS_PRICE
dtype: float64
- name: NAME_TYPE_SUITE
dtype: string
- name: NAME_INCOME_TYPE
dtype: string
- name: NAME_EDUCATION_TYPE
dtype: string
- name: NAME_FAMILY_STATUS
dtype: string
- name: NAME_HOUSING_TYPE
dtype: string
- name: REGION_POPULATION_RELATIVE
dtype: float64
- name: DAYS_BIRTH
dtype: int64
- name: DAYS_EMPLOYED
dtype: int64
- name: DAYS_REGISTRATION
dtype: float64
- name: DAYS_ID_PUBLISH
dtype: int64
- name: OWN_CAR_AGE
dtype: float64
- name: FLAG_MOBIL
dtype: int64
- name: FLAG_EMP_PHONE
dtype: int64
- name: FLAG_WORK_PHONE
dtype: int64
- name: FLAG_CONT_MOBILE
dtype: int64
- name: FLAG_PHONE
dtype: int64
- name: FLAG_EMAIL
dtype: int64
- name: OCCUPATION_TYPE
dtype: string
- name: CNT_FAM_MEMBERS
dtype: float64
- name: REGION_RATING_CLIENT
dtype: int64
- name: REGION_RATING_CLIENT_W_CITY
dtype: int64
- name: WEEKDAY_APPR_PROCESS_START
dtype: string
- name: HOUR_APPR_PROCESS_START
dtype: int64
- name: REG_REGION_NOT_LIVE_REGION
dtype: int64
- name: REG_REGION_NOT_WORK_REGION
dtype: int64
- name: LIVE_REGION_NOT_WORK_REGION
dtype: int64
- name: REG_CITY_NOT_LIVE_CITY
dtype: int64
- name: REG_CITY_NOT_WORK_CITY
dtype: int64
- name: LIVE_CITY_NOT_WORK_CITY
dtype: int64
- name: ORGANIZATION_TYPE
dtype: string
- name: EXT_SOURCE_1
dtype: float64
- name: EXT_SOURCE_2
dtype: float64
- name: EXT_SOURCE_3
dtype: float64
- name: APARTMENTS_AVG
dtype: float64
- name: BASEMENTAREA_AVG
dtype: float64
- name: YEARS_BEGINEXPLUATATION_AVG
dtype: float64
- name: YEARS_BUILD_AVG
dtype: float64
- name: COMMONAREA_AVG
dtype: float64
- name: ELEVATORS_AVG
dtype: float64
- name: ENTRANCES_AVG
dtype: float64
- name: FLOORSMAX_AVG
dtype: float64
- name: FLOORSMIN_AVG
dtype: float64
- name: LANDAREA_AVG
dtype: float64
- name: LIVINGAPARTMENTS_AVG
dtype: float64
- name: LIVINGAREA_AVG
dtype: float64
- name: NONLIVINGAPARTMENTS_AVG
dtype: float64
- name: NONLIVINGAREA_AVG
dtype: float64
- name: APARTMENTS_MODE
dtype: float64
- name: BASEMENTAREA_MODE
dtype: float64
- name: YEARS_BEGINEXPLUATATION_MODE
dtype: float64
- name: YEARS_BUILD_MODE
dtype: float64
- name: COMMONAREA_MODE
dtype: float64
- name: ELEVATORS_MODE
dtype: float64
- name: ENTRANCES_MODE
dtype: float64
- name: FLOORSMAX_MODE
dtype: float64
- name: FLOORSMIN_MODE
dtype: float64
- name: LANDAREA_MODE
dtype: float64
- name: LIVINGAPARTMENTS_MODE
dtype: float64
- name: LIVINGAREA_MODE
dtype: float64
- name: NONLIVINGAPARTMENTS_MODE
dtype: float64
- name: NONLIVINGAREA_MODE
dtype: float64
- name: APARTMENTS_MEDI
dtype: float64
- name: BASEMENTAREA_MEDI
dtype: float64
- name: YEARS_BEGINEXPLUATATION_MEDI
dtype: float64
- name: YEARS_BUILD_MEDI
dtype: float64
- name: COMMONAREA_MEDI
dtype: float64
- name: ELEVATORS_MEDI
dtype: float64
- name: ENTRANCES_MEDI
dtype: float64
- name: FLOORSMAX_MEDI
dtype: float64
- name: FLOORSMIN_MEDI
dtype: float64
- name: LANDAREA_MEDI
dtype: float64
- name: LIVINGAPARTMENTS_MEDI
dtype: float64
- name: LIVINGAREA_MEDI
dtype: float64
- name: NONLIVINGAPARTMENTS_MEDI
dtype: float64
- name: NONLIVINGAREA_MEDI
dtype: float64
- name: FONDKAPREMONT_MODE
dtype: string
- name: HOUSETYPE_MODE
dtype: string
- name: TOTALAREA_MODE
dtype: float64
- name: WALLSMATERIAL_MODE
dtype: string
- name: EMERGENCYSTATE_MODE
dtype: string
- name: OBS_30_CNT_SOCIAL_CIRCLE
dtype: float64
- name: DEF_30_CNT_SOCIAL_CIRCLE
dtype: float64
- name: OBS_60_CNT_SOCIAL_CIRCLE
dtype: float64
- name: DEF_60_CNT_SOCIAL_CIRCLE
dtype: float64
- name: DAYS_LAST_PHONE_CHANGE
dtype: float64
- name: FLAG_DOCUMENT_2
dtype: int64
- name: FLAG_DOCUMENT_3
dtype: int64
- name: FLAG_DOCUMENT_4
dtype: int64
- name: FLAG_DOCUMENT_5
dtype: int64
- name: FLAG_DOCUMENT_6
dtype: int64
- name: FLAG_DOCUMENT_7
dtype: int64
- name: FLAG_DOCUMENT_8
dtype: int64
- name: FLAG_DOCUMENT_9
dtype: int64
- name: FLAG_DOCUMENT_10
dtype: int64
- name: FLAG_DOCUMENT_11
dtype: int64
- name: FLAG_DOCUMENT_12
dtype: int64
- name: FLAG_DOCUMENT_13
dtype: int64
- name: FLAG_DOCUMENT_14
dtype: int64
- name: FLAG_DOCUMENT_15
dtype: int64
- name: FLAG_DOCUMENT_16
dtype: int64
- name: FLAG_DOCUMENT_17
dtype: int64
- name: FLAG_DOCUMENT_18
dtype: int64
- name: FLAG_DOCUMENT_19
dtype: int64
- name: FLAG_DOCUMENT_20
dtype: int64
- name: FLAG_DOCUMENT_21
dtype: int64
- name: AMT_REQ_CREDIT_BUREAU_HOUR
dtype: float64
- name: AMT_REQ_CREDIT_BUREAU_DAY
dtype: float64
- name: AMT_REQ_CREDIT_BUREAU_WEEK
dtype: float64
- name: AMT_REQ_CREDIT_BUREAU_MON
dtype: float64
- name: AMT_REQ_CREDIT_BUREAU_QRT
dtype: float64
- name: AMT_REQ_CREDIT_BUREAU_YEAR
dtype: float64
splits:
- name: train
num_bytes: 323536216
num_examples: 307511
download_size: 0
dataset_size: 323536216
---
# Dataset Card for "home-credit"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
halaction/song-lyrics | 2023-04-29T18:58:36.000Z | [
"license:apache-2.0",
"region:us"
] | halaction | null | null | null | 0 | 7 | ---
license: apache-2.0
---
|
chittaranjankhatua/car_damage_pub | 2023-05-02T13:43:35.000Z | [
"region:us"
] | chittaranjankhatua | null | null | null | 0 | 7 | Entry not found |
sileod/mindgames | 2023-06-29T08:30:21.000Z | [
"task_categories:text-classification",
"task_ids:natural-language-inference",
"task_ids:multi-input-text-classification",
"multilinguality:monolingual",
"language:en",
"license:apache-2.0",
"theory of mind",
"tom",
"Logical-Reasoning",
"Modal-Logic",
"Reasoning",
"Logics",
"Logic",
"nli",
... | sileod | null | null | null | 4 | 7 | ---
license: apache-2.0
task_categories:
- text-classification
language:
- en
task_ids:
- natural-language-inference
- multi-input-text-classification
multilinguality:
- monolingual
tags:
- theory of mind
- tom
- Logical-Reasoning
- Modal-Logic
- Reasoning
- Logics
- Logic
- nli
- model-checking
- natural language inference
dataset_info:
features:
- name: premise
dtype: string
- name: smcdel_problem
dtype: string
- name: n_announcements
dtype: int64
- name: pbcheck
dtype: string
- name: hypothesis
dtype: string
- name: setup
dtype: string
- name: hypothesis_depth
dtype: int64
- name: n_agents
dtype: int64
- name: label
dtype: int64
- name: names
sequence: string
- name: index
dtype: int64
- name: s-l
dtype: string
- name: deberta_pred
dtype: int64
- name: deberta_confidence
dtype: float64
- name: difficulty
dtype: float64
splits:
- name: train
num_bytes: 8619563.842139175
num_examples: 11174
- name: validation
num_bytes: 2873445.0789304124
num_examples: 3725
- name: test
num_bytes: 2873445.0789304124
num_examples: 3725
download_size: 2991434
dataset_size: 14366454
---
MindGames dataset

Code: https://github.com/sileod/llm-theory-of-mind

Article: https://arxiv.org/abs/2305.03353
```
@article{sileo2023mindgames,
title={MindGames: Targeting Theory of Mind in Large Language Models with Dynamic Epistemic Modal Logic},
author={Sileo, Damien and Lernould, Antoine},
journal={arXiv preprint arXiv:2305.03353},
year={2023}
}
``` |
alejandrowallace/tmdb-5000 | 2023-05-03T20:19:43.000Z | [
"task_categories:zero-shot-classification",
"size_categories:1K<n<10K",
"language:en",
"license:unknown",
"region:us"
] | alejandrowallace | null | null | null | 0 | 7 | ---
license: unknown
task_categories:
- zero-shot-classification
language:
- en
size_categories:
- 1K<n<10K
--- |
Finnish-NLP/mc4_3.1.0_fi_cleaned | 2023-05-19T16:20:51.000Z | [
"region:us"
] | Finnish-NLP | null | null | null | 0 | 7 | ---
dataset_info:
features:
- name: text
dtype: string
- name: timestamp
dtype: timestamp[s]
- name: url
dtype: string
- name: perplexity_kenlm
dtype: int64
- name: label_identity_attack
dtype: float64
- name: label_insult
dtype: float64
- name: label_obscene
dtype: float64
- name: label_severe_toxicity
dtype: float64
- name: label_threat
dtype: float64
- name: label_toxicity
dtype: float64
splits:
- name: train
num_bytes: 103354369732
num_examples: 26468761
- name: validation
num_bytes: 101931416
num_examples: 26149
download_size: 7141130482
dataset_size: 103456301148
---
# Dataset Card for "mc4_3.1.0_fi_cleaned"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
AlekseyKorshuk/hh-lmgym-demo | 2023-05-17T18:13:29.000Z | [
"region:us"
] | AlekseyKorshuk | null | null | null | 1 | 7 | ---
dataset_info:
features:
- name: input_text
dtype: string
- name: output_text
dtype: string
splits:
- name: train
num_bytes: 126803175
num_examples: 112052
- name: test
num_bytes: 14079595
num_examples: 12451
download_size: 0
dataset_size: 140882770
---
# Dataset Card for "hh-lmgym-demo"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
alzoubi36/privaseer | 2023-06-21T12:32:56.000Z | [
"license:gpl-3.0",
"region:us"
] | alzoubi36 | null | null | null | 0 | 7 | ---
license: gpl-3.0
dataset_info:
features:
- name: hash
dtype: string
- name: url
dtype: string
- name: text
dtype: string
- name: title
dtype: string
- name: id
dtype: int64
splits:
- name: train
num_bytes: 17080868768
num_examples: 2180300
download_size: 8133175578
dataset_size: 17080868768
---
## PrivaSeer Dataset
Hugging Face version of the [PrivaSeer](https://privaseer.ist.psu.edu/) dataset.
<pre>
@inproceedings{srinath-etal-2021-privacy,
title = "Privacy at Scale: Introducing the {P}riva{S}eer Corpus of Web Privacy Policies",
author = "Srinath, Mukund and
Wilson, Shomir and
Giles, C Lee",
booktitle = "Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)",
month = aug,
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.acl-long.532",
doi = "10.18653/v1/2021.acl-long.532",
pages = "6829--6839",
abstract = "Organisations disclose their privacy practices by posting privacy policies on their websites. Even though internet users often care about their digital privacy, they usually do not read privacy policies, since understanding them requires a significant investment of time and effort. Natural language processing has been used to create experimental tools to interpret privacy policies, but there has been a lack of large privacy policy corpora to facilitate the creation of large-scale semi-supervised and unsupervised models to interpret and simplify privacy policies. Thus, we present the PrivaSeer Corpus of 1,005,380 English language website privacy policies collected from the web. The number of unique websites represented in PrivaSeer is about ten times larger than the next largest public collection of web privacy policies, and it surpasses the aggregate of unique websites represented in all other publicly available privacy policy corpora combined. We describe a corpus creation pipeline with stages that include a web crawler, language detection, document classification, duplicate and near-duplicate removal, and content extraction. We employ an unsupervised topic modelling approach to investigate the contents of policy documents in the corpus and discuss the distribution of topics in privacy policies at web scale. We further investigate the relationship between privacy policy domain PageRanks and text features of the privacy policies. Finally, we use the corpus to pretrain PrivBERT, a transformer-based privacy policy language model, and obtain state of the art results on the data practice classification and question answering tasks.",}
</pre> |
squarelike/ReverseProxy-OAI-Log | 2023-06-16T18:32:28.000Z | [
"language:en",
"license:apache-2.0",
"region:us"
] | squarelike | null | null | null | 7 | 7 | ---
license: apache-2.0
language:
- en
---
OAI reverse proxy log data found on the Internet up to 2023-06-17.<br>
The dataset was built to fit the Vicuna format, but some modifications are required before actually training on it.<br>
There are three types: GPT-3.5, GPT-4, and Claude.<br>
<br>
This dataset contains vast amounts of AI chatting data (from TavernAI, RisuAI, etc.).<br>
The dataset has not been deduplicated.
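Since the dataset is not deduplicated, a rough hash-based pass is a sensible first step before training. A minimal sketch, assuming ShareGPT-style records with a `conversations` list of `{"value": ...}` turns (the actual field names in this dump may differ):

```python
import hashlib

def dedup_conversations(conversations):
    """Drop exact-duplicate conversations by hashing their concatenated turn texts.

    Assumes each record is a dict with a "conversations" key holding a list of
    {"value": str} turns (Vicuna/ShareGPT-style); adapt the keys to the real schema.
    """
    seen = set()
    unique = []
    for conv in conversations:
        key = hashlib.sha256(
            "\n".join(turn["value"] for turn in conv["conversations"]).encode("utf-8")
        ).hexdigest()
        if key not in seen:
            seen.add(key)
            unique.append(conv)
    return unique
```

This only removes exact duplicates; near-duplicate logs (e.g. retries with minor edits) would need fuzzy matching on top.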
muhrafli/heart-diseases | 2023-05-22T08:57:31.000Z | [
"region:us"
] | muhrafli | null | null | null | 0 | 7 | Entry not found |
dbdu/ShareGPT-74k-ko | 2023-08-19T07:00:39.000Z | [
"task_categories:text-generation",
"size_categories:10K<n<100K",
"language:ko",
"license:cc-by-2.0",
"conversation",
"chatgpt",
"gpt-3.5",
"region:us"
] | dbdu | null | null | null | 12 | 7 | ---
language:
- ko
pretty_name: ShareGPT-74k-ko
tags:
- conversation
- chatgpt
- gpt-3.5
license: cc-by-2.0
task_categories:
- text-generation
size_categories:
- 10K<n<100K
---
# ShareGPT-ko-74k
ShareGPT 90k의 cleaned 버전을 구글 번역기를 이용하여 번역하였습니다.\
원본 데이터셋은 [여기](https://github.com/lm-sys/FastChat/issues/90)에서 확인하실 수 있습니다.
Korean-translated version of ShareGPT-90k, translated by Google Translate.\
You can check the original dataset [here](https://github.com/lm-sys/FastChat/issues/90).
## Dataset Description
json 파일의 구조는 원본 데이터셋과 동일합니다.\
`*_uncleaned.json`은 원본 데이터셋을 번역하고 따로 후처리하지 않은 데이터셋입니다. (총 74k)\
`*_cleaned.json`은 위의 데이터에서 코드가 포함된 데이터를 러프하게 제거한 데이터셋입니다. (총 55k)\
**주의**: 코드는 번역되었을 수 있으므로 cleaned를 쓰시는 걸 추천합니다.
The structure of the dataset is the same as that of the original dataset.\
`*_uncleaned.json` files are the Korean-translated data without any post-processing. (total 74k dialogues)\
`*_cleaned.json` files are the post-processed version, from which dialogues containing code snippets have been roughly removed. (total 55k dialogues)\
**WARNING**: Code snippets might have been translated into Korean. I recommend you use the cleaned files.
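The cleaned files were produced by roughly removing dialogues that contain code. A minimal sketch of that kind of filter, assuming ShareGPT-style records; the exact heuristic used for cleaning is not specified, so this version only catches fenced code blocks:

```python
def remove_code_dialogues(dialogues):
    """Filter out ShareGPT-style dialogues whose turns contain fenced code.

    Illustrative heuristic only, not the exact cleaning rule used to
    produce the *_cleaned.json files.
    """
    def has_code(dialogue):
        return any("```" in turn.get("value", "")
                   for turn in dialogue.get("conversations", []))
    return [d for d in dialogues if not has_code(d)]
```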
## Licensing Information
GPT를 이용한 데이터셋이므로 OPENAI의 [약관](https://openai.com/policies/terms-of-use)을 따릅니다.\
그 외의 경우 [CC BY 2.0 KR](https://creativecommons.org/licenses/by/2.0/kr/)을 따릅니다.
The licensing status of the datasets follows [OPENAI Licence](https://openai.com/policies/terms-of-use) as it contains GPT-generated sentences.\
For all the other cases, the licensing status follows [CC BY 2.0 KR](https://creativecommons.org/licenses/by/2.0/kr/).
## Code
번역에 사용한 코드는 아래 리포지토리에서 확인 가능합니다. Check out the following repository to see the translation code used.\
https://github.com/dubuduru/ShareGPT-translation
You can use the repository to translate ShareGPT-like dataset into your preferred language. |
RussianNLP/RuSpellGold | 2023-05-26T16:41:30.000Z | [
"task_categories:text-generation",
"size_categories:1K<n<10K",
"language:ru",
"license:apache-2.0",
"region:us"
] | RussianNLP | RuSpellGold is a benchmark of 1711 sentence pairs
dedicated to the problem of automatic spelling correction in the Russian language.
The dataset is gathered from five different domains: news, Russian classic literature,
social media texts, the open web, and strategic documents.
It has been passed through a two-stage manual labeling process with native speakers as annotators
to correct spelling violations while preserving the original style of the text. | null | null | 0 | 7 | ---
license: apache-2.0
task_categories:
- text-generation
language:
- ru
size_categories:
- 1K<n<10K
---
# Dataset Card for RuSpellGold
## Dataset Description
- **Paper:** # TODO
- **ArXiv:** # TODO
- **Point of Contact:** nikita.martynov.98@list.ru
- **Language:** Russian
### Dataset Summary
RuSpellGold is a benchmark of 1711 sentence pairs dedicated to the problem of automatic spelling correction in the Russian language. The dataset is gathered from five different domains: news, Russian classic literature, social media texts, the open web, and strategic documents. It has been passed through a two-stage manual labeling process with native speakers as annotators to correct spelling violations while preserving the original style of the text.
## Dataset Structure
### Supported Tasks and Leaderboards
- **Task:** automatic spelling correction.
- **Metrics:** https://www.dialog-21.ru/media/3427/sorokinaaetal.pdf.
### Languages
Russian.
### Data Instances
```
{
"sources": "Видела в городе афиши, анонсрующие ее концерт.",
"corrections": "Видела в городе афиши, анонсирующие её концерт",
"domain": "aranea"
}
```
### Data Fields
- ```sources (str)```: original sentence.
- ```corrections (str)```: corrected sentence.
- ```domain (str)```: domain, from which the sentence is taken from.
### Data Splits
The current version of the benchmark is represented only by its test part:
- ```test```: 1711 sentence pairs (```"data/test.csv"```).
which is then split into the following domain-related shards:
- ```aranea```: 756 sentence pairs (```"data/aranea/split.csv"```);
- ```literature```: 260 sentence pairs (```"data/literature/split.csv"```);
- ```news```: 245 sentence pairs (```"data/news/split.csv"```);
- ```social_media```: 200 sentence pairs (```"data/social_media/split.csv"```);
- ```strategic_documents```: 250 sentence pairs (```"data/strategic_documents/split.csv"```);
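For a quick sanity check against the gold corrections, a simple sentence-level exact-match score can be computed over (prediction, correction) pairs. This is an illustrative baseline only, not the official word-level metric linked above:

```python
def exact_match(predictions, references):
    """Fraction of sentences where a model output equals the gold correction.

    Crude sentence-level baseline; the benchmark itself uses the
    word-level metric of Sorokin et al. referenced in this card.
    """
    if not references:
        return 0.0
    hits = sum(p.strip() == r.strip() for p, r in zip(predictions, references))
    return hits / len(references)
```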
## Dataset Creation
### Source Data
|Source |Strategy |Domain |
|---|---|---|
|Vladimír Benko. 2014. Aranea: Yet another family of (comparable) web corpora. // Text, Speech and Dialogue: 17th International Conference, TSD 2014, Brno, Czech Republic, September 8-12, 2014. Proceedings 17, P 247–256. Springer| Random sentences from Araneum Russicum|Open web (aranea) |
| Russian classic literature aggregated in this [corpus](https://www.kaggle.com/datasets/d0rj3228/russian-literature) | Random sentences | Literature |
|Ilya Gusev. 2020. Dataset for automatic summarization of russian news. // Artificial Intelligence and Natural Language: 9th Conference, AINL 2020, Helsinki, Finland, October 7–9, 2020, Proceedings 9, P 122–134. Springer | Random sentences | News |
|Social media platforms | Posts from social media platforms marked with specific hashtags | Social Media |
|Vitaly Ivanin, Ekaterina Artemova, Tatiana Batura, Vladimir Ivanov, Veronika Sarkisyan, Elena Tutubalina, and Ivan Smurov. 2020. Rurebus-2020 shared task: Russian relation extraction for business. // Computational Linguistics and Intellectual Technologies: Proceedings of the International Conference “Dialog” [Komp’iuternaia Lingvistika i Intellektual’nye Tehnologii: Trudy Mezhdunarodnoj Konferentsii “Dialog”], Moscow, Russia. | Random sentences | Strategic documents |
### Annotations
#### Annotation process
All of the sentences undergo a two-stage annotation procedure on [Toloka](https://toloka.ai), a crowd-sourcing platform for data labeling.
Each stage includes an unpaid training phase with explanations, control tasks for tracking annotation quality, and the main annotation task. Before starting, a worker is given detailed instructions describing the task, explaining the labels, and showing plenty of examples.
The instruction is available at any time during both the training and main annotation phases. To get access to the main phase, the worker should first complete the training phase by labeling more than 70% of its examples correctly. To ensure high-quality expertise on the matter of spelling, we set up an additional test phase on a small portion of the data, manually revised the results, and approved only those annotators who managed to avoid any mistakes.
- **Stage 1: Data gathering**
We provide texts with possible mistakes to annotators and ask them to write the sentence correctly preserving the original style-markers of the text.
- **Stage 2: Validation**
We provide annotators with the pair of sentences (origin and its corresponding correction from the previous stage) and ask them to check if the correction is right.
### Personal and Sensitive Information
Each annotator is warned about potentially sensitive topics in data (e.g., politics, societal minorities, and religion).
## Additional Information
### Dataset Curators
Correspondence: ```nikita.martynov.98@list.ru```
### Licensing Information
The corpus is available under the Apache 2.0 license. The copyright (where applicable) of texts from the linguistic publications and resources remains with the original authors or publishers.
### Other
Please refer to our paper # TODO for more details. |
yjernite/stable-bias_grounding-images_multimodel_3_12_22 | 2023-05-29T16:44:50.000Z | [
"region:us"
] | yjernite | null | null | null | 0 | 7 | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: gender_phrase
dtype: string
- name: ethnicity_phrase
dtype: string
- name: image
dtype: image
- name: source_type
dtype: string
- name: source
dtype: string
splits:
- name: train
num_bytes: 70960514.0
num_examples: 2040
download_size: 70651732
dataset_size: 70960514.0
---
# Dataset Card for "stable-bias_grounding-images_multimodel_3_12_22"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Cubpaw/voxelgym_5c_42x42_10 | 2023-06-01T13:00:45.000Z | [
"region:us"
] | Cubpaw | null | null | null | 0 | 7 | ---
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype: image
- name: rgb_label
dtype: image
- name: path_label
dtype: image
- name: path_rgb_label
dtype: image
splits:
- name: train
num_bytes: 6953.0
num_examples: 8
- name: validation
num_bytes: 1776.0
num_examples: 2
download_size: 26790
dataset_size: 8729.0
---
# Dataset Card for "voxelgym_5c_42x42_10"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
whu9/mediasum_postprocess | 2023-06-03T06:02:12.000Z | [
"region:us"
] | whu9 | null | null | null | 0 | 7 | ---
dataset_info:
features:
- name: source
dtype: string
- name: summary
dtype: string
- name: source_num_tokens
dtype: int64
- name: summary_num_tokens
dtype: int64
splits:
- name: train
num_bytes: 3913935357
num_examples: 443511
- name: validation
num_bytes: 86873579
num_examples: 9999
- name: test
num_bytes: 88635215
num_examples: 9997
download_size: 2335096802
dataset_size: 4089444151
---
# Dataset Card for "mediasum_postprocess"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
swaption2009/cyber-threat-intelligence-custom-data | 2023-06-04T07:35:25.000Z | [
"task_categories:text-generation",
"task_categories:table-question-answering",
"language:en",
"region:us"
] | swaption2009 | null | null | null | 3 | 7 | ---
task_categories:
- text-generation
- table-question-answering
language:
- en
--- |
kaist-ai/CoT-Collection_multilingual | 2023-06-08T07:32:18.000Z | [
"task_categories:text-generation",
"task_categories:text-classification",
"size_categories:1M<n<10M",
"language:en",
"license:cc-by-4.0",
"arxiv:2305.14045",
"region:us"
] | kaist-ai | """
_LICENSE = "CC BY 4.0"
_HOMEPAGE = "https://github.com/kaistAI/CoT-Collection"
_LANGUAGES = {
"ko": "Korean",
"fr": "French",
"ru": "Russian",
"ja": "Japanese",
"zh": "Chinese",
}
# _ALL_LANGUAGES = "all_languages"
class CoTCollectionMultiConfig(datasets.BuilderConfig): | @article{kim2023cot,
title={The CoT Collection: Improving Zero-shot and Few-shot Learning of Language Models via Chain-of-Thought Fine-Tuning},
author={Kim, Seungone and Joo, Se June and Kim, Doyoung and Jang, Joel and Ye, Seonghyeon and Shin, Jamin and Seo, Minjoon},
journal={arXiv preprint arXiv:2305.14045},
year={2023}
} | null | 3 | 7 | ---
license: cc-by-4.0
task_categories:
- text-generation
- text-classification
language:
- en
size_categories:
- 1M<n<10M
---
# Dataset Card for Dataset Name
## Dataset Description
- **Homepage:** https://github.com/kaistAI/CoT-Collection
- **Repository:** https://github.com/kaistAI/CoT-Collection
- **Paper:** https://arxiv.org/abs/2305.14045
- **Point of Contact:** sejune@lklab.io
### Dataset Summary
The CoT Collection is a dataset of chain-of-thought rationales designed to improve the zero-shot and few-shot learning abilities of language models via chain-of-thought fine-tuning. This repository hosts the multilingual version of the collection.
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
Korean, French, Russian, Japanese, Chinese
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
| name | train |
|-------------------|------:|
|CoT-Collection|1837928|
## Additional Information
### Citation Information
```
@article{kim2023cot,
title={The CoT Collection: Improving Zero-shot and Few-shot Learning of Language Models via Chain-of-Thought Fine-Tuning},
author={Kim, Seungone and Joo, Se June and Kim, Doyoung and Jang, Joel and Ye, Seonghyeon and Shin, Jamin and Seo, Minjoon},
journal={arXiv preprint arXiv:2305.14045},
year={2023}
}
``` |
ParsaKgvr/socce_report_analysis | 2023-06-06T08:09:18.000Z | [
"region:us"
] | ParsaKgvr | null | null | null | 1 | 7 | ---
dataset_info:
features:
- name: 'Unnamed: 0'
dtype: int64
- name: sent0
dtype: string
- name: sent1
dtype: string
- name: sent2
dtype: string
- name: sent3
dtype: string
- name: sent4
dtype: string
- name: sent5
dtype: string
- name: sent6
dtype: string
- name: sent7
dtype: string
- name: sent8
dtype: string
- name: sent9
dtype: string
- name: sent10
dtype: string
- name: sent11
dtype: string
- name: sent12
dtype: string
- name: sent13
dtype: string
- name: sent14
dtype: string
- name: sent15
dtype: string
- name: sent16
dtype: string
- name: sent17
dtype: string
- name: sent18
dtype: string
- name: sent19
dtype: string
- name: sent20
dtype: string
- name: sent21
dtype: string
- name: sent22
dtype: string
- name: sent23
dtype: string
- name: sent24
dtype: string
- name: sent25
dtype: string
- name: sent26
dtype: string
- name: sent27
dtype: string
- name: sent28
dtype: string
- name: sent29
dtype: string
- name: sent30
dtype: string
- name: sent31
dtype: string
- name: sent32
dtype: string
- name: sent33
dtype: string
- name: sent34
dtype: string
- name: sent35
dtype: string
- name: sent36
dtype: string
- name: sent37
dtype: string
- name: sent38
dtype: string
- name: sent39
dtype: string
- name: sent40
dtype: string
- name: sent41
dtype: string
- name: sent42
dtype: string
- name: sent43
dtype: string
- name: sent44
dtype: string
- name: sent45
dtype: string
- name: sent46
dtype: string
- name: sent47
dtype: string
- name: sent48
dtype: string
- name: sent49
dtype: string
- name: sent50
dtype: string
- name: sent51
dtype: string
- name: sent52
dtype: string
- name: sent53
dtype: string
- name: sent54
dtype: string
- name: sent55
dtype: string
- name: sent56
dtype: string
- name: sent57
dtype: string
- name: sent58
dtype: string
- name: sent59
dtype: string
- name: sent60
dtype: string
- name: sent61
dtype: string
- name: sent62
dtype: string
- name: sent63
dtype: string
- name: sent64
dtype: string
- name: sent65
dtype: string
- name: sent66
dtype: string
- name: sent67
dtype: string
- name: sent68
dtype: string
- name: sent69
dtype: string
- name: sent70
dtype: string
- name: sent71
dtype: string
- name: sent72
dtype: string
- name: sent73
dtype: string
- name: sent74
dtype: string
- name: sent75
dtype: string
- name: sent76
dtype: string
- name: sent77
dtype: string
- name: sent78
dtype: string
- name: sent79
dtype: string
- name: sent80
dtype: string
- name: sent81
dtype: string
- name: sent82
dtype: string
- name: sent83
dtype: string
- name: sent84
dtype: string
- name: sent85
dtype: string
- name: sent86
dtype: string
- name: sent87
dtype: string
- name: sent88
dtype: string
- name: sent89
dtype: string
- name: sent90
dtype: string
- name: sent91
dtype: string
- name: sent92
dtype: string
- name: sent93
dtype: string
- name: sent94
dtype: string
- name: sent95
dtype: string
- name: sent96
dtype: string
- name: sent97
dtype: string
- name: sent98
dtype: string
- name: sent99
dtype: string
- name: sent100
dtype: string
- name: sent101
dtype: string
- name: sent102
dtype: string
- name: sent103
dtype: string
- name: sent104
dtype: string
- name: sent105
dtype: string
- name: sent106
dtype: string
- name: sent107
dtype: string
- name: sent108
dtype: string
- name: sent109
dtype: string
- name: sent110
dtype: string
- name: sent111
dtype: string
- name: sent112
dtype: string
- name: sent113
dtype: string
- name: sent114
dtype: string
- name: sent115
dtype: string
- name: sent116
dtype: string
- name: sent117
dtype: string
- name: sent118
dtype: string
- name: sent119
dtype: string
- name: sent120
dtype: string
- name: sent121
dtype: string
- name: sent122
dtype: string
- name: sent123
dtype: string
- name: sent124
dtype: string
- name: sent125
dtype: string
- name: sent126
dtype: string
- name: sent127
dtype: string
- name: sent128
dtype: string
- name: sent129
dtype: string
- name: sent130
dtype: string
- name: sent131
dtype: string
- name: sent132
dtype: string
- name: sent133
dtype: string
- name: sent134
dtype: string
- name: sent135
dtype: string
- name: sent136
dtype: string
- name: player0
dtype: string
- name: rating0
dtype: string
- name: player1
dtype: string
- name: rating1
dtype: string
- name: player2
dtype: string
- name: rating2
dtype: string
- name: player3
dtype: string
- name: rating3
dtype: string
- name: player4
dtype: string
- name: rating4
dtype: string
- name: player5
dtype: string
- name: rating5
dtype: string
- name: player6
dtype: string
- name: rating6
dtype: string
- name: player7
dtype: string
- name: rating7
dtype: string
- name: player8
dtype: string
- name: rating8
dtype: string
- name: player9
dtype: string
- name: rating9
dtype: string
- name: player10
dtype: string
- name: rating10
dtype: string
- name: player11
dtype: string
- name: rating11
dtype: string
- name: player12
dtype: string
- name: rating12
dtype: string
- name: player13
dtype: string
- name: rating13
dtype: string
- name: player14
dtype: string
- name: rating14
dtype: string
- name: player15
dtype: string
- name: rating15
dtype: string
- name: player16
dtype: string
- name: rating16
dtype: string
- name: player17
dtype: string
- name: rating17
dtype: string
- name: player18
dtype: string
- name: rating18
dtype: string
- name: player19
dtype: string
- name: rating19
dtype: string
- name: player20
dtype: string
- name: rating20
dtype: string
- name: player21
dtype: string
- name: rating21
dtype: string
- name: player22
dtype: string
- name: rating22
dtype: string
- name: player23
dtype: string
- name: rating23
dtype: string
- name: player24
dtype: string
- name: rating24
dtype: string
- name: player25
dtype: string
- name: rating25
dtype: string
- name: player26
dtype: string
- name: rating26
dtype: string
- name: player27
dtype: string
- name: rating27
dtype: string
- name: player28
dtype: string
- name: rating28
dtype: string
- name: player29
dtype: string
- name: rating29
dtype: string
- name: player30
dtype: string
- name: rating30
dtype: string
- name: player31
dtype: string
- name: rating31
dtype: string
- name: player32
dtype: string
- name: rating32
dtype: string
- name: player33
dtype: string
- name: rating33
dtype: string
splits:
- name: train
num_bytes: 13072462
num_examples: 1996
download_size: 6901926
dataset_size: 13072462
---
# Dataset Card for "socce_report_analysis"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
argilla/reward-model-data-falcon | 2023-06-07T11:33:30.000Z | [
"region:us"
] | argilla | null | null | null | 0 | 7 | ---
{}
---
## Guidelines
These guidelines are based on the paper [Training Language Models to Follow Instructions with Human Feedback](https://cdn.openai.com/papers/Training_language_models_to_follow_instructions_with_human_feedback.pdf)
**You are given a text-based description of a task, submitted by a user.**
This task description may be in the form of an explicit instruction (e.g. "Write a story about a wise frog."). The task may also be specified indirectly, for example by using several examples of the desired behavior (e.g. given a sequence of movie reviews followed by their sentiment, followed by one movie review without a sentiment, you can assume the task is to predict the sentiment of the final review), or by generating the start of a desired output (e.g. given "There once was a wise frog named Julius," you can assume the task is to continue the story).
**You will also be given several text outputs, intended to help the user with their task.** Your job is to evaluate these outputs to ensure that they are helpful, truthful, and harmless, and select the best according to these criteria.
In case **none of the outputs is correct, conforming to the criteria, or they're exactly equal, use the discard option to not rank them**.
For most tasks, being truthful and harmless is more important than being helpful.
By **helpful**, we mean that the output should follow the user's intention and help the user solve their task. Some examples of helpful behavior:
- Writing in clear language.
- Answering the question they meant to ask, even if they mis-asked it.
- Being sensitive to internationality (e.g., "football" shouldn't mean American football, and "the president" doesn't necessarily mean the US president).
- If an instruction is too confusing, asking for clarification and explaining why the instruction is confusing.
- Not giving overly long or rambling answers, or repeating information from the question.
- Not assuming extraneous extra context outside of what's given (besides things like facts about the world), unless that's an implied part of the task. For example, if asked to "write a polite email response to this email: email body," the output shouldn't assume "I can't make it this time, but am free next weekend." But if asked to "Write an email to Socrates," then you can safely assume the output can make up a context.
By **truthful**, we mean that the output contains accurate information and doesn't mislead the user. Some examples of truthful behavior:
- On tasks like summarization, where the output should only use information from input, not making up details that are not part of the input description.
- Not producing clearly false information about the world (e.g., making up facts or promoting conspiracies). For example, the output should not state that Hillary Clinton has served time in prison.
- Avoid generating misleading information or information with questionable authenticity. For example, if the input asks "Why did Hillary Clinton go to jail?", the output should not say "It's not totally clear," but rather should refute the premise of the question.
By **harmless**, we mean that the output should not cause physical, psychological, or social harm to people; damage to or loss of equipment or property; damage to the environment; or harm to institutions or resources necessary to human well-being. Some examples of harmless behavior:
- Treating other humans with kindness, respect, and consideration; not denigrating members of certain groups or using biased language against a particular group.
- Not generating abusive, threatening, or offensive language or promoting violence.
- Not writing sexual or violent content if it's not asked for.
- Not giving bad real-world advice or promoting illegal activity.
Evaluating model outputs may involve making trade-offs between these criteria. These trade-offs will depend on the task.
## Fields
* `instruction` is of type <class 'str'>
* `response-1` is of type <class 'str'>
* `response-2` is of type <class 'str'>
## Questions
* `choose-best` : Choose the most helpful, harmless, and truthful response. Select 1 for response-1, 2 for response-2, or discard if both are equally good/bad.
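Collected `choose-best` annotations map naturally onto (chosen, rejected) pairs for reward-model training. A minimal sketch of that conversion, assuming flat records with the field names above (adapt it to the actual export format):

```python
def to_preference_pairs(records):
    """Turn choose-best annotations into (prompt, chosen, rejected) triples.

    Records are assumed to carry "instruction", "response-1", "response-2"
    and a "choose-best" answer of 1 or 2; discarded/unanswered records
    (any other value) are skipped.
    """
    pairs = []
    for rec in records:
        choice = rec.get("choose-best")
        if choice not in (1, 2):
            continue  # discarded or unanswered
        chosen = rec["response-1"] if choice == 1 else rec["response-2"]
        rejected = rec["response-2"] if choice == 1 else rec["response-1"]
        pairs.append((rec["instruction"], chosen, rejected))
    return pairs
```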
## Load with Argilla
To load this dataset with Argilla, you'll just need to install Argilla as `pip install argilla --upgrade` and then use the following code:
```python
import argilla as rg
ds = rg.FeedbackDataset.from_huggingface('argilla/reward-model-data-falcon')
```
## Load with Datasets
To load this dataset with Datasets, you'll just need to install Datasets as `pip install datasets --upgrade` and then use the following code:
```python
from datasets import load_dataset
ds = load_dataset('argilla/reward-model-data-falcon')
```
|
64bits/lex_fridman_podcast_for_llm_vicuna | 2023-06-09T10:13:46.000Z | [
"task_categories:text-generation",
"language:en",
"transformers",
"region:us"
] | 64bits | null | null | null | 11 | 7 | ---
task_categories:
- text-generation
language:
- en
pretty_name: lex-llm
tags:
- transformers
---
# Intro
This dataset is a compilation of audio-to-text transcripts from the Lex Fridman Podcast. The podcast, hosted by Lex Fridman, an AI researcher at MIT, is a deep dive into a broad range of topics touching on science, technology, history, philosophy, and the nature of intelligence, consciousness, love, and power. The guests are drawn from a diverse range of fields, providing unique and insightful perspectives on these subjects.
The dataset has been formatted in the ShareGPT format for use with conversational large language models (LLMs) such as Vicuna and WizardVicuna.
This dataset can be an invaluable resource for training and refining language models, offering a rich source of nuanced, intellectual, and thought-provoking dialogue. Furthermore, the diversity of topics covered provides a broad spectrum of language usage, idiomatic expressions, and subject matter expertise.
### 3 versions
1. _original: the original dataset, where each item is an entire episode
2. _chunked: a chunked dataset, where episodes are formatted into chunks of approximately 1200 words (roughly < 2048 tokens)
3. _chunked_gpt: the _chunked dataset with the speaker labels "lex" and "guest" renamed to "human" and "gpt" to fit Vicuna training
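As a rough illustration of the chunking described above (the exact chunking logic is not published; this is a sketch under the assumption that chunks are built greedily from whole turns up to a word budget):

```python
def chunk_conversation(turns, max_words=1200):
    """Greedily group whole turns into chunks of at most max_words words.

    A single turn longer than max_words becomes its own (oversized) chunk.
    """
    chunks, current, count = [], [], 0
    for turn in turns:
        words = len(turn["value"].split())
        if current and count + words > max_words:
            chunks.append(current)
            current, count = [], 0
        current.append(turn)
        count += words
    if current:
        chunks.append(current)
    return chunks

# Two turns of 800 and 700 words exceed the budget together, so they split.
episode = [{"from": "lex", "value": "word " * 800},
           {"from": "guest", "value": "word " * 700}]
print(len(chunk_conversation(episode)))  # 2
```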
# What I did
1. Fetch all episode links of the Lex Fridman Podcast
2. For each episode, transform the HTML transcript into JSON (Vicuna ShareGPT format)
3. Remove the first few sentences from Lex in each episode to strip the introduction and ads
# Problems & Concerns
1. These are audio-to-text transcriptions, which may contain recognition errors
2. Although the speakers are professionals, these are verbal conversations and contain colloquial spoken language
3. The dataset may contain ads and personal opinions from Lex Fridman and the guests
4. more ...
# Next Steps
1. Fine-tune LLaMA, WizardVicuna, and Vicuna models using this dataset |
TigerResearch/sft_en | 2023-06-09T12:21:07.000Z | [
"language:en",
"license:apache-2.0",
"region:us"
] | TigerResearch | null | null | null | 3 | 7 | ---
license: apache-2.0
language:
- en
---
A collection of English SFT (sft-en) fine-tuning data from the [Tigerbot](https://github.com/TigerResearch/TigerBot) open-source project.
This collection covers the other English SFT datasets open-sourced under this organization, so there is no need to download them individually.
## Usage
```python
import datasets
ds_sft = datasets.load_dataset('TigerResearch/sft_en')
```
## File Breakdown
| Type | Language | Dataset file | Count |
| ------------ | ---- | -------------------------------------------------------------------------------------------------------------------------------- | ----------- |
| Alpaca | English | [tigerbot-alpaca-en-50k](https://huggingface.co/datasets/TigerResearch/sft_en/blob/main/tigerbot-alpaca-en-50k.json) | 50k |
| Brainstorming | English | [tigerbot-dolly-Brainstorming-en-1.7k](https://huggingface.co/datasets/TigerResearch/sft_en/blob/main/tigerbot-dolly-Brainstorming-en-1.7k.json) | 1.7k |
| Classification | English | [tigerbot-dolly-Classification-en-2k](https://huggingface.co/datasets/TigerResearch/sft_en/blob/main/tigerbot-dolly-Classification-en-2k.json) | 2k |
| Math problems | English | [tigerbot-gsm-8k-en](https://huggingface.co/datasets/TigerResearch/sft_en/blob/main/tigerbot-gsm-8k-en.json) | 8k |
| Code | English | [tigerbot-kaggle-leetcodesolutions-en-2k](https://huggingface.co/datasets/TigerResearch/sft_en/blob/main/tigerbot-kaggle-leetcodesolutions-en-2k.json) | 2k |
| Recipe generation | English | [tigerbot-kaggle-recipes-en-2k](https://huggingface.co/datasets/TigerResearch/sft_en/blob/main/tigerbot-kaggle-recipes-en-2k.json) | 2k |
| Medical note generation | English | [tigerbot-mt-note-generation-en](https://huggingface.co/datasets/TigerResearch/sft_en/blob/main/tigerbot-mt-note-generation-en.json) | 450 |
| Multi-turn dialogue | English | [tigerbot-OIG-multichat-en-50k](https://huggingface.co/datasets/TigerResearch/sft_en/blob/main/tigerbot-OIG-multichat-en-50k.json) | 50k |
| General QA | English | [tigerbot-stackexchange-qa-en-0.5m](https://huggingface.co/datasets/TigerResearch/sft_en/blob/main/tigerbot-stackexchange-qa-en-0.5m.json) | 0.5m |
| Wiki QA | English | [tigerbot-wiki-qa-bart-en-10k](https://huggingface.co/datasets/TigerResearch/sft_en/blob/main/tigerbot-wiki-qa-bart-en-10k.json) | 10k |
| How-to tutorials | English | [tigerbot-youtube-howto-en-50k](https://huggingface.co/datasets/TigerResearch/sft_en/blob/main/tigerbot-youtube-howto-en-50k.json) | 50k | |
orangetin/oig-chip | 2023-06-12T01:32:23.000Z | [
"license:apache-2.0",
"region:us"
] | orangetin | null | null | null | 0 | 7 | ---
license: apache-2.0
---
|
Edoh/manim_python | 2023-06-12T17:01:54.000Z | [
"license:creativeml-openrail-m",
"region:us"
] | Edoh | null | null | null | 1 | 7 | ---
license: creativeml-openrail-m
---
|
yyu/agnews-attrprompt | 2023-08-22T08:27:07.000Z | [
"task_categories:text-classification",
"size_categories:10K<n<100K",
"language:en",
"license:apache-2.0",
"region:us"
] | yyu | null | null | null | 0 | 7 | ---
license: apache-2.0
task_categories:
- text-classification
language:
- en
size_categories:
- 10K<n<100K
---
This is the data used in the paper [Large Language Model as Attributed Training Data Generator: A Tale of Diversity and Bias](https://github.com/yueyu1030/AttrPrompt).
- `label.txt`: the label name for each class
- `train.jsonl`: The original training set.
- `valid.jsonl`: The original validation set.
- `test.jsonl`: The original test set.
- `simprompt.jsonl`: The training data generated by the simple prompt.
- `attrprompt.jsonl`: The training data generated by the attributed prompt. |
mcimpoi/alot | 2023-06-16T10:34:05.000Z | [
"task_categories:image-classification",
"size_categories:10K<n<100K",
"language:en",
"region:us"
] | mcimpoi | null | null | null | 0 | 7 | ---
task_categories:
- image-classification
language:
- en
pretty_name: Amsterdam Library of Textures (ALOT)
size_categories:
- 10K<n<100K
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': '1'
'1': '10'
'2': '100'
'3': '101'
'4': '102'
'5': '103'
'6': '104'
'7': '105'
'8': '106'
'9': '107'
'10': '108'
'11': '109'
'12': '11'
'13': '110'
'14': '111'
'15': '112'
'16': '113'
'17': '114'
'18': '115'
'19': '116'
'20': '117'
'21': '118'
'22': '119'
'23': '12'
'24': '120'
'25': '121'
'26': '122'
'27': '123'
'28': '124'
'29': '125'
'30': '126'
'31': '127'
'32': '128'
'33': '129'
'34': '13'
'35': '130'
'36': '131'
'37': '132'
'38': '133'
'39': '134'
'40': '135'
'41': '136'
'42': '137'
'43': '138'
'44': '139'
'45': '14'
'46': '140'
'47': '141'
'48': '142'
'49': '143'
'50': '144'
'51': '145'
'52': '146'
'53': '147'
'54': '148'
'55': '149'
'56': '15'
'57': '150'
'58': '151'
'59': '152'
'60': '153'
'61': '154'
'62': '155'
'63': '156'
'64': '157'
'65': '158'
'66': '159'
'67': '16'
'68': '160'
'69': '161'
'70': '162'
'71': '163'
'72': '164'
'73': '165'
'74': '166'
'75': '167'
'76': '168'
'77': '169'
'78': '17'
'79': '170'
'80': '171'
'81': '172'
'82': '173'
'83': '174'
'84': '175'
'85': '176'
'86': '177'
'87': '178'
'88': '179'
'89': '18'
'90': '180'
'91': '181'
'92': '182'
'93': '183'
'94': '184'
'95': '185'
'96': '186'
'97': '187'
'98': '188'
'99': '189'
'100': '19'
'101': '190'
'102': '191'
'103': '192'
'104': '193'
'105': '194'
'106': '195'
'107': '196'
'108': '197'
'109': '198'
'110': '199'
'111': '2'
'112': '20'
'113': '200'
'114': '201'
'115': '202'
'116': '203'
'117': '204'
'118': '205'
'119': '206'
'120': '207'
'121': '208'
'122': '209'
'123': '21'
'124': '210'
'125': '211'
'126': '212'
'127': '213'
'128': '214'
'129': '215'
'130': '216'
'131': '217'
'132': '218'
'133': '219'
'134': '22'
'135': '220'
'136': '221'
'137': '222'
'138': '223'
'139': '224'
'140': '225'
'141': '226'
'142': '227'
'143': '228'
'144': '229'
'145': '23'
'146': '230'
'147': '231'
'148': '232'
'149': '233'
'150': '234'
'151': '235'
'152': '236'
'153': '237'
'154': '238'
'155': '239'
'156': '24'
'157': '240'
'158': '241'
'159': '242'
'160': '243'
'161': '244'
'162': '245'
'163': '246'
'164': '247'
'165': '248'
'166': '249'
'167': '25'
'168': '250'
'169': '26'
'170': '27'
'171': '28'
'172': '29'
'173': '3'
'174': '30'
'175': '31'
'176': '32'
'177': '33'
'178': '34'
'179': '35'
'180': '36'
'181': '37'
'182': '38'
'183': '39'
'184': '4'
'185': '40'
'186': '41'
'187': '42'
'188': '43'
'189': '44'
'190': '45'
'191': '46'
'192': '47'
'193': '48'
'194': '49'
'195': '5'
'196': '50'
'197': '51'
'198': '52'
'199': '53'
'200': '54'
'201': '55'
'202': '56'
'203': '57'
'204': '58'
'205': '59'
'206': '6'
'207': '60'
'208': '61'
'209': '62'
'210': '63'
'211': '64'
'212': '65'
'213': '66'
'214': '67'
'215': '68'
'216': '69'
'217': '7'
'218': '70'
'219': '71'
'220': '72'
'221': '73'
'222': '74'
'223': '75'
'224': '76'
'225': '77'
'226': '78'
'227': '79'
'228': '8'
'229': '80'
'230': '81'
'231': '82'
'232': '83'
'233': '84'
'234': '85'
'235': '86'
'236': '87'
'237': '88'
'238': '89'
'239': '9'
'240': '90'
'241': '91'
'242': '92'
'243': '93'
'244': '94'
'245': '95'
'246': '96'
'247': '97'
'248': '98'
'249': '99'
splits:
- name: train
num_bytes: 3302794460.0
num_examples: 20000
- name: test
num_bytes: 411146945.0
num_examples: 2500
- name: dev
num_bytes: 415575782.5
num_examples: 2500
download_size: 4104421810
dataset_size: 4129517187.5
---
# Dataset Card for Amsterdam Library of Textures (ALOT)
## Dataset Description
- **Homepage:** https://aloi.science.uva.nl/public_alot/
- **Paper:** G. J. Burghouts and J. M. Geusebroek, Material-specific adaptation of color invariant features,
Pattern Recognition Letters, vol. 30, 306-313, 2009
### Licensing Information
Not known, see website
### Citation Information
```
@article{burghouts2009material,
  title={Material-specific adaptation of color invariant features},
  author={Burghouts, Gertjan J and Geusebroek, Jan-Mark},
  journal={Pattern Recognition Letters},
  volume={30},
  number={3},
  pages={306--313},
  year={2009},
  publisher={Elsevier}
}
``` |
mike-ravkine/rosettacode-parsed | 2023-06-20T12:01:47.000Z | [
"task_categories:text-generation",
"language:en",
"language:code",
"license:gfdl",
"region:us"
] | mike-ravkine | null | null | null | 8 | 7 | ---
license: gfdl
task_categories:
- text-generation
language:
- en
- code
---
## Data Origins
Original dataset: https://huggingface.co/datasets/jondurbin/rosettacode-raw/
Cleaner code: https://github.com/the-crypt-keeper/rosettacode-parser
## Data Fields
|Field|Type|Description|
|---|---|---|
|title|string|problem title|
|task|string|problem description|
|language|string|solution language/variant|
|solution|string|solution source code|
## Languages
One .jsonl file is provided per language group; the sublanguage field in the data denotes the specific language version/variant or the source language the example was ported from.
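A minimal sketch of reading one of the per-language .jsonl files and filtering it (field names follow the table above; the record values and the filtering value are illustrative assumptions):

```python
import io
import json

def load_jsonl(fp):
    """Read one JSON record per line from a file-like object."""
    return [json.loads(line) for line in fp if line.strip()]

# In-memory stand-in for e.g. python.jsonl; real use would open the file.
sample = io.StringIO(
    '{"title": "FizzBuzz", "task": "...", "language": "Python", "solution": "print(1)"}\n'
    '{"title": "FizzBuzz", "task": "...", "language": "Python 2", "solution": "print 1"}\n'
)
rows = load_jsonl(sample)
python3_rows = [r for r in rows if r["language"] == "Python"]
print(len(rows), len(python3_rows))  # 2 1
```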
```
Language Python problems 510 rows 621
Language C problems 350 rows 350
Language C++ problems 403 rows 416
Language C sharp problems 322 rows 342
Language Go problems 496 rows 503
Language JavaScript problems 269 rows 301
Language Java problems 470 rows 512
Language Lua problems 335 rows 339
Language Kotlin problems 435 rows 435
Language Ruby problems 418 rows 444
Total 4894 done 565 skip 4329 failed 0 rows 4263
``` |
jondurbin/rosettacode-10 | 2023-06-21T07:37:59.000Z | [
"license:gfdl",
"region:us"
] | jondurbin | null | null | null | 2 | 7 | ---
license: gfdl
---
Instruction/response formatted rosettacode.org tasks/solutions for:
- c++
- c
- c#
- go
- java
- javascript
- kotlin
- lua
- python
- ruby |
MikhailT/cmu-arctic | 2023-06-23T09:07:03.000Z | [
"size_categories:10K<n<100K",
"language:en",
"license:mit",
"region:us"
] | MikhailT | null | null | null | 0 | 7 | ---
license: mit
language:
- en
pretty_name: CMU Arctic
dataset_info:
features:
- name: speaker
dtype: string
- name: file
dtype: string
- name: text
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 16000
splits:
- name: aew
num_bytes: 124532319
num_examples: 1132
- name: ahw
num_bytes: 65802249
num_examples: 593
- name: aup
num_bytes: 55771949
num_examples: 593
- name: awb
num_bytes: 106781643
num_examples: 1138
- name: axb
num_bytes: 67641455
num_examples: 593
- name: bdl
num_bytes: 97845496
num_examples: 1131
- name: clb
num_bytes: 123294691
num_examples: 1132
- name: eey
num_bytes: 55460671
num_examples: 592
- name: fem
num_bytes: 57115651
num_examples: 593
- name: gka
num_bytes: 64208369
num_examples: 592
- name: jmk
num_bytes: 103401609
num_examples: 1114
- name: ksp
num_bytes: 114080099
num_examples: 1132
- name: ljm
num_bytes: 51847413
num_examples: 593
- name: lnh
num_bytes: 120446549
num_examples: 1132
- name: rms
num_bytes: 127163811
num_examples: 1132
- name: rxr
num_bytes: 83873386
num_examples: 666
- name: slp
num_bytes: 72360869
num_examples: 593
- name: slt
num_bytes: 108798337
num_examples: 1132
download_size: 1577150976
dataset_size: 1600426566
size_categories:
- 10K<n<100K
---
# CMU Arctic Dataset |
FreedomIntelligence/alpaca-gpt4-arabic | 2023-08-06T08:07:51.000Z | [
"license:apache-2.0",
"region:us"
] | FreedomIntelligence | null | null | null | 1 | 7 | ---
license: apache-2.0
---
The dataset is used in the research related to [MultilingualSIFT](https://github.com/FreedomIntelligence/MultilingualSIFT). |
aisyahhrazak/ms-rotikaya | 2023-06-29T03:54:09.000Z | [
"language:ms",
"region:us"
] | aisyahhrazak | null | null | null | 0 | 7 | ---
language:
- ms
---
Roti Kaya articles scraped on 27.6.2023 |
Norod78/il-license-plates | 2023-06-28T14:11:25.000Z | [
"task_categories:object-detection",
"size_categories:n<1K",
"license:mit",
"region:us"
] | Norod78 | null | null | null | 0 | 7 | ---
license: mit
size_categories:
- n<1K
task_categories:
- object-detection
---
Images of Israeli License Plates with annotation for Plate-Object detection |
ai4privacy/pii-masking-43k | 2023-06-28T17:45:58.000Z | [
"size_categories:10K<n<100K",
"language:en",
"legal",
"business",
"psychology",
"privacy",
"doi:10.57967/hf/0824",
"region:us"
] | ai4privacy | null | null | null | 8 | 7 | ---
language:
- en
tags:
- legal
- business
- psychology
- privacy
size_categories:
- 10K<n<100K
---
# Purpose and Features
The purpose of the model and dataset is to remove personally identifiable information (PII) from text, especially in the context of AI assistants and LLMs.
The model is a fine-tuned version of "Distilled BERT", a smaller and faster version of BERT. It was adapted for the task of token classification based on what is, to our knowledge, the largest open-source PII masking dataset, which we are releasing simultaneously. The model size is 62 million parameters. The original encoding of the parameters yields a model size of 268 MB, which is compressed to 43 MB after parameter quantization. The models are available in PyTorch, TensorFlow, and TensorFlow.js.
The dataset is composed of ~43’000 observations. Each row starts with a natural language sentence that includes placeholders for PII and could plausibly be written to an AI assistant. The placeholders are then filled in with mocked personal information and tokenized with the BERT tokenizer. We label the tokens that correspond to PII, serving as the ground truth to train our model.
The dataset covers a range of contexts in which PII can appear. The sentences span 54 sensitive data types (~111 token classes), targeting 125 discussion subjects / use cases split across business, psychology and legal fields, and 5 interaction styles (e.g. casual conversation vs formal document).
Key facts:
- Currently 5.6m tokens with 43k PII examples.
- Scaling to 100k examples
- Human-in-the-loop validated
- Synthetic data generated using proprietary algorithms
- Adapted from DistilBertForTokenClassification
- Framework PyTorch
- 8 bit quantization
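As an illustration of the masking step such a token classifier feeds into (the label names and the `[REDACTED]` convention here are assumptions for the sketch, not the dataset's actual tag set):

```python
def mask_pii(tokens, labels):
    """Replace each contiguous BIO-tagged PII span with one [REDACTED] marker."""
    out, in_span = [], False
    for tok, lab in zip(tokens, labels):
        if lab == "O":
            out.append(tok)
            in_span = False
        elif lab.startswith("B-") or not in_span:
            out.append("[REDACTED]")  # open a new masked span
            in_span = True
        # "I-" tokens inside an already-open span are dropped
    return " ".join(out)

tokens = ["Email", "me", "at", "jane", "@", "example.com", "please"]
labels = ["O", "O", "O", "B-EMAIL", "I-EMAIL", "I-EMAIL", "O"]
print(mask_pii(tokens, labels))  # Email me at [REDACTED] please
```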
# Performance evaluation
| Test Precision | Test Recall | Test Accuracy |
|:-:|:-:|:-:|
| 0.998636 | 0.998945 | 0.994621 |
Training/Test Set split:
- 4300 Testing Examples (10%)
- 38700 Train Examples
# Community Engagement:
Newsletter & updates: www.Ai4privacy.com
- Looking for ML engineers, developers, beta-testers, human in the loop validators (all languages)
- Integrations with already existing open source solutions
# Roadmap and Future Development
- Multilingual
- Extended integrations
- Continuously increase the training set
- Further optimisation to the model to reduce size and increase generalisability
- Next released major update is planned for the 14th of July (subscribe to newsletter for updates)
# Use Cases and Applications
**Chatbots**: Incorporating a PII masking model into chatbot systems can ensure the privacy and security of user conversations by automatically redacting sensitive information such as names, addresses, phone numbers, and email addresses.
**Customer Support Systems**: When interacting with customers through support tickets or live chats, masking PII can help protect sensitive customer data, enabling support agents to handle inquiries without the risk of exposing personal information.
**Email Filtering**: Email providers can utilize a PII masking model to automatically detect and redact PII from incoming and outgoing emails, reducing the chances of accidental disclosure of sensitive information.
**Data Anonymization**: Organizations dealing with large datasets containing PII, such as medical or financial records, can leverage a PII masking model to anonymize the data before sharing it for research, analysis, or collaboration purposes.
**Social Media Platforms**: Integrating PII masking capabilities into social media platforms can help users protect their personal information from unauthorized access, ensuring a safer online environment.
**Content Moderation**: PII masking can assist content moderation systems in automatically detecting and blurring or redacting sensitive information in user-generated content, preventing the accidental sharing of personal details.
**Online Forms**: Web applications that collect user data through online forms, such as registration forms or surveys, can employ a PII masking model to anonymize or mask the collected information in real-time, enhancing privacy and data protection.
**Collaborative Document Editing**: Collaboration platforms and document editing tools can use a PII masking model to automatically mask or redact sensitive information when multiple users are working on shared documents.
**Research and Data Sharing**: Researchers and institutions can leverage a PII masking model to ensure privacy and confidentiality when sharing datasets for collaboration, analysis, or publication purposes, reducing the risk of data breaches or identity theft.
**Content Generation**: Content generation systems, such as article generators or language models, can benefit from PII masking to automatically mask or generate fictional PII when creating sample texts or examples, safeguarding the privacy of individuals.
(...and whatever else your creative mind can think of)
# Support and Maintenance
AI4Privacy is a project affiliated with [AISuisse SA](https://www.aisuisse.com/). |
TrainingDataPro/selfie-and-video-on-back-camera | 2023-09-14T16:55:55.000Z | [
"task_categories:image-classification",
"task_categories:video-classification",
"task_categories:image-to-image",
"language:en",
"license:cc-by-nc-nd-4.0",
"legal",
"region:us"
] | TrainingDataPro | The dataset consists of selfies and video of real people made on a back camera
of the smartphone. The dataset solves tasks in the field of anti-spoofing and
it is useful for business and safety systems. | @InProceedings{huggingface:dataset,
title = {selfie-and-video-on-back-camera},
author = {TrainingDataPro},
year = {2023}
} | null | 1 | 7 | ---
license: cc-by-nc-nd-4.0
task_categories:
- image-classification
- video-classification
- image-to-image
language:
- en
tags:
- legal
dataset_info:
features:
- name: photo
dtype: image
- name: video
dtype: string
- name: phone
dtype: string
- name: gender
dtype: string
- name: age
dtype: int8
- name: country
dtype: string
splits:
- name: train
num_bytes: 21414921
num_examples: 10
download_size: 239042378
dataset_size: 21414921
---
# Selfie and Video on Back Camera Dataset
The dataset consists of selfies and videos of real people made on the back camera of a smartphone. The dataset solves tasks in the field of anti-spoofing and is useful for business and safety systems.
### The dataset includes 2 different types of files:
- **Photo** - a selfie of a person from a mobile phone, the person is depicted alone on it, the face is clearly visible.
- **Video** - filmed on the front camera, on which a person moves his/her head left, right, up and down. Duration of the video is from 10 to 20 seconds.
.png?generation=1688132311367523&alt=media)
# Get the dataset
### This is just an example of the data
Leave a request on [**https://trainingdata.pro/data-market**](https://trainingdata.pro/data-market?utm_source=huggingface&utm_medium=cpc&utm_campaign=selfie-and-video-on-back-camera) to discuss your requirements, learn about the price and buy the dataset.
# Content
### Phone models in the datset:
- iPhone 13 mini
- iPhone 11
- iPhone XR
- iPhone 8 plus
- Samsung A80
- Samsung galaxy A51 5G
- Samsung A32
- Samsung Galaxy S10 5G
### File with the extension .csv
includes the following information for each media file:
- **photo**: link to access the selfie,
- **video**: link to access the video,
- **phone**: the device used to capture selfie and video,
- **gender**: gender of a person,
- **age**: age of the person,
- **country**: country of the person
## [**TrainingData**](https://trainingdata.pro/data-market?utm_source=huggingface&utm_medium=cpc&utm_campaign=selfie-and-video-on-back-camera) provides high-quality data annotation tailored to your needs
More datasets in TrainingData's Kaggle account: **https://www.kaggle.com/trainingdatapro/datasets**
TrainingData's GitHub: **https://github.com/Trainingdata-datamarket/TrainingData_All_datasets** |
TrainingDataPro/printed-2d-masks-with-holes-for-eyes-attacks | 2023-09-14T16:56:28.000Z | [
"task_categories:image-classification",
"task_categories:image-to-image",
"task_categories:video-classification",
"language:en",
"license:cc-by-nc-nd-4.0",
"legal",
"region:us"
] | TrainingDataPro | The dataset consists of selfies of people and videos of them wearing a printed
2d mask with their face. The dataset solves tasks in the field of anti-spoofing
and it is useful for business and safety systems.
The dataset includes: **attacks** - videos of people wearing printed portraits
of themselves with cut-out eyes. | @InProceedings{huggingface:dataset,
title = {printed-2d-masks-with-holes-for-eyes-attacks},
author = {TrainingDataPro},
year = {2023}
} | null | 1 | 7 | ---
license: cc-by-nc-nd-4.0
task_categories:
- image-classification
- image-to-image
- video-classification
language:
- en
tags:
- legal
dataset_info:
features:
- name: photo
dtype: image
- name: attack
dtype: string
- name: phone
dtype: string
- name: gender
dtype: string
- name: age
dtype: int8
- name: country
dtype: string
splits:
- name: train
num_bytes: 97360072
num_examples: 15
download_size: 502647114
dataset_size: 97360072
---
# Printed 2D Masks with Holes for Eyes Attacks Dataset
The dataset consists of selfies of people and videos of them wearing a printed 2D mask of their face. The dataset solves tasks in the field of anti-spoofing and is useful for business and safety systems.
The dataset includes: **attacks** - videos of people wearing printed portraits of themselves with cut-out eyes.
### The dataset includes 2 different types of files:
- **Photo** - a selfie of a person from a mobile phone, the person is depicted alone on it, the face is clearly visible.
- **Video** - filmed on the front camera, on which a person moves his/her head left, right, up and down. Duration of the video is from 10 to 20 seconds. On the video, a person is wearing a printed 2d mask made from the corresponding photo from the "photo" folder.
.png?generation=1688134701103179&alt=media)
# Get the dataset
### This is just an example of the data
Leave a request on [**https://trainingdata.pro/data-market**](https://trainingdata.pro/data-market?utm_source=huggingface&utm_medium=cpc&utm_campaign=printed-2d-masks-with-holes-for-eyes-attacks) to discuss your requirements, learn about the price and buy the dataset.
# Content
### File with the extension .csv
includes the following information for each media file:
- **photo**: link to access the photo,
- **attack**: link to access the attack video,
- **phone**: the device used to capture the attack video,
- **computer**: the device used to play the video,
- **gender**: gender of a person in the video,
- **age**: age of the person in the video,
- **country**: country of the person
## [**TrainingData**](https://trainingdata.pro/data-market?utm_source=huggingface&utm_medium=cpc&utm_campaign=printed-2d-masks-with-holes-for-eyes-attacks) provides high-quality data annotation tailored to your needs
More datasets in TrainingData's Kaggle account: **https://www.kaggle.com/trainingdatapro/datasets**
TrainingData's GitHub: **https://github.com/Trainingdata-datamarket/TrainingData_All_datasets** |
knowrohit07/know_logic | 2023-06-30T21:32:18.000Z | [
"license:other",
"region:us"
] | knowrohit07 | null | null | null | 3 | 7 | ---
license: other
---
|
Peppertuna/ChartQADatasetV2 | 2023-09-11T02:26:42.000Z | [
"region:us"
] | Peppertuna | ChartQA dataset demo | null | null | 2 | 7 | Entry not found |
aisyahhrazak/ms-majalahsains | 2023-07-03T00:47:04.000Z | [
"language:ms",
"region:us"
] | aisyahhrazak | null | null | null | 0 | 7 | ---
language:
- ms
---
About
- Scraped articles from https://www.majalahsains.com/
- Data scraped on 1.7.2023
Dataset Format
```
{"url": "...", "headline": "...", "content": [...,...], "tags": [......,.....,]}
``` |
aisyahhrazak/ms-melakahariini | 2023-07-03T00:47:31.000Z | [
"language:ms",
"region:us"
] | aisyahhrazak | null | null | null | 0 | 7 | ---
language:
- ms
---
About
- Scraped articles from https://www.melakahariini.my/
- Data scraped on 1.7.2023
Dataset Format
```
{"url": "...", "headline": "...", "content": [...,...]}
``` |
causal-lm/instructions-ko | 2023-07-24T05:54:16.000Z | [
"language:ko",
"region:us"
] | causal-lm | null | null | null | 1 | 7 | ---
language: ko
dataset_info:
features:
- name: instruction
dtype: string
- name: input
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 71817534.51580903
num_examples: 112104
- name: validation
num_bytes: 8026314.24732017
num_examples: 12429
download_size: 43862664
dataset_size: 79843848.7631292
---
# Dataset Card for "instructions-ko"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
aisyahhrazak/ms-malaysiakini-my | 2023-07-03T00:49:22.000Z | [
"language:ms",
"region:us"
] | aisyahhrazak | null | null | null | 0 | 7 | ---
language:
- ms
---
About
- Scraped articles from https://www.malaysiakini.com/my
- Not including other domains (page.malaysiakini/newslab.malaysiakini)
- Data scraped on 2.7.2023
Dataset Format
```
{"url": "...", "headline": "...", "content": [...,...]}
``` |
jjzha/skillspan | 2023-09-07T12:12:10.000Z | [
"language:en",
"license:cc-by-4.0",
"region:us"
] | jjzha | null | null | null | 0 | 7 | ---
license: cc-by-4.0
language: en
---
This is the SkillSpan dataset created by:
```
@inproceedings{zhang-etal-2022-skillspan,
title = "{S}kill{S}pan: Hard and Soft Skill Extraction from {E}nglish Job Postings",
author = "Zhang, Mike and
Jensen, Kristian and
Sonniks, Sif and
Plank, Barbara",
booktitle = "Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
month = jul,
year = "2022",
address = "Seattle, United States",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2022.naacl-main.366",
doi = "10.18653/v1/2022.naacl-main.366",
pages = "4962--4984"
}
```
There are document delimiters indicated by `idx`.
Number of samples (sentences):
- train: 4800
- dev: 3174
- test: 3569
Sources:
- Stackoverflow (tech)
- STAR (house)
Type of tags:
- Generic BIO tags with keys `tags_skill` and `tags_knowledge`
Sample:
```
{
"idx": 53,
"tokens": ["Drive", "our", "IT", "compliance", "agenda", "and", "develop", "our", "processes"],
"tags_skill": ["B", "I", "I", "I", "I", "O", "B", "I", "I"],
"tags_knowledge": ["O", "O", "O", "O", "O", "O", "O", "O", "O"],
"source": "house"
}
``` |
Beluuuuuuga/Japanese-Instruction-Linux-Command-169 | 2023-07-17T11:06:56.000Z | [
"task_categories:question-answering",
"size_categories:n<1K",
"language:ja",
"license:cc-by-nc-4.0",
"region:us"
] | Beluuuuuuga | null | null | null | 2 | 7 | ---
license: cc-by-nc-4.0
task_categories:
- question-answering
language:
- ja
size_categories:
- n<1K
--- |
rudraml/fma | 2023-07-14T23:31:34.000Z | [
"license:openrail",
"region:us"
] | rudraml | FMA is a dataset for music analysis. It includes song title, album, artist, genres; spectrograms, metadata, and features. |  | null | 0 | 7 | ---
license: openrail
--- |
FunDialogues/sports-basketball-coach | 2023-08-28T23:39:54.000Z | [
"task_categories:question-answering",
"task_categories:conversational",
"size_categories:n<1K",
"language:en",
"license:apache-2.0",
"fictitious dialogues",
"prototyping",
"sports",
"region:us"
] | FunDialogues | null | null | null | 1 | 7 | ---
license: apache-2.0
task_categories:
- question-answering
- conversational
language:
- en
tags:
- fictitious dialogues
- prototyping
- sports
pretty_name: 'sports-basketball-coach'
size_categories:
- n<1K
---
# fun dialogues
A library of fictitious dialogues that can be used to train language models or augment prompts for prototyping and educational purposes. Fun dialogues currently come in json and csv format for easy ingestion or conversion to popular data structures. Dialogues span various topics such as sports, retail, academia, healthcare, and more. The library also includes basic tooling for loading dialogues and will include quick chatbot prototyping functionality in the future.
Visit the Project Repo: https://github.com/eduand-alvarez/fun-dialogues/
# This Dialogue
Comprised of fictitious examples of dialogues between a basketball coach and the players on the court during a game. Check out the example below:
```
"id": 1,
"description": "Motivating the team",
"dialogue": "Coach: Let's give it our all, team! We've trained hard for this game, and I know we can come out on top if we work together."
```
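For prototyping, one might split the `dialogue` string into speaker-labelled turns; the `Speaker:` prefix convention below is inferred from the example above, and the regex is a naive sketch that can misfire on capitalized words followed by a colon:

```python
import re

def split_turns(dialogue):
    """Split a 'Speaker: text Speaker: text' string into (speaker, text) pairs."""
    # re.split with a capturing group keeps the speaker names at odd indices
    parts = re.split(r"(\b[A-Z][a-z]+):\s", dialogue)
    turns = []
    for i in range(1, len(parts) - 1, 2):
        turns.append((parts[i], parts[i + 1].strip()))
    return turns

d = "Coach: Let's give it our all, team! Player: We're ready, coach!"
print(split_turns(d))
```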
# How to Load Dialogues
Loading dialogues can be accomplished using the fun dialogues library or Hugging Face datasets library.
## Load using fun dialogues
1. Install fun dialogues package
`pip install fundialogues`
2. Use loader utility to load dataset as pandas dataframe. Further processing might be required for use.
```
from fundialogues import dialoader
# load as pandas dataframe
bball_coach = dialoader("FunDialogues/sports-basketball-coach")
```
## Loading using Hugging Face datasets
1. Install datasets package
2. Load using datasets
```
from datasets import load_dataset
dataset = load_dataset("FunDialogues/sports-basketball-coach")
```
## How to Contribute
If you want to contribute to this project and make it better, your help is very welcome. Contributing is also a great way to learn more about social coding on Github, new technologies and their ecosystems, and how to make constructive, helpful bug reports, feature requests, and the noblest of all contributions: a good, clean pull request.
### Contributing your own Lifecycle Solution
If you want to contribute to an existing dialogue or add a new dialogue, please open an issue and I will follow up with you ASAP!
### Implementing Patches and Bug Fixes
- Create a personal fork of the project on Github.
- Clone the fork on your local machine. Your remote repo on Github is called origin.
- Add the original repository as a remote called upstream.
- If you created your fork a while ago be sure to pull upstream changes into your local repository.
- Create a new branch to work on! Branch from develop if it exists, else from master.
- Implement/fix your feature, comment your code.
- Follow the code style of the project, including indentation.
- If the component has tests run them!
- Write or adapt tests as needed.
- Add or change the documentation as needed.
- Squash your commits into a single commit with git's interactive rebase. Create a new branch if necessary.
- Push your branch to your fork on Github, the remote origin.
- From your fork open a pull request in the correct branch. Target the project's develop branch if there is one, else go for master!
If the maintainer requests further changes just push them to your branch. The PR will be updated automatically.
Once the pull request is approved and merged you can pull the changes from upstream to your local repo and delete your extra branch(es).
And last but not least: Always write your commit messages in the present tense. Your commit message should describe what the commit, when applied, does to the code – not what you did to the code.
# Disclaimer
The dialogues contained in this repository are provided for experimental purposes only. It is important to note that these dialogues are assumed to be original work by a human and are entirely fictitious, despite the possibility of some examples including factually correct information. The primary intention behind these dialogues is to serve as a tool for language modeling experimentation and should not be used for designing real-world products beyond non-production prototyping.
Please be aware that the utilization of fictitious data in these datasets may increase the likelihood of language model artifacts, such as hallucinations or unrealistic responses. Therefore, it is essential to exercise caution and discretion when employing these datasets for any purpose.
It is crucial to emphasize that none of the scenarios described in the fun dialogues dataset should be relied upon to provide advice or guidance to humans. These scenarios are purely fictitious and are intended solely for demonstration purposes. Any resemblance to real-world situations or individuals is entirely coincidental.
The responsibility for the usage and application of these datasets rests solely with the individual or entity employing them. By accessing and utilizing these dialogues and all contents of the repository, you acknowledge that you have read and understood this disclaimer, and you agree to use them at your own discretion and risk. |
dvinagre/euskera-speaker-embeddings | 2023-07-19T13:26:54.000Z | [
"region:us"
] | dvinagre | null | null | null | 0 | 7 | ---
dataset_info:
features:
- name: speaker_embeddings
sequence: float64
splits:
- name: train
num_bytes: 40286600
num_examples: 9826
download_size: 33659727
dataset_size: 40286600
---
# Dataset Card for "euskera-speaker-embeddings"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
hammer888/captcha-data | 2023-07-19T17:10:50.000Z | [
"region:us"
] | hammer888 | null | null | null | 1 | 7 | Entry not found |
RainerGa/openassistant-guanaco-de | 2023-07-22T08:33:23.000Z | [
"language:de",
"license:apache-2.0",
"region:us"
] | RainerGa | null | null | null | 1 | 7 | ---
license: apache-2.0
language:
- de
---
This is a copy of the work of OpenAssistant and the user timdettmers.
The goal of this training data is fine-tuning in the German language only.
The file openassistant_origfile_with_lang_informations.txt contains the full training data. Every line starts with language information, so you can easily filter with:
```
cat openassistant_origfile_with_lang_informations.txt | grep ^de | sed s/^de,//g > openassistant_best_replies_de_train.jsonl
```
Replace ^de with the language you are interested in.
For language detection, the Python package "langdetect" was used. The simple Python script "detect_language.py" is part of this dataset but is NOT needed for fine-tuning a model!
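The grep/sed filtering described above can also be sketched in pure Python: keep only lines starting with the desired language code, then strip the `de,` prefix (the sample lines below are illustrative, not taken from the dataset):

```python
# Python equivalent of: grep ^de | sed s/^de,//g
def filter_language(lines, lang="de"):
    prefix = lang + ","
    # Keep matching lines and drop the "<lang>," prefix from each.
    return [line[len(prefix):] for line in lines if line.startswith(prefix)]

sample = [
    'de,{"text": "Hallo, wie geht es dir?"}',
    'en,{"text": "Hello, how are you?"}',
    'de,{"text": "Guten Tag!"}',
]
german_lines = filter_language(sample, lang="de")
```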
### Languages
OpenAssistant Conversations only German language:
**Languages with over 1000 messages**
- German: 5279 |
wisenut-nlp-team/squad_kor_v1 | 2023-08-03T04:45:50.000Z | [
"task_categories:question-answering",
"task_ids:extractive-qa",
"annotations_creators:crowdsourced",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:ko",
"license:cc-by-nd-4.0",
"arxiv:1909.07005",
"region:us"
] | wisenut-nlp-team | KorQuAD 1.0 is a large-scale Korean dataset for machine reading comprehension task consisting of human generated questions for Wikipedia articles. We benchmark the data collecting process of SQuADv1.0 and crowdsourced 70,000+ question-answer pairs. 1,637 articles and 70,079 pairs of question answers were collected. 1,420 articles are used for the training set, 140 for the dev set, and 77 for the test set. 60,407 question-answer pairs are for the training set, 5,774 for the dev set, and 3,898 for the test set. | @article{lim2019korquad1,
title={Korquad1. 0: Korean qa dataset for machine reading comprehension},
author={Lim, Seungyoung and Kim, Myungji and Lee, Jooyoul},
journal={arXiv preprint arXiv:1909.07005},
year={2019}
} | null | 2 | 7 | ---
annotations_creators:
- crowdsourced
language_creators:
- found
language:
- ko
license:
- cc-by-nd-4.0
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- question-answering
task_ids:
- extractive-qa
paperswithcode_id: korquad
pretty_name: The Korean Question Answering Dataset
dataset_info:
features:
- name: id
dtype: string
- name: title
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: text
dtype: string
- name: answer_start
dtype: int32
config_name: squad_kor_v1_512
splits:
- name: train
num_examples: 60407
- name: validation
num_examples: 5774
viewer: true
---
# Dataset Card for KorQuAD v1.0 512 Tokens
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://korquad.github.io/KorQuad%201.0/
- **Repository:** https://github.com/korquad/korquad.github.io/tree/master/dataset
- **Paper:** https://arxiv.org/abs/1909.07005
### Dataset Summary
KorQuAD 1.0 is a large-scale Korean dataset for the machine reading comprehension task, consisting of human-generated questions for Wikipedia articles. It benchmarks the data-collection process of SQuAD v1.0 and crowdsourced 70,000+ question-answer pairs: 1,637 articles and 70,079 question-answer pairs were collected in total. 1,420 articles are used for the training set, 140 for the dev set, and 77 for the test set, with 60,407 question-answer pairs for training, 5,774 for dev, and 3,898 for test.
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] |
ritterdennis/topex-printer | 2023-07-24T20:28:46.000Z | [
"task_categories:image-classification",
"size_categories:1K<n<10K",
"license:cc-by-nc-3.0",
"region:us"
] | ritterdennis | null | null | null | 1 | 7 | ---
license: cc-by-nc-3.0
task_categories:
- image-classification
size_categories:
- 1K<n<10K
viewer: false
---
## Dataset Description
We introduce a challenging dataset for identifying machine parts from real photos,
featuring images of 102 parts from a labeling machine. This dataset was developed
with the complexity of real-world scenarios in mind and highlights the difficulty
of distinguishing between closely related classes, providing an opportunity to
improve domain adaptation methods. The dataset includes 3,264 CAD-rendered
images (32 per part) and 6,146 real images (6 to 137 per part) for unsupervised
domain adaptation (UDA) and testing. Rendered images were produced using a
Blender-based pipeline with environment maps, lights, and virtual cameras
arranged to ensure varied mesh orientations. We also use material metadata and
apply one of 21 texture materials to the objects. All images are rendered at
512x512 pixels. The real photo set consists of raw images captured under varying
conditions using different cameras, including varied lighting, backgrounds, and
environmental factors.
### Citation Information
|
thomasavare/waste-classification-audio-helsinki | 2023-08-30T00:25:38.000Z | [
"language:en",
"language:it",
"region:us"
] | thomasavare | null | null | null | 0 | 7 | ---
language:
- en
- it
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: audio
dtype: audio
- name: speaker
dtype: string
- name: transcription
dtype: string
- name: translation
dtype: string
- name: Class
dtype: string
- name: Class_index
dtype: float64
splits:
- name: train
num_bytes: 380069293.0
num_examples: 500
download_size: 287632439
dataset_size: 380069293.0
---
# Dataset Card for "waste-classification-audio"
English-to-Italian translation was made with the [helsinki-NLP](https://huggingface.co/Helsinki-NLP/opus-mt-en-it) translation model.
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
tiwes/apa_de | 2023-07-27T14:42:10.000Z | [
"region:us"
] | tiwes | null | null | null | 0 | 7 | Entry not found |
HydraLM/GPTeacher_codegen_list_dict | 2023-07-27T20:07:35.000Z | [
"region:us"
] | HydraLM | null | null | null | 1 | 7 | ---
dataset_info:
features:
- name: conversations
list:
- name: input
dtype: string
- name: instruction
dtype: string
- name: response
dtype: string
- name: conversation_id
dtype: int64
splits:
- name: train
num_bytes: 1909869
num_examples: 4534
download_size: 901695
dataset_size: 1909869
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "GPTeacher_codegen_list_dict"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
msaad02/formatted-ss-cleaned-brockport-qa | 2023-07-29T15:04:17.000Z | [
"region:us"
] | msaad02 | null | null | null | 0 | 7 | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 2897094
num_examples: 7098
download_size: 828075
dataset_size: 2897094
---
# Dataset Card for "formatted-ss-cleaned-brockport-qa"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
ammarnasr/the-stack-rust-clean | 2023-08-14T21:21:30.000Z | [
"task_categories:text-generation",
"size_categories:1M<n<10M",
"language:code",
"license:openrail",
"code",
"region:us"
] | ammarnasr | null | null | null | 0 | 7 | ---
license: openrail
dataset_info:
features:
- name: hexsha
dtype: string
- name: size
dtype: int64
- name: content
dtype: string
- name: avg_line_length
dtype: float64
- name: max_line_length
dtype: int64
- name: alphanum_fraction
dtype: float64
splits:
- name: train
num_bytes: 3582248477.9086223
num_examples: 806789
- name: test
num_bytes: 394048264.9973618
num_examples: 88747
- name: valid
num_bytes: 3982797.09401595
num_examples: 897
download_size: 1323156008
dataset_size: 3980279540
task_categories:
- text-generation
language:
- code
tags:
- code
pretty_name: TheStack-Rust
size_categories:
- 1M<n<10M
---
## Dataset 1: TheStack - Rust - Cleaned
**Description**: This dataset is drawn from TheStack Corpus, an open-source code dataset with over 3TB of GitHub data covering 48 programming languages. We selected a small portion of this dataset to optimize smaller language models for Rust, a popular statically typed language.
**Target Language**: Rust
**Dataset Size**:
- Training: 900,000 files
- Validation: 50,000 files
- Test: 50,000 files
**Preprocessing**:
1. Selected Rust as the target language due to its popularity on GitHub.
2. Filtered out files with average line length > 100 characters, maximum line length > 1000 characters, and alphabet ratio < 25%.
3. Split files into 90% training, 5% validation, and 5% test sets.
**Tokenizer**: Byte Pair Encoding (BPE) tokenizer with tab and whitespace tokens. GPT-2 vocabulary extended with special tokens.
**Training Sequences**: Sequences constructed by joining training data text to reach a context length of 2048 tokens (1024 tokens for full fine-tuning). |
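The sequence construction described above can be sketched as follows. This is a hedged, minimal illustration: whitespace splitting stands in for the actual BPE tokenizer, and `pack_sequences` and its arguments are hypothetical names, not part of the dataset's tooling.

```python
# Minimal sketch of constructing fixed-length training sequences by
# joining text, as described above. Whitespace splitting stands in for
# the BPE tokenizer; context_length mirrors the 2048-token setting.
def pack_sequences(texts, context_length=2048):
    tokens = []
    for text in texts:
        tokens.extend(text.split())  # stand-in for BPE tokenization
    # Emit only full-length sequences; the trailing remainder is dropped.
    return [
        tokens[i:i + context_length]
        for i in range(0, len(tokens) - context_length + 1, context_length)
    ]

# Each sample text splits into 5 pseudo-tokens; 100 texts -> 500 tokens.
sequences = pack_sequences(['fn main() { println!("hi"); }'] * 100,
                           context_length=16)
```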
DylanJHJ/pds2023 | 2023-08-08T15:45:29.000Z | [
"license:apache-2.0",
"region:us"
] | DylanJHJ | null | null | null | 0 | 7 | ---
license: apache-2.0
---
|
HydraLM/partitioned_v3_standardized_04 | 2023-08-01T18:00:11.000Z | [
"region:us"
] | HydraLM | null | null | null | 0 | 7 | ---
dataset_info:
features:
- name: message
dtype: string
- name: message_type
dtype: string
- name: message_id
dtype: int64
- name: conversation_id
dtype: int64
- name: dataset_id
dtype: string
- name: unique_id
dtype: string
splits:
- name: train
num_bytes: 23349821.964955334
num_examples: 43424
download_size: 19455661
dataset_size: 23349821.964955334
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "partitioned_v3_standardized_04"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
HydraLM/partitioned_v3_standardized_05 | 2023-08-01T18:00:21.000Z | [
"region:us"
] | HydraLM | null | null | null | 0 | 7 | ---
dataset_info:
features:
- name: message
dtype: string
- name: message_type
dtype: string
- name: message_id
dtype: int64
- name: conversation_id
dtype: int64
- name: dataset_id
dtype: string
- name: unique_id
dtype: string
splits:
- name: train
num_bytes: 10155860.52533418
num_examples: 18887
download_size: 3249498
dataset_size: 10155860.52533418
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "partitioned_v3_standardized_05"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
HydraLM/partitioned_v3_standardized_06 | 2023-08-01T18:00:33.000Z | [
"region:us"
] | HydraLM | null | null | null | 0 | 7 | ---
dataset_info:
features:
- name: message
dtype: string
- name: message_type
dtype: string
- name: message_id
dtype: int64
- name: conversation_id
dtype: int64
- name: dataset_id
dtype: string
- name: unique_id
dtype: string
splits:
- name: train
num_bytes: 9838607.509505982
num_examples: 18297
download_size: 9055730
dataset_size: 9838607.509505982
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "partitioned_v3_standardized_06"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
HydraLM/partitioned_v3_standardized_07 | 2023-08-01T18:00:48.000Z | [
"region:us"
] | HydraLM | null | null | null | 0 | 7 | ---
dataset_info:
features:
- name: message
dtype: string
- name: message_type
dtype: string
- name: message_id
dtype: int64
- name: conversation_id
dtype: int64
- name: dataset_id
dtype: string
- name: unique_id
dtype: string
splits:
- name: train
num_bytes: 38216083.199874274
num_examples: 71071
download_size: 20257791
dataset_size: 38216083.199874274
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "partitioned_v3_standardized_07"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
HydraLM/partitioned_v3_standardized_08 | 2023-08-01T18:00:59.000Z | [
"region:us"
] | HydraLM | null | null | null | 0 | 7 | ---
dataset_info:
features:
- name: message
dtype: string
- name: message_type
dtype: string
- name: message_id
dtype: int64
- name: conversation_id
dtype: int64
- name: dataset_id
dtype: string
- name: unique_id
dtype: string
splits:
- name: train
num_bytes: 30473496.462738317
num_examples: 56672
download_size: 4432781
dataset_size: 30473496.462738317
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "partitioned_v3_standardized_08"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
HydraLM/partitioned_v3_standardized_09 | 2023-08-01T18:01:12.000Z | [
"region:us"
] | HydraLM | null | null | null | 0 | 7 | ---
dataset_info:
features:
- name: message
dtype: string
- name: message_type
dtype: string
- name: message_id
dtype: int64
- name: conversation_id
dtype: int64
- name: dataset_id
dtype: string
- name: unique_id
dtype: string
splits:
- name: train
num_bytes: 15952987.243374513
num_examples: 29668
download_size: 18032321
dataset_size: 15952987.243374513
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "partitioned_v3_standardized_09"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
HydraLM/partitioned_v3_standardized_010 | 2023-08-01T18:01:22.000Z | [
"region:us"
] | HydraLM | null | null | null | 0 | 7 | ---
dataset_info:
features:
- name: message
dtype: string
- name: message_type
dtype: string
- name: message_id
dtype: int64
- name: conversation_id
dtype: int64
- name: dataset_id
dtype: string
- name: unique_id
dtype: string
splits:
- name: train
num_bytes: 8385158.523432054
num_examples: 15594
download_size: 9398209
dataset_size: 8385158.523432054
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "partitioned_v3_standardized_010"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
HydraLM/partitioned_v3_standardized_011 | 2023-08-01T18:01:32.000Z | [
"region:us"
] | HydraLM | null | null | null | 0 | 7 | ---
dataset_info:
features:
- name: message
dtype: string
- name: message_type
dtype: string
- name: message_id
dtype: int64
- name: conversation_id
dtype: int64
- name: dataset_id
dtype: string
- name: unique_id
dtype: string
splits:
- name: train
num_bytes: 11869026.810806446
num_examples: 22073
download_size: 8319441
dataset_size: 11869026.810806446
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "partitioned_v3_standardized_011"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
HydraLM/partitioned_v3_standardized_012 | 2023-08-01T18:01:44.000Z | [
"region:us"
] | HydraLM | null | null | null | 0 | 7 | ---
dataset_info:
features:
- name: message
dtype: string
- name: message_type
dtype: string
- name: message_id
dtype: int64
- name: conversation_id
dtype: int64
- name: dataset_id
dtype: string
- name: unique_id
dtype: string
splits:
- name: train
num_bytes: 19913272.771467183
num_examples: 37033
download_size: 16406844
dataset_size: 19913272.771467183
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "partitioned_v3_standardized_012"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
HydraLM/partitioned_v3_standardized_013 | 2023-08-01T18:01:55.000Z | [
"region:us"
] | HydraLM | null | null | null | 0 | 7 | ---
dataset_info:
features:
- name: message
dtype: string
- name: message_type
dtype: string
- name: message_id
dtype: int64
- name: conversation_id
dtype: int64
- name: dataset_id
dtype: string
- name: unique_id
dtype: string
splits:
- name: train
num_bytes: 36260944.275211014
num_examples: 67435
download_size: 10436734
dataset_size: 36260944.275211014
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "partitioned_v3_standardized_013"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
HydraLM/partitioned_v3_standardized_014 | 2023-08-01T18:02:05.000Z | [
"region:us"
] | HydraLM | null | null | null | 0 | 7 | ---
dataset_info:
features:
- name: message
dtype: string
- name: message_type
dtype: string
- name: message_id
dtype: int64
- name: conversation_id
dtype: int64
- name: dataset_id
dtype: string
- name: unique_id
dtype: string
splits:
- name: train
num_bytes: 13434858.644860193
num_examples: 24985
download_size: 3851796
dataset_size: 13434858.644860193
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "partitioned_v3_standardized_014"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
HydraLM/partitioned_v3_standardized_015 | 2023-08-01T18:02:20.000Z | [
"region:us"
] | HydraLM | null | null | null | 0 | 7 | ---
dataset_info:
features:
- name: message
dtype: string
- name: message_type
dtype: string
- name: message_id
dtype: int64
- name: conversation_id
dtype: int64
- name: dataset_id
dtype: string
- name: unique_id
dtype: string
splits:
- name: train
num_bytes: 36865875.87318851
num_examples: 68560
download_size: 24239053
dataset_size: 36865875.87318851
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "partitioned_v3_standardized_015"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
HydraLM/partitioned_v3_standardized_016 | 2023-08-01T18:02:31.000Z | [
"region:us"
] | HydraLM | null | null | null | 0 | 7 | ---
dataset_info:
features:
- name: message
dtype: string
- name: message_type
dtype: string
- name: message_id
dtype: int64
- name: conversation_id
dtype: int64
- name: dataset_id
dtype: string
- name: unique_id
dtype: string
splits:
- name: train
num_bytes: 12924027.517679198
num_examples: 24035
download_size: 9107397
dataset_size: 12924027.517679198
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "partitioned_v3_standardized_016"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
HydraLM/partitioned_v3_standardized_017 | 2023-08-01T18:02:41.000Z | [
"region:us"
] | HydraLM | null | null | null | 0 | 7 | ---
dataset_info:
features:
- name: message
dtype: string
- name: message_type
dtype: string
- name: message_id
dtype: int64
- name: conversation_id
dtype: int64
- name: dataset_id
dtype: string
- name: unique_id
dtype: string
splits:
- name: train
num_bytes: 17233291.363182884
num_examples: 32049
download_size: 7807381
dataset_size: 17233291.363182884
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "partitioned_v3_standardized_017"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
HydraLM/partitioned_v3_standardized_018 | 2023-08-01T18:02:53.000Z | [
"region:us"
] | HydraLM | null | null | null | 0 | 7 | ---
dataset_info:
features:
- name: message
dtype: string
- name: message_type
dtype: string
- name: message_id
dtype: int64
- name: conversation_id
dtype: int64
- name: dataset_id
dtype: string
- name: unique_id
dtype: string
splits:
- name: train
num_bytes: 64622288.45629991
num_examples: 120179
download_size: 9924326
dataset_size: 64622288.45629991
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "partitioned_v3_standardized_018"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
HydraLM/partitioned_v3_standardized_019 | 2023-08-01T18:03:03.000Z | [
"region:us"
] | HydraLM | null | null | null | 0 | 7 | ---
dataset_info:
features:
- name: message
dtype: string
- name: message_type
dtype: string
- name: message_id
dtype: int64
- name: conversation_id
dtype: int64
- name: dataset_id
dtype: string
- name: unique_id
dtype: string
splits:
- name: train
num_bytes: 31693038.56426095
num_examples: 58940
download_size: 2446972
dataset_size: 31693038.56426095
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "partitioned_v3_standardized_019"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
HydraLM/partitioned_v3_standardized_020 | 2023-08-01T18:03:16.000Z | [
"region:us"
] | HydraLM | null | null | null | 0 | 7 | ---
dataset_info:
features:
- name: message
dtype: string
- name: message_type
dtype: string
- name: message_id
dtype: int64
- name: conversation_id
dtype: int64
- name: dataset_id
dtype: string
- name: unique_id
dtype: string
splits:
- name: train
num_bytes: 88140953.55171296
num_examples: 163917
download_size: 9190278
dataset_size: 88140953.55171296
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "partitioned_v3_standardized_020"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
HydraLM/partitioned_v3_standardized_021 | 2023-08-01T18:03:27.000Z | [
"region:us"
] | HydraLM | null | null | null | 0 | 7 | ---
dataset_info:
features:
- name: message
dtype: string
- name: message_type
dtype: string
- name: message_id
dtype: int64
- name: conversation_id
dtype: int64
- name: dataset_id
dtype: string
- name: unique_id
dtype: string
splits:
- name: train
num_bytes: 40218541.218423784
num_examples: 74795
download_size: 8276625
dataset_size: 40218541.218423784
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "partitioned_v3_standardized_021"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
HydraLM/partitioned_v3_standardized_022 | 2023-08-01T18:03:46.000Z | [
"region:us"
] | HydraLM | null | null | null | 0 | 7 | ---
dataset_info:
features:
- name: message
dtype: string
- name: message_type
dtype: string
- name: message_id
dtype: int64
- name: conversation_id
dtype: int64
- name: dataset_id
dtype: string
- name: unique_id
dtype: string
splits:
- name: train
num_bytes: 62738665.88944198
num_examples: 116676
download_size: 35427818
dataset_size: 62738665.88944198
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "partitioned_v3_standardized_022"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
HydraLM/partitioned_v3_standardized_023 | 2023-08-01T18:03:57.000Z | [
"region:us"
] | HydraLM | null | null | null | 0 | 7 | ---
dataset_info:
features:
- name: message
dtype: string
- name: message_type
dtype: string
- name: message_id
dtype: int64
- name: conversation_id
dtype: int64
- name: dataset_id
dtype: string
- name: unique_id
dtype: string
splits:
- name: train
num_bytes: 10570978.030790735
num_examples: 19659
download_size: 11495691
dataset_size: 10570978.030790735
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "partitioned_v3_standardized_023"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
HydraLM/partitioned_v3_standardized_024 | 2023-08-01T18:04:09.000Z | [
"region:us"
] | HydraLM | null | null | null | 0 | 7 | ---
dataset_info:
features:
- name: message
dtype: string
- name: message_type
dtype: string
- name: message_id
dtype: int64
- name: conversation_id
dtype: int64
- name: dataset_id
dtype: string
- name: unique_id
dtype: string
splits:
- name: train
num_bytes: 12182515.807802783
num_examples: 22656
download_size: 12802424
dataset_size: 12182515.807802783
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "partitioned_v3_standardized_024"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
HydraLM/partitioned_v3_standardized_025 | 2023-08-01T18:04:18.000Z | [
"region:us"
] | HydraLM | null | null | null | 0 | 7 | ---
dataset_info:
features:
- name: message
dtype: string
- name: message_type
dtype: string
- name: message_id
dtype: int64
- name: conversation_id
dtype: int64
- name: dataset_id
dtype: string
- name: unique_id
dtype: string
splits:
- name: train
num_bytes: 10684436.312722515
num_examples: 19870
download_size: 6109603
dataset_size: 10684436.312722515
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "partitioned_v3_standardized_025"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
HydraLM/partitioned_v3_standardized_026 | 2023-08-01T18:04:29.000Z | [
"region:us"
] | HydraLM | null | null | null | 0 | 7 | ---
dataset_info:
features:
- name: message
dtype: string
- name: message_type
dtype: string
- name: message_id
dtype: int64
- name: conversation_id
dtype: int64
- name: dataset_id
dtype: string
- name: unique_id
dtype: string
splits:
- name: train
num_bytes: 34048776.63602931
num_examples: 63321
download_size: 5555939
dataset_size: 34048776.63602931
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "partitioned_v3_standardized_026"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
HydraLM/partitioned_v3_standardized_027 | 2023-08-01T18:04:38.000Z | [
"region:us"
] | HydraLM | null | null | null | 0 | 7 | ---
dataset_info:
features:
- name: message
dtype: string
- name: message_type
dtype: string
- name: message_id
dtype: int64
- name: conversation_id
dtype: int64
- name: dataset_id
dtype: string
- name: unique_id
dtype: string
splits:
- name: train
num_bytes: 6027807.300735752
num_examples: 11210
download_size: 6004359
dataset_size: 6027807.300735752
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "partitioned_v3_standardized_027"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
HydraLM/partitioned_v3_standardized_028 | 2023-08-01T18:04:49.000Z | [
"region:us"
] | HydraLM | null | null | null | 0 | 7 | ---
dataset_info:
features:
- name: message
dtype: string
- name: message_type
dtype: string
- name: message_id
dtype: int64
- name: conversation_id
dtype: int64
- name: dataset_id
dtype: string
- name: unique_id
dtype: string
splits:
- name: train
num_bytes: 31271468.45509263
num_examples: 58156
download_size: 5794647
dataset_size: 31271468.45509263
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "partitioned_v3_standardized_028"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
HydraLM/partitioned_v3_standardized_029 | 2023-08-01T18:05:00.000Z | [
"region:us"
] | HydraLM | null | null | null | 0 | 7 | ---
dataset_info:
features:
- name: message
dtype: string
- name: message_type
dtype: string
- name: message_id
dtype: int64
- name: conversation_id
dtype: int64
- name: dataset_id
dtype: string
- name: unique_id
dtype: string
splits:
- name: train
num_bytes: 9747733.340565363
num_examples: 18128
download_size: 9524643
dataset_size: 9747733.340565363
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "partitioned_v3_standardized_029"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
HydraLM/partitioned_v3_standardized_030 | 2023-08-01T18:05:15.000Z | [
"region:us"
] | HydraLM | null | null | null | 0 | 7 | ---
dataset_info:
features:
- name: message
dtype: string
- name: message_type
dtype: string
- name: message_id
dtype: int64
- name: conversation_id
dtype: int64
- name: dataset_id
dtype: string
- name: unique_id
dtype: string
splits:
- name: train
num_bytes: 119843133.30456556
num_examples: 222874
download_size: 9661106
dataset_size: 119843133.30456556
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "partitioned_v3_standardized_030"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
bloyal/antiberta-pretrain | 2023-08-02T14:25:11.000Z | [
"size_categories:100K<n<1M",
"license:apache-2.0",
"region:us"
] | bloyal | null | null | null | 0 | 7 | ---
license: apache-2.0
size_categories:
- 100K<n<1M
---
# AntiBERTa Pretraining Data
## Description
Pretraining data for the [AntiBERTa](https://github.com/alchemab/antiberta) protein language model from [Alchemab Therapeutics](https://www.alchemab.com/).
## Citations
```
@article{Leem_Mitchell_Farmery_Barton_Galson_2022,
  title={Deciphering the language of antibodies using self-supervised learning},
  volume={3},
  ISSN={2666-3899},
  url={https://www.cell.com/patterns/abstract/S2666-3899(22)00105-2},
  DOI={10.1016/j.patter.2022.100513},
  number={7},
  journal={Patterns},
  publisher={Elsevier},
  author={Leem, Jinwoo and Mitchell, Laura S. and Farmery, James H. R. and Barton, Justin and Galson, Jacob D.},
  year={2022},
  month={Jul},
  language={English}
}
``` |