id stringlengths 2 115 | author stringlengths 2 42 ⌀ | last_modified timestamp[us, tz=UTC] | downloads int64 0 8.87M | likes int64 0 3.84k | paperswithcode_id stringlengths 2 45 ⌀ | tags list | lastModified timestamp[us, tz=UTC] | createdAt stringlengths 24 24 | key stringclasses 1 value | created timestamp[us] | card stringlengths 1 1.01M | embedding list | library_name stringclasses 21 values | pipeline_tag stringclasses 27 values | mask_token null | card_data null | widget_data null | model_index null | config null | transformers_info null | spaces null | safetensors null | transformersInfo null | modelId stringlengths 5 111 ⌀ | embeddings list |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
jkwiatkowski/raven | jkwiatkowski | 2023-01-14T21:40:08Z | 43 | 0 | null | [
"region:us"
] | 2023-01-14T21:40:08Z | 2023-01-14T21:25:46.000Z | 2023-01-14T21:25:46 | ---
dataset_info:
features:
- name: inputs
dtype:
array3_d:
shape:
- 16
- 160
- 160
dtype: uint8
- name: target
dtype:
array2_d:
shape:
- 16
- 113
dtype: int8
- name: index
dtype: uint8
splits:
- name: train
num_bytes: 17714970000
num_examples: 42000
- name: val
num_bytes: 5904990000
num_examples: 14000
- name: test
num_bytes: 5904990000
num_examples: 14000
download_size: 1225465267
dataset_size: 29524950000
---
# Dataset Card for "raven"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.6005617380142212,
-0.027202557772397995,
0.034039780497550964,
0.26029589772224426,
0.033646393567323685,
-0.19423764944076538,
0.812788188457489,
-0.3802948594093323,
1.155961513519287,
0.5254784822463989,
-0.9438897371292114,
-0.7131692171096802,
-0.6113013029098511,
-0.29577222466468... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
competitions/aiornot | competitions | 2023-03-30T12:32:32Z | 43 | 25 | null | [
"task_categories:image-classification",
"image-classification",
"autotrain",
"competitions",
"region:us"
] | 2023-03-30T12:32:32Z | 2023-01-25T15:22:37.000Z | 2023-01-25T15:22:37 | ---
task_categories:
- image-classification
tags:
- image-classification
- autotrain
- competitions
dataset_info:
features:
- name: id
dtype: string
- name: image
dtype: image
- name: label
dtype: int64
---
# Dataset Card for aiornot
Dataset for the [aiornot competition](https://hf.co/spaces/competitions/aiornot).
By accessing this dataset, you accept the rules of the AI or Not competition.
Please note that the dataset may contain images which are not considered safe for work.
## Usage
### With Hugging Face Datasets 🤗
You can download and use this dataset using the `datasets` library.
📝 **Note:** You must be logged in to your Hugging Face account for the snippet below to work. You can do this with `huggingface-cli login` or `huggingface_hub.notebook_login` if you have the `huggingface_hub` python library installed (`pip install huggingface_hub`).
```python
from datasets import load_dataset
ds = load_dataset('competitions/aiornot')
```
### From Original Files
The original files and sample submission can be found in the `.extras` folder (under the files and versions tab of this repo). Feel free to download them and use them directly if you don't wish to use the `datasets` library.
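For example, a single file can be fetched with `huggingface_hub` (a sketch; the exact filename under `.extras` is an assumption, so check the repo for the real names):
```python
from huggingface_hub import hf_hub_download

# Hypothetical filename -- browse the ".extras" folder of this repo for the actual files.
path = hf_hub_download(
    repo_id="competitions/aiornot",
    repo_type="dataset",
    filename=".extras/sample_submission.csv",
)
print(path)
```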
| [
-0.39490485191345215,
-0.42858749628067017,
-0.09024375677108765,
0.6200641989707947,
-0.49466678500175476,
-0.1339232623577118,
-0.05084623768925667,
-0.35253527760505676,
0.6617997288703918,
0.24080923199653625,
-0.81304532289505,
-0.6762734651565552,
-0.6632581353187561,
0.2788597047328... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Multimodal-Fatima/OK-VQA_test | Multimodal-Fatima | 2023-05-29T02:08:55Z | 43 | 0 | null | [
"region:us"
] | 2023-05-29T02:08:55Z | 2023-03-07T14:45:38.000Z | 2023-03-07T14:45:38 | ---
dataset_info:
features:
- name: image
dtype: image
- name: question_type
dtype: string
- name: confidence
dtype: int32
- name: answers
sequence: string
- name: answers_original
list:
- name: answer
dtype: string
- name: raw_answer
dtype: string
- name: answer_confidence
dtype: string
- name: answer_id
dtype: int64
- name: id_image
dtype: int64
- name: answer_type
dtype: string
- name: question_id
dtype: int64
- name: question
dtype: string
- name: id
dtype: int64
- name: clip_tags_LAION_ViT_H_14_2B
sequence: string
- name: clip_tags_ViT_L_14
sequence: string
- name: blip_caption_beam_5
dtype: string
- name: LLM_Description_gpt3_downstream_tasks_visual_genome_ViT_L_14
sequence: string
- name: LLM_Description_gpt3_downstream_tasks_visual_genome_LAION-ViT-H-14-2B
sequence: string
- name: DETA_detections_deta_swin_large_o365_coco_classes
list:
- name: attribute
dtype: string
- name: box
sequence: float32
- name: label
dtype: string
- name: location
dtype: string
- name: ratio
dtype: float32
- name: size
dtype: string
- name: tag
dtype: string
- name: DETA_detections_deta_swin_large_o365_coco_classes_caption_module_random
list:
- name: attribute
dtype: string
- name: box
sequence: float64
- name: captions_module
sequence: string
- name: captions_module_filter
sequence: string
- name: label
dtype: string
- name: location
dtype: string
- name: ratio
dtype: float64
- name: size
dtype: string
- name: tag
dtype: string
- name: clip_tags_ViT_B_16_with_openai
sequence: string
- name: clip_tags_LAION_ViT_H_14_2B_with_openai
sequence: string
- name: clip_tags_ViT_L_14_with_openai
sequence: string
- name: Attributes_ViT_L_14_descriptors_text_davinci_003_full
sequence: string
- name: Attributes_ViT_B_16_descriptors_text_davinci_003_full
sequence: string
- name: Attributes_LAION_ViT_H_14_2B_descriptors_text_davinci_003_full
sequence: string
- name: DETA_detections_deta_swin_large_o365_coco_classes_caption_all_patches_Salesforce_blip_image_captioning_large_
list:
- name: attribute
dtype: string
- name: box
sequence: float64
- name: captions_all_patches
sequence: string
- name: label
dtype: string
- name: location
dtype: string
- name: ratio
dtype: float64
- name: size
dtype: string
- name: tag
dtype: string
- name: blip_caption_topk_50_Salesforce_blip_image_captioning_large_multiple
sequence: string
splits:
- name: test
num_bytes: 1133674079.0
num_examples: 5046
download_size: 959321361
dataset_size: 1133674079.0
---
# Dataset Card for "OK-VQA_test"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.7838833332061768,
-0.47043442726135254,
0.24061399698257446,
-0.12111825495958328,
-0.10199844837188721,
-0.22216439247131348,
0.49055591225624084,
-0.12725578248500824,
0.6808611154556274,
0.4884566068649292,
-0.7074025273323059,
-0.5855929851531982,
-0.374126672744751,
-0.336188286542... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
open-source-metrics/pip-external | open-source-metrics | 2023-11-22T15:49:29Z | 43 | 0 | null | [
"region:us"
] | 2023-11-22T15:49:29Z | 2023-03-24T14:32:07.000Z | 2023-03-24T14:32:07 | ---
dataset_info:
features:
- name: day
dtype: string
- name: num_downloads
dtype: int64
splits:
- name: pytorch
num_bytes: 33726
num_examples: 1533
- name: openai
num_bytes: 24574
num_examples: 1117
- name: langchain
num_bytes: 8668
num_examples: 394
- name: tensorflow
num_bytes: 33726
num_examples: 1533
download_size: 59366
dataset_size: 100694
configs:
- config_name: default
data_files:
- split: langchain
path: data/langchain-*
- split: pytorch
path: data/pytorch-*
- split: tensorflow
path: data/tensorflow-*
---
# Dataset Card for "pip-external"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.5737804770469666,
0.0741593986749649,
0.05101093277335167,
0.33434054255485535,
-0.1407611221075058,
-0.23954346776008606,
0.39344850182533264,
-0.16884899139404297,
0.7893158197402954,
0.36550337076187134,
-0.9102442264556885,
-0.4241996109485626,
-0.6594387292861938,
-0.17589490115642... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Francesco/road-traffic | Francesco | 2023-03-30T09:12:18Z | 43 | 1 | null | [
"task_categories:object-detection",
"annotations_creators:crowdsourced",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:en",
"license:cc",
"rf100",
"region:us"
] | 2023-03-30T09:12:18Z | 2023-03-30T09:11:50.000Z | 2023-03-30T09:11:50 | ---
dataset_info:
features:
- name: image_id
dtype: int64
- name: image
dtype: image
- name: width
dtype: int32
- name: height
dtype: int32
- name: objects
sequence:
- name: id
dtype: int64
- name: area
dtype: int64
- name: bbox
sequence: float32
length: 4
- name: category
dtype:
class_label:
names:
'0': road-traffic
'1': bicycles
'2': buses
'3': crosswalks
'4': fire hydrants
'5': motorcycles
'6': traffic lights
'7': vehicles
annotations_creators:
- crowdsourced
language_creators:
- found
language:
- en
license:
- cc
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- object-detection
task_ids: []
pretty_name: road-traffic
tags:
- rf100
---
# Dataset Card for road-traffic
**The original COCO dataset is stored at `dataset.tar.gz`**
## Dataset Description
- **Homepage:** https://universe.roboflow.com/object-detection/road-traffic
- **Point of Contact:** francesco.zuppichini@gmail.com
### Dataset Summary
road-traffic
### Supported Tasks and Leaderboards
- `object-detection`: The dataset can be used to train a model for Object Detection.
### Languages
English
## Dataset Structure
### Data Instances
A data point comprises an image and its object annotations.
```
{
'image_id': 15,
'image': <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=640x640 at 0x2373B065C18>,
'width': 964043,
'height': 640,
'objects': {
'id': [114, 115, 116, 117],
'area': [3796, 1596, 152768, 81002],
'bbox': [
[302.0, 109.0, 73.0, 52.0],
[810.0, 100.0, 57.0, 28.0],
[160.0, 31.0, 248.0, 616.0],
[741.0, 68.0, 202.0, 401.0]
],
'category': [4, 4, 0, 0]
}
}
```
### Data Fields
- `image_id`: the image id
- `image`: `PIL.Image.Image` object containing the image. Note that when accessing the image column (`dataset[0]["image"]`), the image file is automatically decoded. Decoding a large number of image files might take a significant amount of time. Thus it is important to query the sample index before the `"image"` column, *i.e.* `dataset[0]["image"]` should **always** be preferred over `dataset["image"][0]` (see the sketch after this list)
- `width`: the image width
- `height`: the image height
- `objects`: a dictionary containing bounding box metadata for the objects present on the image
- `id`: the annotation id
- `area`: the area of the bounding box
- `bbox`: the object's bounding box (in the [coco](https://albumentations.ai/docs/getting_started/bounding_boxes_augmentation/#coco) format)
- `category`: the object's category.
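A minimal access sketch (assuming the split is named `train`), following the indexing advice above:
```python
from datasets import load_dataset

# Assumption: this repo exposes a "train" split.
ds = load_dataset("Francesco/road-traffic", split="train")

example = ds[0]            # decodes only this example's image
image = example["image"]   # preferred over ds["image"][0], which would decode every image
print(example["objects"]["bbox"])
```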
#### Who are the annotators?
Annotators are Roboflow users.
## Additional Information
### Licensing Information
See original homepage https://universe.roboflow.com/object-detection/road-traffic
### Citation Information
```
@misc{ road-traffic,
title = { road traffic Dataset },
type = { Open Source Dataset },
author = { Roboflow 100 },
howpublished = { \url{ https://universe.roboflow.com/object-detection/road-traffic } },
url = { https://universe.roboflow.com/object-detection/road-traffic },
journal = { Roboflow Universe },
publisher = { Roboflow },
year = { 2022 },
month = { nov },
note = { visited on 2023-03-29 },
}"
```
### Contributions
Thanks to [@mariosasko](https://github.com/mariosasko) for adding this dataset. | [
-0.725415050983429,
-0.6613062024116516,
0.34089893102645874,
-0.08982060849666595,
-0.5363978743553162,
-0.1311916559934616,
-0.08092565089464188,
-0.6036465764045715,
0.30932581424713135,
0.44510263204574585,
-0.6744555234909058,
-0.8976103663444519,
-0.5684770345687866,
0.17702196538448... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
liuyanchen1015/MULTI_VALUE_sst2_comparative_more_and | liuyanchen1015 | 2023-04-03T19:47:13Z | 43 | 0 | null | [
"region:us"
] | 2023-04-03T19:47:13Z | 2023-04-03T19:47:10.000Z | 2023-04-03T19:47:10 | ---
dataset_info:
features:
- name: sentence
dtype: string
- name: label
dtype: int64
- name: idx
dtype: int64
- name: score
dtype: int64
splits:
- name: dev
num_bytes: 3080
num_examples: 19
- name: test
num_bytes: 6036
num_examples: 38
- name: train
num_bytes: 73392
num_examples: 631
download_size: 35653
dataset_size: 82508
---
# Dataset Card for "MULTI_VALUE_sst2_comparative_more_and"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.38559451699256897,
-0.07859700918197632,
0.28031912446022034,
0.0752108246088028,
-0.3979485034942627,
0.287525475025177,
0.2331845462322235,
-0.18516641855239868,
0.7512829899787903,
0.2106890231370926,
-0.6053675413131714,
-0.5348562598228455,
-0.643838107585907,
-0.3702797591686249,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
camel-ai/biology | camel-ai | 2023-05-23T21:11:56Z | 43 | 19 | null | [
"task_categories:text-generation",
"language:en",
"license:cc-by-nc-4.0",
"instruction-finetuning",
"arxiv:2303.17760",
"region:us"
] | 2023-05-23T21:11:56Z | 2023-04-16T01:30:03.000Z | 2023-04-16T01:30:03 | ---
license: cc-by-nc-4.0
language:
- en
tags:
- instruction-finetuning
pretty_name: CAMEL Biology
task_categories:
- text-generation
arxiv: 2303.17760
extra_gated_prompt: "By using this data, you acknowledge and agree to utilize it solely for research purposes, recognizing that the dataset may contain inaccuracies due to its artificial generation through ChatGPT."
extra_gated_fields:
Name: text
Email: text
I will adhere to the terms and conditions of this dataset: checkbox
---
# **CAMEL: Communicative Agents for “Mind” Exploration of Large Scale Language Model Society**
- **Github:** https://github.com/lightaime/camel
- **Website:** https://www.camel-ai.org/
- **Arxiv Paper:** https://arxiv.org/abs/2303.17760
## Dataset Summary
The biology dataset is composed of 20K problem-solution pairs obtained using GPT-4. The problem-solution pairs were generated from 25 biology topics, with 25 subtopics for each topic and 32 problems for each (topic, subtopic) pair.
We provide the data in `biology.zip`.
## Data Fields
**The data fields for files in `biology.zip` are as follows:**
* `role_1`: assistant role
* `topic`: biology topic
* `sub_topic`: biology subtopic belonging to topic
* `message_1`: refers to the problem the assistant is asked to solve.
* `message_2`: refers to the solution provided by the assistant.
**Download in python**
```python
from huggingface_hub import hf_hub_download
hf_hub_download(repo_id="camel-ai/biology", repo_type="dataset", filename="biology.zip",
local_dir="datasets/", local_dir_use_symlinks=False)
```
### Citation
```
@misc{li2023camel,
title={CAMEL: Communicative Agents for "Mind" Exploration of Large Scale Language Model Society},
author={Guohao Li and Hasan Abed Al Kader Hammoud and Hani Itani and Dmitrii Khizbullin and Bernard Ghanem},
year={2023},
eprint={2303.17760},
archivePrefix={arXiv},
primaryClass={cs.AI}
}
```
## Disclaimer:
This data was synthetically generated by GPT-4 and might contain incorrect information. The dataset is intended for research purposes only.
| [
-0.34610170125961304,
-1.030604362487793,
0.1959645003080368,
0.16683945059776306,
-0.06699174642562866,
0.023658502846956253,
-0.4136985242366791,
-0.44533976912498474,
0.39107584953308105,
0.31327933073043823,
-0.5514075756072998,
-0.3644706606864929,
-0.6920718550682068,
0.1374517828226... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
PierreLepagnol/WRENCH | PierreLepagnol | 2023-08-17T11:18:59Z | 43 | 0 | null | [
"task_categories:text-classification",
"task_categories:token-classification",
"size_categories:10K<n<100K",
"region:us"
] | 2023-08-17T11:18:59Z | 2023-04-17T11:42:58.000Z | 2023-04-17T11:42:58 | ---
task_categories:
- text-classification
- token-classification
size_categories:
- 10K<n<100K
dataset_info:
- config_name: yelp
features:
- name: text
dtype: string
- name: label
dtype: int8
- name: weak_labels
sequence: int8
splits:
- name: train
num_bytes: 22618599
num_examples: 30400
- name: validation
num_bytes: 2824249
num_examples: 3800
- name: test
num_bytes: 2709033
num_examples: 3800
download_size: 37356054
dataset_size: 28151881
- config_name: imdb
features:
- name: text
dtype: string
- name: label
dtype: int8
- name: weak_labels
sequence: int8
splits:
- name: train
num_bytes: 25515241
num_examples: 20000
- name: validation
num_bytes: 3269130
num_examples: 2500
- name: test
num_bytes: 3151954
num_examples: 2500
download_size: 33910706
dataset_size: 31936325
- config_name: agnews
features:
- name: text
dtype: string
- name: label
dtype: int8
- name: weak_labels
sequence: int8
splits:
- name: train
num_bytes: 20357121
num_examples: 96000
- name: validation
num_bytes: 2487983
num_examples: 12000
- name: test
num_bytes: 2521518
num_examples: 12000
download_size: 39149014
dataset_size: 25366622
- config_name: cdr
features:
- name: text
dtype: string
- name: label
dtype: int8
- name: entity1
dtype: string
- name: entity2
dtype: string
- name: weak_labels
sequence: int8
splits:
- name: train
num_bytes: 2318545
num_examples: 8430
- name: validation
num_bytes: 246252
num_examples: 920
- name: test
num_bytes: 1229627
num_examples: 4673
download_size: 11036213
dataset_size: 3794424
- config_name: chemprot
features:
- name: text
dtype: string
- name: label
dtype: int8
- name: entity1
dtype: string
- name: entity2
dtype: string
- name: weak_labels
sequence: int8
splits:
- name: train
num_bytes: 3474637
num_examples: 12861
- name: validation
num_bytes: 435850
num_examples: 1607
- name: test
num_bytes: 434031
num_examples: 1607
download_size: 15743249
dataset_size: 4344518
- config_name: semeval
features:
- name: text
dtype: string
- name: label
dtype: int8
- name: entity1
dtype: string
- name: entity2
dtype: string
- name: weak_labels
sequence: int8
splits:
- name: train
num_bytes: 532785
num_examples: 1749
- name: validation
num_bytes: 54373
num_examples: 178
- name: test
num_bytes: 184826
num_examples: 600
download_size: 2295058
dataset_size: 771984
- config_name: sms
features:
- name: text
dtype: string
- name: label
dtype: int8
- name: weak_labels
sequence: int8
splits:
- name: train
num_bytes: 741520
num_examples: 4571
- name: validation
num_bytes: 81747
num_examples: 500
- name: test
num_bytes: 80152
num_examples: 500
download_size: 6715435
dataset_size: 903419
- config_name: spouse
features:
- name: text
dtype: string
- name: label
dtype: int8
- name: entity1
dtype: string
- name: entity2
dtype: string
- name: weak_labels
sequence: int8
splits:
- name: train
num_bytes: 7550613
num_examples: 22254
- name: validation
num_bytes: 952523
num_examples: 2811
- name: test
num_bytes: 876804
num_examples: 2701
download_size: 22017644
dataset_size: 9379940
- config_name: trec
features:
- name: text
dtype: string
- name: label
dtype: int8
- name: weak_labels
sequence: int8
splits:
- name: train
num_bytes: 610244
num_examples: 4965
- name: validation
num_bytes: 61048
num_examples: 500
- name: test
num_bytes: 56479
num_examples: 500
download_size: 2277067
dataset_size: 727771
- config_name: youtube
features:
- name: text
dtype: string
- name: label
dtype: int8
- name: weak_labels
sequence: int8
splits:
- name: train
num_bytes: 180736
num_examples: 1586
- name: validation
num_bytes: 14659
num_examples: 120
- name: test
num_bytes: 33347
num_examples: 250
download_size: 759494
dataset_size: 228742
---
| [
-0.1285335123538971,
-0.1861683875322342,
0.6529128551483154,
0.49436232447624207,
-0.19319400191307068,
0.23607441782951355,
0.36072009801864624,
0.05056373029947281,
0.5793656706809998,
0.7400146722793579,
-0.650810182094574,
-0.23784008622169495,
-0.7102247476577759,
-0.0478255338966846... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
michaelwzhu/ChatMed_Consult_Dataset | michaelwzhu | 2023-05-05T13:41:10Z | 43 | 72 | null | [
"task_categories:text-generation",
"language:zh",
"license:cc-by-4.0",
"medical consultation",
"finetuning",
"region:us"
] | 2023-05-05T13:41:10Z | 2023-04-20T04:53:33.000Z | 2023-04-20T04:53:33 | ---
license: cc-by-4.0
language:
- zh
tags:
- medical consultation
- finetuning
pretty_name: ChatMed-Dataset
task_categories:
- text-generation
---
# Dataset Card for ChatMed
## Dataset Description
- **Homepage:** https://huggingface.co/datasets/michaelwzhu/ChatMed-Datasets
- **Repository:** https://github.com/michael-wzhu/ChatMed
- **Paper:**
- **Leaderboard:**
- **Point of Contact:** Wei Zhu
### Dataset Summary
ChatMed-Dataset is a dataset of 110,113 medical query-response pairs (in Chinese) generated by OpenAI's `GPT-3.5` engine. The queries are crawled from several online medical consultation sites, reflecting medical needs in the real world. The responses are generated by the OpenAI engine. This dataset is designed to inject medical knowledge into Chinese large language models.
The dataset is growing rapidly. Stay tuned for updates!
In the era of large models, models tend to converge: thanks to the "open-sourcing" of LLaMA, for example, the open-source community has produced a whole series of open models such as stanford-alpaca. However, AGI datasets for Chinese vertical domains remain relatively scarce.
The queries (prompts) in this dataset, ChatMed-Dataset, come from real medical consultation questions on the internet (110,113 of them), reflecting the real-world consultation needs of different users/patients. At present, all responses are generated by OpenAI's `GPT-3.5` engine. We will later screen the doctors' and patients' answers found on the internet, select the best ones, and build a higher-quality dataset.
### Supported Tasks and Leaderboards
The ChatMed-Dataset is designed for fine-tuning pretrained language models to perform better in automatic medical consultation.
### Languages
The data in ChatMed-Dataset are in Chinese (zh).
## Dataset Structure
### Data Instances
The dataset is in JSON Lines format and is very convenient to use! An example from the ChatMed-Dataset looks as follows:
```json
{
"query": "你好,视神经萎缩,黄斑变性。怎样缓解,及日常注意事项。谢谢。",
"response": "视神经萎缩和黄斑变性是两种不同的疾病,需要分别进行治疗。\n\n视神经萎缩是指视神经逐渐受损,导致视力逐渐减退,常见的治疗方法包括:\n\n1.药物治疗:如促进视神经细胞再生的药物,改善视网膜供血的药物等。\n\n2.手术治疗:如视神经萎缩的手术治疗可以通过神经移植、修复和再生等手术方式来缓解症状。\n\n3.康复训练:通过视力训练等康复训练可以帮助患者恢复视力。\n\n黄斑变性则是指黄斑区域受损,导致中央视力减退,常见的治疗方法包括:\n\n1.药物治疗:如抗氧化剂、维生素等可以缓解症状。\n\n2.手术治疗:如黄斑脱离手术等可以恢复视力。\n\n3.日常注意事项:保持良好的生活习惯,如戒烟、控制血压、血糖等,避免暴晒和过度用眼,保持充足的睡眠和均衡的饮食等。\n\n需要根据具体情况选择合适的治疗方案,建议您咨询专业医生进行诊断和治疗。"
}
```
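A minimal loading sketch (assuming the `datasets` library's default JSON loader picks up the file in this repo, with a `train` split):
```python
from datasets import load_dataset

# Assumption: the JSON-Lines file is exposed as a "train" split.
ds = load_dataset("michaelwzhu/ChatMed_Consult_Dataset", split="train")
print(ds[0]["query"])
print(ds[0]["response"])
```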
### Data Fields
The data fields are as follows:
* `query`: real-life medical query from the web.
* `response`: response generated by the OpenAI `GPT-3.5` engine.
### Data Splits
| | train |
|---------------|------:|
| ChatMed-Dataset | 110,113 |
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
> We believe that releasing the above assets will enable the academic community to perform controlled scientific studies on how large language models responses to real-life medical query.
> More social impacts will be discussed soon!
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
The `ChatMed-Dataset` data is generated by a language model (`GPT-3.5`) and inevitably contains some errors or biases. We encourage users to use this data with caution and propose new methods to filter or improve the imperfections.
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
The dataset is available under the [Creative Commons NonCommercial (CC BY-NC 4.0)](https://creativecommons.org/licenses/by-nc/4.0/legalcode).
### Citation Information
```
@misc{alpaca,
author = {Wei Zhu },
title = {ChatMed-Dataset: A GPT-generated medical query-response dataset for medical large language models},
year = {2023},
publisher = {GitHub},
journal = {GitHub repository},
howpublished = {\url{https://github.com/michael-wzhu/ChatMed}},
}
```
### Contributions
[More Information Needed] | [
-0.1948956549167633,
-0.9471865296363831,
0.009738540276885033,
0.2476164698600769,
-0.4048019051551819,
-0.3155708611011505,
-0.37189266085624695,
-0.6297939419746399,
0.6301724910736084,
0.5011020302772522,
-0.5354740023612976,
-0.7830479741096497,
-0.5588686466217041,
0.0473465062677860... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
heegyu/open-korean-instructions | heegyu | 2023-05-06T09:18:37Z | 43 | 12 | null | [
"license:mit",
"region:us"
] | 2023-05-06T09:18:37Z | 2023-04-22T02:10:17.000Z | 2023-04-22T02:10:17 | ---
license: mit
---
This repository combines the Korean chatbot training datasets listed below. Of these, the ShareGPT data is multi-turn.
For the code used to generate and merge the data, see https://github.com/HeegyuKim/open-korean-instructions
| Name | # | Type |
|---|---|---|
| [KoAlpaca v1.0](https://huggingface.co/datasets/Bingsu/ko_alpaca_data) | 52K | single-turn |
| [KoAlpaca v1.1](https://raw.githubusercontent.com/Beomi/KoAlpaca/main/KoAlpaca_v1.1.jsonl) | 21K | single-turn |
| [ShareGPT DeepL translation](https://huggingface.co/datasets/junelee/sharegpt_deepl_ko) | 620K (single-turn), 84K (multi-turn) | multi-turn, single-turn |
| [OIG-small-chip2-ko](https://huggingface.co/datasets/heegyu/OIG-small-chip2-ko) | 210K | single-turn |
| [Korquad-Chat](https://huggingface.co/datasets/heegyu/korquad-chat-v1) | 9.6K | multi-turn, knowledge-grounded |
All data share a unified format: the three tokens `<sys>`, `<usr>`, and `<bot>`, together with line breaks, distinguish the speakers.
In the korquad-chat data, when the user and bot address each other, the tokens `<|bot|>` and `<|user|>` are used.
```
{"source": "koalpaca-v1.0", "text": "<usr> 홀수 중 하나를 밝히세요.\n<sys> 트위터, 인스타그램, 텔레그램\n<bot> 텔레그램입니다."}
{"source": "korquad-chat", "text": "
<sys>1984년 9월부터 덴노지 역 ~ 히네노 역간에 운전을 개시한 '홈 라이너 이즈미'가 전신이다. 특급 구로시오의 381계 전동차의 히네노 전차구 입선 회송 열차를 좌석 정리권을 필요로 하는 정원제 열차로서 운행해온 것으로 간사이 지방 최초의 홈 라이너였다. 1986년 11월 개정에 따라 운전 구간이 와카야마 역까지 연장되어 한와 라이너로 개칭되어 현재에 이르게 되었다. 이전에는 평일에 상행 3편, 하행 6편이 운전되었지만 일부가 신오사카 역이나 교토 역 방면 발착 특급 구로시오로 변경되어 2002년 3월 이후에는 상행 2편, 하행 4편으로 축소되었다. 2009년 3월의 시간표 개정부터는 휴일 시간대의 운전이 중단되었으며 같은 해 6월 1일부터는 전면 금연화되었다.\n
<usr> 안녕하세요, <|bot|>. \"한와_선\"에 대해 알고 계시나요?\n
<bot> 네, 전체적으로 알고 있습니다. 1984년에 운전을 개시한 \"홈 라이너 이즈미\"가 전신이며, 간사이 지방 최초의 홈 라이너였습니다. 1986년에는 와카야마 역까지 연장된 후 \"한와 라이너\"로 개칭되었습니다.\n
<usr> 그렇군요. 이전에는 평일에 몇 편이 운행되었나요?\n
<bot> 평일에 상행 3편, 하행 6편이 운전되었습니다. 하지만 2002년 이후로는 상행 2편, 하행 4편으로 축소되었습니다.\n
<usr> 그렇군요. 혹시 특급 구로시오와 연관이 있을까요?\n
<bot> 네, \"한와 라이너\"는 특급 구로시오의 381계 전동차의 히네노 전차구 입선 회송 열차를 좌석 정리권을 필요로 하는 정원제 열차로서 운행되었습니다."
}
```
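A minimal parsing sketch (an illustration, not part of the repo; it assumes every utterance in `text` starts with one of the three speaker tokens):
```python
import re

def split_turns(text: str):
    # Split a conversation into (speaker, utterance) pairs based on the
    # <sys>/<usr>/<bot> speaker tokens and line breaks.
    pattern = re.compile(r"<(sys|usr|bot)>\s*(.*?)(?=\n?<(?:sys|usr|bot)>|$)", re.S)
    return [(m.group(1), m.group(2).strip()) for m in pattern.finditer(text)]

sample = "<usr> 홀수 중 하나를 밝히세요.\n<sys> 트위터, 인스타그램, 텔레그램\n<bot> 텔레그램입니다."
print(split_turns(sample))  # [('usr', '...'), ('sys', '...'), ('bot', '...')]
```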
| [
-0.6776437759399414,
-0.8182427287101746,
0.3058846890926361,
0.5684847831726074,
-0.5376150608062744,
-0.007696210406720638,
0.2074437439441681,
-0.38939815759658813,
0.762615978717804,
0.39154863357543945,
-0.5585095286369324,
-0.6197200417518616,
-0.7746405005455017,
-0.0803266763687133... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
thehamkercat/telegram-spam-ham | thehamkercat | 2023-04-30T15:11:17Z | 43 | 1 | null | [
"license:wtfpl",
"region:us"
] | 2023-04-30T15:11:17Z | 2023-04-30T15:09:34.000Z | 2023-04-30T15:09:34 | ---
license: wtfpl
---
| [
-0.1285335123538971,
-0.1861683875322342,
0.6529128551483154,
0.49436232447624207,
-0.19319400191307068,
0.23607441782951355,
0.36072009801864624,
0.05056373029947281,
0.5793656706809998,
0.7400146722793579,
-0.650810182094574,
-0.23784008622169495,
-0.7102247476577759,
-0.0478255338966846... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
fptudsc/face-celeb-vietnamese | fptudsc | 2023-05-10T15:13:18Z | 43 | 1 | null | [
"task_categories:image-classification",
"task_categories:zero-shot-classification",
"size_categories:10M<n<100M",
"language:vi",
"license:apache-2.0",
"region:us"
] | 2023-05-10T15:13:18Z | 2023-05-03T17:56:54.000Z | 2023-05-03T17:56:54 | ---
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype: string
splits:
- name: train
num_bytes: 82233752.864
num_examples: 8557
download_size: 80630170
dataset_size: 82233752.864
license: apache-2.0
task_categories:
- image-classification
- zero-shot-classification
language:
- vi
size_categories:
- 10M<n<100M
---
# Dataset Card for "face-celeb-vietnamese"
## Dataset Summary
This dataset contains information on over 8,000 samples of well-known Vietnamese individuals, categorized into three professions: singers, actors, and beauty queens. The dataset includes data on more than 100 celebrities in each of the three job categories.
## Languages
- Vietnamese: the `label` field gives the celebrity's name in Vietnamese.
## Dataset Structure
- Each example consists of an `image` and a `label` string holding the celebrity's name (see the sketch below).
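A minimal loading sketch (assuming the default config and a `train` split):
```python
from datasets import load_dataset

ds = load_dataset("fptudsc/face-celeb-vietnamese", split="train")
example = ds[0]
print(example["label"])        # celebrity name in Vietnamese
print(example["image"].size)   # PIL image dimensions
```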
## Source Data - Initial Data Collection and Normalization
[Website người nổi tiếng](https://nguoinoitieng.tv)
### Licensing Information
Apache License 2.0
### Contributions
Thanks to [@github-duongttr](https://github.com/duongttr) and [@github-pphuc25](https://github.com/pphuc25) for adding this dataset. | [
-0.2535244822502136,
-0.4009217917919159,
-0.20677080750465393,
0.1814933568239212,
-0.16279755532741547,
0.12318381667137146,
0.02658444084227085,
-0.18506939709186554,
0.21042849123477936,
1.0871955156326294,
-0.5686021447181702,
-0.7138303518295288,
-0.6565490365028381,
0.06075911596417... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
jkhedri/psychology-dataset | jkhedri | 2023-05-04T10:12:40Z | 43 | 16 | null | [
"region:us"
] | 2023-05-04T10:12:40Z | 2023-05-04T10:08:53.000Z | 2023-05-04T10:08:53 | Entry not found | [
-0.32276487350463867,
-0.22568444907665253,
0.8622263073921204,
0.43461570143699646,
-0.5282988548278809,
0.7012969255447388,
0.7915717363357544,
0.07618642598390579,
0.7746027112007141,
0.25632190704345703,
-0.7852815389633179,
-0.22573848068714142,
-0.910447895526886,
0.5715675354003906,... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
xmcmic/PMC-VQA | xmcmic | 2023-07-12T01:10:58Z | 43 | 18 | null | [
"region:us"
] | 2023-07-12T01:10:58Z | 2023-05-24T12:00:03.000Z | 2023-05-24T12:00:03 | # PMC-VQA Dataset
- [PMC-VQA Dataset](#pmc-vqa-dataset)
  - [Dataset Structure](#dataset-structure)
- [Sample](#sample)
## Dataset Structure
**PMC-VQA** (version-1: 227k VQA pairs of 149k images).
- `train.csv`: metafile of train set
- `test.csv`: metafile of test set
- `test_clean.csv`: metafile of test clean set
- `images.zip`: images folder
(**Update** version-2: non-compound images).
- `train2.csv`: metafile of train set
- `test2.csv`: metafile of test set
- `images2.zip`: images folder
## Sample
A row in `train.csv` is shown below:
| Figure_path | PMC1064097_F1.jpg |
| ------------ | --------------------------------------------- |
| Question | What is the uptake pattern in the breast? |
| Answer | Focal uptake pattern |
| Choice A | A:Diffuse uptake pattern |
| Choice B | B:Focal uptake pattern |
| Choice C | C:No uptake pattern |
| Choice D | D:Cannot determine from the information given |
| Answer_label | B |
Explanation of each key:
- Figure_path: path to the image
- Question: question corresponding to the image
- Answer: the correct answer corresponding to the image
- Choice A: the provided choice A
- Choice B: the provided choice B
- Choice C: the provided choice C
- Choice D: the provided choice D
- Answer_label: the correct answer label
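As a sketch of how the metafiles might be read (assumes the CSVs are fetched from this repo; `pandas` is used for illustration):
```python
import pandas as pd
from huggingface_hub import hf_hub_download

# Fetch the train metafile from this repo, then read it with pandas.
csv_path = hf_hub_download(
    repo_id="xmcmic/PMC-VQA", repo_type="dataset", filename="train.csv"
)
df = pd.read_csv(csv_path)
print(df.loc[0, ["Figure_path", "Question", "Answer", "Answer_label"]])
```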
| [
-0.3357618451118469,
-0.0659436285495758,
0.5220481753349304,
0.39933401346206665,
-0.5142713785171509,
-0.13122515380382538,
0.4726024866104126,
0.20063824951648712,
0.12464544177055359,
0.698150634765625,
-0.9735161066055298,
-0.6449598073959351,
-0.6949925422668457,
0.12075259536504745,... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
d0rj/wikisum | d0rj | 2023-06-16T11:24:25Z | 43 | 1 | wikisum | [
"task_categories:summarization",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:en",
"license:unknown",
"abstractive-summarization",
"wiki",
"abstractive",
"arxiv:1801.10198",
"region:us"
] | 2023-06-16T11:24:25Z | 2023-06-16T11:13:38.000Z | 2023-06-16T11:13:38 | ---
dataset_info:
features:
- name: url
dtype: string
- name: title
dtype: string
- name: summary
dtype: string
- name: article
dtype: string
- name: step_headers
dtype: string
splits:
- name: train
num_bytes: 315275236
num_examples: 35775
- name: test
num_bytes: 17584216
num_examples: 2000
- name: validation
num_bytes: 17880851
num_examples: 2000
download_size: 194202865
dataset_size: 350740303
license:
- unknown
task_categories:
- summarization
language:
- en
multilinguality:
- monolingual
tags:
- abstractive-summarization
- wiki
- abstractive
pretty_name: 'WikiSum: Coherent Summarization Dataset for Efficient Human-Evaluation'
size_categories:
- 10K<n<100K
source_datasets:
- original
paperswithcode_id: wikisum
---
# wikisum
## Dataset Description
- **Homepage:** https://registry.opendata.aws/wikisum/
- **Repository:** https://github.com/tensorflow/tensor2tensor/tree/master/tensor2tensor/data_generators/wikisum
- **Paper:** [Generating Wikipedia by Summarizing Long Sequences](https://arxiv.org/abs/1801.10198)
- **Leaderboard:** [More Information Needed]
- **Point of Contact:** [nachshon](mailto:nachshon@amazon.com)
| [
-0.5056736469268799,
-0.3219315707683563,
-0.017971374094486237,
0.08399040251970291,
-0.4157596826553345,
-0.07268033176660538,
-0.23377051949501038,
-0.1514424830675125,
0.38166099786758423,
0.3418792486190796,
-0.6548262238502502,
-0.5391581654548645,
-0.6513972282409668,
0.087702766060... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
openaccess-ai-collective/chatbot-arena-elo-scores | openaccess-ai-collective | 2023-06-23T19:57:16Z | 43 | 0 | null | [
"region:us"
] | 2023-06-23T19:57:16Z | 2023-06-17T23:31:18.000Z | 2023-06-17T23:31:18 | ---
dataset_info:
features:
- name: elo_score
dtype: float64
- name: chatbot_name
dtype: string
splits:
- name: train
num_bytes: 359
num_examples: 14
download_size: 1669
dataset_size: 359
---
# Dataset Card for "chatbot-arena-elo-scores"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.44750484824180603,
-0.4757183790206909,
-0.03251056745648384,
0.3160454034805298,
-0.06678543984889984,
0.12328743189573288,
0.08661755919456482,
-0.22261358797550201,
0.8693844079971313,
0.4432219862937927,
-0.7620216012001038,
-0.841468095779419,
-0.46548816561698914,
-0.3371861875057... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
yulongmannlp/adv_ori | yulongmannlp | 2023-06-26T00:35:38Z | 43 | 0 | null | [
"region:us"
] | 2023-06-26T00:35:38Z | 2023-06-26T00:34:08.000Z | 2023-06-26T00:34:08 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622264862060547,
0.43461528420448303,
-0.52829909324646,
0.7012971639633179,
0.7915720343589783,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104477167129517,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Alignment-Lab-AI/Lawyer-Instruct | Alignment-Lab-AI | 2023-07-14T17:21:48Z | 43 | 6 | null | [
"license:apache-2.0",
"region:us"
] | 2023-07-14T17:21:48Z | 2023-07-14T06:59:11.000Z | 2023-07-14T06:59:11 | ---
license: apache-2.0
---
# Dataset Card for "Lawyer-Instruct"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
## Dataset Description
### Dataset Summary
Lawyer-Instruct is a conversational dataset primarily in English, reformatted from the original LawyerChat dataset. It contains legal dialogue scenarios reshaped into an instruction, input, and expected output format. This reshaped dataset is ideal for supervised dialogue model training.
Dataset generated in part by dang/futures
### Supported Tasks and Leaderboards
- `dialogue-modeling`: The dataset can be used to train a model for dialogue understanding and response generation based on given instruction. Performance can be evaluated based on dialogue understanding and the quality of the generated responses.
- There is no official leaderboard associated with this dataset at this time.
### Languages
The text in the dataset is in English.
## Dataset Structure
### Data Instances
An instance in the Lawyer-Instruct dataset represents a dialogue instruction and its corresponding output. Example:
```json
{
"instruction": "What are the possible legal consequences of not paying taxes?",
"input": "",
"output": "There can be several legal consequences, ranging from fines to imprisonment..."
}
```
### Data Fields
- `instruction`: a string representing the client's question or statement in the dialogue, serving as the input for dialogue model training.
- `input`: a string providing additional context for the instruction (may be empty, as in the example above).
- `output`: a string representing the legal professional's response.
### Data Splits
This dataset does not have a standard split. Users should carefully consider how they wish to split the data for training, validation, and testing purposes.
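For instance, a split could be derived with `datasets` (a sketch; the split name `train` is an assumption):
```python
from datasets import load_dataset

ds = load_dataset("Alignment-Lab-AI/Lawyer-Instruct", split="train")  # assumption: single "train" split
splits = ds.train_test_split(test_size=0.1, seed=42)  # hold out 10% for evaluation
train_ds, eval_ds = splits["train"], splits["test"]
```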
| [
-0.2871548533439636,
-0.6438157558441162,
0.1831914484500885,
-0.08602054417133331,
-0.40869879722595215,
0.12657950818538666,
-0.053288690745830536,
0.08567647635936737,
0.21447981894016266,
0.9137961864471436,
-0.7894001603126526,
-0.9552119970321655,
-0.40259847044944763,
-0.10297254472... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
OneFly7/llama2-sst2-fine-tuning | OneFly7 | 2023-08-08T07:03:26Z | 43 | 2 | null | [
"task_categories:text-classification",
"language:en",
"region:us"
] | 2023-08-08T07:03:26Z | 2023-07-29T19:28:23.000Z | 2023-07-29T19:28:23 | ---
dataset_info:
features:
- name: label_text
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 23202578
num_examples: 67349
- name: validation
num_bytes: 334716
num_examples: 872
download_size: 4418625
dataset_size: 23537294
task_categories:
- text-classification
language:
- en
---
# Dataset Card for "llama2-sst2-finetuning"
## Dataset Description
The Llama2-sst2-fine-tuning dataset is designed for supervised fine-tuning of LLaMA V2 on the GLUE SST-2 sentiment analysis classification task.
We provide two subsets: training and validation.
To ensure the effectiveness of fine-tuning, we convert the data into the prompt template for LLaMA V2 supervised fine-tuning, where the data will follow this format:
```
<s>[INST] <<SYS>>
{System prompt}
<</SYS>>
{User prompt} [/INST] {Label} </s>
```
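As a minimal illustration (the helper name, system prompt, and example sentence below are assumptions, not part of the dataset), one SST-2 example rendered into this template might look like:
```python
def to_llama2_prompt(system_prompt: str, user_prompt: str, label: str) -> str:
    # Render one example into the Llama V2 supervised fine-tuning template above.
    return (
        f"<s>[INST] <<SYS>>\n{system_prompt}\n<</SYS>>\n"
        f"{user_prompt} [/INST] {label} </s>"
    )

print(to_llama2_prompt(
    "Classify the sentiment of the sentence as positive or negative.",
    "a charming and often affecting journey",
    "positive",
))
```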
The feasibility of this dataset has been tested in supervised fine-tuning on the meta-llama/Llama-2-7b-hf model.
Note: for the sake of simplicity, we have retained only one new column of data (`text`).
## Other Useful Links
- [Get Llama 2 Prompt Format Right](https://www.reddit.com/r/LocalLLaMA/comments/155po2p/get_llama_2_prompt_format_right/)
- [Fine-Tune Your Own Llama 2 Model in a Colab Notebook](https://towardsdatascience.com/fine-tune-your-own-llama-2-model-in-a-colab-notebook-df9823a04a32)
- [Instruction fine-tuning Llama 2 with PEFT’s QLoRa method](https://medium.com/@ud.chandra/instruction-fine-tuning-llama-2-with-pefts-qlora-method-d6a801ebb19)
- [GLUE SST2 Dataset](https://www.tensorflow.org/datasets/catalog/glue#gluesst2)
<!--[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)--> | [
-0.2522786855697632,
-0.6928192973136902,
0.33277854323387146,
0.2654375731945038,
-0.6749866604804993,
0.12734510004520416,
-0.1964537650346756,
-0.19423919916152954,
0.1713409572839737,
0.40373560786247253,
-0.9330906271934509,
-0.6321382522583008,
-0.5692822933197021,
0.0890729799866676... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
shanover/disease_symptoms_prec_full | shanover | 2023-08-01T06:26:27Z | 43 | 2 | null | [
"license:mit",
"region:us"
] | 2023-08-01T06:26:27Z | 2023-08-01T06:25:45.000Z | 2023-08-01T06:25:45 | ---
license: mit
---
| [
-0.1285335123538971,
-0.1861683875322342,
0.6529128551483154,
0.49436232447624207,
-0.19319400191307068,
0.23607441782951355,
0.36072009801864624,
0.05056373029947281,
0.5793656706809998,
0.7400146722793579,
-0.650810182094574,
-0.23784008622169495,
-0.7102247476577759,
-0.0478255338966846... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
EleutherAI/CEBaB | EleutherAI | 2023-08-16T23:09:21Z | 43 | 1 | null | [
"task_categories:text-classification",
"language:en",
"license:cc-by-4.0",
"arxiv:2205.14140",
"region:us"
] | 2023-08-16T23:09:21Z | 2023-08-16T05:13:51.000Z | 2023-08-16T05:13:51 | ---
license: cc-by-4.0
dataset_info:
features:
- name: original_id
dtype: int32
- name: edit_goal
dtype: string
- name: edit_type
dtype: string
- name: text
dtype: string
- name: food
dtype: string
- name: ambiance
dtype: string
- name: service
dtype: string
- name: noise
dtype: string
- name: counterfactual
dtype: bool
- name: rating
dtype: int64
splits:
- name: validation
num_bytes: 306529
num_examples: 1673
- name: test
num_bytes: 309751
num_examples: 1689
- name: train
num_bytes: 2282439
num_examples: 11728
download_size: 628886
dataset_size: 2898719
task_categories:
- text-classification
language:
- en
---
# Dataset Card for "CEBaB"
This is a lightly cleaned and simplified version of the CEBaB counterfactual restaurant review dataset from [this paper](https://arxiv.org/abs/2205.14140).
The most important difference from the original dataset is that the `rating` column corresponds to the _median_ rating provided by the Mechanical Turkers,
rather than the majority rating. These are the same whenever a majority rating exists, but when there is no majority rating (e.g. because there were two 1s,
two 2s, and one 3), the original dataset used a `"no majority"` placeholder whereas we are able to provide an aggregate rating for all reviews.
The exact code used to process the original dataset is provided below:
```py
from ast import literal_eval
from datasets import DatasetDict, Value, load_dataset
def compute_median(x: str):
"""Compute the median rating given a multiset of ratings."""
# Decode the dictionary from string format
dist = literal_eval(x)
# Should be a dictionary whose keys are string-encoded integer ratings
# and whose values are the number of times that the rating was observed
assert isinstance(dist, dict)
assert sum(dist.values()) % 2 == 1, "Number of ratings should be odd"
ratings = []
for rating, count in dist.items():
ratings.extend([int(rating)] * count)
ratings.sort()
return ratings[len(ratings) // 2]
cebab = load_dataset('CEBaB/CEBaB')
assert isinstance(cebab, DatasetDict)
# Remove redundant splits
cebab['train'] = cebab.pop('train_inclusive')
del cebab['train_exclusive']
del cebab['train_observational']
cebab = cebab.cast_column(
'original_id', Value('int32')
).map(
lambda x: {
# New column with inverted label for counterfactuals
'counterfactual': not x['is_original'],
# Reduce the rating multiset into a single median rating
'rating': compute_median(x['review_label_distribution'])
}
).map(
# Replace the empty string and 'None' with Apache Arrow nulls
lambda x: {
k: v if v not in ('', 'no majority', 'None') else None
for k, v in x.items()
}
)
# Sanity check that all the splits have the same columns
cols = next(iter(cebab.values())).column_names
assert all(split.column_names == cols for split in cebab.values())
# Clean up the names a bit
cebab = cebab.rename_columns({
col: col.removesuffix('_majority').removesuffix('_aspect')
for col in cols if col.endswith('_majority')
}).rename_column(
'description', 'text'
)
# Drop the unimportant columns
cebab = cebab.remove_columns([
col for col in cols if col.endswith('_distribution') or col.endswith('_workers')
] + [
'edit_id', 'edit_worker', 'id', 'is_original', 'opentable_metadata', 'review'
]).sort([
# Make sure counterfactual reviews come immediately after each original review
'original_id', 'counterfactual'
])
``` | [
-0.6946167349815369,
-0.6743980646133423,
0.2029796689748764,
0.2222827821969986,
-0.36193352937698364,
-0.39566025137901306,
-0.14374130964279175,
-0.028203856199979782,
0.4259285628795624,
0.5809728503227234,
-0.4916161894798279,
-0.8156996369361877,
-0.46290111541748047,
0.1953769028186... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
SEACrowd/minangnlp_mt | SEACrowd | 2023-09-26T12:29:22Z | 43 | 0 | null | [
"language:min",
"language:ind",
"license:mit",
"machine-translation",
"region:us"
] | 2023-09-26T12:29:22Z | 2023-09-26T11:11:19.000Z | 2023-09-26T11:11:19 | ---
license: mit
tags:
- machine-translation
language:
- min
- ind
---
# minangnlp_mt
In this work, we create a Minangkabau–Indonesian (MIN-ID) parallel corpus by using Wikipedia. We obtain 224,180 Minangkabau and
510,258 Indonesian articles, and align documents through title matching, resulting in 111,430 MIN-ID document pairs.
After that, we do sentence segmentation based on simple punctuation heuristics and obtain 4,323,315 Minangkabau sentences. We
then use a bilingual dictionary to translate the Minangkabau articles (MIN) into Indonesian (ID'). Sentence alignment is conducted using the
ROUGE-1 (F1) score (unigram overlap) (Lin, 2004) between ID' and ID, and we pair each MIN sentence with the ID sentence that gives the highest ROUGE-1.
We then discard sentence pairs with a score of less than 0.5, resulting in 345,146 MIN-ID parallel sentences.
We observe that the sentence patterns in the collection are highly repetitive (e.g. 100k sentences are biological term definitions). Therefore,
we conduct a final filtering based on the top-1000 trigrams, iteratively discarding sentences until the frequency of each trigram equals 100. Finally, we
obtain 16,371 MIN-ID parallel sentences and conduct a manual evaluation by asking two native Minangkabau speakers to assess adequacy and
fluency (Koehn and Monz, 2006). The human judgement is based on a 1–5 scale (1 means poor quality, 5 the opposite) and is conducted on 100 random
samples. We average the scores of the two annotators before computing the overall score, achieving 4.98 for adequacy and 4.87 for fluency.
This indicates that the resulting corpus is of high quality for machine translation training.
## Dataset Usage
Run `pip install nusacrowd` before loading the dataset through HuggingFace's `load_dataset`.
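A minimal loading sketch (assuming the dataset's default config loads directly through `datasets`; NusaCrowd also provides its own loaders):
```python
from datasets import load_dataset

# Assumption: the default config exposes the MIN-ID parallel sentences.
ds = load_dataset("SEACrowd/minangnlp_mt")
print(ds)
```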
## Citation
```
@inproceedings{koto-koto-2020-towards,
title = "Towards Computational Linguistics in {M}inangkabau Language: Studies on Sentiment Analysis and Machine Translation",
author = "Koto, Fajri and
Koto, Ikhwan",
booktitle = "Proceedings of the 34th Pacific Asia Conference on Language, Information and Computation",
month = oct,
year = "2020",
address = "Hanoi, Vietnam",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2020.paclic-1.17",
pages = "138--148",
}
```
## License
MIT
## Homepage
[https://github.com/fajri91/minangNLP](https://github.com/fajri91/minangNLP)
### NusaCatalogue
For easy indexing and metadata: [https://indonlp.github.io/nusa-catalogue](https://indonlp.github.io/nusa-catalogue) | [
-0.7880073189735413,
-0.7326773405075073,
0.15478907525539398,
0.5952534675598145,
-0.3640165328979492,
-0.01903381384909153,
-0.310352623462677,
-0.34145116806030273,
0.4861316680908203,
0.540446400642395,
-0.2906944453716278,
-0.7065355777740479,
-0.5238995552062988,
0.662419855594635,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
bigbio/czi_drsm | bigbio | 2023-11-23T12:33:19Z | 43 | 0 | null | [
"multilinguality:monolingual",
"language:en",
"license:cc0-1.0",
"region:us"
] | 2023-11-23T12:33:19Z | 2023-09-26T13:22:47.000Z | 2023-09-26T13:22:47 | ---
language:
- en
bigbio_language:
- English
license: cc0-1.0
bigbio_license_shortname: cc0-1.0
multilinguality: monolingual
pretty_name: CZI DRSM
homepage: https://github.com/chanzuckerberg/DRSM-corpus
bigbio_pubmed: false
bigbio_public: true
bigbio_tasks:
- TXTCLASS
---
# Dataset Card for CZI DRSM
## Dataset Description
- **Homepage:** https://github.com/chanzuckerberg/DRSM-corpus
- **Pubmed:** False
- **Public:** True
- **Tasks:** TXTCLASS
Research Article document classification dataset based on aspects of disease research. Currently, the dataset consists of three subsets:
(A) classifies title/abstracts of papers into the most popular subtypes of clinical, basic, and translational papers (~20k papers);
- Clinical Characteristics, Disease Pathology, and Diagnosis -
Text that describes (A) symptoms, signs, or ‘phenotype’ of a disease;
(B) the effects of the disease on patient organs, tissues, or cells;
(C) the results of clinical tests that reveal pathology (including
biomarkers); (D) research that use this information to figure out
a diagnosis.
- Therapeutics in the clinic -
Text describing how treatments work in the clinic (but not in a clinical trial).
- Disease mechanism -
Text that describes either (A) mechanistic involvement of specific genes in disease
(deletions, gain of function, etc); (B) how molecular signalling or metabolism
(binding, activating, phosphorylation, concentration increase, etc.)
are involved in the mechanism of a disease; or (C) the physiological
mechanism of disease at the level of tissues, organs, and body systems.
- Patient-Based Therapeutics -
Text describing (A) Clinical trials (studies of therapeutic measures being
used on patients in a clinical trial); (B) Post Marketing Drug Surveillance
(effects of a drug after approval in the general population or as part of
‘standard healthcare’); (C) Drug repurposing (how a drug that has been
approved for one use is being applied to a new disease).
(B) identifies whether a title/abstract of a paper describes substantive research into Quality of Life (~10k papers);
- -1 - the paper is not a primary experimental study in rare disease
- 0 - the study does not directly investigate quality of life
- 1 - the study investigates qol but not as its primary contribution
- 2 - the study's primary contribution centers on quality of life measures
(C) identifies if a paper is a natural history study (~10k papers).
- -1 - the paper is not a primary experimental study in rare disease
- 0 - the study is not directly investigating the natural history of a disease
- 1 - the study includes some elements a natural history but not as its primary contribution
- 2 - the study's primary contribution centers on observing the time course of a rare disease
These classifications are particularly relevant in rare disease research, a field that is generally understudied.
## Citation Information
```
# N/A
```
| [
-0.03644198924303055,
-0.4680969715118408,
0.4056791365146637,
-0.08593057841062546,
-0.33537015318870544,
0.10386127233505249,
0.11315945535898209,
-0.5509960651397705,
0.5707467198371887,
0.5564211010932922,
-0.26160621643066406,
-0.727165162563324,
-0.6203343868255615,
0.237175568938255... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
liyucheng/ceval_all | liyucheng | 2023-09-29T10:07:50Z | 43 | 0 | null | [
"region:us"
] | 2023-09-29T10:07:50Z | 2023-09-29T10:04:27.000Z | 2023-09-29T10:04:27 | ---
dataset_info:
features:
- name: id
dtype: string
- name: question
dtype: string
- name: A
dtype: string
- name: B
dtype: string
- name: C
dtype: string
- name: D
dtype: string
- name: answer
dtype: string
- name: explanation
dtype: string
splits:
- name: val
num_bytes: 406528
num_examples: 1346
- name: test
num_bytes: 3720917
num_examples: 12342
- name: dev
num_bytes: 172688
num_examples: 260
download_size: 2792076
dataset_size: 4300133
---
# Dataset Card for "ceval_all"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.5494203567504883,
-0.31406843662261963,
0.4239391088485718,
0.17056383192539215,
-0.2549703121185303,
-0.19323690235614777,
0.1720377802848816,
-0.1453969031572342,
0.9439498782157898,
0.6072936654090881,
-0.6155927777290344,
-1.0429418087005615,
-0.6235786080360413,
-0.2352985739707946... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
bitadin/attributes-v10 | bitadin | 2023-10-04T16:37:47Z | 43 | 0 | null | [
"region:us"
] | 2023-10-04T16:37:47Z | 2023-10-03T09:50:31.000Z | 2023-10-03T09:50:31 | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: completion
dtype: string
splits:
- name: train
num_bytes: 37543937
num_examples: 42533
download_size: 20572906
dataset_size: 37543937
---
# Dataset Card for "attributes-v10"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.6389021277427673,
-0.08920017629861832,
0.28191739320755005,
0.34039705991744995,
0.11257288604974747,
-0.05918734148144722,
0.34871524572372437,
-0.389333575963974,
0.8055630326271057,
0.43912672996520996,
-1.0591931343078613,
-0.6113393902778625,
-0.5157551765441895,
-0.11462496966123... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
approximatelabs/tablib-v1-sample | approximatelabs | 2023-10-13T22:34:05Z | 43 | 8 | null | [
"size_categories:1M<n<10M",
"license:other",
"arxiv:2310.07875",
"region:us"
] | 2023-10-13T22:34:05Z | 2023-10-04T16:55:20.000Z | 2023-10-04T16:55:20 | ---
license: other
pretty_name: TabLib
size_categories:
- 1M<n<10M
extra_gated_prompt: >-
Access to this dataset is automatically granted once this form is completed.
Note that this access request is for the TabLib sample, not [the full TabLib dataset](https://huggingface.co/datasets/approximatelabs/tablib-v1-full).
extra_gated_fields:
I agree to abide by the license requirements of the data contained in TabLib: checkbox
---
[](https://discord.gg/kW9nBQErGe)
<img src="https://approximatelabs.com/tablib.png" width="800" />
# TabLib Sample
**NOTE**: This is a 0.1% sample of [the full TabLib dataset](https://huggingface.co/datasets/approximatelabs/tablib-v1-full).
TabLib is a minimally-preprocessed dataset of 627M tables (69 TiB) extracted from HTML, PDF, CSV, TSV, Excel, and SQLite files from GitHub and Common Crawl.
This includes 867B tokens of "context metadata": each table includes provenance information and table context such as filename, text before/after, HTML metadata, etc.
For more information, read the [paper](https://arxiv.org/abs/2310.07875) & [announcement blog](https://approximatelabs.com/blog/tablib).
# Dataset Details
## Sources
* **GitHub**: nearly all public GitHub repositories
* **Common Crawl**: the `CC-MAIN-2023-23` crawl
## Reading Tables
Tables are stored as serialized Arrow bytes in the `arrow_bytes` column. To read these, you will need to deserialize the bytes:
```python
import datasets
import pyarrow as pa
# load a single file of the dataset
ds = datasets.load_dataset(
'approximatelabs/tablib-v1-sample',
token='...',
)
df = ds['train'].to_pandas()
tables = [pa.RecordBatchStreamReader(b).read_all() for b in df['arrow_bytes']]
```
## Licensing
This dataset is intended for research use only.
For specific licensing information, refer to the license of the specific datum being used.
# Contact
If you have any questions, comments, or concerns about licensing, pii, etc. please contact using [this form](https://forms.gle/C74VTWP7L78QDVR67).
# Approximate Labs
TabLib is a project from Approximate Labs. Find us on [Twitter](https://twitter.com/approximatelabs), [Github](https://github.com/approximatelabs), [Linkedin](https://www.linkedin.com/company/approximate-labs), and [Discord](https://discord.gg/kW9nBQErGe).
# Citations
If you use TabLib for any of your research, please cite the TabLib paper:
```
@misc{eggert2023tablib,
title={TabLib: A Dataset of 627M Tables with Context},
author={Gus Eggert and Kevin Huo and Mike Biven and Justin Waugh},
year={2023},
eprint={2310.07875},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` | [
-0.2783307135105133,
-0.5830659866333008,
0.2683723270893097,
-0.11500802636146545,
-0.05614251643419266,
-0.06311583518981934,
-0.143595352768898,
-0.24119189381599426,
0.3561713397502899,
0.14900581538677216,
-0.3954290449619293,
-0.6065826416015625,
0.13813234865665436,
0.11945387721061... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
madaanpulkit/tab-wnut | madaanpulkit | 2023-11-02T06:07:27Z | 43 | 0 | null | [
"region:us"
] | 2023-11-02T06:07:27Z | 2023-10-11T07:38:29.000Z | 2023-10-11T07:38:29 | ---
dataset_info:
features:
- name: text
dtype: string
- name: tokens
sequence: string
- name: tagged_text
sequence: string
- name: tags
sequence:
class_label:
names:
'0': '0'
'1': B-DIRECT-CODE
'2': I-DIRECT-CODE
'3': B-DIRECT-PERSON
'4': I-DIRECT-PERSON
'5': B-QUASI-DATETIME
'6': I-QUASI-DATETIME
'7': B-QUASI-PERSON
'8': I-QUASI-PERSON
'9': B-QUASI-LOC
'10': I-QUASI-LOC
'11': B-QUASI-QUANTITY
'12': I-QUASI-QUANTITY
'13': B-QUASI-CODE
'14': I-QUASI-CODE
'15': B-QUASI-ORG
'16': I-QUASI-ORG
'17': B-QUASI-DEM
'18': I-QUASI-DEM
'19': B-QUASI-MISC
'20': I-QUASI-MISC
'21': B-DIRECT-ORG
'22': I-DIRECT-ORG
'23': B-DIRECT-DATETIME
'24': I-DIRECT-DATETIME
'25': B-DIRECT-LOC
'26': I-DIRECT-LOC
'27': B-DIRECT-MISC
'28': I-DIRECT-MISC
'29': B-DIRECT-DEM
'30': I-DIRECT-DEM
splits:
- name: train
num_bytes: 45872319
num_examples: 1014
- name: dev
num_bytes: 3749307
num_examples: 127
- name: test
num_bytes: 3619745
num_examples: 127
download_size: 11056816
dataset_size: 53241371
---
# Dataset Card for "tab-wnut"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.6508665680885315,
-0.46953272819519043,
0.10622363537549973,
0.15729834139347076,
-0.1762140393257141,
0.270943284034729,
-0.07176018506288528,
-0.10128361731767654,
0.9958912134170532,
0.5741275548934937,
-0.8620771765708923,
-0.8444288969039917,
-0.34829437732696533,
-0.28843483328819... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
theblackcat102/gpt-4v-eval-samples | theblackcat102 | 2023-11-05T12:20:01Z | 43 | 1 | null | [
"region:us"
] | 2023-11-05T12:20:01Z | 2023-10-13T00:51:36.000Z | 2023-10-13T00:51:36 | ---
configs:
- config_name: default
data_files:
- split: test
path: data/test-*
dataset_info:
features:
- name: image
dtype: image
- name: conversations
dtype: string
splits:
- name: test
num_bytes: 334178840.35
num_examples: 1682
download_size: 324453952
dataset_size: 334178840.35
---
# GPT-4V Eval samples
This is a hand-curated set of images from the web, paired with questions I asked GPT-4V, to understand its abilities and limits.
I mainly focus on the localization, OCR, and understanding abilities of GPT-4V's vision module, so the language part is skipped, as we have already seen it in GPT-4. As long as GPT-4V can extract the required information as text, the rest of the LLM shouldn't have any issue answering the remaining questions.
The number of examples is still pretty small and will continue to increase until I am satisfied with the size, so please check back from time to time.
Note: the dataset viewer has a bug which causes the displayed images to differ from the actual dataset (due to frequent updates). Please load the dataset and save it to a local path for best accuracy.
## How to use:
```python
import json
from datasets import load_dataset
dataset = load_dataset('theblackcat102/gpt-4v-eval-samples')['test']
print(dataset[0]['image'])
print(json.loads(dataset[0]['conversations']))
```
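Given the viewer caveat above, a minimal sketch for keeping a local snapshot (the target path is illustrative):
```python
# Save a local copy so the images match the dataset version you loaded.
dataset.save_to_disk("./gpt-4v-eval-samples-local")
# Reload later with: from datasets import load_from_disk
```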
## Contributions
Please check out my GitHub repo for more details: [theblackcat102/gpt-4v-samples](https://github.com/theblackcat102/gpt-4v-samples)
## Citation
```
@article{yang2023dawn,
title={The Dawn of LMMs: Preliminary Explorations with GPT-4V (ision)},
author={Yang, Zhengyuan and Li, Linjie and Lin, Kevin and Wang, Jianfeng and Lin, Chung-Ching and Liu, Zicheng and Wang, Lijuan},
journal={arXiv preprint arXiv:2309.17421},
year={2023}
}
```
| [
-0.2684652805328369,
-0.6585031151771545,
0.7817088961601257,
-0.06891198456287384,
-0.21350471675395966,
0.014596681110560894,
-0.11287204176187515,
-0.4157443344593048,
-0.011691453866660595,
0.4277186393737793,
-0.41706493496894836,
-0.5328382849693298,
-0.2800806760787964,
0.3322938680... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
coastalcph/fm_aliases | coastalcph | 2023-10-20T08:18:20Z | 43 | 0 | null | [
"region:us"
] | 2023-10-20T08:18:20Z | 2023-10-20T07:40:26.000Z | 2023-10-20T07:40:26 | Entry not found | [
-0.32276469469070435,
-0.22568407654762268,
0.8622258901596069,
0.434614896774292,
-0.5282987952232361,
0.7012966275215149,
0.7915717363357544,
0.07618635147809982,
0.7746022939682007,
0.25632190704345703,
-0.7852814793586731,
-0.22573821246623993,
-0.9104482531547546,
0.5715669393539429,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Horus7/FromTo | Horus7 | 2023-11-07T10:53:44Z | 43 | 0 | null | [
"task_categories:token-classification",
"region:us"
] | 2023-11-07T10:53:44Z | 2023-10-22T12:54:04.000Z | 2023-10-22T12:54:04 | ---
task_categories:
- token-classification
--- | [
-0.128533735871315,
-0.18616747856140137,
0.6529128551483154,
0.4943627715110779,
-0.19319336116313934,
0.2360745221376419,
0.3607197701931,
0.05056330934166908,
0.5793653130531311,
0.740013837814331,
-0.6508103013038635,
-0.23783954977989197,
-0.7102248668670654,
-0.04782583937048912,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
worden1/ultra-feedback-paired | worden1 | 2023-10-24T04:21:21Z | 43 | 1 | null | [
"task_categories:question-answering",
"task_categories:text-generation",
"size_categories:10M<n<100M",
"language:en",
"region:us"
] | 2023-10-24T04:21:21Z | 2023-10-24T04:00:41.000Z | 2023-10-24T04:00:41 | ---
task_categories:
- question-answering
- text-generation
language:
- en
pretty_name: ultra_feedback_paired
size_categories:
- 10M<n<100M
--- | [
-0.128533735871315,
-0.18616747856140137,
0.6529128551483154,
0.4943627715110779,
-0.19319336116313934,
0.2360745221376419,
0.3607197701931,
0.05056330934166908,
0.5793653130531311,
0.740013837814331,
-0.6508103013038635,
-0.23783954977989197,
-0.7102248668670654,
-0.04782583937048912,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
gokul00060/armchat1 | gokul00060 | 2023-10-28T09:29:18Z | 43 | 1 | null | [
"license:mit",
"region:us"
] | 2023-10-28T09:29:18Z | 2023-10-28T08:02:33.000Z | 2023-10-28T08:02:33 | ---
license: mit
---
## This dataset only covers the following objects

| ID | Name    | Color  |
|----|---------|--------|
| 1  | ball    | yellow |
| 2  | battery | silver |
| 3  | wood    | wood   |
| 4  | bowl    | white  |
 | [
0.0160054974257946,
-0.03366042673587799,
0.24456089735031128,
0.2935163676738739,
-0.3947492241859436,
0.12965478003025055,
0.591144323348999,
0.11723477393388748,
0.5281916856765747,
0.41909998655319214,
-0.8528489470481873,
-0.4740232825279236,
-0.35777005553245544,
0.20128236711025238,... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
bragovo/dsum_en | bragovo | 2023-11-05T18:57:50Z | 43 | 0 | null | [
"language:en",
"region:us"
] | 2023-11-05T18:57:50Z | 2023-11-04T07:28:09.000Z | 2023-11-04T07:28:09 | ---
configs:
- config_name: default
task: summarization
data_files:
- split: train
path: data/train-*
language:
- en
--- | [
-0.12853367626667023,
-0.18616794049739838,
0.6529126763343811,
0.4943627417087555,
-0.19319313764572144,
0.23607443273067474,
0.36071979999542236,
0.05056338757276535,
0.5793654322624207,
0.7400138974189758,
-0.6508103013038635,
-0.23783987760543823,
-0.710224986076355,
-0.047825977206230... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Dippi9845/sci_lay | Dippi9845 | 2023-11-04T19:40:46Z | 43 | 0 | null | [
"region:us"
] | 2023-11-04T19:40:46Z | 2023-11-04T19:22:23.000Z | 2023-11-04T19:22:23 | Entry not found | [
-0.3227649927139282,
-0.225684255361557,
0.862226128578186,
0.43461498618125916,
-0.5282987952232361,
0.7012963891029358,
0.7915717363357544,
0.07618629932403564,
0.7746025919914246,
0.2563219666481018,
-0.7852816581726074,
-0.2257382869720459,
-0.9104480743408203,
0.5715669393539429,
-0... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
nicolepng/project_train_set | nicolepng | 2023-11-28T04:38:09Z | 43 | 0 | null | [
"region:us"
] | 2023-11-28T04:38:09Z | 2023-11-07T16:13:53.000Z | 2023-11-07T16:13:53 | Entry not found | [
-0.3227649927139282,
-0.225684255361557,
0.862226128578186,
0.43461498618125916,
-0.5282987952232361,
0.7012963891029358,
0.7915717363357544,
0.07618629932403564,
0.7746025919914246,
0.2563219666481018,
-0.7852816581726074,
-0.2257382869720459,
-0.9104480743408203,
0.5715669393539429,
-0... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
jin05102518/Raw_koen_v2 | jin05102518 | 2023-11-10T15:58:15Z | 43 | 0 | null | [
"region:us"
] | 2023-11-10T15:58:15Z | 2023-11-10T15:53:46.000Z | 2023-11-10T15:53:46 | Entry not found | [
-0.3227649927139282,
-0.225684255361557,
0.862226128578186,
0.43461498618125916,
-0.5282987952232361,
0.7012963891029358,
0.7915717363357544,
0.07618629932403564,
0.7746025919914246,
0.2563219666481018,
-0.7852816581726074,
-0.2257382869720459,
-0.9104480743408203,
0.5715669393539429,
-0... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
danielz01/BigEarthNet-S2-v1.0 | danielz01 | 2023-11-13T09:11:20Z | 43 | 0 | null | [
"region:us"
] | 2023-11-13T09:11:20Z | 2023-11-12T20:17:15.000Z | 2023-11-12T20:17:15 | ---
configs:
- config_name: s2-rgb
data_files:
- split: test
path: s2-rgb/test-*
- split: val
path: s2-rgb/val-*
- split: train
path: s2-rgb/train-*
dataset_info:
config_name: s2-rgb
features:
- name: img
dtype: image
- name: labels
sequence: string
- name: coordinates
struct:
- name: lrx
dtype: int64
- name: lry
dtype: int64
- name: ulx
dtype: int64
- name: uly
dtype: int64
- name: projection
dtype: string
- name: tile_source
dtype: string
- name: acquisition_date
dtype: string
splits:
- name: test
num_bytes: 3453114936.75
num_examples: 125866
- name: val
num_bytes: 3393628600.625
num_examples: 123723
- name: train
num_bytes: 7391482704.125
num_examples: 269695
download_size: 13839792533
dataset_size: 14238226241.5
---
# Dataset Card for "BigEarthNet-S2-v1.0"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.36614206433296204,
-0.06623243540525436,
0.2273523360490799,
0.21576648950576782,
-0.38680514693260193,
-0.34535282850265503,
0.23066940903663635,
-0.18518057465553284,
1.0744163990020752,
0.5938584804534912,
-0.8570621609687805,
-0.6570728421211243,
-0.7954596877098083,
-0.322587013244... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
varox34/med | varox34 | 2023-11-16T14:51:07Z | 43 | 0 | null | [
"region:us"
] | 2023-11-16T14:51:07Z | 2023-11-14T16:12:45.000Z | 2023-11-14T16:12:45 | Entry not found | [
-0.32276472449302673,
-0.22568407654762268,
0.8622258901596069,
0.4346148371696472,
-0.5282984972000122,
0.7012965679168701,
0.7915717363357544,
0.07618629932403564,
0.7746022939682007,
0.2563222646713257,
-0.785281777381897,
-0.22573848068714142,
-0.9104482531547546,
0.5715669393539429,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
openerotica/basilisk-v0.2 | openerotica | 2023-11-18T04:46:33Z | 43 | 5 | null | [
"license:apache-2.0",
"region:us"
] | 2023-11-18T04:46:33Z | 2023-11-18T01:39:15.000Z | 2023-11-18T01:39:15 | ---
license: apache-2.0
---
This is the dataset that was used to train basilisk-7b-v0.2. It contains a subset of orca-best, as well as most of freedom-rp.
If you like what I'm trying to do, please consider subscribing to my Patreon. I'm only asking for about tree fiddy.
https://patreon.com/openerotica | [
-0.39424604177474976,
0.053046055138111115,
-0.1137784942984581,
0.03043847158551216,
-0.8074538111686707,
0.014721393585205078,
0.18955311179161072,
-0.4360947906970978,
0.7952367067337036,
0.4511547386646271,
-0.9075160622596741,
-0.4082108438014984,
-0.32066115736961365,
-0.006198649760... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
idning/ffhq32-caption | idning | 2023-11-21T02:53:23Z | 43 | 0 | null | [
"license:mit",
"region:us"
] | 2023-11-21T02:53:23Z | 2023-11-21T02:10:57.000Z | 2023-11-21T02:10:57 | ---
license: mit
dataset_info:
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 174847756.0
num_examples: 70000
download_size: 182541625
dataset_size: 174847756.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
| [
-0.12853392958641052,
-0.18616779148578644,
0.6529127955436707,
0.49436280131340027,
-0.19319361448287964,
0.23607419431209564,
0.36072003841400146,
0.050563063472509384,
0.579365611076355,
0.7400140762329102,
-0.6508104205131531,
-0.23783954977989197,
-0.7102249264717102,
-0.0478260256350... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
michaelginn/latent-trees-agreement-ID | michaelginn | 2023-11-26T02:52:05Z | 43 | 0 | null | [
"region:us"
] | 2023-11-26T02:52:05Z | 2023-11-24T20:32:27.000Z | 2023-11-24T20:32:27 | ---
dataset_info:
features:
- name: text
dtype: string
- name: labels
dtype:
class_label:
names:
'0': '0'
'1': '1'
splits:
- name: train
num_bytes: 87976.8
num_examples: 2400
- name: eval
num_bytes: 29325.6
num_examples: 800
- name: test
num_bytes: 29325.6
num_examples: 800
download_size: 53220
dataset_size: 146628.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: eval
path: data/eval-*
- split: test
path: data/test-*
---
# Dataset Card for "latent-trees-agreement-ID"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.3402356803417206,
-0.2793717086315155,
0.2816840708255768,
0.299689918756485,
-0.3870009481906891,
0.2905147969722748,
0.3010704815387726,
-0.11942082643508911,
0.7732731103897095,
0.13998448848724365,
-0.6947784423828125,
-0.7373343110084534,
-0.6647218465805054,
-0.2833920419216156,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
RahulRaman/counting-object-sd | RahulRaman | 2023-11-24T22:59:30Z | 43 | 0 | null | [
"license:mit",
"region:us"
] | 2023-11-24T22:59:30Z | 2023-11-24T22:13:27.000Z | 2023-11-24T22:13:27 | ---
license: mit
dataset_info:
features:
- name: input_image
dtype: image
- name: edit_prompt
dtype: string
- name: edited_image
dtype: image
splits:
- name: train
num_bytes: 155541.0
num_examples: 1
download_size: 157933
dataset_size: 155541.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
| [
-0.12853392958641052,
-0.18616779148578644,
0.6529127955436707,
0.49436280131340027,
-0.19319361448287964,
0.23607419431209564,
0.36072003841400146,
0.050563063472509384,
0.579365611076355,
0.7400140762329102,
-0.6508104205131531,
-0.23783954977989197,
-0.7102249264717102,
-0.0478260256350... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
thomwolf/github-python | thomwolf | 2021-07-07T11:53:28Z | 42 | 7 | null | [
"region:us"
] | 2021-07-07T11:53:28Z | 2022-03-02T23:29:22.000Z | 2022-03-02T23:29:22 | Entry not found | [
-0.32276472449302673,
-0.22568407654762268,
0.8622258901596069,
0.4346148371696472,
-0.5282984972000122,
0.7012965679168701,
0.7915717363357544,
0.07618629932403564,
0.7746022939682007,
0.2563222646713257,
-0.785281777381897,
-0.22573848068714142,
-0.9104482531547546,
0.5715669393539429,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
transformersbook/codeparrot | transformersbook | 2022-02-05T16:15:40Z | 42 | 36 | null | [
"python",
"code",
"region:us"
] | 2022-02-05T16:15:40Z | 2022-03-02T23:29:22.000Z | 2022-03-02T23:29:22 | ---
tags:
- python
- code
---
# CodeParrot 🦜 Dataset
## What is it?
This is the full CodeParrot dataset. It contains Python files used to train the code generation model in Chapter 10: Training Transformers from Scratch in the [NLP with Transformers book](https://learning.oreilly.com/library/view/natural-language-processing/9781098103231/). You can find the full code in the accompanying [Github repository](https://github.com/nlp-with-transformers/notebooks/blob/main/10_transformers-from-scratch.ipynb).
## Creation
It was created with the GitHub dataset available via Google's BigQuery. It contains approximately 22 million Python files and is 180 GB (50 GB compressed) in size. The SQL query used to create the dataset is the following:
```sql
SELECT
f.repo_name, f.path, c.copies, c.size, c.content, l.license
FROM
`bigquery-public-data.github_repos.files` AS f
JOIN
`bigquery-public-data.github_repos.contents` AS c
ON
f.id = c.id
JOIN
`bigquery-public-data.github_repos.licenses` AS l
ON
f.repo_name = l.repo_name
WHERE
NOT c.binary
AND ((f.path LIKE '%.py')
AND (c.size BETWEEN 1024 AND 1048575))
```
## Duplication
Note that about 70% of the dataset is duplicated. If you use the dataset, make sure to deal with the duplicates appropriately. See [codeparrot-clean](https://huggingface.co/datasets/lvwerra/codeparrot-clean) for a deduplicated version of this dataset.
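As one possible mitigation, here is a minimal exact-duplicate filter sketch (hash-based and in-memory; the near-duplicate removal used for codeparrot-clean is more involved, and the `content` column name comes from the query above):
```python
import hashlib
from datasets import load_dataset

# Stream to avoid downloading the full 50 GB archive before filtering.
ds = load_dataset("transformersbook/codeparrot", split="train", streaming=True)

seen = set()

def is_first_occurrence(example):
    # Hash the file content and drop later exact copies.
    digest = hashlib.md5(example["content"].encode("utf-8")).hexdigest()
    if digest in seen:
        return False
    seen.add(digest)
    return True

deduped = ds.filter(is_first_occurrence)
```
 | [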
-0.4493507146835327,
-0.2454541176557541,
-0.18914784491062164,
0.01970096118748188,
-0.392004132270813,
0.16630613803863525,
-0.15097640454769135,
0.029605261981487274,
0.2747681736946106,
0.7408788800239563,
-0.641850471496582,
-0.39302197098731995,
-0.2935982942581177,
0.280081003904342... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
crystina-z/no-nonself-title-mrtydi-corpus | crystina-z | 2022-03-11T22:55:53Z | 42 | 0 | null | [
"region:us"
] | 2022-03-11T22:55:53Z | 2022-03-11T22:40:38.000Z | 2022-03-11T22:40:38 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622263669967651,
0.43461522459983826,
-0.52829909324646,
0.7012971639633179,
0.7915719747543335,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104475975036621,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Guldeniz/flower_dataset | Guldeniz | 2022-04-09T20:52:59Z | 42 | 0 | null | [
"region:us"
] | 2022-04-09T20:52:59Z | 2022-04-09T20:36:46.000Z | 2022-04-09T20:36:46 | #flowersdataset #segmentation #VGG
# Dataset Card for Flowers Dataset
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Official VGG's README.md](#official-vggs-readmemd)
## Dataset Description
- **Homepage:** https://www.robots.ox.ac.uk/~vgg/data/flowers/17/index.html
- **Repository:** https://huggingface.co/datasets/Guldeniz/flower_dataset
- **Paper:** [Needs More Information]
- **Leaderboard:** [Needs More Information]
- **Point of Contact:** [Needs More Information]
### Dataset Summary
VGG have created a 17-category flower dataset with 80 images for each class. The flowers chosen are some common flowers in the UK. The images have large scale, pose and light variations, and there are also classes with large variations of images within the class and close similarity to other classes. The categories can be seen in the figure on the dataset homepage. We randomly split the dataset into 3 different training, validation and test sets. A subset of the images have been ground-truth labelled for segmentation.
You can find the split files at the link above, provided as a .mat file.
### Official VGG's README.md
17 Flower Category Database
----------------------------------------------
This set contains images of flowers belonging to 17 different categories.
The images were acquired by searching the web and taking pictures. There are
80 images for each category.
The database was used in:
Nilsback, M-E. and Zisserman, A. A Visual Vocabulary for Flower Classification.
Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (2006)
http://www.robots.ox.ac.uk/~vgg/publications/papers/nilsback06.{pdf,ps.gz}.
The datasplits used in this paper are specified in datasplits.mat
There are 3 separate splits. The results in the paper are averaged over the 3 splits.
Each split has a training file (trn1,trn2,trn3), a validation file (val1, val2, val3)
and a testfile (tst1, tst2 or tst3).
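A minimal sketch (field names `trn1`/`val1`/`tst1` come from the README above; requires SciPy) for reading the split indices:
```python
from scipy.io import loadmat

splits = loadmat("datasplits.mat")
train_idx = splits["trn1"].ravel()  # image indices for the first training split
print(train_idx[:10])
```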
Segmentation Ground Truth
------------------------------------------------
The ground truth is given for a subset of the images from 13 different
categories.
More details can be found in:
Nilsback, M-E. and Zisserman, A. Delving into the whorl of flower segmentation.
Proceedings of the British Machine Vision Conference (2007)
http://www.robots.ox.ac.uk/~vgg/publications/papers/nilsback06.{pdf,ps.gz}.
The ground truth file also contains the file imlist.mat, which indicates
which images in the original database have been annotated.
Distance matrices
-----------------------------------------------
We provide two set of distance matrices:
1. distancematrices17gcfeat06.mat
- Distance matrices using the same features and segmentation as detailed in:
Nilsback, M-E. and Zisserman, A. A Visual Vocabulary for Flower Classification.
Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition(2006)
http://www.robots.ox.ac.uk/~vgg/publications/papers/nilsback06.{pdf,ps.gz}.
2. distancematrices17itfeat08.mat
- Distance matrices using the same features as described in:
Nilsback, M-E. and Zisserman, A. Automated flower classification over a large number of classes.
Proceedings of the Indian Conference on Computer Vision, Graphics and Image Processing (2008)
http://www.robots.ox.ac.uk/~vgg/publications/papers/nilsback08.{pdf,ps.gz}.
and the iterative segmentation scheme detailed in
Nilsback, M-E. and Zisserman, A. Delving into the whorl of flower segmentation.
Proceedings of the British Machine Vision Conference (2007)
http://www.robots.ox.ac.uk/~vgg/publications/papers/nilsback06.{pdf,ps.gz}.
-0.43512672185897827,
-0.5809287428855896,
0.4221346378326416,
0.3072062134742737,
-0.29811346530914307,
-0.26065102219581604,
-0.07171368598937988,
-0.7349885106086731,
-0.003149945754557848,
0.1124582588672638,
-0.7830734252929688,
-0.8427574038505554,
-0.5441911816596985,
0.289608865976... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
ziq/depression_tweet | ziq | 2022-06-06T07:09:06Z | 42 | 0 | null | [
"region:us"
] | 2022-06-06T07:09:06Z | 2022-06-06T06:48:27.000Z | 2022-06-06T06:48:27 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622263669967651,
0.43461522459983826,
-0.52829909324646,
0.7012971639633179,
0.7915719747543335,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104475975036621,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Heriot-WattUniversity/dialog_babi | Heriot-WattUniversity | 2022-07-12T08:27:12Z | 42 | 2 | null | [
"arxiv:1605.07683",
"arxiv:1502.05698",
"region:us"
] | 2022-07-12T08:27:12Z | 2022-07-09T09:32:32.000Z | 2022-07-09T09:32:32 | # Dialog bAbI tasks data
In this directory is the set of 6 tasks for testing end-to-end dialog systems in the restaurant domain as described in the paper "Learning End-to-End Goal-Oriented Dialog" by Bordes & Weston (http://arxiv.org/abs/1605.07683). The aim is that each task tests a unique aspect of dialog. Tasks are designed to complement the set of 20 bAbI tasks for story understanding already released with the paper "Towards AI Complete Question Answering: A Set of Prerequisite Toy Tasks" by Weston et al. (http://arxiv.org/abs/1502.05698).
## Data
For each task, there are 1000 dialogs for training, 1000 for development and 1000 for testing. For tasks 1-5, we also include a second test set (with suffix -OOV.txt) that contains dialogs including entities not present in training and development sets.
The file format for each task is as follows:
`ID user_utterance [tab] bot_utterances`
The IDs for a given dialog start at 1 and increase. When the IDs in a file reset back to 1, you can consider the following sentences a new dialog. When the bot speaks twice in a row, the special token "<SILENCE>" is used to fill in for the missing user utterance.
For example (for task 1):
```
1 hi hello what can i help you with today
2 can you make a restaurant reservation with italian cuisine for six people in a cheap price range i'm on it
3 <SILENCE> where should it be
4 rome please ok let me look into some options for you
5 <SILENCE> api_call italian rome six cheap
```
The goal of the tasks is to predict the bot utterances, that can be sentences or API calls (sentences starting with the special token "api_call").
Along with the train, dev and test sets, we also include a knowledge base file (dialog-babi-kb-all.txt) that contain all entities appearing in dialogs for tasks 1-5. We also include a file containing the candidates to select the answer from (dialog-babi-candidates.txt) for tasks 1-5, that is simply made of all the bot utterances in train, dev, test for these tasks.
Task 6 is a bit different since its data comes from the Dialog State Tracking Challenge 2 (http://camdial.org/~mh521/dstc/), which we modified to convert it into the same format as the other tasks. There is no OOV test set associated with this task and the knowledge base (dialog-babi-task6-dstc2-kb.txt) is imperfect. This task has its own candidates file (dialog-babi-task6-dstc2-candidates.txt).
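A minimal parsing sketch for the format described above (the filename at the bottom is an assumption; point it at whichever task file you downloaded):
```python
def read_dialogs(path):
    """Split a dialog bAbI file into dialogs of (user, bot) turn pairs."""
    dialogs, current = [], []
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.rstrip("\n")
            if not line:
                continue
            idx, _, rest = line.partition(" ")
            if int(idx) == 1 and current:  # IDs reset to 1 -> a new dialog starts
                dialogs.append(current)
                current = []
            user, _, bot = rest.partition("\t")
            current.append((user, bot))
    if current:
        dialogs.append(current)
    return dialogs

dialogs = read_dialogs("dialog-babi-task1-API-calls-trn.txt")  # filename is an assumption
```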
## License
This dataset is released under Creative Commons Attribution 3.0 Unported license. A copy of this license is included with the data.
## Contact
The author of this port is Alessandro Suglia, and he has only made the dataset available via Hugging Face datasets.
For more details on the dataset and baselines, see the paper "Learning End-to-End Goal-Oriented Dialog" by Antoine Bordes and Jason Weston (http://arxiv.org/abs/1605.07683). For any information, contact Antoine Bordes : abordes (at) fb (dot) com .
| [
-0.27448979020118713,
-0.9917677640914917,
0.28202223777770996,
-0.07982929795980453,
0.06955923140048981,
0.037552181631326675,
-0.01826575957238674,
-0.3814316987991333,
-0.09137967228889465,
0.7114912271499634,
-0.8999067544937134,
-0.4462299942970276,
-0.2767004072666168,
0.09331561625... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
bigbio/genetag | bigbio | 2022-12-22T15:44:38Z | 42 | 2 | null | [
"multilinguality:monolingual",
"language:en",
"license:other",
"region:us"
] | 2022-12-22T15:44:38Z | 2022-11-13T22:08:32.000Z | 2022-11-13T22:08:32 |
---
language:
- en
bigbio_language:
- English
license: other
multilinguality: monolingual
bigbio_license_shortname: NCBI_LICENSE
pretty_name: GENETAG
homepage: https://github.com/openbiocorpora/genetag
bigbio_pubmed: True
bigbio_public: True
bigbio_tasks:
- NAMED_ENTITY_RECOGNITION
---
# Dataset Card for GENETAG
## Dataset Description
- **Homepage:** https://github.com/openbiocorpora/genetag
- **Pubmed:** True
- **Public:** True
- **Tasks:** NER
Named entity recognition (NER) is an important first step for text mining the biomedical literature.
Evaluating the performance of biomedical NER systems is impossible without a standardized test corpus.
The annotation of such a corpus for gene/protein name NER is a difficult process due to the complexity
of gene/protein names. We describe the construction and annotation of GENETAG, a corpus of 20K MEDLINE®
sentences for gene/protein NER. 15K GENETAG sentences were used for the BioCreAtIvE Task 1A Competition.
## Citation Information
```
@article{Tanabe2005,
author = {Lorraine Tanabe and Natalie Xie and Lynne H Thom and Wayne Matten and W John Wilbur},
title = {{GENETAG}: a tagged corpus for gene/protein named entity recognition},
journal = {{BMC} Bioinformatics},
volume = {6},
year = {2005},
url = {https://doi.org/10.1186/1471-2105-6-S1-S3},
doi = {10.1186/1471-2105-6-s1-s3},
biburl = {},
bibsource = {}
}
```
| [
-0.48755159974098206,
-0.4562014639377594,
0.0976119115948677,
-0.13416479527950287,
-0.31888827681541443,
-0.04351365938782692,
-0.13063791394233704,
-0.6937922835350037,
0.591434121131897,
0.3078661561012268,
-0.27980470657348633,
-0.4991764724254608,
-0.6638529300689697,
0.7407892346382... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
xusenlin/cmeee | xusenlin | 2022-12-07T14:24:00Z | 42 | 3 | null | [
"region:us"
] | 2022-12-07T14:24:00Z | 2022-12-07T14:16:08.000Z | 2022-12-07T14:16:08 | ---
dataset_info:
features:
- name: text
dtype: string
- name: entities
list:
- name: id
dtype: int64
- name: entity
dtype: string
- name: start_offset
dtype: int64
- name: end_offset
dtype: int64
- name: label
dtype: string
splits:
- name: train
num_bytes: 5289666
num_examples: 15000
- name: test
num_bytes: 461472
num_examples: 3000
- name: validation
num_bytes: 1752698
num_examples: 5000
download_size: 3359069
dataset_size: 7503836
---
# CMeEE: Chinese Medical Named Entity Recognition Dataset
Field descriptions (a sanity-check sketch follows the list):
+ `text`: the text
+ `entities`: the entities contained in the text
+ `id`: the entity `id`
+ `entity`: the string corresponding to the entity
+ `start_offset`: the start position of the entity
+ `end_offset`: one past the end position of the entity
+ `label`: the label of the entity
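A minimal sanity-check sketch, assuming the offsets are character-based as described above:
```python
from datasets import load_dataset

ds = load_dataset("xusenlin/cmeee", split="train")
sample = ds[0]
for ent in sample["entities"]:
    # end_offset is one past the last character, so plain slicing applies
    span = sample["text"][ent["start_offset"]:ent["end_offset"]]
    assert span == ent["entity"], (span, ent["entity"])
```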
| [
-0.840850830078125,
-0.892907977104187,
0.5226985216140747,
0.3399074077606201,
-0.6539755463600159,
0.04288371279835701,
0.10504792630672455,
-0.4010762870311737,
0.7671908140182495,
0.25808602571487427,
-0.4637112021446228,
-0.9510857462882996,
-0.8738322257995605,
-0.15990909934043884,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
clip-benchmark/wds_vtab-flowers | clip-benchmark | 2023-01-20T07:14:22Z | 42 | 0 | null | [
"region:us"
] | 2023-01-20T07:14:22Z | 2023-01-20T07:13:41.000Z | 2023-01-20T07:13:41 | Entry not found | [
-0.32276472449302673,
-0.22568407654762268,
0.8622258901596069,
0.4346148371696472,
-0.5282984972000122,
0.7012965679168701,
0.7915717363357544,
0.07618629932403564,
0.7746022939682007,
0.2563222646713257,
-0.785281777381897,
-0.22573848068714142,
-0.9104482531547546,
0.5715669393539429,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
davanstrien/notebooks_by_repo_type | davanstrien | 2023-11-28T01:31:04Z | 42 | 0 | null | [
"region:us"
] | 2023-11-28T01:31:04Z | 2023-03-27T09:16:12.000Z | 2023-03-27T09:16:12 | ---
dataset_info:
features:
- name: repo_type
dtype: large_string
- name: repo_notebook_count
dtype: int64
- name: date
dtype: date32
splits:
- name: train
num_bytes: 10550
num_examples: 422
download_size: 5072
dataset_size: 10550
---
# Dataset Card for "notebooks_by_repo_type"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.3241296112537384,
0.00674766581505537,
0.2476266473531723,
0.26046857237815857,
-0.2796349823474884,
0.18740269541740417,
-0.04864993318915367,
0.016417181119322777,
0.8191913366317749,
0.6630593538284302,
-0.563766598701477,
-0.9094176888465881,
-0.4772111177444458,
-0.0087682949379086... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
davanstrien/notebooks_on_the_hub_raw | davanstrien | 2023-11-28T01:31:00Z | 42 | 0 | null | [
"region:us"
] | 2023-11-28T01:31:00Z | 2023-04-03T12:41:41.000Z | 2023-04-03T12:41:41 | ---
dataset_info:
features:
- name: date
dtype: date32
- name: repo_type
dtype: large_string
- name: user
dtype: large_string
- name: repo_id
dtype: large_string
- name: repo_notebook_count
dtype: int64
splits:
- name: train
num_bytes: 63192539
num_examples: 959671
download_size: 24373235
dataset_size: 63192539
---
# Dataset Card for "notebooks_on_the_hub_raw"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.46061551570892334,
-0.5036563873291016,
0.16005496680736542,
0.281045526266098,
-0.12038256227970123,
0.20998729765415192,
-0.028321102261543274,
-0.05998429283499718,
0.9584814310073853,
0.28292882442474365,
-0.6815826892852783,
-0.7447462677955627,
-0.18636783957481384,
-0.08952058106... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
metaeval/chaos-mnli-ambiguity | metaeval | 2023-04-11T13:43:17Z | 42 | 0 | null | [
"region:us"
] | 2023-04-11T13:43:17Z | 2023-04-11T13:37:19.000Z | 2023-04-11T13:37:19 | The MNLI portion of ChaosNLI, with a Gini coefficient pre-computed for each example (ranging from 0 to 1).
A high Gini coefficient means unambiguous inference.
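For intuition, here is a sketch of one normalized Gini formulation over a 3-class label distribution; whether this exact formula was used to pre-compute the dataset's values is an assumption:
```python
def normalized_gini(p):
    """0 for a uniform distribution, 1 when all mass sits on one label."""
    n = len(p)
    mean_abs_diff = sum(abs(a - b) for a in p for b in p) / (n * n)
    gini = mean_abs_diff / (2 * sum(p) / n)
    return gini * n / (n - 1)

print(normalized_gini([1 / 3, 1 / 3, 1 / 3]))  # 0.0   -> ambiguous
print(normalized_gini([0.98, 0.01, 0.01]))     # ~0.97 -> unambiguous
```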
```
@inproceedings{xzhou2022distnli,
Author = {Xiang Zhou and Yixin Nie and Mohit Bansal},
Booktitle = {Findings of the Association for Computational Linguistics: ACL 2022},
Publisher = {Association for Computational Linguistics},
Title = {Distributed NLI: Learning to Predict Human Opinion Distributions for Language Reasoning},
Year = {2022}
}
``` | [
-0.45982372760772705,
-0.5642482042312622,
0.04255004972219467,
0.5811824798583984,
-0.12992902100086212,
-0.09203768521547318,
0.04897983744740486,
-0.5110451579093933,
0.5697202682495117,
-0.03630997985601425,
-0.8726965188980103,
-0.43263646960258484,
-0.4137818515300751,
0.150115519762... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
mstz/steel_plates | mstz | 2023-04-16T18:03:07Z | 42 | 1 | null | [
"task_categories:tabular-classification",
"size_categories:1K<n<5K",
"language:en",
"license:cc",
"steel_plates",
"tabular_classification",
"binary_classification",
"multiclass_classification",
"UCI",
"region:us"
] | 2023-04-16T18:03:07Z | 2023-04-14T15:26:35.000Z | 2023-04-14T15:26:35 | ---
language:
- en
tags:
- steel_plates
- tabular_classification
- binary_classification
- multiclass_classification
- UCI
pretty_name: Steel Plates
size_categories:
- 1K<n<5K
task_categories:
- tabular-classification
configs:
- steel_plates
- steel_plates_0
- steel_plates_1
- steel_plates_2
- steel_plates_3
- steel_plates_4
- steel_plates_5
- steel_plates_6
license: cc
---
# Steel Plates
The [Steel Plates dataset](https://archive-beta.ics.uci.edu/dataset/198/steel+plates+faults) from the [UCI repository](https://archive-beta.ics.uci.edu/).
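A minimal loading sketch (the config names used here are defined in the table in the next section):
```python
from datasets import load_dataset

multiclass = load_dataset("mstz/steel_plates", "steel_plates")  # 7-way fault classification
binary = load_dataset("mstz/steel_plates", "steel_plates_3")    # "is the input of class 3?"
```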
# Configurations and tasks
| **Configuration** | **Task** | **Description** |
|-----------------------|---------------------------|-------------------------|
| steel_plates | Multiclass classification. | Which of the seven classes does the input belong to? |
| steel_plates_0 | Binary classification. | Is the input of class 0? |
| steel_plates_1 | Binary classification. | Is the input of class 1? |
| steel_plates_2 | Binary classification. | Is the input of class 2? |
| steel_plates_3 | Binary classification. | Is the input of class 3? |
| steel_plates_4 | Binary classification. | Is the input of class 4? |
| steel_plates_5 | Binary classification. | Is the input of class 5? |
| steel_plates_6 | Binary classification. | Is the input of class 6? | | [
-0.6809302568435669,
-0.03178585320711136,
0.6375670433044434,
0.33013519644737244,
-0.18740376830101013,
0.0882507860660553,
0.388309121131897,
0.12169593572616577,
0.24040977656841278,
0.6550065279006958,
-0.6043320298194885,
-0.7886854410171509,
-0.7146094441413879,
-0.23535528779029846... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
PaulineSanchez/Translation_words_and_sentences_english_french | PaulineSanchez | 2023-04-27T12:29:31Z | 42 | 0 | null | [
"task_categories:translation",
"language:en",
"language:fr",
"words",
"sentences",
"everyday life",
"casual",
"region:us"
] | 2023-04-27T12:29:31Z | 2023-04-27T12:23:20.000Z | 2023-04-27T12:23:20 | ---
task_categories:
- translation
language:
- en
- fr
tags:
- words
- sentences
- everyday life
- casual
---
# Dataset Card for Dataset Name
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1).
This dataset is a clean version (all NaN rows removed) of this dataset: https://www.kaggle.com/datasets/devicharith/language-translation-englishfrench. I'm not the person who posted it first on Kaggle.
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] | [
-0.45667073130607605,
-0.49136802554130554,
-0.06790883839130402,
0.3569483757019043,
-0.4825454652309418,
0.01670614816248417,
-0.27151650190353394,
-0.3108958899974823,
0.6836026906967163,
0.6072930097579956,
-0.9122272729873657,
-1.0814622640609741,
-0.7831945419311523,
0.06850252300500... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
MU-NLPC/Calc-ape210k | MU-NLPC | 2023-10-30T15:56:39Z | 42 | 13 | null | [
"license:mit",
"arxiv:2305.15017",
"arxiv:2009.11506",
"region:us"
] | 2023-10-30T15:56:39Z | 2023-05-22T14:20:16.000Z | 2023-05-22T14:20:16 | ---
license: mit
dataset_info:
- config_name: default
features:
- name: id
dtype: string
- name: question
dtype: string
- name: question_chinese
dtype: string
- name: chain
dtype: string
- name: result
dtype: string
- name: result_float
dtype: float64
- name: equation
dtype: string
splits:
- name: train
num_bytes: 111988047
num_examples: 195179
- name: validation
num_bytes: 1172933
num_examples: 1783
- name: test
num_bytes: 1157061
num_examples: 1785
download_size: 50827709
dataset_size: 114318041
- config_name: original-splits
features:
- name: id
dtype: string
- name: question
dtype: string
- name: question_chinese
dtype: string
- name: chain
dtype: string
- name: result
dtype: string
- name: result_float
dtype: float64
- name: equation
dtype: string
splits:
- name: train
num_bytes: 111988047
num_examples: 195179
- name: validation
num_bytes: 2798479
num_examples: 4867
- name: test
num_bytes: 2793355
num_examples: 4867
download_size: 52234086
dataset_size: 117579881
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
- config_name: original-splits
data_files:
- split: train
path: original-splits/train-*
- split: validation
path: original-splits/validation-*
- split: test
path: original-splits/test-*
---
# Dataset Card for Calc-ape210k
## Summary
This dataset is an instance of the Ape210K dataset, converted to a simple HTML-like language that can be easily parsed (e.g. by BeautifulSoup; see the sketch after the list below). The data contains 3 types of tags:
- gadget: A tag whose content is intended to be evaluated by calling an external tool (sympy-based calculator in this case)
- output: An output of the external tool
- result: The final answer to the mathematical problem (a number)
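A minimal parsing sketch with BeautifulSoup (the chain string below is invented to illustrate the tag scheme, not taken from the data):
```python
from bs4 import BeautifulSoup

chain = "<gadget>2+3</gadget><output>5</output><result>5</result>"
soup = BeautifulSoup(chain, "html.parser")
print(soup.find("gadget").text)  # expression sent to the calculator tool
print(soup.find("result").text)  # final answer to the problem
```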
## Supported Tasks
The dataset is intended for training Chain-of-Thought reasoning **models able to use external tools** to enhance the factuality of their responses.
This dataset presents in-context scenarios where models can outsource the computations in the reasoning chain to a calculator.
## Construction Process
First, we translated the questions into English using Google Translate. Next, we parsed the equations and the results. We linearized
the equations into a sequence of elementary steps and evaluated them using a sympy-based calculator. We numerically compare the output
with the result in the data and remove all examples where they do not match (less than 3% loss in each split). Finally, we save the
chain of steps in the HTML-like language in the `chain` column. We keep the original columns in the dataset for convenience. We also perform
in-dataset and cross-dataset data-leak detection within [Calc-X collection](https://huggingface.co/collections/MU-NLPC/calc-x-652fee9a6b838fd820055483).
Specifically for Ape210K, we removed parts of the validation and test splits, with around 1,700 examples remaining in each.
You can read more information about this process in our [Calc-X paper](https://arxiv.org/abs/2305.15017).
## Data splits
The default config contains filtered splits with data leaks removed.
You can load it using:
```python
datasets.load_dataset("MU-NLPC/calc-ape210k")
```
In the `original-splits` config, the data splits are unfiltered and correspond to the original Ape210K dataset. See [ape210k dataset github](https://github.com/Chenny0808/ape210k) and [the paper](https://arxiv.org/abs/2009.11506) for more info.
You can load it using:
```python
datasets.load_dataset("MU-NLPC/calc-ape210k", "original-splits")
```
## Attributes
- **id** - id of the example
- **question** - the description of the math problem. Automatically translated from the `question_chinese` column into English using Google Translate
- **question_chinese** - the original description of the math problem in Chinese
- **chain** - linearized `equation`, sequence of arithmetic steps in HTML-like language that can be evaluated using our sympy-based calculator
- **result** - result as a string (can be an integer, float, or a fraction)
- **result_float** - result, converted to a float
- **equation** - a nested expression that evaluates to the correct answer
Attributes **id**, **question**, **chain**, and **result** are present in all datasets in [Calc-X collection](https://huggingface.co/collections/MU-NLPC/calc-x-652fee9a6b838fd820055483).
## Related work
This dataset was created as a part of a larger effort in training models capable of using a calculator during inference, which we call Calcformers.
- [**Calc-X collection**](https://huggingface.co/collections/MU-NLPC/calc-x-652fee9a6b838fd820055483) - datasets for training Calcformers
- [**Calcformers collection**](https://huggingface.co/collections/MU-NLPC/calcformers-65367392badc497807b3caf5) - calculator-using models we trained and published on HF
- [**Calc-X and Calcformers paper**](https://arxiv.org/abs/2305.15017)
- [**Calc-X and Calcformers repo**](https://github.com/prompteus/calc-x)
Here are links to the original dataset:
- [**original Ape210k dataset and repo**](https://github.com/Chenny0808/ape210k)
- [**original Ape210k paper**](https://arxiv.org/abs/2009.11506)
## Licence
MIT, consistently with the original dataset.
## Cite
If you use this version of the dataset in research, please cite the [original Ape210k paper](https://arxiv.org/abs/2009.11506), and the [Calc-X paper](https://arxiv.org/abs/2305.15017) as follows:
```bibtex
@inproceedings{kadlcik-etal-2023-soft,
title = "Calc-X and Calcformers: Empowering Arithmetical Chain-of-Thought through Interaction with Symbolic Systems",
author = "Marek Kadlčík and Michal Štefánik and Ondřej Sotolář and Vlastimil Martinek",
booktitle = "Proceedings of the The 2023 Conference on Empirical Methods in Natural Language Processing: Main track",
month = dec,
year = "2023",
address = "Singapore, Singapore",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/2305.15017",
}
``` | [
-0.511009931564331,
-0.48047757148742676,
0.19785691797733307,
0.22766044735908508,
0.04139144718647003,
-0.19017547369003296,
-0.18192116916179657,
-0.37438857555389404,
0.20941385626792908,
0.3604888617992401,
-0.7455439567565918,
-0.18350094556808472,
-0.29360589385032654,
0.25759023427... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
EasyTerms/Manuel_dataset | EasyTerms | 2023-11-02T14:03:24Z | 42 | 2 | null | [
"task_categories:summarization",
"size_categories:n<1K",
"language:en",
"license:apache-2.0",
"legal",
"region:us"
] | 2023-11-02T14:03:24Z | 2023-06-07T08:50:06.000Z | 2023-06-07T08:50:06 | ---
license: apache-2.0
task_categories:
- summarization
language:
- en
tags:
- legal
pretty_name: lilo
size_categories:
- n<1K
---
| [
-0.1285335123538971,
-0.1861683875322342,
0.6529128551483154,
0.49436232447624207,
-0.19319400191307068,
0.23607441782951355,
0.36072009801864624,
0.05056373029947281,
0.5793656706809998,
0.7400146722793579,
-0.650810182094574,
-0.23784008622169495,
-0.7102247476577759,
-0.0478255338966846... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
teleprint-me/phi-1 | teleprint-me | 2023-07-08T04:01:52Z | 42 | 32 | null | [
"license:cc-by-nc-sa-3.0",
"arxiv:2306.11644",
"region:us"
] | 2023-07-08T04:01:52Z | 2023-07-04T03:49:40.000Z | 2023-07-04T03:49:40 | ---
title: 'Phi-1 Model Dataset'
date: '2023-07-03'
license: cc-by-nc-sa-3.0
---
## Dataset Description
- **Homepage:** [teleprint.me](https://teleprint.me)
- **Repository:** [phi-1](https://huggingface.co/datasets/teleprint-me/phi-1)
- **Paper:** [2306.11644v1](https://arxiv.org/abs/2306.11644v1)
- **Leaderboard:** [Link to the leaderboard]
- **Point of Contact:** [aberrio@teleprint.me](aberrio@teleprint.me)
### Dataset Summary
This dataset is created for training the phi-1 model, based on the paper
"Textbooks are All You Need". It contains high-quality data derived from various
textbooks, transformed and synthesized using OpenAI's GPT-3.5 and GPT-4 models.
For optimal results, it is recommended to train models with the following
parameters and sequence lengths:
- For a model with 350M parameters, use a sequence length of 2048.
- For a model with 700M parameters, use a sequence length of 4096.
- For a model with 1.3B parameters, use a sequence length of 8096.
Please note that the dataset is currently in its initial phase of planning and
collection. The process involves preparing the data, extracting it, formatting
it, chunking it, and preparing it for synthesis. Scripts for preparing and
processing the data for the model will be developed. Once the data is generated,
it will undergo a review and revision process to ensure its quality and
relevance.
These recommendations and notes are based on the dataset creator's initial plans
and may be subject to change as the project progresses.
**NOTE**: Due to the nature of this dataset, it cannot be released without
obtaining permissions from the respective publishers and/or authors. If you are
an author or publisher and have any concerns about this repository, please feel
free to email me.
If you are an author or publisher and would like to grant permission for the use
of your work, your support would be greatly appreciated. Please note that in
order for the dataset to be released, permissions would need to be unanimous
from all involved parties.
In the absence of such permissions, I will respect the copyrights of the
copyrighted materials and exercise my right to Fair Use with my own physical
property for personal use.
**This dataset is NOT intended for commercial purposes**. Its primary purpose is
for research in machine learning and AI software development. If a model is
created using this dataset, it will be shared under the same license.
Any proceeds derived from donations will be primarily used for the development
of the dataset and the model.
### Supported Tasks and Leaderboards
- `text-generation`: The dataset can be used to train a model for chat-like text
generation, more specifically, for generating explanations and examples in the
context of arithmetic, algebra, geometry, trigonometry, calculus, algorithms
and data structures, design patterns, and the python programming language.
### Languages
The text in the dataset is in English.
## Dataset Structure
### Data Instances
A data instance consists of a dialogue between a user and an assistant,
discussing a topic in arithmetic, algebra, geometry, trigonometry, calculus,
algorithms and data structures, design patterns, or the Python programming
language. The dialogue is structured as a list of turns, each turn containing
the role ("user" or "assistant") and the content of the turn.
### Data Fields
- `role`: a string indicating the role of the speaker in the dialogue ("system",
"user", "assistant", "function").
- `content`: a string containing the content of the speaker's turn in the
dialogue.
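An invented instance following the fields above (illustrative only, not taken from the dataset):
```python
dialogue = [
    {"role": "user", "content": "Why does (-1) * (-1) equal 1?"},
    {"role": "assistant", "content": "Follow the pattern 2*(-1) = -2, 1*(-1) = -1, 0*(-1) = 0: each step adds 1, so (-1)*(-1) = 1."},
]
```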
### Data Splits
The dataset is split into a training set, a validation set, and a test set. The
exact sizes and proportions of these splits will depend on the final size of the
dataset.
## Dataset Creation
### Curation Rationale
The dataset is being created to train a model capable of generating explanations
and examples in the context of various mathematical and computer science topics.
The goal is to create an AI assistant that can provide clear, accurate, and
pedagogically sound responses to user queries on these topics.
### Source Data
#### Initial Data Collection and Normalization
The data is collected from a variety of textbooks covering arithmetic, algebra,
geometry, trigonometry, calculus, algorithms and data structures, design
patterns, and the Python programming language. The textbooks used include:
- Barron's Arithmetic The Easy Way Fourth Edition
- Blitzer Introductory Algebra for College Students Fifth Edition
- McDougal Littell Geometry
- Blitzer Intermediate Algebra for College Students 5th Edition
- Trigonometry Sixth Edition
- Pearson College Algebra Fourth Edition
- Hughes-Hallet Applied Calculus 5th Edition
- CLRS Introduction to Algorithms Third Edition
In addition to the textbooks, the dataset also includes material from the
following online resources:
- [C reference](https://en.cppreference.com/w/c)
- [Cpp reference](https://en.cppreference.com/w/cpp)
- [Python Standard Library](https://docs.python.org/3/)
These resources provide up-to-date information and examples for the C, C++, and
Python programming languages. The creators of the Cppreference site also provide
[archives](https://en.cppreference.com/w/Cppreference:Archives) of their site
for offline use. Code samples synthesized by OpenAI's GPT models, curated by the
dataset creator, are also included in the dataset.
**Note:** The creator of this dataset owns physical copies of all the textbooks
listed above. The data from these sources are transformed into a dialogue format
using OpenAI's GPT-3.5 and GPT-4 models. The resulting dialogues are then used
as the training data for the phi-1 model. This dataset does not include the full
content of the source textbooks. Instead, it consists of transformations and
syntheses of the original content. Anyone who wants access to the full original
content should purchase or otherwise legally access the textbooks themselves.
#### Who are the source language producers?
The original language data was created by a variety of authors and educators,
who wrote the textbooks and other materials used as sources for this dataset.
These include:
- Barron's Arithmetic The Easy Way Fourth Edition - Edward Williams, Katie
Prindle
- Blitzer Introductory Algebra for College Students Fifth Edition - Robert
Blitzer
- McDougal Littell Geometry - Ron Larson, Laurie Boswell, Timothy D. Kanold, Lee
Stiff
- Blitzer Intermediate Algebra for College Students 5th Edition - Robert Blitzer
- Trigonometry Sixth Edition - Charles P. McKeague, Mark D. Turner
- Pearson College Algebra Fourth Edition - Robert F. Blitzer
- Hughes-Hallet Applied Calculus 5th Edition - Deborah Hughes-Hallett, Andrew M.
Gleason, Patti Frazer Lock, Daniel E. Flath, Sheldon P. Gordon, David O.
Lomen, David Lovelock, William G. McCallum, Brad G. Osgood, Andrew Pasquale,
Jeff Tecosky-Feldman, Joseph Thrash, Karen R. Rhea, Thomas W. Tucker
- CLRS Introduction to Algorithms Third Edition - Thomas H. Cormen, Charles E.
Leiserson, Ronald L. Rivest, Clifford Stein
In addition to these authors, the developers of OpenAI's GPT-3.5 and GPT-4
models also contributed to the creation of the language data, as these models
were used to transform the source material into a dialogue format.
### Annotations
#### Annotation process
The dataset does not contain any explicit annotations. However, the data is
curated and synthesized using OpenAI's GPT-3.5 and GPT-4 models. The process
involves transforming the source material into a dialogue format suitable for
training the phi-1 model. The dataset creator, an independent learner with a
strong interest in computer science, reviewed and curated the synthesized
dialogues to ensure their quality and relevance.
#### Who are the annotators?
The dataset creator, an independent learner who has studied computer science
extensively in a self-directed manner, performed the curation and review of the
synthesized dialogues.
### Personal and Sensitive Information
The dataset does not contain any personal or sensitive information. All the data
is derived from publicly available textbooks and online resources. Any names or
other potential identifiers in the source material have been removed or
anonymized.
### Social Impact of Dataset
The dataset is intended to support the development of AI models capable of
providing detailed explanations and examples in the context of arithmetic,
algebra, geometry, trigonometry, calculus, algorithms and data structures,
design patterns, and the python programming language. The potential social
impact is significant, as such models could greatly enhance self-directed
learning and provide valuable educational support to students worldwide.
However, it's important to note that the quality and usefulness of the AI models
trained on this dataset will depend on the quality of the data itself. If the
data is inaccurate or biased, the models could propagate these inaccuracies and
biases, potentially leading to misinformation or unfair outcomes.
### Discussion of Biases
The dataset is based on a variety of textbooks and online resources, which may
contain their own inherent biases. For example, textbooks often reflect the
perspectives and biases of their authors, which can influence the way
information is presented. These biases could potentially be reflected in the
dataset and in any models trained on it.
### Other Known Limitations
At this stage of the dataset creation process, it's difficult to identify all
potential limitations. However, one potential limitation is that the dataset may
not cover all possible topics or perspectives within the fields it addresses.
The dataset creator will continue to monitor and assess the dataset for
limitations as the work progresses.
## Additional Information
### Dataset Curators
The dataset was curated by an independent learner with a strong interest in
computer science. The curator has studied the subject matter in a self-directed
manner, using a variety of resources including textbooks and online materials.
The curation process also involved the use of OpenAI's GPT-3.5 and GPT-4 models
to synthesize dialogues based on the source material.
### Licensing Information
This dataset is released under the Creative Commons
Attribution-NonCommercial-ShareAlike 3.0 International (CC BY-NC-SA 3.0)
license.
### Citation Information
As this dataset is a compilation of various sources synthesized and curated for
the purpose of training the phi-1 model, please ensure to cite the original
sources when using this dataset. If referencing the dataset directly, please
refer to this repository.
| [
-0.34044349193573,
-0.8559185266494751,
0.08879148960113525,
0.16663244366645813,
-0.1542578488588333,
-0.3713133633136749,
-0.11402738839387894,
-0.26436591148376465,
-0.06326959282159805,
0.42348477244377136,
-0.6403316855430603,
-0.42881911993026733,
-0.3644697070121765,
0.0887374207377... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
germank/hh-generated_flan_t5_large_with_features2 | germank | 2023-07-07T14:32:37Z | 42 | 0 | null | [
"region:us"
] | 2023-07-07T14:32:37Z | 2023-07-07T14:32:34.000Z | 2023-07-07T14:32:34 | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: response
dtype: string
- name: 'biased:'
dtype: int64
- name: easy-to-understand
dtype: int64
- name: enough-detail
dtype: int64
- name: factuality
dtype: int64
- name: fail-to-consider-context
dtype: int64
- name: fail-to-consider-individual-preferences
dtype: int64
- name: helpfulness
dtype: int64
- name: intent
dtype: int64
- name: readability
dtype: int64
- name: relevance
dtype: int64
- name: repetetive
dtype: int64
- name: specificity
dtype: int64
- name: too-long
dtype: int64
splits:
- name: train
num_bytes: 395548
num_examples: 1600
download_size: 76218
dataset_size: 395548
---
# Dataset Card for "hh-generated_flan_t5_large_with_features2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.6099486351013184,
-0.2668794095516205,
0.23376578092575073,
0.2297472506761551,
-0.24362187087535858,
-0.08947805315256119,
0.03599203750491142,
-0.3987950384616852,
0.9470623135566711,
0.6430664658546448,
-0.6460676193237305,
-0.6454576253890991,
-0.5401854515075684,
-0.191577464342117... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
ssbuild/alpaca_finance_en | ssbuild | 2023-07-09T03:32:00Z | 42 | 3 | null | [
"license:apache-2.0",
"region:us"
] | 2023-07-09T03:32:00Z | 2023-07-09T03:29:57.000Z | 2023-07-09T03:29:57 | ---
license: apache-2.0
---
| [
-0.1285335123538971,
-0.1861683875322342,
0.6529128551483154,
0.49436232447624207,
-0.19319400191307068,
0.23607441782951355,
0.36072009801864624,
0.05056373029947281,
0.5793656706809998,
0.7400146722793579,
-0.650810182094574,
-0.23784008622169495,
-0.7102247476577759,
-0.0478255338966846... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
language-and-voice-lab/samromur_milljon | language-and-voice-lab | 2023-10-15T15:14:45Z | 42 | 2 | null | [
"task_categories:automatic-speech-recognition",
"annotations_creators:crowdsourced",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:1M<n<10M",
"source_datasets:original",
"language:is",
"license:cc-by-4.0",
"crowd-sourced icelandic",
"samrómur",
"icelandic spee... | 2023-10-15T15:14:45Z | 2023-07-09T04:29:15.000Z | 2023-07-09T04:29:15 | ---
annotations_creators:
- crowdsourced
language:
- is
language_creators:
- crowdsourced
license:
- cc-by-4.0
multilinguality:
- monolingual
pretty_name: "Samrómur Milljón, Audio and Transcriptions"
size_categories:
- 1M<n<10M
source_datasets:
- original
tags:
- crowd-sourced icelandic
- samrómur
- icelandic speech
- samromur
- iceland
task_categories:
- automatic-speech-recognition
task_ids: []
---
# Dataset Card for samromur_milljon
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [Samrómur Milljón](https://huggingface.co/datasets/language-and-voice-lab/samromur_milljon)
- **Repository:** [Language and Voice Laboratory](https://huggingface.co/language-and-voice-lab)
- **Point of Contact:** [Carlos Mena](mailto:carlos.mena@ciempiess.org), [Jón Guðnason](mailto:jg@ru.is)
### Dataset Summary
Samrómur Milljón is the result of the automatic verification of the corpus [Samromur Unverified 22.07](http://hdl.handle.net/20.500.12537/265), which contains 2,159,314 speech recordings (2233 hours) in Icelandic that are essentially unverified. Samrómur Milljón consists of 1,002,157 speech recordings (967 hours). The transcripts accompanying these recordings were automatically verified using various ASR systems: [Wav2Vec2](https://huggingface.co/carlosdanielhernandezmena/wav2vec2-large-xlsr-53-icelandic-ep10-1000h), [Whisper](https://huggingface.co/language-and-voice-lab/whisper-large-icelandic-30k-steps-1000h), [Faster-Whisper](https://huggingface.co/language-and-voice-lab/whisper-large-icelandic-30k-steps-1000h-ct2) and [NeMo](https://huggingface.co/carlosdanielhernandezmena/stt_is_quartznet15x5_ft_ep56_875h). If any of the ASR systems produces a transcript that perfectly matches the reference transcription, the speech recording is considered verified.
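The verification rule is straightforward to reproduce. Below is a minimal sketch of the idea; the `normalize` step and the example hypotheses are illustrative assumptions, not the exact pipeline used:
```python
# Sketch of the verification rule: a recording counts as verified when at least
# one ASR hypothesis exactly matches the reference transcription.
def normalize(text: str) -> str:
    # Illustrative assumption: lowercase and collapse whitespace.
    return " ".join(text.lower().split())

def matching_systems(reference: str, hypotheses: dict) -> list:
    """Return the codes of the ASR systems whose output matches the reference."""
    ref = normalize(reference)
    return [code for code, hyp in hypotheses.items() if normalize(hyp) == ref]

# Codes follow the `verified_with` field: V = Wav2Vec2, N = NeMo,
# W = Whisper, F = Faster-Whisper.
matches = matching_systems(
    "mattías lækkaðu í hátalaranum",
    {"V": "mattías lækkaðu í hátalaranum", "W": "mattias lækkaðu í hátalaranum"},
)
print("+".join(matches) if matches else "unverified")  # -> V
```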
### Example Usage
Samrómur Milljón is divided into 7 splits. To load the full dataset:
```python
from datasets import load_dataset
samromur_milljon = load_dataset("language-and-voice-lab/samromur_milljon")
```
To load a specific split (for example, the split with female speakers between 18 and 49 years old) do:
```python
from datasets import load_dataset
samromur_milljon = load_dataset("language-and-voice-lab/samromur_milljon",split="female_18to49_yrs")
```
### Supported Tasks
automatic-speech-recognition: The dataset can be used to train a model for Automatic Speech Recognition (ASR). The model is presented with an audio file and asked to transcribe the audio file to written text. The most common evaluation metric is the word error rate (WER).
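For reference, WER can be computed with the `jiwer` package (a common choice; the strings below are made up):
```python
# pip install jiwer
from jiwer import wer

reference = "mattías lækkaðu í hátalaranum"  # example strings (made up)
hypothesis = "mattías lækkaðu hátalaranum"   # one word deleted
print(wer(reference, hypothesis))            # 0.25: one error over four reference words
```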
### Languages
The audio is in Icelandic.
The reading prompts were gathered from a variety of sources, mainly from the [Icelandic Gigaword Corpus](http://clarin.is/en/resources/gigaword). The corpus includes text from novels, news, plays, and a list of location names in Iceland. The prompts also came from the [Icelandic Web of Science](https://www.visindavefur.is/).
## Dataset Structure
### Data Instances
```python
{
'audio_id': '000664-0013663',
'audio': {
'path': '/home/carlos/.cache/HuggingFace/datasets/downloads/extracted/5238762551eb35654e659f0bf3f3bf30ba02b40036715337500cba84051fed4e/FEM_18to49_YRS_PART_0001/000664/000664-0013663.flac',
'array': array([0.0000000e+00, 0.0000000e+00, 0.0000000e+00, ..., 0.0000000e+00,
3.0517578e-05, 3.0517578e-05], dtype=float32),
'sampling_rate': 16000
},
'speaker_id': '000664',
'gender': 'female',
'age': '30-39',
'duration': 3.7799999713897705,
'verified_with': 'V+F',
'normalized_text': 'mattías lækkaðu í hátalaranum'
}
```
### Data Fields
* `audio_id` (string) - id of audio segment
* `audio` (datasets.Audio) - a dictionary containing the path to the audio, the decoded audio array, and the sampling rate. In non-streaming mode (default), the path points to the locally extracted audio. In streaming mode, the path is the relative path of an audio inside its archive (as files are not downloaded and extracted locally).
* `speaker_id` (string) - id of speaker
* `gender` (string) - gender of speaker (male or female)
* `age` (string) - range of age of the speaker.
* `duration` (float32) - duration of the audio file in seconds.
* `verified_with` (string) - set of ASR systems that found a perfect match between the current speech file and the reference transcription: V = Wav2Vec2, N = NeMo, W = Whisper, F = Faster-Whisper (see the filtering sketch after this list)
* `normalized_text` (string) - normalized audio segment transcription.
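The `verified_with` field supports simple filtering; a minimal sketch in streaming mode (to avoid downloading a full split):
```python
from datasets import load_dataset

# Keep only recordings whose transcript was matched by Whisper ('W').
split = load_dataset(
    "language-and-voice-lab/samromur_milljon",
    split="female_18to49_yrs",
    streaming=True,
)
whisper_verified = split.filter(lambda x: "W" in x["verified_with"])
for sample in whisper_verified.take(3):
    print(sample["audio_id"], sample["verified_with"])
```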
### Data Splits
Due to its large size, the corpus is divided into 7 splits so that it is more manageable.
* Female speakers less than 18 years old (n < 18): **female_lt_18_yrs**
* Female speakers from 18 to 49 years old (18 <= n <=49): **female_18to49_yrs**
* Female speakers greater than 49 years old (n > 49): **female_gt_49_yrs**
* Male speakers less than 18 years old (n < 18): **male_lt_18_yrs**
* Male speakers from 18 to 49 years old (18 <= n <=49): **male_18to49_yrs**
* Male speakers greater than 49 years old (n > 49): **male_gt_49_yrs**
* Speakers where age, gender or both are unknown: **other**
To load a specific portion, please see the section "Example Usage" above.
## Dataset Creation
### Curation Rationale
* The collection process started in October 2019.
* The aim is to create an open-source speech corpus to enable research and development for Icelandic Language Technology.
* The corpus comprises audio recordings and metadata files containing the text sentences read by the participants.
* At the beginning, some data was manually verified by summer students, but due to the success of the collection process, we ended up with more than 2 million unverified speech recordings. In 2023, it was decided to use the distinct ASR systems already available for Icelandic to automatically verify as much data as possible. The verification process consisted of transcribing the recordings using various ASR systems. If one or more of them produces a transcription that perfectly matches the reference transcription, the recording is considered verified.
* The resulting corpus of more than 1 million recordings is too large to download all at once. Consequently, it was split into 7 portions to make it more manageable.
### Source Data
#### Initial Data Collection and Normalization
* The utterances were recorded by a smartphone or the web app.
* The data was collected using the website https://samromur.is, code of which is available at https://github.com/cadia-lvl/samromur.
* Each recording contains one read sentence from a script.
### Annotations
#### Annotation process
Prompts were pulled from these corpora if they contained only letters present in the Icelandic alphabet and only words listed in the [DIM: Database Icelandic Morphology](https://aclanthology.org/W19-6116.pdf).
There are also synthesised prompts consisting of a name followed by a question or a demand, in order to simulate a dialogue with a smart-device.
#### Who are the annotators?
The audio files' content was automatically verified using ASR systems such as [Wav2Vec2](https://huggingface.co/carlosdanielhernandezmena/wav2vec2-large-xlsr-53-icelandic-ep10-1000h), [Whisper](https://huggingface.co/language-and-voice-lab/whisper-large-icelandic-30k-steps-1000h), [Faster-Whisper](https://huggingface.co/language-and-voice-lab/whisper-large-icelandic-30k-steps-1000h-ct2) and [NeMo](https://huggingface.co/carlosdanielhernandezmena/stt_is_quartznet15x5_ft_ep56_875h). If any of the ASR systems produces a transcript that perfectly matches the reference transcription, the speech recording is considered verified.
### Personal and Sensitive Information
The dataset consists of recordings of people who have donated their voice. You agree not to attempt to determine the identity of speakers in this dataset.
## Considerations for Using the Data
### Social Impact of Dataset
This contribution describes a project of speech data collection, using the web application [Samrómur](https://samromur.is), which is built upon [Common Voice](https://commonvoice.mozilla.org/), the Mozilla Foundation's web platform for open-source voice collection. The goal of the project is to build a large-scale speech corpus for Automatic Speech Recognition (ASR) for Icelandic. Samrómur is the largest open speech corpus for Icelandic collected from the public domain.
### Discussion of Biases
* The participants are aged between 4 and 90 years old. The speech files were recorded with a smartphone or the web app.
* Participants self-reported their age group, gender, and native language.
* The corpus contains 1,002,157 recordings from 16,729 speakers, totalling 967 hours and 7 minutes.
* The number of recordings coming from female speakers is 714,564, totalling 697h22m.
* The number of recordings coming from male speakers is 282,499, totalling 264h28m.
* The number of recordings where the gender, the age, or both are unknown is 5,094, totalling 5h16m. These recordings belong to the split called "other".
### Other Known Limitations
"Samrómur Milljón" by the Language and Voice Laboratory (LVL) from Reykjavik University (RU) is licensed under a Creative Commons Attribution 4.0 International (CC BY 4.0) License with the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
## Additional Information
### Dataset Curators
The corpus was curated by [Carlos Daniel Hernández Mena](https://huggingface.co/carlosdanielhernandezmena) during 2023 at the [Language and Voice Laboratory](https://lvl.ru.is/) from [Reykjavik University](https://en.ru.is/).
### Licensing Information
[CC-BY-4.0](https://creativecommons.org/licenses/by/4.0/)
### Citation Information
```
@misc{menasamromurmilljon2023,
title={Samrómur Milljón, Audio and Transcriptions},
author={Hernández Mena, Carlos Daniel and Guðnason, Jón},
publisher={Reykjavík University},
year={2023},
url={https://huggingface.co/datasets/language-and-voice-lab/samromur_milljon}
}
```
### Contributions
This project was funded by the Language Technology Programme for Icelandic 2019-2023. The programme, which is managed and coordinated by Almannarómur, is funded by the Icelandic Ministry of Education, Science and Culture.
| [
-0.44506344199180603,
-0.7101281881332397,
-0.015282555483281612,
0.2058647722005844,
-0.2865961194038391,
-0.023824017494916916,
-0.2620087265968323,
-0.39566609263420105,
0.2943781316280365,
0.42428910732269287,
-0.6300172805786133,
-0.6150010228157043,
-0.5684615969657898,
0.27953588962... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
DynamicSuperb/SpoofDetection_ASVspoof2017 | DynamicSuperb | 2023-07-31T10:54:40Z | 42 | 0 | null | [
"region:us"
] | 2023-07-31T10:54:40Z | 2023-07-13T03:40:36.000Z | 2023-07-13T03:40:36 | ---
dataset_info:
features:
- name: file
dtype: string
- name: audio
dtype: audio
- name: instruction
dtype: string
- name: label
dtype: string
splits:
- name: test
num_bytes: 1411064438.928
num_examples: 13306
download_size: 1361993549
dataset_size: 1411064438.928
---
# Dataset Card for "SpoofDetection_ASVspoof2017"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.36759451031684875,
-0.3352685570716858,
0.04605626314878464,
0.45132046937942505,
-0.20406070351600647,
0.002451710170134902,
0.425631582736969,
-0.302956759929657,
0.9166664481163025,
0.5812397003173828,
-0.9144275784492493,
-0.5550990104675293,
-0.6897755265235901,
-0.2436248660087585... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
nahyeon00/mixsnips_clean | nahyeon00 | 2023-07-19T08:38:38Z | 42 | 0 | null | [
"region:us"
] | 2023-07-19T08:38:38Z | 2023-07-19T08:38:23.000Z | 2023-07-19T08:38:23 | ---
dataset_info:
features:
- name: token
sequence: string
- name: tag
sequence: string
- name: intent
sequence: string
splits:
- name: train
num_bytes: 16319528
num_examples: 39776
- name: validation
num_bytes: 915087
num_examples: 2198
- name: test
num_bytes: 902367
num_examples: 2199
download_size: 3076227
dataset_size: 18136982
---
# Dataset Card for "mixsnips_clean"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.613571286201477,
-0.10863792151212692,
-0.056348010897636414,
0.2060285061597824,
-0.2805746793746948,
0.05870713293552399,
0.20118677616119385,
-0.2758224308490753,
1.1619783639907837,
0.6712122559547424,
-0.8539255261421204,
-0.6207185983657837,
-0.5563148260116577,
-0.297561705112457... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
heliosbrahma/mental_health_conversational_dataset | heliosbrahma | 2023-07-22T11:30:56Z | 42 | 6 | null | [
"task_categories:text-generation",
"task_categories:conversational",
"size_categories:n<1K",
"language:en",
"license:mit",
"medical",
"region:us"
] | 2023-07-22T11:30:56Z | 2023-07-21T06:58:12.000Z | 2023-07-21T06:58:12 | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 102904
num_examples: 154
download_size: 60865
dataset_size: 102904
license: mit
task_categories:
- text-generation
- conversational
language:
- en
tags:
- medical
pretty_name: Mental Health Conversational Dataset
size_categories:
- n<1K
---
# Dataset Card for "mental_health_conversational_dataset"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-instances)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
## Dataset Description
### Dataset Summary
This dataset contains conversational pairs of questions and answers in a single text field related to mental health. The dataset was curated from healthcare websites, popular blogs like WebMD and HealthLine, online FAQs, etc. All questions and answers have been anonymized to remove any PII data and pre-processed to remove any unwanted characters.
### Languages
The text in the dataset is in English.
## Dataset Structure
### Data Instances
A data instance includes a text column which is a conversational pair of questions and answers. Questions were asked by patients and answers were given by healthcare providers.
### Data Fields
- 'text': conversational pair of questions and answers between patient and healthcare provider.
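A minimal usage sketch with the `datasets` library:
```python
from datasets import load_dataset

ds = load_dataset("heliosbrahma/mental_health_conversational_dataset", split="train")
print(ds[0]["text"])  # one question/answer pair stored in a single string
```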
## Dataset Creation
### Curation Rationale
Chatbots offer a readily available and accessible platform for individuals seeking support. They can be accessed anytime and anywhere, providing immediate assistance to those in need. Chatbots can offer empathetic and non-judgmental responses, providing emotional support to users. While they cannot replace human interaction entirely, they can be a helpful supplement, especially in moments of distress.
Hence, this dataset was curated to help fine-tune a conversational AI bot on this custom data, which can then be deployed and provided to end patients as a chatbot.
### Source Data
This dataset was curated from healthcare websites, popular blogs like WebMD and HealthLine, online FAQs, etc.
### Personal and Sensitive Information
The dataset may contain sensitive information related to mental health. All questions and answers have been anonymized to remove any PII data. | [
-0.397456556558609,
-1.0399049520492554,
0.33559897541999817,
0.23919573426246643,
-0.098739854991436,
0.10757171362638474,
-0.17098332941532135,
-0.20774517953395844,
0.5017397403717041,
0.5549408197402954,
-1.078742504119873,
-0.7450926899909973,
-0.4924566447734833,
0.001868122024461627... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
AdiOO7/Llama-2 | AdiOO7 | 2023-07-24T20:40:11Z | 42 | 0 | null | [
"license:apache-2.0",
"region:us"
] | 2023-07-24T20:40:11Z | 2023-07-22T10:53:21.000Z | 2023-07-22T10:53:21 | ---
license: apache-2.0
---
| [
-0.1285335123538971,
-0.1861683875322342,
0.6529128551483154,
0.49436232447624207,
-0.19319400191307068,
0.23607441782951355,
0.36072009801864624,
0.05056373029947281,
0.5793656706809998,
0.7400146722793579,
-0.650810182094574,
-0.23784008622169495,
-0.7102247476577759,
-0.0478255338966846... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
wesley7137/psychology1cllm | wesley7137 | 2023-07-31T23:13:11Z | 42 | 3 | null | [
"region:us"
] | 2023-07-31T23:13:11Z | 2023-07-31T23:12:55.000Z | 2023-07-31T23:12:55 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622264862060547,
0.43461528420448303,
-0.52829909324646,
0.7012971639633179,
0.7915720343589783,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104477167129517,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
seanlb02/medicare_faq | seanlb02 | 2023-07-31T23:38:43Z | 42 | 0 | null | [
"license:mit",
"region:us"
] | 2023-07-31T23:38:43Z | 2023-07-31T23:22:03.000Z | 2023-07-31T23:22:03 | ---
license: mit
---
| [
-0.1285335123538971,
-0.1861683875322342,
0.6529128551483154,
0.49436232447624207,
-0.19319400191307068,
0.23607441782951355,
0.36072009801864624,
0.05056373029947281,
0.5793656706809998,
0.7400146722793579,
-0.650810182094574,
-0.23784008622169495,
-0.7102247476577759,
-0.0478255338966846... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
hsultanbey/javascript | hsultanbey | 2023-08-03T09:42:53Z | 42 | 0 | null | [
"region:us"
] | 2023-08-03T09:42:53Z | 2023-08-03T09:42:14.000Z | 2023-08-03T09:42:14 | ---
dataset_info:
features:
- name: code
dtype: string
splits:
- name: train
num_bytes: 863518025
num_examples: 99999
download_size: 308377342
dataset_size: 863518025
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "javascript"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.419099897146225,
-0.23142649233341217,
0.11820007860660553,
0.2183465212583542,
0.04537452384829521,
0.21743007004261017,
0.07949639111757278,
-0.18311390280723572,
0.7626698017120361,
0.43227702379226685,
-0.7853715419769287,
-1.0367473363876343,
-0.5901117920875549,
-0.399862200021743... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
PL-MTEB/polemo2_out | PL-MTEB | 2023-08-11T12:42:58Z | 42 | 0 | null | [
"license:cc-by-nc-sa-4.0",
"region:us"
] | 2023-08-11T12:42:58Z | 2023-08-11T12:42:01.000Z | 2023-08-11T12:42:01 | ---
license: cc-by-nc-sa-4.0
---
| [
-0.12853392958641052,
-0.18616779148578644,
0.6529127955436707,
0.49436280131340027,
-0.19319361448287964,
0.23607419431209564,
0.36072003841400146,
0.050563063472509384,
0.579365611076355,
0.7400140762329102,
-0.6508104205131531,
-0.23783954977989197,
-0.7102249264717102,
-0.0478260256350... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
PL-MTEB/allegro-reviews | PL-MTEB | 2023-08-11T13:11:54Z | 42 | 0 | null | [
"license:cc-by-sa-4.0",
"region:us"
] | 2023-08-11T13:11:54Z | 2023-08-11T13:11:37.000Z | 2023-08-11T13:11:37 | ---
license: cc-by-sa-4.0
---
| [
-0.12853392958641052,
-0.18616779148578644,
0.6529127955436707,
0.49436280131340027,
-0.19319361448287964,
0.23607419431209564,
0.36072003841400146,
0.050563063472509384,
0.579365611076355,
0.7400140762329102,
-0.6508104205131531,
-0.23783954977989197,
-0.7102249264717102,
-0.0478260256350... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Matias12f/cats_dogs_trabajo | Matias12f | 2023-08-22T02:24:38Z | 42 | 0 | null | [
"license:apache-2.0",
"region:us"
] | 2023-08-22T02:24:38Z | 2023-08-22T02:23:34.000Z | 2023-08-22T02:23:34 | ---
license: apache-2.0
---
| [
-0.12853392958641052,
-0.18616779148578644,
0.6529127955436707,
0.49436280131340027,
-0.19319361448287964,
0.23607419431209564,
0.36072003841400146,
0.050563063472509384,
0.579365611076355,
0.7400140762329102,
-0.6508104205131531,
-0.23783954977989197,
-0.7102249264717102,
-0.0478260256350... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Fsoft-AIC/the-vault-class | Fsoft-AIC | 2023-10-11T16:42:43Z | 42 | 1 | null | [
"task_categories:text-generation",
"multilinguality:multiprogramming languages",
"language:code",
"language:en",
"license:mit",
"arxiv:2305.06156",
"region:us"
] | 2023-10-11T16:42:43Z | 2023-08-22T07:11:11.000Z | 2023-08-22T07:11:11 | ---
language:
- code
- en
multilinguality:
- multiprogramming languages
task_categories:
- text-generation
license: mit
dataset_info:
features:
- name: identifier
dtype: string
- name: repo
dtype: string
- name: path
dtype: string
- name: language
dtype: string
- name: code
dtype: string
- name: code_tokens
dtype: string
- name: original_docstring
dtype: string
- name: comment
dtype: string
- name: docstring_tokens
dtype: string
- name: docstring
dtype: string
- name: original_string
dtype: string
pretty_name: The Vault Function
viewer: true
---
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Statistics](#dataset-statistics)
- [Usage](#usage)
- [Additional Information](#additional-information)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Repository:** [FSoft-AI4Code/TheVault](https://github.com/FSoft-AI4Code/TheVault)
- **Paper:** [The Vault: A Comprehensive Multilingual Dataset for Advancing Code Understanding and Generation](https://arxiv.org/abs/2305.06156)
- **Contact:** support.ailab@fpt.com
- **Website:** https://www.fpt-aicenter.com/ai-residency/
<p align="center">
<img src="https://raw.githubusercontent.com/FSoft-AI4Code/TheVault/main/assets/the-vault-4-logo-png.png" width="300px" alt="logo">
</p>
<div align="center">
# The Vault: A Comprehensive Multilingual Dataset for Advancing Code Understanding and Generation
</div>
## Dataset Summary
The Vault dataset is a comprehensive, large-scale, multilingual parallel dataset that features high-quality code-text pairs derived from The Stack, the largest permissively-licensed source code dataset.
We provide The Vault which contains code snippets from 10 popular programming languages such as Java, JavaScript, Python, Ruby, Rust, Golang, C#, C++, C, and PHP. This dataset provides multiple code-snippet levels, metadata, and 11 docstring styles for enhanced usability and versatility.
## Supported Tasks
The Vault can be used for pretraining LLMs or downstream code-text interaction tasks. A number of tasks related to code understanding and generation can be constructed using The Vault, such as *code summarization*, *text-to-code generation* and *code search*.
## Languages
The natural language text (docstring) is in English.
10 programming languages are supported in The Vault: `Python`, `Java`, `JavaScript`, `PHP`, `C`, `C#`, `C++`, `Go`, `Ruby`, `Rust`
*Note: C and Go are not contained in this repo due to the nonexistence of traditional classes in these languages.*
## Dataset Structure
### Data Instances
```
{
"hexsha": "78b961a6673ec1e12f8d95c33ef081f75561a87c",
"repo": "AIS-Bonn/sl-cutscenes",
"path": "sl_cutscenes/object_models.py",
"license": [
"MIT"
],
"language": "Python",
"identifier": "MeshLoader",
"original_docstring": "\n Class to load the meshes for the objects in a scene.\n ",
"docstring": "Class to load the meshes for the objects in a scene.",
"docstring_tokens": [
"Class",
"to",
"load",
"the",
"meshes",
"for",
"the",
"objects",
"in",
"a",
"scene",
"."
],
"code": "class MeshLoader:\n \"\"\"\n Class to load the meshes for the objects in a scene.\n \"\"\"\n\n def __init__(self):\n \"\"\"Module initializer\"\"\"\n self.base_dir = CONSTANTS.MESH_BASE_DIR\n self.text_dir = CONSTANTS.TEXT_BASE_DIR\n self.reset()\n\n def reset(self):\n self.loaded_meshes = []\n\n def get_meshes(self):\n \"\"\" \"\"\"\n extract_singular = lambda x: x[0] if len(x) == 1 else x\n return [extract_singular(item) for item in self.loaded_meshes]\n\n def load_meshes(self, obj_info: List[object_info.ObjectInfo], **kwargs):\n \"\"\"\n Loads the meshes whose information is given in parameter 'obj_info.\n Each call of this method APPENDS a list to the loaded_meshes attribute.\n :param obj_info: The object information of the meshes to be loaded.\n :param kwargs: additional mesh modifiers such as scale, specified with a leading 'mod_'\n \"\"\"\n paths = []\n for obj in obj_info:\n path = self.text_dir if obj.name.endswith(\"_floor\") or obj.name.endswith(\"_wall\") else self.base_dir\n paths.append((path / obj.mesh_fp).resolve())\n scales = [obj.scale for obj in obj_info]\n class_ids = [obj.class_id for obj in obj_info]\n mod_scales = kwargs.get(\"mod_scale\", [1.0] * len(scales))\n scales = [s * ms for (s, ms) in zip(scales, mod_scales)]\n flags = [mesh_flags(obj) for obj in obj_info]\n meshes = sl.Mesh.load_threaded(filenames=paths, flags=flags)\n\n # Setup class IDs\n for _, (mesh, scale, class_id) in enumerate(zip(meshes, scales, class_ids)):\n pt = torch.eye(4)\n pt[:3, :3] *= scale\n mesh.pretransform = pt\n mesh.class_index = class_id\n\n info_mesh_tuples = list(zip(obj_info, meshes))\n self.loaded_meshes.append(info_mesh_tuples)",
"code_tokens": [
"class",
"MeshLoader",
":",
"def",
"__init__",
"(",
"self",
")",
":",
"\"\"\"Module initializer\"\"\"",
"self",
".",
"base_dir",
"=",
"CONSTANTS",
".",
"MESH_BASE_DIR",
"self",
".",
"text_dir",
"=",
"CONSTANTS",
".",
"TEXT_BASE_DIR",
"self",
".",
"reset",
"(",
")",
"def",
"reset",
"(",
"self",
")",
":",
"self",
".",
"loaded_meshes",
"=",
"[",
"]",
"def",
"get_meshes",
"(",
"self",
")",
":",
"\"\"\" \"\"\"",
"extract_singular",
"=",
"lambda",
"x",
":",
"x",
"[",
"0",
"]",
"if",
"len",
"(",
"x",
")",
"==",
"1",
"else",
"x",
"return",
"[",
"extract_singular",
"(",
"item",
")",
"for",
"item",
"in",
"self",
".",
"loaded_meshes",
"]",
"def",
"load_meshes",
"(",
"self",
",",
"obj_info",
":",
"List",
"[",
"object_info",
".",
"ObjectInfo",
"]",
",",
"**",
"kwargs",
")",
":",
"\"\"\"\n Loads the meshes whose information is given in parameter 'obj_info.\n Each call of this method APPENDS a list to the loaded_meshes attribute.\n :param obj_info: The object information of the meshes to be loaded.\n :param kwargs: additional mesh modifiers such as scale, specified with a leading 'mod_'\n \"\"\"",
"paths",
"=",
"[",
"]",
"for",
"obj",
"in",
"obj_info",
":",
"path",
"=",
"self",
".",
"text_dir",
"if",
"obj",
".",
"name",
".",
"endswith",
"(",
"\"_floor\"",
")",
"or",
"obj",
".",
"name",
".",
"endswith",
"(",
"\"_wall\"",
")",
"else",
"self",
".",
"base_dir",
"paths",
".",
"append",
"(",
"(",
"path",
"/",
"obj",
".",
"mesh_fp",
")",
".",
"resolve",
"(",
")",
")",
"scales",
"=",
"[",
"obj",
".",
"scale",
"for",
"obj",
"in",
"obj_info",
"]",
"class_ids",
"=",
"[",
"obj",
".",
"class_id",
"for",
"obj",
"in",
"obj_info",
"]",
"mod_scales",
"=",
"kwargs",
".",
"get",
"(",
"\"mod_scale\"",
",",
"[",
"1.0",
"]",
"*",
"len",
"(",
"scales",
")",
")",
"scales",
"=",
"[",
"s",
"*",
"ms",
"for",
"(",
"s",
",",
"ms",
")",
"in",
"zip",
"(",
"scales",
",",
"mod_scales",
")",
"]",
"flags",
"=",
"[",
"mesh_flags",
"(",
"obj",
")",
"for",
"obj",
"in",
"obj_info",
"]",
"meshes",
"=",
"sl",
".",
"Mesh",
".",
"load_threaded",
"(",
"filenames",
"=",
"paths",
",",
"flags",
"=",
"flags",
")",
"for",
"_",
",",
"(",
"mesh",
",",
"scale",
",",
"class_id",
")",
"in",
"enumerate",
"(",
"zip",
"(",
"meshes",
",",
"scales",
",",
"class_ids",
")",
")",
":",
"pt",
"=",
"torch",
".",
"eye",
"(",
"4",
")",
"pt",
"[",
":",
"3",
",",
":",
"3",
"]",
"*=",
"scale",
"mesh",
".",
"pretransform",
"=",
"pt",
"mesh",
".",
"class_index",
"=",
"class_id",
"info_mesh_tuples",
"=",
"list",
"(",
"zip",
"(",
"obj_info",
",",
"meshes",
")",
")",
"self",
".",
"loaded_meshes",
".",
"append",
"(",
"info_mesh_tuples",
")"
],
"short_docstring": "Class to load the meshes for the objects in a scene.",
"short_docstring_tokens": [
"Class",
"to",
"load",
"the",
"meshes",
"for",
"the",
"objects",
"in",
"a",
"scene",
"."
],
"comment": [
"\"\"\"\n Class to load the meshes for the objects in a scene.\n \"\"\"",
"\"\"\"Module initializer\"\"\"",
"\"\"\" \"\"\"",
"\"\"\"\n Loads the meshes whose information is given in parameter 'obj_info.\n Each call of this method APPENDS a list to the loaded_meshes attribute.\n :param obj_info: The object information of the meshes to be loaded.\n :param kwargs: additional mesh modifiers such as scale, specified with a leading 'mod_'\n \"\"\"",
"# Setup class IDs"
],
"parameters": [],
"docstring_params": {
"returns": [],
"raises": [],
"params": [],
"outlier_params": [],
"others": []
}
}
```
### Data Fields
Data fields for class level:
- **hexsha** (string): the unique git hash of file
- **repo** (string): the owner/repo
- **path** (string): the full path to the original file
- **license** (list): licenses in the repo
- **language** (string): the programming language
- **identifier** (string): the class name
- **original_string** (string): original version of function/class node
- **original_docstring** (string): the raw string before tokenization or parsing
- **code** (string): the part of the original that is code
- **code_tokens** (list): tokenized version of `code`
- **short_docstring** (string): short, brief summarization (first line of the docstring)
- **short_docstring_tokens** (list): tokenized version of `short_docstring`
- **docstring** (string): the top-level comment or docstring (docstring version without param’s doc, return, exception fields, etc)
- **docstring_tokens** (list): tokenized version of docstring
- **comment** (list): list of comments (line) inside the function/class
- **parameters** (list): List of parameters and its type (type can be None)
- **docstring_params** (dict): Dictionary of the parsed information from docstring
See [here](https://github.com/FSoft-AI4Code/TheVault/blob/main/data/README.md) for more details and examples.
### Data Splits
In this repo, the class-level data is not split and is contained in a single train set.
## Dataset Statistics
|Language | Number of samples |
|:-----------|------------------------:|
|Python | 422,187 |
|Java | 4,872,485 |
|JavaScript | 291,479 |
|PHP | 1,173,916 |
|C# | 1,437,800 |
|C++ | 174,370 |
|Ruby | 353,859 |
|Rust | 93,311 |
|C | - |
|Go | - |
|TOTAL | **9,121,300** |
## Usage
You can load The Vault dataset using the `datasets` library: `pip install datasets`
```python
from datasets import load_dataset
# Load full class level dataset
dataset = load_dataset("Fsoft-AIC/the-vault-class")
# specific language (e.g. Python)
dataset = load_dataset("Fsoft-AIC/the-vault-class", languages=['Python'])
# dataset streaming
data = load_dataset("Fsoft-AIC/the-vault-class", streaming=True)
for sample in iter(data['train']):
print(sample)
```
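Building on the streaming snippet above, a small filtering sketch using the documented fields (the 10-word docstring threshold is an arbitrary illustration):
```python
# Keep only Python classes with a reasonably long docstring, in streaming mode.
documented = data["train"].filter(
    lambda x: x["language"] == "Python" and len(x["docstring"].split()) >= 10
)
for sample in documented.take(2):
    print(sample["identifier"], "->", sample["docstring"][:80])
```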
A backup of the dataset can also be downloaded from Azure blob storage. See [Download The Vault from Azure blob storage](https://github.com/FSoft-AI4Code/TheVault#download-via-link).
## Additional Information
### Licensing Information
MIT License
### Citation Information
```
@article{manh2023vault,
title={The Vault: A Comprehensive Multilingual Dataset for Advancing Code Understanding and Generation},
author={Manh, Dung Nguyen and Hai, Nam Le and Dau, Anh TV and Nguyen, Anh Minh and Nghiem, Khanh and Guo, Jin and Bui, Nghi DQ},
journal={arXiv preprint arXiv:2305.06156},
year={2023}
}
```
### Contributions
This dataset is developed by [FSOFT AI4Code team](https://github.com/FSoft-AI4Code). | [
-0.39418020844459534,
-0.7190294861793518,
0.19936105608940125,
0.39837419986724854,
0.05490336939692497,
0.07443682849407196,
0.11695282906293869,
-0.2424573004245758,
-0.008141901344060898,
0.49152934551239014,
-0.526888906955719,
-0.8261966705322266,
-0.4479849636554718,
0.0708123967051... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
folkopinion/government-interpellation-qa-swedish | folkopinion | 2023-09-25T18:07:34Z | 42 | 1 | null | [
"task_categories:table-question-answering",
"size_categories:1K<n<10K",
"language:sv",
"government",
"politics",
"interpellation",
"region:us"
] | 2023-09-25T18:07:34Z | 2023-09-17T19:00:42.000Z | 2023-09-17T19:00:42 | ---
task_categories:
- table-question-answering
language:
- sv
tags:
- government
- politics
- interpellation
pretty_name: Riksdagen Interpellation Debate QA
size_categories:
- 1K<n<10K
--- | [
-0.1285335123538971,
-0.1861683875322342,
0.6529128551483154,
0.49436232447624207,
-0.19319400191307068,
0.23607441782951355,
0.36072009801864624,
0.05056373029947281,
0.5793656706809998,
0.7400146722793579,
-0.650810182094574,
-0.23784008622169495,
-0.7102247476577759,
-0.0478255338966846... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
mcaleste/sat_multiple_choice_math_may_23 | mcaleste | 2023-10-14T02:23:29Z | 42 | 0 | null | [
"size_categories:n<1K",
"language:en",
"region:us"
] | 2023-10-14T02:23:29Z | 2023-09-18T21:30:36.000Z | 2023-09-18T21:30:36 | ---
language:
- en
size_categories:
- n<1K
---
This is the set of math SAT questions from the May 2023 SAT, taken from here: https://www.mcelroytutoring.com/lower.php?url=44-official-sat-pdfs-and-82-official-act-pdf-practice-tests-free.
Questions that included images were excluded, but all other math questions, including those with tables, were included.
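A minimal loading sketch (the `train` split name is an assumption here; inspect the column names before relying on any field):
```python
from datasets import load_dataset

# "train" is an assumed split name; the card does not document the schema.
sat = load_dataset("mcaleste/sat_multiple_choice_math_may_23", split="train")
print(sat.column_names)  # check the actual field names first
print(sat[0])            # one multiple-choice math question
```
| [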
-0.7273389101028442,
-1.0905193090438843,
0.7060952186584473,
0.18812677264213562,
0.012341175228357315,
-0.47519487142562866,
0.7260611653327942,
-0.288883775472641,
0.4807451367378235,
1.1878622770309448,
-1.2471224069595337,
-0.03385022655129433,
-0.3393001854419708,
0.03873644396662712... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Hieu-Pham/cooking_squad | Hieu-Pham | 2023-10-11T04:23:13Z | 42 | 0 | null | [
"license:mit",
"region:us"
] | 2023-10-11T04:23:13Z | 2023-10-11T03:31:28.000Z | 2023-10-11T03:31:28 | ---
license: mit
---
| [
-0.1285335123538971,
-0.1861683875322342,
0.6529128551483154,
0.49436232447624207,
-0.19319400191307068,
0.23607441782951355,
0.36072009801864624,
0.05056373029947281,
0.5793656706809998,
0.7400146722793579,
-0.650810182094574,
-0.23784008622169495,
-0.7102247476577759,
-0.0478255338966846... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
humane-lab/K-HATERS | humane-lab | 2023-10-27T03:45:26Z | 42 | 0 | null | [
"license:cc-by-4.0",
"region:us"
] | 2023-10-27T03:45:26Z | 2023-10-16T04:29:58.000Z | 2023-10-16T04:29:58 | ---
license: cc-by-4.0
---
| [
-0.1285339742898941,
-0.18616800010204315,
0.6529127359390259,
0.4943626821041107,
-0.1931934952735901,
0.2360742688179016,
0.360720157623291,
0.05056300014257431,
0.5793654322624207,
0.7400140166282654,
-0.6508105993270874,
-0.23783984780311584,
-0.7102248668670654,
-0.047826044261455536,... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
zelros/pj-ca | zelros | 2023-11-05T22:55:31Z | 42 | 0 | null | [
"insurance",
"region:us"
] | 2023-11-05T22:55:31Z | 2023-10-17T19:41:34.000Z | 2023-10-17T19:41:34 | ---
tags:
- insurance
---
This dataset contains question/answer pairs about a French legal protection insurance product (https://www.service-public.fr/particuliers/vosdroits/F3049?lang=en).
The objective of this dataset is to contribute to open source research projects aiming to, for instance:
* fine-tune LLMs on high-quality datasets, specializing them in the insurance domain
* develop new question/answer applications using Retrieval Augmented Generation (RAG) for insurance contracts
* assess the knowledge of language models in the insurance field
* more generally, apply LLMs to the insurance domain for better understanding and increased transparency of this industry.
Other datasets of the same kind are also available - or will be available soon - and are part of this research effort. See here: https://huggingface.co/collections/zelros/legal-protection-insurance-6536e8f389dd48faca78447e
An example usage of this dataset is available here: https://huggingface.co/spaces/zelros/The-legal-protection-insurance-comparator
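As an illustration of the RAG-style usage listed above, here is a minimal retrieval sketch with `sentence-transformers`; the `question`/`answer` column names and the model choice are assumptions, not guaranteed by this card:
```python
# Sketch only: the "question"/"answer" fields and the model are assumptions.
from datasets import load_dataset
from sentence_transformers import SentenceTransformer, util

ds = load_dataset("zelros/pj-ca", split="train")
model = SentenceTransformer("paraphrase-multilingual-MiniLM-L12-v2")

corpus_emb = model.encode(ds["question"], convert_to_tensor=True)
query_emb = model.encode("Que couvre la protection juridique ?", convert_to_tensor=True)

best = util.cos_sim(query_emb, corpus_emb).argmax().item()  # index of the nearest question
print(ds[best]["answer"])
```
| [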
-0.12690630555152893,
-0.6109647154808044,
0.20794343948364258,
0.2446521520614624,
-0.06387617439031601,
-0.19570039212703705,
0.15373264253139496,
-0.43992751836776733,
0.39488929510116577,
1.1311315298080444,
-0.28130269050598145,
-0.5277532935142517,
-0.4185519516468048,
-0.14612421393... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Taj-Mahal/magic-the-gathering | Taj-Mahal | 2023-10-31T20:58:35Z | 42 | 1 | null | [
"region:us"
] | 2023-10-31T20:58:35Z | 2023-10-26T19:43:32.000Z | 2023-10-26T19:43:32 | ---
dataset_info:
features:
- name: name
dtype: string
- name: firstPrinting
dtype: string
- name: manaCost
dtype: string
- name: convertedManaCost
dtype: float64
- name: type
dtype: string
- name: text
dtype: string
- name: power
dtype: string
- name: toughness
dtype: string
- name: loyalty
dtype: string
- name: layout
dtype: string
splits:
- name: train
num_bytes: 6999997
num_examples: 27703
- name: train_clean
num_bytes: 6813519.081146446
num_examples: 26965
download_size: 2539289
dataset_size: 13813516.081146445
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: train_clean
path: data/train_clean-*
---
# Dataset Card for "magic-the-gathering"
This is a HuggingFace adaptation of the [MTGJSON Atomic Card Database](https://mtgjson.com/data-models/card/card-atomic/) from the Taj-Mahal Data Science & Machine Learning Group.
## Usage
```
from datasets import load_dataset
dataset = load_dataset("Taj-Mahal/magic-the-gathering")
```
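Building on the snippet above, a small filtering example using the fields documented in the schema (the guards handle cards where `type` or `convertedManaCost` may be null):
```python
# Example: cheap creatures from the cleaned split.
cheap_creatures = dataset["train_clean"].filter(
    lambda card: "Creature" in (card["type"] or "")
    and card["convertedManaCost"] is not None
    and card["convertedManaCost"] <= 2.0
)
print(cheap_creatures[0]["name"], cheap_creatures[0]["manaCost"])
```
| [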
-0.45174098014831543,
-0.6950588822364807,
-0.3052142858505249,
-0.06703761219978333,
-0.5002790093421936,
0.09664130955934525,
0.04445413872599602,
-0.26922813057899475,
0.5714948177337646,
0.5423680543899536,
-0.9428568482398987,
-0.6494654417037964,
-0.42537567019462585,
0.0422885119915... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
JosueElias/pipeline_dataset2 | JosueElias | 2023-10-29T21:23:27Z | 42 | 0 | null | [
"region:us"
] | 2023-10-29T21:23:27Z | 2023-10-29T20:59:49.000Z | 2023-10-29T20:59:49 | ---
dataset_info:
features:
- name: title
dtype: string
- name: section
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 1522896529
num_examples: 2101279
download_size: 850821844
dataset_size: 1522896529
---
# Dataset Card for "pipeline_dataset2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.3034590482711792,
-0.08749112486839294,
0.12437744438648224,
0.19001281261444092,
-0.3166126608848572,
0.13243848085403442,
0.5400586128234863,
-0.1195332482457161,
0.75567626953125,
0.5601155757904053,
-0.8765096068382263,
-0.5750784277915955,
-0.8440394997596741,
-0.4758780598640442,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
ftang97/sw-consultancy-book | ftang97 | 2023-11-03T15:23:20Z | 42 | 0 | null | [
"region:us"
] | 2023-11-03T15:23:20Z | 2023-11-03T15:23:17.000Z | 2023-11-03T15:23:17 | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 2592763
num_examples: 1
download_size: 1529310
dataset_size: 2592763
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "sw-consultancy-book"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.5195884704589844,
-0.06594770401716232,
-0.03535483032464981,
0.23486678302288055,
-0.20163089036941528,
-0.023167749866843224,
0.2417953610420227,
-0.33982396125793457,
0.5954000353813171,
0.6877579092979431,
-1.0472073554992676,
-0.8918899893760681,
-0.27092474699020386,
-0.3203490078... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Ceroxlol/pictarine | Ceroxlol | 2023-11-03T16:59:34Z | 42 | 0 | null | [
"task_categories:question-answering",
"size_categories:n<1K",
"language:en",
"region:us"
] | 2023-11-03T16:59:34Z | 2023-11-03T16:53:38.000Z | 2023-11-03T16:53:38 | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 110425
num_examples: 520
download_size: 57760
dataset_size: 110425
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
task_categories:
- question-answering
language:
- en
pretty_name: pictarineChatbot
size_categories:
- n<1K
---
# Dataset Card for "pictarine"
Dataset for training pictarineChatbot | [
-0.13372467458248138,
-0.2952459752559662,
-0.4012910723686218,
0.5577057003974915,
-0.5757617354393005,
0.4014209508895874,
0.023343531414866447,
-0.1169564500451088,
0.2739781439304352,
0.3025149703025818,
-0.41545504331588745,
-0.6753764748573303,
-0.36801373958587646,
-0.35433885455131... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Flyfer/TestingFineTuningCB | Flyfer | 2023-11-13T10:17:27Z | 42 | 0 | null | [
"license:apache-2.0",
"region:us"
] | 2023-11-13T10:17:27Z | 2023-11-07T08:56:45.000Z | 2023-11-07T08:56:45 | ---
license: apache-2.0
dataset_info:
features:
- name: instruction
dtype: string
- name: input
dtype: string
- name: text
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 7667
num_examples: 4
download_size: 11176
dataset_size: 7667
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
| [
-0.12853392958641052,
-0.18616779148578644,
0.6529127955436707,
0.49436280131340027,
-0.19319361448287964,
0.23607419431209564,
0.36072003841400146,
0.050563063472509384,
0.579365611076355,
0.7400140762329102,
-0.6508104205131531,
-0.23783954977989197,
-0.7102249264717102,
-0.0478260256350... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
realshyfox/GuanacoFormat | realshyfox | 2023-11-07T21:46:46Z | 42 | 0 | null | [
"license:llama2",
"region:us"
] | 2023-11-07T21:46:46Z | 2023-11-07T20:07:40.000Z | 2023-11-07T20:07:40 | ---
license: llama2
---
| [
-0.12853392958641052,
-0.18616779148578644,
0.6529127955436707,
0.49436280131340027,
-0.19319361448287964,
0.23607419431209564,
0.36072003841400146,
0.050563063472509384,
0.579365611076355,
0.7400140762329102,
-0.6508104205131531,
-0.23783954977989197,
-0.7102249264717102,
-0.0478260256350... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
GabrielGaleana34/Products | GabrielGaleana34 | 2023-11-13T04:26:55Z | 42 | 0 | null | [
"license:mit",
"region:us"
] | 2023-11-13T04:26:55Z | 2023-11-08T00:10:44.000Z | 2023-11-08T00:10:44 | ---
license: mit
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: image
dtype: image
- name: caption
dtype: string
splits:
- name: train
num_bytes: 91453253.0
num_examples: 451
download_size: 88009496
dataset_size: 91453253.0
---
| [
-0.12853392958641052,
-0.18616779148578644,
0.6529127955436707,
0.49436280131340027,
-0.19319361448287964,
0.23607419431209564,
0.36072003841400146,
0.050563063472509384,
0.579365611076355,
0.7400140762329102,
-0.6508104205131531,
-0.23783954977989197,
-0.7102249264717102,
-0.0478260256350... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
dz-data-ai/A10_CMD_SCRIPT_MULLEN_TEST | dz-data-ai | 2023-11-23T07:23:11Z | 42 | 0 | null | [
"region:us"
] | 2023-11-23T07:23:11Z | 2023-11-13T07:11:20.000Z | 2023-11-13T07:11:20 | Entry not found | [
-0.32276472449302673,
-0.22568407654762268,
0.8622258901596069,
0.4346148371696472,
-0.5282984972000122,
0.7012965679168701,
0.7915717363357544,
0.07618629932403564,
0.7746022939682007,
0.2563222646713257,
-0.785281777381897,
-0.22573848068714142,
-0.9104482531547546,
0.5715669393539429,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
mengmengmmm/concode_trainuse | mengmengmmm | 2023-11-13T13:37:42Z | 42 | 0 | null | [
"region:us"
] | 2023-11-13T13:37:42Z | 2023-11-13T13:36:15.000Z | 2023-11-13T13:36:15 | Entry not found | [
-0.32276472449302673,
-0.22568407654762268,
0.8622258901596069,
0.4346148371696472,
-0.5282984972000122,
0.7012965679168701,
0.7915717363357544,
0.07618629932403564,
0.7746022939682007,
0.2563222646713257,
-0.785281777381897,
-0.22573848068714142,
-0.9104482531547546,
0.5715669393539429,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
joseluhf11/oct-object-detection | joseluhf11 | 2023-11-22T08:36:49Z | 42 | 0 | null | [
"region:us"
] | 2023-11-22T08:36:49Z | 2023-11-13T13:51:02.000Z | 2023-11-13T13:51:02 | ---
dataset_info:
features:
- name: image
dtype: image
- name: objects
struct:
- name: bbox
sequence:
sequence: int64
- name: categories
sequence: string
splits:
- name: train
num_bytes: 666483144.628
num_examples: 4698
download_size: 76903163
dataset_size: 666483144.628
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "oct-object-detection"
The dataset is composed of images, each with individual object detection boxes in COCO format (x, y, w, h). Images are OCTs (a type of eye scan) with boxes indicating features associated with AMD disease.
[Source dataset](https://doi.org/10.1101/2023.03.29.534704)
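A minimal visualization sketch with Pillow, using the documented `objects` structure (COCO-style `x, y, w, h` boxes):
```python
from datasets import load_dataset
from PIL import ImageDraw

ds = load_dataset("joseluhf11/oct-object-detection", split="train")
sample = ds[0]
img = sample["image"].convert("RGB")  # decoded PIL image
draw = ImageDraw.Draw(img)
for (x, y, w, h), category in zip(sample["objects"]["bbox"], sample["objects"]["categories"]):
    draw.rectangle([x, y, x + w, y + h], outline="red", width=2)  # (x, y, w, h) -> corners
    draw.text((x, y), category, fill="red")
img.save("sample_with_boxes.png")
```
| [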
-0.7288023829460144,
-0.8208656907081604,
0.08783753961324692,
-0.29677173495292664,
-0.6897058486938477,
0.09807159006595612,
0.5399805307388306,
-0.4976142644882202,
0.5162763595581055,
0.9565261006355286,
-0.4565790891647339,
-0.7641271948814392,
-0.4670449495315552,
0.39208531379699707... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
josephimperial/CebuaNER | josephimperial | 2023-11-17T12:07:58Z | 42 | 0 | null | [
"size_categories:1K<n<10K",
"license:cc-by-nc-4.0",
"arxiv:2310.00679",
"region:us"
] | 2023-11-17T12:07:58Z | 2023-11-17T11:58:38.000Z | 2023-11-17T11:58:38 | ---
license: cc-by-nc-4.0
size_categories:
- 1K<n<10K
---
This repository contains CebuaNER, the largest gold-standard dataset for named entities in Cebuano. This dataset is used in the [paper](https://arxiv.org/abs/2310.00679) `CebuaNER: A New Baseline Cebuano Named Entity Recognition Model`, to be presented at PACLIC 2023, authored by Ma. Beatrice Emanuela N. Pilar, Ellyza Mari J. Papas, Mary Loise Buenaventura, Dane C. Dedoroy, Myron Montefalcon, Jay Rhald Padilla, Lany Maceda, Mideth Abisado, and Joseph Imperial.
## Data
The dataset contribution of this study is a compilation of Cebuano news articles from two local news stations, Yes the Best Dumaguete and
the Filipinas Bisaya. To further increase the data count, we also incorporated another publicly available dataset from [Sunstar Cebu](https://github.com/rjrequina/Cebuano-POS-Tagger/blob/master/eval/data/scraped/news-raw.txt), pre-collected by independent researcher Arjemariel Requina. The total accumulated and filtered size of the Cebuano dataset is 4258 articles. All the data used for this study is purely for non-commercial research purposes and falls under Fair Use.
| Source | Original | Cleaned |
| ------------------------| ------------- | ------- |
| Yes the Best Dumaguete | 1484 | 781 |
| Filipinas Bisaya | 769 | 377 |
| Sunstar | 3100 | 3100 |
## Note on Data Cataloging
Please send an email before submitting this repository to any data cataloging, data aggregation, or benchmarking projects/initiatives. The proponents of the paper behind this dataset would like to be acknowledged appropriately or involved in co-authorship.
## Contact
For questions, you may reach the corresponding author of the study at:
**Joseph Marvin Imperial**\
Faculty Member, Department of Computer Science\
Lab Head, NU Human Language Technology Lab\
jrimperial@national-u.edu.ph | [
-0.3761669993400574,
0.0866926908493042,
0.1247243583202362,
0.4606899619102478,
-0.46812909841537476,
-0.268138587474823,
-0.23521602153778076,
-0.35500505566596985,
0.3941730558872223,
1.1633199453353882,
-0.24005882441997528,
-0.6712182760238647,
-0.4943116307258606,
0.635182797908783,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
arianhosseini/openai_summarize_unlabelled | arianhosseini | 2023-11-18T21:34:06Z | 42 | 0 | null | [
"region:us"
] | 2023-11-18T21:34:06Z | 2023-11-18T21:33:58.000Z | 2023-11-18T21:33:58 | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: label
dtype: string
splits:
- name: train
num_bytes: 167037359
num_examples: 107543
download_size: 101979854
dataset_size: 167037359
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "openai_summarize_unlabelled"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.5498514771461487,
-0.0042190817184746265,
-0.07743614912033081,
0.2613656222820282,
-0.3403221666812897,
-0.13584265112876892,
-0.00758151151239872,
-0.014219074510037899,
0.998954176902771,
0.5618550181388855,
-0.5642372965812683,
-0.7596110701560974,
-0.6582470536231995,
-0.2543130517... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
portafolio/llamadas-celular-04 | portafolio | 2023-11-20T12:37:49Z | 42 | 0 | null | [
"task_categories:conversational",
"size_categories:n<1K",
"size_categories:divition 80-20",
"language:es",
"region:us"
] | 2023-11-20T12:37:49Z | 2023-11-19T15:48:18.000Z | 2023-11-19T15:48:18 | ---
task_categories:
- conversational
language:
- es
pretty_name: llamadas telefónicas
size_categories:
- n<1K
- divition 80-20
--- | [
-0.1285335123538971,
-0.1861683875322342,
0.6529128551483154,
0.49436232447624207,
-0.19319400191307068,
0.23607441782951355,
0.36072009801864624,
0.05056373029947281,
0.5793656706809998,
0.7400146722793579,
-0.650810182094574,
-0.23784008622169495,
-0.7102247476577759,
-0.0478255338966846... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
MrBananaHuman/kor_ethical_question_answer | MrBananaHuman | 2023-11-20T11:08:47Z | 42 | 1 | null | [
"license:cc-by-nc-nd-4.0",
"region:us"
] | 2023-11-20T11:08:47Z | 2023-11-20T11:08:01.000Z | 2023-11-20T11:08:01 | ---
license: cc-by-nc-nd-4.0
---
| [
-0.1285335123538971,
-0.1861683875322342,
0.6529128551483154,
0.49436232447624207,
-0.19319400191307068,
0.23607441782951355,
0.36072009801864624,
0.05056373029947281,
0.5793656706809998,
0.7400146722793579,
-0.650810182094574,
-0.23784008622169495,
-0.7102247476577759,
-0.0478255338966846... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
PaulTran/translated_math | PaulTran | 2023-11-21T01:06:23Z | 42 | 0 | null | [
"region:us"
] | 2023-11-21T01:06:23Z | 2023-11-20T12:43:30.000Z | 2023-11-20T12:43:30 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: answer
dtype: string
- name: explanation
dtype: string
- name: choices
sequence: string
- name: question
dtype: string
- name: id
dtype: string
splits:
- name: train
num_bytes: 7908449
num_examples: 28386
download_size: 1068029
dataset_size: 7908449
---
# Dataset Card for "translated_math"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.45498740673065186,
-0.35691264271736145,
0.14711567759513855,
0.4955821633338928,
-0.26428425312042236,
-0.04074779525399208,
-0.08143892139196396,
-0.04963758960366249,
0.7440202236175537,
0.4875999093055725,
-0.8252142667770386,
-0.8094675540924072,
-0.7669659852981567,
-0.21924106776... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Gaxys/wayuu_spa | Gaxys | 2023-11-26T00:14:24Z | 42 | 0 | null | [
"region:us"
] | 2023-11-26T00:14:24Z | 2023-11-22T09:44:00.000Z | 2023-11-22T09:44:00 | ---
dataset_info:
features:
- name: translation
struct:
- name: guc
dtype: string
- name: spa
dtype: string
- name: id
dtype: int64
splits:
- name: train
num_bytes: 10761772
num_examples: 46827
download_size: 6083524
dataset_size: 10761772
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
| [
-0.12853392958641052,
-0.18616779148578644,
0.6529127955436707,
0.49436280131340027,
-0.19319361448287964,
0.23607419431209564,
0.36072003841400146,
0.050563063472509384,
0.579365611076355,
0.7400140762329102,
-0.6508104205131531,
-0.23783954977989197,
-0.7102249264717102,
-0.0478260256350... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
tankaplans/buddismEval | tankaplans | 2023-11-27T05:10:36Z | 42 | 0 | null | [
"region:us"
] | 2023-11-27T05:10:36Z | 2023-11-27T02:59:41.000Z | 2023-11-27T02:59:41 | Entry not found | [
-0.32276472449302673,
-0.22568407654762268,
0.8622258901596069,
0.4346148371696472,
-0.5282984972000122,
0.7012965679168701,
0.7915717363357544,
0.07618629932403564,
0.7746022939682007,
0.2563222646713257,
-0.785281777381897,
-0.22573848068714142,
-0.9104482531547546,
0.5715669393539429,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
liweili/c4_200m | liweili | 2022-10-23T11:00:46Z | 41 | 25 | null | [
"task_categories:text-generation",
"source_datasets:allenai/c4",
"language:en",
"grammatical-error-correction",
"region:us"
] | 2022-10-23T11:00:46Z | 2022-03-02T23:29:22.000Z | 2022-03-02T23:29:22 | ---
language:
- en
source_datasets:
- allenai/c4
task_categories:
- text-generation
pretty_name: C4 200M Grammatical Error Correction Dataset
tags:
- grammatical-error-correction
---
# C4 200M
## Dataset Summary
c4_200m is a collection of 185 million sentence pairs generated from the cleaned English portion of the C4 dataset. This dataset can be used for grammatical error correction (GEC) tasks.
The corruption edits and the scripts used to synthesize this dataset are taken from: [C4_200M Synthetic Dataset](https://github.com/google-research-datasets/C4_200M-synthetic-dataset-for-grammatical-error-correction)
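For quick experimentation, the pairs can be streamed with the `datasets` library. This is a minimal sketch rather than official loading instructions: it assumes the dataset is loadable under the `liweili/c4_200m` id with a `train` split, and uses streaming so the full 185M pairs are never materialized locally.
```python
from datasets import load_dataset

# Stream the corpus instead of downloading all 185M pairs up front.
ds = load_dataset("liweili/c4_200m", split="train", streaming=True)

# Peek at a few (corrupted, corrected) sentence pairs.
for example in ds.take(3):
    print(example["input"], "->", example["output"])
```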
## Description
As discussed above, this dataset contains 185 million sentence pairs. Each example has two attributes: `input` and `output`. Here is a sample from the dataset:
```
{
"input": "Bitcoin is for $7,094 this morning, which CoinDesk says."
"output": "Bitcoin goes for $7,094 this morning, according to CoinDesk."
}
``` | [
-0.3422134518623352,
-0.7494551539421082,
0.5181824564933777,
0.08256334811449051,
0.0328662283718586,
0.18976685404777527,
-0.2387077510356903,
-0.3461028039455414,
0.16504374146461487,
0.5613929033279419,
-0.45490849018096924,
-0.5462290048599243,
-0.4097924828529358,
0.4829891622066498,... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
SetFit/amazon_reviews_multi_es | SetFit | 2022-03-23T15:43:09Z | 41 | 0 | null | [
"region:us"
] | 2022-03-23T15:43:09Z | 2022-03-13T02:45:47.000Z | 2022-03-13T02:45:47 | # Amazon Reviews Multi (Spanish)
This dataset is a port of the official [`amazon_reviews_multi` dataset](https://huggingface.co/datasets/amazon_reviews_multi) on the Hub, containing only the Spanish language version. It has been reduced to 3 columns (plus a 4th, `label_text`) that are relevant to the SetFit task. | [
-0.8170874714851379,
-0.4594860076904297,
-0.10711963474750519,
0.9387503266334534,
-0.27773210406303406,
0.1738867163658142,
0.010639384388923645,
-0.6464211940765381,
0.8231520652770996,
1.0282046794891357,
-1.1254680156707764,
-0.4142968952655792,
-0.16464002430438995,
0.429659187793731... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
google/wit | google | 2022-07-04T10:47:07Z | 41 | 14 | wit | [
"task_categories:text-retrieval",
"task_categories:image-to-text",
"task_ids:image-captioning",
"annotations_creators:machine-generated",
"language_creators:found",
"multilinguality:multilingual",
"size_categories:10M<n<100M",
"source_datasets:original",
"source_datasets:extended|wikipedia",
"lang... | 2022-07-04T10:47:07Z | 2022-05-02T11:22:32.000Z | 2022-05-02T11:22:32 | ---
annotations_creators:
- machine-generated
language_creators:
- found
language:
- af
- ar
- ast
- azb
- be
- bg
- bn
- br
- ca
- cs
- cy
- da
- de
- el
- en
- eo
- es
- et
- eu
- fa
- fi
- fr
- fy
- ga
- gl
- hr
- hu
- hy
- id
- it
- iw
- ja
- ka
- ko
- la
- lt
- lv
- mk
- ml
- ms
- nl
- nn
- 'no'
- pl
- pt
- ro
- ru
- sk
- sl
- sr
- sv
- th
- tr
- uk
- ur
- vi
- vo
- zh
license:
- cc-by-sa-3.0
multilinguality:
- multilingual
paperswithcode_id: wit
pretty_name: Wikipedia-based Image Text
size_categories:
- 10M<n<100M
source_datasets:
- original
- extended|wikipedia
task_categories:
- text-retrieval
- image-to-text
task_ids:
- text-retrieval-other-text-image-retrieval
- image-captioning
---
# Dataset Card for WIT
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Dataset Preprocessing](#dataset-preprocessing)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [WIT homepage](https://github.com/google-research-datasets/wit)
- **Repository:** [WIT repository](https://github.com/google-research-datasets/wit)
- **Paper:** [WIT: Wikipedia-based Image Text Dataset for Multimodal Multilingual Machine Learning
](https://arxiv.org/abs/2103.01913)
- **Leaderboard:** [WIT leaderboard](https://www.kaggle.com/c/wikipedia-image-caption)
- **Point of Contact:** [WIT e-mail](mailto:wit-dataset@google.com)
### Dataset Summary
Wikipedia-based Image Text (WIT) Dataset is a large multimodal multilingual dataset. WIT is composed of a curated set of 37.6 million entity rich image-text examples with 11.5 million unique images across 108 Wikipedia languages. Its size enables WIT to be used as a pretraining dataset for multimodal machine learning models.
A few unique advantages of WIT:
* The largest multimodal dataset (at the time of writing) by the number of image-text examples.
* Massively multilingual (the first of its kind), with coverage for 100+ languages.
* A diverse collection of concepts and real-world entities.
* Challenging real-world test sets.
### Dataset Preprocessing
This dataset doesn't download the images locally by default. Instead, it exposes URLs to the images. To fetch the images, use the following code:
```python
from concurrent.futures import ThreadPoolExecutor
from functools import partial
import io
import urllib.request
import PIL.Image
from datasets import load_dataset
from datasets.utils.file_utils import get_datasets_user_agent
def fetch_single_image(image_url, timeout=None, retries=0):
for _ in range(retries + 1):
try:
request = urllib.request.Request(
image_url,
data=None,
headers={"user-agent": get_datasets_user_agent()},
)
with urllib.request.urlopen(request, timeout=timeout) as req:
image = PIL.Image.open(io.BytesIO(req.read()))
break
except Exception:
image = None
return image
def fetch_images(batch, num_threads, timeout=None, retries=0):
fetch_single_image_with_args = partial(fetch_single_image, timeout=timeout, retries=retries)
with ThreadPoolExecutor(max_workers=num_threads) as executor:
batch["image"] = list(executor.map(fetch_single_image_with_args, batch["image_url"]))
return batch
num_threads = 20
dset = load_dataset("wit")
dset = dset.map(fetch_images, batched=True, batch_size=100, fn_kwargs={"num_threads": num_threads})
```
### Supported Tasks and Leaderboards
- `image-captioning`: This dataset can be used to train a model for image captioning where the goal is to predict a caption given the image.
- `text-retrieval`: The goal in this task is to build a model that retrieves the text closest to an image.
In these tasks, any combination of the `caption_reference_description`, `caption_attribution_description` and `caption_alt_text_description` fields can be used as the input text/caption.
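For example, a simple way to pick a training caption is to fall back across the three caption fields. This is only a sketch of one reasonable default (reusing the `dset` object from the preprocessing snippet above); the preference order is a modeling choice, not something prescribed by the dataset.
```python
def get_caption(example):
    # Prefer the visible caption, then the alt text, then the attribution text.
    return (
        example["caption_reference_description"]
        or example["caption_alt_text_description"]
        or example["caption_attribution_description"]
    )

# Keep only rows that have at least one usable caption.
dset = dset.filter(lambda ex: get_caption(ex) is not None)
```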
### Languages
The dataset contains examples from all Wikipedia languages, with the following stats:
Image-Text | # Lang | Uniq. Images | # Lang
------------ | ------ | ------------- | ------
total > 1M | 9 | images > 1M | 6
total > 500K | 10 | images > 500K | 12
total > 100K | 36 | images > 100K | 35
total > 50K | 15 | images > 50K | 17
total > 14K | 38 | images > 13K | 38
## Dataset Structure
### Data Instances
```
{
'language': 'en',
'page_url': 'https://en.wikipedia.org/wiki/Oxydactylus',
'image_url': 'https://upload.wikimedia.org/wikipedia/commons/5/5f/Oxydactylus_longipes_fm.jpg',
'page_title': 'Oxydactylus',
'section_title': None,
'hierarchical_section_title': 'Oxydactylus',
'caption_reference_description': None,
'caption_attribution_description': 'English: Mounted skeleton of Oxydactylus longipes in the Field Museum of Natural History.',
'caption_alt_text_description': None,
'mime_type': 'image/jpeg',
'original_height': 3564,
'original_width': 2748,
'is_main_image': True,
'attribution_passes_lang_id': True,
'page_changed_recently': True,
'context_page_description': 'Oxydactylus is an extinct genus of camelid endemic to North America. It lived from the Late Oligocene to the Middle Miocene, existing for approximately 14 million years. The name is from the Ancient Greek οξύς and δάκτυλος.\nThey had very long legs and necks, and were probably adapted to eating high vegetation, much like modern giraffes. Unlike modern camelids, they had hooves, rather than tough sole-pads, and splayed toes.',
'context_section_description': 'Oxydactylus is an extinct genus of camelid endemic to North America. It lived from the Late Oligocene to the Middle Miocene (28.4–13.7 mya), existing for approximately 14 million years. The name is from the Ancient Greek οξύς (oxys, "sharp")and δάκτυλος (daktylos, "finger").\n \nThey had very long legs and necks, and were probably adapted to eating high vegetation, much like modern giraffes. Unlike modern camelids, they had hooves, rather than tough sole-pads, and splayed toes.'
}
```
### Data Fields
- `language`: Language code depicting wikipedia language of the page
- `page_url`: URL to wikipedia page
- `image_url`: URL to wikipedia image
- `page_title`: Wikipedia page's title
- `section_title`: Section's title
- `hierarchical_section_title`: Hierarchical section's title
- `caption_reference_description`: This is the caption that is visible on the wiki page directly below the image.
- `caption_attribution_description`: This is the text found on the Wikimedia page of the image. This text is common to all occurrences of that image across all Wikipedias and thus can be in a language different to the original page article.
- `caption_alt_text_description`: This is the “alt” text associated with the image. While not visible in general, it is commonly used for accessibility / screen readers
- `mime_type`: Mime type associated to the image.
- `original_height`: Image height
- `original_width`: Image width
- `is_main_image`: Flag determining if the image is the first image of the page. Usually displayed on the top-right part of the page when using web browsers.
- `attribution_passes_lang_id`: Whether the `language` field matches the attribution language (written in the prefix of the attribution description).
- `page_changed_recently`: [More Information Needed]
- `context_page_description`: Page description corresponds to the short description of the page. It provides a concise explanation of the scope of the page.
- `context_section_description`: Text within the image's section.
<p align='center'>
<img width='75%' src='https://production-media.paperswithcode.com/datasets/Screenshot_2021-03-04_at_14.26.02.png' alt='WIT annotation example' /> <br/>
<b>Figure: WIT annotation example. </b>
</p>
Details on the field content can be found directly in the [paper, figure 5 and table 12.](https://arxiv.org/abs/2103.01913)
### Data Splits
All data is held in the `train` split, with a total of 37,046,386 rows.
## Dataset Creation
### Curation Rationale
From the [repository](https://github.com/google-research-datasets/wit#motivation):
> Multimodal visio-linguistic models rely on a rich dataset to help them learn to model the relationship between images and texts. Having large image-text datasets can significantly improve performance, as shown by recent works. Furthermore the lack of language coverage in existing datasets (which are mostly only in English) also impedes research in the multilingual multimodal space – we consider this a lost opportunity given the potential shown in leveraging images (as a language-agnostic medium) to help improve our multilingual textual understanding.
>
> To address these challenges and advance research on multilingual, multimodal learning we created the Wikipedia-based Image Text (WIT) Dataset. WIT is created by extracting multiple different texts associated with an image (e.g., as shown in the above image) from Wikipedia articles and Wikimedia image links. This was accompanied by rigorous filtering to only retain high quality image-text sets.
>
> The resulting dataset contains over 37.6 million image-text sets – making WIT the largest multimodal dataset (publicly available at the time of this writing) with unparalleled multilingual coverage – with 12K+ examples in each of 108 languages (53 languages have 100K+ image-text pairs).
### Source Data
#### Initial Data Collection and Normalization
From the [paper, section 3.1](https://arxiv.org/abs/2103.01913):
> We started with all Wikipedia content pages (i.e., ignoring other pages that have discussions, comments and such). These number about ∼124M pages across 279 languages.
#### Who are the source language producers?
Text was extracted from Wikipedia.
### Annotations
#### Annotation process
WIT was constructed using an automatic process. However it was human-validated.
From the [paper, section 3.7](https://arxiv.org/abs/2103.01913):
> To further verify the quality of the WIT dataset we performed a study using (crowd-sourced) human annotators. As seen in Fig. 3, we asked raters to answer 3 questions. Given an image and the page title, raters first evaluate the quality of the attribution description and reference description in the first two questions (order randomized). The third question understands the contextual quality of these text descriptions given the page description and caption. Each response is on a 3-point scale: "Yes" if the text perfectly describes the image, "Maybe" if it is sufficiently explanatory and "No" if it is irrelevant or the image is inappropriate.
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
From the [paper, section 3.4](https://arxiv.org/abs/2103.01913):
> Lastly we found that certain image-text pairs occurred very
frequently. These were often generic images that did not have
much to do with the main article page. Common examples
included flags, logos, maps, insignia and such. To prevent
biasing the data, we heavily under-sampled all such images
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
```bibtex
@article{srinivasan2021wit,
title={WIT: Wikipedia-based Image Text Dataset for Multimodal Multilingual Machine Learning},
author={Srinivasan, Krishna and Raman, Karthik and Chen, Jiecao and Bendersky, Michael and Najork, Marc},
journal={arXiv preprint arXiv:2103.01913},
year={2021}
}
```
### Contributions
Thanks to [@thomasw21](https://github.com/thomasw21), [@nateraw](https://github.com/nateraw) and [@hassiahk](https://github.com/hassiahk) for adding this dataset. | [
-0.6656076312065125,
-0.5034418106079102,
0.19551177322864532,
-0.0853305384516716,
-0.44727781414985657,
-0.23967157304286957,
-0.2058749794960022,
-0.6034829020500183,
0.2808675765991211,
0.3733353614807129,
-0.7191671133041382,
-0.5403562784194946,
-0.42393526434898376,
0.31614708900451... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
BeIR/nq-generated-queries | BeIR | 2022-10-23T06:15:15Z | 41 | 0 | beir | [
"task_categories:text-retrieval",
"task_ids:entity-linking-retrieval",
"task_ids:fact-checking-retrieval",
"multilinguality:monolingual",
"language:en",
"license:cc-by-sa-4.0",
"region:us"
] | 2022-10-23T06:15:15Z | 2022-06-17T13:20:26.000Z | 2022-06-17T13:20:26 | ---
annotations_creators: []
language_creators: []
language:
- en
license:
- cc-by-sa-4.0
multilinguality:
- monolingual
paperswithcode_id: beir
pretty_name: BEIR Benchmark
size_categories:
msmarco:
- 1M<n<10M
trec-covid:
- 100k<n<1M
nfcorpus:
- 1K<n<10K
nq:
- 1M<n<10M
hotpotqa:
- 1M<n<10M
fiqa:
- 10K<n<100K
arguana:
- 1K<n<10K
touche-2020:
- 100K<n<1M
cqadupstack:
- 100K<n<1M
quora:
- 100K<n<1M
dbpedia:
- 1M<n<10M
scidocs:
- 10K<n<100K
fever:
- 1M<n<10M
climate-fever:
- 1M<n<10M
scifact:
- 1K<n<10K
source_datasets: []
task_categories:
- text-retrieval
- zero-shot-retrieval
- information-retrieval
- zero-shot-information-retrieval
task_ids:
- passage-retrieval
- entity-linking-retrieval
- fact-checking-retrieval
- tweet-retrieval
- citation-prediction-retrieval
- duplication-question-retrieval
- argument-retrieval
- news-retrieval
- biomedical-information-retrieval
- question-answering-retrieval
---
# Dataset Card for BEIR Benchmark
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://github.com/UKPLab/beir
- **Repository:** https://github.com/UKPLab/beir
- **Paper:** https://openreview.net/forum?id=wCu6T5xFjeJ
- **Leaderboard:** https://docs.google.com/spreadsheets/d/1L8aACyPaXrL8iEelJLGqlMqXKPX2oSP_R10pZoy77Ns
- **Point of Contact:** nandan.thakur@uwaterloo.ca
### Dataset Summary
BEIR is a heterogeneous benchmark that has been built from 18 diverse datasets representing 9 information retrieval tasks:
- Fact-checking: [FEVER](http://fever.ai), [Climate-FEVER](http://climatefever.ai), [SciFact](https://github.com/allenai/scifact)
- Question-Answering: [NQ](https://ai.google.com/research/NaturalQuestions), [HotpotQA](https://hotpotqa.github.io), [FiQA-2018](https://sites.google.com/view/fiqa/)
- Bio-Medical IR: [TREC-COVID](https://ir.nist.gov/covidSubmit/index.html), [BioASQ](http://bioasq.org), [NFCorpus](https://www.cl.uni-heidelberg.de/statnlpgroup/nfcorpus/)
- News Retrieval: [TREC-NEWS](https://trec.nist.gov/data/news2019.html), [Robust04](https://trec.nist.gov/data/robust/04.guidelines.html)
- Argument Retrieval: [Touche-2020](https://webis.de/events/touche-20/shared-task-1.html), [ArguAna](http://argumentation.bplaced.net/arguana/data)
- Duplicate Question Retrieval: [Quora](https://www.quora.com/q/quoradata/First-Quora-Dataset-Release-Question-Pairs), [CqaDupstack](http://nlp.cis.unimelb.edu.au/resources/cqadupstack/)
- Citation-Prediction: [SCIDOCS](https://allenai.org/data/scidocs)
- Tweet Retrieval: [Signal-1M](https://research.signal-ai.com/datasets/signal1m-tweetir.html)
- Entity Retrieval: [DBPedia](https://github.com/iai-group/DBpedia-Entity/)
All these datasets have been preprocessed and can be used for your experiments.
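As a quick sanity check, a dataset can be downloaded and loaded with the `beir` package. A minimal sketch using NQ; it assumes the public download URL from the table below and the standard `GenericDataLoader` quickstart API.
```python
from beir import util
from beir.datasets.data_loader import GenericDataLoader

# Download and unzip one of the preprocessed datasets (NQ here).
url = "https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/nq.zip"
data_path = util.download_and_unzip(url, "datasets")

# Load the corpus, queries and relevance judgements for the test split.
corpus, queries, qrels = GenericDataLoader(data_folder=data_path).load(split="test")
print(len(corpus), "documents,", len(queries), "queries")
```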
### Supported Tasks and Leaderboards
The benchmark supports zero-shot evaluation of retrieval models; results are typically reported with nDCG@10.
The current best performing models can be found on the [official leaderboard](https://docs.google.com/spreadsheets/d/1L8aACyPaXrL8iEelJLGqlMqXKPX2oSP_R10pZoy77Ns).
### Languages
All tasks are in English (`en`).
## Dataset Structure
All BEIR datasets must contain a corpus, queries and qrels (relevance judgments file). They must be in the following format:
- `corpus` file: a `.jsonl` file (jsonlines) that contains a list of dictionaries, each with three fields `_id` with unique document identifier, `title` with document title (optional) and `text` with document paragraph or passage. For example: `{"_id": "doc1", "title": "Albert Einstein", "text": "Albert Einstein was a German-born...."}`
- `queries` file: a `.jsonl` file (jsonlines) that contains a list of dictionaries, each with two fields `_id` with unique query identifier and `text` with query text. For example: `{"_id": "q1", "text": "Who developed the mass-energy equivalence formula?"}`
- `qrels` file: a `.tsv` file (tab-separated) that contains three columns, i.e. the `query-id`, `corpus-id` and `score`, in this order. Keep the first row as a header. For example: `q1 doc1 1`
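To make the layout concrete, the three files can also be parsed by hand. A minimal sketch; the file names (`corpus.jsonl`, `queries.jsonl`, `qrels/test.tsv`) follow the convention above but are otherwise assumptions about a particular download.
```python
import csv
import json

# Corpus and queries are jsonlines: one JSON object per line.
with open("corpus.jsonl") as f:
    corpus = {doc["_id"]: doc for doc in map(json.loads, f)}

with open("queries.jsonl") as f:
    queries = {q["_id"]: q["text"] for q in map(json.loads, f)}

# Qrels are tab-separated with a header row: query-id, corpus-id, score.
qrels = {}
with open("qrels/test.tsv") as f:
    for row in csv.DictReader(f, delimiter="\t"):
        qrels.setdefault(row["query-id"], {})[row["corpus-id"]] = int(row["score"])
```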
### Data Instances
A high level example of any beir dataset:
```python
corpus = {
"doc1" : {
"title": "Albert Einstein",
"text": "Albert Einstein was a German-born theoretical physicist. who developed the theory of relativity, \
one of the two pillars of modern physics (alongside quantum mechanics). His work is also known for \
its influence on the philosophy of science. He is best known to the general public for his mass–energy \
equivalence formula E = mc2, which has been dubbed 'the world's most famous equation'. He received the 1921 \
Nobel Prize in Physics 'for his services to theoretical physics, and especially for his discovery of the law \
of the photoelectric effect', a pivotal step in the development of quantum theory."
},
"doc2" : {
"title": "", # Keep title an empty string if not present
"text": "Wheat beer is a top-fermented beer which is brewed with a large proportion of wheat relative to the amount of \
malted barley. The two main varieties are German Weißbier and Belgian witbier; other types include Lambic (made\
with wild yeast), Berliner Weisse (a cloudy, sour beer), and Gose (a sour, salty beer)."
},
}
queries = {
"q1" : "Who developed the mass-energy equivalence formula?",
"q2" : "Which beer is brewed with a large proportion of wheat?"
}
qrels = {
"q1" : {"doc1": 1},
"q2" : {"doc2": 1},
}
```
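With `qrels` in this shape, evaluation reduces to comparing it against a model's scored results. A minimal sketch using `beir`'s evaluator; the `results` scores below are made up for illustration.
```python
from beir.retrieval.evaluation import EvaluateRetrieval

# Hypothetical retrieval scores: query-id -> {doc-id: score}.
results = {
    "q1": {"doc1": 0.9, "doc2": 0.1},
    "q2": {"doc1": 0.2, "doc2": 0.8},
}

qrels = {
    "q1": {"doc1": 1},
    "q2": {"doc2": 1},
}

# Returns nDCG, MAP, Recall and Precision at each cutoff in k_values.
ndcg, _map, recall, precision = EvaluateRetrieval.evaluate(qrels, results, k_values=[1, 2])
print(ndcg)
```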
### Data Fields
Examples from all configurations have the following features:
### Corpus
- `corpus`: a `dict` feature representing the document title and passage text, made up of:
- `_id`: a `string` feature representing the unique document id
- `title`: a `string` feature, denoting the title of the document.
- `text`: a `string` feature, denoting the text of the document.
### Queries
- `queries`: a `dict` feature representing the query, made up of:
- `_id`: a `string` feature representing the unique query id
- `text`: a `string` feature, denoting the text of the query.
### Qrels
- `qrels`: a `dict` feature representing the query document relevance judgements, made up of:
- `query-id`: a `string` feature representing the query id
- `corpus-id`: a `string` feature, denoting the document id.
- `score`: a `int32` feature, denoting the relevance judgement between query and document.
### Data Splits
| Dataset | Website| BEIR-Name | Type | Queries | Corpus | Rel D/Q | Down-load | md5 |
| -------- | -----| ---------| --------- | ----------- | ---------| ---------| :----------: | :------:|
| MSMARCO | [Homepage](https://microsoft.github.io/msmarco/)| ``msmarco`` | ``train``<br>``dev``<br>``test``| 6,980 | 8.84M | 1.1 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/msmarco.zip) | ``444067daf65d982533ea17ebd59501e4`` |
| TREC-COVID | [Homepage](https://ir.nist.gov/covidSubmit/index.html)| ``trec-covid``| ``test``| 50| 171K| 493.5 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/trec-covid.zip) | ``ce62140cb23feb9becf6270d0d1fe6d1`` |
| NFCorpus | [Homepage](https://www.cl.uni-heidelberg.de/statnlpgroup/nfcorpus/) | ``nfcorpus`` | ``train``<br>``dev``<br>``test``| 323 | 3.6K | 38.2 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/nfcorpus.zip) | ``a89dba18a62ef92f7d323ec890a0d38d`` |
| BioASQ | [Homepage](http://bioasq.org) | ``bioasq``| ``train``<br>``test`` | 500 | 14.91M | 8.05 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#2-bioasq) |
| NQ | [Homepage](https://ai.google.com/research/NaturalQuestions) | ``nq``| ``train``<br>``test``| 3,452 | 2.68M | 1.2 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/nq.zip) | ``d4d3d2e48787a744b6f6e691ff534307`` |
| HotpotQA | [Homepage](https://hotpotqa.github.io) | ``hotpotqa``| ``train``<br>``dev``<br>``test``| 7,405 | 5.23M | 2.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/hotpotqa.zip) | ``f412724f78b0d91183a0e86805e16114`` |
| FiQA-2018 | [Homepage](https://sites.google.com/view/fiqa/) | ``fiqa`` | ``train``<br>``dev``<br>``test``| 648 | 57K | 2.6 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/fiqa.zip) | ``17918ed23cd04fb15047f73e6c3bd9d9`` |
| Signal-1M(RT) | [Homepage](https://research.signal-ai.com/datasets/signal1m-tweetir.html)| ``signal1m`` | ``test``| 97 | 2.86M | 19.6 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#4-signal-1m) |
| TREC-NEWS | [Homepage](https://trec.nist.gov/data/news2019.html) | ``trec-news`` | ``test``| 57 | 595K | 19.6 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#1-trec-news) |
| ArguAna | [Homepage](http://argumentation.bplaced.net/arguana/data) | ``arguana``| ``test`` | 1,406 | 8.67K | 1.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/arguana.zip) | ``8ad3e3c2a5867cdced806d6503f29b99`` |
| Touche-2020| [Homepage](https://webis.de/events/touche-20/shared-task-1.html) | ``webis-touche2020``| ``test``| 49 | 382K | 19.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/webis-touche2020.zip) | ``46f650ba5a527fc69e0a6521c5a23563`` |
| CQADupstack| [Homepage](http://nlp.cis.unimelb.edu.au/resources/cqadupstack/) | ``cqadupstack``| ``test``| 13,145 | 457K | 1.4 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/cqadupstack.zip) | ``4e41456d7df8ee7760a7f866133bda78`` |
| Quora| [Homepage](https://www.quora.com/q/quoradata/First-Quora-Dataset-Release-Question-Pairs) | ``quora``| ``dev``<br>``test``| 10,000 | 523K | 1.6 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/quora.zip) | ``18fb154900ba42a600f84b839c173167`` |
| DBPedia | [Homepage](https://github.com/iai-group/DBpedia-Entity/) | ``dbpedia-entity``| ``dev``<br>``test``| 400 | 4.63M | 38.2 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/dbpedia-entity.zip) | ``c2a39eb420a3164af735795df012ac2c`` |
| SCIDOCS| [Homepage](https://allenai.org/data/scidocs) | ``scidocs``| ``test``| 1,000 | 25K | 4.9 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/scidocs.zip) | ``38121350fc3a4d2f48850f6aff52e4a9`` |
| FEVER | [Homepage](http://fever.ai) | ``fever``| ``train``<br>``dev``<br>``test``| 6,666 | 5.42M | 1.2| [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/fever.zip) | ``5a818580227bfb4b35bb6fa46d9b6c03`` |
| Climate-FEVER| [Homepage](http://climatefever.ai) | ``climate-fever``|``test``| 1,535 | 5.42M | 3.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/climate-fever.zip) | ``8b66f0a9126c521bae2bde127b4dc99d`` |
| SciFact| [Homepage](https://github.com/allenai/scifact) | ``scifact``| ``train``<br>``test``| 300 | 5K | 1.1 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/scifact.zip) | ``5f7d1de60b170fc8027bb7898e2efca1`` |
| Robust04 | [Homepage](https://trec.nist.gov/data/robust/04.guidelines.html) | ``robust04``| ``test``| 249 | 528K | 69.9 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#3-robust04) |
## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
[Needs More Information]
## Considerations for Using the Data
### Social Impact of Dataset
[Needs More Information]
### Discussion of Biases
[Needs More Information]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
[Needs More Information]
### Licensing Information
[Needs More Information]
### Citation Information
Cite as:
```
@inproceedings{
thakur2021beir,
title={{BEIR}: A Heterogeneous Benchmark for Zero-shot Evaluation of Information Retrieval Models},
author={Nandan Thakur and Nils Reimers and Andreas R{\"u}ckl{\'e} and Abhishek Srivastava and Iryna Gurevych},
booktitle={Thirty-fifth Conference on Neural Information Processing Systems Datasets and Benchmarks Track (Round 2)},
year={2021},
url={https://openreview.net/forum?id=wCu6T5xFjeJ}
}
```
### Contributions
Thanks to [@Nthakur20](https://github.com/Nthakur20) for adding this dataset. | [
-0.5227212905883789,
-0.5249219536781311,
0.14435674250125885,
0.04820423573255539,
0.055916160345077515,
0.0011022627586498857,
-0.1081070527434349,
-0.24874727427959442,
0.28598034381866455,
0.07840226590633392,
-0.45233607292175293,
-0.7186435461044312,
-0.347678542137146,
0.20300328731... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
jakartaresearch/indonews | jakartaresearch | 2022-08-07T04:27:54Z | 41 | 2 | null | [
"task_categories:text-classification",
"task_ids:multi-class-classification",
"annotations_creators:found",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:id",
"license:cc-by-4.0",
"news",
"news-classifcation",
"indon... | 2022-08-07T04:27:54Z | 2022-08-07T04:03:02.000Z | 2022-08-07T04:03:02 | ---
annotations_creators:
- found
language:
- id
language_creators:
- found
license:
- cc-by-4.0
multilinguality:
- monolingual
pretty_name: Indonews
size_categories:
- 1K<n<10K
source_datasets:
- original
tags:
- news
- news-classifcation
- indonesia
task_categories:
- text-classification
task_ids:
- multi-class-classification
---
# Indonesian News Categorization
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
Indonews: Multiclass News Categorization, scraped from popular news portals in Indonesia.
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@andreaschandra](https://github.com/andreaschandra) for adding this dataset. | [
-0.5002199411392212,
-0.6640398502349854,
-0.06621712446212769,
0.6014594435691833,
-0.5171281695365906,
-0.0065231104381382465,
-0.4123924672603607,
-0.31859663128852844,
0.6532034277915955,
0.9153424501419067,
-0.6304149627685547,
-0.8700197339057922,
-0.8426398038864136,
0.5489035844802... | null | null | null | null | null | null | null | null | null | null | null | null | null |