id stringlengths 2 115 | author stringlengths 2 42 ⌀ | last_modified timestamp[us, tz=UTC] | downloads int64 0 8.87M | likes int64 0 3.84k | paperswithcode_id stringlengths 2 45 ⌀ | tags list | lastModified timestamp[us, tz=UTC] | createdAt stringlengths 24 24 | key stringclasses 1 value | created timestamp[us] | card stringlengths 1 1.01M | embedding list | library_name stringclasses 21 values | pipeline_tag stringclasses 27 values | mask_token null | card_data null | widget_data null | model_index null | config null | transformers_info null | spaces null | safetensors null | transformersInfo null | modelId stringlengths 5 111 ⌀ | embeddings list |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
vietgpt/orca_en | vietgpt | 2023-07-04T06:35:28Z | 17 | 1 | null | [
"region:us"
] | 2023-07-04T06:35:28Z | 2023-07-03T09:05:36.000Z | 2023-07-03T09:05:36 | ---
dataset_info:
features:
- name: id
dtype: string
- name: system_prompt
dtype: string
- name: question
dtype: string
- name: response
dtype: string
- name: meta
struct:
- name: subset
dtype: string
splits:
- name: train
num_bytes: 6194081932
num_examples: 3601717
- name: test
num_bytes: 1093059093
num_examples: 635599
download_size: 3534002711
dataset_size: 7287141025
---
# Dataset Card for "orca_en"
```python
def preprocess(
    sample,
    instruction_key="### Instruction:",
    response_key="<|endofprompt|>",
    end_key="<|endoftext|>"
):
    """Format one Orca sample into a single training-text string."""
    system_prompt = sample['system_prompt']
    instruction = sample['question']
    response = sample['response']
    # Prepend the system prompt as a preamble only when the sample has one.
    if system_prompt:
return {'text': """{system_prompt}
{instruction_key}
{instruction}
{response_key}
{response}
{end_key}""".format(
system_prompt=system_prompt,
instruction_key=instruction_key,
instruction=instruction,
response_key=response_key,
response=response,
end_key=end_key,
)}
else:
return {'text': """{instruction_key}
{instruction}
{response_key}
{response}
{end_key}""".format(
instruction_key=instruction_key,
instruction=instruction,
response_key=response_key,
response=response,
end_key=end_key,
)}
"""
You are an AI assistant. Provide a detailed answer so user don’t need to search outside to understand the answer.
### Instruction:
Q: Answer the following question given this paragraph: The kidneys also secrete hormones that help maintain homeostasis. For example, they produce a hormone that stimulates bone marrow to produce red blood cells when more are needed. They also secrete a hormone that regulates blood pressure and keeps it in a normal range. Q: What organs secrete hormones that help maintain homeostasis? A:
The answer is:
<|endofprompt|>
The kidneys are the organs that secrete hormones to help maintain homeostasis. They produce a hormone that stimulates bone marrow to produce red blood cells when needed, and they also secrete a hormone that regulates blood pressure, keeping it within a normal range.
<|endoftext|>
"""
``` | [
-0.03483344241976738,
-0.6312434673309326,
0.33963170647621155,
0.04426034539937973,
-0.2564240097999573,
-0.17945052683353424,
-0.030318353325128555,
0.07009073346853256,
-0.03544769808650017,
0.4456827640533447,
-0.7008131742477417,
-0.6522802710533142,
-0.45836398005485535,
0.6174454092... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
EleutherAI/unscramble | EleutherAI | 2023-11-02T14:54:07Z | 17 | 1 | null | [
"license:other",
"region:us"
] | 2023-11-02T14:54:07Z | 2023-07-05T19:13:42.000Z | 2023-07-05T19:13:42 | ---
license: other
--- | [
-0.1285335123538971,
-0.1861683875322342,
0.6529128551483154,
0.49436232447624207,
-0.19319400191307068,
0.23607441782951355,
0.36072009801864624,
0.05056373029947281,
0.5793656706809998,
0.7400146722793579,
-0.650810182094574,
-0.23784008622169495,
-0.7102247476577759,
-0.0478255338966846... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
SiberiaSoft/SiberianDatasetXL | SiberiaSoft | 2023-07-24T00:28:56Z | 17 | 2 | null | [
"task_categories:text-generation",
"task_categories:text2text-generation",
"task_categories:conversational",
"size_categories:100K<n<1M",
"language:ru",
"license:mit",
"region:us"
] | 2023-07-24T00:28:56Z | 2023-07-07T16:44:34.000Z | 2023-07-07T16:44:34 | ---
license: mit
task_categories:
- text-generation
- text2text-generation
- conversational
language:
- ru
size_categories:
- 100K<n<1M
---
### SiberiaSoft/SiberianDatasetXL
A dataset of instructions, dialogues, and QA.
## Task breakdown (percentages):
| Task | Percentage |
|:-----------------------------------------------------------------------------:|:---------------------:|
| Live dialogues with context | 38.746% |
| QA with long answers | 11.907% |
| russian_instructions_2 Den4ikAI/russian_instructions_2 (cleaned) | 9.65% |
| Text-based QA from Den4ikAI/ru_sberquad_long_answers | 9.203% |
| QA with short answers | 8.57% |
| Instructions from IlyaGusev/ru_turbo_alpaca_evol_instruct (very aggressively cleaned) | 6.087% |
| Personalized dialogues with context | 5.795% |
| Instructions from its5Q/yandex-q | 4.373% |
| QA using Wikipedia | 2.822% |
| Instructions from lksy/ru_instruct_gpt4 (aggressively cleaned) | 2.741% |
| Problem solving | 0.085% |
| Explain-to-a-child QA | 0.02% |
### Citation
```
@MISC{SiberianDatasetXL,
author = {Denis Petrov and Ivan Ramovich},
title = {Russian dataset for Instruct/Chat models},
url = {https://huggingface.co/datasets/SiberiaSoft/SiberianDatasetXL},
year = 2023
}
``` | [
-0.3033314347267151,
-0.5091400742530823,
0.22022847831249237,
0.46273908019065857,
-0.6553646922111511,
0.06284373253583908,
0.2700594365596771,
-0.24788783490657806,
0.5055076479911804,
-0.011165331117808819,
-0.9823326468467712,
-0.7418814897537231,
-0.30092111229896545,
-0.268828511238... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
bigcode/commits_ft | bigcode | 2023-07-11T04:31:12Z | 17 | 0 | null | [
"region:us"
] | 2023-07-11T04:31:12Z | 2023-07-11T04:00:41.000Z | 2023-07-11T04:00:41 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622264862060547,
0.43461528420448303,
-0.52829909324646,
0.7012971639633179,
0.7915720343589783,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104477167129517,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
diffusers-parti-prompts/kandinsky-2-2 | diffusers-parti-prompts | 2023-07-18T05:32:32Z | 17 | 0 | null | [
"region:us"
] | 2023-07-18T05:32:32Z | 2023-07-17T06:40:58.000Z | 2023-07-17T06:40:58 | ---
dataset_info:
features:
- name: Prompt
dtype: string
- name: Category
dtype: string
- name: Challenge
dtype: string
- name: Note
dtype: string
- name: images
dtype: image
- name: model_name
dtype: string
- name: seed
dtype: int64
splits:
- name: train
num_bytes: 163668480.032
num_examples: 1632
download_size: 163766653
dataset_size: 163668480.032
---
# Dataset Card for "kandinsky-2-2"
The dataset was generated using the code below:
```python
import PIL
import torch
from datasets import Dataset, Features
from datasets import Image as ImageFeature
from datasets import Value, load_dataset
from diffusers import DiffusionPipeline
def main():
print("Loading dataset...")
parti_prompts = load_dataset("nateraw/parti-prompts", split="train")
print("Loading pipeline...")
pipe_prior = DiffusionPipeline.from_pretrained(
"kandinsky-community/kandinsky-2-2-prior", torch_dtype=torch.float16
)
pipe_prior.to("cuda")
pipe_prior.set_progress_bar_config(disable=True)
t2i_pipe = DiffusionPipeline.from_pretrained(
"kandinsky-community/kandinsky-2-2-decoder", torch_dtype=torch.float16
)
t2i_pipe.to("cuda")
t2i_pipe.set_progress_bar_config(disable=True)
seed = 0
generator = torch.Generator("cuda").manual_seed(seed)
ckpt_id = (
"kandinsky-community/" + "kandinsky-2-2-prior" + "_" + "kandinsky-2-2-decoder"
)
print("Running inference...")
main_dict = {}
for i in range(len(parti_prompts)):
sample = parti_prompts[i]
prompt = sample["Prompt"]
image_embeds, negative_image_embeds = pipe_prior(
prompt,
generator=generator,
num_inference_steps=100,
guidance_scale=7.5,
).to_tuple()
image = t2i_pipe(
image_embeds=image_embeds,
negative_image_embeds=negative_image_embeds,
generator=generator,
num_inference_steps=100,
guidance_scale=7.5,
).images[0]
image = image.resize((256, 256), resample=PIL.Image.Resampling.LANCZOS)
img_path = f"kandinsky_22_{i}.png"
image.save(img_path)
main_dict.update(
{
prompt: {
"img_path": img_path,
"Category": sample["Category"],
"Challenge": sample["Challenge"],
"Note": sample["Note"],
"model_name": ckpt_id,
"seed": seed,
}
}
)
def generation_fn():
for prompt in main_dict:
prompt_entry = main_dict[prompt]
yield {
"Prompt": prompt,
"Category": prompt_entry["Category"],
"Challenge": prompt_entry["Challenge"],
"Note": prompt_entry["Note"],
"images": {"path": prompt_entry["img_path"]},
"model_name": prompt_entry["model_name"],
"seed": prompt_entry["seed"],
}
print("Preparing HF dataset...")
ds = Dataset.from_generator(
generation_fn,
features=Features(
Prompt=Value("string"),
Category=Value("string"),
Challenge=Value("string"),
Note=Value("string"),
images=ImageFeature(),
model_name=Value("string"),
seed=Value("int64"),
),
)
ds_id = "diffusers-parti-prompts/kandinsky-2-2"
ds.push_to_hub(ds_id)
if __name__ == "__main__":
main()
``` | [
-0.359020471572876,
-0.4553609788417816,
0.37795498967170715,
0.08351559191942215,
-0.3608100414276123,
-0.136566162109375,
-0.013107791543006897,
0.07563570141792297,
-0.18401849269866943,
0.3452621400356293,
-0.7435709238052368,
-0.7046366333961487,
-0.5817816853523254,
-0.03686982393264... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
InferencePrince555/Resume-Dataset | InferencePrince555 | 2023-07-23T10:37:41Z | 17 | 3 | null | [
"license:apache-2.0",
"region:us"
] | 2023-07-23T10:37:41Z | 2023-07-20T09:22:47.000Z | 2023-07-20T09:22:47 | ---
license: apache-2.0
---
| [
-0.1285335123538971,
-0.1861683875322342,
0.6529128551483154,
0.49436232447624207,
-0.19319400191307068,
0.23607441782951355,
0.36072009801864624,
0.05056373029947281,
0.5793656706809998,
0.7400146722793579,
-0.650810182094574,
-0.23784008622169495,
-0.7102247476577759,
-0.0478255338966846... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Guilherme34/Cabrita-lora-ptbr | Guilherme34 | 2023-07-20T12:44:35Z | 17 | 3 | null | [
"region:us"
] | 2023-07-20T12:44:35Z | 2023-07-20T12:43:29.000Z | 2023-07-20T12:43:29 | It's not my dataset; I'm just posting it here. | [
-0.4851800501346588,
-0.5626221299171448,
0.029086552560329437,
0.5953306555747986,
-0.1853773295879364,
-0.07133577018976212,
0.2578863203525543,
0.18125686049461365,
0.9958900809288025,
0.4311060905456543,
-0.8313500881195068,
-0.6017649173736572,
-0.6651303172111511,
0.2805974781513214,... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
NebulaByte/E-Commerce_FAQs | NebulaByte | 2023-07-22T10:05:51Z | 17 | 3 | null | [
"license:apache-2.0",
"region:us"
] | 2023-07-22T10:05:51Z | 2023-07-22T10:01:25.000Z | 2023-07-22T10:01:25 | ---
license: apache-2.0
---
| [
-0.1285335123538971,
-0.1861683875322342,
0.6529128551483154,
0.49436232447624207,
-0.19319400191307068,
0.23607441782951355,
0.36072009801864624,
0.05056373029947281,
0.5793656706809998,
0.7400146722793579,
-0.650810182094574,
-0.23784008622169495,
-0.7102247476577759,
-0.0478255338966846... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
gwlms/germeval2018 | gwlms | 2023-07-26T11:05:10Z | 17 | 0 | null | [
"task_categories:text-classification",
"language:de",
"license:cc-by-4.0",
"region:us"
] | 2023-07-26T11:05:10Z | 2023-07-26T08:59:30.000Z | 2023-07-26T08:59:30 | ---
license: cc-by-4.0
dataset_info:
features:
- name: text
dtype: string
- name: coarse-grained
dtype: string
- name: fine-grained
dtype: string
config_name: germeval2018
splits:
- name: train
num_bytes: 840593
num_examples: 5009
- name: test
num_bytes: 519146
num_examples: 3532
download_size: 1282870
dataset_size: 1359739
task_categories:
- text-classification
language:
- de
--- | [
-0.12853369116783142,
-0.18616779148578644,
0.6529126167297363,
0.49436280131340027,
-0.193193256855011,
0.2360745668411255,
0.36071979999542236,
0.05056314915418625,
0.5793651342391968,
0.740013837814331,
-0.6508103013038635,
-0.23783960938453674,
-0.7102248668670654,
-0.04782580211758613... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
BramDelisse/ETHICS_llama-chat | BramDelisse | 2023-11-03T10:36:16Z | 17 | 0 | null | [
"license:mit",
"region:us"
] | 2023-11-03T10:36:16Z | 2023-08-11T13:00:20.000Z | 2023-08-11T13:00:20 | ---
license: mit
---
| [
-0.12853369116783142,
-0.18616779148578644,
0.6529126167297363,
0.49436280131340027,
-0.193193256855011,
0.2360745668411255,
0.36071979999542236,
0.05056314915418625,
0.5793651342391968,
0.740013837814331,
-0.6508103013038635,
-0.23783960938453674,
-0.7102248668670654,
-0.04782580211758613... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
BuroIdentidadDigital/recibos_izzi | BuroIdentidadDigital | 2023-11-28T05:45:04Z | 17 | 1 | null | [
"license:c-uda",
"region:us"
] | 2023-11-28T05:45:04Z | 2023-08-11T20:55:16.000Z | 2023-08-11T20:55:16 | ---
license: c-uda
---
| [
-0.12853369116783142,
-0.18616779148578644,
0.6529126167297363,
0.49436280131340027,
-0.193193256855011,
0.2360745668411255,
0.36071979999542236,
0.05056314915418625,
0.5793651342391968,
0.740013837814331,
-0.6508103013038635,
-0.23783960938453674,
-0.7102248668670654,
-0.04782580211758613... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
dim/oasst_en | dim | 2023-08-13T14:36:10Z | 17 | 0 | null | [
"license:mit",
"region:us"
] | 2023-08-13T14:36:10Z | 2023-08-13T14:33:21.000Z | 2023-08-13T14:33:21 | ---
license: mit
dataset_info:
features:
- name: conversation_ids
sequence: string
- name: conversation_text
sequence: string
- name: status
dtype: string
splits:
- name: train
num_bytes: 5716785
num_examples: 3141
download_size: 2174320
dataset_size: 5716785
---
| [
-0.12853369116783142,
-0.18616779148578644,
0.6529126167297363,
0.49436280131340027,
-0.193193256855011,
0.2360745668411255,
0.36071979999542236,
0.05056314915418625,
0.5793651342391968,
0.740013837814331,
-0.6508103013038635,
-0.23783960938453674,
-0.7102248668670654,
-0.04782580211758613... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
silk-road/Chat_Suzumiya_Fusion | silk-road | 2023-08-14T11:10:45Z | 17 | 4 | null | [
"region:us"
] | 2023-08-14T11:10:45Z | 2023-08-14T11:10:32.000Z | 2023-08-14T11:10:32 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: context
dtype: string
- name: target
dtype: string
splits:
- name: train
num_bytes: 111274991
num_examples: 28612
download_size: 39798958
dataset_size: 111274991
---
# Dataset Card for "Chat_Suzumiya_Fusion"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.4435960650444031,
-0.3386753797531128,
0.052268028259277344,
0.18201836943626404,
-0.30254650115966797,
0.09061092883348465,
0.24802044034004211,
-0.15315239131450653,
1.0334393978118896,
0.5494493246078491,
-1.0219165086746216,
-0.7298096418380737,
-0.5087319016456604,
-0.3536574840545... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
dim/essayforum_writing_prompts_6k | dim | 2023-08-16T20:37:43Z | 17 | 1 | null | [
"region:us"
] | 2023-08-16T20:37:43Z | 2023-08-16T01:03:40.000Z | 2023-08-16T01:03:40 | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: answer
dtype: string
splits:
- name: train
num_bytes: 21696702
num_examples: 6361
download_size: 11796178
dataset_size: 21696702
---
# Dataset Card for "essayforum_writing_prompts_6k"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.5452675819396973,
-0.18084731698036194,
0.5046871304512024,
0.2861672639846802,
-0.07504377514123917,
-0.16995394229888916,
0.1272132843732834,
0.04066242277622223,
0.5630541443824768,
0.6028813719749451,
-0.9685177803039551,
-0.7415096163749695,
-0.3959144055843353,
0.15098929405212402... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
kelSidenna/softwareReq-data | kelSidenna | 2023-08-18T04:06:40Z | 17 | 2 | null | [
"region:us"
] | 2023-08-18T04:06:40Z | 2023-08-18T04:00:39.000Z | 2023-08-18T04:00:39 | Entry not found | [
-0.32276472449302673,
-0.22568407654762268,
0.8622258901596069,
0.4346148371696472,
-0.5282984972000122,
0.7012965679168701,
0.7915717363357544,
0.07618629932403564,
0.7746022939682007,
0.2563222646713257,
-0.785281777381897,
-0.22573848068714142,
-0.9104482531547546,
0.5715669393539429,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
dim/bugurt_thread_prompts | dim | 2023-09-01T23:13:38Z | 17 | 0 | null | [
"license:mit",
"region:us"
] | 2023-09-01T23:13:38Z | 2023-08-21T01:16:54.000Z | 2023-08-21T01:16:54 | ---
license: mit
dataset_info:
features:
- name: bugurt
dtype: string
- name: prompt
dtype: string
splits:
- name: train
num_bytes: 301299
num_examples: 223
download_size: 159463
dataset_size: 301299
---
| [
-0.12853392958641052,
-0.18616779148578644,
0.6529127955436707,
0.49436280131340027,
-0.19319361448287964,
0.23607419431209564,
0.36072003841400146,
0.050563063472509384,
0.579365611076355,
0.7400140762329102,
-0.6508104205131531,
-0.23783954977989197,
-0.7102249264717102,
-0.0478260256350... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
dim/russian_lyrics_prompts | dim | 2023-08-21T01:23:59Z | 17 | 0 | null | [
"region:us"
] | 2023-08-21T01:23:59Z | 2023-08-21T01:23:41.000Z | 2023-08-21T01:23:41 | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: solution
dtype: string
splits:
- name: train
num_bytes: 18504
num_examples: 43
download_size: 14764
dataset_size: 18504
---
# Dataset Card for "russian_lyrics_prompts"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.44301339983940125,
-0.2011946737766266,
0.3048294484615326,
0.5016696453094482,
-0.1863536238670349,
0.09635946154594421,
-0.07568414509296417,
0.10248517245054245,
0.7062275409698486,
0.4278663992881775,
-1.1490672826766968,
-0.9358817934989929,
-0.47334712743759155,
-0.068414956331253... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
mvansegb123/commonsense-dialogues | mvansegb123 | 2023-08-30T23:10:24Z | 17 | 1 | null | [
"task_categories:text-classification",
"task_categories:table-question-answering",
"task_categories:text-generation",
"size_categories:1K<n<10K",
"language:en",
"license:cc",
"arxiv:2109.06427",
"region:us"
] | 2023-08-30T23:10:24Z | 2023-08-30T22:42:25.000Z | 2023-08-30T22:42:25 | ---
license: cc
task_categories:
- text-classification
- table-question-answering
- text-generation
language:
- en
size_categories:
- 1K<n<10K
---
## Commonsense-Dialogues Dataset
We present Commonsense-Dialogues, a crowdsourced dataset of ~11K dialogues grounded in social contexts that call for commonsense reasoning. The social contexts used were sourced from the **train** split of the [SocialIQA](https://leaderboard.allenai.org/socialiqa/submissions/get-started) dataset, a multiple-choice question-answering benchmark for social commonsense reasoning.
To collect the Commonsense-Dialogues dataset, each Turker was presented with a social context and asked to write a dialogue of 4-6 turns between two people based on the event(s) described in the context. The Turker was asked to alternate between the roles of an individual referenced in the context and a third-party friend. See the following dialogues as examples:
```
"1": { # dialogue_id
"context": "Sydney met Carson's mother for the first time last week. He liked her.", # multiple individuals in the context: Sydney and Carson
"speaker": "Sydney", # role 1 = Sydney, role 2 = a third-person friend of Sydney
"turns": [
"I met Carson's mother last week for the first time.",
"How was she?",
"She turned out to be really nice. I like her.",
"That's good to hear.",
"It is, especially since Carson and I are getting serious.",
"Well, at least you'll like your in-law if you guys get married."
]
}
"2": {
"context": "Kendall had a party at Jordan's house but was found out to not have asked and just broke in.",
"speaker": "Kendall",
"turns": [
"Did you hear about my party this weekend at Jordan\u2019s house?",
"I heard it was amazing, but that you broke in.",
"That was a misunderstanding, I had permission to be there.",
"Who gave you permission?",
"I talked to Jordan about it months ago before he left town to go to school, but he forgot to tell his roommates about it.",
"Ok cool, I hope everything gets resolved."
]
}
```
The data can be found in the `/data` directory of this repo. `train.json` has ~9K dialogues, `valid.json` and `test.json` have ~1K dialogues each. Since all the contexts were sourced from the **train** split of SocialIQA, it is imperative to note that any form of **multi-task** training and evaluation with Commonsense-Dialogues and SocialIQA must be done with caution to ensure fair and accurate conclusions.
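Given the record layout shown above (a JSON mapping from dialogue id to a dict with `context`, `speaker`, and `turns`), a split file can be loaded and summarized with a short sketch like the following. The inline `split_json` string here is a toy stand-in for the contents of a real file such as `data/train.json`:

```python
import json

# Toy stand-in for a split file (dialogue_id -> {context, speaker, turns}).
split_json = json.dumps({
    "1": {
        "context": "Sydney met Carson's mother for the first time last week.",
        "speaker": "Sydney",
        "turns": [
            "I met Carson's mother last week for the first time.",
            "How was she?",
            "She turned out to be really nice. I like her.",
            "That's good to hear.",
        ],
    },
})

# With a real file, use: split = json.load(open("data/train.json"))
split = json.loads(split_json)

# Per-dialogue turn counts and their average, as reported in the statistics.
turn_counts = [len(d["turns"]) for d in split.values()]
avg_turns = sum(turn_counts) / len(turn_counts)
print(len(split), avg_turns)  # 1 4.0
```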
Some statistics about the data are provided below:
| Stat | Train | Valid | Test |
| ---- | ---- | ---- | ---- |
|# of dialogues | 9058 | 1157 | 1158 |
|average # of turns in a dialogue | 5.72 | 5.72 | 5.71 |
|average # of words in a turn | 12.4 | 12.4 | 12.2 |
|# of distinct SocialIQA contexts used | 3672 | 483 | 473 |
|average # of dialogues for a SocialIQA context| 2.46 | 2.395 | 2.45 |
## Security
See [CONTRIBUTING](CONTRIBUTING.md#security-issue-notifications) for more information.
## License
This repository is licensed under the CC-BY-NC 4.0 License.
## Citation
If you use this dataset, please cite the following paper:
```
@inproceedings{zhou-etal-2021-commonsense,
title = "Commonsense-Focused Dialogues for Response Generation: An Empirical Study",
author = "Zhou, Pei and
Gopalakrishnan, Karthik and
Hedayatnia, Behnam and
Kim, Seokhwan and
Pujara, Jay and
Ren, Xiang and
Liu, Yang and
Hakkani-Tur, Dilek",
booktitle = "Proceedings of the 22nd Annual Meeting of the Special Interest Group on Discourse and Dialogue",
year = "2021",
address = "Singapore and Online",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/2109.06427"
}
```
Note that the paper uses newly collected dialogues as well as those that were filtered from existing datasets. This repo contains our newly collected dialogues alone. | [
-0.35400477051734924,
-0.6362661123275757,
0.4490083158016205,
0.19910025596618652,
-0.28234797716140747,
-0.31706681847572327,
-0.32923421263694763,
-0.01929374411702156,
0.1701025366783142,
0.47977516055107117,
-0.7030788660049438,
-0.5443565845489502,
-0.4449789226055145,
0.218854546546... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Jakir057/bangladeshi_banknotes_70k | Jakir057 | 2023-08-31T01:56:16Z | 17 | 0 | null | [
"region:us"
] | 2023-08-31T01:56:16Z | 2023-08-31T01:50:37.000Z | 2023-08-31T01:50:37 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': '10'
'1': '100'
'2': '1000'
'3': '2'
'4': '20'
'5': '5'
'6': '50'
'7': '500'
splits:
- name: train
num_bytes: 5185934905.075951
num_examples: 59960
- name: test
num_bytes: 921148214.9020498
num_examples: 10582
download_size: 6163263960
dataset_size: 6107083119.978001
---
# Dataset Card for "bangladeshi_banknotes_70k"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.33450847864151,
-0.2437710464000702,
0.03123548999428749,
0.37042635679244995,
-0.706649661064148,
-0.015440888702869415,
0.25321659445762634,
-0.21680304408073425,
0.6915550827980042,
0.6475305557250977,
-0.2788485586643219,
-0.8357112407684326,
-0.6924962997436523,
-0.1627365201711654... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
dim/runne_prompts | dim | 2023-09-02T16:20:49Z | 17 | 0 | null | [
"region:us"
] | 2023-09-02T16:20:49Z | 2023-08-31T21:35:34.000Z | 2023-08-31T21:35:34 | ---
dataset_info:
features:
- name: text
dtype: string
- name: parsed_entities
dtype: string
splits:
- name: train
num_bytes: 2636744
num_examples: 537
download_size: 1142735
dataset_size: 2636744
---
# Dataset Card for "runne_prompts"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.6686438322067261,
-0.2901899814605713,
0.35673636198043823,
0.206139475107193,
-0.07554545998573303,
-0.13362058997154236,
0.14987656474113464,
0.1940060257911682,
0.8626795411109924,
0.6274946331977844,
-1.163504719734192,
-0.6497917175292969,
-0.3995436131954193,
-0.0625653937458992,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
dim/resh_edu_short_prompts | dim | 2023-09-01T22:24:23Z | 17 | 0 | null | [
"region:us"
] | 2023-09-01T22:24:23Z | 2023-09-01T21:36:36.000Z | 2023-09-01T21:36:36 | ---
dataset_info:
features:
- name: solution
dtype: string
- name: prompt
dtype: string
splits:
- name: train
num_bytes: 12371576
num_examples: 2106
download_size: 5361614
dataset_size: 12371576
---
# Dataset Card for "resh_edu_short_prompts"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.5765056610107422,
-0.45921066403388977,
0.5075168609619141,
-0.19915467500686646,
-0.22810137271881104,
0.06715671718120575,
0.007317190524190664,
0.22943641245365143,
1.0107556581497192,
0.29044774174690247,
-1.0434046983718872,
-0.6715715527534485,
-0.24936147034168243,
0.156482070684... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
dim/bugurt_completion_prompts | dim | 2023-09-01T23:28:27Z | 17 | 0 | null | [
"region:us"
] | 2023-09-01T23:28:27Z | 2023-09-01T23:28:23.000Z | 2023-09-01T23:28:23 | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: bugurt
dtype: string
splits:
- name: train
num_bytes: 5451066
num_examples: 5000
download_size: 2806557
dataset_size: 5451066
---
# Dataset Card for "bugurt_completion_prompts"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.7104490399360657,
-0.1443425714969635,
0.4071011245250702,
0.13531121611595154,
-0.14137786626815796,
0.1583685427904129,
0.1402856409549713,
0.060426823794841766,
0.9347193241119385,
0.27409565448760986,
-1.101637601852417,
-0.8327078819274902,
-0.4053317904472351,
0.15740175545215607,... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
chats-bug/agent_action_plan | chats-bug | 2023-09-04T11:22:49Z | 17 | 7 | null | [
"region:us"
] | 2023-09-04T11:22:49Z | 2023-09-04T11:21:45.000Z | 2023-09-04T11:21:45 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: prompt
dtype: string
splits:
- name: train
num_bytes: 2487201.95821727
num_examples: 861
- name: test
num_bytes: 623967.0417827298
num_examples: 216
download_size: 0
dataset_size: 3111169.0
---
# Dataset Card for "agent_action_plan"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.3630063831806183,
-0.19780266284942627,
0.1842329204082489,
0.21405576169490814,
0.059683602303266525,
-0.010946069844067097,
0.493416965007782,
-0.05183035880327225,
1.0496373176574707,
0.744162380695343,
-0.9549373388290405,
-0.6819629073143005,
-0.4681514501571655,
-0.427323609590530... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
vikp/code_instructions_filtered | vikp | 2023-09-04T15:29:06Z | 17 | 0 | null | [
"region:us"
] | 2023-09-04T15:29:06Z | 2023-09-04T15:25:02.000Z | 2023-09-04T15:25:02 | ---
dataset_info:
features:
- name: output
dtype: string
- name: instruction
dtype: string
- name: kind
dtype: string
splits:
- name: train
num_bytes: 250321474.7560524
num_examples: 136147
download_size: 146821284
dataset_size: 250321474.7560524
---
# Dataset Card for "code_instructions_filtered"
This includes data from [xlcost](https://huggingface.co/datasets/vikp/xlcost_filtered_2k), [evol instruct](https://huggingface.co/datasets/vikp/evol_instruct_code_filtered_39k), [code alpaca](https://huggingface.co/datasets/vikp/evol_codealpaca_filtered_87k), and [code instructions](https://huggingface.co/datasets/vikp/code_instructions_filtered_7k). Data is filtered based on quality and learning value.
When used to fine-tune Code Llama 7B, this data achieves a `0.62` HumanEval score.
-0.42953455448150635,
-0.4459383487701416,
0.16110186278820038,
0.24919866025447845,
-0.7251303791999817,
0.16164754331111908,
0.2230936884880066,
-0.47370830178260803,
0.5273258090019226,
0.7905407547950745,
-0.562213122844696,
-0.950714647769928,
-0.4442562758922577,
0.053143035620450974... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
AlignmentLab-AI/agentcode | AlignmentLab-AI | 2023-10-10T11:53:55Z | 17 | 2 | null | [
"region:us"
] | 2023-10-10T11:53:55Z | 2023-09-08T09:28:44.000Z | 2023-09-08T09:28:44 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622264862060547,
0.43461528420448303,
-0.52829909324646,
0.7012971639633179,
0.7915720343589783,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104477167129517,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
diffusers-parti-prompts/wuerstchen | diffusers-parti-prompts | 2023-09-13T17:08:21Z | 17 | 0 | null | [
"region:us"
] | 2023-09-13T17:08:21Z | 2023-09-11T17:12:20.000Z | 2023-09-11T17:12:20 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: Prompt
dtype: string
- name: Category
dtype: string
- name: Challenge
dtype: string
- name: Note
dtype: string
- name: images
dtype: image
- name: model_name
dtype: string
- name: seed
dtype: int64
splits:
- name: train
num_bytes: 149898953.312
num_examples: 1632
download_size: 150261013
dataset_size: 149898953.312
---
# Dataset Card for "wuerstchen"
The dataset was generated using the code below:
```py
import torch
from datasets import Dataset, Features
from datasets import Image as ImageFeature
from datasets import Value, load_dataset
from diffusers import AutoPipelineForText2Image
import PIL
def main():
print("Loading dataset...")
parti_prompts = load_dataset("nateraw/parti-prompts", split="train")
print("Loading pipeline...")
seed = 0
device = "cuda"
generator = torch.Generator(device).manual_seed(seed)
dtype = torch.float16
ckpt_id = "warp-diffusion/wuerstchen"
pipeline = AutoPipelineForText2Image.from_pretrained(
ckpt_id, torch_dtype=dtype
).to(device)
pipeline.prior_prior = torch.compile(pipeline.prior_prior, mode="reduce-overhead", fullgraph=True)
pipeline.decoder = torch.compile(pipeline.decoder, mode="reduce-overhead", fullgraph=True)
print("Running inference...")
main_dict = {}
for i in range(len(parti_prompts)):
sample = parti_prompts[i]
prompt = sample["Prompt"]
image = pipeline(
prompt=prompt,
height=1024,
width=1024,
prior_guidance_scale=4.0,
decoder_guidance_scale=0.0,
generator=generator,
).images[0]
image = image.resize((256, 256), resample=PIL.Image.Resampling.LANCZOS)
img_path = f"wuerstchen_{i}.png"
image.save(img_path)
main_dict.update(
{
prompt: {
"img_path": img_path,
"Category": sample["Category"],
"Challenge": sample["Challenge"],
"Note": sample["Note"],
"model_name": ckpt_id,
"seed": seed,
}
}
)
def generation_fn():
for prompt in main_dict:
prompt_entry = main_dict[prompt]
yield {
"Prompt": prompt,
"Category": prompt_entry["Category"],
"Challenge": prompt_entry["Challenge"],
"Note": prompt_entry["Note"],
"images": {"path": prompt_entry["img_path"]},
"model_name": prompt_entry["model_name"],
"seed": prompt_entry["seed"],
}
print("Preparing HF dataset...")
ds = Dataset.from_generator(
generation_fn,
features=Features(
Prompt=Value("string"),
Category=Value("string"),
Challenge=Value("string"),
Note=Value("string"),
images=ImageFeature(),
model_name=Value("string"),
seed=Value("int64"),
),
)
ds_id = "diffusers-parti-prompts/wuerstchen"
ds.push_to_hub(ds_id)
if __name__ == "__main__":
main()
``` | [
-0.4604480564594269,
-0.3350418210029602,
0.2248377948999405,
0.09312602132558823,
-0.32985740900039673,
-0.3017875552177429,
-0.01488895807415247,
-0.027936706319451332,
-0.1843017339706421,
0.3811436593532562,
-0.760863184928894,
-0.5789149403572083,
-0.5690860152244568,
-0.0073853489011... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
minh21/COVID-QA-unique-context-test-10-percent-validation-10-percent | minh21 | 2023-09-17T18:29:42Z | 17 | 0 | null | [
"region:us"
] | 2023-09-17T18:29:42Z | 2023-09-17T11:11:59.000Z | 2023-09-17T11:11:59 | ---
dataset_info:
features:
- name: question
dtype: string
- name: answer_text
dtype: string
- name: answer_start
dtype: int64
- name: is_impossible
dtype: bool
- name: document_id
dtype: int64
- name: id
dtype: int64
- name: context
dtype: string
splits:
- name: train
num_bytes: 2050073
num_examples: 1615
- name: test
num_bytes: 260386
num_examples: 202
- name: validation
num_bytes: 261992
num_examples: 202
download_size: 0
dataset_size: 2572451
---
# Dataset Card for "COVID-QA-unique-context-test-10-percent-validation-10-percent"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.4749789237976074,
-0.4272553026676178,
0.02612168714404106,
0.5223057866096497,
-0.18155698478221893,
-0.10121192783117294,
0.35459253191947937,
0.13270822167396545,
0.6305025815963745,
0.06431246548891068,
-0.8389326930046082,
-0.8562013506889343,
-0.22907036542892456,
-0.0557466149330... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
DopeorNope/2000sample_COT | DopeorNope | 2023-10-19T15:37:10Z | 17 | 0 | null | [
"license:cc-by-nc-sa-4.0",
"region:us"
] | 2023-10-19T15:37:10Z | 2023-09-21T12:01:52.000Z | 2023-09-21T12:01:52 | ---
dataset_info:
features:
- name: source
dtype: string
- name: target
dtype: string
- name: rationale
dtype: string
- name: task
dtype: string
- name: type
dtype: string
splits:
- name: train
num_bytes: 2298020
num_examples: 2159
download_size: 1099835
dataset_size: 2298020
license: cc-by-nc-sa-4.0
---
# Dataset Card for "2000sample_COT"
# DopeorNope/Eng_Kor_COT_combined
- KOpen-platypus + DopeorNope/2000sample_COT
- If you use this dataset to build a model or another dataset, a brief attribution would be a great help to our research 😭😭
- A high-quality Korean dataset: an English + Korean dataset constructed in the COT (chain-of-thought) style
---
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.6069234609603882,
-0.39718249440193176,
0.18741030991077423,
0.6203832626342773,
-0.6357113122940063,
0.09548640251159668,
-0.023699641227722168,
-0.21930621564388275,
0.7540397047996521,
0.4467546343803406,
-0.46698877215385437,
-0.7352580428123474,
-0.5820758938789368,
0.2054456472396... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
dim/grammarly_coedit | dim | 2023-09-21T16:25:22Z | 17 | 1 | null | [
"region:us"
] | 2023-09-21T16:25:22Z | 2023-09-21T16:25:13.000Z | 2023-09-21T16:25:13 | ---
dataset_info:
features:
- name: _id
dtype: string
- name: task
dtype: string
- name: src
dtype: string
- name: tgt
dtype: string
splits:
- name: train
num_bytes: 19943349
num_examples: 82466
download_size: 11658767
dataset_size: 19943349
---
# Dataset Card for "grammarly_coedit"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.32168272137641907,
-0.35459569096565247,
0.22597277164459229,
0.4005892276763916,
0.14296647906303406,
-0.05765955522656441,
-0.08951970189809799,
-0.1454678475856781,
0.6259833574295044,
0.1778203845024109,
-0.9460787177085876,
-0.8677042126655579,
-0.7325401902198792,
-0.0982432067394... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
dim/kinopoisk_prompts | dim | 2023-09-21T18:01:28Z | 17 | 0 | null | [
"region:us"
] | 2023-09-21T18:01:28Z | 2023-09-21T17:27:26.000Z | 2023-09-21T17:27:26 | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: content
dtype: string
splits:
- name: train
num_bytes: 136177618
num_examples: 36591
download_size: 68332043
dataset_size: 136177618
---
# Dataset Card for "kinopoisk_prompts"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.5564578771591187,
-0.22565564513206482,
0.33343571424484253,
0.31975024938583374,
-0.2844725251197815,
0.1349494755268097,
0.3555554449558258,
0.09021367877721786,
0.9731693863868713,
0.4036713242530823,
-1.2307873964309692,
-0.9152904748916626,
-0.5091246366500854,
-0.03160615637898445... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
JB/mimic-cxr-rrg | JB | 2023-09-23T06:22:52Z | 17 | 0 | null | [
"region:us"
] | 2023-09-23T06:22:52Z | 2023-09-23T06:22:47.000Z | 2023-09-23T06:22:47 | ---
configs:
- config_name: default
data_files:
- split: test
path: data/test-*
dataset_info:
features:
- name: id
dtype: int64
- name: image
dtype: image
- name: impression
dtype: string
splits:
- name: test
num_bytes: 14124813.0
num_examples: 100
download_size: 14118845
dataset_size: 14124813.0
---
# Dataset Card for "mimic-cxr-rrg"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.6119637489318848,
-0.29279980063438416,
0.026054363697767258,
0.2219732254743576,
-0.17638066411018372,
0.18562467396259308,
0.2611905336380005,
-0.342506468296051,
0.8507453799247742,
0.37181419134140015,
-1.0408536195755005,
-0.6685024499893188,
-0.6583491563796997,
-0.051564514636993... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
dim/dialogsum_ru | dim | 2023-09-25T13:59:33Z | 17 | 1 | null | [
"region:us"
] | 2023-09-25T13:59:33Z | 2023-09-25T13:59:29.000Z | 2023-09-25T13:59:29 | ---
dataset_info:
features:
- name: id
dtype: string
- name: dialogue
dtype: string
- name: summary
dtype: string
- name: topic
dtype: string
splits:
- name: train
num_bytes: 19115158
num_examples: 12460
download_size: 9286024
dataset_size: 19115158
---
# Dataset Card for "dialogsum_ru"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.46517378091812134,
-0.4184243679046631,
0.11307131499052048,
0.12098433077335358,
-0.16296972334384918,
-0.14091362059116364,
0.3450268507003784,
-0.13520127534866333,
0.725662112236023,
0.5814766883850098,
-0.9494434595108032,
-0.7223820686340332,
-0.4163724482059479,
-0.17496927082538... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
dim/dialogsum | dim | 2023-09-25T14:43:54Z | 17 | 0 | null | [
"region:us"
] | 2023-09-25T14:43:54Z | 2023-09-25T14:43:50.000Z | 2023-09-25T14:43:50 | ---
dataset_info:
features:
- name: id
dtype: string
- name: dialogue
dtype: string
- name: summary
dtype: string
- name: topic
dtype: string
splits:
- name: train
num_bytes: 11439628
num_examples: 12460
download_size: 6516766
dataset_size: 11439628
---
# Dataset Card for "dialogsum"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.5527430176734924,
-0.37822526693344116,
0.17353220283985138,
0.053433630615472794,
-0.12864062190055847,
-0.11598687618970871,
0.29635855555534363,
-0.15362198650836945,
0.7904512882232666,
0.7373952269554138,
-0.9233346581459045,
-0.701299250125885,
-0.4584680199623108,
-0.256676316261... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
dim/horoscopes_ru_10k | dim | 2023-09-25T22:23:24Z | 17 | 0 | null | [
"region:us"
] | 2023-09-25T22:23:24Z | 2023-09-25T15:08:17.000Z | 2023-09-25T15:08:17 | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: prediction
dtype: string
splits:
- name: train
num_bytes: 9449348
num_examples: 10000
download_size: 4589882
dataset_size: 9449348
---
# Dataset Card for "horoscopes_ru_10k"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.44385024905204773,
-0.07059872895479202,
0.15737958252429962,
0.4602627754211426,
-0.3604249954223633,
-0.013939167372882366,
0.1571902185678482,
-0.3381660282611847,
1.0361988544464111,
0.564224123954773,
-0.9027223587036133,
-0.7129814624786377,
-0.804141104221344,
-0.1861614882946014... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
dim/panorama_prompts_10k | dim | 2023-09-25T15:16:40Z | 17 | 0 | null | [
"region:us"
] | 2023-09-25T15:16:40Z | 2023-09-25T15:16:34.000Z | 2023-09-25T15:16:34 | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 30478073
num_examples: 11024
download_size: 15784032
dataset_size: 30478073
---
# Dataset Card for "panorama_prompts_10k"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.8780155777931213,
-0.1711755394935608,
0.21678709983825684,
0.8016568422317505,
-0.37792521715164185,
-0.2487010657787323,
0.31023138761520386,
-0.10505764186382294,
0.9120409488677979,
0.4844706952571869,
-0.931418776512146,
-0.7024232149124146,
-0.7096111178398132,
-0.0123703442513942... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Aharneish/spirit-qa | Aharneish | 2023-10-03T07:23:52Z | 17 | 0 | null | [
"region:us"
] | 2023-10-03T07:23:52Z | 2023-09-26T04:53:09.000Z | 2023-09-26T04:53:09 | Entry not found | [
-0.32276472449302673,
-0.22568407654762268,
0.8622258901596069,
0.4346148371696472,
-0.5282984972000122,
0.7012965679168701,
0.7915717363357544,
0.07618629932403564,
0.7746022939682007,
0.2563222646713257,
-0.785281777381897,
-0.22573848068714142,
-0.9104482531547546,
0.5715669393539429,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
sproos/SlimPajama-6B-embedded | sproos | 2023-09-27T19:36:48Z | 17 | 0 | null | [
"region:us"
] | 2023-09-27T19:36:48Z | 2023-09-27T19:06:00.000Z | 2023-09-27T19:06:00 | ---
# For reference on model card metadata, see the spec: https://github.com/huggingface/hub-docs/blob/main/datasetcard.md?plain=1
# Doc / guide: https://huggingface.co/docs/hub/datasets-cards
{}
---
# Dataset Card for SlimPajama-6B-embedded
This is a copy of [DKYoon/SlimPajama-6B](https://huggingface.co/datasets/DKYoon/SlimPajama-6B), together with embeddings generated by [thenlper/gte-large](https://huggingface.co/thenlper/gte-large).
The dataset contains 5.49 million text examples, a representative random sample of [SlimPajama-627B](https://huggingface.co/datasets/cerebras/SlimPajama-627B). Each text is paired with a 1024-dimensional embedding vector intended to capture its semantic content. The vectors were generated with average pooling (a max-pooled version of the dataset may follow in the future).
This dataset is intended to support downstream tasks such as reversing embeddings, interpreting embedding spaces, and building adapters between embedding models. | [
-0.44165724515914917,
-0.6391316652297974,
0.31562739610671997,
0.1260167509317398,
-0.5142214298248291,
-0.30799469351768494,
-0.11859027296304703,
-0.13992680609226227,
0.6868928074836731,
0.6037893891334534,
-0.6791918873786926,
-0.5039716362953186,
-0.6309837102890015,
0.04249089211225... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
keonroohparvar/music_vid_256 | keonroohparvar | 2023-10-10T04:40:44Z | 17 | 0 | null | [
"region:us"
] | 2023-10-10T04:40:44Z | 2023-10-10T04:40:33.000Z | 2023-10-10T04:40:33 | ---
dataset_info:
features:
- name: image
dtype: image
splits:
- name: train
num_bytes: 93921778.139
num_examples: 9179
download_size: 80840102
dataset_size: 93921778.139
---
# Dataset Card for "music_vid_256"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.8469468355178833,
-0.2500373423099518,
0.08007495105266571,
0.31629061698913574,
-0.2868800163269043,
-0.10333964228630066,
-0.039209406822919846,
0.1790061891078949,
0.7430175542831421,
0.49438712000846863,
-1.055098533630371,
-0.8341414928436279,
-0.5159057378768921,
-0.47203147411346... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
nlewins/fleurs_ceb_to_en | nlewins | 2023-10-10T15:44:08Z | 17 | 0 | null | [
"region:us"
] | 2023-10-10T15:44:08Z | 2023-10-10T15:39:39.000Z | 2023-10-10T15:39:39 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
dataset_info:
features:
- name: id
dtype: int32
- name: transcription
dtype: string
- name: language
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription_en
dtype: string
- name: audio_en
dtype:
audio:
sampling_rate: 16000
splits:
- name: train
num_bytes: 4837972886.028
num_examples: 3188
- name: validation
num_bytes: 332770769.0
num_examples: 225
- name: test
num_bytes: 834809869.0
num_examples: 541
download_size: 5885482902
dataset_size: 6005553524.028
---
# Dataset Card for "fleurs_ceb_to_en_2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.6126842498779297,
-0.3770003914833069,
0.2694903314113617,
0.5017682909965515,
-0.21472680568695068,
-0.22982442378997803,
0.19341228902339935,
-0.3759808838367462,
0.76814204454422,
0.639726459980011,
-0.7311698794364929,
-0.6688305139541626,
-0.7680978775024414,
-0.23645846545696259,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
AlignmentLab-AI/EverythingIsAllYouNeed-v0.24 | AlignmentLab-AI | 2023-10-18T05:53:33Z | 17 | 0 | null | [
"region:us"
] | 2023-10-18T05:53:33Z | 2023-10-11T04:20:52.000Z | 2023-10-11T04:20:52 | Entry not found | [
-0.32276487350463867,
-0.22568444907665253,
0.8622263073921204,
0.43461570143699646,
-0.5282988548278809,
0.7012969255447388,
0.7915717363357544,
0.07618642598390579,
0.7746027112007141,
0.25632190704345703,
-0.7852815389633179,
-0.22573848068714142,
-0.910447895526886,
0.5715675354003906,... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
hippocrates/PubmedQA_train | hippocrates | 2023-10-12T16:17:38Z | 17 | 0 | null | [
"region:us"
] | 2023-10-12T16:17:38Z | 2023-10-12T16:17:31.000Z | 2023-10-12T16:17:31 | ---
dataset_info:
features:
- name: id
dtype: string
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 9890661
num_examples: 3000
- name: valid
num_bytes: 159299
num_examples: 50
- name: test
num_bytes: 1622241
num_examples: 500
download_size: 5990883
dataset_size: 11672201
---
# Dataset Card for "PubmedQA_train"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.439451664686203,
0.18773221969604492,
0.38559409976005554,
0.12468516081571579,
-0.23305675387382507,
-0.07184892892837524,
0.3674127757549286,
0.08615810424089432,
0.6256636381149292,
0.4522722363471985,
-0.7767746448516846,
-0.6269656419754028,
-0.619182288646698,
-0.155608132481575,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
dim/camel_ai_physics | dim | 2023-10-12T17:17:57Z | 17 | 0 | null | [
"region:us"
] | 2023-10-12T17:17:57Z | 2023-10-12T17:17:30.000Z | 2023-10-12T17:17:30 | ---
dataset_info:
features:
- name: role_1
dtype: string
- name: topic;
dtype: string
- name: sub_topic
dtype: string
- name: message_1
dtype: string
- name: message_2
dtype: string
splits:
- name: train
num_bytes: 51650490
num_examples: 20000
download_size: 18889012
dataset_size: 51650490
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "camel_ai_physics"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.5013147592544556,
-0.2536172866821289,
-0.1511441320180893,
0.23919840157032013,
-0.28666767477989197,
-0.06629539281129837,
0.3359333574771881,
-0.34892261028289795,
0.6617637276649475,
0.20048344135284424,
-0.8185381889343262,
-0.6808261275291443,
-0.476506769657135,
-0.37433579564094... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
dim/camel_ai_biology | dim | 2023-10-12T17:27:15Z | 17 | 0 | null | [
"region:us"
] | 2023-10-12T17:27:15Z | 2023-10-12T17:26:35.000Z | 2023-10-12T17:26:35 | ---
dataset_info:
features:
- name: role_1
dtype: string
- name: topic;
dtype: string
- name: sub_topic
dtype: string
- name: message_1
dtype: string
- name: message_2
dtype: string
splits:
- name: train
num_bytes: 61275986
num_examples: 20000
download_size: 22376128
dataset_size: 61275986
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "camel_ai_biology"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.42974215745925903,
-0.290292888879776,
-0.24194775521755219,
0.1635635942220688,
-0.3047477900981903,
-0.06410349160432816,
0.38272443413734436,
-0.45683541893959045,
0.8107252717018127,
0.3434065580368042,
-0.7580887079238892,
-0.8320057392120361,
-0.5859478712081909,
-0.11617740243673... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Asif-Saad/Bangla_News_Article | Asif-Saad | 2023-11-07T02:17:19Z | 17 | 0 | null | [
"license:apache-2.0",
"region:us"
] | 2023-11-07T02:17:19Z | 2023-10-16T08:31:58.000Z | 2023-10-16T08:31:58 | ---
license: apache-2.0
---
| [
-0.12853392958641052,
-0.18616779148578644,
0.6529127955436707,
0.49436280131340027,
-0.19319361448287964,
0.23607419431209564,
0.36072003841400146,
0.050563063472509384,
0.579365611076355,
0.7400140762329102,
-0.6508104205131531,
-0.23783954977989197,
-0.7102249264717102,
-0.0478260256350... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
FrsECM/CelebAHQ_mask | FrsECM | 2023-11-02T13:22:21Z | 17 | 0 | null | [
"task_categories:image-segmentation",
"task_categories:image-to-image",
"size_categories:10K<n<100K",
"region:us"
] | 2023-11-02T13:22:21Z | 2023-10-18T13:59:25.000Z | 2023-10-18T13:59:25 | ---
size_categories:
- 10K<n<100K
task_categories:
- image-segmentation
- image-to-image
pretty_name: CelebAHQ Mask Dataset
dataset_info:
features:
- name: image_id
dtype: string
- name: image
dtype: image
- name: annotation
dtype: image
splits:
- name: train
num_bytes: 2829644617.0
num_examples: 28500
- name: test
num_bytes: 150219016.0
num_examples: 1500
download_size: 2993732687
dataset_size: 2979863633.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
| [
-0.12853392958641052,
-0.18616779148578644,
0.6529127955436707,
0.49436280131340027,
-0.19319361448287964,
0.23607419431209564,
0.36072003841400146,
0.050563063472509384,
0.579365611076355,
0.7400140762329102,
-0.6508104205131531,
-0.23783954977989197,
-0.7102249264717102,
-0.0478260256350... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
ceia-nlp/hellaswag-portuguese | ceia-nlp | 2023-10-23T22:39:24Z | 17 | 0 | null | [
"region:us"
] | 2023-10-23T22:39:24Z | 2023-10-18T21:01:27.000Z | 2023-10-18T21:01:27 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
- split: validation
path: data/validation-*
dataset_info:
features:
- name: ind
dtype: int32
- name: activity_label
dtype: string
- name: ctx_a
sequence: string
- name: ctx_b
sequence: string
- name: ctx
sequence: string
- name: endings
sequence: string
- name: source_id
dtype: string
- name: split
dtype: string
- name: split_type
dtype: string
- name: label
dtype: string
splits:
- name: train
num_bytes: 44275109
num_examples: 39905
- name: test
num_bytes: 11058244
num_examples: 10003
- name: validation
num_bytes: 11332175
num_examples: 10042
download_size: 36875810
dataset_size: 66665528
---
# Dataset Card for "hellaswag-portuguese"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.5988726019859314,
-0.30265918374061584,
0.033604953438043594,
0.39450326561927795,
-0.5797833800315857,
-0.15537883341312408,
0.05334975942969322,
-0.30600741505622864,
0.9617871642112732,
0.5993988513946533,
-0.8662109375,
-1.0325629711151123,
-0.7932628989219666,
-0.17295292019844055,... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
carles-undergrad-thesis/en-id-parallel-sentences | carles-undergrad-thesis | 2023-10-20T01:32:19Z | 17 | 0 | null | [
"region:us"
] | 2023-10-20T01:32:19Z | 2023-10-20T01:31:57.000Z | 2023-10-20T01:31:57 | ---
dataset_info:
features:
- name: text_en
dtype: string
- name: text_id
dtype: string
splits:
- name: train
num_bytes: 392096944
num_examples: 1000000
download_size: 204794393
dataset_size: 392096944
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "en-id-parallel-sentences"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.6455099582672119,
-0.5465430021286011,
0.3715488314628601,
0.5201319456100464,
-0.1902807503938675,
-0.11333761364221573,
-0.12500989437103271,
-0.2084330916404724,
0.8199585676193237,
0.3729568421840668,
-0.8307312726974487,
-0.7660292387008667,
-0.5613464713096619,
0.17781832814216614... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
KennethEnevoldsen/spontanous-speech-qa | KennethEnevoldsen | 2023-10-24T11:23:18Z | 17 | 0 | null | [
"task_categories:question-answering",
"annotations_creators:found",
"size_categories:n<1K",
"source_datasets:DDSC/partial-danish-gigaword-no-twitter",
"language:da",
"conversational",
"region:us"
] | 2023-10-24T11:23:18Z | 2023-10-24T11:17:48.000Z | 2023-10-24T11:17:48 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
annotations_creators:
- found
source_datasets:
- DDSC/partial-danish-gigaword-no-twitter
dataset_info:
features:
- name: question
dtype: string
- name: answer
dtype: string
splits:
- name: train
num_bytes: 44345.110764430574
num_examples: 512
- name: test
num_bytes: 11172.889235569422
num_examples: 129
download_size: 37996
dataset_size: 55518
task_categories:
- question-answering
language:
- da
tags:
- conversational
pretty_name: Spontaneous speech QA
size_categories:
- n<1K
---
# Spontaneous speech QA
This dataset contains QA pairs from the spontaneous speech subsection of the Danish Gigaword.
The dataset is created from the [DDSC dataset](DDSC/partial-danish-gigaword-no-twitter) and
filtered to include only QA pairs where the question is shorter than 20 tokens and the answer is
at least 4 tokens long.
To find out more about how the dataset was created, see the accompanying script. | [
-0.5468828082084656,
-0.9936970472335815,
0.607136607170105,
0.3855023682117462,
-0.4552963972091675,
0.27025362849235535,
-0.1828058958053589,
-0.28379830718040466,
0.9629424214363098,
0.5340801477432251,
-1.0377740859985352,
-0.42486730217933655,
-0.24027344584465027,
0.12553156912326813... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
nadsoft/Syria_audio | nadsoft | 2023-10-24T15:31:11Z | 17 | 0 | null | [
"region:us"
] | 2023-10-24T15:31:11Z | 2023-10-24T15:21:55.000Z | 2023-10-24T15:21:55 | ---
dataset_info:
features:
- name: audio
dtype: audio
- name: text
dtype: string
splits:
- name: train
num_bytes: 9669767962.166
num_examples: 72377
download_size: 10562028174
dataset_size: 9669767962.166
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "Syria_audio"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.8038270473480225,
-0.08297126740217209,
0.35323184728622437,
0.16470931470394135,
-0.14963693916797638,
-0.0519416481256485,
-0.04413365200161934,
-0.0852794274687767,
0.685955822467804,
0.35279902815818787,
-0.7181782126426697,
-1.097280502319336,
-0.7139759063720703,
-0.59617286920547... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
eswardivi/Malayalam_MSA_Chunked | eswardivi | 2023-10-25T11:07:41Z | 17 | 0 | null | [
"region:us"
] | 2023-10-25T11:07:41Z | 2023-10-25T11:07:21.000Z | 2023-10-25T11:07:21 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: audio
dtype: audio
- name: label
dtype:
class_label:
names:
'0': Negative
'1': Neutral
'2': Positive
splits:
- name: train
num_bytes: 426015037.0
num_examples: 161
download_size: 424859122
dataset_size: 426015037.0
---
# Dataset Card for "Malayalam_MSA_Chunked"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.3694271445274353,
-0.3994129002094269,
0.22127270698547363,
0.5349137783050537,
-0.586349606513977,
0.02957230992615223,
0.32687366008758545,
-0.09464000165462494,
1.2435100078582764,
0.48255863785743713,
-0.6676395535469055,
-0.5025229454040527,
-0.7874438762664795,
-0.1743629276752472... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
M0hammed87/DatabaseDictionary | M0hammed87 | 2023-10-28T15:23:07Z | 17 | 0 | null | [
"region:us"
] | 2023-10-28T15:23:07Z | 2023-10-28T06:37:33.000Z | 2023-10-28T06:37:33 | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 1654448
--- | [
-0.12853392958641052,
-0.18616779148578644,
0.6529127955436707,
0.49436280131340027,
-0.19319361448287964,
0.23607419431209564,
0.36072003841400146,
0.050563063472509384,
0.579365611076355,
0.7400140762329102,
-0.6508104205131531,
-0.23783954977989197,
-0.7102249264717102,
-0.0478260256350... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Brandoko/Instruct-Recharts-v2 | Brandoko | 2023-10-28T15:44:21Z | 17 | 0 | null | [
"region:us"
] | 2023-10-28T15:44:21Z | 2023-10-28T15:44:20.000Z | 2023-10-28T15:44:20 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: instruction
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 1453192
num_examples: 623
download_size: 409363
dataset_size: 1453192
---
# Dataset Card for "Instruct-Recharts-v2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.48368436098098755,
-0.08583386987447739,
0.26275014877319336,
0.06750083714723587,
-0.22077476978302002,
0.1642959713935852,
0.562637209892273,
-0.3385363817214966,
0.6760886311531067,
0.7016724944114685,
-0.8874477744102478,
-0.6069028377532959,
-0.521389365196228,
-0.3166615962982178,... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
HoangHa/VietAI_QA | HoangHa | 2023-10-29T09:17:20Z | 17 | 0 | null | [
"region:us"
] | 2023-10-29T09:17:20Z | 2023-10-29T08:24:01.000Z | 2023-10-29T08:24:01 | Entry not found | [
-0.32276472449302673,
-0.22568407654762268,
0.8622258901596069,
0.4346148371696472,
-0.5282984972000122,
0.7012965679168701,
0.7915717363357544,
0.07618629932403564,
0.7746022939682007,
0.2563222646713257,
-0.785281777381897,
-0.22573848068714142,
-0.9104482531547546,
0.5715669393539429,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
toilaluan/ig_rewarding_db_v4 | toilaluan | 2023-10-29T18:58:49Z | 17 | 0 | null | [
"region:us"
] | 2023-10-29T18:58:49Z | 2023-10-29T18:58:35.000Z | 2023-10-29T18:58:35 | ---
dataset_info:
features:
- name: image
dtype: image
- name: topic
dtype: string
- name: prompt
dtype: string
- name: request_id
dtype: int64
- name: model_type
dtype: string
splits:
- name: train
num_bytes: 330547445.0
num_examples: 4500
download_size: 340509190
dataset_size: 330547445.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "ig_rewarding_db_v4"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.5495621562004089,
-0.042446114122867584,
0.08999547362327576,
0.3525426387786865,
-0.2703324854373932,
0.027829529717564583,
0.43292421102523804,
-0.1141401082277298,
0.8590195775032043,
0.4660221040248871,
-1.056433916091919,
-0.7967444062232971,
-0.5232887864112854,
0.0345470011234283... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Ioana23/codeparrot-ds-50k | Ioana23 | 2023-10-30T08:20:47Z | 17 | 0 | null | [
"region:us"
] | 2023-10-30T08:20:47Z | 2023-10-30T08:19:20.000Z | 2023-10-30T08:19:20 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: valid
path: data/valid-*
dataset_info:
features:
- name: repo_name
dtype: string
- name: path
dtype: string
- name: copies
dtype: string
- name: size
dtype: string
- name: content
dtype: string
- name: license
dtype: string
splits:
- name: train
num_bytes: 652784990.8524525
num_examples: 50000
- name: valid
num_bytes: 6658657.886815172
num_examples: 500
download_size: 251530132
dataset_size: 659443648.7392677
---
# Dataset Card for "codeparrot-ds-50k"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.7333784699440002,
0.1200467199087143,
0.04438036307692528,
0.24781069159507751,
-0.3991532027721405,
0.35154128074645996,
0.1985708475112915,
0.089913010597229,
0.9396597146987915,
0.4379928708076477,
-0.8348091840744019,
-0.7881605625152588,
-0.5690954923629761,
-0.07490803301334381,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
rocailler/test_prod_lab | rocailler | 2023-10-31T19:35:17Z | 17 | 0 | null | [
"license:cc",
"region:us"
] | 2023-10-31T19:35:17Z | 2023-10-31T17:40:12.000Z | 2023-10-31T17:40:12 | ---
license: cc
dataset_info:
features:
- name: product_name
dtype: string
- name: labels
dtype: string
splits:
- name: train
num_bytes: 19643
num_examples: 99
download_size: 0
dataset_size: 19643
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
| [
-0.12853392958641052,
-0.18616779148578644,
0.6529127955436707,
0.49436280131340027,
-0.19319361448287964,
0.23607419431209564,
0.36072003841400146,
0.050563063472509384,
0.579365611076355,
0.7400140762329102,
-0.6508104205131531,
-0.23783954977989197,
-0.7102249264717102,
-0.0478260256350... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
derekiya/bigquery | derekiya | 2023-10-31T20:48:59Z | 17 | 0 | null | [
"region:us"
] | 2023-10-31T20:48:59Z | 2023-10-31T20:38:25.000Z | 2023-10-31T20:38:25 | Entry not found | [
-0.32276472449302673,
-0.22568407654762268,
0.8622258901596069,
0.4346148371696472,
-0.5282984972000122,
0.7012965679168701,
0.7915717363357544,
0.07618629932403564,
0.7746022939682007,
0.2563222646713257,
-0.785281777381897,
-0.22573848068714142,
-0.9104482531547546,
0.5715669393539429,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
phanvancongthanh/enamine_leadlike | phanvancongthanh | 2023-11-01T00:13:38Z | 17 | 0 | null | [
"region:us"
] | 2023-11-01T00:13:38Z | 2023-11-01T00:02:22.000Z | 2023-11-01T00:02:22 | ---
dataset_info:
features:
- name: smiles
dtype: string
splits:
- name: train
num_bytes: 31490993396
num_examples: 672148662
download_size: 12563051169
dataset_size: 31490993396
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "enamine_leadlike"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.43448442220687866,
-0.2987968921661377,
0.33307626843452454,
0.37722182273864746,
-0.22309409081935883,
-0.4293922781944275,
-0.06279298663139343,
0.0527556948363781,
1.0177959203720093,
0.4842734932899475,
-0.7871468663215637,
-0.8030875325202942,
-0.5981903672218323,
-0.03643710538744... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
sunghuncsa/testdataset | sunghuncsa | 2023-11-01T07:07:32Z | 17 | 0 | null | [
"region:us"
] | 2023-11-01T07:07:32Z | 2023-11-01T07:00:22.000Z | 2023-11-01T07:00:22 | Entry not found | [
-0.32276472449302673,
-0.22568407654762268,
0.8622258901596069,
0.4346148371696472,
-0.5282984972000122,
0.7012965679168701,
0.7915717363357544,
0.07618629932403564,
0.7746022939682007,
0.2563222646713257,
-0.785281777381897,
-0.22573848068714142,
-0.9104482531547546,
0.5715669393539429,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
aminlouhichi/donut5Fournissuer | aminlouhichi | 2023-11-01T20:04:45Z | 17 | 0 | null | [
"region:us"
] | 2023-11-01T20:04:45Z | 2023-11-01T20:04:38.000Z | 2023-11-01T20:04:38 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
dataset_info:
features:
- name: image
dtype: image
- name: ground_truth
dtype: string
splits:
- name: train
num_bytes: 22887975.0
num_examples: 106
- name: validation
num_bytes: 22887975.0
num_examples: 106
- name: test
num_bytes: 35690926.0
num_examples: 106
download_size: 69740850
dataset_size: 81466876.0
---
# Dataset Card for "donut5Fournissuer"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.3768826723098755,
-0.038618043065071106,
0.2465788722038269,
0.21092399954795837,
-0.049947988241910934,
0.13665756583213806,
0.2647395133972168,
-0.14292722940444946,
0.7208327054977417,
0.592031717300415,
-0.8613772988319397,
-0.7027674913406372,
-0.658362627029419,
-0.118996009230613... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Rami/Diabetic_Retinopathy_Preprocessed_Dataset_256x256 | Rami | 2023-11-02T17:52:16Z | 17 | 0 | null | [
"task_categories:image-classification",
"size_categories:1K<n<10K",
"language:en",
"medical",
"region:us"
] | 2023-11-02T17:52:16Z | 2023-11-02T05:02:54.000Z | 2023-11-02T05:02:54 | ---
language:
- en
size_categories:
- 1K<n<10K
task_categories:
- image-classification
tags:
- medical
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype: string
splits:
- name: train
num_bytes: 354568127.0
num_examples: 2750
download_size: 0
dataset_size: 354568127.0
---
This dataset comes from the [Kaggle dataset](https://www.kaggle.com/datasets/sachinkumar413/diabetic-retinopathy-dataset/)
by the user [Sachin Kumar](https://www.kaggle.com/sachinkumar413).
- The goal of this dataset is to let the Varun AIM Projects download it and start running experiments on their local computers through the Hugging Face libraries, which is the workflow I strongly recommend. | [
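As a quick sanity check on the split metadata in the card above, the average encoded image size can be derived directly from the YAML fields (a back-of-the-envelope sketch using the card's own numbers, not an official figure):

```python
# values taken from the card's train split metadata
num_bytes = 354_568_127
num_examples = 2_750

avg_kib = num_bytes / num_examples / 1024
print(f"average encoded image size: {avg_kib:.0f} KiB")
```

At roughly 126 KiB per 256x256 image, the images are plausibly stored as compressed JPEG/PNG rather than raw pixels.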
-0.23931598663330078,
-0.47580865025520325,
0.16697712242603302,
-0.19824880361557007,
-0.20245593786239624,
0.32807713747024536,
0.3904256522655487,
-0.31101614236831665,
0.3245077133178711,
0.9176143407821655,
-0.6849024891853333,
-0.5209628939628601,
-0.3587099611759186,
0.1705409437417... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Shishir1807/test_drug | Shishir1807 | 2023-11-02T07:01:49Z | 17 | 0 | null | [
"region:us"
] | 2023-11-02T07:01:49Z | 2023-11-02T07:01:32.000Z | 2023-11-02T07:01:32 | Entry not found | [
-0.32276472449302673,
-0.22568407654762268,
0.8622258901596069,
0.4346148371696472,
-0.5282984972000122,
0.7012965679168701,
0.7915717363357544,
0.07618629932403564,
0.7746022939682007,
0.2563222646713257,
-0.785281777381897,
-0.22573848068714142,
-0.9104482531547546,
0.5715669393539429,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Intuit-GenSRF/all_french_datasets | Intuit-GenSRF | 2023-11-03T22:31:34Z | 17 | 0 | null | [
"region:us"
] | 2023-11-03T22:31:34Z | 2023-11-03T22:31:04.000Z | 2023-11-03T22:31:04 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: text
dtype: string
- name: labels
sequence: string
- name: encoded_labels
sequence: int64
- name: lang
dtype: string
- name: has_toxic
dtype: int64
- name: has_profane
dtype: int64
- name: has_insult
dtype: int64
- name: has_hate
dtype: int64
- name: has_threat
dtype: int64
- name: has_sexual
dtype: int64
- name: has_offensive
dtype: int64
- name: has_selfharm
dtype: int64
- name: has_harassment
dtype: int64
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 1135963065
num_examples: 2124196
download_size: 522591191
dataset_size: 1135963065
---
# Dataset Card for "all_french_datasets"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.5870482921600342,
-0.328619122505188,
0.4064498543739319,
0.43509793281555176,
-0.0632673054933548,
-0.017366496846079826,
0.24754047393798828,
0.0018395022489130497,
0.9670878648757935,
0.6889093518257141,
-0.7143211364746094,
-0.9312423467636108,
-0.825201153755188,
-0.135366067290306... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Giacinta/weibo | Giacinta | 2023-11-04T03:10:44Z | 17 | 1 | null | [
"task_categories:text-classification",
"size_categories:n<1K",
"language:zh",
"license:apache-2.0",
"medical",
"region:us"
] | 2023-11-04T03:10:44Z | 2023-11-04T02:31:51.000Z | 2023-11-04T02:31:51 | ---
license: apache-2.0
task_categories:
- text-classification
language:
- zh
tags:
- medical
pretty_name: weibo
size_categories:
- n<1K
configs:
- config_name: default
data_files:
- split: train
path: PYH微博抽样数据.csv
--- | [
-0.12853392958641052,
-0.18616779148578644,
0.6529127955436707,
0.49436280131340027,
-0.19319361448287964,
0.23607419431209564,
0.36072003841400146,
0.050563063472509384,
0.579365611076355,
0.7400140762329102,
-0.6508104205131531,
-0.23783954977989197,
-0.7102249264717102,
-0.0478260256350... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
san457/my_dataset | san457 | 2023-11-04T20:06:28Z | 17 | 0 | null | [
"region:us"
] | 2023-11-04T20:06:28Z | 2023-11-04T20:02:56.000Z | 2023-11-04T20:02:56 | ---
dataset_info:
features:
- name: audio
dtype: audio
splits:
- name: train
num_bytes: 79302267.0
num_examples: 3
download_size: 77773397
dataset_size: 79302267.0
---
# Dataset Card for "my_dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.7651491761207581,
-0.2544492185115814,
0.19326886534690857,
0.20587360858917236,
-0.011474364437162876,
0.01957985945045948,
0.2946240305900574,
-0.0951300859451294,
1.0738117694854736,
0.5611457228660583,
-0.901695966720581,
-0.6428897976875305,
-0.5292006134986877,
0.01166255399584770... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
kalyan003/Question_Answer_Dataset | kalyan003 | 2023-11-05T03:21:15Z | 17 | 0 | null | [
"license:unknown",
"region:us"
] | 2023-11-05T03:21:15Z | 2023-11-05T03:20:28.000Z | 2023-11-05T03:20:28 | ---
license: unknown
---
| [
-0.12853392958641052,
-0.18616779148578644,
0.6529127955436707,
0.49436280131340027,
-0.19319361448287964,
0.23607419431209564,
0.36072003841400146,
0.050563063472509384,
0.579365611076355,
0.7400140762329102,
-0.6508104205131531,
-0.23783954977989197,
-0.7102249264717102,
-0.0478260256350... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
saumya1999/squad_qa | saumya1999 | 2023-11-07T00:26:05Z | 17 | 0 | null | [
"region:us"
] | 2023-11-07T00:26:05Z | 2023-11-05T20:55:39.000Z | 2023-11-05T20:55:39 | Entry not found | [
-0.32276472449302673,
-0.22568407654762268,
0.8622258901596069,
0.4346148371696472,
-0.5282984972000122,
0.7012965679168701,
0.7915717363357544,
0.07618629932403564,
0.7746022939682007,
0.2563222646713257,
-0.785281777381897,
-0.22573848068714142,
-0.9104482531547546,
0.5715669393539429,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
kjappelbaum/chemnlp-orbnet-denali | kjappelbaum | 2023-11-07T07:18:26Z | 17 | 0 | null | [
"region:us"
] | 2023-11-07T07:18:26Z | 2023-11-07T06:41:09.000Z | 2023-11-07T06:41:09 | ---
dataset_info:
features:
- name: id
dtype: string
- name: smiles
dtype: string
- name: xyz
dtype: string
- name: mol2000
dtype: string
- name: mol3000
dtype: string
- name: charge
dtype: int64
- name: dft_energy
dtype: float64
- name: xtb1_energy
dtype: float64
splits:
- name: train
num_bytes: 6436401032
num_examples: 1053275
download_size: 2534938845
dataset_size: 6436401032
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "chemnlp-orbnet-denali"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.424858033657074,
-0.023524103686213493,
0.10750240832567215,
0.1815456748008728,
-0.08490698039531708,
0.12364885956048965,
0.25972267985343933,
-0.0915754958987236,
0.7844516038894653,
0.5718276500701904,
-0.8521345257759094,
-0.9616534113883972,
-0.37341976165771484,
-0.02098741754889... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
joujiboi/japanese-anime-speech | joujiboi | 2023-11-22T13:47:06Z | 17 | 3 | null | [
"task_categories:automatic-speech-recognition",
"size_categories:10K<n<100K",
"language:ja",
"license:cc0-1.0",
"anime",
"japanese",
"日本語",
"nihongo",
"speech",
"audio-text",
"asr",
"whisper",
"voice",
"large-v3",
"region:us"
] | 2023-11-22T13:47:06Z | 2023-11-07T13:53:40.000Z | 2023-11-07T13:53:40 | ---
language:
- ja
license: cc0-1.0
size_categories:
- 10K<n<100K
task_categories:
- automatic-speech-recognition
pretty_name: Japanese-Anime-Speech
dataset_info:
features:
- name: audio
dtype: audio
- name: transcription
dtype: string
splits:
- name: train
num_bytes: 3981327916.675
num_examples: 38325
download_size: 4168517599
dataset_size: 3981327916.675
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
tags:
- anime
- japanese
- 日本語
- nihongo
- speech
- audio-text
- asr
- whisper
- voice
- large-v3
---
# Japanese Anime Speech Dataset
**japanese-anime-speech** is an audio-text dataset designed for the training of automatic speech recognition models. The dataset is comprised of thousands of audio clips and their corresponding transcriptions from different visual novels.
The goal of this dataset is to enhance the proficiency of automatic speech recognition systems, such as OpenAI's [Whisper](https://huggingface.co/openai/whisper-large-v2), in accurately transcribing dialogue from anime and other similar Japanese media. This genre is characterized by unique linguistic features and speech patterns that diverge from conventional Japanese speech.
A list of all audio files and transcriptions are [**here**](https://huggingface.co/datasets/joujiboi/japanese-anime-speech/raw/main/audio_transcription_list.txt).
# Changelog
* V1 - This version contains **16,143** audio-text pairs from the visual novel `IxSHE Tell`. Some cleaning of the transcriptions has been done to get rid of unwanted characters at the start and end of lines, but I intend to do much more for the second version.
* V2 - This version contains **23,422** audio-text pairs from three different visual novels. Cleaning has been done to remove most nsfw lines, especially noises that aren't words. The audio is now in mp3 format, rather than wav. This version contains **32.6** hours of audio.
* **V3** - The dataset now contains **38,325** audio-text pairs from five different visual novels. Very thorough cleaning has been done to remove almost all nsfw or low-quality audio files. Transcriptions have been formatted to contain much fewer dramatised duplicated characters (for example 「ああああーーー」), and transcriptions have been made much more consistent (for example, using the same type of quotation mark). This version contains **52.5 hours** of audio.
# Dataset information
* **38,325** audio-text pairs
* **52.5 hours** of audio (OpenAI suggests a minimum of [5 hours](https://huggingface.co/blog/fine-tune-whisper) for productive [Whisper](https://huggingface.co/openai/whisper-large-v2) fine-tuning).
* **4.9s** average audio length.
* **All** transcriptions have been scraped directly from the game files of visual novels.
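The headline numbers above are mutually consistent; a one-line check multiplying the pair count by the average clip length (values taken from this card, with the small gap explained by rounding of the 4.9s average):

```python
num_pairs = 38_325   # audio-text pairs (V3)
avg_len_s = 4.9      # average clip length in seconds

total_hours = num_pairs * avg_len_s / 3600
print(f"~{total_hours:.1f} hours of audio")
```

This lands at roughly 52.2 hours, in line with the stated 52.5 hours.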
<div class="course-tip course-tip-orange bg-gradient-to-br dark:bg-gradient-to-r before:border-orange-500 dark:before:border-orange-800 from-orange-50 dark:from-gray-900 to-white dark:to-gray-950 border border-orange-50 text-orange-700 dark:text-gray-400">
<p><b>Content Warning:</b> Please be advised that the majority of the audio in this dataset is sourced from visual novels and may include content that is not suitable for all audiences, such as suggestive sounds or mature topics. Efforts have been undertaken to minimise this content as much as possible. </p>
</div>
# To do
* [X] Create a dataset of over 10k items
* [X] Create a dataset of over 20k items
* [X] Compress the audio with minimal quality loss
* [X] Create a dataset of over 30k items
* [ ] Create more workflows for scraping audio from visual novels that use an engine other than Artemis
* [ ] Add audio from more visual novels
* [ ] Convert names in transcriptions to katakana?
# Use & Credit
This dataset is openly available for commercial or non-commercial use. Anyone is welcome to use this resource as they deem appropriate. However, the creator assumes no responsibility for the consequences of its use. While not mandatory, crediting this dataset with a hyperlink in any derivative work would be greatly appreciated.
I hope that by sharing this dataset, we (the open-source community) improve automatic speech recognition for anime content. | [
-0.17295584082603455,
-0.6055169701576233,
0.26360374689102173,
0.2454892098903656,
-0.09326424449682236,
-0.30050599575042725,
-0.2988753914833069,
-0.5927961468696594,
0.5188033580780029,
0.7888553142547607,
-0.756962239742279,
-0.6665269136428833,
-0.21830067038536072,
0.217318385839462... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
dim/norquinal_claude_multiround_chat_30k | dim | 2023-11-08T02:43:45Z | 17 | 0 | null | [
"region:us"
] | 2023-11-08T02:43:45Z | 2023-11-08T02:43:09.000Z | 2023-11-08T02:43:09 | ---
dataset_info:
features:
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: id
dtype: string
splits:
- name: train
num_bytes: 176848427
num_examples: 32170
download_size: 95127719
dataset_size: 176848427
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "norquinal_claude_multiround_chat_30k"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.5115129947662354,
-0.33119314908981323,
-0.0699065774679184,
0.6278461813926697,
-0.11957190185785294,
-0.026150472462177277,
0.13341408967971802,
-0.13547544181346893,
0.8366467356681824,
0.7918888926506042,
-0.8945015072822571,
-0.8749160170555115,
-0.44268232583999634,
-0.46439811587... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
cmu-mlsp/wavlm-large_layer21-librispeech-asr100h | cmu-mlsp | 2023-11-10T23:15:08Z | 17 | 0 | null | [
"region:us"
] | 2023-11-10T23:15:08Z | 2023-11-08T03:14:55.000Z | 2023-11-08T03:14:55 | ---
dataset_info:
features:
- name: file
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 24000
- name: text
dtype: string
- name: speaker_id
dtype: int64
- name: chapter_id
dtype: int64
- name: id
dtype: string
- name: audio_codes
sequence: string
splits:
- name: train
num_bytes: 17518738129.625
num_examples: 28539
- name: validation
num_bytes: 938627900.125
num_examples: 2703
- name: test
num_bytes: 941323793.5
num_examples: 2620
download_size: 18863507508
dataset_size: 19398689823.25
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
---
# Dataset Card for "wavlm-large_layer21-librispeech-asr100h"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.6555708050727844,
-0.3218686878681183,
0.0207720547914505,
0.5489891171455383,
-0.07901781052350998,
0.025472408160567284,
0.0020824922248721123,
-0.3603537678718567,
0.588657796382904,
0.7249594330787659,
-0.8835094571113586,
-0.6784469485282898,
-0.4368042051792145,
-0.530625522136688... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
nonstopio/GeneratedMusic | nonstopio | 2023-11-10T11:18:41Z | 17 | 0 | null | [
"license:apache-2.0",
"region:us"
] | 2023-11-10T11:18:41Z | 2023-11-08T13:03:25.000Z | 2023-11-08T13:03:25 | ---
license: apache-2.0
dataset_info:
features:
- name: audio
dtype: audio
- name: label
dtype: string
splits:
- name: train
num_bytes: 264370838.0
num_examples: 50
download_size: 0
dataset_size: 264370838.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
| [
-0.1285335123538971,
-0.1861683875322342,
0.6529128551483154,
0.49436232447624207,
-0.19319400191307068,
0.23607441782951355,
0.36072009801864624,
0.05056373029947281,
0.5793656706809998,
0.7400146722793579,
-0.650810182094574,
-0.23784008622169495,
-0.7102247476577759,
-0.0478255338966846... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Lollitor/FineTuneDataset | Lollitor | 2023-11-08T17:05:47Z | 17 | 0 | null | [
"region:us"
] | 2023-11-08T17:05:47Z | 2023-11-08T17:05:40.000Z | 2023-11-08T17:05:40 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
dataset_info:
features:
- name: sequence
dtype: string
- name: label
dtype: float64
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 20058390
num_examples: 17033
- name: validation
num_bytes: 2192697
num_examples: 1893
download_size: 9789079
dataset_size: 22251087
---
# Dataset Card for "FineTuneDataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.77580726146698,
-0.33101704716682434,
0.0906348004937172,
0.1585543304681778,
-0.2819790244102478,
-0.12162197381258011,
0.15468348562717438,
-0.0860215499997139,
0.8903214335441589,
0.6100728511810303,
-0.8758994340896606,
-0.6892549395561218,
-0.5202297568321228,
-0.2909126877784729,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
dim/povarenok_10k | dim | 2023-11-09T02:08:35Z | 17 | 0 | null | [
"region:us"
] | 2023-11-09T02:08:35Z | 2023-11-09T01:46:45.000Z | 2023-11-09T01:46:45 | ---
dataset_info:
features:
- name: full_receipt_text
dtype: string
- name: steps
sequence: string
- name: title_receipt
dtype: string
- name: title
dtype: string
- name: ingridients
sequence: string
- name: views
dtype: int64
- name: likes
dtype: int64
- name: ups
dtype: int64
- name: link
dtype: string
splits:
- name: train
num_bytes: 37922507.52688172
num_examples: 10000
download_size: 12019931
dataset_size: 37922507.52688172
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "povarenok_10k"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.5548620223999023,
-0.1595313549041748,
0.22752226889133453,
0.3784494400024414,
-0.4642952084541321,
-0.05882040783762932,
0.32975876331329346,
-0.06556738913059235,
0.8919500112533569,
0.6182172894477844,
-0.6816827654838562,
-1.0211528539657593,
-0.6025710105895996,
-0.069284744560718... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
ManeAI31416/Platillos | ManeAI31416 | 2023-11-12T01:06:05Z | 17 | 0 | null | [
"license:pddl",
"region:us"
] | 2023-11-12T01:06:05Z | 2023-11-09T03:18:45.000Z | 2023-11-09T03:18:45 | ---
license: pddl
---
| [
-0.12853392958641052,
-0.18616779148578644,
0.6529127955436707,
0.49436280131340027,
-0.19319361448287964,
0.23607419431209564,
0.36072003841400146,
0.050563063472509384,
0.579365611076355,
0.7400140762329102,
-0.6508104205131531,
-0.23783954977989197,
-0.7102249264717102,
-0.0478260256350... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
tucan-ai/summaries-de-v2 | tucan-ai | 2023-11-09T10:42:15Z | 17 | 0 | null | [
"region:us"
] | 2023-11-09T10:42:15Z | 2023-11-09T10:40:10.000Z | 2023-11-09T10:40:10 | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 23211177.6
num_examples: 2015
download_size: 13703832
dataset_size: 23211177.6
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "summaries-de-v2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.5241008996963501,
-0.2515522241592407,
0.23065674304962158,
0.12772586941719055,
-0.4208766520023346,
-0.17752735316753387,
0.41797083616256714,
-0.18207544088363647,
0.9745088815689087,
0.5912265181541443,
-0.7746928334236145,
-0.6063164472579956,
-0.8009397387504578,
-0.13323090970516... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
andreped/LyNoS | andreped | 2023-11-15T13:11:19Z | 17 | 4 | null | [
"task_categories:image-segmentation",
"size_categories:1B<n<10B",
"language:en",
"license:mit",
"medical",
"region:us"
] | 2023-11-15T13:11:19Z | 2023-11-09T19:35:31.000Z | 2023-11-09T19:35:31 | ---
license: mit
task_categories:
- image-segmentation
language:
- en
tags:
- medical
pretty_name: AeroPath
size_categories:
- 1B<n<10B
---
<div align="center">
<h1 align="center">🫁 LyNoS 🤗</h1>
<h3 align="center">A multilabel lymph node segmentation dataset from contrast CT</h3>
**LyNoS** was developed by SINTEF Medical Image Analysis to accelerate medical AI research.
</div>
## [Brief intro](https://github.com/raidionics/LyNoS#brief-intro)
This repository contains the LyNoS dataset described in ["_Mediastinal lymph nodes segmentation using 3D convolutional neural network ensembles and anatomical priors guiding_"](https://doi.org/10.1080/21681163.2022.2043778).
The dataset has now also been uploaded to Zenodo and the Hugging Face Hub, enabling users to more easily access the data through a Python API.
We have also developed a web demo to enable others to easily test the pretrained model presented in the paper. The application was developed using [Gradio](https://www.gradio.app) for the frontend and the segmentation is performed using the [Raidionics](https://raidionics.github.io/) backend.
## [Dataset](https://github.com/raidionics/LyNoS#data) <a href="https://colab.research.google.com/gist/andreped/274bf953771059fd9537877404369bed/lynos-load-dataset-example.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
### [Accessing dataset](https://github.com/raidionics/LyNoS#accessing-dataset)
The dataset contains 15 CTs with corresponding lymph nodes, azygos, esophagus, and subclavian carotid arteries. The folder structure is described below.
The easiest way to access the data is through Python with Hugging Face's [datasets](https://pypi.org/project/datasets/) package:
```
from datasets import load_dataset
# downloads data from Zenodo through the Hugging Face hub
# - might take several minutes (~5 minutes in CoLab)
dataset = load_dataset("andreped/LyNoS")
print(dataset)
# list paths of all available patients and corresponding features (ct/lymphnodes/azygos/brachiocephalicveins/esophagus/subclaviancarotidarteries)
for d in dataset["test"]:
print(d)
```
A detailed interactive demo on how to load and work with the data can be seen on CoLab. Click the CoLab badge <a href="https://colab.research.google.com/gist/andreped/274bf953771059fd9537877404369bed/lynos-load-dataset-example.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> to see the notebook or alternatively click [here](https://github.com/raidionics/LyNoS/blob/main/notebooks/lynos-load-dataset-example.ipynb) to see it on GitHub.
### [Dataset structure](https://github.com/raidionics/LyNoS#dataset-structure)
```
└── LyNoS.zip
├── stations_sto.csv
└── LyNoS/
├── Pat1/
│ ├── pat1_data.nii.gz
│ ├── pat1_labels_Azygos.nii.gz
│ ├── pat1_labels_Esophagus.nii.gz
│ ├── pat1_labels_LymphNodes.nii.gz
│ └── pat1_labels_SubCarArt.nii.gz
├── [...]
└── Pat15/
├── pat15_data.nii.gz
├── pat15_labels_Azygos.nii.gz
├── pat15_labels_Esophagus.nii.gz
├── pat15_labels_LymphNodes.nii.gz
└── pat15_labels_SubCarArt.nii.gz
```
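A minimal sketch of enumerating the per-patient files implied by the layout above (assuming the archive has been extracted so that `LyNoS/Pat1/ … Pat15/` exist; this only constructs the expected paths, it does not read the NIfTI volumes):

```python
from pathlib import Path

# label-mask suffixes as they appear in each patient folder
LABELS = ["Azygos", "Esophagus", "LymphNodes", "SubCarArt"]

def patient_files(root: str = "LyNoS", n_patients: int = 15):
    """Yield (ct_path, [mask_paths]) for each patient folder."""
    for i in range(1, n_patients + 1):
        pat = Path(root) / f"Pat{i}"
        ct = pat / f"pat{i}_data.nii.gz"
        masks = [pat / f"pat{i}_labels_{lab}.nii.gz" for lab in LABELS]
        yield ct, masks

files = list(patient_files())
print(len(files))  # 15 patients
```

The masks could then be loaded with any NIfTI reader (e.g. nibabel) for training a multilabel segmentation model.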
### [NIH Dataset Completion](https://github.com/raidionics/LyNoS#nih-dataset-completion)
A larger dataset made of 90 patients featuring enlarged lymph nodes has also been made available by the National Institutes of Health, and is available for download on the official [web-page](https://wiki.cancerimagingarchive.net/pages/viewpage.action?pageId=19726546).
As a supplement to this dataset, lymph nodes segmentation masks have been refined for all patients and stations have been manually assigned to each, available [here](https://drive.google.com/uc?id=1iVCnZc1GHwtx9scyAXdANqz2HdQArTHn).
## [Demo](https://github.com/raidionics/LyNoS#demo) <a target="_blank" href="https://huggingface.co/spaces/andreped/LyNoS"><img src="https://img.shields.io/badge/🤗%20Hugging%20Face-Spaces-yellow.svg"></a>
To access the live demo, click on the `Hugging Face` badge above. Below is a snapshot of the current state of the demo app.
<img width="1400" alt="Screenshot 2023-11-09 at 20 53 29" src="https://github.com/raidionics/LyNoS/assets/29090665/ce661da0-d172-4481-b9b5-8b3e29a9fc1f">
## [Development](https://github.com/raidionics/LyNoS#development)
### [Docker](https://github.com/raidionics/LyNoS#docker)
Alternatively, you can deploy the software locally. Note that this is only relevant for development purposes. Simply dockerize the app and run it:
```
docker build -t lynos .
docker run -it -p 7860:7860 lynos
```
Then open `http://127.0.0.1:7860` in your favourite internet browser to view the demo.
### [Python](https://github.com/raidionics/LyNoS#python)
It is also possible to run the app locally without Docker. Just set up a virtual environment and run the app.
Note that the current working directory would need to be adjusted based on where `LyNoS` is located on disk.
```
git clone https://github.com/raidionics/LyNoS.git
cd LyNoS/
virtualenv -p python3 venv --clear
source venv/bin/activate
pip install -r ./demo/requirements.txt
python demo/app.py --cwd ./
```
## [Citation](https://github.com/raidionics/LyNoS#citation)
If you found the dataset and/or web application relevant in your research, please cite the following reference:
```
@article{bouget2021mediastinal,
author = {David Bouget and André Pedersen and Johanna Vanel and Haakon O. Leira and Thomas Langø},
title = {Mediastinal lymph nodes segmentation using 3D convolutional neural network ensembles and anatomical priors guiding},
journal = {Computer Methods in Biomechanics and Biomedical Engineering: Imaging \& Visualization},
volume = {0},
number = {0},
pages = {1-15},
year = {2022},
publisher = {Taylor & Francis},
doi = {10.1080/21681163.2022.2043778},
URL = {https://doi.org/10.1080/21681163.2022.2043778},
eprint = {https://doi.org/10.1080/21681163.2022.2043778}
}
```
## [License](https://github.com/raidionics/LyNoS#license)
The code in this repository is released under [MIT license](https://github.com/raidionics/LyNoS/blob/main/LICENSE). | [
-0.4123977720737457,
-0.2549324631690979,
0.5835862755775452,
-0.03748466819524765,
-0.551340639591217,
0.08666270971298218,
0.010917856357991695,
-0.702224612236023,
0.8227691054344177,
0.15038079023361206,
-0.7685324549674988,
-0.5495119094848633,
-0.3209850788116455,
0.11019941419363022... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
bys2058/sd1111 | bys2058 | 2023-11-14T13:52:27Z | 17 | 0 | null | [
"region:us"
] | 2023-11-14T13:52:27Z | 2023-11-11T01:15:43.000Z | 2023-11-11T01:15:43 | ---
dataset_info:
features:
- name: image
dtype: image
- name: hair_mask
dtype: image
- name: result_image
dtype: image
- name: image_caption
dtype: string
splits:
- name: train
num_bytes: 93744509413.828
num_examples: 54062
download_size: 91885409225
dataset_size: 93744509413.828
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "sd1111"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.7589772939682007,
0.04085793346166611,
0.36826446652412415,
0.33788612484931946,
-0.18425430357456207,
0.12468373030424118,
0.3840981721878052,
-0.15426677465438843,
1.059314489364624,
0.5111561417579651,
-0.8321301937103271,
-0.7251666188240051,
-0.6541339755058289,
-0.2159870117902755... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
2ndBestKiller/DrugTestWithClassLabels | 2ndBestKiller | 2023-11-11T12:22:38Z | 17 | 0 | null | [
"license:unknown",
"region:us"
] | 2023-11-11T12:22:38Z | 2023-11-11T11:18:22.000Z | 2023-11-11T11:18:22 | ---
license: unknown
dataset_info:
features:
- name: entities
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-DRUG
'2': I-DRUG
- name: words
sequence: string
splits:
- name: train
num_bytes: 1451130
num_examples: 2193
- name: validation
num_bytes: 182934
num_examples: 274
- name: test
num_bytes: 181271
num_examples: 275
download_size: 347218
dataset_size: 1815335
---
| [
-0.1285335123538971,
-0.1861683875322342,
0.6529128551483154,
0.49436232447624207,
-0.19319400191307068,
0.23607441782951355,
0.36072009801864624,
0.05056373029947281,
0.5793656706809998,
0.7400146722793579,
-0.650810182094574,
-0.23784008622169495,
-0.7102247476577759,
-0.0478255338966846... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
mjphayes/textbook_dataset | mjphayes | 2023-11-12T06:27:32Z | 17 | 0 | null | [
"region:us"
] | 2023-11-12T06:27:32Z | 2023-11-12T06:27:24.000Z | 2023-11-12T06:27:24 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 729355
num_examples: 8605
download_size: 371479
dataset_size: 729355
---
# Dataset Card for "textbook_dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.5970937013626099,
-0.2786937654018402,
0.21819010376930237,
-0.09536914527416229,
-0.1959218531847,
-0.14067427814006805,
0.15221764147281647,
0.13425441086292267,
0.5388787984848022,
0.36475133895874023,
-0.6326197385787964,
-0.910068690776825,
-0.4371059536933899,
-0.2841965854167938,... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
gokul00060/datasetv2 | gokul00060 | 2023-11-12T07:59:37Z | 17 | 0 | null | [
"region:us"
] | 2023-11-12T07:59:37Z | 2023-11-12T07:39:12.000Z | 2023-11-12T07:39:12 | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 11145
num_examples: 52
download_size: 3564
dataset_size: 11145
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "datasetv2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.4314141273498535,
-0.17156125605106354,
0.0826423391699791,
0.2894901931285858,
-0.2608519196510315,
-0.07033742219209671,
0.4742906987667084,
-0.17861570417881012,
0.5991601347923279,
0.5289841890335083,
-0.8470524549484253,
-0.6263468265533447,
-0.626736581325531,
-0.4517535865306854,... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Rewcifer/clean_trainset_2000_cutoff_llama | Rewcifer | 2023-11-13T00:49:48Z | 17 | 0 | null | [
"region:us"
] | 2023-11-13T00:49:48Z | 2023-11-13T00:49:41.000Z | 2023-11-13T00:49:41 | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 394450486.10182
num_examples: 100767
download_size: 90442844
dataset_size: 394450486.10182
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "clean_trainset_2000_cutoff_llama"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.44301164150238037,
-0.009706287644803524,
0.2661760747432709,
0.2677003741264343,
-0.5127725601196289,
-0.08916899561882019,
0.39988309144973755,
-0.11004243791103363,
0.8860583305358887,
0.6210585832595825,
-0.9663604497909546,
-0.6657520532608032,
-0.5828230977058411,
-0.0917058289051... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
thepavankoushik/tweet-disaster-llm | thepavankoushik | 2023-11-14T07:48:03Z | 17 | 0 | null | [
"region:us"
] | 2023-11-14T07:48:03Z | 2023-11-14T07:48:00.000Z | 2023-11-14T07:48:00 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 1751235.4677525286
num_examples: 6851
- name: test
num_bytes: 194780.53224747142
num_examples: 762
download_size: 725064
dataset_size: 1946016.0
---
# Dataset Card for "tweet-disaster-llm"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.23363086581230164,
-0.2988744378089905,
0.35308727622032166,
0.5935849547386169,
-0.3975209891796112,
0.36097824573516846,
0.27616405487060547,
0.11882554739713669,
0.8898500800132751,
0.5804396271705627,
-0.9340522289276123,
-0.8451279401779175,
-0.8112562894821167,
-0.2467552423477172... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
AbstractPerspective/court_cases0 | AbstractPerspective | 2023-11-15T08:13:39Z | 17 | 0 | null | [
"license:cc-by-nc-sa-4.0",
"region:us"
] | 2023-11-15T08:13:39Z | 2023-11-15T08:12:06.000Z | 2023-11-15T08:12:06 | ---
license: cc-by-nc-sa-4.0
---
| [
-0.12853392958641052,
-0.18616779148578644,
0.6529127955436707,
0.49436280131340027,
-0.19319361448287964,
0.23607419431209564,
0.36072003841400146,
0.050563063472509384,
0.579365611076355,
0.7400140762329102,
-0.6508104205131531,
-0.23783954977989197,
-0.7102249264717102,
-0.0478260256350... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Xapien/vsum_advtest | Xapien | 2023-11-15T10:33:44Z | 17 | 0 | null | [
"region:us"
] | 2023-11-15T10:33:44Z | 2023-11-15T10:33:34.000Z | 2023-11-15T10:33:34 | ---
dataset_info:
features:
- name: entity_description_A
dtype: string
- name: entity_description_B
dtype: string
- name: entity_A
dtype: string
- name: entity_B
dtype: string
- name: is_same_entity
dtype: bool
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 166671
num_examples: 452
- name: test
num_bytes: 41063
num_examples: 114
download_size: 36596
dataset_size: 207734
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
# Dataset Card for "vsum_advtest"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.8087984323501587,
-0.3103299140930176,
0.15140599012374878,
-0.07578475028276443,
-0.12316004186868668,
-0.12709535658359528,
0.32819387316703796,
-0.0254001934081316,
0.7116063237190247,
0.4648763835430145,
-0.7793753147125244,
-0.686665952205658,
-0.2629070580005646,
-0.42456859350204... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
salimon/beta | salimon | 2023-11-22T09:01:07Z | 17 | 0 | null | [
"size_categories:n<1K",
"license:mit",
"region:us"
] | 2023-11-22T09:01:07Z | 2023-11-15T16:33:54.000Z | 2023-11-15T16:33:54 | ---
license: mit
pretty_name: salimon experimental train data
size_categories:
- n<1K
configs:
- config_name: default
data_files:
- split: train
path:
- "train.jsonl"
---
| [
-0.12853392958641052,
-0.18616779148578644,
0.6529127955436707,
0.49436280131340027,
-0.19319361448287964,
0.23607419431209564,
0.36072003841400146,
0.050563063472509384,
0.579365611076355,
0.7400140762329102,
-0.6508104205131531,
-0.23783954977989197,
-0.7102249264717102,
-0.0478260256350... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
m4faisal/RedditSuicide | m4faisal | 2023-11-15T23:07:05Z | 17 | 0 | null | [
"region:us"
] | 2023-11-15T23:07:05Z | 2023-11-15T23:06:36.000Z | 2023-11-15T23:06:36 | Entry not found | [
-0.32276472449302673,
-0.22568407654762268,
0.8622258901596069,
0.4346148371696472,
-0.5282984972000122,
0.7012965679168701,
0.7915717363357544,
0.07618629932403564,
0.7746022939682007,
0.2563222646713257,
-0.785281777381897,
-0.22573848068714142,
-0.9104482531547546,
0.5715669393539429,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
zxvix/amazon_review_automotive_rap | zxvix | 2023-11-16T02:52:12Z | 17 | 0 | null | [
"region:us"
] | 2023-11-16T02:52:12Z | 2023-11-16T02:52:10.000Z | 2023-11-16T02:52:10 | ---
configs:
- config_name: default
data_files:
- split: test
path: data/test-*
dataset_info:
features:
- name: text
dtype: string
- name: original_text
dtype: string
splits:
- name: test
num_bytes: 108204.0
num_examples: 100
download_size: 76498
dataset_size: 108204.0
---
# Dataset Card for "amazon_review_automotive_rap"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.6675288081169128,
-0.08597338199615479,
0.12271386384963989,
0.4371623694896698,
-0.21542851626873016,
0.27647531032562256,
0.36924028396606445,
-0.37192249298095703,
0.5498175024986267,
0.1544591784477234,
-0.9453456997871399,
-0.800284206867218,
-0.45128926634788513,
-0.29822582006454... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
yan-ds/AI-sharing-test | yan-ds | 2023-11-16T07:56:24Z | 17 | 0 | null | [
"license:apache-2.0",
"region:us"
] | 2023-11-16T07:56:24Z | 2023-11-16T06:11:38.000Z | 2023-11-16T06:11:38 | ---
license: apache-2.0
---
| [
-0.12853369116783142,
-0.18616779148578644,
0.6529126167297363,
0.49436280131340027,
-0.193193256855011,
0.2360745668411255,
0.36071979999542236,
0.05056314915418625,
0.5793651342391968,
0.740013837814331,
-0.6508103013038635,
-0.23783960938453674,
-0.7102248668670654,
-0.04782580211758613... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
TokenBender/alpaca_synthia_v2 | TokenBender | 2023-11-17T09:36:21Z | 17 | 0 | null | [
"license:apache-2.0",
"region:us"
] | 2023-11-17T09:36:21Z | 2023-11-16T07:48:13.000Z | 2023-11-16T07:48:13 | ---
license: apache-2.0
---
| [
-0.12853369116783142,
-0.18616779148578644,
0.6529126167297363,
0.49436280131340027,
-0.193193256855011,
0.2360745668411255,
0.36071979999542236,
0.05056314915418625,
0.5793651342391968,
0.740013837814331,
-0.6508103013038635,
-0.23783960938453674,
-0.7102248668670654,
-0.04782580211758613... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
mirfan899/ur_news_sum | mirfan899 | 2023-11-16T15:33:23Z | 17 | 0 | null | [
"license:mit",
"region:us"
] | 2023-11-16T15:33:23Z | 2023-11-16T15:30:08.000Z | 2023-11-16T15:30:08 | ---
license: mit
---
| [
-0.12853369116783142,
-0.18616779148578644,
0.6529126167297363,
0.49436280131340027,
-0.193193256855011,
0.2360745668411255,
0.36071979999542236,
0.05056314915418625,
0.5793651342391968,
0.740013837814331,
-0.6508103013038635,
-0.23783960938453674,
-0.7102248668670654,
-0.04782580211758613... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
AmanMussa/kazakh-instruction-v1 | AmanMussa | 2023-11-16T18:42:42Z | 17 | 0 | null | [
"license:mit",
"region:us"
] | 2023-11-16T18:42:42Z | 2023-11-16T18:37:52.000Z | 2023-11-16T18:37:52 | ---
license: mit
---
| [
-0.12853369116783142,
-0.18616779148578644,
0.6529126167297363,
0.49436280131340027,
-0.193193256855011,
0.2360745668411255,
0.36071979999542236,
0.05056314915418625,
0.5793651342391968,
0.740013837814331,
-0.6508103013038635,
-0.23783960938453674,
-0.7102248668670654,
-0.04782580211758613... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
ttrung1402/UIT-ViIC | ttrung1402 | 2023-11-17T15:14:42Z | 17 | 0 | null | [
"task_categories:image-to-text",
"size_categories:1K<n<10K",
"language:vi",
"license:cc-by-4.0",
"region:us"
] | 2023-11-17T15:14:42Z | 2023-11-17T09:49:48.000Z | 2023-11-17T09:49:48 | ---
license: cc-by-4.0
task_categories:
- image-to-text
language:
- vi
size_categories:
- 1K<n<10K
---
# Dataset Card for Dataset Name
<!-- Provide a quick summary of the dataset. -->
This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1).
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] | [
-0.5334446430206299,
-0.5517486929893494,
0.12895812094211578,
0.23507995903491974,
-0.39409148693084717,
-0.11786621809005737,
-0.036281734704971313,
-0.6386359930038452,
0.5702211856842041,
0.786153256893158,
-0.7829524278640747,
-0.9157671332359314,
-0.5570055246353149,
0.13175596296787... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Kenneth12/MachineLearning_EmojiDataset_Nov17 | Kenneth12 | 2023-11-17T15:22:03Z | 17 | 0 | null | [
"region:us"
] | 2023-11-17T15:22:03Z | 2023-11-17T14:27:52.000Z | 2023-11-17T14:27:52 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622263669967651,
0.43461522459983826,
-0.52829909324646,
0.7012971639633179,
0.7915719747543335,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104475975036621,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
kjappelbaum/chemnlp_iupac_smiles | kjappelbaum | 2023-11-17T21:54:28Z | 17 | 0 | null | [
"region:us"
] | 2023-11-17T21:54:28Z | 2023-11-17T21:48:23.000Z | 2023-11-17T21:48:23 | ---
dataset_info:
features:
- name: SMILES
dtype: string
- name: Allowed
dtype: string
- name: CAS_like_Style
dtype: string
- name: Markup
dtype: string
- name: Preferred
dtype: string
- name: Systematic
dtype: string
- name: Traditional
dtype: string
splits:
- name: train
num_bytes: 22485225644
num_examples: 34351330
download_size: 6770692514
dataset_size: 22485225644
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "chemnlp_iupac_smiles"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.5042148232460022,
0.1160692349076271,
0.43171924352645874,
0.3342819809913635,
-0.03693072497844696,
0.21273179352283478,
0.1350705623626709,
-0.22873789072036743,
0.9878708720207214,
0.29283809661865234,
-1.063064694404602,
-0.8987380266189575,
-0.48488879203796387,
-0.0579224638640880... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Anwaarma/MySentimentAnwar | Anwaarma | 2023-11-19T15:33:03Z | 17 | 0 | null | [
"region:us"
] | 2023-11-19T15:33:03Z | 2023-11-19T15:32:56.000Z | 2023-11-19T15:32:56 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': Negative
'1': Positive
splits:
- name: train
num_bytes: 376893.0610734115
num_examples: 2882
- name: test
num_bytes: 96145
num_examples: 580
download_size: 269068
dataset_size: 473038.0610734115
---
# Dataset Card for "MySentimentAnwar"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.6937493085861206,
-0.27298951148986816,
0.24225082993507385,
0.5200730562210083,
-0.18277224898338318,
0.06308294832706451,
0.33954957127571106,
-0.14067663252353668,
0.8384355902671814,
0.44642508029937744,
-1.009717583656311,
-0.6999038457870483,
-0.5488318800926208,
-0.12776216864585... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
saiprasath21/safety_awareness | saiprasath21 | 2023-11-24T17:18:45Z | 17 | 0 | null | [
"region:us"
] | 2023-11-24T17:18:45Z | 2023-11-19T23:04:56.000Z | 2023-11-19T23:04:56 | Entry not found | [
-0.3227647542953491,
-0.22568407654762268,
0.8622258901596069,
0.4346148371696472,
-0.5282984972000122,
0.7012965083122253,
0.7915717959403992,
0.07618629932403564,
0.7746022343635559,
0.2563222348690033,
-0.785281777381897,
-0.22573848068714142,
-0.9104482531547546,
0.5715669393539429,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
mengmengmmm/tlc_slice1 | mengmengmmm | 2023-11-20T15:44:49Z | 17 | 0 | null | [
"region:us"
] | 2023-11-20T15:44:49Z | 2023-11-20T15:44:13.000Z | 2023-11-20T15:44:13 | Entry not found | [
-0.3227647542953491,
-0.22568407654762268,
0.8622258901596069,
0.4346148371696472,
-0.5282984972000122,
0.7012965083122253,
0.7915717959403992,
0.07618629932403564,
0.7746022343635559,
0.2563222348690033,
-0.785281777381897,
-0.22573848068714142,
-0.9104482531547546,
0.5715669393539429,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
lhallee/EC_reg | lhallee | 2023-11-21T03:35:40Z | 17 | 0 | null | [
"region:us"
] | 2023-11-21T03:35:40Z | 2023-11-21T03:35:35.000Z | 2023-11-21T03:35:35 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: valid
path: data/valid-*
- split: test
path: data/test-*
dataset_info:
features:
- name: seqs
dtype: string
- name: labels
dtype: string
splits:
- name: train
num_bytes: 26623731
num_examples: 13090
- name: valid
num_bytes: 2988422
num_examples: 1465
- name: test
num_bytes: 3241706
num_examples: 1604
download_size: 5227567
dataset_size: 32853859
---
# Dataset Card for "EC_reg"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.6586450934410095,
-0.24709735810756683,
0.20541086792945862,
0.08660758286714554,
-0.24083900451660156,
-0.08414867520332336,
0.11082661896944046,
-0.3037206530570984,
0.7731381058692932,
0.523674488067627,
-0.8675521612167358,
-0.914728045463562,
-0.3431452810764313,
-0.004774711560457... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
hanifabdlh/Setfit-Multi-Duplicate-Sample-Dataset | hanifabdlh | 2023-11-21T09:19:07Z | 17 | 0 | null | [
"region:us"
] | 2023-11-21T09:19:07Z | 2023-11-21T09:19:06.000Z | 2023-11-21T09:19:06 | ---
dataset_info:
features:
- name: sample_text
dtype: string
- name: label
dtype:
class_label:
names:
'0': affirm
'1': bot_challenge
'2': deny
'3': goodbye
'4': greet
'5': grxxnsmxrt_affirm
'6': grxxnsmxrt_bot_challenge
'7': grxxnsmxrt_deny
'8': grxxnsmxrt_goodbye
'9': grxxnsmxrt_greet
'10': grxxnsmxrt_mood_great
'11': grxxnsmxrt_mood_unhappy
'12': mood_great
'13': mood_unhappy
'14': xlfxmxrt_affirm
'15': xlfxmxrt_bot_challenge
'16': xlfxmxrt_deny
'17': xlfxmxrt_goodbye
'18': xlfxmxrt_greet
'19': xlfxmxrt_mood_great
'20': xlfxmxrt_mood_unhappy
splits:
- name: train
num_bytes: 6654
num_examples: 204
download_size: 4188
dataset_size: 6654
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
| [
-0.12853392958641052,
-0.18616779148578644,
0.6529127955436707,
0.49436280131340027,
-0.19319361448287964,
0.23607419431209564,
0.36072003841400146,
0.050563063472509384,
0.579365611076355,
0.7400140762329102,
-0.6508104205131531,
-0.23783954977989197,
-0.7102249264717102,
-0.0478260256350... | null | null | null | null | null | null | null | null | null | null | null | null | null |