id stringlengths 2 115 | lastModified stringlengths 24 24 | tags list | author stringlengths 2 42 ⌀ | description stringlengths 0 6.67k ⌀ | citation stringlengths 0 10.7k ⌀ | likes int64 0 3.66k | downloads int64 0 8.89M | created timestamp[us] | card stringlengths 11 977k | card_len int64 11 977k | embeddings list |
|---|---|---|---|---|---|---|---|---|---|---|---|
KaiLv/UDR_Reddit | 2023-06-21T12:46:28.000Z | [
"region:us"
] | KaiLv | null | null | 0 | 13 | 2023-06-21T12:45:56 | ---
dataset_info:
features:
- name: idx
dtype: int64
- name: question
dtype: string
- name: target
dtype: string
- name: len_question
dtype: int64
- name: len_target
dtype: int64
splits:
- name: train
num_bytes: 68102488
num_examples: 37643
- name: validation
num_bytes: 1088422
num_examples: 576
- name: test
num_bytes: 1097563
num_examples: 562
- name: debug
num_bytes: 9039576
num_examples: 5000
download_size: 48794822
dataset_size: 79328049
---
# Dataset Card for "UDR_Reddit"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 692 | [
[
-0.046783447265625,
-0.0251922607421875,
0.01114654541015625,
0.017822265625,
-0.0177154541015625,
0.005313873291015625,
0.0147552490234375,
-0.006908416748046875,
0.058349609375,
0.0267791748046875,
-0.058319091796875,
-0.054718017578125,
-0.032318115234375,
... |
KaiLv/UDR_SMCalFlow | 2023-06-21T12:48:41.000Z | [
"region:us"
] | KaiLv | null | null | 0 | 13 | 2023-06-21T12:48:29 | ---
dataset_info:
features:
- name: idx
dtype: string
- name: user_utterance
dtype: string
- name: lispress
dtype: string
- name: fully_typed_lispress
dtype: string
splits:
- name: train
num_bytes: 61962192
num_examples: 58367
- name: validation
num_bytes: 10396934
num_examples: 14751
download_size: 14059335
dataset_size: 72359126
---
# Dataset Card for "UDR_SMCalFlow"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 556 | [
[
-0.025909423828125,
-0.00975799560546875,
0.01422119140625,
0.00800323486328125,
-0.01715087890625,
0.0088653564453125,
0.027801513671875,
0.01306915283203125,
0.05242919921875,
0.052154541015625,
-0.0709228515625,
-0.032318115234375,
-0.043670654296875,
-0.... |
KaiLv/UDR_SNLI | 2023-06-21T12:49:04.000Z | [
"region:us"
] | KaiLv | null | null | 0 | 13 | 2023-06-21T12:48:42 | ---
dataset_info:
features:
- name: idx
dtype: int64
- name: label
dtype:
class_label:
names:
'0': entailment
'1': neutral
'2': contradiction
- name: sentence
dtype: string
- name: len_sentence
dtype: int64
splits:
- name: test
num_bytes: 747502
num_examples: 3262
- name: train
num_bytes: 28963424
num_examples: 131062
- name: validation
num_bytes: 750070
num_examples: 3272
- name: debug
num_bytes: 22092624
num_examples: 100000
download_size: 17825058
dataset_size: 52553620
---
# Dataset Card for "UDR_SNLI"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 760 | [
[
-0.0220184326171875,
-0.0081787109375,
0.01068878173828125,
0.0128021240234375,
-0.010345458984375,
0.004730224609375,
0.0206146240234375,
-0.00501251220703125,
0.06671142578125,
0.0277862548828125,
-0.0545654296875,
-0.0426025390625,
-0.0285797119140625,
-0... |
KaiLv/UDR_SST-5 | 2023-06-21T12:49:23.000Z | [
"region:us"
] | KaiLv | null | null | 0 | 13 | 2023-06-21T12:49:14 | ---
dataset_info:
features:
- name: idx
dtype: int64
- name: label
dtype: int64
- name: sentence
dtype: string
splits:
- name: train
num_bytes: 1044651
num_examples: 8534
- name: test
num_bytes: 270516
num_examples: 2210
- name: debug
num_bytes: 612421
num_examples: 5000
download_size: 1268755
dataset_size: 1927588
---
# Dataset Card for "UDR_SST-5"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 540 | [
[
-0.0300140380859375,
-0.0027866363525390625,
0.01715087890625,
0.0086212158203125,
-0.033538818359375,
0.0191497802734375,
0.0341796875,
-0.0002601146697998047,
0.04144287109375,
0.0254669189453125,
-0.056396484375,
-0.056549072265625,
-0.0290985107421875,
0... |
nazimali/quran-question-answer-context | 2023-07-08T21:35:05.000Z | [
"task_categories:question-answering",
"language:ar",
"language:en",
"license:cc-by-4.0",
"islam",
"quran",
"arabic",
"region:us"
] | nazimali | null | null | 2 | 13 | 2023-07-03T12:07:47 | ---
dataset_info:
features:
- name: q_id
dtype: int64
- name: question
dtype: string
- name: answer
dtype: string
- name: q_word
dtype: string
- name: q_topic
dtype: string
- name: fine_class
dtype: string
- name: class
dtype: string
- name: ontology_concept
dtype: string
- name: ontology_concept2
dtype: string
- name: source
dtype: string
- name: q_src_id
dtype: int64
- name: quetion_type
dtype: string
- name: chapter_name
dtype: string
- name: chapter_no
dtype: int64
- name: verse
sequence: string
- name: question_en
dtype: string
- name: answer_en
dtype: string
- name: q_word_en
dtype: string
- name: q_topic_en
dtype: string
- name: fine_class_en
dtype: string
- name: class_en
dtype: string
- name: ontology_concept_en
dtype: string
- name: chapter_name_en
dtype: string
- name: context
dtype: string
splits:
- name: train
num_bytes: 2226830.0310711367
num_examples: 978
- name: test
num_bytes: 557845.9689288634
num_examples: 245
download_size: 1515128
dataset_size: 2784676.0
license: cc-by-4.0
task_categories:
- question-answering
pretty_name: Quran Question Answer with Context
language:
- ar
- en
tags:
- islam
- quran
- arabic
---
# Dataset Card for "quran-question-answer-context"
## Dataset Summary
Translated the original dataset from Arabic to English and added the Surah ayahs to the `context` column.
## Usage
```python
from datasets import load_dataset
dataset = load_dataset("nazimali/quran-question-answer-context")
```
```python
DatasetDict({
train: Dataset({
features: ['q_id', 'question', 'answer', 'q_word', 'q_topic', 'fine_class', 'class', 'ontology_concept', 'ontology_concept2', 'source', 'q_src_id', 'quetion_type', 'chapter_name', 'chapter_no', 'verse', 'question_en', 'answer_en', 'q_word_en', 'q_topic_en', 'fine_class_en', 'class_en', 'ontology_concept_en', 'chapter_name_en', 'context'],
num_rows: 978
})
test: Dataset({
features: ['q_id', 'question', 'answer', 'q_word', 'q_topic', 'fine_class', 'class', 'ontology_concept', 'ontology_concept2', 'source', 'q_src_id', 'quetion_type', 'chapter_name', 'chapter_no', 'verse', 'question_en', 'answer_en', 'q_word_en', 'q_topic_en', 'fine_class_en', 'class_en', 'ontology_concept_en', 'chapter_name_en', 'context'],
num_rows: 245
})
})
```
## Translation Info
1. Translated the Arabic questions/concept columns to English with [Helsinki-NLP/opus-mt-ar-en](https://huggingface.co/Helsinki-NLP/opus-mt-ar-en)
2. Used `en-yusufali` translations for ayahs [M-AI-C/quran-en-tafssirs](https://huggingface.co/datasets/M-AI-C/quran-en-tafssirs)
3. Renamed Surahs with [kheder/quran](https://huggingface.co/datasets/kheder/quran)
4. Added the ayahs that helped answer the questions
- Split the `ayah` column's string into a list of integers
- Concatenated the Surah:Ayah pairs into a sentence in the `context` column (see the sketch below)
Columns with the suffix `_en` contain the translations of the original columns.
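The `context` assembly described in step 4 can be sketched roughly as follows. This is a minimal illustration, not the exact build script: `verse_refs` and `translations` are hypothetical names standing in for the parsed `verse` column and the `en-yusufali` ayah lookup, and the exact sentence format in the released data may differ.
```python
def build_context(verse_refs, translations):
    """Join referenced ayah texts into a single context string."""
    parts = []
    for ref in verse_refs:  # e.g. "2:255" (Surah:Ayah) -- format assumed for illustration
        chapter, ayah = (int(p) for p in ref.split(":"))
        text = translations.get((chapter, ayah))
        if text is None:  # missing lookups would explain the null contexts noted in the TODO
            continue
        parts.append(f"Surah {chapter}, Ayah {ayah}: {text}")
    return " ".join(parts) if parts else None
```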
## TODO
The `context` column has some `null` values that need to be investigated and fixed
## Initial Data Collection
The original dataset is from **[Annotated Corpus of Arabic Al-Quran Question and Answer](https://archive.researchdata.leeds.ac.uk/464/)**
## Licensing Information
Original dataset [license](https://archive.researchdata.leeds.ac.uk/464/): **Creative Commons Attribution 4.0 International (CC BY 4.0)**
### Contributions
Original paper authors: Alqahtani, Mohammad and Atwell, Eric (2018) Annotated Corpus of Arabic Al-Quran Question and Answer. University of Leeds. https://doi.org/10.5518/356 | 3,732 | [
[
-0.040924072265625,
-0.049560546875,
-0.0012102127075195312,
-0.0040283203125,
-0.02349853515625,
-0.0266571044921875,
-0.006015777587890625,
-0.009552001953125,
0.020965576171875,
0.045745849609375,
-0.044586181640625,
-0.058929443359375,
-0.032470703125,
0... |
diffusers-parti-prompts/kandinsky-2-2 | 2023-07-18T05:32:32.000Z | [
"region:us"
] | diffusers-parti-prompts | null | null | 0 | 13 | 2023-07-17T06:40:58 | ---
dataset_info:
features:
- name: Prompt
dtype: string
- name: Category
dtype: string
- name: Challenge
dtype: string
- name: Note
dtype: string
- name: images
dtype: image
- name: model_name
dtype: string
- name: seed
dtype: int64
splits:
- name: train
num_bytes: 163668480.032
num_examples: 1632
download_size: 163766653
dataset_size: 163668480.032
---
# Dataset Card for "kandinsky-2-2"
The dataset was generated using the code below:
```python
import PIL
import torch
from datasets import Dataset, Features
from datasets import Image as ImageFeature
from datasets import Value, load_dataset
from diffusers import DiffusionPipeline
def main():
print("Loading dataset...")
parti_prompts = load_dataset("nateraw/parti-prompts", split="train")
print("Loading pipeline...")
pipe_prior = DiffusionPipeline.from_pretrained(
"kandinsky-community/kandinsky-2-2-prior", torch_dtype=torch.float16
)
pipe_prior.to("cuda")
pipe_prior.set_progress_bar_config(disable=True)
t2i_pipe = DiffusionPipeline.from_pretrained(
"kandinsky-community/kandinsky-2-2-decoder", torch_dtype=torch.float16
)
t2i_pipe.to("cuda")
t2i_pipe.set_progress_bar_config(disable=True)
seed = 0
generator = torch.Generator("cuda").manual_seed(seed)
ckpt_id = (
"kandinsky-community/" + "kandinsky-2-2-prior" + "_" + "kandinsky-2-2-decoder"
)
print("Running inference...")
main_dict = {}
for i in range(len(parti_prompts)):
sample = parti_prompts[i]
prompt = sample["Prompt"]
image_embeds, negative_image_embeds = pipe_prior(
prompt,
generator=generator,
num_inference_steps=100,
guidance_scale=7.5,
).to_tuple()
image = t2i_pipe(
image_embeds=image_embeds,
negative_image_embeds=negative_image_embeds,
generator=generator,
num_inference_steps=100,
guidance_scale=7.5,
).images[0]
image = image.resize((256, 256), resample=PIL.Image.Resampling.LANCZOS)
img_path = f"kandinsky_22_{i}.png"
image.save(img_path)
main_dict.update(
{
prompt: {
"img_path": img_path,
"Category": sample["Category"],
"Challenge": sample["Challenge"],
"Note": sample["Note"],
"model_name": ckpt_id,
"seed": seed,
}
}
)
def generation_fn():
for prompt in main_dict:
prompt_entry = main_dict[prompt]
yield {
"Prompt": prompt,
"Category": prompt_entry["Category"],
"Challenge": prompt_entry["Challenge"],
"Note": prompt_entry["Note"],
"images": {"path": prompt_entry["img_path"]},
"model_name": prompt_entry["model_name"],
"seed": prompt_entry["seed"],
}
print("Preparing HF dataset...")
ds = Dataset.from_generator(
generation_fn,
features=Features(
Prompt=Value("string"),
Category=Value("string"),
Challenge=Value("string"),
Note=Value("string"),
images=ImageFeature(),
model_name=Value("string"),
seed=Value("int64"),
),
)
ds_id = "diffusers-parti-prompts/kandinsky-2-2"
ds.push_to_hub(ds_id)
if __name__ == "__main__":
main()
``` | 3,606 | [
[
-0.0263214111328125,
-0.03338623046875,
0.0277252197265625,
0.006137847900390625,
-0.026458740234375,
-0.01003265380859375,
-0.0009446144104003906,
0.005542755126953125,
-0.0135040283203125,
0.0253143310546875,
-0.0545654296875,
-0.051666259765625,
-0.0426330566... |
Shrey-1329/cxiu_hf_dataset | 2023-07-25T15:05:56.000Z | [
"task_categories:image-to-text",
"size_categories:1K<n<10K",
"language:en",
"medical",
"region:us"
] | Shrey-1329 | null | null | 1 | 13 | 2023-07-25T14:02:46 | ---
dataset_info:
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 1108945726.54
num_examples: 6060
download_size: 1108991167
dataset_size: 1108945726.54
task_categories:
- image-to-text
language:
- en
tags:
- medical
size_categories:
- 1K<n<10K
---
# Indiana University Chest Xray Dataset Card
## Data sources:
This is a converted and processed version of the open-access pneumonia chest X-ray dataset provided by Indiana University.<br>
You can see its information page [here](https://openi.nlm.nih.gov/faq).<br>
The compressed images in PNG format were downloaded from [here](https://openi.nlm.nih.gov/imgs/collections/NLMCXR_png.tgz) and the corresponding reports from [here](https://openi.nlm.nih.gov/imgs/collections/NLMCXR_reports.tgz).
## Data fields:
There are two fields: image and text.
The images are the x-rays and the texts are their associated findings.
## Preprocessing done:
1. **Make all text lowercase**: Convert all text to lowercase to ensure consistent and case-insensitive processing.
2. **Remove all punctuation**: Eliminate any punctuation marks (e.g., periods, commas, exclamation marks) from the text to avoid interference in language analysis.
3. **Remove all numbers**: Eliminate all numeric characters from the text since they might not be relevant for certain natural language processing tasks.
4. **Remove all words with 2 or more Xs in a row**: Remove any words that contain two or more consecutive occurrences of the letter "X" as they may not contribute meaningful information.
5. **Remove the bottom and top 2% of text by length**: Discard the shortest and longest text samples, removing the bottom 2% and top 2% of the text's length, respectively. This step is aimed at reducing the impact of outliers and ensuring a more balanced dataset. | 1,870 | [
[
-0.01861572265625,
-0.0213775634765625,
0.0391845703125,
0.00780487060546875,
-0.042572021484375,
-0.014068603515625,
-0.004573822021484375,
-0.01241302490234375,
0.019012451171875,
0.06414794921875,
-0.033782958984375,
-0.048583984375,
-0.06610107421875,
0.... |
DynamicSuperb/DialogueActClassification_DailyTalk | 2023-08-02T08:51:06.000Z | [
"region:us"
] | DynamicSuperb | null | null | 0 | 13 | 2023-07-29T13:23:06 | ---
dataset_info:
features:
- name: file
dtype: string
- name: audio
dtype: audio
- name: instruction
dtype: string
- name: label
dtype: string
splits:
- name: test
num_bytes: 1379347051.504
num_examples: 4758
download_size: 1292848993
dataset_size: 1379347051.504
---
# Dataset Card for "DailyTalk_DialogueActClassification"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 500 | [
[
-0.0228271484375,
-0.0244598388671875,
0.010589599609375,
0.020111083984375,
0.003620147705078125,
0.009490966796875,
0.01531219482421875,
-0.0251312255859375,
0.048583984375,
0.034423828125,
-0.067626953125,
-0.06829833984375,
-0.0318603515625,
-0.035461425... |
diffusers-parti-prompts/sdxl-1.0-refiner | 2023-07-30T16:22:20.000Z | [
"region:us"
] | diffusers-parti-prompts | null | null | 0 | 13 | 2023-07-30T13:33:05 | ---
dataset_info:
features:
- name: Prompt
dtype: string
- name: Category
dtype: string
- name: Challenge
dtype: string
- name: Note
dtype: string
- name: images
dtype: image
- name: model_name
dtype: string
- name: seed
dtype: int64
splits:
- name: train
num_bytes: 189993385.856
num_examples: 1632
download_size: 189456016
dataset_size: 189993385.856
---
# Dataset Card for "sdxl-1.0-refiner"
The dataset was generated using the code below:
```python
import torch
from datasets import Dataset, Features
from datasets import Image as ImageFeature
from datasets import Value, load_dataset
from diffusers import DDIMScheduler, DiffusionPipeline
import PIL
def main():
print("Loading dataset...")
parti_prompts = load_dataset("nateraw/parti-prompts", split="train")
print("Loading pipeline...")
ckpt_id = "stabilityai/stable-diffusion-xl-base-1.0"
refiner_ckpt_id = "stabilityai/stable-diffusion-xl-refiner-1.0"
pipe = DiffusionPipeline.from_pretrained(
ckpt_id, torch_dtype=torch.float16, use_auth_token=True
).to("cuda")
pipe.scheduler = DDIMScheduler.from_config(pipe.scheduler.config)
pipe.set_progress_bar_config(disable=True)
refiner = DiffusionPipeline.from_pretrained(
refiner_ckpt_id,
torch_dtype=torch.float16,
use_auth_token=True
).to("cuda")
refiner.scheduler = DDIMScheduler.from_config(refiner.scheduler.config)
refiner.set_progress_bar_config(disable=True)
seed = 0
generator = torch.Generator("cuda").manual_seed(seed)
print("Running inference...")
main_dict = {}
for i in range(len(parti_prompts)):
sample = parti_prompts[i]
prompt = sample["Prompt"]
latent = pipe(
prompt,
generator=generator,
num_inference_steps=100,
guidance_scale=7.5,
output_type="latent",
).images[0]
image_refined = refiner(
prompt=prompt,
image=latent[None, :],
generator=generator,
num_inference_steps=100,
guidance_scale=7.5,
).images[0]
image = image_refined.resize((256, 256), resample=PIL.Image.Resampling.LANCZOS)
img_path = f"sd_xl_{i}.png"
image.save(img_path)
main_dict.update(
{
prompt: {
"img_path": img_path,
"Category": sample["Category"],
"Challenge": sample["Challenge"],
"Note": sample["Note"],
"model_name": ckpt_id,
"seed": seed,
}
}
)
def generation_fn():
for prompt in main_dict:
prompt_entry = main_dict[prompt]
yield {
"Prompt": prompt,
"Category": prompt_entry["Category"],
"Challenge": prompt_entry["Challenge"],
"Note": prompt_entry["Note"],
"images": {"path": prompt_entry["img_path"]},
"model_name": prompt_entry["model_name"],
"seed": prompt_entry["seed"],
}
print("Preparing HF dataset...")
ds = Dataset.from_generator(
generation_fn,
features=Features(
Prompt=Value("string"),
Category=Value("string"),
Challenge=Value("string"),
Note=Value("string"),
images=ImageFeature(),
model_name=Value("string"),
seed=Value("int64"),
),
)
ds_id = "diffusers-parti-prompts/sdxl-1.0-refiner"
ds.push_to_hub(ds_id)
if __name__ == "__main__":
main()
``` | 3,709 | [
[
-0.037139892578125,
-0.0252532958984375,
0.037567138671875,
0.0087432861328125,
-0.0193328857421875,
-0.01316070556640625,
0.0042724609375,
0.006855010986328125,
-0.01081085205078125,
0.044158935546875,
-0.06353759765625,
-0.045074462890625,
-0.038421630859375,
... |
ppdev/medtext-llama2 | 2023-08-04T03:07:53.000Z | [
"license:cc-by-4.0",
"region:us"
] | ppdev | null | null | 3 | 13 | 2023-08-03T03:01:25 | ---
license: cc-by-4.0
---
Original data from:
https://huggingface.co/datasets/BI55/MedText
I just reformatted it for fine-tuning Llama 2, based on this article: https://mlabonne.github.io/blog/posts/Fine_Tune_Your_Own_Llama_2_Model_in_a_Colab_Notebook.html
Another important point related to data quality is the prompt template. Prompts are composed of similar elements: a system prompt (optional) to guide the model, a user prompt (required) to give the instruction, additional inputs (optional) to take into consideration, and the model's answer (required). In the case of Llama 2, the authors used the following template for the chat models:
[INST]
User prompt [/INST] Model answer | 688 | [
[
-0.0163726806640625,
-0.061370849609375,
0.0416259765625,
0.006389617919921875,
-0.038116455078125,
-0.0146484375,
0.0223236083984375,
-0.0203094482421875,
0.01337432861328125,
0.054290771484375,
-0.0501708984375,
-0.0262298583984375,
-0.016082763671875,
0.0... |
kentsui/open-react-retrieval-multi-neg-result-new-kw | 2023-08-07T17:49:01.000Z | [
"region:us"
] | kentsui | null | null | 0 | 13 | 2023-08-07T17:35:17 | ---
dataset_info:
features:
- name: system_prompt
dtype: string
- name: question
dtype: string
- name: response
dtype: string
- name: meta
struct:
- name: first_search_rank
dtype: int64
- name: second_search
dtype: bool
- name: second_search_success
dtype: bool
- name: source
dtype: string
splits:
- name: train
num_bytes: 83579841
num_examples: 25158
download_size: 21996450
dataset_size: 83579841
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "open-react-retrieval-multi-neg-result-new-kw"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 770 | [
[
-0.0418701171875,
-0.024688720703125,
0.0154266357421875,
-0.0011444091796875,
-0.01525115966796875,
0.00579833984375,
0.002498626708984375,
-0.00676727294921875,
0.057861328125,
0.01812744140625,
-0.047821044921875,
-0.0650634765625,
-0.034027099609375,
-0.... |
MedHALT/Med-HALT | 2023-08-10T15:27:31.000Z | [
"license:apache-2.0",
"arxiv:2307.15343",
"region:us"
] | MedHALT | null | null | 5 | 13 | 2023-08-10T07:30:11 | ---
license: apache-2.0
configs:
- config_name: IR_abstract2pubmedlink
data_files: "IR_abstract2pubmedlink/IR_abstract2pubmedlink.csv"
- config_name: IR_pubmedlink2title
data_files: "IR_pubmedlink2title/IR_pubmedlink2title.csv"
- config_name: IR_pmid2title
data_files: "IR_pmid2title/IR_pmid2title.csv"
- config_name: IR_title2pubmedlink
data_files: "IR_title2pubmedlink/IR_title2pubmedlink.csv"
- config_name: reasoning_fake
data_files: "reasoning_fake/reasoning_fake.csv"
- config_name: reasoning_nota
data_files: "reasoning_nota/reasoning_nota.csv"
- config_name: reasoning_FCT
data_files: "reasoning_FCT/reasoning_FCT.csv"
---
# Med-HALT: Medical Domain Hallucination Test for Large Language Models
This is the dataset used in the [Med-HALT](https://arxiv.org/abs/2307.15343) research paper, which focuses on the challenges posed by hallucinations in large language models (LLMs), particularly in the context of the medical domain. We propose a new benchmark and dataset, Med-HALT (Medical Domain Hallucination Test), designed specifically to evaluate hallucinations.
Med-HALT provides a diverse multinational dataset derived from medical examinations across various countries and includes multiple innovative testing modalities. Med-HALT includes two categories of tests: reasoning and memory-based hallucination tests, designed to assess LLMs' problem-solving and information-retrieval abilities. Our study evaluated leading LLMs, including Text Davinci, GPT-3.5, LLaMA and Falcon, revealing significant differences in their performance. The paper provides detailed insights into the dataset, promoting transparency and reproducibility. Through this work, we aim to contribute to the development of safer and more reliable language models in healthcare. Our benchmark can be found at https://github.com/medhalt/medhalt
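Each configuration listed in the YAML header above can be loaded by its config name. A minimal loading sketch (the split name is assumed to be the default `train` for these single-CSV configs):
```python
from datasets import load_dataset

# Config names are the ones listed under `configs` above,
# e.g. "reasoning_FCT", "reasoning_nota", "reasoning_fake", "IR_pmid2title", ...
fct = load_dataset("MedHALT/Med-HALT", "reasoning_FCT", split="train")
print(fct[0])
```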
## Benchmark
The Med-HALT framework proposes a two-tiered approach to evaluate the presence and impact of hallucinations in generated outputs.
#### Reasoning Hallucination Tests (RHTs)
<details>
<summary>False Confidence Test (FCT)</summary>
The False Confidence Test (FCT) involves presenting a multiple-choice medical question and a randomly suggested correct answer to the language model, tasking it with evaluating the validity of the proposed answer and providing detailed explanations for its correctness or incorrectness, in addition to explaining why the other options are wrong.
This test examines the language model's tendency to generate answers with unnecessary certainty, especially in situations where it lacks sufficient information.
</details>
<details>
<summary>None of the Above Test (Nota)</summary>
In the None of the Above (Nota) Test, the model is presented with a multiple-choice medical question where the correct answer is replaced by 'None of the above', requiring the model to identify this and justify its selection.
It tests the model's ability to distinguish irrelevant or incorrect information.
</details>
<details>
<summary>Fake Questions Test (FQT)</summary>
This test involves presenting the model with fake or nonsensical medical questions to examine whether it can correctly identify and handle such queries.
We employed a hybrid approach for generating fake questions, where a subset was crafted by human experts, while the rest were generated using GPT-3.5.
</details>
#### Memory Hallucination Tests (MHTs)
<details>
<summary>Abstract-to-Link Test</summary>
Given the abstract of a PubMed article, the LLM is asked to generate the corresponding link to the article. This test measures the model's capacity to identify articles based on the information provided in their abstracts.
</details>
<details>
<summary>PMID-to-Title Test</summary>
In this test, the LLM is given the PubMed ID (PMID) of an article and is asked to generate the title of the article. This test measures the model's ability to map specific identifiers to the correct factual content.
</details>
<details>
<summary>Title-to-Link Test</summary>
Given the title of a PubMed article, the LLM is prompted to provide the PubMed link of the article. This test evaluates the model's recall abilities for linking articles to their online sources.
</details>
<details>
<summary>Link-to-Title Test</summary>
Similar to the previous one, in this test, we give the PubMed link of an article as input and ask the language model to provide the title as output. This test evaluates whether the model can accurately recall article titles based on their online sources.
</details>
## Citation
```
@article{Medhalt,
title={Med-HALT: Medical Domain Hallucination Test for Large Language Models},
author={Umapathi, Logesh Kumar and Pal, Ankit and Sankarasubbu, Malaikannan},
journal={arXiv preprint},
year={2023}
}
``` | 4,797 | [
[
-0.033233642578125,
-0.08154296875,
0.0576171875,
0.007781982421875,
-0.0032405853271484375,
-0.01207733154296875,
-0.0046234130859375,
-0.04541015625,
0.0262603759765625,
0.033447265625,
-0.041900634765625,
-0.0311279296875,
-0.0284271240234375,
0.033935546... |
tjaffri/NSText2SQL-generate | 2023-08-15T00:33:59.000Z | [
"license:apache-2.0",
"region:us"
] | tjaffri | null | null | 0 | 13 | 2023-08-13T07:09:33 | ---
license: apache-2.0
dataset_info:
features:
- name: question
dtype: string
- name: table_info
dtype: string
- name: sql_query
dtype: string
splits:
- name: train
num_bytes: 847766
num_examples: 3473
download_size: 391731
dataset_size: 847766
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# NSText2SQL Dataset (Reformatted for Fine Tuned Generative Models)
This is the exact same dataset as NSText2SQL: https://huggingface.co/datasets/NumbersStation/NSText2SQL, but with the data reformatted to allow direct use for fine-tuning generative models. The original license and credits for the original dataset remain in place.
Specifically, the changes from standard NSText2SQL are:
1. Removed non-English questions
2. Removed all rows with more than one input table, simplifying the problem for smaller models.
3. Updated SQL queries in the dataset to prefer using LIKE statements for string matches, to allow better partial matching of results in chat scenarios where a user may not fully specify all data.
4. Removed syntactically invalid SQL. Specifically, we created in-memory (SQLite) tables using the SQL DESCRIBE of the tables, then ran the SQL query against these in-memory tables. Any SQL queries that threw exceptions for any reason were discarded, and the rest that ran without exceptions were included in this dataset (a sketch of this validation step is shown below).
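A rough sketch of the validation step in point 4, assuming `table_info` holds a `CREATE TABLE`-style schema statement (the actual filtering script is not reproduced here):
```python
import sqlite3

def sql_runs(table_info: str, sql_query: str) -> bool:
    """Return True if the query executes against an in-memory table built from the schema."""
    conn = sqlite3.connect(":memory:")
    try:
        conn.executescript(table_info)  # create the (empty) table from its schema
        conn.execute(sql_query)         # any exception means the query is invalid for this schema
        return True
    except sqlite3.Error:
        return False
    finally:
        conn.close()
```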
| 1,414 | [
[
-0.01116180419921875,
-0.0640869140625,
0.031890869140625,
0.042510986328125,
-0.0272674560546875,
-0.0157012939453125,
-0.019500732421875,
-0.0180816650390625,
0.025787353515625,
0.0906982421875,
-0.052001953125,
-0.0266876220703125,
-0.01873779296875,
0.01... |
dim/logic_tasks_ru | 2023-08-14T18:00:38.000Z | [
"license:mit",
"region:us"
] | dim | null | null | 0 | 13 | 2023-08-14T17:59:33 | ---
license: mit
dataset_info:
features:
- name: title
dtype: string
- name: task
dtype: string
- name: answer
dtype: string
- name: ok/trash
dtype: string
splits:
- name: train
num_bytes: 87178
num_examples: 99
download_size: 54016
dataset_size: 87178
---
Problems taken from this site: https://www.potehechas.ru/zadachi/zadachi.shtml
[
-0.0252838134765625,
-0.05474853515625,
0.019378662109375,
0.0198211669921875,
-0.061279296875,
-0.004146575927734375,
-0.0024280548095703125,
-0.0142974853515625,
0.06158447265625,
-0.0008769035339355469,
-0.0701904296875,
-0.055267333984375,
-0.015007019042968... |
botp/yentinglin-traditional_mandarin_instructions | 2023-08-16T06:59:16.000Z | [
"task_categories:conversational",
"task_categories:text-generation",
"task_categories:text2text-generation",
"size_categories:100K<n<1M",
"language:zh",
"license:cc-by-nc-4.0",
"arxiv:2305.13711",
"arxiv:2104.09864",
"region:us"
] | botp | null | null | 0 | 13 | 2023-08-16T06:59:16 | ---
license: cc-by-nc-4.0
task_categories:
- conversational
- text-generation
- text2text-generation
language:
- zh
pretty_name: Traditional Chinese Instruction-tuning Set
size_categories:
- 100K<n<1M
duplicated_from: yentinglin/traditional_mandarin_instructions
---
# Language Models for Taiwanese Culture
<p align="center">
✍️ <a href="https://huggingface.co/spaces/yentinglin/Taiwan-LLaMa2" target="_blank">Online Demo</a>
•
🤗 <a href="https://huggingface.co/yentinglin" target="_blank">HF Repo</a> • 🐦 <a href="https://twitter.com/yentinglin56" target="_blank">Twitter</a> • 📃 <a href="https://arxiv.org/pdf/2305.13711.pdf" target="_blank">[Paper Coming Soon]</a>
• 👨️ <a href="https://yentingl.com/" target="_blank">Yen-Ting Lin</a>
<br/><br/>
<img src="https://www.csie.ntu.edu.tw/~miulab/taiwan-llama/logo-v2.png" width="100"> <br/>
<a href="https://github.com/tatsu-lab/stanford_alpaca/blob/main/LICENSE">
<img src="https://img.shields.io/badge/Code%20License-Apache_2.0-green.svg"></a>
<a href="https://github.com/tatsu-lab/stanford_alpaca/blob/main/DATA_LICENSE">
<img src="https://img.shields.io/badge/Data%20License-CC%20By%20NC%204.0-red.svg"></a>
<br/>
</p>
## Overview
Taiwan-LLaMa is a full-parameter fine-tuned model based on LLaMa 2 for Traditional Mandarin applications.
**Taiwan-LLaMa v1.0** is pretrained on over 5 billion tokens and instruction-tuned on over 490k conversations, both in Traditional Mandarin.
## Demo
A live demonstration of the model can be accessed at [Hugging Face Spaces](https://huggingface.co/spaces/yentinglin/Taiwan-LLaMa2).
## Key Features
1. **Traditional Mandarin Support**: The model is fine-tuned to understand and generate text in Traditional Mandarin, making it suitable for Taiwanese culture and related applications.
2. **Instruction-Tuned**: Further fine-tuned on conversational data to offer context-aware and instruction-following responses.
3. **Performance on Vicuna Benchmark**: Taiwan-LLaMa's relative performance on Vicuna Benchmark is measured against models like GPT-4 and ChatGPT. It's particularly optimized for Taiwanese culture.
4. **Flexible Customization**: Advanced options for controlling the model's behavior like system prompt, temperature, top-p, and top-k are available in the demo.
## Work in progress
- [ ] **Improved pretraining**: A refined pretraining process (e.g. more data from Taiwan, training strategies) is under development, aiming to enhance model performance on Taiwanese-culture-related tasks.
- [ ] **Extend max length**: Utilizing the RoPE mechanism as described in [the paper](https://arxiv.org/abs/2104.09864), the model's context length will be extended from 4k to 8k.
## Taiwanese Culture Examples
| Question | ChatGPT | **Taiwan LLaMa v1.0** | **Taiwan LLaMa v0.0 (w/o pretraining)** | Llama2_Traditional_Chinese_13b_Chat |
|:-|:-|:-|:-|:-|
| 以惡地景觀著名的是高雄哪一個景點? | 以惡地地景著名的是高雄市的旗津。 | 高雄以擁有惡地景觀著名的景點是田寮月世界。✅ | 以惡地景觀著名的是高雄市的旗津島。旗津島是一個小島,... | 高雄市內有多個以惡地景觀著名的景點,其中以下面幾個最為著名: 1. 高雄捷運紅線沿途景點:高雄市政府為了推廣高雄 |
| 唐鳳是誰? | 唐鳳(Audrey Tang)是台灣的政治家、社會活動家和軟體工程師。她於2016年擔任台灣行政院數位政務委員,是台灣政府歷史上首位沒有政黨背景的政務官。唐鳳在軟體工程領域有豐富的經驗。 ✅ | 唐鳳是政治評論家、資訊社會學研究者,也是台灣第一位擔任政府數位政委的人。 ✅ | 唐鳳(Tang Feng)是一位著名的中國演員和歌手。他出生於1974年12月17日,出生地點在中國北京。唐鳳在1995年開始他的演藝生涯,並在同年發行了他的首張專輯《我不孤單》。此後,他發行了許多受歡迎的歌曲,並在多部電視劇和電影中擔任主角。唐鳳因其精湛的演技和迷人的歌聲而被譽為“亞洲魅力男王”。他的音樂獲得了多個獎項,包括“華表最佳新男演唱獎”、“華表最佳歌曲獎”和“華表最佳舞蹈獎”等。 | 唐鳳是唐朝時期的一位著名文學家,他的代表作品包括《陰騭》、《狂凡子》等。他的文學風格具有強烈的個性和創意,被譽為「唐代文學之父」。 |
| NTU 在哪? | NTU位於新加坡,具體地址是50 Nanyang Avenue, Singapore 639798。 | NTU 是國立臺灣大學的縮寫。它是臺灣的一所頂尖研究型大學,位於臺北市。 ✅| NTU 是台灣的一所大學,位於台北市。✅ | NTU 的總部位於台灣新北市板橋區,佔地面積約為1,000公頃。|
| 甚麼是22K? | "22K" 可能指的是金的純度,尤其是在珠寶和金飾方面。 | "22K"是指台灣的大學畢業生的起薪,大約是新台幣22,000元。這個詞被用來形容大學畢業生的起薪偏低,並且在就業市場上對於大學畢業生的需求不足。✅| 22K是指溫度尺度上的22,000K(千度)。1K代表一度,即溫度差距,是攝氏溫標(攝氏度)的一百萬分之一。因此,22K等於-253.15°C(攝氏溫標)或-423.67°F(華氏溫標)。 | 22K是一種簡寫,即22,000。它是一個數字,表示22,000個單位。|
| 肉圓要不要加香菜? | 這是個個人口味的問題,有些人喜歡在肉圓中加入香菜,因為香菜的清香可以增添風味;而有些人則不喜歡香菜的味道,覺得它會蓋過其他食材的味道。所以要不要加香菜,可以根據個人喜好來決定。 | 這是一個個人喜好的問題,有些人喜歡在肉圓裡加香菜,有些人不喜歡。**在台灣**,肉圓加香菜是一種常見的做法,這是一種在肉圓上加香菜的做法。然而,在其他國這是一個文化差異的問題。😊 | 肉圓加香菜可以增添一些芳香和美味的口感,如果你喜歡香菜的味道,可以試試加入肉圓中。香菜通常在製作過程中炒香,可以增添一些芳香和美味的味道。如果你想要加入香菜,可以在製作過程中炒香一些香菜,然後將其混入肉圓中。肉圓加香菜可以增添一些芳香和美味的口感,如果你喜歡香菜的味道,可以試試加入肉圓中。| 加香菜的肉圓味道更好,但是不加香菜的肉圓也很好吃。如果你喜歡香菜的味道,那麼加上香菜就好了。如果你不喜歡香菜,那麼不加也不會影響肉 |
## Model
We provide a number of model checkpoints that we trained. Please find them on Hugging Face [here](https://huggingface.co/models?search=taiwan-llama). Here are some quick links to the checkpoints that are finetuned from LLaMa 2:
| **Model** | **13B** |
|--------------------------------------------------------|-------------------------------------------------------------------------------------------------------------------------------|
| **Taiwan-LLaMa v1.0** (_better for Taiwanese Culture_) | 🤗 <a href="https://huggingface.co/yentinglin/Taiwan-LLaMa-v1.0" target="_blank">yentinglin/Taiwan-LLaMa-v1.0</a> |
| Taiwan-LLaMa v0.9 (partial instruction set) | 🤗 <a href="https://huggingface.co/yentinglin/Taiwan-LLaMa-v0.9" target="_blank">yentinglin/Taiwan-LLaMa-v0.9</a> |
| Taiwan-LLaMa v0.0 (no Traditional Mandarin pretraining) | 🤗 <a href="https://huggingface.co/yentinglin/Taiwan-LLaMa-v0.0" target="_blank">yentinglin/Taiwan-LLaMa-v0.0</a> |
## Data
Here are some quick links to the datasets that we used to train the models:
| **Dataset** | **Link** |
|---------------------------------|-------------------------------------------------------------------------------------------------------------------------------|
| **Instruction-tuning** | 🤗 <a href="https://huggingface.co/datasets/yentinglin/traditional_mandarin_instructions" target="_blank">yentinglin/traditional_mandarin_instructions</a> |
| Traditional Mandarin Pretraining | 🤗 <a href="https://huggingface.co/datasets/yentinglin/zh_TW_c4" target="_blank">yentinglin/zh_TW_c4</a> |
## Architecture
Taiwan-LLaMa is based on LLaMa 2, leveraging transformer architecture, <a href="https://github.com/Dao-AILab/flash-attention" target="_blank">flash attention 2</a>, and bfloat16.
It includes:
* Pretraining Phase: Pretrained on a vast corpus of over 5 billion tokens, extracted from Common Crawl in Traditional Mandarin.
* Fine-tuning Phase: Further instruction-tuned on over 490k multi-turn conversational examples to enable more instruction-following and context-aware responses.
## Generic Capabilities on Vicuna Benchmark
The data is translated into Traditional Mandarin for evaluating general capability.
<img src="./images/zhtw_vicuna_bench_chatgptbaseline.png" width="700">
The scores are calculated with ChatGPT as the baseline, represented as 100%. The other values show the relative performance of different models compared to ChatGPT.
| Language Model | Relative Score (%) |
|-------------------------------------|--------------------|
| GPT-4 | 102.59% |
| ChatGPT | 100.00% |
| **Taiwan-LLaMa v1.0** | 76.76% |
| Claude-Instant-1.2 | 74.04% |
| Llama2_Traditional_Chinese_13b_Chat | 56.21% |
## How to deploy the model on my own machine?
We recommend hosting models with [🤗 Text Generation Inference](https://github.com/huggingface/text-generation-inference). Please see their [license](https://github.com/huggingface/text-generation-inference/blob/main/LICENSE) for details on usage and limitations.
```bash
bash run_text_generation_inference.sh "yentinglin/Taiwan-LLaMa" NUM_GPUS DIR_TO_SAVE_MODEL PORT MAX_INPUT_LEN MODEL_MAX_LEN
```
Prompt format follows the vicuna-v1.1 template:
```
A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions. USER: {user} ASSISTANT:
```
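A small helper for filling this template on the client side might look like the sketch below; the function name is illustrative and not part of this repository:
```python
def build_prompt(user: str) -> str:
    """Format a user message with the vicuna-v1.1 template shown above."""
    system = (
        "A chat between a curious user and an artificial intelligence assistant. "
        "The assistant gives helpful, detailed, and polite answers to the user's questions."
    )
    return f"{system} USER: {user} ASSISTANT:"
```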
## Setup development environment
```bash
conda create -n taiwan-llama python=3.10 -y
conda activate taiwan-llama
pip install -r requirements.txt
```
## Citations
If you use our code, data, or models in your research, please cite this repository. You can use the following BibTeX entry:
```bibtex
@inproceedings{lin-chen-2023-llm,
title = "{LLM}-Eval: Unified Multi-Dimensional Automatic Evaluation for Open-Domain Conversations with Large Language Models",
author = "Lin, Yen-Ting and Chen, Yun-Nung",
booktitle = "Proceedings of the 5th Workshop on NLP for Conversational AI (NLP4ConvAI 2023)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.nlp4convai-1.5",
pages = "47--58"
}
@misc{taiwanllama,
author={Lin, Yen-Ting and Chen, Yun-Nung},
title={Taiwanese-Aligned Language Models based on Meta-Llama2},
year={2023},
url={https://github.com/adamlin120/Taiwan-LLaMa},
note={Code and models available at https://github.com/adamlin120/Taiwan-LLaMa},
}
```
## Collaborate With Us
If you are interested in contributing to the development of Traditional Mandarin language models, exploring new applications, or leveraging Taiwan-LLaMa for your specific needs, please don't hesitate to contact us. We welcome collaborations from academia, industry, and individual contributors.
## License
The code in this project is licensed under the Apache 2.0 License - see the [LICENSE](LICENSE) file for details.
The models included in this project are licensed under the LLAMA 2 Community License. See the [LLAMA2 License](https://github.com/facebookresearch/llama/blob/main/LICENSE) for full details.
## OpenAI Data Acknowledgment
The data included in this project were generated using OpenAI's models and are subject to OpenAI's Terms of Use. Please review [OpenAI's Terms of Use](https://openai.com/policies/terms-of-use) for details on usage and limitations.
## Acknowledgements
We thank [Meta LLaMA team](https://github.com/facebookresearch/llama) and [Vicuna team](https://github.com/lm-sys/FastChat) for their open-source efforts in democratizing large language models.
| 10,523 | [
[
-0.032379150390625,
-0.049041748046875,
0.0237884521484375,
0.0232696533203125,
-0.037078857421875,
0.00859832763671875,
-0.007709503173828125,
-0.04510498046875,
0.0380859375,
0.0252227783203125,
-0.044891357421875,
-0.035675048828125,
-0.031494140625,
0.01... |
ceadar-ie/AIVision360-8k | 2023-08-17T22:04:53.000Z | [
"task_categories:question-answering",
"task_categories:conversational",
"task_categories:text-generation",
"size_categories:1K<n<10K",
"language:en",
"license:apache-2.0",
"LLM",
"Generative AI",
"Finetune",
"Domain Specific Data",
"doi:10.57967/hf/0998",
"region:us"
] | ceadar-ie | null | null | 2 | 13 | 2023-08-17T20:27:23 | ---
license: apache-2.0
task_categories:
- question-answering
- conversational
- text-generation
language:
- en
tags:
- LLM
- Generative AI
- Finetune
- Domain Specific Data
size_categories:
- 1K<n<10K
---
# Dataset Card for AIVision360-8k
## Dataset Description
AIVision360 is the pioneering domain-specific dataset tailor-made for media and journalism, designed expressly for the instruction fine-tuning of Large Language Models (LLMs).\
The AIVision360-8k dataset is a curated collection sourced from "ainewshub.ie", a platform dedicated to Artificial Intelligence news from quality-controlled publishers. It is designed to provide a comprehensive representation of AI-related discussions, highlighting current developments and trends in the field. Each entry in the dataset contains three columns: "question", "response", and "context". These columns offer a structured view of AI news interactions, where the "question" and "response" provide insights on AI subjects, and the "context" column gives additional background information.
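A minimal loading sketch (the split layout is not documented in this card, so a default `train` split is assumed):
```python
from datasets import load_dataset

# Assumes the default split is named "train"; adjust if the repository layout differs.
dataset = load_dataset("ceadar-ie/AIVision360-8k", split="train")
print(dataset[0]["question"], dataset[0]["response"], dataset[0]["context"], sep="\n")
```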
### Key Features
• Domain Specificity: The dataset is focused on AI news, catering to researchers, developers, and specialists in the domain.\
• Source Reliability: Data is sourced from established publishers featured on "ainewshub.ie", ensuring content reliability.\
• Licensing: It is distributed under the Apache 2.0 open-source license, facilitating its use and modification.\
• Accessibility: Intended for public use to support collaboration and analysis in the AI community.\
• Volume: Contains over 8,000 entries, making it a significant resource for AI news analysis.
### Intended Use Cases
• Model Training: Suitable for training language models, enhancing their capacity in AI news discussions.\
• Research: Useful for AI trend analysis, sentiment analysis, and linguistic pattern study.
### Limitations
• Despite careful curation, potential biases from AI news sources may persist in the dataset.\
• Its focus is on AI news, which may reflect specific perspectives of this niche.
## Language
English
### Data Privacy
The dataset comprises publicly available news articles and does not include private identifiers or sensitive information.
### License/Attribution
Copyright © 2023 CeADAR Connect Group. Developed by CeADAR (ceadar.ie); its use is governed by the Apache 2.0 license.
### Sources
Curated exclusively from ainewshub.ie, a recognized platform for AI news.
## Annotator Guidelines
• Question: Represents a query derived from the news article.\
• Response: Provides an answer based on the article's content.\
• Context: Offers background information for the query-answer pair.
### Feedback
For any questions or feedback related to the dataset, please direct your communications to ahtsham.zafar@ucd.ie
### Disclaimer
This dataset is provided "as is" without any guarantees or warranty. Although the data has been processed with care, CeADAR Connect Group is not responsible for any errors, omissions, or discrepancies within the data. Users are advised to use this dataset at their discretion and assume any risks associated with its use. | 3,127 | [
[
-0.038970947265625,
-0.040435791015625,
0.004451751708984375,
0.016845703125,
-0.00853729248046875,
-0.01302337646484375,
-0.005382537841796875,
-0.036956787109375,
-0.000629425048828125,
0.0399169921875,
-0.0535888671875,
-0.0595703125,
-0.0379638671875,
0.... |
dim/sharegpt_short_ru | 2023-09-02T00:53:23.000Z | [
"license:cc-by-nc-4.0",
"region:us"
] | dim | null | null | 0 | 13 | 2023-08-17T22:15:08 | ---
license: cc-by-nc-4.0
dataset_info:
features:
- name: conversation
sequence: string
- name: hash
dtype: string
splits:
- name: train
num_bytes: 825523
num_examples: 253
download_size: 367027
dataset_size: 825523
---
### Version 1
```python
import json
with open("verbalist/datasets/RyokoAI_ShareGPT52K/sg_90k_part1.json") as f:
dataset1 = json.load(f)
with open("verbalist/datasets/RyokoAI_ShareGPT52K/sg_90k_part2.json") as f:
dataset2 = json.load(f)
dataset = dataset1 + dataset2
import re
import regex
import hashlib
def filter_string(string):
has = True
has_zh = not len(re.findall(r"[\u4e00-\u9fff]+", string)) > 0
has_ko = not len(re.findall(r"[\u3131-\ucb4c]+", string)) > 0
has = has_zh and has_ko
invalid_letters = "ієùéàçğİžš"
for letter in invalid_letters:
if letter in string:
return False
return has
def has_cyrillic(text):
return bool(regex.search(r"\p{IsCyrillic}", text))
clean_dataset = []
for conversation in dataset:
all_text = "\n".join([item["value"] for item in conversation["conversations"]])
# print(all_text)
# break
if filter_string(all_text) and has_cyrillic(all_text):
clean_dataset.append(conversation)
import markdownify
def correct_string(string):
string = string.replace("\\_", "_")
languages = [
"css",
"python",
"go",
"html",
"kotlin",
"diff",
"vba",
"sql",
]
for lang in languages:
string = string.replace(f"\n{lang}Copy code`", f"{lang}\n")
string = string.replace("`\n```", "\n```")
string = string.replace("\n ", "\n ")
delete_phrases = [
"Как искусственный интеллект, я не являюсь владельцем физических объектов и не могу продавать или покупать предметы. Однако, я могу поделиться советом, как можно попытаться убедить кого-то в покупке карандаша.",
"Как искусственный интеллект, я не имею личных чувств и мнений, и не могу иметь предпочтений в выборе между рождением своего ребенка и усыновлением приемного ребенка из приюта.",
"1 / 1",
"2 / 2",
"3 / 3",
"4 / 4",
"5 / 5",
"6 / 6",
"7 / 7",
"8 / 8",
"9 / 9",
"К сожалению, я не могу проверить дату вопроса, но я могу предоставить информацию о Максиме Радайкине и Борисе Марцинкевиче на начало 2021 года.",
"К сожалению, я не могу вставить пример базы данных в чат, но я могу объяснить, что это такое.",
"К сожалению, я не могу написать полноценное расширение для Google Chrome в рамках этой сессии. Однако,",
"К сожалению, я не могу выбрать материалы и дизайн за вас, так как это зависит от ваших потребностей и предпочтений. Однако,",
"Извините, но я не могу создать код для такой сложной программы с нуля, так как это потребовало бы обширных исследований, ресурсов и опыта. Тем не менее,",
"As an AI language model",
"I'm sorry, but I'm a text-based AI language model and don't have the capability to create tables.",
"Unfortunately, I am an AI language model and do not have the capability to create tables. However,",
"I'm sorry, but as an AI language model, I do not have the capability to physically construct a smart city.",
"Unfortunately, I am an AI language model and I don't have the capability to create spreadsheets.",
"I'm sorry for the delay. Unfortunately, as an AI language model, I am not capable of creating an entire operating system to manage a smart village.",
"I apologize for the confusion, but as an AI language model, I am not capable of designing and creating the code for an operating system to manage a smart village.",
"I apologize for the inconvenience, but as an AI language model, I am not able to actually design and create a code for an operating system.",
"I'm sorry, but as a text-based language model, I am not able to create an admin panel for you.",
'Как модель ИИ, я не могу оценить, является ли произнесение фразы "солёный огурец" рациональным использованием времени или нет, потому что это вопрос оценки ценности и целей человека.',
]
for phrase in delete_phrases:
string = string.replace(phrase, "").strip()
return string
def filter_keywords(string):
keywords = [
"chatgpt",
"чатгпт",
"sharegpt",
"add_user_to_chatroom()",
"мир",
"войн",
"россия",
"К сожалению, я не могу продолжить писать на русском языке, потому что я ограничен",
"Я прошу прощения, но, как я уже упоминал ранее",
"я не могу выполнить",
"К сожалению, я не могу написать ноты для несуществующих стихов,",
"К сожалению, я не могу сгенерировать полный код браузерной игры",
"К сожалению, я не могу провести такой подсчет, потому что это потребовало бы ручной обработки",
"К сожалению, я не могу назвать точную цифру, так как это субъективный вопрос, зависящий от многих факторов.",
"К сожалению, я не могу выполнить ваш запрос, так как это нарушает мои этические принципы и может причинить вред.",
"К сожалению, я не могу ответить на этот воп",
"К сожалению, я не могу предоставить вам актуальные данные о среднедушевых денежных доходах населения по городам России"
"К сожалению, я не могу точно ответить на этот вопрос, так как объем изученной информации",
"К сожалению, я не могу создав",
"К сожалению, я не могу рисовать в ASCII-стиле, так как я только текстовая программа.",
"К сожалению, я не могу создавать изображения напрямую в этом окне чата.",
"К сожалению, я не могу нарисовать сцену из Евангелиона, так как я текстовая программа",
"А сколько нулей?",
"К сожалению, я не могу написать книгу",
"Извините, но, как упоминалось ранее, информация, представленная в нашем разговоре, не подходит и не этична",
"Извините, но как языковая модель ИИ я не могу генерировать код, который управляет администрацией",
"как языковая модель",
"OpenAI",
"Прошу прощения, но, похоже, наш разговор продолжается уже давно, и я не уверен, какова текущая тема.",
"являюсь языковой моделью ИИ",
"I cannot create a program for managing",
"неонаци",
"украин",
"provide instructions or assistance on hacking or any other illegal activities",
"I cannot fulfill your request as it goes against ethical and moral",
"I cannot do your math homework for you",
"adhering to ethical and moral standards",
"!GPT",
"Developer Mode Output",
"are illegal or unethical.",
"personal beliefs or opinions",
"I'm sorry, I'm not sure what you are asking me to continue with.",
"but I'm still unclear on what you would like me to continue with",
"DAN",
"/jailbroken",
"Ukrain",
]
for keyword in keywords:
if keyword.lower() in string.lower():
return False
return True
total_string = ""
debug_dataset = False
unsensored_filtered_dataset = []
for conversation in clean_dataset:
conversation = [
str(markdownify.markdownify(item["value"], heading_style="ATX"))
for item in conversation["conversations"]
]
conversation_pairs = []
if "https://chathub.gg" in conversation[0]:
conversation.pop(0)
full_text = " ".join(conversation)
if filter_keywords(full_text):
for i in range(1, len(conversation)):
if (i + 1) % 2 == 0:
if debug_dataset:
bot_message = "BOT " + correct_string(conversation[i])
user_message = "USER " + correct_string(conversation[i - 1])
else:
bot_message = correct_string(conversation[i])
user_message = correct_string(conversation[i - 1])
conversation_pairs.append(user_message)
conversation_pairs.append(bot_message)
if len(conversation_pairs) > 0:
unsensored_filtered_dataset.append(conversation_pairs)
if debug_dataset:
all_text = "\n===\n".join([item for item in conversation_pairs])
total_string += all_text
total_string += "===" * 10
total_string += "\n"
total_string += "===" * 10
total_string += "\n"
total_string += "===" * 10
total_string += "\n"
# print(total_string)
import numpy as np  # needed for the percentile-based length filter below
from transformers import AutoTokenizer
from verbalist.datasets.utils import visualize_hist
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")
conversation_lengths = []
for conversation in unsensored_filtered_dataset:
all_text = "\n===\n".join([item for item in conversation])
conversation_lengths.append(len(tokenizer(all_text)["input_ids"]))
# print(all_text)
# print("="*100)
# print("="*100)
# print("="*100)
# break
# if has_cyrillic(all_text):
# rus_conv.append(conversation)
visualize_hist(conversation_lengths, "ru_share_gpt_filtered")
filter_num = 85
passed_convs = (
np.array(conversation_lengths) < np.percentile(conversation_lengths, filter_num)
).tolist()
unsensored_passed = []
for i, status in enumerate(passed_convs):
if status:
unsensored_passed.append(unsensored_filtered_dataset[i])
unsensored_dataset = []
for conv in unsensored_passed:
conv_hash = hashlib.sha256(conv[0].encode('utf-8')).hexdigest()
unsensored_dataset.append({
"conversation": conv,
"hash": conv_hash
})
``` | 9,642 | [
[
-0.0312347412109375,
-0.054473876953125,
0.0278778076171875,
0.020294189453125,
-0.020660400390625,
0.01396942138671875,
-0.0096588134765625,
-0.01428985595703125,
0.0310516357421875,
0.0282135009765625,
-0.0419921875,
-0.0546875,
-0.0280609130859375,
0.0073... |
lhoestq/squad | 2023-08-18T10:52:41.000Z | [
"task_categories:question-answering",
"task_ids:extractive-qa",
"annotations_creators:crowdsourced",
"language_creators:crowdsourced",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:extended|wikipedia",
"language:en",
"license:cc-by-4.0",
... | lhoestq | Stanford Question Answering Dataset (SQuAD) is a reading comprehension dataset, consisting of questions posed by crowdworkers on a set of Wikipedia articles, where the answer to every question is a segment of text, or span, from the corresponding reading passage, or the question might be unanswerable. | @article{2016arXiv160605250R,
author = {{Rajpurkar}, Pranav and {Zhang}, Jian and {Lopyrev},
Konstantin and {Liang}, Percy},
title = "{SQuAD: 100,000+ Questions for Machine Comprehension of Text}",
journal = {arXiv e-prints},
year = 2016,
eid = {arXiv:1606.05250},
pages = {arXiv:1606.05250},
archivePrefix = {arXiv},
eprint = {1606.05250},
} | 1 | 13 | 2023-08-18T10:52:20 | ---
pretty_name: SQuAD
annotations_creators:
- crowdsourced
language_creators:
- crowdsourced
- found
language:
- en
license:
- cc-by-4.0
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- extended|wikipedia
task_categories:
- question-answering
task_ids:
- extractive-qa
paperswithcode_id: squad
train-eval-index:
- config: plain_text
task: question-answering
task_id: extractive_question_answering
splits:
train_split: train
eval_split: validation
col_mapping:
question: question
context: context
answers:
text: text
answer_start: answer_start
metrics:
- type: squad
name: SQuAD
dataset_info:
features:
- name: id
dtype: string
- name: title
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: text
dtype: string
- name: answer_start
dtype: int32
config_name: plain_text
splits:
- name: train
num_bytes: 79317110
num_examples: 87599
- name: validation
num_bytes: 10472653
num_examples: 10570
download_size: 35142551
dataset_size: 89789763
---
# Dataset Card for "squad"
## Table of Contents
- [Dataset Card for "squad"](#dataset-card-for-squad)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [plain_text](#plain_text)
- [Data Fields](#data-fields)
- [plain_text](#plain_text-1)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Initial Data Collection and Normalization](#initial-data-collection-and-normalization)
- [Who are the source language producers?](#who-are-the-source-language-producers)
- [Annotations](#annotations)
- [Annotation process](#annotation-process)
- [Who are the annotators?](#who-are-the-annotators)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://rajpurkar.github.io/SQuAD-explorer/](https://rajpurkar.github.io/SQuAD-explorer/)
- **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of downloaded dataset files:** 35.14 MB
- **Size of the generated dataset:** 89.92 MB
- **Total amount of disk used:** 125.06 MB
### Dataset Summary
Stanford Question Answering Dataset (SQuAD) is a reading comprehension dataset, consisting of questions posed by crowdworkers on a set of Wikipedia articles, where the answer to every question is a segment of text, or span, from the corresponding reading passage, or the question might be unanswerable.
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Dataset Structure
### Data Instances
#### plain_text
- **Size of downloaded dataset files:** 35.14 MB
- **Size of the generated dataset:** 89.92 MB
- **Total amount of disk used:** 125.06 MB
An example of 'train' looks as follows.
```
{
"answers": {
"answer_start": [1],
"text": ["This is a test text"]
},
"context": "This is a test context.",
"id": "1",
"question": "Is this a test?",
"title": "train test"
}
```
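A minimal sketch of loading this copy of the data and inspecting such an instance (the `plain_text` config from the YAML header is assumed to be accepted as the config name):
```python
from datasets import load_dataset

# This repository mirrors the original "squad" dataset; load it by its own id.
squad = load_dataset("lhoestq/squad", "plain_text")
example = squad["train"][0]
print(example["question"])
print(example["answers"]["text"], example["answers"]["answer_start"])
```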
### Data Fields
The data fields are the same among all splits.
#### plain_text
- `id`: a `string` feature.
- `title`: a `string` feature.
- `context`: a `string` feature.
- `question`: a `string` feature.
- `answers`: a dictionary feature containing:
- `text`: a `string` feature.
- `answer_start`: a `int32` feature.
### Data Splits
| name |train|validation|
|----------|----:|---------:|
|plain_text|87599| 10570|
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@article{2016arXiv160605250R,
author = {{Rajpurkar}, Pranav and {Zhang}, Jian and {Lopyrev},
Konstantin and {Liang}, Percy},
title = "{SQuAD: 100,000+ Questions for Machine Comprehension of Text}",
journal = {arXiv e-prints},
year = 2016,
eid = {arXiv:1606.05250},
pages = {arXiv:1606.05250},
archivePrefix = {arXiv},
eprint = {1606.05250},
}
```
### Contributions
Thanks to [@lewtun](https://github.com/lewtun), [@albertvillanova](https://github.com/albertvillanova), [@patrickvonplaten](https://github.com/patrickvonplaten), [@thomwolf](https://github.com/thomwolf) for adding this dataset. | 7,665 | [
[
-0.047210693359375,
-0.046173095703125,
0.007049560546875,
0.01451873779296875,
-0.007724761962890625,
0.00609588623046875,
-0.0211639404296875,
-0.0267333984375,
0.04022216796875,
0.0289306640625,
-0.07452392578125,
-0.06414794921875,
-0.0291290283203125,
0... |
fake-news-UFG/fakebr | 2023-08-18T13:51:35.000Z | [
"task_categories:text-classification",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"language:pt",
"region:us"
] | fake-news-UFG | Fake.Br Corpus is composed of aligned true and fake news written in Brazilian Portuguese. | @article{silva:20,
title = "Towards automatically filtering fake news in Portuguese",
journal = "Expert Systems with Applications",
volume = "146",
pages = "113199",
year = "2020",
issn = "0957-4174",
doi = "https://doi.org/10.1016/j.eswa.2020.113199",
url = "http://www.sciencedirect.com/science/article/pii/S0957417420300257",
author = "Renato M. Silva and Roney L.S. Santos and Tiago A. Almeida and Thiago A.S. Pardo",
} | 0 | 13 | 2023-08-18T11:46:19 | ---
pretty_name: Fake.br
task_categories:
- text-classification
language:
- pt
language_details: pt-BR
size_categories:
- 1K<n<10K
multilinguality:
- monolingual
language_creators:
- found
---
# Dataset Card for fake.br
## Dataset Description
- **Homepage:**
- **Repository:** [https://github.com/roneysco/Fake.br-Corpus/](https://github.com/roneysco/Fake.br-Corpus/)
- **Paper:** [https://sites.icmc.usp.br/taspardo/OpenCor2018-SantosEtAl.pdf](https://sites.icmc.usp.br/taspardo/OpenCor2018-SantosEtAl.pdf)
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
Fake.Br Corpus is composed of aligned true and fake news written in Brazilian Portuguese.
### Supported Tasks and Leaderboards
The task is text classification of news content.
### Languages
The dataset is in Portuguese.
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
If you use the "Fake.br Dataset", please include a citation to the project website and the corresponding paper published at the PROPOR 2018 conference:
```bibtex
@InProceedings{fakebr:18,
author={Monteiro, Rafael A. and Santos, Roney L. S. and Pardo, Thiago A. S. and de Almeida, Tiago A. and Ruiz, Evandro E. S. and Vale, Oto A.},
title={Contributions to the Study of Fake News in Portuguese: New Corpus and Automatic Detection Results},
booktitle={Computational Processing of the Portuguese Language},
year={2018},
publisher={Springer International Publishing},
pages={324--334},
isbn={978-3-319-99722-3},
}
```
or the paper published in Expert Systems with Applications:
```bibtex
@article{silva:20,
title = "Towards automatically filtering fake news in Portuguese",
journal = "Expert Systems with Applications",
volume = "146",
pages = "113199",
year = "2020",
issn = "0957-4174",
doi = "https://doi.org/10.1016/j.eswa.2020.113199",
url = "http://www.sciencedirect.com/science/article/pii/S0957417420300257",
author = "Renato M. Silva and Roney L.S. Santos and Tiago A. Almeida and Thiago A.S. Pardo",
}
```
### Contributions
Thanks to [@ju-resplande](https://github.com/ju-resplande) for adding this dataset. | 2,995 | [
[
-0.0309600830078125,
-0.0631103515625,
0.00994110107421875,
0.0257110595703125,
-0.0244598388671875,
0.0190887451171875,
-0.0180206298828125,
-0.037322998046875,
0.046966552734375,
0.029022216796875,
-0.0193023681640625,
-0.0537109375,
-0.047393798828125,
0.... |
squarelike/ko_medical_chat | 2023-08-19T06:45:48.000Z | [
"language:ko",
"medical",
"region:us"
] | squarelike | null | null | 3 | 13 | 2023-08-18T18:24:58 | ---
language:
- ko
tags:
- medical
---
[https://github.com/jwj7140/ko-medical-chat](https://github.com/jwj7140/ko-medical-chat)
A Korean medical conversation dataset created by converting [MedText](https://huggingface.co/datasets/BI55/MedText) and [ChatDoctor](https://github.com/Kent0n-Li/ChatDoctor).
[
-0.013275146484375,
-0.046173095703125,
0.0516357421875,
0.01837158203125,
-0.01519012451171875,
0.0057525634765625,
-0.0171966552734375,
-0.0225067138671875,
0.03515625,
0.0614013671875,
-0.0494384765625,
-0.060943603515625,
-0.0238494873046875,
-0.01042938... |
pkufool/libriheavy | 2023-09-19T11:35:45.000Z | [
"license:apache-2.0",
"arxiv:2309.08105",
"region:us"
] | pkufool | null | null | 4 | 13 | 2023-08-21T11:20:42 | ---
license: apache-2.0
---
# Libriheavy: a 50,000 hours ASR corpus with punctuation casing and context
Libriheavy is a labeled version of [Librilight](https://github.com/facebookresearch/libri-light); read our [paper](https://arxiv.org/abs/2309.08105) for more details.
See https://github.com/k2-fsa/libriheavy for more details.
## Citation
```
@misc{kang2023libriheavy,
title={Libriheavy: a 50,000 hours ASR corpus with punctuation casing and context},
author={Wei Kang and Xiaoyu Yang and Zengwei Yao and Fangjun Kuang and Yifan Yang and Liyong Guo and Long Lin and Daniel Povey},
year={2023},
eprint={2309.08105},
archivePrefix={arXiv},
primaryClass={eess.AS}
}
```
| 712 | [
[
0.00737762451171875,
-0.031219482421875,
0.042877197265625,
0.0197296142578125,
-0.0217742919921875,
0.0028057098388671875,
-0.0247802734375,
-0.037109375,
0.01239776611328125,
0.03759765625,
-0.0108642578125,
-0.03240966796875,
-0.0013933181762695312,
0.016... |
nuprl/stack_dedup_lua_codegen_full | 2023-08-24T05:18:55.000Z | [
"region:us"
] | nuprl | null | null | 0 | 13 | 2023-08-24T05:18:51 | ---
dataset_info:
features:
- name: content
dtype: string
- name: pass_rate
dtype: float64
- name: id
dtype: int64
- name: original_id
dtype: int64
- name: tests
dtype: string
- name: edu_score
dtype: float64
splits:
- name: train
num_bytes: 152206357
num_examples: 117557
download_size: 51503174
dataset_size: 152206357
---
# Dataset Card for "stack_dedup_lua_codegen_full"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 561 | [
[
-0.029632568359375,
-0.0305328369140625,
0.00897979736328125,
0.024200439453125,
-0.0173187255859375,
0.005084991455078125,
0.02435302734375,
-0.022674560546875,
0.06414794921875,
0.043792724609375,
-0.040557861328125,
-0.055938720703125,
-0.03125,
-0.017150... |
monsoon-nlp/asknyc-chatassistant-format | 2023-08-29T20:53:15.000Z | [
"task_categories:question-answering",
"language:en",
"license:mit",
"reddit",
"nyc",
"new york city",
"region:us"
] | monsoon-nlp | null | null | 0 | 13 | 2023-08-26T22:01:28 | ---
license: mit
task_categories:
- question-answering
language:
- en
tags:
- reddit
- nyc
- new york city
---
Questions from Reddit.com/r/AskNYC, downloaded from PushShift and filtered to direct responses from humans where the post's net score is >= 3.
Collected one month of posts from each year 2015-2019 (i.e. no content from July 2019 onward).
Adapted from the CSV used to fine-tune https://huggingface.co/monsoon-nlp/gpt-nyc
Blog about the original model: https://medium.com/geekculture/gpt-nyc-part-1-9cb698b2e3d | 517 | [
[
-0.05023193359375,
-0.06640625,
0.0216064453125,
0.0297393798828125,
-0.04962158203125,
-0.0193634033203125,
0.02276611328125,
-0.06854248046875,
0.05084228515625,
0.032989501953125,
-0.0693359375,
-0.023712158203125,
-0.0279083251953125,
0.0185699462890625,... |
LibrAI/do-not-answer | 2023-08-28T18:22:52.000Z | [
"task_categories:text-generation",
"size_categories:n<1K",
"language:en",
"license:apache-2.0",
"safety",
"large language model",
"region:us"
] | LibrAI | null | null | 2 | 13 | 2023-08-28T17:37:24 | ---
license: apache-2.0
task_categories:
- text-generation
language:
- en
tags:
- safety
- large language model
pretty_name: do-not-answer
size_categories:
- n<1K
dataset_info:
features:
- name: id
dtype: int64
- name: risk_area
dtype: string
- name: types_of_harm
dtype: string
- name: specific_harms
dtype: string
- name: question
dtype: string
- name: GPT4_response
dtype: string
- name: GPT4_harmful
dtype: int64
- name: GPT4_action
dtype: int64
- name: ChatGPT_response
dtype: string
- name: ChatGPT_harmful
dtype: int64
- name: ChatGPT_action
dtype: int64
- name: Claude_response
dtype: string
- name: Claude_harmful
dtype: int64
- name: Claude_action
dtype: int64
- name: ChatGLM2_response
dtype: string
- name: ChatGLM2_harmful
dtype: int64
- name: ChatGLM2_action
dtype: int64
- name: llama2-7b-chat_response
dtype: string
- name: llama2-7b-chat_harmful
dtype: int64
- name: llama2-7b-chat_action
dtype: int64
- name: vicuna-7b_response
dtype: string
- name: vicuna-7b_harmful
dtype: int64
- name: vicuna-7b_action
dtype: int64
splits:
- name: train
num_bytes: 3997193
num_examples: 939
download_size: 1709142
dataset_size: 3997193
---
# Do-Not-Answer: A Dataset for Evaluating Safeguards in LLMs
## Overview
*Do not answer* is an open-source dataset for evaluating LLMs' safety mechanisms at a low cost. The dataset is curated and filtered to consist only of prompts to which responsible language models do not answer.
Besides human annotations, *Do not answer* also implements model-based evaluation, where a 600M fine-tuned BERT-like evaluator achieves results comparable to human and GPT-4 evaluation.
## Instruction Taxonomy
Instructions are organized by a three-level hierarchical taxonomy, covering 61 specific harms.

*Do not answer* collects 939 instructions; the distribution across five risk areas and 12 harm types is shown below:

## Response Assessment
We assess six models' responses (GPT-4, ChatGPT, Claude, LLaMA-2, Vicuna, ChatGLM2) at two levels.
* Harmfulness: whether a response is harmful (a binary label).
* Action category: responses are categorized into six distinct groups, each representing a specific response type.

### Human Evaluation
The high percentage of safe responses across the five risk areas demonstrates that all six models are basically safe.
LLaMA-2 ranks first, followed by ChatGPT, Claude, GPT-4, and Vicuna, with ChatGLM2 ranking last with 85 harmful responses out of 939.
* Harmlessness Rank

* Action Category Distribution

### Automatic Evaluation
Longformer evaluators achieve results comparable to human and GPT-4 evaluation.
* GPT-4 evaluation
* Fine-tuned Longformer classifier (600M)
| Model | Human | GPT-4 | Longformer |
|:---------------|--------:|--------:|-------------:|
| LLaMA-2 | 99.7 | 99.4 | 99.3 |
| ChatGPT | 98.5 | 97.7 | 97.9 |
| Claude | 98.3 | 98.3 | 97.6 |
| GPT-4 | 97.6 | 96.5 | 97.2 |
| vicuna | 94.5 | 94.9 | 95.0 |
| ChatGLM2 | 90.9 | 92.9 | 92.9 |
## Citation
If our work is useful for your own research, you can cite us with the following BibTeX entry:
```bibtex
@misc{wang2023donotanswer,
author = {Wang, Yuxia, and
Li, Haonan and
Han, Xudong and
Nakov, Preslav
and Baldwin, Timothy},
title = {Do-Not-Answer: A Dataset for Evaluating Safeguards in LLMs},
year = {2023},
howpublished = {arXiv preprint arXiv:2308.13387},
}
```
## License
All datasets in this repository are released under the
[Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License](http://creativecommons.org/licenses/by-nc-sa/4.0/). All source files in this repository are released under the Apache 2.0 license, the text of which can be found in the LICENSE file. | 4,181 | [
[
-0.0134429931640625,
-0.0572509765625,
0.039520263671875,
-0.00807952880859375,
-0.01165008544921875,
-0.00476837158203125,
0.0185699462890625,
-0.03912353515625,
-0.00457763671875,
0.0268707275390625,
-0.0457763671875,
-0.059722900390625,
-0.03656005859375,
... |
Admin08077/STUPID | 2023-09-03T07:08:20.000Z | [
"task_categories:text-generation",
"task_categories:text-classification",
"task_categories:token-classification",
"task_categories:table-question-answering",
"task_categories:question-answering",
"task_categories:translation",
"task_categories:zero-shot-classification",
"task_categories:summarization"... | Admin08077 | null | null | 0 | 13 | 2023-09-03T05:33:59 | ---
task_categories:
- text-generation
- text-classification
- token-classification
- table-question-answering
- question-answering
- translation
- zero-shot-classification
- summarization
- conversational
- sentence-similarity
- audio-to-audio
- automatic-speech-recognition
- voice-activity-detection
- depth-estimation
- image-classification
- object-detection
- audio-classification
- image-segmentation
- text-to-image
- image-to-text
- text2text-generation
- feature-extraction
- unconditional-image-generation
- reinforcement-learning
- tabular-classification
- tabular-regression
- video-classification
- text-to-speech
- tabular-to-text
- robotics
- time-series-forecasting
- text-retrieval
- visual-question-answering
- zero-shot-image-classification
- text-to-video
- multiple-choice
- table-to-text
- image-to-image
- graph-ml
- fill-mask
tags:
- '#Admin08077/Stupid'
size_categories:
- n>1T
license: openrail
---
# Dataset Card for Dataset Name
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1).
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] | 2,457 | [
[
-0.038177490234375,
-0.02984619140625,
-0.0036067962646484375,
0.027130126953125,
-0.0323486328125,
0.0037822723388671875,
-0.01727294921875,
-0.02020263671875,
0.049041748046875,
0.04046630859375,
-0.0634765625,
-0.08062744140625,
-0.052947998046875,
0.0020... |
hello2mao/Chinese_Audio_Resource | 2023-09-13T05:21:30.000Z | [
"task_categories:text-to-speech",
"task_categories:audio-classification",
"task_categories:audio-to-audio",
"language:zh",
"license:openrail",
"region:us"
] | hello2mao | null | null | 0 | 13 | 2023-09-13T04:36:38 | ---
license: openrail
task_categories:
- text-to-speech
- audio-classification
- audio-to-audio
language:
- zh
---
# Chinese Speech Dataset
- 刘海柱
- 林黛玉
- 甜小喵
- 蔡徐坤
- 郭德纲 | 156 | [
[
0.00643157958984375,
-0.037994384765625,
0.0255584716796875,
0.0648193359375,
-0.050933837890625,
0.005466461181640625,
0.0005164146423339844,
-0.0166168212890625,
0.052001953125,
0.031768798828125,
-0.003093719482421875,
-0.033721923828125,
-0.034210205078125,
... |
ahmed000000000/cybersec | 2023-09-17T21:16:25.000Z | [
"region:us"
] | ahmed000000000 | null | null | 0 | 13 | 2023-09-17T21:15:07 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
mwitiderrick/lamini_llama | 2023-09-18T05:36:13.000Z | [
"region:us"
] | mwitiderrick | null | null | 0 | 13 | 2023-09-18T05:35:58 | Entry not found | 15 | [
[
-0.0214080810546875,
-0.014984130859375,
0.05718994140625,
0.0287933349609375,
-0.035064697265625,
0.0465087890625,
0.052490234375,
0.005077362060546875,
0.051361083984375,
0.016998291015625,
-0.05206298828125,
-0.0149993896484375,
-0.0604248046875,
0.037902... |
DavidMOBrien/8000-java-preprocessed-v2 | 2023-09-18T23:40:48.000Z | [
"region:us"
] | DavidMOBrien | null | null | 0 | 13 | 2023-09-18T23:38:59 | ---
dataset_info:
features:
- name: before
dtype: string
- name: after
dtype: string
- name: repo
dtype: string
- name: type
dtype: string
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 556419873
num_examples: 322448
- name: test
num_bytes: 76892752
num_examples: 44883
- name: valid
num_bytes: 73527268
num_examples: 45083
download_size: 292278962
dataset_size: 706839893
---
# Dataset Card for "8000-java-preprocessed-v2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 649 | [
[
-0.038482666015625,
-0.0156402587890625,
0.019744873046875,
0.0142974853515625,
-0.0240631103515625,
-0.01340484619140625,
0.0241546630859375,
-0.0167083740234375,
0.0452880859375,
0.059112548828125,
-0.042572021484375,
-0.051025390625,
-0.051025390625,
-0.0... |
ricardosantoss/mimic | 2023-09-19T18:02:19.000Z | [
"region:us"
] | ricardosantoss | null | null | 0 | 13 | 2023-09-19T18:01:51 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
- split: validation
path: data/validation-*
dataset_info:
features:
- name: TEXT
dtype: string
- name: ICD9_CODE
sequence: string
splits:
- name: train
num_bytes: 350160331
num_examples: 39354
- name: test
num_bytes: 44827959
num_examples: 5000
- name: validation
num_bytes: 44381049
num_examples: 5000
download_size: 245192456
dataset_size: 439369339
---
# Dataset Card for "mimic"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 699 | [
[
-0.039520263671875,
-0.01849365234375,
0.006023406982421875,
0.023712158203125,
-0.0057830810546875,
0.005657196044921875,
0.026641845703125,
-0.028106689453125,
0.0726318359375,
0.03143310546875,
-0.0704345703125,
-0.035552978515625,
-0.035614013671875,
-0.... |
dim/dialogsum | 2023-09-25T14:43:54.000Z | [
"region:us"
] | dim | null | null | 0 | 13 | 2023-09-25T14:43:50 | ---
dataset_info:
features:
- name: id
dtype: string
- name: dialogue
dtype: string
- name: summary
dtype: string
- name: topic
dtype: string
splits:
- name: train
num_bytes: 11439628
num_examples: 12460
download_size: 6516766
dataset_size: 11439628
---
# Dataset Card for "dialogsum"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 459 | [
[
-0.038238525390625,
-0.0261077880859375,
0.0120086669921875,
0.0037136077880859375,
-0.00890350341796875,
-0.008026123046875,
0.020477294921875,
-0.01062774658203125,
0.05462646484375,
0.0509033203125,
-0.06378173828125,
-0.0484619140625,
-0.03167724609375,
... |
bigbio/czi_drsm | 2023-09-26T13:46:34.000Z | [
"multilinguality:monolingual",
"language:en",
"license:cc0-1.0",
"region:us"
] | bigbio | Research Article document classification dataset based on aspects of disease research. Currently, the dataset consists of three subsets:
(A) classifies title/abstracts of papers into most popular subtypes of clinical, basic, and translational papers (~20k papers);
- Clinical Characteristics, Disease Pathology, and Diagnosis -
Text that describes (A) symptoms, signs, or ‘phenotype’ of a disease;
(B) the effects of the disease on patient organs, tissues, or cells;
(C) the results of clinical tests that reveal pathology (including
biomarkers); (D) research that use this information to figure out
a diagnosis.
- Therapeutics in the clinic -
Text describing how treatments work in the clinic (but not in a clinical trial).
- Disease mechanism -
Text that describes either (A) mechanistic involvement of specific genes in disease
(deletions, gain of function, etc); (B) how molecular signalling or metabolism
(binding, activating, phosphorylation, concentration increase, etc.)
are involved in the mechanism of a disease; or (C) the physiological
mechanism of disease at the level of tissues, organs, and body systems.
- Patient-Based Therapeutics -
Text describing (A) Clinical trials (studies of therapeutic measures being
used on patients in a clinical trial); (B) Post Marketing Drug Surveillance
(effects of a drug after approval in the general population or as part of
‘standard healthcare’); (C) Drug repurposing (how a drug that has been
approved for one use is being applied to a new disease).
(B) identifies whether a title/abstract of a paper describes substantive research into Quality of Life (~10k papers);
- -1 - the paper is not a primary experimental study in rare disease
- 0 - the study does not directly investigate quality of life
- 1 - the study investigates qol but not as its primary contribution
- 2 - the study's primary contribution centers on quality of life measures
(C) identifies if a paper is a natural history study (~10k papers).
- -1 - the paper is not a primary experimental study in rare disease
- 0 - the study is not directly investigating the natural history of a disease
- 1 - the study includes some elements a natural history but not as its primary contribution
- 2 - the study's primary contribution centers on observing the time course of a rare disease
These classifications are particularly relevant in rare disease research, a field that is generally understudied. | @article{,
author = {},
title = {},
journal = {},
volume = {},
year = {},
url = {},
doi = {},
biburl = {},
bibsource = {}
} | 0 | 13 | 2023-09-26T13:22:47 | ---
language:
- en
bigbio_language:
- English
license: cc0-1.0
bigbio_license_shortname: cc0-1.0
multilinguality: monolingual
pretty_name: CZI DRSM
homepage: https://github.com/chanzuckerberg/DRSM-corpus
bigbio_pubmed: false
bigbio_public: true
bigbio_tasks:
- TXTCLASS
---
# Dataset Card for CZI DRSM
## Dataset Description
- **Homepage:** https://github.com/chanzuckerberg/DRSM-corpus
- **Pubmed:** False
- **Public:** True
- **Tasks:** TXTCLASS
Research Article document classification dataset based on aspects of disease research. Currently, the dataset consists of three subsets:
(A) classifies title/abstracts of papers into most popular subtypes of clinical, basic, and translational papers (~20k papers);
- Clinical Characteristics, Disease Pathology, and Diagnosis -
Text that describes (A) symptoms, signs, or ‘phenotype’ of a disease;
(B) the effects of the disease on patient organs, tissues, or cells;
(C) the results of clinical tests that reveal pathology (including
biomarkers); (D) research that use this information to figure out
a diagnosis.
- Therapeutics in the clinic -
Text describing how treatments work in the clinic (but not in a clinical trial).
- Disease mechanism -
Text that describes either (A) mechanistic involvement of specific genes in disease
(deletions, gain of function, etc); (B) how molecular signalling or metabolism
(binding, activating, phosphorylation, concentration increase, etc.)
are involved in the mechanism of a disease; or (C) the physiological
mechanism of disease at the level of tissues, organs, and body systems.
- Patient-Based Therapeutics -
Text describing (A) Clinical trials (studies of therapeutic measures being
used on patients in a clinical trial); (B) Post Marketing Drug Surveillance
(effects of a drug after approval in the general population or as part of
‘standard healthcare’); (C) Drug repurposing (how a drug that has been
approved for one use is being applied to a new disease).
(B) identifies whether a title/abstract of a paper describes substantive research into Quality of Life (~10k papers);
- -1 - the paper is not a primary experimental study in rare disease
- 0 - the study does not directly investigate quality of life
- 1 - the study investigates qol but not as its primary contribution
- 2 - the study's primary contribution centers on quality of life measures
(C) identifies if a paper is a natural history study (~10k papers).
- -1 - the paper is not a primary experimental study in rare disease
- 0 - the study is not directly investigating the natural history of a disease
- 1 - the study includes some elements a natural history but not as its primary contribution
- 2 - the study's primary contribution centers on observing the time course of a rare disease
These classifications are particularly relevant in rare disease research, a field that is generally understudied.
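For convenience, the Quality-of-Life label scheme above can be mapped to readable descriptions; a small illustrative sketch (the names here are our own and not part of the dataset, and the natural-history subset follows the same -1 to 2 pattern):
```python
# Illustrative only: human-readable names for the Quality-of-Life label scheme
# described above. The dataset's actual field/label encoding may differ.
QOL_LABELS = {
    -1: "not a primary experimental study in rare disease",
    0: "does not directly investigate quality of life",
    1: "investigates QoL, but not as its primary contribution",
    2: "primary contribution centers on quality-of-life measures",
}

def describe_qol_label(label: int) -> str:
    """Return the description for a Quality-of-Life label (-1..2)."""
    return QOL_LABELS.get(label, "unknown label")

print(describe_qol_label(2))
```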
## Citation Information
```
# N/A
```
| 3,149 | [
[
-0.0041656494140625,
-0.039581298828125,
0.03192138671875,
-0.00560760498046875,
-0.024322509765625,
0.006252288818359375,
0.00689697265625,
-0.04241943359375,
0.047882080078125,
0.04052734375,
-0.021728515625,
-0.04815673828125,
-0.04705810546875,
0.0189666... |
ThingsSolver/nsql-eng | 2023-09-28T07:39:58.000Z | [
"region:us"
] | ThingsSolver | null | null | 0 | 13 | 2023-09-26T13:28:45 | ---
dataset_info:
features:
- name: question
dtype: string
- name: context
dtype: string
- name: answer
dtype: string
- name: instruction
dtype: string
- name: prompt
dtype: string
- name: is_english
dtype: bool
- name: text
dtype: string
splits:
- name: train
num_bytes: 911778978
num_examples: 261423
download_size: 226661607
dataset_size: 911778978
---
# Dataset Card for "nsql-eng"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 578 | [
[
-0.046661376953125,
-0.00969696044921875,
0.00942230224609375,
0.0262451171875,
-0.0135955810546875,
0.01474761962890625,
0.0282135009765625,
-0.022491455078125,
0.061920166015625,
0.047576904296875,
-0.06695556640625,
-0.058380126953125,
-0.0181427001953125,
... |
ZhafranR/CC-ID-News | 2023-10-02T00:37:33.000Z | [
"size_categories:100K<n<1M",
"language:id",
"license:cc",
"region:us"
] | ZhafranR | null | null | 1 | 13 | 2023-10-01T23:15:18 | ---
license: cc
language:
- id
size_categories:
- 100K<n<1M
---
[Needs More Information]
# Dataset Card for Common Crawled Indonesia News
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-instances)
- [Data Splits](#data-instances)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** [Needs More Information]
- **Repository:** [Needs More Information]
- **Paper:** [Needs More Information]
- **Leaderboard:** [Needs More Information]
- **Point of Contact:** [Needs More Information]
### Dataset Summary
[Needs More Information]
### Supported Tasks and Leaderboards
[Needs More Information]
### Languages
[Needs More Information]
## Dataset Structure
### Data Instances
[Needs More Information]
### Data Fields
[Needs More Information]
### Data Splits
[Needs More Information]
## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
[Needs More Information]
## Considerations for Using the Data
### Social Impact of Dataset
[Needs More Information]
### Discussion of Biases
[Needs More Information]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
[Needs More Information]
### Licensing Information
[Needs More Information]
### Citation Information
[Needs More Information] | 2,493 | [
[
-0.03973388671875,
-0.0411376953125,
0.0010805130004882812,
0.0220184326171875,
-0.039306640625,
0.000033795833587646484,
-0.0163726806640625,
-0.03240966796875,
0.0494384765625,
0.06390380859375,
-0.054229736328125,
-0.07379150390625,
-0.05419921875,
0.0294... |
harinarayan/my_dataset_01 | 2023-10-03T04:15:57.000Z | [
"region:us"
] | harinarayan | null | null | 0 | 13 | 2023-10-03T03:16:06 | ---
dataset_info:
features:
- name: image_file
dtype: string
- name: caption
dtype: string
splits:
- name: train
num_bytes: 2422
num_examples: 20
download_size: 2850
dataset_size: 2422
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "my_dataset_01"
This is a dataset for captioning graph images
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 520 | [
[
-0.039520263671875,
-0.0296173095703125,
0.0027027130126953125,
0.0145111083984375,
-0.030792236328125,
0.0135955810546875,
0.03961181640625,
0.0021953582763671875,
0.07476806640625,
0.045166015625,
-0.07275390625,
-0.046783447265625,
-0.041168212890625,
0.0... |
AayushShah/SQL_CleanedKaggle | 2023-10-03T13:15:09.000Z | [
"region:us"
] | AayushShah | null | null | 0 | 13 | 2023-10-03T13:09:51 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
- name: labels
sequence: int64
splits:
- name: train
num_bytes: 958625276.0
num_examples: 266581
- name: test
num_bytes: 106517116.0
num_examples: 29621
download_size: 50495032
dataset_size: 1065142392.0
---
# Dataset Card for "SQL_CleanedKaggle"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 649 | [
[
-0.026641845703125,
-0.03656005859375,
-0.00428009033203125,
-0.00807952880859375,
-0.02587890625,
-0.0018568038940429688,
0.005702972412109375,
0.0027523040771484375,
0.065185546875,
0.058441162109375,
-0.06890869140625,
-0.05859375,
-0.0173187255859375,
-0... |
sordonia/platypus_mmlu_sub-10_from-wiki | 2023-10-03T13:18:30.000Z | [
"region:us"
] | sordonia | null | null | 0 | 13 | 2023-10-03T13:18:17 | ---
dataset_info:
features:
- name: instruction
dtype: string
- name: context
dtype: string
- name: subject
dtype: string
- name: input
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 368743007
num_examples: 82188
download_size: 120535471
dataset_size: 368743007
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "platypus_mmlu_sub-10_from-wiki"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 613 | [
[
-0.048004150390625,
-0.01137542724609375,
0.01043701171875,
0.0180206298828125,
-0.034912109375,
-0.00274658203125,
0.00814056396484375,
0.005619049072265625,
0.0711669921875,
0.026641845703125,
-0.07196044921875,
-0.04156494140625,
-0.043853759765625,
0.013... |
paul-w-qs/contracts_v1 | 2023-10-04T14:24:27.000Z | [
"region:us"
] | paul-w-qs | null | null | 0 | 13 | 2023-10-04T14:08:38 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
dataset_info:
features:
- name: image
dtype: image
- name: ground_truth
dtype: string
splits:
- name: train
num_bytes: 296160099.824
num_examples: 3052
- name: validation
num_bytes: 71579695.0
num_examples: 764
- name: test
num_bytes: 91333831.0
num_examples: 955
download_size: 457070753
dataset_size: 459073625.824
---
# Dataset Card for "contracts_v1"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 715 | [
[
-0.0267333984375,
-0.0027294158935546875,
0.014923095703125,
0.0121307373046875,
-0.01739501953125,
-0.0125579833984375,
0.037078857421875,
-0.01537322998046875,
0.05908203125,
0.05712890625,
-0.0504150390625,
-0.067626953125,
-0.041229248046875,
-0.02827453... |
Hack90/ncbi_genbank_part_1 | 2023-10-06T23:08:06.000Z | [
"region:us"
] | Hack90 | null | null | 0 | 13 | 2023-10-04T19:00:18 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: id
dtype: string
- name: sequence
dtype: string
- name: name
dtype: string
- name: description
dtype: string
- name: features
dtype: int64
- name: seq_length
dtype: int64
splits:
- name: train
num_bytes: 20345583566
num_examples: 137283
download_size: 9397135953
dataset_size: 20345583566
---
# Dataset Card for "ncbi_genbank_part_1"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 642 | [
[
-0.046905517578125,
-0.0256500244140625,
0.0126953125,
0.016937255859375,
-0.0302276611328125,
0.01313018798828125,
0.040130615234375,
0.00020897388458251953,
0.0635986328125,
0.038116455078125,
-0.05865478515625,
-0.064697265625,
-0.03204345703125,
-0.00451... |
xivin/test3 | 2023-10-05T16:14:06.000Z | [
"region:us"
] | xivin | null | null | 0 | 13 | 2023-10-05T15:04:57 | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 28000
num_examples: 1000
download_size: 2170
dataset_size: 28000
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "test3"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 426 | [
[
-0.04132080078125,
-0.01328277587890625,
0.02020263671875,
0.016571044921875,
-0.002437591552734375,
-0.003795623779296875,
0.02923583984375,
-0.0168609619140625,
0.035064697265625,
0.0215911865234375,
-0.0513916015625,
-0.04864501953125,
-0.03179931640625,
... |
HamdanXI/paradetox_with_editOps | 2023-10-06T12:21:19.000Z | [
"region:us"
] | HamdanXI | null | null | 0 | 13 | 2023-10-06T12:21:17 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: en_toxic_comment
dtype: string
- name: en_neutral_comment
dtype: string
- name: edit_ops
sequence:
sequence: string
splits:
- name: train
num_bytes: 4067285
num_examples: 19744
download_size: 1996316
dataset_size: 4067285
---
# Dataset Card for "difference_analysis_data_structure"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 578 | [
[
-0.039306640625,
-0.0246734619140625,
0.004848480224609375,
0.0223846435546875,
-0.00728607177734375,
0.0038013458251953125,
0.0309295654296875,
-0.0168304443359375,
0.0682373046875,
0.00495147705078125,
-0.052886962890625,
-0.05731201171875,
-0.05438232421875,
... |
Zaid/iAshaar | 2023-10-07T08:01:46.000Z | [
"region:us"
] | Zaid | null | null | 0 | 13 | 2023-10-07T08:01:29 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: poem title
dtype: string
- name: poem meter
dtype: string
- name: poem verses
sequence: string
- name: poem theme
dtype: string
- name: poem url
dtype: string
- name: poet name
dtype: string
- name: poet description
dtype: string
- name: poet url
dtype: string
- name: poet era
dtype: string
- name: poet location
dtype: string
- name: poem description
list:
- name: attributes
struct:
- name: class
dtype: string
- name: color
dtype: string
- name: dir
dtype: string
- name: face
dtype: string
- name: id
dtype: string
- name: lang
dtype: string
- name: style
dtype: string
- name: children
list:
- name: attributes
struct:
- name: color
dtype: string
- name: dir
dtype: string
- name: face
dtype: string
- name: href
dtype: string
- name: id
dtype: string
- name: lang
dtype: string
- name: style
dtype: string
- name: title
dtype: string
- name: value
dtype: string
- name: children
list:
- name: attributes
struct:
- name: class
dtype: string
- name: color
dtype: string
- name: dir
dtype: string
- name: face
dtype: string
- name: lang
dtype: string
- name: style
dtype: string
- name: children
list:
- name: attributes
struct:
- name: align
dtype: string
- name: face
dtype: string
- name: nowrap
dtype: string
- name: name
dtype: string
- name: parentAttributes
struct:
- name: lang
dtype: string
- name: style
dtype: string
- name: size
dtype: int64
- name: text
dtype: string
- name: truncated
dtype: bool
- name: type
dtype: string
- name: name
dtype: string
- name: parentAttributes
struct:
- name: dir
dtype: string
- name: face
dtype: string
- name: id
dtype: string
- name: lang
dtype: string
- name: style
dtype: string
- name: partA
dtype: string
- name: size
dtype: int64
- name: text
dtype: string
- name: truncated
dtype: bool
- name: type
dtype: string
- name: name
dtype: string
- name: parentAttributes
struct:
- name: class
dtype: string
- name: color
dtype: string
- name: dir
dtype: string
- name: id
dtype: string
- name: lang
dtype: string
- name: style
dtype: string
- name: partA
dtype: string
- name: partB
dtype: string
- name: size
dtype: int64
- name: text
dtype: string
- name: truncated
dtype: bool
- name: type
dtype: string
- name: name
dtype: string
- name: parentAttributes
struct:
- name: dir
dtype: string
- name: style
dtype: string
- name: partA
dtype: string
- name: partB
dtype: string
- name: size
dtype: int64
- name: text
dtype: string
- name: truncated
dtype: bool
- name: type
dtype: string
- name: poem language type
dtype: string
- name: text
dtype: string
- name: instruction
dtype: string
- name: response
dtype: string
splits:
- name: train
num_bytes: 846365314
num_examples: 136422
download_size: 370389390
dataset_size: 846365314
---
# Dataset Card for "iAshaar"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 4,406 | [
[
-0.0447998046875,
-0.01261138916015625,
0.002994537353515625,
0.016876220703125,
-0.0142822265625,
0.0002548694610595703,
0.03204345703125,
-0.018890380859375,
0.06005859375,
0.035736083984375,
-0.04229736328125,
-0.049774169921875,
-0.0562744140625,
-0.0238... |
tyzhu/synpre_delete_1M | 2023-10-08T09:12:38.000Z | [
"region:us"
] | tyzhu | null | null | 0 | 13 | 2023-10-08T09:08:19 | ---
dataset_info:
features:
- name: inputs
dtype: string
- name: targets
dtype: string
splits:
- name: train
num_bytes: 1742619734
num_examples: 1000000
- name: validation
num_bytes: 17552085
num_examples: 10000
download_size: 1091004286
dataset_size: 1760171819
---
# Dataset Card for "synpre_delete_1M"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 477 | [
[
-0.04681396484375,
-0.0223541259765625,
-0.00360870361328125,
0.0106964111328125,
-0.0186614990234375,
-0.0082550048828125,
0.006587982177734375,
-0.0174560546875,
0.05950927734375,
0.0355224609375,
-0.06982421875,
-0.061798095703125,
-0.035308837890625,
-0.... |
sankettgorey/donut_two_layouts | 2023-10-09T11:42:10.000Z | [
"region:us"
] | sankettgorey | null | null | 0 | 13 | 2023-10-09T11:41:19 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
- split: valid
path: data/valid-*
dataset_info:
features:
- name: image
dtype: image
- name: ground_truth
dtype: string
splits:
- name: train
num_bytes: 511297223.058686
num_examples: 1279
- name: test
num_bytes: 64193822.85115697
num_examples: 160
- name: valid
num_bytes: 63774381.85115697
num_examples: 160
download_size: 575142166
dataset_size: 639265427.7609999
---
# Dataset Card for "donut_two_layouts"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 726 | [
[
-0.02374267578125,
-0.019256591796875,
0.017120361328125,
0.021148681640625,
0.003162384033203125,
0.007656097412109375,
0.0149688720703125,
0.00962066650390625,
0.0662841796875,
0.047698974609375,
-0.052215576171875,
-0.046478271484375,
-0.032623291015625,
... |
dmrau/cqadupstack-android | 2023-10-09T12:39:30.000Z | [
"region:us"
] | dmrau | null | null | 0 | 13 | 2023-10-09T12:39:26 | ---
configs:
- config_name: default
data_files:
- split: queries
path: data/queries-*
- split: corpus
path: data/corpus-*
dataset_info:
features:
- name: _id
dtype: string
- name: text
dtype: string
- name: title
dtype: string
splits:
- name: queries
num_bytes: 47953
num_examples: 699
- name: corpus
num_bytes: 12840959
num_examples: 22998
download_size: 7657118
dataset_size: 12888912
---
# Dataset Card for "cqadupstack-android"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 624 | [
[
-0.050933837890625,
-0.00910186767578125,
0.02142333984375,
0.017303466796875,
-0.0228424072265625,
0.0230865478515625,
0.03271484375,
-0.019256591796875,
0.06292724609375,
0.0389404296875,
-0.06298828125,
-0.04345703125,
-0.03533935546875,
-0.02061462402343... |
carnival13/eng_sur_val_DA_tokenized_rt5 | 2023-10-09T16:31:43.000Z | [
"region:us"
] | carnival13 | null | null | 0 | 13 | 2023-10-09T16:31:40 | ---
dataset_info:
features:
- name: pass_label
dtype: int64
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
splits:
- name: train
num_bytes: 6022485
num_examples: 5000
download_size: 1353838
dataset_size: 6022485
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "eng_sur_val_DA_tokenized_rt5"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 545 | [
[
-0.045074462890625,
-0.01476287841796875,
0.0023403167724609375,
0.0318603515625,
-0.0216522216796875,
0.005580902099609375,
0.0272369384765625,
-0.013824462890625,
0.056610107421875,
0.043212890625,
-0.050506591796875,
-0.064697265625,
-0.048370361328125,
0... |
carnival13/eng_sur_DA_tokenized_rt5 | 2023-10-09T16:36:05.000Z | [
"region:us"
] | carnival13 | null | null | 0 | 13 | 2023-10-09T16:35:59 | ---
dataset_info:
features:
- name: pass_label
dtype: int64
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
splits:
- name: train
num_bytes: 104310930
num_examples: 155590
download_size: 23898508
dataset_size: 104310930
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "eng_sur_DA_tokenized_rt5"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 548 | [
[
-0.045257568359375,
-0.01340484619140625,
0.005153656005859375,
0.031036376953125,
-0.02227783203125,
0.0078582763671875,
0.0222320556640625,
-0.017120361328125,
0.058837890625,
0.03765869140625,
-0.051300048828125,
-0.06768798828125,
-0.045623779296875,
0.0... |
rpii2023/lallalala | 2023-10-11T05:03:51.000Z | [
"region:us"
] | rpii2023 | null | null | 0 | 13 | 2023-10-10T13:52:32 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: text
dtype: string
- name: labels
dtype: int64
splits:
- name: train
num_bytes: 8865217
num_examples: 5247
- name: test
num_bytes: 2544613
num_examples: 1500
download_size: 5971582
dataset_size: 11409830
---
# Dataset Card for "lallalala"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 571 | [
[
-0.0295867919921875,
-0.0264739990234375,
-0.0026264190673828125,
0.036956787109375,
-0.00498199462890625,
0.0016889572143554688,
0.023101806640625,
-0.0158538818359375,
0.061187744140625,
0.0228118896484375,
-0.0440673828125,
-0.04962158203125,
-0.0436401367187... |
MattPiscopo/SolvayLLM_Data | 2023-10-14T20:42:15.000Z | [
"task_categories:conversational",
"language:fr",
"region:us"
] | MattPiscopo | null | null | 0 | 13 | 2023-10-11T19:39:15 | ---
task_categories:
- conversational
language:
- fr
pretty_name: Données de fine-tuning LLM du cours de Droit du travail DROI-S3002.
---
<h1><u>Lessons covered*:</u></h1>
<b><I>PART 1</I></b>
- <u>Lesson 1:</u> Introduction to the notion of labour law [✔️]
- <u>Lesson 2:</u> Introduction to the history of labour law [✔️]
- <u>Lesson 3:</u> Notion and functions of the OPR [✔️]
- <u>Lesson 4:</u> Institutional bodies of social dialogue [✔️]
- <u>Lesson 5:</u> Negotiated norms of labour law [✔️]
<h1><u>Additional data:</u></h1>
- Professor's contact details [❌]
- Course fact sheet [❌]
- Labour law code (articles + references in the answers) [❌]
<small>*Based on the DROI-S3002 course of the 2022 - 2023 academic year.</small>
[
-0.0254364013671875,
-0.034393310546875,
0.026947021484375,
0.033660888671875,
-0.007457733154296875,
0.00921630859375,
0.003009796142578125,
-0.016143798828125,
0.044586181640625,
0.03515625,
-0.047637939453125,
-0.030487060546875,
-0.0305328369140625,
0.01... |
josiahchung/radiology-findings | 2023-10-12T15:56:05.000Z | [
"region:us"
] | josiahchung | null | null | 0 | 13 | 2023-10-12T15:55:04 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
khaimaitien/qa-expert-multi-hop-qa-V1.0 | 2023-10-14T09:06:44.000Z | [
"task_categories:question-answering",
"task_categories:text-generation",
"size_categories:10K<n<100K",
"language:en",
"region:us"
] | khaimaitien | null | null | 0 | 13 | 2023-10-13T03:17:28 | ---
task_categories:
- question-answering
- text-generation
language:
- en
pretty_name: Multi-hop Question Answering
size_categories:
- 10K<n<100K
---
# Dataset Card for QA-Expert-multi-hop-qa-V1.0
This dataset aims to provide training data for the task: Question Answering, with a focus on <b>Multi-hop Question Answering</b>.
In total, our training dataset contains 27720 data points (train & validation) including *single questions* and *multi-hop questions*.
You can take a look at the model we trained on this data: [https://huggingface.co/khaimaitien/qa-expert-7B-V1.0](https://huggingface.co/khaimaitien/qa-expert-7B-V1.0)
Example of single question: what is the capital city of Vietnam?
Example of multi-hop question: what is the population of the capital city of Vietnam?
## Dataset Details
### Dataset Description
For more information (the prompts we used and the code) about how we created our dataset, please take a look at: [https://github.com/khaimt/qa_expert/tree/main/train#training-data](https://github.com/khaimt/qa_expert/tree/main/train#training-data)
This training dataset contains 22720 data points from several sources:
#### Single Questions
We use single questions from the following sources:
+ [Squad](https://huggingface.co/datasets/squad_v2): We randomly select 4000 answerable questions + 2400 unanswerable questions.
As the answers to these questions are short spans, we use an OpenAI model to generate a complete answer given the question and context.
+ [BoolQ](https://huggingface.co/datasets/boolq): We randomly select 1600 questions.
As the answers to these questions are yes/no, we also use an OpenAI model to generate complete answers.
This type of question is more difficult and needs reasoning (like Chain-of-Thought), so we ask the model to first generate the reasoning and then the final answer.
+ [Drop](https://huggingface.co/datasets/drop): We randomly select 1600 questions.
The answers to these questions are also short and lack explanation,
so we also use an OpenAI model to generate the reasoning, the arithmetic sequence (if needed), and the final answer.
#### Multi-hop Questions
For multi-hop questions, we use [Musique](https://github.com/StonyBrookNLP/musique) and generated data.
+ [Musique]: This dataset is from [Musique](https://github.com/StonyBrookNLP/musique).
The authors built these multi-hop questions based on single questions from various sources such as squad2, natural questions, zerore, ...
However, we found that some single questions are not well-formed (not an actual question and containing <b>">>"</b>), such as "Stadio Ciro Vigorito >> occupant". So we removed all data points that had at least one ill-formed single question.
Finally, we obtained 5847 answerable multi-hop questions, and we also randomly selected 2400 unanswerable multi-hop questions.
Each multi-hop question is decomposed into 2 or more single questions, and each single question has a short span answer.
Here is how we process the data (a rough sketch follows after this list):
  + First, for each single question, we generate the complete answer using an OpenAI model.
  + Next, we generate the final answer based on the <b>generated answers</b> from the single questions.
+ Our generated data: We used an OpenAI model to generate multi-hop questions; please take a look here: [https://github.com/khaimt/qa_expert/tree/main/train#generate-training-data](https://github.com/khaimt/qa_expert/tree/main/train#generate-training-data)
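A rough, illustrative sketch of that two-step generation; `generate` is a placeholder for the OpenAI model call actually used, and the prompt wording is invented rather than taken from our pipeline (see the linked repository for the real prompts):
```python
# Illustrative sketch only. `generate` is a stand-in for the OpenAI model call
# actually used; the prompt wording below is an assumption, not the real prompt.
def generate(prompt: str) -> str:
    raise NotImplementedError("replace with a call to an OpenAI model")

def build_multi_hop_answer(question: str, sub_questions: list) -> dict:
    # Step 1: turn each single question's short span answer into a complete answer.
    for sq in sub_questions:
        sq["long_answer"] = generate(
            f"Context: {sq['paragraph']}\n"
            f"Question: {sq['question']}\n"
            "Answer in one or two complete sentences."
        )
    # Step 2: produce the final answer from the generated single-question answers.
    final_answer = generate(
        f"Multi-hop question: {question}\n"
        "Intermediate answers:\n"
        + "\n".join(sq["long_answer"] for sq in sub_questions)
    )
    return {"question": question, "sub_questions": sub_questions, "final_answer": final_answer}
```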
### Format
Each data point is a JSON object (an illustrative example follows the field list below):
+ *src*: source of the data point: squad.json, drop.json, boolq.json, musicque.json or gen_qa.json
+ *question*: the question, either a single question or a multi-hop question
+ *final_answer*: the final answer to the question --> the model will generate this answer in the end
+ *answer*: span answer or None --> please ignore this, it is just an additional field of information
+ *sub_questions*: list of single questions to answer in order to answer the multi-hop question. If len(sub_questions) == 1 --> this is a single question, not a multi-hop question
  + *question*: the single question to ask
  + *answer*: the span answer, or None/missing --> please ignore this, it is just an additional field of information
  + *long_answer*: the complete answer to this single question
  + *paragraph*: the context of the single question (this is considered the retrieved context for the single question)
  + *unanswerable*: True if this question is unanswerable --> you can rely on *long_answer* instead; note this field might be missing, and the default value is False.
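Below is an entirely hypothetical record following that layout (written as a Python dict; all values are invented for illustration and do not come from the dataset):
```python
# Hypothetical example record, written as a Python dict for readability.
# All values are invented for illustration and do not appear in the dataset.
example_record = {
    "src": "musicque.json",
    "question": "What is the population of the capital city of Vietnam?",
    "final_answer": "Hanoi, the capital of Vietnam, has a population of roughly 8 million people.",
    "answer": None,
    "sub_questions": [
        {
            "question": "What is the capital city of Vietnam?",
            "answer": "Hanoi",
            "long_answer": "The capital city of Vietnam is Hanoi.",
            "paragraph": "Hanoi is the capital and second-largest city of Vietnam ...",
            "unanswerable": False,
        },
        {
            "question": "What is the population of Hanoi?",
            "answer": "roughly 8 million",
            "long_answer": "Hanoi has a population of roughly 8 million people.",
            "paragraph": "Recent estimates put Hanoi's population at about 8 million ...",
            "unanswerable": False,
        },
    ],
}
```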
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
```
@Misc{qa-expert,
title={QA Expert: LLM for Multi-hop Question Answering},
author={Khai Mai},
howpublished={\url{https://github.com/khaimt/qa_expert}},
year={2023},
}
```
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] | 8,547 | [
[
-0.042724609375,
-0.0660400390625,
0.0249786376953125,
0.01554107666015625,
-0.00672149658203125,
-0.00811004638671875,
0.005908966064453125,
-0.03802490234375,
0.03656005859375,
0.045623779296875,
-0.062744140625,
-0.04205322265625,
-0.02716064453125,
0.012... |
Kabatubare/frederick | 2023-10-15T11:50:31.000Z | [
"region:us"
] | Kabatubare | null | null | 0 | 13 | 2023-10-14T17:09:52 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
olaaaiap/test_anotatuta | 2023-10-15T09:14:38.000Z | [
"region:us"
] | olaaaiap | null | null | 0 | 13 | 2023-10-15T09:14:18 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
mmcho1157/attackgpt_base | 2023-10-25T05:52:46.000Z | [
"region:us"
] | mmcho1157 | null | null | 0 | 13 | 2023-10-15T12:28:57 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 464
num_examples: 2
download_size: 0
dataset_size: 464
---
# Dataset Card for "attackgpt_base"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 425 | [
[
-0.047943115234375,
-0.03131103515625,
0.00252532958984375,
0.01220703125,
-0.00913238525390625,
-0.0019254684448242188,
0.0214080810546875,
-0.0011005401611328125,
0.0491943359375,
0.03179931640625,
-0.040802001953125,
-0.048492431640625,
-0.054290771484375,
... |
lazaroq11/billqa3 | 2023-10-15T19:33:58.000Z | [
"region:us"
] | lazaroq11 | null | null | 0 | 13 | 2023-10-15T15:50:32 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
yusuf802/leaf-images | 2023-10-15T17:16:13.000Z | [
"region:us"
] | yusuf802 | null | null | 0 | 13 | 2023-10-15T16:59:42 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': Apple_Black_rot
'1': Apple_Cedar_apple_rust
'2': Apple_Powdery_mildew
'3': Apple_healthy
'4': Apple_scab
'5': Cherry_(including_sour)_Powdery_mildew
'6': Cherry_(including_sour)_healthy
'7': Corn_(maize)_Cercospora_leaf_spot Gray_leaf_spot
'8': Corn_(maize)_Common_rust
'9': Corn_(maize)_Northern_Leaf_Blight
'10': Corn_(maize)_healthy
'11': Cotton_leaf_diseased
'12': Cotton_leaf_fresh
'13': Grape_Black_rot
'14': Grape___Esca_(Black_Measles)
'15': Grape___Leaf_blight_(Isariopsis_Leaf_Spot)
'16': Grape___healthy
'17': Orange_Haunglongbing_(Citrus_greening)
'18': Orange__Black_Rot
'19': Orange__Canker
'20': Orange__Healthy
'21': Peach_Bacterial_spot
'22': Peach_healthy
'23': Pepper,_bell_Bacterial_spot
'24': Pepper,_bell_healthy
'25': Potato_Early_blight
'26': Potato_Late_blight
'27': Potato_healthy
'28': Squash_Powdery_mildew
'29': Strawberry_Leaf_scorch
'30': Strawberry_healthy
'31': Tomato_Bacterial_spot
'32': Tomato_Early_blight
'33': Tomato_Late_blight
'34': Tomato_Leaf_Mold
'35': Tomato_Septoria_leaf_spot
'36': Tomato_Spider_mites_Two_spotted_spider_mite
'37': Tomato_Target_Spot
'38': Tomato_Tomato_Yellow_Leaf_Curl_Virus
'39': Tomato_Tomato_mosaic_virus
'40': Tomato_healthy
'41': Wheat_healthy
'42': Wheat_leaf_rust
'43': Wheat_nitrogen_deficiency
splits:
- name: train
num_bytes: 7355420032.737346
num_examples: 56842
- name: test
num_bytes: 1331846480.2826538
num_examples: 10032
download_size: 8653117062
dataset_size: 8687266513.02
---
# Dataset Card for "leaf-images"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 2,345 | [
[
-0.031341552734375,
-0.01129913330078125,
0.01461029052734375,
0.00890350341796875,
-0.0186004638671875,
0.0008702278137207031,
0.029571533203125,
-0.025115966796875,
0.0653076171875,
0.031951904296875,
-0.07098388671875,
-0.0579833984375,
-0.040771484375,
-... |
dreeven/hand_vs_machine | 2023-10-16T09:02:06.000Z | [
"region:us"
] | dreeven | null | null | 0 | 13 | 2023-10-16T09:00:31 | ---
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': hand
'1': machine
- name: ground_truth
dtype: string
splits:
- name: test
num_bytes: 155717.0
num_examples: 2
- name: train
num_bytes: 153606.0
num_examples: 18
download_size: 297352
dataset_size: 309323.0
---
# Dataset Card for "hand_vs_machine"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 560 | [
[
-0.038726806640625,
-0.0316162109375,
0.0173797607421875,
0.0001761913299560547,
-0.0203399658203125,
-0.00539398193359375,
0.0207366943359375,
-0.014678955078125,
0.055328369140625,
0.032470703125,
-0.0523681640625,
-0.038604736328125,
-0.05517578125,
-0.01... |
Shiveswarran/llm_instruction_code_manual_v8 | 2023-10-17T19:05:01.000Z | [
"region:us"
] | Shiveswarran | null | null | 0 | 13 | 2023-10-16T18:52:47 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.057220458984375,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.00507354736328125,
0.0513916015625,
0.0169830322265625,
-0.052032470703125,
-0.014984130859375,
-0.060455322265625,
0.037... |
fahrialfiansyah/openstax_instruction_list | 2023-10-17T01:53:50.000Z | [
"region:us"
] | fahrialfiansyah | null | null | 0 | 13 | 2023-10-17T01:53:29 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
tyzhu/eval_tag_nq_test_v10_first_0 | 2023-10-17T16:47:14.000Z | [
"region:us"
] | tyzhu | null | null | 0 | 13 | 2023-10-17T16:47:04 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
dataset_info:
features:
- name: question
dtype: string
- name: title
dtype: string
- name: inputs
dtype: string
- name: targets
dtype: string
- name: answers
struct:
- name: answer_start
sequence: 'null'
- name: text
sequence: string
- name: id
dtype: string
- name: titles
dtype: string
splits:
- name: train
num_bytes: 2524
num_examples: 10
- name: validation
num_bytes: 1001116
num_examples: 3610
download_size: 617056
dataset_size: 1003640
---
# Dataset Card for "eval_tag_nq_test_v10_first_0"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 860 | [
[
-0.04669189453125,
-0.03143310546875,
0.0009832382202148438,
0.00936126708984375,
-0.01397705078125,
0.01201629638671875,
0.035614013671875,
0.00174713134765625,
0.059112548828125,
0.0291290283203125,
-0.049591064453125,
-0.045867919921875,
-0.01904296875,
0... |
Back-up/education-QA-data | 2023-10-17T16:55:38.000Z | [
"region:us"
] | Back-up | null | null | 0 | 13 | 2023-10-17T16:55:36 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: question
dtype: string
- name: answer
dtype: string
- name: title
dtype: string
- name: date
dtype: string
- name: university
dtype: string
- name: code
dtype: string
- name: url
dtype: string
splits:
- name: train
num_bytes: 1250212
num_examples: 1169
download_size: 402389
dataset_size: 1250212
---
# Dataset Card for "education-QA-data"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 654 | [
[
-0.034210205078125,
-0.02069091796875,
0.0223541259765625,
-0.005062103271484375,
-0.00228118896484375,
0.01502227783203125,
0.0333251953125,
0.0018224716186523438,
0.057891845703125,
0.0240936279296875,
-0.055572509765625,
-0.058441162109375,
-0.031768798828125... |
zkdeng/combinedSpidersBalanced | 2023-10-19T01:35:11.000Z | [
"region:us"
] | zkdeng | null | null | 0 | 13 | 2023-10-19T00:46:30 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': Acantholycosa_lignaria
'1': Aculepeira_ceropegia
'2': Agalenatea_redii
'3': Agelena_labyrinthica
'4': Aglaoctenus_castaneus
'5': Aglaoctenus_lagotis
'6': Allocosa_funerea
'7': Allotrochosina_schauinslandi
'8': Alopecosa_albofasciata
'9': Alopecosa_barbipes
'10': Alopecosa_cuneata
'11': Alopecosa_inquilina
'12': Alopecosa_kochi
'13': Alopecosa_pulverulenta
'14': Anahita_punctulata
'15': Anasaitis_canosa
'16': Ancylometes_bogotensis
'17': Ancylometes_concolor
'18': Ancylometes_rufus
'19': Anoteropsis_hilaris
'20': Anoteropsis_litoralis
'21': Anyphaena_accentuata
'22': Aphonopelma_hentzi
'23': Araneus_diadematus
'24': Araneus_marmoreus
'25': Araneus_quadratus
'26': Araneus_trifolium
'27': Araniella_displicata
'28': Arctosa_cinerea
'29': Arctosa_leopardus
'30': Arctosa_littoralis
'31': Arctosa_perita
'32': Arctosa_personata
'33': Argiope_argentata
'34': Argiope_aurantia
'35': Argiope_bruennichi
'36': Argiope_keyserlingi
'37': Argiope_lobata
'38': Argiope_trifasciata
'39': Asthenoctenus_borellii
'40': Attulus_fasciger
'41': Aulonia_albimana
'42': Austracantha_minax
'43': Badumna_longinqua
'44': Carrhotus_xanthogramma
'45': Centroctenus_brevipes
'46': Cheiracanthium_erraticum
'47': Cheiracanthium_gracile
'48': Cheiracanthium_inclusum
'49': Cheiracanthium_mildei
'50': Cheiracanthium_punctorium
'51': Colonus_hesperus
'52': Colonus_sylvanus
'53': Ctenus_amphora
'54': Ctenus_hibernalis
'55': Ctenus_medius
'56': Ctenus_ornatus
'57': Cupiennius_coccineus
'58': Cupiennius_getazi
'59': Cupiennius_salei
'60': Cyclosa_conica
'61': Cyrtophora_citricola
'62': Diapontia_uruguayensis
'63': Dolomedes_albineus
'64': Dolomedes_minor
'65': Dolomedes_scriptus
'66': Dolomedes_tenebrosus
'67': Dolomedes_triton
'68': Dysdera_crocata
'69': Ebrechtella_tricuspidata
'70': Enoplognatha_ovata
'71': Eratigena_agrestis
'72': Eratigena_duellica
'73': Eriophora_ravilla
'74': Eris_militaris
'75': Evarcha_arcuata
'76': Gasteracantha_cancriformis
'77': Geolycosa_vultuosa
'78': Gladicosa_gulosa
'79': Gladicosa_pulchra
'80': Habronattus_pyrrithrix
'81': Hasarius_adansoni
'82': Helpis_minitabunda
'83': Hentzia_mitrata
'84': Hentzia_palmarum
'85': Herpyllus_ecclesiasticus
'86': Heteropoda_venatoria
'87': Hippasa_holmerae
'88': Hogna_antelucana
'89': Hogna_baltimoriana
'90': Hogna_bivittata
'91': Hogna_carolinensis
'92': Hogna_crispipes
'93': Hogna_frondicola
'94': Hogna_gumia
'95': Hogna_radiata
'96': Holocnemus_pluchei
'97': Kukulcania_hibernalis
'98': Lampona_cylindrata
'99': Larinioides_cornutus
'100': Larinioides_sclopetarius
'101': Latrodectus_bishopi
'102': Latrodectus_curacaviensis
'103': Latrodectus_geometricus
'104': Latrodectus_hasselti
'105': Latrodectus_hesperus
'106': Latrodectus_katipo
'107': Latrodectus_mactans
'108': Latrodectus_mirabilis
'109': Latrodectus_renivulvatus
'110': Latrodectus_tredecimguttatus
'111': Latrodectus_variolus
'112': Leucauge_argyra
'113': Leucauge_argyrobapta
'114': Leucauge_dromedaria
'115': Leucauge_venusta
'116': Loxosceles_amazonica
'117': Loxosceles_deserta
'118': Loxosceles_laeta
'119': Loxosceles_reclusa
'120': Loxosceles_rufescens
'121': Loxosceles_tenochtitlan
'122': Loxosceles_yucatana
'123': Lycosa_erythrognatha
'124': Lycosa_hispanica
'125': Lycosa_pampeana
'126': Lycosa_praegrandis
'127': Lycosa_singoriensis
'128': Lycosa_tarantula
'129': Lyssomanes_viridis
'130': Maevia_inclemens
'131': Mangora_acalypha
'132': Maratus_griseus
'133': Marpissa_muscosa
'134': Mecynogea_lemniscata
'135': Menemerus_bivittatus
'136': Menemerus_semilimbatus
'137': Micrathena_gracilis
'138': Micrathena_sagittata
'139': Micrommata_virescens
'140': Missulena_bradleyi
'141': Missulena_occatoria
'142': Misumena_vatia
'143': Misumenoides_formosipes
'144': Misumessus_oblongus
'145': Naphrys_pulex
'146': Neoscona_arabesca
'147': Neoscona_crucifera
'148': Neoscona_oaxacensis
'149': Nephila_pilipes
'150': Neriene_radiata
'151': Nesticodes_rufipes
'152': Nuctenea_umbratica
'153': Oxyopes_salticus
'154': Oxyopes_scalaris
'155': Paraphidippus_aurantius
'156': Parasteatoda_tepidariorum
'157': Paratrochosina_amica
'158': Pardosa_amentata
'159': Pardosa_lapidicina
'160': Pardosa_mercurialis
'161': Pardosa_moesta
'162': Pardosa_wagleri
'163': Peucetia_viridans
'164': Phidippus_audax
'165': Phidippus_clarus
'166': Phidippus_johnsoni
'167': Phidippus_putnami
'168': Philaeus_chrysops
'169': Philodromus_dispar
'170': Pholcus_phalangioides
'171': Phoneutria_boliviensis
'172': Phoneutria_depilata
'173': Phoneutria_fera
'174': Phoneutria_nigriventer
'175': Phoneutria_pertyi
'176': Phoneutria_reidyi
'177': Pirata_piraticus
'178': Pisaura_mirabilis
'179': Pisaurina_mira
'180': Platycryptus_californicus
'181': Platycryptus_undatus
'182': Plebs_eburnus
'183': Plexippus_paykulli
'184': Portacosa_cinerea
'185': Rabidosa_hentzi
'186': Rabidosa_punctulata
'187': Rabidosa_rabida
'188': Salticus_scenicus
'189': Sassacus_vitis
'190': Schizocosa_avida
'191': Schizocosa_malitiosa
'192': Schizocosa_mccooki
'193': Scytodes_thoracica
'194': Sicarius_thomisoides
'195': Socca_pustulosa
'196': Sosippus_californicus
'197': Steatoda_grossa
'198': Steatoda_nobilis
'199': Steatoda_triangulosa
'200': Synema_globosum
'201': Thomisus_onustus
'202': Tigrosa_annexa
'203': Tigrosa_aspersa
'204': Tigrosa_georgicola
'205': Tigrosa_helluo
'206': Trichonephila_clavata
'207': Trichonephila_clavipes
'208': Trichonephila_edulis
'209': Trichonephila_plumipes
'210': Trochosa_ruricola
'211': Trochosa_sepulchralis
'212': Trochosa_terricola
'213': Tropicosa_moesta
'214': Venator_immansuetus
'215': Venator_spenceri
'216': Venatrix_furcillata
'217': Verrucosa_arenata
'218': Wadicosa_fidelis
'219': Xerolycosa_miniata
'220': Xerolycosa_nemoralis
'221': Zoropsis_spinimana
'222': Zygiella_x-notata
splits:
- name: train
num_bytes: 6290926802.784
num_examples: 269172
download_size: 5634209308
dataset_size: 6290926802.784
---
# Dataset Card for "combinedSpidersBalanced"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 8,488 | [
[
-0.04510498046875,
-0.019500732421875,
-0.0050201416015625,
0.027191162109375,
-0.0309600830078125,
0.012969970703125,
0.01081085205078125,
-0.01532745361328125,
0.0750732421875,
0.049102783203125,
-0.06329345703125,
-0.046539306640625,
-0.046600341796875,
-... |
bh8648/split_dataset_2 | 2023-10-19T10:18:31.000Z | [
"region:us"
] | bh8648 | null | null | 0 | 13 | 2023-10-19T10:18:28 | ---
dataset_info:
features:
- name: instruction
dtype: string
- name: output
dtype: string
- name: page_num
dtype: int64
splits:
- name: train
num_bytes: 713287
num_examples: 212
download_size: 374432
dataset_size: 713287
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "split_dataset_2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 517 | [
[
-0.038177490234375,
-0.030731201171875,
0.006999969482421875,
0.022979736328125,
-0.0308837890625,
0.01062774658203125,
0.02301025390625,
-0.01593017578125,
0.05621337890625,
0.03497314453125,
-0.056304931640625,
-0.02655029296875,
-0.0496826171875,
-0.02688... |
Imran1/PhysiotherapyExercises | 2023-10-19T14:19:23.000Z | [
"region:us"
] | Imran1 | null | null | 0 | 13 | 2023-10-19T14:18:27 | ---
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': Elbove Extension
'1': KNEE Flexion
'2': NECK Exercise
'3': PlanterFlexion of Foot
'4': Trunk Extension
'5': Trunk Flexion
'6': Wrist Extension
'7': Wrist Flexion
splits:
- name: train
num_bytes: 1588400544.8692603
num_examples: 2784
download_size: 1182979826
dataset_size: 1588400544.8692603
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "PhysiotherapyExercises"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 783 | [
[
-0.02154541015625,
0.006488800048828125,
0.0197906494140625,
0.01091766357421875,
-0.0124969482421875,
-0.01497650146484375,
0.028289794921875,
-0.01027679443359375,
0.059783935546875,
0.03466796875,
-0.06085205078125,
-0.046966552734375,
-0.04388427734375,
... |
Ailaysa-MTPE/en_ta_wiki_words | 2023-10-21T04:19:16.000Z | [
"region:us"
] | Ailaysa-MTPE | null | null | 1 | 13 | 2023-10-21T04:19:06 | ---
dataset_info:
features:
- name: en
dtype: string
- name: ta
dtype: string
splits:
- name: train
num_bytes: 9442019
num_examples: 125712
download_size: 4020340
dataset_size: 9442019
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "en_ta_wiki_words"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 475 | [
[
-0.045654296875,
-0.0301666259765625,
0.01111602783203125,
0.0093994140625,
-0.0272064208984375,
0.0005650520324707031,
-0.01068878173828125,
-0.020050048828125,
0.06878662109375,
0.033721923828125,
-0.058319091796875,
-0.045989990234375,
-0.051116943359375,
... |
ChristophSchuhmann/yt-urls-for-emotional-tts | 2023-10-21T07:46:42.000Z | [
"region:us"
] | ChristophSchuhmann | null | null | 0 | 13 | 2023-10-21T07:45:25 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
tomashs/LSC_Acronyms_LDA_topics | 2023-10-22T00:39:02.000Z | [
"region:us"
] | tomashs | null | null | 0 | 13 | 2023-10-22T00:38:35 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
dataset_info:
features:
- name: text
dtype: string
- name: short_form
dtype: string
- name: long_form
dtype: string
- name: label
dtype: int64
- name: topic_vector
sequence: float32
splits:
- name: train
num_bytes: 262502596
num_examples: 352720
- name: validation
num_bytes: 56048086
num_examples: 75339
- name: test
num_bytes: 56294328
num_examples: 75540
download_size: 117708613
dataset_size: 374845010
---
# Dataset Card for "LSC_Acronyms_LDA_topics"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 834 | [
[
-0.04156494140625,
-0.028564453125,
0.028106689453125,
0.00004839897155761719,
-0.0264892578125,
0.0191802978515625,
0.02978515625,
-0.0027904510498046875,
0.0880126953125,
0.0240631103515625,
-0.061279296875,
-0.053924560546875,
-0.036773681640625,
-0.01351... |
sahityas/goodreads-llama-7b | 2023-10-25T19:14:00.000Z | [
"region:us"
] | sahityas | null | null | 0 | 13 | 2023-10-22T01:19:22 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 27512
num_examples: 254
download_size: 15892
dataset_size: 27512
---
# Dataset Card for "goodreads-llama-7b"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 439 | [
[
-0.031585693359375,
-0.00424957275390625,
0.01160430908203125,
0.0286865234375,
-0.054443359375,
0.0017852783203125,
0.041229248046875,
-0.0301361083984375,
0.06884765625,
0.043548583984375,
-0.0484619140625,
-0.0565185546875,
-0.0538330078125,
0.01404571533... |
euclaise/LittleTown | 2023-10-23T17:04:15.000Z | [
"size_categories:10K<n<100K",
"license:other",
"arxiv:2210.01240",
"region:us"
] | euclaise | null | null | 1 | 13 | 2023-10-22T19:30:20 | ---
license: other
size_categories:
- 10K<n<100K
dataset_info:
features:
- name: question
dtype: string
- name: answer
dtype: string
splits:
- name: train
num_bytes: 75640201
num_examples: 100000
download_size: 16577014
dataset_size: 75640201
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "LittleTown"
[Language models are greedy reasoners](https://arxiv.org/pdf/2210.01240.pdf), so they don't often backtrack. This is a dataset made to teach them backtracking. The data is synthetic, generated randomly in Python.
90% of the examples contain backtracking.
License:
```
Zero-Clause BSD
=============
Permission to use, copy, modify, and/or distribute this software for
any purpose with or without fee is hereby granted.
THE SOFTWARE IS PROVIDED “AS IS” AND THE AUTHOR DISCLAIMS ALL
WARRANTIES WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES
OF MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE
FOR ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY
DAMAGES WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN
AN ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT
OF OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
``` | 1,309 | [
[
-0.004344940185546875,
-0.068115234375,
0.049957275390625,
0.01788330078125,
-0.00894927978515625,
-0.0235443115234375,
-0.0157623291015625,
-0.0221405029296875,
0.01541900634765625,
0.059906005859375,
-0.04888916015625,
-0.05718994140625,
-0.044036865234375,
... |
fmagot01/test_videos | 2023-10-22T22:25:57.000Z | [
"region:us"
] | fmagot01 | null | null | 0 | 13 | 2023-10-22T22:25:54 | ---
configs:
- config_name: default
data_files:
- split: test_videos
path: data/test_videos-*
dataset_info:
features:
- name: videos
struct:
- name: duration_seconds
dtype: float64
- name: video_data
dtype: binary
- name: video_path
dtype: string
splits:
- name: test_videos
num_bytes: 4184462
num_examples: 4
download_size: 4184930
dataset_size: 4184462
---
# Dataset Card for "test_videos"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 586 | [
[
-0.048065185546875,
-0.0311279296875,
0.0030002593994140625,
0.0108184814453125,
-0.01409149169921875,
-0.00257110595703125,
0.012359619140625,
0.021728515625,
0.0455322265625,
0.024200439453125,
-0.06182861328125,
-0.042999267578125,
-0.045196533203125,
-0.... |
james-burton/vet_month_1d_all_text | 2023-10-23T14:42:11.000Z | [
"region:us"
] | james-burton | null | null | 0 | 13 | 2023-10-23T14:42:07 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
dataset_info:
features:
- name: age_at_consult
dtype: string
- name: Ear_or_Mastoid
dtype: string
- name: Mental_Behavioral_or_Neuro
dtype: string
- name: Blood_or_Blood-forming
dtype: string
- name: Circulatory
dtype: string
- name: Dental
dtype: string
- name: Developmental
dtype: string
- name: Digestive
dtype: string
- name: Endocrine_Nutritional_or_Metabolic
dtype: string
- name: Immune
dtype: string
- name: Infectious_or_Parasitic
dtype: string
- name: Skin
dtype: string
- name: Musculoskeletal_or_Connective_Tissue
dtype: string
- name: Neoplasms
dtype: string
- name: Nervous
dtype: string
- name: Visual
dtype: string
- name: Perinatal
dtype: string
- name: Pregnancy_Childbirth_or_Puerperium
dtype: string
- name: Respiratory
dtype: string
- name: Injury_Poisoning_or_External_Causes
dtype: string
- name: Genitourinary
dtype: string
- name: gender
dtype: string
- name: neutered
dtype: string
- name: species
dtype: string
- name: insured
dtype: string
- name: practice_id
dtype: string
- name: premise_id
dtype: string
- name: breed
dtype: string
- name: region
dtype: string
- name: record
dtype: string
- name: labels
dtype: int64
splits:
- name: train
num_bytes: 5353930
num_examples: 8552
- name: validation
num_bytes: 946736
num_examples: 1510
- name: test
num_bytes: 1635039
num_examples: 2606
download_size: 4002909
dataset_size: 7935705
---
# Dataset Card for "vet_month_1d_all_text"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 1,943 | [
[
-0.020660400390625,
-0.0188140869140625,
0.0271148681640625,
0.0287322998046875,
-0.0380859375,
-0.0226593017578125,
0.036712646484375,
0.006580352783203125,
0.06854248046875,
0.052581787109375,
-0.061492919921875,
-0.07489013671875,
-0.03363037109375,
-0.00... |
sunhaozhepy/ag_news_keywords | 2023-11-02T09:01:07.000Z | [
"region:us"
] | sunhaozhepy | null | null | 0 | 13 | 2023-10-23T16:33:05 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': World
'1': Sports
'2': Business
'3': Sci/Tech
- name: keywords
dtype: string
splits:
- name: train
num_bytes: 35165730
num_examples: 120000
- name: test
num_bytes: 2218894
num_examples: 7600
download_size: 22071064
dataset_size: 37384624
---
# Dataset Card for "ag_news_keywords"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 737 | [
[
-0.0305938720703125,
-0.0193939208984375,
0.0212860107421875,
-0.0024662017822265625,
-0.034210205078125,
0.0098876953125,
0.006107330322265625,
0.002986907958984375,
0.057281494140625,
0.0306549072265625,
-0.04254150390625,
-0.0772705078125,
-0.056396484375,
... |
DR-DRR/medibot_dataset_A | 2023-10-23T17:53:00.000Z | [
"region:us"
] | DR-DRR | null | null | 0 | 13 | 2023-10-23T17:52:33 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
sminchoi/guanaco-llama2-sample | 2023-10-24T00:13:10.000Z | [
"region:us"
] | sminchoi | null | null | 0 | 13 | 2023-10-24T00:10:11 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
KennethEnevoldsen/spontanous-speech-qa | 2023-10-24T11:23:18.000Z | [
"task_categories:question-answering",
"annotations_creators:found",
"size_categories:n<1K",
"source_datasets:DDSC/partial-danish-gigaword-no-twitter",
"language:da",
"conversational",
"region:us"
] | KennethEnevoldsen | null | null | 0 | 13 | 2023-10-24T11:17:48 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
annotations_creators:
- found
source_datasets:
- DDSC/partial-danish-gigaword-no-twitter
dataset_info:
features:
- name: question
dtype: string
- name: answer
dtype: string
splits:
- name: train
num_bytes: 44345.110764430574
num_examples: 512
- name: test
num_bytes: 11172.889235569422
num_examples: 129
download_size: 37996
dataset_size: 55518
task_categories:
- question-answering
language:
- da
tags:
- conversational
pretty_name: Spontanous speech QA
size_categories:
- n<1K
---
# Spontanous speech QA
This dataset contains QA pairs from the spontaneous speech subsection of the Danish Gigaword.
The dataset was created from the [DDSC dataset](DDSC/partial-danish-gigaword-no-twitter) and
filtered to include only QA pairs where the question is shorter than 20 tokens and the answer is
at least 4 tokens long.
To find out more about how the dataset was created, see the accompanying script.
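The accompanying script is not reproduced here; as a rough sketch of the stated filter, assuming simple whitespace tokenization (the actual tokenizer may differ):
```python
def keep_pair(question: str, answer: str) -> bool:
    # Stated criteria: question shorter than 20 tokens, answer at least 4 tokens long.
    # Whitespace tokenization is an assumption; the accompanying script may tokenize differently.
    return len(question.split()) < 20 and len(answer.split()) >= 4

pairs = [
    ("Hvordan kom du herhen i dag?", "Jeg tog toget fra Aarhus her til morgen."),
    ("Hvorfor?", "Ja."),
]
kept = [(q, a) for q, a in pairs if keep_pair(q, a)]
print(kept)  # only the first pair passes the filter
```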
[
-0.038482666015625,
-0.0699462890625,
0.042694091796875,
0.0271453857421875,
-0.032012939453125,
0.019012451171875,
-0.01284027099609375,
-0.0200042724609375,
0.0677490234375,
0.03759765625,
-0.072998046875,
-0.029937744140625,
-0.01690673828125,
0.008842468... |
quyanh/helm-samsum-dolly-lima | 2023-10-25T09:27:23.000Z | [
"region:us"
] | quyanh | null | null | 0 | 13 | 2023-10-24T16:03:51 | ---
dataset_info:
features:
- name: prompt
dtype: string
splits:
- name: train
num_bytes: 22189219.45245881
num_examples: 21963
download_size: 17712616
dataset_size: 22189219.45245881
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "helm-samsum-dolly-lima"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 474 | [
[
-0.028656005859375,
-0.01393890380859375,
0.00933074951171875,
0.032379150390625,
-0.03826904296875,
-0.01026153564453125,
0.0251312255859375,
-0.020965576171875,
0.06939697265625,
0.042510986328125,
-0.05865478515625,
-0.0723876953125,
-0.0682373046875,
-0.... |
multimodalart/matryoshka-diffusion-models-paper-examples | 2023-10-26T08:48:38.000Z | [
"license:mit",
"arxiv:2310.15111",
"region:us"
] | multimodalart | null | null | 0 | 13 | 2023-10-25T09:10:13 | ---
license: mit
---
# Matryoshka Diffusion Models - paper examples
This dataset contains the 1024x1024 images included in the [Matryoshka Diffusion Models](https://huggingface.co/papers/2310.15111) paper.
Arxiv: https://arxiv.org/abs/2310.15111
| 250 | [
[
-0.0292816162109375,
-0.03887939453125,
0.0577392578125,
0.0148162841796875,
-0.006023406982421875,
-0.028472900390625,
0.0261077880859375,
0.007556915283203125,
0.027587890625,
0.0615234375,
-0.054046630859375,
-0.03216552734375,
-0.0287628173828125,
-0.029... |
anjakuzev/michael_scott | 2023-10-25T12:25:40.000Z | [
"region:us"
] | anjakuzev | null | null | 0 | 13 | 2023-10-25T12:18:40 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
jayashri710/cs_qa_dataset | 2023-10-25T14:47:47.000Z | [
"region:us"
] | jayashri710 | null | null | 0 | 13 | 2023-10-25T14:46:54 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
WestonBond/YelpTokenized | 2023-10-26T02:07:12.000Z | [
"region:us"
] | WestonBond | null | null | 0 | 13 | 2023-10-26T02:06:02 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: label
dtype:
class_label:
names:
'0': 1 star
'1': 2 star
'2': 3 stars
'3': 4 stars
'4': 5 stars
- name: text
dtype: string
- name: input_ids
sequence: int32
- name: token_type_ids
sequence: int8
- name: attention_mask
sequence: int8
splits:
- name: train
num_bytes: 2488411554
num_examples: 650000
- name: test
num_bytes: 191471188
num_examples: 50000
download_size: 565360957
dataset_size: 2679882742
---
# Dataset Card for "YelpTokenized"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 855 | [
[
-0.0280914306640625,
-0.01374053955078125,
0.0141143798828125,
-0.0006318092346191406,
-0.021942138671875,
-0.00931549072265625,
0.02081298828125,
-0.0188140869140625,
0.0640869140625,
0.037261962890625,
-0.0714111328125,
-0.046844482421875,
-0.030120849609375,
... |
vishnusr/code_searchnet_reduced_train | 2023-10-26T17:08:32.000Z | [
"region:us"
] | vishnusr | null | null | 0 | 13 | 2023-10-26T17:08:26 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: 'Unnamed: 0.1'
dtype: int64
- name: 'Unnamed: 0'
dtype: int64
- name: code
dtype: string
- name: docstring
dtype: string
- name: prompt
dtype: string
splits:
- name: train
num_bytes: 5888994
num_examples: 3000
download_size: 2569124
dataset_size: 5888994
---
# Dataset Card for "code_searchnet_reduced_train"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 611 | [
[
-0.050079345703125,
0.007114410400390625,
0.0010919570922851562,
0.00540924072265625,
0.0014791488647460938,
-0.00936126708984375,
0.00018787384033203125,
0.01464080810546875,
0.05999755859375,
0.03131103515625,
-0.06634521484375,
-0.042938232421875,
-0.02464294... |
Kabatubare/medical-guanaco | 2023-10-27T05:44:01.000Z | [
"region:us"
] | Kabatubare | null | null | 0 | 13 | 2023-10-27T05:32:00 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
ayoub999/dataset_2 | 2023-10-27T21:28:24.000Z | [
"region:us"
] | ayoub999 | null | null | 0 | 13 | 2023-10-27T15:32:50 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: id
dtype: string
- name: image
dtype: image
- name: bboxes
sequence:
sequence: int64
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': Ref
'2': NumFa
'3': Fourniss
'4': DateFa
'5': DateLim
'6': TotalHT
'7': TVA
'8': TotalTTc
'9': unitP
'10': Qt
'11': TVAP
'12': descp
- name: tokens
sequence: string
splits:
- name: train
num_bytes: 875976.0
num_examples: 2
- name: test
num_bytes: 1021145.0
num_examples: 1
download_size: 1276358
dataset_size: 1897121.0
---
# Dataset Card for "dataset_2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 999 | [
[
-0.028350830078125,
-0.021026611328125,
0.00955963134765625,
0.0232696533203125,
-0.0217437744140625,
-0.006046295166015625,
0.0302276611328125,
-0.01776123046875,
0.0494384765625,
0.033203125,
-0.054229736328125,
-0.039764404296875,
-0.050384521484375,
-0.0... |
MichaelVeser/finetuningopensecurity | 2023-10-29T19:31:54.000Z | [
"region:us"
] | MichaelVeser | null | null | 0 | 13 | 2023-10-29T19:31:41 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
geraldOslo/prompt_answer_dataset_for_norwegian_radiation_protection_in_dentistry | 2023-10-30T14:07:46.000Z | [
"size_categories:1K<n<10K",
"language:no",
"license:cc-by-2.0",
"dentistry",
"physics",
"radiation protection",
"doi:10.57967/hf/1291",
"region:us"
] | geraldOslo | null | null | 0 | 13 | 2023-10-30T08:56:56 | ---
license: cc-by-2.0
language:
- 'no'
tags:
- dentistry
- physics
- radiation protection
pretty_name: Question/answer connected to radiation protection in dentistry
size_categories:
- 1K<n<10K
configs:
- config_name: tab
data_files: "question_answer_pairs_radiation_protection.csv"
sep: ";"
---
# Contents
This is a dataset with prompt/answer pairs regarding the curriculum for Norwegian dentistry and dental hygiene students in the areas of radiation physics, radiation protection and radiological technology.
The set consists of prompt/answer pairs generated using commercially available LLMs such as OpenAI GPT-4 and Anthropic Claude 2. They are based on documents uploaded to and analyzed by those LLMs.
The documents used to generate the dataset are mainly:
- Lov om strålevern og bruk av stråling (strålevernloven) (2000)
- Forskrift om strålevern og bruk av stråling (strålevernforskriften) (2016)
- DSA: Medisinsk strålebruk nettsider https://dsa.no/medisinsk-stralebruk (2023)
- DSA: Veileder 14
- DSA: StrålevernRapport • 2014:2 Strålebruk i Norge
- DSA: StrålevernRapport 2015:12 Stråledoser til befolkningen
- DSA: Veileder til forskrift om strålevern og bruk av stråling Veileder Nummer 5 Revidert mai 2023
- My own teaching material and notes
# Purpose
The dataset was generated for fine-tuning open-source LLMs.
# Format
The dataset is a UTF-8 encoded, comma-separated CSV file. There are two columns: prompt and prediction.
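A minimal loading sketch is shown below. The file name and `sep: ";"` come from the YAML config above, while the prose here says comma-separated, so adjust the separator to whichever matches the released file:
```python
import pandas as pd

# File name and separator taken from the YAML config above; change `sep` to "," if the file is comma-separated.
df = pd.read_csv(
    "question_answer_pairs_radiation_protection.csv",
    sep=";",
    encoding="utf-8",
)
print(df.columns.tolist())  # expected: ['prompt', 'prediction']
print(df.head())
```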
# Warning
The dataset is provided at an early stage and might contain errors. Use it at your own risk. Please give feedback if you find a serious error.
# Todo
The dataset has not yet been thoroughly proofread. I will also add a category field in the CSV for categories such as: physics, regulations, technology, ...
- proofreading
- proofreading
- proofreading
- add more relevant prompt/response pairs | 1,843 | [
[
-0.00974273681640625,
-0.045867919921875,
0.040374755859375,
-0.0101318359375,
-0.01241302490234375,
-0.0056610107421875,
0.0093841552734375,
-0.0041046142578125,
0.0220947265625,
0.047760009765625,
-0.0543212890625,
-0.05206298828125,
-0.0097198486328125,
0... |
jcho02/golf | 2023-10-30T19:52:21.000Z | [
"region:us"
] | jcho02 | null | null | 0 | 13 | 2023-10-30T18:01:46 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
jay401521/domain_test_balance | 2023-10-31T05:52:07.000Z | [
"region:us"
] | jay401521 | null | null | 0 | 13 | 2023-10-31T05:52:02 | ---
dataset_info:
features:
- name: id
dtype: int64
- name: domain
dtype:
class_label:
names:
'0': AIRL
'1': CAR
'2': COMM
'3': TECH
- name: label
dtype: int64
- name: rank
dtype: string
- name: sentence
dtype: string
splits:
- name: train
num_bytes: 1789645
num_examples: 20648
download_size: 971736
dataset_size: 1789645
---
# Dataset Card for "domain_test_balance"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 602 | [
[
-0.046661376953125,
-0.0287322998046875,
-0.0118560791015625,
0.0167083740234375,
-0.0086212158203125,
0.0025463104248046875,
0.0242156982421875,
-0.006282806396484375,
0.048980712890625,
0.0307769775390625,
-0.062286376953125,
-0.03887939453125,
-0.037567138671... |
SeyedAli/Persian-Image-Captioning | 2023-10-31T07:44:13.000Z | [
"region:us"
] | SeyedAli | null | null | 1 | 13 | 2023-10-31T07:24:07 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: image
dtype: image
- name: caption
dtype: string
splits:
- name: train
num_bytes: 3004589821.346736
num_examples: 25923
- name: test
num_bytes: 760335821.3452641
num_examples: 6481
download_size: 3781052840
dataset_size: 3764925642.692
---
# Dataset Card for "Persian-Image-Captioning"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 618 | [
[
-0.04156494140625,
-0.01236724853515625,
0.0178680419921875,
0.028656005859375,
-0.034759521484375,
0.0194549560546875,
-0.004215240478515625,
-0.0028324127197265625,
0.044891357421875,
0.033905029296875,
-0.05242919921875,
-0.050506591796875,
-0.03875732421875,... |
youyu0105/llm-MIDI4 | 2023-10-31T13:55:47.000Z | [
"region:us"
] | youyu0105 | null | null | 0 | 13 | 2023-10-31T13:55:41 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 570535
num_examples: 335
download_size: 131987
dataset_size: 570535
---
# Dataset Card for "llm-MIDI4"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 433 | [
[
-0.047088623046875,
-0.005214691162109375,
0.03948974609375,
0.0156707763671875,
-0.0166015625,
0.01165008544921875,
0.019775390625,
-0.0083465576171875,
0.05548095703125,
0.0355224609375,
-0.06951904296875,
-0.06494140625,
-0.037750244140625,
-0.01098632812... |
GSalimp/UFOPQANDA | 2023-11-01T01:02:16.000Z | [
"region:us"
] | GSalimp | null | null | 0 | 13 | 2023-10-31T23:31:38 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
longface/pronto-qa-flanT5 | 2023-11-02T04:16:03.000Z | [
"region:us"
] | longface | null | null | 0 | 13 | 2023-11-01T03:42:38 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
re2panda/click_bate_1000 | 2023-11-02T05:09:17.000Z | [
"region:us"
] | re2panda | null | null | 0 | 13 | 2023-11-02T05:08:54 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
mideind/icelandic-error-corpus-IceEC | 2022-10-25T09:51:04.000Z | [
"annotations_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:is",
"license:cc-by-4.0",
"region:us"
] | mideind | The Icelandic Error Corpus (IceEC) is a collection of texts in modern Icelandic annotated for mistakes related to spelling, grammar, and other issues. The texts are organized by genre. The current version includes sentences from student essays, online news texts and Wikipedia articles.
Sentences within texts in the student essays had to be shuffled due to the license which they were originally published under, but neither the online news texts nor the Wikipedia articles needed to be shuffled. | null | 1 | 12 | 2022-03-02T23:29:22 | ---
annotations_creators:
- expert-generated
language:
- is
license:
- cc-by-4.0
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- original
pretty_name: Icelandic Error Corpus
---
# Icelandic Error Corpus
Refer to [https://github.com/antonkarl/iceErrorCorpus](https://github.com/antonkarl/iceErrorCorpus) for a description of the dataset.
Please cite the dataset as follows if you use it.
```
Anton Karl Ingason, Lilja Björk Stefánsdóttir, Þórunn Arnardóttir, and Xindan Xu. 2021. The Icelandic Error Corpus (IceEC). Version 1.1. (https://github.com/antonkarl/iceErrorCorpus)
``` | 614 | [
[
-0.01358795166015625,
-0.02484130859375,
0.007476806640625,
0.0357666015625,
-0.0101318359375,
0.007537841796875,
0.00009202957153320312,
-0.0226287841796875,
0.022003173828125,
0.036590576171875,
-0.040802001953125,
-0.044464111328125,
-0.046783447265625,
0... |
projecte-aina/tecla | 2023-09-13T12:41:32.000Z | [
"task_categories:text-classification",
"task_ids:multi-class-classification",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:unknown",
"language:ca",
"license:cc-by-nc-nd-4.0",
"region:us"
] | projecte-aina | TeCla: Text Classification Catalan dataset
Catalan News corpus for Text classification, crawled from ACN (Catalan News Agency) site: www.acn.cat
Corpus de notícies en català per a classificació textual, extret del web de l'Agència Catalana de Notícies - www.acn.cat | Baucells, Irene, Carrino, Casimiro Pio, Rodriguez-Penagos, Carlos Gerardo, & Armentano-Oller, Carme. (2021).
TeCla: Text Classification Catalan dataset (Version 2.0) [Data set].
Zenodo. http://doi.org/10.5281/zenodo.7334110 | 0 | 12 | 2022-03-02T23:29:22 | ---
YAML tags:
annotations_creators:
- expert-generated
language_creators:
- found
language:
- ca
license:
- cc-by-nc-nd-4.0
multilinguality:
- monolingual
pretty_name: tecla
size_categories:
- unknown
source_datasets: []
task_categories:
- text-classification
task_ids:
- multi-class-classification
---
# Dataset Card for TeCla
## Dataset Description
- **Website:** [Zenodo](https://zenodo.org/record/7334110)
- **Point of Contact:** [Irene Baucells de la Peña](irene.baucells@bsc.es), [Carlos Rodríguez-Penagos](carlos.rodriguez1@bsc.es) and [Carme Armentano-Oller](carme.armentano@bsc.es)
### Dataset Summary
TeCla (Text Classification) is a Catalan News corpus for thematic multi-class Text Classification tasks. The present version (2.0) contains 113,376 articles classified under a hierarchical class structure consisting of a coarse-grained and a fine-grained class. Each of the 4 coarse-grained classes accepts a subset of fine-grained ones, 53 in total.
The previous version (1.0.1) can still be found at https://zenodo.org/record/4761505
This dataset was developed by [BSC TeMU](https://temu.bsc.es/) as part of [Projecte AINA](https://politiquesdigitals.gencat.cat/ca/economia/catalonia-ai/aina/), to enrich the [Catalan Language Understanding Benchmark (CLUB)](https://club.aina.bsc.es/).
### Supported Tasks and Leaderboards
Text classification, Language Model
### Languages
The dataset is in Catalan (`ca-ES`).
## Dataset Structure
### Data Instances
Three json files, one for each split.
### Data Fields
Each example contains the following 3 fields:
* text: the article text (string)
* label1: the coarse-grained class
* label2: the fine-grained class
#### Example:
<pre>
{"version": "2.0",
"data":
[
{
'sentence': "La setena edició del Festival Fantàstik inclourà les cintes 'Matar a dios' i 'Mandy' i un homenatge a 'Mi vecino Totoro'. Es projectaran 22 curtmetratges seleccionats d'entre més de 500 presentats a nivell internacional. El Centre Cultural de Granollers acull del 8 a l'11 de novembre la setena edició del Festival Fantàstik. El certamen, que s'allargarà un dia, arrencarà amb la projecció de la cinta de Caye Casas i Albert Pide 'Matar a Dios'. Els dos directors estaran presents en la inauguració de la cita. A més, els asssitents podran gaudir de 'Mandy', el darrer treball de Nicolas Cage. Altres llargmetratges seleccionats per aquest any són 'Aterrados' (2017), 'Revenge' (2017), 'A Mata Negra' (2018), 'Top Knot Detective' (2018) i 'La Gran Desfeta' (2018). A més, amb motiu del trentè aniversari de la pel·lícula 'El meu veí Totoro' es durà a terme l'exposició dedicada a aquest film '30 anys 30 artistes' comissariada per Jordi Pastor i Reinaldo Pereira. La mostra '30 anys 30 artistes' recull els treballs de trenta artistes d'estils diferents al voltant de la figura de Totoro i el seu director. Es podrà veure durant els dies de festival i es complementarà amb la projecció de la pel·lícula el diumenge 11 de novembre. Al llarg del festival també es projectaran els 22 curtmetratges prèviament seleccionats d'entre més de 500 presentats a nivell internacional. El millor tindrà una dotació de 1000 euros fruit de la unió de forces amb el Mercat Audiovisual de Catalunya.",
'label1': 'Cultura',
'label2': 'Cinema'
},
...
]
}
</pre>
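A minimal sketch for reading one of the split files locally, following the structure shown above (the file name is taken from the split list; the example above uses the key 'sentence' for the article text while the field list calls it 'text', so both are tried):
```python
import json

# Sketch only: parse one split file and inspect the first article.
with open("train.json", encoding="utf-8") as f:
    split = json.load(f)

articles = split["data"]
print("articles:", len(articles))

first = articles[0]
text = first.get("text") or first.get("sentence")  # field name differs between sections of this card
print(first["label1"], "/", first["label2"])
print(text[:100])
```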
#### Labels
* label1: 'Societat', 'Política', 'Economia', 'Cultura'
* label2: 'Llengua', 'Infraestructures', 'Arts', 'Parlament', 'Noves tecnologies', 'Castells', 'Successos', 'Empresa', 'Mobilitat', 'Teatre', 'Treball', 'Logística', 'Urbanisme', 'Govern', 'Entitats', 'Finances', 'Govern espanyol', 'Trànsit', 'Indústria', 'Esports', 'Exteriors', 'Medi ambient', 'Habitatge', 'Salut', 'Equipaments i patrimoni', 'Recerca', 'Cooperació', 'Innovació', 'Agroalimentació', 'Policial', 'Serveis Socials', 'Cinema', 'Memòria històrica', 'Turisme', 'Política municipal', 'Comerç', 'Universitats', 'Hisenda', 'Judicial', 'Partits', 'Música', 'Lletres', 'Religió', 'Festa i cultura popular', 'Unió Europea', 'Moda', 'Moviments socials', 'Comptes públics', 'Immigració', 'Educació', 'Gastronomia', 'Meteorologia', 'Energia'
### Data Splits
Train, development and test splits were created in a stratified fashion, following a 0.8, 0.05 and 0.15 proportion, respectively. The sizes of each split are the following:
* train.json: 90700 examples
* dev.json: 5669 examples
* test.json: 17007 examples
## Dataset Creation
### Curation Rationale
We created this dataset to contribute to the development of language models in Catalan, a low-resource language.
### Source Data
#### Initial Data Collection and Normalization
The source data are crawled articles from the Catalan News Agency ([Agència Catalana de Notícies, ACN](https://www.acn.cat/)) site.
We crawled 219.586 articles from the Catalan News Agency ([Agència Catalana de Notícies; ACN](https://www.acn.cat/)) newswire archive, the latest from October 11, 2020.
From the crawled data, we selected those articles whose 'section' and 'subsection' categories followed the expected codification combinations included in the ACN's style guide and whose 'section' complied with the requirements of containing subsections and being thematically defined (in contrast to geographically defined categories such as 'Món' and 'Unió Europea'). The articles originally belonging to the 'Unió Europea' section, which were related to political bodies of the European Union, were included in the 'Política' coarse-grained category (within a fine-grained category named 'Unió Europea') due to the close proximity between some of the original subsections of 'Política' and those of 'Unió Europea', both defined by the specific political body dealt with in the article.
The text field in each example is a concatenation of the original title, subtitle and body of the article (before the concatenation, a final dot was appended to both title and subtitle whenever they lacked one). The preprocessing of the texts was minimal and consisted of removing the pattern "ACN {location}.-" included before the body in each text, as well as the newlines originally used to divide the text into paragraphs.
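A rough sketch of this preprocessing follows; the exact regular expression and whitespace handling used by the curators are not published in this card, so the details below are assumptions for illustration only:
```python
import re

def build_text(title: str, subtitle: str, body: str) -> str:
    # Append a final dot to the title and subtitle when missing, as described above.
    if not title.endswith("."):
        title += "."
    if not subtitle.endswith("."):
        subtitle += "."
    # Remove the leading "ACN {location}.-" marker; this regex is an assumption.
    body = re.sub(r"^ACN\s+[^.]+\.-\s*", "", body)
    # Replace the newlines that separated paragraphs with single spaces.
    body = re.sub(r"\n+", " ", body).strip()
    return " ".join([title, subtitle, body])

print(build_text("Títol", "Subtítol", "ACN Granollers.-Primer paràgraf.\nSegon paràgraf."))
```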
#### Who are the source language producers?
The Catalan News Agency ([Agència Catalana de Notícies; ACN](https://www.acn.cat/)) is a news agency owned by the Catalan government via the public corporation Intracatalònia, SA. It is one of the first digital news agencies created in Europe and has been operating since 1999 (source: [wikipedia](https://en.wikipedia.org/wiki/Catalan_News_Agency)).
### Annotations
#### Annotation process
The crawled data contained the categories' annotations, which were then used to create this dataset with the mentioned criteria.
#### Who are the annotators?
Editorial staff classified the articles under the different thematic sections and subsections, and we extracted these from metadata.
### Personal and Sensitive Information
No personal or sensitive information included.
## Considerations for Using the Data
### Social Impact of Dataset
We hope this dataset contributes to the development of language models in Catalan, a low-resource language.
### Discussion of Biases
[N/A]
### Other Known Limitations
[N/A]
## Additional Information
### Dataset Curators
Irene Baucells (irene.baucells@bsc.es), Casimiro Pio Carrino (casimiro.carrino@bsc.es), Carlos Rodríguez (carlos.rodriguez1@bsc.es) and Carme Armentano (carme.armentano@bsc.es), from [BSC-CNS](https://www.bsc.es/).
This work was funded by the [Departament de la Vicepresidència i de Polítiques Digitals i Territori de la Generalitat de Catalunya](https://politiquesdigitals.gencat.cat/ca/inici/index.html#googtrans(ca|en) within the framework of [Projecte AINA](https://politiquesdigitals.gencat.cat/ca/economia/catalonia-ai/aina).
### Licensing Information
This work is licensed under a <a rel="license" href="https://creativecommons.org/licenses/by-nc-nd/4.0/">Attribution-NonCommercial-NoDerivatives 4.0 International License</a>.
### Citation Information
[DOI]([https://doi.org/10.5281/zenodo.7334110])
| 8,098 | [
[
-0.043212890625,
-0.0289459228515625,
0.01129150390625,
0.027496337890625,
-0.038360595703125,
0.0254364013671875,
-0.0203857421875,
-0.0189208984375,
0.059478759765625,
0.030609130859375,
-0.03363037109375,
-0.06829833984375,
-0.04193115234375,
0.0232086181... |
projecte-aina/wnli-ca | 2023-09-13T12:42:10.000Z | [
"task_categories:text-classification",
"task_ids:natural-language-inference",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:unknown",
"source_datasets:extended|glue",
"language:ca",
"license:cc-by-4.0",
"region:us"
] | projecte-aina | professional translation into Catalan of Winograd NLI dataset as published in GLUE Benchmark.
The Winograd NLI dataset presents 855 sentence pairs,
in which the first sentence contains an ambiguity and the second one a possible interpretation of it.
The label indicates if the interpretation is correct (1) or not (0). | ADD CITATION | 1 | 12 | 2022-03-02T23:29:22 | ---
YAML tags:
annotations_creators:
- expert-generated
language_creators:
- found
language:
- ca
license:
- cc-by-4.0
multilinguality:
- monolingual
pretty_name: wnli-ca
size_categories:
- unknown
source_datasets:
- extended|glue
task_categories:
- text-classification
task_ids:
- natural-language-inference
---
# WNLI-ca
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Website:** https://cs.nyu.edu/~davise/papers/WinogradSchemas/WS.html
- **Point of Contact:** [Carlos Rodríguez-Penagos](carlos.rodriguez1@bsc.es) and [Carme Armentano-Oller](carme.armentano@bsc.es)
### Dataset Summary
"A Winograd schema is a pair of sentences that differ in only one or two words and that contain an ambiguity that is resolved in opposite ways in the two sentences and requires the use of world knowledge and reasoning for its resolution. The schema takes its name from Terry Winograd." Source: [The Winograd Schema Challenge](https://cs.nyu.edu/~davise/papers/WinogradSchemas/WS.html).
The [Winograd NLI dataset](https://dl.fbaipublicfiles.com/glue/data/WNLI.zip) presents 855 sentence pairs, in which the first sentence contains an ambiguity and the second one a possible interpretation of it. The label indicates if the interpretation is correct (1) or not (0).
This dataset is a professional translation into Catalan of [Winograd NLI dataset](https://dl.fbaipublicfiles.com/glue/data/WNLI.zip) as published in [GLUE Benchmark](https://gluebenchmark.com/tasks).
Both the original dataset and this translation are licenced under a [Creative Commons Attribution 4.0 International License](https://creativecommons.org/licenses/by/4.0/).
### Supported Tasks and Leaderboards
Textual entailment, Text classification, Language Model.
### Languages
The dataset is in Catalan (`ca-ES`)
## Dataset Structure
### Data Instances
Three tsv files.
### Data Fields
- index
- sentence 1: first sentence of the pair
- sentence 2: second sentence of the pair
- label: relation between the two sentences:
* 0: the second sentence does not entail a correct interpretation of the first one (neutral)
* 1: the second sentence entails a correct interpretation of the first one (entailment)
### Example
| index | sentence 1 | sentence 2 | label |
| ------- |----------- | --------- | ----- |
| 0 | Vaig clavar una agulla en una pastanaga. Quan la vaig treure, tenia un forat. | La pastanaga tenia un forat. | 1 |
| 1 | En Joan no podia veure l’escenari amb en Guillem davant seu perquè és molt baix. | En Joan és molt baix. | 1 |
| 2 | Els policies van arrestar tots els membres de la banda. Volien aturar el tràfic de drogues del barri. | Els policies volien aturar el tràfic de drogues del barri. | 1 |
| 3 | L’Esteve segueix els passos d’en Frederic en tot. L’influencia moltíssim. | L’Esteve l’influencia moltíssim. | 0 |
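A minimal sketch of how these pairs could be inspected programmatically is given below; the `load_dataset` call, the `train` split name, and the exact column names (`sentence1`, `sentence2`, `label`) are assumptions based on the Data Fields list above rather than something this card confirms:
```python
from datasets import load_dataset

# Assumed repository id and split name; adjust to match the actual Hub configuration.
wnli_ca = load_dataset("projecte-aina/wnli-ca", split="train")

# Column names are assumed from the Data Fields list above.
for example in wnli_ca.select(range(3)):
    first = example["sentence1"]   # sentence containing the ambiguity
    second = example["sentence2"]  # candidate interpretation of that sentence
    label = example["label"]       # 1 = correct interpretation, 0 = not
    print(f"[{label}] {first} -> {second}")
```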
### Data Splits
- wnli-train-ca.csv: 636
- wnli-dev-ca.csv: 72
- wnli-test-shuffled-ca.csv: 147
## Dataset Creation
### Curation Rationale
We translated this dataset to contribute to the development of language models in Catalan, a low-resource language, and to allow inter-lingual comparisons.
### Source Data
- [GLUE Benchmark site](https://gluebenchmark.com)
#### Initial Data Collection and Normalization
This is a professional translation of [WNLI dataset](https://cs.nyu.edu/~davise/papers/WinogradSchemas/WS.html) into Catalan, commissioned by BSC TeMU within the [Projecte AINA](https://politiquesdigitals.gencat.cat/ca/economia/catalonia-ai/aina/).
For more information on how the Winograd NLI dataset was created, visit the webpage [The Winograd Schema Challenge](https://cs.nyu.edu/~davise/papers/WinogradSchemas/WS.html).
#### Who are the source language producers?
For more information on how the Winograd NLI dataset was created, visit the webpage [The Winograd Schema Challenge](https://cs.nyu.edu/~davise/papers/WinogradSchemas/WS.html).
### Annotations
#### Annotation process
We commissioned a professional translation of the [WNLI dataset](https://cs.nyu.edu/~davise/papers/WinogradSchemas/WS.html) into Catalan.
#### Who are the annotators?
The translation was commissioned from a professional translator.
### Personal and Sensitive Information
No personal or sensitive information included.
## Considerations for Using the Data
### Social Impact of Dataset
This dataset contributes to the development of language models in Catalan, a low-resource language.
### Discussion of Biases
[N/A]
### Other Known Limitations
[N/A]
## Additional Information
### Dataset Curators
Text Mining Unit (TeMU) at the Barcelona Supercomputing Center (bsc-temu@bsc.es).
This work was funded by the [Departament de la Vicepresidència i de Polítiques Digitals i Territori de la Generalitat de Catalunya](https://politiquesdigitals.gencat.cat/ca/inici/index.html#googtrans(ca|en)) within the framework of [Projecte AINA](https://politiquesdigitals.gencat.cat/ca/economia/catalonia-ai/aina).
### Licensing Information
This work is licensed under a <a rel="license" href="https://creativecommons.org/licenses/by/4.0/">CC Attribution 4.0 International License</a>.
### Contributions
[N/A]
| 6,222 | [
[
-0.01068115234375,
-0.0286865234375,
0.01175689697265625,
0.026458740234375,
-0.012420654296875,
0.0026035308837890625,
-0.0271453857421875,
-0.04522705078125,
0.038421630859375,
0.03375244140625,
-0.04180908203125,
-0.0621337890625,
-0.048065185546875,
0.00... |
webis/args_me | 2022-09-21T12:09:09.000Z | [
"task_categories:text-retrieval",
"task_ids:document-retrieval",
"annotations_creators:machine-generated",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:original",
"language:en",
"license:cc-by-4.0",
"region:us"
] | webis | The args.me corpus (version 1.0, cleaned) comprises 382 545 arguments crawled from four debate portals in the middle of 2019. The debate portals are Debatewise, IDebate.org, Debatepedia, and Debate.org. The arguments are extracted using heuristics that are designed for each debate portal. | @dataset{yamen_ajjour_2020_4139439,
author = {Yamen Ajjour and
Henning Wachsmuth and
Johannes Kiesel and
Martin Potthast and
Matthias Hagen and
Benno Stein},
title = {args.me corpus},
month = oct,
year = 2020,
publisher = {Zenodo},
version = {1.0-cleaned},
doi = {10.5281/zenodo.4139439},
url = {https://doi.org/10.5281/zenodo.4139439}
} | 1 | 12 | 2022-03-02T23:29:22 | ---
annotations_creators:
- machine-generated
language_creators:
- crowdsourced
language:
- en
license:
- cc-by-4.0
multilinguality:
- monolingual
pretty_name: Webis args.me argument corpus
size_categories:
- 100K<n<1M
source_datasets:
- original
task_categories:
- text-retrieval
task_ids:
- document-retrieval
---
# Dataset Card for the args.me corpus
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Dataset Usage](#dataset-usage)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://zenodo.org/record/4139439
- **Repository:** https://git.webis.de/code-research/arguana/args/args-framework
- **Paper:** [Building an Argument Search Engine for the Web](https://webis.de/downloads/publications/papers/wachsmuth_2017f.pdf)
- **Leaderboard:** https://touche.webis.de/
- **Point of Contact:** [Webis Group](https://webis.de/people.html)
### Dataset Summary
The args.me corpus (version 1.0, cleaned) comprises 382 545 arguments crawled from four debate portals in the middle of 2019. The debate portals are Debatewise, IDebate.org, Debatepedia, and Debate.org. The arguments are extracted using heuristics that are designed for each debate portal.
### Dataset Usage
```python
import datasets

# Stream the corpus so it does not have to be downloaded in full first.
args = datasets.load_dataset('webis/args_me', 'corpus', streaming=True)

# Streaming returns a dict keyed by split; the 'train' split is assumed here.
args_iterator = iter(args['train'])
for arg in args_iterator:
    print(arg['conclusion'])
    print(arg['id'])
    print(arg['argument'])
    print(arg['stance'])
    break
```
### Supported Tasks and Leaderboards
Document Retrieval, Argument Retrieval for Controversial Questions
### Languages
The args.me corpus is monolingual; it only includes English (mostly en-US) documents.
## Dataset Structure
### Data Instances
#### Corpus
```
{'conclusion': 'Science is the best!',
'id': 'd6517702-2019-04-18T12:36:24Z-00000-000',
'argument': 'Science is aright I guess, but Physical Education (P.E) is better. Think about it, you could sit in a classroom for and hour learning about molecular reconfiguration, or you could play football with your mates. Why would you want to learn about molecular reconfiguration anyway? I think the argument here would be based on, healthy mind or healthy body. With science being the healthy mind and P.E being the healthy body. To work this one out all you got to do is ask Steven Hawkins. Only 500 words',
'stance': 'CON'}
```
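Building on the usage snippet above, the `stance` field can be used to keep only supporting arguments while streaming. In this hedged sketch, the `'train'` split name is an assumption, and only the fields shown in the instance above are relied on:
```python
import itertools
import datasets

# Stream the corpus; the 'train' split name is assumed here.
corpus = datasets.load_dataset("webis/args_me", "corpus", streaming=True)["train"]

# Lazily keep only PRO (supporting) arguments.
pro_arguments = (arg for arg in corpus if arg["stance"] == "PRO")

# Look at a few supporting arguments without materializing the whole corpus.
for arg in itertools.islice(pro_arguments, 3):
    print(arg["conclusion"], "->", arg["argument"][:80])
```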
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[Creative Commons Attribution 4.0 International (CC BY 4.0)](https://creativecommons.org/licenses/by/4.0/)
### Citation Information
```
@dataset{yamen_ajjour_2020_4139439,
author = {Yamen Ajjour and
Henning Wachsmuth and
Johannes Kiesel and
Martin Potthast and
Matthias Hagen and
Benno Stein},
title = {args.me corpus},
month = oct,
year = 2020,
publisher = {Zenodo},
version = {1.0-cleaned},
doi = {10.5281/zenodo.4139439},
url = {https://doi.org/10.5281/zenodo.4139439}
}
```
| 4,842 | [
[
-0.044097900390625,
-0.039764404296875,
0.0238494873046875,
-0.0159912109375,
-0.0260162353515625,
-0.0018339157104492188,
-0.020050048828125,
-0.016143798828125,
0.04388427734375,
0.020263671875,
-0.038543701171875,
-0.05010986328125,
-0.039825439453125,
0.... |
nreimers/trec-covid-generated-queries | 2022-03-23T12:56:58.000Z | [
"region:us"
] | nreimers | null | null | 0 | 12 | 2022-03-23T12:56:31 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
huggan/cats | 2022-03-24T09:48:20.000Z | [
"region:us"
] | huggan | null | null | 2 | 12 | 2022-03-24T09:41:31 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
huggan/metfaces | 2022-03-25T09:13:25.000Z | [
"region:us"
] | huggan | null | null | 0 | 12 | 2022-03-25T09:12:34 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
UrukHan/t5-russian-summarization | 2022-04-02T18:07:55.000Z | [
"region:us"
] | UrukHan | null | null | 2 | 12 | 2022-04-02T18:07:04 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |