id stringlengths 2 115 | lastModified stringlengths 24 24 | tags list | author stringlengths 2 42 ⌀ | description stringlengths 0 68.7k ⌀ | citation stringlengths 0 10.7k ⌀ | cardData null | likes int64 0 3.55k | downloads int64 0 10.1M | card stringlengths 0 1.01M |
|---|---|---|---|---|---|---|---|---|---|
legacy107/qa_wikipedia_retrieved_chunks | 2023-09-28T05:16:17.000Z | [
"region:us"
] | legacy107 | null | null | null | 0 | 5 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
dataset_info:
features:
- name: id
dtype: string
- name: title
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: answer_start
dtype: int64
- name: answer
dtype: string
- name: article
dtype: string
- name: retrieved_context
dtype: string
splits:
- name: train
num_bytes: 6212832895
num_examples: 110970
- name: validation
num_bytes: 732218436
num_examples: 13833
- name: test
num_bytes: 763004753
num_examples: 13873
download_size: 420701697
dataset_size: 7708056084
---
# Dataset Card for "qa_wikipedia_retrieved_chunks"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
thanhduycao/data_synthesis_v1 | 2023-09-22T00:45:24.000Z | [
"region:us"
] | thanhduycao | null | null | null | 0 | 5 | ---
dataset_info:
features:
- name: audio
struct:
- name: array
sequence: float64
- name: path
dtype: 'null'
- name: sampling_rate
dtype: int64
- name: transcription
dtype: string
- name: old_transcription
dtype: string
splits:
- name: train
num_bytes: 10125909
num_examples: 20
download_size: 2434457
dataset_size: 10125909
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "data_synthesis_v1"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
hwattenberger/llama2_helpcenter | 2023-10-01T23:13:15.000Z | [
"region:us"
] | hwattenberger | null | null | null | 0 | 5 | Entry not found |
jscode13/Celestia | 2023-09-22T01:37:32.000Z | [
"region:us"
] | jscode13 | null | null | null | 0 | 5 | Entry not found |
carlicode/violence_context | 2023-09-22T04:47:09.000Z | [
"license:other",
"region:us"
] | carlicode | null | null | null | 0 | 5 | ---
license: other
---
|
Herreera1/Instructions_objects | 2023-09-22T16:35:19.000Z | [
"task_categories:text-classification",
"size_categories:n<1K",
"language:es",
"region:us"
] | Herreera1 | null | null | null | 0 | 5 | ---
task_categories:
- text-classification
language:
- es
pretty_name: Dataset tesis
size_categories:
- n<1K
--- |
mozci/logobookDB | 2023-09-26T02:15:39.000Z | [
"task_categories:text-to-image",
"size_categories:1K<n<10K",
"language:en",
"license:afl-3.0",
"brand",
"logo",
"design",
"graphic design",
"region:us"
] | mozci | null | null | null | 0 | 5 | ---
language:
- en
license: afl-3.0
size_categories:
- 1K<n<10K
task_categories:
- text-to-image
pretty_name: Logobook Archive with Captions
tags:
- brand
- logo
- design
- graphic design
dataset_info:
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 162614866.176
num_examples: 4026
download_size: 139569721
dataset_size: 162614866.176
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card
This dataset contains image-caption pairs for logo designs scraped from logobook.com. It was created for my research project on fine-tuning text-to-image diffusion models with logo designs.
Logobook.com hosts a very nice archive of modernist and minimalist logo designs. Each design is stored along with a set of keywords, which I used to create a caption for each logo design.
See example below:

Caption:
Adams Law, a prominent law firm in Ireland, features a sleek and professional logo design by Jeremy Simmons of Process. The logo showcases a symbolic letter 'A' enclosed within a circular frame, representing unity and integrity. The inclusion of the word 'Ireland' emphasizes the firm's local expertise and dedication to serving the Irish community. A subtle quotation mark adds a touch of elegance and sophistication, reflecting Adams Law's commitment to delivering impactful legal solutions. This timeless logo design, created in 2017, effectively captures the firm's professionalism and legal expertise.
## Copyright disclaimer
Created and used for research purposes. |
Falah/blonde_woman_photography_prompts | 2023-09-23T06:14:01.000Z | [
"region:us"
] | Falah | null | null | null | 0 | 5 | ---
dataset_info:
features:
- name: prompts
dtype: string
splits:
- name: train
num_bytes: 98527
num_examples: 1000
download_size: 1673
dataset_size: 98527
---
# Dataset Card for "blonde_woman_photography_prompts"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
tanvirsrbd1/srbd-test1-1_annotated | 2023-09-23T09:15:33.000Z | [
"region:us"
] | tanvirsrbd1 | null | null | null | 0 | 5 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: id
dtype: string
- name: xml
dtype: string
- name: html
dtype: string
- name: response
dtype: string
- name: annotated
dtype: string
splits:
- name: train
num_bytes: 35197381.665745854
num_examples: 1265
download_size: 3944835
dataset_size: 35197381.665745854
---
# Dataset Card for "srbd-test1-1_annotated"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
tanvirsrbd1/srbd-test1-1_annotated_segmented | 2023-09-24T04:54:50.000Z | [
"region:us"
] | tanvirsrbd1 | null | null | null | 0 | 5 | ---
dataset_info:
features:
- name: html
dtype: string
- name: response
dtype: string
splits:
- name: train
num_bytes: 1837883
num_examples: 2980
download_size: 607662
dataset_size: 1837883
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "srbd-test1-1_annotated_segmented"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Yuhthe/phoner | 2023-09-24T15:28:21.000Z | [
"region:us"
] | Yuhthe | null | null | null | 0 | 5 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: val
path: data/val-*
- split: test
path: data/test-*
dataset_info:
features:
- name: id
dtype: int64
- name: words
sequence: string
- name: tags
sequence:
class_label:
names:
'0': B-AGE
'1': I-AGE
'2': B-DATE
'3': I-JOB
'4': O
'5': B-NAME
'6': I-PATIENT_ID
'7': B-LOCATION
'8': B-TRANSPORTATION
'9': B-GENDER
'10': I-ORGANIZATION
'11': B-SYMPTOM_AND_DISEASE
'12': B-JOB
'13': I-NAME
'14': B-ORGANIZATION
'15': I-TRANSPORTATION
'16': B-PATIENT_ID
'17': I-SYMPTOM_AND_DISEASE
'18': I-LOCATION
'19': I-DATE
splits:
- name: train
num_bytes: 2408512
num_examples: 5027
- name: val
num_bytes: 1020086
num_examples: 2000
- name: test
num_bytes: 1549558
num_examples: 3000
download_size: 0
dataset_size: 4978156
---
# Dataset Card for "phoner"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Yuhthe/phoner_conll | 2023-09-24T15:34:31.000Z | [
"region:us"
] | Yuhthe | null | null | null | 0 | 5 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: val
path: data/val-*
- split: test
path: data/test-*
dataset_info:
features:
- name: id
dtype: int64
- name: words
sequence: string
- name: tags
sequence:
class_label:
names:
'0': B-AGE
'1': I-AGE
'2': B-DATE
'3': I-JOB
'4': O
'5': B-NAME
'6': I-PATIENT_ID
'7': B-LOCATION
'8': B-TRANSPORTATION
'9': B-GENDER
'10': I-ORGANIZATION
'11': B-SYMPTOM_AND_DISEASE
'12': B-JOB
'13': I-NAME
'14': B-ORGANIZATION
'15': I-TRANSPORTATION
'16': B-PATIENT_ID
'17': I-SYMPTOM_AND_DISEASE
'18': I-LOCATION
'19': I-DATE
splits:
- name: train
num_bytes: 2408512
num_examples: 5027
- name: val
num_bytes: 1020086
num_examples: 2000
- name: test
num_bytes: 1549558
num_examples: 3000
download_size: 831184
dataset_size: 4978156
---
# Dataset Card for "phoner_conll"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Twenty1/cwai_data | 2023-09-24T15:43:36.000Z | [
"region:us"
] | Twenty1 | null | null | null | 0 | 5 | Entry not found |
iohadrubin/top_terms | 2023-09-24T15:49:25.000Z | [
"region:us"
] | iohadrubin | null | null | null | 0 | 5 | ---
dataset_info:
features:
- name: idx
dtype: int64
- name: value
dtype: string
splits:
- name: train
num_bytes: 49818
num_examples: 64
download_size: 31740
dataset_size: 49818
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "top_terms"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
dim/habr_10k | 2023-09-24T15:56:17.000Z | [
"region:us"
] | dim | null | null | null | 0 | 5 | ---
dataset_info:
features:
- name: id
dtype: uint32
- name: language
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text_markdown
dtype: string
- name: text_html
dtype: string
- name: author
dtype: string
- name: original_author
dtype: string
- name: original_url
dtype: string
- name: lead_html
dtype: string
- name: lead_markdown
dtype: string
- name: type
dtype: string
- name: time_published
dtype: uint64
- name: statistics
struct:
- name: commentsCount
dtype: uint32
- name: favoritesCount
dtype: uint32
- name: readingCount
dtype: uint32
- name: score
dtype: int32
- name: votesCount
dtype: int32
- name: votesCountPlus
dtype: int32
- name: votesCountMinus
dtype: int32
- name: labels
sequence: string
- name: hubs
sequence: string
- name: flows
sequence: string
- name: tags
sequence: string
- name: reading_time
dtype: uint32
- name: format
dtype: string
- name: complexity
dtype: string
- name: comments
sequence:
- name: id
dtype: uint64
- name: parent_id
dtype: uint64
- name: level
dtype: uint32
- name: time_published
dtype: uint64
- name: score
dtype: int32
- name: votes
dtype: uint32
- name: message_html
dtype: string
- name: message_markdown
dtype: string
- name: author
dtype: string
- name: children
sequence: uint64
- name: readingCount
dtype: int64
splits:
- name: train
num_bytes: 661170132.0315578
num_examples: 10000
download_size: 901387901
dataset_size: 661170132.0315578
---
# Dataset Card for "habr_10k"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Yuhthe/phoner_seq2seq | 2023-09-24T16:54:52.000Z | [
"region:us"
] | Yuhthe | null | null | null | 0 | 5 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: val
path: data/val-*
- split: test
path: data/test-*
dataset_info:
features:
- name: id
dtype: int64
- name: words
dtype: string
- name: tags
dtype: string
splits:
- name: train
num_bytes: 2534372
num_examples: 5027
- name: val
num_bytes: 1140004
num_examples: 2000
- name: test
num_bytes: 1742126
num_examples: 3000
download_size: 2188554
dataset_size: 5416502
---
# Dataset Card for "phoner_seq2seq"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
drumwell/llm-kuobot | 2023-09-24T17:11:11.000Z | [
"region:us"
] | drumwell | null | null | null | 0 | 5 | ---
dataset_info:
features:
- name: input_ids
sequence: int32
splits:
- name: train
num_bytes: 1631004.0
num_examples: 199
- name: test
num_bytes: 188508.0
num_examples: 23
download_size: 942321
dataset_size: 1819512.0
---
# Dataset Card for "llm-kuobot"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
nannullna/laion-subset | 2023-09-25T03:38:21.000Z | [
"task_categories:text-to-image",
"size_categories:10K<n<100K",
"language:en",
"license:mit",
"region:us"
] | nannullna | null | null | null | 0 | 5 | ---
license: mit
task_categories:
- text-to-image
language:
- en
size_categories:
- 10K<n<100K
--- |
mickwokdotai/vist_sis | 2023-09-25T08:03:54.000Z | [
"region:us"
] | mickwokdotai | null | null | null | 0 | 5 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
dataset_info:
features:
- name: images
list:
- name: datetaken
dtype: timestamp[s]
- name: license
dtype: string
- name: title
dtype: string
- name: text
dtype: string
- name: album_id
dtype: string
- name: longitude
dtype: string
- name: url_o
dtype: string
- name: secret
dtype: string
- name: media
dtype: string
- name: latitude
dtype: string
- name: id
dtype: string
- name: tags
dtype: string
- name: farm
dtype: string
- name: server
dtype: string
- name: url_m
dtype: string
- name: info
dtype: string
- name: albums
list:
- name: description
dtype: string
- name: title
dtype: string
- name: farm
dtype: string
- name: date_update
dtype: string
- name: primary
dtype: string
- name: server
dtype: string
- name: date_create
dtype: string
- name: photos
dtype: string
- name: secret
dtype: string
- name: owner
dtype: string
- name: vist_label
dtype: string
- name: id
dtype: string
- name: type
dtype: string
- name: annotations
list:
list:
- name: original_text
dtype: string
- name: album_id
dtype: string
- name: photo_flickr_id
dtype: string
- name: setting
dtype: string
- name: worker_id
dtype: string
- name: story_id
dtype: string
- name: tier
dtype: string
- name: worker_arranged_photo_order
dtype: int64
- name: text
dtype: string
- name: storylet_id
dtype: string
splits:
- name: train
num_bytes: 93223761
num_examples: 1
- name: validation
num_bytes: 11435294
num_examples: 1
- name: test
num_bytes: 11936227
num_examples: 1
download_size: 37413238
dataset_size: 116595282
---
# Dataset Card for "vist_sis"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
atmallen/amazon_polarity_embeddings_random0 | 2023-09-26T01:31:37.000Z | [
"region:us"
] | atmallen | null | null | null | 0 | 5 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: content
dtype: string
- name: label
dtype:
class_label:
names:
'0': neg
'1': pos
- name: embedding
sequence: float32
- name: title
dtype: string
splits:
- name: train
num_bytes: 7148364432
num_examples: 3600000
- name: test
num_bytes: 19940712
num_examples: 10000
download_size: 3903677724
dataset_size: 7168305144
---
# Dataset Card for "amazon_polarity_embeddings_random0"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
acrastt/EverythingLM-V3-ShareGPT | 2023-09-25T23:55:45.000Z | [
"license:mit",
"region:us"
] | acrastt | null | null | null | 0 | 5 | ---
license: mit
---
<a href="https://www.buymeacoffee.com/acrastt" target="_blank"><img src="https://cdn.buymeacoffee.com/buttons/v2/default-yellow.png" alt="Buy Me A Coffee" style="height: 60px !important;width: 217px !important;" ></a>
[EverythingLM V3 Data](https://huggingface.co/datasets/totally-not-an-llm/EverythingLM-data-V3) converted to ShareGPT format. |
Joo99/counsel_data | 2023-09-26T04:22:09.000Z | [
"region:us"
] | Joo99 | null | null | null | 0 | 5 | Entry not found |
lowem1/cc_news_ocr | 2023-09-26T07:07:03.000Z | [
"region:us"
] | lowem1 | null | null | null | 0 | 5 | ---
dataset_info:
features:
- name: tag
dtype: string
- name: ocr_data
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 19826223
num_examples: 2000
download_size: 7547846
dataset_size: 19826223
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "cc_news_ocr"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
ddonuts/recurrent-events | 2023-09-26T09:50:00.000Z | [
"license:other",
"region:us"
] | ddonuts | null | null | null | 0 | 5 | ---
license: other
---
|
atsushi3110/chosen-rejected-pairs | 2023-09-26T13:24:47.000Z | [
"license:creativeml-openrail-m",
"region:us"
] | atsushi3110 | null | null | null | 0 | 5 | ---
license: creativeml-openrail-m
---
|
hakkam10/screenplay_emotions | 2023-09-26T16:00:45.000Z | [
"region:us"
] | hakkam10 | null | null | null | 0 | 5 | ---
# For reference on model card metadata, see the spec: https://github.com/huggingface/hub-docs/blob/main/datasetcard.md?plain=1
# Doc / guide: https://huggingface.co/docs/hub/datasets-cards
{}
---
# Dataset Card for Dataset Name
## Dataset Description
### Dataset Summary
This dataset was created by scraping screenplays from the IMSDb website and splitting each one into 100 segments.
Each segment was fed into an emotion classification model and labeled with the emotion it evokes, represented as a number from 1 to 6.
Each number represents one of six emotions:
1 - joy
2 - love
3 - surprise
4 - sadness
5 - anger
6 - fear
These numbers are stored as a one-dimensional vector of length 100 in the emotions column.
Columns:
href - relative link to the page the script was taken from; prefix it with "https://imsdb.com/" to obtain the full URL.
title - title of the film
script - the whole screenplay
scenes - a list of length 100; the script is segmented into 100 segments and stored as a list
emotions - a list of emotion codes, where each element corresponds to the segment of the screenplay at the same position.
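The encoding described above can be decoded with a small helper. A minimal sketch (the code-to-emotion mapping follows the list in this card; the toy vector stands in for a real length-100 entry):

```python
from collections import Counter

# Emotion codes as documented in this card.
EMOTIONS = {1: "joy", 2: "love", 3: "surprise", 4: "sadness", 5: "anger", 6: "fear"}

def decode(codes):
    """Map a vector of emotion codes to emotion names."""
    return [EMOTIONS[c] for c in codes]

def dominant_emotion(codes):
    """Return the most frequent emotion in a segment vector."""
    code, _ = Counter(codes).most_common(1)[0]
    return EMOTIONS[code]

vec = [1, 1, 6, 4, 1, 5]  # toy stand-in for a length-100 vector
print(decode(vec[:3]))        # ['joy', 'joy', 'fear']
print(dominant_emotion(vec))  # joy
```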
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] |
Aditya000001/llamadataset | 2023-09-26T18:19:07.000Z | [
"license:wtfpl",
"region:us"
] | Aditya000001 | null | null | null | 0 | 5 | ---
license: wtfpl
---
|
erhwenkuo/openorca-chinese-zhtw | 2023-09-26T22:30:01.000Z | [
"task_categories:conversational",
"task_categories:text-classification",
"task_categories:token-classification",
"task_categories:table-question-answering",
"task_categories:question-answering",
"task_categories:zero-shot-classification",
"task_categories:summarization",
"task_categories:feature-extra... | erhwenkuo | null | null | null | 0 | 5 | ---
dataset_info:
features:
- name: id
dtype: string
- name: system_prompt
dtype: string
- name: question
dtype: string
- name: response
dtype: string
splits:
- name: train
num_bytes: 6491661288
num_examples: 4233915
download_size: 4106469779
dataset_size: 6491661288
language:
- zh
license: mit
task_categories:
- conversational
- text-classification
- token-classification
- table-question-answering
- question-answering
- zero-shot-classification
- summarization
- feature-extraction
- text-generation
- text2text-generation
pretty_name: ' openorca-chinese-zhtw'
size_categories:
- 10M<n<100M
---
## Table of Contents
- [Dataset Summary](#dataset-summary)
- [Dataset Attribution](#dataset-attribution)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Dataset Use](#dataset-use)
- [Use Cases](#use-cases)
- [Usage Caveats](#usage-caveats)
- [Getting Started](#getting-started)
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "openorca-chinese-zhtw"
<a name="dataset-summary"></a>
# Dataset Summary
The OpenOrca dataset is a collection of augmented [FLAN Collection data](https://arxiv.org/abs/2301.13688).
It currently contains ~1M GPT-4 completions and ~3.2M GPT-3.5 completions.
It is tabularized in alignment with the distributions presented in the ORCA paper and currently represents a partial completion of the full intended dataset, with ongoing generation to expand its scope.
The data is primarily used for training and evaluation in the field of natural language processing.
<a name="supported-tasks-and-leaderboards"></a>
# Supported Tasks and Leaderboards
This dataset supports a range of tasks including language modeling, text generation, and text augmentation.
It has been instrumental in the generation of multiple high-performing model checkpoints which have exhibited exceptional performance in our unit testing.
Further information on leaderboards will be updated as they become available.
<a name="languages"></a>
# Languages
The original data is primarily in English; this dataset was translated into Traditional Chinese with Google Translate.
<a name="dataset-structure"></a>
# Dataset Structure
<a name="data-instances"></a>
## Data Instances
A data instance in this dataset represents entries from the FLAN collection which have been augmented by submitting the listed question to either GPT-4 or GPT-3.5.
The response is then entered into the response field.
<a name="data-fields"></a>
## Data Fields
The fields are:
1) 'id', a unique numbered identifier which includes one of 'niv', 't0', 'cot', or 'flan' to represent which source FLAN Collection submix the 'question' is sourced from.
2) 'system_prompt', representing the System Prompt presented to the GPT-3.5 or GPT-4 API for the datapoint
3) 'question', representing a question entry as provided by the FLAN Collection
4) 'response', a response to that question received from a query to either GPT-3.5 or GPT-4.
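As a sketch, the source submix of a datapoint can be recovered from its 'id'. The card only states that the id *includes* one of the four source markers, so the prefix-based parsing below is an assumption for illustration:

```python
def submix_of(example_id: str) -> str:
    """Guess the FLAN submix from an 'id' such as 'flan.1234567'.

    Assumption: the source marker appears as a prefix of the id; this is
    not guaranteed by the card, which only says the id includes it.
    """
    for prefix in ("niv", "t0", "cot", "flan"):
        if example_id.startswith(prefix):
            return prefix
    return "unknown"

print(submix_of("flan.1234567"))  # flan
print(submix_of("cot.42"))        # cot
```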
<a name="data-splits"></a>
## Data Splits
The data is unsplit.
<a name="dataset-creation"></a>
# Dataset Creation
<a name="curation-rationale"></a>
## Curation Rationale
The dataset was created to provide a source of augmented text data for researchers and developers.
The datapoints are intended primarily to provide an enhancement of the core FLAN Collection data which relies upon the detailed step-by-step reasoning capabilities of GPT-3.5 and GPT-4.
This "reasoning trace" augmentation has demonstrated exceptional results, allowing a LLaMA-13B model trained with this data to rival or beat GPT-3.5 on broad sets of hard reasoning tasks which all models below 100B parameters had previously performed dramatically worse on.
<a name="source-data"></a>
## Source Data
The data is generated using techniques in alignment with the distributions outlined in the Orca paper, except as noted below:
1) There is not enough CoT data in the FLAN Collection to generate 150K zero-shot entries, as the paper purports to use.
We suspect this portion was either undocumented or misrepresented. We have used the ~75K points available.
2) We used the pre-generated FLAN Collection datasets hosted on HuggingFace under conceptofmind, e.g. [conceptofmind/flan2021](https://huggingface.co/datasets/conceptofmind/flan2021_submix_original).
These are referenced by the [official FLAN Collection repo](https://github.com/google-research/FLAN/tree/main/flan/v2) as the preferred data source.
However, these are a subset of the full FLAN Collection data, and have fewer than the required entries for the flan2021 and t0 submixes, by ~1.25M and 200k respectively.
Combined, this gave us ~1.5M fewer datapoints than in the original Orca paper. Completing the set is an ongoing work.
<a name="dataset-use"></a>
# Dataset Use
<a name="use-cases"></a>
## Use Cases
The dataset can be used for tasks related to language understanding, natural language processing, machine learning model training, and model performance evaluation.
<a name="usage-caveats"></a>
## Usage Caveats
Given that this is a work-in-progress dataset, it is recommended to regularly check for updates and improvements.
Further, the data should be used in accordance with the guidelines and recommendations outlined in the Orca paper.
<a name="getting-started"></a>
## Getting Started
This dataset is organized such that it can be naively loaded via Hugging Face datasets library.
We recommend using streaming due to the large size of the files.
Regular updates and data generation progress can be monitored through the OpenOrca repository on Hugging Face.
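The streaming pattern recommended above can be sketched as follows. The real `load_dataset` call is shown commented out, since it assumes the `datasets` library is installed and the Hub is reachable; a streamed split behaves like any iterator of dict rows, which the stand-in generator mimics:

```python
from itertools import islice

# Real-world call (assumes the `datasets` library and Hub access):
# from datasets import load_dataset
# stream = load_dataset("erhwenkuo/openorca-chinese-zhtw",
#                       split="train", streaming=True)

def preview(stream, n=3):
    """Pull the first n examples without materializing the full split."""
    return list(islice(stream, n))

# Stand-in for the streamed split: lazy generator of dict rows.
stream = ({"id": f"flan.{i}", "question": "q", "response": "r"}
          for i in range(4_233_915))
rows = preview(stream, 3)
print([r["id"] for r in rows])  # ['flan.0', 'flan.1', 'flan.2']
```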
# Citation
```bibtex
@misc{OpenOrca,
title = {OpenOrca: An Open Dataset of GPT Augmented FLAN Reasoning Traces},
author = {Wing Lian and Bleys Goodson and Eugene Pentland and Austin Cook and Chanvichet Vong and "Teknium"},
year = {2023},
publisher = {HuggingFace},
journal = {HuggingFace repository},
  howpublished = {\url{https://huggingface.co/Open-Orca/OpenOrca}},
}
```
```bibtex
@misc{mukherjee2023orca,
title={Orca: Progressive Learning from Complex Explanation Traces of GPT-4},
author={Subhabrata Mukherjee and Arindam Mitra and Ganesh Jawahar and Sahaj Agarwal and Hamid Palangi and Ahmed Awadallah},
year={2023},
eprint={2306.02707},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
```bibtex
@misc{longpre2023flan,
title={The Flan Collection: Designing Data and Methods for Effective Instruction Tuning},
author={Shayne Longpre and Le Hou and Tu Vu and Albert Webson and Hyung Won Chung and Yi Tay and Denny Zhou and Quoc V. Le and Barret Zoph and Jason Wei and Adam Roberts},
year={2023},
eprint={2301.13688},
archivePrefix={arXiv},
primaryClass={cs.AI}
}
```
```bibtex
@misc{touvron2023llama,
title={Llama 2: Open Foundation and Fine-Tuned Chat Models},
author={Hugo Touvron and Louis Martin and Kevin Stone and Peter Albert and Amjad Almahairi and Yasmine Babaei and Nikolay Bashlykov and Soumya Batra and Prajjwal Bhargava and Shruti Bhosale and Dan Bikel and Lukas Blecher and Cristian Canton Ferrer and Moya Chen and Guillem Cucurull and David Esiobu and Jude Fernandes and Jeremy Fu and Wenyin Fu and Brian Fuller and Cynthia Gao and Vedanuj Goswami and Naman Goyal and Anthony Hartshorn and Saghar Hosseini and Rui Hou and Hakan Inan and Marcin Kardas and Viktor Kerkez and Madian Khabsa and Isabel Kloumann and Artem Korenev and Punit Singh Koura and Marie-Anne Lachaux and Thibaut Lavril and Jenya Lee and Diana Liskovich and Yinghai Lu and Yuning Mao and Xavier Martinet and Todor Mihaylov and Pushkar Mishra and Igor Molybog and Yixin Nie and Andrew Poulton and Jeremy Reizenstein and Rashi Rungta and Kalyan Saladi and Alan Schelten and Ruan Silva and Eric Michael Smith and Ranjan Subramanian and Xiaoqing Ellen Tan and Binh Tang and Ross Taylor and Adina Williams and Jian Xiang Kuan and Puxin Xu and Zheng Yan and Iliyan Zarov and Yuchen Zhang and Angela Fan and Melanie Kambadur and Sharan Narang and Aurelien Rodriguez and Robert Stojnic and Sergey Edunov and Thomas Scialom},
year={2023},
      eprint={2307.09288},
      archivePrefix={arXiv}
}
@software{touvron2023llama,
title={LLaMA: Open and Efficient Foundation Language Models},
author={Touvron, Hugo and Lavril, Thibaut and Izacard, Gautier and Martinet, Xavier and Lachaux, Marie-Anne and Lacroix, Timoth{\'e}e and Rozi{\`e}re, Baptiste and Goyal, Naman and Hambro, Eric and Azhar, Faisal and Rodriguez, Aurelien and Joulin, Armand and Grave, Edouard and Lample, Guillaume},
journal={arXiv preprint arXiv:2302.13971},
year={2023}
}
``` |
huangyt/FINETUNE4_compare8k | 2023-09-26T17:11:48.000Z | [
"region:us"
] | huangyt | null | null | null | 0 | 5 | Entry not found |
mindchain/wikitext2 | 2023-09-26T19:13:55.000Z | [
"task_categories:text-generation",
"task_categories:fill-mask",
"task_ids:language-modeling",
"task_ids:masked-language-modeling",
"annotations_creators:no-annotation",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:1M<n<10M",
"source_datasets:original",
"languag... | mindchain | null | null | null | 0 | 5 | ---
annotations_creators:
- no-annotation
language_creators:
- crowdsourced
language:
- en
license:
- cc-by-sa-3.0
- gfdl
multilinguality:
- monolingual
paperswithcode_id: wikitext-2
pretty_name: WikiText
size_categories:
- 1M<n<10M
source_datasets:
- original
task_categories:
- text-generation
- fill-mask
task_ids:
- language-modeling
- masked-language-modeling
dataset_info:
- config_name: wikitext-103-v1
features:
- name: text
dtype: string
splits:
- name: test
num_bytes: 1295579
num_examples: 4358
- name: train
num_bytes: 545142639
num_examples: 1801350
- name: validation
num_bytes: 1154755
num_examples: 3760
download_size: 190229076
dataset_size: 547592973
- config_name: wikitext-2-v1
features:
- name: text
dtype: string
splits:
- name: test
num_bytes: 1270951
num_examples: 4358
- name: train
num_bytes: 10918134
num_examples: 36718
- name: validation
num_bytes: 1134127
num_examples: 3760
download_size: 4475746
dataset_size: 13323212
- config_name: wikitext-103-raw-v1
features:
- name: text
dtype: string
splits:
- name: test
num_bytes: 1305092
num_examples: 4358
- name: train
num_bytes: 546501673
num_examples: 1801350
- name: validation
num_bytes: 1159292
num_examples: 3760
download_size: 191984949
dataset_size: 548966057
- config_name: wikitext-2-raw-v1
features:
- name: text
dtype: string
splits:
- name: test
num_bytes: 1305092
num_examples: 4358
- name: train
num_bytes: 11061733
num_examples: 36718
- name: validation
num_bytes: 1159292
num_examples: 3760
download_size: 4721645
dataset_size: 13526117
---
# Dataset Card for "wikitext"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://blog.einstein.ai/the-wikitext-long-term-dependency-language-modeling-dataset/](https://blog.einstein.ai/the-wikitext-long-term-dependency-language-modeling-dataset/)
- **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Paper:** [Pointer Sentinel Mixture Models](https://arxiv.org/abs/1609.07843)
- **Point of Contact:** [Stephen Merity](mailto:smerity@salesforce.com)
- **Size of downloaded dataset files:** 391.41 MB
- **Size of the generated dataset:** 1.12 GB
- **Total amount of disk used:** 1.52 GB
### Dataset Summary
The WikiText language modeling dataset is a collection of over 100 million tokens extracted from the set of verified
Good and Featured articles on Wikipedia. The dataset is available under the Creative Commons Attribution-ShareAlike License.
Compared to the preprocessed version of Penn Treebank (PTB), WikiText-2 is over 2 times larger and WikiText-103 is over
110 times larger. The WikiText dataset also features a far larger vocabulary and retains the original case, punctuation
and numbers - all of which are removed in PTB. As it is composed of full articles, the dataset is well suited for models
that can take advantage of long term dependencies.
Each subset comes in two different variants:
- Raw (for character-level work) contains the raw tokens, before the addition of the <unk> (unknown) tokens.
- Non-raw (for word-level work) contains only the tokens in its vocabulary (wiki.train.tokens, wiki.valid.tokens, and wiki.test.tokens).
  The out-of-vocabulary tokens have been replaced with the <unk> token.
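The raw vs. non-raw distinction can be sketched in a few lines. This is a toy illustration of the <unk> substitution, not the actual preprocessing used to build the dataset, and the vocabulary here is made up:

```python
# Illustrative sketch (not the official preprocessing script): how the
# word-level (non-raw) variant maps out-of-vocabulary tokens to <unk>.
# `vocab` is a toy vocabulary, not the real WikiText vocabulary.
def replace_oov(tokens, vocab):
    """Replace any token not in `vocab` with the <unk> marker."""
    return [tok if tok in vocab else "<unk>" for tok in tokens]

vocab = {"the", "cat", "sat"}
print(replace_oov(["the", "cat", "sat", "quietly"], vocab))
# ['the', 'cat', 'sat', '<unk>']
```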
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Dataset Structure
### Data Instances
#### wikitext-103-raw-v1
- **Size of downloaded dataset files:** 191.98 MB
- **Size of the generated dataset:** 549.42 MB
- **Total amount of disk used:** 741.41 MB
An example of 'validation' looks as follows.
```
This example was too long and was cropped:
{
"text": "\" The gold dollar or gold one @-@ dollar piece was a coin struck as a regular issue by the United States Bureau of the Mint from..."
}
```
#### wikitext-103-v1
- **Size of downloaded dataset files:** 190.23 MB
- **Size of the generated dataset:** 548.05 MB
- **Total amount of disk used:** 738.27 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"text": "\" Senjō no Valkyria 3 : <unk> Chronicles ( Japanese : 戦場のヴァルキュリア3 , lit . Valkyria of the Battlefield 3 ) , commonly referred to..."
}
```
#### wikitext-2-raw-v1
- **Size of downloaded dataset files:** 4.72 MB
- **Size of the generated dataset:** 13.54 MB
- **Total amount of disk used:** 18.26 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"text": "\" The Sinclair Scientific Programmable was introduced in 1975 , with the same case as the Sinclair Oxford . It was larger than t..."
}
```
#### wikitext-2-v1
- **Size of downloaded dataset files:** 4.48 MB
- **Size of the generated dataset:** 13.34 MB
- **Total amount of disk used:** 17.82 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"text": "\" Senjō no Valkyria 3 : <unk> Chronicles ( Japanese : 戦場のヴァルキュリア3 , lit . Valkyria of the Battlefield 3 ) , commonly referred to..."
}
```
### Data Fields
The data fields are the same among all splits.
#### wikitext-103-raw-v1
- `text`: a `string` feature.
#### wikitext-103-v1
- `text`: a `string` feature.
#### wikitext-2-raw-v1
- `text`: a `string` feature.
#### wikitext-2-v1
- `text`: a `string` feature.
### Data Splits
| name | train |validation|test|
|-------------------|------:|---------:|---:|
|wikitext-103-raw-v1|1801350| 3760|4358|
|wikitext-103-v1 |1801350| 3760|4358|
|wikitext-2-raw-v1 | 36718| 3760|4358|
|wikitext-2-v1 | 36718| 3760|4358|
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
The dataset is available under the [Creative Commons Attribution-ShareAlike License (CC BY-SA 4.0)](https://creativecommons.org/licenses/by-sa/4.0/).
### Citation Information
```
@misc{merity2016pointer,
title={Pointer Sentinel Mixture Models},
author={Stephen Merity and Caiming Xiong and James Bradbury and Richard Socher},
year={2016},
eprint={1609.07843},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
### Contributions
Thanks to [@thomwolf](https://github.com/thomwolf), [@lewtun](https://github.com/lewtun), [@patrickvonplaten](https://github.com/patrickvonplaten), [@mariamabarham](https://github.com/mariamabarham) for adding this dataset. |
pestowithpasta/npc-jokes | 2023-09-26T22:22:55.000Z | [
"region:us"
] | pestowithpasta | null | null | null | 0 | 5 | Entry not found |
charlieoneill/genre_dataset_train | 2023-09-26T23:49:18.000Z | [
"region:us"
] | charlieoneill | null | null | null | 0 | 5 | ---
dataset_info:
features:
- name: text
dtype: string
- name: labels
dtype: int64
splits:
- name: train
num_bytes: 5268557
num_examples: 6743
download_size: 3656417
dataset_size: 5268557
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "genre_dataset_train"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
DavidLanz/fine_tuning_datraset_4_openai | 2023-09-27T03:45:13.000Z | [
"license:cc-by-4.0",
"region:us"
] | DavidLanz | null | null | null | 0 | 5 | ---
license: cc-by-4.0
---
|
kewu93/natural_images_small | 2023-09-27T05:42:22.000Z | [
"region:us"
] | kewu93 | null | null | null | 0 | 5 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: image
dtype: image
- name: caption
dtype: string
splits:
- name: train
num_bytes: 2325900.3395232814
num_examples: 50
download_size: 2333116
dataset_size: 2325900.3395232814
---
# Dataset Card for "natural_images_small"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Vrushali/generated_chat | 2023-09-27T18:36:46.000Z | [
"region:us"
] | Vrushali | null | null | null | 0 | 5 | Entry not found |
classla/ParlaSent | 2023-09-28T13:52:55.000Z | [
"task_categories:text-classification",
"size_categories:10K<n<100K",
"language:sl",
"language:en",
"language:cs",
"language:bs",
"language:hr",
"language:sr",
"language:sk",
"license:cc-by-sa-4.0",
"sentiment",
"classification",
"parliament",
"parlament",
"arxiv:2309.09783",
"region:us... | classla | null | null | null | 1 | 5 | ---
license: cc-by-sa-4.0
language:
- sl
- en
- cs
- bs
- hr
- sr
- sk
tags:
- sentiment
- classification
- parliament
- parlament
pretty_name: ParlaSent
size_categories:
- 10K<n<100K
configs:
- config_name: EN
data_files: ParlaSent_EN.jsonl
- config_name: BCS
data_files: ParlaSent_BCS.jsonl
- config_name: CZ
data_files: ParlaSent_CZ.jsonl
- config_name: SK
data_files: ParlaSent_SK.jsonl
- config_name: SL
data_files: ParlaSent_SL.jsonl
- config_name: EN_additional_test
data_files: ParlaSent_EN_test.jsonl
- config_name: BCS_additional_test
data_files: ParlaSent_BCS_test.jsonl
task_categories:
- text-classification
---
# The multilingual sentiment dataset of parliamentary debates ParlaSent 1.0
## Dataset Description
- **Repository: [Clarin.si repo](http://hdl.handle.net/11356/1868)**
- **Paper: https://arxiv.org/abs/2309.09783**
### Dataset Summary
This dataset was created and used for sentiment analysis experiments.
The dataset consists of five training datasets and two test sets. The test sets have a _test.jsonl suffix and appear in the Dataset Viewer as _additional_test.
Each test set consists of 2,600 sentences, annotated by one highly trained annotator. Training datasets were internally split into "train", "dev" and "test" portions for performing language-specific experiments.
The 6-level annotation schema, used by annotators, is the following:
- Positive for sentences that are entirely or predominantly positive
- Negative for sentences that are entirely or predominantly negative
- M_Positive for sentences that convey an ambiguous sentiment or a mixture of sentiments, but lean more towards the positive sentiment
- M_Negative for sentences that convey an ambiguous sentiment or a mixture of sentiments, but lean more towards the negative sentiment
- P_Neutral for sentences that only contain non-sentiment-related statements, but still lean more towards the positive sentiment
- N_Neutral for sentences that only contain non-sentiment-related statements, but still lean more towards the negative sentiment
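The collapse from this 6-level schema to the 3-level `label` attribute can be sketched as follows. The exact collapse rule is an assumption here (the card only states that `label` is derived from the reconciliation label); the sketch folds mixed labels into their leaning polarity and leaning-neutral labels into "neutral":

```python
# Illustrative sketch: collapsing the 6-level ParlaSent schema into the
# 3-level (positive / negative / neutral) label. The exact mapping used
# by the dataset authors is an assumption here.
SIX_TO_THREE = {
    "Positive": "positive",
    "M_Positive": "positive",
    "Negative": "negative",
    "M_Negative": "negative",
    "P_Neutral": "neutral",
    "N_Neutral": "neutral",
}

def collapse(label: str) -> str:
    """Map a 6-level annotation to its 3-level counterpart."""
    return SIX_TO_THREE[label]

print(collapse("M_Negative"))  # negative
```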
Dataset is described in detail in our [paper](https://arxiv.org/abs/2309.09783).
### Data Attributes
The attributes in training data are the following:
- sentence - the sentence labeled for sentiment
- country - the country of the parliament the sentence comes from
- annotator1 - first annotator's annotation
- annotator2 - second annotator's annotation
- reconciliation - the final label agreed upon after reconciliation
- label - three level (positive, negative, neutral) label based on the reconciliation label
- document_id - internal identifier of the document the sentence comes from
- sentence_id - internal identifier of the sentence inside the document
- term - the term of the parliament the sentence comes from
- date - the date the sentence was uttered as part of a speech in the parliament
- name - name of the MP giving the speech
- party - the party of the MP
- gender - binary gender of the MP
- birth year - year of birth of the MP
- split - whether the sentence is to be used as a training, development or testing instance when evaluation is done on the training portion of the dataset
- ruling - whether the MP was in a coalition or an opposition at the time of giving the speech
The attributes in the test data (_test.jsonl files) are the following:
- sentence - the sentence labeled for sentiment
- country - the country of the parliament the sentence comes from
- annotator1 - first (only) annotator's annotation, used as a final annotation
- label - three level (positive, negative, neutral) label based on the annotator1 label
- document_id - internal identifier of the document the sentence comes from
- sentence_id - internal identifier of the sentence inside the document
- term - the term of the parliament the sentence comes from
- date - the date the sentence was uttered as part of a speech in the parliament
- name - name of the MP giving the speech
- party - the party of the MP
- gender - binary gender of the MP
- birth year - year of birth of the MP
- ruling - whether the MP was in a coalition or an opposition at the time of giving the speech
### Citation information
Please quote the following paper:
```
@article{
Mochtak_Rupnik_Ljubešić_2023,
title={The ParlaSent multilingual training dataset for sentiment identification in parliamentary proceedings},
rights={All rights reserved},
url={http://arxiv.org/abs/2309.09783},
abstractNote={Sentiments inherently drive politics. How we receive and process information plays an essential role in political decision-making, shaping our judgment with strategic consequences both on the level of legislators and the masses. If sentiment plays such an important role in politics, how can we study and measure it systematically? The paper presents a new dataset of sentiment-annotated sentences, which are used in a series of experiments focused on training a robust sentiment classifier for parliamentary proceedings. The paper also introduces the first domain-specific LLM for political science applications additionally pre-trained on 1.72 billion domain-specific words from proceedings of 27 European parliaments. We present experiments demonstrating how the additional pre-training of LLM on parliamentary data can significantly improve the model downstream performance on the domain-specific tasks, in our case, sentiment detection in parliamentary proceedings. We further show that multilingual models perform very well on unseen languages and that additional data from other languages significantly improves the target parliament’s results. The paper makes an important contribution to multiple domains of social sciences and bridges them with computer science and computational linguistics. Lastly, it sets up a more robust approach to sentiment analysis of political texts in general, which allows scholars to study political sentiment from a comparative perspective using standardized tools and techniques.},
note={arXiv:2309.09783 [cs]},
number={arXiv:2309.09783},
publisher={arXiv},
author={Mochtak, Michal and Rupnik, Peter and Ljubešić, Nikola},
year={2023},
month={Sep},
language={en}
}
``` |
pixel-coping/pubmed_derived | 2023-10-06T02:26:15.000Z | [
"language:en",
"region:us"
] | pixel-coping | null | null | null | 0 | 5 | ---
configs:
- config_name: default
data_files:
- split: pubmed
path: data/pubmed-*
- split: nonbiomedical
path: data/nonbiomedical-*
- split: counterfactual
path: data/counterfactual-*
- split: casual
path: data/casual-*
- split: rap
path: data/rap-*
dataset_info:
features:
- name: PubmedData
struct:
- name: ArticleIdList
sequence:
- name: ArticleId
sequence: string
- name: PublicationStatus
dtype: string
- name: History
struct:
- name: PubMedPubDate
sequence:
- name: Year
dtype: int32
- name: Month
dtype: int32
- name: Day
dtype: int32
- name: ReferenceList
sequence:
- name: Citation
dtype: string
- name: CitationId
dtype: int32
- name: text
dtype: string
splits:
- name: pubmed
num_bytes: 1166668
num_examples: 1000
- name: nonbiomedical
num_bytes: 1141909
num_examples: 1000
- name: counterfactual
num_bytes: 1179347
num_examples: 991
- name: casual
num_bytes: 1205949
num_examples: 1000
- name: rap
num_bytes: 1252260
num_examples: 1000
download_size: 3357032
dataset_size: 5946133
language:
- en
---
# A corpus of rewritten pubmed abstracts
This corpus contains a 1k-example subset from the [pubmed](https://huggingface.co/datasets/pubmed) corpus and several rewritten versions. Each rewritten version changes one aspect of the original text and keeps the other aspects unchanged as much as possible.
- **Paper:** [Dissecting learning and forgetting in language model finetuning](link pending)
Another corpus of rewritten general text is provided here: [c4_derived](https://huggingface.co/datasets/pixel-coping/c4_derived)
### Data Splits
- pubmed: a 1k-example subset from the original pubmed corpus
- nonbiomedical: main topic of the text changed to a nonbiomedical topic
- counterfactual: factual knowledge in the text replaced with incorrect facts
- casual: style of the text changed to a casual style
- rap: style of the text changed to a rap style
## Dataset Creation
Text is generated by ChatGPT with corresponding prompts. Refer to the paper for the instructions used to generate the text in each derived subset.
Please check the terms and conditions of pubmed data [here](https://www.nlm.nih.gov/databases/download/terms_and_conditions.html).
### Citation Information
```
pending
``` |
hcho22/code_instructions_120k_alpaca_filtered | 2023-09-28T12:18:58.000Z | [
"license:apache-2.0",
"region:us"
] | hcho22 | null | null | null | 0 | 5 | ---
license: apache-2.0
---
|
Q-bert/test-dataset | 2023-10-03T17:03:36.000Z | [
"license:mit",
"region:us"
] | Q-bert | null | null | null | 0 | 5 | ---
license: mit
---
|
jxm/llama-7b__model__one_million_instructions__emb__sample | 2023-09-28T21:19:05.000Z | [
"region:us"
] | jxm | null | null | null | 0 | 5 | ---
dataset_info:
features:
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
- name: labels
sequence: int64
- name: length
dtype: int64
- name: embedder_input_ids
sequence: int64
- name: embedder_attention_mask
sequence: int64
- name: idx
dtype: int64
- name: frozen_embeddings
sequence: float32
splits:
- name: train
num_bytes: 1325271843
num_examples: 10000
download_size: 870130332
dataset_size: 1325271843
---
# Dataset Card for "llama-7b__model__one_million_instructions__emb__sample"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
decoy4600/sgm-shiro1 | 2023-09-29T16:02:56.000Z | [
"region:us"
] | decoy4600 | null | null | null | 0 | 5 | Entry not found |
byrneml/company_names | 2023-09-30T00:08:25.000Z | [
"region:us"
] | byrneml | null | null | null | 0 | 5 | Entry not found |
jitx/distillation_code_4 | 2023-09-30T00:32:05.000Z | [
"region:us"
] | jitx | null | null | null | 0 | 5 | ---
dataset_info:
features:
- name: santacoder_prompts
dtype: string
- name: fim_inputs
dtype: string
- name: label_middles
dtype: string
- name: santacoder_outputs
dtype: string
- name: openai_rationales
dtype: string
splits:
- name: train
num_bytes: 16254
num_examples: 4
download_size: 32557
dataset_size: 16254
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "distillation_code_4"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
rghosh8/supportGPT-v8 | 2023-09-30T06:00:39.000Z | [
"license:bsd",
"region:us"
] | rghosh8 | null | null | null | 0 | 5 | ---
license: bsd
---
|
fengyang0317/open_images | 2023-10-01T01:22:51.000Z | [
"license:apache-2.0",
"region:us"
] | fengyang0317 | null | null | null | 0 | 5 | ---
license: apache-2.0
configs:
- config_name: default
data_files:
- split: validation
path: data/validation-*
dataset_info:
features:
- name: image
dtype: image
- name: filepath
dtype: string
splits:
- name: validation
num_bytes: 39844107.0
num_examples: 100
download_size: 39441443
dataset_size: 39844107.0
---
|
cfx211/starholder | 2023-09-30T19:09:15.000Z | [
"region:us"
] | cfx211 | null | null | null | 0 | 5 | Entry not found |
YakAnton/LLM_DS | 2023-10-01T11:37:39.000Z | [
"region:us"
] | YakAnton | null | null | null | 0 | 5 | Entry not found |
kolkata97/autotrain-data-test-pellm0 | 2023-10-01T22:29:54.000Z | [
"region:us"
] | kolkata97 | null | null | null | 0 | 5 | |
pin-lpt/little_island_and_coals_drop_yard | 2023-10-02T10:40:38.000Z | [
"region:us"
] | pin-lpt | null | null | null | 0 | 5 | ---
dataset_info:
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 6763728.0
num_examples: 6
download_size: 6760101
dataset_size: 6763728.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "little_island_and_coals_drop_yard"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
ayoubkirouane/arxiv-math | 2023-10-02T18:59:00.000Z | [
"region:us"
] | ayoubkirouane | null | null | null | 0 | 5 | ---
dataset_info:
features:
- name: question
dtype: string
- name: answer
dtype: string
splits:
- name: train
num_bytes: 35436503.0
num_examples: 50488
download_size: 18875033
dataset_size: 35436503.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "arxiv-math"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Dloring1/Mini-50K-Recipes | 2023-10-02T21:03:08.000Z | [
"region:us"
] | Dloring1 | null | null | null | 0 | 5 | ---
dataset_info:
features:
- name: input
dtype: string
splits:
- name: train
num_bytes: 36535401.96567886
num_examples: 50000
download_size: 19480443
dataset_size: 36535401.96567886
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "Mini-50K-Recipes"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Harsha9044/TAM-MSA | 2023-10-03T08:24:38.000Z | [
"license:apache-2.0",
"region:us"
] | Harsha9044 | null | null | null | 0 | 5 | ---
license: apache-2.0
dataset_info:
features:
- name: File name
dtype: string
- name: Transcript
dtype: string
- name: Labels
dtype: string
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 283807
num_examples: 64
download_size: 120689
dataset_size: 283807
---
|
rohanbalkondekar/generate_json_long | 2023-10-05T11:58:38.000Z | [
"region:us"
] | rohanbalkondekar | null | null | null | 0 | 5 | Entry not found |
weaviate/WithRetrieval-Random-Test-80 | 2023-10-03T14:04:00.000Z | [
"license:apache-2.0",
"region:us"
] | weaviate | null | null | null | 0 | 5 | ---
license: apache-2.0
---
|
Malmika/ict_dataset | 2023-10-03T15:01:44.000Z | [
"region:us"
] | Malmika | null | null | null | 1 | 5 | Entry not found |
alvelvis/ccus-embeddings | 2023-10-03T16:39:11.000Z | [
"license:apache-2.0",
"region:us"
] | alvelvis | null | null | null | 0 | 5 | ---
license: apache-2.0
---
|
shossain/qa-no-pad-16384 | 2023-10-04T04:56:40.000Z | [
"region:us"
] | shossain | null | null | null | 0 | 5 | ---
dataset_info:
features:
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
- name: labels
sequence: int64
splits:
- name: train
num_bytes: 9723626
num_examples: 192
download_size: 2505308
dataset_size: 9723626
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "qa-no-pad-16384"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
ASIDS/alpaca-cleaned-ru | 2023-10-04T14:26:17.000Z | [
"task_categories:text-generation",
"language_creators:translated",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:yahma/alpaca-cleaned",
"language:ru",
"license:cc-by-4.0",
"instruction-finetuning",
"region:us"
] | ASIDS | null | null | null | 0 | 5 |
---
dataset_info:
features:
- name: instruction
dtype: string
- name: output
dtype: string
- name: iteration
dtype: uint32
splits:
- name: train
num_bytes: 74829755.0
num_examples: 51760
download_size: 36596664
dataset_size: 74829755.0
license: cc-by-4.0
language:
- ru
multilinguality:
- monolingual
tags:
- instruction-finetuning
pretty_name: alpaca-cleaned-ru
task_categories:
- text-generation
size_categories:
- 10K<n<100K
source_datasets:
- yahma/alpaca-cleaned
language_creators:
- translated
---
# alpaca-cleaned-ru
Converter for AutoTrain from [d0rj/alpaca-cleaned-ru](https://huggingface.co/datasets/d0rj/alpaca-cleaned-ru)
Translated version of [yahma/alpaca-cleaned](https://huggingface.co/datasets/yahma/alpaca-cleaned) into Russian.
## Dataset Description
- **Repository:** https://github.com/gururise/AlpacaDataCleaned
- **Repository:** https://huggingface.co/datasets/d0rj/alpaca-cleaned-ru |
legacy107/qa_wikipedia_sentence_transformer_negative_farming | 2023-10-04T13:45:59.000Z | [
"region:us"
] | legacy107 | null | null | null | 0 | 5 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
- split: validation
path: data/validation-*
dataset_info:
features:
- name: id
dtype: string
- name: title
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: answer
dtype: string
- name: negatives
sequence: string
- name: positive
dtype: string
splits:
- name: train
num_bytes: 147665416
num_examples: 27742
- name: test
num_bytes: 18591659
num_examples: 3468
- name: validation
num_bytes: 18443101
num_examples: 3458
download_size: 37917812
dataset_size: 184700176
---
# Dataset Card for "qa_wikipedia_sentence_transformer_negative_farming"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
ewre324/appy-llama2-1k | 2023-10-06T13:28:59.000Z | [
"region:us"
] | ewre324 | null | null | null | 0 | 5 | ---
dataset_info:
features:
- name: prompt
dtype: large_string
- name: main_topic
dtype: large_string
- name: subtopic
dtype: large_string
- name: adjective
dtype: large_string
- name: action_verb
dtype: large_string
- name: scenario
dtype: large_string
- name: target_audience
dtype: large_string
- name: programming_language
dtype: large_string
- name: common_sense_topic
dtype: large_string
- name: idx
dtype: int64
- name: response
dtype: large_string
- name: text
dtype: string
splits:
- name: train
num_bytes: 236790880
num_examples: 100000
download_size: 100584419
dataset_size: 236790880
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "appy-llama2-1k"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
LLMGlobalyTest/categories-11k | 2023-10-04T16:42:01.000Z | [
"region:us"
] | LLMGlobalyTest | null | null | null | 0 | 5 | Entry not found |
adityarra07/czech_train_data | 2023-10-04T18:09:04.000Z | [
"region:us"
] | adityarra07 | null | null | null | 0 | 5 | ---
dataset_info:
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
dtype: string
- name: id
dtype: string
splits:
- name: train
num_bytes: 669027003.0330192
num_examples: 12613
- name: test
num_bytes: 26521327.322326932
num_examples: 500
download_size: 658874865
dataset_size: 695548330.3553461
---
# Dataset Card for "czech_train_data"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
adityarra07/czech_test | 2023-10-04T18:09:08.000Z | [
"region:us"
] | adityarra07 | null | null | null | 0 | 5 | ---
dataset_info:
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
dtype: string
- name: id
dtype: string
splits:
- name: train
num_bytes: 53042654.644653864
num_examples: 1000
download_size: 52259185
dataset_size: 53042654.644653864
---
# Dataset Card for "czech_test"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Intuit-GenSRF/tweet-eval-hate | 2023-10-05T01:06:59.000Z | [
"region:us"
] | Intuit-GenSRF | null | null | null | 0 | 5 | ---
dataset_info:
features:
- name: text
dtype: string
- name: labels
sequence: string
splits:
- name: train
num_bytes: 1217914
num_examples: 9000
download_size: 816470
dataset_size: 1217914
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "tweet_eval-hate"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
mrabhi0505/instruction_output_dataset | 2023-10-05T11:38:37.000Z | [
"region:us"
] | mrabhi0505 | null | null | null | 0 | 5 | Entry not found |
teragron/reviews | 2023-10-09T23:55:54.000Z | [
"task_categories:text-generation",
"size_categories:1M<n<10M",
"language:en",
"license:mit",
"finance",
"region:us"
] | teragron | null | null | null | 0 | 5 | ---
license: mit
language:
- en
tags:
- finance
pretty_name: review_me
size_categories:
- 1M<n<10M
task_categories:
- text-generation
---
The following packages are necessary to compile the model in C:
```bash
sudo apt install gcc-7
```
```bash
sudo apt-get install build-essential
```
```python
# Jupyter/IPython cell: "!" runs wget in the shell, {i} is interpolated per iteration
for i in range(1, 21):
    !wget https://huggingface.co/datasets/teragron/reviews/resolve/main/chunk_{i}.bin
```
```bash
git clone https://github.com/karpathy/llama2.c.git
```
```bash
cd llama2.c
```
```bash
pip install -r requirements.txt
```
Path: data/TinyStories_all_data |
Fraol/RunMetrics | 2023-10-05T15:59:41.000Z | [
"region:us"
] | Fraol | null | null | null | 0 | 5 | ---
dataset_info:
features:
- name: source
dtype: string
- name: path_name
dtype: string
- name: file_name
dtype: string
- name: ref_type
dtype: string
- name: ref_status
dtype: string
- name: hash
dtype: string
- name: class_name
dtype: string
- name: method_name
dtype: string
- name: row_number
dtype: int64
splits:
- name: train
num_bytes: 2296248627
num_examples: 385811
download_size: 480698181
dataset_size: 2296248627
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "RunMetrics"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
neelblabla/enron_labeled_email-prompts-for-llama2_7b | 2023-10-06T13:33:23.000Z | [
"task_categories:text-classification",
"task_categories:text-generation",
"size_categories:1K<n<10K",
"language:en",
"region:us"
] | neelblabla | null | null | null | 0 | 5 | ---
task_categories:
- text-classification
- text-generation
language:
- en
size_categories:
- 1K<n<10K
--- |
ninja/arabic-english | 2023-10-10T10:06:43.000Z | [
"region:us"
] | ninja | null | null | null | 0 | 5 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: ar
dtype: string
- name: en
dtype: string
splits:
- name: train
num_bytes: 6307140.417917149
num_examples: 38085
- name: test
num_bytes: 700848.5820828509
num_examples: 4232
download_size: 4401263
dataset_size: 7007989.0
---
# Dataset Card for "arabic-english"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
librarian-bots/arxiv_abstracts | 2023-10-05T18:42:31.000Z | [
"region:us"
] | librarian-bots | null | null | null | 0 | 5 | ---
dataset_info:
features:
- name: title
dtype: string
- name: abstract
dtype: string
- name: url
dtype: string
- name: category
dtype: string
- name: prediction
dtype: string
- name: probability
dtype: float64
- name: arxiv_id
dtype: string
splits:
- name: train
num_bytes: 715878
num_examples: 500
download_size: 411327
dataset_size: 715878
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "arxiv_abstracts"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
atulsinghphd/demo-new | 2023-10-05T20:03:41.000Z | [
"region:us"
] | atulsinghphd | null | null | null | 0 | 5 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 32616.0
num_examples: 172
- name: test
num_bytes: 8154.0
num_examples: 43
download_size: 12874
dataset_size: 40770.0
---
# Dataset Card for "demo-new"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
hanifabdlh/quac-lamini-instruction-indo-50k-60k | 2023-10-06T02:22:35.000Z | [
"region:us"
] | hanifabdlh | null | null | null | 0 | 5 | ---
dataset_info:
features:
- name: context
dtype: string
- name: instruction
dtype: string
- name: response
dtype: string
- name: instruction_source
dtype: string
splits:
- name: train
num_bytes: 3974664
num_examples: 10000
download_size: 2224367
dataset_size: 3974664
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "quac-lamini-instruction-indo-50k-60k"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
joey234/affixal_negation_nonce | 2023-10-06T04:20:13.000Z | [
"region:us"
] | joey234 | null | null | null | 0 | 5 | ---
dataset_info:
features:
- name: word
dtype: string
- name: affix
dtype: string
- name: label
dtype: int64
splits:
- name: train
num_bytes: 11912
num_examples: 418
download_size: 4873
dataset_size: 11912
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "affixal_negation_nonce"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Falah/Islamic_forest_image_prompts | 2023-10-06T07:54:16.000Z | [
"region:us"
] | Falah | null | null | null | 0 | 5 | ---
dataset_info:
features:
- name: prompts
dtype: string
splits:
- name: train
num_bytes: 3517467
num_examples: 10000
download_size: 151517
dataset_size: 3517467
---
# Dataset Card for "Islamic_forest_image_prompts"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
AayushShah/Univeral_SQL_Three_Datasets_Combined_WithText_IDs | 2023-10-06T11:46:02.000Z | [
"region:us"
] | AayushShah | null | null | null | 0 | 5 | ---
configs:
- config_name: default
data_files:
- split: context
path: data/context-*
- split: text_sql_v1
path: data/text_sql_v1-*
- split: sparc
path: data/sparc-*
dataset_info:
features:
- name: NATURAL_LANG
dtype: string
- name: SQL
dtype: string
- name: SCHEMA
dtype: string
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
- name: labels
sequence: int64
splits:
- name: context
num_bytes: 299674929
num_examples: 78519
- name: text_sql_v1
num_bytes: 899253880
num_examples: 220302
- name: sparc
num_bytes: 12250417
num_examples: 2846
download_size: 94153422
dataset_size: 1211179226
---
# Dataset Card for "Univeral_SQL_Three_Datasets_Combined_WithText_IDs"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Sharathhebbar24/Indian-Constitution | 2023-10-06T12:57:27.000Z | [
"task_categories:text-classification",
"task_categories:text-generation",
"task_categories:text2text-generation",
"language:en",
"license:apache-2.0",
"region:us"
] | Sharathhebbar24 | null | null | null | 0 | 5 | ---
license: apache-2.0
task_categories:
- text-classification
- text-generation
- text2text-generation
language:
- en
---
# Indian Constitution Dataset
The dataset can be used for text classification, text generation, and text2text generation. |
carnival13/massive_val_DA2_tokenized | 2023-10-06T13:41:11.000Z | [
"region:us"
] | carnival13 | null | null | null | 0 | 5 | ---
dataset_info:
features:
- name: pass_label
dtype: int64
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
splits:
- name: train
num_bytes: 16518290
num_examples: 24160
download_size: 3770585
dataset_size: 16518290
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "massive_val_DA2_tokenized"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
umarigan/turkish_corpus_tokenized | 2023-10-06T22:58:06.000Z | [
"region:us"
] | umarigan | null | null | null | 0 | 5 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: valid
path: data/valid-*
dataset_info:
features:
- name: input_ids
sequence: int32
splits:
- name: train
num_bytes: 49605042096
num_examples: 48253932
- name: valid
num_bytes: 595216112
num_examples: 579004
download_size: 24336775144
dataset_size: 50200258208
---
# Dataset Card for "turkish_corpus_tokenized"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
emozilla/Long-Data-Collections-Fine-Tune | 2023-10-09T15:01:11.000Z | [
"region:us"
] | emozilla | null | null | null | 0 | 5 | ---
dataset_info:
features:
- name: text
dtype: string
- name: prompt
dtype: string
- name: completion
dtype: string
splits:
- name: train
num_bytes: 12859272204
num_examples: 98557
download_size: 7118608463
dataset_size: 12859272204
---
# Dataset Card for "Long-Data-Collections-Fine-Tune"
Parquet version of the fine-tune split of [togethercomputer/Long-Data-Collections](https://huggingface.co/datasets/togethercomputer/Long-Data-Collections)
Statistics (in # of characters): `total_len: 6419025428, average_len: 65130.08135393731` |
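The reported average length can be sanity-checked against the totals above (a quick sketch; the figures are copied verbatim from the statistics line and the split's `num_examples`):

```python
# Sanity-check the character statistics reported for the fine-tune split.
total_len = 6_419_025_428   # total characters, as reported above
num_examples = 98_557       # train-split example count from dataset_info
average_len = total_len / num_examples
print(average_len)  # ≈ 65130.08135393731, matching the reported average_len
```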
AustinMcMike/steve_jobs_conversational | 2023-10-07T04:05:47.000Z | [
"license:apache-2.0",
"region:us"
] | AustinMcMike | null | null | null | 0 | 5 | ---
license: apache-2.0
---
|
rishikesh/mini-sarcasm-data | 2023-10-07T04:29:43.000Z | [
"license:mit",
"region:us"
] | rishikesh | null | null | null | 0 | 5 | ---
license: mit
---
|
gayathrimanoj/dataset_cpp | 2023-10-07T09:06:39.000Z | [
"region:us"
] | gayathrimanoj | null | null | null | 0 | 5 | Entry not found |
JiggaBooJombs/Novel | 2023-10-07T09:20:55.000Z | [
"license:apache-2.0",
"region:us"
] | JiggaBooJombs | null | null | null | 0 | 5 | ---
license: apache-2.0
---
|
Buffett/ntuadl_hw1 | 2023-10-07T12:59:32.000Z | [
"region:us"
] | Buffett | null | null | null | 0 | 5 | Entry not found |
zhangshuoming/c_arm64_json | 2023-10-07T13:42:43.000Z | [
"region:us"
] | zhangshuoming | null | null | null | 0 | 5 | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 86023555
num_examples: 19949
download_size: 23189009
dataset_size: 86023555
---
# Dataset Card for "c_arm64_json"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
marcus2000/dataset4sentinement_HSE | 2023-10-08T00:39:44.000Z | [
"region:us"
] | marcus2000 | null | null | null | 0 | 5 | ---
dataset_info:
features:
- name: text
dtype: string
- name: labels
dtype: int64
splits:
- name: train
num_bytes: 3679508.0480941418
num_examples: 3322
- name: test
num_bytes: 650171.9519058582
num_examples: 587
download_size: 2311435
dataset_size: 4329680.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
# Dataset Card for "dataset4sentinement_HSE"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Prisha290/Dataset_onlycorrect | 2023-10-08T05:58:51.000Z | [
"region:us"
] | Prisha290 | null | null | null | 0 | 5 | Entry not found |
ShashiVish/cover-letter-dataset | 2023-10-08T07:21:14.000Z | [
"region:us"
] | ShashiVish | null | null | null | 0 | 5 | ---
dataset_info:
features:
- name: skills
dtype: string
- name: summary
dtype: string
- name: Job Title
dtype: string
- name: Job Responsibilities
dtype: string
- name: Preferred Qualifications
dtype: string
- name: Hiring Company
dtype: string
- name: User Name
dtype: string
- name: Past Working Experience
dtype: string
- name: Current Working Experience
dtype: string
- name: Skillsets
dtype: string
- name: Qualifications
dtype: string
- name: Cover Letter
dtype: string
splits:
- name: train
num_bytes: 42300.3
num_examples: 7
- name: test
num_bytes: 18128.7
num_examples: 3
download_size: 93771
dataset_size: 60429.0
---
# Dataset Card for "cover-letter-dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
carles-undergrad-thesis/en-id-parallel-sentences-embedding | 2023-10-08T07:57:27.000Z | [
"region:us"
] | carles-undergrad-thesis | null | null | null | 0 | 5 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: text_en
dtype: string
- name: text_id
dtype: string
- name: target_embedding
sequence: float32
- name: input_ids_en
sequence: int64
- name: attention_mask_en
sequence: int64
- name: input_ids_id
sequence: int64
- name: attention_mask_id
sequence: int64
splits:
- name: train
num_bytes: 7580096944
num_examples: 1000000
download_size: 4106348878
dataset_size: 7580096944
---
# Dataset Card for "en-id-parallel-sentences-embedding"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Urna02/hangul_ivanov | 2023-10-08T09:24:42.000Z | [
"license:apache-2.0",
"region:us"
] | Urna02 | null | null | null | 0 | 5 | ---
license: apache-2.0
---
|
Linyuyu/linruanruan | 2023-10-10T07:00:52.000Z | [
"region:us"
] | Linyuyu | null | null | null | 0 | 5 | Entry not found |
hk-kaden-kim/uzh-hs23-etsp-eval-single-base-bar | 2023-10-08T10:52:59.000Z | [
"region:us"
] | hk-kaden-kim | null | null | null | 0 | 5 | ---
dataset_info:
features:
- name: image
dtype: image
- name: caption
dtype: string
splits:
- name: test
num_bytes: 5223052.0
num_examples: 100
download_size: 5179034
dataset_size: 5223052.0
---
# Dataset Card for "uzh-hs23-etsp-eval-single-base-bar"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
lollox/math_dataset_50k | 2023-10-08T14:51:58.000Z | [
"region:us"
] | lollox | null | null | null | 0 | 5 | ---
task_categories:
- question-answering
--- |
AryanNsc/spacehubdataset | 2023-10-08T16:41:27.000Z | [
"region:us"
] | AryanNsc | null | null | null | 0 | 5 | Entry not found |
TwoAbove/LAION-discord-gpt4v | 2023-10-08T22:33:39.000Z | [
"license:cc0-1.0",
"region:us"
] | TwoAbove | null | null | null | 0 | 5 | ---
license: cc0-1.0
dataset_info:
features:
- name: caption
dtype: string
- name: link
dtype: string
splits:
- name: train
num_bytes: 10559
num_examples: 16
download_size: 11091
dataset_size: 10559
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
d4un/training-bias | 2023-10-09T05:12:26.000Z | [
"region:us"
] | d4un | null | null | null | 0 | 5 | ---
# For reference on model card metadata, see the spec: https://github.com/huggingface/hub-docs/blob/main/datasetcard.md?plain=1
# Doc / guide: https://huggingface.co/docs/hub/datasets-cards
{}
---
# Dataset Card for Dataset Name
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1).
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
This dataset is purely in English.
Some of the responses were generated by ChatGPT.
### Discussion of Biases
This dataset intentionally contains gender- and job-related biases that reflect those present in society,
for the research purpose of examining the effects these biases have on the model. The creators do not endorse these biases.
|
sankettgorey/donut_5 | 2023-10-09T08:00:21.000Z | [
"region:us"
] | sankettgorey | null | null | null | 0 | 5 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: image
dtype: image
- name: ground_truth
dtype: string
splits:
- name: train
num_bytes: 437960649.0
num_examples: 1000
download_size: 402681326
dataset_size: 437960649.0
---
# Dataset Card for "donut_5"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Falah/coloring_book_animals | 2023-10-09T09:09:03.000Z | [
"region:us"
] | Falah | null | null | null | 0 | 5 | ---
dataset_info:
features:
- name: prompts
dtype: string
splits:
- name: train
num_bytes: 284135
num_examples: 1000
download_size: 3100
dataset_size: 284135
---
# Dataset Card for "coloring_book_animals"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
xjlulu/ntu_adl_QA | 2023-10-10T01:41:16.000Z | [
"task_categories:question-answering",
"language:zh",
"license:apache-2.0",
"region:us"
] | xjlulu | null | null | null | 0 | 5 | ---
configs:
- config_name: default
data_files:
- split: train
path: "train.csv"
- split: validation
path: "validation.csv"
- split: test
path: "test.csv"
- config_name: paragraphs
data_files:
- split: context
path: "context.csv"
license: apache-2.0
task_categories:
- question-answering
language:
- zh
--- |
ilyas3141/ilias_test4 | 2023-10-09T17:20:07.000Z | [
"region:us"
] | ilyas3141 | null | null | null | 0 | 5 | Entry not found |