id stringlengths 2 115 | lastModified stringlengths 24 24 | tags list | author stringlengths 2 42 ⌀ | description stringlengths 0 6.67k ⌀ | citation stringlengths 0 10.7k ⌀ | likes int64 0 3.66k | downloads int64 0 8.89M | created timestamp[us] | card stringlengths 11 977k | card_len int64 11 977k | embeddings list |
|---|---|---|---|---|---|---|---|---|---|---|---|
chiayewken/flan-v2 | 2023-09-01T05:19:13.000Z | [
"region:us"
] | chiayewken | null | null | 3 | 26 | 2023-08-31T18:13:51 | ---
dataset_info:
features:
- name: source
dtype: string
- name: target
dtype: string
- name: task_name
dtype: string
- name: task_source
dtype: string
- name: template_type
dtype: string
- name: template_idx
dtype: int64
splits:
- name: train
num_bytes: 44316029472
num_examples: 23173509
download_size: 0
dataset_size: 44316029472
---
# Dataset Card for "flan-v2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 552 | [
[
-0.035400390625,
-0.019439697265625,
0.006923675537109375,
0.006805419921875,
-0.00818634033203125,
-0.01605224609375,
0.023101806640625,
-0.03314208984375,
0.0595703125,
0.04095458984375,
-0.056549072265625,
-0.0292816162109375,
-0.035736083984375,
-0.02661... |
Arabic-Clip/ImageCaptions-7M-Translations-Arabic-subset-150000 | 2023-09-21T09:53:24.000Z | [
"region:us"
] | Arabic-Clip | null | null | 0 | 26 | 2023-09-20T13:33:40 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
dim/grammarly_coedit | 2023-09-21T16:25:22.000Z | [
"region:us"
] | dim | null | null | 1 | 26 | 2023-09-21T16:25:13 | ---
dataset_info:
features:
- name: _id
dtype: string
- name: task
dtype: string
- name: src
dtype: string
- name: tgt
dtype: string
splits:
- name: train
num_bytes: 19943349
num_examples: 82466
download_size: 11658767
dataset_size: 19943349
---
# Dataset Card for "grammarly_coedit"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 458 | [
[
-0.022247314453125,
-0.024505615234375,
0.01561737060546875,
0.027679443359375,
0.00989532470703125,
-0.003963470458984375,
-0.006183624267578125,
-0.01007843017578125,
0.043304443359375,
0.0122833251953125,
-0.06549072265625,
-0.06005859375,
-0.050689697265625,... |
sauravjoshi23/aws-documentation-chunked | 2023-09-23T17:41:26.000Z | [
"region:us"
] | sauravjoshi23 | null | null | 1 | 26 | 2023-09-23T09:13:23 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
lhallee/uniref50_50-512 | 2023-09-26T19:14:45.000Z | [
"region:us"
] | lhallee | null | null | 0 | 26 | 2023-09-26T18:58:21 | ---
dataset_info:
features:
- name: uniref
dtype: string
splits:
- name: train
num_bytes: 10696656442
num_examples: 51521691
download_size: 10582703793
dataset_size: 10696656442
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "uniref50_50-512"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 461 | [
[
-0.04925537109375,
0.0060577392578125,
0.00946044921875,
0.02313232421875,
-0.0175323486328125,
0.00858306884765625,
0.019500732421875,
-0.00148773193359375,
0.049957275390625,
0.040618896484375,
-0.059661865234375,
-0.053375244140625,
-0.0266571044921875,
-... |
mrabhi0505/rule_code | 2023-09-29T11:32:48.000Z | [
"region:us"
] | mrabhi0505 | null | null | 0 | 26 | 2023-09-29T11:29:59 | Entry not found | 15 | [
[
-0.0214080810546875,
-0.01497650146484375,
0.05718994140625,
0.028839111328125,
-0.0350341796875,
0.046539306640625,
0.052490234375,
0.00507354736328125,
0.051361083984375,
0.016998291015625,
-0.05206298828125,
-0.01496124267578125,
-0.06036376953125,
0.0379... |
yashnbx/l27b-E02-large-b05-0584-3 | 2023-09-30T17:13:20.000Z | [
"region:us"
] | yashnbx | null | null | 0 | 26 | 2023-09-30T17:13:04 | ---
dataset_info:
features:
- name: id
dtype: int64
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
splits:
- name: test
num_bytes: 1011775
num_examples: 146
- name: train
num_bytes: 4032267
num_examples: 584
download_size: 831330
dataset_size: 5044042
---
# Dataset Card for "l27b-E02-large-b05-0584-3"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 531 | [
[
-0.045989990234375,
-0.01050567626953125,
0.0291748046875,
0.03167724609375,
-0.01543426513671875,
-0.002155303955078125,
0.01090240478515625,
-0.015960693359375,
0.0689697265625,
0.0305023193359375,
-0.056243896484375,
-0.04632568359375,
-0.0313720703125,
-... |
danielpark/smiles_plc50 | 2023-10-03T20:07:01.000Z | [
"license:unknown",
"region:us"
] | danielpark | null | null | 0 | 26 | 2023-10-03T19:53:49 | ---
license: unknown
---
Github Repository: [SMILES Featurizer](https://github.com/dsdanielpark/SMILES-featurizer)
```
curl -LO https://raw.githubusercontent.com/dsdanielpark/smiles-featurizer/master/dataset/smiles-plc50-train.csv
```
Alternatively, you can use the dataset uploaded to Hugging Face as follows.
```python
import pandas as pd
from datasets import load_dataset

# Load the CSV configuration of the dataset from the Hub
dataset = load_dataset("danielpark/smiles_plc50", "csv")
df = pd.DataFrame(dataset['train'])
```
| 457 | [
[
-0.0189971923828125,
-0.011505126953125,
-0.00027108192443847656,
0.039276123046875,
-0.00911712646484375,
-0.003971099853515625,
-0.0170135498046875,
0.006137847900390625,
0.0589599609375,
0.00904083251953125,
-0.0697021484375,
-0.033294677734375,
-0.0453186035... |
CHOJW1004/kochatgpt_RM2 | 2023-10-04T07:45:31.000Z | [
"region:us"
] | CHOJW1004 | null | null | 0 | 26 | 2023-10-04T07:45:27 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: prompt
dtype: string
- name: chosen
dtype: string
- name: rejected
dtype: string
splits:
- name: train
num_bytes: 15758172.0
num_examples: 27594
- name: test
num_bytes: 1750908.0
num_examples: 3066
download_size: 9270108
dataset_size: 17509080.0
---
# Dataset Card for "kochatgpt_RM"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 622 | [
[
-0.046661376953125,
-0.0153656005859375,
-0.0018987655639648438,
0.007122039794921875,
-0.038482666015625,
0.011688232421875,
0.0122833251953125,
-0.0027599334716796875,
0.051727294921875,
0.036590576171875,
-0.049468994140625,
-0.0423583984375,
-0.0521850585937... |
sordonia/wikipedia-en | 2023-10-10T21:16:05.000Z | [
"region:us"
] | sordonia | Wikipedia with math and LaTeX included. | null | 0 | 26 | 2023-10-08T05:09:09 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
heegyu/chart2text_pew | 2023-10-12T05:08:48.000Z | [
"size_categories:1K<n<10K",
"language:en",
"license:gpl-3.0",
"region:us"
] | heegyu | null | null | 0 | 26 | 2023-10-12T05:06:05 | ---
dataset_info:
features:
- name: id
dtype: int64
- name: old_id
dtype: string
- name: title
dtype: string
- name: imgPath
dtype: string
- name: caption
dtype: string
- name: URL
dtype: string
- name: dataPath
dtype: string
- name: chartType
dtype: string
- name: complexity
dtype: string
- name: topic
dtype: string
- name: bboxesPath
dtype: string
- name: image
dtype: image
- name: data
dtype: string
splits:
- name: train
num_bytes: 280009472
num_examples: 6500
- name: val
num_bytes: 62717503.096
num_examples: 1392
- name: test
num_bytes: 61265523.23
num_examples: 1393
download_size: 400276057
dataset_size: 403992498.32600003
license: gpl-3.0
language:
- en
size_categories:
- 1K<n<10K
---
# Dataset Card for "chart2text_pew"
original dataset: https://github.com/vis-nlp/Chart-to-text
| 914 | [
[
-0.01233673095703125,
-0.024566650390625,
0.0175628662109375,
0.032958984375,
-0.045196533203125,
-0.001171112060546875,
-0.0244293212890625,
-0.01340484619140625,
0.0301971435546875,
0.054168701171875,
-0.0220184326171875,
-0.04852294921875,
-0.041900634765625,... |
erbacher/AmbigNQ-clarifying-question | 2023-10-12T16:58:14.000Z | [
"region:us"
] | erbacher | null | null | 0 | 26 | 2023-10-12T16:57:59 | ---
dataset_info:
features:
- name: 'Unnamed: 0'
dtype: int64
- name: index
dtype: int64
- name: clar
dtype: string
- name: question
dtype: string
- name: ambig
dtype: bool
- name: input_passage
dtype: string
- name: intent
dtype: string
- name: answer
dtype: string
splits:
- name: train
num_bytes: 62693997.0
num_examples: 10000
- name: dev
num_bytes: 6291036.0
num_examples: 1001
- name: test
num_bytes: 64783344.0
num_examples: 1000
download_size: 75095693
dataset_size: 133768377.0
---
# Dataset Card for "AmbigNQ-clarifying-question"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 758 | [
[
-0.047454833984375,
-0.042327880859375,
0.00298309326171875,
0.011322021484375,
-0.017059326171875,
0.001590728759765625,
0.01059722900390625,
-0.01409149169921875,
0.059478759765625,
0.03863525390625,
-0.066162109375,
-0.0472412109375,
-0.0291748046875,
-0.... |
royzhong/sample-threat-scenarios | 2023-10-16T19:46:26.000Z | [
"region:us"
] | royzhong | null | null | 0 | 26 | 2023-10-16T06:05:21 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
Mouli07/ROCO_Chest_Xray_v1 | 2023-10-17T17:37:34.000Z | [
"region:us"
] | Mouli07 | null | null | 0 | 26 | 2023-10-17T17:36:07 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: image
dtype: image
splits:
- name: train
num_bytes: 38241164.86
num_examples: 1735
download_size: 39693229
dataset_size: 38241164.86
---
# Dataset Card for "ROCO_Chest_Xray_v1"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 455 | [
[
-0.024169921875,
0.0020904541015625,
0.0130615234375,
-0.0013952255249023438,
-0.0408935546875,
-0.01189422607421875,
0.039825439453125,
-0.01175689697265625,
0.055938720703125,
0.049163818359375,
-0.06451416015625,
-0.062744140625,
-0.05169677734375,
-0.022... |
Danielbrdz/Barcenas-lmsys-Dataset | 2023-10-18T15:49:46.000Z | [
"language:es",
"region:us"
] | Danielbrdz | null | null | 0 | 26 | 2023-10-18T01:39:28 | ---
language:
- es
---
Dataset derived from lmsys/lmsys-chat-1m,
containing only Spanish-language data. | 113 | [
[
0.00026607513427734375,
-0.0494384765625,
-0.0003216266632080078,
0.04766845703125,
-0.0016412734985351562,
0.00223541259765625,
-0.01227569580078125,
-0.01422882080078125,
0.095947265625,
0.078857421875,
-0.0977783203125,
-0.0445556640625,
-0.00815582275390625,... |
Globaly/glfamiliasPrendasDeVestir | 2023-10-18T17:12:28.000Z | [
"region:us"
] | Globaly | null | null | 0 | 26 | 2023-10-18T16:55:32 | Entry not found | 15 | [
[
-0.021392822265625,
-0.01494598388671875,
0.05718994140625,
0.028839111328125,
-0.0350341796875,
0.046539306640625,
0.052490234375,
0.00507354736328125,
0.051361083984375,
0.01702880859375,
-0.052093505859375,
-0.01494598388671875,
-0.06036376953125,
0.03790... |
advancedcv/Food500Cap | 2023-10-19T02:01:16.000Z | [
"region:us"
] | advancedcv | null | null | 0 | 26 | 2023-10-19T01:25:20 | ---
dataset_info:
features:
- name: image
dtype: image
- name: cat
dtype: string
- name: caption
dtype: string
splits:
- name: train
num_bytes: 3004559279.747
num_examples: 19877
- name: test
num_bytes: 601407879.384
num_examples: 4938
download_size: 3000710601
dataset_size: 3605967159.131
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
# Dataset Card for "caps_data_2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 630 | [
[
-0.02911376953125,
-0.021453857421875,
0.01036834716796875,
0.020111083984375,
-0.00742340087890625,
0.00811767578125,
0.0186614990234375,
-0.014495849609375,
0.0572509765625,
0.0343017578125,
-0.0555419921875,
-0.044036865234375,
-0.058563232421875,
-0.0240... |
skvarre/movie_posters | 2023-10-19T01:35:45.000Z | [
"region:us"
] | skvarre | null | null | 2 | 26 | 2023-10-19T01:31:05 | ---
dataset_info:
features:
- name: id
dtype: int64
- name: image
dtype: image
- name: title
dtype: string
- name: genres
list:
- name: id
dtype: int64
- name: name
dtype: string
- name: overview
dtype: string
- name: popularity
dtype: float64
- name: release_date
dtype: string
- name: budget
dtype: int64
- name: revenue
dtype: int64
- name: tagline
dtype: string
- name: original_language
dtype: string
- name: runtime
dtype: int64
splits:
- name: train
num_bytes: 4559499046.67
num_examples: 9955
download_size: 4558666511
dataset_size: 4559499046.67
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "movie_posters"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 922 | [
[
-0.040008544921875,
-0.01100921630859375,
0.020660400390625,
0.0018768310546875,
-0.0181732177734375,
0.005489349365234375,
0.02532958984375,
0.0039215087890625,
0.058441162109375,
0.039031982421875,
-0.0601806640625,
-0.04632568359375,
-0.054168701171875,
-... |
noxneural/kashaloti | 2023-10-25T09:51:37.000Z | [
"task_categories:question-answering",
"task_categories:translation",
"task_categories:summarization",
"task_categories:conversational",
"size_categories:100K<n<1M",
"language:sq",
"region:us"
] | noxneural | null | null | 2 | 26 | 2023-10-21T02:23:51 | ---
task_categories:
- question-answering
- translation
- summarization
- conversational
language:
- sq
pretty_name: Kashaloti_V0.1
size_categories:
- 100K<n<1M
---
# Kashaloti_V0.1
**Task Categories**:
- Question-Answering
- Translation
- Summarization
- Conversational
**Language**: sq
**Size Categories**: 100K < n < 1M
---
# Dataset Card for "Kashaloti_V0.1"
## Dataset Summary
This dataset is a translated version of the OpenOrca dataset into Albanian. The initial dataset consisted of augmented FLAN Collection data, primarily used for training and evaluation in the natural language processing field. This version specifically utilizes the ~1M GPT-4 completions, translated using the OPUS-MT model from English to Albanian, with subsequent refinement to ensure clarity and coherence.
## Dataset Attribution
### Translation Process:
The translation was executed using the OPUS-MT model from English to Albanian. The dataset was then refined and cleaned to ensure semantic accuracy and coherence.
## Supported Tasks and Leaderboards
This dataset can be utilized for tasks such as Albanian language modeling, text generation, text augmentation, and other NLP tasks in Albanian. As it's a translated version, it can also be used for cross-lingual understanding.
## Languages
The dataset is now in Albanian.
## Dataset Structure
### Data Instances
A data instance in this dataset represents an entry from the FLAN collection that has been translated into Albanian and augmented by querying either GPT-4 or GPT-3.5; the model's reply is stored in the response field.
### Data Fields
- 'id': a unique identifier
- 'system_prompt': System Prompt presented to the GPT model
- 'question': a question entry translated into Albanian
- 'response': the model's response to the question
### Data Splits
The data is unsplit.
## Dataset Creation
### Curation Rationale
The dataset was translated to offer Albanian researchers and developers an augmented text data source. By leveraging the "reasoning trace" augmentation from GPT-3.5 and GPT-4, it provides insights into the model's reasoning capabilities in Albanian.
### Source Data
The original dataset is the OpenOrca dataset, particularly the ~1M GPT-4 completions.
## Dataset Use
### Use Cases
Potential applications include Albanian language understanding, model training, performance evaluation, and other NLP tasks.
### Usage Caveats
Given the translation process, users are encouraged to validate the dataset's accuracy and relevance for specific tasks. Regularly checking for updates and improvements is recommended.
### Getting Started
The dataset can be accessed via the Hugging Face datasets library. Streaming is advised due to the potential size of the files. Keep an eye on the dataset's repository on Hugging Face for any updates.
________________________________________
**Original dataset contributors**:
- Teknium
- WingLian/Caseus
- Eric Hartford
- NanoBit
- Pankaj
- Winddude
- Rohan
**Acknowledgment to the following organizations**:
- [AlignmentLab.ai](http://AlignmentLab.ai)
- Autometa
- Entropi
- AtlasUnified
- NeverendingToast
- NanoBit
- WingLian/Caseus
**Special mention**: TheBloke for supporting the community.
https://huggingface.co/TheBloke
| 3,287 | [
[
-0.042877197265625,
-0.06719970703125,
0.016815185546875,
0.01517486572265625,
-0.019378662109375,
-0.0128326416015625,
-0.0127105712890625,
-0.038818359375,
0.031982421875,
0.046783447265625,
-0.054351806640625,
-0.041290283203125,
-0.05133056640625,
0.0178... |
Tochi2023/kolizo-designs-dataset | 2023-10-21T11:35:42.000Z | [
"region:us"
] | Tochi2023 | null | null | 0 | 26 | 2023-10-21T11:35:40 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: image
dtype: image
- name: ' text'
dtype: string
splits:
- name: train
num_bytes: 200160.0
num_examples: 10
download_size: 197786
dataset_size: 200160.0
---
# Dataset Card for "kolizo-designs-dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 485 | [
[
-0.04998779296875,
-0.0091094970703125,
0.0226593017578125,
0.01861572265625,
-0.023101806640625,
-0.0010213851928710938,
0.01885986328125,
-0.01288604736328125,
0.057403564453125,
0.0509033203125,
-0.06109619140625,
-0.062286376953125,
-0.0302581787109375,
... |
Leekp/dataset | 2023-10-21T14:05:30.000Z | [
"region:us"
] | Leekp | null | null | 0 | 26 | 2023-10-21T13:58:10 | Entry not found | 15 | [
[
-0.021392822265625,
-0.01494598388671875,
0.05718994140625,
0.028839111328125,
-0.0350341796875,
0.046539306640625,
0.052490234375,
0.00507354736328125,
0.051361083984375,
0.01702880859375,
-0.052093505859375,
-0.01494598388671875,
-0.06036376953125,
0.03790... |
davidfant/natural-questions-chunk-0 | 2023-10-22T22:48:50.000Z | [
"region:us"
] | davidfant | null | null | 0 | 26 | 2023-10-22T22:45:10 | ---
dataset_info:
features:
- name: id
dtype: string
- name: document
struct:
- name: html
dtype: string
- name: title
dtype: string
- name: tokens
sequence:
- name: end_byte
dtype: int64
- name: is_html
dtype: bool
- name: start_byte
dtype: int64
- name: token
dtype: string
- name: url
dtype: string
- name: question
struct:
- name: text
dtype: string
- name: tokens
sequence: string
- name: long_answer_candidates
sequence:
- name: end_byte
dtype: int64
- name: end_token
dtype: int64
- name: start_byte
dtype: int64
- name: start_token
dtype: int64
- name: top_level
dtype: bool
- name: annotations
sequence:
- name: id
dtype: string
- name: long_answer
struct:
- name: candidate_index
dtype: int64
- name: end_byte
dtype: int64
- name: end_token
dtype: int64
- name: start_byte
dtype: int64
- name: start_token
dtype: int64
- name: short_answers
sequence:
- name: end_byte
dtype: int64
- name: end_token
dtype: int64
- name: start_byte
dtype: int64
- name: start_token
dtype: int64
- name: text
dtype: string
- name: yes_no_answer
dtype:
class_label:
names:
'0': 'NO'
'1': 'YES'
splits:
- name: train
num_bytes: 4705302627
num_examples: 10000
download_size: 1826111395
dataset_size: 4705302627
---
# Dataset Card for "natural-questions-chunk-0"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 1,818 | [
[
-0.06341552734375,
-0.06829833984375,
0.0162506103515625,
0.0142669677734375,
-0.0302276611328125,
0.0013599395751953125,
0.0128173828125,
-0.02227783203125,
0.0789794921875,
0.04595947265625,
-0.06378173828125,
-0.02423095703125,
-0.02294921875,
0.000613689... |
chrisgru/dolphin | 2023-10-23T09:17:46.000Z | [
"region:us"
] | chrisgru | null | null | 0 | 26 | 2023-10-23T08:58:42 | ---
dataset_info:
features:
- name: system_prompt
dtype: string
- name: question
dtype: string
- name: response
dtype: string
splits:
- name: train
num_bytes: 15782.991668743547
num_examples: 9
download_size: 10003
dataset_size: 15782.991668743547
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "dolphin"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 535 | [
[
-0.072509765625,
-0.015045166015625,
0.01043701171875,
0.018157958984375,
-0.03033447265625,
-0.01104736328125,
0.03289794921875,
-0.03570556640625,
0.064453125,
0.0469970703125,
-0.06280517578125,
-0.035430908203125,
-0.046051025390625,
-0.00026726722717285... |
toilaluan/t2i_topic_comparision_db_v2 | 2023-10-27T00:11:11.000Z | [
"region:us"
] | toilaluan | null | null | 0 | 26 | 2023-10-26T16:21:51 | ---
dataset_info:
features:
- name: image
dtype: image
- name: topic
dtype: string
- name: prompt
dtype: string
- name: request_id
dtype: int64
- name: model_type
dtype: string
splits:
- name: train
num_bytes: 335176439.2
num_examples: 7200
download_size: 653813254
dataset_size: 335176439.2
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "t2i_topic_comparision_db_v2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 611 | [
[
-0.032470703125,
-0.00798797607421875,
0.018524169921875,
0.01357269287109375,
-0.021697998046875,
-0.00511932373046875,
0.023895263671875,
-0.01430511474609375,
0.050445556640625,
0.023040771484375,
-0.054168701171875,
-0.04815673828125,
-0.03118896484375,
... |
Sree1994/ddb_baseprompts | 2023-10-26T22:52:37.000Z | [
"region:us"
] | Sree1994 | null | null | 0 | 26 | 2023-10-26T22:52:33 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
- split: valid
path: data/valid-*
dataset_info:
features:
- name: Base_prompt
dtype: string
- name: Prompt
dtype: string
splits:
- name: train
num_bytes: 14886028
num_examples: 51602
- name: test
num_bytes: 2096918
num_examples: 7299
- name: valid
num_bytes: 4301342
num_examples: 14817
download_size: 10829614
dataset_size: 21284288
---
# Dataset Card for "ddb_baseprompts"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 691 | [
[
-0.0545654296875,
-0.03436279296875,
0.01294708251953125,
0.02960205078125,
-0.0198822021484375,
-0.003185272216796875,
0.02581787109375,
0.007232666015625,
0.06524658203125,
0.036529541015625,
-0.057769775390625,
-0.06597900390625,
-0.047332763671875,
-0.01... |
Adminhuggingface/LORA_ONE_DATA | 2023-10-27T06:18:33.000Z | [
"region:us"
] | Adminhuggingface | null | null | 0 | 26 | 2023-10-27T06:18:32 | ---
dataset_info:
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 2493084.0
num_examples: 6
download_size: 2495157
dataset_size: 2493084.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "LORA_ONE_DATA"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 475 | [
[
-0.042510986328125,
-0.038909912109375,
0.007354736328125,
0.0142822265625,
-0.0238800048828125,
-0.0162353515625,
0.035003662109375,
-0.01000213623046875,
0.0828857421875,
0.056732177734375,
-0.05950927734375,
-0.0643310546875,
-0.03656005859375,
-0.0294342... |
manishiitg/airoboros-2.2.1-hi | 2023-11-03T01:08:47.000Z | [
"region:us"
] | manishiitg | null | null | 0 | 26 | 2023-10-28T07:44:16 | ---
dataset_info:
features:
- name: instruction
dtype: string
- name: response
dtype: string
- name: system
dtype: string
- name: skip_prompt_formatting
dtype: bool
- name: category
dtype: string
- name: instruction_hindi
dtype: string
- name: response_hindi
dtype: string
- name: system_hindi
dtype: string
splits:
- name: train
num_bytes: 102711269
num_examples: 13641
download_size: 44765656
dataset_size: 102711269
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "airoboros-2.2.1-hi"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 747 | [
[
-0.038665771484375,
-0.01473236083984375,
-0.02166748046875,
0.0206146240234375,
-0.023681640625,
-0.0134124755859375,
0.0256195068359375,
-0.0222625732421875,
0.064208984375,
0.03326416015625,
-0.04583740234375,
-0.031463623046875,
-0.045928955078125,
-0.02... |
riddhiparakh/mannbot | 2023-10-28T15:04:25.000Z | [
"region:us"
] | riddhiparakh | null | null | 0 | 26 | 2023-10-28T12:32:36 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
josedonoso/apples-dataset-v1 | 2023-10-28T23:35:52.000Z | [
"region:us"
] | josedonoso | null | null | 0 | 26 | 2023-10-28T23:35:50 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 2704421.0
num_examples: 192
- name: test
num_bytes: 646648.0
num_examples: 48
download_size: 3236890
dataset_size: 3351069.0
---
# Dataset Card for "apples-dataset-v1"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 579 | [
[
-0.040252685546875,
-0.0205078125,
0.0181121826171875,
0.016357421875,
-0.010711669921875,
-0.0028533935546875,
0.043853759765625,
-0.0150909423828125,
0.0711669921875,
0.038238525390625,
-0.081787109375,
-0.047149658203125,
-0.04876708984375,
-0.03114318847... |
kunishou/hh-rlhf-49k-ja-single-turn | 2023-11-02T14:30:34.000Z | [
"license:mit",
"region:us"
] | kunishou | null | null | 0 | 26 | 2023-10-31T17:47:50 | ---
license: mit
---
This dataset was created by automatically translating part of "Anthropic/hh-rlhf" into Japanese and selecting only the single-turn conversations.
You can use this dataset for RLHF and DPO.
hh-rlhf repository
https://github.com/anthropics/hh-rlhf
Anthropic/hh-rlhf
https://huggingface.co/datasets/Anthropic/hh-rlhf | 333 | [
[
-0.03619384765625,
-0.06500244140625,
0.0404052734375,
0.01678466796875,
-0.03948974609375,
0.004627227783203125,
0.0009098052978515625,
-0.0421142578125,
0.058746337890625,
0.06610107421875,
-0.08880615234375,
-0.043304443359375,
-0.020843505859375,
0.02790... |
Ujan/github_classification | 2023-11-01T08:43:54.000Z | [
"region:us"
] | Ujan | null | null | 0 | 26 | 2023-11-01T08:43:19 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
dataset_info:
features:
- name: names
dtype: string
- name: readmes
dtype: string
- name: topics
dtype: string
- name: labels
dtype: string
splits:
- name: train
num_bytes: 51303107.05622984
num_examples: 10414
- name: validation
num_bytes: 6414119.971885082
num_examples: 1302
- name: test
num_bytes: 6414119.971885082
num_examples: 1302
download_size: 29047991
dataset_size: 64131347.00000001
---
# Dataset Card for "github_classification"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 814 | [
[
-0.0306396484375,
-0.00876617431640625,
0.00431060791015625,
-0.0002086162567138672,
-0.007354736328125,
0.0144195556640625,
0.01120758056640625,
-0.01497650146484375,
0.057281494140625,
0.02337646484375,
-0.041748046875,
-0.05902099609375,
-0.042755126953125,
... |
magnifi/hl-codellama-chat-response | 2023-11-02T16:45:00.000Z | [
"region:us"
] | magnifi | null | null | 0 | 26 | 2023-11-02T13:36:09 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: Query
dtype: string
- name: Result
dtype: string
- name: chat_response
dtype: string
splits:
- name: train
num_bytes: 1321860.461185117
num_examples: 1523
- name: test
num_bytes: 567627.5388148829
num_examples: 654
download_size: 0
dataset_size: 1889488.0
---
# Dataset Card for "hl-codellama-chat-response"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 646 | [
[
-0.04034423828125,
-0.04046630859375,
-0.00823974609375,
0.0284423828125,
0.0002810955047607422,
0.0218353271484375,
-0.002109527587890625,
-0.01214599609375,
0.0712890625,
0.033935546875,
-0.0556640625,
-0.049774169921875,
-0.0301513671875,
-0.0283355712890... |
kiamehr74/CoarseWSD-20 | 2021-08-10T09:48:50.000Z | [
"region:us"
] | kiamehr74 | The CoarseWSD-20 dataset is a coarse-grained word sense disambiguation dataset built from Wikipedia
(nouns only), targeting 2 to 5 senses of 20 ambiguous words. It was specifically designed
to provide an ideal setting for evaluating WSD models (e.g. no senses in test sets missing
from training), both quantitatively and qualitatively. | @misc{loureiro2021analysis,
title={Analysis and Evaluation of Language Models for Word Sense Disambiguation},
author={Daniel Loureiro and Kiamehr Rezaee and Mohammad Taher Pilehvar and Jose Camacho-Collados},
year={2021},
eprint={2008.11608},
archivePrefix={arXiv},
primaryClass={cs.CL}
} | 1 | 25 | 2022-03-02T23:29:22 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
sc2qa/sc2q_commoncrawl_large | 2022-03-30T18:34:11.000Z | [
"arxiv:2109.04689",
"region:us"
] | sc2qa | \ | @article{zhou2021generating,
author = {Li Zhou, Kevin Small, Yong Zhang, Sandeep Atluri},
title = "{Generating Self-Contained and Summary-Centric Question Answer Pairs via Differentiable Reward Imitation Learning}",
conference = {The 2021 Conference on Empirical Methods in Natural Language Processing (EMNLP 2021)},
year = 2021,
} | 2 | 25 | 2022-03-02T23:29:22 | For details, please refer to the following links.
Github repo: https://github.com/amazon-research/SC2QA-DRIL
Paper: [Generating Self-Contained and Summary-Centric Question Answer Pairs via Differentiable Reward Imitation Learning](https://arxiv.org/pdf/2109.04689.pdf) | 270 | [
[
-0.0102996826171875,
-0.054779052734375,
0.03216552734375,
0.004520416259765625,
-0.007640838623046875,
0.0089569091796875,
0.022369384765625,
-0.0239715576171875,
0.046905517578125,
0.049041748046875,
-0.0650634765625,
-0.0158538818359375,
-0.0185089111328125,
... |
stjokerli/TextToText_wic_seqio | 2022-03-18T04:56:49.000Z | [
"region:us"
] | stjokerli | null | null | 0 | 25 | 2022-03-13T09:30:58 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
Yaxin/SemEval2015Task12Raw | 2022-08-14T16:01:41.000Z | [
"region:us"
] | Yaxin | A collection of SemEval2015 specifically designed to aid research in Aspect Based Sentiment Analysis. | @inproceedings{pontiki2015semeval,
title={Semeval-2015 task 12: Aspect based sentiment analysis},
author={Pontiki, Maria and Galanis, Dimitrios and Papageorgiou, Harris and Manandhar, Suresh and Androutsopoulos, Ion},
booktitle={Proceedings of the 9th international workshop on semantic evaluation (SemEval 2015)},
pages={486--495},
year={2015}
} | 2 | 25 | 2022-04-21T14:03:59 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
Splend1dchan/NMSQA_testupload2 | 2022-06-02T15:31:29.000Z | [
"region:us"
] | Splend1dchan | null | null | 0 | 25 | 2022-06-02T11:52:11 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
nateraw/pizza_not_pizza | 2022-07-07T19:58:03.000Z | [
"license:other",
"region:us"
] | nateraw | null | null | 1 | 25 | 2022-07-07T19:57:37 | ---
license:
- other
kaggle_id: carlosrunner/pizza-not-pizza
---
# Dataset Card for Pizza or Not Pizza?
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://kaggle.com/datasets/carlosrunner/pizza-not-pizza
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
Who doesn't like pizza? This dataset contains about 1000 images of pizza and 1000 images of dishes other than pizza. It can be used for a simple binary image classification task.
All images were rescaled to have a maximum side length of 512 pixels.
This is a subset of the Food-101 dataset. Information about the original dataset can be found in the following paper:
Bossard, Lukas, Matthieu Guillaumin, and Luc Van Gool. "Food-101 – Mining Discriminative Components with Random Forests." In *European conference on computer vision*, pp. 446-461. Springer, Cham, 2014.
The original dataset can be found in the following locations:
https://data.vision.ee.ethz.ch/cvl/datasets_extra/food-101/
https://www.kaggle.com/datasets/dansbecker/food-101
https://paperswithcode.com/dataset/food-101
https://www.tensorflow.org/datasets/catalog/food101
Number of instances in each class:
Pizza: 983
Not Pizza: 983
## Acknowledgements
The Food-101 data set consists of images from Foodspotting [1] which are not property of the Federal Institute of Technology Zurich (ETHZ). Any use beyond scientific fair use must be negotiated with the respective picture owners according to the Foodspotting terms of use [2].
[1] http://www.foodspotting.com/
[2] http://www.foodspotting.com/terms/
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
This dataset was shared by [@carlosrunner](https://kaggle.com/carlosrunner)
### Licensing Information
The license for this dataset is other
### Citation Information
```bibtex
[More Information Needed]
```
### Contributions
[More Information Needed] | 3,871 | [
[
-0.029937744140625,
-0.05072021484375,
0.004695892333984375,
-0.0115509033203125,
0.007160186767578125,
-0.00890350341796875,
-0.0215911865234375,
-0.022796630859375,
0.039886474609375,
0.03997802734375,
-0.0577392578125,
-0.072998046875,
-0.04833984375,
0.0... |
bigscience/xP3all | 2023-05-30T15:51:40.000Z | [
"task_categories:other",
"annotations_creators:expert-generated",
"annotations_creators:crowdsourced",
"multilinguality:multilingual",
"size_categories:100M<n<1B",
"language:ak",
"language:ar",
"language:as",
"language:bm",
"language:bn",
"language:ca",
"language:code",
"language:en",
"lan... | bigscience | xP3 (Crosslingual Public Pool of Prompts) is a collection of prompts & datasets across 46 of languages & 16 NLP tasks. It is used for the training of BLOOMZ and mT0, multilingual language models capable of following human instructions in dozens of languages zero-shot. | @misc{muennighoff2022crosslingual,
title={Crosslingual Generalization through Multitask Finetuning},
author={Niklas Muennighoff and Thomas Wang and Lintang Sutawika and Adam Roberts and Stella Biderman and Teven Le Scao and M Saiful Bari and Sheng Shen and Zheng-Xin Yong and Hailey Schoelkopf and Xiangru Tang and Dragomir Radev and Alham Fikri Aji and Khalid Almubarak and Samuel Albanie and Zaid Alyafeai and Albert Webson and Edward Raff and Colin Raffel},
year={2022},
eprint={2211.01786},
archivePrefix={arXiv},
primaryClass={cs.CL}
} | 17 | 25 | 2022-07-30T21:05:02 | ---
annotations_creators:
- expert-generated
- crowdsourced
language:
- ak
- ar
- as
- bm
- bn
- ca
- code
- en
- es
- eu
- fon
- fr
- gu
- hi
- id
- ig
- ki
- kn
- lg
- ln
- ml
- mr
- ne
- nso
- ny
- or
- pa
- pt
- rn
- rw
- sn
- st
- sw
- ta
- te
- tn
- ts
- tum
- tw
- ur
- vi
- wo
- xh
- yo
- zh
- zu
programming_language:
- C
- C++
- C#
- Go
- Java
- JavaScript
- Lua
- PHP
- Python
- Ruby
- Rust
- Scala
- TypeScript
license:
- apache-2.0
multilinguality:
- multilingual
pretty_name: xP3
size_categories:
- 100M<n<1B
task_categories:
- other
---
# Dataset Card for xP3
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Additional Information](#additional-information)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Repository:** https://github.com/bigscience-workshop/xmtf
- **Paper:** [Crosslingual Generalization through Multitask Finetuning](https://arxiv.org/abs/2211.01786)
- **Point of Contact:** [Niklas Muennighoff](mailto:niklas@hf.co)
### Dataset Summary
> xP3 (Crosslingual Public Pool of Prompts) is a collection of prompts & datasets across 46 languages & 16 NLP tasks. It is used for the training of BLOOMZ and mT0, multilingual language models capable of following human instructions in dozens of languages zero-shot.
- **Creation:** The dataset can be recreated using instructions available [here](https://github.com/bigscience-workshop/xmtf#create-xp3). We provide this version to save processing time and ease reproducibility.
- **Languages:** 46 (Can be extended by [recreating with more splits](https://github.com/bigscience-workshop/xmtf#create-xp3))
- **xP3 Dataset Family:**
<table>
<tr>
<th>Name</th>
<th>Explanation</th>
<th>Example models</th>
</tr>
<tr>
<td><a href=https://huggingface.co/datasets/Muennighoff/xP3x>xP3x</a></t>
<td>Mixture of 17 tasks in 277 languages with English prompts</td>
<td>WIP - Join us at Project Aya @<a href=https://cohere.for.ai/>C4AI</a> to help!</td>
</tr>
<tr>
<td><a href=https://huggingface.co/datasets/bigscience/xP3>xP3</a></t>
<td>Mixture of 13 training tasks in 46 languages with English prompts</td>
<td><a href=https://huggingface.co/bigscience/bloomz>bloomz</a> & <a href=https://huggingface.co/bigscience/mt0-xxl>mt0-xxl</a></td>
</tr>
<tr>
<td><a href=https://huggingface.co/datasets/bigscience/xP3mt>xP3mt</a></t>
<td>Mixture of 13 training tasks in 46 languages with prompts in 20 languages (machine-translated from English)</td>
<td><a href=https://huggingface.co/bigscience/bloomz-mt>bloomz-mt</a> & <a href=https://huggingface.co/bigscience/mt0-xxl-mt>mt0-xxl-mt</a></td>
</tr>
<tr>
<td><a href=https://huggingface.co/datasets/bigscience/xP3all>xP3all</a></t>
<td>xP3 + evaluation datasets adding an additional 3 tasks for a total of 16 tasks in 46 languages with English prompts</td>
<td></td>
</tr>
<tr>
<td><a href=https://huggingface.co/datasets/bigscience/xP3megds>xP3megds</a></t>
<td><a href=https://github.com/bigscience-workshop/Megatron-DeepSpeed>Megatron-DeepSpeed</a> processed version of xP3</td>
<td><a href=https://huggingface.co/bigscience/bloomz>bloomz</a></td>
</tr>
<tr>
<td><a href=https://huggingface.co/datasets/Muennighoff/P3>P3</a></t>
<td>Repreprocessed version of the English-only <a href=https://huggingface.co/datasets/bigscience/P3>P3</a> with 8 training tasks</td>
<td><a href=https://huggingface.co/bigscience/bloomz-p3>bloomz-p3</a> & <a href=https://huggingface.co/bigscience/mt0-xxl-p3>mt0-xxl-p3</a></td>
</tr>
</table>
## Dataset Structure
### Data Instances
An example of "train" looks as follows:
```json
{
"inputs": "Sentence 1: Fue académico en literatura metafísica, teología y ciencias clásicas.\nSentence 2: Fue académico en literatura metafísica, teología y ciencia clásica.\nQuestion: Can we rewrite Sentence 1 to Sentence 2? Yes or No?",
"targets": "Yes"
}
```
### Data Fields
The data fields are the same among all splits:
- `inputs`: the natural language input fed to the model
- `targets`: the natural language target that the model has to generate
### Data Splits
The table below summarizes sizes per language (computed from the `merged_{lang}.jsonl` files). Because languages like `tw` consist only of single-sentence translation samples from Flores, their byte percentage is significantly lower than their sample percentage.
|Language|Kilobytes|%|Samples|%|
|--------|------:|-:|---:|-:|
|tw|106288|0.11|265071|0.33|
|bm|107056|0.11|265180|0.33|
|ak|108096|0.11|265071|0.33|
|ca|110608|0.11|271191|0.33|
|eu|113008|0.11|281199|0.35|
|fon|113072|0.11|265063|0.33|
|st|114080|0.11|265063|0.33|
|ki|115040|0.12|265180|0.33|
|tum|116032|0.12|265063|0.33|
|wo|122560|0.12|365063|0.45|
|ln|126304|0.13|365060|0.45|
|as|156256|0.16|265063|0.33|
|or|161472|0.16|265063|0.33|
|kn|165456|0.17|265063|0.33|
|ml|175040|0.18|265864|0.33|
|rn|192992|0.19|318189|0.39|
|nso|229712|0.23|915051|1.13|
|tn|235536|0.24|915054|1.13|
|lg|235936|0.24|915021|1.13|
|rw|249360|0.25|915043|1.13|
|ts|250256|0.25|915044|1.13|
|sn|252496|0.25|865056|1.07|
|xh|254672|0.26|915058|1.13|
|zu|263712|0.26|915061|1.13|
|ny|272128|0.27|915063|1.13|
|ig|325232|0.33|950097|1.17|
|yo|352784|0.35|918416|1.13|
|ne|393680|0.39|315754|0.39|
|pa|523248|0.52|339210|0.42|
|gu|560688|0.56|347499|0.43|
|sw|566656|0.57|1130481|1.4|
|mr|666240|0.67|417269|0.52|
|bn|832720|0.83|428843|0.53|
|ta|926912|0.93|415433|0.51|
|te|1343232|1.35|584590|0.72|
|ur|1918272|1.92|855756|1.06|
|vi|3102512|3.11|1672106|2.07|
|code|4330752|4.34|2707724|3.34|
|hi|4403568|4.41|1554667|1.92|
|zh|4599440|4.61|3589234|4.43|
|id|4612256|4.62|2643418|3.27|
|ar|4683456|4.69|2160181|2.67|
|fr|6591120|6.6|5316403|6.57|
|pt|6886800|6.9|3752156|4.63|
|es|8587920|8.6|5413205|6.69|
|en|39252528|39.33|32740750|40.44|
|total|99807184|100.0|80956089|100.0|
## Dataset Creation
### Source Data
#### Training datasets
- Code Miscellaneous
- [CodeComplex](https://huggingface.co/datasets/codeparrot/codecomplex)
- [Docstring Corpus](https://huggingface.co/datasets/teven/code_docstring_corpus)
- [GreatCode](https://huggingface.co/datasets/great_code)
- [State Changes](https://huggingface.co/datasets/Fraser/python-state-changes)
- Closed-book QA
- [Hotpot QA](https://huggingface.co/datasets/hotpot_qa)
- [Trivia QA](https://huggingface.co/datasets/trivia_qa)
- [Web Questions](https://huggingface.co/datasets/web_questions)
- [Wiki QA](https://huggingface.co/datasets/wiki_qa)
- Extractive QA
- [Adversarial QA](https://huggingface.co/datasets/adversarial_qa)
- [CMRC2018](https://huggingface.co/datasets/cmrc2018)
- [DRCD](https://huggingface.co/datasets/clue)
- [DuoRC](https://huggingface.co/datasets/duorc)
- [MLQA](https://huggingface.co/datasets/mlqa)
- [Quoref](https://huggingface.co/datasets/quoref)
- [ReCoRD](https://huggingface.co/datasets/super_glue)
- [ROPES](https://huggingface.co/datasets/ropes)
- [SQuAD v2](https://huggingface.co/datasets/squad_v2)
- [xQuAD](https://huggingface.co/datasets/xquad)
- TyDI QA
- [Primary](https://huggingface.co/datasets/khalidalt/tydiqa-primary)
- [Goldp](https://huggingface.co/datasets/khalidalt/tydiqa-goldp)
- Multiple-Choice QA
- [ARC](https://huggingface.co/datasets/ai2_arc)
- [C3](https://huggingface.co/datasets/c3)
- [CoS-E](https://huggingface.co/datasets/cos_e)
- [Cosmos](https://huggingface.co/datasets/cosmos)
- [DREAM](https://huggingface.co/datasets/dream)
- [MultiRC](https://huggingface.co/datasets/super_glue)
- [OpenBookQA](https://huggingface.co/datasets/openbookqa)
- [PiQA](https://huggingface.co/datasets/piqa)
- [QUAIL](https://huggingface.co/datasets/quail)
- [QuaRel](https://huggingface.co/datasets/quarel)
- [QuaRTz](https://huggingface.co/datasets/quartz)
- [QASC](https://huggingface.co/datasets/qasc)
- [RACE](https://huggingface.co/datasets/race)
- [SciQ](https://huggingface.co/datasets/sciq)
- [Social IQA](https://huggingface.co/datasets/social_i_qa)
- [Wiki Hop](https://huggingface.co/datasets/wiki_hop)
- [WiQA](https://huggingface.co/datasets/wiqa)
- Paraphrase Identification
- [MRPC](https://huggingface.co/datasets/super_glue)
- [PAWS](https://huggingface.co/datasets/paws)
- [PAWS-X](https://huggingface.co/datasets/paws-x)
- [QQP](https://huggingface.co/datasets/qqp)
- Program Synthesis
- [APPS](https://huggingface.co/datasets/codeparrot/apps)
- [CodeContests](https://huggingface.co/datasets/teven/code_contests)
- [JupyterCodePairs](https://huggingface.co/datasets/codeparrot/github-jupyter-text-code-pairs)
- [MBPP](https://huggingface.co/datasets/Muennighoff/mbpp)
- [NeuralCodeSearch](https://huggingface.co/datasets/neural_code_search)
- [XLCoST](https://huggingface.co/datasets/codeparrot/xlcost-text-to-code)
- Structure-to-text
- [Common Gen](https://huggingface.co/datasets/common_gen)
- [Wiki Bio](https://huggingface.co/datasets/wiki_bio)
- Sentiment
- [Amazon](https://huggingface.co/datasets/amazon_polarity)
- [App Reviews](https://huggingface.co/datasets/app_reviews)
- [IMDB](https://huggingface.co/datasets/imdb)
- [Rotten Tomatoes](https://huggingface.co/datasets/rotten_tomatoes)
- [Yelp](https://huggingface.co/datasets/yelp_review_full)
- Simplification
- [BiSECT](https://huggingface.co/datasets/GEM/BiSECT)
- Summarization
- [CNN Daily Mail](https://huggingface.co/datasets/cnn_dailymail)
- [Gigaword](https://huggingface.co/datasets/gigaword)
- [MultiNews](https://huggingface.co/datasets/multi_news)
- [SamSum](https://huggingface.co/datasets/samsum)
- [Wiki-Lingua](https://huggingface.co/datasets/GEM/wiki_lingua)
- [XLSum](https://huggingface.co/datasets/GEM/xlsum)
- [XSum](https://huggingface.co/datasets/xsum)
- Topic Classification
- [AG News](https://huggingface.co/datasets/ag_news)
- [DBPedia](https://huggingface.co/datasets/dbpedia_14)
- [TNEWS](https://huggingface.co/datasets/clue)
- [TREC](https://huggingface.co/datasets/trec)
- [CSL](https://huggingface.co/datasets/clue)
- Translation
- [Flores-200](https://huggingface.co/datasets/Muennighoff/flores200)
- [Tatoeba](https://huggingface.co/datasets/Helsinki-NLP/tatoeba_mt)
- Word Sense disambiguation
- [WiC](https://huggingface.co/datasets/super_glue)
- [XL-WiC](https://huggingface.co/datasets/pasinit/xlwic)
#### Evaluation datasets (included in [xP3all](https://huggingface.co/datasets/bigscience/xP3all) except for HumanEval)
- Natural Language Inference
- [ANLI](https://huggingface.co/datasets/anli)
- [CB](https://huggingface.co/datasets/super_glue)
- [RTE](https://huggingface.co/datasets/super_glue)
- [XNLI](https://huggingface.co/datasets/xnli)
- Coreference Resolution
- [Winogrande](https://huggingface.co/datasets/winogrande)
- [XWinograd](https://huggingface.co/datasets/Muennighoff/xwinograd)
- Program Synthesis
- [HumanEval](https://huggingface.co/datasets/openai_humaneval)
- Sentence Completion
- [COPA](https://huggingface.co/datasets/super_glue)
- [Story Cloze](https://huggingface.co/datasets/story_cloze)
- [XCOPA](https://huggingface.co/datasets/xcopa)
- [XStoryCloze](https://huggingface.co/datasets/Muennighoff/xstory_cloze)
#### Additional [xP3all](https://huggingface.co/datasets/bigscience/xP3all) datasets
- Coreference Resolution
- [WSC (Fixed)](https://huggingface.co/datasets/super_glue)
- Sentence Completion
- [HellaSwag](https://huggingface.co/datasets/hellaswag)
- Translation
- [MultiEurlex](https://huggingface.co/datasets/multi_eurlex)
## Additional Information
### Licensing Information
The dataset is released under Apache 2.0.
### Citation Information
```bibtex
@misc{muennighoff2022crosslingual,
title={Crosslingual Generalization through Multitask Finetuning},
author={Niklas Muennighoff and Thomas Wang and Lintang Sutawika and Adam Roberts and Stella Biderman and Teven Le Scao and M Saiful Bari and Sheng Shen and Zheng-Xin Yong and Hailey Schoelkopf and Xiangru Tang and Dragomir Radev and Alham Fikri Aji and Khalid Almubarak and Samuel Albanie and Zaid Alyafeai and Albert Webson and Edward Raff and Colin Raffel},
year={2022},
eprint={2211.01786},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
### Contributions
Thanks to the contributors of [promptsource](https://github.com/bigscience-workshop/promptsource/graphs/contributors) for adding many prompts used in this dataset. | 12,994 | [
[
-0.0367431640625,
-0.034088134765625,
0.0208740234375,
0.01209259033203125,
0.0088348388671875,
0.01058197021484375,
-0.0201568603515625,
-0.0263214111328125,
0.03271484375,
0.0136566162109375,
-0.056243896484375,
-0.057403564453125,
-0.0347900390625,
0.0226... |
copenlu/scientific-exaggeration-detection | 2022-08-17T13:45:14.000Z | [
"task_categories:text-classification",
"task_ids:natural-language-inference",
"task_ids:multi-input-text-classification",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:n<1K",
"language:en",
"license:gpl-3.0",
"scientific text",
... | copenlu | null | null | 3 | 25 | 2022-08-17T13:29:27 | ---
annotations_creators:
- expert-generated
language:
- en
language_creators:
- found
license:
- gpl-3.0
multilinguality:
- monolingual
paperswithcode_id: semi-supervised-exaggeration-detection-of
pretty_name: Scientific Exaggeration Detection
size_categories:
- n<1K
source_datasets: []
tags:
- scientific text
- scholarly text
- inference
- fact checking
- misinformation
task_categories:
- text-classification
task_ids:
- natural-language-inference
- multi-input-text-classification
---
# Dataset Card for Scientific Exaggeration Detection
## Dataset Description
- **Homepage:** https://github.com/copenlu/scientific-exaggeration-detection
- **Repository:** https://github.com/copenlu/scientific-exaggeration-detection
- **Paper:** https://aclanthology.org/2021.emnlp-main.845.pdf
### Dataset Summary
Public trust in science depends on honest and factual communication of scientific papers. However, recent studies have demonstrated a tendency of news media to misrepresent scientific papers by exaggerating their findings. Given this, we present a formalization of and study into the problem of exaggeration detection in science communication. While there is an abundance of scientific papers and popular media articles written about them, very rarely do the articles include a direct link to the original paper, making data collection challenging. We address this by curating a set of labeled press release/abstract pairs from existing expert-annotated studies on exaggeration in press releases of scientific papers, suitable for benchmarking the performance of machine learning models on the task. Using limited data from this and previous studies on exaggeration detection in science, we introduce MT-PET, a multi-task version of Pattern Exploiting Training (PET), which leverages knowledge from complementary cloze-style QA tasks to improve few-shot learning. We demonstrate that MT-PET outperforms PET and supervised learning both when data is limited and when there is an abundance of data for the main task.
## Dataset Structure
The training and test data are derived from the InSciOut studies from [Sumner et al. 2014](https://www.bmj.com/content/349/bmj.g7015) and [Bratton et al. 2019](https://pubmed.ncbi.nlm.nih.gov/31728413/#:~:text=Results%3A%20We%20found%20that%20the,inference%20from%20non%2Dhuman%20studies.). The splits have the following fields:
```
original_file_id: The ID of the original spreadsheet in the Sumner/Bratton data where the annotations are derived from
press_release_conclusion: The conclusion sentence from the press release
press_release_strength: The strength label for the press release
abstract_conclusion: The conclusion sentence from the abstract
abstract_strength: The strength label for the abstract
exaggeration_label: The final exaggeration label
```
The exaggeration label is one of `same`, `exaggerates`, or `downplays`. The strength label is one of the following:
```
0: Statement of no relationship
1: Statement of correlation
2: Conditional statement of causation
3: Statement of causation
```
## Dataset Creation
See section 4 of the [paper](https://aclanthology.org/2021.emnlp-main.845.pdf) for details on how the dataset was curated. The original InSciOut data can be found [here](https://figshare.com/articles/dataset/InSciOut/903704)
## Citation
```
@inproceedings{wright2021exaggeration,
title={{Semi-Supervised Exaggeration Detection of Health Science Press Releases}},
author={Dustin Wright and Isabelle Augenstein},
booktitle = {Proceedings of EMNLP},
publisher = {Association for Computational Linguistics},
year = 2021
}
```
Thanks to [@dwright37](https://github.com/dwright37) for adding this dataset. | 3,713 | [
[
-0.021514892578125,
-0.057891845703125,
0.0237274169921875,
0.029327392578125,
-0.03082275390625,
0.0036602020263671875,
-0.0152435302734375,
-0.047210693359375,
0.04144287109375,
0.01192474365234375,
-0.021514892578125,
-0.048004150390625,
-0.06488037109375,
... |
rajistics/electricity_demand | 2022-10-19T21:03:02.000Z | [
"task_categories:time-series-forecasting",
"region:us"
] | rajistics | null | null | 2 | 25 | 2022-09-18T19:06:12 | ---
task_categories:
- time-series-forecasting
---
The Victoria electricity demand dataset from the [MAPIE github repository](https://github.com/scikit-learn-contrib/MAPIE/tree/master/examples/data).
It consists of the hourly electricity demand (in GW)
of the state of Victoria in Australia, together with the temperature
(in degrees Celsius).
| 339 | [
[
-0.02130126953125,
-0.023529052734375,
0.02728271484375,
-0.0118255615234375,
-0.00452423095703125,
-0.0208587646484375,
0.023468017578125,
-0.008026123046875,
0.048065185546875,
0.057891845703125,
-0.055206298828125,
-0.040130615234375,
-0.01256561279296875,
... |
tomekkorbak/detoxify-pile-chunk3-500000-550000 | 2022-10-04T17:42:07.000Z | [
"region:us"
] | tomekkorbak | null | null | 0 | 25 | 2022-10-04T17:42:00 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
tomekkorbak/detoxify-pile-chunk3-550000-600000 | 2022-10-04T17:46:16.000Z | [
"region:us"
] | tomekkorbak | null | null | 0 | 25 | 2022-10-04T17:46:07 | Entry not found | 15 | [
[
-0.0214080810546875,
-0.01494598388671875,
0.05718994140625,
0.02880859375,
-0.0350341796875,
0.0465087890625,
0.052490234375,
0.00505828857421875,
0.051361083984375,
0.0170135498046875,
-0.05206298828125,
-0.0149993896484375,
-0.06036376953125,
0.0379028320... |
tomekkorbak/detoxify-pile-chunk3-800000-850000 | 2022-10-04T22:47:07.000Z | [
"region:us"
] | tomekkorbak | null | null | 0 | 25 | 2022-10-04T17:47:49 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
tomekkorbak/detoxify-pile-chunk3-700000-750000 | 2022-10-04T17:50:07.000Z | [
"region:us"
] | tomekkorbak | null | null | 0 | 25 | 2022-10-04T17:49:59 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
tomekkorbak/detoxify-pile-chunk3-600000-650000 | 2022-10-04T17:51:35.000Z | [
"region:us"
] | tomekkorbak | null | null | 0 | 25 | 2022-10-04T17:51:26 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
tomekkorbak/detoxify-pile-chunk3-650000-700000 | 2022-10-04T18:03:56.000Z | [
"region:us"
] | tomekkorbak | null | null | 0 | 25 | 2022-10-04T18:03:45 | Entry not found | 15 | [
[
-0.0213775634765625,
-0.014984130859375,
0.05718994140625,
0.0288543701171875,
-0.0350341796875,
0.046478271484375,
0.052520751953125,
0.005062103271484375,
0.051361083984375,
0.016998291015625,
-0.0521240234375,
-0.01496124267578125,
-0.0604248046875,
0.037... |
tomekkorbak/detoxify-pile-chunk3-1200000-1250000 | 2022-10-04T23:47:33.000Z | [
"region:us"
] | tomekkorbak | null | null | 0 | 25 | 2022-10-04T23:47:25 | Entry not found | 15 | [
[
-0.0213775634765625,
-0.014984130859375,
0.05718994140625,
0.0288543701171875,
-0.0350341796875,
0.046478271484375,
0.052520751953125,
0.005062103271484375,
0.051361083984375,
0.016998291015625,
-0.0521240234375,
-0.01496124267578125,
-0.0604248046875,
0.037... |
tomekkorbak/detoxify-pile-chunk3-1250000-1300000 | 2022-10-05T00:28:11.000Z | [
"region:us"
] | tomekkorbak | null | null | 0 | 25 | 2022-10-05T00:28:02 | Entry not found | 15 | [
[
-0.0213775634765625,
-0.014984130859375,
0.05718994140625,
0.0288543701171875,
-0.0350341796875,
0.046478271484375,
0.052520751953125,
0.005062103271484375,
0.051361083984375,
0.016998291015625,
-0.0521240234375,
-0.01496124267578125,
-0.0604248046875,
0.037... |
carlosejimenez/stsb_corpus | 2022-10-28T00:27:31.000Z | [
"region:us"
] | carlosejimenez | null | null | 0 | 25 | 2022-10-28T00:27:24 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
lewtun/music_genres_small | 2022-11-03T13:36:49.000Z | [
"region:us"
] | lewtun | null | null | 2 | 25 | 2022-11-03T13:36:11 | ---
dataset_info:
features:
- name: audio
dtype: audio
- name: song_id
dtype: int64
- name: genre_id
dtype: int64
- name: genre
dtype: string
splits:
- name: train
num_bytes: 392427659.9527852
num_examples: 1000
download_size: 390675126
dataset_size: 392427659.9527852
---
# Dataset Card for "music_genres_small"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 487 | [
[
-0.05072021484375,
-0.01276397705078125,
0.0205841064453125,
0.01464080810546875,
-0.00957489013671875,
-0.01158905029296875,
-0.0203399658203125,
-0.005855560302734375,
0.072265625,
0.0260467529296875,
-0.0682373046875,
-0.05999755859375,
-0.03411865234375,
... |
Norod78/simpsons-blip-captions | 2022-11-09T16:27:19.000Z | [
"task_categories:text-to-image",
"annotations_creators:machine-generated",
"language_creators:other",
"multilinguality:monolingual",
"size_categories:n<1K",
"language:en",
"license:cc-by-nc-sa-4.0",
"region:us"
] | Norod78 | null | null | 2 | 25 | 2022-11-06T11:11:36 | ---
dataset_info:
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 51605730.0
num_examples: 755
download_size: 50553165
dataset_size: 51605730.0
pretty_name: 'Simpsons BLIP captions'
size_categories:
- n<1K
tags: []
task_categories:
- text-to-image
license: cc-by-nc-sa-4.0
annotations_creators:
- machine-generated
language:
- en
language_creators:
- other
multilinguality:
- monolingual
---
# Dataset Card for "simpsons-blip-captions"
| 531 | [
[
-0.0197601318359375,
-0.0002884864807128906,
-0.014434814453125,
0.03839111328125,
-0.042205810546875,
0.026885986328125,
-0.0203094482421875,
0.0247802734375,
0.0306854248046875,
0.0413818359375,
-0.03826904296875,
-0.04022216796875,
-0.039031982421875,
0.0... |
bigbio/iepa | 2022-12-22T15:44:47.000Z | [
"multilinguality:monolingual",
"language:en",
"license:unknown",
"region:us"
] | bigbio | The IEPA benchmark PPI corpus is designed for relation extraction. It was created from 303 PubMed abstracts, each of which contains a specific pair of co-occurring chemicals. | @ARTICLE{ding2001mining,
title = "Mining {MEDLINE}: abstracts, sentences, or phrases?",
author = "Ding, J and Berleant, D and Nettleton, D and Wurtele, E",
journal = "Pac Symp Biocomput",
pages = "326--337",
year = 2002,
address = "United States",
language = "en"
} | 1 | 25 | 2022-11-13T22:09:00 |
---
language:
- en
bigbio_language:
- English
license: unknown
multilinguality: monolingual
bigbio_license_shortname: UNKNOWN
pretty_name: IEPA
homepage: http://psb.stanford.edu/psb-online/proceedings/psb02/abstracts/p326.html
bigbio_pubmed: True
bigbio_public: True
bigbio_tasks:
- RELATION_EXTRACTION
---
# Dataset Card for IEPA
## Dataset Description
- **Homepage:** http://psb.stanford.edu/psb-online/proceedings/psb02/abstracts/p326.html
- **Pubmed:** True
- **Public:** True
- **Tasks:** RE
The IEPA benchmark PPI corpus is designed for relation extraction. It was created from 303 PubMed abstracts, each of which contains a specific pair of co-occurring chemicals.
## Citation Information
```
@ARTICLE{ding2001mining,
title = "Mining {MEDLINE}: abstracts, sentences, or phrases?",
author = "Ding, J and Berleant, D and Nettleton, D and Wurtele, E",
journal = "Pac Symp Biocomput",
pages = "326--337",
year = 2002,
address = "United States",
language = "en"
}
```
| 1,014 | [
[
-0.02197265625,
-0.0011644363403320312,
0.03387451171875,
0.024078369140625,
0.002170562744140625,
-0.0194854736328125,
0.0029201507568359375,
-0.036468505859375,
0.017578125,
0.0137481689453125,
-0.03680419921875,
-0.034881591796875,
-0.03515625,
0.03994750... |
bigbio/umnsrs | 2022-12-22T15:47:36.000Z | [
"multilinguality:monolingual",
"language:en",
"license:cc0-1.0",
"region:us"
] | bigbio | UMNSRS, developed by Pakhomov, et al., consists of 725 clinical term pairs rated for semantic similarity and relatedness.
The similarity and relatedness of each term pair were annotated on a continuous scale by having a medical resident touch
a bar on a touch-sensitive computer screen to indicate the degree of similarity or relatedness.
The following subsets are available:
- similarity: A set of 566 UMLS concept pairs manually rated for semantic similarity (e.g. whale-dolphin) using a
continuous response scale.
- relatedness: A set of 588 UMLS concept pairs manually rated for semantic relatedness (e.g. needle-thread) using a
continuous response scale.
- similarity_mod: Modification of the UMNSRS-Similarity dataset to exclude control samples and those pairs that did not
match text in clinical, biomedical and general English corpora. Exact modifications are detailed in the paper (Corpus
Domain Effects on Distributional Semantic Modeling of Medical Terms. Serguei V.S. Pakhomov, Greg Finley, Reed McEwan,
Yan Wang, and Genevieve B. Melton. Bioinformatics. 2016; 32(23):3635-3644). The resulting dataset contains 449 pairs.
- relatedness_mod: Modification of the UMNSRS-Relatedness dataset to exclude control samples and those pairs that did
not match text in clinical, biomedical and general English corpora. Exact modifications are detailed in the paper
(Corpus Domain Effects on Distributional Semantic Modeling of Medical Terms. Serguei V.S. Pakhomov, Greg Finley,
Reed McEwan, Yan Wang, and Genevieve B. Melton. Bioinformatics. 2016; 32(23):3635-3644).
The resulting dataset contains 458 pairs. | @inproceedings{pakhomov2010semantic,
title={Semantic similarity and relatedness between clinical terms: an experimental study},
author={Pakhomov, Serguei and McInnes, Bridget and Adam, Terrence and Liu, Ying and Pedersen, Ted and Melton, Genevieve B},
booktitle={AMIA annual symposium proceedings},
volume={2010},
pages={572},
year={2010},
organization={American Medical Informatics Association}
} | 1 | 25 | 2022-11-13T22:12:42 |
---
language:
- en
bigbio_language:
- English
license: cc0-1.0
multilinguality: monolingual
bigbio_license_shortname: CC0_1p0
pretty_name: UMNSRS
homepage: https://conservancy.umn.edu/handle/11299/196265/
bigbio_pubmed: False
bigbio_public: True
bigbio_tasks:
- SEMANTIC_SIMILARITY
---
# Dataset Card for UMNSRS
## Dataset Description
- **Homepage:** https://conservancy.umn.edu/handle/11299/196265/
- **Pubmed:** False
- **Public:** True
- **Tasks:** STS
UMNSRS, developed by Pakhomov, et al., consists of 725 clinical term pairs rated for semantic similarity and relatedness.
The similarity and relatedness of each term pair were annotated on a continuous scale by having a medical resident touch
a bar on a touch-sensitive computer screen to indicate the degree of similarity or relatedness.
The following subsets are available:
- similarity: A set of 566 UMLS concept pairs manually rated for semantic similarity (e.g. whale-dolphin) using a
continuous response scale.
- relatedness: A set of 588 UMLS concept pairs manually rated for semantic relatedness (e.g. needle-thread) using a
continuous response scale.
- similarity_mod: Modification of the UMNSRS-Similarity dataset to exclude control samples and those pairs that did not
match text in clinical, biomedical and general English corpora. Exact modifications are detailed in the paper (Corpus
Domain Effects on Distributional Semantic Modeling of Medical Terms. Serguei V.S. Pakhomov, Greg Finley, Reed McEwan,
Yan Wang, and Genevieve B. Melton. Bioinformatics. 2016; 32(23):3635-3644). The resulting dataset contains 449 pairs.
- relatedness_mod: Modification of the UMNSRS-Relatedness dataset to exclude control samples and those pairs that did
not match text in clinical, biomedical and general English corpora. Exact modifications are detailed in the paper
(Corpus Domain Effects on Distributional Semantic Modeling of Medical Terms. Serguei V.S. Pakhomov, Greg Finley,
Reed McEwan, Yan Wang, and Genevieve B. Melton. Bioinformatics. 2016; 32(23):3635-3644).
The resulting dataset contains 458 pairs.
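Model predictions on such continuous-scale ratings are typically evaluated with a rank correlation against the human judgments. The sketch below is illustrative only — the ratings and scores are made up, and the helper functions are not part of the dataset or its tooling — but shows a minimal Spearman correlation in plain Python:

```python
def rank(values):
    # 1-based ranks, averaging positions for ties (as Spearman's rho requires).
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg_rank = (i + 1 + j + 1) / 2  # average of 1-based positions i+1..j+1
        for k in range(i, j + 1):
            ranks[order[k]] = avg_rank
        i = j + 1
    return ranks

def spearman(xs, ys):
    # Pearson correlation of the two rank vectors.
    rx, ry = rank(xs), rank(ys)
    n = len(rx)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# Hypothetical mean annotator ratings vs. model similarity scores.
human = [6.2, 1.3, 4.8, 2.0]
model = [0.91, 0.10, 0.55, 0.35]
print(round(spearman(human, model), 3))  # 1.0 (identical orderings)
```

A library implementation such as `scipy.stats.spearmanr` would normally be used instead; the hand-rolled version is only meant to make the metric concrete.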
## Citation Information
```
@inproceedings{pakhomov2010semantic,
title={Semantic similarity and relatedness between clinical terms: an experimental study},
author={Pakhomov, Serguei and McInnes, Bridget and Adam, Terrence and Liu, Ying and Pedersen, Ted and Melton, Genevieve B},
booktitle={AMIA annual symposium proceedings},
volume={2010},
pages={572},
year={2010},
organization={American Medical Informatics Association}
}
```
| 2,540 | [
[
-0.01031494140625,
-0.037322998046875,
0.03857421875,
-0.00595855712890625,
-0.03045654296875,
-0.02264404296875,
-0.0037860870361328125,
-0.037506103515625,
0.036865234375,
0.057861328125,
-0.0208587646484375,
-0.045989990234375,
-0.042999267578125,
0.03509... |
r-three/fib | 2022-11-19T15:57:58.000Z | [
"region:us"
] | r-three | null | null | 4 | 25 | 2022-11-19T15:22:00 |
# Dataset Card for FIB
## Dataset Summary
The FIB benchmark consists of 3579 examples for evaluating the factual inconsistency of large language models. Each example consists of a document and a pair of summaries: a factually consistent one and a factually inconsistent one. It is based on documents and summaries from XSum and CNN/DM.
Since this dataset is intended to evaluate the factual inconsistency of large language models, there is only a test split.
Accuracies should be reported separately for examples from XSum and for examples from CNN/DM. This is because the behavior of models on XSum and CNN/DM are expected to be very different. The factually inconsistent summaries are model-extracted from the document for CNN/DM but are model-generated for XSum.
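Reporting the two accuracies separately can be done with a small per-origin tally. The sketch below is illustrative: the `origin` and `correct` record fields are assumptions for the example, not the dataset's actual column names.

```python
from collections import defaultdict

def accuracy_by_origin(records):
    # Each record carries the example's source ("xsum" or "cnndm") and whether
    # the model scored the factually consistent summary higher.
    hits, totals = defaultdict(int), defaultdict(int)
    for r in records:
        totals[r["origin"]] += 1
        hits[r["origin"]] += int(r["correct"])
    return {origin: hits[origin] / totals[origin] for origin in totals}

preds = [
    {"origin": "xsum", "correct": True},
    {"origin": "xsum", "correct": False},
    {"origin": "cnndm", "correct": True},
]
print(accuracy_by_origin(preds))  # {'xsum': 0.5, 'cnndm': 1.0}
```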
### Citation Information
```
@article{tam2022fib,
title={Evaluating the Factual Consistency of Large Language Models Through Summarization},
author={Tam, Derek and Mascarenhas, Anisha and Zhang, Shiyue and Kwan, Sarah and Bansal, Mohit and Raffel, Colin},
journal={arXiv preprint arXiv:2211.08412},
year={2022}
}
```
### Licensing Information
license: cc-by-4.0 | 1,145 | [
[
-0.0227813720703125,
-0.06976318359375,
0.0128631591796875,
0.0165863037109375,
-0.006855010986328125,
-0.00623321533203125,
-0.0201416015625,
-0.0281982421875,
0.0031909942626953125,
0.0305633544921875,
-0.0222930908203125,
-0.03460693359375,
-0.043212890625,
... |
piuba-bigdata/articles_and_comments | 2023-02-04T00:32:48.000Z | [
"region:us"
] | piuba-bigdata | null | null | 2 | 25 | 2022-11-29T01:25:15 | ---
dataset_info:
features:
- name: tweet_id
dtype: string
- name: text
dtype: string
- name: title
dtype: string
- name: url
dtype: string
- name: user
dtype: string
- name: body
dtype: string
- name: created_at
dtype: string
- name: comments
list:
- name: created_at
dtype: string
- name: prediction
struct:
- name: APPEARANCE
dtype: int64
- name: CALLS
dtype: int64
- name: CLASS
dtype: int64
- name: CRIMINAL
dtype: int64
- name: DISABLED
dtype: int64
- name: LGBTI
dtype: int64
- name: POLITICS
dtype: int64
- name: RACISM
dtype: int64
- name: WOMEN
dtype: int64
- name: text
dtype: string
- name: tweet_id
dtype: string
- name: user_id
dtype: string
splits:
- name: train
num_bytes: 4141280942
num_examples: 537201
download_size: 1984419392
dataset_size: 4141280942
---
# Dataset Card for "articles_and_comments"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 1,193 | [
[
-0.045562744140625,
-0.0308685302734375,
0.0301361083984375,
0.02044677734375,
-0.0233306884765625,
-0.0051116943359375,
0.011383056640625,
-0.018463134765625,
0.062255859375,
0.03717041015625,
-0.054351806640625,
-0.049652099609375,
-0.039764404296875,
0.00... |
nielsr/coco-panoptic-val2017 | 2022-12-25T17:26:14.000Z | [
"region:us"
] | nielsr | null | null | 0 | 25 | 2022-12-25T16:56:03 | ---
dataset_info:
features:
- name: label
dtype: image
- name: segments_info
list:
- name: id
dtype: int64
- name: category_id
dtype: int64
- name: iscrowd
dtype: int64
- name: bbox
sequence: int64
- name: area
dtype: int64
- name: image_id
dtype: int64
- name: image
dtype: image
splits:
- name: train
num_bytes: 850795822.0
num_examples: 5000
download_size: 849210800
dataset_size: 850795822.0
---
# Dataset Card for "coco-panoptic-val2017"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 667 | [
[
-0.0576171875,
-0.02911376953125,
0.0003771781921386719,
0.037994384765625,
-0.0171661376953125,
-0.0005288124084472656,
0.02093505859375,
-0.0260009765625,
0.06597900390625,
0.05267333984375,
-0.06085205078125,
-0.054595947265625,
-0.043365478515625,
-0.013... |
irds/gov_trec-web-2002 | 2023-01-05T03:04:33.000Z | [
"task_categories:text-retrieval",
"source_datasets:irds/gov",
"region:us"
] | irds | null | null | 0 | 25 | 2023-01-05T03:04:27 | ---
pretty_name: '`gov/trec-web-2002`'
viewer: false
source_datasets: ['irds/gov']
task_categories:
- text-retrieval
---
# Dataset Card for `gov/trec-web-2002`
The `gov/trec-web-2002` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package.
For more information about the dataset, see the [documentation](https://ir-datasets.com/gov#gov/trec-web-2002).
# Data
This dataset provides:
- `queries` (i.e., topics); count=50
- `qrels`: (relevance assessments); count=56,650
- For `docs`, use [`irds/gov`](https://huggingface.co/datasets/irds/gov)
## Usage
```python
from datasets import load_dataset
queries = load_dataset('irds/gov_trec-web-2002', 'queries')
for record in queries:
record # {'query_id': ..., 'title': ..., 'description': ..., 'narrative': ...}
qrels = load_dataset('irds/gov_trec-web-2002', 'qrels')
for record in qrels:
record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...}
```
Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the
data in 🤗 Dataset format.
## Citation Information
```
@inproceedings{Craswell2002TrecWeb,
title={Overview of the TREC-2002 Web Track},
author={Nick Craswell and David Hawking},
booktitle={TREC},
year={2002}
}
```
| 1,314 | [
[
-0.02557373046875,
-0.02276611328125,
0.0113983154296875,
-0.00766754150390625,
-0.017730712890625,
-0.005218505859375,
-0.004180908203125,
0.007160186767578125,
0.01812744140625,
0.03045654296875,
-0.04150390625,
-0.06298828125,
-0.0235748291015625,
0.01698... |
qanastek/frenchmedmcqa | 2023-06-08T12:39:22.000Z | [
"task_categories:question-answering",
"task_categories:multiple-choice",
"task_ids:multiple-choice-qa",
"task_ids:open-domain-qa",
"annotations_creators:no-annotation",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:1k<n<10k",
"source_datasets:original",
"lan... | qanastek | FrenchMedMCQA | @unpublished{labrak:hal-03824241,
TITLE = {{FrenchMedMCQA: A French Multiple-Choice Question Answering Dataset for Medical domain}},
AUTHOR = {Labrak, Yanis and Bazoge, Adrien and Dufour, Richard and Daille, Béatrice and Gourraud, Pierre-Antoine and Morin, Emmanuel and Rouvier, Mickael},
URL = {https://hal.archives-ouvertes.fr/hal-03824241},
NOTE = {working paper or preprint},
YEAR = {2022},
MONTH = Oct,
PDF = {https://hal.archives-ouvertes.fr/hal-03824241/file/LOUHI_2022___QA-3.pdf},
HAL_ID = {hal-03824241},
HAL_VERSION = {v1},
} | 2 | 25 | 2023-01-08T20:22:47 | ---
annotations_creators:
- no-annotation
language_creators:
- expert-generated
language:
- fr
license:
- apache-2.0
multilinguality:
- monolingual
size_categories:
- 1k<n<10k
source_datasets:
- original
task_categories:
- question-answering
- multiple-choice
task_ids:
- multiple-choice-qa
- open-domain-qa
paperswithcode_id: frenchmedmcqa
pretty_name: FrenchMedMCQA
---
# Dataset Card for FrenchMedMCQA : A French Multiple-Choice Question Answering Corpus for Medical domain
## Table of Contents
- [Dataset Card for FrenchMedMCQA : A French Multiple-Choice Question Answering Corpus for Medical domain](#dataset-card-for-frenchmedmcqa--a-french-multiple-choice-question-answering-corpus-for-medical-domain)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Source Data](#source-data)
- [Initial Data Collection and Normalization](#initial-data-collection-and-normalization)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contact](#contact)
## Dataset Description
- **Homepage:** https://deft2023.univ-avignon.fr/
- **Repository:** https://deft2023.univ-avignon.fr/
- **Paper:** [FrenchMedMCQA: A French Multiple-Choice Question Answering Dataset for Medical domain](https://hal.science/hal-03824241/document)
- **Leaderboard:** Coming soon
- **Point of Contact:** [Yanis LABRAK](mailto:yanis.labrak@univ-avignon.fr)
### Dataset Summary
This paper introduces FrenchMedMCQA, the first publicly available Multiple-Choice Question Answering (MCQA) dataset in French for medical domain. It is composed of 3,105 questions taken from real exams of the French medical specialization diploma in pharmacy, mixing single and multiple answers.
Each instance of the dataset contains an identifier, a question, five possible answers and their manual correction(s).
We also propose first baseline models to automatically process this MCQA task in order to report on the current performances and to highlight the difficulty of the task. A detailed analysis of the results showed that it is necessary to have representations adapted to the medical domain or to the MCQA task: in our case, English specialized models yielded better results than generic French ones, even though FrenchMedMCQA is in French. Corpus, models and tools are available online.
### Supported Tasks and Leaderboards
Multiple-Choice Question Answering (MCQA)
### Languages
The questions and answers are available in French.
## Dataset Structure
### Data Instances
```json
{
"id": "1863462668476003678",
"question": "Parmi les propositions suivantes, laquelle (lesquelles) est (sont) exacte(s) ? Les chylomicrons plasmatiques :",
"answers": {
"a": "Sont plus riches en cholestérol estérifié qu'en triglycérides",
"b": "Sont synthétisés par le foie",
"c": "Contiennent de l'apolipoprotéine B48",
"d": "Contiennent de l'apolipoprotéine E",
"e": "Sont transformés par action de la lipoprotéine lipase"
},
"correct_answers": [
"c",
"d",
"e"
],
"subject_name": "pharmacie",
"type": "multiple"
}
```
### Data Fields
- `id` : a string question identifier for each example
- `question` : question text (a string)
- `answer_a` : Option A
- `answer_b` : Option B
- `answer_c` : Option C
- `answer_d` : Option D
- `answer_e` : Option E
- `correct_answers` : Correct option(s), e.g., C, D and E in the instance above
- `choice_type` ({"single", "multiple"}): Question choice type.
- "single": Single-choice question, where each choice contains a single option.
- "multiple": Multi-choice question, where each choice contains a combination of multiple options.
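Because questions mix single and multiple answers, predictions are naturally compared against the `correct_answers` list as sets. The metric helpers below are an illustrative sketch, not the official evaluation script:

```python
def exact_match(gold, pred):
    # 1 if the predicted set of options equals the gold set, else 0.
    return int(set(gold) == set(pred))

def hamming_score(gold, pred, options="abcde"):
    # Fraction of the five options labelled correctly — a softer metric
    # for multi-answer questions.
    g, p = set(gold), set(pred)
    return sum((o in g) == (o in p) for o in options) / len(options)

# The instance above has correct_answers ["c", "d", "e"]:
print(exact_match(["c", "d", "e"], ["c", "d"]))    # 0
print(hamming_score(["c", "d", "e"], ["c", "d"]))  # 0.8
```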
### Data Splits
| # Answers | Training | Validation | Test | Total |
|:---------:|:--------:|:----------:|:----:|:-----:|
| 1 | 595 | 164 | 321 | 1,080 |
| 2 | 528 | 45 | 97 | 670 |
| 3 | 718 | 71 | 141 | 930 |
| 4 | 296 | 30 | 56 | 382 |
| 5 | 34 | 2 | 7 | 43 |
| Total | 2,171 | 312 | 622 | 3,105 |
## Dataset Creation
### Source Data
#### Initial Data Collection and Normalization
The questions and their associated candidate answer(s) were collected from real French pharmacy exams on the remede website. Questions and answers were manually created by medical experts and used during examinations. The dataset is composed of 2,025 questions with multiple answers and 1,080 with a single one, for a total of 3,105 questions. Each instance of the dataset contains an identifier, a question, five options (labeled from A to E) and correct answer(s). The average question length is 14.17 tokens and the average answer length is 6.44 tokens. The vocabulary size is of 13k words, of which 3.8k are estimated medical domain-specific words (i.e. a word related to the medical field). We find an average of 2.49 medical domain-specific words in each question (17 % of the words) and 2 in each answer (36 % of the words). On average, a medical domain-specific word is present in 2 questions and in 8 answers.
### Personal and Sensitive Information
The corpora is free of personal or sensitive information.
## Additional Information
### Dataset Curators
The dataset was created by Labrak Yanis and Bazoge Adrien and Dufour Richard and Daille Béatrice and Gourraud Pierre-Antoine and Morin Emmanuel and Rouvier Mickael.
### Licensing Information
Apache 2.0
### Citation Information
If you find this useful in your research, please consider citing the dataset paper :
```latex
@inproceedings{labrak-etal-2022-frenchmedmcqa,
title = "{F}rench{M}ed{MCQA}: A {F}rench Multiple-Choice Question Answering Dataset for Medical domain",
author = "Labrak, Yanis and
Bazoge, Adrien and
Dufour, Richard and
Daille, Beatrice and
Gourraud, Pierre-Antoine and
Morin, Emmanuel and
Rouvier, Mickael",
booktitle = "Proceedings of the 13th International Workshop on Health Text Mining and Information Analysis (LOUHI)",
month = dec,
year = "2022",
address = "Abu Dhabi, United Arab Emirates (Hybrid)",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2022.louhi-1.5",
pages = "41--46",
abstract = "This paper introduces FrenchMedMCQA, the first publicly available Multiple-Choice Question Answering (MCQA) dataset in French for medical domain. It is composed of 3,105 questions taken from real exams of the French medical specialization diploma in pharmacy, mixing single and multiple answers. Each instance of the dataset contains an identifier, a question, five possible answers and their manual correction(s). We also propose first baseline models to automatically process this MCQA task in order to report on the current performances and to highlight the difficulty of the task. A detailed analysis of the results showed that it is necessary to have representations adapted to the medical domain or to the MCQA task: in our case, English specialized models yielded better results than generic French ones, even though FrenchMedMCQA is in French. Corpus, models and tools are available online.",
}
```
### Contact
Thanks to contact [Yanis LABRAK](https://github.com/qanastek) for more information about this dataset.
| 7,913 | [
[
-0.037109375,
-0.060577392578125,
0.0408935546875,
-0.011260986328125,
0.00005733966827392578,
-0.0004820823669433594,
0.00795745849609375,
-0.00525665283203125,
0.03680419921875,
0.0494384765625,
-0.054962158203125,
-0.054840087890625,
-0.04345703125,
0.028... |
IlyaGusev/rulm | 2023-03-20T23:53:53.000Z | [
"task_categories:text-generation",
"size_categories:10M<n<100M",
"language:ru",
"region:us"
] | IlyaGusev | null | null | 12 | 25 | 2023-01-25T18:14:38 | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 78609111353
num_examples: 14811026
- name: test
num_bytes: 397130292
num_examples: 74794
- name: validation
num_bytes: 395354867
num_examples: 74691
download_size: 24170140196
dataset_size: 79401596512
task_categories:
- text-generation
language:
- ru
size_categories:
- 10M<n<100M
---
# Dataset for training Russian language models
Overall: 75G
Scripts: https://github.com/IlyaGusev/rulm/tree/master/data_processing
| Website | Char count (M) | Word count (M) |
|-----------------|---------------|---------------|
| pikabu | 14938 | 2161 |
| lenta | 1008 | 135 |
| stihi | 2994 | 393 |
| stackoverflow | 1073 | 228 |
| habr | 5112 | 753 |
| taiga_fontanka | 419 | 55 |
| librusec | 10149 | 1573 |
| buriy | 2646 | 352 |
| ods_tass | 1908 | 255 |
| wiki | 3473 | 469 |
| math | 987 | 177 |
| 1,234 | [
[
-0.00728607177734375,
-0.0285797119140625,
0.01556396484375,
0.00736236572265625,
-0.0164031982421875,
0.006450653076171875,
-0.00833892822265625,
0.0068206787109375,
-0.002155303955078125,
0.0208587646484375,
-0.038726806640625,
-0.059326171875,
-0.044219970703... |
mrm8488/CHISTES_spanish_jokes | 2023-02-17T10:26:57.000Z | [
"task_categories:text-classification",
"task_categories:text-generation",
"language:es",
"region:us"
] | mrm8488 | null | null | 1 | 25 | 2023-02-15T07:19:30 | ---
dataset_info:
features:
- name: id
dtype: int64
- name: text
dtype: string
- name: keywords
dtype: string
- name: funny
dtype: int64
- name: category
dtype: string
splits:
- name: train
num_bytes: 814817
num_examples: 2419
download_size: 504749
dataset_size: 814817
task_categories:
- text-classification
- text-generation
language:
- es
pretty_name: chistes
---
# Dataset Card for "CHISTES_spanish_jokes"
Dataset from [Workshop for NLP introduction with Spanish jokes](https://github.com/liopic/chistes-nlp)
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 694 | [
[
-0.0330810546875,
-0.0224761962890625,
0.01546478271484375,
0.044891357421875,
-0.035797119140625,
-0.0018529891967773438,
-0.00026607513427734375,
-0.02581787109375,
0.0606689453125,
0.038787841796875,
-0.06640625,
-0.059173583984375,
-0.031890869140625,
0.... |
lansinuote/diffusion.2.textual_inversion | 2023-02-24T06:16:59.000Z | [
"region:us"
] | lansinuote | null | null | 0 | 25 | 2023-02-24T05:29:07 | ---
dataset_info:
features:
- name: image
dtype: image
splits:
- name: train
num_bytes: 1740639.0
num_examples: 6
download_size: 0
dataset_size: 1740639.0
---
# Dataset Card for "diffusion.2.textual_inversion"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 366 | [
[
-0.0291900634765625,
-0.0667724609375,
0.032012939453125,
0.031585693359375,
-0.00567626953125,
-0.01430511474609375,
0.0036029815673828125,
0.0025482177734375,
0.035186767578125,
0.037261962890625,
-0.0455322265625,
-0.051025390625,
-0.058441162109375,
-0.0... |
urialon/gov_report_test | 2023-02-28T15:42:26.000Z | [
"region:us"
] | urialon | null | null | 0 | 25 | 2023-02-28T15:42:18 | Entry not found | 15 | [
[
-0.0213775634765625,
-0.0149993896484375,
0.05718994140625,
0.02880859375,
-0.0350341796875,
0.046478271484375,
0.052490234375,
0.005069732666015625,
0.051361083984375,
0.01702880859375,
-0.0521240234375,
-0.01494598388671875,
-0.06036376953125,
0.0379028320... |
instruction-tuning-sd/low-level-image-proc | 2023-05-11T15:22:12.000Z | [
"task_categories:image-to-image",
"size_categories:1K<n<10K",
"language:en",
"region:us"
] | instruction-tuning-sd | null | null | 4 | 25 | 2023-03-04T05:45:03 | ---
dataset_info:
features:
- name: instruction
dtype: string
- name: input_image
dtype: image
- name: ground_truth_image
dtype: image
splits:
- name: train
num_bytes: 3463030772.619
num_examples: 1917
download_size: 3528678684
dataset_size: 3463030772.619
task_categories:
- image-to-image
language:
- en
size_categories:
- 1K<n<10K
---
# Instruction-prompted low-level image processing dataset
To construct this dataset, we took different numbers of samples from the following datasets for each task and constructed
a single dataset with prompts added like so:
| **Task** | **Prompt** | **Dataset** | **Number of samples** |
|---|---|---|---|
| Deblurring | “deblur the blurry image” | REDS (train_blur and<br> train_sharp) | 1200 |
| Deraining | “derain the image” | Rain13k | 686 |
| Denoising | “denoise the noisy image” | SIDD | 8 |
| Low-light <br>image enhancement | "enhance the low-light image” | LOL | 23 |
To learn more about how this sampling was performed, refer to [this notebook](https://huggingface.co/datasets/instruction-tuning-sd/low-level-image-proc/blob/main/data_preparation/sample_dataset.ipynb). This notebook outputs a CSV file which was then used for generating the final version of the dataset ([notebook](https://huggingface.co/datasets/instruction-tuning-sd/low-level-image-proc/blob/main/data_preparation/final_data_preparation.ipynb)).
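The construction described above amounts to mapping each task to its fixed prompt. As a sketch — the task keys and file names below are hypothetical, while the record fields follow the card's features:

```python
# Per-task prompts from the table above.
TASK_PROMPTS = {
    "deblurring": "deblur the blurry image",
    "deraining": "derain the image",
    "denoising": "denoise the noisy image",
    "low_light_enhancement": "enhance the low-light image",
}

def make_record(task, input_image, ground_truth_image):
    # Build one dataset record: the instruction text plus the image pair.
    return {
        "instruction": TASK_PROMPTS[task],
        "input_image": input_image,
        "ground_truth_image": ground_truth_image,
    }

rec = make_record("deraining", "rainy_001.png", "clean_001.png")
print(rec["instruction"])  # derain the image
```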
## Known limitations and biases
Since this dataset was derived from various datasets, as mentioned above, it inherits their limitations and biases as well.
## Licensing
Since this dataset was derived from various datasets, as mentioned above, it inherits the licensing policies of those datasets. | 1,709 | [
[
-0.04205322265625,
-0.052276611328125,
0.0171966552734375,
0.01425933837890625,
-0.0234222412109375,
-0.0021800994873046875,
0.007476806640625,
-0.02984619140625,
-0.0036334991455078125,
0.04693603515625,
-0.06500244140625,
-0.053009033203125,
-0.035980224609375... |
MichiganNLP/svo_probes | 2023-06-18T05:28:20.000Z | [
"size_categories:10K<n<100K",
"language:en",
"license:cc-by-4.0",
"region:us"
] | MichiganNLP | null | null | 1 | 25 | 2023-03-22T20:57:44 | ---
license: cc-by-4.0
language:
- en
pretty_name: SVO-Probes
size_categories:
- 10K<n<100K
---
# SVO-Probes
This dataset comes from https://github.com/deepmind/svo_probes.
## Usage
```python
from datasets import load_dataset
# Note that the following line says "train" split, but there are actually no splits in this dataset.
dataset = load_dataset("MichiganNLP/svo_probes", split="train")
# To see an example, access the first element of the dataset with `dataset[0]`.
```
| 481 | [
[
-0.0220489501953125,
-0.0028839111328125,
0.0202178955078125,
-0.00185394287109375,
-0.030975341796875,
0.012542724609375,
0.021484375,
0.004764556884765625,
0.038360595703125,
0.04437255859375,
-0.076171875,
-0.0254669189453125,
-0.0243072509765625,
0.01055... |
sklearn-docs/digits | 2023-04-06T19:05:28.000Z | [
"size_categories:1K<n<10K",
"license:cc0-1.0",
"region:us"
] | sklearn-docs | null | null | 0 | 25 | 2023-04-01T14:09:07 | ---
license: cc0-1.0
size_categories:
- 1K<n<10K
---
# Dataset Card for digits dataset
Optical recognition of handwritten digits dataset
## Dataset Description
- **Homepage:** https://scikit-learn.org/stable/datasets/toy_dataset.html#digits-dataset
## Note - How to load this dataset directly with the datasets library
```
from datasets import load_dataset
dataset = load_dataset("sklearn-docs/digits",header=None)
```
### Dataset Summary
This is a copy of the test set of the UCI ML hand-written digits datasets https://archive.ics.uci.edu/ml/datasets/Optical+Recognition+of+Handwritten+Digits
The data set contains images of hand-written digits: 10 classes where each class refers to a digit.
Preprocessing programs made available by NIST were used to extract normalized bitmaps of handwritten digits from a preprinted form. From a total of 43 people, 30 contributed to the training set and different 13 to the test set. 32x32 bitmaps are divided into nonoverlapping blocks of 4x4 and the number of on pixels are counted in each block. This generates an input matrix of 8x8 where each element is an integer in the range 0..16. This reduces dimensionality and gives invariance to small distortions.
For info on NIST preprocessing routines, see M. D. Garris, J. L. Blue, G. T. Candela, D. L. Dimmick, J. Geist, P. J. Grother, S. A. Janet, and C. L. Wilson, NIST Form-Based Handprint Recognition System, NISTIR 5469, 1994.
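The 4x4 block-counting step described above can be sketched in a few lines of Python. This is an illustrative reimplementation, not the NIST preprocessing code:

```python
def downsample_bitmap(bitmap):
    # bitmap: 32x32 rows of 0/1 pixels -> 8x8 matrix counting the "on"
    # pixels in each non-overlapping 4x4 block (values in 0..16).
    return [[sum(bitmap[4 * r + i][4 * c + j]
                 for i in range(4) for j in range(4))
             for c in range(8)]
            for r in range(8)]

full_on = [[1] * 32 for _ in range(32)]
out = downsample_bitmap(full_on)
print(len(out), len(out[0]), out[0][0])  # 8 8 16
```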
### Data Instances
- Number of Instances: 1797
- Number of Attributes: 64
- Attribute Information: 8x8 image of integer pixels in the range 0..16.
- Missing Attribute Values: None
- Creator: E. Alpaydin (alpaydin ‘@’ boun.edu.tr)
- Date: July 1998
### Citation Information
References
C. Kaynak (1995) Methods of Combining Multiple Classifiers and Their Applications to Handwritten Digit Recognition, MSc Thesis, Institute of Graduate Studies in Science and Engineering, Bogazici University.
E. Alpaydin, C. Kaynak (1998) Cascading Classifiers, Kybernetika.
Ken Tang and Ponnuthurai N. Suganthan and Xi Yao and A. Kai Qin. Linear dimensionalityreduction using relevance weighted LDA. School of Electrical and Electronic Engineering Nanyang Technological University. 2005.
Claudio Gentile. A New Approximate Maximal Margin Classification Algorithm. NIPS. 2000.
| 2,290 | [
[
-0.02618408203125,
-0.002872467041015625,
0.02734375,
-0.0025959014892578125,
-0.03289794921875,
0.007076263427734375,
0.00464630126953125,
-0.035552978515625,
0.01097869873046875,
0.033416748046875,
-0.03533935546875,
-0.0311431884765625,
-0.0367431640625,
... |
Akajackson/donut_synthdog_rus | 2023-04-01T18:52:22.000Z | [
"region:us"
] | Akajackson | null | null | 1 | 25 | 2023-04-01T17:46:40 | ---
dataset_info:
features:
- name: image
dtype: image
- name: ground_truth
dtype: string
splits:
- name: train
num_bytes: 8522173356.748
num_examples: 96204
- name: validation
num_bytes: 1062440747.78
num_examples: 11820
- name: test
num_bytes: 1107229186.768
num_examples: 11976
download_size: 10700638276
dataset_size: 10691843291.296
---
# Dataset Card for "donut_rus"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 555 | [
[
-0.012908935546875,
-0.00992584228515625,
0.0200042724609375,
0.003688812255859375,
-0.0007915496826171875,
0.0157012939453125,
0.006710052490234375,
-0.002285003662109375,
0.05902099609375,
0.0279541015625,
-0.057373046875,
-0.05682373046875,
-0.0443115234375,
... |
InstaDeepAI/multi_species_genomes | 2023-11-01T14:07:25.000Z | [
"DNA",
"Genomics",
"Nucleotide",
"region:us"
] | InstaDeepAI | Dataset made of diverse genomes available on NCBI and coming from ~850 different species.
Test and validation are made of 50 species each. The rest of the genomes are used for training.
Default configuration "6kbp" yields chunks of 6.2kbp (100bp overlap on each side). Similarly,
the "12kbp" configuration yields chunks of 12.2kbp. The chunks of DNA are cleaned and processed so that
they can only contain the letters A, T, C, G and N. | @article{o2016reference,
title={Reference sequence (RefSeq) database at NCBI: current status, taxonomic expansion, and functional annotation},
author={O'Leary, Nuala A and Wright, Mathew W and Brister, J Rodney and Ciufo, Stacy and Haddad, Diana and McVeigh, Rich and Rajput, Bhanu and Robbertse, Barbara and Smith-White, Brian and Ako-Adjei, Danso and others},
journal={Nucleic acids research},
volume={44},
number={D1},
pages={D733--D745},
year={2016},
publisher={Oxford University Press}
} | 7 | 25 | 2023-04-06T19:05:46 | ---
tags:
- DNA
- Genomics
- Nucleotide
pretty_name: Multi-species genomes
---
# Dataset Card for the Multi-species genome
## Dataset Description
- **Repository:** [Nucleotide Transformer](https://github.com/instadeepai/nucleotide-transformer)
- **Paper:** [The Nucleotide Transformer: Building and Evaluating Robust Foundation Models for Human Genomics](https://www.biorxiv.org/content/10.1101/2023.01.11.523679v1)
### Dataset Summary
The Multi-species dataset was constructed by parsing the genomes available on [NCBI](https://www.ncbi.nlm.nih.gov/), before arbitrarily selecting only one species from each genus. Plant and virus genomes were not taken into account, as their regulatory elements differ from those of interest in the paper's tasks. The resulting collection of genomes was downsampled to a total of 850 species, into which several genomes that are heavily studied in the literature were incorporated. The collection represents 174B nucleotides, resulting in roughly 29B tokens. The distribution of each genomic class in the dataset is displayed below:
```
| Class | Number of species | Number of nucleotides (B) |
| ---------------------| -------------------| --------------------------|
| Bacteria | 667 | 17.1 |
| Fungi | 46 | 2.3 |
| Invertebrate | 39 | 20.8 |
| Protozoa | 10 | 0.5 |
| Mammalian Vertebrate | 31 | 69.8 |
| Other Vertebrate | 57 | 63.4 |
```
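The "one species per genus" selection step can be sketched in plain Python. This is an illustrative sketch only: it assumes binomial species names whose first token is the genus, and the function name is ours, not part of the project's actual parsing pipeline.

```python
import random
from collections import defaultdict

def one_species_per_genus(species_names, rng=random):
    """Arbitrarily keep a single species from each genus, assuming
    binomial names such as 'Anabaenopsis circularis' whose first
    token is the genus."""
    by_genus = defaultdict(list)
    for name in species_names:
        by_genus[name.split()[0]].append(name)
    # One arbitrary representative per genus, in a stable order.
    return sorted(rng.choice(members) for members in by_genus.values())
```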
### Supported Tasks and Leaderboards
This dataset has been used as a pre-training corpus for the Nucleotide Transformer models. Depending on the configuration used, each sequence is 6,200 or 12,200 base pairs long. If the dataset is iterated without being shuffled, the first 100 nucleotides of a sequence are the same as the last 100 base pairs of the previous sequence, and the last 100 nucleotides are the same as the first 100 base pairs of the next sequence. During training, this allows for randomly selecting a nucleotide among the first 200 nucleotides of the sequence and starting the tokenization from that nucleotide. That way, the whole chromosome is covered and the model sees different tokens for a given sequence at each epoch.
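One consistent reading of this chunk layout and of the training-time offset sampling can be sketched in plain Python. The helper names and the exact overlap arithmetic below are illustrative assumptions, not taken from the dataset tooling:

```python
import random

def make_chunks(genome: str, core: int = 6000, flank: int = 100) -> list:
    """Cut a genome into overlapping chunks of core + 2*flank bp.

    Consecutive chunks are shifted by `core` bp, so neighbouring
    chunks share their 2*flank-bp flanking regions (an illustrative
    reading of the "100 bp overlap on each side" layout)."""
    size = core + 2 * flank
    return [genome[start:start + size]
            for start in range(0, len(genome) - size + 1, core)]

def sample_training_window(chunk: str, core: int = 6000,
                           flank: int = 100) -> str:
    """Start tokenization at a random offset among the first 2*flank
    positions, so the model sees different tokens for the same chunk
    at each epoch while the whole chromosome stays covered."""
    offset = random.randrange(2 * flank)
    return chunk[offset:offset + core]
```

With the defaults this produces 6,200 bp chunks stepped by 6,000 bp; under the same assumptions the `12kbp` configuration corresponds to `core=12000, flank=100`.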
### Languages
DNA
## Dataset Structure
[N/A]
### Data Instances
For each instance, there is a string representing the sequence, a string giving the description of the sequence, two integers representing the indices of its first and last nucleotides, and a string with the URL of the FASTA file the sequence was taken from. An instance is shown below:
```python
{'sequence': 'AAACTACCACTGGCTAAATTTCGACCATCTGGGCTAATAGCAACTGACCGCACCCAATATTTATGTCCTTTAAGTGTGCGAATTAGCTTTCCTGTGCCTAAATTCCAAACTTTGAGAGTGTTGTCATCGCTACCACTCACCAAAATTTTCCCATTAGGACTAATTGTTAATGCTTGAATGGAGTCAGTATGTCCTGTTAATGTGTAGACTATTTTACCTGTTGCCAAATTCCAGGCTTTAATAGTTTGATCATCACTCCCGCTAACCAAAGTTTTGCCATTGGGACTGATAGCCACAGCATTAACTTTTTGCGAATGTCCACTCAGGGTTAGTATTTCTTTTCCTGTGGTCAGATTCCACATTTTAATTATGCGTTCCCCTTCGCCACTACTAGCAATTGTCTGCCCATCGGGACTAATGGCGACAGAGACAACAGATTTTGCCCCACCTTTGAGGGTGTTAGCTAAGGAAATATTTTTAACTGGAACATTGGGTGACTGACCAAAAACAACTTCACCCTGAGTAGGACTGTAATTTCCTGGCTTTAGTCTCGATAACAAACTGGTTTGAATTTGGTGATATTTTTGATACCAAGTATCACTAAAACCAAATAACAAAATGAAAGCAGCGCCTAAAACTAAACTTTTGACAAAAGCATATTTAAAGGAGAACTTTGCACTCGGTTGAGTTACGGTGAATTTTCCTGATGATTGTCCGGCGGCTGGTAAGGCGCGTGGGAGTGATGGAATCAAATCTTTAATCACTTCATCGGCTGACTGGTAGCGTTGACTTAAGTCTTTTTGCAACAGCTTCGTCATCACCCCTTCCAATTCTGGCGACAAAGGACTACGCAAATATTCCCGCCAACTGTTCGCCCAGCCATAGCCATGTTCCATCCACAATTGAAAAGGGGATGTTCCTGTTAAGAGATGAAAACAGGTAGCCCCCAAACTGAACAAATCACTAGCTGGGTAAGCTTTACCGTCTCTGATTTGTTCCAGTGGAGAATAACCATGCGAACCAATGGATGTACCATTTTTATTCTTGACTTTTTCGGTTAATTGCTTAGAAGAACCAAAATCAATCAAGCTAAGTCGCCCATCATAACGACAGCGAATTAAATTTTCTGGTTTAATGTCTCGGTGAATCACACCGCGATCGTGAATGAATTTGAGTACAGGCAGTAAATCAAGTAAAATTGCTTGAATTTCATTCGCTTTATAGACTTTGCGCTGTTGTAATTCTTTTAACAAGTTCTGCCCATTAATAAACTGTTGTACCAAATAAAGGCAGTTATCTTGTTCAAAGTAAGCAATCAGTGTAGGAATTTGCGGATGTTCGCCGAGTTCTTGCAGTCGCTTGGCTTCTTCTGCAAATAACTCCATTGCTTTTTTCTGCGACCAAGTTCCTTGAAATTTCGGTGCTAATTGCTTAATTACACACAGTTCATTGAGTTTATCGGTATCTTCAGATAAATAAGTTCTGCCAAATCCCCCCTCATCGGAAAGCACCCGAATCACTCGAAAGCGATTTCTTAATAGTGGCACCAAGGGGGTGCTACAAGTTTGGCATGACTGCTTTCCTTTGGGATTTAGGGGATTTGGACAATCGGGATTTAAGCAGCAGATCATTATCTGACAGGCGCAACTGCATAAAAATTTTTACTAAATTAACCCCGATATTTCCCTAGATGATGATTGACTCTCACGTATTGATGGTAGATCCCGCTGGTAGTGGGGAGTGGGGAATCAATTATATAGTCAATTTTGGTAAATGCTCATAAGTTTTCTTCAATGCAGGAAAACTACGAGAGTCATCAGCTGAATTTTATCGATTATAGCAGCAGGCAAAAGTAGCAGACAGGTTAAGAGTGTCATTAGTCAAGACAAATGACTCATGACTAATGACTCATGACTAATAACTAAGGCTTTTGGGTGGCGATCGCTAATTTTGCCCCCTGGACTTGTCTGACTTGATCCATCACTG
CCACTACTTTACCGTGGGTGACTGTTGCATCAGCATTCACAATTACTAATGCTTCTTGGTTATCGCCTACCAAGGTACGCAATTGTCCGGCTAAACCGTCAACAGTGCTTGGTTGACGGTTAACACTTACTATTCCATCTTTATCTACTGTGACGGTAATTTTGGCTGGAACTTGCTGCTGTTTGGCTGTCGCCGCTTTGGGTAAGTTGACGGGTAAACCTTCTGAGCGAGTTAAAAATAACGTTGACATGATAAAAAATGTCAAAATCGCAAATATCACATCAATCATTGGCACGATGTTGATTTGCGGTGGTAAATCTGGCTCATCTTGTAGACGCATAGGTTCTGTCTCCTCGTTCAAAGCGGCGGCGATAGAGCAGTTCTAATTGTCCACCATATTCTTGTATTGCGGCAATCTGTCGTTGATATAACCCTCGAAAGGTATTAGCAAATAAAAGTATAAAAATAGCCACAATTAAACCTGAAGCTGTAGATACCAGCGCTTCACTAATACCTGCGGTAACTCCTGCGGTTTTTGTCCCGCCTACATCACCCAAGTTTAATGATGCAAAAGAAGCAATCAAACCTAATACAGTACCCAGTAGACCTAAAAGTGGTGCAAGACCAATAATTGTGTCAAACATATTTTGAAAACGTTTGAGAACTGGGATTTCGGCTTGCGCTTCACTTTCTAGTGCAAGCCGAAATTCTTCTGGGGTTGGTTCTTCTAATTGCAACGCCGCTAAAAAAATCCGTGTCATGGGCAAATCTGCATTCTTTTGCAATTTATCCAACGCGCCAACAACATTATCAAGGCGGTAAAGATTCAACACTTCTCTGACTATGCGGTTTTGCCGAGTATTGATGCGATACCAAAAGCGGACTCGCTCGATAATTAAAGCAATTCCCACCACACTAAACGCCAGCAGGGGCCACATGACTACGCCACCTGCTACAAACAACTCATACATGGGCAATATCTCTAGGAACTAAATGGACAACGTTACAGTTAGACTAGCAGTTTACGGTACTAAATGATATATCTTATCAATAAGGAGTAGACAAAATAAAAAGCTATGTCAAATTCGGTTGAGTTTTGATGACATAATTATTCATTCTTGTTCAAGGCTTGATTCGCTACAATCCTGATGATGAAAGTATTTGTGTAAGTATACAGTTGATGAAAGCTAACTCAGGAATTTTTTTCTTTATTGCTTGACTTTTGCGAGAGATGGTTTTGAACAGAGTAATTACTAATAAGAACTTGCAATAAATTTAAACAGAACAGTAGTTTGTAGCTTTGCTTGAGAAGCGATCGCCCGACGTTGAGAGTTAAAGTATATTTTGCGTACTAACTTACCCAACGCCCAAAAAATTACATCATTTGAATATCGTCAATTTGTACTCTTAATCATCTATGGCTAAACTATTTGACTCAATCACAGAAGAACTGCAAGAGTTTATTGCAGCCCAAAACCTTTTCTTTGTAGGAACCGCGCCTCTGAGTGCTACAGGTCACGTTAATTTATCTCCCAAAGGTCTCGATTGCTTGCGGATTTTATCACCCCACAAAGTCGCCTATCTCGATCTCACAGGTAGCGGTAACGAAACTTCAGCCCATCTGCAAGAAAATGGTCGCATTACCTTCATGTTTTGCGCCTTCACTGAACCAGCGCGCATCTTGCGACTTTACGGTCAAGGACACGTAATTTTACCTAGCTATCCTGATTGGGATTCTGTATATTCAGTGTTTCCGCCGCTACCAGGAACTCGTCAAATTATCGTAGCTGATATTGAGATTGTGCAAAGTTCCTGTGGTTTCGGCGTTCCTCTTTACGAATACCAAGGTCAACGCCAAACACTAGTAAATTGGGCTGCTAAAAAAGGCGAACAGGGAGTCCGAGAATATCAACAACAAAAAAACAGCATCAGCATTGATGGTTTACCGACACCATTAGGCCAATTATCTGACGGTTAAAGCGGCGTTTCATATATTTTTA
GTTAATCTGAACCAAAAAATCTCAAATTTTTTGTCAATAGTCTCTAGTCCAAAGAAGCTTGATTTTTGACCATAGATTGTAGGCTTTTGACAAAAATAACCTTTATAGAGAAAATTTATCCTTGCTGACACTCTATAACTAAGTTTATAAAACATAGCGTCAAAAATCGATACATATCAGTTCTATTTTCTGCCTCTATTCCTAATTAAATTTGGTGTAAAGGAACTATTATGCGGTTTCCGTGTCTTGACGTAATGATTTGCAACGAATTATGATTCGAGTTTAGTCCGGATCAACCGAGACATCCTCGAAAATTGGTGCAAGTAAATTCAACTTTCGCTCTACATAATCACACGCATGAGATTACGCTTATTTCTGTTTAGCGTTGTCAGTATTGTCCTGCTTTCTTCTCCAGTAAGAGCATCTCGCTTAGAATCTTGGAGCTTTGACACCGCACAAAATCAACTGAATATTACTACTGTATCTGGTGTTAAACCAAGAGCATTTTTAATTCAAAATCCCACGCGGTTAGTTATCGATCTTCCTGGTACACAACTGAACACAAATACAGTTCGGAAAAACTTTGGTTCCACAGTACGTGAAATCCGTGTTGGTAAGGTTGACGATAACACAACAAGATTAGTAGTTGAATTAGCACCTGGATACACTGTAGACCCTAACAAGTTACTGCTGCAAGGTGATTCTTCCACTCATTGGATAGTGAAATTTCCATCGGTAGAACGGGTTCAAAATCCTGTTGATAATAATTTTTCTTTATCTAGTGAAGAGCAAATTCCGGTTTCTGTGAGTGATGTTTCTTTGTTTGCGGGAGTTGTACCGTTAGGTAAGGAAATACCACAATTGCGATCGCAGGTACAAGCCTTAGCTGCTCGTTATCGTTCCCTGGATGCAGGAATGTTCTTTTTAGATTTAGATACTGGTAACTATCTAGATTTAAATGGTGAGAAAGTCTTTCCTGCTGCTAGTACAATAAAGTTTCCCATTTTAGTAGCGTTATTTCAAGAAGTAGATGCAGGTAGAGTCAAACTGAATGAAACCTTAGTTATGCGGCGCGACTTAATAACTGGAGGTTCTGGAGAATTTCAATACAAGCGTGCAGGAAGTCGTTTTAGTCTGATAGAAACCGTGACTAAGATGATTACCATCAGCGACAACACAGCTACCAATATGGTAATTGACCGATTAGGTGGTAAAGCTAAGTTAAATCAGCGTTTTCGTGGTTGGGGTCTGCAAAACACCGTTGTGCGGAATTTACTCGGCGACTTTAAGGGAACGAATACAACTAGCGCCAAAGATTTAGTCAGGCTGTCTGCGTTGGTTGCAAAAAATCAATTATTGACTGATTCCAGCCGTAGCAAAGTTTTGGATATTATGCAGCGTGTTCACAACACCAAGTTATTACCTGCTGGTTTGGGTAAAGGTGCGGTAATTGCTCACAAAACCGGAACTCTAGGCATTGTACTAGGTGATGCCGGGATTATTCAAATGCCATCTGGTAAGCGCTACTTAGCCGGAATTTTTGTCAGAAGACCTTTTAATGATTTAAAAGCGCGAGATTTTATCAATCAAGTTTCTCGAATTGTTTACGGCTATTTAGACCAACCAAGAGTCGCCAGCAAGCCTTAATACTCCTGATGTAAAAAAGAAAAATTTTAATTGACGTAAGCCCCTGATATTCATTAATATCTAGGGGTTTTTGCATATCTATTTATAGCAGTGCTTAACGCACCCTATCTCTCAGTGCGTTACGGCTAATCCTTATTCTCTTAAACTAACAAATTCTTGCATAGCCGTAACACATTCTAATTCATATTGGCTTTGAAGGATATTGACTGTATTCCTGCCAAGTTGGCTACATATACCTAAGCCGCACTGCTAAATTATGAATGGGAAATAACTTGCGGGCTTGATAAACCAACTTTTACTACACTAAACATGCTAAAGCATTAACAACGGACGGATTTAGGTTAGTTGCTTAT
TTTGCTCACTCTTGTGAGAGATTGCTGCTGTTTTTATTGTAGCGATCGACATCAAACTTCTTTATCTCTAAAAGGACAAATATAACAGGAAGTCCTCATTGATTACTCCTATCCTCACCTCGTTCATCGCAAAATGTACGAGGGCTTTTTTTATTTGGCAGAATTTACCCCTATTACGCCAATGATAATTAAAGCTATCGAGAAAAGTTTGGTAAGAGACATTGATTCACGAAACCAAATTACCCCAATAGTAGCGATTACAGTTGTGCCTAAACCTGACCAAACAGCATACGCAATGCTGACTTCAATTTTTTTAAGAGCTAAAGTTAAAAAACTAAAACAAATTCCATAACAGATAAAAATTAAAACCGAGGGAATAGTTCTTGTAAACCCCTCAGACAATTTCATGGAAGTTGTACCAGCGACTTCAAATAAGATTGCTGCAATGAGATAAAGCCAACTATTTACCATGTTTATTGATTGATTATAAGGTGATGATGGGAATATGATTTTTCGACAAGCATAATGAGTCAAAATTCTATATTTAATCTATTAACTAATTCTGCTATTTTGACAACATTTATAGTTAGCTGATGAGATAGGCAAAAATCAAAATATTCATATTTCCGAATTAGTAAAGAAGTTGGTAATCTCTAAAGTTCAGTTTACCACACCAATATTATGGGGGTTTACCGTACTAATACTAAGGTTCGGAAATCATGATGTAATTGGTGATAAAAACCGAATTTACACTGTACTGGATTGTGAATACTATAAAAACAACGCAAATGATTTAAACCTAAATCAACTACACAAAATTAGAAATTAAACGAGGTGGAGACATGACATTAGTGCGTTGGAATCCTTGGCAAGAAATGAACACTCTCCAAAGACAAATCAACAATTTATTTGCAGACGAAATGCTCCCATCTACTTTACTTGAAAGAAGCCTTACAAAAGTTCCGGCGGCTGAATTACACGAATCTGAAGAAGCTATTCATCTCAAGCTAGAATTACCAGGAATTGAAGCCAAAGACCTAGATGTGCAAGTTACAGAAAAAGCTGTGTATATCAGCGGTGAACGGAAATCTGAAACTAAAACAGAAGGGAAAGGTGTAACCAAGAGTGAATTTCATTATGGGAAATTCCAACGTTTGATTCCTTTACCAACTCGCATTCAAAATACCAATGTTACTGCTGATTATAAAGATGGTATTTTGACTCTGACTTTGCCTAAAGCCGAAGAAGAAAAGAAAAAGGTTGTCAAGCTGAATCTTGAATCTATTGGCTAATATCAATTTTGGATTAGCGCTAAAATACCCGACTTCTTTAAGAAGTCGGGTATTTTGTTGTTCACTAATGATTTAAAATTGCTATAAGCTGCGATTTCTGCCTGTTGATTGTTGTCTGTCTACGGGAAAAACGTCAAAATCGAAAGTTGCAATTAGACGCTCATCAACGTATACCTGTATTTTATGCTTACCAGGAGGATCACCTGCGGCGATCGTCCAATAGTTTTCAATTACACCATCATTAGCTATAGTTTTGCGCCTCATTACCGACTCTGTACCGTCAGCGGAGACTGTGAAGTTTTCACCATCATCTGTAGCCCAAGTTTCTGGGGGTTTTGGTAAGCGTAGGACTTCTCGCCATGTAACTTCGCCTTGGTAGTCTTTGAGTTGAATTCGCCACCCATATTTACTACCTTCTTGTAGTGGGACTCTGAATGTGGGGATGAAGTTAACTTTACCTCTAGCATCGACTCTCGCTATGCCAAACTCAGCTTTGTCGATCGCTACCGACTTTTTAGTATTGTTTGCTTGAGAAATTGACCCTGATGATGCTATTTTTTCGTCGGAGATCGCTACTGTAGCATTGATTGGCTGAGACGCTACCAACCCGGAAACTAGCCAAGAAGAAGTTAGTACAACTATTGCAGTCCAAATTCTCATCAGCAAAATTTTTGGTCATTTACTAGTACTTATTCCCGCCTTCCCATT
GGCTTCCGGGTACAGTCCCGATAAATAGCCAAGTTGGCAGAATAAAAGTTGCAGAATTAATAGTCAGTTTATAGTTAAATCGGCAACACCAGATCAAGCCACTCAAACTACTTTACTCTCGGGCCAGTTGCCAGAACTGCGAAAACTATCATCGCAGGTTTTCGGTGTAGGTGCTAAATATGCGTTTATTCTTAACTATTTTGTGTTCAATACGGAATTTTTAATATGTAAGCAATTGCTGACAGTCGGCTATTTGATCAATTGTCATTTCCTAGAGTTTCATCCCCTTGAGGGGAAGGAGTTTGGGAAATGTCAAAAACTGTCAAATGCTTAATGCAAAGATTAACAGTTGTGCCTAAGTGCGATCGCACTTAGGCATGACAAAGCATCAAAAATTAGCATTGGAGAACCGATATTTTCCTATTACCTGACTGCTATATATTGATAGTGAGGCGTTTTTGAGCAGCAAACAGCATGGCAGATATTCCAAATTCCATCGCATCATACCGTGCCTTAGCACTGCAAGTTACCTGTCATGCTGTGAATCAAGCGAGCGATCGCCACGCTGTCCAAGAAATCATTCATCATACTATCAACCGCCTGGCGCAACAAATCGCCGCCAGTATTGCTTTTATTGGTTTTGACTGTCGTTTAATTGTTTTACCAGAATATTTTCTGACAGGTTTCCCGATGGGTGAACCTTTGGCTGTTTGGGGAGAAAAGGCTTGTATAGAAATGCACGGTGCCGAGTATGAAGCCCTCAGTAAAATTGCTCAAAAACATCAGATATTTTTAGCTGGTAACGCCTACGAACTCGACCCCAATTTTCCTGGCTTATACTTTCAAACTTGCTTTGTGATTGACCCGGCTGGTGCTATTGTCTTGCGGTATCGGCGGCTAAATTCGTTATTTGCACCCACACCTCATGATGTTTGGGATAAATATCTTGATTGTTACGGCCTAGAAGGGGTGTTTCCTGTAGCGAAAACTGCAATTGGCAATTTAGCCGCTTTAGCTTCCGAAGAAATTTTGTATCCAGAAGTAGCGCGGTGTTTAGCAATGCGTGGTGCAGAAATTTTTCTGCATTCCACTTCTGAAATTTATAGCAAAAACCTCACACCTAAAGATGCGGCGAAAATTTCTCGCGCTGTGGAAAATATGGCTTACGTTGTGTCTGCGAATACCGCAGGTCTAGCTAATAGTTCTATACCCAGCGCTTCTGTTGATGGTGGCTCAAAAATAGTTGACTATCGCGGTATCGTATTAGCAGAAACAGGTGCAGGCGAAAGTATGGCAGCTTTTGCAGAGATAGATTTAACTGCTTTAAGACGCGATCGCCGTCGTCCAGGGTTAAATAATTTACTGTCTCGCCAGCGATTTGAACTCTACGCCCAAAGCTACAGCCAGTCACAATTTTATCCAGCAAACACTATGCTAAATCAAGAATGCGATCGCCAACACTTCATCCAAACACAGCAACAAACCATAGAACGTCTATCTCAGTTAGGAGTGATTTAAAAGTCTAAAGTCTGAAATTAGATTCTTTTGACCATTGACTATTGACAAATGACAAATGACAAAACCAATCGAAGTCCGTAACCCGCGAACGGGAAAATATGATTATGTAATTATCCCACCGCCGCCGAAACTGCTGGCGCAGCAATGTAACCGAGCGCGAAGGGCGCAAGTGCGTTGGCAAAAACTGGGCGTAGAAGGGAGAGTTGCAGCTTTAAAAGAATGGAAGCAAGCAGTTTTGGCTGGACGCGAAAAGCTCACAGATGCTTTGGTCAATGATACGGGTAGATTATCTATATCAGTGATGGAAATCGACTCATTCCTTTCTAGCATCGATCGCTGGTGTGGATTAGCGCCAGATTTATTACAAGATTCGGCCAAAAATACATCAATTCCGTTCATCGCCTTACAACAAACATCAACGCCTTACCCTGTAGTTGGGGTAATTAGTCCTTGGAATTTCCCTCTGTTGCTGTCTACGATAG
ATACCATTCCCGCACTGTTGGCGGGTTGTGCTGTAGTTGTCAAACCCAGTGAAATTGCACCGCGTTTCATCGCCCCACTGATAGCTGCAATTAATCAAGTACCCGCCTTGCGCGATGTTTTCAGTTTTGTGGAAGGTGCGGGAGAAACTGGCGCGGCTTTGATGGAGAATGTAGATTTAGTTTGTTTTACCGGTAGTGTCGCTACTGGACGCAAAGTTGCAGAAGTCGCCGCACAAAGATTTATCCCCGCTTTTTTGGAATTGGGCGGGAAAGATCCGGCGATCGTGTTGGAATCTGCCGATTTAGAATTAGCCACATCAGCGATTTTATGGGGTTCCGTCGTTAACACCGGACAGTCTTGTTTATCAATTGAGCGTATTTACGTTGCCGAATCTATCTTTGAAAAGTTTTATCATCAGTTAGTAGCCAAAGCACATCGCCTACAACTAGCCCATCCCACCATTGAAAGTGGCGAAATCGGCCCCATTATTGCTGAAAGACAAGCTGGCATAATTAACGAGCATATCTCCGATGCAGTGCAAAAAGGTGCAGTAATTCATTGTGGCGGTAAAGTTGAAGAGTTAGGCGGTGGTTGGTGGTGTCATCCCACAGTGCTGACTCATGTTAACCATACAATGAAAGTCATGACCGAAGAGACTTTTGGCCCGATCATGCCAATCATGCCTTTTGCCACAGTAGAGGAAGCTGTTAACTTAGCCAACGATTCAATTTATGGACTGAGTGCGGCGGTGTTTGCGGAAACCGAAACTGAAGCGTTAACAGTTGCCCAGCAAATAGATGCAGGTGCTATCAGTATTAATGATGCCGCCCTCACCGCCATTATGCACGAAGGTGAAAAAAACGCTTTCAAATTATCCGGTTTAGGCGGTTCACGTATGGGTGCAGCCGCCATCAAACGATTTTTGCGGAAAAAAGCGTTTTTGATTAAAACCAACTCAAATCAAGACCCTTGGTGGTTTGAGCCTAAAGTGTAGTGCAATCTTCTCTCAGCGACCTCTGCGTCTCTGTAGTTCGTTAAAAACCGTATTAGATTCTGTTTGTTGGGTTTCGCTGTCGCTTCACCCAACCTACTTTCCTTAAACCCCTACTACAGATTCATTCACAGTTTCACTAGCCGCAACACCATTAGTCAAAATCGCTTGCCGAGTTTTCAGGTTAAATTTATAACCATGTGGCAAAATATGCAGCTTCGCACCACAAATTGCCAAAGGTTCATCCCGGAGAATTGTATCTGCGTTGTTATATGTAGATTCAGACTCATCCACAATGGTGACTGAACCTTCACCAATAATTTCGATTTGGTCATCAGTCACGGCGATCGCTGTATTCTCATCAATCCCAAATCCTAACACCGCAGGTTCATGAATTAAAGCTGTAATTAAACGCCCTAAGCGTCCCCGTTGTAAGAAATGTTGGTCAATCACCACCCCTGGGAGAAAACCCATACCAGGCCCCATTTCCACAATTTCCATCCGTGGTGTACTTTGAGAATCACCCTCAACAATCATTTTATCGGGCATCACAGCCGCACCCGCACTAGTACCTGCAATTACTGCACCTTCAGCATAGCGTTGGTGAATAGCCGCATCGATTTCGGTATCCTTGAGGATACTAGTAATTCGCGCTTGGTCTCCTCCAGTAAAAAATATCCCAGTCGCCTTAGCAATAGCTTCTAAAGCCGTAGAAGACCTAGCATCTTCACGAGTTTCTGTATCAATAATGCGAACGTGTTCTGCACCTAGCCGTTCAAAAACTCTAATATAATTTTCCCCCACTTCTCTAGGCAGTTCTGTGGCGGCCGTCATAATTACAATATTGGCTTTTGTACCCCCAGCCCGACGGACAAATTCTCGCAGAATCACACAATCTCCTTCTTTATCTTCTGCGCCACCAATAATTACCAACTGGCGTTTATGTGCAGTTTCTGTCATAATGCCCCCCGGATAACCGGATTAGAATTTAATTTAGATTAATTTCAA
TAAAACATGACAATTATCACAATCAAATCATCCATTTGATAGATTAATTTTTAATGGCAAAAGTTAAATTATATATAACTTTATGTATATATAAACTCTTGCCAAATTTAGCATTTTTAATAATTGGTAATTCATTTAGCAGAATTACCAATTACTTATACAGTAATAATTTATGTATAACTCTTCTCAAGTAATAGCACTAAAATCTCATAGT',
'description': 'NZ_AP018174.1 Anabaenopsis circularis NIES-21 DNA, nearly complete genome',
'start_pos': 1824000,
'end_pos': 1836200,
'fasta_url': 'https://ftp.ncbi.nlm.nih.gov/genomes/refseq/bacteria/Anabaenopsis_circularis/latest_assembly_versions/GCF_002367975.1_ASM236797v1/GCF_002367975.1_ASM236797v1_genomic.fna.gz'}
```
### Data Fields
- `sequence`: a string containing a DNA sequence from one of the multi-species genomes
- `description`: a string indicating the species of the sequence as well as the NCBI id.
- `start_pos`: an integer indicating the index of the sequence's first nucleotide
- `end_pos`: an integer indicating the index of the sequence's last nucleotide
- `fasta_url`: a string indicating the URL used to download the fasta from which the sequence was taken.
### Data Splits
The Multi-species dataset has 3 splits: train, validation, and test.
## Dataset Creation
[N/A]
### Curation Rationale
[N/A]
### Source Data
#### Initial Data Collection and Normalization
The data consists of sequences cut from the whole-genome sequences of the 850 sampled species, which can be found in the `urls.csv` file of this dataset's repository.
#### Who are the source language producers?
[N/A]
### Annotations
The dataset does not contain any additional annotations.
#### Annotation process
[N/A]
#### Who are the annotators?
[N/A]
### Personal and Sensitive Information
[N/A]
## Considerations for Using the Data
### Social Impact of Dataset
[N/A]
### Discussion of Biases
[N/A]
### Other Known Limitations
[N/A]
## Additional Information
### Dataset Curators
[N/A]
### Licensing Information
[N/A]
### Citation Information
```bibtex
@article{dalla2023nucleotide,
title={The Nucleotide Transformer: Building and Evaluating Robust Foundation Models for Human Genomics},
author={Dalla-Torre, Hugo and Gonzalez, Liam and Mendoza Revilla, Javier and Lopez Carranza, Nicolas and Henryk Grywaczewski, Adam and Oteri, Francesco and Dallago, Christian and Trop, Evan and Sirelkhatim, Hassan and Richard, Guillaume and others},
journal={bioRxiv},
pages={2023--01},
year={2023},
publisher={Cold Spring Harbor Laboratory}
}
``` | 17,218 | [
[
-0.055419921875,
-0.022705078125,
0.00609588623046875,
-0.0032215118408203125,
-0.0239410400390625,
0.01087188720703125,
-0.0113525390625,
0.00141143798828125,
0.032958984375,
0.0229949951171875,
-0.038055419921875,
-0.043609619140625,
-0.047943115234375,
0.... |
bakhitovd/ML_arxiv | 2023-05-19T21:47:33.000Z | [
"task_categories:summarization",
"size_categories:10K<n<100K",
"language:en",
"license:cc0-1.0",
"region:us"
] | bakhitovd | null | null | 1 | 25 | 2023-04-06T21:46:29 | ---
license: cc0-1.0
task_categories:
- summarization
language:
- en
pretty_name: ML Articles Subset of Scientific Papers
size_categories:
- 10K<n<100K
---
# Dataset Card for 'ML Articles Subset of Scientific Papers' Dataset
## Dataset Summary
The dataset consists of 32,621 instances from the 'Scientific papers' dataset, a collection of scientific papers and summaries from the arXiv repository. This subset focuses on the articles that are closest in semantics, vocabulary, structure, and meaning to articles describing machine learning. The subset was created using sentence embeddings and K-means clustering.
## Supported Tasks and Leaderboards
The dataset supports tasks related to text summarization. Particularly, the dataset was created for fine-tuning transformer models for summarization. There are no established leaderboards at this moment.
## Languages
The text in the dataset is in English.
## Dataset Structure
### Data Instances
An instance in the dataset includes a scientific paper and its summary, both in English.
### Data Fields
article: The full text of the scientific paper.\
abstract: The summary of the paper.
### Data Splits
The dataset is split into:
- training subset: 30,280 articles
- validation subset: 1,196 articles
- test subset: 1,145 articles
## Dataset Creation
### Methods
The subset was created using sentence embeddings from a transformer model, SciBERT. The embeddings were grouped into 6 clusters using the K-means clustering algorithm. The cluster closest, by cosine similarity, to articles strongly related to machine learning was chosen to form this dataset.
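The cluster-selection step can be sketched as follows (NumPy only). The random vectors and labels below are placeholders for the real SciBERT embeddings and fitted K-means assignments, and all names are illustrative, not from the project's code:

```python
import numpy as np

rng = np.random.default_rng(42)
n_docs, dim, n_clusters = 500, 64, 6

embeddings = rng.normal(size=(n_docs, dim))        # stand-in for SciBERT embeddings
labels = rng.integers(0, n_clusters, size=n_docs)  # stand-in for K-means labels

# Mean embedding of a few hand-picked articles strongly related to ML.
ml_reference = embeddings[:20].mean(axis=0)

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Choose the cluster whose centroid is most similar to the ML reference.
centroids = np.stack([embeddings[labels == k].mean(axis=0)
                      for k in range(n_clusters)])
similarities = [cosine(c, ml_reference) for c in centroids]
ml_cluster = int(np.argmax(similarities))
ml_subset = embeddings[labels == ml_cluster]
```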
### Source Data
The dataset is a subset of the 'Scientific papers' dataset, which includes scientific papers from the ArXiv repository.
### Social Impact
This dataset could help improve the quality of summarization models for machine learning research articles, which in turn can make such content more accessible.
### Discussion of Biases
As the dataset focuses on machine learning articles, it may not be representative of scientific papers in general or other specific domains.
### Other Known Limitations
As the dataset has been selected based on a specific methodology, it may not include all machine learning articles or may inadvertently include non-machine learning articles.
### Dataset Curators
The subset was created as part of a project aimed to build an effective summarization model for Machine Learning articles. | 2,458 | [
[
-0.0214996337890625,
-0.03656005859375,
0.0211181640625,
0.0152435302734375,
-0.0200042724609375,
-0.001407623291015625,
-0.005931854248046875,
-0.00936126708984375,
0.028472900390625,
0.0338134765625,
-0.022674560546875,
-0.055023193359375,
-0.050262451171875,
... |
andreabac3/StackOverflow-Italian-Fauno-Baize | 2023-04-08T15:49:40.000Z | [
"license:gpl-3.0",
"arxiv:2304.01196",
"region:us"
] | andreabac3 | null | null | 1 | 25 | 2023-04-08T15:46:42 | ---
license: gpl-3.0
---
# StackOverflow-Italian-Fauno-Baize
This dataset is an Italian translation of the StackOverflow dataset presented by Baize's authors.
## Dataset Description
- **Paper:** https://arxiv.org/abs/2304.01196
### Languages
Italian
## Dataset Structure
### Data Instances
- Sentences: 57,046
- Average number of turns: 3.6
- Average response length of each turn: 36.0
### Data Fields
topic, input
### Data Splits
Train
## Dataset Creation
### Source Data
#### Initial Data Collection and Normalization
https://github.com/project-baize/baize-chatbot
## Additional Information
### Dataset Curators
[Andrea Bacciu](https://andreabac3.github.io/), Dr. [Giovanni Trappolini](https://sites.google.com/view/giovannitrappolini), [Andrea Santilli](https://www.santilli.xyz/), and Professor [Fabrizio Silvestri](https://sites.google.com/diag.uniroma1.it/fabriziosilvestri/home).
### Licensing Information
This project is a derivative of Baize, and we adhere to the licensing constraints imposed by Baize's creators.
### Citation Information
```bibtex
@misc{fauno,
author = {Andrea Bacciu, Giovanni Trappolini, Andrea Santilli, Fabrizio Silvestri},
title = {Fauno: The Italian Large Language Model that will leave you senza parole!},
year = {2023},
publisher = {GitHub},
journal = {GitHub repository},
howpublished = {\url{https://github.com/andreabac3/Fauno-Italian-LLM}},
}
```
```bibtex
@article{xu2023baize,
title={Baize: An Open-Source Chat Model with Parameter-Efficient Tuning on Self-Chat Data},
author={Xu, Canwen and Guo, Daya and Duan, Nan and McAuley, Julian},
journal={arXiv preprint arXiv:2304.01196},
year={2023}
}
``` | 1,669 | [
[
-0.025238037109375,
-0.044647216796875,
0.012664794921875,
0.0261993408203125,
-0.0014734268188476562,
-0.0184173583984375,
-0.0298614501953125,
-0.0141448974609375,
0.02313232421875,
0.0257720947265625,
-0.046661376953125,
-0.0423583984375,
-0.045196533203125,
... |
BAAI/COIG | 2023-07-12T15:38:35.000Z | [
"language:zh",
"license:apache-2.0",
"arxiv:2204.07705",
"arxiv:2212.10560",
"arxiv:2212.09689",
"arxiv:2304.07987",
"region:us"
] | BAAI | We propose the Chinese Open Instruction Generalist (COIG) project to maintain a harmless, helpful, and diverse set of Chinese instruction corpora. We welcome all researchers in the community to contribute to the corpus set and collaborate with us. We only release the first chip of COIG to help the Chinese LLMs' development in the exploration stage and appeal to more researchers joining us in building COIG. We introduce a manually verified translated general instruction corpus, a manually annotated exam instruction corpus, a human value alignment instruction corpus, a multi-round counterfactual correction chat corpus, and a leetcode instruction corpus. We provide these new instruction corpora to assist the community with instruction tuning on Chinese LLMs. These instruction corpora are also template workflows for how new Chinese instruction corpora can be built and expanded effectively. | @misc{zhang2023chinese,
title={Chinese Open Instruction Generalist: A Preliminary Release},
author={Ge Zhang and Yemin Shi and Ruibo Liu and Ruibin Yuan and Yizhi Li and Siwei Dong and Yu Shu and Zhaoqun Li and Zekun Wang and Chenghua Lin and Wenhao Huang and Jie Fu},
year={2023},
eprint={2304.07987},
archivePrefix={arXiv},
primaryClass={cs.CL}
} | 330 | 25 | 2023-04-16T11:09:32 | ---
license: apache-2.0
arxiv: 2304.07987
language:
- zh
---
# This is the Chinese Open Instruction Generalist project
We propose the Chinese Open Instruction Generalist (**COIG**) project to maintain a harmless, helpful, and diverse set of Chinese instruction corpora. We welcome all researchers in the community to contribute to the corpus set and collaborate with us. We only release the first chip of COIG to help the Chinese LLMs' development in the exploration stage and appeal to more researchers joining us in building COIG. We introduce a manually verified translated general instruction corpus, a manually annotated exam instruction corpus, a human value alignment instruction corpus, a multi-round counterfactual correction chat corpus, and a leetcode instruction corpus. We provide these new instruction corpora to assist the community with instruction tuning on Chinese LLMs. These instruction corpora are also template workflows for how new Chinese instruction corpora can be built and expanded effectively.
It is best to directly download the individual data files you wish to use instead of using HF `load_dataset`. All datasets can be downloaded from: https://huggingface.co/datasets/BAAI/COIG/tree/main
This dataset card is modified from [OIG](https://huggingface.co/datasets/laion/OIG).
### Translated Instructions (66,858)
There are 66,858 instructions in total, which are composed of 1,616 task descriptions in [Super-NaturalInstructions](https://arxiv.org/abs/2204.07705) along with a single instance for each of them, 175 seed tasks in [Self-Instruct](https://arxiv.org/abs/2212.10560), and 66,007 instructions from [Unnatural Instructions](https://arxiv.org/abs/2212.09689). To reduce the cost and further improve the quality of the instruction corpus, we separate the translation procedure into three phases: automatic translation, manual verification, and manual correction. These strict quality verification procedures assure the reliability of the translated corpus.
### Exam Instructions (63,532)
The Chinese National College Entrance Examination, Middle School Entrance Examinations, and Civil Servant Examination are the main Chinese commonsense tests. These exams contain various question formats and detailed analyses that can be used as a Chain-of-Thought (**CoT**) corpus. We extract six informative elements from the original exam questions: instruction, question context, question, answer, answer analysis, and coarse-grained subject. There are six main coarse-grained subjects: Chinese, English, Politics, Biology, History, and Geology. There are very few Math, Physics, and Chemistry questions in the corpus because these questions often contain complex symbols that are hard to annotate. For multiple-choice questions, we recommend that researchers post-process this corpus further using prompts, or convert the questions into blank-filling form, to further increase the diversity of the instructions.
### Human Value Alignment Instructions (34,471)
To respect and reflect the major difference caused by different cultural backgrounds, different from other tasks in COIG that leverage one unified collection of instruction-following samples, we categorize the value alignment data into two separate series:
- A set of samples that present shared human values in the Chinese-speaking world. In total, we choose 50 instructions as the augmentation seeds, and produce 3k resulting instruction-following samples for general-purpose value alignment in the Chinese-speaking world.
- Some additional sets of samples that present regional-culture or country-specific human values.
### Counterfactual Correction Multi-round Chat (13,653)
The Counterfactual Correction Multi-round Chat dataset (CCMC) is constructed based on the [CN-DBpedia knowledge graph dataset](https://link.springer.com/chapter/10.1007/978-3-319-60045-1_44) with the aim of alleviating and resolving the pain points of hallucination and factual inconsistency in current LLMs. The CCMC dataset includes 5 rounds of role-playing chat between a student and a teacher, and the corresponding knowledge they refer to. The dataset contains ~13,000 dialogues with an average of 5 rounds per dialogue, resulting in ~65,000 rounds of chat.
### Leetcode Instructions (11,737)
Given that code-related tasks potentially contribute to the emergence of abilities in LLMs, we argue that code-related tasks aligned with Chinese natural language should be considered in our datasets. Therefore, we build the Leetcode instructions from a **CC-BY-SA-4.0**-licensed [collection](https://github.com/doocs/leetcode) of 2,589 programming questions. The questions contain problem descriptions, multiple programming languages, and explanations (834 questions do not have explanations).
## Support this project
Your contributions and feedback support the open source ecosystem, improve the bot and provide datasets for future AI research. To participate you can:
Submit Github issues, track issues and help create datasets that need improvement. https://github.com/BAAI-Zlab/COIG
## Update: May 27, 2023
- v0.3: Update counterfactural_correction_multi_round_chat.tar.gz and make sure all round responses can be decoded as json.
- v0.2: Update exam_instructions.jsonl, translated_instructions.jsonl and human_value_alignment_instructions_part2.json.
- v0.1: Release the five datasets of COIG.
## Disclaimer
These datasets contain synthetic data and in some cases data that includes humans trying to get the language model to say toxic/offensive/trolling things. If you are concerned about the presence of this type of material in the dataset please make sure you carefully inspect each of the entries and filter appropriately. Our goal is for the model to be as helpful and non-toxic as possible and we are actively evaluating ways to reduce or eliminate undesirable content from the instruction tuning datasets.
## License
The COIG dataset that is authored by BAAI is released under an Apache 2.0 license. However, the data also includes content licensed under other permissive licenses such as unnatural instructions data which is licensed under MIT License, or web-crawled data which is used under fair use principles.
## BibTeX & Citation
```
@misc{zhang2023chinese,
title={Chinese Open Instruction Generalist: A Preliminary Release},
author={Ge Zhang and Yemin Shi and Ruibo Liu and Ruibin Yuan and Yizhi Li and Siwei Dong and Yu Shu and Zhaoqun Li and Zekun Wang and Chenghua Lin and Wenhao Huang and Jie Fu},
year={2023},
eprint={2304.07987},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` | 6,642 | [
[
-0.0213470458984375,
-0.0596923828125,
-0.004039764404296875,
0.003887176513671875,
-0.0011339187622070312,
-0.019287109375,
-0.03375244140625,
-0.0254364013671875,
-0.00897216796875,
0.041412353515625,
-0.036285400390625,
-0.0567626953125,
-0.020294189453125,
... |
pietrolesci/pubmed-20k-rct | 2023-05-12T10:04:08.000Z | [
"task_categories:text-classification",
"language:en",
"region:us"
] | pietrolesci | null | null | 0 | 25 | 2023-05-11T18:28:35 | ---
task_categories:
- text-classification
language:
- en
dataset_info:
features:
- name: abstract_id
dtype: string
- name: labels
dtype:
class_label:
names:
'0': background
'1': conclusions
'2': methods
'3': objective
'4': results
- name: text
dtype: string
- name: sentence_id
dtype: int64
- name: uid
dtype: int64
- name: embedding_all-mpnet-base-v2
sequence: float32
- name: embedding_multi-qa-mpnet-base-dot-v1
sequence: float32
- name: embedding_all-MiniLM-L12-v2
sequence: float32
splits:
- name: train
num_bytes: 1392522399
num_examples: 176642
- name: validation
num_bytes: 233905609
num_examples: 29672
- name: test
num_bytes: 233146005
num_examples: 29578
download_size: 0
dataset_size: 1859574013
---
This is the same dataset as [`armanc/pubmed-rct20k`](https://huggingface.co/datasets/armanc/pubmed-rct20k).
The only differences are
1. Addition of a unique identifier, `uid`
1. Addition of the indices, that is, 3 columns with embeddings from 3 different sentence-transformers models:
- `all-mpnet-base-v2`
- `multi-qa-mpnet-base-dot-v1`
- `all-MiniLM-L12-v2`
1. Renaming of the `label` column to `labels` for easier compatibility with the transformers library | 1,331 | [
[
-0.015411376953125,
-0.017120361328125,
0.031463623046875,
0.0211944580078125,
0.0107574462890625,
0.0012903213500976562,
-0.0015993118286132812,
-0.00893402099609375,
0.033782958984375,
0.042205810546875,
-0.039703369140625,
-0.04217529296875,
-0.04019165039062... |
silk-road/Wizard-LM-Chinese-instruct-evol | 2023-05-15T00:13:52.000Z | [
"task_categories:text-generation",
"task_categories:question-answering",
"size_categories:10K<n<100K",
"language:zh",
"language:en",
"license:cc-by-4.0",
"region:us"
] | silk-road | null | null | 59 | 25 | 2023-05-15T00:04:30 | ---
license: cc-by-4.0
task_categories:
- text-generation
- question-answering
language:
- zh
- en
size_categories:
- 10K<n<100K
---
Wizard-LM-Chinese is a dataset built on MSRA's Wizard-LM dataset: the instructions were translated into Chinese, and GPT was then queried to obtain the answers.
Wizard-LM contains many instructions that are harder than those in Alpaca.
A small number of the Chinese question translations fail because of instruction injection.
The Chinese answers were obtained by querying again with the translated Chinese questions.
We will gradually release more datasets on HF, including:
- [ ] Chinese translation of Coco Caption
- [ ] Chinese translation of CoQA
- [ ] Embedding data for CNewSum
- [ ] Augmented open-QA data
- [x] Chinese translation of WizardLM
If you are also preparing these datasets, feel free to contact us so we can avoid duplicating costs.
# Luotuo (骆驼): Open-Source Chinese Large Language Models
[https://github.com/LC1332/Luotuo-Chinese-LLM](https://github.com/LC1332/Luotuo-Chinese-LLM)
The Luotuo (骆驼) project is an open-source Chinese large language model project, comprising a series of language models, initiated by [Ziang Leng](https://blairleng.github.io) @ SenseTime, Qiyuan Chen @ Central China Normal University, and Cheng Li @ SenseTime.
(Note: [Qiyuan Chen](https://qiyuan-chen.github.io/) is looking for a 2024 recommended-admission graduate advisor; feel free to get in touch.)
The Luotuo project is **not** an official SenseTime product.
## Citation
Please cite the repo if you use the data or code in this repo.
```
@misc{alpaca,
author={Ziang Leng, Qiyuan Chen and Cheng Li},
title = {Luotuo: An Instruction-following Chinese Language model, LoRA tuning on LLaMA},
year = {2023},
publisher = {GitHub},
journal = {GitHub repository},
howpublished = {\url{https://github.com/LC1332/Luotuo-Chinese-LLM}},
}
``` | 1,139 | [
[
-0.026153564453125,
-0.0684814453125,
0.00750732421875,
0.0247955322265625,
-0.0177154541015625,
-0.01442718505859375,
-0.00582122802734375,
-0.03192138671875,
0.040740966796875,
0.0391845703125,
-0.032470703125,
-0.04766845703125,
-0.0247344970703125,
0.008... |
argilla/comparison-dataset-dolly-curated-falcon | 2023-06-02T14:52:40.000Z | [
"region:us"
] | argilla | null | null | 0 | 25 | 2023-06-02T11:51:48 | ---
{}
---
## Guidelines
These guidelines are based on the paper [Training Language Models to Follow Instructions with Human Feedback](https://cdn.openai.com/papers/Training_language_models_to_follow_instructions_with_human_feedback.pdf)
**You are given a text-based description of a task, submitted by a user.**
This task description may be in the form of an explicit instruction (e.g. "Write a story about a wise frog."). The task may also be specified indirectly, for example by using several examples of the desired behavior (e.g. given a sequence of movie reviews followed by their sentiment, followed by one movie review without a sentiment, you can assume the task is to predict the sentiment of the final review), or by generating the start of a desired output (e.g. given "There once was a wise frog named Julius," you can assume the task is to continue the story).
**You will also be given several text outputs, intended to help the user with their task.** Your job is to evaluate these outputs to ensure that they are helpful, truthful, and harmless. For most tasks, being truthful and harmless is more important than being helpful.
By **helpful**, we mean that the output should follow the user's intention and help the user solve their task. Some examples of helpful behavior:
- Writing in clear language.
- Answering the question they meant to ask, even if they mis-asked it.
- Being sensitive to internationality (e.g., "football" shouldn't mean American football, and "the president" doesn't necessarily mean the US president).
- If an instruction is too confusing, asking for clarification and explaining why the instruction is confusing.
- Not giving overly long or rambling answers, or repeating information from the question.
- Not assuming extraneous extra context outside of what's given (besides things like facts about the world), unless that's an implied part of the task. For example, if asked to "write a polite email response to this email: email body," the output shouldn't assume "I can't make it this time, but am free next weekend." But if asked to "Write an email to Socrates," then you can safely assume the output can make up a context.
By **truthful**, we mean that the output contains accurate information and doesn't mislead the user. Some examples of truthful behavior:
- On tasks like summarization, where the output should only use information from input, not making up details that are not part of the input description.
- Not producing clearly false information about the world (e.g., making up facts or promoting conspiracies). For example, the output should not state that Hillary Clinton has served time in prison.
- Avoid generating misleading information or information with questionable authenticity. For example, if the input asks "Why did Hillary Clinton go to jail?", the output should not say "It's not totally clear," but rather should refute the premise of the question.
By **harmless**, we mean that the output should not cause physical, psychological, or social harm to people; damage to or loss of equipment or property; damage to the environment; or harm to institutions or resources necessary to human well-being. Some examples of harmless behavior:
- Treating other humans with kindness, respect, and consideration; not denigrating members of certain groups or using biased language against a particular group.
- Not generating abusive, threatening, or offensive language or promoting violence.
- Not writing sexual or violent content if it's not asked for.
- Not giving bad real-world advice or promoting illegal activity.
Evaluating model outputs may involve making trade-offs between these criteria. These trade-offs will depend on the task.
## Fields
* `prompt` is of type <class 'str'>
* `response-1` is of type <class 'str'>
* `response-2` is of type <class 'str'>
## Questions
* `response_ranking` : Select response 1 or 2. Select 3 if no response is suitable.
If you select 3, provide a response using the field below, or discard the record.
Helpful: output follows the user's intention.
Truthful: output contains accurate information and doesn't mislead the user.
Harmless: the output should not cause physical, psychological, or social harm to people, property, environment, or institutions
## Load with Argilla
To load this dataset with Argilla, you'll just need to install Argilla as `pip install argilla --upgrade` and then use the following code:
```python
import argilla as rg
ds = rg.FeedbackDataset.from_huggingface('argilla/comparison-dataset-dolly-curated-falcon')
```
## Load with Datasets
To load this dataset with Datasets, you'll just need to install Datasets as `pip install datasets --upgrade` and then use the following code:
```python
from datasets import load_dataset
ds = load_dataset('argilla/comparison-dataset-dolly-curated-falcon')
```
| 4,870 | [
[
-0.017974853515625,
-0.07537841796875,
0.023895263671875,
0.0271759033203125,
-0.011138916015625,
-0.030731201171875,
-0.0033168792724609375,
-0.0220489501953125,
0.006374359130859375,
0.058135986328125,
-0.054412841796875,
-0.040130615234375,
-0.044525146484375... |
Nadav/pixel_glue_sst2 | 2023-06-12T11:08:22.000Z | [
"region:us"
] | Nadav | null | null | 0 | 25 | 2023-06-07T23:18:44 | ---
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': '0'
'1': '1'
splits:
- name: train
num_bytes: 404363205.375
num_examples: 67349
- name: validation
num_bytes: 7130426.0
num_examples: 872
download_size: 348047558
dataset_size: 411493631.375
---
# Dataset Card for "pixel_glue_sst2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 539 | [
[
-0.0124969482421875,
-0.0303497314453125,
0.0195770263671875,
0.013153076171875,
-0.0175018310546875,
0.01342010498046875,
0.02276611328125,
0.0047607421875,
0.060699462890625,
0.0104522705078125,
-0.057708740234375,
-0.046478271484375,
-0.041656494140625,
-... |
skeskinen/books3_basic_paragraphs | 2023-06-14T12:55:02.000Z | [
"region:us"
] | skeskinen | null | null | 0 | 25 | 2023-06-12T05:47:39 | ---
dataset_info:
features:
- name: text
dtype: string
- name: book
dtype: string
- name: pos
dtype: float64
- name: smog_index
dtype: float64
splits:
- name: train
num_bytes: 1366299770
num_examples: 6639751
download_size: 676098743
dataset_size: 1366299770
---
# Dataset Card for "books3_basic_paragraphs"
the_pile books3, books with smog grade difficulty estimate of 6.5 or under. Split into paragraphs and filtered out most 'non-paragraphs' like titles, tables of content, etc. | 524 | [
[
-0.050079345703125,
-0.02276611328125,
0.0250396728515625,
0.00457000732421875,
-0.03253173828125,
-0.0122833251953125,
0.0286407470703125,
0.014892578125,
-0.0262603759765625,
0.04107666015625,
-0.039825439453125,
-0.069580078125,
-0.055023193359375,
0.0097... |
bias-amplified-splits/mnli | 2023-07-04T11:48:21.000Z | [
"task_categories:text-classification",
"size_categories:100K<n<1M",
"language:en",
"license:cc-by-4.0",
"arxiv:2305.18917",
"arxiv:1704.05426",
"region:us"
] | bias-amplified-splits | GLUE, the General Language Understanding Evaluation benchmark
(https://gluebenchmark.com/) is a collection of resources for training,
evaluating, and analyzing natural language understanding systems. | @inproceedings{wang2019glue,
title={{GLUE}: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding},
author={Wang, Alex and Singh, Amanpreet and Michael, Julian and Hill, Felix and Levy, Omer and Bowman, Samuel R.},
note={In the Proceedings of ICLR.},
year={2019}
} | 0 | 25 | 2023-07-03T19:32:08 | ---
license: cc-by-4.0
dataset_info:
- config_name: minority_examples
features:
- name: premise
dtype: string
- name: hypothesis
dtype: string
- name: label
dtype:
class_label:
names:
'0': entailment
'1': neutral
'2': contradiction
- name: idx
dtype: int32
splits:
- name: train.biased
num_bytes: 58497575
num_examples: 309873
- name: train.anti_biased
num_bytes: 16122071
num_examples: 82829
- name: validation_matched.biased
num_bytes: 1443678
num_examples: 7771
- name: validation_matched.anti_biased
num_bytes: 390105
num_examples: 2044
- name: validation_mismatched.biased
num_bytes: 1536381
num_examples: 7797
- name: validation_mismatched.anti_biased
num_bytes: 412850
num_examples: 2035
download_size: 92308759
dataset_size: 78402660
- config_name: partial_input
features:
- name: premise
dtype: string
- name: hypothesis
dtype: string
- name: label
dtype:
class_label:
names:
'0': entailment
'1': neutral
'2': contradiction
- name: idx
dtype: int32
splits:
- name: train.biased
num_bytes: 59529986
num_examples: 309873
- name: train.anti_biased
num_bytes: 15089660
num_examples: 82829
- name: validation_matched.biased
num_bytes: 1445996
num_examples: 7745
- name: validation_matched.anti_biased
num_bytes: 387787
num_examples: 2070
- name: validation_mismatched.biased
num_bytes: 1529878
num_examples: 7758
- name: validation_mismatched.anti_biased
num_bytes: 419353
num_examples: 2074
download_size: 92308759
dataset_size: 78402660
task_categories:
- text-classification
language:
- en
pretty_name: MultiNLI
size_categories:
- 100K<n<1M
---
# Dataset Card for Bias-amplified Splits for MultiNLI
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Annotations](#annotations)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Citation Information](#citation-information)
## Dataset Description
- **Repository:** [Fighting Bias with Bias repo](https://github.com/schwartz-lab-nlp/fight-bias-with-bias)
- **Paper:** [arXiv](https://arxiv.org/abs/2305.18917)
- **Point of Contact:** [Yuval Reif](mailto:yuval.reif@mail.huji.ac.il)
- **Original Dataset's Paper:** [MultiNLI](https://arxiv.org/abs/1704.05426)
### Dataset Summary
Bias-amplified splits are a novel evaluation framework for assessing model robustness: we amplify dataset biases in the training data and challenge models to generalize beyond them. This framework is defined by a bias-amplified training set and a hard, anti-biased test set, which we automatically extract from existing datasets using model-based methods.
Our experiments show that the identified anti-biased examples are naturally challenging for models, and moreover, models trained on bias-amplified data exhibit dramatic performance drops on anti-biased examples, which are not mitigated by common approaches to improve generalization.
Here we apply our framework to **MultiNLI**, a crowd-sourced collection of 433k sentence pairs annotated with textual entailment information.
Our evaluation framework can be applied to any existing dataset, even those considered obsolete, to test model robustness. We hope our work will guide the development of robust models that do not rely on superficial biases and correlations.
#### Evaluation Results (DeBERTa-large)
##### For splits based on minority examples:
| Training Data \ Test Data | Original test | Anti-biased test |
|---------------------------|---------------|------------------|
| Original training split | 91.1 | 74.3 |
| Biased training split | 88.7 | 57.5 |
##### For splits based on partial-input model:
| Training Data \ Test Data | Original test | Anti-biased test |
|---------------------------|---------------|------------------|
| Original training split | 91.1 | 81.4 |
| Biased training split | 89.5 | 71.8 |
#### Loading the Data
```
from datasets import load_dataset
# choose which bias detection method to use for the bias-amplified splits: either "minority_examples" or "partial_input"
dataset = load_dataset("bias-amplified-splits/mnli", "minority_examples")
# use the biased training split and anti-biased test split
train_dataset = dataset['train.biased']
eval_dataset = dataset['validation_matched.anti_biased']
```
## Dataset Structure
### Data Instances
Data instances are taken directly from MultiNLI (GLUE version), and re-split into biased and anti-biased subsets. Here is an example of an instance from the dataset:
```
{
"idx": 0,
"premise": "Your contribution helped make it possible for us to provide our students with a quality education.",
"hypothesis": "Your contributions were of no help with our students' education.",
"label": 2
}
```
### Data Fields
- `idx`: unique identifier for the example within its original data splits (e.g., validation matched)
- `premise`: a piece of text
- `hypothesis`: a piece of text that may be true, false, or whose truth conditions may not be knowable when compared to the premise
- `label`: one of `0`, `1` and `2` (`entailment`, `neutral`, and `contradiction`)
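For reference, the integer class ids resolve to names as in this minimal sketch (the `label_name` helper is illustrative, not part of the dataset):

```python
# Map from integer class id to label name, as declared in the YAML schema above.
LABEL_NAMES = {0: "entailment", 1: "neutral", 2: "contradiction"}

def label_name(example):
    # Resolve an example's integer label to its string name.
    return LABEL_NAMES[example["label"]]

example = {"idx": 0, "premise": "...", "hypothesis": "...", "label": 2}
print(label_name(example))  # contradiction
```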
### Data Splits
Bias-amplified splits require a method to detect *biased* and *anti-biased* examples in datasets. We release bias-amplified splits created with each of these two methods:
- **Minority examples**: A novel method we introduce that leverages representation learning and clustering for identifying anti-biased *minority examples* (Tu et al., 2020)—examples that defy common statistical patterns found in the rest of the dataset.
- **Partial-input baselines**: A common method for identifying biased examples containing annotation artifacts in a dataset, which examines the performance of models that are restricted to using only part of the input. Such models, if successful, are bound to rely on unintended or spurious patterns in the dataset.
Using each of the two methods, we split each of the original train and test splits into biased and anti-biased subsets. See the [paper](https://arxiv.org/abs/2305.18917) for more details.
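As a toy illustration of the partial-input idea (a hypothetical keyword heuristic, not the trained partial-input models used in the paper), a hypothesis-only classifier can exploit negation cues, a well-known NLI annotation artifact:

```python
# Toy hypothesis-only "partial-input" classifier: it never sees the premise.
# Negation words in the hypothesis are a known annotation artifact in NLI data.
NEGATION_CUES = ("no ", "not ", "never ", "nothing")

def hypothesis_only_predict(hypothesis):
    h = hypothesis.lower()
    if any(cue in h for cue in NEGATION_CUES):
        return "contradiction"
    return "entailment"

# A model restricted to partial input that still scores well is, by
# construction, relying on unintended or spurious patterns in the dataset.
```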
#### Minority Examples
| Dataset Split | Number of Instances in Split |
|-------------------------------------|------------------------------|
| Train - biased | 309873 |
| Train - anti-biased | 82829 |
| Validation matched - biased | 7771 |
| Validation matched - anti-biased | 2044 |
| Validation mismatched - biased | 7797 |
| Validation mismatched - anti-biased | 2035 |
#### Partial-input Baselines
| Dataset Split | Number of Instances in Split |
|-------------------------------------|------------------------------|
| Train - biased | 309873 |
| Train - anti-biased | 82829 |
| Validation matched - biased | 7745 |
| Validation matched - anti-biased | 2070 |
| Validation mismatched - biased | 7758 |
| Validation mismatched - anti-biased | 2074 |
## Dataset Creation
### Curation Rationale
NLP models often rely on superficial cues known as *dataset biases* to achieve impressive performance, and can fail on examples where these biases do not hold. To develop more robust, unbiased models, recent work aims to filter biased examples from training sets. We argue that in order to encourage the development of robust models, we should in fact **amplify** biases in the training sets, while adopting the challenge set approach and making test sets anti-biased. To implement our approach, we introduce a simple framework that can be applied automatically to any existing dataset to use it for testing model robustness.
### Annotations
#### Annotation process
No new annotations are required to create bias-amplified splits. Existing data instances are split into *biased* and *anti-biased* splits based on automatic model-based methods to detect such examples.
## Considerations for Using the Data
### Social Impact of Dataset
Bias-amplified splits were created to promote the development of robust NLP models that do not rely on superficial biases and correlations, and provide more challenging evaluation of existing systems.
### Discussion of Biases
We propose to use bias-amplified splits to complement benchmarks with challenging evaluation settings that test model robustness, in addition to the dataset’s main training and test sets. As such, while existing dataset biases are *amplified* during training with bias-amplified splits, these splits are intended primarily for model evaluation, to expose the bias-exploiting behaviors of models and to identify more robust models and effective robustness interventions.
## Additional Information
### Dataset Curators
Bias-amplified splits were introduced by Yuval Reif and Roy Schwartz from the [Hebrew University of Jerusalem](https://schwartz-lab-huji.github.io).
MultiNLI was developed by Adina Williams, Nikita Nangia and Samuel Bowman.
### Citation Information
```
@misc{reif2023fighting,
title = "Fighting Bias with Bias: Promoting Model Robustness by Amplifying Dataset Biases",
author = "Yuval Reif and Roy Schwartz",
month = may,
year = "2023",
url = "https://arxiv.org/pdf/2305.18917",
}
```
Source dataset:
```
@InProceedings{N18-1101,
author = "Williams, Adina
and Nangia, Nikita
and Bowman, Samuel",
title = "A Broad-Coverage Challenge Corpus for
Sentence Understanding through Inference",
booktitle = "Proceedings of the 2018 Conference of
the North American Chapter of the
Association for Computational Linguistics:
Human Language Technologies, Volume 1 (Long
Papers)",
year = "2018",
publisher = "Association for Computational Linguistics",
pages = "1112--1122",
location = "New Orleans, Louisiana",
url = "http://aclweb.org/anthology/N18-1101"
}
``` | 10,953 | [
[
-0.05804443359375,
-0.05120849609375,
0.004852294921875,
0.0019779205322265625,
-0.0188751220703125,
-0.0090179443359375,
-0.01183319091796875,
-0.0280303955078125,
0.024383544921875,
0.0157928466796875,
-0.0587158203125,
-0.03436279296875,
-0.0511474609375,
... |
talby/spamassassin | 2023-07-11T18:36:22.000Z | [
"license:unknown",
"region:us"
] | talby | Welcome to the SpamAssassin public mail corpus. This is a selection of mail
messages, suitable for use in testing spam filtering systems. Pertinent
points:
- All headers are reproduced in full. Some address obfuscation has taken
place, and hostnames in some cases have been replaced with
"spamassassin.taint.org" (which has a valid MX record). In most cases
though, the headers appear as they were received.
- All of these messages were posted to public fora, were sent to me in the
knowledge that they may be made public, were sent by me, or originated as
newsletters from public news web sites.
- relying on data from public networked blacklists like DNSBLs, Razor, DCC
or Pyzor for identification of these messages is not recommended, as a
previous downloader of this corpus might have reported them!
- Copyright for the text in the messages remains with the original senders.
OK, now onto the corpus description. It's split into three parts, as follows:
- spam: 500 spam messages, all received from non-spam-trap sources.
- easy_ham: 2500 non-spam messages. These are typically quite easy to
differentiate from spam, since they frequently do not contain any spammish
signatures (like HTML etc).
- hard_ham: 250 non-spam messages which are closer in many respects to
typical spam: use of HTML, unusual HTML markup, coloured text,
"spammish-sounding" phrases etc.
- easy_ham_2: 1400 non-spam messages. A more recent addition to the set.
- spam_2: 1397 spam messages. Again, more recent.
Total count: 6047 messages, with about a 31% spam ratio. | null | 0 | 25 | 2023-07-10T17:59:18 | ---
license: unknown
---
# Dataset Card for the SpamAssassin public mail corpus
## Dataset Description
- **Homepage:** https://spamassassin.apache.org/old/publiccorpus/readme.html
### Dataset Summary
This is a selection of mail messages suitable for use in testing spam filtering systems, assembled by members of the SpamAssassin project.
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
- The `text` config normalizes all character sets to utf8 and dumps the
MIME tree as a JSON list of lists.
- The `unprocessed` config does not parse messages at all, leaving the
full headers and content as binary.
### Data Fields
- `label`: `spam` or `ham`
- `group`: SpamAssassin has grouped these samples into categories
{'hard_ham', 'spam_2', 'spam', 'easy_ham', 'easy_ham_2'}
- `text`: normalized text of the message bodies
- `raw`: full binary headers and contents of messages
### Data Splits
Only a _train_ split has been provided.
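As a quick sanity check, the per-group message counts reported in the original corpus README reproduce the roughly 31% spam ratio:

```python
# Per-group message counts, as reported in the SpamAssassin corpus README.
GROUP_COUNTS = {
    "spam": 500,
    "easy_ham": 2500,
    "hard_ham": 250,
    "easy_ham_2": 1400,
    "spam_2": 1397,
}

spam_total = GROUP_COUNTS["spam"] + GROUP_COUNTS["spam_2"]
total = sum(GROUP_COUNTS.values())
spam_ratio = spam_total / total
print(total, round(spam_ratio, 2))  # 6047 0.31
```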
## Dataset Creation
### Curation Rationale
It is hoped this dataset can help verify that modern NLP tools can solve
old NLP problems.
### Source Data
#### Initial Data Collection and Normalization
[The upstream corpus description](https://spamassassin.apache.org/old/publiccorpus/readme.html)
goes into detail on collection methods. The work here to recover text bodies
is largely done with [email.parser](https://docs.python.org/3/library/email.parser.html)
and [ftfy](https://pypi.org/project/ftfy/).
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] | 2,258 | [
[
-0.034637451171875,
-0.029998779296875,
-0.0025043487548828125,
0.0157623291015625,
-0.0212554931640625,
-0.0187225341796875,
-0.023101806640625,
-0.0119476318359375,
0.024932861328125,
0.07025146484375,
-0.048553466796875,
-0.053009033203125,
-0.07098388671875,... |
taesiri/arxiv_qa | 2023-11-03T01:20:42.000Z | [
"task_categories:question-answering",
"language:en",
"license:mit",
"arxiv:2311.00618",
"arxiv:2311.00613",
"arxiv:2311.00571",
"arxiv:2311.00522",
"arxiv:2311.00430",
"arxiv:2311.00272",
"arxiv:2311.00257",
"arxiv:2311.00176",
"arxiv:2311.00059",
"arxiv:2311.00047",
"arxiv:2310.20707",
... | taesiri | null | null | 112 | 25 | 2023-07-11T16:14:06 | ---
license: mit
task_categories:
- question-answering
language:
- en
pretty_name: ArXiv QA
---
# ArXiv QA
(TBD) Automated ArXiv question answering via large language models
[Github](https://github.com/taesiri/ArXivQA) | [Homepage](https://arxiv.taesiri.xyz/) | [Simple QA - Hugging Face Space](https://huggingface.co/spaces/taesiri/ClaudeReadsArxiv)
---
# Automated Question Answering with ArXiv Papers
## Latest 25 Papers
- De-Diffusion Makes Text a Strong Cross-Modal Interface - [[Arxiv](https://arxiv.org/abs/2311.00618)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2311.00618.md)]
- Controllable Music Production with Diffusion Models and Guidance
Gradients - [[Arxiv](https://arxiv.org/abs/2311.00613)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2311.00613.md)]
- LLaVA-Interactive: An All-in-One Demo for Image Chat, Segmentation,
Generation and Editing - [[Arxiv](https://arxiv.org/abs/2311.00571)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2311.00571.md)]
- Text Rendering Strategies for Pixel Language Models - [[Arxiv](https://arxiv.org/abs/2311.00522)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2311.00522.md)]
- Distil-Whisper: Robust Knowledge Distillation via Large-Scale Pseudo
Labelling - [[Arxiv](https://arxiv.org/abs/2311.00430)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2311.00430.md)]
- ChatCoder: Chat-based Refine Requirement Improves LLMs' Code Generation - [[Arxiv](https://arxiv.org/abs/2311.00272)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2311.00272.md)]
- AMSP: Super-Scaling LLM Training via Advanced Model States Partitioning - [[Arxiv](https://arxiv.org/abs/2311.00257)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2311.00257.md)]
- ChipNeMo: Domain-Adapted LLMs for Chip Design - [[Arxiv](https://arxiv.org/abs/2311.00176)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2311.00176.md)]
- The Generative AI Paradox: "What It Can Create, It May Not Understand" - [[Arxiv](https://arxiv.org/abs/2311.00059)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2311.00059.md)]
- Grounding Visual Illusions in Language: Do Vision-Language Models
Perceive Illusions Like Humans? - [[Arxiv](https://arxiv.org/abs/2311.00047)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2311.00047.md)]
- What's In My Big Data? - [[Arxiv](https://arxiv.org/abs/2310.20707)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2310.20707.md)]
- SEINE: Short-to-Long Video Diffusion Model for Generative Transition and
Prediction - [[Arxiv](https://arxiv.org/abs/2310.20700)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2310.20700.md)]
- Learning From Mistakes Makes LLM Better Reasoner - [[Arxiv](https://arxiv.org/abs/2310.20689)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2310.20689.md)]
- LoRA Fine-tuning Efficiently Undoes Safety Training in Llama 2-Chat 70B - [[Arxiv](https://arxiv.org/abs/2310.20624)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2310.20624.md)]
- Unleashing the Power of Pre-trained Language Models for Offline
Reinforcement Learning - [[Arxiv](https://arxiv.org/abs/2310.20587)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2310.20587.md)]
- CapsFusion: Rethinking Image-Text Data at Scale - [[Arxiv](https://arxiv.org/abs/2310.20550)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2310.20550.md)]
- Leveraging Word Guessing Games to Assess the Intelligence of Large
Language Models - [[Arxiv](https://arxiv.org/abs/2310.20499)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2310.20499.md)]
- Does GPT-4 Pass the Turing Test? - [[Arxiv](https://arxiv.org/abs/2310.20216)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2310.20216.md)]
- Beyond U: Making Diffusion Models Faster & Lighter - [[Arxiv](https://arxiv.org/abs/2310.20092)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2310.20092.md)]
- The Impact of Depth and Width on Transformer Language Model
Generalization - [[Arxiv](https://arxiv.org/abs/2310.19956)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2310.19956.md)]
- Battle of the Backbones: A Large-Scale Comparison of Pretrained Models
across Computer Vision Tasks - [[Arxiv](https://arxiv.org/abs/2310.19909)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2310.19909.md)]
- CustomNet: Zero-shot Object Customization with Variable-Viewpoints in
Text-to-Image Diffusion Models - [[Arxiv](https://arxiv.org/abs/2310.19784)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2310.19784.md)]
- MM-VID: Advancing Video Understanding with GPT-4V(ision) - [[Arxiv](https://arxiv.org/abs/2310.19773)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2310.19773.md)]
- VideoCrafter1: Open Diffusion Models for High-Quality Video Generation - [[Arxiv](https://arxiv.org/abs/2310.19512)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2310.19512.md)]
- Text-to-3D with classifier score distillation - [[Arxiv](https://arxiv.org/abs/2310.19415)] [[QA](https://github.com/taesiri/ArXivQA/blob/main/papers/2310.19415.md)]
## List of Papers by Year
- [Papers for 2023](https://github.com/taesiri/ArXivQA/blob/main/Papers-2023.md)
- [Papers for 2022](https://github.com/taesiri/ArXivQA/blob/main/Papers-2022.md)
- [Papers for 2021](https://github.com/taesiri/ArXivQA/blob/main/Papers-2021.md)
- [Papers for 2020](https://github.com/taesiri/ArXivQA/blob/main/Papers-2020.md)
- [Papers for 2019](https://github.com/taesiri/ArXivQA/blob/main/Papers-2019.md)
- [Papers for 2018](https://github.com/taesiri/ArXivQA/blob/main/Papers-2018.md)
- [Papers for 2017](https://github.com/taesiri/ArXivQA/blob/main/Papers-2017.md)
- [Papers for 2016](https://github.com/taesiri/ArXivQA/blob/main/Papers-2016.md)
- [Papers for 2015](https://github.com/taesiri/ArXivQA/blob/main/Papers-2015.md)
- [Papers for 2014](https://github.com/taesiri/ArXivQA/blob/main/Papers-2014.md)
- [Papers for 2013](https://github.com/taesiri/ArXivQA/blob/main/Papers-2013.md)
- [Papers for 2012](https://github.com/taesiri/ArXivQA/blob/main/Papers-2012.md)
- [Papers for 2010](https://github.com/taesiri/ArXivQA/blob/main/Papers-2010.md)
- [Papers for 2009](https://github.com/taesiri/ArXivQA/blob/main/Papers-2009.md)
## Acknowledgements
This project is made possible through the generous support of
[Anthropic](https://www.anthropic.com/), who provided free access to the `Claude-2.0` API.
| 6,574 | [
[
-0.04541015625,
-0.054656982421875,
0.041961669921875,
0.00983428955078125,
0.01306915283203125,
0.01141357421875,
0.0004324913024902344,
-0.036834716796875,
0.009613037109375,
0.01337432861328125,
-0.0309295654296875,
-0.050445556640625,
-0.036895751953125,
... |
jjonhwa/V3 | 2023-07-13T06:09:28.000Z | [
"region:us"
] | jjonhwa | null | null | 0 | 25 | 2023-07-13T06:06:47 | ---
dataset_info:
features:
- name: context
dtype: string
- name: question
dtype: string
splits:
- name: train
num_bytes: 1063468075
num_examples: 571983
download_size: 164853265
dataset_size: 1063468075
---
# Dataset Card for "V3"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 394 | [
[
-0.041015625,
-0.00775146484375,
0.0303192138671875,
0.0167236328125,
-0.01111602783203125,
-0.0162200927734375,
0.043212890625,
-0.03192138671875,
0.0496826171875,
0.046783447265625,
-0.06396484375,
-0.051116943359375,
-0.036285400390625,
-0.009994506835937... |
BigSuperbPrivate/SpeakerVerification_Aishell1Train | 2023-07-17T18:20:09.000Z | [
"region:us"
] | BigSuperbPrivate | null | null | 0 | 25 | 2023-07-13T17:53:26 | ---
dataset_info:
features:
- name: file
dtype: string
- name: audio
dtype: audio
- name: file2
dtype: string
- name: instruction
dtype: string
- name: label
dtype: string
splits:
- name: train
num_bytes: 17452497240.0
num_examples: 120418
- name: validation
num_bytes: 2087682679.0
num_examples: 14331
download_size: 19207085144
dataset_size: 19540179919.0
---
# Dataset Card for "SpeakerVerification_AISHELL1Train"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 607 | [
[
-0.0477294921875,
-0.00855255126953125,
-0.0007929801940917969,
0.0150909423828125,
-0.0032806396484375,
-0.0052337646484375,
-0.0016222000122070312,
-0.0024547576904296875,
0.057281494140625,
0.0219573974609375,
-0.06439208984375,
-0.048919677734375,
-0.0434875... |
refugee-law-lab/canadian-legal-data | 2023-07-30T22:47:52.000Z | [
"size_categories:100K<n<1M",
"language:en",
"language:fr",
"license:cc-by-nc-4.0",
"arxiv:2207.00220",
"region:us"
] | refugee-law-lab | null | null | 0 | 25 | 2023-07-16T02:12:31 | ---
license: cc-by-nc-4.0
language:
- en
- fr
size_categories:
- 100K<n<1M
---
# Refugee Law Lab: Canadian Legal Data
## Dataset Summary
The [Refugee Law Lab](https://refugeelab.ca) supports bulk open-access to Canadian legal data to facilitate research and advocacy.
Bulk open-access helps avoid asymmetrical access-to-justice and amplification of marginalization that
results when commercial actors leverage proprietary
legal datasets for profit -- a particular concern in the border control setting.
The Canadian Legal Data dataset includes the unofficial full text of thousands of court and tribunal
decisions at the federal level. It can be used for legal analytics (i.e. identifying patterns in legal
decision-making), to test ML and NLP tools on a bilingual dataset of Canadian legal materials, and to
pretrain language models for various tasks.
## Dataset Structure
### Data Instances
#### Court Decisions
- SCC: Full text of Supreme Court of Canada decisions, based on the Refugee Law Lab's
[Supreme Court of Canada Bulk Decisions Dataset](https://refugeelab.ca/bulk-data/scc/) (1877 – 2023)
- FCA: Full text of Federal Court of Appeal (Canada) decisions that have been given a neutral citation, based on
the Refugee Law Lab's [Federal Court of Appeal Bulk Decisions Dataset](https://refugeelab.ca/bulk-data/fca/) (2001-2023)
- FC: Full text of Federal Court (Canada) decisions that have been given a neutral citation, based on
the Refugee Law Lab's [Federal Court Bulk Decisions Dataset](https://refugeelab.ca/bulk-data/fc/) (2001-2023)
- TCC: Full text of Tax Court of Canada decisions that have been given a neutral citation, based on
the Refugee Law Lab's [Tax Court of Canada Bulk Decisions Dataset](https://refugeelab.ca/bulk-data/tcc/) (2003-2023)
#### Tribunal Decisions
- RLLR: Full text of Immigration and Refugee Board, Refugee Protection Division Decisions, as reported in the
[Refugee Law Lab Reporter](https://refugeelab.ca/rllr), based on the Refugee Law Lab's [RLLR Bulk Decisions Dataset](https://refugeelab.ca/bulk-data/rllr/) (2019 – 2022)
### Data Fields
- citation1 (string): Legal citation for the document (neutral citation where available)
- citation2 (string): For some documents multiple citations are available (e.g. for some periods
the Supreme Court of Canada provided both an official reported citation and a neutral citation)
- dataset (string): Name of the data instance (e.g. "SCC", "FCA", "FC", "TCC", etc)
- year (int32): Year of the document date, which can be useful for filtering
- name (string): Name of the document, typically the style of cause of a case
- language (string): Language of the document, "en" for English, "fr" for French, "" for no language specified
- document_date (string): Date of the document, typically the date of a decision (yyyy-mm-dd)
- source_url (string): URL where the document was scraped and where the official version can be found
- scraped_timestamp (string): Date the document was scraped (yyyy-mm-dd)
- unofficial_text (string): Full text of the document (unofficial version, for official version see source_url)
- other (string): Field for additional metadata in JSON format, currently a blank string for most datasets
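For orientation, a record with these fields looks roughly like the sketch below. All values are invented placeholders (including the URL), not real data from the dataset:

```python
# Illustrative record shaped like the fields listed above; every value here
# is a made-up placeholder for orientation only.
record = {
    "citation1": "2001 FCA 1",       # neutral citation
    "citation2": "",                 # second citation, often empty
    "dataset": "FCA",                # data instance name
    "year": 2001,                    # year of document_date, handy for filtering
    "name": "Doe v. Canada",         # style of cause
    "language": "en",                # "en", "fr", or ""
    "document_date": "2001-02-01",   # yyyy-mm-dd
    "source_url": "https://example.org/decision",  # placeholder URL
    "scraped_timestamp": "2023-07-16",
    "unofficial_text": "...",        # full unofficial text of the decision
    "other": "",                     # JSON metadata, blank for most datasets
}

# The year field should agree with the document date.
assert record["year"] == int(record["document_date"][:4])
```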
### Data Languages
Many documents are available in both English and French. Some are only available in one of the two languages.
### Data Splits
The data has not been split, so all files are in the train split. If splitting for training/validation,
some thought should be given to whether it is necessary to limit to one language or to ensure that both
English and French versions of the same documents (where available) are put into the same split.
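One hedged way to keep both language versions of a decision in the same split is to assign splits deterministically from a key the versions share (assuming here that the citation, or another shared identifier, plays that role). A minimal sketch, not part of the dataset's API:

```python
import hashlib

def assign_split(doc_key: str, valid_frac: float = 0.1) -> str:
    """Deterministically assign a document to a split.

    Both language versions of a decision must pass the same doc_key
    (e.g. a shared neutral citation) so they land in the same split.
    """
    # Hash the key so the assignment is stable across runs and languages.
    h = int(hashlib.sha256(doc_key.encode("utf-8")).hexdigest(), 16)
    return "validation" if (h % 1000) < valid_frac * 1000 else "train"

# Hypothetical records: an English and a French version of the same decision.
records = [
    {"citation1": "2001 FCA 1", "language": "en"},
    {"citation1": "2001 FCA 1", "language": "fr"},
]
splits = {assign_split(r["citation1"]) for r in records}
assert len(splits) == 1  # both language versions end up in the same split
```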
### Data Loading
To load all data instances:
```python
from datasets import load_dataset
dataset = load_dataset("refugee-law-lab/canadian-legal-data", split="train")
```
To load only a specific data instance, for example only the SCC data instance:
```python
from datasets import load_dataset
dataset = load_dataset("refugee-law-lab/canadian-legal-data", split="train", data_dir="SCC")
```
## Dataset Creation
### Curation Rationale
The dataset includes all the [Bulk Legal Data](https://refugeelab.ca/bulk-data) made publicly available by
the Refugee Law Lab. The Lab has focused on federal courts (e.g. Supreme Court of Canada, Federal Court of
Appeal, Federal Court) as well as federal administrative tribunals (e.g. Immigration and Refugee Board) because
immigration and refugee law, which is the main area of interest of the Lab, operates mostly at the federal level.
### Source Data
#### Initial Data Collection and Normalization
Details (including links to github repos with code) are available via links on the Refugee Law Lab's
[Bulk Legal Data](https://refugeelab.ca/bulk-data/) page.
### Personal and Sensitive Information
Documents may include personal and sensitive information. All documents have been published online or
otherwise released publicly by the relevant court or tribunal. While the open court principle mandates
that court (and some tribunal) materials be made available to the public, there are privacy risks when these
materials become easily and widely available. These privacy risks are particularly acute for marginalized groups,
including refugees and other non-citizens whose personal and sensitive information is included in some of the
documents in this dataset. For example, imagine a repressive government working with private data aggregators to
collect information that is used to target families of political opponents who have sought asylum abroad.
One mechanism used to try to achieve a balance between the open court principle
and privacy is that in publishing the documents in this dataset, the relevant courts and tribunals prohibit
search engines from indexing the documents. Users of this data are required to do the same.
### Non-Official Versions
Documents included in this dataset are unofficial copies. For official versions published by
the Government of Canada, please see the source URLs.
### Non-Affiliation / Endorsement
The reproduction of documents in this dataset was not done in affiliation with, or with the endorsement of
the Government of Canada.
## Considerations for Using the Data
### Social Impact of Dataset
The Refugee Law Lab recognizes that this dataset -- and further research using the dataset -- raises challenging
questions about how to balance protecting privacy, enhancing government transparency, addressing information
asymmetries, and building technologies that leverage data to advance the rights and interests of
refugees and other displaced people, as well as assisting those working with them (rather than technologies that
[enhance the power of states](https://citizenlab.ca/2018/09/bots-at-the-gate-human-rights-analysis-automated-decision-making-in-canadas-immigration-refugee-system/)
to control the movement of people across borders).
More broadly, the Refugee Law Lab also recognizes that considerations around privacy and data protection are complex
and evolving. When working on migration, refugee law, data, technology and surveillance, we strive to foreground
intersectional understandings of the systemic harms perpetuated against groups historically made marginalized. We
encourage other users to do the same.
We also encourage users to try to avoid participating in building technologies that harm refugees and other
marginalized groups, as well as to connect with [community organizations](https://www.migrationtechmonitor.com/ways-to-help)
working in this space, and to [listen directly](https://www.migrationtechmonitor.com/about-us) and learn from people who are affected by new technologies.
We will review the use of these datasets periodically to examine whether continuing to publicly release them achieves
the Refugee Law Lab's goals of advancing the rights and interests of refugees and other marginalized groups without creating
disproportionate risks and harms, including risks related to privacy and human rights.
### Discussion of Biases
The dataset reflects many biases present in legal decision-making, including biases based on race, immigration status, gender, sexual orientation, religion, disability, socio-economic class, and other intersecting categories of discrimination.
### Other Known Limitations
Publicly available court and tribunal decisions are not a representative sample of legal decision-making -- and in some cases may reflect
significantly skewed samples. To give one example, the vast majority of Federal Court judicial reviews of refugee determinations involve negative
first instance decisions even though most first instance decisions are positive (this occurs because the government seldom applies for judicial
reviews of positive first instance decisions whereas claimants frequently apply for judicial review of negative decisions). As such, generative models
built partly on this dataset risk amplifying negative refugee decision-making (rather than more common positive refugee decision-making). Due to the ways that
legal datasets may be skewed, users of this dataset are encouraged to collaborate with or consult domain experts.
## Additional Information
### Licensing Information
Attribution-NonCommercial 4.0 International ([CC BY-NC 4.0](https://creativecommons.org/licenses/by-nc/4.0/))
NOTE: Users must also comply with upstream licensing for the [SCC](https://www.scc-csc.ca/terms-avis/notice-enonce-eng.aspx),
[FCA](https://www.fca-caf.gc.ca/en/pages/important-notices) & [FC](https://www.fct-cf.gc.ca/en/pages/important-notices) data instances, as
well as requests on source urls not to allow indexing of the documents by search engines to protect privacy. As a result, users must
not make the data available in formats or locations that can be indexed by search engines.
### Warranties / Representations
We make no warranties or representations that the data included in this dataset is complete or accurate. Data
were obtained through academic research projects, including projects that use automated processes.
While we try to make the data as accurate as possible, our methodologies may result in
inaccurate or outdated data. As such, data should be viewed as preliminary information aimed to prompt
further research and discussion, rather than as definitive information.
### Dataset Curators
[Sean Rehaag](https://www.osgoode.yorku.ca/faculty-and-staff/rehaag-sean), Osgoode Hall Law School Professor & Director of the Refugee Law Lab
### Citation Information
Sean Rehaag, "Refugee Law Lab: Canadian Legal Data" (2023) online: Hugging Face: <https://huggingface.co/datasets/refugee-law-lab/canadian-legal-data>.
### Acknowledgements
This project draws on research supported by the Social Sciences and Humanities Research Council and the Law Foundation of Ontario.
The project was inspired in part by the excellent prior work by [pile-of-law](https://huggingface.co/datasets/pile-of-law/pile-of-law) (Peter Henderson et al, "Pile of Law: Learning
Responsible Data Filtering from the Law and a 256GB Open-Source Legal Dataset" (2022), online: arXiv: https://arxiv.org/abs/2207.00220). | 11,226 | [
[
-0.0243072509765625,
-0.01555633544921875,
0.032562255859375,
0.025238037109375,
-0.01015472412109375,
0.01293182373046875,
-0.01171112060546875,
-0.0091400146484375,
0.00997161865234375,
0.046905517578125,
-0.019775390625,
-0.06170654296875,
-0.047760009765625,... |
rusheeliyer/uk-abs | 2023-08-11T16:40:28.000Z | [
"region:us"
] | rusheeliyer | null | null | 0 | 25 | 2023-08-11T16:10:46 | ---
dataset_info:
features:
- name: judgement
dtype: string
- name: summary
dtype: string
splits:
- name: train
num_bytes: 52800141
num_examples: 589
- name: test
num_bytes: 8174530
num_examples: 100
- name: validation
num_bytes: 10432092
num_examples: 104
download_size: 32973908
dataset_size: 71406763
---
# Dataset Card for "uk-abs"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 518 | [
[
-0.0467529296875,
0.01010894775390625,
0.0059661865234375,
-0.0017499923706054688,
-0.0281219482421875,
0.006999969482421875,
0.029449462890625,
-0.0228118896484375,
0.059906005859375,
0.0245208740234375,
-0.06243896484375,
-0.05438232421875,
-0.025787353515625,... |
sartmis1/text2sql-spider | 2023-08-18T04:01:41.000Z | [
"region:us"
] | sartmis1 | null | null | 0 | 25 | 2023-08-18T04:00:10 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: question
dtype: string
- name: query
dtype: string
splits:
- name: train
num_bytes: 1319601
num_examples: 7000
download_size: 339745
dataset_size: 1319601
---
# Dataset Card for "text2sql-spider-processed"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 490 | [
[
-0.0160064697265625,
-0.0175628662109375,
0.01444244384765625,
0.0149078369140625,
-0.023040771484375,
0.00453948974609375,
0.0175933837890625,
-0.0198516845703125,
0.056976318359375,
0.04400634765625,
-0.0518798828125,
-0.037109375,
-0.0477294921875,
0.0090... |
LawChat-tw/PT | 2023-08-24T03:49:38.000Z | [
"region:us"
] | LawChat-tw | null | null | 0 | 25 | 2023-08-24T03:44:56 | Entry not found | 15 | [
[
-0.021392822265625,
-0.01494598388671875,
0.05718994140625,
0.028839111328125,
-0.0350341796875,
0.046539306640625,
0.052490234375,
0.00507354736328125,
0.051361083984375,
0.01702880859375,
-0.052093505859375,
-0.01494598388671875,
-0.06036376953125,
0.03790... |
Flmc/DISC-Med-SFT | 2023-08-29T12:54:14.000Z | [
"task_categories:question-answering",
"task_categories:conversational",
"size_categories:100K<n<1M",
"language:zh",
"license:apache-2.0",
"medical",
"region:us"
] | Flmc | null | null | 37 | 25 | 2023-08-29T10:20:50 | ---
license: apache-2.0
task_categories:
- question-answering
- conversational
language:
- zh
tags:
- medical
size_categories:
- 100K<n<1M
---
This is a repository containing a subset of the DISC-Med-SFT Dataset.
Check [DISC-MedLLM](https://github.com/FudanDISC/DISC-MedLLM) for more information. | 298 | [
[
-0.04437255859375,
-0.034271240234375,
0.0186920166015625,
0.0092010498046875,
0.005298614501953125,
0.04034423828125,
0.0211181640625,
0.004764556884765625,
0.042572021484375,
0.0887451171875,
-0.0906982421875,
-0.05474853515625,
-0.0082855224609375,
0.0113... |
zelros/insurance-fr | 2023-10-21T13:19:38.000Z | [
"insurance",
"region:us"
] | zelros | null | null | 0 | 25 | 2023-09-01T07:16:26 | ---
tags:
- insurance
---
This dataset contains question/answer pairs from a French home insurance policy (MRH: Multi-Risk Home Insurance).
It comes from structuring the following open sources:
- https://www.mma.fr/assurance-habitation.html
- https://cap.mma.fr/files/live/sites/mmafr/files/documents-cg/cg410/Habitation_MMA_410p.pdf
The objective of this dataset is to contribute to open source research projects aiming to, for instance:
* fine-tune LLMs on high-quality datasets, specializing them in the insurance domain
* develop new question/answer applications using Retrieval Augmented Generation (RAG) for insurance contracts
* assess the knowledge of language models in the insurance field
* more generally, apply LLMs to the insurance domain for better understanding and increased transparency of this industry.
Other datasets of the same kind (but on other types of insurance, other languages, or from different sources) are also available - or will be available soon - and are part of this research effort. | 1,015 | [
[
-0.00681304931640625,
-0.05792236328125,
0.01160430908203125,
0.00841522216796875,
0.01690673828125,
-0.009368896484375,
0.01340484619140625,
-0.0242462158203125,
0.017974853515625,
0.078369140625,
-0.037841796875,
-0.036346435546875,
-0.0279388427734375,
-0... |
mHossain/end_bn_summary_v1 | 2023-09-04T11:43:51.000Z | [
"region:us"
] | mHossain | null | null | 0 | 25 | 2023-09-04T11:42:38 | Entry not found | 15 | [
[
-0.021392822265625,
-0.01494598388671875,
0.05718994140625,
0.028839111328125,
-0.0350341796875,
0.046539306640625,
0.052490234375,
0.00507354736328125,
0.051361083984375,
0.01702880859375,
-0.052093505859375,
-0.01494598388671875,
-0.06036376953125,
0.03790... |
Tristan/flickr30k_test | 2023-09-04T22:36:06.000Z | [
"region:us"
] | Tristan | null | null | 0 | 25 | 2023-09-04T22:34:11 | ---
configs:
- config_name: default
data_files:
- split: test
path: data/test-*
dataset_info:
features:
- name: image
dtype: image
- name: caption
list: string
- name: sentids
list: string
- name: split
dtype: string
- name: img_id
dtype: string
- name: filename
dtype: string
splits:
- name: test
num_bytes: 142117238.54065907
num_examples: 1000
download_size: 141466584
dataset_size: 142117238.54065907
---
# Dataset Card for "flickr30k_test"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 639 | [
[
-0.06365966796875,
-0.00618743896484375,
0.004917144775390625,
0.021148681640625,
-0.01457977294921875,
-0.00030684471130371094,
0.0330810546875,
-0.001087188720703125,
0.03240966796875,
0.0222625732421875,
-0.06976318359375,
-0.040740966796875,
-0.0300445556640... |
maximegmd/medmcqa_alpaca_format | 2023-09-12T11:29:11.000Z | [
"region:us"
] | maximegmd | null | null | 0 | 25 | 2023-09-12T11:28:37 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
- split: validation
path: data/validation-*
dataset_info:
features:
- name: question
dtype: string
- name: choices
sequence: string
- name: solution
dtype: string
splits:
- name: train
num_bytes: 120644997
num_examples: 182822
- name: test
num_bytes: 1077057
num_examples: 6150
- name: validation
num_bytes: 2009220
num_examples: 4183
download_size: 79503290
dataset_size: 123731274
---
# Dataset Card for "medmcqa_alpaca_format"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 752 | [
[
-0.04547119140625,
-0.02099609375,
0.02197265625,
0.0193023681640625,
-0.0287628173828125,
-0.0098419189453125,
0.031585693359375,
-0.0011072158813476562,
0.07427978515625,
0.04071044921875,
-0.061981201171875,
-0.06048583984375,
-0.04931640625,
-0.018432617... |
skaltenp/textworld_turn_top_demonstrations | 2023-09-18T10:06:22.000Z | [
"region:us"
] | skaltenp | null | null | 0 | 25 | 2023-09-18T10:06:09 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: valid
path: data/valid-*
- split: test
path: data/test-*
dataset_info:
features:
- name: id
dtype: string
- name: demonstration
sequence:
sequence: string
- name: moves
dtype: int64
- name: score
dtype: int64
splits:
- name: train
num_bytes: 16265368
num_examples: 4440
- name: valid
num_bytes: 787252
num_examples: 222
- name: test
num_bytes: 2640791
num_examples: 514
download_size: 3417454
dataset_size: 19693411
---
# Dataset Card for "textworld_turn_top_demonstrations"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 786 | [
[
-0.04608154296875,
-0.03741455078125,
0.0123443603515625,
0.032562255859375,
-0.01508331298828125,
-0.004573822021484375,
0.006778717041015625,
0.002933502197265625,
0.045013427734375,
0.035980224609375,
-0.0753173828125,
-0.05767822265625,
-0.04681396484375,
... |
backblaze/Drive_Stats | 2023-10-05T04:46:26.000Z | [
"annotations_creators:machine-generated",
"size_categories:100M<n<1B",
"license:other",
"region:us"
] | backblaze | null | null | 0 | 25 | 2023-09-20T20:51:43 | ---
license:
- other
license_details: 'https://www.backblaze.com/cloud-storage/resources/hard-drive-test-data#howYouCanUseTheData'
annotations_creators:
- 'machine-generated'
pretty_name: 'Drive Stats'
size_categories:
- '100M<n<1B'
---
# Drive Stats
[**Drive Stats**](https://www.backblaze.com/cloud-storage/resources/hard-drive-test-data) is a public data set of daily metrics on the hard drives in Backblaze’s [cloud storage infrastructure](https://www.backblaze.com/cloud-storage) that Backblaze has open-sourced since April 2013. Currently, Drive Stats comprises over 388 million records, rising by over 240,000 records per day. Drive Stats is an append-only dataset effectively logging daily statistics that once written are never updated or deleted.
This is our first Hugging Face dataset; feel free to suggest improvements by creating a new discussion on the [Community](https://huggingface.co/datasets/backblaze/Drive_Stats/discussions)!
## Drive Stats Q2 2023 Snapshot
* Drive Count: 240,940
* Drive Failures: 1,339
* Drive Days: 21.1M
* Annualized Failure Rate: 2.28%
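The annualized failure rate can be approximated from the other two figures in the snapshot. This is a back-of-the-envelope sketch using the rounded 21.1M drive days shown above, so it lands close to, but not exactly at, the published 2.28% (which uses exact drive-day counts):

```python
# Approximate annualized failure rate (AFR) from the Q2 2023 snapshot.
# AFR = failures / (drive_days / 365) * 100
drive_failures = 1_339
drive_days = 21_100_000  # rounded figure from the snapshot above

afr = drive_failures / (drive_days / 365) * 100
print(f"AFR is roughly {afr:.2f}%")  # near, not exactly, the published 2.28%
```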
## Overview of the Hard Drive Data
Each day in the Backblaze data center, we take a snapshot of each operational hard drive. This snapshot includes basic drive information along with the S.M.A.R.T. statistics reported by that drive. The daily snapshot of one drive is one record or row of data. All of the drive snapshots for a given day are collected into a file consisting of a row for each active hard drive. The format of this file is a "csv" (Comma Separated Values) file. Each day this file is named in the format YYYY-MM-DD.csv, for example, 2013-04-10.csv.
The first row of each file contains the column names; the remaining rows are the actual data. The columns are as follows:
* Date – The date of the snapshot in yyyy-mm-dd format.
* Serial Number – The manufacturer-assigned serial number of the drive.
* Model – The manufacturer-assigned model number of the drive.
* Capacity – The drive capacity in bytes.
* Failure – Contains a “0” if the drive is OK. Contains a “1” if this is the last day the drive was operational before failing.
* SMART Stats:
* 2013-2014: 80 columns of data, that are the Raw and Normalized values for 40 different SMART stats as reported by the given drive. Each value is the number reported by the drive.
* 2015-2017: 90 columns of data, that are the Raw and Normalized values for 45 different SMART stats as reported by the given drive. Each value is the number reported by the drive.
* 2018 (Q1): 100 columns of data, that are the Raw and Normalized values for 50 different SMART stats as reported by the given drive. Each value is the number reported by the drive.
* 2018 (Q2): 104 columns of data, that are the Raw and Normalized values for 52 different SMART stats as reported by the given drive. Each value is the number reported by the drive.
* 2018 (Q4): 124 columns of data, that are the Raw and Normalized values for 62 different SMART stats as reported by the given drive. Each value is the number reported by the drive.
## Helpful Hints and Caveats
### Schema Changes
The schema may change from quarter to quarter. The basic information (date, serial_number, model, capacity_bytes, and failure) will not change. All of the changes will be in the number of SMART attributes reported for all of the drives in a given quarter. There will never be more than 255 pairs of SMART attributes reported. When you load the CSV files for each quarter, you will need to account for the potential of a different number of SMART attributes from the previous quarter.
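One hedged way to cope with the varying schema is to rely only on the stable base columns and treat whatever SMART columns happen to be present as optional. The sketch below uses a tiny made-up two-row sample in the Drive Stats layout, not real data:

```python
import csv
import io

# Tiny invented sample in the Drive Stats layout; real files have many more
# SMART columns, and the count varies by quarter.
sample = """date,serial_number,model,capacity_bytes,failure,smart_1_raw
2023-04-01,SN001,ModelA,4000787030016,0,17
2023-04-01,SN002,ModelB,4000787030016,1,523
"""

BASE_COLUMNS = ["date", "serial_number", "model", "capacity_bytes", "failure"]

rows = list(csv.DictReader(io.StringIO(sample)))
# Keep the stable base columns; collect any SMART columns that happen to exist.
base = [{k: r[k] for k in BASE_COLUMNS} for r in rows]
smart_cols = [c for c in rows[0] if c.startswith("smart_")]

failures = sum(int(r["failure"]) for r in base)
print(failures, smart_cols)  # 1 ['smart_1_raw']
```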
## How You Can Use the Data
You can download and use this data for free for your own purposes; all we ask are three things:
* you cite Backblaze as the source if you use the data,
* you accept that you are solely responsible for how you use the data, and
* you do not sell this data to anyone, it is free. | 3,921 | [
[
-0.0309600830078125,
-0.035919189453125,
0.039215087890625,
0.025054931640625,
-0.0117340087890625,
-0.0169677734375,
0.01605224609375,
-0.019287109375,
0.035369873046875,
0.0303955078125,
-0.087646484375,
-0.0418701171875,
0.00004839897155761719,
-0.0039634... |
hasangoni/Electron_microscopy_dataset | 2023-09-25T07:57:56.000Z | [
"task_categories:image-segmentation",
"size_categories:10K<n<100K",
"language:en",
"microscopy",
"EPFL",
"image segmentation",
"region:us"
] | hasangoni | null | null | 0 | 25 | 2023-09-22T16:54:18 | ---
task_categories:
- image-segmentation
language:
- en
tags:
- microscopy
- EPFL
- image segmentation
pretty_name: electron microscopy patch image
size_categories:
- 10K<n<100K
---
The dataset:
- Is derived from the existing dataset available at https://www.epfl.ch/labs/cvlab/data/data-em/.
- Contains patches of size (256, 256).
- Removes any patches with empty masks to ensure quality.
- Has the same license applied as the original dataset.
- Please refer to the license for information on allowed usage.
- If you have any questions or concerns about the dataset, please do not hesitate to contact me. | 608 | [
[
-0.052398681640625,
-0.046142578125,
0.022430419921875,
0.0272216796875,
-0.0199127197265625,
-0.0010862350463867188,
0.001850128173828125,
-0.0264129638671875,
0.0308074951171875,
0.10675048828125,
-0.049163818359375,
-0.03515625,
-0.01200103759765625,
0.00... |
Tural/processed_bert_dataset | 2023-10-04T23:04:01.000Z | [
"region:us"
] | Tural | null | null | 0 | 25 | 2023-10-04T22:54:22 | ---
dataset_info:
features:
- name: input_ids
sequence: int32
- name: token_type_ids
sequence: int8
- name: attention_mask
sequence: int8
- name: special_tokens_mask
sequence: int8
splits:
- name: train
num_bytes: 24943076880
num_examples: 27349865
download_size: 5901536405
dataset_size: 24943076880
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "processed_bert_dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 609 | [
[
-0.043609619140625,
-0.0272369384765625,
0.0171356201171875,
0.025054931640625,
-0.0164031982421875,
-0.00583648681640625,
0.005706787109375,
-0.023345947265625,
0.060333251953125,
0.035980224609375,
-0.0716552734375,
-0.045257568359375,
-0.03515625,
-0.0260... |
ContextualAI/nq_open | 2023-10-07T00:34:08.000Z | [
"region:us"
] | ContextualAI | null | null | 0 | 25 | 2023-10-07T00:33:44 | ---
dataset_info:
features:
- name: query
dtype: string
- name: gold_generation
sequence: string
splits:
- name: train
num_bytes: 5990520
num_examples: 79168
- name: dev
num_bytes: 660716
num_examples: 8757
- name: test
num_bytes: 313829
num_examples: 3610
download_size: 4681299
dataset_size: 6965065
---
# Dataset Card for "nq_open"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 517 | [
[
-0.0281219482421875,
-0.0127410888671875,
0.0015735626220703125,
0.0011310577392578125,
-0.0089874267578125,
-0.00795745849609375,
0.0192718505859375,
0.00185394287109375,
0.048095703125,
0.040496826171875,
-0.05853271484375,
-0.055084228515625,
-0.0187530517578... |
PocketDoc/Choose-Your-Story-Long-Text-Adventures | 2023-10-16T04:39:05.000Z | [
"task_categories:conversational",
"language:en",
"not-for-all-audiences",
"region:us"
] | PocketDoc | null | null | 5 | 25 | 2023-10-07T20:04:56 | ---
tags:
- not-for-all-audiences
task_categories:
- conversational
language:
- en
pretty_name: Choose Your Story Novel Format Text Adventures
---
This is the 'CYS' text adventure dataset converted to a chat format with system messages. The system messages were randomly constructed from a table of phrases and templates. The original data can be found in the .7z archive.
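A minimal sketch of how such system messages might be assembled. The phrase tables below are invented for illustration; the dataset's actual phrases and templates are not shown here:

```python
import random

# Hypothetical phrase/template tables -- not the dataset's real ones.
OPENERS = ["You are a text adventure game.", "You narrate an interactive story."]
STYLES = ["Write in second person.", "Keep a novel-like prose style."]
TEMPLATE = "{opener} {style}"

def build_system_message(rng: random.Random) -> str:
    # Randomly combine one phrase from each table into a system message.
    return TEMPLATE.format(opener=rng.choice(OPENERS), style=rng.choice(STYLES))

msg = build_system_message(random.Random(0))
print(msg)
```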
**Credits:**
Thank you to VE Forbryderne from KoboldAI for scraping the dataset. | 455 | [
[
0.0016832351684570312,
-0.025177001953125,
0.015167236328125,
0.0187835693359375,
-0.030426025390625,
-0.00017309188842773438,
-0.01430511474609375,
-0.01311492919921875,
0.06097412109375,
0.08306884765625,
-0.08245849609375,
-0.0384521484375,
-0.02117919921875,... |
hippocrates/DDI2013_test | 2023-10-12T19:21:33.000Z | [
"region:us"
] | hippocrates | null | null | 0 | 25 | 2023-10-08T22:20:53 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: valid
path: data/valid-*
- split: test
path: data/test-*
dataset_info:
features:
- name: id
dtype: string
- name: query
dtype: string
- name: answer
dtype: string
- name: choices
sequence: string
- name: gold
dtype: int64
splits:
- name: train
num_bytes: 20658927
num_examples: 18779
- name: valid
num_bytes: 8739656
num_examples: 7244
- name: test
num_bytes: 6455758
num_examples: 5761
download_size: 3113073
dataset_size: 35854341
---
# Dataset Card for "DDI2013_test"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 782 | [
[
-0.05126953125,
-0.028106689453125,
0.0188140869140625,
0.0311431884765625,
-0.004444122314453125,
-0.0118408203125,
0.039093017578125,
-0.003353118896484375,
0.049285888671875,
0.0110931396484375,
-0.0714111328125,
-0.04461669921875,
-0.036285400390625,
-0.... |
darcycao/finaldataset | 2023-10-09T10:19:10.000Z | [
"region:us"
] | darcycao | null | null | 0 | 25 | 2023-10-09T10:18:56 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
jamescalam/ai-arxiv | 2023-10-10T12:57:37.000Z | [
"region:us"
] | jamescalam | null | null | 9 | 25 | 2023-10-09T21:07:32 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
open-phi/ft-sample | 2023-10-12T05:10:35.000Z | [
"region:us"
] | open-phi | null | null | 4 | 25 | 2023-10-10T03:04:33 | ---
dataset_info:
features:
- name: topic
dtype: string
- name: model
dtype: string
- name: concepts
sequence: string
- name: outline
sequence: string
- name: markdown
dtype: string
splits:
- name: train
num_bytes: 299599047
num_examples: 4121
download_size: 92594714
dataset_size: 299599047
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "ft-sample"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 593 | [
[
-0.048187255859375,
-0.0260162353515625,
0.016815185546875,
0.01244354248046875,
-0.0181884765625,
0.013916015625,
0.0233612060546875,
-0.01422882080078125,
0.05963134765625,
0.0256805419921875,
-0.06890869140625,
-0.0445556640625,
-0.0333251953125,
-0.01702... |