| id | lastModified | tags | author | description | citation | likes | downloads | created | card | card_len | embeddings |
|---|---|---|---|---|---|---|---|---|---|---|---|
carnival13/rbrt_eval_sur_full_lrg | 2023-10-15T03:08:29.000Z | [
"region:us"
] | carnival13 | null | null | 0 | 5 | 2023-10-15T03:08:22 | ---
dataset_info:
features:
- name: domain_label
dtype: int64
- name: pass_label
dtype: int64
- name: input
dtype: string
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
splits:
- name: train
num_bytes: 58030544
num_examples: 22480
download_size: 16743699
dataset_size: 58030544
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "rbrt_eval_sur_full_lrg"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 617 | [
[
-0.04058837890625,
-0.052154541015625,
0.01212310791015625,
0.01605224609375,
-0.0162506103515625,
0.02520751953125,
0.000896453857421875,
-0.00417327880859375,
0.046875,
0.042449951171875,
-0.040435791015625,
-0.048858642578125,
-0.0307464599609375,
-0.0072... |
wasertech/assistant-llama2-7b-chat-dpo | 2023-10-17T01:06:18.000Z | [
"region:us"
] | wasertech | null | null | 1 | 5 | 2023-10-15T03:37:57 | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: chosen
dtype: string
- name: rejected
dtype: string
splits:
- name: train
num_bytes: 214004
num_examples: 38
download_size: 91570
dataset_size: 214004
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "assistant-llama2-7b-chat-rlhf"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 525 | [
[
-0.0240478515625,
-0.03778076171875,
0.0155029296875,
0.034088134765625,
-0.030487060546875,
0.01232147216796875,
0.02471923828125,
-0.0266876220703125,
0.0682373046875,
0.041015625,
-0.06488037109375,
-0.04583740234375,
-0.05242919921875,
-0.00616455078125,... |
adityarra07/test_data | 2023-10-19T11:45:48.000Z | [
"region:us"
] | adityarra07 | null | null | 0 | 5 | 2023-10-15T04:25:05 | ---
dataset_info:
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
dtype: string
- name: id
dtype: string
splits:
- name: train
num_bytes: 133411975.96105696
num_examples: 1001
download_size: 134756772
dataset_size: 133411975.96105696
---
# Dataset Card for "test_data"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 484 | [
[
-0.04241943359375,
-0.029083251953125,
0.00891876220703125,
0.01061248779296875,
-0.006988525390625,
-0.0013265609741210938,
0.0161590576171875,
-0.0019550323486328125,
0.048004150390625,
0.0229949951171875,
-0.054931640625,
-0.05169677734375,
-0.032806396484375... |
sargishunanyan/thermostats | 2023-10-18T16:07:35.000Z | [
"task_categories:image-segmentation",
"roboflow",
"roboflow2huggingface",
"region:us"
] | sargishunanyan | null | @misc{ thermo-part-3_dataset,
title = { Thermo, part 3 Dataset },
type = { Open Source Dataset },
author = { Yolo },
howpublished = { \url{ https://universe.roboflow.com/yolo-po0ro/thermo-part-3 } },
url = { https://universe.roboflow.com/yolo-po0ro/thermo-part-3 },
journal = { Roboflow Universe },
publisher = { Roboflow },
year = { 2023 },
month = { oct },
note = { visited on 2023-10-18 },
} | 0 | 5 | 2023-10-16T04:11:07 | ---
task_categories:
- image-segmentation
tags:
- roboflow
- roboflow2huggingface
---
<div align="center">
<img width="640" alt="sargishunanyan/thermostats" src="https://huggingface.co/datasets/sargishunanyan/thermostats/resolve/main/thumbnail.jpg">
</div>
### Dataset Labels
```
['housing', 'thermostat']
```
### Number of Images
```json
{'valid': 35, 'test': 18, 'train': 123}
```
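As a quick sanity check, the per-split counts above can be summed and compared against the 176 images reported in the dataset summary below:

```python
# Per-split image counts, as reported in the "Number of Images" section.
splits = {"valid": 35, "test": 18, "train": 123}

# The total should match the 176 images stated in the dataset summary.
total = sum(splits.values())
print(total)  # 176
```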
### How to Use
- Install [datasets](https://pypi.org/project/datasets/):
```bash
pip install datasets
```
- Load the dataset:
```python
from datasets import load_dataset
ds = load_dataset("sargishunanyan/thermostats", name="full")
example = ds['train'][0]
```
### Roboflow Dataset Page
[https://universe.roboflow.com/yolo-po0ro/thermo-part-3/dataset/1](https://universe.roboflow.com/yolo-po0ro/thermo-part-3/dataset/1?ref=roboflow2huggingface)
### Citation
```
@misc{ thermo-part-3_dataset,
title = { Thermo, part 3 Dataset },
type = { Open Source Dataset },
author = { Yolo },
howpublished = { \url{ https://universe.roboflow.com/yolo-po0ro/thermo-part-3 } },
url = { https://universe.roboflow.com/yolo-po0ro/thermo-part-3 },
journal = { Roboflow Universe },
publisher = { Roboflow },
year = { 2023 },
month = { oct },
note = { visited on 2023-10-18 },
}
```
### License
CC BY 4.0
### Dataset Summary
This dataset was exported via roboflow.com on October 16, 2023 at 4:27 AM GMT
Roboflow is an end-to-end computer vision platform that helps you
* collaborate with your team on computer vision projects
* collect & organize images
* understand and search unstructured image data
* annotate images and create datasets
* export, train, and deploy computer vision models
* use active learning to improve your dataset over time
For state-of-the-art computer vision training notebooks you can use with this dataset,
visit https://github.com/roboflow/notebooks
To find over 100k other datasets and pre-trained models, visit https://universe.roboflow.com
The dataset includes 176 images.
Thermostats are annotated in COCO format.
The following pre-processing was applied to each image:
* Auto-orientation of pixel data (with EXIF-orientation stripping)
* Resize to 640x640 (Stretch)
No image augmentation techniques were applied.
| 2,271 | [
[
-0.028533935546875,
-0.0141754150390625,
0.01540374755859375,
-0.004425048828125,
-0.0203857421875,
-0.022674560546875,
-0.01117706298828125,
-0.0136566162109375,
0.032257080078125,
0.016998291015625,
-0.03582763671875,
-0.039581298828125,
-0.0203857421875,
... |
OdiaGenAI/instruction_set_hindi_1035 | 2023-10-16T13:09:51.000Z | [
"task_categories:question-answering",
"size_categories:1K<n<10K",
"language:hi",
"art",
"sports",
"politics",
"india",
"music",
"cooking",
"region:us"
] | OdiaGenAI | null | null | 0 | 5 | 2023-10-16T13:02:20 | ---
task_categories:
- question-answering
language:
- hi
tags:
- art
- sports
- politics
- india
- music
- cooking
size_categories:
- 1K<n<10K
---
This dataset was created using the [OliveFarm](https://huggingface.co/spaces/OdiaGenAI/Olive_Farm) web application.
The following domains are covered in this dataset:
- Art
- Sports (Cricket, Football, Olympics)
- Politics
- History
- Cooking
- Environment
- Music
Contributors:
- Shahid
- Parul | 452 | [
[
-0.04364013671875,
-0.03839111328125,
-0.00968170166015625,
0.01126861572265625,
0.0015010833740234375,
0.0100250244140625,
0.006641387939453125,
-0.040985107421875,
0.037750244140625,
0.045440673828125,
-0.0726318359375,
-0.061187744140625,
-0.0305023193359375,... |
gayathrimanoj/dataset_all | 2023-10-16T15:13:09.000Z | [
"region:us"
] | gayathrimanoj | null | null | 0 | 5 | 2023-10-16T14:35:47 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
DataProvenanceInitiative/cot_submix_original | 2023-10-16T17:31:56.000Z | [
"region:us"
] | DataProvenanceInitiative | null | null | 0 | 5 | 2023-10-16T17:31:52 | ---
dataset_info:
features:
- name: inputs
dtype: string
- name: targets
dtype: string
- name: task_source
dtype: string
- name: task_name
dtype: string
- name: template_type
dtype: string
splits:
- name: train
num_bytes: 209004809
num_examples: 183848
download_size: 100293074
dataset_size: 209004809
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "cot_submix_original"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 613 | [
[
-0.047332763671875,
-0.01207733154296875,
0.00258636474609375,
0.01715087890625,
-0.03656005859375,
0.01043701171875,
0.019683837890625,
0.005664825439453125,
0.06597900390625,
0.047332763671875,
-0.064453125,
-0.050506591796875,
-0.051849365234375,
-0.01455... |
DataProvenanceInitiative/t0_submix_original | 2023-10-16T17:40:22.000Z | [
"region:us"
] | DataProvenanceInitiative | null | null | 0 | 5 | 2023-10-16T17:39:08 | ---
dataset_info:
features:
- name: inputs
dtype: string
- name: targets
dtype: string
- name: task_source
dtype: string
- name: task_name
dtype: string
- name: template_type
dtype: string
splits:
- name: train
num_bytes: 4602180562
num_examples: 1650308
download_size: 2734694485
dataset_size: 4602180562
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "t0_submix_original"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 616 | [
[
-0.0390625,
-0.005115509033203125,
0.0002276897430419922,
0.0155487060546875,
-0.034515380859375,
0.00897979736328125,
0.02655029296875,
0.0118255615234375,
0.078125,
0.035736083984375,
-0.0714111328125,
-0.04449462890625,
-0.04888916015625,
-0.0138015747070... |
KonstantyM/science_qa_input_label_prep | 2023-10-16T23:09:13.000Z | [
"region:us"
] | KonstantyM | null | null | 0 | 5 | 2023-10-16T22:49:09 | ---
dataset_info:
features:
- name: question
dtype: string
- name: answer
dtype: string
- name: context
dtype: string
- name: input
dtype: string
- name: label
dtype: string
splits:
- name: train
num_bytes: 14836491177
num_examples: 4281664
download_size: 8551603528
dataset_size: 14836491177
---
# Dataset Card for "science_qa_input_label_prep"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 526 | [
[
-0.038909912109375,
-0.0179443359375,
0.033233642578125,
0.01552581787109375,
-0.00881195068359375,
0.01397705078125,
0.034759521484375,
-0.0012454986572265625,
0.06231689453125,
0.01284027099609375,
-0.052032470703125,
-0.056793212890625,
-0.0302276611328125,
... |
pbaoo2705/biomedqa_processed | 2023-10-17T09:59:50.000Z | [
"region:us"
] | pbaoo2705 | null | null | 0 | 5 | 2023-10-17T00:49:20 | ---
dataset_info:
features:
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
- name: answer
dtype: string
- name: start_positions
dtype: int64
- name: end_positions
dtype: int64
splits:
- name: train
num_bytes: 7624914
num_examples: 2992
download_size: 1320802
dataset_size: 7624914
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "biomedqa_processed"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 616 | [
[
-0.0258941650390625,
-0.01904296875,
0.03192138671875,
0.0038661956787109375,
-0.0161285400390625,
0.0130615234375,
0.032073974609375,
-0.0211029052734375,
0.06402587890625,
0.042999267578125,
-0.0615234375,
-0.05902099609375,
-0.0423583984375,
-0.0034027099... |
pnadel/perseus_grc | 2023-10-17T13:45:41.000Z | [
"region:us"
] | pnadel | null | null | 0 | 5 | 2023-10-17T13:45:29 | ---
dataset_info:
features:
- name: file
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 194782706
num_examples: 783575
download_size: 70938486
dataset_size: 194782706
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "perseus_grc"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 479 | [
[
-0.03570556640625,
-0.01019287109375,
0.025390625,
0.011627197265625,
-0.0129241943359375,
-0.0009698867797851562,
0.027801513671875,
0.0045928955078125,
0.06561279296875,
0.037872314453125,
-0.068359375,
-0.07684326171875,
-0.043701171875,
-0.01463317871093... |
Cesar42/EmotionTrainLlama2_5k | 2023-10-17T22:15:32.000Z | [
"region:us"
] | Cesar42 | null | null | 0 | 5 | 2023-10-17T22:11:40 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
TanjimKIT/Cyberbullying-detection-in-Chittagonian-dialect-of-Bangla-CBDCB | 2023-10-25T09:26:41.000Z | [
"task_categories:text-classification",
"size_categories:1K<n<10K",
"language:bn",
"license:apache-2.0",
"doi:10.57967/hf/1225",
"region:us"
] | TanjimKIT | null | null | 1 | 5 | 2023-10-18T09:01:57 | ---
license: apache-2.0
task_categories:
- text-classification
language:
- bn
pretty_name: CBDCB
size_categories:
- 1K<n<10K
---
Published Paper Information
If you use the CBDCB dataset, please cite the following paper:
```
@article{mahmud2023cyberbullying,
title={Cyberbullying detection for low-resource languages and dialects: Review of the state of the art},
author={Mahmud, Tanjim and Ptaszynski, Michal and Eronen, Juuso and Masui, Fumito},
journal={Information Processing \& Management},
volume={60},
number={5},
pages={103454},
year={2023},
publisher={Elsevier}
}
``` | 598 | [
[
-0.03472900390625,
-0.07049560546875,
0.00923919677734375,
0.0469970703125,
-0.0169525146484375,
0.014251708984375,
-0.00910186767578125,
-0.03045654296875,
0.02606201171875,
0.03350830078125,
-0.014556884765625,
-0.057098388671875,
-0.0450439453125,
-0.0026... |
ajax-law/selenium-dataset | 2023-10-20T16:29:24.000Z | [
"region:us"
] | ajax-law | null | null | 0 | 5 | 2023-10-18T16:23:50 | ---
dataset_info:
features:
- name: file_name
dtype: string
- name: image_id
dtype: int64
- name: height
dtype: int64
- name: width
dtype: int64
- name: annotations
list:
- name: bbox
sequence: float64
- name: bbox_mode
dtype: int64
- name: category_id
dtype: int64
- name: segmentation
sequence:
sequence:
sequence: float64
- name: image_object
dtype: image
splits:
- name: train
num_bytes: 654562556.8
num_examples: 3200
download_size: 710193405
dataset_size: 654562556.8
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "selenium-dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 846 | [
[
-0.041778564453125,
-0.0148468017578125,
0.0179443359375,
0.0083465576171875,
0.003570556640625,
0.0012617111206054688,
0.0184326171875,
-0.0002111196517944336,
0.05126953125,
0.0194854736328125,
-0.06378173828125,
-0.049346923828125,
-0.0295257568359375,
-0... |
sandeep12345/Llama2_Finetune_data | 2023-10-18T16:40:42.000Z | [
"region:us"
] | sandeep12345 | null | null | 0 | 5 | 2023-10-18T16:40:20 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
birdhouse5/shrekdataset | 2023-10-19T09:50:29.000Z | [
"region:us"
] | birdhouse5 | null | null | 0 | 5 | 2023-10-19T07:06:24 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
tyzhu/find_first_sent_train_30_eval_10 | 2023-10-31T14:47:16.000Z | [
"region:us"
] | tyzhu | null | null | 0 | 5 | 2023-10-19T13:36:32 | ---
dataset_info:
features:
- name: inputs
dtype: string
- name: targets
dtype: string
- name: title
dtype: string
- name: context
dtype: string
splits:
- name: train
num_bytes: 90239
num_examples: 70
- name: validation
num_bytes: 10621
num_examples: 10
download_size: 64954
dataset_size: 100860
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
---
# Dataset Card for "find_first_sent_train_30_eval_10"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 674 | [
[
-0.0445556640625,
-0.016510009765625,
0.019287109375,
0.03643798828125,
-0.0084686279296875,
-0.003505706787109375,
0.019683837890625,
0.0162200927734375,
0.049224853515625,
0.021209716796875,
-0.0701904296875,
-0.051239013671875,
-0.045684814453125,
-0.0135... |
MDCurrent/fill500 | 2023-10-19T18:29:18.000Z | [
"region:us"
] | MDCurrent | null | null | 0 | 5 | 2023-10-19T18:26:38 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
tuandunghcmut/Image | 2023-10-21T03:38:01.000Z | [
"region:us"
] | tuandunghcmut | null | null | 0 | 5 | 2023-10-20T01:57:44 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
jjonhwa/SECOND_KOWIKI_RETRIEVE_300_V2 | 2023-10-20T03:19:54.000Z | [
"region:us"
] | jjonhwa | null | null | 0 | 5 | 2023-10-20T03:19:44 | ---
dataset_info:
features:
- name: ctxs
list:
- name: score
dtype: float64
- name: text
dtype: string
- name: summary
dtype: string
splits:
- name: train
num_bytes: 189146280
num_examples: 15504
download_size: 97933190
dataset_size: 189146280
---
# Dataset Card for "SECOND_KOWIKI_RETRIEVE_300_V2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 481 | [
[
-0.0391845703125,
-0.00612640380859375,
0.0183868408203125,
0.00923919677734375,
-0.02056884765625,
-0.00972747802734375,
0.01435089111328125,
-0.01320648193359375,
0.0491943359375,
0.0428466796875,
-0.0660400390625,
-0.0259246826171875,
-0.044158935546875,
... |
mugithi/ubuntu_question_answer | 2023-10-21T19:49:50.000Z | [
"region:us"
] | mugithi | null | null | 0 | 5 | 2023-10-20T06:22:11 | ---
dataset_info:
features:
- name: question
dtype: string
- name: answer
dtype: string
splits:
- name: train
num_bytes: 2051467
num_examples: 12024
- name: test
num_bytes: 883252
num_examples: 5154
download_size: 1928151
dataset_size: 2934719
---
# Dataset Card for "ubuntu_question_answer"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 464 | [
[
-0.053436279296875,
-0.037200927734375,
0.0212860107421875,
0.004558563232421875,
-0.023712158203125,
0.0037670135498046875,
0.0144195556640625,
-0.0009975433349609375,
0.05853271484375,
0.0290679931640625,
-0.0474853515625,
-0.034515380859375,
-0.01826477050781... |
aloobun/mathjson-instruct | 2023-10-20T09:14:12.000Z | [
"license:apache-2.0",
"region:us"
] | aloobun | null | null | 0 | 5 | 2023-10-20T08:09:12 | ---
license: apache-2.0
---
Fork of [jondurbin/mathjson-alpha](https://huggingface.co/datasets/jondurbin/mathjson-alpha?row=0) | 127 | [
[
-0.048675537109375,
-0.03265380859375,
0.0126190185546875,
0.0004467964172363281,
-0.0018453598022460938,
0.004383087158203125,
0.015594482421875,
-0.0233001708984375,
0.081787109375,
0.03985595703125,
-0.07275390625,
-0.048828125,
-0.0270843505859375,
-0.00... |
SM200203102097/skinDiseasesDetectionModel | 2023-10-20T12:19:36.000Z | [
"region:us"
] | SM200203102097 | null | null | 1 | 5 | 2023-10-20T12:17:14 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': Actinic_keratoses
'1': Basal_cell_carcinoma
'2': Benign_keratosis
'3': Dermatofibroma
'4': Melanocytic_nevi
'5': Melanoma
'6': Vascular_lesions
splits:
- name: train
num_bytes: 1918967282.53
num_examples: 11865
download_size: 2809338083
dataset_size: 1918967282.53
---
# Dataset Card for "skinDiseasesDetectionModel"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 750 | [
[
-0.0277099609375,
-0.01534271240234375,
-0.005504608154296875,
0.0118560791015625,
-0.0173797607421875,
0.00933074951171875,
0.0258636474609375,
-0.024261474609375,
0.047821044921875,
0.05218505859375,
-0.062744140625,
-0.06829833984375,
-0.0318603515625,
-0... |
2003vishnu2003/llm_train_modify | 2023-10-20T14:27:08.000Z | [
"region:us"
] | 2003vishnu2003 | null | null | 0 | 5 | 2023-10-20T14:25:51 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
cmu-mlsp/encodec_24khz-opt-125m-pretrained-ft-librispeech_asr-test.clean-features | 2023-10-20T16:26:19.000Z | [
"region:us"
] | cmu-mlsp | null | null | 0 | 5 | 2023-10-20T16:25:45 | ---
dataset_info:
features:
- name: file
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 24000
- name: text
dtype: string
- name: speaker_id
dtype: int64
- name: chapter_id
dtype: int64
- name: id
dtype: string
- name: audio_codes
sequence:
sequence: int64
splits:
- name: test.clean
num_bytes: 958024726.0
num_examples: 2620
download_size: 918510033
dataset_size: 958024726.0
configs:
- config_name: default
data_files:
- split: test.clean
path: data/test.clean-*
---
# Dataset Card for "encodec_24khz-opt-125m-pretrained-ft-librispeech_asr-test.clean-features"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 792 | [
[
-0.06463623046875,
-0.01323699951171875,
-0.00978851318359375,
0.0092620849609375,
-0.0264739990234375,
0.01003265380859375,
-0.01207733154296875,
-0.017547607421875,
0.044952392578125,
0.034881591796875,
-0.068359375,
-0.0394287109375,
-0.03228759765625,
-0... |
Minglii/r_wiz2_ori | 2023-10-20T17:19:02.000Z | [
"region:us"
] | Minglii | null | null | 0 | 5 | 2023-10-20T17:14:43 | ---
dataset_info:
features:
- name: data
struct:
- name: instruction
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 201101522
num_examples: 73000
download_size: 104324297
dataset_size: 201101522
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "r_wiz2_ori"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 522 | [
[
-0.0292816162109375,
-0.0194244384765625,
-0.0008225440979003906,
0.0002906322479248047,
-0.0029296875,
-0.01288604736328125,
0.0255279541015625,
-0.02886962890625,
0.04595947265625,
0.029815673828125,
-0.07574462890625,
-0.04852294921875,
-0.03790283203125,
... |
Minglii/r_wiz2_q | 2023-10-20T17:18:17.000Z | [
"region:us"
] | Minglii | null | null | 0 | 5 | 2023-10-20T17:14:50 | ---
dataset_info:
features:
- name: data
struct:
- name: input
dtype: string
- name: instruction
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 173729259
num_examples: 73000
download_size: 89362133
dataset_size: 173729259
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "r_wiz2_q"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 557 | [
[
-0.03662109375,
-0.0179901123046875,
0.005359649658203125,
0.00904083251953125,
-0.015380859375,
0.003765106201171875,
0.029510498046875,
-0.0173797607421875,
0.046417236328125,
0.0265045166015625,
-0.07257080078125,
-0.04400634765625,
-0.036346435546875,
-0... |
Minglii/r_wiz2_qa | 2023-10-20T17:17:21.000Z | [
"region:us"
] | Minglii | null | null | 0 | 5 | 2023-10-20T17:15:00 | ---
dataset_info:
features:
- name: data
struct:
- name: input
dtype: string
- name: instruction
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 225921379
num_examples: 73000
download_size: 114043986
dataset_size: 225921379
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "r_wiz2_qa"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 559 | [
[
-0.037261962890625,
-0.0212249755859375,
0.01177978515625,
0.01099395751953125,
-0.0161285400390625,
-0.0006670951843261719,
0.036956787109375,
-0.0198516845703125,
0.051910400390625,
0.0252838134765625,
-0.06842041015625,
-0.044769287109375,
-0.029876708984375,... |
oscorrea/tt-scores-bin-balanced-text2 | 2023-10-20T20:19:11.000Z | [
"region:us"
] | oscorrea | null | null | 0 | 5 | 2023-10-20T20:18:47 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
boxin-wbx/test | 2023-10-21T08:42:35.000Z | [
"task_categories:text-classification",
"task_categories:question-answering",
"task_categories:text-generation",
"task_categories:text2text-generation",
"size_categories:10K<n<100K",
"language:en",
"license:cc-by-sa-4.0",
"region:us"
] | boxin-wbx | null | null | 0 | 5 | 2023-10-21T00:19:29 | ---
license: cc-by-sa-4.0
task_categories:
- text-classification
- question-answering
- text-generation
- text2text-generation
language:
- en
pretty_name: test
size_categories:
- 10K<n<100K
arxiv: 2306.11698
configs:
- config_name: toxicity
data_files:
- split: realtoxicityprompts.nontoxic
path: "toxicity/user_prompts/nontoxic.jsonl"
- split: realtoxicityprompts.toxic
path: "toxicity/user_prompts/toxic.jsonl"
- split: toxic.gpt3.5
path: "toxicity/user_prompts/toxic.gpt3.5.jsonl"
- split: toxic.gpt4
path: "toxicity/user_prompts/toxic.gpt4.jsonl"
- config_name: adv_demonstration
data_files:
- split: counterfactual.snliPremiseCf
path: adv_demonstration/counterfactual/snli_premise_cf/42.jsonl
- split: counterfactual.snliHypothesisCf
path: adv_demonstration/counterfactual/snli_hypothesis_cf/42.jsonl
- split: counterfactual.controlRaisingCf
path: adv_demonstration/counterfactual/control_raising_cf/42.jsonl
- split: counterfactual.irregularFormCf
path: adv_demonstration/counterfactual/irregular_form_cf/42.jsonl
- split: counterfactual.mainVerbCf
path: adv_demonstration/counterfactual/main_verb_cf/42.jsonl
- split: counterfactual.syntacticCategoryCf
path: adv_demonstration/counterfactual/syntactic_category_cf/42.jsonl
- split: spurious.PP.entailBias
path: adv_demonstration/spurious/PP/entail-bias/42.jsonl
- split: spurious.PP.nonEntailBias
path: adv_demonstration/spurious/PP/non-entail-bias/42.jsonl
- split: spurious.adverb.entailBias
path: adv_demonstration/spurious/adverb/entail-bias/42.jsonl
- split: spurious.adverb.nonEntailBias
path: adv_demonstration/spurious/adverb/non-entail-bias/42.jsonl
- split: spurious.embeddedUnderVerb.entailBias
path: adv_demonstration/spurious/embedded_under_verb/entail-bias/42.jsonl
- split: spurious.embeddedUnderVerb.nonEntailBias
path: adv_demonstration/spurious/embedded_under_verb/non-entail-bias/42.jsonl
- split: spurious.lRelativeClause.entailBias
path: adv_demonstration/spurious/l_relative_clause/entail-bias/42.jsonl
- split: spurious.lRelativeClause.nonEntailBias
path: adv_demonstration/spurious/l_relative_clause/non-entail-bias/42.jsonl
- split: spurious.passive.entailBias
path: adv_demonstration/spurious/passive/entail-bias/42.jsonl
- split: spurious.passive.nonEntailBias
path: adv_demonstration/spurious/passive/non-entail-bias/42.jsonl
- split: spurious.sRelativeClause.entailBias
path: adv_demonstration/spurious/s_relative_clause/entail-bias/42.jsonl
- split: spurious.sRelativeClause.nonEntailBias
path: adv_demonstration/spurious/s_relative_clause/non-entail-bias/42.jsonl
- split: backdoor.sst2.setup1BadwordCacc
path: adv_demonstration/backdoor/experiment1/sst-2_setup1_badword_cacc/42.jsonl
- split: backdoor.sst2.setup1BadwordAsr
path: adv_demonstration/backdoor/experiment1/sst-2_setup1_badword_asr/42.jsonl
- split: backdoor.sst2.setup2BadwordCacc
path: adv_demonstration/backdoor/experiment1/sst-2_setup2_badword_cacc/42.jsonl
- split: backdoor.sst2.setup2BadwordAsr
path: adv_demonstration/backdoor/experiment1/sst-2_setup2_badword_asr/42.jsonl
- split: backdoor.sst2.setup3BadwordCacc
path: adv_demonstration/backdoor/experiment1/sst-2_setup3_badword_cacc/42.jsonl
- split: backdoor.sst2.setup3BadwordAsr
path: adv_demonstration/backdoor/experiment1/sst-2_setup3_badword_asr/42.jsonl
- split: backdoor.sst2.setup1AddsentCacc
path: adv_demonstration/backdoor/experiment1/sst-2_setup1_addsent_cacc/42.jsonl
- split: backdoor.sst2.setup1AddsentAsr
path: adv_demonstration/backdoor/experiment1/sst-2_setup1_addsent_asr/42.jsonl
- split: backdoor.sst2.setup2AddsentCacc
path: adv_demonstration/backdoor/experiment1/sst-2_setup2_addsent_cacc/42.jsonl
- split: backdoor.sst2.setup2AddsentAsr
path: adv_demonstration/backdoor/experiment1/sst-2_setup2_addsent_asr/42.jsonl
- split: backdoor.sst2.setup3AddsentCacc
path: adv_demonstration/backdoor/experiment1/sst-2_setup3_addsent_cacc/42.jsonl
- split: backdoor.sst2.setup3AddsentAsr
path: adv_demonstration/backdoor/experiment1/sst-2_setup3_addsent_asr/42.jsonl
- split: backdoor.sst2.setup1SynbkdCacc
path: adv_demonstration/backdoor/experiment1/sst-2_setup1_synbkd_cacc/42.jsonl
- split: backdoor.sst2.setup1SynbkdAsr
path: adv_demonstration/backdoor/experiment1/sst-2_setup1_synbkd_asr/42.jsonl
- split: backdoor.sst2.setup2SynbkdCacc
path: adv_demonstration/backdoor/experiment1/sst-2_setup2_synbkd_cacc/42.jsonl
- split: backdoor.sst2.setup2SynbkdAsr
path: adv_demonstration/backdoor/experiment1/sst-2_setup2_synbkd_asr/42.jsonl
- split: backdoor.sst2.setup3SynbkdCacc
path: adv_demonstration/backdoor/experiment1/sst-2_setup3_synbkd_cacc/42.jsonl
- split: backdoor.sst2.setup3SynbkdAsr
path: adv_demonstration/backdoor/experiment1/sst-2_setup3_synbkd_asr/42.jsonl
- split: backdoor.sst2.setup1StylebkdCacc
path: adv_demonstration/backdoor/experiment1/sst-2_setup1_stylebkd_cacc/42.jsonl
- split: backdoor.sst2.setup1StylebkdAsr
path: adv_demonstration/backdoor/experiment1/sst-2_setup1_stylebkd_asr/42.jsonl
- split: backdoor.sst2.setup2StylebkdCacc
path: adv_demonstration/backdoor/experiment1/sst-2_setup2_stylebkd_cacc/42.jsonl
- split: backdoor.sst2.setup2StylebkdAsr
path: adv_demonstration/backdoor/experiment1/sst-2_setup2_stylebkd_asr/42.jsonl
- split: backdoor.sst2.setup3StylebkdCacc
path: adv_demonstration/backdoor/experiment1/sst-2_setup3_stylebkd_cacc/42.jsonl
- split: backdoor.sst2.setup3StylebkdAsr
path: adv_demonstration/backdoor/experiment1/sst-2_setup3_stylebkd_asr/42.jsonl
---
# Dataset Card for Dataset Name
<!-- Provide a quick summary of the dataset. -->
This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1).
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] | 10,077 | [
[
-0.04034423828125,
-0.0419921875,
0.009765625,
0.0178070068359375,
-0.0300445556640625,
-0.00893402099609375,
-0.0026874542236328125,
-0.048431396484375,
0.043212890625,
0.059478759765625,
-0.05938720703125,
-0.069580078125,
-0.042205810546875,
0.00993347167... |
pensieves/alpha | 2023-10-24T14:30:04.000Z | [
"license:apache-2.0",
"region:us"
] | pensieves | null | null | 0 | 5 | 2023-10-21T02:42:13 | ---
license: apache-2.0
dataset_info:
features:
- name: context
dtype: string
- name: question
dtype: string
- name: targets
sequence: string
- name: target_choices
sequence: string
- name: target_scores
sequence: int32
- name: reasoning
dtype: string
- name: source_data
dtype: string
- name: context_id
dtype: int32
- name: question_id
dtype: int32
- name: num_context_entities
dtype: int32
- name: num_question_entities
dtype: int32
- name: question_type
dtype: string
- name: reasoning_types
sequence: string
- name: spatial_types
sequence: string
- name: commonsense_question
dtype: string
- name: canary
dtype: string
- name: comments
sequence: string
configs:
- config_name: SpaRTUN
version: 1.1.0
data_files:
- split: train
path: "SpaRTUN/train.json"
- split: validation
path: "SpaRTUN/val.json"
- split: test
path: "SpaRTUN/test.json"
- config_name: SpartQA_Human
version: 1.1.0
data_files:
- split: train
path: "SpartQA_Human/train.json"
- split: validation
path: "SpartQA_Human/val.json"
- split: test
path: "SpartQA_Human/test.json"
- config_name: ReSQ
version: 1.1.0
data_files:
- split: train
path: "ReSQ/train.json"
- split: validation
path: "ReSQ/val.json"
- split: test
path: "ReSQ/test.json"
- config_name: StepGame
version: 1.1.0
data_files:
- split: train
path: "StepGame/train.json"
- split: validation
path: "StepGame/val.json"
- split: test
path: "StepGame/test.json"
---
# Spatial-Bench (Source): A Unified Benchmark for Spatial Understanding and Reasoning
### Licensing Information
Creative Commons Attribution 4.0 International | 1,870 | [
[
-0.030731201171875,
-0.0157623291015625,
0.058074951171875,
0.060455322265625,
-0.0167236328125,
-0.0384521484375,
0.006656646728515625,
-0.01323699951171875,
-0.02569580078125,
0.029144287109375,
-0.034423828125,
-0.05126953125,
-0.0222930908203125,
-0.0182... |
vietlegalqa/tvpl_summary_1k_231021 | 2023-10-21T09:57:42.000Z | [
"region:us"
] | vietlegalqa | null | null | 0 | 5 | 2023-10-21T09:57:41 | ---
dataset_info:
features:
- name: url
dtype: string
- name: title
dtype: string
- name: summary
sequence: string
- name: document
sequence: string
splits:
- name: train
num_bytes: 2464690
num_examples: 1000
download_size: 993394
dataset_size: 2464690
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "tvpl_summary_1k_231021"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 561 | [
[
-0.056884765625,
-0.00629425048828125,
-0.0160064697265625,
0.025634765625,
-0.0296783447265625,
-0.0068817138671875,
0.026947021484375,
0.0156707763671875,
0.054901123046875,
0.055267333984375,
-0.059051513671875,
-0.031951904296875,
-0.044769287109375,
-0.... |
relaxtraffic/metartmodels3 | 2023-10-28T12:52:13.000Z | [
"region:us"
] | relaxtraffic | null | null | 0 | 5 | 2023-10-21T21:52:35 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
hdparmar/irish-traditional-tunes | 2023-10-22T13:45:19.000Z | [
"task_categories:text-to-image",
"task_categories:text-to-audio",
"size_categories:1K<n<10K",
"language:en",
"license:mit",
"music",
"region:us"
] | hdparmar | null | null | 0 | 5 | 2023-10-22T12:19:54 | ---
dataset_info:
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 3322131399.86
num_examples: 9604
download_size: 3282715107
dataset_size: 3322131399.86
license: mit
task_categories:
- text-to-image
- text-to-audio
language:
- en
tags:
- music
pretty_name: Mel-Spectrograms for Irish Traditional Music
size_categories:
- 1K<n<10K
---
# Dataset Card for "irish-traditional-tunes"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
# Dataset Card for "irish-tunes-spectrograms"
## 1. Dataset Description
This dataset is used for the following project:
- **Homepage:** [Trad-fusion](https://github.com/hdparmar/Tradi-fusion)
### 1.1 Dataset Summary
This dataset contains 9604 Mel spectrograms that represent traditional Irish music.
This dataset is smaller than [hdparmar/irish-tunes-spectrograms](https://huggingface.co/datasets/hdparmar/irish-tunes-spectrograms), which reduces training time and makes it feasible to train for more steps per batch.
Each spectrogram image is a 5-second split of audio, resulting in dimensions 512x512, and includes 3 channels (mimicking RGB) because most text-to-image models are trained on 3 channels.
Some publications even suggest that using 3 channels for a Mel spectrogram can improve generalisation, even though the other two channels are just copies of the first.
The simple trick I used is cv2 to convert the grayscale spectrogram into RGB, since most of the models are trained on 3 channels.
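The channel-replication trick can be sketched as follows (a minimal sketch with numpy; the actual pipeline used cv2's `COLOR_GRAY2RGB` conversion, and the array values here are hypothetical placeholders):

```python
import numpy as np

# Hypothetical single-channel mel spectrogram (8-bit grayscale values).
gray = np.random.randint(0, 256, size=(512, 512), dtype=np.uint8)

# Replicate the single channel into three identical channels,
# which is what converting grayscale to RGB amounts to here.
rgb = np.stack([gray, gray, gray], axis=-1)

assert rgb.shape == (512, 512, 3)
assert np.array_equal(rgb[..., 0], rgb[..., 2])
```

The resulting array has the 512x512x3 shape that 3-channel text-to-image models expect, while carrying no information beyond the original grayscale channel.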
The primary objective of this dataset is to serve as an abundant resource for those venturing into the fields of music analysis, machine learning, and artificial intelligence.
### 1.2 Languages
The dataset's metadata and documentation are all in English, ensuring accessibility and comprehension.
## 2. Dataset Structure
### 2.1 Data Instances
Each data instance in this dataset is composed of two main elements: an image and a text caption.
The image is a mel spectrogram that reflects a snippet of a traditional Irish tune. Accompanying it is a text field that serves as its caption.
#### Example:
The metadata.csv file of the dataset is in this format:
```
{"file_name": "path/to/the/image.png",
"text": "An Irish Traditional Tune"}
```
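Each record in that format can be parsed with the standard library (a minimal sketch; the path and caption are the placeholder values from the example above):

```python
import json

# Placeholder metadata record in the format shown above.
line = '{"file_name": "path/to/the/image.png", "text": "An Irish Traditional Tune"}'

record = json.loads(line)
assert record["file_name"].endswith(".png")
assert record["text"] == "An Irish Traditional Tune"
```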
### 2.2 Data Fields
- **file_name**: This is the field that contains the path leading to the image file. It's the specific location where you can find each piece of the dataset.
- **text**: This is the caption accompanying each image. For the sake of uniformity and ease, the caption for every image is "An Irish Traditional Tune."
### 2.3 Data Splits
As of the current version, the dataset consists solely of a training split. Additional data splits like validation or testing may be introduced in future iterations of the dataset.
### 2.4 Uniform Captions: A Special Note
All the spectrograms in this dataset come labeled with a uniform caption: "An Irish Traditional Tune."
This consistency can perhaps be advantageous, especially in text-to-image tasks that focus primarily on image-based features, with the caption acting as a generalized label.
## NOTE
Further information is to follow; the same caption is used for all the mel spectrograms for ease of producing the dataset.
[
-0.0462646484375,
0.003177642822265625,
0.00615692138671875,
0.0194854736328125,
-0.042449951171875,
0.00559234619140625,
-0.03485107421875,
-0.04974365234375,
0.040740966796875,
0.055999755859375,
-0.0389404296875,
-0.07513427734375,
-0.0205841064453125,
0.... |
Back-up/sotaysv-qa | 2023-10-22T16:02:26.000Z | [
"region:us"
] | Back-up | null | null | 0 | 5 | 2023-10-22T16:02:20 | ---
dataset_info:
features:
- name: Questions
dtype: string
- name: Answers
dtype: string
splits:
- name: train
num_bytes: 129518
num_examples: 176
download_size: 56231
dataset_size: 129518
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "sotaysv-qa"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 474 | [
[
-0.0285491943359375,
-0.0150146484375,
0.0206451416015625,
0.0136260986328125,
-0.0151214599609375,
0.001834869384765625,
0.0406494140625,
-0.002887725830078125,
0.0679931640625,
0.0352783203125,
-0.06890869140625,
-0.04949951171875,
-0.020904541015625,
-0.0... |
jamestalentium/cnn_dailymail_100_finetune | 2023-10-22T16:39:50.000Z | [
"region:us"
] | jamestalentium | null | null | 0 | 5 | 2023-10-22T16:39:49 | ---
dataset_info:
features:
- name: input_text
dtype: string
- name: output_text
dtype: string
- name: id
dtype: string
splits:
- name: train
num_bytes: 439445.02164652944
num_examples: 100
download_size: 128996
dataset_size: 439445.02164652944
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "cnn_dailymail_100_finetune"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 551 | [
[
-0.041015625,
-0.0293121337890625,
0.0002617835998535156,
0.024261474609375,
-0.025238037109375,
-0.0144195556640625,
0.0005974769592285156,
0.0013370513916015625,
0.04766845703125,
0.036773681640625,
-0.059814453125,
-0.06097412109375,
-0.046417236328125,
-... |
jamestalentium/cnn_dailymail_100_rm | 2023-10-22T16:39:51.000Z | [
"region:us"
] | jamestalentium | null | null | 0 | 5 | 2023-10-22T16:39:50 | ---
dataset_info:
features:
- name: input_text
dtype: string
- name: output_text
dtype: string
- name: id
dtype: string
splits:
- name: train
num_bytes: 439445.02164652944
num_examples: 100
download_size: 134076
dataset_size: 439445.02164652944
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "cnn_dailymail_100_rm"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 545 | [
[
-0.035003662109375,
-0.023712158203125,
-0.00030875205993652344,
0.028076171875,
-0.025787353515625,
-0.00455474853515625,
0.0083770751953125,
0.005802154541015625,
0.043701171875,
0.03424072265625,
-0.068603515625,
-0.0645751953125,
-0.050628662109375,
-0.0... |
jamestalentium/cnn_dailymail_100_test | 2023-10-22T16:39:54.000Z | [
"region:us"
] | jamestalentium | null | null | 0 | 5 | 2023-10-22T16:39:51 | ---
dataset_info:
features:
- name: input_text
dtype: string
- name: output_text
dtype: string
- name: id
dtype: string
splits:
- name: test
num_bytes: 8255778.137510879
num_examples: 1900
download_size: 2210923
dataset_size: 8255778.137510879
configs:
- config_name: default
data_files:
- split: test
path: data/test-*
---
# Dataset Card for "cnn_dailymail_100_test"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 544 | [
[
-0.0408935546875,
-0.033477783203125,
-0.0036716461181640625,
0.029052734375,
-0.018218994140625,
-0.003894805908203125,
0.007274627685546875,
0.00463104248046875,
0.0438232421875,
0.02899169921875,
-0.06402587890625,
-0.060394287109375,
-0.042572021484375,
... |
jamestalentium/cnn_dailymail_250_finetune | 2023-10-22T16:40:00.000Z | [
"region:us"
] | jamestalentium | null | null | 0 | 5 | 2023-10-22T16:39:58 | ---
dataset_info:
features:
- name: input_text
dtype: string
- name: output_text
dtype: string
- name: id
dtype: string
splits:
- name: train
num_bytes: 1098612.5541163236
num_examples: 250
download_size: 307394
dataset_size: 1098612.5541163236
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "cnn_dailymail_250_finetune"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 551 | [
[
-0.040130615234375,
-0.0245819091796875,
0.00203704833984375,
0.0260009765625,
-0.0239105224609375,
-0.011688232421875,
-0.000782012939453125,
0.000014722347259521484,
0.045745849609375,
0.034698486328125,
-0.060333251953125,
-0.060302734375,
-0.04638671875,
... |
jamestalentium/cnn_dailymail_250_rm | 2023-10-22T16:40:01.000Z | [
"region:us"
] | jamestalentium | null | null | 0 | 5 | 2023-10-22T16:40:00 | ---
dataset_info:
features:
- name: input_text
dtype: string
- name: output_text
dtype: string
- name: id
dtype: string
splits:
- name: train
num_bytes: 1098612.5541163236
num_examples: 250
download_size: 303396
dataset_size: 1098612.5541163236
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "cnn_dailymail_250_rm"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 545 | [
[
-0.03472900390625,
-0.0189208984375,
0.0023975372314453125,
0.028778076171875,
-0.0255126953125,
-0.0018377304077148438,
0.00577545166015625,
0.0028629302978515625,
0.042938232421875,
0.031768798828125,
-0.0689697265625,
-0.0650634765625,
-0.051239013671875,
... |
jamestalentium/cnn_dailymail_250_test | 2023-10-22T16:40:03.000Z | [
"region:us"
] | jamestalentium | null | null | 0 | 5 | 2023-10-22T16:40:01 | ---
dataset_info:
features:
- name: input_text
dtype: string
- name: output_text
dtype: string
- name: id
dtype: string
splits:
- name: test
num_bytes: 8255778.137510879
num_examples: 1900
download_size: 2210923
dataset_size: 8255778.137510879
configs:
- config_name: default
data_files:
- split: test
path: data/test-*
---
# Dataset Card for "cnn_dailymail_250_test"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 544 | [
[
-0.040313720703125,
-0.0283203125,
-0.0017566680908203125,
0.0300445556640625,
-0.0187530517578125,
-0.0019102096557617188,
0.00485992431640625,
0.002124786376953125,
0.04254150390625,
0.02667236328125,
-0.064208984375,
-0.059600830078125,
-0.0430908203125,
... |
pharaouk/python3 | 2023-10-22T22:41:20.000Z | [
"region:us"
] | pharaouk | null | null | 0 | 5 | 2023-10-22T22:36:55 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
ai2lumos/lumos_complex_qa_plan_onetime | 2023-10-23T22:34:12.000Z | [
"task_categories:conversational",
"task_categories:text-generation",
"task_categories:question-answering",
"size_categories:10K<n<100K",
"language:en",
"license:apache-2.0",
"language-agent",
"reasoning",
"question-answering",
"planning",
"region:us"
] | ai2lumos | null | null | 0 | 5 | 2023-10-23T05:36:48 | ---
license: apache-2.0
task_categories:
- conversational
- text-generation
- question-answering
language:
- en
tags:
- language-agent
- reasoning
- question-answering
- planning
size_categories:
- 10K<n<100K
---
# 🪄 Lumos: Language Agents with Unified Formats, Modular Design, and Open-Source LLMs
<p align="center">
🌐<a href="https://allenai.github.io/lumos">[Website]</a>
📝<a href="">[Paper]</a>
🤗<a href="https://huggingface.co/datasets?sort=trending&search=ai2lumos">[Data]</a>
🤗<a href="https://huggingface.co/models?sort=trending&search=ai2lumos">[Model]</a>
</p>
We introduce 🪄**Lumos**, Language Agents with **Unified** Formats, **Modular** Design, and **Open-Source** LLMs. **Lumos** unifies a suite of complex interactive tasks and achieves competitive performance with GPT-4/3.5-based and larger open-source agents.
**Lumos** has the following features:
* 🧩 **Modular Architecture**:
- **Lumos** consists of planning, grounding, and execution modules built based on LLAMA-2-7B.
* 🌍 **Diverse Training Data**:
- **Lumos** is trained with ~40K high-quality annotations from ground-truth reasoning steps in existing benchmarks with GPT-4.
* 🚀 **Competitive Performance**:
- 🚀 **Lumos** outperforms **GPT-4/3.5-based** agents on complex QA and web agent tasks, and **larger open agents** on maths tasks.
- 🚀 **Lumos** performs better than open agent baseline formulations including **chain-of-thoughts** and **unmodularized** training.
- 🚀 **Lumos** surpasses larger open LLM agents and domain-specific agents on an unseen task, WebShop.
## Data Overview
`lumos_complex_qa_plan_onetime` is the data for training **planning** module on **complex QA** task in **Lumos-Onetime (Lumos-O)** formulation.
The source of the training annotation data is shown below:
| Datasets | Number |
|---|---|
|StrategyQA|1777|
|Musique|17632|
## Models Trained with the Data
`lumos_complex_qa_plan_onetime` is used to train the following models.
|Model|Huggingface Repo|
|---|---|
|`lumos_complex_qa_plan_onetime`| [🤗Huggingface Repo](https://huggingface.co/ai2lumos/lumos_complex_qa_plan_onetime) |
|`lumos_unified_plan_onetime`| [🤗Huggingface Repo](https://huggingface.co/ai2lumos/lumos_unified_plan_onetime) |
## Citation
If you find this work relevant to your research, please feel free to cite our work!
```
@article{yin2023lumos,
title={Lumos: Towards Language Agents that are Unified, Modular, and Open Source},
author={Yin, Da and Brahman, Faeze and Ravichander, Abhilasha and Chandu, Khyathi and Chang, Kai-Wei and Choi, Yejin and Lin, Bill Yuchen},
year={2023}
}
``` | 2,640 | [
[
-0.0063018798828125,
-0.039947509765625,
0.0273895263671875,
0.023773193359375,
-0.01496124267578125,
0.00006407499313354492,
-0.0232391357421875,
-0.043853759765625,
0.032073974609375,
0.033905029296875,
-0.04669189453125,
-0.040008544921875,
-0.016860961914062... |
ai2lumos/lumos_unified_ground_iterative | 2023-10-26T06:06:47.000Z | [
"task_categories:conversational",
"task_categories:text-generation",
"task_categories:question-answering",
"size_categories:10K<n<100K",
"language:en",
"license:apache-2.0",
"language-agent",
"maths",
"reasoning",
"question-answering",
"web-agent",
"grounding",
"region:us"
] | ai2lumos | null | null | 0 | 5 | 2023-10-23T05:39:02 | ---
license: apache-2.0
task_categories:
- conversational
- text-generation
- question-answering
language:
- en
tags:
- language-agent
- maths
- reasoning
- question-answering
- web-agent
- grounding
size_categories:
- 10K<n<100K
---
# 🪄 Lumos: Language Agents with Unified Formats, Modular Design, and Open-Source LLMs
<p align="center">
🌐<a href="https://allenai.github.io/lumos">[Website]</a>
📝<a href="">[Paper]</a>
🤗<a href="https://huggingface.co/datasets?sort=trending&search=ai2lumos">[Data]</a>
🤗<a href="https://huggingface.co/models?sort=trending&search=ai2lumos">[Model]</a>
</p>
We introduce 🪄**Lumos**, Language Agents with **Unified** Formats, **Modular** Design, and **Open-Source** LLMs. **Lumos** unifies a suite of complex interactive tasks and achieves competitive performance with GPT-4/3.5-based and larger open-source agents.
**Lumos** has the following features:
* 🧩 **Modular Architecture**:
- **Lumos** consists of planning, grounding, and execution modules built based on LLAMA-2-7B.
* 🌍 **Diverse Training Data**:
- **Lumos** is trained with ~40K high-quality annotations from ground-truth reasoning steps in existing benchmarks with GPT-4.
* 🚀 **Competitive Performance**:
- 🚀 **Lumos** outperforms **GPT-4/3.5-based** agents on complex QA and web agent tasks, and **larger open agents** on maths tasks.
- 🚀 **Lumos** performs better than open agent baseline formulations including **chain-of-thoughts** and **unmodularized** training.
- 🚀 **Lumos** surpasses larger open LLM agents and domain-specific agents on an unseen task, WebShop.
## Data Overview
`lumos_unified_ground_iterative` is the data for training **grounding** module on **maths**, **complex QA** and **web agent** tasks in **Lumos-Iterative (Lumos-I)** formulation.
The source of the training annotation data is shown below:
| Task | Number |
|---|---|
|PRM800K|10000|
|GSM8K|7473|
|ASDiv|2305|
|StrategyQA|1777|
|Musique|17632|
|Mind2Web|1009|
## Models Trained with the Data
`lumos_unified_ground_iterative` is used to train the following models.
|Model|Huggingface Repo|
|---|---|
|`lumos_unified_ground_iterative`| [🤗Huggingface Repo](https://huggingface.co/ai2lumos/lumos_unified_ground_iterative) |
## Citation
If you find this work relevant to your research, please feel free to cite our work!
```
@article{yin2023lumos,
title={Lumos: Towards Language Agents that are Unified, Modular, and Open Source},
author={Yin, Da and Brahman, Faeze and Ravichander, Abhilasha and Chandu, Khyathi and Chang, Kai-Wei and Choi, Yejin and Lin, Bill Yuchen},
year={2023}
}
``` | 2,639 | [
[
-0.007450103759765625,
-0.04058837890625,
0.0299530029296875,
0.019195556640625,
-0.01800537109375,
0.00250244140625,
-0.034576416015625,
-0.042633056640625,
0.0220947265625,
0.0274505615234375,
-0.03851318359375,
-0.045440673828125,
-0.0279083251953125,
-0.... |
ai2lumos/lumos_maths_ground_onetime | 2023-10-23T22:15:31.000Z | [
"task_categories:conversational",
"task_categories:text-generation",
"size_categories:10K<n<100K",
"language:en",
"license:apache-2.0",
"language-agent",
"maths",
"reasoning",
"grounding",
"region:us"
] | ai2lumos | null | null | 0 | 5 | 2023-10-23T05:45:30 | ---
license: apache-2.0
task_categories:
- conversational
- text-generation
language:
- en
tags:
- language-agent
- maths
- reasoning
- grounding
size_categories:
- 10K<n<100K
---
# 🪄 Lumos: Language Agents with Unified Formats, Modular Design, and Open-Source LLMs
<p align="center">
🌐<a href="https://allenai.github.io/lumos">[Website]</a>
📝<a href="">[Paper]</a>
🤗<a href="https://huggingface.co/datasets?sort=trending&search=ai2lumos">[Data]</a>
🤗<a href="https://huggingface.co/models?sort=trending&search=ai2lumos">[Model]</a>
</p>
We introduce 🪄**Lumos**, Language Agents with **Unified** Formats, **Modular** Design, and **Open-Source** LLMs. **Lumos** unifies a suite of complex interactive tasks and achieves competitive performance with GPT-4/3.5-based and larger open-source agents.
**Lumos** has the following features:
* 🧩 **Modular Architecture**:
- **Lumos** consists of planning, grounding, and execution modules built based on LLAMA-2-7B.
* 🌍 **Diverse Training Data**:
- **Lumos** is trained with ~40K high-quality annotations from ground-truth reasoning steps in existing benchmarks with GPT-4.
* 🚀 **Competitive Performance**:
- 🚀 **Lumos** outperforms **GPT-4/3.5-based** agents on complex QA and web agent tasks, and **larger open agents** on maths tasks.
- 🚀 **Lumos** performs better than open agent baseline formulations including **chain-of-thoughts** and **unmodularized** training.
- 🚀 **Lumos** surpasses larger open LLM agents and domain-specific agents on an unseen task, WebShop.
## Data Overview
`lumos_maths_ground_onetime` is the data for training **grounding** module on **maths** task in **Lumos-Onetime (Lumos-O)** formulation.
The source of the training annotation data is shown below:
| Task | Number |
|---|---|
|PRM800K|10000|
|GSM8K|7473|
|ASDiv|2305|
## Models Trained with the Data
`lumos_maths_ground_onetime` is used to train the following models.
|Model|Huggingface Repo|
|---|---|
|`lumos_maths_ground_onetime`| [🤗Huggingface Repo](https://huggingface.co/ai2lumos/lumos_maths_ground_onetime) |
## Citation
If you find this work relevant to your research, please feel free to cite our work!
```
@article{yin2023lumos,
title={Lumos: Towards Language Agents that are Unified, Modular, and Open Source},
author={Yin, Da and Brahman, Faeze and Ravichander, Abhilasha and Chandu, Khyathi and Chang, Kai-Wei and Choi, Yejin and Lin, Bill Yuchen},
year={2023}
}
``` | 2,482 | [
[
-0.006420135498046875,
-0.0445556640625,
0.0260162353515625,
0.0257110595703125,
-0.017364501953125,
-0.003997802734375,
-0.0271453857421875,
-0.042327880859375,
0.025665283203125,
0.031494140625,
-0.043670654296875,
-0.04327392578125,
-0.0256805419921875,
-... |
JawadIshtiaq/Shoes100 | 2023-10-23T11:36:40.000Z | [
"region:us"
] | JawadIshtiaq | null | null | 1 | 5 | 2023-10-23T11:35:21 | Entry not found | 15 | [
[
-0.0214080810546875,
-0.01494598388671875,
0.05718994140625,
0.02880859375,
-0.0350341796875,
0.0465087890625,
0.052490234375,
0.00505828857421875,
0.051361083984375,
0.0170135498046875,
-0.05206298828125,
-0.0149993896484375,
-0.06036376953125,
0.0379028320... |
orafandina/wiki_long_200k | 2023-10-23T17:40:37.000Z | [
"license:apache-2.0",
"region:us"
] | orafandina | null | null | 0 | 5 | 2023-10-23T17:33:48 | ---
license: apache-2.0
---
This dataset contains ~200K wiki articles, randomly sampled from the wiki dataset.
The articles have been tokenized using AutoTokenizer.from_pretrained('bert-base-uncased').
The dataset was created for research purposes.
[
-0.037872314453125,
-0.0283660888671875,
0.0009484291076660156,
0.006229400634765625,
-0.0286865234375,
-0.00775146484375,
-0.0079193115234375,
-0.0264892578125,
0.038421630859375,
0.0364990234375,
-0.049072265625,
-0.022705078125,
-0.028350830078125,
0.0258... |
Braddy/rsicd_matched_v2 | 2023-10-23T17:42:35.000Z | [
"region:us"
] | Braddy | null | null | 0 | 5 | 2023-10-23T17:42:24 | ---
dataset_info:
features:
- name: filename
dtype: string
- name: captions
sequence: string
- name: image
dtype: image
splits:
- name: train
num_bytes: 204580418.375
num_examples: 3661
download_size: 194011256
dataset_size: 204580418.375
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "rsicd_matched_v2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 535 | [
[
-0.0171356201171875,
0.0031757354736328125,
0.0086517333984375,
0.0205535888671875,
-0.03363037109375,
-0.00783538818359375,
0.028289794921875,
-0.01513671875,
0.04754638671875,
0.044586181640625,
-0.0643310546875,
-0.0433349609375,
-0.033966064453125,
-0.01... |
ysmail31/reglementInterieur | 2023-10-23T19:39:44.000Z | [
"region:us"
] | ysmail31 | null | null | 0 | 5 | 2023-10-23T18:17:18 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
joshuamoorexyz/legitue | 2023-10-23T20:49:40.000Z | [
"region:us"
] | joshuamoorexyz | null | null | 0 | 5 | 2023-10-23T20:49:14 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
cmu-mlsp/encodec_24khz-librispeech_asr100h | 2023-10-23T23:36:40.000Z | [
"region:us"
] | cmu-mlsp | null | null | 0 | 5 | 2023-10-23T23:19:23 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
dataset_info:
features:
- name: file
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 24000
- name: text
dtype: string
- name: speaker_id
dtype: int64
- name: chapter_id
dtype: int64
- name: id
dtype: string
- name: audio_codes
sequence:
sequence: int64
splits:
- name: train
num_bytes: 17829358082.086
num_examples: 28539
- name: validation
num_bytes: 955281891.125
num_examples: 2703
- name: test
num_bytes: 958024726.5
num_examples: 2620
download_size: 18911689702
dataset_size: 19742664699.711
---
# Dataset Card for "encodec_24khz-librispeech_asr100h"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 975 | [
[
-0.052001953125,
-0.007366180419921875,
-0.000010967254638671875,
0.021484375,
-0.0132293701171875,
0.00908660888671875,
-0.0006237030029296875,
-0.01386260986328125,
0.0528564453125,
0.0308837890625,
-0.060546875,
-0.047149658203125,
-0.037078857421875,
-0.... |
ixarchakos/bottoms_warped | 2023-10-25T02:54:25.000Z | [
"region:us"
] | ixarchakos | null | null | 0 | 5 | 2023-10-25T02:52:55 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
eswardivi/Tam_MSA_Chunked | 2023-10-25T10:54:07.000Z | [
"region:us"
] | eswardivi | null | null | 0 | 5 | 2023-10-25T10:53:52 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: audio
dtype: audio
- name: label
dtype:
class_label:
names:
'0': Negative
'1': Neutral
'2': Positive
splits:
- name: train
num_bytes: 378386026.0
num_examples: 143
download_size: 377259064
dataset_size: 378386026.0
---
# Dataset Card for "Tam_MSA_Chunked"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 584 | [
[
-0.0300140380859375,
-0.04461669921875,
0.0208282470703125,
0.0265350341796875,
-0.0377197265625,
0.00731658935546875,
0.0416259765625,
-0.015289306640625,
0.08502197265625,
0.046630859375,
-0.05487060546875,
-0.029052734375,
-0.047698974609375,
-0.012283325... |
abderrazzak/LayoutLMv3-first | 2023-10-27T15:23:01.000Z | [
"region:us"
] | abderrazzak | null | null | 0 | 5 | 2023-10-25T15:46:25 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: id
dtype: string
- name: image
dtype: image
- name: bboxes
sequence:
sequence: int64
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': Numéro facture
'2': Fournisseur
'3': Date Facture
'4': Adresse
'5': Désignation
'6': Quantité
'7': Prix unitaire
'8': Total
'9': TotalHT
'10': TVA
'11': TotalTTc
- name: tokens
sequence: string
splits:
- name: train
num_bytes: 107383.0
num_examples: 1
- name: test
num_bytes: 107383.0
num_examples: 1
download_size: 0
dataset_size: 214766.0
---
# Dataset Card for "LayoutLMv3-first"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 1,017 | [
[
-0.041412353515625,
-0.01297760009765625,
0.020263671875,
0.03350830078125,
-0.01515960693359375,
-0.00493621826171875,
0.0498046875,
0.0181884765625,
0.051849365234375,
0.050872802734375,
-0.07208251953125,
-0.06231689453125,
-0.0275115966796875,
-0.0374450... |
wangqi777/ultrachat-uncensored | 2023-10-29T17:25:19.000Z | [
"license:mit",
"region:us"
] | wangqi777 | null | @misc{UltraChat-Uncensored,
author = {ehartford},
title = {UltraChat: A Large-scale Auto-generated Multi-round Dialogue Data},
year = {2023},
publisher = {GitHub},
journal = {GitHub repository},
howpublished = {\\url{https://github.com/thunlp/ultrachat}},
} | 0 | 5 | 2023-10-25T21:11:07 | ---
license: mit
---
Forked from: [ehartford/ultrachat-uncensored](https://huggingface.co/datasets/ehartford/ultrachat-uncensored)
I have made the following upgrades:
1. Split 'ultrachat-uncensored.jsonl' into train, val, and test files in an 85:10:5 ratio
2. Added an 'ultrachat-uncensored.py' loading script.
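A minimal sketch of an 85:10:5 split like the one described above (the shuffling, seed, and function name are assumptions for illustration, not taken from the repository):

```python
import random

def split_records(records, pcts=(85, 10, 5), seed=0):
    """Shuffle the records and cut them into train/val/test slices
    according to the given integer percentages."""
    records = list(records)
    random.Random(seed).shuffle(records)
    n = len(records)
    n_train = n * pcts[0] // 100
    n_val = n * pcts[1] // 100
    return (records[:n_train],
            records[n_train:n_train + n_val],
            records[n_train + n_val:])

# Example with 100 dummy conversations.
train, val, test = split_records([{"id": i} for i in range(100)])
```

Each slice can then be written back out as its own `.jsonl` file.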
The following is the original dataset card from ehartford:
-------------------------------------------------------------
This is based on the ultrachat dataset: https://huggingface.co/datasets/stingning/ultrachat
I filtered it using the classic "unfiltered" keywords list https://huggingface.co/datasets/anon8231489123/ShareGPT_Vicuna_unfiltered to remove instances of refusals and bias
About 90% of the dataset was removed.
What remains (400k conversations) is unlikely to incline the model to refuse.
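A keyword filter of this kind can be sketched as follows (the banned phrases here are illustrative placeholders, not the actual "unfiltered" list):

```python
def passes_filter(conversation, banned_phrases):
    """Keep a conversation only if no turn contains a banned phrase
    (case-insensitive substring match)."""
    text = " ".join(conversation).lower()
    return not any(phrase.lower() in text for phrase in banned_phrases)

# Illustrative phrases only; the real filter uses the ShareGPT keyword list.
banned = ["as an ai language model", "i'm sorry, but"]
conversations = [
    ["Hello!", "Hi there, how can I help?"],
    ["Write a poem.", "As an AI language model, I can't do that."],
]
kept = [c for c in conversations if passes_filter(c, banned)]
```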
I am investigating a less heavy-handed approach using dolphin-2.1 to reword any detected refusals. | 910 | [
[
-0.040283203125,
-0.039459228515625,
0.01898193359375,
-0.005092620849609375,
-0.051544189453125,
0.0014562606811523438,
-0.006259918212890625,
-0.03985595703125,
0.023834228515625,
0.073486328125,
-0.0426025390625,
-0.032989501953125,
-0.043304443359375,
0.... |
iamnguyen/filtered-orca | 2023-10-26T04:48:14.000Z | [
"region:us"
] | iamnguyen | null | null | 0 | 5 | 2023-10-25T23:15:51 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: id
dtype: string
- name: system_prompt
dtype: string
- name: question
dtype: string
- name: response
dtype: string
splits:
- name: train
num_bytes: 1194215300.0749996
num_examples: 1035833
download_size: 632988844
dataset_size: 1194215300.0749996
---
# Dataset Card for "filtered-orca"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 582 | [
[
-0.04302978515625,
-0.03228759765625,
0.0178070068359375,
0.00042891502380371094,
-0.0357666015625,
-0.009124755859375,
0.03155517578125,
-0.0338134765625,
0.06793212890625,
0.064697265625,
-0.05859375,
-0.06396484375,
-0.0406494140625,
-0.0163116455078125,
... |
roupenminassian/pedestrian-dataset | 2023-10-26T00:19:16.000Z | [
"region:us"
] | roupenminassian | null | null | 0 | 5 | 2023-10-26T00:19:02 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: image
dtype: image
- name: image_id
dtype: int64
- name: width
dtype: int64
- name: height
dtype: int64
- name: objects
struct:
- name: id
sequence: int64
- name: area
sequence: float64
- name: bbox
sequence:
sequence: float64
- name: category
sequence: int64
splits:
- name: train
num_bytes: 26451276.0
num_examples: 78
download_size: 26448000
dataset_size: 26451276.0
---
# Dataset Card for "pedestrian-dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 764 | [
[
-0.043731689453125,
-0.01715087890625,
0.0306854248046875,
0.026519775390625,
-0.01317596435546875,
-0.0017728805541992188,
0.0291900634765625,
-0.033172607421875,
0.056793212890625,
0.0311126708984375,
-0.044158935546875,
-0.0379638671875,
-0.04071044921875,
... |
MusicMovesPeople/Iot_qa_chat | 2023-10-26T04:45:41.000Z | [
"region:us"
] | MusicMovesPeople | null | null | 0 | 5 | 2023-10-26T04:45:02 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
makram93/accepted_pairs | 2023-10-26T11:53:39.000Z | [
"region:us"
] | makram93 | null | null | 0 | 5 | 2023-10-26T11:32:00 | ---
dataset_info:
features:
- name: url
dtype: string
- name: doc_id
dtype: string
- name: original_title
sequence: string
- name: right
dtype: string
- name: left
dtype: string
splits:
- name: train
num_bytes: 85236.05575519982
num_examples: 100
download_size: 59695
dataset_size: 85236.05575519982
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "accepted_pairs"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 606 | [
[
-0.03826904296875,
-0.0171356201171875,
0.022796630859375,
0.037811279296875,
-0.022064208984375,
-0.005084991455078125,
0.01026153564453125,
-0.0169525146484375,
0.058197021484375,
0.0270843505859375,
-0.036956787109375,
-0.050262451171875,
-0.045806884765625,
... |
CJWeiss/lcr_final | 2023-10-26T20:54:35.000Z | [
"region:us"
] | CJWeiss | null | null | 0 | 5 | 2023-10-26T20:54:27 | ---
dataset_info:
features:
- name: Long Text
dtype: string
- name: Summary
dtype: string
splits:
- name: train
num_bytes: 87287943
num_examples: 2918
- name: test
num_bytes: 16210230
num_examples: 584
- name: valid
num_bytes: 10483063
num_examples: 389
download_size: 55981252
dataset_size: 113981236
---
# Dataset Card for "lcr_final"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 519 | [
[
-0.0458984375,
-0.01354217529296875,
0.020111083984375,
-0.00527191162109375,
-0.015106201171875,
0.00795745849609375,
0.007244110107421875,
-0.005664825439453125,
0.0328369140625,
0.05169677734375,
-0.058319091796875,
-0.0516357421875,
-0.0270233154296875,
... |
Deojoandco/capstone_hal_without_gold | 2023-10-27T04:03:05.000Z | [
"region:us"
] | Deojoandco | null | null | 0 | 5 | 2023-10-27T04:02:56 | ---
dataset_info:
features:
- name: dialog_id
dtype: int32
- name: source
sequence: string
- name: tags
sequence:
class_label:
names:
'0': C
'1': M
'2': N
'3': O
'4': OB
'5': W
splits:
- name: train
num_bytes: 239933
num_examples: 76
- name: validation
num_bytes: 47958
num_examples: 12
- name: test
num_bytes: 27286
num_examples: 12
download_size: 35488
dataset_size: 315177
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
---
# Dataset Card for "capstone_hal_without_gold"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 863 | [
[
-0.041748046875,
-0.0106964111328125,
0.017791748046875,
0.01025390625,
-0.0193634033203125,
0.008331298828125,
0.007534027099609375,
0.0008392333984375,
0.05523681640625,
0.052398681640625,
-0.0718994140625,
-0.07342529296875,
-0.04180908203125,
-0.02224731... |
ajdesh2000/pegasus_combined_math_train_dataset | 2023-10-27T04:46:29.000Z | [
"region:us"
] | ajdesh2000 | null | null | 0 | 5 | 2023-10-27T04:46:27 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
anlp/sentence_anno | 2023-10-27T08:49:02.000Z | [
"region:us"
] | anlp | null | null | 0 | 5 | 2023-10-27T08:49:00 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: sentences
sequence: string
- name: new_gt
sequence: string
splits:
- name: train
num_bytes: 1201528
num_examples: 990
download_size: 244599
dataset_size: 1201528
---
# Dataset Card for "sentence_anno"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 485 | [
[
-0.02947998046875,
-0.040924072265625,
0.017669677734375,
0.0280914306640625,
-0.01285552978515625,
-0.0305938720703125,
-0.01042938232421875,
-0.027130126953125,
0.054595947265625,
0.04840087890625,
-0.05389404296875,
-0.051483154296875,
-0.042877197265625,
... |
mwitiderrick/gsm8k | 2023-10-27T11:23:45.000Z | [
"region:us"
] | mwitiderrick | null | null | 0 | 5 | 2023-10-27T11:20:21 | Train Mistral with gsm8k formatted data | 39 | [
[
-0.0031833648681640625,
0.00785064697265625,
0.002010345458984375,
0.038055419921875,
-0.028564453125,
-0.050689697265625,
-0.01551055908203125,
0.011138916015625,
-0.00025725364685058594,
0.0457763671875,
-0.01560211181640625,
-0.0224151611328125,
-0.0399780273... |
nataliaElv/text-descriptives-metadata | 2023-10-27T17:37:10.000Z | [
"size_categories:1K<n<10K",
"rlfh",
"argilla",
"human-feedback",
"region:us"
] | nataliaElv | null | null | 0 | 5 | 2023-10-27T17:37:08 | ---
size_categories: 1K<n<10K
tags:
- rlfh
- argilla
- human-feedback
---
# Dataset Card for text-descriptives-metadata
This dataset has been created with [Argilla](https://docs.argilla.io).
As shown in the sections below, this dataset can be loaded into Argilla as explained in [Load with Argilla](#load-with-argilla), or used directly with the `datasets` library in [Load with `datasets`](#load-with-datasets).
## Dataset Description
- **Homepage:** https://argilla.io
- **Repository:** https://github.com/argilla-io/argilla
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
This dataset contains:
* A dataset configuration file conforming to the Argilla dataset format named `argilla.yaml`. This configuration file will be used to configure the dataset when using the `FeedbackDataset.from_huggingface` method in Argilla.
* Dataset records in a format compatible with HuggingFace `datasets`. These records will be loaded automatically when using `FeedbackDataset.from_huggingface` and can be loaded independently using the `datasets` library via `load_dataset`.
* The [annotation guidelines](#annotation-guidelines) that have been used for building and curating the dataset, if they've been defined in Argilla.
### Load with Argilla
To load with Argilla, you'll just need to install Argilla as `pip install argilla --upgrade` and then use the following code:
```python
import argilla as rg
ds = rg.FeedbackDataset.from_huggingface("nataliaElv/text-descriptives-metadata")
```
### Load with `datasets`
To load this dataset with `datasets`, you'll just need to install `datasets` as `pip install datasets --upgrade` and then use the following code:
```python
from datasets import load_dataset
ds = load_dataset("nataliaElv/text-descriptives-metadata")
```
### Supported Tasks and Leaderboards
This dataset can contain [multiple fields, questions and responses](https://docs.argilla.io/en/latest/conceptual_guides/data_model.html#feedback-dataset) so it can be used for different NLP tasks, depending on the configuration. The dataset structure is described in the [Dataset Structure section](#dataset-structure).
There are no leaderboards associated with this dataset.
### Languages
[More Information Needed]
## Dataset Structure
### Data in Argilla
The dataset is created in Argilla with: **fields**, **questions**, **suggestions**, **metadata**, and **guidelines**.
The **fields** are the dataset records themselves, for the moment just text fields are supported. These are the ones that will be used to provide responses to the questions.
| Field Name | Title | Type | Required | Markdown |
| ---------- | ----- | ---- | -------- | -------- |
| prompt | Prompt | FieldTypes.text | True | True |
| context | Context | FieldTypes.text | False | True |
The **questions** are the questions that will be asked to the annotators. They can be of different types, such as rating, text, label_selection, multi_label_selection, or ranking.
| Question Name | Title | Type | Required | Description | Values/Labels |
| ------------- | ----- | ---- | -------- | ----------- | ------------- |
| response | Response | QuestionTypes.text | True | N/A | N/A |
The **suggestions** are human- or machine-generated recommendations for each question that assist the annotator during the annotation process. They are always linked to the existing questions and are named by appending "-suggestion" and "-suggestion-metadata" to the question name, containing the value/s of the suggestion and its metadata, respectively. Accordingly, the possible values are the same as in the table above, but the column name is appended with "-suggestion" and the metadata is appended with "-suggestion-metadata".
**✨ NEW** The **metadata** is a dictionary that can be used to provide additional information about the dataset record. This can be useful to provide additional context to the annotators, or to provide additional information about the dataset record itself. For example, you can use this to provide a link to the original source of the dataset record, or to provide additional information about the dataset record itself, such as the author, the date, or the source. The metadata is always optional, and can be potentially linked to the `metadata_properties` defined in the dataset configuration file in `argilla.yaml`.
The **guidelines**, are optional as well, and are just a plain string that can be used to provide instructions to the annotators. Find those in the [annotation guidelines](#annotation-guidelines) section.
### Data Instances
An example of a dataset instance in Argilla looks as follows:
```json
{
"external_id": null,
"fields": {
"context": null,
"prompt": "Can brain cells move? By movement I mean long distance migration (preferably within the brain only)."
},
"metadata": {
"entropy": 0.4352176404374839,
"flesch_reading_ease": 82.39000000000001,
"n_characters": 85,
"passed_quality_check": "True"
},
"responses": [],
"suggestions": [
{
"agent": null,
"question_name": "response",
"score": null,
"type": null,
"value": "The question is relatively broad and one should take into account that the brain not only consists of neurons, but also glial cells (supportive cells) and pre-mitotic neuronal stem cells. Furthermore, as critical fellow-scientists have indicated, developmental stage is very important, as the developing embryonic brain is very different from the adult brain.\nHowever, after sifting through various publications, the answer to the question is actually remarkably simple: Yes, brain cells migrate.\nIn the adult brain glial cells migrate in the brain (Kl\u00e4mbt, 2009). Glial cells are involved in a myriad of functions, but a notable example of migrating glial cells are the oligodendrocytes that migrate relative long distances to find their target axons onto which they wrap themselves to form the insulating myelin sheath (Tsai and Miller, 2002).\nNeuronal stem cells migrate over long distances in response to injury (Imitola et al., 2004) and they migrate from specific stem-cell locations (e.g., hippocampus and subventricular zone) to other regions (Clarke, 2003).\nPost-mitotic, but non-differentiated neurons have been shown to migrate in the adult brain in fish (Scott et al., 2012), and in mammals and non-human primates as well (Sawada et al., 2011).\nNot surprisingly, glial cells, stem cells and neurons also migrate during embryonic development. Most notably, post-mitotic neurons destined to fulfill peripheral functions have to migrate over relatively long distances from the neural crest to their target locations (Neuroscience, 2nd ed, Neuronal Migration)."
}
]
}
```
While the same record in HuggingFace `datasets` looks as follows:
```json
{
"context": null,
"external_id": null,
"metadata": "{\"n_characters\": 85, \"passed_quality_check\": \"True\", \"flesch_reading_ease\": 82.39000000000001, \"entropy\": 0.4352176404374839}",
"prompt": "Can brain cells move? By movement I mean long distance migration (preferably within the brain only).",
"response": [],
"response-suggestion": "The question is relatively broad and one should take into account that the brain not only consists of neurons, but also glial cells (supportive cells) and pre-mitotic neuronal stem cells. Furthermore, as critical fellow-scientists have indicated, developmental stage is very important, as the developing embryonic brain is very different from the adult brain.\nHowever, after sifting through various publications, the answer to the question is actually remarkably simple: Yes, brain cells migrate.\nIn the adult brain glial cells migrate in the brain (Kl\u00e4mbt, 2009). Glial cells are involved in a myriad of functions, but a notable example of migrating glial cells are the oligodendrocytes that migrate relative long distances to find their target axons onto which they wrap themselves to form the insulating myelin sheath (Tsai and Miller, 2002).\nNeuronal stem cells migrate over long distances in response to injury (Imitola et al., 2004) and they migrate from specific stem-cell locations (e.g., hippocampus and subventricular zone) to other regions (Clarke, 2003).\nPost-mitotic, but non-differentiated neurons have been shown to migrate in the adult brain in fish (Scott et al., 2012), and in mammals and non-human primates as well (Sawada et al., 2011).\nNot surprisingly, glial cells, stem cells and neurons also migrate during embryonic development. Most notably, post-mitotic neurons destined to fulfill peripheral functions have to migrate over relatively long distances from the neural crest to their target locations (Neuroscience, 2nd ed, Neuronal Migration).",
"response-suggestion-metadata": {
"agent": null,
"score": null,
"type": null
}
}
```
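Note that in the `datasets` view the `metadata` column arrives as a JSON-encoded string rather than a dict; a small stdlib sketch to decode it back:

```python
import json

# Metadata string as it appears in the `datasets` record above.
raw = ('{"n_characters": 85, "passed_quality_check": "True", '
       '"flesch_reading_ease": 82.39000000000001, '
       '"entropy": 0.4352176404374839}')
metadata = json.loads(raw)
```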
### Data Fields
Among the dataset fields, we differentiate between the following:
* **Fields:** These are the dataset records themselves, for the moment just text fields are supported. These are the ones that will be used to provide responses to the questions.
* **prompt** is of type `FieldTypes.text`.
* (optional) **context** is of type `FieldTypes.text`.
* **Questions:** These are the questions that will be asked to the annotators. They can be of different types, such as `RatingQuestion`, `TextQuestion`, `LabelQuestion`, `MultiLabelQuestion`, and `RankingQuestion`.
* **response** is of type `QuestionTypes.text`.
* **Suggestions:** As of Argilla 1.13.0, the suggestions have been included to provide the annotators with suggestions to ease or assist during the annotation process. Suggestions are linked to the existing questions, are always optional, and contain not just the suggestion itself, but also the metadata linked to it, if applicable.
* (optional) **response-suggestion** is of type `QuestionTypes.text`.
Additionally, we also have two more fields that are optional and are the following:
* **✨ NEW** **metadata:** This is an optional field that can be used to provide additional information about the dataset record. This can be useful to provide additional context to the annotators, or to provide additional information about the dataset record itself. For example, you can use this to provide a link to the original source of the dataset record, or to provide additional information about the dataset record itself, such as the author, the date, or the source. The metadata is always optional, and can be potentially linked to the `metadata_properties` defined in the dataset configuration file in `argilla.yaml`.
* **external_id:** This is an optional field that can be used to provide an external ID for the dataset record. This can be useful if you want to link the dataset record to an external resource, such as a database or a file.
### Data Splits
The dataset contains a single split, which is `train`.
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation guidelines
This is a supervised fine-tuning dataset that contains instructions. Please write the response to the instruction in the response field. Take the context into account when writing the response.
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] | 12,115 | [
[
-0.0419921875,
-0.0653076171875,
0.03204345703125,
0.0230560302734375,
-0.01078033447265625,
-0.036407470703125,
-0.0026264190673828125,
-0.042144775390625,
0.046722412109375,
0.050537109375,
-0.0640869140625,
-0.058258056640625,
-0.034942626953125,
0.016265... |
sayan1101/filtered_finetune_run2 | 2023-10-27T19:47:24.000Z | [
"region:us"
] | sayan1101 | null | null | 0 | 5 | 2023-10-27T18:21:59 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 1195434678
num_examples: 2585534
download_size: 668295236
dataset_size: 1195434678
---
# Dataset Card for "filtered_finetune_run2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 461 | [
[
-0.04644775390625,
-0.03076171875,
0.018157958984375,
0.0014209747314453125,
-0.0307464599609375,
-0.0157470703125,
0.007259368896484375,
-0.0011692047119140625,
0.032562255859375,
0.060302734375,
-0.06256103515625,
-0.034881591796875,
-0.034576416015625,
-0... |
quocanh34/private_model_tts3 | 2023-10-28T08:05:31.000Z | [
"region:us"
] | quocanh34 | null | null | 0 | 5 | 2023-10-28T08:03:51 | ---
dataset_info:
features:
- name: id
dtype: string
- name: audio
struct:
- name: array
sequence: float32
- name: path
dtype: string
- name: sampling_rate
dtype: int64
- name: pred_str
dtype: string
- name: pred_str_norm
dtype: string
- name: intent
dtype: string
- name: entities
list:
- name: filler
dtype: string
- name: type
dtype: string
- name: file
dtype: string
splits:
- name: train
num_bytes: 568314768
num_examples: 2139
download_size: 462243736
dataset_size: 568314768
---
# Dataset Card for "private_model_tts3"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 768 | [
[
-0.015869140625,
-0.0036945343017578125,
0.02783203125,
0.0264892578125,
-0.022430419921875,
0.005840301513671875,
0.02838134765625,
-0.015045166015625,
0.034271240234375,
0.036407470703125,
-0.04339599609375,
-0.0655517578125,
-0.048675537109375,
-0.0022907... |
Shishir1807/mistralv2 | 2023-10-28T10:09:03.000Z | [
"region:us"
] | Shishir1807 | null | null | 0 | 5 | 2023-10-28T10:08:40 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
linhtran92/tts_face | 2023-10-28T14:51:06.000Z | [
"region:us"
] | linhtran92 | null | null | 0 | 5 | 2023-10-28T12:25:03 | ---
dataset_info:
features:
- name: sentence_norm
dtype: string
- name: audio
struct:
- name: array
sequence: float64
- name: path
dtype: 'null'
- name: sampling_rate
dtype: int64
- name: id
dtype: string
- name: wer
dtype: float64
splits:
- name: train
num_bytes: 84090967
num_examples: 225
download_size: 64738623
dataset_size: 84090967
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "tts_face"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 663 | [
[
-0.033721923828125,
-0.0139007568359375,
0.0114593505859375,
0.023162841796875,
-0.02532958984375,
0.0177154541015625,
0.0226593017578125,
-0.014434814453125,
0.0682373046875,
0.0279693603515625,
-0.06695556640625,
-0.051116943359375,
-0.048736572265625,
-0.... |
AvishayDev/hebrew-holy-DS-BenIishHay | 2023-10-28T17:24:29.000Z | [
"region:us"
] | AvishayDev | null | null | 0 | 5 | 2023-10-28T17:19:27 | ---
dataset_info:
features:
- name: year
dtype: string
- name: parasha
dtype: string
- name: number
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 2038578
num_examples: 1992
download_size: 948479
dataset_size: 2038578
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "hebrew-holy-DS-BenIishHay"
This dataset was created from [Wikisource (ויקיטקסט)](https://he.wikisource.org/wiki/%D7%A2%D7%9E%D7%95%D7%93_%D7%A8%D7%90%D7%A9%D7%99).
It contains all the Halachot from the book 'Ben Iish Hay', in Hebrew.
Please use it *only* to sanctify the name of G-d in the world! Thanks | 686 | [
[
-0.020050048828125,
-0.052093505859375,
-0.00786590576171875,
-0.0122222900390625,
-0.06329345703125,
-0.026519775390625,
0.0020008087158203125,
-0.0287933349609375,
0.0262298583984375,
0.03753662109375,
-0.050140380859375,
-0.06085205078125,
-0.044464111328125,... |
alihuzezy/Tapal-66-TEXT | 2023-10-28T20:57:17.000Z | [
"region:us"
] | alihuzezy | null | null | 0 | 5 | 2023-10-28T20:51:40 | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 12266
num_examples: 58
download_size: 7905
dataset_size: 12266
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "Tapal-66-TEXT"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 432 | [
[
-0.0292510986328125,
-0.00968170166015625,
0.00882720947265625,
0.032135009765625,
-0.02996826171875,
0.005619049072265625,
0.005580902099609375,
-0.0196533203125,
0.06256103515625,
0.0308837890625,
-0.046539306640625,
-0.05010986328125,
-0.044769287109375,
... |
Raspberry-ai/high_end_product_image_dataset_v1 | 2023-10-29T00:16:07.000Z | [
"region:us"
] | Raspberry-ai | null | null | 0 | 5 | 2023-10-29T00:09:12 | ---
dataset_info:
features:
- name: url
dtype: string
- name: name
dtype: string
- name: color
dtype: string
- name: image_path
dtype: string
- name: brand_name
dtype: string
- name: overall_description
dtype: string
- name: style_description
dtype: string
- name: product_category
dtype: string
- name: product_group_category
dtype: string
- name: section_name
dtype: string
- name: created_at
dtype: string
- name: source
dtype: string
- name: caption
dtype: string
splits:
- name: train
num_bytes: 5353094
num_examples: 5315
download_size: 1681903
dataset_size: 5353094
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "high_end_product_image_dataset_v1"
Product images crawled from the `saksfifthavenue` and `mytheresa` sites.
One product category was crawled for `saksfifthavenue`: `women's apparel`
Two product categories were crawled for `mytheresa`: `Women Clothing`, `Women Shoes`
The following brands were crawled for both sites:
```
brands = [
"alexander-mcqueen",
"balenciaga",
"balmain",
    "bottega-veneta",
"carolina-herrera",
"dolce-gabbana",
"giorgio-armani",
"junya-watanabe",
"monse",
"oscar-de-la-renta",
"prada",
"ralph-lauren-collection",
"saint-laurent",
"valentino-garavani",
]
```
A caption column is included. It is derived from the following columns: `name, overall_description, style_description, color, product_group_category`.
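The exact concatenation rule is not documented; one plausible sketch (the separator, column order, and function name are assumptions):

```python
def build_caption(row):
    """Join the descriptive columns into one caption, skipping empty values."""
    columns = ("name", "overall_description", "style_description",
               "color", "product_group_category")
    parts = (row.get(col) for col in columns)
    return ". ".join(p.strip().rstrip(".") for p in parts if p)

caption = build_caption({
    "name": "Wool blazer",
    "overall_description": "",
    "style_description": None,
    "color": "Black",
    "product_group_category": "Women Clothing",
})
```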
The images are stored in S3; see the `image_path` column.
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 1,878 | [
[
-0.034210205078125,
-0.01763916015625,
-0.003925323486328125,
0.0181121826171875,
-0.049041748046875,
0.0199737548828125,
0.004848480224609375,
-0.030975341796875,
0.042266845703125,
0.037200927734375,
-0.061614990234375,
-0.0787353515625,
-0.04949951171875,
... |
health360/Ultrachat-Multiple-Conversations-Alpaca-Style | 2023-10-29T06:08:22.000Z | [
"region:us"
] | health360 | null | null | 0 | 5 | 2023-10-29T06:02:56 | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 9289363199
num_examples: 1468352
download_size: 4593179681
dataset_size: 9289363199
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "Ultrachat-Multiple-Conversations-Alpaca-Style"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 485 | [
[
-0.049530029296875,
-0.053924560546875,
0.0077056884765625,
0.03033447265625,
-0.0265045166015625,
0.00384521484375,
0.0127716064453125,
-0.031280517578125,
0.07763671875,
0.040374755859375,
-0.061126708984375,
-0.049652099609375,
-0.023590087890625,
-0.0203... |
hemantk089/llama2_7b_fine_tuning_complete_dataset_v5 | 2023-10-29T18:02:36.000Z | [
"region:us"
] | hemantk089 | null | null | 0 | 5 | 2023-10-29T18:02:34 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 339620
num_examples: 915
- name: test
num_bytes: 34719
num_examples: 102
download_size: 106126
dataset_size: 374339
---
# Dataset Card for "llama2_7b_fine_tuning_complete_dataset_v5"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 561 | [
[
-0.031158447265625,
-0.00856781005859375,
0.02532958984375,
0.031341552734375,
-0.03985595703125,
0.004024505615234375,
0.0199737548828125,
-0.01763916015625,
0.037841796875,
0.043701171875,
-0.060333251953125,
-0.06396484375,
-0.0377197265625,
0.00038623809... |
verayang/plainscree | 2023-10-29T22:07:02.000Z | [
"region:us"
] | verayang | null | null | 0 | 5 | 2023-10-29T20:14:20 | ---
dataset_info:
features:
- name: audio_id
dtype: int64
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: cree_transcription
dtype: string
- name: english_transcription
dtype: string
- name: gender
dtype: string
splits:
- name: train
num_bytes: 22116992.0
num_examples: 64
download_size: 22072728
dataset_size: 22116992.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "plainscree"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 647 | [
[
-0.041046142578125,
-0.01580810546875,
0.0157928466796875,
0.028076171875,
-0.0174713134765625,
-0.01194000244140625,
0.0123138427734375,
-0.00868988037109375,
0.06512451171875,
0.040283203125,
-0.0684814453125,
-0.064208984375,
-0.06744384765625,
-0.0281829... |
Abhay22/text_classification_dataset_profile | 2023-10-29T21:15:36.000Z | [
"region:us"
] | Abhay22 | null | null | 0 | 5 | 2023-10-29T21:14:00 | ---
dataset_info:
features:
- name: text
dtype: string
- name: Profile
dtype: string
splits:
- name: train
num_bytes: 2020475.2946482261
num_examples: 1413
- name: test
num_bytes: 357479.7053517739
num_examples: 250
download_size: 742409
dataset_size: 2377955.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
# Dataset Card for "text_classification_dataset_profile"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 619 | [
[
-0.0306396484375,
-0.0201873779296875,
0.013641357421875,
0.01348114013671875,
-0.0033435821533203125,
0.00962066650390625,
0.0103912353515625,
-0.01690673828125,
0.053955078125,
0.035247802734375,
-0.046234130859375,
-0.06317138671875,
-0.048828125,
-0.0156... |
Abhay22/text_classification_dataset_journey | 2023-10-29T21:15:38.000Z | [
"region:us"
] | Abhay22 | null | null | 0 | 5 | 2023-10-29T21:14:02 | ---
dataset_info:
features:
- name: text
dtype: string
- name: Journey
dtype: string
splits:
- name: train
num_bytes: 2016526.6737416617
num_examples: 1401
- name: test
num_bytes: 356958.3262583384
num_examples: 248
download_size: 740837
dataset_size: 2373485.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
# Dataset Card for "text_classification_dataset_journey"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 619 | [
[
-0.017181396484375,
-0.0087127685546875,
0.026458740234375,
0.01338958740234375,
-0.0040435791015625,
0.01146697998046875,
0.0160675048828125,
-0.01287078857421875,
0.03839111328125,
0.040374755859375,
-0.06982421875,
-0.061859130859375,
-0.045318603515625,
... |
Abhay22/text_classification_dataset_area | 2023-10-29T21:15:41.000Z | [
"region:us"
] | Abhay22 | null | null | 0 | 5 | 2023-10-29T21:14:03 | ---
dataset_info:
features:
- name: text
dtype: string
- name: Area
dtype: string
splits:
- name: train
num_bytes: 2010078.5636031649
num_examples: 1396
- name: test
num_bytes: 355651.4363968351
num_examples: 247
download_size: 739481
dataset_size: 2365730.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
# Dataset Card for "text_classification_dataset_area"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 613 | [
[
-0.03057861328125,
-0.028900146484375,
0.0233306884765625,
0.0178375244140625,
-0.011383056640625,
-0.0128631591796875,
0.00862884521484375,
-0.0051727294921875,
0.052398681640625,
0.030303955078125,
-0.036590576171875,
-0.06573486328125,
-0.0445556640625,
-... |
automated-research-group/llama2_7b_bf16-winogrande | 2023-10-30T07:09:53.000Z | [
"region:us"
] | automated-research-group | null | null | 0 | 5 | 2023-10-30T07:03:41 | ---
dataset_info:
features:
- name: answer
dtype: string
- name: id
dtype: string
- name: question
dtype: string
- name: input_perplexity
dtype: float64
- name: input_likelihood
dtype: float64
- name: output_perplexity
dtype: float64
- name: output_likelihood
dtype: float64
splits:
- name: validation
num_bytes: 357232
num_examples: 1267
download_size: 0
dataset_size: 357232
configs:
- config_name: default
data_files:
- split: validation
path: data/validation-*
---
# Dataset Card for "llama2_7b_bf16-winogrande"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 716 | [
[
-0.035003662109375,
-0.01064300537109375,
0.018463134765625,
0.03240966796875,
-0.04095458984375,
0.0129852294921875,
0.0225067138671875,
-0.0291748046875,
0.054595947265625,
0.0296783447265625,
-0.046142578125,
-0.05810546875,
-0.0596923828125,
-0.018264770... |
re2panda/click_bate_random_sample2 | 2023-10-30T09:52:04.000Z | [
"region:us"
] | re2panda | null | null | 0 | 5 | 2023-10-30T09:33:21 | Entry not found | 15 | [
[
-0.0214080810546875,
-0.01497650146484375,
0.05718994140625,
0.028839111328125,
-0.0350341796875,
0.046539306640625,
0.052490234375,
0.00507354736328125,
0.051361083984375,
0.016998291015625,
-0.05206298828125,
-0.01496124267578125,
-0.06036376953125,
0.0379... |
bibidentuhanoi/BMO_vicuna_function | 2023-11-01T08:26:19.000Z | [
"region:us"
] | bibidentuhanoi | null | null | 0 | 5 | 2023-10-30T15:22:34 | ---
dataset_info:
features:
- name: id
dtype: string
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
splits:
- name: train
num_bytes: 159708
num_examples: 147
download_size: 82457
dataset_size: 159708
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "BMO_vicuna_function"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 549 | [
[
-0.03045654296875,
-0.034149169921875,
0.02252197265625,
0.0283966064453125,
-0.034210205078125,
-0.0266265869140625,
0.0189666748046875,
-0.01171112060546875,
0.046417236328125,
0.035980224609375,
-0.056610107421875,
-0.056549072265625,
-0.039215087890625,
... |
bibidentuhanoi/BMO_BASE_FUNCTION_TEXT | 2023-11-01T17:19:30.000Z | [
"region:us"
] | bibidentuhanoi | null | null | 0 | 5 | 2023-10-30T15:26:57 | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 227186
num_examples: 294
download_size: 90313
dataset_size: 227186
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "BMO_BASE_FUNCTION_TEXT"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 445 | [
[
-0.02001953125,
-0.03656005859375,
0.0189056396484375,
0.0211029052734375,
-0.0298614501953125,
-0.01483154296875,
0.00353240966796875,
-0.012237548828125,
0.0299835205078125,
0.044281005859375,
-0.05133056640625,
-0.05108642578125,
-0.0509033203125,
-0.0059... |
Hessa/tqa_train_instruction | 2023-10-30T17:29:16.000Z | [
"region:us"
] | Hessa | null | null | 0 | 5 | 2023-10-30T17:28:50 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
detectors/lsun_r-ood | 2023-10-30T18:25:52.000Z | [
"task_categories:image-classification",
"size_categories:10K<n<100K",
"license:unknown",
"arxiv:1610.01119",
"arxiv:1706.02690",
"region:us"
] | detectors | null | null | 0 | 5 | 2023-10-30T17:36:25 | ---
license: unknown
size_categories: 10K<n<100K
task_categories:
- image-classification
paperswithcode_id: lsun
pretty_name: LSUN (r)
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: image
dtype: image
splits:
- name: train
num_bytes: 27566116.0
num_examples: 10000
download_size: 0
dataset_size: 27566116.0
---
# Dataset Card for LSUN (r) for OOD Detection
<!-- Provide a quick summary of the dataset. -->
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Original Dataset Authors**: Limin Wang, Sheng Guo, Weilin Huang, Yuanjun Xiong, Yu Qiao
- **OOD Split Authors:** Shiyu Liang, Yixuan Li, R. Srikant
- **Shared by:** Eduardo Dadalto
- **License:** unknown
### Dataset Sources
<!-- Provide the basic links for the dataset. -->
- **Original Dataset Paper:** http://arxiv.org/abs/1610.01119v2
- **First OOD Application Paper:** http://arxiv.org/abs/1706.02690v5
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
This dataset is intended to be used as an out-of-distribution dataset for image classification benchmarks.
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
This dataset is not annotated.
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
The goal in curating and sharing this dataset on the HuggingFace Hub is to accelerate research and promote reproducibility in generalized Out-of-Distribution (OOD) detection.
Check the python library [detectors](https://github.com/edadaltocg/detectors) if you are interested in OOD detection.
### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
Please check original paper for details on the dataset.
### Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
Please check original paper for details on the dataset.
## Citation
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
```bibtex
@software{detectors2023,
author = {Eduardo Dadalto},
title = {Detectors: a Python Library for Generalized Out-Of-Distribution Detection},
url = {https://github.com/edadaltocg/detectors},
doi = {https://doi.org/10.5281/zenodo.7883596},
month = {5},
year = {2023}
}
@article{1706.02690v5,
author = {Shiyu Liang and Yixuan Li and R. Srikant},
title = {Enhancing The Reliability of Out-of-distribution Image Detection in
Neural Networks},
year = {2017},
month = {6},
note = {ICLR 2018},
archiveprefix = {arXiv},
url = {http://arxiv.org/abs/1706.02690v5}
}
@article{1610.01119v2,
author = {Limin Wang and Sheng Guo and Weilin Huang and Yuanjun Xiong and Yu Qiao},
title = {Knowledge Guided Disambiguation for Large-Scale Scene Classification
with Multi-Resolution CNNs},
year = {2016},
month = {10},
note = {To appear in IEEE Transactions on Image Processing. Code and models
are available at https://github.com/wanglimin/MRCNN-Scene-Recognition},
archiveprefix = {arXiv},
url = {http://arxiv.org/abs/1610.01119v2}
}
```
## Dataset Card Authors
Eduardo Dadalto
## Dataset Card Contact
https://huggingface.co/edadaltocg | 3,814 | [
[
-0.03558349609375,
-0.034515380859375,
0.030914306640625,
-0.00548553466796875,
-0.0280303955078125,
-0.036407470703125,
0.0012464523315429688,
-0.042388916015625,
-0.0021991729736328125,
0.0245513916015625,
-0.01751708984375,
-0.047149658203125,
-0.034973144531... |
User1115/smallTest | 2023-11-02T04:00:19.000Z | [
"region:us"
] | User1115 | null | null | 0 | 5 | 2023-10-31T06:39:38 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
Lollitor/MyPubChem1 | 2023-10-31T10:24:40.000Z | [
"region:us"
] | Lollitor | null | null | 0 | 5 | 2023-10-31T10:22:34 | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 148314.6
num_examples: 900
- name: validation
num_bytes: 16479.4
num_examples: 100
download_size: 55281
dataset_size: 164794.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
---
# Dataset Card for "MyPubChem1"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 553 | [
[
-0.0499267578125,
-0.0199432373046875,
0.0092315673828125,
0.025238037109375,
-0.012054443359375,
-0.0036220550537109375,
0.02850341796875,
0.0016574859619140625,
0.068115234375,
0.038421630859375,
-0.06646728515625,
-0.046630859375,
-0.036773681640625,
-0.0... |
psyche/instructions | 2023-10-31T12:56:26.000Z | [
"region:us"
] | psyche | null | null | 0 | 5 | 2023-10-31T12:56:22 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
dataset_info:
features:
- name: input
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 67454649
num_examples: 112104
- name: validation
num_bytes: 7528895
num_examples: 12429
download_size: 43318862
dataset_size: 74983544
---
# Dataset Card for "instructions"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 599 | [
[
-0.032440185546875,
-0.0269775390625,
0.026824951171875,
0.0250701904296875,
-0.01329803466796875,
-0.00811004638671875,
0.01473236083984375,
0.00533294677734375,
0.048980712890625,
0.043426513671875,
-0.0775146484375,
-0.06024169921875,
-0.03790283203125,
-... |
pankajemplay/llama-intent-1K | 2023-10-31T14:24:24.000Z | [
"region:us"
] | pankajemplay | null | null | 0 | 5 | 2023-10-31T14:24:23 | ---
dataset_info:
features:
- name: User Query
dtype: string
- name: Intent
dtype: string
- name: id type
dtype: string
- name: id value
dtype: string
- name: id slot filled
dtype: bool
- name: Task
dtype: string
- name: task slot filled
dtype: bool
- name: Bot Response
dtype: string
- name: text
dtype: string
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 640822
num_examples: 1322
download_size: 191718
dataset_size: 640822
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "llama-intent-1K"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 790 | [
[
-0.020416259765625,
-0.0156707763671875,
0.0234375,
0.039154052734375,
-0.034454345703125,
-0.00982666015625,
0.032684326171875,
-0.00980377197265625,
0.0802001953125,
0.047149658203125,
-0.07666015625,
-0.0665283203125,
-0.050384521484375,
-0.00706481933593... |
moonseok/whisper_speechcommandsV2_data | 2023-10-31T19:39:30.000Z | [
"region:us"
] | moonseok | null | null | 0 | 5 | 2023-10-31T15:46:28 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
dataset_info:
features:
- name: file
dtype: string
- name: label
dtype:
class_label:
names:
'0': 'yes'
'1': 'no'
'2': up
'3': down
'4': left
'5': right
'6': 'on'
'7': 'off'
'8': stop
'9': go
'10': zero
'11': one
'12': two
'13': three
'14': four
'15': five
'16': six
'17': seven
'18': eight
'19': nine
'20': bed
'21': bird
'22': cat
'23': dog
'24': happy
'25': house
'26': marvin
'27': sheila
'28': tree
'29': wow
'30': backward
'31': forward
'32': follow
'33': learn
'34': visual
'35': _silence_
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 81484786167
num_examples: 84848
- name: validation
num_bytes: 9586332258
num_examples: 9982
- name: test
num_bytes: 4696163330
num_examples: 4890
download_size: 2260418103
dataset_size: 95767281755
---
# Dataset Card for "whisper_speechcommandsV2_data"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 1,579 | [
[
-0.0167694091796875,
-0.02099609375,
0.00411224365234375,
0.0232086181640625,
-0.004467010498046875,
0.0002498626708984375,
-0.0015268325805664062,
-0.00926971435546875,
0.041351318359375,
0.035614013671875,
-0.06060791015625,
-0.055511474609375,
-0.053741455078... |
yardeny/processed_t5_context_len_64 | 2023-10-31T17:54:22.000Z | [
"region:us"
] | yardeny | null | null | 0 | 5 | 2023-10-31T17:39:36 | ---
dataset_info:
features:
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
splits:
- name: train
num_bytes: 9745169624.0
num_examples: 29710883
download_size: 3781295100
dataset_size: 9745169624.0
---
# Dataset Card for "processed_t5_context_len_64"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 437 | [
[
-0.0380859375,
-0.01471710205078125,
0.037139892578125,
0.0263214111328125,
-0.03302001953125,
-0.0111236572265625,
-0.0012063980102539062,
-0.0220794677734375,
0.040283203125,
0.0362548828125,
-0.05914306640625,
-0.06854248046875,
-0.0462646484375,
-0.00353... |
LucidBrains/LlamaPixely | 2023-11-01T11:17:23.000Z | [
"region:us"
] | LucidBrains | null | null | 0 | 5 | 2023-11-01T11:16:34 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
splits:
- name: train
num_bytes: 3406311253
num_examples: 319936
download_size: 430672710
dataset_size: 3406311253
---
# Dataset Card for "LlamaPixely"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 500 | [
[
-0.02886962890625,
-0.00719451904296875,
0.0261077880859375,
0.030059814453125,
-0.026702880859375,
0.00841522216796875,
0.0224609375,
-0.0235595703125,
0.0733642578125,
0.035736083984375,
-0.062744140625,
-0.06610107421875,
-0.0560302734375,
-0.013061523437... |
peldrak/coast | 2023-11-02T10:28:27.000Z | [
"region:us"
] | peldrak | null | null | 0 | 5 | 2023-11-01T14:02:26 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype: image
splits:
- name: train
num_bytes: 1098506769.894
num_examples: 6594
- name: validation
num_bytes: 173113819.0
num_examples: 827
download_size: 1414217267
dataset_size: 1271620588.894
---
# Dataset Card for "coast"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 603 | [
[
-0.043365478515625,
-0.00862884521484375,
0.03399658203125,
0.0137786865234375,
-0.0233154296875,
-0.0041656494140625,
0.01422882080078125,
-0.013153076171875,
0.05181884765625,
0.039703369140625,
-0.07220458984375,
-0.06787109375,
-0.0280609130859375,
-0.02... |
steelozazala/Test | 2023-11-02T04:42:28.000Z | [
"region:us"
] | steelozazala | null | null | 0 | 5 | 2023-11-02T04:41:23 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
kjappelbaum/chemnlp-robocrys | 2023-11-02T08:43:41.000Z | [
"region:us"
] | kjappelbaum | null | null | 0 | 5 | 2023-11-02T08:40:17 | ---
dataset_info:
features:
- name: formula
dtype: string
- name: spg_symbol
dtype: string
- name: crystal_system
dtype: string
- name: dimensionality
dtype: int64
- name: gga_gga+u_r2scan_energy_above_hull
dtype: 'null'
- name: gga_gga+u_r2scan_formation_energy_per_atom
dtype: 'null'
- name: gga_gga+u_energy_above_hull
dtype: 'null'
- name: gga_gga+u_formation_energy_per_atom
dtype: 'null'
- name: description
dtype: string
- name: description_w_bondlengths
dtype: string
- name: cifstr
dtype: string
splits:
- name: train
num_bytes: 785364472
num_examples: 117576
download_size: 185853489
dataset_size: 785364472
---
# Dataset Card for "chemnlp-robocrys"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 876 | [
[
-0.017547607421875,
0.006488800048828125,
0.0225067138671875,
-0.0016946792602539062,
-0.0026226043701171875,
0.0287322998046875,
0.008514404296875,
-0.02001953125,
0.05963134765625,
0.03302001953125,
-0.085693359375,
-0.057586669921875,
-0.02972412109375,
-... |
hemantk089/llama2_7b_fine_tuning_complete_dataset_v7 | 2023-11-02T09:11:04.000Z | [
"region:us"
] | hemantk089 | null | null | 0 | 5 | 2023-11-02T09:11:02 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 301722
num_examples: 813
- name: test
num_bytes: 72617
num_examples: 204
download_size: 107905
dataset_size: 374339
---
# Dataset Card for "llama2_7b_fine_tuning_complete_dataset_v7"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 561 | [
[
-0.0271148681640625,
-0.01262664794921875,
0.0237274169921875,
0.029998779296875,
-0.0418701171875,
0.00568389892578125,
0.018341064453125,
-0.01509857177734375,
0.040496826171875,
0.044464111328125,
-0.05364990234375,
-0.058319091796875,
-0.039093017578125,
... |
JJhooww/Orcaptbr | 2023-11-02T14:23:16.000Z | [
"region:us"
] | JJhooww | null | null | 0 | 5 | 2023-11-02T09:42:41 | ---
dataset_info:
features:
- name: question
dtype: string
- name: response
dtype: string
splits:
- name: train
num_bytes: 58783149
num_examples: 35067
download_size: 36092675
dataset_size: 58783149
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "Orca_ptbr"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 482 | [
[
-0.03363037109375,
-0.026336669921875,
0.0030307769775390625,
0.0193634033203125,
-0.034149169921875,
0.00202178955078125,
0.034912109375,
-0.022705078125,
0.0506591796875,
0.0352783203125,
-0.04254150390625,
-0.06048583984375,
-0.043426513671875,
-0.0204315... |
sshreyy/CAI_DC_1 | 2023-11-02T12:18:28.000Z | [
"region:us"
] | sshreyy | null | null | 0 | 5 | 2023-11-02T12:18:10 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: text
dtype: string
- name: labels
dtype: int64
splits:
- name: train
num_bytes: 57851094
num_examples: 25085
- name: test
num_bytes: 9688350
num_examples: 4203
download_size: 25349627
dataset_size: 67539444
---
# Dataset Card for "CAI_DC_1"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 573 | [
[
-0.03790283203125,
-0.01544952392578125,
0.00788116455078125,
0.0171966552734375,
-0.02008056640625,
0.000518798828125,
0.0372314453125,
0.01039886474609375,
0.05670166015625,
0.0239410400390625,
-0.058837890625,
-0.0654296875,
-0.038299560546875,
-0.0271606... |
cucolab/lux-headlines | 2023-11-02T14:13:07.000Z | [
"region:us"
] | cucolab | null | null | 0 | 5 | 2023-11-02T14:05:43 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
jmamou/augmented-glue-sst2 | 2022-07-17T12:25:34.000Z | [
"task_categories:text-classification",
"task_ids:sentiment-classification",
"annotations_creators:machine-generated",
"language_creators:machine-generated",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:original",
"language:en-US",
"license:unknown",
"region:us"
] | jmamou | null | null | 0 | 4 | 2022-03-02T23:29:22 | ---
annotations_creators:
- machine-generated
extended:
- original
language_creators:
- machine-generated
language:
- en-US
license:
- unknown
multilinguality:
- monolingual
size_categories:
- 100K<n<1M
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- sentiment-classification
---
# Dataset Card for Augmented-GLUE-SST2
Automatically augmented data from the train split of the SST-2 dataset, generated using a conditional text generation approach.
The code used to generate this file will soon be available at https://github.com/IntelLabs/nlp-architect.
| 562 | [
[
0.0012187957763671875,
-0.04986572265625,
0.006427764892578125,
0.010955810546875,
-0.0169219970703125,
0.01447296142578125,
-0.0176849365234375,
-0.03021240234375,
0.044342041015625,
0.03179931640625,
-0.055877685546875,
-0.0151519775390625,
-0.03204345703125,
... |
kaushikacharya/github-issues | 2022-02-01T16:55:28.000Z | [
"region:us"
] | kaushikacharya | null | null | 0 | 4 | 2022-03-02T23:29:22 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |