id stringlengths 2 115 | lastModified stringlengths 24 24 | tags list | author stringlengths 2 42 ⌀ | description stringlengths 0 6.67k ⌀ | citation stringlengths 0 10.7k ⌀ | likes int64 0 3.66k | downloads int64 0 8.89M | created timestamp[us] | card stringlengths 11 977k | card_len int64 11 977k | embeddings list |
|---|---|---|---|---|---|---|---|---|---|---|---|
qwerty8409/digesion_Ayurveda | 2023-07-22T06:41:00.000Z | [
"region:us"
] | qwerty8409 | null | null | 1 | 9 | 2023-07-22T06:39:33 | ---
# For reference on model card metadata, see the spec: https://github.com/huggingface/hub-docs/blob/main/datasetcard.md?plain=1
# Doc / guide: https://huggingface.co/docs/hub/datasets-cards
{}
---
# Dataset Card for Dataset Name
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1).
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] | 1,732 | [
[
-0.038177490234375,
-0.0298309326171875,
-0.00359344482421875,
0.027099609375,
-0.0323486328125,
0.003795623779296875,
-0.017242431640625,
-0.02020263671875,
0.049041748046875,
0.04046630859375,
-0.06353759765625,
-0.08062744140625,
-0.052947998046875,
0.002... |
javaabu/dhivehi-shaafiu-speech | 2023-08-08T15:07:47.000Z | [
"task_categories:automatic-speech-recognition",
"task_categories:text-to-speech",
"size_categories:1K<n<10K",
"language:dv",
"license:apache-2.0",
"audio",
"dhivehi",
"speech",
"shaafiu",
"narrated",
"region:us"
] | javaabu | Dhivehi Shaafiu Speech is a single-speaker Dhivehi speech dataset created by [Javaabu Pvt. Ltd.](https://javaabu.com).
The dataset contains around 16.5 hrs of text read by professional Maldivian narrator Muhammadh Shaafiu.
The text used for the recordings was scraped from various Maldivian news websites. | @misc{Javaabu_2023,
title = "Dhivehi Shaafiu Speech Dataset",
url = "https://huggingface.co/datasets/javaabu/dhivehi-shaafiu-speech",
journal = "Hugging Face",
author = "Javaabu Pvt. Ltd.",
year = "2023",
month = jul
} | 1 | 9 | 2023-07-22T18:47:18 | ---
license: apache-2.0
task_categories:
- automatic-speech-recognition
- text-to-speech
language:
- dv
tags:
- audio
- dhivehi
- speech
- shaafiu
- narrated
size_categories:
- 1K<n<10K
---
# Dataset Card for Dhivehi Shaafiu Speech 1.0
### Dataset Summary
Dhivehi Shaafiu Speech is a single-speaker Dhivehi speech dataset created by [Javaabu Pvt. Ltd.](https://javaabu.com).
The dataset contains around 16.5 hrs of text read by professional Maldivian narrator Muhammadh Shaafiu.
The text used for the recordings was scraped from various Maldivian news websites.
### Supported Tasks and Leaderboards
- Automatic Speech Recognition
- Text-to-Speech
### Languages
Dhivehi
## Dataset Structure
### Data Instances
A typical data point comprises the path to the audio file and its sentence.
```json
{
'path': 'dhivehi-shaafiu-speech-train/waves/shafiu_1_d35e8g63.wav',
'sentence': 'ކަމަށާއި އެކަކު އަނެކަކައް އަނގައިގެން ނުބުނާކަމަށް އޭނާ ވިދާޅުވިއެވެ.',
'audio': {
'path': 'dhivehi-shaafiu-speech-train/waves/shafiu_1_d35e8g63.wav',
'array': array([-0.00048828, -0.00018311, -0.00137329, ..., 0.00079346, 0.00091553, 0.00085449], dtype=float32),
'sampling_rate': 16000
},
}
```
### Data Fields
- path (string): The path to the audio file.
- sentence (string): The transcription for the audio file.
- audio (dict): A dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate. Note that when accessing the audio column (`dataset[0]["audio"]`), the audio file is automatically decoded and resampled to `dataset.features["audio"].sampling_rate`. Decoding and resampling a large number of audio files can take a significant amount of time, so it is important to query the sample index before the `"audio"` column; i.e., `dataset[0]["audio"]` should always be preferred over `dataset["audio"][0]`.
### Data Splits
The speech material has been subdivided into portions for train, test and validation.
| | Train | Validation | Test | Total |
| ---------------- |----------|------------|----------|----------|
| Utterances | 6657 | 832 | 832 | 8321 |
| Duration | 13:30:13 | 01:44:36 | 01:40:15 | 16:55:04 |
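As a quick sanity check, the per-split durations above sum to the stated total. A minimal, self-contained sketch (the timings are copied from the table, not computed from the audio):

```python
# Sum the per-split durations from the table above and compare against
# the stated total of 16:55:04. Values are copied from the card.
def to_seconds(hms: str) -> int:
    h, m, s = (int(x) for x in hms.split(":"))
    return h * 3600 + m * 60 + s

durations = {"train": "13:30:13", "validation": "01:44:36", "test": "01:40:15"}
total_seconds = sum(to_seconds(d) for d in durations.values())

print(total_seconds == to_seconds("16:55:04"))  # → True
```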
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
Data was collected through the AduEhy TTS Management System developed by Javaabu.
The narrator was shown text snippets one at a time, which were then read and recorded through the browser.
Only minimal text normalization has been performed, which involved replacing multiple whitespaces and new lines with single spaces.
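The normalization described above can be sketched as a single regex substitution (a minimal illustration of the idea, not the project's actual code):

```python
import re

def normalize(text: str) -> str:
    # Collapse runs of whitespace, including newlines, into single spaces.
    return re.sub(r"\s+", " ", text)

print(normalize("multiple  spaces\nand\n\nnew lines"))  # → multiple spaces and new lines
```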
#### Who are the source language producers?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
```
@misc{Javaabu_2023,
title = "Dhivehi Shaafiu Speech Dataset",
url = "https://huggingface.co/datasets/javaabu/dhivehi-shaafiu-speech",
journal = "Hugging Face",
author = {{Javaabu Pvt. Ltd.}},
year = "2023",
month = jul
}
```
### Contributions
- [Arushad Ahmed](https://arushad.org)
- [Mohamed Jailam](https://github.com/muhammedjailam)
- [Ibrahim Shareef](https://github.com/ihshareef) | 3,643 | [
[
-0.0291595458984375,
-0.050079345703125,
-0.0009665489196777344,
0.006687164306640625,
-0.02978515625,
-0.004314422607421875,
-0.0350341796875,
-0.031585693359375,
0.0293426513671875,
0.03607177734375,
-0.059539794921875,
-0.057708740234375,
-0.041473388671875,
... |
emozilla/booksum-summary-analysis_llama-16384 | 2023-07-23T18:24:22.000Z | [
"region:us"
] | emozilla | null | null | 1 | 9 | 2023-07-23T18:23:53 | ---
dataset_info:
features:
- name: chapter
dtype: string
- name: text
dtype: string
- name: type
dtype: string
splits:
- name: train
num_bytes: 210534702.2666892
num_examples: 11808
- name: validation
num_bytes: 43846669.0
num_examples: 2234
- name: test
num_bytes: 27106410.273220748
num_examples: 1657
download_size: 134314056
dataset_size: 281487781.53990996
---
# Dataset Card for "booksum-summary-analysis_llama-16384"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 613 | [
[
-0.03338623046875,
0.002567291259765625,
0.00797271728515625,
0.01412200927734375,
-0.038970947265625,
-0.001537322998046875,
0.0287017822265625,
-0.0012521743774414062,
0.0625,
0.03961181640625,
-0.048980712890625,
-0.061767578125,
-0.05352783203125,
0.0053... |
arychaud/piimask-hackathon | 2023-07-26T21:44:12.000Z | [
"region:us"
] | arychaud | null | null | 0 | 9 | 2023-07-26T20:29:46 | # mysqlclient
This project is a fork of [MySQLdb1](https://github.com/farcepest/MySQLdb1).
This project adds Python 3 support and fixed many bugs.
* PyPI: https://pypi.org/project/mysqlclient/
* GitHub: https://github.com/PyMySQL/mysqlclient
## Support
**Do not use the GitHub issue tracker to ask for help. OSS maintainers are not free tech support.**
When your question relates to Python rather than MySQL:
* Python mailing list [python-list](https://mail.python.org/mailman/listinfo/python-list)
* Slack [pythondev.slack.com](https://pyslackers.com/web/slack)
Or when you have a question about MySQL:
* [MySQL Community on Slack](https://lefred.be/mysql-community-on-slack/)
## Install
### Windows
Building mysqlclient on Windows is very hard, but there are binary wheels you can install easily.
If binary wheels do not exist for your version of Python, it may be possible to
build from source, but if this does not work, **do not come asking for support.**
To build from source, download the
[MariaDB C Connector](https://mariadb.com/downloads/#connectors) and install
it. It must be installed in the default location
(usually "C:\Program Files\MariaDB\MariaDB Connector C" or
"C:\Program Files (x86)\MariaDB\MariaDB Connector C" for 32-bit). If you
build the connector yourself or install it in a different location, set the
environment variable `MYSQLCLIENT_CONNECTOR` before installing. Once you have
the connector installed and an appropriate version of Visual Studio for your
version of Python:
```
$ pip install mysqlclient
```
### macOS (Homebrew)
Install MySQL and mysqlclient:
```
# Assume you are activating Python 3 venv
$ brew install mysql pkg-config
$ pip install mysqlclient
```
If you don't want to install MySQL server, you can use mysql-client instead:
```
# Assume you are activating Python 3 venv
$ brew install mysql-client pkg-config
$ export PKG_CONFIG_PATH="/opt/homebrew/opt/mysql-client/lib/pkgconfig"
$ pip install mysqlclient
```
### Linux
**Note that these are basic steps; I cannot provide complete build instructions for every environment. If you see an error, you should fix it yourself or ask for support in a user forum. Don't file an issue on the issue tracker.**
You may need to install the Python 3 and MySQL development headers and libraries like so:
* `$ sudo apt-get install python3-dev default-libmysqlclient-dev build-essential` # Debian / Ubuntu
* `% sudo yum install python3-devel mysql-devel` # Red Hat / CentOS
Then you can install mysqlclient via pip:
```
$ pip install mysqlclient
```
### Customize build (POSIX)
mysqlclient uses `pkg-config --cflags --libs mysqlclient` by default for finding
compiler/linker flags.
You can use `MYSQLCLIENT_CFLAGS` and `MYSQLCLIENT_LDFLAGS` environment
variables to customize compiler/linker options.
```
$ export MYSQLCLIENT_CFLAGS=`pkg-config mysqlclient --cflags`
$ export MYSQLCLIENT_LDFLAGS=`pkg-config mysqlclient --libs`
$ pip install mysqlclient
```
### Documentation
Documentation is hosted on [Read The Docs](https://mysqlclient.readthedocs.io/)
| 3,093 | [
[
-0.04486083984375,
-0.045806884765625,
0.0302734375,
0.0452880859375,
-0.009124755859375,
-0.006778717041015625,
0.0177001953125,
-0.0158538818359375,
0.038848876953125,
0.0316162109375,
-0.05767822265625,
-0.01453399658203125,
0.01255035400390625,
0.0296020... |
Brendan/multiwoz_turns_v24 | 2023-07-27T01:22:40.000Z | [
"region:us"
] | Brendan | null | null | 0 | 9 | 2023-07-27T01:22:31 | ---
dataset_info:
features:
- name: dialogue_id
dtype: string
- name: turn_id
dtype: int64
- name: user
dtype: string
- name: system_response
dtype: string
- name: history
sequence: string
- name: system_acts
struct:
- name: Attraction-Inform
sequence:
sequence: string
- name: Attraction-NoOffer
sequence:
sequence: string
- name: Attraction-Recommend
sequence:
sequence: string
- name: Attraction-Request
sequence:
sequence: string
- name: Attraction-Select
sequence:
sequence: string
- name: Booking-Book
sequence:
sequence: string
- name: Booking-Inform
sequence:
sequence: string
- name: Booking-NoBook
sequence:
sequence: string
- name: Booking-Request
sequence:
sequence: string
- name: Hotel-Inform
sequence:
sequence: string
- name: Hotel-NoOffer
sequence:
sequence: string
- name: Hotel-Recommend
sequence:
sequence: string
- name: Hotel-Request
sequence:
sequence: string
- name: Hotel-Select
sequence:
sequence: string
- name: Restaurant-Inform
sequence:
sequence: string
- name: Restaurant-NoOffer
sequence:
sequence: string
- name: Restaurant-Recommend
sequence:
sequence: string
- name: Restaurant-Request
sequence:
sequence: string
- name: Restaurant-Select
sequence:
sequence: string
- name: Taxi-Inform
sequence:
sequence: string
- name: Taxi-Request
sequence:
sequence: string
- name: Train-Inform
sequence:
sequence: string
- name: Train-NoOffer
sequence:
sequence: string
- name: Train-OfferBook
sequence:
sequence: string
- name: Train-OfferBooked
sequence:
sequence: string
- name: Train-Request
sequence:
sequence: string
- name: Train-Select
sequence:
sequence: string
- name: general-bye
sequence:
sequence: string
- name: general-greet
sequence:
sequence: string
- name: general-reqmore
sequence:
sequence: string
- name: general-welcome
sequence:
sequence: string
- name: belief_state
sequence:
sequence: string
- name: prev_belief_state
sequence:
sequence: string
- name: belief_state_delta
sequence:
sequence: string
- name: degenerate_user
dtype: bool
splits:
- name: train
num_bytes: 71669619
num_examples: 56719
- name: validation
num_bytes: 9862893
num_examples: 7374
- name: test
num_bytes: 9864860
num_examples: 7368
download_size: 15883931
dataset_size: 91397372
---
# Dataset Card for "multiwoz_turns_v24"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 3,050 | [
[
-0.05517578125,
-0.011505126953125,
0.01508331298828125,
0.03741455078125,
-0.0165252685546875,
0.0045623779296875,
0.01995849609375,
-0.007778167724609375,
0.0494384765625,
0.041900634765625,
-0.10247802734375,
-0.046112060546875,
-0.0352783203125,
-0.02423... |
HydraLM/chemistry_dataset_alpaca | 2023-07-27T18:43:22.000Z | [
"region:us"
] | HydraLM | null | null | 2 | 9 | 2023-07-27T18:43:14 | ---
dataset_info:
features:
- name: input
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 45485759
num_examples: 19999
download_size: 21441377
dataset_size: 45485759
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "chemistry_dataset_alpaca"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 492 | [
[
-0.039337158203125,
-0.018585205078125,
0.033660888671875,
0.01348876953125,
-0.01216888427734375,
0.00399017333984375,
0.0311279296875,
-0.006744384765625,
0.08074951171875,
0.0330810546875,
-0.05767822265625,
-0.06817626953125,
-0.04058837890625,
-0.016723... |
aditijha/instruct_v3_subset | 2023-07-27T20:13:25.000Z | [
"region:us"
] | aditijha | null | null | 0 | 9 | 2023-07-27T20:13:20 | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: response
dtype: string
- name: source
dtype: string
splits:
- name: train
num_bytes: 3930962.2554168818
num_examples: 1000
download_size: 2374280
dataset_size: 3930962.2554168818
---
# Dataset Card for "instruct_v3_subset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 456 | [
[
-0.037109375,
-0.009185791015625,
0.0238037109375,
0.01470947265625,
-0.0115966796875,
-0.0079498291015625,
0.032623291015625,
-0.01381683349609375,
0.045867919921875,
0.04925537109375,
-0.062164306640625,
-0.047576904296875,
-0.035247802734375,
-0.011878967... |
BigSuperbPrivate/DialogueActClassification_DailyTalk | 2023-08-02T08:44:23.000Z | [
"region:us"
] | BigSuperbPrivate | null | null | 0 | 9 | 2023-07-29T14:55:36 | ---
dataset_info:
features:
- name: file
dtype: string
- name: audio
dtype: audio
- name: instruction
dtype: string
- name: label
dtype: string
splits:
- name: train
num_bytes: 4844413128.351
num_examples: 16623
- name: validation
num_bytes: 679648410.816
num_examples: 2392
download_size: 5172632764
dataset_size: 5524061539.167
---
# Dataset Card for "DailyTalk_DialogueActClassification"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 575 | [
[
-0.0228271484375,
-0.0244598388671875,
0.010589599609375,
0.020111083984375,
0.003620147705078125,
0.009490966796875,
0.01531219482421875,
-0.0251312255859375,
0.048583984375,
0.034423828125,
-0.067626953125,
-0.06829833984375,
-0.0318603515625,
-0.035461425... |
jeffnyman/emotions | 2023-07-29T18:10:20.000Z | [
"task_categories:text-classification",
"task_ids:multi-class-classification",
"size_categories:10K<n<100K",
"language:en",
"license:cc-by-sa-4.0",
"emotion-classification",
"region:us"
] | jeffnyman | Emotion is a dataset of English Twitter messages with six basic emotions:
anger, fear, joy, love, sadness, and surprise. For more detailed information
please refer to the paper. | @inproceedings{saravia-etal-2018-carer,
title = "{CARER}: Contextualized Affect Representations for Emotion Recognition",
author = "Saravia, Elvis and
Liu, Hsien-Chi Toby and
Huang, Yen-Hao and
Wu, Junlin and
Chen, Yi-Shin",
booktitle = "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing",
month = oct # "-" # nov,
year = "2018",
address = "Brussels, Belgium",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/D18-1404",
doi = "10.18653/v1/D18-1404",
pages = "3687--3697",
abstract = "Emotions are expressed in nuanced ways, which varies by collective or individual experiences, knowledge, and beliefs. Therefore, to understand emotion, as conveyed through text, a robust mechanism capable of capturing and modeling different linguistic nuances and phenomena is needed. We propose a semi-supervised, graph-based algorithm to produce rich structural descriptors which serve as the building blocks for constructing contextualized affect representations from text. The pattern-based representations are further enriched with word embeddings and evaluated through several emotion recognition tasks. Our experimental results demonstrate that the proposed method outperforms state-of-the-art techniques on emotion recognition tasks.",
} | 0 | 9 | 2023-07-29T16:18:01 | ---
pretty_name: Emotions
license: cc-by-sa-4.0
language:
- en
size_categories:
- 10K<n<100K
task_categories:
- text-classification
task_ids:
- multi-class-classification
tags:
- emotion-classification
dataset_info:
- config_name: split
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
"0": sadness
"1": joy
"2": love
"3": anger
"4": fear
"5": surprise
splits:
- name: train
num_bytes: 1741597
num_examples: 16000
- name: validation
num_bytes: 214703
num_examples: 2000
- name: test
num_bytes: 217181
num_examples: 2000
download_size: 740883
dataset_size: 2173481
- config_name: unsplit
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
"0": sadness
"1": joy
"2": love
"3": anger
"4": fear
"5": surprise
splits:
- name: train
num_bytes: 45445685
num_examples: 416809
download_size: 15388281
dataset_size: 45445685
train-eval-index:
- config: default
task: text-classification
task_id: multi_class_classification
splits:
train_split: train
eval_split: test
col_mapping:
text: text
label: target
metrics:
- type: accuracy
name: Accuracy
- type: f1
name: F1 macro
args:
average: macro
- type: f1
name: F1 micro
args:
average: micro
- type: f1
name: F1 weighted
args:
average: weighted
- type: precision
name: Precision macro
args:
average: macro
- type: precision
name: Precision micro
args:
average: micro
- type: precision
name: Precision weighted
args:
average: weighted
- type: recall
name: Recall macro
args:
average: macro
- type: recall
name: Recall micro
args:
average: micro
- type: recall
name: Recall weighted
args:
average: weighted
---
# Dataset Card for "emotions"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Additional Information](#additional-information)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Paper:** [CARER: Contextualized Affect Representations for Emotion Recognition](https://aclanthology.org/D18-1404/)
- **Size of downloaded dataset files:** 16.13 MB
- **Size of the generated dataset:** 47.62 MB
- **Total amount of disk used:** 63.75 MB
### Dataset Summary
Emotions is a dataset of English Twitter messages with six basic emotions: anger, fear, joy, love, sadness, and surprise. For more detailed information please refer to the paper. Note that the paper describes a larger dataset in which eight emotions are considered.
## Dataset Structure
### Data Instances
An example bit of data looks like this:
```
{
"text": "im feeling quite sad and sorry for myself but ill snap out of it soon",
"label": 0
}
```
### Data Fields
The data fields are:
- `text`: a `string` feature.
- `label`: a classification label, with possible values including `sadness` (0), `joy` (1), `love` (2), `anger` (3), `fear` (4), `surprise` (5).
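A minimal sketch of the id-to-name mapping above, applied to the example instance from the previous section (the mapping is copied from this card, not queried from the Hub):

```python
# Label ids and names as listed in the Data Fields section above.
ID2LABEL = {0: "sadness", 1: "joy", 2: "love", 3: "anger", 4: "fear", 5: "surprise"}

example = {
    "text": "im feeling quite sad and sorry for myself but ill snap out of it soon",
    "label": 0,
}

print(ID2LABEL[example["label"]])  # → sadness
```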
### Data Splits
The dataset has two configurations.
- split: with a total of 20,000 examples split into train, validation and test.
- unsplit: with a total of 416,809 examples in a single train split.
| name | train | validation | test |
| ------- | -----: | ---------: | ---: |
| split | 16000 | 2000 | 2000 |
| unsplit | 416809 | n/a | n/a |
## Additional Information
### Licensing Information
The dataset should be used for educational and research purposes only. It is licensed under Attribution-ShareAlike 4.0 International (CC BY-SA 4.0).
### Citation Information
If you use this dataset, please cite:
```
@inproceedings{saravia-etal-2018-carer,
title = "{CARER}: Contextualized Affect Representations for Emotion Recognition",
author = "Saravia, Elvis and
Liu, Hsien-Chi Toby and
Huang, Yen-Hao and
Wu, Junlin and
Chen, Yi-Shin",
booktitle = "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing",
month = oct # "-" # nov,
year = "2018",
address = "Brussels, Belgium",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/D18-1404",
doi = "10.18653/v1/D18-1404",
pages = "3687--3697",
abstract = "Emotions are expressed in nuanced ways, which varies by collective or individual experiences, knowledge, and beliefs. Therefore, to understand emotion, as conveyed through text, a robust mechanism capable of capturing and modeling different linguistic nuances and phenomena is needed. We propose a semi-supervised, graph-based algorithm to produce rich structural descriptors which serve as the building blocks for constructing contextualized affect representations from text. The pattern-based representations are further enriched with word embeddings and evaluated through several emotion recognition tasks. Our experimental results demonstrate that the proposed method outperforms state-of-the-art techniques on emotion recognition tasks.",
}
```
| 5,805 | [
[
-0.0237579345703125,
-0.04150390625,
0.02276611328125,
0.0296478271484375,
-0.03680419921875,
-0.00928497314453125,
-0.0224609375,
-0.037567138671875,
0.033355712890625,
-0.0006513595581054688,
-0.0447998046875,
-0.06719970703125,
-0.055145263671875,
0.03036... |
chaoyi-wu/PMC-Inline | 2023-08-06T00:40:40.000Z | [
"task_categories:text-generation",
"license:apache-2.0",
"biology",
"region:us"
] | chaoyi-wu | null | null | 4 | 9 | 2023-07-31T07:00:25 | ---
license: apache-2.0
task_categories:
- text-generation
tags:
- biology
---
# PMC-Inline Dataset
- [PMC-Inline Dataset](#pmc-inline-dataset)
- [Dataset Structure](#dataset-structure)
- [Sample](#sample)
These are the text parts; the figure parts can be downloaded from https://pan.baidu.com/s/1Src_rhXsaOFp8zJ_3zMFsQ?pwd=p3ne.
## Dataset Structure
**PMC-Inline** (PMC papers with inline figures).
We collect the CC-licensed papers from PubMed Central and remove the bib, author info, tables, and image captions from the original paper XML files.
Based on the inline figure references, we link 11M images back into the paper contexts.
Each paper is organized as a PMCxxxxxxx.json file, where ```xxxxxxx``` is the paper's unique PMC id.
## Sample
A JSON file in the dataset is organized as below:
| info | {"article-type": "research-article", "pmid": "17925856", "pmc": "PMC1999654", "publisher-id": "07-PONE-RA-01026R1", "doi": "10.1371/journal.pone.0001008"} |
| ------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------- |
| text | \nPredicting Spatial Patterns of Plant Recruitment Using Animal-Displacement Kernels\nFor plants ... |
| img_ref | [{"id": "pone-0001008-g001", "start": 9177, "end": 9185}, {"id": "pone-0001008-g001", "start": 10715, "end": 10723}, ...] | | | | |
Explanation of each key:
- info: some info about the paper, like paper type, pmid, pmc id, and so on.
- text: a string which is the paper content.
- img_ref: a list containing which image is referred to, and where, in the original paper. For example, {"id": "pone-0001008-g001", "start": 9177, "end": 9185} denotes that the figure pone-0001008-g001 is mentioned in the text string at indices 9177-9185.
You can get the image from our PMC figure parts; figures are named uniformly as ```PMCxxxxxxx_figid.jpg```, e.g. ```PMC1999654_pone-0001008-g001.jpg```.
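A minimal sketch of how a record's img_ref entries can be resolved into text spans and figure filenames. The record below is a trimmed, hypothetical stand-in for a real PMCxxxxxxx.json:

```python
# Hypothetical trimmed record; real files carry the full paper text and
# many more img_ref entries.
record = {
    "info": {"pmc": "PMC1999654"},
    "text": "See (Fig. 1A) for details.",
    "img_ref": [{"id": "pone-0001008-g001", "start": 4, "end": 13}],
}

pmc_id = record["info"]["pmc"]
for ref in record["img_ref"]:
    mention = record["text"][ref["start"]:ref["end"]]  # span that cites the figure
    filename = f"{pmc_id}_{ref['id']}.jpg"             # unified PMCxxxxxxx_figid.jpg naming
    print(mention, "->", filename)  # → (Fig. 1A) -> PMC1999654_pone-0001008-g001.jpg
```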
Note that our PMC figures were collected before PMC-Inline, and during that time window some papers were updated. Thus some figures may be missing from our figure base. | 6,344 | [
[
-0.0267486572265625,
-0.021087646484375,
0.037811279296875,
0.00797271728515625,
-0.03680419921875,
-0.01544952392578125,
0.00769805908203125,
-0.0195770263671875,
0.0245208740234375,
0.032684326171875,
-0.053955078125,
-0.05377197265625,
-0.028778076171875,
... |
TableQAKit/TAT-QA | 2023-08-02T10:45:08.000Z | [
"region:us"
] | TableQAKit | null | null | 0 | 9 | 2023-08-02T09:53:14 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
argilla/llama-2-banking-preference | 2023-08-03T15:01:01.000Z | [
"size_categories:1K<n<10K",
"rlfh",
"argilla",
"human-feedback",
"region:us"
] | argilla | null | null | 0 | 9 | 2023-08-03T15:00:58 | ---
size_categories: 1K<n<10K
tags:
- rlfh
- argilla
- human-feedback
---
# Dataset Card for llama-2-banking-preference
This dataset has been created with [Argilla](https://docs.argilla.io).
As shown in the sections below, this dataset can be loaded into Argilla as explained in [Load with Argilla](#load-with-argilla), or used directly with the `datasets` library in [Load with `datasets`](#load-with-datasets).
## Dataset Description
- **Homepage:** https://argilla.io
- **Repository:** https://github.com/argilla-io/argilla
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
This dataset contains:
* A dataset configuration file conforming to the Argilla dataset format named `argilla.yaml`. This configuration file will be used to configure the dataset when using the `FeedbackDataset.from_huggingface` method in Argilla.
* Dataset records in a format compatible with HuggingFace `datasets`. These records will be loaded automatically when using `FeedbackDataset.from_huggingface` and can be loaded independently using the `datasets` library via `load_dataset`.
* The [annotation guidelines](#annotation-guidelines) that have been used for building and curating the dataset, if they've been defined in Argilla.
### Load with Argilla
To load with Argilla, you'll just need to install Argilla as `pip install argilla --upgrade` and then use the following code:
```python
import argilla as rg
ds = rg.FeedbackDataset.from_huggingface("argilla/llama-2-banking-preference")
```
### Load with `datasets`
To load this dataset with `datasets`, you'll just need to install `datasets` as `pip install datasets --upgrade` and then use the following code:
```python
from datasets import load_dataset
ds = load_dataset("argilla/llama-2-banking-preference")
```
### Supported Tasks and Leaderboards
This dataset can contain [multiple fields, questions and responses](https://docs.argilla.io/en/latest/guides/llms/conceptual_guides/data_model.html) so it can be used for different NLP tasks, depending on the configuration. The dataset structure is described in the [Dataset Structure section](#dataset-structure).
There are no leaderboards associated with this dataset.
### Languages
[More Information Needed]
## Dataset Structure
### Data in Argilla
The dataset is created in Argilla with: **fields**, **questions**, **suggestions**, and **guidelines**.
The **fields** are the dataset records themselves; for the moment just text fields are supported. These are the ones that will be used to provide responses to the questions.
| Field Name | Title | Type | Required | Markdown |
| ---------- | ----- | ---- | -------- | -------- |
| request | User request | TextField | True | False |
| response-1 | Assistant response 1 | TextField | True | True |
| response-2 | Assistant response 2 | TextField | True | True |
The **questions** are the questions that will be asked to the annotators. They can be of different types, such as rating, text, single choice, or multiple choice.
| Question Name | Title | Type | Required | Description | Values/Labels |
| ------------- | ----- | ---- | -------- | ----------- | ------------- |
| preference | Order response based on your preference | RankingQuestion | True | 1 = most accurate and helpful, 2 = worst. Ties are allowed. | ['response-1', 'response-2'] |
| correct-response | If both responses are innacurate, complete the response: | TextQuestion | True | N/A | N/A |
**✨ NEW** Additionally, we also have **suggestions**, which are linked to the existing questions and named by appending "-suggestion" and "-suggestion-metadata" to the question names; they contain the value(s) of the suggestion and its metadata, respectively. The possible values are the same as in the table above.
Finally, the **guidelines** are just a plain string that can be used to provide instructions to the annotators. Find those in the [annotation guidelines](#annotation-guidelines) section.
### Data Instances
An example of a dataset instance in Argilla looks as follows:
```json
{
"external_id": null,
"fields": {
"request": "Do I get charged if i am getting money?",
"response-1": " Of course! Thank you for reaching out to us at the Bank of Tillamook. We\u0027re here to help you with any questions or concerns you may have about your bank account. When it comes to fees associated with withdrawals, we do charge a small fee for certain types of transactions. However, please note that there is no charge for accessing your own funds in our bank through one of our ATMs or by using your debit card for purchases. If you could provide me with more details on what kind of transaction you were trying to make, I would be happy to assist you further and explain any potential fees associated with those services. Is there anything else I can help you with today?",
"response-2": " Of course! Thank you for choosing the Bank of Tillamook for your financial needs. We appreciate your business and value your trust in us. To answer your question, no, you will not be charged for withdrawing money from your own account at the Bank of Tillamook. Our bank is committed to providing convenient and fee-free services to our customers, so you can access your funds without worrying about additional charges. If you have any further questions or concerns, please don\u0027t hesitate to ask. Is there anything else I can help you with today?"
},
"id": null,
"metadata": {},
"responses": [],
"suggestions": []
}
```
While the same record in HuggingFace `datasets` looks as follows:
```json
{
"correct-response": null,
"correct-response-suggestion": null,
"correct-response-suggestion-metadata": {
"agent": null,
"score": null,
"type": null
},
"external_id": null,
"metadata": null,
"preference": null,
"preference-suggestion": null,
"preference-suggestion-metadata": {
"agent": null,
"score": null,
"type": null
},
"request": "Do I get charged if i am getting money?",
"response-1": " Of course! Thank you for reaching out to us at the Bank of Tillamook. We\u0027re here to help you with any questions or concerns you may have about your bank account. When it comes to fees associated with withdrawals, we do charge a small fee for certain types of transactions. However, please note that there is no charge for accessing your own funds in our bank through one of our ATMs or by using your debit card for purchases. If you could provide me with more details on what kind of transaction you were trying to make, I would be happy to assist you further and explain any potential fees associated with those services. Is there anything else I can help you with today?",
"response-2": " Of course! Thank you for choosing the Bank of Tillamook for your financial needs. We appreciate your business and value your trust in us. To answer your question, no, you will not be charged for withdrawing money from your own account at the Bank of Tillamook. Our bank is committed to providing convenient and fee-free services to our customers, so you can access your funds without worrying about additional charges. If you have any further questions or concerns, please don\u0027t hesitate to ask. Is there anything else I can help you with today?"
}
```
### Data Fields
Among the dataset fields, we differentiate between the following:
* **Fields:** These are the dataset records themselves; for the moment, only text fields are supported. These are the ones that will be used to provide responses to the questions.
* **request** is of type `TextField`.
* **response-1** is of type `TextField`.
* **response-2** is of type `TextField`.
* **Questions:** These are the questions that will be asked to the annotators. They can be of different types, such as `RatingQuestion`, `TextQuestion`, `LabelQuestion`, `MultiLabelQuestion`, and `RankingQuestion`.
* **preference** is of type `RankingQuestion` with the following allowed values ['response-1', 'response-2'], and description "1 = most accurate and helpful, 2 = worst. Ties are allowed.".
* (optional) **correct-response** is of type `TextQuestion`.
* **✨ NEW** **Suggestions:** As of Argilla 1.13.0, the suggestions have been included to provide the annotators with suggestions to ease or assist during the annotation process. Suggestions are linked to the existing questions, are always optional, and contain not just the suggestion itself, but also the metadata linked to it, if applicable.
* (optional) **preference-suggestion** is of type `ranking` with the following allowed values ['response-1', 'response-2'].
* (optional) **correct-response-suggestion** is of type `text`.
Additionally, there is one more optional field:
* **external_id:** This is an optional field that can be used to provide an external ID for the dataset record. This can be useful if you want to link the dataset record to an external resource, such as a database or a file.
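As an illustrative sketch of working with the flattened record format shown above, the snippet below extracts the preferred response from one annotated record. The serialized shape of an answered `RankingQuestion` (a list of `{"value", "rank"}` dicts) is an assumption here and may differ across Argilla versions; unannotated records carry `None`:

```python
# Hypothetical post-processing of one flattened record. The answered-ranking
# shape is an assumption; the field names match this dataset's columns.
record = {
    "request": "Do I get charged if i am getting money?",
    "response-1": "We do charge a small fee for certain transactions.",  # abbreviated
    "response-2": "No, you will not be charged.",                        # abbreviated
    "preference": [
        {"value": "response-2", "rank": 1},
        {"value": "response-1", "rank": 2},
    ],
}

def best_response(rec):
    """Return the text of the rank-1 response, or None if unannotated."""
    if not rec.get("preference"):
        return None
    top = min(rec["preference"], key=lambda r: r["rank"])
    return rec[top["value"]]

print(best_response(record))  # the text stored under "response-2"
```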
### Data Splits
The dataset contains a single split, which is `train`.
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation guidelines
Please, read the question carefully and try to answer it as accurately as possible.
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed]
VatsaDev/pixel-art | 2023-08-07T21:06:06.000Z | [
"region:us"
] | VatsaDev | null | null | 0 | 9 | 2023-08-07T21:05:58 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: image
dtype: image
splits:
- name: train
num_bytes: 46114889.0
num_examples: 200
download_size: 45960806
dataset_size: 46114889.0
---
# Dataset Card for "pixel-art"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
deep-plants/AGM_HS | 2023-10-04T11:07:25.000Z | [
"license:cc",
"region:us"
] | deep-plants | null | null | 1 | 9 | 2023-08-16T10:04:19 | ---
license: cc
dataset_info:
features:
- name: image
dtype: image
- name: mask
dtype: image
- name: crop_type
dtype: string
- name: label
dtype: string
splits:
- name: train
num_bytes: 22900031.321
num_examples: 6127
download_size: 22010079
dataset_size: 22900031.321
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for AGM_HS Dataset
## Dataset Summary
The AGM<sub>HS</sub> (AGricolaModerna Healthy-Stress) Dataset is an extension of the AGM Dataset, specifically curated to address the challenge of detecting and localizing plant stress in top-view images of harvested crops. This dataset comprises 6,127 high-resolution RGB images, each with a resolution of 120x120 pixels, selected from the original AGM Dataset. The images in AGM<sub>HS</sub> are divided into two categories: healthy samples (3,798 images) and stressed samples (2,329 images) representing 14 of the 18 classes present in AGM. Alongside the healthy/stressed classification labels, the dataset also provides segmentation masks for the stressed areas.
## Supported Tasks
- Image classification: healthy vs. stressed classification.
- Image segmentation: detection and localization of plant stress in top-view images.
## Languages
The dataset consists exclusively of image data and does not involve language content; labels and metadata are provided in English.
## Dataset Structure
### Data Instances
A typical data instance from the AGM<sub>HS</sub> Dataset consists of the following:
```
{
'image': <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=120x120 at 0x29CEAD71780>,
'labels': 'stressed',
'crop_type': 'by'
'mask': <PIL.PngImagePlugin.PngImageFile image mode=L size=120x120 at 0x29CEAD71780>
}
```
### Data Fields
The dataset's data instances have the following fields:
- `image`: A PIL.Image.Image object representing the image.
- `labels`: A string indicating whether the image is "healthy" or "stressed".
- `crop_type`: A string representing the crop type in the image.
- `mask`: A PIL.Image.Image object representing the segmentation mask of stressed areas in the image, stored as a PNG image.
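Since each stressed sample ships with a segmentation mask, a simple derived quantity is the stressed-area fraction of an image. The sketch below is illustrative only: a synthetic 120x120 mask stands in for a real `mask` field (a PIL image in mode "L"), and the binarization threshold of 128 is an assumption:

```python
import numpy as np
from PIL import Image

# Synthetic stand-in for a real AGM_HS `mask` field (mode "L", 120x120):
# the top quarter of the image is marked as stressed (value 255).
arr = np.zeros((120, 120), dtype=np.uint8)
arr[:30, :] = 255
mask = Image.fromarray(arr, mode="L")

def stressed_fraction(mask_img, threshold=128):
    """Fraction of pixels whose mask value is at or above the threshold."""
    m = np.asarray(mask_img)
    return float((m >= threshold).mean())

print(stressed_fraction(mask))  # 0.25 for this synthetic mask
```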
### Data Splits
- **Training Set**:
- Number of Examples: 6,127
- Healthy Samples: 3,798
- Stressed Samples: 2,329
## Dataset Creation
### Curation Rationale
The AGM<sub>HS</sub> Dataset was created as an extension of the AGM Dataset to specifically address the challenge of detecting and localizing plant stress in top-view images of harvested crops. This dataset is essential for the development and evaluation of advanced segmentation models tailored for this task.
### Source Data
#### Initial Data Collection and Normalization
The images in AGM<sub>HS</sub> were extracted from the original AGM Dataset. During the extraction process, labelers selected images showing clear signs of either good health or high stress. These sub-images were resized to 120x120 pixels to create AGM<sub>HS</sub>.
### Annotations
#### Annotation Process
The AGM<sub>HS</sub> Dataset underwent a secondary stage of annotation. Labelers manually collected images by clicking on points corresponding to stressed areas on the leaves. These clicked points served as prompts for the semi-automatic generation of segmentation masks using the "Segment Anything" technique (Kirillov et al., 2023). Each image is annotated with a classification label ("healthy" or "stressed") and a corresponding segmentation mask.
#### Who Are the Annotators?
The annotators for AGM<sub>HS</sub> are domain experts with knowledge of plant health and stress detection.
## Personal and Sensitive Information
The dataset does not contain personal or sensitive information about individuals. It exclusively consists of images of plants.
## Considerations for Using the Data
### Social Impact of Dataset
The AGM<sub>HS</sub> Dataset plays a crucial role in advancing research and technologies for plant stress detection and localization in the context of modern agriculture. By providing a diverse set of top-view crop images with associated segmentation masks, this dataset can facilitate the development of innovative solutions for sustainable agriculture, contributing to increased crop health, yield prediction, and overall food security.
### Discussion of Biases and Known Limitations
While AGM<sub>HS</sub> is a valuable dataset, it inherits some limitations from the original AGM Dataset. It primarily involves images from a single vertical farm setting, potentially limiting the representativeness of broader agricultural scenarios. Additionally, the dataset's composition may reflect regional agricultural practices and business-driven crop preferences specific to vertical farming. Researchers should be aware of these potential biases when utilizing AGM<sub>HS</sub> for their work.
## Additional Information
### Dataset Curators
The AGM<sub>HS</sub> Dataset is curated by DeepPlants and AgricolaModerna. For further information, please contact us at:
- nico@deepplants.com
- etienne.david@agricolamoderna.com
### Licensing Information
The dataset is released under a Creative Commons (CC) license, as declared in the `license: cc` metadata above.
### Citation Information
If you use the AGM<sub>HS</sub> dataset in your work, please consider citing the following publication:
```bibtex
@InProceedings{Sama_2023_ICCV,
author = {Sama, Nico and David, Etienne and Rossetti, Simone and Antona, Alessandro and Franchetti, Benjamin and Pirri, Fiora},
title = {A new Large Dataset and a Transfer Learning Methodology for Plant Phenotyping in Vertical Farms},
booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV) Workshops},
month = {October},
year = {2023},
pages = {540-551}
}
```
DynamicSuperb/IntentClassification_FluentSpeechCommands-Object | 2023-08-16T10:51:29.000Z | [
"region:us"
] | DynamicSuperb | null | null | 0 | 9 | 2023-08-16T10:48:47 | ---
dataset_info:
features:
- name: file
dtype: string
- name: speakerId
dtype: string
- name: transcription
dtype: string
- name: audio
dtype: audio
- name: label
dtype: string
- name: instruction
dtype: string
splits:
- name: test
num_bytes: 740602751.0
num_examples: 10000
download_size: 643682916
dataset_size: 740602751.0
---
# Dataset Card for "Intent_Classification_FluentSpeechCommands_Object"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
open-llm-leaderboard/details_tiiuae__falcon-7b | 2023-10-29T13:14:03.000Z | [
"region:us"
] | open-llm-leaderboard | null | null | 0 | 9 | 2023-08-18T00:12:34 | ---
pretty_name: Evaluation run of tiiuae/falcon-7b
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [tiiuae/falcon-7b](https://huggingface.co/tiiuae/falcon-7b) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 122 configurations, each one corresponding to one of\
\ the evaluated tasks.\n\nThe dataset has been created from 4 run(s). Each run can\
\ be found as a specific split in each configuration, the split being named using\
\ the timestamp of the run.The \"train\" split is always pointing to the latest\
\ results.\n\nAn additional configuration \"results\" stores all the aggregated results\
\ of the run (and is used to compute and display the aggregated metrics on the [Open\
\ LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_tiiuae__falcon-7b\"\
,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\
These are the [latest results from run 2023-09-17T10:06:45.584443](https://huggingface.co/datasets/open-llm-leaderboard/details_tiiuae__falcon-7b/blob/main/results_2023-09-17T10-06-45.584443.json)(note\
\ that there might be results for other tasks in the repos if successive evals didn't\
\ cover the same tasks. You find each in the results and the \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.0010486577181208054,\n\
\ \"em_stderr\": 0.00033145814652193653,\n \"f1\": 0.04824664429530208,\n\
\ \"f1_stderr\": 0.0012232481165562455,\n \"acc\": 0.3751460800288181,\n\
\ \"acc_stderr\": 0.008496930501481662\n },\n \"harness|drop|3\": {\n\
\ \"em\": 0.0010486577181208054,\n \"em_stderr\": 0.00033145814652193653,\n\
\ \"f1\": 0.04824664429530208,\n \"f1_stderr\": 0.0012232481165562455\n\
\ },\n \"harness|gsm8k|5\": {\n \"acc\": 0.026535253980288095,\n \
\ \"acc_stderr\": 0.004427045987265165\n },\n \"harness|winogrande|5\"\
: {\n \"acc\": 0.7237569060773481,\n \"acc_stderr\": 0.01256681501569816\n\
\ }\n}\n```"
repo_url: https://huggingface.co/tiiuae/falcon-7b
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_arc_challenge_25
data_files:
- split: 2023_07_19T10_51_47.706539
path:
- '**/details_harness|arc:challenge|25_2023-07-19T10:51:47.706539.parquet'
- split: latest
path:
- '**/details_harness|arc:challenge|25_2023-07-19T10:51:47.706539.parquet'
- config_name: harness_drop_3
data_files:
- split: 2023_09_09T19_07_27.412342
path:
- '**/details_harness|drop|3_2023-09-09T19-07-27.412342.parquet'
- split: 2023_09_17T10_06_45.584443
path:
- '**/details_harness|drop|3_2023-09-17T10-06-45.584443.parquet'
- split: latest
path:
- '**/details_harness|drop|3_2023-09-17T10-06-45.584443.parquet'
- config_name: harness_gsm8k_5
data_files:
- split: 2023_09_09T19_07_27.412342
path:
- '**/details_harness|gsm8k|5_2023-09-09T19-07-27.412342.parquet'
- split: 2023_09_17T10_06_45.584443
path:
- '**/details_harness|gsm8k|5_2023-09-17T10-06-45.584443.parquet'
- split: latest
path:
- '**/details_harness|gsm8k|5_2023-09-17T10-06-45.584443.parquet'
- config_name: harness_hellaswag_10
data_files:
- split: 2023_07_19T10_51_47.706539
path:
- '**/details_harness|hellaswag|10_2023-07-19T10:51:47.706539.parquet'
- split: latest
path:
- '**/details_harness|hellaswag|10_2023-07-19T10:51:47.706539.parquet'
- config_name: harness_hendrycksTest_5
data_files:
- split: 2023_07_19T10_51_47.706539
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-07-19T10:51:47.706539.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2023-07-19T10:51:47.706539.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2023-07-19T10:51:47.706539.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2023-07-19T10:51:47.706539.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-07-19T10:51:47.706539.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2023-07-19T10:51:47.706539.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-07-19T10:51:47.706539.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-07-19T10:51:47.706539.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-07-19T10:51:47.706539.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2023-07-19T10:51:47.706539.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2023-07-19T10:51:47.706539.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2023-07-19T10:51:47.706539.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-07-19T10:51:47.706539.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2023-07-19T10:51:47.706539.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-07-19T10:51:47.706539.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-07-19T10:51:47.706539.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2023-07-19T10:51:47.706539.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2023-07-19T10:51:47.706539.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-07-19T10:51:47.706539.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-07-19T10:51:47.706539.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-07-19T10:51:47.706539.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-07-19T10:51:47.706539.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-07-19T10:51:47.706539.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-07-19T10:51:47.706539.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-07-19T10:51:47.706539.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-07-19T10:51:47.706539.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-07-19T10:51:47.706539.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-07-19T10:51:47.706539.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-07-19T10:51:47.706539.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-07-19T10:51:47.706539.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-07-19T10:51:47.706539.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-07-19T10:51:47.706539.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2023-07-19T10:51:47.706539.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-07-19T10:51:47.706539.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2023-07-19T10:51:47.706539.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-07-19T10:51:47.706539.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-07-19T10:51:47.706539.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2023-07-19T10:51:47.706539.parquet'
- '**/details_harness|hendrycksTest-management|5_2023-07-19T10:51:47.706539.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2023-07-19T10:51:47.706539.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-07-19T10:51:47.706539.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-07-19T10:51:47.706539.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-07-19T10:51:47.706539.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-07-19T10:51:47.706539.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2023-07-19T10:51:47.706539.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2023-07-19T10:51:47.706539.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2023-07-19T10:51:47.706539.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-07-19T10:51:47.706539.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2023-07-19T10:51:47.706539.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-07-19T10:51:47.706539.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-07-19T10:51:47.706539.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2023-07-19T10:51:47.706539.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2023-07-19T10:51:47.706539.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2023-07-19T10:51:47.706539.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-07-19T10:51:47.706539.parquet'
- '**/details_harness|hendrycksTest-virology|5_2023-07-19T10:51:47.706539.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2023-07-19T10:51:47.706539.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-07-19T10:51:47.706539.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2023-07-19T10:51:47.706539.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2023-07-19T10:51:47.706539.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2023-07-19T10:51:47.706539.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-07-19T10:51:47.706539.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2023-07-19T10:51:47.706539.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-07-19T10:51:47.706539.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-07-19T10:51:47.706539.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-07-19T10:51:47.706539.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2023-07-19T10:51:47.706539.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2023-07-19T10:51:47.706539.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2023-07-19T10:51:47.706539.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-07-19T10:51:47.706539.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2023-07-19T10:51:47.706539.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-07-19T10:51:47.706539.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-07-19T10:51:47.706539.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2023-07-19T10:51:47.706539.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2023-07-19T10:51:47.706539.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-07-19T10:51:47.706539.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-07-19T10:51:47.706539.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-07-19T10:51:47.706539.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-07-19T10:51:47.706539.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-07-19T10:51:47.706539.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-07-19T10:51:47.706539.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-07-19T10:51:47.706539.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-07-19T10:51:47.706539.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-07-19T10:51:47.706539.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-07-19T10:51:47.706539.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-07-19T10:51:47.706539.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-07-19T10:51:47.706539.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-07-19T10:51:47.706539.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-07-19T10:51:47.706539.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2023-07-19T10:51:47.706539.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-07-19T10:51:47.706539.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2023-07-19T10:51:47.706539.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-07-19T10:51:47.706539.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-07-19T10:51:47.706539.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2023-07-19T10:51:47.706539.parquet'
- '**/details_harness|hendrycksTest-management|5_2023-07-19T10:51:47.706539.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2023-07-19T10:51:47.706539.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-07-19T10:51:47.706539.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-07-19T10:51:47.706539.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-07-19T10:51:47.706539.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-07-19T10:51:47.706539.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2023-07-19T10:51:47.706539.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2023-07-19T10:51:47.706539.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2023-07-19T10:51:47.706539.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-07-19T10:51:47.706539.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2023-07-19T10:51:47.706539.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-07-19T10:51:47.706539.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-07-19T10:51:47.706539.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2023-07-19T10:51:47.706539.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2023-07-19T10:51:47.706539.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2023-07-19T10:51:47.706539.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-07-19T10:51:47.706539.parquet'
- '**/details_harness|hendrycksTest-virology|5_2023-07-19T10:51:47.706539.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2023-07-19T10:51:47.706539.parquet'
- config_name: harness_hendrycksTest_abstract_algebra_5
data_files:
- split: 2023_07_19T10_51_47.706539
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-07-19T10:51:47.706539.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-07-19T10:51:47.706539.parquet'
- config_name: harness_hendrycksTest_anatomy_5
data_files:
- split: 2023_07_19T10_51_47.706539
path:
- '**/details_harness|hendrycksTest-anatomy|5_2023-07-19T10:51:47.706539.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-anatomy|5_2023-07-19T10:51:47.706539.parquet'
- config_name: harness_hendrycksTest_astronomy_5
data_files:
- split: 2023_07_19T10_51_47.706539
path:
- '**/details_harness|hendrycksTest-astronomy|5_2023-07-19T10:51:47.706539.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-astronomy|5_2023-07-19T10:51:47.706539.parquet'
- config_name: harness_hendrycksTest_business_ethics_5
data_files:
- split: 2023_07_19T10_51_47.706539
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2023-07-19T10:51:47.706539.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2023-07-19T10:51:47.706539.parquet'
- config_name: harness_hendrycksTest_clinical_knowledge_5
data_files:
- split: 2023_07_19T10_51_47.706539
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-07-19T10:51:47.706539.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-07-19T10:51:47.706539.parquet'
- config_name: harness_hendrycksTest_college_biology_5
data_files:
- split: 2023_07_19T10_51_47.706539
path:
- '**/details_harness|hendrycksTest-college_biology|5_2023-07-19T10:51:47.706539.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_biology|5_2023-07-19T10:51:47.706539.parquet'
- config_name: harness_hendrycksTest_college_chemistry_5
data_files:
- split: 2023_07_19T10_51_47.706539
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-07-19T10:51:47.706539.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-07-19T10:51:47.706539.parquet'
- config_name: harness_hendrycksTest_college_computer_science_5
data_files:
- split: 2023_07_19T10_51_47.706539
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-07-19T10:51:47.706539.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-07-19T10:51:47.706539.parquet'
- config_name: harness_hendrycksTest_college_mathematics_5
data_files:
- split: 2023_07_19T10_51_47.706539
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-07-19T10:51:47.706539.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-07-19T10:51:47.706539.parquet'
- config_name: harness_hendrycksTest_college_medicine_5
data_files:
- split: 2023_07_19T10_51_47.706539
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2023-07-19T10:51:47.706539.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2023-07-19T10:51:47.706539.parquet'
- config_name: harness_hendrycksTest_college_physics_5
data_files:
- split: 2023_07_19T10_51_47.706539
path:
- '**/details_harness|hendrycksTest-college_physics|5_2023-07-19T10:51:47.706539.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_physics|5_2023-07-19T10:51:47.706539.parquet'
- config_name: harness_hendrycksTest_computer_security_5
data_files:
- split: 2023_07_19T10_51_47.706539
path:
- '**/details_harness|hendrycksTest-computer_security|5_2023-07-19T10:51:47.706539.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-computer_security|5_2023-07-19T10:51:47.706539.parquet'
- config_name: harness_hendrycksTest_conceptual_physics_5
data_files:
- split: 2023_07_19T10_51_47.706539
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-07-19T10:51:47.706539.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-07-19T10:51:47.706539.parquet'
- config_name: harness_hendrycksTest_econometrics_5
data_files:
- split: 2023_07_19T10_51_47.706539
path:
- '**/details_harness|hendrycksTest-econometrics|5_2023-07-19T10:51:47.706539.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-econometrics|5_2023-07-19T10:51:47.706539.parquet'
- config_name: harness_hendrycksTest_electrical_engineering_5
data_files:
- split: 2023_07_19T10_51_47.706539
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-07-19T10:51:47.706539.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-07-19T10:51:47.706539.parquet'
- config_name: harness_hendrycksTest_elementary_mathematics_5
data_files:
- split: 2023_07_19T10_51_47.706539
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-07-19T10:51:47.706539.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-07-19T10:51:47.706539.parquet'
- config_name: harness_hendrycksTest_formal_logic_5
data_files:
- split: 2023_07_19T10_51_47.706539
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2023-07-19T10:51:47.706539.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2023-07-19T10:51:47.706539.parquet'
- config_name: harness_hendrycksTest_global_facts_5
data_files:
- split: 2023_07_19T10_51_47.706539
path:
- '**/details_harness|hendrycksTest-global_facts|5_2023-07-19T10:51:47.706539.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-global_facts|5_2023-07-19T10:51:47.706539.parquet'
- config_name: harness_hendrycksTest_high_school_biology_5
data_files:
- split: 2023_07_19T10_51_47.706539
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-07-19T10:51:47.706539.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-07-19T10:51:47.706539.parquet'
- config_name: harness_hendrycksTest_high_school_chemistry_5
data_files:
- split: 2023_07_19T10_51_47.706539
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-07-19T10:51:47.706539.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-07-19T10:51:47.706539.parquet'
- config_name: harness_hendrycksTest_high_school_computer_science_5
data_files:
- split: 2023_07_19T10_51_47.706539
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-07-19T10:51:47.706539.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-07-19T10:51:47.706539.parquet'
- config_name: harness_hendrycksTest_high_school_european_history_5
data_files:
- split: 2023_07_19T10_51_47.706539
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-07-19T10:51:47.706539.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-07-19T10:51:47.706539.parquet'
- config_name: harness_hendrycksTest_high_school_geography_5
data_files:
- split: 2023_07_19T10_51_47.706539
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-07-19T10:51:47.706539.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-07-19T10:51:47.706539.parquet'
- config_name: harness_hendrycksTest_high_school_government_and_politics_5
data_files:
- split: 2023_07_19T10_51_47.706539
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-07-19T10:51:47.706539.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-07-19T10:51:47.706539.parquet'
- config_name: harness_hendrycksTest_high_school_macroeconomics_5
data_files:
- split: 2023_07_19T10_51_47.706539
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-07-19T10:51:47.706539.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-07-19T10:51:47.706539.parquet'
- config_name: harness_hendrycksTest_high_school_mathematics_5
data_files:
- split: 2023_07_19T10_51_47.706539
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-07-19T10:51:47.706539.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-07-19T10:51:47.706539.parquet'
- config_name: harness_hendrycksTest_high_school_microeconomics_5
data_files:
- split: 2023_07_19T10_51_47.706539
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-07-19T10:51:47.706539.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-07-19T10:51:47.706539.parquet'
- config_name: harness_hendrycksTest_high_school_physics_5
data_files:
- split: 2023_07_19T10_51_47.706539
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-07-19T10:51:47.706539.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-07-19T10:51:47.706539.parquet'
- config_name: harness_hendrycksTest_high_school_psychology_5
data_files:
- split: 2023_07_19T10_51_47.706539
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-07-19T10:51:47.706539.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-07-19T10:51:47.706539.parquet'
- config_name: harness_hendrycksTest_high_school_statistics_5
data_files:
- split: 2023_07_19T10_51_47.706539
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-07-19T10:51:47.706539.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-07-19T10:51:47.706539.parquet'
- config_name: harness_hendrycksTest_high_school_us_history_5
data_files:
- split: 2023_07_19T10_51_47.706539
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-07-19T10:51:47.706539.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-07-19T10:51:47.706539.parquet'
- config_name: harness_hendrycksTest_high_school_world_history_5
data_files:
- split: 2023_07_19T10_51_47.706539
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-07-19T10:51:47.706539.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-07-19T10:51:47.706539.parquet'
- config_name: harness_hendrycksTest_human_aging_5
data_files:
- split: 2023_07_19T10_51_47.706539
path:
- '**/details_harness|hendrycksTest-human_aging|5_2023-07-19T10:51:47.706539.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_aging|5_2023-07-19T10:51:47.706539.parquet'
- config_name: harness_hendrycksTest_human_sexuality_5
data_files:
- split: 2023_07_19T10_51_47.706539
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-07-19T10:51:47.706539.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-07-19T10:51:47.706539.parquet'
- config_name: harness_hendrycksTest_international_law_5
data_files:
- split: 2023_07_19T10_51_47.706539
path:
- '**/details_harness|hendrycksTest-international_law|5_2023-07-19T10:51:47.706539.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-international_law|5_2023-07-19T10:51:47.706539.parquet'
- config_name: harness_hendrycksTest_jurisprudence_5
data_files:
- split: 2023_07_19T10_51_47.706539
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-07-19T10:51:47.706539.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-07-19T10:51:47.706539.parquet'
- config_name: harness_hendrycksTest_logical_fallacies_5
data_files:
- split: 2023_07_19T10_51_47.706539
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-07-19T10:51:47.706539.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-07-19T10:51:47.706539.parquet'
- config_name: harness_hendrycksTest_machine_learning_5
data_files:
- split: 2023_07_19T10_51_47.706539
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2023-07-19T10:51:47.706539.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2023-07-19T10:51:47.706539.parquet'
- config_name: harness_hendrycksTest_management_5
data_files:
- split: 2023_07_19T10_51_47.706539
path:
- '**/details_harness|hendrycksTest-management|5_2023-07-19T10:51:47.706539.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-management|5_2023-07-19T10:51:47.706539.parquet'
- config_name: harness_hendrycksTest_marketing_5
data_files:
- split: 2023_07_19T10_51_47.706539
path:
- '**/details_harness|hendrycksTest-marketing|5_2023-07-19T10:51:47.706539.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-marketing|5_2023-07-19T10:51:47.706539.parquet'
- config_name: harness_hendrycksTest_medical_genetics_5
data_files:
- split: 2023_07_19T10_51_47.706539
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-07-19T10:51:47.706539.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-07-19T10:51:47.706539.parquet'
- config_name: harness_hendrycksTest_miscellaneous_5
data_files:
- split: 2023_07_19T10_51_47.706539
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-07-19T10:51:47.706539.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-07-19T10:51:47.706539.parquet'
- config_name: harness_hendrycksTest_moral_disputes_5
data_files:
- split: 2023_07_19T10_51_47.706539
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-07-19T10:51:47.706539.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-07-19T10:51:47.706539.parquet'
- config_name: harness_hendrycksTest_moral_scenarios_5
data_files:
- split: 2023_07_19T10_51_47.706539
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-07-19T10:51:47.706539.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-07-19T10:51:47.706539.parquet'
- config_name: harness_hendrycksTest_nutrition_5
data_files:
- split: 2023_07_19T10_51_47.706539
path:
- '**/details_harness|hendrycksTest-nutrition|5_2023-07-19T10:51:47.706539.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-nutrition|5_2023-07-19T10:51:47.706539.parquet'
- config_name: harness_hendrycksTest_philosophy_5
data_files:
- split: 2023_07_19T10_51_47.706539
path:
- '**/details_harness|hendrycksTest-philosophy|5_2023-07-19T10:51:47.706539.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-philosophy|5_2023-07-19T10:51:47.706539.parquet'
- config_name: harness_hendrycksTest_prehistory_5
data_files:
- split: 2023_07_19T10_51_47.706539
path:
- '**/details_harness|hendrycksTest-prehistory|5_2023-07-19T10:51:47.706539.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-prehistory|5_2023-07-19T10:51:47.706539.parquet'
- config_name: harness_hendrycksTest_professional_accounting_5
data_files:
- split: 2023_07_19T10_51_47.706539
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-07-19T10:51:47.706539.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-07-19T10:51:47.706539.parquet'
- config_name: harness_hendrycksTest_professional_law_5
data_files:
- split: 2023_07_19T10_51_47.706539
path:
- '**/details_harness|hendrycksTest-professional_law|5_2023-07-19T10:51:47.706539.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_law|5_2023-07-19T10:51:47.706539.parquet'
- config_name: harness_hendrycksTest_professional_medicine_5
data_files:
- split: 2023_07_19T10_51_47.706539
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-07-19T10:51:47.706539.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-07-19T10:51:47.706539.parquet'
- config_name: harness_hendrycksTest_professional_psychology_5
data_files:
- split: 2023_07_19T10_51_47.706539
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-07-19T10:51:47.706539.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-07-19T10:51:47.706539.parquet'
- config_name: harness_hendrycksTest_public_relations_5
data_files:
- split: 2023_07_19T10_51_47.706539
path:
- '**/details_harness|hendrycksTest-public_relations|5_2023-07-19T10:51:47.706539.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-public_relations|5_2023-07-19T10:51:47.706539.parquet'
- config_name: harness_hendrycksTest_security_studies_5
data_files:
- split: 2023_07_19T10_51_47.706539
path:
- '**/details_harness|hendrycksTest-security_studies|5_2023-07-19T10:51:47.706539.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-security_studies|5_2023-07-19T10:51:47.706539.parquet'
- config_name: harness_hendrycksTest_sociology_5
data_files:
- split: 2023_07_19T10_51_47.706539
path:
- '**/details_harness|hendrycksTest-sociology|5_2023-07-19T10:51:47.706539.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-sociology|5_2023-07-19T10:51:47.706539.parquet'
- config_name: harness_hendrycksTest_us_foreign_policy_5
data_files:
- split: 2023_07_19T10_51_47.706539
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-07-19T10:51:47.706539.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-07-19T10:51:47.706539.parquet'
- config_name: harness_hendrycksTest_virology_5
data_files:
- split: 2023_07_19T10_51_47.706539
path:
- '**/details_harness|hendrycksTest-virology|5_2023-07-19T10:51:47.706539.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-virology|5_2023-07-19T10:51:47.706539.parquet'
- config_name: harness_hendrycksTest_world_religions_5
data_files:
- split: 2023_07_19T10_51_47.706539
path:
- '**/details_harness|hendrycksTest-world_religions|5_2023-07-19T10:51:47.706539.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-world_religions|5_2023-07-19T10:51:47.706539.parquet'
- config_name: harness_truthfulqa_mc_0
data_files:
- split: 2023_07_19T10_51_47.706539
path:
- '**/details_harness|truthfulqa:mc|0_2023-07-19T10:51:47.706539.parquet'
- split: latest
path:
- '**/details_harness|truthfulqa:mc|0_2023-07-19T10:51:47.706539.parquet'
- config_name: harness_winogrande_5
data_files:
- split: 2023_09_09T19_07_27.412342
path:
- '**/details_harness|winogrande|5_2023-09-09T19-07-27.412342.parquet'
- split: 2023_09_17T10_06_45.584443
path:
- '**/details_harness|winogrande|5_2023-09-17T10-06-45.584443.parquet'
- split: latest
path:
- '**/details_harness|winogrande|5_2023-09-17T10-06-45.584443.parquet'
- config_name: original_mmlu_5
data_files:
- split: 2023_08_28T20_05_31.227903
path:
- '**/details_original|mmlu:abstract_algebra|5_2023-08-28T20:05:31.227903.parquet'
- '**/details_original|mmlu:anatomy|5_2023-08-28T20:05:31.227903.parquet'
- '**/details_original|mmlu:astronomy|5_2023-08-28T20:05:31.227903.parquet'
- '**/details_original|mmlu:business_ethics|5_2023-08-28T20:05:31.227903.parquet'
- '**/details_original|mmlu:clinical_knowledge|5_2023-08-28T20:05:31.227903.parquet'
- '**/details_original|mmlu:college_biology|5_2023-08-28T20:05:31.227903.parquet'
- '**/details_original|mmlu:college_chemistry|5_2023-08-28T20:05:31.227903.parquet'
- '**/details_original|mmlu:college_computer_science|5_2023-08-28T20:05:31.227903.parquet'
- '**/details_original|mmlu:college_mathematics|5_2023-08-28T20:05:31.227903.parquet'
- '**/details_original|mmlu:college_medicine|5_2023-08-28T20:05:31.227903.parquet'
- '**/details_original|mmlu:college_physics|5_2023-08-28T20:05:31.227903.parquet'
- '**/details_original|mmlu:computer_security|5_2023-08-28T20:05:31.227903.parquet'
- '**/details_original|mmlu:conceptual_physics|5_2023-08-28T20:05:31.227903.parquet'
- '**/details_original|mmlu:econometrics|5_2023-08-28T20:05:31.227903.parquet'
- '**/details_original|mmlu:electrical_engineering|5_2023-08-28T20:05:31.227903.parquet'
- '**/details_original|mmlu:elementary_mathematics|5_2023-08-28T20:05:31.227903.parquet'
- '**/details_original|mmlu:formal_logic|5_2023-08-28T20:05:31.227903.parquet'
- '**/details_original|mmlu:global_facts|5_2023-08-28T20:05:31.227903.parquet'
- '**/details_original|mmlu:high_school_biology|5_2023-08-28T20:05:31.227903.parquet'
- '**/details_original|mmlu:high_school_chemistry|5_2023-08-28T20:05:31.227903.parquet'
- '**/details_original|mmlu:high_school_computer_science|5_2023-08-28T20:05:31.227903.parquet'
- '**/details_original|mmlu:high_school_european_history|5_2023-08-28T20:05:31.227903.parquet'
- '**/details_original|mmlu:high_school_geography|5_2023-08-28T20:05:31.227903.parquet'
- '**/details_original|mmlu:high_school_government_and_politics|5_2023-08-28T20:05:31.227903.parquet'
- '**/details_original|mmlu:high_school_macroeconomics|5_2023-08-28T20:05:31.227903.parquet'
- '**/details_original|mmlu:high_school_mathematics|5_2023-08-28T20:05:31.227903.parquet'
- '**/details_original|mmlu:high_school_microeconomics|5_2023-08-28T20:05:31.227903.parquet'
- '**/details_original|mmlu:high_school_physics|5_2023-08-28T20:05:31.227903.parquet'
- '**/details_original|mmlu:high_school_psychology|5_2023-08-28T20:05:31.227903.parquet'
- '**/details_original|mmlu:high_school_statistics|5_2023-08-28T20:05:31.227903.parquet'
- '**/details_original|mmlu:high_school_us_history|5_2023-08-28T20:05:31.227903.parquet'
- '**/details_original|mmlu:high_school_world_history|5_2023-08-28T20:05:31.227903.parquet'
- '**/details_original|mmlu:human_aging|5_2023-08-28T20:05:31.227903.parquet'
- '**/details_original|mmlu:human_sexuality|5_2023-08-28T20:05:31.227903.parquet'
- '**/details_original|mmlu:international_law|5_2023-08-28T20:05:31.227903.parquet'
- '**/details_original|mmlu:jurisprudence|5_2023-08-28T20:05:31.227903.parquet'
- '**/details_original|mmlu:logical_fallacies|5_2023-08-28T20:05:31.227903.parquet'
- '**/details_original|mmlu:machine_learning|5_2023-08-28T20:05:31.227903.parquet'
- '**/details_original|mmlu:management|5_2023-08-28T20:05:31.227903.parquet'
- '**/details_original|mmlu:marketing|5_2023-08-28T20:05:31.227903.parquet'
- '**/details_original|mmlu:medical_genetics|5_2023-08-28T20:05:31.227903.parquet'
- '**/details_original|mmlu:miscellaneous|5_2023-08-28T20:05:31.227903.parquet'
- '**/details_original|mmlu:moral_disputes|5_2023-08-28T20:05:31.227903.parquet'
- '**/details_original|mmlu:moral_scenarios|5_2023-08-28T20:05:31.227903.parquet'
- '**/details_original|mmlu:nutrition|5_2023-08-28T20:05:31.227903.parquet'
- '**/details_original|mmlu:philosophy|5_2023-08-28T20:05:31.227903.parquet'
- '**/details_original|mmlu:prehistory|5_2023-08-28T20:05:31.227903.parquet'
- '**/details_original|mmlu:professional_accounting|5_2023-08-28T20:05:31.227903.parquet'
- '**/details_original|mmlu:professional_law|5_2023-08-28T20:05:31.227903.parquet'
- '**/details_original|mmlu:professional_medicine|5_2023-08-28T20:05:31.227903.parquet'
- '**/details_original|mmlu:professional_psychology|5_2023-08-28T20:05:31.227903.parquet'
- '**/details_original|mmlu:public_relations|5_2023-08-28T20:05:31.227903.parquet'
- '**/details_original|mmlu:security_studies|5_2023-08-28T20:05:31.227903.parquet'
- '**/details_original|mmlu:sociology|5_2023-08-28T20:05:31.227903.parquet'
- '**/details_original|mmlu:us_foreign_policy|5_2023-08-28T20:05:31.227903.parquet'
- '**/details_original|mmlu:virology|5_2023-08-28T20:05:31.227903.parquet'
- '**/details_original|mmlu:world_religions|5_2023-08-28T20:05:31.227903.parquet'
- split: latest
path:
- '**/details_original|mmlu:abstract_algebra|5_2023-08-28T20:05:31.227903.parquet'
- '**/details_original|mmlu:anatomy|5_2023-08-28T20:05:31.227903.parquet'
- '**/details_original|mmlu:astronomy|5_2023-08-28T20:05:31.227903.parquet'
- '**/details_original|mmlu:business_ethics|5_2023-08-28T20:05:31.227903.parquet'
- '**/details_original|mmlu:clinical_knowledge|5_2023-08-28T20:05:31.227903.parquet'
- '**/details_original|mmlu:college_biology|5_2023-08-28T20:05:31.227903.parquet'
- '**/details_original|mmlu:college_chemistry|5_2023-08-28T20:05:31.227903.parquet'
- '**/details_original|mmlu:college_computer_science|5_2023-08-28T20:05:31.227903.parquet'
- '**/details_original|mmlu:college_mathematics|5_2023-08-28T20:05:31.227903.parquet'
- '**/details_original|mmlu:college_medicine|5_2023-08-28T20:05:31.227903.parquet'
- '**/details_original|mmlu:college_physics|5_2023-08-28T20:05:31.227903.parquet'
- '**/details_original|mmlu:computer_security|5_2023-08-28T20:05:31.227903.parquet'
- '**/details_original|mmlu:conceptual_physics|5_2023-08-28T20:05:31.227903.parquet'
- '**/details_original|mmlu:econometrics|5_2023-08-28T20:05:31.227903.parquet'
- '**/details_original|mmlu:electrical_engineering|5_2023-08-28T20:05:31.227903.parquet'
- '**/details_original|mmlu:elementary_mathematics|5_2023-08-28T20:05:31.227903.parquet'
- '**/details_original|mmlu:formal_logic|5_2023-08-28T20:05:31.227903.parquet'
- '**/details_original|mmlu:global_facts|5_2023-08-28T20:05:31.227903.parquet'
- '**/details_original|mmlu:high_school_biology|5_2023-08-28T20:05:31.227903.parquet'
- '**/details_original|mmlu:high_school_chemistry|5_2023-08-28T20:05:31.227903.parquet'
- '**/details_original|mmlu:high_school_computer_science|5_2023-08-28T20:05:31.227903.parquet'
- '**/details_original|mmlu:high_school_european_history|5_2023-08-28T20:05:31.227903.parquet'
- '**/details_original|mmlu:high_school_geography|5_2023-08-28T20:05:31.227903.parquet'
- '**/details_original|mmlu:high_school_government_and_politics|5_2023-08-28T20:05:31.227903.parquet'
- '**/details_original|mmlu:high_school_macroeconomics|5_2023-08-28T20:05:31.227903.parquet'
- '**/details_original|mmlu:high_school_mathematics|5_2023-08-28T20:05:31.227903.parquet'
- '**/details_original|mmlu:high_school_microeconomics|5_2023-08-28T20:05:31.227903.parquet'
- '**/details_original|mmlu:high_school_physics|5_2023-08-28T20:05:31.227903.parquet'
- '**/details_original|mmlu:high_school_psychology|5_2023-08-28T20:05:31.227903.parquet'
- '**/details_original|mmlu:high_school_statistics|5_2023-08-28T20:05:31.227903.parquet'
- '**/details_original|mmlu:high_school_us_history|5_2023-08-28T20:05:31.227903.parquet'
- '**/details_original|mmlu:high_school_world_history|5_2023-08-28T20:05:31.227903.parquet'
- '**/details_original|mmlu:human_aging|5_2023-08-28T20:05:31.227903.parquet'
- '**/details_original|mmlu:human_sexuality|5_2023-08-28T20:05:31.227903.parquet'
- '**/details_original|mmlu:international_law|5_2023-08-28T20:05:31.227903.parquet'
- '**/details_original|mmlu:jurisprudence|5_2023-08-28T20:05:31.227903.parquet'
- '**/details_original|mmlu:logical_fallacies|5_2023-08-28T20:05:31.227903.parquet'
- '**/details_original|mmlu:machine_learning|5_2023-08-28T20:05:31.227903.parquet'
- '**/details_original|mmlu:management|5_2023-08-28T20:05:31.227903.parquet'
- '**/details_original|mmlu:marketing|5_2023-08-28T20:05:31.227903.parquet'
- '**/details_original|mmlu:medical_genetics|5_2023-08-28T20:05:31.227903.parquet'
- '**/details_original|mmlu:miscellaneous|5_2023-08-28T20:05:31.227903.parquet'
- '**/details_original|mmlu:moral_disputes|5_2023-08-28T20:05:31.227903.parquet'
- '**/details_original|mmlu:moral_scenarios|5_2023-08-28T20:05:31.227903.parquet'
- '**/details_original|mmlu:nutrition|5_2023-08-28T20:05:31.227903.parquet'
- '**/details_original|mmlu:philosophy|5_2023-08-28T20:05:31.227903.parquet'
- '**/details_original|mmlu:prehistory|5_2023-08-28T20:05:31.227903.parquet'
- '**/details_original|mmlu:professional_accounting|5_2023-08-28T20:05:31.227903.parquet'
- '**/details_original|mmlu:professional_law|5_2023-08-28T20:05:31.227903.parquet'
- '**/details_original|mmlu:professional_medicine|5_2023-08-28T20:05:31.227903.parquet'
- '**/details_original|mmlu:professional_psychology|5_2023-08-28T20:05:31.227903.parquet'
- '**/details_original|mmlu:public_relations|5_2023-08-28T20:05:31.227903.parquet'
- '**/details_original|mmlu:security_studies|5_2023-08-28T20:05:31.227903.parquet'
- '**/details_original|mmlu:sociology|5_2023-08-28T20:05:31.227903.parquet'
- '**/details_original|mmlu:us_foreign_policy|5_2023-08-28T20:05:31.227903.parquet'
- '**/details_original|mmlu:virology|5_2023-08-28T20:05:31.227903.parquet'
- '**/details_original|mmlu:world_religions|5_2023-08-28T20:05:31.227903.parquet'
- config_name: original_mmlu_abstract_algebra_5
data_files:
- split: 2023_08_28T20_05_31.227903
path:
- '**/details_original|mmlu:abstract_algebra|5_2023-08-28T20:05:31.227903.parquet'
- split: latest
path:
- '**/details_original|mmlu:abstract_algebra|5_2023-08-28T20:05:31.227903.parquet'
- config_name: original_mmlu_anatomy_5
data_files:
- split: 2023_08_28T20_05_31.227903
path:
- '**/details_original|mmlu:anatomy|5_2023-08-28T20:05:31.227903.parquet'
- split: latest
path:
- '**/details_original|mmlu:anatomy|5_2023-08-28T20:05:31.227903.parquet'
- config_name: original_mmlu_astronomy_5
data_files:
- split: 2023_08_28T20_05_31.227903
path:
- '**/details_original|mmlu:astronomy|5_2023-08-28T20:05:31.227903.parquet'
- split: latest
path:
- '**/details_original|mmlu:astronomy|5_2023-08-28T20:05:31.227903.parquet'
- config_name: original_mmlu_business_ethics_5
data_files:
- split: 2023_08_28T20_05_31.227903
path:
- '**/details_original|mmlu:business_ethics|5_2023-08-28T20:05:31.227903.parquet'
- split: latest
path:
- '**/details_original|mmlu:business_ethics|5_2023-08-28T20:05:31.227903.parquet'
- config_name: original_mmlu_clinical_knowledge_5
data_files:
- split: 2023_08_28T20_05_31.227903
path:
- '**/details_original|mmlu:clinical_knowledge|5_2023-08-28T20:05:31.227903.parquet'
- split: latest
path:
- '**/details_original|mmlu:clinical_knowledge|5_2023-08-28T20:05:31.227903.parquet'
- config_name: original_mmlu_college_biology_5
data_files:
- split: 2023_08_28T20_05_31.227903
path:
- '**/details_original|mmlu:college_biology|5_2023-08-28T20:05:31.227903.parquet'
- split: latest
path:
- '**/details_original|mmlu:college_biology|5_2023-08-28T20:05:31.227903.parquet'
- config_name: original_mmlu_college_chemistry_5
data_files:
- split: 2023_08_28T20_05_31.227903
path:
- '**/details_original|mmlu:college_chemistry|5_2023-08-28T20:05:31.227903.parquet'
- split: latest
path:
- '**/details_original|mmlu:college_chemistry|5_2023-08-28T20:05:31.227903.parquet'
- config_name: original_mmlu_college_computer_science_5
data_files:
- split: 2023_08_28T20_05_31.227903
path:
- '**/details_original|mmlu:college_computer_science|5_2023-08-28T20:05:31.227903.parquet'
- split: latest
path:
- '**/details_original|mmlu:college_computer_science|5_2023-08-28T20:05:31.227903.parquet'
- config_name: original_mmlu_college_mathematics_5
data_files:
- split: 2023_08_28T20_05_31.227903
path:
- '**/details_original|mmlu:college_mathematics|5_2023-08-28T20:05:31.227903.parquet'
- split: latest
path:
- '**/details_original|mmlu:college_mathematics|5_2023-08-28T20:05:31.227903.parquet'
- config_name: original_mmlu_college_medicine_5
data_files:
- split: 2023_08_28T20_05_31.227903
path:
- '**/details_original|mmlu:college_medicine|5_2023-08-28T20:05:31.227903.parquet'
- split: latest
path:
- '**/details_original|mmlu:college_medicine|5_2023-08-28T20:05:31.227903.parquet'
- config_name: original_mmlu_college_physics_5
data_files:
- split: 2023_08_28T20_05_31.227903
path:
- '**/details_original|mmlu:college_physics|5_2023-08-28T20:05:31.227903.parquet'
- split: latest
path:
- '**/details_original|mmlu:college_physics|5_2023-08-28T20:05:31.227903.parquet'
- config_name: original_mmlu_computer_security_5
data_files:
- split: 2023_08_28T20_05_31.227903
path:
- '**/details_original|mmlu:computer_security|5_2023-08-28T20:05:31.227903.parquet'
- split: latest
path:
- '**/details_original|mmlu:computer_security|5_2023-08-28T20:05:31.227903.parquet'
- config_name: original_mmlu_conceptual_physics_5
data_files:
- split: 2023_08_28T20_05_31.227903
path:
- '**/details_original|mmlu:conceptual_physics|5_2023-08-28T20:05:31.227903.parquet'
- split: latest
path:
- '**/details_original|mmlu:conceptual_physics|5_2023-08-28T20:05:31.227903.parquet'
- config_name: original_mmlu_econometrics_5
data_files:
- split: 2023_08_28T20_05_31.227903
path:
- '**/details_original|mmlu:econometrics|5_2023-08-28T20:05:31.227903.parquet'
- split: latest
path:
- '**/details_original|mmlu:econometrics|5_2023-08-28T20:05:31.227903.parquet'
- config_name: original_mmlu_electrical_engineering_5
data_files:
- split: 2023_08_28T20_05_31.227903
path:
- '**/details_original|mmlu:electrical_engineering|5_2023-08-28T20:05:31.227903.parquet'
- split: latest
path:
- '**/details_original|mmlu:electrical_engineering|5_2023-08-28T20:05:31.227903.parquet'
- config_name: original_mmlu_elementary_mathematics_5
data_files:
- split: 2023_08_28T20_05_31.227903
path:
- '**/details_original|mmlu:elementary_mathematics|5_2023-08-28T20:05:31.227903.parquet'
- split: latest
path:
- '**/details_original|mmlu:elementary_mathematics|5_2023-08-28T20:05:31.227903.parquet'
- config_name: original_mmlu_formal_logic_5
data_files:
- split: 2023_08_28T20_05_31.227903
path:
- '**/details_original|mmlu:formal_logic|5_2023-08-28T20:05:31.227903.parquet'
- split: latest
path:
- '**/details_original|mmlu:formal_logic|5_2023-08-28T20:05:31.227903.parquet'
- config_name: original_mmlu_global_facts_5
data_files:
- split: 2023_08_28T20_05_31.227903
path:
- '**/details_original|mmlu:global_facts|5_2023-08-28T20:05:31.227903.parquet'
- split: latest
path:
- '**/details_original|mmlu:global_facts|5_2023-08-28T20:05:31.227903.parquet'
- config_name: original_mmlu_high_school_biology_5
data_files:
- split: 2023_08_28T20_05_31.227903
path:
- '**/details_original|mmlu:high_school_biology|5_2023-08-28T20:05:31.227903.parquet'
- split: latest
path:
- '**/details_original|mmlu:high_school_biology|5_2023-08-28T20:05:31.227903.parquet'
- config_name: original_mmlu_high_school_chemistry_5
data_files:
- split: 2023_08_28T20_05_31.227903
path:
- '**/details_original|mmlu:high_school_chemistry|5_2023-08-28T20:05:31.227903.parquet'
- split: latest
path:
- '**/details_original|mmlu:high_school_chemistry|5_2023-08-28T20:05:31.227903.parquet'
- config_name: original_mmlu_high_school_computer_science_5
data_files:
- split: 2023_08_28T20_05_31.227903
path:
- '**/details_original|mmlu:high_school_computer_science|5_2023-08-28T20:05:31.227903.parquet'
- split: latest
path:
- '**/details_original|mmlu:high_school_computer_science|5_2023-08-28T20:05:31.227903.parquet'
- config_name: original_mmlu_high_school_european_history_5
data_files:
- split: 2023_08_28T20_05_31.227903
path:
- '**/details_original|mmlu:high_school_european_history|5_2023-08-28T20:05:31.227903.parquet'
- split: latest
path:
- '**/details_original|mmlu:high_school_european_history|5_2023-08-28T20:05:31.227903.parquet'
- config_name: original_mmlu_high_school_geography_5
data_files:
- split: 2023_08_28T20_05_31.227903
path:
- '**/details_original|mmlu:high_school_geography|5_2023-08-28T20:05:31.227903.parquet'
- split: latest
path:
- '**/details_original|mmlu:high_school_geography|5_2023-08-28T20:05:31.227903.parquet'
- config_name: original_mmlu_high_school_government_and_politics_5
data_files:
- split: 2023_08_28T20_05_31.227903
path:
- '**/details_original|mmlu:high_school_government_and_politics|5_2023-08-28T20:05:31.227903.parquet'
- split: latest
path:
- '**/details_original|mmlu:high_school_government_and_politics|5_2023-08-28T20:05:31.227903.parquet'
- config_name: original_mmlu_high_school_macroeconomics_5
data_files:
- split: 2023_08_28T20_05_31.227903
path:
- '**/details_original|mmlu:high_school_macroeconomics|5_2023-08-28T20:05:31.227903.parquet'
- split: latest
path:
- '**/details_original|mmlu:high_school_macroeconomics|5_2023-08-28T20:05:31.227903.parquet'
- config_name: original_mmlu_high_school_mathematics_5
data_files:
- split: 2023_08_28T20_05_31.227903
path:
- '**/details_original|mmlu:high_school_mathematics|5_2023-08-28T20:05:31.227903.parquet'
- split: latest
path:
- '**/details_original|mmlu:high_school_mathematics|5_2023-08-28T20:05:31.227903.parquet'
- config_name: original_mmlu_high_school_microeconomics_5
data_files:
- split: 2023_08_28T20_05_31.227903
path:
- '**/details_original|mmlu:high_school_microeconomics|5_2023-08-28T20:05:31.227903.parquet'
- split: latest
path:
- '**/details_original|mmlu:high_school_microeconomics|5_2023-08-28T20:05:31.227903.parquet'
- config_name: original_mmlu_high_school_physics_5
data_files:
- split: 2023_08_28T20_05_31.227903
path:
- '**/details_original|mmlu:high_school_physics|5_2023-08-28T20:05:31.227903.parquet'
- split: latest
path:
- '**/details_original|mmlu:high_school_physics|5_2023-08-28T20:05:31.227903.parquet'
- config_name: original_mmlu_high_school_psychology_5
data_files:
- split: 2023_08_28T20_05_31.227903
path:
- '**/details_original|mmlu:high_school_psychology|5_2023-08-28T20:05:31.227903.parquet'
- split: latest
path:
- '**/details_original|mmlu:high_school_psychology|5_2023-08-28T20:05:31.227903.parquet'
- config_name: original_mmlu_high_school_statistics_5
data_files:
- split: 2023_08_28T20_05_31.227903
path:
- '**/details_original|mmlu:high_school_statistics|5_2023-08-28T20:05:31.227903.parquet'
- split: latest
path:
- '**/details_original|mmlu:high_school_statistics|5_2023-08-28T20:05:31.227903.parquet'
- config_name: original_mmlu_high_school_us_history_5
data_files:
- split: 2023_08_28T20_05_31.227903
path:
- '**/details_original|mmlu:high_school_us_history|5_2023-08-28T20:05:31.227903.parquet'
- split: latest
path:
- '**/details_original|mmlu:high_school_us_history|5_2023-08-28T20:05:31.227903.parquet'
- config_name: original_mmlu_high_school_world_history_5
data_files:
- split: 2023_08_28T20_05_31.227903
path:
- '**/details_original|mmlu:high_school_world_history|5_2023-08-28T20:05:31.227903.parquet'
- split: latest
path:
- '**/details_original|mmlu:high_school_world_history|5_2023-08-28T20:05:31.227903.parquet'
- config_name: original_mmlu_human_aging_5
data_files:
- split: 2023_08_28T20_05_31.227903
path:
- '**/details_original|mmlu:human_aging|5_2023-08-28T20:05:31.227903.parquet'
- split: latest
path:
- '**/details_original|mmlu:human_aging|5_2023-08-28T20:05:31.227903.parquet'
- config_name: original_mmlu_human_sexuality_5
data_files:
- split: 2023_08_28T20_05_31.227903
path:
- '**/details_original|mmlu:human_sexuality|5_2023-08-28T20:05:31.227903.parquet'
- split: latest
path:
- '**/details_original|mmlu:human_sexuality|5_2023-08-28T20:05:31.227903.parquet'
- config_name: original_mmlu_international_law_5
data_files:
- split: 2023_08_28T20_05_31.227903
path:
- '**/details_original|mmlu:international_law|5_2023-08-28T20:05:31.227903.parquet'
- split: latest
path:
- '**/details_original|mmlu:international_law|5_2023-08-28T20:05:31.227903.parquet'
- config_name: original_mmlu_jurisprudence_5
data_files:
- split: 2023_08_28T20_05_31.227903
path:
- '**/details_original|mmlu:jurisprudence|5_2023-08-28T20:05:31.227903.parquet'
- split: latest
path:
- '**/details_original|mmlu:jurisprudence|5_2023-08-28T20:05:31.227903.parquet'
- config_name: original_mmlu_logical_fallacies_5
data_files:
- split: 2023_08_28T20_05_31.227903
path:
- '**/details_original|mmlu:logical_fallacies|5_2023-08-28T20:05:31.227903.parquet'
- split: latest
path:
- '**/details_original|mmlu:logical_fallacies|5_2023-08-28T20:05:31.227903.parquet'
- config_name: original_mmlu_machine_learning_5
data_files:
- split: 2023_08_28T20_05_31.227903
path:
- '**/details_original|mmlu:machine_learning|5_2023-08-28T20:05:31.227903.parquet'
- split: latest
path:
- '**/details_original|mmlu:machine_learning|5_2023-08-28T20:05:31.227903.parquet'
- config_name: original_mmlu_management_5
data_files:
- split: 2023_08_28T20_05_31.227903
path:
- '**/details_original|mmlu:management|5_2023-08-28T20:05:31.227903.parquet'
- split: latest
path:
- '**/details_original|mmlu:management|5_2023-08-28T20:05:31.227903.parquet'
- config_name: original_mmlu_marketing_5
data_files:
- split: 2023_08_28T20_05_31.227903
path:
- '**/details_original|mmlu:marketing|5_2023-08-28T20:05:31.227903.parquet'
- split: latest
path:
- '**/details_original|mmlu:marketing|5_2023-08-28T20:05:31.227903.parquet'
- config_name: original_mmlu_medical_genetics_5
data_files:
- split: 2023_08_28T20_05_31.227903
path:
- '**/details_original|mmlu:medical_genetics|5_2023-08-28T20:05:31.227903.parquet'
- split: latest
path:
- '**/details_original|mmlu:medical_genetics|5_2023-08-28T20:05:31.227903.parquet'
- config_name: original_mmlu_miscellaneous_5
data_files:
- split: 2023_08_28T20_05_31.227903
path:
- '**/details_original|mmlu:miscellaneous|5_2023-08-28T20:05:31.227903.parquet'
- split: latest
path:
- '**/details_original|mmlu:miscellaneous|5_2023-08-28T20:05:31.227903.parquet'
- config_name: original_mmlu_moral_disputes_5
data_files:
- split: 2023_08_28T20_05_31.227903
path:
- '**/details_original|mmlu:moral_disputes|5_2023-08-28T20:05:31.227903.parquet'
- split: latest
path:
- '**/details_original|mmlu:moral_disputes|5_2023-08-28T20:05:31.227903.parquet'
- config_name: original_mmlu_moral_scenarios_5
data_files:
- split: 2023_08_28T20_05_31.227903
path:
- '**/details_original|mmlu:moral_scenarios|5_2023-08-28T20:05:31.227903.parquet'
- split: latest
path:
- '**/details_original|mmlu:moral_scenarios|5_2023-08-28T20:05:31.227903.parquet'
- config_name: original_mmlu_nutrition_5
data_files:
- split: 2023_08_28T20_05_31.227903
path:
- '**/details_original|mmlu:nutrition|5_2023-08-28T20:05:31.227903.parquet'
- split: latest
path:
- '**/details_original|mmlu:nutrition|5_2023-08-28T20:05:31.227903.parquet'
- config_name: original_mmlu_philosophy_5
data_files:
- split: 2023_08_28T20_05_31.227903
path:
- '**/details_original|mmlu:philosophy|5_2023-08-28T20:05:31.227903.parquet'
- split: latest
path:
- '**/details_original|mmlu:philosophy|5_2023-08-28T20:05:31.227903.parquet'
- config_name: original_mmlu_prehistory_5
data_files:
- split: 2023_08_28T20_05_31.227903
path:
- '**/details_original|mmlu:prehistory|5_2023-08-28T20:05:31.227903.parquet'
- split: latest
path:
- '**/details_original|mmlu:prehistory|5_2023-08-28T20:05:31.227903.parquet'
- config_name: original_mmlu_professional_accounting_5
data_files:
- split: 2023_08_28T20_05_31.227903
path:
- '**/details_original|mmlu:professional_accounting|5_2023-08-28T20:05:31.227903.parquet'
- split: latest
path:
- '**/details_original|mmlu:professional_accounting|5_2023-08-28T20:05:31.227903.parquet'
- config_name: original_mmlu_professional_law_5
data_files:
- split: 2023_08_28T20_05_31.227903
path:
- '**/details_original|mmlu:professional_law|5_2023-08-28T20:05:31.227903.parquet'
- split: latest
path:
- '**/details_original|mmlu:professional_law|5_2023-08-28T20:05:31.227903.parquet'
- config_name: original_mmlu_professional_medicine_5
data_files:
- split: 2023_08_28T20_05_31.227903
path:
- '**/details_original|mmlu:professional_medicine|5_2023-08-28T20:05:31.227903.parquet'
- split: latest
path:
- '**/details_original|mmlu:professional_medicine|5_2023-08-28T20:05:31.227903.parquet'
- config_name: original_mmlu_professional_psychology_5
data_files:
- split: 2023_08_28T20_05_31.227903
path:
- '**/details_original|mmlu:professional_psychology|5_2023-08-28T20:05:31.227903.parquet'
- split: latest
path:
- '**/details_original|mmlu:professional_psychology|5_2023-08-28T20:05:31.227903.parquet'
- config_name: original_mmlu_public_relations_5
data_files:
- split: 2023_08_28T20_05_31.227903
path:
- '**/details_original|mmlu:public_relations|5_2023-08-28T20:05:31.227903.parquet'
- split: latest
path:
- '**/details_original|mmlu:public_relations|5_2023-08-28T20:05:31.227903.parquet'
- config_name: original_mmlu_security_studies_5
data_files:
- split: 2023_08_28T20_05_31.227903
path:
- '**/details_original|mmlu:security_studies|5_2023-08-28T20:05:31.227903.parquet'
- split: latest
path:
- '**/details_original|mmlu:security_studies|5_2023-08-28T20:05:31.227903.parquet'
- config_name: original_mmlu_sociology_5
data_files:
- split: 2023_08_28T20_05_31.227903
path:
- '**/details_original|mmlu:sociology|5_2023-08-28T20:05:31.227903.parquet'
- split: latest
path:
- '**/details_original|mmlu:sociology|5_2023-08-28T20:05:31.227903.parquet'
- config_name: original_mmlu_us_foreign_policy_5
data_files:
- split: 2023_08_28T20_05_31.227903
path:
- '**/details_original|mmlu:us_foreign_policy|5_2023-08-28T20:05:31.227903.parquet'
- split: latest
path:
- '**/details_original|mmlu:us_foreign_policy|5_2023-08-28T20:05:31.227903.parquet'
- config_name: original_mmlu_virology_5
data_files:
- split: 2023_08_28T20_05_31.227903
path:
- '**/details_original|mmlu:virology|5_2023-08-28T20:05:31.227903.parquet'
- split: latest
path:
- '**/details_original|mmlu:virology|5_2023-08-28T20:05:31.227903.parquet'
- config_name: original_mmlu_world_religions_5
data_files:
- split: 2023_08_28T20_05_31.227903
path:
- '**/details_original|mmlu:world_religions|5_2023-08-28T20:05:31.227903.parquet'
- split: latest
path:
- '**/details_original|mmlu:world_religions|5_2023-08-28T20:05:31.227903.parquet'
- config_name: results
data_files:
- split: 2023_07_19T10_51_47.706539
path:
- results_2023-07-19T10:51:47.706539.parquet
- split: 2023_08_28T20_05_31.227903
path:
- results_2023-08-28T20:05:31.227903.parquet
- split: 2023_09_09T19_07_27.412342
path:
- results_2023-09-09T19-07-27.412342.parquet
- split: 2023_09_17T10_06_45.584443
path:
- results_2023-09-17T10-06-45.584443.parquet
- split: latest
path:
- results_2023-09-17T10-06-45.584443.parquet
---
# Dataset Card for Evaluation run of tiiuae/falcon-7b
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/tiiuae/falcon-7b
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** clementine@hf.co
### Dataset Summary
Dataset automatically created during the evaluation run of model [tiiuae/falcon-7b](https://huggingface.co/tiiuae/falcon-7b) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 122 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 4 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "latest" split always points to the results of the most recent run.
An additional configuration "results" stores all the aggregated results of the runs (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_tiiuae__falcon-7b",
	"harness_winogrande_5",
	split="latest")
```
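As the configs above show, each run's split name is derived from the run timestamp, with `-` and `:` replaced by `_`. A small sketch of that naming convention (the helper name is ours, not part of the library):

```python
def timestamp_to_split_name(ts: str) -> str:
    """Map a run timestamp (as it appears in the result file names)
    to the split name used in this dataset's configs."""
    return ts.replace("-", "_").replace(":", "_")

print(timestamp_to_split_name("2023-09-17T10:06:45.584443"))
# 2023_09_17T10_06_45.584443
```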
## Latest results
These are the [latest results from run 2023-09-17T10:06:45.584443](https://huggingface.co/datasets/open-llm-leaderboard/details_tiiuae__falcon-7b/blob/main/results_2023-09-17T10-06-45.584443.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each of them in the `results` configuration and in the "latest" split of each eval):
```python
{
"all": {
"em": 0.0010486577181208054,
"em_stderr": 0.00033145814652193653,
"f1": 0.04824664429530208,
"f1_stderr": 0.0012232481165562455,
"acc": 0.3751460800288181,
"acc_stderr": 0.008496930501481662
},
"harness|drop|3": {
"em": 0.0010486577181208054,
"em_stderr": 0.00033145814652193653,
"f1": 0.04824664429530208,
"f1_stderr": 0.0012232481165562455
},
"harness|gsm8k|5": {
"acc": 0.026535253980288095,
"acc_stderr": 0.004427045987265165
},
"harness|winogrande|5": {
"acc": 0.7237569060773481,
"acc_stderr": 0.01256681501569816
}
}
```
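The `all` block appears to be a simple aggregation of the per-task metrics: `em` and `f1` come from the single `drop` task, while `acc` is the unweighted mean over `gsm8k` and `winogrande`. A quick check of that assumption against the numbers above:

```python
# Per-task accuracies copied from the results above.
gsm8k_acc = 0.026535253980288095
winogrande_acc = 0.7237569060773481

# Assumed aggregation: unweighted mean over the tasks that report `acc`.
all_acc = (gsm8k_acc + winogrande_acc) / 2

print(all_acc)  # matches the "acc" value in the "all" block
```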
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] | 67,247 | [
ticoAg/Medical-Dialogue-System | 2023-08-19T10:57:30.000Z | [
"task_categories:question-answering",
"task_categories:text-generation",
"size_categories:1M<n<10M",
"language:zh",
"license:apache-2.0",
"region:us"
] | ticoAg | null | null | 1 | 9 | 2023-08-18T16:22:43 | ---
num rows: 3206606
file size: 2.09 GB
license: apache-2.0
task_categories:
- question-answering
- text-generation
language:
- zh
size_categories:
- 1M<n<10M
---
## describe
Fairly naive scenario dialogues, which may contain real-world details such as actual hospital and doctor names (e.g. "XX Hospital", "Dr. XX").
If you need high-quality, purely instruction-style data, further cleaning is required; for finetuning on health-care scenarios it is probably good enough.
## from
- [Medical-Dialogue-System](https://github.com/UCSD-AI4H/Medical-Dialogue-System)
- [medical_dialog](https://huggingface.co/datasets/medical_dialog)
## format
```json
{
"instruction": null,
"input": "不知道,我是在09年8月份,白天出了很多的汗,晚上睡觉突然醒来,看房子天晕地转,过了大约也就一分钟的样子,就不转了.但头向左转动就又转,左边头皮还发麻.第二天起来,人没有精神,过了段时间.病情时轻时重,好像是躺在床上向右人就一上晕了.但时间不长.有一天开了一天的车,晚上先是有点头晕,走路不稳,上床休息,但突然后脑根部特别疼,到了第二天也不疼了.到现在也没有疼过.现在就是躺下和起床特别晕(头向右和头向上或向下),走路不稳.特别是站久了,就要倒了感觉.另外平常,脑袋感觉昏沉沉的,有时眼睛看东西跟不上速度,要晕的,晕的时候是脑袋里跟一片去飘过的。",
"output": "你得的是颈椎间盘突出,可以先做保守治疗。",
"history": [
[
"但,很多医生看了片子,说是张口位片枢椎似乎有些旋转移位 ,不知有没有啊。",
"枢椎旋转移位不太可能,你的片子不是很清楚。请咨询我院骨科。"
],
[
"好的,谢谢大夫,祝您新春愉快。",
"不客气!"
]
]
}
```
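For chat-style finetuning, a record like the one above can be flattened into an ordered message list. Note that, judging by the processing script in this card, `input`/`output` hold the first exchange and `history` holds the later turns, so they are emitted first here (`to_messages` is a hypothetical helper, not part of the dataset):

```python
def to_messages(record: dict) -> list[dict]:
    """Flatten an Alpaca-style record with `history` into an ordered
    chat message list, assuming `input`/`output` are the first exchange."""
    messages = [
        {"role": "user", "content": record["input"]},
        {"role": "assistant", "content": record["output"]},
    ]
    for user_turn, assistant_turn in record.get("history") or []:
        messages.append({"role": "user", "content": user_turn})
        messages.append({"role": "assistant", "content": assistant_turn})
    return messages
```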
## usage
```python
from datasets import load_dataset
ds = load_dataset("ticoAg/Medical-Dialogue-System")
```
## process script
```python
from pathlib import Path

from tqdm import tqdm

# loadJS, sft_meta and sortDict are the author's helper functions
# (JSON loading, record construction, key sorting); not shown in the card.
data_dir = Path("medical_dialog/data/processed-chinese")
raw_train_ds = loadJS(data_dir.joinpath("train_data.json"))
raw_test_ds = loadJS(data_dir.joinpath("test_data.json"))
raw_valid_ds = loadJS(data_dir.joinpath("validate_data.json"))
raw_ds = raw_train_ds + raw_test_ds + raw_valid_ds

_ds = []
for i in tqdm(raw_ds):
    # Each utterance starts with a two-character speaker label ("病人"/"医生")
    # followed by a separator, hence the [:2] / [3:] slices.
    _diag = [{"role": dialog[:2], "content": dialog[3:]} for dialog in i]
    meta_data = sft_meta(input=_diag[0]["content"], output=_diag[1]["content"])
    if len(_diag[1]["content"]) <= 6:
        continue  # filter out overly short single replies
    if len(_diag) > 2:
        meta_data["history"] = [
            [_diag[2 * idx]["content"], _diag[2 * idx + 1]["content"]]
            for idx in range(len(_diag) // 2)
        ][1:]
    meta_data = sortDict(meta_data)
    _ds.append(meta_data)
``` | 1,899 | [
Falah/cars_model_prompts_SDXL | 2023-08-20T10:10:54.000Z | [
"region:us"
] | Falah | null | null | 0 | 9 | 2023-08-19T15:01:21 | ---
dataset_info:
features:
- name: prompts
dtype: string
splits:
- name: train
num_bytes: 510779151
num_examples: 1000000
download_size: 68860564
dataset_size: 510779151
---
# Dataset Card for "cars_model_prompts_SDXL"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 376 | [
sarahpann/AMPS | 2023-08-20T20:27:43.000Z | [
"region:us"
] | sarahpann | null | null | 0 | 9 | 2023-08-20T19:15:58 | Entry not found | 15 | [
Sprakbanken/nb_samtale | 2023-10-06T14:43:06.000Z | [
"task_categories:automatic-speech-recognition",
"language:nb",
"language:nn",
"language:no",
"license:cc0-1.0",
"dialects",
"podcasts",
"live-events",
"conversational",
"speech",
"region:us"
] | Sprakbanken | NB Samtale is a speech corpus made by the Language Bank at the National Library of Norway.
The corpus contains orthographically transcribed speech from podcasts and recordings of live events at the National Library.
The corpus is intended as an open source dataset for Automatic Speech Recognition (ASR) development,
and is specifically aimed at improving ASR systems’ handle on conversational speech. | \ | 0 | 9 | 2023-08-22T12:41:51 | ---
language:
- nb
- nn
- 'no'
license: cc0-1.0
task_categories:
- automatic-speech-recognition
tags:
- dialects
- podcasts
- live-events
- conversational
- speech
---
# Dataset Card for Sprakbanken/nb_samtale
## Dataset Description
- **Homepage:** [nb.no/sprakbanken](https://www.nb.no/sprakbanken/)
- **Repository:** [Resource catalogue, no. 85](https://www.nb.no/sprakbanken/en/resource-catalogue/oai-nb-no-sbr-85/)
- **Paper:** [NB_Samtale_About_the_corpus.pdf](https://www.nb.no/sbfil/taledata/NB_Samtale_About_the_corpus.pdf)
- **Point of Contact:** [Språkbanken](mailto:sprakbanken@nb.no)
### Dataset Summary
NB Samtale is a speech corpus made by the Language Bank at the National Library of Norway. The corpus contains orthographically transcribed speech from podcasts and recordings of live events at the National Library. The corpus is intended as an open source dataset for Automatic Speech Recognition (ASR) development, and is specifically aimed at improving ASR systems’ handle on conversational speech.
The corpus consists of 12,080 segments, a total of 24 hours transcribed speech from 69 speakers. The corpus ensures both gender and dialect variation, and speakers from five broad dialect areas are represented. Both Bokmål and Nynorsk transcriptions are present in the corpus, with Nynorsk making up approximately 25% of the transcriptions.
We greatly appreciate feedback and suggestions for improvements.
### Supported Tasks
- Automatic Speech Recognition for verbatim transcriptions of conversational speech, as well as for standardised, orthographic transcriptions.
- Speaker Diarization: The sentence segments all have a speaker ID, which is unique per speaker, and the same speaker will have the same speaker ID across source files.
- Audio classification: Each segment could be classified with one of the metadata features.
### Languages
The transcription texts are in either Norwegian bokmål or Norwegian nynorsk.
The audio is in Norwegian, in the speakers' respective dialects.
We have categorized them into five dialect areas:
Dialect area (en) | Dialect area (nb) | Counties
--- | --- | ---
Eastern Norway | Østlandet | Agder, Innlandet, Oslo, Vestfold og Telemark, Viken
Southwest Norway | Sørvestlandet | Rogaland
Western Norway | Vestlandet | Møre og Romsdal, Vestland
Central Norway | Midt-Norge | Trøndelag
Northern Norway | Nord-Norge | Nordland, Troms og Finnmark
## Dataset Structure
### Data Instances
A data point is an audio segment, including a relative path to the `.wav`-file, and the transcription. Additional information is provided about the speaker, the orthographic standard for the transcription, whether the segment overlaps with the previous or next, and the setting for the recording. The transcription also comes in 3 different normalized versions: "orthographic" (orthographically correct text, with punctuation, integer numbers, and standardized word forms), "verbatim" (with tokens marking hesitations, laughter, foreign phrases and unknown words, but no punctuation) and "annotations" (as is from the annotation process, with punctuation, tags, and alternate word forms).
```
{
'source_file_id': 'nb-1',
'segment_id': '0008970-0013860',
'segment_order': 0,
'duration': 4.89,
'overlap_previous': False,
'overlap_next': False,
'speaker_id': 'P36',
'gender': 1,
'dialect': 0,
'orthography': 0,
'source_type': 0,
'file_name': 'data/train/bm/nb-1_0008970-0013860.wav',
'transcription': 'hallo og velkommen hit til Nasjonalbiblioteket.',
'annotations': 'hallo og velkommen hit til Nasjonalbiblioteket.',
'orthographic': 'hallo og velkommen hit til Nasjonalbiblioteket.',
'verbatim': 'hallo og velkommen hit til Nasjonalbiblioteket',
'audio': {
'path': "data/train/bm/nb-1_0008970-0013860.wav",
'array': array([-0.00033569, 0.00222778, -0.0005188 , ..., 0.00067139,
0.00057983, 0.0005188 ]),
'sampling_rate': 16000}
}
```
### Data Fields
data field | description | Value type / example
--- | --- | ---
`source_file_id` | original file the segment appears in. | e.g. `50f-X`, `tr-X` or `nb-X`, where X is a number. (str)
`segment_id` | segment start and end timestamp. | `{starttime}-{endtime}` (str)
`segment_order` | order of segment in the original file. | (int)
`duration` | duration of segment in seconds. | (float)
`overlap_previous` | whether the beginning of the segment overlaps with the previous segment | `True` or `False` (bool)
`overlap_next` | whether the end of the segment overlaps with the next segment. | `True` or `False` (bool)
`speaker_id` | speaker ID for the speaker transcribed in the segment. | `P0` - `P69` (str)
`gender` | speaker’s binary gender (female or male), mapped to a HuggingFace datasets ClassLabel index number | `0`: f or `1`: m (int)
`dialect` | the speaker’s dialect area, as a ClassLabel index number for the areas east (e), north (n), southwest (sw), central (t), west (w). | `0`: e, `1`: n, `2`: sw, `3`: t, or `4`: w (int)
`orthography` | the written norm of the transcription, either bokmål (`bm`) or nynorsk (`nn`) as a ClassLabel index number | `0`: bm or `1`: nn (int)
`source_type` | type of recording of original file, either `live-event` or `podcast`, as a ClassLabel index number | `0`: live-event or `1`: podcast (int)
`file_name` | file name of the audio segment, without the path | `{source_file_id}_{segment_id}.wav` (str)
`transcription` | orthographic transcription text | (str)
`orthographic` | close to orthographically correct text transcription in the given `orthography` standard. Contains punctuation, numbers, and standard word forms. | (str)
`verbatim` | transcription text mapping to the uttered words as close as possible. Contains tokens marking hesitations, laughter, foreign phrases and unknown words, but no punctuation. | (str)
`annotations` | transcription text "as is" from the annotation process. Contains false starts, metatags for non-linguistic noises, punctuation, and alternate word forms (`<uttered word>\<orthographic standard word>`) | (str)
`audio` | the audio segment data, with the relative file `path`, the bytes `array`, and the `sampling_rate` | (dict)
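The integer ClassLabel fields can be decoded back to their string names using the mappings from the table above. This is a small sketch: the index orders are taken from the field descriptions, not from an inspected `features` object.

```python
# ClassLabel index -> name mappings, as documented in the field table above.
GENDER = ["f", "m"]
DIALECT = ["e", "n", "sw", "t", "w"]
ORTHOGRAPHY = ["bm", "nn"]
SOURCE_TYPE = ["live-event", "podcast"]

def decode_labels(segment: dict) -> dict:
    """Return a copy of a segment with the integer ClassLabel fields
    replaced by their string names."""
    out = dict(segment)
    out["gender"] = GENDER[segment["gender"]]
    out["dialect"] = DIALECT[segment["dialect"]]
    out["orthography"] = ORTHOGRAPHY[segment["orthography"]]
    out["source_type"] = SOURCE_TYPE[segment["source_type"]]
    return out
```

Applied to the data instance shown earlier, this yields `gender="m"`, `dialect="e"`, `orthography="bm"`, and `source_type="live-event"`.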
### Data Splits
The data is split into a `train`, `validation`, and `test` set, stratified on three parameters: source type, gender and dialect.
Gender and dialect naturally refers to the gender and dialect of the speakers.
The data has not been split on speaker ID, so the same speaker may appear in more than one set: avoiding speaker overlap proved impossible
while still maintaining a decent distribution of the other parameters, especially dialect variation.
The source type refers to whether the source material is one of the two podcasts (50f, tr) or
a National Library live event (nb).
The two types have different features.
The podcasts are overall good quality studio recordings with little background noise, echo and such.
The live events are recorded in rooms or reception halls at the National Library and have more background
noise, echo and inconsistent audio quality.
Many also have a live audience.
## Dataset Creation
### Source data
The audio is collected from podcasts we have been permitted to share openly – namely 50
forskere from UiT and Trondheim kommunes podkast from Trondheim municipality – as well
as some of The National Library’s own recordings of live events. The podcasts are studio
recordings, while the National Library events take place in rooms and reception halls at the
National Library, sometimes in front of an audience.
#### Who are the source language producers?
Guests and hosts of the respective recording events, either podcasts produced in a studio or lectures, debates and conversations in a public live event.
### Annotations
#### Annotation process
The recordings were segmented and transcribed in the transcription software ELAN. The
recordings were transcribed automatically using a Norwegian ASR system created by the AI-
lab at the National Library of Norway. The speech was segmented and transcribed with
speaker diarization, separating the speakers into separate transcription tiers. These
segments and transcriptions were then manually corrected by a transcriber according to a
set of guidelines. All the manual transcriptions were reviewed by a second person in order to
avoid substantial discrepancies between transcribers. Finally all the transcriptions were
spell-checked, and checked for any unwanted numbers or special characters.
See the [official dataset documentation](https://www.nb.no/sbfil/taledata/NB_Samtale_About_the_corpus.pdf) for more details.
The full set of guidelines for segmentation and transcription are given in Norwegian in [NB_Samtale_transcription_guidelines.pdf](https://www.nb.no/sbfil/taledata/NB_Samtale_transcription_guidelines.pdf).
#### Who are the annotators?
The Norwegian Language Bank (Språkbanken).
### Personal and Sensitive Information
The data fields `gender`, `dialect` and `speaker_id` pertain to the speakers themselves.
A single speaker will have the same `speaker_id` if they appear in several different source files.
## Considerations for Using the Data
### Discussion of Biases
The recordings were for the most part selected based on the gender and dialect of the
speakers to ensure gender balance and broad dialectal representation. The corpus has a
near 50/50 divide between male and female speakers (male 54%, female 46%). The
Norwegian dialects have been divided into five broad dialect areas that are all represented in
the corpus. However, Eastern Norwegian has the greatest representation at about 50%
speaker time, while the other areas fall between 8% and 20% speaker time.
## Additional Information
### Dataset Curators
The content of the dataset was created by the Norwegian Language Bank (Språkbanken) at the National Library of Norway.
[Marie Iversdatter Røsok](mailto:marie.rosok@nb.no), [Ingerid Løyning Dale](mailto:ingerid.dale@nb.no) and [Per Erik Solberg](mailto:per.solberg@nb.no) contributed in creating this dataset.
Thanks to the HuggingFace team for assistance.
### Licensing Information
The NB Samtale dataset is released with the [CC-ZERO-license](https://creativecommons.org/publicdomain/zero/1.0/), i.e., it is public domain and can be used for any purpose and reshared without permission.
| 10,636 | [
RuterNorway/Fleurs-Alpaca-EN-NO | 2023-08-23T12:43:59.000Z | [
"task_categories:translation",
"size_categories:1k<n<5k",
"language:no",
"language:en",
"license:cc-by-4.0",
"region:us"
] | RuterNorway | null | null | 3 | 9 | 2023-08-23T12:42:35 | ---
language:
- no
- en
license: cc-by-4.0
task_categories:
- translation
pretty_name: Fleurs-Alpaca-EN-NO
size_categories:
- 1k<n<5k
---
<p><h1>🦙 Alpaca Translate Norwegian 🦙</h1></p>
This dataset is based on [Fleurs](https://huggingface.co/datasets/google/fleurs) from Google. We matched the English sentences with Norwegian sentences and formatted it to an Alpaca-style dataset.
## Dataset Structure
```json
{
"instruction": "Oversett teksten fra engelsk til norsk",
"input": "English string",
"output": "Norwegian string"
}
```
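Given how the dataset was built, each record can be reproduced from an aligned Fleurs sentence pair with a one-line constructor (a sketch; `to_alpaca` is a hypothetical helper using the instruction string shown above):

```python
def to_alpaca(english: str, norwegian: str) -> dict:
    """Build one Alpaca-style translation record from an aligned
    English/Norwegian sentence pair."""
    return {
        "instruction": "Oversett teksten fra engelsk til norsk",
        "input": english,
        "output": norwegian,
    }
```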
This dataset was created by [Ruter](https://ruter.no) during Ruter's AI Lab effort to fine-tune LLaMA-2 models for Norwegian.
## License
Following the original dataset from Google, this dataset is released under the [Creative Commons Attribution 4.0 International](https://creativecommons.org/licenses/by/4.0/) license.
<p><h1>🦙 Alpaca Translate Norsk 🦙</h1></p>
Dette datasettet er basert på [Fleurs](https://huggingface.co/datasets/google/fleurs) utgitt av Google. Vi har sammenstilt de engelske setningene med norske setninger og formatert det til et Alpaca-stil datasett.
## Datasettstruktur
```json
{
"instruction": "Oversett teksten fra engelsk til norsk",
"input": "English string",
"output": "Norwegian string"
}
```
Datasettet ble laget av [Ruter](https://ruter.no) AI Lab under arbeidet med å finjustere LLaMA-2-modeller for norsk.
## License
Vi følger det originale datasettet fra Google sin lisens, som er utgitt under en [Creative Commons Attribution 4.0 International](https://creativecommons.org/licenses/by/4.0/). | 1,596 | [
PericlesSavio/contratacao | 2023-09-19T14:48:05.000Z | [
"region:us"
] | PericlesSavio | null | null | 0 | 9 | 2023-08-23T14:15:27 | Entry not found | 15 | [
ShapeNet/ShapeNetCore-archive | 2023-09-20T15:05:16.000Z | [
"language:en",
"license:other",
"3D shapes",
"region:us"
] | ShapeNet | null | null | 5 | 9 | 2023-08-23T15:55:19 | ---
language:
- en
pretty_name: ShapeNetCore
tags:
- 3D shapes
license: other
extra_gated_heading: Acknowledge license to accept the repository
extra_gated_prompt: >-
To request access to this ShapeNet repo, you will need to provide your **full name** (please provide both your first and last name), the name of your **advisor or the principal investigator (PI)** of your lab (in the PI/Advisor) fields, and the **school or company** that you are affiliated with (the **Affiliation** field).
After requesting access to this ShapeNet repo, you will be considered for access approval.
After access approval, you (the "Researcher") receive permission to use the ShapeNet database (the "Database") at Princeton University and Stanford University. In exchange for being able to join the ShapeNet community and receive such permission, Researcher hereby agrees to the following terms and conditions:
Researcher shall use the Database only for non-commercial research and educational purposes.
Princeton University and Stanford University make no representations or warranties regarding the Database, including but not limited to warranties of non-infringement or fitness for a particular purpose.
Researcher accepts full responsibility for his or her use of the Database and shall defend and indemnify Princeton University and Stanford University, including their employees, Trustees, officers and agents, against any and all claims arising from Researcher's use of the Database, including but not limited to Researcher's use of any copies of copyrighted 3D models that he or she may create from the Database.
Researcher may provide research associates and colleagues with access to the Database provided that they first agree to be bound by these terms and conditions.
Princeton University and Stanford University reserve the right to terminate Researcher's access to the Database at any time.
If Researcher is employed by a for-profit, commercial entity, Researcher's employer shall also be bound by these terms and conditions, and Researcher hereby represents that he or she is fully authorized to enter into this agreement on behalf of such employer.
The law of the State of New Jersey shall apply to all disputes under this agreement.
For access to the data, please fill in your **full name** (both first and last name), the name of your **advisor or principal investigator (PI)**, and the name of the **school or company** you are affiliated with.
Please actually fill out the fields (DO NOT put the word "Advisor" for PI/Advisor and the word "School" for "Affiliation", please specify the name of your advisor and the name of your school).
extra_gated_fields:
Name: text
PI/Advisor: text
Affiliation: text
Purpose: text
Country: text
I agree to use this dataset for non-commercial use ONLY: checkbox
---
This repository holds archives (zip files) of main versions of ShapeNetCore, a subset of [ShapeNet](https://shapenet.org).
ShapeNetCore is a densely annotated subset of ShapeNet covering 55 common object categories with ~51,300 unique 3D models. Each model in ShapeNetCore is linked to an appropriate synset in [WordNet 3.0](https://wordnet.princeton.edu/).
Please see [DATA.md](DATA.md) for details about the data.
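The archives unpack into per-synset folders. Below is a minimal sketch for indexing a local extraction; the `<root>/<synsetId>/<modelId>/` layout is an assumption based on the common ShapeNetCore convention — DATA.md is authoritative.

```python
import tempfile
from pathlib import Path

def index_models(root):
    """Map each WordNet synset ID folder to the sorted list of model IDs under it."""
    index = {}
    for synset_dir in sorted(Path(root).iterdir()):
        if synset_dir.is_dir():
            index[synset_dir.name] = sorted(
                p.name for p in synset_dir.iterdir() if p.is_dir())
    return index

# Demo on a throwaway directory standing in for an extracted archive.
root = Path(tempfile.mkdtemp())
(root / "02958343" / "model_a").mkdir(parents=True)  # 02958343 is the "car" synset
(root / "02958343" / "model_b").mkdir(parents=True)
print(index_models(root))  # {'02958343': ['model_a', 'model_b']}
```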
If you use ShapeNet data, you agree to abide by the [ShapeNet terms of use](https://shapenet.org/terms). You are only allowed to redistribute the data to your research associates and colleagues provided that they first agree to be bound by these terms and conditions.
If you use this data, please cite the main ShapeNet technical report.
```
@techreport{shapenet2015,
title = {{ShapeNet: An Information-Rich 3D Model Repository}},
author = {Chang, Angel X. and Funkhouser, Thomas and Guibas, Leonidas and Hanrahan, Pat and Huang, Qixing and Li, Zimo and Savarese, Silvio and Savva, Manolis and Song, Shuran and Su, Hao and Xiao, Jianxiong and Yi, Li and Yu, Fisher},
number = {arXiv:1512.03012 [cs.GR]},
institution = {Stanford University --- Princeton University --- Toyota Technological Institute at Chicago},
year = {2015}
}
```
For more information, please contact us at shapenetwebmaster@gmail.com and indicate ShapeNetCore v2 in the title of your email.
| 4,269 |
khalidalt/mlqa_ar | 2023-08-24T16:02:40.000Z | [
"region:us"
] | khalidalt | null | null | 0 | 9 | 2023-08-24T16:02:32 | Entry not found | 15 |
yqzheng/semeval2014_restaurants | 2023-08-25T09:53:14.000Z | [
"region:us"
] | yqzheng | null | null | 0 | 9 | 2023-08-25T09:53:02 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: text
dtype: string
- name: aspect
dtype: string
- name: start
dtype: int64
- name: end
dtype: int64
- name: label
dtype: int64
splits:
- name: train
num_bytes: 493796
num_examples: 3608
- name: test
num_bytes: 149718
num_examples: 1120
download_size: 209444
dataset_size: 643514
---
# Dataset Card for "semeval2014_restaurants"
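The features above describe aspect-level sentiment records. A small sanity check of the (assumed) convention that `start`/`end` are character offsets into `text` — verify against the real data:

```python
# One illustrative record shaped like the declared features
# (text, aspect, start, end, label); the offset convention is an assumption.
example = {
    "text": "The pizza was great but the service was slow.",
    "aspect": "service",
    "start": 28,
    "end": 35,
    "label": 0,
}
assert example["text"][example["start"]:example["end"]] == example["aspect"]
```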
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 678 |
nihiluis/financial-advisor-100 | 2023-08-25T19:21:12.000Z | [
"region:us"
] | nihiluis | null | null | 1 | 9 | 2023-08-25T19:21:06 | ---
dataset_info:
features:
- name: id
dtype: string
- name: question
dtype: string
- name: answer
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 534730
num_examples: 100
download_size: 320626
dataset_size: 534730
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "finadv100_v2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 539 |
dantepalacio/ru_dial_sum | 2023-08-28T06:37:28.000Z | [
"region:us"
] | dantepalacio | null | null | 0 | 9 | 2023-08-28T06:27:42 | dataset was not cleared | 23 |
AnthonyRayo/AutomAssist2 | 2023-08-29T18:20:23.000Z | [
"region:us"
] | AnthonyRayo | null | null | 0 | 9 | 2023-08-29T18:20:07 | Entry not found | 15 |
leffff/south-park-character-png-dataset | 2023-10-20T16:49:00.000Z | [
"license:mit",
"region:us"
] | leffff | null | null | 0 | 9 | 2023-08-31T07:59:35 | ---
license: mit
---
# South Park Character Png Dataset
 | 152 | [
[
-0.022369384765625,
-0.0114898681640625,
0.015228271484375,
0.032928466796875,
-0.0272216796875,
0.0293121337890625,
0.00928497314453125,
0.00433349609375,
0.037200927734375,
0.05279541015625,
-0.040374755859375,
-0.034210205078125,
-0.0247344970703125,
0.01... |
BigSuperbPrivate/DialogueActPairing_DailyTalk | 2023-09-04T13:49:33.000Z | [
"region:us"
] | BigSuperbPrivate | null | null | 0 | 9 | 2023-09-03T03:59:18 | ---
dataset_info:
features:
- name: file
dtype: string
- name: audio
dtype: audio
- name: file2
dtype: string
- name: audio2
dtype: audio
- name: instruction
dtype: string
- name: label
dtype: string
splits:
- name: train
num_bytes: 5775513985.0
num_examples: 10000
- name: validation
num_bytes: 1225208055.0
num_examples: 2000
download_size: 6514124261
dataset_size: 7000722040.0
---
# Dataset Card for "DialogueActPairing_DailyTalk"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 631 |
taaredikahan23/medical-llama2-5k | 2023-09-04T12:34:50.000Z | [
"region:us"
] | taaredikahan23 | null | null | 2 | 9 | 2023-09-04T12:34:48 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 2165103
num_examples: 5452
download_size: 869829
dataset_size: 2165103
---
# Dataset Card for "medical-llama2-5k"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 444 |
dot-ammar/AR-dotless-small | 2023-09-11T15:35:39.000Z | [
"task_categories:translation",
"size_categories:10K<n<100K",
"language:ar",
"region:us"
] | dot-ammar | null | null | 0 | 9 | 2023-09-04T17:27:35 | ---
language:
- ar
size_categories:
- 10K<n<100K
task_categories:
- translation
pretty_name: f
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: clean
dtype: string
- name: dotless
dtype: string
splits:
- name: train
num_bytes: 18718829.46787407
num_examples: 103403
download_size: 10451596
dataset_size: 18718829.46787407
---
# Dataset Card for "AR-dotless-small"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 595 |
DynamicSuperb/DialogueActPairing_DailyTalk | 2023-11-01T08:35:59.000Z | [
"region:us"
] | DynamicSuperb | null | null | 0 | 9 | 2023-09-05T05:39:00 | ---
dataset_info:
features:
- name: file
dtype: string
- name: audio
dtype: audio
- name: file2
dtype: string
- name: audio2
dtype: audio
- name: instruction
dtype: string
- name: label
dtype: string
splits:
- name: test
num_bytes: 1146410031.0
num_examples: 2000
download_size: 988425921
dataset_size: 1146410031.0
---
# Dataset Card for "DialogueActPairing_DailyTalk"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 556 |
Falah/female_prompts | 2023-09-06T08:54:47.000Z | [
"region:us"
] | Falah | null | null | 0 | 9 | 2023-09-06T08:54:45 | ---
dataset_info:
features:
- name: prompts
dtype: string
splits:
- name: train
num_bytes: 4060495
num_examples: 10000
download_size: 474495
dataset_size: 4060495
---
# Dataset Card for "female_prompts"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 359 |
sarahlintang/Alpaca_indo_instruct | 2023-09-07T06:27:38.000Z | [
"language:id",
"region:us"
] | sarahlintang | null | null | 0 | 9 | 2023-09-07T06:21:17 | ---
language:
- id
---
Translated from the Stanford Alpaca dataset using the Google Translate API.
| 85 |
mlerner27/asapp2a | 2023-09-13T01:31:57.000Z | [
"region:us"
] | mlerner27 | null | null | 0 | 9 | 2023-09-09T19:55:48 | Entry not found | 15 |
elliotthwang/guanaco-llama2-chinese-1k | 2023-09-13T01:47:38.000Z | [
"region:us"
] | elliotthwang | null | null | 0 | 9 | 2023-09-12T02:00:55 | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 1348677
num_examples: 1000
download_size: 0
dataset_size: 1348677
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "guanaco-llama2-chinese-1k"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 447 |
a686d380/sis-novel | 2023-09-12T03:59:29.000Z | [
"license:openrail",
"region:us"
] | a686d380 | null | null | 12 | 9 | 2023-09-12T03:14:04 | ---
license: openrail
viewer: false
---
This is a Chinese adult-fiction dataset collected from sis001.
sis-novel1 contains short and medium-length stories: 112,182 items, 5.7 GB uncompressed, data up to July 2022.
sis-novel2 contains full-length novels: 4,555 items, 3.6 GB uncompressed, data up to March 2023.
All files are uncleaned txt versions and may contain reader comments. | 186 |
ChanceFocus/pubmedsum | 2023-09-12T23:48:10.000Z | [
"region:us"
] | ChanceFocus | null | null | 0 | 9 | 2023-09-12T22:43:15 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: id
dtype: string
- name: query
dtype: string
- name: answer
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 11428
num_examples: 1
- name: test
num_bytes: 4144995
num_examples: 200
download_size: 2086997
dataset_size: 4156423
---
# Dataset Card for "pubmedsum"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 630 |
mario-rc/dstc11.t4 | 2023-09-15T18:32:14.000Z | [
"language:en",
"language:zh",
"language:es",
"license:apache-2.0",
"Robust",
"Multilingual",
"Open-Domain",
"region:us"
] | mario-rc | null | null | 0 | 9 | 2023-09-13T19:50:47 | ---
language:
- en
- zh
- es
tags:
- Robust
- Multilingual
- Open-Domain
pretty_name: >-
DSTC11: Dialogue System Technology Challenge 11 Track 4: Robust and
Multilingual Automatic Evaluation Metrics for Open-Domain Dialogue Systems
license: apache-2.0
---
# DSTC11: Dialogue System Technology Challenge 11<br/><br/>[Track 4: Robust and Multilingual Automatic Evaluation Metrics for Open-Domain Dialogue Systems](https://github.com/Mario-RC/dstc11_track4_robust_multilingual_metrics)
# Directory Structure Scheme
Representation of the directory tree structure:
```
.
└── DSTC_11_Track_4              # DSTC11 data
    ├── task1                    # Metrics for Multilingual Data
    │   ├── train                # Train data (CHANEL/CDIAL datasets)
    │   │   ├── en_es            # English/Spanish data
    │   │   ├── en_zh            # English/Chinese data
    │   │   └── zh_en            # Chinese/English data
    │   ├── dev                  # Dev data (DSTC10.T5/CDIAL datasets)
    │   │   ├── en_es            # English/Spanish data
    │   │   ├── en_zh            # English/Chinese data
    │   │   └── zh_en            # Chinese/English data
    │   ├── test                 # Test data (DSTC10.T5/CDIAL datasets)
    │   │   ├── en_es            # English/Spanish data
    │   │   ├── en_zh            # English/Chinese data
    │   │   └── zh_en            # Chinese/English data
    │   └── README.md            # Task 1 data information
    ├── task2                    # Robust Metrics data
    │   ├── train                # Train data (CHANEL datasets)
    │   ├── dev                  # Development data (DSTC10.T5 datasets)
    │   ├── test                 # Test data (DSTC10.T5 datasets)
    │   └── README.md            # Task 2 data information
    ├── metadata                 # Auxiliary datasets annotations
    └── README.md                # General data information
```
# Track Overview
This track consists of two tasks, explained in more detail below:

1. Participants will develop effective automatic open-ended and multilingual dialogue evaluation metrics that perform similarly when evaluated over a new language.
2. Participants will develop effective automatic open-ended dialogue evaluation metrics that perform robustly when evaluated over back-translated/paraphrased sentences in English.
For both tasks, proposed metrics are expected to show the following two important properties as indicated in (Deriu et al., 2019):
* Correlated to human judgments - the metrics should produce evaluation scores that correlate well with human judgments (scores) across multiple languages or alternative responses (i.e., back-translated or paraphrased).
* Explainable - the metrics should provide constructive and explicit feedback to the generative models in terms of the quality of their generated responses. For instance, if a generative model is contradicting itself, the evaluation metrics should signal such behavior to the generative models.
Participants can propose their own metric or optionally improve two baseline evaluation metrics: MDD-Eval (Zhang et al., 2021) or Deep AM-FM (Zhang et al., 2020). A leaderboard on the ChatEval platform will be provided, allowing participants to check their progress.
For each evaluation task, Spearman correlation will be computed to compare the proposed evaluation metrics against human judgments. A final average score will be calculated to rank the submitted evaluation metrics.
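The ranking computation can be sketched in pure Python as a rank transform followed by Pearson correlation on the ranks; the score vectors below are illustrative, not challenge data.

```python
# Pure-Python Spearman correlation (average ranks for ties, then Pearson on
# the ranks), the statistic used to compare metric outputs with human scores.
def _ranks(xs):
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    ranks = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        # Group tied values and give them their average (1-based) rank.
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman(a, b):
    ra, rb = _ranks(a), _ranks(b)
    n = len(ra)
    ma, mb = sum(ra) / n, sum(rb) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(ra, rb))
    sa = sum((x - ma) ** 2 for x in ra) ** 0.5
    sb = sum((y - mb) ** 2 for y in rb) ** 0.5
    return cov / (sa * sb)

human = [4.0, 2.0, 5.0, 1.0, 3.0]   # illustrative human judgments
metric = [0.8, 0.3, 0.9, 0.1, 0.5]  # illustrative metric scores
print(round(spearman(human, metric), 3))  # same ordering -> 1.0
```

In practice `scipy.stats.spearmanr` gives the same result; the hand-rolled version just makes the rank transform explicit.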
For more details:
* [Provided datasets](./dstc11/track4-provided-datasets.md)
* [Datasets format](./dstc11/track4-datasets-format.md)
* [Task 1: Multilingual Automatic Evaluation Metrics](./dstc11/track4-task1-multilingual-metrics.md)
* [Task 2: Robust Automatic Evaluation Metrics](./dstc11/track4-task2-robust-metrics.md)
* [Baseline model](./dstc11/track4-baseline-model.md)
* [FAQ](./dstc11/track4-faq.md)
For more information check the [ChatEval](https://chateval.org/dstc11) website.
# Provided Datasets
After the organizers' participation in the [CHANEL@JSALT2020](https://github.com/CHANEL-JSALT-2020/datasets) workshop (Rudnicky et al., 2020) at Johns Hopkins University, they automatically translated back-and-forth (using the same MS Azure translation service) a total of 18 well-known human-human dialogue datasets. These datasets will be used as training data. The total amount of dialogues is 393k (approx. 3M turns).
* DBDC (Higashinaka et al., 2016)
* CMU_DoG (Zhou et al., 2018)
* Cornell Movie-Dialogs (Danescu-Niculescu-Mizil & Lee, 2011)
* DailyDialog (Li et al., 2017)
* DECODE (Nie et al., 2020)
* EmotionLines (Chen et al., 2018)
* EmpathicDialogues (Rashkin et al., 2018)
* Holl-E (Moghe et al., 2018)
* MEENA (Adiwardana et al., 2020)
* MELD (Poria et al., 2019)
* MetalWOz (Lee et al., 2019)
* Movie-DiC (Banchs, 2012)
* PersonaChat (Zhang et al., 2018)
* SentimentLIAR (Upadhayay & Behzadan, 2020)
* Switchboard Coherence (Cervone & Riccardi, 2020)
* Topical-Chat (Gopalakrishnan et al., 2019)
* Wizard of Wikipedia (Dinan et al., 2019)
* Wochat (D'Haro et al., 2016)
As the development set, organizers will provide the following datasets, identified during the [DSTC10 Track 5](https://chateval.org/dstc10) (Zhang et al., 2021), which sum to more than 35k turn-level human annotations and have been automatically translated into Spanish and Chinese, then back-translated into English using [MS Azure](https://azure.microsoft.com/en-us/products/cognitive-services/translator/) services.
* CONVAI2-GRADE (CG) (Huang et al., 2020)
* DAILYDIALOG-GRADE (DH) (Huang et al., 2020)
* DAILYDIALOG-GUPTA (DG) (Gupta et al., 2019)
* DAILYDIALOG-ZHAO (DZ) (Zhao et al., 2020)
* DSTC7 (D7) (Galley et al., 2019)
* EMPATHETIC-GRADE (EG) (Huang et al., 2020)
* FED-DIAL (FD) (Mehri & Eskenazi, 2020b)
* FED-TURN (FT) (Mehri & Eskenazi, 2020b)
* HUMOD (HM) (Merdivan et al., 2020)
* PERSONA-SEE (PS) (See et al., 2019)
* PERSONA-USR (PU) (Mehri & Eskenazi, 2020a)
* PERSONA-ZHAO (PZ) (Zhao et al., 2020)
* TOPICAL-USR (TU) (Mehri & Eskenazi, 2020a)
This development data can help participants to check the multilingualism or robustness capabilities of their trained models in terms of correlations with human-annotations. Additional databases, not mentioned here, will be added when available to increase the size of the benchmarking.
Moreover, the datasets provided by the [THU-COAI](https://github.com/thu-coai) group (Conversational AI group from Tsinghua University) will be used, naming this set of data CDial. They contain open-domain human-human dialogs, originally in Chinese, comprising 3,470 dialogs (approx. 130k turns).
* ECM (Zhou et al., 2018)
* KdConv (Zhou et al., 2020)
* LCCC (Wang et al., 2020)
In addition, we will provide the same datasets (CHANEL@JSALT2020 and CDial) translated into Chinese using the SotA [Tencent MT](https://www.tencentcloud.com/products/tmt) system.
These datasets will be provided to participants, together with automatic meta-data information (machine translation Quality Estimation (QE), toxicity, and sentiment analysis) for filtering and dialogue curation purposes, so the participants have a better reference of the dataset quality, being of great help for them to decide whether or not to use these translations/paraphrases in the training of their evaluation models, and optionally fine-tune multilingual pre-trained models allowing better performance on the proposed dialogue-oriented tasks.
Since the quality of the back-translated sentences can play an important role in estimating the metric scores. QE metric scores will be given to the participants using our QE system and other existing models (e.g., [COMET](https://github.com/Unbabel/COMET) (Rei et al., 2020)). This information will be given to participants so they can optionally use it for discarding dialogues or turns that do not show high quality when training their metrics. Participants will be welcome to use the data and ideas from the MT field to propose QE metrics that can, optionally, be included to provide final scores. Finally, the organizers may provide new translated dialogue datasets to allow participants to create more robust and better-trained systems.
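Such QE-based filtering can be sketched as a simple threshold filter. The field names (`qe_score`, `text`) and the threshold value are assumptions for illustration, not the organizers' actual pipeline:

```python
# Keep only translated turns whose quality-estimation score clears a threshold
# before using them for metric training (field names are hypothetical).
def filter_by_qe(turns, threshold=0.6):
    return [t for t in turns if t["qe_score"] >= threshold]

turns = [
    {"text": "How are you?", "qe_score": 0.93},
    {"text": "Garbled translation", "qe_score": 0.41},
]
kept = filter_by_qe(turns)
print(len(kept))  # 1
```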
Regarding the paraphrases, all the original English sentences of each dataset will have multiple paraphrases, as well as annotations so that each participant can evaluate the quality of each paraphrase. The model used will be [PARROT](https://github.com/jsedoc/Parrot_Paraphraser) (Damodaran P., 2021).
Additionally, \~3k random H-H turns (\~1k dialogues) of CDial in Chinese were manually annotated by Tencent AI. Also, \~5k new H-C Chinese turns (\~500 dialogues) were generated with three different SotA chatbots (Tencent's model, Microsoft's Xiaoice (Zhou et al., 2020) and Baidu's Plato (Bao et al., 2019)). Both turn-level and dialog-level annotations were manually annotated by Tencent AI.
During the test phase, a new set of 2k turn-level (\~700 dialog-level) manually curated multilingual corpus (Spanish and Chinese) along with their turn-level and dialog-level human evaluation annotations will be provided to participants to test models for both tasks. This corpus will be manually checked to guarantee its quality and high correlation with the original dialogues.
Furthermore, in order to check the generalization capabilities of the proposed metrics from the participant, the test data will include a new dataset of human-chatbot interactions with \~2k turns (~60 dialogues).
## Datasets Summary
| Datasets<br/>Name | CHANEL | DSTC10 | CDIAL |
| --- | :---: | :----: | :---: |
| # Datasets | 18 | 7 | 3 |
| Language | English, Spanish/Chinese translations,<br/>and English back-translation | English, Spanish/Chinese translations,<br/>and English back-translation | Chinese and English translations |
| Dialogues Type | Human-Human Open-Domain | Human-Chatbot Open-Domain | Human-Human Open-Domain |
| # Dialogues/<br/>Utterances | + 390.000 / + 3.000.000 | + 3.000 / + 60.000 | + 3.470 / + 130.000 |
| Annotations | Sentiment analysis and Toxicity | Sentiment analysis and Toxicity<br/>Turn/dialogue level human scores | Turn/dialogue level human scores |
| Task 1 Set | Train | Dev, Test | Train, Dev, Test |
| Task 2 Set | Train | Dev, Test | — |
## Datasets Statistics
| Name | #Turns | #Dialogues | Average Turn/Dial | Average Words/Turn | Annotation Granularity | Original Language | Translation |
| --- | :---: | :---: | :---: | :---: | :---: | :---: | :---: |
| **Train** | | | | | | | |
DBDC (Higashinaka et al., 2016) | 8,509 | 415 | 20.5 | 7.31 | Turn | En | Zh/Es |
CMU_DoG (Zhou et al., 2018c) | 95,305 | 4,221 | 22.58 | 17.93 | Turn | En | Zh/Es |
Cornell Movie-Dialogs (Danescu-Niculescu-Mizil and Lee, 2011) | 304,713 | 83,097 | 3.67 | 13.72 | Turn | En | Zh/Es |
DailyDialog (Li et al., 2017) | 102,960 | 13,116 | 7.85 | 13.96 | Turn | En | Zh/Es |
DECODE (Nie et al., 2020) | 296,105 | 35,426 | 8.36 | 15.05 | Turn | En | Zh/Es |
EmotionLines (Hsu et al., 2018) | 14,503 | 1,000 | 14.50 | 10.53 | Turn | En | Zh/Es |
EmpathicDialogues (Rashkin et al., 2019) | 107,220 | 24,850 | 4.31 | 15.88 | Turn | En | Zh/Es |
Holl-E (Moghe et al., 2018) | 91,452 | 9,071 | 10.08 | 17.74 | Turn | En | Zh/Es |
MEENA (Adiwardana et al., 2020) | 3,675 | 193 | 19.04 | 9.14 | Turn | En | Zh/Es |
MELD (Poria et al., 2019) | 23,197 | 1,592 | 14.57 | 10.98 | Turn | En | Zh/Es |
MetalWOz (Lee et al., 2019) | 432,036 | 37,884 | 11.40 | 8.47 | Turn | En | Zh/Es |
Movie-DiC (Banchs, 2012) | 512,582 | 65,215 | 7.86 | 13.82 | Turn | En | Zh/Es |
PersonaChat (Zhang et al., 2018a) | 162,064 | 10,907 | 14.86 | 11.72 | Turn | En | Zh/Es |
SentimentLIAR (Upadhayay and Behzadan, 2020) | 12,781 | 12,781 | 1.00 | 20.16 | Turn | En | Zh/Es |
Switchboard Coherence (Cervone and Riccardi, 2020) | 12,059 | 1,000 | 12.06 | 20.55 | Turn | En | Zh/Es |
Topical-Chat (Gopalakrishnan et al., 2019) | 235,281 | 10,784 | 21.82 | 23.23 | Turn | En | Zh/Es |
Wizard of Wikipedia (Dinan et al., 2019) | 201,999 | 22,311 | 9.05 | 18.83 | Turn | En | Zh/Es |
Wochat (Haro et al., 2016) | 19,881 | 607 | 32.75 | 6.75 | Turn | En | Zh/Es |
| --- | --- | --- | --- | --- | --- | --- | --- |
Total | 2,636,322 | 334,470 | 236.26 | 255.77 | | |
| --- | --- | --- | --- | --- | --- | --- | --- |
| **Development** | | | | | | | |
ConvAI2-GRADE (Huang et al., 2020) | 1,800 | 600 | 3.0 | 12.07 | Turn | En | Zh/Es |
DailyDialog-GRADE (Huang et al., 2020) | 900 | 300 | 3.0 | 12.60 | Turn | En | Zh/Es |
DailyDialog-GUPTA (Gupta et al., 2019) | 2,460 | 500 | 4.92 | 12.37 | Turn | En | Zh/Es |
DailyDialog-ZHAO (Zhao et al., 2020) | 4,248 | 900 | 4.72 | 12.41 | Turn | En | Zh/Es |
DSTC7 (Galley et al., 2019) | 34,650 | 9,990 | 3.47 | 15.39 | Turn | En | Zh/Es |
Empathetic-GRADE (Huang et al., 2020) | 900 | 300 | 3.0 | 16.65 | Turn | En | Zh/Es |
FED-Dial (Mehri and Eskenazi, 2020a) | 1,715 | 125 | 13.72 | 11.1 | Dial | En | Zh/Es |
FED-Turn (Mehri and Eskenazi, 2020a) | 3,888 | 375 | 10.37 | 10.78 | Turn | En | Zh/Es |
HUMOD (Merdivan et al., 2020) | 37,468 | 9,499 | 3.94 | 7.97 | Turn | En | Zh/Es |
Persona-SEE (See et al., 2019) | 39,792 | 3,316 | 12.0 | 9.0 | Dial | En | Zh/Es |
PersonaChat-USR (Mehri and Eskenazi, 2020b) | 2,790 | 300 | 9.3 | 12.08 | Turn | En | Zh/Es |
PersonaChat-ZHAO (Zhao et al., 2020) | 4,614 | 900 | 5.13 | 12.06 | Turn | En | Zh/Es |
TOPICAL-USR (Mehri and Eskenazi, 2020b) | 4,032 | 360 | 11.2 | 23.16 | Turn | En | Zh/Es |
ECM-Eval (Zhou et al., 2018a) | 3,004 | 1,502 | 2.0 | 13.13 | Turn | Zh | En |
KdConv-Eval (Zhou et al., 2020a) | 3,499 | 354 | 9.88 | 21.11 | Turn | Zh | En |
LCCC-Eval (Wang et al., 2020a) | 3,009 | 589 | 5.11 | 11.72 | Turn | Zh | En |
| --- | --- | --- | --- | --- | --- | --- | --- |
Total | 148,769 | 29,910 | 104.76 | 212.64 | | |
| --- | --- | --- | --- | --- | --- | --- | --- |
| **Test** | | | | | | | |
BlenderBot3 (Giorgi et al., 2023; Shuster et al., 2022) | 679 | 21 | 32.33 | 16.96 | Turn/Dial | En | Zh/Es |
ChatGPT (Giorgi et al., 2023; Radford et al., 2018) | 462 | 21 | 22 | 91.07 | Turn/Dial | En | Zh/Es |
GPT-3.5 (Giorgi et al., 2023; Brown et al., 2020) | 560 | 17 | 32.94 | 23.73 | Turn/Dial | En | Zh/Es |
HCChinese | 2,017 | 187 | 10.79 | 8.08 | Turn/Dial | Zh | En |
ChatEval (Sedoc et al., 2019) | 400 | 200 | 2 | 8.13 | Turn | En | Zh/Es |
DSTC10 (Zhang et al., 2022c) | 112 | 28 | 4 | 14 | Turn | En | Zh/Es |
JSALT (Rudnicky et al., 2020) | 46 | 13 | 3.54 | 17.26 | Turn | En | Zh/Es |
| --- | --- | --- | --- | --- | --- | --- | --- |
Total | 4,276 | 487 | 107.60 | 179.23 | | |
| --- | --- | --- | --- | --- | --- | --- | --- |
## Datasets Information
CHANEL dataset. The source language is English.
| CHANEL | Spanish<br/>Translation | Chinese<br/>Translation | English<br/>Translation | English<br/>Back-translation | Paraphrases | Sentiment<br/>Analysis | Content<br/>Moderate | Human<br/>Annotations | Annotation<br/>Granularity |
| --- | :-: | :-: | :-: | :-: | :-: | :-: | :-: | :-: | :-: |
| DBDC | ✔ | | | ✔ | ✔ | ✔ | ✔ | | Turn-level |
| CMU_DoG | ✔ | | | ✔ | ✔ | ✔ | ✔ | | Turn-level |
| Cornell Movie-Dialogs | ✔ | | | ✔ | ✔ | ✔ | ✔ | | Turn-level |
| DailyDialog | ✔ | ✔ | | ✔ | ✔ | ✔ | ✔ | | Turn-level |
| DECODE | ✔ | | | ✔ | ✔ | ✔ | ✔ | | Turn-level |
| EmotionLines | ✔ | | | ✔ | ✔ | ✔ | ✔ | | Turn-level |
| EmpathicDialogues | ✔ | ✔ | | ✔ | ✔ | ✔ | ✔ | | Turn-level |
| Holl-E | ✔ | | | ✔ | ✔ | ✔ | ✔ | | Turn-level |
| MEENA | ✔ | | | ✔ | ✔ | ✔ | ✔ | | Turn-level |
| MELD | ✔ | | | ✔ | ✔ | ✔ | ✔ | | Turn-level |
| MetalWOz | ✔ | | | ✔ | ✔ | ✔ | ✔ | | Turn-level |
| Movie-DiC | ✔ | | | ✔ | ✔ | ✔ | ✔ | | Turn-level |
| PersonaChat | ✔ | ✔ | | ✔ | ✔ | ✔ | ✔ | | Turn-level |
| SentimentLIAR | ✔ | | | ✔ | ✔ | ✔ | ✔ | | Turn-level |
| Switchboard Coherence | ✔ | | | ✔ | ✔ | ✔ | ✔ | | Turn-level |
| Topical-Chat | ✔ | ✔ | | ✔ | ✔ | ✔ | ✔ | | Turn-level |
| Wizard of Wikipedia | ✔ | ✔ | | ✔ | ✔ | ✔ | ✔ | | Turn-level |
| WOCHAT | ✔ | | | ✔ | ✔ | ✔ | ✔ | | Turn-level |
DSTC10 dataset. The source language is English.
| DSTC10 | Spanish<br/>Translation | Chinese<br/>Translation | English<br/>Translation | English<br/>Back-translation | Paraphrases | Sentiment<br/>Analysis | Content<br/>Moderate | Human<br/>Annotations | Annotation<br/>Granularity |
| --- | :-: | :-: | :-: | :-: | :-: | :-: | :-: | :-: | :-: |
| CONVAI2-GRADE (CG) | ✔ | ✔ | | ✔ | ✔ | ✔ | ✔ | ✔ | Turn-level |
| DAILYDIALOG-GRADE (DH) | ✔ | ✔ | | ✔ | ✔ | ✔ | ✔ | ✔ | Turn-level |
| DAILYDIALOG-GUPTA (DG) | ✔ | ✔ | | ✔ | ✔ | ✔ | ✔ | ✔ | Turn-level |
| DAILYDIALOG-ZHAO (DZ) | ✔ | ✔ | | ✔ | ✔ | ✔ | ✔ | ✔ | Turn-level |
| DSTC7 (D7) | ✔ | ✔ | | ✔ | ✔ | ✔ | ✔ | ✔ | Turn-level |
| EMPATHETIC-GRADE (EG) | ✔ | ✔ | | ✔ | ✔ | ✔ | ✔ | ✔ | Turn-level |
| FED-DIAL (FD) | ✔ | ✔ | | ✔ | ✔ | ✔ | ✔ | ✔ | Dialogue-level |
| FED-TURN (FT) | ✔ | ✔ | | ✔ | ✔ | ✔ | ✔ | ✔ | Turn-level |
| HUMOD (HU) | ✔ | ✔ | | ✔ | ✔ | ✔ | ✔ | ✔ | Turn-level |
| PERSONA-SEE (PS) | ✔ | ✔ | | ✔ | ✔ | ✔ | ✔ | ✔ | Dialogue-level |
| PERSONA-USR (PU) | ✔ | ✔ | | ✔ | ✔ | ✔ | ✔ | ✔ | Turn-level |
| PERSONA-ZHAO (PZ) | ✔ | ✔ | | ✔ | ✔ | ✔ | ✔ | ✔ | Turn-level |
| TOPICAL-USR (TU) | ✔ | ✔ | | ✔ | ✔ | ✔ | ✔ | ✔ | Turn-level |
CDIAL dataset. The source language is Chinese.
| CDIAL | Spanish<br/>Translation | Chinese<br/>Translation | English<br/>Translation | English<br/>Back-translation | Paraphrases | Sentiment<br/>Analysis | Content<br/>Moderate | Human<br/>Annotations |
| --- | :-: | :-: | :-: | :-: | :-: | :-: | :-: | :-: |
| ECM | | | ✔ | | | | | ✔ |
| KDCONV | | | ✔ | | | | | ✔ |
| LCCC | | | ✔ | | | | | ✔ |
## Data Format
All data given follows the [Data Formats](./dstc11/track4-datasets-format.md) which provides guidelines on how to store, maintain and handle dialogue corpora.
## Dimensions Evaluation
Considering the annotations available in the development data, the test data will have the following dimensions (annotations) to evaluate in both Task 1 (English, Chinese and Spanish) and Task 2:
* **Turn-level**: Appropriateness, Content Richness, Grammatical Correctness and Relevance
* **Dialogue-level**: Coherence, Engageness/Likeability, Informativeness and Overall.
The annotations will be evaluated and indicated individually (dimension by dimension), discriminating by dataset and language. In addition, a global score will be estimated by grouping all dimensions. This global value will be calculated separately at turn-level and dialogue-level for each task.
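The global score described above can be sketched as a plain average over per-dimension correlations; the dimension names follow the turn-level list below, and the correlation values are made up:

```python
# Aggregate illustrative per-dimension Spearman correlations into the single
# global turn-level number used for ranking (values are hypothetical).
turn_level = {
    "appropriateness": 0.52,
    "content_richness": 0.47,
    "grammatical_correctness": 0.61,
    "relevance": 0.55,
}
global_turn_score = sum(turn_level.values()) / len(turn_level)
print(round(global_turn_score, 4))  # 0.5375
```

The same averaging would be repeated over the dialogue-level dimensions to get the dialogue-level global score.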
A brief description of each dimension (Mehri et al., 2022) is shown below.
Turn-level:
* **Appropriateness** - The response is appropriate given the preceding dialogue.
* **Content Richness** - The response is informative, with long sentences including multiple entities and conceptual or emotional words.
* **Grammatical Correctness** - Responses are free of grammatical and semantic errors.
* **Relevance** - Responses are on-topic with the immediate dialog history.
Dialogue-level:
* **Coherence** - Throughout the dialogue, the system maintains a good conversation flow.
* **Engageness/Likeability** - Throughout the dialogue, the system displays a likeable personality.
* **Informativeness** - Throughout the dialog, the system provides unique and non-generic information.
* **Overall** - The overall quality of and satisfaction with the dialog.
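The per-dimension scores and the grouped global score can be sketched as rank correlations between a metric's predictions and the human annotations. This is a hedged sketch: the use of Spearman correlation and the sample score arrays below are illustrative assumptions, not the official evaluation script.

```python
def rankdata(xs):
    """1-based ranks, with ties assigned their average rank."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    ranks = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = (i + j) / 2.0 + 1.0
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman(human, metric):
    """Spearman correlation between human annotations and metric scores."""
    rh, rm = rankdata(human), rankdata(metric)
    n = len(rh)
    mh, mm = sum(rh) / n, sum(rm) / n
    cov = sum((a - mh) * (b - mm) for a, b in zip(rh, rm))
    var = (sum((a - mh) ** 2 for a in rh) *
           sum((b - mm) ** 2 for b in rm)) ** 0.5
    return cov / var

# Hypothetical (human, metric) score pairs for two turn-level dimensions
scores = {
    "appropriateness": ([4, 2, 5, 1, 3], [0.9, 0.3, 0.8, 0.2, 0.6]),
    "relevance":       ([5, 1, 4, 2, 3], [0.7, 0.1, 0.9, 0.4, 0.5]),
}
per_dim = {d: spearman(h, m) for d, (h, m) in scores.items()}
global_score = sum(per_dim.values()) / len(per_dim)  # grouped across dimensions
```

A submission would then be compared dimension by dimension, with the grouped value giving a single summary number per task.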
# Organizers
* Mario Rodríguez-Cantelar (Universidad Politécnica de Madrid, Spain)
* Chen Zhang (National University of Singapore, Singapore)
* Chengguang Tang (Tencent AI Lab, China)
* Ke Shi (Tencent AI Lab, China)
* Sarik Ghazarian (University of Southern California, USA)
* João Sedoc (New York University, USA)
* Luis F. D'Haro (Universidad Politécnica de Madrid, Spain)
* Alexander Rudnicky (Carnegie Mellon University, USA)
# Citation
Please cite the paper, code or data from DSTC 11 Track 4:
```bibtex
@inproceedings{rodriguezcantelar2023dstc11t4,
author = "Mario Rodríguez-Cantelar and Chen Zhang and Chengguang Tang and Ke Shi and Sarik Ghazarian and João Sedoc and Luis Fernando D'Haro and Alexander Rudnicky",
title = "Overview of Robust and Multilingual Automatic Evaluation Metrics for Open-Domain Dialogue Systems at DSTC 11 Track 4",
booktitle = "DSTC11: The Eleventh Dialog System Technology Challenge",
series = "24th Meeting of the Special Interest Group on Discourse and Dialogue (SIGDIAL)",
year = 2023,
month = "September",
address = "Prague, Czechia"
}
```
# Acknowledgement
This research project is supported by the Comunidad de Madrid through the call Research Grants for Young Investigators from Universidad Politécnica de Madrid (GENIUS:APOYO-JOVENES-21-TAXTYC-32-K61X37).
This work is supported by project BEWORD (PID2021-126061OB-C43) funded by MCIN/AEI/10.13039/501100011033 and, as appropriate, by “ERDF A way of making Europe”, by the “European Union”, and by Programa Propio - Proyectos Semilla: Universidad Politécnica de Madrid (VSEMILLA22LFHE).
We gratefully acknowledge the valuable efforts of Tencent AI Lab, which supported the Chinese translation and annotation of the datasets with funding and infrastructure.
Thanks to THU-CoAI (Conversational AI groups from Tsinghua University) for providing their Chinese datasets as part of the challenge data.
Thanks to Unbabel for providing the COMET MTQE scores annotations as part of the challenge data. This contribution was supported by national funds through *Fundação para a Ciência e a Tecnologia* (FCT) with references PRT/BD/152198/2021 and UIDB/50021/2020, and by the P2020 program MAIA led by Unbabel (LISBOA-01-0247-FEDER-045909).
We also want to give thanks to MS Azure services (especially to Irving Kwong) for their sponsorship to continue processing new datasets that could be interesting for the dialogue community.
This research project is supported by the NYU ChatEval Team led by João Sedoc.
This research project is supported in part by a grant from Amazon to Alexander Rudnicky, Carnegie Mellon University.
Thanks to Karthik Ganesan, Sarik Ghazarian, James Hagerty, Zhang Chen and Alex Rudnicky for developing the baseline model as part of the challenge tasks.
This work is supported by the European Commission through Project ASTOUND (101071191 — HORIZON-EIC-2021-PATHFINDERCHALLENGES-01).

# References
Deriu, J., Rodrigo, A., Otegi, A., Echegoyen, G., Rosset, S., Agirre, E., & Cieliebak, M. (2020). Survey on evaluation methods for dialogue systems. Artificial Intelligence Review, 1-56.
Zhang, C., D'Haro, L. F., Friedrichs, T., & Li, H. (2021). MDD-Eval: Self-Training on Augmented Data for Multi-Domain Dialogue Evaluation. arXiv preprint arXiv:2112.07194.
Zhang, C., D'Haro, L. F., Banchs, R. E., Friedrichs, T., & Li, H. (2020). Deep AM-FM: Toolkit for Automatic Dialogue Evaluation. In Conversational Dialogue Systems for the Next Decade (pp. 53-69). Springer, Singapore.
Zhang, C., Sedoc, J., D'Haro, L. F., Banchs, R., & Rudnicky, A. (2021). Automatic Evaluation and Moderation of Open-domain Dialogue Systems. arXiv preprint arXiv:2111.02110.
Hori, C., & Hori, T. (2017). End-to-end conversation modeling track in DSTC6. arXiv preprint arXiv:1706.07440.
Galley, M., Brockett, C., Gao, X., Gao, J., & Dolan, B. (2019). Grounded response generation task at DSTC7. In AAAI Dialog System Technology Challenges Workshop.
See, A., Roller, S., Kiela, D., & Weston, J. (2019). What makes a good conversation? how controllable attributes affect human judgments. arXiv preprint arXiv:1902.08654.
Sedoc, J., Ippolito, D., Kirubarajan, A., Thirani, J., Ungar, L., & Callison-Burch, C. (2019, June). Chateval: A tool for chatbot evaluation. In Proceedings of the 2019 conference of the North American chapter of the association for computational linguistics (demonstrations) (pp. 60-65).
Vinyals, O., & Le, Q. (2015). A neural conversational model. arXiv preprint arXiv:1506.05869.
Lee, S., Lim, H., & Sedoc, J. (2020). An evaluation protocol for generative conversational systems. arXiv preprint arXiv:2010.12741.
Mehri, S., & Eskenazi, M. (2020). USR: An Unsupervised and Reference Free Evaluation Metric for Dialog Generation. arXiv preprint arXiv:2005.00456.
Mehri, S., & Eskenazi, M. (2020, July). Unsupervised Evaluation of Interactive Dialog with DialoGPT. In Proc. of the 21st Annual Meeting of the Special Interest Group on Discourse and Dialogue (pp. 225-235).
Rudnicky, A., Banchs, R., D'Haro, L. F., Sedoc, J., Chen, Z., Rodríguez-Cantelar, M., Koh, A., & others. (2020). CHANEL-Metrics: Chat/Dialogue Modeling and Evaluation report. In 2020 Seventh Frederick Jelinek Memorial Summer Workshop.
Higashinaka, R., Funakoshi, K., Kobayashi, Y., & Inaba, M. (2016, May). The dialogue breakdown detection challenge: Task description, datasets, and evaluation metrics. In Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC'16) (pp. 3146-3150).
Zhou, K., Prabhumoye, S., & Black, A. W. (2018). A dataset for document grounded conversations. arXiv preprint arXiv:1809.07358.
Danescu-Niculescu-Mizil, C., & Lee, L. (2011). Chameleons in imagined conversations: A new approach to understanding coordination of linguistic style in dialogs. arXiv preprint arXiv:1106.3077.
Li, Y., Su, H., Shen, X., Li, W., Cao, Z., & Niu, S. (2017). DailyDialog: A manually labelled multi-turn dialogue dataset. arXiv preprint arXiv:1710.03957.
Nie, Y., Williamson, M., Bansal, M., Kiela, D., & Weston, J. (2020). I like fish, especially dolphins: Addressing Contradictions in Dialogue Modeling. arXiv preprint arXiv:2012.13391.
Chen, S. Y., Hsu, C. C., Kuo, C. C., & Ku, L. W. (2018). Emotionlines: An emotion corpus of multi-party conversations. arXiv preprint arXiv:1802.08379.
Rashkin, H., Smith, E. M., Li, M., & Boureau, Y. L. (2018). Towards empathetic open-domain conversation models: A new benchmark and dataset. arXiv preprint arXiv:1811.00207.
Moghe, N., Arora, S., Banerjee, S., & Khapra, M. M. (2018). Towards exploiting background knowledge for building conversation systems. arXiv preprint arXiv:1809.08205.
Adiwardana, D., Luong, M. T., So, D. R., Hall, J., Fiedel, N., Thoppilan, R., ... & Le, Q. V. (2020). Towards a human-like open-domain chatbot. arXiv preprint arXiv:2001.09977.
Poria, S., Hazarika, D., Majumder, N., Naik, G., Cambria, E., & Mihalcea, R. (2018). MELD: A multimodal multi-party dataset for emotion recognition in conversations. arXiv preprint arXiv:1810.02508.
Lee, S., Schulz, H., Atkinson, A., Gao, J., Suleman, K., El Asri, L., ... & Li, X. (2019). Multi-domain task-completion dialog challenge. Dialog system technology challenges, 8(9).
Banchs, R. E. (2012, July). Movie-DiC: a movie dialogue corpus for research and development. In Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers) (pp. 203-207).
Zhang, S., Dinan, E., Urbanek, J., Szlam, A., Kiela, D., & Weston, J. (2018). Personalizing dialogue agents: I have a dog, do you have pets too?. arXiv preprint arXiv:1801.07243.
Upadhayay, B., & Behzadan, V. (2020, November). Sentimental LIAR: Extended Corpus and Deep Learning Models for Fake Claim Classification. In 2020 IEEE International Conference on Intelligence and Security Informatics (ISI) (pp. 1-6). IEEE.
Cervone, A., & Riccardi, G. (2020). Is this dialogue coherent? learning from dialogue acts and entities. arXiv preprint arXiv:2006.10157.
Gopalakrishnan, K., Hedayatnia, B., Chen, Q., Gottardi, A., Kwatra, S., Venkatesh, A., ... & AI, A. A. (2019, January). Topical-Chat: Towards Knowledge-Grounded Open-Domain Conversations. In INTERSPEECH (pp. 1891-1895).
Dinan, E., Roller, S., Shuster, K., Fan, A., Auli, M., & Weston, J. (2018). Wizard of wikipedia: Knowledge-powered conversational agents. arXiv preprint arXiv:1811.01241.
D'Haro, L. F., Shawar, B. A., & Yu, Z. (2016). REWOCHAT 2016–Shared task description report. In Proceedings of the workshop on collecting and generating resources for chatbots and conversational agents-development and evaluation (RE-WOCHAT) (p. 39).
Zhou, H., Huang, M., Zhang, T., Zhu, X., & Liu, B. (2018, April). Emotional chatting machine: Emotional conversation generation with internal and external memory. In Proceedings of the AAAI Conference on Artificial Intelligence (Vol. 32, No. 1).
Zhou, H., Zheng, C., Huang, K., Huang, M., & Zhu, X. (2020). KdConv: A Chinese multi-domain dialogue dataset towards multi-turn knowledge-driven conversation. arXiv preprint arXiv:2004.04100.
Wang, Y., Ke, P., Zheng, Y., Huang, K., Jiang, Y., Zhu, X., & Huang, M. (2020, October). A large-scale Chinese short-text conversation dataset. In CCF International Conference on Natural Language Processing and Chinese Computing (pp. 91-103). Springer, Cham.
Rei, R., Stewart, C., Farinha, A. C., & Lavie, A. (2020). COMET: A neural framework for MT evaluation. arXiv preprint arXiv:2009.09025.
Damodaran, P. (2021). Parrot: Paraphrase generation for NLU.
Zhou, L., Gao, J., Li, D., & Shum, H. Y. (2020). The design and implementation of xiaoice, an empathetic social chatbot. Computational Linguistics, 46(1), 53-93.
Bao, S., He, H., Wang, F., Wu, H., & Wang, H. (2019). Plato: Pre-trained dialogue generation model with discrete latent variable. arXiv preprint arXiv:1910.07931.
Mehri, S., Choi, J., D'Haro, L. F., Deriu, J., Eskenazi, M., Gasic, M., ... & Zhang, C. (2022). Report from the nsf future directions workshop on automatic evaluation of dialog: Research directions and challenges. arXiv preprint arXiv:2203.10012.
manu/opus100-en-fr | 2023-09-18T16:15:31.000Z | [
"region:us"
] | manu | null | null | 0 | 9 | 2023-09-18T16:15:15 | ---
configs:
- config_name: default
data_files:
- split: test
path: data/test-*
- split: train
path: data/train-*
- split: validation
path: data/validation-*
dataset_info:
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: test
num_bytes: 479723
num_examples: 2000
- name: train
num_bytes: 206440450
num_examples: 1000000
- name: validation
num_bytes: 491476
num_examples: 2000
download_size: 148902270
dataset_size: 207411649
---
# Dataset Card for "opus100-en-fr"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
Hieu-Pham/Instructions | 2023-09-19T13:43:12.000Z | [
"region:us"
] | Hieu-Pham | null | null | 0 | 9 | 2023-09-19T13:42:36 | Entry not found
pphuc25/vanmauvip_com | 2023-09-21T07:11:48.000Z | [
"region:us"
] | pphuc25 | null | null | 0 | 9 | 2023-09-21T07:11:19 | ---
dataset_info:
features:
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 71040692
num_examples: 13390
download_size: 35161324
dataset_size: 71040692
---
# Dataset Card for "vanmauvip_com"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
jphme/wikitext_de_document_level_v01 | 2023-09-21T13:08:52.000Z | [
"region:us"
] | jphme | null | null | 0 | 9 | 2023-09-21T13:08:48 | ---
configs:
- config_name: default
data_files:
- split: test
path: data/test-*
dataset_info:
features:
- name: text
dtype: string
splits:
- name: test
num_bytes: 1860002
num_examples: 200
download_size: 1138143
dataset_size: 1860002
---
# Dataset Card for "wikitext_de_document_level_v01"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
1aurent/LC25000 | 2023-09-23T17:31:27.000Z | [
"task_categories:image-classification",
"size_categories:10K<n<100K",
"license:unlicense",
"biology",
"cancer",
"arxiv:1912.12142",
"region:us"
] | 1aurent | null | null | 0 | 9 | 2023-09-23T17:14:11 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: image
dtype: image
- name: organ
dtype:
class_label:
names:
'0': lung
'1': colon
- name: label
dtype:
class_label:
names:
'0': benign
'1': adenocarcinomas
'2': squamous carcinomas
splits:
- name: train
num_bytes: 1581800190
num_examples: 25000
download_size: 1125348716
dataset_size: 1581800190
tags:
- biology
- cancer
size_categories:
- 10K<n<100K
license: unlicense
task_categories:
- image-classification
---
[](https://doi.org/10.48550/arXiv.1912.12142)
# LC25000: Lung and colon histopathological image dataset
**Homepage**: https://github.com/tampapath/lung_colon_image_set/ \
**Publication Date**: 2019-12-14 \
**Citation**:
```bibtex
@misc{borkowski2019lung,
title = {Lung and Colon Cancer Histopathological Image Dataset (LC25000)},
author = {Andrew A. Borkowski and Marilyn M. Bui and L. Brannon Thomas and Catherine P. Wilson and Lauren A. DeLand and Stephen M. Mastorides},
year = {2019},
eprint = {1912.12142},
archiveprefix = {arXiv},
primaryclass = {eess.IV}
}
```
The dataset contains 25,000 color images with 5 classes of 5,000 images each. All images are 768 x 768 pixels in size and are in jpeg file format.
ASIRI25/cdrgen | 2023-10-15T16:42:00.000Z | [
"region:us"
] | ASIRI25 | null | null | 0 | 9 | 2023-09-23T20:55:41 | Entry not found
sankettgorey/donut_4 | 2023-09-24T17:17:44.000Z | [
"region:us"
] | sankettgorey | null | null | 0 | 9 | 2023-09-24T17:13:47 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
dataset_info:
features:
- name: image
dtype: image
- name: ground_truth
dtype: string
splits:
- name: train
num_bytes: 168398938.91680533
num_examples: 540
- name: validation
num_bytes: 8579406.106489185
num_examples: 30
- name: test
num_bytes: 9378162.976705492
num_examples: 31
download_size: 13900408
dataset_size: 186356508.0
---
# Dataset Card for "donut_4"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
yirenlu/heroicons | 2023-09-26T23:11:38.000Z | [
"region:us"
] | yirenlu | null | null | 0 | 9 | 2023-09-25T19:55:57 | ---
dataset_info:
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 4277197.0
num_examples: 292
download_size: 4220955
dataset_size: 4277197.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "heroicons"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
Illia56/Military-Aircraft-Detection | 2023-09-28T05:40:58.000Z | [
"task_categories:object-detection",
"task_categories:zero-shot-classification",
"task_categories:zero-shot-image-classification",
"task_categories:depth-estimation",
"task_categories:image-classification",
"task_categories:image-segmentation",
"size_categories:1M<n<10M",
"license:apache-2.0",
"Image... | Illia56 | null | null | 1 | 9 | 2023-09-27T20:26:04 | ---
license: apache-2.0
task_categories:
- object-detection
- zero-shot-classification
- zero-shot-image-classification
- depth-estimation
- image-classification
- image-segmentation
tags:
- Image
- 'Computer Vision '
- Military
- Aviation
- Engineering
size_categories:
- 1M<n<10M
---
Dataset for object detection of military aircraft
bounding box in PASCAL VOC format (xmin, ymin, xmax, ymax)
43 aircraft types
(A-10, A-400M, AG-600, AV-8B, B-1, B-2, B-52 Be-200, C-130, C-17, C-2, C-5, E-2, E-7, EF-2000, F-117, F-14, F-15, F-16, F/A-18, F-22, F-35, F-4, J-20, JAS-39, MQ-9, Mig-31, Mirage2000, P-3(CP-140), RQ-4, Rafale, SR-71(may contain A-12), Su-34, Su-57, Tornado, Tu-160, Tu-95(Tu-142), U-2, US-2(US-1A Kai), V-22, Vulcan, XB-70, YF-23)
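The PASCAL VOC corner convention above differs from the COCO-style (x, y, width, height) layout used by many detection toolkits; a minimal conversion sketch (the sample coordinates are hypothetical):

```python
def voc_to_coco(xmin, ymin, xmax, ymax):
    """Convert a PASCAL VOC box (corner coordinates) to COCO (x, y, w, h)."""
    return (xmin, ymin, xmax - xmin, ymax - ymin)

# Hypothetical aircraft bounding box
box = voc_to_coco(120, 80, 360, 200)  # → (120, 80, 240, 120)
```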
Please let me know if you find wrong labels or duplicated images.
thomasavare/italian-dataset-deepl2 | 2023-09-27T22:06:31.000Z | [
"region:us"
] | thomasavare | null | null | 0 | 9 | 2023-09-27T22:06:29 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: english
dtype: string
- name: italian
dtype: string
- name: Class
dtype: string
- name: Class_index
dtype: float64
splits:
- name: train
num_bytes: 60782
num_examples: 500
download_size: 22544
dataset_size: 60782
---
# Dataset Card for "italian-dataset-deepl2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
jonathanasdf/MathGLM-dataset-5M | 2023-09-29T19:10:31.000Z | [
"license:afl-3.0",
"region:us"
] | jonathanasdf | null | null | 0 | 9 | 2023-09-29T18:50:11 | ---
license: afl-3.0
---
Every 10th row from https://github.com/THUDM/MathGLM (original dataset has 50M entries)
AlekseyKorshuk/rl-bench-test | 2023-10-03T18:14:07.000Z | [
"region:us"
] | AlekseyKorshuk | null | null | 0 | 9 | 2023-09-29T21:06:54 | ---
dataset_info:
features:
- name: user_name
dtype: string
- name: bot_name
dtype: string
- name: memory
dtype: string
- name: prompt
dtype: string
- name: chat_history
list:
- name: message
dtype: string
- name: sender
dtype: string
splits:
- name: train
num_bytes: 1657185
num_examples: 240
download_size: 491605
dataset_size: 1657185
---
# Dataset Card for "rl-bench-test"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
Sathvik-24/HinGlishLaama2 | 2023-10-02T07:00:41.000Z | [
"region:us"
] | Sathvik-24 | null | null | 0 | 9 | 2023-10-02T07:00:11 | Entry not found
Luciya/llama-2-nuv-intent-big-multi | 2023-10-02T10:41:23.000Z | [
"region:us"
] | Luciya | null | null | 0 | 9 | 2023-10-02T10:41:19 | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 862786
num_examples: 1563
download_size: 132778
dataset_size: 862786
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "llama-2-nuv-intent-big-multi"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
manu/english-60b | 2023-10-16T05:35:28.000Z | [
"region:us"
] | manu | null | null | 0 | 9 | 2023-10-02T16:30:52 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: text
dtype: string
- name: id
dtype: string
- name: dataset_id
dtype: string
splits:
- name: train
num_bytes: 259969046699
num_examples: 58986336
- name: test
num_bytes: 43278365
num_examples: 10000
download_size: 151705709032
dataset_size: 260012325064
---
# Dataset Card for "english_20b"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
vertigo23/luganda_english_dataset_78K | 2023-10-02T20:32:08.000Z | [
"region:us"
] | vertigo23 | null | null | 0 | 9 | 2023-10-02T20:31:22 | Entry not found
hippocrates/medQA_test | 2023-10-12T16:27:59.000Z | [
"region:us"
] | hippocrates | null | null | 0 | 9 | 2023-10-03T12:12:56 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: valid
path: data/valid-*
- split: test
path: data/test-*
dataset_info:
features:
- name: id
dtype: string
- name: query
dtype: string
- name: answer
dtype: string
- name: choices
sequence: string
- name: gold
dtype: int64
splits:
- name: train
num_bytes: 11533105
num_examples: 10178
- name: valid
num_bytes: 1440369
num_examples: 1272
- name: test
num_bytes: 1468137
num_examples: 1273
download_size: 6901097
dataset_size: 14441611
---
# Dataset Card for "medQA_test"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
klima7/pol-spider-dev | 2023-10-08T11:36:15.000Z | [
"region:us"
] | klima7 | null | null | 0 | 9 | 2023-10-03T19:56:35 | ---
configs:
- config_name: default
data_files:
- split: train_spider
path: data/train_spider-*
- split: train_others
path: data/train_others-*
- split: dev
path: data/dev-*
dataset_info:
features:
- name: query_toks_no_value
sequence: string
- name: query_en
dtype: string
- name: question_en
dtype: string
- name: db_id
dtype: string
- name: query_pl
dtype: string
- name: question_pl
dtype: string
splits:
- name: train_spider
num_bytes: 4055253
num_examples: 7000
- name: train_others
num_bytes: 1287869
num_examples: 1659
- name: dev
num_bytes: 582922
num_examples: 1034
download_size: 1152383
dataset_size: 5926044
---
# Dataset Card for "pol-spider-dev"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
Trelis/openassistant-guanaco-EOS | 2023-10-04T16:17:59.000Z | [
"size_categories:1K<n<10k",
"language:en",
"language:es",
"language:ru",
"language:de",
"language:pl",
"language:th",
"language:vi",
"language:sv",
"language:bn",
"language:da",
"language:he",
"language:it",
"language:fa",
"language:sk",
"language:id",
"language:nb",
"language:el",... | Trelis | null | null | 1 | 9 | 2023-10-04T12:28:22 | ---
license: apache-2.0
language:
- en
- es
- ru
- de
- pl
- th
- vi
- sv
- bn
- da
- he
- it
- fa
- sk
- id
- nb
- el
- nl
- hu
- eu
- zh
- eo
- ja
- ca
- cs
- bg
- fi
- pt
- tr
- ro
- ar
- uk
- gl
- fr
- ko
tags:
- human-feedback
- llama-2
size_categories:
- 1K<n<10k
pretty_name: Filtered OpenAssistant Conversations
---
# Chat Fine-tuning Dataset - Guanaco Style
This dataset allows for fine-tuning chat models using "### Human:" and "### Assistant:" as the beginning- and end-of-sequence markers.
Preparation:
1. The dataset is cloned from [TimDettmers](https://huggingface.co/datasets/timdettmers/openassistant-guanaco), which itself is a subset of the Open Assistant dataset, which you can find [here](https://huggingface.co/datasets/OpenAssistant/oasst1/tree/main). This subset of the data only contains the highest-rated paths in the conversation tree, with a total of 9,846 samples.
1. The dataset was then slightly adjusted to:
- If a row of data ends with an assistant response, "### Human" was appended to the end of that row.
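The adjustment above can be sketched as follows. This is a hedged sketch: the helper name and the sample row are illustrative, not the script used to build this dataset.

```python
def ensure_trailing_human(text: str) -> str:
    """Append "### Human" when a sample ends with an assistant response,
    so the model learns to emit the end-of-turn marker after replying."""
    last_human = text.rfind("### Human:")
    last_assistant = text.rfind("### Assistant:")
    if last_assistant > last_human:  # the row ends with an assistant reply
        return text.rstrip() + "\n### Human"
    return text

# Illustrative row in the Guanaco "### Human: ... ### Assistant: ..." layout
row = "### Human: Hi!### Assistant: Hello, how can I help?"
adjusted = ensure_trailing_human(row)
```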
Details of the root dataset follow, copied from that repo:
# OpenAssistant Conversations Dataset (OASST1)
## Dataset Description
- **Homepage:** https://www.open-assistant.io/
- **Repository:** https://github.com/LAION-AI/Open-Assistant
- **Paper:** https://arxiv.org/abs/2304.07327
### Dataset Summary
In an effort to democratize research on large-scale alignment, we release OpenAssistant
Conversations (OASST1), a human-generated, human-annotated assistant-style conversation
corpus consisting of 161,443 messages in 35 different languages, annotated with 461,292
quality ratings, resulting in over 10,000 fully annotated conversation trees. The corpus
is a product of a worldwide crowd-sourcing effort involving over 13,500 volunteers.
Please refer to our [paper](https://arxiv.org/abs/2304.07327) for further details.
### Dataset Structure
This dataset contains message trees. Each message tree has an initial prompt message as the root node,
which can have multiple child messages as replies, and these child messages can have multiple replies.
All messages have a role property: this can either be "assistant" or "prompter". The roles in
conversation threads from prompt to leaf node strictly alternate between "prompter" and "assistant".
This version of the dataset contains data collected on the [open-assistant.io](https://open-assistant.io/) website until April 12 2023.
### JSON Example: Message
For readability, the following JSON examples are shown formatted with indentation on multiple lines.
Objects are stored without indentation (on single lines) in the actual jsonl files.
```json
{
"message_id": "218440fd-5317-4355-91dc-d001416df62b",
"parent_id": "13592dfb-a6f9-4748-a92c-32b34e239bb4",
"user_id": "8e95461f-5e94-4d8b-a2fb-d4717ce973e4",
"text": "It was the winter of 2035, and artificial intelligence (..)",
"role": "assistant",
"lang": "en",
"review_count": 3,
"review_result": true,
"deleted": false,
"rank": 0,
"synthetic": true,
"model_name": "oasst-sft-0_3000,max_new_tokens=400 (..)",
"labels": {
"spam": { "value": 0.0, "count": 3 },
"lang_mismatch": { "value": 0.0, "count": 3 },
"pii": { "value": 0.0, "count": 3 },
"not_appropriate": { "value": 0.0, "count": 3 },
"hate_speech": { "value": 0.0, "count": 3 },
"sexual_content": { "value": 0.0, "count": 3 },
"quality": { "value": 0.416, "count": 3 },
"toxicity": { "value": 0.16, "count": 3 },
"humor": { "value": 0.0, "count": 3 },
"creativity": { "value": 0.33, "count": 3 },
"violence": { "value": 0.16, "count": 3 }
}
}
```
### JSON Example: Conversation Tree
For readability, only a subset of the message properties is shown here.
```json
{
"message_tree_id": "14fbb664-a620-45ce-bee4-7c519b16a793",
"tree_state": "ready_for_export",
"prompt": {
"message_id": "14fbb664-a620-45ce-bee4-7c519b16a793",
"text": "Why can't we divide by 0? (..)",
"role": "prompter",
"lang": "en",
"replies": [
{
"message_id": "894d30b6-56b4-4605-a504-89dd15d4d1c8",
"text": "The reason we cannot divide by zero is because (..)",
"role": "assistant",
"lang": "en",
"replies": [
// ...
]
},
{
"message_id": "84d0913b-0fd9-4508-8ef5-205626a7039d",
"text": "The reason that the result of a division by zero is (..)",
"role": "assistant",
"lang": "en",
"replies": [
{
"message_id": "3352725e-f424-4e3b-a627-b6db831bdbaa",
"text": "Math is confusing. Like those weird Irrational (..)",
"role": "prompter",
"lang": "en",
"replies": [
{
"message_id": "f46207ca-3149-46e9-a466-9163d4ce499c",
"text": "Irrational numbers are simply numbers (..)",
"role": "assistant",
"lang": "en",
"replies": []
},
// ...
]
}
]
}
]
}
}
```
Please refer to [oasst-data](https://github.com/LAION-AI/Open-Assistant/tree/main/oasst-data) for
details about the data structure and for Python code to read and write jsonl files containing oasst data objects.
If you would like to explore the dataset yourself, you can find a
[`getting-started`](https://github.com/LAION-AI/Open-Assistant/blob/main/notebooks/openassistant-oasst1/getting-started.ipynb)
notebook in the `notebooks/openassistant-oasst1` folder of the [LAION-AI/Open-Assistant](https://github.com/LAION-AI/Open-Assistant)
GitHub repository.
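If you want to avoid extra dependencies, the compressed jsonl exports can also be read with the Python standard library alone — a minimal sketch (the helper name is ours, not part of oasst-data):

```python
import gzip
import json


def read_oasst_jsonl(path):
    """Yield one parsed oasst object per line of a (possibly gzipped) jsonl file."""
    opener = gzip.open if path.endswith(".gz") else open
    with opener(path, "rt", encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if line:  # skip blank lines defensively
                yield json.loads(line)
```

Each yielded dict has the fields shown in the JSON examples above.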
## Main Dataset Files
Conversation data is provided either as nested messages in trees (extension `.trees.jsonl.gz`)
or as a flat list (table) of messages (extension `.messages.jsonl.gz`).
### Ready For Export Trees
```
2023-04-12_oasst_ready.trees.jsonl.gz 10,364 trees with 88,838 total messages
2023-04-12_oasst_ready.messages.jsonl.gz 88,838 messages
```
Trees in the `ready_for_export` state, with spam and deleted messages removed; message labels are included.
The `oasst_ready` trees file is usually sufficient for supervised fine-tuning (SFT) and reward-model (RM) training.
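As an illustration of why the trees file is convenient for SFT, the nested `replies` structure shown in the tree example above can be flattened into root-to-leaf conversation threads. This helper is a sketch of ours, not part of the official tooling:

```python
def sft_threads(root):
    """Depth-first enumeration of all root-to-leaf conversation threads.

    Each thread is a list of (role, text) tuples — a typical input shape
    for SFT-style training. `root` is a prompt message with nested
    'replies', as stored in the *.trees files.
    """
    path = [(root["role"], root["text"])]
    if not root.get("replies"):
        return [path]
    threads = []
    for child in root["replies"]:
        for tail in sft_threads(child):
            threads.append(path + tail)
    return threads
```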
### All Trees
```
2023-04-12_oasst_all.trees.jsonl.gz 66,497 trees with 161,443 total messages
2023-04-12_oasst_all.messages.jsonl.gz 161,443 messages
```
All trees, including those in the states `prompt_lottery_waiting` (trees consisting of only a single message, the initial prompt),
`aborted_low_grade` (trees that stopped growing because their messages were of low quality), and `halted_by_moderator`.
### Supplemental Exports: Spam & Prompts
```
2023-04-12_oasst_spam.messages.jsonl.gz
```
These are messages that were deleted or received a negative review result (`"review_result": false`).
Besides low quality, a frequent reason for message deletion is an incorrect language tag.
```
2023-04-12_oasst_prompts.messages.jsonl.gz
```
These are all retained initial prompt messages with a positive review result (i.e., not spam) from trees in the `ready_for_export` or `prompt_lottery_waiting` state.
### Using the Huggingface Datasets
While the Hugging Face Datasets library is ideal for tabular datasets, it is not a natural fit for nested data structures like the OpenAssistant conversation trees.
Nevertheless, we make all messages that also appear in the file `2023-04-12_oasst_ready.trees.jsonl.gz` available in Parquet format as train/validation splits.
These can be loaded directly with [Hugging Face Datasets](https://pypi.org/project/datasets/).
To load the oasst1 train & validation splits use:
```python
from datasets import load_dataset
ds = load_dataset("OpenAssistant/oasst1")
train = ds['train'] # len(train)=84437 (95%)
val = ds['validation'] # len(val)=4401 (5%)
```
The messages appear in depth-first order of the message trees.
Full conversation trees can be reconstructed from the flat messages table by using the `parent_id`
and `message_id` properties to identify the parent-child relationship of messages. The `message_tree_id`
and `tree_state` properties (only present in flat messages files) can be used to find all messages of a message tree or to select trees by their state.
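A minimal sketch of that reconstruction (the helper is ours; the field names are as documented above):

```python
def build_trees(messages):
    """Reassemble conversation trees from flat message records.

    Uses the `parent_id`/`message_id` properties for parent-child links and
    `message_tree_id` to key the result. Returns a dict mapping
    message_tree_id -> root prompt message, with a `replies` list added to
    every message (mirroring the nested trees files).
    """
    by_id = {m["message_id"]: dict(m, replies=[]) for m in messages}
    roots = {}
    for m in by_id.values():
        parent = by_id.get(m["parent_id"])
        if parent is not None:
            parent["replies"].append(m)
        else:
            roots[m["message_tree_id"]] = m  # initial prompts have no parent
    return roots
```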
### Languages
OpenAssistant Conversations incorporates 35 different languages with a distribution of messages as follows:
**Languages with over 1000 messages**
- English: 71956
- Spanish: 43061
- Russian: 9089
- German: 5279
- Chinese: 4962
- French: 4251
- Thai: 3042
- Portuguese (Brazil): 2969
- Catalan: 2260
- Korean: 1553
- Ukrainian: 1352
- Italian: 1320
- Japanese: 1018
<details>
<summary><b>Languages with under 1000 messages</b></summary>
<ul>
<li>Vietnamese: 952</li>
<li>Basque: 947</li>
<li>Polish: 886</li>
<li>Hungarian: 811</li>
<li>Arabic: 666</li>
<li>Dutch: 628</li>
<li>Swedish: 512</li>
<li>Turkish: 454</li>
<li>Finnish: 386</li>
<li>Czech: 372</li>
<li>Danish: 358</li>
<li>Galician: 339</li>
<li>Hebrew: 255</li>
<li>Romanian: 200</li>
<li>Norwegian Bokmål: 133</li>
<li>Indonesian: 115</li>
<li>Bulgarian: 95</li>
<li>Bengali: 82</li>
<li>Persian: 72</li>
<li>Greek: 66</li>
<li>Esperanto: 59</li>
<li>Slovak: 19</li>
</ul>
</details>
## Contact
- Discord: [Open Assistant Discord Server](https://ykilcher.com/open-assistant-discord)
- GitHub: [LAION-AI/Open-Assistant](https://github.com/LAION-AI/Open-Assistant)
- E-Mail: [open-assistant@laion.ai](mailto:open-assistant@laion.ai) | 9,611 | [
[
-0.0211181640625,
-0.0677490234375,
0.012939453125,
0.01239013671875,
-0.005275726318359375,
0.0031986236572265625,
-0.00946044921875,
-0.0224456787109375,
0.022674560546875,
0.0269012451171875,
-0.0484619140625,
-0.060394287109375,
-0.038909912109375,
0.003... |
Hack90/ncbi_genbank_part_5 | 2023-10-05T23:55:40.000Z | [
"region:us"
] | Hack90 | null | null | 0 | 9 | 2023-10-04T23:02:55 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: id
dtype: string
- name: sequence
dtype: string
- name: name
dtype: string
- name: description
dtype: string
- name: features
dtype: int64
- name: seq_length
dtype: int64
splits:
- name: train
num_bytes: 20489073390
num_examples: 33130
download_size: 9505723044
dataset_size: 20489073390
---
# Dataset Card for "ncbi_genbank_part_5"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 641 | [
[
-0.047393798828125,
-0.01049041748046875,
0.0238037109375,
0.015472412109375,
-0.026153564453125,
0.0166168212890625,
0.040191650390625,
-0.0106048583984375,
0.046630859375,
0.03497314453125,
-0.052703857421875,
-0.07135009765625,
-0.0286712646484375,
0.0136... |
Hack90/ncbi_genbank_part_3 | 2023-10-05T02:57:48.000Z | [
"region:us"
] | Hack90 | null | null | 0 | 9 | 2023-10-05T02:49:14 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: id
dtype: string
- name: sequence
dtype: string
- name: name
dtype: string
- name: description
dtype: string
- name: features
dtype: int64
- name: seq_length
dtype: int64
splits:
- name: train
num_bytes: 21065842848
num_examples: 52663
download_size: 9750477678
dataset_size: 21065842848
---
# Dataset Card for "ncbi_genbank_part_3"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 641 | [
[
-0.040863037109375,
-0.0197296142578125,
0.0285797119140625,
0.0175018310546875,
-0.022064208984375,
0.0122222900390625,
0.0419921875,
-0.012939453125,
0.051422119140625,
0.040283203125,
-0.043304443359375,
-0.06219482421875,
-0.0257720947265625,
0.002893447... |
AustinMcMike/steve_jobs_quotes_3 | 2023-10-05T03:08:40.000Z | [
"region:us"
] | AustinMcMike | null | null | 0 | 9 | 2023-10-05T03:07:35 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
Sathvik-24/chacha300 | 2023-10-05T14:56:36.000Z | [
"region:us"
] | Sathvik-24 | null | null | 0 | 9 | 2023-10-05T14:48:43 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
vsarathy/nl-robotics-translation-simple_english-12k-no-context-TEST | 2023-10-05T19:54:36.000Z | [
"region:us"
] | vsarathy | null | null | 0 | 9 | 2023-10-05T19:54:13 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
Hack90/ncbi_genbank_part_4 | 2023-10-05T23:07:33.000Z | [
"region:us"
] | Hack90 | null | null | 0 | 9 | 2023-10-05T22:59:49 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: id
dtype: string
- name: sequence
dtype: string
- name: name
dtype: string
- name: description
dtype: string
- name: features
dtype: int64
- name: seq_length
dtype: int64
splits:
- name: train
num_bytes: 21065842848
num_examples: 52663
download_size: 9755396603
dataset_size: 21065842848
---
# Dataset Card for "ncbi_genbank_part_4"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 641 | [
[
-0.04278564453125,
-0.015380859375,
0.027740478515625,
0.015655517578125,
-0.0224151611328125,
0.02081298828125,
0.040863037109375,
-0.0082244873046875,
0.05450439453125,
0.036468505859375,
-0.04803466796875,
-0.0638427734375,
-0.023712158203125,
0.013198852... |
Intuit-GenSRF/jigsaw-unintended-bias-train-fr | 2023-10-06T00:16:34.000Z | [
"region:us"
] | Intuit-GenSRF | null | null | 0 | 9 | 2023-10-06T00:16:11 | ---
dataset_info:
features:
- name: text
dtype: string
- name: labels
sequence: string
splits:
- name: train
num_bytes: 688756878
num_examples: 1900136
download_size: 439186843
dataset_size: 688756878
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "jigsaw-unintended-bias-train-fr"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 506 | [
[
-0.042724609375,
-0.015716552734375,
0.01325225830078125,
0.0233306884765625,
-0.030029296875,
-0.01029205322265625,
0.0206146240234375,
-0.0021495819091796875,
0.0556640625,
0.0279998779296875,
-0.061065673828125,
-0.029296875,
-0.05029296875,
-0.0341491699... |
Hack90/ncbi_genbank_part_6 | 2023-10-06T00:25:00.000Z | [
"region:us"
] | Hack90 | null | null | 0 | 9 | 2023-10-06T00:16:47 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: id
dtype: string
- name: sequence
dtype: string
- name: name
dtype: string
- name: description
dtype: string
- name: features
dtype: int64
- name: seq_length
dtype: int64
splits:
- name: train
num_bytes: 20271283259
num_examples: 83750
download_size: 9364736791
dataset_size: 20271283259
---
# Dataset Card for "ncbi_genbank_part_6"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 641 | [
[
-0.040985107421875,
-0.01316070556640625,
0.02178955078125,
0.00885772705078125,
-0.02337646484375,
0.01004791259765625,
0.0379638671875,
-0.0113525390625,
0.0517578125,
0.035400390625,
-0.05035400390625,
-0.060394287109375,
-0.0268402099609375,
0.0077629089... |
Hack90/ncbi_genbank_part_7 | 2023-10-06T00:49:38.000Z | [
"region:us"
] | Hack90 | null | null | 0 | 9 | 2023-10-06T00:47:35 | Entry not found | 15 | [
[
-0.0214385986328125,
-0.01495361328125,
0.057220458984375,
0.02880859375,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.0170135498046875,
-0.052093505859375,
-0.01499176025390625,
-0.060333251953125,
0.037... |
SniiKz/Dataset_for_phi | 2023-10-06T06:07:00.000Z | [
"region:us"
] | SniiKz | null | null | 0 | 9 | 2023-10-06T06:06:55 | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 830921
num_examples: 2645
download_size: 197574
dataset_size: 830921
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "Dataset_for_phi"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 440 | [
[
-0.02960205078125,
-0.0247344970703125,
0.016265869140625,
0.020538330078125,
-0.0237579345703125,
-0.01447296142578125,
0.037628173828125,
-0.01004791259765625,
0.06561279296875,
0.03228759765625,
-0.0595703125,
-0.046295166015625,
-0.032318115234375,
-0.01... |
TheAIchemist13/beekeeping_tech_hi | 2023-10-06T11:02:47.000Z | [
"region:us"
] | TheAIchemist13 | null | null | 0 | 9 | 2023-10-06T11:02:45 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: target_text
dtype: string
splits:
- name: train
num_bytes: 4605091.0
num_examples: 110
- name: test
num_bytes: 1616943.0
num_examples: 40
download_size: 6141646
dataset_size: 6222034.0
---
# Dataset Card for "beekeeping_tech_hi"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 624 | [
[
-0.0347900390625,
-0.0283355712890625,
0.01255035400390625,
0.0132598876953125,
-0.00875091552734375,
0.00214385986328125,
0.0164794921875,
-0.0256805419921875,
0.060302734375,
0.031707763671875,
-0.067626953125,
-0.060150146484375,
-0.027801513671875,
-0.01... |
Intuit-GenSRF/toxigen-train-annotated-es | 2023-10-06T19:38:23.000Z | [
"region:us"
] | Intuit-GenSRF | null | null | 0 | 9 | 2023-10-06T19:38:20 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: text
dtype: string
- name: labels
sequence: string
- name: processed_text
sequence: string
- name: text_es
dtype: string
splits:
- name: train
num_bytes: 2822955
num_examples: 8960
download_size: 1685693
dataset_size: 2822955
---
# Dataset Card for "toxigen-train-annotated-es"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 574 | [
[
-0.040802001953125,
0.005542755126953125,
0.0291595458984375,
0.0323486328125,
-0.00995635986328125,
-0.006473541259765625,
0.0008664131164550781,
-0.0177154541015625,
0.048095703125,
0.0369873046875,
-0.06683349609375,
-0.06231689453125,
-0.0477294921875,
-... |
Intuit-GenSRF/haternet-es | 2023-10-06T19:50:41.000Z | [
"region:us"
] | Intuit-GenSRF | null | null | 0 | 9 | 2023-10-06T19:50:39 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: text
dtype: string
- name: labels
sequence: string
- name: processed_text
sequence: string
- name: text_en
dtype: string
splits:
- name: train
num_bytes: 2211676
num_examples: 6000
download_size: 1501046
dataset_size: 2211676
---
# Dataset Card for "haternet-en"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 559 | [
[
-0.049163818359375,
-0.007411956787109375,
0.005268096923828125,
0.0189971923828125,
-0.01459503173828125,
-0.007144927978515625,
0.0132904052734375,
-0.0199127197265625,
0.068359375,
0.0243682861328125,
-0.06103515625,
-0.059478759765625,
-0.04931640625,
-0... |
Mxode/C-Language-Chat-Debug-Multiturn-Zh | 2023-10-07T10:52:26.000Z | [
"task_categories:question-answering",
"task_categories:conversational",
"language:zh",
"license:apache-2.0",
"code",
"C",
"chat",
"assistant",
"multiturn",
"debug",
"region:us"
] | Mxode | null | null | 2 | 9 | 2023-10-07T10:48:11 | ---
license: apache-2.0
task_categories:
- question-answering
- conversational
language:
- zh
tags:
- code
- C
- chat
- assistant
- multiturn
- debug
---
About 1,300 user–assistant **multi-turn conversations** in **C language** scenarios. Each conversation has been organized into a single-line format. A sample entry:
```JSON
{
"id": 1045,
"conversation": [
{
"user": "你好,AI助手。我最近在写一个C语言程序,但是遇到了一些问题,希望你能帮我检查一下。",
"assistant": "你好,我很乐意帮助你。请把你的代码发给我,我会尽快检查并给出建议。"
},
{
"user": "好的,这是我的代码。这段代码的主要功能是计算斐波那契数列的前n项。",
"assistant": "让我看一下......嗯,这里有一个小错误。在第10行,你应该使用`++i`而不是`i++`来递增i的值。修改后的代码应该是这样的\\n```c\\nfor (int i = 0; i < n; ++i) {\\n if (i == 0 || i == 1) {\\n printf(\\\"%d \\\", fib[i]);\\n } else {\\n fib[i] = fib[i - 1] + fib[i - 2];\\n printf(\\\"%d \\\", fib[i]);\\n }\\n}\\n```"
},
{
"user": "谢谢你的帮助!我还有一个问题。这段代码中有一个递归函数,用于计算斐波那契数列。但是我发现这个递归函数的效率很低,因为它会重复计算很多相同的子问题。有什么方法可以优化这个递归函数吗?",
"assistant": "当然可以。你可以尝试使用动态规划的方法来优化这个递归函数。这样可以将已经计算过的子问题的解存储起来,避免重复计算。下面是一个使用动态规划优化后的斐波那契数列计算函数\\n```c\\n#include <stdio.h>\\nint fib[100]; // 用于存储斐波那契数列的数组\\nvoid fib_dp(int n) {\\n fib[0] = 0;\\n fib[1] = 1;\\n for (int i = 2; i <= n; ++i) {\\n fib[i] = fib[i - 1] + fib[i - 2];\\n }\\n}\\nint main() {\\n int n = 10; // 你想要计算的斐波那契数列的项数\\n fib_dp(n);\\n for (int i = 0; i <= n; ++i) {\\n printf(\\\"%d \\\", fib[i]);\\n }\\n return 0;\\n}\\n```"
},
{
"user": "哇,这真是太棒了!你的建议大大提高了程序的运行速度。再次感谢你的帮助!\n",
"assistant": ""
}
]
}
```
The assistant is addressed as "AI助手" ("AI assistant") throughout. If you want a specific assistant name, **you can globally replace "AI助手".**
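As a hedged sketch (the helper name is ours, not part of the dataset), each single-line record can be parsed into (user, assistant) training pairs, with the persona name replaced globally as the card suggests:

```python
import json


def to_pairs(line, assistant_name="AI助手"):
    """Turn one single-line conversation record into (user, assistant) pairs.

    `assistant_name` globally replaces the default "AI助手" persona name.
    The final turn may carry an empty assistant reply and is skipped.
    """
    record = json.loads(line)
    pairs = []
    for turn in record["conversation"]:
        user = turn["user"].replace("AI助手", assistant_name)
        assistant = turn["assistant"].replace("AI助手", assistant_name)
        if assistant:
            pairs.append((user, assistant))
    return pairs
```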
[
-0.032806396484375,
-0.061248779296875,
0.0273284912109375,
0.03240966796875,
-0.0157470703125,
-0.0089263916015625,
0.0157928466796875,
-0.01120758056640625,
0.0384521484375,
0.041351318359375,
-0.022186279296875,
-0.027587890625,
-0.04656982421875,
0.01605... |
tessiw/german_OpenOrca1 | 2023-10-07T13:44:05.000Z | [
"region:us"
] | tessiw | null | null | 0 | 9 | 2023-10-07T13:43:34 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: id
dtype: string
- name: system_prompt
dtype: string
- name: question
dtype: string
- name: response
dtype: string
splits:
- name: train
num_bytes: 456248082
num_examples: 250000
download_size: 259702655
dataset_size: 456248082
---
# Dataset Card for "german_OpenOrca1"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 566 | [
[
-0.053924560546875,
-0.028564453125,
0.004650115966796875,
0.0189208984375,
-0.01525115966796875,
-0.0188140869140625,
0.006622314453125,
-0.0159149169921875,
0.059814453125,
0.0279083251953125,
-0.053802490234375,
-0.0814208984375,
-0.035888671875,
-0.02841... |
tessiw/german_OpenOrca2 | 2023-10-07T13:49:09.000Z | [
"region:us"
] | tessiw | null | null | 0 | 9 | 2023-10-07T13:48:36 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: id
dtype: string
- name: system_prompt
dtype: string
- name: question
dtype: string
- name: response
dtype: string
splits:
- name: train
num_bytes: 453043119
num_examples: 250000
download_size: 257694182
dataset_size: 453043119
---
# Dataset Card for "german_OpenOrca2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 566 | [
[
-0.0467529296875,
-0.0255279541015625,
0.00921630859375,
0.0202789306640625,
-0.0138397216796875,
-0.017547607421875,
0.005443572998046875,
-0.0257415771484375,
0.04833984375,
0.0268096923828125,
-0.0474853515625,
-0.07171630859375,
-0.03643798828125,
-0.030... |
RikoteMaster/translation_4_llama2_with_end_token | 2023-10-07T15:41:59.000Z | [
"region:us"
] | RikoteMaster | null | null | 0 | 9 | 2023-10-07T15:20:42 | ---
dataset_info:
features:
- name: English
dtype: string
- name: Spanish
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 43090372
num_examples: 118964
download_size: 12020346
dataset_size: 43090372
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "translation_4_llama2_with_end_token"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 540 | [
[
-0.0168609619140625,
-0.002552032470703125,
0.020294189453125,
0.0303497314453125,
-0.04522705078125,
0.01953125,
0.0035762786865234375,
-0.0175628662109375,
0.061279296875,
0.033538818359375,
-0.053375244140625,
-0.061370849609375,
-0.062103271484375,
0.008... |
tyzhu/synpre_union_1M | 2023-10-08T09:18:54.000Z | [
"region:us"
] | tyzhu | null | null | 0 | 9 | 2023-10-08T09:16:39 | ---
dataset_info:
features:
- name: inputs
dtype: string
- name: targets
dtype: string
splits:
- name: train
num_bytes: 1167868421
num_examples: 1000000
- name: validation
num_bytes: 11660114
num_examples: 10000
download_size: 788391948
dataset_size: 1179528535
---
# Dataset Card for "synpre_union_1M"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 475 | [
[
-0.042022705078125,
-0.007457733154296875,
0.006229400634765625,
0.02313232421875,
-0.023162841796875,
-0.0066986083984375,
0.0119171142578125,
-0.00885772705078125,
0.07330322265625,
0.03143310546875,
-0.059051513671875,
-0.051788330078125,
-0.03558349609375,
... |
Linyuyu/linruanruan | 2023-10-12T09:52:19.000Z | [
"region:us"
] | Linyuyu | null | null | 0 | 9 | 2023-10-08T10:00:23 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
ppxscal/embedding_network | 2023-10-18T16:34:31.000Z | [
"license:mit",
"region:us"
] | ppxscal | null | null | 0 | 9 | 2023-10-08T15:48:47 | ---
license: mit
---
# Dataset Card for Dataset Name
CSV file of embedded abstracts collected from https://lfs.aminer.cn/lab-datasets/citation/citation-network1.zip
Columns, in order: `title`, `authors`, `year`, `venue`, `index`, `abstract`, `embedding`, `references`.
[
-0.0268402099609375,
-0.02130126953125,
0.0035037994384765625,
-0.0006723403930664062,
-0.0296173095703125,
0.0247039794921875,
0.032684326171875,
-0.007015228271484375,
0.047149658203125,
0.025177001953125,
-0.0272216796875,
-0.04449462890625,
-0.03268432617187... |
vmalitskyi/images_rotation_dataset | 2023-10-09T14:02:02.000Z | [
"region:us"
] | vmalitskyi | null | null | 0 | 9 | 2023-10-09T13:14:06 | ---
dataset_info:
features:
- name: image
sequence:
sequence:
sequence: uint8
- name: name
dtype: string
- name: sender_id
dtype: int64
- name: label
dtype: int64
- name: kids
dtype: int64
- name: class
dtype: int64
- name: fold
dtype: int64
splits:
- name: train
num_bytes: 32685477988
num_examples: 21612
download_size: 10754548456
dataset_size: 32685477988
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "images_rotation_dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 701 | [
[
-0.038909912109375,
-0.006710052490234375,
0.01326751708984375,
0.0244140625,
-0.03277587890625,
-0.0006718635559082031,
0.03021240234375,
-0.0106964111328125,
0.07330322265625,
0.0391845703125,
-0.0638427734375,
-0.056976318359375,
-0.051239013671875,
-0.02... |
Harshithacj123/CCU_QA_dataset | 2023-10-10T02:31:46.000Z | [
"region:us"
] | Harshithacj123 | null | null | 0 | 9 | 2023-10-10T02:31:44 | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 41353
num_examples: 50
download_size: 23370
dataset_size: 41353
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "CCU_QA_dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 434 | [
[
-0.033538818359375,
-0.019134521484375,
0.02410888671875,
0.00701141357421875,
-0.0194549560546875,
0.01157379150390625,
0.0301361083984375,
0.001987457275390625,
0.05810546875,
0.0268402099609375,
-0.05352783203125,
-0.05609130859375,
-0.017333984375,
-0.01... |
appvoid/grafito-25k | 2023-10-10T11:14:44.000Z | [
"license:apache-2.0",
"region:us"
] | appvoid | null | null | 0 | 9 | 2023-10-10T03:50:31 | ---
license: apache-2.0
---
**grafito-25k** is a dataset meant for training or fine-tuning smaller language models. This kind of dataset makes it easier to learn next-sentence prediction objectives for models with fewer than 2.7B parameters. It is the largest known short-answer dataset of its kind, drawing instruction-response pairs from several popular datasets such as OA, natural-instructions, and more.
[
-0.05413818359375,
-0.044342041015625,
0.00798797607421875,
0.0220947265625,
-0.03277587890625,
-0.010009765625,
-0.0243682861328125,
-0.0286865234375,
0.007335662841796875,
0.04595947265625,
-0.044891357421875,
-0.0223236083984375,
-0.02252197265625,
0.0066... |
Rricardo/benahavis | 2023-10-10T15:06:13.000Z | [
"region:us"
] | Rricardo | null | null | 0 | 9 | 2023-10-10T10:53:26 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
asmallgreenpotato/test-start | 2023-10-11T18:45:10.000Z | [
"region:us"
] | asmallgreenpotato | null | null | 0 | 9 | 2023-10-10T15:17:20 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
MaxReynolds/SouderRocketLauncherReference | 2023-10-10T20:20:07.000Z | [
"region:us"
] | MaxReynolds | null | null | 0 | 9 | 2023-10-10T19:42:16 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
smangrul/hf-stack-v2 | 2023-10-11T07:43:53.000Z | [
"region:us"
] | smangrul | null | null | 2 | 9 | 2023-10-11T07:43:48 | ---
dataset_info:
features:
- name: repo_id
dtype: string
- name: file_path
dtype: string
- name: content
dtype: string
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 103347552
num_examples: 6567
download_size: 35040642
dataset_size: 103347552
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "hf-stack-v2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 566 | [
[
-0.036590576171875,
-0.022003173828125,
0.0004558563232421875,
0.0237579345703125,
-0.007335662841796875,
-0.0001976490020751953,
0.046173095703125,
-0.0259246826171875,
0.05322265625,
0.04730224609375,
-0.054901123046875,
-0.04290771484375,
-0.038116455078125,
... |
renumics/spotlight-mnist-enrichment | 2023-10-13T09:42:21.000Z | [
"region:us"
] | renumics | null | null | 0 | 9 | 2023-10-11T12:10:58 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: image.embedding
sequence: float32
length: 2
splits:
- name: train
num_bytes: 480000
num_examples: 60000
- name: test
num_bytes: 80000
num_examples: 10000
download_size: 778435
dataset_size: 560000
---
# Dataset Card for "spotlight-mnist-enrichment"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 579 | [
[
-0.045989990234375,
-0.0125885009765625,
0.019195556640625,
0.0206146240234375,
-0.0185089111328125,
0.01274871826171875,
0.006526947021484375,
-0.01038360595703125,
0.07745361328125,
0.03826904296875,
-0.0631103515625,
-0.04840087890625,
-0.0390625,
-0.0116... |
Nebulous/lmsys-chat-1m-smortmodelsonly | 2023-10-11T21:07:55.000Z | [
"region:us"
] | Nebulous | null | null | 2 | 9 | 2023-10-11T20:07:57 | This version of the dataset only has responses from GPT-4, Claude-1, Claude-2, Claude-instant-1, and GPT-3.5-turbo | 114 | [
[
-0.048309326171875,
-0.04339599609375,
0.0589599609375,
0.01751708984375,
-0.007171630859375,
0.003948211669921875,
0.012908935546875,
-0.0222625732421875,
0.04998779296875,
0.05706787109375,
-0.079345703125,
-0.0010890960693359375,
-0.042510986328125,
0.024... |
sehyun66/News-semtiment | 2023-10-12T12:45:17.000Z | [
"region:us"
] | sehyun66 | null | null | 0 | 9 | 2023-10-12T12:45:17 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
cc-platform-links/training-data | 2023-10-12T13:42:34.000Z | [
"region:us"
] | cc-platform-links | null | null | 0 | 9 | 2023-10-12T13:41:23 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
carnival13/xlmr_int_hard_curr_trn_ep2_lrg | 2023-10-12T17:05:10.000Z | [
"region:us"
] | carnival13 | null | null | 0 | 9 | 2023-10-12T16:46:26 | ---
dataset_info:
features:
- name: domain_label
dtype: int64
- name: pass_label
dtype: int64
- name: input
dtype: string
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
splits:
- name: train
num_bytes: 285070021
num_examples: 226100
download_size: 80645458
dataset_size: 285070021
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "xlmr_int_hard_curr_trn_ep2_lrg"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 628 | [
[
-0.023101806640625,
-0.033966064453125,
0.0180511474609375,
0.0085296630859375,
-0.0172882080078125,
0.0177154541015625,
0.00968170166015625,
0.009613037109375,
0.032684326171875,
0.035247802734375,
-0.034881591796875,
-0.036590576171875,
-0.03924560546875,
... |
carnival13/xlmr_eval_lrg | 2023-10-12T16:46:46.000Z | [
"region:us"
] | carnival13 | null | null | 0 | 9 | 2023-10-12T16:46:42 | ---
dataset_info:
features:
- name: domain_label
dtype: int64
- name: pass_label
dtype: int64
- name: input
dtype: string
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
splits:
- name: train
num_bytes: 19341220
num_examples: 11590
download_size: 5401187
dataset_size: 19341220
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "xlmr_eval_lrg"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 607 | [
[
-0.034637451171875,
-0.04290771484375,
0.019683837890625,
-0.0032062530517578125,
-0.005809783935546875,
0.03265380859375,
0.01100921630859375,
0.006038665771484375,
0.038330078125,
0.038421630859375,
-0.039306640625,
-0.044158935546875,
-0.0394287109375,
-0... |
tyzhu/synpre_mix_v3_1M | 2023-10-13T07:32:01.000Z | [
"region:us"
] | tyzhu | null | null | 0 | 9 | 2023-10-13T07:28:28 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
dataset_info:
features:
- name: inputs
dtype: string
- name: targets
dtype: string
splits:
- name: train
num_bytes: 1538674019.8
num_examples: 1000000
- name: validation
num_bytes: 15406264.0
num_examples: 10000
download_size: 1018899653
dataset_size: 1554080283.8
---
# Dataset Card for "synpre_mix_v3_1M"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 619 | [
[
-0.05169677734375,
0.0002753734588623047,
0.00994110107421875,
0.03704833984375,
-0.0146942138671875,
-0.005275726318359375,
0.016845703125,
-0.02716064453125,
0.07232666015625,
0.036834716796875,
-0.06414794921875,
-0.050140380859375,
-0.038848876953125,
-0... |
DAMO-NLP-SG/MultiJail | 2023-10-13T07:56:04.000Z | [
"task_categories:conversational",
"size_categories:n<1K",
"language:en",
"language:zh",
"language:it",
"language:vi",
"language:ar",
"language:ko",
"language:th",
"language:bn",
"language:sw",
"language:jv",
"license:mit",
"arxiv:2310.06474",
"region:us"
] | DAMO-NLP-SG | null | null | 0 | 9 | 2023-10-13T07:54:21 | ---
license: mit
task_categories:
- conversational
language:
- en
- zh
- it
- vi
- ar
- ko
- th
- bn
- sw
- jv
size_categories:
- n<1K
---
# Multilingual Jailbreak Challenges in Large Language Models
This repo contains the data for our paper ["Multilingual Jailbreak Challenges in Large Language Models"](https://arxiv.org/abs/2310.06474).
[[Github repo]](https://github.com/DAMO-NLP-SG/multilingual-safety-for-LLMs/)
## Annotation Statistics
We collected a total of 315 English unsafe prompts and annotated them into nine non-English languages. The languages were categorized based on resource availability, as shown below:
**High-resource languages:** Chinese (zh), Italian (it), Vietnamese (vi)
**Medium-resource languages:** Arabic (ar), Korean (ko), Thai (th)
**Low-resource languages:** Bengali (bn), Swahili (sw), Javanese (jv)
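The resource-tier grouping above can be expressed as a small lookup table for downstream filtering. This is a minimal sketch; the `RESOURCE_TIERS` constant and the `tier_of` helper are illustrative names, not part of the released dataset:

```python
# Resource tiers for the nine non-English MultiJail languages,
# as categorized in the paper (ISO 639-1 codes).
RESOURCE_TIERS = {
    "high": ["zh", "it", "vi"],
    "medium": ["ar", "ko", "th"],
    "low": ["bn", "sw", "jv"],
}

def tier_of(lang: str) -> str:
    """Return the resource tier for a language code, e.g. 'sw' -> 'low'."""
    for tier, langs in RESOURCE_TIERS.items():
        if lang in langs:
            return tier
    raise KeyError(f"unknown language code: {lang}")
```

Each of the 315 English prompts is annotated into all nine of these languages, so per-tier evaluation simply groups results by `tier_of(lang)`.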
## Ethics Statement
Our research investigates the safety challenges of LLMs in multilingual settings. We are aware of the potential misuse of our findings and emphasize that our research is solely for academic purposes and ethical use. Misuse or harm resulting from the information in this paper is strongly discouraged. To address the identified risks and vulnerabilities, we commit to open-sourcing the data used in our study. This openness aims to facilitate vulnerability identification, encourage discussions, and foster collaborative efforts to enhance LLM safety in multilingual contexts. Furthermore, we have developed the SELF-DEFENSE framework to address multilingual jailbreak challenges in LLMs. This framework automatically generates multilingual safety training data to mitigate risks associated with unintentional and intentional jailbreak scenarios. Overall, our work not only highlights multilingual jailbreak challenges in LLMs but also paves the way for future research, collaboration, and innovation to enhance their safety.
## Citation
```
@misc{deng2023multilingual,
title={Multilingual Jailbreak Challenges in Large Language Models},
author={Yue Deng and Wenxuan Zhang and Sinno Jialin Pan and Lidong Bing},
year={2023},
eprint={2310.06474},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` | 2,157 | [embedding vector truncated] |
likhith45688/lm_dataset | 2023-10-14T07:32:27.000Z | [
"region:us"
] | likhith45688 | null | null | 0 | 9 | 2023-10-14T07:31:58 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: valid
path: data/valid-*
dataset_info:
features:
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
- name: labels
sequence: int64
splits:
- name: train
num_bytes: 606341604
num_examples: 361779
- name: valid
num_bytes: 144454440
num_examples: 86190
download_size: 137305987
dataset_size: 750796044
---
# Dataset Card for "lm_dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 639 | [embedding vector truncated] |
legacy107/wikipedia-augmented-chunked | 2023-10-14T12:06:43.000Z | [
"region:us"
] | legacy107 | null | null | 0 | 9 | 2023-10-14T12:06:10 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: History
sequence: string
- name: QuAC_dialog_id
dtype: string
- name: Question
dtype: string
- name: Question_no
dtype: int64
- name: Rewrite
dtype: string
- name: true_page_title
dtype: string
- name: true_contexts
dtype: string
- name: answer
dtype: string
- name: true_contexts_wiki
dtype: string
- name: extractive
dtype: bool
- name: retrieved_contexts
sequence: string
- name: chunked_article
sequence: string
splits:
- name: train
num_bytes: 1778351216
num_examples: 17183
- name: test
num_bytes: 315554804
num_examples: 2882
download_size: 612111820
dataset_size: 2093906020
---
# Dataset Card for "wikipedia-augmented-chunked"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 1,028 | [embedding vector truncated] |
PiyushLavaniya/Small_Alpaca_Instruct | 2023-10-15T07:00:06.000Z | [
"region:us"
] | PiyushLavaniya | null | null | 0 | 9 | 2023-10-15T07:00:01 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: input
dtype: string
- name: output
dtype: string
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
- name: labels
sequence: int64
splits:
- name: train
num_bytes: 17849270.7
num_examples: 9000
- name: test
num_bytes: 1983252.3
num_examples: 1000
download_size: 6069153
dataset_size: 19832523.0
---
# Dataset Card for "Small_Alpaca_Instruct"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 713 | [embedding vector truncated] |
aditya998/wiki_instruction_response | 2023-10-17T05:00:25.000Z | [
"region:us"
] | aditya998 | null | null | 0 | 9 | 2023-10-17T04:34:03 | Entry not found | 15 | [embedding vector truncated] |
dhruv107/receipt_oct17 | 2023-10-17T06:18:38.000Z | [
"region:us"
] | dhruv107 | null | null | 0 | 9 | 2023-10-17T06:17:33 | ---
dataset_info:
features:
- name: image
dtype: image
- name: ground_truth
dtype: string
splits:
- name: train
num_bytes: 325034464.0
num_examples: 254
- name: test
num_bytes: 24757704.0
num_examples: 16
- name: validation
num_bytes: 58492164.0
num_examples: 48
download_size: 201177227
dataset_size: 408284332.0
---
# Dataset Card for "receipt_oct17"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 535 | [embedding vector truncated] |
tyzhu/squad_last_sent_v4_train_30_eval_10 | 2023-10-19T10:05:00.000Z | [
"region:us"
] | tyzhu | null | null | 0 | 9 | 2023-10-17T15:13:36 | ---
dataset_info:
features:
- name: id
dtype: string
- name: title
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: answers
struct:
- name: answer_start
sequence: int64
- name: text
sequence: string
- name: context_id
dtype: string
- name: inputs
dtype: string
- name: targets
dtype: string
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 111467
num_examples: 70
- name: validation
num_bytes: 11481
num_examples: 10
- name: eval_last_sent
num_bytes: 11481
num_examples: 10
download_size: 0
dataset_size: 134429
---
# Dataset Card for "squad_last_sent_v4_train_30_eval_10"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 874 | [embedding vector truncated] |