id stringlengths 2 115 | lastModified stringlengths 24 24 | tags list | author stringlengths 2 42 ⌀ | description stringlengths 0 6.67k ⌀ | citation stringlengths 0 10.7k ⌀ | likes int64 0 3.66k | downloads int64 0 8.89M | created timestamp[us] | card stringlengths 11 977k | card_len int64 11 977k | embeddings list |
|---|---|---|---|---|---|---|---|---|---|---|---|
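The header row above describes this dump's schema (dataset `id`, timestamps, `tags`, `author`, card text, `card_len`, an embedding vector per row). As a minimal sketch — no file path for the dump is given here, so this builds a tiny in-memory sample with values copied from the rows below instead of loading a real file — the "Entry not found" placeholder rows can be separated from rows carrying a real dataset card:

```python
# Sketch: mirror the dump's schema in pandas and split placeholder rows
# ("Entry not found") from rows with real card text. The sample rows are
# hypothetical, copied from the table above for illustration only.
import pandas as pd

rows = [
    {"id": "syaoran312/VHAC_QA", "card": "Entry not found",
     "card_len": 15, "likes": 0, "downloads": 3},
    {"id": "hiddenvoices/translation",
     "card": "# Dataset Card for translation ...",
     "card_len": 8381, "likes": 0, "downloads": 3},
]
df = pd.DataFrame(rows)

# Placeholder rows carry no card: their card text is the literal string
# "Entry not found", so card_len is always 15 (the placeholder's length).
placeholders = df[df["card"] == "Entry not found"]
real_cards = df[df["card"] != "Entry not found"]

print(len(placeholders), len(real_cards))
```

Note that `card_len == 15` throughout the dump simply marks a missing card, not a 15-character card of substance.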
syaoran312/VHAC_QA | 2023-10-13T17:56:39.000Z | [
"region:us"
] | syaoran312 | null | null | 0 | 3 | 2023-10-13T17:56:06 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
liberatoratif/Credit-card-fraud-detection | 2023-10-13T18:18:49.000Z | [
"region:us"
] | liberatoratif | null | null | 0 | 3 | 2023-10-13T18:08:16 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
EZUNIGAF/amazon-product-credit-to-bprateek | 2023-10-13T19:02:13.000Z | [
"region:us"
] | EZUNIGAF | null | null | 0 | 3 | 2023-10-13T18:50:37 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
carnival13/xlmr_hard_curr_uda_ep3_corr | 2023-10-14T00:43:08.000Z | [
"region:us"
] | carnival13 | null | null | 0 | 3 | 2023-10-14T00:42:32 | ---
dataset_info:
features:
- name: input
dtype: string
- name: pass_label
dtype: int64
- name: domain_label
dtype: int64
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
splits:
- name: train
num_bytes: 774087578
num_examples: 519240
download_size: 231183382
dataset_size: 774087578
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "xlmr_hard_curr_uda_ep3_corr"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 626 | [
[
-0.027862548828125,
-0.034393310546875,
0.021514892578125,
0.014190673828125,
-0.0103759765625,
0.0132293701171875,
0.025299072265625,
0.008544921875,
0.039031982421875,
0.044281005859375,
-0.038330078125,
-0.0479736328125,
-0.029815673828125,
0.002437591552... |
PiyushLavaniya/Small-Instruction-tuning-dataset | 2023-10-14T12:36:38.000Z | [
"region:us"
] | PiyushLavaniya | null | null | 0 | 3 | 2023-10-14T12:35:03 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: question
dtype: string
- name: answer
dtype: string
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
- name: labels
sequence: int64
splits:
- name: train
num_bytes: 147200.32363636364
num_examples: 247
- name: test
num_bytes: 16686.676363636365
num_examples: 28
download_size: 26802
dataset_size: 163887.0
---
# Dataset Card for "Small-Instruction-tuning-dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 737 | [
[
-0.04302978515625,
-0.0273284912109375,
0.017181396484375,
0.00725555419921875,
-0.01360321044921875,
-0.01361083984375,
-0.0011243820190429688,
0.004230499267578125,
0.0504150390625,
0.0275726318359375,
-0.06585693359375,
-0.042327880859375,
-0.028106689453125,... |
hiddenvoices/translation | 2023-10-14T17:58:50.000Z | [
"size_categories:1K<n<10K",
"rlfh",
"argilla",
"human-feedback",
"region:us"
] | hiddenvoices | null | null | 0 | 3 | 2023-10-14T17:12:39 | ---
size_categories: 1K<n<10K
tags:
- rlfh
- argilla
- human-feedback
---
# Dataset Card for translation
This dataset has been created with [Argilla](https://docs.argilla.io).
As shown in the sections below, this dataset can be loaded into Argilla as explained in [Load with Argilla](#load-with-argilla), or used directly with the `datasets` library in [Load with `datasets`](#load-with-datasets).
## Dataset Description
- **Homepage:** https://argilla.io
- **Repository:** https://github.com/argilla-io/argilla
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
This dataset contains:
* A dataset configuration file conforming to the Argilla dataset format named `argilla.yaml`. This configuration file will be used to configure the dataset when using the `FeedbackDataset.from_huggingface` method in Argilla.
* Dataset records in a format compatible with HuggingFace `datasets`. These records will be loaded automatically when using `FeedbackDataset.from_huggingface` and can be loaded independently using the `datasets` library via `load_dataset`.
* The [annotation guidelines](#annotation-guidelines) that have been used for building and curating the dataset, if they've been defined in Argilla.
### Load with Argilla
To load with Argilla, you'll just need to install Argilla as `pip install argilla --upgrade` and then use the following code:
```python
import argilla as rg
ds = rg.FeedbackDataset.from_huggingface("hiddenvoices/translation")
```
### Load with `datasets`
To load this dataset with `datasets`, you'll just need to install `datasets` as `pip install datasets --upgrade` and then use the following code:
```python
from datasets import load_dataset
ds = load_dataset("hiddenvoices/translation")
```
### Supported Tasks and Leaderboards
This dataset can contain [multiple fields, questions and responses](https://docs.argilla.io/en/latest/guides/llms/conceptual_guides/data_model.html) so it can be used for different NLP tasks, depending on the configuration. The dataset structure is described in the [Dataset Structure section](#dataset-structure).
There are no leaderboards associated with this dataset.
### Languages
[More Information Needed]
## Dataset Structure
### Data in Argilla
The dataset is created in Argilla with: **fields**, **questions**, **suggestions**, and **guidelines**.
The **fields** are the dataset records themselves; for the moment, just text fields are supported. These are the ones that will be used to provide responses to the questions.
| Field Name | Title | Type | Required | Markdown |
| ---------- | ----- | ---- | -------- | -------- |
| text | Text | TextField | True | False |
The **questions** are the questions that will be asked to the annotators. They can be of different types, such as rating, text, single choice, or multiple choice.
| Question Name | Title | Type | Required | Description | Values/Labels |
| ------------- | ----- | ---- | -------- | ----------- | ------------- |
| translated-lang | Translated-lang | LabelQuestion | True | N/A | ['Assamese (as)', 'Hindi (hi)', 'Marathi (mr)', 'Tamil (ta)', 'Bengali (bn)', 'Kannada (kn)', 'Odia (or)', 'Telugu (te)', 'Gujarati (gu)', 'Malayalam (ml)', 'Punjabi (pa)'] |
| translated-text | Translated-text | TextQuestion | True | N/A | N/A |
**✨ NEW** Additionally, we also have **suggestions**, which are linked to the existing questions and are named by appending "-suggestion" and "-suggestion-metadata" to the question names; they contain the value/s of the suggestion and its metadata, respectively. The possible values are the same as in the table above.
Finally, the **guidelines** are just a plain string that can be used to provide instructions to the annotators. Find those in the [annotation guidelines](#annotation-guidelines) section.
### Data Instances
An example of a dataset instance in Argilla looks as follows:
```json
{
"fields": {
"text": "wow ! i do love a good horror movie . loving this cooler weather"
},
"metadata": {},
"responses": [
{
"status": "submitted",
"user_id": "013f6259-5c26-4ceb-96ff-8030cbaa9243",
"values": {
"translated-lang": {
"value": "Hindi (hi)"
},
"translated-text": {
"value": "gajab! Mereko ek achchi darawani chitra pasand hai. anand hai iss thande mausam main."
}
}
}
],
"suggestions": []
}
```
While the same record in HuggingFace `datasets` looks as follows:
```json
{
"external_id": null,
"metadata": "{}",
"text": "wow ! i do love a good horror movie . loving this cooler weather",
"translated-lang": [
{
"status": "submitted",
"user_id": "013f6259-5c26-4ceb-96ff-8030cbaa9243",
"value": "Hindi (hi)"
}
],
"translated-lang-suggestion": null,
"translated-lang-suggestion-metadata": {
"agent": null,
"score": null,
"type": null
},
"translated-text": [
{
"status": "submitted",
"user_id": "013f6259-5c26-4ceb-96ff-8030cbaa9243",
"value": "gajab! Mereko ek achchi darawani chitra pasand hai. anand hai iss thande mausam main."
}
],
"translated-text-suggestion": null,
"translated-text-suggestion-metadata": {
"agent": null,
"score": null,
"type": null
}
}
```
### Data Fields
Among the dataset fields, we differentiate between the following:
* **Fields:** These are the dataset records themselves; for the moment, just text fields are supported. These are the ones that will be used to provide responses to the questions.
* **text** is of type `TextField`.
* **Questions:** These are the questions that will be asked to the annotators. They can be of different types, such as `RatingQuestion`, `TextQuestion`, `LabelQuestion`, `MultiLabelQuestion`, and `RankingQuestion`.
* **translated-lang** is of type `LabelQuestion` with the following allowed values ['Assamese (as)', 'Hindi (hi)', 'Marathi (mr)', 'Tamil (ta)', 'Bengali (bn)', 'Kannada (kn)', 'Odia (or)', 'Telugu (te)', 'Gujarati (gu)', 'Malayalam (ml)', 'Punjabi (pa)'].
* **translated-text** is of type `TextQuestion`.
* **✨ NEW** **Suggestions:** As of Argilla 1.13.0, the suggestions have been included to provide the annotators with suggestions to ease or assist during the annotation process. Suggestions are linked to the existing questions, are always optional, and contain not just the suggestion itself, but also the metadata linked to it, if applicable.
* (optional) **translated-lang-suggestion** is of type `label_selection` with the following allowed values ['Assamese (as)', 'Hindi (hi)', 'Marathi (mr)', 'Tamil (ta)', 'Bengali (bn)', 'Kannada (kn)', 'Odia (or)', 'Telugu (te)', 'Gujarati (gu)', 'Malayalam (ml)', 'Punjabi (pa)'].
* (optional) **translated-text-suggestion** is of type `text`.
Additionally, we also have one more field which is optional and is the following:
* **external_id:** This is an optional field that can be used to provide an external ID for the dataset record. This can be useful if you want to link the dataset record to an external resource, such as a database or a file.
### Data Splits
The dataset contains a single split, which is `train`.
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation guidelines
English gender-neutral dataset: translation into various Indic languages.
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] | 8,381 | [
[
-0.0528564453125,
-0.057769775390625,
0.0116424560546875,
0.0239410400390625,
-0.026153564453125,
-0.022003173828125,
-0.01500701904296875,
-0.040069580078125,
0.05499267578125,
0.053985595703125,
-0.05303955078125,
-0.0631103515625,
-0.05206298828125,
0.032... |
lazaroq11/bill_qa | 2023-10-14T20:51:03.000Z | [
"region:us"
] | lazaroq11 | null | null | 0 | 3 | 2023-10-14T20:45:04 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
rexionmars/llama-2-7B-portuguese-essay6k | 2023-10-14T21:00:15.000Z | [
"region:us"
] | rexionmars | null | null | 0 | 3 | 2023-10-14T20:57:05 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
poorguys/chinese_fonts_single_128x128 | 2023-10-15T08:04:37.000Z | [
"region:us"
] | poorguys | null | null | 0 | 3 | 2023-10-15T08:04:30 | ---
dataset_info:
features:
- name: image
dtype: image
- name: char
dtype: string
- name: unicode
dtype: string
- name: font
dtype: string
- name: font_type
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 244349.0
num_examples: 65
download_size: 239164
dataset_size: 244349.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "chinese_fonts_single_128x128"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 628 | [
[
-0.028289794921875,
-0.0177154541015625,
-0.005645751953125,
0.056915283203125,
-0.0292510986328125,
-0.0187225341796875,
-0.0030670166015625,
-0.00800323486328125,
0.06463623046875,
0.037933349609375,
-0.052642822265625,
-0.0560302734375,
-0.023681640625,
-... |
fishytorts/taylor_swift_clips_mini | 2023-10-16T08:06:58.000Z | [
"region:us"
] | fishytorts | null | null | 0 | 3 | 2023-10-15T10:17:15 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
TwoAbove/gpt4v-emotion-dataset | 2023-10-15T23:57:40.000Z | [
"region:us"
] | TwoAbove | null | null | 0 | 3 | 2023-10-15T13:31:48 | ---
dataset_info:
features:
- name: caption
dtype: string
- name: image
dtype: image
- name: link
dtype: string
- name: message_id
dtype: string
- name: timestamp
dtype: string
splits:
- name: train
num_bytes: 51420737.0
num_examples: 48
download_size: 51389687
dataset_size: 51420737.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "gpt4v-emotion-dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 600 | [
[
-0.045989990234375,
-0.0045623779296875,
0.0208892822265625,
0.02081298828125,
-0.0223236083984375,
-0.003177642822265625,
0.016143798828125,
-0.003940582275390625,
0.04949951171875,
0.007266998291015625,
-0.06256103515625,
-0.055877685546875,
-0.042327880859375... |
Soheil-FM/QA | 2023-10-15T17:47:36.000Z | [
"region:us"
] | Soheil-FM | null | null | 0 | 3 | 2023-10-15T17:46:47 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
MemGPT/llamaindex-api-docs | 2023-10-16T02:47:14.000Z | [
"region:us"
] | MemGPT | null | null | 5 | 3 | 2023-10-16T00:05:02 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
xinqiyang/iruca_llama2_1kplus | 2023-10-16T03:11:42.000Z | [
"region:us"
] | xinqiyang | null | null | 0 | 3 | 2023-10-16T02:36:45 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 1657441
num_examples: 1016
download_size: 970789
dataset_size: 1657441
---
# Dataset Card for "iruca_llama2_1kplus"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 446 | [
[
-0.038482666015625,
-0.00637054443359375,
-0.01021575927734375,
0.038909912109375,
-0.03369140625,
0.00270843505859375,
0.02313232421875,
-0.0187225341796875,
0.064697265625,
0.036895751953125,
-0.06561279296875,
-0.05572509765625,
-0.047607421875,
-0.011932... |
shengqin/web-attacks-ab | 2023-10-16T03:09:35.000Z | [
"region:us"
] | shengqin | null | null | 0 | 3 | 2023-10-16T02:55:17 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
shengqin/web-attacks-ab2 | 2023-10-16T03:31:52.000Z | [
"region:us"
] | shengqin | null | null | 0 | 3 | 2023-10-16T03:27:41 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
xinqiyang/iruca_llama2_1kplus_shuffle | 2023-10-16T03:28:55.000Z | [
"region:us"
] | xinqiyang | null | null | 0 | 3 | 2023-10-16T03:28:45 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 1657441.0
num_examples: 1016
download_size: 980826
dataset_size: 1657441.0
---
# Dataset Card for "iruca_llama2_1kplus_shuffle"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 458 | [
[
-0.037750244140625,
-0.003910064697265625,
-0.006866455078125,
0.044464111328125,
-0.04534912109375,
0.0065460205078125,
0.02752685546875,
-0.01099395751953125,
0.07000732421875,
0.03497314453125,
-0.0682373046875,
-0.04364013671875,
-0.046295166015625,
-0.0... |
Azinurrachman/OrangRimba | 2023-10-16T05:05:21.000Z | [
"region:us"
] | Azinurrachman | null | null | 0 | 3 | 2023-10-16T04:59:50 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
Siva1995/Proton_GENAI | 2023-10-16T06:46:30.000Z | [
"region:us"
] | Siva1995 | null | null | 0 | 3 | 2023-10-16T06:44:56 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
fahrialfiansyah/open_stax_instruction | 2023-10-16T12:10:08.000Z | [
"region:us"
] | fahrialfiansyah | null | null | 0 | 3 | 2023-10-16T07:57:27 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
Tngarg/slovene_test | 2023-10-16T11:22:02.000Z | [
"region:us"
] | Tngarg | null | null | 0 | 3 | 2023-10-16T11:22:01 | ---
dataset_info:
features:
- name: content
dtype: string
- name: sentiment
dtype: string
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 1887281
num_examples: 17479
download_size: 1288565
dataset_size: 1887281
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "slovene_test"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 527 | [
[
-0.041839599609375,
-0.02703857421875,
0.00396728515625,
0.031463623046875,
-0.014984130859375,
-0.01329803466796875,
0.01171875,
-0.006526947021484375,
0.051605224609375,
0.036712646484375,
-0.068603515625,
-0.067626953125,
-0.04248046875,
-0.01068878173828... |
marianna13/chemistry-stackexchange | 2023-10-16T13:00:56.000Z | [
"region:us"
] | marianna13 | null | null | 0 | 3 | 2023-10-16T11:51:18 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.057220458984375,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.00507354736328125,
0.0513916015625,
0.0169830322265625,
-0.052032470703125,
-0.014984130859375,
-0.060455322265625,
0.037... |
gayathrimanoj/dataset_new | 2023-10-16T16:37:15.000Z | [
"region:us"
] | gayathrimanoj | null | null | 0 | 3 | 2023-10-16T16:36:53 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.057220458984375,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.00507354736328125,
0.0513916015625,
0.0169830322265625,
-0.052032470703125,
-0.014984130859375,
-0.060455322265625,
0.037... |
kdawoud91/Arxiv_2000 | 2023-10-16T16:56:05.000Z | [
"region:us"
] | kdawoud91 | null | null | 0 | 3 | 2023-10-16T16:42:01 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
DataProvenanceInitiative/dialog_submix_original | 2023-10-16T17:38:01.000Z | [
"region:us"
] | DataProvenanceInitiative | null | null | 0 | 3 | 2023-10-16T17:37:44 | ---
dataset_info:
features:
- name: inputs
dtype: string
- name: targets
dtype: string
- name: task_source
dtype: string
- name: task_name
dtype: string
- name: template_type
dtype: string
splits:
- name: train
num_bytes: 1024507265
num_examples: 553869
download_size: 583008075
dataset_size: 1024507265
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "dialog_submix_original"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 618 | [
[
-0.037750244140625,
-0.03448486328125,
0.0027484893798828125,
-0.0013570785522460938,
-0.0215911865234375,
-0.0010766983032226562,
0.0241851806640625,
0.01436614990234375,
0.06103515625,
0.054229736328125,
-0.08245849609375,
-0.034515380859375,
-0.02558898925781... |
vwxyzjn/cai-conversation-new | 2023-10-20T14:53:49.000Z | [
"region:us"
] | vwxyzjn | null | null | 0 | 3 | 2023-10-16T17:40:26 | ---
dataset_info:
features:
- name: init_prompt
struct:
- name: content
dtype: string
- name: role
dtype: string
- name: init_response
struct:
- name: content
dtype: string
- name: role
dtype: string
- name: critic_prompt
struct:
- name: content
dtype: string
- name: role
dtype: string
- name: critic_response
struct:
- name: content
dtype: string
- name: role
dtype: string
- name: revision_prompt
struct:
- name: content
dtype: string
- name: role
dtype: string
- name: revision_response
struct:
- name: content
dtype: string
- name: role
dtype: string
splits:
- name: train
num_bytes: 129500
num_examples: 100
download_size: 67693
dataset_size: 129500
---
# Dataset Card for "cai-conversation-new"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 1,008 | [
[
-0.046295166015625,
-0.0345458984375,
0.00527191162109375,
0.0147857666015625,
-0.01404571533203125,
0.0085296630859375,
0.0174713134765625,
-0.0194549560546875,
0.0712890625,
0.032684326171875,
-0.051544189453125,
-0.05194091796875,
-0.037567138671875,
-0.0... |
kewu93/dreambooth_full | 2023-10-16T18:27:34.000Z | [
"region:us"
] | kewu93 | null | null | 0 | 3 | 2023-10-16T18:27:18 | ---
dataset_info:
features:
- name: image
dtype: image
- name: text
dtype: string
- name: subject_name
dtype: string
splits:
- name: train
num_bytes: 111680598.0
num_examples: 158
download_size: 111587177
dataset_size: 111680598.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "dreambooth_full"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 526 | [
[
-0.032623291015625,
-0.0096282958984375,
0.0211181640625,
0.02105712890625,
-0.0312347412109375,
0.02264404296875,
0.01383209228515625,
-0.0164642333984375,
0.08367919921875,
0.041778564453125,
-0.053192138671875,
-0.049346923828125,
-0.032470703125,
-0.0240... |
AlekhyaD/checkGPT | 2023-10-17T00:40:55.000Z | [
"region:us"
] | AlekhyaD | null | null | 0 | 3 | 2023-10-16T20:37:56 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
Sunhyong/llama2_test | 2023-10-17T01:22:24.000Z | [
"region:us"
] | Sunhyong | null | null | 0 | 3 | 2023-10-17T01:22:01 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
Phaedrus/CSAW_dense_30_train | 2023-10-17T06:50:16.000Z | [
"region:us"
] | Phaedrus | null | null | 0 | 3 | 2023-10-17T06:49:50 | ---
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype: image
splits:
- name: train
num_bytes: 308923702.0
num_examples: 264
download_size: 45246242
dataset_size: 308923702.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "CSAW_dense_30_train"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 488 | [
[
-0.048065185546875,
-0.0018672943115234375,
0.0229644775390625,
0.038055419921875,
-0.006397247314453125,
0.004367828369140625,
0.01160430908203125,
-0.004131317138671875,
0.04046630859375,
0.035919189453125,
-0.0675048828125,
-0.04541015625,
-0.037353515625,
... |
tr416/nils-GPT_dataset_20231017_075324 | 2023-10-17T07:53:27.000Z | [
"region:us"
] | tr416 | null | null | 0 | 3 | 2023-10-17T07:53:24 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
splits:
- name: train
num_bytes: 75203880.0
num_examples: 29285
- name: test
num_bytes: 760128.0
num_examples: 296
download_size: 12803835
dataset_size: 75964008.0
---
# Dataset Card for "nils-GPT_dataset_20231017_075324"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 618 | [
[
-0.038055419921875,
-0.01043701171875,
0.02099609375,
0.01428985595703125,
-0.022796630859375,
-0.0245208740234375,
0.0237884521484375,
-0.007541656494140625,
0.04180908203125,
0.03082275390625,
-0.0562744140625,
-0.042236328125,
-0.03985595703125,
-0.000426... |
carnival13/rbrt_test_val_lrg2 | 2023-10-17T08:05:17.000Z | [
"region:us"
] | carnival13 | null | null | 0 | 3 | 2023-10-17T08:02:15 | ---
dataset_info:
features:
- name: label
dtype: int64
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
splits:
- name: train
num_bytes: 148079605
num_examples: 104550
download_size: 32715970
dataset_size: 148079605
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "rbrt_test_val_lrg2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 537 | [
[
-0.04022216796875,
-0.040740966796875,
-0.00428009033203125,
0.00823211669921875,
-0.01532745361328125,
0.00689697265625,
0.02215576171875,
-0.0135650634765625,
0.032196044921875,
0.0297393798828125,
-0.042755126953125,
-0.0297088623046875,
-0.037933349609375,
... |
ThaLEE/last_pictures | 2023-10-17T10:45:52.000Z | [
"region:us"
] | ThaLEE | null | null | 0 | 3 | 2023-10-17T10:29:14 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
chemNLP/chemrxiv_licensing | 2023-11-01T15:54:42.000Z | [
"region:us"
] | chemNLP | null | null | 0 | 3 | 2023-10-17T14:30:37 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
tyzhu/eval_tag_nq_dev_v10_first_0 | 2023-10-17T16:44:42.000Z | [
"region:us"
] | tyzhu | null | null | 0 | 3 | 2023-10-17T16:44:35 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
dataset_info:
features:
- name: question
dtype: string
- name: title
dtype: string
- name: inputs
dtype: string
- name: targets
dtype: string
- name: answers
struct:
- name: answer_start
sequence: 'null'
- name: text
sequence: string
- name: id
dtype: string
- name: titles
dtype: string
splits:
- name: train
num_bytes: 2292
num_examples: 10
- name: validation
num_bytes: 1620691
num_examples: 6515
download_size: 941682
dataset_size: 1622983
---
# Dataset Card for "eval_tag_nq_dev_v10_first_0"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 859 | [
[
-0.04644775390625,
-0.0245208740234375,
0.006656646728515625,
0.01161956787109375,
-0.01201629638671875,
0.022247314453125,
0.030242919921875,
0.003147125244140625,
0.0662841796875,
0.032684326171875,
-0.050201416015625,
-0.05194091796875,
-0.02294921875,
0.... |
huangyt/FINETUNE10 | 2023-10-17T17:08:29.000Z | [
"region:us"
] | huangyt | null | null | 0 | 3 | 2023-10-17T17:07:04 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
hishab/MegaBNSpeech_Test_Data | 2023-10-20T05:13:37.000Z | [
"task_categories:automatic-speech-recognition",
"language:bn",
"license:cc-by-nc-4.0",
"region:us"
] | hishab | null | null | 0 | 3 | 2023-10-17T18:25:59 | ---
language:
- bn
license: cc-by-nc-4.0
task_categories:
- automatic-speech-recognition
dataset_info:
features:
- name: audio
dtype: audio
- name: text
dtype: string
- name: duration
dtype: float64
- name: category
dtype: string
- name: source
dtype: string
splits:
- name: train
num_bytes: 219091915.875
num_examples: 1753
download_size: 214321460
dataset_size: 219091915.875
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# MegaBNSpeech Test Data
To evaluate the performance of the models, we used four test sets. Two of these were developed as part of the MegaBNSpeech corpus, while the remaining two (Fleurs and Common Voice) are commonly used test sets that are widely recognized by the speech community.
## Usage with the `datasets` library
```python
from datasets import load_dataset
dataset = load_dataset("hishab/MegaBNSpeech_Test_Data")
```
## Reported word error rate (WER) / character error rate (CER) on the MegaBNSpeech test sets
| Category | Duration (hr) | Hishab BN FastConformer | Google MMS | OOD-speech |
|-------------------- | -------------- | ------------ | ---------- | ----------- |
| MegaBNSpeech-YT | 8.1 | 6.4/3.39 | 28.3/18.88 | 51.1/23.49 |
| MegaBNSpeech-Tel | 1.9 | ∗40.7/24.38 | ∗59/41.26 | ∗76.8/39.36 |
## Reported word error rate (WER) / character error rate (CER) per content category
| Category | Duration (hr) | Hishab BN FastConformer | Google MMS | OOD-speech |
|-------------------- | -------------- | ------------ | ---------- | ----------- |
| News | 1.21 | 2.5/1.21 | 18.9/10.46 | 52.2/21.65 |
| Talkshow | 1.39 | 6/3.29 | 28/18.71 | 48.8/21.5 |
| Courses | 3.81 | 6.8/3.79 | 30.8/21.64 | 50.2/23.52 |
| Drama | 0.03 | 10.3/7.47 | 37.3/27.43 | 64.3/32.74 |
| Science | 0.26 | 5/1.92 | 20.6/11.4 | 45.3/19.93 |
| Vlog | 0.18 | 11.3/6.69 | 33/22.9 | 57.9/27.18 |
| Recipe | 0.58 | 7.5/3.29 | 26.4/16.6 | 53.3/26.89 |
| Waz | 0.49 | 9.6/5.45 | 33.3/23.1 | 57.3/27.46 |
| Movie | 0.1 | 8/4.64 | 35.2/23.88 | 64.4/34.96 |
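As an illustrative sketch (not the exact scoring script used for the tables above), WER and CER can both be computed from an edit distance, taken over words for WER and over characters for CER:

```python
def edit_distance(ref, hyp):
    # Classic Levenshtein dynamic program over two sequences,
    # kept to a single rolling row for O(len(hyp)) memory.
    m, n = len(ref), len(hyp)
    dp = list(range(n + 1))
    for i in range(1, m + 1):
        prev, dp[0] = dp[0], i
        for j in range(1, n + 1):
            cur = dp[j]
            # deletion, insertion, substitution (or match)
            dp[j] = min(dp[j] + 1, dp[j - 1] + 1, prev + (ref[i - 1] != hyp[j - 1]))
            prev = cur
    return dp[n]

def wer(reference, hypothesis):
    # Word error rate; assumes a non-empty, whitespace-tokenized reference.
    ref_words = reference.split()
    return edit_distance(ref_words, hypothesis.split()) / len(ref_words)

def cer(reference, hypothesis):
    # Character error rate; assumes a non-empty reference string.
    return edit_distance(reference, hypothesis) / len(reference)
```

For example, `wer("hello world", "hello word")` is 0.5 (one substituted word out of two).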
| 2,415 | [
[
-0.03790283203125,
-0.053253173828125,
0.003467559814453125,
0.0246124267578125,
-0.00591278076171875,
-0.003086090087890625,
-0.0189666748046875,
-0.01192474365234375,
0.0251617431640625,
0.0171051025390625,
-0.03546142578125,
-0.038970947265625,
-0.04617309570... |
kjappelbaum/chemnlp-mp-bulk-modulus | 2023-10-17T21:18:06.000Z | [
"region:us"
] | kjappelbaum | null | null | 0 | 3 | 2023-10-17T21:17:48 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
hiddenbox/angry_dog_train | 2023-10-19T03:23:03.000Z | [
"region:us"
] | hiddenbox | null | null | 0 | 3 | 2023-10-18T02:46:20 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': angry
'1': happy
'2': sad
splits:
- name: train
num_bytes: 12998726.0
num_examples: 69
download_size: 0
dataset_size: 12998726.0
---
# Dataset Card for "angry_dog_train"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 563 | [
[
-0.0474853515625,
-0.00662994384765625,
0.006038665771484375,
0.031463623046875,
-0.006534576416015625,
-0.0074310302734375,
0.012908935546875,
-0.00691986083984375,
0.038818359375,
0.01097869873046875,
-0.06005859375,
-0.040374755859375,
-0.0411376953125,
-... |
joey234/sentiment_analysis_affix | 2023-10-18T04:39:24.000Z | [
"region:us"
] | joey234 | null | null | 0 | 3 | 2023-10-18T04:39:16 | ---
dataset_info:
features:
- name: label
dtype:
class_label:
names:
'0': neg
'1': pos
- name: text
dtype: string
splits:
- name: train
num_bytes: 390794.9858712716
num_examples: 7318
download_size: 194325
dataset_size: 390794.9858712716
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "sentiment_analysis_affix"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 570 | [
[
-0.049285888671875,
-0.0235443115234375,
0.0038013458251953125,
0.035888671875,
-0.0134735107421875,
-0.00826263427734375,
0.013275146484375,
0.00006085634231567383,
0.0631103515625,
0.0226287841796875,
-0.07025146484375,
-0.05682373046875,
-0.04901123046875,
... |
joey234/sentiment_analysis_affix_small | 2023-10-18T05:37:50.000Z | [
"region:us"
] | joey234 | null | null | 0 | 3 | 2023-10-18T05:37:45 | ---
dataset_info:
features:
- name: label
dtype:
class_label:
names:
'0': neg
'1': pos
- name: text
dtype: string
splits:
- name: train
num_bytes: 21821.81818181818
num_examples: 170
download_size: 18565
dataset_size: 21821.81818181818
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "sentiment_analysis_affix_small"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 574 | [
[
-0.05035400390625,
-0.02520751953125,
0.01229095458984375,
0.0245208740234375,
-0.0195159912109375,
-0.0217742919921875,
-0.001552581787109375,
0.003475189208984375,
0.07080078125,
0.01776123046875,
-0.0670166015625,
-0.0477294921875,
-0.04595947265625,
-0.0... |
Dodon/Corrosion | 2023-10-18T09:13:36.000Z | [
"region:us"
] | Dodon | \ | \ | 0 | 3 | 2023-10-18T06:41:25 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
SalarMaca/nips2023_dataset | 2023-10-26T09:32:20.000Z | [
"license:mit",
"region:us"
] | SalarMaca | null | null | 0 | 3 | 2023-10-18T10:14:15 | ---
license: mit
---
The datasets used in this competition come from many open-source databases and two manually written files, which we wrote ourselves.
## manually written files
#### helpless_answer.txt
When a human asks a meaningless question, the model gives a helpless reply.
We create this kind of chat by building valid questions and answers from "helpless_answer.txt".
#### identity_chat.txt
The self-identity of our model should not be the same as that of its original base model or other well-known models.
| 520 | [
[
-0.0168609619140625,
-0.04705810546875,
0.02105712890625,
0.01348114013671875,
-0.00713348388671875,
-0.005374908447265625,
0.01038360595703125,
-0.031097412109375,
0.00695037841796875,
0.054656982421875,
-0.06671142578125,
-0.0274505615234375,
-0.00379753112792... |
Tommert25/1810testsplit | 2023-10-18T12:01:05.000Z | [
"region:us"
] | Tommert25 | null | null | 0 | 3 | 2023-10-18T12:00:17 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
vsarathy/nl-robotics-translation-simple_english-2k-novelty-no-context-TEST | 2023-10-18T14:23:24.000Z | [
"region:us"
] | vsarathy | null | null | 0 | 3 | 2023-10-18T14:22:49 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
yardeny/tokenized_gpt2_context_len_64 | 2023-10-18T14:49:57.000Z | [
"region:us"
] | yardeny | null | null | 0 | 3 | 2023-10-18T14:40:59 | ---
dataset_info:
features:
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
splits:
- name: train
num_bytes: 8074990564
num_examples: 80462898
download_size: 3552230822
dataset_size: 8074990564
---
# Dataset Card for "tokenized_gpt2_context_len_64"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 435 | [
[
-0.0307769775390625,
-0.024871826171875,
0.021820068359375,
0.0179595947265625,
-0.0401611328125,
-0.01465606689453125,
-0.0002574920654296875,
-0.01568603515625,
0.0352783203125,
0.0214996337890625,
-0.03851318359375,
-0.046051025390625,
-0.054962158203125,
... |
mrseba/boston_house_price | 2023-10-18T20:05:23.000Z | [
"region:us"
] | mrseba | null | null | 0 | 3 | 2023-10-18T20:04:44 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
sproos/wikipedia-paragraphs-400k | 2023-10-18T20:09:50.000Z | [
"region:us"
] | sproos | null | null | 0 | 3 | 2023-10-18T20:05:37 | ---
dataset_info:
features:
- name: url
dtype: string
- name: section title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 1111101079
num_examples: 415443
download_size: 651692921
dataset_size: 1111101079
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "wikipedia-paragraphs-400k"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 537 | [
[
-0.060821533203125,
-0.01502227783203125,
0.019287109375,
0.027862548828125,
-0.0180816650390625,
-0.0242767333984375,
0.0009474754333496094,
-0.00820159912109375,
0.046142578125,
0.028167724609375,
-0.055450439453125,
-0.049346923828125,
-0.037628173828125,
... |
mujammil131/eyeDiseasDdetectionModel | 2023-10-19T07:49:41.000Z | [
"region:us"
] | mujammil131 | null | null | 0 | 3 | 2023-10-18T20:33:21 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': AMD
'1': Cataract
'2': Diabetes
'3': Glaucoma
'4': Hypertension
'5': Myopia
'6': Normal
'7': Other
splits:
- name: train
num_bytes: 379989329.608
num_examples: 6392
download_size: 0
dataset_size: 379989329.608
---
# Dataset Card for "eyeDiseasDdetectionModel"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 703 | [
[
-0.0413818359375,
-0.0214691162109375,
0.002185821533203125,
0.01120758056640625,
-0.0122833251953125,
-0.00875091552734375,
0.0259552001953125,
-0.0289306640625,
0.04486083984375,
0.058197021484375,
-0.06658935546875,
-0.05615234375,
-0.04364013671875,
-0.0... |
octa-cba/codigo_procesal_laboral_sin_etiquetas.txt | 2023-10-18T23:38:14.000Z | [
"region:us"
] | octa-cba | null | null | 0 | 3 | 2023-10-18T23:37:08 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
huangyt/FINETUNE11 | 2023-10-19T05:07:18.000Z | [
"region:us"
] | huangyt | null | null | 0 | 3 | 2023-10-19T05:06:15 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
Hoga2/Shinchan | 2023-10-19T08:51:40.000Z | [
"region:us"
] | Hoga2 | null | null | 0 | 3 | 2023-10-19T08:41:34 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
bh8648/split_dataset_3 | 2023-10-19T10:18:35.000Z | [
"region:us"
] | bh8648 | null | null | 0 | 3 | 2023-10-19T10:18:31 | ---
dataset_info:
features:
- name: instruction
dtype: string
- name: output
dtype: string
- name: page_num
dtype: int64
splits:
- name: train
num_bytes: 560241
num_examples: 212
download_size: 279352
dataset_size: 560241
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "split_dataset_3"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 517 | [
[
-0.046844482421875,
-0.0250091552734375,
0.020172119140625,
0.025909423828125,
-0.023101806640625,
0.00530242919921875,
0.029449462890625,
-0.0160064697265625,
0.055877685546875,
0.040618896484375,
-0.05303955078125,
-0.03857421875,
-0.039764404296875,
-0.00... |
bh8648/split_dataset_20 | 2023-10-19T10:19:33.000Z | [
"region:us"
] | bh8648 | null | null | 0 | 3 | 2023-10-19T10:19:31 | ---
dataset_info:
features:
- name: instruction
dtype: string
- name: output
dtype: string
- name: page_num
dtype: int64
splits:
- name: train
num_bytes: 943819
num_examples: 215
download_size: 474283
dataset_size: 943819
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "split_dataset_20"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 518 | [
[
-0.058013916015625,
-0.0269012451171875,
0.0096282958984375,
0.032196044921875,
-0.025665283203125,
0.020263671875,
0.0192413330078125,
-0.01503753662109375,
0.055908203125,
0.040435791015625,
-0.06231689453125,
-0.038970947265625,
-0.0467529296875,
-0.01116... |
zhangyi617/my-image-captioning-dataset | 2023-10-20T14:04:44.000Z | [
"region:us"
] | zhangyi617 | null | null | 0 | 3 | 2023-10-19T10:51:09 | ---
dataset_info:
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 1583175.0
num_examples: 18
download_size: 1584330
dataset_size: 1583175.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "my-image-captioning-dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 490 | [
[
-0.04901123046875,
-0.004383087158203125,
0.00873565673828125,
0.0220794677734375,
-0.024993896484375,
0.01268768310546875,
0.0255889892578125,
-0.0021686553955078125,
0.06292724609375,
0.0457763671875,
-0.05450439453125,
-0.0413818359375,
-0.044952392578125,
... |
tyzhu/find_second_sent_train_30_eval_10 | 2023-10-31T14:47:24.000Z | [
"region:us"
] | tyzhu | null | null | 0 | 3 | 2023-10-19T13:36:39 | ---
dataset_info:
features:
- name: inputs
dtype: string
- name: targets
dtype: string
- name: title
dtype: string
- name: context
dtype: string
splits:
- name: train
num_bytes: 89174
num_examples: 70
- name: validation
num_bytes: 10923
num_examples: 10
download_size: 63471
dataset_size: 100097
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
---
# Dataset Card for "find_second_sent_train_30_eval_10"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 675 | [
[
-0.03631591796875,
-0.0120391845703125,
0.01617431640625,
0.03662109375,
-0.005218505859375,
0.004863739013671875,
0.0185546875,
0.0083160400390625,
0.0458984375,
0.0225372314453125,
-0.0660400390625,
-0.038543701171875,
-0.04052734375,
-0.0211029052734375,
... |
tyzhu/find_second_sent_train_50_eval_10 | 2023-10-31T14:48:02.000Z | [
"region:us"
] | tyzhu | null | null | 0 | 3 | 2023-10-19T15:05:42 | ---
dataset_info:
features:
- name: inputs
dtype: string
- name: targets
dtype: string
- name: title
dtype: string
- name: context
dtype: string
splits:
- name: train
num_bytes: 133699
num_examples: 110
- name: validation
num_bytes: 9071
num_examples: 10
download_size: 80138
dataset_size: 142770
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
---
# Dataset Card for "find_second_sent_train_50_eval_10"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 676 | [
[
-0.034515380859375,
-0.00982666015625,
0.015380859375,
0.034210205078125,
-0.004878997802734375,
0.00493621826171875,
0.0167694091796875,
0.0098724365234375,
0.047271728515625,
0.021514892578125,
-0.06427001953125,
-0.03973388671875,
-0.040924072265625,
-0.0... |
tyzhu/find_last_sent_train_50_eval_10 | 2023-10-31T14:48:10.000Z | [
"region:us"
] | tyzhu | null | null | 0 | 3 | 2023-10-19T15:05:48 | ---
dataset_info:
features:
- name: inputs
dtype: string
- name: targets
dtype: string
- name: title
dtype: string
- name: context
dtype: string
splits:
- name: train
num_bytes: 133975
num_examples: 110
- name: validation
num_bytes: 8961
num_examples: 10
download_size: 80288
dataset_size: 142936
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
---
# Dataset Card for "find_last_sent_train_50_eval_10"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 674 | [
[
-0.03131103515625,
-0.00750732421875,
0.0296478271484375,
0.024139404296875,
-0.0013599395751953125,
0.00395965576171875,
0.0101318359375,
0.016448974609375,
0.04833984375,
0.0264129638671875,
-0.054718017578125,
-0.050537109375,
-0.02996826171875,
-0.005737... |
jiwon65/aihub_general_6000_for_train | 2023-10-19T15:13:05.000Z | [
"region:us"
] | jiwon65 | null | null | 0 | 3 | 2023-10-19T15:07:25 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: text
dtype: string
- name: audio
sequence: float32
splits:
- name: train
num_bytes: 1212419491
num_examples: 6000
download_size: 1071487189
dataset_size: 1212419491
---
# Dataset Card for "korean-general-command-voice_0-6000_samplingRate-16000"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 529 | [
[
-0.04522705078125,
-0.0115509033203125,
0.0073394775390625,
0.0255126953125,
-0.02264404296875,
0.00382232666015625,
-0.002124786376953125,
0.0013742446899414062,
0.06134033203125,
0.039306640625,
-0.0703125,
-0.057769775390625,
-0.029815673828125,
-0.017715... |
tyzhu/find_last_sent_train_10_eval_10_hint3 | 2023-10-31T14:51:47.000Z | [
"region:us"
] | tyzhu | null | null | 0 | 3 | 2023-10-19T17:06:30 | ---
dataset_info:
features:
- name: inputs
dtype: string
- name: targets
dtype: string
- name: title
dtype: string
- name: context
dtype: string
splits:
- name: train
num_bytes: 39585
num_examples: 30
- name: validation
num_bytes: 9250
num_examples: 10
download_size: 45630
dataset_size: 48835
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
---
# Dataset Card for "find_last_sent_train_10_eval_10_hint3"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 677 | [
[
-0.0275421142578125,
-0.01187896728515625,
0.041351318359375,
0.026611328125,
-0.00414276123046875,
-0.0015850067138671875,
0.01177978515625,
0.011993408203125,
0.042999267578125,
0.02191162109375,
-0.061492919921875,
-0.047760009765625,
-0.029449462890625,
... |
tyzhu/find_last_sent_train_50_eval_10_hint3 | 2023-10-31T14:54:40.000Z | [
"region:us"
] | tyzhu | null | null | 0 | 3 | 2023-10-19T17:09:32 | ---
dataset_info:
features:
- name: inputs
dtype: string
- name: targets
dtype: string
- name: title
dtype: string
- name: context
dtype: string
splits:
- name: train
num_bytes: 135382
num_examples: 110
- name: validation
num_bytes: 9233
num_examples: 10
download_size: 81619
dataset_size: 144615
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
---
# Dataset Card for "find_last_sent_train_50_eval_10_hint3"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 680 | [
[
-0.0291748046875,
-0.00611114501953125,
0.0369873046875,
0.026763916015625,
-0.003509521484375,
-0.0007033348083496094,
0.01143646240234375,
0.016845703125,
0.039764404296875,
0.02001953125,
-0.0618896484375,
-0.0504150390625,
-0.026336669921875,
-0.00186157... |
tyzhu/find_last_sent_train_10_eval_10_hint10 | 2023-10-31T14:50:50.000Z | [
"region:us"
] | tyzhu | null | null | 0 | 3 | 2023-10-19T18:53:06 | ---
dataset_info:
features:
- name: inputs
dtype: string
- name: targets
dtype: string
- name: title
dtype: string
- name: context
dtype: string
splits:
- name: train
num_bytes: 40003
num_examples: 30
- name: validation
num_bytes: 9702
num_examples: 10
download_size: 46718
dataset_size: 49705
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
---
# Dataset Card for "find_last_sent_train_10_eval_10_hint10"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 678 | [
[
-0.028961181640625,
-0.0103912353515625,
0.037261962890625,
0.0264739990234375,
-0.002628326416015625,
0.00202178955078125,
0.0077362060546875,
0.0140228271484375,
0.047760009765625,
0.0218658447265625,
-0.06256103515625,
-0.04632568359375,
-0.031768798828125,
... |
tyzhu/find_second_sent_train_10_eval_10_hint5 | 2023-10-31T14:51:10.000Z | [
"region:us"
] | tyzhu | null | null | 0 | 3 | 2023-10-19T18:56:50 | ---
dataset_info:
features:
- name: inputs
dtype: string
- name: targets
dtype: string
- name: title
dtype: string
- name: context
dtype: string
splits:
- name: train
num_bytes: 39713
num_examples: 30
- name: validation
num_bytes: 9412
num_examples: 10
download_size: 45009
dataset_size: 49125
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
---
# Dataset Card for "find_second_sent_train_10_eval_10_hint5"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 679 | [
[
-0.032562255859375,
-0.00936126708984375,
0.024505615234375,
0.03369140625,
-0.0098724365234375,
0.0016946792602539062,
0.0164337158203125,
0.006076812744140625,
0.041473388671875,
0.0175018310546875,
-0.073974609375,
-0.04229736328125,
-0.041900634765625,
-... |
erhwenkuo/moss-003-sft-chinese-zhtw | 2023-10-21T00:38:13.000Z | [
"task_categories:conversational",
"size_categories:1M<n<10M",
"language:zh",
"license:cc",
"region:us"
] | erhwenkuo | null | null | 2 | 3 | 2023-10-20T00:19:41 | ---
dataset_info:
features:
- name: conversation_id
dtype: int64
- name: category
dtype: string
- name: conversation
list:
- name: human
dtype: string
- name: assistant
dtype: string
splits:
- name: train
num_bytes: 8438001353
num_examples: 1074551
download_size: 4047825896
dataset_size: 8438001353
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
license: cc
task_categories:
- conversational
language:
- zh
size_categories:
- 1M<n<10M
---
# Dataset Card for "moss-003-sft-chinese-zhtw"
## Dataset Summary
This dataset contains the data collected for the project [MOSS: an open-source conversational language model](https://github.com/OpenLMLab/MOSS).
[MOSS](https://github.com/OpenLMLab/MOSS) is an open-source conversational language model that supports Chinese-English bilingual dialogue and a variety of plugins. The moss-moon series models have 16 billion parameters and can run on a single A100/A800 or two 3090 GPUs at FP16 precision, or on a single 3090 GPU at INT4/8 precision. The MOSS base language model was pretrained on about 700 billion Chinese, English, and code tokens; it then acquired multi-turn dialogue ability and the ability to use multiple plugins through dialogue instruction fine-tuning, plugin-augmented learning, and human-preference training.
## Original Data Sources
- [moss-003-sft-data](https://github.com/OpenLMLab/MOSS/tree/main/SFT_data): the multi-turn dialogue data used for `moss-moon-003-sft`, constructed from roughly 100,000 user inputs collected during the MOSS-002 internal beta together with gpt-3.5-turbo. Compared with `moss-002-sft-data`, `moss-003-sft-data` better matches the distribution of real user intents and contains finer-grained helpfulness category labels, broader harmlessness data, and longer dialogues, totaling about 1.1 million dialogue examples. The full data has been open-sourced.
- [YeungNLP/moss-003-sft-data](https://huggingface.co/datasets/YeungNLP/moss-003-sft-data)
## Data Download and Cleaning
Building on the original dataset, [YeungNLP/moss-003-sft-data](https://huggingface.co/datasets/YeungNLP/moss-003-sft-data) removed redundant information, keeping only the valid dialogue messages, and adjusted the data format so it can be organized more flexibly during training.
1. Download [YeungNLP/moss-003-sft-data](https://huggingface.co/datasets/YeungNLP/moss-003-sft-data)
2. Convert Simplified Chinese to Traditional Chinese with OpenCC
3. Upload to the Hugging Face Hub with Hugging Face Datasets
## Dataset Structure
The original dataset is in jsonl format; each line is one multi-turn dialogue sample with the following structure:
```json
{
"conversation_id":1,
"category":"Brainstorming",
"conversation":[
{
"human":"如何保障工作中遵循正確的安全準則?",
"assistant":"為了保障工作中遵循正確的安全準則,以下是一些建議:\n\n1.瞭解相關安全規定和標準:瞭解相關的安全規定和標準,並遵守它們。這可以包括公司和政府的安全標準,以及行業標準和最佳實踐。\n\n2.培訓和教育:確保您和您的同事接受了必要的培訓和教育,以瞭解正確的安全準則和行為。\n\n3.使用正確的工具和裝置:確保您使用正確的工具和裝置,並且它們得到了正確的維護和保養。\n\n4.個人防護裝備:確保您和您的同事穿戴正確的個人防護裝備,如安全鞋、透明眼鏡或面罩、手套等。\n\n5.建立一個安全文化:鼓勵個人對安全的責任感和意識,並建立一個安全文化,使人們始終關注他們自己和他人的安全。\n\n6.持續監測和改進:持續監測和改進安全準則和程式,以確保它們保持最新,並適應變化的情況。\n\n這些措施可以幫助您和您的同事在工作中保障正確的安全準則,確保您的工作場所是一個安全的環境。"
},
{
"human":"如何應對突發緊急情況?",
"assistant":"突發緊急情況可能會在任何時間發生,為了應對這些情況,以下是一些建議:\n\n1.保持冷靜:保持冷靜和清醒,以便能夠更好地應對情況。\n\n2.評估情況:評估緊急情況的性質和範圍,以便了解應該採取哪些行動。\n\n3.觸發應急計劃:如果您的機構或公司有應急計劃,立即觸發它,並按照計劃的步驟行動。\n\n4.聯絡相關人員:通知必要的人員,例如緊急服務、管理層或其他相關人員,並與他們保持聯絡。\n\n5.遵循安全準則:在處理緊急情況時,請遵循正確的安全準則,確保自己和其他人的安全。\n\n6.提供及時資訊:儘快向相關人員提供準確的資訊,以便他們能夠做出正確的決策。\n\n7.合作應對:與其他人合作應對緊急情況並相互支援。\n\n總之,應對突發緊急情況需要保持冷靜和清醒,評估情況並採取適當的行動。同時,需要遵循正確的安全準則,並與其他人合作應對緊急情況。"
},
{
"human":"在網路安全方面,如何避免遭受駭客攻擊?",
"assistant":"以下是一些避免遭受駭客攻擊的建議:\n\n1.更新安全軟體:確保您的作業系統、應用程式和安全軟體是最新的,並隨時更新它們。這可以幫助您及時發現和修復安全漏洞。\n\n2.使用強密碼:使用強密碼,包括字母、數字和特殊字元。不要使用容易猜測的密碼,如生日、姓名或常用單詞。\n\n3.使用多因素身份驗證:使用多因素身份驗證來增加賬戶的安全性,例如傳送簡訊驗證碼或使用指紋識別。\n\n4.小心附件和連結:不要開啟未知的附件或連結,因為它們可能包含惡意軟體。\n\n5.備份重要檔案:定期備份重要檔案,並將其儲存在安全的位置。\n\n6.保護個人資訊:不要在網際網路上公開個人資訊,例如出生日期、地址或社交安全號碼。\n\n7.使用加密連線:在訪問敏感資訊時,使用加密連線,如HTTPS。\n\n8.教育員工:向員工提供網路安全培訓,並確保他們瞭解正確的安全措施和行為。\n\n總之,避免遭受駭客攻擊需要採取多種措施,包括更新安全軟體、使用強密碼、使用多因素身份驗證、小心附件和連結、備份重要檔案、保護個人資訊、使用加密連線以及教育員工。這些措施可以幫助您保護個人資訊和資料免受駭客攻擊。"
},
{
"human":"如何確保網路連線的安全性?",
"assistant":"以下是一些確保網路連線安全性的建議:\n\n1.使用加密連線:使用加密連線,如HTTPS,以確保資料傳輸過程中不被竊聽或篡改。\n\n2.使用防病毒軟體:安裝和更新防病毒軟體,可以幫助檢測和清除惡意軟體。\n\n3.使用防火牆:使用防火牆保護網路和裝置,可以幫助阻止未經授權的訪問和攻擊。\n\n4.禁止未經授權的訪問:禁止未經授權的訪問,包括限制網路訪問和裝置訪問許可權。\n\n5.定期更新作業系統和應用程式:定期更新作業系統和應用程式,以修復安全漏洞和缺陷。\n\n6.使用強密碼:使用強密碼,包括字母、數字和特殊字元,並定期更改密碼。\n\n7.控制訪問:控制訪問網路和裝置的人員,僅授權人員可訪問重要資源。\n\n8.教育員工:向員工提供網路安全培訓,並確保他們瞭解正確的安全措施和行為。\n\n總之,確保網路連線的安全性需要採取多種措施,包括使用加密連線、使用防病毒軟體、使用防火牆、禁止未經授權的訪問、定期更新作業系統和應用程式、使用強密碼、控制訪問和教育員工。這些措施可以幫助您保護網路和裝置免受未經授權的訪問和攻擊。"
},
{
"human":"對於金融機構來說,如何保護客戶的隱私和資料安全?",
"assistant":"金融機構需要對客戶的隱私和資料安全高度重視,以下是一些保護客戶隱私和資料安全的建議:\n\n1.遵守合規要求:確保遵守所有適用的合規要求和監管標準,包括資料隱私法規和安全標準。\n\n2.加強身份驗證:採用多因素身份驗證,例如使用簡訊驗證碼、指紋識別等,以確保客戶身份的真實性。\n\n3.加密敏感資料:使用加密技術對敏感資料進行加密,以保護客戶資訊不被未經授權的人員獲得。\n\n4.建立安全控制:建立安全控制,包括訪問控制、安全審計和安全事件響應,以保護客戶資訊的安全性。\n\n5.教育員工:向員工提供網路安全培訓,並確保他們瞭解正確的安全措施和行為。\n\n6.定期測試:定期進行安全測試和漏洞掃描,以發現和修復潛在的安全漏洞。\n\n7.建立應急計劃:建立應急計劃,並對其進行定期測試,以確保在緊急情況下能夠及時響應和恢復。\n\n總之,保護客戶隱私和資料安全需要採取多種措施,包括遵守合規要求、加強身份驗證、加密敏感資料、建立安全控制、教育員工、定期測試和建立應急計劃。這些措施可以幫助金融機構保護客戶資訊的安全性和隱私。"
}
]
}
```
## Data Fields
- `conversation_id`: (int64) sample ID
- `category`: (string) dialogue category
- `conversation`: (list) multi-turn dialogue
  - `human`: (string) the user's query
  - `assistant`: (string) the assistant's answer
## How to Use
```python
from datasets import load_dataset
dataset = load_dataset("erhwenkuo/moss-003-sft-chinese-zhtw", split="train")
```
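As a minimal sketch (the field names follow the schema above; the packing format is an assumption, not the one used to train MOSS), each multi-turn conversation can be flattened into (prompt, response) training pairs, where each prompt carries the full dialogue history:

```python
def flatten_conversation(example):
    """Turn one dataset row into a list of (prompt, response) pairs."""
    pairs = []
    history = ""
    for turn in example["conversation"]:
        # The prompt is the accumulated history plus the current user query.
        prompt = history + "Human: " + turn["human"] + "\nAssistant: "
        pairs.append((prompt, turn["assistant"]))
        # Fold the completed turn back into the history for the next pair.
        history = prompt + turn["assistant"] + "\n"
    return pairs

# Toy row in the schema documented above.
sample = {
    "conversation_id": 1,
    "category": "Brainstorming",
    "conversation": [
        {"human": "Q1", "assistant": "A1"},
        {"human": "Q2", "assistant": "A2"},
    ],
}
pairs = flatten_conversation(sample)
```

A conversation with N turns yields N pairs, each one conditioning on everything said before it.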
## License
[CC BY-NC 4.0](https://creativecommons.org/licenses/by-nc/4.0/deed.zh-hant)
## Citation
```
@article{sun2023moss,
title={MOSS: Training Conversational Language Models from Synthetic Data},
author={Tianxiang Sun and Xiaotian Zhang and Zhengfu He and Peng Li and Qinyuan Cheng and Hang Yan and Xiangyang Liu and Yunfan Shao and Qiong Tang and Xingjian Zhao and Ke Chen and Yining Zheng and Zhejian Zhou and Ruixiao Li and Jun Zhan and Yunhua Zhou and Linyang Li and Xiaogui Yang and Lingling Wu and Zhangyue Yin and Xuanjing Huang and Xipeng Qiu},
year={2023}
}
``` | 5,235 | [
[
-0.035736083984375,
-0.05230712890625,
0.0078277587890625,
0.02801513671875,
-0.031982421875,
-0.0169677734375,
0.0013341903686523438,
-0.039093017578125,
0.04669189453125,
0.025665283203125,
-0.05010986328125,
-0.027587890625,
-0.042694091796875,
0.02070617... |
yatsy/GPT-wiki-intro-extension | 2023-10-20T02:03:35.000Z | [
"task_categories:text-classification",
"annotations_creators:no-annotation",
"language_creators:machine-generated",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:aadityaubhat/GPT-wiki-intro",
"language:en",
"license:cc",
"facebook/opt-1.3b",
"facebook/opt-2.7b",
"fa... | yatsy | null | null | 0 | 3 | 2023-10-20T01:37:31 | ---
annotations_creators:
- no-annotation
language_creators:
- machine-generated
language:
- en
license:
- cc
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- aadityaubhat/GPT-wiki-intro
task_categories:
- text-classification
task_ids: []
pretty_name: GPT-wiki-intro-extension
tags:
- facebook/opt-1.3b
- facebook/opt-2.7b
- facebook/opt-125m
- meta-llama/Llama-2-7b-chat-hf
- meta-llama/Llama-2-13b-chat-hf
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: id
dtype: int64
- name: generated
dtype: string
- name: source
dtype: string
splits:
- name: train
num_bytes: 6375445
num_examples: 5000
download_size: 2174490
dataset_size: 6375445
---
# GPT Wiki Intro Extension
This dataset is an extension of aadityaubhat/GPT-wiki-intro.
1000 prompts were processed through several LLMs without sampling (opt-125m, opt-1.3b, opt-2.7b, llama2-7b-chat, and llama2-13b-chat).
Schema for the dataset:
|Column |Datatype|Description |
|---------------------|--------|-------------------------------------------|
|id |int64 |ID from original dataset |
|generated |string |Model's output |
|source |string |opt-125m, opt-1.3b, opt-2.7b, llama2-7b or llama2-13b |
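As an illustrative sketch (toy rows below, not the actual data), rows sharing an `id` can be grouped by the `source` column to compare the outputs of the different models on the same prompt:

```python
from collections import defaultdict

# Hypothetical rows in the schema above.
rows = [
    {"id": 1, "generated": "text-a", "source": "opt-125m"},
    {"id": 1, "generated": "text-b", "source": "llama2-7b"},
    {"id": 2, "generated": "text-c", "source": "opt-125m"},
]

# Map each prompt id to a {source: generated} dict.
by_id = defaultdict(dict)
for row in rows:
    by_id[row["id"]][row["source"]] = row["generated"]
```

With the real dataset, the same loop over `load_dataset(...)["train"]` lines up all five models' outputs per prompt.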
```
@misc {yatsy,
author = { {Kirill Safronov} },
title = { GPT-wiki-intro-extension },
year = 2023,
url = { https://huggingface.co/datasets/yatsy/GPT-wiki-intro-extension },
publisher = { Hugging Face }
}
``` | 1,670 | [
[
-0.033935546875,
-0.06463623046875,
0.010467529296875,
-0.0211334228515625,
-0.036865234375,
-0.0174560546875,
-0.0005664825439453125,
-0.021759033203125,
0.0289306640625,
0.0404052734375,
-0.0703125,
-0.050018310546875,
-0.036529541015625,
0.000363111495971... |
Rocinante/tulu_cnn | 2023-10-22T15:37:12.000Z | [
"region:us"
] | Rocinante | null | null | 0 | 3 | 2023-10-20T07:51:56 | ---
dataset_info:
features:
- name: dataset
dtype: string
- name: id
dtype: string
- name: messages
list:
- name: role
dtype: string
- name: content
dtype: string
splits:
- name: train
num_bytes: 1287288253
num_examples: 397417
download_size: 578074863
dataset_size: 1287288253
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "tulu_cnn"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 586 | [
[
-0.042266845703125,
-0.0285491943359375,
-0.01221466064453125,
0.01108551025390625,
-0.0237884521484375,
0.00653076171875,
0.01239013671875,
-0.0102996826171875,
0.03839111328125,
0.0144500732421875,
-0.033233642578125,
-0.049560546875,
-0.05218505859375,
-0... |
jiwon65/aihub_child-10k_general-6k_feature-extracted_for_test | 2023-10-20T11:00:25.000Z | [
"region:us"
] | jiwon65 | null | null | 0 | 3 | 2023-10-20T10:54:32 | ---
configs:
- config_name: default
data_files:
- split: test
path: data/test-*
dataset_info:
features:
- name: text
dtype: string
- name: audio
sequence: float32
- name: input_features
sequence:
sequence: float32
- name: labels
sequence: int64
splits:
- name: test
num_bytes: 18250978390
num_examples: 16000
download_size: 4220035968
dataset_size: 18250978390
---
# Dataset Card for "aihub_child-10k_general-6k_feature-extracted_for_test"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 628 | [
[
-0.048919677734375,
-0.033782958984375,
-0.0128326416015625,
0.0232086181640625,
-0.024627685546875,
0.0003705024719238281,
0.022796630859375,
-0.03143310546875,
0.043701171875,
0.027984619140625,
-0.047332763671875,
-0.043243408203125,
-0.0526123046875,
-0.... |
loubnabnl/kaggle_formatted_subset | 2023-10-20T14:48:57.000Z | [
"region:us"
] | loubnabnl | null | null | 0 | 3 | 2023-10-20T14:48:50 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: file_id
dtype: string
- name: content
dtype: string
- name: local_path
dtype: string
- name: kaggle_dataset_name
dtype: string
- name: kaggle_dataset_owner
dtype: string
- name: kversion
dtype: string
- name: kversion_datasetsources
dtype: string
- name: dataset_versions
dtype: string
- name: datasets
dtype: string
- name: users
dtype: string
- name: script
dtype: string
- name: df_info
dtype: string
- name: has_data_info
dtype: bool
- name: nb_filenames
dtype: int64
- name: retreived_data_description
dtype: string
- name: script_nb_tokens
dtype: int64
- name: upvotes
dtype: int64
- name: tokens_description
dtype: int64
- name: tokens_script
dtype: int64
splits:
- name: train
num_bytes: 218639956
num_examples: 10000
download_size: 92812862
dataset_size: 218639956
---
# Dataset Card for "kaggle_formatted_subset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 1,209 | [
[
-0.0301666259765625,
-0.027618408203125,
0.007293701171875,
0.0198516845703125,
-0.01087188720703125,
0.002231597900390625,
0.0009784698486328125,
0.0077362060546875,
0.06036376953125,
0.04010009765625,
-0.066650390625,
-0.049407958984375,
-0.048828125,
-0.0... |
aoome123/myvoice | 2023-10-21T12:03:57.000Z | [
"region:us"
] | aoome123 | null | null | 0 | 3 | 2023-10-21T11:13:01 | ---
dataset_info:
config_name: aoome123/voice
features:
- name: input_features
sequence:
sequence: float32
- name: labels
sequence: int64
splits:
- name: train
num_bytes: 101968152
num_examples: 104
- name: test
num_bytes: 12717600
num_examples: 13
- name: valid
num_bytes: 12730056
num_examples: 13
download_size: 126599658
dataset_size: 127415808
configs:
- config_name: aoome123/voice
data_files:
- split: train
path: aoome123/voice/train-*
- split: test
path: aoome123/voice/test-*
- split: valid
path: aoome123/voice/valid-*
---
# Dataset Card for "my_voice_dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 782 | [
[
-0.048583984375,
-0.0103302001953125,
0.007785797119140625,
0.0154876708984375,
0.0078125,
0.007083892822265625,
0.005054473876953125,
-0.0019197463989257812,
0.06903076171875,
0.0426025390625,
-0.06890869140625,
-0.0440673828125,
-0.029815673828125,
-0.0150... |
lipi17/building-cracks | 2023-10-21T11:53:45.000Z | [
"task_categories:object-detection",
"roboflow",
"roboflow2huggingface",
"region:us"
] | lipi17 | null | @misc{ crack-detection-y5kyg_dataset,
title = { Crack Detection Dataset },
type = { Open Source Dataset },
author = { António Raimundo },
howpublished = { \url{ https://universe.roboflow.com/antonio-raimundo/crack-detection-y5kyg } },
url = { https://universe.roboflow.com/antonio-raimundo/crack-detection-y5kyg },
journal = { Roboflow Universe },
publisher = { Roboflow },
year = { 2023 },
month = { feb },
note = { visited on 2023-10-21 },
} | 0 | 3 | 2023-10-21T11:46:19 | ---
task_categories:
- object-detection
tags:
- roboflow
- roboflow2huggingface
---
<div align="center">
<img width="640" alt="lipi17/building-cracks" src="https://huggingface.co/datasets/lipi17/building-cracks/resolve/main/thumbnail.jpg">
</div>
### Dataset Labels
```
['crack']
```
### Number of Images
```json
{'valid': 433, 'test': 211, 'train': 1490}
```
### How to Use
- Install [datasets](https://pypi.org/project/datasets/):
```bash
pip install datasets
```
- Load the dataset:
```python
from datasets import load_dataset
ds = load_dataset("lipi17/building-cracks", name="full")
example = ds['train'][0]
```
### Roboflow Dataset Page
[https://universe.roboflow.com/antonio-raimundo/crack-detection-y5kyg/dataset/2](https://universe.roboflow.com/antonio-raimundo/crack-detection-y5kyg/dataset/2?ref=roboflow2huggingface)
### Citation
```
@misc{ crack-detection-y5kyg_dataset,
title = { Crack Detection Dataset },
type = { Open Source Dataset },
author = { António Raimundo },
howpublished = { \url{ https://universe.roboflow.com/antonio-raimundo/crack-detection-y5kyg } },
url = { https://universe.roboflow.com/antonio-raimundo/crack-detection-y5kyg },
journal = { Roboflow Universe },
publisher = { Roboflow },
year = { 2023 },
month = { feb },
note = { visited on 2023-10-21 },
}
```
### License
CC BY 4.0
### Dataset Summary
This dataset was exported via roboflow.com on February 10, 2023 at 3:51 PM GMT
Roboflow is an end-to-end computer vision platform that helps you
* collaborate with your team on computer vision projects
* collect & organize images
* understand and search unstructured image data
* annotate, and create datasets
* export, train, and deploy computer vision models
* use active learning to improve your dataset over time
For state of the art Computer Vision training notebooks you can use with this dataset,
visit https://github.com/roboflow/notebooks
To find over 100k other datasets and pre-trained models, visit https://universe.roboflow.com
The dataset includes 2134 images.
Soil is annotated in COCO format.
The following pre-processing was applied to each image:
* Auto-orientation of pixel data (with EXIF-orientation stripping)
* Resize to 640x640 (Stretch)
No image augmentation techniques were applied.
| 2,316 | [
[
-0.0197601318359375,
-0.029266357421875,
0.022705078125,
0.007335662841796875,
-0.017333984375,
-0.00409698486328125,
0.0016489028930664062,
-0.037811279296875,
0.0256195068359375,
0.0172119140625,
-0.042877197265625,
-0.05914306640625,
-0.0352783203125,
0.0... |
Birchlabs/wds-dataset-viewer-test | 2023-10-22T01:08:42.000Z | [
"size_categories:n<1K",
"license:apache-2.0",
"region:us"
] | Birchlabs | null | null | 0 | 3 | 2023-10-21T22:00:42 | ---
license: apache-2.0
pretty_name: OpenAI guided-diffusion 256px class-conditional unguided samples (20 samples)
size_categories:
- n<1K
---
Read from the webdataset (after saving it somewhere on your disk) like this:
```python
from webdataset import WebDataset
from typing import TypedDict, Iterable
from PIL import Image
from PIL.PngImagePlugin import PngImageFile
from io import BytesIO
from os import makedirs
Example = TypedDict('Example', {
'__key__': str,
'__url__': str,
'img.png': bytes,
})
dataset = WebDataset('./wds-dataset-viewer-test/{00000..00001}.tar')
out_root = 'out'
makedirs(out_root, exist_ok=True)
it: Iterable[Example] = iter(dataset)
for ix, item in enumerate(it):
with BytesIO(item['img.png']) as stream:
img: PngImageFile = Image.open(stream)
img.load()
img.save(f'{out_root}/{ix}.png')
```
Or from the HF dataset like this:
```python
from datasets import load_dataset
from datasets.dataset_dict import DatasetDict
from datasets.arrow_dataset import Dataset
from PIL.PngImagePlugin import PngImageFile
from typing import TypedDict, Iterable
from os import makedirs
class Item(TypedDict):
index: int
tar: str
tar_path: str
img: PngImageFile
dataset: DatasetDict = load_dataset('Birchlabs/wds-dataset-viewer-test')
train: Dataset = dataset['train']
out_root = 'out'
makedirs(out_root, exist_ok=True)
it: Iterable[Item] = iter(train)
for item in it:
item['img'].save(f'{out_root}/{item["index"]}.png')
``` | 1,470 | [
[
-0.037994384765625,
-0.0159759521484375,
0.007045745849609375,
-0.0103912353515625,
-0.042755126953125,
-0.0008707046508789062,
0.007076263427734375,
-0.01421356201171875,
0.0234222412109375,
0.0199737548828125,
-0.031646728515625,
-0.027374267578125,
-0.0366516... |
Kabatubare/employment_transportation_tourism_culture_suomi | 2023-10-22T09:32:13.000Z | [
"region:us"
] | Kabatubare | null | null | 0 | 3 | 2023-10-22T06:00:55 | Entry not found | 15 | [
[
-0.0214080810546875,
-0.01494598388671875,
0.05718994140625,
0.02880859375,
-0.0350341796875,
0.046478271484375,
0.052520751953125,
0.00505828857421875,
0.051361083984375,
0.016998291015625,
-0.052093505859375,
-0.01494598388671875,
-0.06036376953125,
0.0379... |
xcz9811/4q | 2023-10-22T15:33:39.000Z | [
"region:us"
] | xcz9811 | null | null | 0 | 3 | 2023-10-22T15:30:21 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: file
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: quadrant
dtype:
class_label:
names:
'0': Q1
'1': Q2
'2': Q3
'3': Q4
splits:
- name: train
num_bytes: 291173680.0
num_examples: 900
download_size: 291039981
dataset_size: 291173680.0
---
# Dataset Card for "4q"
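The `quadrant` feature above is stored as a `class_label` integer. The following is a minimal standalone sketch of the index-to-name mapping declared in the YAML (`datasets.ClassLabel.int2str` performs the same conversion when loading the real dataset):

```python
# Quadrant label names as declared in the YAML `class_label` above
# ('0' -> Q1 ... '3' -> Q4). A real `datasets.ClassLabel` does the same
# mapping; this helper just mirrors it without any dependencies.
QUADRANT_NAMES = ["Q1", "Q2", "Q3", "Q4"]

def quadrant_to_name(label: int) -> str:
    """Map an integer `quadrant` label to its string name."""
    if not 0 <= label < len(QUADRANT_NAMES):
        raise ValueError(f"unknown quadrant label: {label}")
    return QUADRANT_NAMES[label]

# e.g. a record like {'file': ..., 'audio': ..., 'quadrant': 2}
print(quadrant_to_name(2))  # -> Q3
```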
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 644 | [
[
-0.032928466796875,
0.0005025863647460938,
0.0273590087890625,
0.0149078369140625,
-0.01474761962890625,
0.005706787109375,
0.048065185546875,
-0.007663726806640625,
0.0447998046875,
0.0386962890625,
-0.056121826171875,
-0.0516357421875,
-0.0191497802734375,
... |
umd-zhou-lab/claude2_alpaca | 2023-10-22T15:42:20.000Z | [
"region:us"
] | umd-zhou-lab | null | null | 0 | 3 | 2023-10-22T15:33:30 | ---
dataset_info:
features:
- name: data
struct:
- name: input
dtype: string
- name: instruction
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 43416526
num_examples: 52002
download_size: 26338365
dataset_size: 43416526
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "claude2_alpaca"
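Each row nests its fields under a single `data` struct (`input` / `instruction` / `output`). Below is a minimal sketch of rendering one such record as a training prompt — the record is invented to mirror the declared schema, and the template is the standard Alpaca one, which this card does not confirm; the real rows would come from `datasets.load_dataset("umd-zhou-lab/claude2_alpaca")`.

```python
# Hypothetical record mirroring the `data` struct declared in the YAML above.
record = {
    "data": {
        "instruction": "Summarize the text.",
        "input": "Claude 2 generated responses for the Alpaca prompts.",
        "output": "A Claude 2 Alpaca-style dataset.",
    }
}

def to_prompt(example: dict) -> str:
    """Render one record with the standard Alpaca template (an assumption;
    the card does not state which template was used in training)."""
    d = example["data"]
    if d["input"]:
        return (
            "Below is an instruction that describes a task, paired with an "
            "input that provides further context.\n\n"
            f"### Instruction:\n{d['instruction']}\n\n"
            f"### Input:\n{d['input']}\n\n### Response:\n"
        )
    return (
        "Below is an instruction that describes a task.\n\n"
        f"### Instruction:\n{d['instruction']}\n\n### Response:\n"
    )

print(to_prompt(record))
```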
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 561 | [
[
-0.054412841796875,
-0.0291748046875,
0.0238189697265625,
0.02703857421875,
-0.0254364013671875,
-0.021026611328125,
0.02734375,
-0.028900146484375,
0.07861328125,
0.04486083984375,
-0.053619384765625,
-0.04339599609375,
-0.060760498046875,
-0.0242919921875,... |
Back-up/control-chatbot-to-subdomain | 2023-10-22T18:00:04.000Z | [
"region:us"
] | Back-up | null | null | 0 | 3 | 2023-10-22T17:55:35 | ---
dataset_info:
features:
- name: answers
dtype: string
- name: questions
dtype: string
- name: system_prompt
dtype: string
splits:
- name: test
num_bytes: 2337677
num_examples: 5495
download_size: 241329
dataset_size: 2337677
configs:
- config_name: default
data_files:
- split: test
path: data/test-*
---
# Dataset Card for "control-chatbot-to-subdomain"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 535 | [
[
-0.04119873046875,
-0.045135498046875,
-0.0179443359375,
0.01326751708984375,
-0.0195159912109375,
0.0303497314453125,
0.00498199462890625,
0.0008063316345214844,
0.055023193359375,
0.0308685302734375,
-0.07452392578125,
-0.0467529296875,
-0.050689697265625,
... |
kjappelbaum/chemnlp-mp-cifs | 2023-10-30T08:56:59.000Z | [
"region:us"
] | kjappelbaum | null | null | 0 | 3 | 2023-10-22T18:23:04 | ---
dataset_info:
features:
- name: formula
dtype: string
- name: density
dtype: float64
- name: spacegroup
dtype: string
- name: spacegroup_number
dtype: int64
- name: cif
dtype: string
- name: is_longer_than_allowed
dtype: bool
splits:
- name: train
num_bytes: 284514500
num_examples: 154387
download_size: 95647734
dataset_size: 284514500
---
# Dataset Card for "chemnlp-mp-cifs"
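The boolean `is_longer_than_allowed` flag marks CIF strings that exceed a length budget. Here is a small sketch of filtering on that flag plus a density cutoff, using invented records that mirror the declared features (the real rows would come from `datasets.load_dataset("kjappelbaum/chemnlp-mp-cifs")`):

```python
# Invented records mirroring the declared features; CIF bodies elided.
records = [
    {"formula": "SiO2", "density": 2.65, "spacegroup": "P3121",
     "spacegroup_number": 152, "cif": "...", "is_longer_than_allowed": False},
    {"formula": "Fe2O3", "density": 5.26, "spacegroup": "R-3c",
     "spacegroup_number": 167, "cif": "...", "is_longer_than_allowed": True},
]

def usable(record: dict, max_density: float = 10.0) -> bool:
    """Keep entries whose CIF fits the length budget and whose density
    is within range (the cutoff here is an arbitrary example value)."""
    return (not record["is_longer_than_allowed"]
            and record["density"] <= max_density)

kept = [r["formula"] for r in records if usable(r)]
print(kept)  # -> ['SiO2']
```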
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 569 | [
[
-0.0312042236328125,
0.01456451416015625,
0.0380859375,
0.0134429931640625,
0.0004284381866455078,
0.0173492431640625,
0.016571044921875,
0.005245208740234375,
0.05108642578125,
0.0281524658203125,
-0.07232666015625,
-0.055877685546875,
-0.0305328369140625,
... |
AdapterOcean/physics_dataset_standardized_embedded | 2023-10-23T01:50:26.000Z | [
"region:us"
] | AdapterOcean | null | null | 0 | 3 | 2023-10-22T18:29:17 | ---
dataset_info:
features:
- name: text
dtype: string
- name: conversation_id
dtype: int64
- name: embedding
sequence: float32
splits:
- name: train
num_bytes: 131673144
num_examples: 19999
download_size: 0
dataset_size: 131673144
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "physics_dataset_standardized_embedded"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 549 | [
[
-0.0307159423828125,
-0.021331787109375,
0.0220184326171875,
0.0182952880859375,
-0.0284423828125,
0.0049591064453125,
0.022430419921875,
-0.01131439208984375,
0.053375244140625,
0.0056915283203125,
-0.041412353515625,
-0.05279541015625,
-0.03302001953125,
-... |
AdapterOcean/physics_dataset_standardized_cluster_3_alpaca | 2023-10-23T01:52:45.000Z | [
"region:us"
] | AdapterOcean | null | null | 0 | 3 | 2023-10-22T18:31:23 | ---
dataset_info:
features:
- name: input
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 8485286
num_examples: 5120
download_size: 0
dataset_size: 8485286
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "physics_dataset_standardized_cluster_3_alpaca"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 503 | [
[
-0.04095458984375,
-0.022552490234375,
0.0374755859375,
0.032501220703125,
-0.0294189453125,
-0.01117706298828125,
0.038330078125,
-0.0171661376953125,
0.069580078125,
0.016204833984375,
-0.04150390625,
-0.0550537109375,
-0.036346435546875,
-0.01744079589843... |
AdapterOcean/biology_dataset_standardized_cluster_0_alpaca | 2023-10-23T14:44:38.000Z | [
"region:us"
] | AdapterOcean | null | null | 0 | 3 | 2023-10-22T18:46:37 | ---
dataset_info:
features:
- name: input
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 13081092
num_examples: 4053
download_size: 0
dataset_size: 13081092
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "biology_dataset_standardized_cluster_0_alpaca"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 505 | [
[
-0.039520263671875,
-0.03118896484375,
0.027435302734375,
0.02191162109375,
-0.035308837890625,
-0.00971221923828125,
0.03173828125,
-0.0199127197265625,
0.0947265625,
0.02288818359375,
-0.044921875,
-0.07171630859375,
-0.049957275390625,
-0.0047492980957031... |
AdapterOcean/chemistry_dataset_standardized_cluster_2_alpaca | 2023-10-22T21:15:38.000Z | [
"region:us"
] | AdapterOcean | null | null | 0 | 3 | 2023-10-22T21:15:36 | ---
dataset_info:
features:
- name: input
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 4614663
num_examples: 3338
download_size: 1961709
dataset_size: 4614663
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "chemistry_dataset_standardized_cluster_2_alpaca"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 511 | [
[
-0.0307159423828125,
-0.018585205078125,
0.038421630859375,
0.0230865478515625,
-0.0214080810546875,
0.002361297607421875,
0.02496337890625,
-0.00972747802734375,
0.07122802734375,
0.028961181640625,
-0.044525146484375,
-0.0743408203125,
-0.039703369140625,
... |
AdapterOcean/chemistry_dataset_standardized_cluster_3_alpaca | 2023-10-22T21:16:00.000Z | [
"region:us"
] | AdapterOcean | null | null | 0 | 3 | 2023-10-22T21:15:58 | ---
dataset_info:
features:
- name: input
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 15934846
num_examples: 5331
download_size: 7586850
dataset_size: 15934846
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "chemistry_dataset_standardized_cluster_3_alpaca"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 513 | [
[
-0.03582763671875,
-0.0178375244140625,
0.046722412109375,
0.0257568359375,
-0.0167236328125,
-0.0011816024780273438,
0.0283660888671875,
-0.009307861328125,
0.072998046875,
0.033111572265625,
-0.041259765625,
-0.08197021484375,
-0.033203125,
-0.009880065917... |
pharaouk/data | 2023-10-22T22:51:32.000Z | [
"region:us"
] | pharaouk | null | null | 0 | 3 | 2023-10-22T22:50:30 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
ai2lumos/lumos_complex_qa_ground_onetime | 2023-10-26T06:00:28.000Z | [
"task_categories:conversational",
"task_categories:text-generation",
"task_categories:question-answering",
"size_categories:10K<n<100K",
"language:en",
"license:apache-2.0",
"language-agent",
"reasoning",
"question-answering",
"grounding",
"region:us"
] | ai2lumos | null | null | 1 | 3 | 2023-10-23T05:31:31 | ---
license: apache-2.0
task_categories:
- conversational
- text-generation
- question-answering
language:
- en
tags:
- language-agent
- reasoning
- question-answering
- grounding
size_categories:
- 10K<n<100K
---
# 🪄 Lumos: Language Agents with Unified Formats, Modular Design, and Open-Source LLMs
<p align="center">
🌐<a href="https://allenai.github.io/lumos">[Website]</a>
📝<a href="">[Paper]</a>
🤗<a href="https://huggingface.co/datasets?sort=trending&search=ai2lumos">[Data]</a>
🤗<a href="https://huggingface.co/models?sort=trending&search=ai2lumos">[Model]</a>
</p>
We introduce 🪄**Lumos**, Language Agents with **Unified** Formats, **Modular** Design, and **Open-Source** LLMs. **Lumos** unifies a suite of complex interactive tasks and achieves competitive performance with GPT-4/3.5-based and larger open-source agents.
**Lumos** has the following features:
* 🧩 **Modular Architecture**:
- **Lumos** consists of planning, grounding, and execution modules built based on LLAMA-2-7B.
* 🌍 **Diverse Training Data**:
  - **Lumos** is trained with ~40K high-quality annotations converted with GPT-4 from ground-truth reasoning steps in existing benchmarks.
* 🚀 **Competitive Performance**:
- 🚀 **Lumos** outperforms **GPT-4/3.5-based** agents on complex QA and web agent tasks, and **larger open agents** on maths tasks.
- 🚀 **Lumos** performs better than open agent baseline formulations including **chain-of-thoughts** and **unmodularized** training.
- 🚀 **Lumos** surpasses larger open LLM agents and domain-specific agents on an unseen task, WebShop.
## Data Overview
`lumos_complex_qa_ground_onetime` is the data for training **grounding** module on **complex QA** task in **Lumos-Onetime (Lumos-O)** formulation.
The sources of the training annotation data are shown below:
| Datasets | Number |
|---|---|
|StrategyQA|1777|
|Musique|17632|
## Models Trained with the Data
`lumos_complex_qa_ground_onetime` is used to train the following models.
|Model|Huggingface Repo|
|---|---|
|`lumos_complex_qa_ground_onetime`| [🤗Huggingface Repo](https://huggingface.co/ai2lumos/lumos_complex_qa_ground_onetime) |
|`lumos_unified_ground_onetime`| [🤗Huggingface Repo](https://huggingface.co/ai2lumos/lumos_unified_ground_onetime) |
## Citation
If you find this work relevant to your research, please feel free to cite it!
```
@article{yin2023lumos,
title={Lumos: Towards Language Agents that are Unified, Modular, and Open Source},
author={Yin, Da and Brahman, Faeze and Ravichander, Abhilasha and Chandu, Khyathi and Chang, Kai-Wei and Choi, Yejin and Lin, Bill Yuchen},
year={2023}
}
``` | 2,654 | [
[
-0.00830078125,
-0.044219970703125,
0.0293731689453125,
0.0241851806640625,
-0.017791748046875,
-0.0016031265258789062,
-0.0224456787109375,
-0.0413818359375,
0.0258331298828125,
0.031951904296875,
-0.0438232421875,
-0.046905517578125,
-0.0203094482421875,
-... |
ai2lumos/lumos_maths_ground_iterative | 2023-10-23T22:19:25.000Z | [
"task_categories:conversational",
"task_categories:text-generation",
"size_categories:10K<n<100K",
"language:en",
"license:apache-2.0",
"language-agent",
"maths",
"reasoning",
"grounding",
"region:us"
] | ai2lumos | null | null | 0 | 3 | 2023-10-23T05:42:36 | ---
license: apache-2.0
task_categories:
- conversational
- text-generation
language:
- en
tags:
- language-agent
- maths
- reasoning
- grounding
size_categories:
- 10K<n<100K
---
# 🪄 Lumos: Language Agents with Unified Formats, Modular Design, and Open-Source LLMs
<p align="center">
🌐<a href="https://allenai.github.io/lumos">[Website]</a>
📝<a href="">[Paper]</a>
🤗<a href="https://huggingface.co/datasets?sort=trending&search=ai2lumos">[Data]</a>
🤗<a href="https://huggingface.co/models?sort=trending&search=ai2lumos">[Model]</a>
</p>
We introduce 🪄**Lumos**, Language Agents with **Unified** Formats, **Modular** Design, and **Open-Source** LLMs. **Lumos** unifies a suite of complex interactive tasks and achieves competitive performance with GPT-4/3.5-based and larger open-source agents.
**Lumos** has the following features:
* 🧩 **Modular Architecture**:
- **Lumos** consists of planning, grounding, and execution modules built based on LLAMA-2-7B.
* 🌍 **Diverse Training Data**:
  - **Lumos** is trained with ~40K high-quality annotations converted with GPT-4 from ground-truth reasoning steps in existing benchmarks.
* 🚀 **Competitive Performance**:
- 🚀 **Lumos** outperforms **GPT-4/3.5-based** agents on complex QA and web agent tasks, and **larger open agents** on maths tasks.
- 🚀 **Lumos** performs better than open agent baseline formulations including **chain-of-thoughts** and **unmodularized** training.
- 🚀 **Lumos** surpasses larger open LLM agents and domain-specific agents on an unseen task, WebShop.
## Data Overview
`lumos_maths_ground_iterative` is the data for training **grounding** module on **maths** task in **Lumos-Iterative (Lumos-I)** formulation.
The sources of the training annotation data are shown below:
| Task | Number |
|---|---|
|PRM800K|10000|
|GSM8K|7473|
|ASDiv|2305|
## Models Trained with the Data
`lumos_maths_ground_iterative` is used to train the following models.
|Model|Huggingface Repo|
|---|---|
|`lumos_maths_ground_iterative`| [🤗Huggingface Repo](https://huggingface.co/ai2lumos/lumos_maths_ground_iterative) |
|`lumos_unified_ground_iterative`| [🤗Huggingface Repo](https://huggingface.co/ai2lumos/lumos_unified_ground_iterative) |
## Citation
If you find this work relevant to your research, please feel free to cite it!
```
@article{yin2023lumos,
title={Lumos: Towards Language Agents that are Unified, Modular, and Open Source},
author={Yin, Da and Brahman, Faeze and Ravichander, Abhilasha and Chandu, Khyathi and Chang, Kai-Wei and Choi, Yejin and Lin, Bill Yuchen},
year={2023}
}
``` | 2,613 | [
[
-0.0059814453125,
-0.039794921875,
0.029937744140625,
0.0233154296875,
-0.0197601318359375,
0.001148223876953125,
-0.0341796875,
-0.043121337890625,
0.0235595703125,
0.02825927734375,
-0.036712646484375,
-0.0450439453125,
-0.029296875,
-0.00882720947265625,
... |
ai2lumos/lumos_maths_plan_iterative | 2023-10-23T22:20:52.000Z | [
"task_categories:conversational",
"task_categories:text-generation",
"size_categories:10K<n<100K",
"language:en",
"license:apache-2.0",
"language-agent",
"maths",
"reasoning",
"planning",
"region:us"
] | ai2lumos | null | null | 0 | 3 | 2023-10-23T05:43:59 | ---
license: apache-2.0
task_categories:
- conversational
- text-generation
language:
- en
tags:
- language-agent
- maths
- reasoning
- planning
size_categories:
- 10K<n<100K
---
# 🪄 Lumos: Language Agents with Unified Formats, Modular Design, and Open-Source LLMs
<p align="center">
🌐<a href="https://allenai.github.io/lumos">[Website]</a>
📝<a href="">[Paper]</a>
🤗<a href="https://huggingface.co/datasets?sort=trending&search=ai2lumos">[Data]</a>
🤗<a href="https://huggingface.co/models?sort=trending&search=ai2lumos">[Model]</a>
</p>
We introduce 🪄**Lumos**, Language Agents with **Unified** Formats, **Modular** Design, and **Open-Source** LLMs. **Lumos** unifies a suite of complex interactive tasks and achieves competitive performance with GPT-4/3.5-based and larger open-source agents.
**Lumos** has the following features:
* 🧩 **Modular Architecture**:
- **Lumos** consists of planning, grounding, and execution modules built based on LLAMA-2-7B.
* 🌍 **Diverse Training Data**:
  - **Lumos** is trained with ~40K high-quality annotations converted with GPT-4 from ground-truth reasoning steps in existing benchmarks.
* 🚀 **Competitive Performance**:
- 🚀 **Lumos** outperforms **GPT-4/3.5-based** agents on complex QA and web agent tasks, and **larger open agents** on maths tasks.
- 🚀 **Lumos** performs better than open agent baseline formulations including **chain-of-thoughts** and **unmodularized** training.
- 🚀 **Lumos** surpasses larger open LLM agents and domain-specific agents on an unseen task, WebShop.
## Data Overview
`lumos_maths_plan_iterative` is the data for training **planning** module on **maths** task in **Lumos-Iterative (Lumos-I)** formulation.
The sources of the training annotation data are shown below:
| Task | Number |
|---|---|
|PRM800K|10000|
|GSM8K|7473|
|ASDiv|2305|
## Models Trained with the Data
`lumos_maths_plan_iterative` is used to train the following models.
|Model|Huggingface Repo|
|---|---|
|`lumos_maths_plan_iterative`| [🤗Huggingface Repo](https://huggingface.co/ai2lumos/lumos_maths_plan_iterative) |
|`lumos_unified_plan_iterative`| [🤗Huggingface Repo](https://huggingface.co/ai2lumos/lumos_unified_plan_iterative) |
## Citation
If you find this work is relevant with your research, please feel free to cite our work!
```
@article{yin2023lumos,
title={Lumos: Towards Language Agents that are Unified, Modular, and Open Source},
author={Yin, Da and Brahman, Faeze and Ravichander, Abhilasha and Chandu, Khyathi and Chang, Kai-Wei and Choi, Yejin and Lin, Bill Yuchen},
year={2023}
}
``` | 2,599 | [
[
-0.00435638427734375,
-0.035369873046875,
0.0272979736328125,
0.0254058837890625,
-0.0163421630859375,
0.002086639404296875,
-0.032928466796875,
-0.044219970703125,
0.0311431884765625,
0.0311431884765625,
-0.04132080078125,
-0.038299560546875,
-0.025283813476562... |
mb23/cvt2_GS3_1 | 2023-10-25T05:55:59.000Z | [
"region:us"
] | mb23 | null | null | 0 | 3 | 2023-10-23T11:20:15 | ---
dataset_info:
features:
- name: image
dtype: image
- name: caption
dtype: string
splits:
- name: train
num_bytes: 1832280306.75
num_examples: 13258
- name: test
num_bytes: 640923801.75
num_examples: 4722
download_size: 2373732866
dataset_size: 2473204108.5
---
# Dataset Card for "cvt2_GS3_1"
* Adaptive histogram equalization applied to GraySpectrogram3
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 506 | [
[
-0.021026611328125,
-0.00754547119140625,
0.020355224609375,
0.037872314453125,
-0.058074951171875,
-0.0163726806640625,
0.021240234375,
-0.0035533905029296875,
0.0399169921875,
0.032257080078125,
-0.05938720703125,
-0.054962158203125,
-0.06097412109375,
-0.... |
AdapterOcean/med_alpaca_standardized_cluster_1_alpaca | 2023-10-23T19:03:58.000Z | [
"region:us"
] | AdapterOcean | null | null | 0 | 3 | 2023-10-23T19:03:56 | ---
dataset_info:
features:
- name: input
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 26006437
num_examples: 14404
download_size: 13974289
dataset_size: 26006437
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "med_alpaca_standardized_cluster_1_alpaca"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 508 | [
[
-0.046356201171875,
-0.0260162353515625,
0.0234832763671875,
0.030059814453125,
-0.03289794921875,
-0.01751708984375,
0.026336669921875,
-0.0134735107421875,
0.08074951171875,
0.0343017578125,
-0.056427001953125,
-0.078125,
-0.05267333984375,
-0.016357421875... |
bhaskars113/diageo-needs-theme-classification | 2023-10-23T19:06:30.000Z | [
"region:us"
] | bhaskars113 | null | null | 0 | 3 | 2023-10-23T19:06:17 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
AdapterOcean/med_alpaca_standardized_cluster_2_alpaca | 2023-10-23T19:09:14.000Z | [
"region:us"
] | AdapterOcean | null | null | 0 | 3 | 2023-10-23T19:09:11 | ---
dataset_info:
features:
- name: input
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 21938640
num_examples: 10908
download_size: 11706896
dataset_size: 21938640
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "med_alpaca_standardized_cluster_2_alpaca"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 508 | [
[
-0.03857421875,
-0.0232086181640625,
0.0263214111328125,
0.02935791015625,
-0.033660888671875,
-0.0157623291015625,
0.0235443115234375,
-0.023773193359375,
0.07080078125,
0.03253173828125,
-0.047943115234375,
-0.06695556640625,
-0.05377197265625,
-0.02111816... |
AdapterOcean/med_alpaca_standardized_cluster_3_alpaca | 2023-10-23T19:14:27.000Z | [
"region:us"
] | AdapterOcean | null | null | 0 | 3 | 2023-10-23T19:14:22 | ---
dataset_info:
features:
- name: input
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 10552566
num_examples: 6427
download_size: 5184358
dataset_size: 10552566
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "med_alpaca_standardized_cluster_3_alpaca"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 506 | [
[
-0.04339599609375,
-0.0223541259765625,
0.033538818359375,
0.0309600830078125,
-0.028717041015625,
-0.01898193359375,
0.0277099609375,
-0.0225372314453125,
0.0718994140625,
0.036407470703125,
-0.045501708984375,
-0.0751953125,
-0.04815673828125,
-0.010078430... |
surajbijjahalli/semantic_seg_atl_resized | 2023-10-24T03:05:31.000Z | [
"region:us"
] | surajbijjahalli | null | null | 0 | 3 | 2023-10-24T02:59:43 | ---
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype: image
splits:
- name: train
num_bytes: 914897071.975
num_examples: 1407
download_size: 909035241
dataset_size: 914897071.975
---
# Dataset Card for "semantic_seg_atl_resized"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 413 | [
[
-0.033447265625,
-0.036651611328125,
0.017608642578125,
-0.00222015380859375,
-0.0248260498046875,
-0.01424407958984375,
-0.0018405914306640625,
-0.0214080810546875,
0.07489013671875,
0.0450439453125,
-0.0675048828125,
-0.06878662109375,
-0.05133056640625,
-... |
guaixiaomei/bdcc | 2023-10-24T03:15:14.000Z | [
"region:us"
] | guaixiaomei | null | null | 0 | 3 | 2023-10-24T03:08:47 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
Back-up/so_tay_sv_test | 2023-10-24T05:27:17.000Z | [
"region:us"
] | Back-up | null | null | 0 | 3 | 2023-10-24T05:27:16 | ---
dataset_info:
features:
- name: Questions
dtype: string
- name: Answers
dtype: string
splits:
- name: train
num_bytes: 26492.31818181818
num_examples: 36
download_size: 20842
dataset_size: 26492.31818181818
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "so_tay_sv_test"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 499 | [
[
-0.04034423828125,
-0.029693603515625,
-0.0037078857421875,
0.01468658447265625,
-0.0184326171875,
-0.0175628662109375,
0.0325927734375,
-0.01041412353515625,
0.06341552734375,
0.02520751953125,
-0.07025146484375,
-0.050201416015625,
-0.02777099609375,
-0.02... |
hemantk089/llama2_7b_fine_tuning_complete_dataset_v3 | 2023-10-24T08:43:40.000Z | [
"region:us"
] | hemantk089 | null | null | 0 | 3 | 2023-10-24T08:43:39 | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 287566
num_examples: 917
download_size: 77717
dataset_size: 287566
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "llama2_7b_fine_tuning_complete_dataset_v3"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 464 | [
[
-0.027740478515625,
-0.0148162841796875,
0.0283966064453125,
0.033721923828125,
-0.0362548828125,
0.0025787353515625,
0.021484375,
-0.019287109375,
0.03741455078125,
0.046844482421875,
-0.05517578125,
-0.056671142578125,
-0.03741455078125,
-0.001711845397949... |
ArmelRandy/most_precious_4 | 2023-10-24T09:02:14.000Z | [
"region:us"
] | ArmelRandy | null | null | 0 | 3 | 2023-10-24T09:01:58 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: prompt
dtype: string
splits:
- name: train
num_bytes: 8031615.129808251
num_examples: 7744
- name: test
num_bytes: 892977.870191749
num_examples: 861
download_size: 5178596
dataset_size: 8924593.0
---
# Dataset Card for "most_precious_4"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 564 | [
[
-0.0305633544921875,
-0.00888824462890625,
0.011566162109375,
0.0176849365234375,
-0.0223541259765625,
0.0017948150634765625,
0.01517486572265625,
-0.01422119140625,
0.039886474609375,
0.0240478515625,
-0.04510498046875,
-0.052764892578125,
-0.041748046875,
... |
euclaise/SciCoT | 2023-10-24T13:39:30.000Z | [
"license:cc-by-nc-3.0",
"region:us"
] | euclaise | null | null | 2 | 3 | 2023-10-24T13:25:49 | ---
dataset_info:
features:
- name: rationale
dtype: string
- name: target
dtype: string
- name: source
dtype: string
- name: prompt
dtype: string
splits:
- name: train
num_bytes: 4559510
num_examples: 7000
download_size: 2872385
dataset_size: 4559510
license: cc-by-nc-3.0
---
# Dataset Card for "SciCoT"
A combination of sciq, medmcqa, and pubmed_qa (the human-annotated part), with a maximum of 3k examples taken from each. | 462 | [
[
-0.031982421875,
-0.004436492919921875,
0.0221405029296875,
0.0100250244140625,
-0.050811767578125,
0.011474609375,
0.0230865478515625,
-0.021575927734375,
0.0299835205078125,
0.051971435546875,
-0.036773681640625,
-0.070556640625,
-0.0179290771484375,
0.040... |
zhangyi617/yun-car-dataset | 2023-10-24T14:19:39.000Z | [
"region:us"
] | zhangyi617 | null | null | 0 | 3 | 2023-10-24T14:19:37 | ---
dataset_info:
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 11217081.0
num_examples: 33
download_size: 11218227
dataset_size: 11217081.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "yun-car-dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 481 | [
[
-0.038055419921875,
-0.00865936279296875,
0.020721435546875,
0.020233154296875,
-0.0111083984375,
0.0020236968994140625,
0.0193023681640625,
-0.0178375244140625,
0.05023193359375,
0.01904296875,
-0.055023193359375,
-0.05126953125,
-0.0244598388671875,
-0.035... |
RadhikaS/bird_songs | 2023-10-24T20:26:11.000Z | [
"task_categories:audio-classification",
"size_categories:n<1K",
"region:us"
] | RadhikaS | null | null | 0 | 3 | 2023-10-24T20:00:08 | ---
task_categories:
- audio-classification
size_categories:
- n<1K
---
This dataset consists of songs and calls of multiple bird species (87 distinct labels). It was collected and shared for the Neural Information Processing Scaled for Bioacoustics (NIPS4B) challenge.
It contains 686 audio files, with multiple labels per recording, sampled at 44,100 Hz. Recording length varies from just over 1 second to just over 5 seconds. | 425 | [
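As a quick sanity check on the stated clip lengths, the duration of each recording follows directly from its sample count and the 44,100 Hz sampling rate (a minimal sketch; the function name is illustrative):

```python
# Recordings are sampled at 44,100 Hz, so duration in seconds is
# simply the number of samples divided by the sampling rate.
SAMPLE_RATE = 44_100

def duration_seconds(num_samples: int) -> float:
    return num_samples / SAMPLE_RATE

# The shortest clips (just over 1 s) hold roughly 44,100 samples,
# the longest (just over 5 s) roughly 220,500 samples.
print(duration_seconds(44_100))   # 1.0
print(duration_seconds(220_500))  # 5.0
```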
[
-0.0499267578125,
-0.006816864013671875,
-0.0035877227783203125,
0.05828857421875,
0.01126861572265625,
0.00536346435546875,
0.0007290840148925781,
-0.0643310546875,
0.0396728515625,
0.021453857421875,
-0.04779052734375,
-0.01812744140625,
-0.03302001953125,
... |
w95/megachat-std | 2023-10-24T20:40:13.000Z | [
"region:us"
] | w95 | null | null | 0 | 3 | 2023-10-24T20:26:51 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
manfredmichael/quac-lamini-instruction-indo-2.6M | 2023-10-25T02:55:46.000Z | [
"region:us"
] | manfredmichael | null | null | 0 | 3 | 2023-10-25T01:50:24 | ---
dataset_info:
features:
- name: context
dtype: float64
- name: instruction
dtype: string
- name: response
dtype: string
- name: instruction_sources
dtype: string
splits:
- name: train
num_bytes: 1276671946
num_examples: 2585614
download_size: 713557171
dataset_size: 1276671946
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "quac-lamini-instruction-indo-2.6M"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 601 | [
[
-0.0298919677734375,
-0.01390838623046875,
0.01444244384765625,
0.021759033203125,
-0.02545166015625,
-0.011871337890625,
0.01384735107421875,
-0.00348663330078125,
0.04180908203125,
0.03472900390625,
-0.04608154296875,
-0.044281005859375,
-0.036224365234375,
... |