id stringlengths 2 115 | author stringlengths 2 42 ⌀ | last_modified timestamp[us, tz=UTC] | downloads int64 0 8.87M | likes int64 0 3.84k | paperswithcode_id stringlengths 2 45 ⌀ | tags list | lastModified timestamp[us, tz=UTC] | createdAt stringlengths 24 24 | key stringclasses 1 value | created timestamp[us] | card stringlengths 1 1.01M | embedding list | library_name stringclasses 21 values | pipeline_tag stringclasses 27 values | mask_token null | card_data null | widget_data null | model_index null | config null | transformers_info null | spaces null | safetensors null | transformersInfo null | modelId stringlengths 5 111 ⌀ | embeddings list |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
readerbench/ro-text-summarization | readerbench | 2022-09-17T08:34:03Z | 164 | 0 | null | [
"region:us"
] | 2022-09-17T08:34:03Z | 2022-09-17T08:27:29.000Z | 2022-09-17T08:27:29 | Entry not found | [
-0.32276472449302673,
-0.22568407654762268,
0.8622258901596069,
0.4346148371696472,
-0.5282984972000122,
0.7012965679168701,
0.7915717363357544,
0.07618629932403564,
0.7746022939682007,
0.2563222646713257,
-0.785281777381897,
-0.22573848068714142,
-0.9104482531547546,
0.5715669393539429,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
medalpaca/medical_meadow_wikidoc | medalpaca | 2023-04-06T17:05:18Z | 164 | 3 | null | [
"task_categories:question-answering",
"language:en",
"license:cc",
"region:us"
] | 2023-04-06T17:05:18Z | 2023-04-06T17:01:20.000Z | 2023-04-06T17:01:20 | ---
license: cc
task_categories:
- question-answering
language:
- en
---
# Dataset Card for WikiDoc
For the dataset containing patient information from wikidoc refer to [this dataset](https://huggingface.co/datasets/medalpaca/medical_meadow_wikidoc_patient_information)
## Dataset Description
- **Source:** https://www.wikidoc.org/index.php/Main_Page
- **Repository:** https://github.com/kbressem/medalpaca
- **Paper:** TBA
### Dataset Summary
This dataset contains medical question-answer pairs extracted from [WikiDoc](https://www.wikidoc.org/index.php/Main_Page),
a collaborative platform for medical professionals to share and contribute up-to-date medical knowledge.
The platform has two main subsites, the "Living Textbook" and "Patient Information". The "Living Textbook"
contains chapters for various medical specialties, which we crawled. We then used GPT-3.5-Turbo to rephrase
each paragraph heading into a question and used the paragraph as the answer. "Patient Information" is structured differently,
in that each section subheading is already a question, making rephrasing unnecessary.
**Note:** This dataset is still a WIP. While the Q/A pairs from the patient information seem to be mostly correct,
the conversion using GPT-3.5-Turbo yielded unsatisfactory results in approximately 30% of cases. We are in the process of cleaning this dataset.
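As an illustration of the conversion step described above, a rule-based stand-in for the GPT-3.5-Turbo rephrasing might look like the sketch below. The templates and field names are assumptions for illustration only; the actual pipeline prompted the model rather than using fixed templates.

```python
def heading_to_question(topic: str, heading: str) -> str:
    """Turn a chapter paragraph heading into a question.

    Illustrative stand-in for the GPT-3.5-Turbo rephrasing step;
    the real pipeline prompted the model instead of using templates.
    """
    templates = {
        "overview": f"What is {topic}?",
        "causes": f"What causes {topic}?",
        "diagnosis": f"How is {topic} diagnosed?",
        "treatment": f"How is {topic} treated?",
    }
    return templates.get(heading.lower(), f"What does '{heading}' describe about {topic}?")

# The paragraph body under the heading then becomes the answer of the Q/A pair.
qa_pair = {
    "question": heading_to_question("asthma", "Causes"),
    "answer": "Asthma is commonly triggered by allergens, infections, ...",  # made-up text
}
print(qa_pair["question"])  # What causes asthma?
```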
### Citation Information
TBA | [
-0.2638653814792633,
-0.6217974424362183,
0.5485785603523254,
0.011119185015559196,
-0.31216001510620117,
-0.22541534900665283,
0.08148043602705002,
-0.22258372604846954,
0.5251878499984741,
0.6433230042457581,
-0.7149476408958435,
-0.7603406310081482,
-0.42424285411834717,
0.1845791190862... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
GATE-engine/vggflowers | GATE-engine | 2023-06-05T15:12:54Z | 164 | 0 | null | [
"region:us"
] | 2023-06-05T15:12:54Z | 2023-06-05T15:12:19.000Z | 2023-06-05T15:12:19 | ---
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype: int64
splits:
- name: train
num_bytes: 452124226.125
num_examples: 5655
- name: validation
num_bytes: 89403717.375
num_examples: 1109
- name: test
num_bytes: 115124265.875
num_examples: 1425
download_size: 656318272
dataset_size: 656652209.375
---
# Dataset Card for "vggflowers"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.5225649476051331,
-0.30244672298431396,
0.22019411623477936,
0.35702601075172424,
-0.053303178399801254,
-0.10371840000152588,
0.10971707105636597,
-0.4252541959285736,
0.8820129632949829,
0.3732931315898895,
-1.0392929315567017,
-0.761441707611084,
-0.6289492845535278,
-0.2517699003219... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
64bits/lima_vicuna_format | 64bits | 2023-06-09T02:47:39Z | 164 | 22 | null | [
"task_categories:text-generation",
"language:en",
"license:other",
"region:us"
] | 2023-06-09T02:47:39Z | 2023-06-09T02:46:06.000Z | 2023-06-09T02:46:06 | ---
license: other
task_categories:
- text-generation
language:
- en
---
LIMA dataset in Vicuna ShareGPT format.
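The Vicuna/ShareGPT layout referred to here is typically a list of multi-turn conversations. A minimal sketch is below; the field names follow the common ShareGPT convention and are assumptions, not taken from this repository.

```python
# One record in the usual ShareGPT-style layout: an id plus a list of turns,
# each tagged with the speaker ("human" or "gpt"). Values here are made up.
example = {
    "id": "lima_0",
    "conversations": [
        {"from": "human", "value": "Explain the LIMA 'less is more' idea in one sentence."},
        {"from": "gpt", "value": "A small set of carefully curated examples can align a model ..."},
    ],
}
print(example["conversations"][0]["from"])  # human
```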
Licensed under LIMA's license.
Original Repo:
https://huggingface.co/datasets/GAIR/lima | [
-0.1602223813533783,
-0.3093717694282532,
0.37968146800994873,
0.9750137329101562,
-0.6892820596694946,
-0.654900312423706,
0.23254896700382233,
-0.2443114072084427,
0.7645636796951294,
1.1805766820907593,
-0.7053043246269226,
-0.6697830557823181,
-0.6812620162963867,
-0.14908799529075623,... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
PNLPhub/FarsTail | PNLPhub | 2023-07-09T07:39:52Z | 164 | 0 | null | [
"task_categories:text-classification",
"size_categories:1K<n<10K",
"language:fa",
"license:apache-2.0",
"arxiv:2009.08820",
"region:us"
] | 2023-07-09T07:39:52Z | 2023-06-16T13:53:43.000Z | 2023-06-16T13:53:43 | ---
license: apache-2.0
task_categories:
- text-classification
language:
- fa
size_categories:
- 1K<n<10K
---
## Dataset Description
- **Repository:** https://github.com/dml-qom/FarsTail
- **Paper:** https://arxiv.org/abs/2009.08820
### Dataset Summary
Persian (Farsi) is a pluricentric language spoken by around 110 million people in countries such as Iran, Afghanistan, and Tajikistan. Here, we present FarsTail, the first relatively large-scale Persian dataset for the NLI task. A total of 10,367 samples were generated from a collection of 3,539 multiple-choice questions. The train, validation, and test portions contain 7,266, 1,537, and 1,564 instances, respectively.
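The split sizes quoted above can be sanity-checked against the stated total:

```python
# Split sizes as reported in the dataset summary above.
splits = {"train": 7266, "validation": 1537, "test": 1564}
total = sum(splits.values())
print(total)  # 10367, matching the stated number of samples
```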
### Licensing Information
[More Information Needed]
### Citation Information
```
@article{amirkhani2020farstail,
title={FarsTail: A Persian Natural Language Inference Dataset},
  author={Hossein Amirkhani and Mohammad Azari Jafari and Azadeh Amirak and Zohreh Pourjafari and Soroush Faridan Jahromi and Zeinab Kouhkan},
journal={arXiv preprint arXiv:2009.08820},
year={2020}
}
``` | [
-0.3322799503803253,
-0.7293292284011841,
0.7647852897644043,
0.38462918996810913,
-0.11725584417581558,
0.0006937592988833785,
-0.6455363035202026,
0.0911707878112793,
0.11671310663223267,
0.4053952991962433,
-0.8845998048782349,
-0.9682174921035767,
-0.217400923371315,
0.4145296216011047... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
jxie/guacamol | jxie | 2023-08-03T23:49:15Z | 164 | 0 | null | [
"region:us"
] | 2023-08-03T23:49:15Z | 2023-08-03T23:49:05.000Z | 2023-08-03T23:49:05 | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 65660530
num_examples: 1273104
- name: validation
num_bytes: 4097829
num_examples: 79568
- name: test
num_bytes: 12306244
num_examples: 238706
download_size: 45009159
dataset_size: 82064603
---
# Dataset Card for "guacamol"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.44833260774612427,
-0.5973135232925415,
0.3002416789531708,
0.3761194050312042,
-0.20303495228290558,
0.024874523282051086,
0.19166576862335205,
-0.3600943386554718,
0.9767753481864929,
0.3139055669307709,
-0.7041424512863159,
-0.8810186982154846,
-0.8234974145889282,
-0.268698841333389... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Nikhil090/Dataset | Nikhil090 | 2023-11-28T13:19:33Z | 164 | 0 | null | [
"region:us"
] | 2023-11-28T13:19:33Z | 2023-10-09T10:01:41.000Z | 2023-10-09T10:01:41 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622263669967651,
0.43461522459983826,
-0.52829909324646,
0.7012971639633179,
0.7915719747543335,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104475975036621,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
pietrolesci/nli_fever | pietrolesci | 2022-04-25T09:03:28Z | 163 | 1 | null | [
"region:us"
] | 2022-04-25T09:03:28Z | 2022-03-25T10:01:17.000Z | 2022-03-25T10:01:17 | ## Overview
The original dataset can be found [here](https://www.dropbox.com/s/hylbuaovqwo2zav/nli_fever.zip?dl=0)
while the Github repo is [here](https://github.com/easonnie/combine-FEVER-NSMN/blob/master/other_resources/nli_fever.md).
This dataset was proposed in [Combining fact extraction and verification with neural semantic matching networks](https://dl.acm.org/doi/abs/10.1609/aaai.v33i01.33016859) and was created as a modification of FEVER.
In the original FEVER setting, the input is a claim from Wikipedia and the expected output is a label.
However, this differs from the standard NLI formalization, which is a *pair-of-sequence to label* problem.
To let NLI-related research take advantage of the FEVER dataset, the authors paired the claims in FEVER
with their textual evidence, turning it into a *pair-of-sequence to label* formatted dataset.
## Dataset curation
The label mapping follows the paper and is the following:
```python
mapping = {
"SUPPORTS": 0, # entailment
"NOT ENOUGH INFO": 1, # neutral
"REFUTES": 2, # contradiction
}
```
Also, the "verifiable" column has been encoded as follows:
```python
mapping = {"NOT VERIFIABLE": 0, "VERIFIABLE": 1}
```
Finally, a consistency check with the labels reported in the original FEVER dataset is performed.
NOTE: no label is available for the "test" split.
NOTE: there are 3 instances in common between `dev` and `train` splits.
## Code to generate the dataset
```python
import pandas as pd
from datasets import Dataset, ClassLabel, load_dataset, Value, Features, DatasetDict
import json
# download data from https://www.dropbox.com/s/hylbuaovqwo2zav/nli_fever.zip?dl=0
paths = {
"train": "<some_path>/nli_fever/train_fitems.jsonl",
"validation": "<some_path>/nli_fever/dev_fitems.jsonl",
"test": "<some_path>/nli_fever/test_fitems.jsonl",
}
# parsing code from https://github.com/facebookresearch/anli/blob/main/src/utils/common.py
registered_jsonabl_classes = {}
def register_class(cls):
global registered_jsonabl_classes
if cls not in registered_jsonabl_classes:
registered_jsonabl_classes.update({cls.__name__: cls})
def unserialize_JsonableObject(d):
global registered_jsonabl_classes
classname = d.pop("_jcls_", None)
if classname:
cls = registered_jsonabl_classes[classname]
obj = cls.__new__(cls) # Make instance without calling __init__
for key, value in d.items():
setattr(obj, key, value)
return obj
else:
return d
def load_jsonl(filename, debug_num=None):
d_list = []
with open(filename, encoding="utf-8", mode="r") as in_f:
print("Load Jsonl:", filename)
for line in in_f:
item = json.loads(line.strip(), object_hook=unserialize_JsonableObject)
d_list.append(item)
if debug_num is not None and 0 < debug_num == len(d_list):
break
return d_list
def get_original_fever() -> pd.DataFrame:
"""Get original fever datasets."""
fever_v1 = load_dataset("fever", "v1.0")
fever_v2 = load_dataset("fever", "v2.0")
columns = ["id", "label"]
splits = ["paper_test", "paper_dev", "labelled_dev", "train"]
list_dfs = [fever_v1[split].to_pandas()[columns] for split in splits]
list_dfs.append(fever_v2["validation"].to_pandas()[columns])
dfs = pd.concat(list_dfs, ignore_index=False)
dfs = dfs.drop_duplicates()
dfs = dfs.rename(columns={"label": "fever_gold_label"})
return dfs
def load_and_process(path: str, fever_df: pd.DataFrame) -> pd.DataFrame:
"""Load data split and merge with fever."""
df = pd.DataFrame(load_jsonl(path))
df = df.rename(columns={"query": "premise", "context": "hypothesis"})
# adjust dtype
df["cid"] = df["cid"].astype(int)
# merge with original fever to get labels
df = pd.merge(df, fever_df, left_on="cid", right_on="id", how="inner").drop_duplicates()
return df
def encode_labels(df: pd.DataFrame) -> pd.DataFrame:
"""Encode labels using the mapping used in SNLI and MultiNLI"""
mapping = {
"SUPPORTS": 0, # entailment
"NOT ENOUGH INFO": 1, # neutral
"REFUTES": 2, # contradiction
}
df["label"] = df["fever_gold_label"].map(mapping)
# verifiable
df["verifiable"] = df["verifiable"].map({"NOT VERIFIABLE": 0, "VERIFIABLE": 1})
return df
if __name__ == "__main__":
fever_df = get_original_fever()
dataset_splits = {}
for split, path in paths.items():
# from json to dataframe and merge with fever
df = load_and_process(path, fever_df)
if not len(df) > 0:
print(f"Split `{split}` has no matches")
continue
if split == "train":
# train must have same labels
assert sum(df["fever_gold_label"] != df["label"]) == 0
# encode labels using the default mapping used by other nli datasets
# i.e, entailment: 0, neutral: 1, contradiction: 2
df = df.drop(columns=["label"])
df = encode_labels(df)
# cast to dataset
features = Features(
{
"cid": Value(dtype="int64", id=None),
"fid": Value(dtype="string", id=None),
"id": Value(dtype="int32", id=None),
"premise": Value(dtype="string", id=None),
"hypothesis": Value(dtype="string", id=None),
"verifiable": Value(dtype="int64", id=None),
"fever_gold_label": Value(dtype="string", id=None),
"label": ClassLabel(num_classes=3, names=["entailment", "neutral", "contradiction"]),
}
)
if "test" in path:
# no features for test set
df["label"] = -1
df["verifiable"] = -1
df["fever_gold_label"] = "not available"
dataset = Dataset.from_pandas(df, features=features)
dataset_splits[split] = dataset
nli_fever = DatasetDict(dataset_splits)
nli_fever.push_to_hub("pietrolesci/nli_fever", token="<your token>")
# check overlap between splits
from itertools import combinations
for i, j in combinations(dataset_splits.keys(), 2):
print(
f"{i} - {j}: ",
pd.merge(
dataset_splits[i].to_pandas(),
dataset_splits[j].to_pandas(),
on=["premise", "hypothesis", "label"],
how="inner",
).shape[0],
)
#> train - dev: 3
#> train - test: 0
#> dev - test: 0
``` | [
-0.2692255973815918,
-0.38187289237976074,
-0.01189202256500721,
0.2326158583164215,
-0.11126668006181717,
0.0855962485074997,
-0.16483928263187408,
-0.27583473920822144,
0.4787701368331909,
0.32418182492256165,
-0.32234153151512146,
-0.5783683657646179,
-0.49534744024276733,
0.57077234983... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
camel-ai/chemistry | camel-ai | 2023-05-23T21:12:52Z | 163 | 17 | null | [
"task_categories:text-generation",
"language:en",
"license:cc-by-nc-4.0",
"instruction-finetuning",
"arxiv:2303.17760",
"region:us"
] | 2023-05-23T21:12:52Z | 2023-04-16T01:30:56.000Z | 2023-04-16T01:30:56 | ---
license: cc-by-nc-4.0
language:
- en
tags:
- instruction-finetuning
pretty_name: CAMEL Chemistry
task_categories:
- text-generation
arxiv: 2303.17760
extra_gated_prompt: "By using this data, you acknowledge and agree to utilize it solely for research purposes, recognizing that the dataset may contain inaccuracies due to its artificial generation through ChatGPT."
extra_gated_fields:
Name: text
Email: text
I will adhere to the terms and conditions of this dataset: checkbox
---
# **CAMEL: Communicative Agents for “Mind” Exploration of Large Scale Language Model Society**
- **Github:** https://github.com/lightaime/camel
- **Website:** https://www.camel-ai.org/
- **Arxiv Paper:** https://arxiv.org/abs/2303.17760
## Dataset Summary
The chemistry dataset is composed of 20K problem-solution pairs obtained using GPT-4. The problem-solution pairs were generated from 25 chemistry topics, with 25 subtopics per topic and 32 problems per (topic, subtopic) pair (25 × 25 × 32 = 20,000).
We provide the data in `chemistry.zip`.
## Data Fields
**The data fields for files in `chemistry.zip` are as follows:**
* `role_1`: assistant role
* `topic`: chemistry topic
* `sub_topic`: chemistry subtopic belonging to topic
* `message_1`: refers to the problem the assistant is asked to solve.
* `message_2`: refers to the solution provided by the assistant.
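A minimal sketch of what one record with these fields might look like, and how to read it back out of a zip archive such as `chemistry.zip`. The inner file layout and all example values are assumptions; only the field names come from this card.

```python
import io
import json
import zipfile

# Hypothetical record using the card's fields; the values are made up.
record = {
    "role_1": "Chemist_RoleType.ASSISTANT",
    "topic": "Thermochemistry",
    "sub_topic": "Hess's law",
    "message_1": "Calculate the enthalpy change for ...",
    "message_2": "By Hess's law, the enthalpy change is ...",
}

# Write it into an in-memory zip (mimicking chemistry.zip), then read it back.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    zf.writestr("chemistry/0.json", json.dumps(record))
with zipfile.ZipFile(buf) as zf:
    loaded = json.loads(zf.read("chemistry/0.json"))
print(loaded["sub_topic"])  # Hess's law
```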
**Download in python**
```python
from huggingface_hub import hf_hub_download
hf_hub_download(repo_id="camel-ai/chemistry", repo_type="dataset", filename="chemistry.zip",
local_dir="datasets/", local_dir_use_symlinks=False)
```
### Citation
```
@misc{li2023camel,
title={CAMEL: Communicative Agents for "Mind" Exploration of Large Scale Language Model Society},
author={Guohao Li and Hasan Abed Al Kader Hammoud and Hani Itani and Dmitrii Khizbullin and Bernard Ghanem},
year={2023},
eprint={2303.17760},
archivePrefix={arXiv},
primaryClass={cs.AI}
}
```
## Disclaimer
This data was synthetically generated by GPT-4 and might contain incorrect information. The dataset is intended for research purposes only.
| [
-0.34968921542167664,
-0.9189032316207886,
0.35564374923706055,
0.08854471892118454,
0.07087325304746628,
0.11547527462244034,
-0.46154773235321045,
-0.30917611718177795,
0.27508923411369324,
0.4008309543132782,
-0.5591922402381897,
-0.503498375415802,
-0.5781434774398804,
0.02688106521964... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
jason-lee08/TinyStoriesExclamationValidation2 | jason-lee08 | 2023-09-15T20:28:30Z | 163 | 0 | null | [
"region:us"
] | 2023-09-15T20:28:30Z | 2023-09-15T20:28:29.000Z | 2023-09-15T20:28:29 | ---
dataset_info:
features:
- name: validation
dtype: string
splits:
- name: train
num_bytes: 168184
num_examples: 220
download_size: 89488
dataset_size: 168184
---
# Dataset Card for "TinyStoriesExclamationValidation2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.39002758264541626,
-0.2206745147705078,
0.28944793343544006,
0.27118951082229614,
-0.3020470142364502,
-0.043496888130903244,
0.09925064444541931,
-0.1657722443342209,
0.31873786449432373,
0.20363715291023254,
-0.7885676622390747,
-0.4424211382865906,
-0.5820953249931335,
-0.06162516400... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
cyrilzhang/wiki-bpe-32k | cyrilzhang | 2023-09-22T16:02:48Z | 163 | 0 | null | [
"region:us"
] | 2023-09-22T16:02:48Z | 2023-09-22T15:56:45.000Z | 2023-09-22T15:56:45 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: input_ids
sequence: int32
splits:
- name: train
num_bytes: 21123228700
num_examples: 5152007
- name: test
num_bytes: 212326700
num_examples: 51787
download_size: 10331372531
dataset_size: 21335555400
---
# Dataset Card for "wiki-bpe-32k"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.9121424555778503,
0.05451425537467003,
0.19565054774284363,
0.3349602222442627,
-0.42353370785713196,
-0.19493447244167328,
0.24496160447597504,
-0.34560394287109375,
0.6162581443786621,
0.4424690306186676,
-0.8732936978340149,
-0.7690043449401855,
-0.6837524771690369,
-0.11769318580627... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
coastalcph/mutability_classifier-1-n | coastalcph | 2023-11-04T11:14:13Z | 163 | 0 | null | [
"region:us"
] | 2023-11-04T11:14:13Z | 2023-11-04T11:14:08.000Z | 2023-11-04T11:14:08 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
dataset_info:
features:
- name: query
dtype: string
- name: answer
list:
- name: wikidata_id
dtype: string
- name: name
dtype: string
- name: id
dtype: string
- name: relation
dtype: string
- name: date
dtype: int64
- name: type
dtype: string
- name: is_mutable
dtype: int64
splits:
- name: train
num_bytes: 1199436.065450644
num_examples: 6824
- name: validation
num_bytes: 1017521.3408544267
num_examples: 5911
- name: test
num_bytes: 837675.0175438597
num_examples: 4256
download_size: 1322347
dataset_size: 3054632.4238489303
---
# Dataset Card for "mutability_classifier-1-n"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.6703414916992188,
-0.24256540834903717,
0.10282685607671738,
0.3365365266799927,
-0.16216062009334564,
0.07451306283473969,
0.3373417556285858,
-0.10715597122907639,
0.8940092921257019,
0.2575205862522125,
-0.7942156791687012,
-0.5442618727684021,
-0.7281131744384766,
-0.214195385575294... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
alihmaou/AGR_RP2020_IND_DEPT | alihmaou | 2023-11-15T22:51:21Z | 163 | 0 | null | [
"region:us"
] | 2023-11-15T22:51:21Z | 2023-11-15T22:50:35.000Z | 2023-11-15T22:50:35 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622263669967651,
0.43461522459983826,
-0.52829909324646,
0.7012971639633179,
0.7915719747543335,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104475975036621,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
ai4bharat/IndicSentenceSummarization | ai4bharat | 2022-10-13T06:08:31Z | 162 | 0 | null | [
"annotations_creators:no-annotation",
"language_creators:found",
"multilinguality:multilingual",
"size_categories:5K<n<112K",
"source_datasets:original for Hindi, and modified [IndicGLUE](https://indicnlp.ai4bharat.org/indic-glue/) for other languages.",
"language:as",
"language:bn",
"language:gu",
... | 2022-10-13T06:08:31Z | 2022-03-10T09:59:05.000Z | 2022-03-10T09:59:05 | ---
annotations_creators:
- no-annotation
language_creators:
- found
language:
- as
- bn
- gu
- hi
- kn
- ml
- mr
- or
- pa
- ta
- te
license:
- cc-by-nc-4.0
multilinguality:
- multilingual
pretty_name: IndicSentenceSummarization
size_categories:
- 5K<n<112K
source_datasets:
- original for Hindi, and modified [IndicGLUE](https://indicnlp.ai4bharat.org/indic-glue/) for other languages.
task_categories:
- conditional-text-generation
task_ids:
- conditional-text-generation-other-sentence-summarization
---
# Dataset Card for "IndicSentenceSummarization"
## Table of Contents
- [Dataset Card Creation Guide](#dataset-card-creation-guide)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Initial Data Collection and Normalization](#initial-data-collection-and-normalization)
- [Who are the source language producers?](#who-are-the-source-language-producers)
- [Annotations](#annotations)
- [Annotation process](#annotation-process)
- [Who are the annotators?](#who-are-the-annotators)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://indicnlp.ai4bharat.org/indicnlg-suite
- **Paper:** [IndicNLG Suite: Multilingual Datasets for Diverse NLG Tasks in Indic Languages](https://arxiv.org/abs/2203.05437)
- **Point of Contact:**
### Dataset Summary
IndicSentenceSummarization is the sentence summarization dataset released as part of the IndicNLG Suite. Each
input sentence is paired with an output summary. We created this dataset in eleven
languages: as, bn, gu, hi, kn, ml, mr, or, pa, ta, te. The total
size of the dataset is 431K examples.
### Supported Tasks and Leaderboards
**Tasks:** Sentence Summarization
**Leaderboards:** Currently there is no Leaderboard for this dataset.
### Languages
- `Assamese (as)`
- `Bengali (bn)`
- `Gujarati (gu)`
- `Kannada (kn)`
- `Hindi (hi)`
- `Malayalam (ml)`
- `Marathi (mr)`
- `Oriya (or)`
- `Punjabi (pa)`
- `Tamil (ta)`
- `Telugu (te)`
## Dataset Structure
### Data Instances
One random example from the `hi` dataset is given below in JSON format.
```json
{
  "id": "5",
  "input": "जम्मू एवं कश्मीर के अनंतनाग जिले में शनिवार को सुरक्षाबलों के साथ मुठभेड़ में दो आतंकवादियों को मार गिराया गया।",
  "target": "जम्मू-कश्मीर : सुरक्षाबलों के साथ मुठभेड़ में 2 आतंकवादी ढेर",
  "url": "https://www.indiatv.in/india/national-jammu-kashmir-two-millitant-killed-in-encounter-with-security-forces-574529"
}
```
### Data Fields
- `id (string)`: Unique identifier.
- `input (string)`: Input sentence.
- `target (strings)`: Output summary.
- `url (string)`: Source web link of the sentence.
### Data Splits
Here is the number of samples in each split for all the languages.
Language | ISO 639-1 Code | Train | Dev | Test |
---------- | ---------- | ---------- | ---------- | ---------- |
Assamese | as | 10,812 | 5,232 | 5,452 |
Bengali | bn | 17,035 | 2,355 | 2,384 |
Gujarati | gu | 54,788 | 8,720 | 8,460 |
Hindi | hi | 78,876 | 16,935 | 16,835 |
Kannada | kn | 61,220 | 9,024 | 1,485 |
Malayalam | ml | 2,855 | 1,520 | 1,580 |
Marathi | mr | 27,066 | 3,249 | 3,309 |
Oriya | or | 12,065 | 1,539 | 1,440 |
Punjabi | pa | 31,630 | 4,004 | 3,967 |
Tamil | ta | 23,098 | 2,874 | 2,948 |
Telugu | te | 7,119 | 878 | 862 |
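The per-language counts in the table above sum to the figure given in the summary (~431K):

```python
# Per-language split sizes copied from the table above, in order:
# as, bn, gu, hi, kn, ml, mr, or, pa, ta, te.
train = [10812, 17035, 54788, 78876, 61220, 2855, 27066, 12065, 31630, 23098, 7119]
dev = [5232, 2355, 8720, 16935, 9024, 1520, 3249, 1539, 4004, 2874, 878]
test = [5452, 2384, 8460, 16835, 1485, 1580, 3309, 1440, 3967, 2948, 862]
print(sum(train) + sum(dev) + sum(test))  # 431616
```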
## Dataset Creation
### Curation Rationale
[Detailed in the paper](https://arxiv.org/abs/2203.05437)
### Source Data
It is a modified subset of [IndicHeadlineGeneration](https://huggingface.co/datasets/ai4bharat/IndicHeadlineGeneration) dataset.
#### Initial Data Collection and Normalization
[Detailed in the paper](https://arxiv.org/abs/2203.05437)
#### Who are the source language producers?
[Detailed in the paper](https://arxiv.org/abs/2203.05437)
### Annotations
[More information needed]
#### Annotation process
[More information needed]
#### Who are the annotators?
[More information needed]
### Personal and Sensitive Information
[More information needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More information needed]
### Discussion of Biases
[More information needed]
### Other Known Limitations
[More information needed]
## Additional Information
### Dataset Curators
[More information needed]
### Licensing Information
Contents of this repository are restricted to only non-commercial research purposes under the [Creative Commons Attribution-NonCommercial 4.0 International License (CC BY-NC 4.0)](https://creativecommons.org/licenses/by-nc/4.0/). Copyright of the dataset contents belongs to the original copyright holders.
### Citation Information
If you use any of the datasets, models or code modules, please cite the following paper:
```
@inproceedings{Kumar2022IndicNLGSM,
  title={IndicNLG Suite: Multilingual Datasets for Diverse NLG Tasks in Indic Languages},
  author={Aman Kumar and Himani Shrotriya and Prachi Sahu and Raj Dabre and Ratish Puduppully and Anoop Kunchukuttan and Amogh Mishra and Mitesh M. Khapra and Pratyush Kumar},
  year={2022},
  url={https://arxiv.org/abs/2203.05437}
}
```
### Contributions
[Detailed in the paper](https://arxiv.org/abs/2203.05437) | [
-0.4122730791568756,
-0.5185443758964539,
-0.09124796092510223,
0.4871422052383423,
-0.2887119650840759,
0.15939801931381226,
-0.6423628330230713,
-0.41636112332344055,
0.5109829306602478,
0.3393899202346802,
-0.596296489238739,
-0.8525726199150085,
-0.6545722484588623,
0.5116732120513916,... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Saptarshi7/covid_qa_cleaned_CS | Saptarshi7 | 2023-10-31T20:58:52Z | 162 | 0 | null | [
"license:apache-2.0",
"region:us"
] | 2023-10-31T20:58:52Z | 2022-05-04T19:04:01.000Z | 2022-05-04T19:04:01 | ---
license: apache-2.0
---
This is the _cleaned_ version of Deepset's [COVID-QA dataset](https://aclanthology.org/2020.nlpcovid19-acl.18/), described in Section 3.2 of [Leveraging External Knowledge Resources to Enable Domain-Specific Comprehension](https://lifelong-ml.cc/virtual-2022/poster_78.html). While you can use either version of the dataset, we recommend this one: we have corrected many mistakes in the original question set, and it yields better scores, even zero-shot, as reported in our paper.
| [
-0.28805530071258545,
-0.6390842199325562,
0.05289816856384277,
-0.186351016163826,
-0.19890090823173523,
0.05394834280014038,
0.04992016777396202,
-0.5802999138832092,
0.18671660125255585,
0.6496481895446777,
-0.7554264664649963,
-0.2692294120788574,
0.022475041449069977,
0.00772563833743... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
projecte-aina/catalanqa | projecte-aina | 2023-11-25T04:47:38Z | 162 | 1 | null | [
"task_categories:question-answering",
"task_ids:extractive-qa",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:ca",
"license:cc-by-sa-4.0",
"arxiv:1606.05250",
"region:us"
] | 2023-11-25T04:47:38Z | 2022-06-29T14:22:10.000Z | 2022-06-29T14:22:10 | ---
annotations_creators:
- expert-generated
language_creators:
- found
language:
- ca
license:
- cc-by-sa-4.0
multilinguality:
- monolingual
pretty_name: catalanqa
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- question-answering
task_ids:
- extractive-qa
---
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
# Dataset Card for CatalanQA
## Dataset Description
- **Homepage:** https://github.com/projecte-aina
- **Point of Contact:** [Carlos Rodríguez-Penagos](mailto:carlos.rodriguez1@bsc.es) and [Carme Armentano-Oller](mailto:carme.armentano@bsc.es)
### Dataset Summary
This dataset can be used to build extractive-QA systems and language models. It is an aggregation and balancing of two previous datasets: [VilaQuAD](https://huggingface.co/datasets/projecte-aina/vilaquad) and [ViquiQuAD](https://huggingface.co/datasets/projecte-aina/viquiquad).
Splits have been balanced by question type and, unlike other datasets such as [SQuAD](http://arxiv.org/abs/1606.05250), each record contains only one question and one answer per context, although a context may appear multiple times.
This dataset was developed by [BSC TeMU](https://temu.bsc.es/) as part of [Projecte AINA](https://politiquesdigitals.gencat.cat/ca/economia/catalonia-ai/aina/), to enrich the [Catalan Language Understanding Benchmark (CLUB)](https://club.aina.bsc.es/).
This work is licensed under a <a rel="license" href="https://creativecommons.org/licenses/by-sa/4.0/">Attribution-ShareAlike 4.0 International License</a>.
### Supported Tasks and Leaderboards
Extractive-QA, Language Model.
### Languages
The dataset is in Catalan (`ca-ES`).
## Dataset Structure
### Data Instances
```
{
"title": "Els 521 policies espanyols amb més mala nota a les oposicions seran enviats a Catalunya",
"paragraphs": [
{
"context": "El Ministeri d'Interior espanyol enviarà a Catalunya els 521 policies espanyols que han obtingut més mala nota a les oposicions. Segons que explica El País, hi havia mig miler de places vacants que s'havien de cobrir, però els agents amb més bones puntuacions han elegit destinacions diferents. En total van aprovar les oposicions 2.600 aspirants. D'aquests, en seran destinats al Principat 521 dels 560 amb més mala nota. Per l'altra banda, entre els 500 agents amb més bona nota, només 8 han triat Catalunya. Fonts de la policia espanyola que esmenta el diari ho atribueixen al procés d'independència, al Primer d'Octubre i a la 'situació social' que se'n deriva.",
"qas": [
{
"question": "Quants policies enviaran a Catalunya?",
"id": "0.5961700408283691",
"answers": [
{
"text": "521",
"answer_start": 57
}
]
}
]
}
]
},
```
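The `answer_start` field is a character offset into `context`. As a quick sanity check (a minimal sketch, not part of the dataset tooling):

```python
def check_answer_span(context: str, answer: dict) -> bool:
    """Return True if answer_start points at the answer text within the context."""
    start = answer["answer_start"]
    return context[start:start + len(answer["text"])] == answer["text"]

# Values taken from the instance above.
context = ("El Ministeri d'Interior espanyol enviarà a Catalunya els 521 "
           "policies espanyols que han obtingut més mala nota a les oposicions.")
print(check_answer_span(context, {"text": "521", "answer_start": 57}))  # True
```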
### Data Fields
Follows [(Rajpurkar, Pranav et al., 2016)](http://arxiv.org/abs/1606.05250) for SQuAD v1 datasets:
- `id` (str): Unique ID assigned to the question.
- `title` (str): Title of the article.
- `context` (str): Article text.
- `question` (str): Question.
- `answers` (list): Answers to the question, each containing:
  - `text` (str): Text of the answer span.
  - `answer_start` (int): Starting character offset of the answer span within the context.
### Data Splits
- train.json: 17135 question/answer pairs
- dev.json: 2157 question/answer pairs
- test.json: 2135 question/answer pairs
## Dataset Creation
### Curation Rationale
We created this corpus to contribute to the development of language models in Catalan, a low-resource language.
### Source Data
- [VilaWeb](https://www.vilaweb.cat/) and [Catalan Wikipedia](https://ca.wikipedia.org).
#### Initial Data Collection and Normalization
This dataset is a balanced aggregation from [ViquiQuAD](https://huggingface.co/datasets/projecte-aina/viquiquad) and [VilaQuAD](https://huggingface.co/datasets/projecte-aina/vilaquad) datasets.
#### Who are the source language producers?
Volunteers from [Catalan Wikipedia](https://ca.wikipedia.org) and professional journalists from [VilaWeb](https://www.vilaweb.cat/).
### Annotations
#### Annotation process
We did an aggregation and balancing from [ViquiQuAD](https://huggingface.co/datasets/projecte-aina/viquiquad) and [VilaQuAD](https://huggingface.co/datasets/projecte-aina/vilaquad) datasets.
To annotate those datasets, we commissioned the creation of 1 to 5 questions for each context, following an adaptation of the guidelines from SQuAD 1.0 [(Rajpurkar, Pranav et al., 2016)](http://arxiv.org/abs/1606.05250).
For compatibility with similar datasets in other languages, we followed existing curation guidelines as closely as possible.
#### Who are the annotators?
Annotation was commissioned by a specialized company that hired a team of native language speakers.
### Personal and Sensitive Information
No personal or sensitive information is included.
## Considerations for Using the Data
### Social Impact of Dataset
We hope this corpus contributes to the development of language models in Catalan, a low-resource language.
### Discussion of Biases
[N/A]
### Other Known Limitations
[N/A]
## Additional Information
### Dataset Curators
Text Mining Unit (TeMU) at the Barcelona Supercomputing Center (bsc-temu@bsc.es)
This work was funded by the [Departament de la Vicepresidència i de Polítiques Digitals i Territori de la Generalitat de Catalunya](https://politiquesdigitals.gencat.cat/ca/inici/index.html#googtrans(ca|en) within the framework of [Projecte AINA](https://politiquesdigitals.gencat.cat/ca/economia/catalonia-ai/aina).
### Licensing Information
This work is licensed under a <a rel="license" href="https://creativecommons.org/licenses/by-sa/4.0/">Attribution-ShareAlike 4.0 International License</a>.
### Contributions
[N/A]
jakartaresearch/inglish | jakartaresearch | 2022-08-19T15:23:15Z | 162 | 0 | null | [
"task_categories:translation",
"annotations_creators:machine-generated",
"language_creators:machine-generated",
"multilinguality:translation",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:id",
"language:en",
"license:cc-by-4.0",
"indonesian",
"english",
"translation",
... | 2022-08-19T15:23:15Z | 2022-08-19T15:05:58.000Z | 2022-08-19T15:05:58 | ---
annotations_creators:
- machine-generated
language:
- id
- en
language_creators:
- machine-generated
license:
- cc-by-4.0
multilinguality:
- translation
pretty_name: 'Inglish: Indonesian English Machine Translation Dataset'
size_categories:
- 10K<n<100K
source_datasets:
- original
tags:
- indonesian
- english
- translation
task_categories:
- translation
task_ids: []
---
# Dataset Card for Inglish: Indonesian English Translation Dataset
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
The original data come from the MSRP dataset; the translations were generated with Google Translate.
If you find any translation errors, feel free to open a new discussion.
### Supported Tasks and Leaderboards
Machine Translation
### Languages
English - Indonesian
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@andreaschandra](https://github.com/andreaschandra) for adding this dataset.
waylandy/phosformer_curated | waylandy | 2023-01-25T17:44:30Z | 162 | 1 | null | [
"region:us"
] | 2023-01-25T17:44:30Z | 2023-01-25T17:42:25.000Z | 2023-01-25T17:42:25 | Entry not found
mlfoundations/datacomp_small | mlfoundations | 2023-08-21T21:44:22Z | 162 | 1 | null | [
"license:cc-by-4.0",
"region:us"
] | 2023-08-21T21:44:22Z | 2023-05-22T18:14:45.000Z | 2023-05-22T18:14:45 | ---
license: cc-by-4.0
---
## DataComp Small Pool
This repository contains metadata files for the small pool of DataComp. For details on how to use the metadata, please visit [our website](https://www.datacomp.ai/) and our [github repository](https://github.com/mlfoundations/datacomp).
We distribute the image url-text samples and metadata under a standard Creative Common CC-BY-4.0 license. The individual images are under their own copyrights.
## Terms and Conditions
We have terms of service that are similar to those adopted by HuggingFace (https://huggingface.co/terms-of-service), which covers their dataset library. Specifically, any content you download, access or use from our index, is at your own risk and subject to the terms of service or copyright limitations accompanying such content. The image url-text index, which is a research artifact, is provided as is. By using said index, you assume all risks, including but not limited to, liabilities related to image downloading and storage.
GATE-engine/describable_textures | GATE-engine | 2023-06-05T17:13:02Z | 162 | 0 | null | [
"region:us"
] | 2023-06-05T17:13:02Z | 2023-06-04T23:57:38.000Z | 2023-06-04T23:57:38 | ---
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype: int64
splits:
- name: train
num_bytes: 350355304.0
num_examples: 3960
- name: validation
num_bytes: 72331220.0
num_examples: 840
- name: test
num_bytes: 73428430.0
num_examples: 840
download_size: 0
dataset_size: 496114954.0
---
# Dataset Card for "describable_textures"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
DISCOX/DISCO-10K-random | DISCOX | 2023-06-20T14:25:17Z | 162 | 1 | null | [
"license:cc-by-4.0",
"region:us"
] | 2023-06-20T14:25:17Z | 2023-06-10T19:17:26.000Z | 2023-06-10T19:17:26 | ---
license: cc-by-4.0
dataset_info:
features:
- name: video_url_youtube
dtype: string
- name: video_title_youtube
dtype: string
- name: track_name_spotify
dtype: string
- name: video_duration_youtube_sec
dtype: float64
- name: preview_url_spotify
dtype: string
- name: video_view_count_youtube
dtype: float64
- name: video_thumbnail_url_youtube
dtype: string
- name: search_query_youtube
dtype: string
- name: video_description_youtube
dtype: string
- name: track_id_spotify
dtype: string
- name: album_id_spotify
dtype: string
- name: artist_id_spotify
sequence: string
- name: track_duration_spotify_ms
dtype: int64
- name: primary_artist_name_spotify
dtype: string
- name: track_release_date_spotify
dtype: string
- name: explicit_content_spotify
dtype: bool
- name: similarity_duration
dtype: float64
- name: similarity_query_video_title
dtype: float64
- name: similarity_query_description
dtype: float64
- name: similarity_audio
dtype: float64
- name: audio_embedding_spotify
sequence: float32
- name: audio_embedding_youtube
sequence: float32
splits:
- name: train
num_bytes: 47861223.0
num_examples: 10000
download_size: 57725964
dataset_size: 47861223.0
---
### Getting Started
You can load the dataset using the Hugging Face `datasets` library:
```python
from datasets import load_dataset
ds = load_dataset("DISCOX/DISCO-10K-random")
```
The dataset contains 10,000 random samples from the DISCO-10M dataset found [here](https://huggingface.co/datasets/DISCOX/DISCO-10M).
## Dataset Structure
The dataset contains the following features:
```json
[
  "video_url_youtube",
  "video_title_youtube",
  "track_name_spotify",
  "video_duration_youtube_sec",
  "preview_url_spotify",
  "video_view_count_youtube",
  "video_thumbnail_url_youtube",
  "search_query_youtube",
  "video_description_youtube",
  "track_id_spotify",
  "album_id_spotify",
  "artist_id_spotify",
  "track_duration_spotify_ms",
  "primary_artist_name_spotify",
  "track_release_date_spotify",
  "explicit_content_spotify",
  "similarity_duration",
  "similarity_query_video_title",
  "similarity_query_description",
  "similarity_audio",
  "audio_embedding_spotify",
  "audio_embedding_youtube"
]
```
More details about the dataset can be found [here](https://huggingface.co/datasets/DISCOX/DISCO-10M).
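Each row carries two audio embeddings (`audio_embedding_spotify` and `audio_embedding_youtube`) that can be compared directly. A minimal sketch of such a comparison in plain Python (the toy vectors below are stand-ins; the real embeddings are much longer):

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length float vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

spotify_emb = [0.1, 0.3, 0.5]   # stand-in for audio_embedding_spotify
youtube_emb = [0.2, 0.6, 1.0]   # stand-in for audio_embedding_youtube
print(round(cosine_similarity(spotify_emb, youtube_emb), 4))  # 1.0 (parallel vectors)
```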
<!--
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
[More Information Needed]
-->
Trelis/function_calling_extended | Trelis | 2023-10-30T11:06:37Z | 162 | 26 | null | [
"task_categories:question-answering",
"task_categories:conversational",
"task_categories:text-generation",
"size_categories:n<1K",
"language:en",
"function call",
"function calling",
"function-calling",
"region:us"
] | 2023-10-30T11:06:37Z | 2023-07-31T10:44:02.000Z | 2023-07-31T10:44:02 | ---
task_categories:
- question-answering
- conversational
- text-generation
language:
- en
tags:
- function call
- function calling
- function-calling
size_categories:
- n<1K
extra_gated_prompt: "Access to this dataset requires the purchase of a license [here](https://buy.stripe.com/fZeeVG5tP2Hxg7ecNj)"
extra_gated_fields:
Name: text
Affiliation: text
Email: text
I have purchased a license (access will be granted once your payment clears): checkbox
I agree to the terms of the license described on the dataset card: checkbox
---
# Trelis Function Calling Dataset
- Allows models to be fine-tuned for function-calling.
- The dataset is human generated and does not make use of Llama 2 or OpenAI!
- Contains 59 training and 17 test rows
- Based on eight functions: search_bing, search_arxiv, save_chat, read_json_file, list_files, get_current_weather, delete_file, clear_chat
Access this dataset by purchasing a license [HERE](https://buy.stripe.com/fZeeVG5tP2Hxg7ecNj).
Alternatively, you can find pre-trained function calling models for Llama 2 and Mistral [HERE](https://huggingface.co/Trelis/Llama-2-7b-chat-hf-function-calling-v2)
--Change-log--
11Oct2023: Minor update adding in short prompts like "duck" to which the LLM should respond with a description of a duck or ducks, not a function call.
22Aug2023: Major updates to the main branch:
- The 'systemPrompt' column is now replaced by 'functionList', which contains a raw list of function metadata without any guidance.
- The previous dataset, with 'systemPrompt' - containing specific instructions - has been moved to the 'explicit' branch.
- The 'implicit' branch is a copy of the 'explicit' branch, but with slightly less instruction provided to the LLM in the systemPrompt column.
The reasons for these updates are:
- For one-shot model prompting, it is helpful to provide as much description as possible to the LLM.
- For fine-tuning, it is desirable to minimise the length of any added context to describe functions, especially if not necessary.
Users can play around with the different levels of instruction provided. In summary:
- 'main' - provides the lowest level of instruction on how to use the functions
- 'implicit' - moderate instructions
- 'explicit' - detailed instructions
18Aug2023: Added new 'implicit' branch with a shorter system prompt. Performs similarly to main branch, but uses fewer tokens for prompting.
15Aug2023: Added datasets to fine-tune models for awareness of available functions.
## Fine-Tuning Notes and Scripts
The objective of function calling is for the model to return a structured json object *and nothing else*. The performance of fine-tuning depends **strongly** on how the attention mask and loss mask are set. For further details see the [Youtube Video Here](https://youtu.be/OQdp-OeG1as)
### QLoRa Training Notebook for Llama 2 (FREE)
- Access a basic Google Colab script for fine-tuning [here](https://colab.research.google.com/drive/1uMSS1o_8YOPyG1X_4k6ENEE3kJfBGGhH?usp=sharing).
### ADVANCED Fine-tuning Notebook for Structured Responses (incl. function calling) (PAID)
- Fine-tune models for function calling or other structured responses.
- Includes a prompt loss-mask for improved performance when structured responses are required.
- Includes a stop token after responses - allowing the model to provide a short response (e.g. a function call) and then stop.
- Request [access here](https://buy.stripe.com/5kAfZK6xT2Hxg7e8wW).
## Licensing
The Function Calling Extended dataset is commercially licensed. Users can purchase a license per seat/user from [here](https://buy.stripe.com/fZeeVG5tP2Hxg7ecNj).
Further terms:
- Licenses are not transferable to other users/entities.
### Attribution of data sources
This project includes data from the TruthfulQA dataset, which is available at: https://huggingface.co/datasets/truthful_qa. The truthful_qa dataset is licensed under the Apache License 2.0, Copyright (C) 2023, Stephanie Lin, Jacob Hilton, and Owain Evans.
## Dataset Structure
The datasets (train and test) contain three prompt types:
1. The first portion provides function metadata in the systemPrompt but then has userPrompt and assistantResponse values that do not require function calling. This is to get the language model accustomed to having function metadata available, but not using it. Questions and answers for these prompts are generated by running addBlank.py and the questions and answers come from [truthful_qa](https://huggingface.co/datasets/truthful_qa) - see below for license details.
2. The second portion of the train and test datasets provide examples where a function call is necessary.
3. The third portion (new as of August 13th 2023) acclimatises the model to recognising what functions it has available from the system prompt, and sharing that with the user when appropriate. Further extended on October 11th to add one and two word prompts not requiring function calls as responses.
## Branches
Specify the branch using:
```
from datasets import load_dataset

data = load_dataset(
    "Trelis/function_calling_extended",
    revision="implicit"  # optionally specify a branch
)
```
The 'main' branch uses short system/function prompt, with no instruction on usage (see the other branches for prompts with stronger instruction):
```
{ "function": "search_bing", "description": "Search the web for content on Bing. This allows users to search online/the internet/the web for content.", "arguments": [ { "name": "query", "type": "string", "description": "The search query string" } ] } { "function": "list_files", "description": "This function provides a list of files in the user's directory. It can be useful when the user wants to check what files they have. This function requires no parameters and returns no values.", "arguments": [] }
```
The 'explicit' branch provides detailed instructions to the language model on how to call functions:
```
You are a helpful research assistant. The following functions are available for you to fetch further data to answer user questions, if relevant: { "function": "search_bing", "description": "Search the web for content on Bing. This allows users to search online/the internet/the web for content.", "arguments": [ { "name": "query", "type": "string", "description": "The search query string" } ] } { "function": "list_files", "description": "This function provides a list of files in the user's directory. It can be useful when the user wants to check what files they have. This function requires no parameters and returns no values.", "arguments": [] } To call a function, respond - immediately and only - with a JSON object of the following format: { "function": "function_name", "arguments": { "argument1": value1, "argument2": value2 } }
```
The 'implicit' branch uses a shorter, less explicit prompt that performs similarly and is therefore recommended, as it reduces the length of the system prompt:
```
You are a helpful research assistant. The following functions are available for you to fetch further data to answer user questions, if relevant: { "function": "search_bing", "description": "Search the web for content on Bing. This allows users to search online/the internet/the web for content.", "arguments": [ { "name": "query", "type": "string", "description": "The search query string" } ] } { "function": "list_files", "description": "This function provides a list of files in the user's directory. It can be useful when the user wants to check what files they have. This function requires no parameters and returns no values.", "arguments": [] }
```
Said differently, the 'implicit' branch omits the following portion of the prompt:
```
To call a function, respond - immediately and only - with a JSON object of the following format: { "function": "function_name", "arguments": { "argument1": value1, "argument2": value2 } }
```
## Training and Inference Syntax
Here is sample prompt syntax for Llama. This will depend on the language model you use and also on how you wish to fine-tune the model:
```
# Define the roles and markers
B_INST, E_INST = "[INST]", "[/INST]"
B_SYS, E_SYS = "<<SYS>>\n", "\n<</SYS>>\n\n"
system_prompt = data['test'][index]['systemPrompt']
user_prompt = data['test'][index]['userPrompt']
correct_answer = data['test'][index]['assistantResponse']
# Format your prompt template
prompt = f"{B_INST} {B_SYS}{system_prompt.strip()}{E_SYS}{user_prompt.strip()} {E_INST}\n\n"
```
The `\n\n` after E_INST is important as it prevents E_INST from sometimes being tokenized with the ']' attached to the next characters. Using `\n\n` also gives the model the best chance of correctly telling whether to call a function or provide a usual response.
Alternatively, you may prefer to stay away from the system prompt and create a separate wrapper for function descriptions (as an example for the data on 'main'):
```
# Define the roles and markers
B_INST, E_INST = "[INST]", "[/INST]"
B_FUNC, E_FUNC = "<FUNCTIONS>", "</FUNCTIONS>\n\n"
functionList = data['test'][index]['functionList']
user_prompt = data['test'][index]['userPrompt']
correct_answer = data['test'][index]['assistantResponse']
# Format your prompt template
prompt = f"{B_FUNC}{functionList.strip()}{E_FUNC}{B_INST} {user_prompt.strip()} {E_INST}\n\n"
```
## File Structure (for prompt dataset generation)
- `functions/`: This directory contains function files, each of which is a JSON file with a specific structure that describes a function and its sample prompts and responses.
- `generate_dataset.py`: This Python script generates the base training and testing dataset CSV files.
- `addBlank.py`: This adds in truthfulqa questions and answers after system prompts with functions
- `hello.py`: adds in prompts to accustomise the model to the presence of functions in the system prompt.
### JSON File Structure
Each function file should be a JSON file with the following structure:
```json
{
"functionMetaData": {
"function": "function_name",
"description": "function_description",
"arguments": [
{
"name": "argument_name",
"type": "argument_type",
"description": "argument_description"
},
...
]
},
"samplePromptResponsePairs": [
{
"prompt": "sample_prompt",
"response": {
"arguments": {
"argument_name": "argument_value",
...
}
}
},
...
]
}
```
The `functionMetaData` object describes the function. The `samplePromptResponsePairs` array contains sample prompts and responses for the function.
## Dataset Generation
To generate the dataset, run the `generate_dataset.py` script. This script will iterate over each function file and generate a CSV row for each sample prompt-response pair.
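The generation loop described above can be sketched as follows (the column names follow this card, but the actual `generate_dataset.py` may differ in detail):

```python
import csv
import json
import pathlib

def generate_rows(function_dir="functions"):
    """One CSV row per sample prompt-response pair across all function files."""
    rows = []
    for path in sorted(pathlib.Path(function_dir).glob("*.json")):
        spec = json.loads(path.read_text())
        function_list = json.dumps(spec["functionMetaData"])
        for pair in spec["samplePromptResponsePairs"]:
            rows.append({
                "functionList": function_list,
                "userPrompt": pair["prompt"],
                "assistantResponse": json.dumps(pair["response"]),
            })
    return rows

def write_csv(rows, out_path):
    """Write the rows using the CSV column layout described in this card."""
    with open(out_path, "w", newline="") as f:
        writer = csv.DictWriter(
            f, fieldnames=["functionList", "userPrompt", "assistantResponse"]
        )
        writer.writeheader()
        writer.writerows(rows)
```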
## CSV File Structure
The generated CSV file has the following columns:
'main' branches:
- `functionList`: Descriptions of two functions (the current function and a randomly selected other function).
- `userPrompt`: The user's prompt.
- `assistantResponse`: The assistant's response.
'explicit' and 'implicit' branches:
- `systemPrompt`: The system's prompt, which includes the descriptions of two functions (the current function and a randomly selected other function) and instructions on how to call a function ('explicit branch only').
- `userPrompt`: The user's prompt.
- `assistantResponse`: The assistant's response.
## Testing JSON Structure
A script named `validate.py` can be used to validate the structure of a function JSON file. It checks for the presence and correct types of all necessary keys in the JSON structure.
To use the script, call it from the command line with the name of the function file as an argument:
```
python validate.py my_function.json
```
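A minimal sketch of the checks such a script might perform (the shipped `validate.py` may differ in detail):

```python
import json

def validate_function_file(path):
    """Raise ValueError if the file deviates from the JSON structure above."""
    with open(path) as f:
        data = json.load(f)
    meta = data.get("functionMetaData", {})
    for key, expected in (("function", str), ("description", str), ("arguments", list)):
        if not isinstance(meta.get(key), expected):
            raise ValueError(f"functionMetaData.{key} missing or not a {expected.__name__}")
    for arg in meta["arguments"]:
        for key in ("name", "type", "description"):
            if key not in arg:
                raise ValueError(f"argument is missing '{key}'")
    for pair in data.get("samplePromptResponsePairs", []):
        if "prompt" not in pair or "response" not in pair:
            raise ValueError("each sample pair needs 'prompt' and 'response'")
    return True
```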
JasiekKaczmarczyk/giant-midi-sustain-masked | JasiekKaczmarczyk | 2023-10-02T10:49:22Z | 162 | 0 | null | [
"region:us"
] | 2023-10-02T10:49:22Z | 2023-10-02T09:46:21.000Z | 2023-10-02T09:46:21 | ---
dataset_info:
features:
- name: midi_filename
dtype: string
- name: source
dtype: string
- name: pitch
sequence: int16
length: 128
- name: dstart
sequence: float32
length: 128
- name: duration
sequence: float32
length: 128
- name: velocity
sequence: int16
length: 128
- name: masking_spaces
struct:
- name: <Random Mask>
sequence: bool
length: 128
- name: <LH Mask>
sequence: bool
length: 128
- name: <RH Mask>
sequence: bool
length: 128
- name: <Harmonic Root Mask>
sequence: bool
length: 128
- name: <Harmonic Outliers Mask>
sequence: bool
length: 128
splits:
- name: train
num_bytes: 453725935
num_examples: 239612
- name: validation
num_bytes: 55936260
num_examples: 29544
- name: test
num_bytes: 52710054
num_examples: 27844
download_size: 211201981
dataset_size: 562372249
---
# Dataset Card for "giant-midi-sustain-masked"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
sayan1101/identity_finetune_data_3 | sayan1101 | 2023-11-03T04:16:41Z | 162 | 0 | null | [
"region:us"
] | 2023-11-03T04:16:41Z | 2023-11-02T18:35:32.000Z | 2023-11-02T18:35:32 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: question
dtype: string
- name: answer
dtype: string
splits:
- name: train
num_bytes: 262944
num_examples: 815
- name: test
num_bytes: 28835
num_examples: 91
download_size: 142606
dataset_size: 291779
---
# Dataset Card for "identity_finetune_data_3"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
gabrielmbmb/ultrafeedback-prompts-judgelm-gpt35 | gabrielmbmb | 2023-11-15T14:36:54Z | 162 | 0 | null | [
"region:us"
] | 2023-11-15T14:36:54Z | 2023-11-15T14:26:25.000Z | 2023-11-15T14:26:25 | ---
dataset_info:
features:
- name: input
dtype: string
- name: generation_model
dtype: string
- name: generation_prompt
dtype: string
- name: raw_generation_responses
sequence: string
- name: generations
sequence: string
- name: labelling_model
dtype: string
- name: labelling_prompt
list:
- name: content
dtype: string
- name: role
dtype: string
- name: raw_labelling_response
dtype: string
- name: ratings
sequence: int64
- name: rationale
dtype: string
splits:
- name: train
num_bytes: 13479349
num_examples: 1000
download_size: 6250632
dataset_size: 13479349
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "ultrafeedback-prompts-judgelm-gpt35"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
jimregan/lasid | jimregan | 2021-10-06T23:31:28Z | 161 | 0 | null | [
"region:us"
] | 2021-10-06T23:31:28Z | 2022-03-02T23:29:22.000Z | 2022-03-02T23:29:22 | Entry not found
mweiss/fashion_mnist_corrupted | mweiss | 2023-03-19T11:45:31Z | 161 | 3 | null | [
"task_categories:image-classification",
"annotations_creators:expert-generated",
"annotations_creators:machine-generated",
"language_creators:machine-generated",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:extended|fashion_mnist",
"language:en",
"license:mit",
"ar... | 2023-03-19T11:45:31Z | 2022-04-21T11:34:02.000Z | 2022-04-21T11:34:02 | ---
annotations_creators:
- expert-generated
- machine-generated
language_creators:
- machine-generated
language:
- en
license:
- mit
multilinguality:
- monolingual
pretty_name: fashion-mnist-corrupted
size_categories:
- 10K<n<100K
source_datasets:
- extended|fashion_mnist
task_categories:
- image-classification
task_ids: []
---
# Fashion-Mnist-C (Corrupted Fashion-Mnist)
A corrupted Fashion-MNIST benchmark for testing out-of-distribution robustness of computer vision models trained on Fashion-MNIST.
[Fashion-Mnist](https://github.com/zalandoresearch/fashion-mnist) is a drop-in replacement for MNIST and Fashion-Mnist-C is a corresponding drop-in replacement for [MNIST-C](https://arxiv.org/abs/1906.02337).
## Corruptions
The following corruptions are applied to the images, equivalently to MNIST-C:
- **Noise** (shot noise and impulse noise)
- **Blur** (glass and motion blur)
- **Transformations** (shear, scale, rotate, brightness, contrast, saturate, inverse)
In addition, we apply various **image flippings and turnings**: for fashion images, flipping an image does not change its label
and still yields a valid image. However, we noticed that in the nominal Fashion-MNIST dataset, most images are identically oriented
(e.g. most shoes point to the left side). Thus, flipped images provide valid OOD inputs.
Most corruptions are applied at a randomly selected level of *severity*, such that some corrupted images are very hard to classify, whereas for others the corruption, while present, is subtle.
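The random-severity scheme can be sketched as follows. This is an illustrative stand-in, not the dataset's actual generation code: the severity values are made-up examples, and only the impulse-noise corruption is shown.

```python
import random

# Illustrative sketch only (not the code used to build this dataset):
# apply impulse noise to a grayscale image at a randomly selected
# severity level, mirroring the random-severity scheme described above.
def impulse_noise(image, severities=(0.02, 0.05, 0.1, 0.2), seed=0):
    rng = random.Random(seed)
    p = rng.choice(severities)  # randomly selected severity level
    return [
        [rng.choice((0, 255)) if rng.random() < p else px for px in row]
        for row in image
    ]

img = [[128] * 28 for _ in range(28)]  # dummy 28x28 grayscale image
noisy = impulse_noise(img)
```

At low severities only a few pixels are flipped to black or white, while higher severities make the image noticeably harder to classify.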
## Examples
| Turned | Blurred | Rotated | Noise | Noise | Turned |
| ------------- | ------------- | --------| --------- | -------- | --------- |
| <img src="https://github.com/testingautomated-usi/fashion-mnist-c/raw/main/generated/png-examples/single_0.png" width="100" height="100"> | <img src="https://github.com/testingautomated-usi/fashion-mnist-c/raw/main/generated/png-examples/single_1.png" width="100" height="100"> | <img src="https://github.com/testingautomated-usi/fashion-mnist-c/raw/main/generated/png-examples/single_6.png" width="100" height="100"> | <img src="https://github.com/testingautomated-usi/fashion-mnist-c/raw/main/generated/png-examples/single_3.png" width="100" height="100"> | <img src="https://github.com/testingautomated-usi/fashion-mnist-c/raw/main/generated/png-examples/single_4.png" width="100" height="100"> | <img src="https://github.com/testingautomated-usi/fashion-mnist-c/raw/main/generated/png-examples/single_5.png" width="100" height="100"> |
## Citation
If you use this dataset, please cite the following paper:
```
@inproceedings{Weiss2022SimpleTechniques,
title={Simple Techniques Work Surprisingly Well for Neural Network Test Prioritization and Active Learning},
author={Weiss, Michael and Tonella, Paolo},
booktitle={Proceedings of the 31st ACM SIGSOFT International Symposium on Software Testing and Analysis},
year={2022}
}
```
Also, you may want to cite FMNIST and MNIST-C.
## Credits
- Fashion-Mnist-C is inspired by Google's MNIST-C, and our repository is essentially a clone of theirs. See their [paper](https://arxiv.org/abs/1906.02337) and [repo](https://github.com/google-research/mnist-c).
- Find the nominal (i.e., non-corrupted) Fashion-MNIST dataset [here](https://github.com/zalandoresearch/fashion-mnist).
| [
-0.5019714832305908,
-0.579603374004364,
0.10623946040868759,
0.1270734667778015,
-0.3556719720363617,
-0.035827673971652985,
0.04747168347239494,
-0.4632614254951477,
0.3368651270866394,
0.08018629252910614,
-0.6884732842445374,
-0.4287882149219513,
-0.12473741918802261,
0.027069887146353... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
iashchak/igor_link_dialogues-alpaca | iashchak | 2023-11-09T19:54:37Z | 161 | 0 | null | [
"not-for-all-audiences",
"region:us"
] | 2023-11-09T19:54:37Z | 2023-11-01T22:25:42.000Z | 2023-11-01T22:25:42 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: instruction
dtype: string
- name: input
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 13212687.468058102
num_examples: 13756
- name: test
num_bytes: 1514340.375
num_examples: 1542
download_size: 0
dataset_size: 14727027.843058102
tags:
- not-for-all-audiences
---
# Dataset Card for "igor_link_dialogues-alpaca"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.6570791602134705,
-0.4002768099308014,
0.18959574401378632,
0.1694408506155014,
-0.3726145625114441,
-0.11575564742088318,
0.1917204111814499,
-0.2829782962799072,
1.1307259798049927,
0.37244150042533875,
-1.002134084701538,
-0.707201361656189,
-0.5275996923446655,
-0.12132201343774796,... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
norec | null | 2023-01-25T14:41:38Z | 160 | 1 | norec | [
"task_categories:token-classification",
"task_ids:named-entity-recognition",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:original",
"language:nb",
"language:nn",
"language:no",
"license:cc-by-nc-... | 2023-01-25T14:41:38Z | 2022-03-02T23:29:22.000Z | 2022-03-02T23:29:22 | ---
annotations_creators:
- expert-generated
language_creators:
- found
language:
- nb
- nn
- 'no'
license:
- cc-by-nc-4.0
multilinguality:
- monolingual
size_categories:
- 100K<n<1M
source_datasets:
- original
task_categories:
- token-classification
task_ids:
- named-entity-recognition
paperswithcode_id: norec
pretty_name: NoReC
dataset_info:
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: pos_tags
sequence:
class_label:
names:
'0': ADJ
'1': ADP
'2': ADV
'3': AUX
'4': CCONJ
'5': DET
'6': INTJ
'7': NOUN
'8': NUM
'9': PART
'10': PRON
'11': PROPN
'12': PUNCT
'13': SCONJ
'14': SYM
'15': VERB
'16': X
- name: xpos_tags
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: train
num_bytes: 1254757266
num_examples: 680792
- name: validation
num_bytes: 189534106
num_examples: 101106
- name: test
num_bytes: 193801708
num_examples: 101594
download_size: 212492611
dataset_size: 1638093080
---
# Dataset Card for NoReC
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Repository:** https://github.com/ltgoslo/norec
- **Paper:** http://www.lrec-conf.org/proceedings/lrec2018/pdf/851.pdf
- **Leaderboard:** [More Information Needed]
- **Point of Contact:** [More Information Needed]
### Dataset Summary
This dataset contains Norwegian Review Corpus (NoReC), created for the purpose of training and evaluating models for document-level sentiment analysis. More than 43,000 full-text reviews have been collected from major Norwegian news sources and cover a range of different domains, including literature, movies, video games, restaurants, music and theater, in addition to product reviews across a range of categories. Each review is labeled with a manually assigned score of 1–6, as provided by the rating of the original author.
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
The sentences in the dataset are in Norwegian (nb, nn, no).
## Dataset Structure
### Data Instances
A sample from training set is provided below:
```
{'deprel': ['det',
'amod',
'cc',
'conj',
'nsubj',
'case',
'nmod',
'cop',
'case',
'case',
'root',
'flat:name',
'flat:name',
'punct'],
'deps': ['None',
'None',
'None',
'None',
'None',
'None',
'None',
'None',
'None',
'None',
'None',
'None',
'None',
'None'],
'feats': ["{'Gender': 'Masc', 'Number': 'Sing', 'PronType': 'Dem'}",
"{'Definite': 'Def', 'Degree': 'Pos', 'Number': 'Sing'}",
'None',
"{'Definite': 'Def', 'Degree': 'Pos', 'Number': 'Sing'}",
"{'Definite': 'Def', 'Gender': 'Masc', 'Number': 'Sing'}",
'None',
'None',
"{'Mood': 'Ind', 'Tense': 'Pres', 'VerbForm': 'Fin'}",
'None',
'None',
'None',
'None',
'None',
'None'],
'head': ['5',
'5',
'4',
'2',
'11',
'7',
'5',
'11',
'11',
'11',
'0',
'11',
'11',
'11'],
'idx': '000000-02-01',
'lemmas': ['den',
'andre',
'og',
'sist',
'sesong',
'av',
'Rome',
'være',
'ute',
'på',
'DVD',
'i',
'Norge',
'$.'],
'misc': ['None',
'None',
'None',
'None',
'None',
'None',
'None',
'None',
'None',
'None',
'None',
'None',
"{'SpaceAfter': 'No'}",
'None'],
'pos_tags': [5, 0, 4, 0, 7, 1, 11, 3, 1, 1, 11, 1, 11, 12],
'text': 'Den andre og siste sesongen av Rome er ute på DVD i Norge.',
'tokens': ['Den',
'andre',
'og',
'siste',
'sesongen',
'av',
'Rome',
'er',
'ute',
'på',
'DVD',
'i',
'Norge',
'.'],
'xpos_tags': ['None',
'None',
'None',
'None',
'None',
'None',
'None',
'None',
'None',
'None',
'None',
'None',
'None',
'None']}
```
### Data Fields
The data instances have the following fields:
- deprel: [More Information Needed]
- deps: [More Information Needed]
- feats: [More Information Needed]
- head: [More Information Needed]
- idx: index
- lemmas: lemmas of all tokens
- misc: [More Information Needed]
- pos_tags: part of speech tags
- text: text string
- tokens: tokens
- xpos_tags: [More Information Needed]
The part-of-speech tags correspond to these labels: "ADJ" (0), "ADP" (1), "ADV" (2), "AUX" (3), "CCONJ" (4), "DET" (5), "INTJ" (6), "NOUN" (7), "NUM" (8), "PART" (9), "PRON" (10), "PROPN" (11), "PUNCT" (12), "SCONJ" (13), "SYM" (14), "VERB" (15), "X" (16).
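The integer ids in `pos_tags` can be decoded with a small helper built from the mapping above (a convenience sketch, not part of the dataset loader):

```python
# Decode the integer part-of-speech ids in the `pos_tags` field back to
# their string labels, following the id -> label mapping listed above.
POS_LABELS = [
    "ADJ", "ADP", "ADV", "AUX", "CCONJ", "DET", "INTJ", "NOUN", "NUM",
    "PART", "PRON", "PROPN", "PUNCT", "SCONJ", "SYM", "VERB", "X",
]

def decode_pos_tags(tag_ids):
    """Map a sequence of integer pos_tag ids to their string labels."""
    return [POS_LABELS[i] for i in tag_ids]

# The ids from the sample instance above decode to:
# ['DET', 'ADJ', 'CCONJ', 'ADJ', 'NOUN', 'ADP', 'PROPN', 'AUX',
#  'ADP', 'ADP', 'PROPN', 'ADP', 'PROPN', 'PUNCT']
print(decode_pos_tags([5, 0, 4, 0, 7, 1, 11, 3, 1, 1, 11, 1, 11, 12]))
```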
### Data Splits
The training, validation, and test sets contain `680792`, `101106`, and `101594` sentences, respectively.
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
[More Information Needed]
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
[More Information Needed]
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
```
@InProceedings{VelOvrBer18,
author = {Erik Velldal and Lilja {\O}vrelid and
Eivind Alexander Bergem and Cathrine Stadsnes and
Samia Touileb and Fredrik J{\o}rgensen},
title = {{NoReC}: The {N}orwegian {R}eview {C}orpus},
booktitle = {Proceedings of the 11th edition of the
Language Resources and Evaluation Conference},
year = {2018},
address = {Miyazaki, Japan},
pages = {4186--4191}
}
```
### Contributions
Thanks to [@abhishekkrthakur](https://github.com/abhishekkrthakur) for adding this dataset. | [
-0.5783214569091797,
-0.4752696454524994,
0.040902670472860336,
0.07016754895448685,
-0.3062487542629242,
0.042036961764097214,
-0.23793739080429077,
-0.3017394244670868,
0.5646600127220154,
0.5192346572875977,
-0.7524192929267883,
-1.191929817199707,
-0.5322146415710449,
0.243443131446838... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
metaeval/linguisticprobing | metaeval | 2022-11-09T15:41:29Z | 160 | 0 | null | [
"task_categories:text-classification",
"annotations_creators:machine-generated",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:unknown",
"language:en",
"region:us"
] | 2022-11-09T15:41:29Z | 2022-03-02T23:29:22.000Z | 2022-03-02T23:29:22 | ---
annotations_creators:
- machine-generated
language:
- en
language_creators:
- crowdsourced
license: []
multilinguality:
- monolingual
pretty_name: linguisticprobing
size_categories:
- unknown
source_datasets: []
tags: []
task_categories:
- text-classification
task_ids: []
--- | [
-0.1285335123538971,
-0.1861683875322342,
0.6529128551483154,
0.49436232447624207,
-0.19319400191307068,
0.23607441782951355,
0.36072009801864624,
0.05056373029947281,
0.5793656706809998,
0.7400146722793579,
-0.650810182094574,
-0.23784008622169495,
-0.7102247476577759,
-0.0478255338966846... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
naver-clova-ix/cord-v1 | naver-clova-ix | 2022-07-14T14:08:12Z | 160 | 0 | null | [
"license:cc-by-4.0",
"region:us"
] | 2022-07-14T14:08:12Z | 2022-07-14T13:41:50.000Z | 2022-07-14T13:41:50 | ---
license: cc-by-4.0
---
| [
-0.1285335123538971,
-0.1861683875322342,
0.6529128551483154,
0.49436232447624207,
-0.19319400191307068,
0.23607441782951355,
0.36072009801864624,
0.05056373029947281,
0.5793656706809998,
0.7400146722793579,
-0.650810182094574,
-0.23784008622169495,
-0.7102247476577759,
-0.0478255338966846... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
silk-road/alpaca-data-gpt4-chinese | silk-road | 2023-05-23T05:33:21Z | 160 | 37 | null | [
"task_categories:text-generation",
"size_categories:10K<n<100K",
"language:zh",
"language:en",
"license:apache-2.0",
"gpt",
"alpaca",
"fine-tune",
"region:us"
] | 2023-05-23T05:33:21Z | 2023-05-23T02:10:49.000Z | 2023-05-23T02:10:49 | ---
license: apache-2.0
task_categories:
- text-generation
language:
- zh
- en
tags:
- gpt
- alpaca
- fine-tune
pretty_name: Alpaca-Data-GPT4-Chinese
size_categories:
- 10K<n<100K
--- | [
-0.1285335123538971,
-0.1861683875322342,
0.6529128551483154,
0.49436232447624207,
-0.19319400191307068,
0.23607441782951355,
0.36072009801864624,
0.05056373029947281,
0.5793656706809998,
0.7400146722793579,
-0.650810182094574,
-0.23784008622169495,
-0.7102247476577759,
-0.0478255338966846... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
M-A-D/Mixed-Arabic-Datasets-Repo | M-A-D | 2023-10-16T21:25:35Z | 160 | 12 | null | [
"task_categories:text-classification",
"task_categories:question-answering",
"task_categories:translation",
"task_categories:summarization",
"task_categories:conversational",
"task_categories:text-generation",
"task_categories:text2text-generation",
"task_categories:fill-mask",
"size_categories:1B<n... | 2023-10-16T21:25:35Z | 2023-08-27T01:19:21.000Z | 2023-08-27T01:19:21 | ---
language:
- ar
size_categories:
- 1B<n<10B
task_categories:
- text-classification
- question-answering
- translation
- summarization
- conversational
- text-generation
- text2text-generation
- fill-mask
pretty_name: Mixed Arabic Datasets (MAD) Corpus
dataset_info:
- config_name: Ara--Ali-C137--Hindawi-Books-dataset
features:
- name: BookLink
dtype: string
- name: BookName
dtype: string
- name: AuthorName
dtype: string
- name: AboutBook
dtype: string
- name: ChapterLink
dtype: string
- name: ChapterName
dtype: string
- name: ChapterText
dtype: string
- name: AboutAuthor
dtype: string
splits:
- name: train
num_bytes: 1364854259
num_examples: 49821
download_size: 494678002
dataset_size: 1364854259
- config_name: Ara--Goud--Goud-sum
features:
- name: article
dtype: string
- name: headline
dtype: string
- name: categories
dtype: string
splits:
- name: train
num_bytes: 288296544
num_examples: 139288
download_size: 147735776
dataset_size: 288296544
- config_name: Ara--J-Mourad--MNAD.v1
features:
- name: Title
dtype: string
- name: Body
dtype: string
- name: Category
dtype: string
splits:
- name: train
num_bytes: 1101921980
num_examples: 418563
download_size: 527154122
dataset_size: 1101921980
- config_name: Ara--JihadZa--IADD
features:
- name: Sentence
dtype: string
- name: Region
dtype: string
- name: DataSource
dtype: string
- name: Country
dtype: string
splits:
- name: train
num_bytes: 19167070
num_examples: 135804
download_size: 8644491
dataset_size: 19167070
- config_name: Ara--LeMGarouani--MAC-corpus
features:
- name: tweets
dtype: string
- name: type
dtype: string
- name: class
dtype: string
splits:
- name: train
num_bytes: 1945646
num_examples: 18087
download_size: 866198
dataset_size: 1945646
- config_name: Ara--MBZUAI--Bactrian-X
features:
- name: instruction
dtype: string
- name: input
dtype: string
- name: id
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 66093524
num_examples: 67017
download_size: 33063779
dataset_size: 66093524
- config_name: Ara--OpenAssistant--oasst1
features:
- name: message_id
dtype: string
- name: parent_id
dtype: string
- name: user_id
dtype: string
- name: created_date
dtype: string
- name: text
dtype: string
- name: role
dtype: string
- name: lang
dtype: string
- name: review_count
dtype: int32
- name: review_result
dtype: bool
- name: deleted
dtype: bool
- name: rank
dtype: float64
- name: synthetic
dtype: bool
- name: model_name
dtype: 'null'
- name: detoxify
dtype: 'null'
- name: message_tree_id
dtype: string
- name: tree_state
dtype: string
- name: emojis
struct:
- name: count
sequence: int32
- name: name
sequence: string
- name: labels
struct:
- name: count
sequence: int32
- name: name
sequence: string
- name: value
sequence: float64
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 58168
num_examples: 56
download_size: 30984
dataset_size: 58168
- config_name: Ara--Wikipedia
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 3052201469
num_examples: 1205403
download_size: 1316212231
dataset_size: 3052201469
- config_name: Ara--bigscience--xP3
features:
- name: inputs
dtype: string
- name: targets
dtype: string
splits:
- name: train
num_bytes: 4727881680
num_examples: 2148955
download_size: 2805060725
dataset_size: 4727881680
- config_name: Ara--cardiffnlp--tweet_sentiment_multilingual
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': negative
'1': neutral
'2': positive
splits:
- name: train
num_bytes: 306108
num_examples: 1839
- name: validation
num_bytes: 53276
num_examples: 324
- name: test
num_bytes: 141536
num_examples: 870
download_size: 279900
dataset_size: 500920
- config_name: Ara--miracl--miracl
features:
- name: query_id
dtype: string
- name: query
dtype: string
- name: positive_passages
list:
- name: docid
dtype: string
- name: text
dtype: string
- name: title
dtype: string
- name: negative_passages
list:
- name: docid
dtype: string
- name: text
dtype: string
- name: title
dtype: string
splits:
- name: train
num_bytes: 32012083
num_examples: 3495
download_size: 15798509
dataset_size: 32012083
- config_name: Ara--mustapha--QuranExe
features:
- name: text
dtype: string
- name: resource_name
dtype: string
- name: verses_keys
dtype: string
splits:
- name: train
num_bytes: 133108687
num_examples: 49888
download_size: 58769417
dataset_size: 133108687
- config_name: Ara--pain--Arabic-Tweets
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 41639770853
num_examples: 202700438
download_size: 22561651700
dataset_size: 41639770853
- config_name: Ara--saudinewsnet
features:
- name: source
dtype: string
- name: url
dtype: string
- name: date_extracted
dtype: string
- name: title
dtype: string
- name: author
dtype: string
- name: content
dtype: string
splits:
- name: train
num_bytes: 103654009
num_examples: 31030
download_size: 49117164
dataset_size: 103654009
- config_name: Ary--AbderrahmanSkiredj1--Darija-Wikipedia
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 8104410
num_examples: 4862
download_size: 3229966
dataset_size: 8104410
- config_name: Ary--Ali-C137--Darija-Stories-Dataset
features:
- name: ChapterName
dtype: string
- name: ChapterLink
dtype: string
- name: Author
dtype: string
- name: Text
dtype: string
- name: Tags
dtype: int64
splits:
- name: train
num_bytes: 476926644
num_examples: 6142
download_size: 241528641
dataset_size: 476926644
- config_name: Ary--Wikipedia
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 10007364
num_examples: 6703
download_size: 4094377
dataset_size: 10007364
- config_name: Arz--Wikipedia
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 1364641408
num_examples: 1617770
download_size: 306420318
dataset_size: 1364641408
configs:
- config_name: Ara--Ali-C137--Hindawi-Books-dataset
data_files:
- split: train
path: Ara--Ali-C137--Hindawi-Books-dataset/train-*
- config_name: Ara--Goud--Goud-sum
data_files:
- split: train
path: Ara--Goud--Goud-sum/train-*
- config_name: Ara--J-Mourad--MNAD.v1
data_files:
- split: train
path: Ara--J-Mourad--MNAD.v1/train-*
- config_name: Ara--JihadZa--IADD
data_files:
- split: train
path: Ara--JihadZa--IADD/train-*
- config_name: Ara--LeMGarouani--MAC-corpus
data_files:
- split: train
path: Ara--LeMGarouani--MAC-corpus/train-*
- config_name: Ara--MBZUAI--Bactrian-X
data_files:
- split: train
path: Ara--MBZUAI--Bactrian-X/train-*
- config_name: Ara--OpenAssistant--oasst1
data_files:
- split: train
path: Ara--OpenAssistant--oasst1/train-*
- config_name: Ara--Wikipedia
data_files:
- split: train
path: Ara--Wikipedia/train-*
- config_name: Ara--bigscience--xP3
data_files:
- split: train
path: Ara--bigscience--xP3/train-*
- config_name: Ara--cardiffnlp--tweet_sentiment_multilingual
data_files:
- split: train
path: Ara--cardiffnlp--tweet_sentiment_multilingual/train-*
- split: validation
path: Ara--cardiffnlp--tweet_sentiment_multilingual/validation-*
- split: test
path: Ara--cardiffnlp--tweet_sentiment_multilingual/test-*
- config_name: Ara--miracl--miracl
data_files:
- split: train
path: Ara--miracl--miracl/train-*
- config_name: Ara--mustapha--QuranExe
data_files:
- split: train
path: Ara--mustapha--QuranExe/train-*
- config_name: Ara--pain--Arabic-Tweets
data_files:
- split: train
path: Ara--pain--Arabic-Tweets/train-*
- config_name: Ara--saudinewsnet
data_files:
- split: train
path: Ara--saudinewsnet/train-*
- config_name: Ary--AbderrahmanSkiredj1--Darija-Wikipedia
data_files:
- split: train
path: Ary--AbderrahmanSkiredj1--Darija-Wikipedia/train-*
- config_name: Ary--Ali-C137--Darija-Stories-Dataset
data_files:
- split: train
path: Ary--Ali-C137--Darija-Stories-Dataset/train-*
- config_name: Ary--Wikipedia
data_files:
- split: train
path: Ary--Wikipedia/train-*
- config_name: Arz--Wikipedia
data_files:
- split: train
path: Arz--Wikipedia/train-*
---
# Dataset Card for "Mixed Arabic Datasets (MAD) Corpus"
**The Mixed Arabic Datasets Corpus : A Community-Driven Collection of Diverse Arabic Texts**
## Dataset Description
The Mixed Arabic Datasets (MAD) presents a dynamic compilation of diverse Arabic texts sourced from various online platforms and datasets. It addresses a critical challenge faced by researchers, linguists, and language enthusiasts: the fragmentation of Arabic language datasets across the Internet. With MAD, we are trying to centralize these dispersed resources into a single, comprehensive repository.
Encompassing a wide spectrum of content, ranging from social media conversations to literary masterpieces, MAD captures the rich tapestry of Arabic communication, including both standard Arabic and regional dialects.
This corpus offers comprehensive insights into the linguistic diversity and cultural nuances of Arabic expression.
## Usage
To use this dataset, pick one of the available configs:
`Ara--MBZUAI--Bactrian-X` | `Ara--OpenAssistant--oasst1` | `Ary--AbderrahmanSkiredj1--Darija-Wikipedia`
`Ara--Wikipedia` | `Ary--Wikipedia` | `Arz--Wikipedia`
`Ary--Ali-C137--Darija-Stories-Dataset` | `Ara--Ali-C137--Hindawi-Books-dataset` | ``
Example of usage:
```python
dataset = load_dataset('M-A-D/Mixed-Arabic-Datasets-Repo', 'Ara--MBZUAI--Bactrian-X')
```
If you load multiple datasets and want to merge them, you can simply leverage `concatenate_datasets()` from `datasets`:
```python
dataset3 = concatenate_datasets([dataset1['train'], dataset2['train']])
```
Note: process the datasets before merging to make sure the resulting dataset is consistent.
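As a library-agnostic sketch of that processing step, one option is to map every config onto a single shared schema before merging. The field names below are hypothetical examples borrowed from the config features listed in the metadata, not a prescribed workflow:

```python
# Illustrative sketch of "process before merging": normalise rows from two
# configs to one shared `text` column, then merge them. Field names are
# hypothetical examples.
def normalise(row, text_field):
    """Keep a single consistent `text` column and drop everything else."""
    return {"text": row[text_field]}

rows_a = [{"article": "first article body", "headline": "a headline"}]
rows_b = [{"Sentence": "a dialect sentence", "Region": "north"}]

merged = [normalise(r, "article") for r in rows_a] + \
         [normalise(r, "Sentence") for r in rows_b]
print([r["text"] for r in merged])  # → ['first article body', 'a dialect sentence']
```

The same idea applies when working with `datasets` objects directly, e.g. renaming and dropping columns so every split shares one schema before calling `concatenate_datasets()`.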
## Dataset Size
The Mixed Arabic Datasets (MAD) is a dynamic and evolving collection, with its size fluctuating as new datasets are added or removed. As MAD continuously expands, it becomes a living resource that adapts to the ever-changing landscape of Arabic language datasets.
**Dataset List**
MAD draws from a diverse array of sources, each contributing to its richness and breadth. While the collection is constantly evolving, some of the datasets that are poised to join MAD in the near future include:
- [✔] OpenAssistant/oasst1 (ar portion) : [Dataset Link](https://huggingface.co/datasets/OpenAssistant/oasst1)
- [✔] MBZUAI/Bactrian-X (ar portion) : [Dataset Link](https://huggingface.co/datasets/MBZUAI/Bactrian-X/viewer/ar/train)
- [✔] AbderrahmanSkiredj1/Darija-Wikipedia : [Dataset Link](https://huggingface.co/datasets/AbderrahmanSkiredj1/moroccan_darija_wikipedia_dataset)
- [✔] Arabic Wikipedia : [Dataset Link](https://huggingface.co/datasets/wikipedia)
- [✔] Moroccan Arabic Wikipedia : [Dataset Link](https://huggingface.co/datasets/wikipedia)
- [✔] Egyptian Arabic Wikipedia : [Dataset Link](https://huggingface.co/datasets/wikipedia)
- [✔] Darija Stories Dataset : [Dataset Link](https://huggingface.co/datasets/Ali-C137/Darija-Stories-Dataset)
- [✔] Hindawi Books Dataset : [Dataset Link](https://huggingface.co/datasets/Ali-C137/Hindawi-Books-dataset)
- [] uonlp/CulturaX - ar : [Dataset Link](https://huggingface.co/datasets/uonlp/CulturaX/viewer/ar/train)
- [✔] Pain/ArabicTweets : [Dataset Link](https://huggingface.co/datasets/pain/Arabic-Tweets)
- [] Abu-El-Khair Corpus : [Dataset Link](https://huggingface.co/datasets/arabic_billion_words)
- [✔] QuranExe : [Dataset Link](https://huggingface.co/datasets/mustapha/QuranExe)
- [✔] MNAD : [Dataset Link](https://huggingface.co/datasets/J-Mourad/MNAD.v1)
- [✔] IADD : [Dataset Link](https://raw.githubusercontent.com/JihadZa/IADD/main/IADD.json)
- [] OSIAN : [Dataset Link](https://wortschatz.uni-leipzig.de/en/download/Arabic#ara-tn_newscrawl-OSIAN_2018)
- [✔] MAC corpus : [Dataset Link](https://raw.githubusercontent.com/LeMGarouani/MAC/main/MAC%20corpus.csv)
- [✔] Goud.ma-Sum : [Dataset Link](https://huggingface.co/datasets/Goud/Goud-sum)
- [✔] SaudiNewsNet : [Dataset Link](https://huggingface.co/datasets/saudinewsnet)
- [✔] Miracl : [Dataset Link](https://huggingface.co/datasets/miracl/miracl)
- [✔] CardiffNLP/TweetSentimentMulti : [Dataset Link](https://huggingface.co/datasets/cardiffnlp/tweet_sentiment_multilingual)
- [] OSCAR-2301 : [Dataset Link](https://huggingface.co/datasets/oscar-corpus/OSCAR-2301/viewer/ar/train)
- [] mc4 : [Dataset Link](https://huggingface.co/datasets/mc4/viewer/ar/train)
- [✔] bigscience/xP3 : [Dataset Link](https://huggingface.co/datasets/bigscience/xP3/viewer/ar/train)
- [] Muennighoff/xP3x : [Dataset Link](https://huggingface.co/datasets/Muennighoff/xP3x)
- [] Ai_Society : [Dataset Link](https://huggingface.co/datasets/camel-ai/ai_society_translated)
## Potential Use Cases
The Mixed Arabic Datasets (MAD) holds the potential to catalyze a multitude of groundbreaking applications:
- **Linguistic Analysis:** Employ MAD to conduct in-depth linguistic studies, exploring dialectal variances, language evolution, and grammatical structures.
- **Topic Modeling:** Dive into diverse themes and subjects through the extensive collection, revealing insights into emerging trends and prevalent topics.
- **Sentiment Understanding:** Decode sentiments spanning Arabic dialects, revealing cultural nuances and emotional dynamics.
- **Sociocultural Research:** Embark on a sociolinguistic journey, unraveling the intricate connection between language, culture, and societal shifts.
## Dataset Access
MAD's access mechanism is unique: while it doesn't carry a general license itself, each constituent dataset within the corpus retains its individual license. By accessing the dataset details through the provided links in the "Dataset List" section above, users can understand the specific licensing terms for each dataset.
### Join Us on Discord
For discussions, contributions, and community interactions, join us on Discord! [](https://discord.gg/2NpJ9JGm)
### How to Contribute
Want to contribute to the Mixed Arabic Datasets project? Follow our comprehensive guide on Google Colab for step-by-step instructions: [Contribution Guide](https://colab.research.google.com/drive/1kOIRoicgCOV8TPvASAI_2uMY7rpXnqzJ?usp=sharing).
**Note**: If you'd like to test a contribution before submitting it, feel free to do so on the [MAD Test Dataset](https://huggingface.co/datasets/M-A-D/Mixed-Arabic-Dataset-test).
## Citation
```
@dataset{
title = {Mixed Arabic Datasets (MAD)},
author = {MAD Community},
howpublished = {Dataset},
url = {https://huggingface.co/datasets/M-A-D/Mixed-Arabic-Datasets-Repo},
year = {2023},
}
``` | [
-0.6622640490531921,
-0.47547876834869385,
-0.17699255049228668,
0.28162142634391785,
-0.2328094094991684,
0.313719779253006,
-0.1652580201625824,
-0.5299223065376282,
0.4315398037433624,
0.17354780435562134,
-0.4767323434352875,
-0.8617352247238159,
-0.6946601867675781,
0.2383122593164444... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
lavita/ChatDoctor-HealthCareMagic-100k | lavita | 2023-09-09T07:40:38Z | 160 | 6 | null | [
"region:us"
] | 2023-09-09T07:40:38Z | 2023-09-09T06:58:05.000Z | 2023-09-09T06:58:05 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: instruction
dtype: string
- name: input
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 126454896
num_examples: 112165
download_size: 70518148
dataset_size: 126454896
---
# Dataset Card for "ChatDoctor-HealthCareMagic-100k"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.39145123958587646,
-0.13041354715824127,
0.04713159054517746,
0.032073911279439926,
-0.12421388924121857,
0.036660533398389816,
0.2539384663105011,
0.0624750480055809,
0.8559966683387756,
0.6320862174034119,
-0.94383305311203,
-0.8039315938949585,
-0.7331421375274658,
-0.493008285760879... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Alexandre-Numind/IE_v2 | Alexandre-Numind | 2023-11-23T15:52:49Z | 160 | 0 | null | [
"region:us"
] | 2023-11-23T15:52:49Z | 2023-11-07T11:03:35.000Z | 2023-11-07T11:03:35 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622263669967651,
0.43461522459983826,
-0.52829909324646,
0.7012971639633179,
0.7915719747543335,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104475975036621,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Locutusque/InstructMixCleaned | Locutusque | 2023-11-10T21:01:20Z | 159 | 0 | null | [
"task_categories:text-generation",
"task_categories:conversational",
"task_categories:question-answering",
"language:en",
"license:apache-2.0",
"region:us"
] | 2023-11-10T21:01:20Z | 2023-11-10T15:51:06.000Z | 2023-11-10T15:51:06 | ---
name: InstructiveMixCleaned
tagline: A Combined Dataset of Diverse Instructional Content
description: >
InstructiveMix is a comprehensive dataset that brings together various
instructional content from different domains. It combines instructions for
tasks, code, poems, math, essays, medical texts, and more. With a diverse
range of instructional data, this dataset is suitable for a wide range of
natural language processing (NLP) tasks and research.
authors:
- name: Locutusque
email: locutusque.airshipcraft@gmail.com
task_categories:
- text-generation
- conversational
- question-answering
language:
- en
pretty_name: Instruct Mix Cleaned
license: apache-2.0
---
Cleaned the dataset https://huggingface.co/datasets/Locutusque/InstructMix to remove RLHF responses | [
-0.7199358344078064,
-0.7241762280464172,
-0.06554428488016129,
-0.007924863137304783,
-0.4467829167842865,
0.23598551750183105,
0.20773184299468994,
-0.6632645130157471,
0.8163736462593079,
1.138163685798645,
-1.2187929153442383,
-0.32756537199020386,
-0.5117447376251221,
-0.1375387161970... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Davlan/masakhanerV1 | Davlan | 2021-09-18T19:13:11Z | 158 | 0 | null | [
"region:us"
] | 2021-09-18T19:13:11Z | 2022-03-02T23:29:22.000Z | 2022-03-02T23:29:22 | Entry not found | [
-0.32276472449302673,
-0.22568407654762268,
0.8622258901596069,
0.4346148371696472,
-0.5282984972000122,
0.7012965679168701,
0.7915717363357544,
0.07618629932403564,
0.7746022939682007,
0.2563222646713257,
-0.785281777381897,
-0.22573848068714142,
-0.9104482531547546,
0.5715669393539429,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
KETI-AIR/nikl | KETI-AIR | 2021-06-08T06:42:34Z | 158 | 1 | null | [
"region:us"
] | 2021-06-08T06:42:34Z | 2022-03-02T23:29:22.000Z | 2022-03-02T23:29:22 | <!--
Copyright 2021 san kim
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
-->
# National Institute of Korean Language(NIKL) Corpus
| [
-0.21801449358463287,
-0.4020957946777344,
0.43271368741989136,
0.9840845465660095,
-0.6607111692428589,
0.02626870945096016,
-0.3573562800884247,
-0.4397389590740204,
0.07122749835252762,
1.233420729637146,
-0.3867546617984772,
-0.9006748795509338,
-0.5169348120689392,
0.5452934503555298,... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
cdminix/iwslt2011 | cdminix | 2021-09-21T12:17:53Z | 158 | 0 | null | [
"region:us"
] | 2021-09-21T12:17:53Z | 2022-03-02T23:29:22.000Z | 2022-03-02T23:29:22 | Entry not found | [
-0.32276472449302673,
-0.22568407654762268,
0.8622258901596069,
0.4346148371696472,
-0.5282984972000122,
0.7012965679168701,
0.7915717363357544,
0.07618629932403564,
0.7746022939682007,
0.2563222646713257,
-0.785281777381897,
-0.22573848068714142,
-0.9104482531547546,
0.5715669393539429,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
huggingface-course/documentation-images | huggingface-course | 2023-01-05T02:03:40Z | 158 | 0 | null | [
"license:apache-2.0",
"region:us"
] | 2023-01-05T02:03:40Z | 2022-03-02T23:29:22.000Z | 2022-03-02T23:29:22 | ---
license: apache-2.0
---
| [
-0.12853392958641052,
-0.18616779148578644,
0.6529127955436707,
0.49436280131340027,
-0.19319361448287964,
0.23607419431209564,
0.36072003841400146,
0.050563063472509384,
0.579365611076355,
0.7400140762329102,
-0.6508104205131531,
-0.23783954977989197,
-0.7102249264717102,
-0.0478260256350... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
laion/laion2B-en-aesthetic | laion | 2023-01-18T20:03:33Z | 158 | 23 | null | [
"region:us"
] | 2023-01-18T20:03:33Z | 2022-05-22T12:34:11.000Z | 2022-05-22T12:34:11 | details at https://github.com/LAION-AI/laion-datasets/blob/main/laion-aesthetic.md | [
-0.282520592212677,
-0.40030157566070557,
0.46563249826431274,
-0.125469371676445,
0.02029235102236271,
0.06949151307344437,
-0.08352173119783401,
-0.2508104741573334,
0.6883172392845154,
0.7658241391181946,
-0.9028742909431458,
-1.1707161664962769,
-0.13811877369880676,
-0.363814949989318... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
embedding-data/simple-wiki | embedding-data | 2022-08-02T03:34:17Z | 158 | 5 | embedding-data/simple-wiki | [
"task_categories:sentence-similarity",
"task_ids:semantic-similarity-classification",
"language:en",
"license:mit",
"region:us"
] | 2022-08-02T03:34:17Z | 2022-07-07T22:57:40.000Z | 2022-07-07T22:57:40 | ---
license: mit
language:
- en
paperswithcode_id: embedding-data/simple-wiki
pretty_name: simple-wiki
task_categories:
- sentence-similarity
- paraphrase-mining
task_ids:
- semantic-similarity-classification
---
# Dataset Card for "simple-wiki"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://cs.pomona.edu/~dkauchak/simplification/](https://cs.pomona.edu/~dkauchak/simplification/)
- **Repository:** [More Information Needed](https://cs.pomona.edu/~dkauchak/simplification/)
- **Paper:** [https://aclanthology.org/P11-2117/](https://aclanthology.org/P11-2117/)
- **Point of Contact:** [David Kauchak](dkauchak@cs.pomona.edu)
### Dataset Summary
This dataset contains pairs of equivalent sentences obtained from Wikipedia.
### Supported Tasks
- [Sentence Transformers](https://huggingface.co/sentence-transformers) training; useful for semantic search and sentence similarity.
### Languages
- English.
## Dataset Structure
Each example in the dataset contains a pair of equivalent sentences, formatted as a dictionary with a single key, `"set"`, whose value is the list of sentences.
```
{"set": [sentence_1, sentence_2]}
{"set": [sentence_1, sentence_2]}
...
{"set": [sentence_1, sentence_2]}
```
This dataset is useful for training Sentence Transformers models. Refer to the following post on how to train models using similar sentences.
### Usage Example
Install the 🤗 Datasets library with `pip install datasets` and load the dataset from the Hub with:
```python
from datasets import load_dataset
dataset = load_dataset("embedding-data/simple-wiki")
```
The dataset is loaded as a `DatasetDict` and has the format:
```python
DatasetDict({
train: Dataset({
features: ['set'],
num_rows: 102225
})
})
```
Review an example `i` with:
```python
dataset["train"][i]["set"]
```
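Since each record stores its equivalent sentences under the `"set"` key, the pairs can be flattened into `(sentence_1, sentence_2)` tuples before training — a minimal sketch (the sample records below are illustrative and not taken from the dataset):

```python
def to_pairs(examples):
    """Turn a list of {"set": [s1, s2]} records into (s1, s2) tuples."""
    return [(ex["set"][0], ex["set"][1]) for ex in examples]

# Illustrative records in the same shape as the dataset rows.
records = [
    {"set": ["The cat sat on the mat.", "A cat was sitting on the mat."]},
    {"set": ["He founded the company in 1998.", "The company was started by him in 1998."]},
]

pairs = to_pairs(records)
print(len(pairs))  # 2
print(pairs[0][0])
```

Tuples in this shape map directly onto the positive-pair inputs that Sentence Transformers training losses expect.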
### Curation Rationale
[More Information Needed](https://cs.pomona.edu/~dkauchak/simplification/)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://cs.pomona.edu/~dkauchak/simplification/)
#### Who are the source language producers?
[More Information Needed](https://cs.pomona.edu/~dkauchak/simplification/)
### Annotations
#### Annotation process
[More Information Needed](https://cs.pomona.edu/~dkauchak/simplification/)
#### Who are the annotators?
[More Information Needed](https://cs.pomona.edu/~dkauchak/simplification/)
### Personal and Sensitive Information
[More Information Needed](https://cs.pomona.edu/~dkauchak/simplification/)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://cs.pomona.edu/~dkauchak/simplification/)
### Discussion of Biases
[More Information Needed](https://cs.pomona.edu/~dkauchak/simplification/)
### Other Known Limitations
[More Information Needed](https://cs.pomona.edu/~dkauchak/simplification/)
## Additional Information
### Dataset Curators
[More Information Needed](https://cs.pomona.edu/~dkauchak/simplification/)
### Licensing Information
[More Information Needed](https://cs.pomona.edu/~dkauchak/simplification/)
### Contributions
| [
-0.4306252598762512,
-0.7176287174224854,
0.28877317905426025,
0.11160975694656372,
-0.14986591041088104,
-0.16819827258586884,
-0.4066363573074341,
-0.1349395215511322,
0.48437726497650146,
0.40019282698631287,
-0.8922340273857117,
-0.7056288719177246,
-0.5860430002212524,
0.3967338800430... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
liyucheng/arxiv-march-2023 | liyucheng | 2023-06-02T17:59:35Z | 158 | 0 | null | [
"region:us"
] | 2023-06-02T17:59:35Z | 2023-06-02T17:59:27.000Z | 2023-06-02T17:59:27 | ---
dataset_info:
features:
- name: entry_id
dtype: string
- name: published
dtype: string
- name: title
dtype: string
- name: authors
sequence: string
- name: primary_category
dtype: string
- name: categories
sequence: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 20816482
num_examples: 500
download_size: 10224538
dataset_size: 20816482
---
# Dataset Card for "arxiv-march-2023"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.6238784790039062,
-0.055311594158411026,
0.2833826541900635,
0.42497918009757996,
-0.3298899829387665,
-0.28940489888191223,
0.7092803716659546,
-0.2379104048013687,
0.6669519543647766,
0.6468740105628967,
-0.7929041981697083,
-0.7571713924407959,
-0.5644969344139099,
-0.102413639426231... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
felipeoes/filtered_long_answers_qa_blue_amazon_legislation_61k | felipeoes | 2023-11-08T18:33:52Z | 158 | 0 | null | [
"region:us"
] | 2023-11-08T18:33:52Z | 2023-11-08T18:33:37.000Z | 2023-11-08T18:33:37 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
- split: validation
path: data/validation-*
dataset_info:
features:
- name: file_name
dtype: string
- name: question
dtype: string
- name: answer
dtype: string
- name: new_questions
dtype: string
- name: new_long_answers
dtype: string
- name: url
sequence: 'null'
- name: __index_level_0__
dtype: float64
splits:
- name: train
num_bytes: 71640134.51601684
num_examples: 49223
- name: test
num_bytes: 8955198.741991581
num_examples: 6153
- name: validation
num_bytes: 8955198.741991581
num_examples: 6153
download_size: 46996410
dataset_size: 89550532.0
---
# Dataset Card for "filtered_long_answers_qa_blue_amazon_legislation_61k"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.544011116027832,
-0.2828415334224701,
0.3278404772281647,
0.27114278078079224,
-0.6406450867652893,
-0.08614640682935715,
0.3405819535255432,
-0.2929854094982147,
0.63922119140625,
0.9845758676528931,
-0.8493829965591431,
-0.7880513072013855,
-0.20305192470550537,
-0.13416922092437744,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Binbin/my_dataset | Binbin | 2021-03-22T01:15:48Z | 157 | 0 | null | [
"region:us"
] | 2021-03-22T01:15:48Z | 2022-03-02T23:29:22.000Z | 2022-03-02T23:29:22 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622264862060547,
0.43461528420448303,
-0.52829909324646,
0.7012971639633179,
0.7915720343589783,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104477167129517,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
ESZER/H | ESZER | 2021-07-10T18:14:47Z | 157 | 0 | null | [
"region:us"
] | 2021-07-10T18:14:47Z | 2022-03-02T23:29:22.000Z | 2022-03-02T23:29:22 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622264862060547,
0.43461528420448303,
-0.52829909324646,
0.7012971639633179,
0.7915720343589783,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104477167129517,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Felix-ML/quoteli3 | Felix-ML | 2022-10-25T08:54:20Z | 157 | 0 | null | [
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"language:en",
"license:cc-by-4.0",
"region:us"
] | 2022-10-25T08:54:20Z | 2022-03-02T23:29:22.000Z | 2022-03-02T23:29:22 | ---
language:
- en
license:
- cc-by-4.0
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets: []
---
# Dataset Card for quoteli3
## Dataset Description
- **Homepage:** https://nlp.stanford.edu/~muzny/quoteli.html
- **Repository:** https://nlp.stanford.edu/~muzny/quoteli.html
- **Paper:** Muzny, Grace, et al. "A two-stage sieve approach for quote attribution." Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 1, Long Papers. 2017.
### Dataset Summary
This dataset is based on the quoteli3 dataset by Muzny et al. (2017). It contains annotated quotes for three pieces of literature: Chekhov's The Steppe, Austen's Emma and Pride and Prejudice.
### Languages
The text in the dataset is English.
## Dataset Structure
Training data:
- Quotes (1575, 11)
- Characters (32, 6)

Test data:
- Quotes (1513, 11)
- Characters (145, 6)
### Data Splits
- Quotes:
  - train:
    - features: ['mention', 'oid', 'speaker', 'connection', 'id', 'answer', 'answer_mention' {'answer', 'answer_start', 'answer_end', 'answer_in_context'}, 'question', 'context', 'large_context', 'book_title']
    - num_rows: 1575
  - test:
    - features: ['mention', 'oid', 'speaker', 'connection', 'id', 'answer', 'answer_mention' {'answer', 'answer_start', 'answer_end', 'answer_in_context'}, 'question', 'context', 'large_context', 'book_title']
    - num_rows: 1513
- Characters:
  - train:
    - features: ['aliases', 'description', 'gender', 'name', 'id', 'book_title']
    - num_rows: 32
  - test:
    - features: ['aliases', 'description', 'gender', 'name', 'id', 'book_title']
    - num_rows: 146
-0.21336792409420013,
-0.4792340099811554,
0.4267079532146454,
0.3182954490184784,
-0.4103913903236389,
-0.47386112809181213,
0.025028031319379807,
-0.3592202067375183,
0.10587602108716965,
0.48493677377700806,
-0.647637128829956,
-0.6829732060432434,
-0.6172529458999634,
0.072840616106987... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
batubayk/TR-News | batubayk | 2023-03-04T22:39:35Z | 157 | 4 | null | [
"task_categories:summarization",
"task_categories:text-classification",
"task_categories:text-generation",
"task_categories:text2text-generation",
"size_categories:100K<n<1M",
"language:tr",
"region:us"
] | 2023-03-04T22:39:35Z | 2022-04-18T17:23:02.000Z | 2022-04-18T17:23:02 | ---
task_categories:
- summarization
- text-classification
- text-generation
- text2text-generation
language:
- tr
pretty_name: TR-News
size_categories:
- 100K<n<1M
---
# Citation
If you use the dataset, please cite the paper:
```
@article{10.1007/s10579-021-09568-y,
  year = {2022},
  title = {{Abstractive text summarization and new large-scale datasets for agglutinative languages Turkish and Hungarian}},
  author = {Baykara, Batuhan and Güngör, Tunga},
  journal = {Language Resources and Evaluation},
  issn = {1574-020X},
  doi = {10.1007/s10579-021-09568-y},
  pages = {1--35}}
``` | [
-0.26775893568992615,
-0.5715453028678894,
0.03477097302675247,
0.2687060534954071,
-0.370378702878952,
-0.03354595974087715,
-0.4402487277984619,
-0.07134601473808289,
0.39891505241394043,
0.34030812978744507,
0.11779508739709854,
-0.5527653098106384,
-0.651370108127594,
0.354646682739257... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
polinaeterna/vox_lingua | polinaeterna | 2022-12-06T11:09:02Z | 157 | 1 | null | [
"license:cc-by-4.0",
"region:us"
] | 2022-12-06T11:09:02Z | 2022-05-06T15:26:59.000Z | 2022-05-06T15:26:59 | ---
license: cc-by-4.0
---
Use it as usual:
```python
ds = load_dataset("polinaeterna/vox_lingua", "sco")
```
If you want to download all the languages, use `"all"` config:
```python
ds = load_dataset("polinaeterna/vox_lingua", "all")
``` | [
-0.5354390740394592,
-0.2973649799823761,
0.027097055688500404,
0.3774557113647461,
-0.7697197794914246,
0.031363751739263535,
-0.5191529393196106,
0.25522029399871826,
0.6190405488014221,
0.506865382194519,
-0.2646971642971039,
-0.3875400424003601,
-0.5895987153053284,
0.9021902084350586,... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
asapp/slue | asapp | 2022-09-26T23:08:10Z | 157 | 2 | slue | [
"task_categories:automatic-speech-recognition",
"task_categories:audio-classification",
"task_categories:text-classification",
"task_categories:token-classification",
"task_ids:sentiment-analysis",
"task_ids:named-entity-recognition",
"annotations_creators:expert-generated",
"language_creators:found",... | 2022-09-26T23:08:10Z | 2022-09-19T18:07:59.000Z | 2022-09-19T18:07:59 | ---
annotations_creators:
- expert-generated
language:
- en
language_creators:
- found
license:
- cc0-1.0
- cc-by-4.0
multilinguality:
- monolingual
paperswithcode_id: slue
pretty_name: SLUE (Spoken Language Understanding Evaluation benchmark)
size_categories:
- 10K<n<100K
source_datasets:
- original
tags: []
task_categories:
- automatic-speech-recognition
- audio-classification
- text-classification
- token-classification
task_ids:
- sentiment-analysis
- named-entity-recognition
configs:
- voxpopuli
- voxceleb
---
# Dataset Card for SLUE
## Table of Contents
- [Dataset Card for SLUE](#dataset-card-for-slue)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Automatic Speech Recognition (ASR)](#automatic-speech-recognition-asr)
- [Named Entity Recognition (NER)](#named-entity-recognition-ner)
- [Sentiment Analysis (SA)](#sentiment-analysis-sa)
- [How-to-submit for your test set evaluation](#how-to-submit-for-your-test-set-evaluation)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [voxpopuli](#voxpopuli)
- [voxceleb](#voxceleb)
- [Data Fields](#data-fields)
- [voxpopuli](#voxpopuli-1)
- [voxceleb](#voxceleb-1)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Initial Data Collection and Normalization](#initial-data-collection-and-normalization)
- [Who are the source language producers?](#who-are-the-source-language-producers)
- [Annotations](#annotations)
- [Annotation process](#annotation-process)
- [Who are the annotators?](#who-are-the-annotators)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [SLUE-VoxPopuli Dataset](#slue-voxpopuli-dataset)
- [SLUE-VoxCeleb Dataset](#slue-voxceleb-dataset)
- [Original License of OXFORD VGG VoxCeleb Dataset](#original-license-of-oxford-vgg-voxceleb-dataset)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://asappresearch.github.io/slue-toolkit](https://asappresearch.github.io/slue-toolkit)
- **Repository:** [https://github.com/asappresearch/slue-toolkit/](https://github.com/asappresearch/slue-toolkit/)
- **Paper:** [https://arxiv.org/pdf/2111.10367.pdf](https://arxiv.org/pdf/2111.10367.pdf)
- **Leaderboard:** [https://asappresearch.github.io/slue-toolkit/leaderboard_v0.2.html](https://asappresearch.github.io/slue-toolkit/leaderboard_v0.2.html)
- **Size of downloaded dataset files:** 1.95 GB
- **Size of the generated dataset:** 9.59 MB
- **Total amount of disk used:** 1.95 GB
### Dataset Summary
We introduce the Spoken Language Understanding Evaluation (SLUE) benchmark. The goals of our work are to
- Track research progress on multiple SLU tasks
- Facilitate the development of pre-trained representations by providing fine-tuning and eval sets for a variety of SLU tasks
- Foster the open exchange of research by focusing on freely available datasets that all academic and industrial groups can easily use.
For this benchmark, we provide new annotation of publicly available, natural speech data for training and evaluation. We also provide a benchmark suite including code to download and pre-process the SLUE datasets, train the baseline models, and evaluate performance on SLUE tasks. Refer to [Toolkit](https://github.com/asappresearch/slue-toolkit) and [Paper](https://arxiv.org/pdf/2111.10367.pdf) for more details.
### Supported Tasks and Leaderboards
#### Automatic Speech Recognition (ASR)
Although this is not a SLU task, ASR can help analyze the performance of downstream SLU tasks on the same domain. Additionally, pipeline approaches depend on ASR outputs, making ASR relevant to SLU. ASR is evaluated using word error rate (WER).
#### Named Entity Recognition (NER)
Named entity recognition involves detecting the named entities and their tags (types) in a given sentence. We evaluate performance using micro-averaged F1 and label-F1 scores. The F1 score evaluates an unordered list of named entity phrase and tag pairs predicted for each sentence. Only the tag predictions are considered for label-F1.
#### Sentiment Analysis (SA)
Sentiment analysis refers to classifying a given speech segment as having negative, neutral, or positive sentiment. We evaluate SA using macro-averaged (unweighted) recall and F1 scores.
#### How-to-submit for your test set evaluation
See here https://asappresearch.github.io/slue-toolkit/how-to-submit.html
### Languages
The language data in SLUE is in English.
## Dataset Structure
### Data Instances
#### voxpopuli
- **Size of downloaded dataset files:** 398.45 MB
- **Size of the generated dataset:** 5.81 MB
- **Total amount of disk used:** 404.26 MB
An example of 'train' looks as follows.
```
{'id': '20131007-0900-PLENARY-19-en_20131007-21:26:04_3',
'audio': {'path': '/Users/username/.cache/huggingface/datasets/downloads/extracted/e35757b0971ac7ff5e2fcdc301bba0364857044be55481656e2ade6f7e1fd372/slue-voxpopuli/fine-tune/20131007-0900-PLENARY-19-en_20131007-21:26:04_3.ogg',
'array': array([ 0.00132601, 0.00058881, -0.00052187, ..., 0.06857217,
0.07835515, 0.07845446], dtype=float32),
'sampling_rate': 16000},
'speaker_id': 'None',
'normalized_text': 'two thousand and twelve for instance the new brussels i regulation provides for the right for employees to sue several employers together and the right for employees to have access to courts in europe even if the employer is domiciled outside europe. the commission will',
'raw_text': '2012. For instance, the new Brussels I Regulation provides for the right for employees to sue several employers together and the right for employees to have access to courts in Europe, even if the employer is domiciled outside Europe. The Commission will',
'raw_ner': {'type': ['LOC', 'LOC', 'LAW', 'DATE'],
'start': [227, 177, 28, 0],
'length': [6, 6, 21, 4]},
'normalized_ner': {'type': ['LOC', 'LOC', 'LAW', 'DATE'],
'start': [243, 194, 45, 0],
'length': [6, 6, 21, 23]},
'raw_combined_ner': {'type': ['PLACE', 'PLACE', 'LAW', 'WHEN'],
'start': [227, 177, 28, 0],
'length': [6, 6, 21, 4]},
'normalized_combined_ner': {'type': ['PLACE', 'PLACE', 'LAW', 'WHEN'],
'start': [243, 194, 45, 0],
'length': [6, 6, 21, 23]}}
```
#### voxceleb
- **Size of downloaded dataset files:** 1.55 GB
- **Size of the generated dataset:** 3.78 MB
- **Total amount of disk used:** 1.55 GB
An example of 'train' looks as follows.
```
{'id': 'id10059_229vKIGbxrI_00004',
'audio': {'path': '/Users/felixwu/.cache/huggingface/datasets/downloads/extracted/400facb6d2f2496ebcd58a5ffe5fbf2798f363d1b719b888d28a29b872751626/slue-voxceleb/fine-tune_raw/id10059_229vKIGbxrI_00004.flac',
'array': array([-0.00442505, -0.00204468, 0.00628662, ..., 0.00158691,
0.00100708, 0.00033569], dtype=float32),
'sampling_rate': 16000},
'speaker_id': 'id10059',
'normalized_text': 'of god what is a creator the almighty that uh',
'sentiment': 'Neutral',
'start_second': 0.45,
'end_second': 4.52}
```
### Data Fields
#### voxpopuli
- `id`: a `string` id of an instance.
- `audio`: audio feature of the raw audio. It is a dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate. Note that when accessing the audio column: `dataset[0]["audio"]` the audio file is automatically decoded and resampled to `dataset.features["audio"].sampling_rate`. Decoding and resampling of a large number of audio files might take a significant amount of time. Thus it is important to first query the sample index before the `"audio"` column, *i.e.* `dataset[0]["audio"]` should **always** be preferred over `dataset["audio"][0]`.
- `speaker_id`: a `string` of the speaker id.
- `raw_text`: a `string` feature that contains the raw transcription of the audio.
- `normalized_text`: a `string` feature that contains the normalized transcription of the audio which is **used in the standard evaluation**.
- `raw_ner`: the NER annotation of the `raw_text` using the same 18 NER classes as OntoNotes.
- `normalized_ner`: the NER annotation of the `normalized_text` using the same 18 NER classes as OntoNotes.
- `raw_combined_ner`: the NER annotation of the `raw_text` using our 7 NER classes (`WHEN`, `QUANT`, `PLACE`, `NORP`, `ORG`, `LAW`, `PERSON`).
- `normalized_combined_ner`: the NER annotation of the `normalized_text` using our 7 NER classes (`WHEN`, `QUANT`, `PLACE`, `NORP`, `ORG`, `LAW`, `PERSON`) which is **used in the standard evaluation**.
Each NER annotation is a dictionary containing three lists: `type`, `start`, and `length`. `type` is a list of the NER tag types. `start` is a list of the start character position of each named entity in the corresponding text. `length` is a list of the number of characters of each named entity.
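Because `start` and `length` are character offsets into the corresponding text, the entity surface strings can be recovered by slicing. A minimal sketch, using a made-up sentence and annotation (not an actual dataset record):

```python
def extract_entities(text, ner):
    """Recover (tag, surface string) pairs from a SLUE-style NER annotation."""
    return [
        (tag, text[start:start + length])
        for tag, start, length in zip(ner["type"], ner["start"], ner["length"])
    ]

# Illustrative example in the same shape as `normalized_combined_ner`.
text = "the commission met in brussels in october"
ner = {"type": ["ORG", "PLACE", "WHEN"], "start": [4, 22, 34], "length": [10, 8, 7]}

print(extract_entities(text, ner))
# [('ORG', 'commission'), ('PLACE', 'brussels'), ('WHEN', 'october')]
```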
#### voxceleb
- `id`: a `string` id of an instance.
- `audio`: audio feature of the raw audio. Please use `start_second` and `end_second` to crop the transcribed segment. For example, `dataset[0]["audio"]["array"][int(dataset[0]["start_second"] * dataset[0]["audio"]["sample_rate"]):int(dataset[0]["end_second"] * dataset[0]["audio"]["sample_rate"])]`. It is a dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate. Note that when accessing the audio column: `dataset[0]["audio"]` the audio file is automatically decoded and resampled to `dataset.features["audio"].sampling_rate`. Decoding and resampling of a large number of audio files might take a significant amount of time. Thus it is important to first query the sample index before the `"audio"` column, *i.e.* `dataset[0]["audio"]` should **always** be preferred over `dataset["audio"][0]`.
- `speaker_id`: a `string` of the speaker id.
- `normalized_text`: a `string` feature that contains the transcription of the audio segment.
- `sentiment`: a `string` feature which can be `Negative`, `Neutral`, or `Positive`.
- `start_second`: a `float` feature that specifies the start second of the audio segment.
- `end_second`: a `float` feature that specifies the end second of the audio segment.
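The inline cropping expression above can be wrapped in a small helper. A sketch using a synthetic signal in place of a decoded audio array (the helper name and the timestamps are illustrative, not part of the dataset):

```python
def crop_segment(array, sampling_rate, start_second, end_second):
    """Return the transcribed slice of a decoded audio array."""
    start = int(start_second * sampling_rate)
    end = int(end_second * sampling_rate)
    return array[start:end]

# Synthetic stand-in for example["audio"]["array"] at 16 kHz.
sampling_rate = 16000
array = list(range(5 * sampling_rate))  # 5 seconds of dummy samples

segment = crop_segment(array, sampling_rate, 0.25, 4.0)
print(len(segment))  # (4.0 - 0.25) * 16000 = 60000 samples
```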
### Data Splits
| |train|validation|test|
|---------|----:|---------:|---:|
|voxpopuli| 5000| 1753|1842|
|voxceleb | 5777| 1454|3553|
Here we use the standard split names of Hugging Face's Datasets library, so the `train` and `validation` splits correspond to the original `fine-tune` and `dev` splits of the SLUE datasets, respectively.
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
#### SLUE-VoxPopuli Dataset
The SLUE-VoxPopuli dataset contains a subset of the VoxPopuli dataset, and the copyright of this subset remains under the original license, CC0. See also the European Parliament's legal notice (https://www.europarl.europa.eu/legal-notice/en/).
Additionally, we provide named entity annotations (the normalized_ner and raw_ner columns in the .tsv files), which are covered by the same CC0 license.
#### SLUE-VoxCeleb Dataset
The SLUE-VoxCeleb dataset contains a subset of the OXFORD VoxCeleb dataset, and the copyright of this subset remains under the same Creative Commons Attribution 4.0 International license, reproduced below. Additionally, we provide transcriptions, sentiment annotations, and timestamps (start, end) under the same license as the OXFORD VoxCeleb dataset.
##### Original License of OXFORD VGG VoxCeleb Dataset
VoxCeleb1 contains over 100,000 utterances for 1,251 celebrities, extracted from videos uploaded to YouTube.
VoxCeleb2 contains over a million utterances for 6,112 celebrities, extracted from videos uploaded to YouTube.
The speakers span a wide range of different ethnicities, accents, professions and ages.
We provide Youtube URLs, associated face detections, and timestamps, as
well as cropped audio segments and cropped face videos from the
dataset. The copyright of both the original and cropped versions
of the videos remains with the original owners.
The data is covered under a Creative Commons
Attribution 4.0 International license (Please read the
license terms here. https://creativecommons.org/licenses/by/4.0/).
Downloading this dataset implies agreement to follow the same
conditions for any modification and/or
re-distribution of the dataset in any form.
Additionally any entity using this dataset agrees to the following conditions:
THIS DATASET IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS
IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED
TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A
PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
HOLDER BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF
LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING
NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
Please cite [1,2] below if you make use of the dataset.
[1] J. S. Chung, A. Nagrani, A. Zisserman
VoxCeleb2: Deep Speaker Recognition
INTERSPEECH, 2018.
[2] A. Nagrani, J. S. Chung, A. Zisserman
VoxCeleb: a large-scale speaker identification dataset
INTERSPEECH, 2017
### Citation Information
```
@inproceedings{shon2022slue,
title={Slue: New benchmark tasks for spoken language understanding evaluation on natural speech},
author={Shon, Suwon and Pasad, Ankita and Wu, Felix and Brusco, Pablo and Artzi, Yoav and Livescu, Karen and Han, Kyu J},
booktitle={ICASSP 2022-2022 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
pages={7927--7931},
year={2022},
organization={IEEE}
}
```
### Contributions
Thanks to [@fwu-asapp](https://github.com/fwu-asapp) for adding this dataset. | [
-0.5526806116104126,
-0.5462300181388855,
0.09230177104473114,
0.23344705998897552,
-0.12759241461753845,
-0.08918260782957077,
-0.3151552677154541,
-0.4179629683494568,
0.45013898611068726,
0.4200079143047333,
-0.5872609615325928,
-0.8376959562301636,
-0.362684041261673,
0.050707254558801... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Muennighoff/flan | Muennighoff | 2022-12-23T18:57:00Z | 157 | 35 | null | [
"task_categories:other",
"annotations_creators:crowdsourced",
"annotations_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:100M<n<1B",
"language:en",
"arxiv:2109.01652",
"region:us"
] | 2022-12-23T18:57:00Z | 2022-12-12T11:32:26.000Z | 2022-12-12T11:32:26 | ---
annotations_creators:
- crowdsourced
- expert-generated
language:
- en
multilinguality:
- monolingual
size_categories:
- 100M<n<1B
task_categories:
- other
---
This is a re-preprocessed version of the [FLAN dataset](https://arxiv.org/abs/2109.01652) incorporating any updates that have been made to the FLAN datasets since the release of the original FLAN. The script is available [here](https://github.com/Muennighoff/FLAN).
Tasks:
```
{'aeslc_10templates',
'ag_news_subset_10templates',
'anli_r1_10templates',
'anli_r2_10templates',
'anli_r3_10templates',
'arc_challenge_10templates',
'arc_easy_10templates',
'bool_q_10templates',
'cb_10templates',
'cnn_dailymail_10templates',
'cola_10templates',
'common_gen_10templates',
'copa_10templates',
'coqa_10templates',
'cosmos_qa_10templates',
'dart_10templates',
'definite_pronoun_resolution_10templates',
'drop_10templates',
'e2e_nlg_10templates',
'fix_punct_10templates',
'gigaword_10templates',
'glue_mrpc_10templates',
'glue_qqp_10templates',
'hellaswag_10templates',
'imdb_reviews_10templates',
'math_dataset_10templates',
'mnli_matched_10templates',
'mnli_mismatched_10templates',
'multi_news_10templates',
'multirc_10templates',
'natural_questions_10templates',
'openbookqa_10templates',
'opinion_abstracts_idebate_10templates',
'opinion_abstracts_rotten_tomatoes_10templates',
'para_crawl_enes_10templates',
'paws_wiki_10templates',
'piqa_10templates',
'qnli_10templates',
'quac_10templates',
'record_10templates',
'rte_10templates',
'samsum_10templates',
'sentiment140_10templates',
'snli_10templates',
'squad_v1_10templates',
'squad_v2_10templates',
'sst2_10templates',
'story_cloze_10templates',
'stsb_10templates',
'trec_10templates',
'trivia_qa_10templates',
'true_case_10templates',
'web_nlg_en_10templates',
'wic_10templates',
'wiki_lingua_english_en_10templates',
'wmt14_enfr_10templates',
'wmt16_translate_csen_10templates',
'wmt16_translate_deen_10templates',
'wmt16_translate_fien_10templates',
'wmt16_translate_roen_10templates',
'wmt16_translate_ruen_10templates',
'wmt16_translate_tren_10templates',
'wnli_10templates',
'word_segment_10templates',
'wsc_10templates',
'yelp_polarity_reviews_10templates'}
``` | [
-0.6695119738578796,
-0.43863099813461304,
0.3840363323688507,
-0.034897495061159134,
-0.008545554243028164,
0.09862994402647018,
-0.22762323915958405,
-0.33101072907447815,
0.8508748412132263,
0.9153398871421814,
-0.9363653063774109,
-0.976553738117218,
-0.5865129232406616,
0.512970685958... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
seungheondoh/LP-MusicCaps-MTT | seungheondoh | 2023-08-04T10:39:28Z | 157 | 1 | null | [
"size_categories:10K<n<100K",
"language:en",
"license:mit",
"art",
"music",
"text-to-music",
"music-to-text",
"arxiv:2307.16372",
"region:us"
] | 2023-08-04T10:39:28Z | 2023-08-04T10:31:39.000Z | 2023-08-04T10:31:39 | ---
license: mit
language:
- en
tags:
- art
- music
- text-to-music
- music-to-text
pretty_name: LP-MusicCaps-MTT
size_categories:
- 10K<n<100K
---
======================================
**!important**: Be careful when using `caption_attribute_prediction` (we do not recommend using it)!
======================================
# Dataset Card for LP-MusicCaps-MTT
## Dataset Description
- **Repository:** [LP-MusicCaps repository](https://github.com/seungheondoh/lp-music-caps)
- **Paper:** [ArXiv](https://arxiv.org/abs/2307.16372)
## Dataset Summary
**LP-MusicCaps** is a Large Language Model based Pseudo Music Caption dataset for `text-to-music` and `music-to-text` tasks. We construct the music-to-caption pairs with tag-to-caption generation (using three existing multi-label tag datasets and four task instructions). The data sources are MusicCaps, Magnatagtune, and Million Song Dataset ECALS subset.
- **LP-MusicCaps MTT (This Repo)**: 22k Audio with 88k Caption. We utilize the 188 unique tags in [Magnatagtune](https://mirg.city.ac.uk/codeapps/the-magnatagatune-dataset) to perform tag-to-caption generation through an LLM. Magnatagtune consists of 26k music clips from 5,223 unique songs, annotated with genre, instrument, vocal, mood, perceptual tempo, origin, and sonority features. We used the full 188-tag vocabulary and did not generate captions for tracks that have no associated tags (decreasing the set to 22k).
- [LP-MusicCaps MSD](https://huggingface.co/datasets/seungheondoh/LP-MusicCaps-MSD): 0.5M Audio with 2.2M Caption
- [LP-MusicCaps MC](https://huggingface.co/datasets/seungheondoh/LP-MusicCaps-MC): 6k Audio with 22k Caption.
## Data Instances
Each instance in LP-MusicCaps MTT (This Repo) represents multiple audio-text pairs with meta-attributes:
```
{
'track_id': '1541',
'title': 'Eyes Closed (The Seldon Plan)',
'artist_name': 'Magnatune.com',
'release': 'Magnatune At The CC Salon',
'tag_top50': ['guitar', 'country', 'male', 'singing'],
'tag_top188': ['guitar',
'male singer',
'country',
'male vocals',
'male',
'singing'
],
'caption_writing': 'This country song features twangy guitar riffs and heartfelt male vocals, with a male singer singing about love and loss.',
'caption_summary': 'A male singer with a country style voice accompanies his guitar while singing.',
'caption_paraphrase': 'This male artist croons in a deep, soulful voice over the twangy sounds of his guitar, crafting a classic country tune perfect for fans of male vocals and raw, authentic singing.',
'caption_attribute_prediction': 'A twangy mix of acoustic guitar and male vocals come together in this heartfelt country song. With lyrics that evoke a sense of nostalgia, the male singer weaves a story of love and loss through his storytelling. His emotive singing grips you from start to finish, as he sings about the trials and tribulations of life. This song is a must-listen for any fan of country.',
'pseudo_attribute': ['acoustic',
'twangy',
'heartfelt',
'storytelling',
'nostalgic'
],
'path': 'e/magnatune_com-magnatune_at_the_cc_salon-01-eyes_closed_the_seldon_plan-30-59.mp3'
}
```
## Pseudo Caption Example:
Input Tags:
*"video game theme, no singer, instrumental, analog sounding, small keyboard, beatboxing, playful, cheerful, groovy"*
Output Pseudo Captions
*"instrumental track has a joyful and playful vibe, perfect for a video game theme. With no singer, the analog-sounding music features a small keyboard and beatboxing, creating a groovy and cheerful atmosphere"*
[More Information for pseudo caption generation](https://github.com/seungheondoh/lp-music-caps/blob/main/lpmc/llm_captioning/generate.py)
## Data Fields
| Name | Type | Description |
|------------------------------|-----------------|----------------------------------------------------------------------|
| track_id | string | Unique identifier for the track |
| title | string | Title of the song |
| artist_name | string | Name of the artist performing the song |
| release | string | Release name or album name of the song |
| tag_top50 | list of strings | List of top 50 tags associated with the song |
| tag_top188 | list of strings | List of top 188 tags associated with the song |
| caption_writing | string | Pseudo caption generated through a writing instruction |
| caption_summary | string | Pseudo caption generated through a summary instruction |
| caption_paraphrase | string | Pseudo caption generated through a paraphrase instruction |
| caption_attribute_prediction | string | Pseudo caption generated through an attribute_prediction instruction |
| pseudo_attribute | list of strings | List of pseudo-attributes used in caption_attribute_prediction |
| path | string | File path or location of the audio clip |
## Data Splits
We used the full 188-tag vocabulary and did not generate captions for tracks that have no associated tags (26k => 22k). 4K examples have empty tags and captions.
- train: 18706
- valid: 1825
- test: 5329
## Considerations for Using the Data
The LP-MusicCaps dataset is recommended for research purposes. Due to labeling errors, we recommend not using caption_attribute_prediction and pseudo_attribute unless it is specifically for large-scale pretraining. Additionally, the field "is_crawled" indicates the samples used in the reference paper mentioned below.
## Discussion of Biases
It will be described in a paper to be released soon.
## Other Known Limitations
It will be described in a paper to be released soon. | [
-0.6794100999832153,
-0.33191344141960144,
0.24364115297794342,
0.37481069564819336,
-0.37080976366996765,
0.27825167775154114,
-0.2897753417491913,
-0.19522282481193542,
0.623284637928009,
0.8029860258102417,
-1.2101082801818848,
-0.8588734269142151,
-0.4181572496891022,
0.180345743894577... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
jinaai/cities_wiki_clustering | jinaai | 2023-10-27T15:28:11Z | 157 | 1 | null | [
"language:en",
"region:us"
] | 2023-10-27T15:28:11Z | 2023-09-20T18:09:08.000Z | 2023-09-20T18:09:08 | ---
language:
- en
---
# WikiCities Clustering Dataset
This dataset was created from the [Wikipedia](https://huggingface.co/datasets/wikipedia) training dataset by taking a list of countries,
retrieving all cities for each country, and then finding each city's corresponding article in the Wikipedia dataset. Postprocessing
removed the bottom 25th percentile of countries with the fewest city articles, and kept at most 200 articles per country.
The final set contains a total of 126 countries and 3,531 cities.
Below is a distribution of cities by country.
 | [
-0.728240966796875,
-0.08822328597307205,
0.6478837728500366,
0.10802923142910004,
-0.22083617746829987,
-0.10465047508478165,
-0.136647567152977,
-0.3170546591281891,
0.7097369432449341,
0.21124237775802612,
-0.6792579889297485,
-0.9753071069717407,
-0.633111834526062,
0.2011772245168686,... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
AConsApart/anime_subtitles_DialoGPT | AConsApart | 2021-03-06T02:31:23Z | 156 | 1 | null | [
"region:us"
] | 2021-03-06T02:31:23Z | 2022-03-02T23:29:22.000Z | 2022-03-02T23:29:22 | Entry not found | [
-0.32276472449302673,
-0.22568407654762268,
0.8622258901596069,
0.4346148371696472,
-0.5282984972000122,
0.7012965679168701,
0.7915717363357544,
0.07618629932403564,
0.7746022939682007,
0.2563222646713257,
-0.785281777381897,
-0.22573848068714142,
-0.9104482531547546,
0.5715669393539429,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Abdo1Kamr/Arabic_Hadith | Abdo1Kamr | 2021-08-21T12:40:44Z | 156 | 0 | null | [
"region:us"
] | 2021-08-21T12:40:44Z | 2022-03-02T23:29:22.000Z | 2022-03-02T23:29:22 | # Hadith-Data-Sets
There are two files of Hadith: the first one contains all `hadith with Tashkil and without Tashkil` from the Nine Books, totaling 62,169 Hadith.
The second one contains `Hadith pre-processing` data, to which normalization, stop-word removal, and lemmatization have been applied.
<!-- ## `All Hadith Books`: All Hadith With Tashkil and Without Tashkel from the Nine Books that are 62,169 Hadith.
## `All Hadith Books_preprocessing`: All Hadith Without Tashkil which is applyed normalization and removeing stop words and lemmatization on it
-->
## Number of hadiths in whole books : 62,169
|Book Name |Number Of Hadiiths|
| ----------------------- |------------------|
|Sahih Bukhari: | 7008|
|Sahih Muslim: | 5362|
|Sunan al Tirmidhi: | 3891|
|Sunan al-Nasai: | 5662|
|Sunan Abu Dawud: | 4590|
|Sunan Ibn Maja: | 4332|
|Musnad Ahmad ibn Hanbal: | 26363|
|Maliks Muwatta: | 1594|
|Sunan al Darami: | 3367|
| [
-0.438189834356308,
-0.27737903594970703,
0.15319891273975372,
-0.16964145004749298,
-0.6985695362091064,
-0.16540153324604034,
-0.40685534477233887,
-0.025858430191874504,
0.15459883213043213,
0.8411790728569031,
-0.7794684171676636,
-0.7948164939880371,
-0.8177585601806641,
0.36275559663... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
AdWeeb/DravidianMT | AdWeeb | 2021-05-21T05:05:21Z | 156 | 0 | null | [
"region:us"
] | 2021-05-21T05:05:21Z | 2022-03-02T23:29:22.000Z | 2022-03-02T23:29:22 | Entry not found | [
-0.32276472449302673,
-0.22568407654762268,
0.8622258901596069,
0.4346148371696472,
-0.5282984972000122,
0.7012965679168701,
0.7915717363357544,
0.07618629932403564,
0.7746022939682007,
0.2563222646713257,
-0.785281777381897,
-0.22573848068714142,
-0.9104482531547546,
0.5715669393539429,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Adnan/Urdu_News_Headlines | Adnan | 2021-03-01T08:39:32Z | 156 | 0 | null | [
"region:us"
] | 2021-03-01T08:39:32Z | 2022-03-02T23:29:22.000Z | 2022-03-02T23:29:22 | Entry not found | [
-0.3227649927139282,
-0.225684255361557,
0.862226128578186,
0.43461498618125916,
-0.5282987952232361,
0.7012963891029358,
0.7915717363357544,
0.07618629932403564,
0.7746025919914246,
0.2563219666481018,
-0.7852816581726074,
-0.2257382869720459,
-0.9104480743408203,
0.5715669393539429,
-0... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
AhmadSawal/qa | AhmadSawal | 2022-01-26T20:57:27Z | 156 | 0 | null | [
"region:us"
] | 2022-01-26T20:57:27Z | 2022-03-02T23:29:22.000Z | 2022-03-02T23:29:22 | Entry not found | [
-0.3227649927139282,
-0.225684255361557,
0.862226128578186,
0.43461498618125916,
-0.5282987952232361,
0.7012963891029358,
0.7915717363357544,
0.07618629932403564,
0.7746025919914246,
0.2563219666481018,
-0.7852816581726074,
-0.2257382869720459,
-0.9104480743408203,
0.5715669393539429,
-0... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Akshith/aa | Akshith | 2021-05-14T15:36:14Z | 156 | 0 | null | [
"region:us"
] | 2021-05-14T15:36:14Z | 2022-03-02T23:29:22.000Z | 2022-03-02T23:29:22 | Entry not found | [
-0.3227649927139282,
-0.225684255361557,
0.862226128578186,
0.43461498618125916,
-0.5282987952232361,
0.7012963891029358,
0.7915717363357544,
0.07618629932403564,
0.7746025919914246,
0.2563219666481018,
-0.7852816581726074,
-0.2257382869720459,
-0.9104480743408203,
0.5715669393539429,
-0... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Akshith/g_rock | Akshith | 2021-05-14T15:34:01Z | 156 | 0 | null | [
"region:us"
] | 2021-05-14T15:34:01Z | 2022-03-02T23:29:22.000Z | 2022-03-02T23:29:22 | Entry not found | [
-0.3227649927139282,
-0.225684255361557,
0.862226128578186,
0.43461498618125916,
-0.5282987952232361,
0.7012963891029358,
0.7915717363357544,
0.07618629932403564,
0.7746025919914246,
0.2563219666481018,
-0.7852816581726074,
-0.2257382869720459,
-0.9104480743408203,
0.5715669393539429,
-0... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
AlgoveraAI/CryptoPunks | AlgoveraAI | 2022-02-28T15:25:44Z | 156 | 4 | null | [
"region:us"
] | 2022-02-28T15:25:44Z | 2022-03-02T23:29:22.000Z | 2022-03-02T23:29:22 | # Dataset Card for CIFAR-10
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Dataset Structure](#dataset-structure)
- [Data Fields](#data-fields)
- [Additional Information](#additional-information)
- [Ocean Protocol](#ocean-protocol)
- [Algovera](#algovera)
## Dataset Description
- **Homepage:** https://market.oceanprotocol.com/asset/did:op:C9D0568838fa670baEe7195Ea443b32EfCAc2281
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
<img src="punks.png" width="100%">
### Dataset Summary
CryptoPunks is a non-fungible token (NFT) collection on the Ethereum blockchain. The dataset contains 10,000 CryptoPunk images, mostly of humans but also of three special types: Zombie (88), Ape (24) and Alien (9). They are provided with both clear backgrounds and teal backgrounds.
## Dataset Structure
### Data Fields
- img: 24x24x3 image
## Additional Information
### Ocean Protocol
We are working on a Proof of Concept for using HuggingFace with datasets and algorithms on the [Ocean Marketplace](https://market.oceanprotocol.com/). Ocean is an open source infrastructure for monetizing private datasets and training using private AI technologies such as Compute-to-Data.
### Algovera
Algovera is a community working to facilitate and accelerate the development of decentralised AI applications and research.
* Join our community on [Discord](https://discord.com/invite/e65RuHSDS5).
* Contribute to our [GitHub](https://github.com/AlgoveraAI).
* Check out our [Website](https://www.algovera.ai/).
* Find more resources on our [Notion](https://algovera.notion.site/).
* Subscribe to our [Calendar](https://calendar.google.com/calendar/embed?src=c_4qajdfj4imie9cpnkbvkrc7ri4%40group.calendar.google.com). | [
-0.8037115335464478,
-0.5961942672729492,
0.15690840780735016,
0.18241430819034576,
-0.549872100353241,
0.14251378178596497,
0.01901940070092678,
-0.8260666131973267,
0.8775172829627991,
0.4009495675563812,
-0.5275659561157227,
-0.8612883687019348,
-0.35567790269851685,
-0.2332171648740768... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Aliseyfi/event_token_type | Aliseyfi | 2021-12-23T18:07:33Z | 156 | 0 | null | [
"region:us"
] | 2021-12-23T18:07:33Z | 2022-03-02T23:29:22.000Z | 2022-03-02T23:29:22 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622264862060547,
0.43461528420448303,
-0.52829909324646,
0.7012971639633179,
0.7915720343589783,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104477167129517,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Annielytics/DoctorsNotes | Annielytics | 2021-05-07T14:35:26Z | 156 | 0 | null | [
"region:us"
] | 2021-05-07T14:35:26Z | 2022-03-02T23:29:22.000Z | 2022-03-02T23:29:22 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622264862060547,
0.43461528420448303,
-0.52829909324646,
0.7012971639633179,
0.7915720343589783,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104477167129517,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Anurag-Singh-creator/tasks | Anurag-Singh-creator | 2021-12-12T20:16:49Z | 156 | 0 | null | [
"region:us"
] | 2021-12-12T20:16:49Z | 2022-03-02T23:29:22.000Z | 2022-03-02T23:29:22 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622264862060547,
0.43461528420448303,
-0.52829909324646,
0.7012971639633179,
0.7915720343589783,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104477167129517,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Avishekavi/Avi | Avishekavi | 2021-03-29T18:59:18Z | 156 | 0 | null | [
"region:us"
] | 2021-03-29T18:59:18Z | 2022-03-02T23:29:22.000Z | 2022-03-02T23:29:22 | Entry not found | [
-0.3227649927139282,
-0.225684255361557,
0.862226128578186,
0.43461498618125916,
-0.5282987952232361,
0.7012963891029358,
0.7915717363357544,
0.07618629932403564,
0.7746025919914246,
0.2563219666481018,
-0.7852816581726074,
-0.2257382869720459,
-0.9104480743408203,
0.5715669393539429,
-0... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
BatuhanYilmaz/github-issues | BatuhanYilmaz | 2022-01-24T08:40:25Z | 156 | 0 | null | [
"region:us"
] | 2022-01-24T08:40:25Z | 2022-03-02T23:29:22.000Z | 2022-03-02T23:29:22 | Entry not found | [
-0.3227649927139282,
-0.225684255361557,
0.862226128578186,
0.43461498618125916,
-0.5282987952232361,
0.7012963891029358,
0.7915717363357544,
0.07618629932403564,
0.7746025919914246,
0.2563219666481018,
-0.7852816581726074,
-0.2257382869720459,
-0.9104480743408203,
0.5715669393539429,
-0... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Check/a_re_gi | Check | 2021-08-31T08:46:20Z | 156 | 0 | null | [
"region:us"
] | 2021-08-31T08:46:20Z | 2022-03-02T23:29:22.000Z | 2022-03-02T23:29:22 | Entry not found | [
-0.3227649927139282,
-0.225684255361557,
0.862226128578186,
0.43461498618125916,
-0.5282987952232361,
0.7012963891029358,
0.7915717363357544,
0.07618629932403564,
0.7746025919914246,
0.2563219666481018,
-0.7852816581726074,
-0.2257382869720459,
-0.9104480743408203,
0.5715669393539429,
-0... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Chuu/Vhh | Chuu | 2021-11-25T11:15:52Z | 156 | 0 | null | [
"region:us"
] | 2021-11-25T11:15:52Z | 2022-03-02T23:29:22.000Z | 2022-03-02T23:29:22 | Entry not found | [
-0.3227649927139282,
-0.225684255361557,
0.862226128578186,
0.43461498618125916,
-0.5282987952232361,
0.7012963891029358,
0.7915717363357544,
0.07618629932403564,
0.7746025919914246,
0.2563219666481018,
-0.7852816581726074,
-0.2257382869720459,
-0.9104480743408203,
0.5715669393539429,
-0... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Cropinky/flatearther | Cropinky | 2021-06-30T22:37:54Z | 156 | 0 | null | [
"region:us"
] | 2021-06-30T22:37:54Z | 2022-03-02T23:29:22.000Z | 2022-03-02T23:29:22 | ## Wow fishing bobber object detection dataset
Hello, here you will find a link to a CSV I scraped, using the scraper found at the same link. It contains paragraphs of text found on a flat earth conspiracy website.
#TODO: turn it into an actual huggingface dataset
-0.6715256571769714,
-0.9550026059150696,
0.4140471816062927,
-0.03625128045678139,
-0.5978342294692993,
0.13738326728343964,
0.1701485812664032,
-0.40144798159599304,
0.21987956762313843,
0.6858495473861694,
-0.7080144882202148,
-0.7868682146072388,
-0.505304217338562,
0.18681633472442627... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Dmitriy612/1 | Dmitriy612 | 2021-10-09T12:22:11Z | 156 | 0 | null | [
"region:us"
] | 2021-10-09T12:22:11Z | 2022-03-02T23:29:22.000Z | 2022-03-02T23:29:22 | Entry not found | [
-0.32276472449302673,
-0.22568407654762268,
0.8622258901596069,
0.4346148371696472,
-0.5282984972000122,
0.7012965679168701,
0.7915717363357544,
0.07618629932403564,
0.7746022939682007,
0.2563222646713257,
-0.785281777381897,
-0.22573848068714142,
-0.9104482531547546,
0.5715669393539429,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
DoyyingFace/github-issues-doy | DoyyingFace | 2022-01-19T10:57:15Z | 156 | 0 | null | [
"region:us"
] | 2022-01-19T10:57:15Z | 2022-03-02T23:29:22.000Z | 2022-03-02T23:29:22 | Entry not found | [
-0.32276472449302673,
-0.22568407654762268,
0.8622258901596069,
0.4346148371696472,
-0.5282984972000122,
0.7012965679168701,
0.7915717363357544,
0.07618629932403564,
0.7746022939682007,
0.2563222646713257,
-0.785281777381897,
-0.22573848068714142,
-0.9104482531547546,
0.5715669393539429,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Emma121/aaaaa | Emma121 | 2022-02-24T14:29:51Z | 156 | 0 | null | [
"license:bsd-3-clause-clear",
"region:us"
] | 2022-02-24T14:29:51Z | 2022-03-02T23:29:22.000Z | 2022-03-02T23:29:22 | ---
license: bsd-3-clause-clear
---
| [
-0.12853392958641052,
-0.18616779148578644,
0.6529127955436707,
0.49436280131340027,
-0.19319361448287964,
0.23607419431209564,
0.36072003841400146,
0.050563063472509384,
0.579365611076355,
0.7400140762329102,
-0.6508104205131531,
-0.23783954977989197,
-0.7102249264717102,
-0.0478260256350... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Emma121/testtest | Emma121 | 2022-02-14T13:18:46Z | 156 | 0 | null | [
"region:us"
] | 2022-02-14T13:18:46Z | 2022-03-02T23:29:22.000Z | 2022-03-02T23:29:22 | Entry not found | [
-0.3227649927139282,
-0.225684255361557,
0.862226128578186,
0.43461498618125916,
-0.5282987952232361,
0.7012963891029358,
0.7915717363357544,
0.07618629932403564,
0.7746025919914246,
0.2563219666481018,
-0.7852816581726074,
-0.2257382869720459,
-0.9104480743408203,
0.5715669393539429,
-0... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Enes3774/data | Enes3774 | 2021-08-15T19:43:29Z | 156 | 0 | null | [
"region:us"
] | 2021-08-15T19:43:29Z | 2022-03-02T23:29:22.000Z | 2022-03-02T23:29:22 | Entry not found | [
-0.3227649927139282,
-0.225684255361557,
0.862226128578186,
0.43461498618125916,
-0.5282987952232361,
0.7012963891029358,
0.7915717363357544,
0.07618629932403564,
0.7746025919914246,
0.2563219666481018,
-0.7852816581726074,
-0.2257382869720459,
-0.9104480743408203,
0.5715669393539429,
-0... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
FL33TW00D/test-dataset | FL33TW00D | 2021-10-13T14:40:54Z | 156 | 0 | null | [
"region:us"
] | 2021-10-13T14:40:54Z | 2022-03-02T23:29:22.000Z | 2022-03-02T23:29:22 | Entry not found | [
-0.3227649927139282,
-0.225684255361557,
0.862226128578186,
0.43461498618125916,
-0.5282987952232361,
0.7012963891029358,
0.7915717363357544,
0.07618629932403564,
0.7746025919914246,
0.2563219666481018,
-0.7852816581726074,
-0.2257382869720459,
-0.9104480743408203,
0.5715669393539429,
-0... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
FRTNX/worldbank-projects | FRTNX | 2021-09-03T14:04:26Z | 156 | 0 | null | [
"region:us"
] | 2021-09-03T14:04:26Z | 2022-03-02T23:29:22.000Z | 2022-03-02T23:29:22 | Entry not found | [
-0.3227649927139282,
-0.225684255361557,
0.862226128578186,
0.43461498618125916,
-0.5282987952232361,
0.7012963891029358,
0.7915717363357544,
0.07618629932403564,
0.7746025919914246,
0.2563219666481018,
-0.7852816581726074,
-0.2257382869720459,
-0.9104480743408203,
0.5715669393539429,
-0... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Francois/futures_es | Francois | 2021-07-29T17:19:50Z | 156 | 0 | null | [
"region:us"
] | 2021-07-29T17:19:50Z | 2022-03-02T23:29:22.000Z | 2022-03-02T23:29:22 | Entry not found | [
-0.32276487350463867,
-0.22568444907665253,
0.8622263073921204,
0.43461570143699646,
-0.5282988548278809,
0.7012969255447388,
0.7915717363357544,
0.07618642598390579,
0.7746027112007141,
0.25632190704345703,
-0.7852815389633179,
-0.22573848068714142,
-0.910447895526886,
0.5715675354003906,... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
ebrigham/asr_files | ebrigham | 2022-01-03T11:29:38Z | 156 | 0 | null | [
"region:us"
] | 2022-01-03T11:29:38Z | 2022-03-02T23:29:22.000Z | 2022-03-02T23:29:22 | asr files | [
-0.16755254566669464,
0.12407691776752472,
-0.04674995690584183,
0.4857272207736969,
-0.15857432782649994,
0.3716694712638855,
-0.0157189778983593,
-0.4261240065097809,
0.2736108899116516,
1.166854739189148,
-0.42949533462524414,
-0.17079591751098633,
-0.8071461915969849,
0.298043310642242... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
indonesian-nlp/id_newspapers_2018 | indonesian-nlp | 2022-10-25T13:47:43Z | 156 | 8 | null | [
"task_categories:text-generation",
"task_ids:language-modeling",
"annotations_creators:no-annotation",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:original",
"language:id",
"license:cc-by-4.0",
"region:us"
] | 2022-10-25T13:47:43Z | 2022-03-02T23:29:22.000Z | 2022-03-02T23:29:22 | ---
annotations_creators:
- no-annotation
language_creators:
- found
language:
- id
license:
- cc-by-4.0
multilinguality:
- monolingual
size_categories:
- 100K<n<1M
source_datasets:
- original
task_categories:
- text-generation
task_ids:
- language-modeling
pretty_name: Indonesian Newspapers 2018
---
# Dataset of Indonesian Online Newspaper
This is a copy of a dataset created by **Feryandi Nurdiantoro** (<https://github.com/feryandi/Dataset-Artikel>). The original dataset, in json format, is stored uncompressed in Google Drive in more than 500K files, one file per article. Unfortunately, due to its size, it is impossible to download the whole dataset as one big compressed file (it takes forever to compress it online). Therefore I provide here a copy and its cleaned version as compressed files.
The dataset contains around 500K articles (136M words) from 7 Indonesian newspapers: Detik, Kompas, Tempo, CNN Indonesia, Sindo, Republika and Poskota. The articles are dated between 1st January 2018 and 20th August 2018 (with a few exceptions dated earlier). The size of the 500K uncompressed json files (newspapers-json.tgz) is around 2.2GB, and the cleaned version in one big uncompressed text file (newspapers.txt.gz) is about 1GB. The original source in Google Drive also contains a dataset in html format, which includes raw data (pictures, css, javascript, ...) from the online news websites. I don't copy it here since it is about 60GB and we mostly only need the text content for NLP research.
The following are the compressed files:
* newspaper-json.gz: the compressed original 500K json files.
* newspaper.txt.gz: a dump of all json files in one big cleaned text file which is normally the only one needed for language model training.
The license has been copied from the source:
## License
Proyek ini dilisensikan dibawah lisensi **Creative Commons Attribution-ShareAlike 4.0 International License**\*. Kumpulan data yang dibagikan bertujuan untuk ilmu pengetahuan, pembelajaran, dan penelitian Bahasa Indonesia (komputasi maupun lingusitik), dan hanya dapat digunakan untuk hal tersebut. Kepemilikan data untuk setiap artikel dimiliki oleh media yang bersangkutan dimana data tersebut diambil; dan pemilik repository ini tidak melakukan klaim kepemilikan atas konten tersebut. Jika Anda mendapati bahwa data ini telah melanggar suatu hak cipta; mohon kontak pengelola repository ini.
This work is licensed under a **Creative Commons Attribution-ShareAlike 4.0 International License**. The dataset is shared for the sole purpose of aiding open scientific research in Bahasa Indonesia (computing or linguistics), and can only be used for that purpose. The ownership of each article within the dataset belongs to the respective newspaper from which it was extracted; and the maintainer of the repository does not claim ownership of any of the content within it. If you think, by any means, that this dataset breaches any established copyrights; please contact the repository maintainer.
| [
-0.10053787380456924,
-0.660666286945343,
0.2399945706129074,
0.5008987784385681,
-0.47624653577804565,
-0.0952787771821022,
-0.40250691771507263,
-0.3855948746204376,
0.5652578473091125,
0.7322925925254822,
-0.261678010225296,
-0.2978069484233856,
-0.5444484949111938,
0.676056444644928,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
merionum/ru_paraphraser | merionum | 2022-07-28T15:01:08Z | 156 | 5 | null | [
"task_categories:text-classification",
"task_categories:text-generation",
"task_categories:text2text-generation",
"task_categories:sentence-similarity",
"task_ids:semantic-similarity-scoring",
"annotations_creators:crowdsourced",
"annotations_creators:expert-generated",
"annotations_creators:machine-g... | 2022-07-28T15:01:08Z | 2022-05-26T14:53:46.000Z | 2022-05-26T14:53:46 | ---
annotations_creators:
- crowdsourced
- expert-generated
- machine-generated
language_creators:
- crowdsourced
language:
- ru
license:
- mit
multilinguality:
- monolingual
paperswithcode_id: null
pretty_name: ParaPhraser
size_categories:
- 1M<n<10M
source_datasets:
- original
task_categories:
- text-classification
- text-generation
- text2text-generation
- sentence-similarity
task_ids:
- semantic-similarity-scoring
---
# Dataset Card for ParaPhraser
### Dataset Summary
ParaPhraser is a news headlines corpus annotated according to the following schema:
```
1: precise paraphrases
0: near paraphrases
-1: non-paraphrases
```
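As a convenience, the schema above can be turned into a lookup table. A minimal sketch; the field names in `pair` are assumptions for illustration, not guaranteed by this card:

```python
# Map the integer annotation schema to readable names.
LABEL_NAMES = {1: "precise paraphrase", 0: "near paraphrase", -1: "non-paraphrase"}

# Hypothetical pair record; real field names may differ.
pair = {"text_1": "headline A", "text_2": "paraphrased headline A", "class": 1}
print(LABEL_NAMES[pair["class"]])  # precise paraphrase
```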
The _Plus_ part is also available.
It contains clusters of news headline paraphrases labeled automatically by a fine-tuned paraphrase detection BERT model.
In order to load it:
```python
from datasets import load_dataset
corpus = load_dataset('merionum/ru_paraphraser', data_files='plus.jsonl')
```
## Dataset Structure
```
train: 7,227 pairs
test: 1,924 pairs
plus: 1,725,393 clusters (total: ~7m texts)
```
### Citation Information
```
@inproceedings{pivovarova2017paraphraser,
title={ParaPhraser: Russian paraphrase corpus and shared task},
author={Pivovarova, Lidia and Pronoza, Ekaterina and Yagunova, Elena and Pronoza, Anton},
booktitle={Conference on artificial intelligence and natural language},
pages={211--225},
year={2017},
organization={Springer}
}
```
```
@inproceedings{gudkov-etal-2020-automatically,
title = "Automatically Ranked {R}ussian Paraphrase Corpus for Text Generation",
author = "Gudkov, Vadim and
Mitrofanova, Olga and
Filippskikh, Elizaveta",
booktitle = "Proceedings of the Fourth Workshop on Neural Generation and Translation",
month = jul,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2020.ngt-1.6",
doi = "10.18653/v1/2020.ngt-1.6",
pages = "54--59",
abstract = "The article is focused on automatic development and ranking of a large corpus for Russian paraphrase generation which proves to be the first corpus of such type in Russian computational linguistics. Existing manually annotated paraphrase datasets for Russian are limited to small-sized ParaPhraser corpus and ParaPlag which are suitable for a set of NLP tasks, such as paraphrase and plagiarism detection, sentence similarity and relatedness estimation, etc. Due to size restrictions, these datasets can hardly be applied in end-to-end text generation solutions. Meanwhile, paraphrase generation requires a large amount of training data. In our study we propose a solution to the problem: we collect, rank and evaluate a new publicly available headline paraphrase corpus (ParaPhraser Plus), and then perform text generation experiments with manual evaluation on automatically ranked corpora using the Universal Transformer architecture.",
}
```
### Contributions
Dataset maintainer:
Vadim Gudkov: [@merionum](https://github.com/merionum)
| [
-0.12187226116657257,
-0.555081307888031,
0.3593038320541382,
0.33248651027679443,
-0.5030381679534912,
-0.07631808519363403,
-0.2800639867782593,
0.017900623381137848,
0.08513763546943665,
0.4547509253025055,
-0.185777485370636,
-0.8098497986793518,
-0.47034695744514465,
0.399311751127243... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
OGB/ogbg-molhiv | OGB | 2023-02-07T16:39:46Z | 156 | 3 | null | [
"task_categories:graph-ml",
"license:mit",
"region:us"
] | 2023-02-07T16:39:46Z | 2022-07-06T15:28:13.000Z | 2022-07-06T15:28:13 | ---
license: mit
task_categories:
- graph-ml
---
# Dataset Card for ogbg-molhiv
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [External Use](#external-use)
- [PyGeometric](#pygeometric)
- [Dataset Structure](#dataset-structure)
- [Data Properties](#data-properties)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Additional Information](#additional-information)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **[Homepage](https://ogb.stanford.edu/docs/graphprop/#ogbg-mol)**
- **[Repository](https://github.com/snap-stanford/ogb):**:
- **Paper:**: Open Graph Benchmark: Datasets for Machine Learning on Graphs (see citation)
- **Leaderboard:**: [OGB leaderboard](https://ogb.stanford.edu/docs/leader_graphprop/#ogbg-molhiv) and [Papers with code leaderboard](https://paperswithcode.com/sota/graph-property-prediction-on-ogbg-molhiv)
### Dataset Summary
The `ogbg-molhiv` dataset is a small molecular property prediction dataset, adapted from MoleculeNet by teams at Stanford, to be a part of the Open Graph Benchmark.
### Supported Tasks and Leaderboards
`ogbg-molhiv` should be used for molecular property prediction (aiming to predict whether molecules inhibit HIV or not), a binary classification task. The score used is ROC-AUC.
The associated leaderboards are here: [OGB leaderboard](https://ogb.stanford.edu/docs/leader_graphprop/#ogbg-molhiv) and [Papers with code leaderboard](https://paperswithcode.com/sota/graph-property-prediction-on-ogbg-molhiv).
## External Use
### PyGeometric
To load in PyGeometric, do the following:
```python
from datasets import load_dataset
from torch_geometric.data import Data
from torch_geometric.loader import DataLoader
ogbg_molhiv = load_dataset("graphs-datasets/ogbg-molhiv")
# For the train set (replace by valid or test as needed)
ogbg_molhiv_pg_list = [Data(**graph) for graph in ogbg_molhiv["train"]]
ogbg_molhiv_pg = DataLoader(ogbg_molhiv_pg_list)
```
## Dataset Structure
### Data Properties
| property | value |
|---|---|
| scale | small |
| #graphs | 41,127 |
| average #nodes | 25.5 |
| average #edges | 27.5 |
| average node degree | 2.2 |
| average cluster coefficient | 0.002 |
| MaxSCC ratio | 0.993 |
| graph diameter | 12.0 |
### Data Fields
Each row of a given file is a graph, with:
- `node_feat` (list: #nodes x #node-features): nodes
- `edge_index` (list: 2 x #edges): pairs of nodes constituting edges
- `edge_attr` (list: #edges x #edge-features): for the aforementioned edges, contains their features
- `y` (list: 1 x #labels): contains the number of labels available to predict (here 1, equal to zero or one)
- `num_nodes` (int): number of nodes of the graph
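To make the row layout concrete, here is a hypothetical three-node graph in that schema (all values invented) with consistency checks that follow directly from the field descriptions:

```python
# Hypothetical row in the ogbg-molhiv field schema; values are invented.
graph = {
    "node_feat": [[0, 1], [1, 0], [1, 1]],  # 3 nodes x 2 node features
    "edge_index": [[0, 1, 1], [1, 0, 2]],   # 2 x 3: pairs of nodes per edge
    "edge_attr": [[0.5], [0.5], [0.1]],     # 3 edges x 1 edge feature
    "y": [0],                               # 1 label, equal to zero or one
    "num_nodes": 3,
}

# Shape checks implied by the field descriptions above.
assert len(graph["node_feat"]) == graph["num_nodes"]
assert len(graph["edge_index"]) == 2
assert len(graph["edge_attr"]) == len(graph["edge_index"][0])
print("row schema is consistent")
```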
### Data Splits
This data comes from the PyGeometric version of the dataset provided by OGB, and follows the provided data splits.
This information can be found back using
```python
from ogb.graphproppred import PygGraphPropPredDataset
dataset = PygGraphPropPredDataset(name = 'ogbg-molhiv')
split_idx = dataset.get_idx_split()
train = dataset[split_idx['train']] # valid, test
```
## Additional Information
### Licensing Information
The dataset has been released under MIT license.
### Citation Information
```
@inproceedings{hu-etal-2020-open,
author = {Weihua Hu and
Matthias Fey and
Marinka Zitnik and
Yuxiao Dong and
Hongyu Ren and
Bowen Liu and
Michele Catasta and
Jure Leskovec},
editor = {Hugo Larochelle and
Marc Aurelio Ranzato and
Raia Hadsell and
Maria{-}Florina Balcan and
Hsuan{-}Tien Lin},
title = {Open Graph Benchmark: Datasets for Machine Learning on Graphs},
booktitle = {Advances in Neural Information Processing Systems 33: Annual Conference
on Neural Information Processing Systems 2020, NeurIPS 2020, December
6-12, 2020, virtual},
year = {2020},
url = {https://proceedings.neurips.cc/paper/2020/hash/fb60d411a5c5b72b2e7d3527cfc84fd0-Abstract.html},
}
```
### Contributions
Thanks to [@clefourrier](https://github.com/clefourrier) for adding this dataset. | [
-0.5193139910697937,
-0.7366766929626465,
0.19815918803215027,
-0.3095521926879883,
-0.11155182868242264,
-0.28494128584861755,
-0.22992442548274994,
-0.4575105905532837,
0.20699533820152283,
0.25713643431663513,
-0.3739979863166809,
-0.8026565909385681,
-0.5691447854042053,
-0.21907797455... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
CVdatasets/ImageNet15_animals_unbalanced_aug1 | CVdatasets | 2023-02-28T18:48:02Z | 156 | 0 | null | [
"region:us"
] | 2023-02-28T18:48:02Z | 2023-02-28T18:47:49.000Z | 2023-02-28T18:47:49 | ---
dataset_info:
features:
- name: labels
dtype:
class_label:
names:
'0': Italian_greyhound
'1': Coyote
'2': Beagle
'3': Rottweiler
'4': Hyena
'5': Greater_Swiss_Mountain_dog
'6': Triceratops
'7': French_bulldog
'8': Red_wolf
'9': Egyptian_cat
'10': Chihuahua
'11': Irish_terrier
'12': Tiger_cat
'13': White_wolf
'14': Timber_wolf
- name: img
dtype: image
- name: is_generated
dtype: bool
splits:
- name: validation
num_bytes: 60570648.125
num_examples: 1439
- name: train
num_bytes: 174270537.875
num_examples: 3705
download_size: 234762621
dataset_size: 234841186.0
---
# Dataset Card for "ImageNet15_animals_unbalanced_aug1"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.6802906394004822,
0.008215130306780338,
-0.3475547134876251,
0.6111717820167542,
-0.44349604845046997,
-0.14681315422058105,
0.44433239102363586,
-0.3495796322822571,
0.8810946941375732,
0.6430892944335938,
-0.5771498084068298,
-0.6392384767532349,
-0.6147363185882568,
-0.03906654193997... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
argilla/databricks-dolly-15k-curated-multilingual | argilla | 2023-06-14T07:47:54Z | 156 | 35 | null | [
"task_categories:text-generation",
"task_categories:text2text-generation",
"size_categories:10K<n<100K",
"language:es",
"language:de",
"language:fr",
"license:cc-by-sa-3.0",
"machine-translated",
"instruction-following",
"region:us"
] | 2023-06-14T07:47:54Z | 2023-04-13T12:18:17.000Z | 2023-04-13T12:18:17 | ---
dataset_info:
features:
- name: instruction
dtype: string
- name: context
dtype: string
- name: response
dtype: string
- name: category
dtype: string
- name: instruction_original_en
dtype: string
- name: context_original_en
dtype: string
- name: response_original_en
dtype: string
- name: id
dtype: int64
splits:
- name: de
num_bytes: 25985140
num_examples: 15015
- name: en
num_bytes: 24125109
num_examples: 15015
- name: es
num_bytes: 25902709
num_examples: 15015
- name: fr
num_bytes: 26704314
num_examples: 15015
download_size: 65586669
dataset_size: 102717272
license: cc-by-sa-3.0
task_categories:
- text-generation
- text2text-generation
language:
- es
- de
- fr
tags:
- machine-translated
- instruction-following
pretty_name: Databrick Dolly Instructions Multilingual
size_categories:
- 10K<n<100K
---
# Dataset Card for "databricks-dolly-15k-curated-multilingual"
A curated and multilingual version of the Databricks Dolly instructions dataset. It includes a programmatically and manually corrected version of the original `en` dataset. See below.
**STATUS**:
Currently, the original Dolly v2 English version has been curated combining automatic processing and collaborative human curation using Argilla (~400 records have been manually edited and fixed). The following graph shows a summary of the number of edited fields.

## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage: https://huggingface.co/datasets/argilla/databricks-dolly-15k-multilingual/**
- **Repository: https://huggingface.co/datasets/argilla/databricks-dolly-15k-multilingual/**
- **Paper:**
- **Leaderboard:**
- **Point of Contact: contact@argilla.io, https://github.com/argilla-io/argilla**
### Dataset Summary
This dataset collection is a curated and machine-translated version of the `databricks-dolly-15k` [dataset](https://github.com/databrickslabs/dolly/tree/master/data) originally created by Databricks, Inc. in 2023.
The goal is to give practitioners a starting point for training open-source instruction-following models with better-quality English data and translated data beyond English. However, as the translation quality will not be perfect, we highly recommend dedicating time to curate and fix translation issues. Below we explain how to load the datasets into [Argilla for data curation and fixing](https://github.com/argilla-io/argilla). Additionally, we'll be improving the datasets made available here, with the help of different communities.
Currently, the original English version has been curated combining automatic processing and collaborative human curation using Argilla (~400 records have been manually edited and fixed). The following graph shows a summary of the number of edited fields.
The main issues (likely many issues still remaining) are the following:
1. Some labelers misunderstood the usage of the `context` field. This `context` field is used as part of the prompt for instruction-tuning and in other works it's called `input` (e.g., Alpaca). Likely, the name "context" has led some labelers to use it to provide the full context of where they extracted the response. This is problematic for some types of tasks (summarization, closed-qa or information-extraction) because sometimes the context is shorter than or unrelated to summaries, or the information cannot be extracted from the context (closed-qa, information-extraction).
2. Some labelers misunderstood the way to give instructions for summarization or closed-qa; for example, they ask "Who is Thomas Jefferson?" and then provide a very long context and an equally long response.
We programmatically identified records with these potential issues and ran a campaign to fix them; as a result, more than 400 records have been adapted. See below for statistics:

As a result of this curation process the content of the fields has been reduced, counted in number of tokens, especially for the responses:

If you want to browse and curate your dataset with Argilla, you can:
1. [Duplicate this Space](https://huggingface.co/spaces/argilla/dolly-multilingual-curation/settings?duplicate=true). IMPORTANT: the Space's visibility needs to be Public, but you can set up your own password and API keys [following this guide](https://docs.argilla.io/en/latest/getting_started/installation/deployments/huggingface-spaces.html#setting-up-secret-environment-variables).
2. Set up two secrets: `HF_TOKEN` and `LANG` for indicating the language split
3. Login with `admin`/`12345678` and start browsing and labelling.
4. Start labeling. Every 5 min the validations will be stored on a Hub dataset in your personal HF space.
5. Please get in touch to contribute fixes and improvements to the source datasets.
There's one split per language:
```python
from datasets import load_dataset
# loads all splits
load_dataset("argilla/databricks-dolly-15k-curated-multilingual")
# loads Spanish splits
load_dataset("argilla/databricks-dolly-15k-curated-multilingual", split="es")
```
### Supported Tasks and Leaderboards
As described in the README of the original dataset, this dataset can be used for:
* Training LLMs
* Synthetic Data Generation
* Data Augmentation
### Languages
Currently: `es`, `fr`, `de`, `en`
Join Argilla [Slack community](https://join.slack.com/t/rubrixworkspace/shared_invite/zt-whigkyjn-a3IUJLD7gDbTZ0rKlvcJ5g) if you want to help us include other languages.
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
There's one split per language:
```python
from datasets import load_dataset
# loads all splits
load_dataset("argilla/databricks-dolly-15k-multilingual")
# loads Spanish splits
load_dataset("argilla/databricks-dolly-15k-multilingual", split="es")
```
## Dataset Creation
These datasets have been translated using the DeepL API from the original English dataset between the 13th and 14th of April
### Curation Logbook
* 28/04/23: Removed references from Wikipedia copy pastes for 8113 rows. Applied to context and response fields with the following regex: `r'\[[\w]+\]'`
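The logbook regex can be applied directly to a field. A small sketch on an invented sentence:

```python
import re

# Strip Wikipedia-style reference markers such as "[1]" or "[note]"
# using the regex quoted in the logbook entry above.
pattern = re.compile(r"\[[\w]+\]")
context = "Paris is the capital of France.[1] It lies on the Seine.[note]"
cleaned = pattern.sub("", context)
print(cleaned)  # Paris is the capital of France. It lies on the Seine.
```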
### Source Data
#### Initial Data Collection and Normalization
Refer to the [original dataset](https://github.com/databrickslabs/dolly/tree/master/data) for more information.
#### Who are the source language producers?
[More Information Needed]
### Annotations
Annotations are planned but not performed yet.
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
This dataset can be used for any purpose, whether academic or commercial, under the terms of the [Creative Commons Attribution-ShareAlike 3.0 Unported License](https://creativecommons.org/licenses/by-sa/3.0/legalcode).
**Original dataset Owner: Databricks, Inc.**
### Citation Information
[More Information Needed] | [
-0.35320863127708435,
-0.9040662050247192,
0.029621774330735207,
0.4748792052268982,
-0.14969278872013092,
-0.088836170732975,
-0.0231095589697361,
-0.5130499005317688,
0.4125579595565796,
0.5813083052635193,
-0.6879153251647949,
-0.7902718782424927,
-0.6404671669006348,
0.3590002059936523... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
chiyuanhsiao/ML2021_HungyiLee_Corpus | chiyuanhsiao | 2023-11-14T08:00:36Z | 156 | 1 | null | [
"region:us"
] | 2023-11-14T08:00:36Z | 2023-11-14T06:05:29.000Z | 2023-11-14T06:05:29 | ---
configs:
- config_name: default
data_files:
- split: test
path: data/test-*
dataset_info:
features:
- name: file
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
dtype: string
splits:
- name: test
num_bytes: 1086428751.563
num_examples: 31181
download_size: 1086479549
dataset_size: 1086428751.563
---
# Dataset Card for "debug"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.6623968482017517,
-0.6847857236862183,
0.1695978194475174,
0.3667190670967102,
-0.3533867597579956,
0.07067038118839264,
0.30647265911102295,
-0.25365644693374634,
0.795211136341095,
0.4046000838279724,
-0.81168532371521,
-0.8745891451835632,
-0.6794626712799072,
-0.1520860195159912,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
BlakesOrb6/Fred-Flintstone | BlakesOrb6 | 2021-11-08T17:43:33Z | 155 | 0 | null | [
"region:us"
] | 2021-11-08T17:43:33Z | 2022-03-02T23:29:22.000Z | 2022-03-02T23:29:22 | Entry not found | [
-0.32276496291160583,
-0.22568435966968536,
0.8622260093688965,
0.43461480736732483,
-0.5282987952232361,
0.7012965083122253,
0.7915714979171753,
0.07618625462055206,
0.7746025323867798,
0.25632181763648987,
-0.7852815389633179,
-0.22573819756507874,
-0.9104480743408203,
0.5715669393539429... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
CAGER/rick | CAGER | 2021-07-09T02:05:44Z | 155 | 0 | null | [
"region:us"
] | 2021-07-09T02:05:44Z | 2022-03-02T23:29:22.000Z | 2022-03-02T23:29:22 | welcoe to cager data set | [
-0.3310050368309021,
0.22521927952766418,
-0.11157672107219696,
0.32875701785087585,
-0.17969584465026855,
-0.05238877609372139,
0.12029194831848145,
-0.0553862527012825,
0.22566668689250946,
0.8697126507759094,
-0.10367260873317719,
-0.5804027915000916,
-0.6360877156257629,
0.305333048105... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
ChadxxxxHall/Inter-vision | ChadxxxxHall | 2021-08-11T22:44:47Z | 155 | 0 | null | [
"region:us"
] | 2021-08-11T22:44:47Z | 2022-03-02T23:29:22.000Z | 2022-03-02T23:29:22 | Entry not found | [
-0.32276496291160583,
-0.22568435966968536,
0.8622260093688965,
0.43461480736732483,
-0.5282987952232361,
0.7012965083122253,
0.7915714979171753,
0.07618625462055206,
0.7746025323867798,
0.25632181763648987,
-0.7852815389633179,
-0.22573819756507874,
-0.9104480743408203,
0.5715669393539429... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Cheranga/test | Cheranga | 2022-02-10T01:34:34Z | 155 | 0 | null | [
"license:afl-3.0",
"region:us"
] | 2022-02-10T01:34:34Z | 2022-03-02T23:29:22.000Z | 2022-03-02T23:29:22 | ---
license: afl-3.0
---
| [
-0.1285337507724762,
-0.18616773188114166,
0.6529127359390259,
0.4943627715110779,
-0.193193256855011,
0.23607444763183594,
0.36071985960006714,
0.050563156604766846,
0.5793652534484863,
0.7400138974189758,
-0.6508103013038635,
-0.23783966898918152,
-0.7102247476577759,
-0.0478259548544883... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Darren/data | Darren | 2021-05-27T23:31:45Z | 155 | 0 | null | [
"region:us"
] | 2021-05-27T23:31:45Z | 2022-03-02T23:29:22.000Z | 2022-03-02T23:29:22 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622263669967651,
0.43461522459983826,
-0.52829909324646,
0.7012971639633179,
0.7915719747543335,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104475975036621,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
HarveyBWest/mybot | HarveyBWest | 2021-07-03T13:22:20Z | 155 | 0 | null | [
"region:us"
] | 2021-07-03T13:22:20Z | 2022-03-02T23:29:22.000Z | 2022-03-02T23:29:22 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622263669967651,
0.43461522459983826,
-0.52829909324646,
0.7012971639633179,
0.7915719747543335,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104475975036621,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
KTH/waxholm | KTH | 2023-08-09T10:36:10Z | 155 | 0 | null | [
"task_categories:automatic-speech-recognition",
"language:sv",
"region:us"
] | 2023-08-09T10:36:10Z | 2022-03-02T23:29:22.000Z | 2022-03-02T23:29:22 | ---
language:
- sv
task_categories:
- automatic-speech-recognition
---
# THE WAXHOLM CORPUS
The Waxholm corpus was collected in 1993 - 1994 at the department of
Speech, Hearing and Music (TMH), KTH. It is described in several
publications. Two are included in this archive. Publication of work
using the Waxholm corpus should refer to either of these. More
information on the Waxholm project can be found on the web page
http://www.speech.kth.se/waxholm/waxholm2.html
## FILE INFORMATION
### SAMPLED FILES
The .smp files contain the speech signal. The identity
of the speaker is coded by the two digits after 'fp20' in the file
name. The smp file format was developed by TMH. Recording information
is stored in a header as a 1024 byte text string. The speech signal in
the Waxholm corpus is quantised into 16 bits, 2 bytes/sample and the
byte order is big-endian (most significant byte first). The sampling
frequency is 16 kHz. Here is an example of a file header:
```
>head -9 fp2001.1.01.smp
file=samp ; file type is sampled signal
msb=first ; byte order
sftot=16000 ; sampling frequency in Hz
nchans=1 ; number of channels
preemph=no ; no signal preemphasis during recording
view=-10,10
born=/o/libhex/ad_da.h25
range=-12303,11168 ; amplitude range
=
```
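Given the layout above (1024-byte text header followed by 16-bit big-endian samples at 16 kHz), a minimal reader can be sketched with the standard library. This is a hedged sketch, not an official tool; the file below is a fake in-memory .smp with an abbreviated header built for illustration:

```python
import io
import struct

# Build a fake .smp: 1024-byte text header padded with NULs, then
# four 16-bit big-endian samples. Header content is abbreviated.
header = b"file=samp\nmsb=first\nsftot=16000\nnchans=1\n=\n".ljust(1024, b"\x00")
samples = [0, 1000, -1000, 32767]
blob = header + struct.pack(">%dh" % len(samples), *samples)

# Read it back: text header first, then big-endian int16 samples.
f = io.BytesIO(blob)
text_header = f.read(1024).split(b"\x00", 1)[0].decode("ascii")
payload = f.read()
decoded = list(struct.unpack(">%dh" % (len(payload) // 2), payload))
assert "sftot=16000" in text_header
print(decoded)  # [0, 1000, -1000, 32767]
```

The same two reads (1024-byte header, then `>h` samples) should apply to a real file opened with `open(path, "rb")`.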
### LABEL FILES
Normally, each sample file has a label file. This has been
produced in four steps. The first step was to manually enter the
orthographic text by listening. From this text a sequence of phonemes
were produced by a rule-based text-to-phoneme module. The endpoint
time positions of the phonemes were computed by an automatic alignment
program, followed by manual correction. Some of the speech files have
no label file, due to different problems in this process. These files
should not be used for training or testing.
The labels are stored in .mix files. Below is an example of the
beginning of a mix file.
```
>head -20 fp2001.1.01.smp.mix
CORRECTED: OK jesper Jesper Hogberg Thu Jun 22 13:26:26 EET 1995
AUTOLABEL: tony A. de Serpa-Leitao Mon Nov 15 13:44:30 MET 1993
Waxholm dialog. /u/wax/data/scenes/fp2001/fp2001.1.01.smp
TEXT:
jag vill }ka h{rifr}n .
J'A:+ V'IL+ "]:K'A H'[3RIFR]N.
CT 1
Labels: J'A: V'IL "]:KkA H'[3RIFR]N .
FR 11219 #J >pm #J >w jag 0.701 sec
FR 12565 $'A: >pm $'A:+ 0.785 sec
FR 13189 #V >pm #V >w vill 0.824 sec
FR 13895 $'I >pm $'I 0.868 sec
FR 14700 $L >pm $L+ 0.919 sec
```
The orthographic text representation is after the label 'TEXT:'. CT is
the frame length in number of sample points (always = 1 in Waxholm
mix files). Each line starting with 'FR' contains up to three labels at
the phonetic, phonemic and word levels. FR is immediately followed by
the frame number of the start of the segment. Since CT = 1, FR is the
sample index in the file. If a frame duration is = 0, the label has
been judged as a non-pronounced segment and deleted by the manual
labeller, although it was generated by the text-to-phoneme or the
automatic alignment modules. Column 3 in an FR line is the phonetic
label. Initial '#' indicates word initial position. '$' indicates
other positions. The optional label '>pm' precedes the phonemic label,
which has been generated by the text-to-phoneme rules. Often, the
phonemic and the phonetic labels are identical. The optional '>w' is
followed by the identity of the word beginning at this frame. The
phoneme symbol inventory is mainly STA, used by the KTH/TMH RULSYS
system. It is specified in the included file 'sampa_latex_se.pdf'.
Some extra labels at the phonetic level have been defined.
The most common ones are:
| | |
|---------------------|------------------------------------------|
|sm | lip or tongue opening |
|p: | silent interval |
|pa | aspirative sound from breathing |
|kl | click sound |
|v | short vocalic segment between consonants |
|upper case of stops | occlusion |
|lower case of stops | burst |
The label 'Labels:' before the FR lines is a text string assembled
from the FR labels.
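A small parsing sketch for the FR lines, reusing the excerpt from the fp2001.1.01.smp.mix example above. The field handling is a best-effort reading of this description, not an official parser:

```python
# Parse FR lines: frame index, phonetic label, optional >pm phoneme and
# optional >w word. The excerpt is copied from the mix file example above.
mix_excerpt = """FR 11219 #J >pm #J >w jag 0.701 sec
FR 12565 $'A: >pm $'A:+ 0.785 sec
FR 13189 #V >pm #V >w vill 0.824 sec"""

segments = []
for line in mix_excerpt.splitlines():
    tokens = line.split()
    if not tokens or tokens[0] != "FR":
        continue
    seg = {"frame": int(tokens[1]), "phonetic": tokens[2]}
    if ">pm" in tokens:
        seg["phoneme"] = tokens[tokens.index(">pm") + 1]
    if ">w" in tokens:
        seg["word"] = tokens[tokens.index(">w") + 1]
    segments.append(seg)

print([seg.get("word") for seg in segments])  # ['jag', None, 'vill']
```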
The mix files in this archive correspond to those with the name
extension .mix.new in the original corpus. Besides a few other
corrections, the main difference is that burst segments after
retroflex stops were not labelled as retroflex in the original .mix
files ( d, t after 2D and 2T have been changed to 2d and 2t).
## REFERENCES
Bertenstam, J., Blomberg, M., Carlson, R., Elenius, K., Granström,
B., Gustafson, J., Hunnicutt, S., Högberg, J., Lindell, R., Neovius,
L., Nord, L., de Serpa-Leitao, A., and Ström, N.,(1995). "Spoken
dialogue data collected in the WAXHOLM project" STL-QPSR 1/1995,
KTH/TMH, Stockholm.
Bertenstam, J., Blomberg, M., Carlson, R.,
Elenius, K., Granström, B., Gustafson, J., Hunnicutt, S., Högberg, J.,
Lindell, R., Neovius, L., de Serpa-Leitao, A., Nord, L., & Ström,
N. (1995). The Waxholm application data-base. In Pardo, J.M. (Ed.),
Proceedings Eurospeech 1995 (pp. 833-836). Madrid.
Comments and error reports are welcome. These should be sent to:
Mats Blomberg <matsb@speech.kth.se> or Kjell Elenius <kjell@speech.kth.se> | [
-0.4742603600025177,
-0.738116443157196,
0.38715770840644836,
0.1491001397371292,
-0.1702861785888672,
-0.03153086453676224,
-0.11724853515625,
-0.5568428039550781,
0.691780149936676,
0.7144607305526733,
-0.5764013528823853,
-0.8472297191619873,
-0.5329518914222717,
0.14939692616462708,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Pratik/Gujarati_OpenSLR | Pratik | 2021-11-17T13:36:56Z | 155 | 1 | null | [
"region:us"
] | 2021-11-17T13:36:56Z | 2022-03-02T23:29:22.000Z | 2022-03-02T23:29:22 | OpenSLR is a site devoted to hosting speech and language resources,
such as training corpora for speech recognition, and software related to speech recognition.
They intend to be a convenient place for anyone to put resources that they have created,
so that they can be downloaded publicly.
They aim to provide a central, hassle-free place for others to put their speech resources. see there http://www.openslr.org/contributions.html
# Supported Task
Automatic Speech Recognition
# Languages
Gujarati
Identifier: SLR78
Summary: Data set which contains recordings of native speakers of Gujarati.
Category: Speech
License: Attribution-ShareAlike 4.0 International
Downloads (use a mirror closer to you):
* about.html [1.5K] (information about the data set)
* LICENSE [20K] (license information for the data set)
* line_index_female.tsv [423K] (lines recorded by the female speakers)
* line_index_male.tsv [393K] (lines recorded by the male speakers)
* gu_in_female.zip [917M] (archive containing recordings from female speakers)
* gu_in_male.zip [825M] (archive containing recordings from male speakers)
About this resource:
This data set contains transcribed high-quality audio of Gujarati sentences recorded by volunteers. The data set consists of wave files, and a TSV file (line_index.tsv). The file line_index.tsv contains an anonymized FileID and the transcription of the audio in the file.
The data set has been manually quality checked, but there might still be errors.
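The line_index.tsv layout described above (FileID, tab, transcription) can be read with the standard csv module. The two rows below are invented placeholders, not real corpus entries:

```python
import csv
import io

# Sketch of reading line_index.tsv: one "FileID<TAB>transcription" row
# per line. Real FileIDs and transcriptions will differ.
tsv_text = "gu_f_00001\ttranscription one\ngu_f_00002\ttranscription two\n"
reader = csv.reader(io.StringIO(tsv_text), delimiter="\t")
index = {file_id: text for file_id, text in reader}
print(len(index))  # 2
```

Pointing `io.StringIO(tsv_text)` at an opened line_index_female.tsv or line_index_male.tsv should yield the full FileID-to-transcription mapping.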
Please report any issues in the following issue tracker on GitHub. https://github.com/googlei18n/language-resources/issues
See LICENSE file for license information.
Copyright 2018, 2019 Google, Inc.
If you use this data in publications, please cite it as follows:
@inproceedings{he-etal-2020-open,
title = {{Open-source Multi-speaker Speech Corpora for Building Gujarati, Kannada, Malayalam, Marathi, Tamil and Telugu Speech Synthesis Systems}},
author = {He, Fei and Chu, Shan-Hui Cathy and Kjartansson, Oddur and Rivera, Clara and Katanova, Anna and Gutkin, Alexander and Demirsahin, Isin and Johny, Cibu and Jansche, Martin and Sarin, Supheakmungkol and Pipatsrisawat, Knot},
booktitle = {Proceedings of The 12th Language Resources and Evaluation Conference (LREC)},
month = may,
year = {2020},
address = {Marseille, France},
publisher = {European Language Resources Association (ELRA)},
pages = {6494--6503},
url = {https://www.aclweb.org/anthology/2020.lrec-1.800},
ISBN = {979-10-95546-34-4},
}
| [
-0.3383728265762329,
-0.5135298371315002,
0.16924481093883514,
0.2902177572250366,
-0.4019997715950012,
0.032865893095731735,
-0.4973421096801758,
-0.30625271797180176,
0.1262630671262741,
0.23359252512454987,
-0.4642760455608368,
-0.6262519359588623,
-0.42310717701911926,
0.15368966758251... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Recognai/imdb_spacy-ner | Recognai | 2022-02-17T12:49:07Z | 155 | 0 | null | [
"region:us"
] | 2022-02-17T12:49:07Z | 2022-03-02T23:29:22.000Z | 2022-03-02T23:29:22 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622263669967651,
0.43461522459983826,
-0.52829909324646,
0.7012971639633179,
0.7915719747543335,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104475975036621,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
abdusah/masc | abdusah | 2023-11-16T10:48:30Z | 155 | 0 | null | [
"annotations_creators:crowdsourced",
"language_creators:crowdsourced",
"language:ar",
"license:cc-by-nc-4.0",
"region:us"
] | 2023-11-16T10:48:30Z | 2022-03-02T23:29:22.000Z | 2022-03-02T23:29:22 | ---
annotations_creators:
- crowdsourced
language_creators:
- crowdsourced
language:
- ar
license:
- cc-by-nc-4.0
multilinguality: []
pretty_name: 'MASC'
size_categories:
source_datasets: []
task_categories: []
task_ids: []
---
# Dataset Card for MASC: MASSIVE ARABIC SPEECH CORPUS
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://ieee-dataport.org/open-access/masc-massive-arabic-speech-corpus
- **Repository:**
- **Paper:** https://dx.doi.org/10.21227/e1qb-jv46
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
This corpus contains 1,000 hours of speech sampled at 16 kHz and crawled from over 700 YouTube channels. MASC is a multi-regional, multi-genre, and multi-dialect dataset intended to advance the research and development of Arabic speech technology, with special emphasis on Arabic speech recognition.
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
Multi-dialect Arabic
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
#### masc_dev
- speech
- sampling_rate
- target_text (label)
### Data Splits
#### masc_dev
- train: 100
- test: 40
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
Note: this is a small development set for testing.
### Dataset Curators
[More Information Needed]
### Licensing Information
CC BY-NC 4.0
### Citation Information
Mohammad Al-Fetyani, Muhammad Al-Barham, Gheith Abandah, Adham Alsharkawi, Maha Dawas, August 18, 2021, "MASC: Massive Arabic Speech Corpus", IEEE Dataport, doi: https://dx.doi.org/10.21227/e1qb-jv46.
### Contributions
[More Information Needed]
| [
-0.75612872838974,
-0.569111704826355,
-0.13782960176467896,
0.10838381201028824,
-0.1572423279285431,
0.22337204217910767,
-0.28685349225997925,
-0.22291219234466553,
0.4651292860507965,
0.38224658370018005,
-0.6343066096305847,
-1.0842399597167969,
-0.8388187885284424,
0.0255103614181280... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
allenai/scico | allenai | 2023-01-10T20:23:18Z | 155 | 3 | scico | [
"task_categories:token-classification",
"task_ids:coreference-resolution",
"annotations_creators:domain experts",
"multilinguality:monolingual",
"language:en",
"license:apache-2.0",
"cross-document-coreference-resolution",
"structure-prediction",
"region:us"
] | 2023-01-10T20:23:18Z | 2022-03-02T23:29:22.000Z | 2022-03-02T23:29:22 | ---
annotations_creators:
- domain experts
language:
- en
license:
- apache-2.0
multilinguality:
- monolingual
task_categories:
- token-classification
task_ids:
- coreference-resolution
paperswithcode_id: scico
tags:
- cross-document-coreference-resolution
- structure-prediction
---
# Dataset Card for SciCo
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [SciCo homepage](https://scico.apps.allenai.org/)
- **Repository:** [SciCo repository](https://github.com/ariecattan/scico)
- **Paper:** [SciCo: Hierarchical Cross-document Coreference for Scientific Concepts](https://openreview.net/forum?id=OFLbgUP04nC)
- **Point of Contact:** [Arie Cattan](arie.cattan@gmail.com)
### Dataset Summary
SciCo consists of clusters of mentions in context and a hierarchy over them.
The corpus is drawn from computer science papers, and the concept mentions are methods and tasks from across CS.
Scientific concepts pose significant challenges: they often take diverse forms (e.g., class-conditional image
synthesis and categorical image generation) or are ambiguous (e.g., network architecture in AI vs.
systems research).
To build SciCo, we develop a new candidate generation
approach built on three resources: a low-coverage KB ([https://paperswithcode.com/](https://paperswithcode.com/)), a noisy hypernym extractor, and curated
candidates.
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
The text in the dataset is in English.
## Dataset Structure
### Data Instances
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Data Fields
* `flatten_tokens`: a single list of all tokens in the topic
* `flatten_mentions`: array of mentions, each mention is represented by [start, end, cluster_id]
* `tokens`: array of paragraphs
* `doc_ids`: doc_id of each paragraph in `tokens`
* `metadata`: metadata of each doc_id
* `sentences`: sentences boundaries for each paragraph in `tokens` [start, end]
* `mentions`: array of mentions, each mention is represented by [paragraph_id, start, end, cluster_id]
* `relations`: array of binary relations between cluster_ids [parent, child]
* `id`: id of the topic
* `hard_10` and `hard_20` (only in the test set): flags for the 10% or 20% hardest topics, based on Levenshtein similarity.
* `source`: source of this topic: PapersWithCode (pwc), hypernym, or curated.
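The mention and relation encodings above are plain nested lists, so grouping mentions into clusters and indexing the hierarchy is straightforward. A minimal sketch on a hypothetical topic, assuming only the field layouts described above:

```python
from collections import defaultdict

# Hypothetical topic using the documented layouts:
# mentions are [paragraph_id, start, end, cluster_id],
# relations are [parent, child] pairs of cluster_ids.
topic = {
    "mentions": [[0, 2, 4, 1], [1, 0, 3, 1], [2, 5, 7, 2]],
    "relations": [[1, 2]],
}

# Group mention spans by cluster_id.
clusters = defaultdict(list)
for paragraph_id, start, end, cluster_id in topic["mentions"]:
    clusters[cluster_id].append((paragraph_id, start, end))

# Index the binary relations as a parent -> children map.
children = defaultdict(list)
for parent, child in topic["relations"]:
    children[parent].append(child)
```

The same loops work unchanged on a real SciCo topic dict.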
### Data Splits
| |Train |Validation|Test |
|--------------------|-----:|---------:|----:|
|Topic | 221| 100| 200|
|Documents | 9013| 4120| 8237|
|Mentions | 10925| 4874|10424|
|Clusters | 4080| 1867| 3711|
|Relations | 2514| 1747| 2379|
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
## Additional Information
### Dataset Curators
This dataset was initially created by Arie Cattan, Sophie Johnson, Daniel Weld, Ido Dagan, Iz Beltagy, Doug Downey, and Tom Hope, while Arie was an intern at the Allen Institute for Artificial Intelligence.
### Licensing Information
This dataset is distributed under [Apache License 2.0](http://www.apache.org/licenses/LICENSE-2.0).
### Citation Information
```
@inproceedings{
cattan2021scico,
title={SciCo: Hierarchical Cross-Document Coreference for Scientific Concepts},
author={Arie Cattan and Sophie Johnson and Daniel S. Weld and Ido Dagan and Iz Beltagy and Doug Downey and Tom Hope},
booktitle={3rd Conference on Automated Knowledge Base Construction},
year={2021},
url={https://openreview.net/forum?id=OFLbgUP04nC}
}
```
### Contributions
Thanks to [@ariecattan](https://github.com/ariecattan) for adding this dataset.
| [
-0.510222315788269,
-0.3759150803089142,
0.2527964115142822,
0.18180710077285767,
-0.20533514022827148,
0.03328118473291397,
-0.3120843172073364,
-0.4116086959838867,
0.6012455821037292,
0.34228813648223877,
-0.6175888776779175,
-0.9401429891586304,
-0.527744710445404,
0.2037549614906311,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
cameronbc/synthtiger | cameronbc | 2022-02-08T04:24:39Z | 155 | 0 | null | [
"region:us"
] | 2022-02-08T04:24:39Z | 2022-03-02T23:29:22.000Z | 2022-03-02T23:29:22 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622263669967651,
0.43461522459983826,
-0.52829909324646,
0.7012971639633179,
0.7915719747543335,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104475975036621,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
roszcz/maestro-v1-sustain | roszcz | 2023-04-23T13:35:49Z | 155 | 0 | null | [
"region:us"
] | 2023-04-23T13:35:49Z | 2023-02-28T20:38:48.000Z | 2023-02-28T20:38:48 | ---
dataset_info:
features:
- name: notes
struct:
- name: duration
sequence: float64
- name: end
sequence: float64
- name: pitch
sequence: int64
- name: start
sequence: float64
- name: velocity
sequence: int64
- name: composer
dtype: string
- name: title
dtype: string
- name: year
dtype: int64
- name: midi_filename
dtype: string
splits:
- name: test
num_bytes: 29686362
num_examples: 177
- name: validation
num_bytes: 25599834
num_examples: 137
- name: train
num_bytes: 226534277
num_examples: 962
download_size: 87287914
dataset_size: 281820473
---
# Dataset Card for "maestro-v1-sustain"
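The `notes` struct above stores parallel sequences, one entry per note. A minimal sketch with hypothetical values, assuming `duration = end - start` (which matches the field names but is not stated in the card):

```python
# One hypothetical "notes" record: parallel sequences per note attribute.
notes = {
    "pitch":    [60, 64, 67],
    "velocity": [80, 72, 96],
    "start":    [0.0, 0.5, 1.0],
    "end":      [0.4, 0.9, 1.6],
}

# If duration is end - start, it can be recomputed from the other fields.
notes["duration"] = [e - s for s, e in zip(notes["start"], notes["end"])]
```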
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.6881749033927917,
-0.3222571909427643,
0.04247104004025459,
0.4032193124294281,
-0.14913849532604218,
0.022668378427624702,
0.33783823251724243,
0.0025183691177517176,
1.0770422220230103,
0.523743212223053,
-1.1745346784591675,
-0.5328150987625122,
-0.4019303023815155,
-0.26457196474075... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
iamtarun/code_instructions_120k_alpaca | iamtarun | 2023-07-27T15:49:10Z | 155 | 4 | null | [
"task_categories:text-generation",
"task_categories:question-answering",
"task_categories:text2text-generation",
"size_categories:100K<n<1M",
"code",
"region:us"
] | 2023-07-27T15:49:10Z | 2023-07-23T17:34:03.000Z | 2023-07-23T17:34:03 | ---
dataset_info:
features:
- name: instruction
dtype: string
- name: input
dtype: string
- name: output
dtype: string
- name: prompt
dtype: string
splits:
- name: train
num_bytes: 154022159
num_examples: 121959
download_size: 72306808
dataset_size: 154022159
task_categories:
- text-generation
- question-answering
- text2text-generation
tags:
- code
size_categories:
- 100K<n<1M
---
# Dataset Card for code_instructions_120k_alpaca
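The fields above pair each `instruction`/`input`/`output` triple with a ready-made `prompt` column. A minimal sketch of one common alpaca-style template; the exact wording of this dataset's `prompt` column is an assumption and may differ:

```python
def build_prompt(instruction: str, inp: str = "") -> str:
    """Compose an alpaca-style prompt from instruction and optional input."""
    header = "Below is an instruction that describes a task"
    if inp:
        header += ", paired with an input that provides further context"
    header += ". Write a response that appropriately completes the request.\n\n"
    prompt = header + f"### Instruction:\n{instruction}\n\n"
    if inp:
        prompt += f"### Input:\n{inp}\n\n"
    return prompt + "### Response:\n"
```

To check against the actual data, compare `build_prompt(row["instruction"], row["input"])` with the row's `prompt` column.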
This dataset is taken from [sahil2801/code_instructions_120k](https://huggingface.co/datasets/sahil2801/code_instructions_120k), which adds a prompt column in alpaca style. Refer to the original source [here](https://huggingface.co/datasets/sahil2801/code_instructions_120k). | [
-0.6758257746696472,
-0.45922335982322693,
0.15038903057575226,
0.7006983160972595,
-0.5426548719406128,
-0.35432618856430054,
0.28741300106048584,
-0.049930326640605927,
0.7525915503501892,
0.8123900890350342,
-1.1482207775115967,
-0.7653018832206726,
-0.5823404788970947,
0.07265733927488... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
izumi-lab/wikinews-ja-20230728 | izumi-lab | 2023-07-29T03:06:48Z | 155 | 4 | null | [
"language:ja",
"license:cc-by-2.5",
"region:us"
] | 2023-07-29T03:06:48Z | 2023-07-28T07:01:06.000Z | 2023-07-28T07:01:06 | ---
dataset_info:
features:
- name: text
dtype: string
- name: title
dtype: string
- name: url
dtype: string
splits:
- name: train
num_bytes: 7998861
num_examples: 4283
download_size: 4086208
dataset_size: 7998861
license: cc-by-2.5
language:
- ja
---
# Dataset Card for "wikinews-ja-20230728"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.605431079864502,
-0.16688163578510284,
0.12035010755062103,
0.22781294584274292,
-0.19310951232910156,
-0.12833724915981293,
0.2810050845146179,
-0.12967157363891602,
1.0513603687286377,
0.4868086576461792,
-0.9865174889564514,
-0.5310803651809692,
-0.6078165173530579,
-0.21215370297431... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
JasiekKaczmarczyk/pianofor-ai-sustain-masked | JasiekKaczmarczyk | 2023-10-02T11:08:48Z | 155 | 0 | null | [
"region:us"
] | 2023-10-02T11:08:48Z | 2023-10-02T11:07:50.000Z | 2023-10-02T11:07:50 | ---
dataset_info:
features:
- name: midi_filename
dtype: string
- name: source
dtype: string
- name: pitch
sequence: int16
length: 128
- name: dstart
sequence: float32
length: 128
- name: duration
sequence: float32
length: 128
- name: velocity
sequence: int16
length: 128
- name: masking_spaces
struct:
- name: <Random Mask>
sequence: bool
length: 128
- name: <LH Mask>
sequence: bool
length: 128
- name: <RH Mask>
sequence: bool
length: 128
- name: <Harmonic Root Mask>
sequence: bool
length: 128
- name: <Harmonic Outliers Mask>
sequence: bool
length: 128
splits:
- name: train
num_bytes: 348650644
num_examples: 184108
- name: validation
num_bytes: 45493168
num_examples: 24183
- name: test
num_bytes: 38444406
num_examples: 20548
download_size: 198351270
dataset_size: 432588218
---
# Dataset Card for "pianofor-ai-sustain-masked"
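Each mask in `masking_spaces` above is a boolean sequence parallel to the note sequences (length 128 in the real data; shortened here). A minimal sketch under one plausible, undocumented reading: a `True` entry marks a note as masked out:

```python
# Hypothetical parallel sequences and a slice of one mask (e.g. "<Random Mask>").
pitch = [60, 62, 64, 65]
mask  = [True, False, True, False]

# Keep only the notes the mask leaves visible.
visible_pitch = [p for p, m in zip(pitch, mask) if not m]  # [62, 65]
```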
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.6005434989929199,
-0.31891772150993347,
0.17688558995723724,
0.38723987340927124,
-0.15393726527690887,
0.1983565092086792,
-0.06406402587890625,
-0.1586962342262268,
0.8417283296585083,
0.6951951384544373,
-1.0864086151123047,
-0.8776437640190125,
-0.5249598622322083,
-0.18172971904277... | null | null | null | null | null | null | null | null | null | null | null | null | null |