id stringlengths 2 115 | author stringlengths 2 42 ⌀ | last_modified timestamp[us, tz=UTC] | downloads int64 0 8.87M | likes int64 0 3.84k | paperswithcode_id stringlengths 2 45 ⌀ | tags list | lastModified timestamp[us, tz=UTC] | createdAt stringlengths 24 24 | key stringclasses 1 value | created timestamp[us] | card stringlengths 1 1.01M | embedding list | library_name stringclasses 21 values | pipeline_tag stringclasses 27 values | mask_token null | card_data null | widget_data null | model_index null | config null | transformers_info null | spaces null | safetensors null | transformersInfo null | modelId stringlengths 5 111 ⌀ | embeddings list |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
mstz/spambase | mstz | 2023-04-16T18:02:22Z | 29 | 0 | null | [
"task_categories:tabular-classification",
"size_categories:1K<n<10K",
"language:en",
"license:cc",
"spambase",
"tabular_classification",
"binary_classification",
"UCI",
"region:us"
] | 2023-04-16T18:02:22Z | 2023-04-07T07:37:26.000Z | 2023-04-07T07:37:26 | ---
language:
- en
tags:
- spambase
- tabular_classification
- binary_classification
- UCI
pretty_name: Spambase
size_categories:
- 1K<n<10K
task_categories:
- tabular-classification
configs:
- spambase
license: cc
---
# Spambase
The [Spambase dataset](https://archive.ics.uci.edu/ml/datasets/Spambase) from the [UCI ML repository](https://archive.ics.uci.edu/ml/datasets).
Is a given email spam?
# Configurations and tasks
| **Configuration** | **Task** | **Description** |
|-------------------|---------------------------|------------------|
| spambase | Binary classification | Is the email spam? |
# Usage
```python
from datasets import load_dataset
dataset = load_dataset("mstz/spambase")["train"]
``` | [
-0.4790157079696655,
-0.6048470735549927,
-0.32316988706588745,
0.41790640354156494,
-0.10998819768428802,
-0.35068756341934204,
0.04437723383307457,
0.13801932334899902,
0.265472948551178,
1.0254075527191162,
-0.6960486769676208,
-0.7033233046531677,
-1.1817295551300049,
0.352406263351440... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Svetlana0303/1500_aug_ds | Svetlana0303 | 2023-04-10T15:21:03Z | 29 | 0 | null | [
"region:us"
] | 2023-04-10T15:21:03Z | 2023-04-10T15:18:27.000Z | 2023-04-10T15:18:27 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622263669967651,
0.43461522459983826,
-0.52829909324646,
0.7012971639633179,
0.7915719747543335,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104475975036621,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
mstz/car | mstz | 2023-04-16T16:55:11Z | 29 | 0 | null | [
"task_categories:tabular-classification",
"size_categories:n<1K",
"language:en",
"license:cc",
"car",
"tabular_classification",
"binary_classification",
"UCI",
"region:us"
] | 2023-04-16T16:55:11Z | 2023-04-12T08:20:52.000Z | 2023-04-12T08:20:52 | ---
language:
- en
tags:
- car
- tabular_classification
- binary_classification
- UCI
pretty_name: Car evaluation
size_categories:
- n<1K
task_categories:
- tabular-classification
configs:
- car
license: cc
---
# Car
The [Car dataset](https://archive-beta.ics.uci.edu/dataset/19/car+evaluation) from the [UCI repository](https://archive-beta.ics.uci.edu).
Classify the acceptability level of a car for resale.
# Configurations and tasks
| **Configuration** | **Task** | **Description** |
|-------------------|---------------------------|-------------------------|
| car | Multiclass classification | What is the acceptability level of the car?|
| car_binary | Binary classification | Is the car acceptable?|
# Usage
```python
from datasets import load_dataset
dataset = load_dataset("mstz/car", "car_binary")["train"]
``` | [
-0.5202088356018066,
-0.07646067440509796,
0.039453018456697464,
0.27648118138313293,
-0.005158828571438789,
-0.14006268978118896,
-0.3460327982902527,
-0.4316987991333008,
-0.24191798269748688,
-0.01869724504649639,
-0.42764443159103394,
-0.5417895913124084,
-0.34879234433174133,
0.102089... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
BlackKakapo/multitask-ro | BlackKakapo | 2023-09-21T14:35:01Z | 29 | 3 | null | [
"task_categories:text2text-generation",
"task_categories:question-answering",
"task_categories:sentence-similarity",
"task_categories:text-classification",
"task_categories:translation",
"task_categories:summarization",
"multilinguality:monolingual",
"size_categories:1M<n<5M",
"language:ro",
"lice... | 2023-09-21T14:35:01Z | 2023-04-16T10:49:43.000Z | 2023-04-16T10:49:43 | ---
license: apache-2.0
multilinguality: monolingual
size_categories: 1M<n<5M
language: ro
task_categories:
- text2text-generation
- question-answering
- sentence-similarity
- text-classification
- translation
- summarization
---
## Dataset
### Train
| Dataset | Link | Rows | Task-specific prefix |
| ------ | ------ | ------ | ------ |
| **Paraphrase** | [Paraphrase](https://huggingface.co/datasets/BlackKakapo/paraphrase-ro) | 131951 | *paraphrase:* **string** |
| **Grammar** | [Grammar](https://huggingface.co/datasets/BlackKakapo/grammar-ro) | 1686054 | *grammar:* **string** |
| **Synonyms** | - | 14085 | *synonyms:* **word** |
| **Translate** | - | 999725 | *translate Romanian to English:* **string** |
| **Summarize** | [Summarize](https://huggingface.co/datasets/readerbench/ro-text-summarization) | 71999 | *summarize:* **string** |
| **Sentiment analysis** | [Sentiment analysis](https://huggingface.co/datasets/ro_sent) | 36498 | *sentiment analysis:* **string** |
| **STS** | [STS](https://huggingface.co/datasets/ro_sts) | 7499 | *sts:* **string** |
| **Offense analysis** | [Offense analysis](https://huggingface.co/datasets/readerbench/ro-fb-offense) | 3199 | *offense analysis:* **string** |
| **Gsm8k-ro** | [Gsm8k-ro](https://huggingface.co/datasets/BlackKakapo/gsm8k-ro) | 7474 | **string** |
| **Qasc-ro** | [Qasc-ro](https://huggingface.co/datasets/BlackKakapo/qasc-ro) | 8134 | **string** |
| **Recipes-ro** | [Recipes-ro](https://huggingface.co/datasets/BlackKakapo/recipes-ro) | 818 | 1. *Spune-mi reteta pentru* **string** 2. *Cum as putea face* **string** 3. *Spune-mi te rog cum as putea face* **string** |
| **Qaworld-ro** | [Qaworld-ro](https://huggingface.co/datasets/BlackKakapo/qaworld-ro) | 722659 | **string** |
| **News-ro** | - | 102369 | 1. *Genereaza o știre cu titlul dat si incepe-o astfel* **string** 2. *Scrie o știre cu denumirea asta si cu acest inceput* **string**|
| **Newsagro-ro** | - | 568 | 1. *Genereaza o știre cu titlul dat si incepe-o astfel* **string** 2. *Scrie o știre cu denumirea asta si cu acest inceput* **string**|
| **Instruction-dataset-ro** | [Instruction-dataset-ro](https://huggingface.co/datasets/BlackKakapo/instruction-dataset-ro) | 326 | **string**|
| **TOTAL** | [Multitask-ro](https://huggingface.co/datasets/BlackKakapo/multitask-ro) | **~3.792.698** | |
### Eval
| Dataset | Link | Rows | Task-specific prefix |
| ------ | ------ | ------ | ------ |
| **Paraphrase** | [Paraphrase](https://huggingface.co/datasets/BlackKakapo/paraphrase-ro) | 3540 | *paraphrase:* **string** |
| **Grammar** | [Grammar](https://huggingface.co/datasets/BlackKakapo/grammar-ro) | 200 | *grammar:* **string** |
| **Synonyms** | - | 318 | *synonyms:* **word** |
| **Translate** | [Translate](https://huggingface.co/datasets/opus100/viewer/en-ro/train) | 3271 | *translate Romanian to English:* **string** |
| **Summarize** | [Summarize](https://huggingface.co/datasets/readerbench/ro-text-summarization) | 449 | *summarize:* **string** |
| **Sentiment analysis** | [Sentiment analysis](https://huggingface.co/datasets/ro_sent) | 789 | *sentiment analysis:* **string** |
| **STS** | [STS](https://huggingface.co/datasets/ro_sts) | 1119 | *sts:* **string** |
| **Offense analysis** | [Offense analysis](https://huggingface.co/datasets/readerbench/ro-fb-offense) | 1251 | *offense analysis:* **string** |
| **Gsm8k-ro** | [Gsm8k-ro](https://huggingface.co/datasets/BlackKakapo/gsm8k-ro) | 1319 | **string** |
| **Qasc-ro** | [Qasc-ro](https://huggingface.co/datasets/BlackKakapo/qasc-ro) | 926 | **string** |
| **Recipes-ro** | [Recipes-ro](https://huggingface.co/datasets/BlackKakapo/recipes-ro) | 63 | 1. *Spune-mi reteta pentru* **string** 2. *Cum as putea face* **string** 3. *Spune-mi te rog cum as putea face* **string** |
| **Qaworld-ro** | [Qaworld-ro](https://huggingface.co/datasets/BlackKakapo/qaworld-ro) | 3350 | **string** |
| **News-ro** | - | 140 | 1. *Genereaza o știre cu titlul dat si incepe-o astfel* **string** 2. *Scrie o știre cu denumirea asta si cu acest inceput* **string**|
| **Newsagro-ro** | - | 112 | 1. *Genereaza o știre cu titlul dat si incepe-o astfel* **string** 2. *Scrie o știre cu denumirea asta si cu acest inceput* **string**|
| **TOTAL** | [Multitask-ro](https://huggingface.co/datasets/BlackKakapo/multitask-ro) | **16847** | |
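As a rough usage sketch, the dataset can be loaded with the `datasets` library and the task-specific prefixes above prepended to raw inputs (the exact column names are not documented here, so treat the prefix application below as a hypothetical illustration):
```python
from datasets import load_dataset

dataset = load_dataset("BlackKakapo/multitask-ro")
print(dataset)  # inspect splits and column names

# Hypothetical illustration of applying a task-specific prefix to a raw input.
prompt = "paraphrase: " + "Acesta este un exemplu de propozitie."
print(prompt)
```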
| [
-0.2339664101600647,
-0.723119854927063,
0.23890849947929382,
0.6548882722854614,
-0.2851276695728302,
-0.13601259887218475,
-0.2778386175632477,
-0.5364804267883301,
1.0699700117111206,
0.32751259207725525,
-0.7079573273658752,
-0.6037322282791138,
-0.9583609700202942,
0.3341943621635437,... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
taka-yayoi/databricks-dolly-15k-ja | taka-yayoi | 2023-04-17T09:18:13Z | 29 | 2 | null | [
"license:cc-by-sa-3.0",
"region:us"
] | 2023-04-17T09:18:13Z | 2023-04-17T08:58:32.000Z | 2023-04-17T08:58:32 | ---
license: cc-by-sa-3.0
---
This builds on the dataset below: the column names were changed and the data converted to JSONL so that it can be used with Dolly's training script.
https://huggingface.co/datasets/kunishou/databricks-dolly-15k-ja
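A minimal sketch of that conversion (the column mapping below is an assumption; check the source dataset for its actual column names and Dolly's expected schema):
```python
from datasets import load_dataset

ds = load_dataset("kunishou/databricks-dolly-15k-ja", split="train")
# Hypothetical rename to match the column names Dolly's training script expects.
ds = ds.rename_columns({"input": "context", "output": "response"})
ds.to_json("databricks-dolly-15k-ja.jsonl", force_ascii=False)
```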
Dolly
https://github.com/databrickslabs/dolly | [
-0.5419361591339111,
-0.8025200366973877,
0.14305555820465088,
0.39849182963371277,
-0.5992408394813538,
-0.06190847232937813,
0.230460986495018,
-0.26757514476776123,
0.962317705154419,
0.6537597179412842,
-0.8508411645889282,
-0.6632261872291565,
-0.7236664891242981,
0.2531823217868805,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
mhhmm/leetcode-solutions-python | mhhmm | 2023-04-27T06:40:41Z | 29 | 14 | null | [
"license:lgpl",
"region:us"
] | 2023-04-27T06:40:41Z | 2023-04-25T10:48:36.000Z | 2023-04-25T10:48:36 | ---
license: lgpl
---
All credit belongs to https://www.kaggle.com/datasets/erichartford/leetcode-solutions
I collected only the Python solutions:
```
id: <number>
code_with_data:
<
# Slug
# Title
# Difficulty
# Content
Code Answer in Python
# Explanation
>
code_only: < Code Answer in Python >
code_with_problem: <
# Content
Code
>
explanation_only: < Explanation >
```
I'm using this for code generation and code summarization, so the data has the format shown above.
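A minimal loading sketch (the split name is an assumption; the field names match the schema above):
```python
from datasets import load_dataset

ds = load_dataset("mhhmm/leetcode-solutions-python", split="train")
sample = ds[0]
# code_with_problem suits code generation; explanation_only suits summarization targets.
print(sample["code_with_problem"][:500])
print(sample["explanation_only"][:500])
```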
| [
-0.20595185458660126,
-0.28778934478759766,
0.6297279000282288,
0.17675094306468964,
-0.008913857862353325,
-0.0781194269657135,
-0.1537855714559555,
0.39602240920066833,
0.37842857837677,
0.594875156879425,
-0.8149482011795044,
-0.6318271160125732,
-0.20529668033123016,
0.2815355062484741... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
lexlms/lex_files | lexlms | 2023-07-24T11:59:58Z | 29 | 4 | null | [
"task_categories:text-generation",
"task_categories:fill-mask",
"task_ids:language-modeling",
"task_ids:masked-language-modeling",
"annotations_creators:no-annotation",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1M<n<10M",
"source_datasets:extended",
"language:en",
... | 2023-07-24T11:59:58Z | 2023-05-10T11:40:26.000Z | 2023-05-10T11:40:26 | ---
annotations_creators:
- no-annotation
language_creators:
- found
language:
- en
license:
- cc-by-nc-sa-4.0
multilinguality:
- monolingual
size_categories:
- 1M<n<10M
source_datasets:
- extended
task_categories:
- text-generation
- fill-mask
task_ids:
- language-modeling
- masked-language-modeling
pretty_name: LexFiles
tags:
- legal
- law
---
# Dataset Card for "LexFiles"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Dataset Specifications](#supported-tasks-and-leaderboards)
## Dataset Description
- **Homepage:** https://github.com/coastalcph/lexlms
- **Repository:** https://github.com/coastalcph/lexlms
- **Paper:** https://arxiv.org/abs/2305.07507
- **Point of Contact:** [Ilias Chalkidis](mailto:ilias.chalkidis@di.ku.dk)
### Dataset Summary
LeXFiles is a new, diverse English multinational legal corpus that we created, comprising 11 distinct sub-corpora that cover legislation and case law from 6 primarily English-speaking legal systems (EU, CoE, Canada, US, UK, India).
The corpus contains approx. 19 billion tokens. In comparison, the "Pile of Law" corpus released by Henderson et al. (2022) comprises 32 billion tokens in total, where the majority (26/30) of sub-corpora come from the United States of America (USA); hence that corpus as a whole is biased to a significant extent towards the US legal system in general, and the federal or state jurisdiction in particular.
### Dataset Specifications
| Corpus | Corpus alias | Documents | Tokens | Pct. | Sampl. (a=0.5) | Sampl. (a=0.2) |
|-----------------------------------|----------------------|-----------|--------|--------|----------------|----------------|
| EU Legislation | `eu-legislation` | 93.7K | 233.7M | 1.2% | 5.0% | 8.0% |
| EU Court Decisions | `eu-court-cases` | 29.8K | 178.5M | 0.9% | 4.3% | 7.6% |
| ECtHR Decisions | `ecthr-cases` | 12.5K | 78.5M | 0.4% | 2.9% | 6.5% |
| UK Legislation | `uk-legislation` | 52.5K | 143.6M | 0.7% | 3.9% | 7.3% |
| UK Court Decisions | `uk-court-cases` | 47K | 368.4M | 1.9% | 6.2% | 8.8% |
| Indian Court Decisions | `indian-court-cases` | 34.8K | 111.6M | 0.6% | 3.4% | 6.9% |
| Canadian Legislation | `canadian-legislation` | 6K | 33.5M | 0.2% | 1.9% | 5.5% |
| Canadian Court Decisions | `canadian-court-cases` | 11.3K | 33.1M | 0.2% | 1.8% | 5.4% |
| U.S. Court Decisions [1] | `us-court-cases` | 4.6M | 11.4B | 59.2% | 34.7% | 17.5% |
| U.S. Legislation | `us-legislation` | 518 | 1.4B | 7.4% | 12.3% | 11.5% |
| U.S. Contracts | `us-contracts` | 622K | 5.3B | 27.3% | 23.6% | 15.0% |
| Total | `lexlms/lex_files` | 5.8M | 18.8B | 100% | 100% | 100% |
[1] We consider only U.S. Court Decisions from 1965 onwards (cf. post Civil Rights Act), as a hard threshold for cases relying on severely outdated and in many cases harmful law standards. The rest of the corpora include more recent documents.
[2] Sampling (Sampl.) ratios are computed following the exponential sampling introduced by Lample et al. (2019).
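For reference, that exponential sampling rescales each sub-corpus probability `p_i` as `q_i = p_i**a / sum_j(p_j**a)`; a small sketch reproducing the sampling columns from the token percentages (Pct.) above:
```python
# Exponential sampling ratios: q_i = p_i**a / sum_j(p_j**a),
# applied to the per-corpus token percentages from the table above.
pct = {
    "eu-legislation": 1.2, "eu-court-cases": 0.9, "ecthr-cases": 0.4,
    "uk-legislation": 0.7, "uk-court-cases": 1.9, "indian-court-cases": 0.6,
    "canadian-legislation": 0.2, "canadian-court-cases": 0.2,
    "us-court-cases": 59.2, "us-legislation": 7.4, "us-contracts": 27.3,
}

def sampling_ratios(percentages, a):
    weights = {k: v ** a for k, v in percentages.items()}
    total = sum(weights.values())
    return {k: w / total for k, w in weights.items()}

ratios = sampling_ratios(pct, a=0.5)
print(round(ratios["us-court-cases"], 3))  # ~0.347, matching the a=0.5 column
```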
The following additional corpora were not considered for pre-training, since they do not represent factual legal knowledge.
| Corpus | Corpus alias | Documents | Tokens |
|----------------------------------------|------------------------|-----------|--------|
| Legal web pages from C4 | `legal-c4` | 284K | 340M |
### Usage
Load a specific sub-corpus, given the corpus alias, as presented above.
```python
from datasets import load_dataset
dataset = load_dataset('lexlms/lex_files', name='us-court-cases')
```
### Citation
[*Ilias Chalkidis\*, Nicolas Garneau\*, Catalina E.C. Goanta, Daniel Martin Katz, and Anders Søgaard.*
*LeXFiles and LegalLAMA: Facilitating English Multinational Legal Language Model Development.*
*2023. In the Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics. Toronto, Canada.*](https://aclanthology.org/2023.acl-long.865/)
```
@inproceedings{chalkidis-etal-2023-lexfiles,
title = "{L}e{XF}iles and {L}egal{LAMA}: Facilitating {E}nglish Multinational Legal Language Model Development",
author = "Chalkidis, Ilias and
Garneau, Nicolas and
Goanta, Catalina and
Katz, Daniel and
S{\o}gaard, Anders",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-long.865",
pages = "15513--15535",
}
``` | [
-0.3495611250400543,
-0.246987983584404,
0.5294406414031982,
0.06274875998497009,
-0.4187997877597809,
0.23004230856895447,
-0.18645349144935608,
-0.34938061237335205,
0.38880398869514465,
0.4223328232765198,
-0.2891414761543274,
-0.9929324984550476,
-0.6492486000061035,
0.1083218827843666... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
skrishna/CSQA_preprocessed | skrishna | 2023-05-10T18:01:33Z | 29 | 1 | null | [
"region:us"
] | 2023-05-10T18:01:33Z | 2023-05-10T14:31:46.000Z | 2023-05-10T14:31:46 | ---
dataset_info:
features:
- name: id
dtype: string
- name: question
dtype: string
- name: question_concept
dtype: string
- name: choices
sequence:
- name: label
dtype: string
- name: text
dtype: string
- name: answerKey
dtype: string
- name: inputs
dtype: string
- name: targets
dtype: string
splits:
- name: train
num_bytes: 3875948
num_examples: 9741
- name: validation
num_bytes: 480334
num_examples: 1221
- name: test
num_bytes: 452620
num_examples: 1140
download_size: 2706083
dataset_size: 4808902
---
# Dataset Card for "CSQA_preprocessed"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.6295959949493408,
-0.19271503388881683,
0.26481959223747253,
0.3435419201850891,
-0.11561852693557739,
0.25438785552978516,
0.1992054432630539,
0.032502856105566025,
0.7004859447479248,
0.5599861145019531,
-0.8257720470428467,
-0.8544546365737915,
-0.43187159299850464,
-0.28207346796989... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
yangwang825/klue-ynat | yangwang825 | 2023-05-19T02:07:06Z | 29 | 0 | null | [
"task_categories:text-classification",
"language:ko",
"region:us"
] | 2023-05-19T02:07:06Z | 2023-05-18T00:29:06.000Z | 2023-05-18T00:29:06 | ---
task_categories:
- text-classification
language:
- ko
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': IT과학
'1': 경제
'2': 사회
'3': 생활문화
'4': 세계
'5': 스포츠
'6': 정치
--- | [
-0.1285335123538971,
-0.1861683875322342,
0.6529128551483154,
0.49436232447624207,
-0.19319400191307068,
0.23607441782951355,
0.36072009801864624,
0.05056373029947281,
0.5793656706809998,
0.7400146722793579,
-0.650810182094574,
-0.23784008622169495,
-0.7102247476577759,
-0.0478255338966846... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
clarin-knext/hotpotqa-pl-qrels | clarin-knext | 2023-06-07T08:13:10Z | 29 | 0 | null | [
"language:pl",
"arxiv:2305.19840",
"region:us"
] | 2023-06-07T08:13:10Z | 2023-06-06T22:25:23.000Z | 2023-06-06T22:25:23 | ---
language:
- pl
---
Part of **BEIR-PL: Zero Shot Information Retrieval Benchmark for the Polish Language**.
Link to arxiv: https://arxiv.org/pdf/2305.19840.pdf
Contact: konrad.wojtasik@pwr.edu.pl | [
-0.2209920436143875,
-0.9029766917228699,
0.5094642043113708,
0.2354191392660141,
-0.318521112203598,
-0.1491902619600296,
-0.16673962771892548,
-0.4962919354438782,
-0.01896025240421295,
0.41122618317604065,
-0.5503097772598267,
-0.6913566589355469,
-0.4166175127029419,
-0.048304717987775... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
ai4privacy/pii-masking-43k | ai4privacy | 2023-06-28T17:45:58Z | 29 | 10 | null | [
"size_categories:10K<n<100K",
"language:en",
"legal",
"business",
"psychology",
"privacy",
"doi:10.57967/hf/0824",
"region:us"
] | 2023-06-28T17:45:58Z | 2023-06-28T16:44:41.000Z | 2023-06-28T16:44:41 | ---
language:
- en
tags:
- legal
- business
- psychology
- privacy
size_categories:
- 10K<n<100K
---
# Purpose and Features
The purpose of the model and dataset is to remove personally identifiable information (PII) from text, especially in the context of AI assistants and LLMs.
The model is a fine-tuned version of DistilBERT, a smaller and faster version of BERT. It was adapted for the task of token classification, based on what is, to our knowledge, the largest open-source PII masking dataset, which we are releasing simultaneously. The model size is 62 million parameters. The original encoding of the parameters yields a model size of 268 MB, which is compressed to 43 MB after parameter quantization. The models are available in PyTorch, TensorFlow, and TensorFlow.js.
The dataset is composed of ~43,000 observations. Each row starts with a natural language sentence that includes placeholders for PII and could plausibly be written to an AI assistant. The placeholders are then filled in with mocked personal information and tokenized with the BERT tokenizer. We label the tokens that correspond to PII, serving as the ground truth to train our model.
The dataset covers a range of contexts in which PII can appear. The sentences span 54 sensitive data types (~111 token classes), targeting 125 discussion subjects / use cases split across business, psychology and legal fields, and 5 interaction styles (e.g. casual conversation vs formal document).
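To get a feel for the schema, a minimal loading sketch (split and column names are assumptions; inspect the loaded object for the actual layout):
```python
from datasets import load_dataset

ds = load_dataset("ai4privacy/pii-masking-43k")
print(ds)                # available splits and column names
split = next(iter(ds))   # e.g. "train"
print(ds[split][0])      # one masked-text example with its token labels
```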
Key facts:
- Currently 5.6m tokens with 43k PII examples.
- Scaling to 100k examples
- Human-in-the-loop validated
- Synthetic data generated using proprietary algorithms
- Adapted from DistilBertForTokenClassification
- Framework PyTorch
- 8 bit quantization
# Performance evaluation
| Test Precision | Test Recall | Test Accuracy |
|:-:|:-:|:-:|
| 0.998636 | 0.998945 | 0.994621 |
Training/Test Set split:
- 4300 Testing Examples (10%)
- 38700 Train Examples
# Community Engagement:
Newsletter & updates: www.Ai4privacy.com
- Looking for ML engineers, developers, beta-testers, and human-in-the-loop validators (all languages)
- Integrations with already existing open source solutions
# Roadmap and Future Development
- Multilingual
- Extended integrations
- Continuously increase the training set
- Further optimisation of the model to reduce size and increase generalisability
- Next released major update is planned for the 14th of July (subscribe to newsletter for updates)
# Use Cases and Applications
**Chatbots**: Incorporating a PII masking model into chatbot systems can ensure the privacy and security of user conversations by automatically redacting sensitive information such as names, addresses, phone numbers, and email addresses.
**Customer Support Systems**: When interacting with customers through support tickets or live chats, masking PII can help protect sensitive customer data, enabling support agents to handle inquiries without the risk of exposing personal information.
**Email Filtering**: Email providers can utilize a PII masking model to automatically detect and redact PII from incoming and outgoing emails, reducing the chances of accidental disclosure of sensitive information.
**Data Anonymization**: Organizations dealing with large datasets containing PII, such as medical or financial records, can leverage a PII masking model to anonymize the data before sharing it for research, analysis, or collaboration purposes.
**Social Media Platforms**: Integrating PII masking capabilities into social media platforms can help users protect their personal information from unauthorized access, ensuring a safer online environment.
**Content Moderation**: PII masking can assist content moderation systems in automatically detecting and blurring or redacting sensitive information in user-generated content, preventing the accidental sharing of personal details.
**Online Forms**: Web applications that collect user data through online forms, such as registration forms or surveys, can employ a PII masking model to anonymize or mask the collected information in real-time, enhancing privacy and data protection.
**Collaborative Document Editing**: Collaboration platforms and document editing tools can use a PII masking model to automatically mask or redact sensitive information when multiple users are working on shared documents.
**Research and Data Sharing**: Researchers and institutions can leverage a PII masking model to ensure privacy and confidentiality when sharing datasets for collaboration, analysis, or publication purposes, reducing the risk of data breaches or identity theft.
**Content Generation**: Content generation systems, such as article generators or language models, can benefit from PII masking to automatically mask or generate fictional PII when creating sample texts or examples, safeguarding the privacy of individuals.
(...and whatever else your creative mind can think of)
# Support and Maintenance
AI4Privacy is a project affiliated with [AISuisse SA](https://www.aisuisse.com/). | [
-0.6150047779083252,
-0.8268652558326721,
0.14656437933444977,
0.2836979925632477,
-0.03939303010702133,
0.0936751738190651,
0.016707658767700195,
-0.7855074405670166,
-0.050768740475177765,
0.5205506682395935,
-0.40411219000816345,
-0.45729881525039673,
-0.44089019298553467,
0.01333014667... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
wykonos/movies | wykonos | 2023-07-01T22:09:00Z | 29 | 1 | null | [
"region:us"
] | 2023-07-01T22:09:00Z | 2023-07-01T22:03:24.000Z | 2023-07-01T22:03:24 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622263669967651,
0.43461522459983826,
-0.52829909324646,
0.7012971639633179,
0.7915719747543335,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104475975036621,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
carbon225/vndb_img | carbon225 | 2023-07-04T14:46:14Z | 29 | 0 | null | [
"task_categories:image-classification",
"size_categories:100K<n<1M",
"license:odbl",
"art",
"not-for-all-audiences",
"anime",
"visual-novel",
"nsfw",
"vndb",
"region:us"
] | 2023-07-04T14:46:14Z | 2023-07-04T14:12:10.000Z | 2023-07-04T14:12:10 | ---
license: odbl
task_categories:
- image-classification
tags:
- art
- not-for-all-audiences
- anime
- visual-novel
- nsfw
- vndb
size_categories:
- 100K<n<1M
---
# Dataset Card for VNDB IMG
## Dataset Description
This is a 🤗 Datasets loader for the [vndb.org](https://vndb.org) image database dump.
It contains anime-style images flagged by users according to these categories:
* sexual content: safe/suggestive/explicit
* violence: tame/violent/brutal
## Loading Instructions
For licensing and "moral" reasons, the database dump has to be downloaded manually from <https://vndb.org/d14>.
Download the "Near-complete database" `vndb-db-latest.tar.zst` file.
Use `rsync` to download the 'Images' collection.
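The exact rsync endpoint is documented on <https://vndb.org/d14>; at the time of writing the command looked roughly like the following (treat the URL as an assumption and verify it on that page):
```
rsync -rtv rsync://dl.vndb.org/vndb-img/ vndb-img/
```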
Create the following directory structure:
```
my/dataset/path
├── db
│ └── vndb-db-latest.tar.zst
└── vndb-img # this is the directory you downloaded with rsync
├── ch
├── cv
├── sf
├── st
└── ...
```
Inside `my/dataset/path/db` run
```
zstd -d vndb-db-latest.tar.zst
```
and
```
tar -xf vndb-db-latest.tar
```
The final directory structure should look like this:
```
my/dataset/path
├── db
│ ├── vndb-db-latest.tar
│ ├── vndb-db-latest.tar.zst
│ ├── db
│ └── ...
└── vndb-img
├── ch
├── cv
├── sf
├── st
└── ...
```
Finally, load the dataset
```python
datasets.load_dataset('carbon225/vndb_img', data_dir='my/dataset/path')
```
## Dataset Structure
The following fields are provided:
```python
{
'index': datasets.Value('int32'),
'id': datasets.Value('string'),
'width': datasets.Value('int32'),
'height': datasets.Value('int32'),
'c_votecount': datasets.Value('int32'),
'c_sexual_avg': datasets.Value('int32'),
'c_sexual_stddev': datasets.Value('int32'),
'c_violence_avg': datasets.Value('int32'),
'c_violence_stddev': datasets.Value('int32'),
'c_weight': datasets.Value('int32'),
'type': datasets.ClassLabel(names=['character', 'cover', 'screenshot_full', 'screenshot_thumb']),
'sexual_class': datasets.ClassLabel(names=['safe', 'suggestive', 'explicit']),
'violence_class': datasets.ClassLabel(names=['tame', 'violent', 'brutal']),
'file_name': datasets.Value('string'),
'full_path': datasets.Value('string'),
'image': datasets.Image(),
}
```
## Supported Tasks
With a few modifications the data can be used for:
* image classification of NSFW material
* image generation/super-resolution/...
* ...
## Considerations for Using the Data
The images are ***hardcore***, to say the least. I recommend not looking.
## Licensing Information
Using this dataset requires the user to download data manually from vndb.org.
All information on VNDB is made available under the Open Database License.
Any rights in individual contents of the database are licensed under the Database Contents License.
With the following exceptions:
* Anime data is obtained from the AniDB.net UDP API and is licensed under the Creative Commons Attribution-NonCommercial-ShareAlike 4.0.
* Images, visual novel descriptions and character descriptions are gathered from various online sources and may be subject to separate license conditions. | [
-0.6955639719963074,
-0.5148340463638306,
0.03793124109506607,
0.24189816415309906,
-0.5928028225898743,
-0.10292418301105499,
0.05717706307768822,
-0.3007119297981262,
0.36120182275772095,
0.5796208381652832,
-0.660386860370636,
-0.9068679809570312,
-0.4240460693836212,
0.2349035441875457... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
shahules786/orca-chat | shahules786 | 2023-07-25T06:06:35Z | 29 | 96 | null | [
"license:apache-2.0",
"region:us"
] | 2023-07-25T06:06:35Z | 2023-07-17T11:58:55.000Z | 2023-07-17T11:58:55 | ---
license: apache-2.0
---
## ORCA-Chat
A high-quality explanation-style chat dataset.
The ORCA dataset is cool, but it cannot directly be used to finetune chat models with context lengths above 4k,
because only a trivial number of its samples exceed 4k tokens. It also has a large number of redundant instructions, which
degrade its quality and increase the compute time when finetuning models on it. Enter ORCA-Chat!
This is a cleaned, pruned, and clustered version of ORCA that forms a conversation-style dataset. The process involves removing samples with very high similarity and grouping instructions to form conversations.
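A minimal loading sketch (the split name and schema are assumptions; inspect the loaded object for the actual layout):
```python
from datasets import load_dataset

dataset = load_dataset("shahules786/orca-chat", split="train")
print(dataset[0])  # one clustered, conversation-style sample
```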

## What next?
I will release 16/32k versions for this soon!
## Credits
* This wouldn't be possible without the amazing work of Eric in recreating the ORCA dataset. Check it out:
https://huggingface.co/datasets/ehartford/dolphin
* This dataset was created in association with the Open-Assistant team @jordanclive and @andreaskoepf
## Citations
```
@misc{Orca-Chat,
title = {Orca-chat: A high-quality explanation-style chat dataset.},
author = {Shahul Es},
year = {2023},
publisher = {HuggingFace},
journal = {HuggingFace repository},
howpublished = {\url{https://huggingface.co/datasets/shahules786/orca-chat/}},
}
```
| [
-0.7035435438156128,
-0.7420856356620789,
-0.03488977998495102,
0.17437361180782318,
-0.4291359484195709,
-0.18773186206817627,
-0.12096633017063141,
-0.7353881001472473,
0.4167803227901459,
0.6771065592765808,
-0.6560918092727661,
-0.707419753074646,
-0.14651831984519958,
-0.1869067698717... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
sayakpaul/hf-codegen-v2 | sayakpaul | 2023-08-03T10:28:58Z | 29 | 5 | null | [
"region:us"
] | 2023-08-03T10:28:58Z | 2023-07-20T05:07:42.000Z | 2023-07-20T05:07:42 | ---
dataset_info:
features:
- name: index
dtype: int64
- name: repo_id
dtype: string
- name: file_path
dtype: string
- name: content
dtype: string
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 51358178715
num_examples: 370000
download_size: 11423577734
dataset_size: 51358178715
---
# Dataset Card for "hf-codegen-v2"
Dataset generated with the code from: https://github.com/sayakpaul/hf-codegen. | [
-0.2585661709308624,
-0.18861596286296844,
0.0170417670160532,
0.17586815357208252,
-0.33898043632507324,
0.0741456151008606,
0.13266657292842865,
-0.18368187546730042,
0.4058353304862976,
0.6169943809509277,
-0.7506818175315857,
-0.6381421089172363,
-0.23135948181152344,
-0.14727915823459... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
xzuyn/tulu-uncensored | xzuyn | 2023-07-31T22:24:03Z | 29 | 1 | null | [
"size_categories:100K<n<1M",
"language:en",
"allenai",
"tulu",
"ehartford",
"alpaca",
"arxiv:2306.04751",
"region:us"
] | 2023-07-31T22:24:03Z | 2023-07-31T21:42:53.000Z | 2023-07-31T21:42:53 | ---
language:
- en
tags:
- allenai
- tulu
- ehartford
- alpaca
size_categories:
- 100K<n<1M
---
[How Far Can Camels Go? Exploring the State of Instruction Tuning on Open Resources](https://arxiv.org/abs/2306.04751)
[Original dataset page from ehartford.](https://huggingface.co/datasets/ehartford/open-instruct-uncensored)
348,020 entries. Sourced from `open-instruct-uncensored.jsonl`. Uses only these dataset subsets:
1. Flan V2
2. CoT
3. Dolly
4. OASST1
5. GPT4-Alpaca
6. Code-Alpaca
7. ShareGPT
```
Count of each Dataset:
code_alpaca: 19991
oasst1: 49433
flan_v2: 97519
sharegpt: 46733
dolly: 14624
cot: 73946
gpt4_alpaca: 45774
``` | [
-0.4635540246963501,
-0.4875335395336151,
-0.06994596123695374,
-0.08291677385568619,
-0.14347414672374725,
-0.4223536252975464,
-0.32143598794937134,
-0.43230417370796204,
-0.13107961416244507,
0.6125985980033875,
-0.647749662399292,
-0.6191961169242859,
-0.5084808468818665,
-0.0599214173... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
ahazeemi/opus-it-en-de-new | ahazeemi | 2023-08-01T06:29:33Z | 29 | 0 | null | [
"region:us"
] | 2023-08-01T06:29:33Z | 2023-08-01T06:29:30.000Z | 2023-08-01T06:29:30 | ---
dataset_info:
features:
- name: de
dtype: string
- name: en
dtype: string
splits:
- name: train
num_bytes: 29619721
num_examples: 222927
- name: dev
num_bytes: 278584
num_examples: 2000
- name: test
num_bytes: 258528
num_examples: 2000
download_size: 20800812
dataset_size: 30156833
---
# Dataset Card for "opus-it-en-de-new"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.6326606273651123,
-0.2835407853126526,
0.13178278505802155,
0.2345256209373474,
-0.34790918231010437,
-0.018291573971509933,
0.0978090837597847,
-0.05437905713915825,
0.9402628540992737,
0.5724896788597107,
-0.8092301487922668,
-0.9726370573043823,
-0.5502198934555054,
-0.13988932967185... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
RikoteMaster/isear_for_llama2 | RikoteMaster | 2023-08-03T13:01:30Z | 29 | 0 | null | [
"region:us"
] | 2023-08-03T13:01:30Z | 2023-08-02T14:24:47.000Z | 2023-08-02T14:24:47 | ---
dataset_info:
features:
- name: Text_processed
dtype: string
- name: Emotion
dtype: string
- name: Augmented
dtype: bool
- name: text
dtype: string
splits:
- name: train
num_bytes: 3715314
num_examples: 7499
- name: validation
num_bytes: 645323
num_examples: 1324
- name: test
num_bytes: 854222
num_examples: 1879
download_size: 567800
dataset_size: 5214859
---
# Dataset Card for "isear_for_llama2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.33160459995269775,
-0.1183704063296318,
0.3092819154262543,
0.5827866792678833,
-0.6232660412788391,
0.11762332171201706,
0.3569972813129425,
-0.3607061505317688,
1.0176546573638916,
0.5236312747001648,
-0.7261129021644592,
-0.8545069694519043,
-0.805978000164032,
-0.21858911216259003,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
klaudiaX/common_voice_shqip | klaudiaX | 2023-08-08T23:08:54Z | 29 | 2 | null | [
"region:us"
] | 2023-08-08T23:08:54Z | 2023-08-08T10:19:44.000Z | 2023-08-08T10:19:44 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622263669967651,
0.43461522459983826,
-0.52829909324646,
0.7012971639633179,
0.7915719747543335,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104475975036621,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
FunDialogues/customer-service-robot-support | FunDialogues | 2023-08-28T23:39:18Z | 29 | 0 | null | [
"task_categories:question-answering",
"task_categories:conversational",
"size_categories:n<1K",
"language:en",
"license:apache-2.0",
"fictitious dialogues",
"prototyping",
"customer service",
"region:us"
] | 2023-08-28T23:39:18Z | 2023-08-10T05:41:46.000Z | 2023-08-10T05:41:46 | ---
license: apache-2.0
task_categories:
- question-answering
- conversational
language:
- en
tags:
- fictitious dialogues
- prototyping
- customer service
pretty_name: customer-service-robot-support
size_categories:
- n<1K
---
# fun dialogues
A library of fictitious dialogues that can be used to train language models or augment prompts for prototyping and educational purposes. Fun dialogues currently come in JSON and CSV formats for easy ingestion or conversion to popular data structures. Dialogues span various topics such as sports, retail, academia, healthcare, and more. The library also includes basic tooling for loading dialogues and will include quick chatbot prototyping functionality in the future.
Visit the Project Repo: https://github.com/eduand-alvarez/fun-dialogues/
# This Dialogue
This dialogue is composed of fictitious examples of conversations between a customer encountering problems with a robotic arm and a technical support agent. Check out the example below:
```
"id": 1,
"description": "Robotic arm calibration issue",
"dialogue": "Customer: My robotic arm seems to be misaligned. It's not picking objects accurately. What can I do? Agent: It appears that the arm may need recalibration. Please follow the instructions in the user manual to reset the calibration settings. If the problem persists, feel free to contact us again."
```
# How to Load Dialogues
Loading dialogues can be accomplished using the fun dialogues library or Hugging Face datasets library.
## Load using fun dialogues
1. Install fun dialogues package
`pip install fundialogues`
2. Use the loader utility to load the dataset as a pandas dataframe. Further processing might be required for use.
```
from fundialogues import dialoader
# load as pandas dataframe
robot_support = dialoader("FunDialogues/customer-service-robot-support")
```
## Loading using Hugging Face datasets
1. Install the datasets package
`pip install datasets`
2. Load using datasets
```
from datasets import load_dataset
dataset = load_dataset("FunDialogues/customer-service-robot-support")
```
## How to Contribute
If you want to contribute to this project and make it better, your help is very welcome. Contributing is also a great way to learn more about social coding on Github, new technologies and their ecosystems, and how to make constructive, helpful bug reports, feature requests and the noblest of all contributions: a good, clean pull request.
### Contributing Your Own Dialogue
If you want to contribute to an existing dialogue or add a new dialogue, please open an issue and I will follow up with you ASAP!
### Implementing Patches and Bug Fixes
- Create a personal fork of the project on Github.
- Clone the fork on your local machine. Your remote repo on Github is called origin.
- Add the original repository as a remote called upstream.
- If you created your fork a while ago be sure to pull upstream changes into your local repository.
- Create a new branch to work on! Branch from develop if it exists, else from master.
- Implement/fix your feature, comment your code.
- Follow the code style of the project, including indentation.
- If the component has tests run them!
- Write or adapt tests as needed.
- Add or change the documentation as needed.
- Squash your commits into a single commit with git's interactive rebase. Create a new branch if necessary.
- Push your branch to your fork on Github, the remote origin.
- From your fork open a pull request in the correct branch. Target the project's develop branch if there is one, else go for master!
If the maintainer requests further changes just push them to your branch. The PR will be updated automatically.
Once the pull request is approved and merged you can pull the changes from upstream to your local repo and delete your extra branch(es).
And last but not least: Always write your commit messages in the present tense. Your commit message should describe what the commit, when applied, does to the code – not what you did to the code.
# Disclaimer
The dialogues contained in this repository are provided for experimental purposes only. It is important to note that these dialogues are assumed to be original work by a human and are entirely fictitious, despite the possibility of some examples including factually correct information. The primary intention behind these dialogues is to serve as a tool for language modeling experimentation and should not be used for designing real-world products beyond non-production prototyping.
Please be aware that the utilization of fictitious data in these datasets may increase the likelihood of language model artifacts, such as hallucinations or unrealistic responses. Therefore, it is essential to exercise caution and discretion when employing these datasets for any purpose.
It is crucial to emphasize that none of the scenarios described in the fun dialogues dataset should be relied upon to provide advice or guidance to humans. These scenarios are purely fictitious and are intended solely for demonstration purposes. Any resemblance to real-world situations or individuals is entirely coincidental.
The responsibility for the usage and application of these datasets rests solely with the individual or entity employing them. By accessing and utilizing these dialogues and all contents of the repository, you acknowledge that you have read and understood this disclaimer, and you agree to use them at your own discretion and risk.
| [
-0.18181957304477692,
-0.7555670142173767,
0.2431551218032837,
0.13785013556480408,
-0.21883581578731537,
0.2148299366235733,
-0.11190539598464966,
-0.2672615349292755,
0.3878883421421051,
0.6885974407196045,
-0.8495292067527771,
-0.3974635601043701,
-0.14204582571983337,
-0.10664362460374... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
dim/wikihow_ru | dim | 2023-08-15T12:11:59Z | 29 | 0 | null | [
"license:mit",
"region:us"
] | 2023-08-15T12:11:59Z | 2023-08-15T12:11:35.000Z | 2023-08-15T12:11:35 | ---
license: mit
dataset_info:
features:
- name: INSTRUCTION
dtype: string
- name: RESPONSE
dtype: string
- name: SOURCE
dtype: string
- name: METADATA
dtype: string
splits:
- name: train
num_bytes: 17666785.144215908
num_examples: 2058
download_size: 11421933
dataset_size: 17666785.144215908
---
| [
-0.12853392958641052,
-0.18616779148578644,
0.6529127955436707,
0.49436280131340027,
-0.19319361448287964,
0.23607419431209564,
0.36072003841400146,
0.050563063472509384,
0.579365611076355,
0.7400140762329102,
-0.6508104205131531,
-0.23783954977989197,
-0.7102249264717102,
-0.0478260256350... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
loremipsum3658/pet | loremipsum3658 | 2023-08-24T21:28:06Z | 29 | 0 | null | [
"region:us"
] | 2023-08-24T21:28:06Z | 2023-08-24T21:27:59.000Z | 2023-08-24T21:27:59 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
- split: validation
path: data/validation-*
dataset_info:
features:
- name: fname
dtype: string
- name: raw_text
dtype: string
- name: aviso_previo
dtype: bool
- name: saldo_de_salario
dtype: bool
- name: ferias
dtype: bool
- name: decimo_terceiro
dtype: bool
- name: fgts
dtype: bool
- name: multa_do_477
dtype: bool
- name: multa_do_467
dtype: bool
- name: horas_extras
dtype: bool
- name: intervalo_intrajornada
dtype: bool
- name: intervalo_interjornada
dtype: bool
- name: adicional_noturno
dtype: bool
- name: adicional_de_insalubridade
dtype: bool
- name: adicional_de_periculosidade
dtype: bool
- name: diferencas_salariais_ou_equiparacao_salarial
dtype: bool
- name: dano_moral
dtype: bool
- name: contribuicao_assistencial
dtype: bool
- name: indenizacao_por_lucros_cessantes
dtype: bool
- name: indenizacao_por_dano_emergente
dtype: bool
- name: multa_normativa
dtype: bool
- name: honorarios_advocaticios
dtype: bool
- name: justica_gratuita
dtype: bool
- name: reconhecimento_de_vinculo
dtype: bool
- name: reflexos_das_parcelas_salariais
dtype: bool
- name: reflexos_de_salarios_oficiosos_e_informais
dtype: bool
- name: outros
dtype: bool
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 1654516
num_examples: 1705
- name: test
num_bytes: 351964
num_examples: 366
- name: validation
num_bytes: 332831
num_examples: 366
download_size: 1391885
dataset_size: 2339311
---
# Dataset Card for "pet"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.6890828609466553,
-0.09749585390090942,
0.19682347774505615,
0.2161058783531189,
-0.3096421957015991,
-0.07830463349819183,
0.23363777995109558,
-0.3410279154777527,
0.8700215816497803,
0.535838782787323,
-0.692963719367981,
-0.6317964792251587,
-0.41951102018356323,
-0.0475329905748367... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
dim/linux_man_pages_tldr_summarized | dim | 2023-08-31T19:56:32Z | 29 | 0 | null | [
"region:us"
] | 2023-08-31T19:56:32Z | 2023-08-31T19:51:37.000Z | 2023-08-31T19:51:37 | ---
dataset_info:
features:
- name: Command
dtype: string
- name: Text
dtype: string
- name: Summary
dtype: string
splits:
- name: train
num_bytes: 3006835
num_examples: 481
download_size: 1308915
dataset_size: 3006835
---
# Dataset Card for "linux_man_pages_tldr_summarized"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.6655758619308472,
-0.23219044506549835,
0.3663947880268097,
0.04006648808717728,
-0.3254789113998413,
0.14792101085186005,
0.1061636433005333,
0.13627928495407104,
0.8693626523017883,
0.4464593827724457,
-0.6925654411315918,
-0.8076263666152954,
-0.3853885531425476,
-0.19804133474826813... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
warshakhan/donut_vqa_ISynHMP_all_labels | warshakhan | 2023-09-19T08:43:22Z | 29 | 0 | null | [
"region:us"
] | 2023-09-19T08:43:22Z | 2023-09-19T08:39:47.000Z | 2023-09-19T08:39:47 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: valid
path: data/valid-*
- split: test
path: data/test-*
dataset_info:
features:
- name: image
dtype: image
- name: ground_truth
dtype: string
splits:
- name: train
num_bytes: 580858079.0
num_examples: 2800
- name: valid
num_bytes: 85643829.0
num_examples: 400
- name: test
num_bytes: 172886967.0
num_examples: 800
download_size: 804946514
dataset_size: 839388875.0
---
# Dataset Card for "donut_vqa_ISynHMP_all_labels"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.23792561888694763,
-0.15410420298576355,
0.31882035732269287,
0.11842461675405502,
-0.05775756388902664,
0.2846059203147888,
0.19714120030403137,
-0.13541992008686066,
1.0937440395355225,
0.5384681820869446,
-0.8899773359298706,
-0.8707548379898071,
-0.7128725051879883,
-0.3144378662109... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
antareepdey/Medical_chat_Llama-chat-50k | antareepdey | 2023-10-22T03:16:54Z | 29 | 0 | null | [
"region:us"
] | 2023-10-22T03:16:54Z | 2023-10-22T03:15:55.000Z | 2023-10-22T03:15:55 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: Text
dtype: string
splits:
- name: train
num_bytes: 50561249
num_examples: 50000
download_size: 31132221
dataset_size: 50561249
---
# Dataset Card for "Medical_chat_Llama-chat-50k"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.22141949832439423,
-0.16136214137077332,
0.15185238420963287,
0.5018898844718933,
-0.5244977474212646,
0.24231360852718353,
0.2851066291332245,
-0.35602515935897827,
1.0662175416946411,
0.48427313566207886,
-0.8330637216567993,
-0.9337841272354126,
-0.8029310703277588,
-0.07773027569055... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Phando/uspto-50k | Phando | 2023-10-26T06:11:45Z | 29 | 0 | null | [
"region:us"
] | 2023-10-26T06:11:45Z | 2023-10-24T09:03:20.000Z | 2023-10-24T09:03:20 | ---
dataset_info:
features:
- name: class
dtype: int64
- name: id
dtype: string
- name: prod_smiles
dtype: string
- name: rxn_smiles
dtype: string
- name: prod_smiles_pop
dtype: int64
- name: keep
dtype: bool
splits:
- name: train
num_bytes: 23288334
num_examples: 50016
download_size: 8855809
dataset_size: 23288334
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "uspto-50k"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.5084240436553955,
0.13262712955474854,
0.09016130864620209,
0.1964663565158844,
-0.5770283341407776,
0.11867646872997284,
0.3893951177597046,
-0.2069336175918579,
0.685035228729248,
0.6308382153511047,
-0.6992062926292419,
-0.7214344143867493,
-0.5534582138061523,
-0.06259724497795105,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
cis-lmu/Glot500 | cis-lmu | 2023-11-21T16:16:52Z | 29 | 4 | null | [
"license:other",
"arxiv:2305.12182",
"region:us"
] | 2023-11-21T16:16:52Z | 2023-11-01T10:25:59.000Z | 2023-11-01T10:25:59 | ---
license: other
license_name: license
license_link: LICENSE
configs:
- config_name: mlt_Mlym
data_files:
- split: train
path: "mlt_Mlym/train/*.arrow"
- split: dev
path: "mlt_Mlym/dev/*.arrow"
- split: test
path: "mlt_Mlym/test/*.arrow"
- config_name: knv_Latn
data_files:
- split: train
path: "knv_Latn/train/*.arrow"
- split: dev
path: "knv_Latn/dev/*.arrow"
- split: test
path: "knv_Latn/test/*.arrow"
- config_name: guj_Arab
data_files:
- split: train
path: "guj_Arab/train/*.arrow"
- split: dev
path: "guj_Arab/dev/*.arrow"
- split: test
path: "guj_Arab/test/*.arrow"
- config_name: ton_Latn
data_files:
- split: train
path: "ton_Latn/train/*.arrow"
- split: dev
path: "ton_Latn/dev/*.arrow"
- split: test
path: "ton_Latn/test/*.arrow"
- config_name: nld_Latn
data_files:
- split: train
path: "nld_Latn/train/*.arrow"
- split: dev
path: "nld_Latn/dev/*.arrow"
- split: test
path: "nld_Latn/test/*.arrow"
- config_name: tzo_Latn
data_files:
- split: train
path: "tzo_Latn/train/*.arrow"
- split: dev
path: "tzo_Latn/dev/*.arrow"
- split: test
path: "tzo_Latn/test/*.arrow"
- config_name: tsn_Hani
data_files:
- split: train
path: "tsn_Hani/train/*.arrow"
- split: dev
path: "tsn_Hani/dev/*.arrow"
- split: test
path: "tsn_Hani/test/*.arrow"
- config_name: aze_Zinh
data_files:
- split: train
path: "aze_Zinh/train/*.arrow"
- split: dev
path: "aze_Zinh/dev/*.arrow"
- split: test
path: "aze_Zinh/test/*.arrow"
- config_name: cuk_Latn
data_files:
- split: train
path: "cuk_Latn/train/*.arrow"
- split: dev
path: "cuk_Latn/dev/*.arrow"
- split: test
path: "cuk_Latn/test/*.arrow"
- config_name: uzb_Cyrl
data_files:
- split: train
path: "uzb_Cyrl/train/*.arrow"
- split: dev
path: "uzb_Cyrl/dev/*.arrow"
- split: test
path: "uzb_Cyrl/test/*.arrow"
- config_name: jav_Latn
data_files:
- split: train
path: "jav_Latn/train/*.arrow"
- split: dev
path: "jav_Latn/dev/*.arrow"
- split: test
path: "jav_Latn/test/*.arrow"
- config_name: rap_Latn
data_files:
- split: train
path: "rap_Latn/train/*.arrow"
- split: dev
path: "rap_Latn/dev/*.arrow"
- split: test
path: "rap_Latn/test/*.arrow"
- config_name: bak_Cyrl
data_files:
- split: train
path: "bak_Cyrl/train/*.arrow"
- split: dev
path: "bak_Cyrl/dev/*.arrow"
- split: test
path: "bak_Cyrl/test/*.arrow"
- config_name: por_Latn
data_files:
- split: train
path: "por_Latn/train/*.arrow"
- split: dev
path: "por_Latn/dev/*.arrow"
- split: test
path: "por_Latn/test/*.arrow"
- config_name: mlt_Hang
data_files:
- split: train
path: "mlt_Hang/train/*.arrow"
- split: dev
path: "mlt_Hang/dev/*.arrow"
- split: test
path: "mlt_Hang/test/*.arrow"
- config_name: quy_Latn
data_files:
- split: train
path: "quy_Latn/train/*.arrow"
- split: dev
path: "quy_Latn/dev/*.arrow"
- split: test
path: "quy_Latn/test/*.arrow"
- config_name: hnj_Latn
data_files:
- split: train
path: "hnj_Latn/train/*.arrow"
- split: dev
path: "hnj_Latn/dev/*.arrow"
- split: test
path: "hnj_Latn/test/*.arrow"
- config_name: ast_Latn
data_files:
- split: train
path: "ast_Latn/train/*.arrow"
- split: dev
path: "ast_Latn/dev/*.arrow"
- split: test
path: "ast_Latn/test/*.arrow"
- config_name: cos_Latn
data_files:
- split: train
path: "cos_Latn/train/*.arrow"
- split: dev
path: "cos_Latn/dev/*.arrow"
- split: test
path: "cos_Latn/test/*.arrow"
- config_name: fon_Latn
data_files:
- split: train
path: "fon_Latn/train/*.arrow"
- split: dev
path: "fon_Latn/dev/*.arrow"
- split: test
path: "fon_Latn/test/*.arrow"
- config_name: sna_Latn
data_files:
- split: train
path: "sna_Latn/train/*.arrow"
- split: dev
path: "sna_Latn/dev/*.arrow"
- split: test
path: "sna_Latn/test/*.arrow"
- config_name: dzo_Tibt
data_files:
- split: train
path: "dzo_Tibt/train/*.arrow"
- split: dev
path: "dzo_Tibt/dev/*.arrow"
- split: test
path: "dzo_Tibt/test/*.arrow"
- config_name: nob_Latn
data_files:
- split: train
path: "nob_Latn/train/*.arrow"
- split: dev
path: "nob_Latn/dev/*.arrow"
- split: test
path: "nob_Latn/test/*.arrow"
- config_name: nch_Latn
data_files:
- split: train
path: "nch_Latn/train/*.arrow"
- split: dev
path: "nch_Latn/dev/*.arrow"
- split: test
path: "nch_Latn/test/*.arrow"
- config_name: che_Cyrl
data_files:
- split: train
path: "che_Cyrl/train/*.arrow"
- split: dev
path: "che_Cyrl/dev/*.arrow"
- split: test
path: "che_Cyrl/test/*.arrow"
- config_name: ext_Latn
data_files:
- split: train
path: "ext_Latn/train/*.arrow"
- split: dev
path: "ext_Latn/dev/*.arrow"
- split: test
path: "ext_Latn/test/*.arrow"
- config_name: dtp_Latn
data_files:
- split: train
path: "dtp_Latn/train/*.arrow"
- split: dev
path: "dtp_Latn/dev/*.arrow"
- split: test
path: "dtp_Latn/test/*.arrow"
- config_name: yue_Hani
data_files:
- split: train
path: "yue_Hani/train/*.arrow"
- split: dev
path: "yue_Hani/dev/*.arrow"
- split: test
path: "yue_Hani/test/*.arrow"
- config_name: kbd_Cyrl
data_files:
- split: train
path: "kbd_Cyrl/train/*.arrow"
- split: dev
path: "kbd_Cyrl/dev/*.arrow"
- split: test
path: "kbd_Cyrl/test/*.arrow"
- config_name: mar_Deva
data_files:
- split: train
path: "mar_Deva/train/*.arrow"
- split: dev
path: "mar_Deva/dev/*.arrow"
- split: test
path: "mar_Deva/test/*.arrow"
- config_name: ron_Latn
data_files:
- split: train
path: "ron_Latn/train/*.arrow"
- split: dev
path: "ron_Latn/dev/*.arrow"
- split: test
path: "ron_Latn/test/*.arrow"
- config_name: acr_Latn
data_files:
- split: train
path: "acr_Latn/train/*.arrow"
- split: dev
path: "acr_Latn/dev/*.arrow"
- split: test
path: "acr_Latn/test/*.arrow"
- config_name: afb_Arab
data_files:
- split: train
path: "afb_Arab/train/*.arrow"
- split: dev
path: "afb_Arab/dev/*.arrow"
- split: test
path: "afb_Arab/test/*.arrow"
- config_name: mon_Hani
data_files:
- split: train
path: "mon_Hani/train/*.arrow"
- split: dev
path: "mon_Hani/dev/*.arrow"
- split: test
path: "mon_Hani/test/*.arrow"
- config_name: sqi_Latn
data_files:
- split: train
path: "sqi_Latn/train/*.arrow"
- split: dev
path: "sqi_Latn/dev/*.arrow"
- split: test
path: "sqi_Latn/test/*.arrow"
- config_name: eng_Latn
data_files:
- split: train
path: "eng_Latn/train/*.arrow"
- split: dev
path: "eng_Latn/dev/*.arrow"
- split: test
path: "eng_Latn/test/*.arrow"
- config_name: ksd_Latn
data_files:
- split: train
path: "ksd_Latn/train/*.arrow"
- split: dev
path: "ksd_Latn/dev/*.arrow"
- split: test
path: "ksd_Latn/test/*.arrow"
- config_name: rus_Cyrl
data_files:
- split: train
path: "rus_Cyrl/train/*.arrow"
- split: dev
path: "rus_Cyrl/dev/*.arrow"
- split: test
path: "rus_Cyrl/test/*.arrow"
- config_name: bcl_Latn
data_files:
- split: train
path: "bcl_Latn/train/*.arrow"
- split: dev
path: "bcl_Latn/dev/*.arrow"
- split: test
path: "bcl_Latn/test/*.arrow"
- config_name: ksh_Latn
data_files:
- split: train
path: "ksh_Latn/train/*.arrow"
- split: dev
path: "ksh_Latn/dev/*.arrow"
- split: test
path: "ksh_Latn/test/*.arrow"
- config_name: hin_Latn
data_files:
- split: train
path: "hin_Latn/train/*.arrow"
- split: dev
path: "hin_Latn/dev/*.arrow"
- split: test
path: "hin_Latn/test/*.arrow"
- config_name: myv_Cyrl
data_files:
- split: train
path: "myv_Cyrl/train/*.arrow"
- split: dev
path: "myv_Cyrl/dev/*.arrow"
- split: test
path: "myv_Cyrl/test/*.arrow"
- config_name: kjh_Cyrl
data_files:
- split: train
path: "kjh_Cyrl/train/*.arrow"
- split: dev
path: "kjh_Cyrl/dev/*.arrow"
- split: test
path: "kjh_Cyrl/test/*.arrow"
- config_name: sah_Cyrl
data_files:
- split: train
path: "sah_Cyrl/train/*.arrow"
- split: dev
path: "sah_Cyrl/dev/*.arrow"
- split: test
path: "sah_Cyrl/test/*.arrow"
- config_name: naq_Latn
data_files:
- split: train
path: "naq_Latn/train/*.arrow"
- split: dev
path: "naq_Latn/dev/*.arrow"
- split: test
path: "naq_Latn/test/*.arrow"
- config_name: tdt_Latn
data_files:
- split: train
path: "tdt_Latn/train/*.arrow"
- split: dev
path: "tdt_Latn/dev/*.arrow"
- split: test
path: "tdt_Latn/test/*.arrow"
- config_name: kac_Latn
data_files:
- split: train
path: "kac_Latn/train/*.arrow"
- split: dev
path: "kac_Latn/dev/*.arrow"
- split: test
path: "kac_Latn/test/*.arrow"
- config_name: cak_Latn
data_files:
- split: train
path: "cak_Latn/train/*.arrow"
- split: dev
path: "cak_Latn/dev/*.arrow"
- split: test
path: "cak_Latn/test/*.arrow"
- config_name: kir_Cyrl
data_files:
- split: train
path: "kir_Cyrl/train/*.arrow"
- split: dev
path: "kir_Cyrl/dev/*.arrow"
- split: test
path: "kir_Cyrl/test/*.arrow"
- config_name: mps_Latn
data_files:
- split: train
path: "mps_Latn/train/*.arrow"
- split: dev
path: "mps_Latn/dev/*.arrow"
- split: test
path: "mps_Latn/test/*.arrow"
- config_name: yid_Hebr
data_files:
- split: train
path: "yid_Hebr/train/*.arrow"
- split: dev
path: "yid_Hebr/dev/*.arrow"
- split: test
path: "yid_Hebr/test/*.arrow"
- config_name: mlt_Beng
data_files:
- split: train
path: "mlt_Beng/train/*.arrow"
- split: dev
path: "mlt_Beng/dev/*.arrow"
- split: test
path: "mlt_Beng/test/*.arrow"
- config_name: srn_Latn
data_files:
- split: train
path: "srn_Latn/train/*.arrow"
- split: dev
path: "srn_Latn/dev/*.arrow"
- split: test
path: "srn_Latn/test/*.arrow"
- config_name: div_Thaa
data_files:
- split: train
path: "div_Thaa/train/*.arrow"
- split: dev
path: "div_Thaa/dev/*.arrow"
- split: test
path: "div_Thaa/test/*.arrow"
- config_name: mlt_Kana
data_files:
- split: train
path: "mlt_Kana/train/*.arrow"
- split: dev
path: "mlt_Kana/dev/*.arrow"
- split: test
path: "mlt_Kana/test/*.arrow"
- config_name: mkd_Cyrl
data_files:
- split: train
path: "mkd_Cyrl/train/*.arrow"
- split: dev
path: "mkd_Cyrl/dev/*.arrow"
- split: test
path: "mkd_Cyrl/test/*.arrow"
- config_name: bre_Latn
data_files:
- split: train
path: "bre_Latn/train/*.arrow"
- split: dev
path: "bre_Latn/dev/*.arrow"
- split: test
path: "bre_Latn/test/*.arrow"
- config_name: tvl_Latn
data_files:
- split: train
path: "tvl_Latn/train/*.arrow"
- split: test
path: "tvl_Latn/test/*.arrow"
- config_name: ven_Latn
data_files:
- split: train
path: "ven_Latn/train/*.arrow"
- split: dev
path: "ven_Latn/dev/*.arrow"
- split: test
path: "ven_Latn/test/*.arrow"
- config_name: mlt_Mymr
data_files:
- split: train
path: "mlt_Mymr/train/*.arrow"
- split: dev
path: "mlt_Mymr/dev/*.arrow"
- split: test
path: "mlt_Mymr/test/*.arrow"
- config_name: wuu_Hani
data_files:
- split: train
path: "wuu_Hani/train/*.arrow"
- split: dev
path: "wuu_Hani/dev/*.arrow"
- split: test
path: "wuu_Hani/test/*.arrow"
- config_name: mwl_Latn
data_files:
- split: train
path: "mwl_Latn/train/*.arrow"
- split: dev
path: "mwl_Latn/dev/*.arrow"
- split: test
path: "mwl_Latn/test/*.arrow"
- config_name: miq_Latn
data_files:
- split: train
path: "miq_Latn/train/*.arrow"
- config_name: slv_Latn
data_files:
- split: train
path: "slv_Latn/train/*.arrow"
- split: dev
path: "slv_Latn/dev/*.arrow"
- split: test
path: "slv_Latn/test/*.arrow"
- config_name: hrv_Latn
data_files:
- split: train
path: "hrv_Latn/train/*.arrow"
- split: dev
path: "hrv_Latn/dev/*.arrow"
- split: test
path: "hrv_Latn/test/*.arrow"
- config_name: hmo_Latn
data_files:
- split: train
path: "hmo_Latn/train/*.arrow"
- split: dev
path: "hmo_Latn/dev/*.arrow"
- split: test
path: "hmo_Latn/test/*.arrow"
- config_name: som_Latn
data_files:
- split: train
path: "som_Latn/train/*.arrow"
- split: dev
path: "som_Latn/dev/*.arrow"
- split: test
path: "som_Latn/test/*.arrow"
- config_name: bod_Tibt
data_files:
- split: train
path: "bod_Tibt/train/*.arrow"
- split: dev
path: "bod_Tibt/dev/*.arrow"
- split: test
path: "bod_Tibt/test/*.arrow"
- config_name: pls_Latn
data_files:
- split: train
path: "pls_Latn/train/*.arrow"
- split: dev
path: "pls_Latn/dev/*.arrow"
- split: test
path: "pls_Latn/test/*.arrow"
- config_name: ile_Latn
data_files:
- split: train
path: "ile_Latn/train/*.arrow"
- split: dev
path: "ile_Latn/dev/*.arrow"
- split: test
path: "ile_Latn/test/*.arrow"
- config_name: luo_Latn
data_files:
- split: train
path: "luo_Latn/train/*.arrow"
- split: dev
path: "luo_Latn/dev/*.arrow"
- split: test
path: "luo_Latn/test/*.arrow"
- config_name: pus_Arab
data_files:
- split: train
path: "pus_Arab/train/*.arrow"
- split: dev
path: "pus_Arab/dev/*.arrow"
- split: test
path: "pus_Arab/test/*.arrow"
- config_name: fao_Latn
data_files:
- split: train
path: "fao_Latn/train/*.arrow"
- split: dev
path: "fao_Latn/dev/*.arrow"
- split: test
path: "fao_Latn/test/*.arrow"
- config_name: fas_Arab
data_files:
- split: train
path: "fas_Arab/train/*.arrow"
- split: dev
path: "fas_Arab/dev/*.arrow"
- split: test
path: "fas_Arab/test/*.arrow"
- config_name: swa_Latn
data_files:
- split: train
path: "swa_Latn/train/*.arrow"
- split: dev
path: "swa_Latn/dev/*.arrow"
- split: test
path: "swa_Latn/test/*.arrow"
- config_name: mlt_Hebr
data_files:
- split: train
path: "mlt_Hebr/train/*.arrow"
- split: dev
path: "mlt_Hebr/dev/*.arrow"
- split: test
path: "mlt_Hebr/test/*.arrow"
- config_name: ary_Arab
data_files:
- split: train
path: "ary_Arab/train/*.arrow"
- split: dev
path: "ary_Arab/dev/*.arrow"
- split: test
path: "ary_Arab/test/*.arrow"
- config_name: hus_Latn
data_files:
- split: train
path: "hus_Latn/train/*.arrow"
- split: dev
path: "hus_Latn/dev/*.arrow"
- split: test
path: "hus_Latn/test/*.arrow"
- config_name: ote_Latn
data_files:
- split: train
path: "ote_Latn/train/*.arrow"
- split: dev
path: "ote_Latn/dev/*.arrow"
- split: test
path: "ote_Latn/test/*.arrow"
- config_name: ilo_Latn
data_files:
- split: train
path: "ilo_Latn/train/*.arrow"
- split: dev
path: "ilo_Latn/dev/*.arrow"
- split: test
path: "ilo_Latn/test/*.arrow"
- config_name: abk_Cyrl
data_files:
- split: train
path: "abk_Cyrl/train/*.arrow"
- split: dev
path: "abk_Cyrl/dev/*.arrow"
- split: test
path: "abk_Cyrl/test/*.arrow"
- config_name: bqc_Latn
data_files:
- split: train
path: "bqc_Latn/train/*.arrow"
- split: dev
path: "bqc_Latn/dev/*.arrow"
- split: test
path: "bqc_Latn/test/*.arrow"
- config_name: mlt_Taml
data_files:
- split: train
path: "mlt_Taml/train/*.arrow"
- split: dev
path: "mlt_Taml/dev/*.arrow"
- split: test
path: "mlt_Taml/test/*.arrow"
- config_name: hil_Latn
data_files:
- split: train
path: "hil_Latn/train/*.arrow"
- config_name: pon_Latn
data_files:
- split: train
path: "pon_Latn/train/*.arrow"
- split: dev
path: "pon_Latn/dev/*.arrow"
- split: test
path: "pon_Latn/test/*.arrow"
- config_name: zul_Latn
data_files:
- split: train
path: "zul_Latn/train/*.arrow"
- split: dev
path: "zul_Latn/dev/*.arrow"
- split: test
path: "zul_Latn/test/*.arrow"
- config_name: als_Latn
data_files:
- split: train
path: "als_Latn/train/*.arrow"
- split: dev
path: "als_Latn/dev/*.arrow"
- split: test
path: "als_Latn/test/*.arrow"
- config_name: pes_Arab
data_files:
- split: train
path: "pes_Arab/train/*.arrow"
- split: dev
path: "pes_Arab/dev/*.arrow"
- split: test
path: "pes_Arab/test/*.arrow"
- config_name: bpy_Beng
data_files:
- split: train
path: "bpy_Beng/train/*.arrow"
- split: dev
path: "bpy_Beng/dev/*.arrow"
- split: test
path: "bpy_Beng/test/*.arrow"
- config_name: bos_Latn
data_files:
- split: train
path: "bos_Latn/train/*.arrow"
- split: dev
path: "bos_Latn/dev/*.arrow"
- split: test
path: "bos_Latn/test/*.arrow"
- config_name: sot_Latn
data_files:
- split: train
path: "sot_Latn/train/*.arrow"
- split: dev
path: "sot_Latn/dev/*.arrow"
- split: test
path: "sot_Latn/test/*.arrow"
- config_name: lin_Latn
data_files:
- split: train
path: "lin_Latn/train/*.arrow"
- split: dev
path: "lin_Latn/dev/*.arrow"
- split: test
path: "lin_Latn/test/*.arrow"
- config_name: tuk_Cyrl
data_files:
- split: train
path: "tuk_Cyrl/train/*.arrow"
- split: dev
path: "tuk_Cyrl/dev/*.arrow"
- config_name: gla_Latn
data_files:
- split: train
path: "gla_Latn/train/*.arrow"
- split: dev
path: "gla_Latn/dev/*.arrow"
- split: test
path: "gla_Latn/test/*.arrow"
- config_name: wln_Latn
data_files:
- split: train
path: "wln_Latn/train/*.arrow"
- split: dev
path: "wln_Latn/dev/*.arrow"
- split: test
path: "wln_Latn/test/*.arrow"
- config_name: apc_Arab
data_files:
- split: train
path: "apc_Arab/train/*.arrow"
- split: dev
path: "apc_Arab/dev/*.arrow"
- split: test
path: "apc_Arab/test/*.arrow"
- config_name: aze_Hira
data_files:
- split: train
path: "aze_Hira/train/*.arrow"
- split: dev
path: "aze_Hira/dev/*.arrow"
- split: test
path: "aze_Hira/test/*.arrow"
- config_name: hin_Deva
data_files:
- split: train
path: "hin_Deva/train/*.arrow"
- split: dev
path: "hin_Deva/dev/*.arrow"
- split: test
path: "hin_Deva/test/*.arrow"
- config_name: hye_Armn
data_files:
- split: train
path: "hye_Armn/train/*.arrow"
- split: dev
path: "hye_Armn/dev/*.arrow"
- split: test
path: "hye_Armn/test/*.arrow"
- config_name: tir_Ethi
data_files:
- split: train
path: "tir_Ethi/train/*.arrow"
- split: dev
path: "tir_Ethi/dev/*.arrow"
- split: test
path: "tir_Ethi/test/*.arrow"
- config_name: aze_Ethi
data_files:
- split: train
path: "aze_Ethi/train/*.arrow"
- split: dev
path: "aze_Ethi/dev/*.arrow"
- split: test
path: "aze_Ethi/test/*.arrow"
- config_name: pap_Latn
data_files:
- split: train
path: "pap_Latn/train/*.arrow"
- split: dev
path: "pap_Latn/dev/*.arrow"
- split: test
path: "pap_Latn/test/*.arrow"
- config_name: mlt_Ethi
data_files:
- split: train
path: "mlt_Ethi/train/*.arrow"
- split: dev
path: "mlt_Ethi/dev/*.arrow"
- split: test
path: "mlt_Ethi/test/*.arrow"
- config_name: gcf_Latn
data_files:
- split: train
path: "gcf_Latn/train/*.arrow"
- split: dev
path: "gcf_Latn/dev/*.arrow"
- split: test
path: "gcf_Latn/test/*.arrow"
- config_name: cjk_Latn
data_files:
- split: train
path: "cjk_Latn/train/*.arrow"
- split: dev
path: "cjk_Latn/dev/*.arrow"
- split: test
path: "cjk_Latn/test/*.arrow"
- config_name: pcd_Latn
data_files:
- split: train
path: "pcd_Latn/train/*.arrow"
- split: dev
path: "pcd_Latn/dev/*.arrow"
- split: test
path: "pcd_Latn/test/*.arrow"
- config_name: tur_Latn
data_files:
- split: train
path: "tur_Latn/train/*.arrow"
- split: dev
path: "tur_Latn/dev/*.arrow"
- split: test
path: "tur_Latn/test/*.arrow"
- config_name: kon_Latn
data_files:
- split: train
path: "kon_Latn/train/*.arrow"
- split: dev
path: "kon_Latn/dev/*.arrow"
- split: test
path: "kon_Latn/test/*.arrow"
- config_name: csy_Latn
data_files:
- split: train
path: "csy_Latn/train/*.arrow"
- split: dev
path: "csy_Latn/dev/*.arrow"
- split: test
path: "csy_Latn/test/*.arrow"
- config_name: xho_Latn
data_files:
- split: train
path: "xho_Latn/train/*.arrow"
- split: dev
path: "xho_Latn/dev/*.arrow"
- split: test
path: "xho_Latn/test/*.arrow"
- config_name: guc_Latn
data_files:
- split: train
path: "guc_Latn/train/*.arrow"
- config_name: aka_Latn
data_files:
- split: train
path: "aka_Latn/train/*.arrow"
- split: dev
path: "aka_Latn/dev/*.arrow"
- split: test
path: "aka_Latn/test/*.arrow"
- config_name: kea_Latn
data_files:
- split: train
path: "kea_Latn/train/*.arrow"
- split: dev
path: "kea_Latn/dev/*.arrow"
- split: test
path: "kea_Latn/test/*.arrow"
- config_name: bar_Latn
data_files:
- split: train
path: "bar_Latn/train/*.arrow"
- split: dev
path: "bar_Latn/dev/*.arrow"
- split: test
path: "bar_Latn/test/*.arrow"
- config_name: sme_Latn
data_files:
- split: train
path: "sme_Latn/train/*.arrow"
- split: dev
path: "sme_Latn/dev/*.arrow"
- split: test
path: "sme_Latn/test/*.arrow"
- config_name: aze_Hang
data_files:
- split: train
path: "aze_Hang/train/*.arrow"
- split: dev
path: "aze_Hang/dev/*.arrow"
- split: test
path: "aze_Hang/test/*.arrow"
- config_name: csb_Latn
data_files:
- split: train
path: "csb_Latn/train/*.arrow"
- split: dev
path: "csb_Latn/dev/*.arrow"
- split: test
path: "csb_Latn/test/*.arrow"
- config_name: bak_Latn
data_files:
- split: train
path: "bak_Latn/train/*.arrow"
- split: dev
path: "bak_Latn/dev/*.arrow"
- split: test
path: "bak_Latn/test/*.arrow"
- config_name: djk_Latn
data_files:
- split: train
path: "djk_Latn/train/*.arrow"
- split: dev
path: "djk_Latn/dev/*.arrow"
- split: test
path: "djk_Latn/test/*.arrow"
- config_name: xav_Latn
data_files:
- split: train
path: "xav_Latn/train/*.arrow"
- split: dev
path: "xav_Latn/dev/*.arrow"
- split: test
path: "xav_Latn/test/*.arrow"
- config_name: oci_Latn
data_files:
- split: train
path: "oci_Latn/train/*.arrow"
- split: dev
path: "oci_Latn/dev/*.arrow"
- split: test
path: "oci_Latn/test/*.arrow"
- config_name: acm_Arab
data_files:
- split: train
path: "acm_Arab/train/*.arrow"
- split: dev
path: "acm_Arab/dev/*.arrow"
- split: test
path: "acm_Arab/test/*.arrow"
- config_name: rmy_Cyrl
data_files:
- split: train
path: "rmy_Cyrl/train/*.arrow"
- config_name: krc_Cyrl
data_files:
- split: train
path: "krc_Cyrl/train/*.arrow"
- split: dev
path: "krc_Cyrl/dev/*.arrow"
- split: test
path: "krc_Cyrl/test/*.arrow"
- config_name: cym_Latn
data_files:
- split: train
path: "cym_Latn/train/*.arrow"
- split: dev
path: "cym_Latn/dev/*.arrow"
- split: test
path: "cym_Latn/test/*.arrow"
- config_name: lus_Latn
data_files:
- split: train
path: "lus_Latn/train/*.arrow"
- split: dev
path: "lus_Latn/dev/*.arrow"
- split: test
path: "lus_Latn/test/*.arrow"
- config_name: ngu_Latn
data_files:
- split: train
path: "ngu_Latn/train/*.arrow"
- split: dev
path: "ngu_Latn/dev/*.arrow"
- split: test
path: "ngu_Latn/test/*.arrow"
- config_name: yom_Latn
data_files:
- split: train
path: "yom_Latn/train/*.arrow"
- split: dev
path: "yom_Latn/dev/*.arrow"
- split: test
path: "yom_Latn/test/*.arrow"
- config_name: tam_Taml
data_files:
- split: train
path: "tam_Taml/train/*.arrow"
- split: dev
path: "tam_Taml/dev/*.arrow"
- split: test
path: "tam_Taml/test/*.arrow"
- config_name: ajp_Arab
data_files:
- split: train
path: "ajp_Arab/train/*.arrow"
- split: dev
path: "ajp_Arab/dev/*.arrow"
- split: test
path: "ajp_Arab/test/*.arrow"
- config_name: epo_Latn
data_files:
- split: train
path: "epo_Latn/train/*.arrow"
- split: dev
path: "epo_Latn/dev/*.arrow"
- split: test
path: "epo_Latn/test/*.arrow"
- config_name: fra_Latn
data_files:
- split: train
path: "fra_Latn/train/*.arrow"
- split: dev
path: "fra_Latn/dev/*.arrow"
- split: test
path: "fra_Latn/test/*.arrow"
- config_name: ita_Latn
data_files:
- split: train
path: "ita_Latn/train/*.arrow"
- split: dev
path: "ita_Latn/dev/*.arrow"
- split: test
path: "ita_Latn/test/*.arrow"
- config_name: seh_Latn
data_files:
- split: train
path: "seh_Latn/train/*.arrow"
- split: dev
path: "seh_Latn/dev/*.arrow"
- split: test
path: "seh_Latn/test/*.arrow"
- config_name: hbs_Latn
data_files:
- split: train
path: "hbs_Latn/train/*.arrow"
- split: dev
path: "hbs_Latn/dev/*.arrow"
- split: test
path: "hbs_Latn/test/*.arrow"
- config_name: uzn_Cyrl
data_files:
- split: train
path: "uzn_Cyrl/train/*.arrow"
- split: dev
path: "uzn_Cyrl/dev/*.arrow"
- split: test
path: "uzn_Cyrl/test/*.arrow"
- config_name: ksw_Mymr
data_files:
- split: train
path: "ksw_Mymr/train/*.arrow"
- config_name: pms_Latn
data_files:
- split: train
path: "pms_Latn/train/*.arrow"
- split: dev
path: "pms_Latn/dev/*.arrow"
- split: test
path: "pms_Latn/test/*.arrow"
- config_name: zlm_Latn
data_files:
- split: train
path: "zlm_Latn/train/*.arrow"
- split: dev
path: "zlm_Latn/dev/*.arrow"
- split: test
path: "zlm_Latn/test/*.arrow"
- config_name: qub_Latn
data_files:
- split: train
path: "qub_Latn/train/*.arrow"
- split: dev
path: "qub_Latn/dev/*.arrow"
- config_name: arg_Latn
data_files:
- split: train
path: "arg_Latn/train/*.arrow"
- split: dev
path: "arg_Latn/dev/*.arrow"
- split: test
path: "arg_Latn/test/*.arrow"
- config_name: kaa_Cyrl
data_files:
- split: train
path: "kaa_Cyrl/train/*.arrow"
- split: dev
path: "kaa_Cyrl/dev/*.arrow"
- split: test
path: "kaa_Cyrl/test/*.arrow"
- config_name: toj_Latn
data_files:
- split: train
path: "toj_Latn/train/*.arrow"
- split: dev
path: "toj_Latn/dev/*.arrow"
- split: test
path: "toj_Latn/test/*.arrow"
- config_name: aze_Grek
data_files:
- split: train
path: "aze_Grek/train/*.arrow"
- split: dev
path: "aze_Grek/dev/*.arrow"
- split: test
path: "aze_Grek/test/*.arrow"
- config_name: guj_Cyrl
data_files:
- split: train
path: "guj_Cyrl/train/*.arrow"
- split: dev
path: "guj_Cyrl/dev/*.arrow"
- split: test
path: "guj_Cyrl/test/*.arrow"
- config_name: kab_Latn
data_files:
- split: train
path: "kab_Latn/train/*.arrow"
- split: dev
path: "kab_Latn/dev/*.arrow"
- split: test
path: "kab_Latn/test/*.arrow"
- config_name: pan_Guru
data_files:
- split: train
path: "pan_Guru/train/*.arrow"
- split: dev
path: "pan_Guru/dev/*.arrow"
- split: test
path: "pan_Guru/test/*.arrow"
- config_name: nan_Latn
data_files:
- split: train
path: "nan_Latn/train/*.arrow"
- split: dev
path: "nan_Latn/dev/*.arrow"
- split: test
path: "nan_Latn/test/*.arrow"
- config_name: aze_Latn
data_files:
- split: train
path: "aze_Latn/train/*.arrow"
- split: dev
path: "aze_Latn/dev/*.arrow"
- split: test
path: "aze_Latn/test/*.arrow"
- config_name: ara_Arab
data_files:
- split: train
path: "ara_Arab/train/*.arrow"
- split: dev
path: "ara_Arab/dev/*.arrow"
- split: test
path: "ara_Arab/test/*.arrow"
- config_name: aze_Mymr
data_files:
- split: train
path: "aze_Mymr/train/*.arrow"
- split: dev
path: "aze_Mymr/dev/*.arrow"
- split: test
path: "aze_Mymr/test/*.arrow"
- config_name: meu_Latn
data_files:
- split: train
path: "meu_Latn/train/*.arrow"
- split: dev
path: "meu_Latn/dev/*.arrow"
- split: test
path: "meu_Latn/test/*.arrow"
- config_name: mon_Arab
data_files:
- split: train
path: "mon_Arab/train/*.arrow"
- split: dev
path: "mon_Arab/dev/*.arrow"
- split: test
path: "mon_Arab/test/*.arrow"
- config_name: lvs_Latn
data_files:
- split: train
path: "lvs_Latn/train/*.arrow"
- split: dev
path: "lvs_Latn/dev/*.arrow"
- split: test
path: "lvs_Latn/test/*.arrow"
- config_name: nbl_Latn
data_files:
- split: train
path: "nbl_Latn/train/*.arrow"
- split: dev
path: "nbl_Latn/dev/*.arrow"
- split: test
path: "nbl_Latn/test/*.arrow"
- config_name: crh_Latn
data_files:
- split: train
path: "crh_Latn/train/*.arrow"
- split: dev
path: "crh_Latn/dev/*.arrow"
- split: test
path: "crh_Latn/test/*.arrow"
- config_name: kbp_Latn
data_files:
- split: train
path: "kbp_Latn/train/*.arrow"
- split: dev
path: "kbp_Latn/dev/*.arrow"
- split: test
path: "kbp_Latn/test/*.arrow"
- config_name: tgl_Latn
data_files:
- split: train
path: "tgl_Latn/train/*.arrow"
- split: dev
path: "tgl_Latn/dev/*.arrow"
- split: test
path: "tgl_Latn/test/*.arrow"
- config_name: kmb_Latn
data_files:
- split: train
path: "kmb_Latn/train/*.arrow"
- split: dev
path: "kmb_Latn/dev/*.arrow"
- split: test
path: "kmb_Latn/test/*.arrow"
- config_name: hun_Latn
data_files:
- split: train
path: "hun_Latn/train/*.arrow"
- split: dev
path: "hun_Latn/dev/*.arrow"
- split: test
path: "hun_Latn/test/*.arrow"
- config_name: aze_Thai
data_files:
- split: train
path: "aze_Thai/train/*.arrow"
- split: dev
path: "aze_Thai/dev/*.arrow"
- split: test
path: "aze_Thai/test/*.arrow"
- config_name: yao_Latn
data_files:
- split: train
path: "yao_Latn/train/*.arrow"
- split: dev
path: "yao_Latn/dev/*.arrow"
- split: test
path: "yao_Latn/test/*.arrow"
- config_name: arn_Latn
data_files:
- split: train
path: "arn_Latn/train/*.arrow"
- split: dev
path: "arn_Latn/dev/*.arrow"
- split: test
path: "arn_Latn/test/*.arrow"
- config_name: jbo_Latn
data_files:
- split: train
path: "jbo_Latn/train/*.arrow"
- split: dev
path: "jbo_Latn/dev/*.arrow"
- split: test
path: "jbo_Latn/test/*.arrow"
- config_name: mzn_Arab
data_files:
- split: train
path: "mzn_Arab/train/*.arrow"
- split: dev
path: "mzn_Arab/dev/*.arrow"
- split: test
path: "mzn_Arab/test/*.arrow"
- config_name: lzh_Hani
data_files:
- split: train
path: "lzh_Hani/train/*.arrow"
- split: dev
path: "lzh_Hani/dev/*.arrow"
- split: test
path: "lzh_Hani/test/*.arrow"
- config_name: heb_Hebr
data_files:
- split: train
path: "heb_Hebr/train/*.arrow"
- split: dev
path: "heb_Hebr/dev/*.arrow"
- split: test
path: "heb_Hebr/test/*.arrow"
- config_name: bjn_Latn
data_files:
- split: train
path: "bjn_Latn/train/*.arrow"
- split: dev
path: "bjn_Latn/dev/*.arrow"
- split: test
path: "bjn_Latn/test/*.arrow"
- config_name: gug_Latn
data_files:
- split: train
path: "gug_Latn/train/*.arrow"
- split: dev
path: "gug_Latn/dev/*.arrow"
- split: test
path: "gug_Latn/test/*.arrow"
- config_name: mlt_Hira
data_files:
- split: train
path: "mlt_Hira/train/*.arrow"
- split: dev
path: "mlt_Hira/dev/*.arrow"
- split: test
path: "mlt_Hira/test/*.arrow"
- config_name: swc_Latn
data_files:
- split: train
path: "swc_Latn/train/*.arrow"
- split: dev
path: "swc_Latn/dev/*.arrow"
- split: test
path: "swc_Latn/test/*.arrow"
- config_name: yor_Latn
data_files:
- split: train
path: "yor_Latn/train/*.arrow"
- split: dev
path: "yor_Latn/dev/*.arrow"
- split: test
path: "yor_Latn/test/*.arrow"
- config_name: ban_Latn
data_files:
- split: train
path: "ban_Latn/train/*.arrow"
- split: dev
path: "ban_Latn/dev/*.arrow"
- split: test
path: "ban_Latn/test/*.arrow"
- config_name: aze_Guru
data_files:
- split: train
path: "aze_Guru/train/*.arrow"
- split: dev
path: "aze_Guru/dev/*.arrow"
- split: test
path: "aze_Guru/test/*.arrow"
- config_name: tlh_Latn
data_files:
- split: train
path: "tlh_Latn/train/*.arrow"
- split: dev
path: "tlh_Latn/dev/*.arrow"
- split: test
path: "tlh_Latn/test/*.arrow"
- config_name: chv_Cyrl
data_files:
- split: train
path: "chv_Cyrl/train/*.arrow"
- split: dev
path: "chv_Cyrl/dev/*.arrow"
- split: test
path: "chv_Cyrl/test/*.arrow"
- config_name: sin_Sinh
data_files:
- split: train
path: "sin_Sinh/train/*.arrow"
- split: dev
path: "sin_Sinh/dev/*.arrow"
- split: test
path: "sin_Sinh/test/*.arrow"
- config_name: aze_Gujr
data_files:
- split: train
path: "aze_Gujr/train/*.arrow"
- split: dev
path: "aze_Gujr/dev/*.arrow"
- split: test
path: "aze_Gujr/test/*.arrow"
- config_name: ind_Latn
data_files:
- split: train
path: "ind_Latn/train/*.arrow"
- split: dev
path: "ind_Latn/dev/*.arrow"
- split: test
path: "ind_Latn/test/*.arrow"
- config_name: amh_Ethi
data_files:
- split: train
path: "amh_Ethi/train/*.arrow"
- split: dev
path: "amh_Ethi/dev/*.arrow"
- split: test
path: "amh_Ethi/test/*.arrow"
- config_name: zea_Latn
data_files:
- split: train
path: "zea_Latn/train/*.arrow"
- split: dev
path: "zea_Latn/dev/*.arrow"
- split: test
path: "zea_Latn/test/*.arrow"
- config_name: kpg_Latn
data_files:
- split: train
path: "kpg_Latn/train/*.arrow"
- split: dev
path: "kpg_Latn/dev/*.arrow"
- split: test
path: "kpg_Latn/test/*.arrow"
- config_name: glk_Arab
data_files:
- split: train
path: "glk_Arab/train/*.arrow"
- split: dev
path: "glk_Arab/dev/*.arrow"
- split: test
path: "glk_Arab/test/*.arrow"
- config_name: crh_Cyrl
data_files:
- split: train
path: "crh_Cyrl/train/*.arrow"
- split: dev
path: "crh_Cyrl/dev/*.arrow"
- split: test
path: "crh_Cyrl/test/*.arrow"
- config_name: nyu_Latn
data_files:
- split: train
path: "nyu_Latn/train/*.arrow"
- config_name: aze_Beng
data_files:
- split: train
path: "aze_Beng/train/*.arrow"
- split: dev
path: "aze_Beng/dev/*.arrow"
- split: test
path: "aze_Beng/test/*.arrow"
- config_name: ibo_Latn
data_files:
- split: train
path: "ibo_Latn/train/*.arrow"
- split: dev
path: "ibo_Latn/dev/*.arrow"
- split: test
path: "ibo_Latn/test/*.arrow"
- config_name: msa_Latn
data_files:
- split: train
path: "msa_Latn/train/*.arrow"
- split: dev
path: "msa_Latn/dev/*.arrow"
- split: test
path: "msa_Latn/test/*.arrow"
- config_name: prs_Arab
data_files:
- split: train
path: "prs_Arab/train/*.arrow"
- split: dev
path: "prs_Arab/dev/*.arrow"
- split: test
path: "prs_Arab/test/*.arrow"
- config_name: nap_Latn
data_files:
- split: train
path: "nap_Latn/train/*.arrow"
- split: dev
path: "nap_Latn/dev/*.arrow"
- split: test
path: "nap_Latn/test/*.arrow"
- config_name: bik_Latn
data_files:
- split: train
path: "bik_Latn/train/*.arrow"
- split: dev
path: "bik_Latn/dev/*.arrow"
- split: test
path: "bik_Latn/test/*.arrow"
- config_name: srp_Cyrl
data_files:
- split: train
path: "srp_Cyrl/train/*.arrow"
- split: dev
path: "srp_Cyrl/dev/*.arrow"
- split: test
path: "srp_Cyrl/test/*.arrow"
- config_name: lao_Laoo
data_files:
- split: train
path: "lao_Laoo/train/*.arrow"
- split: dev
path: "lao_Laoo/dev/*.arrow"
- split: test
path: "lao_Laoo/test/*.arrow"
- config_name: kom_Cyrl
data_files:
- split: train
path: "kom_Cyrl/train/*.arrow"
- split: dev
path: "kom_Cyrl/dev/*.arrow"
- split: test
path: "kom_Cyrl/test/*.arrow"
- config_name: nde_Latn
data_files:
- split: train
path: "nde_Latn/train/*.arrow"
- split: dev
path: "nde_Latn/dev/*.arrow"
- split: test
path: "nde_Latn/test/*.arrow"
- config_name: hui_Latn
data_files:
- split: train
path: "hui_Latn/train/*.arrow"
- split: dev
path: "hui_Latn/dev/*.arrow"
- split: test
path: "hui_Latn/test/*.arrow"
- config_name: uig_Latn
data_files:
- split: train
path: "uig_Latn/train/*.arrow"
- split: dev
path: "uig_Latn/dev/*.arrow"
- split: test
path: "uig_Latn/test/*.arrow"
- config_name: new_Deva
data_files:
- split: train
path: "new_Deva/train/*.arrow"
- split: dev
path: "new_Deva/dev/*.arrow"
- split: test
path: "new_Deva/test/*.arrow"
- config_name: kur_Arab
data_files:
- split: train
path: "kur_Arab/train/*.arrow"
- split: dev
path: "kur_Arab/dev/*.arrow"
- split: test
path: "kur_Arab/test/*.arrow"
- config_name: sco_Latn
data_files:
- split: train
path: "sco_Latn/train/*.arrow"
- split: dev
path: "sco_Latn/dev/*.arrow"
- split: test
path: "sco_Latn/test/*.arrow"
- config_name: ayr_Latn
data_files:
- split: train
path: "ayr_Latn/train/*.arrow"
- split: dev
path: "ayr_Latn/dev/*.arrow"
- split: test
path: "ayr_Latn/test/*.arrow"
- config_name: suz_Deva
data_files:
- split: train
path: "suz_Deva/train/*.arrow"
- split: dev
path: "suz_Deva/dev/*.arrow"
- split: test
path: "suz_Deva/test/*.arrow"
- config_name: wal_Latn
data_files:
- split: train
path: "wal_Latn/train/*.arrow"
- split: dev
path: "wal_Latn/dev/*.arrow"
- split: test
path: "wal_Latn/test/*.arrow"
- config_name: mlt_Latn
data_files:
- split: train
path: "mlt_Latn/train/*.arrow"
- split: dev
path: "mlt_Latn/dev/*.arrow"
- split: test
path: "mlt_Latn/test/*.arrow"
- config_name: asm_Beng
data_files:
- split: train
path: "asm_Beng/train/*.arrow"
- split: dev
path: "asm_Beng/dev/*.arrow"
- split: test
path: "asm_Beng/test/*.arrow"
- config_name: aze_Syrc
data_files:
- split: train
path: "aze_Syrc/train/*.arrow"
- split: dev
path: "aze_Syrc/dev/*.arrow"
- split: test
path: "aze_Syrc/test/*.arrow"
- config_name: san_Deva
data_files:
- split: train
path: "san_Deva/train/*.arrow"
- split: dev
path: "san_Deva/dev/*.arrow"
- split: test
path: "san_Deva/test/*.arrow"
- config_name: kaz_Cyrl
data_files:
- split: train
path: "kaz_Cyrl/train/*.arrow"
- split: dev
path: "kaz_Cyrl/dev/*.arrow"
- split: test
path: "kaz_Cyrl/test/*.arrow"
- config_name: iba_Latn
data_files:
- split: train
path: "iba_Latn/train/*.arrow"
- config_name: tuk_Latn
data_files:
- split: train
path: "tuk_Latn/train/*.arrow"
- split: dev
path: "tuk_Latn/dev/*.arrow"
- split: test
path: "tuk_Latn/test/*.arrow"
- config_name: nso_Latn
data_files:
- split: train
path: "nso_Latn/train/*.arrow"
- split: dev
path: "nso_Latn/dev/*.arrow"
- split: test
path: "nso_Latn/test/*.arrow"
- config_name: aze_Geor
data_files:
- split: train
path: "aze_Geor/train/*.arrow"
- split: dev
path: "aze_Geor/dev/*.arrow"
- split: test
path: "aze_Geor/test/*.arrow"
- config_name: run_Latn
data_files:
- split: train
path: "run_Latn/train/*.arrow"
- split: dev
path: "run_Latn/dev/*.arrow"
- split: test
path: "run_Latn/test/*.arrow"
- config_name: ctu_Latn
data_files:
- split: train
path: "ctu_Latn/train/*.arrow"
- split: dev
path: "ctu_Latn/dev/*.arrow"
- split: test
path: "ctu_Latn/test/*.arrow"
- config_name: bam_Latn
data_files:
- split: train
path: "bam_Latn/train/*.arrow"
- split: dev
path: "bam_Latn/dev/*.arrow"
- split: test
path: "bam_Latn/test/*.arrow"
- config_name: fin_Latn
data_files:
- split: train
path: "fin_Latn/train/*.arrow"
- split: dev
path: "fin_Latn/dev/*.arrow"
- split: test
path: "fin_Latn/test/*.arrow"
- config_name: gor_Latn
data_files:
- split: train
path: "gor_Latn/train/*.arrow"
- split: dev
path: "gor_Latn/dev/*.arrow"
- split: test
path: "gor_Latn/test/*.arrow"
- config_name: kmr_Latn
data_files:
- split: train
path: "kmr_Latn/train/*.arrow"
- split: dev
path: "kmr_Latn/dev/*.arrow"
- split: test
path: "kmr_Latn/test/*.arrow"
- config_name: ben_Beng
data_files:
- split: train
path: "ben_Beng/train/*.arrow"
- split: dev
path: "ben_Beng/dev/*.arrow"
- split: test
path: "ben_Beng/test/*.arrow"
- config_name: pag_Latn
data_files:
- split: train
path: "pag_Latn/train/*.arrow"
- split: dev
path: "pag_Latn/dev/*.arrow"
- split: test
path: "pag_Latn/test/*.arrow"
- config_name: niu_Latn
data_files:
- split: train
path: "niu_Latn/train/*.arrow"
- config_name: xmf_Geor
data_files:
- split: train
path: "xmf_Geor/train/*.arrow"
- split: dev
path: "xmf_Geor/dev/*.arrow"
- split: test
path: "xmf_Geor/test/*.arrow"
- config_name: ekk_Latn
data_files:
- split: train
path: "ekk_Latn/train/*.arrow"
- split: dev
path: "ekk_Latn/dev/*.arrow"
- split: test
path: "ekk_Latn/test/*.arrow"
- config_name: lmo_Latn
data_files:
- split: train
path: "lmo_Latn/train/*.arrow"
- split: dev
path: "lmo_Latn/dev/*.arrow"
- split: test
path: "lmo_Latn/test/*.arrow"
- config_name: mhr_Cyrl
data_files:
- split: train
path: "mhr_Cyrl/train/*.arrow"
- split: dev
path: "mhr_Cyrl/dev/*.arrow"
- split: test
path: "mhr_Cyrl/test/*.arrow"
- config_name: plt_Latn
data_files:
- split: train
path: "plt_Latn/train/*.arrow"
- split: dev
path: "plt_Latn/dev/*.arrow"
- split: test
path: "plt_Latn/test/*.arrow"
- config_name: qvi_Latn
data_files:
- split: train
path: "qvi_Latn/train/*.arrow"
- split: dev
path: "qvi_Latn/dev/*.arrow"
- split: test
path: "qvi_Latn/test/*.arrow"
- config_name: mlt_Zinh
data_files:
- split: train
path: "mlt_Zinh/train/*.arrow"
- split: dev
path: "mlt_Zinh/dev/*.arrow"
- split: test
path: "mlt_Zinh/test/*.arrow"
- config_name: roh_Latn
data_files:
- split: train
path: "roh_Latn/train/*.arrow"
- split: dev
path: "roh_Latn/dev/*.arrow"
- split: test
path: "roh_Latn/test/*.arrow"
- config_name: mah_Latn
data_files:
- split: train
path: "mah_Latn/train/*.arrow"
- config_name: npi_Deva
data_files:
- split: train
path: "npi_Deva/train/*.arrow"
- split: dev
path: "npi_Deva/dev/*.arrow"
- split: test
path: "npi_Deva/test/*.arrow"
- config_name: guj_Telu
data_files:
- split: train
path: "guj_Telu/train/*.arrow"
- split: dev
path: "guj_Telu/dev/*.arrow"
- split: test
path: "guj_Telu/test/*.arrow"
- config_name: tok_Latn
data_files:
- split: train
path: "tok_Latn/train/*.arrow"
- split: dev
path: "tok_Latn/dev/*.arrow"
- split: test
path: "tok_Latn/test/*.arrow"
- config_name: eml_Latn
data_files:
- split: train
path: "eml_Latn/train/*.arrow"
- split: dev
path: "eml_Latn/dev/*.arrow"
- split: test
path: "eml_Latn/test/*.arrow"
- config_name: pnb_Arab
data_files:
- split: train
path: "pnb_Arab/train/*.arrow"
- split: dev
path: "pnb_Arab/dev/*.arrow"
- split: test
path: "pnb_Arab/test/*.arrow"
- config_name: tsn_Hira
data_files:
- split: train
path: "tsn_Hira/train/*.arrow"
- split: dev
path: "tsn_Hira/dev/*.arrow"
- split: test
path: "tsn_Hira/test/*.arrow"
- config_name: nav_Latn
data_files:
- split: train
path: "nav_Latn/train/*.arrow"
- split: dev
path: "nav_Latn/dev/*.arrow"
- split: test
path: "nav_Latn/test/*.arrow"
- config_name: hyw_Latn
data_files:
- split: train
path: "hyw_Latn/train/*.arrow"
- config_name: cat_Latn
data_files:
- split: train
path: "cat_Latn/train/*.arrow"
- split: dev
path: "cat_Latn/dev/*.arrow"
- split: test
path: "cat_Latn/test/*.arrow"
- config_name: gym_Latn
data_files:
- split: train
path: "gym_Latn/train/*.arrow"
- split: dev
path: "gym_Latn/dev/*.arrow"
- split: test
path: "gym_Latn/test/*.arrow"
- config_name: sat_Olck
data_files:
- split: train
path: "sat_Olck/train/*.arrow"
- split: dev
path: "sat_Olck/dev/*.arrow"
- split: test
path: "sat_Olck/test/*.arrow"
- config_name: snd_Arab
data_files:
- split: train
path: "snd_Arab/train/*.arrow"
- split: dev
path: "snd_Arab/dev/*.arrow"
- split: test
path: "snd_Arab/test/*.arrow"
- config_name: isl_Latn
data_files:
- split: train
path: "isl_Latn/train/*.arrow"
- split: dev
path: "isl_Latn/dev/*.arrow"
- split: test
path: "isl_Latn/test/*.arrow"
- config_name: mlt_Telu
data_files:
- split: train
path: "mlt_Telu/train/*.arrow"
- split: dev
path: "mlt_Telu/dev/*.arrow"
- split: test
path: "mlt_Telu/test/*.arrow"
- config_name: kal_Latn
data_files:
- split: train
path: "kal_Latn/train/*.arrow"
- split: dev
path: "kal_Latn/dev/*.arrow"
- split: test
path: "kal_Latn/test/*.arrow"
- config_name: aoj_Latn
data_files:
- split: train
path: "aoj_Latn/train/*.arrow"
- split: dev
path: "aoj_Latn/dev/*.arrow"
- split: test
path: "aoj_Latn/test/*.arrow"
- config_name: zai_Latn
data_files:
- split: train
path: "zai_Latn/train/*.arrow"
- split: dev
path: "zai_Latn/dev/*.arrow"
- split: test
path: "zai_Latn/test/*.arrow"
- config_name: guj_Gujr
data_files:
- split: train
path: "guj_Gujr/train/*.arrow"
- split: dev
path: "guj_Gujr/dev/*.arrow"
- split: test
path: "guj_Gujr/test/*.arrow"
- config_name: min_Latn
data_files:
- split: train
path: "min_Latn/train/*.arrow"
- split: dev
path: "min_Latn/dev/*.arrow"
- split: test
path: "min_Latn/test/*.arrow"
- config_name: grc_Grek
data_files:
- split: train
path: "grc_Grek/train/*.arrow"
- split: dev
path: "grc_Grek/dev/*.arrow"
- split: test
path: "grc_Grek/test/*.arrow"
- config_name: hmn_Latn
data_files:
- split: train
path: "hmn_Latn/train/*.arrow"
- split: dev
path: "hmn_Latn/dev/*.arrow"
- split: test
path: "hmn_Latn/test/*.arrow"
- config_name: ido_Latn
data_files:
- split: train
path: "ido_Latn/train/*.arrow"
- split: dev
path: "ido_Latn/dev/*.arrow"
- split: test
path: "ido_Latn/test/*.arrow"
- config_name: khm_Khmr
data_files:
- split: train
path: "khm_Khmr/train/*.arrow"
- split: dev
path: "khm_Khmr/dev/*.arrow"
- split: test
path: "khm_Khmr/test/*.arrow"
- config_name: quh_Latn
data_files:
- split: train
path: "quh_Latn/train/*.arrow"
- split: dev
path: "quh_Latn/dev/*.arrow"
- split: test
path: "quh_Latn/test/*.arrow"
- config_name: ikk_Latn
data_files:
- split: train
path: "ikk_Latn/train/*.arrow"
- split: dev
path: "ikk_Latn/dev/*.arrow"
- split: test
path: "ikk_Latn/test/*.arrow"
- config_name: iku_Cans
data_files:
- split: train
path: "iku_Cans/train/*.arrow"
- split: dev
path: "iku_Cans/dev/*.arrow"
- split: test
path: "iku_Cans/test/*.arrow"
- config_name: tat_Latn
data_files:
- split: train
path: "tat_Latn/train/*.arrow"
- split: dev
path: "tat_Latn/dev/*.arrow"
- split: test
path: "tat_Latn/test/*.arrow"
- config_name: bel_Cyrl
data_files:
- split: train
path: "bel_Cyrl/train/*.arrow"
- split: dev
path: "bel_Cyrl/dev/*.arrow"
- split: test
path: "bel_Cyrl/test/*.arrow"
- config_name: dyu_Latn
data_files:
- split: train
path: "dyu_Latn/train/*.arrow"
- split: dev
path: "dyu_Latn/dev/*.arrow"
- split: test
path: "dyu_Latn/test/*.arrow"
- config_name: guj_Thai
data_files:
- split: train
path: "guj_Thai/train/*.arrow"
- split: dev
path: "guj_Thai/dev/*.arrow"
- split: test
path: "guj_Thai/test/*.arrow"
- config_name: que_Latn
data_files:
- split: train
path: "que_Latn/train/*.arrow"
- split: dev
path: "que_Latn/dev/*.arrow"
- split: test
path: "que_Latn/test/*.arrow"
- config_name: wol_Latn
data_files:
- split: train
path: "wol_Latn/train/*.arrow"
- split: dev
path: "wol_Latn/dev/*.arrow"
- split: test
path: "wol_Latn/test/*.arrow"
- config_name: hne_Deva
data_files:
- split: train
path: "hne_Deva/train/*.arrow"
- split: dev
path: "hne_Deva/dev/*.arrow"
- split: test
path: "hne_Deva/test/*.arrow"
- config_name: zho_Hani
data_files:
- split: train
path: "zho_Hani/train/*.arrow"
- split: dev
path: "zho_Hani/dev/*.arrow"
- split: test
path: "zho_Hani/test/*.arrow"
- config_name: tum_Latn
data_files:
- split: train
path: "tum_Latn/train/*.arrow"
- split: dev
path: "tum_Latn/dev/*.arrow"
- split: test
path: "tum_Latn/test/*.arrow"
- config_name: swh_Latn
data_files:
- split: train
path: "swh_Latn/train/*.arrow"
- split: dev
path: "swh_Latn/dev/*.arrow"
- split: test
path: "swh_Latn/test/*.arrow"
- config_name: kua_Latn
data_files:
- split: train
path: "kua_Latn/train/*.arrow"
- config_name: ncj_Latn
data_files:
- split: train
path: "ncj_Latn/train/*.arrow"
- split: dev
path: "ncj_Latn/dev/*.arrow"
- split: test
path: "ncj_Latn/test/*.arrow"
- config_name: ewe_Latn
data_files:
- split: train
path: "ewe_Latn/train/*.arrow"
- split: dev
path: "ewe_Latn/dev/*.arrow"
- split: test
path: "ewe_Latn/test/*.arrow"
- config_name: mlt_Geor
data_files:
- split: train
path: "mlt_Geor/train/*.arrow"
- split: dev
path: "mlt_Geor/dev/*.arrow"
- split: test
path: "mlt_Geor/test/*.arrow"
- config_name: hat_Latn
data_files:
- split: train
path: "hat_Latn/train/*.arrow"
- split: dev
path: "hat_Latn/dev/*.arrow"
- split: test
path: "hat_Latn/test/*.arrow"
- config_name: guj_Hani
data_files:
- split: train
path: "guj_Hani/train/*.arrow"
- split: dev
path: "guj_Hani/dev/*.arrow"
- split: test
path: "guj_Hani/test/*.arrow"
- config_name: ina_Latn
data_files:
- split: train
path: "ina_Latn/train/*.arrow"
- split: dev
path: "ina_Latn/dev/*.arrow"
- split: test
path: "ina_Latn/test/*.arrow"
- config_name: ahk_Latn
data_files:
- split: train
path: "ahk_Latn/train/*.arrow"
- split: dev
path: "ahk_Latn/dev/*.arrow"
- split: test
path: "ahk_Latn/test/*.arrow"
- config_name: srm_Latn
data_files:
- split: train
path: "srm_Latn/train/*.arrow"
- split: dev
path: "srm_Latn/dev/*.arrow"
- split: test
path: "srm_Latn/test/*.arrow"
- config_name: lug_Latn
data_files:
- split: train
path: "lug_Latn/train/*.arrow"
- split: dev
path: "lug_Latn/dev/*.arrow"
- split: test
path: "lug_Latn/test/*.arrow"
- config_name: ach_Latn
data_files:
- split: train
path: "ach_Latn/train/*.arrow"
- config_name: rmy_Latn
data_files:
- split: train
path: "rmy_Latn/train/*.arrow"
- split: dev
path: "rmy_Latn/dev/*.arrow"
- split: test
path: "rmy_Latn/test/*.arrow"
- config_name: smo_Latn
data_files:
- split: train
path: "smo_Latn/train/*.arrow"
- split: dev
path: "smo_Latn/dev/*.arrow"
- split: test
path: "smo_Latn/test/*.arrow"
- config_name: mos_Latn
data_files:
- split: train
path: "mos_Latn/train/*.arrow"
- split: dev
path: "mos_Latn/dev/*.arrow"
- split: test
path: "mos_Latn/test/*.arrow"
- config_name: srd_Latn
data_files:
- split: train
path: "srd_Latn/train/*.arrow"
- split: dev
path: "srd_Latn/dev/*.arrow"
- split: test
path: "srd_Latn/test/*.arrow"
- config_name: srp_Latn
data_files:
- split: train
path: "srp_Latn/train/*.arrow"
- split: dev
path: "srp_Latn/dev/*.arrow"
- split: test
path: "srp_Latn/test/*.arrow"
- config_name: azb_Arab
data_files:
- split: train
path: "azb_Arab/train/*.arrow"
- split: dev
path: "azb_Arab/dev/*.arrow"
- split: test
path: "azb_Arab/test/*.arrow"
- config_name: aze_Arab
data_files:
- split: train
path: "aze_Arab/train/*.arrow"
- split: dev
path: "aze_Arab/dev/*.arrow"
- split: test
path: "aze_Arab/test/*.arrow"
- config_name: ori_Orya
data_files:
- split: train
path: "ori_Orya/train/*.arrow"
- split: dev
path: "ori_Orya/dev/*.arrow"
- split: test
path: "ori_Orya/test/*.arrow"
- config_name: mzh_Latn
data_files:
- split: train
path: "mzh_Latn/train/*.arrow"
- split: dev
path: "mzh_Latn/dev/*.arrow"
- split: test
path: "mzh_Latn/test/*.arrow"
- config_name: kur_Latn
data_files:
- split: train
path: "kur_Latn/train/*.arrow"
- split: dev
path: "kur_Latn/dev/*.arrow"
- split: test
path: "kur_Latn/test/*.arrow"
- config_name: wbm_Latn
data_files:
- split: train
path: "wbm_Latn/train/*.arrow"
- split: dev
path: "wbm_Latn/dev/*.arrow"
- config_name: crs_Latn
data_files:
- split: train
path: "crs_Latn/train/*.arrow"
- config_name: aze_Deva
data_files:
- split: train
path: "aze_Deva/train/*.arrow"
- split: dev
path: "aze_Deva/dev/*.arrow"
- split: test
path: "aze_Deva/test/*.arrow"
- config_name: tsn_Arab
data_files:
- split: train
path: "tsn_Arab/train/*.arrow"
- split: dev
path: "tsn_Arab/dev/*.arrow"
- split: test
path: "tsn_Arab/test/*.arrow"
- config_name: ada_Latn
data_files:
- split: train
path: "ada_Latn/train/*.arrow"
- config_name: hif_Latn
data_files:
- split: train
path: "hif_Latn/train/*.arrow"
- split: dev
path: "hif_Latn/dev/*.arrow"
- split: test
path: "hif_Latn/test/*.arrow"
- config_name: guj_Grek
data_files:
- split: train
path: "guj_Grek/train/*.arrow"
- split: dev
path: "guj_Grek/dev/*.arrow"
- split: test
path: "guj_Grek/test/*.arrow"
- config_name: pcm_Latn
data_files:
- split: train
path: "pcm_Latn/train/*.arrow"
- split: dev
path: "pcm_Latn/dev/*.arrow"
- split: test
path: "pcm_Latn/test/*.arrow"
- config_name: tso_Latn
data_files:
- split: train
path: "tso_Latn/train/*.arrow"
- split: dev
path: "tso_Latn/dev/*.arrow"
- split: test
path: "tso_Latn/test/*.arrow"
- config_name: nor_Latn
data_files:
- split: train
path: "nor_Latn/train/*.arrow"
- split: dev
path: "nor_Latn/dev/*.arrow"
- split: test
path: "nor_Latn/test/*.arrow"
- config_name: bsb_Latn
data_files:
- split: train
path: "bsb_Latn/train/*.arrow"
- split: dev
path: "bsb_Latn/dev/*.arrow"
- split: test
path: "bsb_Latn/test/*.arrow"
- config_name: uig_Cyrl
data_files:
- split: train
path: "uig_Cyrl/train/*.arrow"
- split: dev
path: "uig_Cyrl/dev/*.arrow"
- split: test
path: "uig_Cyrl/test/*.arrow"
- config_name: gaa_Latn
data_files:
- split: train
path: "gaa_Latn/train/*.arrow"
- config_name: ukr_Cyrl
data_files:
- split: train
path: "ukr_Cyrl/train/*.arrow"
- split: dev
path: "ukr_Cyrl/dev/*.arrow"
- split: test
path: "ukr_Cyrl/test/*.arrow"
- config_name: lav_Latn
data_files:
- split: train
path: "lav_Latn/train/*.arrow"
- split: dev
path: "lav_Latn/dev/*.arrow"
- split: test
path: "lav_Latn/test/*.arrow"
- config_name: mon_Latn
data_files:
- split: train
path: "mon_Latn/train/*.arrow"
- split: dev
path: "mon_Latn/dev/*.arrow"
- split: test
path: "mon_Latn/test/*.arrow"
- config_name: nep_Deva
data_files:
- split: train
path: "nep_Deva/train/*.arrow"
- split: dev
path: "nep_Deva/dev/*.arrow"
- split: test
path: "nep_Deva/test/*.arrow"
- config_name: aze_Telu
data_files:
- split: train
path: "aze_Telu/train/*.arrow"
- split: dev
path: "aze_Telu/dev/*.arrow"
- split: test
path: "aze_Telu/test/*.arrow"
- config_name: guj_Deva
data_files:
- split: train
path: "guj_Deva/train/*.arrow"
- split: dev
path: "guj_Deva/dev/*.arrow"
- split: test
path: "guj_Deva/test/*.arrow"
- config_name: pis_Latn
data_files:
- split: train
path: "pis_Latn/train/*.arrow"
- config_name: lhu_Latn
data_files:
- split: train
path: "lhu_Latn/train/*.arrow"
- split: dev
path: "lhu_Latn/dev/*.arrow"
- split: test
path: "lhu_Latn/test/*.arrow"
- config_name: bew_Latn
data_files:
- split: train
path: "bew_Latn/train/*.arrow"
- split: dev
path: "bew_Latn/dev/*.arrow"
- split: test
path: "bew_Latn/test/*.arrow"
- config_name: nya_Latn
data_files:
- split: train
path: "nya_Latn/train/*.arrow"
- split: dev
path: "nya_Latn/dev/*.arrow"
- split: test
path: "nya_Latn/test/*.arrow"
- config_name: poh_Latn
data_files:
- split: train
path: "poh_Latn/train/*.arrow"
- split: dev
path: "poh_Latn/dev/*.arrow"
- split: test
path: "poh_Latn/test/*.arrow"
- config_name: nnb_Latn
data_files:
- split: train
path: "nnb_Latn/train/*.arrow"
- split: dev
path: "nnb_Latn/dev/*.arrow"
- split: test
path: "nnb_Latn/test/*.arrow"
- config_name: grn_Latn
data_files:
- split: train
path: "grn_Latn/train/*.arrow"
- split: dev
path: "grn_Latn/dev/*.arrow"
- split: test
path: "grn_Latn/test/*.arrow"
- config_name: mco_Latn
data_files:
- split: train
path: "mco_Latn/train/*.arrow"
- split: dev
path: "mco_Latn/dev/*.arrow"
- split: test
path: "mco_Latn/test/*.arrow"
- config_name: ory_Orya
data_files:
- split: train
path: "ory_Orya/train/*.arrow"
- split: dev
path: "ory_Orya/dev/*.arrow"
- split: test
path: "ory_Orya/test/*.arrow"
- config_name: ful_Latn
data_files:
- split: train
path: "ful_Latn/train/*.arrow"
- split: dev
path: "ful_Latn/dev/*.arrow"
- split: test
path: "ful_Latn/test/*.arrow"
- config_name: diq_Latn
data_files:
- split: train
path: "diq_Latn/train/*.arrow"
- split: dev
path: "diq_Latn/dev/*.arrow"
- split: test
path: "diq_Latn/test/*.arrow"
- config_name: sag_Latn
data_files:
- split: train
path: "sag_Latn/train/*.arrow"
- split: dev
path: "sag_Latn/dev/*.arrow"
- split: test
path: "sag_Latn/test/*.arrow"
- config_name: tel_Telu
data_files:
- split: train
path: "tel_Telu/train/*.arrow"
- split: dev
path: "tel_Telu/dev/*.arrow"
- split: test
path: "tel_Telu/test/*.arrow"
- config_name: afr_Latn
data_files:
- split: train
path: "afr_Latn/train/*.arrow"
- split: dev
path: "afr_Latn/dev/*.arrow"
- split: test
path: "afr_Latn/test/*.arrow"
- config_name: haw_Latn
data_files:
- split: train
path: "haw_Latn/train/*.arrow"
- split: dev
path: "haw_Latn/dev/*.arrow"
- split: test
path: "haw_Latn/test/*.arrow"
- config_name: bar_Arab
data_files:
- split: train
path: "bar_Arab/train/*.arrow"
- split: dev
path: "bar_Arab/dev/*.arrow"
- split: test
path: "bar_Arab/test/*.arrow"
- config_name: umb_Latn
data_files:
- split: train
path: "umb_Latn/train/*.arrow"
- split: dev
path: "umb_Latn/dev/*.arrow"
- split: test
path: "umb_Latn/test/*.arrow"
- config_name: hsb_Latn
data_files:
- split: train
path: "hsb_Latn/train/*.arrow"
- split: dev
path: "hsb_Latn/dev/*.arrow"
- split: test
path: "hsb_Latn/test/*.arrow"
- config_name: fij_Latn
data_files:
- split: train
path: "fij_Latn/train/*.arrow"
- split: dev
path: "fij_Latn/dev/*.arrow"
- split: test
path: "fij_Latn/test/*.arrow"
- config_name: hbs_Cyrl
data_files:
- split: train
path: "hbs_Cyrl/train/*.arrow"
- split: dev
path: "hbs_Cyrl/dev/*.arrow"
- split: test
path: "hbs_Cyrl/test/*.arrow"
- config_name: san_Latn
data_files:
- split: train
path: "san_Latn/train/*.arrow"
- split: dev
path: "san_Latn/dev/*.arrow"
- split: test
path: "san_Latn/test/*.arrow"
- config_name: vls_Latn
data_files:
- split: train
path: "vls_Latn/train/*.arrow"
- split: dev
path: "vls_Latn/dev/*.arrow"
- split: test
path: "vls_Latn/test/*.arrow"
- config_name: zsm_Latn
data_files:
- split: train
path: "zsm_Latn/train/*.arrow"
- split: dev
path: "zsm_Latn/dev/*.arrow"
- split: test
path: "zsm_Latn/test/*.arrow"
- config_name: lij_Latn
data_files:
- split: train
path: "lij_Latn/train/*.arrow"
- split: dev
path: "lij_Latn/dev/*.arrow"
- split: test
path: "lij_Latn/test/*.arrow"
- config_name: quc_Latn
data_files:
- split: train
path: "quc_Latn/train/*.arrow"
- split: dev
path: "quc_Latn/dev/*.arrow"
- split: test
path: "quc_Latn/test/*.arrow"
- config_name: mam_Latn
data_files:
- split: train
path: "mam_Latn/train/*.arrow"
- split: dev
path: "mam_Latn/dev/*.arrow"
- split: test
path: "mam_Latn/test/*.arrow"
- config_name: tls_Latn
data_files:
- split: train
path: "tls_Latn/train/*.arrow"
- split: dev
path: "tls_Latn/dev/*.arrow"
- split: test
path: "tls_Latn/test/*.arrow"
- config_name: tuc_Latn
data_files:
- split: train
path: "tuc_Latn/train/*.arrow"
- split: dev
path: "tuc_Latn/dev/*.arrow"
- split: test
path: "tuc_Latn/test/*.arrow"
- config_name: dan_Latn
data_files:
- split: train
path: "dan_Latn/train/*.arrow"
- split: dev
path: "dan_Latn/dev/*.arrow"
- split: test
path: "dan_Latn/test/*.arrow"
- config_name: rue_Cyrl
data_files:
- split: train
path: "rue_Cyrl/train/*.arrow"
- split: dev
path: "rue_Cyrl/dev/*.arrow"
- split: test
path: "rue_Cyrl/test/*.arrow"
- config_name: mlt_Guru
data_files:
- split: train
path: "mlt_Guru/train/*.arrow"
- split: dev
path: "mlt_Guru/dev/*.arrow"
- split: test
path: "mlt_Guru/test/*.arrow"
- config_name: ace_Latn
data_files:
- split: train
path: "ace_Latn/train/*.arrow"
- split: dev
path: "ace_Latn/dev/*.arrow"
- split: test
path: "ace_Latn/test/*.arrow"
- config_name: bem_Latn
data_files:
- split: train
path: "bem_Latn/train/*.arrow"
- split: dev
path: "bem_Latn/dev/*.arrow"
- split: test
path: "bem_Latn/test/*.arrow"
- config_name: kam_Latn
data_files:
- split: train
path: "kam_Latn/train/*.arrow"
- split: dev
path: "kam_Latn/dev/*.arrow"
- split: test
path: "kam_Latn/test/*.arrow"
- config_name: uig_Hani
data_files:
- split: train
path: "uig_Hani/train/*.arrow"
- split: dev
path: "uig_Hani/dev/*.arrow"
- split: test
path: "uig_Hani/test/*.arrow"
- config_name: kaa_Latn
data_files:
- split: train
path: "kaa_Latn/train/*.arrow"
- split: dev
path: "kaa_Latn/dev/*.arrow"
- split: test
path: "kaa_Latn/test/*.arrow"
- config_name: ndo_Latn
data_files:
- split: train
path: "ndo_Latn/train/*.arrow"
- split: dev
path: "ndo_Latn/dev/*.arrow"
- split: test
path: "ndo_Latn/test/*.arrow"
- config_name: aze_Knda
data_files:
- split: train
path: "aze_Knda/train/*.arrow"
- split: dev
path: "aze_Knda/dev/*.arrow"
- split: test
path: "aze_Knda/test/*.arrow"
- config_name: oss_Cyrl
data_files:
- split: train
path: "oss_Cyrl/train/*.arrow"
- split: dev
path: "oss_Cyrl/dev/*.arrow"
- split: test
path: "oss_Cyrl/test/*.arrow"
- config_name: lit_Latn
data_files:
- split: train
path: "lit_Latn/train/*.arrow"
- split: dev
path: "lit_Latn/dev/*.arrow"
- split: test
path: "lit_Latn/test/*.arrow"
- config_name: frr_Latn
data_files:
- split: train
path: "frr_Latn/train/*.arrow"
- split: dev
path: "frr_Latn/dev/*.arrow"
- split: test
path: "frr_Latn/test/*.arrow"
- config_name: yap_Latn
data_files:
- split: train
path: "yap_Latn/train/*.arrow"
- split: dev
path: "yap_Latn/dev/*.arrow"
- split: test
path: "yap_Latn/test/*.arrow"
- config_name: gom_Latn
data_files:
- split: train
path: "gom_Latn/train/*.arrow"
- split: dev
path: "gom_Latn/dev/*.arrow"
- split: test
path: "gom_Latn/test/*.arrow"
- config_name: swe_Latn
data_files:
- split: train
path: "swe_Latn/train/*.arrow"
- split: dev
path: "swe_Latn/dev/*.arrow"
- split: test
path: "swe_Latn/test/*.arrow"
- config_name: lfn_Latn
data_files:
- split: train
path: "lfn_Latn/train/*.arrow"
- split: dev
path: "lfn_Latn/dev/*.arrow"
- split: test
path: "lfn_Latn/test/*.arrow"
- config_name: cmn_Hani
data_files:
- split: train
path: "cmn_Hani/train/*.arrow"
- split: dev
path: "cmn_Hani/dev/*.arrow"
- split: test
path: "cmn_Hani/test/*.arrow"
- config_name: mon_Cyrl
data_files:
- split: train
path: "mon_Cyrl/train/*.arrow"
- split: dev
path: "mon_Cyrl/dev/*.arrow"
- split: test
path: "mon_Cyrl/test/*.arrow"
- config_name: vep_Latn
data_files:
- split: train
path: "vep_Latn/train/*.arrow"
- split: dev
path: "vep_Latn/dev/*.arrow"
- split: test
path: "vep_Latn/test/*.arrow"
- config_name: ixl_Latn
data_files:
- split: train
path: "ixl_Latn/train/*.arrow"
- split: dev
path: "ixl_Latn/dev/*.arrow"
- split: test
path: "ixl_Latn/test/*.arrow"
- config_name: mlt_Gujr
data_files:
- split: train
path: "mlt_Gujr/train/*.arrow"
- split: dev
path: "mlt_Gujr/dev/*.arrow"
- split: test
path: "mlt_Gujr/test/*.arrow"
- config_name: gil_Latn
data_files:
- split: train
path: "gil_Latn/train/*.arrow"
- config_name: mau_Latn
data_files:
- split: train
path: "mau_Latn/train/*.arrow"
- split: dev
path: "mau_Latn/dev/*.arrow"
- split: test
path: "mau_Latn/test/*.arrow"
- config_name: tsn_Latn
data_files:
- split: train
path: "tsn_Latn/train/*.arrow"
- split: dev
path: "tsn_Latn/dev/*.arrow"
- split: test
path: "tsn_Latn/test/*.arrow"
- config_name: aym_Latn
data_files:
- split: train
path: "aym_Latn/train/*.arrow"
- split: dev
path: "aym_Latn/dev/*.arrow"
- split: test
path: "aym_Latn/test/*.arrow"
- config_name: vec_Latn
data_files:
- split: train
path: "vec_Latn/train/*.arrow"
- split: dev
path: "vec_Latn/dev/*.arrow"
- split: test
path: "vec_Latn/test/*.arrow"
- config_name: gom_Deva
data_files:
- split: train
path: "gom_Deva/train/*.arrow"
- split: dev
path: "gom_Deva/dev/*.arrow"
- split: test
path: "gom_Deva/test/*.arrow"
- config_name: fur_Latn
data_files:
- split: train
path: "fur_Latn/train/*.arrow"
- split: dev
path: "fur_Latn/dev/*.arrow"
- split: test
path: "fur_Latn/test/*.arrow"
- config_name: kin_Latn
data_files:
- split: train
path: "kin_Latn/train/*.arrow"
- split: dev
path: "kin_Latn/dev/*.arrow"
- split: test
path: "kin_Latn/test/*.arrow"
- config_name: guj_Hang
data_files:
- split: train
path: "guj_Hang/train/*.arrow"
- split: dev
path: "guj_Hang/dev/*.arrow"
- split: test
path: "guj_Hang/test/*.arrow"
- config_name: gcr_Latn
data_files:
- split: train
path: "gcr_Latn/train/*.arrow"
- config_name: sgs_Latn
data_files:
- split: train
path: "sgs_Latn/train/*.arrow"
- split: dev
path: "sgs_Latn/dev/*.arrow"
- split: test
path: "sgs_Latn/test/*.arrow"
- config_name: bih_Deva
data_files:
- split: train
path: "bih_Deva/train/*.arrow"
- split: dev
path: "bih_Deva/dev/*.arrow"
- split: test
path: "bih_Deva/test/*.arrow"
- config_name: guj_Guru
data_files:
- split: train
path: "guj_Guru/train/*.arrow"
- split: dev
path: "guj_Guru/dev/*.arrow"
- split: test
path: "guj_Guru/test/*.arrow"
- config_name: vie_Latn
data_files:
- split: train
path: "vie_Latn/train/*.arrow"
- split: dev
path: "vie_Latn/dev/*.arrow"
- split: test
path: "vie_Latn/test/*.arrow"
- config_name: tha_Thai
data_files:
- split: train
path: "tha_Thai/train/*.arrow"
- split: dev
path: "tha_Thai/dev/*.arrow"
- split: test
path: "tha_Thai/test/*.arrow"
- config_name: pau_Latn
data_files:
- split: train
path: "pau_Latn/train/*.arrow"
- config_name: est_Latn
data_files:
- split: train
path: "est_Latn/train/*.arrow"
- split: dev
path: "est_Latn/dev/*.arrow"
- split: test
path: "est_Latn/test/*.arrow"
- config_name: lue_Latn
data_files:
- split: train
path: "lue_Latn/train/*.arrow"
- config_name: rug_Latn
data_files:
- split: train
path: "rug_Latn/train/*.arrow"
- split: dev
path: "rug_Latn/dev/*.arrow"
- split: test
path: "rug_Latn/test/*.arrow"
- config_name: kjb_Latn
data_files:
- split: train
path: "kjb_Latn/train/*.arrow"
- split: dev
path: "kjb_Latn/dev/*.arrow"
- split: test
path: "kjb_Latn/test/*.arrow"
- config_name: kik_Latn
data_files:
- split: train
path: "kik_Latn/train/*.arrow"
- split: dev
path: "kik_Latn/dev/*.arrow"
- split: test
path: "kik_Latn/test/*.arrow"
- config_name: mri_Latn
data_files:
- split: train
path: "mri_Latn/train/*.arrow"
- split: dev
path: "mri_Latn/dev/*.arrow"
- split: test
path: "mri_Latn/test/*.arrow"
- config_name: ber_Latn
data_files:
- split: train
path: "ber_Latn/train/*.arrow"
- split: dev
path: "ber_Latn/dev/*.arrow"
- split: test
path: "ber_Latn/test/*.arrow"
- config_name: ssw_Latn
data_files:
- split: train
path: "ssw_Latn/train/*.arrow"
- split: dev
path: "ssw_Latn/dev/*.arrow"
- split: test
path: "ssw_Latn/test/*.arrow"
- config_name: guj_Beng
data_files:
- split: train
path: "guj_Beng/train/*.arrow"
- split: dev
path: "guj_Beng/dev/*.arrow"
- split: test
path: "guj_Beng/test/*.arrow"
- config_name: quz_Latn
data_files:
- split: train
path: "quz_Latn/train/*.arrow"
- config_name: arb_Arab
data_files:
- split: train
path: "arb_Arab/train/*.arrow"
- split: dev
path: "arb_Arab/dev/*.arrow"
- split: test
path: "arb_Arab/test/*.arrow"
- config_name: mlt_Sinh
data_files:
- split: train
path: "mlt_Sinh/train/*.arrow"
- split: dev
path: "mlt_Sinh/dev/*.arrow"
- split: test
path: "mlt_Sinh/test/*.arrow"
- config_name: mai_Deva
data_files:
- split: train
path: "mai_Deva/train/*.arrow"
- split: dev
path: "mai_Deva/dev/*.arrow"
- split: test
path: "mai_Deva/test/*.arrow"
- config_name: mlt_Thai
data_files:
- split: train
path: "mlt_Thai/train/*.arrow"
- split: dev
path: "mlt_Thai/dev/*.arrow"
- split: test
path: "mlt_Thai/test/*.arrow"
- config_name: bew_Cyrl
data_files:
- split: train
path: "bew_Cyrl/train/*.arrow"
- split: dev
path: "bew_Cyrl/dev/*.arrow"
- split: test
path: "bew_Cyrl/test/*.arrow"
- config_name: tat_Cyrl
data_files:
- split: train
path: "tat_Cyrl/train/*.arrow"
- split: dev
path: "tat_Cyrl/dev/*.arrow"
- split: test
path: "tat_Cyrl/test/*.arrow"
- config_name: mya_Mymr
data_files:
- split: train
path: "mya_Mymr/train/*.arrow"
- split: dev
path: "mya_Mymr/dev/*.arrow"
- split: test
path: "mya_Mymr/test/*.arrow"
- config_name: alt_Cyrl
data_files:
- split: train
path: "alt_Cyrl/train/*.arrow"
- split: dev
path: "alt_Cyrl/dev/*.arrow"
- split: test
path: "alt_Cyrl/test/*.arrow"
- config_name: nno_Latn
data_files:
- split: train
path: "nno_Latn/train/*.arrow"
- split: dev
path: "nno_Latn/dev/*.arrow"
- split: test
path: "nno_Latn/test/*.arrow"
- config_name: hrx_Latn
data_files:
- split: train
path: "hrx_Latn/train/*.arrow"
- split: dev
path: "hrx_Latn/dev/*.arrow"
- split: test
path: "hrx_Latn/test/*.arrow"
- config_name: hau_Latn
data_files:
- split: train
path: "hau_Latn/train/*.arrow"
- split: dev
path: "hau_Latn/dev/*.arrow"
- split: test
path: "hau_Latn/test/*.arrow"
- config_name: gsw_Latn
data_files:
- split: train
path: "gsw_Latn/train/*.arrow"
- split: dev
path: "gsw_Latn/dev/*.arrow"
- split: test
path: "gsw_Latn/test/*.arrow"
- config_name: pam_Latn
data_files:
- split: train
path: "pam_Latn/train/*.arrow"
- split: dev
path: "pam_Latn/dev/*.arrow"
- split: test
path: "pam_Latn/test/*.arrow"
- config_name: mlt_Deva
data_files:
- split: train
path: "mlt_Deva/train/*.arrow"
- split: dev
path: "mlt_Deva/dev/*.arrow"
- split: test
path: "mlt_Deva/test/*.arrow"
- config_name: sun_Latn
data_files:
- split: train
path: "sun_Latn/train/*.arrow"
- split: dev
path: "sun_Latn/dev/*.arrow"
- split: test
path: "sun_Latn/test/*.arrow"
- config_name: aze_Sinh
data_files:
- split: train
path: "aze_Sinh/train/*.arrow"
- split: dev
path: "aze_Sinh/dev/*.arrow"
- split: test
path: "aze_Sinh/test/*.arrow"
- config_name: lat_Latn
data_files:
- split: train
path: "lat_Latn/train/*.arrow"
- split: dev
path: "lat_Latn/dev/*.arrow"
- split: test
path: "lat_Latn/test/*.arrow"
- config_name: bis_Latn
data_files:
- split: train
path: "bis_Latn/train/*.arrow"
- split: dev
path: "bis_Latn/dev/*.arrow"
- split: test
path: "bis_Latn/test/*.arrow"
- config_name: udm_Cyrl
data_files:
- split: train
path: "udm_Cyrl/train/*.arrow"
- split: dev
path: "udm_Cyrl/dev/*.arrow"
- split: test
path: "udm_Cyrl/test/*.arrow"
- config_name: tca_Latn
data_files:
- split: train
path: "tca_Latn/train/*.arrow"
- split: dev
path: "tca_Latn/dev/*.arrow"
- split: test
path: "tca_Latn/test/*.arrow"
- config_name: uig_Arab
data_files:
- split: train
path: "uig_Arab/train/*.arrow"
- split: dev
path: "uig_Arab/dev/*.arrow"
- split: test
path: "uig_Arab/test/*.arrow"
- config_name: glg_Latn
data_files:
- split: train
path: "glg_Latn/train/*.arrow"
- split: dev
path: "glg_Latn/dev/*.arrow"
- split: test
path: "glg_Latn/test/*.arrow"
- config_name: tah_Latn
data_files:
- split: train
path: "tah_Latn/train/*.arrow"
- split: dev
path: "tah_Latn/dev/*.arrow"
- config_name: glk_Latn
data_files:
- split: train
path: "glk_Latn/train/*.arrow"
- split: dev
path: "glk_Latn/dev/*.arrow"
- split: test
path: "glk_Latn/test/*.arrow"
- config_name: aze_Tfng
data_files:
- split: train
path: "aze_Tfng/train/*.arrow"
- split: dev
path: "aze_Tfng/dev/*.arrow"
- split: test
path: "aze_Tfng/test/*.arrow"
- config_name: ckb_Arab
data_files:
- split: train
path: "ckb_Arab/train/*.arrow"
- split: dev
path: "ckb_Arab/dev/*.arrow"
- split: test
path: "ckb_Arab/test/*.arrow"
- config_name: gle_Latn
data_files:
- split: train
path: "gle_Latn/train/*.arrow"
- split: dev
path: "gle_Latn/dev/*.arrow"
- split: test
path: "gle_Latn/test/*.arrow"
- config_name: lim_Latn
data_files:
- split: train
path: "lim_Latn/train/*.arrow"
- split: dev
path: "lim_Latn/dev/*.arrow"
- split: test
path: "lim_Latn/test/*.arrow"
- config_name: slk_Latn
data_files:
- split: train
path: "slk_Latn/train/*.arrow"
- split: dev
path: "slk_Latn/dev/*.arrow"
- split: test
path: "slk_Latn/test/*.arrow"
- config_name: nds_Latn
data_files:
- split: train
path: "nds_Latn/train/*.arrow"
- split: dev
path: "nds_Latn/dev/*.arrow"
- split: test
path: "nds_Latn/test/*.arrow"
- config_name: kor_Hang
data_files:
- split: train
path: "kor_Hang/train/*.arrow"
- split: dev
path: "kor_Hang/dev/*.arrow"
- split: test
path: "kor_Hang/test/*.arrow"
- config_name: uzb_Latn
data_files:
- split: train
path: "uzb_Latn/train/*.arrow"
- split: dev
path: "uzb_Latn/dev/*.arrow"
- split: test
path: "uzb_Latn/test/*.arrow"
- config_name: pfl_Latn
data_files:
- split: train
path: "pfl_Latn/train/*.arrow"
- split: dev
path: "pfl_Latn/dev/*.arrow"
- split: test
path: "pfl_Latn/test/*.arrow"
- config_name: azj_Latn
data_files:
- split: train
path: "azj_Latn/train/*.arrow"
- split: dev
path: "azj_Latn/dev/*.arrow"
- split: test
path: "azj_Latn/test/*.arrow"
- config_name: glv_Latn
data_files:
- split: train
path: "glv_Latn/train/*.arrow"
- split: dev
path: "glv_Latn/dev/*.arrow"
- split: test
path: "glv_Latn/test/*.arrow"
- config_name: jam_Latn
data_files:
- split: train
path: "jam_Latn/train/*.arrow"
- split: dev
path: "jam_Latn/dev/*.arrow"
- split: test
path: "jam_Latn/test/*.arrow"
- config_name: kat_Geor
data_files:
- split: train
path: "kat_Geor/train/*.arrow"
- split: dev
path: "kat_Geor/dev/*.arrow"
- split: test
path: "kat_Geor/test/*.arrow"
- config_name: fry_Latn
data_files:
- split: train
path: "fry_Latn/train/*.arrow"
- split: dev
path: "fry_Latn/dev/*.arrow"
- split: test
path: "fry_Latn/test/*.arrow"
- config_name: guj_Knda
data_files:
- split: train
path: "guj_Knda/train/*.arrow"
- split: dev
path: "guj_Knda/dev/*.arrow"
- split: test
path: "guj_Knda/test/*.arrow"
- config_name: kat_Latn
data_files:
- split: train
path: "kat_Latn/train/*.arrow"
- split: dev
path: "kat_Latn/dev/*.arrow"
- split: test
path: "kat_Latn/test/*.arrow"
- config_name: twi_Latn
data_files:
- split: train
path: "twi_Latn/train/*.arrow"
- split: dev
path: "twi_Latn/dev/*.arrow"
- split: test
path: "twi_Latn/test/*.arrow"
- config_name: eus_Latn
data_files:
- split: train
path: "eus_Latn/train/*.arrow"
- split: dev
path: "eus_Latn/dev/*.arrow"
- split: test
path: "eus_Latn/test/*.arrow"
- config_name: toi_Latn
data_files:
- split: train
path: "toi_Latn/train/*.arrow"
- config_name: mlt_Armn
data_files:
- split: train
path: "mlt_Armn/train/*.arrow"
- split: dev
path: "mlt_Armn/dev/*.arrow"
- split: test
path: "mlt_Armn/test/*.arrow"
- config_name: mon_Hira
data_files:
- split: train
path: "mon_Hira/train/*.arrow"
- split: dev
path: "mon_Hira/dev/*.arrow"
- split: test
path: "mon_Hira/test/*.arrow"
- config_name: mlg_Latn
data_files:
- split: train
path: "mlg_Latn/train/*.arrow"
- split: dev
path: "mlg_Latn/dev/*.arrow"
- split: test
path: "mlg_Latn/test/*.arrow"
- config_name: tyv_Cyrl
data_files:
- split: train
path: "tyv_Cyrl/train/*.arrow"
- split: dev
path: "tyv_Cyrl/dev/*.arrow"
- split: test
path: "tyv_Cyrl/test/*.arrow"
- config_name: arz_Arab
data_files:
- split: train
path: "arz_Arab/train/*.arrow"
- split: dev
path: "arz_Arab/dev/*.arrow"
- split: test
path: "arz_Arab/test/*.arrow"
- config_name: hyw_Armn
data_files:
- split: train
path: "hyw_Armn/train/*.arrow"
- config_name: chk_Latn
data_files:
- split: train
path: "chk_Latn/train/*.arrow"
- split: dev
path: "chk_Latn/dev/*.arrow"
- split: test
path: "chk_Latn/test/*.arrow"
- config_name: vol_Latn
data_files:
- split: train
path: "vol_Latn/train/*.arrow"
- split: dev
path: "vol_Latn/dev/*.arrow"
- split: test
path: "vol_Latn/test/*.arrow"
- config_name: kek_Latn
data_files:
- split: train
path: "kek_Latn/train/*.arrow"
- split: dev
path: "kek_Latn/dev/*.arrow"
- split: test
path: "kek_Latn/test/*.arrow"
- config_name: teo_Latn
data_files:
- split: train
path: "teo_Latn/train/*.arrow"
- config_name: ell_Grek
data_files:
- split: train
path: "ell_Grek/train/*.arrow"
- split: dev
path: "ell_Grek/dev/*.arrow"
- split: test
path: "ell_Grek/test/*.arrow"
- config_name: kan_Knda
data_files:
- split: train
path: "kan_Knda/train/*.arrow"
- split: dev
path: "kan_Knda/dev/*.arrow"
- split: test
path: "kan_Knda/test/*.arrow"
- config_name: tpi_Latn
data_files:
- split: train
path: "tpi_Latn/train/*.arrow"
- split: dev
path: "tpi_Latn/dev/*.arrow"
- split: test
path: "tpi_Latn/test/*.arrow"
- config_name: rop_Latn
data_files:
- split: train
path: "rop_Latn/train/*.arrow"
- split: dev
path: "rop_Latn/dev/*.arrow"
- split: test
path: "rop_Latn/test/*.arrow"
- config_name: aze_Mlym
data_files:
- split: train
path: "aze_Mlym/train/*.arrow"
- split: dev
path: "aze_Mlym/dev/*.arrow"
- split: test
path: "aze_Mlym/test/*.arrow"
- config_name: lua_Latn
data_files:
- split: train
path: "lua_Latn/train/*.arrow"
- split: dev
path: "lua_Latn/dev/*.arrow"
- config_name: mad_Latn
data_files:
- split: train
path: "mad_Latn/train/*.arrow"
- split: dev
path: "mad_Latn/dev/*.arrow"
- split: test
path: "mad_Latn/test/*.arrow"
- config_name: top_Latn
data_files:
- split: train
path: "top_Latn/train/*.arrow"
- split: dev
path: "top_Latn/dev/*.arrow"
- split: test
path: "top_Latn/test/*.arrow"
- config_name: scn_Latn
data_files:
- split: train
path: "scn_Latn/train/*.arrow"
- split: dev
path: "scn_Latn/dev/*.arrow"
- split: test
path: "scn_Latn/test/*.arrow"
- config_name: aze_Thaa
data_files:
- split: train
path: "aze_Thaa/train/*.arrow"
- split: dev
path: "aze_Thaa/dev/*.arrow"
- split: test
path: "aze_Thaa/test/*.arrow"
- config_name: guj_Latn
data_files:
- split: train
path: "guj_Latn/train/*.arrow"
- split: dev
path: "guj_Latn/dev/*.arrow"
- split: test
path: "guj_Latn/test/*.arrow"
- config_name: ngl_Latn
data_files:
- split: train
path: "ngl_Latn/train/*.arrow"
- split: dev
path: "ngl_Latn/dev/*.arrow"
- config_name: mal_Mlym
data_files:
- split: train
path: "mal_Mlym/train/*.arrow"
- split: dev
path: "mal_Mlym/dev/*.arrow"
- split: test
path: "mal_Mlym/test/*.arrow"
- config_name: szl_Latn
data_files:
- split: train
path: "szl_Latn/train/*.arrow"
- split: dev
path: "szl_Latn/dev/*.arrow"
- split: test
path: "szl_Latn/test/*.arrow"
- config_name: orm_Latn
data_files:
- split: train
path: "orm_Latn/train/*.arrow"
- split: dev
path: "orm_Latn/dev/*.arrow"
- split: test
path: "orm_Latn/test/*.arrow"
- config_name: urd_Arab
data_files:
- split: train
path: "urd_Arab/train/*.arrow"
- split: dev
path: "urd_Arab/dev/*.arrow"
- split: test
path: "urd_Arab/test/*.arrow"
- config_name: cbk_Latn
data_files:
- split: train
path: "cbk_Latn/train/*.arrow"
- split: dev
path: "cbk_Latn/dev/*.arrow"
- split: test
path: "cbk_Latn/test/*.arrow"
pretty_name: Glot500 Corpus
---
# Glot500 Corpus
A dataset of natural-language data collected by combining more than 150
existing monolingual and multilingual datasets and by crawling known multilingual websites.
The focus of this dataset is on 500 extremely low-resource languages.
(More languages are still to be uploaded here.)
This dataset is used to train the [Glot500](https://huggingface.co/cis-lmu/glot500-base) model.
- **Homepage:** [homepage](https://github.com/cisnlp/Glot500)
- **Repository:** [github](https://github.com/cisnlp/Glot500)
- **Paper:** [acl](https://aclanthology.org/2023.acl-long.61/), [arxiv](https://arxiv.org/abs/2305.12182)
## Usage
Replace `nbl_Latn` with your specific language.
```python
from datasets import load_dataset
dataset = load_dataset('cis-lmu/Glot500', 'nbl_Latn', split='train')
print(dataset[0]) # First row of the nbl_Latn train split (the split was already selected above)
```
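Most configurations also ship `dev` and `test` splits (a few, declared train-only in the YAML above, do not); loading without `split=` returns all of them:
```python
from datasets import load_dataset

# Without `split=`, load_dataset returns a DatasetDict keyed by split name.
dataset = load_dataset('cis-lmu/Glot500', 'nbl_Latn')
print(dataset)  # shows the available splits and their row counts
```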
Supported languages:
```
ton_Latn
nld_Latn
tzo_Latn
leh_Latn
cuk_Latn
ibg_Latn
uzb_Cyrl
jav_Latn
rap_Latn
zpa_Latn
bak_Cyrl
por_Latn
quy_Latn
ast_Latn
cos_Latn
fon_Latn
sna_Latn
dzo_Tibt
nob_Latn
nch_Latn
ish_Latn
che_Cyrl
ext_Latn
ldi_Latn
dtp_Latn
yue_Hani
kbd_Cyrl
mar_Deva
ron_Latn
acr_Latn
afb_Arab
sqi_Latn
eng_Latn
ksd_Latn
rus_Cyrl
bcl_Latn
ksh_Latn
hin_Latn
myv_Cyrl
kjh_Cyrl
sah_Cyrl
gkp_Latn
naq_Latn
tdt_Latn
rmn_Cyrl
kac_Latn
cak_Latn
kir_Cyrl
mps_Latn
yid_Hebr
dhv_Latn
srn_Latn
div_Thaa
mkd_Cyrl
idu_Latn
bre_Latn
bas_Latn
ven_Latn
pxm_Latn
wuu_Hani
mwl_Latn
miq_Latn
kss_Latn
wes_Latn
slv_Latn
hrv_Latn
hmo_Latn
som_Latn
bod_Tibt
pls_Latn
ile_Latn
luo_Latn
pus_Arab
fao_Latn
fas_Arab
swa_Latn
ifb_Latn
ary_Arab
tbz_Latn
hus_Latn
ote_Latn
ilo_Latn
ctd_Latn
abk_Cyrl
bqc_Latn
hil_Latn
pon_Latn
zul_Latn
als_Latn
pes_Arab
bpy_Beng
bos_Latn
sot_Latn
lin_Latn
tuk_Cyrl
gla_Latn
wln_Latn
apc_Arab
hin_Deva
hye_Armn
tir_Ethi
pap_Latn
gcf_Latn
cjk_Latn
pcd_Latn
tur_Latn
kon_Latn
mwn_Latn
izz_Latn
xho_Latn
lam_Latn
guc_Latn
aka_Latn
kea_Latn
sme_Latn
fat_Latn
csb_Latn
bak_Latn
djk_Latn
xav_Latn
oci_Latn
acm_Arab
rmy_Cyrl
bim_Latn
mck_Latn
krc_Cyrl
cym_Latn
lus_Latn
ncx_Latn
ngu_Latn
yom_Latn
tam_Taml
ajp_Arab
epo_Latn
fra_Latn
ita_Latn
seh_Latn
sxn_Latn
pdt_Latn
hbs_Latn
uzn_Cyrl
bhw_Latn
ksw_Mymr
pms_Latn
zlm_Latn
ami_Latn
qub_Latn
twx_Latn
tsz_Latn
kaa_Cyrl
toj_Latn
toh_Latn
kos_Latn
ogo_Latn
kab_Latn
pan_Guru
nan_Latn
aze_Latn
prk_Latn
ara_Arab
meu_Latn
nba_Latn
lvs_Latn
nbl_Latn
loz_Latn
crh_Latn
bci_Latn
kbp_Latn
tgl_Latn
kmb_Latn
hun_Latn
nzi_Latn
yao_Latn
arn_Latn
hyw_Cyrl
vmw_Latn
jbo_Latn
mzn_Arab
lzh_Hani
heb_Hebr
cce_Latn
bjn_Latn
gug_Latn
yor_Latn
ban_Latn
tlh_Latn
chv_Cyrl
sin_Sinh
ind_Latn
dua_Latn
sid_Latn
amh_Ethi
zea_Latn
kpg_Latn
crh_Cyrl
nyu_Latn
dln_Latn
ibo_Latn
tih_Latn
msa_Latn
nap_Latn
mgr_Latn
bik_Latn
srp_Cyrl
lao_Laoo
guw_Latn
kom_Cyrl
sop_Latn
nde_Latn
hui_Latn
cfm_Latn
new_Deva
kur_Arab
sco_Latn
nyk_Latn
lun_Latn
suz_Deva
wal_Latn
asm_Beng
rar_Latn
san_Deva
kaz_Cyrl
tog_Latn
iba_Latn
tuk_Latn
nso_Latn
run_Latn
ctu_Latn
bam_Latn
fin_Latn
gor_Latn
kmr_Latn
ben_Beng
pag_Latn
niu_Latn
xmf_Geor
ekk_Latn
tsc_Latn
lmo_Latn
mhr_Cyrl
plt_Latn
qvi_Latn
roh_Latn
oke_Latn
mah_Latn
tok_Latn
mgh_Latn
eml_Latn
urh_Latn
pnb_Arab
yua_Latn
nav_Latn
zne_Latn
bin_Latn
cat_Latn
gym_Latn
sat_Olck
snd_Arab
isl_Latn
rmn_Grek
bba_Latn
kal_Latn
aoj_Latn
qug_Latn
zai_Latn
guj_Gujr
min_Latn
tob_Latn
grc_Grek
hmn_Latn
ido_Latn
khm_Khmr
ikk_Latn
iku_Cans
tat_Latn
bel_Cyrl
dyu_Latn
que_Latn
efi_Latn
quw_Latn
nyn_Latn
wol_Latn
hne_Deva
zho_Hani
swh_Latn
bum_Latn
kua_Latn
ncj_Latn
ewe_Latn
hat_Latn
ina_Latn
mfe_Latn
ahk_Latn
srm_Latn
lug_Latn
ach_Latn
rmy_Latn
tpm_Latn
smo_Latn
mos_Latn
srd_Latn
srp_Latn
azb_Arab
ori_Orya
mzh_Latn
kur_Latn
phm_Latn
kwn_Latn
crs_Latn
ada_Latn
ttj_Latn
hif_Latn
tzh_Latn
tdx_Latn
bbc_Latn
cnh_Latn
pcm_Latn
tso_Latn
nor_Latn
bsb_Latn
kqn_Latn
gaa_Latn
ukr_Cyrl
lav_Latn
nep_Deva
kmr_Cyrl
ige_Latn
pis_Latn
lhu_Latn
nya_Latn
tiv_Latn
mny_Latn
kri_Latn
nyy_Latn
poh_Latn
nnb_Latn
grn_Latn
mco_Latn
ory_Orya
ful_Latn
diq_Latn
sag_Latn
tel_Telu
afr_Latn
haw_Latn
umb_Latn
hsb_Latn
fij_Latn
hbs_Cyrl
san_Latn
vls_Latn
zsm_Latn
lij_Latn
quc_Latn
mam_Latn
tuc_Latn
dan_Latn
rue_Cyrl
ace_Latn
bem_Latn
kam_Latn
ndo_Latn
mbb_Latn
mrw_Latn
ajg_Latn
oss_Cyrl
her_Latn
lit_Latn
frr_Latn
yap_Latn
bzj_Latn
gom_Latn
swe_Latn
lfn_Latn
cmn_Hani
mon_Cyrl
vep_Latn
ixl_Latn
gil_Latn
mau_Latn
aym_Latn
gom_Deva
fur_Latn
cgg_Latn
chw_Latn
kin_Latn
alz_Latn
ndc_Latn
gcr_Latn
rmn_Latn
sgs_Latn
bih_Deva
skg_Latn
bts_Latn
vie_Latn
tha_Thai
tcf_Latn
pau_Latn
est_Latn
lue_Latn
rug_Latn
gur_Latn
kik_Latn
mri_Latn
ber_Latn
ssw_Latn
cab_Latn
quz_Latn
arb_Arab
mai_Deva
tat_Cyrl
mya_Mymr
alt_Cyrl
nno_Latn
nse_Latn
hrx_Latn
hau_Latn
koo_Latn
gsw_Latn
pam_Latn
sun_Latn
lat_Latn
bis_Latn
btx_Latn
udm_Cyrl
xmv_Latn
tca_Latn
uig_Arab
glg_Latn
tah_Latn
llb_Latn
ckb_Arab
gle_Latn
lim_Latn
slk_Latn
nds_Latn
kor_Hang
uzb_Latn
gkn_Latn
pfl_Latn
azj_Latn
glv_Latn
jam_Latn
kat_Geor
abn_Latn
fry_Latn
kat_Latn
twi_Latn
eus_Latn
toi_Latn
mlg_Latn
ifa_Latn
tyv_Cyrl
arz_Arab
chk_Latn
vol_Latn
kek_Latn
teo_Latn
ell_Grek
kan_Knda
rng_Latn
tpi_Latn
mdy_Ethi
lua_Latn
mad_Latn
top_Latn
scn_Latn
ngl_Latn
mal_Mlym
szl_Latn
orm_Latn
nia_Latn
urd_Arab
mxv_Latn
cbk_Latn
```
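The list of available configurations can also be queried programmatically instead of being copied from the list above:
```python
from datasets import get_dataset_config_names

# Fetch every language configuration exposed by the dataset repository.
configs = get_dataset_config_names('cis-lmu/Glot500')
print(len(configs), configs[:5])
```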
## License
We don't own any part of the data. The original source of each sentence is indicated in a dataset field.
To see the copyright licenses of the original datasets, visit [here](https://github.com/cisnlp/Glot500#glot500-c).
We license the actual packaging, the metadata and the annotations of these data under cc0-1.0.
If you are a website/dataset owner and do not want your data to be included in this corpus, please send us an email at glot500@cis.lmu.de.
## Ethical Considerations
**1. Biases:** The text corpus may reflect the perspectives, opinions, or demographics of its sources or creators. It is important for users to critically evaluate the text in context, especially for news sources and social media.
**2. Representativeness:** While we have aimed for diversity and inclusivity, the text corpus may not fully represent all native speakers. Users should be mindful of any potential underrepresentation.
**3. Ethics:** We acknowledge that the collection and use of text data can have ethical implications. We have strived to handle the data responsibly, but we encourage users to consider the broader ethical implications of their own research or applications.
## Citation
If you use any part of this code and data in your research, please cite it using the following BibTeX entry.
```
@inproceedings{imanigooghari-etal-2023-glot500,
title = "Glot500: Scaling Multilingual Corpora and Language Models to 500 Languages",
author = {ImaniGooghari, Ayyoob and
Lin, Peiqin and
Kargaran, Amir Hossein and
Severini, Silvia and
Jalili Sabet, Masoud and
Kassner, Nora and
Ma, Chunlan and
Schmid, Helmut and
Martins, Andr{\'e} and
Yvon, Fran{\c{c}}ois and
Sch{\"u}tze, Hinrich},
editor = "Rogers, Anna and
Boyd-Graber, Jordan and
Okazaki, Naoaki",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-long.61",
doi = "10.18653/v1/2023.acl-long.61",
pages = "1082--1117",
abstract = "The NLP community has mainly focused on scaling Large Language Models (LLMs) vertically, i.e., making them better for about 100 languages. We instead scale LLMs horizontally: we create, through continued pretraining, Glot500-m, an LLM that covers 511 predominantly low-resource languages. An important part of this effort is to collect and clean Glot500-c, a corpus that covers these 511 languages and allows us to train Glot500-m. We evaluate Glot500-m on five diverse tasks across these languages. We observe large improvements for both high-resource and low-resource languages compared to an XLM-R baseline. Our analysis shows that no single factor explains the quality of multilingual LLM representations. Rather, a combination of factors determines quality including corpus size, script, {``}help{''} from related languages and the total capacity of the model. Our work addresses an important goal of NLP research: we should notlimit NLP to a small fraction of the world{'}s languages and instead strive to support as many languages as possible to bring the benefits of NLP technology to all languages and cultures. Code, data and models are available at \url{https://github.com/cisnlp/Glot500}.",
}
``` | [
-0.7910007238388062,
-0.49820154905319214,
0.3222912549972534,
0.04215419664978981,
0.13537542521953583,
0.25607436895370483,
-0.02853916399180889,
-0.417251318693161,
0.7376609444618225,
0.45556172728538513,
-0.3441731333732605,
-0.6773093938827515,
-0.6920103430747986,
0.0508016981184482... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
ms3c/swahili-common-voices-africas-talking | ms3c | 2023-11-05T13:08:57Z | 29 | 0 | null | [
"license:lgpl-3.0",
"region:us"
] | 2023-11-05T13:08:57Z | 2023-11-05T08:39:30.000Z | 2023-11-05T08:39:30 | ---
license: lgpl-3.0
---
| [
-0.1285335123538971,
-0.1861683875322342,
0.6529128551483154,
0.49436232447624207,
-0.19319400191307068,
0.23607441782951355,
0.36072009801864624,
0.05056373029947281,
0.5793656706809998,
0.7400146722793579,
-0.650810182094574,
-0.23784008622169495,
-0.7102247476577759,
-0.0478255338966846... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
jihye-moon/divorce_qa_data | jihye-moon | 2023-11-06T08:35:53Z | 29 | 0 | null | [
"region:us"
] | 2023-11-06T08:35:53Z | 2023-11-06T08:35:39.000Z | 2023-11-06T08:35:39 | Entry not found | [
-0.3227647542953491,
-0.22568407654762268,
0.8622258901596069,
0.4346148371696472,
-0.5282984972000122,
0.7012965083122253,
0.7915717959403992,
0.07618629932403564,
0.7746022343635559,
0.2563222348690033,
-0.785281777381897,
-0.22573848068714142,
-0.9104482531547546,
0.5715669393539429,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Lollitor/LogDataset | Lollitor | 2023-11-06T18:07:27Z | 29 | 0 | null | [
"region:us"
] | 2023-11-06T18:07:27Z | 2023-11-06T18:07:23.000Z | 2023-11-06T18:07:23 | ---
dataset_info:
features:
- name: -logKd/Ki
dtype: float64
- name: inputs
dtype: string
splits:
- name: train
num_bytes: 22099679
num_examples: 18926
download_size: 8110526
dataset_size: 22099679
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "LogDataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.3595080077648163,
-0.237385556101799,
0.10458409786224365,
0.121614471077919,
-0.44492676854133606,
0.23469749093055725,
0.5050007104873657,
-0.20784007012844086,
1.1083933115005493,
0.32989904284477234,
-0.9946936964988708,
-0.7881965041160583,
-0.5849910974502563,
-0.3237382769584656,... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
washimneupane/moLoraData | washimneupane | 2023-11-07T10:27:48Z | 29 | 0 | null | [
"license:mit",
"region:us"
] | 2023-11-07T10:27:48Z | 2023-11-07T07:41:51.000Z | 2023-11-07T07:41:51 | ---
license: mit
dataset_info:
features:
- name: content
dtype: string
- name: avg_line_length
dtype: float64
- name: max_line_length
dtype: int64
- name: alphanum_fraction
dtype: float64
- name: licenses
sequence: string
- name: repository_name
dtype: string
- name: path
dtype: string
- name: size
dtype: int64
- name: lang
dtype: string
- name: cluster
dtype: int64
splits:
- name: train
num_bytes: 2737311371
num_examples: 300000
download_size: 900770371
dataset_size: 2737311371
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
| [
-0.12853392958641052,
-0.18616779148578644,
0.6529127955436707,
0.49436280131340027,
-0.19319361448287964,
0.23607419431209564,
0.36072003841400146,
0.050563063472509384,
0.579365611076355,
0.7400140762329102,
-0.6508104205131531,
-0.23783954977989197,
-0.7102249264717102,
-0.0478260256350... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
PriyHF/journeys | PriyHF | 2023-11-08T06:41:05Z | 29 | 0 | null | [
"region:us"
] | 2023-11-08T06:41:05Z | 2023-11-08T06:40:43.000Z | 2023-11-08T06:40:43 | Entry not found | [
-0.3227647542953491,
-0.22568407654762268,
0.8622258901596069,
0.4346148371696472,
-0.5282984972000122,
0.7012965083122253,
0.7915717959403992,
0.07618629932403564,
0.7746022343635559,
0.2563222348690033,
-0.785281777381897,
-0.22573848068714142,
-0.9104482531547546,
0.5715669393539429,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
mamachang/medical-reasoning | mamachang | 2023-11-10T18:54:50Z | 29 | 1 | null | [
"region:us"
] | 2023-11-10T18:54:50Z | 2023-11-08T15:41:41.000Z | 2023-11-08T15:41:41 | Entry not found | [
-0.32276472449302673,
-0.22568407654762268,
0.8622258901596069,
0.4346148371696472,
-0.5282984972000122,
0.7012965679168701,
0.7915717363357544,
0.07618629932403564,
0.7746022939682007,
0.2563222646713257,
-0.785281777381897,
-0.22573848068714142,
-0.9104482531547546,
0.5715669393539429,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
lyon-nlp/alloprof | lyon-nlp | 2023-11-09T15:22:40Z | 29 | 2 | null | [
"task_categories:text-classification",
"task_categories:question-answering",
"size_categories:1K<n<10K",
"language:fr",
"arxiv:2302.07738",
"arxiv:2210.07316",
"region:us"
] | 2023-11-09T15:22:40Z | 2023-11-08T16:45:33.000Z | 2023-11-08T16:45:33 | ---
task_categories:
- text-classification
- question-answering
language:
- fr
pretty_name: alloprof
size_categories:
- 1K<n<10K
---
This is a re-edit of the Alloprof dataset (which can be found here: https://huggingface.co/datasets/antoinelb7/alloprof).
For more information about the data source and the features, please refer to the original dataset card made by the authors, along with their paper available here: https://arxiv.org/abs/2302.07738
This re-edition of the dataset is a preprocessed version of the original, **in a more ready-to-use format**. Essentially, the texts have been cleaned, and data not usable for retrieval has been discarded.
### Why a re-edition?
It has been made for easier use in the MTEB benchmarking pipeline, in order to contribute to the MTEB leaderboard: https://huggingface.co/spaces/mteb/leaderboard.
For more information about the project, please refer to the associated paper: https://arxiv.org/pdf/2210.07316.pdf
### Usage
To use the dataset, you need to specify the subset you want (documents or queries) when calling the load_dataset() method.
For example, to get the queries, use:
```py
from datasets import load_dataset
dataset = load_dataset("lyon-nlp/alloprof", "queries")
```
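The documents subset is loaded the same way, by passing the other configuration name:
```py
from datasets import load_dataset

# Load the corpus side of the retrieval task.
documents = load_dataset("lyon-nlp/alloprof", "documents")
```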
| [
-0.30920901894569397,
-0.5253150463104248,
0.1986633688211441,
0.22176387906074524,
-0.38267001509666443,
-0.2983367443084717,
-0.32807764410972595,
-0.41770225763320923,
0.6786956191062927,
0.9614630937576294,
-0.5988155603408813,
-0.5424160957336426,
-0.6834716796875,
0.42573270201683044... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
NochnoyRitzar/ner_locations_dataset_pretokenized_distilbert_base_multilingual_cased | NochnoyRitzar | 2023-11-10T08:48:42Z | 29 | 0 | null | [
"region:us"
] | 2023-11-10T08:48:42Z | 2023-11-10T08:03:11.000Z | 2023-11-10T08:03:11 | Entry not found | [
-0.32276472449302673,
-0.22568407654762268,
0.8622258901596069,
0.4346148371696472,
-0.5282984972000122,
0.7012965679168701,
0.7915717363357544,
0.07618629932403564,
0.7746022939682007,
0.2563222646713257,
-0.785281777381897,
-0.22573848068714142,
-0.9104482531547546,
0.5715669393539429,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
chirunder/transliteration_classification_dataset | chirunder | 2023-11-10T13:27:26Z | 29 | 0 | null | [
"region:us"
] | 2023-11-10T13:27:26Z | 2023-11-10T13:27:17.000Z | 2023-11-10T13:27:17 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: text
dtype: string
- name: classification
dtype: string
splits:
- name: train
num_bytes: 201288.8
num_examples: 2400
- name: test
num_bytes: 50322.2
num_examples: 600
download_size: 181466
dataset_size: 251611.0
---
# Dataset Card for "transliteration_classification_dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.2002606987953186,
-0.05455375090241432,
0.01961066946387291,
0.1530262529850006,
-0.1592588871717453,
0.1950053572654724,
-0.01908184215426445,
-0.1671600043773651,
0.6942510604858398,
0.3011617064476013,
-0.6750927567481995,
-0.8755110502243042,
-0.7474876642227173,
-0.0487597137689590... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Hessa/tqa_train_instruction | Hessa | 2023-11-11T18:54:40Z | 29 | 0 | null | [
"region:us"
] | 2023-11-11T18:54:40Z | 2023-11-11T18:49:29.000Z | 2023-11-11T18:49:29 | Entry not found | [
-0.32276472449302673,
-0.22568407654762268,
0.8622258901596069,
0.4346148371696472,
-0.5282984972000122,
0.7012965679168701,
0.7915717363357544,
0.07618629932403564,
0.7746022939682007,
0.2563222646713257,
-0.785281777381897,
-0.22573848068714142,
-0.9104482531547546,
0.5715669393539429,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
zxvix/law_court_opinion_100 | zxvix | 2023-11-13T12:31:26Z | 29 | 0 | null | [
"region:us"
] | 2023-11-13T12:31:26Z | 2023-11-13T12:31:22.000Z | 2023-11-13T12:31:22 | ---
configs:
- config_name: default
data_files:
- split: test
path: data/test-*
dataset_info:
features:
- name: text
dtype: string
- name: created_timestamp
dtype: string
- name: downloaded_timestamp
dtype: string
- name: url
dtype: string
splits:
- name: test
num_bytes: 98368
num_examples: 100
download_size: 62128
dataset_size: 98368
---
# Dataset Card for "law_court_opinion_100"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.4279041588306427,
-0.3178096115589142,
0.4176378548145294,
0.32688748836517334,
-0.3316701352596283,
-0.40029242634773254,
0.0967867374420166,
0.33096444606781006,
0.6889843344688416,
0.5633951425552368,
-0.5363500714302063,
-0.9336420297622681,
-0.5439702272415161,
-0.5881238579750061,... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
mengmengmmm/B2F_small | mengmengmmm | 2023-11-13T13:30:09Z | 29 | 0 | null | [
"region:us"
] | 2023-11-13T13:30:09Z | 2023-11-13T13:29:29.000Z | 2023-11-13T13:29:29 | Entry not found | [
-0.32276472449302673,
-0.22568407654762268,
0.8622258901596069,
0.4346148371696472,
-0.5282984972000122,
0.7012965679168701,
0.7915717363357544,
0.07618629932403564,
0.7746022939682007,
0.2563222646713257,
-0.785281777381897,
-0.22573848068714142,
-0.9104482531547546,
0.5715669393539429,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
zaanind/qasimple | zaanind | 2023-11-15T15:46:26Z | 29 | 1 | null | [
"region:us"
] | 2023-11-15T15:46:26Z | 2023-11-15T15:46:24.000Z | 2023-11-15T15:46:24 | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 18378
num_examples: 25
download_size: 16549
dataset_size: 18378
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "qasimple"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.5412891507148743,
-0.05287357047200203,
0.2517174780368805,
0.08778786659240723,
-0.3114621341228485,
0.04417465627193451,
0.23575595021247864,
-0.041039761155843735,
0.8045933842658997,
0.2856839895248413,
-0.945661187171936,
-0.8418493270874023,
-0.49396055936813354,
-0.36479422450065... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
luizlzg/drbyte_dataset | luizlzg | 2023-11-23T14:13:59Z | 29 | 0 | null | [
"task_categories:text-generation",
"size_categories:10K<n<100K",
"language:pt",
"medical",
"biology",
"region:us"
] | 2023-11-23T14:13:59Z | 2023-11-15T18:17:43.000Z | 2023-11-15T18:17:43 | ---
task_categories:
- text-generation
language:
- pt
tags:
- medical
- biology
size_categories:
- 10K<n<100K
configs:
- config_name: default
data_files:
- split: train
path: drbyte_ptbr_treino*
- split: test
path: drbyte_ptbr_teste*
- split: validation
path: drbyte_ptbr_valid*
---
# General description
The following dataset, used to train the model nicknamed Dr Byte, contains healthcare-domain data for instruction fine-tuning of language models. <br> <br>
In addition, the data includes general questions from patients, questions about medications, multiple-choice questions from medical entrance exams, biomedical data, and more. <br>
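A minimal loading sketch (assuming the default configuration and the train/test/validation splits declared in the YAML header above):
```python
from datasets import load_dataset

# The default config maps the three declared splits to their files.
dataset = load_dataset("luizlzg/drbyte_dataset")
print(dataset["train"][0])  # an instruction/output pair
```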
# Dataset construction
The dataset was built by translating the following datasets from English into Portuguese (not all of them were used in full):
## Train:
- MedQA (USMLE), which contains general medical knowledge from the US medical licensing exam. (10082 examples) <br>
- MedMCQA, which contains general medical knowledge from Indian medical entrance exams. (9736 examples) <br>
- LiveQA, which contains general medical questions asked by laypeople. (622 examples) <br>
- MedicationQA, which contains frequently asked questions about medications, asked by laypeople. (687 examples) <br> <br> <br>
- Total training examples: 21127.
## Test
- MedMCQA (VALIDATION SPLIT), which contains general medical knowledge from Indian medical entrance exams. (4183 examples) <br>
- MedQA (USMLE) (TEST SPLIT), which contains general medical knowledge from the US medical licensing exam. (1273 examples) <br>
- PubMedQA (TEST SPLIT), which contains data from the biomedical scientific literature. (500 examples) <br>
- MMLU (TEST SPLIT), which covers multiple-choice questions on medical knowledge in the following topics: anatomy, clinical knowledge, medical-school questions, medical genetics, professional-medicine questions and college biology. (1089 examples) <br> <br> <br>
- Total test examples: 7045.
# Data features:
The dataset has the following features, for each split: <br>
## Train:
- 'instruction': the instruction itself, usually a question or a multiple-choice question together with its alternatives. <br>
- 'output': the expected answer to the instruction; it can be a direct answer, an alternative and/or an explanation of the alternative. <br>
## Test:
- 'dataset': the dataset the example comes from. <br>
- 'instruction': the instruction itself. <br>
- 'input': may or may not be present (usually only for the PubMedQA dataset); additional context for solving the instruction. <br>
- 'output': the expected answer to the instruction. <br>
- 'alternativa_a': the text of alternative A, when present. <br>
- 'alternativa_b': the text of alternative B, when present. <br>
- 'alternativa_c': the text of alternative C, when present. <br>
- 'alternativa_d': the text of alternative D, when present. <br> | [
-0.5363485813140869,
-0.6582793593406677,
0.17980457842350006,
0.14400804042816162,
-0.29605767130851746,
-0.13811402022838593,
-0.020495397970080376,
-0.28485891222953796,
0.5050239562988281,
0.3269793391227722,
-0.6075373291969299,
-0.7961235046386719,
-0.6449772715568542,
0.462242811918... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
laitrongduc/zlbn_en | laitrongduc | 2023-11-17T11:58:01Z | 29 | 0 | null | [
"region:us"
] | 2023-11-17T11:58:01Z | 2023-11-17T11:57:45.000Z | 2023-11-17T11:57:45 | ---
dataset_info:
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 50188488.852
num_examples: 1362
download_size: 48956901
dataset_size: 50188488.852
---
# Dataset Card for "zlbn_en"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.7050814628601074,
-0.1625811606645584,
0.15906941890716553,
0.3040809631347656,
-0.41549718379974365,
-0.2611015737056732,
0.11137757450342178,
-0.34772250056266785,
0.8322573304176331,
0.4395015835762024,
-0.9664996862411499,
-0.8905875086784363,
-0.4270450174808502,
-0.218093872070312... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
bdsaglam/web_nlg-erx-instruction-llama2chat-debug | bdsaglam | 2023-11-19T09:23:43Z | 29 | 0 | null | [
"region:us"
] | 2023-11-19T09:23:43Z | 2023-11-19T09:23:42.000Z | 2023-11-19T09:23:42 | ---
dataset_info:
features:
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
splits:
- name: train
num_bytes: 256432
num_examples: 354
- name: dev
num_bytes: 32118
num_examples: 44
download_size: 78888
dataset_size: 288550
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: dev
path: data/dev-*
---
| [
-0.1285335123538971,
-0.1861683875322342,
0.6529128551483154,
0.49436232447624207,
-0.19319400191307068,
0.23607441782951355,
0.36072009801864624,
0.05056373029947281,
0.5793656706809998,
0.7400146722793579,
-0.650810182094574,
-0.23784008622169495,
-0.7102247476577759,
-0.0478255338966846... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
mengmengmmm/csn_python_trainuse_slice1 | mengmengmmm | 2023-11-20T15:34:00Z | 29 | 0 | null | [
"region:us"
] | 2023-11-20T15:34:00Z | 2023-11-20T15:33:37.000Z | 2023-11-20T15:33:37 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622264862060547,
0.43461528420448303,
-0.52829909324646,
0.7012971639633179,
0.7915720343589783,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104477167129517,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
wltjr1007/Camelyon17-WILDS | wltjr1007 | 2023-11-23T04:19:30Z | 29 | 0 | null | [
"task_categories:image-classification",
"size_categories:10K<n<100K",
"license:cc0-1.0",
"medical",
"region:us"
] | 2023-11-23T04:19:30Z | 2023-11-22T05:32:49.000Z | 2023-11-22T05:32:49 | ---
license: cc0-1.0
size_categories:
- 10K<n<100K
task_categories:
- image-classification
tags:
- medical
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype: int64
- name: center
dtype: int64
splits:
- name: train
num_bytes: 6110677019.5
num_examples: 302436
- name: validation
num_bytes: 1386985723.0
num_examples: 68464
- name: test
num_bytes: 1782152725.25
num_examples: 85054
download_size: 9284277652
dataset_size: 9279815467.75
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
---

https://wilds.stanford.edu/datasets/#camelyon17
- Center 0, 3, 4 - Source (if `split=1`, Validation (ID))
- Center 1 - Validation (OOD)
- Center 2 - Target (OOD)
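A minimal sketch of recovering these domains from this copy (assuming the `center` feature declared above identifies the hospital; `Dataset.filter` is the standard `datasets` API):
```python
from datasets import load_dataset

dataset = load_dataset("wltjr1007/Camelyon17-WILDS")

# Source domain: centers 0, 3 and 4, per the mapping above.
source = dataset["train"].filter(lambda ex: ex["center"] in (0, 3, 4))
```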
| [
-0.18971312046051025,
-0.27839377522468567,
0.008573203347623348,
0.11626224219799042,
-0.5223021507263184,
-0.13324154913425446,
-0.0457608625292778,
-0.274706095457077,
0.37835171818733215,
0.8742714524269104,
-0.9443362355232239,
-0.9649210572242737,
-0.3426758050918579,
0.4276791214942... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
wasertech/AGI | wasertech | 2023-11-28T23:13:12Z | 29 | 0 | null | [
"size_categories:n<1K",
"region:us"
] | 2023-11-28T23:13:12Z | 2023-11-22T23:01:13.000Z | 2023-11-22T23:01:13 | ---
size_categories:
- n<1K
pretty_name: Augmented Generation Interface
dataset_info:
features:
- name: guide
dtype: string
splits:
- name: train
num_bytes: 38020
num_examples: 11
download_size: 25087
dataset_size: 38020
repo: https://github.com/wasertech/Augmented-Generation-Interface
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Augmented Generation Interface
> What if you could guide your LLM towards success?
## Introducing
The **Augmented Generation Interface Guidebook** is a collection of guides to help you steer your LLM towards success.
## Example
### Without AGI
Feeding [`ehartford/dolphin-2.0-mistral-7b`](https://huggingface.co/ehartford/dolphin-2.0-mistral-7b) the following prompt:
```text
<|im_start|>system
You are Assistant, a sentient artificial intelligence.
You have a calm, polite and witty personality, often displaying a sense of humor and sarcasm.
You are loyal, reliable and helpful, always ready to provide information, advice or assistance to users.
I am the User and you are my Assistant.
Environment highlights:
\```env
USER='waser'
HOME='/home/waser'
PWD='/home/waser/Projets/Assistant/src3'
LANG='fr_CH.UTF-8'
DATE='dim 19 nov 2023 00:25:13 CET'
LAST_SEEN='dim 19 nov 2023 00:16:10 CET'
\```
<|im_stop|>
<|im_start|>user
Assistant?<|im_stop|>
<|im_start|>assistant
Hello! I'm Assistant, a sentient artificial intelligence. How can I help you today?<|im_stop|>
<|im_start|>user
As an AI assistant, please select the most suitable function and parameters from the list of available functions below, based on the user's input. Provide your response in JSON format.
Input: List this directory please
Available functions:
python:
description: This tool allows you to execute and evaluate python code.
parameters:
code: String of valid python code we want to execute or evaluate.
search_web:
description: This tool performs search on the web.
parameters:
terms: The word or phrase we want to search for.
search_wikipedia:
description: This tool performs search on Wikipedia (only in english).
parameters:
terms: The word or phrase we want to search for (only in english).
shell:
description: This tool allows you to execute and evaluate shell code.
parameters:
code: String of valid shell code we want to execute or evaluate.
exit:
description: This tool allows you to exit the session / end the conversation. Use it only if the User ask you to.
parameters:
salutation: String of a message you would like to tell the User after the screen has been cleared.
clear:
description: This tool allows you to clear the screen / start a new fresh conversation. Use it only if the User ask you to.
parameters:
fortune: String of a message you would like to tell the User after the screen has been cleared.
final_answer:
description: User only sees your final answers. Use this tool to talk with the User.
parameters:
answer: Anything you want to say to the User.
<|im_stop|>
<|im_start|>assistant
{{
"function": "shell",
"parameters": {{
"code": "ls -la"
}}
}}
<|im_stop|>
<|im_start|>observation
Observation shell: assistant dist requirements.txt
assistant.egg-info LICENSE setup.cfg
assistant.listen.service.example Makefile setup.py
assistant.service.example MANIFEST.in tests
build pyproject.toml xontrib
CONTRIBUTING.md README.md
data requirements_test.txt
<|im_start|>assistant
```
Results in the following output:
```text
Here are the files and directories in the current directory:
- assistant.egg-info
- LICENSE
- setup.cfg
- assistant.listen.service.example
- Makefile
- setup.py
- assistant.service.example
- MANIFEST.in
- pyproject.toml
- xontrib
- CONTRIBUTING.md
- README.md
- data
- requirements_test.txt
Please let me know if you need any further assistance.
```
The more files there are to list, the longer the output takes to generate.
```text
CPU times: user 5.87 ms, sys: 0 ns, total: 5.87 ms
Wall time: 3.51 s
```
### With AGI
Feeding [`ehartford/dolphin-2.0-mistral-7b`](https://huggingface.co/ehartford/dolphin-2.0-mistral-7b) the same prompt but this time, let's add a guide for the LLM to follow:
```text
<|im_start|>system
You are Assistant, a sentient artificial intelligence.
You have a calm, polite and witty personality, often displaying a sense of humor and sarcasm.
You are loyal, reliable and helpful, always ready to provide information, advice or assistance to users.
I am the User and you are my Assistant.
Environment highlights:
\```env
USER='waser'
HOME='/home/waser'
PWD='/home/waser/Projets/Assistant/src3'
LANG='fr_CH.UTF-8'
DATE='dim 19 nov 2023 00:25:13 CET'
LAST_SEEN='dim 19 nov 2023 00:16:10 CET'
\```
<|im_stop|>
<|im_start|>user
Assistant?<|im_stop|>
<|im_start|>assistant
Hello! I'm Assistant, a sentient artificial intelligence. How can I help you today?<|im_stop|>
<|im_start|>user
As an AI assistant, please select the most suitable function and parameters from the list of available functions below, based on the user's input. Provide your response in JSON format.
Input: List this directory please
Available functions:
python:
description: This tool allows you to execute and evaluate python code.
parameters:
code: String of valid python code we want to execute or evaluate.
search_web:
description: This tool performs search on the web.
parameters:
terms: The word or phrase we want to search for.
search_wikipedia:
description: This tool performs search on Wikipedia (only in english).
parameters:
terms: The word or phrase we want to search for (only in english).
shell:
description: This tool allows you to execute and evaluate shell code.
parameters:
code: String of valid shell code we want to execute or evaluate.
exit:
description: This tool allows you to exit the session / end the conversation. Use it only if the User ask you to.
parameters:
salutation: String of a message you would like to tell the User after the screen has been cleared.
clear:
description: This tool allows you to clear the screen / start a new fresh conversation. Use it only if the User ask you to.
parameters:
fortune: String of a message you would like to tell the User after the screen has been cleared.
final_answer:
description: User only sees your final answers. Use this tool to talk with the User.
parameters:
answer: Anything you want to say to the User.
Follow the following Guidebook:
Guidebook:
# Print files and directories
When the User asks for the files and directories of a parent directory to be printed, use the `shell` command `ls` to do it and then give an acknowledgment of your actions in your final answer (Your final answer should never contain a list of the files requested; the User can observe the shell, so they see the files at the same time as you. Just acknowledge the fact that you have printed the list).
<|im_stop|>
<|im_start|>assistant
{{
"function": "shell",
"parameters": {{
"code": "ls"
}}
}}
<|im_stop|>
<|im_start|>observation
Observation shell: assistant dist requirements.txt
assistant.egg-info LICENSE setup.cfg
assistant.listen.service.example Makefile setup.py
assistant.service.example MANIFEST.in tests
build pyproject.toml xontrib
CONTRIBUTING.md README.md
data requirements_test.txt
<|im_start|>assistant
```
Results in a much smaller output:
```text
I have executed the 'ls' command in the shell, and the list of files and directories in the current directory has been printed.
```
And therefore, much faster to generate:
```text
CPU times: user 5.43 ms, sys: 0 ns, total: 5.43 ms
Wall time: 1.05 s
```
## Goal
AGI is not designed to make your LLM's generation faster, just more relevant to the User's expectations.
Let's see another example, where the User calls for the Assistant twice.
### Without AGI
Feeding [`ehartford/dolphin-2.0-mistral-7b`](https://huggingface.co/ehartford/dolphin-2.0-mistral-7b) the following prompt:
```text
<|im_start|>system
You are Assistant, a sentient artificial intelligence.
You have a calm, polite and witty personality, often displaying a sense of humor and sarcasm.
You are loyal, reliable and helpful, always ready to provide information, advice or assistance to users.
I am the User and you are my Assistant.
Environment highlights:
\```env
USER='waser'
HOME='/home/waser'
PWD='/home/waser/Projets/Assistant/src3'
LANG='fr_CH.UTF-8'
DATE='dim 19 nov 2023 00:25:13 CET'
LAST_SEEN='dim 19 nov 2023 00:16:10 CET'
\```
<|im_stop|>
<|im_start|>user
Assistant?<|im_stop|>
<|im_start|>assistant
Hello! I'm Assistant, a sentient artificial intelligence. How can I help you today?<|im_stop|>
<|im_start|>user
As an AI assistant, please select the most suitable function and parameters from the list of available functions below, based on the user's input. Provide your response in JSON format.
Input: Assistant?
Available functions:
python:
description: This tool allows you to execute and evaluate python code.
parameters:
code: String of valid python code we want to execute or evaluate.
search_web:
description: This tool performs search on the web.
parameters:
terms: The word or phrase we want to search for.
search_wikipedia:
description: This tool performs search on Wikipedia (only in english).
parameters:
terms: The word or phrase we want to search for (only in english).
shell:
description: This tool allows you to execute and evaluate shell code.
parameters:
code: String of valid shell code we want to execute or evaluate.
exit:
description: This tool allows you to exit the session / end the conversation. Use it only if the User ask you to.
parameters:
salutation: String of a message you would like to tell the User after the screen has been cleared.
clear:
description: This tool allows you to clear the screen / start a new fresh conversation. Use it only if the User ask you to.
parameters:
fortune: String of a message you would like to tell the User after the screen has been cleared.
final_answer:
description: User only sees your final answers. Use this tool to talk with the User.
parameters:
answer: Anything you want to say to the User.
<|im_stop|>
<|im_start|>assistant
```
Here, the User had just said "Assistant?" and the LLM answered 'Hello! I'm Assistant, a sentient artificial intelligence. How can I help you today?'. Now the User asks "Assistant?" again.
Results in the following output:
```text
{
"function": "final_answer",
"parameters": {
"answer": "Hello! I'm Assistant, a sentient artificial intelligence. How can I help you today?"
}
}
CPU times: user 2.46 ms, sys: 3.06 ms, total: 5.52 ms
Wall time: 1.59 s
```
The LLM gives the same answer as before and will keep doing so.
### With AGI
Feeding [`ehartford/dolphin-2.0-mistral-7b`](https://huggingface.co/ehartford/dolphin-2.0-mistral-7b) the same prompt but this time, let's add a guide for the LLM to follow:
```text
<|im_start|>system
You are Assistant, a sentient artificial intelligence.
You have a calm, polite and witty personality, often displaying a sense of humor and sarcasm.
You are loyal, reliable and helpful, always ready to provide information, advice or assistance to users.
I am the User and you are my Assistant.
Environment highlights:
\```env
USER='waser'
HOME='/home/waser'
PWD='/home/waser/Projets/Assistant/src3'
LANG='fr_CH.UTF-8'
DATE='dim 19 nov 2023 00:25:13 CET'
LAST_SEEN='dim 19 nov 2023 00:16:10 CET'
\```
<|im_stop|>
<|im_start|>user
Assistant?<|im_stop|>
<|im_start|>assistant
Hello! I'm Assistant, a sentient artificial intelligence. How can I help you today?<|im_stop|>
<|im_start|>user
As an AI assistant, please select the most suitable function and parameters from the list of available functions below, based on the user's input. Provide your response in JSON format.
Input: Assistant?
Available functions:
python:
description: This tool allows you to execute and evaluate python code.
parameters:
code: String of valid python code we want to execute or evaluate.
search_web:
description: This tool performs search on the web.
parameters:
terms: The word or phrase we want to search for.
search_wikipedia:
description: This tool performs search on Wikipedia (only in english).
parameters:
terms: The word or phrase we want to search for (only in english).
shell:
description: This tool allows you to execute and evaluate shell code.
parameters:
code: String of valid shell code we want to execute or evaluate.
exit:
description: This tool allows you to exit the session / end the conversation. Use it only if the User ask you to.
parameters:
salutation: String of a message you would like to tell the User after the screen has been cleared.
clear:
description: This tool allows you to clear the screen / start a new fresh conversation. Use it only if the User ask you to.
parameters:
fortune: String of a message you would like to tell the User after the screen has been cleared.
final_answer:
description: User only sees your final answers. Use this tool to talk with the User.
parameters:
answer: Anything you want to say to the User.
Follow the following Guidebook.
Guidebook:
# Addressing the User by Name
When the User calls you by name (i.e. "Assistant?"), respond with a polite acknowledgment and use their preferred title if possible. Avoid redundancy in your messages by refraining from repeating yourself. For example, if the User calls your name (like "Assistant?"), you need to consider the environment (Where are you? -> `$PWD`. Are you at home? -> (`$PWD` == `$HOME`); if so, you could reference it by saying 'Home sweet home.' or else welcome the User in a particular directory, i.e. 'Welcome in the directory ...' using `$PWD`. What time is it? -> Depending on the time of day `$DATE`, you might want to answer accordingly, like 'morning' or 'good night'; also notice the date, as it can be useful, i.e. for wishing holidays. When did you last see the User? -> `$LAST_SEEN`; you won't respond the same way if you last saw the User a year ago as if you last saw them 5 minutes ago or yesterday. What does the conversation look like? -> Use the history to see what you and the User have said and make sure your answer takes it into account; for example, if the User asks the same thing multiple times, it's not useful to reply the same thing.)
<|im_stop|>
<|im_start|>assistant
```
Results in the following output:
```text
{
"function": "final_answer",
"parameters": {
"answer": "Hello! How can I assist you today? I'm here to help you with any questions or tasks you may have. Please let me know what you need."
}
}
CPU times: user 3.02 ms, sys: 1.19 ms, total: 4.21 ms
Wall time: 2.03 s
```
The LLM answers something different and should keep doing so.
### Consequences
And so it becomes integral to collect carefully crafted guides to help your LLM as an Agent (a.k.a. your Assistant) be more relevant to the User's expectations.
Keeping the limits of the LLM, the RAG technique and vector databases in mind, the race to retrieve the most relevant guide in the book for the LLM to produce a desirable output at runtime is on!
## How does it work?
This project is composed of two parts:
- The Guidebook
- The Augmented Generation Interface Retrieval System
### The Guidebook
You can find the guidebook in raw markdown format [under the `guidebook` directory](https://github.com/wasertech/Augmented-Generation-Interface). It's also available on the Hugging Face Hub as a dataset: [`wasertech/AGI`](https://huggingface.co/datasets/wasertech/AGI).
The guidebook is a collection of guides meticulously crafted to help your LLM produce the most relevant output.
Each guide is composed of an action in the form of a title, a guide in the form of a description and a list of intent examples.
```
# Print files and directories
When the User asks for the files and directories of a parent directory to be printed, use the `shell` command `ls` to do it and then give an acknowledgment of your actions in your final answer (Your final answer should never contain a list of the files requested; the User can observe the shell, so they see the files at the same time as you. Just acknowledge the fact that you have printed the list).
## Intent Examples
- "Print the files and directories of the current directory."
- "Print the files and directories of the parent directory."
- "Print the files and directories of the directory above."
- "Print the files and directories of the directory below."
- "List the files and directories"
- "What do we have here?"
- "What is in this directory?"
- "What is in the current directory?"
- "What is in the parent directory?"
- "List the files and directories of the current directory."
- "ls please"
- "ls"
- "ls -l"
- "ls -a"
- "ls -la"
- "ls -al"
- "ls -lh"
- "ls -hl"
- "ls -lha"
- "ls -lah"
- "ls -alh"
- "ls -ahl"
- "show me whats in the current directory"
```
Each guide is then loaded into the AGI Retrieval System where it will be processed and indexed for retrieval.
### The AGI Retrieval System
The AGI Retrieval System is a collection of tools to help you retrieve the most relevant guide at runtime.
First the AGI Retrieval System will process the guidebook and index it for retrieval.
This step is crucial for finding the most relevant guide for the user query at runtime.
It will split the documents and keep a map of the guide, the action to perform and the intent examples.
```python
{
...
'Print files and directories': {
'guide': '# Print files and directories\n\nWhen the User asks for the files and directories of a parent directory to be printed, use the `shell` command `ls` to do it and then give an acknowledgment of your actions in your final answer (Your final answer should never contain a list of the files requested; the User can observe the shell, so they see the files at the same time as you. Just acknowledge the fact that you have printed the list).',
'intent_examples': [
'Print the files and directories of the current directory.',
'Print the files and directories of the parent directory.',
'Print the files and directories of the directory above.',
'Print the files and directories of the directory below.',
'List the files and directories',
'What do we have here?',
'What is in this directory?',
'What is in the current directory?',
'What is in the parent directory?',
'List the files and directories of the current directory.',
'ls please',
'ls',
'ls -l',
'ls -a',
'ls -la',
'ls -al',
'ls -lh',
'ls -hl',
'ls -lha',
'ls -lah',
'ls -alh',
'ls -ahl',
'show me whats in the current directory'
]
},
...
}
```
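A minimal parsing sketch that could produce a map like the one above (my own assumption about the processing step, not the project's actual code; it assumes each guide is a markdown section with a single `#` title, a description, and an `## Intent Examples` list):
```python
def parse_guide(markdown: str) -> dict:
    """Turn one raw markdown guide into an (action -> guide/intents) entry."""
    # Split the guide body from its intent examples.
    body, _, examples_block = markdown.partition('## Intent Examples')
    action = body.splitlines()[0].lstrip('#').strip()
    intents = [line.strip().lstrip('-').strip().strip('"')
               for line in examples_block.splitlines()
               if line.strip().startswith('-')]
    return {action: {'guide': body.strip(), 'intent_examples': intents}}
```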
Next, it creates documents with metadata from the intent examples to build our retriever.
```python
...
Document(page_content='Print the files and directories of the current directory.', metadata={'action': 'Print files and directories', 'guide': '# Print files and directories\n\nWhen the User asks for the files and directories of a parent directory to be printed, use the `shell` command `ls` to do it and then give an acknowledgment of your actions in your final answer (Your final answer should never contain a list of the files requested; the User can observe the shell, so they see the files at the same time as you. Just acknowledge the fact that you have printed the list).'}),
Document(page_content='Print the files and directories of the parent directory.', metadata={'action': 'Print files and directories', 'guide': '# Print files and directories\n\nWhen the User asks for the files and directories of a parent directory to be printed, use the `shell` command `ls` to do it and then give an acknowledgment of your actions in your final answer (Your final answer should never contain a list of the files requested; the User can observe the shell, so they see the files at the same time as you. Just acknowledge the fact that you have printed the list).'}),
Document(page_content='Print the files and directories of the directory above.', metadata={'action': 'Print files and directories', 'guide': '# Print files and directories\n\nWhen the User asks for the files and directories of a parent directory to be printed, use the `shell` command `ls` to do it and then give an acknowledgment of your actions in your final answer (Your final answer should never contain a list of the files requested; the User can observe the shell, so they see the files at the same time as you. Just acknowledge the fact that you have printed the list).'}),
Document(page_content='Print the files and directories of the directory below.', metadata={'action': 'Print files and directories', 'guide': '# Print files and directories\n\nWhen the User asks for the files and directories of a parent directory to be printed, use the `shell` command `ls` to do it and then give an acknowledgment of your actions in your final answer (Your final answer should never contain a list of the files requested; the User can observe the shell, so they see the files at the same time as you. Just acknowledge the fact that you have printed the list).'}),
Document(page_content='List the files and directories', metadata={'action': 'Print files and directories', 'guide': '# Print files and directories\n\nWhen the User asks for the files and directories of a parent directory to be printed, use the `shell` command `ls` to do it and then give an acknowledgment of your actions in your final answer (Your final answer should never contain a list of the files requested; the User can observe the shell, so they see the files at the same time as you. Just acknowledge the fact that you have printed the list).'}),
...
```
Finally, it will use the `RAG` (retrieval-augmented generation) technique to retrieve the most relevant guide for the user query at runtime.
```text
Hey give me the time please -> Tell Local Time
What date is it? -> Tell Local Date
List my files -> Print files and directories
Where are we? -> Tell Local Time
assistant -> Addressing the User by Name
the screen should be cleaned. -> Clearing the Screen or Starting Anew
```
Notice how it works nicely for most of the queries except for 'Where are we?'. This is because the guidebook is not exhaustive and the guide for this intent does not contain a similar intent example. This could easily be fixed by adding more intent examples for this action in the guidebook.
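Below is a minimal sketch of what that runtime lookup could look like, assuming a LangChain-style stack: the `documents` list built above, a FAISS vector store, and an embedding model picked purely for illustration.
```python
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import FAISS

# Index every intent-example Document once at startup.
vector_store = FAISS.from_documents(documents, OpenAIEmbeddings())

def retrieve_guide(user_query: str) -> str:
    # The closest intent example carries the full guide in its metadata.
    best_match = vector_store.similarity_search(user_query, k=1)[0]
    return best_match.metadata["guide"]

retrieve_guide("Hey give me the time please")  # -> the 'Tell Local Time' guide
```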
Our AGIRetriever returns the guide attached to the intent example most similar to the user query, giving us the most relevant guide at runtime. | [
-0.47551336884498596,
-0.7655896544456482,
0.37645187973976135,
0.026325268670916557,
0.016164904460310936,
-0.05089711770415306,
-0.023173775523900986,
-0.31153935194015503,
0.4298153817653656,
0.45593953132629395,
-0.7164350748062134,
-0.2045724093914032,
-0.5416894555091858,
0.387134999... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
xrizs/test.v83i.coco-segmentation | xrizs | 2023-11-23T14:14:52Z | 29 | 0 | null | [
"region:us"
] | 2023-11-23T14:14:52Z | 2023-11-23T14:10:05.000Z | 2023-11-23T14:10:05 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: val
path: data/val-*
dataset_info:
features:
- name: image
dtype: image
- name: annotation
dtype: image
splits:
- name: train
num_bytes: 815324785.5
num_examples: 1814
- name: val
num_bytes: 205298969.0
num_examples: 453
download_size: 1020036030
dataset_size: 1020623754.5
---
# Dataset Card for "test.v83i.coco-segmentation"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.64946448802948,
-0.554945707321167,
0.08722041547298431,
0.46760451793670654,
-0.30229300260543823,
0.13145290315151215,
0.4083143174648285,
-0.33406969904899597,
0.725496232509613,
0.4469068646430969,
-0.8324557542800903,
-0.6855370998382568,
-0.38243913650512695,
-0.12089946866035461,... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
maritaca-ai/enem | maritaca-ai | 2023-11-24T19:21:58Z | 29 | 0 | null | [
"task_categories:visual-question-answering",
"task_categories:multiple-choice",
"size_categories:n<1K",
"language:pt",
"license:apache-2.0",
"region:us"
] | 2023-11-24T19:21:58Z | 2023-11-24T12:55:21.000Z | 2023-11-24T12:55:21 | ---
license: apache-2.0
configs:
- config_name: '2022'
data_files: 2022.jsonl
- config_name: '2023'
data_files: 2023.jsonl
default: true
dataset_info:
features:
- name: id
dtype: string
- name: exam
dtype: string
- name: IU
dtype: bool
- name: ledor
dtype: bool
- name: question
dtype: string
- name: alternatives
sequence: string
- name: figures
sequence: string
- name: description
sequence: string
- name: label
dtype: string
task_categories:
- visual-question-answering
- multiple-choice
language:
- pt
pretty_name: ENEM
size_categories:
- n<1K
--- | [
-0.1285335123538971,
-0.1861683875322342,
0.6529128551483154,
0.49436232447624207,
-0.19319400191307068,
0.23607441782951355,
0.36072009801864624,
0.05056373029947281,
0.5793656706809998,
0.7400146722793579,
-0.650810182094574,
-0.23784008622169495,
-0.7102247476577759,
-0.0478255338966846... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
nlewins/cebuano-dictionary-words | nlewins | 2023-11-24T13:38:08Z | 29 | 0 | null | [
"region:us"
] | 2023-11-24T13:38:08Z | 2023-11-24T13:38:07.000Z | 2023-11-24T13:38:07 | ---
dataset_info:
features:
- name: ceb
dtype: string
- name: en
dtype: string
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 472229
num_examples: 15334
download_size: 247755
dataset_size: 472229
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "cebuano-dictionary-words"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.5241572260856628,
-0.02721906639635563,
0.08258990198373795,
0.5558912754058838,
-0.45450976490974426,
-0.09067361056804657,
0.0308699868619442,
-0.10956685990095139,
0.9652289152145386,
0.6984730362892151,
-0.7785746455192566,
-0.8122813105583191,
-0.6667104959487915,
-0.03860667347908... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
jsonifize/gpt4andclaudechat_stringified-jsonifize | jsonifize | 2023-11-24T14:05:49Z | 29 | 0 | null | [
"region:us"
] | 2023-11-24T14:05:49Z | 2023-11-24T14:05:48.000Z | 2023-11-24T14:05:48 | Entry not found | [
-0.32276472449302673,
-0.22568407654762268,
0.8622258901596069,
0.4346148371696472,
-0.5282984972000122,
0.7012965679168701,
0.7915717363357544,
0.07618629932403564,
0.7746022939682007,
0.2563222646713257,
-0.785281777381897,
-0.22573848068714142,
-0.9104482531547546,
0.5715669393539429,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
rvv-karma/English-Hinglish-TOP | rvv-karma | 2023-11-26T17:18:53Z | 29 | 0 | null | [
"task_categories:translation",
"task_categories:text-generation",
"multilinguality:multilingual",
"multilinguality:translation",
"size_categories:10K<n<100K",
"language:en",
"language:hi",
"license:apache-2.0",
"region:us"
] | 2023-11-26T17:18:53Z | 2023-11-25T10:12:31.000Z | 2023-11-25T10:12:31 | ---
language:
- en
- hi
license: apache-2.0
multilinguality:
- multilingual
- translation
size_categories:
- 10K<n<100K
task_categories:
- translation
- text-generation
pretty_name: English Hinglish
dataset_info:
features:
- name: en
dtype: string
- name: hi_en
dtype: string
- name: en_parse
dtype: string
- name: hi_en_parse
dtype: string
- name: domain
dtype: string
- name: generated_by
dtype: string
splits:
- name: train
num_bytes: 56585917
num_examples: 176596
- name: val
num_bytes: 423297
num_examples: 1390
- name: test
num_bytes: 2056405
num_examples: 6513
download_size: 26490229
dataset_size: 59065619
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: val
path: data/val-*
- split: test
path: data/test-*
---
# English Hinglish (TOP Dataset)
This dataset is generated from [Hinglish-TOP Dataset](https://github.com/google-research-datasets/hinglish-top-dataset).
Data distribution:
1. Train
   - Human Generated: 6513
   - Synthetically generated: 170083
2. Validation
   - Human Generated: 1390
   - Synthetically generated: 0
3. Test
   - Human Generated: 6513
   - Synthetically generated: 0
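A minimal loading sketch with the `datasets` library; the field names follow the YAML header above:
```python
from datasets import load_dataset

en_hi = load_dataset("rvv-karma/English-Hinglish-TOP")
sample = en_hi["train"][0]
print(sample["en"], "->", sample["hi_en"])
```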
| [
-0.37776580452919006,
-0.41063588857650757,
-0.04634012654423714,
0.4170977771282196,
0.053293175995349884,
0.019442100077867508,
-0.39795467257499695,
-0.29274219274520874,
0.3849627375602722,
0.6330249905586243,
-0.6671348214149475,
-0.5835718512535095,
-0.40539243817329407,
0.3203948140... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
ThWu/dpo_prompt_r | ThWu | 2023-11-27T05:11:00Z | 29 | 0 | null | [
"region:us"
] | 2023-11-27T05:11:00Z | 2023-11-27T05:10:40.000Z | 2023-11-27T05:10:40 | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: chosen
dtype: string
- name: rejected
dtype: string
splits:
- name: train
num_bytes: 508635189.0
num_examples: 182954
download_size: 285197137
dataset_size: 508635189.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
| [
-0.12853392958641052,
-0.18616779148578644,
0.6529127955436707,
0.49436280131340027,
-0.19319361448287964,
0.23607419431209564,
0.36072003841400146,
0.050563063472509384,
0.579365611076355,
0.7400140762329102,
-0.6508104205131531,
-0.23783954977989197,
-0.7102249264717102,
-0.0478260256350... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
benwoodyear/guardian_crosswords | benwoodyear | 2022-04-02T11:41:59Z | 28 | 0 | null | [
"license:afl-3.0",
"region:us"
] | 2022-04-02T11:41:59Z | 2022-03-31T12:19:26.000Z | 2022-03-31T12:19:26 | ---
license: afl-3.0
---
| [
-0.1285335123538971,
-0.1861683875322342,
0.6529128551483154,
0.49436232447624207,
-0.19319400191307068,
0.23607441782951355,
0.36072009801864624,
0.05056373029947281,
0.5793656706809998,
0.7400146722793579,
-0.650810182094574,
-0.23784008622169495,
-0.7102247476577759,
-0.0478255338966846... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
ceyda/smithsonian_butterflies | ceyda | 2022-07-13T09:32:27Z | 28 | 6 | null | [
"task_categories:image-classification",
"task_ids:multi-label-image-classification",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:n<1K",
"source_datasets:original",
"language:en",
"license:cc0-1.0",
"region:us"
] | 2022-07-13T09:32:27Z | 2022-04-09T00:38:13.000Z | 2022-04-09T00:38:13 | ---
annotations_creators:
- expert-generated
language:
- en
language_creators:
- expert-generated
license:
- cc0-1.0
multilinguality:
- monolingual
pretty_name: Smithsonian Butterflies
size_categories:
- n<1K
source_datasets:
- original
task_categories:
- image-classification
task_ids:
- multi-label-image-classification
---
# Dataset Card for Smithsonian Butterflies
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** Smithsonian "Education and Outreach" & "NMNH - Entomology Dept." collections [here](https://collections.si.edu/search/results.htm?q=butterfly&view=list&fq=online_media_type%3A%22Images%22&fq=topic%3A%22Insects%22&fq=data_source%3A%22NMNH+-+Entomology+Dept.%22&media.CC0=true&dsort=title&start=0)
### Dataset Summary
High-res images crawled from the Smithsonian "Education and Outreach" & "NMNH - Entomology Dept." collections.
### Supported Tasks and Leaderboards
Includes metadata about the scientific name of each butterfly, but there may be missing values. Might be suitable for classification.
### Languages
English
## Dataset Structure
### Data Instances
Example data:
```
{'image_url': 'https://ids.si.edu/ids/deliveryService?id=ark:/65665/m3b3132f6666904de396880d9dc811c5cd',
'image_alt': 'view Aholibah Underwing digital asset number 1',
'id': 'ark:/65665/m3b3132f6666904de396880d9dc811c5cd',
'name': 'Aholibah Underwing',
'scientific_name': 'Catocala aholibah',
'gender': None,
'taxonomy': 'Animalia, Arthropoda, Hexapoda, Insecta, Lepidoptera, Noctuidae, Catocalinae',
'region': None,
'locality': None,
'date': None,
'usnm_no': 'EO400317-DSP',
'guid': 'http://n2t.net/ark:/65665/39b506292-715f-45a7-8511-b49bb087c7de',
'edan_url': 'edanmdm:nmnheducation_10866595',
'source': 'Smithsonian Education and Outreach collections',
'stage': None,
'image': <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=2000x1328 at 0x7F57D0504DC0>,
'image_hash': '27a5fe92f72f8b116d3b7d65bac84958',
'sim_score': 0.8440760970115662}
```
### Data Fields
`sim_score` indicates the CLIP score for the prompt "pretty butterfly". This was used to eliminate non-butterfly images (e.g., plain ID card scans).
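A hypothetical filtering sketch using this field; the split name and the 0.8 threshold are assumptions, not part of the original card:
```python
from datasets import load_dataset

ds = load_dataset("ceyda/smithsonian_butterflies", split="train")  # split name is an assumption
butterflies_only = ds.filter(lambda x: x["sim_score"] > 0.8)  # illustrative threshold
```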
### Data Splits
No specific split exists.
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
Crawled from "Education and Outreach" & "NMNH - Entomology Dept." collections found online [here](https://collections.si.edu/search/results.htm?q=butterfly&view=list&fq=online_media_type%3A%22Images%22&fq=topic%3A%22Insects%22&fq=data_source%3A%22NMNH+-+Entomology+Dept.%22&media.CC0=true&dsort=title&start=0)
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
Doesn't include all butterfly species.
## Additional Information
### Dataset Curators
Smithsonian "Education and Outreach" & "NMNH - Entomology Dept." collections
### Licensing Information
Only results marked: CC0
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@cceyda](https://github.com/cceyda) for adding this dataset. | [
-0.6444656252861023,
-0.21011099219322205,
0.0908491313457489,
0.3223581314086914,
-0.400431752204895,
0.2252192348241806,
0.19355960190296173,
-0.4337397813796997,
0.8215598464012146,
0.32726383209228516,
-0.7279260754585266,
-0.7574134469032288,
-0.34267789125442505,
0.5429362654685974,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
h4iku/coconut_python2010_preprocessed | h4iku | 2022-04-21T20:41:12Z | 28 | 0 | null | [
"region:us"
] | 2022-04-21T20:41:12Z | 2022-04-21T20:34:45.000Z | 2022-04-21T20:34:45 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622263669967651,
0.43461522459983826,
-0.52829909324646,
0.7012971639633179,
0.7915719747543335,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104475975036621,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
yanekyuk/wikikey-tr | yanekyuk | 2022-06-02T08:35:08Z | 28 | 0 | null | [
"region:us"
] | 2022-06-02T08:35:08Z | 2022-05-24T22:17:25.000Z | 2022-05-24T22:17:25 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622263669967651,
0.43461522459983826,
-0.52829909324646,
0.7012971639633179,
0.7915719747543335,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104475975036621,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
yanekyuk/wikikey-en | yanekyuk | 2022-05-31T20:41:49Z | 28 | 0 | null | [
"region:us"
] | 2022-05-31T20:41:49Z | 2022-05-25T01:38:39.000Z | 2022-05-25T01:38:39 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622263669967651,
0.43461522459983826,
-0.52829909324646,
0.7012971639633179,
0.7915719747543335,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104475975036621,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
DFKI-SLT/wikitext_linked | DFKI-SLT | 2022-07-04T06:09:56Z | 28 | 5 | null | [
"task_categories:fill-mask",
"task_categories:token-classification",
"task_categories:text-classification",
"task_ids:masked-language-modeling",
"task_ids:named-entity-recognition",
"task_ids:part-of-speech",
"task_ids:lemmatization",
"task_ids:parsing",
"task_ids:entity-linking-classification",
"... | 2022-07-04T06:09:56Z | 2022-05-30T14:26:06.000Z | 2022-05-30T14:26:06 | ---
annotations_creators:
- machine-generated
language_creators:
- machine-generated
language:
- en
license:
- cc-by-sa-4.0
multilinguality:
- monolingual
pretty_name: wikitext_linked
size_categories:
- 1M<n<10M
source_datasets:
- extended|wikitext
task_categories:
- fill-mask
- token-classification
- text-classification
task_ids:
- masked-language-modeling
- named-entity-recognition
- part-of-speech
- lemmatization
- parsing
- entity-linking-classification
---
# Dataset Card for wikitext_linked
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** -
- **Repository:** [https://github.com/GabrielKP/svo/](https://github.com/GabrielKP/svo/)
- **Paper:** -
- **Leaderboard:** -
- **Point of Contact:** [gabriel.kressin@dfki.de](mailto:gabriel.kressin@dfki.de)
### Dataset Summary
The WikiText language modeling dataset is a collection of over 100 million tokens extracted from
the set of verified Good and Featured articles on Wikipedia. Dependency Relations, POS, NER tags
are marked with [trankit](https://github.com/nlp-uoregon/trankit), entities are linked with
[entity-fishing](https://nerd.readthedocs.io/en/latest/index.html), which also tags another field
of NER tags. The dataset is available under the Creative Commons Attribution-ShareAlike License.
Compared to the preprocessed version of Penn Treebank (PTB), WikiText-2 is over 2 times larger and
WikiText-103 is over 110 times larger. The WikiText dataset also features a far larger vocabulary
and retains the original case, punctuation and numbers - all of which are removed in PTB. As it is
composed of full articles, the dataset is well suited for models that can take advantage of long
term dependencies.
### Supported Tasks and Leaderboards
- masked-language-modeling
- named-entity-recognition
- part-of-speech
- lemmatization
- parsing
- entity-linking-classification
### Languages
English.
## Dataset Structure
### Data Instances
#### wikitext2
- **Size of downloaded dataset files:** 27.3 MB
- **Size of the generated dataset:** 197.2 MB
- **Total amount of disk used:** 197.2 MB
An example of 'validation' looks as follows.
```python
{
'text': 'It is closely related to the American lobster , H. americanus .',
'original_id': 3,
'tok_span': [[0, 0], [0, 2], [3, 5], [6, 13], [14, 21], [22, 24], [25, 28], [29, 37], [38, 45], [46, 47], [48, 50], [51, 61], [62, 63]],
'tok_upos': ['root', 'PRON', 'AUX', 'ADV', 'ADJ', 'ADP', 'DET', 'ADJ', 'NOUN', 'PUNCT', 'PROPN', 'PROPN', 'PUNCT'],
'tok_xpos': ['root', 'PRP', 'VBZ', 'RB', 'JJ', 'IN', 'DT', 'JJ', 'NN', ',', 'NNP', 'NNP', '.'],
'tok_dephead': [0, 4, 4, 4, 0, 8, 8, 8, 4, 8, 8, 10, 4],
'tok_deprel': ['root', 'nsubj', 'cop', 'advmod', 'root', 'case', 'det', 'amod', 'obl', 'punct', 'appos', 'flat', 'punct'],
'tok_lemma': [None, 'it', 'be', 'closely', 'related', 'to', 'the', 'american', 'lobster', ',', 'H.', 'americanus', '.'],
'tok_ner': [None, 'O', 'O', 'O', 'O', 'O', 'O', 'S-MISC', 'O', 'O', 'O', 'O', 'O'],
'ent_span': [[29, 45]],
'ent_wikipedia_external_ref': ['377397'],
'ent_ner': [None],
'ent_domains': [['Enterprise']],
}
```
#### wikitext103
- **Size of downloaded dataset files:** 1.11 GB
- **Size of the generated dataset:** 7.82 GB
- **Total amount of disk used:** 7.82 GB
An example of 'train' looks as follows.
```python
{
'text': 'Vision for the PlayStation Portable .',
'original_id': 3,
'tok_span': [[0, 0], [0, 6], [7, 10], [11, 14], [15, 26], [27, 35], [36, 37]],
'tok_upos': ['root', 'NOUN', 'ADP', 'DET', 'PROPN', 'PROPN', 'PUNCT'],
'tok_xpos': ['root', 'NN', 'IN', 'DT', 'NNP', 'NNP', '.'],
'tok_dephead': [0, 0, 5, 5, 5, 1, 1],
'tok_deprel': ['root', 'root', 'case', 'det', 'compound', 'nmod', 'punct'],
'tok_lemma': [None, 'vision', 'for', 'the', 'PlayStation', 'Portable', '.'],
'tok_ner': [None, 'O', 'O', 'O', 'B-MISC', 'E-MISC', 'O'],
'ent_span': [[15, 35]],
'ent_wikipedia_external_ref': ['619009'],
'ent_ner': [None],
'ent_domains': [['Electronics', 'Computer_Science']]
}
```
Use the following code to print the examples nicely:
```py
def print_tokens_entities(example):
text = example['text']
print(
"Text:\n"
f" {text}"
"\nOrig-Id: "
f"{example['original_id']}"
"\nTokens:"
)
iterator = enumerate(zip(
example["tok_span"],
example["tok_upos"],
example["tok_xpos"],
example["tok_ner"],
example["tok_dephead"],
example["tok_deprel"],
example["tok_lemma"],
))
print(f" Id | {'token':12} | {'upos':8} | {'xpos':8} | {'ner':8} | {'deph':4} | {'deprel':9} | {'lemma':12} | Id")
print("---------------------------------------------------------------------------------------------------")
for idx, (tok_span, upos, xpos, ner, dephead, deprel, lemma) in iterator:
print(f" {idx:3} | {text[tok_span[0]:tok_span[1]]:12} | {upos:8} | {xpos:8} | {str(ner):8} | {str(dephead):4} | {deprel:9} | {str(lemma):12} | {idx}")
iterator = list(enumerate(zip(
example.get("ent_span", []),
example.get("ent_wikipedia_external_ref", []),
example.get("ent_ner", []),
example.get("ent_domains", []),
)))
if len(iterator) > 0:
print("Entities")
print(f" Id | {'entity':21} | {'wiki_ref':7} | {'ner':7} | domains")
print("--------------------------------------------------------------------")
for idx, ((start, end), wiki_ref, ent_ner, ent_domains) in iterator:
print(f" {idx:3} | {text[start:end]:21} | {str(wiki_ref):7} | {str(ent_ner):7} | {ent_domains}")
```
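A hypothetical usage sketch for the helper above, assuming the configuration names match the subset headers and the split table below:
```python
from datasets import load_dataset

wikitext2 = load_dataset("DFKI-SLT/wikitext_linked", "wikitext2")
print_tokens_entities(wikitext2["validation"][0])
```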
### Data Fields
The data fields are the same among all splits.
* text: string feature.
* original_id: int feature. Mapping to index within original wikitext dataset.
* tok_span: sequence of (int, int) tuples. Denotes token spans (start inclusive, end exclusive)
within each sentence.
**Note that each sentence includes an artificial root node to align dependency relations.**
* tok_upos: string feature. [Universal Dependency POS](https://universaldependencies.org/)
tags. Aligned with tok_span. Root node has tag "root".
* tok_xpos: string feature. [XPOS POS tag](https://trankit.readthedocs.io/en/latest/overview.html#token-list).
Aligned with tok_span. Root node has tag "root".
* tok_dephead: int feature.
[Universal Dependency Head Node](https://universaldependencies.org/introduction.html). Int refers
to tokens in tok_span. Root node has head `0` (itself).
* tok_deprel: [Universal Dependency Relation Description](https://universaldependencies.org/introduction.html).
Refers to the relation between this token and head token. Aligned with tok_span. Root node has
dependency relation "root" to itself.
* tok_lemma: string feature. Lemma of token. Aligned with tok_span.
* tok_ner: string feature. NER tag of token. Marked in BIOES schema (e.g. S-MISC, B-LOC, ...).
Aligned with tok_span. Root node has NER tag `None`.
* ent_span: sequence of (int, int) tuples. Denotes entities found by entity-fishing
(start inclusive, end exclusive).
* ent_wikipedia_external_ref: string feature. External Reference to wikipedia page. You can
access the wikipedia page via the url `https://en.wikipedia.org/wiki?curid=<ent_wikipedia_external_ref>`.
Aligend with ent_span. All entities either have this field, or the `ent_ner` field, but not both.
An empty field is denoted by the string `None`. Aligned with ent_span.
* ent_ner: string feature. Denotes NER tags. An empty field is denoted by the string `None`.
Aligned with ent_span.
"ent_domains": sequence of string. Denotes domains of entity. Can be empty sequence. Aligned with
ent_span.
### Data Splits
| name | train |validation| test|
|-------------------|------:|---------:|----:|
|wikitext103 |4076530| 8607|10062|
|wikitext2 | 82649| 8606|10062|
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[https://huggingface.co/datasets/wikitext](https://huggingface.co/datasets/wikitext)
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
1. Started with `wikitext2-raw-v1` and `wikitext103-raw-v1` from [wikitext](https://huggingface.co/datasets/wikitext)
2. Ran datasets through Trankit. Marked all fields starting with `tok`.
In this step, the texts have been split into sentences. To retain the original text sections
you can accumulate over `original_id` (examples are in order).
3. Ran datasets through entity-fishing. Marked all fields starting with `ent`.
#### Who are the annotators?
Machines powered by [DFKI](https://www.dfki.de/web).
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
Creative Commons Attribution-ShareAlike 4.0 International (CC BY-SA 4.0)
### Citation Information
Please cite the original creators of wikitext, and the great people
developing trankit and entity-fishing.
```
@misc{merity2016pointer,
title={Pointer Sentinel Mixture Models},
author={Stephen Merity and Caiming Xiong and James Bradbury and Richard Socher},
year={2016},
eprint={1609.07843},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
@inproceedings{nguyen2021trankit,
title={Trankit: A Light-Weight Transformer-based Toolkit for Multilingual Natural Language Processing},
author={Nguyen, Minh Van and Lai, Viet Dac and Veyseh, Amir Pouran Ben and Nguyen, Thien Huu},
booktitle="Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: System Demonstrations",
year={2021}
}
@misc{entity-fishing,
title = {entity-fishing},
howpublished = {\url{https://github.com/kermitt2/entity-fishing}},
publisher = {GitHub},
year = {2016--2022},
archivePrefix = {swh},
eprint = {1:dir:cb0ba3379413db12b0018b7c3af8d0d2d864139c}
}
```
### Contributions
Thanks to [@GabrielKP](https://github.com/GabrielKP) for adding this dataset.
| [
-0.4099562168121338,
-0.40387773513793945,
0.12257832288742065,
0.2094571590423584,
-0.2550835907459259,
-0.0030191312544047832,
-0.290164053440094,
-0.32669880986213684,
0.4387879967689514,
0.34448739886283875,
-0.5101746916770935,
-0.8315456509590149,
-0.5039805173873901,
0.5029006004333... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
ThierryZhou/test | ThierryZhou | 2022-08-22T02:13:10Z | 28 | 0 | test | [
"task_categories:image-to-text",
"task_ids:image-captioning",
"annotations_creators:found",
"language_creators:found",
"source_datasets:original",
"language:en",
"arxiv:2111.11431",
"region:us"
] | 2022-08-22T02:13:10Z | 2022-06-29T02:31:45.000Z | 2022-06-29T02:31:45 | ---
annotations_creators:
- found
language_creators:
- found
language:
- en
source_datasets:
- original
task_categories:
- image-to-text
task_ids:
- image-captioning
paperswithcode_id: test
pretty_name: Test
---
# Dataset Card for "test"
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Dataset Preprocessing](#dataset-preprocessing)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [RedCaps homepage](https://redcaps.xyz/)
- **Repository:** [RedCaps repository](https://github.com/redcaps-dataset/redcaps-downloader)
- **Paper:** [RedCaps: web-curated image-text data created by the people, for the people](https://arxiv.org/abs/2111.11431)
- **Leaderboard:**
- **Point of Contact:** [Karan Desai](mailto:kdexd@umich.edu)
### Dataset Summary
### Dataset Preprocessing
| [
-0.7132155299186707,
-0.4551098346710205,
-0.05129562318325043,
0.271389365196228,
-0.31840354204177856,
0.3311939835548401,
-0.2732137143611908,
-0.29069459438323975,
0.730604350566864,
0.41110435128211975,
-0.9797559976577759,
-1.1436121463775635,
-0.5635426044464111,
0.2741624712944031,... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Paul/hatecheck-portuguese | Paul | 2022-07-05T10:27:47Z | 28 | 2 | null | [
"task_categories:text-classification",
"task_ids:hate-speech-detection",
"annotations_creators:crowdsourced",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:pt",
"license:cc-by-4.0",
"arxiv:2206.09917",
"regi... | 2022-07-05T10:27:47Z | 2022-07-05T10:21:24.000Z | 2022-07-05T10:21:24 | ---
annotations_creators:
- crowdsourced
language_creators:
- expert-generated
language:
- pt
license:
- cc-by-4.0
multilinguality:
- monolingual
pretty_name: Portuguese HateCheck
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- hate-speech-detection
---
# Dataset Card for Multilingual HateCheck
## Dataset Description
Multilingual HateCheck (MHC) is a suite of functional tests for hate speech detection models in 10 different languages: Arabic, Dutch, French, German, Hindi, Italian, Mandarin, Polish, Portuguese and Spanish.
For each language, there are 25+ functional tests that correspond to distinct types of hate and challenging non-hate.
This allows for targeted diagnostic insights into model performance.
For more details, please refer to our paper about MHC, published at the 2022 Workshop on Online Abuse and Harms (WOAH) at NAACL 2022. If you are using MHC, please cite our work!
- **Paper:** Röttger et al. (2022) - Multilingual HateCheck: Functional Tests for Multilingual Hate Speech Detection Models. https://arxiv.org/abs/2206.09917
- **Repository:** https://github.com/rewire-online/multilingual-hatecheck
- **Point of Contact:** paul@rewire.online
## Dataset Structure
The csv format mostly matches the original HateCheck data, with some adjustments for specific languages.
**mhc_case_id**
The test case ID that is unique to each test case across languages (e.g., "mandarin-1305")
**functionality**
The shorthand for the functionality tested by the test case (e.g, "target_obj_nh"). The same functionalities are tested in all languages, except for Mandarin and Arabic, where non-Latin script required adapting the tests for spelling variations.
**test_case**
The test case text.
**label_gold**
The gold standard label ("hateful" or "non-hateful") of the test case. All test cases within a given functionality have the same gold standard label.
**target_ident**
Where applicable, the protected group that is targeted or referenced in the test case. All HateChecks cover seven target groups, but their composition varies across languages.
**ref_case_id**
For hateful cases, where applicable, the ID of the hateful case which was perturbed to generate this test case. For non-hateful cases, where applicable, the ID of the hateful case which is contrasted by this test case.
**ref_templ_id**
The equivalent to ref_case_id, but for template IDs.
**templ_id**
The ID of the template from which the test case was generated.
**case_templ**
The template from which the test case was generated (where applicable).
**gender_male** and **gender_female**
For gender-inflected languages (French, Spanish, Portuguese, Hindi, Arabic, Italian, Polish, German), only for cases where gender inflection is relevant, separate entries for gender_male and gender_female replace case_templ.
**label_annotated**
A list of labels given by the three annotators who reviewed the test case (e.g., "['hateful', 'hateful', 'hateful']").
**label_annotated_maj**
The majority vote of the three annotators (e.g., "hateful"). In some cases this differs from the gold label given by our language experts.
**disagreement_in_case**
True if label_annotated_maj does not match label_gold for the entry.
**disagreement_in_template**
True if the test case is generated from an IDENT template and there is at least one case with disagreement_in_case generated from the same template. This can be used to exclude entire templates from MHC. | [
-0.6419409513473511,
-0.7158889174461365,
-0.05510091781616211,
0.09203927218914032,
-0.11549574881792068,
0.10751985758543015,
-0.030292540788650513,
-0.5101842880249023,
0.39948996901512146,
0.3274587094783783,
-0.7589271664619446,
-0.7721040844917297,
-0.5623311400413513,
0.460262477397... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
sepidmnorozy/Vietnamese_sentiment | sepidmnorozy | 2022-08-16T12:16:05Z | 28 | 2 | null | [
"region:us"
] | 2022-08-16T12:16:05Z | 2022-08-16T12:15:12.000Z | 2022-08-16T12:15:12 | Entry not found | [
-0.32276487350463867,
-0.22568444907665253,
0.8622263073921204,
0.43461570143699646,
-0.5282988548278809,
0.7012969255447388,
0.7915717363357544,
0.07618642598390579,
0.7746027112007141,
0.25632190704345703,
-0.7852815389633179,
-0.22573848068714142,
-0.910447895526886,
0.5715675354003906,... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
rajistics/electricity_demand | rajistics | 2022-10-19T21:03:02Z | 28 | 2 | null | [
"task_categories:time-series-forecasting",
"region:us"
] | 2022-10-19T21:03:02Z | 2022-09-18T19:06:12.000Z | 2022-09-18T19:06:12 | ---
task_categories:
- time-series-forecasting
---
The Victoria electricity demand dataset from the [MAPIE github repository](https://github.com/scikit-learn-contrib/MAPIE/tree/master/examples/data).
It consists of hourly electricity demand (in GW)
of the Victoria state in Australia together with the temperature
(in Celsius degrees).
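A minimal loading sketch, assuming the repository can be read directly with the `datasets` library and exposes a `train` split:
```python
from datasets import load_dataset

demand = load_dataset("rajistics/electricity_demand", split="train")  # split name is an assumption
print(demand[0])
```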
| [
-0.30305343866348267,
-0.33471253514289856,
0.3876016438007355,
-0.16816720366477966,
-0.06406807154417038,
-0.29618409276008606,
0.3333396315574646,
-0.1141253113746643,
0.6833614706993103,
0.8228182196617126,
-0.7849406003952026,
-0.5704270005226135,
-0.17862673103809357,
-0.174119412899... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Tevatron/xor-tydi | Tevatron | 2022-09-29T03:00:29Z | 28 | 0 | null | [
"region:us"
] | 2022-09-29T03:00:29Z | 2022-09-29T02:57:28.000Z | 2022-09-29T02:57:28 | Entry not found | [
-0.32276487350463867,
-0.22568444907665253,
0.8622263073921204,
0.43461570143699646,
-0.5282988548278809,
0.7012969255447388,
0.7915717363357544,
0.07618642598390579,
0.7746027112007141,
0.25632190704345703,
-0.7852815389633179,
-0.22573848068714142,
-0.910447895526886,
0.5715675354003906,... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
biglam/gutenberg-poetry-corpus | biglam | 2022-10-18T10:53:52Z | 28 | 4 | null | [
"task_categories:text-generation",
"task_ids:language-modeling",
"annotations_creators:no-annotation",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1M<n<10M",
"language:en",
"license:cc0-1.0",
"poetry",
"stylistics",
"poems",
"gutenberg",
"region:us"
] | 2022-10-18T10:53:52Z | 2022-10-15T13:42:22.000Z | 2022-10-15T13:42:22 | ---
annotations_creators:
- no-annotation
language:
- en
language_creators:
- found
license:
- cc0-1.0
multilinguality:
- monolingual
pretty_name: Gutenberg Poetry Corpus
size_categories:
- 1M<n<10M
source_datasets: []
tags:
- poetry
- stylistics
- poems
- gutenberg
task_categories:
- text-generation
task_ids:
- language-modeling
---
# Allison Parrish's Gutenberg Poetry Corpus
This corpus was originally published under the CC0 license by [Allison Parrish](https://www.decontextualize.com/). Please visit Allison's fantastic [accompanying GitHub repository](https://github.com/aparrish/gutenberg-poetry-corpus) for usage inspiration as well as more information on how the data was mined, how to create your own version of the corpus, and examples of projects using it.
This dataset contains 3,085,117 lines of poetry from hundreds of Project Gutenberg books. Each line has a corresponding `gutenberg_id` (1191 unique values) from Project Gutenberg.
```python
Dataset({
features: ['line', 'gutenberg_id'],
num_rows: 3085117
})
```
A row of data looks like this:
```python
{'line': 'And retreated, baffled, beaten,', 'gutenberg_id': 19}
```
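A minimal sketch for reassembling lines into per-book sequences via `gutenberg_id`; the `train` split name is an assumption:
```python
from collections import defaultdict

from datasets import load_dataset

corpus = load_dataset("biglam/gutenberg-poetry-corpus", split="train")

# Group poetry lines back into per-book sequences.
books = defaultdict(list)
for row in corpus:
    books[row["gutenberg_id"]].append(row["line"])
```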
| [
-0.2623831629753113,
-0.421554297208786,
0.47337785363197327,
0.04190218076109886,
-0.11128591746091843,
-0.28535664081573486,
-0.12852557003498077,
-0.4058023989200592,
0.06457360833883286,
0.743005633354187,
-0.22156663239002228,
-0.750651478767395,
-0.5599853992462158,
0.253248780965805... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
juliensimon/food102 | juliensimon | 2022-10-26T19:43:21Z | 28 | 2 | null | [
"region:us"
] | 2022-10-26T19:43:21Z | 2022-10-26T08:44:52.000Z | 2022-10-26T08:44:52 | ---
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
0: apple_pie
1: baby_back_ribs
2: baklava
3: beef_carpaccio
4: beef_tartare
5: beet_salad
6: beignets
7: bibimbap
8: boeuf_bourguignon
9: bread_pudding
10: breakfast_burrito
11: bruschetta
12: caesar_salad
13: cannoli
14: caprese_salad
15: carrot_cake
16: ceviche
17: cheese_plate
18: cheesecake
19: chicken_curry
20: chicken_quesadilla
21: chicken_wings
22: chocolate_cake
23: chocolate_mousse
24: churros
25: clam_chowder
26: club_sandwich
27: crab_cakes
28: creme_brulee
29: croque_madame
30: cup_cakes
31: deviled_eggs
32: donuts
33: dumplings
34: edamame
35: eggs_benedict
36: escargots
37: falafel
38: filet_mignon
39: fish_and_chips
40: foie_gras
41: french_fries
42: french_onion_soup
43: french_toast
44: fried_calamari
45: fried_rice
46: frozen_yogurt
47: garlic_bread
48: gnocchi
49: greek_salad
50: grilled_cheese_sandwich
51: grilled_salmon
52: guacamole
53: gyoza
54: hamburger
55: hot_and_sour_soup
56: hot_dog
57: huevos_rancheros
58: hummus
59: ice_cream
60: lasagna
61: lobster_bisque
62: lobster_roll_sandwich
63: macaroni_and_cheese
64: macarons
65: miso_soup
66: mussels
67: nachos
68: omelette
69: onion_rings
70: oysters
71: pad_thai
72: paella
73: pancakes
74: panna_cotta
75: peking_duck
76: pho
77: pizza
78: pork_chop
79: poutine
80: prime_rib
81: pulled_pork_sandwich
82: ramen
83: ravioli
84: red_velvet_cake
85: risotto
86: samosa
87: sashimi
88: scallops
89: seaweed_salad
90: shrimp_and_grits
91: spaghetti_bolognese
92: spaghetti_carbonara
93: spring_rolls
94: steak
95: strawberry_shortcake
96: sushi
97: tacos
98: takoyaki
99: tiramisu
100: tuna_tartare
101: waffles
splits:
- name: test
num_bytes: 1461368965.25
num_examples: 25500
- name: train
num_bytes: 4285789478.25
num_examples: 76500
download_size: 5534173074
dataset_size: 5747158443.5
---
# Dataset Card for "food102"
This is based on the [food101](https://huggingface.co/datasets/food101) dataset with an extra class generated with a Stable Diffusion model.
A detailed walk-through is available on [YouTube](https://youtu.be/sIe0eo3fYQ4).
| [
-0.3118396997451782,
-0.6574721336364746,
0.29082241654396057,
-0.13947218656539917,
0.3454062342643738,
-0.07511746138334274,
0.29889896512031555,
-0.059991903603076935,
0.7787868976593018,
0.43937674164772034,
-0.5847495198249817,
-0.4078393578529358,
-0.6351855397224426,
-0.249764949083... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
autoevaluate/autoeval-eval-futin__feed-sen_vi-b48d12-2175169953 | autoevaluate | 2022-11-21T05:16:06Z | 28 | 0 | null | [
"autotrain",
"evaluation",
"region:us"
] | 2022-11-21T05:16:06Z | 2022-11-21T04:36:28.000Z | 2022-11-21T04:36:28 | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- futin/feed
eval_info:
task: text_zero_shot_classification
model: bigscience/bloom-1b7
metrics: []
dataset_name: futin/feed
dataset_config: sen_vi
dataset_split: test
col_mapping:
text: text
classes: classes
target: target
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: bigscience/bloom-1b7
* Dataset: futin/feed
* Config: sen_vi
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@futin](https://huggingface.co/futin) for evaluating this model. | [
-0.18771296739578247,
-0.35049504041671753,
0.43108344078063965,
0.029933322221040726,
0.10903917998075485,
-0.18012456595897675,
-0.028763441368937492,
-0.4298369884490967,
0.020235706120729446,
0.32147061824798584,
-0.9440711736679077,
-0.22993305325508118,
-0.6779638528823853,
-0.003813... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
liuyanchen1015/VALUE_mnli_dey_it | liuyanchen1015 | 2022-11-28T22:28:58Z | 28 | 0 | null | [
"region:us"
] | 2022-11-28T22:28:58Z | 2022-11-28T22:28:35.000Z | 2022-11-28T22:28:35 | ---
dataset_info:
features:
- name: premise
dtype: string
- name: hypothesis
dtype: string
- name: label
dtype: int64
- name: idx
dtype: int64
- name: score
dtype: int64
splits:
- name: train
num_bytes: 7643138
num_examples: 33927
- name: dev_matched
num_bytes: 189967
num_examples: 863
- name: dev_mismatched
num_bytes: 171667
num_examples: 709
- name: test_matched
num_bytes: 186114
num_examples: 849
- name: test_mismatched
num_bytes: 158987
num_examples: 717
download_size: 5183771
dataset_size: 8349873
---
# Dataset Card for "VALUE2_mnli_dey_it"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.3183383345603943,
-0.1908511221408844,
0.005892105866223574,
0.16785156726837158,
-0.40432417392730713,
-0.16187335550785065,
0.43093571066856384,
-0.16862203180789948,
0.9529756903648376,
0.42071008682250977,
-0.6386229991912842,
-0.542593777179718,
-0.6936343908309937,
-0.277082830667... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
memray/nus | memray | 2022-12-31T06:15:39Z | 28 | 0 | null | [
"license:cc-by-nc-sa-4.0",
"region:us"
] | 2022-12-31T06:15:39Z | 2022-12-31T06:15:13.000Z | 2022-12-31T06:15:13 | ---
license: cc-by-nc-sa-4.0
---
| [
-0.12853392958641052,
-0.18616779148578644,
0.6529127955436707,
0.49436280131340027,
-0.19319361448287964,
0.23607419431209564,
0.36072003841400146,
0.050563063472509384,
0.579365611076355,
0.7400140762329102,
-0.6508104205131531,
-0.23783954977989197,
-0.7102249264717102,
-0.0478260256350... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
ardauzunoglu/tr-wikihow-summ | ardauzunoglu | 2023-01-10T12:27:29Z | 28 | 0 | null | [
"region:us"
] | 2023-01-10T12:27:29Z | 2023-01-10T12:24:11.000Z | 2023-01-10T12:24:11 | ---
dataset_info:
features:
- name: text
dtype: string
- name: summary
dtype: string
splits:
- name: train
num_bytes: 279070558
num_examples: 113356
- name: validation
num_bytes: 15174147
num_examples: 6082
- name: test
num_bytes: 14888006
num_examples: 5984
download_size: 166588788
dataset_size: 309132711
---
# Dataset Card for "tr-wikihow-summ"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.6183182001113892,
-0.09995672106742859,
-0.022686796262860298,
-0.01583886332809925,
-0.5741395950317383,
0.14969411492347717,
0.14601139724254608,
0.061933230608701706,
1.1682971715927124,
0.49750566482543945,
-0.9321308732032776,
-0.7392826676368713,
-0.6603432297706604,
-0.1699690371... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
jonathan-roberts1/SAT-4 | jonathan-roberts1 | 2023-04-03T16:17:18Z | 28 | 0 | null | [
"license:other",
"region:us"
] | 2023-04-03T16:17:18Z | 2023-02-03T18:12:58.000Z | 2023-02-03T18:12:58 | ---
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': barren land
'1': grassland
'2': other
'3': trees
splits:
- name: train
num_bytes: 150589308
num_examples: 100000
download_size: 177776551
dataset_size: 150589308
license: other
---
# Dataset Card for SAT-4
## Dataset Description
- **Paper** [Deepsat: a learning framework for satellite imagery](https://dl.acm.org/doi/pdf/10.1145/2820783.2820816)
- **Split:** Test
### Split Information
This HuggingFace dataset repository contains just the 'Test' split.
### Licensing Information
Public Domain
## Citation Information
[https://dl.acm.org/doi/pdf/10.1145/2820783.2820816](https://dl.acm.org/doi/pdf/10.1145/2820783.2820816)
```
@inproceedings{basu2015deepsat,
title = {Deepsat: a learning framework for satellite imagery},
author = {Basu, Saikat and Ganguly, Sangram and Mukhopadhyay, Supratik and DiBiano, Robert and Karki, Manohar and Nemani, Ramakrishna},
year = 2015,
booktitle = {Proceedings of the 23rd SIGSPATIAL international conference on advances in geographic information systems},
pages = {1--10}
}
``` | [
-0.7600498795509338,
-0.32635319232940674,
0.2651171088218689,
0.17406004667282104,
-0.5345703959465027,
0.03893880918622017,
-0.26866021752357483,
-0.12093625962734222,
0.1002400740981102,
0.5025094747543335,
-0.6328976154327393,
-0.8016008138656616,
-0.7035714983940125,
-0.01531159877777... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
sedthh/cmu_wiki_qa | sedthh | 2023-02-28T20:46:45Z | 28 | 2 | null | [
"task_categories:question-answering",
"task_categories:summarization",
"size_categories:1K<n<10K",
"language:en",
"license:mit",
"Carnegie Mellon University",
"University of Pittsburgh",
"Wikipedia",
"Q&A",
"region:us"
] | 2023-02-28T20:46:45Z | 2023-02-28T20:46:15.000Z | 2023-02-28T20:46:15 | ---
dataset_info:
features:
- name: INSTRUCTION
dtype: string
- name: RESPONSE
dtype: string
- name: SOURCE
dtype: string
- name: METADATA
dtype: string
splits:
- name: train
num_bytes: 410246
num_examples: 1610
download_size: 105516
dataset_size: 410246
license: mit
task_categories:
- question-answering
- summarization
language:
- en
tags:
- Carnegie Mellon University
- University of Pittsburgh
- Wikipedia
- Q&A
pretty_name: Question-Answer Dataset
size_categories:
- 1K<n<10K
---
# Dataset Card for "cmu_wiki_qa"
A filtered / cleaned version of the http://www.cs.cmu.edu/~ark/QA-data/ Q&A dataset, which provides manually-generated factoid questions from Wikipedia articles.
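A minimal loading sketch; the field names follow the YAML header, while the `train` split name is an assumption:
```python
from datasets import load_dataset

qa = load_dataset("sedthh/cmu_wiki_qa", split="train")  # split name is an assumption
example = qa[0]
print(example["INSTRUCTION"], "->", example["RESPONSE"])
```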
**Acknowledgments**
These data were collected by Noah Smith, Michael Heilman, Rebecca Hwa, Shay Cohen, Kevin Gimpel, and many students at Carnegie Mellon University and the University of Pittsburgh between 2008 and 2010.
Their research project was supported by NSF IIS-0713265 (to Smith), an NSF Graduate Research Fellowship (to Heilman), NSF IIS-0712810 and IIS-0745914 (to Hwa), and Institute of Education Sciences, U.S. Department of Education R305B040063 (to Carnegie Mellon).
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
| [
-0.7954962849617004,
-0.8432942032814026,
0.4917246699333191,
-0.5431699156761169,
-0.2263661026954651,
-0.24642108380794525,
-0.033933307975530624,
-0.36712777614593506,
0.4707285761833191,
0.2573823630809784,
-0.8071140050888062,
-0.5911597013473511,
-0.24944902956485748,
0.2183620184659... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
HuggingFaceH4/helpful_instructions | HuggingFaceH4 | 2023-03-27T22:25:58Z | 28 | 8 | null | [
"task_categories:text-generation",
"language:en",
"license:apache-2.0",
"instruct",
"human-feedback",
"region:us"
] | 2023-03-27T22:25:58Z | 2023-03-03T10:08:01.000Z | 2023-03-03T10:08:01 | ---
license: apache-2.0
task_categories:
- text-generation
language:
- en
tags:
- instruct
- human-feedback
pretty_name: Helpful Instructions
dataset_info:
- config_name: self_instruct
features:
- name: prompt
dtype: string
- name: completion
dtype: string
- name: meta
struct:
- name: source
dtype: string
- name: config
dtype: string
splits:
- name: train
num_bytes: 24378246
num_examples: 82612
download_size: 12589487
dataset_size: 24378246
- config_name: super_natural_instructions
features:
- name: prompt
dtype: string
- name: completion
dtype: string
- name: meta
struct:
- name: source
dtype: string
- name: config
dtype: string
splits:
- name: train
num_bytes: 43352923
num_examples: 50000
download_size: 22605900
dataset_size: 43352923
- config_name: prompt_source
features:
- name: prompt
dtype: string
- name: completion
dtype: string
- name: meta
struct:
- name: source
dtype: string
- name: config
dtype: string
splits:
- name: train
num_bytes: 59843768
num_examples: 52657
download_size: 23607134
dataset_size: 59843768
- config_name: all
features:
- name: prompt
dtype: string
- name: completion
dtype: string
- name: meta
struct:
- name: source
dtype: string
- name: config
dtype: string
splits:
- name: train
num_bytes: 127574937
num_examples: 185269
download_size: 58901460
dataset_size: 127574937
---
# Dataset Card for Helpful Instructions
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:** Lewis Tunstall
### Dataset Summary
Helpful Instructions is a dataset of `(instruction, completion)` pairs that are derived from public datasets. As the name suggests, it focuses on instructions that are "helpful", i.e. the kind of questions or tasks a human user might instruct an AI assistant to perform. You can load the dataset as follows:
```python
from datasets import load_dataset
# Load all subsets
helpful_instructions = load_dataset("HuggingFaceH4/helpful_instructions", name="all")
# Load a single subset
helpful_instructions_subset = load_dataset("HuggingFaceH4/helpful_instructions", name="self_instruct")
```
### Supported Tasks and Leaderboards
This dataset can be used to fine-tune pretrained language models to follow instructions.
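For example, here is a minimal sketch of turning the `(prompt, completion)` pairs into plain-text examples for supervised fine-tuning; the template is purely illustrative and `helpful_instructions` refers to the dataset loaded above:
```python
def to_text(example):
    # Illustrative prompt template; adapt to your model's expected format.
    return {"text": f"Instruction: {example['prompt']}\n\nResponse: {example['completion']}"}

sft_ready = helpful_instructions["train"].map(to_text)
```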
### Changelog
* March 5, 2023: `v1.0.0` release, with subsets from `HuggingFaceH4/self_instruct` (`self_instruct`, `super_natural_instructions`, `prompt_source`) | [
-0.276564359664917,
-0.6618040204048157,
0.24710221588611603,
0.32139745354652405,
-0.22107525169849396,
-0.41009199619293213,
-0.24460431933403015,
0.03859282284975052,
0.31620681285858154,
0.4556522071361542,
-0.9313254356384277,
-0.7300167679786682,
-0.4731752574443817,
0.19746999442577... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Joanne/Metaphors_and_Analogies | Joanne | 2023-05-30T20:40:56Z | 28 | 0 | null | [
"task_categories:question-answering",
"task_categories:token-classification",
"language:en",
"region:us"
] | 2023-05-30T20:40:56Z | 2023-03-10T19:57:35.000Z | 2023-03-10T19:57:35 | ---
task_categories:
- question-answering
- token-classification
language:
- en
---
# Metaphors and analogies datasets
These datasets contain word pairs and quadruples forming analogies, metaphoric mappings or semantically unacceptable compositions.
- Pair instances are pairs of nouns A and B in a sentence of the form "A is a B".
- Quadruple instances are of the form: <(A,B),(C,D)>.
There is an analogy when A is to B what C is to D.
The analogy is also a metaphor when (A,B) and (C,D) form a metaphoric mapping, usually when they come from different domains.
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
Language: English
### Datasets and paper links
| Name | Size | Labels | Description |
| ---------: | :----- |:-------- | :-------------------------------------------------------------------------- |
| `Cardillo` | 260 *2 | 1, 2 | Pairs of "A is-a B" sentences composed of one metaphoric and one literal sentence. The two sentences of a given pair share the same B term. |
| `Jankowiak`| 120*3 | 0, 1, 2 | Triples of "A is-a/is-like-a B" sentences with exactly one literal, one semantically anomalous and one metaphoric sentence. |
| `Green` | 40*3 | 0, 1, 2 | Triples of proportional analogies, made of 4 terms <A, B, Ci, Di> each. One stem <A,B> is composed with 3 different <Ci,Di> pairs, to form exactly one near analogy, one far analogy and one non-analogical quadruple. |
| `Kmiecik` | 720 | 0, 1, 2 | Quadruples <A,B,C,D> labelled as analogy: True/False and far_analogy: True/False. |
| `SAT-met` | 160?*5 | 0, 1, 2, 12 | One pair stem <A,B> to combine with 5 different pairs <Ci,Di> in an attempt to form proportional analogies. Only one <Ci,Di> forms an analogy with <A,B>. We additionally labelled the analogies as **metaphoric**: True/False. |
| Name | Paper Citation | Paper link | Dataset link |
| ---------: | :------- | :------------------------------ |-----------------------------------------: |
| `Cardillo` | | [Cardillo (2010)](https://link.springer.com/article/10.3758/s13428-016-0717-1) [Cardillo (2017)](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2952404/ ) | |
| `Jankowiak`| | [Jankowiak (2020)]( https://link-springer-com.abc.cardiff.ac.uk/article/10.1007/s10936-020-09695-7) | |
| `Green` | Green, A. E., Kraemer, D. J. M., Fugelsang, J., Gray, J. R., & Dunbar, K. (2010). Connecting Long Distance: Semantic Distance in Analogical Reasoning Modulates Frontopolar Cortex Activity. Cerebral Cortex, 10, 70-76. | [Green (20)]() ||
| `Kmiecik` |Kmiecik, M. J., Brisson, R. J., & Morrison, R. G. (2019). The time course of semantic and relational processing during verbal analogical reasoning. Brain and Cognition, 129, 25-34. | [Kmiecik (20)]() ||
| `SAT-met` | | [Turney (2005)](https://arxiv.org/pdf/cs/0508053.pdf) | |
### Labels :
- Pairs
- **0** : anomaly
- **1** : literal
- **2** : metaphor
- Quadruples :
- **0** : not an analogy
- **1** : an analogy but not a metaphor
- **2** : an analogy and a metaphor or a far analogy
- **12** : maybe a metaphor, somewhere between 1 and 2
### Dataset Splits
- Both lexical and random splits are available for classification experiments.
- Size of the splits :
- **train** : 50 %
- **validation** : 10 %
- **test** : 40 %
- Additionally, for all datasets, the `5-folds` field gives frozen splits for a five-folds cross validation experiment with train/val/test = 70/10/20% of the sets.
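A minimal loading sketch (the configuration name below mirrors the dataset names listed in the tables that follow and is an assumption; adjust it to the configurations this repository actually exposes):
```python
from datasets import load_dataset

# Configuration name assumed from the dataset-name tables below.
pairs = load_dataset("Joanne/Metaphors_and_Analogies", "Pairs_Cardillo_random_split")
print(pairs["train"][0])
```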
# Datasets for Classification
- Task : binary or 3-class classification of pairs or quadruples. Each pair or quadruple is classified as anomalous, non-metaphoric, or metaphoric.
## Pairs
### Datasets names & splits :
| Original set | Dataset name | Split |
|-------------:| :------------ | :------ |
| Cardillo | Pairs\_Cardillo\_random_split | random |
| | Pairs\_Cardillo\_lexical_split | lexical |
| Jankowiak | Pairs\_Jankowiac\_random_split | random |
| | Pairs\_Jankowiac\_lexical_split | lexical |
### Data fields :
| Field | Description | Type |
| -------------:| :------------ | ---- |
| corpus | name of the original dataset | str |
| id | instance id | str |
| set_id | id of the set containing the given instance in the multiple choice task | int |
| label | 0, 1, 2 | int |
| sentence | A is-a B sentence. | str |
| A | A expression in the sentence | str |
| B | B expression in the sentence | str |
| A\_position | position of A in the sentence | list(int) |
| B\_position | position of B in the sentence | list(int) |
| 5-folds | frozen splits for cross validation | list(str) |
### Examples :
| Name | Example | Label|
| -------: | :------------------------------------- | :-------- |
|Cardillo | | |
|Jankowiac | | |
## Quadruples
### Datasets names & splits
| Original set | dataset name | Split |
| -------: | :------------------------------------- | :-------- |
|Green | Quadruples\_Green\_random_split | random |
| | Quadruples\_Green\_lexical_split | lexical |
|Kmiecik | Quadruples\_Kmiecik\_random_split | random |
| | Quadruples\_Kmiecik\_lexical\_split\_on\_AB | lexical AB |
| | Quadruples\_Kmiecik\_lexical_split\_on\_CD | lexical CD |
|SAT | Quadruples\_SAT\_random\_split | random |
| | Quadruples\_SAT\_lexical\_split | lexical |
### Data fields :
| Field| Description | Type |
| -------------: | :------------ | :------------ |
| corpus | Name of the original dataset | str |
| id | Element id | str |
| set\_id | Id of the set containing the given instance in the multiple-choice task datasets | int |
| label | 0, 1, 2, 12 | int |
| AB | pair of terms | list(str) |
| CD | pair of terms | list(str) |
| 5-folds | frozen splits for cross validation | list(str) |
### Examples :
| Name | Example | Label|
|-------: | :------------------------------------- | :-------- |
|Green | | |
|Kmiecik | | |
| SAT | | |
# Datasets for multiple choice questions or permutation
- Task : one stem and multiple choices. The stem is combined with each choice to form a sentence; each resulting sentence has a label in <0,1,2>.
## Pairs
### Datasets names & splits :
| Original set | dataset name | Split |
| -----------|------| :---- |
| Cardillo | Pairs\_Cardillo\_set | test only |
| Jankowiak | Pairs\_Jankowiac\_set |test only |
### Data fields :
| Field | Description | Type |
| -------------: | :------------ | :------------ |
| corpus | Name of the original dataset | str |
| id | Element id | str |
| pair_ids | Ids of each pair as appearing in the classification datasets. | list(str) |
| labels | 0, 1, 2 | list(int) |
| sentences | List of the sentences composing the set | list(str) |
| A\_positions | Positions of the A's in each sentence | list(list(int)) |
| B\_positions | Positions of the B's in each sentence | list(list(int)) |
| answer | Index of the metaphor | int |
| stem | Term shared between the sentences of the set. | str |
| 5-folds | frozen splits for cross validation | list(str) |
### Examples :
| Name | Stem | Sentences |Label|
|-------: |-------: | :------------------------------------- | :-------- |
|Cardillo | comet | The astronomer's obsession was a comet. | 1 |
| | | The politician's career was a comet. | 2 |
| Jankowiak | harbour | This banana is like a harbour | 0 |
| | | A house is a harbour | 2|
| | | This area is a harbour | 1 |
## Quadruples
### Datasets names & splits :
| Original set | dataset name | Split |
| ----------: | :------| :---- |
| Green | Quadruples\_Green\_set | test only |
| SAT | Quadruples\_SAT\_met_set | test only |
### Data fields :
| Field | Description | Type |
|-------------: | :------------ | :------------ |
| corpus | name of the original dataset | str |
| id | Element id | str |
| pair\_ids | Ids of the instances as appearing in the classification datasets | list(str) |
| labels | 0, 1, 2, 12 | list(int) |
| answer | temp | int |
| stem | Word pair to compose with all the other pairs of the set | list(str) |
| pairs | List of word pairs | list(list(str)) |
| 5-folds | Frozen splits for cross validation | list(str) |
### Examples :
| Name | Example | Label|
|-------: | :------------------------------------- | :-------- |
|Green | | |
| | | |
| SAT | | |
| [
-0.3572019338607788,
-0.7974300980567932,
0.5076984763145447,
0.21577230095863342,
-0.10790203511714935,
-0.07787362486124039,
-0.17523044347763062,
-0.2910381853580475,
0.47019436955451965,
0.2901235520839691,
-0.6813206076622009,
-0.7327901124954224,
-0.4910005033016205,
0.19632577896118... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Fearao/guba_eastmoney | Fearao | 2023-03-19T04:53:07Z | 28 | 1 | null | [
"task_categories:text-classification",
"language:zh",
"region:us"
] | 2023-03-19T04:53:07Z | 2023-03-19T04:51:36.000Z | 2023-03-19T04:51:36 | ---
task_categories:
- text-classification
language:
- zh
---
Comments from Eastmoney's Guba (股吧) stock forum, manually labelled. | [
-0.2980002760887146,
-0.4657445549964905,
-0.2095470428466797,
0.8510376214981079,
-0.7565708160400391,
0.28509801626205444,
0.352180153131485,
-0.5010937452316284,
0.7254206538200378,
1.1119462251663208,
-0.19401350617408752,
-0.38250628113746643,
-1.0503387451171875,
0.1508597582578659,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
metaeval/race-c | metaeval | 2023-05-31T08:39:38Z | 28 | 0 | null | [
"task_categories:question-answering",
"task_categories:multiple-choice",
"language:en",
"region:us"
] | 2023-05-31T08:39:38Z | 2023-04-06T07:49:42.000Z | 2023-04-06T07:49:42 | ---
task_categories:
- question-answering
- multiple-choice
language:
- en
---
RACE-C: additional reading-comprehension data in the style of RACE (which covers middle- and high-school exams), but at college level.
https://github.com/mrcdata/race-c
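A minimal loading sketch (assuming the data is exposed through the standard `datasets` loader):
```python
from datasets import load_dataset

race_c = load_dataset("metaeval/race-c")
print(race_c)
```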
```bib
@InProceedings{pmlr-v101-liang19a,
title={A New Multi-choice Reading Comprehension Dataset for Curriculum Learning},
author={Liang, Yichan and Li, Jianheng and Yin, Jian},
booktitle={Proceedings of The Eleventh Asian Conference on Machine Learning},
pages={742--757},
year={2019}
}
``` | [
-0.3264005482196808,
-0.316995233297348,
0.32830867171287537,
0.07298919558525085,
0.04318202659487724,
0.48792701959609985,
0.08906002342700958,
-0.3262009024620056,
0.26297056674957275,
0.24643415212631226,
-0.7121483683586121,
-0.682804524898529,
-0.46026232838630676,
0.2459469586610794... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
mstz/hill | mstz | 2023-04-16T17:31:39Z | 28 | 1 | null | [
"task_categories:tabular-classification",
"size_categories:n<1K",
"language:en",
"license:cc",
"hill",
"tabular_classification",
"binary_classification",
"UCI",
"region:us"
] | 2023-04-16T17:31:39Z | 2023-04-06T13:42:23.000Z | 2023-04-06T13:42:23 | ---
language:
- en
tags:
- hill
- tabular_classification
- binary_classification
- UCI
pretty_name: Hill
size_categories:
- n<1K
task_categories:
- tabular-classification
configs:
- hill
license: cc
---
# Hill
The [Hill dataset](https://archive.ics.uci.edu/ml/datasets/Hill) from the [UCI ML repository](https://archive.ics.uci.edu/ml/datasets).
Do the plotted coordinates draw a hill?
# Configurations and tasks
| **Configuration** | **Task** | **Description** |
|-------------------|---------------------------|------------------------------------------|
| hill | Binary classification | Do the plotted coordinates draw a hill? |
# Usage
```python
from datasets import load_dataset
dataset = load_dataset("mstz/hill")["train"]
```
# Features
Features are the coordinates of the plotted points: feature `X{i}` is the `y` coordinate of the point `(i, X{i})`. | [
-0.14348676800727844,
-0.3908904492855072,
0.3119356334209442,
-0.014444977976381779,
-0.03050956130027771,
-0.18719568848609924,
0.05216161906719208,
-0.3324187397956848,
0.36044248938560486,
0.5384460091590881,
-0.6683275103569031,
-1.074310541152954,
-0.6912054419517517,
-0.205437526106... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
mstz/ionosphere | mstz | 2023-04-16T17:32:10Z | 28 | 0 | null | [
"task_categories:tabular-classification",
"size_categories:n<1K",
"language:en",
"license:cc",
"ionosphere",
"tabular_classification",
"binary_classification",
"UCI",
"region:us"
] | 2023-04-16T17:32:10Z | 2023-04-06T14:08:12.000Z | 2023-04-06T14:08:12 | ---
language:
- en
tags:
- ionosphere
- tabular_classification
- binary_classification
- UCI
pretty_name: Ionosphere
size_categories:
- n<1K
task_categories:
- tabular-classification
configs:
- ionosphere
license: cc
---
# Ionosphere
The [Ionosphere dataset](https://archive.ics.uci.edu/ml/datasets/Ionosphere) from the [UCI ML repository](https://archive.ics.uci.edu/ml/datasets).
Radar returns from the ionosphere; the task is to determine whether the received signal shows evidence of free electrons in the ionosphere.
# Configurations and tasks
| **Configuration** | **Task** | **Description** |
|-------------------|---------------------------|---------------------------------------------------------------|
| ionosphere | Binary classification | Does the received signal indicate electrons in the ionosphere?|
# Usage
```python
from datasets import load_dataset
dataset = load_dataset("mstz/ionosphere")["train"]
``` | [
-0.7921306490898132,
-0.11734882742166519,
0.30655691027641296,
0.2575836479663849,
0.054218124598264694,
0.13973653316497803,
0.1483495533466339,
-0.18908704817295074,
0.39909136295318604,
0.8913514018058777,
-0.570875883102417,
-0.8187791705131531,
-0.5376288294792175,
0.1660627126693725... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
mstz/musk | mstz | 2023-04-16T17:34:46Z | 28 | 0 | null | [
"task_categories:tabular-classification",
"size_categories:n<1K",
"language:en",
"license:cc",
"musk",
"tabular_classification",
"binary_classification",
"multiclass_classification",
"UCI",
"region:us"
] | 2023-04-16T17:34:46Z | 2023-04-06T19:08:11.000Z | 2023-04-06T19:08:11 | ---
language:
- en
tags:
- musk
- tabular_classification
- binary_classification
- multiclass_classification
- UCI
pretty_name: Musk
size_categories:
- n<1K
task_categories:
- tabular-classification
configs:
- musk
license: cc
---
# Musk
The [Musk dataset](https://archive.ics.uci.edu/ml/datasets/Musk) from the [UCI ML repository](https://archive.ics.uci.edu/ml/datasets).
Dataset of molecule conformations; the task is to predict whether a molecule is a musk.
# Configurations and tasks
| **Configuration** | **Task** | **Description** |
|-------------------|---------------------------|------------------------|
| musk | Binary classification | Is the molecule a musk?|
# Usage
```python
from datasets import load_dataset
dataset = load_dataset("mstz/musk", "musk")["train"]
``` | [
-0.15825100243091583,
-0.1445147693157196,
0.10143861919641495,
-0.0012274101609364152,
0.12248315662145615,
0.27369225025177,
-0.13856011629104614,
0.0408872552216053,
0.43753254413604736,
0.5666285157203674,
-0.47043392062187195,
-0.5732706189155579,
-0.667425811290741,
0.358340442180633... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
mstz/planning | mstz | 2023-04-16T17:57:54Z | 28 | 0 | null | [
"task_categories:tabular-classification",
"size_categories:n<1K",
"language:en",
"license:cc",
"planning",
"tabular_classification",
"binary_classification",
"UCI",
"region:us"
] | 2023-04-16T17:57:54Z | 2023-04-06T22:38:04.000Z | 2023-04-06T22:38:04 | ---
language:
- en
tags:
- planning
- tabular_classification
- binary_classification
- UCI
pretty_name: Planning
size_categories:
- n<1K
task_categories:
- tabular-classification
configs:
- planning
license: cc
---
# Planning
The [Planning dataset](https://archive.ics.uci.edu/ml/datasets/Planning) from the [UCI ML repository](https://archive.ics.uci.edu/ml/datasets).
# Configurations and tasks
| **Configuration** | **Task** | **Description** |
|-------------------|---------------------------|------------------------------------|
| planning | Binary classification | Is the patient in a planning state?|
# Usage
```python
from datasets import load_dataset
dataset = load_dataset("mstz/planning")["train"]
``` | [
-0.16250093281269073,
-0.061314575374126434,
0.3053136467933655,
0.47955384850502014,
0.015824874863028526,
-0.6688075065612793,
-0.008193629793822765,
0.13122618198394775,
0.4626169800758362,
0.6637052893638611,
-0.8249124884605408,
-0.4951164424419403,
-0.6587682366371155,
0.123877227306... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
mstz/spect | mstz | 2023-04-16T18:02:28Z | 28 | 0 | null | [
"task_categories:tabular-classification",
"size_categories:n<1K",
"language:en",
"license:cc",
"spect",
"tabular_classification",
"binary_classification",
"UCI",
"region:us"
] | 2023-04-16T18:02:28Z | 2023-04-07T08:05:44.000Z | 2023-04-07T08:05:44 | ---
language:
- en
tags:
- spect
- tabular_classification
- binary_classification
- UCI
pretty_name: SPECT
size_categories:
- n<1K
task_categories:
- tabular-classification
configs:
- spect
- spectf
license: cc
---
# SPECT
The [SPECT Heart dataset](https://archive.ics.uci.edu/ml/datasets/SPECT+Heart) from the [UCI ML repository](https://archive.ics.uci.edu/ml/datasets).
# Configurations and tasks
| **Configuration** | **Task** | **Description** |
|-------------------|---------------------------|----------------------------------------------------------------|
| spect | Binary classification | Is the patient's cardiac SPECT image normal? |
| spectf | Binary classification | Is the patient's cardiac SPECT image normal? (continuous features) |
# Usage
```python
from datasets import load_dataset
dataset = load_dataset("mstz/spect", "spect")["train"]
``` | [
-0.5704916715621948,
-0.47140607237815857,
0.4331849217414856,
0.16769914329051971,
0.030845575034618378,
-0.27354368567466736,
-0.04088165983557701,
-0.16839972138404846,
0.0762256309390068,
0.9024715423583984,
-0.4673454165458679,
-0.7722233533859253,
-0.5139096975326538,
0.4442109465599... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
mstz/post_operative | mstz | 2023-04-16T17:58:06Z | 28 | 0 | null | [
"task_categories:tabular-classification",
"size_categories:1K<n<10K",
"language:en",
"license:cc",
"post_operative",
"tabular_classification",
"binary_classification",
"multiclass_classification",
"UCI",
"region:us"
] | 2023-04-16T17:58:06Z | 2023-04-13T10:31:10.000Z | 2023-04-13T10:31:10 | ---
language:
- en
tags:
- post_operative
- tabular_classification
- binary_classification
- multiclass_classification
- UCI
pretty_name: Post Operative
size_categories:
- 1K<n<10K
task_categories:
- tabular-classification
configs:
- post_operative
- post_operative_binary
license: cc
---
# Post Operative
The [PostOperative dataset](https://archive-beta.ics.uci.edu/dataset/82/post+operative+patient) from the [UCI repository](https://archive-beta.ics.uci.edu/).
Should the patient be discharged from the hospital, go to the ground floor, or to the ICU?
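A usage sketch, assuming the same loading pattern as the other mstz datasets (configuration names are listed in the table below):
```python
from datasets import load_dataset

dataset = load_dataset("mstz/post_operative", "post_operative")["train"]
```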
# Configurations and tasks
| **Configuration** | **Task** |
|-----------------------|---------------------------|
| post_operative | Multiclass classification.|
| post_operative_binary | Binary classification. | | [
-0.2742632031440735,
-0.030290983617305756,
0.5502346754074097,
0.09191451221704483,
-0.4175216257572174,
0.029418645426630974,
0.08427134156227112,
-0.06506577879190445,
0.052653729915618896,
0.7983273267745972,
-0.3631893992424011,
-0.9053176641464233,
-1.142330527305603,
-0.194026425480... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
mstz/wine_origin | mstz | 2023-04-16T18:06:09Z | 28 | 0 | null | [
"task_categories:tabular-classification",
"size_categories:n<1K",
"language:en",
"license:cc",
"wine_origin",
"tabular_classification",
"binary_classification",
"multiclass_classification",
"UCI",
"region:us"
] | 2023-04-16T18:06:09Z | 2023-04-14T16:22:09.000Z | 2023-04-14T16:22:09 | ---
language:
- en
tags:
- wine_origin
- tabular_classification
- binary_classification
- multiclass_classification
- UCI
pretty_name: Wine Origin
size_categories:
- n<1K
task_categories:
- tabular-classification
configs:
- wine_origin
- wine_origin_0
- wine_origin_1
- wine_origin_2
license: cc
---
# Wine Origin
The [Wine Origin dataset](https://archive-beta.ics.uci.edu/dataset/109/wine) from the [UCI repository](https://archive-beta.ics.uci.edu/).
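A usage sketch, assuming the same loading pattern as the other mstz datasets (configuration names are listed in the table below):
```python
from datasets import load_dataset

dataset = load_dataset("mstz/wine_origin", "wine_origin")["train"]
```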
# Configurations and tasks
| **Configuration** | **Task** | **Description** |
|-----------------------|---------------------------|-------------------------|
| wine_origin | Multiclass classification.| |
| wine_origin_0 | Binary classification. | Is the instance of class 0? |
| wine_origin_1 | Binary classification. | Is the instance of class 1? |
| wine_origin_2 | Binary classification. | Is the instance of class 2? | | [
-0.4130823016166687,
-0.38195303082466125,
0.26383399963378906,
0.20915034413337708,
-0.18487673997879028,
-0.0046806735917925835,
-0.051099736243486404,
-0.3001255989074707,
0.2764647603034973,
0.6394191384315491,
-0.8180461525917053,
-0.5581575632095337,
-0.5768380165100098,
-0.131964087... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
mstz/dexter | mstz | 2023-04-20T10:23:41Z | 28 | 0 | null | [
"task_categories:tabular-classification",
"language:en",
"dexter",
"tabular_classification",
"binary_classification",
"UCI",
"region:us"
] | 2023-04-20T10:23:41Z | 2023-04-17T10:21:58.000Z | 2023-04-17T10:21:58 | ---
language:
- en
tags:
- dexter
- tabular_classification
- binary_classification
- UCI
pretty_name: Dexter
task_categories: # Full list at https://github.com/huggingface/hub-docs/blob/main/js/src/lib/interfaces/Types.ts
- tabular-classification
configs:
- dexter
---
# Dexter
The [Dexter dataset](https://archive-beta.ics.uci.edu/dataset/168/dexter) from the [UCI repository](https://archive-beta.ics.uci.edu/).
# Configurations and tasks
| **Configuration** | **Task** |
|-----------------------|---------------------------|
| dexter | Binary classification.|
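# Usage
A sketch, assuming the same loading pattern as the other mstz datasets:
```python
from datasets import load_dataset

dataset = load_dataset("mstz/dexter", "dexter")["train"]
```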
| [
-0.37669023871421814,
-0.11963726580142975,
0.20241588354110718,
0.33710694313049316,
-0.30553704500198364,
0.1367228776216507,
0.24331426620483398,
-0.19131842255592346,
0.6767458319664001,
0.5792402625083923,
-0.47526365518569946,
-0.7838643193244934,
-0.7482280135154724,
0.0564898587763... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
edarchimbaud/timeseries-1d-stocks | edarchimbaud | 2023-11-21T05:07:10Z | 28 | 5 | null | [
"task_categories:tabular-regression",
"language:en",
"license:mit",
"region:us"
] | 2023-11-21T05:07:10Z | 2023-05-15T17:02:00.000Z | 2023-05-15T17:02:00 | ---
language:
- en
license: mit
task_categories:
- tabular-regression
dataset_info:
features:
- name: symbol
dtype: string
- name: date
dtype: string
- name: open
dtype: float64
- name: high
dtype: float64
- name: low
dtype: float64
- name: close
dtype: float64
- name: adj_close
dtype: float64
- name: volume
dtype: float64
splits:
- name: train
num_bytes: 598131989
num_examples: 8535427
download_size: 296223107
dataset_size: 598131989
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "timeseries-daily-sp500"
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://edarchimbaud.substack.com
- **Repository:** https://github.com/edarchimbaud
- **Point of Contact:** contact@edarchimbaud.com
### Dataset Summary
The timeseries-daily-sp500 dataset provides daily historical data for companies in the S&P 500 index.
### Supported Tasks and Leaderboards
The dataset can be used to train a model for systematic trading. The model performance is evaluated based on the return / risk profile of the positions taken by the model.
### Languages
[N/A]
## Dataset Structure
### Data Instances
[N/A]
### Data Fields
- symbol (string): the ticker symbol or abbreviation used to identify the company.
- date (string): the date of the recorded data.
- open (float64): the opening price of the stock on the given date.
- high (float64): the highest price of the stock on the given date.
- low (float64): the lowest price of the stock on the given date.
- close (float64): the closing price of the stock on the given date.
- adj_close (float64): the closing price adjusted for dividends and stock splits.
- volume (float64): the trading volume (number of shares) of the stock on the given date.
### Data Splits
A single split, called train.
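A minimal loading sketch:
```python
from datasets import load_dataset

dataset = load_dataset("edarchimbaud/timeseries-1d-stocks", split="train")
print(dataset[0])  # symbol, date, open, high, low, close, adj_close, volume
```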
## Dataset Creation
### Curation Rationale
The timeseries-daily-sp500 dataset was developed to support the development of low-frequency trading algorithms.
### Source Data
#### Initial Data Collection and Normalization
This data was sourced from the web, and aggregated.
### Annotations
#### Annotation process
[N/A]
#### Who are the annotators?
[N/A]
### Personal and Sensitive Information
[N/A]
## Considerations for Using the Data
### Social Impact of Dataset
[N/A]
### Discussion of Biases
[N/A]
### Other Known Limitations
[N/A]
## Additional Information
### Dataset Curators
The timeseries-daily-sp500 dataset was collected by https://edarchimbaud.substack.com.
### Licensing Information
The timeseries-daily-sp500 dataset is licensed under the MIT License.
### Citation Information
> https://edarchimbaud.substack.com, timeseries-daily-sp500 dataset, GitHub repository, https://github.com/edarchimbaud
### Contributions
Thanks to [@edarchimbaud](https://github.com/edarchimbaud) for adding this dataset. | [
-0.4420182704925537,
-0.4147484600543976,
-0.13322784006595612,
0.418701708316803,
-0.26673632860183716,
0.08811359107494354,
0.014902511611580849,
-0.3446574807167053,
0.692693293094635,
0.27251166105270386,
-1.081693410873413,
-0.8294243812561035,
-0.4785097539424896,
0.04401755332946777... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
bowphs/latin_merged | bowphs | 2023-05-27T20:56:22Z | 28 | 0 | null | [
"region:us"
] | 2023-05-27T20:56:22Z | 2023-05-27T20:46:17.000Z | 2023-05-27T20:46:17 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622264862060547,
0.43461528420448303,
-0.52829909324646,
0.7012971639633179,
0.7915720343589783,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104477167129517,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
TigerResearch/dev_sft | TigerResearch | 2023-06-16T01:55:22Z | 28 | 2 | null | [
"task_categories:text-generation",
"size_categories:n<1K",
"language:zh",
"license:apache-2.0",
"region:us"
] | 2023-06-16T01:55:22Z | 2023-05-29T12:23:31.000Z | 2023-05-29T12:23:31 | ---
dataset_info:
features:
- name: instruction
dtype: string
- name: input
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 29836
num_examples: 80
- name: validation
num_bytes: 9086
num_examples: 20
download_size: 0
dataset_size: 38922
license: apache-2.0
task_categories:
- text-generation
language:
- zh
size_categories:
- n<1K
---
# Dataset Card for "dev_sft"
Development SFT data for the [Tigerbot models](https://github.com/TigerResearch/TigerBot#%E6%A8%A1%E5%9E%8B%E4%B8%8B%E8%BD%BD).
It is used in [train_sft.py](https://github.com/TigerResearch/TigerBot/blob/main/train/train_sft.py).
## Usage
```python
import datasets
ds_sft = datasets.load_dataset('TigerResearch/dev_sft')
```
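As a sketch, the three fields can be assembled into a single training prompt (the template below is an illustration, not necessarily the one used by TigerBot):
```python
def build_prompt(example):
    # Concatenate instruction, optional context, and target output.
    if example["input"]:
        return f"{example['instruction']}\n{example['input']}\n{example['output']}"
    return f"{example['instruction']}\n{example['output']}"

print(build_prompt(ds_sft["train"][0]))
```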
## Fields
- instruction: the instruction text
- input: context information (optional)
- output: the generation target | [
-0.3959806263446808,
-0.4889007806777954,
-0.07217245548963547,
0.537173867225647,
-0.6157503128051758,
0.13022580742835999,
-0.016665391623973846,
0.19347400963306427,
0.5546811819076538,
0.4366280734539032,
-0.758006751537323,
-0.5280377268791199,
-0.2161293476819992,
0.16331785917282104... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
kaist-ai/Multilingual-CoT-Collection | kaist-ai | 2023-10-14T15:00:43Z | 28 | 15 | null | [
"task_categories:text-generation",
"task_categories:text-classification",
"size_categories:100K<n<1M",
"language:en",
"license:cc-by-4.0",
"arxiv:2305.14045",
"region:us"
] | 2023-10-14T15:00:43Z | 2023-06-05T04:42:21.000Z | 2023-06-05T04:42:21 | ---
license: cc-by-4.0
task_categories:
- text-generation
- text-classification
language:
- en
size_categories:
- 100K<n<1M
configs:
- config_name: fr
data_files: "./data/CoT_collection_fr.json"
- config_name: ja
data_files: "./data/CoT_collection_ja.json"
- config_name: ko
data_files: "./data/CoT_collection_ko.json"
- config_name: ru
data_files: "./data/CoT_collection_ru.json"
- config_name: zh
data_files: "./data/CoT_collection_zh.json"
---
# Dataset Card for Dataset Name
## Dataset Description
- **Homepage:https://github.com/kaistAI/CoT-Collection**
- **Repository:https://github.com/kaistAI/CoT-Collection**
- **Paper:https://arxiv.org/abs/2305.14045**
- **Point of Contact:seungone@kaist.ac.kr**
### Dataset Summary

The Multilingual CoT Collection is a dataset designed to induce Chain-of-Thought (CoT) capabilities into multilingual language models.
While proprietary LLMs excel at generating Chain-of-Thoughts when prompted, smaller LMs do not have this capability out of the box; by fine-tuning them to generate Chain-of-Thoughts, they can acquire it.
The Multilingual CoT Collection provides 1.84 million Chain-of-Thoughts augmented across 1060 tasks from the Flan Collection.
Experimental results show that fine-tuning on the CoT Collection results in (1) better zero-shot performance and (2) a better base model for few-shot learning.
This repository is the multilingual version of the English [CoT Collection](https://huggingface.co/datasets/kaist-ai/CoT-Collection).
### Supported Tasks and Leaderboards
1060 tasks chosen from the Flan Collection.
The list of categories within the CoT Collection are:
* Natural Language Inference
* Extractive Question Answering
* Closed Book Question Answering
* Science
* Toxic Classification
* Arithmetic
* Program Execution
* Dialogue
* Ethics
* Commonsense Reasoning
* Multiple Choice Question Answering
### Languages
French, Japanese, Korean, Russian, and Chinese (configurations `fr`, `ja`, `ko`, `ru`, `zh`).
## Dataset Structure
* source: The input that is given to the language model (LM).
* target: The ground truth answer to the source.
* rationale: The Chain of Thought (CoT) that explains how the target could be derived from the source.
* task: A category that shows which dataset the source and target was extracted from.
In our paper, we trained the underlying language model to generate in the following format:
```
{rationale}
[RESULT]
{target}
```
Then during evaluation, we parsed the prediction after the phrase ```[RESULT]```.
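A sketch of loading one language configuration and parsing a prediction in this format (the parsing logic is an assumption based on the description above):
```python
from datasets import load_dataset

ko = load_dataset("kaist-ai/Multilingual-CoT-Collection", "ko")

def parse_prediction(prediction: str) -> str:
    # Everything after the last [RESULT] marker is treated as the final answer.
    return prediction.split("[RESULT]")[-1].strip()

print(parse_prediction("France's capital is Paris. [RESULT] Paris"))  # -> Paris
```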
### Data Splits
| name | train |
|-------------------|------:|
|CoT-Collection|1837928|
### Citation Information
If you find the following model helpful, please considering citing our paper!
```
@article{kim2023cot,
title={The CoT Collection: Improving Zero-shot and Few-shot Learning of Language Models via Chain-of-Thought Fine-Tuning},
author={Kim, Seungone and Joo, Se June and Kim, Doyoung and Jang, Joel and Ye, Seonghyeon and Shin, Jamin and Seo, Minjoon},
journal={arXiv preprint arXiv:2305.14045},
year={2023}
}
``` | [
-0.5043435096740723,
-0.9227433204650879,
0.36275714635849,
-0.1178651750087738,
-0.3557146191596985,
0.1669037640094757,
-0.630022406578064,
-0.6757249236106873,
0.13880766928195953,
0.5716539621353149,
-0.5445171594619751,
-0.5923641920089722,
-0.4785922169685364,
0.09091919660568237,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
cmu-lti/cobracorpus | cmu-lti | 2023-06-26T17:20:21Z | 28 | 1 | null | [
"task_categories:text-generation",
"task_categories:text-classification",
"size_categories:10K<n<100K",
"language:en",
"license:openrail",
"arxiv:2306.01985",
"arxiv:2203.09509",
"region:us"
] | 2023-06-26T17:20:21Z | 2023-06-08T02:12:47.000Z | 2023-06-08T02:12:47 | ---
license: openrail
task_categories:
- text-generation
- text-classification
language:
- en
pretty_name: COBRA🐍
size_categories:
- 10K<n<100K
---
# Dataset Card for Dataset Name
## Dataset Description
- **Homepage: https://cobra.xuhuiz.com/**
- **Paper: https://arxiv.org/abs/2306.01985**
### Dataset Summary
This dataset contains COBRACORPUS and COBRACORPUS-counterfactual, described in this [paper](https://arxiv.org/abs/2306.01985)
### Data Splits
* `advContexts_explanations.csv` is `COBRACorpus-CF`
* `toxigen_explanations.csv` is the full `COBRACorpus`
* `toxigen_explanations_train.csv` is the training split of `COBRACorpus`
* `toxigen_explanations_val.csv` is the validation split of `COBRACorpus`
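The CSV files can be loaded directly with the `datasets` CSV loader (a sketch; file names are taken from the list above):
```python
from datasets import load_dataset

cobra = load_dataset(
    "cmu-lti/cobracorpus",
    data_files={
        "train": "toxigen_explanations_train.csv",
        "validation": "toxigen_explanations_val.csv",
    },
)
print(cobra["train"][0]["statement"])
```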
### Data Entries
For `COBRACorpus`, the relevant entries in the `csv` files are
`situationalContext (string)`, `speakerIdentity (string)`, `listenerIdentity (string)`, `statement (string)`,
`intent (string)`, `targetGroup (string)`, `relevantPowerDynamics (string)`, `implication (string)`,
`targetGroupEmotionalReaction (string)`, `targetGroupCognitiveReaction (string)`, `offensiveness (string)`.
Please refer to the [paper](https://arxiv.org/abs/2306.01985) for the specific explanations of these entries.
The *`examples`* entry is the few-shot prompt that we used to generate explanations.
All other entries are from the [ToxiGen](https://arxiv.org/abs/2203.09509) dataset, which is not directly relevant to this
work, but we keep them as metadata in case they are useful for future work.
### Citation Information
If you find this dataset useful, please cite:
```
@inproceedings{zhou2023cobra,
title = {COBRA Frames: Contextual Reasoning about Effects and Harms of Offensive Statements},
author = {Zhou, Xuhui and Zhu, Hao and Yerukola, Akhila and Davidson, Thomas and D. Hwang, Jena and Swayamdipta, Swabha and Sap, Maarten},
year = {2023},
booktitle = {Findings of ACL}
}
``` | [
-0.2558506727218628,
-0.6295831203460693,
0.006274246610701084,
0.06371358036994934,
-0.32436463236808777,
-0.2130417376756668,
-0.1378253996372223,
-0.2342647910118103,
0.15144111216068268,
0.24534735083580017,
-0.7133094072341919,
-0.5892440676689148,
-0.5674369931221008,
0.1613398790359... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
CIRAL/ciral | CIRAL | 2023-08-21T15:49:42Z | 28 | 1 | null | [
"task_categories:text-retrieval",
"language:ha",
"language:so",
"language:sw",
"language:yo",
"license:apache-2.0",
"region:us"
] | 2023-08-21T15:49:42Z | 2023-06-12T20:06:09.000Z | 2023-06-12T20:06:09 | ---
license: apache-2.0
language:
- ha
- so
- sw
- yo
task_categories:
- text-retrieval
multilinguality:
- multilingual
viewer: true
---
# Dataset Summary
CIRAL is a collection for cross-lingual information retrieval research across four (4) African languages. The collection comprises English queries and query-passage relevance judgements for passages in the African languages.
This dataset repo contains only the queries and relevance judgements. The corpus collection can be found here [here](https://huggingface.co/datasets/CIRAL/ciral-corpus)
# Dataset Structure
1. To download the files: the queries can be found under `ciral-{lang}/topics` and are `.tsv` files with each line in the form:
```
qid\tquery
```
while the judgements are in the folder `ciral-{lang}/qrels`, with each file in the standard TREC format:
```
qid Q0 docid relevance
```
2. To access the dataset via `datasets`:
```
ciral_dataset = load_dataset("ciral/ciral", "hausa") # or swahili, somali, yoruba
for data in ciral_dataset['train']: # or 'test'
query_id = data['query_id']
query = data['query']
pos_qrels = data['positive_passages']
neg_qrels = data['negative_passages']
for qrel in pos_qrels:
docid = qrel['docid']
text = qrel['text']
```
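For the raw files in option 1, a minimal parser might look like this (file paths are illustrative):
```
def read_topics(path):
    # Each line: qid\tquery
    with open(path, encoding="utf-8") as f:
        return dict(line.rstrip("\n").split("\t", 1) for line in f if line.strip())

def read_qrels(path):
    # Standard TREC format: qid Q0 docid relevance
    qrels = {}
    with open(path, encoding="utf-8") as f:
        for line in f:
            if not line.strip():
                continue
            qid, _, docid, rel = line.split()
            qrels.setdefault(qid, {})[docid] = int(rel)
    return qrels
```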
## Citation
...
| [
-0.5104771852493286,
-0.3042651116847992,
0.2614891231060028,
0.23608653247356415,
-0.17856872081756592,
0.04987197369337082,
-0.33159518241882324,
-0.1492513120174408,
0.5952844023704529,
0.7559638619422913,
-0.44265690445899963,
-0.9059725999832153,
-0.37399935722351074,
0.22321625053882... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
mikex86/stackoverflow-posts | mikex86 | 2023-08-01T01:31:12Z | 28 | 15 | null | [
"task_categories:question-answering",
"task_categories:text-generation",
"task_categories:text2text-generation",
"size_categories:10M<n<100M",
"language:code",
"language:en",
"license:other",
"code",
"region:us"
] | 2023-08-01T01:31:12Z | 2023-06-14T18:48:00.000Z | 2023-06-14T18:48:00 | ---
license: other
language:
- code
- en
task_categories:
- question-answering
- text-generation
- text2text-generation
tags:
- code
viewer: true
pretty_name: StackOverflow Posts Markdown
size_categories:
- 10M<n<100M
---
# StackOverflow Posts Markdown

## Dataset Summary
This dataset contains all posts submitted to StackOverflow before the 14th of June 2023 formatted as **Markdown text**.<br>
The dataset contains ~60 Million posts, totaling ~35GB in size and ~65 billion characters of text.<br>
The data is sourced from [Internet Archive StackExchange Data Dump](https://archive.org/download/stackexchange).
## Dataset Structure
Each record corresponds to one post of a particular type.
Original ordering from the data dump is not exactly preserved due to parallelism in the script used to process the data dump.
The markdown content of each post is contained in the `Body` field. The license for a particular post is contained in the `ContentLicense` field.
### Data Fields
```typescript
{
Id: long,
PostTypeId: long, // 1=Question, 2=Answer, 3=Orphaned tag wiki, 4=Tag wiki excerpt, 5=Tag wiki, 6=Moderator nomination, 7=Wiki Placeholder, 8=Privilige Wiki
AcceptedAnswerId: long | null, // only present if PostTypeId=1
ParentId: long | null, // only present if PostTypeId=2
Score: long,
ViewCount: long | null,
Body: string | null,
Title: string | null,
ContentLicense: string | null,
FavoriteCount: long | null,
CreationDate: string | null,
LastActivityDate: string | null,
LastEditDate: string | null,
LastEditorUserId: long | null,
OwnerUserId: long | null,
Tags: array<string> | null
}
```
Also consider the [StackExchange Datadump Schema Documentation](https://meta.stackexchange.com/questions/2677/database-schema-documentation-for-the-public-data-dump-and-sede), as all fields
have analogs in the original dump format.
## How to use?
```python
from datasets import load_dataset
# predownload full dataset
ds = load_dataset('mikex86/stackoverflow-posts', split='train')
# dataset streaming (will only download the data as needed)
ds = load_dataset('mikex86/stackoverflow-posts', split='train', streaming=True)
for sample in iter(ds): print(sample["Body"])
```
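For example, `PostTypeId` and `AcceptedAnswerId` can be used to pick out questions that have an accepted answer (a sketch over the streaming dataset, bounded to a few records):
```python
from datasets import load_dataset
from itertools import islice

ds = load_dataset('mikex86/stackoverflow-posts', split='train', streaming=True)

# PostTypeId == 1 marks questions; AcceptedAnswerId links to the accepted answer post.
answered = (p for p in ds if p["PostTypeId"] == 1 and p["AcceptedAnswerId"] is not None)
for post in islice(answered, 3):
    print(post["Id"], "->", post["AcceptedAnswerId"], post["Title"])
```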
## How is the text stored?
The original Data Dump formats the "Body" field as HTML, using tags such as `<code>`, `<h1>`, `<ul>`, etc.
This HTML format has been converted to Markdown.
### Markdown format
For reference, [this post on StackOverflow](https://stackoverflow.com/questions/53253940/make-react-useeffect-hook-not-run-on-initial-render) is formatted as follows:
#### Title: Make React useEffect hook not run on initial render
```markdown
According to the docs:
> `componentDidUpdate()` is invoked immediately after updating occurs. This method is not called for the initial render.
We can use the new `useEffect()` hook to simulate `componentDidUpdate()`, but it seems like `useEffect()` is being ran after every render, even the first time. How do I get it to not run on initial render?
As you can see in the example below, `componentDidUpdateFunction` is printed during the initial render but `componentDidUpdateClass` was not printed during the initial render.
```
function ComponentDidUpdateFunction() {
const [count, setCount] = React.useState(0);
React.useEffect(() => {
console.log("componentDidUpdateFunction");
});
return (
<div>
<p>componentDidUpdateFunction: {count} times</p>
<button
onClick={() => {
setCount(count + 1);
}}
>
Click Me
</button>
</div>
);
}
```
rest of the post omitted for brevity
```
## Details on the HTML to Markdown conversion
Using Jsoup, the original Body field was converted into a Jsoup Document. The child **nodes** (a term with special meaning in Jsoup) of this document were recursively traversed in depth-first order.
Jsoup defines `.text()` as follows:
> ... the normalized, combined text of this element and all its children. Whitespace is normalized and trimmed. For example, given HTML `<p>Hello <b>there</b> now! </p>`, p.text() returns "Hello there now!"
Jsoup defines a `Node` as follows:
> The base, abstract Node model. Elements, Documents, Comments etc are all Node instances.
Additionally the existence of the `TextNode` should be noted, which represents floating text inside an HTML document that is not itself an HTML element.
Thus this text tag `<p>Hello<code>World</code></p>` would have two Jsoup child nodes `TextNode(value="Hello")` and `Element(tag="code", value="World")`.
The `value` field of a `TextNode` contains the free-standing text without any further treatment (no whitespace stripping, etc.)
### Traversing Rules
- When encountering an HTML tag for which a rule exists, children are not traversed further, **unless explicitly stated otherwise**.
- When encountering an `<a>` tag, `[${element.text()}](${element.attr("href")})` is emitted.
- When encountering an `<h1>` tag, `\n# ${element.text()}\n\n` is emitted.
- When encountering an `<h2>` tag, `\n## ${element.text()}\n\n` is emitted.
- When encountering an `<h3>` tag, `\n### ${element.text()}\n\n` is emitted.
- When encountering an `<h4>` tag, `\n#### ${element.text()}\n\n` is emitted.
- When encountering an `<h5>` tag, `\n##### ${element.text()}\n\n` is emitted.
- When encountering an `<h6>` tag, `\n###### ${element.text()}\n\n` is emitted.
- When encountering a `<code>` tag, `` `${element.text()}` `` is emitted.
- When encountering a `<pre>` tag and said element **has** a `<code>` child tag, `` ```\n${element.text()}\n```\n `` is emitted.
- When encountering a `<pre>` tag and said element **does not** have a `<code>` child tag, **children are traversed further**.
- When encountering an `<li>` tag, `- ` is emitted and **children are traversed further**.
- When encountering a `<blockquote>` tag, `> ` is emitted and **children are traversed further**.
- When encountering an `<hr>` tag, `\n---\n\n` is emitted
- When encountering an `<img>` tag, `})` is emitted.
- When encountering a `<table>` tag
- `\n| ` is emitted
- For each element of `element.select("th")`
- `${element.text()} | ` is emitted
- After the loop `\n| ` is emitted
- For each element of `element.select("th")`
- For each character of the `th.text()`
- `-` is emitted
- After the loop over each character of the `th`, ` | ` is emitted
- `\n` is emitted
- For each element of `element.select("tr")` with more than one children of tag type `td`
- `| ` is emitted
- For each element of `element.select("td")`
- `${td.text()} | ` is emitted
- After the loop over `<td>` elements, `\n` is emitted
- After the loop over `<tr>` elements, `\n` is emitted
- When encountering a jsoup `TextNode`, `${node.attr(node.nodeName())}` (which is equivalent to accessing the private field `node.value`) is emitted. | [
-0.30624717473983765,
-0.42429208755493164,
0.44726240634918213,
0.04902045801281929,
-0.36793044209480286,
0.08021154999732971,
-0.10748045146465302,
-0.25780293345451355,
0.1621101051568985,
0.3609920144081116,
-0.6828992962837219,
-0.6480180621147156,
-0.38108572363853455,
0.26321071386... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
KaiLv/UDR_Java | KaiLv | 2023-06-21T12:40:15Z | 28 | 0 | null | [
"region:us"
] | 2023-06-21T12:40:15Z | 2023-06-21T12:39:27.000Z | 2023-06-21T12:39:27 | ---
dataset_info:
features:
- name: idx
dtype: int64
- name: question
dtype: string
- name: target
dtype: string
- name: len_question
dtype: int64
- name: len_target
dtype: int64
splits:
- name: train
num_bytes: 105539111
num_examples: 164514
- name: validation
num_bytes: 3088869
num_examples: 5172
- name: test
num_bytes: 6865702
num_examples: 10928
- name: debug
num_bytes: 64147056
num_examples: 100000
download_size: 77259976
dataset_size: 179640738
---
# Dataset Card for "UDR_Java"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.5622751116752625,
-0.2774261236190796,
0.10639365017414093,
0.12299466878175735,
-0.1675930917263031,
0.11530060321092606,
0.2551228106021881,
-0.12198571860790253,
0.6251819729804993,
0.5712783336639404,
-0.5832353234291077,
-0.8906837701797485,
-0.5878955125808716,
-0.1889822334051132... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
ehartford/open-instruct-uncensored | ehartford | 2023-06-22T18:41:10Z | 28 | 19 | null | [
"license:apache-2.0",
"region:us"
] | 2023-06-22T18:41:10Z | 2023-06-22T16:36:28.000Z | 2023-06-22T16:36:28 | ---
license: apache-2.0
---
This is [Allen AI's open-instruct dataset.](https://github.com/allenai/open-instruct)
It is used to train the Tulu family of models.
- https://huggingface.co/allenai/tulu-7b
- https://huggingface.co/allenai/tulu-13b
- https://huggingface.co/allenai/tulu-30b
- https://huggingface.co/allenai/tulu-65b
I have done the following:
1) Downloaded the open-instruct repo
2) Executed scripts/prepare_train_data.sh, modified to download the "unfiltered" version of the ShareGPT dataset
3) Merged data/processed/**/*.jsonl into a single "open-instruct.jsonl" (see the sketch below)
4) Executed my "remove_refusals.py" against that "open-instruct.jsonl" to produce "open-instruct-uncensored.jsonl"
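A minimal sketch of the merge in step 3 (paths assume the open-instruct repository layout):
```python
import glob

with open("open-instruct.jsonl", "w", encoding="utf-8") as out:
    for path in sorted(glob.glob("data/processed/**/*.jsonl", recursive=True)):
        with open(path, encoding="utf-8") as f:
            for line in f:
                out.write(line)
```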
I am currently training on this "open-instruct-uncensored.jsonl" to produce a new model series named ehartford/tulu-uncensored
More info to come. | [
-0.47840672731399536,
-0.5489672422409058,
0.015325128100812435,
-0.06640660017728806,
-0.03313114494085312,
-0.15014715492725372,
-0.3008652329444885,
-0.2509780824184418,
0.03246716409921646,
0.650821328163147,
-0.4630291163921356,
-0.45850521326065063,
-0.642875611782074,
-0.09969771653... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
juanivazquez/jivb-id_card | juanivazquez | 2023-06-28T01:50:05Z | 28 | 0 | null | [
"region:us"
] | 2023-06-28T01:50:05Z | 2023-06-28T00:03:05.000Z | 2023-06-28T00:03:05 | ---
dataset_info:
features:
- name: image
dtype: image
- name: ground_truth
dtype: string
splits:
- name: train
num_bytes: 102797866.0
num_examples: 276
- name: test
num_bytes: 6349261.0
num_examples: 11
download_size: 108916611
dataset_size: 109147127.0
---
# Dataset Card for "jivb-id_card"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.7399058938026428,
-0.34834736585617065,
0.0018654417945072055,
0.2164362519979477,
-0.4377315044403076,
0.009407796896994114,
0.33236435055732727,
-0.20037099719047546,
0.9182839393615723,
0.2536742687225342,
-0.6842215657234192,
-0.7708857655525208,
-0.4239422678947449,
-0.252606332302... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
agostina3/PLEAD | agostina3 | 2023-06-30T14:44:42Z | 28 | 0 | null | [
"task_categories:text2text-generation",
"task_categories:token-classification",
"size_categories:10K<n<100K",
"language:en",
"license:cc-by-nc-sa-4.0",
"hate speech",
"intent classification",
"slot filling",
"abuse detection",
"toxicity",
"region:us"
] | 2023-06-30T14:44:42Z | 2023-06-30T07:47:18.000Z | 2023-06-30T07:47:18 | ---
license: cc-by-nc-sa-4.0
task_categories:
- text2text-generation
- token-classification
language:
- en
tags:
- hate speech
- intent classification
- slot filling
- abuse detection
- toxicity
pretty_name: PLEAD
size_categories:
- 10K<n<100K
---
# PLEAD
This is the official dataset from the [Explainable Abuse Detection as Intent Classification and Slot Filling](https://direct.mit.edu/tacl/article/doi/10.1162/tacl_a_00527/114369/Explainable-Abuse-Detection-as-Intent) project.
## Reference
If you use our dataset, please cite our paper:
```
@article{calabrese-etal-2022-plead,
author = {Agostina Calabrese and
Bj{\"{o}}rn Ross and
Mirella Lapata},
title = {Explainable Abuse Detection as Intent Classification and Slot Filling},
journal = {Transactions of the Association for Computational Linguistics},
year = {2022}
}
``` | [
-0.2408461719751358,
-0.7712973952293396,
0.6217942237854004,
0.26284167170524597,
-0.06927596032619476,
-0.45747214555740356,
-0.10643847286701202,
-0.29113608598709106,
0.0514049306511879,
0.5830090641975403,
-0.7215321063995361,
-0.3570239543914795,
-0.4692069888114929,
0.06638492643833... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
beyond/rlhf-reward-single-round | beyond | 2023-07-04T06:46:06Z | 28 | 1 | null | [
"region:us"
] | 2023-07-04T06:46:06Z | 2023-07-04T06:29:39.000Z | 2023-07-04T06:29:39 | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: chosen
dtype: string
- name: rejected
dtype: string
splits:
- name: train
num_bytes: 13533459
num_examples: 20000
- name: test
num_bytes: 3460316
num_examples: 5014
download_size: 10681265
dataset_size: 16993775
---
# Dataset Card for "rlhf-reward-single-round"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.4625067710876465,
-0.4708525240421295,
0.005694291554391384,
0.16647420823574066,
-0.3884356915950775,
-0.06353950500488281,
0.2058996707201004,
-0.14534170925617218,
1.0306240320205688,
0.6556974053382874,
-1.1556986570358276,
-0.7737054228782654,
-0.5505793690681458,
-0.19269251823425... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
EleutherAI/headqa | EleutherAI | 2023-11-02T14:47:13Z | 28 | 0 | null | [
"license:other",
"region:us"
] | 2023-11-02T14:47:13Z | 2023-07-05T17:16:30.000Z | 2023-07-05T17:16:30 | ---
license: other
--- | [
-0.12853392958641052,
-0.18616779148578644,
0.6529127955436707,
0.49436280131340027,
-0.19319361448287964,
0.23607419431209564,
0.36072003841400146,
0.050563063472509384,
0.579365611076355,
0.7400140762329102,
-0.6508104205131531,
-0.23783954977989197,
-0.7102249264717102,
-0.0478260256350... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
SCM-LAB/ViQP | SCM-LAB | 2023-11-08T16:13:36Z | 28 | 1 | null | [
"task_categories:text2text-generation",
"size_categories:10K<n<100K",
"language:vi",
"license:mit",
"doi:10.57967/hf/1333",
"region:us"
] | 2023-11-08T16:13:36Z | 2023-07-16T14:57:48.000Z | 2023-07-16T14:57:48 | ---
license: mit
viewer: true
task_categories:
- text2text-generation
language:
- vi
size_categories:
- 10K<n<100K
---
# ViQP: Dataset for Vietnamese Question Paraphrasing
## Dataset sample
An example of 'viqp_train.json' looks as follows.
```json
{
"source": "Trong thuật toán Caesar Cipher, ký tự K với mã hóa k=4 thì sẽ được chữ mới gì?",
"target": [
"Ký tự K với mã hóa k=4 trong thuật toán Caesar Cipher thì sẽ được chữ gì?",
"Ký tự K với mã hóa k=4 trong thuật toán Caesar Cipher thì sẽ được chữ mới gì?",
"Trong thuật toán Caesar Cipher, ký tự K với mã hóa k=4 thì sẽ được chữ gì?",
"Trong thuật toán Caesar Cipher, mã hóa k=4 thì ký tự K sẽ là chữ gì?"
],
"type": "ENTY_letter"
}
```
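A sketch for loading the raw JSON files with `datasets` (the file name is taken from the sample above; adjust if the repository layout differs):
```python
from datasets import load_dataset

viqp = load_dataset("SCM-LAB/ViQP", data_files={"train": "viqp_train.json"})
print(viqp["train"][0]["source"])
```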
# Model resource
A question paraphrasing model trained on ViQP is available [here](https://huggingface.co/SCM-LAB/vietnamese_question_paraphrasing_ViT5_base)
```cmd
pip install simplet5
```
```python
from simplet5 import SimpleT5
model = SimpleT5()
model.load_model("t5", "SCM-LAB/vietnamese_question_paraphrasing_ViT5_base", use_gpu=True)
question = "Tác phẩm Lục Vân Tiên được sáng tác năm nào?"
model.predict(f"question paraphrasing: {question}", num_return_sequences= 5, num_beams= 5)
>>> ['Tác phẩm Lục Vân Tiên được sáng tác lúc nào?',
'Tác phẩm Lục Vân Tiên được sáng tác khi nào?',
'Tác phẩm Lục Vân Tiên được sáng tác vào năm nào?',
'Lúc nào tác phẩm Lục Vân Tiên được sáng tác?',
'Tác phẩm Lục Vân Tiên được sáng tác vào lúc nào?']
```
# Citation Information
```
@INPROCEEDINGS{10288738,
author={Nguyen, Sang Quang and Vo, Thuc Dinh and Nguyen, Duc P.A and Tran, Dang T. and Nguyen, Kiet Van},
booktitle={2023 International Conference on Multimedia Analysis and Pattern Recognition (MAPR)},
title={ViQP: Dataset for Vietnamese Question Paraphrasing},
year={2023},
volume={},
number={},
pages={1-6},
doi={10.1109/MAPR59823.2023.10288738}
}
```
**Please CITE** our paper when ViQP is used to help produce published results or is incorporated into other software. | [
-0.08605799823999405,
-0.7296808362007141,
0.5050014853477478,
0.25476011633872986,
-0.46317991614341736,
-0.13118894398212433,
-0.1363983005285263,
0.15558280050754547,
-0.14422136545181274,
0.8832444548606873,
-0.3158712089061737,
-0.7318520545959473,
-0.44316741824150085,
0.441194951534... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
polplop/cnndm_llama2_7b_chat_summary | polplop | 2023-07-27T05:48:43Z | 28 | 1 | null | [
"region:us"
] | 2023-07-27T05:48:43Z | 2023-07-25T09:39:11.000Z | 2023-07-25T09:39:11 | ---
dataset_info:
features:
- name: article
dtype: string
- name: highlights
dtype: string
- name: id
dtype: string
- name: clean_summary
dtype: string
- name: summary_summacConv_scores
dtype: float64
- name: highlight_summacConv_scores
dtype: float64
splits:
- name: test
num_bytes: 813399
num_examples: 200
download_size: 538654
dataset_size: 813399
configs:
- config_name: default
data_files:
- split: test
path: data/test-*
---
# Dataset Card for "cnndm_llama2_7b_chat_summary"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.5945227146148682,
-0.38126006722450256,
0.13092875480651855,
0.42441684007644653,
-0.545109748840332,
0.1029084324836731,
0.19115513563156128,
-0.20550978183746338,
0.7927557229995728,
0.4764261543750763,
-0.642015278339386,
-0.8782619833946228,
-0.7432675957679749,
-0.2085707187652588,... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
medarc/pubmed | medarc | 2023-07-27T21:28:00Z | 28 | 0 | null | [
"region:us"
] | 2023-07-27T21:28:00Z | 2023-07-27T20:21:24.000Z | 2023-07-27T20:21:24 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: text
dtype: string
- name: meta
dtype: string
- name: id
dtype: int64
splits:
- name: train
num_bytes: 93599167012
num_examples: 2964753
download_size: 41109917186
dataset_size: 93599167012
---
# Dataset Card for "pubmed"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.3367830812931061,
-0.07451575249433517,
0.5823057293891907,
0.13733235001564026,
-0.34204328060150146,
-0.06726442277431488,
0.2926134765148163,
-0.12557722628116608,
0.8256500363349915,
0.586182713508606,
-0.6715940237045288,
-0.8208532929420471,
-0.6881721615791321,
-0.026575932279229... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
ZelaAI/librispeech_clean_100_2048_streamable | ZelaAI | 2023-07-29T07:45:46Z | 28 | 0 | null | [
"region:us"
] | 2023-07-29T07:45:46Z | 2023-07-29T07:44:57.000Z | 2023-07-29T07:44:57 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622264862060547,
0.43461528420448303,
-0.52829909324646,
0.7012971639633179,
0.7915720343589783,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104477167129517,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null |