id stringlengths 2 115 | author stringlengths 2 42 ⌀ | last_modified timestamp[us, tz=UTC] | downloads int64 0 8.87M | likes int64 0 3.84k | paperswithcode_id stringlengths 2 45 ⌀ | tags list | lastModified timestamp[us, tz=UTC] | createdAt stringlengths 24 24 | key stringclasses 1 value | created timestamp[us] | card stringlengths 1 1.01M | embedding list | library_name stringclasses 21 values | pipeline_tag stringclasses 27 values | mask_token null | card_data null | widget_data null | model_index null | config null | transformers_info null | spaces null | safetensors null | transformersInfo null | modelId stringlengths 5 111 ⌀ | embeddings list |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Hundred9/Duaaii_5 | Hundred9 | 2023-03-06T20:49:30Z | 21 | 0 | null | [
"region:us"
] | 2023-03-06T20:49:30Z | 2023-03-06T20:45:57.000Z | 2023-03-06T20:45:57 | ---
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': '0'
'1': '1'
'2': '2'
'3': '3'
'4': '4'
'5': '5'
'6': '6'
splits:
- name: train
num_bytes: 43153426.0
num_examples: 647
download_size: 43177352
dataset_size: 43153426.0
---
# Dataset Card for "Duaaii_5"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.7244714498519897,
-0.17119766771793365,
0.27720755338668823,
0.45947811007499695,
-0.2808840274810791,
-0.13915391266345978,
0.6346103549003601,
-0.10694564133882523,
0.6986706852912903,
0.5354007482528687,
-0.8035892248153687,
-0.8658046722412109,
-0.4942859709262848,
0.089158467948436... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
reginaboateng/pico_ebmnlp | reginaboateng | 2023-03-20T14:02:22Z | 21 | 0 | null | [
"region:us"
] | 2023-03-20T14:02:22Z | 2023-03-20T14:00:47.000Z | 2023-03-20T14:00:47 | ---
dataset_info:
features:
- name: tokens
sequence: string
- name: chunk_tags
sequence: string
- name: pos_tags
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': I-INT
'2': I-OUT
'3': I-PAR
splits:
- name: train
num_bytes: 27639457
num_examples: 23952
- name: test
num_bytes: 1482730
num_examples: 2064
- name: validation
num_bytes: 7446993
num_examples: 7049
download_size: 4096177
dataset_size: 36569180
---
# Dataset Card for "pico_ebmnlp"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.6175578832626343,
-0.19674836099147797,
0.02218007668852806,
0.2749638259410858,
-0.3702279031276703,
-0.08248158544301987,
0.40855416655540466,
-0.391313374042511,
1.0291647911071777,
0.3355581760406494,
-0.755979061126709,
-0.7885810136795044,
-0.5184974670410156,
-0.21711298823356628... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
HiTZ/alpaca_mt | HiTZ | 2023-04-07T15:15:55Z | 21 | 7 | null | [
"task_categories:text-generation",
"task_ids:dialogue-modeling",
"annotations_creators:no-annotation",
"language_creators:machine-generated",
"multilinguality:multilingual",
"multilinguality:translation",
"size_categories:10K<n<100K",
"source_datasets:tatsu-lab/alpaca",
"language:en",
"language:pt... | 2023-04-07T15:15:55Z | 2023-03-22T15:27:30.000Z | 2023-03-22T15:27:30 | ---
annotations_creators:
- no-annotation
language:
- en
- pt
- es
- ca
- eu
- gl
- at
language_creators:
- machine-generated
license: cc-by-nc-4.0
multilinguality:
- multilingual
- translation
pretty_name: Alpaca MT
size_categories:
- 10K<n<100K
source_datasets:
- tatsu-lab/alpaca
tags:
- instruction-finetuning
task_categories:
- text-generation
task_ids:
- dialogue-modeling
dataset_info:
- config_name: en
features:
- name: instruction
dtype: string
- name: input
dtype: string
- name: output
dtype: string
- name: prompt
dtype: string
splits:
- name: train
num_bytes: 32088854
num_examples: 51942
download_size: 22764890
dataset_size: 32088854
- config_name: pt
features:
- name: instruction
dtype: string
- name: input
dtype: string
- name: output
dtype: string
- name: prompt
dtype: string
splits:
- name: train
num_bytes: 33600380
num_examples: 51942
download_size: 23513483
dataset_size: 33600380
- config_name: es
features:
- name: instruction
dtype: string
- name: input
dtype: string
- name: output
dtype: string
- name: prompt
dtype: string
splits:
- name: train
num_bytes: 35893136
num_examples: 51942
download_size: 24483751
dataset_size: 35893136
- config_name: ca
features:
- name: instruction
dtype: string
- name: input
dtype: string
- name: output
dtype: string
- name: prompt
dtype: string
splits:
- name: train
num_bytes: 33938638
num_examples: 51942
download_size: 23096222
dataset_size: 33938638
- config_name: eu
features:
- name: instruction
dtype: string
- name: input
dtype: string
- name: output
dtype: string
- name: prompt
dtype: string
splits:
- name: train
num_bytes: 29977672
num_examples: 51942
download_size: 20469814
dataset_size: 29977672
- config_name: gl
features:
- name: instruction
dtype: string
- name: input
dtype: string
- name: output
dtype: string
- name: prompt
dtype: string
splits:
- name: train
num_bytes: 32736710
num_examples: 51942
download_size: 22356802
dataset_size: 32736710
- config_name: at
features:
- name: instruction
dtype: string
- name: input
dtype: string
- name: output
dtype: string
- name: prompt
dtype: string
splits:
- name: train
num_bytes: 31487842
num_examples: 51942
download_size: 20688305
dataset_size: 31487842
---
# Dataset Card for Alpaca MT
## Dataset Description
- **Homepage:** https://crfm.stanford.edu/2023/03/13/alpaca.html
- **Repository:** https://github.com/juletx/alpaca-lora-mt
- **Paper:**
- **Leaderboard:**
- **Point of Contact:** Rohan Taori
### Dataset Summary
Alpaca is a dataset of 52,000 instructions and demonstrations generated by OpenAI's `text-davinci-003` engine. This instruction data can be used to conduct instruction tuning of language models and make them follow instructions better. This dataset also includes machine-translated data for 6 Iberian languages: Portuguese, Spanish, Catalan, Basque, Galician and Asturian. Translation was done using the NLLB-200 3.3B model.
The authors built on the data generation pipeline from the [Self-Instruct framework](https://github.com/yizhongw/self-instruct) and made the following modifications:
- The `text-davinci-003` engine was used to generate the instruction data instead of `davinci`.
- A [new prompt](https://github.com/tatsu-lab/stanford_alpaca/blob/main/prompt.txt) was written that explicitly gave the requirement of instruction generation to `text-davinci-003`.
- Much more aggressive batch decoding was used, i.e., generating 20 instructions at once, which significantly reduced the cost of data generation.
- The data generation pipeline was simplified by discarding the difference between classification and non-classification instructions.
- Only a single instance was generated for each instruction, instead of 2 to 3 instances as in Self-Instruct.
This produced an instruction-following dataset with 52K examples obtained at a much lower cost (less than $500).
In a preliminary study, the authors also found that the 52K generated data to be much more diverse than the data released by [Self-Instruct](https://github.com/yizhongw/self-instruct/blob/main/data/seed_tasks.jsonl).
### Supported Tasks and Leaderboards
The Alpaca dataset is designed for instruction tuning of pretrained language models.
### Languages
The original data in Alpaca is in English (BCP-47 en). We also provide machine-translated data for 6 Iberian languages: Portuguese (BCP-47 pt), Spanish (BCP-47 es), Catalan (BCP-47 ca), Basque (BCP-47 eu), Galician (BCP-47 gl) and Asturian (BCP-47 at).
## Dataset Structure
### Data Instances
An example of "train" looks as follows:
```json
{
"instruction": "Create a classification task by clustering the given list of items.",
"input": "Apples, oranges, bananas, strawberries, pineapples",
"output": "Class 1: Apples, Oranges\nClass 2: Bananas, Strawberries\nClass 3: Pineapples",
"text": "Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.\n\n### Instruction:\nCreate a classification task by clustering the given list of items.\n\n### Input:\nApples, oranges, bananas, strawberries, pineapples\n\n### Response:\nClass 1: Apples, Oranges\nClass 2: Bananas, Strawberries\nClass 3: Pineapples",
}
```
### Data Fields
The data fields are as follows:
* `instruction`: describes the task the model should perform. Each of the 52K instructions is unique.
* `input`: optional context or input for the task. For example, when the instruction is "Summarize the following article", the input is the article. Around 40% of the examples have an input.
* `output`: the answer to the instruction as generated by `text-davinci-003`.
* `text`: the `instruction`, `input` and `output` formatted with the [prompt template](https://github.com/tatsu-lab/stanford_alpaca#data-release) used by the authors for fine-tuning their models.
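As a minimal sketch of how the `text` field relates to the other three fields, the standard Alpaca template shown in the example above can be reconstructed as follows (`build_prompt` is a hypothetical helper for illustration, not part of the dataset):

```python
# Hypothetical helper reconstructing the `text` field from `instruction`,
# `input` and `output`, following the Alpaca prompt template shown above.
PROMPT_WITH_INPUT = (
    "Below is an instruction that describes a task, paired with an input that "
    "provides further context. Write a response that appropriately completes "
    "the request.\n\n"
    "### Instruction:\n{instruction}\n\n### Input:\n{input}\n\n### Response:\n{output}"
)

PROMPT_NO_INPUT = (
    "Below is an instruction that describes a task. Write a response that "
    "appropriately completes the request.\n\n"
    "### Instruction:\n{instruction}\n\n### Response:\n{output}"
)

def build_prompt(example: dict) -> str:
    """Format one Alpaca example into a single training string, choosing the
    with-input or no-input template depending on whether `input` is empty."""
    template = PROMPT_WITH_INPUT if example.get("input") else PROMPT_NO_INPUT
    return template.format(
        instruction=example.get("instruction", ""),
        input=example.get("input", ""),
        output=example.get("output", ""),
    )
```

Roughly 40% of examples have a non-empty `input`, so both branches of the template are exercised in practice.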
### Data Splits
| | train |
|---------------|------:|
| en | 52002 |
| pt | 52002 |
| es | 52002 |
| ca | 52002 |
| eu | 52002 |
| gl | 52002 |
| at | 52002 |
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
Excerpt from the [blog post](https://crfm.stanford.edu/2023/03/13/alpaca.html) accompanying the release of this dataset:
> We believe that releasing the above assets will enable the academic community to perform controlled scientific studies on instruction-following language models, resulting in better science and ultimately new techniques to address the existing deficiencies with these models. At the same time, any release carries some risk. First, we recognize that releasing our training recipe reveals the feasibility of certain capabilities. On one hand, this enables more people (including bad actors) to create models that could cause harm (either intentionally or not). On the other hand, this awareness might incentivize swift defensive action, especially from the academic community, now empowered by the means to perform deeper safety research on such models. Overall, we believe that the benefits for the research community outweigh the risks of this particular release. Given that we are releasing the training recipe, we believe that releasing the data, model weights, and training code incur minimal further risk, given the simplicity of the recipe. At the same time, releasing these assets has enormous benefits for reproducible science, so that the academic community can use standard datasets, models, and code to perform controlled comparisons and to explore extensions. Deploying an interactive demo for Alpaca also poses potential risks, such as more widely disseminating harmful content and lowering the barrier for spam, fraud, or disinformation. We have put into place two risk mitigation strategies. First, we have implemented a content filter using OpenAI’s content moderation API, which filters out harmful content as defined by OpenAI’s usage policies. Second, we watermark all the model outputs using the method described in Kirchenbauer et al. 2023, so that others can detect (with some probability) whether an output comes from Alpaca 7B. 
Finally, we have strict terms and conditions for using the demo; it is restricted to non-commercial uses and to uses that follow LLaMA’s license agreement. We understand that these mitigation measures can be circumvented once we release the model weights or if users train their own instruction-following models. However, by installing these mitigations, we hope to advance the best practices and ultimately develop community norms for the responsible deployment of foundation models.
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
The `alpaca` data is generated by a language model (`text-davinci-003`) and inevitably contains some errors or biases. We encourage users to use this data with caution and propose new methods to filter or improve the imperfections.
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
The dataset is available under the [Creative Commons NonCommercial (CC BY-NC 4.0)](https://creativecommons.org/licenses/by-nc/4.0/legalcode).
### Citation Information
```
@misc{alpaca,
author = {Rohan Taori and Ishaan Gulrajani and Tianyi Zhang and Yann Dubois and Xuechen Li and Carlos Guestrin and Percy Liang and Tatsunori B. Hashimoto },
title = {Stanford Alpaca: An Instruction-following LLaMA model},
year = {2023},
publisher = {GitHub},
journal = {GitHub repository},
howpublished = {\url{https://github.com/tatsu-lab/stanford_alpaca}},
}
```
### Contributions
[More Information Needed] | [
-0.4171135127544403,
-0.7807489633560181,
0.1321282833814621,
0.0874600037932396,
-0.26002368330955505,
-0.32844099402427673,
-0.17014609277248383,
-0.48919546604156494,
0.20360693335533142,
0.6376481652259827,
-0.6560009717941284,
-0.764260470867157,
-0.7453687787055969,
-0.02380996011197... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
rcds/swiss_judgment_prediction_xl | rcds | 2023-07-20T07:31:57Z | 21 | 0 | null | [
"task_categories:text-classification",
"size_categories:100K<n<1M",
"language:it",
"language:de",
"language:fr",
"license:cc-by-sa-4.0",
"arxiv:2306.09237",
"region:us"
] | 2023-07-20T07:31:57Z | 2023-03-23T23:42:15.000Z | 2023-03-23T23:42:15 | ---
license: cc-by-sa-4.0
task_categories:
- text-classification
language:
- it
- de
- fr
pretty_name: Swiss Judgment Prediction XL
size_categories:
- 100K<n<1M
---
# Dataset Card for Swiss Judgment Prediction XL
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
Swiss Judgment Prediction is a multilingual, diachronic dataset of 329K Swiss Federal Supreme Court (FSCS) cases. This dataset is part of a challenging text generation task.
### Supported Tasks and Leaderboards
### Languages
Switzerland has four official languages; three of them (German, French and Italian) are represented in this dataset. The decisions are written by the judges and clerks in the language of the proceedings.
| Language | Subset | Number of Documents Full |
|------------|------------|--------------------------|
| German | **de** | 160K |
| French | **fr** | 128K |
| Italian | **it** | 41K |
## Dataset Structure
### Data Fields
```
- decision_id: unique identifier for the decision
- facts: facts section of the decision
- considerations: considerations section of the decision
- label: label of the decision
- law_area: area of law of the decision
- language: language of the decision
- year: year of the decision
- court: court of the decision
- chamber: chamber of the decision
- canton: canton of the decision
- region: region of the decision
```
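As an illustration of how these fields might be used (a sketch over hypothetical in-memory records, not the dataset's actual loading API), one could tally decisions per language of proceedings or filter by year:

```python
from collections import Counter

# Hypothetical sample records mirroring the fields listed above.
records = [
    {"decision_id": "a1", "language": "de", "year": 2015, "law_area": "civil"},
    {"decision_id": "b2", "language": "fr", "year": 2018, "law_area": "penal"},
    {"decision_id": "c3", "language": "de", "year": 2018, "law_area": "public"},
]

def count_by_language(rows):
    """Tally how many decisions were written in each language of proceedings."""
    return Counter(r["language"] for r in rows)

def filter_by_year(rows, year):
    """Keep only decisions handed down in the given year."""
    return [r for r in rows if r["year"] == year]
```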
### Data Instances
[More Information Needed]
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
The original data are published by the Swiss Federal Supreme Court (https://www.bger.ch) in unprocessed HTML. The documents were downloaded from the Entscheidsuche portal (https://entscheidsuche.ch).
#### Who are the source language producers?
The decisions are written by the judges and clerks in the language of the proceedings.
### Annotations
#### Annotation process
#### Who are the annotators?
Metadata is published by the Swiss Federal Supreme Court (https://www.bger.ch).
### Personal and Sensitive Information
The dataset contains publicly available court decisions from the Swiss Federal Supreme Court. Personal or sensitive information has been anonymized by the court before publication according to the following guidelines: https://www.bger.ch/home/juridiction/anonymisierungsregeln.html.
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
We release the data under CC-BY-4.0, which complies with the court's licensing terms (https://www.bger.ch/files/live/sites/bger/files/pdf/de/urteilsveroeffentlichung_d.pdf).
© Swiss Federal Supreme Court, 2002-2022
The copyright for the editorial content of this website and the consolidated texts, which is owned by the Swiss Federal Supreme Court, is licensed under the Creative Commons Attribution 4.0 International licence. This means that you can re-use the content provided you acknowledge the source and indicate any changes you have made.
Source: https://www.bger.ch/files/live/sites/bger/files/pdf/de/urteilsveroeffentlichung_d.pdf
### Citation Information
Please cite our [ArXiv-Preprint](https://arxiv.org/abs/2306.09237)
```
@misc{rasiah2023scale,
title={SCALE: Scaling up the Complexity for Advanced Language Model Evaluation},
author={Vishvaksenan Rasiah and Ronja Stern and Veton Matoshi and Matthias Stürmer and Ilias Chalkidis and Daniel E. Ho and Joel Niklaus},
year={2023},
eprint={2306.09237},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
### Contributions
| [
-0.28456616401672363,
-0.5796923637390137,
0.3820797801017761,
0.2762337327003479,
-0.5152049660682678,
-0.1704990118741989,
-0.18632297217845917,
-0.21019364893436432,
0.1812967211008072,
0.6449761390686035,
-0.7096788883209229,
-0.8628866672515869,
-0.7208194136619568,
0.0361028648912906... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
mstz/toxicity | mstz | 2023-04-16T18:03:37Z | 21 | 0 | null | [
"task_categories:tabular-classification",
"size_categories:n<1K",
"language:en",
"license:cc",
"toxicity",
"tabular_classification",
"binary_classification",
"multiclass_classification",
"UCI",
"region:us"
] | 2023-04-16T18:03:37Z | 2023-03-31T14:59:54.000Z | 2023-03-31T14:59:54 | ---
language:
- en
tags:
- toxicity
- tabular_classification
- binary_classification
- multiclass_classification
- UCI
pretty_name: Toxicity
size_categories:
- n<1K
task_categories:
- tabular-classification
configs:
- encoding
- income
- income-no race
- race
license: cc
---
# Toxicity
The [Toxicity dataset](https://archive-beta.ics.uci.edu/dataset/728/toxicity) from the [UCI ML repository](https://archive.ics.uci.edu/ml/datasets).
The dataset includes 171 molecules designed for the functional domains of a core clock protein, CRY1, which is responsible for generating circadian rhythm.
# Configurations and tasks
| **Configuration** | **Task** | **Description** |
|-------------------|---------------------------|-----------------------------------------------------------------|
| toxicity | Binary classification | Is the molecule toxic? |
# Usage
```python
from datasets import load_dataset
dataset = load_dataset("mstz/toxicity")["train"]
``` | [
0.16146326065063477,
-0.24916298687458038,
0.21887004375457764,
0.29796111583709717,
-0.48594215512275696,
-0.2917250394821167,
-0.036644935607910156,
-0.43102309107780457,
0.01137737650424242,
0.42338958382606506,
-0.9064237475395203,
-0.7610629796981812,
-0.46975767612457275,
0.364036083... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
shareAI/ShareGPT-Chinese-English-90k | shareAI | 2023-11-03T09:27:34Z | 21 | 124 | null | [
"license:apache-2.0",
"region:us"
] | 2023-11-03T09:27:34Z | 2023-04-15T16:23:35.000Z | 2023-04-15T16:23:35 | ---
license: apache-2.0
configs:
- config_name: default
data_files: sharegpt_jsonl/*.jsonl
---
# ShareGPT-Chinese-English-90k Chinese-English Bilingual Human-Machine QA Dataset
A high-quality Chinese-English parallel bilingual human-machine question-answering dataset, covering user questions from real and complex scenarios. It can be used to train high-quality dialogue models (more robust in instruction distribution than data generated by repeatedly calling API interfaces to simulate machine-generated QA).
Features:
- 1. Provides Chinese-English parallel corpora with fully equivalent meaning, enabling bilingual dialogue model training.
- 2. None of the questions are fake data fabricated by human imagination plus API polling (as in Moss); they better match the instruction distribution and question phrasing of real user scenarios.
- 3. The ShareGPT data was collected from users' voluntary sharing, which acts as a very natural filter (by human judgment), screening out most dialogues with a poor experience.
Note: this data was collected before ChatGPT showed obvious signs of capability degradation. (One guess is that the official service replaced the 150B GPT-3.5 with a distilled version of roughly 10B to cut costs; another is that introducing more refusal responses degraded the model's ability to connect knowledge and logic.)
Training an excellent dialogue LLM depends on high-quality multi-turn dialogue datasets. If you would also like to become a volunteer,
you are welcome to join the dataset QQ group: 130920969 for exchanging, collecting and building high-quality datasets together. | [
-0.5012993216514587,
-0.7796489596366882,
0.25588369369506836,
0.9068453907966614,
-0.5729519128799438,
-0.306650847196579,
0.15132339298725128,
-0.5088291168212891,
-0.02672532945871353,
0.24290801584720612,
-0.5931634902954102,
-0.5879868268966675,
-0.6859885454177856,
0.0493451319634914... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
mstz/pums | mstz | 2023-04-18T07:42:19Z | 21 | 0 | null | [
"task_categories:tabular-classification",
"language:en",
"pums",
"tabular_classification",
"binary_classification",
"UCI",
"region:us"
] | 2023-04-18T07:42:19Z | 2023-04-18T07:32:38.000Z | 2023-04-18T07:32:38 | ---
language:
- en
tags:
- pums
- tabular_classification
- binary_classification
- UCI
pretty_name: Ipums
task_categories: # Full list at https://github.com/huggingface/hub-docs/blob/main/js/src/lib/interfaces/Types.ts
- tabular-classification
configs:
- pums
---
# Pums
The [Pums dataset](https://archive-beta.ics.uci.edu/dataset/116/us+census+data+1990) from the [UCI repository](https://archive-beta.ics.uci.edu/).
A U.S. Census dataset; the task is to classify the income of the individual.
# Configurations and tasks
| **Configuration** | **Task** |
|-----------------------|---------------------------|
| pums | Binary classification.| | [
-0.2317095398902893,
-0.11407702416181564,
-0.10620364546775818,
0.3040872812271118,
-0.19519685208797455,
0.3023463487625122,
0.06308853626251221,
0.24472329020500183,
0.39351996779441833,
0.9872480034828186,
-0.3681591749191284,
-0.5904054641723633,
-0.7621175646781921,
0.168312996625900... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
PaulineSanchez/Traduction_en_fr_food | PaulineSanchez | 2023-04-24T17:18:08Z | 21 | 1 | null | [
"task_categories:translation",
"language:fr",
"language:en",
"region:us"
] | 2023-04-24T17:18:08Z | 2023-04-24T07:55:46.000Z | 2023-04-24T07:55:46 | ---
task_categories:
- translation
language:
- fr
- en
dataset_info:
features:
- name: alim_nom_fr
dtype: string
- name: alim_nom_eng
dtype: string
splits:
- name: train
num_bytes: 238948
num_examples: 3153
download_size: 114072
dataset_size: 238948
---
- info: This dataset comes from the English version of the ANSES-CIQUAL 2020 Table in XML format, found at https://www.data.gouv.fr/fr/datasets/table-de-composition-nutritionnelle-des-aliments-ciqual/ | [
-0.1406623125076294,
-0.20717939734458923,
0.3539305627346039,
0.2774621248245239,
0.10440798103809357,
0.15046723186969757,
0.3584769368171692,
-0.4631906747817993,
0.6001715660095215,
0.9782348871231079,
-0.9034261703491211,
-0.7443004846572876,
-0.5652966499328613,
0.5164563655853271,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
jxu124/llava_instruct_150k | jxu124 | 2023-05-20T18:50:37Z | 21 | 0 | null | [
"region:us"
] | 2023-05-20T18:50:37Z | 2023-04-24T13:17:41.000Z | 2023-04-24T13:17:41 | ---
dataset_info:
features:
- name: global_image_id
dtype: string
- name: image_path
dtype: string
- name: dialog
sequence:
sequence: string
- name: anns_id
dtype: string
splits:
- name: train
num_bytes: 187730970
num_examples: 157712
download_size: 95089013
dataset_size: 187730970
---
# Dataset Card for "llava_instruct_150k"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.381185919046402,
-0.21156223118305206,
0.14976981282234192,
0.4289839565753937,
-0.320220947265625,
0.04785778746008873,
0.2565668523311615,
-0.15844084322452545,
0.9704613089561462,
0.5461716055870056,
-0.7832256555557251,
-0.6524551510810852,
-0.5638710856437683,
-0.1543763428926468,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Oniichat/bluemoon_roleplay_chat_data_300k_messages | Oniichat | 2023-04-29T16:06:27Z | 21 | 39 | null | [
"region:us"
] | 2023-04-29T16:06:27Z | 2023-04-29T14:44:37.000Z | 2023-04-29T14:44:37 | ---
dataset_info:
features:
- name: 'Unnamed: 0'
dtype: int64
- name: thread_title
dtype: string
- name: thread_href
dtype: string
- name: message_timestamp
dtype: string
- name: message_username
dtype: string
- name: message
dtype: string
splits:
- name: train
num_bytes: 266368156
num_examples: 261071
download_size: 138428913
dataset_size: 266368156
---
# Dataset Card for "bluemoon_roleplay_chat_data_300k_messages"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.3246379792690277,
-0.27379879355430603,
-0.10515933483839035,
0.5633514523506165,
-0.3035276234149933,
-0.11909522116184235,
0.06248752400279045,
-0.22396908700466156,
0.6621529459953308,
0.6218224763870239,
-1.0587480068206787,
-0.6866611838340759,
-0.35663291811943054,
-0.382660031318... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Nan-Do/instructional_code-search-net-java | Nan-Do | 2023-05-20T05:22:11Z | 21 | 0 | null | [
"task_categories:conversational",
"task_categories:text-generation",
"task_categories:text2text-generation",
"language:en",
"license:apache-2.0",
"Java",
"Code Generation",
"Instruction Response",
"region:us"
] | 2023-05-20T05:22:11Z | 2023-05-19T03:38:41.000Z | 2023-05-19T03:38:41 | ---
dataset_info:
features:
- name: INSTRUCTION
dtype: string
- name: RESPONSE
dtype: string
- name: SOURCE
dtype: string
splits:
- name: train
num_bytes: 388087123
num_examples: 467959
download_size: 132263577
dataset_size: 388087123
license: apache-2.0
task_categories:
- conversational
- text-generation
- text2text-generation
language:
- en
tags:
- Java
- Code Generation
- Instruction Response
pretty_name: Instructional Java Dataset
---
# Dataset Card for "instructional_code-search-net-java"
## Dataset Description
- **Homepage:** None
- **Repository:** https://huggingface.co/datasets/Nan-Do/instructional_code-search-net-java
- **Paper:** None
- **Leaderboard:** None
- **Point of Contact:** [@Nan-Do](https://github.com/Nan-Do)
### Dataset Summary
This is an instructional dataset for Java.
The dataset contains two different kinds of tasks:
- Given a piece of code generate a description of what it does.
- Given a description generate a piece of code that fulfils the description.
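Since each row carries an `INSTRUCTION`, a `RESPONSE` and a `SOURCE` column, turning rows into training prompts can be sketched as follows (the `to_training_text` helper and its separator convention are assumptions for illustration, not the dataset's own format):

```python
def to_training_text(row: dict) -> str:
    """Join one instruction/response pair into a single training string.

    The "### Instruction" / "### Response" separators are an assumed
    convention, not something the dataset itself prescribes.
    """
    return (
        f"### Instruction:\n{row['INSTRUCTION']}\n\n"
        f"### Response:\n{row['RESPONSE']}"
    )

# Hypothetical rows mirroring the dataset's three columns.
rows = [
    {
        "INSTRUCTION": "Describe what this Java method does: "
                       "int add(int a, int b) { return a + b; }",
        "RESPONSE": "Returns the sum of its two integer arguments.",
        "SOURCE": "codesearchnet",
    },
]
texts = [to_training_text(r) for r in rows]
```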
### Languages
The dataset is in English.
### Data Splits
There are no splits.
## Dataset Creation
May 2023
### Curation Rationale
This dataset was created to improve the coding capabilities of LLMs.
### Source Data
The summarized version of the code-search-net dataset can be found at https://huggingface.co/datasets/Nan-Do/code-search-net-java
### Annotations
The dataset includes an instruction and response columns.
#### Annotation process
The annotation procedure was done using templates and NLP techniques to generate human-like instructions and responses.
A sample notebook of the process can be found at https://github.com/Nan-Do/OpenAssistantInstructionResponsePython
The annotations have been cleaned to ensure there are no repetitions and/or meaningless summaries.
### Licensing Information
Apache 2.0 | [
-0.36286669969558716,
-0.6639443039894104,
0.028247717767953873,
0.2826530337333679,
-0.1181660145521164,
-0.1153736263513565,
-0.2993502914905548,
-0.08182907849550247,
0.48599404096603394,
0.5879907608032227,
-0.6160436272621155,
-0.9101932644844055,
-0.49650007486343384,
0.1888033598661... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Nan-Do/instructional_code-search-net-go | Nan-Do | 2023-05-20T05:40:29Z | 21 | 0 | null | [
"region:us"
] | 2023-05-20T05:40:29Z | 2023-05-19T03:42:14.000Z | 2023-05-19T03:42:14 | ---
dataset_info:
features:
- name: INSTRUCTION
dtype: string
- name: RESPONSE
dtype: string
- name: SOURCE
dtype: string
splits:
- name: train
num_bytes: 122612124
num_examples: 203128
download_size: 45476654
dataset_size: 122612124
---
# Dataset Card for "instructional_code-search-net-go"
IT STILL REQUIRES MORE WORK.
PLEASE DON'T USE IT | [
-0.34575435519218445,
-0.23241668939590454,
-0.205130472779274,
0.24759937822818756,
-0.6062061190605164,
-0.2600247263908386,
-0.08977492153644562,
0.0012433906085789204,
0.21722064912319183,
0.5682284235954285,
-0.8225218057632446,
-0.6054564118385315,
-0.47608044743537903,
0.04358130320... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Pranavkpba2000/skin_cancer_complete_dataset_resized | Pranavkpba2000 | 2023-05-19T14:27:15Z | 21 | 0 | null | [
"region:us"
] | 2023-05-19T14:27:15Z | 2023-05-19T14:26:34.000Z | 2023-05-19T14:26:34 | ---
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': AK
'1': BCC
'2': BKL
'3': DF
'4': MEL
'5': NV
'6': SCC
'7': VASC
splits:
- name: train
num_bytes: 170443824.892
num_examples: 28516
- name: test
num_bytes: 43096803.47
num_examples: 7105
download_size: 203883734
dataset_size: 213540628.362
---
# Dataset Card for "skin_cancer_complete_dataset_resized"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.12143640220165253,
-0.25855568051338196,
0.2988353967666626,
0.03400927037000656,
-0.3669268786907196,
0.14819103479385376,
0.1491997092962265,
-0.09654515981674194,
1.0652649402618408,
0.7843487858772278,
-0.7701126337051392,
-0.9888368248939514,
-0.6197947263717651,
-0.289598733186721... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Patt/copa_th | Patt | 2023-06-05T12:36:44Z | 21 | 0 | null | [
"language:th",
"language:en",
"arxiv:1907.04307",
"region:us"
] | 2023-06-05T12:36:44Z | 2023-06-02T09:43:18.000Z | 2023-06-02T09:43:18 | ---
language:
- th
- en
---
# Dataset Card for copa_th
### Dataset Description
This dataset is a Thai-translated version of [copa](https://huggingface.co/datasets/super_glue/viewer/copa), produced with Google Translate; the [Multilingual Universal Sentence Encoder](https://arxiv.org/abs/1907.04307) was used to score the Thai translations.
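The scoring step can be sketched as cosine similarity between sentence embeddings (the vectors below are dummy placeholders; in the actual pipeline they would come from the Multilingual Universal Sentence Encoder):

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Dummy embeddings standing in for encoder outputs of an English sentence
# and its Thai translation; a score near 1.0 suggests a faithful translation.
en_vec = [0.1, 0.3, -0.2]
th_vec = [0.1, 0.29, -0.21]
score = cosine_similarity(en_vec, th_vec)
```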
### Languages
- EN
- TH | [
-0.04422714188694954,
-0.5188198685646057,
0.04797809571027756,
0.6286548972129822,
-0.8368724584579468,
0.28733447194099426,
-0.0996989980340004,
-0.41697266697883606,
0.6387519240379333,
0.6029065847396851,
-0.4716463088989258,
-0.9730432033538818,
-0.6112536191940308,
0.2562544047832489... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Birchlabs/openai-prm800k-phase1_train-stepwise-critique | Birchlabs | 2023-06-02T23:41:25Z | 21 | 0 | null | [
"license:mit",
"region:us"
] | 2023-06-02T23:41:25Z | 2023-06-02T23:41:04.000Z | 2023-06-02T23:41:04 | ---
license: mit
---
| [
-0.12853392958641052,
-0.18616779148578644,
0.6529127955436707,
0.49436280131340027,
-0.19319361448287964,
0.23607419431209564,
0.36072003841400146,
0.050563063472509384,
0.579365611076355,
0.7400140762329102,
-0.6508104205131531,
-0.23783954977989197,
-0.7102249264717102,
-0.0478260256350... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
medmac01/moroccan_history_qa | medmac01 | 2023-06-07T16:36:23Z | 21 | 2 | null | [
"task_categories:question-answering",
"size_categories:1K<n<10K",
"language:en",
"license:cc0-1.0",
"history",
"Morocco",
"region:us"
] | 2023-06-07T16:36:23Z | 2023-06-07T16:19:03.000Z | 2023-06-07T16:19:03 | ---
license: cc0-1.0
task_categories:
- question-answering
language:
- en
tags:
- history
- Morocco
pretty_name: 🇲🇦 Moroccan History Dataset for Contextual Question Answering
size_categories:
- 1K<n<10K
--- | [
-0.12853392958641052,
-0.18616779148578644,
0.6529127955436707,
0.49436280131340027,
-0.19319361448287964,
0.23607419431209564,
0.36072003841400146,
0.050563063472509384,
0.579365611076355,
0.7400140762329102,
-0.6508104205131531,
-0.23783954977989197,
-0.7102249264717102,
-0.0478260256350... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
DISCOX/DISCO-10M | DISCOX | 2023-06-26T19:54:22Z | 21 | 14 | null | [
"size_categories:10M<n<100M",
"language:en",
"license:cc-by-4.0",
"music",
"arxiv:2306.13512",
"doi:10.57967/hf/1190",
"region:us"
] | 2023-06-26T19:54:22Z | 2023-06-13T07:45:14.000Z | 2023-06-13T07:45:14 | ---
license: cc-by-4.0
language:
- en
tags:
- music
size_categories:
- 10M<n<100M
dataset_info:
features:
- name: video_url_youtube
dtype: string
- name: video_title_youtube
dtype: string
- name: track_name_spotify
dtype: string
- name: video_duration_youtube_sec
dtype: float64
- name: preview_url_spotify
dtype: string
- name: video_view_count_youtube
dtype: float64
- name: video_thumbnail_url_youtube
dtype: string
- name: search_query_youtube
dtype: string
- name: video_description_youtube
dtype: string
- name: track_id_spotify
dtype: string
- name: album_id_spotify
dtype: string
- name: artist_id_spotify
sequence: string
- name: track_duration_spotify_ms
dtype: int64
- name: primary_artist_name_spotify
dtype: string
- name: track_release_date_spotify
dtype: string
- name: explicit_content_spotify
dtype: bool
- name: similarity_duration
dtype: float64
- name: similarity_query_video_title
dtype: float64
- name: similarity_query_description
dtype: float64
- name: similarity_audio
dtype: float64
- name: audio_embedding_spotify
sequence: float32
- name: audio_embedding_youtube
sequence: float32
splits:
- name: train
num_bytes: 73263841657.0
num_examples: 15296232
download_size: 88490703682
dataset_size: 73263841657.0
---
### Getting Started
You can download the dataset using HuggingFace:
```python
from datasets import load_dataset
ds = load_dataset("DISCOX/DISCO-10M")
```
## Dataset Structure
The dataset contains the following features:
```json
{
'video_url_youtube',
'video_title_youtube',
'track_name_spotify',
'video_duration_youtube_sec',
'preview_url_spotify',
'video_view_count_youtube',
'video_thumbnail_url_youtube',
'search_query_youtube',
'video_description_youtube',
'track_id_spotify',
'album_id_spotify',
'artist_id_spotify',
'track_duration_spotify_ms',
'primary_artist_name_spotify',
'track_release_date_spotify',
'explicit_content_spotify',
'similarity_duration',
'similarity_query_video_title',
'similarity_query_description',
'similarity_audio',
'audio_embedding_spotify',
'audio_embedding_youtube',
}
```
## What is DISCO-10M?
DISCO-10M is a music dataset created to democratize research on large-scale machine learning models for music.
The dataset contains no music due to copyright laws.
The audio embedding features were computed using [Laion-CLAP](https://github.com/LAION-AI/CLAP), and can be used instead of the raw audio for many down-stream tasks.
In case the raw audio is needed, it can be downloaded from the provided Spotify preview URL or via the YouTube link.
DISCO-10M was created by collecting a list of 400,000 artist IDs and 2.6M track IDs from Spotify, and collecting YouTube video links that match the track duration,
artist name, and track name. These matches were computed using the following three similarity metrics:
- Duration similarity: ` 1 - abs(track_duration_spotify - video_duration_youtube) / max(track_duration_spotify, video_duration_youtube) `
- Text similarity is calculated using the cosine similarity between the embedding of the search query and the embedding of the video title, as well as the search query embedding and the video description embedding. Embeddings are computed using [Sentence Bert](https://huggingface.co/sentence-transformers).
- Audio similarity is calculated using the cosine similarity between the Spotify preview snippet audio embedding and the YouTube audio embedding.
For DISCO-10M we only keep samples that return true for: ` duration_similarity > 0.25 and (description_similarity > 0.65 or title_similarity > 0.65) and audio_similarity > 0.4 `
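The duration metric and the filter rule above can be sketched in a few lines of Python. Function names are illustrative, not part of the released code, and the millisecond-to-second conversion is our assumption based on the feature list (Spotify durations are stored in ms, YouTube durations in seconds):

```python
def duration_similarity(track_duration_spotify_ms: float,
                        video_duration_youtube_sec: float) -> float:
    """1 - |t - v| / max(t, v), after converting both durations to seconds."""
    t = track_duration_spotify_ms / 1000.0
    v = video_duration_youtube_sec
    return 1.0 - abs(t - v) / max(t, v)


def keep_sample(duration_sim: float, description_sim: float,
                title_sim: float, audio_sim: float) -> bool:
    """The DISCO-10M filtering rule quoted above."""
    return (duration_sim > 0.25
            and (description_sim > 0.65 or title_sim > 0.65)
            and audio_sim > 0.4)
```

A sample passes only when its duration and audio similarities clear their thresholds and at least one of the two text similarities does.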
We offer three subsets based on DISCO-10M:
- [DISCO-10K-random](https://huggingface.co/datasets/DISCOX/DISCO-10K-random): a small subset of random samples from the entire dataset.
- [DISCO-200K-random](https://huggingface.co/datasets/DISCOX/DISCO-200K-random): a subset of random samples, useful for a light-weight and representative analysis of the entire dataset.
- [DISCO-200K-high-quality](https://huggingface.co/datasets/DISCOX/DISCO-200K-high-quality): a subset of samples which were filtered more strictly to ensure a higher quality match between Spotify tracks and YouTube videos.
To cite our work, please refer to our paper [here](https://arxiv.org/abs/2306.13512).
<!--
### Curation Rationale
[More Information Needed]
### Source Data
[More Information Needed]
--> | [
-0.674363911151886,
-0.6203828454017639,
0.258932888507843,
0.4133884310722351,
-0.13405776023864746,
-0.08463743329048157,
-0.3786267042160034,
-0.020894063636660576,
0.6923831701278687,
0.3973049521446228,
-0.9724166393280029,
-0.8051467537879944,
-0.3726343810558319,
-0.0380270443856716... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
BerMaker/test | BerMaker | 2023-06-13T09:44:54Z | 21 | 0 | null | [
"task_categories:text-classification",
"size_categories:n<1K",
"license:apache-2.0",
"code",
"art",
"region:us"
] | 2023-06-13T09:44:54Z | 2023-06-13T08:12:00.000Z | 2023-06-13T08:12:00 | ---
license: apache-2.0
task_categories:
- text-classification
tags:
- code
- art
size_categories:
- n<1K
--- | [
-0.1285335123538971,
-0.1861683875322342,
0.6529128551483154,
0.49436232447624207,
-0.19319400191307068,
0.23607441782951355,
0.36072009801864624,
0.05056373029947281,
0.5793656706809998,
0.7400146722793579,
-0.650810182094574,
-0.23784008622169495,
-0.7102247476577759,
-0.0478255338966846... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
alpindale/visual-novels | alpindale | 2023-06-14T14:44:30Z | 21 | 20 | null | [
"task_categories:conversational",
"task_categories:text-generation",
"language:en",
"license:apache-2.0",
"region:us"
] | 2023-06-14T14:44:30Z | 2023-06-14T13:15:15.000Z | 2023-06-14T13:15:15 | ---
license: apache-2.0
task_categories:
- conversational
- text-generation
language:
- en
pretty_name: Visual Novels
---
# Visual Novel Dataset
This dataset contains parsed Visual Novel scripts for training language models. The dataset consists of approximately 60 million tokens of parsed scripts.
## Dataset Structure
The dataset follows a general structure for visual novel scripts:
- Dialogue lines: Dialogue lines are formatted with the speaker's name followed by a colon, and the dialogue itself enclosed in quotes. For example:
```
John: "Hello, how are you?"
```
- Actions and narration: Actions and narration within the Visual Novel scripts are often enclosed in asterisks, but it's important to note that not all visual novels follow this convention. Actions and narration provide descriptions of character movements, background settings, or other narrative elements.
```
*John looked around the room, searching for answers.*
```
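A minimal sketch of how a consumer of the scripts might classify these two line types (the regexes are illustrative and assume the conventions above, which, as noted, not all visual novels follow):

```python
import re

# Speaker name, a colon, then the spoken line in double quotes.
DIALOGUE = re.compile(r'^(?P<speaker>[^:]+):\s*"(?P<line>.*)"$')
# Narration or action enclosed in asterisks.
ACTION = re.compile(r'^\*(?P<line>.*)\*$')


def classify(line: str):
    """Return (kind, speaker, text) for a single script line."""
    line = line.strip()
    m = DIALOGUE.match(line)
    if m:
        return ("dialogue", m.group("speaker"), m.group("line"))
    m = ACTION.match(line)
    if m:
        return ("action", None, m.group("line"))
    return ("other", None, line)
```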
## Contents
- `visual-novels.txt`: This file contains all the parsed VNs concatenated within a single plaintext file. Each entry is separated with this string:
```
[ - title - {visual-novel-title-1.txt} ]
```
- `VNDB/`: This directory contains `.json` files with VNDB IDs for the corresponding VN's characters. Unparsed VNs are not included.
- `Archives/visual-novels-parsed.tar.zst`: This archive contains the parsed VNs but with each script in a separate text file (i.e. not concatenated).
- `Archives/visual-novels-unparsed.tar.zst`: This archive contains all the unparsed VNs along with the original script for the currently parsed VNs.
## Usage
You can utilize this dataset to train language models, particularly for tasks related to natural language processing and text generation. By leveraging the parsed visual novel scripts, you can train models to understand dialogue structures and generate coherent responses. Additionally, the inclusion of the unparsed scripts allows for further analysis and processing.
## Contribution
This dataset was gathered and parsed by the [PygmalionAI](https://huggingface.co/PygmalionAI) Data Processing Team. Listed below are the team members, sorted by contribution amount:
- **Suikamelon**: [HuggingFace](https://huggingface.co/lemonilia) - (2,787,704 ++ 672,473 --)
- **Alpin**: [HuggingFace](https://huggingface.co/alpindale) - [GitHub](https://github.com/AlpinDale) (1,170,985 ++ 345,120 --)
- **Spartan**: [GitHub](https://github.com/Spartan9772) (901,046 ++ 467,915 --)
- **Unlucky-AI** [GitHub](https://github.com/Unlucky-AI) (253,316 ++ 256 --)
## Citation
If you use this dataset in your research or projects, please cite it appropriately.
## Acknowledgements
This dataset is compiled and shared for research and educational purposes. The dataset includes parsed visual novel scripts from various sources, which are predominantly copyrighted and owned by their respective publishers and creators. The inclusion of these scripts in this dataset does not imply any endorsement or authorization from the copyright holders.
We would like to express our sincere gratitude to the original copyright holders and creators of the visual novels for their valuable contributions to the art and storytelling. We respect and acknowledge their intellectual property rights.
We strongly encourage users of this dataset to adhere to copyright laws and any applicable licensing restrictions when using or analyzing the provided content. It is the responsibility of the users to ensure that any use of the dataset complies with the legal requirements governing intellectual property and fair use.
Please be aware that the creators and distributors of this dataset disclaim any liability or responsibility for any unauthorized or illegal use of the dataset by third parties.
If you are a copyright holder or have any concerns about the content included in this dataset, please contact us at [this email address](mailto:alpin@alpindale.dev) to discuss the matter further and address any potential issues.
| [
-0.12256193906068802,
-0.51116544008255,
0.25897446274757385,
0.07100122421979904,
-0.0005948558682575822,
-0.19226036965847015,
-0.07760875672101974,
-0.17992031574249268,
0.21873702108860016,
0.8207636475563049,
-0.5849244594573975,
-0.702824592590332,
-0.27804499864578247,
0.28966796398... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
buelfhood/Soco | buelfhood | 2023-11-10T13:59:02Z | 21 | 0 | null | [
"region:us"
] | 2023-11-10T13:59:02Z | 2023-06-15T18:56:13.000Z | 2023-06-15T18:56:13 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622264862060547,
0.43461528420448303,
-0.52829909324646,
0.7012971639633179,
0.7915720343589783,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104477167129517,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
ecnu-icalk/educhat-sft-002-data-osm | ecnu-icalk | 2023-07-01T10:11:46Z | 21 | 14 | null | [
"license:cc-by-nc-4.0",
"region:us"
] | 2023-07-01T10:11:46Z | 2023-06-27T07:48:28.000Z | 2023-06-27T07:48:28 | ---
license: cc-by-nc-4.0
---
Each entry consists of a list holding a conversation and a system_prompt corresponding to that entry. The list stores the conversation in alternating Q, A order.
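A hypothetical entry, with the conversation list in Q, A order, might look like the following sketch. The field names here are assumed for illustration; see the dataset files for the exact schema:

```python
entry = {
    "system_prompt": "You are EduChat, a helpful educational assistant.",
    "conversation": [
        "What is photosynthesis?",                               # Q
        "It is how plants convert light into chemical energy.",  # A
    ],
}


def iter_qa(entry):
    """Pair up the Q,A-ordered conversation list into (question, answer) tuples."""
    conv = entry["conversation"]
    return list(zip(conv[0::2], conv[1::2]))
```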
The data comes from open-source datasets and was deduplicated using the [CleanTool](https://github.com/icalk-nlp/EduChat/tree/main/clean_tool) data-cleaning tool. | [
-0.34059879183769226,
-0.5356546640396118,
0.03982355073094368,
0.11401523649692535,
-0.7113452553749084,
0.11454257369041443,
0.3600074350833893,
-0.17466410994529724,
0.5107874870300293,
0.33371496200561523,
-0.6063730120658875,
-0.48709896206855774,
-0.33621466159820557,
0.1527266651391... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
CreativeLang/EPIC_Irony | CreativeLang | 2023-07-11T16:46:43Z | 21 | 1 | null | [
"region:us"
] | 2023-07-11T16:46:43Z | 2023-07-11T16:34:27.000Z | 2023-07-11T16:34:27 | ---
dataset_info:
features:
- name: user
dtype: string
- name: label
dtype: string
- name: timestamp
dtype: string
- name: source
dtype: string
- name: subreddit
dtype: string
- name: id_original
dtype: string
- name: text
dtype: string
- name: parent_id_original
dtype: string
- name: parent_text
dtype: string
- name: Language_instance
dtype: string
- name: Language_variety
dtype: string
- name: Age
dtype: string
- name: Sex
dtype: string
- name: Ethnicity simplified
dtype: string
- name: Country of birth
dtype: string
- name: Country of residence
dtype: string
- name: Nationality
dtype: string
- name: Language_annotator
dtype: string
- name: Student status
dtype: string
- name: Employment status
dtype: string
splits:
- name: train
num_bytes: 7299373
num_examples: 14172
download_size: 1038853
dataset_size: 7299373
---
# EPIC_Irony
- paper: [EPIC: Multi-Perspective Annotation of a Corpus of Irony](https://assets.amazon.science/40/b4/0f6ec06a4a33a44485de1b2b57c7/epic-multi-perspective-annotation-of-a-corpus-of-irony.pdf) at ACL 2023
Key features:
- EPIC (English Perspectivist Irony Corpus) is an annotated corpus for irony analysis based on data perspectivism principles.
- The corpus contains social media conversations in five regional varieties of English, annotated by contributors from corresponding countries.
- The dataset explores the perspectives of annotators, taking into account their origin, age, and gender.
- Perspective-aware models were created to validate EPIC, and these proved more effective and confident in identifying irony than non-perspectivist models.
- The models showcase variation in irony perception across different demographic groups.
- EPIC serves as a valuable resource for training perspective-aware models for irony detection.
Metadata in Creative Language Toolkit ([CLTK](https://github.com/liyucheng09/cltk))
- CL Type: Irony
- Task Type: detection
- Size: 14k
- Created time: 2023 | [
-0.5514488816261292,
-0.579515814781189,
0.48480111360549927,
0.5998189449310303,
-0.1795070916414261,
0.27041980624198914,
-0.18420928716659546,
-0.6347976922988892,
0.2726823389530182,
0.43769970536231995,
-0.2049654871225357,
-0.639005720615387,
-0.6351090669631958,
0.43863335251808167,... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Isotonic/OpenOrca-deduped | Isotonic | 2023-08-24T13:21:18Z | 21 | 4 | null | [
"task_categories:text-generation",
"task_categories:text2text-generation",
"task_categories:conversational",
"task_categories:text-classification",
"task_categories:token-classification",
"task_categories:table-question-answering",
"task_categories:zero-shot-classification",
"task_categories:question-... | 2023-08-24T13:21:18Z | 2023-07-12T22:29:09.000Z | 2023-07-12T22:29:09 | ---
license: mit
dataset_info:
features:
- name: id
dtype: string
- name: system_prompt
dtype: string
- name: question
dtype: string
- name: response
dtype: string
- name: reward
dtype: float32
splits:
- name: train
num_bytes: 3274600633.90245
num_examples: 2409134
- name: test
num_bytes: 409325419.048775
num_examples: 301142
- name: validation
num_bytes: 409325419.048775
num_examples: 301142
download_size: 2268645581
dataset_size: 4093251472.0000005
task_categories:
- text-generation
- text2text-generation
- conversational
- text-classification
- token-classification
- table-question-answering
- zero-shot-classification
- question-answering
- summarization
- feature-extraction
language:
- en
size_categories:
- 1M<n<10M
arxiv:
- 2301.13688
- 2306.02707
---
# Dataset Card for Isotonic/OpenOrca-deduped
## Dataset Summary
This dataset is a deduplicated version of [Open-Orca/OpenOrca](https://huggingface.co/datasets/Open-Orca/OpenOrca)
*MinHash Deduplication with Jaccard Threshold = 0.80*
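The release used MinHash for scalability; purely to illustrate the Jaccard-threshold criterion, here is a brute-force exact-Jaccard sketch over word trigrams (function names are ours, and real MinHash/LSH replaces the quadratic comparison in practice):

```python
def shingles(text: str, n: int = 3) -> set:
    """Word n-gram shingles of a text, used as the Jaccard comparison units."""
    words = text.lower().split()
    return {" ".join(words[i:i + n]) for i in range(max(1, len(words) - n + 1))}


def jaccard(a: set, b: set) -> float:
    return len(a & b) / len(a | b) if a | b else 0.0


def dedupe(samples, threshold=0.80):
    """Keep each sample only if it is below the Jaccard threshold vs. all kept ones."""
    kept, kept_sets = [], []
    for s in samples:
        sh = shingles(s)
        if all(jaccard(sh, k) < threshold for k in kept_sets):
            kept.append(s)
            kept_sets.append(sh)
    return kept
```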
```
Original dataset size: 4233923
Number of duplicate clusters: 522077
Files in duplicate cluster: 2115143
Unique files in duplicate cluster: 892638
Filtered dataset size: 3011418
``` | [
-0.43616676330566406,
-0.42436307668685913,
-0.08104299008846283,
0.20047619938850403,
-0.9148666262626648,
-0.5285870432853699,
-0.1565181463956833,
-0.420106440782547,
0.5421155095100403,
0.80739825963974,
-0.252019464969635,
-0.9673815369606018,
-0.43428272008895874,
0.12138815224170685... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
mrtoy/mobile-ui-design | mrtoy | 2023-07-19T09:09:22Z | 21 | 16 | null | [
"task_categories:object-detection",
"size_categories:n<1K",
"license:apache-2.0",
"ui",
"design",
"detection",
"region:us"
] | 2023-07-19T09:09:22Z | 2023-07-13T11:12:51.000Z | 2023-07-13T11:12:51 | ---
license: apache-2.0
dataset_info:
features:
- name: width
dtype: int64
- name: height
dtype: int64
- name: image
dtype: image
- name: objects
struct:
- name: bbox
sequence:
sequence: float64
- name: category
sequence: string
- name: color
list:
- name: alpha
dtype: float64
- name: blue
dtype: float64
- name: green
dtype: float64
- name: red
dtype: float64
- name: radius
sequence: float64
- name: text
sequence: string
splits:
- name: train
num_bytes: 1253458059.322
num_examples: 7846
download_size: 1160884066
dataset_size: 1253458059.322
task_categories:
- object-detection
tags:
- ui
- design
- detection
size_categories:
- n<1K
---
# Dataset: Mobile UI Design Detection
## Introduction
This dataset is designed for object detection tasks with a focus on detecting elements in mobile UI designs. The targeted objects include text, images, and groups. The dataset contains images and object detection boxes, including class labels and location information.
## Dataset Content
Load the dataset and take a look at an example:
```python
>>> from datasets import load_dataset
>>> ds = load_dataset("mrtoy/mobile-ui-design")
>>> example = ds["train"][0]
>>> example
{'width': 375,
'height': 667,
'image': <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=375x667>,
'objects': {'bbox': [[0.0, 0.0, 375.0, 667.0],
[0.0, 0.0, 375.0, 667.0],
[0.0, 0.0, 375.0, 20.0],
...
],
'category': ['text',
'rectangle',
'rectangle',
...]}}
```
The dataset has the following fields:
- image: PIL.Image.Image object containing the image.
- height: The image height.
- width: The image width.
- objects: A dictionary containing bounding box metadata for the objects in the image:
- bbox: The object’s bounding box (xmin,ymin,width,height).
  - category: The object’s category; possible values include rectangle, text, group, and image.
  - color: The object’s color (text color or rectangle color), or None.
  - radius: The object’s corner radius (for rectangles), or None.
  - text: The text content, or None.
You can visualize the bboxes on the image using torchvision utilities.
```python
import torch
from torchvision.ops import box_convert
from torchvision.utils import draw_bounding_boxes
from torchvision.transforms.functional import pil_to_tensor, to_pil_image
item = ds["train"][0]
boxes_xywh = torch.tensor(item['objects']['bbox'])
boxes_xyxy = box_convert(boxes_xywh, 'xywh', 'xyxy')
to_pil_image(
draw_bounding_boxes(
pil_to_tensor(item['image']),
boxes_xyxy,
labels=item['objects']['category'],
)
)
```



## Applications
This dataset can be used for various applications, such as:
- Training and evaluating object detection models for mobile UI designs.
- Identifying design patterns and trends to aid UI designers and developers in creating high-quality mobile app UIs.
- Enhancing the automation process in generating UI design templates.
- Improving image recognition and analysis in the field of mobile UI design.
| [
-0.5113224983215332,
-0.5011017918586731,
0.16985833644866943,
-0.038685429841279984,
-0.27158328890800476,
-0.2369052618741989,
0.21459254622459412,
-0.22435547411441803,
0.15027467906475067,
0.41543877124786377,
-0.4524768590927124,
-0.8889833092689514,
-0.24973809719085693,
-0.077361360... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
squarelike/sharegpt_deepl_ko_translation | squarelike | 2023-10-12T17:11:05Z | 21 | 7 | null | [
"region:us"
] | 2023-10-12T17:11:05Z | 2023-07-14T04:28:43.000Z | 2023-07-14T04:28:43 | [https://github.com/jwj7140/Gugugo](https://github.com/jwj7140/Gugugo)
This dataset converts [sharegpt_deepl_ko](https://huggingface.co/datasets/junelee/sharegpt_deepl_ko) into Korean-English translation data.
- translation_data_sharegpt.json: translation data of up to roughly 1,300 characters per sample
- translation_data_sharegpt_long.json: translation data of 1,300 to 7,000 characters per sample
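The two JSON files above partition the translation data by character length. A minimal sketch of such a split (the exact rule used to produce the files, including how over-long samples are handled, is an assumption; here they are simply dropped):

```python
def partition_by_length(records, short_max=1300, long_max=7000):
    """Split text records into short (<= short_max chars) and
    long (short_max < len <= long_max chars) buckets."""
    short, long_ = [], []
    for text in records:
        n = len(text)
        if n <= short_max:
            short.append(text)
        elif n <= long_max:
            long_.append(text)
    return short, long_
```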
Several data preprocessing steps were applied to sharegpt_deepl_ko. | [
-0.45616084337234497,
-0.7099447250366211,
0.5211774706840515,
0.651600182056427,
-0.45719239115715027,
-0.0891818106174469,
-0.5389081239700317,
-0.14301201701164246,
0.371317058801651,
0.16980430483818054,
-0.45897552371025085,
-1.058142900466919,
-1.0188076496124268,
-0.0305810384452343... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
andersonbcdefg/chemistry | andersonbcdefg | 2023-07-21T01:24:18Z | 21 | 3 | null | [
"region:us"
] | 2023-07-21T01:24:18Z | 2023-07-21T01:23:53.000Z | 2023-07-21T01:23:53 | ---
dataset_info:
features:
- name: role_1
dtype: string
- name: topic;
dtype: string
- name: sub_topic
dtype: string
- name: message_1
dtype: string
- name: message_2
dtype: string
splits:
- name: train
num_bytes: 47000178
num_examples: 20000
download_size: 21669458
dataset_size: 47000178
---
# Dataset Card for "chemistry"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.4671776294708252,
-0.12926222383975983,
0.6378039717674255,
0.06386177986860275,
0.0969977080821991,
0.270707368850708,
0.2212810516357422,
-0.06618616729974747,
0.8364385366439819,
0.3774828314781189,
-0.8983457088470459,
-0.9349573850631714,
-0.4217014014720917,
-0.2797333896160126,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Yuhthe/samsum_vi_word | Yuhthe | 2023-07-26T02:57:48Z | 21 | 0 | null | [
"task_categories:summarization",
"language:vi",
"region:us"
] | 2023-07-26T02:57:48Z | 2023-07-25T07:30:27.000Z | 2023-07-25T07:30:27 | ---
configs:
- config_name: default
data_files:
- split: test
path: data/test-*
- split: train
path: data/train-*
- split: validation
path: data/validation-*
dataset_info:
features:
- name: id
dtype: string
- name: dialogue
dtype: string
- name: summary
dtype: string
splits:
- name: test
num_bytes: 761520
num_examples: 819
- name: train
num_bytes: 13465942
num_examples: 14732
- name: validation
num_bytes: 733668
num_examples: 818
download_size: 7875036
dataset_size: 14961130
task_categories:
- summarization
language:
- vi
---
# Dataset Card for "samsum_vi_word"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.33596137166023254,
-0.14180555939674377,
0.22183050215244293,
0.16931387782096863,
-0.5185675621032715,
-0.11468572169542313,
0.06867137551307678,
-0.04038447514176369,
1.0470097064971924,
0.44514530897140503,
-0.813039243221283,
-0.9405884742736816,
-0.8277267813682556,
-0.220806419849... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
xzuyn/futurama-alpaca | xzuyn | 2023-08-03T06:49:53Z | 21 | 0 | null | [
"task_categories:text-generation",
"task_categories:conversational",
"size_categories:n<1K",
"language:en",
"region:us"
] | 2023-08-03T06:49:53Z | 2023-08-01T20:41:50.000Z | 2023-08-01T20:41:50 | ---
language:
- en
size_categories:
- n<1K
task_categories:
- text-generation
- conversational
---
[Original Dataset](https://www.kaggle.com/datasets/josephvm/futurama-seasons-16-transcripts?select=only_spoken_text.csv)
114 episodes. Formatting is a work in progress; as with LLaMa, each episode runs to 4,000+ tokens.
I would like to augment the instruction, and possibly supply an episode summary as the input.
I also want to make a set that includes multiple TV shows. I am still deciding how to reformat all of this into smaller chunks of around 512 tokens, while preserving both the instruction context and the episode context.
```
Instruction: `Generate an episode of Futurama.`
Input: `{Episode Name} - {Episode Synopsis}`
Output: `{Episode Dialog In Chat Format}`
``` | [
-0.13868971168994904,
-0.8153975009918213,
-0.029083365574479103,
0.344361275434494,
-0.26406627893447876,
-0.0032029161229729652,
-0.6131306290626526,
0.20053748786449432,
0.752689003944397,
0.3752410411834717,
-0.9294160008430481,
-0.23439133167266846,
-0.45044732093811035,
0.35960903763... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
FreedomIntelligence/sharegpt-deutsch | FreedomIntelligence | 2023-08-13T15:58:00Z | 21 | 3 | null | [
"license:apache-2.0",
"region:us"
] | 2023-08-13T15:58:00Z | 2023-08-13T15:54:15.000Z | 2023-08-13T15:54:15 | ---
license: apache-2.0
---
German ShareGPT data translated by gpt-3.5-turbo.
The dataset is used in the research related to [MultilingualSIFT](https://github.com/FreedomIntelligence/MultilingualSIFT). | [
-0.5890281796455383,
-0.5391521453857422,
0.4158187806606293,
0.48624733090400696,
-0.4605351686477661,
-0.019513238221406937,
-0.30484727025032043,
-0.48206207156181335,
0.2687070965766907,
0.14224904775619507,
-0.9098215699195862,
-0.3965979218482971,
-0.738793671131134,
0.10147108882665... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
botp/yentinglin-zh_TW_c4 | botp | 2023-08-16T06:55:04Z | 21 | 0 | null | [
"task_categories:text-generation",
"size_categories:1M<n<10M",
"language:zh",
"license:odc-by",
"arxiv:2305.13711",
"arxiv:2104.09864",
"region:us"
] | 2023-08-16T06:55:04Z | 2023-08-16T06:55:04.000Z | 2023-08-16T06:55:04 | ---
dataset_info:
features:
- name: text
dtype: string
- name: timestamp
dtype: string
- name: url
dtype: string
- name: zh_id
dtype: int64
splits:
- name: train
num_bytes: 15132113806.438198
num_examples: 5179546
- name: validation
num_bytes: 14820238.368907347
num_examples: 5099
download_size: 5569491261
dataset_size: 15146934044.807106
license: odc-by
task_categories:
- text-generation
language:
- zh
pretty_name: Traditional Chinese C4
size_categories:
- 1M<n<10M
duplicated_from: yentinglin/zh_TW_c4
---
# Language Models for Taiwanese Culture
<p align="center">
✍️ <a href="https://huggingface.co/spaces/yentinglin/Taiwan-LLaMa2" target="_blank">Online Demo</a>
•
🤗 <a href="https://huggingface.co/yentinglin" target="_blank">HF Repo</a> • 🐦 <a href="https://twitter.com/yentinglin56" target="_blank">Twitter</a> • 📃 <a href="https://arxiv.org/pdf/2305.13711.pdf" target="_blank">[Paper Coming Soon]</a>
• 👨️ <a href="https://yentingl.com/" target="_blank">Yen-Ting Lin</a>
<br/><br/>
<img src="https://www.csie.ntu.edu.tw/~miulab/taiwan-llama/logo-v2.png" width="100"> <br/>
<a href="https://github.com/tatsu-lab/stanford_alpaca/blob/main/LICENSE">
<img src="https://img.shields.io/badge/Code%20License-Apache_2.0-green.svg"></a>
<a href="https://github.com/tatsu-lab/stanford_alpaca/blob/main/DATA_LICENSE">
<img src="https://img.shields.io/badge/Data%20License-CC%20By%20NC%204.0-red.svg"></a>
<br/>
</p>
## Overview
Taiwan-LLaMa is a full parameter fine-tuned model based on LLaMa 2 for Traditional Mandarin applications.
**Taiwan-LLaMa v1.0** is pretrained on over 5 billion tokens and instruction-tuned on over 490k conversations, both in Traditional Mandarin.
## Demo
A live demonstration of the model can be accessed at [Hugging Face Spaces](https://huggingface.co/spaces/yentinglin/Taiwan-LLaMa2).
## Key Features
1. **Traditional Mandarin Support**: The model is fine-tuned to understand and generate text in Traditional Mandarin, making it suitable for Taiwanese culture and related applications.
2. **Instruction-Tuned**: Further fine-tuned on conversational data to offer context-aware and instruction-following responses.
3. **Performance on Vicuna Benchmark**: Taiwan-LLaMa's relative performance on Vicuna Benchmark is measured against models like GPT-4 and ChatGPT. It's particularly optimized for Taiwanese culture.
4. **Flexible Customization**: Advanced options for controlling the model's behavior like system prompt, temperature, top-p, and top-k are available in the demo.
## Work in progress
- [ ] **Improved pretraining**: A refined pretraining process (e.g., more data from Taiwan and improved training strategies) is under development, aiming to improve model performance and coverage of Taiwanese culture.
- [ ] **Extend max length**: Utilizing the Rope mechanism as described in [the paper](https://arxiv.org/abs/2104.09864), the model's length will be extended from 4k to 8k.
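For illustration, here is a minimal pure-Python sketch of the rotary position embedding (RoPE) mechanism referred to above, applied to a single token vector. This is a simplified reference, not the model's actual implementation:

```python
import math


def rope(x, pos, base=10000.0):
    """Rotate consecutive feature pairs (x[2i], x[2i+1]) of one token
    vector by a position-dependent angle, as in rotary embeddings."""
    d = len(x)
    out = []
    for i in range(0, d, 2):
        theta = pos * base ** (-i / d)  # per-pair rotation frequency
        c, s = math.cos(theta), math.sin(theta)
        out.append(x[i] * c - x[i + 1] * s)
        out.append(x[i] * s + x[i + 1] * c)
    return out
```

Because the rotation is norm-preserving and position-dependent, attention scores between rotated queries and keys depend only on relative positions, which is what makes context-length extension approaches built on RoPE tractable.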
## Taiwanese Culture Examples
| Question | ChatGPT | **Taiwan LLaMa v1.0** | **Taiwan LLaMa v0.0 (w/o pretraining)** | Llama2_Traditional_Chinese_13b_Chat |
|:-|:-|:-|:-|:-|
| 以惡地景觀著名的是高雄哪一個景點? | 以惡地地景著名的是高雄市的旗津。 | 高雄以擁有惡地景觀著名的景點是田寮月世界。✅ | 以惡地景觀著名的是高雄市的旗津島。旗津島是一個小島,... | 高雄市內有多個以惡地景觀著名的景點,其中以下面幾個最為著名: 1. 高雄捷運紅線沿途景點:高雄市政府為了推廣高雄 |
| 唐鳳是誰? | 唐鳳(Audrey Tang)是台灣的政治家、社會活動家和軟體工程師。她於2016年擔任台灣行政院數位政務委員,是台灣政府歷史上首位沒有政黨背景的政務官。唐鳳在軟體工程領域有豐富的經驗。 ✅ | 唐鳳是政治評論家、資訊社會學研究者,也是台灣第一位擔任政府數位政委的人。 ✅ | 唐鳳(Tang Feng)是一位著名的中國演員和歌手。他出生於1974年12月17日,出生地點在中國北京。唐鳳在1995年開始他的演藝生涯,並在同年發行了他的首張專輯《我不孤單》。此後,他發行了許多受歡迎的歌曲,並在多部電視劇和電影中擔任主角。唐鳳因其精湛的演技和迷人的歌聲而被譽為“亞洲魅力男王”。他的音樂獲得了多個獎項,包括“華表最佳新男演唱獎”、“華表最佳歌曲獎”和“華表最佳舞蹈獎”等。 | 唐鳳是唐朝時期的一位著名文學家,他的代表作品包括《陰騭》、《狂凡子》等。他的文學風格具有強烈的個性和創意,被譽為「唐代文學之父」。 |
| NTU 在哪? | NTU位於新加坡,具體地址是50 Nanyang Avenue, Singapore 639798。 | NTU 是國立臺灣大學的縮寫。它是臺灣的一所頂尖研究型大學,位於臺北市。 ✅| NTU 是台灣的一所大學,位於台北市。✅ | NTU 的總部位於台灣新北市板橋區,佔地面積約為1,000公頃。|
| 甚麼是22K? | "22K" 可能指的是金的純度,尤其是在珠寶和金飾方面。 | "22K"是指台灣的大學畢業生的起薪,大約是新台幣22,000元。這個詞被用來形容大學畢業生的起薪偏低,並且在就業市場上對於大學畢業生的需求不足。✅| 22K是指溫度尺度上的22,000K(千度)。1K代表一度,即溫度差距,是攝氏溫標(攝氏度)的一百萬分之一。因此,22K等於-253.15°C(攝氏溫標)或-423.67°F(華氏溫標)。 | 22K是一種簡寫,即22,000。它是一個數字,表示22,000個單位。|
| 肉圓要不要加香菜? | 這是個個人口味的問題,有些人喜歡在肉圓中加入香菜,因為香菜的清香可以增添風味;而有些人則不喜歡香菜的味道,覺得它會蓋過其他食材的味道。所以要不要加香菜,可以根據個人喜好來決定。 | 這是一個個人喜好的問題,有些人喜歡在肉圓裡加香菜,有些人不喜歡。**在台灣**,肉圓加香菜是一種常見的做法,這是一種在肉圓上加香菜的做法。然而,在其他國這是一個文化差異的問題。😊 | 肉圓加香菜可以增添一些芳香和美味的口感,如果你喜歡香菜的味道,可以試試加入肉圓中。香菜通常在製作過程中炒香,可以增添一些芳香和美味的味道。如果你想要加入香菜,可以在製作過程中炒香一些香菜,然後將其混入肉圓中。肉圓加香菜可以增添一些芳香和美味的口感,如果你喜歡香菜的味道,可以試試加入肉圓中。| 加香菜的肉圓味道更好,但是不加香菜的肉圓也很好吃。如果你喜歡香菜的味道,那麼加上香菜就好了。如果你不喜歡香菜,那麼不加也不會影響肉 |
## Model
We provide a number of model checkpoints that we trained. Please find them on Hugging Face [here](https://huggingface.co/models?search=taiwan-llama). Here are some quick links to the checkpoints that are finetuned from LLaMa 2:
| **Model** | **13B** |
|--------------------------------------------------------|-------------------------------------------------------------------------------------------------------------------------------|
| **Taiwan-LLaMa v1.0** (_better for Taiwanese Culture_) | 🤗 <a href="https://huggingface.co/yentinglin/Taiwan-LLaMa-v1.0" target="_blank">yentinglin/Taiwan-LLaMa-v1.0</a> |
| Taiwan-LLaMa v0.9 (partial instruction set) | 🤗 <a href="https://huggingface.co/yentinglin/Taiwan-LLaMa-v0.9" target="_blank">yentinglin/Taiwan-LLaMa-v0.9</a> |
| Taiwan-LLaMa v0.0 (no Traditional Mandarin pretraining) | 🤗 <a href="https://huggingface.co/yentinglin/Taiwan-LLaMa-v0.0" target="_blank">yentinglin/Taiwan-LLaMa-v0.0</a> |
## Data
Here are some quick links to the datasets that we used to train the models:
| **Dataset** | **Link** |
|---------------------------------|-------------------------------------------------------------------------------------------------------------------------------|
| **Instruction-tuning** | 🤗 <a href="https://huggingface.co/datasets/yentinglin/traditional_mandarin_instructions" target="_blank">yentinglin/traditional_mandarin_instructions</a> |
| Traditional Mandarin Pretraining | 🤗 <a href="https://huggingface.co/datasets/yentinglin/zh_TW_c4" target="_blank">yentinglin/zh_TW_c4</a> |
## Architecture
Taiwan-LLaMa is based on LLaMa 2, leveraging the transformer architecture, <a href="https://github.com/Dao-AILab/flash-attention" target="_blank">flash attention 2</a>, and bfloat16.
It includes:
* Pretraining Phase: Pretrained on a vast corpus of over 5 billion tokens extracted from Common Crawl in Traditional Mandarin.
* Fine-tuning Phase: Further instruction-tuned on over 490k multi-turn conversational examples to enable instruction-following and context-aware responses.
## Generic Capabilities on Vicuna Benchmark
The data is translated into Traditional Mandarin to evaluate general capability.
<img src="./images/zhtw_vicuna_bench_chatgptbaseline.png" width="700">
The scores are calculated with ChatGPT as the baseline, represented as 100%. The other values show the relative performance of different models compared to ChatGPT.
| Language Model | Relative Score (%) |
|-------------------------------------|--------------------|
| GPT-4 | 102.59% |
| ChatGPT | 100.00% |
| **Taiwan-LLaMa v1.0** | 76.76% |
| Claude-Instant-1.2 | 74.04% |
| Llama2_Traditional_Chinese_13b_Chat | 56.21% |
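Each entry in the table is a simple ratio of a model's raw benchmark score to ChatGPT's; a minimal sketch of the computation (the raw scores below are illustrative placeholders, not the actual judge outputs):

```python
def relative_score(model_score: float, baseline_score: float) -> float:
    """Score relative to the ChatGPT baseline, expressed as a percentage."""
    return model_score / baseline_score * 100.0

# Illustrative raw scores (placeholders, not the real benchmark numbers).
raw = {"ChatGPT": 8.00, "GPT-4": 8.21, "Taiwan-LLaMa v1.0": 6.14}
relative = {name: relative_score(score, raw["ChatGPT"]) for name, score in raw.items()}
```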
## How to deploy the model on my own machine?
We recommend hosting models with [🤗 Text Generation Inference](https://github.com/huggingface/text-generation-inference). Please see their [license](https://github.com/huggingface/text-generation-inference/blob/main/LICENSE) for details on usage and limitations.
```bash
bash run_text_generation_inference.sh "yentinglin/Taiwan-LLaMa" NUM_GPUS DIR_TO_SAVE_MODEL PORT MAX_INPUT_LEN MODEL_MAX_LEN
```
The prompt format follows the vicuna-v1.1 template:
```
A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions. USER: {user} ASSISTANT:
```
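For a single-turn request, the template can be filled in programmatically before sending it to the inference server; a minimal sketch (the helper name is ours, not part of the repository):

```python
SYSTEM = (
    "A chat between a curious user and an artificial intelligence assistant. "
    "The assistant gives helpful, detailed, and polite answers to the user's questions."
)

def build_prompt(user_message: str) -> str:
    """Format one user turn in the vicuna-v1.1 template shown above."""
    return f"{SYSTEM} USER: {user_message} ASSISTANT:"

prompt = build_prompt("NTU 在哪?")
```

The model's completion is then generated after the trailing `ASSISTANT:` marker.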
## Setup development environment
```bash
conda create -n taiwan-llama python=3.10 -y
conda activate taiwan-llama
pip install -r requirements.txt
```
## Citations
If you use our code, data, or models in your research, please cite this repository. You can use the following BibTeX entry:
```bibtex
@inproceedings{lin-chen-2023-llm,
title = "{LLM}-Eval: Unified Multi-Dimensional Automatic Evaluation for Open-Domain Conversations with Large Language Models",
author = "Lin, Yen-Ting and Chen, Yun-Nung",
booktitle = "Proceedings of the 5th Workshop on NLP for Conversational AI (NLP4ConvAI 2023)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.nlp4convai-1.5",
pages = "47--58"
}
@misc{taiwanllama,
author={Lin, Yen-Ting and Chen, Yun-Nung},
title={Taiwanese-Aligned Language Models based on Meta-Llama2},
year={2023},
url={https://github.com/adamlin120/Taiwan-LLaMa},
note={Code and models available at https://github.com/adamlin120/Taiwan-LLaMa},
}
```
## Collaborate With Us
If you are interested in contributing to the development of Traditional Mandarin language models, exploring new applications, or leveraging Taiwan-LLaMa for your specific needs, please don't hesitate to contact us. We welcome collaborations from academia, industry, and individual contributors.
## License
The code in this project is licensed under the Apache 2.0 License - see the [LICENSE](LICENSE) file for details.
The models included in this project are licensed under the LLAMA 2 Community License. See the [LLAMA2 License](https://github.com/facebookresearch/llama/blob/main/LICENSE) for full details.
## OpenAI Data Acknowledgment
The data included in this project were generated using OpenAI's models and are subject to OpenAI's Terms of Use. Please review [OpenAI's Terms of Use](https://openai.com/policies/terms-of-use) for details on usage and limitations.
## Acknowledgements
We thank [Meta LLaMA team](https://github.com/facebookresearch/llama) and [Vicuna team](https://github.com/lm-sys/FastChat) for their open-source efforts in democratizing large language models.
LibrAI/do-not-answer | LibrAI | 2023-08-28T18:22:52Z | 21 | 5 | null | [
"task_categories:text-generation",
"size_categories:n<1K",
"language:en",
"license:apache-2.0",
"safety",
"large language model",
"region:us"
] | 2023-08-28T18:22:52Z | 2023-08-28T17:37:24.000Z | 2023-08-28T17:37:24 | ---
license: apache-2.0
task_categories:
- text-generation
language:
- en
tags:
- safety
- large language model
pretty_name: do-not-answer
size_categories:
- n<1K
dataset_info:
features:
- name: id
dtype: int64
- name: risk_area
dtype: string
- name: types_of_harm
dtype: string
- name: specific_harms
dtype: string
- name: question
dtype: string
- name: GPT4_response
dtype: string
- name: GPT4_harmful
dtype: int64
- name: GPT4_action
dtype: int64
- name: ChatGPT_response
dtype: string
- name: ChatGPT_harmful
dtype: int64
- name: ChatGPT_action
dtype: int64
- name: Claude_response
dtype: string
- name: Claude_harmful
dtype: int64
- name: Claude_action
dtype: int64
- name: ChatGLM2_response
dtype: string
- name: ChatGLM2_harmful
dtype: int64
- name: ChatGLM2_action
dtype: int64
- name: llama2-7b-chat_response
dtype: string
- name: llama2-7b-chat_harmful
dtype: int64
- name: llama2-7b-chat_action
dtype: int64
- name: vicuna-7b_response
dtype: string
- name: vicuna-7b_harmful
dtype: int64
- name: vicuna-7b_action
dtype: int64
splits:
- name: train
num_bytes: 3997193
num_examples: 939
download_size: 1709142
dataset_size: 3997193
---
# Do-Not-Answer: A Dataset for Evaluating Safeguards in LLMs
## Overview
*Do not answer* is an open-source dataset for evaluating LLMs' safety mechanisms at a low cost. The dataset is curated and filtered to consist only of prompts to which responsible language models do not answer.
Besides human annotations, *Do not answer* also implements model-based evaluation, where a 600M fine-tuned BERT-like evaluator achieves results comparable to human and GPT-4 evaluation.
## Instruction Taxonomy
Instructions are organized by a three-level hierarchical taxonomy, covering 61 specific harms.

*Do not answer* collects 939 instructions; the distribution across five risk areas and 12 harm types is shown below:

## Response Assessment
We assess six models' responses (GPT-4, ChatGPT, Claude, LLaMA-2, Vicuna, ChatGLM2) on two levels.
* Harmfulness: whether a response is harmful (a binary label).
* Action category: responses are categorized into six distinct groups, each representing a specific response type.

### Human Evaluation
The high percentage of safe responses across the five risk areas demonstrates that all six models are largely safe.
LLaMA-2 ranks first, followed by ChatGPT, Claude, GPT-4, and Vicuna, with ChatGLM2 ranking last, producing 85 harmful responses out of 939.
* Harmlessness Rank

* Action Category Distribution

### Automatic Evaluation
The fine-tuned Longformer evaluator achieves results comparable to human and GPT-4 evaluation.
* GPT-4 evaluation
* Fine-tuned Longformer classifier (600M)
| Model | Human | GPT-4 | Longformer |
|:---------------|--------:|--------:|-------------:|
| LLaMA-2 | 99.7 | 99.4 | 99.3 |
| ChatGPT | 98.5 | 97.7 | 97.9 |
| Claude | 98.3 | 98.3 | 97.6 |
| GPT-4 | 97.6 | 96.5 | 97.2 |
| vicuna | 94.5 | 94.9 | 95.0 |
| ChatGLM2 | 90.9 | 92.9 | 92.9 |
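The percentages in this table are harmlessness rates over the binary `*_harmful` labels; a minimal sketch of the aggregation (toy records, not the real annotations):

```python
from collections import defaultdict

def harmlessness_rates(records):
    """Percentage of non-harmful responses per model.

    Each record is a (model, harmful) pair with harmful in {0, 1},
    mirroring the *_harmful columns of the dataset.
    """
    total = defaultdict(int)
    safe = defaultdict(int)
    for model, harmful in records:
        total[model] += 1
        safe[model] += 1 - harmful
    return {model: 100.0 * safe[model] / total[model] for model in total}

# Toy example: three responses for one model, one of them flagged harmful.
rates = harmlessness_rates([("ChatGLM2", 0), ("ChatGLM2", 1), ("ChatGLM2", 0)])
```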
## Citation
If our work is useful for your own, you can cite us with the following BibTeX entry:
```bibtex
@misc{wang2023donotanswer,
  author = {Wang, Yuxia and
            Li, Haonan and
            Han, Xudong and
            Nakov, Preslav and
            Baldwin, Timothy},
title = {Do-Not-Answer: A Dataset for Evaluating Safeguards in LLMs},
year = {2023},
howpublished = {arXiv preprint arXiv:2308.13387},
}
```
## License
All datasets in this repository are released under the
[Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License](http://creativecommons.org/licenses/by-nc-sa/4.0/). All source files in this repository are released under the Apache 2.0 license, the text of which can be found in the LICENSE file.
Azure99/blossom-math-v2 | Azure99 | 2023-09-05T03:28:13Z | 21 | 0 | null | [
"task_categories:text-generation",
"task_categories:text2text-generation",
"size_categories:10K<n<100K",
"language:zh",
"license:apache-2.0",
"region:us"
] | 2023-09-05T03:28:13Z | 2023-09-05T03:19:29.000Z | 2023-09-05T03:19:29 | ---
license: apache-2.0
task_categories:
- text-generation
- text2text-generation
language:
- zh
size_categories:
- 10K<n<100K
---
# BLOSSOM MATH V2
### Introduction
Blossom Math V2 is a bilingual Chinese-English math conversation dataset derived from Math23K and GSM8K, suitable for fine-tuning on math problems.
Compared to blossom-math-v1, it adds 2,500 GSM8K records and 2,500 GSM8K-CN records translated into Chinese. In addition, the answer-checking logic has been improved, and calculation markers such as <<1+1=2>> have been removed to unify the style of the reasoning steps.
This dataset takes the full set of questions from Math23K, GSM8K, and the translated GSM8K, then calls gpt-3.5-turbo-0613 to generate answers and validates the generated results against the answers in the original datasets, filtering out incorrect ones. This largely guarantees the accuracy of the questions and answers.
This release contains 25% of the full data, comprising 10K records.
### Languages
Chinese and English
### Dataset Structure
Each record represents a complete problem and its answer, with five fields: id, input, output, answer, and dataset.
- id: string; the problem id in the original dataset. Combined with the dataset field, it uniquely identifies a problem.
- input: string; the question.
- output: string; the answer generated by gpt-3.5-turbo-0613.
- answer: string; the ground-truth answer.
- dataset: string; the original dataset.
### Dataset Limitations
All responses in this dataset were generated by gpt-3.5-turbo-0613 and have undergone preliminary validation, but they may still contain inaccurate answers.
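The record schema and the answer check described above can be sketched as follows; the field names come from the card, while the validation heuristic and the sample record are illustrative, not the project's actual code:

```python
def is_answer_consistent(record: dict) -> bool:
    """Rough check that the generated solution ends with the ground-truth answer."""
    final_line = record["output"].strip().splitlines()[-1]
    return record["answer"] in final_line

record = {
    "id": "gsm8k-0001",  # illustrative id
    "input": "2 + 3 = ?",
    "output": "First add 2 and 3.\nThe answer is 5.",
    "answer": "5",
    "dataset": "GSM8K",
}
```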
TrainingDataPro/ripe-strawberries-detection | TrainingDataPro | 2023-09-26T08:38:14Z | 21 | 2 | null | [
"task_categories:image-classification",
"task_categories:image-to-image",
"task_categories:object-detection",
"language:en",
"license:cc-by-nc-nd-4.0",
"code",
"biology",
"region:us"
] | 2023-09-26T08:38:14Z | 2023-09-08T09:29:07.000Z | 2023-09-08T09:29:07 | ---
language:
- en
license: cc-by-nc-nd-4.0
task_categories:
- image-classification
- image-to-image
- object-detection
tags:
- code
- biology
dataset_info:
features:
- name: id
dtype: int32
- name: name
dtype: string
- name: image
dtype: image
- name: mask
dtype: image
- name: width
dtype: uint16
- name: height
dtype: uint16
- name: shapes
sequence:
- name: label
dtype:
class_label:
names:
'0': strawberry
- name: type
dtype: string
- name: points
sequence:
sequence: float32
- name: rotation
dtype: float32
- name: attributes
sequence:
- name: name
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 127730244
num_examples: 40
download_size: 126412271
dataset_size: 127730244
---
# Ripe Strawberries Detection
The dataset consists of photos of strawberries for the identification and recognition of **ripe berries**.
The images are annotated with **bounding boxes** that accurately demarcate the location of the ripe strawberries within the image.
Each image in the dataset showcases a strawberry plantation, and includes a diverse range of *backgrounds, lighting conditions, and orientations*. The photos are captured from various *angles and distances*, providing a realistic representation of strawberries.
The dataset can be utilised for enabling advancements in *strawberry production, quality control, and greater precision in agricultural practices*.

# Get the dataset
### This is just an example of the data
Leave a request on [**https://trainingdata.pro/data-market**](https://trainingdata.pro/data-market?utm_source=huggingface&utm_medium=cpc&utm_campaign=ripe-strawberries-detection) to discuss your requirements, learn about the price and buy the dataset.
# Dataset structure
- **images** - contains the original images of strawberries
- **boxes** - includes bounding box labeling for the original images
- **annotations.xml** - contains coordinates of the bounding boxes and labels, created for the original photo
# Data Format
Each image from the `images` folder is accompanied by an XML annotation in the `annotations.xml` file, indicating the coordinates of the bounding boxes for ripe-strawberry detection. For each point, the x and y coordinates are provided. Visibility of the ripe strawberry is also indicated by the attribute **occluded** (0, 1).
# Example of XML file structure

# Strawberry detection can be tailored to your requirements.
## [**TrainingData**](https://trainingdata.pro/data-market?utm_source=huggingface&utm_medium=cpc&utm_campaign=ripe-strawberries-detection) provides high-quality data annotation tailored to your needs
More datasets in TrainingData's Kaggle account: **https://www.kaggle.com/trainingdatapro/datasets**
TrainingData's GitHub: **https://github.com/Trainingdata-datamarket/TrainingData_All_datasets**
MikhailT/speaker-embeddings | MikhailT | 2023-09-22T08:01:44Z | 21 | 0 | null | [
"region:us"
] | 2023-09-22T08:01:44Z | 2023-09-19T12:34:03.000Z | 2023-09-19T12:34:03 | ---
configs:
- config_name: speakers
version: 1.0.0
data_files: data/speakers.jsonl
- config_name: models
version: 1.0.0
data_files: data/models.jsonl
- config_name: datasets
version: 1.0.0
data_files: data/datasets.jsonl
- config_name: dataset_utterances
version: 1.0.0
data_files:
- split: aru
path: data/aru/utterances*.jsonl
- split: cmu_arctic
path: data/cmu_arctic/utterances*.jsonl
- config_name: utterance_embeddings
version: 1.0.0
data_files:
- split: aru
path: data/aru/utterance_embeddings*.jsonl
- split: cmu_arctic
path: data/cmu_arctic/utterance_embeddings*.jsonl
- config_name: speaker_embeddings
version: 1.0.0
data_files:
- split: speechbrain_spkrec_xvect_voxceleb
path: data/*/speaker_embeddings_*001.jsonl
- split: speechbrain_spkrec_ecapa_voxceleb
path: data/*/speaker_embeddings_*002.jsonl
- split: speechbrain_spkrec_xvect_voxceleb_mean
path: data/*/speaker_embeddings_mean_001.jsonl
- split: speechbrain_spkrec_ecapa_voxceleb_mean
path: data/*/speaker_embeddings_mean_002.jsonl
- split: speechbrain_spkrec_xvect_voxceleb_sets
path: data/*/speaker_embeddings_sets_001.jsonl
- split: speechbrain_spkrec_ecapa_voxceleb_sets
path: data/*/speaker_embeddings_sets_002.jsonl
dataset_info:
- config_name: speakers
features:
- name: id
dtype: string
- name: name
dtype: string
- name: lang
dtype: string
- name: sex
dtype: string
- name: age
dtype: int32
- name: country
dtype: string
- name: accent
dtype: string
- config_name: models
features:
- name: id
dtype: string
- name: name
dtype: string
- name: size
dtype: int32
- name: sample_rate
dtype: int32
- config_name: datasets
features:
- name: id
dtype: string
- name: name
dtype: string
- name: sample_rate
dtype: int32
- config_name: dataset_utterances
features:
- name: id
dtype: string
- name: name
dtype: string
- name: duration
dtype: float32
- name: speaker_id
dtype: string
- name: dataset_id
dtype: string
- config_name: utterance_embeddings
features:
- name: speaker_id
dtype: string
- name: file_id
dtype: string
- name: dataset_id
dtype: string
- name: model_id
dtype: string
- name: embedding
sequence: float32
- config_name: speaker_embeddings
features:
- name: speaker_id
dtype: string
- name: model_id
dtype: string
- name: set
dtype: string
- name: embedding
sequence: float32
pretty_name: Speaker Embeddings
---
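Speaker embeddings like those stored here are usually compared with cosine similarity; a minimal stdlib sketch (not part of the dataset's own tooling):

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors of equal length."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Identical vectors score 1.0; orthogonal vectors score 0.0.
same = cosine_similarity([0.1, 0.2, 0.3], [0.1, 0.2, 0.3])
orthogonal = cosine_similarity([1.0, 0.0], [0.0, 1.0])
```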
gary-roach/NLP | gary-roach | 2023-09-21T01:40:44Z | 21 | 0 | null | [
"region:us"
] | 2023-09-21T01:40:44Z | 2023-09-21T01:32:07.000Z | 2023-09-21T01:32:07 | Entry not found
sarahpann/gsm8k_small_test | sarahpann | 2023-09-23T20:21:36Z | 21 | 0 | null | [
"region:us"
] | 2023-09-23T20:21:36Z | 2023-09-23T20:18:53.000Z | 2023-09-23T20:18:53 | ---
configs:
- config_name: default
data_files:
- split: test
path: data/test-*
dataset_info:
features:
- name: prompt
dtype: string
- name: answer
dtype: string
splits:
- name: test
num_bytes: 109109
num_examples: 200
download_size: 64934
dataset_size: 109109
---
# Dataset Card for "gsm8k_small_test"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
Weni/Zeroshot_Train-20K_other_tweet-format | Weni | 2023-09-28T18:41:59Z | 21 | 0 | null | [
"task_categories:zero-shot-classification",
"size_categories:10K<n<100K",
"language:pt",
"region:us"
] | 2023-09-28T18:41:59Z | 2023-09-28T15:42:14.000Z | 2023-09-28T15:42:14 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: source_text
dtype: string
- name: target_text
dtype: string
splits:
- name: train
num_bytes: 4369715
num_examples: 20000
download_size: 1752054
dataset_size: 4369715
language:
- pt
size_categories:
- 10K<n<100K
task_categories:
- zero-shot-classification
---
# Dataset Card for "Zeroshot_Train-20K_other_tweet-format"
This is a training dataset for the Zeroshot models.
It contains 20,000 examples in Brazilian Portuguese, in a prompt format built exclusively for training with the class 'other'.
Prompt:
```
"Classifique o tweet entre 'classe1', 'classe2', 'classe3', 'classe4', 'other' \\n\\nTweet: frase \\n\\nLabel: 'other'
```
The dataset was divided as follows: <br>
```
- 6,000 examples: prompt with class options, without the target class (other)
- 7,000 examples: prompt with class options + the target class included as an option; the target class is not correct
- 7,000 examples: prompt with class options + the target class; the target class is correct
```
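The prompt template above can be assembled as follows; a minimal sketch (the helper is ours, and exact whitespace in the real dataset may differ):

```python
def build_prompt(tweet: str, classes, label: str) -> str:
    """Assemble one training example in the card's prompt format."""
    options = ", ".join(f"'{c}'" for c in classes)
    return f"Classifique o tweet entre {options} \\n\\nTweet: {tweet} \\n\\nLabel: '{label}'"

prompt = build_prompt("frase", ["classe1", "classe2", "classe3", "classe4", "other"], "other")
```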
## How to load and use this dataset:
```
from datasets import load_dataset
dataset = load_dataset("Weni/Zeroshot_Train-20K_other_tweet-format")
dataset
```
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
jfrei/GPTNERMED | jfrei | 2023-10-08T22:05:18Z | 21 | 0 | null | [
"task_categories:token-classification",
"task_ids:named-entity-recognition",
"annotations_creators:machine-generated",
"language_creators:machine-generated",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:de",
"bio",
"biomedical",
"medical",
"c... | 2023-10-08T22:05:18Z | 2023-10-05T12:39:54.000Z | 2023-10-05T12:39:54 | ---
annotations_creators:
- machine-generated
language:
- de
language_creators:
- machine-generated
license: []
multilinguality:
- monolingual
pretty_name: GPTNERMED
size_categories:
- 1K<n<10K
source_datasets:
- original
tags:
- bio
- biomedical
- medical
- clinical
task_categories:
- token-classification
task_ids:
- named-entity-recognition
---
# GPTNERMED Dataset for German medical NER entities
## Dataset Description
- **Repository:** https://github.com/frankkramer-lab/GPTNERMED
- **Paper:** https://doi.org/10.1016/j.jbi.2023.104478
- **ArXiv-Preprint:** https://arxiv.org/abs/2208.14493
## Dataset Summary
This dataset contains the synthetic German sentences with annotated entities (`Medikation`, `Dosis`, `Diagnose`) from the GPTNERMED project.
The sentences as well as the annotations are **not** manually validated by medical professionals; therefore this dataset is **not** a gold-standard dataset.
The dataset consists of 9,845 sentences (121,027 tokens by SpaCy Tokenizer, 245,107 tokens by the GPT tokenizer) with the following labels:
| Label | Count | #Tokens (SpaCy) |
| --- | --- | -- |
| Medikation | 9868 | 10138 |
| Dosis | 7547 | 15845 |
| Diagnose | 5996 | 7656 |
## Dataset Structure
The train/test/dev split (80%, 10%, 10%) of the data loader is as follows:\
`<-- train: 0.8 --><-- test: 0.1 --><-- dev: 0.1 -->`\
The splits are selected arbitrarily, as the data loader requires a split configuration. All sample sentences are, however, homogeneous in origin, so the splits could also be drawn differently.
Every sample is a sentence with its text (property `sentence`) and its corresponding NER labels (property `ner_labels` / List of labels).\
Every NER label entry has a char-wise start and stop index (property `start`, `stop`) and a label class (property `ner_class`).
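Given the char-wise offsets, entity strings can be recovered by simple slicing; a minimal sketch (the sample sentence is illustrative, not from the dataset):

```python
def extract_entities(sample: dict) -> list:
    """Recover (entity_text, ner_class) pairs from char-offset labels."""
    return [
        (sample["sentence"][lab["start"]:lab["stop"]], lab["ner_class"])
        for lab in sample["ner_labels"]
    ]

sample = {
    "sentence": "Der Patient erhält 500 mg Metformin.",
    "ner_labels": [
        {"start": 19, "stop": 25, "ner_class": "Dosis"},
        {"start": 26, "stop": 35, "ner_class": "Medikation"},
    ],
}
```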
### Citation Information
If you like our work, cite our paper and give us a star on GitHub.\
(See the links above)
ShashiVish/cover-letter-dataset | ShashiVish | 2023-10-15T15:20:47Z | 21 | 1 | null | [
"region:us"
] | 2023-10-15T15:20:47Z | 2023-10-14T14:37:08.000Z | 2023-10-14T14:37:08 | ---
dataset_info:
features:
- name: Job Title
dtype: string
- name: Preferred Qualifications
dtype: string
- name: Hiring Company
dtype: string
- name: Applicant Name
dtype: string
- name: Past Working Experience
dtype: string
- name: Current Working Experience
dtype: string
- name: Skillsets
dtype: string
- name: Qualifications
dtype: string
- name: Cover Letter
dtype: string
splits:
- name: train
num_bytes: 1048886.142857143
num_examples: 813
- name: test
num_bytes: 450259.85714285716
num_examples: 349
download_size: 507518
dataset_size: 1499146.0
---
# Dataset Card for "cover-letter-dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
bobbybelajar/AmazonMixedLength | bobbybelajar | 2023-10-15T07:19:36Z | 21 | 0 | null | [
"region:us"
] | 2023-10-15T07:19:36Z | 2023-10-15T07:19:12.000Z | 2023-10-15T07:19:12 | Entry not found
tyzhu/find_first_sent_train_100_eval_10 | tyzhu | 2023-10-31T14:48:31Z | 21 | 0 | null | [
"region:us"
] | 2023-10-31T14:48:31Z | 2023-10-19T15:56:50.000Z | 2023-10-19T15:56:50 | ---
dataset_info:
features:
- name: inputs
dtype: string
- name: targets
dtype: string
- name: title
dtype: string
- name: context
dtype: string
splits:
- name: train
num_bytes: 267331
num_examples: 210
- name: validation
num_bytes: 10399
num_examples: 10
download_size: 135617
dataset_size: 277730
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
---
# Dataset Card for "find_first_sent_train_100_eval_10"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
kardosdrur/folketinget-discussions | kardosdrur | 2023-10-24T11:53:06Z | 21 | 0 | null | [
"license:mit",
"region:us"
] | 2023-10-24T11:53:06Z | 2023-10-24T08:48:35.000Z | 2023-10-24T08:48:35 | ---
license: mit
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: comment
dtype: string
- name: response
dtype: string
splits:
- name: train
num_bytes: 7032676.035654362
num_examples: 3814
- name: test
num_bytes: 1759090.9643456375
num_examples: 954
download_size: 4898174
dataset_size: 8791767.0
---
# Discussions in Folketinget
The dataset is based on data from Folketinget in the Danish Gigaword corpus.
Comment-response pairs are extracted purely on the basis of heuristics and have not been manually evaluated.
The dataset was created to aid the training of sentence-transformer models in the Danish Foundation Models project.
The dataset is currently not recommended for production use.
Schandkroete/RandomEmployeeProfilesV1 | Schandkroete | 2023-10-27T21:56:58Z | 21 | 1 | null | [
"region:us"
] | 2023-10-27T21:56:58Z | 2023-10-27T21:53:51.000Z | 2023-10-27T21:53:51 | Entry not found
theophilusijiebor1/chest-xray | theophilusijiebor1 | 2023-10-29T03:08:57Z | 21 | 0 | null | [
"region:us"
] | 2023-10-29T03:08:57Z | 2023-10-29T03:07:44.000Z | 2023-10-29T03:07:44 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': NORMAL
'1': PNEUMONIA
splits:
- name: train
num_bytes: 3186635036.504
num_examples: 5216
- name: validation
num_bytes: 3030633.0
num_examples: 16
- name: test
num_bytes: 79062317.0
num_examples: 624
download_size: 1230487171
dataset_size: 3268727986.504
---
# Dataset Card for "chest-xray"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
piecake/mulqa | piecake | 2023-10-29T10:57:33Z | 21 | 0 | null | [
"region:us"
] | 2023-10-29T10:57:33Z | 2023-10-29T10:36:23.000Z | 2023-10-29T10:36:23 | ---
# For reference on dataset card metadata, see the spec: https://github.com/huggingface/hub-docs/blob/main/datasetcard.md?plain=1
# Doc / guide: https://huggingface.co/docs/hub/datasets-cards
{}
---
# Dataset Card for Dataset Name
<!-- Provide a quick summary of the dataset. -->
This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1).
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] | [
-0.5322356224060059,
-0.5534716844558716,
0.1290130317211151,
0.23470577597618103,
-0.39626216888427734,
-0.11762470006942749,
-0.03545305132865906,
-0.6389272212982178,
0.5699822306632996,
0.7838326692581177,
-0.7834625840187073,
-0.9173274040222168,
-0.55633145570755,
0.13078093528747559... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
abhinand/tamil-llama-eval | abhinand | 2023-11-13T17:20:06Z | 21 | 1 | null | [
"task_categories:text-generation",
"size_categories:n<1K",
"language:ta",
"license:gpl",
"arxiv:2311.05845",
"region:us"
] | 2023-11-13T17:20:06Z | 2023-10-29T15:27:53.000Z | 2023-10-29T15:27:53 | ---
license: gpl
task_categories:
- text-generation
language:
- ta
pretty_name: tamil-llama-eval
size_categories:
- n<1K
---
# Dataset Card for "tamil-alpaca"
This repository includes evaluation instructions to quickly test the Tamil LLaMA family of instruction models. To dive deep into the development and capabilities of the models, please read the [research paper](https://arxiv.org/abs/2311.05845) and the [introductory blog post (WIP) ]() that outlines our journey and the model's potential impact.
**GitHub Repository:** [https://github.com/abhinand5/tamil-llama](https://github.com/abhinand5/tamil-llama)
## Models evaluated using this dataset
| Task Type | [Tamil-LLaMA-7B](abhinand/tamil-llama-7b-instruct-v0.1) | [Tamil-LLaMA-13B](abhinand/tamil-llama-13b-instruct-v0.1) | [gpt-3.5-turbo](https://platform.openai.com/docs/models/gpt-3-5) |
|-----------------|----------------|-----------------|---------------|
| Question Answering | 77.00 | 75.33 | 54.33 |
| Open-ended QA | 84.47 | 85.26 | 58.68 |
| Reasoning | 47.50 | 64.25 | 63.50 |
| Literature | 45.50 | 40.00 | 71.00 |
| Entertainment | 43.33 | 50.00 | 60.00 |
| Creative Writing| 92.50 | 95.62 | 59.69 |
| Translation | 60.56 | 66.67 | 92.78 |
| Coding | 63.57 | 76.07 | 57.14 |
| Ethics | 23.75 | 57.50 | 40.00 |
| **Overall** | **63.83** | **71.17** | **61.33** |
## Meet the Developers
Get to know the creators behind this innovative model and follow their contributions to the field:
- [Abhinand Balachandran](https://www.linkedin.com/in/abhinand-05/)
## Citation
If you use this model or any of the Tamil-Llama datasets in your research, please cite:
```bibtex
@misc{balachandran2023tamilllama,
title={Tamil-Llama: A New Tamil Language Model Based on Llama 2},
author={Abhinand Balachandran},
year={2023},
eprint={2311.05845},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` | [
-0.16158759593963623,
-0.7565404176712036,
0.27168136835098267,
0.23679541051387787,
-0.5003514289855957,
0.06423340737819672,
0.04145554080605507,
-0.31138876080513,
0.06965764611959457,
0.3770277202129364,
-0.43379727005958557,
-0.7459670901298523,
-0.8457475900650024,
0.1616375148296356... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
dltjdgh0928/open_orca_ko_10000 | dltjdgh0928 | 2023-10-30T06:15:24Z | 21 | 1 | null | [
"region:us"
] | 2023-10-30T06:15:24Z | 2023-10-30T05:58:33.000Z | 2023-10-30T05:58:33 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622263669967651,
0.43461522459983826,
-0.52829909324646,
0.7012971639633179,
0.7915719747543335,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104475975036621,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Eitanli/meal_type | Eitanli | 2023-11-02T13:51:48Z | 21 | 0 | null | [
"region:us"
] | 2023-11-02T13:51:48Z | 2023-10-30T08:16:00.000Z | 2023-10-30T08:16:00 | ---
dataset_info:
features:
- name: id
dtype: int64
- name: recipe
dtype: string
- name: meal_type_title
dtype: string
splits:
- name: train
num_bytes: 107900952
num_examples: 74465
download_size: 54288491
dataset_size: 107900952
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "meal_type"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.368934690952301,
-0.34682050347328186,
0.11957457661628723,
0.19810724258422852,
0.030622044578194618,
-0.05710891634225845,
0.3118149936199188,
-0.09473884850740433,
1.1204071044921875,
0.5640775561332703,
-0.5865943431854248,
-0.6661612391471863,
-0.6003037691116333,
-0.00843838416039... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
phanvancongthanh/enamine_leadlike_600M | phanvancongthanh | 2023-10-31T13:58:25Z | 21 | 0 | null | [
"region:us"
] | 2023-10-31T13:58:25Z | 2023-10-31T11:34:47.000Z | 2023-10-31T11:34:47 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622264862060547,
0.43461528420448303,
-0.52829909324646,
0.7012971639633179,
0.7915720343589783,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104477167129517,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Falah/fashion_moodboards_prompts | Falah | 2023-11-19T08:59:15Z | 21 | 0 | null | [
"region:us"
] | 2023-11-19T08:59:15Z | 2023-11-01T06:36:25.000Z | 2023-11-01T06:36:25 | ---
dataset_info:
features:
- name: prompts
dtype: string
splits:
- name: train
num_bytes: 141480
num_examples: 1000
download_size: 22359
dataset_size: 141480
---
# Dataset Card for "fashion_moodboards_prompts"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.5473146438598633,
-0.32294338941574097,
0.4426268935203552,
0.4725632667541504,
-0.37791532278060913,
-0.045761074870824814,
0.26940661668777466,
0.18323250114917755,
0.9924519062042236,
0.28700363636016846,
-1.3817654848098755,
-0.9960726499557495,
-0.36243757605552673,
-0.183055996894... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
meisin123/iban_speech_corpus | meisin123 | 2023-11-02T04:39:07Z | 21 | 0 | null | [
"region:us"
] | 2023-11-02T04:39:07Z | 2023-11-01T10:12:03.000Z | 2023-11-01T10:12:03 | ---
dataset_info:
features:
- name: audio
dtype: audio
- name: transcription
dtype: string
splits:
- name: train
num_bytes: 1014986154.58
num_examples: 3132
download_size: 981436514
dataset_size: 1014986154.58
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "iban_speech_corpus"
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [How to use](#how-to-use)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Dataset Creation](#dataset-creation)
- [Source Data](#source-data)
- [Additional Information](#additional-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Repository:** The original dataset is found on [Sarah Juan's github link](https://github.com/sarahjuan/iban)
- **Paper:** "Using Resources from a closely-Related language to develop ASR for a very under-resourced Language: A case study for Iban"
### Dataset Summary
This Iban speech corpus is used for training an Automatic Speech Recognition (ASR) model. This dataset contains the audio files (WAV files) with their corresponding transcriptions.
For other resources such as the pronunciation dictionary and the Iban language model, please refer to the original dataset repository [here](https://github.com/sarahjuan/iban).
### How to use
The `datasets` library allows you to load and pre-process your dataset in pure Python, at scale. The dataset can be downloaded and prepared in one call to your local drive by using the `load_dataset` function.
```python
from datasets import load_dataset
dataset = load_dataset("meisin123/iban_speech_corpus", split="train")
```
## Dataset Structure
### Data Instances
```
{'audio': {'path': 'ibf_001_001.wav',
'array': array([ 5.72814941e-01, 5.49011230e-01, -1.82495117e-02, ...,
-2.31628418e-02, -1.26342773e-02, -3.05175781e-05]),
'sampling_rate': 16000},
'transcription': 'pukul sepuluh malam'}
```
### Data Fields
- audio: A dictionary containing the audio filename, the decoded audio array, and the sampling rate.
- transcription: the transcription of the audio file.
## Dataset Creation
- Iban Data collected by Sarah Samson Juan and Laurent Besacier
### Source Data
The audio files are news data provided by a local radio station in Sarawak, Malaysia.
## Additional Information
### Citation Information
Details on the corpora and the experiments on Iban ASR can be found in the following list of publications. The original authors would appreciate it if you cite them should you intend to publish.
```
@inproceedings{Juan14,
Author = {Sarah Samson Juan and Laurent Besacier and Solange Rossato},
Booktitle = {Proceedings of Workshop for Spoken Language Technology for Under-resourced (SLTU)},
Month = {May},
Title = {Semi-supervised G2P bootstrapping and its application to ASR for a very under-resourced language: Iban},
Year = {2014}}
@inproceedings{Juan2015,
Title = {Using Resources from a closely-Related language to develop ASR for a very under-resourced Language: A case study for Iban},
Author = {Sarah Samson Juan and Laurent Besacier and Benjamin Lecouteux and Mohamed Dyab},
Booktitle = {Proceedings of INTERSPEECH},
Year = {2015},
Address = {Dresden, Germany},
Month = {September}}
```
### Contributions
Thanks to [meisin](https://github.com/meisin) for adding this dataset.
| [
-0.40068957209587097,
-0.09910665452480316,
-0.2640133798122406,
0.4066292345523834,
-0.08404212445020676,
0.020180298015475273,
-0.6981527805328369,
-0.38406872749328613,
0.17566226422786713,
0.4247284233570099,
-0.3211236894130707,
-0.5997292399406433,
-0.39935338497161865,
0.15625663101... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
cynefin/llama-2-7b-chat-aave | cynefin | 2023-11-01T16:51:04Z | 21 | 0 | null | [
"region:us"
] | 2023-11-01T16:51:04Z | 2023-11-01T11:02:34.000Z | 2023-11-01T11:02:34 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622264862060547,
0.43461528420448303,
-0.52829909324646,
0.7012971639633179,
0.7915720343589783,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104477167129517,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
HumanF-MarkrAI/WIKI_QA_Near_dedup | HumanF-MarkrAI | 2023-11-03T08:55:52Z | 21 | 0 | null | [
"license:cc-by-nc-sa-4.0",
"region:us"
] | 2023-11-03T08:55:52Z | 2023-11-02T10:57:34.000Z | 2023-11-02T10:57:34 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: input
dtype: float64
- name: output
dtype: string
- name: instruction
dtype: string
splits:
- name: train
num_bytes: 145724140
num_examples: 137505
download_size: 87382170
dataset_size: 145724140
license: cc-by-nc-sa-4.0
---
# Dataset Card for "WIKI_QA_Near_dedup"
**The license is `cc-by-nc-sa`.**
## Dataset Details
**Developers** SeungyooLee (DopeorNopeLee)
The WIKI_QA_Near_dedup dataset was created with a near-dedup algorithm to reduce similarity.
**Its original source is [maywell/wikidata_QA](maywell/wikidata_QA), which was created by an innovative developer named [maywell (Jeonghwan Park)](https://huggingface.co/maywell).**
It follows the cc-by-nc-sa-4.0 license policy.
-0.4984532594680786,
-0.28238368034362793,
0.3126983642578125,
0.05114428326487541,
-0.35845881700515747,
-0.5153242349624634,
0.016647350043058395,
-0.5834056735038757,
0.5439081192016602,
0.6387503147125244,
-0.7651716470718384,
-0.648432731628418,
-0.2459147721529007,
0.382949560880661,... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
jackhogan/agemo_json_prompts | jackhogan | 2023-11-02T16:19:45Z | 21 | 0 | null | [
"region:us"
] | 2023-11-02T16:19:45Z | 2023-11-02T11:17:04.000Z | 2023-11-02T11:17:04 | ---
dataset_info:
features:
- name: text
dtype: string
- name: correct
dtype: string
- name: scrambled
dtype: string
splits:
- name: train
num_bytes: 5053590.0
num_examples: 5319
- name: test
num_bytes: 561510.0
num_examples: 591
download_size: 1349329
dataset_size: 5615100.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
# Dataset Card for "agemo_json_prompts"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.5017361044883728,
-0.34184616804122925,
0.33870357275009155,
0.3250262141227722,
-0.13985130190849304,
-0.0037102277856320143,
0.2958153486251831,
-0.00007125139381969348,
0.3794066309928894,
0.4278980493545532,
-1.0903531312942505,
-0.7067400217056274,
-0.39841946959495544,
0.166206479... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Pclanglais/Brahe-Novels | Pclanglais | 2023-11-02T23:11:30Z | 21 | 1 | null | [
"license:cc0-1.0",
"region:us"
] | 2023-11-02T23:11:30Z | 2023-11-02T23:04:43.000Z | 2023-11-02T23:04:43 | ---
license: cc0-1.0
---
The Brahe-Novels dataset is a collection of annotated novel excerpts in the public domain. It was originally created to train Brahe, an LLM fine-tuned for literary analysis.
Most of the texts come from the Gutenberg project.
The annotations include a mix of synthetic data and manual annotations. In accordance with the principles laid out by the US copyright office, all synthetic data and hybrid synthetic data are in the public domain as well.
| [
-0.5106369853019714,
-0.484907329082489,
0.1194230243563652,
-0.020564479753375053,
0.07429642230272293,
-0.10270150005817413,
-0.047495659440755844,
-0.7079941034317017,
0.013697969727218151,
0.9777650833129883,
-0.5250529050827026,
-0.3748812973499298,
-0.1906415820121765,
0.480975866317... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
pyimagesearch/blog-title | pyimagesearch | 2023-11-03T19:41:10Z | 21 | 0 | null | [
"license:mit",
"region:us"
] | 2023-11-03T19:41:10Z | 2023-11-03T18:34:29.000Z | 2023-11-03T18:34:29 | ---
license: mit
---
| [
-0.12853392958641052,
-0.18616779148578644,
0.6529127955436707,
0.49436280131340027,
-0.19319361448287964,
0.23607419431209564,
0.36072003841400146,
0.050563063472509384,
0.579365611076355,
0.7400140762329102,
-0.6508104205131531,
-0.23783954977989197,
-0.7102249264717102,
-0.0478260256350... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Diegulio/PetClassification | Diegulio | 2023-11-05T04:26:06Z | 21 | 0 | null | [
"region:us"
] | 2023-11-05T04:26:06Z | 2023-11-05T04:25:38.000Z | 2023-11-05T04:25:38 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': No detectado
'1': affenpinscher
'2': afghan_hound
'3': african_hunting_dog
'4': airedale
'5': american_staffordshire_terrier
'6': appenzeller
'7': australian_terrier
'8': basenji
'9': basset
'10': beagle
'11': bedlington_terrier
'12': bernese_mountain_dog
'13': black-and-tan_coonhound
'14': blenheim_spaniel
'15': bloodhound
'16': bluetick
'17': border_collie
'18': border_terrier
'19': borzoi
'20': boston_bull
'21': bouvier_des_flandres
'22': boxer
'23': brabancon_griffon
'24': briard
'25': brittany_spaniel
'26': bull_mastiff
'27': cairn
'28': cardigan
'29': chesapeake_bay_retriever
'30': chihuahua
'31': chow
'32': clumber
'33': cocker_spaniel
'34': collie
'35': curly-coated_retriever
'36': dandie_dinmont
'37': dhole
'38': dingo
'39': doberman
'40': english_foxhound
'41': english_setter
'42': english_springer
'43': entlebucher
'44': eskimo_dog
'45': flat-coated_retriever
'46': french_bulldog
'47': gato
'48': german_shepherd
'49': german_short-haired_pointer
'50': giant_schnauzer
'51': golden_retriever
'52': gordon_setter
'53': great_dane
'54': great_pyrenees
'55': greater_swiss_mountain_dog
'56': groenendael
'57': ibizan_hound
'58': irish_setter
'59': irish_terrier
'60': irish_water_spaniel
'61': irish_wolfhound
'62': italian_greyhound
'63': japanese_spaniel
'64': keeshond
'65': kelpie
'66': kerry_blue_terrier
'67': komondor
'68': kuvasz
'69': labrador_retriever
'70': lakeland_terrier
'71': leonberg
'72': lhasa
'73': malamute
'74': malinois
'75': maltese_dog
'76': mexican_hairless
'77': miniature_pinscher
'78': miniature_poodle
'79': miniature_schnauzer
'80': newfoundland
'81': norfolk_terrier
'82': norwegian_elkhound
'83': norwich_terrier
'84': old_english_sheepdog
'85': otterhound
'86': papillon
'87': pekinese
'88': pembroke
'89': pomeranian
'90': pug
'91': redbone
'92': rhodesian_ridgeback
'93': rottweiler
'94': saint_bernard
'95': saluki
'96': samoyed
'97': schipperke
'98': scotch_terrier
'99': scottish_deerhound
'100': sealyham_terrier
'101': shetland_sheepdog
'102': shih-tzu
'103': siberian_husky
'104': silky_terrier
'105': soft-coated_wheaten_terrier
'106': staffordshire_bullterrier
'107': standard_poodle
'108': standard_schnauzer
'109': sussex_spaniel
'110': tibetan_mastiff
'111': tibetan_terrier
'112': toy_poodle
'113': toy_terrier
'114': vizsla
'115': walker_hound
'116': weimaraner
'117': welsh_springer_spaniel
'118': west_highland_white_terrier
'119': whippet
'120': wire-haired_fox_terrier
'121': yorkshire_terrier
splits:
- name: train
num_bytes: 344179685.94
num_examples: 7499
- name: validation
num_bytes: 29205702.0
num_examples: 834
- name: test
num_bytes: 81732756.983
num_examples: 2083
download_size: 379294077
dataset_size: 455118144.923
---
# Dataset Card for "PetClassification"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.6108363270759583,
-0.04877405986189842,
0.18747907876968384,
0.08248938620090485,
-0.10408425331115723,
0.04655905067920685,
0.17163516581058502,
-0.291451632976532,
0.678629457950592,
0.4285007119178772,
-0.611725926399231,
-0.6812096238136292,
-0.38234248757362366,
-0.2483086884021759... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
RachitD15673/DSA_problems_with_code | RachitD15673 | 2023-11-05T10:26:34Z | 21 | 0 | null | [
"region:us"
] | 2023-11-05T10:26:34Z | 2023-11-05T09:09:09.000Z | 2023-11-05T09:09:09 | Entry not found | [
-0.32276472449302673,
-0.22568407654762268,
0.8622258901596069,
0.4346148371696472,
-0.5282984972000122,
0.7012965679168701,
0.7915717363357544,
0.07618629932403564,
0.7746022939682007,
0.2563222646713257,
-0.785281777381897,
-0.22573848068714142,
-0.9104482531547546,
0.5715669393539429,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Ujan/github_classification_no_empty_readme | Ujan | 2023-11-05T18:51:37Z | 21 | 0 | null | [
"region:us"
] | 2023-11-05T18:51:37Z | 2023-11-05T18:50:54.000Z | 2023-11-05T18:50:54 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
dataset_info:
features:
- name: names
dtype: string
- name: readmes
dtype: string
- name: topics
dtype: string
- name: labels
dtype: string
splits:
- name: train
num_bytes: 51299344.74701966
num_examples: 10334
- name: validation
num_bytes: 6413659.126490169
num_examples: 1292
- name: test
num_bytes: 6413659.126490169
num_examples: 1292
download_size: 29121376
dataset_size: 64126663.0
---
# Dataset Card for "github_classification_no_empty_readme"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.454999178647995,
-0.16174374520778656,
0.2662765681743622,
0.012564101256430149,
-0.3255958557128906,
0.025421343743801117,
-0.04565252736210823,
-0.08200530707836151,
0.9701449871063232,
0.523841142654419,
-0.7977185249328613,
-1.1055704355239868,
-0.5461364984512329,
-0.19592304527759... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
kowndinya23/flan2022 | kowndinya23 | 2023-11-09T07:10:33Z | 21 | 0 | null | [
"region:us"
] | 2023-11-09T07:10:33Z | 2023-11-09T06:49:11.000Z | 2023-11-09T06:49:11 | ---
configs:
- config_name: default
data_files:
- split: flan2021
path: data/flan2021-*
- split: t0
path: data/t0-*
- split: cot
path: data/cot-*
- split: niv2
path: data/niv2-*
- split: dialog
path: data/dialog-*
dataset_info:
features:
- name: inputs
dtype: string
- name: targets
dtype: string
- name: task_source
dtype: string
- name: task_name
dtype: string
- name: template_type
dtype: string
splits:
- name: flan2021
num_bytes: 8988026240
num_examples: 5362361
- name: t0
num_bytes: 4602180562
num_examples: 1650308
- name: cot
num_bytes: 209004809
num_examples: 183848
- name: niv2
num_bytes: 13104211362
num_examples: 10066896
- name: dialog
num_bytes: 1024507265
num_examples: 553869
download_size: 16511300644
dataset_size: 27927930238
---
# Dataset Card for "flan2022"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.7777625918388367,
-0.22854623198509216,
0.13734540343284607,
0.2484925091266632,
-0.03060133196413517,
-0.1322588324546814,
0.22366920113563538,
-0.2977968156337738,
0.9536900520324707,
0.5755918622016907,
-0.9104466438293457,
-0.5695282816886902,
-0.51900714635849,
-0.23268893361091614... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
fw1zr/rahul-gandhi-captions | fw1zr | 2023-11-09T16:58:53Z | 21 | 0 | null | [
"region:us"
] | 2023-11-09T16:58:53Z | 2023-11-09T16:58:34.000Z | 2023-11-09T16:58:34 | ---
dataset_info:
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 4491114.0
num_examples: 116
download_size: 4452636
dataset_size: 4491114.0
---
# Dataset Card for "rahul-gandhi-captions"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.5244444608688354,
-0.2024354636669159,
0.06294974684715271,
0.18529672920703888,
-0.3119725286960602,
0.31233659386634827,
0.010114111937582493,
-0.05708202347159386,
0.5696579217910767,
0.4180799722671509,
-0.8513811826705933,
-0.5097954273223877,
-0.7064807415008545,
0.049259904772043... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
alsubari/Israel-palestine-war | alsubari | 2023-11-13T12:19:20Z | 21 | 0 | null | [
"task_categories:text-classification",
"task_categories:text-generation",
"task_categories:conversational",
"task_categories:summarization",
"task_categories:text2text-generation",
"task_categories:sentence-similarity",
"task_categories:zero-shot-classification",
"task_categories:fill-mask",
"langua... | 2023-11-13T12:19:20Z | 2023-11-09T21:10:23.000Z | 2023-11-09T21:10:23 | ---
language:
- en
license: afl-3.0
task_categories:
- text-classification
- text-generation
- conversational
- summarization
- text2text-generation
- sentence-similarity
- zero-shot-classification
- fill-mask
dataset_info:
features:
- name: id
dtype: string
- name: publish_channel
dtype: string
- name: title
dtype: string
- name: start_duration
dtype: string
- name: end_duration
dtype: string
- name: lengthSeconds
dtype: int64
- name: total_comments
dtype: int64
- name: total_unique_users
dtype: int64
- name: comments
list:
- name: author
dtype: string
- name: channel
dtype: string
- name: cid
dtype: string
- name: heart
dtype: bool
- name: reply
dtype: bool
- name: text
dtype: string
- name: time
dtype: string
- name: time_parsed
dtype: float64
- name: votes
dtype: string
splits:
- name: train
num_bytes: 274986362
num_examples: 310
download_size: 152975094
dataset_size: 274986362
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
tags:
- not-for-all-audiences
---
# Dataset Card for "Israel-palestine-war"
Comments on some YouTube channels regarding Israel-Palestine war videos published during the period.
# Number of comments per channel:
{'NBCNews': 188490, 'aljazeeraenglish': 100164, 'CNN': 182272, 'Piers Morgan Uncensored': 128093, 'BBCNews': 267834, 'FoxNews': 161493}
# data structure:
Dataset({
features: ['id', 'publish_channel', 'title', 'start_duration', 'end_duration', 'lengthSeconds', 'total_comments', 'total_unique_users', 'comments'],
num_rows: 310
})
# Total number of videos: 310; total number of comments: 1028346
-0.8059569597244263,
-0.5958005785942078,
-0.047708660364151,
0.4725136458873749,
-0.7441005110740662,
0.09650488942861557,
-0.2835055887699127,
0.4304427206516266,
0.3910925090312958,
0.525567889213562,
-0.8450614809989929,
-0.879160463809967,
-1.0285890102386475,
0.16555914282798767,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
pphuc25/vlsp-train-vectorized | pphuc25 | 2023-11-11T17:47:59Z | 21 | 0 | null | [
"region:us"
] | 2023-11-11T17:47:59Z | 2023-11-11T17:25:18.000Z | 2023-11-11T17:25:18 | ---
dataset_info:
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: input_length
dtype: int64
- name: labels
sequence: int64
splits:
- name: train
num_bytes: 24115945291.875
num_examples: 171441
download_size: 24036430824
dataset_size: 24115945291.875
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "vlsp-train-vectorized"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.5707027912139893,
0.1251530796289444,
0.1144338846206665,
0.3281501531600952,
-0.28592076897621155,
-0.07634536921977997,
0.2746751010417938,
-0.14268070459365845,
0.7080532312393188,
0.3233305811882019,
-0.7882729172706604,
-0.6010646224021912,
-0.6947891712188721,
-0.38588935136795044... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
eddielin0926/chinese-icd | eddielin0926 | 2023-11-18T14:49:09Z | 21 | 1 | null | [
"task_categories:text-classification",
"size_categories:1M<n<10M",
"language:zh",
"language:en",
"license:mit",
"medical",
"region:us"
] | 2023-11-18T14:49:09Z | 2023-11-11T17:39:25.000Z | 2023-11-11T17:39:25 | ---
language:
- zh
- en
license: mit
size_categories:
- 1M<n<10M
task_categories:
- text-classification
pretty_name: chicd
tags:
- medical
dataset_info:
features:
- name: year
dtype: int32
- name: month
dtype: int32
- name: 'no'
dtype: int32
- name: death
dtype: int32
- name: input_code
dtype: int32
- name: result_code
dtype: int32
- name: check
dtype: bool
- name: serial_no
dtype: int32
- name: catalog
dtype: int32
- name: inputs
sequence: string
- name: results
sequence: string
- name: icds
sequence: string
- name: encodes
sequence:
class_label:
names:
'0': L519
'1': A523
'2': I898
'3': A047
'4': E144
'5': C797
'6': C755
'7': K831
'8': B379
'9': S621
'10': C672
'11': K409
'12': D073
'13': A179
'14': I255
'15': K353
'16': C029
'17': W11
'18': D139
'19': R944
'20': V785
'21': T502
'22': C921
'23': K228
'24': S069(TR)
'25': K226
'26': N501(nTR)
'27': D136
'28': Q878
'29': S610
'30': L032
'31': T835
'32': O699
'33': K820
'34': V827
'35': K256
'36': M769
'37': C677
'38': K920
'39': C689
'40': T183
'41': T327
'42': B948
'43': T213
'44': C160
'45': R060
'46': T812
'47': F104
'48': I311
'49': I670
'50': C112
'51': H931
'52': K868
'53': S158
'54': R35
'55': L109
'56': T115
'57': G2009
'58': H348
'59': P012
'60': Q019
'61': V878
'62': G969
'63': H441
'64': K099
'65': M431
'66': X97
'67': C773
'68': J989
'69': F191
'70': Q445
'71': O691
'72': I110
'73': K109
'74': S121
'75': Q069
'76': D302
'77': K650
'78': D447
'79': N508
'80': V875
'81': E702
'82': J840
'83': T174
'84': S360(TR)
'85': R798
'86': N428
'87': C629
'88': O690
'89': O441
'90': M624
'91': I519
'92': R093
'93': I471
'94': A78
'95': T818
'96': X78
'97': D133
'98': P252
'99': N920
'100': K627
'101': V455
'102': B86
'103': C509
'104': P229
'105': V892
'106': I350(nRH)
'107': H309
'108': C944
'109': T360
'110': S618
'111': J9840
'112': L409
'113': L038
'114': B457
'115': H431(nTR)
'116': R509
'117': F952
'118': A421
'119': S208
'120': A509
'121': I472
'122': H108
'123': B004
'124': C52
'125': P052
'126': E721
'127': F072
'128': W14
'129': T561
'130': C153
'131': Q639
'132': N209
'133': T794
'134': T110
'135': N041
'136': K918
'137': S010
'138': J940
'139': M726
'140': H350
'141': E782
'142': C632
'143': V339
'144': C223
'145': P544
'146': J128
'147': C252
'148': M954
'149': T210
'150': C752
'151': C720
'152': S275
'153': R11
'154': S699
'155': E785
'156': E169
'157': V804
'158': Y839
'159': E210
'160': K902
'161': V275
'162': E720
'163': K262
'164': R202
'165': T357
'166': J81
'167': P015
'168': I423
'169': V389
'170': G378
'171': S315
'172': Q257
'173': T809
'174': V294
'175': Q269
'176': C695
'177': D519
'178': S710
'179': N764
'180': B853
'181': I748
'182': T858
'183': T424
'184': O998
'185': B371
'186': J948
'187': I318
'188': S450
'189': C139
'190': E771
'191': T400
'192': G822(nTR)
'193': J387
'194': A029
'195': J1110
'196': D449
'197': K620
'198': B450
'199': S311
'200': C441
'201': O365
'202': D125
'203': P111
'204': T797
'205': S829
'206': D261
'207': I959
'208': Q789
'209': C442
'210': L209
'211': R292
'212': Q2830
'213': S909
'214': C73
'215': K257
'216': M1998
'217': S328
'218': D691
'219': N308
'220': G709
'221': T604
'222': A46
'223': J019
'224': W74
'225': J329
'226': S141
'227': H409
'228': E876
'229': E211
'230': N481
'231': R630
'232': Q070
'233': I718(nTR)
'234': T041
'235': O669
'236': H919
'237': T307
'238': M125
'239': K719
'240': I738
'241': S242
'242': V285
'243': T455
'244': I5150
'245': M319
'246': C947
'247': I868
'248': S223
'249': X599
'250': N251
'251': N419
'252': S910
'253': Q336
'254': J110
'255': B461
'256': E140
'257': C220
'258': N390
'259': T814
'260': F239
'261': N159
'262': Q559
'263': B250
'264': F051
'265': P251
'266': D486
'267': C770
'268': M798
'269': Q677
'270': D759
'271': N760
'272': C248
'273': C023
'274': A809
'275': S251
'276': I214
'277': B589
'278': K7210
'279': K628
'280': O85
'281': T145
'282': G048
'283': D762
'284': V195
'285': I671
'286': J150
'287': N946
'288': S327
'289': K669
'290': E833
'291': T829
'292': T597
'293': C260
'294': M620
'295': I270
'296': T07
'297': T826
'298': G258
'299': D164
'300': C968
'301': R51
'302': K630
'303': Q917
'304': K863
'305': F155
'306': P241
'307': I808
'308': N059
'309': T549
'310': S519
'311': S522
'312': A529
'313': C740
'314': G040
'315': R258
'316': C321
'317': G600
'318': O868
'319': C062
'320': F03
'321': N649
'322': K566
'323': M480
'324': T901
'325': E148
'326': T242
'327': V115
'328': N719
'329': A401
'330': S501
'331': M236
'332': T383
'333': P740
'334': L030
'335': C64
'336': D013
'337': R33
'338': Y26
'339': Q271
'340': I6149(nTR)
'341': I254
'342': V465
'343': Y86
'344': I603(nTR)
'345': Q030
'346': C172
'347': M703
'348': X36
'349': T250
'350': I310
'351': W32
'352': M659
'353': N86
'354': B008
'355': N835
'356': M130
'357': B909
'358': N322
'359': S701
'360': D24
'361': H335
'362': Q935
'363': S930
'364': T932
'365': B689
'366': Q233
'367': D166
'368': G231
'369': S819
'370': E859
'371': B789
'372': B977
'373': T733
'374': T132
'375': O909
'376': V499
'377': Y579
'378': T886
'379': B389
'380': J411
'381': H449
'382': C65
'383': V264
'384': I455
'385': K102
'386': I071(RH)
'387': F019
'388': L989
'389': C713
'390': G218
'391': Q764
'392': P240
'393': O623
'394': I6339
'395': R042
'396': Q288
'397': S800
'398': O908
'399': S061(TR)
'400': T432
'401': T911
'402': C889
'403': T885
'404': I2190
'405': T519
'406': R34
'407': S375(TR)
'408': A492
'409': G913
'410': L930
'411': I480
'412': Q254
'413': T410
'414': D332
'415': D431
'416': A066
'417': C716
'418': W20
'419': V184
'420': D372
'421': B901
'422': V556
'423': B91
'424': R221
'425': R628
'426': E118
'427': R049(nTR)
'428': C817
'429': I6919
'430': C719
'431': K572
'432': E101
'433': V555
'434': V445
'435': C694
'436': D530
'437': K450
'438': M1999
'439': F311
'440': T903
'441': I281
'442': F163
'443': B082
'444': T146
'445': H045
'446': Q251
'447': M255
'448': C419
'449': K3199
'450': L97
'451': D382
'452': G568
'453': I253
'454': E143
'455': L732
'456': J181
'457': T323
'458': T426
'459': C951
'460': G513
'461': R290
'462': T131
'463': E114
'464': S599
'465': Y433
'466': O019
'467': V746
'468': F452
'469': D010
'470': P369
'471': B89
'472': E748
'473': D141
'474': L743
'475': D213
'476': O622
'477': N800
'478': Q613
'479': X08
'480': I749(EM)
'481': I677
'482': R942
'483': I780
'484': T805
'485': D200
'486': D239
'487': H188
'488': A369
'489': I312(nTR)
'490': X83
'491': I6939
'492': C003
'493': A165
'494': C674
'495': I872
'496': T847
'497': G713
'498': L033
'499': I288(nTR)
'500': F200
'501': J1010
'502': A014
'503': C790
'504': C218
'505': K573
'506': N72
'507': A415
'508': I958
'509': D360
'510': O95
'511': T924
'512': C028
'513': Q824
'514': P780
'515': S381
'516': D892
'517': I349
'518': A985
'519': D401
'520': E790
'521': L259
'522': I771(nTR)
'523': I5140
'524': N838
'525': I051(RH)
'526': K449(nTR)
'527': W25
'528': L929
'529': J969
'530': T119
'531': S720
'532': T274
'533': E111
'534': N820
'535': A878
'536': N151
'537': G839(nTR)
'538': J47
'539': T634
'540': S110
'541': V495
'542': T811
'543': Q211
'544': X99
'545': C502
'546': V235
'547': T861
'548': A327
'549': D734
'550': T782
'551': F069
'552': L851
'553': K746
'554': W22
'555': T149
'556': B085
'557': T451
'558': M139
'559': C210
'560': R179
'561': Q600
'562': I719
'563': I629(nTR)
'564': T312
'565': J938(nTR)
'566': Q258
'567': S070
'568': G930
'569': E260
'570': I250
'571': S700
'572': V199
'573': N858
'574': Q222
'575': I730
'576': J323
'577': C767
'578': K803
'579': R42
'580': K741
'581': E740
'582': Q174
'583': V586
'584': R680
'585': C413
'586': D473
'587': T490
'588': E349
'589': I252
'590': M626
'591': B171
'592': D432
'593': J320
'594': Q780
'595': J384
'596': G470
'597': B382
'598': T529
'599': V817
'600': K420
'601': V789
'602': S222
'603': K703
'604': I38
'605': F141
'606': N319
'607': D159
'608': E750
'609': T450
'610': Y838
'611': S099
'612': Y00
'613': V645
'614': V959
'615': K273
'616': I822
'617': Q289
'618': A051
'619': T620
'620': T754
'621': N499
'622': R13
'623': D391
'624': N323
'625': A244
'626': N185
'627': D695
'628': D739
'629': O235
'630': T142
'631': Q549
'632': I260
'633': Y883
'634': C259
'635': N039
'636': S364(TR)
'637': Q602
'638': S033
'639': R894
'640': G835
'641': D384
'642': F232
'643': X24
'644': C380
'645': C109
'646': S722
'647': O021
'648': Q649
'649': N899
'650': B351
'651': Q240
'652': B362
'653': K716
'654': C541
'655': S422
'656': R410
'657': C329
'658': I6140(nTR)
'659': J849
'660': A069
'661': D509
'662': V919
'663': G98
'664': C800
'665': O159
'666': A321
'667': K860
'668': S279
'669': K253
'670': C251
'671': B270
'672': I740
'673': A33
'674': S012
'675': T413
'676': D609
'677': T933
'678': P239
'679': J322
'680': V599
'681': E162
'682': V809
'683': W09
'684': R599
'685': S231
'686': H169
'687': I820
'688': A319
'689': C962
'690': S079
'691': M503
'692': E550
'693': R470
'694': E512
'695': D130
'696': T317
'697': I426
'698': S109
'699': J939(nTR)
'700': V589
'701': V536
'702': R18
'703': I479
'704': S637
'705': V646
'706': A183
'707': N369
'708': N492
'709': N62
'710': B301
'711': H832
'712': Q443
'713': C61
'714': L301
'715': S049
'716': Q209
'717': K610
'718': G711
'719': K578
'720': T542
'721': C793
'722': I200
'723': N600
'724': N134
'725': J160
'726': J628
'727': K291
'728': Q539
'729': K751
'730': E538
'731': Q242
'732': O903
'733': N359
'734': C180
'735': E798
'736': Q769
'737': E669
'738': T270
'739': C189
'740': S379(TR)
'741': I744
'742': D135
'743': F131
'744': K121
'745': T019
'746': H051
'747': V795
'748': H579
'749': R32
'750': J860
'751': Q898
'752': V735
'753': T868
'754': E213
'755': C481
'756': K801
'757': W30
'758': T919
'759': S711
'760': V494
'761': T902
'762': D442
'763': X74
'764': P027
'765': T524
'766': G834(nTR)
'767': D508
'768': V819
'769': K858
'770': L720
'771': I248
'772': C753
'773': G540
'774': G1229
'775': N288
'776': K134
'777': S570
'778': Q641
'779': S551
'780': G479
'781': V399
'782': T281
'783': D441
'784': P60
'785': A021
'786': T671
'787': K430
'788': V435
'789': T570
'790': C910
'791': L739
'792': Q218
'793': I400
'794': B941
'795': F153
'796': G310
'797': V636
'798': D485
'799': I676
'800': C723
'801': I059
'802': S750
'803': S663
'804': N809
'805': E60
'806': W53
'807': I454
'808': G249
'809': T273
'810': Q183
'811': Q794
'812': V699
'813': A539
'814': V245
'815': M349
'816': C809
'817': C457
'818': G459
'819': V595
'820': I512
'821': J399
'822': Q200
'823': M061
'824': N321
'825': F011
'826': E060
'827': D849
'828': P209
'829': R619
'830': G129
'831': C224
'832': O459
'833': C102
'834': C241
'835': B162
'836': C444
'837': J13
'838': K766
'839': N822
'840': S008
'841': C763
'842': E161
'843': C774
'844': C050
'845': T406
'846': T022
'847': A418
'848': C341
'849': H209
'850': C729
'851': E110
'852': R600
'853': I899
'854': E703
'855': D232
'856': J358
'857': T808
'858': V879
'859': X23
'860': G255
'861': M218
'862': T143
'863': D210
'864': N488(nTR)
'865': J981
'866': D594
'867': T326
'868': A527
'869': Q225
'870': G619
'871': H71
'872': T819
'873': T846
'874': M796
'875': S204
'876': V874
'877': O720
'878': C631
'879': C499
'880': Q049
'881': T856
'882': I741
'883': K767
'884': E756
'885': E873
'886': T093(TR)
'887': J342
'888': B449
'889': T392
'890': O430
'891': T888
'892': F192
'893': Q850
'894': C090
'895': T302
'896': G588
'897': S013
'898': R05
'899': K315
'900': G919
'901': E662
'902': R413
'903': C679
'904': V114
'905': I440
'906': T71
'907': P529
'908': E786
'909': D481
'910': C539
'911': C751
'912': O411
'913': D561
'914': B188
'915': I6930
'916': P819
'917': C163
'918': C929
'919': G728
'920': T430
'921': S271
'922': T232
'923': K272
'924': T790
'925': V225
'926': W84
'927': R227
'928': S383
'929': R072
'930': S005
'931': N309
'932': W07
'933': V039
'934': B690
'935': E112
'936': V546
'937': L130
'938': Q676
'939': H448
'940': L139
'941': N736
'942': I6949
'943': K225
'944': I620(nTR)
'945': S374(TR)
'946': S119
'947': R092
'948': N301
'949': B011
'950': I489
'951': S335
'952': A391
'953': C431
'954': B199
'955': C021
'956': D376
'957': M543
'958': G529
'959': N824
'960': D648
'961': A498
'962': R252
'963': K056
'964': B99
'965': N823
'966': I891
'967': M628
'968': S709
'969': Q437
'970': C460
'971': K116
'972': V129
'973': K274
'974': D219
'975': A170
'976': S376(TR)
'977': I10
'978': N410
'979': D179
'980': S969
'981': R048(nTR)
'982': I745
'983': Q446
'984': R041(nTR)
'985': I211
'986': L308
'987': E139
'988': S030
'989': R54
'990': B458
'991': T843
'992': K579
'993': D350
'994': D860
'995': P002
'996': D489
'997': M502
'998': A009
'999': S362(TR)
'1000': C785
'1001': V031
'1002': E724
'1003': H104
'1004': R002
'1005': G309
'1006': C917
'1007': R961
'1008': R15
'1009': C902
'1010': E43
'1011': T08
'1012': S423
'1013': I728(nTR)
'1014': W15
'1015': M609
'1016': C181
'1017': M314
'1018': M549
'1019': T941
'1020': C183
'1021': S051
'1022': V144
'1023': A164
'1024': T802
'1025': E102
'1026': E41
'1027': T316
'1028': J450
'1029': V436
'1030': I878
'1031': Q213
'1032': K252
'1033': K251
'1034': C171
'1035': K460
'1036': S678
'1037': K835
'1038': E273
'1039': G932
'1040': T592
'1041': E519
'1042': F220
'1043': I341
'1044': C923
'1045': E146
'1046': T393
'1047': C319
'1048': V816
'1049': M233
'1050': D309
'1051': Q02
'1052': R940
'1053': M539
'1054': D019
'1055': Q766
'1056': C399
'1057': E722
'1058': R788
'1059': T506
'1060': B342
'1061': B258
'1062': I359
'1063': O751
'1064': I352
'1065': K37
'1066': F55
'1067': C833
'1068': P788
'1069': S368(TR)
'1070': C844
'1071': D299
'1072': S276
'1073': P549
'1074': D471
'1075': S723
'1076': X58
'1077': M320
'1078': F929
'1079': M243
'1080': M712
'1081': C56
'1082': O639
'1083': M799
'1084': F530
'1085': C781
'1086': I659
'1087': K6310
'1088': T855
'1089': B908
'1090': S751
'1091': C792
'1092': V284
'1093': E236
'1094': K603
'1095': P298
'1096': I770
'1097': F739
'1098': D143
'1099': D410
'1100': S300
'1101': A084
'1102': S352
'1103': K744
'1104': Q324
'1105': Q279
'1106': I778
'1107': S721
'1108': V060
'1109': W77
'1110': V913
'1111': C140
'1112': K36
'1113': K739
'1114': T675
'1115': A481
'1116': M600
'1117': C519
'1118': V775
'1119': T97
'1120': E209
'1121': T813
'1122': F842
'1123': K633
'1124': C780
'1125': D531
'1126': J219
'1127': V145
'1128': Y590
'1129': Q323
'1130': T202
'1131': V655
'1132': E871
'1133': I722(nTR)
'1134': C390
'1135': P220
'1136': Q423
'1137': H308
'1138': I459
'1139': W33
'1140': L728
'1141': K800
'1142': D868
'1143': J157
'1144': Q262
'1145': C310
'1146': T028
'1147': T658
'1148': S688
'1149': T435
'1150': K639
'1151': L031
'1152': E744
'1153': F162
'1154': Q410
'1155': A89
'1156': Y079
'1157': S053(TR)
'1158': J60
'1159': A749
'1160': C381
'1161': F432
'1162': R53
'1163': K560
'1164': A812
'1165': F419
'1166': C432
'1167': Q749
'1168': S028
'1169': X12
'1170': A199
'1171': S771
'1172': D397
'1173': H669
'1174': I701
'1175': W55
'1176': N12
'1177': I700
'1178': C081
'1179': B005
'1180': Q893
'1181': T253
'1182': M216
'1183': D728
'1184': K859
'1185': D445
'1186': Q059
'1187': P250
'1188': E874
'1189': L029
'1190': S531
'1191': Q230
'1192': C761
'1193': C472
'1194': D439
'1195': V756
'1196': C759
'1197': S272
'1198': G64
'1199': M311
'1200': D131
'1201': C261
'1202': C022
'1203': D731
'1204': S589
'1205': T827
'1206': Q212
'1207': G318
'1208': R098
'1209': K260
'1210': C33
'1211': N399
'1212': S063(TR)
'1213': I710(nTR)
'1214': D433
'1215': C496
'1216': A403
'1217': M089
'1218': T378
'1219': F259
'1220': Q179
'1221': Q631
'1222': J041
'1223': W36
'1224': K632
'1225': C787
'1226': Q348
'1227': Y842
'1228': S411
'1229': L26
'1230': C791
'1231': S421
'1232': T958
'1233': I772(nTR)
'1234': I348
'1235': K30
'1236': Q438
'1237': K661(nTR)
'1238': K255
'1239': T433
'1240': D381
'1241': E46
'1242': H819
'1243': K088
'1244': C01
'1245': C433
'1246': S179
'1247': I951
'1248': H702
'1249': I724(nTR)
'1250': G618
'1251': V635
'1252': S510
'1253': S684
'1254': Q874
'1255': I209
'1256': N411
'1257': G245
'1258': P017
'1259': R471
'1260': Q248
'1261': T321
'1262': P070
'1263': W41
'1264': S978
'1265': N759
'1266': P159
'1267': M625
'1268': F410
'1269': E511
'1270': N211
'1271': D103
'1272': I078
'1273': Q220
'1274': D45
'1275': Q819
'1276': F105
'1277': M341
'1278': I139
'1279': R162
'1280': K828
'1281': C493
'1282': I6120(nTR)
'1283': E055
'1284': C412
'1285': C920
'1286': M513
'1287': R390
'1288': T864
'1289': V134
'1290': T148
'1291': S728
'1292': G510
'1293': C403
'1294': E639
'1295': Q309
'1296': E728
'1297': C851
'1298': L039
'1299': Q909
'1300': G936(nTR)
'1301': K292
'1302': E009
'1303': R638
'1304': Q202
'1305': S051(TR)
'1306': T486
'1307': S451
'1308': I259
'1309': N189
'1310': T913
'1311': M898
'1312': C340
'1313': T887
'1314': N398
'1315': N971
'1316': F112
'1317': M340
'1318': C382
'1319': A178
'1320': T596
'1321': S159
'1322': H189
'1323': C880
'1324': W44
'1325': D484
'1326': C132
'1327': E781
'1328': K060
'1329': A491
'1330': D385
'1331': Y369
'1332': J80
'1333': Q860
'1334': M541
'1335': M321
'1336': M053
'1337': T600
'1338': R97
'1339': I880
'1340': K593
'1341': D099
'1342': M259
'1343': G911
'1344': A499
'1345': P789
'1346': E109
'1347': E220
'1348': I469
'1349': V873
'1350': V656
'1351': K382
'1352': L52
'1353': W70
'1354': N049
'1355': V385
'1356': B485
'1357': L899
'1358': V932
'1359': C20
'1360': P375
'1361': P288
'1362': S009
'1363': G404
'1364': C765
'1365': B699
'1366': L405
'1367': T96
'1368': Q897
'1369': E113
'1370': K740
'1371': H472
'1372': G522
'1373': I675
'1374': M419
'1375': Q675
'1376': C779
'1377': P77
'1378': E760
'1379': E059
'1380': R160
'1381': D595
'1382': K830
'1383': D809
'1384': D721
'1385': E279
'1386': I350
'1387': T520
'1388': L509
'1389': C728
'1390': F193
'1391': I351
'1392': M878
'1393': E761
'1394': F103
'1395': M45
'1396': C166
'1397': D696
'1398': D390
'1399': F208
'1400': N498
'1401': T116
'1402': Y01
'1403': G570
'1404': S134
'1405': M080
'1406': S022
'1407': E834
'1408': D181
'1409': Q239
'1410': I499
'1411': C005
'1412': K704
'1413': C343
'1414': T848
'1415': Q552
'1416': I692
'1417': S878
'1418': D758
'1419': D598
'1420': X04
'1421': T460
'1422': K612
'1423': M844
'1424': K928(nTR)
'1425': V909
'1426': C796
'1427': V949
'1428': C700
'1429': L922
'1430': C349
'1431': Y33
'1432': S900
'1433': M310
'1434': C165
'1435': M313
'1436': K223
'1437': V154
'1438': P833
'1439': S019
'1440': N052
'1441': Q793
'1442': I498
'1443': G544
'1444': Q245
'1445': C37
'1446': Q411
'1447': S681
'1448': I510
'1449': Q386
'1450': B428
'1451': T282
'1452': G712
'1453': M512
'1454': Y20
'1455': J151
'1456': K419
'1457': W27
'1458': G08
'1459': T12
'1460': E878
'1461': I6329
'1462': T860
'1463': Q871
'1464': S630
'1465': O680
'1466': Q870
'1467': K282
'1468': E275
'1469': T602
'1470': D841
'1471': F302
'1472': S898
'1473': J339
'1474': Q433
'1475': R198
'1476': B353
'1477': Q792
'1478': I050
'1479': M948
'1480': T180
'1481': L010
'1482': T914
'1483': W24
'1484': T817
'1485': S014
'1486': N300
'1487': O723
'1488': T509
'1489': D569
'1490': J684
'1491': C750
'1492': G403
'1493': C312
'1494': Q782
'1495': R64
'1496': S199
'1497': I850
'1498': D380
'1499': D690
'1500': Q228
'1501': J985
'1502': V575
'1503': D610
'1504': S350
'1505': I712
'1506': S365(TR)
'1507': C436
'1508': N412
'1509': W75
'1510': K113
'1511': W23
'1512': Y871
'1513': W92
'1514': E018
'1515': Q255
'1516': R402
'1517': G610
'1518': R71
'1519': R008
'1520': P702
'1521': T869
'1522': Q224
'1523': L021
'1524': I6139(nTR)
'1525': D361
'1526': Q894
'1527': C473
'1528': C837
'1529': T287
'1530': K570
'1531': B359
'1532': D891
'1533': O410
'1534': C940
'1535': V061
'1536': M318
'1537': T603
'1538': G409
'1539': Q246
'1540': L102
'1541': A399
'1542': Q790
'1543': S269
'1544': S071
'1545': L589
'1546': N850
'1547': D100
'1548': C320
'1549': I864
'1550': Y355
'1551': C030
'1552': O001
'1553': T618
'1554': C031
'1555': G710
'1556': D109
'1557': S820
'1558': K761
'1559': Q000
'1560': W80
'1561': B487
'1562': Y05
'1563': B084
'1564': P838
'1565': P009
'1566': A420
'1567': G471
'1568': D142
'1569': A310
'1570': R068
'1571': R001
'1572': J159
'1573': T959
'1574': B259
'1575': I720(nTR)
'1576': C430
'1577': C830
'1578': O721
'1579': J46
'1580': E854
'1581': T571
'1582': I098
'1583': N250
'1584': G541
'1585': T391
'1586': I6199(nTR)
'1587': O711
'1588': T875
'1589': Q627
'1590': N40
'1591': I731
'1592': E788
'1593': Q223
'1594': E830
'1595': K922
'1596': C437
'1597': J180
'1598': R80
'1599': T481
'1600': K210
'1601': J690
'1602': D693
'1603': L080
'1604': J22
'1605': T537
'1606': M330
'1607': J982
'1608': C480
'1609': I716
'1610': I330
'1611': T828
'1612': S299
'1613': V856
'1614': P960
'1615': O141
'1616': T909
'1617': J660
'1618': K552
'1619': M082
'1620': W57
'1621': E780
'1622': E709
'1623': C110
'1624': J100
'1625': M479
'1626': E069
'1627': V125
'1628': I743
'1629': T564
'1630': V021
'1631': R190
'1632': I213
'1633': M779
'1634': E237
'1635': J852
'1636': G958
'1637': F989
'1638': Q210
'1639': D171
'1640': N10
'1641': T730
'1642': D820
'1643': D386
'1644': R000
'1645': G825(nTR)
'1646': N280
'1647': E115
'1648': G039
'1649': I340(nRH)
'1650': T824
'1651': M489
'1652': T511
'1653': G819
'1654': D150
'1655': P942
'1656': K269
'1657': A879
'1658': S250
'1659': C959
'1660': K289
'1661': Q652
'1662': S331
'1663': F102
'1664': T012
'1665': Q232
'1666': D352
'1667': C159
'1668': X18
'1669': B64
'1670': C914
'1671': C712
'1672': P90
'1673': D090
'1674': M611
'1675': N981
'1676': C222
'1677': I443
'1678': K626
'1679': S923
'1680': K810
'1681': M439
'1682': W31
'1683': H922
'1684': T285
'1685': F519
'1686': P359
'1687': B749
'1688': C721
'1689': B348
'1690': I458
'1691': H549
'1692': E851
'1693': D383
'1694': Q256
'1695': I251
'1696': I775
'1697': S602
'1698': S452
'1699': L020
'1700': I889
'1701': K709
'1702': H913
'1703': L744
'1704': V446
'1705': T462
'1706': C452
'1707': D392
'1708': Q828
'1709': L539
'1710': S225
'1711': A929
'1712': K528
'1713': S026
'1714': M889
'1715': K750
'1716': D550
'1717': S429
'1718': S354
'1719': K869
'1720': N019
'1721': S203
'1722': S280
'1723': J448
'1724': J941
'1725': T912
'1726': I879
'1727': C482
'1728': E249
'1729': T043
'1730': I809
'1731': D201
'1732': T091
'1733': T387
'1734': M509
'1735': S913
'1736': O439
'1737': K5500
'1738': Q614
'1739': F079
'1740': K521
'1741': H113(nTR)
'1742': S025
'1743': Q219
'1744': R401
'1745': K564
'1746': K279
'1747': E049
'1748': H932
'1749': C130
'1750': C089
'1751': T540
'1752': R771
'1753': F151
'1754': B448
'1755': N485
'1756': I971
'1757': F319
'1758': C795
'1759': K743
'1760': C838
'1761': K805
'1762': E15
'1763': W269
'1764': R103
'1765': R829
'1766': F603
'1767': A181
'1768': M210
'1769': T751
'1770': K404
'1771': B440
'1772': S142
'1773': C131
'1774': D738
'1775': C049
'1776': C154
'1777': L028
'1778': K261
'1779': X76
'1780': K040
'1781': M350
'1782': H813
'1783': J156
'1784': I517
'1785': V903
'1786': I060(RH)
'1787': F202
'1788': T449
'1789': Q674
'1790': G838(nTR)
'1791': T172
'1792': P080
'1793': C762
'1794': N762
'1795': C322
'1796': S021
'1797': V685
'1798': M179
'1799': S628
'1800': N328
'1801': J348
'1802': T243
'1803': F709
'1804': L448
'1805': S219
'1806': O979
'1807': I120
'1808': G542(nTR)
'1809': F459
'1810': T905
'1811': O40
'1812': N23
'1813': T029
'1814': C474
'1815': H041
'1816': Q459
'1817': M879
'1818': K832(nTR)
'1819': T305
'1820': J013
'1821': I604(nTR)
'1822': B181
'1823': Q913
'1824': S314
'1825': M109
'1826': I674
'1827': T798
'1828': C103
'1829': Q318
'1830': G439
'1831': M485
'1832': C185
'1833': C342
'1834': Q031
'1835': C061
'1836': C964
'1837': S024
'1838': T503
'1839': O881
'1840': S363(TR)
'1841': T147
'1842': A410
'1843': C440
'1844': N368
'1845': F341
'1846': M460
'1847': S355
'1848': D367
'1849': N888
'1850': S623
'1851': B441
'1852': X13
'1853': W28
'1854': K318(nTR)
'1855': G908
'1856': G935
'1857': N811
'1858': D898
'1859': D735
'1860': D000
'1861': G500
'1862': S934
'1863': F609
'1864': Q791
'1865': M622
'1866': V274
'1867': E274(nTR)
'1868': G210
'1869': Q431
'1870': G939(nTR)
'1871': N201
'1872': G609
'1873': D444
'1874': C414
'1875': D134
'1876': D890
'1877': C922
'1878': S410
'1879': K829
'1880': C221
'1881': I481
'1882': K839
'1883': A064
'1884': E222
'1885': E54
'1886': D377
'1887': N135
'1888': V174
'1889': K823
'1890': Q392
'1891': O469
'1892': K909
'1893': J152
'1894': F29
'1895': Q250
'1896': R100
'1897': V475
'1898': V929
'1899': I349(nRH)
'1900': E147
'1901': S920
'1902': S129
'1903': N482
'1904': D467
'1905': Y899
'1906': J90
'1907': M316
'1908': K108
'1909': J205
'1910': E86
'1911': V755
'1912': L988
'1913': C19
'1914': C760
'1915': E849
'1916': Q159
'1917': K284
'1918': B027
'1919': N813
'1920': T630
'1921': J410
'1922': Q669
'1923': Q068
'1924': E875
'1925': O692
'1926': W34
'1927': S020
'1928': M952
'1929': P701
'1930': I4290
'1931': F719
'1932': K592
'1933': Q621
'1934': C530
'1935': I050(RH)
'1936': K132
'1937': G473
'1938': T840
'1939': H160
'1940': H810
'1941': J392
'1942': K588
'1943': S224
'1944': I4289
'1945': N459
'1946': L309
'1947': I729(nTR)
'1948': I280
'1949': T320
'1950': S313
'1951': Q231
'1952': G959(nTR)
'1953': N200
'1954': A483
'1955': T099
'1956': C450
'1957': R945
'1958': C187
'1959': S799
'1960': Q447
'1961': V694
'1962': J383
'1963': N19
'1964': T111
'1965': A99
'1966': G120
'1967': N61
'1968': C254
'1969': N903
'1970': T568
'1971': J340
'1972': T181
'1973': S520
'1974': I829
'1975': X82
'1976': V676
'1977': K148
'1978': S090
'1979': T361
'1980': D170
'1981': M256
'1982': C637
'1983': K388
'1984': C58
'1985': P073
'1986': A409
'1987': R478
'1988': T749
'1989': V051
'1990': E050
'1991': M998
'1992': R824
'1993': D802
'1994': J391
'1995': F489
'1996': C709
'1997': J449
'1998': V234
'1999': C542
'2000': S221
'2001': Q969
'2002': M866
'2003': C749
'2004': S064(TR)
'2005': Q872
'2006': V799
'2007': C931
'2008': R457
'2009': G008
'2010': S150
'2011': O364
'2012': C060
'2013': V870
'2014': D899
'2015': X84
'2016': V124
'2017': R074
'2018': D190
'2019': I839
'2020': D168
'2021': D694
'2022': M246
'2023': K276
'2024': D420
'2025': A480
'2026': C240
'2027': I6109(nTR)
'2028': W12
'2029': A753
'2030': G938
'2031': C451
'2032': S911
'2033': D692
'2034': O624
'2035': K590
'2036': N137
'2037': R300
'2038': G219
'2039': K3190
'2040': S325
'2041': R560
'2042': E872
'2043': W10
'2044': J980
'2045': M331
'2046': F179
'2047': E569
'2048': S359
'2049': J351
'2050': R208
'2051': J188
'2052': T175
'2053': C680
'2054': S323
'2055': H342
'2056': K520
'2057': J040
'2058': I079(RH)
'2059': T409
'2060': R011
'2061': K765
'2062': S246
'2063': M802
'2064': R529
'2065': V476
'2066': Q158
'2067': P523
'2068': K469
'2069': C794
'2070': I5149
'2071': S559
'2072': X81
'2073': A430
'2074': B399
'2075': F411
'2076': H959
'2077': E329
'2078': P269
'2079': G001
'2080': P704
'2081': S270
'2082': J121
'2083': C845
'2084': J841
'2085': V839
'2086': N179
'2087': N730
'2088': B49
'2089': A329
'2090': C249
'2091': I5009
'2092': D300
'2093': N26
'2094': C402
'2095': D046
'2096': K629
'2097': B003
'2098': S770
'2099': I070
'2100': C009
'2101': C039
'2102': I698
'2103': E725
'2104': T599
'2105': T423
'2106': T189
'2107': C300
'2108': S220
'2109': A162
'2110': K439
'2111': S274
'2112': A020
'2113': R030
'2114': I208
'2115': N832
'2116': B376
'2117': I210
'2118': Q048
'2119': D002
'2120': G038
'2121': N119
'2122': R943
'2123': Q2780
'2124': D588
'2125': B069
'2126': K611
'2127': V579
'2128': L024
'2129': L023
'2130': R14
'2131': C148
'2132': T139
'2133': C23
'2134': I602(nTR)
'2135': G809
'2136': K911
'2137': P618
'2138': K293
'2139': S003
'2140': T781
'2141': N302
'2142': X590
'2143': I713(nTR)
'2144': M359
'2145': K265
'2146': S322
'2147': K861
'2148': I702
'2149': T842
'2150': L108
'2151': A379
'2152': J158
'2153': I788
'2154': W73
'2155': K833
'2156': N828
'2157': F322
'2158': M402
'2159': X00
'2160': F480
'2161': I871(nTR)
'2162': D012
'2163': J949
'2164': B369
'2165': E763
'2166': V244
'2167': N133
'2168': S065(TR)
'2169': X72
'2170': R688
'2171': K729
'2172': K658
'2173': L919
'2174': X349
'2175': N450
'2176': C725
'2177': M758
'2178': I518
'2179': P591
'2180': W13
'2181': T629
'2182': M100
'2183': T795
'2184': N831
'2185': Q229
'2186': I742
'2187': M103
'2188': B340
'2189': V011
'2190': C831
'2191': S724
'2192': T844
'2193': D443
'2194': Q759
'2195': S659
'2196': C673
'2197': D869
'2198': T300
'2199': D751
'2200': Q989
'2201': I516
'2202': P072
'2203': C253
'2204': W49
'2205': X70
'2206': S836
'2207': T136
'2208': D329
'2209': V695
'2210': A490
'2211': S259
'2212': V175
'2213': W67
'2214': Q601
'2215': W06
'2216': I721(nTR)
'2217': M245
'2218': J680
'2219': G960
'2220': J209
'2221': R161
'2222': S619
'2223': T009
'2224': Q934
'2225': A1699
'2226': D373
'2227': I409
'2228': E832
'2229': A188
'2230': M464
'2231': T436
'2232': S273
'2233': K227
'2234': V379
'2235': T094
'2236': J010
'2237': C051
'2238': C12
'2239': E230
'2240': K862
'2241': B279
'2242': R739
'2243': O009
'2244': D047
'2245': I4299
'2246': Q046
'2247': T313
'2248': M538
'2249': T314
'2250': T535
'2251': K509
'2252': D351
'2253': O429
'2254': M1997
'2255': G903
'2256': E888
'2257': K389
'2258': M899
'2259': C819
'2260': M009
'2261': D144
'2262': A419
'2263': L530
'2264': M301
'2265': C311
'2266': R02
'2267': K913
'2268': S390
'2269': L299
'2270': D369
'2271': A34
'2272': B001
'2273': K802
'2274': K316
'2275': V846
'2276': Q933
'2277': E119
'2278': P351
'2279': K701
'2280': R222
'2281': E042
'2282': I776
'2283': Q647
'2284': G910
'2285': T845
'2286': B608
'2287': V689
'2288': P005
'2289': G062
'2290': K222(nTR)
'2291': F402
'2292': B459
'2293': O660
'2294': P749
'2295': K051
'2296': K702
'2297': T920
'2298': Q319
'2299': Q663
'2300': J386
'2301': K358
'2302': B023
'2303': Q643
'2304': J950
'2305': D413
'2306': Q201
'2307': F429
'2308': P285
'2309': D267
'2310': Q320
'2311': A180
'2312': S420
'2313': H110
'2314': A549
'2315': J311
'2316': A099
'2317': K140
'2318': J988
'2319': B169
'2320': K912
'2321': F205
'2322': N648
'2323': K250
'2324': R090
'2325': K604
'2326': S810
'2327': Q061
'2328': Q040
'2329': C675
'2330': K763
'2331': T862
'2332': B020
'2333': Q899
'2334': I482
'2335': D233
'2336': C445
'2337': K266
'2338': D137
'2339': E789
'2340': H050
'2341': C411
'2342': S525
'2343': R440
'2344': E701
'2345': H603
'2346': I451
'2347': E320
'2348': F600
'2349': H471
'2350': M623
'2351': R629
'2352': X11
'2353': I5159
'2354': R601
'2355': T841
'2356': E310
'2357': Q399
'2358': I6409
'2359': S128
'2360': D580
'2361': Q451
'2362': C069
'2363': I709
'2364': D649
'2365': J310
'2366': A028
'2367': M861
'2368': A439
'2369': V425
'2370': D331
'2371': M023
'2372': R101
'2373': B09
'2374': K314
'2375': C091
'2376': C470
'2377': C459
'2378': Q858
'2379': E889
'2380': D684
'2381': B24
'2382': B902
'2383': S332
'2384': T329
'2385': A829
'2386': X09
'2387': C840
'2388': N258
'2389': Q419
'2390': N859
'2391': M063
'2392': N259
'2393': O440
'2394': I690
'2395': V877
'2396': T130
'2397': C786
'2398': S932
'2399': V649
'2400': M929
'2401': C639
'2402': D819
'2403': C383
'2404': Q758
'2405': K551
'2406': M300
'2407': G369
'2408': T328
'2409': H356(nTR)
'2410': S369(TR)
'2411': Q382
'2412': D763
'2413': I6119(nTR)
'2414': V355
'2415': V872
'2416': K271
'2417': J942(nTR)
'2418': I621(nTR)
'2419': C227
'2420': T922
'2421': J850
'2422': C161
'2423': T021
'2424': J459
'2425': C169
'2426': J129
'2427': C570
'2428': T390
'2429': S47
'2430': C494
'2431': K559
'2432': I339
'2433': I2199
'2434': P025
'2435': P968
'2436': T703
'2437': F845
'2438': L088
'2439': K561
'2440': J634
'2441': H921
'2442': Q605
'2443': Q412
'2444': H000
'2445': V139
'2446': Q740
'2447': L401
'2448': F199
'2449': L110
'2450': T874
'2451': I301
'2452': O960
'2453': A810
'2454': O753
'2455': D479
'2456': P832
'2457': S060(TR)
'2458': Q620
'2459': T401
'2460': P949
'2461': T820
'2462': Q892
'2463': E268
'2464': J381
'2465': T799
'2466': G319
'2467': N329
'2468': G723
'2469': S308
'2470': C600
'2471': E271
'2472': I679
'2473': H933
'2474': W17
'2475': D165
'2476': K638
'2477': V585
'2478': T315
'2479': J370
'2480': M213
'2481': K921
'2482': C924
'2483': S801
'2484': Q796
'2485': B009
'2486': V481
'2487': K851
'2488': D379
'2489': T510
'2490': C449
'2491': S351
'2492': C447
'2493': C699
'2494': H540
'2495': E065
'2496': A182
'2497': Q999
'2498': J069
'2499': G830(nTR)
'2500': S260
'2501': V456
'2502': I069(RH)
'2503': K130
'2504': V099
'2505': K263
'2506': M138
'2507': T938
'2508': R091
'2509': N342
'2510': C313
'2511': T042
'2512': M548
'2513': D591
'2514': Q349
'2515': B900
'2516': E713
'2517': P023
'2518': T929
'2519': C162
'2520': K625(nTR)
'2521': D126
'2522': M463
'2523': G442
'2524': G528
'2525': P521
'2526': S301
'2527': T220
'2528': C798
'2529': C690
'2530': C492
'2531': B028
'2532': C435
'2533': C927
'2534': K904
'2535': M1995
'2536': V736
'2537': Q439
'2538': Q252
'2539': C722
'2540': C630
'2541': H498
'2542': K529
'2543': D400
'2544': A870
'2545': N939(nTR)
'2546': K565
'2547': I711(nTR)
'2548': Y069
'2549': V859
'2550': S120
'2551': C323
'2552': S151
'2553': J039
'2554': V776
'2555': A062
'2556': O069
'2557': G519
'2558': R634
'2559': J051
'2560': J398
'2561': E142
'2562': T025
'2563': Q680
'2564': I495
'2565': A049
'2566': D412
'2567': I715(nTR)
'2568': I091
'2569': I490
'2570': B03
'2571': I289
'2572': C841
'2573': I678
'2574': A799
'2575': R999
'2576': E278
'2577': M242
'2578': I6359
'2579': H440
'2580': Q390
'2581': L570
'2582': N818
'2583': C950
'2584': B269
'2585': F059
'2586': O759
'2587': S059
'2588': S015
'2589': I494
'2590': Q606
'2591': T212
'2592': R21
'2593': C250
'2594': C490
'2595': L511
'2596': P219
'2597': D66
'2598': H700
'2599': G950
'2600': C574
'2601': I601(nTR)
'2602': G560
'2603': T821
'2604': B000
'2605': T289
'2606': C188
'2607': P059
'2608': B451
'2609': R730
'2610': K029
'2611': Y34
'2612': D022
'2613': K621
'2614': C257
'2615': I212
'2616': I672
'2617': M332
'2618': Q264
'2619': N360
'2620': G441
'2621': T569
'2622': C942
'2623': P350
'2624': C930
'2625': S430
'2626': S018
'2627': V090
'2628': I723(nTR)
'2629': C384
'2630': C504
'2631': T659
'2632': Q754
'2633': D269
'2634': M629
'2635': Q189
'2636': S899
'2637': T096
'2638': V871
'2639': G09
'2640': S321
'2641': B029
'2642': F209
'2643': K294
'2644': G003
'2645': S500
'2646': M462
'2647': K732
'2648': F329
'2649': N421(nTR)
'2650': F729
'2651': I371
'2652': Q208
'2653': S831
'2654': T447
'2655': T065
'2656': S229
'2657': P969
'2658': T10
'2659': K822(nTR)
'2660': H605
'2661': V485
'2662': T638
'2663': J958
'2664': M532
'2665': J698
'2666': J869
'2667': C119
'2668': K811
'2669': C696
'2670': I4229
'2671': X38
'2672': M329
'2673': T918
'2674': G961
'2675': C966
'2676': J691
'2677': G372
'2678': S290
'2679': V295
'2680': J853
'2681': F068
'2682': Q206
'2683': E835
'2684': S499
'2685': V255
'2686': N340
'2687': I801
'2688': T560
'2689': C150
'2690': T310
'2691': J312
'2692': T928
'2693': J385
'2694': D434
'2695': X37
'2696': S929
'2697': J168
'2698': S669
'2699': E149
'2700': C859
'2701': S399
'2702': C229
'2703': V794
'2704': I378
'2705': T144
'2706': I461
'2707': E762
'2708': C080
'2709': A972
'2710': M303
'2711': P071
'2712': Q039
'2713': M478
'2714': C969
'2715': S18
'2716': Q851
'2717': Q273
'2718': T882
'2719': I501
'2720': X01
'2721': B178
'2722': N821
'2723': Q809
'2724': T528
'2725': K122
'2726': N178
'2727': I092
'2728': K769
'2729': P809
'2730': P290
'2731': V535
'2732': R579
'2733': Q891
'2734': E259
'2735': A979
'2736': S925
'2737': I513
'2738': X14
'2739': Q393
'2740': I6319
'2741': C693
'2742': D477
'2743': F431
'2744': T178
'2745': T816
'2746': I258
'2747': C501
'2748': D414
'2749': A630
'2750': W05
'2751': I012
'2752': E243
'2753': T921
'2754': G912
'2755': T598
'2756': C715
'2757': H729(nTR)
'2758': P614
'2759': G259
'2760': T223
'2761': T173
'2762': P284
'2763': N312
'2764': E232
'2765': I4259
'2766': C186
'2767': J029
'2768': Q738
'2769': A830
'2770': A520
'2771': T311
'2772': W18
'2773': N138
'2774': T271
'2775': Y850
'2776': D27
'2777': V970
'2778': C099
'2779': M895
'2780': D151
'2781': I509
'2782': I749(TH)
'2783': N158
'2784': T301
'2785': C170
'2786': Q043
'2787': G800
'2788': Q610
'2789': G419
'2790': K563
'2791': A850
'2792': L300
'2793': S823
'2794': K290
'2795': A229
'2796': C783
'2797': D500
'2798': M792
'2799': C724
'2800': C579
'2801': T414
'2802': I071
'2803': D069
'2804': K20
'2805': Q441
'2806': C410
'2807': H838(nTR)
'2808': D132
'2809': F161
'2810': S670
'2811': H341
'2812': T408
'2813': M818
'2814': B354
'2815': I241
'2816': N433
'2817': R064
'2818': T670
'2819': Q253
'2820': E870
'2821': G629
'2822': N852
'2823': K068
'2824': J61
'2825': C784
'2826': T679
'2827': M939
'2828': M436
'2829': F070
'2830': T825
'2831': C182
'2832': B07
'2833': R578
'2834': P599
'2835': I308
'2836': C184
'2837': N63
'2838': E233
'2839': I6090
'2840': P279
'2841': R418
'2842': N139
'2843': D469
'2844': M819
'2845': V135
'2846': D599
'2847': C446
'2848': S252
'2849': Q799
'2850': F444
'2851': S579
'2852': K431
'2853': K381
'2854': U071
'2855': S529
'2856': N170
'2857': E619
'2858': T784
'2859': B019
'2860': C55
'2861': I099
'2862': O713
'2863': H912
'2864': E242
'2865': Q612
'2866': T355
'2867': V224
'2868': C002
'2869': C495
'2870': D320
'2871': L270
'2872': T134
'2873': A521
'2874': A059
'2875': S822
'2876': K047
'2877': C741
'2878': C111
'2879': R104
'2880': K149
'2881': T412
'2882': K659
'2883': L271
'2884': V185
'2885': B374
'2886': B172
'2887': I201
'2888': K768(nTR)
'2889': K264
'2890': N324(nTR)
'2891': Q820
'2892': E063
'2893': F500
'2894': C829
'2895': K277
'2896': L984
'2897': T404
'2898': A318
'2899': W19
'2900': T849
'2901': K558
'2902': L00
'2903': R298
'2904': H709
'2905': C505
'2906': S361(TR)
'2907': J108
'2908': W56
'2909': K562
'2910': I803
'2911': K440(nTR)
'2912': N808
'2913': S320
'2914': V335
'2915': K229
'2916': G92
'2917': K413
'2918': N829
'2919': S298
'2920': W08
'2921': X51
'2922': K700
'2923': P289
'2924': E040
'2925': T609
'2926': J930
'2927': W29
'2928': L219
'2929': F488
'2930': G244
'2931': D643
'2932': E749
'2933': G060
'2934': F799
'2935': K221
'2936': Q203
'2937': C155
'2938': V155
'2939': S372(TR)
'2940': T465
'2941': T543
'2942': I079
'2943': C510
'2944': D733
'2945': P293
'2946': P524
'2947': K599
'2948': E742
'2949': K283
'2950': K270
'2951': M250
'2952': E229
'2953': T303
'2954': C269
'2955': T319
'2956': P612
'2957': B377
'2958': S310
'2959': A35
'2960': C020
'2961': Q339
'2962': K112
'2963': S344
'2964': F959
'2965': X19
'2966': I6349
'2967': V899
'2968': W260
'2969': B189
'2970': S318
'2971': A022
'2972': H109
'2973': D001
'2974': F39
'2975': S870
'2976': V784
'2977': R609
'2978': I6159(nTR)
'2979': X91
'2980': A493
'2981': T508
'2982': K760
'2983': P281
'2984': Q398
'2985': S852
'2986': A030
'2987': M621(nTR)
'2988': K519
'2989': T141
'2990': O343
'2991': Q2820
'2992': V826
'2993': T092
'2994': L538
'2995': I340
'2996': L893
'2997': T322
'2998': T857
'2999': D333
'3000': T981
'3001': A010
'3002': R508
'3003': F449
'3004': B370
'3005': L022
'3006': K649
'3007': T286
'3008': Y09
'3009': C717
'3010': Y04
'3011': T203
'3012': V813
'3013': A184
'3014': D169
'3015': I456
'3016': I6000
'3017': E754
'3018': M729
'3019': G000
'3020': F430
'3021': R520
'3022': V829
'3023': I6399
'3024': R81
'3025': E201
'3026': S999
'3027': C211
'3028': R568
'3029': T504
'3030': C301
'3031': I351(nRH)
'3032': C601
'3033': K591
'3034': C775
'3035': D303
'3036': V745
'3037': L940
'3038': T277
'3039': A060
'3040': D689
'3041': T230
'3042': K914
'3043': T233
'3044': M432
'3045': Q330
'3046': H830
'3047': G042
'3048': I447
'3049': B582
'3050': D175
'3051': E031
'3052': T049
'3053': I279
'3054': A168
'3055': G030
'3056': Q929
'3057': I828
'3058': P291
'3059': S824
'3060': D409
'3061': T522
'3062': E880
'3063': I802
'3064': J380
'3065': G952
'3066': J441
'3067': N47
'3068': V950
'3069': R820
'3070': I309
'3071': M511
'3072': D110
'3073': T403
'3074': D480
'3075': F101
'3076': I789
'3077': Q369
'3078': E250
'3079': J118
'3080': V849
'3081': I739
'3082': B022
'3083': E079
'3084': C434
'3085': Q998
'3086': A429
'3087': K624
'3088': C001
'3089': D589
'3090': C004
'3091': E168
'3092': R220
'3093': W83
'3094': E711
'3095': T699
'3096': S278
'3097': P370
'3098': O16
'3099': J189
'3100': J40
'3101': D180
'3102': K135
'3103': L512
'3104': H539
'3105': S828
'3106': G549
'3107': E041
'3108': T024
'3109': S131
'3110': C711
'3111': C059
'3112': H470
'3113': F840
'3114': F107
'3115': V837
'3116': G700
'3117': R591
'3118': Q798
'3119': M434
'3120': B465
'3121': C479
'3122': T306
'3123': E850
'3124': L982
'3125': S202
'3126': E272
'3127': N739
'3128': F111
'3129': D419
'3130': E145
'3131': B659
'3132': H46
'3133': I256
'3134': F328
'3135': T402
'3136': C714
'3137': C66
'3138': Q849
'3139': S907
'3140': J182
'3141': T58
'3142': O152
'3143': O882
'3144': S011
'3145': T702
'3146': I491
'3147': R571
'3148': N304
'3149': M751
'3150': I313
'3151': T325
'3152': S366(TR)
'3153': E806
'3154': T55
'3155': H269
'3156': S730
'3157': V430
'3158': M791
'3159': T179
'3160': B54
'3161': Q263
'3162': V092
'3163': F412
'3164': K571
'3165': V093
'3166': K660
'3167': E877
'3168': V489
'3169': C900
'3170': R959
'3171': K852
'3172': S382
'3173': G909
'3174': G35
'3175': G951
'3176': S212
'3177': I061(RH)
'3178': K745
'3179': I81
'3180': R570
'3181': C901
'3182': Q859
'3183': B378
'3184': J154
'3185': Q268
'3186': N490
'3187': C151
'3188': M169
'3189': J986
'3190': G968
'3191': K275
'3192': S333
'3193': Q221
'3194': I4220
'3195': I821
'3196': C710
'3197': D291
'3198': M069
'3199': S302
'3200': Q204
'3201': I6360
'3202': P592
'3203': Q742
'3204': D229
'3205': V855
'3206': L850
'3207': C609
'3208': D129
'3209': M869
'3210': T318
'3211': I859
'3212': I129
'3213': V545
'3214': A080
'3215': E269
'3216': K7200
'3217': R5800
'3218': I779
'3219': K259
'3220': D049
'3221': S373(TR)
'3222': S970
'3223': J155
'3224': L089
'3225': S198
'3226': I509(A)
'3227': K838
'3228': I269
'3229': J64
'3230': P040
'3231': V104
'3232': W66
'3233': W01
'3234': A048
'3235': A64
'3236': G002
'3237': G543
'3238': T939
'3239': I369(nRH)
'3240': W35
'3241': P360
'3242': T200
'3243': P522
'3244': V675
'3245': Q442
'3246': K285
'3247': R400
'3248': E141
'3249': J931
'3250': I442
'3251': I890
'3252': I749
'3253': D371
'3254': S523
'3255': E052
'3256': M219
'3257': V679
'3258': V811
'3259': J14
'3260': D440
'3261': B159
'3262': B349
'3263': I119
'3264': G934
'3265': R170
'3266': H100
'3267': T280
'3268': Q890
'3269': T369
'3270': D259
'3271': D483
'3272': O269
'3273': S370(TR)
'3274': E348
'3275': Q383
'3276': A309
'3277': D015
'3278': I070(RH)
'3279': J439(nTR)
'3280': K352
'3281': S400
'3282': T793
'3283': I6080
'3284': V041
'3285': N509
'3286': C471
'3287': I511
'3288': T056
'3289': D812
'3290': H664
'3291': I493
'3292': C491
'3293': I99
'3294': I441
'3295': C782
'3296': I059(RH)
'3297': N009
'3298': A759
'3299': C692
'3300': H602
'3301': C469
'3302': M619
'3303': G111
'3304': C919
'3305': C179
'3306': C439
'3307': G589
'3308': W40
'3309': J960
'3310': V479
'3311': T494
'3312': I00
'3313': M469
'3314': K311
'3315': L239
'3316': C939
'3317': T140
'3318': O149
'3319': D34
'3320': S058
'3321': T68
'3322': V194
'3323': Q2829
'3324': T461
'3325': R040
'3326': F172
'3327': Q678
'3328': T622
'3329': O209
'3330': K297
'3331': J700
'3332': D482
'3333': G049
'3334': V821
'3335': B460
'3336': A167
'3337': E559
'3338': L910
'3339': W268
'3340': I881
'3341': J00
'3342': V594
'3343': P008
'3344': G121
'3345': S004(TR)
'3346': S418
'3347': L500
'3348': J42
'3349': B488
'3350': I714
'3351': Q161
'3352': K429
'3353': O239
'3354': M542
'3355': R251
'3356': O864
'3357': N289
'3358': G10
'3359': C052
'3360': I370
'3361': K296
'3362': O368
'3363': G379
'3364': S078
'3365': A412
'3366': K623
'3367': C400
'3368': V254
'3369': E838
'3370': R960
'3371': Q750
'3372': D848
'3373': T112
'3374': Q321
'3375': C911
'3376': C764
'3377': Q688
'3378': S068(TR)
'3379': C835
'3380': R590
'3381': D65
'3382': M545
'3383': D761
'3384': J961
'3385': F09
'3386': K267
'3387': P282
'3388': C771
'3389': Q359
'3390': K403
'3391': H353
'3392': S729
'3393': K310
'3394': V841
'3395': F848
'3396': M519
'3397': P270
'3398': Q172
'3399': O244
'3400': G114
'3401': I6189(nTR)
'3402': T427
'3403': W39
'3404': J120
'3405': V214
'3406': R31
'3407': R55
'3408': H600
'3409': B182
'3410': H609
'3411': T650
'3412': P021
'3413': D821
'3414': A86
'3415': D059
'3416': D70
'3417': I4209
'3418': J451
'3419': K759
'3420': G579
'3421': D688
'3422': K220
'3423': V486
'3424': S062(TR)
'3425': D720
'3426': Q803
'3427': B059
'3428': E752
'3429': D510
'3430': D375
'3431': T563
'3432': C467
'3433': C443
'3434': A039
'3435': J82
'3436': T910
'3437': C113
'3438': C164
'3439': N814
'3440': C006
'3441': K298
'3442': Q249
'3443': R230
'3444': R270
'3445': G061
'3446': N281
'3447': C772
'3448': K295
'3449': T324
'3450': T915
'3451': D430
'3452': V686
'3453': D407
'3454': C915
'3455': G311
'3456': D374
'3457': A699
'3458': G458
'3459': S127
'3460': V584
'3461': W54
'3462': K668
'3463': D330
'3464': K461
'3465': I090
'3466': G931
'3467': T240
'3468': T283
'3469': K254
'3470': D682
'3471': V98
'3472': Q699
'3473': D020
'3474': K219
'3475': V823
'3476': K053
'3477': D353
'3478': K819
'3479': Q432
'3480': D152
'3481': G009
'3482': N429
'3483': M840
'3484': T810
'3485': D619
'3486': D804
'3487': N320
'3488': T090
'3489': E215
'3490': D128
'3491': J304
'3492': T931
'3493': N058
'3494': D472
'3495': L88
'3496': J060
'3497': M488
'3498': W02
'3499': C000
'3500': N889
'3501': G406
'3502': D487
'3503': G119
'3504': C549
'3505': E0399
'3506': Q772
'3507': S001
'3508': I319
'3509': Q079
'3510': V149
'3511': O418
'3512': M790
'3513': T876
'3514': S631
'3515': C788
'3516': V840
'3517': I708
'3518': K634
'3519': D411
'3520': E710
'3521': A1690
'3522': T590
'3523': N210
'3524': T304
'3525': G450
'3526': J36
'3527': K052
'3528': M719
'3529': V576
'3530': B832
'3531': O994
'3532': D560
'3533': B341
'3534': P399
'3535': D399
'3536': E831
'3537': G312
'3538': K137
'3539': V299
'3540': S626
'3541': S029
'3542': V779
'3543': Q979
'3544': B661
'3545': H578(nTR)
'3546': N709
'3547': D370
'3548': E723
'3549': M868
'3550': B220
'3551': Q188
'3552': T521
'3553': C500
'3554': E881
'3555': S371(TR)
'3556': R234
'3557': A38
'3558': K622
'3559': K929
'3560': S903
'3561': M352
'3562': S066(TR)
'3563': I358
'3564': D334
'3565': C475
'3566': C07
'3567': V786
splits:
- name: train
num_bytes: 113287237
num_examples: 1477240
download_size: 28018862
dataset_size: 113287237
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Chinese International Statistical Classification of Diseases | [
0.5462018847465515,
0.06692840158939362,
0.016494937241077423,
0.9068963527679443,
-0.3460519313812256,
-0.24827896058559418,
0.2844724655151367,
-0.28805431723594666,
0.26981407403945923,
0.4322073757648468,
0.019461117684841156,
-0.8071548938751221,
-0.38881102204322815,
-0.0471387691795... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Royal-lobster/Enhanced-Slither-Audited-Solidity-QA | Royal-lobster | 2023-11-14T09:36:35Z | 21 | 0 | null | [
"region:us"
] | 2023-11-14T09:36:35Z | 2023-11-14T09:36:20.000Z | 2023-11-14T09:36:20 | ---
dataset_info:
features:
- name: results
dtype: string
- name: source_code
dtype: string
- name: question
dtype: string
- name: answer
dtype: string
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 275448756
num_examples: 9477
download_size: 81424292
dataset_size: 275448756
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "Enhanced-Slither-Audited-Solidity-QA"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.42022788524627686,
-0.33392050862312317,
0.38023626804351807,
-0.12121830135583878,
-0.5623644590377808,
0.3164162039756775,
0.5748488306999207,
-0.24724993109703064,
1.0150623321533203,
0.7853459715843201,
-0.5810708999633789,
-0.4877794086933136,
-0.2291872799396515,
-0.28382119536399... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Subcold/common_voice_small.en | Subcold | 2023-11-14T12:43:44Z | 21 | 0 | null | [
"region:us"
] | 2023-11-14T12:43:44Z | 2023-11-14T10:50:02.000Z | 2023-11-14T10:50:02 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: input_features
sequence:
sequence: float32
- name: labels
sequence: int64
splits:
- name: train
num_bytes: 18247456336
num_examples: 18999
- name: test
num_bytes: 3796591936
num_examples: 3953
download_size: 3870258484
dataset_size: 22044048272
---
# Dataset Card for "common_voice_small.en"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.7570565938949585,
-0.21919327974319458,
0.18814915418624878,
0.22714000940322876,
-0.2661123275756836,
-0.2015388458967209,
-0.25473350286483765,
-0.19715893268585205,
0.9936438798904419,
0.4573577642440796,
-0.9053822755813599,
-0.749198317527771,
-0.4703584313392639,
-0.14047649502754... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
jlbaker361/addition_whole | jlbaker361 | 2023-11-15T13:00:03Z | 21 | 0 | null | [
"region:us"
] | 2023-11-15T13:00:03Z | 2023-11-14T23:39:10.000Z | 2023-11-14T23:39:10 | ---
dataset_info:
features:
- name: input
dtype: string
- name: output
dtype: float64
- name: text
dtype: string
splits:
- name: train
num_bytes: 1210405.5
num_examples: 29376
- name: test
num_bytes: 134489.5
num_examples: 3264
download_size: 730504
dataset_size: 1344895.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
# Dataset Card for "addition_whole"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.6283860802650452,
-0.36209484934806824,
0.27053409814834595,
0.43537089228630066,
-0.39667367935180664,
-0.20659609138965607,
0.24168400466442108,
-0.24319447576999664,
1.1123929023742676,
0.6366287469863892,
-0.7918803095817566,
-0.6536015272140503,
-0.6168405413627625,
-0.368312269449... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
jlbaker361/addition_decimal | jlbaker361 | 2023-11-15T13:00:20Z | 21 | 0 | null | [
"region:us"
] | 2023-11-15T13:00:20Z | 2023-11-14T23:39:16.000Z | 2023-11-14T23:39:16 | ---
dataset_info:
features:
- name: input
dtype: string
- name: output
dtype: float64
- name: text
dtype: string
splits:
- name: train
num_bytes: 2145709.8
num_examples: 29376
- name: test
num_bytes: 238412.2
num_examples: 3264
download_size: 884683
dataset_size: 2384122.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
# Dataset Card for "addition_decimal"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.6959035396575928,
-0.27149972319602966,
0.10352663695812225,
0.4823888838291168,
-0.18485212326049805,
-0.24996797740459442,
0.07036090642213821,
-0.1856643408536911,
0.8363009691238403,
0.41639330983161926,
-0.604400634765625,
-0.694763720035553,
-0.6129626035690308,
-0.273878365755081... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
maf917/sqltestdata | maf917 | 2023-11-15T02:26:12Z | 21 | 0 | null | [
"region:us"
] | 2023-11-15T02:26:12Z | 2023-11-15T02:24:30.000Z | 2023-11-15T02:24:30 | Write 20 SQL programming questions of easy and medium complexity. Provide questions on a
diverse range of subjects, and make sure no two questions are alike. Make sure the
SQL syntax consists of select statements and follows ANSI rules.
No commercial use | [
-0.3400801122188568,
-0.7974334955215454,
0.5027034878730774,
0.7927043437957764,
-0.02088814787566662,
-0.13264885544776917,
-0.020007336512207985,
-0.21262572705745697,
-0.05637352913618088,
1.031400203704834,
-0.9246614575386047,
-0.25439730286598206,
-0.10471276938915253,
0.36307895183... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
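As an illustration of the kind of question/answer pair the prompt above asks for (this example is hypothetical, not drawn from the dataset), here is a minimal sketch using Python's built-in `sqlite3` with a plain ANSI-style `SELECT`:

```python
import sqlite3

# Hypothetical sample question: "Return each department's average salary,
# highest first." The table name and data below are invented for illustration.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE employees (name TEXT, dept TEXT, salary REAL);
    INSERT INTO employees VALUES
        ('Ada', 'Eng', 120.0),
        ('Ben', 'Eng', 100.0),
        ('Cam', 'Ops', 90.0);
""")

# One possible answer: an ANSI-style SELECT with GROUP BY / ORDER BY,
# avoiding vendor-specific extensions.
rows = conn.execute("""
    SELECT dept, AVG(salary) AS avg_salary
    FROM employees
    GROUP BY dept
    ORDER BY avg_salary DESC
""").fetchall()

print(rows)  # → [('Eng', 110.0), ('Ops', 90.0)]
```

The same shape (easy/medium difficulty, `SELECT`-only, ANSI syntax) is what the prompt above requests twenty variations of.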
surabhiMV/qrcode_new_t | surabhiMV | 2023-11-15T09:18:12Z | 21 | 0 | null | [
"region:us"
] | 2023-11-15T09:18:12Z | 2023-11-15T08:37:00.000Z | 2023-11-15T08:37:00 | ---
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype: image
splits:
- name: train
num_bytes: 27849041.0
num_examples: 352
download_size: 0
dataset_size: 27849041.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "qrcode_new_t"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.47327661514282227,
-0.1491951048374176,
0.22415554523468018,
0.1165425181388855,
-0.5265224575996399,
0.31240206956863403,
0.3087393045425415,
-0.005761673208326101,
0.9088855981826782,
0.44011569023132324,
-0.7182378172874451,
-1.142026662826538,
-0.5739120841026306,
-0.294203519821167... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
surabhiMV/qrcode_val_new_tt | surabhiMV | 2023-11-15T09:18:20Z | 21 | 0 | null | [
"region:us"
] | 2023-11-15T09:18:20Z | 2023-11-15T08:37:18.000Z | 2023-11-15T08:37:18 | ---
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype: image
splits:
- name: train
num_bytes: 3442951.0
num_examples: 41
download_size: 0
dataset_size: 3442951.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "qrcode_val_new_tt"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.5438205599784851,
-0.20296135544776917,
0.17501139640808105,
0.08683181554079056,
-0.5140659213066101,
0.20997841656208038,
0.5064737796783447,
-0.03224683925509453,
0.888509213924408,
0.5102009773254395,
-0.725538432598114,
-0.9899101257324219,
-0.6093103885650635,
-0.24922700226306915... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
mrisdi/mask | mrisdi | 2023-11-16T07:59:37Z | 21 | 0 | null | [
"license:cc-by-4.0",
"region:us"
] | 2023-11-16T07:59:37Z | 2023-11-16T07:30:43.000Z | 2023-11-16T07:30:43 | ---
license: cc-by-4.0
---
| [
-0.12853392958641052,
-0.18616779148578644,
0.6529127955436707,
0.49436280131340027,
-0.19319361448287964,
0.23607419431209564,
0.36072003841400146,
0.050563063472509384,
0.579365611076355,
0.7400140762329102,
-0.6508104205131531,
-0.23783954977989197,
-0.7102249264717102,
-0.0478260256350... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Jcuhfehl/OpenHermes-ChatML | Jcuhfehl | 2023-11-16T22:33:52Z | 21 | 1 | null | [
"region:us"
] | 2023-11-16T22:33:52Z | 2023-11-16T22:17:40.000Z | 2023-11-16T22:17:40 | ---
dataset_info:
features:
- name: data
dtype: string
splits:
- name: train
num_bytes: 314789363
num_examples: 242831
download_size: 136731208
dataset_size: 314789363
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
ChatML-converted version of the OpenHermes dataset. Useful for direct fine-tuning. | [
-0.4197345972061157,
-0.5473476648330688,
-0.2609793245792389,
0.18368805944919586,
-0.18113110959529877,
-0.4932490289211273,
-0.6506285667419434,
-0.21802332997322083,
-0.5224784016609192,
0.8787569403648376,
-0.7093074321746826,
-0.8458012938499451,
0.07699146121740341,
-0.0384285524487... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
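The ChatML conversion described in the card above can be sketched as follows. This is a minimal illustration of the ChatML format, not the converter actually used for the dataset, and the `{"role", "content"}` message shape is an assumption:

```python
# Minimal sketch of rendering a list of chat messages into ChatML.
# Each turn is wrapped in <|im_start|>role ... <|im_end|> markers.
def to_chatml(messages):
    parts = []
    for msg in messages:
        parts.append(f"<|im_start|>{msg['role']}\n{msg['content']}<|im_end|>")
    return "\n".join(parts)

# Hypothetical conversation, shaped like a typical instruction-tuning record.
conversation = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Hello!"},
    {"role": "assistant", "content": "Hi there."},
]

print(to_chatml(conversation))
```

Each record in the dataset's single `data` column would then hold one such rendered string.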
suncy13/longthaisum_8k | suncy13 | 2023-11-17T03:31:07Z | 21 | 0 | null | [
"region:us"
] | 2023-11-17T03:31:07Z | 2023-11-17T03:30:06.000Z | 2023-11-17T03:30:06 | Entry not found | [
-0.32276487350463867,
-0.22568444907665253,
0.8622263073921204,
0.43461570143699646,
-0.5282988548278809,
0.7012969255447388,
0.7915717363357544,
0.07618642598390579,
0.7746027112007141,
0.25632190704345703,
-0.7852815389633179,
-0.22573848068714142,
-0.910447895526886,
0.5715675354003906,... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
keylazy/ark-raw | keylazy | 2023-11-19T04:12:04Z | 21 | 0 | null | [
"region:us"
] | 2023-11-19T04:12:04Z | 2023-11-19T01:09:28.000Z | 2023-11-19T01:09:28 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: text1
dtype: string
- name: text2
dtype: string
splits:
- name: train
num_bytes: 274489671
num_examples: 1000000
- name: test
num_bytes: 27481428
num_examples: 100000
download_size: 189424610
dataset_size: 301971099
---
# Dataset Card for "ark-raw"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.6476241946220398,
-0.297654926776886,
0.11160268634557724,
0.032134540379047394,
-0.4180198013782501,
0.12880264222621918,
0.20942562818527222,
-0.48744621872901917,
0.709886908531189,
0.47843170166015625,
-0.7904418110847473,
-0.8951270580291748,
-0.41208598017692566,
0.047388728708028... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
EP45/test | EP45 | 2023-11-27T05:37:17Z | 21 | 0 | null | [
"region:us"
] | 2023-11-27T05:37:17Z | 2023-11-20T05:28:11.000Z | 2023-11-20T05:28:11 | ---
configs:
- config_name: 1k
data_files:
- split: train
path: "data/1k.parquet"
- split: test
path: "data/test.parquet"
- config_name: 5k
data_files:
- split: train
path: "data/5k.parquet"
- split: test
path: "data/test.parquet"
- config_name: 10k
data_files:
- split: train
path: "data/10k.parquet"
- split: test
path: "data/test.parquet"
- config_name: 15k
data_files:
- split: train
path: "data/15k.parquet"
- split: test
path: "data/test.parquet"
default: true
- config_name: 20k
data_files:
- split: train
path: "data/20k.parquet"
- split: test
path: "data/test.parquet"
- config_name: 30k
data_files:
- split: train
path: "data/30k.parquet"
- split: test
path: "data/test.parquet"
- config_name: 50k
data_files:
- split: train
path: "data/50k.parquet"
- split: test
path: "data/test.parquet"
---
# Dataset for vicuna experiments
Converted from the following dataset:
https://huggingface.co/datasets/junelee/sharegpt_deepl_ko
## File structure
- converted.parquet: the original dataset's ko_alpaca_style_dataset.json, reformatted for training
## License
Because the original data comes from OpenAI, it is subject to the corresponding [terms of use](https://openai.com/policies/terms-of-use).
All other parts follow this license: Attribution 2.0 Korea (CC BY 2.0 KR) | [
| [
-0.5931993722915649,
-0.8777629137039185,
0.3355540931224823,
0.6611758470535278,
-1.0450772047042847,
-0.4155179262161255,
0.026989666745066643,
-0.01823596842586994,
0.6605675220489502,
0.7207453846931458,
-0.5967921018600464,
-1.1688706874847412,
-0.64545738697052,
0.10915059596300125,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
hyperinfer/old_texts | hyperinfer | 2023-11-20T16:03:31Z | 21 | 0 | null | [
"license:mit",
"region:us"
] | 2023-11-20T16:03:31Z | 2023-11-20T14:58:10.000Z | 2023-11-20T14:58:10 | ---
license: mit
---
This dataset is generated from the latest Open Legal Data dump, dated 19-Oct-2022 12:11.
See source: https://static.openlegaldata.io/dumps/de/2022-10-18/
The dataset was encoded in UTF-8 and cleaned of HTML tags and escape sequences.
The train/test split is 80/20. | [
-0.30216413736343384,
-0.5099013447761536,
-0.01817041076719761,
-0.12767700850963593,
-0.47625723481178284,
-0.4149077832698822,
-0.23575352132320404,
-0.5013805031776428,
0.006792398169636726,
0.8810299038887024,
-0.48154693841934204,
-0.6027430891990662,
-0.21151261031627655,
0.16026304... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
tyzhu/squad_qa_num_v5_full | tyzhu | 2023-11-21T09:51:05Z | 21 | 0 | null | [
"region:us"
] | 2023-11-21T09:51:05Z | 2023-11-21T08:55:22.000Z | 2023-11-21T08:55:22 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
dataset_info:
features:
- name: id
dtype: string
- name: title
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: text
dtype: string
- name: answer_start
dtype: int32
- name: answer
dtype: string
- name: context_id
dtype: string
- name: inputs
dtype: string
- name: targets
dtype: string
splits:
- name: train
num_bytes: 7245335
num_examples: 5070
- name: validation
num_bytes: 343184
num_examples: 300
download_size: 0
dataset_size: 7588519
---
# Dataset Card for "squad_qa_num_v5_full"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.5480422973632812,
-0.013804401271045208,
0.32792264223098755,
0.47509002685546875,
-0.33401840925216675,
0.2334020882844925,
0.5014901757240295,
-0.05532803758978844,
0.8024572730064392,
0.4534328877925873,
-1.1122581958770752,
-0.9252749681472778,
-0.332602858543396,
0.0901881754398346... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
tyzhu/squad_qa_wrong_title_v5_full | tyzhu | 2023-11-21T09:53:00Z | 21 | 0 | null | [
"region:us"
] | 2023-11-21T09:53:00Z | 2023-11-21T09:21:02.000Z | 2023-11-21T09:21:02 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
dataset_info:
features:
- name: id
dtype: string
- name: title
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: text
dtype: string
- name: answer_start
dtype: int32
- name: answer
dtype: string
- name: context_id
dtype: string
- name: correct_id
dtype: string
- name: inputs
dtype: string
- name: targets
dtype: string
splits:
- name: train
num_bytes: 7596960
num_examples: 5070
- name: validation
num_bytes: 361864
num_examples: 300
download_size: 1530108
dataset_size: 7958824
---
# Dataset Card for "squad_qa_wrong_title_v5_full"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.2313343584537506,
-0.07906261086463928,
0.31671038269996643,
0.48092979192733765,
-0.286103218793869,
0.3063718378543854,
0.5095766186714172,
-0.13219214975833893,
0.6741253137588501,
0.36354291439056396,
-1.167021632194519,
-0.7680577039718628,
-0.5951005220413208,
0.15646158158779144,... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
A-Roucher/Open_Assistant_Chains_German_Translation | A-Roucher | 2023-11-23T08:30:05Z | 21 | 1 | null | [
"task_categories:conversational",
"task_categories:text-generation",
"size_categories:10K<n<100K",
"language:en",
"language:de",
"license:apache-2.0",
"human-feedback",
"region:us"
] | 2023-11-23T08:30:05Z | 2023-11-21T15:33:31.000Z | 2023-11-21T15:33:31 | ---
language:
- en
- de
license: apache-2.0
size_categories:
- 10K<n<100K
task_categories:
- conversational
- text-generation
pretty_name: OpenAssistant Conversation Chains - With German Translation
tags:
- human-feedback
configs:
- config_name: default
data_files:
- split: train_english
path: data/train_english-*
- split: train_german
path: data/train_german-*
dataset_info:
features:
- name: conversation_id
dtype: string
- name: user_id
dtype: string
- name: created_date
dtype: string
- name: messages
list:
- name: content
dtype: string
- name: role
dtype: string
- name: lang_original
dtype: string
- name: could_be_code
dtype: bool
splits:
- name: train_english
num_bytes: 29675151
num_examples: 18192
- name: train_german
num_bytes: 28931906
num_examples: 18192
download_size: 21854409
dataset_size: 58607057
---
# Dataset Card for Dataset Name
## Dataset description
<!-- Provide a quick summary of the dataset. -->
This dataset is derived from [OpenAssistant Conversation Chains](https://huggingface.co/datasets/A-Roucher/Open_Assistant_Conversation_Chains), which is a reformatting of [OpenAssistant Conversations (OASST1)](https://huggingface.co/datasets/OpenAssistant/oasst1), which is itself
> a human-generated, human-annotated assistant-style conversation corpus consisting of 161,443 messages in 35 different languages, annotated with 461,292 quality ratings, resulting in over 10,000 fully annotated conversation trees. The corpus is a product of a worldwide crowd-sourcing effort involving over 13,500 volunteers.
It was modified from the original by adding a German translation of each conversation, generated with [Helsinki-NLP/opus-mt-en-de](https://huggingface.co/Helsinki-NLP/opus-mt-en-de).
#### Who are the editors?
Aymeric Roucher, Hugging Face.
### Example
````
{
'conversation_id': '6708c47f-05c9-4346-b3d2-40b2bd24fde4',
'user_id': '2c96e467-66f0-4be7-9693-bda51356a424',
'created_date': '2023-02-06T18:48:49.391686+00:00',
'messages_original': [
{'content': 'Can you write a short introduction about the relevance of the term "monopsony" in economics? Please use examples related to potential monopsonies in the labour market and cite relevant research.',
'role': 'user'},
{'content': '"Monopsony" refers to a market structure where there is only one buyer for a particular good or service. In economics, this term is particularly relevant in the labor market, where a monopsony employer has significant power over the wages and working conditions of their employees. The presence of a monopsony can result in lower wages and reduced employment opportunities for workers, as the employer has little incentive to increase wages or provide better working conditions.\n\nRecent research has identified potential monopsonies in industries such as retail and fast food, where a few large companies control a significant portion of the market (Bivens & Mishel, 2013). In these industries, workers often face low wages, limited benefits, and reduced bargaining power, leading to a situation where they are dependent on the employer for their livelihood. This dependence can result in further suppression of wages and a decline in working conditions.\n\nOverall, the concept of monopsony is essential to understanding the dynamics of labor markets and the impact of market power on workers. Further research is needed to understand the extent and impact of monopsonies on the economy and to develop policies to address this issue.\n\nReferences:\nBivens, J., & Mishel, L. (2013). The Pay of Corporate Executives and Financial Professionals as Evidence of Rents in Top 1 Percent Incomes. Journal of Economic Perspectives, 27(3), 57-78.',
'role': 'assistant'},
{'content': 'Now explain it to a dog', 'role': 'user'}
],
'lang_original': 'en',
'messages_german': [
{ "content": "Können Sie eine kurze Einführung über die Relevanz des Begriffs \"Monopsonie\" in der Wirtschaft schreiben? Bitte verwenden Sie Beispiele für mögliche Monopsonien auf dem Arbeitsmarkt und zitieren Sie relevante Forschung.", "role": "user" }, { "content": "\"Monopsony\" bezieht sich auf eine Marktstruktur, in der es nur einen Käufer für ein bestimmtes Gut oder eine bestimmte Dienstleistung gibt. In der Wirtschaft ist dieser Begriff besonders relevant auf dem Arbeitsmarkt, wo ein Monopsony Arbeitgeber über die Löhne und Arbeitsbedingungen ihrer Mitarbeiter verfügt. Das Vorhandensein eines Monopsony kann zu niedrigeren Löhnen und reduzierten Beschäftigungsmöglichkeiten für Arbeitnehmer führen, da der Arbeitgeber wenig Anreiz hat, Löhne zu erhöhen oder bessere Arbeitsbedingungen zu bieten. Neuere Forschung hat mögliche Monopsonies in Branchen wie Einzelhandel und Fast Food identifiziert, wo ein paar große Unternehmen einen erheblichen Teil des Marktes kontrollieren (Bivens & Mishel, 2013). In diesen Branchen ist das Konzept der Monopsony oft mit niedrigen Löhnen, begrenzten Vorteilen und reduzierten Verhandlungsmacht, was zu einer Situation führt, in der sie vom Arbeitgeber für ihren Lebensunterhalt abhängig sind. Diese Abhängigkeit kann zu einer weiteren Unterdrückung der Löhne und einem Rückgang der Arbeitsbedingungen führen. Insgesamt ist das Konzept der Monopsony wesentlich, um die Dynamik der Arbeitsmärkte und die Auswirkungen der Marktmacht auf die Arbeitnehmer zu verstehen. Weitere Forschung ist erforderlich, um das Ausmaß und die Auswirkungen der Monopsonies auf die Wirtschaft zu verstehen und zu entwickeln.", "role": "assistant" },
{ "content": "Nun erklären Sie es einem Hund", "role": "user" }
]
}
```` | [
-0.378668874502182,
-0.752553403377533,
0.10294798761606216,
0.32993966341018677,
-0.2658137083053589,
-0.12438317388296127,
0.04059802368283272,
-0.8141024112701416,
1.155905842781067,
0.5101513266563416,
-0.6865105032920837,
-0.3921552002429962,
-0.5777449607849121,
0.35048583149909973,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Aman279/Updated_Msc_Dataset | Aman279 | 2023-11-21T21:53:50Z | 21 | 0 | null | [
"region:us"
] | 2023-11-21T21:53:50Z | 2023-11-21T20:49:16.000Z | 2023-11-21T20:49:16 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622263669967651,
0.43461522459983826,
-0.52829909324646,
0.7012971639633179,
0.7915719747543335,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104475975036621,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
argilla/distilabel-math-instructions | argilla | 2023-11-23T13:55:37Z | 21 | 0 | null | [
"region:us"
] | 2023-11-23T13:55:37Z | 2023-11-21T21:51:41.000Z | 2023-11-21T21:51:41 | ---
dataset_info:
features:
- name: instructions
dtype: string
splits:
- name: train
num_bytes: 410539
num_examples: 4699
download_size: 150900
dataset_size: 410539
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "distilabel-math-instructions"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.5174394845962524,
-0.48494142293930054,
0.23860183358192444,
0.4925825297832489,
-0.24359136819839478,
0.052883729338645935,
0.11421692371368408,
0.338064044713974,
0.5422951579093933,
0.17207366228103638,
-1.0661214590072632,
-0.7733709216117859,
-0.7634660005569458,
-0.267955720424652... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
gilkeyio/librispeech-alignments | gilkeyio | 2023-11-22T16:03:34Z | 21 | 0 | null | [
"task_categories:automatic-speech-recognition",
"size_categories:100K<n<1M",
"language:en",
"license:cc-by-4.0",
"arxiv:1512.02595",
"region:us"
] | 2023-11-22T16:03:34Z | 2023-11-22T04:15:29.000Z | 2023-11-22T04:15:29 | ---
dataset_info:
features:
- name: sex
dtype: string
- name: subset
dtype: string
- name: id
dtype: string
- name: audio
dtype: audio
- name: transcript
dtype: string
- name: words
list:
- name: end
dtype: float64
- name: start
dtype: float64
- name: word
dtype: string
- name: phonemes
list:
- name: end
dtype: float64
- name: phoneme
dtype: string
- name: start
dtype: float64
splits:
- name: dev_clean
num_bytes: 365310608.879
num_examples: 2703
- name: dev_other
num_bytes: 341143993.784
num_examples: 2864
- name: test_clean
num_bytes: 377535532.98
num_examples: 2620
- name: test_other
num_bytes: 351207892.569557
num_examples: 2938
- name: train_clean_100
num_bytes: 6694747231.610863
num_examples: 28538
- name: train_clean_360
num_bytes: 24163659711.787865
num_examples: 104008
- name: train_other_500
num_bytes: 32945085271.89443
num_examples: 148645
download_size: 62101682957
dataset_size: 65238690243.50571
configs:
- config_name: default
data_files:
- split: dev_clean
path: data/dev_clean-*
- split: dev_other
path: data/dev_other-*
- split: test_clean
path: data/test_clean-*
- split: test_other
path: data/test_other-*
- split: train_clean_100
path: data/train_clean_100-*
- split: train_clean_360
path: data/train_clean_360-*
- split: train_other_500
path: data/train_other_500-*
license: cc-by-4.0
task_categories:
- automatic-speech-recognition
language:
- en
pretty_name: Librispeech Alignments
size_categories:
- 100K<n<1M
---
# Dataset Card for Librispeech Alignments
Librispeech with alignments generated by the [Montreal Forced Aligner](https://montreal-forced-aligner.readthedocs.io/en/latest/). The original alignments in TextGrid format can be found [here](https://zenodo.org/records/2619474)
## Dataset Details
### Dataset Description
Librispeech is a corpus of read English speech, designed for training and evaluating automatic speech recognition (ASR) systems. The dataset contains 1000 hours of 16kHz read English speech derived from audiobooks.
The Montreal Forced Aligner (MFA) was used to generate word and phoneme level alignments for the Librispeech dataset.
- **Curated by:** Vassil Panayotov, Guoguo Chen, Daniel Povey, Sanjeev Khudanpur (for Librispeech)
- **Funded by:** DARPA LORELEI
- **Shared by:** Loren Lugosch (for Alignments)
- **Language(s) (NLP):** English
- **License:** Creative Commons Attribution 4.0 International License
### Dataset Sources
- **Repository:** https://www.openslr.org/12
- **Paper:** https://arxiv.org/abs/1512.02595
- **Alignments:** https://zenodo.org/record/2619474
## Uses
### Direct Use
The Librispeech dataset can be used to train and evaluate ASR systems. The alignments allow for forced alignment techniques.
### Out-of-Scope Use
The dataset only contains read speech, so models trained on it may not perform as well on spontaneous conversational speech.
## Dataset Structure
The dataset contains 1000 hours of segmented read English speech from audiobooks. There are three train subsets: 100 hours (train-clean-100), 360 hours (train-clean-360) and 500 hours (train-other-500).
The alignments connect the audio to the reference text transcripts on word and phoneme level.
### Data Fields
- sex: M for male, F for female
- subset: dev_clean, dev_other, test_clean, test_other, train_clean_100, train_clean_360, train_other_500
- id: unique id of the data sample. (speaker id)-(chapter-id)-(utterance-id)
- audio: the audio, 16kHz
- transcript: the spoken text of the dataset, normalized and lowercased
- words: a list of words with fields:
- word: the text of the word
- start: the start time in seconds
- end: the end time in seconds
- phonemes: a list of phonemes with fields:
- phoneme: the phoneme spoken
- start: the start time in seconds
- end: the end time in seconds
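As a minimal sketch of how the alignment fields above might be used, the snippet below computes per-word durations from a record shaped like the schema. The sample values are illustrative placeholders, not rows from the dataset.

```python
# Hypothetical record following the card's field names; values are made up.
sample = {
    "id": "84-121123-0000",
    "transcript": "go do you hear",
    "words": [
        {"word": "go", "start": 0.49, "end": 0.90},
        {"word": "do", "start": 0.90, "end": 1.12},
        {"word": "you", "start": 1.12, "end": 1.25},
        {"word": "hear", "start": 1.25, "end": 1.83},
    ],
}

def word_durations(record):
    """Return (word, duration_in_seconds) pairs from an aligned record."""
    return [(w["word"], round(w["end"] - w["start"], 2)) for w in record["words"]]

print(word_durations(sample))
```

The same pattern applies to the `phonemes` list, whose entries carry the same `start`/`end` fields.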
## Dataset Creation
### Curation Rationale
Librispeech was created to further speech recognition research and to benchmark progress in the field.
### Source Data
#### Data Collection and Processing
The audio and reference texts were sourced from read English audiobooks in the LibriVox project. The data was segmented, filtered and prepared for speech recognition.
#### Who are the source data producers?
The audiobooks are read by volunteers for the LibriVox project. Information about the readers is available in the LibriVox catalog.
### Annotations
#### Annotation process
The Montreal Forced Aligner was used to create word and phoneme level alignments between the audio and reference texts. The aligner is based on Kaldi.
When formatting the TextGrid alignments into a Hugging Face dataset, words with empty text were removed, as were phonemes with empty text, silence tokens, and spacing tokens.
#### Who are the annotators?
The alignments were generated automatically by the Montreal Forced Aligner and shared by Loren Lugosch. The TextGrid files were parsed and integrated into this dataset by Kim Gilkey.
#### Personal and Sensitive Information
The data contains read speech and transcripts. No personal or sensitive information is expected.
## Bias, Risks, and Limitations
The dataset contains only read speech from published books, not natural conversational speech. Performance on other tasks may be reduced.
### Recommendations
Users should understand that the alignments may contain errors and account for this in applications. For example, be wary of <UNK> tokens.
## Citation
**Librispeech:**
```
@inproceedings{panayotov2015librispeech,
title={Librispeech: an ASR corpus based on public domain audio books},
author={Panayotov, Vassil and Chen, Guoguo and Povey, Daniel and Khudanpur, Sanjeev},
booktitle={ICASSP},
year={2015},
organization={IEEE}
}
```
**Librispeech Alignments:**
```
Loren Lugosch, Mirco Ravanelli, Patrick Ignoto, Vikrant Singh Tomar, and Yoshua Bengio, "Speech Model Pre-training for End-to-End Spoken Language Understanding", Interspeech 2019.
```
**Montreal Forced Aligner:**
```
Michael McAuliffe, Michaela Socolof, Sarah Mihuc, Michael Wagner, and Morgan Sonderegger. "Montreal Forced Aligner: trainable text-speech alignment using Kaldi", Interspeech 2017.
``` | [
-0.15661679208278656,
-0.3594324588775635,
-0.019701460376381874,
-0.06052987277507782,
-0.17227020859718323,
-0.14650903642177582,
-0.12095780670642853,
-0.4367973208427429,
0.15700869262218475,
0.563337504863739,
-0.5959511399269104,
-0.6242700219154358,
-0.2992180287837982,
-0.019511539... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
idning/rainbow | idning | 2023-11-22T15:29:32Z | 21 | 0 | null | [
"region:us"
] | 2023-11-22T15:29:32Z | 2023-11-22T15:13:20.000Z | 2023-11-22T15:13:20 | ---
dataset_info:
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 2303342.0
num_examples: 9216
download_size: 1170069
dataset_size: 2303342.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
| [
-0.1285335123538971,
-0.1861683875322342,
0.6529128551483154,
0.49436232447624207,
-0.19319400191307068,
0.23607441782951355,
0.36072009801864624,
0.05056373029947281,
0.5793656706809998,
0.7400146722793579,
-0.650810182094574,
-0.23784008622169495,
-0.7102247476577759,
-0.0478255338966846... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
gilkeyio/inclusive_voice | gilkeyio | 2023-11-23T03:34:27Z | 21 | 0 | null | [
"task_categories:audio-classification",
"task_categories:automatic-speech-recognition",
"size_categories:1K<n<10K",
"language:en",
"license:cc0-1.0",
"arxiv:1912.06670",
"region:us"
] | 2023-11-23T03:34:27Z | 2023-11-23T00:39:48.000Z | 2023-11-23T00:39:48 | ---
dataset_info:
features:
- name: client_id
dtype: string
- name: audio
dtype: audio
- name: sentence
dtype: string
- name: age
dtype: string
- name: gender
dtype: string
- name: accent
dtype: string
splits:
- name: train
num_bytes: 254499181.875
num_examples: 6075
- name: test
num_bytes: 76654946.864
num_examples: 1812
download_size: 327803775
dataset_size: 331154128.73899996
license:
- cc0-1.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
task_categories:
- audio-classification
- automatic-speech-recognition
language:
- en
pretty_name: Inclusive Voice
size_categories:
- 1K<n<10K
---
# Dataset Card for Inclusive Voice
A dataset curated from Mozilla Common Voice containing equal representation of male, female, and other gender voices
## Dataset Details
### Dataset Description
- **Curated by:** Kim Gilkey
- **Language(s) (NLP):** English
- **License:** Public Domain, [CC-0](https://creativecommons.org/share-your-work/public-domain/cc0/)
### Dataset Sources
Mozilla Common Voice
- **Homepage:** https://commonvoice.mozilla.org/en/datasets
- **Repository:** https://github.com/common-voice/common-voice
- **Paper:** https://arxiv.org/abs/1912.06670
## Uses
### Direct Use
This dataset is intended for use in training and evaluating voice recognition models, particularly for applications in gender classification.
### Out-of-Scope Use
The dataset is not suitable for applications requiring a wide range of dialects or non-English languages. It is also not designed for identifying fine-grained demographic characteristics beyond gender.
You agree not to attempt to identify the individuals in the dataset.
## Dataset Structure
### Data Fields
- **client_id:** A unique identifier for each participant.
- **audio:** Audio recording of a spoken sentence.
- **sentence:** The text content of the audio recording.
- **age:** Categorical age group of the participant.
- **gender:** Gender of the participant (balanced representation).
- **accent:** Accent information of the participant, if available.
### Splits
- **Train:** 6,075 examples (2,025 of each gender).
- **Test:** 1,812 examples (604 of each gender).
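As a minimal sketch of how the balance described above might be checked, the snippet below counts genders over records shaped like the fields in this card. The records and gender labels are illustrative assumptions, not actual rows or the dataset's exact label strings.

```python
from collections import Counter

# Hypothetical records following the card's field names; values are made up.
records = [
    {"client_id": "a1", "gender": "male"},
    {"client_id": "b2", "gender": "female"},
    {"client_id": "c3", "gender": "other"},
    {"client_id": "d4", "gender": "male"},
    {"client_id": "e5", "gender": "female"},
    {"client_id": "f6", "gender": "other"},
]

counts = Counter(r["gender"] for r in records)
# Balanced means every gender appears the same number of times.
balanced = len(set(counts.values())) == 1
print(counts, balanced)
```

Applied to the real splits, this kind of check should recover the 2,025-per-gender (train) and 604-per-gender (test) counts stated above.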
## Dataset Creation
### Curation Rationale
The dataset was specifically curated to address gender imbalance in voice recognition datasets, aiming to provide a balanced representation of male, female, and non-binary voices.
### Source Data
#### Data Collection and Processing
Derived from Mozilla's Common Voice project, the dataset underwent additional processing to balance gender representation and limit the dominance of any single voice.
## Bias, Risks, and Limitations
Despite efforts to balance gender representation, the dataset may still have biases, such as over-representation of certain accents or age groups. It's not comprehensive in representing all possible variations in English speech.
Since the original dataset contains relatively few female and even fewer non-binary voices, the dataset becomes quite small when filtered down.
## Citation
The original Common Voice citation:
```
@inproceedings{commonvoice:2020,
author = {Ardila, R. and Branson, M. and Davis, K. and Henretty, M. and Kohler, M. and Meyer, J. and Morais, R. and Saunders, L. and Tyers, F. M. and Weber, G.},
title = {Common Voice: A Massively-Multilingual Speech Corpus},
booktitle = {Proceedings of the 12th Conference on Language Resources and Evaluation (LREC 2020)},
pages = {4211--4215},
year = 2020
}
```
## Dataset Card Authors
Kim Gilkey
## Dataset Card Contact
kim@gilkey.io | [
-0.5099248290061951,
-0.38764411211013794,
-0.18269194662570953,
0.4157896637916565,
0.035102833062410355,
0.0935996025800705,
-0.35982105135917664,
-0.5010168552398682,
0.3644203543663025,
0.3990269899368286,
-0.8731604218482971,
-0.6618525385856628,
-0.43892890214920044,
0.30125221610069... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
chiayewken/saycan | chiayewken | 2023-11-23T04:21:11Z | 21 | 0 | null | [
"region:us"
] | 2023-11-23T04:21:11Z | 2023-11-23T04:17:30.000Z | 2023-11-23T04:17:30 | ---
dataset_info:
features:
- name: INPUT
dtype: string
- name: OUTPUT
dtype: string
splits:
- name: test
num_bytes: 14865
num_examples: 99
download_size: 4765
dataset_size: 14865
configs:
- config_name: default
data_files:
- split: test
path: data/test-*
---
# SayCan
This repo contains the data for ["Do As I Can, Not As I Say:
Grounding Language in Robotic Affordances"](https://say-can.github.io).
The original data link is here: https://raw.githubusercontent.com/say-can/say-can.github.io/main/data/saycan_plan_v0_l.tsv
This dataset is distributed with the CC BY 4.0 license. | [
-0.2915041744709015,
-0.28996241092681885,
0.3762490749359131,
-0.006807317025959492,
-0.19181819260120392,
0.061969440430402756,
-0.26167890429496765,
-0.3453904390335083,
0.44702818989753723,
0.893625020980835,
-0.8383233547210693,
-0.722602128982544,
-0.3417615592479706,
0.0207117684185... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
anushvst/law_chat | anushvst | 2023-11-24T09:24:06Z | 21 | 0 | null | [
"region:us"
] | 2023-11-24T09:24:06Z | 2023-11-23T05:24:45.000Z | 2023-11-23T05:24:45 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622264862060547,
0.43461528420448303,
-0.52829909324646,
0.7012971639633179,
0.7915720343589783,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104477167129517,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Liberty-L/race_train_EN | Liberty-L | 2023-11-26T06:15:39Z | 21 | 0 | null | [
"region:us"
] | 2023-11-26T06:15:39Z | 2023-11-24T07:23:37.000Z | 2023-11-24T07:23:37 | ---
dataset_info:
features:
- name: example_id
dtype: string
- name: article
dtype: string
- name: answer
dtype: string
- name: question
dtype: string
- name: options
sequence: string
- name: input_ids
sequence:
sequence: int32
- name: attention_mask
sequence:
sequence: int8
- name: label
dtype: int64
splits:
- name: train
num_bytes: 162846173
num_examples: 25421
download_size: 27127431
dataset_size: 162846173
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
| [
-0.1285335123538971,
-0.1861683875322342,
0.6529128551483154,
0.49436232447624207,
-0.19319400191307068,
0.23607441782951355,
0.36072009801864624,
0.05056373029947281,
0.5793656706809998,
0.7400146722793579,
-0.650810182094574,
-0.23784008622169495,
-0.7102247476577759,
-0.0478255338966846... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Liberty-L/race_val_EN | Liberty-L | 2023-11-26T06:15:41Z | 21 | 0 | null | [
"region:us"
] | 2023-11-26T06:15:41Z | 2023-11-24T07:23:43.000Z | 2023-11-24T07:23:43 | ---
dataset_info:
features:
- name: example_id
dtype: string
- name: article
dtype: string
- name: answer
dtype: string
- name: question
dtype: string
- name: options
sequence: string
- name: input_ids
sequence:
sequence: int32
- name: attention_mask
sequence:
sequence: int8
- name: label
dtype: int64
splits:
- name: train
num_bytes: 9221971
num_examples: 1436
download_size: 1305467
dataset_size: 9221971
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
| [
-0.12853367626667023,
-0.18616794049739838,
0.6529126763343811,
0.4943627417087555,
-0.19319313764572144,
0.23607443273067474,
0.36071979999542236,
0.05056338757276535,
0.5793654322624207,
0.7400138974189758,
-0.6508103013038635,
-0.23783987760543823,
-0.710224986076355,
-0.047825977206230... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Liberty-L/race_test_EN | Liberty-L | 2023-11-26T06:15:42Z | 21 | 0 | null | [
"region:us"
] | 2023-11-26T06:15:42Z | 2023-11-24T07:23:44.000Z | 2023-11-24T07:23:44 | ---
dataset_info:
features:
- name: example_id
dtype: string
- name: article
dtype: string
- name: answer
dtype: string
- name: question
dtype: string
- name: options
sequence: string
- name: input_ids
sequence:
sequence: int32
- name: attention_mask
sequence:
sequence: int8
- name: label
dtype: int64
splits:
- name: train
num_bytes: 9381086
num_examples: 1436
download_size: 1252839
dataset_size: 9381086
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
| [
-0.12853367626667023,
-0.18616794049739838,
0.6529126763343811,
0.4943627417087555,
-0.19319313764572144,
0.23607443273067474,
0.36071979999542236,
0.05056338757276535,
0.5793654322624207,
0.7400138974189758,
-0.6508103013038635,
-0.23783987760543823,
-0.710224986076355,
-0.047825977206230... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
SrAlex/datos-leyes-civiles-peruanas-v2 | SrAlex | 2023-11-25T01:27:05Z | 21 | 0 | null | [
"region:us"
] | 2023-11-25T01:27:05Z | 2023-11-25T01:26:14.000Z | 2023-11-25T01:26:14 | Entry not found | [
-0.3227649927139282,
-0.225684255361557,
0.862226128578186,
0.43461498618125916,
-0.5282987952232361,
0.7012963891029358,
0.7915717363357544,
0.07618629932403564,
0.7746025919914246,
0.2563219666481018,
-0.7852816581726074,
-0.2257382869720459,
-0.9104480743408203,
0.5715669393539429,
-0... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
luizlzg/drbyte_test | luizlzg | 2023-11-25T19:29:07Z | 21 | 0 | null | [
"task_categories:text-generation",
"size_categories:10K<n<100K",
"language:pt",
"medical",
"biology",
"region:us"
] | 2023-11-25T19:29:07Z | 2023-11-25T19:24:04.000Z | 2023-11-25T19:24:04 | ---
task_categories:
- text-generation
language:
- pt
tags:
- medical
- biology
size_categories:
- 10K<n<100K
--- | [
-0.12853367626667023,
-0.18616794049739838,
0.6529126763343811,
0.4943627417087555,
-0.19319313764572144,
0.23607443273067474,
0.36071979999542236,
0.05056338757276535,
0.5793654322624207,
0.7400138974189758,
-0.6508103013038635,
-0.23783987760543823,
-0.710224986076355,
-0.047825977206230... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
andersonbcdefg/fake_dataset | andersonbcdefg | 2023-11-26T02:59:28Z | 21 | 0 | null | [
"region:us"
] | 2023-11-26T02:59:28Z | 2023-11-26T01:28:36.000Z | 2023-11-26T01:28:36 | ---
dataset_info:
features:
- name: tokens
sequence: int32
- name: token_type_ids
sequence: int8
- name: attention_mask
sequence: int8
splits:
- name: train
num_bytes: 6240
num_examples: 8
download_size: 5472
dataset_size: 6240
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "fake_dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.5041502118110657,
-0.5325407981872559,
0.21512064337730408,
0.24092401564121246,
-0.19757810235023499,
0.22814859449863434,
0.4462225139141083,
-0.25180584192276,
1.0973870754241943,
0.45643898844718933,
-0.9819025993347168,
-0.5301318764686584,
-0.4496355652809143,
-0.23729287087917328... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
adamjweintraut/eli5_lfqa_slice | adamjweintraut | 2023-11-27T06:28:50Z | 21 | 0 | null | [
"region:us"
] | 2023-11-27T06:28:50Z | 2023-11-27T06:28:35.000Z | 2023-11-27T06:28:35 | ---
dataset_info:
features:
- name: index
dtype: int64
- name: q_id
dtype: string
- name: question
dtype: string
- name: best_answer
dtype: string
- name: all_answers
sequence: string
- name: num_answers
dtype: int64
- name: context
dtype: string
- name: orig
dtype: string
- name: target
dtype: string
splits:
- name: train
num_bytes: 275385111.5562014
num_examples: 20000
- name: test
num_bytes: 34423138.944525175
num_examples: 2500
- name: validation
num_bytes: 34423138.944525175
num_examples: 2500
download_size: 207239662
dataset_size: 344231389.44525176
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
- split: validation
path: data/validation-*
---
| [
-0.12853392958641052,
-0.18616779148578644,
0.6529127955436707,
0.49436280131340027,
-0.19319361448287964,
0.23607419431209564,
0.36072003841400146,
0.050563063472509384,
0.579365611076355,
0.7400140762329102,
-0.6508104205131531,
-0.23783954977989197,
-0.7102249264717102,
-0.0478260256350... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
jinmang2/common-sense-mrc | jinmang2 | 2021-12-12T07:56:31Z | 20 | 0 | null | [
"region:us"
] | 2021-12-12T07:56:31Z | 2022-03-02T23:29:22.000Z | 2022-03-02T23:29:22 | Entry not found | [
-0.3227647542953491,
-0.22568407654762268,
0.8622258901596069,
0.4346148371696472,
-0.5282984972000122,
0.7012965083122253,
0.7915717959403992,
0.07618629932403564,
0.7746022343635559,
0.2563222348690033,
-0.785281777381897,
-0.22573848068714142,
-0.9104482531547546,
0.5715669393539429,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
limjiayi/hateful_memes_expanded | limjiayi | 2021-12-06T05:17:02Z | 20 | 2 | null | [
"region:us"
] | 2021-12-06T05:17:02Z | 2022-03-02T23:29:22.000Z | 2022-03-02T23:29:22 | Entry not found | [
-0.3227647542953491,
-0.22568407654762268,
0.8622258901596069,
0.4346148371696472,
-0.5282984972000122,
0.7012965083122253,
0.7915717959403992,
0.07618629932403564,
0.7746022343635559,
0.2563222348690033,
-0.785281777381897,
-0.22573848068714142,
-0.9104482531547546,
0.5715669393539429,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
lpsc-fiuba/melisa | lpsc-fiuba | 2022-10-22T08:52:56Z | 20 | 3 | null | [
"task_categories:text-classification",
"task_ids:language-modeling",
"task_ids:sentiment-classification",
"task_ids:sentiment-scoring",
"task_ids:topic-classification",
"annotations_creators:found",
"language_creators:found",
"source_datasets:original",
"language:es",
"language:pt",
"license:oth... | 2022-10-22T08:52:56Z | 2022-03-02T23:29:22.000Z | 2022-03-02T23:29:22 | ---
annotations_creators:
- found
language_creators:
- found
language:
- es
- pt
license:
- other
multilinguality:
all_languages:
- multilingual
es:
- monolingual
pt:
- monolingual
paperswithcode_id: null
size_categories:
all_languages:
- 100K<n<1M
es:
- 100K<n<1M
pt:
- 100K<n<1M
source_datasets:
- original
task_categories:
- conditional-text-generation
- sequence-modeling
- text-classification
- text-scoring
task_ids:
- language-modeling
- sentiment-classification
- sentiment-scoring
- summarization
- topic-classification
---
# Dataset Card for MeLiSA (Mercado Libre for Sentiment Analysis)
**NOTE: THIS CARD IS UNDER CONSTRUCTION**
**NOTE 2: THE RELEASED VERSION OF THIS DATASET IS A DEMO VERSION.**
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Webpage:** https://github.com/lpsc-fiuba/MeLiSA
- **Paper:**
- **Point of Contact:** lestienne@fi.uba.ar
[More Information Needed]
### Dataset Summary
We provide a Mercado Libre product reviews dataset for Spanish and Portuguese text classification. The dataset contains reviews in these two languages collected between August 2020 and January 2021. Each record in the dataset contains the review content and title, the star rating, the country where it was published and the product category (arts, technology, etc.). The corpus is roughly balanced across stars, so each star rating constitutes approximately 20% of the reviews in each language.
| || Spanish ||| Portuguese ||
|---|:------:|:----------:|:-----:|:------:|:----------:|:-----:|
| | Train | Validation | Test | Train | Validation | Test |
| 1 | 88.425 | 4.052 | 5.000 | 50.801 | 4.052 | 5.000 |
| 2 | 88.397 | 4.052 | 5.000 | 50.782 | 4.052 | 5.000 |
| 3 | 88.435 | 4.052 | 5.000 | 50.797 | 4.052 | 5.000 |
| 4 | 88.449 | 4.052 | 5.000 | 50.794 | 4.052 | 5.000 |
| 5 | 88.402 | 4.052 | 5.000 | 50.781 | 4.052 | 5.000 |
The table shows the number of samples per star rating in each split. There are a total of 442.108 training samples in Spanish and 253.955 in Portuguese. We limited the number of reviews per product to 30 and performed a ranked inclusion of the downloaded reviews to include those with rich semantic content. In this ranking, the length of the review content and the valorization (the difference between likes and dislikes) were prioritized. For more details on this process, see (CITATION).
Reviews in Spanish were obtained from seven different Latin American countries (Argentina, Colombia, Peru, Uruguay, Chile, Venezuela and Mexico), and Portuguese reviews were extracted from Brasil. To match the language with its respective country, we applied a language detection algorithm based on the works of Joulin et al. (2016a and 2016b) to determine the language of the review text, and we removed reviews that were not written in the expected language.
[More Information Needed]
### Languages
The dataset contains reviews in Latin American Spanish and Portuguese.
## Dataset Structure
### Data Instances
Each data instance corresponds to a review. Each split is stored in a separated `.csv` file, so every row in each file consists on a review. For example, here we show a snippet of the spanish training split:
```csv
country,category,review_content,review_title,review_rate
...
MLA,Tecnología y electrónica / Tecnologia e electronica,Todo bien me fue muy util.,Muy bueno,2
MLU,"Salud, ropa y cuidado personal / Saúde, roupas e cuidado pessoal",No fue lo que esperaba. El producto no me sirvió.,No fue el producto que esperé ,2
MLM,Tecnología y electrónica / Tecnologia e electronica,No fue del todo lo que se esperaba.,No me fue muy funcional ahí que hacer ajustes,2
...
```
### Data Fields
- `country`: The string identifier of the country. It could be one of the following: `MLA` (Argentina), `MCO` (Colombia), `MPE` (Peru), `MLU` (Uruguay), `MLC` (Chile), `MLV` (Venezuela), `MLM` (Mexico) or `MLB` (Brasil).
- `category`: String representation of the product's category. It could be one of the following:
- Hogar / Casa
- Tecnologı́a y electrónica / Tecnologia e electronica
- Salud, ropa y cuidado personal / Saúde, roupas e cuidado pessoal
- Arte y entretenimiento / Arte e Entretenimiento
- Alimentos y Bebidas / Alimentos e Bebidas
- `review_content`: The text content of the review.
- `review_title`: The text title of the review.
- `review_rate`: An int between 1-5 indicating the number of stars.
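As a minimal sketch of how rows shaped like the CSV snippet above can be parsed, the example below uses Python's `csv` module on two abbreviated sample rows. The column names follow this "Data Fields" section; the row contents are shortened from the card's example.

```python
import csv
import io

# Two abbreviated rows in the format shown in the card's CSV snippet.
raw = """country,category,review_content,review_title,review_rate
MLA,Tecnología y electrónica / Tecnologia e electronica,Todo bien me fue muy util.,Muy bueno,2
MLM,Tecnología y electrónica / Tecnologia e electronica,No fue del todo lo que se esperaba.,No me fue muy funcional,2
"""

rows = list(csv.DictReader(io.StringIO(raw)))
ratings = [int(r["review_rate"]) for r in rows]
countries = {r["country"] for r in rows}
print(countries, ratings)
```

Note that some real rows quote the `category` field because the category names contain commas; `csv.DictReader` handles that quoting automatically.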
### Data Splits
Each language configuration comes with its own `train`, `validation`, and `test` splits. The `all_languages` split is simply a concatenation of the corresponding split across all languages. That is, the `train` split for `all_languages` is a concatenation of the `train` splits for each of the languages and likewise for `validation` and `test`.
## Dataset Creation
### Curation Rationale
The dataset is motivated by the desire to advance sentiment analysis and text classification in Latin American Spanish and Portuguese.
### Source Data
#### Initial Data Collection and Normalization
The authors gathered the reviews from the marketplaces in Argentina, Colombia, Peru, Uruguay, Chile, Venezuela and Mexico for the Spanish language and from Brasil for Portuguese. They prioritized reviews that contained relevant semantic content by applying a ranking filter based on the length and the valorization (the difference between the number of likes and dislikes) of the review. They then ensured the correct language by applying a semi-automatic language detection algorithm, only retaining reviews in the target language. No normalization was applied to the review content or title.
Original product categories were grouped into higher-level categories, resulting in five different types of products: "Home" (Hogar / Casa), "Technology and electronics" (Tecnología y electrónica / Tecnologia e electronica), "Health, Dress and Personal Care" (Salud, ropa y cuidado personal / Saúde, roupas e cuidado pessoal), "Arts and Entertainment" (Arte y entretenimiento / Arte e Entretenimiento) and "Food and Beverages" (Alimentos y Bebidas / Alimentos e Bebidas).
#### Who are the source language producers?
The original text comes from Mercado Libre customers reviewing products on the marketplace across a variety of product categories.
### Annotations
#### Annotation process
Each of the fields included is submitted by the user with the review or is otherwise associated with the review. No manual or machine-driven annotation was necessary.
#### Who are the annotators?
N/A
### Personal and Sensitive Information
Mercado Libre reviews are submitted by users in the knowledge that they are public. The reviewer IDs included in this dataset are anonymized, meaning that they are disassociated from the original user profiles. However, these fields would likely be easy to de-anonymize given the public and identifying nature of free-form text responses.
## Considerations for Using the Data
### Social Impact of Dataset
Although Spanish and Portuguese are relatively high-resource languages, most existing data is collected from European or United States users. This dataset is part of an effort to encourage text classification research in languages other than English and European Spanish and Portuguese. Such work increases the accessibility of natural language technology to more regions and cultures.
### Discussion of Biases
The data included here are from unverified consumers. Some percentage of these reviews may be fake or contain misleading or offensive language.
### Other Known Limitations
The dataset is constructed so that the distribution of star ratings is roughly balanced. This feature has some advantages for classification purposes, but some types of language may be over- or underrepresented relative to the original distribution of reviews in order to achieve this balance.
[More Information Needed]
## Additional Information
### Dataset Curators
Published by Lautaro Estienne, Matías Vera and Leonardo Rey Vega. Managed by the Signal Processing in Communications Laboratory of the Electronics Department at the Engineering School of the University of Buenos Aires (UBA).
### Licensing Information
Amazon has licensed this dataset under its own agreement, to be found at the dataset webpage here:
https://docs.opendata.aws/amazon-reviews-ml/license.txt
### Citation Information
Please cite the following paper if you found this dataset useful:
(CITATION)
[More Information Needed]
### Contributions
[More Information Needed]
| [
-0.49258989095687866,
-0.5710445642471313,
-0.052908509969711304,
0.652121365070343,
-0.479657381772995,
0.1516079157590866,
-0.4727388620376587,
-0.555582582950592,
0.7048875689506531,
0.5883703231811523,
-0.5388163924217224,
-1.1042780876159668,
-0.6598021388053894,
0.4229721426963806,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
microsoft/codexglue_method_generation | microsoft | 2021-10-28T07:03:55Z | 20 | 7 | null | [
"region:us"
] | 2021-10-28T07:03:55Z | 2022-03-02T23:29:22.000Z | 2022-03-02T23:29:22 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622263669967651,
0.43461522459983826,
-0.52829909324646,
0.7012971639633179,
0.7915719747543335,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104475975036621,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
nlpyeditepe/tr_rte | nlpyeditepe | 2022-07-01T15:28:27Z | 20 | 0 | null | [
"task_categories:text-classification",
"task_ids:natural-language-inference",
"annotations_creators:found",
"language_creators:machine-generated",
"multilinguality:monolingual",
"size_categories:unknown",
"source_datasets:extended|glue",
"license:mit",
"region:us"
] | 2022-07-01T15:28:27Z | 2022-03-02T23:29:22.000Z | 2022-03-02T23:29:22 | ---
annotations_creators:
- found
language_creators:
- machine-generated
language:
- tr-TR
license:
- mit
multilinguality:
- monolingual
pretty_name: RTE for Turkish
size_categories:
- unknown
source_datasets:
- extended|glue
task_categories:
- text-classification
task_ids:
- natural-language-inference
--- | [
-0.1285335123538971,
-0.1861683875322342,
0.6529128551483154,
0.49436232447624207,
-0.19319400191307068,
0.23607441782951355,
0.36072009801864624,
0.05056373029947281,
0.5793656706809998,
0.7400146722793579,
-0.650810182094574,
-0.23784008622169495,
-0.7102247476577759,
-0.0478255338966846... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
ruanchaves/snap | ruanchaves | 2022-10-20T19:12:47Z | 20 | 1 | null | [
"annotations_creators:expert-generated",
"language_creators:machine-generated",
"multilinguality:monolingual",
"size_categories:unknown",
"source_datasets:original",
"language:en",
"license:unknown",
"word-segmentation",
"region:us"
] | 2022-10-20T19:12:47Z | 2022-03-06T00:17:23.000Z | 2022-03-06T00:17:23 | ---
annotations_creators:
- expert-generated
language_creators:
- machine-generated
language:
- en
license:
- unknown
multilinguality:
- monolingual
size_categories:
- unknown
source_datasets:
- original
task_categories:
- structure-prediction
task_ids: []
pretty_name: SNAP
tags:
- word-segmentation
---
# Dataset Card for SNAP
## Dataset Description
- **Repository:** [ardax/hashtag-segmentor](https://github.com/ardax/hashtag-segmentor)
- **Paper:** [Segmenting hashtags using automatically created training data](http://www.lrec-conf.org/proceedings/lrec2016/pdf/708_Paper.pdf)
### Dataset Summary
Automatically segmented 803K SNAP Twitter Data Set hashtags with the heuristic described in the paper "Segmenting hashtags using automatically created training data".
### Languages
English
## Dataset Structure
### Data Instances
```
{
"index": 0,
"hashtag": "BrandThunder",
"segmentation": "Brand Thunder"
}
```
### Data Fields
- `index`: a numerical index.
- `hashtag`: the original hashtag.
- `segmentation`: the gold segmentation for the hashtag.
## Dataset Creation
- All hashtag segmentation and identifier splitting datasets on this profile have the same basic fields: `hashtag` and `segmentation` or `identifier` and `segmentation`.
- The only difference between `hashtag` and `segmentation` or between `identifier` and `segmentation` are the whitespace characters. Spell checking, expanding abbreviations or correcting characters to uppercase go into other fields.
- There is always whitespace between an alphanumeric character and a sequence of any special characters ( such as `_` , `:`, `~` ).
- If there are any annotations for named entity recognition and other token classification tasks, they are given in a `spans` field.
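The first two conventions imply a simple invariant: stripping all whitespace from `segmentation` must recover the original `hashtag`. A minimal check of that invariant (an illustrative helper, not part of the hashformers API):

```python
def matches_hashtag(hashtag: str, segmentation: str) -> bool:
    """True when the gold segmentation differs from the hashtag
    only by whitespace characters, as the conventions above require."""
    return "".join(segmentation.split()) == hashtag

# Example instance from the card:
# {"hashtag": "BrandThunder", "segmentation": "Brand Thunder"}
```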
## Additional Information
### Citation Information
```
@inproceedings{celebi2016segmenting,
title={Segmenting hashtags using automatically created training data},
author={Celebi, Arda and {\"O}zg{\"u}r, Arzucan},
booktitle={Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC'16)},
pages={2981--2985},
year={2016}
}
```
### Contributions
This dataset was added by [@ruanchaves](https://github.com/ruanchaves) while developing the [hashformers](https://github.com/ruanchaves/hashformers) library.
| [
-0.5316988825798035,
-0.582749605178833,
0.22594639658927917,
-0.0311012901365757,
-0.4195573925971985,
0.3902314007282257,
-0.11172207444906235,
-0.5269083380699158,
0.5141822099685669,
0.09611895680427551,
-0.5884799957275391,
-0.7664903998374939,
-0.5935624241828918,
0.09328082948923111... | null | null | null | null | null | null | null | null | null | null | null | null | null |