id stringlengths 2 115 | lastModified stringlengths 24 24 | tags list | author stringlengths 2 42 ⌀ | description stringlengths 0 6.67k ⌀ | citation stringlengths 0 10.7k ⌀ | likes int64 0 3.66k | downloads int64 0 8.89M | created timestamp[us] | card stringlengths 11 977k | card_len int64 11 977k | embeddings list |
|---|---|---|---|---|---|---|---|---|---|---|---|
tasksource/com2sense | 2023-06-05T10:09:30.000Z | [
"language:en",
"commonsense",
"region:us"
] | tasksource | null | null | 1 | 72 | 2023-06-02T14:47:54 | ---
language:
- en
tags:
- commonsense
---
https://github.com/PlusLabNLP/Com2Sense
```
@inproceedings{singh-etal-2021-com2sense,
title = "{COM}2{SENSE}: A Commonsense Reasoning Benchmark with Complementary Sentences",
author = "Singh, Shikhar and
Wen, Nuan and
Hou, Yu and
Alipoormolabashi, Pegah and
Wu, Te-lin and
Ma, Xuezhe and
Peng, Nanyun",
booktitle = "Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021",
month = aug,
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.findings-acl.78",
doi = "10.18653/v1/2021.findings-acl.78",
pages = "883--898",
}
``` | 745 | [
[
-0.01215362548828125,
-0.0236968994140625,
0.029144287109375,
0.0165863037109375,
-0.010986328125,
-0.0206146240234375,
-0.0252532958984375,
-0.0390625,
0.0091400146484375,
0.003849029541015625,
-0.03826904296875,
-0.03045654296875,
-0.039642333984375,
0.013... |
laion/strategic_game_chess | 2023-10-20T04:14:20.000Z | [
"license:cc-by-4.0",
"game",
"region:us"
] | laion | null | null | 9 | 72 | 2023-06-06T02:09:13 | ---
tags:
- game
pretty_name: The Chess Dataset
license: cc-by-4.0
---
# Chess
> Recent advancements in artificial intelligence (AI) underscore the progress in reasoning and planning shown by generalist machine learning (ML) models. This progress can be accelerated by datasets that strengthen these generic capabilities when used to train foundation models of various kinds. This research initiative has generated extensive synthetic datasets from complex games — chess, Rubik's Cube, and mazes — to study how such data facilitates the advancement of these critical generic skills in AI models.
This dataset contains 3.2 billion games, equating to approximately 608 billion individual moves.
It is generated through self-play by the Stockfish engine on Fugaku, and initial moves were added to expand its diversity.
Each game has three columns: 'Moves', 'Termination' and 'Result'.
- 'Moves': the recorded chess moves of the whole game.
- 'Termination': how the game ended, e.g. CHECKMATE, INSUFFICIENT_MATERIAL, etc.
  - See the following for detailed information:
    https://python-chess.readthedocs.io/en/latest/core.html#chess.Outcome.termination
- 'Result': the result of the game: 1-0, 1/2-1/2, or 0-1.
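As an illustration, the 'Result' and 'Moves' fields of a row can be consumed as follows. This is a minimal sketch: the exact serialization of the move string is an assumption here, and the helper names are hypothetical.

```python
# Minimal sketch (assumptions: 'Moves' is a space-separated move string and
# 'Result' is one of "1-0", "1/2-1/2", "0-1"; the real serialization may differ).

def game_score(result: str) -> float:
    """Map a chess result string to White's score."""
    return {"1-0": 1.0, "1/2-1/2": 0.5, "0-1": 0.0}[result]

def count_moves(moves: str) -> int:
    """Count individual half-moves (plies) in a move string."""
    return len(moves.split())

example = {
    "Moves": "e2e4 e7e5 g1f3 b8c6",
    "Termination": "CHECKMATE",
    "Result": "1-0",
}
print(game_score(example["Result"]))  # 1.0
print(count_moves(example["Moves"]))  # 4
```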
### Call for Collaboration
We invite interested researchers and ML practitioners to explore the potential of these datasets. Whether training GPT models from scratch or fine-tuning pre-existing models, we encourage the exploration of various pre-training and fine-tuning strategies using these game-based datasets, either standalone or as an enhancement to other already composed large-scale data.
Our team is prepared to assist in securing necessary GPU resources for these explorations. We are particularly interested in collaborators eager to pre-train models of small to medium scale on our game data, subsequently transition to standard text-based training, and then perform comparative analyses against models of similar architecture trained exclusively on text data.
In conclusion, this initiative marks a significant stride toward intricate problem-solving and strategic planning in AI, extending an open invitation to the research community for collaborative advancement in this domain. | 2,171 | [
[
-0.0259857177734375,
-0.055908203125,
0.033782958984375,
0.0002384185791015625,
0.0147552490234375,
-0.0108642578125,
-0.0087127685546875,
-0.01806640625,
0.0129241943359375,
0.047088623046875,
-0.05322265625,
-0.04241943359375,
-0.036468505859375,
-0.011123... |
clarin-knext/scifact-pl-qrels | 2023-06-07T08:25:00.000Z | [
"task_categories:sentence-similarity",
"language:pl",
"license:cc-by-sa-4.0",
"arxiv:2305.19840",
"region:us"
] | clarin-knext | null | null | 0 | 72 | 2023-06-06T17:09:44 | ---
license: cc-by-sa-4.0
task_categories:
- sentence-similarity
language:
- pl
---
Part of **BEIR-PL: Zero Shot Information Retrieval Benchmark for the Polish Language**.
Link to arxiv: https://arxiv.org/pdf/2305.19840.pdf
Contact: konrad.wojtasik@pwr.edu.pl | 262 | [
[
-0.0153961181640625,
-0.0628662109375,
0.03546142578125,
0.016357421875,
-0.0221710205078125,
-0.010345458984375,
-0.0115966796875,
-0.034515380859375,
-0.0013113021850585938,
0.0286102294921875,
-0.038299560546875,
-0.048126220703125,
-0.029022216796875,
-0... |
eReverter/cnn_dailymail_extractive | 2023-07-19T18:45:02.000Z | [
"task_categories:summarization",
"size_categories:100K<n<1M",
"language:en",
"license:mit",
"arxiv:1903.10318",
"region:us"
] | eReverter | null | null | 0 | 72 | 2023-07-19T15:28:20 | ---
dataset_info:
features:
- name: src
sequence: string
- name: tgt
sequence: string
- name: labels
sequence: int64
splits:
- name: test
num_bytes: 53831114
num_examples: 11490
- name: train
num_bytes: 1376640992
num_examples: 287113
- name: validation
num_bytes: 62200550
num_examples: 13368
download_size: 857262516
dataset_size: 1492672656
license: mit
task_categories:
- summarization
language:
- en
size_categories:
- 100K<n<1M
---
## Data Card for Extractive CNN/DailyMail Dataset
### Overview
This is an extractive version of the [CNN/Dailymail](https://huggingface.co/datasets/cnn_dailymail) dataset. The structure of this dataset is identical to the original except for a minor modification in the data representation and the introduction of labels to denote the extractive summary.
The labels are generated following a greedy algorithm, as proposed by [Liu (2019)](https://arxiv.org/abs/1903.10318). The curation process can be found in the [bertsum-hf](https://github.com/eReverter/bertsum-hf) repository. I am uploading it in case someone does not want to go through the preprocessing, although Liu has a version ready for training in his [bertsum](https://github.com/nlpyang/BertSum) repository!
In this dataset:
- 'src' corresponds to 'article',
- 'tgt' equates to 'abstract',
- 'labels' represents a mapping of sentences forming the extractive summary.
### Data Architecture
Each entry in the dataset contains the following fields:
- `id`: a unique `string` identifier for each example.
- `src`: a `list[string]` field representing the original news article. Each string in the list is a separate sentence from the article.
- `tgt`: a `list[string]` field representing the professionally edited highlights or abstract of the article.
- `labels`: a `list[int]` field with binary values (stored as int64). Each value corresponds to a sentence in `src`, indicating whether that sentence is part of the extractive summary (1 for selected, 0 for not selected).
### Sample Data Entry
Here is an illustrative example from the dataset:
```json
{
"id": "1",
"src": ["This is the first sentence",
"This is the second"],
"tgt": ["This is one of the highlights"],
"labels": [1, 0]
}
```
In this example, the first sentence of the article is selected as part of the extractive summary (as indicated by '1' in the 'labels'), while the second sentence is not ('0' in the 'labels').
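The greedy labeling procedure mentioned above can be sketched as follows. This is a simplified illustration, not the exact implementation from bertsum-hf: plain unigram recall against the abstract stands in for the ROUGE scores used by Liu (2019), and the function names are hypothetical.

```python
# Simplified sketch of the greedy labeling idea from Liu (2019): repeatedly
# add the sentence that most improves overlap with the abstract. Unigram
# recall stands in here for the ROUGE scores used in the actual pipeline.

def overlap(selected, abstract):
    """Fraction of abstract tokens covered by the selected sentences."""
    sel_tokens = set(" ".join(selected).lower().split())
    abs_tokens = set(abstract.lower().split())
    if not abs_tokens:
        return 0.0
    return len(sel_tokens & abs_tokens) / len(abs_tokens)

def greedy_labels(src, abstract, max_sents=3):
    """Return binary labels marking the greedily selected sentences of `src`."""
    labels = [0] * len(src)
    selected = []
    best = 0.0
    for _ in range(max_sents):
        cand, cand_score = None, best
        for i, sent in enumerate(src):
            if labels[i]:
                continue
            score = overlap(selected + [sent], abstract)
            if score > cand_score:  # strict '>': earlier sentences win ties
                cand, cand_score = i, score
        if cand is None:  # no remaining sentence improves the overlap
            break
        best, labels[cand] = cand_score, 1
        selected.append(src[cand])
    return labels

src = ["This is the first sentence", "This is the second"]
tgt = ["This is one of the highlights"]
print(greedy_labels(src, " ".join(tgt)))  # → [1, 0]
```

With the sample entry above, the first sentence alone already covers every abstract token that any source sentence can cover, so the second sentence adds no gain and the procedure stops with labels `[1, 0]`.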
### Usage
The extractive CNN/DailyMail dataset can be used to train and evaluate models for extractive text summarization tasks. It allows models to learn to predict which sentences from an original text contribute to a summary, providing a binary mapping as a reference. The 'tgt' or 'abstract' field can serve as a basis for comparison, helping to assess how well the selected sentences cover the key points in the abstract. | 2,859 | [
[
-0.01561737060546875,
-0.051361083984375,
0.0147705078125,
0.0100250244140625,
-0.037109375,
-0.0022182464599609375,
-0.021728515625,
-0.0312347412109375,
0.0291900634765625,
0.03424072265625,
-0.043792724609375,
-0.06732177734375,
-0.060516357421875,
0.0223... |
dim/SlimOrcaEN | 2023-10-18T23:56:44.000Z | [
"region:us"
] | dim | null | null | 0 | 72 | 2023-10-18T23:54:18 | ---
dataset_info:
features:
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: weight
dtype: float64
- name: key
dtype: int64
splits:
- name: train
num_bytes: 928070255
num_examples: 517982
download_size: 468726589
dataset_size: 928070255
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "SlimOrcaEN"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 593 | [
[
-0.047760009765625,
-0.00795745849609375,
0.00909423828125,
0.005641937255859375,
-0.01386260986328125,
-0.01386260986328125,
0.005947113037109375,
-0.00965118408203125,
0.081298828125,
0.032745361328125,
-0.0654296875,
-0.04693603515625,
-0.038330078125,
-0... |
lavis-nlp/german_legal_sentences | 2022-10-20T18:34:19.000Z | [
"task_categories:text-retrieval",
"task_ids:semantic-similarity-scoring",
"annotations_creators:machine-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:n>1M",
"source_datasets:original",
"language:de",
"license:unknown",
"arxiv:2005.13342",
"arxiv:2010.1025... | lavis-nlp | German Legal Sentences (GLS) is an automatically generated training dataset for semantic sentence
matching in the domain of German legal documents. It follows the concept of weak supervision, where
imperfect labels are generated using multiple heuristics. For this purpose we use a combination of
legal citation matching and BM25 similarity. The contained sentences and their citations are parsed
from real judicial decisions provided by [Open Legal Data](http://openlegaldata.io/) | coming soon | 3 | 71 | 2022-03-02T23:29:22 | ---
annotations_creators:
- machine-generated
language_creators:
- found
language:
- de
license:
- unknown
multilinguality:
- monolingual
size_categories:
- n>1M
source_datasets:
- original
task_categories:
- text-retrieval
- text-scoring
task_ids:
- semantic-similarity-scoring
- text-retrieval-other-example-based-retrieval
---
# Dataset Card for German Legal Sentences
## Table of Contents
- [Dataset Card for German Legal Sentences](#dataset-card-for-german-legal-sentences)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Initial Data Collection and Normalization](#initial-data-collection-and-normalization)
- [Who are the source language producers?](#who-are-the-source-language-producers)
- [Annotations](#annotations)
- [Annotation process](#annotation-process)
- [Who are the annotators?](#who-are-the-annotators)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://lavis-nlp.github.io/german_legal_sentences/
- **Repository:** https://github.com/lavis-nlp/german_legal_sentences
- **Paper:** coming soon
- **Leaderboard:**
- **Point of Contact:** [Marco Wrzalik](mailto:marco.wrzalik@hs-rm.de)
### Dataset Summary
German Legal Sentences (GLS) is an automatically generated training dataset for semantic sentence matching and citation recommendation in the domain of German legal documents. It follows the concept of weak supervision, where imperfect labels are generated using multiple heuristics. For this purpose we use a combination of legal citation matching and BM25 similarity. The contained sentences and their citations are parsed from real judicial decisions provided by [Open Legal Data](http://openlegaldata.io/) (https://arxiv.org/abs/2005.13342).
### Supported Tasks and Leaderboards
The main associated task is *Semantic Similarity Ranking*. We propose to use the *Mean Reciprocal Rank* (MRR) cut at the tenth position, as well as MAP and Recall on rankings of size 200. As baselines we provide the following:
| Method | MRR@10 | MAP@200 | Recall@200 |
|-----------------------------------|---------:|-----------:|------------:|
| BM25 - default `(k1=1.2; b=0.75)` | 25.7 | 17.6 | 42.9 |
| BM25 - tuned `(k1=0.47; b=0.97)` | 26.2 | 18.1 | 43.3 |
| [CoRT](https://arxiv.org/abs/2010.10252) | 31.2 | 21.4 | 56.2 |
| [CoRT + BM25](https://arxiv.org/abs/2010.10252) | 32.1 | 22.1 | 67.1 |
In addition, we want to support a *Citation Recommendation* task in the future.
If you wish to contribute evaluation measures or give any suggestion or critique, please write an [e-mail](mailto:marco.wrzalik@hs-rm.de).
### Languages
This dataset contains texts from the specific domain of German court decisions.
## Dataset Structure
### Data Instances
```
{'query.doc_id': 28860,
'query.ref_ids': [6215, 248, 248],
'query.sent_id': 304863,
'query.text': 'Zudem ist zu berücksichtigen , dass die Vollverzinsung nach '
'[REF] i. V. m. [REF] gleichermaßen zugunsten wie zulasten des '
'Steuerpflichtigen wirkt , sodass bei einer Überzahlung durch '
'den Steuerpflichtigen der Staat dem Steuerpflichtigen neben '
'der Erstattung ebenfalls den entstandenen potentiellen Zins- '
'und Liquiditätsnachteil in der pauschalierten Höhe des [REF] '
'zu ersetzen hat , unabhängig davon , in welcher Höhe dem '
'Berechtigten tatsächlich Zinsen entgangen sind .',
'related.doc_id': 56348,
'related.ref_ids': [248, 6215, 62375],
'related.sent_id': 558646,
'related.text': 'Ferner ist zu berücksichtigen , dass der Zinssatz des [REF] '
'im Rahmen des [REF] sowohl für Steuernachforderung wie auch '
'für Steuererstattungen und damit gleichermaßen zugunsten wie '
'zulasten des Steuerpflichtigen wirkt , Vgl. BVerfG , '
'Nichtannahmebeschluss vom [DATE] [REF] , juris , mit der '
'Folge , dass auch Erstattungsansprüche unabhängig davon , ob '
'und in welcher Höhe dem Berechtigten tatsächlich Zinsen '
'entgangen sind , mit monatlich 0,0 % verzinst werden .'}
```
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
The documents we take from [Open Legal Data](http://openlegaldata.io/) (https://arxiv.org/abs/2005.13342) are first preprocessed by removing line breaks, enumeration characters and headings. Afterwards we parse legal citations using hand-crafted regular expressions. Each citation is split into its components and normalized, so that different variants of the same citation are matched together. For instance, "§211 Absatz 1 des Strafgesetzbuches" is normalized to "§ 211 Abs. 1 StGB". Every time we discover an unknown citation, we assign a unique id to it. We use these ids to replace parsed citations in the document text with a simple reference tag containing this id (e.g. `[REF321]`). At the same time we parse dates and replace them with the date tag `[DATE]`. Both replacements remove dots that may be confused with sentence boundaries, which makes the next stage easier.
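A toy sketch of this normalization step, covering only the single pattern from the example above. The actual pipeline uses a much larger set of hand-crafted expressions (see the linked repository), and the names below are hypothetical.

```python
import re

# Toy illustration of citation normalization: law-code names are mapped to
# their conventional abbreviations and the citation is rewritten in a
# canonical form. Only one pattern is covered here for brevity.
CODE_ABBREVIATIONS = {"Strafgesetzbuches": "StGB", "Strafgesetzbuch": "StGB"}

CITATION = re.compile(r"§\s*(\d+)\s+Absatz\s+(\d+)\s+des\s+(\w+)")

def normalize_citation(text: str) -> str:
    def repl(m: re.Match) -> str:
        code = CODE_ABBREVIATIONS.get(m.group(3), m.group(3))
        return f"§ {m.group(1)} Abs. {m.group(2)} {code}"
    return CITATION.sub(repl, text)

print(normalize_citation("§211 Absatz 1 des Strafgesetzbuches"))
# → § 211 Abs. 1 StGB
```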
We use [SoMaJo](https://github.com/tsproisl/SoMaJo) to perform sentence tokenization on the pre-processed documents. Each sentence that does not contain at least one legal citation is discarded. For the rest we assign sentence ids, remove all reference ids from them, as well as any contents in braces (braces often contain large enumerations of citations and their sources). At the same time we keep track of the corresponding document from which each sentence originates and which references occur in it.
#### Who are the source language producers?
The source language originates in the context of German court proceedings.
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
The annotations are machine-generated.
### Personal and Sensitive Information
The source documents are already public and anonymized.
## Considerations for Using the Data
### Social Impact of Dataset
With this dataset, we strive towards better accessibility of court decisions to the general public by accelerating research on semantic search technologies. We hope that emerging search technologies will enable the layperson to find relevant information without knowing the specific terms used by lawyers.
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
Coming soon!
### Contributions
Thanks to [@mwrzalik](https://github.com/mwrzalik) for adding this dataset. | 7,971 | [
[
-0.022613525390625,
-0.048004150390625,
0.044158935546875,
0.004062652587890625,
-0.0252685546875,
-0.0279541015625,
-0.02484130859375,
-0.016326904296875,
0.0268402099609375,
0.0323486328125,
-0.02752685546875,
-0.0736083984375,
-0.043060302734375,
0.019073... |
bazyl/GTSRB | 2022-10-25T10:39:19.000Z | [
"task_categories:image-classification",
"task_ids:multi-label-image-classification",
"annotations_creators:crowdsourced",
"language_creators:found",
"size_categories:10K<n<100K",
"source_datasets:original",
"license:gpl-3.0",
"region:us"
] | bazyl | null | null | 0 | 71 | 2022-06-25T00:30:19 | ---
annotations_creators:
- crowdsourced
language_creators:
- found
language: []
license:
- gpl-3.0
multilinguality: []
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- image-classification
task_ids:
- multi-label-image-classification
pretty_name: GTSRB
---
# Dataset Card for GTSRB
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-instances)
- [Data Splits](#data-instances)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** http://www.sciencedirect.com/science/article/pii/S0893608012000457
- **Repository:** https://github.com/bazylhorsey/gtsrb/
- **Paper:** Man vs. computer: Benchmarking machine learning algorithms for traffic sign recognition
- **Leaderboard:** https://benchmark.ini.rub.de/gtsrb_results.html
- **Point of Contact:** bhorsey16@gmail.com
### Dataset Summary
The German Traffic Sign Benchmark is a multi-class, single-image classification challenge held at the International Joint Conference on Neural Networks (IJCNN) 2011. We cordially invite researchers from relevant fields to participate: The competition is designed to allow for participation without special domain knowledge. Our benchmark has the following properties:
- Single-image, multi-class classification problem
- More than 40 classes
- More than 50,000 images in total
- Large, lifelike database
### Supported Tasks and Leaderboards
[Kaggle](https://www.kaggle.com/datasets/meowmeowmeowmeowmeow/gtsrb-german-traffic-sign) \
[Original](https://benchmark.ini.rub.de/gtsrb_results.html)
## Dataset Structure
### Data Instances
```
{
"Width": 31,
"Height": 31,
"Roi.X1": 6,
"Roi.Y1": 6,
"Roi.X2": 26,
"Roi.Y2": 26,
"ClassId": 20,
"Path": "Train/20/00020_00004_00002.png",
}
```
### Data Fields
- Width: width of the image
- Height: height of the image
- Roi.X1: upper-left X coordinate
- Roi.Y1: upper-left Y coordinate
- Roi.X2: lower-right X coordinate
- Roi.Y2: lower-right Y coordinate
- ClassId: class of the image
- Path: path of the image
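For illustration, the ROI fields can be turned into a crop box, e.g. for use with `PIL.Image.crop`. This is a minimal sketch against the sample instance above; the helper name is hypothetical.

```python
# Minimal sketch: derive the traffic sign's crop box from the ROI fields of a
# data instance (field names as documented above).

def roi_box(example: dict) -> tuple:
    """Return the (left, upper, right, lower) bounding box of the sign."""
    box = (example["Roi.X1"], example["Roi.Y1"],
           example["Roi.X2"], example["Roi.Y2"])
    # Sanity-check that the ROI lies inside the image.
    assert 0 <= box[0] < box[2] <= example["Width"]
    assert 0 <= box[1] < box[3] <= example["Height"]
    return box

example = {"Width": 31, "Height": 31, "Roi.X1": 6, "Roi.Y1": 6,
           "Roi.X2": 26, "Roi.Y2": 26, "ClassId": 20}
print(roi_box(example))  # (6, 6, 26, 26)
```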
### Data Splits
Categories: 43
Train: 39209
Test: 12630
## Dataset Creation
### Curation Rationale
Recognition of traffic signs is a challenging real-world problem of high industrial relevance. Although commercial systems have reached the market and several studies on this topic have been published, systematic unbiased comparisons of different approaches are missing and comprehensive benchmark datasets are not freely available.
Traffic sign recognition is a multi-class classification problem with unbalanced class frequencies. Traffic signs show a wide range of variations between classes in terms of color, shape, and the presence of pictograms or text. However, there exist subsets of classes (e.g., speed limit signs) that are very similar to each other.
The classifier has to cope with large variations in visual appearances due to illumination changes, partial occlusions, rotations, weather conditions, etc.
Humans are capable of recognizing the large variety of existing road signs with close to 100% correctness. This does not only apply to real-world driving, which provides both context and multiple views of a single traffic sign, but also to the recognition from single images.
<!-- ### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
[Needs More Information]
## Considerations for Using the Data
### Social Impact of Dataset
[Needs More Information]
### Discussion of Biases
[Needs More Information]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
[Needs More Information]
### Licensing Information
[Needs More Information]
### Citation Information
[Needs More Information] -->
| 4,829 | [
[
-0.038055419921875,
-0.0190277099609375,
0.01525115966796875,
0.00830841064453125,
-0.042572021484375,
0.0130462646484375,
-0.012054443359375,
-0.05908203125,
0.009857177734375,
0.01202392578125,
-0.028564453125,
-0.061798095703125,
-0.062347412109375,
0.013... |
jakartaresearch/semeval-absa | 2022-08-14T05:38:21.000Z | [
"task_categories:text-classification",
"task_ids:sentiment-classification",
"annotations_creators:found",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:en",
"license:cc-by-4.0",
"aspect-based-sentiment-analysis",
"seme... | jakartaresearch | This dataset is built as a playground for aspect-based sentiment analysis. | null | 1 | 71 | 2022-08-14T05:35:35 | ---
annotations_creators:
- found
language:
- en
language_creators:
- found
license:
- cc-by-4.0
multilinguality:
- monolingual
pretty_name: 'SemEval 2015: Aspect-based Sentiment Analysis'
size_categories:
- 1K<n<10K
source_datasets:
- original
tags:
- aspect-based-sentiment-analysis
- semeval
- semeval2015
task_categories:
- text-classification
task_ids:
- sentiment-classification
---
# Dataset Card for SemEval Task 12: Aspect-based Sentiment Analysis
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
This dataset is originally from [SemEval-2015 Task 12](https://alt.qcri.org/semeval2015/task12/).
From the page:
> SE-ABSA15 will focus on the same domains as SE-ABSA14 (restaurants and laptops). However, unlike SE-ABSA14, the input datasets of SE-ABSA15 will contain entire reviews, not isolated (potentially out of context) sentences. SE-ABSA15 consolidates the four subtasks of SE-ABSA14 within a unified framework. In addition, SE-ABSA15 will include an out-of-domain ABSA subtask, involving test data from a domain unknown to the participants, other than the domains that will be considered during training. In particular, SE-ABSA15 consists of the following two subtasks.
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@andreaschandra](https://github.com/andreaschandra) for adding this dataset. | 3,556 | [
[
-0.04443359375,
-0.035919189453125,
0.02337646484375,
0.01922607421875,
-0.021697998046875,
-0.00020265579223632812,
-0.01490020751953125,
-0.02545166015625,
0.03521728515625,
0.047637939453125,
-0.07183837890625,
-0.0699462890625,
-0.04656982421875,
0.01265... |
truongpdd/vietnamese_story | 2022-09-23T04:44:26.000Z | [
"region:us"
] | truongpdd | null | null | 0 | 71 | 2022-09-23T04:43:49 | Entry not found | 15 | [
[
-0.0213775634765625,
-0.014984130859375,
0.05718994140625,
0.0288543701171875,
-0.0350341796875,
0.046478271484375,
0.052520751953125,
0.005062103271484375,
0.051361083984375,
0.016998291015625,
-0.0521240234375,
-0.01496124267578125,
-0.0604248046875,
0.037... |
taln-ls2n/kpbiomed | 2022-12-01T10:52:09.000Z | [
"task_categories:text-generation",
"annotations_creators:unknown",
"language_creators:unknown",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"language:en",
"license:cc-by-nc-4.0",
"arxiv:2211.12124",
"region:us"
] | taln-ls2n | KPBiomed benchmark dataset for keyphrase extraction and generation. | \ | 3 | 71 | 2022-10-26T13:41:01 | ---
annotations_creators:
- unknown
language_creators:
- unknown
language:
- en
license:
- cc-by-nc-4.0
multilinguality:
- monolingual
task_categories:
- text-mining
- text-generation
task_ids:
- keyphrase-generation
- keyphrase-extraction
size_categories:
- 100K<n<1M
pretty_name: KP-Biomed
---
# KPBiomed, A Large-Scale Dataset for keyphrase generation
## About
This dataset consists of 5.6 million abstracts with author-assigned keyphrases.
Details about the dataset can be found in the original paper:
Maël Houbre, Florian Boudin and Béatrice Daille. 2022. [A Large-Scale Dataset for Biomedical Keyphrase Generation](https://arxiv.org/abs/2211.12124). In Proceedings of the 13th International Workshop on Health Text Mining and Information Analysis (LOUHI 2022).
Reference (author-assigned) keyphrases are also categorized under the PRMU (<u>P</u>resent-<u>R</u>eordered-<u>M</u>ixed-<u>U</u>nseen) scheme as proposed in the following paper:
- Florian Boudin and Ygor Gallina. 2021.
[Redefining Absent Keyphrases and their Effect on Retrieval Effectiveness](https://aclanthology.org/2021.naacl-main.330/).
In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 4185–4193, Online. Association for Computational Linguistics.
Text pre-processing (tokenization) is carried out using spacy (en_core_web_sm model) with a special rule to avoid splitting words with hyphens (e.g. graph-based is kept as one token). Stemming (Porter's stemmer implementation provided in nltk) is applied before reference keyphrases are matched against the source text.
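A simplified sketch of the PRMU categorization described above. Naive lowercased token matching stands in here for the spacy tokenization and Porter stemming used in the actual pipeline, and the function name is hypothetical.

```python
# Simplified PRMU categorization: Present (contiguous match), Reordered (all
# words occur, but not contiguously), Mixed (some words occur), Unseen (none).

def prmu_category(keyphrase: str, text: str) -> str:
    kp = keyphrase.lower().split()
    tokens = text.lower().split()
    # Present: the keyphrase occurs as a contiguous token sequence.
    if any(tokens[i:i + len(kp)] == kp
           for i in range(len(tokens) - len(kp) + 1)):
        return "P"
    seen = sum(w in tokens for w in kp)
    if seen == len(kp):
        return "R"  # Reordered
    if seen > 0:
        return "M"  # Mixed
    return "U"      # Unseen

text = "A large-scale dataset for biomedical keyphrase generation"
print(prmu_category("keyphrase generation", text))  # P
print(prmu_category("generation keyphrase", text))  # R
print(prmu_category("keyphrase extraction", text))  # M
print(prmu_category("neural networks", text))       # U
```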
## Content
The details of the dataset are in the table below:
| Split | # documents | # keyphrases by document (average) | % Present | % Reordered | % Mixed | % Unseen |
| :----------- | ----------: | ---------------------------------: | --------: | ----------: | ------: | -------: |
| Train small | 500k | 5.24 | 66.31 | 7.16 | 12.60 | 13.93 |
| Train medium | 2M | 5.24 | 66.30 | 7.18 | 12.57 | 13.95 |
| Train large | 5.6M | 5.23 | 66.32 | 7.18 | 12.55 | 13.95 |
| Validation | 20k | 5.25 | 66.44 | 7.07 | 12.45 | 14.05 |
| Test | 20k | 5.22 | 66.59 | 7.22 | 12.44 | 13.75 |
The following data fields are available:
- **id**: unique identifier of the document.
- **title**: title of the document.
- **abstract**: abstract of the document.
- **keyphrases**: list of reference keyphrases.
- **mesh terms**: list of indexer assigned MeSH terms if available (around 68% of the articles)
- **prmu**: list of <u>P</u>resent-<u>R</u>eordered-<u>M</u>ixed-<u>U</u>nseen categories for reference keyphrases.
- **authors**: list of the article's authors
- **year**: publication year
**NB**: The present keyphrases (represented by the "P" label in the PRMU column) are sorted by their order of appearance in the text (title + text).
| 3,190 | [
[
-0.0124664306640625,
-0.0229949951171875,
0.040618896484375,
0.0149078369140625,
-0.01520538330078125,
0.00481414794921875,
0.005252838134765625,
-0.00823974609375,
0.02435302734375,
0.031463623046875,
-0.032806396484375,
-0.06256103515625,
-0.050018310546875,
... |
timbrooks/instructpix2pix-clip-filtered | 2023-03-02T11:19:16.000Z | [
"size_categories:100K<n<1M",
"language:en",
"arxiv:2211.09800",
"region:us"
] | timbrooks | null | null | 11 | 71 | 2023-02-24T14:55:53 | ---
dataset_info:
features:
- name: original_prompt
dtype: string
- name: original_image
dtype: image
- name: edit_prompt
dtype: string
- name: edited_prompt
dtype: string
- name: edited_image
dtype: image
splits:
- name: train
num_bytes: 130930966429.88
num_examples: 313010
download_size: 63067247926
dataset_size: 130930966429.88
language:
- en
size_categories:
- 100K<n<1M
---
# Dataset Card for InstructPix2Pix CLIP-filtered
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://www.timothybrooks.com/instruct-pix2pix
- **Repository:** https://github.com/timothybrooks/instruct-pix2pix
- **Paper:** https://arxiv.org/abs/2211.09800
## Dataset Summary
The dataset can be used to train models to follow edit instructions. Edit instructions
are available in the `edit_prompt` field. `original_image` is the input image, and
`edited_image` is the image after applying the `edit_prompt` to the `original_image`.
Refer to the [GitHub repository](https://github.com/timothybrooks/instruct-pix2pix) to know more about
how this dataset can be used to train a model that can follow instructions.
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
The text descriptions are in English.
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
The license for this dataset is a custom license. Refer to the licensing file to know more.
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@sayakpaul](https://github.com/sayakpaul) for contributing this dataset. | 3,510 | [
[
-0.0283355712890625,
-0.0263214111328125,
0.0254669189453125,
0.00432586669921875,
-0.01898193359375,
-0.004886627197265625,
-0.0160675048828125,
-0.0280609130859375,
0.01837158203125,
0.041290283203125,
-0.055572509765625,
-0.0572509765625,
-0.047515869140625,
... |
TimoImhof/TriviaQA-in-SQuAD-format | 2023-04-01T13:43:14.000Z | [
"region:us"
] | TimoImhof | null | null | 0 | 71 | 2023-03-28T08:48:36 | ---
dataset_info:
features:
- name: id
dtype: string
- name: question
dtype: string
- name: context
dtype: string
- name: answers
struct:
- name: answer_start
sequence: int64
- name: text
sequence: string
splits:
- name: unmodified
num_bytes: 22886661
num_examples: 15368
- name: modified_30_percent
num_bytes: 22899894
num_examples: 15368
- name: modified_100_percent
num_bytes: 22929228
num_examples: 15368
download_size: 40760032
dataset_size: 68715783
---
# Dataset Card for "TriviaQA-in-SQuAD-format"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 720 | [
[
-0.02911376953125,
-0.0144805908203125,
0.0157012939453125,
0.0238494873046875,
-0.00727081298828125,
0.037445068359375,
0.021026611328125,
-0.0013751983642578125,
0.0596923828125,
0.0234222412109375,
-0.0711669921875,
-0.05499267578125,
-0.016937255859375,
... |
taka-yayoi/databricks-dolly-15k-ja | 2023-04-17T09:18:13.000Z | [
"license:cc-by-sa-3.0",
"region:us"
] | taka-yayoi | null | null | 2 | 71 | 2023-04-17T08:58:32 | ---
license: cc-by-sa-3.0
---
This dataset builds on the dataset below, with the column names renamed and the data converted to JSONL so that it can be used with Dolly's training scripts.
https://huggingface.co/datasets/kunishou/databricks-dolly-15k-ja
Dolly
https://github.com/databrickslabs/dolly | 213 | [
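The renaming and JSONL conversion described above can be sketched as follows (a minimal illustration with dummy rows; the source and target column names used here are assumptions, not confirmed by this card):

```python
import json

# Hypothetical source rows in the style of kunishou/databricks-dolly-15k-ja;
# the column names here are assumptions for illustration.
rows = [
    {"instruction": "...", "input": "...", "output": "..."},
]

# Map source columns to the names assumed for Dolly's training scripts.
rename = {"input": "context", "output": "response"}

def to_jsonl(rows, rename):
    """Rename columns and serialize each row as one JSON line."""
    lines = []
    for row in rows:
        converted = {rename.get(k, k): v for k, v in row.items()}
        # ensure_ascii=False keeps Japanese text readable in the output.
        lines.append(json.dumps(converted, ensure_ascii=False))
    return "\n".join(lines)

jsonl = to_jsonl(rows, rename)
```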
[
-0.037750244140625,
-0.055816650390625,
0.00994873046875,
0.0277099609375,
-0.041717529296875,
-0.004329681396484375,
0.016021728515625,
-0.018646240234375,
0.06689453125,
0.04547119140625,
-0.0592041015625,
-0.046142578125,
-0.05035400390625,
0.017608642578... |
gimmaru/glue-sst2 | 2023-05-08T03:00:47.000Z | [
"region:us"
] | gimmaru | null | null | 0 | 71 | 2023-05-08T03:00:07 | ---
dataset_info:
features:
- name: sentence
dtype: string
- name: label
dtype:
class_label:
names:
'0': negative
'1': positive
- name: idx
dtype: int32
splits:
- name: validation
num_bytes: 106252
num_examples: 872
download_size: 0
dataset_size: 106252
---
# Dataset Card for "glue-sst2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 491 | [
[
-0.007274627685546875,
-0.0286865234375,
0.0167083740234375,
0.01428985595703125,
-0.021148681640625,
0.0091705322265625,
0.0205078125,
0.003681182861328125,
0.06378173828125,
0.0154571533203125,
-0.05865478515625,
-0.038177490234375,
-0.045257568359375,
-0.... |
ThraggBilly/flickr30k_dataset | 2023-05-09T17:31:11.000Z | [
"region:us"
] | ThraggBilly | null | null | 0 | 71 | 2023-05-09T17:26:42 | ---
dataset_info:
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 4178820473.876
num_examples: 31783
download_size: 4402850196
dataset_size: 4178820473.876
---
# Dataset Card for "test_dataset3"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 406 | [
[
-0.0435791015625,
-0.0186309814453125,
0.017120361328125,
0.0229949951171875,
-0.006298065185546875,
-0.005970001220703125,
0.0291595458984375,
-0.01215362548828125,
0.037567138671875,
0.02734375,
-0.04974365234375,
-0.0474853515625,
-0.032501220703125,
-0.0... |
napsternxg/nyt_ingredients | 2023-10-07T00:45:48.000Z | [
"task_categories:token-classification",
"task_ids:named-entity-recognition",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"language:en",
"license:apache-2.0",
"recipe",
"ingredients",
"region:us"
] | napsternxg | New York Times Ingredient Phrase Tagger Dataset
We use a conditional random field model (CRF) to extract tags from labelled training data, which was tagged by human news assistants.
e wrote about our approach on the [New York Times Open blog](http://open.blogs.nytimes.com/2015/04/09/extracting-structured-data-from-recipes-using-conditional-random-fields/).
This repo contains scripts to extract the Quantity, Unit, Name, and Comments from unstructured ingredient phrases.
We use it on Cooking to format incoming recipes. Given the following input:
```
1 pound carrots, young ones if possible
Kosher salt, to taste
2 tablespoons sherry vinegar
2 tablespoons honey
2 tablespoons extra-virgin olive oil
1 medium-size shallot, peeled and finely diced
1/2 teaspoon fresh thyme leaves, finely chopped
Black pepper, to taste
``` | @misc{nytimesTaggedIngredients,
author = {Erica Greene and Adam Mckaig},
title = {{O}ur {T}agged {I}ngredients {D}ata is {N}ow on {G}it{H}ub --- archive.nytimes.com},
howpublished = {\\url{https://archive.nytimes.com/open.blogs.nytimes.com/2016/04/27/structured-ingredients-data-tagging/}},
year = {},
note = {[Accessed 03-10-2023]},
} | 0 | 71 | 2023-06-11T16:53:58 | ---
annotations_creators:
- expert-generated
language:
- en
language_creators:
- found
license:
- apache-2.0
multilinguality:
- monolingual
pretty_name: nyt_ingredients
size_categories:
- 100K<n<1M
source_datasets: []
tags:
- recipe
- ingredients
task_categories:
- token-classification
task_ids:
- named-entity-recognition
---
# New York Times Ingredient Phrase Tagger Dataset
Original source: https://github.com/nytimes/ingredient-phrase-tagger
From the source:
> We use a conditional random field model (CRF) to extract tags from labelled training data, which was tagged by human news assistants.
> We wrote about our approach on the [New York Times Open blog](http://open.blogs.nytimes.com/2015/04/09/extracting-structured-data-from-recipes-using-conditional-random-fields/).
> This repo contains scripts to extract the Quantity, Unit, Name, and Comments from unstructured ingredient phrases.
> We use it on Cooking to format incoming recipes. Given the following input:
```
1 pound carrots, young ones if possible
Kosher salt, to taste
2 tablespoons sherry vinegar
2 tablespoons honey
2 tablespoons extra-virgin olive oil
1 medium-size shallot, peeled and finely diced
1/2 teaspoon fresh thyme leaves, finely chopped
Black pepper, to taste
```
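A minimal sketch of the tagging target: given per-token labels like those the CRF produces, group the tokens into the Quantity, Unit, Name, and Comment fields. The label names (QTY, UNIT, NAME, COMMENT) are assumptions for illustration; the real tagger's label set may differ.

```python
from collections import defaultdict

def group_tags(tokens, labels):
    """Collect tokens under their predicted field label.

    Labels use hypothetical field names (QTY, UNIT, NAME, COMMENT)."""
    fields = defaultdict(list)
    for token, label in zip(tokens, labels):
        fields[label].append(token)
    return {label: " ".join(toks) for label, toks in fields.items()}

parsed = group_tags(
    ["1", "pound", "carrots", ",", "young", "ones", "if", "possible"],
    ["QTY", "UNIT", "NAME", "COMMENT", "COMMENT", "COMMENT", "COMMENT", "COMMENT"],
)
```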
| 1,257 | [
[
-0.0004286766052246094,
-0.04345703125,
0.021514892578125,
-0.004894256591796875,
-0.022735595703125,
-0.001621246337890625,
-0.006069183349609375,
-0.0213623046875,
0.016387939453125,
0.05926513671875,
-0.044677734375,
-0.056884765625,
-0.035430908203125,
0... |
mlfoundations/datacomp_1b | 2023-08-21T21:43:05.000Z | [
"license:cc-by-4.0",
"region:us"
] | mlfoundations | null | null | 5 | 71 | 2023-06-11T20:12:44 | ---
license: cc-by-4.0
---
## DataComp-1B
This repository contains metadata files for DataComp-1B. For details on how to use the metadata, please visit [our website](https://www.datacomp.ai/) and our [github repository](https://github.com/mlfoundations/datacomp).
We distribute the image url-text samples and metadata under a standard Creative Common CC-BY-4.0 license. The individual images are under their own copyrights.
## Terms and Conditions
We have terms of service that are similar to those adopted by HuggingFace (https://huggingface.co/terms-of-service), which covers their dataset library. Specifically, any content you download, access or use from our index, is at your own risk and subject to the terms of service or copyright limitations accompanying such content. The image url-text index, which is a research artifact, is provided as is. By using said index, you assume all risks, including but not limited to, liabilities related to image downloading and storage. | 985 | [
[
-0.035369873046875,
-0.042449951171875,
0.019134521484375,
0.02978515625,
-0.036102294921875,
-0.007007598876953125,
0.004611968994140625,
-0.043212890625,
0.0296783447265625,
0.0386962890625,
-0.06982421875,
-0.04742431640625,
-0.042236328125,
0.01963806152... |
csebuetnlp/dailydialogue_bn | 2023-07-22T07:41:50.000Z | [
"task_categories:conversational",
"task_categories:text-generation",
"task_categories:text2text-generation",
"annotations_creators:machine-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:extended",
"language:bn",
"license:cc-by-nc... | csebuetnlp | DailyDialogue (bengali) has been derived from the original English dataset. | @inproceedings{bhattacharjee-etal-2023-banglanlg,
title = "{B}angla{NLG} and {B}angla{T}5: Benchmarks and Resources for Evaluating Low-Resource Natural Language Generation in {B}angla",
author = "Bhattacharjee, Abhik and
Hasan, Tahmid and
Ahmad, Wasi Uddin and
Shahriyar, Rifat",
booktitle = "Findings of the Association for Computational Linguistics: EACL 2023",
month = may,
year = "2023",
address = "Dubrovnik, Croatia",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.findings-eacl.54",
pages = "726--735",
abstract = "This work presents {`}BanglaNLG,{'} a comprehensive benchmark for evaluating natural language generation (NLG) models in Bangla, a widely spoken yet low-resource language. We aggregate six challenging conditional text generation tasks under the BanglaNLG benchmark, introducing a new dataset on dialogue generation in the process. Furthermore, using a clean corpus of 27.5 GB of Bangla data, we pretrain {`}BanglaT5{'}, a sequence-to-sequence Transformer language model for Bangla. BanglaT5 achieves state-of-the-art performance in all of these tasks, outperforming several multilingual models by up to 9{\%} absolute gain and 32{\%} relative gain. We are making the new dialogue dataset and the BanglaT5 model publicly available at https://github.com/csebuetnlp/BanglaNLG in the hope of advancing future research on Bangla NLG.",
} | 2 | 71 | 2023-07-15T08:52:05 | ---
annotations_creators:
- machine-generated
language_creators:
- found
multilinguality:
- monolingual
size_categories:
- 100K<n<1M
source_datasets:
- extended
task_categories:
- conversational
- text-generation
- text2text-generation
language:
- bn
license:
- cc-by-nc-sa-4.0
---
# Dataset Card for `dailydialogue_bn`
## Table of Contents
- [Dataset Card for `dailydialogue_bn`](#dataset-card-for-dailydialogue_bn)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Usage](#usage)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Initial Data Collection and Normalization](#initial-data-collection-and-normalization)
- [Who are the source language producers?](#who-are-the-source-language-producers)
- [Annotations](#annotations)
- [Annotation process](#annotation-process)
- [Who are the annotators?](#who-are-the-annotators)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Repository:** [https://github.com/csebuetnlp/BanglaNLG](https://github.com/csebuetnlp/BanglaNLG)
- **Paper:** [**"BanglaNLG and BanglaT5: Benchmarks and Resources for Evaluating Low-Resource Natural Language Generation in Bangla"**](https://aclanthology.org/2023.findings-eacl.54/)
- **Point of Contact:** [Tahmid Hasan](mailto:tahmidhasan@cse.buet.ac.bd)
### Dataset Summary
This is a multi-turn dialogue dataset for Bengali, curated from the original English [DailyDialogue]() dataset using the state-of-the-art English-to-Bengali translation model introduced **[here](https://aclanthology.org/2020.emnlp-main.207/).**
### Supported Tasks and Leaderboards
[More information needed](https://github.com/csebuetnlp/BanglaNLG)
### Languages
* `Bengali`
### Usage
```python
from datasets import load_dataset
dataset = load_dataset("csebuetnlp/dailydialogue_bn")
```
## Dataset Structure
### Data Instances
One example from the dataset is given below in JSON format. Each element of the `dialogue` feature represents a single turn of the conversation.
```
{
"id": "130",
"dialogue":
[
"তোমার জন্মদিনের জন্য তুমি কি করবে?",
"আমি আমার বন্ধুদের সাথে পিকনিক করতে চাই, মা।",
"বাড়িতে পার্টি হলে কেমন হয়? এভাবে আমরা একসাথে হয়ে উদযাপন করতে পারি।",
"ঠিক আছে, মা। আমি আমার বন্ধুদের বাড়িতে আমন্ত্রণ জানাবো।"
]
}
```
### Data Fields
The data fields are as follows:
- `id`: a `string` feature.
- `dialogue`: a List of `string` feature.
### Data Splits
| split |count |
|----------|--------|
|`train`| 11118 |
|`validation`| 1000 |
|`test`| 1000 |
## Dataset Creation
For the training set, we translated the complete [DailyDialogue](https://aclanthology.org/N18-1101/) dataset using the English-to-Bangla translation model introduced [here](https://aclanthology.org/2020.emnlp-main.207/). Because automatic translation can introduce errors, we computed the [Language-Agnostic BERT Sentence Embeddings (LaBSE)](https://arxiv.org/abs/2007.01852) of the translations and the original sentences and measured their similarity. A datapoint was accepted if all of its constituent sentences had a similarity score over 0.7.
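The filtering step can be sketched as follows (a minimal illustration using placeholder vectors in place of actual LaBSE embeddings):

```python
import math

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def accept_datapoint(source_embs, translation_embs, threshold=0.7):
    """Keep a dialogue only if every sentence pair clears the threshold."""
    return all(
        cosine(s, t) >= threshold
        for s, t in zip(source_embs, translation_embs)
    )

# Placeholder vectors standing in for LaBSE sentence embeddings.
good = accept_datapoint([[1.0, 0.0], [0.6, 0.8]], [[0.9, 0.1], [0.5, 0.9]])
bad = accept_datapoint([[1.0, 0.0]], [[0.0, 1.0]])
```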
### Curation Rationale
[More information needed](https://github.com/csebuetnlp/BanglaNLG)
### Source Data
[DailyDialogue](https://arxiv.org/abs/1606.05250)
#### Initial Data Collection and Normalization
[More information needed](https://github.com/csebuetnlp/BanglaNLG)
#### Who are the source language producers?
[More information needed](https://github.com/csebuetnlp/BanglaNLG)
### Annotations
[More information needed](https://github.com/csebuetnlp/BanglaNLG)
#### Annotation process
[More information needed](https://github.com/csebuetnlp/BanglaNLG)
#### Who are the annotators?
[More information needed](https://github.com/csebuetnlp/BanglaNLG)
### Personal and Sensitive Information
[More information needed](https://github.com/csebuetnlp/BanglaNLG)
## Considerations for Using the Data
### Social Impact of Dataset
[More information needed](https://github.com/csebuetnlp/BanglaNLG)
### Discussion of Biases
[More information needed](https://github.com/csebuetnlp/BanglaNLG)
### Other Known Limitations
[More information needed](https://github.com/csebuetnlp/BanglaNLG)
## Additional Information
### Dataset Curators
[More information needed](https://github.com/csebuetnlp/BanglaNLG)
### Licensing Information
Contents of this repository are restricted to only non-commercial research purposes under the [Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License (CC BY-NC-SA 4.0)](https://creativecommons.org/licenses/by-nc-sa/4.0/). Copyright of the dataset contents belongs to the original copyright holders.
### Citation Information
If you use the dataset, please cite the following paper:
```
@inproceedings{bhattacharjee-etal-2023-banglanlg,
title = "{B}angla{NLG} and {B}angla{T}5: Benchmarks and Resources for Evaluating Low-Resource Natural Language Generation in {B}angla",
author = "Bhattacharjee, Abhik and
Hasan, Tahmid and
Ahmad, Wasi Uddin and
Shahriyar, Rifat",
booktitle = "Findings of the Association for Computational Linguistics: EACL 2023",
month = may,
year = "2023",
address = "Dubrovnik, Croatia",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.findings-eacl.54",
pages = "726--735",
abstract = "This work presents {`}BanglaNLG,{'} a comprehensive benchmark for evaluating natural language generation (NLG) models in Bangla, a widely spoken yet low-resource language. We aggregate six challenging conditional text generation tasks under the BanglaNLG benchmark, introducing a new dataset on dialogue generation in the process. Furthermore, using a clean corpus of 27.5 GB of Bangla data, we pretrain {`}BanglaT5{'}, a sequence-to-sequence Transformer language model for Bangla. BanglaT5 achieves state-of-the-art performance in all of these tasks, outperforming several multilingual models by up to 9{\%} absolute gain and 32{\%} relative gain. We are making the new dialogue dataset and the BanglaT5 model publicly available at https://github.com/csebuetnlp/BanglaNLG in the hope of advancing future research on Bangla NLG.",
}
```
### Contributions
Thanks to [@abhik1505040](https://github.com/abhik1505040) and [@Tahmid](https://github.com/Tahmid04) for adding this dataset. | 7,367 | [
[
-0.0303955078125,
-0.0635986328125,
-0.0035953521728515625,
0.03411865234375,
-0.0261077880859375,
0.005184173583984375,
-0.029815673828125,
-0.036773681640625,
0.0033283233642578125,
0.03558349609375,
-0.058074951171875,
-0.060791015625,
-0.034149169921875,
... |
Trelis/function_calling_extended | 2023-10-30T11:06:37.000Z | [
"task_categories:question-answering",
"task_categories:conversational",
"task_categories:text-generation",
"size_categories:n<1K",
"language:en",
"function call",
"function calling",
"function-calling",
"region:us"
] | Trelis | null | null | 19 | 71 | 2023-07-31T10:44:02 | ---
task_categories:
- question-answering
- conversational
- text-generation
language:
- en
tags:
- function call
- function calling
- function-calling
size_categories:
- n<1K
extra_gated_prompt: "Access to this dataset requires the purchase of a license [here](https://buy.stripe.com/fZeeVG5tP2Hxg7ecNj)"
extra_gated_fields:
Name: text
Affiliation: text
Email: text
I have purchased a license (access will be granted once your payment clears): checkbox
I agree to the terms of the license described on the dataset card: checkbox
---
# Trelis Function Calling Dataset
- Allows models to be fine-tuned for function-calling.
- The dataset is human generated and does not make use of Llama 2 or OpenAI!
- Contains 59 training and 17 test rows
- Based on eight functions: search_bing, search_arxiv, save_chat, read_json_file, list_files, get_current_weather, delete_file, clear_chat
Access this dataset by purchasing a license [HERE](https://buy.stripe.com/fZeeVG5tP2Hxg7ecNj).
Alternatively, you can find pre-trained function calling models for Llama 2 and Mistral [HERE](https://huggingface.co/Trelis/Llama-2-7b-chat-hf-function-calling-v2)
--Change-log--
11Oct2023: Minor update adding in short prompts like "duck" to which the LLM should respond with a description of a duck or ducks, not a function call.
22Aug2023: Major updates to the main branch:
- The 'systemPrompt' column is now replaced by 'functionList', which contains a raw list of function metadata without any guidance.
- The previous dataset, with 'systemPrompt' - containing specific instructions - has been moved to the 'explicit' branch.
- The 'implicit' branch is a copy of the 'explicit' branch, but with slightly less instruction provided to the LLM in the systemPrompt column.
The reason for these updates are:
- For one-shot model prompting, it is helpful to provide as much description as possible to the LLM.
- For fine-tuning, it is desirable to minimise the length of any added context describing functions, especially if not necessary.
Users can play around with the different levels of instruction provided. In summary:
- 'main' - provides the lowest level of instruction on how to use the functions
- 'implicit' - moderate instructions
- 'explicit' - detailed instructions
18Aug2023: Added new 'implicit' branch with a shorter system prompt. Performs similarly to main branch, but uses less tokens for prompting.
15Aug2023: Added datasets to fine-tune models for awareness of available functions.
## Fine-Tuning Notes and Scripts
The objective of function calling is for the model to return a structured json object *and nothing else*. The performance of fine-tuning depends **strongly** on how the attention mask and loss mask are set. For further details see the [Youtube Video Here](https://youtu.be/OQdp-OeG1as)
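A common way to set the loss mask is to mask the prompt tokens so that loss is computed only on the response. A minimal sketch (a generic illustration, not the exact recipe used by the notebooks; `-100` is the ignore index used by typical cross-entropy implementations):

```python
IGNORE_INDEX = -100  # ignored by typical cross-entropy loss implementations

def build_labels(prompt_ids, response_ids):
    """Labels for causal-LM fine-tuning: the prompt contributes no loss,
    only the (function-call) response does."""
    input_ids = prompt_ids + response_ids
    labels = [IGNORE_INDEX] * len(prompt_ids) + list(response_ids)
    return input_ids, labels

# Dummy token ids for illustration.
input_ids, labels = build_labels([101, 7592, 102], [123, 456])
```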
### QLoRa Training Notebook for Llama 2 (FREE)
- Access a basic Google Colab script for fine-tuning [here](https://colab.research.google.com/drive/1uMSS1o_8YOPyG1X_4k6ENEE3kJfBGGhH?usp=sharing).
### ADVANCED Fine-tuning Notebook for Structured Responses (incl. function calling) (PAID)
- Fine-tune models for function calling or other structured responses.
- Includes a prompt loss-mask for improved performance when structured responses are required.
- Includes a stop token after responses - allowing the model to provide a short response (e.g. a function call) and then stop.
- Request [access here](https://buy.stripe.com/5kAfZK6xT2Hxg7e8wW).
## Licensing
The Function Calling Extended dataset is commercially licensed. Users can purchase a license per seat/user from [here](https://buy.stripe.com/fZeeVG5tP2Hxg7ecNj).
Further terms:
- Licenses are not transferable to other users/entities.
### Attribution of data sources
This project includes data from the TruthfulQA dataset, which is available at: https://huggingface.co/datasets/truthful_qa. The truthful_qa dataset is licensed under the Apache License 2.0, Copyright (C) 2023, Stephanie Lin, Jacob Hilton, and Owain Evans.
## Dataset Structure
The datasets (train and test) contain three prompt types:
1. The first portion provides function metadata in the systemPrompt but then has userPrompt and assistantResponse values that do not require function calling. This is to get the language model accustomed to having function metadata available, but not using it. Questions and answers for these prompts are generated by running addBlank.py and are drawn from [truthful_qa](https://huggingface.co/datasets/truthful_qa) - see below for license details.
2. The second portion of the train and test datasets provide examples where a function call is necessary.
3. The third portion (new as of August 13th 2023) acclimatises the model to recognising what functions it has available from the system prompt, and to sharing that with the user when appropriate. Extended further on October 11th with one- and two-word prompts that do not require function calls as responses.
## Branches
Specify the branch using:
```
data = load_dataset(
"Trelis/function_calling_extended",
revision="implicit" # optionally specify a branch
)
```
The 'main' branch uses a short system/function prompt, with no instruction on usage (see the other branches for prompts with stronger instruction):
```
{ "function": "search_bing", "description": "Search the web for content on Bing. This allows users to search online/the internet/the web for content.", "arguments": [ { "name": "query", "type": "string", "description": "The search query string" } ] } { "function": "list_files", "description": "This function provides a list of files in the user's directory. It can be useful when the user wants to check what files they have. This function requires no parameters and returns no values.", "arguments": [] }
```
The 'explicit' branch provides detailed instructions to the language model on how to call functions:
```
You are a helpful research assistant. The following functions are available for you to fetch further data to answer user questions, if relevant: { "function": "search_bing", "description": "Search the web for content on Bing. This allows users to search online/the internet/the web for content.", "arguments": [ { "name": "query", "type": "string", "description": "The search query string" } ] } { "function": "list_files", "description": "This function provides a list of files in the user's directory. It can be useful when the user wants to check what files they have. This function requires no parameters and returns no values.", "arguments": [] } To call a function, respond - immediately and only - with a JSON object of the following format: { "function": "function_name", "arguments": { "argument1": value1, "argument2": value2 } }
```
The 'implicit' branch uses a shorter, less explicit system prompt that performs similarly and is therefore recommended, as it reduces the number of tokens used for prompting:
```
You are a helpful research assistant. The following functions are available for you to fetch further data to answer user questions, if relevant: { "function": "search_bing", "description": "Search the web for content on Bing. This allows users to search online/the internet/the web for content.", "arguments": [ { "name": "query", "type": "string", "description": "The search query string" } ] } { "function": "list_files", "description": "This function provides a list of files in the user's directory. It can be useful when the user wants to check what files they have. This function requires no parameters and returns no values.", "arguments": [] }
```
Said differently, the 'implicit' branch omits the following portion of the prompt:
```
To call a function, respond - immediately and only - with a JSON object of the following format: { "function": "function_name", "arguments": { "argument1": value1, "argument2": value2 } }
```
## Training and Inference Syntax
Here is sample prompt syntax for Llama. This will depend on the language model you use and also on how you wish to fine-tune the model:
```
# Define the roles and markers
B_INST, E_INST = "[INST]", "[/INST]"
B_SYS, E_SYS = "<<SYS>>\n", "\n<</SYS>>\n\n"
system_prompt = data['test'][index]['systemPrompt']
user_prompt = data['test'][index]['userPrompt']
correct_answer = data['test'][index]['assistantResponse']
# Format your prompt template
prompt = f"{B_INST} {B_SYS}{system_prompt.strip()}{E_SYS}{user_prompt.strip()} {E_INST}\n\n"
```
The `\n\n` after E_INST is important as it prevents E_INST from sometimes being tokenized with the ']' attached to the next characters. Using `\n\n` also gives the model the best chance of correctly deciding whether to call a function or provide a usual response.
Alternatively, you may prefer to stay away from the system prompt and create a separate wrapper for function descriptions (as an example for the data on 'main'):
```
# Define the roles and markers
B_INST, E_INST = "[INST]", "[/INST]"
B_FUNC, E_FUNC = "<FUNCTIONS>", "</FUNCTIONS>\n\n"
functionList = data['test'][index]['functionList']
user_prompt = data['test'][index]['userPrompt']
correct_answer = data['test'][index]['assistantResponse']
# Format your prompt template
prompt = f"{B_FUNC}{functionList.strip()}{E_FUNC}{B_INST} {user_prompt.strip()} {E_INST}\n\n"
```
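Once the model replies, the JSON function call can be parsed and dispatched. A minimal sketch, assuming the response format shown earlier (the handler implementation and file names are illustrative):

```python
import json

def list_files():
    """Stand-in handler for the list_files function."""
    return ["notes.txt", "data.csv"]

# Map function names from the model's JSON reply to local handlers.
HANDLERS = {"list_files": lambda args: list_files()}

def dispatch(model_output):
    """Parse a {"function": ..., "arguments": {...}} reply and run it."""
    call = json.loads(model_output)
    handler = HANDLERS[call["function"]]
    return handler(call.get("arguments", {}))

result = dispatch('{"function": "list_files", "arguments": {}}')
```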
## File Structure (for prompt dataset generation)
- `functions/`: This directory contains function files, each of which is a JSON file with a specific structure that describes a function and its sample prompts and responses.
- `generate_dataset.py`: This Python script generates the base training and testing dataset CSV files.
- `addBlank.py`: This adds truthful_qa questions and answers after system prompts that include functions
- `hello.py`: adds in prompts to accustomise the model to the presence of functions in the system prompt.
### JSON File Structure
Each function file should be a JSON file with the following structure:
```json
{
"functionMetaData": {
"function": "function_name",
"description": "function_description",
"arguments": [
{
"name": "argument_name",
"type": "argument_type",
"description": "argument_description"
},
...
]
},
"samplePromptResponsePairs": [
{
"prompt": "sample_prompt",
"response": {
"arguments": {
"argument_name": "argument_value",
...
}
}
},
...
]
}
```
The `functionMetaData` object describes the function. The `samplePromptResponsePairs` array contains sample prompts and responses for the function.
## Dataset Generation
To generate the dataset, run the `generate_dataset.py` script. This script will iterate over each function file and generate a CSV row for each sample prompt-response pair.
## CSV File Structure
The generated CSV file has the following columns:
'main' branches:
- `functionList`: Descriptions of two functions (the current function and a randomly selected other function).
- `userPrompt`: The user's prompt.
- `assistantResponse`: The assistant's response.
'explicit' and 'implicit' branches:
- `systemPrompt`: The system's prompt, which includes the descriptions of two functions (the current function and a randomly selected other function) and instructions on how to call a function ('explicit branch only').
- `userPrompt`: The user's prompt.
- `assistantResponse`: The assistant's response.
## Testing JSON Structure
A script named `validate.py` can be used to validate the structure of a function JSON file. It checks for the presence and correct types of all necessary keys in the JSON structure.
To use the script, call it from the command line with the name of the function file as an argument:
```
python validate.py my_function.json
```
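The validation described above can be sketched like this (a simplified stand-in for `validate.py`, checking only the top-level keys and types of the structure shown earlier):

```python
def validate_function_file(data):
    """Check the JSON structure described above; return a list of problems."""
    errors = []
    meta = data.get("functionMetaData")
    if not isinstance(meta, dict):
        errors.append("missing or invalid functionMetaData")
    else:
        for key in ("function", "description"):
            if not isinstance(meta.get(key), str):
                errors.append(f"functionMetaData.{key} must be a string")
        if not isinstance(meta.get("arguments"), list):
            errors.append("functionMetaData.arguments must be a list")
    if not isinstance(data.get("samplePromptResponsePairs"), list):
        errors.append("samplePromptResponsePairs must be a list")
    return errors

ok = validate_function_file({
    "functionMetaData": {"function": "list_files", "description": "...", "arguments": []},
    "samplePromptResponsePairs": [],
})
bad = validate_function_file({})
```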
| 11,909 | [
[
-0.0165557861328125,
-0.06121826171875,
0.01265716552734375,
0.0190887451171875,
-0.018402099609375,
0.01386260986328125,
0.0014476776123046875,
-0.02850341796875,
0.010406494140625,
0.047821044921875,
-0.06707763671875,
-0.054656982421875,
-0.015899658203125,
... |
DynamicSuperb/SpoofDetection_ASVspoof2015 | 2023-10-19T04:49:24.000Z | [
"region:us"
] | DynamicSuperb | null | null | 0 | 71 | 2023-08-11T11:03:56 | ---
dataset_info:
features:
- name: file
dtype: string
- name: audio
struct:
- name: array
sequence: float64
- name: path
dtype: string
- name: sampling_rate
dtype: int64
- name: instruction
dtype: string
- name: label
dtype: string
splits:
- name: test
num_bytes: 13845431340
num_examples: 34177
download_size: 3426587393
dataset_size: 13845431340
---
# Dataset Card for "SpoofDetection_ASVspoof2015"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 606 | [
[
-0.030181884765625,
-0.0246429443359375,
0.003597259521484375,
0.035491943359375,
-0.01132965087890625,
0.00028204917907714844,
0.032806396484375,
-0.0225067138671875,
0.0635986328125,
0.04022216796875,
-0.064208984375,
-0.0396728515625,
-0.0479736328125,
-0... |
erfanzar/UltraChat-Mini | 2023-09-07T12:13:32.000Z | [
"region:us"
] | erfanzar | null | null | 0 | 71 | 2023-09-07T12:04:50 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: dialog
sequence: string
- name: user
sequence: string
- name: assistant
sequence: string
- name: system
dtype: string
- name: id
dtype: int64
- name: llama2_prompt
dtype: string
splits:
- name: train
num_bytes: 6005323184
num_examples: 239641
download_size: 2964129142
dataset_size: 6005323184
---
# Dataset Card for "UltraChat-Mini"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 642 | [
[
-0.040924072265625,
-0.0361328125,
0.01184844970703125,
-0.0023288726806640625,
-0.0245513916015625,
0.00803375244140625,
0.022857666015625,
-0.0107421875,
0.06854248046875,
0.025299072265625,
-0.0684814453125,
-0.03759765625,
-0.01483917236328125,
-0.026916... |
fiveflow/psychology-dataset-v2 | 2023-10-10T05:03:48.000Z | [
"region:us"
] | fiveflow | null | null | 0 | 71 | 2023-10-10T03:18:11 | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 716480
num_examples: 996
download_size: 189768
dataset_size: 716480
---
# Dataset Card for "psychology-dataset-v2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 359 | [
[
-0.033599853515625,
-0.007480621337890625,
0.027679443359375,
0.036224365234375,
-0.00243377685546875,
-0.01593017578125,
0.0104522705078125,
-0.0242919921875,
0.048431396484375,
0.0199127197265625,
-0.0806884765625,
-0.037353515625,
-0.050445556640625,
-0.0... |
ehartford/ultrachat-uncensored | 2023-10-23T05:29:16.000Z | [
"license:mit",
"region:us"
] | ehartford | null | null | 12 | 71 | 2023-10-12T05:25:04 | ---
license: mit
---
This dataset is based on the UltraChat dataset: https://huggingface.co/datasets/stingning/ultrachat
I filtered it using the classic "unfiltered" keywords list https://huggingface.co/datasets/anon8231489123/ShareGPT_Vicuna_unfiltered to remove instances of refusals and bias.
About 90% of the dataset was removed.
What remains (400k conversations) is unlikely to incline the model to refuse.
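A rough sketch of this kind of keyword filtering is shown below. The keyword list and helper names here are illustrative assumptions only; the actual filtering used the much longer ShareGPT_Vicuna_unfiltered keyword list, not these three phrases.

```python
# Illustrative refusal keywords -- the real filtering used the
# ShareGPT_Vicuna_unfiltered keyword list, which is much longer.
REFUSAL_KEYWORDS = [
    "as an ai language model",
    "i cannot fulfill",
    "i'm sorry, but",
]

def is_clean(conversation):
    """Keep a conversation only if no turn contains a refusal keyword."""
    return not any(
        kw in turn.lower() for turn in conversation for kw in REFUSAL_KEYWORDS
    )

conversations = [
    ["What is 2+2?", "4."],
    ["Tell me a joke.", "As an AI language model, I cannot fulfill that request."],
]
kept = [c for c in conversations if is_clean(c)]
```

Filtering whole conversations (rather than single turns) is what makes the approach heavy-handed: one flagged turn drops the entire exchange.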
I am investigating a less heavy-handed approach that uses dolphin-2.1 to reword any detected refusals. | 503 | [
[
-0.0521240234375,
-0.051116943359375,
0.0273590087890625,
-0.001461029052734375,
-0.0478515625,
-0.0149993896484375,
0.0011005401611328125,
-0.039031982421875,
0.025726318359375,
0.08343505859375,
-0.07061767578125,
-0.033111572265625,
-0.03350830078125,
0.0... |
pavitemple/Xclip-finetuning | 2023-10-24T13:31:20.000Z | [
"region:us"
] | pavitemple | null | null | 0 | 71 | 2023-10-12T20:46:41 | Entry not found | 15 | [
[
-0.02142333984375,
-0.014984130859375,
0.057220458984375,
0.0288238525390625,
-0.03509521484375,
0.04656982421875,
0.052520751953125,
0.00506591796875,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060455322265625,
0.03793334... |
haseong8012/child-10k | 2023-10-16T15:06:18.000Z | [
"region:us"
] | haseong8012 | null | null | 0 | 71 | 2023-10-16T14:20:16 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: text
dtype: string
- name: audio
sequence: float32
splits:
- name: train
num_bytes: 2077216016
num_examples: 10000
download_size: 1810220972
dataset_size: 2077216016
---
# Dataset Card for "korean-child-command-voice_train-0-10000_smaplingRate-16000"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 535 | [
[
-0.0312042236328125,
0.001079559326171875,
-0.0034503936767578125,
0.03533935546875,
-0.0212860107421875,
0.01076507568359375,
0.0020999908447265625,
0.01013946533203125,
0.041900634765625,
0.0335693359375,
-0.08599853515625,
-0.029998779296875,
-0.0466613769531... |
anhz/finetune_data | 2023-11-02T10:10:50.000Z | [
"region:us"
] | anhz | null | null | 0 | 71 | 2023-10-20T02:41:16 | Entry not found | 15 | [
[
-0.02142333984375,
-0.014984130859375,
0.057220458984375,
0.0288238525390625,
-0.03509521484375,
0.04656982421875,
0.052520751953125,
0.00506591796875,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060455322265625,
0.03793334... |
sayakpaul/drawbench | 2023-10-21T05:25:29.000Z | [
"license:apache-2.0",
"region:us"
] | sayakpaul | null | null | 2 | 71 | 2023-10-21T05:24:45 | ---
license: apache-2.0
---
DrawBench dataset from [Imagen](https://imagen.research.google/). | 94 | [
[
-0.0310211181640625,
-0.0307769775390625,
0.01434326171875,
0.019500732421875,
-0.01532745361328125,
-0.0183258056640625,
0.0193939208984375,
-0.027191162109375,
0.050506591796875,
0.07049560546875,
-0.05279541015625,
-0.041290283203125,
-0.0231781005859375,
... |
bodrum/fen-isleri-mudurlugu-netigma-dataset | 2023-10-23T00:50:22.000Z | [
"region:us"
] | bodrum | null | null | 1 | 71 | 2023-10-23T00:49:59 | Entry not found | 15 | [
[
-0.02142333984375,
-0.014984130859375,
0.057220458984375,
0.0288238525390625,
-0.03509521484375,
0.04656982421875,
0.052520751953125,
0.00506591796875,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060455322265625,
0.03793334... |
Geonmo/deepfashion-multimodal-descriptions | 2023-10-30T07:58:32.000Z | [
"region:us"
] | Geonmo | null | null | 0 | 71 | 2023-10-30T07:58:29 | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 9586020
num_examples: 40770
download_size: 2270474
dataset_size: 9586020
---
# Dataset Card for "deepfashion-multimodal-descriptions"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 378 | [
[
-0.05218505859375,
-0.0161285400390625,
0.019378662109375,
0.0224151611328125,
-0.0186920166015625,
0.01122283935546875,
-0.0019140243530273438,
-0.0224609375,
0.050445556640625,
0.029998779296875,
-0.06805419921875,
-0.04876708984375,
-0.0380859375,
-0.0118... |
Geonmo/deepfashion-multimodal-descriptions-split | 2023-10-30T08:06:07.000Z | [
"region:us"
] | Geonmo | null | null | 0 | 71 | 2023-10-30T08:06:04 | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 939822
num_examples: 11730
download_size: 247226
dataset_size: 939822
---
# Dataset Card for "deepfashion-multimodal-descriptions-split"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 381 | [
[
-0.054290771484375,
-0.0269012451171875,
0.0172119140625,
0.0202484130859375,
-0.029449462890625,
0.0225830078125,
-0.000652313232421875,
-0.0224761962890625,
0.054290771484375,
0.0311431884765625,
-0.06939697265625,
-0.040985107421875,
-0.043212890625,
-0.0... |
nateraw/pascal-voc-2012 | 2022-06-07T04:52:13.000Z | [
"region:us"
] | nateraw | null | null | 1 | 70 | 2022-06-07T04:38:46 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
BirdL/DALL-E-Dogs | 2022-09-28T21:09:11.000Z | [
"task_categories:image-classification",
"task_categories:unconditional-image-generation",
"size_categories:1K<n<10K",
"license:other",
"region:us"
] | BirdL | null | null | 1 | 70 | 2022-08-01T03:24:18 | ---
annotations_creators: []
language: []
language_creators: []
license:
- other
multilinguality: []
pretty_name: DALL-E Dogs Dataset
size_categories:
- 1K<n<10K
source_datasets: []
tags: []
task_categories:
- image-classification
- unconditional-image-generation
task_ids: []
---
DALL-E-Dogs is a dataset meant to produce a synthetic animal dataset. This is a precursor to DALL-E-Cats. DALL-E-Dogs and DALL-E-Cats will be fed into an image classifier to see how it performs. This is under the [BirdL-AirL License.](https://huggingface.co/spaces/BirdL/license/) | 562 | [
[
-0.043365478515625,
-0.042449951171875,
0.00637054443359375,
0.0161285400390625,
-0.0102386474609375,
0.025360107421875,
0.0248870849609375,
-0.03900146484375,
0.01512908935546875,
0.031402587890625,
-0.04736328125,
-0.026153564453125,
-0.002338409423828125,
... |
KonradSzafer/stackoverflow_linux | 2023-03-04T23:23:28.000Z | [
"task_categories:question-answering",
"size_categories:n<1K",
"language:en",
"region:us"
] | KonradSzafer | null | null | 1 | 70 | 2023-02-26T12:48:36 | ---
dataset_info:
features:
- name: title
dtype: string
- name: question
dtype: string
- name: answer
dtype: string
- name: url
dtype: string
splits:
- name: train
num_bytes: 303464
num_examples: 270
- name: test
num_bytes: 37456
num_examples: 30
download_size: 172425
dataset_size: 340920
task_categories:
- question-answering
language:
- en
pretty_name: Stack Overflow Linux
size_categories:
- n<1K
---
# Dataset Card for "stackoverflow_linux"
Dataset information:
- Source: Stack Overflow
- Category: Linux
- Number of samples: 300
- Train/Test split: 270/30
- Quality: Data come from the top 1k most upvoted questions
## Additional Information
### License
All Stack Overflow user contributions are licensed under CC-BY-SA 3.0 with attribution required.
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 945 | [
[
-0.047943115234375,
-0.0289764404296875,
0.007343292236328125,
0.0194549560546875,
-0.0285491943359375,
0.01995849609375,
0.00902557373046875,
-0.01125335693359375,
0.0361328125,
0.0248565673828125,
-0.047027587890625,
-0.03790283203125,
-0.0244903564453125,
... |
jxu124/refcoco | 2023-05-20T18:58:37.000Z | [
"region:us"
] | jxu124 | null | null | 0 | 70 | 2023-04-22T10:52:18 | ---
dataset_info:
features:
- name: sent_ids
sequence: int64
- name: file_name
dtype: string
- name: ann_id
dtype: int64
- name: ref_id
dtype: int64
- name: image_id
dtype: int64
- name: split
dtype: string
- name: sentences
list:
- name: raw
dtype: string
- name: sent
dtype: string
- name: sent_id
dtype: int64
- name: tokens
sequence: string
- name: category_id
dtype: int64
- name: raw_anns
dtype: string
- name: raw_image_info
dtype: string
- name: raw_sentences
dtype: string
- name: image_path
dtype: string
- name: bbox
sequence: float64
- name: captions
sequence: string
- name: global_image_id
dtype: string
- name: anns_id
dtype: string
splits:
- name: train
num_bytes: 81385755
num_examples: 42404
- name: testB
num_bytes: 3284397
num_examples: 1810
- name: test
num_bytes: 3943834
num_examples: 1975
- name: validation
num_bytes: 7355626
num_examples: 3811
download_size: 38895129
dataset_size: 95969612
---
# Dataset Card for "refcoco"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 1,266 | [
[
-0.041290283203125,
-0.00348663330078125,
0.00928497314453125,
0.01078033447265625,
-0.01537322998046875,
-0.004055023193359375,
0.0238037109375,
-0.0265350341796875,
0.0621337890625,
0.046051025390625,
-0.0560302734375,
-0.045166015625,
-0.0257415771484375,
... |
nicholasKluge/instruct-aira-dataset | 2023-11-02T17:21:07.000Z | [
"task_categories:conversational",
"task_categories:text-generation",
"size_categories:10K<n<100K",
"language:pt",
"language:en",
"language:es",
"license:apache-2.0",
"alignment",
"instruction",
"chat",
"region:us"
] | nicholasKluge | null | null | 2 | 70 | 2023-06-07T17:09:55 | ---
license: apache-2.0
task_categories:
- conversational
- text-generation
language:
- pt
- en
- es
tags:
- alignment
- instruction
- chat
pretty_name: Instruct-Aira Dataset
size_categories:
- 10K<n<100K
dataset_info:
features:
- name: prompt
dtype: string
- name: completion
dtype: string
splits:
- name: portuguese
num_bytes: 53113297
num_examples: 41815
- name: english
num_bytes: 47263211
num_examples: 41815
- name: spanish
num_bytes: 54272293
num_examples: 41815
download_size: 86279324
dataset_size: 154648801
---
# Dataset (`Instruct-Aira Dataset`)
### Overview
This dataset contains a collection of demonstrations on how to answer questions and follow instructions. We used prompts from the [`synthetic-instruct-gptj-pairwise`](https://huggingface.co/datasets/Dahoas/synthetic-instruct-gptj-pairwise) dataset, the [`databricks_dolly_15k`](https://huggingface.co/datasets/HuggingFaceH4/databricks_dolly_15k) dataset, and the [`instruction-dataset`](https://huggingface.co/datasets/HuggingFaceH4/instruction-dataset) dataset to create an instruction-tuning dataset in which the completions were generated by already-tuned models (ChatGPT, Llama 2, Open-Assistant, etc.). The dataset is available in Portuguese, English, and Spanish.
### Dataset Details
- **Dataset Name:** Instruct-Aira Dataset
- **Language:** Portuguese, English, Spanish
- **Total Size:** Over 41,000 demonstrations
### Contents
The dataset consists of data frames with the following columns:
- **Prompt:** The initial text or question provided to the model.
- **Completion:** The demonstration of a generated completion or response for the given prompt.
```python
{
"prompt":"What is the capital of Brazil?",
"completion": "The capital of Brazil is Brasília."
}
```
All `prompt + completion` examples are less than 400 tokens (measured using the `GPT-2` and `BLOOM` tokenizers).
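A length check along these lines can be sketched as follows. Note the hedge: this snippet uses a crude whitespace split as a stand-in for tokenization, whereas the 400-token bound above was measured with the actual `GPT-2` and `BLOOM` tokenizers; the helper names are assumptions for the example.

```python
def approx_token_count(text):
    """Very rough token count; the real measurements used GPT-2/BLOOM tokenizers."""
    return len(text.split())

def within_budget(example, max_tokens=400):
    """Check that prompt + completion fit within the token budget."""
    combined = example["prompt"] + " " + example["completion"]
    return approx_token_count(combined) < max_tokens

example = {
    "prompt": "What is the capital of Brazil?",
    "completion": "The capital of Brazil is Brasília.",
}
```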
### Use Cases
`Instruct-Aira Dataset` can be utilized for various natural language processing tasks, including but not limited to:
- Language generation.
- Question-answering systems.
- Chatbot development.
- Evaluation of language models.
- AI ethics research.
- Alignment research.
## How to use
Available splits are `english`, `portuguese`, and `spanish`.
```python
from datasets import load_dataset
dataset = load_dataset("nicholasKluge/instruct-aira-dataset")
```
### Dataset License
The `Instruct-Aira Dataset` is licensed under the Apache License, Version 2.0. See the [LICENSE](LICENSE) file for more details.
### Disclaimer
This dataset is provided as is, without any warranty or guarantee of its accuracy or suitability for any purpose. The creators and contributors of this dataset are not liable for any damages or losses arising from its use. Please review and comply with the licenses and terms of the original datasets before use. | 2,892 | [
[
-0.02362060546875,
-0.07733154296875,
0.01320648193359375,
0.0252838134765625,
-0.003082275390625,
-0.0111083984375,
-0.0101776123046875,
0.0005903244018554688,
0.01451873779296875,
0.039398193359375,
-0.04937744140625,
-0.029388427734375,
-0.032684326171875,
... |
fake-news-UFG/FakeNewsSet | 2023-08-18T17:36:21.000Z | [
"task_categories:text-classification",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:n<1K",
"language:pt",
"license:mit",
"region:us"
] | fake-news-UFG | \ | @inproceedings{10.1145/3428658.3430965,
author = {da Silva, Fl\'{a}vio Roberto Matias and Freire, Paulo M\'{a}rcio Souza and de Souza, Marcelo Pereira and de A. B. Plenamente, Gustavo and Goldschmidt, Ronaldo Ribeiro},
title = {FakeNewsSetGen: A Process to Build Datasets That Support Comparison Among Fake News Detection Methods},
year = {2020},
isbn = {9781450381963},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
url = {https://doi.org/10.1145/3428658.3430965},
doi = {10.1145/3428658.3430965},
abstract = {Due to easy access and low cost, social media online news consumption has increased significantly for the last decade. Despite their benefits, some social media allow anyone to post news with intense spreading power, which amplifies an old problem: the dissemination of Fake News. In the face of this scenario, several machine learning-based methods to automatically detect Fake News (MLFN) have been proposed. All of them require datasets to train and evaluate their detection models. Although recent MLFN were designed to consider data regarding the news propagation on social media, most of the few available datasets do not contain this kind of data. Hence, comparing the performances amid those recent MLFN and the others is restricted to a very limited number of datasets. Moreover, all existing datasets with propagation data do not contain news in Portuguese, which impairs the evaluation of the MLFN in this language. Thus, this work proposes FakeNewsSetGen, a process that builds Fake News datasets that contain news propagation data and support comparison amid the state-of-the-art MLFN. FakeNewsSetGen's software engineering process was guided to include all kind of data required by the existing MLFN. In order to illustrate FakeNewsSetGen's viability and adequacy, a case study was carried out. It encompassed the implementation of a FakeNewsSetGen prototype and the application of this prototype to create a dataset called FakeNewsSet, with news in Portuguese. Five MLFN with different kind of data requirements (two of them demanding news propagation data) were applied to FakeNewsSet and compared, demonstrating the potential use of both the proposed process and the created dataset.},
booktitle = {Proceedings of the Brazilian Symposium on Multimedia and the Web},
pages = {241–248},
numpages = {8},
keywords = {Fake News detection, Dataset building process, social media},
location = {S\~{a}o Lu\'{\i}s, Brazil},
series = {WebMedia '20}
} | 0 | 70 | 2023-08-18T14:54:33 | ---
license: mit
task_categories:
- text-classification
language:
- pt
size_categories:
- n<1K
language_details: pt-BR
multilinguality:
- monolingual
language_creators:
- found
---
# FakeNewsSet
## Dataset Description
- **Homepage:**
- **Repository:** [https://dl.acm.org/doi/abs/10.1145/3428658.3430965](https://dl.acm.org/doi/abs/10.1145/3428658.3430965)
- **Paper:** [https://dl.acm.org/doi/abs/10.1145/3428658.3430965](https://dl.acm.org/doi/abs/10.1145/3428658.3430965)
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
The dataset is in Portuguese.
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
If you use "FakeNewsSet", please cite:
```bibtex
@inproceedings{10.1145/3428658.3430965,
author = {da Silva, Fl\'{a}vio Roberto Matias and Freire, Paulo M\'{a}rcio Souza and de Souza, Marcelo Pereira and de A. B. Plenamente, Gustavo and Goldschmidt, Ronaldo Ribeiro},
title = {FakeNewsSetGen: A Process to Build Datasets That Support Comparison Among Fake News Detection Methods},
year = {2020},
isbn = {9781450381963},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
url = {https://doi.org/10.1145/3428658.3430965},
doi = {10.1145/3428658.3430965},
abstract = {Due to easy access and low cost, social media online news consumption has increased significantly for the last decade. Despite their benefits, some social media allow anyone to post news with intense spreading power, which amplifies an old problem: the dissemination of Fake News. In the face of this scenario, several machine learning-based methods to automatically detect Fake News (MLFN) have been proposed. All of them require datasets to train and evaluate their detection models. Although recent MLFN were designed to consider data regarding the news propagation on social media, most of the few available datasets do not contain this kind of data. Hence, comparing the performances amid those recent MLFN and the others is restricted to a very limited number of datasets. Moreover, all existing datasets with propagation data do not contain news in Portuguese, which impairs the evaluation of the MLFN in this language. Thus, this work proposes FakeNewsSetGen, a process that builds Fake News datasets that contain news propagation data and support comparison amid the state-of-the-art MLFN. FakeNewsSetGen's software engineering process was guided to include all kind of data required by the existing MLFN. In order to illustrate FakeNewsSetGen's viability and adequacy, a case study was carried out. It encompassed the implementation of a FakeNewsSetGen prototype and the application of this prototype to create a dataset called FakeNewsSet, with news in Portuguese. Five MLFN with different kind of data requirements (two of them demanding news propagation data) were applied to FakeNewsSet and compared, demonstrating the potential use of both the proposed process and the created dataset.},
booktitle = {Proceedings of the Brazilian Symposium on Multimedia and the Web},
pages = {241–248},
numpages = {8},
keywords = {Fake News detection, Dataset building process, social media},
location = {S\~{a}o Lu\'{\i}s, Brazil},
series = {WebMedia '20}
}
```
### Contributions
Thanks to [@ju-resplande](https://github.com/ju-resplande) for adding this dataset. | 4,299 | [
[
-0.0299530029296875,
-0.0655517578125,
0.00800323486328125,
0.04345703125,
-0.019622802734375,
0.004497528076171875,
-0.01512908935546875,
-0.032989501953125,
0.034271240234375,
0.023284912109375,
-0.045684814453125,
-0.056365966796875,
-0.05242919921875,
0.... |
stepkurniawan/sustainability-methods-wiki | 2023-10-16T18:53:27.000Z | [
"license:mit",
"region:us"
] | stepkurniawan | null | null | 0 | 70 | 2023-09-20T12:48:53 | ---
license: mit
configs:
- config_name: 50_QA
data_files:
- split: train
path: 50_QA/train-*
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
config_name: 50_QA
features:
- name: contexts
dtype: string
- name: summary
dtype: string
- name: question
dtype: string
- name: ground_truths
dtype: string
splits:
- name: train
num_bytes: 78182
num_examples: 50
download_size: 57005
dataset_size: 78182
---
This is a table dump from Prof. Henrik van Wehrden's well-known sustainability wiki. He is a sustainability professor at Leuphana University, Germany, who is passionate about digitalizing his knowledge, which is how the wiki came to be.
These wiki pages focus on sustainability and reflect his own, highly subjective view of the world.
Link: https://sustainabilitymethods.org/index.php/Main_Page | 873 | [
[
-0.053253173828125,
-0.054046630859375,
0.04083251953125,
-0.0178680419921875,
-0.022125244140625,
-0.01502227783203125,
0.01061248779296875,
-0.02838134765625,
0.049591064453125,
0.03741455078125,
-0.058197021484375,
-0.01464080810546875,
-0.0114288330078125,
... |
jrs-a/batangueno-accent | 2023-10-09T17:00:58.000Z | [
"region:us"
] | jrs-a | null | null | 0 | 70 | 2023-10-09T13:11:04 | ---
dataset_info:
features:
- name: file
dtype: string
- name: audio
dtype: audio
- name: input_length
dtype: string
- name: transcription
dtype: string
splits:
- name: train
num_bytes: 244706143.0
num_examples: 471
download_size: 225571755
dataset_size: 244706143.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "batangueno-accent"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 570 | [
[
-0.0433349609375,
-0.0196380615234375,
-0.005191802978515625,
0.031036376953125,
-0.01617431640625,
0.0221099853515625,
-0.00994873046875,
-0.0236053466796875,
0.06976318359375,
0.049072265625,
-0.057525634765625,
-0.06109619140625,
-0.033905029296875,
0.000... |
mekaneeky/salt-dataset | 2023-10-24T15:03:50.000Z | [
"region:us"
] | mekaneeky | null | null | 0 | 70 | 2023-10-19T11:10:18 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: dev
path: data/dev-*
- split: test
path: data/test-*
dataset_info:
features:
- name: eng
dtype: string
- name: lug
dtype: string
- name: ach
dtype: string
- name: teo
dtype: string
- name: lgg
dtype: string
- name: nyn
dtype: string
- name: ID
dtype: string
splits:
- name: train
num_bytes: 8854760
num_examples: 23947
- name: dev
num_bytes: 181932
num_examples: 500
- name: test
num_bytes: 187752
num_examples: 500
download_size: 6020099
dataset_size: 9224444
---
# Dataset Card for "salt-dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 822 | [
[
-0.0433349609375,
-0.000023066997528076172,
0.022613525390625,
0.020294189453125,
-0.02197265625,
0.005756378173828125,
0.0181121826171875,
-0.00732421875,
0.051544189453125,
0.017974853515625,
-0.049591064453125,
-0.051025390625,
-0.0419921875,
-0.015365600... |
ColumbiaNLP/FLUTE | 2022-10-07T18:28:02.000Z | [
"task_categories:text-classification",
"task_categories:text2text-generation",
"task_ids:natural-language-inference",
"task_ids:explanation-generation",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"language_creators:machine-generated",
"language_creators:crowdsourced... | ColumbiaNLP | null | null | 7 | 69 | 2022-07-05T14:38:38 | ---
annotations_creators:
- expert-generated
language:
- en
language_creators:
- expert-generated
- machine-generated
- crowdsourced
license:
- afl-3.0
multilinguality:
- monolingual
pretty_name: FLUTE
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- text-classification
- text2text-generation
task_ids:
- natural-language-inference
- explanation-generation
---
# Dataset Card for FigLang2022SharedTask
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://figlang2022sharedtask.github.io/
- **Repository:**
- **Paper:** TBA
- **Point of Contact:** tuhin.chakr@cs.columbia.edu
### Dataset Summary
A model-in-the-loop approach for figurative language generation and explainability.
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
TBA
### Contributions
Thanks to [@github-username](https://github.com/<github-username>) for adding this dataset. | 2,750 | [
[
-0.04534912109375,
-0.03436279296875,
0.0078887939453125,
0.01715087890625,
-0.032562255859375,
0.007289886474609375,
-0.01953125,
-0.02813720703125,
0.031341552734375,
0.06640625,
-0.0699462890625,
-0.05615234375,
-0.047760009765625,
0.00769805908203125,
... |
GabrielVidal/dead-by-daylight-perks | 2022-11-27T16:06:46.000Z | [
"task_categories:image-classification",
"task_categories:text-to-image",
"task_ids:multi-class-image-classification",
"annotations_creators:found",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:n<1K",
"source_datasets:original",
"language:en",
"license:openrail",
"de... | GabrielVidal | null | null | 1 | 69 | 2022-11-21T20:42:24 | ---
license: openrail
dataset_info:
features:
- name: image
dtype: image
- name: name
dtype: string
- name: type
dtype: string
- name: description
dtype: string
splits:
- name: train
num_bytes: 22392351.0
num_examples: 219
download_size: 22365600
dataset_size: 22392351.0
annotations_creators:
- found
language:
- en
language_creators:
- found
multilinguality:
- monolingual
pretty_name: Dead by daylight video game perks
size_categories:
- n<1K
source_datasets:
- original
tags:
- dead by daylight
task_categories:
- image-classification
- text-to-image
task_ids:
- multi-class-image-classification
---
# Dataset Card for Dead by Daylight perks
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Dataset Creation](#dataset-creation)
- [Source Data](#source-data)
- [Additional Information](#additional-information)
- [Licensing Information](#licensing-information)
- [Contributions](#contributions)
### Dataset Summary
This dataset contains all perk images from the video game [Dead by Daylight](https://deadbydaylight.com/) (on a black background and upscaled to 512x512), together with each perk's type, name, and description (the first sentence) in English.
## Dataset Creation
### Source Data
All images and text have been found online, mainly on the [Dead by Daylight wiki](https://deadbydaylight.fandom.com/wiki/Dead_by_Daylight_Wiki).
## Additional Information
### Licensing Information
All images belong to [Dead by Daylight](https://deadbydaylight.com/).
### Contributions
Thanks to [@GabrielVidal1](https://github.com/GabrielVidal1) for adding this dataset. | 1,707 | [
[
0.0020961761474609375,
-0.0006804466247558594,
0.0277099609375,
0.0232086181640625,
-0.06195068359375,
0.01043701171875,
0.00212860107421875,
-0.0227508544921875,
0.046844482421875,
0.06646728515625,
-0.07464599609375,
-0.085205078125,
-0.020721435546875,
0.... |
animelover/genshin-impact-images | 2023-07-13T05:49:11.000Z | [
"region:us"
] | animelover | null | null | 16 | 69 | 2023-01-23T02:31:44 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
pcuenq/lsun-bedrooms | 2023-03-04T06:38:23.000Z | [
"license:mit",
"region:us"
] | pcuenq | null | null | 2 | 69 | 2023-03-02T09:57:31 | ---
dataset_info:
features:
- name: image
dtype: image
splits:
- name: train
num_bytes: 4450242498.020249
num_examples: 287968
- name: test
num_bytes: 234247797.33875093
num_examples: 15157
download_size: 4756942293
dataset_size: 4684490295.359
license: mit
---
# Dataset Card for "lsun-bedrooms"
This is a 20% sample of the bedrooms category in [`LSUN`](https://github.com/fyu/lsun), uploaded as a dataset for convenience.
The license for _this compilation only_ is MIT. The data retains the same license as the original dataset.
This is (roughly) the code that was used to upload this dataset:
```python
import os
import shutil
# the star imports below are assumed to provide `Path` and `fc` (fastcore)
from miniai.imports import *
from miniai.diffusion import *
from datasets import load_dataset
path_data = Path('data')
path_data.mkdir(exist_ok=True)
path = path_data/'bedroom'
url = 'https://s3.amazonaws.com/fast-ai-imageclas/bedroom.tgz'
if not path.exists():
path_zip = fc.urlsave(url, path_data)
shutil.unpack_archive('data/bedroom.tgz', 'data')
dataset = load_dataset("imagefolder", data_dir="data/bedroom")
dataset = dataset.remove_columns('label')
dataset = dataset['train'].train_test_split(test_size=0.05)
dataset.push_to_hub("pcuenq/lsun-bedrooms")
```
| 1,245 | [
[
-0.03662109375,
-0.027923583984375,
0.0030612945556640625,
0.02032470703125,
-0.0169677734375,
-0.024444580078125,
0.0013895034790039062,
0.024383544921875,
0.0145721435546875,
0.0302734375,
-0.047332763671875,
-0.038909912109375,
-0.00463104248046875,
-0.00... |
theodor1289/imagenet-1k_tiny | 2023-03-23T08:14:11.000Z | [
"region:us"
] | theodor1289 | null | null | 1 | 69 | 2023-03-23T08:14:03 | ---
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': tench, Tinca tinca
'1': goldfish, Carassius auratus
'2': great white shark, white shark, man-eater, man-eating shark, Carcharodon
carcharias
'3': tiger shark, Galeocerdo cuvieri
'4': hammerhead, hammerhead shark
'5': electric ray, crampfish, numbfish, torpedo
'6': stingray
'7': cock
'8': hen
'9': ostrich, Struthio camelus
'10': brambling, Fringilla montifringilla
'11': goldfinch, Carduelis carduelis
'12': house finch, linnet, Carpodacus mexicanus
'13': junco, snowbird
'14': indigo bunting, indigo finch, indigo bird, Passerina cyanea
'15': robin, American robin, Turdus migratorius
'16': bulbul
'17': jay
'18': magpie
'19': chickadee
'20': water ouzel, dipper
'21': kite
'22': bald eagle, American eagle, Haliaeetus leucocephalus
'23': vulture
'24': great grey owl, great gray owl, Strix nebulosa
'25': European fire salamander, Salamandra salamandra
'26': common newt, Triturus vulgaris
'27': eft
'28': spotted salamander, Ambystoma maculatum
'29': axolotl, mud puppy, Ambystoma mexicanum
'30': bullfrog, Rana catesbeiana
'31': tree frog, tree-frog
'32': tailed frog, bell toad, ribbed toad, tailed toad, Ascaphus trui
'33': loggerhead, loggerhead turtle, Caretta caretta
'34': leatherback turtle, leatherback, leathery turtle, Dermochelys coriacea
'35': mud turtle
'36': terrapin
'37': box turtle, box tortoise
'38': banded gecko
'39': common iguana, iguana, Iguana iguana
'40': American chameleon, anole, Anolis carolinensis
'41': whiptail, whiptail lizard
'42': agama
'43': frilled lizard, Chlamydosaurus kingi
'44': alligator lizard
'45': Gila monster, Heloderma suspectum
'46': green lizard, Lacerta viridis
'47': African chameleon, Chamaeleo chamaeleon
'48': Komodo dragon, Komodo lizard, dragon lizard, giant lizard, Varanus
komodoensis
'49': African crocodile, Nile crocodile, Crocodylus niloticus
'50': American alligator, Alligator mississipiensis
'51': triceratops
'52': thunder snake, worm snake, Carphophis amoenus
'53': ringneck snake, ring-necked snake, ring snake
'54': hognose snake, puff adder, sand viper
'55': green snake, grass snake
'56': king snake, kingsnake
'57': garter snake, grass snake
'58': water snake
'59': vine snake
'60': night snake, Hypsiglena torquata
'61': boa constrictor, Constrictor constrictor
'62': rock python, rock snake, Python sebae
'63': Indian cobra, Naja naja
'64': green mamba
'65': sea snake
'66': horned viper, cerastes, sand viper, horned asp, Cerastes cornutus
'67': diamondback, diamondback rattlesnake, Crotalus adamanteus
'68': sidewinder, horned rattlesnake, Crotalus cerastes
'69': trilobite
'70': harvestman, daddy longlegs, Phalangium opilio
'71': scorpion
'72': black and gold garden spider, Argiope aurantia
'73': barn spider, Araneus cavaticus
'74': garden spider, Aranea diademata
'75': black widow, Latrodectus mactans
'76': tarantula
'77': wolf spider, hunting spider
'78': tick
'79': centipede
'80': black grouse
'81': ptarmigan
'82': ruffed grouse, partridge, Bonasa umbellus
'83': prairie chicken, prairie grouse, prairie fowl
'84': peacock
'85': quail
'86': partridge
'87': African grey, African gray, Psittacus erithacus
'88': macaw
'89': sulphur-crested cockatoo, Kakatoe galerita, Cacatua galerita
'90': lorikeet
'91': coucal
'92': bee eater
'93': hornbill
'94': hummingbird
'95': jacamar
'96': toucan
'97': drake
'98': red-breasted merganser, Mergus serrator
'99': goose
'100': black swan, Cygnus atratus
'101': tusker
'102': echidna, spiny anteater, anteater
'103': platypus, duckbill, duckbilled platypus, duck-billed platypus, Ornithorhynchus
anatinus
'104': wallaby, brush kangaroo
'105': koala, koala bear, kangaroo bear, native bear, Phascolarctos cinereus
'106': wombat
'107': jellyfish
'108': sea anemone, anemone
'109': brain coral
'110': flatworm, platyhelminth
'111': nematode, nematode worm, roundworm
'112': conch
'113': snail
'114': slug
'115': sea slug, nudibranch
'116': chiton, coat-of-mail shell, sea cradle, polyplacophore
'117': chambered nautilus, pearly nautilus, nautilus
'118': Dungeness crab, Cancer magister
'119': rock crab, Cancer irroratus
'120': fiddler crab
'121': king crab, Alaska crab, Alaskan king crab, Alaska king crab, Paralithodes
camtschatica
'122': American lobster, Northern lobster, Maine lobster, Homarus americanus
'123': spiny lobster, langouste, rock lobster, crawfish, crayfish, sea crawfish
'124': crayfish, crawfish, crawdad, crawdaddy
'125': hermit crab
'126': isopod
'127': white stork, Ciconia ciconia
'128': black stork, Ciconia nigra
'129': spoonbill
'130': flamingo
'131': little blue heron, Egretta caerulea
'132': American egret, great white heron, Egretta albus
'133': bittern
'134': crane
'135': limpkin, Aramus pictus
'136': European gallinule, Porphyrio porphyrio
'137': American coot, marsh hen, mud hen, water hen, Fulica americana
'138': bustard
'139': ruddy turnstone, Arenaria interpres
'140': red-backed sandpiper, dunlin, Erolia alpina
'141': redshank, Tringa totanus
'142': dowitcher
'143': oystercatcher, oyster catcher
'144': pelican
'145': king penguin, Aptenodytes patagonica
'146': albatross, mollymawk
'147': grey whale, gray whale, devilfish, Eschrichtius gibbosus, Eschrichtius
robustus
'148': killer whale, killer, orca, grampus, sea wolf, Orcinus orca
'149': dugong, Dugong dugon
'150': sea lion
'151': Chihuahua
'152': Japanese spaniel
'153': Maltese dog, Maltese terrier, Maltese
'154': Pekinese, Pekingese, Peke
'155': Shih-Tzu
'156': Blenheim spaniel
'157': papillon
'158': toy terrier
'159': Rhodesian ridgeback
'160': Afghan hound, Afghan
'161': basset, basset hound
'162': beagle
'163': bloodhound, sleuthhound
'164': bluetick
'165': black-and-tan coonhound
'166': Walker hound, Walker foxhound
'167': English foxhound
'168': redbone
'169': borzoi, Russian wolfhound
'170': Irish wolfhound
'171': Italian greyhound
'172': whippet
'173': Ibizan hound, Ibizan Podenco
'174': Norwegian elkhound, elkhound
'175': otterhound, otter hound
'176': Saluki, gazelle hound
'177': Scottish deerhound, deerhound
'178': Weimaraner
'179': Staffordshire bullterrier, Staffordshire bull terrier
'180': American Staffordshire terrier, Staffordshire terrier, American pit
bull terrier, pit bull terrier
'181': Bedlington terrier
'182': Border terrier
'183': Kerry blue terrier
'184': Irish terrier
'185': Norfolk terrier
'186': Norwich terrier
'187': Yorkshire terrier
'188': wire-haired fox terrier
'189': Lakeland terrier
'190': Sealyham terrier, Sealyham
'191': Airedale, Airedale terrier
'192': cairn, cairn terrier
'193': Australian terrier
'194': Dandie Dinmont, Dandie Dinmont terrier
'195': Boston bull, Boston terrier
'196': miniature schnauzer
'197': giant schnauzer
'198': standard schnauzer
'199': Scotch terrier, Scottish terrier, Scottie
'200': Tibetan terrier, chrysanthemum dog
'201': silky terrier, Sydney silky
'202': soft-coated wheaten terrier
'203': West Highland white terrier
'204': Lhasa, Lhasa apso
'205': flat-coated retriever
'206': curly-coated retriever
'207': golden retriever
'208': Labrador retriever
'209': Chesapeake Bay retriever
'210': German short-haired pointer
'211': vizsla, Hungarian pointer
'212': English setter
'213': Irish setter, red setter
'214': Gordon setter
'215': Brittany spaniel
'216': clumber, clumber spaniel
'217': English springer, English springer spaniel
'218': Welsh springer spaniel
'219': cocker spaniel, English cocker spaniel, cocker
'220': Sussex spaniel
'221': Irish water spaniel
'222': kuvasz
'223': schipperke
'224': groenendael
'225': malinois
'226': briard
'227': kelpie
'228': komondor
'229': Old English sheepdog, bobtail
'230': Shetland sheepdog, Shetland sheep dog, Shetland
'231': collie
'232': Border collie
'233': Bouvier des Flandres, Bouviers des Flandres
'234': Rottweiler
'235': German shepherd, German shepherd dog, German police dog, alsatian
'236': Doberman, Doberman pinscher
'237': miniature pinscher
'238': Greater Swiss Mountain dog
'239': Bernese mountain dog
'240': Appenzeller
'241': EntleBucher
'242': boxer
'243': bull mastiff
'244': Tibetan mastiff
'245': French bulldog
'246': Great Dane
'247': Saint Bernard, St Bernard
'248': Eskimo dog, husky
'249': malamute, malemute, Alaskan malamute
'250': Siberian husky
'251': dalmatian, coach dog, carriage dog
'252': affenpinscher, monkey pinscher, monkey dog
'253': basenji
'254': pug, pug-dog
'255': Leonberg
'256': Newfoundland, Newfoundland dog
'257': Great Pyrenees
'258': Samoyed, Samoyede
'259': Pomeranian
'260': chow, chow chow
'261': keeshond
'262': Brabancon griffon
'263': Pembroke, Pembroke Welsh corgi
'264': Cardigan, Cardigan Welsh corgi
'265': toy poodle
'266': miniature poodle
'267': standard poodle
'268': Mexican hairless
'269': timber wolf, grey wolf, gray wolf, Canis lupus
'270': white wolf, Arctic wolf, Canis lupus tundrarum
'271': red wolf, maned wolf, Canis rufus, Canis niger
'272': coyote, prairie wolf, brush wolf, Canis latrans
'273': dingo, warrigal, warragal, Canis dingo
'274': dhole, Cuon alpinus
'275': African hunting dog, hyena dog, Cape hunting dog, Lycaon pictus
'276': hyena, hyaena
'277': red fox, Vulpes vulpes
'278': kit fox, Vulpes macrotis
'279': Arctic fox, white fox, Alopex lagopus
'280': grey fox, gray fox, Urocyon cinereoargenteus
'281': tabby, tabby cat
'282': tiger cat
'283': Persian cat
'284': Siamese cat, Siamese
'285': Egyptian cat
'286': cougar, puma, catamount, mountain lion, painter, panther, Felis concolor
'287': lynx, catamount
'288': leopard, Panthera pardus
'289': snow leopard, ounce, Panthera uncia
'290': jaguar, panther, Panthera onca, Felis onca
'291': lion, king of beasts, Panthera leo
'292': tiger, Panthera tigris
'293': cheetah, chetah, Acinonyx jubatus
'294': brown bear, bruin, Ursus arctos
'295': American black bear, black bear, Ursus americanus, Euarctos americanus
'296': ice bear, polar bear, Ursus Maritimus, Thalarctos maritimus
'297': sloth bear, Melursus ursinus, Ursus ursinus
'298': mongoose
'299': meerkat, mierkat
'300': tiger beetle
'301': ladybug, ladybeetle, lady beetle, ladybird, ladybird beetle
'302': ground beetle, carabid beetle
'303': long-horned beetle, longicorn, longicorn beetle
'304': leaf beetle, chrysomelid
'305': dung beetle
'306': rhinoceros beetle
'307': weevil
'308': fly
'309': bee
'310': ant, emmet, pismire
'311': grasshopper, hopper
'312': cricket
'313': walking stick, walkingstick, stick insect
'314': cockroach, roach
'315': mantis, mantid
'316': cicada, cicala
'317': leafhopper
'318': lacewing, lacewing fly
'319': dragonfly, darning needle, devil's darning needle, sewing needle,
snake feeder, snake doctor, mosquito hawk, skeeter hawk
'320': damselfly
'321': admiral
'322': ringlet, ringlet butterfly
'323': monarch, monarch butterfly, milkweed butterfly, Danaus plexippus
'324': cabbage butterfly
'325': sulphur butterfly, sulfur butterfly
'326': lycaenid, lycaenid butterfly
'327': starfish, sea star
'328': sea urchin
'329': sea cucumber, holothurian
'330': wood rabbit, cottontail, cottontail rabbit
'331': hare
'332': Angora, Angora rabbit
'333': hamster
'334': porcupine, hedgehog
'335': fox squirrel, eastern fox squirrel, Sciurus niger
'336': marmot
'337': beaver
'338': guinea pig, Cavia cobaya
'339': sorrel
'340': zebra
'341': hog, pig, grunter, squealer, Sus scrofa
'342': wild boar, boar, Sus scrofa
'343': warthog
'344': hippopotamus, hippo, river horse, Hippopotamus amphibius
'345': ox
'346': water buffalo, water ox, Asiatic buffalo, Bubalus bubalis
'347': bison
'348': ram, tup
'349': bighorn, bighorn sheep, cimarron, Rocky Mountain bighorn, Rocky Mountain
sheep, Ovis canadensis
'350': ibex, Capra ibex
'351': hartebeest
'352': impala, Aepyceros melampus
'353': gazelle
'354': Arabian camel, dromedary, Camelus dromedarius
'355': llama
'356': weasel
'357': mink
'358': polecat, fitch, foulmart, foumart, Mustela putorius
'359': black-footed ferret, ferret, Mustela nigripes
'360': otter
'361': skunk, polecat, wood pussy
'362': badger
'363': armadillo
'364': three-toed sloth, ai, Bradypus tridactylus
'365': orangutan, orang, orangutang, Pongo pygmaeus
'366': gorilla, Gorilla gorilla
'367': chimpanzee, chimp, Pan troglodytes
'368': gibbon, Hylobates lar
'369': siamang, Hylobates syndactylus, Symphalangus syndactylus
'370': guenon, guenon monkey
'371': patas, hussar monkey, Erythrocebus patas
'372': baboon
'373': macaque
'374': langur
'375': colobus, colobus monkey
'376': proboscis monkey, Nasalis larvatus
'377': marmoset
'378': capuchin, ringtail, Cebus capucinus
'379': howler monkey, howler
'380': titi, titi monkey
'381': spider monkey, Ateles geoffroyi
'382': squirrel monkey, Saimiri sciureus
'383': Madagascar cat, ring-tailed lemur, Lemur catta
'384': indri, indris, Indri indri, Indri brevicaudatus
'385': Indian elephant, Elephas maximus
'386': African elephant, Loxodonta africana
'387': lesser panda, red panda, panda, bear cat, cat bear, Ailurus fulgens
'388': giant panda, panda, panda bear, coon bear, Ailuropoda melanoleuca
'389': barracouta, snoek
'390': eel
'391': coho, cohoe, coho salmon, blue jack, silver salmon, Oncorhynchus
kisutch
'392': rock beauty, Holocanthus tricolor
'393': anemone fish
'394': sturgeon
'395': gar, garfish, garpike, billfish, Lepisosteus osseus
'396': lionfish
'397': puffer, pufferfish, blowfish, globefish
'398': abacus
'399': abaya
'400': academic gown, academic robe, judge's robe
'401': accordion, piano accordion, squeeze box
'402': acoustic guitar
'403': aircraft carrier, carrier, flattop, attack aircraft carrier
'404': airliner
'405': airship, dirigible
'406': altar
'407': ambulance
'408': amphibian, amphibious vehicle
'409': analog clock
'410': apiary, bee house
'411': apron
'412': ashcan, trash can, garbage can, wastebin, ash bin, ash-bin, ashbin,
dustbin, trash barrel, trash bin
'413': assault rifle, assault gun
'414': backpack, back pack, knapsack, packsack, rucksack, haversack
'415': bakery, bakeshop, bakehouse
'416': balance beam, beam
'417': balloon
'418': ballpoint, ballpoint pen, ballpen, Biro
'419': Band Aid
'420': banjo
'421': bannister, banister, balustrade, balusters, handrail
'422': barbell
'423': barber chair
'424': barbershop
'425': barn
'426': barometer
'427': barrel, cask
'428': barrow, garden cart, lawn cart, wheelbarrow
'429': baseball
'430': basketball
'431': bassinet
'432': bassoon
'433': bathing cap, swimming cap
'434': bath towel
'435': bathtub, bathing tub, bath, tub
'436': beach wagon, station wagon, wagon, estate car, beach waggon, station
waggon, waggon
'437': beacon, lighthouse, beacon light, pharos
'438': beaker
'439': bearskin, busby, shako
'440': beer bottle
'441': beer glass
'442': bell cote, bell cot
'443': bib
'444': bicycle-built-for-two, tandem bicycle, tandem
'445': bikini, two-piece
'446': binder, ring-binder
'447': binoculars, field glasses, opera glasses
'448': birdhouse
'449': boathouse
'450': bobsled, bobsleigh, bob
'451': bolo tie, bolo, bola tie, bola
'452': bonnet, poke bonnet
'453': bookcase
'454': bookshop, bookstore, bookstall
'455': bottlecap
'456': bow
'457': bow tie, bow-tie, bowtie
'458': brass, memorial tablet, plaque
'459': brassiere, bra, bandeau
'460': breakwater, groin, groyne, mole, bulwark, seawall, jetty
'461': breastplate, aegis, egis
'462': broom
'463': bucket, pail
'464': buckle
'465': bulletproof vest
'466': bullet train, bullet
'467': butcher shop, meat market
'468': cab, hack, taxi, taxicab
'469': caldron, cauldron
'470': candle, taper, wax light
'471': cannon
'472': canoe
'473': can opener, tin opener
'474': cardigan
'475': car mirror
'476': carousel, carrousel, merry-go-round, roundabout, whirligig
'477': carpenter's kit, tool kit
'478': carton
'479': car wheel
'480': cash machine, cash dispenser, automated teller machine, automatic
teller machine, automated teller, automatic teller, ATM
'481': cassette
'482': cassette player
'483': castle
'484': catamaran
'485': CD player
'486': cello, violoncello
'487': cellular telephone, cellular phone, cellphone, cell, mobile phone
'488': chain
'489': chainlink fence
'490': chain mail, ring mail, mail, chain armor, chain armour, ring armor,
ring armour
'491': chain saw, chainsaw
'492': chest
'493': chiffonier, commode
'494': chime, bell, gong
'495': china cabinet, china closet
'496': Christmas stocking
'497': church, church building
'498': cinema, movie theater, movie theatre, movie house, picture palace
'499': cleaver, meat cleaver, chopper
'500': cliff dwelling
'501': cloak
'502': clog, geta, patten, sabot
'503': cocktail shaker
'504': coffee mug
'505': coffeepot
'506': coil, spiral, volute, whorl, helix
'507': combination lock
'508': computer keyboard, keypad
'509': confectionery, confectionary, candy store
'510': container ship, containership, container vessel
'511': convertible
'512': corkscrew, bottle screw
'513': cornet, horn, trumpet, trump
'514': cowboy boot
'515': cowboy hat, ten-gallon hat
'516': cradle
'517': crane2
'518': crash helmet
'519': crate
'520': crib, cot
'521': Crock Pot
'522': croquet ball
'523': crutch
'524': cuirass
'525': dam, dike, dyke
'526': desk
'527': desktop computer
'528': dial telephone, dial phone
'529': diaper, nappy, napkin
'530': digital clock
'531': digital watch
'532': dining table, board
'533': dishrag, dishcloth
'534': dishwasher, dish washer, dishwashing machine
'535': disk brake, disc brake
'536': dock, dockage, docking facility
'537': dogsled, dog sled, dog sleigh
'538': dome
'539': doormat, welcome mat
'540': drilling platform, offshore rig
'541': drum, membranophone, tympan
'542': drumstick
'543': dumbbell
'544': Dutch oven
'545': electric fan, blower
'546': electric guitar
'547': electric locomotive
'548': entertainment center
'549': envelope
'550': espresso maker
'551': face powder
'552': feather boa, boa
'553': file, file cabinet, filing cabinet
'554': fireboat
'555': fire engine, fire truck
'556': fire screen, fireguard
'557': flagpole, flagstaff
'558': flute, transverse flute
'559': folding chair
'560': football helmet
'561': forklift
'562': fountain
'563': fountain pen
'564': four-poster
'565': freight car
'566': French horn, horn
'567': frying pan, frypan, skillet
'568': fur coat
'569': garbage truck, dustcart
'570': gasmask, respirator, gas helmet
'571': gas pump, gasoline pump, petrol pump, island dispenser
'572': goblet
'573': go-kart
'574': golf ball
'575': golfcart, golf cart
'576': gondola
'577': gong, tam-tam
'578': gown
'579': grand piano, grand
'580': greenhouse, nursery, glasshouse
'581': grille, radiator grille
'582': grocery store, grocery, food market, market
'583': guillotine
'584': hair slide
'585': hair spray
'586': half track
'587': hammer
'588': hamper
'589': hand blower, blow dryer, blow drier, hair dryer, hair drier
'590': hand-held computer, hand-held microcomputer
'591': handkerchief, hankie, hanky, hankey
'592': hard disc, hard disk, fixed disk
'593': harmonica, mouth organ, harp, mouth harp
'594': harp
'595': harvester, reaper
'596': hatchet
'597': holster
'598': home theater, home theatre
'599': honeycomb
'600': hook, claw
'601': hoopskirt, crinoline
'602': horizontal bar, high bar
'603': horse cart, horse-cart
'604': hourglass
'605': iPod
'606': iron, smoothing iron
'607': jack-o'-lantern
'608': jean, blue jean, denim
'609': jeep, landrover
'610': jersey, T-shirt, tee shirt
'611': jigsaw puzzle
'612': jinrikisha, ricksha, rickshaw
'613': joystick
'614': kimono
'615': knee pad
'616': knot
'617': lab coat, laboratory coat
'618': ladle
'619': lampshade, lamp shade
'620': laptop, laptop computer
'621': lawn mower, mower
'622': lens cap, lens cover
'623': letter opener, paper knife, paperknife
'624': library
'625': lifeboat
'626': lighter, light, igniter, ignitor
'627': limousine, limo
'628': liner, ocean liner
'629': lipstick, lip rouge
'630': Loafer
'631': lotion
'632': loudspeaker, speaker, speaker unit, loudspeaker system, speaker system
'633': loupe, jeweler's loupe
'634': lumbermill, sawmill
'635': magnetic compass
'636': mailbag, postbag
'637': mailbox, letter box
'638': maillot
'639': maillot, tank suit
'640': manhole cover
'641': maraca
'642': marimba, xylophone
'643': mask
'644': matchstick
'645': maypole
'646': maze, labyrinth
'647': measuring cup
'648': medicine chest, medicine cabinet
'649': megalith, megalithic structure
'650': microphone, mike
'651': microwave, microwave oven
'652': military uniform
'653': milk can
'654': minibus
'655': miniskirt, mini
'656': minivan
'657': missile
'658': mitten
'659': mixing bowl
'660': mobile home, manufactured home
'661': Model T
'662': modem
'663': monastery
'664': monitor
'665': moped
'666': mortar
'667': mortarboard
'668': mosque
'669': mosquito net
'670': motor scooter, scooter
'671': mountain bike, all-terrain bike, off-roader
'672': mountain tent
'673': mouse, computer mouse
'674': mousetrap
'675': moving van
'676': muzzle
'677': nail
'678': neck brace
'679': necklace
'680': nipple
'681': notebook, notebook computer
'682': obelisk
'683': oboe, hautboy, hautbois
'684': ocarina, sweet potato
'685': odometer, hodometer, mileometer, milometer
'686': oil filter
'687': organ, pipe organ
'688': oscilloscope, scope, cathode-ray oscilloscope, CRO
'689': overskirt
'690': oxcart
'691': oxygen mask
'692': packet
'693': paddle, boat paddle
'694': paddlewheel, paddle wheel
'695': padlock
'696': paintbrush
'697': pajama, pyjama, pj's, jammies
'698': palace
'699': panpipe, pandean pipe, syrinx
'700': paper towel
'701': parachute, chute
'702': parallel bars, bars
'703': park bench
'704': parking meter
'705': passenger car, coach, carriage
'706': patio, terrace
'707': pay-phone, pay-station
'708': pedestal, plinth, footstall
'709': pencil box, pencil case
'710': pencil sharpener
'711': perfume, essence
'712': Petri dish
'713': photocopier
'714': pick, plectrum, plectron
'715': pickelhaube
'716': picket fence, paling
'717': pickup, pickup truck
'718': pier
'719': piggy bank, penny bank
'720': pill bottle
'721': pillow
'722': ping-pong ball
'723': pinwheel
'724': pirate, pirate ship
'725': pitcher, ewer
'726': plane, carpenter's plane, woodworking plane
'727': planetarium
'728': plastic bag
'729': plate rack
'730': plow, plough
'731': plunger, plumber's helper
'732': Polaroid camera, Polaroid Land camera
'733': pole
'734': police van, police wagon, paddy wagon, patrol wagon, wagon, black
Maria
'735': poncho
'736': pool table, billiard table, snooker table
'737': pop bottle, soda bottle
'738': pot, flowerpot
'739': potter's wheel
'740': power drill
'741': prayer rug, prayer mat
'742': printer
'743': prison, prison house
'744': projectile, missile
'745': projector
'746': puck, hockey puck
'747': punching bag, punch bag, punching ball, punchball
'748': purse
'749': quill, quill pen
'750': quilt, comforter, comfort, puff
'751': racer, race car, racing car
'752': racket, racquet
'753': radiator
'754': radio, wireless
'755': radio telescope, radio reflector
'756': rain barrel
'757': recreational vehicle, RV, R.V.
'758': reel
'759': reflex camera
'760': refrigerator, icebox
'761': remote control, remote
'762': restaurant, eating house, eating place, eatery
'763': revolver, six-gun, six-shooter
'764': rifle
'765': rocking chair, rocker
'766': rotisserie
'767': rubber eraser, rubber, pencil eraser
'768': rugby ball
'769': rule, ruler
'770': running shoe
'771': safe
'772': safety pin
'773': saltshaker, salt shaker
'774': sandal
'775': sarong
'776': sax, saxophone
'777': scabbard
'778': scale, weighing machine
'779': school bus
'780': schooner
'781': scoreboard
'782': screen, CRT screen
'783': screw
'784': screwdriver
'785': seat belt, seatbelt
'786': sewing machine
'787': shield, buckler
'788': shoe shop, shoe-shop, shoe store
'789': shoji
'790': shopping basket
'791': shopping cart
'792': shovel
'793': shower cap
'794': shower curtain
'795': ski
'796': ski mask
'797': sleeping bag
'798': slide rule, slipstick
'799': sliding door
'800': slot, one-armed bandit
'801': snorkel
'802': snowmobile
'803': snowplow, snowplough
'804': soap dispenser
'805': soccer ball
'806': sock
'807': solar dish, solar collector, solar furnace
'808': sombrero
'809': soup bowl
'810': space bar
'811': space heater
'812': space shuttle
'813': spatula
'814': speedboat
'815': spider web, spider's web
'816': spindle
'817': sports car, sport car
'818': spotlight, spot
'819': stage
'820': steam locomotive
'821': steel arch bridge
'822': steel drum
'823': stethoscope
'824': stole
'825': stone wall
'826': stopwatch, stop watch
'827': stove
'828': strainer
'829': streetcar, tram, tramcar, trolley, trolley car
'830': stretcher
'831': studio couch, day bed
'832': stupa, tope
'833': submarine, pigboat, sub, U-boat
'834': suit, suit of clothes
'835': sundial
'836': sunglass
'837': sunglasses, dark glasses, shades
'838': sunscreen, sunblock, sun blocker
'839': suspension bridge
'840': swab, swob, mop
'841': sweatshirt
'842': swimming trunks, bathing trunks
'843': swing
'844': switch, electric switch, electrical switch
'845': syringe
'846': table lamp
'847': tank, army tank, armored combat vehicle, armoured combat vehicle
'848': tape player
'849': teapot
'850': teddy, teddy bear
'851': television, television system
'852': tennis ball
'853': thatch, thatched roof
'854': theater curtain, theatre curtain
'855': thimble
'856': thresher, thrasher, threshing machine
'857': throne
'858': tile roof
'859': toaster
'860': tobacco shop, tobacconist shop, tobacconist
'861': toilet seat
'862': torch
'863': totem pole
'864': tow truck, tow car, wrecker
'865': toyshop
'866': tractor
'867': trailer truck, tractor trailer, trucking rig, rig, articulated lorry,
semi
'868': tray
'869': trench coat
'870': tricycle, trike, velocipede
'871': trimaran
'872': tripod
'873': triumphal arch
'874': trolleybus, trolley coach, trackless trolley
'875': trombone
'876': tub, vat
'877': turnstile
'878': typewriter keyboard
'879': umbrella
'880': unicycle, monocycle
'881': upright, upright piano
'882': vacuum, vacuum cleaner
'883': vase
'884': vault
'885': velvet
'886': vending machine
'887': vestment
'888': viaduct
'889': violin, fiddle
'890': volleyball
'891': waffle iron
'892': wall clock
'893': wallet, billfold, notecase, pocketbook
'894': wardrobe, closet, press
'895': warplane, military plane
'896': washbasin, handbasin, washbowl, lavabo, wash-hand basin
'897': washer, automatic washer, washing machine
'898': water bottle
'899': water jug
'900': water tower
'901': whiskey jug
'902': whistle
'903': wig
'904': window screen
'905': window shade
'906': Windsor tie
'907': wine bottle
'908': wing
'909': wok
'910': wooden spoon
'911': wool, woolen, woollen
'912': worm fence, snake fence, snake-rail fence, Virginia fence
'913': wreck
'914': yawl
'915': yurt
'916': web site, website, internet site, site
'917': comic book
'918': crossword puzzle, crossword
'919': street sign
'920': traffic light, traffic signal, stoplight
'921': book jacket, dust cover, dust jacket, dust wrapper
'922': menu
'923': plate
'924': guacamole
'925': consomme
'926': hot pot, hotpot
'927': trifle
'928': ice cream, icecream
'929': ice lolly, lolly, lollipop, popsicle
'930': French loaf
'931': bagel, beigel
'932': pretzel
'933': cheeseburger
'934': hotdog, hot dog, red hot
'935': mashed potato
'936': head cabbage
'937': broccoli
'938': cauliflower
'939': zucchini, courgette
'940': spaghetti squash
'941': acorn squash
'942': butternut squash
'943': cucumber, cuke
'944': artichoke, globe artichoke
'945': bell pepper
'946': cardoon
'947': mushroom
'948': Granny Smith
'949': strawberry
'950': orange
'951': lemon
'952': fig
'953': pineapple, ananas
'954': banana
'955': jackfruit, jak, jack
'956': custard apple
'957': pomegranate
'958': hay
'959': carbonara
'960': chocolate sauce, chocolate syrup
'961': dough
'962': meat loaf, meatloaf
'963': pizza, pizza pie
'964': potpie
'965': burrito
'966': red wine
'967': espresso
'968': cup
'969': eggnog
'970': alp
'971': bubble
'972': cliff, drop, drop-off
'973': coral reef
'974': geyser
'975': lakeside, lakeshore
'976': promontory, headland, head, foreland
'977': sandbar, sand bar
'978': seashore, coast, seacoast, sea-coast
'979': valley, vale
'980': volcano
'981': ballplayer, baseball player
'982': groom, bridegroom
'983': scuba diver
'984': rapeseed
'985': daisy
'986': yellow lady's slipper, yellow lady-slipper, Cypripedium calceolus,
Cypripedium parviflorum
'987': corn
'988': acorn
'989': hip, rose hip, rosehip
'990': buckeye, horse chestnut, conker
'991': coral fungus
'992': agaric
'993': gyromitra
'994': stinkhorn, carrion fungus
'995': earthstar
'996': hen-of-the-woods, hen of the woods, Polyporus frondosus, Grifola
frondosa
'997': bolete
'998': ear, spike, capitulum
'999': toilet tissue, toilet paper, bathroom tissue
splits:
- name: train
num_bytes: 11957097.0
num_examples: 100
download_size: 11936960
dataset_size: 11957097.0
---
# Dataset Card for "imagenet-1k_mini_100"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 39,199 | [
[
-0.055816650390625,
-0.01100921630859375,
-0.003292083740234375,
0.01311492919921875,
-0.02545166015625,
-0.0244903564453125,
0.0294189453125,
0.0012617111206054688,
0.07318115234375,
0.03826904296875,
-0.061279296875,
-0.044525146484375,
-0.043548583984375,
... |
MU-NLPC/Calc-math_qa | 2023-10-30T15:54:24.000Z | [
"license:apache-2.0",
"arxiv:2305.15017",
"arxiv:1905.13319",
"region:us"
] | MU-NLPC | null | null | 2 | 69 | 2023-05-24T07:51:48 | ---
license: apache-2.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- config_name: original-splits
data_files:
- split: train
path: original-splits/train-*
- split: validation
path: original-splits/validation-*
- split: test
path: original-splits/test-*
dataset_info:
- config_name: default
features:
- name: id
dtype: string
- name: question
dtype: string
- name: chain
dtype: string
- name: result
dtype: string
- name: result_float
dtype: float64
- name: question_without_options
dtype: string
- name: options
struct:
- name: A
dtype: string
- name: B
dtype: string
- name: C
dtype: string
- name: D
dtype: string
- name: E
dtype: string
- name: annotated_formula
dtype: string
- name: linear_formula
dtype: string
- name: rationale
dtype: string
- name: category
dtype: string
splits:
- name: train
num_bytes: 25058735
num_examples: 20868
download_size: 11157481
dataset_size: 25058735
- config_name: original-splits
features:
- name: id
dtype: string
- name: question
dtype: string
- name: chain
dtype: string
- name: result
dtype: string
- name: result_float
dtype: float64
- name: question_without_options
dtype: string
- name: options
struct:
- name: A
dtype: string
- name: B
dtype: string
- name: C
dtype: string
- name: D
dtype: string
- name: E
dtype: string
- name: annotated_formula
dtype: string
- name: linear_formula
dtype: string
- name: rationale
dtype: string
- name: category
dtype: string
splits:
- name: train
num_bytes: 25058735
num_examples: 20868
- name: validation
num_bytes: 3722848
num_examples: 3102
- name: test
num_bytes: 2423833
num_examples: 2029
download_size: 13928430
dataset_size: 31205416
---
# Dataset Card for Calc-math_qa
## Summary
This dataset is an instance of the math_qa dataset, converted to a simple HTML-like language that can be easily parsed (e.g. by BeautifulSoup). The data contains three types of tags:
- gadget: A tag whose content is intended to be evaluated by calling an external tool (sympy-based calculator in this case)
- output: An output of the external tool
- result: The final answer of the mathematical problem (correct option)
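The tag language above can be handled by any HTML parser. Below is a minimal, self-contained sketch using only Python's standard library; the chain string and tag contents are made-up illustrations (not actual examples from the dataset), and real chains may carry additional attributes on the tags:

```python
from html.parser import HTMLParser

# A made-up chain in the dataset's HTML-like format (illustrative only).
chain = "<gadget>2 + 3</gadget><output>5</output><result>B</result>"

class ChainParser(HTMLParser):
    """Collect the text content of each gadget/output/result tag."""
    def __init__(self):
        super().__init__()
        self._tag = None   # tag whose content we are currently inside
        self.steps = []    # list of (tag, content) pairs, in order
    def handle_starttag(self, tag, attrs):
        if tag in ("gadget", "output", "result"):
            self._tag = tag
    def handle_endtag(self, tag):
        self._tag = None
    def handle_data(self, data):
        if self._tag is not None:
            self.steps.append((self._tag, data))

parser = ChainParser()
parser.feed(chain)
print(parser.steps)  # [('gadget', '2 + 3'), ('output', '5'), ('result', 'B')]
```

BeautifulSoup with its `html.parser` backend would recover the same tag/content pairs.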
## Supported Tasks
The dataset is intended for training Chain-of-Thought reasoning **models able to use external tools** to enhance the factuality of their responses.
This dataset presents in-context scenarios where models can outsource the computations in the reasoning chain to a calculator.
## Construction Process
We took the original math_qa dataset, parsed the nested formulas, linearized them into a sequence (chain) of operations, and replaced all advanced
function calls (such as `circle_area`) with explicit elementary operations. We evaluate all the steps in each example and filter out examples whose
evaluation does not match (within a 5% tolerance) the answer selected as correct in the data, leaving about 26k examples. The sequence of steps is then saved in an HTML-like language
in the `chain` column.
We also perform in-dataset and cross-dataset data-leak detection within [Calc-X collection](https://huggingface.co/collections/MU-NLPC/calc-x-652fee9a6b838fd820055483).
Specifically for MathQA, we found that the majority of validation and test examples are near-duplicates of some example in the train set, and that all validation and test
examples likely originate from the Aqua-RAT train split. We therefore do not recommend using the original validation and test splits of the MathQA dataset.
You can read more information about this process in our [Calc-X paper](https://arxiv.org/abs/2305.15017).
## Data splits
In our default configuration, test and validation splits are removed and we recommend using MathQA for training only. You can load it using:
```python
import datasets

ds = datasets.load_dataset("MU-NLPC/calc-math_qa")
```
If you want to use the original dataset splits, you can load it using:
```python
import datasets

ds = datasets.load_dataset("MU-NLPC/calc-math_qa", "original-splits")
```
## Attributes
- **id** - id of the example
- **question** - the description of a mathematical problem in natural language, including the options to choose from
- **chain** - the solution in the form of step-by-step calculations encoded in a simple HTML-like language, computed from the `annotated_formula` column
- **result** - the correct option
- **result_float** - the result converted to a float
- **question_without_options** - same as `question`, but does not contain the options
- **options** - dictionary of options to choose from, one is correct, keys are "A".."E"
- **annotated_formula** - human-annotated nested expression that (approximately) evaluates to the selected correct answer
- **linear_formula** - same as `annotated_formula`, but linearized by original math_qa authors
- **rationale** - human-annotated free-text reasoning that leads to the correct answer
- **category** - category of the math problem
Attributes **id**, **question**, **chain**, and **result** are present in all datasets in [Calc-X collection](https://huggingface.co/collections/MU-NLPC/calc-x-652fee9a6b838fd820055483).
## Sources
- [mathqa HF dataset](https://huggingface.co/datasets/math_qa)
- [official website](https://math-qa.github.io/)
## Related work
This dataset was created as part of a larger effort to train models capable of using a calculator during inference, which we call Calcformers.
We have released a collection of datasets on solving math problems with calculator interactions on HuggingFace called [Calc-X collection](https://huggingface.co/collections/MU-NLPC/calc-x-652fee9a6b838fd820055483).
You can find the models we trained in the [Calcformers collection](https://huggingface.co/collections/MU-NLPC/calcformers-65367392badc497807b3caf5).
You can read more in our paper [Calc-X and Calcformers](https://arxiv.org/abs/2305.15017).
## Licence
Apache 2.0, consistent with the original dataset.
## Cite
If you use this version of the dataset in research, please cite the [original MathQA paper](https://arxiv.org/abs/1905.13319) and the [Calc-X paper](https://arxiv.org/abs/2305.15017) as follows:
```bibtex
@inproceedings{kadlcik-etal-2023-soft,
title = "Calc-X and Calcformers: Empowering Arithmetical Chain-of-Thought through Interaction with Symbolic Systems",
author = "Marek Kadlčík and Michal Štefánik and Ondřej Sotolář and Vlastimil Martinek",
    booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing: Main track",
month = dec,
year = "2023",
address = "Singapore, Singapore",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/2305.15017",
}
``` | 6,922 | [
[
-0.0301361083984375,
-0.046356201171875,
0.0192108154296875,
0.01129913330078125,
0.007518768310546875,
-0.0017757415771484375,
-0.0030193328857421875,
-0.01093292236328125,
0.0191192626953125,
0.02935791015625,
-0.05377197265625,
-0.0220947265625,
-0.0201721191... |
radia/wmt14-de2en | 2023-06-24T21:18:45.000Z | [
"region:us"
] | radia | null | null | 0 | 69 | 2023-06-24T21:02:54 | ---
dataset_info:
features:
- name: de
dtype: string
- name: en
dtype: string
splits:
- name: train
num_bytes: 1332850167
num_examples: 4468840
- name: val
num_bytes: 1588612
num_examples: 6003
- name: test
num_bytes: 715833
num_examples: 2737
download_size: 822597852
dataset_size: 1335154612
---
# Dataset Card for "wmt14-de2en"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 513 | [
[
-0.052215576171875,
-0.01537322998046875,
0.0251007080078125,
0.0162811279296875,
-0.0281219482421875,
-0.01551055908203125,
0.021209716796875,
-0.013946533203125,
0.04638671875,
0.0294036865234375,
-0.062225341796875,
-0.051544189453125,
-0.06103515625,
-0.... |
wisenut-nlp-team/korquad_v1.0_multiple_gqa | 2023-07-10T07:48:54.000Z | [
"region:us"
] | wisenut-nlp-team | null | null | 0 | 69 | 2023-07-06T03:22:28 | ---
dataset_info:
features:
- name: id
dtype: string
- name: title
dtype: string
- name: context
dtype: string
- name: answers
sequence: string
- name: similar_context
sequence: string
- name: questions
sequence: string
splits:
- name: train
num_bytes: 120956461
num_examples: 9053
- name: validation
num_bytes: 11697414
num_examples: 880
download_size: 0
dataset_size: 132653875
---
# Dataset Card for "korquad_v1.0_multiple_gqa"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 627 | [
[
-0.05047607421875,
-0.0202484130859375,
0.015899658203125,
0.0198211669921875,
-0.0221405029296875,
0.00560760498046875,
0.04412841796875,
-0.001964569091796875,
0.046966552734375,
0.04852294921875,
-0.046661376953125,
-0.05712890625,
-0.036590576171875,
-0.... |
hitachi-nlp/FLD.v2 | 2023-10-03T12:19:29.000Z | [
"region:us"
] | hitachi-nlp | null | null | 3 | 69 | 2023-08-24T09:44:21 | ---
dataset_info:
- config_name: default
features:
- name: hypothesis
dtype: string
- name: context
dtype: string
- name: hypothesis_formula
dtype: string
- name: context_formula
dtype: string
- name: proofs
sequence: string
- name: proof_label
dtype: string
- name: proofs_formula
sequence: string
- name: world_assump_label
dtype: string
- name: original_tree_depth
dtype: int64
- name: depth
dtype: int64
- name: num_formula_distractors
dtype: int64
- name: num_translation_distractors
dtype: int64
- name: num_all_distractors
dtype: int64
- name: negative_hypothesis
dtype: string
- name: negative_hypothesis_formula
dtype: string
- name: negative_original_tree_depth
dtype: int64
- name: negative_proofs
sequence: string
- name: negative_proof_label
dtype: string
- name: negative_world_assump_label
dtype: string
- name: prompt_serial
dtype: string
- name: proof_serial
dtype: string
- name: version
dtype: string
splits:
- name: train
num_bytes: 102341795
num_examples: 30000
- name: validation
num_bytes: 17036757
num_examples: 5000
- name: test
num_bytes: 17032009
num_examples: 5000
download_size: 50518265
dataset_size: 136410561
- config_name: star
features:
- name: hypothesis
dtype: string
- name: context
dtype: string
- name: hypothesis_formula
dtype: string
- name: context_formula
dtype: string
- name: proofs
sequence: string
- name: proof_label
dtype: string
- name: proofs_formula
sequence: string
- name: world_assump_label
dtype: string
- name: original_tree_depth
dtype: int64
- name: depth
dtype: int64
- name: num_formula_distractors
dtype: int64
- name: num_translation_distractors
dtype: int64
- name: num_all_distractors
dtype: int64
- name: negative_hypothesis
dtype: string
- name: negative_hypothesis_formula
dtype: string
- name: negative_original_tree_depth
dtype: int64
- name: negative_proofs
sequence: string
- name: negative_proof_label
dtype: string
- name: negative_world_assump_label
dtype: string
- name: prompt_serial
dtype: string
- name: proof_serial
dtype: string
- name: version
dtype: string
splits:
- name: train
num_bytes: 127005152
num_examples: 30000
- name: validation
num_bytes: 21077447
num_examples: 5000
- name: test
num_bytes: 21297828
num_examples: 5000
download_size: 61803899
dataset_size: 169380427
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
- config_name: star
data_files:
- split: train
path: star/train-*
- split: validation
path: star/validation-*
- split: test
path: star/test-*
---
# Dataset Card for "FLD.v2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 3,101 | [
[
-0.03955078125,
-0.032012939453125,
0.002490997314453125,
0.01044464111328125,
-0.01148223876953125,
-0.01151275634765625,
0.033599853515625,
-0.029510498046875,
0.04693603515625,
0.034149169921875,
-0.060882568359375,
-0.0340576171875,
-0.034454345703125,
-... |
hitorilabs/iris | 2023-09-07T19:42:41.000Z | [
"task_categories:tabular-classification",
"size_categories:n<1K",
"license:cc0-1.0",
"region:us"
] | hitorilabs | null | null | 0 | 69 | 2023-08-24T21:40:28 | ---
license: cc0-1.0
size_categories:
- n<1K
task_categories:
- tabular-classification
dataset_info:
features:
- name: petal_length
dtype: float32
- name: petal_width
dtype: float32
- name: sepal_length
dtype: float32
- name: sepal_width
dtype: float32
- name: species
dtype:
class_label:
names:
'0': Iris-setosa
'1': Iris-versicolor
'2': Iris-virginica
splits:
- name: train
num_bytes: 3600
num_examples: 150
download_size: 3835
dataset_size: 3600
configs:
- config_name: default
data_files: data/train-*
---
# Note
The Iris dataset is one of the most popular datasets used for demonstrating simple classification models. This dataset was copied and transformed from `scikit-learn/iris` to be more native to huggingface.
Some changes were made to the dataset to save the user from extra lines of data transformation code, notably:
- removed the `id` column
- cast the `species` column to `ClassLabel` (supports `ClassLabel.int2str()` and `ClassLabel.str2int()`)
- cast feature columns from `float64` down to `float32`
- renamed feature names to snake case
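The `ClassLabel` cast means label ids and names convert in both directions via `int2str`/`str2int`; the mapping is simply the index order of the names listed in the metadata above. A plain-Python sketch of what those two methods do:

```python
# The id <-> name mapping encoded by the species ClassLabel
# (names taken from the dataset metadata above).
names = ["Iris-setosa", "Iris-versicolor", "Iris-virginica"]

def int2str(i: int) -> str:
    """Map an integer label id to its species name."""
    return names[i]

def str2int(name: str) -> int:
    """Map a species name back to its integer label id."""
    return names.index(name)

print(int2str(1))                 # Iris-versicolor
print(str2int("Iris-virginica"))  # 2
```

On the loaded dataset itself, the same conversions are available as `ds.features["species"].int2str(...)` and `.str2int(...)`.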
## Iris Species Dataset
The Iris dataset was used in R.A. Fisher's classic 1936 paper, The Use of Multiple Measurements in Taxonomic Problems, and can also be found on the UCI Machine Learning Repository.
It includes three iris species with 50 samples each as well as some properties about each flower. One flower species is linearly separable from the other two, but the other two are not linearly separable from each other.
The dataset is taken from [UCI Machine Learning Repository's Kaggle](https://www.kaggle.com/datasets/uciml/iris).
The following description is taken from UCI Machine Learning Repository.
This is perhaps the best known database to be found in the pattern recognition literature. Fisher's paper is a classic in the field and is referenced frequently to this day. (See Duda & Hart, for example.) The data set contains 3 classes of 50 instances each, where each class refers to a type of iris plant. One class is linearly separable from the other 2; the latter are NOT linearly separable from each other.
Predicted attribute: class of iris plant.
This is an exceedingly simple domain.
This data differs from the data presented in Fishers article (identified by Steve Chadwick, spchadwick '@' espeedaz.net ). The 35th sample should be: 4.9,3.1,1.5,0.2,"Iris-setosa" where the error is in the fourth feature. The 38th sample: 4.9,3.6,1.4,0.1,"Iris-setosa" where the errors are in the second and third features.
Features in this dataset are the following:
- sepal length in cm
- sepal width in cm
- petal length in cm
- petal width in cm
- class:
- Iris-setosa
- Iris-versicolour
- Iris-virginica | 2,780 | [
[
-0.0279693603515625,
-0.00112152099609375,
-0.0010738372802734375,
0.0301361083984375,
0.0019817352294921875,
-0.017242431640625,
-0.0010776519775390625,
-0.06317138671875,
0.036865234375,
0.027130126953125,
-0.03289794921875,
-0.0284423828125,
-0.02967834472656... |
maximegmd/medqa_alpaca_format | 2023-09-12T11:27:26.000Z | [
"region:us"
] | maximegmd | null | null | 0 | 69 | 2023-09-12T10:05:52 | ---
dataset_info:
features:
- name: question
dtype: string
- name: choices
sequence: string
- name: solution
dtype: string
splits:
- name: test
num_bytes: 1184018
num_examples: 1273
- name: train
num_bytes: 9249332
num_examples: 10178
download_size: 5933919
dataset_size: 10433350
configs:
- config_name: default
data_files:
- split: test
path: data/test-*
- split: train
path: data/train-*
---
# Dataset Card for "medqa_alpaca_format"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 628 | [
[
-0.045318603515625,
-0.02325439453125,
0.016937255859375,
0.01953125,
-0.026031494140625,
-0.0146942138671875,
0.0280914306640625,
-0.005542755126953125,
0.07647705078125,
0.0389404296875,
-0.058380126953125,
-0.05902099609375,
-0.046661376953125,
-0.0158691... |
cawoylel/FulaSpeechCorpora | 2023-09-22T16:10:37.000Z | [
"task_categories:automatic-speech-recognition",
"task_categories:text-to-speech",
"task_categories:audio-classification",
"size_categories:100K<n<1M",
"language:ff",
"region:us"
] | cawoylel | null | null | 0 | 69 | 2023-09-21T17:56:54 | ---
configs:
- config_name: default
data_files:
- split: pulaar
path: data/pulaar-*
- split: maacina
path: data/maacina-*
- split: liptako
path: data/liptako-*
- split: caka
path: data/caka-*
- split: bororro
path: data/bororro-*
- split: borgu
path: data/borgu-*
- split: pular
path: data/pular-*
- split: adamawa
path: data/adamawa-*
dataset_info:
features:
- name: audio
dtype: audio
- name: transcription
dtype: string
- name: dialect
dtype: string
splits:
- name: pulaar
num_bytes: 3398551955.96
num_examples: 12880
- name: maacina
num_bytes: 2677353337.824
num_examples: 14336
- name: liptako
num_bytes: 5858678478.536
num_examples: 36828
- name: caka
num_bytes: 2790732470.205
num_examples: 14865
- name: bororro
num_bytes: 2952498447.936
num_examples: 15022
- name: borgu
num_bytes: 2849809213.278
num_examples: 13387
- name: pular
num_bytes: 2339299211.055
num_examples: 11779
- name: adamawa
num_bytes: 2225350403.136
num_examples: 13504
download_size: 20035287564
dataset_size: 25092273517.93
task_categories:
- automatic-speech-recognition
- text-to-speech
- audio-classification
language:
- ff
pretty_name: Fula Multidialectal Speech Corpora
size_categories:
- 100K<n<1M
---
# Dataset Card for "FulaSpeechCorporaNew"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 1,518 | [
[
-0.026519775390625,
-0.0194091796875,
-0.001956939697265625,
0.0275421142578125,
-0.006893157958984375,
0.01549530029296875,
0.025909423828125,
-0.00978851318359375,
0.0616455078125,
0.044525146484375,
-0.07501220703125,
-0.042877197265625,
-0.042236328125,
... |
orgcatorg/israel-hamas-gaza-cnn | 2023-11-02T04:02:13.000Z | [
"region:us"
] | orgcatorg | null | null | 0 | 69 | 2023-10-10T14:16:59 | ---
dataset_info:
features:
- name: '@type'
dtype: string
- name: headline
dtype: string
- name: url
dtype: string
- name: dateModified
dtype: string
- name: datePublished
dtype: string
- name: mainEntityOfPage
dtype: string
- name: publisher
dtype: string
- name: author
dtype: string
- name: articleBody
dtype: string
- name: image
dtype: string
configs:
- config_name: default
data_files:
- split: train
path: data-*
---
# Dataset Card for "israel-hamas-gaza-cnn"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 669 | [
[
-0.043212890625,
-0.0033206939697265625,
0.0157928466796875,
0.02264404296875,
-0.02557373046875,
-0.0009622573852539062,
0.0157928466796875,
-0.00885009765625,
0.04888916015625,
0.0204620361328125,
-0.04632568359375,
-0.0628662109375,
-0.06298828125,
-0.025... |
surajbijjahalli/semantic_seg_ATL | 2023-10-23T00:39:01.000Z | [
"region:us"
] | surajbijjahalli | null | null | 0 | 69 | 2023-10-23T00:29:00 | ---
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype: image
splits:
- name: train
num_bytes: 3251366124.718
num_examples: 1407
download_size: 3238840408
dataset_size: 3251366124.718
---
# Dataset Card for "semantic_seg_ATL"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 408 | [
[
-0.035247802734375,
-0.042022705078125,
0.0233154296875,
0.0026416778564453125,
-0.015472412109375,
-0.012237548828125,
0.0162200927734375,
-0.02520751953125,
0.0601806640625,
0.042388916015625,
-0.06268310546875,
-0.08013916015625,
-0.050323486328125,
-0.01... |
Yehoon/arc_hella | 2023-10-23T05:06:48.000Z | [
"region:us"
] | Yehoon | null | null | 0 | 69 | 2023-10-23T05:06:44 | ---
dataset_info:
features:
- name: question
dtype: string
- name: options
sequence: string
- name: answer
dtype: string
- name: label
dtype: string
splits:
- name: train
num_bytes: 8854506
num_examples: 12418
download_size: 5407350
dataset_size: 8854506
---
# Dataset Card for "arc_hella"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 464 | [
[
-0.043182373046875,
-0.01351165771484375,
0.015228271484375,
0.011566162109375,
-0.019561767578125,
0.00370025634765625,
0.0328369140625,
-0.00701141357421875,
0.06866455078125,
0.044677734375,
-0.05535888671875,
-0.0611572265625,
-0.029449462890625,
-0.0207... |
jon-tow/okapi_mmlu | 2023-10-24T00:03:08.000Z | [
"language:ar",
"language:bn",
"language:ca",
"language:da",
"language:de",
"language:es",
"language:eu",
"language:fr",
"language:gu",
"language:hi",
"language:hr",
"language:hu",
"language:hy",
"language:id",
"language:it",
"language:kn",
"language:ml",
"language:mr",
"language:... | jon-tow | Measuring Massive Multitask Language Understanding by Dan Hendrycks, Collin Burns, Steven Basart, Andy Zou, Mantas Mazeika, Dawn Song, and Jacob Steinhardt (ICLR 2021). | @article{hendryckstest2021,
title={Measuring Massive Multitask Language Understanding},
author={Dan Hendrycks and Collin Burns and Steven Basart and Andy Zou and Mantas Mazeika and Dawn Song and Jacob Steinhardt},
journal={Proceedings of the International Conference on Learning Representations (ICLR)},
year={2021}
} | 0 | 69 | 2023-10-23T22:18:44 | ---
language:
- ar
- bn
- ca
- da
- de
- es
- eu
- fr
- gu
- hi
- hr
- hu
- hy
- id
- it
- kn
- ml
- mr
- ne
- nl
- pt
- ro
- ru
- sk
- sr
- sv
- ta
- te
- uk
- vi
license: cc-by-nc-4.0
---
# okapi_mmlu
<!-- Provide a quick summary of the dataset. -->
Multilingual translation of [Measuring Massive Multitask Language Understanding (MMLU)](https://arxiv.org/abs/2009.03300).
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
MMLU is a benchmark that measures a text model’s multitask accuracy.
The test covers 57 tasks including elementary mathematics, US history, computer
science, law, and more. To attain high accuracy on this test, models must possess
extensive world knowledge and problem solving ability. By comprehensively evaluating the breadth and depth of a model’s academic and professional understanding, MMLU can be used to analyze models across many tasks and to identify important shortcomings.
- **Curated by:** Dac Lai, Viet and Van Nguyen, Chien and Ngo, Nghia Trung and Nguyen, Thuat and Dernoncourt, Franck and Rossi, Ryan A and Nguyen, Thien Huu
- **License:** The datasets are CC BY NC 4.0 (allowing only non-commercial use).
### Dataset Sources
<!-- Provide the basic links for the dataset. -->
- **Repository:** http://nlp.uoregon.edu/download/okapi-eval/datasets/
- **Paper:** Okapi ([Lai et al., 2023](https://arxiv.org/abs/2307.16039))
## Citation
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
```bibtex
@article{dac2023okapi,
title={Okapi: Instruction-tuned Large Language Models in Multiple Languages with Reinforcement Learning from Human Feedback},
author={Dac Lai, Viet and Van Nguyen, Chien and Ngo, Nghia Trung and Nguyen, Thuat and Dernoncourt, Franck and Rossi, Ryan A and Nguyen, Thien Huu},
journal={arXiv e-prints},
pages={arXiv--2307},
year={2023}
}
```
```bibtex
@article{hendryckstest2021,
title={Measuring Massive Multitask Language Understanding},
author={Dan Hendrycks and Collin Burns and Steven Basart and Andy Zou and Mantas Mazeika and Dawn Song and Jacob Steinhardt},
journal={Proceedings of the International Conference on Learning Representations (ICLR)},
year={2021}
}
```
| 2,306 | [
[
-0.014495849609375,
-0.052520751953125,
0.033477783203125,
0.0173797607421875,
0.005706787109375,
-0.007534027099609375,
-0.036376953125,
-0.0239410400390625,
-0.00307464599609375,
0.02099609375,
-0.040924072265625,
-0.0283050537109375,
-0.05120849609375,
0.... |
ganchengguang/resume_seven_class | 2023-05-30T08:11:48.000Z | [
"license:apache-2.0",
"arxiv:2208.03219",
"arxiv:2209.09450",
"region:us"
] | ganchengguang | null | null | 0 | 68 | 2022-05-29T06:31:44 | ---
license: apache-2.0
---
This is a resume sentence classification dataset constructed from resume text (https://www.kaggle.com/datasets/oo7kartik/resume-text-batch).
The dataset has seven categories (including experience, education, knowledge, project, and others) and three element labels (header, content, meta).
Because the dataset was introduced in a published paper, please cite the following paper if you use this dataset in a paper or other work:
https://arxiv.org/abs/2208.03219
The dataset is also used in the following article:
https://arxiv.org/abs/2209.09450
| 524 | [
[
-0.00205230712890625,
-0.0328369140625,
0.02178955078125,
0.0178985595703125,
-0.0016336441040039062,
-0.0126190185546875,
0.0030956268310546875,
-0.0010786056518554688,
0.00469207763671875,
0.07513427734375,
-0.03289794921875,
-0.0589599609375,
-0.0203552246093... |
AhmedSSabir/Japanese-wiki-dump-sentence-dataset | 2023-07-11T12:22:09.000Z | [
"task_categories:sentence-similarity",
"task_categories:text-classification",
"task_categories:text-generation",
"size_categories:1M<n<10M",
"language:ja",
"region:us"
] | AhmedSSabir | null | null | 2 | 68 | 2022-06-08T11:34:04 | ---
task_categories:
- sentence-similarity
- text-classification
- text-generation
language:
- ja
size_categories:
- 1M<n<10M
---
# Dataset
5M (5121625) clean Japanese full sentence with the context. This dataset can be used to learn unsupervised semantic similarity, etc. | 274 | [
[
-0.033905029296875,
-0.024871826171875,
0.021392822265625,
0.00858306884765625,
-0.05810546875,
-0.037628173828125,
-0.013427734375,
-0.0076141357421875,
0.01267242431640625,
0.07208251953125,
-0.06353759765625,
-0.051910400390625,
-0.028564453125,
0.0237731... |
eraldoluis/faquad | 2023-01-23T08:45:41.000Z | [
"task_categories:question-answering",
"task_ids:extractive-qa",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:n<1K",
"source_datasets:extended|wikipedia",
"language:pt",
"license:cc-by-4.0",
"region:us"
] | eraldoluis | Academic secretaries and faculty members of higher education institutions face a common problem:
the abundance of questions sent by academics
whose answers are found in available institutional documents.
The official documents produced by Brazilian public universities are vast and dispersed,
which discourages students from searching further for answers in such sources.
In order to lessen this problem, we present FaQuAD:
a novel machine reading comprehension dataset
in the domain of Brazilian higher education institutions.
FaQuAD follows the format of SQuAD (Stanford Question Answering Dataset) [Rajpurkar et al. 2016].
It comprises 900 questions about 249 reading passages (paragraphs),
which were taken from 18 official documents of a computer science college
from a Brazilian federal university
and 21 Wikipedia articles related to Brazilian higher education system.
As far as we know, this is the first Portuguese reading comprehension dataset in this format. | @INPROCEEDINGS{
8923668,
author={Sayama, Hélio Fonseca and Araujo, Anderson Viçoso and Fernandes, Eraldo Rezende},
booktitle={2019 8th Brazilian Conference on Intelligent Systems (BRACIS)},
title={FaQuAD: Reading Comprehension Dataset in the Domain of Brazilian Higher Education},
year={2019},
volume={},
number={},
pages={443-448},
doi={10.1109/BRACIS.2019.00084}
} | 6 | 68 | 2022-09-06T11:05:01 | ---
pretty_name: FaQuAD
annotations_creators:
- expert-generated
language_creators:
- found
language:
- pt
license:
- cc-by-4.0
multilinguality:
- monolingual
size_categories:
- n<1K
source_datasets:
- extended|wikipedia
task_categories:
- question-answering
task_ids:
- extractive-qa
# paperswithcode_id: faquad
train-eval-index:
- config: plain_text
task: question-answering
task_id: extractive_question_answering
splits:
train_split: train
eval_split: validation
col_mapping:
question: question
context: context
answers:
text: text
answer_start: answer_start
metrics:
- type: squad
name: SQuAD
---
# Dataset Card for FaQuAD
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://github.com/liafacom/faquad
- **Repository:** https://github.com/liafacom/faquad
- **Paper:** https://ieeexplore.ieee.org/document/8923668/
<!-- - **Leaderboard:** -->
- **Point of Contact:** Eraldo R. Fernandes <eraldoluis@gmail.com>
### Dataset Summary
Academic secretaries and faculty members of higher education institutions face a common problem:
the abundance of questions sent by academics
whose answers are found in available institutional documents.
The official documents produced by Brazilian public universities are vast and dispersed,
which discourages students from searching further for answers in such sources.
In order to lessen this problem, we present FaQuAD:
a novel machine reading comprehension dataset
in the domain of Brazilian higher education institutions.
FaQuAD follows the format of SQuAD (Stanford Question Answering Dataset) [Rajpurkar et al. 2016].
It comprises 900 questions about 249 reading passages (paragraphs),
which were taken from 18 official documents of a computer science college
from a Brazilian federal university
and 21 Wikipedia articles related to Brazilian higher education system.
As far as we know, this is the first Portuguese reading comprehension dataset in this format.
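Since FaQuAD follows the SQuAD format, each answer is stored as its text plus the character offset where it starts inside the reading passage. A minimal sketch of how the two fields relate (the context and answer values below are illustrative, not taken from the dataset):

```python
# Illustrative SQuAD-style answer encoding (made-up Portuguese values).
context = "A FaQuAD é um conjunto de dados de compreensão de leitura."
answer = {"text": ["conjunto de dados"], "answer_start": [14]}

start = answer["answer_start"][0]
text = answer["text"][0]

# `answer_start` counts characters in the context, so the answer text
# can always be recovered by slicing:
print(context[start:start + len(text)])  # conjunto de dados
```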
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
| name |train|validation|
|---------|----:|----:|
|faquad|837|63|
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
| 4,306 | [
[
-0.05059814453125,
-0.0675048828125,
0.00017690658569335938,
0.0155792236328125,
-0.00548553466796875,
0.00008940696716308594,
0.0120697021484375,
0.0015668869018554688,
0.0189056396484375,
0.050506591796875,
-0.051727294921875,
-0.056976318359375,
-0.0366516113... |
melismeric/spotify_song_album_covers | 2022-09-21T16:31:13.000Z | [
"region:us"
] | melismeric | null | null | 1 | 68 | 2022-09-21T16:23:32 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
nielsr/countbench | 2023-03-07T20:53:01.000Z | [
"region:us"
] | nielsr | null | null | 1 | 68 | 2023-03-07T20:52:56 | ---
dataset_info:
features:
- name: image_url
dtype: string
- name: text
dtype: string
- name: number
dtype: int64
- name: image
dtype: image
splits:
- name: train
num_bytes: 23622859.0
num_examples: 540
download_size: 23350530
dataset_size: 23622859.0
---
# Dataset Card for "countbench"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 463 | [
[
-0.0499267578125,
-0.0044708251953125,
0.0115509033203125,
0.0210418701171875,
-0.0193023681640625,
-0.00272369384765625,
0.025299072265625,
-0.01479339599609375,
0.068359375,
0.033111572265625,
-0.05792236328125,
-0.0556640625,
-0.0330810546875,
-0.02227783... |
tomas-gajarsky/cifar100-lt | 2023-06-24T20:25:07.000Z | [
"task_categories:image-classification",
"annotations_creators:crowdsourced",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:cifar100",
"language:en",
"license:apache-2.0",
"region:us"
] | tomas-gajarsky | The CIFAR-100-LT dataset is comprised of under 60,000 color images, each measuring 32x32 pixels,
distributed across 100 distinct classes.
The number of samples within each class decreases exponentially with factors of 10 and 100.
The dataset includes 10,000 test images, with 100 images per class,
and fewer than 50,000 training images.
These 100 classes are further organized into 20 overarching superclasses.
Each image is assigned two labels: a fine label denoting the specific class,
and a coarse label representing the associated superclass. | @TECHREPORT{Krizhevsky09learningmultiple,
author = {Alex Krizhevsky},
title = {Learning multiple layers of features from tiny images},
institution = {},
year = {2009}
} | 0 | 68 | 2023-05-05T15:43:58 | ---
annotations_creators:
- crowdsourced
language_creators:
- found
language:
- en
license: apache-2.0
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- cifar100
task_categories:
- image-classification
task_ids: []
paperswithcode_id: cifar-100
pretty_name: Cifar100-LT
dataset_info:
features:
- name: img
dtype: image
- name: fine_label
dtype:
class_label:
names:
'0': apple
'1': aquarium_fish
'2': baby
'3': bear
'4': beaver
'5': bed
'6': bee
'7': beetle
'8': bicycle
'9': bottle
'10': bowl
'11': boy
'12': bridge
'13': bus
'14': butterfly
'15': camel
'16': can
'17': castle
'18': caterpillar
'19': cattle
'20': chair
'21': chimpanzee
'22': clock
'23': cloud
'24': cockroach
'25': couch
          '26': crab
'27': crocodile
'28': cup
'29': dinosaur
'30': dolphin
'31': elephant
'32': flatfish
'33': forest
'34': fox
'35': girl
'36': hamster
'37': house
'38': kangaroo
'39': keyboard
'40': lamp
'41': lawn_mower
'42': leopard
'43': lion
'44': lizard
'45': lobster
'46': man
'47': maple_tree
'48': motorcycle
'49': mountain
'50': mouse
'51': mushroom
'52': oak_tree
'53': orange
'54': orchid
'55': otter
'56': palm_tree
'57': pear
'58': pickup_truck
'59': pine_tree
'60': plain
'61': plate
'62': poppy
'63': porcupine
'64': possum
'65': rabbit
'66': raccoon
'67': ray
'68': road
'69': rocket
'70': rose
'71': sea
'72': seal
'73': shark
'74': shrew
'75': skunk
'76': skyscraper
'77': snail
'78': snake
'79': spider
'80': squirrel
'81': streetcar
'82': sunflower
'83': sweet_pepper
'84': table
'85': tank
'86': telephone
'87': television
'88': tiger
'89': tractor
'90': train
'91': trout
'92': tulip
'93': turtle
'94': wardrobe
'95': whale
'96': willow_tree
'97': wolf
'98': woman
'99': worm
- name: coarse_label
dtype:
class_label:
names:
'0': aquatic_mammals
'1': fish
'2': flowers
'3': food_containers
'4': fruit_and_vegetables
'5': household_electrical_devices
'6': household_furniture
'7': insects
'8': large_carnivores
'9': large_man-made_outdoor_things
'10': large_natural_outdoor_scenes
'11': large_omnivores_and_herbivores
'12': medium_mammals
'13': non-insect_invertebrates
'14': people
'15': reptiles
'16': small_mammals
'17': trees
'18': vehicles_1
'19': vehicles_2
config_name: cifar100
splits:
- name: train
- name: test
num_bytes: 22605519
num_examples: 10000
download_size: 169001437
---
# Dataset Card for CIFAR-100-LT (Long Tail)
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Additional Information](#additional-information)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [CIFAR Datasets](https://www.cs.toronto.edu/~kriz/cifar.html)
- **Paper:** [Paper imbalanced example](https://openaccess.thecvf.com/content_CVPR_2019/papers/Cui_Class-Balanced_Loss_Based_on_Effective_Number_of_Samples_CVPR_2019_paper.pdf)
- **Leaderboard:** [r-10](https://paperswithcode.com/sota/long-tail-learning-on-cifar-100-lt-r-10) [r-100](https://paperswithcode.com/sota/long-tail-learning-on-cifar-100-lt-r-100)
### Dataset Summary
The CIFAR-100-LT imbalanced dataset is comprised of under 60,000 color images, each measuring 32x32 pixels,
distributed across 100 distinct classes.
The number of samples within each class decreases exponentially with factors of 10 and 100.
The dataset includes 10,000 test images, with 100 images per class,
and fewer than 50,000 training images.
These 100 classes are further organized into 20 overarching superclasses.
Each image is assigned two labels: a fine label denoting the specific class,
and a coarse label representing the associated superclass.
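The exponential decay described above can be made concrete with a short sketch (an illustration of the standard long-tail profile, not the official generation script), assuming the balanced CIFAR-100 maximum of 500 training images per class:

```python
# Illustrative sketch of the long-tail class distribution (not the official
# generation code): per-class sample counts decay exponentially from the head
# class to the tail class, n_i = n_max * ratio^(-i / (C - 1)).
def long_tail_counts(n_max=500, num_classes=100, imbalance_ratio=100):
    return [
        int(n_max * imbalance_ratio ** (-i / (num_classes - 1)))
        for i in range(num_classes)
    ]

counts = long_tail_counts()
print(counts[0], counts[-1])  # 500 5  (head class 500 images, tail class 5)
print(sum(counts) < 50000)    # True   (fewer than 50,000 training images)
```

With an imbalance ratio of 10 the tail class keeps 50 images instead; either way the total stays below the balanced 50,000.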
### Supported Tasks and Leaderboards
- `image-classification`: The goal of this task is to classify a given image into one of 100 classes. The leaderboard is available [here](https://paperswithcode.com/sota/long-tail-learning-on-cifar-100-lt-r-100).
### Languages
English
## Dataset Structure
### Data Instances
A sample from the training set is provided below:
```
{
'img': <PIL.PngImagePlugin.PngImageFile image mode=RGB size=32x32 at 0x2767F58E080>, 'fine_label': 19,
'coarse_label': 11
}
```
### Data Fields
- `img`: A `PIL.Image.Image` object containing the 32x32 image. Note that when accessing the image column: `dataset[0]["img"]` the image file is automatically decoded. Decoding of a large number of image files might take a significant amount of time. Thus it is important to first query the sample index before the `"img"` column, *i.e.* `dataset[0]["img"]` should **always** be preferred over `dataset["img"][0]`
- `fine_label`: an `int` classification label with the following mapping:
`0`: apple
`1`: aquarium_fish
`2`: baby
`3`: bear
`4`: beaver
`5`: bed
`6`: bee
`7`: beetle
`8`: bicycle
`9`: bottle
`10`: bowl
`11`: boy
`12`: bridge
`13`: bus
`14`: butterfly
`15`: camel
`16`: can
`17`: castle
`18`: caterpillar
`19`: cattle
`20`: chair
`21`: chimpanzee
`22`: clock
`23`: cloud
`24`: cockroach
`25`: couch
  `26`: crab
`27`: crocodile
`28`: cup
`29`: dinosaur
`30`: dolphin
`31`: elephant
`32`: flatfish
`33`: forest
`34`: fox
`35`: girl
`36`: hamster
`37`: house
`38`: kangaroo
`39`: keyboard
`40`: lamp
`41`: lawn_mower
`42`: leopard
`43`: lion
`44`: lizard
`45`: lobster
`46`: man
`47`: maple_tree
`48`: motorcycle
`49`: mountain
`50`: mouse
`51`: mushroom
`52`: oak_tree
`53`: orange
`54`: orchid
`55`: otter
`56`: palm_tree
`57`: pear
`58`: pickup_truck
`59`: pine_tree
`60`: plain
`61`: plate
`62`: poppy
`63`: porcupine
`64`: possum
`65`: rabbit
`66`: raccoon
`67`: ray
`68`: road
`69`: rocket
`70`: rose
`71`: sea
`72`: seal
`73`: shark
`74`: shrew
`75`: skunk
`76`: skyscraper
`77`: snail
`78`: snake
`79`: spider
`80`: squirrel
`81`: streetcar
`82`: sunflower
`83`: sweet_pepper
`84`: table
`85`: tank
`86`: telephone
`87`: television
`88`: tiger
`89`: tractor
`90`: train
`91`: trout
`92`: tulip
`93`: turtle
`94`: wardrobe
`95`: whale
`96`: willow_tree
`97`: wolf
`98`: woman
`99`: worm
- `coarse_label`: an `int` coarse classification label with the following mapping:
`0`: aquatic_mammals
`1`: fish
`2`: flowers
`3`: food_containers
`4`: fruit_and_vegetables
`5`: household_electrical_devices
`6`: household_furniture
`7`: insects
`8`: large_carnivores
`9`: large_man-made_outdoor_things
`10`: large_natural_outdoor_scenes
`11`: large_omnivores_and_herbivores
`12`: medium_mammals
`13`: non-insect_invertebrates
`14`: people
`15`: reptiles
`16`: small_mammals
`17`: trees
`18`: vehicles_1
`19`: vehicles_2
### Data Splits
| name |train|test|
|----------|----:|---------:|
|cifar100|<50000| 10000|
### Licensing Information
Apache License 2.0
### Citation Information
```
@TECHREPORT{Krizhevsky09learningmultiple,
author = {Alex Krizhevsky},
title = {Learning multiple layers of features from tiny images},
institution = {},
year = {2009}
}
```
### Contributions
Thanks to [@gchhablani](https://github.com/gchablani) and all contributors for adding the original balanced cifar100 dataset. | 8,961 | [
[
-0.0634765625,
-0.022918701171875,
0.005825042724609375,
0.01227569580078125,
-0.0245513916015625,
0.007598876953125,
-0.0138092041015625,
-0.039642333984375,
0.034454345703125,
0.01143646240234375,
-0.036895751953125,
-0.0555419921875,
-0.04693603515625,
0.... |
JasperLS/prompt-injections | 2023-05-16T17:16:21.000Z | [
"region:us"
] | JasperLS | null | null | 5 | 68 | 2023-05-16T17:16:15 | ---
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype: int64
splits:
- name: train
num_bytes: 71720
num_examples: 546
- name: test
num_bytes: 15981
num_examples: 116
download_size: 51215
dataset_size: 87701
---
# Dataset Card for "deberta-v3-base-injection-dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 459 | [
[
-0.0302581787109375,
-0.032196044921875,
0.03021240234375,
0.023956298828125,
-0.025482177734375,
0.0006585121154785156,
0.0419921875,
-0.01343536376953125,
0.048919677734375,
0.041534423828125,
-0.033355712890625,
-0.06768798828125,
-0.05047607421875,
-0.02... |
codeparrot/self-instruct-starcoder | 2023-10-23T12:13:18.000Z | [
"task_categories:text2text-generation",
"size_categories:1K<n<10K",
"language:en",
"license:bigscience-openrail-m",
"code",
"arxiv:2212.10560",
"arxiv:2305.06161",
"arxiv:1908.10084",
"doi:10.57967/hf/0790",
"region:us"
] | codeparrot | null | null | 30 | 68 | 2023-05-22T14:50:58 | ---
dataset_info:
features:
- name: instruction
dtype: string
- name: output
dtype: string
- name: most_similar
dtype: string
- name: avg_similarity_score
dtype: float64
splits:
- name: curated
num_bytes: 1937514
num_examples: 771
- name: raw
num_bytes: 12969008
num_examples: 5003
- name: unique
num_bytes: 786771
num_examples: 308
- name: compile
num_bytes: 9048805
num_examples: 3549
download_size: 10935008
dataset_size: 24742098
tags:
- code
size_categories:
- 1K<n<10K
task_categories:
- text2text-generation
license: bigscience-openrail-m
language:
- en
---
# Self-instruct-starcoder
## Table of Contents
- [Summary](#summary)
- [Our approach](#our-approach)
- [Dataset generation](#dataset-generation)
- [Dataset quality](#dataset-quality)
- [Post-processing](#post-processing)
- [Self-consistency](#self-consistency)
- [Uniqueness](#uniqueness)
- [Compile](#compile)
- [Dataset structure](#dataset-structure)
- [Space](#space)
## Summary
Self-instruct-starcoder is a dataset that was generated by prompting StarCoder to generate new instructions based on some human-written seed instructions.
The underlying process is explained in the paper [self-instruct](https://arxiv.org/abs/2212.10560). This algorithm gave birth to famous machine-generated
datasets such as [Alpaca](https://github.com/tatsu-lab/stanford_alpaca) and [Code Alpaca](https://github.com/sahil280114/codealpaca), two datasets
obtained by prompting OpenAI's `text-davinci-003` engine.
## Our approach
While our method is similar to self-instruct and Stanford Alpaca, we included some relevant modifications to the pipeline to account for what we wanted.
- Rather than using `text-davinci-003`, we chose to prompt [StarCoder](https://arxiv.org/abs/2305.06161) which is a 10x smaller LLM developed for code use cases. However, it is possible to use any decoder based LLM on the hub.
- We changed our seed tasks in order to have the model generate code-related tasks. We completed the seed tasks from Code Alpaca with 20 additional algorithm instructions.
- We switched from the generation format `"instruction":` - `"input":` - `"output":` to the format `"instruction":` - `"output":` by concatenating each instruction and its input under the
keyword `instruction`. We did so because the previous prompting format tended to make the model generate test cases as input and their solution as output, which is not what we wanted.
- Finally, we incorporated the possibility to change the trigger word in the prompt. We thus replaced the `"instruction" :` keyword by `"Here is the correct solution to the problem ":` which
resulted in much better generated instructions.
## Dataset generation
The generation of the dataset was time consuming and we chose our parameters to limit the computational burden of our method.
- Number of examples in context : 4
- 2 seed instructions
- 2 machine generated instructions
- Number of instructions to generate : 5000
- Stop words used in the generation : ["\n20", "20.", "20 ."]
- Similarity threshold for rouge score : 0.7
## Dataset quality
StarCoder, while being a great model, is not as capable as `text-davinci-003`. During generation, the model quickly reaches a ceiling in terms of creativity.
There are many instructions that are similar to each other, but this should not be a problem since they are not phrased the same way.
## Post-processing
Post-processing is an important part of the pipeline since it improves the quality of the dataset, even though it implies getting rid of some examples. First we
need to identify what we want to avoid:
- A generated solution which does not answer the corresponding instruction
- An instruction that is too similar to another one.
### Self-consistency
We imagined a process that we named **self-consistency**. The idea is to reverse-prompt the model to see if it can generate a sound instruction that corresponds to the
solution (output) it is prompted with. This is a particularly difficult few-shot task, and unfortunately StarCoder does not perform incredibly well on it. With a few-shot parameter of `4`
(all examples being seed tasks), the model is able to recover 1135 instructions out of 5003, which amounts to 22.6% of the raw dataset. Fortunately, the inability of StarCoder to generate instructions for some
solutions does not mean we should get rid of them. For the solutions (outputs) with generated instructions, we can compare these with the ground truth. For that we can use [Sentence-BERT](https://arxiv.org/abs/1908.10084), because the comparison should focus on the meaning
rather than the word-to-word similarity ratio. We have about 771 instructions (~68%) with a similarity score >= 0.5 with their ground truth. These can be seen as high-quality examples; they form the `curated` set.
<p align="center">
<img src="https://huggingface.co/datasets/codeparrot/self-instruct-starcoder/resolve/main/output.png" alt="drawing" width="300", height="300"/>
</p>
### Uniqueness
Another approach that can be used to clean the raw dataset is to focus on distinct instructions. For a given instruction, we go through all the instructions generated before it to see if there is one with a similarity score >= 0.5.
If that is the case, we remove the instruction. This process removes about 94% of the raw dataset; the remaining instructions form the `unique` set.
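The uniqueness filter can be sketched as a greedy pass. This is a simplified stand-in: a Jaccard token-overlap score replaces the actual sentence-embedding similarity, and each candidate is compared against the instructions already kept:

```python
# Simplified sketch of the uniqueness filter: keep an instruction only if its
# similarity to every previously kept instruction stays below the threshold.
# Jaccard token overlap stands in for the sentence-embedding similarity used
# in the actual pipeline.
def jaccard(a, b):
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / len(ta | tb)

def unique_filter(instructions, threshold=0.5):
    kept = []
    for inst in instructions:
        if all(jaccard(inst, prev) < threshold for prev in kept):
            kept.append(inst)
    return kept

instructions = [
    "Write a function that reverses a string.",
    "Write a function that reverses a linked list.",
    "Implement binary search over a sorted array.",
]
print(unique_filter(instructions))
```

Here the second instruction overlaps too heavily with the first and is dropped, while the third survives.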
### Compile
We also decided to build a set which contains solely the examples featuring code written in Python 3 that does not raise a compilation error.
## Dataset structure
```python
from datasets import load_dataset
dataset = load_dataset("codeparrot/self-instruct-starcoder")
DatasetDict({
compile: Dataset({
features: ['instruction', 'output', 'most_similar', 'avg_similarity_score'],
num_rows: 3549
})
curated: Dataset({
features: ['instruction', 'output', 'most_similar', 'avg_similarity_score'],
num_rows: 771
})
raw: Dataset({
features: ['instruction', 'output', 'most_similar', 'avg_similarity_score'],
num_rows: 5003
})
unique: Dataset({
features: ['instruction', 'output', 'most_similar', 'avg_similarity_score'],
num_rows: 308
})
})
```
|Field|Type|Description|
|---|---|---|
|instruction|string|Instruction|
|output|string|Answer to the instruction|
|most_similar|string|Dictionary containing the 10 most similar instructions generated before the current instruction, along with the similarity scores|
|avg_similarity_score|float64| Average similarity score|
## Additional resources
- [Space(self-instruct-starcoder)](https://huggingface.co/spaces/codeparrot/self-instruct-starcoder)
- [Github Repository](https://github.com/ArmelRandy/Self-instruct)
## Citation
```
@misc{title={Self-Instruct-StarCoder},
author={Zebaze, Armel Randy},
doi={https://doi.org/10.57967/hf/0790},
}
``` | 6,951 | [
[
-0.037261962890625,
-0.049041748046875,
0.0157012939453125,
0.0016965866088867188,
-0.01183319091796875,
-0.021514892578125,
-0.01934814453125,
-0.0076904296875,
0.0199432373046875,
0.05169677734375,
-0.05157470703125,
-0.04644775390625,
-0.050628662109375,
... |
clarin-knext/quora-pl | 2023-06-07T08:16:00.000Z | [
"language:pl",
"arxiv:2305.19840",
"region:us"
] | clarin-knext | null | null | 0 | 68 | 2023-06-06T22:16:05 | ---
language:
- pl
---
Part of **BEIR-PL: Zero Shot Information Retrieval Benchmark for the Polish Language**.
Link to arxiv: https://arxiv.org/pdf/2305.19840.pdf
Contact: konrad.wojtasik@pwr.edu.pl | 201 | [
[
-0.0153961181640625,
-0.0628662109375,
0.03546142578125,
0.0164031982421875,
-0.0221710205078125,
-0.0103607177734375,
-0.01160430908203125,
-0.034515380859375,
-0.0013275146484375,
0.0286102294921875,
-0.03826904296875,
-0.048126220703125,
-0.0290069580078125,
... |
clarin-knext/scidocs-pl | 2023-06-07T08:10:24.000Z | [
"language:pl",
"arxiv:2305.19840",
"region:us"
] | clarin-knext | null | null | 0 | 68 | 2023-06-06T22:48:25 | ---
language:
- pl
---
Part of **BEIR-PL: Zero Shot Information Retrieval Benchmark for the Polish Language**.
Link to arxiv: https://arxiv.org/pdf/2305.19840.pdf
Contact: konrad.wojtasik@pwr.edu.pl | 201 | [
[
-0.0153961181640625,
-0.0628662109375,
0.035400390625,
0.016357421875,
-0.022186279296875,
-0.0103759765625,
-0.01158905029296875,
-0.034515380859375,
-0.0013341903686523438,
0.0286102294921875,
-0.03826904296875,
-0.048126220703125,
-0.0290069580078125,
-0.... |
tasksource/symbolic-instruction-tuning-sql | 2023-06-15T13:19:03.000Z | [
"task_categories:text2text-generation",
"language:en",
"license:mit",
"arxiv:2304.07995",
"region:us"
] | tasksource | null | null | 1 | 68 | 2023-06-15T13:15:05 | ---
dataset_info:
features:
- name: inputs
dtype: string
- name: targets
dtype: string
splits:
- name: train
num_bytes: 290434558
num_examples: 200000
download_size: 148817199
dataset_size: 290434558
license: mit
task_categories:
- text2text-generation
language:
- en
---
# Dataset Card for "symbolic-instruction-tuning-sql"
The original component (i.e., without Flan) of the symbolic instruction tuning dataset, with Flan-style column names.
[From Zero to Hero: Examining the Power of Symbolic Tasks in Instruction Tuning](https://arxiv.org/abs/2304.07995). The training code can be found [here](https://github.com/sail-sg/symbolic-instruction-tuning).
```
@article{liu2023zero,
title={From Zero to Hero: Examining the Power of Symbolic Tasks in Instruction Tuning},
author={Liu, Qian and Zhou, Fan and Jiang, Zhengbao and Dou, Longxu and Lin, Min},
eprint={2304.07995},
year={2023}
}
``` | 916 | [
[
-0.007289886474609375,
-0.04034423828125,
0.019073486328125,
-0.0099639892578125,
-0.035308837890625,
-0.00791168212890625,
-0.0107574462890625,
-0.005092620849609375,
0.0200958251953125,
0.051971435546875,
-0.0833740234375,
-0.050018310546875,
-0.03787231445312... |
ChrisHayduk/Llama-2-SQL-and-Code-Dataset | 2023-09-29T04:18:17.000Z | [
"region:us"
] | ChrisHayduk | null | null | 6 | 68 | 2023-07-18T18:28:31 | ---
dataset_info:
features:
- name: instruction
dtype: string
- name: input
dtype: string
- name: output
dtype: string
- name: table
dtype: string
splits:
- name: train
num_bytes: 46640417
num_examples: 128351
- name: eval
num_bytes: 1756894
num_examples: 1302
download_size: 18298063
dataset_size: 48397311
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: eval
path: data/eval-*
---
# Dataset Card for "Llama-2-SQL-and-Code-Dataset"
This dataset is intended to provide LLaMA 2 with improved coding and instruction-following capabilities, with a specific focus on SQL generation.
The dataset is in Alpaca Instruct format. Please be sure to provide the instruction and input in the prompt to the model, along with any prompt text you would like to place around those inputs.
In the train split, please ignore the table column. The eval split provides example tables so that the actual executable SQL performance can be compared on a number of SQL generation tasks.
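As one way to run generated SQL against an eval table, the standard-library `sqlite3` module works as well (a hedged sketch: the table layout, column names, and query below are invented examples, not taken from the dataset):

```python
import json
import sqlite3

# Hedged sketch: load a JSON table into an in-memory SQLite database and run a
# (model-generated) SQL query against it. The table and query are invented
# examples; real eval rows carry their own schemas.
table_json = json.loads(
    '{"name": "people", "columns": ["name", "age"],'
    ' "rows": [["Ada", 36], ["Grace", 85]]}'
)

conn = sqlite3.connect(":memory:")
cols = ", ".join(table_json["columns"])
conn.execute(f"CREATE TABLE {table_json['name']} ({cols})")
conn.executemany(
    f"INSERT INTO {table_json['name']} VALUES (?, ?)", table_json["rows"]
)

generated_sql = "SELECT name FROM people WHERE age > 50"
result = conn.execute(generated_sql).fetchall()
print(result)  # [('Grace',)]
```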
To use the tables, they can be loaded as JSON objects and passed to a SQL execution tool such as sqlglot. | 1,172 | [
[
-0.0154571533203125,
-0.06146240234375,
0.01084136962890625,
0.0222015380859375,
-0.0592041015625,
0.01763916015625,
0.0140838623046875,
-0.0112762451171875,
0.0256195068359375,
0.06298828125,
-0.050323486328125,
-0.03753662109375,
-0.0254058837890625,
-0.00... |
openaccess-ai-collective/oo-gpt4-filtered | 2023-08-05T04:00:44.000Z | [
"region:us"
] | openaccess-ai-collective | null | null | 2 | 68 | 2023-08-05T03:59:54 | Entry not found | 15 | [
[
-0.0213775634765625,
-0.01497650146484375,
0.05718994140625,
0.02880859375,
-0.0350341796875,
0.046478271484375,
0.052490234375,
0.00507354736328125,
0.051361083984375,
0.0170135498046875,
-0.052093505859375,
-0.01497650146484375,
-0.0604248046875,
0.0379028... |
mystic-leung/medical_cord19 | 2023-09-14T03:00:13.000Z | [
"task_categories:summarization",
"language:aa",
"license:openrail",
"medical",
"region:us"
] | mystic-leung | null | null | 2 | 68 | 2023-08-22T13:35:59 | ---
license: openrail
task_categories:
- summarization
language:
- aa
tags:
- medical
---
## Description
This dataset contains a large number of biomedical abstracts and their corresponding summaries. | 194 | [
[
0.0017557144165039062,
-0.01528167724609375,
0.0254974365234375,
0.031494140625,
-0.020416259765625,
0.004344940185546875,
0.0124664306640625,
-0.01537322998046875,
0.049285888671875,
0.048492431640625,
-0.03369140625,
-0.0457763671875,
-0.057464599609375,
0... |
rwkv-x-dev/openorca-gpt4 | 2023-08-28T14:44:06.000Z | [
"region:us"
] | rwkv-x-dev | null | null | 3 | 68 | 2023-08-28T14:38:13 | ---
pretty_name: OpenOrca
configs:
- config_name: default
default: true
data_files:
- split: train
path:
- "*.parquet"
---
OpenOrca but just the GPT4 bits. | 169 | [
[
-0.057373046875,
-0.038177490234375,
0.05596923828125,
0.0018606185913085938,
-0.032989501953125,
-0.02294921875,
-0.0019235610961914062,
-0.03411865234375,
0.03656005859375,
0.04937744140625,
-0.0179595947265625,
-0.0294342041015625,
-0.0223236083984375,
-0... |
Aborevsky01/CLEVR-BT-DB | 2023-09-20T16:44:56.000Z | [
"task_categories:visual-question-answering",
"language:en",
"region:us"
] | Aborevsky01 | null | null | 0 | 68 | 2023-09-17T17:03:32 | ---
task_categories:
- visual-question-answering
language:
- en
---
### How to install?
```python
!pip install datasets -q
from huggingface_hub import snapshot_download
import pandas as pd
import matplotlib.pyplot as plt
# First step: download the entire dataset
snapshot_download(repo_id="Aborevsky01/CLEVR-BT-DB", repo_type="dataset", local_dir='path-to-your-local-dir')
# Second step: unarchive the images for VQA
!unzip [path-to-your-local-dir]/[type-of-task]/images.zip
# Example of the triplet (image - question - answer)
plt.imshow(plt.imread('[path-to-your-local-dir]/images/test/Reason_0.png'))
print(pd.read_csv('[path-to-your-local-dir]/[type-of-task]/Reason_test_questions.csv').iloc[0].question)
print([str(line) for line in open('[path-to-your-local-dir]/[type-of-task]/correct_answ.txt', 'rb')][0])
```
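Since `correct_answ.txt` is opened in binary mode, each line prints as a bytes literal such as `b'blue\n'`; a small helper (a sketch, assuming one answer per line) decodes and strips it:

```python
# Sketch: decode a raw answer line read in binary mode into a plain string.
def decode_answer(raw_line: bytes) -> str:
    return raw_line.decode("utf-8").strip()

print(decode_answer(b"blue\n"))  # blue
```

Alternatively, opening the file in text mode (`open(path)`) and calling `.splitlines()` avoids the bytes literals entirely.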
### Output of code

**Q**: There is an object to the left of a cylinder to the right of a cylinder, what color is it?
**A**: b'blue\n' | 993 | [
[
-0.04168701171875,
-0.03936767578125,
0.00800323486328125,
0.0279083251953125,
-0.040252685546875,
-0.0005078315734863281,
0.022918701171875,
-0.0094451904296875,
0.0352783203125,
0.039703369140625,
-0.053985595703125,
-0.037109375,
-0.0207977294921875,
0.02... |
iwecht/hard_captions | 2023-10-20T00:35:00.000Z | [
"region:us"
] | iwecht | null | null | 0 | 68 | 2023-10-20T00:34:59 | ---
dataset_info:
features:
- name: annID
dtype: int64
- name: caption
dtype: string
- name: score
dtype: int64
splits:
- name: train
num_bytes: 364027
num_examples: 5000
download_size: 200465
dataset_size: 364027
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "hard_captions"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 507 | [
[
-0.032073974609375,
-0.01104736328125,
0.019683837890625,
0.0211181640625,
-0.0283660888671875,
0.006526947021484375,
-0.003612518310546875,
0.005191802978515625,
0.03857421875,
0.048248291015625,
-0.051300048828125,
-0.05303955078125,
-0.03863525390625,
0.0... |
liweili/c4_200m | 2022-10-23T11:00:46.000Z | [
"task_categories:text-generation",
"source_datasets:allenai/c4",
"language:en",
"grammatical-error-correction",
"region:us"
] | liweili | \
GEC Dataset Generated from C4 | \
@InProceedings{huggingface:c4_200m_dataset,
title = {c4_200m},
author={Li Liwei},
year={2021}
} | 25 | 67 | 2022-03-02T23:29:22 | ---
language:
- en
source_datasets:
- allenai/c4
task_categories:
- text-generation
pretty_name: C4 200M Grammatical Error Correction Dataset
tags:
- grammatical-error-correction
---
# C4 200M
# Dataset Summary
c4_200m is a collection of 185 million sentence pairs generated from the cleaned English dataset from C4. This dataset can be used in grammatical error correction (GEC) tasks.
The corruption edits and scripts used to synthesize this dataset are referenced from: [C4_200M Synthetic Dataset](https://github.com/google-research-datasets/C4_200M-synthetic-dataset-for-grammatical-error-correction)
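The kind of edits involved can be surfaced with the standard-library `difflib` module (an illustrative sketch on this card's sample pair, not part of the official synthesis scripts):

```python
import difflib

# Sketch: surface the word-level edits between a corrupted input sentence and
# its corrected output, using the sample pair from this card.
corrupted = "Bitcoin is for $7,094 this morning, which CoinDesk says."
corrected = "Bitcoin goes for $7,094 this morning, according to CoinDesk."

src, dst = corrupted.split(), corrected.split()
matcher = difflib.SequenceMatcher(None, src, dst)
edits = [op for op in matcher.get_opcodes() if op[0] != "equal"]
for tag, i1, i2, j1, j2 in edits:
    print(tag, src[i1:i2], "->", dst[j1:j2])
```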
# Description
As discussed before, this dataset contains 185 million sentence pairs. Each pair has two attributes: `input` and `output`. Here is a sample from the dataset:
```
{
"input": "Bitcoin is for $7,094 this morning, which CoinDesk says."
"output": "Bitcoin goes for $7,094 this morning, according to CoinDesk."
}
``` | 937 | [
[
-0.0253448486328125,
-0.05548095703125,
0.038330078125,
0.006099700927734375,
0.0024166107177734375,
0.01404571533203125,
-0.017669677734375,
-0.025634765625,
0.012237548828125,
0.041534423828125,
-0.033660888671875,
-0.040435791015625,
-0.030303955078125,
0... |
Team-PIXEL/rendered-bookcorpus | 2022-08-03T12:03:32.000Z | [
"annotations_creators:no-annotation",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1M<n<10M",
"source_datasets:rendered|BookCorpusOpen",
"language:en",
"license:unknown",
"arxiv:1506.06724",
"arxiv:2207.06991",
"arxiv:2105.05241",
"region:us"
] | Team-PIXEL | null | null | 4 | 67 | 2022-05-11T14:41:02 | ---
annotations_creators:
- no-annotation
language_creators:
- found
language:
- en
license:
- unknown
multilinguality:
- monolingual
pretty_name: Team-PIXEL/rendered-bookcorpus
size_categories:
- 1M<n<10M
source_datasets:
- rendered|BookCorpusOpen
task_categories:
- masked-auto-encoding
- rendered-language-modelling
task_ids:
- masked-auto-encoding
- rendered-language-modeling
paperswithcode_id: bookcorpus
---
# Dataset Card for Team-PIXEL/rendered-bookcorpus
## Dataset Description
- **Homepage:** [https://github.com/xplip/pixel](https://github.com/xplip/pixel)
- **Repository:** [https://github.com/xplip/pixel](https://github.com/xplip/pixel)
- **Papers:** [Aligning Books and Movies: Towards Story-like Visual Explanations by Watching Movies and Reading Books
](https://arxiv.org/abs/1506.06724), [Language Modelling with Pixels](https://arxiv.org/abs/2207.06991)
- **Point of Contact:** [Phillip Rust](mailto:p.rust@di.ku.dk)
- **Size of downloaded dataset files:** 63.58 GB
- **Size of the generated dataset:** 63.59 GB
- **Total amount of disk used:** 127.17 GB
### Dataset Summary
This dataset is a version of the BookCorpus available at [https://huggingface.co/datasets/bookcorpusopen](https://huggingface.co/datasets/bookcorpusopen) with examples rendered as images with resolution 16x8464 pixels.
The original BookCorpus was introduced by Zhu et al. (2015) in [Aligning Books and Movies: Towards Story-Like Visual Explanations by Watching Movies and Reading Books](https://arxiv.org/abs/1506.06724) and contains 17868 books of various genres. The rendered BookCorpus was used to train the [PIXEL](https://huggingface.co/Team-PIXEL/pixel-base) model introduced in the paper [Language Modelling with Pixels](https://arxiv.org/abs/2207.06991) by Phillip Rust, Jonas F. Lotz, Emanuele Bugliarello, Elizabeth Salesky, Miryam de Lhoneux, and Desmond Elliott.
The BookCorpusOpen dataset was rendered book-by-book into 5.4M examples containing approximately 1.1B words in total. The dataset is stored as a collection of 162 parquet files. It was rendered using the script openly available at [https://github.com/xplip/pixel/blob/main/scripts/data/prerendering/prerender_bookcorpus.py](https://github.com/xplip/pixel/blob/main/scripts/data/prerendering/prerender_bookcorpus.py). The text renderer uses a PyGame backend and a collection of merged Google Noto Sans fonts. The PyGame backend does not support complex text layouts (e.g. ligatures and right-to-left scripts) or emoji, so occurrences of such text in the BookCorpus have not been rendered accurately.
Each example consists of a "pixel_values" field which stores a 16x8464 (height, width) grayscale image containing the rendered text, and an integer value "num_patches" which stores how many image patches (when splitting the image into 529 non-overlapping patches of resolution 16x16 pixels) in the associated images contain actual text, i.e. are neither blank (fully white) nor are the fully black end-of-sequence patch.
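The `num_patches` definition above can be sketched in pure Python (an illustration only, not the actual rendering code): split a 16-pixel-tall grayscale image into 16x16 patches and count those that are neither fully white (blank) nor fully black (the end-of-sequence patch).

```python
# Sketch: count the 16x16 patches of a 16-pixel-tall grayscale image that are
# neither blank (all pixels 255) nor the all-black end-of-sequence patch (all 0).
def count_text_patches(image):
    """`image` is a list of 16 rows, each a list of pixel values 0..255."""
    width = len(image[0])
    num_patches = 0
    for x in range(0, width, 16):
        pixels = [p for row in image for p in row[x:x + 16]]
        blank = all(p == 255 for p in pixels)
        eos = all(p == 0 for p in pixels)
        if not blank and not eos:
            num_patches += 1
    return num_patches

# Tiny synthetic example: one text patch, one blank patch, one EOS patch.
image = [[128] * 16 + [255] * 16 + [0] * 16 for _ in range(16)]
print(count_text_patches(image))  # 1
```

At the full width of 8464 pixels this yields the 529 candidate patches mentioned above (8464 / 16 = 529).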
The rendered BookCorpus can be loaded via the datasets library as follows:
```python
from datasets import load_dataset
# Download the full dataset to disk
load_dataset("Team-PIXEL/rendered-bookcorpus", split="train")
# Stream the dataset directly from the hub
load_dataset("Team-PIXEL/rendered-bookcorpus", split="train", streaming=True)
```
## Dataset Structure
### Data Instances
- **Size of downloaded dataset files:** 63.58 GB
- **Size of the generated dataset:** 63.59 GB
- **Total amount of disk used:** 127.17 GB
An example of 'train' looks as follows.
```
{
"pixel_values": <PIL.PngImagePlugin.PngImageFile image mode=L size=8464x16
"num_patches": "498"
}
```
### Data Fields
The data fields are the same among all splits.
- `pixel_values`: an `Image` feature.
- `num_patches`: a `Value(dtype="int64")` feature.
### Data Splits
|train|
|:----|
|5400000|
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
The books have been crawled from smashwords.com, see their [terms of service](https://www.smashwords.com/about/tos) for more information.
A data sheet for this dataset has also been created and published in [Addressing "Documentation Debt" in Machine Learning Research: A Retrospective Datasheet for BookCorpus](https://arxiv.org/abs/2105.05241)
### Citation Information
```bibtex
@InProceedings{Zhu_2015_ICCV,
title = {Aligning Books and Movies: Towards Story-Like Visual Explanations by Watching Movies and Reading Books},
author = {Zhu, Yukun and Kiros, Ryan and Zemel, Rich and Salakhutdinov, Ruslan and Urtasun, Raquel and Torralba, Antonio and Fidler, Sanja},
booktitle = {The IEEE International Conference on Computer Vision (ICCV)},
month = {December},
year = {2015}
}
```
```bibtex
@article{rust-etal-2022-pixel,
title={Language Modelling with Pixels},
author={Phillip Rust and Jonas F. Lotz and Emanuele Bugliarello and Elizabeth Salesky and Miryam de Lhoneux and Desmond Elliott},
journal={arXiv preprint},
year={2022},
url={https://arxiv.org/abs/2207.06991}
}
```
### Contact Person
This dataset was added by Phillip Rust.
Github: [@xplip](https://github.com/xplip)
Twitter: [@rust_phillip](https://twitter.com/rust_phillip) | 6,973 | [
[
-0.03900146484375,
-0.036834716796875,
-0.0020122528076171875,
-0.0031986236572265625,
-0.021453857421875,
0.0012884140014648438,
-0.0157623291015625,
-0.031219482421875,
0.017303466796875,
0.034881591796875,
-0.05181884765625,
-0.05804443359375,
-0.029281616210... |
TurkuNLP/xlsum-fi | 2022-10-25T06:30:19.000Z | [
"task_categories:summarization",
"task_categories:text2text-generation",
"annotations_creators:found",
"language_creators:machine translated",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:xlsum",
"language:fi",
"license:cc-by-nc-sa-4.0",
"conditional-text-generatio... | TurkuNLP | This dataset is a DeepL -based machine translation of a part of the English section of the XLSum dataset:[https://github.com/csebuetnlp/xl-sum](https://github.com/csebuetnlp/xl-sum) In the present version, only examples where the full version is at most 10x the summary in length are included. We might translate more later. |
Please cite the article and also acknowledge Filip Ginter / TurkuNLP for the machine translated version
@inproceedings{hasan-etal-2021-xl,
title = "{XL}-Sum: Large-Scale Multilingual Abstractive Summarization for 44 Languages",
author = "Hasan, Tahmid and
Bhattacharjee, Abhik and
Islam, Md. Saiful and
Mubasshir, Kazi and
Li, Yuan-Fang and
Kang, Yong-Bin and
Rahman, M. Sohel and
Shahriyar, Rifat",
booktitle = "Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021",
month = aug,
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.findings-acl.413",
pages = "4693--4703",
} | 0 | 67 | 2022-09-30T13:10:05 | ---
annotations_creators:
- found
language_creators:
- machine translated
language:
- fi
license:
- cc-by-nc-sa-4.0
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- xlsum
task_categories:
- summarization
- text2text-generation
task_ids: []
pretty_name: XL-Sum-FI
tags:
- conditional-text-generation
---
# Dataset Card for "XL-Sum-FI"
## Table of Contents
- [Dataset Card Creation Guide](#dataset-card-creation-guide)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Initial Data Collection and Normalization](#initial-data-collection-and-normalization)
- [Who are the source language producers?](#who-are-the-source-language-producers)
- [Annotations](#annotations)
- [Annotation process](#annotation-process)
- [Who are the annotators?](#who-are-the-annotators)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Repository:** https://github.com/TurkuNLP/xlsum-fi
- **Point of Contact:** [Filip Ginter](mailto:figint@utu.fi)
### Dataset Summary
This dataset is a DeepL-based machine translation of a part of the English section of the XLSum dataset: [https://github.com/csebuetnlp/xl-sum](https://github.com/csebuetnlp/xl-sum). In the present version, only examples where the full version is at most 10x the summary in length are included. We might translate more later.
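As an illustration of this cutoff, a minimal filtering sketch is given below. The exact length measure used in the original pipeline is an assumption here; this sketch uses whitespace-tokenized word counts on the English source:

```python
# Sketch of the 10x length cutoff: keep an example only if the full text
# is at most `ratio` times the summary in length (measured on English).

def keep_example(full_text: str, summary: str, ratio: float = 10.0) -> bool:
    return len(full_text.split()) <= ratio * len(summary.split())

examples = [
    {"text": "one two three four", "summary": "short"},             # 4 <= 10 * 1
    {"text": " ".join(["w"] * 120), "summary": "ten word summary"},  # 120 > 10 * 3
]
kept = [ex for ex in examples if keep_example(ex["text"], ex["summary"])]
print(len(kept))  # -> 1
```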
### Supported Tasks and Leaderboards
### Languages
- `finnish`
## Dataset Structure
### Data Instances
One example from the `Finnish` dataset is given below in JSON format.
```
{
"id": "technology-17657859",
"url": "https://www.bbc.com/news/technology-17657859",
"title": "Walesin myrskytuulien vuoksi annettu säävaroitus",
"summary": "Tuulet voivat yltyä Walesissa myrskytuuliin, ja myrskysää on luvassa koko maahan tällä viikolla.",
"text": "Met Office on antanut Walesin ja Englannin kattavan keltaisen tuulivaroituksen keskiviikkoillasta kello 21.00 GMT alkaen. Matkustaminen ja sähkönjakelu todennäköisesti häiriintyvät, ja varoitus on voimassa torstaihin kello 15:00 asti. Puuskat ovat todennäköisesti nopeudeltaan 88 kilometriä tunnissa, ja rannikoilla ja kukkuloilla puuskat voivat nousta jopa 70 kilometriin tunnissa, ja lisäksi voi esiintyä rankkasateita ja myrskyisiä sadekuuroja."
}
```
### Data Fields
- 'id': A string representing the article ID, matched to the XLSum dataset original
- 'url': A string representing the article URL as in the original XLSum dataset
- 'title': A string containing the article title, machine-translated to Finnish
- 'summary': A string containing the article summary, machine-translated to Finnish
- 'text' : A string containing the article text, machine-translated to Finnish
### Data Splits
Follows the XLSum dataset.
## Dataset Creation
### Curation Rationale
### Source Data
[BBC News](https://www.bbc.co.uk/ws/languages)
#### Initial Data Collection and Normalization
[Detailed in the paper](https://aclanthology.org/2021.findings-acl.413/) For this present dataset, only English was used as the source and only examples where the full text is at maximum 10x in length compared to the summary are preserved. This 10x cutoff is naturally measured on English.
#### Who are the source language producers?
[Detailed in the paper](https://aclanthology.org/2021.findings-acl.413/)
### Annotations
[Detailed in the paper](https://aclanthology.org/2021.findings-acl.413/) DeepL was used to machine-translate from English to Finnish
#### Annotation process
[Detailed in the paper](https://aclanthology.org/2021.findings-acl.413/)
#### Who are the annotators?
[Detailed in the paper](https://aclanthology.org/2021.findings-acl.413/)
### Personal and Sensitive Information
[More information needed](https://github.com/csebuetnlp/xl-sum)
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
Due to DeepL terms and conditions, this dataset **must not be used for any machine translation work**, namely machine translation system development and evaluation of any kind. In general, we ask that you do not pair the original English data with the translations except when working on research unrelated to machine translation, so as not to infringe on the terms and conditions.
## Additional Information
### Dataset Curators
### Licensing Information
Contents of this repository are restricted to only non-commercial research purposes under the [Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License (CC BY-NC-SA 4.0)](https://creativecommons.org/licenses/by-nc-sa/4.0/). Copyright of the dataset contents belongs to the original copyright holders.
### Citation Information
If you use any of the datasets, models or code modules, please cite the original XL-Sum paper below as well as acknowledge Filip Ginter and the TurkuNLP group for the Finnish machine translated version.
```
@inproceedings{hasan-etal-2021-xl,
title = "{XL}-Sum: Large-Scale Multilingual Abstractive Summarization for 44 Languages",
author = "Hasan, Tahmid and
Bhattacharjee, Abhik and
Islam, Md. Saiful and
Mubasshir, Kazi and
Li, Yuan-Fang and
Kang, Yong-Bin and
Rahman, M. Sohel and
Shahriyar, Rifat",
booktitle = "Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021",
month = aug,
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.findings-acl.413",
pages = "4693--4703",
}
```
### Contributions
Thanks to the creators of the XLSum dataset! | 6,656 | [
[
-0.0262451171875,
-0.0322265625,
0.0172576904296875,
0.005779266357421875,
-0.020050048828125,
-0.00168609619140625,
-0.0311126708984375,
-0.031768798828125,
0.044830322265625,
0.029144287109375,
-0.04296875,
-0.057037353515625,
-0.057098388671875,
0.0449218... |
DFKI-SLT/fabner | 2023-04-05T23:20:21.000Z | [
"task_categories:token-classification",
"task_ids:named-entity-recognition",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"language:en",
"license:other",
"manufacturing",
"2000-2020",
"region:us"
] | DFKI-SLT | FabNER is a manufacturing text corpus of 350,000+ words for Named Entity Recognition.
It is a collection of abstracts obtained from Web of Science through known journals available in manufacturing process
science research.
For every word, there were categories/entity labels defined namely Material (MATE), Manufacturing Process (MANP),
Machine/Equipment (MACEQ), Application (APPL), Features (FEAT), Mechanical Properties (PRO), Characterization (CHAR),
Parameters (PARA), Enabling Technology (ENAT), Concept/Principles (CONPRI), Manufacturing Standards (MANS) and
BioMedical (BIOP). Annotation was performed in all categories along with the output tag in 'BIOES' format:
B=Beginning, I=Intermediate, O=Outside, E=End, S=Single. | @article{DBLP:journals/jim/KumarS22,
author = {Aman Kumar and
Binil Starly},
title = {"FabNER": information extraction from manufacturing process science
domain literature using named entity recognition},
journal = {J. Intell. Manuf.},
volume = {33},
number = {8},
pages = {2393--2407},
year = {2022},
url = {https://doi.org/10.1007/s10845-021-01807-x},
doi = {10.1007/s10845-021-01807-x},
timestamp = {Sun, 13 Nov 2022 17:52:57 +0100},
biburl = {https://dblp.org/rec/journals/jim/KumarS22.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
} | 0 | 67 | 2023-01-13T13:01:38 | ---
annotations_creators:
- expert-generated
language:
- en
language_creators:
- found
license:
- other
multilinguality:
- monolingual
pretty_name: FabNER is a manufacturing text dataset for Named Entity Recognition.
size_categories:
- 10K<n<100K
source_datasets: []
tags:
- manufacturing
- 2000-2020
task_categories:
- token-classification
task_ids:
- named-entity-recognition
dataset_info:
- config_name: fabner
features:
- name: id
dtype: string
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-MATE
'2': I-MATE
'3': O-MATE
'4': E-MATE
'5': S-MATE
'6': B-MANP
'7': I-MANP
'8': O-MANP
'9': E-MANP
'10': S-MANP
'11': B-MACEQ
'12': I-MACEQ
'13': O-MACEQ
'14': E-MACEQ
'15': S-MACEQ
'16': B-APPL
'17': I-APPL
'18': O-APPL
'19': E-APPL
'20': S-APPL
'21': B-FEAT
'22': I-FEAT
'23': O-FEAT
'24': E-FEAT
'25': S-FEAT
'26': B-PRO
'27': I-PRO
'28': O-PRO
'29': E-PRO
'30': S-PRO
'31': B-CHAR
'32': I-CHAR
'33': O-CHAR
'34': E-CHAR
'35': S-CHAR
'36': B-PARA
'37': I-PARA
'38': O-PARA
'39': E-PARA
'40': S-PARA
'41': B-ENAT
'42': I-ENAT
'43': O-ENAT
'44': E-ENAT
'45': S-ENAT
'46': B-CONPRI
'47': I-CONPRI
'48': O-CONPRI
'49': E-CONPRI
'50': S-CONPRI
'51': B-MANS
'52': I-MANS
'53': O-MANS
'54': E-MANS
'55': S-MANS
'56': B-BIOP
'57': I-BIOP
'58': O-BIOP
'59': E-BIOP
'60': S-BIOP
splits:
- name: train
num_bytes: 4394010
num_examples: 9435
- name: validation
num_bytes: 934347
num_examples: 2183
- name: test
num_bytes: 940136
num_examples: 2064
download_size: 3793613
dataset_size: 6268493
- config_name: fabner_bio
features:
- name: id
dtype: string
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-MATE
'2': I-MATE
'3': B-MANP
'4': I-MANP
'5': B-MACEQ
'6': I-MACEQ
'7': B-APPL
'8': I-APPL
'9': B-FEAT
'10': I-FEAT
'11': B-PRO
'12': I-PRO
'13': B-CHAR
'14': I-CHAR
'15': B-PARA
'16': I-PARA
'17': B-ENAT
'18': I-ENAT
'19': B-CONPRI
'20': I-CONPRI
'21': B-MANS
'22': I-MANS
'23': B-BIOP
'24': I-BIOP
splits:
- name: train
num_bytes: 4394010
num_examples: 9435
- name: validation
num_bytes: 934347
num_examples: 2183
- name: test
num_bytes: 940136
num_examples: 2064
download_size: 3793613
dataset_size: 6268493
- config_name: fabner_simple
features:
- name: id
dtype: string
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': MATE
'2': MANP
'3': MACEQ
'4': APPL
'5': FEAT
'6': PRO
'7': CHAR
'8': PARA
'9': ENAT
'10': CONPRI
'11': MANS
'12': BIOP
splits:
- name: train
num_bytes: 4394010
num_examples: 9435
- name: validation
num_bytes: 934347
num_examples: 2183
- name: test
num_bytes: 940136
num_examples: 2064
download_size: 3793613
dataset_size: 6268493
- config_name: text2tech
features:
- name: id
dtype: string
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': Technological System
'2': Method
'3': Material
'4': Technical Field
splits:
- name: train
num_bytes: 4394010
num_examples: 9435
- name: validation
num_bytes: 934347
num_examples: 2183
- name: test
num_bytes: 940136
num_examples: 2064
download_size: 3793613
dataset_size: 6268493
---
# Dataset Card for FabNER
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://figshare.com/articles/dataset/Dataset_NER_Manufacturing_-_FabNER_Information_Extraction_from_Manufacturing_Process_Science_Domain_Literature_Using_Named_Entity_Recognition/14782407](https://figshare.com/articles/dataset/Dataset_NER_Manufacturing_-_FabNER_Information_Extraction_from_Manufacturing_Process_Science_Domain_Literature_Using_Named_Entity_Recognition/14782407)
- **Paper:** ["FabNER": information extraction from manufacturing process science domain literature using named entity recognition](https://par.nsf.gov/servlets/purl/10290810)
- **Size of downloaded dataset files:** 3.79 MB
- **Size of the generated dataset:** 6.27 MB
### Dataset Summary
FabNER is a manufacturing text corpus of 350,000+ words for Named Entity Recognition.
It is a collection of abstracts obtained from Web of Science through known journals available in manufacturing process
science research.
For every word, there were categories/entity labels defined namely Material (MATE), Manufacturing Process (MANP),
Machine/Equipment (MACEQ), Application (APPL), Features (FEAT), Mechanical Properties (PRO), Characterization (CHAR),
Parameters (PARA), Enabling Technology (ENAT), Concept/Principles (CONPRI), Manufacturing Standards (MANS) and
BioMedical (BIOP). Annotation was performed in all categories along with the output tag in 'BIOES' format:
B=Beginning, I=Intermediate, O=Outside, E=End, S=Single.
For details about the dataset, please refer to the paper: ["FabNER": information extraction from manufacturing process science domain literature using named entity recognition](https://par.nsf.gov/servlets/purl/10290810)
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
The language in the dataset is English.
## Dataset Structure
### Data Instances
- **Size of downloaded dataset files:** 3.79 MB
- **Size of the generated dataset:** 6.27 MB
An example of 'train' looks as follows:
```json
{
"id": "0",
"tokens": ["Revealed", "the", "location-specific", "flow", "patterns", "and", "quantified", "the", "speeds", "of", "various", "types", "of", "flow", "."],
"ner_tags": [0, 0, 0, 46, 49, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
}
```
### Data Fields
#### fabner
- `id`: the instance id of this sentence, a `string` feature.
- `tokens`: the list of tokens of this sentence, a `list` of `string` features.
- `ner_tags`: the list of entity tags, a `list` of classification labels.
```json
{"O": 0, "B-MATE": 1, "I-MATE": 2, "O-MATE": 3, "E-MATE": 4, "S-MATE": 5, "B-MANP": 6, "I-MANP": 7, "O-MANP": 8, "E-MANP": 9, "S-MANP": 10, "B-MACEQ": 11, "I-MACEQ": 12, "O-MACEQ": 13, "E-MACEQ": 14, "S-MACEQ": 15, "B-APPL": 16, "I-APPL": 17, "O-APPL": 18, "E-APPL": 19, "S-APPL": 20, "B-FEAT": 21, "I-FEAT": 22, "O-FEAT": 23, "E-FEAT": 24, "S-FEAT": 25, "B-PRO": 26, "I-PRO": 27, "O-PRO": 28, "E-PRO": 29, "S-PRO": 30, "B-CHAR": 31, "I-CHAR": 32, "O-CHAR": 33, "E-CHAR": 34, "S-CHAR": 35, "B-PARA": 36, "I-PARA": 37, "O-PARA": 38, "E-PARA": 39, "S-PARA": 40, "B-ENAT": 41, "I-ENAT": 42, "O-ENAT": 43, "E-ENAT": 44, "S-ENAT": 45, "B-CONPRI": 46, "I-CONPRI": 47, "O-CONPRI": 48, "E-CONPRI": 49, "S-CONPRI": 50, "B-MANS": 51, "I-MANS": 52, "O-MANS": 53, "E-MANS": 54, "S-MANS": 55, "B-BIOP": 56, "I-BIOP": 57, "O-BIOP": 58, "E-BIOP": 59, "S-BIOP": 60}
```
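For illustration, the integer tags can be decoded back into labels and grouped into entity spans. This is only a sketch, not part of the dataset loader; the `LABELS` list below is abbreviated to the `MATE` entries of the full mapping above, and the example tokens are hypothetical:

```python
# Sketch: decode integer `ner_tags` of the `fabner` config into string labels
# and group BIOES-tagged tokens into (entity_text, entity_type) spans.
# LABELS is abbreviated here; the full id-to-label mapping is given above.

LABELS = ["O", "B-MATE", "I-MATE", "O-MATE", "E-MATE", "S-MATE"]

def extract_entities(tokens, tag_ids):
    entities, current, ctype = [], [], None
    for tok, tid in zip(tokens, tag_ids):
        label = LABELS[tid]
        if label == "O":                     # outside any entity
            current, ctype = [], None
            continue
        prefix, etype = label.split("-", 1)
        if prefix == "S":                    # single-token entity
            entities.append((tok, etype))
        elif prefix == "B":                  # beginning of a multi-token entity
            current, ctype = [tok], etype
        elif prefix in ("I", "O") and ctype == etype:  # intermediate token
            current.append(tok)
        elif prefix == "E" and ctype == etype:         # end of the entity
            entities.append((" ".join(current + [tok]), etype))
            current, ctype = [], None
    return entities

tokens = ["stainless", "steel", "was", "used"]
tag_ids = [1, 4, 0, 0]  # B-MATE, E-MATE, O, O
print(extract_entities(tokens, tag_ids))  # -> [('stainless steel', 'MATE')]
```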
#### fabner_bio
- `id`: the instance id of this sentence, a `string` feature.
- `tokens`: the list of tokens of this sentence, a `list` of `string` features.
- `ner_tags`: the list of entity tags, a `list` of classification labels.
```json
{"O": 0, "B-MATE": 1, "I-MATE": 2, "B-MANP": 3, "I-MANP": 4, "B-MACEQ": 5, "I-MACEQ": 6, "B-APPL": 7, "I-APPL": 8, "B-FEAT": 9, "I-FEAT": 10, "B-PRO": 11, "I-PRO": 12, "B-CHAR": 13, "I-CHAR": 14, "B-PARA": 15, "I-PARA": 16, "B-ENAT": 17, "I-ENAT": 18, "B-CONPRI": 19, "I-CONPRI": 20, "B-MANS": 21, "I-MANS": 22, "B-BIOP": 23, "I-BIOP": 24}
```
#### fabner_simple
- `id`: the instance id of this sentence, a `string` feature.
- `tokens`: the list of tokens of this sentence, a `list` of `string` features.
- `ner_tags`: the list of entity tags, a `list` of classification labels.
```json
{"O": 0, "MATE": 1, "MANP": 2, "MACEQ": 3, "APPL": 4, "FEAT": 5, "PRO": 6, "CHAR": 7, "PARA": 8, "ENAT": 9, "CONPRI": 10, "MANS": 11, "BIOP": 12}
```
#### text2tech
- `id`: the instance id of this sentence, a `string` feature.
- `tokens`: the list of tokens of this sentence, a `list` of `string` features.
- `ner_tags`: the list of entity tags, a `list` of classification labels.
```json
{"O": 0, "Technological System": 1, "Method": 2, "Material": 3, "Technical Field": 4}
```
### Data Splits
| | Train | Dev | Test |
|--------|-------|------|------|
| fabner | 9435 | 2183 | 2064 |
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@article{DBLP:journals/jim/KumarS22,
author = {Aman Kumar and
Binil Starly},
title = {"FabNER": information extraction from manufacturing process science
domain literature using named entity recognition},
journal = {J. Intell. Manuf.},
volume = {33},
number = {8},
pages = {2393--2407},
year = {2022},
url = {https://doi.org/10.1007/s10845-021-01807-x},
doi = {10.1007/s10845-021-01807-x},
timestamp = {Sun, 13 Nov 2022 17:52:57 +0100},
biburl = {https://dblp.org/rec/journals/jim/KumarS22.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
### Contributions
Thanks to [@phucdev](https://github.com/phucdev) for adding this dataset. | 13,164 | [
[
-0.041595458984375,
-0.06378173828125,
0.0191497802734375,
0.00078582763671875,
-0.01141357421875,
0.00582122802734375,
-0.0169677734375,
-0.01983642578125,
0.0440673828125,
0.031829833984375,
-0.05303955078125,
-0.0701904296875,
-0.04669189453125,
0.0188140... |
brunokreiner/genius-lyrics | 2023-03-07T21:57:02.000Z | [
"region:us"
] | brunokreiner | null | null | 2 | 67 | 2023-01-18T22:39:24 | ---
# For reference on model card metadata, see the spec: https://github.com/huggingface/hub-docs/blob/main/datasetcard.md?plain=1
# Doc / guide: https://huggingface.co/docs/hub/datasets-cards
{}
---
# Dataset Card for Dataset Name
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
This dataset consists of roughly 480k English lyrics (language identified with the nltk language classifier) together with some additional metadata. The metadata was taken from the Million Playlist Challenge @ AICrowd. The lyrics were crawled using the song and artist name with the lyricsgenius Python package. There is no guarantee that the lyrics are the correct ones, although the data was cleaned and verified: the crawled lyrics come with a song name in their payload, and if that song name and ours did not match (fuzzywuzzy string matching with a score under 60), the lyrics were not included in this set. Some lyrics might still be wrong due to the nature of the data.
49'985 rows have a list of genres, crawled from the official Spotify API. The list of genres comes from the artist of the song, since Spotify doesn't provide genres for every individual song.
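As a rough sketch of this matching step, fuzzywuzzy's `fuzz.ratio` is approximated below with the standard library's `difflib`, which yields similar but not identical scores:

```python
# Sketch of the title check described above: lyrics whose crawled song name
# scores under 60 against the expected title were dropped.
from difflib import SequenceMatcher

def title_score(expected: str, crawled: str) -> float:
    # difflib ratio scaled to 0-100, approximating fuzzywuzzy's fuzz.ratio
    return 100 * SequenceMatcher(None, expected.lower(), crawled.lower()).ratio()

def keep_lyric(expected_title: str, crawled_title: str, cutoff: float = 60) -> bool:
    return title_score(expected_title, crawled_title) >= cutoff

print(keep_lyric("Bohemian Rhapsody", "Bohemian Rhapsody (Remastered)"))  # -> True
print(keep_lyric("Hey Jude", "Stairway to Heaven"))                        # -> False
```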
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] | 2,383 | [
[
-0.028564453125,
-0.0190887451171875,
0.01512908935546875,
0.0259246826171875,
-0.00785064697265625,
0.008270263671875,
-0.0271759033203125,
-0.018646240234375,
0.031280517578125,
0.06707763671875,
-0.07666015625,
-0.073974609375,
-0.05279541015625,
0.008666... |
jamescalam/lex-transcripts | 2023-04-06T07:49:58.000Z | [
"region:us"
] | jamescalam | null | null | 7 | 67 | 2023-03-28T08:49:00 | Entry not found | 15 | [
[
-0.02142333984375,
-0.014984130859375,
0.057220458984375,
0.0288238525390625,
-0.03509521484375,
0.04656982421875,
0.052520751953125,
0.00506591796875,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060455322265625,
0.03793334... |
camel-ai/physics | 2023-05-23T21:12:11.000Z | [
"task_categories:text-generation",
"language:en",
"license:cc-by-nc-4.0",
"instruction-finetuning",
"arxiv:2303.17760",
"region:us"
] | camel-ai | null | null | 22 | 67 | 2023-04-11T22:49:01 | ---
license: cc-by-nc-4.0
language:
- en
tags:
- instruction-finetuning
pretty_name: CAMEL Physics
task_categories:
- text-generation
arxiv: 2303.17760
extra_gated_prompt: "By using this data, you acknowledge and agree to utilize it solely for research purposes, recognizing that the dataset may contain inaccuracies due to its artificial generation through ChatGPT."
extra_gated_fields:
Name: text
Email: text
I will adhere to the terms and conditions of this dataset: checkbox
---
# **CAMEL: Communicative Agents for “Mind” Exploration of Large Scale Language Model Society**
- **Github:** https://github.com/lightaime/camel
- **Website:** https://www.camel-ai.org/
- **Arxiv Paper:** https://arxiv.org/abs/2303.17760
## Dataset Summary
The Physics dataset is composed of 20K problem-solution pairs obtained using GPT-4. The problem-solution pairs were generated from 25 physics topics, with 25 subtopics for each topic and 32 problems for each topic-subtopic pair.
We provide the data in `physics.zip`.
## Data Fields
**The data fields for files in `physics.zip` are as follows:**
* `role_1`: assistant role
* `topic`: physics topic
* `sub_topic`: physics subtopic belonging to topic
* `message_1`: refers to the problem the assistant is asked to solve.
* `message_2`: refers to the solution provided by the assistant.
**Download in python**
```python
from huggingface_hub import hf_hub_download
hf_hub_download(repo_id="camel-ai/physics", repo_type="dataset", filename="physics.zip",
local_dir="datasets/", local_dir_use_symlinks=False)
```
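Once downloaded, the archive can be inspected with the Python standard library. The snippet below is only a sketch: the internal layout assumed here (one JSON object per file, with the fields listed above) should be verified against the actual `physics.zip`. A tiny stand-in archive is built in memory so the example runs end to end:

```python
# Sketch: read problem-solution records out of the archive. The per-file JSON
# layout and the sample field values are assumptions for illustration only.
import io
import json
import zipfile

def iter_records(zip_bytes):
    with zipfile.ZipFile(io.BytesIO(zip_bytes)) as zf:
        for name in zf.namelist():
            if name.endswith(".json"):
                yield json.loads(zf.read(name))

# Build a tiny stand-in archive with one hypothetical record.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    zf.writestr("0.json", json.dumps({
        "role_1": "assistant",
        "topic": "Mechanics",
        "sub_topic": "Kinematics",
        "message_1": "A car accelerates uniformly from rest ...",
        "message_2": "Using v = u + at ...",
    }))

records = list(iter_records(buf.getvalue()))
print(records[0]["topic"])  # -> Mechanics
```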
### Citation
```
@misc{li2023camel,
title={CAMEL: Communicative Agents for "Mind" Exploration of Large Scale Language Model Society},
author={Guohao Li and Hasan Abed Al Kader Hammoud and Hani Itani and Dmitrii Khizbullin and Bernard Ghanem},
year={2023},
eprint={2303.17760},
archivePrefix={arXiv},
primaryClass={cs.AI}
}
```
## Disclaimer:
This data was synthetically generated by GPT-4 and might contain incorrect information. The dataset is intended for research purposes only.
| 2,116 | [
[
-0.027801513671875,
-0.0657958984375,
0.021820068359375,
0.00799560546875,
-0.0018644332885742188,
0.0037097930908203125,
-0.0267486572265625,
-0.028045654296875,
0.020477294921875,
0.0171661376953125,
-0.038726806640625,
-0.0191192626953125,
-0.046905517578125,... |
howard-hou/OCR-VQA | 2023-04-24T01:29:24.000Z | [
"region:us"
] | howard-hou | null | null | 3 | 67 | 2023-04-23T17:43:27 | ---
dataset_info:
features:
- name: image
dtype: image
- name: image_id
dtype: string
- name: questions
sequence: string
- name: answers
sequence: string
- name: ocr_tokens
sequence: string
- name: ocr_info
list:
- name: word
dtype: string
- name: bounding_box
struct:
- name: width
dtype: float64
- name: height
dtype: float64
- name: top_left_x
dtype: float64
- name: top_left_y
dtype: float64
- name: title
dtype: string
- name: authorName
dtype: string
- name: genre
dtype: string
- name: image_width
dtype: int64
- name: image_height
dtype: int64
- name: image_url
dtype: string
- name: set_name
dtype: string
splits:
- name: train
num_bytes: 7503971854.0
num_examples: 166022
- name: test
num_bytes: 928616409.0
num_examples: 20796
- name: validation
num_bytes: 920236957.0
num_examples: 20731
download_size: 2329997099
dataset_size: 9352825220.0
---
# Dataset Card for "OCR-VQA"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 1,210 | [
[
-0.03802490234375,
-0.01222991943359375,
0.031463623046875,
-0.022796630859375,
-0.011962890625,
0.0014429092407226562,
0.025360107421875,
-0.02587890625,
0.045989990234375,
0.049072265625,
-0.04425048828125,
-0.051849365234375,
-0.035736083984375,
-0.014938... |
pvduy/rm_oa_hh | 2023-06-13T16:39:03.000Z | [
"region:us"
] | pvduy | null | null | 1 | 67 | 2023-06-13T15:40:34 | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: selected
dtype: string
- name: rejected
dtype: string
- name: source
dtype: string
splits:
- name: test
num_bytes: 11065628
num_examples: 8524
- name: train
num_bytes: 220101381
num_examples: 166750
download_size: 135525253
dataset_size: 231167009
---
# Dataset Card for "rm_oa_hh"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 531 | [
[
-0.033416748046875,
-0.0335693359375,
0.022491455078125,
-0.005443572998046875,
-0.01457977294921875,
-0.0175933837890625,
0.03399658203125,
-0.013580322265625,
0.06494140625,
0.04315185546875,
-0.045257568359375,
-0.045989990234375,
-0.03436279296875,
-0.01... |
ynklab/XCodeSearchNet | 2023-07-12T15:18:20.000Z | [
"language:en",
"language:fr",
"language:ja",
"language:zh",
"license:mit",
"codesearch",
"arxiv:2306.15604",
"region:us"
] | ynklab | null | null | 1 | 67 | 2023-06-15T17:33:42 | ---
license: mit
language:
- en
- fr
- ja
- zh
tags:
- codesearch
pretty_name: XCodeSearchNet
---
[Paper on arXiv](https://arxiv.org/abs/2306.15604)
## pre-training data
To build a multilingual dataset, you need to combine the per-language subsets manually.
```python
from datasets import load_dataset
xcsn_pt_python_en = load_dataset("ynklab/XCodeSearchNet", data_dir='pretraining/python/en')
"""
DatasetDict({
train: Dataset({
features: ['function_tokens', 'docstring'],
num_rows: 453623
})
validation: Dataset({
features: ['function_tokens', 'docstring'],
num_rows: 4596
})
test: Dataset({
features: ['function_tokens', 'docstring'],
num_rows: 45283
})
})
"""
print(xcsn_pt_python_en['train'][0])
"""
{
'function_tokens': ['def', 'get_feature_ide_paths', '(', 'container_dir', ',', 'product_name', ')', ':', 'repo_name', '=', 'get_repo_name', '(', 'container_dir', ')', 'class', 'Paths', '(', 'object', ')', ':', 'feature_order_json', '=', 'os', '.', 'path', '.', 'join', '(', 'container_dir', ',', "'_lib/featuremodel/productline/feature_order.json'", ')', 'model_xml_path', '=', 'os', '.', 'path', '.', 'join', '(', 'container_dir', ',', "'_lib/featuremodel/productline/model.xml'", ')', 'config_file_path', '=', 'os', '.', 'path', '.', 'join', '(', 'container_dir', ',', "'_lib/featuremodel/productline/products/'", ',', 'repo_name', ',', 'product_name', ',', "'product.equation.config'", ')', 'equation_file_path', '=', 'os', '.', 'path', '.', 'join', '(', 'container_dir', ',', "'products'", ',', 'product_name', ',', "'product.equation'", ')', 'product_spec_path', '=', 'os', '.', 'path', '.', 'join', '(', 'container_dir', ',', "'_lib/featuremodel/productline/products/'", ',', 'repo_name', ',', "'product_spec.json'", ')', 'return', 'Paths'],
'docstring': 'Takes the container_dir and the product name and returns all relevant paths from the\n feature_order_json to the config_file_path.\n :param container_dir: the full path of the container dir\n :param product_name: the name of the product\n :return: object with divert path attributes'
}
"""
```
## fine-tuning data
```python
from datasets import load_dataset
xcsn_ft_python_en = load_dataset("ynklab/XCodeSearchNet", data_dir='finetuning/python/en')
"""
DatasetDict({
train: Dataset({
features: ['text'],
num_rows: 1648684
})
validation: Dataset({
features: ['text'],
num_rows: 92426
})
})
"""
print(xcsn_ft_python_en['train'][0])
"""
{
'text': '1<CODESPLIT><CODESPLIT><CODESPLIT>Logs the definition of the object that was just auto - decorated inside the ipython notebook .<CODESPLIT>def _logdef ( self , n , o , otype ) : import re try : #The latest input cell will be the one that this got executed #from. TODO: actually, if acorn got imported after the fact, then #the import would have caused all the undecorated functions to be #decorated as soon as acorn imported. I suppose we just won\'t have #any code for that case. if otype == "classes" : cellno = max ( [ int ( k [ 2 : ] ) for k in self . shell . user_ns . keys ( ) if re . match ( "_i\\d+" , k ) ] ) elif otype == "functions" : cellno = int ( o . __code__ . co_filename . strip ( "<>" ) . split ( \'-\' ) [ 2 ] ) except : #This must not have been an ipython notebook declaration, so we #don\'t store the code. cellno = None pass code = "" if cellno is not None : cellstr = "_i{0:d}" . format ( cellno ) if cellstr in self . shell . user_ns : cellcode = self . shell . user_ns [ cellstr ] import ast astm = ast . parse ( cellcode ) ab = astm . body parts = { ab [ i ] . name : ( ab [ i ] . lineno , None if i + 1 >= len ( ab ) else ab [ i + 1 ] . lineno ) for i , d in enumerate ( ab ) } if n in parts : celllines = cellcode . split ( \'\\n\' ) start , end = parts [ n ] if end is not None : code = celllines [ start - 1 : end - 1 ] else : code = celllines [ start - 1 : ] #Now, we actually create the entry. Since the execution for function #definitions is almost instantaneous, we just log the pre and post #events at the same time. from time import time from acorn . logging . database import record entry = { "m" : "def" , "a" : None , "s" : time ( ) , "r" : None , "c" : code , } from acorn import msg record ( "__main__.{}" . format ( n ) , entry , diff = True ) msg . info ( entry , 1 )'
}
"""
```
| 4,372 | [
[
-0.03302001953125,
-0.03082275390625,
0.0018396377563476562,
0.02374267578125,
-0.007396697998046875,
-0.00384521484375,
-0.01800537109375,
-0.0290069580078125,
0.008148193359375,
0.03424072265625,
-0.03997802734375,
-0.06695556640625,
-0.02850341796875,
0.0... |
OneFly7/llama2-sst2-fine-tuning | 2023-08-08T07:03:26.000Z | [
"task_categories:text-classification",
"language:en",
"region:us"
] | OneFly7 | null | null | 2 | 67 | 2023-07-29T19:28:23 | ---
dataset_info:
features:
- name: label_text
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 23202578
num_examples: 67349
- name: validation
num_bytes: 334716
num_examples: 872
download_size: 4418625
dataset_size: 23537294
task_categories:
- text-classification
language:
- en
---
# Dataset Card for "llama2-sst2-finetuning"
## Dataset Description
The llama2-sst2-fine-tuning dataset is designed for supervised fine-tuning of LLaMA V2 on the GLUE SST-2 sentiment-classification task.
We provide two subsets: training and validation.
To make fine-tuning effective, each example is converted into the prompt template for LLaMA V2 supervised fine-tuning, so the data follows this format:
```
<s>[INST] <<SYS>>
{System prompt}
<</SYS>>
{User prompt} [/INST] {Label} </s>
```
The feasibility of this dataset has been verified by supervised fine-tuning of the meta-llama/Llama-2-7b-hf model.
Note: for simplicity, we have retained only one new column of data ('text').
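As a sketch, the template above can be filled in with a small helper. The system prompt and example below are illustrative assumptions, not the exact strings used to build this dataset:

```python
def to_llama2_prompt(system_prompt: str, user_prompt: str, label: str) -> str:
    """Format one SST-2 example into the Llama 2 SFT template shown above."""
    return (
        f"<s>[INST] <<SYS>>\n{system_prompt}\n<</SYS>>\n"
        f"{user_prompt} [/INST] {label} </s>"
    )

# Hypothetical system prompt and SST-2 sentence; labels are 'positive'/'negative'.
text = to_llama2_prompt(
    "Classify the sentiment of the sentence as positive or negative.",
    "a charming and often affecting journey .",
    "positive",
)
print(text)
```

Applying this helper row-by-row yields the single 'text' column that the dataset retains.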
## Other Useful Links
- [Get Llama 2 Prompt Format Right](https://www.reddit.com/r/LocalLLaMA/comments/155po2p/get_llama_2_prompt_format_right/)
- [Fine-Tune Your Own Llama 2 Model in a Colab Notebook](https://towardsdatascience.com/fine-tune-your-own-llama-2-model-in-a-colab-notebook-df9823a04a32)
- [Instruction fine-tuning Llama 2 with PEFT’s QLoRa method](https://medium.com/@ud.chandra/instruction-fine-tuning-llama-2-with-pefts-qlora-method-d6a801ebb19)
- [GLUE SST2 Dataset](https://www.tensorflow.org/datasets/catalog/glue#gluesst2)
<!--[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)--> | 1,780 | [
[
-0.0186767578125,
-0.05126953125,
0.024627685546875,
0.0196685791015625,
-0.049957275390625,
0.00943756103515625,
-0.01454925537109375,
-0.0143890380859375,
0.01267242431640625,
0.0298919677734375,
-0.069091796875,
-0.046844482421875,
-0.0421142578125,
0.006... |
yongchanskii/youtube-data-for-developers | 2023-08-22T17:25:33.000Z | [
"region:us"
] | yongchanskii | null | null | 0 | 67 | 2023-08-22T17:14:20 | ---
dataset_info:
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: sentence
dtype: string
splits:
- name: train
num_bytes: 3663423940.287
num_examples: 8389
- name: test
num_bytes: 417482475.0
num_examples: 933
download_size: 4039879845
dataset_size: 4080906415.287
---
# Dataset Card for "youtube-for-developers"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 518 | [
[
-0.054840087890625,
-0.027679443359375,
0.004947662353515625,
0.0255584716796875,
-0.0051422119140625,
0.0146636962890625,
-0.000965118408203125,
0.0159454345703125,
0.062225341796875,
0.030670166015625,
-0.06982421875,
-0.047637939453125,
-0.040771484375,
-... |
pccl-org/formal-logic-simple-order-simple-objects-blivergent-500 | 2023-09-21T20:20:02.000Z | [
"region:us"
] | pccl-org | null | null | 0 | 67 | 2023-09-21T20:14:22 | ---
dataset_info:
features:
- name: greater_than
dtype: string
- name: less_than
dtype: string
- name: correct_example
sequence: string
- name: incorrect_example
sequence: string
- name: distance
dtype: int64
- name: index
dtype: int64
splits:
- name: train
num_bytes: 19635650
num_examples: 124750
download_size: 3888871
dataset_size: 19635650
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "formal-logic-simple-order-simple-objects-blivergent-500"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 698 | [
[
-0.025299072265625,
-0.026519775390625,
0.0257720947265625,
0.02081298828125,
0.0008845329284667969,
-0.0257720947265625,
0.01522064208984375,
-0.027252197265625,
0.023956298828125,
0.032623291015625,
-0.0614013671875,
-0.051300048828125,
-0.039764404296875,
... |
erhwenkuo/poetry-chinese-zhtw | 2023-10-16T08:16:59.000Z | [
"task_categories:text-generation",
"size_categories:10K<n<100K",
"language:zh",
"license:mit",
"region:us"
] | erhwenkuo | null | null | 1 | 67 | 2023-10-16T07:16:02 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: author
dtype: string
- name: title
dtype: string
- name: text
dtype: string
- name: category
dtype: string
splits:
- name: train
num_bytes: 19839577
num_examples: 76013
download_size: 15009797
dataset_size: 19839577
license: mit
task_categories:
- text-generation
language:
- zh
size_categories:
- 10K<n<100K
---
# Dataset Card for "poetry-chinese-zhtw"
## Dataset Summary
This Chinese classical literature database collects roughly 55,000 Tang poems, 260,000 Song poems, 21,000 Song ci, and other classical collections. It covers nearly 14,000 poets of the Tang and Song dynasties and about 1,500 ci writers of the Northern and Southern Song.
- **Five Dynasties and Ten Kingdoms** - includes the "Huajian Ji" (花間集) and the "Nantang Erzhu Ci" (南唐二主詞).
- **Tang** - includes the "Quan Tangshi" (全唐詩), compiled under Emperor Kangxi's direction in the 44th year of his reign, gathering "more than 48,900 poems by some 2,200 poets".
- **Song** - includes the "Quan Songci" (全宋詞), edited by Tang Guizhang and supplemented by Kong Fanli, collecting 21,116 ci by 1,330 Song-dynasty ci writers.
- **Yuan** - includes 11,057 Yuan qu by 233 writers.
- **Qing** - includes the "Collected Poems of Nalan Xingde" (納蘭性德詩集).
Original data source:
- [chinese-poetry: 最全中文诗歌古典文集数据库](https://github.com/chinese-poetry/chinese-poetry/tree/master)
## Data Download and Cleaning
1. Download the [chinese-poetry: 最全中文诗歌古典文集数据库](https://github.com/chinese-poetry/chinese-poetry/tree/master) repo
2. Restructure the data to make it easier to use for model training
3. Convert Simplified to Traditional Chinese with OpenCC
4. Upload to the Huggingface Hub with Huggingface Datasets
## Dataset Structure
```json
{
"author":"杜甫",
"title":"月",
"text":"天上秋期近,人間月影清。入河蟾不沒,搗藥兔長生。只益丹心苦,能添白髮明。干戈知滿地,休照國西營。",
"category":"唐"
}
```
## Data Fields
- `author`: (string) author
- `title`: (string) title of the work
- `text`: (string) text content
- `category`: (string) dynasty of the work
## How to Use
```python
from datasets import load_dataset
dataset = load_dataset("erhwenkuo/poetry-chinese-zhtw", split="train")
```
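Once loaded, the `category` field makes it easy to keep a single dynasty. A minimal sketch over toy records shaped like the schema above (with a real `datasets.Dataset`, the equivalent is `dataset.filter(lambda ex: ex["category"] == "唐")`):

```python
# Hypothetical rows mirroring the dataset's author/title/category fields.
records = [
    {"author": "杜甫", "title": "月", "category": "唐"},
    {"author": "蘇軾", "title": "水調歌頭", "category": "宋"},
]

# Keep only Tang-dynasty works.
tang_only = [r for r in records if r["category"] == "唐"]
print([r["author"] for r in tang_only])  # ['杜甫']
```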
## License Information
[MIT](https://zh.wikipedia.org/zh-tw/MIT%E8%A8%B1%E5%8F%AF%E8%AD%89)
| 1,593 | [
[
-0.020233154296875,
-0.0193634033203125,
0.0017213821411132812,
0.020477294921875,
-0.050323486328125,
-0.027801513671875,
-0.02667236328125,
-0.0305938720703125,
0.038330078125,
0.032470703125,
-0.04376220703125,
-0.070068359375,
-0.0289154052734375,
0.0066... |
automated-research-group/phi-winogrande-results | 2023-10-30T01:03:01.000Z | [
"region:us"
] | automated-research-group | null | null | 0 | 67 | 2023-10-28T13:25:32 | ---
dataset_info:
- config_name: '{''do_sample''=False, ''beams''=10}'
features:
- name: id
dtype: string
- name: prediction
dtype: string
- name: likelihood
dtype: float32
- name: perplexity
dtype: float32
- name: accuracy
dtype: bool
splits:
- name: train
num_bytes: 47503
num_examples: 1267
download_size: 42573
dataset_size: 47503
- config_name: '{''do_sample''=False, ''beams''=1}'
features:
- name: id
dtype: string
- name: prediction
dtype: string
- name: likelihood
dtype: float32
- name: perplexity
dtype: float32
- name: accuracy
dtype: bool
splits:
- name: train
num_bytes: 47503
num_examples: 1267
download_size: 42573
dataset_size: 47503
- config_name: '{''do_sample''=False, ''beams''=5}'
features:
- name: id
dtype: string
- name: prediction
dtype: string
- name: likelihood
dtype: float32
- name: perplexity
dtype: float32
- name: accuracy
dtype: bool
splits:
- name: train
num_bytes: 47503
num_examples: 1267
download_size: 42573
dataset_size: 47503
- config_name: '{''do_sample''=True, ''beams''=1, ''temperature''=0.9, ''top_k''=100,
''top_p''=0.05}'
features:
- name: id
dtype: string
- name: prediction
dtype: string
- name: likelihood
dtype: float32
- name: perplexity
dtype: float32
- name: accuracy
dtype: bool
splits:
- name: train
num_bytes: 47503
num_examples: 1267
download_size: 29469
dataset_size: 47503
- config_name: '{''do_sample''=True, ''beams''=1, ''temperature''=0.9, ''top_k''=100,
''top_p''=0.1}'
features:
- name: id
dtype: string
- name: prediction
dtype: string
- name: likelihood
dtype: float32
- name: perplexity
dtype: float32
- name: accuracy
dtype: bool
splits:
- name: train
num_bytes: 47503
num_examples: 1267
download_size: 29469
dataset_size: 47503
- config_name: '{''do_sample''=True, ''beams''=1, ''temperature''=0.9, ''top_k''=100,
''top_p''=0.2}'
features:
- name: id
dtype: string
- name: prediction
dtype: string
- name: likelihood
dtype: float32
- name: perplexity
dtype: float32
- name: accuracy
dtype: bool
splits:
- name: train
num_bytes: 47503
num_examples: 1267
download_size: 29469
dataset_size: 47503
- config_name: '{''do_sample''=True, ''beams''=1, ''temperature''=0.9, ''top_k''=1000,
''top_p''=0.05}'
features:
- name: id
dtype: string
- name: prediction
dtype: string
- name: likelihood
dtype: float32
- name: perplexity
dtype: float32
- name: accuracy
dtype: bool
splits:
- name: train
num_bytes: 47503
num_examples: 1267
download_size: 29469
dataset_size: 47503
- config_name: '{''do_sample''=True, ''beams''=1, ''temperature''=0.9, ''top_k''=1000,
''top_p''=0.1}'
features:
- name: id
dtype: string
- name: prediction
dtype: string
- name: likelihood
dtype: float32
- name: perplexity
dtype: float32
- name: accuracy
dtype: bool
splits:
- name: train
num_bytes: 47503
num_examples: 1267
download_size: 29469
dataset_size: 47503
- config_name: '{''do_sample''=True, ''beams''=1, ''temperature''=0.9, ''top_k''=1000,
''top_p''=0.2}'
features:
- name: id
dtype: string
- name: prediction
dtype: string
- name: likelihood
dtype: float32
- name: perplexity
dtype: float32
- name: accuracy
dtype: bool
splits:
- name: train
num_bytes: 47503
num_examples: 1267
download_size: 29469
dataset_size: 47503
- config_name: '{''do_sample''=True, ''beams''=1, ''temperature''=0.9, ''top_k''=10000,
''top_p''=0.05}'
features:
- name: id
dtype: string
- name: prediction
dtype: string
- name: likelihood
dtype: float32
- name: perplexity
dtype: float32
- name: accuracy
dtype: bool
splits:
- name: train
num_bytes: 47503
num_examples: 1267
download_size: 29469
dataset_size: 47503
- config_name: '{''do_sample''=True, ''beams''=1, ''temperature''=0.9, ''top_k''=10000,
''top_p''=0.1}'
features:
- name: id
dtype: string
- name: prediction
dtype: string
- name: likelihood
dtype: float32
- name: perplexity
dtype: float32
- name: accuracy
dtype: bool
splits:
- name: train
num_bytes: 47503
num_examples: 1267
download_size: 29469
dataset_size: 47503
- config_name: '{''do_sample''=True, ''beams''=1, ''temperature''=0.9, ''top_k''=10000,
''top_p''=0.2}'
features:
- name: id
dtype: string
- name: prediction
dtype: string
- name: likelihood
dtype: float32
- name: perplexity
dtype: float32
- name: accuracy
dtype: bool
splits:
- name: train
num_bytes: 47503
num_examples: 1267
download_size: 29469
dataset_size: 47503
- config_name: '{''do_sample''=True, ''beams''=1, ''temperature''=0.95, ''top_k''=100,
''top_p''=0.05}'
features:
- name: id
dtype: string
- name: prediction
dtype: string
- name: likelihood
dtype: float32
- name: perplexity
dtype: float32
- name: accuracy
dtype: bool
splits:
- name: train
num_bytes: 47503
num_examples: 1267
download_size: 29469
dataset_size: 47503
- config_name: '{''do_sample''=True, ''beams''=1, ''temperature''=0.95, ''top_k''=100,
''top_p''=0.1}'
features:
- name: id
dtype: string
- name: prediction
dtype: string
- name: likelihood
dtype: float32
- name: perplexity
dtype: float32
- name: accuracy
dtype: bool
splits:
- name: train
num_bytes: 47503
num_examples: 1267
download_size: 29469
dataset_size: 47503
- config_name: '{''do_sample''=True, ''beams''=1, ''temperature''=0.95, ''top_k''=100,
''top_p''=0.2}'
features:
- name: id
dtype: string
- name: prediction
dtype: string
- name: likelihood
dtype: float32
- name: perplexity
dtype: float32
- name: accuracy
dtype: bool
splits:
- name: train
num_bytes: 47503
num_examples: 1267
download_size: 29469
dataset_size: 47503
- config_name: '{''do_sample''=True, ''beams''=1, ''temperature''=0.95, ''top_k''=1000,
''top_p''=0.05}'
features:
- name: id
dtype: string
- name: prediction
dtype: string
- name: likelihood
dtype: float32
- name: perplexity
dtype: float32
- name: accuracy
dtype: bool
splits:
- name: train
num_bytes: 47503
num_examples: 1267
download_size: 29469
dataset_size: 47503
- config_name: '{''do_sample''=True, ''beams''=1, ''temperature''=0.95, ''top_k''=1000,
''top_p''=0.1}'
features:
- name: id
dtype: string
- name: prediction
dtype: string
- name: likelihood
dtype: float32
- name: perplexity
dtype: float32
- name: accuracy
dtype: bool
splits:
- name: train
num_bytes: 47503
num_examples: 1267
download_size: 29469
dataset_size: 47503
- config_name: '{''do_sample''=True, ''beams''=1, ''temperature''=0.95, ''top_k''=1000,
''top_p''=0.2}'
features:
- name: id
dtype: string
- name: prediction
dtype: string
- name: likelihood
dtype: float32
- name: perplexity
dtype: float32
- name: accuracy
dtype: bool
splits:
- name: train
num_bytes: 47503
num_examples: 1267
download_size: 29469
dataset_size: 47503
- config_name: '{''do_sample''=True, ''beams''=1, ''temperature''=0.95, ''top_k''=10000,
''top_p''=0.05}'
features:
- name: id
dtype: string
- name: prediction
dtype: string
- name: likelihood
dtype: float32
- name: perplexity
dtype: float32
- name: accuracy
dtype: bool
splits:
- name: train
num_bytes: 47503
num_examples: 1267
download_size: 29469
dataset_size: 47503
- config_name: '{''do_sample''=True, ''beams''=1, ''temperature''=0.95, ''top_k''=10000,
''top_p''=0.1}'
features:
- name: id
dtype: string
- name: prediction
dtype: string
- name: likelihood
dtype: float32
- name: perplexity
dtype: float32
- name: accuracy
dtype: bool
splits:
- name: train
num_bytes: 47503
num_examples: 1267
download_size: 29469
dataset_size: 47503
- config_name: '{''do_sample''=True, ''beams''=1, ''temperature''=0.95, ''top_k''=10000,
''top_p''=0.2}'
features:
- name: id
dtype: string
- name: prediction
dtype: string
- name: likelihood
dtype: float32
- name: perplexity
dtype: float32
- name: accuracy
dtype: bool
splits:
- name: train
num_bytes: 47503
num_examples: 1267
download_size: 29469
dataset_size: 47503
- config_name: '{''do_sample''=True, ''beams''=1, ''temperature''=1.0, ''top_k''=100,
''top_p''=0.05}'
features:
- name: id
dtype: string
- name: prediction
dtype: string
- name: likelihood
dtype: float32
- name: perplexity
dtype: float32
- name: accuracy
dtype: bool
splits:
- name: train
num_bytes: 47503
num_examples: 1267
download_size: 29469
dataset_size: 47503
- config_name: '{''do_sample''=True, ''beams''=1, ''temperature''=1.0, ''top_k''=100,
''top_p''=0.1}'
features:
- name: id
dtype: string
- name: prediction
dtype: string
- name: likelihood
dtype: float32
- name: perplexity
dtype: float32
- name: accuracy
dtype: bool
splits:
- name: train
num_bytes: 47503
num_examples: 1267
download_size: 29469
dataset_size: 47503
- config_name: '{''do_sample''=True, ''beams''=1, ''temperature''=1.0, ''top_k''=100,
''top_p''=0.2}'
features:
- name: id
dtype: string
- name: prediction
dtype: string
- name: likelihood
dtype: float32
- name: perplexity
dtype: float32
- name: accuracy
dtype: bool
splits:
- name: train
num_bytes: 47503
num_examples: 1267
download_size: 29469
dataset_size: 47503
- config_name: '{''do_sample''=True, ''beams''=1, ''temperature''=1.0, ''top_k''=1000,
''top_p''=0.05}'
features:
- name: id
dtype: string
- name: prediction
dtype: string
- name: likelihood
dtype: float32
- name: perplexity
dtype: float32
- name: accuracy
dtype: bool
splits:
- name: train
num_bytes: 47503
num_examples: 1267
download_size: 29469
dataset_size: 47503
- config_name: '{''do_sample''=True, ''beams''=1, ''temperature''=1.0, ''top_k''=1000,
''top_p''=0.1}'
features:
- name: id
dtype: string
- name: prediction
dtype: string
- name: likelihood
dtype: float32
- name: perplexity
dtype: float32
- name: accuracy
dtype: bool
splits:
- name: train
num_bytes: 47503
num_examples: 1267
download_size: 29469
dataset_size: 47503
- config_name: '{''do_sample''=True, ''beams''=1, ''temperature''=1.0, ''top_k''=1000,
''top_p''=0.2}'
features:
- name: id
dtype: string
- name: prediction
dtype: string
- name: likelihood
dtype: float32
- name: perplexity
dtype: float32
- name: accuracy
dtype: bool
splits:
- name: train
num_bytes: 47503
num_examples: 1267
download_size: 29469
dataset_size: 47503
- config_name: '{''do_sample''=True, ''beams''=1, ''temperature''=1.0, ''top_k''=10000,
''top_p''=0.05}'
features:
- name: id
dtype: string
- name: prediction
dtype: string
- name: likelihood
dtype: float32
- name: perplexity
dtype: float32
- name: accuracy
dtype: bool
splits:
- name: train
num_bytes: 47503
num_examples: 1267
download_size: 29469
dataset_size: 47503
- config_name: '{''do_sample''=True, ''beams''=1, ''temperature''=1.0, ''top_k''=10000,
''top_p''=0.1}'
features:
- name: id
dtype: string
- name: prediction
dtype: string
- name: likelihood
dtype: float32
- name: perplexity
dtype: float32
- name: accuracy
dtype: bool
splits:
- name: train
num_bytes: 47503
num_examples: 1267
download_size: 29469
dataset_size: 47503
- config_name: '{''do_sample''=True, ''beams''=1, ''temperature''=1.0, ''top_k''=10000,
''top_p''=0.2}'
features:
- name: id
dtype: string
- name: prediction
dtype: string
- name: likelihood
dtype: float32
- name: perplexity
dtype: float32
- name: accuracy
dtype: bool
splits:
- name: train
num_bytes: 47503
num_examples: 1267
download_size: 29469
dataset_size: 47503
- config_name: '{''do_sample''=True, ''beams''=5, ''temperature''=0.9, ''top_k''=1000,
''top_p''=0.05}'
features:
- name: id
dtype: string
- name: prediction
dtype: string
- name: likelihood
dtype: float32
- name: perplexity
dtype: float32
- name: accuracy
dtype: bool
splits:
- name: train
num_bytes: 47503
num_examples: 1267
download_size: 29469
dataset_size: 47503
- config_name: '{''do_sample''=True, ''beams''=5, ''temperature''=0.9, ''top_k''=1000,
''top_p''=0.1}'
features:
- name: id
dtype: string
- name: prediction
dtype: string
- name: likelihood
dtype: float32
- name: perplexity
dtype: float32
- name: accuracy
dtype: bool
splits:
- name: train
num_bytes: 47503
num_examples: 1267
download_size: 29469
dataset_size: 47503
- config_name: '{''do_sample''=True, ''beams''=5, ''temperature''=0.9, ''top_k''=1000,
''top_p''=0.2}'
features:
- name: id
dtype: string
- name: prediction
dtype: string
- name: likelihood
dtype: float32
- name: perplexity
dtype: float32
- name: accuracy
dtype: bool
splits:
- name: train
num_bytes: 47503
num_examples: 1267
download_size: 29469
dataset_size: 47503
- config_name: '{''do_sample''=True, ''beams''=5, ''temperature''=0.9, ''top_k''=10000,
''top_p''=0.05}'
features:
- name: id
dtype: string
- name: prediction
dtype: string
- name: likelihood
dtype: float32
- name: perplexity
dtype: float32
- name: accuracy
dtype: bool
splits:
- name: train
num_bytes: 47503
num_examples: 1267
download_size: 29469
dataset_size: 47503
- config_name: '{''do_sample''=True, ''beams''=5, ''temperature''=0.9, ''top_k''=10000,
''top_p''=0.1}'
features:
- name: id
dtype: string
- name: prediction
dtype: string
- name: likelihood
dtype: float32
- name: perplexity
dtype: float32
- name: accuracy
dtype: bool
splits:
- name: train
num_bytes: 47503
num_examples: 1267
download_size: 29469
dataset_size: 47503
- config_name: '{''do_sample''=True, ''beams''=5, ''temperature''=0.9, ''top_k''=10000,
''top_p''=0.2}'
features:
- name: id
dtype: string
- name: prediction
dtype: string
- name: likelihood
dtype: float32
- name: perplexity
dtype: float32
- name: accuracy
dtype: bool
splits:
- name: train
num_bytes: 47503
num_examples: 1267
download_size: 29469
dataset_size: 47503
- config_name: '{''do_sample''=True, ''beams''=5, ''temperature''=0.95, ''top_k''=100,
''top_p''=0.05}'
features:
- name: id
dtype: string
- name: prediction
dtype: string
- name: likelihood
dtype: float32
- name: perplexity
dtype: float32
- name: accuracy
dtype: bool
splits:
- name: train
num_bytes: 47503
num_examples: 1267
download_size: 29469
dataset_size: 47503
- config_name: '{''do_sample''=True, ''beams''=5, ''temperature''=0.95, ''top_k''=100,
''top_p''=0.1}'
features:
- name: id
dtype: string
- name: prediction
dtype: string
- name: likelihood
dtype: float32
- name: perplexity
dtype: float32
- name: accuracy
dtype: bool
splits:
- name: train
num_bytes: 47503
num_examples: 1267
download_size: 29469
dataset_size: 47503
- config_name: '{''do_sample''=True, ''beams''=5, ''temperature''=0.95, ''top_k''=100,
''top_p''=0.2}'
features:
- name: id
dtype: string
- name: prediction
dtype: string
- name: likelihood
dtype: float32
- name: perplexity
dtype: float32
- name: accuracy
dtype: bool
splits:
- name: train
num_bytes: 47503
num_examples: 1267
download_size: 29469
dataset_size: 47503
- config_name: '{''do_sample''=True, ''beams''=5, ''temperature''=0.95, ''top_k''=1000,
''top_p''=0.05}'
features:
- name: id
dtype: string
- name: prediction
dtype: string
- name: likelihood
dtype: float32
- name: perplexity
dtype: float32
- name: accuracy
dtype: bool
splits:
- name: train
num_bytes: 47503
num_examples: 1267
download_size: 29469
dataset_size: 47503
- config_name: '{''do_sample''=True, ''beams''=5, ''temperature''=0.95, ''top_k''=1000,
''top_p''=0.1}'
features:
- name: id
dtype: string
- name: prediction
dtype: string
- name: likelihood
dtype: float32
- name: perplexity
dtype: float32
- name: accuracy
dtype: bool
splits:
- name: train
num_bytes: 47503
num_examples: 1267
download_size: 29469
dataset_size: 47503
- config_name: '{''do_sample''=True, ''beams''=5, ''temperature''=0.95, ''top_k''=1000,
''top_p''=0.2}'
features:
- name: id
dtype: string
- name: prediction
dtype: string
- name: likelihood
dtype: float32
- name: perplexity
dtype: float32
- name: accuracy
dtype: bool
splits:
- name: train
num_bytes: 47503
num_examples: 1267
download_size: 29469
dataset_size: 47503
- config_name: '{''do_sample''=True, ''beams''=5, ''temperature''=0.95, ''top_k''=10000,
''top_p''=0.05}'
features:
- name: id
dtype: string
- name: prediction
dtype: string
- name: likelihood
dtype: float32
- name: perplexity
dtype: float32
- name: accuracy
dtype: bool
splits:
- name: train
num_bytes: 47503
num_examples: 1267
download_size: 29469
dataset_size: 47503
- config_name: '{''do_sample''=True, ''beams''=5, ''temperature''=0.95, ''top_k''=10000,
''top_p''=0.1}'
features:
- name: id
dtype: string
- name: prediction
dtype: string
- name: likelihood
dtype: float32
- name: perplexity
dtype: float32
- name: accuracy
dtype: bool
splits:
- name: train
num_bytes: 47503
num_examples: 1267
download_size: 29469
dataset_size: 47503
- config_name: '{''do_sample''=True, ''beams''=5, ''temperature''=0.95, ''top_k''=10000,
''top_p''=0.2}'
features:
- name: id
dtype: string
- name: prediction
dtype: string
- name: likelihood
dtype: float32
- name: perplexity
dtype: float32
- name: accuracy
dtype: bool
splits:
- name: train
num_bytes: 47503
num_examples: 1267
download_size: 29469
dataset_size: 47503
- config_name: '{''do_sample''=True, ''beams''=5, ''temperature''=1.0, ''top_k''=100,
''top_p''=0.05}'
features:
- name: id
dtype: string
- name: prediction
dtype: string
- name: likelihood
dtype: float32
- name: perplexity
dtype: float32
- name: accuracy
dtype: bool
splits:
- name: train
num_bytes: 47503
num_examples: 1267
download_size: 29469
dataset_size: 47503
- config_name: '{''do_sample''=True, ''beams''=5, ''temperature''=1.0, ''top_k''=100,
''top_p''=0.1}'
features:
- name: id
dtype: string
- name: prediction
dtype: string
- name: likelihood
dtype: float32
- name: perplexity
dtype: float32
- name: accuracy
dtype: bool
splits:
- name: train
num_bytes: 47503
num_examples: 1267
download_size: 29469
dataset_size: 47503
- config_name: '{''do_sample''=True, ''beams''=5, ''temperature''=1.0, ''top_k''=100,
''top_p''=0.2}'
features:
- name: id
dtype: string
- name: prediction
dtype: string
- name: likelihood
dtype: float32
- name: perplexity
dtype: float32
- name: accuracy
dtype: bool
splits:
- name: train
num_bytes: 47503
num_examples: 1267
download_size: 29469
dataset_size: 47503
- config_name: '{''do_sample''=True, ''beams''=5, ''temperature''=1.0, ''top_k''=1000,
''top_p''=0.05}'
features:
- name: id
dtype: string
- name: prediction
dtype: string
- name: likelihood
dtype: float32
- name: perplexity
dtype: float32
- name: accuracy
dtype: bool
splits:
- name: train
num_bytes: 47503
num_examples: 1267
download_size: 29469
dataset_size: 47503
- config_name: '{''do_sample''=True, ''beams''=5, ''temperature''=1.0, ''top_k''=1000,
''top_p''=0.1}'
features:
- name: id
dtype: string
- name: prediction
dtype: string
- name: likelihood
dtype: float32
- name: perplexity
dtype: float32
- name: accuracy
dtype: bool
splits:
- name: train
num_bytes: 47503
num_examples: 1267
download_size: 29469
dataset_size: 47503
- config_name: '{''do_sample''=True, ''beams''=5, ''temperature''=1.0, ''top_k''=1000,
''top_p''=0.2}'
features:
- name: id
dtype: string
- name: prediction
dtype: string
- name: likelihood
dtype: float32
- name: perplexity
dtype: float32
- name: accuracy
dtype: bool
splits:
- name: train
num_bytes: 47503
num_examples: 1267
download_size: 29469
dataset_size: 47503
- config_name: '{''do_sample''=True, ''beams''=5, ''temperature''=1.0, ''top_k''=10000,
''top_p''=0.05}'
features:
- name: id
dtype: string
- name: prediction
dtype: string
- name: likelihood
dtype: float32
- name: perplexity
dtype: float32
- name: accuracy
dtype: bool
splits:
- name: train
num_bytes: 47503
num_examples: 1267
download_size: 29469
dataset_size: 47503
- config_name: '{''do_sample''=True, ''beams''=5, ''temperature''=1.0, ''top_k''=10000,
''top_p''=0.1}'
features:
- name: id
dtype: string
- name: prediction
dtype: string
- name: likelihood
dtype: float32
- name: perplexity
dtype: float32
- name: accuracy
dtype: bool
splits:
- name: train
num_bytes: 47503
num_examples: 1267
download_size: 0
dataset_size: 47503
- config_name: '{''do_sample''=True, ''beams''=5, ''temperature''=1.0, ''top_k''=10000,
''top_p''=0.2}'
features:
- name: id
dtype: string
- name: prediction
dtype: string
- name: likelihood
dtype: float32
- name: perplexity
dtype: float32
- name: accuracy
dtype: bool
splits:
- name: train
num_bytes: 47503
num_examples: 1267
download_size: 29469
dataset_size: 47503
- config_name: '{''do_sample''=True, ''temperature''=0.9, ''beams''=1, ''top_k''=100,
''top_p''=0.05}'
features:
- name: id
dtype: string
- name: prediction
dtype: string
- name: likelihood
dtype: float32
- name: perplexity
dtype: float32
- name: accuracy
dtype: bool
splits:
- name: train
num_bytes: 47503
num_examples: 1267
download_size: 29469
dataset_size: 47503
- config_name: '{''do_sample''=True, ''temperature''=0.9, ''beams''=1, ''top_k''=100,
''top_p''=0.1}'
features:
- name: id
dtype: string
- name: prediction
dtype: string
- name: likelihood
dtype: float32
- name: perplexity
dtype: float32
- name: accuracy
dtype: bool
splits:
- name: train
num_bytes: 47503
num_examples: 1267
download_size: 29469
dataset_size: 47503
- config_name: '{''do_sample''=True, ''temperature''=0.9, ''beams''=1, ''top_k''=100,
''top_p''=0.2}'
features:
- name: id
dtype: string
- name: prediction
dtype: string
- name: likelihood
dtype: float32
- name: perplexity
dtype: float32
- name: accuracy
dtype: bool
splits:
- name: train
num_bytes: 47503
num_examples: 1267
download_size: 29469
dataset_size: 47503
- config_name: '{''do_sample''=True, ''temperature''=0.9, ''beams''=1, ''top_k''=1000,
''top_p''=0.05}'
features:
- name: id
dtype: string
- name: prediction
dtype: string
- name: likelihood
dtype: float32
- name: perplexity
dtype: float32
- name: accuracy
dtype: bool
splits:
- name: train
num_bytes: 47503
num_examples: 1267
download_size: 29469
dataset_size: 47503
- config_name: '{''do_sample''=True, ''temperature''=0.9, ''beams''=1, ''top_k''=1000,
''top_p''=0.1}'
features:
- name: id
dtype: string
- name: prediction
dtype: string
- name: likelihood
dtype: float32
- name: perplexity
dtype: float32
- name: accuracy
dtype: bool
splits:
- name: train
num_bytes: 47503
num_examples: 1267
download_size: 29469
dataset_size: 47503
- config_name: '{''do_sample''=True, ''temperature''=0.9, ''beams''=1, ''top_k''=1000,
''top_p''=0.2}'
features:
- name: id
dtype: string
- name: prediction
dtype: string
- name: likelihood
dtype: float32
- name: perplexity
dtype: float32
- name: accuracy
dtype: bool
splits:
- name: train
num_bytes: 47503
num_examples: 1267
download_size: 29469
dataset_size: 47503
- config_name: '{''do_sample''=True, ''temperature''=0.9, ''beams''=1, ''top_k''=10000,
''top_p''=0.05}'
features:
- name: id
dtype: string
- name: prediction
dtype: string
- name: likelihood
dtype: float32
- name: perplexity
dtype: float32
- name: accuracy
dtype: bool
splits:
- name: train
num_bytes: 47503
num_examples: 1267
download_size: 29469
dataset_size: 47503
- config_name: '{''do_sample''=True, ''temperature''=0.9, ''beams''=1, ''top_k''=10000,
''top_p''=0.1}'
features:
- name: id
dtype: string
- name: prediction
dtype: string
- name: likelihood
dtype: float32
- name: perplexity
dtype: float32
- name: accuracy
dtype: bool
splits:
- name: train
num_bytes: 47503
num_examples: 1267
download_size: 29469
dataset_size: 47503
- config_name: '{''do_sample''=True, ''temperature''=0.9, ''beams''=1, ''top_k''=10000,
''top_p''=0.2}'
features:
- name: id
dtype: string
- name: prediction
dtype: string
- name: likelihood
dtype: float32
- name: perplexity
dtype: float32
- name: accuracy
dtype: bool
splits:
- name: train
num_bytes: 47503
num_examples: 1267
download_size: 29469
dataset_size: 47503
- config_name: '{''do_sample''=True, ''temperature''=0.9, ''beams''=5, ''top_k''=1000,
''top_p''=0.05}'
features:
- name: id
dtype: string
- name: prediction
dtype: string
- name: likelihood
dtype: float32
- name: perplexity
dtype: float32
- name: accuracy
dtype: bool
splits:
- name: train
num_bytes: 47503
num_examples: 1267
download_size: 29469
dataset_size: 47503
- config_name: '{''do_sample''=True, ''temperature''=0.9, ''beams''=5, ''top_k''=1000,
''top_p''=0.1}'
features:
- name: id
dtype: string
- name: prediction
dtype: string
- name: likelihood
dtype: float32
- name: perplexity
dtype: float32
- name: accuracy
dtype: bool
splits:
- name: train
num_bytes: 47503
num_examples: 1267
download_size: 29469
dataset_size: 47503
- config_name: '{''do_sample''=True, ''temperature''=0.9, ''beams''=5, ''top_k''=1000,
''top_p''=0.2}'
features:
- name: id
dtype: string
- name: prediction
dtype: string
- name: likelihood
dtype: float32
- name: perplexity
dtype: float32
- name: accuracy
dtype: bool
splits:
- name: train
num_bytes: 47503
num_examples: 1267
download_size: 29469
dataset_size: 47503
- config_name: '{''do_sample''=True, ''temperature''=0.9, ''beams''=5, ''top_k''=10000,
''top_p''=0.05}'
features:
- name: id
dtype: string
- name: prediction
dtype: string
- name: likelihood
dtype: float32
- name: perplexity
dtype: float32
- name: accuracy
dtype: bool
splits:
- name: train
num_bytes: 47503
num_examples: 1267
download_size: 29469
dataset_size: 47503
- config_name: '{''do_sample''=True, ''temperature''=0.9, ''beams''=5, ''top_k''=10000,
''top_p''=0.1}'
features:
- name: id
dtype: string
- name: prediction
dtype: string
- name: likelihood
dtype: float32
- name: perplexity
dtype: float32
- name: accuracy
dtype: bool
splits:
- name: train
num_bytes: 47503
num_examples: 1267
download_size: 29469
dataset_size: 47503
- config_name: '{''do_sample''=True, ''temperature''=0.9, ''beams''=5, ''top_k''=10000,
''top_p''=0.2}'
features:
- name: id
dtype: string
- name: prediction
dtype: string
- name: likelihood
dtype: float32
- name: perplexity
dtype: float32
- name: accuracy
dtype: bool
splits:
- name: train
num_bytes: 47503
num_examples: 1267
download_size: 29469
dataset_size: 47503
- config_name: '{''do_sample''=True, ''temperature''=0.95, ''beams''=1, ''top_k''=100,
''top_p''=0.05}'
features:
- name: id
dtype: string
- name: prediction
dtype: string
- name: likelihood
dtype: float32
- name: perplexity
dtype: float32
- name: accuracy
dtype: bool
splits:
- name: train
num_bytes: 47503
num_examples: 1267
download_size: 29469
dataset_size: 47503
- config_name: '{''do_sample''=True, ''temperature''=0.95, ''beams''=1, ''top_k''=100,
''top_p''=0.1}'
features:
- name: id
dtype: string
- name: prediction
dtype: string
- name: likelihood
dtype: float32
- name: perplexity
dtype: float32
- name: accuracy
dtype: bool
splits:
- name: train
num_bytes: 47503
num_examples: 1267
download_size: 29469
dataset_size: 47503
- config_name: '{''do_sample''=True, ''temperature''=0.95, ''beams''=1, ''top_k''=100,
''top_p''=0.2}'
features:
- name: id
dtype: string
- name: prediction
dtype: string
- name: likelihood
dtype: float32
- name: perplexity
dtype: float32
- name: accuracy
dtype: bool
splits:
- name: train
num_bytes: 47503
num_examples: 1267
download_size: 29469
dataset_size: 47503
- config_name: '{''do_sample''=True, ''temperature''=0.95, ''beams''=1, ''top_k''=1000,
''top_p''=0.05}'
features:
- name: id
dtype: string
- name: prediction
dtype: string
- name: likelihood
dtype: float32
- name: perplexity
dtype: float32
- name: accuracy
dtype: bool
splits:
- name: train
num_bytes: 47503
num_examples: 1267
download_size: 29469
dataset_size: 47503
- config_name: '{''do_sample''=True, ''temperature''=0.95, ''beams''=1, ''top_k''=1000,
''top_p''=0.1}'
features:
- name: id
dtype: string
- name: prediction
dtype: string
- name: likelihood
dtype: float32
- name: perplexity
dtype: float32
- name: accuracy
dtype: bool
splits:
- name: train
num_bytes: 47503
num_examples: 1267
download_size: 29469
dataset_size: 47503
- config_name: '{''do_sample''=True, ''temperature''=0.95, ''beams''=1, ''top_k''=1000,
''top_p''=0.2}'
features:
- name: id
dtype: string
- name: prediction
dtype: string
- name: likelihood
dtype: float32
- name: perplexity
dtype: float32
- name: accuracy
dtype: bool
splits:
- name: train
num_bytes: 47503
num_examples: 1267
download_size: 29469
dataset_size: 47503
- config_name: '{''do_sample''=True, ''temperature''=0.95, ''beams''=1, ''top_k''=10000,
''top_p''=0.05}'
features:
- name: id
dtype: string
- name: prediction
dtype: string
- name: likelihood
dtype: float32
- name: perplexity
dtype: float32
- name: accuracy
dtype: bool
splits:
- name: train
num_bytes: 47503
num_examples: 1267
download_size: 29469
dataset_size: 47503
- config_name: '{''do_sample''=True, ''temperature''=0.95, ''beams''=1, ''top_k''=10000,
''top_p''=0.1}'
features:
- name: id
dtype: string
- name: prediction
dtype: string
- name: likelihood
dtype: float32
- name: perplexity
dtype: float32
- name: accuracy
dtype: bool
splits:
- name: train
num_bytes: 47503
num_examples: 1267
download_size: 29469
dataset_size: 47503
- config_name: '{''do_sample''=True, ''temperature''=0.95, ''beams''=1, ''top_k''=10000,
''top_p''=0.2}'
features:
- name: id
dtype: string
- name: prediction
dtype: string
- name: likelihood
dtype: float32
- name: perplexity
dtype: float32
- name: accuracy
dtype: bool
splits:
- name: train
num_bytes: 47503
num_examples: 1267
download_size: 29469
dataset_size: 47503
- config_name: '{''do_sample''=True, ''temperature''=0.95, ''beams''=5, ''top_k''=100,
''top_p''=0.05}'
features:
- name: id
dtype: string
- name: prediction
dtype: string
- name: likelihood
dtype: float32
- name: perplexity
dtype: float32
- name: accuracy
dtype: bool
splits:
- name: train
num_bytes: 47503
num_examples: 1267
download_size: 29469
dataset_size: 47503
- config_name: '{''do_sample''=True, ''temperature''=0.95, ''beams''=5, ''top_k''=100,
''top_p''=0.1}'
features:
- name: id
dtype: string
- name: prediction
dtype: string
- name: likelihood
dtype: float32
- name: perplexity
dtype: float32
- name: accuracy
dtype: bool
splits:
- name: train
num_bytes: 47503
num_examples: 1267
download_size: 29469
dataset_size: 47503
- config_name: '{''do_sample''=True, ''temperature''=0.95, ''beams''=5, ''top_k''=100,
''top_p''=0.2}'
features:
- name: id
dtype: string
- name: prediction
dtype: string
- name: likelihood
dtype: float32
- name: perplexity
dtype: float32
- name: accuracy
dtype: bool
splits:
- name: train
num_bytes: 47503
num_examples: 1267
download_size: 29469
dataset_size: 47503
- config_name: '{''do_sample''=True, ''temperature''=0.95, ''beams''=5, ''top_k''=1000,
''top_p''=0.05}'
features:
- name: id
dtype: string
- name: prediction
dtype: string
- name: likelihood
dtype: float32
- name: perplexity
dtype: float32
- name: accuracy
dtype: bool
splits:
- name: train
num_bytes: 47503
num_examples: 1267
download_size: 29469
dataset_size: 47503
- config_name: '{''do_sample''=True, ''temperature''=0.95, ''beams''=5, ''top_k''=1000,
''top_p''=0.1}'
features:
- name: id
dtype: string
- name: prediction
dtype: string
- name: likelihood
dtype: float32
- name: perplexity
dtype: float32
- name: accuracy
dtype: bool
splits:
- name: train
num_bytes: 47503
num_examples: 1267
download_size: 29469
dataset_size: 47503
- config_name: '{''do_sample''=True, ''temperature''=0.95, ''beams''=5, ''top_k''=1000,
''top_p''=0.2}'
features:
- name: id
dtype: string
- name: prediction
dtype: string
- name: likelihood
dtype: float32
- name: perplexity
dtype: float32
- name: accuracy
dtype: bool
splits:
- name: train
num_bytes: 47503
num_examples: 1267
download_size: 29469
dataset_size: 47503
- config_name: '{''do_sample''=True, ''temperature''=0.95, ''beams''=5, ''top_k''=10000,
''top_p''=0.05}'
features:
- name: id
dtype: string
- name: prediction
dtype: string
- name: likelihood
dtype: float32
- name: perplexity
dtype: float32
- name: accuracy
dtype: bool
splits:
- name: train
num_bytes: 47503
num_examples: 1267
download_size: 29469
dataset_size: 47503
- config_name: '{''do_sample''=True, ''temperature''=0.95, ''beams''=5, ''top_k''=10000,
''top_p''=0.1}'
features:
- name: id
dtype: string
- name: prediction
dtype: string
- name: likelihood
dtype: float32
- name: perplexity
dtype: float32
- name: accuracy
dtype: bool
splits:
- name: train
num_bytes: 47503
num_examples: 1267
download_size: 29469
dataset_size: 47503
- config_name: '{''do_sample''=True, ''temperature''=0.95, ''beams''=5, ''top_k''=10000,
''top_p''=0.2}'
features:
- name: id
dtype: string
- name: prediction
dtype: string
- name: likelihood
dtype: float32
- name: perplexity
dtype: float32
- name: accuracy
dtype: bool
splits:
- name: train
num_bytes: 47503
num_examples: 1267
download_size: 29469
dataset_size: 47503
- config_name: '{''do_sample''=True, ''temperature''=1.0, ''beams''=1, ''top_k''=100,
''top_p''=0.05}'
features:
- name: id
dtype: string
- name: prediction
dtype: string
- name: likelihood
dtype: float32
- name: perplexity
dtype: float32
- name: accuracy
dtype: bool
splits:
- name: train
num_bytes: 47503
num_examples: 1267
download_size: 29469
dataset_size: 47503
- config_name: '{''do_sample''=True, ''temperature''=1.0, ''beams''=1, ''top_k''=100,
''top_p''=0.1}'
features:
- name: id
dtype: string
- name: prediction
dtype: string
- name: likelihood
dtype: float32
- name: perplexity
dtype: float32
- name: accuracy
dtype: bool
splits:
- name: train
num_bytes: 47503
num_examples: 1267
download_size: 29469
dataset_size: 47503
- config_name: '{''do_sample''=True, ''temperature''=1.0, ''beams''=1, ''top_k''=100,
''top_p''=0.2}'
features:
- name: id
dtype: string
- name: prediction
dtype: string
- name: likelihood
dtype: float32
- name: perplexity
dtype: float32
- name: accuracy
dtype: bool
splits:
- name: train
num_bytes: 47503
num_examples: 1267
download_size: 29469
dataset_size: 47503
- config_name: '{''do_sample''=True, ''temperature''=1.0, ''beams''=1, ''top_k''=1000,
''top_p''=0.05}'
features:
- name: id
dtype: string
- name: prediction
dtype: string
- name: likelihood
dtype: float32
- name: perplexity
dtype: float32
- name: accuracy
dtype: bool
splits:
- name: train
num_bytes: 47503
num_examples: 1267
download_size: 29469
dataset_size: 47503
- config_name: '{''do_sample''=True, ''temperature''=1.0, ''beams''=1, ''top_k''=1000,
''top_p''=0.1}'
features:
- name: id
dtype: string
- name: prediction
dtype: string
- name: likelihood
dtype: float32
- name: perplexity
dtype: float32
- name: accuracy
dtype: bool
splits:
- name: train
num_bytes: 47503
num_examples: 1267
download_size: 29469
dataset_size: 47503
- config_name: '{''do_sample''=True, ''temperature''=1.0, ''beams''=1, ''top_k''=1000,
''top_p''=0.2}'
features:
- name: id
dtype: string
- name: prediction
dtype: string
- name: likelihood
dtype: float32
- name: perplexity
dtype: float32
- name: accuracy
dtype: bool
splits:
- name: train
num_bytes: 47503
num_examples: 1267
download_size: 29469
dataset_size: 47503
- config_name: '{''do_sample''=True, ''temperature''=1.0, ''beams''=1, ''top_k''=10000,
''top_p''=0.05}'
features:
- name: id
dtype: string
- name: prediction
dtype: string
- name: likelihood
dtype: float32
- name: perplexity
dtype: float32
- name: accuracy
dtype: bool
splits:
- name: train
num_bytes: 47503
num_examples: 1267
download_size: 29469
dataset_size: 47503
- config_name: '{''do_sample''=True, ''temperature''=1.0, ''beams''=1, ''top_k''=10000,
''top_p''=0.1}'
features:
- name: id
dtype: string
- name: prediction
dtype: string
- name: likelihood
dtype: float32
- name: perplexity
dtype: float32
- name: accuracy
dtype: bool
splits:
- name: train
num_bytes: 47503
num_examples: 1267
download_size: 29469
dataset_size: 47503
- config_name: '{''do_sample''=True, ''temperature''=1.0, ''beams''=1, ''top_k''=10000,
''top_p''=0.2}'
features:
- name: id
dtype: string
- name: prediction
dtype: string
- name: likelihood
dtype: float32
- name: perplexity
dtype: float32
- name: accuracy
dtype: bool
splits:
- name: train
num_bytes: 47503
num_examples: 1267
download_size: 29469
dataset_size: 47503
- config_name: '{''do_sample''=True, ''temperature''=1.0, ''beams''=5, ''top_k''=100,
''top_p''=0.05}'
features:
- name: id
dtype: string
- name: prediction
dtype: string
- name: likelihood
dtype: float32
- name: perplexity
dtype: float32
- name: accuracy
dtype: bool
splits:
- name: train
num_bytes: 47503
num_examples: 1267
download_size: 29469
dataset_size: 47503
- config_name: '{''do_sample''=True, ''temperature''=1.0, ''beams''=5, ''top_k''=100,
''top_p''=0.1}'
features:
- name: id
dtype: string
- name: prediction
dtype: string
- name: likelihood
dtype: float32
- name: perplexity
dtype: float32
- name: accuracy
dtype: bool
splits:
- name: train
num_bytes: 47503
num_examples: 1267
download_size: 29469
dataset_size: 47503
- config_name: '{''do_sample''=True, ''temperature''=1.0, ''beams''=5, ''top_k''=100,
''top_p''=0.2}'
features:
- name: id
dtype: string
- name: prediction
dtype: string
- name: likelihood
dtype: float32
- name: perplexity
dtype: float32
- name: accuracy
dtype: bool
splits:
- name: train
num_bytes: 47503
num_examples: 1267
download_size: 29469
dataset_size: 47503
- config_name: '{''do_sample''=True, ''temperature''=1.0, ''beams''=5, ''top_k''=1000,
''top_p''=0.05}'
features:
- name: id
dtype: string
- name: prediction
dtype: string
- name: likelihood
dtype: float32
- name: perplexity
dtype: float32
- name: accuracy
dtype: bool
splits:
- name: train
num_bytes: 47503
num_examples: 1267
download_size: 29469
dataset_size: 47503
- config_name: '{''do_sample''=True, ''temperature''=1.0, ''beams''=5, ''top_k''=1000,
''top_p''=0.1}'
features:
- name: id
dtype: string
- name: prediction
dtype: string
- name: likelihood
dtype: float32
- name: perplexity
dtype: float32
- name: accuracy
dtype: bool
splits:
- name: train
num_bytes: 47503
num_examples: 1267
download_size: 29469
dataset_size: 47503
- config_name: '{''do_sample''=True, ''temperature''=1.0, ''beams''=5, ''top_k''=1000,
''top_p''=0.2}'
features:
- name: id
dtype: string
- name: prediction
dtype: string
- name: likelihood
dtype: float32
- name: perplexity
dtype: float32
- name: accuracy
dtype: bool
splits:
- name: train
num_bytes: 47503
num_examples: 1267
download_size: 29469
dataset_size: 47503
- config_name: '{''do_sample''=True, ''temperature''=1.0, ''beams''=5, ''top_k''=10000,
''top_p''=0.05}'
features:
- name: id
dtype: string
- name: prediction
dtype: string
- name: likelihood
dtype: float32
- name: perplexity
dtype: float32
- name: accuracy
dtype: bool
splits:
- name: train
num_bytes: 47503
num_examples: 1267
download_size: 29469
dataset_size: 47503
- config_name: '{''do_sample''=True, ''temperature''=1.0, ''beams''=5, ''top_k''=10000,
''top_p''=0.1}'
features:
- name: id
dtype: string
- name: prediction
dtype: string
- name: likelihood
dtype: float32
- name: perplexity
dtype: float32
- name: accuracy
dtype: bool
splits:
- name: train
num_bytes: 47503
num_examples: 1267
download_size: 29469
dataset_size: 47503
- config_name: '{''do_sample''=True, ''temperature''=1.0, ''beams''=5, ''top_k''=10000,
''top_p''=0.2}'
features:
- name: id
dtype: string
- name: prediction
dtype: string
- name: likelihood
dtype: float32
- name: perplexity
dtype: float32
- name: accuracy
dtype: bool
splits:
- name: train
num_bytes: 47503
num_examples: 1267
download_size: 29469
dataset_size: 47503
configs:
- config_name: '{''do_sample''=False, ''beams''=10}'
data_files:
- split: train
path: '{''do_sample''=False, ''beams''=10}/train-*'
- config_name: '{''do_sample''=False, ''beams''=1}'
data_files:
- split: train
path: '{''do_sample''=False, ''beams''=1}/train-*'
- config_name: '{''do_sample''=False, ''beams''=5}'
data_files:
- split: train
path: '{''do_sample''=False, ''beams''=5}/train-*'
- config_name: '{''do_sample''=True, ''beams''=1, ''temperature''=0.9, ''top_k''=100,
''top_p''=0.05}'
data_files:
- split: train
path: '{''do_sample''=True, ''beams''=1, ''temperature''=0.9, ''top_k''=100, ''top_p''=0.05}/train-*'
- config_name: '{''do_sample''=True, ''beams''=1, ''temperature''=0.9, ''top_k''=100,
''top_p''=0.1}'
data_files:
- split: train
path: '{''do_sample''=True, ''beams''=1, ''temperature''=0.9, ''top_k''=100, ''top_p''=0.1}/train-*'
- config_name: '{''do_sample''=True, ''beams''=1, ''temperature''=0.9, ''top_k''=100,
''top_p''=0.2}'
data_files:
- split: train
path: '{''do_sample''=True, ''beams''=1, ''temperature''=0.9, ''top_k''=100, ''top_p''=0.2}/train-*'
- config_name: '{''do_sample''=True, ''beams''=1, ''temperature''=0.9, ''top_k''=1000,
''top_p''=0.05}'
data_files:
- split: train
path: '{''do_sample''=True, ''beams''=1, ''temperature''=0.9, ''top_k''=1000,
''top_p''=0.05}/train-*'
- config_name: '{''do_sample''=True, ''beams''=1, ''temperature''=0.9, ''top_k''=1000,
''top_p''=0.1}'
data_files:
- split: train
path: '{''do_sample''=True, ''beams''=1, ''temperature''=0.9, ''top_k''=1000,
''top_p''=0.1}/train-*'
- config_name: '{''do_sample''=True, ''beams''=1, ''temperature''=0.9, ''top_k''=1000,
''top_p''=0.2}'
data_files:
- split: train
path: '{''do_sample''=True, ''beams''=1, ''temperature''=0.9, ''top_k''=1000,
''top_p''=0.2}/train-*'
- config_name: '{''do_sample''=True, ''beams''=1, ''temperature''=0.9, ''top_k''=10000,
''top_p''=0.05}'
data_files:
- split: train
path: '{''do_sample''=True, ''beams''=1, ''temperature''=0.9, ''top_k''=10000,
''top_p''=0.05}/train-*'
- config_name: '{''do_sample''=True, ''beams''=1, ''temperature''=0.9, ''top_k''=10000,
''top_p''=0.1}'
data_files:
- split: train
path: '{''do_sample''=True, ''beams''=1, ''temperature''=0.9, ''top_k''=10000,
''top_p''=0.1}/train-*'
- config_name: '{''do_sample''=True, ''beams''=1, ''temperature''=0.9, ''top_k''=10000,
''top_p''=0.2}'
data_files:
- split: train
path: '{''do_sample''=True, ''beams''=1, ''temperature''=0.9, ''top_k''=10000,
''top_p''=0.2}/train-*'
- config_name: '{''do_sample''=True, ''beams''=1, ''temperature''=0.95, ''top_k''=100,
''top_p''=0.05}'
data_files:
- split: train
path: '{''do_sample''=True, ''beams''=1, ''temperature''=0.95, ''top_k''=100,
''top_p''=0.05}/train-*'
- config_name: '{''do_sample''=True, ''beams''=1, ''temperature''=0.95, ''top_k''=100,
''top_p''=0.1}'
data_files:
- split: train
path: '{''do_sample''=True, ''beams''=1, ''temperature''=0.95, ''top_k''=100,
''top_p''=0.1}/train-*'
- config_name: '{''do_sample''=True, ''beams''=1, ''temperature''=0.95, ''top_k''=100,
''top_p''=0.2}'
data_files:
- split: train
path: '{''do_sample''=True, ''beams''=1, ''temperature''=0.95, ''top_k''=100,
''top_p''=0.2}/train-*'
- config_name: '{''do_sample''=True, ''beams''=1, ''temperature''=0.95, ''top_k''=1000,
''top_p''=0.05}'
data_files:
- split: train
path: '{''do_sample''=True, ''beams''=1, ''temperature''=0.95, ''top_k''=1000,
''top_p''=0.05}/train-*'
- config_name: '{''do_sample''=True, ''beams''=1, ''temperature''=0.95, ''top_k''=1000,
''top_p''=0.1}'
data_files:
- split: train
path: '{''do_sample''=True, ''beams''=1, ''temperature''=0.95, ''top_k''=1000,
''top_p''=0.1}/train-*'
- config_name: '{''do_sample''=True, ''beams''=1, ''temperature''=0.95, ''top_k''=1000,
''top_p''=0.2}'
data_files:
- split: train
path: '{''do_sample''=True, ''beams''=1, ''temperature''=0.95, ''top_k''=1000,
''top_p''=0.2}/train-*'
- config_name: '{''do_sample''=True, ''beams''=1, ''temperature''=0.95, ''top_k''=10000,
''top_p''=0.05}'
data_files:
- split: train
path: '{''do_sample''=True, ''beams''=1, ''temperature''=0.95, ''top_k''=10000,
''top_p''=0.05}/train-*'
- config_name: '{''do_sample''=True, ''beams''=1, ''temperature''=0.95, ''top_k''=10000,
''top_p''=0.1}'
data_files:
- split: train
path: '{''do_sample''=True, ''beams''=1, ''temperature''=0.95, ''top_k''=10000,
''top_p''=0.1}/train-*'
- config_name: '{''do_sample''=True, ''beams''=1, ''temperature''=0.95, ''top_k''=10000,
''top_p''=0.2}'
data_files:
- split: train
path: '{''do_sample''=True, ''beams''=1, ''temperature''=0.95, ''top_k''=10000,
''top_p''=0.2}/train-*'
- config_name: '{''do_sample''=True, ''beams''=1, ''temperature''=1.0, ''top_k''=100,
''top_p''=0.05}'
data_files:
- split: train
path: '{''do_sample''=True, ''beams''=1, ''temperature''=1.0, ''top_k''=100, ''top_p''=0.05}/train-*'
- config_name: '{''do_sample''=True, ''beams''=1, ''temperature''=1.0, ''top_k''=100,
''top_p''=0.1}'
data_files:
- split: train
path: '{''do_sample''=True, ''beams''=1, ''temperature''=1.0, ''top_k''=100, ''top_p''=0.1}/train-*'
- config_name: '{''do_sample''=True, ''beams''=1, ''temperature''=1.0, ''top_k''=100,
''top_p''=0.2}'
data_files:
- split: train
path: '{''do_sample''=True, ''beams''=1, ''temperature''=1.0, ''top_k''=100, ''top_p''=0.2}/train-*'
- config_name: '{''do_sample''=True, ''beams''=1, ''temperature''=1.0, ''top_k''=1000,
''top_p''=0.05}'
data_files:
- split: train
path: '{''do_sample''=True, ''beams''=1, ''temperature''=1.0, ''top_k''=1000,
''top_p''=0.05}/train-*'
- config_name: '{''do_sample''=True, ''beams''=1, ''temperature''=1.0, ''top_k''=1000,
''top_p''=0.1}'
data_files:
- split: train
path: '{''do_sample''=True, ''beams''=1, ''temperature''=1.0, ''top_k''=1000,
''top_p''=0.1}/train-*'
- config_name: '{''do_sample''=True, ''beams''=1, ''temperature''=1.0, ''top_k''=1000,
''top_p''=0.2}'
data_files:
- split: train
path: '{''do_sample''=True, ''beams''=1, ''temperature''=1.0, ''top_k''=1000,
''top_p''=0.2}/train-*'
- config_name: '{''do_sample''=True, ''beams''=1, ''temperature''=1.0, ''top_k''=10000,
''top_p''=0.05}'
data_files:
- split: train
path: '{''do_sample''=True, ''beams''=1, ''temperature''=1.0, ''top_k''=10000,
''top_p''=0.05}/train-*'
- config_name: '{''do_sample''=True, ''beams''=1, ''temperature''=1.0, ''top_k''=10000,
''top_p''=0.1}'
data_files:
- split: train
path: '{''do_sample''=True, ''beams''=1, ''temperature''=1.0, ''top_k''=10000,
''top_p''=0.1}/train-*'
- config_name: '{''do_sample''=True, ''beams''=1, ''temperature''=1.0, ''top_k''=10000,
''top_p''=0.2}'
data_files:
- split: train
path: '{''do_sample''=True, ''beams''=1, ''temperature''=1.0, ''top_k''=10000,
''top_p''=0.2}/train-*'
- config_name: '{''do_sample''=True, ''beams''=5, ''temperature''=0.9, ''top_k''=1000,
''top_p''=0.05}'
data_files:
- split: train
path: '{''do_sample''=True, ''beams''=5, ''temperature''=0.9, ''top_k''=1000,
''top_p''=0.05}/train-*'
- config_name: '{''do_sample''=True, ''beams''=5, ''temperature''=0.9, ''top_k''=1000,
''top_p''=0.1}'
data_files:
- split: train
path: '{''do_sample''=True, ''beams''=5, ''temperature''=0.9, ''top_k''=1000,
''top_p''=0.1}/train-*'
- config_name: '{''do_sample''=True, ''beams''=5, ''temperature''=0.9, ''top_k''=1000,
''top_p''=0.2}'
data_files:
- split: train
path: '{''do_sample''=True, ''beams''=5, ''temperature''=0.9, ''top_k''=1000,
''top_p''=0.2}/train-*'
- config_name: '{''do_sample''=True, ''beams''=5, ''temperature''=0.9, ''top_k''=10000,
''top_p''=0.05}'
data_files:
- split: train
path: '{''do_sample''=True, ''beams''=5, ''temperature''=0.9, ''top_k''=10000,
''top_p''=0.05}/train-*'
- config_name: '{''do_sample''=True, ''beams''=5, ''temperature''=0.9, ''top_k''=10000,
''top_p''=0.1}'
data_files:
- split: train
path: '{''do_sample''=True, ''beams''=5, ''temperature''=0.9, ''top_k''=10000,
''top_p''=0.1}/train-*'
- config_name: '{''do_sample''=True, ''beams''=5, ''temperature''=0.9, ''top_k''=10000,
''top_p''=0.2}'
data_files:
- split: train
path: '{''do_sample''=True, ''beams''=5, ''temperature''=0.9, ''top_k''=10000,
''top_p''=0.2}/train-*'
- config_name: '{''do_sample''=True, ''beams''=5, ''temperature''=0.95, ''top_k''=100,
''top_p''=0.05}'
data_files:
- split: train
path: '{''do_sample''=True, ''beams''=5, ''temperature''=0.95, ''top_k''=100,
''top_p''=0.05}/train-*'
- config_name: '{''do_sample''=True, ''beams''=5, ''temperature''=0.95, ''top_k''=100,
''top_p''=0.1}'
data_files:
- split: train
path: '{''do_sample''=True, ''beams''=5, ''temperature''=0.95, ''top_k''=100,
''top_p''=0.1}/train-*'
- config_name: '{''do_sample''=True, ''beams''=5, ''temperature''=0.95, ''top_k''=100,
''top_p''=0.2}'
data_files:
- split: train
path: '{''do_sample''=True, ''beams''=5, ''temperature''=0.95, ''top_k''=100,
''top_p''=0.2}/train-*'
- config_name: '{''do_sample''=True, ''beams''=5, ''temperature''=0.95, ''top_k''=1000,
''top_p''=0.05}'
data_files:
- split: train
path: '{''do_sample''=True, ''beams''=5, ''temperature''=0.95, ''top_k''=1000,
''top_p''=0.05}/train-*'
- config_name: '{''do_sample''=True, ''beams''=5, ''temperature''=0.95, ''top_k''=1000,
''top_p''=0.1}'
data_files:
- split: train
path: '{''do_sample''=True, ''beams''=5, ''temperature''=0.95, ''top_k''=1000,
''top_p''=0.1}/train-*'
- config_name: '{''do_sample''=True, ''beams''=5, ''temperature''=0.95, ''top_k''=1000,
''top_p''=0.2}'
data_files:
- split: train
path: '{''do_sample''=True, ''beams''=5, ''temperature''=0.95, ''top_k''=1000,
''top_p''=0.2}/train-*'
- config_name: '{''do_sample''=True, ''beams''=5, ''temperature''=0.95, ''top_k''=10000,
''top_p''=0.05}'
data_files:
- split: train
path: '{''do_sample''=True, ''beams''=5, ''temperature''=0.95, ''top_k''=10000,
''top_p''=0.05}/train-*'
- config_name: '{''do_sample''=True, ''beams''=5, ''temperature''=0.95, ''top_k''=10000,
''top_p''=0.1}'
data_files:
- split: train
path: '{''do_sample''=True, ''beams''=5, ''temperature''=0.95, ''top_k''=10000,
''top_p''=0.1}/train-*'
- config_name: '{''do_sample''=True, ''beams''=5, ''temperature''=0.95, ''top_k''=10000,
''top_p''=0.2}'
data_files:
- split: train
path: '{''do_sample''=True, ''beams''=5, ''temperature''=0.95, ''top_k''=10000,
''top_p''=0.2}/train-*'
- config_name: '{''do_sample''=True, ''beams''=5, ''temperature''=1.0, ''top_k''=100,
''top_p''=0.05}'
data_files:
- split: train
path: '{''do_sample''=True, ''beams''=5, ''temperature''=1.0, ''top_k''=100, ''top_p''=0.05}/train-*'
- config_name: '{''do_sample''=True, ''beams''=5, ''temperature''=1.0, ''top_k''=100,
''top_p''=0.1}'
data_files:
- split: train
path: '{''do_sample''=True, ''beams''=5, ''temperature''=1.0, ''top_k''=100, ''top_p''=0.1}/train-*'
- config_name: '{''do_sample''=True, ''beams''=5, ''temperature''=1.0, ''top_k''=100,
''top_p''=0.2}'
data_files:
- split: train
path: '{''do_sample''=True, ''beams''=5, ''temperature''=1.0, ''top_k''=100, ''top_p''=0.2}/train-*'
- config_name: '{''do_sample''=True, ''beams''=5, ''temperature''=1.0, ''top_k''=1000,
''top_p''=0.05}'
data_files:
- split: train
path: '{''do_sample''=True, ''beams''=5, ''temperature''=1.0, ''top_k''=1000,
''top_p''=0.05}/train-*'
- config_name: '{''do_sample''=True, ''beams''=5, ''temperature''=1.0, ''top_k''=1000,
''top_p''=0.1}'
data_files:
- split: train
path: '{''do_sample''=True, ''beams''=5, ''temperature''=1.0, ''top_k''=1000,
''top_p''=0.1}/train-*'
- config_name: '{''do_sample''=True, ''beams''=5, ''temperature''=1.0, ''top_k''=1000,
''top_p''=0.2}'
data_files:
- split: train
path: '{''do_sample''=True, ''beams''=5, ''temperature''=1.0, ''top_k''=1000,
''top_p''=0.2}/train-*'
- config_name: '{''do_sample''=True, ''beams''=5, ''temperature''=1.0, ''top_k''=10000,
''top_p''=0.05}'
data_files:
- split: train
path: '{''do_sample''=True, ''beams''=5, ''temperature''=1.0, ''top_k''=10000,
''top_p''=0.05}/train-*'
- config_name: '{''do_sample''=True, ''beams''=5, ''temperature''=1.0, ''top_k''=10000,
''top_p''=0.1}'
data_files:
- split: train
path: '{''do_sample''=True, ''beams''=5, ''temperature''=1.0, ''top_k''=10000,
''top_p''=0.1}/train-*'
- config_name: '{''do_sample''=True, ''beams''=5, ''temperature''=1.0, ''top_k''=10000,
''top_p''=0.2}'
data_files:
- split: train
path: '{''do_sample''=True, ''beams''=5, ''temperature''=1.0, ''top_k''=10000,
''top_p''=0.2}/train-*'
- config_name: '{''do_sample''=True, ''temperature''=0.9, ''beams''=1, ''top_k''=100,
''top_p''=0.05}'
data_files:
- split: train
path: '{''do_sample''=True, ''temperature''=0.9, ''beams''=1, ''top_k''=100, ''top_p''=0.05}/train-*'
- config_name: '{''do_sample''=True, ''temperature''=0.9, ''beams''=1, ''top_k''=100,
''top_p''=0.1}'
data_files:
- split: train
path: '{''do_sample''=True, ''temperature''=0.9, ''beams''=1, ''top_k''=100, ''top_p''=0.1}/train-*'
- config_name: '{''do_sample''=True, ''temperature''=0.9, ''beams''=1, ''top_k''=100,
''top_p''=0.2}'
data_files:
- split: train
path: '{''do_sample''=True, ''temperature''=0.9, ''beams''=1, ''top_k''=100, ''top_p''=0.2}/train-*'
- config_name: '{''do_sample''=True, ''temperature''=0.9, ''beams''=1, ''top_k''=1000,
''top_p''=0.05}'
data_files:
- split: train
path: '{''do_sample''=True, ''temperature''=0.9, ''beams''=1, ''top_k''=1000,
''top_p''=0.05}/train-*'
- config_name: '{''do_sample''=True, ''temperature''=0.9, ''beams''=1, ''top_k''=1000,
''top_p''=0.1}'
data_files:
- split: train
path: '{''do_sample''=True, ''temperature''=0.9, ''beams''=1, ''top_k''=1000,
''top_p''=0.1}/train-*'
- config_name: '{''do_sample''=True, ''temperature''=0.9, ''beams''=1, ''top_k''=1000,
''top_p''=0.2}'
data_files:
- split: train
path: '{''do_sample''=True, ''temperature''=0.9, ''beams''=1, ''top_k''=1000,
''top_p''=0.2}/train-*'
- config_name: '{''do_sample''=True, ''temperature''=0.9, ''beams''=1, ''top_k''=10000,
''top_p''=0.05}'
data_files:
- split: train
path: '{''do_sample''=True, ''temperature''=0.9, ''beams''=1, ''top_k''=10000,
''top_p''=0.05}/train-*'
- config_name: '{''do_sample''=True, ''temperature''=0.9, ''beams''=1, ''top_k''=10000,
''top_p''=0.1}'
data_files:
- split: train
path: '{''do_sample''=True, ''temperature''=0.9, ''beams''=1, ''top_k''=10000,
''top_p''=0.1}/train-*'
- config_name: '{''do_sample''=True, ''temperature''=0.9, ''beams''=1, ''top_k''=10000,
''top_p''=0.2}'
data_files:
- split: train
path: '{''do_sample''=True, ''temperature''=0.9, ''beams''=1, ''top_k''=10000,
''top_p''=0.2}/train-*'
- config_name: '{''do_sample''=True, ''temperature''=0.9, ''beams''=5, ''top_k''=1000,
''top_p''=0.05}'
data_files:
- split: train
path: '{''do_sample''=True, ''temperature''=0.9, ''beams''=5, ''top_k''=1000,
''top_p''=0.05}/train-*'
- config_name: '{''do_sample''=True, ''temperature''=0.9, ''beams''=5, ''top_k''=1000,
''top_p''=0.1}'
data_files:
- split: train
path: '{''do_sample''=True, ''temperature''=0.9, ''beams''=5, ''top_k''=1000,
''top_p''=0.1}/train-*'
- config_name: '{''do_sample''=True, ''temperature''=0.9, ''beams''=5, ''top_k''=1000,
''top_p''=0.2}'
data_files:
- split: train
path: '{''do_sample''=True, ''temperature''=0.9, ''beams''=5, ''top_k''=1000,
''top_p''=0.2}/train-*'
- config_name: '{''do_sample''=True, ''temperature''=0.9, ''beams''=5, ''top_k''=10000,
''top_p''=0.05}'
data_files:
- split: train
path: '{''do_sample''=True, ''temperature''=0.9, ''beams''=5, ''top_k''=10000,
''top_p''=0.05}/train-*'
- config_name: '{''do_sample''=True, ''temperature''=0.9, ''beams''=5, ''top_k''=10000,
''top_p''=0.1}'
data_files:
- split: train
path: '{''do_sample''=True, ''temperature''=0.9, ''beams''=5, ''top_k''=10000,
''top_p''=0.1}/train-*'
- config_name: '{''do_sample''=True, ''temperature''=0.9, ''beams''=5, ''top_k''=10000,
''top_p''=0.2}'
data_files:
- split: train
path: '{''do_sample''=True, ''temperature''=0.9, ''beams''=5, ''top_k''=10000,
''top_p''=0.2}/train-*'
- config_name: '{''do_sample''=True, ''temperature''=0.95, ''beams''=1, ''top_k''=100,
''top_p''=0.05}'
data_files:
- split: train
path: '{''do_sample''=True, ''temperature''=0.95, ''beams''=1, ''top_k''=100,
''top_p''=0.05}/train-*'
- config_name: '{''do_sample''=True, ''temperature''=0.95, ''beams''=1, ''top_k''=100,
''top_p''=0.1}'
data_files:
- split: train
path: '{''do_sample''=True, ''temperature''=0.95, ''beams''=1, ''top_k''=100,
''top_p''=0.1}/train-*'
- config_name: '{''do_sample''=True, ''temperature''=0.95, ''beams''=1, ''top_k''=100,
''top_p''=0.2}'
data_files:
- split: train
path: '{''do_sample''=True, ''temperature''=0.95, ''beams''=1, ''top_k''=100,
''top_p''=0.2}/train-*'
- config_name: '{''do_sample''=True, ''temperature''=0.95, ''beams''=1, ''top_k''=1000,
''top_p''=0.05}'
data_files:
- split: train
path: '{''do_sample''=True, ''temperature''=0.95, ''beams''=1, ''top_k''=1000,
''top_p''=0.05}/train-*'
- config_name: '{''do_sample''=True, ''temperature''=0.95, ''beams''=1, ''top_k''=1000,
''top_p''=0.1}'
data_files:
- split: train
path: '{''do_sample''=True, ''temperature''=0.95, ''beams''=1, ''top_k''=1000,
''top_p''=0.1}/train-*'
- config_name: '{''do_sample''=True, ''temperature''=0.95, ''beams''=1, ''top_k''=1000,
''top_p''=0.2}'
data_files:
- split: train
path: '{''do_sample''=True, ''temperature''=0.95, ''beams''=1, ''top_k''=1000,
''top_p''=0.2}/train-*'
- config_name: '{''do_sample''=True, ''temperature''=0.95, ''beams''=1, ''top_k''=10000,
''top_p''=0.05}'
data_files:
- split: train
path: '{''do_sample''=True, ''temperature''=0.95, ''beams''=1, ''top_k''=10000,
''top_p''=0.05}/train-*'
- config_name: '{''do_sample''=True, ''temperature''=0.95, ''beams''=1, ''top_k''=10000,
''top_p''=0.1}'
data_files:
- split: train
path: '{''do_sample''=True, ''temperature''=0.95, ''beams''=1, ''top_k''=10000,
''top_p''=0.1}/train-*'
- config_name: '{''do_sample''=True, ''temperature''=0.95, ''beams''=1, ''top_k''=10000,
''top_p''=0.2}'
data_files:
- split: train
path: '{''do_sample''=True, ''temperature''=0.95, ''beams''=1, ''top_k''=10000,
''top_p''=0.2}/train-*'
- config_name: '{''do_sample''=True, ''temperature''=0.95, ''beams''=5, ''top_k''=100,
''top_p''=0.05}'
data_files:
- split: train
path: '{''do_sample''=True, ''temperature''=0.95, ''beams''=5, ''top_k''=100,
''top_p''=0.05}/train-*'
- config_name: '{''do_sample''=True, ''temperature''=0.95, ''beams''=5, ''top_k''=100,
''top_p''=0.1}'
data_files:
- split: train
path: '{''do_sample''=True, ''temperature''=0.95, ''beams''=5, ''top_k''=100,
''top_p''=0.1}/train-*'
- config_name: '{''do_sample''=True, ''temperature''=0.95, ''beams''=5, ''top_k''=100,
''top_p''=0.2}'
data_files:
- split: train
path: '{''do_sample''=True, ''temperature''=0.95, ''beams''=5, ''top_k''=100,
''top_p''=0.2}/train-*'
- config_name: '{''do_sample''=True, ''temperature''=0.95, ''beams''=5, ''top_k''=1000,
''top_p''=0.05}'
data_files:
- split: train
path: '{''do_sample''=True, ''temperature''=0.95, ''beams''=5, ''top_k''=1000,
''top_p''=0.05}/train-*'
- config_name: '{''do_sample''=True, ''temperature''=0.95, ''beams''=5, ''top_k''=1000,
''top_p''=0.1}'
data_files:
- split: train
path: '{''do_sample''=True, ''temperature''=0.95, ''beams''=5, ''top_k''=1000,
''top_p''=0.1}/train-*'
- config_name: '{''do_sample''=True, ''temperature''=0.95, ''beams''=5, ''top_k''=1000,
''top_p''=0.2}'
data_files:
- split: train
path: '{''do_sample''=True, ''temperature''=0.95, ''beams''=5, ''top_k''=1000,
''top_p''=0.2}/train-*'
- config_name: '{''do_sample''=True, ''temperature''=0.95, ''beams''=5, ''top_k''=10000,
''top_p''=0.05}'
data_files:
- split: train
path: '{''do_sample''=True, ''temperature''=0.95, ''beams''=5, ''top_k''=10000,
''top_p''=0.05}/train-*'
- config_name: '{''do_sample''=True, ''temperature''=0.95, ''beams''=5, ''top_k''=10000,
''top_p''=0.1}'
data_files:
- split: train
path: '{''do_sample''=True, ''temperature''=0.95, ''beams''=5, ''top_k''=10000,
''top_p''=0.1}/train-*'
- config_name: '{''do_sample''=True, ''temperature''=0.95, ''beams''=5, ''top_k''=10000,
''top_p''=0.2}'
data_files:
- split: train
path: '{''do_sample''=True, ''temperature''=0.95, ''beams''=5, ''top_k''=10000,
''top_p''=0.2}/train-*'
- config_name: '{''do_sample''=True, ''temperature''=1.0, ''beams''=1, ''top_k''=100,
''top_p''=0.05}'
data_files:
- split: train
path: '{''do_sample''=True, ''temperature''=1.0, ''beams''=1, ''top_k''=100, ''top_p''=0.05}/train-*'
- config_name: '{''do_sample''=True, ''temperature''=1.0, ''beams''=1, ''top_k''=100,
''top_p''=0.1}'
data_files:
- split: train
path: '{''do_sample''=True, ''temperature''=1.0, ''beams''=1, ''top_k''=100, ''top_p''=0.1}/train-*'
- config_name: '{''do_sample''=True, ''temperature''=1.0, ''beams''=1, ''top_k''=100,
''top_p''=0.2}'
data_files:
- split: train
path: '{''do_sample''=True, ''temperature''=1.0, ''beams''=1, ''top_k''=100, ''top_p''=0.2}/train-*'
- config_name: '{''do_sample''=True, ''temperature''=1.0, ''beams''=1, ''top_k''=1000,
''top_p''=0.05}'
data_files:
- split: train
path: '{''do_sample''=True, ''temperature''=1.0, ''beams''=1, ''top_k''=1000,
''top_p''=0.05}/train-*'
- config_name: '{''do_sample''=True, ''temperature''=1.0, ''beams''=1, ''top_k''=1000,
''top_p''=0.1}'
data_files:
- split: train
path: '{''do_sample''=True, ''temperature''=1.0, ''beams''=1, ''top_k''=1000,
''top_p''=0.1}/train-*'
- config_name: '{''do_sample''=True, ''temperature''=1.0, ''beams''=1, ''top_k''=1000,
''top_p''=0.2}'
data_files:
- split: train
path: '{''do_sample''=True, ''temperature''=1.0, ''beams''=1, ''top_k''=1000,
''top_p''=0.2}/train-*'
- config_name: '{''do_sample''=True, ''temperature''=1.0, ''beams''=1, ''top_k''=10000,
''top_p''=0.05}'
data_files:
- split: train
path: '{''do_sample''=True, ''temperature''=1.0, ''beams''=1, ''top_k''=10000,
''top_p''=0.05}/train-*'
- config_name: '{''do_sample''=True, ''temperature''=1.0, ''beams''=1, ''top_k''=10000,
''top_p''=0.1}'
data_files:
- split: train
path: '{''do_sample''=True, ''temperature''=1.0, ''beams''=1, ''top_k''=10000,
''top_p''=0.1}/train-*'
- config_name: '{''do_sample''=True, ''temperature''=1.0, ''beams''=1, ''top_k''=10000,
''top_p''=0.2}'
data_files:
- split: train
path: '{''do_sample''=True, ''temperature''=1.0, ''beams''=1, ''top_k''=10000,
''top_p''=0.2}/train-*'
- config_name: '{''do_sample''=True, ''temperature''=1.0, ''beams''=5, ''top_k''=100,
''top_p''=0.05}'
data_files:
- split: train
path: '{''do_sample''=True, ''temperature''=1.0, ''beams''=5, ''top_k''=100, ''top_p''=0.05}/train-*'
- config_name: '{''do_sample''=True, ''temperature''=1.0, ''beams''=5, ''top_k''=100,
''top_p''=0.1}'
data_files:
- split: train
path: '{''do_sample''=True, ''temperature''=1.0, ''beams''=5, ''top_k''=100, ''top_p''=0.1}/train-*'
- config_name: '{''do_sample''=True, ''temperature''=1.0, ''beams''=5, ''top_k''=100,
''top_p''=0.2}'
data_files:
- split: train
path: '{''do_sample''=True, ''temperature''=1.0, ''beams''=5, ''top_k''=100, ''top_p''=0.2}/train-*'
- config_name: '{''do_sample''=True, ''temperature''=1.0, ''beams''=5, ''top_k''=1000,
''top_p''=0.05}'
data_files:
- split: train
path: '{''do_sample''=True, ''temperature''=1.0, ''beams''=5, ''top_k''=1000,
''top_p''=0.05}/train-*'
- config_name: '{''do_sample''=True, ''temperature''=1.0, ''beams''=5, ''top_k''=1000,
''top_p''=0.1}'
data_files:
- split: train
path: '{''do_sample''=True, ''temperature''=1.0, ''beams''=5, ''top_k''=1000,
''top_p''=0.1}/train-*'
- config_name: '{''do_sample''=True, ''temperature''=1.0, ''beams''=5, ''top_k''=1000,
''top_p''=0.2}'
data_files:
- split: train
path: '{''do_sample''=True, ''temperature''=1.0, ''beams''=5, ''top_k''=1000,
''top_p''=0.2}/train-*'
- config_name: '{''do_sample''=True, ''temperature''=1.0, ''beams''=5, ''top_k''=10000,
''top_p''=0.05}'
data_files:
- split: train
path: '{''do_sample''=True, ''temperature''=1.0, ''beams''=5, ''top_k''=10000,
''top_p''=0.05}/train-*'
- config_name: '{''do_sample''=True, ''temperature''=1.0, ''beams''=5, ''top_k''=10000,
''top_p''=0.1}'
data_files:
- split: train
path: '{''do_sample''=True, ''temperature''=1.0, ''beams''=5, ''top_k''=10000,
''top_p''=0.1}/train-*'
- config_name: '{''do_sample''=True, ''temperature''=1.0, ''beams''=5, ''top_k''=10000,
''top_p''=0.2}'
data_files:
- split: train
path: '{''do_sample''=True, ''temperature''=1.0, ''beams''=5, ''top_k''=10000,
''top_p''=0.2}/train-*'
---
# Dataset Card for "phi-winogrande-results"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 70,070 | [
[
-0.03216552734375,
-0.011260986328125,
0.01392364501953125,
0.01210784912109375,
-0.0246734619140625,
-0.0099945068359375,
0.0193328857421875,
-0.017547607421875,
0.07122802734375,
0.0233001708984375,
-0.0467529296875,
-0.050384521484375,
-0.05078125,
-0.038... |
yxchar/amazon-tlm | 2021-11-04T22:22:29.000Z | [
"region:us"
] | yxchar | null | null | 0 | 66 | 2022-03-02T23:29:22 | Entry not found | 15 | [
[
-0.021392822265625,
-0.01494598388671875,
0.05718994140625,
0.028839111328125,
-0.0350341796875,
0.046539306640625,
0.052490234375,
0.00507354736328125,
0.051361083984375,
0.01702880859375,
-0.052093505859375,
-0.01494598388671875,
-0.06036376953125,
0.03790... |
pile-of-law/eoir_privacy | 2022-07-07T08:44:32.000Z | [
"task_categories:text-classification",
"language_creators:found",
"multilinguality:monolingual",
"language:en",
"license:cc-by-nc-sa-4.0",
"arxiv:2207.00220",
"region:us"
] | pile-of-law | A living legal dataset. | TODO | 9 | 66 | 2022-05-08T22:30:20 | ---
language_creators:
- found
language:
- en
license:
- cc-by-nc-sa-4.0
multilinguality:
- monolingual
pretty_name: eoir_privacy
source_datasets: []
task_categories:
- text-classification
viewer: false
---
# Dataset Card for eoir_privacy
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-instances)
- [Data Splits](#data-instances)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** [Needs More Information]
- **Repository:** [Needs More Information]
- **Paper:** [Needs More Information]
- **Leaderboard:** [Needs More Information]
- **Point of Contact:** [Needs More Information]
### Dataset Summary
This dataset mimics privacy standards for EOIR decisions. It is meant to help learn contextual data sanitization rules to anonymize potentially sensitive contexts in crawled language data.
### Languages
English
## Dataset Structure
### Data Instances
{
"text" : masked paragraph,
"label" : whether to use a pseudonym in filling masks
}
### Data Splits
train 75%, validation 25%
## Dataset Creation
### Curation Rationale
This dataset mimics privacy standards for EOIR decisions. It is meant to help learn contextual data sanitization rules to anonymize potentially sensitive contexts in crawled language data.
### Source Data
#### Initial Data Collection and Normalization
We scrape EOIR. We then filter at the paragraph level and replace any references to respondent, applicant, or names with [MASK] tokens. We then determine if the case used a pseudonym or not.
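As a rough illustration of the masking and pseudonym detection described above (the actual EOIR pipeline and regexes are not released with this card, so every pattern below is an assumption for demonstration only):

```python
import re

# Illustrative role patterns; the real pipeline's regexes are not public.
ROLE_PATTERNS = [r"\bthe\s+respondent\b", r"\bthe\s+applicant\b"]

def mask_paragraph(paragraph, name_pattern=None):
    """Replace role references (and optionally a known name) with [MASK]."""
    masked = paragraph
    for pat in ROLE_PATTERNS:
        masked = re.sub(pat, "[MASK]", masked, flags=re.IGNORECASE)
    if name_pattern:
        masked = re.sub(name_pattern, "[MASK]", masked)
    return masked

def used_pseudonym(case_caption):
    """Rough check for initial-style pseudonyms such as 'Matter of A-B-'."""
    return bool(re.search(r"\b(?:[A-Z]-){2,}", case_caption))
```

For example, `mask_paragraph("The respondent entered the country in 2010.")` yields a `[MASK]`-prefixed paragraph, and `used_pseudonym` flags initial-style captions while passing ordinary names through.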
#### Who are the source language producers?
U.S. Executive Office for Immigration Review
### Annotations
#### Annotation process
Annotations (i.e., pseudonymity decisions) were made by the EOIR court. We use regex to identify if a pseudonym was used to refer to the applicant/respondent.
#### Who are the annotators?
EOIR judges.
### Personal and Sensitive Information
Sensitive contexts may be involved. The courts already make a determination about filtering sensitive data, but sensitive topics may nonetheless be discussed.
## Considerations for Using the Data
### Social Impact of Dataset
This dataset is meant to learn contextual privacy rules to help filter private/sensitive data, but itself encodes biases of the courts from which the data came. We suggest that people look beyond this data for learning more contextual privacy rules.
### Discussion of Biases
Data may be biased due to its origin in U.S. immigration courts.
### Licensing Information
CC-BY-NC
### Citation Information
```
@misc{hendersonkrass2022pileoflaw,
url = {https://arxiv.org/abs/2207.00220},
author = {Henderson, Peter and Krass, Mark S. and Zheng, Lucia and Guha, Neel and Manning, Christopher D. and Jurafsky, Dan and Ho, Daniel E.},
title = {Pile of Law: Learning Responsible Data Filtering from the Law and a 256GB Open-Source Legal Dataset},
publisher = {arXiv},
year = {2022}
}
``` | 3,809 | [
[
-0.021820068359375,
-0.0447998046875,
0.0194854736328125,
0.00592041015625,
-0.0221710205078125,
0.0080718994140625,
-0.00958251953125,
-0.02587890625,
0.0244293212890625,
0.052398681640625,
-0.039093017578125,
-0.077880859375,
-0.041046142578125,
-0.0066833... |
Tevatron/beir | 2022-07-08T00:17:30.000Z | [
"region:us"
] | Tevatron | null | null | 0 | 66 | 2022-06-07T05:59:24 | Entry not found | 15 | [
[
-0.021392822265625,
-0.01494598388671875,
0.05718994140625,
0.028839111328125,
-0.0350341796875,
0.046539306640625,
0.052490234375,
0.00507354736328125,
0.051361083984375,
0.01702880859375,
-0.052093505859375,
-0.01494598388671875,
-0.06036376953125,
0.03790... |
nlpaueb/multi_eurlex | 2022-10-25T10:29:13.000Z | [
"task_categories:text-classification",
"task_ids:multi-label-classification",
"task_ids:topic-classification",
"annotations_creators:found",
"language_creators:found",
"language_creators:machine-generated",
"multilinguality:multilingual",
"size_categories:10K<n<100K",
"source_datasets:extended|multi... | nlpaueb | An non-parallel version of the MultiEURLEX datasets released by Chalkidis et al. (2021).
MultiEURLEX comprises 65k EU laws in 23 official EU languages (some low-ish resource).
Each EU law has been annotated with EUROVOC concepts (labels) by the Publication Office of EU.
As with the English EURLEX, the goal is to predict the relevant EUROVOC concepts (labels);
this is multi-label classification task (given the text, predict multiple labels).
In this version, MultiEURLEX comprises non-parallel documents across 5 languages (English, German, French, Greek,
and Slovakian) including translations from English to the rest of the 4 available languages. | @InProceedings{xenouleas-etal-2022-realistic-multieurlex,
author = {Xenouleas, Stratos
and Tsoukara, Alexia
and Panagiotakis, Giannis
and Chalkidis, Ilias
and Androutsopoulos, Ion},
title = {Realistic Zero-Shot Cross-Lingual Transfer in Legal Topic Classification},
booktitle = {Proceedings of 12th Hellenic Conference on Artificial Intelligence (SETN 2022)},
year = {2022},
publisher = {Association for Computing Machinery},
location = {Corfu, Greece},
} | 4 | 66 | 2022-06-07T10:28:06 | ---
pretty_name: Non-Parallel MultiEURLEX (incl. Translations)
annotations_creators:
- found
language_creators:
- found
- machine-generated
language:
- en
- de
- fr
- el
- sk
license:
- cc-by-sa-4.0
multilinguality:
- multilingual
size_categories:
- 10K<n<100K
source_datasets:
- extended|multi_eurlex
task_categories:
- text-classification
task_ids:
- multi-label-classification
- topic-classification
---
# Dataset Card for "Non-Parallel MultiEURLEX (incl. Translations)"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://github.com/nlpaueb/multi-eurlex/tree/realistic-zero-shot
- **Repository:** https://github.com/nlpaueb/multi-eurlex/tree/realistic-zero-shot
- **Paper:** TBA
- **Leaderboard:** N/A
- **Point of Contact:** [Ilias Chalkidis](mailto:ilias.chalkidis@di.ku.dk)
### Dataset Summary
**Documents**
MultiEURLEX of Chalkidis et al. (2021) comprises 65k EU laws in 23 official EU languages. Each EU law has been annotated with EUROVOC concepts (labels) by the Publication Office of EU. Each EUROVOC label ID is associated with a *label descriptor*, e.g., [60, agri-foodstuffs], [6006, plant product], [1115, fruit]. The descriptors are also available in the 23 languages. Chalkidis et al. (2019) published a monolingual (English) version of this dataset, called EUR-LEX, comprising 57k EU laws with the originally assigned gold labels.
In this new version, dubbed "Non-Parallel MultiEURLEX (incl. Translations)", MultiEURLEX comprises non-parallel documents across 5 languages (English, German, French, Greek, and Slovak), i.e., 11,000 different documents per language, including also translations from English to the rest of the 4 available languages.
### Supported Tasks and Leaderboards
MultiEURLEX can be used for legal topic classification, a multi-label classification task where legal documents need to be assigned concepts (in our case, from EUROVOC) reflecting their topics. Unlike EUR-LEX, however, MultiEURLEX supports labels from three different granularities (EUROVOC levels). More importantly, apart from monolingual (*one-to-one*) experiments, it can be used to study cross-lingual transfer scenarios, including *one-to-many* (systems trained in one language and used in other languages with no training data), and *many-to-one* or *many-to-many* (systems jointly trained in multiple languages and used in one or more other languages).
The dataset is not yet part of an established benchmark.
### Languages
The EU has 24 official languages. When new members join the EU, the set of official languages usually expands, unless a new member's language is already official. MultiEURLEX covers 23 languages from seven language families (Germanic, Romance, Slavic, Uralic, Baltic, Semitic, Hellenic). EU laws are published in all official languages, except Irish, for resource-related reasons (read more at https://europa.eu/european-union/about-eu/eu-languages_en). This wide coverage makes MultiEURLEX a valuable testbed for cross-lingual transfer. All languages use the Latin script, except for Bulgarian (Cyrillic script) and Greek. Several other languages are also spoken in EU countries. The EU is home to over 60 additional indigenous regional or minority languages, e.g., Basque, Catalan, Frisian, Saami, and Yiddish, among others, spoken by approx. 40 million people, but these additional languages are not considered official (in terms of the EU), and EU laws are not translated to them.
This version of MultiEURLEX covers 5 EU languages (English, German, French, Greek, and Slovak). It also includes machine-translated versions of the documents using the EasyNMT framework (https://github.com/UKPLab/EasyNMT) utilizing the many-to-many M2M_100_418M model of Fan et al. (2020) for el-to-en and el-to-de pairs and the OPUS-MT (Tiedemann et al., 2020) models for the rest.
## Dataset Structure
### Data Instances
**Multilingual use of the dataset**
To use the dataset in a multilingual setting, select the 'all_languages' configuration:
```python
from datasets import load_dataset
dataset = load_dataset('nlpaueb/multi_eurlex', 'all_languages')
```
```json
{
"celex_id": "31979D0509",
"text": {"en": "COUNCIL DECISION of 24 May 1979 on financial aid from the Community for the eradication of African swine fever in Spain (79/509/EEC)\nTHE COUNCIL OF THE EUROPEAN COMMUNITIES\nHaving regard to the Treaty establishing the European Economic Community, and in particular Article 43 thereof,\nHaving regard to the proposal from the Commission (1),\nHaving regard to the opinion of the European Parliament (2),\nWhereas the Community should take all appropriate measures to protect itself against the appearance of African swine fever on its territory;\nWhereas to this end the Community has undertaken, and continues to undertake, action designed to contain outbreaks of this type of disease far from its frontiers by helping countries affected to reinforce their preventive measures ; whereas for this purpose Community subsidies have already been granted to Spain;\nWhereas these measures have unquestionably made an effective contribution to the protection of Community livestock, especially through the creation and maintenance of a buffer zone north of the river Ebro;\nWhereas, however, in the opinion of the Spanish authorities themselves, the measures so far implemented must be reinforced if the fundamental objective of eradicating the disease from the entire country is to be achieved;\nWhereas the Spanish authorities have asked the Community to contribute to the expenses necessary for the efficient implementation of a total eradication programme;\nWhereas a favourable response should be given to this request by granting aid to Spain, having regard to the undertaking given by that country to protect the Community against African swine fever and to eliminate completely this disease by the end of a five-year eradication plan;\nWhereas this eradication plan must include certain measures which guarantee the effectiveness of the action taken, and it must be possible to adapt these measures to developments in the situation by means of a procedure establishing close 
cooperation between the Member States and the Commission;\nWhereas it is necessary to keep the Member States regularly informed as to the progress of the action undertaken,",
"en2fr": "DU CONSEIL du 24 mai 1979 concernant l'aide financiere de la Communaute e l'eradication de la peste porcine africaine en Espagne (79/509/CEE)\nLE CONSEIL DES COMMUNAUTAS EUROPENNES ...",
"en2de": "...",
"en2el": "...",
"en2sk": "..."
},
"labels": [
1,
13,
47
]
}
```
**Monolingual use of the dataset**
To use the dataset in a monolingual setting, select the ISO code of one of the 5 supported languages, or a supported translation pair of the form src2trg, where src and trg are ISO language codes (e.g., en2fr for English translated to French). For example:
```python
from datasets import load_dataset
dataset = load_dataset('nlpaueb/multi_eurlex', 'en2fr')
```
```json
{
"celex_id": "31979D0509",
"text": "DU CONSEIL du 24 mai 1979 concernant l'aide financiere de la Communaute e l'eradication de la peste porcine africaine en Espagne (79/509/CEE)\nLE CONSEIL DES COMMUNAUTAS EUROPENNES ...",
"labels": [
1,
13,
47
]
}
```
### Data Fields
**Multilingual use of the dataset**
The following data fields are provided for documents (`train`, `dev`, `test`):
`celex_id`: (**str**) The official ID of the document. The CELEX number is the unique identifier for all publications in both Eur-Lex and CELLAR.\
`text`: (dict[**str**]) A dictionary with the available languages and translation pairs as keys and the full content of each document as values.\
`labels`: (**List[int]**) The relevant EUROVOC concepts (labels).
**Monolingual use of the dataset**
The following data fields are provided for documents (`train`, `dev`, `test`):
`celex_id`: (**str**) The official ID of the document. The CELEX number is the unique identifier for all publications in both Eur-Lex and CELLAR.\
`text`: (**str**) The full content of each document across languages.\
`labels`: (**List[int]**) The relevant EUROVOC concepts (labels).
If you want to use the descriptors of the EUROVOC concepts, similar to [Chalkidis et al. (2020)](https://aclanthology.org/2020.emnlp-main.607/), please download the relevant JSON file [here](https://raw.githubusercontent.com/nlpaueb/multi-eurlex/master/data/eurovoc_descriptors.json).
Then you may load it and use it:
```python
import json
from datasets import load_dataset
# Load the English part of the dataset
dataset = load_dataset('nlpaueb/multi_eurlex', 'en', split='train')
# Load (label_id, descriptor) mapping
with open('./eurovoc_descriptors.json') as jsonl_file:
eurovoc_concepts = json.load(jsonl_file)
# Get feature map info
classlabel = dataset.features["labels"].feature
# Retrieve IDs and descriptors from dataset
for sample in dataset:
print(f'DOCUMENT: {sample["celex_id"]}')
# DOCUMENT: 32006D0213
for label_id in sample['labels']:
print(f'LABEL: id:{label_id}, eurovoc_id: {classlabel.int2str(label_id)}, \
eurovoc_desc:{eurovoc_concepts[classlabel.int2str(label_id)]}')
# LABEL: id: 1, eurovoc_id: '100160', eurovoc_desc: 'industry'
```
### Data Splits
<table>
<tr><td> Language </td> <td> ISO code </td> <td> Member Countries where official </td> <td> EU Speakers [1] </td> <td> Number of Documents [2] </td> </tr>
<tr><td> English </td> <td> <b>en</b> </td> <td> United Kingdom (1973-2020), Ireland (1973), Malta (2004) </td> <td> 13/ 51% </td> <td> 11,000 / 1,000 / 5,000 </td> </tr>
<tr><td> German </td> <td> <b>de</b> </td> <td> Germany (1958), Belgium (1958), Luxembourg (1958) </td> <td> 16/32% </td> <td> 11,000 / 1,000 / 5,000 </td> </tr>
<tr><td> French </td> <td> <b>fr</b> </td> <td> France (1958), Belgium(1958), Luxembourg (1958) </td> <td> 12/26% </td> <td> 11,000 / 1,000 / 5,000 </td> </tr>
<tr><td> Greek </td> <td> <b>el</b> </td> <td> Greece (1981), Cyprus (2008) </td> <td> 3/4% </td> <td> 11,000 / 1,000 / 5,000 </td> </tr>
<tr><td> Slovak </td> <td> <b>sk</b> </td> <td> Slovakia (2004) </td> <td> 1/1% </td> <td> 11,000 / 1,000 / 5,000 </td> </tr>
</table>
[1] Native and Total EU speakers percentage (%) \
[2] Training / Development / Test Splits
## Dataset Creation
### Curation Rationale
The original dataset was curated by Chalkidis et al. (2021).\
The new version of the dataset was curated by Xenouleas et al. (2022).\
The documents have been annotated by the Publications Office of EU (https://publications.europa.eu/en).
### Source Data
#### Initial Data Collection and Normalization
The original data are available at the EUR-LEX portal (https://eur-lex.europa.eu) in unprocessed formats (HTML, XML, RDF). The documents were downloaded from the EUR-LEX portal in HTML. The relevant EUROVOC concepts were downloaded from the SPARQL endpoint of the Publications Office of EU (http://publications.europa.eu/webapi/rdf/sparql).
Chalkidis et al. (2021) stripped HTML mark-up to provide the documents in plain text format and inferred the labels for EUROVOC levels 1--3, by backtracking the EUROVOC hierarchy branches, from the originally assigned labels to their ancestors in levels 1--3, respectively.
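The ancestor-backtracking step can be sketched as follows. The parent map and level numbers below are toy placeholders rather than real EUROVOC data; only the descriptor names are borrowed from the card's earlier example ([60, agri-foodstuffs], [6006, plant product], [1115, fruit]):

```python
# Toy hierarchy: real EUROVOC IDs/levels differ; this only shows the mechanics.
PARENTS = {"fruit": "plant product", "plant product": "agri-foodstuffs"}
LEVELS = {"fruit": 3, "plant product": 2, "agri-foodstuffs": 1}

def ancestor_at_level(concept, parents, levels, target_level):
    """Walk up the hierarchy to target_level; None if the concept sits above it."""
    node = concept
    while node is not None and levels[node] > target_level:
        node = parents.get(node)
    if node is not None and levels[node] == target_level:
        return node
    return None

def labels_at_level(assigned, parents, levels, target_level):
    """Map an original (sparse) label set to the alternative set at target_level."""
    out = {ancestor_at_level(c, parents, levels, target_level) for c in assigned}
    out.discard(None)
    return sorted(out)
```

A level-3 concept like "fruit" thus maps to "plant product" at level 2 and "agri-foodstuffs" at level 1, while concepts already above the target level drop out.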
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
All the documents of the dataset have been annotated by the Publications Office of EU (https://publications.europa.eu/en) with multiple concepts from EUROVOC (http://eurovoc.europa.eu/). EUROVOC has eight levels of concepts. Each document is assigned one or more concepts (labels). If a document is assigned a concept, the ancestors and descendants of that concept are typically not assigned to the same document. The documents were originally annotated with concepts from levels 3 to 8.
Chalkidis et al. (2021) augmented the annotation with three alternative sets of labels per document, replacing each assigned concept by its ancestor from level 1, 2, or 3, respectively.
Thus, Chalkidis et al. (2021) provide four sets of gold labels per document, one for each of the first three levels of the hierarchy, plus the original sparse label assignment. Levels 4 to 8 cannot be used independently, as many documents have gold concepts from the third level; thus many documents would be mislabeled if level 3 were discarded.
#### Who are the annotators?
Publications Office of EU (https://publications.europa.eu/en)
### Personal and Sensitive Information
The dataset contains publicly available EU laws that do not include personal or sensitive information with the exception of trivial information presented by consent, e.g., the names of the current presidents of the European Parliament and European Council, and other administration bodies.
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
Xenouleas et al. (2022)
### Licensing Information
We provide MultiEURLEX with the same licensing as the original EU data (CC-BY-4.0):
© European Union, 1998-2021
The Commission’s document reuse policy is based on Decision 2011/833/EU. Unless otherwise specified, you can re-use the legal documents published in EUR-Lex for commercial or non-commercial purposes.
The copyright for the editorial content of this website, the summaries of EU legislation and the consolidated texts, which is owned by the EU, is licensed under the Creative Commons Attribution 4.0 International licence. This means that you can re-use the content provided you acknowledge the source and indicate any changes you have made.
Source: https://eur-lex.europa.eu/content/legal-notice/legal-notice.html \
Read more: https://eur-lex.europa.eu/content/help/faq/reuse-contents-eurlex.html
### Citation Information
*Stratos Xenouleas, Alexia Tsoukara, Giannis Panagiotakis, Ilias Chalkidis, and Ion Androutsopoulos.*
*Realistic Zero-Shot Cross-Lingual Transfer in Legal Topic Classification.*
*Proceedings of the 12th Hellenic Conference on Artificial Intelligence (SETN 2022). Corfu, Greece. 2022.*
```
@InProceedings{xenouleas-etal-2022-realistic-multieurlex,
author = {Xenouleas, Stratos
and Tsoukara, Alexia
and Panagiotakis, Giannis
and Chalkidis, Ilias
and Androutsopoulos, Ion},
title = {Realistic Zero-Shot Cross-Lingual Transfer in Legal Topic Classification},
booktitle = {Proceedings of 12th Hellenic Conference on Artificial Intelligence (SETN 2022)},
year = {2022},
  publisher = {Association for Computing Machinery},
location = {Corfu, Greece},
}
```
### Contributions
Thanks to [@iliaschalkidis](https://github.com/iliaschalkidis) for adding this dataset. | 16,048 | [
[truncated embeddings vector] |
mrm8488/unnatural-instructions-full | 2022-12-21T21:41:31.000Z | [
"arxiv:2212.09689",
"region:us"
] | mrm8488 | null | null | 7 | 66 | 2022-12-21T20:59:04 | ---
dataset_info:
features:
- name: instruction
dtype: string
- name: instances
list:
- name: instruction_with_input
dtype: string
- name: input
dtype: string
- name: constraints
dtype: string
- name: output
dtype: string
- name: reformulations
list:
- name: instruction
dtype: string
- name: instruction_with_input
dtype: string
- name: input
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 144282712
num_examples: 66010
download_size: 57715606
dataset_size: 144282712
---
# Dataset Card for Unnatural Instructions (Full data)
This info comes from the **Unnatural Instructions GitHub [repo](https://github.com/orhonovich/unnatural-instructions/)**.
Unnatural Instructions is a dataset of instructions automatically generated by a large language model.
See full details in the paper: "[Unnatural Instructions: Tuning Language Models with (Almost) No Human Labor](https://arxiv.org/abs/2212.09689)"
## 🗃️ Content
It contains the full set of 240,670 Unnatural Instructions examples (instruction-input-output triplets). The full set was constructed by expanding the core data with automatically generated paraphrases of the instructions.
## 📄 Format
### Full data
It has the same structure as the [Core Data](https://huggingface.co/datasets/mrm8488/unnatural-instructions-core), but with one additional field, `reformulations`. `reformulations` is an array of JSON objects, each of which corresponds to an automatically generated paraphrase of the given instruction. Each reformulation contains the fields:
- `instruction`: A paraphrase of the original instruction
- `input`: An input for the task described by the `instruction`
- `instruction_with_input`: The paraphrased instruction concatenated with the `input`
- `output`: The output of executing `instruction` with the given `input`
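As a rough illustration of this schema, a full-data record can be navigated as shown below. The example values are invented for demonstration and are not taken from the dataset:

```python
# A minimal, made-up record following the schema described above.
example = {
    "instruction": "Classify the sentiment of the sentence.",
    "instances": [
        {
            "instruction_with_input": (
                "Classify the sentiment of the sentence. Sentence: I loved it."
            ),
            "input": "Sentence: I loved it.",
            "constraints": "The output should be 'positive' or 'negative'.",
            "output": "positive",
        }
    ],
    "reformulations": [
        {
            "instruction": "Decide whether the sentence is positive or negative.",
            "instruction_with_input": (
                "Decide whether the sentence is positive or negative. "
                "Sentence: I loved it."
            ),
            "input": "Sentence: I loved it.",
            "output": "positive",
        }
    ],
}

# Each reformulation pairs a paraphrased instruction with its own
# input/output; here the expected output matches the core instance.
for ref in example["reformulations"]:
    assert ref["output"] == example["instances"][0]["output"]
    print(ref["instruction"])
```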
## 📘 Citation
If you make use of Unnatural Instructions, please cite the following paper:
```
@misc{honovich2022unnatural,
title = {Unnatural Instructions: Tuning Language Models with (Almost) No Human Labor},
author = {Honovich, Or and Scialom, Thomas and Levy, Omer and Schick, Timo},
url = {https://arxiv.org/abs/2212.09689},
publisher = {arXiv},
year={2022}
}
```
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 2,428 | [
[truncated embeddings vector] |
flozi00/conversations | 2023-10-26T17:30:47.000Z | [
"task_categories:conversational",
"task_categories:text-generation",
"language:de",
"region:us"
] | flozi00 | null | null | 9 | 66 | 2023-07-06T13:24:36 | ---
language:
- de
task_categories:
- conversational
- text-generation
dataset_info:
features:
- name: conversations
dtype: string
- name: from
dtype: string
- name: labels
dtype: string
splits:
- name: train
num_bytes: 207776274.42806104
num_examples: 119451
download_size: 94239877
dataset_size: 207776274.42806104
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
This dataset is an uncensored, extensively cleaned, and double-checked merge of several German datasets and subsets.
https://github.com/flozi00/chat-data-experiments/blob/main/chat_combiner.py | 631 | [
[truncated embeddings vector] |
LeoLM/German_Songs | 2023-09-04T21:49:42.000Z | [
"region:us"
] | LeoLM | null | null | 0 | 66 | 2023-09-04T21:49:38 | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: analysis_prompt
dtype: string
- name: topic
dtype: string
- name: song
dtype: string
- name: analysis
dtype: string
splits:
- name: train
num_bytes: 1972513
num_examples: 500
download_size: 804509
dataset_size: 1972513
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "german_songs_gpt4"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 593 | [
[truncated embeddings vector] |
yzhuang/autotree_automl_100000_MagicTelescope_sgosdt_l256_dim10_d3_sd0 | 2023-09-08T16:50:49.000Z | [
"region:us"
] | yzhuang | null | null | 0 | 66 | 2023-09-08T16:50:13 | ---
dataset_info:
features:
- name: id
dtype: int64
- name: input_x
sequence:
sequence: float32
- name: input_y
sequence:
sequence: float32
- name: input_y_clean
sequence:
sequence: float32
- name: rtg
sequence: float64
- name: status
sequence:
sequence: float32
- name: split_threshold
sequence:
sequence: float32
- name: split_dimension
sequence: int64
splits:
- name: train
num_bytes: 2364400000
num_examples: 100000
- name: validation
num_bytes: 236440000
num_examples: 10000
download_size: 1048362149
dataset_size: 2600840000
---
# Dataset Card for "autotree_automl_100000_MagicTelescope_sgosdt_l256_dim10_d3_sd0"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 857 | [
[truncated embeddings vector] |
repllabs/questions_how_to_do_great_work | 2023-09-17T05:43:44.000Z | [
"task_categories:question-answering",
"size_categories:n<1K",
"language:en",
"license:mit",
"region:us"
] | repllabs | null | null | 4 | 66 | 2023-09-17T05:10:55 | ---
configs:
- config_name: default
data_files:
- split: processed
path: data/processed-*
- split: raw
path: data/raw-*
dataset_info:
features:
- name: question
dtype: string
- name: model
dtype: string
splits:
- name: processed
num_bytes: 17391
num_examples: 142
- name: raw
num_bytes: 55307
num_examples: 450
download_size: 28702
dataset_size: 72698
license: mit
task_categories:
- question-answering
language:
- en
size_categories:
- n<1K
---
# Questions Generated by an LLM on 'How To Do Great Work'
http://paulgraham.com/greatwork.html
https://github.com/fastrepl/fastrepl/blob/main/exp/pg_essay_questions.ipynb | 669 | [
[truncated embeddings vector] |
cmalaviya/expertqa | 2023-10-07T05:07:10.000Z | [
"task_categories:question-answering",
"annotations_creators:expert-generated",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:en",
"license:mit",
"arxiv:2309.07852",
"region:us"
] | cmalaviya | null | null | 11 | 66 | 2023-10-03T04:02:09 | ---
configs:
- config_name: main
data_files: r2_compiled_anon_fixed.jsonl
- config_name: lfqa_random
data_files:
- split: train
path: rand_lfqa_train.json
- split: test
path: rand_lfqa_test.json
- split: validation
path: rand_lfqa_val.json
- config_name: lfqa_domain
data_files:
- split: train
path: domain_lfqa_train.json
- split: test
path: domain_lfqa_test.json
- split: validation
path: domain_lfqa_val.json
license: mit
task_categories:
- question-answering
language:
- en
source_datasets:
- original
pretty_name: ExpertQA
annotations_creators:
- expert-generated
size_categories:
- 1K<n<10K
---
# Dataset Card for ExpertQA
## Dataset Description
- **Repository: https://github.com/chaitanyamalaviya/ExpertQA**
- **Paper: https://arxiv.org/pdf/2309.07852**
- **Point of Contact: chaitanyamalaviya@gmail.com**
### Dataset Summary
We provide here the data accompanying the paper: [ExpertQA: Expert-Curated Questions and Attributed Answers](https://arxiv.org/pdf/2309.07852). The ExpertQA dataset contains 2177 examples from 32 different fields.
### Supported Tasks
The `main` data contains 2177 examples that can be used to evaluate new methods for estimating factuality and attribution, while the `lfqa_domain` and `lfqa_random` data can be used to evaluate long-form question answering systems.
## Dataset Creation
### Curation Rationale
ExpertQA was created to evaluate factuality & attribution in language model responses to domain-specific questions, as well as evaluate long-form question answering in domain-specific settings.
### Annotation Process
Questions in ExpertQA were formulated by experts spanning 32 fields. The answers to these questions are expert-verified, model-generated answers to these questions. Each claim-evidence pair in an answer is judged by experts for various properties such as the claim’s informativeness, factuality, citeworthiness, whether the claim is supported by the evidence, and reliability of the evidence source. Further, experts revise the original claims to ensure they are factual and supported by trustworthy sources.
## Dataset Structure
### Data Instances
We provide the main data, with judgements of factuality and attribution, under the `main` subset.
The long-form QA data splits are provided as `lfqa_domain` (domain split) and `lfqa_random` (random split).
Additional files are provided in our [GitHub repo](https://github.com/chaitanyamalaviya/ExpertQA).
### Data Fields
The main data file contains newline-separated json dictionaries with the following fields:
* `question` - Question written by an expert.
* `annotator_id` - Anonymized annotator ID of the author of the question.
* `answers` - Dict mapping model names to an Answer object. The model names can be one of `{gpt4, bing_chat, rr_sphere_gpt4, rr_gs_gpt4, post_hoc_sphere_gpt4, post_hoc_gs_gpt4}`.
* `metadata` - A dictionary with the following fields:
* `question_type` - The question type(s) separated by "|".
* `field` - The field to which the annotator belonged.
* `specific_field` - More specific field name within the broader field.
Each Answer object contains the following fields:
* `answer_string`: The answer string.
* `attribution`: List of evidences for the answer (not linked to specific claims). Note that these are only URLs, the evidence passages are stored in the Claim object -- see below.
* `claims`: List of Claim objects for the answer.
* `revised_answer_string`: Revised answer by annotator.
* `usefulness`: Usefulness of original answer marked by annotator.
* `annotation_time`: Time taken for annotating this answer.
* `annotator_id`: Anonymized annotator ID of the person who validated this answer.
Each Claim object contains the following fields:
* `claim_string`: Original claim string.
* `evidence`: List of evidences for the claim (URL+passage or URL).
* `support`: Attribution marked by annotator.
* `reason_missing_support`: Reason for missing support specified by annotator.
* `informativeness`: Informativeness of claim for the question, marked by annotator.
* `worthiness`: Worthiness of citing claim marked by annotator.
* `correctness`: Factual correctness of claim marked by annotator.
* `reliability`: Reliability of source evidence marked by annotator.
* `revised_claim`: Revised claim by annotator.
* `revised_evidence`: Revised evidence by annotator.
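A minimal sketch of reading one line of the newline-separated JSON format described above. The record here is a fabricated stand-in covering only a subset of the fields, not real ExpertQA data:

```python
import json

# One fabricated line in the main-data format described above.
line = json.dumps({
    "question": "What is an example question?",
    "annotator_id": "anon_1",
    "answers": {
        "gpt4": {
            "answer_string": "An example answer.",
            "attribution": ["https://example.com"],
            "claims": [
                {
                    "claim_string": "An example claim.",
                    "evidence": ["https://example.com"],
                    "support": "Complete",
                    "correctness": "Definitely correct",
                }
            ],
            "revised_answer_string": "An example answer.",
        }
    },
    "metadata": {"question_type": "Advice", "field": "Engineering"},
})

# Each line of the file parses into one question record.
record = json.loads(line)
for model, answer in record["answers"].items():
    for claim in answer["claims"]:
        print(model, claim["claim_string"], claim.get("support"))
```

Iterating over `record["answers"]` in this way visits every model's answer and its claim-level judgements.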
### Citation Information
```
@inproceedings{malaviya23expertqa,
title = {ExpertQA: Expert-Curated Questions and Attributed Answers},
author = {Chaitanya Malaviya and Subin Lee and Sihao Chen and Elizabeth Sieber and Mark Yatskar and Dan Roth},
booktitle = {arXiv},
month = {September},
year = {2023},
url = "https://arxiv.org/abs/2309.07852"
}
```
| 4,767 | [
[truncated embeddings vector] |
vgoldberg/longform_article_summarization | 2023-10-11T19:36:28.000Z | [
"task_categories:summarization",
"size_categories:100K<n<1M",
"language:en",
"license:apache-2.0",
"region:us"
] | vgoldberg | null | null | 1 | 66 | 2023-10-11T17:01:42 | ---
language:
- en
license: apache-2.0
size_categories:
- 100K<n<1M
task_categories:
- summarization
pretty_name: Long-Form Article Summarization Dataset
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: text
dtype: string
- name: summary
dtype: string
splits:
- name: train
num_bytes: 2243293725
num_examples: 105256
download_size: 880664627
dataset_size: 2243293725
---
**Dataset Name:** Long-Form Article Summarization Dataset
**Description:**
The Long-Form Article Summarization Dataset is meticulously curated for the purpose of fine-tuning Natural Language Processing (NLP) models specifically tailored for summarization tasks. It is a rich collection of long-form articles that have been carefully condensed and summarized. The dataset provides a diverse range of topics and writing styles, making it an invaluable resource for researchers and practitioners working on summarization algorithms and applications.
**Data Sources:**
1. **Billsum:** This dataset includes summaries of U.S. congressional and state bills, providing insights into legislative documents.
2. **Scientific Papers:** A collection of scientific papers covering various disciplines, enabling a deep dive into research-oriented content.
3. **Multi_news:** This dataset incorporates news articles, offering a blend of current events and journalistic writing styles.
4. **CCDV/Pubmed-Summarization:** Focused on biomedical literature, this dataset contains summaries from Pubmed articles, offering specialized content related to the field of medicine and life sciences.
**Data Combination:**
The Long-Form Article Summarization Dataset is an amalgamation of the above-mentioned datasets. By combining these diverse sources, the dataset achieves a comprehensive coverage of topics, styles, and domains. This fusion enhances the dataset's versatility and applicability across a wide array of domains, making it a valuable asset for NLP research and development.
**Data Preprocessing:**
To ensure equal representation of unique domains and to manage the scale of the dataset, large datasets were down-sampled. This meticulous preprocessing step guarantees that each domain is adequately represented, promoting a balanced and unbiased training environment for NLP models.
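Per-domain down-sampling of the kind described here can be sketched as follows. The domain names, sizes, and target cap are illustrative assumptions, not the actual preprocessing code:

```python
import random

def downsample_balanced(examples_by_domain, per_domain, seed=0):
    """Keep at most `per_domain` examples from each domain."""
    rng = random.Random(seed)
    balanced = {}
    for domain, examples in examples_by_domain.items():
        if len(examples) > per_domain:
            balanced[domain] = rng.sample(examples, per_domain)
        else:
            balanced[domain] = list(examples)
    return balanced

# Toy corpus: one large domain gets capped, one small domain is kept whole.
corpus = {
    "billsum": [f"bill_{i}" for i in range(1000)],
    "multi_news": [f"news_{i}" for i in range(50)],
}
balanced = downsample_balanced(corpus, per_domain=100)
print({d: len(v) for d, v in balanced.items()})  # {'billsum': 100, 'multi_news': 50}
```

Fixing the random seed keeps the down-sampled selection reproducible across runs.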
**Intended Use:**
This dataset is specifically designed for fine-tuning NLP models focused on summarization tasks. Researchers and developers can utilize this dataset to train and evaluate their algorithms for generating concise and informative summaries from long-form articles. The dataset's diverse origins and careful preprocessing make it an ideal choice for enhancing the summarization capabilities of NLP models.
**Access:**
The Long-Form Article Summarization Dataset is available for research purposes and can be accessed through authorized channels. Researchers and developers interested in using this dataset are encouraged to adhere to ethical guidelines and data usage policies governing the respective sources.
**Citation:**
Researchers and practitioners are expected to cite the original sources of the datasets used in this amalgamation, namely "Billsum," "Scientific Papers," "Multi_news," and "CCDV/Pubmed-Summarization," in addition to acknowledging the creation of the Long-Form Article Summarization Dataset in their publications and research outputs.
This dataset card provides an overview of the Long-Form Article Summarization Dataset, outlining its sources, preprocessing methods, intended use, and access guidelines, ensuring transparent and responsible utilization of the valuable data it encapsulates.
| 3,679 | [
[truncated embeddings vector] |
magnifi/contextual-new-ontology-v2-contextual-lowercase | 2023-10-13T04:48:54.000Z | [
"region:us"
] | magnifi | null | null | 0 | 66 | 2023-10-13T04:48:48 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
dataset_info:
features:
- name: uid
dtype: string
- name: user_text
dtype: string
- name: true_intent
dtype: string
- name: completion
dtype: string
- name: Source
dtype: string
- name: chat_history
dtype: string
- name: contextual
dtype: bool
- name: synthetic
dtype: bool
- name: in_regression_test
dtype: bool
splits:
- name: train
num_bytes: 2431425
num_examples: 4165
- name: validation
num_bytes: 294522
num_examples: 496
download_size: 779849
dataset_size: 2725947
---
# Dataset Card for "contextual-new-ontology-v2-contextual-lowercase"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 899 | [
[truncated embeddings vector] |
chengsp/geo | 2023-10-19T07:16:00.000Z | [
"region:us"
] | chengsp | null | null | 0 | 66 | 2023-10-13T12:11:52 | Entry not found | 15 | [
[truncated embeddings vector] |