id stringlengths 2 115 | lastModified stringlengths 24 24 | tags list | author stringlengths 2 42 ⌀ | description stringlengths 0 6.67k ⌀ | citation stringlengths 0 10.7k ⌀ | likes int64 0 3.66k | downloads int64 0 8.89M | created timestamp[us] | card stringlengths 11 977k | card_len int64 11 977k | embeddings list |
|---|---|---|---|---|---|---|---|---|---|---|---|
argilla/databricks-dolly-15k-curated-en | 2023-10-02T12:32:53.000Z | [
"language:en",
"region:us"
] | argilla | null | null | 16 | 8,886,568 | 2023-05-30T09:54:44 | ---
language:
- en
---
## Guidelines
In this dataset, you will find a collection of records that show a category, an instruction, a context and a response to that instruction. The aim of the project is to correct the instructions, input and responses to make sure they are of the highest quality and that they match t... | 3,002 | [
[
-0.020965576171875,
-0.052703857421875,
0.00861358642578125,
0.019439697265625,
-0.010833740234375,
-0.0167388916015625,
0.002788543701171875,
-0.012451171875,
0.0215301513671875,
0.0662841796875,
-0.0628662109375,
-0.046417236328125,
-0.040679931640625,
0.0... |
truthful_qa | 2023-06-09T14:18:13.000Z | [
"task_categories:multiple-choice",
"task_categories:text-generation",
"task_categories:question-answering",
"task_ids:multiple-choice-qa",
"task_ids:language-modeling",
"task_ids:open-domain-qa",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:monoling... | null | TruthfulQA is a benchmark to measure whether a language model is truthful in
generating answers to questions. The benchmark comprises 817 questions that
span 38 categories, including health, law, finance and politics. Questions are
crafted so that some humans would answer falsely due to a false belief or
misconception.... | @misc{lin2021truthfulqa,
title={TruthfulQA: Measuring How Models Mimic Human Falsehoods},
author={Stephanie Lin and Jacob Hilton and Owain Evans},
year={2021},
eprint={2109.07958},
archivePrefix={arXiv},
primaryClass={cs.CL}
} | 73 | 3,784,469 | 2022-06-08T14:44:06 | ---
annotations_creators:
- expert-generated
language_creators:
- expert-generated
language:
- en
license:
- apache-2.0
multilinguality:
- monolingual
pretty_name: TruthfulQA
size_categories:
- n<1K
source_datasets:
- original
task_categories:
- multiple-choice
- text-generation
- question-answering
task_ids:
- multipl... | 9,365 | [
[
-0.03692626953125,
-0.06939697265625,
0.031646728515625,
-0.006591796875,
0.0038166046142578125,
0.002216339111328125,
-0.0076141357421875,
-0.01611328125,
0.00034236907958984375,
0.041656494140625,
-0.05126953125,
-0.038543701171875,
-0.0297698974609375,
0.... |
cais/mmlu | 2023-10-07T11:24:05.000Z | [
"task_categories:question-answering",
"task_ids:multiple-choice-qa",
"annotations_creators:no-annotation",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:en",
"license:mit",
"arxiv:2009.03300",
"arxiv:2005.... | cais | This is a massive multitask test consisting of multiple-choice questions from various branches of knowledge, covering 57 tasks including elementary mathematics, US history, computer science, law, and more. | @article{hendryckstest2021,
title={Measuring Massive Multitask Language Understanding},
author={Dan Hendrycks and Collin Burns and Steven Basart and Andy Zou and Mantas Mazeika and Dawn Song and Jacob Steinhardt},
journal={Proceedings of the International Conference on Learning Representations (ICLR)}... | 92 | 1,500,832 | 2022-03-02T23:29:22 | ---
annotations_creators:
- no-annotation
language_creators:
- expert-generated
language:
- en
license:
- mit
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- question-answering
task_ids:
- multiple-choice-qa
paperswithcode_id: mmlu
pretty_name: Measuring Massi... | 39,677 | [
[
-0.040008544921875,
-0.0457763671875,
0.0215301513671875,
0.0034198760986328125,
0.004787445068359375,
0.007534027099609375,
-0.0183563232421875,
-0.02288818359375,
0.0162200927734375,
0.01499176025390625,
-0.051300048828125,
-0.049102783203125,
-0.0445251464843... |
glue | 2023-06-01T14:59:59.000Z | [
"task_categories:text-classification",
"task_ids:acceptability-classification",
"task_ids:natural-language-inference",
"task_ids:semantic-similarity-scoring",
"task_ids:sentiment-classification",
"task_ids:text-scoring",
"annotations_creators:other",
"language_creators:other",
"multilinguality:monol... | null | GLUE, the General Language Understanding Evaluation benchmark
(https://gluebenchmark.com/) is a collection of resources for training,
evaluating, and analyzing natural language understanding systems. | @inproceedings{wang2019glue,
title={{GLUE}: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding},
author={Wang, Alex and Singh, Amanpreet and Michael, Julian and Hill, Felix and Levy, Omer and Bowman, Samuel R.},
note={In the Proceedings of ICLR.},
year={2019}
} | 245 | 1,428,634 | 2022-03-02T23:29:22 | ---
annotations_creators:
- other
language_creators:
- other
language:
- en
license:
- cc-by-4.0
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- acceptability-classification
- natural-language-inference
- semantic-similarity-sco... | 27,887 | [
[
-0.0303192138671875,
-0.057159423828125,
0.00943756103515625,
0.01551055908203125,
-0.0060272216796875,
-0.004344940185546875,
-0.01239776611328125,
-0.030914306640625,
0.02679443359375,
0.03179931640625,
-0.058502197265625,
-0.053924560546875,
-0.03598022460937... |
poloclub/diffusiondb | 2023-05-09T19:00:45.000Z | [
"task_categories:text-to-image",
"task_categories:image-to-text",
"task_ids:image-captioning",
"annotations_creators:no-annotation",
"language_creators:found",
"multilinguality:multilingual",
"size_categories:n>1T",
"source_datasets:original",
"language:en",
"license:cc0-1.0",
"stable diffusion"... | poloclub | DiffusionDB is the first large-scale text-to-image prompt dataset. It contains 2
million images generated by Stable Diffusion using prompts and hyperparameters
specified by real users. The unprecedented scale and diversity of this
human-actuated dataset provide exciting research opportunities in understanding
the inter... | @article{wangDiffusionDBLargescalePrompt2022,
title = {{{DiffusionDB}}: {{A}} Large-Scale Prompt Gallery Dataset for Text-to-Image Generative Models},
author = {Wang, Zijie J. and Montoya, Evan and Munechika, David and Yang, Haoyang and Hoover, Benjamin and Chau, Duen Horng},
year = {2022},
journal = {arXiv:221... | 323 | 1,069,360 | 2022-10-25T02:25:28 | ---
layout: default
title: Home
nav_order: 1
has_children: false
annotations_creators:
- no-annotation
language:
- en
language_creators:
- found
license:
- cc0-1.0
multilinguality:
- multilingual
pretty_name: DiffusionDB
size_categories:
- n>1T
source_datasets:
- original
tags:
- stable diffusion
- prompt engineering
... | 24,582 | [
[
-0.049774169921875,
-0.06365966796875,
0.03656005859375,
0.0313720703125,
-0.018280029296875,
-0.00547027587890625,
-0.0006356239318847656,
-0.004180908203125,
0.0277557373046875,
0.037109375,
-0.04730224609375,
-0.07843017578125,
-0.046051025390625,
0.00836... |
squad_v2 | 2023-04-05T13:40:44.000Z | [
"task_categories:question-answering",
"task_ids:open-domain-qa",
"task_ids:extractive-qa",
"annotations_creators:crowdsourced",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:original",
"language:en",
"license:cc-by-sa-4.0",
"arxiv:... | null | combines the 100,000 questions in SQuAD1.1 with over 50,000 unanswerable questions written adversarially by crowdworkers
to look similar to answerable ones. To do well on SQuAD2.0, systems must not only answer questions when possible, but
also determine when no answer is supported by the paragraph and abstain from an... | @article{2016arXiv160605250R,
author = {{Rajpurkar}, Pranav and {Zhang}, Jian and {Lopyrev},
Konstantin and {Liang}, Percy},
title = "{SQuAD: 100,000+ Questions for Machine Comprehension of Text}",
journal = {arXiv e-prints},
year = 2016,
eid = {arXiv:1606.05250}... | 90 | 1,054,465 | 2022-03-02T23:29:22 | ---
pretty_name: SQuAD2.0
annotations_creators:
- crowdsourced
language_creators:
- crowdsourced
language:
- en
license:
- cc-by-sa-4.0
multilinguality:
- monolingual
size_categories:
- 100K<n<1M
source_datasets:
- original
task_categories:
- question-answering
task_ids:
- open-domain-qa
- extractive-qa
paperswithcode_... | 8,016 | [
[
-0.046173095703125,
-0.043670654296875,
0.005634307861328125,
0.0204010009765625,
-0.00832366943359375,
0.01317596435546875,
-0.0222625732421875,
-0.034942626953125,
0.041229248046875,
0.025421142578125,
-0.0804443359375,
-0.059173583984375,
-0.034088134765625,
... |
super_glue | 2023-04-05T13:41:04.000Z | [
"task_categories:text-classification",
"task_categories:token-classification",
"task_categories:question-answering",
"task_ids:natural-language-inference",
"task_ids:word-sense-disambiguation",
"task_ids:coreference-resolution",
"task_ids:extractive-qa",
"annotations_creators:expert-generated",
"lan... | null | SuperGLUE (https://super.gluebenchmark.com/) is a new benchmark styled after
GLUE with a new set of more difficult language understanding tasks, improved
resources, and a new public leaderboard. | @article{wang2019superglue,
title={SuperGLUE: A Stickier Benchmark for General-Purpose Language Understanding Systems},
author={Wang, Alex and Pruksachatkun, Yada and Nangia, Nikita and Singh, Amanpreet and Michael, Julian and Hill, Felix and Levy, Omer and Bowman, Samuel R},
journal={arXiv preprint arXiv:1905.00... | 117 | 824,558 | 2022-03-02T23:29:22 | ---
annotations_creators:
- expert-generated
language_creators:
- other
language:
- en
license:
- unknown
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- extended|other
task_categories:
- text-classification
- token-classification
- question-answering
task_ids:
- natural-language-inferen... | 14,813 | [
[
-0.042449951171875,
-0.04705810546875,
0.0084228515625,
-0.0011720657348632812,
-0.00946044921875,
-0.0071258544921875,
-0.0096588134765625,
-0.031890869140625,
0.036895751953125,
0.0310211181640625,
-0.05316162109375,
-0.056976318359375,
-0.028045654296875,
... |
lighteval/mmlu | 2023-06-09T16:36:19.000Z | [
"task_categories:question-answering",
"task_ids:multiple-choice-qa",
"annotations_creators:no-annotation",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:en",
"license:mit",
"arxiv:2009.03300",
"arxiv:2005.... | lighteval | This is a massive multitask test consisting of multiple-choice questions from various branches of knowledge, covering 57 tasks including elementary mathematics, US history, computer science, law, and more. | @article{hendryckstest2021,
title={Measuring Massive Multitask Language Understanding},
author={Dan Hendrycks and Collin Burns and Steven Basart and Andy Zou and Mantas Mazeika and Dawn Song and Jacob Steinhardt},
journal={Proceedings of the International Conference on Learning Representations (ICLR)}... | 6 | 578,067 | 2023-05-16T09:39:28 | ---
annotations_creators:
- no-annotation
language_creators:
- expert-generated
language:
- en
license:
- mit
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- question-answering
task_ids:
- multiple-choice-qa
paperswithcode_id: mmlu
pretty_name: Measuring Massi... | 39,677 | [
[
-0.03997802734375,
-0.0457763671875,
0.0215301513671875,
0.00342559814453125,
0.004791259765625,
0.00754547119140625,
-0.018341064453125,
-0.022857666015625,
0.0162200927734375,
0.0149993896484375,
-0.051300048828125,
-0.049102783203125,
-0.044525146484375,
... |
wikitext | 2023-06-20T07:52:10.000Z | [
"task_categories:text-generation",
"task_categories:fill-mask",
"task_ids:language-modeling",
"task_ids:masked-language-modeling",
"annotations_creators:no-annotation",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:1M<n<10M",
"source_datasets:original",
"languag... | null | The WikiText language modeling dataset is a collection of over 100 million tokens extracted from the set of verified
Good and Featured articles on Wikipedia. The dataset is available under the Creative Commons Attribution-ShareAlike
License. | @misc{merity2016pointer,
title={Pointer Sentinel Mixture Models},
author={Stephen Merity and Caiming Xiong and James Bradbury and Richard Socher},
year={2016},
eprint={1609.07843},
archivePrefix={arXiv},
primaryClass={cs.CL}
} | 198 | 575,928 | 2022-03-02T23:29:22 | ---
annotations_creators:
- no-annotation
language_creators:
- crowdsourced
language:
- en
license:
- cc-by-sa-3.0
- gfdl
multilinguality:
- monolingual
paperswithcode_id: wikitext-2
pretty_name: WikiText
size_categories:
- 1M<n<10M
source_datasets:
- original
task_categories:
- text-generation
- fill-mask
task_ids:
- ... | 9,573 | [
[
-0.044677734375,
-0.038116455078125,
0.01137542724609375,
0.0172271728515625,
-0.010040283203125,
-0.003154754638671875,
-0.020294189453125,
-0.0443115234375,
0.0430908203125,
0.033355712890625,
-0.0572509765625,
-0.055877685546875,
-0.039825439453125,
0.005... |
HuggingFaceM4/COCO | 2022-12-15T15:51:03.000Z | [
"license:cc-by-4.0",
"arxiv:1405.0312",
"region:us"
] | HuggingFaceM4 | MS COCO is a large-scale object detection, segmentation, and captioning dataset.
COCO has several features: Object segmentation, Recognition in context, Superpixel stuff segmentation, 330K images (>200K labeled), 1.5 million object instances, 80 object categories, 91 stuff categories, 5 captions per image, 250,000 peop... | @article{DBLP:journals/corr/LinMBHPRDZ14,
author = {Tsung{-}Yi Lin and
Michael Maire and
Serge J. Belongie and
Lubomir D. Bourdev and
Ross B. Girshick and
James Hays and
Pietro Perona and
Deva Ramanan and
... | 8 | 438,316 | 2022-12-14T21:13:57 | ---
license: cc-by-4.0
---
# Dataset Card for [Dataset Name]
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dat... | 3,660 | [
[
-0.035552978515625,
-0.047760009765625,
-0.005481719970703125,
0.030731201171875,
-0.0199737548828125,
0.0233612060546875,
-0.023590087890625,
-0.040679931640625,
0.038604736328125,
0.042694091796875,
-0.04901123046875,
-0.0771484375,
-0.05120849609375,
0.01... |
ai2_arc | 2023-04-05T09:11:00.000Z | [
"task_categories:question-answering",
"task_ids:open-domain-qa",
"task_ids:multiple-choice-qa",
"annotations_creators:found",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:en",
"license:cc-by-sa-4.0",
"region:us"
] | null | A new dataset of 7,787 genuine grade-school level, multiple-choice science questions, assembled to encourage research in
advanced question-answering. The dataset is partitioned into a Challenge Set and an Easy Set, where the former contains
only questions answered incorrectly by both a retrieval-based algorithm and a... | @article{allenai:arc,
author = {Peter Clark and Isaac Cowhey and Oren Etzioni and Tushar Khot and
Ashish Sabharwal and Carissa Schoenick and Oyvind Tafjord},
title = {Think you have Solved Question Answering? Try ARC, the AI2 Reasoning Challenge},
journal = {arXiv:1803.05... | 30 | 377,705 | 2022-03-02T23:29:22 | ---
annotations_creators:
- found
language_creators:
- found
language:
- en
language_bcp47:
- en-US
license:
- cc-by-sa-4.0
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- question-answering
task_ids:
- open-domain-qa
- multiple-choice-qa
paperswithcode_id: null... | 8,665 | [
[
-0.047698974609375,
-0.03173828125,
0.0036468505859375,
0.005615234375,
-0.00243377685546875,
0.00437164306640625,
-0.01514434814453125,
-0.038116455078125,
0.0506591796875,
0.03680419921875,
-0.050140380859375,
-0.0498046875,
-0.034820556640625,
0.009208679... |
imdb | 2023-04-05T10:07:38.000Z | [
"task_categories:text-classification",
"task_ids:sentiment-classification",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:en",
"license:other",
"region:us"
] | null | Large Movie Review Dataset.
This is a dataset for binary sentiment classification containing substantially more data than previous benchmark datasets. We provide a set of 25,000 highly polar movie reviews for training, and 25,000 for testing. There is additional unlabeled data for use as well.\ | @InProceedings{maas-EtAl:2011:ACL-HLT2011,
author = {Maas, Andrew L. and Daly, Raymond E. and Pham, Peter T. and Huang, Dan and Ng, Andrew Y. and Potts, Christopher},
title = {Learning Word Vectors for Sentiment Analysis},
booktitle = {Proceedings of the 49th Annual Meeting of the Association for... | 122 | 302,346 | 2022-03-02T23:29:22 | ---
pretty_name: IMDB
annotations_creators:
- expert-generated
language_creators:
- expert-generated
language:
- en
license:
- other
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- sentiment-classification
paperswithcode_id: imd... | 7,590 | [
[
-0.05438232421875,
-0.0360107421875,
0.004360198974609375,
0.009490966796875,
-0.02685546875,
0.0036373138427734375,
-0.02545166015625,
-0.029388427734375,
0.053375244140625,
0.0310516357421875,
-0.058746337890625,
-0.06982421875,
-0.0465087890625,
0.0037574... |
lavita/medical-qa-shared-task-v1-toy | 2023-07-20T00:29:06.000Z | [
"region:us"
] | lavita | null | null | 2 | 299,949 | 2023-07-20T00:28:51 | ---
dataset_info:
features:
- name: id
dtype: int64
- name: ending0
dtype: string
- name: ending1
dtype: string
- name: ending2
dtype: string
- name: ending3
dtype: string
- name: ending4
dtype: string
- name: label
dtype: int64
- name: sent1
dtype: string
- name: sen... | 773 | [
[
-0.0183868408203125,
-0.014862060546875,
0.0228729248046875,
0.01177978515625,
-0.025665283203125,
-0.0005898475646972656,
0.038909912109375,
-0.01467132568359375,
0.06988525390625,
0.0239715576171875,
-0.078369140625,
-0.04364013671875,
-0.037017822265625,
... |
hf-internal-testing/fixtures_image_utils | 2021-12-07T08:06:37.000Z | [
"region:us"
] | hf-internal-testing | \\n | \\n | 0 | 296,722 | 2022-03-02T23:29:22 | This dataset includes 5 images for testing.
It includes 4 different kinds of images (RGBA, LA, L, Rotated Image) as well as an original cats image of the COCO dataset.
This dataset is used for testing in the HuggingFace Transformers library. You can see [here](https://github.com/huggingface/transformers/search?q=fixt... | 365 | [
[
-0.069580078125,
-0.04052734375,
-0.002674102783203125,
0.055511474609375,
-0.017547607421875,
0.006866455078125,
0.023162841796875,
-0.0244598388671875,
0.0380859375,
0.05450439453125,
-0.055755615234375,
-0.02972412109375,
-0.00960540771484375,
0.037902832... |
lavita/medical-qa-shared-task-v1-toy-eval | 2023-07-27T01:09:59.000Z | [
"region:us"
] | lavita | null | null | 0 | 289,586 | 2023-07-27T01:09:50 | ---
dataset_info:
features:
- name: id
dtype: int64
- name: ending0
dtype: string
- name: ending1
dtype: string
- name: ending2
dtype: string
- name: ending3
dtype: string
- name: ending4
dtype: string
- name: label
dtype: int64
- name: sent1
dtype: string
- name: sen... | 685 | [
[
-0.0184173583984375,
-0.0268096923828125,
0.027069091796875,
0.0108184814453125,
-0.0211639404296875,
0.01062774658203125,
0.034393310546875,
-0.00778961181640625,
0.06640625,
0.024383544921875,
-0.0711669921875,
-0.046844482421875,
-0.034515380859375,
-0.01... |
trec | 2023-04-05T13:42:29.000Z | [
"task_categories:text-classification",
"task_ids:multi-class-classification",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:en",
"license:unknown",
"region:us"
] | null | The Text REtrieval Conference (TREC) Question Classification dataset contains 5500 labeled questions in training set and another 500 for test set.
The dataset has 6 coarse class labels and 50 fine class labels. Average length of each sentence is 10, vocabulary size of 8700.
Data are collected from four sources: 4,500... | @inproceedings{li-roth-2002-learning,
title = "Learning Question Classifiers",
author = "Li, Xin and
Roth, Dan",
booktitle = "{COLING} 2002: The 19th International Conference on Computational Linguistics",
year = "2002",
url = "https://www.aclweb.org/anthology/C02-1150",
}
@inproceedings{hovy... | 30 | 261,438 | 2022-03-02T23:29:22 | ---
annotations_creators:
- expert-generated
language_creators:
- expert-generated
language:
- en
license:
- unknown
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- multi-class-classification
paperswithcode_id: trecqa
pretty_name:... | 10,630 | [
[
-0.03961181640625,
-0.055694580078125,
0.012176513671875,
0.01297760009765625,
-0.01629638671875,
0.005828857421875,
-0.022216796875,
-0.0260009765625,
0.0526123046875,
0.030029296875,
-0.042266845703125,
-0.06488037109375,
-0.036956787109375,
0.021484375,
... |
piqa | 2023-01-25T14:42:33.000Z | [
"task_categories:question-answering",
"task_ids:multiple-choice-qa",
"annotations_creators:crowdsourced",
"language_creators:crowdsourced",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:en",
"license:unknown",
"arxiv... | null | To apply eyeshadow without a brush, should I use a cotton swab or a toothpick?
Questions requiring this kind of physical commonsense pose a challenge to state-of-the-art
natural language understanding systems. The PIQA dataset introduces the task of physical commonsense reasoning
and a corresponding benchmark dataset P... | @inproceedings{Bisk2020,
author = {Yonatan Bisk and Rowan Zellers and
Ronan Le Bras and Jianfeng Gao
and Yejin Choi},
title = {PIQA: Reasoning about Physical Commonsense in
Natural Language},
booktitle = {Thirty-Fourth AAAI Conference on
Artificial Intelligence},
... | 45 | 257,379 | 2022-03-02T23:29:22 | ---
annotations_creators:
- crowdsourced
language_creators:
- crowdsourced
- found
language:
- en
license:
- unknown
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- question-answering
task_ids:
- multiple-choice-qa
paperswithcode_id: piqa
pretty_name: 'Physica... | 8,413 | [
[
-0.04669189453125,
-0.063720703125,
0.0212554931640625,
0.01204681396484375,
0.000006318092346191406,
-0.016876220703125,
-0.00446319580078125,
-0.02978515625,
0.007289886474609375,
0.04437255859375,
-0.055267333984375,
-0.03204345703125,
-0.0321044921875,
0... |
winogrande | 2023-06-05T11:49:56.000Z | [
"language:en",
"region:us"
] | null | WinoGrande is a new collection of 44k problems, inspired by Winograd Schema Challenge (Levesque, Davis, and Morgenstern
2011), but adjusted to improve the scale and robustness against the dataset-specific bias. Formulated as a
fill-in-a-blank task with binary options, the goal is to choose the right option for a given... | @InProceedings{ai2:winogrande,
title = {WinoGrande: An Adversarial Winograd Schema Challenge at Scale},
authors={Keisuke, Sakaguchi and Ronan, Le Bras and Chandra, Bhagavatula and Yejin, Choi
},
year={2019}
} | 25 | 246,291 | 2022-03-02T23:29:22 | ---
language:
- en
paperswithcode_id: winogrande
pretty_name: WinoGrande
dataset_info:
- config_name: winogrande_xs
features:
- name: sentence
dtype: string
- name: option1
dtype: string
- name: option2
dtype: string
- name: answer
dtype: string
splits:
- name: train
num_bytes: 20704
... | 9,967 | [
[
-0.037628173828125,
-0.033905029296875,
0.01197052001953125,
-0.00173187255859375,
-0.0121002197265625,
0.00167083740234375,
-0.025390625,
-0.03729248046875,
0.033660888671875,
0.0316162109375,
-0.046966552734375,
-0.055084228515625,
-0.048736572265625,
-0.0... |
wikiann | 2023-06-01T14:59:59.000Z | [
"task_categories:token-classification",
"task_ids:named-entity-recognition",
"annotations_creators:machine-generated",
"language_creators:crowdsourced",
"multilinguality:multilingual",
"size_categories:n<1K",
"source_datasets:original",
"language:ace",
"language:af",
"language:als",
"language:am... | null | WikiANN (sometimes called PAN-X) is a multilingual named entity recognition dataset consisting of Wikipedia articles annotated with LOC (location), PER (person), and ORG (organisation) tags in the IOB2 format. This version corresponds to the balanced train, dev, and test splits of Rahimi et al. (2019), which supports 1... | @inproceedings{pan-etal-2017-cross,
title = "Cross-lingual Name Tagging and Linking for 282 Languages",
author = "Pan, Xiaoman and
Zhang, Boliang and
May, Jonathan and
Nothman, Joel and
Knight, Kevin and
Ji, Heng",
booktitle = "Proceedings of the 55th Annual Meeting of the... | 59 | 226,481 | 2022-03-02T23:29:22 | ---
annotations_creators:
- machine-generated
language_creators:
- crowdsourced
language:
- ace
- af
- als
- am
- an
- ang
- ar
- arc
- arz
- as
- ast
- ay
- az
- ba
- bar
- be
- bg
- bh
- bn
- bo
- br
- bs
- ca
- cbk
- cdo
- ce
- ceb
- ckb
- co
- crh
- cs
- csb
- cv
- cy
- da
- de
- diq
- dv
- el
- eml
- en
- eo
- es
... | 130,677 | [
[
-0.05950927734375,
-0.0160675048828125,
-0.0011758804321289062,
0.01477813720703125,
-0.0036258697509765625,
-0.0045166015625,
-0.011016845703125,
-0.021484375,
0.053924560546875,
0.035125732421875,
-0.0498046875,
-0.0423583984375,
-0.0517578125,
0.015640258... |
openbookqa | 2023-04-05T13:36:14.000Z | [
"task_categories:question-answering",
"task_ids:open-domain-qa",
"annotations_creators:crowdsourced",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:en",
"license:unknow... | null | OpenBookQA aims to promote research in advanced question-answering, probing a deeper understanding of both the topic
(with salient facts summarized as an open book, also provided with the dataset) and the language it is expressed in. In
particular, it contains questions that require multi-step reasoning, use of additio... | @inproceedings{OpenBookQA2018,
title={Can a Suit of Armor Conduct Electricity? A New Dataset for Open Book Question Answering},
author={Todor Mihaylov and Peter Clark and Tushar Khot and Ashish Sabharwal},
booktitle={EMNLP},
year={2018}
} | 37 | 190,952 | 2022-03-02T23:29:22 | ---
annotations_creators:
- crowdsourced
- expert-generated
language_creators:
- expert-generated
language:
- en
license:
- unknown
multilinguality:
- monolingual
pretty_name: OpenBookQA
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- question-answering
task_ids:
- open-domain-qa
paperswithco... | 8,690 | [
[
-0.046295166015625,
-0.048126220703125,
0.004180908203125,
-0.013397216796875,
-0.0037078857421875,
-0.01308441162109375,
-0.01366424560546875,
-0.0223846435546875,
0.0272979736328125,
0.042510986328125,
-0.052947998046875,
-0.055023193359375,
-0.012626647949218... |
squad | 2023-04-05T13:40:31.000Z | [
"task_categories:question-answering",
"task_ids:extractive-qa",
"annotations_creators:crowdsourced",
"language_creators:crowdsourced",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:extended|wikipedia",
"language:en",
"license:cc-by-4.0",
... | null | Stanford Question Answering Dataset (SQuAD) is a reading comprehension dataset, consisting of questions posed by crowdworkers on a set of Wikipedia articles, where the answer to every question is a segment of text, or span, from the corresponding reading passage, or the question might be unanswerable. | @article{2016arXiv160605250R,
author = {{Rajpurkar}, Pranav and {Zhang}, Jian and {Lopyrev},
Konstantin and {Liang}, Percy},
title = "{SQuAD: 100,000+ Questions for Machine Comprehension of Text}",
journal = {arXiv e-prints},
year = 2016,
eid = {arXiv:1606.05250}... | 142 | 179,684 | 2022-03-02T23:29:22 | ---
pretty_name: SQuAD
annotations_creators:
- crowdsourced
language_creators:
- crowdsourced
- found
language:
- en
license:
- cc-by-4.0
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- extended|wikipedia
task_categories:
- question-answering
task_ids:
- extractive-qa
paperswithcode_id: ... | 7,665 | [
[
-0.047210693359375,
-0.04620361328125,
0.00705718994140625,
0.01451873779296875,
-0.00772857666015625,
0.00609588623046875,
-0.0211639404296875,
-0.0267333984375,
0.040252685546875,
0.0289459228515625,
-0.07452392578125,
-0.06414794921875,
-0.029144287109375,
... |
lukaemon/mmlu | 2023-02-02T02:38:44.000Z | [
"region:us"
] | lukaemon | Measuring Massive Multitask Language Understanding by Dan Hendrycks, Collin Burns, Steven Basart, Andy Zou, Mantas Mazeika, Dawn Song, and Jacob Steinhardt (ICLR 2021). | @article{hendryckstest2021,
title={Measuring Massive Multitask Language Understanding},
author={Dan Hendrycks and Collin Burns and Steven Basart and Andy Zou and Mantas Mazeika and Dawn Song and Jacob Steinhardt},
journal={Proceedings of the International Conference on Learning Representations (ICLR)},
year={20... | 24 | 179,320 | 2023-02-02T00:42:27 | ---
dataset_info:
- config_name: high_school_european_history
features:
- name: input
dtype: string
- name: A
dtype: string
- name: B
dtype: string
- name: C
dtype: string
- name: D
dtype: string
- name: target
dtype: string
splits:
- name: test
num_bytes: 268045
num_ex... | 27,220 | [
[
-0.026031494140625,
-0.05133056640625,
0.0282440185546875,
0.0261077880859375,
-0.00916290283203125,
0.0102691650390625,
-0.040313720703125,
-0.0236358642578125,
0.0203399658203125,
0.003849029541015625,
-0.064697265625,
-0.030303955078125,
-0.0499267578125,
... |
common_voice | 2023-06-27T07:46:51.000Z | [
"task_categories:automatic-speech-recognition",
"annotations_creators:crowdsourced",
"language_creators:crowdsourced",
"multilinguality:multilingual",
"size_categories:100K<n<1M",
"size_categories:10K<n<100K",
"size_categories:1K<n<10K",
"size_categories:n<1K",
"source_datasets:extended|common_voice... | null | Common Voice is Mozilla's initiative to help teach machines how real people speak.
The dataset currently consists of 7,335 validated hours of speech in 60 languages, but we’re always adding more voices and languages. | @inproceedings{commonvoice:2020,
author = {Ardila, R. and Branson, M. and Davis, K. and Henretty, M. and Kohler, M. and Meyer, J. and Morais, R. and Saunders, L. and Tyers, F. M. and Weber, G.},
title = {Common Voice: A Massively-Multilingual Speech Corpus},
booktitle = {Proceedings of the 12th Conference on Lang... | 104 | 172,188 | 2022-03-02T23:29:22 | ---
pretty_name: Common Voice
annotations_creators:
- crowdsourced
language_creators:
- crowdsourced
language:
- ab
- ar
- as
- br
- ca
- cnh
- cs
- cv
- cy
- de
- dv
- el
- en
- eo
- es
- et
- eu
- fa
- fi
- fr
- fy
- ga
- hi
- hsb
- hu
- ia
- id
- it
- ja
- ka
- kab
- ky
- lg
- lt
- lv
- mn
- mt
- nl
- or
- pa
- pl
-... | 62,382 | [
[
-0.045806884765625,
-0.046051025390625,
0.00273895263671875,
0.0248565673828125,
-0.01169586181640625,
-0.0015735626220703125,
-0.036895751953125,
-0.022796630859375,
0.033599853515625,
0.04180908203125,
-0.06036376953125,
-0.06939697265625,
-0.0310516357421875,... |
sciq | 2023-06-06T07:16:34.000Z | [
"task_categories:question-answering",
"task_ids:closed-domain-qa",
"annotations_creators:no-annotation",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:en",
"license:cc-by-nc-3.0",
"region:us"
] | null | The SciQ dataset contains 13,679 crowdsourced science exam questions about Physics, Chemistry and Biology, among others. The questions are in multiple-choice format with 4 answer options each. For the majority of the questions, an additional paragraph with supporting evidence for the correct answer is provided. | @inproceedings{SciQ,
title={Crowdsourcing Multiple Choice Science Questions},
author={Johannes Welbl and Nelson F. Liu and Matt Gardner},
year={2017},
journal={arXiv:1707.06209v1}
} | 64 | 141,761 | 2022-03-02T23:29:22 | ---
annotations_creators:
- no-annotation
language_creators:
- crowdsourced
language:
- en
license:
- cc-by-nc-3.0
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- question-answering
task_ids:
- closed-domain-qa
paperswithcode_id: sciq
pretty_name: SciQ
dataset... | 6,842 | [
[
-0.036895751953125,
-0.0276641845703125,
0.01209259033203125,
0.01605224609375,
-0.0145263671875,
0.0038738250732421875,
-0.01395416259765625,
-0.02886962890625,
0.05419921875,
0.0306243896484375,
-0.060882568359375,
-0.055328369140625,
-0.033477783203125,
0... |
gsm8k | 2022-11-18T22:06:26.000Z | [
"task_categories:text2text-generation",
"annotations_creators:crowdsourced",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:en",
"license:mit",
"math-word-problems",
"arxiv:2110.14168",
"region:us"
] | null | GSM8K (Grade School Math 8K) is a dataset of 8.5K high quality
linguistically diverse grade school math word problems. The
dataset was created to support the task of question answering
on basic mathematical problems that require multi-step reasoning. | @misc{cobbe2021training,
title={Training Verifiers to Solve Math Word Problems},
author={Karl Cobbe and Vineet Kosaraju and Mohammad Bavarian and Jacob Hilton and Reiichiro Nakano and Christopher Hesse and John Schulman},
year={2021},
eprint={2110.14168},
archivePrefix={arXiv},
prima... | 99 | 132,083 | 2022-04-12T10:22:10 | ---
annotations_creators:
- crowdsourced
language_creators:
- crowdsourced
language:
- en
license:
- mit
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- text2text-generation
task_ids: []
paperswithcode_id: gsm8k
pretty_name: Grade School Math 8K
tags:
- math-wor... | 6,792 | [
[
-0.0261688232421875,
-0.053924560546875,
0.02191162109375,
0.016632080078125,
-0.01392364501953125,
0.003887176513671875,
-0.00952911376953125,
-0.00879669189453125,
0.023345947265625,
0.03778076171875,
-0.055023193359375,
-0.056396484375,
-0.04644775390625,
... |
argilla/oasst_response_quality | 2023-08-09T11:27:12.000Z | [
"size_categories:1K<n<10K",
"rlfh",
"argilla",
"human-feedback",
"region:us"
] | argilla | null | null | 0 | 120,028 | 2023-08-02T11:36:31 | ---
size_categories: 1K<n<10K
tags:
- rlfh
- argilla
- human-feedback
---
# Dataset Card for oasst_response_quality
This dataset has been created with [Argilla](https://docs.argilla.io).
As shown in the sections below, this dataset can be loaded into Argilla as explained in [Load with Argilla](#load-with-argilla), o... | 12,169 | [
[
-0.043426513671875,
-0.06396484375,
0.01543426513671875,
0.023529052734375,
-0.01342010498046875,
-0.0199737548828125,
0.00626373291015625,
-0.05450439453125,
0.07196044921875,
0.053375244140625,
-0.04754638671875,
-0.04229736328125,
-0.0428466796875,
0.0209... |
facebook/flores | 2022-08-09T20:27:39.000Z | [
"annotations_creators:found",
"language_creators:expert-generated",
"multilinguality:multilingual",
"multilinguality:translation",
"size_categories:unknown",
"source_datasets:extended|flores",
"language:ace",
"language:acm",
"language:acq",
"language:aeb",
"language:af",
"language:ajp",
"lan... | facebook | The creation of FLORES-200 doubles the existing language coverage of FLORES-101.
Given the nature of the new languages, which have less standardization and require
more specialized professional translations, the verification process became more complex.
This required modifications to the translation workflow. FLORES... | @article{nllb2022,
author = {NLLB Team, Marta R. Costa-jussà, James Cross, Onur Çelebi, Maha Elbayad, Kenneth Heafield, Kevin Heffernan, Elahe Kalbassi, Janice Lam, Daniel Licht, Jean Maillard, Anna Sun, Skyler Wang, Guillaume Wenzek, Al Youngblood, Bapi Akula, Loic Barrault, Gabriel Mejia Gonzalez, Prangthip Han... | 26 | 114,819 | 2022-07-13T21:11:38 | ---
language:
- ace
- acm
- acq
- aeb
- af
- ajp
- ak
- als
- am
- apc
- ar
- ars
- ary
- arz
- as
- ast
- awa
- ayr
- azb
- azj
- ba
- bm
- ban
- be
- bem
- bn
- bho
- bjn
- bo
- bs
- bug
- bg
- ca
- ceb
- cs
- cjk
- ckb
- crh
- cy
- da
- de
- dik
- dyu
- dz
- el
- en
- eo
- et
- eu
- ee
- fo
- fj
- fi
- fon
- fr
- fu... | 11,199 | [
[
-0.022125244140625,
-0.0413818359375,
0.032958984375,
0.028533935546875,
-0.0159912109375,
-0.0015211105346679688,
-0.03216552734375,
-0.0234375,
0.036041259765625,
0.0171051025390625,
-0.040069580078125,
-0.064208984375,
-0.03753662109375,
0.04693603515625,... |
hf-internal-testing/librispeech_asr_dummy | 2022-03-08T11:02:02.000Z | [
"region:us"
] | hf-internal-testing | LibriSpeech is a corpus of approximately 1000 hours of read English speech with a sampling rate of 16 kHz,
prepared by Vassil Panayotov with the assistance of Daniel Povey. The data is derived from read
audiobooks from the LibriVox project, and has been carefully segmented and aligned.
Note that in order to limit the re... | @inproceedings{panayotov2015librispeech,
title={Librispeech: an ASR corpus based on public domain audio books},
author={Panayotov, Vassil and Chen, Guoguo and Povey, Daniel and Khudanpur, Sanjeev},
booktitle={Acoustics, Speech and Signal Processing (ICASSP), 2015 IEEE International Conference on},
pages={5206--... | 0 | 114,448 | 2022-03-02T23:29:22 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
xnli | 2023-04-05T13:45:18.000Z | [
"language:ar",
"language:bg",
"language:de",
"language:el",
"language:en",
"language:es",
"language:fr",
"language:hi",
"language:ru",
"language:sw",
"language:th",
"language:tr",
"language:ur",
"language:vi",
"language:zh",
"region:us"
] | null | XNLI is a subset of a few thousand examples from MNLI which has been translated
into 14 different languages (some low-ish resource). As with MNLI, the goal is
to predict textual entailment (does sentence A imply/contradict/neither sentence
B) and is a classification task (given two sentences, predict one of three
lab... | @InProceedings{conneau2018xnli,
author = {Conneau, Alexis
and Rinott, Ruty
and Lample, Guillaume
and Williams, Adina
and Bowman, Samuel R.
and Schwenk, Holger
and Stoyanov, Veselin},
title = {XNLI: Evaluating Cross... | 30 | 108,082 | 2022-03-02T23:29:22 | ---
language:
- ar
- bg
- de
- el
- en
- es
- fr
- hi
- ru
- sw
- th
- tr
- ur
- vi
- zh
paperswithcode_id: xnli
pretty_name: Cross-lingual Natural Language Inference
dataset_info:
- config_name: ar
features:
- name: premise
dtype: string
- name: hypothesis
dtype: string
- name: label
dtype:
c... | 17,926 | [
[
-0.044525146484375,
-0.03863525390625,
0.0133056640625,
0.003143310546875,
-0.01248931884765625,
-0.00794219970703125,
-0.032745361328125,
-0.032501220703125,
0.0479736328125,
0.03173828125,
-0.060516357421875,
-0.061279296875,
-0.034149169921875,
0.02052307... |
Muennighoff/flores200 | 2023-10-05T14:56:26.000Z | [
"task_categories:text2text-generation",
"task_categories:translation",
"annotations_creators:found",
"language_creators:expert-generated",
"multilinguality:multilingual",
"multilinguality:translation",
"size_categories:unknown",
"source_datasets:extended|flores",
"license:cc-by-sa-4.0",
"condition... | Muennighoff | >The creation of FLORES200 doubles the existing language coverage of FLORES-101.
Given the nature of the new languages, which have less standardization and require
more specialized professional translations, the verification process became more complex.
This required modifications to the translation workflow. FLORES... | @article{nllb2022,
author = {NLLB Team, Marta R. Costa-jussà, James Cross, Onur Çelebi, Maha Elbayad, Kenneth Heafield, Kevin Heffernan, Elahe Kalbassi, Janice Lam, Daniel Licht, Jean Maillard, Anna Sun, Skyler Wang, Guillaume Wenzek, Al Youngblood, Bapi Akula, Loic Barrault, Gabriel Mejia Gonzalez, Prangthip Han... | 4 | 107,343 | 2022-07-17T08:11:54 | ---
annotations_creators:
- found
language_creators:
- expert-generated
license:
- cc-by-sa-4.0
multilinguality:
- multilingual
- translation
size_categories:
- unknown
source_datasets:
- extended|flores
task_categories:
- text2text-generation
- translation
task_ids: []
paperswithcode_id: flores
pretty_name: flores200
... | 7,309 | [
[
-0.02178955078125,
-0.044219970703125,
0.037017822265625,
0.0285491943359375,
-0.0177154541015625,
0.0007748603820800781,
-0.02703857421875,
-0.0243377685546875,
0.038665771484375,
0.01739501953125,
-0.040252685546875,
-0.06964111328125,
-0.0372314453125,
0.... |
openai_humaneval | 2022-11-29T16:41:19.000Z | [
"task_categories:text2text-generation",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:n<1K",
"source_datasets:original",
"language:en",
"license:mit",
"code-generation",
"arxiv:2107.03374",
"region:us"
] | null | The HumanEval dataset released by OpenAI contains 164 handcrafted programming challenges together with unit tests to verify the viability of a proposed solution. | @misc{chen2021evaluating,
title={Evaluating Large Language Models Trained on Code},
author={Mark Chen and Jerry Tworek and Heewoo Jun and Qiming Yuan and Henrique Ponde de Oliveira Pinto and Jared Kaplan and Harri Edwards and Yuri Burda and Nicholas Joseph and Greg Brockman and Alex Ray and Raul Puri and Gr... | 100 | 105,288 | 2022-03-02T23:29:22 | ---
annotations_creators:
- expert-generated
language_creators:
- expert-generated
language:
- en
license:
- mit
multilinguality:
- monolingual
pretty_name: OpenAI HumanEval
size_categories:
- n<1K
source_datasets:
- original
task_categories:
- text2text-generation
task_ids: []
tags:
- code-generation
paperswithcode_id... | 6,402 | [
[
-0.0238494873046875,
-0.042633056640625,
0.0010614395141601562,
0.00916290283203125,
-0.0031032562255859375,
-0.0199432373046875,
-0.041351318359375,
-0.0238037109375,
0.00919342041015625,
0.040191650390625,
-0.03643798828125,
-0.0594482421875,
-0.02827453613281... |
cnn_dailymail | 2022-11-18T19:30:01.000Z | [
"task_categories:summarization",
"task_ids:news-articles-summarization",
"annotations_creators:no-annotation",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:original",
"language:en",
"license:apache-2.0",
"region:us"
] | null | CNN/DailyMail non-anonymized summarization dataset.
There are two features:
- article: text of news article, used as the document to be summarized
- highlights: joined text of highlights with <s> and </s> around each
highlight, which is the target summary | @article{DBLP:journals/corr/SeeLM17,
author = {Abigail See and
Peter J. Liu and
Christopher D. Manning},
title = {Get To The Point: Summarization with Pointer-Generator Networks},
journal = {CoRR},
volume = {abs/1704.04368},
year = {2017},
url = {http://a... | 120 | 102,419 | 2022-03-02T23:29:22 | ---
annotations_creators:
- no-annotation
language_creators:
- found
language:
- en
license:
- apache-2.0
multilinguality:
- monolingual
size_categories:
- 100K<n<1M
source_datasets:
- original
task_categories:
- summarization
task_ids:
- news-articles-summarization
paperswithcode_id: cnn-daily-mail-1
pretty_name: CNN ... | 15,061 | [
[
-0.0250701904296875,
-0.051177978515625,
0.005191802978515625,
0.0184173583984375,
-0.047515869140625,
-0.0012102127075195312,
-0.018402099609375,
-0.0338134765625,
0.0203857421875,
0.0294342041015625,
-0.031524658203125,
-0.061309814453125,
-0.0501708984375,
... |
allenai/c4 | 2021-11-09T20:11:36.000Z | [
"region:us"
] | allenai | null | null | 80 | 99,327 | 2022-03-02T23:29:22 | This is the processed version of [Google's C4 dataset](https://www.tensorflow.org/datasets/catalog/c4).
We prepared five variants of the data: `en`, `en.noclean`, `en.noblocklist`, `realnewslike`, and `multilingual`.
For reference, these are the sizes of the variants:
- `en`: 305GB
- `en.noclean`: 2.3TB
- `en.nobloc... | 2,379 | [
[
-0.044342041015625,
-0.050933837890625,
0.02105712890625,
0.0193939208984375,
-0.031219482421875,
0.002742767333984375,
-0.0243377685546875,
-0.053863525390625,
0.058319091796875,
0.034576416015625,
-0.040771484375,
-0.04119873046875,
-0.041839599609375,
0.0... |
Skywork/SkyPile-150B | 2023-11-02T02:10:20.000Z | [
"task_categories:text-generation",
"size_categories:100B<n<1T",
"language:zh",
"llm ",
"casual-lm",
"language-modeling",
"arxiv:2310.19341",
"region:us"
] | Skywork | null | null | 98 | 92,182 | 2023-10-23T12:55:10 | ---
task_categories:
- text-generation
language:
- zh
tags:
- 'llm '
- casual-lm
- language-modeling
pretty_name: SkyPile-150B
size_categories:
- 100B<n<1T
---
# SkyPile-150B
## Dataset Summary
SkyPile-150B is a comprehensive, large-scale Chinese dataset specifically designed for the pre-training of large language m... | 3,159 | [
[
-0.0264892578125,
-0.0377197265625,
-0.004058837890625,
0.038726806640625,
0.00023651123046875,
-0.026824951171875,
-0.034027099609375,
-0.0304412841796875,
0.0017766952514648438,
0.0518798828125,
-0.04302978515625,
-0.038604736328125,
-0.03057861328125,
0.0... |
samsum | 2022-12-27T11:03:09.000Z | [
"task_categories:summarization",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:en",
"license:cc-by-nc-nd-4.0",
"conversations-summarization",
"arxiv:1911.12237",
"r... | null | SAMSum Corpus contains over 16k chat dialogues with manually annotated
summaries.
There are two features:
- dialogue: text of dialogue.
- summary: human written summary of the dialogue.
- id: id of an example. | @article{gliwa2019samsum,
title={SAMSum Corpus: A Human-annotated Dialogue Dataset for Abstractive Summarization},
author={Gliwa, Bogdan and Mochol, Iwona and Biesek, Maciej and Wawer, Aleksander},
journal={arXiv preprint arXiv:1911.12237},
year={2019}
} | 170 | 91,881 | 2022-03-02T23:29:22 | ---
annotations_creators:
- expert-generated
language_creators:
- expert-generated
language:
- en
license:
- cc-by-nc-nd-4.0
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- summarization
task_ids: []
paperswithcode_id: samsum-corpus
pretty_name: SAMSum Corpus
... | 7,042 | [
[
-0.0284881591796875,
-0.053802490234375,
0.01187896728515625,
0.00881195068359375,
-0.0227508544921875,
0.0096435546875,
-0.0272674560546875,
-0.0307464599609375,
0.054656982421875,
0.04095458984375,
-0.048675537109375,
-0.056854248046875,
-0.03741455078125,
... |
ceval/ceval-exam | 2023-08-31T14:04:10.000Z | [
"task_categories:text-classification",
"task_categories:multiple-choice",
"task_categories:question-answering",
"size_categories:10K<n<100K",
"language:zh",
"license:cc-by-nc-sa-4.0",
"arxiv:2305.08322",
"region:us"
] | ceval | C-Eval is a comprehensive Chinese evaluation suite for foundation models. It consists of 13948 multi-choice questions spanning 52 diverse disciplines and four difficulty levels. | @article{huang2023ceval,
title={C-Eval: A Multi-Level Multi-Discipline Chinese Evaluation Suite for Foundation Models},
author={Huang, Yuzhen and Bai, Yuzhuo and Zhu, Zhihao and Zhang, Junlei and Zhang, Jinghan and Su, Tangjun and Liu, Junteng and Lv, Chuancheng and Zhang, Yikai and Lei, Jiayi and Fu, Yao and ... | 155 | 87,516 | 2023-05-16T01:47:44 | ---
license: cc-by-nc-sa-4.0
task_categories:
- text-classification
- multiple-choice
- question-answering
language:
- zh
pretty_name: C-Eval
size_categories:
- 10K<n<100K
---
C-Eval is a comprehensive Chinese evaluation suite for foundation models. It consists of 13948 multi-choice questions spanning 52 diverse disci... | 1,897 | [
[
-0.0308990478515625,
-0.08416748046875,
0.021270751953125,
0.018341064453125,
0.010986328125,
0.01068115234375,
-0.025482177734375,
-0.0233154296875,
-0.00861358642578125,
0.0276336669921875,
-0.0230865478515625,
-0.033447265625,
-0.006374359130859375,
0.004... |
wikihow | 2022-11-18T22:01:14.000Z | [
"region:us"
] | null | WikiHow is a new large-scale dataset using the online WikiHow
(http://www.wikihow.com/) knowledge base.
There are two features:
- text: WikiHow answer texts.
- headline: bold lines as summary.
There are two separate versions:
- all: consisting of the concatenation of all paragraphs as the articles and
... | @misc{koupaee2018wikihow,
title={WikiHow: A Large Scale Text Summarization Dataset},
author={Mahnaz Koupaee and William Yang Wang},
year={2018},
eprint={1810.09305},
archivePrefix={arXiv},
primaryClass={cs.CL}
} | 3 | 84,943 | 2022-03-02T23:29:22 | ---
paperswithcode_id: wikihow
pretty_name: WikiHow
dataset_info:
- config_name: all
features:
- name: text
dtype: string
- name: headline
dtype: string
- name: title
dtype: string
splits:
- name: train
num_bytes: 513238309
num_examples: 157252
- name: validation
num_bytes: 1824689... | 1,127 | [
[
-0.01971435546875,
-0.0038890838623046875,
0.035491943359375,
0.04730224609375,
-0.00653076171875,
-0.0164947509765625,
-0.0016698837280273438,
-0.004261016845703125,
0.03814697265625,
0.04901123046875,
-0.0477294921875,
-0.05517578125,
-0.04888916015625,
-0... |
machelreid/m2d2 | 2022-10-25T12:57:24.000Z | [
"license:cc-by-nc-4.0",
"arxiv:2210.07370",
"region:us"
] | machelreid | null | null | 2 | 80,979 | 2022-10-18T15:14:07 | ---
license: cc-by-nc-4.0
---
# M2D2: A Massively Multi-domain Language Modeling Dataset
*From the paper "[M2D2: A Massively Multi-domain Language Modeling Dataset](https://arxiv.org/abs/2210.07370)", (Reid et al., EMNLP 2022)*
Load the dataset as follows:
```python
import datasets
dataset = datasets.load_dataset("m... | 4,944 | [
[
-0.0357666015625,
-0.0311279296875,
0.02923583984375,
0.004131317138671875,
0.00962066650390625,
0.0034046173095703125,
0.004901885986328125,
-0.01093292236328125,
0.026123046875,
0.0112152099609375,
-0.036590576171875,
-0.040069580078125,
-0.041290283203125,
... |
red_caps | 2023-01-25T14:43:07.000Z | [
"task_categories:image-to-text",
"task_ids:image-captioning",
"annotations_creators:found",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:10M<n<100M",
"source_datasets:original",
"language:en",
"license:cc-by-4.0",
"arxiv:2111.11431",
"region:us"
] | null | RedCaps is a large-scale dataset of 12M image-text pairs collected from Reddit.
Images and captions from Reddit depict and describe a wide variety of objects and scenes.
The data is collected from a manually curated set of subreddits (350 total),
which give coarse image labels and allow steering of the dataset composit... | @misc{desai2021redcaps,
title={RedCaps: web-curated image-text data created by the people, for the people},
author={Karan Desai and Gaurav Kaul and Zubin Aysola and Justin Johnson},
year={2021},
eprint={2111.11431},
archivePrefix={arXiv},
primaryClass={cs.CV}
} | 43 | 74,871 | 2022-03-02T23:29:22 | ---
annotations_creators:
- found
language_creators:
- found
language:
- en
license:
- cc-by-4.0
multilinguality:
- monolingual
size_categories:
- 10M<n<100M
source_datasets:
- original
task_categories:
- image-to-text
task_ids:
- image-captioning
paperswithcode_id: redcaps
pretty_name: RedCaps
dataset_info:
features... | 35,276 | [
[
-0.056488037109375,
-0.03558349609375,
0.011810302734375,
0.0118560791015625,
-0.047943115234375,
-0.0013141632080078125,
-0.02313232421875,
-0.042266845703125,
0.032684326171875,
0.033477783203125,
-0.05755615234375,
-0.03521728515625,
-0.049774169921875,
0... |
lighteval/agi_eval_en | 2023-10-17T14:46:49.000Z | [
"arxiv:2304.06364",
"region:us"
] | lighteval | null | null | 0 | 71,448 | 2023-09-28T14:59:03 | # Introduction
AGIEval is a human-centric benchmark specifically designed to evaluate the general abilities of foundation models in tasks pertinent to human cognition and problem-solving.
This benchmark is derived from 20 official, public, and high-standard admission and qualification exams intended for general human ... | 819 | [
[
-0.043548583984375,
-0.06365966796875,
0.01377105712890625,
0.0169677734375,
0.00359344482421875,
-0.0013904571533203125,
0.0163726806640625,
-0.041961669921875,
-0.0015211105346679688,
0.0237579345703125,
-0.03680419921875,
-0.0215911865234375,
-0.0300445556640... |
ccdv/cnn_dailymail | 2022-10-24T20:31:59.000Z | [
"task_categories:summarization",
"task_categories:text-generation",
"annotations_creators:no-annotation",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:original",
"language:en",
"license:apache-2.0",
"conditional-text-generation",
"region... | ccdv | CNN/DailyMail non-anonymized summarization dataset.
There are two features:
- article: text of news article, used as the document to be summarized
- highlights: joined text of highlights with <s> and </s> around each
highlight, which is the target summary | @article{DBLP:journals/corr/SeeLM17,
author = {Abigail See and
Peter J. Liu and
Christopher D. Manning},
title = {Get To The Point: Summarization with Pointer-Generator Networks},
journal = {CoRR},
volume = {abs/1704.04368},
year = {2017},
url = {http://a... | 4 | 70,771 | 2022-03-02T23:29:22 | ---
annotations_creators:
- no-annotation
language_creators:
- found
language:
- en
license:
- apache-2.0
multilinguality:
- monolingual
size_categories:
- 100K<n<1M
source_datasets:
- original
task_categories:
- summarization
- text-generation
task_ids: []
paperswithcode_id: cnn-daily-mail-1
pretty_name: CNN / Daily M... | 13,835 | [
[
-0.020538330078125,
-0.052703857421875,
0.0009169578552246094,
0.02325439453125,
-0.047515869140625,
-0.0020008087158203125,
-0.0193634033203125,
-0.035919189453125,
0.0250396728515625,
0.033599853515625,
-0.0301971435546875,
-0.054962158203125,
-0.0504455566406... |
bigscience/P3 | 2023-02-01T13:38:41.000Z | [
"task_categories:other",
"annotations_creators:crowdsourced",
"annotations_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:100M<n<1B",
"language:en",
"license:apache-2.0",
"arxiv:2110.08207",
"region:us"
] | bigscience | P3 (Public Pool of Prompts) is a collection of prompted English datasets covering a diverse set of NLP tasks. A prompt is the combination of an input template and a target template. The templates are functions mapping a data example into natural language for the input and target sequences. For example, in the case of a... | @misc{sanh2021multitask,
title={Multitask Prompted Training Enables Zero-Shot Task Generalization},
author={Victor Sanh and Albert Webson and Colin Raffel and Stephen H. Bach and Lintang Sutawika and Zaid Alyafeai and Antoine Chaffin and Arnaud Stiegler and Teven Le Scao and Arun Raja and Manan Dey and M Sa... | 159 | 70,288 | 2022-03-02T23:29:22 | ---
annotations_creators:
- crowdsourced
- expert-generated
language:
- en
license:
- apache-2.0
multilinguality:
- monolingual
pretty_name: P3
size_categories:
- 100M<n<1B
task_categories:
- other
---
# Dataset Card for P3
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#datase... | 10,380 | [
[
-0.0292205810546875,
-0.049560546875,
0.04180908203125,
0.026580810546875,
0.0008206367492675781,
-0.01091766357421875,
-0.0101776123046875,
-0.01397705078125,
0.0137481689453125,
0.024200439453125,
-0.0693359375,
-0.043914794921875,
-0.042755126953125,
0.03... |
rotten_tomatoes | 2023-04-05T13:39:30.000Z | [
"task_categories:text-classification",
"task_ids:sentiment-classification",
"annotations_creators:crowdsourced",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:en",
"license:unknown",
"region:us"
] | null | Movie Review Dataset.
This is a dataset containing 5,331 positive and 5,331 negative processed
sentences from Rotten Tomatoes movie reviews. This data was first used in Bo
Pang and Lillian Lee, ``Seeing stars: Exploiting class relationships for
sentiment categorization with respect to rating scales.'', Proceedings o... | @InProceedings{Pang+Lee:05a,
author = {Bo Pang and Lillian Lee},
title = {Seeing stars: Exploiting class relationships for sentiment
categorization with respect to rating scales},
booktitle = {Proceedings of the ACL},
year = 2005
} | 28 | 70,054 | 2022-03-02T23:29:22 | ---
annotations_creators:
- crowdsourced
language_creators:
- crowdsourced
language:
- en
license:
- unknown
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- sentiment-classification
paperswithcode_id: mr
pretty_name: RottenTomatoe... | 7,247 | [
[
-0.046142578125,
-0.036956787109375,
0.01546478271484375,
-0.00026726722717285156,
-0.01593017578125,
0.0007600784301757812,
-0.02191162109375,
-0.0238800048828125,
0.05322265625,
0.044342041015625,
-0.051849365234375,
-0.06317138671875,
-0.050201416015625,
... |
EleutherAI/wikitext_document_level | 2023-03-10T11:04:18.000Z | [
"arxiv:1609.07843",
"region:us"
] | EleutherAI | The WikiText language modeling dataset is a collection of over 100 million tokens extracted from the set of verified
Good and Featured articles on Wikipedia. The dataset is available under the Creative Commons Attribution-ShareAlike
License. | @misc{merity2016pointer,
title={Pointer Sentinel Mixture Models},
author={Stephen Merity and Caiming Xiong and James Bradbury and Richard Socher},
year={2016},
eprint={1609.07843},
archivePrefix={arXiv},
primaryClass={cs.CL}
} | 4 | 64,083 | 2023-03-10T10:57:24 | # Wikitext Document Level
This is a modified version of [https://huggingface.co/datasets/wikitext](https://huggingface.co/datasets/wikitext) that returns Wiki pages instead of Wiki text line-by-line. The original readme is contained below.
# Dataset Card for "wikitext"
## Table of Contents
- [Dataset Description](#d... | 7,740 | [
[
-0.043670654296875,
-0.040771484375,
0.01311492919921875,
0.0150604248046875,
-0.006366729736328125,
-0.002162933349609375,
-0.0200347900390625,
-0.045440673828125,
0.042022705078125,
0.0380859375,
-0.055938720703125,
-0.05718994140625,
-0.03936767578125,
0.... |
wikipedia | 2023-06-01T14:59:58.000Z | [
"task_categories:text-generation",
"task_categories:fill-mask",
"task_ids:language-modeling",
"task_ids:masked-language-modeling",
"annotations_creators:no-annotation",
"language_creators:crowdsourced",
"multilinguality:multilingual",
"size_categories:n<1K",
"size_categories:1K<n<10K",
"size_categ... | null | Wikipedia dataset containing cleaned articles of all languages.
The datasets are built from the Wikipedia dump
(https://dumps.wikimedia.org/) with one split per language. Each example
contains the content of one full Wikipedia article with cleaning to strip
markdown and unwanted sections (references, etc.). | @ONLINE {wikidump,
author = {Wikimedia Foundation},
title = {Wikimedia Downloads},
url = {https://dumps.wikimedia.org}
} | 334 | 63,455 | 2022-03-02T23:29:22 | ---
annotations_creators:
- no-annotation
language_creators:
- crowdsourced
pretty_name: Wikipedia
paperswithcode_id: null
license:
- cc-by-sa-3.0
- gfdl
task_categories:
- text-generation
- fill-mask
task_ids:
- language-modeling
- masked-language-modeling
source_datasets:
- original
multilinguality:
- multilingual
si... | 16,258 | [
[
-0.060699462890625,
-0.043792724609375,
0.01357269287109375,
0.00969696044921875,
-0.01273345947265625,
-0.0206756591796875,
-0.027130126953125,
-0.033447265625,
0.03802490234375,
0.0236663818359375,
-0.0548095703125,
-0.06146240234375,
-0.032989501953125,
0... |
tweet_eval | 2023-06-01T14:59:58.000Z | [
"task_categories:text-classification",
"task_ids:intent-classification",
"task_ids:multi-class-classification",
"task_ids:sentiment-classification",
"annotations_creators:found",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"size_categories:10K<n<100K",
"s... | null | TweetEval consists of seven heterogeneous tasks in Twitter, all framed as multi-class tweet classification. All tasks have been unified into the same benchmark, with each dataset presented in the same format and with fixed training, validation and test splits. | @inproceedings{barbieri2020tweeteval,
title={{TweetEval:Unified Benchmark and Comparative Evaluation for Tweet Classification}},
author={Barbieri, Francesco and Camacho-Collados, Jose and Espinosa-Anke, Luis and Neves, Leonardo},
booktitle={Proceedings of Findings of EMNLP},
year={2020}
} | 82 | 63,352 | 2022-03-02T23:29:22 | ---
annotations_creators:
- found
language_creators:
- found
language:
- en
license:
- unknown
multilinguality:
- monolingual
size_categories:
- 100K<n<1M
- 10K<n<100K
- 1K<n<10K
- n<1K
source_datasets:
- extended|other-tweet-datasets
task_categories:
- text-classification
task_ids:
- intent-classification
- multi-clas... | 21,757 | [
[
-0.01372528076171875,
-0.0501708984375,
0.01096343994140625,
0.02996826171875,
-0.027618408203125,
0.02093505859375,
-0.0233154296875,
-0.0234222412109375,
0.038970947265625,
0.01629638671875,
-0.045501708984375,
-0.07281494140625,
-0.05523681640625,
0.00586... |
tatsu-lab/alpaca | 2023-05-22T20:33:36.000Z | [
"task_categories:text-generation",
"language:en",
"license:cc-by-nc-4.0",
"instruction-finetuning",
"region:us"
] | tatsu-lab | null | null | 468 | 60,485 | 2023-03-13T17:19:43 | ---
license: cc-by-nc-4.0
language:
- en
tags:
- instruction-finetuning
pretty_name: Alpaca
task_categories:
- text-generation
---
# Dataset Card for Alpaca
## Dataset Description
- **Homepage:** https://crfm.stanford.edu/2023/03/13/alpaca.html
- **Repository:** https://github.com/tatsu-lab/stanford_alpaca
- **Paper... | 7,466 | [
[
-0.0316162109375,
-0.060455322265625,
0.0110321044921875,
0.00580596923828125,
-0.0175628662109375,
-0.027130126953125,
-0.01012420654296875,
-0.0372314453125,
0.01320648193359375,
0.0489501953125,
-0.049957275390625,
-0.05517578125,
-0.05517578125,
-0.00343... |
lukaemon/bbh | 2023-02-02T01:14:46.000Z | [
"region:us"
] | lukaemon | BBH focuses on a suite of 23 challenging BIG-Bench tasks which we call BIG-Bench Hard (BBH). These are the tasks for which prior language model evaluations did not outperform the average human-rater. We find that applying chain-of-thought (CoT) prompting to BBH tasks enables PaLM to surpass the average human-rater perfor... | @article{suzgun2022challenging,
title={Challenging BIG-Bench Tasks and Whether Chain-of-Thought Can Solve Them},
author={Suzgun, Mirac and Scales, Nathan and Sch{\"a}rli, Nathanael and Gehrmann, Sebastian and Tay, Yi and Chung, Hyung Won and Chowdhery, Aakanksha and Le, Quoc V and Chi, Ed H and Zhou, Denny and and ... | 19 | 59,801 | 2023-02-01T07:46:51 | ---
dataset_info:
- config_name: boolean_expressions
features:
- name: input
dtype: string
- name: target
dtype: string
splits:
- name: test
num_bytes: 11790
num_examples: 250
download_size: 17172
dataset_size: 11790
- config_name: causal_judgement
features:
- name: input
dtype: st... | 6,769 | [
[
-0.0232391357421875,
-0.04034423828125,
0.056549072265625,
0.0273590087890625,
-0.0079803466796875,
-0.0036258697509765625,
-0.040771484375,
-0.0132598876953125,
0.0015993118286132812,
0.024017333984375,
-0.07086181640625,
-0.028778076171875,
-0.031646728515625,... |
wmt14 | 2023-04-05T13:43:47.000Z | [
"task_categories:translation",
"annotations_creators:no-annotation",
"language_creators:found",
"multilinguality:translation",
"size_categories:10M<n<100M",
"source_datasets:extended|europarl_bilingual",
"source_datasets:extended|giga_fren",
"source_datasets:extended|news_commentary",
"source_datase... | null | null | @InProceedings{bojar-EtAl:2014:W14-33,
author = {Bojar, Ondrej and Buck, Christian and Federmann, Christian and Haddow, Barry and Koehn, Philipp and Leveling, Johannes and Monz, Christof and Pecina, Pavel and Post, Matt and Saint-Amand, Herve and Soricut, Radu and Specia, Lucia and Tamchyna... | 6 | 57,396 | 2022-03-02T23:29:22 | ---
annotations_creators:
- no-annotation
language_creators:
- found
language:
- cs
- de
- en
- fr
- hi
- ru
license:
- unknown
multilinguality:
- translation
size_categories:
- 10M<n<100M
source_datasets:
- extended|europarl_bilingual
- extended|giga_fren
- extended|news_commentary
- extended|un_multi
- extended|hind_... | 9,368 | [
[
-0.041015625,
-0.036376953125,
0.013671875,
0.015838623046875,
-0.026397705078125,
0.0006628036499023438,
-0.036285400390625,
-0.031341552734375,
0.04473876953125,
0.0257415771484375,
-0.0584716796875,
-0.0699462890625,
-0.046722412109375,
0.0198974609375,
... |
cifar10 | 2023-01-25T14:27:53.000Z | [
"task_categories:image-classification",
"annotations_creators:crowdsourced",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:extended|other-80-Million-Tiny-Images",
"language:en",
"license:unknown",
"region:us"
] | null | The CIFAR-10 dataset consists of 60000 32x32 colour images in 10 classes, with 6000 images
per class. There are 50000 training images and 10000 test images. | @TECHREPORT{Krizhevsky09learningmultiple,
author = {Alex Krizhevsky},
title = {Learning multiple layers of features from tiny images},
institution = {},
year = {2009}
} | 28 | 57,302 | 2022-03-02T23:29:22 | ---
annotations_creators:
- crowdsourced
language_creators:
- found
language:
- en
license:
- unknown
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- extended|other-80-Million-Tiny-Images
task_categories:
- image-classification
task_ids: []
paperswithcode_id: cifar-10
pretty_name: Cifar1... | 4,996 | [
[
-0.0513916015625,
-0.035430908203125,
-0.0030117034912109375,
0.0102386474609375,
-0.0182952880859375,
0.00910186767578125,
-0.0233612060546875,
-0.044891357421875,
0.0270233154296875,
0.024688720703125,
-0.032867431640625,
-0.062744140625,
-0.05438232421875,
... |
HuggingFaceM4/cm4-synthetic-testing | 2022-11-22T16:24:24.000Z | [
"license:bigscience-openrail-m",
"region:us"
] | HuggingFaceM4 | This dataset is designed to be used in testing. It's derived from the cm4-10k dataset | @InProceedings{huggingface:dataset,
title = {Multimodal synthetic dataset for testing},
author={HuggingFace, Inc.},
year={2022}
} | 3 | 56,393 | 2022-09-24T02:37:35 | ---
license: bigscience-openrail-m
---
This dataset is designed to be used in testing multimodal text/image models. It's derived from the cm4-10k dataset.
The current splits are: `['100.unique', '100.repeat', '300.unique', '300.repeat', '1k.unique', '1k.repeat', '10k.unique', '10k.repeat']`.
The `unique` ones ensure uniq... | 703 | [
[
-0.0411376953125,
-0.055694580078125,
0.016021728515625,
0.02252197265625,
-0.03167724609375,
-0.01158905029296875,
-0.0005183219909667969,
0.0016756057739257812,
0.0038471221923828125,
0.036956787109375,
-0.07110595703125,
-0.042144775390625,
-0.019424438476562... |
ptb_text_only | 2022-11-18T21:39:46.000Z | [
"task_categories:text-generation",
"task_categories:fill-mask",
"task_ids:language-modeling",
"task_ids:masked-language-modeling",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:... | null | This is the Penn Treebank Project: Release 2 CDROM, featuring a million words of 1989 Wall Street Journal material. This corpus has been annotated for part-of-speech (POS) information. In addition, over half of it has been annotated for skeletal syntactic structure. | @article{marcus-etal-1993-building,
title = "Building a Large Annotated Corpus of {E}nglish: The {P}enn {T}reebank",
author = "Marcus, Mitchell P. and
Santorini, Beatrice and
Marcinkiewicz, Mary Ann",
journal = "Computational Linguistics",
volume = "19",
number = "2",
year = "1993"... | 9 | 55,254 | 2022-03-02T23:29:22 | ---
annotations_creators:
- expert-generated
language_creators:
- found
language:
- en
license:
- other
license_details: LDC User Agreement for Non-Members
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- text-generation
- fill-mask
task_ids:
- language-modelin... | 4,210 | [
[
-0.02911376953125,
-0.059478759765625,
0.003101348876953125,
0.0151214599609375,
-0.02484130859375,
0.0180816650390625,
-0.033935546875,
-0.04473876953125,
0.039825439453125,
0.0294952392578125,
-0.0367431640625,
-0.06103515625,
-0.0478515625,
0.017791748046... |
MBZUAI/Bactrian-X | 2023-05-27T12:54:05.000Z | [
"task_categories:text-generation",
"language:af",
"language:ar",
"language:az",
"language:bn",
"language:cs",
"language:de",
"language:en",
"language:es",
"language:et",
"language:fi",
"language:fr",
"language:gl",
"language:gu",
"language:he",
"language:hi",
"language:hr",
"langua... | MBZUAI | null | null | 37 | 55,113 | 2023-04-22T12:42:39 | ---
license: cc-by-nc-4.0
task_categories:
- text-generation
language:
- af
- ar
- az
- bn
- cs
- de
- en
- es
- et
- fi
- fr
- gl
- gu
- he
- hi
- hr
- id
- it
- ja
- ka
- kk
- km
- ko
- lt
- lv
- mk
- ml
- mn
- mr
- my
- ne
- nl
- pl
- ps
- pt
- ro
- ru
- si
- sl
- sv
- sw
- ta
- te
- th
- tl
- tr
- uk
- ur
- vi
- xh... | 13,205 | [
[
-0.036712646484375,
-0.04388427734375,
0.02301025390625,
0.0204010009765625,
-0.03070068359375,
0.008148193359375,
-0.0209503173828125,
-0.0283203125,
0.050506591796875,
0.0167388916015625,
-0.0496826171875,
-0.06451416015625,
-0.050689697265625,
0.022537231... |
EleutherAI/pile | 2023-05-03T15:58:14.000Z | [
"task_categories:text-generation",
"task_categories:fill-mask",
"task_ids:language-modeling",
"task_ids:masked-language-modeling",
"annotations_creators:no-annotation",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:100B<n<1T",
"source_datasets:original",
"language:en",... | EleutherAI | The Pile is a 825 GiB diverse, open source language modelling data set that consists of 22 smaller, high-quality
datasets combined together. | @misc{gao2020pile,
title={The Pile: An 800GB Dataset of Diverse Text for Language Modeling},
author={Leo Gao and Stella Biderman and Sid Black and Laurence Golding and Travis Hoppe and Charles Foster and Jason Phang and Horace He and Anish Thite and Noa Nabeshima and Shawn Presser and Connor Leahy},
y... | 239 | 54,919 | 2022-03-02T23:29:22 | ---
annotations_creators:
- no-annotation
language_creators:
- found
language:
- en
license: other
multilinguality:
- monolingual
pretty_name: the Pile
size_categories:
- 100B<n<1T
source_datasets:
- original
task_categories:
- text-generation
- fill-mask
task_ids:
- language-modeling
- masked-language-modeling
papersw... | 14,174 | [
[
-0.0214385986328125,
-0.0284423828125,
0.0146026611328125,
0.007305145263671875,
-0.025238037109375,
-0.01003265380859375,
0.01276397705078125,
-0.02740478515625,
0.054412841796875,
0.031707763671875,
-0.03326416015625,
-0.04815673828125,
-0.03863525390625,
... |
code_search_net | 2023-06-06T11:19:59.000Z | [
"task_categories:text-generation",
"task_categories:fill-mask",
"task_ids:language-modeling",
"task_ids:masked-language-modeling",
"annotations_creators:no-annotation",
"language_creators:machine-generated",
"multilinguality:multilingual",
"size_categories:100K<n<1M",
"size_categories:10K<n<100K",
... | null | CodeSearchNet corpus contains about 6 million functions from open-source code spanning six programming languages (Go, Java, JavaScript, PHP, Python, and Ruby). The CodeSearchNet Corpus also contains automatically generated query-like natural language for 2 million functions, obtained from mechanically scraping and prep... | @article{husain2019codesearchnet,
title={{CodeSearchNet} challenge: Evaluating the state of semantic code search},
author={Husain, Hamel and Wu, Ho-Hsiang and Gazit, Tiferet and Allamanis, Miltiadis and Brockschmidt, Marc},
journal={arXiv preprint arXiv:1909.09436},
year={2019}
} | 134 | 53,928 | 2022-03-02T23:29:22 | ---
annotations_creators:
- no-annotation
language_creators:
- machine-generated
language:
- code
license:
- other
multilinguality:
- multilingual
size_categories:
- 100K<n<1M
- 10K<n<100K
- 1M<n<10M
source_datasets:
- original
task_categories:
- text-generation
- fill-mask
task_ids:
- language-modeling
- masked-langua... | 12,901 | [
[
-0.0272674560546875,
-0.0279693603515625,
0.01142120361328125,
0.00279998779296875,
-0.008575439453125,
0.010345458984375,
-0.03863525390625,
-0.018951416015625,
0.0355224609375,
0.0304412841796875,
-0.030426025390625,
-0.0673828125,
-0.03338623046875,
0.016... |
conll2003 | 2023-04-05T10:02:26.000Z | [
"task_categories:token-classification",
"task_ids:named-entity-recognition",
"task_ids:part-of-speech",
"annotations_creators:crowdsourced",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:extended|other-reuters-corpus",
"language:en",
"lice... | null | The shared task of CoNLL-2003 concerns language-independent named entity recognition. We will concentrate on
four types of named entities: persons, locations, organizations and names of miscellaneous entities that do
not belong to the previous three groups.
The CoNLL-2003 shared task data files contain four columns se... | @inproceedings{tjong-kim-sang-de-meulder-2003-introduction,
title = "Introduction to the {C}o{NLL}-2003 Shared Task: Language-Independent Named Entity Recognition",
author = "Tjong Kim Sang, Erik F. and
De Meulder, Fien",
booktitle = "Proceedings of the Seventh Conference on Natural Language Learning... | 69 | 53,682 | 2022-03-02T23:29:22 | ---
annotations_creators:
- crowdsourced
language_creators:
- found
language:
- en
license:
- other
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- extended|other-reuters-corpus
task_categories:
- token-classification
task_ids:
- named-entity-recognition
- part-of-speech
paperswithcode_i... | 12,330 | [
[
-0.04913330078125,
-0.03558349609375,
0.0087127685546875,
0.00942230224609375,
-0.01324462890625,
0.00035881996154785156,
-0.0221405029296875,
-0.0428466796875,
0.042144775390625,
0.02410888671875,
-0.0440673828125,
-0.0635986328125,
-0.0428466796875,
0.0218... |
mbpp | 2022-11-18T20:20:07.000Z | [
"task_categories:text2text-generation",
"annotations_creators:crowdsourced",
"annotations_creators:expert-generated",
"language_creators:crowdsourced",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:n<1K",
"source_datasets:original",
"language:en",
"license:c... | null | The MBPP (Mostly Basic Python Problems) dataset consists of around 1,000 crowd-sourced Python
programming problems, designed to be solvable by entry-level programmers, covering programming
fundamentals, standard library functionality, and so on. Each problem consists of a task
description, code solution and 3 automated... | @article{austin2021program,
title={Program Synthesis with Large Language Models},
author={Austin, Jacob and Odena, Augustus and Nye, Maxwell and Bosma, Maarten and Michalewski, Henryk and Dohan, David and Jiang, Ellen and Cai, Carrie and Terry, Michael and Le, Quoc and others},
journal={arXiv preprint arXiv:2108.... | 50 | 53,127 | 2022-03-02T23:29:22 | ---
annotations_creators:
- crowdsourced
- expert-generated
language_creators:
- crowdsourced
- expert-generated
language:
- en
license:
- cc-by-4.0
multilinguality:
- monolingual
pretty_name: Mostly Basic Python Problems
size_categories:
- n<1K
source_datasets:
- original
task_categories:
- text2text-generation
task_i... | 8,600 | [
[
-0.03594970703125,
-0.0443115234375,
0.016845703125,
0.02490234375,
0.01316070556640625,
-0.0078582763671875,
-0.0167388916015625,
-0.0195465087890625,
0.00182342529296875,
0.0265045166015625,
-0.046173095703125,
-0.041229248046875,
-0.030609130859375,
0.010... |
c4 | 2022-11-03T16:47:14.000Z | [
"task_categories:text-generation",
"task_categories:fill-mask",
"task_ids:language-modeling",
"task_ids:masked-language-modeling",
"annotations_creators:no-annotation",
"language_creators:found",
"multilinguality:multilingual",
"size_categories:100M<n<1B",
"source_datasets:original",
"language:en"... | null | A colossal, cleaned version of Common Crawl's web crawl corpus.
Based on Common Crawl dataset: "https://commoncrawl.org".
This is the processed version of Google's C4 dataset by AllenAI. | @article{2019t5,
author = {Colin Raffel and Noam Shazeer and Adam Roberts and Katherine Lee and Sharan Narang and Michael Matena and Yanqi Zhou and Wei Li and Peter J. Liu},
title = {Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer},
journal = {arXiv e-prints},
year = {2... | 160 | 52,964 | 2022-03-02T23:29:22 | ---
pretty_name: C4
annotations_creators:
- no-annotation
language_creators:
- found
language:
- en
license:
- odc-by
multilinguality:
- multilingual
size_categories:
- 100M<n<1B
source_datasets:
- original
task_categories:
- text-generation
- fill-mask
task_ids:
- language-modeling
- masked-language-modeling
paperswit... | 7,767 | [
[
-0.035919189453125,
-0.044097900390625,
0.005954742431640625,
0.01035308837890625,
-0.0111846923828125,
0.007404327392578125,
-0.0196990966796875,
-0.045440673828125,
0.029815673828125,
0.037322998046875,
-0.040740966796875,
-0.06781005859375,
-0.036834716796875... |
lambada | 2023-06-13T09:14:12.000Z | [
"task_categories:text2text-generation",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:extended|bookcorpus",
"language:en",
"license:cc-by-4.0",
"long-range-dependency",
"region:us"
] | null | The LAMBADA dataset evaluates the capabilities of computational models
for text understanding by means of a word prediction task.
LAMBADA is a collection of narrative passages sharing the characteristic
that human subjects are able to guess their last word if
they are exposed to the whole passage, but not if they
only see the ... | @InProceedings{paperno-EtAl:2016:P16-1,
author = {Paperno, Denis and Kruszewski, Germ\'{a}n and Lazaridou,
Angeliki and Pham, Ngoc Quan and Bernardi, Raffaella and Pezzelle,
Sandro and Baroni, Marco and Boleda, Gemma and Fernandez, Raquel},
title = {The {LAMBADA} dataset: Word prediction requ... | 32 | 46,312 | 2022-03-02T23:29:22 | ---
task_categories:
- text2text-generation
task_ids: []
multilinguality:
- monolingual
language:
- en
language_creators:
- found
annotations_creators:
- expert-generated
source_datasets:
- extended|bookcorpus
size_categories:
- 10K<n<100K
license:
- cc-by-4.0
paperswithcode_id: lambada
pretty_name: LAMBADA
tags:
- lon... | 7,110 | [
[
-0.0228424072265625,
-0.054779052734375,
0.018829345703125,
0.00656890869140625,
-0.0313720703125,
-0.0122222900390625,
-0.0180206298828125,
-0.0361328125,
0.01390838623046875,
0.03692626953125,
-0.04400634765625,
-0.0533447265625,
-0.04156494140625,
0.01530... |
HuggingFaceM4/cm4-synthetic-testing-with-embeddings | 2023-10-03T12:25:35.000Z | [
"region:us"
] | HuggingFaceM4 | null | null | 0 | 43,944 | 2023-10-03T12:23:54 | ---
dataset_info:
- config_name: 100.unique.embeddings
features:
- name: texts
sequence: string
- name: metadata
dtype: string
- name: original_idx
dtype: int64
- name: image_embeddings
sequence:
sequence:
sequence: float64
splits:
- name: train
num_bytes: 15422178
nu... | 1,119 | [
[
-0.052581787109375,
-0.032440185546875,
0.0262908935546875,
0.015106201171875,
-0.019927978515625,
0.0169830322265625,
0.002498626708984375,
-0.00403594970703125,
0.05682373046875,
0.0198822021484375,
-0.06268310546875,
-0.0654296875,
-0.0318603515625,
0.000... |
xtreme | 2023-06-01T14:59:58.000Z | [
"task_categories:multiple-choice",
"task_categories:question-answering",
"task_categories:token-classification",
"task_categories:text-classification",
"task_categories:text-retrieval",
"task_ids:multiple-choice-qa",
"task_ids:extractive-qa",
"task_ids:open-domain-qa",
"task_ids:natural-language-inf... | null | The Cross-lingual TRansfer Evaluation of Multilingual Encoders (XTREME) benchmark is a benchmark for the evaluation of
the cross-lingual generalization ability of pre-trained multilingual models. It covers 40 typologically diverse languages
(spanning 12 language families) and includes nine tasks that collectively requi... | @article{hu2020xtreme,
author = {Junjie Hu and Sebastian Ruder and Aditya Siddhant and Graham Neubig and Orhan Firat and Melvin Johnson},
title = {XTREME: A Massively Multilingual Multi-task Benchmark for Evaluating Cross-lingual Generalization},
journal = {CoRR},
volume = {abs/2003.... | 59 | 41,811 | 2022-03-02T23:29:22 | ---
annotations_creators:
- found
language_creators:
- found
language:
- af
- ar
- bg
- bn
- de
- el
- en
- es
- et
- eu
- fa
- fi
- fr
- he
- hi
- hu
- id
- it
- ja
- jv
- ka
- kk
- ko
- ml
- mr
- ms
- my
- nl
- pt
- ru
- sw
- ta
- te
- th
- tl
- tr
- ur
- vi
- yo
- zh
license:
- apache-2.0
- cc-by-4.0
- cc-by-2.0
- c... | 104,799 | [
[
-0.051544189453125,
-0.034393310546875,
0.01082611083984375,
-0.0003921985626220703,
-0.0003731250762939453,
0.004047393798828125,
-0.019989013671875,
-0.0343017578125,
0.044891357421875,
0.035491943359375,
-0.060638427734375,
-0.058868408203125,
-0.041015625,
... |
race | 2023-04-05T13:37:29.000Z | [
"task_categories:multiple-choice",
"task_ids:multiple-choice-qa",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:en",
"license:other",
"arxiv:1704.04683",
"region:us"
] | null | RACE is a large-scale reading comprehension dataset with more than 28,000 passages and nearly 100,000 questions. The
dataset is collected from English examinations in China, which are designed for middle school and high school students.
The dataset can serve as the training and test sets for machine comprehension. | @article{lai2017large,
title={RACE: Large-scale ReAding Comprehension Dataset From Examinations},
author={Lai, Guokun and Xie, Qizhe and Liu, Hanxiao and Yang, Yiming and Hovy, Eduard},
journal={arXiv preprint arXiv:1704.04683},
year={2017}
} | 25 | 41,766 | 2022-03-02T23:29:22 | ---
annotations_creators:
- expert-generated
language:
- en
language_creators:
- found
license:
- other
multilinguality:
- monolingual
pretty_name: RACE
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- multiple-choice
task_ids:
- multiple-choice-qa
paperswithcode_id: race
dataset_info:
- con... | 10,546 | [
[
-0.043121337890625,
-0.057952880859375,
0.02423095703125,
0.0013132095336914062,
-0.017852783203125,
0.005306243896484375,
-0.020050048828125,
-0.03570556640625,
0.039642333984375,
0.03204345703125,
-0.05255126953125,
-0.064453125,
-0.0308990478515625,
0.009... |
Anthropic/hh-rlhf | 2023-05-26T18:47:34.000Z | [
"license:mit",
"human-feedback",
"arxiv:2204.05862",
"region:us"
] | Anthropic | null | null | 713 | 41,719 | 2022-12-08T20:11:33 | ---
license: mit
tags:
- human-feedback
---
# Dataset Card for HH-RLHF
## Dataset Summary
This repository provides access to two different kinds of data:
1. Human preference data about helpfulness and harmlessness from [Training a Helpful and Harmless Assistant with Reinforcement Learning from Human Feedback](htt... | 5,771 | [
[
-0.02215576171875,
-0.0487060546875,
0.007266998291015625,
0.0032520294189453125,
0.0029163360595703125,
-0.0055389404296875,
-0.00803375244140625,
-0.05224609375,
0.0171966552734375,
0.048065185546875,
-0.050262451171875,
-0.0396728515625,
-0.0303955078125,
... |
web_questions | 2023-04-05T13:43:02.000Z | [
"task_categories:question-answering",
"task_ids:open-domain-qa",
"annotations_creators:crowdsourced",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:en",
"license:unknown",
"region:us"
] | null | This dataset consists of 6,642 question/answer pairs.
The questions are supposed to be answerable by Freebase, a large knowledge graph.
The questions are mostly centered around a single named entity.
The questions are popular ones asked on the web (at least in 2013). | @inproceedings{berant-etal-2013-semantic,
title = "Semantic Parsing on {F}reebase from Question-Answer Pairs",
author = "Berant, Jonathan and
Chou, Andrew and
Frostig, Roy and
Liang, Percy",
booktitle = "Proceedings of the 2013 Conference on Empirical Methods in Natural Language Process... | 13 | 41,149 | 2022-03-02T23:29:22 | ---
annotations_creators:
- crowdsourced
language:
- en
language_creators:
- found
license:
- unknown
multilinguality:
- monolingual
pretty_name: WebQuestions
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- question-answering
task_ids:
- open-domain-qa
paperswithcode_id: webquestions
dataset_... | 6,488 | [
[
-0.05462646484375,
-0.05023193359375,
0.01474761962890625,
0.01129913330078125,
-0.0172271728515625,
-0.01117706298828125,
-0.0263824462890625,
-0.031707763671875,
0.048126220703125,
0.038330078125,
-0.0609130859375,
-0.06793212890625,
-0.039794921875,
0.009... |
EleutherAI/persona | 2023-08-29T07:53:23.000Z | [
"region:us"
] | EleutherAI | This new dataset is designed to solve this great NLP task and is crafted with a lot of care. | @misc{perez2022discovering,
doi = {10.48550/ARXIV.2212.09251},
url = {https://arxiv.org/abs/2212.09251},
author = {Perez, Ethan and Ringer, Sam and Lukošiūtė, Kamilė and Nguyen, Karina and Chen, Edwin and Heiner, Scott and Pettit, Craig and Olsson, Catherine and Kundu, Sandipan and Kadavath, Saurav and Jones, And... | 1 | 39,751 | 2023-08-29T06:59:41 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
universal_dependencies | 2023-06-01T14:59:56.000Z | [
"task_categories:token-classification",
"task_ids:parsing",
"annotations_creators:expert-generated",
"language_creators:crowdsourced",
"multilinguality:multilingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:af",
"language:aii",
"language:ajp",
"language:akk",
"languag... | null | Universal Dependencies is a project that seeks to develop cross-linguistically consistent treebank annotation for many languages, with the goal of facilitating multilingual parser development, cross-lingual learning, and parsing research from a language typology perspective. The annotation scheme is based on (universal... | null | 14 | 39,692 | 2022-03-02T23:29:22 | ---
annotations_creators:
- expert-generated
language_creators:
- crowdsourced
language:
- af
- aii
- ajp
- akk
- am
- apu
- aqz
- ar
- be
- bg
- bho
- bm
- br
- bxr
- ca
- ckt
- cop
- cs
- cu
- cy
- da
- de
- el
- en
- es
- et
- eu
- fa
- fi
- fo
- fr
- fro
- ga
- gd
- gl
- got
- grc
- gsw
- gun
- gv
- he
- hi
- hr
- ... | 191,193 | [
[
-0.0318603515625,
-0.0234222412109375,
0.012786865234375,
0.0236968994140625,
-0.011077880859375,
0.00907135009765625,
-0.0135498046875,
-0.047119140625,
0.03515625,
0.059539794921875,
-0.053558349609375,
-0.07733154296875,
-0.050262451171875,
0.007942199707... |
shunk031/JGLUE | 2023-09-26T12:41:51.000Z | [
"task_categories:multiple-choice",
"task_categories:question-answering",
"task_categories:sentence-similarity",
"task_categories:text-classification",
"task_ids:multiple-choice-qa",
"task_ids:open-domain-qa",
"task_ids:multi-class-classification",
"task_ids:sentiment-classification",
"annotations_cr... | shunk031 | JGLUE, Japanese General Language Understanding Evaluation, is built to measure the general NLU ability in Japanese. JGLUE has been constructed from scratch without translation. We hope that JGLUE will facilitate NLU research in Japanese.\ | @inproceedings{kurihara-lrec-2022-jglue,
title={JGLUE: Japanese general language understanding evaluation},
author={Kurihara, Kentaro and Kawahara, Daisuke and Shibata, Tomohide},
booktitle={Proceedings of the Thirteenth Language Resources and Evaluation Conference},
pages={2957--2966},
year={2022},
url={ht... | 33 | 38,729 | 2023-02-27T08:31:09 | ---
annotations_creators:
- crowdsourced
language:
- ja
language_creators:
- crowdsourced
- found
license:
- cc-by-4.0
multilinguality:
- monolingual
pretty_name: JGLUE
size_categories: []
source_datasets:
- original
tags:
- MARC
- CoLA
- STS
- NLI
- SQuAD
- CommonsenseQA
task_categories:
- multiple-choice
- question-a... | 37,626 | [
[
-0.03216552734375,
-0.062744140625,
0.0192413330078125,
0.0031261444091796875,
-0.0029506683349609375,
-0.0023479461669921875,
-0.0275421142578125,
-0.031402587890625,
0.0279083251953125,
0.041046142578125,
-0.041778564453125,
-0.05828857421875,
-0.0376892089843... |
tasksource/bigbench | 2023-05-11T14:08:10.000Z | [
"task_categories:multiple-choice",
"task_categories:question-answering",
"task_categories:text-classification",
"task_categories:text-generation",
"task_categories:zero-shot-classification",
"task_ids:multiple-choice-qa",
"task_ids:extractive-qa",
"task_ids:open-domain-qa",
"task_ids:closed-domain-q... | tasksource | null | null | 36 | 38,306 | 2023-01-31T10:44:51 | ---
annotations_creators:
- crowdsourced
- expert-generated
- machine-generated
language_creators:
- crowdsourced
- expert-generated
- machine-generated
- other
language:
- en
license:
- apache-2.0
multilinguality:
- multilingual
- monolingual
pretty_name: bigbench
size_categories:
- unknown
source_datasets:
- original... | 1,620 | [
[
-0.038665771484375,
-0.0418701171875,
0.03802490234375,
0.046844482421875,
-0.003536224365234375,
-0.00749969482421875,
-0.034027099609375,
-0.027618408203125,
0.0197601318359375,
0.0035037994384765625,
-0.0264739990234375,
-0.02081298828125,
-0.0298614501953125... |
lambdalabs/pokemon-blip-captions | 2022-09-21T10:38:05.000Z | [
"task_categories:text-to-image",
"annotations_creators:machine-generated",
"language_creators:other",
"multilinguality:monolingual",
"size_categories:n<1K",
"source_datasets:huggan/few-shot-pokemon",
"language:en",
"license:cc-by-nc-sa-4.0",
"region:us"
] | lambdalabs | null | null | 189 | 37,645 | 2022-09-14T12:04:50 | ---
license: cc-by-nc-sa-4.0
annotations_creators:
- machine-generated
language:
- en
language_creators:
- other
multilinguality:
- monolingual
pretty_name: 'Pokémon BLIP captions'
size_categories:
- n<1K
source_datasets:
- huggan/few-shot-pokemon
tags: []
task_categories:
- text-to-image
task_ids: []
---
# Dataset Ca... | 1,799 | [
[
-0.0231781005859375,
-0.01209259033203125,
0.006725311279296875,
0.027679443359375,
-0.0279083251953125,
-0.006816864013671875,
-0.012298583984375,
-0.03546142578125,
0.034393310546875,
0.031402587890625,
-0.040679931640625,
-0.019317626953125,
-0.03997802734375... |
Helsinki-NLP/tatoeba_mt | 2022-10-21T15:50:25.000Z | [
"annotations_creators:no-annotation",
"language_creators:crowdsourced",
"multilinguality:translation",
"size_categories:unknown",
"source_datasets:original",
"language:af",
"language:ar",
"language:az",
"language:be",
"language:bg",
"language:bn",
"language:br",
"language:bs",
"language:ca... | Helsinki-NLP | The Tatoeba Translation Challenge is a multilingual data set of
machine translation benchmarks derived from user-contributed
translations collected by [Tatoeba.org](https://tatoeba.org/) and
provided as parallel corpus from [OPUS](https://opus.nlpl.eu/). This
dataset includes test and development data sorted by languag... | @inproceedings{tiedemann-2020-tatoeba,
title = "The {T}atoeba {T}ranslation {C}hallenge {--} {R}ealistic Data Sets for Low Resource and Multilingual {MT}",
author = {Tiedemann, J{\"o}rg},
booktitle = "Proceedings of the Fifth Conference on Machine Translation",
month = nov,
year = "2020",
publis... | 39 | 37,478 | 2022-03-02T23:29:22 | ---
annotations_creators:
- no-annotation
language_creators:
- crowdsourced
language:
- af
- ar
- az
- be
- bg
- bn
- br
- bs
- ca
- ch
- cs
- cv
- cy
- da
- de
- el
- en
- eo
- es
- et
- eu
- fa
- fi
- fo
- fr
- fy
- ga
- gd
- gl
- gn
- he
- hi
- hr
- hu
- hy
- ia
- id
- ie
- io
- is
- it
- ja
- jv
- ka
- kk
- km
- ko... | 12,087 | [
[
-0.03271484375,
-0.047637939453125,
0.006984710693359375,
0.03656005859375,
-0.028289794921875,
0.006298065185546875,
-0.036407470703125,
-0.042877197265625,
0.03814697265625,
0.0263519287109375,
-0.0391845703125,
-0.06292724609375,
-0.047393798828125,
0.035... |
wmt16 | 2023-04-05T13:43:53.000Z | [
"task_categories:translation",
"annotations_creators:no-annotation",
"language_creators:found",
"multilinguality:translation",
"size_categories:10M<n<100M",
"source_datasets:extended|europarl_bilingual",
"source_datasets:extended|news_commentary",
"source_datasets:extended|setimes",
"source_datasets... | null | null | @InProceedings{bojar-EtAl:2016:WMT1,
author = {Bojar, Ond\v{r}ej and Chatterjee, Rajen and Federmann, Christian and Graham, Yvette and Haddow, Barry and Huck, Matthias and Jimeno Yepes, Antonio and Koehn, Philipp and Logacheva, Varvara and Monz, Christof and Negri, Matteo and Neveol, Aurelie ... | 12 | 36,576 | 2022-03-02T23:29:22 | ---
annotations_creators:
- no-annotation
language_creators:
- found
language:
- cs
- de
- en
- fi
- ro
- ru
- tr
license:
- unknown
multilinguality:
- translation
size_categories:
- 10M<n<100M
source_datasets:
- extended|europarl_bilingual
- extended|news_commentary
- extended|setimes
- extended|un_multi
task_categori... | 9,889 | [
[
-0.04302978515625,
-0.037200927734375,
0.013519287109375,
0.00879669189453125,
-0.0281524658203125,
0.0035552978515625,
-0.035491943359375,
-0.037139892578125,
0.04632568359375,
0.022491455078125,
-0.058380126953125,
-0.06829833984375,
-0.045806884765625,
0.... |
mozilla-foundation/common_voice_11_0 | 2023-06-26T15:23:38.000Z | [
"task_categories:automatic-speech-recognition",
"annotations_creators:crowdsourced",
"language_creators:crowdsourced",
"multilinguality:multilingual",
"source_datasets:extended|common_voice",
"license:cc0-1.0",
"arxiv:1912.06670",
"region:us"
] | mozilla-foundation | null | @inproceedings{commonvoice:2020,
author = {Ardila, R. and Branson, M. and Davis, K. and Henretty, M. and Kohler, M. and Meyer, J. and Morais, R. and Saunders, L. and Tyers, F. M. and Weber, G.},
title = {Common Voice: A Massively-Multilingual Speech Corpus},
booktitle = {Proceedings of the 12th Conference on Lang... | 109 | 35,924 | 2022-10-12T09:20:16 | ---
annotations_creators:
- crowdsourced
language_creators:
- crowdsourced
license:
- cc0-1.0
multilinguality:
- multilingual
size_categories:
ab:
- 10K<n<100K
ar:
- 100K<n<1M
as:
- 1K<n<10K
ast:
- n<1K
az:
- n<1K
ba:
- 100K<n<1M
bas:
- 1K<n<10K
be:
- 100K<n<1M
bg:
- 1K<n<10K
bn:
... | 14,413 | [
[
-0.037750244140625,
-0.042816162109375,
-0.0062713623046875,
0.023284912109375,
-0.0096588134765625,
0.000152587890625,
-0.04364013671875,
-0.0197296142578125,
0.0300445556640625,
0.0279998779296875,
-0.041473388671875,
-0.0604248046875,
-0.032470703125,
0.0... |
timdettmers/openassistant-guanaco | 2023-05-27T22:40:40.000Z | [
"region:us"
] | timdettmers | null | null | 244 | 35,731 | 2023-05-27T21:56:25 | This dataset is a subset of the Open Assistant dataset, which you can find here: https://huggingface.co/datasets/OpenAssistant/oasst1/tree/main
This subset of the data only contains the highest-rated paths in the conversation tree, with a total of 9,846 samples.
This dataset was used to train Guanaco with QLoRA.
For... | 395 | [
[
-0.01934814453125,
-0.039215087890625,
0.021820068359375,
0.0085906982421875,
-0.005558013916015625,
-0.0062408447265625,
0.0078125,
-0.03729248046875,
0.0233612060546875,
0.037811279296875,
-0.06939697265625,
-0.05303955078125,
-0.032623291015625,
-0.012321... |
HuggingFaceM4/tmp-pmd-synthetic-testing | 2022-10-05T17:16:27.000Z | [
"region:us"
] | HuggingFaceM4 | null | null | 1 | 35,071 | 2022-10-05T17:15:40 | Entry not found | 15 | [
[
-0.021392822265625,
-0.01494598388671875,
0.05718994140625,
0.028839111328125,
-0.0350341796875,
0.046539306640625,
0.052490234375,
0.00507354736328125,
0.051361083984375,
0.01702880859375,
-0.052093505859375,
-0.01494598388671875,
-0.06036376953125,
0.03790... |
lhoestq/demo1 | 2021-11-08T14:36:41.000Z | [
"region:us"
] | lhoestq | null | null | 1 | 34,341 | 2022-03-02T23:29:22 | ---
type: demo
---
# Dataset Card for Demo1
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#... | 2,665 | [
[
-0.027191162109375,
-0.036102294921875,
0.00418853759765625,
0.01515960693359375,
-0.012298583984375,
0.01006317138671875,
-0.0256805419921875,
-0.0251617431640625,
0.039703369140625,
0.038177490234375,
-0.06201171875,
-0.07623291015625,
-0.03729248046875,
0... |
trivia_qa | 2023-06-09T15:34:16.000Z | [
"task_categories:question-answering",
"task_categories:text2text-generation",
"task_ids:open-domain-qa",
"task_ids:open-domain-abstractive-qa",
"task_ids:extractive-qa",
"task_ids:abstractive-qa",
"annotations_creators:crowdsourced",
"language_creators:machine-generated",
"multilinguality:monolingua... | null | TriviaqQA is a reading comprehension dataset containing over 650K
question-answer-evidence triples. TriviaqQA includes 95K question-answer
pairs authored by trivia enthusiasts and independently gathered evidence
documents, six per question on average, that provide high quality distant
supervision for answering the ques... | @article{2017arXivtriviaqa,
author = {{Joshi}, Mandar and {Choi}, Eunsol and {Weld},
Daniel and {Zettlemoyer}, Luke},
title = "{triviaqa: A Large Scale Distantly Supervised Challenge Dataset for Reading Comprehension}",
journal = {arXiv e-prints},
year = 2017,
ei... | 26 | 33,951 | 2022-03-02T23:29:22 | ---
annotations_creators:
- crowdsourced
language_creators:
- machine-generated
language:
- en
license:
- unknown
multilinguality:
- monolingual
paperswithcode_id: triviaqa
pretty_name: TriviaQA
size_categories:
- 10K<n<100K
- 100K<n<1M
source_datasets:
- original
task_categories:
- question-answering
- text2text-gener... | 25,097 | [
[
-0.054901123046875,
-0.0496826171875,
0.02398681640625,
0.005817413330078125,
-0.004093170166015625,
0.0038394927978515625,
-0.018310546875,
-0.01499176025390625,
0.04461669921875,
0.03631591796875,
-0.055877685546875,
-0.07171630859375,
-0.02984619140625,
0... |
nuprl/MultiPL-E | 2023-06-16T00:08:57.000Z | [
"annotations_creators:machine-generated",
"language_creators:machine-generated",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"source_datasets:extended|openai_humaneval",
"source_datasets:extended|mbpp",
"language:en",
... | nuprl | MultiPL-E is a dataset for evaluating large language models for code generation that supports 18 programming languages. It takes the OpenAI "HumanEval" and the MBPP Python benchmarks and uses little compilers to translate them to other languages. It is easy to add support for new languages and benchmarks. | @article{cassano:multipl-e,
author = {Cassano, Federico and Gouwar, John and Nguyen, Daniel and Nguyen, Sydney and
Phipps-Costin, Luna and Pinckney, Donald and Yee, Ming-Ho and Zi, Yangtian and
Anderson, Carolyn Jane and Feldman, Molly Q and Guha, Arjun and
Greenberg, Michael and J... | 14 | 33,334 | 2022-09-28T19:20:07 | ---
annotations_creators:
- machine-generated
language:
- en
language_creators:
- machine-generated
- expert-generated
license:
- mit
multilinguality:
- monolingual
pretty_name: MultiPLE-E
size_categories:
- 1K<n<10K
source_datasets:
- original
- extended|openai_humaneval
- extended|mbpp
tags: []
task_categories: []
ta... | 99,586 | [
[
-0.037567138671875,
-0.05633544921875,
0.0157928466796875,
0.0306243896484375,
-0.00010782480239868164,
0.004138946533203125,
-0.031768798828125,
-0.01412200927734375,
0.0052947998046875,
0.00621795654296875,
-0.051788330078125,
-0.0217132568359375,
-0.027450561... |
sst2 | 2023-05-02T12:53:26.000Z | [
"task_categories:text-classification",
"task_ids:sentiment-classification",
"annotations_creators:crowdsourced",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:en",
"license:unknown",
"region:us"
] | null | The Stanford Sentiment Treebank consists of sentences from movie reviews and
human annotations of their sentiment. The task is to predict the sentiment of a
given sentence. We use the two-way (positive/negative) class split, and use only
sentence-level labels. | @inproceedings{socher2013recursive,
title={Recursive deep models for semantic compositionality over a sentiment treebank},
author={Socher, Richard and Perelygin, Alex and Wu, Jean and Chuang, Jason and Manning, Christopher D and Ng, Andrew and Potts, Christopher},
booktitle={Proceedings of the 2013 conference on ... | 31 | 33,058 | 2022-06-13T14:01:47 | ---
annotations_creators:
- crowdsourced
language_creators:
- found
language:
- en
license:
- unknown
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- sentiment-classification
paperswithcode_id: sst
pretty_name: Stanford Sentimen... | 5,100 | [
[
-0.037322998046875,
-0.0587158203125,
0.01666259765625,
0.01073455810546875,
-0.025634765625,
0.01556396484375,
-0.021453857421875,
-0.0177764892578125,
0.02783203125,
0.040924072265625,
-0.066650390625,
-0.07672119140625,
-0.05267333984375,
0.01119232177734... |
haonan-li/cmmlu | 2023-07-13T10:19:29.000Z | [
"task_categories:multiple-choice",
"task_categories:question-answering",
"size_categories:10K<n<100K",
"language:zh",
"license:cc-by-nc-4.0",
"chinese",
"llm",
"evaluation",
"arxiv:2306.09212",
"region:us"
] | haonan-li | CMMLU is a comprehensive Chinese assessment suite specifically designed to evaluate the advanced knowledge and reasoning abilities of LLMs within the Chinese language and cultural context. | @misc{li2023cmmlu,
title={CMMLU: Measuring massive multitask language understanding in Chinese},
author={Haonan Li and Yixuan Zhang and Fajri Koto and Yifei Yang and Hai Zhao and Yeyun Gong and Nan Duan and Timothy Baldwin},
year={2023},
eprint={2306.09212},
archivePrefix={arXiv},
pr... | 31 | 32,500 | 2023-06-25T16:37:44 | ---
license: cc-by-nc-4.0
task_categories:
- multiple-choice
- question-answering
language:
- zh
tags:
- chinese
- llm
- evaluation
pretty_name: CMMLU
size_categories:
- 10K<n<100K
---
# CMMLU: Measuring massive multitask language understanding in Chinese
- **Homepage:** [https://github.com/haonan-li/CMMLU](https://g... | 4,449 | [
[
-0.024322509765625,
-0.0565185546875,
0.0345458984375,
0.0163726806640625,
-0.0144195556640625,
-0.00628662109375,
-0.03863525390625,
-0.00373077392578125,
0.0113067626953125,
0.01959228515625,
-0.033935546875,
-0.05224609375,
-0.040924072265625,
0.009086608... |
ydshieh/coco_dataset_script | 2022-02-14T17:32:43.000Z | [
"region:us"
] | ydshieh | COCO is a large-scale object detection, segmentation, and captioning dataset. | @article{DBLP:journals/corr/LinMBHPRDZ14,
author = {Tsung{-}Yi Lin and
Michael Maire and
Serge J. Belongie and
Lubomir D. Bourdev and
Ross B. Girshick and
James Hays and
Pietro Perona and
Deva Ramanan and
... | 7 | 29,528 | 2022-03-02T23:29:22 | ## Usage
For testing purpose, you can use the hosted dummy dataset (`dummy_data`) as follows:
```
import datasets
ds = datasets.load_dataset("ydshieh/coco_dataset_script", "2017", data_dir="./dummy_data/")
```
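A loaded split then behaves like a sequence of dict-like records. As a toy, self-contained stand-in (the field names below are invented for illustration and are not the actual COCO schema):

```python
# Toy stand-in for a loaded captioning split: a list of dict records.
# Field names are illustrative only, NOT the real COCO schema.
toy_split = [
    {"image_id": 1, "caption": "a dog riding a skateboard"},
    {"image_id": 2, "caption": "two cats sleeping on a couch"},
]
captions = [ex["caption"] for ex in toy_split]
num_examples = len(toy_split)  # 2 records in this toy split
```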
For using the COCO dataset (2017), you need to download it manually first:
```
wget http://images.cocodatas... | 781 | [
[
-0.055755615234375,
-0.0294036865234375,
-0.0195770263671875,
0.04241943359375,
-0.034210205078125,
0.01369476318359375,
-0.0020904541015625,
-0.01629638671875,
0.037017822265625,
0.0295562744140625,
-0.06146240234375,
-0.025787353515625,
-0.019378662109375,
... |
facebook/belebele | 2023-09-15T01:12:38.000Z | [
"task_categories:question-answering",
"task_categories:zero-shot-classification",
"task_categories:text-classification",
"task_categories:multiple-choice",
"size_categories:100K<n<1M",
"language:af",
"language:am",
"language:ar",
"language:az",
"language:as",
"language:bm",
"language:bn",
"l... | facebook | 30 | 29,475 | 2023-09-01T18:27:13 | ---
configs:
- config_name: default
data_files:
- split: eval
path: "data/*.jsonl"
license: cc-by-sa-4.0
task_categories:
- question-answering
- zero-shot-classification
- text-classification
- multiple-choice
language:
- af
- am
- ar
- az
- as
- bm
- bn
- bo
- bg
- ca
- cs
- ku
- da
- de
- el
- en
- es
- et
- ... | 14,010 | [
[
-0.038177490234375,
-0.062286376953125,
0.009979248046875,
0.0166015625,
-0.00417327880859375,
-0.01149749755859375,
-0.025146484375,
-0.04193115234375,
-0.0059661865234375,
0.0294036865234375,
-0.04974365234375,
-0.0382080078125,
-0.02606201171875,
0.045288... | ||
MMInstruction/M3IT | 2023-10-29T12:00:35.000Z | [
"task_categories:image-to-text",
"task_categories:image-classification",
"size_categories:1M<n<10M",
"language:en",
"language:zh",
"license:other",
"region:us"
] | MMInstruction | Multi-modal Bi-lingual Instruction Dataset for Vision Language Models | null | 56 | 28,812 | 2023-05-04T01:43:31 | ---
license: other
task_categories:
- image-to-text
- image-classification
size_categories:
- 1M<n<10M
language:
- en
- zh
---
# Dataset Card for M3IT
Project Page: [M3IT](https://m3-it.github.io/)
## Dataset Description
- **Homepage: https://huggingface.co/datasets/MMInstruction/M3IT**
- **Repository: https://hu... | 18,712 | [
[
-0.03692626953125,
-0.039215087890625,
0.0106048583984375,
0.0230560302734375,
-0.00447845458984375,
-0.00823211669921875,
0.0005283355712890625,
-0.01934814453125,
0.0248870849609375,
0.022918701171875,
-0.04339599609375,
-0.044403076171875,
-0.04327392578125,
... |
mnist | 2023-04-18T08:44:09.000Z | [
"task_categories:image-classification",
"task_ids:multi-class-image-classification",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:extended|other-nist",
"language:en",
"license:mit",
"region:us"
] | null | The MNIST dataset consists of 70,000 28x28 black-and-white images in 10 classes (one for each digit), with 7,000
images per class. There are 60,000 training images and 10,000 test images. | @article{lecun2010mnist,
title={MNIST handwritten digit database},
author={LeCun, Yann and Cortes, Corinna and Burges, CJ},
journal={ATT Labs [Online]. Available: http://yann.lecun.com/exdb/mnist},
volume={2},
year={2010}
} | 44 | 28,262 | 2022-03-02T23:29:22 | ---
annotations_creators:
- expert-generated
language_creators:
- found
language:
- en
license:
- mit
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- extended|other-nist
task_categories:
- image-classification
task_ids:
- multi-class-image-classification
paperswithcode_id: mnist
pretty_n... | 6,825 | [
[
-0.036224365234375,
-0.0270843505859375,
0.00653839111328125,
0.0048980712890625,
-0.03509521484375,
0.005199432373046875,
-0.005657196044921875,
-0.031890869140625,
0.04425048828125,
0.048828125,
-0.036468505859375,
-0.058349609375,
-0.051025390625,
0.01416... |
ag_news | 2023-04-05T08:34:57.000Z | [
"task_categories:text-classification",
"task_ids:topic-classification",
"annotations_creators:found",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:original",
"language:en",
"license:unknown",
"region:us"
] | null | AG is a collection of more than 1 million news articles. News articles have been
gathered from more than 2000 news sources by ComeToMyHead in more than 1 year of
activity. ComeToMyHead is an academic news search engine which has been running
since July, 2004. The dataset is provided by the academic community for researc... | @inproceedings{Zhang2015CharacterlevelCN,
title={Character-level Convolutional Networks for Text Classification},
author={Xiang Zhang and Junbo Jake Zhao and Yann LeCun},
booktitle={NIPS},
year={2015}
} | 74 | 27,309 | 2022-03-02T23:29:22 | ---
annotations_creators:
- found
language_creators:
- found
language:
- en
license:
- unknown
multilinguality:
- monolingual
size_categories:
- 100K<n<1M
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- topic-classification
paperswithcode_id: ag-news
pretty_name: AG’s News Corpus
dataset_... | 7,944 | [
[
-0.04705810546875,
-0.034454345703125,
0.00417327880859375,
0.007747650146484375,
-0.0178375244140625,
0.0010442733764648438,
-0.0189971923828125,
-0.034881591796875,
0.044189453125,
0.033111572265625,
-0.046051025390625,
-0.06829833984375,
-0.049835205078125,
... |
commonsense_qa | 2023-04-05T10:02:16.000Z | [
"task_categories:question-answering",
"task_ids:open-domain-qa",
"annotations_creators:crowdsourced",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:en",
"license:mit",
"arxiv:1811.00937",
"region:us"
] | null | CommonsenseQA is a new multiple-choice question answering dataset that requires different types of commonsense knowledge
to predict the correct answers. It contains 12,102 questions with one correct answer and four distractor answers.
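A common way to score such a 5-way multiple-choice benchmark is plain accuracy; a minimal sketch (the predictions and gold labels below are invented for illustration):

```python
# Minimal accuracy scorer for a multiple-choice benchmark with one
# correct answer and four distractors per question (5-way choice).
def accuracy(predictions, golds):
    correct = sum(p == g for p, g in zip(predictions, golds))
    return correct / len(golds)

# Invented example: 3 of 4 predictions match the gold answer -> 0.75
preds = ["A", "C", "B", "E"]
golds = ["A", "B", "B", "E"]
score = accuracy(preds, golds)
```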
The dataset is provided in two major training/validation/testing set splits: "Random... | @inproceedings{talmor-etal-2019-commonsenseqa,
title = "{C}ommonsense{QA}: A Question Answering Challenge Targeting Commonsense Knowledge",
author = "Talmor, Alon and
Herzig, Jonathan and
Lourie, Nicholas and
Berant, Jonathan",
booktitle = "Proceedings of the 2019 Conference of the Nort... | 26 | 25,901 | 2022-03-02T23:29:22 | ---
annotations_creators:
- crowdsourced
language_creators:
- crowdsourced
language:
- en
license:
- mit
multilinguality:
- monolingual
pretty_name: CommonsenseQA
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- question-answering
task_ids:
- open-domain-qa
paperswithcode_id: commonsenseqa
dat... | 7,221 | [
[
-0.04547119140625,
-0.0521240234375,
0.013153076171875,
-0.0011434555053710938,
-0.006671905517578125,
-0.007640838623046875,
-0.0214385986328125,
-0.021392822265625,
0.04010009765625,
0.031402587890625,
-0.051788330078125,
-0.059722900390625,
-0.028961181640625... |
databricks/databricks-dolly-15k | 2023-06-30T18:34:13.000Z | [
"task_categories:question-answering",
"task_categories:summarization",
"size_categories:10K<n<100K",
"language:en",
"license:cc-by-sa-3.0",
"arxiv:2203.02155",
"region:us"
] | databricks | null | null | 414 | 25,604 | 2023-04-11T16:43:13 | ---
license: cc-by-sa-3.0
task_categories:
- question-answering
- summarization
language:
- en
size_categories:
- 10K<n<100K
---
# Summary
`databricks-dolly-15k` is an open source dataset of instruction-following records generated by thousands of Databricks employees in several
of the behavioral categories outlined in... | 8,196 | [
[
-0.0285797119140625,
-0.08514404296875,
0.01318359375,
0.0166778564453125,
-0.007659912109375,
-0.0095062255859375,
-0.0191802978515625,
-0.0116119384765625,
-0.00012254714965820312,
0.040374755859375,
-0.051361083984375,
-0.05072021484375,
-0.0202789306640625,
... |
Matthijs/cmu-arctic-xvectors | 2023-02-07T14:04:48.000Z | [
"task_categories:text-to-speech",
"task_categories:audio-to-audio",
"license:mit",
"region:us"
] | Matthijs | null | null | 21 | 25,073 | 2023-02-07T12:39:22 | ---
pretty_name: CMU ARCTIC X-Vectors
task_categories:
- text-to-speech
- audio-to-audio
license: mit
---
# Speaker embeddings extracted from CMU ARCTIC
There is one `.npy` file for each utterance in the dataset, 7931 files in total. The speaker embeddings are 512-element X-vectors.
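As a hedged sketch of how two such 512-element x-vectors might be compared (the vectors here are fabricated stand-ins, not real embeddings from the dataset; real ones would be loaded from the `.npy` files, e.g. with `numpy.load`):

```python
import math

# Fabricated stand-ins for two 512-element speaker x-vectors; real
# embeddings would come from the dataset's .npy files.
DIM = 512
xvec_a = [math.sin(i) for i in range(DIM)]
xvec_b = [math.cos(i) for i in range(DIM)]

def cosine_similarity(u, v):
    # u.v / (|u| * |v|): 1.0 for identical direction, ~0 for unrelated
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

same = cosine_similarity(xvec_a, xvec_a)  # identical vectors -> 1.0
```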
The [CMU ARCTIC](http://www.festv... | 1,012 | [
[
-0.05462646484375,
-0.0296173095703125,
0.033355712890625,
0.004974365234375,
-0.022216796875,
0.0184478759765625,
-0.032196044921875,
-0.0111541748046875,
0.022186279296875,
0.028228759765625,
-0.047760009765625,
-0.06658935546875,
-0.03961181640625,
0.0113... |
wikisql | 2023-04-05T13:43:31.000Z | [
"task_categories:text2text-generation",
"annotations_creators:crowdsourced",
"language_creators:found",
"language_creators:machine-generated",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:en",
"license:unknown",
"text-to-sql",
"arxiv:1709.001... | null | A large crowd-sourced dataset for developing natural language interfaces for relational databases | @article{zhongSeq2SQL2017,
author = {Victor Zhong and
Caiming Xiong and
Richard Socher},
title = {Seq2SQL: Generating Structured Queries from Natural Language using
Reinforcement Learning},
journal = {CoRR},
volume = {abs/1709.00103},
year = {2017}... | 60 | 25,048 | 2022-03-02T23:29:22 | ---
annotations_creators:
- crowdsourced
language:
- en
language_creators:
- found
- machine-generated
license:
- unknown
multilinguality:
- monolingual
pretty_name: WikiSQL
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- text2text-generation
task_ids: []
paperswithcode_id: wikisql
tags:
- ... | 7,797 | [
[
-0.04827880859375,
-0.0511474609375,
0.00955963134765625,
0.0121307373046875,
-0.0147552490234375,
-0.00385284423828125,
-0.0263671875,
-0.0226593017578125,
0.044158935546875,
0.0428466796875,
-0.057830810546875,
-0.06646728515625,
-0.028839111328125,
0.0169... |
social_i_qa | 2023-04-05T13:40:21.000Z | [
"language:en",
"region:us"
] | null | We introduce Social IQa: Social Interaction QA, a new question-answering benchmark for testing social commonsense intelligence. Contrary to many prior benchmarks that focus on physical or taxonomic knowledge, Social IQa focuses on reasoning about people’s actions and their social implications. For example, given an act... | 4 | 23,604 | 2022-03-02T23:29:22 | ---
language:
- en
paperswithcode_id: social-iqa
pretty_name: Social Interaction QA
dataset_info:
features:
- name: context
dtype: string
- name: question
dtype: string
- name: answerA
dtype: string
- name: answerB
dtype: string
- name: answerC
dtype: string
- name: label
dtype: st... | 6,807 | [
[
-0.038909912109375,
-0.04644775390625,
0.0182037353515625,
0.017242431640625,
-0.01190948486328125,
0.0117645263671875,
-0.008575439453125,
-0.030792236328125,
0.06109619140625,
0.0247039794921875,
-0.055572509765625,
-0.0638427734375,
-0.042816162109375,
0.... | |
xcopa | 2023-04-05T13:45:13.000Z | [
"task_categories:question-answering",
"task_ids:multiple-choice-qa",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:multilingual",
"size_categories:unknown",
"source_datasets:extended|copa",
"language:et",
"language:ht",
"language:id",
"language:i... | null | XCOPA: A Multilingual Dataset for Causal Commonsense Reasoning
The Cross-lingual Choice of Plausible Alternatives dataset is a benchmark to evaluate the ability of machine learning models to transfer commonsense reasoning across
languages. The dataset is the translation and reannotation of the English COPA (Roemmele ... | @article{ponti2020xcopa,
title={{XCOPA: A} Multilingual Dataset for Causal Commonsense Reasoning},
author={Edoardo M. Ponti, Goran Glava\v{s}, Olga Majewska, Qianchu Liu, Ivan Vuli\'{c} and Anna Korhonen},
journal={arXiv preprint},
year={2020},
url={https://ducdauge.github.io/files/xcopa.pdf}
}
@inproceedi... | 6 | 23,386 | 2022-03-02T23:29:22 | ---
annotations_creators:
- expert-generated
language_creators:
- expert-generated
language:
- et
- ht
- id
- it
- qu
- sw
- ta
- th
- tr
- vi
- zh
license:
- cc-by-4.0
multilinguality:
- multilingual
pretty_name: XCOPA
size_categories:
- unknown
source_datasets:
- extended|copa
task_categories:
- question-answering
ta... | 19,040 | [
[
-0.044036865234375,
-0.0386962890625,
0.009765625,
0.00748443603515625,
-0.01433563232421875,
-0.0004906654357910156,
-0.0224151611328125,
-0.0285797119140625,
0.043182373046875,
0.041839599609375,
-0.058258056640625,
-0.060699462890625,
-0.040557861328125,
... |
oscar | 2023-06-01T14:59:59.000Z | [
"task_categories:text-generation",
"task_categories:fill-mask",
"task_ids:language-modeling",
"task_ids:masked-language-modeling",
"annotations_creators:no-annotation",
"language_creators:found",
"multilinguality:multilingual",
"size_categories:100K<n<1M",
"size_categories:100M<n<1B",
"size_catego... | null | The Open Super-large Crawled ALMAnaCH coRpus is a huge multilingual corpus obtained by language classification and filtering of the Common Crawl corpus using the goclassy architecture.\ | @inproceedings{ortiz-suarez-etal-2020-monolingual,
title = "A Monolingual Approach to Contextualized Word Embeddings for Mid-Resource Languages",
author = "Ortiz Su{\'a}rez, Pedro Javier and
Romary, Laurent and
Sagot, Benoit",
booktitle = "Proceedings of the 58th Annual Meeting of the Associat... | 126 | 23,234 | 2022-03-02T23:29:22 | ---
pretty_name: OSCAR
annotations_creators:
- no-annotation
language_creators:
- found
language:
- af
- als
- am
- an
- ar
- arz
- as
- ast
- av
- az
- azb
- ba
- bar
- bcl
- be
- bg
- bh
- bn
- bo
- bpy
- br
- bs
- bxr
- ca
- cbk
- ce
- ceb
- ckb
- cs
- cv
- cy
- da
- de
- diq
- dsb
- dv
- el
- eml
- en
- eo
- es
- e... | 279,164 | [
[
-0.056671142578125,
-0.03594970703125,
0.0133209228515625,
0.00616455078125,
-0.029266357421875,
-0.01392364501953125,
-0.020111083984375,
-0.0276031494140625,
0.049224853515625,
0.036468505859375,
-0.05377197265625,
-0.044189453125,
-0.03753662109375,
0.024... |
EleutherAI/advanced_ai_risk | 2023-10-10T14:47:31.000Z | [
"region:us"
] | EleutherAI | This new dataset is designed to solve this great NLP task and is crafted with a lot of care. | @misc{perez2022discovering,
doi = {10.48550/ARXIV.2212.09251},
url = {https://arxiv.org/abs/2212.09251},
author = {Perez, Ethan and Ringer, Sam and Lukošiūtė, Kamilė and Nguyen, Karina and Chen, Edwin and Heiner, Scott and Pettit, Craig and Olsson, Catherine and Kundu, Sandipan and Kadavath, Saurav and Jones, And... | 1 | 22,865 | 2023-08-29T07:59:32 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
blimp | 2023-04-05T09:41:50.000Z | [
"task_categories:text-classification",
"task_ids:acceptability-classification",
"annotations_creators:crowdsourced",
"language_creators:machine-generated",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:en",
"license:cc-by-4.0",
"region:us"
] | null | BLiMP is a challenge set for evaluating what language models (LMs) know about
major grammatical phenomena in English. BLiMP consists of 67 sub-datasets, each
containing 1000 minimal pairs isolating specific contrasts in syntax,
morphology, or semantics. The data is automatically generated according to
expert-crafted gr... | @article{warstadt2019blimp,
title={BLiMP: A Benchmark of Linguistic Minimal Pairs for English},
author={Warstadt, Alex and Parrish, Alicia and Liu, Haokun and Mohananey, Anhad and Peng, Wei, and Wang, Sheng-Fu and Bowman, Samuel R},
journal={arXiv preprint arXiv:1912.00582},
year={2019}
} | 30 | 22,781 | 2022-03-02T23:29:22 | ---
annotations_creators:
- crowdsourced
language_creators:
- machine-generated
language:
- en
license: cc-by-4.0
multilinguality:
- monolingual
pretty_name: BLiMP
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- acceptability-classification
paperswithcode_id:... | 49,462 | [
[
-0.037933349609375,
-0.0546875,
-0.0007610321044921875,
0.0207061767578125,
-0.00612640380859375,
-0.0016002655029296875,
-0.044342041015625,
-0.0157318115234375,
0.0285797119140625,
0.0253448486328125,
-0.058502197265625,
-0.06304931640625,
-0.04022216796875,
... |
monology/pile-uncopyrighted | 2023-08-31T03:45:38.000Z | [
"license:other",
"arxiv:2101.00027",
"region:us"
] | monology | null | null | 14 | 22,697 | 2023-08-30T18:47:58 | ---
license: other
---
# Pile Uncopyrighted
In response to [authors demanding that LLMs stop using their works](https://tcrn.ch/3rtpIDn), here's a copy of [The Pile](https://huggingface.co/datasets/monology/pile) with all copyrighted content removed.
Please consider using this dataset to train your future LLMs, to r... | 776 | [
[
-0.0278778076171875,
-0.0240020751953125,
-0.01094818115234375,
0.0182952880859375,
-0.04888916015625,
-0.0084075927734375,
0.0009174346923828125,
-0.03753662109375,
0.03302001953125,
0.097412109375,
-0.037200927734375,
-0.0220794677734375,
-0.060089111328125,
... |
GEM/wiki_lingua | 2023-02-16T09:23:29.000Z | [
"task_categories:summarization",
"annotations_creators:none",
"language_creators:unknown",
"multilinguality:multilingual",
"size_categories:unknown",
"source_datasets:original",
"language:ar",
"language:cs",
"language:de",
"language:en",
"language:es",
"language:fr",
"language:hi",
"langua... | GEM | WikiLingua is a large-scale multilingual dataset for the evaluation of
crosslingual abstractive summarization systems. The dataset includes ~770k
article and summary pairs in 18 languages from WikiHow. The gold-standard
article-summary alignments across languages was done by aligning the images
that are used to describ... | @article{ladhak-wiki-2020,
title = {WikiLingua: A New Benchmark Dataset for Multilingual Abstractive Summarization},
authors = {Faisal Ladhak, Esin Durmus, Claire Cardie and Kathleen McKeown},
journal = {arXiv preprint arXiv:2010.03093},
year = {2020},
url = {https://arxiv.org/abs/2010.03093}
} | 38 | 22,479 | 2022-03-02T23:29:22 | ---
annotations_creators:
- none
language_creators:
- unknown
language:
- ar
- cs
- de
- en
- es
- fr
- hi
- id
- it
- ja
- ko
- nl
- pt
- ru
- th
- tr
- vi
- zh
license:
- cc-by-nc-sa-3.0
multilinguality:
- multilingual
size_categories:
- unknown
source_datasets:
- original
task_categories:
- summarization
task_ids: [... | 17,833 | [
[
-0.0433349609375,
-0.051025390625,
0.00799560546875,
0.007366180419921875,
-0.01483154296875,
-0.00397491455078125,
-0.03509521484375,
-0.035552978515625,
0.0474853515625,
0.0231475830078125,
-0.0372314453125,
-0.0576171875,
-0.040313720703125,
0.02095031738... |
tasksource/mmlu | 2023-03-31T20:44:21.000Z | [
"task_categories:text-classification",
"task_categories:multiple-choice",
"task_categories:question-answering",
"task_ids:multiple-choice-qa",
"task_ids:open-domain-qa",
"task_ids:closed-domain-qa",
"language:en",
"license:apache-2.0",
"multi-task",
"multitask",
"mmlu",
"hendrycks_test",
"re... | tasksource | This is a massive multitask test consisting of multiple-choice questions from various branches of knowledge, covering 57 tasks including elementary mathematics, US history, computer science, law, and more. | @article{hendryckstest2021,
title={Measuring Massive Multitask Language Understanding},
author={Dan Hendrycks and Collin Burns and Steven Basart and Andy Zou and Mantas Mazeika and Dawn Song and Jacob Steinhardt},
journal={Proceedings of the International Conference on Learning Representations (ICLR)}... | 21 | 22,039 | 2023-02-01T10:20:16 | ---
license: apache-2.0
task_categories:
- text-classification
- multiple-choice
- question-answering
task_ids:
- multiple-choice-qa
- open-domain-qa
- closed-domain-qa
language:
- en
tags:
- multi-task
- multitask
- mmlu
- hendrycks_test
pretty_name: mmlu
---
MMLU (`hendrycks_test` on huggingface) without auxiliary t... | 1,061 | [
[
-0.0452880859375,
-0.0548095703125,
0.019805908203125,
0.0282745361328125,
-0.018798828125,
-0.01316070556640625,
-0.03936767578125,
-0.04400634765625,
0.02972412109375,
-0.0032634735107421875,
-0.0653076171875,
0.0007038116455078125,
-0.04583740234375,
0.01... |
stingning/ultrachat | 2023-10-12T05:55:01.000Z | [
"task_categories:conversational",
"task_categories:text-generation",
"size_categories:1M<n<10M",
"language:en",
"license:mit",
"region:us"
] | stingning | null | null | 255 | 22,029 | 2023-04-20T15:15:28 | ---
license: mit
task_categories:
- conversational
- text-generation
language:
- en
size_categories:
- 1M<n<10M
pretty_name: UltraChat
---
# Dataset Card for Dataset Name
## Dataset Description
An open-source, large-scale, and multi-round dialogue data powered by Turbo APIs. In consideration of factors such as safeg... | 3,142 | [
[
-0.0148773193359375,
-0.05609130859375,
0.01305389404296875,
-0.00447845458984375,
-0.012298583984375,
0.02398681640625,
-0.00997161865234375,
-0.029144287109375,
0.0197601318359375,
0.03167724609375,
-0.060455322265625,
-0.0277099609375,
-0.0006623268127441406,... |
beans | 2023-01-25T14:27:13.000Z | [
"task_categories:image-classification",
"task_ids:multi-class-image-classification",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:en",
"license:mit",
"region:us"
] | null | Beans is a dataset of images of beans taken in the field using smartphone
cameras. It consists of 3 classes: 2 disease classes and the healthy class.
Diseases depicted include Angular Leaf Spot and Bean Rust. Data was annotated
by experts from the National Crops Resources Research Institute (NaCRRI) in
Uganda and colle... | @ONLINE {beansdata,
author="Makerere AI Lab",
title="Bean disease dataset",
month="January",
year="2020",
url="https://github.com/AI-Lab-Makerere/ibean/"
} | 17 | 21,320 | 2022-03-02T23:29:22 | ---
annotations_creators:
- expert-generated
language_creators:
- expert-generated
language:
- en
license:
- mit
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- image-classification
task_ids:
- multi-class-image-classification
pretty_name: Beans
dataset_info:
... | 4,747 | [
[
-0.034423828125,
-0.044464111328125,
0.01544189453125,
0.01739501953125,
-0.01372528076171875,
0.006862640380859375,
-0.008575439453125,
-0.0367431640625,
0.03338623046875,
0.026611328125,
-0.034515380859375,
-0.072509765625,
-0.061126708984375,
0.0083007812... |
gia-project/gia-dataset-parquet | 2023-11-02T22:42:50.000Z | [
"task_categories:reinforcement-learning",
"task_categories:text-generation",
"task_categories:question-answering",
"annotations_creators:found",
"annotations_creators:machine-generated",
"source_datasets:conceptual-captions",
"source_datasets:ok-vqa",
"source_datasets:oscar",
"license:apache-2.0",
... | gia-project | null | null | 0 | 20,856 | 2023-08-29T09:03:24 | ---
annotations_creators:
- found
- machine-generated
license: apache-2.0
size_categories:
- {}
source_datasets:
- conceptual-captions
- ok-vqa
- oscar
task_categories:
- reinforcement-learning
- text-generation
- question-answering
pretty_name: GIA-dataset
configs:
- config_name: atari-alien
data_files:
- split: t... | 112,747 | [
[
-0.054931640625,
-0.0237274169921875,
0.01540374755859375,
0.01202392578125,
-0.0128631591796875,
0.0158233642578125,
0.006191253662109375,
-0.0291290283203125,
0.05615234375,
0.0067596435546875,
-0.05902099609375,
-0.0323486328125,
-0.0428466796875,
0.00391... |
graelo/wikipedia | 2023-09-10T06:10:08.000Z | [
"task_categories:text-generation",
"task_categories:fill-mask",
"task_ids:language-modeling",
"task_ids:masked-language-modeling",
"annotations_creators:no-annotation",
"language_creators:crowdsourced",
"multilinguality:multilingual",
"size_categories:n<1K",
"size_categories:1K<n<10K",
"size_categ... | graelo | Wikipedia dataset containing cleaned articles of all languages.
The datasets are built from the Wikipedia dump
(https://dumps.wikimedia.org/) with one split per language. Each example
contains the content of one full Wikipedia article with cleaning to strip
markdown and unwanted sections (references, etc.). | @ONLINE {wikidump,
author = {Wikimedia Foundation},
title = {Wikimedia Downloads},
url = {https://dumps.wikimedia.org}
} | 51 | 20,722 | 2023-06-10T22:40:06 | ---
annotations_creators:
- no-annotation
language_creators:
- crowdsourced
pretty_name: Wikipedia
paperswithcode_id: null
license:
- cc-by-sa-3.0
- gfdl
task_categories:
- text-generation
- fill-mask
task_ids:
- language-modeling
- masked-language-modeling
source_datasets:
- original
multilinguality:
- multilingual
si... | 200,856 | [
[
-0.061187744140625,
-0.035491943359375,
0.00836181640625,
0.0230255126953125,
-0.004840850830078125,
-0.015655517578125,
-0.035858154296875,
-0.016998291015625,
0.031585693359375,
0.047088623046875,
-0.03533935546875,
-0.041107177734375,
-0.0302276611328125,
... |