Dataset Viewer
Auto-converted to Parquet
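Because the viewer auto-converts this dataset to Parquet, the rows can also be read directly with pandas. A minimal sketch, assuming a hypothetical repository path (the card names the dataset but not its namespace, and the shard filename below is a guess):

```python
# Read an auto-converted Parquet shard with pandas.
# Requires `huggingface_hub`, which provides the hf:// filesystem.
import pandas as pd

df = pd.read_parquet(
    "hf://datasets/<namespace>/dataset_cards_with_long_context_embeddins"
    "@refs/convert/parquet/default/train/0000.parquet"  # hypothetical shard path
)
print(df[["id", "likes", "downloads", "card_len"]].head())
```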
| Column | Type | Range |
| --- | --- | --- |
| id | string | lengths 2–115 |
| lastModified | string | lengths 24–24 |
| tags | sequence | |
| author | string | lengths 2–42 |
| description | string | lengths 0–6.67k |
| citation | string | lengths 0–10.7k |
| likes | int64 | 0–3.66k |
| downloads | int64 | 0–8.89M |
| created | timestamp[us] | |
| card | string | lengths 11–977k |
| card_len | int64 | 11–977k |
| embeddings | sequence | |
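The schema above can be confirmed by loading the dataset with the `datasets` library. A minimal sketch, assuming a hypothetical repository id (substitute the real namespace):

```python
# Load the dataset and inspect the schema plus one preview row.
from datasets import load_dataset

ds = load_dataset(
    "<namespace>/dataset_cards_with_long_context_embeddins",  # hypothetical id
    split="train",
)
print(ds.features)  # string, int64, timestamp[us], and sequence columns
row = ds[0]
print(row["id"], row["likes"], row["downloads"], row["card_len"])
```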
id: argilla/databricks-dolly-15k-curated-en
lastModified: 2023-10-02T12:32:53.000Z
tags: [ "language:en", "region:us" ]
author: argilla
description: null
citation: null
likes: 16
downloads: 8,886,568
created: 2023-05-30T09:54:44
card: --- language: - en --- ## Guidelines In this dataset, you will find a collection of records that show a category, an instruction, a context and a response to that instruction. The aim of the project is to correct the instructions, intput and responses to make sure they are of the highest quality and that they match t...
card_len: 3,002
embeddings: [ [ -0.020965576171875, -0.052703857421875, 0.00861358642578125, 0.019439697265625, -0.010833740234375, -0.0167388916015625, 0.002788543701171875, -0.012451171875, 0.0215301513671875, 0.0662841796875, -0.0628662109375, -0.046417236328125, -0.040679931640625, 0.0...

id: truthful_qa
lastModified: 2023-06-09T14:18:13.000Z
tags: [ "task_categories:multiple-choice", "task_categories:text-generation", "task_categories:question-answering", "task_ids:multiple-choice-qa", "task_ids:language-modeling", "task_ids:open-domain-qa", "annotations_creators:expert-generated", "language_creators:expert-generated", "multilinguality:monoling...
author: null
description: TruthfulQA is a benchmark to measure whether a language model is truthful in generating answers to questions. The benchmark comprises 817 questions that span 38 categories, including health, law, finance and politics. Questions are crafted so that some humans would answer falsely due to a false belief or misconception....
citation: @misc{lin2021truthfulqa, title={TruthfulQA: Measuring How Models Mimic Human Falsehoods}, author={Stephanie Lin and Jacob Hilton and Owain Evans}, year={2021}, eprint={2109.07958}, archivePrefix={arXiv}, primaryClass={cs.CL} }
likes: 73
downloads: 3,784,469
created: 2022-06-08T14:44:06
card: --- annotations_creators: - expert-generated language_creators: - expert-generated language: - en license: - apache-2.0 multilinguality: - monolingual pretty_name: TruthfulQA size_categories: - n<1K source_datasets: - original task_categories: - multiple-choice - text-generation - question-answering task_ids: - multipl...
card_len: 9,365
embeddings: [ [ -0.03692626953125, -0.06939697265625, 0.031646728515625, -0.006591796875, 0.0038166046142578125, 0.002216339111328125, -0.0076141357421875, -0.01611328125, 0.00034236907958984375, 0.041656494140625, -0.05126953125, -0.038543701171875, -0.0297698974609375, 0....

id: cais/mmlu
lastModified: 2023-10-07T11:24:05.000Z
tags: [ "task_categories:question-answering", "task_ids:multiple-choice-qa", "annotations_creators:no-annotation", "language_creators:expert-generated", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:original", "language:en", "license:mit", "arxiv:2009.03300", "arxiv:2005....
author: cais
description: This is a massive multitask test consisting of multiple-choice questions from various branches of knowledge, covering 57 tasks including elementary mathematics, US history, computer science, law, and more.
citation: @article{hendryckstest2021, title={Measuring Massive Multitask Language Understanding}, author={Dan Hendrycks and Collin Burns and Steven Basart and Andy Zou and Mantas Mazeika and Dawn Song and Jacob Steinhardt}, journal={Proceedings of the International Conference on Learning Representations (ICLR)}...
likes: 92
downloads: 1,500,832
created: 2022-03-02T23:29:22
card: --- annotations_creators: - no-annotation language_creators: - expert-generated language: - en license: - mit multilinguality: - monolingual size_categories: - 10K<n<100K source_datasets: - original task_categories: - question-answering task_ids: - multiple-choice-qa paperswithcode_id: mmlu pretty_name: Measuring Massi...
card_len: 39,677
embeddings: [ [ -0.040008544921875, -0.0457763671875, 0.0215301513671875, 0.0034198760986328125, 0.004787445068359375, 0.007534027099609375, -0.0183563232421875, -0.02288818359375, 0.0162200927734375, 0.01499176025390625, -0.051300048828125, -0.049102783203125, -0.0445251464843...

id: glue
lastModified: 2023-06-01T14:59:59.000Z
tags: [ "task_categories:text-classification", "task_ids:acceptability-classification", "task_ids:natural-language-inference", "task_ids:semantic-similarity-scoring", "task_ids:sentiment-classification", "task_ids:text-scoring", "annotations_creators:other", "language_creators:other", "multilinguality:monol...
author: null
description: GLUE, the General Language Understanding Evaluation benchmark (https://gluebenchmark.com/) is a collection of resources for training, evaluating, and analyzing natural language understanding systems.
citation: @inproceedings{wang2019glue, title={{GLUE}: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding}, author={Wang, Alex and Singh, Amanpreet and Michael, Julian and Hill, Felix and Levy, Omer and Bowman, Samuel R.}, note={In the Proceedings of ICLR.}, year={2019} }
likes: 245
downloads: 1,428,634
created: 2022-03-02T23:29:22
card: --- annotations_creators: - other language_creators: - other language: - en license: - cc-by-4.0 multilinguality: - monolingual size_categories: - 10K<n<100K source_datasets: - original task_categories: - text-classification task_ids: - acceptability-classification - natural-language-inference - semantic-similarity-sco...
card_len: 27,887
embeddings: [ [ -0.0303192138671875, -0.057159423828125, 0.00943756103515625, 0.01551055908203125, -0.0060272216796875, -0.004344940185546875, -0.01239776611328125, -0.030914306640625, 0.02679443359375, 0.03179931640625, -0.058502197265625, -0.053924560546875, -0.03598022460937...

id: poloclub/diffusiondb
lastModified: 2023-05-09T19:00:45.000Z
tags: [ "task_categories:text-to-image", "task_categories:image-to-text", "task_ids:image-captioning", "annotations_creators:no-annotation", "language_creators:found", "multilinguality:multilingual", "size_categories:n>1T", "source_datasets:original", "language:en", "license:cc0-1.0", "stable diffusion"...
author: poloclub
description: DiffusionDB is the first large-scale text-to-image prompt dataset. It contains 2 million images generated by Stable Diffusion using prompts and hyperparameters specified by real users. The unprecedented scale and diversity of this human-actuated dataset provide exciting research opportunities in understanding the inter...
citation: @article{wangDiffusionDBLargescalePrompt2022, title = {{{DiffusionDB}}: {{A}} Large-Scale Prompt Gallery Dataset for Text-to-Image Generative Models}, author = {Wang, Zijie J. and Montoya, Evan and Munechika, David and Yang, Haoyang and Hoover, Benjamin and Chau, Duen Horng}, year = {2022}, journal = {arXiv:221...
likes: 323
downloads: 1,069,360
created: 2022-10-25T02:25:28
card: --- layout: default title: Home nav_order: 1 has_children: false annotations_creators: - no-annotation language: - en language_creators: - found license: - cc0-1.0 multilinguality: - multilingual pretty_name: DiffusionDB size_categories: - n>1T source_datasets: - original tags: - stable diffusion - prompt engineering ...
card_len: 24,582
embeddings: [ [ -0.049774169921875, -0.06365966796875, 0.03656005859375, 0.0313720703125, -0.018280029296875, -0.00547027587890625, -0.0006356239318847656, -0.004180908203125, 0.0277557373046875, 0.037109375, -0.04730224609375, -0.07843017578125, -0.046051025390625, 0.00836...

id: squad_v2
lastModified: 2023-04-05T13:40:44.000Z
tags: ["task_categories:question-answering","task_ids:open-domain-qa","task_ids:extractive-qa","annotation(...TRUNCATED)
author: null
description: "combines the 100,000 questions in SQuAD1.1 with over 50,000 unanswerable questions written adversar(...TRUNCATED)
citation: "@article{2016arXiv160605250R,\n author = {{Rajpurkar}, Pranav and {Zhang}, Jian and {Lopyrev}(...TRUNCATED)
likes: 90
downloads: 1,054,465
created: 2022-03-02T23:29:22
card: "---\npretty_name: SQuAD2.0\nannotations_creators:\n- crowdsourced\nlanguage_creators:\n- crowdsourc(...TRUNCATED)
card_len: 8,016
embeddings: [[-0.046173095703125,-0.043670654296875,0.005634307861328125,0.0204010009765625,-0.00832366943359375(...TRUNCATED)

id: super_glue
lastModified: 2023-04-05T13:41:04.000Z
tags: ["task_categories:text-classification","task_categories:token-classification","task_categories:quest(...TRUNCATED)
author: null
description: "SuperGLUE (https://super.gluebenchmark.com/) is a new benchmark styled after\nGLUE with a new set o(...TRUNCATED)
citation: "@article{wang2019superglue,\n title={SuperGLUE: A Stickier Benchmark for General-Purpose Language (...TRUNCATED)
likes: 117
downloads: 824,558
created: 2022-03-02T23:29:22
card: "---\nannotations_creators:\n- expert-generated\nlanguage_creators:\n- other\nlanguage:\n- en\nlicen(...TRUNCATED)
card_len: 14,813
embeddings: [[-0.042449951171875,-0.04705810546875,0.0084228515625,-0.0011720657348632812,-0.00946044921875,-0.0(...TRUNCATED)

id: lighteval/mmlu
lastModified: 2023-06-09T16:36:19.000Z
tags: ["task_categories:question-answering","task_ids:multiple-choice-qa","annotations_creators:no-annotat(...TRUNCATED)
author: lighteval
description: "This is a massive multitask test consisting of multiple-choice questions from various branches of k(...TRUNCATED)
citation: "@article{hendryckstest2021,\n title={Measuring Massive Multitask Language Understanding},\n (...TRUNCATED)
likes: 6
downloads: 578,067
created: 2023-05-16T09:39:28
card: "---\nannotations_creators:\n- no-annotation\nlanguage_creators:\n- expert-generated\nlanguage:\n- e(...TRUNCATED)
card_len: 39,677
embeddings: [[-0.03997802734375,-0.0457763671875,0.0215301513671875,0.00342559814453125,0.004791259765625,0.0075(...TRUNCATED)

id: wikitext
lastModified: 2023-06-20T07:52:10.000Z
tags: ["task_categories:text-generation","task_categories:fill-mask","task_ids:language-modeling","task_id(...TRUNCATED)
author: null
description: " The WikiText language modeling dataset is a collection of over 100 million tokens extracted from t(...TRUNCATED)
citation: "@misc{merity2016pointer,\n title={Pointer Sentinel Mixture Models},\n author={Stephen Mer(...TRUNCATED)
likes: 198
downloads: 575,928
created: 2022-03-02T23:29:22
card: "---\nannotations_creators:\n- no-annotation\nlanguage_creators:\n- crowdsourced\nlanguage:\n- en\nl(...TRUNCATED)
card_len: 9,573
embeddings: [[-0.044677734375,-0.038116455078125,0.01137542724609375,0.0172271728515625,-0.010040283203125,-0.00(...TRUNCATED)

id: HuggingFaceM4/COCO
lastModified: 2022-12-15T15:51:03.000Z
tags: [ "license:cc-by-4.0", "arxiv:1405.0312", "region:us" ]
author: HuggingFaceM4
description: "MS COCO is a large-scale object detection, segmentation, and captioning dataset.\nCOCO has several (...TRUNCATED)
citation: "@article{DBLP:journals/corr/LinMBHPRDZ14,\n author = {Tsung{-}Yi Lin and\n Michae(...TRUNCATED)
likes: 8
downloads: 438,316
created: 2022-12-14T21:13:57
card: "---\nlicense: cc-by-4.0\n---\n\n# Dataset Card for [Dataset Name]\n\n## Table of Contents\n- [Table(...TRUNCATED)
card_len: 3,660
embeddings: [[-0.035552978515625,-0.047760009765625,-0.005481719970703125,0.030731201171875,-0.0199737548828125,(...TRUNCATED)

End of preview.
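
The `embeddings` column is the payload of this dataset: the preview shows each record storing a nested list of floats, i.e. one or more vectors per card. A minimal sketch for ranking cards against a query by cosine similarity, assuming the vectors are uniform-length chunk embeddings produced by the same (unnamed) model as the query:

```python
# Rank dataset cards by best cosine similarity to a query embedding.
import numpy as np

def max_cosine(query: np.ndarray, chunks: np.ndarray) -> float:
    """Best similarity between the query and any chunk vector of a card."""
    q = query / np.linalg.norm(query)
    c = chunks / np.linalg.norm(chunks, axis=1, keepdims=True)
    return float((c @ q).max())

# `embed` is a hypothetical stand-in for the embedding model used here.
# query_vec = embed("multiple-choice question answering benchmark")
# scores = [max_cosine(query_vec, np.asarray(r["embeddings"])) for r in ds]
# print(ds[int(np.argmax(scores))]["id"])
```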

Dataset Card for "dataset_cards_with_long_context_embeddins"

More Information needed

Downloads last month: 1