| column | dtype | values |
|---|---|---|
| id | string | lengths 2–115 |
| lastModified | string | lengths 24–24 |
| tags | list | |
| author | string | lengths 2–42 |
| description | string | lengths 0–68.7k |
| citation | string | lengths 0–10.7k |
| cardData | null | |
| likes | int64 | 0–3.55k |
| downloads | int64 | 0–10.1M |
| card | string | lengths 0–1.01M |
turkish-nlp-suite/vitamins-supplements-reviews
2023-09-23T18:34:47.000Z
[ "task_categories:text-classification", "task_ids:sentiment-classification", "multilinguality:monolingual", "size_categories:100K<n<1M", "language:tr", "license:cc-by-sa-4.0", "region:us" ]
turkish-nlp-suite
Customer reviews dataset for Turkish. Includes reviews for vitamin and supplement products, crawled from the e-commerce website Vitaminler.com. All reviews are in Turkish. [Vitamins and Supplements Customer Reviews Dataset](https://github.com/turkish-nlp-suite/Vitamins-Supplements-Reviews)
@inproceedings{altinok-2023-diverse, title = "A Diverse Set of Freely Available Linguistic Resources for {T}urkish", author = "Altinok, Duygu", booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.acl-long.768", pages = "13739--13750", abstract = "This study presents a diverse set of freely available linguistic resources for Turkish natural language processing, including corpora, pretrained models and education material. Although Turkish is spoken by a sizeable population of over 80 million people, Turkish linguistic resources for natural language processing remain scarce. In this study, we provide corpora to allow practitioners to build their own applications and pretrained models that would assist industry researchers in creating quick prototypes. The provided corpora include named entity recognition datasets of diverse genres, including Wikipedia articles and supplement products customer reviews. In addition, crawling e-commerce and movie reviews websites, we compiled several sentiment analysis datasets of different genres. Our linguistic resources for Turkish also include pretrained spaCy language models. To the best of our knowledge, our models are the first spaCy models trained for the Turkish language. Finally, we provide various types of education material, such as video tutorials and code examples, that can support the interested audience on practicing Turkish NLP. The advantages of our linguistic resources are three-fold: they are freely available, they are first of their kind, and they are easy to use in a broad range of implementations. Along with a thorough description of the resource creation process, we also explain the position of our resources in the Turkish NLP world.", }
null
0
0
--- language: - tr license: - cc-by-sa-4.0 multilinguality: - monolingual size_categories: - 100K<n<1M task_categories: - text-classification task_ids: - sentiment-classification pretty_name: Vitamins and Supplements Customer Reviews Dataset --- # Dataset Card for turkish-nlp-suite/vitamins-supplements-reviews <img src="https://raw.githubusercontent.com/turkish-nlp-suite/.github/main/profile/supplements.png" width="20%" height="20%"> ### Dataset Description - **Repository:** [Vitamins and Supplements Reviews Dataset](https://github.com/turkish-nlp-suite/Vitamins-Supplements-Reviews) - **Paper:** [ACL link](https://aclanthology.org/2023.acl-long.768/) - **Dataset:** Vitamins and Supplements Reviews Dataset - **Domain:** E-commerce, customer reviews ### Dataset Summary A Turkish sentiment analysis dataset of customer reviews and star ratings for vitamin and supplement products, scraped from Vitaminler.com. Each customer review in the Vitamins and Supplements Reviews Dataset describes a customer’s experience with a supplement product in terms of the product’s effectiveness, side effects, taste and smell, as well as comments on supplement usage frequency and dosage, active ingredients, brand, and similar products by other brands. The reviews also include pointers to customers’ health history and indications of how the supplements helped resolve customers’ health problems. Given these characteristics, the Vitamins and Supplements Reviews Dataset lies at the intersection of customer review data and healthcare NLP data. We hope to offer a finely compiled medical NLP dataset for Turkish NLU. ### Dataset Instances The dataset includes 1,052 products from 262 distinct brands, with 244K customer reviews. 
During compilation, we eliminated reviews containing person names, such as customers' names and influencer names. Each dataset instance contains: - product name - brand name - customer review text - star rating Here is an example: ``` { "product_name": "Microfer Şurup 250 ml", "brand": "Ocean", "review": "Bittikçe alıyorum harika bişey kızım tadını da seviyo", "star": 5 } ``` If you would rather have the reviews in JSON format, grouped by product name, the dataset is also available as a single JSON file in the dataset's [GitHub repo](https://github.com/turkish-nlp-suite/Vitamins-Supplements-Reviews). ### Data Split | name |train|validation|test| |---------|----:|---:|---:| |Vitamins and Supplements Reviews|200866|20000|20000| ### Citation This work is supported by the Google Developer Experts Program. Part of the Duygu 2022 Fall-Winter collection, "Turkish NLP with Duygu" / "Duygu'yla Türkçe NLP". All rights reserved. If you would like to use this dataset in your own work, please cite [A Diverse Set of Freely Available Linguistic Resources for Turkish](https://aclanthology.org/2023.acl-long.768/): ``` @inproceedings{altinok-2023-diverse, title = "A Diverse Set of Freely Available Linguistic Resources for {T}urkish", author = "Altinok, Duygu", booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.acl-long.768", pages = "13739--13750", abstract = "This study presents a diverse set of freely available linguistic resources for Turkish natural language processing, including corpora, pretrained models and education material. Although Turkish is spoken by a sizeable population of over 80 million people, Turkish linguistic resources for natural language processing remain scarce. 
In this study, we provide corpora to allow practitioners to build their own applications and pretrained models that would assist industry researchers in creating quick prototypes. The provided corpora include named entity recognition datasets of diverse genres, including Wikipedia articles and supplement products customer reviews. In addition, crawling e-commerce and movie reviews websites, we compiled several sentiment analysis datasets of different genres. Our linguistic resources for Turkish also include pretrained spaCy language models. To the best of our knowledge, our models are the first spaCy models trained for the Turkish language. Finally, we provide various types of education material, such as video tutorials and code examples, that can support the interested audience on practicing Turkish NLP. The advantages of our linguistic resources are three-fold: they are freely available, they are first of their kind, and they are easy to use in a broad range of implementations. Along with a thorough description of the resource creation process, we also explain the position of our resources in the Turkish NLP world.", } ```
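Since the card exposes raw star ratings rather than sentiment labels, a common preprocessing step is to derive labels from the stars. The sketch below operates on instances shaped like the example above; the 1–2 = negative, 3 = neutral, 4–5 = positive thresholds and the helper names are our own assumptions, not part of the dataset.

```python
# Hypothetical star-to-sentiment mapping for instances with the fields shown
# above (product_name, brand, review, star). Thresholds are an assumption.
def star_to_sentiment(star: int) -> str:
    if star <= 2:
        return "negative"
    if star == 3:
        return "neutral"
    return "positive"

def label_reviews(instances):
    """Return copies of the instances with a derived 'sentiment' field."""
    return [dict(inst, sentiment=star_to_sentiment(inst["star"])) for inst in instances]

sample = [
    {"product_name": "Microfer Şurup 250 ml", "brand": "Ocean",
     "review": "Bittikçe alıyorum harika bişey kızım tadını da seviyo", "star": 5},
]
print(label_reviews(sample)[0]["sentiment"])  # positive
```

The same mapping can be applied to the train/validation/test splits after loading them with `datasets.load_dataset`.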
open-llm-leaderboard/details_bhenrym14__airophin-v2-13b-PI-8k-fp16
2023-09-22T17:43:22.000Z
[ "region:us" ]
open-llm-leaderboard
null
null
null
0
0
--- pretty_name: Evaluation run of bhenrym14/airophin-v2-13b-PI-8k-fp16 dataset_summary: "Dataset automatically created during the evaluation run of model\ \ [bhenrym14/airophin-v2-13b-PI-8k-fp16](https://huggingface.co/bhenrym14/airophin-v2-13b-PI-8k-fp16)\ \ on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\ \nThe dataset is composed of 3 configuration, each one coresponding to one of the\ \ evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be\ \ found as a specific split in each configuration, the split being named using the\ \ timestamp of the run.The \"train\" split is always pointing to the latest results.\n\ \nAn additional configuration \"results\" store all the aggregated results of the\ \ run (and is used to compute and display the agregated metrics on the [Open LLM\ \ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\ \nTo load the details from a run, you can for instance do the following:\n```python\n\ from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_bhenrym14__airophin-v2-13b-PI-8k-fp16\"\ ,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\ These are the [latest results from run 2023-09-22T17:43:10.494860](https://huggingface.co/datasets/open-llm-leaderboard/details_bhenrym14__airophin-v2-13b-PI-8k-fp16/blob/main/results_2023-09-22T17-43-10.494860.json)(note\ \ that their might be results for other tasks in the repos if successive evals didn't\ \ cover the same tasks. 
You find each in the results and the \"latest\" split for\ \ each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.0921770134228188,\n\ \ \"em_stderr\": 0.00296245358879876,\n \"f1\": 0.2086210151006714,\n\ \ \"f1_stderr\": 0.0033790655527750446,\n \"acc\": 0.4199589150853921,\n\ \ \"acc_stderr\": 0.009541015115774397\n },\n \"harness|drop|3\": {\n\ \ \"em\": 0.0921770134228188,\n \"em_stderr\": 0.00296245358879876,\n\ \ \"f1\": 0.2086210151006714,\n \"f1_stderr\": 0.0033790655527750446\n\ \ },\n \"harness|gsm8k|5\": {\n \"acc\": 0.07354056103108415,\n \ \ \"acc_stderr\": 0.007189835754365268\n },\n \"harness|winogrande|5\"\ : {\n \"acc\": 0.7663772691397001,\n \"acc_stderr\": 0.011892194477183525\n\ \ }\n}\n```" repo_url: https://huggingface.co/bhenrym14/airophin-v2-13b-PI-8k-fp16 leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard point_of_contact: clementine@hf.co configs: - config_name: harness_drop_3 data_files: - split: 2023_09_22T17_43_10.494860 path: - '**/details_harness|drop|3_2023-09-22T17-43-10.494860.parquet' - split: latest path: - '**/details_harness|drop|3_2023-09-22T17-43-10.494860.parquet' - config_name: harness_gsm8k_5 data_files: - split: 2023_09_22T17_43_10.494860 path: - '**/details_harness|gsm8k|5_2023-09-22T17-43-10.494860.parquet' - split: latest path: - '**/details_harness|gsm8k|5_2023-09-22T17-43-10.494860.parquet' - config_name: harness_winogrande_5 data_files: - split: 2023_09_22T17_43_10.494860 path: - '**/details_harness|winogrande|5_2023-09-22T17-43-10.494860.parquet' - split: latest path: - '**/details_harness|winogrande|5_2023-09-22T17-43-10.494860.parquet' - config_name: results data_files: - split: 2023_09_22T17_43_10.494860 path: - results_2023-09-22T17-43-10.494860.parquet - split: latest path: - results_2023-09-22T17-43-10.494860.parquet --- # Dataset Card for Evaluation run of bhenrym14/airophin-v2-13b-PI-8k-fp16 ## Dataset Description - **Homepage:** - **Repository:** 
https://huggingface.co/bhenrym14/airophin-v2-13b-PI-8k-fp16 - **Paper:** - **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard - **Point of Contact:** clementine@hf.co ### Dataset Summary Dataset automatically created during the evaluation run of model [bhenrym14/airophin-v2-13b-PI-8k-fp16](https://huggingface.co/bhenrym14/airophin-v2-13b-PI-8k-fp16) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard). The dataset is composed of 3 configurations, each one corresponding to one of the evaluated tasks. The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results. An additional configuration, "results", stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)). To load the details from a run, you can, for instance, do the following: ```python from datasets import load_dataset data = load_dataset("open-llm-leaderboard/details_bhenrym14__airophin-v2-13b-PI-8k-fp16", "harness_winogrande_5", split="train") ``` ## Latest results These are the [latest results from run 2023-09-22T17:43:10.494860](https://huggingface.co/datasets/open-llm-leaderboard/details_bhenrym14__airophin-v2-13b-PI-8k-fp16/blob/main/results_2023-09-22T17-43-10.494860.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. 
You find each in the results and the "latest" split for each eval): ```python { "all": { "em": 0.0921770134228188, "em_stderr": 0.00296245358879876, "f1": 0.2086210151006714, "f1_stderr": 0.0033790655527750446, "acc": 0.4199589150853921, "acc_stderr": 0.009541015115774397 }, "harness|drop|3": { "em": 0.0921770134228188, "em_stderr": 0.00296245358879876, "f1": 0.2086210151006714, "f1_stderr": 0.0033790655527750446 }, "harness|gsm8k|5": { "acc": 0.07354056103108415, "acc_stderr": 0.007189835754365268 }, "harness|winogrande|5": { "acc": 0.7663772691397001, "acc_stderr": 0.011892194477183525 } } ``` ### Supported Tasks and Leaderboards [More Information Needed] ### Languages [More Information Needed] ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information [More Information Needed] ### Contributions [More Information Needed]
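The per-task blocks in the results JSON above follow a `harness|<task>|<n_shot>` key convention. A small sketch of flattening them into rows, using an excerpt of the numbers shown above (the helper name is our own, not part of the leaderboard tooling):

```python
# Flatten "harness|<task>|<n_shot>" result blocks into (task, n_shot, metric, value)
# rows; skips the "all" aggregate entry. Helper name is hypothetical.
def flatten_results(results: dict):
    rows = []
    for key, metrics in results.items():
        if not key.startswith("harness|"):
            continue  # "all" holds cross-task aggregates, not a single task
        _, task, n_shot = key.split("|")
        for metric, value in metrics.items():
            rows.append((task, int(n_shot), metric, value))
    return rows

# Excerpt of the "Latest results" JSON shown in the card above.
results = {
    "all": {"acc": 0.4199589150853921},
    "harness|gsm8k|5": {"acc": 0.07354056103108415,
                        "acc_stderr": 0.007189835754365268},
    "harness|winogrande|5": {"acc": 0.7663772691397001,
                             "acc_stderr": 0.011892194477183525},
}
for row in flatten_results(results):
    print(row)
```

The same shape applies to the full results file loaded from the "results" configuration.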
open-llm-leaderboard/details_FelixChao__vicuna-7B-chemical
2023-09-22T17:46:29.000Z
[ "region:us" ]
open-llm-leaderboard
null
null
null
0
0
--- pretty_name: Evaluation run of FelixChao/vicuna-7B-chemical dataset_summary: "Dataset automatically created during the evaluation run of model\ \ [FelixChao/vicuna-7B-chemical](https://huggingface.co/FelixChao/vicuna-7B-chemical)\ \ on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\ \nThe dataset is composed of 3 configuration, each one coresponding to one of the\ \ evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be\ \ found as a specific split in each configuration, the split being named using the\ \ timestamp of the run.The \"train\" split is always pointing to the latest results.\n\ \nAn additional configuration \"results\" store all the aggregated results of the\ \ run (and is used to compute and display the agregated metrics on the [Open LLM\ \ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\ \nTo load the details from a run, you can for instance do the following:\n```python\n\ from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_FelixChao__vicuna-7B-chemical\"\ ,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\ These are the [latest results from run 2023-09-22T17:46:17.694402](https://huggingface.co/datasets/open-llm-leaderboard/details_FelixChao__vicuna-7B-chemical/blob/main/results_2023-09-22T17-46-17.694402.json)(note\ \ that their might be results for other tasks in the repos if successive evals didn't\ \ cover the same tasks. 
You find each in the results and the \"latest\" split for\ \ each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.005557885906040268,\n\ \ \"em_stderr\": 0.0007613497667018453,\n \"f1\": 0.06261220637583904,\n\ \ \"f1_stderr\": 0.0014974766629904516,\n \"acc\": 0.3525119781135765,\n\ \ \"acc_stderr\": 0.009072291049445833\n },\n \"harness|drop|3\": {\n\ \ \"em\": 0.005557885906040268,\n \"em_stderr\": 0.0007613497667018453,\n\ \ \"f1\": 0.06261220637583904,\n \"f1_stderr\": 0.0014974766629904516\n\ \ },\n \"harness|gsm8k|5\": {\n \"acc\": 0.03335860500379075,\n \ \ \"acc_stderr\": 0.004946282649173776\n },\n \"harness|winogrande|5\"\ : {\n \"acc\": 0.6716653512233622,\n \"acc_stderr\": 0.013198299449717888\n\ \ }\n}\n```" repo_url: https://huggingface.co/FelixChao/vicuna-7B-chemical leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard point_of_contact: clementine@hf.co configs: - config_name: harness_drop_3 data_files: - split: 2023_09_22T17_46_17.694402 path: - '**/details_harness|drop|3_2023-09-22T17-46-17.694402.parquet' - split: latest path: - '**/details_harness|drop|3_2023-09-22T17-46-17.694402.parquet' - config_name: harness_gsm8k_5 data_files: - split: 2023_09_22T17_46_17.694402 path: - '**/details_harness|gsm8k|5_2023-09-22T17-46-17.694402.parquet' - split: latest path: - '**/details_harness|gsm8k|5_2023-09-22T17-46-17.694402.parquet' - config_name: harness_winogrande_5 data_files: - split: 2023_09_22T17_46_17.694402 path: - '**/details_harness|winogrande|5_2023-09-22T17-46-17.694402.parquet' - split: latest path: - '**/details_harness|winogrande|5_2023-09-22T17-46-17.694402.parquet' - config_name: results data_files: - split: 2023_09_22T17_46_17.694402 path: - results_2023-09-22T17-46-17.694402.parquet - split: latest path: - results_2023-09-22T17-46-17.694402.parquet --- # Dataset Card for Evaluation run of FelixChao/vicuna-7B-chemical ## Dataset Description - **Homepage:** - **Repository:** 
https://huggingface.co/FelixChao/vicuna-7B-chemical - **Paper:** - **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard - **Point of Contact:** clementine@hf.co ### Dataset Summary Dataset automatically created during the evaluation run of model [FelixChao/vicuna-7B-chemical](https://huggingface.co/FelixChao/vicuna-7B-chemical) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard). The dataset is composed of 3 configurations, each one corresponding to one of the evaluated tasks. The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results. An additional configuration, "results", stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)). To load the details from a run, you can, for instance, do the following: ```python from datasets import load_dataset data = load_dataset("open-llm-leaderboard/details_FelixChao__vicuna-7B-chemical", "harness_winogrande_5", split="train") ``` ## Latest results These are the [latest results from run 2023-09-22T17:46:17.694402](https://huggingface.co/datasets/open-llm-leaderboard/details_FelixChao__vicuna-7B-chemical/blob/main/results_2023-09-22T17-46-17.694402.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. 
You find each in the results and the "latest" split for each eval): ```python { "all": { "em": 0.005557885906040268, "em_stderr": 0.0007613497667018453, "f1": 0.06261220637583904, "f1_stderr": 0.0014974766629904516, "acc": 0.3525119781135765, "acc_stderr": 0.009072291049445833 }, "harness|drop|3": { "em": 0.005557885906040268, "em_stderr": 0.0007613497667018453, "f1": 0.06261220637583904, "f1_stderr": 0.0014974766629904516 }, "harness|gsm8k|5": { "acc": 0.03335860500379075, "acc_stderr": 0.004946282649173776 }, "harness|winogrande|5": { "acc": 0.6716653512233622, "acc_stderr": 0.013198299449717888 } } ``` ### Supported Tasks and Leaderboards [More Information Needed] ### Languages [More Information Needed] ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information [More Information Needed] ### Contributions [More Information Needed]
open-llm-leaderboard/details_baichuan-inc__Baichuan-7B
2023-09-22T17:53:14.000Z
[ "region:us" ]
open-llm-leaderboard
null
null
null
0
0
--- pretty_name: Evaluation run of baichuan-inc/Baichuan-7B dataset_summary: "Dataset automatically created during the evaluation run of model\ \ [baichuan-inc/Baichuan-7B](https://huggingface.co/baichuan-inc/Baichuan-7B) on\ \ the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\ \nThe dataset is composed of 3 configuration, each one coresponding to one of the\ \ evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be\ \ found as a specific split in each configuration, the split being named using the\ \ timestamp of the run.The \"train\" split is always pointing to the latest results.\n\ \nAn additional configuration \"results\" store all the aggregated results of the\ \ run (and is used to compute and display the agregated metrics on the [Open LLM\ \ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\ \nTo load the details from a run, you can for instance do the following:\n```python\n\ from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_baichuan-inc__Baichuan-7B\"\ ,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\ These are the [latest results from run 2023-09-22T17:53:01.811068](https://huggingface.co/datasets/open-llm-leaderboard/details_baichuan-inc__Baichuan-7B/blob/main/results_2023-09-22T17-53-01.811068.json)(note\ \ that their might be results for other tasks in the repos if successive evals didn't\ \ cover the same tasks. 
You find each in the results and the \"latest\" split for\ \ each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.0010486577181208054,\n\ \ \"em_stderr\": 0.00033145814652192515,\n \"f1\": 0.05030096476510072,\n\ \ \"f1_stderr\": 0.001249643333921536,\n \"acc\": 0.34788528775895733,\n\ \ \"acc_stderr\": 0.00889327304403644\n },\n \"harness|drop|3\": {\n\ \ \"em\": 0.0010486577181208054,\n \"em_stderr\": 0.00033145814652192515,\n\ \ \"f1\": 0.05030096476510072,\n \"f1_stderr\": 0.001249643333921536\n\ \ },\n \"harness|gsm8k|5\": {\n \"acc\": 0.028051554207733132,\n \ \ \"acc_stderr\": 0.004548229533836359\n },\n \"harness|winogrande|5\"\ : {\n \"acc\": 0.6677190213101816,\n \"acc_stderr\": 0.013238316554236521\n\ \ }\n}\n```" repo_url: https://huggingface.co/baichuan-inc/Baichuan-7B leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard point_of_contact: clementine@hf.co configs: - config_name: harness_drop_3 data_files: - split: 2023_09_22T17_53_01.811068 path: - '**/details_harness|drop|3_2023-09-22T17-53-01.811068.parquet' - split: latest path: - '**/details_harness|drop|3_2023-09-22T17-53-01.811068.parquet' - config_name: harness_gsm8k_5 data_files: - split: 2023_09_22T17_53_01.811068 path: - '**/details_harness|gsm8k|5_2023-09-22T17-53-01.811068.parquet' - split: latest path: - '**/details_harness|gsm8k|5_2023-09-22T17-53-01.811068.parquet' - config_name: harness_winogrande_5 data_files: - split: 2023_09_22T17_53_01.811068 path: - '**/details_harness|winogrande|5_2023-09-22T17-53-01.811068.parquet' - split: latest path: - '**/details_harness|winogrande|5_2023-09-22T17-53-01.811068.parquet' - config_name: results data_files: - split: 2023_09_22T17_53_01.811068 path: - results_2023-09-22T17-53-01.811068.parquet - split: latest path: - results_2023-09-22T17-53-01.811068.parquet --- # Dataset Card for Evaluation run of baichuan-inc/Baichuan-7B ## Dataset Description - **Homepage:** - **Repository:** 
https://huggingface.co/baichuan-inc/Baichuan-7B - **Paper:** - **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard - **Point of Contact:** clementine@hf.co ### Dataset Summary Dataset automatically created during the evaluation run of model [baichuan-inc/Baichuan-7B](https://huggingface.co/baichuan-inc/Baichuan-7B) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard). The dataset is composed of 3 configurations, each one corresponding to one of the evaluated tasks. The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results. An additional configuration, "results", stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)). To load the details from a run, you can, for instance, do the following: ```python from datasets import load_dataset data = load_dataset("open-llm-leaderboard/details_baichuan-inc__Baichuan-7B", "harness_winogrande_5", split="train") ``` ## Latest results These are the [latest results from run 2023-09-22T17:53:01.811068](https://huggingface.co/datasets/open-llm-leaderboard/details_baichuan-inc__Baichuan-7B/blob/main/results_2023-09-22T17-53-01.811068.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. 
You find each in the results and the "latest" split for each eval): ```python { "all": { "em": 0.0010486577181208054, "em_stderr": 0.00033145814652192515, "f1": 0.05030096476510072, "f1_stderr": 0.001249643333921536, "acc": 0.34788528775895733, "acc_stderr": 0.00889327304403644 }, "harness|drop|3": { "em": 0.0010486577181208054, "em_stderr": 0.00033145814652192515, "f1": 0.05030096476510072, "f1_stderr": 0.001249643333921536 }, "harness|gsm8k|5": { "acc": 0.028051554207733132, "acc_stderr": 0.004548229533836359 }, "harness|winogrande|5": { "acc": 0.6677190213101816, "acc_stderr": 0.013238316554236521 } } ``` ### Supported Tasks and Leaderboards [More Information Needed] ### Languages [More Information Needed] ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information [More Information Needed] ### Contributions [More Information Needed]
riquinho21/digovoz
2023-09-22T18:00:21.000Z
[ "license:cc0-1.0", "region:us" ]
riquinho21
null
null
null
0
0
--- license: cc0-1.0 ---
isaiah08/estrange-ai
2023-09-22T18:01:11.000Z
[ "region:us" ]
isaiah08
null
null
null
0
0
Entry not found
gollark/consciousness
2023-09-22T18:18:51.000Z
[ "region:us" ]
gollark
null
null
null
0
0
Papers on consciousness extracted from PDF format, from Cognition and Consciousness, arXiv and JCER.
Arthur91284/Arthur91284
2023-09-23T15:39:54.000Z
[ "license:openrail", "region:us" ]
Arthur91284
null
null
null
0
0
--- license: openrail ---
quocanh34/test_result
2023-09-22T18:28:49.000Z
[ "region:us" ]
quocanh34
null
null
null
0
0
Entry not found
Eu001/Teste
2023-09-22T23:16:05.000Z
[ "license:openrail", "region:us" ]
Eu001
null
null
null
0
0
--- license: openrail ---
open-llm-leaderboard/details_luffycodes__higgs-llama-vicuna-ep25-70b
2023-09-22T18:39:04.000Z
[ "region:us" ]
open-llm-leaderboard
null
null
null
0
0
--- pretty_name: Evaluation run of luffycodes/higgs-llama-vicuna-ep25-70b dataset_summary: "Dataset automatically created during the evaluation run of model\ \ [luffycodes/higgs-llama-vicuna-ep25-70b](https://huggingface.co/luffycodes/higgs-llama-vicuna-ep25-70b)\ \ on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\ \nThe dataset is composed of 61 configuration, each one coresponding to one of the\ \ evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be\ \ found as a specific split in each configuration, the split being named using the\ \ timestamp of the run.The \"train\" split is always pointing to the latest results.\n\ \nAn additional configuration \"results\" store all the aggregated results of the\ \ run (and is used to compute and display the agregated metrics on the [Open LLM\ \ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\ \nTo load the details from a run, you can for instance do the following:\n```python\n\ from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_luffycodes__higgs-llama-vicuna-ep25-70b\"\ ,\n\t\"harness_truthfulqa_mc_0\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\ \nThese are the [latest results from run 2023-09-22T18:37:41.856857](https://huggingface.co/datasets/open-llm-leaderboard/details_luffycodes__higgs-llama-vicuna-ep25-70b/blob/main/results_2023-09-22T18-37-41.856857.json)(note\ \ that their might be results for other tasks in the repos if successive evals didn't\ \ cover the same tasks. 
You find each in the results and the \"latest\" split for\ \ each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.6420797619490708,\n\ \ \"acc_stderr\": 0.03261955033835842,\n \"acc_norm\": 0.6458841283778145,\n\ \ \"acc_norm_stderr\": 0.0325948592191213,\n \"mc1\": 0.3684210526315789,\n\ \ \"mc1_stderr\": 0.016886551261046042,\n \"mc2\": 0.5374973802082814,\n\ \ \"mc2_stderr\": 0.015377782429548816\n },\n \"harness|arc:challenge|25\"\ : {\n \"acc\": 0.5853242320819113,\n \"acc_stderr\": 0.014397070564409174,\n\ \ \"acc_norm\": 0.6228668941979523,\n \"acc_norm_stderr\": 0.014163366896192601\n\ \ },\n \"harness|hellaswag|10\": {\n \"acc\": 0.673770165305716,\n\ \ \"acc_stderr\": 0.004678743563766661,\n \"acc_norm\": 0.8606851224855606,\n\ \ \"acc_norm_stderr\": 0.003455671196993115\n },\n \"harness|hendrycksTest-abstract_algebra|5\"\ : {\n \"acc\": 0.29,\n \"acc_stderr\": 0.045604802157206845,\n \ \ \"acc_norm\": 0.29,\n \"acc_norm_stderr\": 0.045604802157206845\n \ \ },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.4962962962962963,\n\ \ \"acc_stderr\": 0.043192236258113303,\n \"acc_norm\": 0.4962962962962963,\n\ \ \"acc_norm_stderr\": 0.043192236258113303\n },\n \"harness|hendrycksTest-astronomy|5\"\ : {\n \"acc\": 0.75,\n \"acc_stderr\": 0.03523807393012047,\n \ \ \"acc_norm\": 0.75,\n \"acc_norm_stderr\": 0.03523807393012047\n \ \ },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.7,\n\ \ \"acc_stderr\": 0.046056618647183814,\n \"acc_norm\": 0.7,\n \ \ \"acc_norm_stderr\": 0.046056618647183814\n },\n \"harness|hendrycksTest-clinical_knowledge|5\"\ : {\n \"acc\": 0.6528301886792452,\n \"acc_stderr\": 0.029300101705549652,\n\ \ \"acc_norm\": 0.6528301886792452,\n \"acc_norm_stderr\": 0.029300101705549652\n\ \ },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.7569444444444444,\n\ \ \"acc_stderr\": 0.03586879280080341,\n \"acc_norm\": 0.7569444444444444,\n\ \ \"acc_norm_stderr\": 0.03586879280080341\n },\n 
\"harness|hendrycksTest-college_chemistry|5\"\ : {\n \"acc\": 0.49,\n \"acc_stderr\": 0.05024183937956912,\n \ \ \"acc_norm\": 0.49,\n \"acc_norm_stderr\": 0.05024183937956912\n \ \ },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\"\ : 0.55,\n \"acc_stderr\": 0.049999999999999996,\n \"acc_norm\": 0.55,\n\ \ \"acc_norm_stderr\": 0.049999999999999996\n },\n \"harness|hendrycksTest-college_mathematics|5\"\ : {\n \"acc\": 0.35,\n \"acc_stderr\": 0.047937248544110196,\n \ \ \"acc_norm\": 0.35,\n \"acc_norm_stderr\": 0.047937248544110196\n \ \ },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.630057803468208,\n\ \ \"acc_stderr\": 0.0368122963339432,\n \"acc_norm\": 0.630057803468208,\n\ \ \"acc_norm_stderr\": 0.0368122963339432\n },\n \"harness|hendrycksTest-college_physics|5\"\ : {\n \"acc\": 0.29411764705882354,\n \"acc_stderr\": 0.04533838195929775,\n\ \ \"acc_norm\": 0.29411764705882354,\n \"acc_norm_stderr\": 0.04533838195929775\n\ \ },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\":\ \ 0.69,\n \"acc_stderr\": 0.04648231987117316,\n \"acc_norm\": 0.69,\n\ \ \"acc_norm_stderr\": 0.04648231987117316\n },\n \"harness|hendrycksTest-conceptual_physics|5\"\ : {\n \"acc\": 0.5914893617021276,\n \"acc_stderr\": 0.032134180267015755,\n\ \ \"acc_norm\": 0.5914893617021276,\n \"acc_norm_stderr\": 0.032134180267015755\n\ \ },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.39473684210526316,\n\ \ \"acc_stderr\": 0.04598188057816541,\n \"acc_norm\": 0.39473684210526316,\n\ \ \"acc_norm_stderr\": 0.04598188057816541\n },\n \"harness|hendrycksTest-electrical_engineering|5\"\ : {\n \"acc\": 0.5724137931034483,\n \"acc_stderr\": 0.04122737111370333,\n\ \ \"acc_norm\": 0.5724137931034483,\n \"acc_norm_stderr\": 0.04122737111370333\n\ \ },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\"\ : 0.4126984126984127,\n \"acc_stderr\": 0.025355741263055287,\n \"\ acc_norm\": 0.4126984126984127,\n 
\"acc_norm_stderr\": 0.025355741263055287\n\ \ },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.40476190476190477,\n\ \ \"acc_stderr\": 0.04390259265377562,\n \"acc_norm\": 0.40476190476190477,\n\ \ \"acc_norm_stderr\": 0.04390259265377562\n },\n \"harness|hendrycksTest-global_facts|5\"\ : {\n \"acc\": 0.43,\n \"acc_stderr\": 0.049756985195624284,\n \ \ \"acc_norm\": 0.43,\n \"acc_norm_stderr\": 0.049756985195624284\n \ \ },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\"\ : 0.7580645161290323,\n \"acc_stderr\": 0.024362599693031083,\n \"\ acc_norm\": 0.7580645161290323,\n \"acc_norm_stderr\": 0.024362599693031083\n\ \ },\n \"harness|hendrycksTest-high_school_chemistry|5\": {\n \"acc\"\ : 0.49261083743842365,\n \"acc_stderr\": 0.03517603540361008,\n \"\ acc_norm\": 0.49261083743842365,\n \"acc_norm_stderr\": 0.03517603540361008\n\ \ },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \ \ \"acc\": 0.67,\n \"acc_stderr\": 0.04725815626252607,\n \"acc_norm\"\ : 0.67,\n \"acc_norm_stderr\": 0.04725815626252607\n },\n \"harness|hendrycksTest-high_school_european_history|5\"\ : {\n \"acc\": 0.793939393939394,\n \"acc_stderr\": 0.03158415324047709,\n\ \ \"acc_norm\": 0.793939393939394,\n \"acc_norm_stderr\": 0.03158415324047709\n\ \ },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\"\ : 0.8232323232323232,\n \"acc_stderr\": 0.027178752639044915,\n \"\ acc_norm\": 0.8232323232323232,\n \"acc_norm_stderr\": 0.027178752639044915\n\ \ },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n\ \ \"acc\": 0.9015544041450777,\n \"acc_stderr\": 0.021500249576033442,\n\ \ \"acc_norm\": 0.9015544041450777,\n \"acc_norm_stderr\": 0.021500249576033442\n\ \ },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \ \ \"acc\": 0.6615384615384615,\n \"acc_stderr\": 0.023991500500313036,\n\ \ \"acc_norm\": 0.6615384615384615,\n \"acc_norm_stderr\": 0.023991500500313036\n\ \ },\n 
\"harness|hendrycksTest-high_school_mathematics|5\": {\n \"\ acc\": 0.3111111111111111,\n \"acc_stderr\": 0.02822644674968352,\n \ \ \"acc_norm\": 0.3111111111111111,\n \"acc_norm_stderr\": 0.02822644674968352\n\ \ },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \ \ \"acc\": 0.7142857142857143,\n \"acc_stderr\": 0.02934457250063435,\n \ \ \"acc_norm\": 0.7142857142857143,\n \"acc_norm_stderr\": 0.02934457250063435\n\ \ },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\"\ : 0.44370860927152317,\n \"acc_stderr\": 0.04056527902281732,\n \"\ acc_norm\": 0.44370860927152317,\n \"acc_norm_stderr\": 0.04056527902281732\n\ \ },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\"\ : 0.8550458715596331,\n \"acc_stderr\": 0.015094215699700464,\n \"\ acc_norm\": 0.8550458715596331,\n \"acc_norm_stderr\": 0.015094215699700464\n\ \ },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\"\ : 0.49074074074074076,\n \"acc_stderr\": 0.034093869469927006,\n \"\ acc_norm\": 0.49074074074074076,\n \"acc_norm_stderr\": 0.034093869469927006\n\ \ },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\"\ : 0.8725490196078431,\n \"acc_stderr\": 0.023405530480846322,\n \"\ acc_norm\": 0.8725490196078431,\n \"acc_norm_stderr\": 0.023405530480846322\n\ \ },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"\ acc\": 0.8312236286919831,\n \"acc_stderr\": 0.024381406832586234,\n \ \ \"acc_norm\": 0.8312236286919831,\n \"acc_norm_stderr\": 0.024381406832586234\n\ \ },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.7219730941704036,\n\ \ \"acc_stderr\": 0.03006958487449405,\n \"acc_norm\": 0.7219730941704036,\n\ \ \"acc_norm_stderr\": 0.03006958487449405\n },\n \"harness|hendrycksTest-human_sexuality|5\"\ : {\n \"acc\": 0.7175572519083969,\n \"acc_stderr\": 0.03948406125768361,\n\ \ \"acc_norm\": 0.7175572519083969,\n \"acc_norm_stderr\": 0.03948406125768361\n\ \ },\n 
\"harness|hendrycksTest-international_law|5\": {\n \"acc\":\ \ 0.7933884297520661,\n \"acc_stderr\": 0.03695980128098824,\n \"\ acc_norm\": 0.7933884297520661,\n \"acc_norm_stderr\": 0.03695980128098824\n\ \ },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.8333333333333334,\n\ \ \"acc_stderr\": 0.036028141763926456,\n \"acc_norm\": 0.8333333333333334,\n\ \ \"acc_norm_stderr\": 0.036028141763926456\n },\n \"harness|hendrycksTest-logical_fallacies|5\"\ : {\n \"acc\": 0.7239263803680982,\n \"acc_stderr\": 0.035123852837050475,\n\ \ \"acc_norm\": 0.7239263803680982,\n \"acc_norm_stderr\": 0.035123852837050475\n\ \ },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.44642857142857145,\n\ \ \"acc_stderr\": 0.047184714852195886,\n \"acc_norm\": 0.44642857142857145,\n\ \ \"acc_norm_stderr\": 0.047184714852195886\n },\n \"harness|hendrycksTest-management|5\"\ : {\n \"acc\": 0.8155339805825242,\n \"acc_stderr\": 0.03840423627288276,\n\ \ \"acc_norm\": 0.8155339805825242,\n \"acc_norm_stderr\": 0.03840423627288276\n\ \ },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.8803418803418803,\n\ \ \"acc_stderr\": 0.021262719400406943,\n \"acc_norm\": 0.8803418803418803,\n\ \ \"acc_norm_stderr\": 0.021262719400406943\n },\n \"harness|hendrycksTest-medical_genetics|5\"\ : {\n \"acc\": 0.67,\n \"acc_stderr\": 0.04725815626252607,\n \ \ \"acc_norm\": 0.67,\n \"acc_norm_stderr\": 0.04725815626252607\n \ \ },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.8365261813537676,\n\ \ \"acc_stderr\": 0.013223928616741622,\n \"acc_norm\": 0.8365261813537676,\n\ \ \"acc_norm_stderr\": 0.013223928616741622\n },\n \"harness|hendrycksTest-moral_disputes|5\"\ : {\n \"acc\": 0.7225433526011561,\n \"acc_stderr\": 0.024105712607754307,\n\ \ \"acc_norm\": 0.7225433526011561,\n \"acc_norm_stderr\": 0.024105712607754307\n\ \ },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.3854748603351955,\n\ \ \"acc_stderr\": 0.01627792703963819,\n 
\"acc_norm\": 0.3854748603351955,\n\ \ \"acc_norm_stderr\": 0.01627792703963819\n },\n \"harness|hendrycksTest-nutrition|5\"\ : {\n \"acc\": 0.7124183006535948,\n \"acc_stderr\": 0.02591780611714716,\n\ \ \"acc_norm\": 0.7124183006535948,\n \"acc_norm_stderr\": 0.02591780611714716\n\ \ },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.7009646302250804,\n\ \ \"acc_stderr\": 0.02600330111788514,\n \"acc_norm\": 0.7009646302250804,\n\ \ \"acc_norm_stderr\": 0.02600330111788514\n },\n \"harness|hendrycksTest-prehistory|5\"\ : {\n \"acc\": 0.7253086419753086,\n \"acc_stderr\": 0.024836057868294677,\n\ \ \"acc_norm\": 0.7253086419753086,\n \"acc_norm_stderr\": 0.024836057868294677\n\ \ },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"\ acc\": 0.5070921985815603,\n \"acc_stderr\": 0.02982449855912901,\n \ \ \"acc_norm\": 0.5070921985815603,\n \"acc_norm_stderr\": 0.02982449855912901\n\ \ },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.5078226857887875,\n\ \ \"acc_stderr\": 0.012768673076111903,\n \"acc_norm\": 0.5078226857887875,\n\ \ \"acc_norm_stderr\": 0.012768673076111903\n },\n \"harness|hendrycksTest-professional_medicine|5\"\ : {\n \"acc\": 0.5845588235294118,\n \"acc_stderr\": 0.02993534270787774,\n\ \ \"acc_norm\": 0.5845588235294118,\n \"acc_norm_stderr\": 0.02993534270787774\n\ \ },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"\ acc\": 0.673202614379085,\n \"acc_stderr\": 0.018975427920507205,\n \ \ \"acc_norm\": 0.673202614379085,\n \"acc_norm_stderr\": 0.018975427920507205\n\ \ },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.7363636363636363,\n\ \ \"acc_stderr\": 0.04220224692971987,\n \"acc_norm\": 0.7363636363636363,\n\ \ \"acc_norm_stderr\": 0.04220224692971987\n },\n \"harness|hendrycksTest-security_studies|5\"\ : {\n \"acc\": 0.7836734693877551,\n \"acc_stderr\": 0.02635891633490403,\n\ \ \"acc_norm\": 0.7836734693877551,\n \"acc_norm_stderr\": 0.02635891633490403\n\ \ },\n 
\"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.8507462686567164,\n\ \ \"acc_stderr\": 0.02519692987482708,\n \"acc_norm\": 0.8507462686567164,\n\ \ \"acc_norm_stderr\": 0.02519692987482708\n },\n \"harness|hendrycksTest-us_foreign_policy|5\"\ : {\n \"acc\": 0.87,\n \"acc_stderr\": 0.033799766898963086,\n \ \ \"acc_norm\": 0.87,\n \"acc_norm_stderr\": 0.033799766898963086\n \ \ },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.5180722891566265,\n\ \ \"acc_stderr\": 0.038899512528272166,\n \"acc_norm\": 0.5180722891566265,\n\ \ \"acc_norm_stderr\": 0.038899512528272166\n },\n \"harness|hendrycksTest-world_religions|5\"\ : {\n \"acc\": 0.8304093567251462,\n \"acc_stderr\": 0.02878210810540171,\n\ \ \"acc_norm\": 0.8304093567251462,\n \"acc_norm_stderr\": 0.02878210810540171\n\ \ },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.3684210526315789,\n\ \ \"mc1_stderr\": 0.016886551261046042,\n \"mc2\": 0.5374973802082814,\n\ \ \"mc2_stderr\": 0.015377782429548816\n }\n}\n```" repo_url: https://huggingface.co/luffycodes/higgs-llama-vicuna-ep25-70b leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard point_of_contact: clementine@hf.co configs: - config_name: harness_arc_challenge_25 data_files: - split: 2023_09_22T18_37_41.856857 path: - '**/details_harness|arc:challenge|25_2023-09-22T18-37-41.856857.parquet' - split: latest path: - '**/details_harness|arc:challenge|25_2023-09-22T18-37-41.856857.parquet' - config_name: harness_hellaswag_10 data_files: - split: 2023_09_22T18_37_41.856857 path: - '**/details_harness|hellaswag|10_2023-09-22T18-37-41.856857.parquet' - split: latest path: - '**/details_harness|hellaswag|10_2023-09-22T18-37-41.856857.parquet' - config_name: harness_hendrycksTest_5 data_files: - split: 2023_09_22T18_37_41.856857 path: - '**/details_harness|hendrycksTest-abstract_algebra|5_2023-09-22T18-37-41.856857.parquet' - '**/details_harness|hendrycksTest-anatomy|5_2023-09-22T18-37-41.856857.parquet' - 
'**/details_harness|hendrycksTest-astronomy|5_2023-09-22T18-37-41.856857.parquet' - '**/details_harness|hendrycksTest-business_ethics|5_2023-09-22T18-37-41.856857.parquet' - '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-09-22T18-37-41.856857.parquet' - '**/details_harness|hendrycksTest-college_biology|5_2023-09-22T18-37-41.856857.parquet' - '**/details_harness|hendrycksTest-college_chemistry|5_2023-09-22T18-37-41.856857.parquet' - '**/details_harness|hendrycksTest-college_computer_science|5_2023-09-22T18-37-41.856857.parquet' - '**/details_harness|hendrycksTest-college_mathematics|5_2023-09-22T18-37-41.856857.parquet' - '**/details_harness|hendrycksTest-college_medicine|5_2023-09-22T18-37-41.856857.parquet' - '**/details_harness|hendrycksTest-college_physics|5_2023-09-22T18-37-41.856857.parquet' - '**/details_harness|hendrycksTest-computer_security|5_2023-09-22T18-37-41.856857.parquet' - '**/details_harness|hendrycksTest-conceptual_physics|5_2023-09-22T18-37-41.856857.parquet' - '**/details_harness|hendrycksTest-econometrics|5_2023-09-22T18-37-41.856857.parquet' - '**/details_harness|hendrycksTest-electrical_engineering|5_2023-09-22T18-37-41.856857.parquet' - '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-09-22T18-37-41.856857.parquet' - '**/details_harness|hendrycksTest-formal_logic|5_2023-09-22T18-37-41.856857.parquet' - '**/details_harness|hendrycksTest-global_facts|5_2023-09-22T18-37-41.856857.parquet' - '**/details_harness|hendrycksTest-high_school_biology|5_2023-09-22T18-37-41.856857.parquet' - '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-09-22T18-37-41.856857.parquet' - '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-09-22T18-37-41.856857.parquet' - '**/details_harness|hendrycksTest-high_school_european_history|5_2023-09-22T18-37-41.856857.parquet' - '**/details_harness|hendrycksTest-high_school_geography|5_2023-09-22T18-37-41.856857.parquet' - 
'**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-09-22T18-37-41.856857.parquet' - '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-09-22T18-37-41.856857.parquet' - '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-09-22T18-37-41.856857.parquet' - '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-09-22T18-37-41.856857.parquet' - '**/details_harness|hendrycksTest-high_school_physics|5_2023-09-22T18-37-41.856857.parquet' - '**/details_harness|hendrycksTest-high_school_psychology|5_2023-09-22T18-37-41.856857.parquet' - '**/details_harness|hendrycksTest-high_school_statistics|5_2023-09-22T18-37-41.856857.parquet' - '**/details_harness|hendrycksTest-high_school_us_history|5_2023-09-22T18-37-41.856857.parquet' - '**/details_harness|hendrycksTest-high_school_world_history|5_2023-09-22T18-37-41.856857.parquet' - '**/details_harness|hendrycksTest-human_aging|5_2023-09-22T18-37-41.856857.parquet' - '**/details_harness|hendrycksTest-human_sexuality|5_2023-09-22T18-37-41.856857.parquet' - '**/details_harness|hendrycksTest-international_law|5_2023-09-22T18-37-41.856857.parquet' - '**/details_harness|hendrycksTest-jurisprudence|5_2023-09-22T18-37-41.856857.parquet' - '**/details_harness|hendrycksTest-logical_fallacies|5_2023-09-22T18-37-41.856857.parquet' - '**/details_harness|hendrycksTest-machine_learning|5_2023-09-22T18-37-41.856857.parquet' - '**/details_harness|hendrycksTest-management|5_2023-09-22T18-37-41.856857.parquet' - '**/details_harness|hendrycksTest-marketing|5_2023-09-22T18-37-41.856857.parquet' - '**/details_harness|hendrycksTest-medical_genetics|5_2023-09-22T18-37-41.856857.parquet' - '**/details_harness|hendrycksTest-miscellaneous|5_2023-09-22T18-37-41.856857.parquet' - '**/details_harness|hendrycksTest-moral_disputes|5_2023-09-22T18-37-41.856857.parquet' - '**/details_harness|hendrycksTest-moral_scenarios|5_2023-09-22T18-37-41.856857.parquet' - 
'**/details_harness|hendrycksTest-nutrition|5_2023-09-22T18-37-41.856857.parquet' - '**/details_harness|hendrycksTest-philosophy|5_2023-09-22T18-37-41.856857.parquet' - '**/details_harness|hendrycksTest-prehistory|5_2023-09-22T18-37-41.856857.parquet' - '**/details_harness|hendrycksTest-professional_accounting|5_2023-09-22T18-37-41.856857.parquet' - '**/details_harness|hendrycksTest-professional_law|5_2023-09-22T18-37-41.856857.parquet' - '**/details_harness|hendrycksTest-professional_medicine|5_2023-09-22T18-37-41.856857.parquet' - '**/details_harness|hendrycksTest-professional_psychology|5_2023-09-22T18-37-41.856857.parquet' - '**/details_harness|hendrycksTest-public_relations|5_2023-09-22T18-37-41.856857.parquet' - '**/details_harness|hendrycksTest-security_studies|5_2023-09-22T18-37-41.856857.parquet' - '**/details_harness|hendrycksTest-sociology|5_2023-09-22T18-37-41.856857.parquet' - '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-09-22T18-37-41.856857.parquet' - '**/details_harness|hendrycksTest-virology|5_2023-09-22T18-37-41.856857.parquet' - '**/details_harness|hendrycksTest-world_religions|5_2023-09-22T18-37-41.856857.parquet' - split: latest path: - '**/details_harness|hendrycksTest-abstract_algebra|5_2023-09-22T18-37-41.856857.parquet' - '**/details_harness|hendrycksTest-anatomy|5_2023-09-22T18-37-41.856857.parquet' - '**/details_harness|hendrycksTest-astronomy|5_2023-09-22T18-37-41.856857.parquet' - '**/details_harness|hendrycksTest-business_ethics|5_2023-09-22T18-37-41.856857.parquet' - '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-09-22T18-37-41.856857.parquet' - '**/details_harness|hendrycksTest-college_biology|5_2023-09-22T18-37-41.856857.parquet' - '**/details_harness|hendrycksTest-college_chemistry|5_2023-09-22T18-37-41.856857.parquet' - '**/details_harness|hendrycksTest-college_computer_science|5_2023-09-22T18-37-41.856857.parquet' - 
'**/details_harness|hendrycksTest-college_mathematics|5_2023-09-22T18-37-41.856857.parquet' - '**/details_harness|hendrycksTest-college_medicine|5_2023-09-22T18-37-41.856857.parquet' - '**/details_harness|hendrycksTest-college_physics|5_2023-09-22T18-37-41.856857.parquet' - '**/details_harness|hendrycksTest-computer_security|5_2023-09-22T18-37-41.856857.parquet' - '**/details_harness|hendrycksTest-conceptual_physics|5_2023-09-22T18-37-41.856857.parquet' - '**/details_harness|hendrycksTest-econometrics|5_2023-09-22T18-37-41.856857.parquet' - '**/details_harness|hendrycksTest-electrical_engineering|5_2023-09-22T18-37-41.856857.parquet' - '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-09-22T18-37-41.856857.parquet' - '**/details_harness|hendrycksTest-formal_logic|5_2023-09-22T18-37-41.856857.parquet' - '**/details_harness|hendrycksTest-global_facts|5_2023-09-22T18-37-41.856857.parquet' - '**/details_harness|hendrycksTest-high_school_biology|5_2023-09-22T18-37-41.856857.parquet' - '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-09-22T18-37-41.856857.parquet' - '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-09-22T18-37-41.856857.parquet' - '**/details_harness|hendrycksTest-high_school_european_history|5_2023-09-22T18-37-41.856857.parquet' - '**/details_harness|hendrycksTest-high_school_geography|5_2023-09-22T18-37-41.856857.parquet' - '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-09-22T18-37-41.856857.parquet' - '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-09-22T18-37-41.856857.parquet' - '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-09-22T18-37-41.856857.parquet' - '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-09-22T18-37-41.856857.parquet' - '**/details_harness|hendrycksTest-high_school_physics|5_2023-09-22T18-37-41.856857.parquet' - 
'**/details_harness|hendrycksTest-high_school_psychology|5_2023-09-22T18-37-41.856857.parquet' - '**/details_harness|hendrycksTest-high_school_statistics|5_2023-09-22T18-37-41.856857.parquet' - '**/details_harness|hendrycksTest-high_school_us_history|5_2023-09-22T18-37-41.856857.parquet' - '**/details_harness|hendrycksTest-high_school_world_history|5_2023-09-22T18-37-41.856857.parquet' - '**/details_harness|hendrycksTest-human_aging|5_2023-09-22T18-37-41.856857.parquet' - '**/details_harness|hendrycksTest-human_sexuality|5_2023-09-22T18-37-41.856857.parquet' - '**/details_harness|hendrycksTest-international_law|5_2023-09-22T18-37-41.856857.parquet' - '**/details_harness|hendrycksTest-jurisprudence|5_2023-09-22T18-37-41.856857.parquet' - '**/details_harness|hendrycksTest-logical_fallacies|5_2023-09-22T18-37-41.856857.parquet' - '**/details_harness|hendrycksTest-machine_learning|5_2023-09-22T18-37-41.856857.parquet' - '**/details_harness|hendrycksTest-management|5_2023-09-22T18-37-41.856857.parquet' - '**/details_harness|hendrycksTest-marketing|5_2023-09-22T18-37-41.856857.parquet' - '**/details_harness|hendrycksTest-medical_genetics|5_2023-09-22T18-37-41.856857.parquet' - '**/details_harness|hendrycksTest-miscellaneous|5_2023-09-22T18-37-41.856857.parquet' - '**/details_harness|hendrycksTest-moral_disputes|5_2023-09-22T18-37-41.856857.parquet' - '**/details_harness|hendrycksTest-moral_scenarios|5_2023-09-22T18-37-41.856857.parquet' - '**/details_harness|hendrycksTest-nutrition|5_2023-09-22T18-37-41.856857.parquet' - '**/details_harness|hendrycksTest-philosophy|5_2023-09-22T18-37-41.856857.parquet' - '**/details_harness|hendrycksTest-prehistory|5_2023-09-22T18-37-41.856857.parquet' - '**/details_harness|hendrycksTest-professional_accounting|5_2023-09-22T18-37-41.856857.parquet' - '**/details_harness|hendrycksTest-professional_law|5_2023-09-22T18-37-41.856857.parquet' - '**/details_harness|hendrycksTest-professional_medicine|5_2023-09-22T18-37-41.856857.parquet' - 
'**/details_harness|hendrycksTest-professional_psychology|5_2023-09-22T18-37-41.856857.parquet' - '**/details_harness|hendrycksTest-public_relations|5_2023-09-22T18-37-41.856857.parquet' - '**/details_harness|hendrycksTest-security_studies|5_2023-09-22T18-37-41.856857.parquet' - '**/details_harness|hendrycksTest-sociology|5_2023-09-22T18-37-41.856857.parquet' - '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-09-22T18-37-41.856857.parquet' - '**/details_harness|hendrycksTest-virology|5_2023-09-22T18-37-41.856857.parquet' - '**/details_harness|hendrycksTest-world_religions|5_2023-09-22T18-37-41.856857.parquet' - config_name: harness_hendrycksTest_abstract_algebra_5 data_files: - split: 2023_09_22T18_37_41.856857 path: - '**/details_harness|hendrycksTest-abstract_algebra|5_2023-09-22T18-37-41.856857.parquet' - split: latest path: - '**/details_harness|hendrycksTest-abstract_algebra|5_2023-09-22T18-37-41.856857.parquet' - config_name: harness_hendrycksTest_anatomy_5 data_files: - split: 2023_09_22T18_37_41.856857 path: - '**/details_harness|hendrycksTest-anatomy|5_2023-09-22T18-37-41.856857.parquet' - split: latest path: - '**/details_harness|hendrycksTest-anatomy|5_2023-09-22T18-37-41.856857.parquet' - config_name: harness_hendrycksTest_astronomy_5 data_files: - split: 2023_09_22T18_37_41.856857 path: - '**/details_harness|hendrycksTest-astronomy|5_2023-09-22T18-37-41.856857.parquet' - split: latest path: - '**/details_harness|hendrycksTest-astronomy|5_2023-09-22T18-37-41.856857.parquet' - config_name: harness_hendrycksTest_business_ethics_5 data_files: - split: 2023_09_22T18_37_41.856857 path: - '**/details_harness|hendrycksTest-business_ethics|5_2023-09-22T18-37-41.856857.parquet' - split: latest path: - '**/details_harness|hendrycksTest-business_ethics|5_2023-09-22T18-37-41.856857.parquet' - config_name: harness_hendrycksTest_clinical_knowledge_5 data_files: - split: 2023_09_22T18_37_41.856857 path: - 
'**/details_harness|hendrycksTest-clinical_knowledge|5_2023-09-22T18-37-41.856857.parquet' - split: latest path: - '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-09-22T18-37-41.856857.parquet' - config_name: harness_hendrycksTest_college_biology_5 data_files: - split: 2023_09_22T18_37_41.856857 path: - '**/details_harness|hendrycksTest-college_biology|5_2023-09-22T18-37-41.856857.parquet' - split: latest path: - '**/details_harness|hendrycksTest-college_biology|5_2023-09-22T18-37-41.856857.parquet' - config_name: harness_hendrycksTest_college_chemistry_5 data_files: - split: 2023_09_22T18_37_41.856857 path: - '**/details_harness|hendrycksTest-college_chemistry|5_2023-09-22T18-37-41.856857.parquet' - split: latest path: - '**/details_harness|hendrycksTest-college_chemistry|5_2023-09-22T18-37-41.856857.parquet' - config_name: harness_hendrycksTest_college_computer_science_5 data_files: - split: 2023_09_22T18_37_41.856857 path: - '**/details_harness|hendrycksTest-college_computer_science|5_2023-09-22T18-37-41.856857.parquet' - split: latest path: - '**/details_harness|hendrycksTest-college_computer_science|5_2023-09-22T18-37-41.856857.parquet' - config_name: harness_hendrycksTest_college_mathematics_5 data_files: - split: 2023_09_22T18_37_41.856857 path: - '**/details_harness|hendrycksTest-college_mathematics|5_2023-09-22T18-37-41.856857.parquet' - split: latest path: - '**/details_harness|hendrycksTest-college_mathematics|5_2023-09-22T18-37-41.856857.parquet' - config_name: harness_hendrycksTest_college_medicine_5 data_files: - split: 2023_09_22T18_37_41.856857 path: - '**/details_harness|hendrycksTest-college_medicine|5_2023-09-22T18-37-41.856857.parquet' - split: latest path: - '**/details_harness|hendrycksTest-college_medicine|5_2023-09-22T18-37-41.856857.parquet' - config_name: harness_hendrycksTest_college_physics_5 data_files: - split: 2023_09_22T18_37_41.856857 path: - 
'**/details_harness|hendrycksTest-college_physics|5_2023-09-22T18-37-41.856857.parquet' - split: latest path: - '**/details_harness|hendrycksTest-college_physics|5_2023-09-22T18-37-41.856857.parquet' - config_name: harness_hendrycksTest_computer_security_5 data_files: - split: 2023_09_22T18_37_41.856857 path: - '**/details_harness|hendrycksTest-computer_security|5_2023-09-22T18-37-41.856857.parquet' - split: latest path: - '**/details_harness|hendrycksTest-computer_security|5_2023-09-22T18-37-41.856857.parquet' - config_name: harness_hendrycksTest_conceptual_physics_5 data_files: - split: 2023_09_22T18_37_41.856857 path: - '**/details_harness|hendrycksTest-conceptual_physics|5_2023-09-22T18-37-41.856857.parquet' - split: latest path: - '**/details_harness|hendrycksTest-conceptual_physics|5_2023-09-22T18-37-41.856857.parquet' - config_name: harness_hendrycksTest_econometrics_5 data_files: - split: 2023_09_22T18_37_41.856857 path: - '**/details_harness|hendrycksTest-econometrics|5_2023-09-22T18-37-41.856857.parquet' - split: latest path: - '**/details_harness|hendrycksTest-econometrics|5_2023-09-22T18-37-41.856857.parquet' - config_name: harness_hendrycksTest_electrical_engineering_5 data_files: - split: 2023_09_22T18_37_41.856857 path: - '**/details_harness|hendrycksTest-electrical_engineering|5_2023-09-22T18-37-41.856857.parquet' - split: latest path: - '**/details_harness|hendrycksTest-electrical_engineering|5_2023-09-22T18-37-41.856857.parquet' - config_name: harness_hendrycksTest_elementary_mathematics_5 data_files: - split: 2023_09_22T18_37_41.856857 path: - '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-09-22T18-37-41.856857.parquet' - split: latest path: - '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-09-22T18-37-41.856857.parquet' - config_name: harness_hendrycksTest_formal_logic_5 data_files: - split: 2023_09_22T18_37_41.856857 path: - '**/details_harness|hendrycksTest-formal_logic|5_2023-09-22T18-37-41.856857.parquet' - 
split: latest path: - '**/details_harness|hendrycksTest-formal_logic|5_2023-09-22T18-37-41.856857.parquet' - config_name: harness_hendrycksTest_global_facts_5 data_files: - split: 2023_09_22T18_37_41.856857 path: - '**/details_harness|hendrycksTest-global_facts|5_2023-09-22T18-37-41.856857.parquet' - split: latest path: - '**/details_harness|hendrycksTest-global_facts|5_2023-09-22T18-37-41.856857.parquet' - config_name: harness_hendrycksTest_high_school_biology_5 data_files: - split: 2023_09_22T18_37_41.856857 path: - '**/details_harness|hendrycksTest-high_school_biology|5_2023-09-22T18-37-41.856857.parquet' - split: latest path: - '**/details_harness|hendrycksTest-high_school_biology|5_2023-09-22T18-37-41.856857.parquet' - config_name: harness_hendrycksTest_high_school_chemistry_5 data_files: - split: 2023_09_22T18_37_41.856857 path: - '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-09-22T18-37-41.856857.parquet' - split: latest path: - '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-09-22T18-37-41.856857.parquet' - config_name: harness_hendrycksTest_high_school_computer_science_5 data_files: - split: 2023_09_22T18_37_41.856857 path: - '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-09-22T18-37-41.856857.parquet' - split: latest path: - '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-09-22T18-37-41.856857.parquet' - config_name: harness_hendrycksTest_high_school_european_history_5 data_files: - split: 2023_09_22T18_37_41.856857 path: - '**/details_harness|hendrycksTest-high_school_european_history|5_2023-09-22T18-37-41.856857.parquet' - split: latest path: - '**/details_harness|hendrycksTest-high_school_european_history|5_2023-09-22T18-37-41.856857.parquet' - config_name: harness_hendrycksTest_high_school_geography_5 data_files: - split: 2023_09_22T18_37_41.856857 path: - '**/details_harness|hendrycksTest-high_school_geography|5_2023-09-22T18-37-41.856857.parquet' - split: latest path: - 
'**/details_harness|hendrycksTest-high_school_geography|5_2023-09-22T18-37-41.856857.parquet' - config_name: harness_hendrycksTest_high_school_government_and_politics_5 data_files: - split: 2023_09_22T18_37_41.856857 path: - '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-09-22T18-37-41.856857.parquet' - split: latest path: - '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-09-22T18-37-41.856857.parquet' - config_name: harness_hendrycksTest_high_school_macroeconomics_5 data_files: - split: 2023_09_22T18_37_41.856857 path: - '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-09-22T18-37-41.856857.parquet' - split: latest path: - '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-09-22T18-37-41.856857.parquet' - config_name: harness_hendrycksTest_high_school_mathematics_5 data_files: - split: 2023_09_22T18_37_41.856857 path: - '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-09-22T18-37-41.856857.parquet' - split: latest path: - '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-09-22T18-37-41.856857.parquet' - config_name: harness_hendrycksTest_high_school_microeconomics_5 data_files: - split: 2023_09_22T18_37_41.856857 path: - '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-09-22T18-37-41.856857.parquet' - split: latest path: - '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-09-22T18-37-41.856857.parquet' - config_name: harness_hendrycksTest_high_school_physics_5 data_files: - split: 2023_09_22T18_37_41.856857 path: - '**/details_harness|hendrycksTest-high_school_physics|5_2023-09-22T18-37-41.856857.parquet' - split: latest path: - '**/details_harness|hendrycksTest-high_school_physics|5_2023-09-22T18-37-41.856857.parquet' - config_name: harness_hendrycksTest_high_school_psychology_5 data_files: - split: 2023_09_22T18_37_41.856857 path: - 
'**/details_harness|hendrycksTest-high_school_psychology|5_2023-09-22T18-37-41.856857.parquet' - split: latest path: - '**/details_harness|hendrycksTest-high_school_psychology|5_2023-09-22T18-37-41.856857.parquet' - config_name: harness_hendrycksTest_high_school_statistics_5 data_files: - split: 2023_09_22T18_37_41.856857 path: - '**/details_harness|hendrycksTest-high_school_statistics|5_2023-09-22T18-37-41.856857.parquet' - split: latest path: - '**/details_harness|hendrycksTest-high_school_statistics|5_2023-09-22T18-37-41.856857.parquet' - config_name: harness_hendrycksTest_high_school_us_history_5 data_files: - split: 2023_09_22T18_37_41.856857 path: - '**/details_harness|hendrycksTest-high_school_us_history|5_2023-09-22T18-37-41.856857.parquet' - split: latest path: - '**/details_harness|hendrycksTest-high_school_us_history|5_2023-09-22T18-37-41.856857.parquet' - config_name: harness_hendrycksTest_high_school_world_history_5 data_files: - split: 2023_09_22T18_37_41.856857 path: - '**/details_harness|hendrycksTest-high_school_world_history|5_2023-09-22T18-37-41.856857.parquet' - split: latest path: - '**/details_harness|hendrycksTest-high_school_world_history|5_2023-09-22T18-37-41.856857.parquet' - config_name: harness_hendrycksTest_human_aging_5 data_files: - split: 2023_09_22T18_37_41.856857 path: - '**/details_harness|hendrycksTest-human_aging|5_2023-09-22T18-37-41.856857.parquet' - split: latest path: - '**/details_harness|hendrycksTest-human_aging|5_2023-09-22T18-37-41.856857.parquet' - config_name: harness_hendrycksTest_human_sexuality_5 data_files: - split: 2023_09_22T18_37_41.856857 path: - '**/details_harness|hendrycksTest-human_sexuality|5_2023-09-22T18-37-41.856857.parquet' - split: latest path: - '**/details_harness|hendrycksTest-human_sexuality|5_2023-09-22T18-37-41.856857.parquet' - config_name: harness_hendrycksTest_international_law_5 data_files: - split: 2023_09_22T18_37_41.856857 path: - 
'**/details_harness|hendrycksTest-international_law|5_2023-09-22T18-37-41.856857.parquet' - split: latest path: - '**/details_harness|hendrycksTest-international_law|5_2023-09-22T18-37-41.856857.parquet' - config_name: harness_hendrycksTest_jurisprudence_5 data_files: - split: 2023_09_22T18_37_41.856857 path: - '**/details_harness|hendrycksTest-jurisprudence|5_2023-09-22T18-37-41.856857.parquet' - split: latest path: - '**/details_harness|hendrycksTest-jurisprudence|5_2023-09-22T18-37-41.856857.parquet' - config_name: harness_hendrycksTest_logical_fallacies_5 data_files: - split: 2023_09_22T18_37_41.856857 path: - '**/details_harness|hendrycksTest-logical_fallacies|5_2023-09-22T18-37-41.856857.parquet' - split: latest path: - '**/details_harness|hendrycksTest-logical_fallacies|5_2023-09-22T18-37-41.856857.parquet' - config_name: harness_hendrycksTest_machine_learning_5 data_files: - split: 2023_09_22T18_37_41.856857 path: - '**/details_harness|hendrycksTest-machine_learning|5_2023-09-22T18-37-41.856857.parquet' - split: latest path: - '**/details_harness|hendrycksTest-machine_learning|5_2023-09-22T18-37-41.856857.parquet' - config_name: harness_hendrycksTest_management_5 data_files: - split: 2023_09_22T18_37_41.856857 path: - '**/details_harness|hendrycksTest-management|5_2023-09-22T18-37-41.856857.parquet' - split: latest path: - '**/details_harness|hendrycksTest-management|5_2023-09-22T18-37-41.856857.parquet' - config_name: harness_hendrycksTest_marketing_5 data_files: - split: 2023_09_22T18_37_41.856857 path: - '**/details_harness|hendrycksTest-marketing|5_2023-09-22T18-37-41.856857.parquet' - split: latest path: - '**/details_harness|hendrycksTest-marketing|5_2023-09-22T18-37-41.856857.parquet' - config_name: harness_hendrycksTest_medical_genetics_5 data_files: - split: 2023_09_22T18_37_41.856857 path: - '**/details_harness|hendrycksTest-medical_genetics|5_2023-09-22T18-37-41.856857.parquet' - split: latest path: - 
'**/details_harness|hendrycksTest-medical_genetics|5_2023-09-22T18-37-41.856857.parquet' - config_name: harness_hendrycksTest_miscellaneous_5 data_files: - split: 2023_09_22T18_37_41.856857 path: - '**/details_harness|hendrycksTest-miscellaneous|5_2023-09-22T18-37-41.856857.parquet' - split: latest path: - '**/details_harness|hendrycksTest-miscellaneous|5_2023-09-22T18-37-41.856857.parquet' - config_name: harness_hendrycksTest_moral_disputes_5 data_files: - split: 2023_09_22T18_37_41.856857 path: - '**/details_harness|hendrycksTest-moral_disputes|5_2023-09-22T18-37-41.856857.parquet' - split: latest path: - '**/details_harness|hendrycksTest-moral_disputes|5_2023-09-22T18-37-41.856857.parquet' - config_name: harness_hendrycksTest_moral_scenarios_5 data_files: - split: 2023_09_22T18_37_41.856857 path: - '**/details_harness|hendrycksTest-moral_scenarios|5_2023-09-22T18-37-41.856857.parquet' - split: latest path: - '**/details_harness|hendrycksTest-moral_scenarios|5_2023-09-22T18-37-41.856857.parquet' - config_name: harness_hendrycksTest_nutrition_5 data_files: - split: 2023_09_22T18_37_41.856857 path: - '**/details_harness|hendrycksTest-nutrition|5_2023-09-22T18-37-41.856857.parquet' - split: latest path: - '**/details_harness|hendrycksTest-nutrition|5_2023-09-22T18-37-41.856857.parquet' - config_name: harness_hendrycksTest_philosophy_5 data_files: - split: 2023_09_22T18_37_41.856857 path: - '**/details_harness|hendrycksTest-philosophy|5_2023-09-22T18-37-41.856857.parquet' - split: latest path: - '**/details_harness|hendrycksTest-philosophy|5_2023-09-22T18-37-41.856857.parquet' - config_name: harness_hendrycksTest_prehistory_5 data_files: - split: 2023_09_22T18_37_41.856857 path: - '**/details_harness|hendrycksTest-prehistory|5_2023-09-22T18-37-41.856857.parquet' - split: latest path: - '**/details_harness|hendrycksTest-prehistory|5_2023-09-22T18-37-41.856857.parquet' - config_name: harness_hendrycksTest_professional_accounting_5 data_files: - split: 
2023_09_22T18_37_41.856857 path: - '**/details_harness|hendrycksTest-professional_accounting|5_2023-09-22T18-37-41.856857.parquet' - split: latest path: - '**/details_harness|hendrycksTest-professional_accounting|5_2023-09-22T18-37-41.856857.parquet' - config_name: harness_hendrycksTest_professional_law_5 data_files: - split: 2023_09_22T18_37_41.856857 path: - '**/details_harness|hendrycksTest-professional_law|5_2023-09-22T18-37-41.856857.parquet' - split: latest path: - '**/details_harness|hendrycksTest-professional_law|5_2023-09-22T18-37-41.856857.parquet' - config_name: harness_hendrycksTest_professional_medicine_5 data_files: - split: 2023_09_22T18_37_41.856857 path: - '**/details_harness|hendrycksTest-professional_medicine|5_2023-09-22T18-37-41.856857.parquet' - split: latest path: - '**/details_harness|hendrycksTest-professional_medicine|5_2023-09-22T18-37-41.856857.parquet' - config_name: harness_hendrycksTest_professional_psychology_5 data_files: - split: 2023_09_22T18_37_41.856857 path: - '**/details_harness|hendrycksTest-professional_psychology|5_2023-09-22T18-37-41.856857.parquet' - split: latest path: - '**/details_harness|hendrycksTest-professional_psychology|5_2023-09-22T18-37-41.856857.parquet' - config_name: harness_hendrycksTest_public_relations_5 data_files: - split: 2023_09_22T18_37_41.856857 path: - '**/details_harness|hendrycksTest-public_relations|5_2023-09-22T18-37-41.856857.parquet' - split: latest path: - '**/details_harness|hendrycksTest-public_relations|5_2023-09-22T18-37-41.856857.parquet' - config_name: harness_hendrycksTest_security_studies_5 data_files: - split: 2023_09_22T18_37_41.856857 path: - '**/details_harness|hendrycksTest-security_studies|5_2023-09-22T18-37-41.856857.parquet' - split: latest path: - '**/details_harness|hendrycksTest-security_studies|5_2023-09-22T18-37-41.856857.parquet' - config_name: harness_hendrycksTest_sociology_5 data_files: - split: 2023_09_22T18_37_41.856857 path: - 
'**/details_harness|hendrycksTest-sociology|5_2023-09-22T18-37-41.856857.parquet' - split: latest path: - '**/details_harness|hendrycksTest-sociology|5_2023-09-22T18-37-41.856857.parquet' - config_name: harness_hendrycksTest_us_foreign_policy_5 data_files: - split: 2023_09_22T18_37_41.856857 path: - '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-09-22T18-37-41.856857.parquet' - split: latest path: - '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-09-22T18-37-41.856857.parquet' - config_name: harness_hendrycksTest_virology_5 data_files: - split: 2023_09_22T18_37_41.856857 path: - '**/details_harness|hendrycksTest-virology|5_2023-09-22T18-37-41.856857.parquet' - split: latest path: - '**/details_harness|hendrycksTest-virology|5_2023-09-22T18-37-41.856857.parquet' - config_name: harness_hendrycksTest_world_religions_5 data_files: - split: 2023_09_22T18_37_41.856857 path: - '**/details_harness|hendrycksTest-world_religions|5_2023-09-22T18-37-41.856857.parquet' - split: latest path: - '**/details_harness|hendrycksTest-world_religions|5_2023-09-22T18-37-41.856857.parquet' - config_name: harness_truthfulqa_mc_0 data_files: - split: 2023_09_22T18_37_41.856857 path: - '**/details_harness|truthfulqa:mc|0_2023-09-22T18-37-41.856857.parquet' - split: latest path: - '**/details_harness|truthfulqa:mc|0_2023-09-22T18-37-41.856857.parquet' - config_name: results data_files: - split: 2023_09_22T18_37_41.856857 path: - results_2023-09-22T18-37-41.856857.parquet - split: latest path: - results_2023-09-22T18-37-41.856857.parquet --- # Dataset Card for Evaluation run of luffycodes/higgs-llama-vicuna-ep25-70b ## Dataset Description - **Homepage:** - **Repository:** https://huggingface.co/luffycodes/higgs-llama-vicuna-ep25-70b - **Paper:** - **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard - **Point of Contact:** clementine@hf.co ### Dataset Summary Dataset automatically created during the evaluation run of model 
[luffycodes/higgs-llama-vicuna-ep25-70b](https://huggingface.co/luffycodes/higgs-llama-vicuna-ep25-70b) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard). The dataset is composed of 61 configurations, each one corresponding to one of the evaluated tasks. The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results. An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)). To load the details from a run, you can for instance do the following: ```python from datasets import load_dataset data = load_dataset("open-llm-leaderboard/details_luffycodes__higgs-llama-vicuna-ep25-70b", "harness_truthfulqa_mc_0", split="train") ``` ## Latest results These are the [latest results from run 2023-09-22T18:37:41.856857](https://huggingface.co/datasets/open-llm-leaderboard/details_luffycodes__higgs-llama-vicuna-ep25-70b/blob/main/results_2023-09-22T18-37-41.856857.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks. 
You find each in the results and the "latest" split for each eval): ```python { "all": { "acc": 0.6420797619490708, "acc_stderr": 0.03261955033835842, "acc_norm": 0.6458841283778145, "acc_norm_stderr": 0.0325948592191213, "mc1": 0.3684210526315789, "mc1_stderr": 0.016886551261046042, "mc2": 0.5374973802082814, "mc2_stderr": 0.015377782429548816 }, "harness|arc:challenge|25": { "acc": 0.5853242320819113, "acc_stderr": 0.014397070564409174, "acc_norm": 0.6228668941979523, "acc_norm_stderr": 0.014163366896192601 }, "harness|hellaswag|10": { "acc": 0.673770165305716, "acc_stderr": 0.004678743563766661, "acc_norm": 0.8606851224855606, "acc_norm_stderr": 0.003455671196993115 }, "harness|hendrycksTest-abstract_algebra|5": { "acc": 0.29, "acc_stderr": 0.045604802157206845, "acc_norm": 0.29, "acc_norm_stderr": 0.045604802157206845 }, "harness|hendrycksTest-anatomy|5": { "acc": 0.4962962962962963, "acc_stderr": 0.043192236258113303, "acc_norm": 0.4962962962962963, "acc_norm_stderr": 0.043192236258113303 }, "harness|hendrycksTest-astronomy|5": { "acc": 0.75, "acc_stderr": 0.03523807393012047, "acc_norm": 0.75, "acc_norm_stderr": 0.03523807393012047 }, "harness|hendrycksTest-business_ethics|5": { "acc": 0.7, "acc_stderr": 0.046056618647183814, "acc_norm": 0.7, "acc_norm_stderr": 0.046056618647183814 }, "harness|hendrycksTest-clinical_knowledge|5": { "acc": 0.6528301886792452, "acc_stderr": 0.029300101705549652, "acc_norm": 0.6528301886792452, "acc_norm_stderr": 0.029300101705549652 }, "harness|hendrycksTest-college_biology|5": { "acc": 0.7569444444444444, "acc_stderr": 0.03586879280080341, "acc_norm": 0.7569444444444444, "acc_norm_stderr": 0.03586879280080341 }, "harness|hendrycksTest-college_chemistry|5": { "acc": 0.49, "acc_stderr": 0.05024183937956912, "acc_norm": 0.49, "acc_norm_stderr": 0.05024183937956912 }, "harness|hendrycksTest-college_computer_science|5": { "acc": 0.55, "acc_stderr": 0.049999999999999996, "acc_norm": 0.55, "acc_norm_stderr": 0.049999999999999996 }, 
"harness|hendrycksTest-college_mathematics|5": { "acc": 0.35, "acc_stderr": 0.047937248544110196, "acc_norm": 0.35, "acc_norm_stderr": 0.047937248544110196 }, "harness|hendrycksTest-college_medicine|5": { "acc": 0.630057803468208, "acc_stderr": 0.0368122963339432, "acc_norm": 0.630057803468208, "acc_norm_stderr": 0.0368122963339432 }, "harness|hendrycksTest-college_physics|5": { "acc": 0.29411764705882354, "acc_stderr": 0.04533838195929775, "acc_norm": 0.29411764705882354, "acc_norm_stderr": 0.04533838195929775 }, "harness|hendrycksTest-computer_security|5": { "acc": 0.69, "acc_stderr": 0.04648231987117316, "acc_norm": 0.69, "acc_norm_stderr": 0.04648231987117316 }, "harness|hendrycksTest-conceptual_physics|5": { "acc": 0.5914893617021276, "acc_stderr": 0.032134180267015755, "acc_norm": 0.5914893617021276, "acc_norm_stderr": 0.032134180267015755 }, "harness|hendrycksTest-econometrics|5": { "acc": 0.39473684210526316, "acc_stderr": 0.04598188057816541, "acc_norm": 0.39473684210526316, "acc_norm_stderr": 0.04598188057816541 }, "harness|hendrycksTest-electrical_engineering|5": { "acc": 0.5724137931034483, "acc_stderr": 0.04122737111370333, "acc_norm": 0.5724137931034483, "acc_norm_stderr": 0.04122737111370333 }, "harness|hendrycksTest-elementary_mathematics|5": { "acc": 0.4126984126984127, "acc_stderr": 0.025355741263055287, "acc_norm": 0.4126984126984127, "acc_norm_stderr": 0.025355741263055287 }, "harness|hendrycksTest-formal_logic|5": { "acc": 0.40476190476190477, "acc_stderr": 0.04390259265377562, "acc_norm": 0.40476190476190477, "acc_norm_stderr": 0.04390259265377562 }, "harness|hendrycksTest-global_facts|5": { "acc": 0.43, "acc_stderr": 0.049756985195624284, "acc_norm": 0.43, "acc_norm_stderr": 0.049756985195624284 }, "harness|hendrycksTest-high_school_biology|5": { "acc": 0.7580645161290323, "acc_stderr": 0.024362599693031083, "acc_norm": 0.7580645161290323, "acc_norm_stderr": 0.024362599693031083 }, "harness|hendrycksTest-high_school_chemistry|5": { "acc": 
0.49261083743842365, "acc_stderr": 0.03517603540361008, "acc_norm": 0.49261083743842365, "acc_norm_stderr": 0.03517603540361008 }, "harness|hendrycksTest-high_school_computer_science|5": { "acc": 0.67, "acc_stderr": 0.04725815626252607, "acc_norm": 0.67, "acc_norm_stderr": 0.04725815626252607 }, "harness|hendrycksTest-high_school_european_history|5": { "acc": 0.793939393939394, "acc_stderr": 0.03158415324047709, "acc_norm": 0.793939393939394, "acc_norm_stderr": 0.03158415324047709 }, "harness|hendrycksTest-high_school_geography|5": { "acc": 0.8232323232323232, "acc_stderr": 0.027178752639044915, "acc_norm": 0.8232323232323232, "acc_norm_stderr": 0.027178752639044915 }, "harness|hendrycksTest-high_school_government_and_politics|5": { "acc": 0.9015544041450777, "acc_stderr": 0.021500249576033442, "acc_norm": 0.9015544041450777, "acc_norm_stderr": 0.021500249576033442 }, "harness|hendrycksTest-high_school_macroeconomics|5": { "acc": 0.6615384615384615, "acc_stderr": 0.023991500500313036, "acc_norm": 0.6615384615384615, "acc_norm_stderr": 0.023991500500313036 }, "harness|hendrycksTest-high_school_mathematics|5": { "acc": 0.3111111111111111, "acc_stderr": 0.02822644674968352, "acc_norm": 0.3111111111111111, "acc_norm_stderr": 0.02822644674968352 }, "harness|hendrycksTest-high_school_microeconomics|5": { "acc": 0.7142857142857143, "acc_stderr": 0.02934457250063435, "acc_norm": 0.7142857142857143, "acc_norm_stderr": 0.02934457250063435 }, "harness|hendrycksTest-high_school_physics|5": { "acc": 0.44370860927152317, "acc_stderr": 0.04056527902281732, "acc_norm": 0.44370860927152317, "acc_norm_stderr": 0.04056527902281732 }, "harness|hendrycksTest-high_school_psychology|5": { "acc": 0.8550458715596331, "acc_stderr": 0.015094215699700464, "acc_norm": 0.8550458715596331, "acc_norm_stderr": 0.015094215699700464 }, "harness|hendrycksTest-high_school_statistics|5": { "acc": 0.49074074074074076, "acc_stderr": 0.034093869469927006, "acc_norm": 0.49074074074074076, 
"acc_norm_stderr": 0.034093869469927006 }, "harness|hendrycksTest-high_school_us_history|5": { "acc": 0.8725490196078431, "acc_stderr": 0.023405530480846322, "acc_norm": 0.8725490196078431, "acc_norm_stderr": 0.023405530480846322 }, "harness|hendrycksTest-high_school_world_history|5": { "acc": 0.8312236286919831, "acc_stderr": 0.024381406832586234, "acc_norm": 0.8312236286919831, "acc_norm_stderr": 0.024381406832586234 }, "harness|hendrycksTest-human_aging|5": { "acc": 0.7219730941704036, "acc_stderr": 0.03006958487449405, "acc_norm": 0.7219730941704036, "acc_norm_stderr": 0.03006958487449405 }, "harness|hendrycksTest-human_sexuality|5": { "acc": 0.7175572519083969, "acc_stderr": 0.03948406125768361, "acc_norm": 0.7175572519083969, "acc_norm_stderr": 0.03948406125768361 }, "harness|hendrycksTest-international_law|5": { "acc": 0.7933884297520661, "acc_stderr": 0.03695980128098824, "acc_norm": 0.7933884297520661, "acc_norm_stderr": 0.03695980128098824 }, "harness|hendrycksTest-jurisprudence|5": { "acc": 0.8333333333333334, "acc_stderr": 0.036028141763926456, "acc_norm": 0.8333333333333334, "acc_norm_stderr": 0.036028141763926456 }, "harness|hendrycksTest-logical_fallacies|5": { "acc": 0.7239263803680982, "acc_stderr": 0.035123852837050475, "acc_norm": 0.7239263803680982, "acc_norm_stderr": 0.035123852837050475 }, "harness|hendrycksTest-machine_learning|5": { "acc": 0.44642857142857145, "acc_stderr": 0.047184714852195886, "acc_norm": 0.44642857142857145, "acc_norm_stderr": 0.047184714852195886 }, "harness|hendrycksTest-management|5": { "acc": 0.8155339805825242, "acc_stderr": 0.03840423627288276, "acc_norm": 0.8155339805825242, "acc_norm_stderr": 0.03840423627288276 }, "harness|hendrycksTest-marketing|5": { "acc": 0.8803418803418803, "acc_stderr": 0.021262719400406943, "acc_norm": 0.8803418803418803, "acc_norm_stderr": 0.021262719400406943 }, "harness|hendrycksTest-medical_genetics|5": { "acc": 0.67, "acc_stderr": 0.04725815626252607, "acc_norm": 0.67, 
"acc_norm_stderr": 0.04725815626252607 }, "harness|hendrycksTest-miscellaneous|5": { "acc": 0.8365261813537676, "acc_stderr": 0.013223928616741622, "acc_norm": 0.8365261813537676, "acc_norm_stderr": 0.013223928616741622 }, "harness|hendrycksTest-moral_disputes|5": { "acc": 0.7225433526011561, "acc_stderr": 0.024105712607754307, "acc_norm": 0.7225433526011561, "acc_norm_stderr": 0.024105712607754307 }, "harness|hendrycksTest-moral_scenarios|5": { "acc": 0.3854748603351955, "acc_stderr": 0.01627792703963819, "acc_norm": 0.3854748603351955, "acc_norm_stderr": 0.01627792703963819 }, "harness|hendrycksTest-nutrition|5": { "acc": 0.7124183006535948, "acc_stderr": 0.02591780611714716, "acc_norm": 0.7124183006535948, "acc_norm_stderr": 0.02591780611714716 }, "harness|hendrycksTest-philosophy|5": { "acc": 0.7009646302250804, "acc_stderr": 0.02600330111788514, "acc_norm": 0.7009646302250804, "acc_norm_stderr": 0.02600330111788514 }, "harness|hendrycksTest-prehistory|5": { "acc": 0.7253086419753086, "acc_stderr": 0.024836057868294677, "acc_norm": 0.7253086419753086, "acc_norm_stderr": 0.024836057868294677 }, "harness|hendrycksTest-professional_accounting|5": { "acc": 0.5070921985815603, "acc_stderr": 0.02982449855912901, "acc_norm": 0.5070921985815603, "acc_norm_stderr": 0.02982449855912901 }, "harness|hendrycksTest-professional_law|5": { "acc": 0.5078226857887875, "acc_stderr": 0.012768673076111903, "acc_norm": 0.5078226857887875, "acc_norm_stderr": 0.012768673076111903 }, "harness|hendrycksTest-professional_medicine|5": { "acc": 0.5845588235294118, "acc_stderr": 0.02993534270787774, "acc_norm": 0.5845588235294118, "acc_norm_stderr": 0.02993534270787774 }, "harness|hendrycksTest-professional_psychology|5": { "acc": 0.673202614379085, "acc_stderr": 0.018975427920507205, "acc_norm": 0.673202614379085, "acc_norm_stderr": 0.018975427920507205 }, "harness|hendrycksTest-public_relations|5": { "acc": 0.7363636363636363, "acc_stderr": 0.04220224692971987, "acc_norm": 
0.7363636363636363, "acc_norm_stderr": 0.04220224692971987 }, "harness|hendrycksTest-security_studies|5": { "acc": 0.7836734693877551, "acc_stderr": 0.02635891633490403, "acc_norm": 0.7836734693877551, "acc_norm_stderr": 0.02635891633490403 }, "harness|hendrycksTest-sociology|5": { "acc": 0.8507462686567164, "acc_stderr": 0.02519692987482708, "acc_norm": 0.8507462686567164, "acc_norm_stderr": 0.02519692987482708 }, "harness|hendrycksTest-us_foreign_policy|5": { "acc": 0.87, "acc_stderr": 0.033799766898963086, "acc_norm": 0.87, "acc_norm_stderr": 0.033799766898963086 }, "harness|hendrycksTest-virology|5": { "acc": 0.5180722891566265, "acc_stderr": 0.038899512528272166, "acc_norm": 0.5180722891566265, "acc_norm_stderr": 0.038899512528272166 }, "harness|hendrycksTest-world_religions|5": { "acc": 0.8304093567251462, "acc_stderr": 0.02878210810540171, "acc_norm": 0.8304093567251462, "acc_norm_stderr": 0.02878210810540171 }, "harness|truthfulqa:mc|0": { "mc1": 0.3684210526315789, "mc1_stderr": 0.016886551261046042, "mc2": 0.5374973802082814, "mc2_stderr": 0.015377782429548816 } } ``` ### Supported Tasks and Leaderboards [More Information Needed] ### Languages [More Information Needed] ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? 
[More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information [More Information Needed] ### Contributions [More Information Needed]
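As a side note on the configuration listing above: each per-run split name appears to be derived from the run timestamp by substituting the `-` and `:` separators with `_` (compare the split `2023_09_22T18_37_41.856857` with the timestamp `2023-09-22T18:37:41.856857` embedded in the parquet filenames). A minimal sketch of that assumed mapping — `timestamp_to_split` is a hypothetical helper, not part of the leaderboard tooling:

```python
def timestamp_to_split(timestamp: str) -> str:
    """Map a run timestamp such as '2023-09-22T18:37:41.856857'
    to the corresponding split name used in this repo,
    assuming the naming simply replaces '-' and ':' with '_'."""
    return timestamp.replace("-", "_").replace(":", "_")

# Example: reconstruct the split name for the run shown in this card.
print(timestamp_to_split("2023-09-22T18:37:41.856857"))
# → 2023_09_22T18_37_41.856857
```

This can be handy when passing a historical run's split name to `load_dataset` instead of `"latest"`.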
open-llm-leaderboard/details_GeorgiaTechResearchInstitute__galactica-6.7b-evol-instruct-70k
2023-09-22T18:49:20.000Z
[ "region:us" ]
open-llm-leaderboard
null
null
null
0
0
--- pretty_name: Evaluation run of GeorgiaTechResearchInstitute/galactica-6.7b-evol-instruct-70k dataset_summary: "Dataset automatically created during the evaluation run of model\ \ [GeorgiaTechResearchInstitute/galactica-6.7b-evol-instruct-70k](https://huggingface.co/GeorgiaTechResearchInstitute/galactica-6.7b-evol-instruct-70k)\ \ on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\ \nThe dataset is composed of 3 configuration, each one coresponding to one of the\ \ evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be\ \ found as a specific split in each configuration, the split being named using the\ \ timestamp of the run.The \"train\" split is always pointing to the latest results.\n\ \nAn additional configuration \"results\" store all the aggregated results of the\ \ run (and is used to compute and display the agregated metrics on the [Open LLM\ \ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\ \nTo load the details from a run, you can for instance do the following:\n```python\n\ from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_GeorgiaTechResearchInstitute__galactica-6.7b-evol-instruct-70k\"\ ,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\ These are the [latest results from run 2023-09-22T18:49:08.232237](https://huggingface.co/datasets/open-llm-leaderboard/details_GeorgiaTechResearchInstitute__galactica-6.7b-evol-instruct-70k/blob/main/results_2023-09-22T18-49-08.232237.json)(note\ \ that their might be results for other tasks in the repos if successive evals didn't\ \ cover the same tasks. 
You find each in the results and the \"latest\" split for\ \ each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.03838087248322147,\n\ \ \"em_stderr\": 0.0019674269651511014,\n \"f1\": 0.12162856543624175,\n\ \ \"f1_stderr\": 0.0024752721568615517,\n \"acc\": 0.28326869809409316,\n\ \ \"acc_stderr\": 0.00781704702542305\n },\n \"harness|drop|3\": {\n\ \ \"em\": 0.03838087248322147,\n \"em_stderr\": 0.0019674269651511014,\n\ \ \"f1\": 0.12162856543624175,\n \"f1_stderr\": 0.0024752721568615517\n\ \ },\n \"harness|gsm8k|5\": {\n \"acc\": 0.0037907505686125853,\n \ \ \"acc_stderr\": 0.0016927007401501821\n },\n \"harness|winogrande|5\"\ : {\n \"acc\": 0.5627466456195738,\n \"acc_stderr\": 0.013941393310695918\n\ \ }\n}\n```" repo_url: https://huggingface.co/GeorgiaTechResearchInstitute/galactica-6.7b-evol-instruct-70k leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard point_of_contact: clementine@hf.co configs: - config_name: harness_drop_3 data_files: - split: 2023_09_22T18_49_08.232237 path: - '**/details_harness|drop|3_2023-09-22T18-49-08.232237.parquet' - split: latest path: - '**/details_harness|drop|3_2023-09-22T18-49-08.232237.parquet' - config_name: harness_gsm8k_5 data_files: - split: 2023_09_22T18_49_08.232237 path: - '**/details_harness|gsm8k|5_2023-09-22T18-49-08.232237.parquet' - split: latest path: - '**/details_harness|gsm8k|5_2023-09-22T18-49-08.232237.parquet' - config_name: harness_winogrande_5 data_files: - split: 2023_09_22T18_49_08.232237 path: - '**/details_harness|winogrande|5_2023-09-22T18-49-08.232237.parquet' - split: latest path: - '**/details_harness|winogrande|5_2023-09-22T18-49-08.232237.parquet' - config_name: results data_files: - split: 2023_09_22T18_49_08.232237 path: - results_2023-09-22T18-49-08.232237.parquet - split: latest path: - results_2023-09-22T18-49-08.232237.parquet --- # Dataset Card for Evaluation run of GeorgiaTechResearchInstitute/galactica-6.7b-evol-instruct-70k ## Dataset Description - 
**Homepage:** - **Repository:** https://huggingface.co/GeorgiaTechResearchInstitute/galactica-6.7b-evol-instruct-70k - **Paper:** - **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard - **Point of Contact:** clementine@hf.co ### Dataset Summary Dataset automatically created during the evaluation run of model [GeorgiaTechResearchInstitute/galactica-6.7b-evol-instruct-70k](https://huggingface.co/GeorgiaTechResearchInstitute/galactica-6.7b-evol-instruct-70k) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard). The dataset is composed of 3 configurations, each one corresponding to one of the evaluated tasks. The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results. An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)). To load the details from a run, you can for instance do the following: ```python from datasets import load_dataset data = load_dataset("open-llm-leaderboard/details_GeorgiaTechResearchInstitute__galactica-6.7b-evol-instruct-70k", "harness_winogrande_5", split="train") ``` ## Latest results These are the [latest results from run 2023-09-22T18:49:08.232237](https://huggingface.co/datasets/open-llm-leaderboard/details_GeorgiaTechResearchInstitute__galactica-6.7b-evol-instruct-70k/blob/main/results_2023-09-22T18-49-08.232237.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks. 
You find each in the results and the "latest" split for each eval): ```python { "all": { "em": 0.03838087248322147, "em_stderr": 0.0019674269651511014, "f1": 0.12162856543624175, "f1_stderr": 0.0024752721568615517, "acc": 0.28326869809409316, "acc_stderr": 0.00781704702542305 }, "harness|drop|3": { "em": 0.03838087248322147, "em_stderr": 0.0019674269651511014, "f1": 0.12162856543624175, "f1_stderr": 0.0024752721568615517 }, "harness|gsm8k|5": { "acc": 0.0037907505686125853, "acc_stderr": 0.0016927007401501821 }, "harness|winogrande|5": { "acc": 0.5627466456195738, "acc_stderr": 0.013941393310695918 } } ``` ### Supported Tasks and Leaderboards [More Information Needed] ### Languages [More Information Needed] ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information [More Information Needed] ### Contributions [More Information Needed]
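The nested per-task results shown above can be awkward to scan; a small sketch of flattening such a dict into `(task, metric, value)` rows for quick inspection. Note that `results` here is a hand-copied excerpt of the JSON in this card, not loaded from the repo:

```python
# Excerpt of the per-task results from this card (hand-copied, not fetched).
results = {
    "harness|drop|3": {"em": 0.03838087248322147, "f1": 0.12162856543624175},
    "harness|gsm8k|5": {"acc": 0.0037907505686125853},
    "harness|winogrande|5": {"acc": 0.5627466456195738},
}

# Flatten the nested dict into sorted (task, metric, value) rows.
rows = [
    (task, metric, value)
    for task, metrics in sorted(results.items())
    for metric, value in sorted(metrics.items())
]

for task, metric, value in rows:
    print(f"{task:<22} {metric:<4} {value:.4f}")
```

The same pattern applies to the larger MMLU-style results dicts in the other evaluation-run cards.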
Isaacsa7/Modelo_Treinado
2023-09-22T19:14:22.000Z
[ "region:us" ]
Isaacsa7
null
null
null
0
0
Entry not found
NyxSlee/cool_new_dataset
2023-09-22T19:02:38.000Z
[ "region:us" ]
NyxSlee
null
null
null
0
0
--- dataset_info: features: - name: name dtype: string - name: description dtype: string - name: price dtype: float64 - name: color dtype: string - name: size sequence: string - name: ad dtype: string splits: - name: train num_bytes: 5020 num_examples: 5 download_size: 11617 dataset_size: 5020 configs: - config_name: default data_files: - split: train path: data/train-* --- # Dataset Card for "cool_new_dataset" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
erenoz/your-dataset-name
2023-09-22T19:03:05.000Z
[ "region:us" ]
erenoz
null
null
null
0
0
Entry not found
axelprsvl/my_dataset
2023-09-22T19:03:21.000Z
[ "region:us" ]
axelprsvl
null
null
null
0
0
--- dataset_info: features: - name: audio dtype: audio splits: - name: train num_bytes: 40520175.0 num_examples: 5 download_size: 40474142 dataset_size: 40520175.0 --- # Dataset Card for "my_dataset" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
mmuttharasan/llmjp2
2023-09-22T19:19:58.000Z
[ "region:us" ]
mmuttharasan
null
null
null
0
0
--- dataset_info: features: - name: text dtype: string splits: - name: train num_bytes: 592043 num_examples: 1 download_size: 0 dataset_size: 592043 configs: - config_name: default data_files: - split: train path: data/train-* --- # Dataset Card for "llmjp2" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
jgp756/m
2023-09-22T19:32:56.000Z
[ "license:openrail", "region:us" ]
jgp756
null
null
null
0
0
--- license: openrail ---
Raspado/Lucasdataset
2023-09-22T19:45:04.000Z
[ "region:us" ]
Raspado
null
null
null
0
0
Entry not found
open-llm-leaderboard/details_zarakiquemparte__zarafusionix-l2-7b
2023-09-22T19:56:23.000Z
[ "region:us" ]
open-llm-leaderboard
null
null
null
0
0
--- pretty_name: Evaluation run of zarakiquemparte/zarafusionix-l2-7b dataset_summary: "Dataset automatically created during the evaluation run of model\ \ [zarakiquemparte/zarafusionix-l2-7b](https://huggingface.co/zarakiquemparte/zarafusionix-l2-7b)\ \ on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\ \nThe dataset is composed of 3 configuration, each one coresponding to one of the\ \ evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be\ \ found as a specific split in each configuration, the split being named using the\ \ timestamp of the run.The \"train\" split is always pointing to the latest results.\n\ \nAn additional configuration \"results\" store all the aggregated results of the\ \ run (and is used to compute and display the agregated metrics on the [Open LLM\ \ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\ \nTo load the details from a run, you can for instance do the following:\n```python\n\ from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_zarakiquemparte__zarafusionix-l2-7b\"\ ,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\ These are the [latest results from run 2023-09-22T19:56:11.100071](https://huggingface.co/datasets/open-llm-leaderboard/details_zarakiquemparte__zarafusionix-l2-7b/blob/main/results_2023-09-22T19-56-11.100071.json)(note\ \ that their might be results for other tasks in the repos if successive evals didn't\ \ cover the same tasks. 
You find each in the results and the \"latest\" split for\ \ each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.20669043624161074,\n\ \ \"em_stderr\": 0.004146877317311672,\n \"f1\": 0.29368812919463155,\n\ \ \"f1_stderr\": 0.004195906469994281,\n \"acc\": 0.40933494018871774,\n\ \ \"acc_stderr\": 0.009672451208885371\n },\n \"harness|drop|3\": {\n\ \ \"em\": 0.20669043624161074,\n \"em_stderr\": 0.004146877317311672,\n\ \ \"f1\": 0.29368812919463155,\n \"f1_stderr\": 0.004195906469994281\n\ \ },\n \"harness|gsm8k|5\": {\n \"acc\": 0.07202426080363912,\n \ \ \"acc_stderr\": 0.007121147983537124\n },\n \"harness|winogrande|5\"\ : {\n \"acc\": 0.7466456195737964,\n \"acc_stderr\": 0.012223754434233618\n\ \ }\n}\n```" repo_url: https://huggingface.co/zarakiquemparte/zarafusionix-l2-7b leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard point_of_contact: clementine@hf.co configs: - config_name: harness_drop_3 data_files: - split: 2023_09_22T19_56_11.100071 path: - '**/details_harness|drop|3_2023-09-22T19-56-11.100071.parquet' - split: latest path: - '**/details_harness|drop|3_2023-09-22T19-56-11.100071.parquet' - config_name: harness_gsm8k_5 data_files: - split: 2023_09_22T19_56_11.100071 path: - '**/details_harness|gsm8k|5_2023-09-22T19-56-11.100071.parquet' - split: latest path: - '**/details_harness|gsm8k|5_2023-09-22T19-56-11.100071.parquet' - config_name: harness_winogrande_5 data_files: - split: 2023_09_22T19_56_11.100071 path: - '**/details_harness|winogrande|5_2023-09-22T19-56-11.100071.parquet' - split: latest path: - '**/details_harness|winogrande|5_2023-09-22T19-56-11.100071.parquet' - config_name: results data_files: - split: 2023_09_22T19_56_11.100071 path: - results_2023-09-22T19-56-11.100071.parquet - split: latest path: - results_2023-09-22T19-56-11.100071.parquet --- # Dataset Card for Evaluation run of zarakiquemparte/zarafusionix-l2-7b ## Dataset Description - **Homepage:** - **Repository:** 
https://huggingface.co/zarakiquemparte/zarafusionix-l2-7b - **Paper:** - **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard - **Point of Contact:** clementine@hf.co ### Dataset Summary Dataset automatically created during the evaluation run of model [zarakiquemparte/zarafusionix-l2-7b](https://huggingface.co/zarakiquemparte/zarafusionix-l2-7b) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard). The dataset is composed of 3 configurations, each one corresponding to one of the evaluated tasks. The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest results. An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)). To load the details from a run, you can for instance do the following: ```python from datasets import load_dataset data = load_dataset("open-llm-leaderboard/details_zarakiquemparte__zarafusionix-l2-7b", "harness_winogrande_5", split="train") ``` ## Latest results These are the [latest results from run 2023-09-22T19:56:11.100071](https://huggingface.co/datasets/open-llm-leaderboard/details_zarakiquemparte__zarafusionix-l2-7b/blob/main/results_2023-09-22T19-56-11.100071.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. 
You find each in the results and the "latest" split for each eval): ```python { "all": { "em": 0.20669043624161074, "em_stderr": 0.004146877317311672, "f1": 0.29368812919463155, "f1_stderr": 0.004195906469994281, "acc": 0.40933494018871774, "acc_stderr": 0.009672451208885371 }, "harness|drop|3": { "em": 0.20669043624161074, "em_stderr": 0.004146877317311672, "f1": 0.29368812919463155, "f1_stderr": 0.004195906469994281 }, "harness|gsm8k|5": { "acc": 0.07202426080363912, "acc_stderr": 0.007121147983537124 }, "harness|winogrande|5": { "acc": 0.7466456195737964, "acc_stderr": 0.012223754434233618 } } ``` ### Supported Tasks and Leaderboards [More Information Needed] ### Languages [More Information Needed] ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information [More Information Needed] ### Contributions [More Information Needed]
germanbava/isawa
2023-09-22T20:15:21.000Z
[ "region:us" ]
germanbava
null
null
null
0
0
Entry not found
open-llm-leaderboard/details_Azure99__blossom-v1-3b
2023-09-22T20:19:18.000Z
[ "region:us" ]
open-llm-leaderboard
null
null
null
0
0
--- pretty_name: Evaluation run of Azure99/blossom-v1-3b dataset_summary: "Dataset automatically created during the evaluation run of model\ \ [Azure99/blossom-v1-3b](https://huggingface.co/Azure99/blossom-v1-3b) on the [Open\ \ LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\ \nThe dataset is composed of 3 configuration, each one coresponding to one of the\ \ evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be\ \ found as a specific split in each configuration, the split being named using the\ \ timestamp of the run.The \"train\" split is always pointing to the latest results.\n\ \nAn additional configuration \"results\" store all the aggregated results of the\ \ run (and is used to compute and display the agregated metrics on the [Open LLM\ \ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\ \nTo load the details from a run, you can for instance do the following:\n```python\n\ from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_Azure99__blossom-v1-3b\"\ ,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\ These are the [latest results from run 2023-09-22T20:19:06.674002](https://huggingface.co/datasets/open-llm-leaderboard/details_Azure99__blossom-v1-3b/blob/main/results_2023-09-22T20-19-06.674002.json)(note\ \ that their might be results for other tasks in the repos if successive evals didn't\ \ cover the same tasks. 
You find each in the results and the \"latest\" split for\ \ each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.035968959731543626,\n\ \ \"em_stderr\": 0.0019069930004768894,\n \"f1\": 0.08654886744966468,\n\ \ \"f1_stderr\": 0.002229945283926482,\n \"acc\": 0.2962915868075896,\n\ \ \"acc_stderr\": 0.007760914549413539\n },\n \"harness|drop|3\": {\n\ \ \"em\": 0.035968959731543626,\n \"em_stderr\": 0.0019069930004768894,\n\ \ \"f1\": 0.08654886744966468,\n \"f1_stderr\": 0.002229945283926482\n\ \ },\n \"harness|gsm8k|5\": {\n \"acc\": 0.0037907505686125853,\n \ \ \"acc_stderr\": 0.0016927007401502012\n },\n \"harness|winogrande|5\"\ : {\n \"acc\": 0.5887924230465666,\n \"acc_stderr\": 0.013829128358676876\n\ \ }\n}\n```" repo_url: https://huggingface.co/Azure99/blossom-v1-3b leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard point_of_contact: clementine@hf.co configs: - config_name: harness_drop_3 data_files: - split: 2023_09_22T20_19_06.674002 path: - '**/details_harness|drop|3_2023-09-22T20-19-06.674002.parquet' - split: latest path: - '**/details_harness|drop|3_2023-09-22T20-19-06.674002.parquet' - config_name: harness_gsm8k_5 data_files: - split: 2023_09_22T20_19_06.674002 path: - '**/details_harness|gsm8k|5_2023-09-22T20-19-06.674002.parquet' - split: latest path: - '**/details_harness|gsm8k|5_2023-09-22T20-19-06.674002.parquet' - config_name: harness_winogrande_5 data_files: - split: 2023_09_22T20_19_06.674002 path: - '**/details_harness|winogrande|5_2023-09-22T20-19-06.674002.parquet' - split: latest path: - '**/details_harness|winogrande|5_2023-09-22T20-19-06.674002.parquet' - config_name: results data_files: - split: 2023_09_22T20_19_06.674002 path: - results_2023-09-22T20-19-06.674002.parquet - split: latest path: - results_2023-09-22T20-19-06.674002.parquet --- # Dataset Card for Evaluation run of Azure99/blossom-v1-3b ## Dataset Description - **Homepage:** - **Repository:** https://huggingface.co/Azure99/blossom-v1-3b - 
**Paper:** - **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard - **Point of Contact:** clementine@hf.co ### Dataset Summary Dataset automatically created during the evaluation run of model [Azure99/blossom-v1-3b](https://huggingface.co/Azure99/blossom-v1-3b) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard). The dataset is composed of 3 configurations, each one corresponding to one of the evaluated tasks. The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest results. An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)). To load the details from a run, you can for instance do the following: ```python from datasets import load_dataset data = load_dataset("open-llm-leaderboard/details_Azure99__blossom-v1-3b", "harness_winogrande_5", split="train") ``` ## Latest results These are the [latest results from run 2023-09-22T20:19:06.674002](https://huggingface.co/datasets/open-llm-leaderboard/details_Azure99__blossom-v1-3b/blob/main/results_2023-09-22T20-19-06.674002.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. 
You find each in the results and the "latest" split for each eval): ```python { "all": { "em": 0.035968959731543626, "em_stderr": 0.0019069930004768894, "f1": 0.08654886744966468, "f1_stderr": 0.002229945283926482, "acc": 0.2962915868075896, "acc_stderr": 0.007760914549413539 }, "harness|drop|3": { "em": 0.035968959731543626, "em_stderr": 0.0019069930004768894, "f1": 0.08654886744966468, "f1_stderr": 0.002229945283926482 }, "harness|gsm8k|5": { "acc": 0.0037907505686125853, "acc_stderr": 0.0016927007401502012 }, "harness|winogrande|5": { "acc": 0.5887924230465666, "acc_stderr": 0.013829128358676876 } } ``` ### Supported Tasks and Leaderboards [More Information Needed] ### Languages [More Information Needed] ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information [More Information Needed] ### Contributions [More Information Needed]
collabora/librilight-processed-webdataset
2023-10-04T12:05:17.000Z
[ "license:cc0-1.0", "region:us" ]
collabora
null
null
null
0
0
--- license: cc0-1.0 ---
Vaibhav9401/toxic25m
2023-09-23T06:20:30.000Z
[ "region:us" ]
Vaibhav9401
null
null
null
0
0
--- configs: - config_name: default data_files: - split: train path: data/train-* dataset_info: features: - name: llama_finetune_text dtype: string splits: - name: train num_bytes: 20143312184 num_examples: 25159680 download_size: 3446911922 dataset_size: 20143312184 --- # Dataset Card for "toxic25m" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
open-llm-leaderboard/details_chavinlo__gpt4-x-alpaca
2023-09-22T20:56:21.000Z
[ "region:us" ]
open-llm-leaderboard
null
null
null
0
0
--- pretty_name: Evaluation run of chavinlo/gpt4-x-alpaca dataset_summary: "Dataset automatically created during the evaluation run of model\ \ [chavinlo/gpt4-x-alpaca](https://huggingface.co/chavinlo/gpt4-x-alpaca) on the\ \ [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\ \nThe dataset is composed of 3 configuration, each one coresponding to one of the\ \ evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be\ \ found as a specific split in each configuration, the split being named using the\ \ timestamp of the run.The \"train\" split is always pointing to the latest results.\n\ \nAn additional configuration \"results\" store all the aggregated results of the\ \ run (and is used to compute and display the agregated metrics on the [Open LLM\ \ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\ \nTo load the details from a run, you can for instance do the following:\n```python\n\ from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_chavinlo__gpt4-x-alpaca\"\ ,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\ These are the [latest results from run 2023-09-22T20:56:09.987040](https://huggingface.co/datasets/open-llm-leaderboard/details_chavinlo__gpt4-x-alpaca/blob/main/results_2023-09-22T20-56-09.987040.json)(note\ \ that their might be results for other tasks in the repos if successive evals didn't\ \ cover the same tasks. 
You find each in the results and the \"latest\" split for\ \ each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.15478187919463088,\n\ \ \"em_stderr\": 0.003704111989193061,\n \"f1\": 0.24988045302013467,\n\ \ \"f1_stderr\": 0.00385619985047934,\n \"acc\": 0.3648545063856345,\n\ \ \"acc_stderr\": 0.008703557271933391\n },\n \"harness|drop|3\": {\n\ \ \"em\": 0.15478187919463088,\n \"em_stderr\": 0.003704111989193061,\n\ \ \"f1\": 0.24988045302013467,\n \"f1_stderr\": 0.00385619985047934\n\ \ },\n \"harness|gsm8k|5\": {\n \"acc\": 0.028051554207733132,\n \ \ \"acc_stderr\": 0.004548229533836362\n },\n \"harness|winogrande|5\"\ : {\n \"acc\": 0.7016574585635359,\n \"acc_stderr\": 0.012858885010030421\n\ \ }\n}\n```" repo_url: https://huggingface.co/chavinlo/gpt4-x-alpaca leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard point_of_contact: clementine@hf.co configs: - config_name: harness_drop_3 data_files: - split: 2023_09_22T20_56_09.987040 path: - '**/details_harness|drop|3_2023-09-22T20-56-09.987040.parquet' - split: latest path: - '**/details_harness|drop|3_2023-09-22T20-56-09.987040.parquet' - config_name: harness_gsm8k_5 data_files: - split: 2023_09_22T20_56_09.987040 path: - '**/details_harness|gsm8k|5_2023-09-22T20-56-09.987040.parquet' - split: latest path: - '**/details_harness|gsm8k|5_2023-09-22T20-56-09.987040.parquet' - config_name: harness_winogrande_5 data_files: - split: 2023_09_22T20_56_09.987040 path: - '**/details_harness|winogrande|5_2023-09-22T20-56-09.987040.parquet' - split: latest path: - '**/details_harness|winogrande|5_2023-09-22T20-56-09.987040.parquet' - config_name: results data_files: - split: 2023_09_22T20_56_09.987040 path: - results_2023-09-22T20-56-09.987040.parquet - split: latest path: - results_2023-09-22T20-56-09.987040.parquet --- # Dataset Card for Evaluation run of chavinlo/gpt4-x-alpaca ## Dataset Description - **Homepage:** - **Repository:** https://huggingface.co/chavinlo/gpt4-x-alpaca - 
**Paper:** - **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard - **Point of Contact:** clementine@hf.co ### Dataset Summary Dataset automatically created during the evaluation run of model [chavinlo/gpt4-x-alpaca](https://huggingface.co/chavinlo/gpt4-x-alpaca) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard). The dataset is composed of 3 configurations, each one corresponding to one of the evaluated tasks. The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest results. An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)). To load the details from a run, you can for instance do the following: ```python from datasets import load_dataset data = load_dataset("open-llm-leaderboard/details_chavinlo__gpt4-x-alpaca", "harness_winogrande_5", split="train") ``` ## Latest results These are the [latest results from run 2023-09-22T20:56:09.987040](https://huggingface.co/datasets/open-llm-leaderboard/details_chavinlo__gpt4-x-alpaca/blob/main/results_2023-09-22T20-56-09.987040.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. 
You find each in the results and the "latest" split for each eval): ```python { "all": { "em": 0.15478187919463088, "em_stderr": 0.003704111989193061, "f1": 0.24988045302013467, "f1_stderr": 0.00385619985047934, "acc": 0.3648545063856345, "acc_stderr": 0.008703557271933391 }, "harness|drop|3": { "em": 0.15478187919463088, "em_stderr": 0.003704111989193061, "f1": 0.24988045302013467, "f1_stderr": 0.00385619985047934 }, "harness|gsm8k|5": { "acc": 0.028051554207733132, "acc_stderr": 0.004548229533836362 }, "harness|winogrande|5": { "acc": 0.7016574585635359, "acc_stderr": 0.012858885010030421 } } ``` ### Supported Tasks and Leaderboards [More Information Needed] ### Languages [More Information Needed] ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information [More Information Needed] ### Contributions [More Information Needed]
thejaskp/random
2023-09-22T21:01:46.000Z
[ "region:us" ]
thejaskp
null
null
null
0
0
Entry not found
RadicalRendy/jakdataset
2023-09-22T21:23:48.000Z
[ "license:openrail", "region:us" ]
RadicalRendy
null
null
null
0
0
--- license: openrail ---
open-llm-leaderboard/details_Andron00e__YetAnother_Open-Llama-3B-LoRA-OpenOrca
2023-09-22T21:36:50.000Z
[ "region:us" ]
open-llm-leaderboard
null
null
null
0
0
--- pretty_name: Evaluation run of Andron00e/YetAnother_Open-Llama-3B-LoRA-OpenOrca dataset_summary: "Dataset automatically created during the evaluation run of model\ \ [Andron00e/YetAnother_Open-Llama-3B-LoRA-OpenOrca](https://huggingface.co/Andron00e/YetAnother_Open-Llama-3B-LoRA-OpenOrca)\ \ on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\ \nThe dataset is composed of 3 configuration, each one coresponding to one of the\ \ evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be\ \ found as a specific split in each configuration, the split being named using the\ \ timestamp of the run.The \"train\" split is always pointing to the latest results.\n\ \nAn additional configuration \"results\" store all the aggregated results of the\ \ run (and is used to compute and display the agregated metrics on the [Open LLM\ \ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\ \nTo load the details from a run, you can for instance do the following:\n```python\n\ from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_Andron00e__YetAnother_Open-Llama-3B-LoRA-OpenOrca\"\ ,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\ These are the [latest results from run 2023-09-22T21:36:39.212716](https://huggingface.co/datasets/open-llm-leaderboard/details_Andron00e__YetAnother_Open-Llama-3B-LoRA-OpenOrca/blob/main/results_2023-09-22T21-36-39.212716.json)(note\ \ that their might be results for other tasks in the repos if successive evals didn't\ \ cover the same tasks. 
You find each in the results and the \"latest\" split for\ \ each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.0,\n \"\ em_stderr\": 0.0,\n \"f1\": 0.0004404362416107381,\n \"f1_stderr\"\ : 6.976502994544788e-05,\n \"acc\": 0.2541436464088398,\n \"acc_stderr\"\ : 0.007025277661412096\n },\n \"harness|drop|3\": {\n \"em\": 0.0,\n\ \ \"em_stderr\": 0.0,\n \"f1\": 0.0004404362416107381,\n \"\ f1_stderr\": 6.976502994544788e-05\n },\n \"harness|gsm8k|5\": {\n \ \ \"acc\": 0.0,\n \"acc_stderr\": 0.0\n },\n \"harness|winogrande|5\"\ : {\n \"acc\": 0.5082872928176796,\n \"acc_stderr\": 0.014050555322824192\n\ \ }\n}\n```" repo_url: https://huggingface.co/Andron00e/YetAnother_Open-Llama-3B-LoRA-OpenOrca leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard point_of_contact: clementine@hf.co configs: - config_name: harness_drop_3 data_files: - split: 2023_09_22T21_20_12.395485 path: - '**/details_harness|drop|3_2023-09-22T21-20-12.395485.parquet' - split: 2023_09_22T21_36_39.212716 path: - '**/details_harness|drop|3_2023-09-22T21-36-39.212716.parquet' - split: latest path: - '**/details_harness|drop|3_2023-09-22T21-36-39.212716.parquet' - config_name: harness_gsm8k_5 data_files: - split: 2023_09_22T21_20_12.395485 path: - '**/details_harness|gsm8k|5_2023-09-22T21-20-12.395485.parquet' - split: 2023_09_22T21_36_39.212716 path: - '**/details_harness|gsm8k|5_2023-09-22T21-36-39.212716.parquet' - split: latest path: - '**/details_harness|gsm8k|5_2023-09-22T21-36-39.212716.parquet' - config_name: harness_winogrande_5 data_files: - split: 2023_09_22T21_20_12.395485 path: - '**/details_harness|winogrande|5_2023-09-22T21-20-12.395485.parquet' - split: 2023_09_22T21_36_39.212716 path: - '**/details_harness|winogrande|5_2023-09-22T21-36-39.212716.parquet' - split: latest path: - '**/details_harness|winogrande|5_2023-09-22T21-36-39.212716.parquet' - config_name: results data_files: - split: 2023_09_22T21_20_12.395485 path: - 
results_2023-09-22T21-20-12.395485.parquet - split: 2023_09_22T21_36_39.212716 path: - results_2023-09-22T21-36-39.212716.parquet - split: latest path: - results_2023-09-22T21-36-39.212716.parquet --- # Dataset Card for Evaluation run of Andron00e/YetAnother_Open-Llama-3B-LoRA-OpenOrca ## Dataset Description - **Homepage:** - **Repository:** https://huggingface.co/Andron00e/YetAnother_Open-Llama-3B-LoRA-OpenOrca - **Paper:** - **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard - **Point of Contact:** clementine@hf.co ### Dataset Summary Dataset automatically created during the evaluation run of model [Andron00e/YetAnother_Open-Llama-3B-LoRA-OpenOrca](https://huggingface.co/Andron00e/YetAnother_Open-Llama-3B-LoRA-OpenOrca) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard). The dataset is composed of 3 configurations, each one corresponding to one of the evaluated tasks. The dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest results. An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)). 
To load the details from a run, you can for instance do the following: ```python from datasets import load_dataset data = load_dataset("open-llm-leaderboard/details_Andron00e__YetAnother_Open-Llama-3B-LoRA-OpenOrca", "harness_winogrande_5", split="train") ``` ## Latest results These are the [latest results from run 2023-09-22T21:36:39.212716](https://huggingface.co/datasets/open-llm-leaderboard/details_Andron00e__YetAnother_Open-Llama-3B-LoRA-OpenOrca/blob/main/results_2023-09-22T21-36-39.212716.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the "latest" split for each eval): ```python { "all": { "em": 0.0, "em_stderr": 0.0, "f1": 0.0004404362416107381, "f1_stderr": 6.976502994544788e-05, "acc": 0.2541436464088398, "acc_stderr": 0.007025277661412096 }, "harness|drop|3": { "em": 0.0, "em_stderr": 0.0, "f1": 0.0004404362416107381, "f1_stderr": 6.976502994544788e-05 }, "harness|gsm8k|5": { "acc": 0.0, "acc_stderr": 0.0 }, "harness|winogrande|5": { "acc": 0.5082872928176796, "acc_stderr": 0.014050555322824192 } } ``` ### Supported Tasks and Leaderboards [More Information Needed] ### Languages [More Information Needed] ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? 
[More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information [More Information Needed] ### Contributions [More Information Needed]
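Judging by the parquet paths in the configs above, each run's split name appears to be the run timestamp with dashes and colons replaced by underscores. A minimal sketch of that mapping (an observation from the file names above, not an official API):

```python
def timestamp_to_split(ts: str) -> str:
    """Map an ISO run timestamp to the split naming seen in this repo's
    parquet paths: dashes and colons become underscores, the fractional
    seconds keep their dot."""
    return ts.replace("-", "_").replace(":", "_")

# The latest run above, 2023-09-22T21:36:39.212716, is listed as split
# 2023_09_22T21_36_39.212716 in the config data_files.
print(timestamp_to_split("2023-09-22T21:36:39.212716"))  # → 2023_09_22T21_36_39.212716
```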
ShuhongZheng/3D-LLM
2023-10-10T18:47:22.000Z
[ "arxiv:2307.12981", "region:us" ]
ShuhongZheng
null
null
null
0
0
https://arxiv.org/abs/2307.12981
Viniciaao/HardLevel
2023-09-22T22:09:51.000Z
[ "license:openrail", "region:us" ]
Viniciaao
null
null
null
0
0
--- license: openrail ---
dhenypatungka/dpXRealism
2023-09-22T23:29:00.000Z
[ "region:us" ]
dhenypatungka
null
null
null
0
0
Entry not found
Roscall/Alice
2023-09-22T23:28:39.000Z
[ "region:us" ]
Roscall
null
null
null
0
0
Entry not found
DevilCaos/lucas
2023-09-22T23:56:41.000Z
[ "license:unknown", "region:us" ]
DevilCaos
null
null
null
0
0
--- license: unknown ---
dongyoung4091/shp-generated_flan_t5_large_flan_t5_small_zeroshot
2023-09-23T00:45:13.000Z
[ "region:us" ]
dongyoung4091
null
null
null
0
0
--- dataset_info: features: - name: prompt dtype: string - name: response dtype: string - name: zeroshot_helpfulness dtype: float64 - name: zeroshot_specificity dtype: float64 - name: zeroshot_intent dtype: float64 - name: zeroshot_factuality dtype: float64 - name: zeroshot_easy-to-understand dtype: float64 - name: zeroshot_relevance dtype: float64 - name: zeroshot_readability dtype: float64 - name: zeroshot_enough-detail dtype: float64 - name: 'zeroshot_biased:' dtype: float64 - name: zeroshot_fail-to-consider-individual-preferences dtype: float64 - name: zeroshot_repetetive dtype: float64 - name: zeroshot_fail-to-consider-context dtype: float64 - name: zeroshot_too-long dtype: float64 splits: - name: train num_bytes: 29493865 num_examples: 25600 download_size: 1808580 dataset_size: 29493865 configs: - config_name: default data_files: - split: train path: data/train-* --- # Dataset Card for "shp-generated_flan_t5_large_flan_t5_small_zeroshot" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
dongyoung4091/shp_with_features_20k_flan_t5_large_flan_t5_small_zeroshot
2023-09-23T00:47:12.000Z
[ "region:us" ]
dongyoung4091
null
null
null
0
0
--- configs: - config_name: default data_files: - split: train path: data/train-* - split: test path: data/test-* dataset_info: features: - name: post_id dtype: string - name: domain dtype: string - name: upvote_ratio dtype: float64 - name: history dtype: string - name: c_root_id_A dtype: string - name: c_root_id_B dtype: string - name: created_at_utc_A dtype: int64 - name: created_at_utc_B dtype: int64 - name: score_A dtype: int64 - name: score_B dtype: int64 - name: human_ref_A dtype: string - name: human_ref_B dtype: string - name: labels dtype: int64 - name: seconds_difference dtype: float64 - name: score_ratio dtype: float64 - name: helpfulness_A dtype: float64 - name: helpfulness_B dtype: float64 - name: specificity_A dtype: float64 - name: specificity_B dtype: float64 - name: intent_A dtype: float64 - name: intent_B dtype: float64 - name: factuality_A dtype: float64 - name: factuality_B dtype: float64 - name: easy-to-understand_A dtype: float64 - name: easy-to-understand_B dtype: float64 - name: relevance_A dtype: float64 - name: relevance_B dtype: float64 - name: readability_A dtype: float64 - name: readability_B dtype: float64 - name: enough-detail_A dtype: float64 - name: enough-detail_B dtype: float64 - name: biased:_A dtype: float64 - name: biased:_B dtype: float64 - name: fail-to-consider-individual-preferences_A dtype: float64 - name: fail-to-consider-individual-preferences_B dtype: float64 - name: repetetive_A dtype: float64 - name: repetetive_B dtype: float64 - name: fail-to-consider-context_A dtype: float64 - name: fail-to-consider-context_B dtype: float64 - name: too-long_A dtype: float64 - name: too-long_B dtype: float64 - name: __index_level_0__ dtype: int64 - name: log_score_A dtype: float64 - name: log_score_B dtype: float64 - name: zeroshot_helpfulness_A dtype: float64 - name: zeroshot_helpfulness_B dtype: float64 - name: zeroshot_specificity_A dtype: float64 - name: zeroshot_specificity_B dtype: float64 - name: zeroshot_intent_A dtype: 
float64 - name: zeroshot_intent_B dtype: float64 - name: zeroshot_factuality_A dtype: float64 - name: zeroshot_factuality_B dtype: float64 - name: zeroshot_easy-to-understand_A dtype: float64 - name: zeroshot_easy-to-understand_B dtype: float64 - name: zeroshot_relevance_A dtype: float64 - name: zeroshot_relevance_B dtype: float64 - name: zeroshot_readability_A dtype: float64 - name: zeroshot_readability_B dtype: float64 - name: zeroshot_enough-detail_A dtype: float64 - name: zeroshot_enough-detail_B dtype: float64 - name: zeroshot_biased:_A dtype: float64 - name: zeroshot_biased:_B dtype: float64 - name: zeroshot_fail-to-consider-individual-preferences_A dtype: float64 - name: zeroshot_fail-to-consider-individual-preferences_B dtype: float64 - name: zeroshot_repetetive_A dtype: float64 - name: zeroshot_repetetive_B dtype: float64 - name: zeroshot_fail-to-consider-context_A dtype: float64 - name: zeroshot_fail-to-consider-context_B dtype: float64 - name: zeroshot_too-long_A dtype: float64 - name: zeroshot_too-long_B dtype: float64 splits: - name: train num_bytes: 22674534 num_examples: 9459 - name: test num_bytes: 22627412 num_examples: 9459 download_size: 24128568 dataset_size: 45301946 --- # Dataset Card for "shp_with_features_20k_flan_t5_large_flan_t5_small_zeroshot" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
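Given the paired `*_A`/`*_B` score columns and the binary `labels` column in the schema above, one natural sanity check is how often a per-attribute score agrees with the human preference. A small sketch on toy rows (column names taken from the schema above; that `labels == 1` means response A was preferred is an assumption here, and the real rows would come from `load_dataset`):

```python
# Toy rows mimicking the schema above; in practice these would come from
# load_dataset("dongyoung4091/shp_with_features_20k_flan_t5_large_flan_t5_small_zeroshot").
rows = [
    {"zeroshot_helpfulness_A": 4.0, "zeroshot_helpfulness_B": 2.0, "labels": 1},
    {"zeroshot_helpfulness_A": 1.0, "zeroshot_helpfulness_B": 3.0, "labels": 0},
    {"zeroshot_helpfulness_A": 5.0, "zeroshot_helpfulness_B": 2.0, "labels": 0},
]

def agreement(rows, attr):
    """Fraction of pairs where the higher attr score picks the labeled winner
    (assumes labels == 1 means response A was preferred)."""
    hits = sum(
        1 for r in rows
        if (r[f"{attr}_A"] > r[f"{attr}_B"]) == (r["labels"] == 1)
    )
    return hits / len(rows)

print(agreement(rows, "zeroshot_helpfulness"))  # 2 of the 3 toy rows agree
```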
ocolegro/book_names_and_fields
2023-09-23T01:44:31.000Z
[ "region:us" ]
ocolegro
null
null
null
0
0
--- dataset_info: features: - name: name dtype: string - name: persons list: - name: id dtype: string - name: name dtype: string - name: year dtype: float64 - name: field_name dtype: string splits: - name: train num_bytes: 218318565 num_examples: 1480516 download_size: 123891575 dataset_size: 218318565 configs: - config_name: default data_files: - split: train path: data/train-* --- # Dataset Card for "book_names_and_fields" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
TheVarunKaushik/US_History
2023-09-23T01:46:42.000Z
[ "license:openrail", "region:us" ]
TheVarunKaushik
null
null
null
0
0
--- license: openrail ---
Ialoris/test2
2023-09-23T02:36:04.000Z
[ "license:mit", "region:us" ]
Ialoris
null
null
null
0
0
--- license: mit ---
proanimer/anime_face
2023-09-23T04:26:02.000Z
[ "language:en", "license:mit", "region:us" ]
proanimer
null
null
null
0
0
--- license: mit language: - en ---
OscarHenry94/oscarhenry
2023-09-23T02:58:47.000Z
[ "region:us" ]
OscarHenry94
null
null
null
0
0
Entry not found
GreenDaFox/BobVelseb
2023-09-23T04:09:25.000Z
[ "region:us" ]
GreenDaFox
null
null
null
0
0
benxh/libgen_titles
2023-09-23T10:03:24.000Z
[ "size_categories:1M<n<10M", "language:en", "language:ru", "language:uk", "language:de", "language:fr", "region:us" ]
benxh
null
null
null
1
0
--- language: - en - ru - uk - de - fr size_categories: - 1M<n<10M --- # All of libgen[dot]rs non-fiction ~4 million records straight from the source. No summaries or descriptions. Useful for synthetic textbook generation: when you run out of ideas, just sample book titles and topics from this dataset.
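Sampling titles as suggested can be sketched as follows. This is a pure-Python stand-in: the real titles would come from this dataset via `load_dataset`, and the column name `"title"` used in the comment is an assumption, so check the actual schema.

```python
import random

def sample_titles(titles, k, seed=0):
    """Draw k distinct titles, e.g. to seed synthetic-textbook prompts."""
    rng = random.Random(seed)  # seeded for reproducible sampling
    return rng.sample(titles, k)

# In practice the titles would come from this dataset, e.g.
#   from datasets import load_dataset
#   titles = load_dataset("benxh/libgen_titles", split="train")["title"]
# (the column name "title" is an assumption; check the actual schema)
titles = ["Linear Algebra Done Right", "Soil Mechanics", "Organic Chemistry"]
print(sample_titles(titles, 2, seed=42))
```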
ptx0/midjourney-52-showcase
2023-09-23T04:56:48.000Z
[ "license:unlicense", "region:us" ]
ptx0
null
null
null
0
0
--- license: unlicense ---
Ediudo/rodolfo
2023-09-23T04:52:16.000Z
[ "region:us" ]
Ediudo
null
null
null
0
0
Entry not found
nlplabtdtu/diem_chuan_dai_hoc
2023-09-23T04:59:58.000Z
[ "region:us" ]
nlplabtdtu
null
null
null
0
0
Entry not found
larryvrh/belle_filtered_2.8M_CN
2023-09-23T05:28:23.000Z
[ "task_categories:text-generation", "task_categories:conversational", "size_categories:1M<n<10M", "language:zh", "license:gpl-3.0", "region:us" ]
larryvrh
null
null
null
0
0
--- dataset_info: features: - name: conversations list: - name: from dtype: string - name: value dtype: string - name: id dtype: string - name: category dtype: string splits: - name: train num_bytes: 4151854934 num_examples: 2802515 download_size: 2513439396 dataset_size: 4151854934 configs: - config_name: default data_files: - split: train path: data/train-* license: gpl-3.0 task_categories: - text-generation - conversational language: - zh size_categories: - 1M<n<10M --- # Dataset Card for "belle_filtered_2.8M_CN" 通过启发式方法过滤后的[BelleGroup/train_3.5M_CN](https://huggingface.co/datasets/BelleGroup/train_3.5M_CN),移除了语料中明显的错误和低质量内容。 ## 样例 ``` { "id":"66623028", "conversations":[ { "from":"human", "value":"归纳介绍三种不同类型的电影题材" }, { "from":"assistant", "value":"1. 动作片:动作片通常强调动作和战斗场面,故事情节紧凑,充满了紧张和兴奋的氛围。主角一般是具有超凡实力或特殊技能的英雄人物,往往会与反派角色进行一系列激烈的格斗,追逐和爆破等酷炫场面。例如《速度与激情》和《复仇者联盟》等。\n2. 爱情片:爱情片是以爱情为主题的电影,着重呈现角色之间的感情纠葛,展现了爱情的美好和悲壮。情节与角色之间的发展令人深思,增加了观众的感性体验。这种电影的故事情节主要以人类与人类之间的情感变化为主题。例如《泰坦尼克号》和《让子弹飞》等。\n3. 科幻片:科幻片是以未来科技和未知世界为主题的电影,通常包括时间旅行、外星人、虚拟现实、未来社会等各种奇幻的元素。这种电影描绘了一种比现实更加宏伟、更加神秘和惊奇的世界。例如《星际穿越》和《发条橙》等。" } ] } { "id":"18205437", "conversations":[ { "from":"human", "value":"根据给定的食材,编写一道美食菜谱。\n鸡腿肉、蘑菇、洋葱" }, { "from":"assistant", "value":"鸡肉炒蘑菇洋葱\n食材:\n- 鸡腿肉 400克,去骨去皮切成小块\n- 蘑菇 200克,切片\n- 洋葱 1个,切丝\n- 生姜 适量,切丝\n- 酱油 2勺\n- 料酒 1勺\n- 盐 适量\n- 白胡椒粉 适量\n- 植物油 适量\n做法:\n1. 在一个碗里放入切好的鸡腿肉块,加入酱油、料酒和适量的白胡椒粉,搅拌均匀,腌制10分钟。\n2. 热锅凉油,加入姜丝翻炒,放入腌好的鸡肉块翻炒至变色。\n3. 加入洋葱丝和蘑菇,翻炒2-3分钟至蘑菇变软。\n4. 加入适量的盐和少量的水,盖上锅盖焖煮5分钟直至鸡肉熟透。\n5. 最后翻炒几下,使汤汁变浓稠即可。\n温馨提示:\n1. 在腌制鸡肉时,如果想让肉质更嫩滑,可以加入少量的淀粉和蛋清。\n2. 可以根据个人口味加入其他调料,如蒜末、辣椒等。\n3. 洋葱和蘑菇可以用其他蔬菜代替,如青椒、胡萝卜等。" }, { "from":"human", "value":"太好了,我会试着照着你教的做一次,但是我最近买了一部新相机,想让你推荐一些好的拍照地点。" }, { "from":"assistant", "value":"当地的风景区、公园、景点等等都是比较适合拍照的地点。如果你想要一些特别的地方,可以考虑一些建筑群,街道巷子、咖啡馆、书店等具有文艺气息的地方。" }, { "from":"human", "value":"嗯,这些地方都很不错,但是我还想找一些可以体验当地文化和风俗的地点,你有什么好的建议吗?" 
}, { "from":"assistant", "value":"如果你想要更深入地了解当地文化和风俗,可以考虑参加一些游学活动,比如参观当地的博物馆、参加民俗节庆、尝试当地美食等等。此外,也可以去一些小城镇,感受那里的古朴风情。" } ] } ``` ### 字段: ``` id: 数据id conversations: 数据内容,以对话形式给出,包括多轮和单轮对话的数据 category: 数据类别 ``` 过滤后的数据构成: | Category | Count | |----------------|---------:| | close qa | 112,570 | | classification | 125,623 | | extract | 6,400 | | open qa | 385,306 | | harmless | 45,968 | | role playing | 465,782 | | rewrite | 28,146 | | code | 180,825 | | translation | 29,923 | | summarization | 99,017 | | math | 106,202 | | generation |1,023,643 | | brainstorming | 193,110 |
morosCORP/governmentschemes
2023-09-23T05:38:17.000Z
[ "license:afl-3.0", "region:us" ]
morosCORP
null
null
null
0
0
--- license: afl-3.0 ---
JapanDegitalMaterial/Abandoned_places_in_Japan
2023-09-23T12:58:42.000Z
[ "task_categories:text-to-image", "language:en", "language:ja", "license:cc0-1.0", "region:us" ]
JapanDegitalMaterial
null
null
null
0
0
--- license: cc0-1.0 language: - en - ja task_categories: - text-to-image --- # Abandoned places in Japan This is a dataset for training text-to-image and other models without copyright issues. All materials used in this dataset are CC0 (public domain). ## Dataset Description - **Homepage:** https://www.deviantart.com/japanmaterial - **Repository:** - **Paper:** - **Leaderboard:** - **Point of Contact:** ### Dataset Summary A collection of CC0 (public-domain) photographs of abandoned places in Japan, intended for training text-to-image and other generative models. ### Supported Tasks and Leaderboards [More Information Needed] ### Languages [More Information Needed] ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information [More Information Needed] ### Contributions [More Information Needed]
open-llm-leaderboard/details_VMware__open-llama-7b-open-instruct
2023-09-23T05:54:45.000Z
[ "region:us" ]
open-llm-leaderboard
null
null
null
0
0
--- pretty_name: Evaluation run of VMware/open-llama-7b-open-instruct dataset_summary: "Dataset automatically created during the evaluation run of model\ \ [VMware/open-llama-7b-open-instruct](https://huggingface.co/VMware/open-llama-7b-open-instruct)\ \ on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\ \nThe dataset is composed of 3 configuration, each one coresponding to one of the\ \ evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be\ \ found as a specific split in each configuration, the split being named using the\ \ timestamp of the run.The \"train\" split is always pointing to the latest results.\n\ \nAn additional configuration \"results\" store all the aggregated results of the\ \ run (and is used to compute and display the agregated metrics on the [Open LLM\ \ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\ \nTo load the details from a run, you can for instance do the following:\n```python\n\ from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_VMware__open-llama-7b-open-instruct\"\ ,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\ These are the [latest results from run 2023-09-23T05:54:33.646620](https://huggingface.co/datasets/open-llm-leaderboard/details_VMware__open-llama-7b-open-instruct/blob/main/results_2023-09-23T05-54-33.646620.json)(note\ \ that their might be results for other tasks in the repos if successive evals didn't\ \ cover the same tasks. 
You find each in the results and the \"latest\" split for\ \ each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.24811241610738255,\n\ \ \"em_stderr\": 0.004423238498303271,\n \"f1\": 0.3074643456375843,\n\ \ \"f1_stderr\": 0.004402791070678147,\n \"acc\": 0.3298042752007123,\n\ \ \"acc_stderr\": 0.007683951336441218\n },\n \"harness|drop|3\": {\n\ \ \"em\": 0.24811241610738255,\n \"em_stderr\": 0.004423238498303271,\n\ \ \"f1\": 0.3074643456375843,\n \"f1_stderr\": 0.004402791070678147\n\ \ },\n \"harness|gsm8k|5\": {\n \"acc\": 0.00530705079605762,\n \ \ \"acc_stderr\": 0.0020013057209480527\n },\n \"harness|winogrande|5\"\ : {\n \"acc\": 0.654301499605367,\n \"acc_stderr\": 0.013366596951934383\n\ \ }\n}\n```" repo_url: https://huggingface.co/VMware/open-llama-7b-open-instruct leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard point_of_contact: clementine@hf.co configs: - config_name: harness_drop_3 data_files: - split: 2023_09_23T05_54_33.646620 path: - '**/details_harness|drop|3_2023-09-23T05-54-33.646620.parquet' - split: latest path: - '**/details_harness|drop|3_2023-09-23T05-54-33.646620.parquet' - config_name: harness_gsm8k_5 data_files: - split: 2023_09_23T05_54_33.646620 path: - '**/details_harness|gsm8k|5_2023-09-23T05-54-33.646620.parquet' - split: latest path: - '**/details_harness|gsm8k|5_2023-09-23T05-54-33.646620.parquet' - config_name: harness_winogrande_5 data_files: - split: 2023_09_23T05_54_33.646620 path: - '**/details_harness|winogrande|5_2023-09-23T05-54-33.646620.parquet' - split: latest path: - '**/details_harness|winogrande|5_2023-09-23T05-54-33.646620.parquet' - config_name: results data_files: - split: 2023_09_23T05_54_33.646620 path: - results_2023-09-23T05-54-33.646620.parquet - split: latest path: - results_2023-09-23T05-54-33.646620.parquet --- # Dataset Card for Evaluation run of VMware/open-llama-7b-open-instruct ## Dataset Description - **Homepage:** - **Repository:** 
https://huggingface.co/VMware/open-llama-7b-open-instruct - **Paper:** - **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard - **Point of Contact:** clementine@hf.co ### Dataset Summary Dataset automatically created during the evaluation run of model [VMware/open-llama-7b-open-instruct](https://huggingface.co/VMware/open-llama-7b-open-instruct) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard). The dataset is composed of 3 configurations, each corresponding to one of the evaluated tasks. The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results. An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)). To load the details from a run, you can for instance do the following: ```python from datasets import load_dataset data = load_dataset("open-llm-leaderboard/details_VMware__open-llama-7b-open-instruct", "harness_winogrande_5", split="train") ``` ## Latest results These are the [latest results from run 2023-09-23T05:54:33.646620](https://huggingface.co/datasets/open-llm-leaderboard/details_VMware__open-llama-7b-open-instruct/blob/main/results_2023-09-23T05-54-33.646620.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. 
You find each in the results and the "latest" split for each eval): ```python { "all": { "em": 0.24811241610738255, "em_stderr": 0.004423238498303271, "f1": 0.3074643456375843, "f1_stderr": 0.004402791070678147, "acc": 0.3298042752007123, "acc_stderr": 0.007683951336441218 }, "harness|drop|3": { "em": 0.24811241610738255, "em_stderr": 0.004423238498303271, "f1": 0.3074643456375843, "f1_stderr": 0.004402791070678147 }, "harness|gsm8k|5": { "acc": 0.00530705079605762, "acc_stderr": 0.0020013057209480527 }, "harness|winogrande|5": { "acc": 0.654301499605367, "acc_stderr": 0.013366596951934383 } } ``` ### Supported Tasks and Leaderboards [More Information Needed] ### Languages [More Information Needed] ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information [More Information Needed] ### Contributions [More Information Needed]
ryacon/CANNABIS
2023-09-23T05:59:08.000Z
[ "license:openrail", "region:us" ]
ryacon
null
null
null
0
0
--- license: openrail ---
marthazh/sanhuanzhonghuinewnew
2023-09-23T06:28:29.000Z
[ "region:us" ]
marthazh
null
null
null
0
0
Entry not found
Falah/luxurious_food_photography_prompts
2023-09-23T06:31:09.000Z
[ "region:us" ]
Falah
null
null
null
0
0
--- dataset_info: features: - name: prompts dtype: string splits: - name: train num_bytes: 116535 num_examples: 1000 download_size: 1927 dataset_size: 116535 --- # Dataset Card for "luxurious_food_photography_prompts" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
open-llm-leaderboard/details_rinna__bilingual-gpt-neox-4b
2023-09-23T06:40:11.000Z
[ "region:us" ]
open-llm-leaderboard
null
null
null
0
0
--- pretty_name: Evaluation run of rinna/bilingual-gpt-neox-4b dataset_summary: "Dataset automatically created during the evaluation run of model\ \ [rinna/bilingual-gpt-neox-4b](https://huggingface.co/rinna/bilingual-gpt-neox-4b)\ \ on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\ \nThe dataset is composed of 3 configuration, each one coresponding to one of the\ \ evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be\ \ found as a specific split in each configuration, the split being named using the\ \ timestamp of the run.The \"train\" split is always pointing to the latest results.\n\ \nAn additional configuration \"results\" store all the aggregated results of the\ \ run (and is used to compute and display the agregated metrics on the [Open LLM\ \ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\ \nTo load the details from a run, you can for instance do the following:\n```python\n\ from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_rinna__bilingual-gpt-neox-4b\"\ ,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\ These are the [latest results from run 2023-09-23T06:39:58.316038](https://huggingface.co/datasets/open-llm-leaderboard/details_rinna__bilingual-gpt-neox-4b/blob/main/results_2023-09-23T06-39-58.316038.json)(note\ \ that their might be results for other tasks in the repos if successive evals didn't\ \ cover the same tasks. 
You find each in the results and the \"latest\" split for\ \ each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.0,\n \"\ em_stderr\": 0.0,\n \"f1\": 0.0019494546979865776,\n \"f1_stderr\"\ : 0.0001656985868155588,\n \"acc\": 0.25927387529597473,\n \"acc_stderr\"\ : 0.007021406854444189\n },\n \"harness|drop|3\": {\n \"em\": 0.0,\n\ \ \"em_stderr\": 0.0,\n \"f1\": 0.0019494546979865776,\n \"\ f1_stderr\": 0.0001656985868155588\n },\n \"harness|gsm8k|5\": {\n \ \ \"acc\": 0.0,\n \"acc_stderr\": 0.0\n },\n \"harness|winogrande|5\"\ : {\n \"acc\": 0.5185477505919495,\n \"acc_stderr\": 0.014042813708888378\n\ \ }\n}\n```" repo_url: https://huggingface.co/rinna/bilingual-gpt-neox-4b leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard point_of_contact: clementine@hf.co configs: - config_name: harness_drop_3 data_files: - split: 2023_09_23T06_39_58.316038 path: - '**/details_harness|drop|3_2023-09-23T06-39-58.316038.parquet' - split: latest path: - '**/details_harness|drop|3_2023-09-23T06-39-58.316038.parquet' - config_name: harness_gsm8k_5 data_files: - split: 2023_09_23T06_39_58.316038 path: - '**/details_harness|gsm8k|5_2023-09-23T06-39-58.316038.parquet' - split: latest path: - '**/details_harness|gsm8k|5_2023-09-23T06-39-58.316038.parquet' - config_name: harness_winogrande_5 data_files: - split: 2023_09_23T06_39_58.316038 path: - '**/details_harness|winogrande|5_2023-09-23T06-39-58.316038.parquet' - split: latest path: - '**/details_harness|winogrande|5_2023-09-23T06-39-58.316038.parquet' - config_name: results data_files: - split: 2023_09_23T06_39_58.316038 path: - results_2023-09-23T06-39-58.316038.parquet - split: latest path: - results_2023-09-23T06-39-58.316038.parquet --- # Dataset Card for Evaluation run of rinna/bilingual-gpt-neox-4b ## Dataset Description - **Homepage:** - **Repository:** https://huggingface.co/rinna/bilingual-gpt-neox-4b - **Paper:** - **Leaderboard:** 
https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard - **Point of Contact:** clementine@hf.co ### Dataset Summary Dataset automatically created during the evaluation run of model [rinna/bilingual-gpt-neox-4b](https://huggingface.co/rinna/bilingual-gpt-neox-4b) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard). The dataset is composed of 3 configurations, each corresponding to one of the evaluated tasks. The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results. An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)). To load the details from a run, you can for instance do the following: ```python from datasets import load_dataset data = load_dataset("open-llm-leaderboard/details_rinna__bilingual-gpt-neox-4b", "harness_winogrande_5", split="train") ``` ## Latest results These are the [latest results from run 2023-09-23T06:39:58.316038](https://huggingface.co/datasets/open-llm-leaderboard/details_rinna__bilingual-gpt-neox-4b/blob/main/results_2023-09-23T06-39-58.316038.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. 
You find each in the results and the "latest" split for each eval): ```python { "all": { "em": 0.0, "em_stderr": 0.0, "f1": 0.0019494546979865776, "f1_stderr": 0.0001656985868155588, "acc": 0.25927387529597473, "acc_stderr": 0.007021406854444189 }, "harness|drop|3": { "em": 0.0, "em_stderr": 0.0, "f1": 0.0019494546979865776, "f1_stderr": 0.0001656985868155588 }, "harness|gsm8k|5": { "acc": 0.0, "acc_stderr": 0.0 }, "harness|winogrande|5": { "acc": 0.5185477505919495, "acc_stderr": 0.014042813708888378 } } ``` ### Supported Tasks and Leaderboards [More Information Needed] ### Languages [More Information Needed] ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information [More Information Needed] ### Contributions [More Information Needed]
open-llm-leaderboard/details_JosephusCheung__Guanaco
2023-09-23T06:44:14.000Z
[ "region:us" ]
open-llm-leaderboard
null
null
null
0
0
--- pretty_name: Evaluation run of JosephusCheung/Guanaco dataset_summary: "Dataset automatically created during the evaluation run of model\ \ [JosephusCheung/Guanaco](https://huggingface.co/JosephusCheung/Guanaco) on the\ \ [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\ \nThe dataset is composed of 3 configuration, each one coresponding to one of the\ \ evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be\ \ found as a specific split in each configuration, the split being named using the\ \ timestamp of the run.The \"train\" split is always pointing to the latest results.\n\ \nAn additional configuration \"results\" store all the aggregated results of the\ \ run (and is used to compute and display the agregated metrics on the [Open LLM\ \ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\ \nTo load the details from a run, you can for instance do the following:\n```python\n\ from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_JosephusCheung__Guanaco\"\ ,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\ These are the [latest results from run 2023-09-23T06:44:02.813633](https://huggingface.co/datasets/open-llm-leaderboard/details_JosephusCheung__Guanaco/blob/main/results_2023-09-23T06-44-02.813633.json)(note\ \ that their might be results for other tasks in the repos if successive evals didn't\ \ cover the same tasks. 
You find each in the results and the \"latest\" split for\ \ each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.23343120805369127,\n\ \ \"em_stderr\": 0.004332062137833453,\n \"f1\": 0.2960843120805377,\n\ \ \"f1_stderr\": 0.004351433413685765,\n \"acc\": 0.34333070244672453,\n\ \ \"acc_stderr\": 0.006518256048373988\n },\n \"harness|drop|3\": {\n\ \ \"em\": 0.23343120805369127,\n \"em_stderr\": 0.004332062137833453,\n\ \ \"f1\": 0.2960843120805377,\n \"f1_stderr\": 0.004351433413685765\n\ \ },\n \"harness|gsm8k|5\": {\n \"acc\": 0.0,\n \"acc_stderr\"\ : 0.0\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.6866614048934491,\n\ \ \"acc_stderr\": 0.013036512096747976\n }\n}\n```" repo_url: https://huggingface.co/JosephusCheung/Guanaco leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard point_of_contact: clementine@hf.co configs: - config_name: harness_drop_3 data_files: - split: 2023_09_23T06_44_02.813633 path: - '**/details_harness|drop|3_2023-09-23T06-44-02.813633.parquet' - split: latest path: - '**/details_harness|drop|3_2023-09-23T06-44-02.813633.parquet' - config_name: harness_gsm8k_5 data_files: - split: 2023_09_23T06_44_02.813633 path: - '**/details_harness|gsm8k|5_2023-09-23T06-44-02.813633.parquet' - split: latest path: - '**/details_harness|gsm8k|5_2023-09-23T06-44-02.813633.parquet' - config_name: harness_winogrande_5 data_files: - split: 2023_09_23T06_44_02.813633 path: - '**/details_harness|winogrande|5_2023-09-23T06-44-02.813633.parquet' - split: latest path: - '**/details_harness|winogrande|5_2023-09-23T06-44-02.813633.parquet' - config_name: results data_files: - split: 2023_09_23T06_44_02.813633 path: - results_2023-09-23T06-44-02.813633.parquet - split: latest path: - results_2023-09-23T06-44-02.813633.parquet --- # Dataset Card for Evaluation run of JosephusCheung/Guanaco ## Dataset Description - **Homepage:** - **Repository:** https://huggingface.co/JosephusCheung/Guanaco - **Paper:** - **Leaderboard:** 
https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard - **Point of Contact:** clementine@hf.co ### Dataset Summary Dataset automatically created during the evaluation run of model [JosephusCheung/Guanaco](https://huggingface.co/JosephusCheung/Guanaco) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard). The dataset is composed of 3 configurations, each corresponding to one of the evaluated tasks. The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results. An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)). To load the details from a run, you can for instance do the following: ```python from datasets import load_dataset data = load_dataset("open-llm-leaderboard/details_JosephusCheung__Guanaco", "harness_winogrande_5", split="train") ``` ## Latest results These are the [latest results from run 2023-09-23T06:44:02.813633](https://huggingface.co/datasets/open-llm-leaderboard/details_JosephusCheung__Guanaco/blob/main/results_2023-09-23T06-44-02.813633.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. 
You find each in the results and the "latest" split for each eval): ```python { "all": { "em": 0.23343120805369127, "em_stderr": 0.004332062137833453, "f1": 0.2960843120805377, "f1_stderr": 0.004351433413685765, "acc": 0.34333070244672453, "acc_stderr": 0.006518256048373988 }, "harness|drop|3": { "em": 0.23343120805369127, "em_stderr": 0.004332062137833453, "f1": 0.2960843120805377, "f1_stderr": 0.004351433413685765 }, "harness|gsm8k|5": { "acc": 0.0, "acc_stderr": 0.0 }, "harness|winogrande|5": { "acc": 0.6866614048934491, "acc_stderr": 0.013036512096747976 } } ``` ### Supported Tasks and Leaderboards [More Information Needed] ### Languages [More Information Needed] ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information [More Information Needed] ### Contributions [More Information Needed]
YiYiXu/pr5158
2023-09-23T07:52:15.000Z
[ "region:us" ]
YiYiXu
null
null
null
0
0
Entry not found
Corianas/StorySalt_Concepts
2023-09-28T09:24:44.000Z
[ "license:cdla-sharing-1.0", "region:us" ]
Corianas
null
null
null
0
0
--- license: cdla-sharing-1.0 --- These are the story elements extracted from the TinyStories dataset, to be used as randomization data for short-story generation.
open-llm-leaderboard/details_openaccess-ai-collective__manticore-13b-chat-pyg
2023-09-23T08:58:34.000Z
[ "region:us" ]
open-llm-leaderboard
null
null
null
0
0
--- pretty_name: Evaluation run of openaccess-ai-collective/manticore-13b-chat-pyg dataset_summary: "Dataset automatically created during the evaluation run of model\ \ [openaccess-ai-collective/manticore-13b-chat-pyg](https://huggingface.co/openaccess-ai-collective/manticore-13b-chat-pyg)\ \ on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\ \nThe dataset is composed of 3 configuration, each one coresponding to one of the\ \ evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be\ \ found as a specific split in each configuration, the split being named using the\ \ timestamp of the run.The \"train\" split is always pointing to the latest results.\n\ \nAn additional configuration \"results\" store all the aggregated results of the\ \ run (and is used to compute and display the agregated metrics on the [Open LLM\ \ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\ \nTo load the details from a run, you can for instance do the following:\n```python\n\ from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_openaccess-ai-collective__manticore-13b-chat-pyg\"\ ,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\ These are the [latest results from run 2023-09-23T08:58:22.598379](https://huggingface.co/datasets/open-llm-leaderboard/details_openaccess-ai-collective__manticore-13b-chat-pyg/blob/main/results_2023-09-23T08-58-22.598379.json)(note\ \ that their might be results for other tasks in the repos if successive evals didn't\ \ cover the same tasks. 
You find each in the results and the \"latest\" split for\ \ each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.02925755033557047,\n\ \ \"em_stderr\": 0.0017258801842771152,\n \"f1\": 0.09186136744966467,\n\ \ \"f1_stderr\": 0.0021533865918944134,\n \"acc\": 0.4337145226735951,\n\ \ \"acc_stderr\": 0.009944810794409672\n },\n \"harness|drop|3\": {\n\ \ \"em\": 0.02925755033557047,\n \"em_stderr\": 0.0017258801842771152,\n\ \ \"f1\": 0.09186136744966467,\n \"f1_stderr\": 0.0021533865918944134\n\ \ },\n \"harness|gsm8k|5\": {\n \"acc\": 0.09552691432903715,\n \ \ \"acc_stderr\": 0.008096605771155745\n },\n \"harness|winogrande|5\"\ : {\n \"acc\": 0.7719021310181531,\n \"acc_stderr\": 0.0117930158176636\n\ \ }\n}\n```" repo_url: https://huggingface.co/openaccess-ai-collective/manticore-13b-chat-pyg leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard point_of_contact: clementine@hf.co configs: - config_name: harness_drop_3 data_files: - split: 2023_09_23T08_58_22.598379 path: - '**/details_harness|drop|3_2023-09-23T08-58-22.598379.parquet' - split: latest path: - '**/details_harness|drop|3_2023-09-23T08-58-22.598379.parquet' - config_name: harness_gsm8k_5 data_files: - split: 2023_09_23T08_58_22.598379 path: - '**/details_harness|gsm8k|5_2023-09-23T08-58-22.598379.parquet' - split: latest path: - '**/details_harness|gsm8k|5_2023-09-23T08-58-22.598379.parquet' - config_name: harness_winogrande_5 data_files: - split: 2023_09_23T08_58_22.598379 path: - '**/details_harness|winogrande|5_2023-09-23T08-58-22.598379.parquet' - split: latest path: - '**/details_harness|winogrande|5_2023-09-23T08-58-22.598379.parquet' - config_name: results data_files: - split: 2023_09_23T08_58_22.598379 path: - results_2023-09-23T08-58-22.598379.parquet - split: latest path: - results_2023-09-23T08-58-22.598379.parquet --- # Dataset Card for Evaluation run of openaccess-ai-collective/manticore-13b-chat-pyg ## Dataset Description - **Homepage:** - **Repository:** 
https://huggingface.co/openaccess-ai-collective/manticore-13b-chat-pyg - **Paper:** - **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard - **Point of Contact:** clementine@hf.co ### Dataset Summary Dataset automatically created during the evaluation run of model [openaccess-ai-collective/manticore-13b-chat-pyg](https://huggingface.co/openaccess-ai-collective/manticore-13b-chat-pyg) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard). The dataset is composed of 3 configurations, each one corresponding to one of the evaluated tasks. The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest results. An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)). To load the details from a run, you can for instance do the following: ```python from datasets import load_dataset data = load_dataset("open-llm-leaderboard/details_openaccess-ai-collective__manticore-13b-chat-pyg", "harness_winogrande_5", split="train") ``` ## Latest results These are the [latest results from run 2023-09-23T08:58:22.598379](https://huggingface.co/datasets/open-llm-leaderboard/details_openaccess-ai-collective__manticore-13b-chat-pyg/blob/main/results_2023-09-23T08-58-22.598379.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks. 
You find each in the results and the "latest" split for each eval): ```python { "all": { "em": 0.02925755033557047, "em_stderr": 0.0017258801842771152, "f1": 0.09186136744966467, "f1_stderr": 0.0021533865918944134, "acc": 0.4337145226735951, "acc_stderr": 0.009944810794409672 }, "harness|drop|3": { "em": 0.02925755033557047, "em_stderr": 0.0017258801842771152, "f1": 0.09186136744966467, "f1_stderr": 0.0021533865918944134 }, "harness|gsm8k|5": { "acc": 0.09552691432903715, "acc_stderr": 0.008096605771155745 }, "harness|winogrande|5": { "acc": 0.7719021310181531, "acc_stderr": 0.0117930158176636 } } ``` ### Supported Tasks and Leaderboards [More Information Needed] ### Languages [More Information Needed] ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information [More Information Needed] ### Contributions [More Information Needed]
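The timestamp-based split naming described above can be sketched with a small helper. This is purely illustrative: the mapping is inferred from the config names visible in this card (dashes in the timestamp become underscores), and is not part of any official tooling.

```python
def split_name_from_timestamp(ts: str) -> str:
    """Map a run timestamp like '2023-09-23T08-58-22.598379' to the
    split name used in each configuration ('2023_09_23T08_58_22.598379').

    Illustrative helper only: it mirrors the naming convention visible
    in the YAML config names above.
    """
    date, time = ts.split("T")
    return date.replace("-", "_") + "T" + time.replace("-", "_")


print(split_name_from_timestamp("2023-09-23T08-58-22.598379"))
# -> 2023_09_23T08_58_22.598379
```

The resulting string can then be passed as the `split` argument to `load_dataset` in place of `"latest"` to pin a specific run.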
open-llm-leaderboard/details_sartmis1__starcoder-finetune-selfinstruct
2023-09-23T09:06:38.000Z
[ "region:us" ]
open-llm-leaderboard
null
null
null
0
0
--- pretty_name: Evaluation run of sartmis1/starcoder-finetune-selfinstruct dataset_summary: "Dataset automatically created during the evaluation run of model\ \ [sartmis1/starcoder-finetune-selfinstruct](https://huggingface.co/sartmis1/starcoder-finetune-selfinstruct)\ \ on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\ \nThe dataset is composed of 3 configuration, each one coresponding to one of the\ \ evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be\ \ found as a specific split in each configuration, the split being named using the\ \ timestamp of the run.The \"train\" split is always pointing to the latest results.\n\ \nAn additional configuration \"results\" store all the aggregated results of the\ \ run (and is used to compute and display the agregated metrics on the [Open LLM\ \ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\ \nTo load the details from a run, you can for instance do the following:\n```python\n\ from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_sartmis1__starcoder-finetune-selfinstruct\"\ ,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\ These are the [latest results from run 2023-09-23T09:06:26.158683](https://huggingface.co/datasets/open-llm-leaderboard/details_sartmis1__starcoder-finetune-selfinstruct/blob/main/results_2023-09-23T09-06-26.158683.json)(note\ \ that their might be results for other tasks in the repos if successive evals didn't\ \ cover the same tasks. 
You find each in the results and the \"latest\" split for\ \ each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.0012583892617449664,\n\ \ \"em_stderr\": 0.00036305608931189545,\n \"f1\": 0.04220742449664442,\n\ \ \"f1_stderr\": 0.0011048606881245398,\n \"acc\": 0.31919735419373096,\n\ \ \"acc_stderr\": 0.01022815770603217\n },\n \"harness|drop|3\": {\n\ \ \"em\": 0.0012583892617449664,\n \"em_stderr\": 0.00036305608931189545,\n\ \ \"f1\": 0.04220742449664442,\n \"f1_stderr\": 0.0011048606881245398\n\ \ },\n \"harness|gsm8k|5\": {\n \"acc\": 0.060652009097801364,\n \ \ \"acc_stderr\": 0.0065747333814057925\n },\n \"harness|winogrande|5\"\ : {\n \"acc\": 0.5777426992896606,\n \"acc_stderr\": 0.013881582030658549\n\ \ }\n}\n```" repo_url: https://huggingface.co/sartmis1/starcoder-finetune-selfinstruct leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard point_of_contact: clementine@hf.co configs: - config_name: harness_drop_3 data_files: - split: 2023_09_23T09_06_26.158683 path: - '**/details_harness|drop|3_2023-09-23T09-06-26.158683.parquet' - split: latest path: - '**/details_harness|drop|3_2023-09-23T09-06-26.158683.parquet' - config_name: harness_gsm8k_5 data_files: - split: 2023_09_23T09_06_26.158683 path: - '**/details_harness|gsm8k|5_2023-09-23T09-06-26.158683.parquet' - split: latest path: - '**/details_harness|gsm8k|5_2023-09-23T09-06-26.158683.parquet' - config_name: harness_winogrande_5 data_files: - split: 2023_09_23T09_06_26.158683 path: - '**/details_harness|winogrande|5_2023-09-23T09-06-26.158683.parquet' - split: latest path: - '**/details_harness|winogrande|5_2023-09-23T09-06-26.158683.parquet' - config_name: results data_files: - split: 2023_09_23T09_06_26.158683 path: - results_2023-09-23T09-06-26.158683.parquet - split: latest path: - results_2023-09-23T09-06-26.158683.parquet --- # Dataset Card for Evaluation run of sartmis1/starcoder-finetune-selfinstruct ## Dataset Description - **Homepage:** - **Repository:** 
https://huggingface.co/sartmis1/starcoder-finetune-selfinstruct - **Paper:** - **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard - **Point of Contact:** clementine@hf.co ### Dataset Summary Dataset automatically created during the evaluation run of model [sartmis1/starcoder-finetune-selfinstruct](https://huggingface.co/sartmis1/starcoder-finetune-selfinstruct) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard). The dataset is composed of 3 configurations, each one corresponding to one of the evaluated tasks. The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest results. An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)). To load the details from a run, you can for instance do the following: ```python from datasets import load_dataset data = load_dataset("open-llm-leaderboard/details_sartmis1__starcoder-finetune-selfinstruct", "harness_winogrande_5", split="train") ``` ## Latest results These are the [latest results from run 2023-09-23T09:06:26.158683](https://huggingface.co/datasets/open-llm-leaderboard/details_sartmis1__starcoder-finetune-selfinstruct/blob/main/results_2023-09-23T09-06-26.158683.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks. 
You find each in the results and the "latest" split for each eval): ```python { "all": { "em": 0.0012583892617449664, "em_stderr": 0.00036305608931189545, "f1": 0.04220742449664442, "f1_stderr": 0.0011048606881245398, "acc": 0.31919735419373096, "acc_stderr": 0.01022815770603217 }, "harness|drop|3": { "em": 0.0012583892617449664, "em_stderr": 0.00036305608931189545, "f1": 0.04220742449664442, "f1_stderr": 0.0011048606881245398 }, "harness|gsm8k|5": { "acc": 0.060652009097801364, "acc_stderr": 0.0065747333814057925 }, "harness|winogrande|5": { "acc": 0.5777426992896606, "acc_stderr": 0.013881582030658549 } } ``` ### Supported Tasks and Leaderboards [More Information Needed] ### Languages [More Information Needed] ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information [More Information Needed] ### Contributions [More Information Needed]
sauravjoshi23/aws-documentation-chunked
2023-09-23T17:41:26.000Z
[ "region:us" ]
sauravjoshi23
null
null
null
1
0
Entry not found
Chungfan/biomed-lay-summ
2023-09-23T09:34:26.000Z
[ "task_categories:summarization", "size_categories:10K<n<100K", "language:en", "biology", "medical", "region:us" ]
Chungfan
null
null
null
0
0
--- task_categories: - summarization language: - en tags: - biology - medical pretty_name: f size_categories: - 10K<n<100K ---
Pulse495/your-dataset-name
2023-09-23T10:18:47.000Z
[ "region:us" ]
Pulse495
null
null
null
0
0
Entry not found
open-llm-leaderboard/details_Corianas__Quokka_256m
2023-09-23T10:44:09.000Z
[ "region:us" ]
open-llm-leaderboard
null
null
null
0
0
--- pretty_name: Evaluation run of Corianas/Quokka_256m dataset_summary: "Dataset automatically created during the evaluation run of model\ \ [Corianas/Quokka_256m](https://huggingface.co/Corianas/Quokka_256m) on the [Open\ \ LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\ \nThe dataset is composed of 3 configuration, each one coresponding to one of the\ \ evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be\ \ found as a specific split in each configuration, the split being named using the\ \ timestamp of the run.The \"train\" split is always pointing to the latest results.\n\ \nAn additional configuration \"results\" store all the aggregated results of the\ \ run (and is used to compute and display the agregated metrics on the [Open LLM\ \ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\ \nTo load the details from a run, you can for instance do the following:\n```python\n\ from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_Corianas__Quokka_256m\"\ ,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\ These are the [latest results from run 2023-09-23T10:43:58.208940](https://huggingface.co/datasets/open-llm-leaderboard/details_Corianas__Quokka_256m/blob/main/results_2023-09-23T10-43-58.208940.json)(note\ \ that their might be results for other tasks in the repos if successive evals didn't\ \ cover the same tasks. 
You find each in the results and the \"latest\" split for\ \ each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.003984899328859061,\n\ \ \"em_stderr\": 0.0006451805848102272,\n \"f1\": 0.04266883389261752,\n\ \ \"f1_stderr\": 0.0013952300953918367,\n \"acc\": 0.2612470402525651,\n\ \ \"acc_stderr\": 0.007019128912029941\n },\n \"harness|drop|3\": {\n\ \ \"em\": 0.003984899328859061,\n \"em_stderr\": 0.0006451805848102272,\n\ \ \"f1\": 0.04266883389261752,\n \"f1_stderr\": 0.0013952300953918367\n\ \ },\n \"harness|gsm8k|5\": {\n \"acc\": 0.0,\n \"acc_stderr\"\ : 0.0\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.5224940805051302,\n\ \ \"acc_stderr\": 0.014038257824059881\n }\n}\n```" repo_url: https://huggingface.co/Corianas/Quokka_256m leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard point_of_contact: clementine@hf.co configs: - config_name: harness_drop_3 data_files: - split: 2023_09_23T10_43_58.208940 path: - '**/details_harness|drop|3_2023-09-23T10-43-58.208940.parquet' - split: latest path: - '**/details_harness|drop|3_2023-09-23T10-43-58.208940.parquet' - config_name: harness_gsm8k_5 data_files: - split: 2023_09_23T10_43_58.208940 path: - '**/details_harness|gsm8k|5_2023-09-23T10-43-58.208940.parquet' - split: latest path: - '**/details_harness|gsm8k|5_2023-09-23T10-43-58.208940.parquet' - config_name: harness_winogrande_5 data_files: - split: 2023_09_23T10_43_58.208940 path: - '**/details_harness|winogrande|5_2023-09-23T10-43-58.208940.parquet' - split: latest path: - '**/details_harness|winogrande|5_2023-09-23T10-43-58.208940.parquet' - config_name: results data_files: - split: 2023_09_23T10_43_58.208940 path: - results_2023-09-23T10-43-58.208940.parquet - split: latest path: - results_2023-09-23T10-43-58.208940.parquet --- # Dataset Card for Evaluation run of Corianas/Quokka_256m ## Dataset Description - **Homepage:** - **Repository:** https://huggingface.co/Corianas/Quokka_256m - **Paper:** - **Leaderboard:** 
https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard - **Point of Contact:** clementine@hf.co ### Dataset Summary Dataset automatically created during the evaluation run of model [Corianas/Quokka_256m](https://huggingface.co/Corianas/Quokka_256m) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard). The dataset is composed of 3 configurations, each one corresponding to one of the evaluated tasks. The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest results. An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)). To load the details from a run, you can for instance do the following: ```python from datasets import load_dataset data = load_dataset("open-llm-leaderboard/details_Corianas__Quokka_256m", "harness_winogrande_5", split="train") ``` ## Latest results These are the [latest results from run 2023-09-23T10:43:58.208940](https://huggingface.co/datasets/open-llm-leaderboard/details_Corianas__Quokka_256m/blob/main/results_2023-09-23T10-43-58.208940.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks. 
You find each in the results and the "latest" split for each eval): ```python { "all": { "em": 0.003984899328859061, "em_stderr": 0.0006451805848102272, "f1": 0.04266883389261752, "f1_stderr": 0.0013952300953918367, "acc": 0.2612470402525651, "acc_stderr": 0.007019128912029941 }, "harness|drop|3": { "em": 0.003984899328859061, "em_stderr": 0.0006451805848102272, "f1": 0.04266883389261752, "f1_stderr": 0.0013952300953918367 }, "harness|gsm8k|5": { "acc": 0.0, "acc_stderr": 0.0 }, "harness|winogrande|5": { "acc": 0.5224940805051302, "acc_stderr": 0.014038257824059881 } } ``` ### Supported Tasks and Leaderboards [More Information Needed] ### Languages [More Information Needed] ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information [More Information Needed] ### Contributions [More Information Needed]
martinakaduc/hh-rlhf-gpt2-embedding
2023-09-27T16:54:37.000Z
[ "language:en", "license:mit", "region:us" ]
martinakaduc
null
null
null
0
0
--- license: mit language: - en dataset_info: features: - name: chosen sequence: float64 - name: rejected sequence: float64 splits: - name: train num_bytes: 1810512224 num_examples: 147244 - name: test num_bytes: 96179312 num_examples: 7822 download_size: 1565358947 dataset_size: 1906691536 ---
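As a quick arithmetic check of the metadata above, the per-split `num_bytes` values sum to the reported `dataset_size`. A minimal sketch using the numbers from the `dataset_info` block:

```python
# Split sizes taken from the dataset_info block above (in bytes).
split_bytes = {"train": 1810512224, "test": 96179312}

dataset_size = sum(split_bytes.values())
print(dataset_size)  # 1906691536 -- matches the reported dataset_size
```

Note that `download_size` (1565358947) is smaller than `dataset_size`, since the Parquet files on the Hub are compressed.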
dongyoung4091/shp-generated_flan_t5_large_flan_t5_large_zeroshot
2023-09-23T10:59:25.000Z
[ "region:us" ]
dongyoung4091
null
null
null
0
0
--- dataset_info: features: - name: prompt dtype: string - name: response dtype: string - name: zeroshot_helpfulness dtype: float64 - name: zeroshot_specificity dtype: float64 - name: zeroshot_intent dtype: float64 - name: zeroshot_factuality dtype: float64 - name: zeroshot_easy-to-understand dtype: float64 - name: zeroshot_relevance dtype: float64 - name: zeroshot_readability dtype: float64 - name: zeroshot_enough-detail dtype: float64 - name: 'zeroshot_biased:' dtype: float64 - name: zeroshot_fail-to-consider-individual-preferences dtype: float64 - name: zeroshot_repetetive dtype: float64 - name: zeroshot_fail-to-consider-context dtype: float64 - name: zeroshot_too-long dtype: float64 splits: - name: train num_bytes: 29493865 num_examples: 25600 download_size: 1905432 dataset_size: 29493865 configs: - config_name: default data_files: - split: train path: data/train-* --- # Dataset Card for "shp-generated_flan_t5_large_flan_t5_large_zeroshot" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
ossaili/test_01
2023-09-23T11:03:17.000Z
[ "region:us" ]
ossaili
null
null
null
0
0
--- dataset_info: features: - name: image dtype: image - name: text dtype: string splits: - name: train num_bytes: 102096.0 num_examples: 1 download_size: 103703 dataset_size: 102096.0 --- # Dataset Card for "test_01" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
hohlederbetrug/hohlederbetrug
2023-09-23T11:05:57.000Z
[ "region:us" ]
hohlederbetrug
null
null
null
0
0
Entry not found
ayoubkirouane/One-Piece-anime-captions
2023-09-23T11:20:10.000Z
[ "region:us" ]
ayoubkirouane
null
null
null
0
0
--- configs: - config_name: default data_files: - split: train path: data/train-* dataset_info: features: - name: image dtype: image - name: text dtype: string splits: - name: train num_bytes: 28504098.0 num_examples: 856 download_size: 28452041 dataset_size: 28504098.0 --- # Dataset Card for "One-Piece-anime-captions" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
taldarim/ar-higher
2023-09-23T11:23:35.000Z
[ "region:us" ]
taldarim
null
null
null
0
0
--- dataset_info: features: - name: text dtype: string - name: Comprehension dtype: class_label: names: '0': '0' '1': '1' - name: Configuration dtype: class_label: names: '0': '0' '1': '1' - name: Crashes dtype: class_label: names: '0': '0' '1': '1' - name: Implementation dtype: class_label: names: '0': '0' '1': '1' - name: Performance issue dtype: class_label: names: '0': '0' '1': '1' - name: Results interpretation dtype: class_label: names: '0': '0' '1': '1' splits: - name: train num_bytes: 373318 num_examples: 280 - name: test num_bytes: 369328 num_examples: 236 download_size: 186867 dataset_size: 742646 configs: - config_name: default data_files: - split: train path: data/train-* - split: test path: data/test-* --- # Dataset Card for "ar-higher" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
kanoyo/gui
2023-09-23T11:57:27.000Z
[ "license:mit", "region:us" ]
kanoyo
null
null
null
0
0
--- license: mit ---
ixfo/test_hf
2023-09-23T12:12:39.000Z
[ "region:us" ]
ixfo
null
null
null
0
0
Entry not found
JapanDegitalMaterial/Texture_images
2023-09-23T14:05:23.000Z
[ "task_categories:text-to-image", "language:en", "language:ja", "license:cc0-1.0", "region:us" ]
JapanDegitalMaterial
null
null
null
0
0
--- license: cc0-1.0 task_categories: - text-to-image language: - en - ja --- # Texture images This is a dataset to train text-to-image or other models without any copyright issues. All materials used in this dataset are CC0 (Public domain / P.D.). ## Dataset Description - **Homepage:** - https://www.deviantart.com/japanmaterial - - **Repository:** - **Paper:** - **Leaderboard:** - **Point of Contact:** ### Dataset Summary This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1). ### Supported Tasks and Leaderboards [More Information Needed] ### Languages [More Information Needed] ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information [More Information Needed] ### Contributions [More Information Needed]
Valmy/Hackers_Face_Detection_Image
2023-09-23T12:29:37.000Z
[ "license:other", "region:us" ]
Valmy
null
null
null
0
0
--- license: other ---
isno0907/AFHQv2_CAT_256
2023-09-23T12:29:52.000Z
[ "region:us" ]
isno0907
null
null
null
0
0
Entry not found
JapanDegitalMaterial/Objects_in_Japan
2023-09-23T14:19:40.000Z
[ "license:cc0-1.0", "region:us" ]
JapanDegitalMaterial
null
null
null
0
0
--- license: cc0-1.0 --- # Objects in Japan This is a dataset to train text-to-image or other models without any copyright issues. All materials used in this dataset are CC0 (Public domain / P.D.). ## Dataset Description - **Homepage:** - https://www.deviantart.com/japanmaterial - - **Repository:** - **Paper:** - **Leaderboard:** - **Point of Contact:** ### Dataset Summary This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1). ### Supported Tasks and Leaderboards [More Information Needed] ### Languages [More Information Needed] ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information [More Information Needed] ### Contributions [More Information Needed]
lionpig/myopenia
2023-09-30T10:03:15.000Z
[ "region:us" ]
lionpig
null
null
null
0
0
Entry not found
JapanDegitalMaterial/Places_in_Japan
2023-09-23T14:00:16.000Z
[ "task_categories:text-to-image", "language:en", "language:ja", "license:cc0-1.0", "region:us" ]
JapanDegitalMaterial
null
null
null
0
0
--- license: cc0-1.0 task_categories: - text-to-image language: - en - ja --- # Places in Japan This is a dataset to train text-to-image or other models without any copyright issues. All materials used in this dataset are CC0 (Public domain / P.D.). ## Dataset Description - **Homepage:** - https://www.deviantart.com/japanmaterial - - **Repository:** - **Paper:** - **Leaderboard:** - **Point of Contact:** ### Dataset Summary This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1). ### Supported Tasks and Leaderboards [More Information Needed] ### Languages [More Information Needed] ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information [More Information Needed] ### Contributions [More Information Needed]
BangumiBase/puellamagimadokamagica
2023-09-29T11:24:42.000Z
[ "size_categories:1K<n<10K", "license:mit", "art", "region:us" ]
BangumiBase
null
null
null
0
0
--- license: mit tags: - art size_categories: - 1K<n<10K --- # Bangumi Image Base of Puella Magi Madoka Magica This is the image base of the bangumi Puella Magi Madoka Magica; we detected 17 characters and 2197 images in total. The full dataset is [here](all.zip). **Please note that these image bases are not guaranteed to be 100% cleaned; they may actually contain noise.** If you intend to manually train models using this dataset, we recommend performing the necessary preprocessing on the downloaded dataset to eliminate potential noisy samples (approximately 1% probability). Here is the characters' preview: | # | Images | Download | Preview 1 | Preview 2 | Preview 3 | Preview 4 | Preview 5 | Preview 6 | Preview 7 | Preview 8 | |:------|---------:|:---------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------| | 0 | 561 | [Download](0/dataset.zip) | ![preview 1](0/preview_1.png) | ![preview 2](0/preview_2.png) | ![preview 3](0/preview_3.png) | ![preview 4](0/preview_4.png) | ![preview 5](0/preview_5.png) | ![preview 6](0/preview_6.png) | ![preview 7](0/preview_7.png) | ![preview 8](0/preview_8.png) | | 1 | 238 | [Download](1/dataset.zip) | ![preview 1](1/preview_1.png) | ![preview 2](1/preview_2.png) | ![preview 3](1/preview_3.png) | ![preview 4](1/preview_4.png) | ![preview 5](1/preview_5.png) | ![preview 6](1/preview_6.png) | ![preview 7](1/preview_7.png) | ![preview 8](1/preview_8.png) | | 2 | 29 | [Download](2/dataset.zip) | ![preview 1](2/preview_1.png) | ![preview 2](2/preview_2.png) | ![preview 3](2/preview_3.png) | ![preview 4](2/preview_4.png) | ![preview 5](2/preview_5.png) | ![preview 6](2/preview_6.png) | ![preview 7](2/preview_7.png) | ![preview 8](2/preview_8.png) | | 3 | 355 | [Download](3/dataset.zip) | ![preview 1](3/preview_1.png) | 
![preview 2](3/preview_2.png) | ![preview 3](3/preview_3.png) | ![preview 4](3/preview_4.png) | ![preview 5](3/preview_5.png) | ![preview 6](3/preview_6.png) | ![preview 7](3/preview_7.png) | ![preview 8](3/preview_8.png) | | 4 | 392 | [Download](4/dataset.zip) | ![preview 1](4/preview_1.png) | ![preview 2](4/preview_2.png) | ![preview 3](4/preview_3.png) | ![preview 4](4/preview_4.png) | ![preview 5](4/preview_5.png) | ![preview 6](4/preview_6.png) | ![preview 7](4/preview_7.png) | ![preview 8](4/preview_8.png) | | 5 | 45 | [Download](5/dataset.zip) | ![preview 1](5/preview_1.png) | ![preview 2](5/preview_2.png) | ![preview 3](5/preview_3.png) | ![preview 4](5/preview_4.png) | ![preview 5](5/preview_5.png) | ![preview 6](5/preview_6.png) | ![preview 7](5/preview_7.png) | ![preview 8](5/preview_8.png) | | 6 | 32 | [Download](6/dataset.zip) | ![preview 1](6/preview_1.png) | ![preview 2](6/preview_2.png) | ![preview 3](6/preview_3.png) | ![preview 4](6/preview_4.png) | ![preview 5](6/preview_5.png) | ![preview 6](6/preview_6.png) | ![preview 7](6/preview_7.png) | ![preview 8](6/preview_8.png) | | 7 | 12 | [Download](7/dataset.zip) | ![preview 1](7/preview_1.png) | ![preview 2](7/preview_2.png) | ![preview 3](7/preview_3.png) | ![preview 4](7/preview_4.png) | ![preview 5](7/preview_5.png) | ![preview 6](7/preview_6.png) | ![preview 7](7/preview_7.png) | ![preview 8](7/preview_8.png) | | 8 | 15 | [Download](8/dataset.zip) | ![preview 1](8/preview_1.png) | ![preview 2](8/preview_2.png) | ![preview 3](8/preview_3.png) | ![preview 4](8/preview_4.png) | ![preview 5](8/preview_5.png) | ![preview 6](8/preview_6.png) | ![preview 7](8/preview_7.png) | ![preview 8](8/preview_8.png) | | 9 | 16 | [Download](9/dataset.zip) | ![preview 1](9/preview_1.png) | ![preview 2](9/preview_2.png) | ![preview 3](9/preview_3.png) | ![preview 4](9/preview_4.png) | ![preview 5](9/preview_5.png) | ![preview 6](9/preview_6.png) | ![preview 7](9/preview_7.png) | ![preview 8](9/preview_8.png) | | 10 
| 6 | [Download](10/dataset.zip) | ![preview 1](10/preview_1.png) | ![preview 2](10/preview_2.png) | ![preview 3](10/preview_3.png) | ![preview 4](10/preview_4.png) | ![preview 5](10/preview_5.png) | ![preview 6](10/preview_6.png) | N/A | N/A | | 11 | 58 | [Download](11/dataset.zip) | ![preview 1](11/preview_1.png) | ![preview 2](11/preview_2.png) | ![preview 3](11/preview_3.png) | ![preview 4](11/preview_4.png) | ![preview 5](11/preview_5.png) | ![preview 6](11/preview_6.png) | ![preview 7](11/preview_7.png) | ![preview 8](11/preview_8.png) | | 12 | 150 | [Download](12/dataset.zip) | ![preview 1](12/preview_1.png) | ![preview 2](12/preview_2.png) | ![preview 3](12/preview_3.png) | ![preview 4](12/preview_4.png) | ![preview 5](12/preview_5.png) | ![preview 6](12/preview_6.png) | ![preview 7](12/preview_7.png) | ![preview 8](12/preview_8.png) | | 13 | 64 | [Download](13/dataset.zip) | ![preview 1](13/preview_1.png) | ![preview 2](13/preview_2.png) | ![preview 3](13/preview_3.png) | ![preview 4](13/preview_4.png) | ![preview 5](13/preview_5.png) | ![preview 6](13/preview_6.png) | ![preview 7](13/preview_7.png) | ![preview 8](13/preview_8.png) | | 14 | 13 | [Download](14/dataset.zip) | ![preview 1](14/preview_1.png) | ![preview 2](14/preview_2.png) | ![preview 3](14/preview_3.png) | ![preview 4](14/preview_4.png) | ![preview 5](14/preview_5.png) | ![preview 6](14/preview_6.png) | ![preview 7](14/preview_7.png) | ![preview 8](14/preview_8.png) | | 15 | 13 | [Download](15/dataset.zip) | ![preview 1](15/preview_1.png) | ![preview 2](15/preview_2.png) | ![preview 3](15/preview_3.png) | ![preview 4](15/preview_4.png) | ![preview 5](15/preview_5.png) | ![preview 6](15/preview_6.png) | ![preview 7](15/preview_7.png) | ![preview 8](15/preview_8.png) | | noise | 198 | [Download](-1/dataset.zip) | ![preview 1](-1/preview_1.png) | ![preview 2](-1/preview_2.png) | ![preview 3](-1/preview_3.png) | ![preview 4](-1/preview_4.png) | ![preview 5](-1/preview_5.png) | ![preview 
6](-1/preview_6.png) | ![preview 7](-1/preview_7.png) | ![preview 8](-1/preview_8.png) |
rigolettofranc/regularization_data
2023-09-23T16:53:02.000Z
[ "license:openrail", "region:us" ]
rigolettofranc
null
null
null
1
0
--- license: openrail ---
SilentAntagonist/ai-generated-music-dataset_singing-vocals-and-instrumental-accompaniments
2023-09-23T13:45:38.000Z
[ "license:cc-by-nc-4.0", "region:us" ]
SilentAntagonist
null
null
null
0
0
--- license: cc-by-nc-4.0 ---
LiChenYi/china-law
2023-10-08T11:43:25.000Z
[ "region:us" ]
LiChenYi
null
null
null
0
0
Entry not found
Alfred0622/HypR
2023-09-23T13:34:24.000Z
[ "region:us" ]
Alfred0622
null
null
null
0
0
Entry not found
open-llm-leaderboard/details_tiiuae__falcon-40b-instruct
2023-09-23T13:36:31.000Z
[ "region:us" ]
open-llm-leaderboard
null
null
null
0
0
--- pretty_name: Evaluation run of tiiuae/falcon-40b-instruct dataset_summary: "Dataset automatically created during the evaluation run of model\ \ [tiiuae/falcon-40b-instruct](https://huggingface.co/tiiuae/falcon-40b-instruct)\ \ on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\ \nThe dataset is composed of 3 configuration, each one coresponding to one of the\ \ evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be\ \ found as a specific split in each configuration, the split being named using the\ \ timestamp of the run.The \"train\" split is always pointing to the latest results.\n\ \nAn additional configuration \"results\" store all the aggregated results of the\ \ run (and is used to compute and display the agregated metrics on the [Open LLM\ \ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\ \nTo load the details from a run, you can for instance do the following:\n```python\n\ from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_tiiuae__falcon-40b-instruct\"\ ,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\ These are the [latest results from run 2023-09-23T13:36:20.116121](https://huggingface.co/datasets/open-llm-leaderboard/details_tiiuae__falcon-40b-instruct/blob/main/results_2023-09-23T13-36-20.116121.json)(note\ \ that their might be results for other tasks in the repos if successive evals didn't\ \ cover the same tasks. 
You find each in the results and the \"latest\" split for\ \ each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.008494127516778523,\n\ \ \"em_stderr\": 0.0009398243325411525,\n \"f1\": 0.07122378355704674,\n\ \ \"f1_stderr\": 0.0016125239917803853,\n \"acc\": 0.503219594859419,\n\ \ \"acc_stderr\": 0.011237300869919437\n },\n \"harness|drop|3\": {\n\ \ \"em\": 0.008494127516778523,\n \"em_stderr\": 0.0009398243325411525,\n\ \ \"f1\": 0.07122378355704674,\n \"f1_stderr\": 0.0016125239917803853\n\ \ },\n \"harness|gsm8k|5\": {\n \"acc\": 0.20849128127369218,\n \ \ \"acc_stderr\": 0.011189587985791428\n },\n \"harness|winogrande|5\"\ : {\n \"acc\": 0.797947908445146,\n \"acc_stderr\": 0.011285013754047448\n\ \ }\n}\n```" repo_url: https://huggingface.co/tiiuae/falcon-40b-instruct leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard point_of_contact: clementine@hf.co configs: - config_name: harness_drop_3 data_files: - split: 2023_09_23T13_36_20.116121 path: - '**/details_harness|drop|3_2023-09-23T13-36-20.116121.parquet' - split: latest path: - '**/details_harness|drop|3_2023-09-23T13-36-20.116121.parquet' - config_name: harness_gsm8k_5 data_files: - split: 2023_09_23T13_36_20.116121 path: - '**/details_harness|gsm8k|5_2023-09-23T13-36-20.116121.parquet' - split: latest path: - '**/details_harness|gsm8k|5_2023-09-23T13-36-20.116121.parquet' - config_name: harness_winogrande_5 data_files: - split: 2023_09_23T13_36_20.116121 path: - '**/details_harness|winogrande|5_2023-09-23T13-36-20.116121.parquet' - split: latest path: - '**/details_harness|winogrande|5_2023-09-23T13-36-20.116121.parquet' - config_name: results data_files: - split: 2023_09_23T13_36_20.116121 path: - results_2023-09-23T13-36-20.116121.parquet - split: latest path: - results_2023-09-23T13-36-20.116121.parquet --- # Dataset Card for Evaluation run of tiiuae/falcon-40b-instruct ## Dataset Description - **Homepage:** - **Repository:** 
https://huggingface.co/tiiuae/falcon-40b-instruct - **Paper:** - **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard - **Point of Contact:** clementine@hf.co ### Dataset Summary Dataset automatically created during the evaluation run of model [tiiuae/falcon-40b-instruct](https://huggingface.co/tiiuae/falcon-40b-instruct) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard). The dataset is composed of 3 configurations, each one corresponding to one of the evaluated tasks. The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results. An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)). To load the details from a run, you can for instance do the following: ```python from datasets import load_dataset data = load_dataset("open-llm-leaderboard/details_tiiuae__falcon-40b-instruct", "harness_winogrande_5", split="train") ``` ## Latest results These are the [latest results from run 2023-09-23T13:36:20.116121](https://huggingface.co/datasets/open-llm-leaderboard/details_tiiuae__falcon-40b-instruct/blob/main/results_2023-09-23T13-36-20.116121.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. 
You find each in the results and the "latest" split for each eval): ```python { "all": { "em": 0.008494127516778523, "em_stderr": 0.0009398243325411525, "f1": 0.07122378355704674, "f1_stderr": 0.0016125239917803853, "acc": 0.503219594859419, "acc_stderr": 0.011237300869919437 }, "harness|drop|3": { "em": 0.008494127516778523, "em_stderr": 0.0009398243325411525, "f1": 0.07122378355704674, "f1_stderr": 0.0016125239917803853 }, "harness|gsm8k|5": { "acc": 0.20849128127369218, "acc_stderr": 0.011189587985791428 }, "harness|winogrande|5": { "acc": 0.797947908445146, "acc_stderr": 0.011285013754047448 } } ``` ### Supported Tasks and Leaderboards [More Information Needed] ### Languages [More Information Needed] ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information [More Information Needed] ### Contributions [More Information Needed]
phusroyal/ViHOS
2023-09-23T19:02:18.000Z
[ "task_categories:text-classification", "task_categories:token-classification", "task_ids:hate-speech-detection", "annotations_creators:crowdsourced", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:original", "language:vi", "license:mit", "region:us" ]
phusroyal
This is a dataset of Vietnamese hate and offensive spans from social media texts.
null
null
2
0
--- annotations_creators: - crowdsourced license: mit multilinguality: - monolingual source_datasets: - original task_ids: - hate-speech-detection task_categories: - text-classification - token-classification language: - vi pretty_name: ViHOS - Vietnamese Hate and Offensive Spans Dataset size_categories: - 10K<n<100K configs: - config_name: default data_files: - split: train_sequence_labeling path: - "train_sequence_labeling/syllable/train_BIO_syllable.csv" - "train_sequence_labeling/syllable/dev_BIO_syllable.csv" - "train_sequence_labeling/syllable/test_BIO_syllable.csv" - "train_sequence_labeling/word/train_BIO_Word.csv" - "train_sequence_labeling/word/dev_BIO_Word.csv" - "train_sequence_labeling/word/test_BIO_Word.csv" - split: train_span_extraction path: - 'train_span_extraction/train.csv' - 'train_span_extraction/dev.csv' - split: test path: "test/test.csv" --- **Disclaimer**: This project contains real comments that could be considered profane, offensive, or abusive. # Dataset Card for "ViHOS - Vietnamese Hate and Offensive Spans Dataset" ## Dataset Description - **Repository:** [ViHOS](https://github.com/phusroyal/ViHOS) - **Paper:** [EACL-ViHOS](https://aclanthology.org/2023.eacl-main.47/) - **Total amount of disk used:** 2.6 MB ## Dataset Motivation The rise in hateful and offensive language directed at other users is one of the adverse side effects of the increased use of social networking platforms. This could make it difficult for human moderators to review tagged comments filtered by classification systems. To help address this issue, we present the ViHOS (**Vi**etnamese **H**ate and **O**ffensive **S**pans) dataset, the first human-annotated corpus containing 26k spans on 11k online comments. Our goal is to create a dataset that contains comprehensive hate and offensive thoughts, meanings, or opinions within the comments rather than just a lexicon of hate and offensive terms. 
We also provide definitions of hateful and offensive spans in Vietnamese comments as well as detailed annotation guidelines. Furthermore, our solutions to deal with *nine different online foul linguistic phenomena* are also provided in the [*paper*](https://aclanthology.org/2023.eacl-main.47/) (e.g. Teencodes; Metaphors, metonymies; Hyponyms; Puns...). We hope that this dataset will be useful for researchers and practitioners in the field of hate speech detection in general and hate spans detection in particular. ## Dataset Summary ViHOS contains 26,476 human-annotated spans on 11,056 comments (5,360 comments have hate and offensive spans, and 5,696 comments do not). It is split into train, dev, and test sets with the following information: 1. Train set: 8,844 comments 2. Dev set: 1,106 comments 3. Test set: 1,106 comments ## Data Instance A span extraction-based (see Data Structure for more details) example of 'test' looks as follows: ``` { "content": "Thối CC chỉ không ngửi đuợc thôi", 'index_spans': "[0, 1, 2, 3, 5, 6]" } ``` A sequence labeling-based (see Data Structure for more details) example of 'test' looks as follows: ``` { "content": "Thối CC chỉ không ngửi đuợc thôi", 'index_spans': ["B-T", "I-T", "O", "O", "O", "O", "O"] } ``` ## Data Structure Here is our data folder structure! ``` . └── data/ ├── train_sequence_labeling/ │ ├── syllable/ │ │ ├── dev_BIO_syllable.csv │ │ ├── test_BIO_syllable.csv │ │ └── train_BIO_syllable.csv │ └── word/ │ ├── dev_BIO_Word.csv │ ├── test_BIO_Word.csv │ └── train_BIO_Word.csv ├── train_span_extraction/ │ ├── dev.csv │ └── train.csv └── test/ └── test.csv ``` ### Sequence labeling-based version #### Syllable Description: - This folder contains the data for the sequence labeling-based version of the task. The data is divided into two files: train and dev. Each file contains the following columns: - **index**: The id of the word. 
- **word**: Words in the sentence after tokenization with the [VnCoreNLP](https://github.com/vncorenlp/VnCoreNLP) tokenizer, followed by underscore tokenization. The reason for this is that some words are in a bad format: e.g. "điện.thoại của tôi" is split into ["điện.thoại", "của", "tôi"] instead of ["điện", "thoại", "của", "tôi"] if we use space tokenization, which is not the right syllable format. Therefore, we used VnCoreNLP to tokenize first and then split words into tokens, e.g. "điện.thoại của tôi" ---(VnCoreNLP)---> ["điện_thoại", "của", "tôi"] ---(split by "_")---> ["điện", "thoại", "của", "tôi"]. - **tag**: The tag of the word. The tag is either B-T (beginning of a word), I-T (inside of a word), or O (outside of a word). - The train_BIO_syllable and dev_BIO_syllable files are used for training and validation of the XLMR model, respectively. - The test_BIO_syllable file is used for reference only. It is not used for testing the model. **Please use the test.csv file in the data/test folder for testing the model.** #### Word Description: - This folder contains the data for the sequence labeling-based version of the task. The data is divided into two files: train and dev. Each file contains the following columns: - **index**: The id of the word. - **word**: Words in the sentence after tokenization with the [VnCoreNLP](https://github.com/vncorenlp/VnCoreNLP) tokenizer. - **tag**: The tag of the word. The tag is either B-T (beginning of a word), I-T (inside of a word), or O (outside of a word). - The train_BIO_Word and dev_BIO_Word files are used for training and validation of the PhoBERT model, respectively. - The test_BIO_Word file is used for reference only. It is not used for testing the model. **Please use the test.csv file in the data/test folder for testing the model.** ### Span Extraction-based version Description: - This folder contains the data for the span extraction-based version of the task. 
The data is divided into two files: train and dev. Each file contains the following columns: - **content**: The content of the sentence. - **span_ids**: The index of the hate and offensive spans in the sentence. The index is in the format of [start, end] where start is the index of the first character of the hate and offensive span and end is the index of the last character of the hate and offensive span. - The train and dev files are used for training and validation of the BiLSTM-CRF model, respectively. ### Citation Information ``` @inproceedings{hoang-etal-2023-vihos, title = "{V}i{HOS}: Hate Speech Spans Detection for {V}ietnamese", author = "Hoang, Phu Gia and Luu, Canh Duc and Tran, Khanh Quoc and Nguyen, Kiet Van and Nguyen, Ngan Luu-Thuy", booktitle = "Proceedings of the 17th Conference of the European Chapter of the Association for Computational Linguistics", month = may, year = "2023", address = "Dubrovnik, Croatia", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.eacl-main.47", doi = "10.18653/v1/2023.eacl-main.47", pages = "652--669", abstract = "The rise in hateful and offensive language directed at other users is one of the adverse side effects of the increased use of social networking platforms. This could make it difficult for human moderators to review tagged comments filtered by classification systems. To help address this issue, we present the ViHOS (Vietnamese Hate and Offensive Spans) dataset, the first human-annotated corpus containing 26k spans on 11k comments. We also provide definitions of hateful and offensive spans in Vietnamese comments as well as detailed annotation guidelines. Besides, we conduct experiments with various state-of-the-art models. Specifically, XLM-R{\_}Large achieved the best F1-scores in Single span detection and All spans detection, while PhoBERT{\_}Large obtained the highest in Multiple spans detection. 
Finally, our error analysis demonstrates the difficulties in detecting specific types of spans in our data for future research. Our dataset is released on GitHub.", } ```
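For convenience, the character-index spans shown in the Data Instance section above can be decoded back into text. Below is a minimal sketch; the helper name `decode_spans` is ours, and it assumes `index_spans` is a stringified list of character indices, as in the span extraction-based 'test' example:

```python
import ast

def decode_spans(content: str, index_spans: str) -> list:
    """Group consecutive character indices and return the marked substrings."""
    indices = ast.literal_eval(index_spans)  # "[0, 1, 2, 3, 5, 6]" -> [0, 1, 2, 3, 5, 6]
    spans, current, prev = [], [], None
    for i in indices:
        if prev is not None and i != prev + 1:  # a gap in the indices closes the current span
            spans.append("".join(current))
            current = []
        current.append(content[i])
        prev = i
    if current:
        spans.append("".join(current))
    return spans

print(decode_spans("Thối CC chỉ không ngửi đuợc thôi", "[0, 1, 2, 3, 5, 6]"))  # ['Thối', 'CC']
```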
mesolitica/NER-augmentation
2023-09-23T14:09:40.000Z
[ "region:us" ]
mesolitica
null
null
null
0
0
Entry not found
carles-undergrad-thesis/indo-mmarco
2023-09-23T14:34:18.000Z
[ "region:us" ]
carles-undergrad-thesis
null
null
null
0
0
Entry not found
MrJoshua217/cagliostro-colab-ui
2023-09-23T14:36:14.000Z
[ "region:us" ]
MrJoshua217
null
null
null
0
0
Entry not found
xieyizheng/elastic_zip
2023-09-23T14:46:34.000Z
[ "region:us" ]
xieyizheng
null
null
null
0
0
Entry not found
felixdae/length-control
2023-09-23T15:15:50.000Z
[ "region:us" ]
felixdae
null
null
null
0
0
source https://worksheets.codalab.org/bundles/0x8b65ebfe46674fbc83fc6df60da32f1b
LoliOverflow/DeviantArtCollection
2023-09-23T15:07:58.000Z
[ "region:us" ]
LoliOverflow
null
null
null
0
0
Entry not found
CyberHarem/kaname_madoka_puellamagimadokamagica
2023-09-23T15:32:31.000Z
[ "task_categories:text-to-image", "size_categories:n<1K", "license:mit", "art", "not-for-all-audiences", "region:us" ]
CyberHarem
null
null
null
0
0
--- license: mit task_categories: - text-to-image tags: - art - not-for-all-audiences size_categories: - n<1K --- # Dataset of Kaname Madoka This is the dataset of Kaname Madoka, containing 300 images and their tags. Images are crawled from many sites (e.g. danbooru, pixiv, zerochan, ...); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)). | Name | Images | Download | Description | |:------------|---------:|:------------------------------------|:-------------------------------------------------------------------------| | raw | 300 | [Download](dataset-raw.zip) | Raw data with meta information. | | raw-stage3 | 650 | [Download](dataset-raw-stage3.zip) | 3-stage cropped raw data with meta information. | | 384x512 | 300 | [Download](dataset-384x512.zip) | 384x512 aligned dataset. | | 512x512 | 300 | [Download](dataset-512x512.zip) | 512x512 aligned dataset. | | 512x704 | 300 | [Download](dataset-512x704.zip) | 512x704 aligned dataset. | | 640x640 | 300 | [Download](dataset-640x640.zip) | 640x640 aligned dataset. | | 640x880 | 300 | [Download](dataset-640x880.zip) | 640x880 aligned dataset. | | stage3-640 | 650 | [Download](dataset-stage3-640.zip) | 3-stage cropped dataset with the shorter side not exceeding 640 pixels. | | stage3-800 | 650 | [Download](dataset-stage3-800.zip) | 3-stage cropped dataset with the shorter side not exceeding 800 pixels. | | stage3-1200 | 650 | [Download](dataset-stage3-1200.zip) | 3-stage cropped dataset with the shorter side not exceeding 1200 pixels. |
CyberHarem/akemi_homura_puellamagimadokamagica
2023-09-23T16:20:00.000Z
[ "task_categories:text-to-image", "size_categories:n<1K", "license:mit", "art", "not-for-all-audiences", "region:us" ]
CyberHarem
null
null
null
0
0
--- license: mit task_categories: - text-to-image tags: - art - not-for-all-audiences size_categories: - n<1K --- # Dataset of Akemi Homura This is the dataset of Akemi Homura, containing 261 images and their tags. Images are crawled from many sites (e.g. danbooru, pixiv, zerochan, ...); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)). | Name | Images | Download | Description | |:------------|---------:|:------------------------------------|:-------------------------------------------------------------------------| | raw | 261 | [Download](dataset-raw.zip) | Raw data with meta information. | | raw-stage3 | 544 | [Download](dataset-raw-stage3.zip) | 3-stage cropped raw data with meta information. | | 384x512 | 261 | [Download](dataset-384x512.zip) | 384x512 aligned dataset. | | 512x512 | 261 | [Download](dataset-512x512.zip) | 512x512 aligned dataset. | | 512x704 | 261 | [Download](dataset-512x704.zip) | 512x704 aligned dataset. | | 640x640 | 261 | [Download](dataset-640x640.zip) | 640x640 aligned dataset. | | 640x880 | 261 | [Download](dataset-640x880.zip) | 640x880 aligned dataset. | | stage3-640 | 544 | [Download](dataset-stage3-640.zip) | 3-stage cropped dataset with the shorter side not exceeding 640 pixels. | | stage3-800 | 544 | [Download](dataset-stage3-800.zip) | 3-stage cropped dataset with the shorter side not exceeding 800 pixels. | | stage3-1200 | 544 | [Download](dataset-stage3-1200.zip) | 3-stage cropped dataset with the shorter side not exceeding 1200 pixels. |
Jinyan1/PolitiFact
2023-09-23T16:35:37.000Z
[ "region:us" ]
Jinyan1
null
null
null
0
0
--- configs: - config_name: default data_files: - split: MF path: data/MF-* - split: HF path: data/HF-* - split: MR path: data/MR-* - split: HR path: data/HR-* dataset_info: features: - name: id dtype: string - name: description dtype: string - name: text dtype: string - name: title dtype: string splits: - name: MF num_bytes: 164626 num_examples: 97 - name: HF num_bytes: 266214 num_examples: 97 - name: MR num_bytes: 641082 num_examples: 132 - name: HR num_bytes: 3338801 num_examples: 194 download_size: 2380714 dataset_size: 4410723 --- # Dataset Card for "PolitiFact" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
ERmak1581/QA_Code
2023-09-23T16:49:57.000Z
[ "region:us" ]
ERmak1581
null
null
null
0
0
Code QA in Russian. Based on Den4ikAI/russian_code_qa
ERmak1581/QA_sberquad
2023-09-23T16:51:44.000Z
[ "region:us" ]
ERmak1581
null
null
null
0
0
Russian QA datasets (small, medium, large) based on sberquad QA data
CyberHarem/miki_sayaka_puellamagimadokamagica
2023-09-23T17:23:49.000Z
[ "task_categories:text-to-image", "size_categories:n<1K", "license:mit", "art", "not-for-all-audiences", "region:us" ]
CyberHarem
null
null
null
0
0
--- license: mit task_categories: - text-to-image tags: - art - not-for-all-audiences size_categories: - n<1K --- # Dataset of Miki Sayaka This is the dataset of Miki Sayaka, containing 284 images and their tags. Images are crawled from many sites (e.g. danbooru, pixiv, zerochan, ...); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)). | Name | Images | Download | Description | |:------------|---------:|:------------------------------------|:-------------------------------------------------------------------------| | raw | 284 | [Download](dataset-raw.zip) | Raw data with meta information. | | raw-stage3 | 611 | [Download](dataset-raw-stage3.zip) | 3-stage cropped raw data with meta information. | | 384x512 | 284 | [Download](dataset-384x512.zip) | 384x512 aligned dataset. | | 512x512 | 284 | [Download](dataset-512x512.zip) | 512x512 aligned dataset. | | 512x704 | 284 | [Download](dataset-512x704.zip) | 512x704 aligned dataset. | | 640x640 | 284 | [Download](dataset-640x640.zip) | 640x640 aligned dataset. | | 640x880 | 284 | [Download](dataset-640x880.zip) | 640x880 aligned dataset. | | stage3-640 | 611 | [Download](dataset-stage3-640.zip) | 3-stage cropped dataset with the shorter side not exceeding 640 pixels. | | stage3-800 | 611 | [Download](dataset-stage3-800.zip) | 3-stage cropped dataset with the shorter side not exceeding 800 pixels. | | stage3-1200 | 611 | [Download](dataset-stage3-1200.zip) | 3-stage cropped dataset with the shorter side not exceeding 1200 pixels. |
davanstrien/parl
2023-09-23T17:40:28.000Z
[ "region:us" ]
davanstrien
null
null
null
0
0
Entry not found
CyberHarem/tomoe_mami_puellamagimadokamagica
2023-09-23T17:43:50.000Z
[ "task_categories:text-to-image", "size_categories:n<1K", "license:mit", "art", "not-for-all-audiences", "region:us" ]
CyberHarem
null
null
null
0
0
--- license: mit task_categories: - text-to-image tags: - art - not-for-all-audiences size_categories: - n<1K --- # Dataset of Tomoe Mami This is the dataset of Tomoe Mami, containing 200 images and their tags. Images are crawled from many sites (e.g. danbooru, pixiv, zerochan, ...); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)). | Name | Images | Download | Description | |:------------|---------:|:------------------------------------|:-------------------------------------------------------------------------| | raw | 200 | [Download](dataset-raw.zip) | Raw data with meta information. | | raw-stage3 | 454 | [Download](dataset-raw-stage3.zip) | 3-stage cropped raw data with meta information. | | 384x512 | 200 | [Download](dataset-384x512.zip) | 384x512 aligned dataset. | | 512x512 | 200 | [Download](dataset-512x512.zip) | 512x512 aligned dataset. | | 512x704 | 200 | [Download](dataset-512x704.zip) | 512x704 aligned dataset. | | 640x640 | 200 | [Download](dataset-640x640.zip) | 640x640 aligned dataset. | | 640x880 | 200 | [Download](dataset-640x880.zip) | 640x880 aligned dataset. | | stage3-640 | 454 | [Download](dataset-stage3-640.zip) | 3-stage cropped dataset with the shorter side not exceeding 640 pixels. | | stage3-800 | 454 | [Download](dataset-stage3-800.zip) | 3-stage cropped dataset with the shorter side not exceeding 800 pixels. | | stage3-1200 | 454 | [Download](dataset-stage3-1200.zip) | 3-stage cropped dataset with the shorter side not exceeding 1200 pixels. |
tinhpx2911/vietai_book_data
2023-09-24T03:59:51.000Z
[ "region:us" ]
tinhpx2911
null
null
null
0
0
--- configs: - config_name: default data_files: - split: train path: data/train-* dataset_info: features: - name: text dtype: string splits: - name: train num_bytes: 8740094801 num_examples: 15189 download_size: 4515817258 dataset_size: 8740094801 --- # Dataset Card for "vietai_book_data" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
Lololyric/Experimental
2023-09-23T17:50:52.000Z
[ "license:openrail", "region:us" ]
Lololyric
null
null
null
0
0
--- license: openrail ---
open-llm-leaderboard/details_totally-not-an-llm__EverythingLM-13b-16k
2023-09-23T17:52:01.000Z
[ "region:us" ]
open-llm-leaderboard
null
null
null
0
0
--- pretty_name: Evaluation run of totally-not-an-llm/EverythingLM-13b-16k dataset_summary: "Dataset automatically created during the evaluation run of model\ \ [totally-not-an-llm/EverythingLM-13b-16k](https://huggingface.co/totally-not-an-llm/EverythingLM-13b-16k)\ \ on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\ \nThe dataset is composed of 3 configuration, each one coresponding to one of the\ \ evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be\ \ found as a specific split in each configuration, the split being named using the\ \ timestamp of the run.The \"train\" split is always pointing to the latest results.\n\ \nAn additional configuration \"results\" store all the aggregated results of the\ \ run (and is used to compute and display the agregated metrics on the [Open LLM\ \ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\ \nTo load the details from a run, you can for instance do the following:\n```python\n\ from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_totally-not-an-llm__EverythingLM-13b-16k\"\ ,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\ These are the [latest results from run 2023-09-23T17:51:49.550032](https://huggingface.co/datasets/open-llm-leaderboard/details_totally-not-an-llm__EverythingLM-13b-16k/blob/main/results_2023-09-23T17-51-49.550032.json)(note\ \ that their might be results for other tasks in the repos if successive evals didn't\ \ cover the same tasks. 
You find each in the results and the \"latest\" split for\ \ each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.0025167785234899327,\n\ \ \"em_stderr\": 0.0005131152834514911,\n \"f1\": 0.0588632550335571,\n\ \ \"f1_stderr\": 0.0013761671412880158,\n \"acc\": 0.3960729978284714,\n\ \ \"acc_stderr\": 0.009637044859971106\n },\n \"harness|drop|3\": {\n\ \ \"em\": 0.0025167785234899327,\n \"em_stderr\": 0.0005131152834514911,\n\ \ \"f1\": 0.0588632550335571,\n \"f1_stderr\": 0.0013761671412880158\n\ \ },\n \"harness|gsm8k|5\": {\n \"acc\": 0.06444275966641395,\n \ \ \"acc_stderr\": 0.0067633917284882755\n },\n \"harness|winogrande|5\"\ : {\n \"acc\": 0.7277032359905288,\n \"acc_stderr\": 0.012510697991453934\n\ \ }\n}\n```" repo_url: https://huggingface.co/totally-not-an-llm/EverythingLM-13b-16k leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard point_of_contact: clementine@hf.co configs: - config_name: harness_drop_3 data_files: - split: 2023_09_23T17_51_49.550032 path: - '**/details_harness|drop|3_2023-09-23T17-51-49.550032.parquet' - split: latest path: - '**/details_harness|drop|3_2023-09-23T17-51-49.550032.parquet' - config_name: harness_gsm8k_5 data_files: - split: 2023_09_23T17_51_49.550032 path: - '**/details_harness|gsm8k|5_2023-09-23T17-51-49.550032.parquet' - split: latest path: - '**/details_harness|gsm8k|5_2023-09-23T17-51-49.550032.parquet' - config_name: harness_winogrande_5 data_files: - split: 2023_09_23T17_51_49.550032 path: - '**/details_harness|winogrande|5_2023-09-23T17-51-49.550032.parquet' - split: latest path: - '**/details_harness|winogrande|5_2023-09-23T17-51-49.550032.parquet' - config_name: results data_files: - split: 2023_09_23T17_51_49.550032 path: - results_2023-09-23T17-51-49.550032.parquet - split: latest path: - results_2023-09-23T17-51-49.550032.parquet --- # Dataset Card for Evaluation run of totally-not-an-llm/EverythingLM-13b-16k ## Dataset Description - **Homepage:** - **Repository:** 
https://huggingface.co/totally-not-an-llm/EverythingLM-13b-16k - **Paper:** - **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard - **Point of Contact:** clementine@hf.co ### Dataset Summary Dataset automatically created during the evaluation run of model [totally-not-an-llm/EverythingLM-13b-16k](https://huggingface.co/totally-not-an-llm/EverythingLM-13b-16k) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard). The dataset is composed of 3 configurations, each one corresponding to one of the evaluated tasks. The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results. An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)). To load the details from a run, you can for instance do the following: ```python from datasets import load_dataset data = load_dataset("open-llm-leaderboard/details_totally-not-an-llm__EverythingLM-13b-16k", "harness_winogrande_5", split="train") ``` ## Latest results These are the [latest results from run 2023-09-23T17:51:49.550032](https://huggingface.co/datasets/open-llm-leaderboard/details_totally-not-an-llm__EverythingLM-13b-16k/blob/main/results_2023-09-23T17-51-49.550032.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. 
You find each in the results and the "latest" split for each eval): ```python { "all": { "em": 0.0025167785234899327, "em_stderr": 0.0005131152834514911, "f1": 0.0588632550335571, "f1_stderr": 0.0013761671412880158, "acc": 0.3960729978284714, "acc_stderr": 0.009637044859971106 }, "harness|drop|3": { "em": 0.0025167785234899327, "em_stderr": 0.0005131152834514911, "f1": 0.0588632550335571, "f1_stderr": 0.0013761671412880158 }, "harness|gsm8k|5": { "acc": 0.06444275966641395, "acc_stderr": 0.0067633917284882755 }, "harness|winogrande|5": { "acc": 0.7277032359905288, "acc_stderr": 0.012510697991453934 } } ``` ### Supported Tasks and Leaderboards [More Information Needed] ### Languages [More Information Needed] ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information [More Information Needed] ### Contributions [More Information Needed]