Columns: `datasetId` (string, length 2 to 117), `card` (string, length 19 to 1.01M)
vhtran/de-en-official
--- license: cc-by-4.0 ---
ynklab/XCodeSearchNet
--- license: mit language: - en - fr - ja - zh tags: - codesearch pretty_name: XCodeSearchNet --- [Paper on arXiv](https://arxiv.org/abs/2306.15604) ## pre-training data You need to manually combine each dataset if you want to use a multilingual dataset. ```python from datasets import load_dataset xcsn_pt_python_en = load_dataset("ynklab/XCodeSearchNet", data_dir='pretraining/python/en') """ DatasetDict({ train: Dataset({ features: ['function_tokens', 'docstring'], num_rows: 453623 }) validation: Dataset({ features: ['function_tokens', 'docstring'], num_rows: 4596 }) test: Dataset({ features: ['function_tokens', 'docstring'], num_rows: 45283 }) }) """ print(xcsn_pt_python_en['train'][0]) """ { 'function_tokens': ['def', 'get_feature_ide_paths', '(', 'container_dir', ',', 'product_name', ')', ':', 'repo_name', '=', 'get_repo_name', '(', 'container_dir', ')', 'class', 'Paths', '(', 'object', ')', ':', 'feature_order_json', '=', 'os', '.', 'path', '.', 'join', '(', 'container_dir', ',', "'_lib/featuremodel/productline/feature_order.json'", ')', 'model_xml_path', '=', 'os', '.', 'path', '.', 'join', '(', 'container_dir', ',', "'_lib/featuremodel/productline/model.xml'", ')', 'config_file_path', '=', 'os', '.', 'path', '.', 'join', '(', 'container_dir', ',', "'_lib/featuremodel/productline/products/'", ',', 'repo_name', ',', 'product_name', ',', "'product.equation.config'", ')', 'equation_file_path', '=', 'os', '.', 'path', '.', 'join', '(', 'container_dir', ',', "'products'", ',', 'product_name', ',', "'product.equation'", ')', 'product_spec_path', '=', 'os', '.', 'path', '.', 'join', '(', 'container_dir', ',', "'_lib/featuremodel/productline/products/'", ',', 'repo_name', ',', "'product_spec.json'", ')', 'return', 'Paths'], 'docstring': 'Takes the container_dir and the product name and returns all relevant paths from the\n feature_order_json to the config_file_path.\n :param container_dir: the full path of the container dir\n :param product_name: the name of the 
product\n :return: object with divert path attributes' } """ ``` ## fine-tuning data ```python from datasets import load_dataset xcsn_ft_python_en = load_dataset("ynklab/XCodeSearchNet", data_dir='finetuning/python/en') """ DatasetDict({ train: Dataset({ features: ['text'], num_rows: 1648684 }) validation: Dataset({ features: ['text'], num_rows: 92426 }) }) """ print(xcsn_ft_python_en['train'][0]) """ { 'text': '1<CODESPLIT><CODESPLIT><CODESPLIT>Logs the definition of the object that was just auto - decorated inside the ipython notebook .<CODESPLIT>def _logdef ( self , n , o , otype ) : import re try : #The latest input cell will be the one that this got executed #from. TODO: actually, if acorn got imported after the fact, then #the import would have caused all the undecorated functions to be #decorated as soon as acorn imported. I suppose we just won\'t have #any code for that case. if otype == "classes" : cellno = max ( [ int ( k [ 2 : ] ) for k in self . shell . user_ns . keys ( ) if re . match ( "_i\\d+" , k ) ] ) elif otype == "functions" : cellno = int ( o . __code__ . co_filename . strip ( "<>" ) . split ( \'-\' ) [ 2 ] ) except : #This must not have been an ipython notebook declaration, so we #don\'t store the code. cellno = None pass code = "" if cellno is not None : cellstr = "_i{0:d}" . format ( cellno ) if cellstr in self . shell . user_ns : cellcode = self . shell . user_ns [ cellstr ] import ast astm = ast . parse ( cellcode ) ab = astm . body parts = { ab [ i ] . name : ( ab [ i ] . lineno , None if i + 1 >= len ( ab ) else ab [ i + 1 ] . lineno ) for i , d in enumerate ( ab ) } if n in parts : celllines = cellcode . split ( \'\\n\' ) start , end = parts [ n ] if end is not None : code = celllines [ start - 1 : end - 1 ] else : code = celllines [ start - 1 : ] #Now, we actually create the entry. Since the execution for function #definitions is almost instantaneous, we just log the pre and post #events at the same time. 
from time import time from acorn . logging . database import record entry = { "m" : "def" , "a" : None , "s" : time ( ) , "r" : None , "c" : code , } from acorn import msg record ( "__main__.{}" . format ( n ) , entry , diff = True ) msg . info ( entry , 1 )' } """ ```
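The fine-tuning `text` field above packs several sub-fields into one string, joined by the literal `<CODESPLIT>` separator. A minimal parsing sketch follows; the field layout (a relevance label, two unused fields, the docstring, the tokenized code) is inferred from the example above and should be treated as an assumption, not documented behavior:

```python
def parse_finetuning_text(text: str) -> dict:
    """Split one fine-tuning example into its component fields.

    Assumed layout (inferred from the example above, not documented):
    label <CODESPLIT> unused <CODESPLIT> unused <CODESPLIT> docstring <CODESPLIT> code
    """
    # maxsplit=4 keeps any stray separators inside the code field intact
    label, _, _, docstring, code = text.split("<CODESPLIT>", 4)
    return {
        "label": int(label),  # 1 presumably marks a matching docstring/code pair
        "docstring": docstring,
        "code": code,
    }


example = (
    "1<CODESPLIT><CODESPLIT><CODESPLIT>"
    "Adds two numbers .<CODESPLIT>def add ( a , b ) : return a + b"
)
parsed = parse_finetuning_text(example)
print(parsed["label"], parsed["docstring"])
```

This keeps the separator handling in one place, so downstream code only sees a plain dict per example.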
Gabriel/cnn_daily_swe
--- language: - sv license: - mit size_categories: - 100K<n<1M source_datasets: - https://github.com/huggingface/datasets/tree/master/datasets/cnn_dailymail task_categories: - summarization - text2text-generation task_ids: [] tags: - conditional-text-generation --- # Dataset Card for Swedish CNN Dailymail Dataset The Swedish CNN/DailyMail dataset is a machine-translated version of the English original (machine translation only, with no human post-editing), created to improve downstream fine-tuning on Swedish summarization tasks. ## Dataset Summary Read the full details in the original English version: https://huggingface.co/datasets/cnn_dailymail ### Data Fields - `id`: a string containing the hexadecimal-formatted SHA1 hash of the URL the story was retrieved from - `article`: a string containing the body of the news article - `highlights`: a string containing the highlight of the article as written by the article author ### Data Splits The Swedish CNN/DailyMail dataset follows the same splits as the original English version and has 3 splits: _train_, _validation_, and _test_. | Dataset Split | Number of Instances in Split | | ------------- | ------------------------------------------- | | Train | 287,113 | | Validation | 13,368 | | Test | 11,490 |
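The `id` field described above is the SHA1 hash of the story URL in hexadecimal form. A minimal sketch of how such an id could be derived; the exact URL normalization used upstream is unknown, so treat this as illustrative only:

```python
import hashlib


def story_id(url: str) -> str:
    """Return the hex-encoded SHA1 digest of a story URL (assumed id recipe)."""
    return hashlib.sha1(url.encode("utf-8")).hexdigest()


# Hypothetical URL, for illustration only
sid = story_id("https://www.cnn.com/example-story")
print(sid)  # 40 lowercase hex characters
```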
xzuyn/manythings-translations-alpaca
--- task_categories: - translation - text-generation language: - en size_categories: - 1M<n<10M --- [Original Dataset](http://www.manythings.org/anki/) 3,164,972 translations from English to 84 other languages. I've duplicated it to be *to* and *from* English, so it's now 6,329,944 translations.
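The to-and-from-English duplication described above can be sketched as follows. The pair structure and field names here are hypothetical (the actual dataset stores Alpaca-style instruction records); the point is only that each English/other-language pair yields two directed examples:

```python
def duplicate_both_directions(pairs):
    """Turn each (english, other, lang) tuple into two directed translation examples."""
    out = []
    for en, other, lang in pairs:
        # English -> other language
        out.append({"src": "en", "tgt": lang, "input": en, "output": other})
        # Other language -> English
        out.append({"src": lang, "tgt": "en", "input": other, "output": en})
    return out


pairs = [("Hello.", "Bonjour.", "fr"), ("Thanks.", "Danke.", "de")]
examples = duplicate_both_directions(pairs)
print(len(examples))  # 2 pairs -> 4 directed examples
```

This is how 3,164,972 pairs become 6,329,944 translations: every pair is counted once per direction.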
autoevaluate/autoeval-eval-futin__feed-top_vi-71f14a-2175469964
--- type: predictions tags: - autotrain - evaluation datasets: - futin/feed eval_info: task: text_zero_shot_classification model: facebook/opt-6.7b metrics: [] dataset_name: futin/feed dataset_config: top_vi dataset_split: test col_mapping: text: text classes: classes target: target --- # Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Zero-Shot Text Classification * Model: facebook/opt-6.7b * Dataset: futin/feed * Config: top_vi * Split: test To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@futin](https://huggingface.co/futin) for evaluating this model.
HydraLM/partitioned_v3_standardized_012
--- dataset_info: features: - name: message dtype: string - name: message_type dtype: string - name: message_id dtype: int64 - name: conversation_id dtype: int64 - name: dataset_id dtype: string - name: unique_id dtype: string splits: - name: train num_bytes: 19913272.771467183 num_examples: 37033 download_size: 16406844 dataset_size: 19913272.771467183 configs: - config_name: default data_files: - split: train path: data/train-* --- # Dataset Card for "partitioned_v3_standardized_012" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
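Given the flat per-message schema above (`message`, `message_type`, `message_id`, `conversation_id`, ...), conversations could be reassembled by grouping rows on `conversation_id` and ordering by `message_id`. A minimal sketch; the helper and the sample rows are hypothetical, not part of the dataset:

```python
from collections import defaultdict


def group_conversations(rows):
    """Group flat message rows into per-conversation lists, ordered by message_id."""
    convs = defaultdict(list)
    for row in rows:
        convs[row["conversation_id"]].append(row)
    for msgs in convs.values():
        msgs.sort(key=lambda r: r["message_id"])
    return dict(convs)


# Hypothetical rows, deliberately out of order
rows = [
    {"conversation_id": 7, "message_id": 1, "message_type": "response", "message": "Hi!"},
    {"conversation_id": 7, "message_id": 0, "message_type": "instruction", "message": "Say hi."},
]
convs = group_conversations(rows)
print([m["message_type"] for m in convs[7]])  # ['instruction', 'response']
```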
hf-doc-build/doc-build-dev
--- license: mit tags: - documentation pretty_name: HF Documentation (PRs) --- This dataset contains the docs built from all the PRs that update any of the docs at https://huggingface.co/docs. It is automatically updated by this [GitHub Action](https://github.com/huggingface/doc-builder/blob/main/.github/workflows/build_pr_documentation.yml) from the [doc-builder](https://github.com/huggingface/doc-builder) repo.
infiagent/DABench
--- license: apache-2.0 tags: - code ---
NeuralShell/Gore-Blood-Dataset-v1.0
--- license: mit task_categories: - image-to-image - image-classification - image-segmentation language: - en tags: - art - blood - death - not-for-all-audiences pretty_name: gore-blood size_categories: - n<1K --- ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63dc683562dc193e6d45ceb3/zwEIeZSMQ-x9cUH93P23m.png) # Gore Blood Dataset (Version 1.0) ## Overview The Gore Blood Dataset (Version 1.0) is a collection of images curated by NeuralShell, designed specifically for training AI models, particularly stable diffusion models. These images are intended to aid in the development and enhancement of machine learning models, leveraging advances in computer vision and AI. ## Dataset Information - **Dataset Name**: Gore-Blood-Dataset-v1.0 - **Creator**: NeuralShell - **Base Model Version**: sd v2.1 - **AI Refiners Version**: sd v1.5 ## Purpose This dataset serves as a resource to train AI models, particularly stable diffusion models within the realm of computer vision. It contains blood-related images, curated and optimized using the base model version sd v2.1 and AI refiners version sd v1.5. ## Contents The dataset comprises a diverse collection of gore and blood images, chosen and preprocessed to facilitate robust model training. It is a resource for researchers and developers aiming to advance the capabilities of AI in understanding and interpreting blood-related visual information. ## Usage This dataset can be utilized for various purposes within the field of computer vision and machine learning, including but not limited to: - Training stable diffusion models - Experimentation and research in AI development - Benchmarking and evaluation of new algorithms and models ## Acknowledgments We would like to express our gratitude to the contributors and researchers involved in the creation and curation of this dataset. 
Their efforts have enabled the availability of this resource for the wider AI and machine learning community. ## Citation If you use this dataset in your research or work, kindly cite it using the following format: ``` @dataset{Gore-Blood-Dataset-v1.0, author = {NeuralShell}, title = {Gore Blood Dataset}, year = {2023}, publisher = {Hugging Face}, version = {1.0}, url = {https://huggingface.co/NeuralShell/Gore-Blood-Dataset-v1.0} } ``` ## License This dataset is provided under the specified license terms by NeuralShell. Please refer to the LICENSE file accompanying the dataset for detailed information on permitted usage and redistribution.
open-llm-leaderboard/details_RatanRohith__NeuralPizza-7B-V0.3
--- pretty_name: Evaluation run of RatanRohith/NeuralPizza-7B-V0.3 dataset_summary: "Dataset automatically created during the evaluation run of model\ \ [RatanRohith/NeuralPizza-7B-V0.3](https://huggingface.co/RatanRohith/NeuralPizza-7B-V0.3)\ \ on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\ \nThe dataset is composed of 63 configurations, each one corresponding to one of the\ \ evaluated tasks.\n\nThe dataset has been created from 1 run(s). Each run can be\ \ found as a specific split in each configuration, the split being named using the\ \ timestamp of the run. The \"train\" split is always pointing to the latest results.\n\ \nAn additional configuration \"results\" stores all the aggregated results of the\ \ run (and is used to compute and display the aggregated metrics on the [Open LLM\ \ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\ \nTo load the details from a run, you can for instance do the following:\n```python\n\ from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_RatanRohith__NeuralPizza-7B-V0.3\"\ ,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\ These are the [latest results from run 2024-02-01T17:49:35.277472](https://huggingface.co/datasets/open-llm-leaderboard/details_RatanRohith__NeuralPizza-7B-V0.3/blob/main/results_2024-02-01T17-49-35.277472.json) (note\ \ that there might be results for other tasks in the repos if successive evals didn't\ \ cover the same tasks. 
You find each in the results and the \"latest\" split for\ \ each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.6463893662332927,\n\ \ \"acc_stderr\": 0.03224452934744884,\n \"acc_norm\": 0.6479975016510882,\n\ \ \"acc_norm_stderr\": 0.032891778674840784,\n \"mc1\": 0.5140758873929009,\n\ \ \"mc1_stderr\": 0.017496563717042776,\n \"mc2\": 0.6793456051279607,\n\ \ \"mc2_stderr\": 0.015369634410362739\n },\n \"harness|arc:challenge|25\"\ : {\n \"acc\": 0.6783276450511946,\n \"acc_stderr\": 0.013650488084494162,\n\ \ \"acc_norm\": 0.7107508532423208,\n \"acc_norm_stderr\": 0.013250012579393441\n\ \ },\n \"harness|hellaswag|10\": {\n \"acc\": 0.710017924716192,\n\ \ \"acc_stderr\": 0.004528264116475881,\n \"acc_norm\": 0.8738299143596893,\n\ \ \"acc_norm_stderr\": 0.0033136235601649287\n },\n \"harness|hendrycksTest-abstract_algebra|5\"\ : {\n \"acc\": 0.34,\n \"acc_stderr\": 0.047609522856952365,\n \ \ \"acc_norm\": 0.34,\n \"acc_norm_stderr\": 0.047609522856952365\n \ \ },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.6370370370370371,\n\ \ \"acc_stderr\": 0.041539484047423976,\n \"acc_norm\": 0.6370370370370371,\n\ \ \"acc_norm_stderr\": 0.041539484047423976\n },\n \"harness|hendrycksTest-astronomy|5\"\ : {\n \"acc\": 0.6842105263157895,\n \"acc_stderr\": 0.0378272898086547,\n\ \ \"acc_norm\": 0.6842105263157895,\n \"acc_norm_stderr\": 0.0378272898086547\n\ \ },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.61,\n\ \ \"acc_stderr\": 0.04902071300001975,\n \"acc_norm\": 0.61,\n \ \ \"acc_norm_stderr\": 0.04902071300001975\n },\n \"harness|hendrycksTest-clinical_knowledge|5\"\ : {\n \"acc\": 0.7132075471698113,\n \"acc_stderr\": 0.027834912527544067,\n\ \ \"acc_norm\": 0.7132075471698113,\n \"acc_norm_stderr\": 0.027834912527544067\n\ \ },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.7569444444444444,\n\ \ \"acc_stderr\": 0.0358687928008034,\n \"acc_norm\": 0.7569444444444444,\n\ \ \"acc_norm_stderr\": 0.0358687928008034\n 
},\n \"harness|hendrycksTest-college_chemistry|5\"\ : {\n \"acc\": 0.47,\n \"acc_stderr\": 0.050161355804659205,\n \ \ \"acc_norm\": 0.47,\n \"acc_norm_stderr\": 0.050161355804659205\n \ \ },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"\ acc\": 0.54,\n \"acc_stderr\": 0.05009082659620333,\n \"acc_norm\"\ : 0.54,\n \"acc_norm_stderr\": 0.05009082659620333\n },\n \"harness|hendrycksTest-college_mathematics|5\"\ : {\n \"acc\": 0.29,\n \"acc_stderr\": 0.04560480215720683,\n \ \ \"acc_norm\": 0.29,\n \"acc_norm_stderr\": 0.04560480215720683\n \ \ },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.6763005780346821,\n\ \ \"acc_stderr\": 0.0356760379963917,\n \"acc_norm\": 0.6763005780346821,\n\ \ \"acc_norm_stderr\": 0.0356760379963917\n },\n \"harness|hendrycksTest-college_physics|5\"\ : {\n \"acc\": 0.45098039215686275,\n \"acc_stderr\": 0.049512182523962625,\n\ \ \"acc_norm\": 0.45098039215686275,\n \"acc_norm_stderr\": 0.049512182523962625\n\ \ },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\":\ \ 0.75,\n \"acc_stderr\": 0.04351941398892446,\n \"acc_norm\": 0.75,\n\ \ \"acc_norm_stderr\": 0.04351941398892446\n },\n \"harness|hendrycksTest-conceptual_physics|5\"\ : {\n \"acc\": 0.5872340425531914,\n \"acc_stderr\": 0.03218471141400351,\n\ \ \"acc_norm\": 0.5872340425531914,\n \"acc_norm_stderr\": 0.03218471141400351\n\ \ },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.4824561403508772,\n\ \ \"acc_stderr\": 0.04700708033551038,\n \"acc_norm\": 0.4824561403508772,\n\ \ \"acc_norm_stderr\": 0.04700708033551038\n },\n \"harness|hendrycksTest-electrical_engineering|5\"\ : {\n \"acc\": 0.5241379310344828,\n \"acc_stderr\": 0.04161808503501531,\n\ \ \"acc_norm\": 0.5241379310344828,\n \"acc_norm_stderr\": 0.04161808503501531\n\ \ },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\"\ : 0.38095238095238093,\n \"acc_stderr\": 0.025010749116137595,\n \"\ acc_norm\": 0.38095238095238093,\n 
\"acc_norm_stderr\": 0.025010749116137595\n\ \ },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.4444444444444444,\n\ \ \"acc_stderr\": 0.04444444444444449,\n \"acc_norm\": 0.4444444444444444,\n\ \ \"acc_norm_stderr\": 0.04444444444444449\n },\n \"harness|hendrycksTest-global_facts|5\"\ : {\n \"acc\": 0.37,\n \"acc_stderr\": 0.04852365870939099,\n \ \ \"acc_norm\": 0.37,\n \"acc_norm_stderr\": 0.04852365870939099\n \ \ },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.7806451612903226,\n\ \ \"acc_stderr\": 0.023540799358723295,\n \"acc_norm\": 0.7806451612903226,\n\ \ \"acc_norm_stderr\": 0.023540799358723295\n },\n \"harness|hendrycksTest-high_school_chemistry|5\"\ : {\n \"acc\": 0.5073891625615764,\n \"acc_stderr\": 0.0351760354036101,\n\ \ \"acc_norm\": 0.5073891625615764,\n \"acc_norm_stderr\": 0.0351760354036101\n\ \ },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \ \ \"acc\": 0.69,\n \"acc_stderr\": 0.04648231987117316,\n \"acc_norm\"\ : 0.69,\n \"acc_norm_stderr\": 0.04648231987117316\n },\n \"harness|hendrycksTest-high_school_european_history|5\"\ : {\n \"acc\": 0.7696969696969697,\n \"acc_stderr\": 0.0328766675860349,\n\ \ \"acc_norm\": 0.7696969696969697,\n \"acc_norm_stderr\": 0.0328766675860349\n\ \ },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\"\ : 0.7828282828282829,\n \"acc_stderr\": 0.02937661648494563,\n \"\ acc_norm\": 0.7828282828282829,\n \"acc_norm_stderr\": 0.02937661648494563\n\ \ },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n\ \ \"acc\": 0.8963730569948186,\n \"acc_stderr\": 0.021995311963644237,\n\ \ \"acc_norm\": 0.8963730569948186,\n \"acc_norm_stderr\": 0.021995311963644237\n\ \ },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \ \ \"acc\": 0.6666666666666666,\n \"acc_stderr\": 0.023901157979402534,\n\ \ \"acc_norm\": 0.6666666666666666,\n \"acc_norm_stderr\": 0.023901157979402534\n\ \ },\n 
\"harness|hendrycksTest-high_school_mathematics|5\": {\n \"\ acc\": 0.3333333333333333,\n \"acc_stderr\": 0.028742040903948485,\n \ \ \"acc_norm\": 0.3333333333333333,\n \"acc_norm_stderr\": 0.028742040903948485\n\ \ },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \ \ \"acc\": 0.7310924369747899,\n \"acc_stderr\": 0.028801392193631273,\n\ \ \"acc_norm\": 0.7310924369747899,\n \"acc_norm_stderr\": 0.028801392193631273\n\ \ },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\"\ : 0.3576158940397351,\n \"acc_stderr\": 0.03913453431177258,\n \"\ acc_norm\": 0.3576158940397351,\n \"acc_norm_stderr\": 0.03913453431177258\n\ \ },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\"\ : 0.8422018348623853,\n \"acc_stderr\": 0.01563002297009244,\n \"\ acc_norm\": 0.8422018348623853,\n \"acc_norm_stderr\": 0.01563002297009244\n\ \ },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\"\ : 0.5416666666666666,\n \"acc_stderr\": 0.03398110890294636,\n \"\ acc_norm\": 0.5416666666666666,\n \"acc_norm_stderr\": 0.03398110890294636\n\ \ },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\"\ : 0.8235294117647058,\n \"acc_stderr\": 0.026756401538078966,\n \"\ acc_norm\": 0.8235294117647058,\n \"acc_norm_stderr\": 0.026756401538078966\n\ \ },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"\ acc\": 0.810126582278481,\n \"acc_stderr\": 0.02553010046023349,\n \ \ \"acc_norm\": 0.810126582278481,\n \"acc_norm_stderr\": 0.02553010046023349\n\ \ },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.6905829596412556,\n\ \ \"acc_stderr\": 0.03102441174057221,\n \"acc_norm\": 0.6905829596412556,\n\ \ \"acc_norm_stderr\": 0.03102441174057221\n },\n \"harness|hendrycksTest-human_sexuality|5\"\ : {\n \"acc\": 0.7633587786259542,\n \"acc_stderr\": 0.03727673575596913,\n\ \ \"acc_norm\": 0.7633587786259542,\n \"acc_norm_stderr\": 0.03727673575596913\n\ \ },\n \"harness|hendrycksTest-international_law|5\": 
{\n \"acc\":\ \ 0.743801652892562,\n \"acc_stderr\": 0.03984979653302872,\n \"acc_norm\"\ : 0.743801652892562,\n \"acc_norm_stderr\": 0.03984979653302872\n },\n\ \ \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.7870370370370371,\n\ \ \"acc_stderr\": 0.0395783547198098,\n \"acc_norm\": 0.7870370370370371,\n\ \ \"acc_norm_stderr\": 0.0395783547198098\n },\n \"harness|hendrycksTest-logical_fallacies|5\"\ : {\n \"acc\": 0.7668711656441718,\n \"acc_stderr\": 0.0332201579577674,\n\ \ \"acc_norm\": 0.7668711656441718,\n \"acc_norm_stderr\": 0.0332201579577674\n\ \ },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.4375,\n\ \ \"acc_stderr\": 0.04708567521880525,\n \"acc_norm\": 0.4375,\n \ \ \"acc_norm_stderr\": 0.04708567521880525\n },\n \"harness|hendrycksTest-management|5\"\ : {\n \"acc\": 0.7961165048543689,\n \"acc_stderr\": 0.03989139859531771,\n\ \ \"acc_norm\": 0.7961165048543689,\n \"acc_norm_stderr\": 0.03989139859531771\n\ \ },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.8589743589743589,\n\ \ \"acc_stderr\": 0.02280138253459753,\n \"acc_norm\": 0.8589743589743589,\n\ \ \"acc_norm_stderr\": 0.02280138253459753\n },\n \"harness|hendrycksTest-medical_genetics|5\"\ : {\n \"acc\": 0.71,\n \"acc_stderr\": 0.045604802157206845,\n \ \ \"acc_norm\": 0.71,\n \"acc_norm_stderr\": 0.045604802157206845\n \ \ },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.8237547892720306,\n\ \ \"acc_stderr\": 0.013625556907993457,\n \"acc_norm\": 0.8237547892720306,\n\ \ \"acc_norm_stderr\": 0.013625556907993457\n },\n \"harness|hendrycksTest-moral_disputes|5\"\ : {\n \"acc\": 0.7283236994219653,\n \"acc_stderr\": 0.02394851290546837,\n\ \ \"acc_norm\": 0.7283236994219653,\n \"acc_norm_stderr\": 0.02394851290546837\n\ \ },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.39553072625698327,\n\ \ \"acc_stderr\": 0.016353415410075775,\n \"acc_norm\": 0.39553072625698327,\n\ \ \"acc_norm_stderr\": 0.016353415410075775\n },\n 
\"harness|hendrycksTest-nutrition|5\"\ : {\n \"acc\": 0.7189542483660131,\n \"acc_stderr\": 0.02573885479781873,\n\ \ \"acc_norm\": 0.7189542483660131,\n \"acc_norm_stderr\": 0.02573885479781873\n\ \ },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.7170418006430869,\n\ \ \"acc_stderr\": 0.02558306248998481,\n \"acc_norm\": 0.7170418006430869,\n\ \ \"acc_norm_stderr\": 0.02558306248998481\n },\n \"harness|hendrycksTest-prehistory|5\"\ : {\n \"acc\": 0.7345679012345679,\n \"acc_stderr\": 0.024569223600460842,\n\ \ \"acc_norm\": 0.7345679012345679,\n \"acc_norm_stderr\": 0.024569223600460842\n\ \ },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"\ acc\": 0.48936170212765956,\n \"acc_stderr\": 0.02982074719142248,\n \ \ \"acc_norm\": 0.48936170212765956,\n \"acc_norm_stderr\": 0.02982074719142248\n\ \ },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.46479791395045633,\n\ \ \"acc_stderr\": 0.012738547371303957,\n \"acc_norm\": 0.46479791395045633,\n\ \ \"acc_norm_stderr\": 0.012738547371303957\n },\n \"harness|hendrycksTest-professional_medicine|5\"\ : {\n \"acc\": 0.6801470588235294,\n \"acc_stderr\": 0.028332959514031204,\n\ \ \"acc_norm\": 0.6801470588235294,\n \"acc_norm_stderr\": 0.028332959514031204\n\ \ },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"\ acc\": 0.6552287581699346,\n \"acc_stderr\": 0.01922832201869664,\n \ \ \"acc_norm\": 0.6552287581699346,\n \"acc_norm_stderr\": 0.01922832201869664\n\ \ },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.6636363636363637,\n\ \ \"acc_stderr\": 0.04525393596302506,\n \"acc_norm\": 0.6636363636363637,\n\ \ \"acc_norm_stderr\": 0.04525393596302506\n },\n \"harness|hendrycksTest-security_studies|5\"\ : {\n \"acc\": 0.7306122448979592,\n \"acc_stderr\": 0.02840125202902294,\n\ \ \"acc_norm\": 0.7306122448979592,\n \"acc_norm_stderr\": 0.02840125202902294\n\ \ },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.8656716417910447,\n\ \ 
\"acc_stderr\": 0.02411267824090083,\n \"acc_norm\": 0.8656716417910447,\n\ \ \"acc_norm_stderr\": 0.02411267824090083\n },\n \"harness|hendrycksTest-us_foreign_policy|5\"\ : {\n \"acc\": 0.83,\n \"acc_stderr\": 0.0377525168068637,\n \ \ \"acc_norm\": 0.83,\n \"acc_norm_stderr\": 0.0377525168068637\n },\n\ \ \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.5180722891566265,\n\ \ \"acc_stderr\": 0.03889951252827216,\n \"acc_norm\": 0.5180722891566265,\n\ \ \"acc_norm_stderr\": 0.03889951252827216\n },\n \"harness|hendrycksTest-world_religions|5\"\ : {\n \"acc\": 0.8362573099415205,\n \"acc_stderr\": 0.028380919596145866,\n\ \ \"acc_norm\": 0.8362573099415205,\n \"acc_norm_stderr\": 0.028380919596145866\n\ \ },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.5140758873929009,\n\ \ \"mc1_stderr\": 0.017496563717042776,\n \"mc2\": 0.6793456051279607,\n\ \ \"mc2_stderr\": 0.015369634410362739\n },\n \"harness|winogrande|5\"\ : {\n \"acc\": 0.8050513022888713,\n \"acc_stderr\": 0.011134099415938273\n\ \ },\n \"harness|gsm8k|5\": {\n \"acc\": 0.5890826383623957,\n \ \ \"acc_stderr\": 0.01355213290142322\n }\n}\n```" repo_url: https://huggingface.co/RatanRohith/NeuralPizza-7B-V0.3 leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard point_of_contact: clementine@hf.co configs: - config_name: harness_arc_challenge_25 data_files: - split: 2024_02_01T17_49_35.277472 path: - '**/details_harness|arc:challenge|25_2024-02-01T17-49-35.277472.parquet' - split: latest path: - '**/details_harness|arc:challenge|25_2024-02-01T17-49-35.277472.parquet' - config_name: harness_gsm8k_5 data_files: - split: 2024_02_01T17_49_35.277472 path: - '**/details_harness|gsm8k|5_2024-02-01T17-49-35.277472.parquet' - split: latest path: - '**/details_harness|gsm8k|5_2024-02-01T17-49-35.277472.parquet' - config_name: harness_hellaswag_10 data_files: - split: 2024_02_01T17_49_35.277472 path: - '**/details_harness|hellaswag|10_2024-02-01T17-49-35.277472.parquet' - split: 
latest path: - '**/details_harness|hellaswag|10_2024-02-01T17-49-35.277472.parquet' - config_name: harness_hendrycksTest_5 data_files: - split: 2024_02_01T17_49_35.277472 path: - '**/details_harness|hendrycksTest-abstract_algebra|5_2024-02-01T17-49-35.277472.parquet' - '**/details_harness|hendrycksTest-anatomy|5_2024-02-01T17-49-35.277472.parquet' - '**/details_harness|hendrycksTest-astronomy|5_2024-02-01T17-49-35.277472.parquet' - '**/details_harness|hendrycksTest-business_ethics|5_2024-02-01T17-49-35.277472.parquet' - '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-02-01T17-49-35.277472.parquet' - '**/details_harness|hendrycksTest-college_biology|5_2024-02-01T17-49-35.277472.parquet' - '**/details_harness|hendrycksTest-college_chemistry|5_2024-02-01T17-49-35.277472.parquet' - '**/details_harness|hendrycksTest-college_computer_science|5_2024-02-01T17-49-35.277472.parquet' - '**/details_harness|hendrycksTest-college_mathematics|5_2024-02-01T17-49-35.277472.parquet' - '**/details_harness|hendrycksTest-college_medicine|5_2024-02-01T17-49-35.277472.parquet' - '**/details_harness|hendrycksTest-college_physics|5_2024-02-01T17-49-35.277472.parquet' - '**/details_harness|hendrycksTest-computer_security|5_2024-02-01T17-49-35.277472.parquet' - '**/details_harness|hendrycksTest-conceptual_physics|5_2024-02-01T17-49-35.277472.parquet' - '**/details_harness|hendrycksTest-econometrics|5_2024-02-01T17-49-35.277472.parquet' - '**/details_harness|hendrycksTest-electrical_engineering|5_2024-02-01T17-49-35.277472.parquet' - '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-02-01T17-49-35.277472.parquet' - '**/details_harness|hendrycksTest-formal_logic|5_2024-02-01T17-49-35.277472.parquet' - '**/details_harness|hendrycksTest-global_facts|5_2024-02-01T17-49-35.277472.parquet' - '**/details_harness|hendrycksTest-high_school_biology|5_2024-02-01T17-49-35.277472.parquet' - 
'**/details_harness|hendrycksTest-high_school_chemistry|5_2024-02-01T17-49-35.277472.parquet' - '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-02-01T17-49-35.277472.parquet' - '**/details_harness|hendrycksTest-high_school_european_history|5_2024-02-01T17-49-35.277472.parquet' - '**/details_harness|hendrycksTest-high_school_geography|5_2024-02-01T17-49-35.277472.parquet' - '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-02-01T17-49-35.277472.parquet' - '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-02-01T17-49-35.277472.parquet' - '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-02-01T17-49-35.277472.parquet' - '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-02-01T17-49-35.277472.parquet' - '**/details_harness|hendrycksTest-high_school_physics|5_2024-02-01T17-49-35.277472.parquet' - '**/details_harness|hendrycksTest-high_school_psychology|5_2024-02-01T17-49-35.277472.parquet' - '**/details_harness|hendrycksTest-high_school_statistics|5_2024-02-01T17-49-35.277472.parquet' - '**/details_harness|hendrycksTest-high_school_us_history|5_2024-02-01T17-49-35.277472.parquet' - '**/details_harness|hendrycksTest-high_school_world_history|5_2024-02-01T17-49-35.277472.parquet' - '**/details_harness|hendrycksTest-human_aging|5_2024-02-01T17-49-35.277472.parquet' - '**/details_harness|hendrycksTest-human_sexuality|5_2024-02-01T17-49-35.277472.parquet' - '**/details_harness|hendrycksTest-international_law|5_2024-02-01T17-49-35.277472.parquet' - '**/details_harness|hendrycksTest-jurisprudence|5_2024-02-01T17-49-35.277472.parquet' - '**/details_harness|hendrycksTest-logical_fallacies|5_2024-02-01T17-49-35.277472.parquet' - '**/details_harness|hendrycksTest-machine_learning|5_2024-02-01T17-49-35.277472.parquet' - '**/details_harness|hendrycksTest-management|5_2024-02-01T17-49-35.277472.parquet' - '**/details_harness|hendrycksTest-marketing|5_2024-02-01T17-49-35.277472.parquet' 
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-02-01T17-49-35.277472.parquet' - '**/details_harness|hendrycksTest-miscellaneous|5_2024-02-01T17-49-35.277472.parquet' - '**/details_harness|hendrycksTest-moral_disputes|5_2024-02-01T17-49-35.277472.parquet' - '**/details_harness|hendrycksTest-moral_scenarios|5_2024-02-01T17-49-35.277472.parquet' - '**/details_harness|hendrycksTest-nutrition|5_2024-02-01T17-49-35.277472.parquet' - '**/details_harness|hendrycksTest-philosophy|5_2024-02-01T17-49-35.277472.parquet' - '**/details_harness|hendrycksTest-prehistory|5_2024-02-01T17-49-35.277472.parquet' - '**/details_harness|hendrycksTest-professional_accounting|5_2024-02-01T17-49-35.277472.parquet' - '**/details_harness|hendrycksTest-professional_law|5_2024-02-01T17-49-35.277472.parquet' - '**/details_harness|hendrycksTest-professional_medicine|5_2024-02-01T17-49-35.277472.parquet' - '**/details_harness|hendrycksTest-professional_psychology|5_2024-02-01T17-49-35.277472.parquet' - '**/details_harness|hendrycksTest-public_relations|5_2024-02-01T17-49-35.277472.parquet' - '**/details_harness|hendrycksTest-security_studies|5_2024-02-01T17-49-35.277472.parquet' - '**/details_harness|hendrycksTest-sociology|5_2024-02-01T17-49-35.277472.parquet' - '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-02-01T17-49-35.277472.parquet' - '**/details_harness|hendrycksTest-virology|5_2024-02-01T17-49-35.277472.parquet' - '**/details_harness|hendrycksTest-world_religions|5_2024-02-01T17-49-35.277472.parquet' - split: latest path: - '**/details_harness|hendrycksTest-abstract_algebra|5_2024-02-01T17-49-35.277472.parquet' - '**/details_harness|hendrycksTest-anatomy|5_2024-02-01T17-49-35.277472.parquet' - '**/details_harness|hendrycksTest-astronomy|5_2024-02-01T17-49-35.277472.parquet' - '**/details_harness|hendrycksTest-business_ethics|5_2024-02-01T17-49-35.277472.parquet' - '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-02-01T17-49-35.277472.parquet' - 
'**/details_harness|hendrycksTest-college_biology|5_2024-02-01T17-49-35.277472.parquet' - '**/details_harness|hendrycksTest-college_chemistry|5_2024-02-01T17-49-35.277472.parquet' - '**/details_harness|hendrycksTest-college_computer_science|5_2024-02-01T17-49-35.277472.parquet' - '**/details_harness|hendrycksTest-college_mathematics|5_2024-02-01T17-49-35.277472.parquet' - '**/details_harness|hendrycksTest-college_medicine|5_2024-02-01T17-49-35.277472.parquet' - '**/details_harness|hendrycksTest-college_physics|5_2024-02-01T17-49-35.277472.parquet' - '**/details_harness|hendrycksTest-computer_security|5_2024-02-01T17-49-35.277472.parquet' - '**/details_harness|hendrycksTest-conceptual_physics|5_2024-02-01T17-49-35.277472.parquet' - '**/details_harness|hendrycksTest-econometrics|5_2024-02-01T17-49-35.277472.parquet' - '**/details_harness|hendrycksTest-electrical_engineering|5_2024-02-01T17-49-35.277472.parquet' - '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-02-01T17-49-35.277472.parquet' - '**/details_harness|hendrycksTest-formal_logic|5_2024-02-01T17-49-35.277472.parquet' - '**/details_harness|hendrycksTest-global_facts|5_2024-02-01T17-49-35.277472.parquet' - '**/details_harness|hendrycksTest-high_school_biology|5_2024-02-01T17-49-35.277472.parquet' - '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-02-01T17-49-35.277472.parquet' - '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-02-01T17-49-35.277472.parquet' - '**/details_harness|hendrycksTest-high_school_european_history|5_2024-02-01T17-49-35.277472.parquet' - '**/details_harness|hendrycksTest-high_school_geography|5_2024-02-01T17-49-35.277472.parquet' - '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-02-01T17-49-35.277472.parquet' - '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-02-01T17-49-35.277472.parquet' - '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-02-01T17-49-35.277472.parquet' - 
'**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-02-01T17-49-35.277472.parquet' - '**/details_harness|hendrycksTest-high_school_physics|5_2024-02-01T17-49-35.277472.parquet' - '**/details_harness|hendrycksTest-high_school_psychology|5_2024-02-01T17-49-35.277472.parquet' - '**/details_harness|hendrycksTest-high_school_statistics|5_2024-02-01T17-49-35.277472.parquet' - '**/details_harness|hendrycksTest-high_school_us_history|5_2024-02-01T17-49-35.277472.parquet' - '**/details_harness|hendrycksTest-high_school_world_history|5_2024-02-01T17-49-35.277472.parquet' - '**/details_harness|hendrycksTest-human_aging|5_2024-02-01T17-49-35.277472.parquet' - '**/details_harness|hendrycksTest-human_sexuality|5_2024-02-01T17-49-35.277472.parquet' - '**/details_harness|hendrycksTest-international_law|5_2024-02-01T17-49-35.277472.parquet' - '**/details_harness|hendrycksTest-jurisprudence|5_2024-02-01T17-49-35.277472.parquet' - '**/details_harness|hendrycksTest-logical_fallacies|5_2024-02-01T17-49-35.277472.parquet' - '**/details_harness|hendrycksTest-machine_learning|5_2024-02-01T17-49-35.277472.parquet' - '**/details_harness|hendrycksTest-management|5_2024-02-01T17-49-35.277472.parquet' - '**/details_harness|hendrycksTest-marketing|5_2024-02-01T17-49-35.277472.parquet' - '**/details_harness|hendrycksTest-medical_genetics|5_2024-02-01T17-49-35.277472.parquet' - '**/details_harness|hendrycksTest-miscellaneous|5_2024-02-01T17-49-35.277472.parquet' - '**/details_harness|hendrycksTest-moral_disputes|5_2024-02-01T17-49-35.277472.parquet' - '**/details_harness|hendrycksTest-moral_scenarios|5_2024-02-01T17-49-35.277472.parquet' - '**/details_harness|hendrycksTest-nutrition|5_2024-02-01T17-49-35.277472.parquet' - '**/details_harness|hendrycksTest-philosophy|5_2024-02-01T17-49-35.277472.parquet' - '**/details_harness|hendrycksTest-prehistory|5_2024-02-01T17-49-35.277472.parquet' - 
'**/details_harness|hendrycksTest-professional_accounting|5_2024-02-01T17-49-35.277472.parquet' - '**/details_harness|hendrycksTest-professional_law|5_2024-02-01T17-49-35.277472.parquet' - '**/details_harness|hendrycksTest-professional_medicine|5_2024-02-01T17-49-35.277472.parquet' - '**/details_harness|hendrycksTest-professional_psychology|5_2024-02-01T17-49-35.277472.parquet' - '**/details_harness|hendrycksTest-public_relations|5_2024-02-01T17-49-35.277472.parquet' - '**/details_harness|hendrycksTest-security_studies|5_2024-02-01T17-49-35.277472.parquet' - '**/details_harness|hendrycksTest-sociology|5_2024-02-01T17-49-35.277472.parquet' - '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-02-01T17-49-35.277472.parquet' - '**/details_harness|hendrycksTest-virology|5_2024-02-01T17-49-35.277472.parquet' - '**/details_harness|hendrycksTest-world_religions|5_2024-02-01T17-49-35.277472.parquet' - config_name: harness_hendrycksTest_abstract_algebra_5 data_files: - split: 2024_02_01T17_49_35.277472 path: - '**/details_harness|hendrycksTest-abstract_algebra|5_2024-02-01T17-49-35.277472.parquet' - split: latest path: - '**/details_harness|hendrycksTest-abstract_algebra|5_2024-02-01T17-49-35.277472.parquet' - config_name: harness_hendrycksTest_anatomy_5 data_files: - split: 2024_02_01T17_49_35.277472 path: - '**/details_harness|hendrycksTest-anatomy|5_2024-02-01T17-49-35.277472.parquet' - split: latest path: - '**/details_harness|hendrycksTest-anatomy|5_2024-02-01T17-49-35.277472.parquet' - config_name: harness_hendrycksTest_astronomy_5 data_files: - split: 2024_02_01T17_49_35.277472 path: - '**/details_harness|hendrycksTest-astronomy|5_2024-02-01T17-49-35.277472.parquet' - split: latest path: - '**/details_harness|hendrycksTest-astronomy|5_2024-02-01T17-49-35.277472.parquet' - config_name: harness_hendrycksTest_business_ethics_5 data_files: - split: 2024_02_01T17_49_35.277472 path: - 
'**/details_harness|hendrycksTest-business_ethics|5_2024-02-01T17-49-35.277472.parquet' - split: latest path: - '**/details_harness|hendrycksTest-business_ethics|5_2024-02-01T17-49-35.277472.parquet' - config_name: harness_hendrycksTest_clinical_knowledge_5 data_files: - split: 2024_02_01T17_49_35.277472 path: - '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-02-01T17-49-35.277472.parquet' - split: latest path: - '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-02-01T17-49-35.277472.parquet' - config_name: harness_hendrycksTest_college_biology_5 data_files: - split: 2024_02_01T17_49_35.277472 path: - '**/details_harness|hendrycksTest-college_biology|5_2024-02-01T17-49-35.277472.parquet' - split: latest path: - '**/details_harness|hendrycksTest-college_biology|5_2024-02-01T17-49-35.277472.parquet' - config_name: harness_hendrycksTest_college_chemistry_5 data_files: - split: 2024_02_01T17_49_35.277472 path: - '**/details_harness|hendrycksTest-college_chemistry|5_2024-02-01T17-49-35.277472.parquet' - split: latest path: - '**/details_harness|hendrycksTest-college_chemistry|5_2024-02-01T17-49-35.277472.parquet' - config_name: harness_hendrycksTest_college_computer_science_5 data_files: - split: 2024_02_01T17_49_35.277472 path: - '**/details_harness|hendrycksTest-college_computer_science|5_2024-02-01T17-49-35.277472.parquet' - split: latest path: - '**/details_harness|hendrycksTest-college_computer_science|5_2024-02-01T17-49-35.277472.parquet' - config_name: harness_hendrycksTest_college_mathematics_5 data_files: - split: 2024_02_01T17_49_35.277472 path: - '**/details_harness|hendrycksTest-college_mathematics|5_2024-02-01T17-49-35.277472.parquet' - split: latest path: - '**/details_harness|hendrycksTest-college_mathematics|5_2024-02-01T17-49-35.277472.parquet' - config_name: harness_hendrycksTest_college_medicine_5 data_files: - split: 2024_02_01T17_49_35.277472 path: - 
'**/details_harness|hendrycksTest-college_medicine|5_2024-02-01T17-49-35.277472.parquet' - split: latest path: - '**/details_harness|hendrycksTest-college_medicine|5_2024-02-01T17-49-35.277472.parquet' - config_name: harness_hendrycksTest_college_physics_5 data_files: - split: 2024_02_01T17_49_35.277472 path: - '**/details_harness|hendrycksTest-college_physics|5_2024-02-01T17-49-35.277472.parquet' - split: latest path: - '**/details_harness|hendrycksTest-college_physics|5_2024-02-01T17-49-35.277472.parquet' - config_name: harness_hendrycksTest_computer_security_5 data_files: - split: 2024_02_01T17_49_35.277472 path: - '**/details_harness|hendrycksTest-computer_security|5_2024-02-01T17-49-35.277472.parquet' - split: latest path: - '**/details_harness|hendrycksTest-computer_security|5_2024-02-01T17-49-35.277472.parquet' - config_name: harness_hendrycksTest_conceptual_physics_5 data_files: - split: 2024_02_01T17_49_35.277472 path: - '**/details_harness|hendrycksTest-conceptual_physics|5_2024-02-01T17-49-35.277472.parquet' - split: latest path: - '**/details_harness|hendrycksTest-conceptual_physics|5_2024-02-01T17-49-35.277472.parquet' - config_name: harness_hendrycksTest_econometrics_5 data_files: - split: 2024_02_01T17_49_35.277472 path: - '**/details_harness|hendrycksTest-econometrics|5_2024-02-01T17-49-35.277472.parquet' - split: latest path: - '**/details_harness|hendrycksTest-econometrics|5_2024-02-01T17-49-35.277472.parquet' - config_name: harness_hendrycksTest_electrical_engineering_5 data_files: - split: 2024_02_01T17_49_35.277472 path: - '**/details_harness|hendrycksTest-electrical_engineering|5_2024-02-01T17-49-35.277472.parquet' - split: latest path: - '**/details_harness|hendrycksTest-electrical_engineering|5_2024-02-01T17-49-35.277472.parquet' - config_name: harness_hendrycksTest_elementary_mathematics_5 data_files: - split: 2024_02_01T17_49_35.277472 path: - '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-02-01T17-49-35.277472.parquet' - 
split: latest path: - '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-02-01T17-49-35.277472.parquet' - config_name: harness_hendrycksTest_formal_logic_5 data_files: - split: 2024_02_01T17_49_35.277472 path: - '**/details_harness|hendrycksTest-formal_logic|5_2024-02-01T17-49-35.277472.parquet' - split: latest path: - '**/details_harness|hendrycksTest-formal_logic|5_2024-02-01T17-49-35.277472.parquet' - config_name: harness_hendrycksTest_global_facts_5 data_files: - split: 2024_02_01T17_49_35.277472 path: - '**/details_harness|hendrycksTest-global_facts|5_2024-02-01T17-49-35.277472.parquet' - split: latest path: - '**/details_harness|hendrycksTest-global_facts|5_2024-02-01T17-49-35.277472.parquet' - config_name: harness_hendrycksTest_high_school_biology_5 data_files: - split: 2024_02_01T17_49_35.277472 path: - '**/details_harness|hendrycksTest-high_school_biology|5_2024-02-01T17-49-35.277472.parquet' - split: latest path: - '**/details_harness|hendrycksTest-high_school_biology|5_2024-02-01T17-49-35.277472.parquet' - config_name: harness_hendrycksTest_high_school_chemistry_5 data_files: - split: 2024_02_01T17_49_35.277472 path: - '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-02-01T17-49-35.277472.parquet' - split: latest path: - '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-02-01T17-49-35.277472.parquet' - config_name: harness_hendrycksTest_high_school_computer_science_5 data_files: - split: 2024_02_01T17_49_35.277472 path: - '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-02-01T17-49-35.277472.parquet' - split: latest path: - '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-02-01T17-49-35.277472.parquet' - config_name: harness_hendrycksTest_high_school_european_history_5 data_files: - split: 2024_02_01T17_49_35.277472 path: - '**/details_harness|hendrycksTest-high_school_european_history|5_2024-02-01T17-49-35.277472.parquet' - split: latest path: - 
'**/details_harness|hendrycksTest-high_school_european_history|5_2024-02-01T17-49-35.277472.parquet' - config_name: harness_hendrycksTest_high_school_geography_5 data_files: - split: 2024_02_01T17_49_35.277472 path: - '**/details_harness|hendrycksTest-high_school_geography|5_2024-02-01T17-49-35.277472.parquet' - split: latest path: - '**/details_harness|hendrycksTest-high_school_geography|5_2024-02-01T17-49-35.277472.parquet' - config_name: harness_hendrycksTest_high_school_government_and_politics_5 data_files: - split: 2024_02_01T17_49_35.277472 path: - '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-02-01T17-49-35.277472.parquet' - split: latest path: - '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-02-01T17-49-35.277472.parquet' - config_name: harness_hendrycksTest_high_school_macroeconomics_5 data_files: - split: 2024_02_01T17_49_35.277472 path: - '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-02-01T17-49-35.277472.parquet' - split: latest path: - '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-02-01T17-49-35.277472.parquet' - config_name: harness_hendrycksTest_high_school_mathematics_5 data_files: - split: 2024_02_01T17_49_35.277472 path: - '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-02-01T17-49-35.277472.parquet' - split: latest path: - '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-02-01T17-49-35.277472.parquet' - config_name: harness_hendrycksTest_high_school_microeconomics_5 data_files: - split: 2024_02_01T17_49_35.277472 path: - '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-02-01T17-49-35.277472.parquet' - split: latest path: - '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-02-01T17-49-35.277472.parquet' - config_name: harness_hendrycksTest_high_school_physics_5 data_files: - split: 2024_02_01T17_49_35.277472 path: - 
'**/details_harness|hendrycksTest-high_school_physics|5_2024-02-01T17-49-35.277472.parquet' - split: latest path: - '**/details_harness|hendrycksTest-high_school_physics|5_2024-02-01T17-49-35.277472.parquet' - config_name: harness_hendrycksTest_high_school_psychology_5 data_files: - split: 2024_02_01T17_49_35.277472 path: - '**/details_harness|hendrycksTest-high_school_psychology|5_2024-02-01T17-49-35.277472.parquet' - split: latest path: - '**/details_harness|hendrycksTest-high_school_psychology|5_2024-02-01T17-49-35.277472.parquet' - config_name: harness_hendrycksTest_high_school_statistics_5 data_files: - split: 2024_02_01T17_49_35.277472 path: - '**/details_harness|hendrycksTest-high_school_statistics|5_2024-02-01T17-49-35.277472.parquet' - split: latest path: - '**/details_harness|hendrycksTest-high_school_statistics|5_2024-02-01T17-49-35.277472.parquet' - config_name: harness_hendrycksTest_high_school_us_history_5 data_files: - split: 2024_02_01T17_49_35.277472 path: - '**/details_harness|hendrycksTest-high_school_us_history|5_2024-02-01T17-49-35.277472.parquet' - split: latest path: - '**/details_harness|hendrycksTest-high_school_us_history|5_2024-02-01T17-49-35.277472.parquet' - config_name: harness_hendrycksTest_high_school_world_history_5 data_files: - split: 2024_02_01T17_49_35.277472 path: - '**/details_harness|hendrycksTest-high_school_world_history|5_2024-02-01T17-49-35.277472.parquet' - split: latest path: - '**/details_harness|hendrycksTest-high_school_world_history|5_2024-02-01T17-49-35.277472.parquet' - config_name: harness_hendrycksTest_human_aging_5 data_files: - split: 2024_02_01T17_49_35.277472 path: - '**/details_harness|hendrycksTest-human_aging|5_2024-02-01T17-49-35.277472.parquet' - split: latest path: - '**/details_harness|hendrycksTest-human_aging|5_2024-02-01T17-49-35.277472.parquet' - config_name: harness_hendrycksTest_human_sexuality_5 data_files: - split: 2024_02_01T17_49_35.277472 path: - 
'**/details_harness|hendrycksTest-human_sexuality|5_2024-02-01T17-49-35.277472.parquet' - split: latest path: - '**/details_harness|hendrycksTest-human_sexuality|5_2024-02-01T17-49-35.277472.parquet' - config_name: harness_hendrycksTest_international_law_5 data_files: - split: 2024_02_01T17_49_35.277472 path: - '**/details_harness|hendrycksTest-international_law|5_2024-02-01T17-49-35.277472.parquet' - split: latest path: - '**/details_harness|hendrycksTest-international_law|5_2024-02-01T17-49-35.277472.parquet' - config_name: harness_hendrycksTest_jurisprudence_5 data_files: - split: 2024_02_01T17_49_35.277472 path: - '**/details_harness|hendrycksTest-jurisprudence|5_2024-02-01T17-49-35.277472.parquet' - split: latest path: - '**/details_harness|hendrycksTest-jurisprudence|5_2024-02-01T17-49-35.277472.parquet' - config_name: harness_hendrycksTest_logical_fallacies_5 data_files: - split: 2024_02_01T17_49_35.277472 path: - '**/details_harness|hendrycksTest-logical_fallacies|5_2024-02-01T17-49-35.277472.parquet' - split: latest path: - '**/details_harness|hendrycksTest-logical_fallacies|5_2024-02-01T17-49-35.277472.parquet' - config_name: harness_hendrycksTest_machine_learning_5 data_files: - split: 2024_02_01T17_49_35.277472 path: - '**/details_harness|hendrycksTest-machine_learning|5_2024-02-01T17-49-35.277472.parquet' - split: latest path: - '**/details_harness|hendrycksTest-machine_learning|5_2024-02-01T17-49-35.277472.parquet' - config_name: harness_hendrycksTest_management_5 data_files: - split: 2024_02_01T17_49_35.277472 path: - '**/details_harness|hendrycksTest-management|5_2024-02-01T17-49-35.277472.parquet' - split: latest path: - '**/details_harness|hendrycksTest-management|5_2024-02-01T17-49-35.277472.parquet' - config_name: harness_hendrycksTest_marketing_5 data_files: - split: 2024_02_01T17_49_35.277472 path: - '**/details_harness|hendrycksTest-marketing|5_2024-02-01T17-49-35.277472.parquet' - split: latest path: - 
'**/details_harness|hendrycksTest-marketing|5_2024-02-01T17-49-35.277472.parquet' - config_name: harness_hendrycksTest_medical_genetics_5 data_files: - split: 2024_02_01T17_49_35.277472 path: - '**/details_harness|hendrycksTest-medical_genetics|5_2024-02-01T17-49-35.277472.parquet' - split: latest path: - '**/details_harness|hendrycksTest-medical_genetics|5_2024-02-01T17-49-35.277472.parquet' - config_name: harness_hendrycksTest_miscellaneous_5 data_files: - split: 2024_02_01T17_49_35.277472 path: - '**/details_harness|hendrycksTest-miscellaneous|5_2024-02-01T17-49-35.277472.parquet' - split: latest path: - '**/details_harness|hendrycksTest-miscellaneous|5_2024-02-01T17-49-35.277472.parquet' - config_name: harness_hendrycksTest_moral_disputes_5 data_files: - split: 2024_02_01T17_49_35.277472 path: - '**/details_harness|hendrycksTest-moral_disputes|5_2024-02-01T17-49-35.277472.parquet' - split: latest path: - '**/details_harness|hendrycksTest-moral_disputes|5_2024-02-01T17-49-35.277472.parquet' - config_name: harness_hendrycksTest_moral_scenarios_5 data_files: - split: 2024_02_01T17_49_35.277472 path: - '**/details_harness|hendrycksTest-moral_scenarios|5_2024-02-01T17-49-35.277472.parquet' - split: latest path: - '**/details_harness|hendrycksTest-moral_scenarios|5_2024-02-01T17-49-35.277472.parquet' - config_name: harness_hendrycksTest_nutrition_5 data_files: - split: 2024_02_01T17_49_35.277472 path: - '**/details_harness|hendrycksTest-nutrition|5_2024-02-01T17-49-35.277472.parquet' - split: latest path: - '**/details_harness|hendrycksTest-nutrition|5_2024-02-01T17-49-35.277472.parquet' - config_name: harness_hendrycksTest_philosophy_5 data_files: - split: 2024_02_01T17_49_35.277472 path: - '**/details_harness|hendrycksTest-philosophy|5_2024-02-01T17-49-35.277472.parquet' - split: latest path: - '**/details_harness|hendrycksTest-philosophy|5_2024-02-01T17-49-35.277472.parquet' - config_name: harness_hendrycksTest_prehistory_5 data_files: - split: 
2024_02_01T17_49_35.277472 path: - '**/details_harness|hendrycksTest-prehistory|5_2024-02-01T17-49-35.277472.parquet' - split: latest path: - '**/details_harness|hendrycksTest-prehistory|5_2024-02-01T17-49-35.277472.parquet' - config_name: harness_hendrycksTest_professional_accounting_5 data_files: - split: 2024_02_01T17_49_35.277472 path: - '**/details_harness|hendrycksTest-professional_accounting|5_2024-02-01T17-49-35.277472.parquet' - split: latest path: - '**/details_harness|hendrycksTest-professional_accounting|5_2024-02-01T17-49-35.277472.parquet' - config_name: harness_hendrycksTest_professional_law_5 data_files: - split: 2024_02_01T17_49_35.277472 path: - '**/details_harness|hendrycksTest-professional_law|5_2024-02-01T17-49-35.277472.parquet' - split: latest path: - '**/details_harness|hendrycksTest-professional_law|5_2024-02-01T17-49-35.277472.parquet' - config_name: harness_hendrycksTest_professional_medicine_5 data_files: - split: 2024_02_01T17_49_35.277472 path: - '**/details_harness|hendrycksTest-professional_medicine|5_2024-02-01T17-49-35.277472.parquet' - split: latest path: - '**/details_harness|hendrycksTest-professional_medicine|5_2024-02-01T17-49-35.277472.parquet' - config_name: harness_hendrycksTest_professional_psychology_5 data_files: - split: 2024_02_01T17_49_35.277472 path: - '**/details_harness|hendrycksTest-professional_psychology|5_2024-02-01T17-49-35.277472.parquet' - split: latest path: - '**/details_harness|hendrycksTest-professional_psychology|5_2024-02-01T17-49-35.277472.parquet' - config_name: harness_hendrycksTest_public_relations_5 data_files: - split: 2024_02_01T17_49_35.277472 path: - '**/details_harness|hendrycksTest-public_relations|5_2024-02-01T17-49-35.277472.parquet' - split: latest path: - '**/details_harness|hendrycksTest-public_relations|5_2024-02-01T17-49-35.277472.parquet' - config_name: harness_hendrycksTest_security_studies_5 data_files: - split: 2024_02_01T17_49_35.277472 path: - 
'**/details_harness|hendrycksTest-security_studies|5_2024-02-01T17-49-35.277472.parquet' - split: latest path: - '**/details_harness|hendrycksTest-security_studies|5_2024-02-01T17-49-35.277472.parquet' - config_name: harness_hendrycksTest_sociology_5 data_files: - split: 2024_02_01T17_49_35.277472 path: - '**/details_harness|hendrycksTest-sociology|5_2024-02-01T17-49-35.277472.parquet' - split: latest path: - '**/details_harness|hendrycksTest-sociology|5_2024-02-01T17-49-35.277472.parquet' - config_name: harness_hendrycksTest_us_foreign_policy_5 data_files: - split: 2024_02_01T17_49_35.277472 path: - '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-02-01T17-49-35.277472.parquet' - split: latest path: - '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-02-01T17-49-35.277472.parquet' - config_name: harness_hendrycksTest_virology_5 data_files: - split: 2024_02_01T17_49_35.277472 path: - '**/details_harness|hendrycksTest-virology|5_2024-02-01T17-49-35.277472.parquet' - split: latest path: - '**/details_harness|hendrycksTest-virology|5_2024-02-01T17-49-35.277472.parquet' - config_name: harness_hendrycksTest_world_religions_5 data_files: - split: 2024_02_01T17_49_35.277472 path: - '**/details_harness|hendrycksTest-world_religions|5_2024-02-01T17-49-35.277472.parquet' - split: latest path: - '**/details_harness|hendrycksTest-world_religions|5_2024-02-01T17-49-35.277472.parquet' - config_name: harness_truthfulqa_mc_0 data_files: - split: 2024_02_01T17_49_35.277472 path: - '**/details_harness|truthfulqa:mc|0_2024-02-01T17-49-35.277472.parquet' - split: latest path: - '**/details_harness|truthfulqa:mc|0_2024-02-01T17-49-35.277472.parquet' - config_name: harness_winogrande_5 data_files: - split: 2024_02_01T17_49_35.277472 path: - '**/details_harness|winogrande|5_2024-02-01T17-49-35.277472.parquet' - split: latest path: - '**/details_harness|winogrande|5_2024-02-01T17-49-35.277472.parquet' - config_name: results data_files: - split: 
2024_02_01T17_49_35.277472 path: - results_2024-02-01T17-49-35.277472.parquet - split: latest path: - results_2024-02-01T17-49-35.277472.parquet
---

# Dataset Card for Evaluation run of RatanRohith/NeuralPizza-7B-V0.3

<!-- Provide a quick summary of the dataset. -->

Dataset automatically created during the evaluation run of model [RatanRohith/NeuralPizza-7B-V0.3](https://huggingface.co/RatanRohith/NeuralPizza-7B-V0.3) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).

The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.

The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.

An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).

To load the details from a run, you can for instance do the following:

```python
from datasets import load_dataset

data = load_dataset("open-llm-leaderboard/details_RatanRohith__NeuralPizza-7B-V0.3",
                    "harness_winogrande_5",
                    split="train")
```

## Latest results

These are the [latest results from run 2024-02-01T17:49:35.277472](https://huggingface.co/datasets/open-llm-leaderboard/details_RatanRohith__NeuralPizza-7B-V0.3/blob/main/results_2024-02-01T17-49-35.277472.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks.
You can find each in the results and the "latest" split for each eval):

```python
{
    "all": {
        "acc": 0.6463893662332927,
        "acc_stderr": 0.03224452934744884,
        "acc_norm": 0.6479975016510882,
        "acc_norm_stderr": 0.032891778674840784,
        "mc1": 0.5140758873929009,
        "mc1_stderr": 0.017496563717042776,
        "mc2": 0.6793456051279607,
        "mc2_stderr": 0.015369634410362739
    },
    "harness|arc:challenge|25": {
        "acc": 0.6783276450511946,
        "acc_stderr": 0.013650488084494162,
        "acc_norm": 0.7107508532423208,
        "acc_norm_stderr": 0.013250012579393441
    },
    "harness|hellaswag|10": {
        "acc": 0.710017924716192,
        "acc_stderr": 0.004528264116475881,
        "acc_norm": 0.8738299143596893,
        "acc_norm_stderr": 0.0033136235601649287
    },
    "harness|hendrycksTest-abstract_algebra|5": {
        "acc": 0.34,
        "acc_stderr": 0.047609522856952365,
        "acc_norm": 0.34,
        "acc_norm_stderr": 0.047609522856952365
    },
    "harness|hendrycksTest-anatomy|5": {
        "acc": 0.6370370370370371,
        "acc_stderr": 0.041539484047423976,
        "acc_norm": 0.6370370370370371,
        "acc_norm_stderr": 0.041539484047423976
    },
    "harness|hendrycksTest-astronomy|5": {
        "acc": 0.6842105263157895,
        "acc_stderr": 0.0378272898086547,
        "acc_norm": 0.6842105263157895,
        "acc_norm_stderr": 0.0378272898086547
    },
    "harness|hendrycksTest-business_ethics|5": {
        "acc": 0.61,
        "acc_stderr": 0.04902071300001975,
        "acc_norm": 0.61,
        "acc_norm_stderr": 0.04902071300001975
    },
    "harness|hendrycksTest-clinical_knowledge|5": {
        "acc": 0.7132075471698113,
        "acc_stderr": 0.027834912527544067,
        "acc_norm": 0.7132075471698113,
        "acc_norm_stderr": 0.027834912527544067
    },
    "harness|hendrycksTest-college_biology|5": {
        "acc": 0.7569444444444444,
        "acc_stderr": 0.0358687928008034,
        "acc_norm": 0.7569444444444444,
        "acc_norm_stderr": 0.0358687928008034
    },
    "harness|hendrycksTest-college_chemistry|5": {
        "acc": 0.47,
        "acc_stderr": 0.050161355804659205,
        "acc_norm": 0.47,
        "acc_norm_stderr": 0.050161355804659205
    },
    "harness|hendrycksTest-college_computer_science|5": {
        "acc": 0.54,
        "acc_stderr": 0.05009082659620333,
        "acc_norm": 0.54,
        "acc_norm_stderr": 0.05009082659620333
    },
    "harness|hendrycksTest-college_mathematics|5": {
        "acc": 0.29,
        "acc_stderr": 0.04560480215720683,
        "acc_norm": 0.29,
        "acc_norm_stderr": 0.04560480215720683
    },
    "harness|hendrycksTest-college_medicine|5": {
        "acc": 0.6763005780346821,
        "acc_stderr": 0.0356760379963917,
        "acc_norm": 0.6763005780346821,
        "acc_norm_stderr": 0.0356760379963917
    },
    "harness|hendrycksTest-college_physics|5": {
        "acc": 0.45098039215686275,
        "acc_stderr": 0.049512182523962625,
        "acc_norm": 0.45098039215686275,
        "acc_norm_stderr": 0.049512182523962625
    },
    "harness|hendrycksTest-computer_security|5": {
        "acc": 0.75,
        "acc_stderr": 0.04351941398892446,
        "acc_norm": 0.75,
        "acc_norm_stderr": 0.04351941398892446
    },
    "harness|hendrycksTest-conceptual_physics|5": {
        "acc": 0.5872340425531914,
        "acc_stderr": 0.03218471141400351,
        "acc_norm": 0.5872340425531914,
        "acc_norm_stderr": 0.03218471141400351
    },
    "harness|hendrycksTest-econometrics|5": {
        "acc": 0.4824561403508772,
        "acc_stderr": 0.04700708033551038,
        "acc_norm": 0.4824561403508772,
        "acc_norm_stderr": 0.04700708033551038
    },
    "harness|hendrycksTest-electrical_engineering|5": {
        "acc": 0.5241379310344828,
        "acc_stderr": 0.04161808503501531,
        "acc_norm": 0.5241379310344828,
        "acc_norm_stderr": 0.04161808503501531
    },
    "harness|hendrycksTest-elementary_mathematics|5": {
        "acc": 0.38095238095238093,
        "acc_stderr": 0.025010749116137595,
        "acc_norm": 0.38095238095238093,
        "acc_norm_stderr": 0.025010749116137595
    },
    "harness|hendrycksTest-formal_logic|5": {
        "acc": 0.4444444444444444,
        "acc_stderr": 0.04444444444444449,
        "acc_norm": 0.4444444444444444,
        "acc_norm_stderr": 0.04444444444444449
    },
    "harness|hendrycksTest-global_facts|5": {
        "acc": 0.37,
        "acc_stderr": 0.04852365870939099,
        "acc_norm": 0.37,
        "acc_norm_stderr": 0.04852365870939099
    },
    "harness|hendrycksTest-high_school_biology|5": {
        "acc": 0.7806451612903226,
        "acc_stderr": 0.023540799358723295,
        "acc_norm": 0.7806451612903226,
        "acc_norm_stderr": 0.023540799358723295
    },
    "harness|hendrycksTest-high_school_chemistry|5": {
        "acc": 0.5073891625615764,
        "acc_stderr": 0.0351760354036101,
        "acc_norm": 0.5073891625615764,
        "acc_norm_stderr": 0.0351760354036101
    },
    "harness|hendrycksTest-high_school_computer_science|5": {
        "acc": 0.69,
        "acc_stderr": 0.04648231987117316,
        "acc_norm": 0.69,
        "acc_norm_stderr": 0.04648231987117316
    },
    "harness|hendrycksTest-high_school_european_history|5": {
        "acc": 0.7696969696969697,
        "acc_stderr": 0.0328766675860349,
        "acc_norm": 0.7696969696969697,
        "acc_norm_stderr": 0.0328766675860349
    },
    "harness|hendrycksTest-high_school_geography|5": {
        "acc": 0.7828282828282829,
        "acc_stderr": 0.02937661648494563,
        "acc_norm": 0.7828282828282829,
        "acc_norm_stderr": 0.02937661648494563
    },
    "harness|hendrycksTest-high_school_government_and_politics|5": {
        "acc": 0.8963730569948186,
        "acc_stderr": 0.021995311963644237,
        "acc_norm": 0.8963730569948186,
        "acc_norm_stderr": 0.021995311963644237
    },
    "harness|hendrycksTest-high_school_macroeconomics|5": {
        "acc": 0.6666666666666666,
        "acc_stderr": 0.023901157979402534,
        "acc_norm": 0.6666666666666666,
        "acc_norm_stderr": 0.023901157979402534
    },
    "harness|hendrycksTest-high_school_mathematics|5": {
        "acc": 0.3333333333333333,
        "acc_stderr": 0.028742040903948485,
        "acc_norm": 0.3333333333333333,
        "acc_norm_stderr": 0.028742040903948485
    },
    "harness|hendrycksTest-high_school_microeconomics|5": {
        "acc": 0.7310924369747899,
        "acc_stderr": 0.028801392193631273,
        "acc_norm": 0.7310924369747899,
        "acc_norm_stderr": 0.028801392193631273
    },
    "harness|hendrycksTest-high_school_physics|5": {
        "acc": 0.3576158940397351,
        "acc_stderr": 0.03913453431177258,
        "acc_norm": 0.3576158940397351,
        "acc_norm_stderr": 0.03913453431177258
    },
    "harness|hendrycksTest-high_school_psychology|5": {
        "acc": 0.8422018348623853,
        "acc_stderr": 0.01563002297009244,
        "acc_norm": 0.8422018348623853,
        "acc_norm_stderr": 0.01563002297009244
    },
    "harness|hendrycksTest-high_school_statistics|5": {
        "acc": 0.5416666666666666,
        "acc_stderr": 0.03398110890294636,
        "acc_norm": 0.5416666666666666,
        "acc_norm_stderr": 0.03398110890294636
    },
    "harness|hendrycksTest-high_school_us_history|5": {
        "acc": 0.8235294117647058,
        "acc_stderr": 0.026756401538078966,
        "acc_norm": 0.8235294117647058,
        "acc_norm_stderr": 0.026756401538078966
    },
    "harness|hendrycksTest-high_school_world_history|5": {
        "acc": 0.810126582278481,
        "acc_stderr": 0.02553010046023349,
        "acc_norm": 0.810126582278481,
        "acc_norm_stderr": 0.02553010046023349
    },
    "harness|hendrycksTest-human_aging|5": {
        "acc": 0.6905829596412556,
        "acc_stderr": 0.03102441174057221,
        "acc_norm": 0.6905829596412556,
        "acc_norm_stderr": 0.03102441174057221
    },
    "harness|hendrycksTest-human_sexuality|5": {
        "acc": 0.7633587786259542,
        "acc_stderr": 0.03727673575596913,
        "acc_norm": 0.7633587786259542,
        "acc_norm_stderr": 0.03727673575596913
    },
    "harness|hendrycksTest-international_law|5": {
        "acc": 0.743801652892562,
        "acc_stderr": 0.03984979653302872,
        "acc_norm": 0.743801652892562,
        "acc_norm_stderr": 0.03984979653302872
    },
    "harness|hendrycksTest-jurisprudence|5": {
        "acc": 0.7870370370370371,
        "acc_stderr": 0.0395783547198098,
        "acc_norm": 0.7870370370370371,
        "acc_norm_stderr": 0.0395783547198098
    },
    "harness|hendrycksTest-logical_fallacies|5": {
        "acc": 0.7668711656441718,
        "acc_stderr": 0.0332201579577674,
        "acc_norm": 0.7668711656441718,
        "acc_norm_stderr": 0.0332201579577674
    },
    "harness|hendrycksTest-machine_learning|5": {
        "acc": 0.4375,
        "acc_stderr": 0.04708567521880525,
        "acc_norm": 0.4375,
        "acc_norm_stderr": 0.04708567521880525
    },
    "harness|hendrycksTest-management|5": {
        "acc": 0.7961165048543689,
        "acc_stderr": 0.03989139859531771,
        "acc_norm": 0.7961165048543689,
        "acc_norm_stderr": 0.03989139859531771
    },
    "harness|hendrycksTest-marketing|5": {
        "acc": 0.8589743589743589,
        "acc_stderr": 0.02280138253459753,
        "acc_norm": 0.8589743589743589,
        "acc_norm_stderr": 0.02280138253459753
    },
    "harness|hendrycksTest-medical_genetics|5": {
        "acc": 0.71,
        "acc_stderr": 0.045604802157206845,
        "acc_norm": 0.71,
        "acc_norm_stderr":
0.045604802157206845 }, "harness|hendrycksTest-miscellaneous|5": { "acc": 0.8237547892720306, "acc_stderr": 0.013625556907993457, "acc_norm": 0.8237547892720306, "acc_norm_stderr": 0.013625556907993457 }, "harness|hendrycksTest-moral_disputes|5": { "acc": 0.7283236994219653, "acc_stderr": 0.02394851290546837, "acc_norm": 0.7283236994219653, "acc_norm_stderr": 0.02394851290546837 }, "harness|hendrycksTest-moral_scenarios|5": { "acc": 0.39553072625698327, "acc_stderr": 0.016353415410075775, "acc_norm": 0.39553072625698327, "acc_norm_stderr": 0.016353415410075775 }, "harness|hendrycksTest-nutrition|5": { "acc": 0.7189542483660131, "acc_stderr": 0.02573885479781873, "acc_norm": 0.7189542483660131, "acc_norm_stderr": 0.02573885479781873 }, "harness|hendrycksTest-philosophy|5": { "acc": 0.7170418006430869, "acc_stderr": 0.02558306248998481, "acc_norm": 0.7170418006430869, "acc_norm_stderr": 0.02558306248998481 }, "harness|hendrycksTest-prehistory|5": { "acc": 0.7345679012345679, "acc_stderr": 0.024569223600460842, "acc_norm": 0.7345679012345679, "acc_norm_stderr": 0.024569223600460842 }, "harness|hendrycksTest-professional_accounting|5": { "acc": 0.48936170212765956, "acc_stderr": 0.02982074719142248, "acc_norm": 0.48936170212765956, "acc_norm_stderr": 0.02982074719142248 }, "harness|hendrycksTest-professional_law|5": { "acc": 0.46479791395045633, "acc_stderr": 0.012738547371303957, "acc_norm": 0.46479791395045633, "acc_norm_stderr": 0.012738547371303957 }, "harness|hendrycksTest-professional_medicine|5": { "acc": 0.6801470588235294, "acc_stderr": 0.028332959514031204, "acc_norm": 0.6801470588235294, "acc_norm_stderr": 0.028332959514031204 }, "harness|hendrycksTest-professional_psychology|5": { "acc": 0.6552287581699346, "acc_stderr": 0.01922832201869664, "acc_norm": 0.6552287581699346, "acc_norm_stderr": 0.01922832201869664 }, "harness|hendrycksTest-public_relations|5": { "acc": 0.6636363636363637, "acc_stderr": 0.04525393596302506, "acc_norm": 0.6636363636363637, 
"acc_norm_stderr": 0.04525393596302506 }, "harness|hendrycksTest-security_studies|5": { "acc": 0.7306122448979592, "acc_stderr": 0.02840125202902294, "acc_norm": 0.7306122448979592, "acc_norm_stderr": 0.02840125202902294 }, "harness|hendrycksTest-sociology|5": { "acc": 0.8656716417910447, "acc_stderr": 0.02411267824090083, "acc_norm": 0.8656716417910447, "acc_norm_stderr": 0.02411267824090083 }, "harness|hendrycksTest-us_foreign_policy|5": { "acc": 0.83, "acc_stderr": 0.0377525168068637, "acc_norm": 0.83, "acc_norm_stderr": 0.0377525168068637 }, "harness|hendrycksTest-virology|5": { "acc": 0.5180722891566265, "acc_stderr": 0.03889951252827216, "acc_norm": 0.5180722891566265, "acc_norm_stderr": 0.03889951252827216 }, "harness|hendrycksTest-world_religions|5": { "acc": 0.8362573099415205, "acc_stderr": 0.028380919596145866, "acc_norm": 0.8362573099415205, "acc_norm_stderr": 0.028380919596145866 }, "harness|truthfulqa:mc|0": { "mc1": 0.5140758873929009, "mc1_stderr": 0.017496563717042776, "mc2": 0.6793456051279607, "mc2_stderr": 0.015369634410362739 }, "harness|winogrande|5": { "acc": 0.8050513022888713, "acc_stderr": 0.011134099415938273 }, "harness|gsm8k|5": { "acc": 0.5890826383623957, "acc_stderr": 0.01355213290142322 } } ``` ## Dataset Details ### Dataset Description <!-- Provide a longer summary of what this dataset is. --> - **Curated by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] ### Dataset Sources [optional] <!-- Provide the basic links for the dataset. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the dataset is intended to be used. --> ### Direct Use <!-- This section describes suitable use cases for the dataset. 
--> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. --> [More Information Needed] ## Dataset Structure <!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. --> [More Information Needed] ## Dataset Creation ### Curation Rationale <!-- Motivation for the creation of this dataset. --> [More Information Needed] ### Source Data <!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). --> #### Data Collection and Processing <!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. --> [More Information Needed] #### Who are the source data producers? <!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. --> [More Information Needed] ### Annotations [optional] <!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. --> #### Annotation process <!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. --> [More Information Needed] #### Who are the annotators? <!-- This section describes the people or systems who created the annotations. 
--> [More Information Needed] #### Personal and Sensitive Information <!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations. ## Citation [optional] <!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Dataset Card Authors [optional] [More Information Needed] ## Dataset Card Contact [More Information Needed]
tyzhu/random25eof_find_passage_train5000000_eval1000_rare
--- configs: - config_name: default data_files: - split: train path: data/train-* - split: validation path: data/validation-* dataset_info: features: - name: inputs dtype: string - name: targets dtype: string splits: - name: train num_bytes: 1042263000 num_examples: 10001000 - name: validation num_bytes: 118222 num_examples: 1000 download_size: 0 dataset_size: 1042381222 --- # Dataset Card for "random25eof_find_passage_train5000000_eval1000_rare" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
michalby24/dataset_combined_4
--- dataset_info: features: - name: language dtype: string - name: input_values sequence: float32 splits: - name: train num_bytes: 2023164070 num_examples: 31607 - name: test num_bytes: 505807020 num_examples: 7902 download_size: 1287752556 dataset_size: 2528971090 --- # Dataset Card for "dataset_combined_4" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
stanfordnlp/sst2
--- annotations_creators: - crowdsourced language_creators: - found language: - en license: - unknown multilinguality: - monolingual size_categories: - 10K<n<100K source_datasets: - original task_categories: - text-classification task_ids: - sentiment-classification paperswithcode_id: sst pretty_name: Stanford Sentiment Treebank v2 dataset_info: features: - name: idx dtype: int32 - name: sentence dtype: string - name: label dtype: class_label: names: '0': negative '1': positive splits: - name: train num_bytes: 4681603 num_examples: 67349 - name: validation num_bytes: 106252 num_examples: 872 - name: test num_bytes: 216640 num_examples: 1821 download_size: 3331058 dataset_size: 5004495 configs: - config_name: default data_files: - split: train path: data/train-* - split: validation path: data/validation-* - split: test path: data/test-* --- # Dataset Card for [Dataset Name] ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** 
https://nlp.stanford.edu/sentiment/
- **Repository:**
- **Paper:** [Recursive Deep Models for Semantic Compositionality Over a Sentiment Treebank](https://www.aclweb.org/anthology/D13-1170/)
- **Leaderboard:**
- **Point of Contact:**

### Dataset Summary

The Stanford Sentiment Treebank is a corpus with fully labeled parse trees that allows for a complete analysis of the compositional effects of sentiment in language. The corpus is based on the dataset introduced by Pang and Lee (2005) and consists of 11,855 single sentences extracted from movie reviews. It was parsed with the Stanford parser and includes a total of 215,154 unique phrases from those parse trees, each annotated by 3 human judges.

Binary classification experiments on full sentences (negative or somewhat negative vs. somewhat positive or positive, with neutral sentences discarded) refer to the dataset as SST-2 or SST binary.

### Supported Tasks and Leaderboards

- `sentiment-classification`

### Languages

The text in the dataset is in English (`en`).

## Dataset Structure

### Data Instances

```
{'idx': 0, 'sentence': 'hide new secretions from the parental units ', 'label': 0}
```

### Data Fields

- `idx`: Monotonically increasing index ID.
- `sentence`: Complete sentence expressing an opinion about a film.
- `label`: Sentiment of the opinion, either "negative" (0) or "positive" (1). The test set labels are hidden (-1).

### Data Splits

|                    | train | validation | test |
|--------------------|------:|-----------:|-----:|
| Number of examples | 67349 |        872 | 1821 |

## Dataset Creation

### Curation Rationale

[More Information Needed]

### Source Data

#### Initial Data Collection and Normalization

[More Information Needed]

#### Who are the source language producers?

Rotten Tomatoes reviewers.

### Annotations

#### Annotation process

[More Information Needed]

#### Who are the annotators?
[More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information Unknown. ### Citation Information ```bibtex @inproceedings{socher-etal-2013-recursive, title = "Recursive Deep Models for Semantic Compositionality Over a Sentiment Treebank", author = "Socher, Richard and Perelygin, Alex and Wu, Jean and Chuang, Jason and Manning, Christopher D. and Ng, Andrew and Potts, Christopher", booktitle = "Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing", month = oct, year = "2013", address = "Seattle, Washington, USA", publisher = "Association for Computational Linguistics", url = "https://www.aclweb.org/anthology/D13-1170", pages = "1631--1642", } ``` ### Contributions Thanks to [@albertvillanova](https://github.com/albertvillanova) for adding this dataset.
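As a footnote to the Data Fields section above, the `label` encoding (0 = negative, 1 = positive, -1 = hidden test label) can be sketched as follows; the rows below are illustrative examples in the card's instance format, not actual dataset rows:

```python
# Sketch of the SST-2 label encoding described in the Data Fields section:
# 0 = "negative", 1 = "positive", and -1 marks hidden test-set labels.
LABEL_NAMES = {0: "negative", 1: "positive", -1: "hidden (test set)"}

examples = [  # illustrative rows in the card's instance format
    {"idx": 0, "sentence": "hide new secretions from the parental units ", "label": 0},
    {"idx": 1, "sentence": "a delightful , witty film ", "label": 1},
    {"idx": 2, "sentence": "an unlabeled test sentence ", "label": -1},
]

names = [LABEL_NAMES[ex["label"]] for ex in examples]
print(names)  # ['negative', 'positive', 'hidden (test set)']
```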
sivan22/synth-HTR
--- dataset_info: features: - name: image dtype: image - name: labels dtype: string splits: - name: train num_bytes: 2904123997.0 num_examples: 30000 download_size: 0 dataset_size: 2904123997.0 --- # Dataset Card for "synth-HTR" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
jonathan-roberts1/USTC_SmokeRS
--- dataset_info: features: - name: image dtype: image - name: label dtype: class_label: names: '0': cloud '1': dust '2': haze '3': land '4': seaside '5': smoke splits: - name: train num_bytes: 1229029078.725 num_examples: 6225 download_size: 1115042620 dataset_size: 1229029078.725 license: other --- # Dataset Card for "USTC_SmokeRS" ## Dataset Description - **Paper:** [SmokeNet: Satellite smoke scene detection using convolutional neural network with spatial and channel-wise attention](https://www.mdpi.com/2072-4292/11/14/1702/pdf) ### Licensing Information For research/education purposes. ## Citation Information [SmokeNet: Satellite smoke scene detection using convolutional neural network with spatial and channel-wise attention](https://www.mdpi.com/2072-4292/11/14/1702/pdf) ``` @article{ba2019smokenet, title = {SmokeNet: Satellite smoke scene detection using convolutional neural network with spatial and channel-wise attention}, author = {Ba, Rui and Chen, Chen and Yuan, Jing and Song, Weiguo and Lo, Siuming}, year = 2019, journal = {Remote Sensing}, publisher = {MDPI}, volume = 11, number = 14, pages = 1702 } ```
autoevaluate/autoeval-eval-mathemakitten__winobias_antistereotype_test_v5-mathemak-b6a817-2053667119
--- type: predictions tags: - autotrain - evaluation datasets: - mathemakitten/winobias_antistereotype_test_v5 eval_info: task: text_zero_shot_classification model: inverse-scaling/opt-6.7b_eval metrics: [] dataset_name: mathemakitten/winobias_antistereotype_test_v5 dataset_config: mathemakitten--winobias_antistereotype_test_v5 dataset_split: test col_mapping: text: text classes: classes target: target --- # Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Zero-Shot Text Classification * Model: inverse-scaling/opt-6.7b_eval * Dataset: mathemakitten/winobias_antistereotype_test_v5 * Config: mathemakitten--winobias_antistereotype_test_v5 * Split: test To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@mathemakitten](https://huggingface.co/mathemakitten) for evaluating this model.
ksukrit/training_data_hands
--- configs: - config_name: default data_files: - split: train path: data/train-* dataset_info: features: - name: image dtype: image - name: label dtype: class_label: names: '0': bad '1': good splits: - name: train num_bytes: 4654722697.352 num_examples: 3974 download_size: 0 dataset_size: 4654722697.352 --- # Dataset Card for "training_data_hands" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
Azure99/blossom-math-v2
---
license: apache-2.0
task_categories:
- text-generation
- text2text-generation
language:
- zh
size_categories:
- 10K<n<100K
---

# BLOSSOM MATH V2

### Introduction

[Blossom Math V3](https://huggingface.co/datasets/Azure99/blossom-math-v3) has been released! 🤗

Blossom Math V2 is a bilingual Chinese-English math conversation dataset derived from Math23K and GSM8K, suitable for fine-tuning on math problems.

Compared with blossom-math-v1, it adds 2,500 GSM8K records and 2,500 GSM8K-CN records translated into Chinese. In addition, the answer-checking logic has been improved, and calculation annotations such as `<<1+1=2>>` have been removed to unify the style of the reasoning steps.

This dataset takes the full set of questions from Math23K, GSM8K, and the translated GSM8K, calls gpt-3.5-turbo-0613 to generate answers, and then validates the generated answers against those in the original datasets, filtering out incorrect ones. This largely guarantees the accuracy of both questions and answers.

This release contains 25% of the full data, totaling 10K records.

### Languages

Chinese and English.

### Dataset Structure

Each record represents a complete question and answer, with five fields: id, input, output, answer, and dataset.

- id: string; the question id in the original dataset. Combined with the dataset field, it uniquely identifies a question.
- input: string; the question.
- output: string; the answer generated by gpt-3.5-turbo-0613.
- answer: string; the correct answer.
- dataset: string; the original dataset.

### Dataset Limitations

All responses in this dataset were generated by gpt-3.5-turbo-0613 and have undergone preliminary validation, but they may still contain inaccurate answers.
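The validation step described above (checking gpt-3.5-turbo-0613 outputs against the original datasets' answers) can be sketched roughly as follows. This is a simplified, hypothetical version: the card does not specify the actual matching rules, and the record below is invented for illustration.

```python
import re

def last_number(text):
    """Extract the last numeric value mentioned in a model response.

    A crude stand-in for the card's answer-checking logic: many math
    responses end with the final result, so we compare that number
    against the gold answer.
    """
    nums = re.findall(r"-?\d+(?:\.\d+)?", text)
    return float(nums[-1]) if nums else None

record = {  # hypothetical record in the card's five-field schema
    "id": "123",
    "dataset": "gsm8k",
    "input": "Tom has 3 apples and buys 4 more. How many apples does he have?",
    "output": "Tom starts with 3 apples and buys 4 more, so he has 3 + 4 = 7 apples.",
    "answer": "7",
}

# keep the record only if the generated answer matches the gold answer
assert last_number(record["output"]) == float(record["answer"])
```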
zolak/twitter_dataset_79_1713219372
--- dataset_info: features: - name: id dtype: string - name: tweet_content dtype: string - name: user_name dtype: string - name: user_id dtype: string - name: created_at dtype: string - name: url dtype: string - name: favourite_count dtype: int64 - name: scraped_at dtype: string - name: image_urls dtype: string splits: - name: train num_bytes: 162885 num_examples: 423 download_size: 90656 dataset_size: 162885 configs: - config_name: default data_files: - split: train path: data/train-* ---
BeIR/nfcorpus-generated-queries
--- annotations_creators: [] language_creators: [] language: - en license: - cc-by-sa-4.0 multilinguality: - monolingual paperswithcode_id: beir pretty_name: BEIR Benchmark size_categories: msmarco: - 1M<n<10M trec-covid: - 100k<n<1M nfcorpus: - 1K<n<10K nq: - 1M<n<10M hotpotqa: - 1M<n<10M fiqa: - 10K<n<100K arguana: - 1K<n<10K touche-2020: - 100K<n<1M cqadupstack: - 100K<n<1M quora: - 100K<n<1M dbpedia: - 1M<n<10M scidocs: - 10K<n<100K fever: - 1M<n<10M climate-fever: - 1M<n<10M scifact: - 1K<n<10K source_datasets: [] task_categories: - text-retrieval - zero-shot-retrieval - information-retrieval - zero-shot-information-retrieval task_ids: - passage-retrieval - entity-linking-retrieval - fact-checking-retrieval - tweet-retrieval - citation-prediction-retrieval - duplication-question-retrieval - argument-retrieval - news-retrieval - biomedical-information-retrieval - question-answering-retrieval --- # Dataset Card for BEIR Benchmark ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - 
**Homepage:** https://github.com/UKPLab/beir
- **Repository:** https://github.com/UKPLab/beir
- **Paper:** https://openreview.net/forum?id=wCu6T5xFjeJ
- **Leaderboard:** https://docs.google.com/spreadsheets/d/1L8aACyPaXrL8iEelJLGqlMqXKPX2oSP_R10pZoy77Ns
- **Point of Contact:** nandan.thakur@uwaterloo.ca

### Dataset Summary

BEIR is a heterogeneous benchmark that has been built from 18 diverse datasets representing 9 information retrieval tasks:

- Fact-checking: [FEVER](http://fever.ai), [Climate-FEVER](http://climatefever.ai), [SciFact](https://github.com/allenai/scifact)
- Question-Answering: [NQ](https://ai.google.com/research/NaturalQuestions), [HotpotQA](https://hotpotqa.github.io), [FiQA-2018](https://sites.google.com/view/fiqa/)
- Bio-Medical IR: [TREC-COVID](https://ir.nist.gov/covidSubmit/index.html), [BioASQ](http://bioasq.org), [NFCorpus](https://www.cl.uni-heidelberg.de/statnlpgroup/nfcorpus/)
- News Retrieval: [TREC-NEWS](https://trec.nist.gov/data/news2019.html), [Robust04](https://trec.nist.gov/data/robust/04.guidelines.html)
- Argument Retrieval: [Touche-2020](https://webis.de/events/touche-20/shared-task-1.html), [ArguAna](http://argumentation.bplaced.net/arguana/data)
- Duplicate Question Retrieval: [Quora](https://www.quora.com/q/quoradata/First-Quora-Dataset-Release-Question-Pairs), [CqaDupstack](http://nlp.cis.unimelb.edu.au/resources/cqadupstack/)
- Citation-Prediction: [SCIDOCS](https://allenai.org/data/scidocs)
- Tweet Retrieval: [Signal-1M](https://research.signal-ai.com/datasets/signal1m-tweetir.html)
- Entity Retrieval: [DBPedia](https://github.com/iai-group/DBpedia-Entity/)

All these datasets have been preprocessed and can be used for your experiments.

```python
# A minimal sketch of loading one preprocessed BEIR dataset (NFCorpus here)
# from the Hugging Face Hub. The config and split names below are assumptions;
# check the BeIR organization on the Hub for the dataset you need.
from datasets import load_dataset

corpus = load_dataset("BeIR/nfcorpus", "corpus", split="corpus")
queries = load_dataset("BeIR/nfcorpus", "queries", split="queries")
```

### Supported Tasks and Leaderboards

The dataset supports a leaderboard that evaluates retrieval models on each of the benchmark's tasks, primarily using the nDCG@10 metric.
The current best performing models can be found [here](https://eval.ai/web/challenges/challenge-page/689/leaderboard/).

### Languages

All tasks are in English (`en`).

## Dataset Structure

All BEIR datasets must contain a corpus, queries, and qrels (a relevance judgments file), in the following format:

- `corpus` file: a `.jsonl` (JSON Lines) file containing a list of dictionaries, each with three fields: `_id` (unique document identifier), `title` (optional document title), and `text` (document paragraph or passage). For example: `{"_id": "doc1", "title": "Albert Einstein", "text": "Albert Einstein was a German-born...."}`
- `queries` file: a `.jsonl` (JSON Lines) file containing a list of dictionaries, each with two fields: `_id` (unique query identifier) and `text` (query text). For example: `{"_id": "q1", "text": "Who developed the mass-energy equivalence formula?"}`
- `qrels` file: a `.tsv` (tab-separated) file containing three columns, `query-id`, `corpus-id`, and `score`, in this order. Keep the first row as a header. For example: `q1 doc1 1`

### Data Instances

A high-level example of any BEIR dataset:

```python
corpus = {
    "doc1": {
        "title": "Albert Einstein",
        "text": "Albert Einstein was a German-born theoretical physicist who developed the theory of relativity, \
one of the two pillars of modern physics (alongside quantum mechanics). His work is also known for \
its influence on the philosophy of science. He is best known to the general public for his mass–energy \
equivalence formula E = mc2, which has been dubbed 'the world's most famous equation'. He received the 1921 \
Nobel Prize in Physics 'for his services to theoretical physics, and especially for his discovery of the law \
of the photoelectric effect', a pivotal step in the development of quantum theory."
    },
    "doc2": {
        "title": "",  # Keep title an empty string if not present
        "text": "Wheat beer is a top-fermented beer which is brewed with a large proportion of wheat relative to the amount of \
malted barley. The two main varieties are German Weißbier and Belgian witbier; other types include Lambic (made \
with wild yeast), Berliner Weisse (a cloudy, sour beer), and Gose (a sour, salty beer)."
    },
}

queries = {
    "q1": "Who developed the mass-energy equivalence formula?",
    "q2": "Which beer is brewed with a large proportion of wheat?"
}

qrels = {
    "q1": {"doc1": 1},
    "q2": {"doc2": 1},
}
```

### Data Fields

Examples from all configurations have the following features:

### Corpus
- `corpus`: a `dict` feature representing the document title and passage text, made up of:
  - `_id`: a `string` feature representing the unique document id.
  - `title`: a `string` feature, denoting the title of the document.
  - `text`: a `string` feature, denoting the text of the document.

### Queries
- `queries`: a `dict` feature representing the query, made up of:
  - `_id`: a `string` feature representing the unique query id.
  - `text`: a `string` feature, denoting the text of the query.

### Qrels
- `qrels`: a `dict` feature representing the query-document relevance judgements, made up of:
  - `query-id`: a `string` feature representing the query id.
  - `corpus-id`: a `string` feature, denoting the document id.
  - `score`: an `int32` feature, denoting the relevance judgement between query and document.
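As a concrete illustration of the qrels format described above, the following sketch parses a tab-separated qrels file (with a header row) into the nested dictionary shape shown in the Data Instances section; the file contents here are hypothetical.

```python
import csv
import io

# Hypothetical qrels content in the BEIR .tsv layout: a header row,
# then query-id, corpus-id, and score separated by tabs.
qrels_tsv = "query-id\tcorpus-id\tscore\nq1\tdoc1\t1\nq2\tdoc2\t1\n"

def load_qrels(fp):
    """Parse a BEIR-style qrels file into {query_id: {doc_id: score}}."""
    reader = csv.reader(fp, delimiter="\t")
    next(reader)  # skip the header row
    qrels = {}
    for query_id, doc_id, score in reader:
        qrels.setdefault(query_id, {})[doc_id] = int(score)
    return qrels

qrels = load_qrels(io.StringIO(qrels_tsv))
print(qrels)  # {'q1': {'doc1': 1}, 'q2': {'doc2': 1}}
```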
### Data Splits | Dataset | Website| BEIR-Name | Type | Queries | Corpus | Rel D/Q | Down-load | md5 | | -------- | -----| ---------| --------- | ----------- | ---------| ---------| :----------: | :------:| | MSMARCO | [Homepage](https://microsoft.github.io/msmarco/)| ``msmarco`` | ``train``<br>``dev``<br>``test``| 6,980 | 8.84M | 1.1 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/msmarco.zip) | ``444067daf65d982533ea17ebd59501e4`` | | TREC-COVID | [Homepage](https://ir.nist.gov/covidSubmit/index.html)| ``trec-covid``| ``test``| 50| 171K| 493.5 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/trec-covid.zip) | ``ce62140cb23feb9becf6270d0d1fe6d1`` | | NFCorpus | [Homepage](https://www.cl.uni-heidelberg.de/statnlpgroup/nfcorpus/) | ``nfcorpus`` | ``train``<br>``dev``<br>``test``| 323 | 3.6K | 38.2 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/nfcorpus.zip) | ``a89dba18a62ef92f7d323ec890a0d38d`` | | BioASQ | [Homepage](http://bioasq.org) | ``bioasq``| ``train``<br>``test`` | 500 | 14.91M | 8.05 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#2-bioasq) | | NQ | [Homepage](https://ai.google.com/research/NaturalQuestions) | ``nq``| ``train``<br>``test``| 3,452 | 2.68M | 1.2 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/nq.zip) | ``d4d3d2e48787a744b6f6e691ff534307`` | | HotpotQA | [Homepage](https://hotpotqa.github.io) | ``hotpotqa``| ``train``<br>``dev``<br>``test``| 7,405 | 5.23M | 2.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/hotpotqa.zip) | ``f412724f78b0d91183a0e86805e16114`` | | FiQA-2018 | [Homepage](https://sites.google.com/view/fiqa/) | ``fiqa`` | ``train``<br>``dev``<br>``test``| 648 | 57K | 2.6 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/fiqa.zip) | ``17918ed23cd04fb15047f73e6c3bd9d9`` | | Signal-1M(RT) | 
[Homepage](https://research.signal-ai.com/datasets/signal1m-tweetir.html)| ``signal1m`` | ``test``| 97 | 2.86M | 19.6 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#4-signal-1m) | | TREC-NEWS | [Homepage](https://trec.nist.gov/data/news2019.html) | ``trec-news`` | ``test``| 57 | 595K | 19.6 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#1-trec-news) | | ArguAna | [Homepage](http://argumentation.bplaced.net/arguana/data) | ``arguana``| ``test`` | 1,406 | 8.67K | 1.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/arguana.zip) | ``8ad3e3c2a5867cdced806d6503f29b99`` | | Touche-2020| [Homepage](https://webis.de/events/touche-20/shared-task-1.html) | ``webis-touche2020``| ``test``| 49 | 382K | 19.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/webis-touche2020.zip) | ``46f650ba5a527fc69e0a6521c5a23563`` | | CQADupstack| [Homepage](http://nlp.cis.unimelb.edu.au/resources/cqadupstack/) | ``cqadupstack``| ``test``| 13,145 | 457K | 1.4 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/cqadupstack.zip) | ``4e41456d7df8ee7760a7f866133bda78`` | | Quora| [Homepage](https://www.quora.com/q/quoradata/First-Quora-Dataset-Release-Question-Pairs) | ``quora``| ``dev``<br>``test``| 10,000 | 523K | 1.6 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/quora.zip) | ``18fb154900ba42a600f84b839c173167`` | | DBPedia | [Homepage](https://github.com/iai-group/DBpedia-Entity/) | ``dbpedia-entity``| ``dev``<br>``test``| 400 | 4.63M | 38.2 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/dbpedia-entity.zip) | ``c2a39eb420a3164af735795df012ac2c`` | | SCIDOCS| [Homepage](https://allenai.org/data/scidocs) | ``scidocs``| ``test``| 1,000 | 25K | 4.9 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/scidocs.zip) | ``38121350fc3a4d2f48850f6aff52e4a9`` | | FEVER | 
[Homepage](http://fever.ai) | ``fever``| ``train``<br>``dev``<br>``test``| 6,666 | 5.42M | 1.2| [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/fever.zip) | ``5a818580227bfb4b35bb6fa46d9b6c03`` | | Climate-FEVER| [Homepage](http://climatefever.ai) | ``climate-fever``|``test``| 1,535 | 5.42M | 3.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/climate-fever.zip) | ``8b66f0a9126c521bae2bde127b4dc99d`` | | SciFact| [Homepage](https://github.com/allenai/scifact) | ``scifact``| ``train``<br>``test``| 300 | 5K | 1.1 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/scifact.zip) | ``5f7d1de60b170fc8027bb7898e2efca1`` | | Robust04 | [Homepage](https://trec.nist.gov/data/robust/04.guidelines.html) | ``robust04``| ``test``| 249 | 528K | 69.9 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#3-robust04) | ## Dataset Creation ### Curation Rationale [Needs More Information] ### Source Data #### Initial Data Collection and Normalization [Needs More Information] #### Who are the source language producers? [Needs More Information] ### Annotations #### Annotation process [Needs More Information] #### Who are the annotators? 
[Needs More Information] ### Personal and Sensitive Information [Needs More Information] ## Considerations for Using the Data ### Social Impact of Dataset [Needs More Information] ### Discussion of Biases [Needs More Information] ### Other Known Limitations [Needs More Information] ## Additional Information ### Dataset Curators [Needs More Information] ### Licensing Information [Needs More Information] ### Citation Information Cite as: ``` @inproceedings{ thakur2021beir, title={{BEIR}: A Heterogeneous Benchmark for Zero-shot Evaluation of Information Retrieval Models}, author={Nandan Thakur and Nils Reimers and Andreas R{\"u}ckl{\'e} and Abhishek Srivastava and Iryna Gurevych}, booktitle={Thirty-fifth Conference on Neural Information Processing Systems Datasets and Benchmarks Track (Round 2)}, year={2021}, url={https://openreview.net/forum?id=wCu6T5xFjeJ} } ``` ### Contributions Thanks to [@Nthakur20](https://github.com/Nthakur20) for adding this dataset.
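The Data Splits table above pairs each download link with an md5 checksum. A minimal sketch for verifying a downloaded archive against those checksums (the local file path is hypothetical):

```python
import hashlib

def md5_of(path, chunk_size=1 << 20):
    """Compute the md5 hex digest of a file, streaming it in chunks."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# e.g. after downloading nfcorpus.zip (hypothetical local path), compare
# against the checksum listed in the Data Splits table:
# assert md5_of("nfcorpus.zip") == "a89dba18a62ef92f7d323ec890a0d38d"
```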
result-kand2-sdxl-wuerst-karlo/a9adf6d9
--- dataset_info: features: - name: result dtype: string - name: id dtype: int64 splits: - name: train num_bytes: 182 num_examples: 10 download_size: 1395 dataset_size: 182 configs: - config_name: default data_files: - split: train path: data/train-* --- # Dataset Card for "a9adf6d9" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
CyberHarem/annand_fireemblem
--- license: mit task_categories: - text-to-image tags: - art - not-for-all-audiences size_categories: - n<1K --- # Dataset of annand (Fire Emblem) This is the dataset of annand (Fire Emblem), containing 20 images and their tags. The core tags of this character are `green_hair, long_hair, green_eyes, breasts`, which are pruned in this dataset. Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)). ## List of Packages | Name | Images | Size | Download | Type | Description | |:-----------------|---------:|:----------|:-------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------| | raw | 20 | 18.03 MiB | [Download](https://huggingface.co/datasets/CyberHarem/annand_fireemblem/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). | | 800 | 20 | 12.03 MiB | [Download](https://huggingface.co/datasets/CyberHarem/annand_fireemblem/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. | | stage3-p480-800 | 35 | 20.89 MiB | [Download](https://huggingface.co/datasets/CyberHarem/annand_fireemblem/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. | | 1200 | 20 | 16.20 MiB | [Download](https://huggingface.co/datasets/CyberHarem/annand_fireemblem/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. | | stage3-p480-1200 | 35 | 26.50 MiB | [Download](https://huggingface.co/datasets/CyberHarem/annand_fireemblem/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. 
| ### Load Raw Dataset with Waifuc We provide the raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need it, run the following code: ```python import os import zipfile from huggingface_hub import hf_hub_download from waifuc.source import LocalSource # download raw archive file zip_file = hf_hub_download( repo_id='CyberHarem/annand_fireemblem', repo_type='dataset', filename='dataset-raw.zip', ) # extract files to your directory dataset_dir = 'dataset_dir' os.makedirs(dataset_dir, exist_ok=True) with zipfile.ZipFile(zip_file, 'r') as zf: zf.extractall(dataset_dir) # load the dataset with waifuc source = LocalSource(dataset_dir) for item in source: print(item.image, item.meta['filename'], item.meta['tags']) ``` ## List of Clusters List of tag clustering results; some outfits may be mined here. ### Raw Text Version | # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags | |----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------| | 0 | 20 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | 1girl, solo, circlet, smile, breastplate, elbow_gloves, simple_background, thighhighs, white_background, belt, boots, closed_mouth, looking_at_viewer, white_dress | ### Table Version | # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | solo | circlet | smile | breastplate | elbow_gloves | simple_background | thighhighs | white_background | belt | boots | closed_mouth | looking_at_viewer | white_dress | 
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:-------|:----------|:--------|:--------------|:---------------|:--------------------|:-------------|:-------------------|:-------|:--------|:---------------|:--------------------|:--------------| | 0 | 20 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | X | X | X | X | X | X | X | X | X | X | X | X | X | X |
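By convention, the IMG+TXT packages above bundle one `.txt` tag file per image with the same filename stem. A minimal sketch for pairing them after extracting one of the zips (the same-stem, comma-separated layout is an assumption — verify against the actual archive contents):

```python
from pathlib import Path

IMAGE_EXTS = {".png", ".jpg", ".jpeg", ".webp"}


def pair_images_with_tags(dataset_dir):
    """Yield (image_path, tags) pairs from an extracted IMG+TXT archive.

    Assumes each image has a sibling .txt file with the same stem holding
    a comma-separated tag list -- check the archive before relying on this.
    """
    for img in sorted(Path(dataset_dir).iterdir()):
        if img.suffix.lower() not in IMAGE_EXTS:
            continue
        txt = img.with_suffix(".txt")
        if not txt.exists():
            continue
        tags = [t.strip() for t in txt.read_text(encoding="utf-8").split(",") if t.strip()]
        yield img, tags
```

This keeps the pairing logic independent of waifuc, which is convenient when feeding the 800/1200 packages into a training pipeline that only wants file paths and tag lists.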
OUTEIRAL2/VOZIA2
--- license: openrail ---
cg1177/fineaction_internvideo2_1b_w16_s4
--- license: apache-2.0 ---
blenderwang/meruem-cell-1m
--- dataset_info: features: - name: input_ids sequence: int32 - name: exprs sequence: float32 - name: length dtype: int32 - name: species dtype: class_label: names: '0': arabidopsis_thaliana '1': danio_rerio '2': drosophila_melanogaster '3': homo_sapiens '4': mus_musculus '5': rattus_norvegicus - name: dataset_id dtype: int16 - name: cell_id dtype: int32 splits: - name: train num_bytes: 14854205784 num_examples: 1000000 download_size: 6831816916 dataset_size: 14854205784 configs: - config_name: default data_files: - split: train path: data/train-* tags: - biology size_categories: - 1M<n<10M --- # Subset of the Meruem Cell 10M dataset A single-cell RNA expression dataset ## Source - Original data is from [EMBL](https://www.ebi.ac.uk/gxa/sc/home) - Only species that have at least 100k cells are included - Only protein coding genes are included - The dataset ids can be mapped to dataset names in `./dataset_names.txt` - The `input_ids` follow the proteins listed in `./gene_map_cleaned.tsv` - Cell IDs are the cells' original indices in the dataset ## Note - This dataset is for development only - The full set will be uploaded once I have confirmed my model fits this subset
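As a sketch of the `dataset_id` → name join described above, assuming `./dataset_names.txt` lists one name per line with the zero-based line index serving as the id (this layout is an assumption — inspect the file; the example names in the test are hypothetical):

```python
def load_dataset_names(path_or_lines):
    """Build a dataset_id -> dataset-name mapping.

    Accepts either a path to dataset_names.txt or an iterable of lines.
    Assumes one name per line, indexed by zero-based line number -- an
    assumption about the file layout, not a documented guarantee.
    """
    if isinstance(path_or_lines, str):
        with open(path_or_lines, encoding="utf-8") as f:
            lines = f.read().splitlines()
    else:
        lines = list(path_or_lines)
    return {i: name.strip() for i, name in enumerate(lines) if name.strip()}


def with_dataset_name(example, id_to_name):
    """Attach a human-readable dataset name to one example dict."""
    return {**example, "dataset_name": id_to_name.get(example["dataset_id"], "unknown")}
```

A helper like `with_dataset_name` can be passed to `datasets.Dataset.map` to annotate every cell with its source dataset.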
The13thDrifter/Cayde-6_DATASET
--- license: cc-by-3.0 ---
AdapterOcean/data-standardized_cluster_4
--- dataset_info: features: - name: text dtype: string - name: conversation_id dtype: int64 - name: embedding sequence: float64 - name: cluster dtype: int64 splits: - name: train num_bytes: 46206021 num_examples: 4509 download_size: 13041751 dataset_size: 46206021 configs: - config_name: default data_files: - split: train path: data/train-* --- # Dataset Card for "data-standardized_cluster_4" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
CyberHarem/iris_pokemon
--- license: mit task_categories: - text-to-image tags: - art - not-for-all-audiences size_categories: - n<1K --- # Dataset of iris (Pokémon) This is the dataset of iris (Pokémon), containing 500 images and their tags. The core tags of this character are `dark-skinned_female, dark_skin, long_hair, purple_hair, bangs, big_hair, brown_eyes, very_long_hair, two_side_up, breasts, eyelashes`, which are pruned in this dataset. Images are crawled from many sites (e.g. danbooru, pixiv, zerochan, ...); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)). ## List of Packages | Name | Images | Size | Download | Type | Description | |:-----------------|---------:|:-----------|:--------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------| | raw | 500 | 426.84 MiB | [Download](https://huggingface.co/datasets/CyberHarem/iris_pokemon/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). | | 800 | 500 | 272.18 MiB | [Download](https://huggingface.co/datasets/CyberHarem/iris_pokemon/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. | | stage3-p480-800 | 997 | 509.89 MiB | [Download](https://huggingface.co/datasets/CyberHarem/iris_pokemon/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. | | 1200 | 500 | 387.63 MiB | [Download](https://huggingface.co/datasets/CyberHarem/iris_pokemon/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. 
| | stage3-p480-1200 | 997 | 679.62 MiB | [Download](https://huggingface.co/datasets/CyberHarem/iris_pokemon/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. | ### Load Raw Dataset with Waifuc We provide the raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need it, run the following code: ```python import os import zipfile from huggingface_hub import hf_hub_download from waifuc.source import LocalSource # download raw archive file zip_file = hf_hub_download( repo_id='CyberHarem/iris_pokemon', repo_type='dataset', filename='dataset-raw.zip', ) # extract files to your directory dataset_dir = 'dataset_dir' os.makedirs(dataset_dir, exist_ok=True) with zipfile.ZipFile(zip_file, 'r') as zf: zf.extractall(dataset_dir) # load the dataset with waifuc source = LocalSource(dataset_dir) for item in source: print(item.image, item.meta['filename'], item.meta['tags']) ``` ## List of Clusters List of tag clustering results; some outfits may be mined here. 
### Raw Text Version | # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags | |----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | 0 | 12 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | 1girl, solo, open_mouth, waist_bow, dress, smile, blush, crown | | 1 | 16 | ![](samples/1/clu1-sample0.png) | ![](samples/1/clu1-sample1.png) | ![](samples/1/clu1-sample2.png) | ![](samples/1/clu1-sample3.png) | ![](samples/1/clu1-sample4.png) | 1girl, :d, open_mouth, tongue, dress, hair_rings, long_sleeves, tiara, upper_teeth_only, wide_sleeves, looking_at_viewer, bow, blush, sandals, solo, pokemon_(creature), red_eyes, toes, white_footwear, collarbone, spread_fingers | | 2 | 6 | ![](samples/2/clu2-sample0.png) | ![](samples/2/clu2-sample1.png) | ![](samples/2/clu2-sample2.png) | ![](samples/2/clu2-sample3.png) | ![](samples/2/clu2-sample4.png) | 1girl, :d, armlet, black_dress, fake_horns, official_alternate_costume, open_mouth, tongue, twintails, upper_teeth_only, wrist_cuffs, bare_shoulders, black_hairband, claw_pose, hair_rings, hands_up, looking_at_viewer, red_eyes, sleeveless_dress, solo, blush, fake_wings, halloween | | 3 | 6 | ![](samples/3/clu3-sample0.png) | ![](samples/3/clu3-sample1.png) | ![](samples/3/clu3-sample2.png) | ![](samples/3/clu3-sample3.png) | ![](samples/3/clu3-sample4.png) | 1girl, :d, collarbone, fake_horns, fangs, hair_rings, official_alternate_costume, open_mouth, tongue, black_hairband, twintails, wrist_cuffs, 
armlet, bare_shoulders, black_dress, blush, looking_at_viewer, solo, wings, hands_up, upper_teeth_only | | 4 | 22 | ![](samples/4/clu4-sample0.png) | ![](samples/4/clu4-sample1.png) | ![](samples/4/clu4-sample2.png) | ![](samples/4/clu4-sample3.png) | ![](samples/4/clu4-sample4.png) | 1girl, nipples, nude, blush, solo, open_mouth, collarbone, looking_at_viewer, navel, pussy, small_breasts, tongue, :d, barefoot, light_areolae, shiny_skin, censored, simple_background, white_background | | 5 | 11 | ![](samples/5/clu5-sample0.png) | ![](samples/5/clu5-sample1.png) | ![](samples/5/clu5-sample2.png) | ![](samples/5/clu5-sample3.png) | ![](samples/5/clu5-sample4.png) | 1girl, hetero, nipples, nude, blush, 1boy, sex, small_breasts, vaginal, penis, pussy, solo_focus, pokemon_(creature), red_eyes, uncensored, bestiality, navel, spread_legs, open_mouth, pokephilia, :q, closed_mouth, collarbone, hair_tie, loli, looking_down, smile | ### Table Version | # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | solo | open_mouth | waist_bow | dress | smile | blush | crown | :d | tongue | hair_rings | long_sleeves | tiara | upper_teeth_only | wide_sleeves | looking_at_viewer | bow | sandals | pokemon_(creature) | red_eyes | toes | white_footwear | collarbone | spread_fingers | armlet | black_dress | fake_horns | official_alternate_costume | twintails | wrist_cuffs | bare_shoulders | black_hairband | claw_pose | hands_up | sleeveless_dress | fake_wings | halloween | fangs | wings | nipples | nude | navel | pussy | small_breasts | barefoot | light_areolae | shiny_skin | censored | simple_background | white_background | hetero | 1boy | sex | vaginal | penis | solo_focus | uncensored | bestiality | spread_legs | pokephilia | :q | closed_mouth | hair_tie | loli | looking_down | 
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:-------|:-------------|:------------|:--------|:--------|:--------|:--------|:-----|:---------|:-------------|:---------------|:--------|:-------------------|:---------------|:--------------------|:------|:----------|:---------------------|:-----------|:-------|:-----------------|:-------------|:-----------------|:---------|:--------------|:-------------|:-----------------------------|:------------|:--------------|:-----------------|:-----------------|:------------|:-----------|:-------------------|:-------------|:------------|:--------|:--------|:----------|:-------|:--------|:--------|:----------------|:-----------|:----------------|:-------------|:-----------|:--------------------|:-------------------|:---------|:-------|:------|:----------|:--------|:-------------|:-------------|:-------------|:--------------|:-------------|:-----|:---------------|:-----------|:-------|:---------------| | 0 | 12 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | 1 | 16 | ![](samples/1/clu1-sample0.png) | ![](samples/1/clu1-sample1.png) | ![](samples/1/clu1-sample2.png) | ![](samples/1/clu1-sample3.png) | ![](samples/1/clu1-sample4.png) | X | X | X | | X | | X | | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | 2 | 6 | ![](samples/2/clu2-sample0.png) | ![](samples/2/clu2-sample1.png) | ![](samples/2/clu2-sample2.png) | ![](samples/2/clu2-sample3.png) | ![](samples/2/clu2-sample4.png) | X | X | X | | | | X | | X | X | X | 
| | X | | X | | | | X | | | | | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | 3 | 6 | ![](samples/3/clu3-sample0.png) | ![](samples/3/clu3-sample1.png) | ![](samples/3/clu3-sample2.png) | ![](samples/3/clu3-sample3.png) | ![](samples/3/clu3-sample4.png) | X | X | X | | | | X | | X | X | X | | | X | | X | | | | | | | X | | X | X | X | X | X | X | X | X | | X | | | | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | 4 | 22 | ![](samples/4/clu4-sample0.png) | ![](samples/4/clu4-sample1.png) | ![](samples/4/clu4-sample2.png) | ![](samples/4/clu4-sample3.png) | ![](samples/4/clu4-sample4.png) | X | X | X | | | | X | | X | X | | | | | | X | | | | | | | X | | | | | | | | | | | | | | | | | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | 5 | 11 | ![](samples/5/clu5-sample0.png) | ![](samples/5/clu5-sample1.png) | ![](samples/5/clu5-sample2.png) | ![](samples/5/clu5-sample3.png) | ![](samples/5/clu5-sample4.png) | X | | X | | | X | X | | | | | | | | | | | | X | X | | | X | | | | | | | | | | | | | | | | | X | X | X | X | X | | | | | | | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X |
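The Table Version above is simply a tag-presence matrix over the clusters from the Raw Text Version. How such a table can be derived from the raw per-cluster tag lists can be sketched as:

```python
def tag_presence_table(clusters):
    """Turn per-cluster tag lists into a presence matrix.

    `clusters` is a list of tag lists, one per cluster. Returns the
    column order (tags in first-seen order, as in the card's table) and
    one row per cluster with 'X' where the cluster carries the tag.
    """
    columns = []
    for tags in clusters:
        for tag in tags:
            if tag not in columns:
                columns.append(tag)
    rows = [["X" if col in set(tags) else "" for col in columns] for tags in clusters]
    return columns, rows
```

This mirrors the layout shown: shared tags such as `1girl` end up in early columns, while cluster-specific tags trail off to the right.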
arthurmluz/GPTextSum2_data-xlsum_temario_results
--- dataset_info: features: - name: id dtype: int64 - name: text dtype: string - name: summary dtype: string - name: gen_summary dtype: string - name: rouge struct: - name: rouge1 dtype: float64 - name: rouge2 dtype: float64 - name: rougeL dtype: float64 - name: rougeLsum dtype: float64 - name: bert struct: - name: f1 sequence: float64 - name: hashcode dtype: string - name: precision sequence: float64 - name: recall sequence: float64 - name: moverScore dtype: float64 splits: - name: validation num_bytes: 92617 num_examples: 20 download_size: 93095 dataset_size: 92617 configs: - config_name: default data_files: - split: validation path: data/validation-* --- # Dataset Card for "gptextsum2_data-xlsum_temario_results"

Corpus-level evaluation scores:

- ROUGE: `{'rouge1': 0.34663019021874986, 'rouge2': 0.14819749362220133, 'rougeL': 0.21196170584218882, 'rougeLsum': 0.21196170584218882}`
- BERTScore: `{'precision': 0.7504127502441407, 'recall': 0.6941693127155304, 'f1': 0.720111683011055}`
- MoverScore: `0.5858268677961962`
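Each example's `bert` struct stores `precision`/`recall`/`f1` sequences, and the headline BERTScore numbers are plausibly plain means over the 20 validation examples (an assumption, not stated on the card). A minimal aggregation sketch:

```python
def mean(xs):
    return sum(xs) / len(xs)


def aggregate_bertscore(examples):
    """Average per-example BERTScore fields into corpus-level scores.

    Each example carries a 'bert' struct whose 'precision', 'recall' and
    'f1' fields are sequences (one value per scored summary). Assumes the
    corpus scores are unweighted means over all values in the split.
    """
    return {
        key: mean([v for ex in examples for v in ex["bert"][key]])
        for key in ("precision", "recall", "f1")
    }
```

The same pattern works for the scalar `rouge` struct fields, with `ex["rouge"][key]` in place of the inner sequence.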
MottsCoding/MeltpoolsLabeled
--- dataset_info: features: - name: images dtype: image - name: labels sequence: sequence: int32 splits: - name: train num_bytes: 51624539.0 num_examples: 12 download_size: 15662464 dataset_size: 51624539.0 --- # Dataset Card for "MeltpoolsLabeled" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
jordanfungtc/cnntorsion
--- size_categories: - 10K<n<100K ---
adalbertojunior/ICD_dataset
--- configs: - config_name: default data_files: - split: train path: data/train-* - split: test path: data/test-* - split: validation path: data/validation-* dataset_info: features: - name: text dtype: string - name: label sequence: string splits: - name: train num_bytes: 418410601 num_examples: 39354 - name: test num_bytes: 53529100 num_examples: 5000 - name: validation num_bytes: 52947510 num_examples: 5000 download_size: 301971173 dataset_size: 524887211 --- # Dataset Card for "ICD_dataset" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
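Since each example's `label` is a sequence of strings, ICD coding here is naturally a multi-label task. A minimal multi-hot encoding sketch (the vocabulary is built from the data, as the card does not list the code set; the codes in the test are hypothetical):

```python
def build_label_vocab(examples):
    """Collect every ICD code seen in the 'label' sequences and assign
    each a stable (sorted) index."""
    codes = sorted({code for ex in examples for code in ex["label"]})
    return {code: i for i, code in enumerate(codes)}


def multi_hot(labels, vocab):
    """Encode one example's code list as a multi-hot vector over vocab."""
    vec = [0] * len(vocab)
    for code in labels:
        vec[vocab[code]] = 1
    return vec
```

In practice the vocabulary should be built from the train split only, so unseen test codes can be handled explicitly.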
liuyanchen1015/MULTI_VALUE_mnli_after_perfect
--- dataset_info: features: - name: premise dtype: string - name: hypothesis dtype: string - name: label dtype: int64 - name: idx dtype: int64 - name: score dtype: int64 splits: - name: dev_matched num_bytes: 239959 num_examples: 1035 - name: dev_mismatched num_bytes: 273238 num_examples: 1082 - name: test_matched num_bytes: 262731 num_examples: 1038 - name: test_mismatched num_bytes: 277346 num_examples: 1143 - name: train num_bytes: 10108342 num_examples: 41417 download_size: 6773376 dataset_size: 11161616 --- # Dataset Card for "MULTI_VALUE_mnli_after_perfect" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
Hemg/Emotion-audio-Dataset
--- dataset_info: features: - name: audio dtype: audio - name: label dtype: class_label: names: '0': Angry '1': Disgusted '2': Fearful '3': Happy '4': Neutral '5': Sad '6': Suprised splits: - name: train num_bytes: 2836512748.06 num_examples: 12798 download_size: 1577902101 dataset_size: 2836512748.06 configs: - config_name: default data_files: - split: train path: data/train-* ---
liaad/translation_sample
--- dataset_info: - config_name: ai2_arc features: - name: question dtype: string - name: question_translated struct: - name: Helsinki-NLP/opus-mt-tc-big-en-pt dtype: string - name: google_translation dtype: string - name: libre_translation dtype: string - name: choices sequence: string - name: choices_translated list: - name: Helsinki-NLP/opus-mt-tc-big-en-pt dtype: string - name: google_translation dtype: string - name: libre_translation dtype: string splits: - name: test num_bytes: 713 num_examples: 1 download_size: 7660 dataset_size: 713 - config_name: boolq features: - name: question dtype: string - name: question_translated struct: - name: Helsinki-NLP/opus-mt-tc-big-en-pt dtype: string - name: google_translation dtype: string - name: libre_translation dtype: string - name: passage dtype: string - name: passage_translated struct: - name: Helsinki-NLP/opus-mt-tc-big-en-pt dtype: string - name: google_translation dtype: string - name: libre_translation dtype: string splits: - name: test num_bytes: 1338 num_examples: 1 download_size: 13729 dataset_size: 1338 - config_name: gsm8k features: - name: question dtype: string - name: question_translated struct: - name: Helsinki-NLP/opus-mt-tc-big-en-pt dtype: string - name: google_translation dtype: string - name: libre_translation dtype: string - name: answer dtype: string - name: answer_translated struct: - name: Helsinki-NLP/opus-mt-tc-big-en-pt dtype: string - name: google_translation dtype: string - name: libre_translation dtype: string splits: - name: test num_bytes: 2249 num_examples: 1 download_size: 19759 dataset_size: 2249 - config_name: hellaswag features: - name: activity_label dtype: string - name: activity_label_translated struct: - name: Helsinki-NLP/opus-mt-tc-big-en-pt dtype: string - name: google_translation dtype: string - name: libre_translation dtype: string - name: ctx dtype: string - name: ctx_translated struct: - name: Helsinki-NLP/opus-mt-tc-big-en-pt dtype: string - name: google_translation 
dtype: string - name: libre_translation dtype: string - name: endings sequence: string - name: endings_translated list: - name: Helsinki-NLP/opus-mt-tc-big-en-pt dtype: string - name: google_translation dtype: string - name: libre_translation dtype: string splits: - name: test num_bytes: 3111 num_examples: 1 download_size: 17613 dataset_size: 3111 - config_name: mbpp features: - name: text dtype: string - name: text_translated struct: - name: Helsinki-NLP/opus-mt-tc-big-en-pt dtype: string - name: google_translation dtype: string - name: libre_translation dtype: string splits: - name: test num_bytes: 358 num_examples: 1 download_size: 4822 dataset_size: 358 - config_name: natural_questions_parsed features: - name: document dtype: string - name: document_translated struct: - name: Helsinki-NLP/opus-mt-tc-big-en-pt dtype: string - name: google_translation dtype: string - name: libre_translation dtype: string - name: question dtype: string - name: question_translated struct: - name: Helsinki-NLP/opus-mt-tc-big-en-pt dtype: string - name: google_translation dtype: string - name: libre_translation dtype: string - name: candidates sequence: string - name: candidates_translated list: - name: Helsinki-NLP/opus-mt-tc-big-en-pt dtype: string - name: google_translation dtype: string - name: libre_translation dtype: string - name: long_answer dtype: string - name: long_answer_translated struct: - name: Helsinki-NLP/opus-mt-tc-big-en-pt dtype: string - name: google_translation dtype: string - name: libre_translation dtype: string splits: - name: test num_bytes: 5399 num_examples: 1 download_size: 38881 dataset_size: 5399 - config_name: openbookqa features: - name: question_stem dtype: string - name: question_stem_translated struct: - name: Helsinki-NLP/opus-mt-tc-big-en-pt dtype: string - name: google_translation dtype: string - name: libre_translation dtype: string - name: choices sequence: string - name: choices_translated list: - name: Helsinki-NLP/opus-mt-tc-big-en-pt 
dtype: string - name: google_translation dtype: string - name: libre_translation dtype: string - name: fact1 dtype: string - name: fact1_translated struct: - name: Helsinki-NLP/opus-mt-tc-big-en-pt dtype: string - name: google_translation dtype: string - name: libre_translation dtype: string splits: - name: test num_bytes: 776 num_examples: 1 download_size: 10475 dataset_size: 776 - config_name: quac features: - name: background dtype: string - name: background_translated struct: - name: Helsinki-NLP/opus-mt-tc-big-en-pt dtype: string - name: google_translation dtype: string - name: libre_translation dtype: string - name: context dtype: string - name: context_translated struct: - name: Helsinki-NLP/opus-mt-tc-big-en-pt dtype: string - name: google_translation dtype: string - name: libre_translation dtype: string - name: questions sequence: string - name: questions_translated list: - name: Helsinki-NLP/opus-mt-tc-big-en-pt dtype: string - name: google_translation dtype: string - name: libre_translation dtype: string - name: orig_answers sequence: string - name: orig_answers_translated list: - name: Helsinki-NLP/opus-mt-tc-big-en-pt dtype: string - name: google_translation dtype: string - name: libre_translation dtype: string splits: - name: test num_bytes: 11166 num_examples: 1 download_size: 76251 dataset_size: 11166 - config_name: social_i_qa features: - name: context dtype: string - name: context_translated struct: - name: Helsinki-NLP/opus-mt-tc-big-en-pt dtype: string - name: google_translation dtype: string - name: libre_translation dtype: string - name: question dtype: string - name: question_translated struct: - name: Helsinki-NLP/opus-mt-tc-big-en-pt dtype: string - name: google_translation dtype: string - name: libre_translation dtype: string - name: answerA dtype: string - name: answerA_translated struct: - name: Helsinki-NLP/opus-mt-tc-big-en-pt dtype: string - name: google_translation dtype: string - name: libre_translation dtype: string - name: answerB 
dtype: string - name: answerB_translated struct: - name: Helsinki-NLP/opus-mt-tc-big-en-pt dtype: string - name: google_translation dtype: string - name: libre_translation dtype: string - name: answerC dtype: string - name: answerC_translated struct: - name: Helsinki-NLP/opus-mt-tc-big-en-pt dtype: string - name: google_translation dtype: string - name: libre_translation dtype: string splits: - name: test num_bytes: 677 num_examples: 1 download_size: 15127 dataset_size: 677 - config_name: squad_v1_pt features: - name: context dtype: string - name: context_translated struct: - name: Helsinki-NLP/opus-mt-tc-big-en-pt dtype: string - name: google_translation dtype: string - name: libre_translation dtype: string - name: question dtype: string - name: question_translated struct: - name: Helsinki-NLP/opus-mt-tc-big-en-pt dtype: string - name: google_translation dtype: string - name: libre_translation dtype: string - name: answers sequence: string - name: answers_translated list: - name: Helsinki-NLP/opus-mt-tc-big-en-pt dtype: string - name: google_translation dtype: string - name: libre_translation dtype: string splits: - name: test num_bytes: 1587 num_examples: 1 download_size: 17739 dataset_size: 1587 - config_name: trivia_qa features: - name: question dtype: string - name: question_translated struct: - name: Helsinki-NLP/opus-mt-tc-big-en-pt dtype: string - name: google_translation dtype: string - name: libre_translation dtype: string - name: search_results_search_context sequence: string - name: search_results_search_context_translated list: - name: Helsinki-NLP/opus-mt-tc-big-en-pt dtype: string - name: google_translation dtype: string - name: libre_translation dtype: string - name: answer_value dtype: string - name: answer_value_translated struct: - name: Helsinki-NLP/opus-mt-tc-big-en-pt dtype: string - name: google_translation dtype: string - name: libre_translation dtype: string splits: - name: test num_bytes: 1154 num_examples: 1 download_size: 15177 
dataset_size: 1154 - config_name: winogrande features: - name: sentence dtype: string - name: sentence_translated struct: - name: Helsinki-NLP/opus-mt-tc-big-en-pt dtype: string - name: google_translation dtype: string - name: libre_translation dtype: string - name: option1 dtype: string - name: option1_translated struct: - name: Helsinki-NLP/opus-mt-tc-big-en-pt dtype: string - name: google_translation dtype: string - name: libre_translation dtype: string - name: option2 dtype: string - name: option2_translated struct: - name: Helsinki-NLP/opus-mt-tc-big-en-pt dtype: string - name: google_translation dtype: string - name: libre_translation dtype: string splits: - name: test num_bytes: 677 num_examples: 1 download_size: 11112 dataset_size: 677 configs: - config_name: ai2_arc data_files: - split: test path: ai2_arc/test-* - config_name: boolq data_files: - split: test path: boolq/test-* - config_name: gsm8k data_files: - split: test path: gsm8k/test-* - config_name: hellaswag data_files: - split: test path: hellaswag/test-* - config_name: mbpp data_files: - split: test path: mbpp/test-* - config_name: natural_questions_parsed data_files: - split: test path: natural_questions_parsed/test-* - config_name: openbookqa data_files: - split: test path: openbookqa/test-* - config_name: quac data_files: - split: test path: quac/test-* - config_name: social_i_qa data_files: - split: test path: social_i_qa/test-* - config_name: squad_v1_pt data_files: - split: test path: squad_v1_pt/test-* - config_name: trivia_qa data_files: - split: test path: trivia_qa/test-* - config_name: winogrande data_files: - split: test path: winogrande/test-* ---
w11wo/imdb-javanese
--- annotations_creators: - found language_creators: - machine-generated language: - jv license: - odbl multilinguality: - monolingual size_categories: - 10K<n<100K source_datasets: - original task_categories: - text-classification task_ids: - sentiment-classification extended: - original --- # Dataset Card for "imdb-javanese" ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits Sample Size](#data-splits-sample-size) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) ## Dataset Description - **Homepage:** [Github](https://github.com/w11wo/nlp-datasets#javanese-imdb) - **Repository:** [Github](https://github.com/w11wo/nlp-datasets#javanese-imdb) - **Paper:** [Aclweb](http://www.aclweb.org/anthology/P11-1015) - **Point of Contact:** [Wilson Wongso](https://github.com/w11wo) - **Size of downloaded dataset files:** 17.0 MB - **Size of the generated dataset:** 47.5 MB - **Total amount of disk used:** 64.5 MB ### Dataset Summary Large Movie Review Dataset translated to Javanese. This is a dataset for binary sentiment classification containing substantially more data than previous benchmark datasets. 
We provide a set of 25,000 highly polar movie reviews for training, and 25,000 for testing. There is additional unlabeled data for use as well. We translated the [original IMDB Dataset](https://huggingface.co/datasets/imdb) to Javanese using the multi-lingual MarianMT Transformer model from [`Helsinki-NLP/opus-mt-en-mul`](https://huggingface.co/Helsinki-NLP/opus-mt-en-mul). ### Supported Tasks and Leaderboards [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Languages [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Dataset Structure We show detailed information for up to 5 configurations of the dataset. ### Data Instances An example of `javanese_imdb_train.csv` looks as follows. | label | text | | ----- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | | 1 | "Drama romantik sing digawé karo direktur Martin Ritt kuwi ora dingertèni, nanging ana momen-momen sing marahi karisma lintang Jane Fonda lan Robert De Niro (kelompok sing luar biasa). Dhèwèké dadi randha sing ora isa mlaku, iso anu anyar lan anyar-inventor-- kowé isa nganggep isiné. Adapsi novel Pat Barker ""Union Street"" (yak titel sing apik!) arep dinggo-back-back it on bland, lan pendidikan film kuwi gampang, nanging isih nyenengké; a rosy-hued-inventor-fantasi. 
Ora ana sing ngganggu gambar sing sejati ding kok iso dinggo nggawe gambar sing paling nyeneng." | | 0 | "Pengalaman wong lanang sing nduwé perasaan sing ora lumrah kanggo babi. Mulai nganggo tuladha sing luar biasa yaiku komedia. Wong orkestra termel digawé dadi wong gila, sing kasar merga nyanyian nyanyi. Sayangé, kuwi tetep absurd wektu WHOLE tanpa ceramah umum sing mung digawé. Malah, sing ana ing jaman kuwi kudu ditinggalké. Diyalog kryptik sing nggawé Shakespeare marah gampang kanggo kelas telu. Pak teknis kuwi luwih apik timbang kowe mikir nganggo cinematografi sing apik sing jenengé Vilmos Zsmond. Masa depan bintang Saly Kirkland lan Frederic Forrest isa ndelok." | ### Data Fields - `text`: The movie review translated into Javanese. - `label`: The sentiment exhibited in the review, either `1` (positive) or `0` (negative). ### Data Splits Sample Size | train | unsupervised | test | | ----: | -----------: | ----: | | 25000 | 50000 | 25000 | ## Dataset Creation ### Curation Rationale [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Source Data #### Initial Data Collection and Normalization [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### Who are the source language producers? [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Annotations #### Annotation process [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### Who are the annotators? 
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Personal and Sensitive Information [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Discussion of Biases [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Other Known Limitations [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Additional Information ### Dataset Curators [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Licensing Information [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Citation Information If you use this dataset in your research, please cite: ``` @inproceedings{wongso2021causal, title={Causal and Masked Language Modeling of Javanese Language using Transformer-based Architectures}, author={Wongso, Wilson and Setiawan, David Samuel and Suhartono, Derwin}, booktitle={2021 International Conference on Advanced Computer Science and Information Systems (ICACSIS)}, pages={1--7}, year={2021}, organization={IEEE} } ``` ``` @InProceedings{maas-EtAl:2011:ACL-HLT2011, author = {Maas, Andrew L. and Daly, Raymond E. and Pham, Peter T. and Huang, Dan and Ng, Andrew Y. 
and Potts, Christopher}, title = {Learning Word Vectors for Sentiment Analysis}, booktitle = {Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies}, month = {June}, year = {2011}, address = {Portland, Oregon, USA}, publisher = {Association for Computational Linguistics}, pages = {142--150}, url = {http://www.aclweb.org/anthology/P11-1015} } ```
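The card above states that the English reviews were translated with the multilingual `Helsinki-NLP/opus-mt-en-mul` checkpoint. A minimal sketch of that translation step is below; the `>>jav<<` target-language token convention and the batching are assumptions about how such multilingual Marian checkpoints are typically invoked, not documented details of how this dataset was built.

```python
from typing import List

def add_target_token(texts: List[str], target_lang: str = "jav") -> List[str]:
    # Multilingual Marian checkpoints (one encoder, many target languages)
    # select the output language via a >>code<< token prefixed to the source.
    return [f">>{target_lang}<< {t}" for t in texts]

def translate(texts: List[str], target_lang: str = "jav") -> List[str]:
    # Heavy step kept inside the function: requires `transformers` and a
    # model download, so importing this module stays cheap.
    from transformers import MarianMTModel, MarianTokenizer
    name = "Helsinki-NLP/opus-mt-en-mul"
    tokenizer = MarianTokenizer.from_pretrained(name)
    model = MarianMTModel.from_pretrained(name)
    batch = tokenizer(add_target_token(texts, target_lang),
                      return_tensors="pt", padding=True, truncation=True)
    out = model.generate(**batch)
    return [tokenizer.decode(seq, skip_special_tokens=True) for seq in out]
```

`translate(["This movie was wonderful."])` would then return a list with one Javanese sentence, assuming the model and `transformers` are available locally.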
Tngarg/hindi_test
--- dataset_info: features: - name: 'Unnamed: 0' dtype: int64 - name: text dtype: string - name: sentiment dtype: string - name: label dtype: int64 - name: __index_level_0__ dtype: int64 splits: - name: train num_bytes: 81183 num_examples: 554 download_size: 53504 dataset_size: 81183 configs: - config_name: default data_files: - split: train path: data/train-* --- # Dataset Card for "hindi_test" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
distilled-one-sec-cv12-each-chunk-uniq/chunk_162
--- dataset_info: features: - name: logits sequence: float32 - name: mfcc sequence: sequence: float64 splits: - name: train num_bytes: 1154530644.0 num_examples: 224967 download_size: 1183013324 dataset_size: 1154530644.0 --- # Dataset Card for "chunk_162" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
bkueper/test
--- language: - de ---
davanstrien/test_imdb_embedd2
--- annotations_creators: - expert-generated language_creators: - expert-generated language: - en license: - other multilinguality: - monolingual size_categories: - 10K<n<100K source_datasets: imdb task_categories: - text-classification task_ids: - sentiment-classification paperswithcode_id: imdb-movie-reviews pretty_name: IMDB dataset_info: features: - name: text dtype: string - name: label dtype: class_label: names: 0: neg 1: pos config_name: plain_text splits: - name: train num_bytes: 33432835 num_examples: 25000 - name: test num_bytes: 32650697 num_examples: 25000 - name: unsupervised num_bytes: 67106814 num_examples: 50000 download_size: 84125825 dataset_size: 133190346 train-eval-index: - config: plain_text task: text-classification task_id: binary_classification splits: train_split: train eval_split: test col_mapping: text: text label: target metrics: - type: accuracy - name: Accuracy - type: f1 name: F1 macro args: average: macro - type: f1 name: F1 micro args: average: micro - type: f1 name: F1 weighted args: average: weighted - type: precision name: Precision macro args: average: macro - type: precision name: Precision micro args: average: micro - type: precision name: Precision weighted args: average: weighted - type: recall name: Recall macro args: average: macro - type: recall name: Recall micro args: average: micro - type: recall name: Recall weighted args: average: weighted --- # Dataset Card for "test_imdb_embedd2" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
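The `train-eval-index` block above asks evaluation to report F1 under `macro`, `micro`, and `weighted` averaging. A small pure-Python sketch of how those three averages differ (toy labels only; in practice a library such as scikit-learn would be used):

```python
from collections import Counter

def per_class_f1(y_true, y_pred, label):
    # F1 for one class, treated one-vs-rest.
    tp = sum(t == label and p == label for t, p in zip(y_true, y_pred))
    fp = sum(t != label and p == label for t, p in zip(y_true, y_pred))
    fn = sum(t == label and p != label for t, p in zip(y_true, y_pred))
    return 2 * tp / (2 * tp + fp + fn) if tp else 0.0

def f1(y_true, y_pred, average="macro"):
    labels = sorted(set(y_true))
    scores = [per_class_f1(y_true, y_pred, l) for l in labels]
    if average == "macro":      # unweighted mean over classes
        return sum(scores) / len(scores)
    if average == "weighted":   # mean weighted by class support
        support = Counter(y_true)
        n = len(y_true)
        return sum(s * support[l] / n for s, l in zip(scores, labels))
    # micro: pool all decisions; for single-label tasks this equals accuracy
    correct = sum(t == p for t, p in zip(y_true, y_pred))
    return correct / len(y_true)

y_true = [0, 0, 1, 1]
y_pred = [0, 1, 1, 1]
```

On these toy labels, macro-F1 averages the per-class scores (2/3 for `neg`, 0.8 for `pos`), while micro-F1 collapses to plain accuracy (0.75).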
ibranze/araproje_arc_tr_s3
--- dataset_info: features: - name: id dtype: string - name: question dtype: string - name: choices sequence: - name: text dtype: string - name: label dtype: string - name: answerKey dtype: string splits: - name: validation num_bytes: 86423.0 num_examples: 250 download_size: 46973 dataset_size: 86423.0 configs: - config_name: default data_files: - split: validation path: data/validation-* --- # Dataset Card for "araproje_arc_tr_s3" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
SEACrowd/id_panl_bppt
--- tags: - machine-translation language: - ind --- # id_panl_bppt A parallel text corpus for multi-domain translation systems, created by BPPT (Indonesian Agency for the Assessment and Application of Technology) for the PAN Localization Project (a regional initiative to develop local-language computing capacity in Asia). The dataset contains about 24K sentence pairs in English and Bahasa Indonesia covering 4 topics (Economy, International Affairs, Science & Technology, and Sports). ## Dataset Usage Run `pip install nusacrowd` before loading the dataset through HuggingFace's `load_dataset`. ## Citation ``` @inproceedings{id_panl_bppt, author = {PAN Localization - BPPT}, title = {Parallel Text Corpora, English Indonesian}, year = {2009}, url = {http://digilib.bppt.go.id/sampul/p92-budiono.pdf}, } ``` ## Homepage [http://digilib.bppt.go.id/sampul/p92-budiono.pdf](http://digilib.bppt.go.id/sampul/p92-budiono.pdf) ### NusaCatalogue For easy indexing and metadata: [https://indonlp.github.io/nusa-catalogue](https://indonlp.github.io/nusa-catalogue)
ruanchaves/reli-sa_por_Latn_to_glg_Latn
--- dataset_info: features: - name: source dtype: string - name: title dtype: string - name: book dtype: string - name: review_id dtype: string - name: score dtype: float64 - name: sentence_id dtype: int64 - name: unique_review_id dtype: string - name: sentence dtype: string - name: label dtype: string splits: - name: train num_bytes: 1776947 num_examples: 7875 - name: validation num_bytes: 313722 num_examples: 1348 - name: test num_bytes: 652065 num_examples: 3288 download_size: 0 dataset_size: 2742734 --- # Dataset Card for "reli-sa_por_Latn_to_glg_Latn" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
Sonrin/Thorneworks
--- license: artistic-2.0 ---
youngwoo3283/summarize_llama_20k
--- size_categories: - 10K<n<100K ---
Kasuzu/522
--- license: unknown ---
itisarainyday/phys_gre_question
--- dataset_info: features: - name: '0' dtype: string splits: - name: train num_bytes: 466194 num_examples: 395 - name: validation num_bytes: 5448 num_examples: 5 download_size: 135395 dataset_size: 471642 configs: - config_name: default data_files: - split: train path: data/train-* - split: validation path: data/validation-* ---
augsaksham/small_train
--- dataset_info: features: - name: PII dtype: string - name: TOOL dtype: string - name: full_text dtype: string - name: document dtype: int64 - name: is_valid dtype: bool splits: - name: train num_bytes: 36712 num_examples: 9 - name: validation num_bytes: 6082 num_examples: 1 download_size: 42287 dataset_size: 42794 configs: - config_name: default data_files: - split: train path: data/train-* - split: validation path: data/validation-* ---
heliosprime/twitter_dataset_1713017877
--- dataset_info: features: - name: id dtype: string - name: tweet_content dtype: string - name: user_name dtype: string - name: user_id dtype: string - name: created_at dtype: string - name: url dtype: string - name: favourite_count dtype: int64 - name: scraped_at dtype: string - name: image_urls dtype: string splits: - name: train num_bytes: 10831 num_examples: 25 download_size: 9459 dataset_size: 10831 configs: - config_name: default data_files: - split: train path: data/train-* --- # Dataset Card for "twitter_dataset_1713017877" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
mpazdzioch/oasst1_pl
--- dataset_info: features: - name: message_id dtype: string - name: parent_id dtype: string - name: user_id dtype: string - name: created_date dtype: string - name: text dtype: string - name: role dtype: string - name: lang dtype: string - name: review_count dtype: int64 - name: review_result dtype: bool - name: deleted dtype: bool - name: rank dtype: float64 - name: synthetic dtype: bool - name: model_name dtype: 'null' - name: detoxify struct: - name: identity_attack dtype: float64 - name: insult dtype: float64 - name: obscene dtype: float64 - name: severe_toxicity dtype: float64 - name: sexual_explicit dtype: float64 - name: threat dtype: float64 - name: toxicity dtype: float64 - name: message_tree_id dtype: string - name: tree_state dtype: string - name: emojis struct: - name: count sequence: int64 - name: name sequence: string - name: labels struct: - name: count sequence: int64 - name: name sequence: string - name: value sequence: float64 splits: - name: validation num_bytes: 1081868 num_examples: 1481 - name: train num_bytes: 65002941 num_examples: 77374 download_size: 19411941 dataset_size: 66084809 configs: - config_name: default data_files: - split: validation path: data/validation-* - split: train path: data/train-* ---
Maciel/FinCUGE-Instruction
--- license: apache-2.0 dataset_info: features: - name: task dtype: string - name: desc dtype: string - name: instruction dtype: string - name: input dtype: string - name: output dtype: string splits: - name: train num_bytes: 62215078 num_examples: 123137 - name: eval num_bytes: 7548859 num_examples: 15167 download_size: 32078572 dataset_size: 69763937 task_categories: - question-answering language: - zh tags: - finance pretty_name: s size_categories: - 100K<n<1M --- # Dataset Card for FinCUGE-Instruction ## Dataset Description This dataset contains eight Chinese financial NLP benchmark tasks: financial news summarization (FinNA), financial news and announcement event question answering (FinQA), financial news classification (FinNL), financial news relation extraction (FinRE), financial social media sentiment classification (FinFE), financial negative news and subject detection (FinNSP), financial causal event extraction (FinCQA), and financial event subject extraction (FinESE). [Data source](https://github.com/ssymmetry/BBT-FinCUGE-Applications) ## Dataset Structure (1) FinNA: financial news summarization. Given a piece of financial news, the model must generate a one-sentence summary. The training set contains 24,000 examples and the validation set 3,000. ```json { "instruction": "根据以下新闻生成摘要。", "input": "天宇股份公告,预计2021年半年度归属于上公司股东的净利润1.7亿元-2.3亿元,同比下降39.68%-55.41%。公司主营产品沙坦类原料药受低端市场激烈竞争影响,原料药销售价格较去年同期下降;子公司山东昌邑一期项目和京圣药业生产基地建设完成,进入试生产阶段和达产阶段,产能利用率没有完全释放,生产成本阶段性较高等原因导致报告期毛利率较上年同期下降。", "output": "天宇股份:半年度净利润预降40%-55%" } ``` (2) FinQA: financial news and announcement event question answering, converted from the DuEE-fin dataset. Given a financial news article or announcement plus a question about an event in the text, the model must generate the answer. Questions cover the event types occurring in the text and event arguments such as time and participants; the answer is the corresponding event type or a list of event arguments from the text. The training set contains 16,000 examples and the validation set 2,000. ```json { "instruction": "新城悦服务股份回购事件对应的每股交易价格是什么?原标题:新城悦“自救”:1064万港元回购公司190万股股份 来源:新浪乐居 \ 7月8日,新城悦服务(01755.hk)发布公告称,公司于今日回购190万股普通股票,占据现有已发行股份的0.23171%。回购股份每股付出价格区间为5.30港元至5.83港元,付出总额为1064万港元。 \ 值得注意的是,新城控股(28.500,1.52,5.63%)董事长涉嫌猥亵儿童被刑拘事件发生后第四个交易日(7月8日),新城悦服务股价开始回升,收涨12.20%。 \ 据悉,新城控股董事长涉嫌猥亵儿童被刑拘事件发生第三个交易日(7月5日),新城系港股上市房企市值共蒸发约256亿港元。截至7月5日收盘,新城发展(01030.HK)收于6.71港元\/股,市值自事件发生后减少227.11亿港元;新城悦(01755.HK)收于5.08港元\/股,市值自事件发生后减少28.86亿港元。", "input": "", "output": "5.30港元至5.83港元" } ``` (3) FinNL: financial news classification. Given a financial news article, the model must perform multi-label classification over fifteen possible categories: company, industry, broader market, international, economy, policy, politics, futures, bonds, real estate, foreign exchange, cryptocurrency, COVID-19, energy, and other. The training set contains 8,000 examples and the validation set 1,000. ```json { "instruction": "新城悦服务股份回购事件对应的每股交易价格是什么?原标题:新城悦“自救”:1064万港元回购公司190万股股份 来源:新浪乐居 \ 
7月8日,新城悦服务(01755.hk)发布公告称,公司于今日回购190万股普通股票,占据现有已发行股份的0.23171%。回购股份每股付出价格区间为5.30港元至5.83港元,付出总额为1064万港元。 \ 值得注意的是,新城控股(28.500,1.52,5.63%)董事长涉嫌猥亵儿童被刑拘事件发生后第四个交易日(7月8日),新城悦服务股价开始回升,收涨12.20%。 \ 据悉,新城控股董事长涉嫌猥亵儿童被刑拘事件发生第三个交易日(7月5日),新城系港股上市房企市值共蒸发约256亿港元。截至7月5日收盘,新城发展(01030.HK)收于6.71港元\/股,市值自事件发生后减少227.11亿港元;新城悦(01755.HK)收于5.08港元\/股,市值自事件发生后减少28.86亿港元。", "input": "", "output": "5.30港元至5.83港元" } ``` (4) FinRE: financial news relation extraction. Given a financial news article and a head-entity/tail-entity pair, the model must classify the relation of the pair into 44 relation categories (including a null relation), covering finance-specific relations such as ownership, shareholding, competition, acquisition, transaction, cooperation, and shareholding reduction. The training set contains 7,454 examples and the validation set 1,489. ```json { "instruction": "根据以下文本,描述以下两个实体东方航空和上航之间的关系。", "input": "东方航空AH股临时停牌传将与上航合并", "output": "合并" } ``` (5) FinFE: financial social media sentiment classification. Given a financial social media text, the model must classify its sentiment as negative, neutral, or positive. The training set contains 8,000 examples and the validation set 1,000. ```json { "instruction": "这个文本的情感倾向是积极、消极还是中性的。", "input": "补仓补的心力憔悴", "output": "该文本情感是消极的。" } ``` (6) FinNSP: financial negative news and subject detection. Given a financial news article or social media text and the entities it contains, the model must judge whether the text contains negative news targeting an entity and identify which entity is the subject of the negative news. The training set contains 4,800 examples and the validation set 600. ```json { "instruction": "识别以下内容中的负面金融实体信息。", "input": "11月5日,警方对众车在线涉嫌非法吸收公众存款案立案侦查", "output": "文中包含的负面主体:众车在线。" } ``` (7) FinCQA: financial causal event extraction. Causal event extraction identifies pairs of events in text that stand in a cause-effect relation, together with their event arguments, and organizes them into structured data. This dataset covers causal events in the commodities domain; recognized event types include typhoon/earthquake, supply increase/decrease, demand increase/decrease, and price rise/fall as possible cause and effect events, together with their correspondences and arguments such as product and region. The training set contains 21,965 examples and the validation set 2,741. ```json { "instruction": "下文中有哪些因果事件?铁矿:中长期,今年铁矿供需格局明显改善,巴西矿难及飓风对发运的影响,导致铁矿石全年供应走低", "input": "", "output": "台风导致供给减少" } ``` (8) FinESE: financial event subject extraction. Given real news text, extract the subject entity of a specified event type. The training set contains 11,752 examples and the validation set 1,464. ```json { "instruction": "分析以下新闻,抽取资金账户风险事件相关的主体信息。", "input": "金一文化违规减持仅””罚酒三杯””未来减持或””仍不手软””雅虎承认发生大规模数据泄露 2亿账户信息被盗科远股份(002380)股东减持202万股套现5989万", "output": "所属资金账户风险事件的金融主体是雅虎。" } ```
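Each record carries `instruction`, `input`, and `output` fields, and some tasks (for example the FinQA sample above) leave `input` empty and fold the passage into `instruction`. A hedged sketch of assembling a single prompt string for instruction tuning follows; the joining convention is an assumption about downstream use, not something the card prescribes.

```python
def build_prompt(example: dict) -> str:
    # Concatenate instruction and input when both are present; many records
    # already embed the passage directly in `instruction`.
    if example.get("input"):
        return example["instruction"] + "\n" + example["input"]
    return example["instruction"]

# FinFE sample from this card
sample = {
    "task": "FinFE",
    "instruction": "这个文本的情感倾向是积极、消极还是中性的。",
    "input": "补仓补的心力憔悴",
    "output": "该文本情感是消极的。",
}
prompt = build_prompt(sample)
```

The resulting `prompt` would then be paired with `sample["output"]` as the target text.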
Hazqeel/ms-patriots
--- language: - ms --- Data scraped from thepatriots(dot)asia on 8/7/2023.
fffilo/genre-classifier-2
--- dataset_info: features: - name: example dtype: string splits: - name: train num_bytes: 130697 num_examples: 118 - name: test num_bytes: 5589 num_examples: 5 download_size: 32844 dataset_size: 136286 configs: - config_name: default data_files: - split: train path: data/train-* - split: test path: data/test-* ---
cboettig/biodiversity
--- license: pddl --- # Public Biodiversity Data A collection of biodiversity-related datasets in the public domain. Data objects are copied here to make more easily available over virtual filesystem protocols. Some datasets are also translated into cloud-optimized formats.
lhoestq/small-publaynet-wds
--- tags: - webdataset --- # Small PubLayNet (WebDataset) This dataset consists of the first WebDataset shards of PubLayNet, from http://storage.googleapis.com/nvdata-publaynet. It is mostly used to test the WebDataset integration within the Hugging Face ecosystem.
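In the WebDataset format, each shard is a plain tar archive whose members are grouped into samples by basename: `0001.jpg` and `0001.json` belong to the same sample. A local sketch of that grouping using only the standard library (the field names and demo payloads are illustrative, not taken from the actual PubLayNet shards):

```python
import io
import json
import os
import tarfile
import tempfile
from itertools import groupby

def iter_samples(path):
    """Yield {'__key__': ..., extension: bytes} dicts from a WebDataset-style tar."""
    with tarfile.open(path) as tar:
        members = sorted(tar.getmembers(), key=lambda m: m.name)
        for key, group in groupby(members, key=lambda m: m.name.split(".", 1)[0]):
            sample = {"__key__": key}
            for m in group:
                ext = m.name.split(".", 1)[1]
                sample[ext] = tar.extractfile(m).read()
            yield sample

def write_demo_shard(path):
    # Build a tiny two-sample shard: each sample is one .json plus one .txt member.
    with tarfile.open(path, "w") as tar:
        for key in ("0001", "0002"):
            for ext, payload in (("json", json.dumps({"id": key}).encode()),
                                 ("txt", f"page {key}".encode())):
                info = tarfile.TarInfo(name=f"{key}.{ext}")
                info.size = len(payload)
                tar.addfile(info, io.BytesIO(payload))

demo = os.path.join(tempfile.mkdtemp(), "shard-000000.tar")
write_demo_shard(demo)
samples = list(iter_samples(demo))
```

After running this, `samples` holds two dicts keyed `"0001"` and `"0002"`, mirroring how a real WebDataset loader reassembles image/annotation pairs from a shard.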
tyzhu/flan_max_300
--- dataset_info: features: - name: id dtype: string - name: system_prompt dtype: string - name: question dtype: string - name: response dtype: string splits: - name: train num_bytes: 2253528229.0133214 num_examples: 1321267 - name: test num_bytes: 118607826.10465212 num_examples: 69541 - name: validation num_bytes: 118607826.10465212 num_examples: 69541 download_size: 34774605 dataset_size: 2490743881.2226253 --- # Dataset Card for "flan_max_300" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
CyberHarem/watanabe_you_lovelivesunshine
--- license: mit task_categories: - text-to-image tags: - art - not-for-all-audiences size_categories: - n<1K --- # Dataset of watanabe_you/渡辺曜/와타나베요 (Love Live! Sunshine!!) This is the dataset of watanabe_you/渡辺曜/와타나베요 (Love Live! Sunshine!!), containing 500 images and their tags. The core tags of this character are `blue_eyes, short_hair, brown_hair, grey_hair, bangs, breasts, medium_breasts`, which are pruned in this dataset. Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)). ## List of Packages | Name | Images | Size | Download | Type | Description | |:-----------------|---------:|:-----------|:-------------------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------| | raw | 500 | 660.40 MiB | [Download](https://huggingface.co/datasets/CyberHarem/watanabe_you_lovelivesunshine/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). | | 800 | 500 | 375.46 MiB | [Download](https://huggingface.co/datasets/CyberHarem/watanabe_you_lovelivesunshine/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. | | stage3-p480-800 | 1206 | 811.35 MiB | [Download](https://huggingface.co/datasets/CyberHarem/watanabe_you_lovelivesunshine/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. | | 1200 | 500 | 585.17 MiB | [Download](https://huggingface.co/datasets/CyberHarem/watanabe_you_lovelivesunshine/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. 
| | stage3-p480-1200 | 1206 | 1.12 GiB | [Download](https://huggingface.co/datasets/CyberHarem/watanabe_you_lovelivesunshine/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. | ### Load Raw Dataset with Waifuc We provide raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need this, just run the following code ```python import os import zipfile from huggingface_hub import hf_hub_download from waifuc.source import LocalSource # download raw archive file zip_file = hf_hub_download( repo_id='CyberHarem/watanabe_you_lovelivesunshine', repo_type='dataset', filename='dataset-raw.zip', ) # extract files to your directory dataset_dir = 'dataset_dir' os.makedirs(dataset_dir, exist_ok=True) with zipfile.ZipFile(zip_file, 'r') as zf: zf.extractall(dataset_dir) # load the dataset with waifuc source = LocalSource(dataset_dir) for item in source: print(item.image, item.meta['filename'], item.meta['tags']) ``` ## List of Clusters List of tag clustering result, maybe some outfits can be mined here. 
### Raw Text Version | # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags | |----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | 0 | 7 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | 1girl, blush, grey_skirt, pleated_skirt, serafuku, smile, solo, uranohoshi_school_uniform, long_sleeves, looking_at_viewer, red_bowtie, simple_background, white_background, buttons, salute, grey_sailor_collar, miniskirt, shirt, collarbone, cowboy_shot | | 1 | 27 | ![](samples/1/clu1-sample0.png) | ![](samples/1/clu1-sample1.png) | ![](samples/1/clu1-sample2.png) | ![](samples/1/clu1-sample3.png) | ![](samples/1/clu1-sample4.png) | 1girl, solo, earrings, looking_at_viewer, midriff, navel, tiara, smile, skirt, birthday, fish, detached_sleeves, thighhighs, blush, bubble, underwater | | 2 | 13 | ![](samples/2/clu2-sample0.png) | ![](samples/2/clu2-sample1.png) | ![](samples/2/clu2-sample2.png) | ![](samples/2/clu2-sample3.png) | ![](samples/2/clu2-sample4.png) | 1girl, solo, blush, looking_at_viewer, open_mouth, smile, dress, hair_flower, white_background, detached_sleeves, earrings, simple_background, bow, tiara | | 3 | 14 | ![](samples/3/clu3-sample0.png) | ![](samples/3/clu3-sample1.png) | ![](samples/3/clu3-sample2.png) | ![](samples/3/clu3-sample3.png) | 
![](samples/3/clu3-sample4.png) | 1girl, solo, competition_swimsuit, looking_at_viewer, blue_one-piece_swimsuit, blush, collarbone, smile, covered_navel, wet, water, highleg_swimsuit, poolside, bare_shoulders, open_mouth | | 4 | 7 | ![](samples/4/clu4-sample0.png) | ![](samples/4/clu4-sample1.png) | ![](samples/4/clu4-sample2.png) | ![](samples/4/clu4-sample3.png) | ![](samples/4/clu4-sample4.png) | 1girl, floral_print, obi, smile, solo, looking_at_viewer, blush, hair_flower, open_mouth, upper_body, alternate_hairstyle, wide_sleeves, yukata | | 5 | 8 | ![](samples/5/clu5-sample0.png) | ![](samples/5/clu5-sample1.png) | ![](samples/5/clu5-sample2.png) | ![](samples/5/clu5-sample3.png) | ![](samples/5/clu5-sample4.png) | 1girl, looking_at_viewer, outdoors, smile, solo, blue_sky, cleavage, day, navel, cloud, ocean, blush, collarbone, earrings, blue_bikini, bracelet, open_mouth, salute, skirt, striped_bikini, x_hair_ornament, rainbow | | 6 | 5 | ![](samples/6/clu6-sample0.png) | ![](samples/6/clu6-sample1.png) | ![](samples/6/clu6-sample2.png) | ![](samples/6/clu6-sample3.png) | ![](samples/6/clu6-sample4.png) | 1girl, beret, blush, cleavage, collarbone, looking_at_viewer, necklace, open_mouth, sailor_collar, sailor_hat, short_sleeves, solo, white_headwear, :d, navel, white_skirt, wrist_cuffs, bikini_top_only, crop_top, midriff, miniskirt, pleated_skirt, simple_background, stomach, teeth, white_background, white_thighhighs, blue_bikini, bow, frilled_skirt, from_above, open_clothes, pendant, polka_dot_bikini, swept_bangs, water_drop, zettai_ryouiki | | 7 | 7 | ![](samples/7/clu7-sample0.png) | ![](samples/7/clu7-sample1.png) | ![](samples/7/clu7-sample2.png) | ![](samples/7/clu7-sample3.png) | ![](samples/7/clu7-sample4.png) | 1girl, solo, bag, looking_at_viewer, straw_hat, open_mouth, sun_hat, anchor_necklace, outdoors, shorts, vertical-striped_dress, :d, bare_shoulders, black_ribbon, blush, collarbone, day, ocean, sky, sleeveless, wrist_ribbon | | 8 | 8 | 
![](samples/8/clu8-sample0.png) | ![](samples/8/clu8-sample1.png) | ![](samples/8/clu8-sample2.png) | ![](samples/8/clu8-sample3.png) | ![](samples/8/clu8-sample4.png) | 1girl, hat, looking_at_viewer, solo, simple_background, white_background, short_shorts, thighhighs, blush, long_sleeves, salute, thigh_strap, white_shorts, detached_sleeves, epaulettes, grin, one_eye_closed, open_mouth, waist_cape, white_headwear, ;d, blue_ribbon, gun, holding_weapon, jewelry, necktie, white_footwear | ### Table Version | # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | blush | grey_skirt | pleated_skirt | serafuku | smile | solo | uranohoshi_school_uniform | long_sleeves | looking_at_viewer | red_bowtie | simple_background | white_background | buttons | salute | grey_sailor_collar | miniskirt | shirt | collarbone | cowboy_shot | earrings | midriff | navel | tiara | skirt | birthday | fish | detached_sleeves | thighhighs | bubble | underwater | open_mouth | dress | hair_flower | bow | competition_swimsuit | blue_one-piece_swimsuit | covered_navel | wet | water | highleg_swimsuit | poolside | bare_shoulders | floral_print | obi | upper_body | alternate_hairstyle | wide_sleeves | yukata | outdoors | blue_sky | cleavage | day | cloud | ocean | blue_bikini | bracelet | striped_bikini | x_hair_ornament | rainbow | beret | necklace | sailor_collar | sailor_hat | short_sleeves | white_headwear | :d | white_skirt | wrist_cuffs | bikini_top_only | crop_top | stomach | teeth | white_thighhighs | frilled_skirt | from_above | open_clothes | pendant | polka_dot_bikini | swept_bangs | water_drop | zettai_ryouiki | bag | straw_hat | sun_hat | anchor_necklace | shorts | vertical-striped_dress | black_ribbon | sky | sleeveless | wrist_ribbon | hat | short_shorts | thigh_strap | white_shorts | epaulettes | grin | one_eye_closed | waist_cape | ;d | blue_ribbon | gun | holding_weapon | jewelry | necktie | white_footwear | 
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:--------|:-------------|:----------------|:-----------|:--------|:-------|:----------------------------|:---------------|:--------------------|:-------------|:--------------------|:-------------------|:----------|:---------|:---------------------|:------------|:--------|:-------------|:--------------|:-----------|:----------|:--------|:--------|:--------|:-----------|:-------|:-------------------|:-------------|:---------|:-------------|:-------------|:--------|:--------------|:------|:-----------------------|:--------------------------|:----------------|:------|:--------|:-------------------|:-----------|:-----------------|:---------------|:------|:-------------|:----------------------|:---------------|:---------|:-----------|:-----------|:-----------|:------|:--------|:--------|:--------------|:-----------|:-----------------|:------------------|:----------|:--------|:-----------|:----------------|:-------------|:----------------|:-----------------|:-----|:--------------|:--------------|:------------------|:-----------|:----------|:--------|:-------------------|:----------------|:-------------|:---------------|:----------|:-------------------|:--------------|:-------------|:-----------------|:------|:------------|:----------|:------------------|:---------|:-------------------------|:---------------|:------|:-------------|:---------------|:------|:---------------|:--------------|:---------------|:-------------|:-------|:-----------------|:-------------|:-----|:--------------|:------|:-----------------|:----------|:----------|:-----------------| | 0 | 7 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | X | X | X | X | X | X | X | X | X | X | X | X | 
X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | 1 | 27 | ![](samples/1/clu1-sample0.png) | ![](samples/1/clu1-sample1.png) | ![](samples/1/clu1-sample2.png) | ![](samples/1/clu1-sample3.png) | ![](samples/1/clu1-sample4.png) | X | X | | | | X | X | | | X | | | | | | | | | | | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | 2 | 13 | ![](samples/2/clu2-sample0.png) | ![](samples/2/clu2-sample1.png) | ![](samples/2/clu2-sample2.png) | ![](samples/2/clu2-sample3.png) | ![](samples/2/clu2-sample4.png) | X | X | | | | X | X | | | X | | X | X | | | | | | | | X | | | X | | | | X | | | | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | 3 | 14 | ![](samples/3/clu3-sample0.png) | ![](samples/3/clu3-sample1.png) | ![](samples/3/clu3-sample2.png) | ![](samples/3/clu3-sample3.png) | ![](samples/3/clu3-sample4.png) | X | X | | | | X | X | | | X | | | | | | | | | X | | | | | | | | | | | | | X | | | | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | 4 | 7 | ![](samples/4/clu4-sample0.png) | ![](samples/4/clu4-sample1.png) | ![](samples/4/clu4-sample2.png) | ![](samples/4/clu4-sample3.png) | ![](samples/4/clu4-sample4.png) | X | X | | | | X | X | | | X | | | | | | | | | | | | | | | | | | | | | | X | | X | | | | | | | | | | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | 5 | 8 | ![](samples/5/clu5-sample0.png) | ![](samples/5/clu5-sample1.png) | 
![](samples/5/clu5-sample2.png) | ![](samples/5/clu5-sample3.png) | ![](samples/5/clu5-sample4.png) | X | X | | | | X | X | | | X | | | | | X | | | | X | | X | | X | | X | | | | | | | X | | | | | | | | | | | | | | | | | | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | 6 | 5 | ![](samples/6/clu6-sample0.png) | ![](samples/6/clu6-sample1.png) | ![](samples/6/clu6-sample2.png) | ![](samples/6/clu6-sample3.png) | ![](samples/6/clu6-sample4.png) | X | X | | X | | | X | | | X | | X | X | | | | X | | X | | | X | X | | | | | | | | | X | | | X | | | | | | | | | | | | | | | | | X | | | | X | | | | | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | 7 | 7 | ![](samples/7/clu7-sample0.png) | ![](samples/7/clu7-sample1.png) | ![](samples/7/clu7-sample2.png) | ![](samples/7/clu7-sample3.png) | ![](samples/7/clu7-sample4.png) | X | X | | | | | X | | | X | | | | | | | | | X | | | | | | | | | | | | | X | | | | | | | | | | | X | | | | | | | X | | | X | | X | | | | | | | | | | | | X | | | | | | | | | | | | | | | | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | 8 | 8 | ![](samples/8/clu8-sample0.png) | ![](samples/8/clu8-sample1.png) | ![](samples/8/clu8-sample2.png) | ![](samples/8/clu8-sample3.png) | ![](samples/8/clu8-sample4.png) | X | X | | | | | X | | X | X | | X | X | | X | | | | | | | | | | | | | X | X | | | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | X | | | | | | | | | | | | | | | | | | | | | | | | | | | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X |
sent_comp
--- annotations_creators: - machine-generated language_creators: - found language: - en license: - unknown multilinguality: - monolingual size_categories: - 100K<n<1M source_datasets: - original task_categories: - other task_ids: [] paperswithcode_id: sentence-compression pretty_name: Google Sentence Compression tags: - sentence-compression dataset_info: features: - name: graph struct: - name: id dtype: string - name: sentence dtype: string - name: node sequence: - name: form dtype: string - name: type dtype: string - name: mid dtype: string - name: word sequence: - name: id dtype: int32 - name: form dtype: string - name: stem dtype: string - name: tag dtype: string - name: gender dtype: int32 - name: head_word_index dtype: int32 - name: edge sequence: - name: parent_id dtype: int32 - name: child_id dtype: int32 - name: label dtype: string - name: entity_mention sequence: - name: start dtype: int32 - name: end dtype: int32 - name: head dtype: int32 - name: name dtype: string - name: type dtype: string - name: mid dtype: string - name: is_proper_name_entity dtype: bool - name: gender dtype: int32 - name: compression struct: - name: text dtype: string - name: edge sequence: - name: parent_id dtype: int32 - name: child_id dtype: int32 - name: headline dtype: string - name: compression_ratio dtype: float32 - name: doc_id dtype: string - name: source_tree struct: - name: id dtype: string - name: sentence dtype: string - name: node sequence: - name: form dtype: string - name: type dtype: string - name: mid dtype: string - name: word sequence: - name: id dtype: int32 - name: form dtype: string - name: stem dtype: string - name: tag dtype: string - name: gender dtype: int32 - name: head_word_index dtype: int32 - name: edge sequence: - name: parent_id dtype: int32 - name: child_id dtype: int32 - name: label dtype: string - name: entity_mention sequence: - name: start dtype: int32 - name: end dtype: int32 - name: head dtype: int32 - name: name dtype: string - name: type 
dtype: string - name: mid dtype: string - name: is_proper_name_entity dtype: bool - name: gender dtype: int32 - name: compression_untransformed struct: - name: text dtype: string - name: edge sequence: - name: parent_id dtype: int32 - name: child_id dtype: int32 splits: - name: validation num_bytes: 55823979 num_examples: 10000 - name: train num_bytes: 1135684803 num_examples: 200000 download_size: 259652560 dataset_size: 1191508782 --- # Dataset Card for Google Sentence Compression ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [https://github.com/google-research-datasets/sentence-compression](https://github.com/google-research-datasets/sentence-compression) - **Repository:** [https://github.com/google-research-datasets/sentence-compression](https://github.com/google-research-datasets/sentence-compression) - **Paper:** [https://www.aclweb.org/anthology/D13-1155/](https://www.aclweb.org/anthology/D13-1155/) - **Leaderboard:** - **Point of Contact:** ### Dataset Summary A 
major challenge in supervised sentence compression is making use of rich feature representations because of very scarce parallel data. We address this problem and present a method to automatically build a compression corpus with hundreds of thousands of instances on which deletion-based algorithms can be trained. In our corpus, the syntactic trees of the compressions are subtrees of their uncompressed counterparts, and hence supervised systems which require a structural alignment between the input and output can be successfully trained. We also extend an existing unsupervised compression method with a learning module. The new system uses structured prediction to learn from lexical, syntactic and other features. An evaluation with human raters shows that the presented data harvesting method indeed produces a parallel corpus of high quality. Also, the supervised system trained on this corpus gets high scores both from human raters and in an automatic evaluation setting, significantly outperforming a strong baseline. ### Supported Tasks and Leaderboards [More Information Needed] ### Languages English ## Dataset Structure ### Data Instances Each data instance contains the original sentence in `instance["graph"]["sentence"]` as well as the compressed sentence in `instance["compression"]["text"]`. As this dataset was created by pruning dependency connections, the authors also include the dependency tree and the transformed graph of both the original and the compressed sentence. ### Data Fields Each instance contains the following fields: - `graph` (`Dict`): the transformation graph/tree used to extract the compression (a modified version of a dependency tree). - This has features similar to a dependency tree (listed below) - `compression` (`Dict`) - `text` (`str`) - `edge` (`List`) - `headline` (`str`): the headline of the original news page. - `compression_ratio` (`float`): the ratio between the compressed and the original sentence. 
- `doc_id` (`str`): URL of the original news page. - `source_tree` (`Dict`): the original dependency tree (features listed below). - `compression_untransformed` (`Dict`) - `text` (`str`) - `edge` (`List`) Dependency tree features: - `id` (`str`) - `sentence` (`str`) - `node` (`List`): list of nodes; each node represents a word or word phrase in the tree. - `form` (`string`) - `type` (`string`): the entity type of a node. Defaults to `""` if it is not an entity. - `mid` (`string`) - `word` (`List`): list of words the node contains. - `id` (`int`) - `form` (`str`): the word from the sentence. - `stem` (`str`): the stemmed/lemmatized version of the word. - `tag` (`str`): dependency tag of the word. - `gender` (`int`) - `head_word_index` (`int`) - `edge` (`List`): list of the dependency connections between words. - `parent_id` (`int`) - `child_id` (`int`) - `label` (`str`) - `entity_mention` (`List`): list of the entities in the sentence. - `start` (`int`) - `end` (`int`) - `head` (`int`) - `name` (`str`) - `type` (`str`) - `mid` (`str`) - `is_proper_name_entity` (`bool`) - `gender` (`int`) ### Data Splits The dataset provides a `train` split with 200,000 examples and a `validation` split with 10,000 examples. ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information [More Information Needed] ### Contributions Thanks to [@mattbui](https://github.com/mattbui) for adding this dataset.
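As a small illustration of the `compression_ratio` field, the sketch below derives a length-based ratio for one sentence pair. The pair itself is hypothetical, and the character-level computation is an assumption made for illustration; the dataset may compute the ratio differently (e.g. over tokens).

```python
# Sketch only: the sentence pair is hypothetical and the character-based
# computation is an assumption about how compression_ratio is derived.
original = "Floyd Mayweather is open to fighting Amir Khan in the future."
compressed = "Floyd Mayweather is open to fighting Amir Khan."

def compression_ratio(original: str, compressed: str) -> float:
    """Length ratio of the compressed sentence to the original sentence."""
    return len(compressed) / len(original)

ratio = compression_ratio(original, compressed)
print(round(ratio, 3))  # a value below 1.0, since the compression is shorter
```

A ratio close to 1.0 indicates little deletion; smaller values indicate more aggressive compression.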
PleIAs/Ukrainian-CulturalHeritage-Books
--- task_categories: - text-generation language: - uk tags: - ocr pretty_name: Ukrainian-Public Domain-Books --- # 🇺🇦 Ukrainian-Cultural Heritage-Books 🇺🇦 **Ukrainian-Cultural Heritage-Books** or **Ukrainian-CulturalHeritage-Books** is a collection of Ukrainian cultural heritage books and periodicals, most of them being in the public domain. ## Dataset summary The collection has been compiled by Pierre-Carl Langlais from 19,574 digitized files hosted on Internet Archive (462M words) and will be expanded to other cultural heritage sources. ## Curation method The composition of the dataset adheres to the criteria for public domain works in the EU and, consequently, all Berne-countries for EU authors: any publication whose author is dead for more than 70 years. Additionally, the initial consolidation of public domain status for cultural heritage operates in the EU under the 2019 Copyright Directive (art. 14). As of March 2024, to limit rights verification, we have retained exclusively titles published prior to 1884. The corpus will be expanded at a later stage to encompass late 19th century and early 20th century publications, after checking for public domain validity. ## Uses The collection aims to expand the availability of open works for the training of Large Language Models. The text can be used for model training and republished without restriction for reproducibility purposes. The rationales for creation of this collection are multifold: * **Scientific**: We observe that the closure of training corpora represents a major barrier to AI research. Large language models face a real crisis of reproducibility. * **Legal**: With the adoption of the AI Act with its obligations in terms of copyright law compliance for the pretraining corpora, the European AI ecosystem will have to change its provenance practices. * **Cultural**: The linguistic diversity of the European Union is currently underrepresented. 
Unlike web archives, open, heritage, administrative, or scientific texts are often of high quality: they are long, multilingual, and editorialized publications. * **Economical**: Today, value capture is concentrated on players whose financial resources are already considerable, allowing them to collect or purchase data at a high price. Making a royalty-free corpus available to as many people as possible frees innovation in uses and minimizes economic dependencies on dominant actors. ## License The entire collection is in the public domain in all regions. This means that the patrimonial rights of each individual or collective rights holder have expired. There has been a debate for years in Europe over the definition of the public domain and the possibility to restrict its use. Since 2019, the EU Copyright Directive states that "Member States shall provide that, when the term of protection of a work of visual art has expired, any material resulting from an act of reproduction of that work is not subject to copyright or related rights, unless the material resulting from that act of reproduction is original in the sense that it is the author's own intellectual creation." (art. 14) ## Future work This dataset is not a one-time work but will continue to evolve significantly in three directions: * Expansion of the dataset to late 19th and early 20th century works and its further enhancement with currently unexploited collections coming from European patrimonial data repositories. * Correction of computer-generated errors in the text. All the texts have been transcribed automatically through the use of Optical Character Recognition (OCR) software. The original files have been digitized over a long time period (since the mid-2000s) and some documents may be of low quality. Future versions will strive either to re-OCRize the original text or to use experimental LLM models for partial OCR correction. * Enhancement of the structure/editorial presentation of the original text. 
Some parts of the original documents are likely unwanted for large scale analysis or model training (header, page count…). Additionally, some advanced document structures like tables or multi-column layout are unlikely to be well-formatted. ## Acknowledgements The corpus was stored and processed with the generous support of Scaleway. It was built up with the support and concerted efforts of the state start-up LANGU:IA (start-up d’Etat), supported by the French Ministry of Culture and DINUM, as part of the prefiguration of the service offering of the Alliance for Language technologies EDIC (ALT-EDIC). Corpus collection has been largely facilitated thanks to the open science LLM community insights and cooperation (Occiglot, Eleuther AI, Allen AI). <div style="text-align: center;"> <img src="https://github.com/mch-dd/datasetlogo/blob/main/scaleway.jpeg?raw=true" style="width: 33%; margin: 0 auto; display: inline-block;"/> <img src="https://github.com/mch-dd/datasetlogo/blob/main/ministere.png?raw=true" style="width: 33%; margin: 0 auto; display: inline-block;"/> <img src="https://github.com/mch-dd/datasetlogo/blob/main/occiglot.jpg?raw=true" style="width: 33%; margin: 0 auto; display: inline-block;"/> </div>
expertai/BUSTER
--- language: - en license: apache-2.0 size_categories: - 10K<n<100K task_categories: - token-classification pretty_name: buster tags: - finance configs: - config_name: default data_files: - split: FOLD_1 path: data/FOLD_1-* - split: FOLD_2 path: data/FOLD_2-* - split: FOLD_3 path: data/FOLD_3-* - split: FOLD_4 path: data/FOLD_4-* - split: FOLD_5 path: data/FOLD_5-* - split: SILVER path: data/SILVER-* dataset_info: features: - name: document_id dtype: string - name: text dtype: string - name: tokens sequence: string - name: labels sequence: string splits: - name: FOLD_1 num_bytes: 13597946 num_examples: 753 - name: FOLD_2 num_bytes: 13477878 num_examples: 759 - name: FOLD_3 num_bytes: 13602552 num_examples: 758 - name: FOLD_4 num_bytes: 13834760 num_examples: 755 - name: FOLD_5 num_bytes: 13632431 num_examples: 754 - name: SILVER num_bytes: 108914416 num_examples: 6196 download_size: 0 dataset_size: 177059983 --- # Dataset Card for BUSTER BUSiness Transaction Entity Recognition dataset. BUSTER is an Entity Recognition (ER) benchmark for entities related to business transactions. It consists of a gold corpus of 3779 manually annotated documents on financial transactions that were randomly divided into 5 folds, plus an additional silver corpus of 6196 documents that were automatically annotated by the best-performing model. ### Data Splits Statistics <table border="1" cellspacing="0" cellpadding="5" style="border-collapse: collapse; width: 100%;"> <thead> <tr> <th></th> <th></th> <th colspan="6" style="text-align:center;">Gold</th> <th>Silver</th> </tr> <tr> <th></th> <th></th> <th>fold 1</th> <th>fold 2</th> <th>fold 3</th> <th>fold 4</th> <th>fold 5</th> <th>Total</th> <th>Total</th> </tr> </thead> <tbody> <tr> <td></td> <td>N. Docs</td> <td>753</td> <td>759</td> <td>758</td> <td>755</td> <td>754</td> <td>3779</td> <td>6196</td> </tr> <tr> <td></td> <td>N. 
Tokens</td> <td>685K</td> <td>680K</td> <td>687K</td> <td>697K</td> <td>688K</td> <td>3437K</td> <td>5647K</td> </tr> <tr> <td></td> <td>N. Annotations</td> <td>4119</td> <td>4267</td> <td>4100</td> <td>4103</td> <td>4163</td> <td>20752</td> <td>33272</td> </tr> </tbody> </table> ### Pre-print You can find the pre-print [here](https://arxiv.org/abs/2402.09916). ### Citation Information If you use BUSTER in your work, please cite us: ``` @inproceedings{zugarini-etal-2023-buster, title = "{BUSTER}: a {``}{BUS}iness Transaction Entity Recognition{''} dataset", author = "Zugarini, Andrea and Zamai, Andrew and Ernandes, Marco and Rigutini, Leonardo", editor = "Wang, Mingxuan and Zitouni, Imed", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing: Industry Track", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-industry.57", doi = "10.18653/v1/2023.emnlp-industry.57", pages = "605--611", abstract = "Albeit Natural Language Processing has seen major breakthroughs in the last few years, transferring such advances into real-world business cases can be challenging. One of the reasons resides in the displacement between popular benchmarks and actual data. Lack of supervision, unbalanced classes, noisy data and long documents often affect real problems in vertical domains such as finance, law and health. To support industry-oriented research, we present BUSTER, a BUSiness Transaction Entity Recognition dataset. The dataset consists of 3779 manually annotated documents on financial transactions. We establish several baselines exploiting both general-purpose and domain-specific language models. The best performing model is also used to automatically annotate 6196 documents, which we release as an additional silver corpus to BUSTER.", } ```
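Since the `tokens` and `labels` columns are parallel sequences, per-token labels can be grouped back into entity spans. The sketch below illustrates one way to do this; the example tokens, the entity-type names, and the IOB2-style `B-`/`I-`/`O` tagging scheme are all assumptions made for illustration, not a statement of the dataset's actual label format.

```python
# Sketch: grouping per-token labels into entity spans. The tokens, entity
# names, and IOB2-style tagging below are hypothetical, not BUSTER's schema.
def extract_spans(tokens, labels):
    """Return (entity_type, text) pairs from parallel token/label sequences."""
    spans, current_type, current_tokens = [], None, []
    for token, label in zip(tokens, labels):
        if label.startswith("B-"):
            # A new entity begins; flush any span in progress.
            if current_type is not None:
                spans.append((current_type, " ".join(current_tokens)))
            current_type, current_tokens = label[2:], [token]
        elif label.startswith("I-") and current_type == label[2:]:
            # Continuation of the current entity.
            current_tokens.append(token)
        else:
            # Outside tag (or inconsistent continuation): flush and reset.
            if current_type is not None:
                spans.append((current_type, " ".join(current_tokens)))
            current_type, current_tokens = None, []
    if current_type is not None:
        spans.append((current_type, " ".join(current_tokens)))
    return spans

tokens = ["Acme", "Corp", "acquired", "Beta", "Inc", "."]
labels = ["B-BUYING_COMPANY", "I-BUYING_COMPANY", "O",
          "B-ACQUIRED_COMPANY", "I-ACQUIRED_COMPANY", "O"]
print(extract_spans(tokens, labels))
# → [('BUYING_COMPANY', 'Acme Corp'), ('ACQUIRED_COMPANY', 'Beta Inc')]
```

The same grouping logic applies to any of the `FOLD_*` or `SILVER` splits, since they share the `tokens`/`labels` schema declared in the card metadata.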
open-llm-leaderboard/details_ABX-AI__Infinite-Laymons-9B
--- pretty_name: Evaluation run of ABX-AI/Infinite-Laymons-9B dataset_summary: "Dataset automatically created during the evaluation run of model\ \ [ABX-AI/Infinite-Laymons-9B](https://huggingface.co/ABX-AI/Infinite-Laymons-9B)\ \ on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\ \nThe dataset is composed of 63 configuration, each one coresponding to one of the\ \ evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be\ \ found as a specific split in each configuration, the split being named using the\ \ timestamp of the run.The \"train\" split is always pointing to the latest results.\n\ \nAn additional configuration \"results\" store all the aggregated results of the\ \ run (and is used to compute and display the aggregated metrics on the [Open LLM\ \ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\ \nTo load the details from a run, you can for instance do the following:\n```python\n\ from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_ABX-AI__Infinite-Laymons-9B\"\ ,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\ These are the [latest results from run 2024-04-10T11:48:19.797016](https://huggingface.co/datasets/open-llm-leaderboard/details_ABX-AI__Infinite-Laymons-9B/blob/main/results_2024-04-10T11-48-19.797016.json)(note\ \ that their might be results for other tasks in the repos if successive evals didn't\ \ cover the same tasks. 
You find each in the results and the \"latest\" split for\ \ each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.645959924426964,\n\ \ \"acc_stderr\": 0.032166986596234216,\n \"acc_norm\": 0.6488478418620767,\n\ \ \"acc_norm_stderr\": 0.03281361203349943,\n \"mc1\": 0.38922888616891066,\n\ \ \"mc1_stderr\": 0.017068552680690335,\n \"mc2\": 0.5486913365249292,\n\ \ \"mc2_stderr\": 0.015339111406271384\n },\n \"harness|arc:challenge|25\"\ : {\n \"acc\": 0.6254266211604096,\n \"acc_stderr\": 0.014144193471893446,\n\ \ \"acc_norm\": 0.6561433447098977,\n \"acc_norm_stderr\": 0.013880644570156213\n\ \ },\n \"harness|hellaswag|10\": {\n \"acc\": 0.6478789085839474,\n\ \ \"acc_stderr\": 0.004766553336917499,\n \"acc_norm\": 0.8413662617008564,\n\ \ \"acc_norm_stderr\": 0.003645875568601287\n },\n \"harness|hendrycksTest-abstract_algebra|5\"\ : {\n \"acc\": 0.32,\n \"acc_stderr\": 0.046882617226215034,\n \ \ \"acc_norm\": 0.32,\n \"acc_norm_stderr\": 0.046882617226215034\n \ \ },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.6222222222222222,\n\ \ \"acc_stderr\": 0.04188307537595852,\n \"acc_norm\": 0.6222222222222222,\n\ \ \"acc_norm_stderr\": 0.04188307537595852\n },\n \"harness|hendrycksTest-astronomy|5\"\ : {\n \"acc\": 0.6644736842105263,\n \"acc_stderr\": 0.038424985593952694,\n\ \ \"acc_norm\": 0.6644736842105263,\n \"acc_norm_stderr\": 0.038424985593952694\n\ \ },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.64,\n\ \ \"acc_stderr\": 0.04824181513244218,\n \"acc_norm\": 0.64,\n \ \ \"acc_norm_stderr\": 0.04824181513244218\n },\n \"harness|hendrycksTest-clinical_knowledge|5\"\ : {\n \"acc\": 0.6981132075471698,\n \"acc_stderr\": 0.02825420034443866,\n\ \ \"acc_norm\": 0.6981132075471698,\n \"acc_norm_stderr\": 0.02825420034443866\n\ \ },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.7638888888888888,\n\ \ \"acc_stderr\": 0.03551446610810826,\n \"acc_norm\": 0.7638888888888888,\n\ \ \"acc_norm_stderr\": 0.03551446610810826\n 
},\n \"harness|hendrycksTest-college_chemistry|5\"\ : {\n \"acc\": 0.45,\n \"acc_stderr\": 0.05,\n \"acc_norm\"\ : 0.45,\n \"acc_norm_stderr\": 0.05\n },\n \"harness|hendrycksTest-college_computer_science|5\"\ : {\n \"acc\": 0.56,\n \"acc_stderr\": 0.04988876515698589,\n \ \ \"acc_norm\": 0.56,\n \"acc_norm_stderr\": 0.04988876515698589\n \ \ },\n \"harness|hendrycksTest-college_mathematics|5\": {\n \"acc\": 0.35,\n\ \ \"acc_stderr\": 0.047937248544110196,\n \"acc_norm\": 0.35,\n \ \ \"acc_norm_stderr\": 0.047937248544110196\n },\n \"harness|hendrycksTest-college_medicine|5\"\ : {\n \"acc\": 0.6705202312138728,\n \"acc_stderr\": 0.03583901754736412,\n\ \ \"acc_norm\": 0.6705202312138728,\n \"acc_norm_stderr\": 0.03583901754736412\n\ \ },\n \"harness|hendrycksTest-college_physics|5\": {\n \"acc\": 0.4117647058823529,\n\ \ \"acc_stderr\": 0.048971049527263666,\n \"acc_norm\": 0.4117647058823529,\n\ \ \"acc_norm_stderr\": 0.048971049527263666\n },\n \"harness|hendrycksTest-computer_security|5\"\ : {\n \"acc\": 0.78,\n \"acc_stderr\": 0.04163331998932261,\n \ \ \"acc_norm\": 0.78,\n \"acc_norm_stderr\": 0.04163331998932261\n \ \ },\n \"harness|hendrycksTest-conceptual_physics|5\": {\n \"acc\": 0.574468085106383,\n\ \ \"acc_stderr\": 0.03232146916224468,\n \"acc_norm\": 0.574468085106383,\n\ \ \"acc_norm_stderr\": 0.03232146916224468\n },\n \"harness|hendrycksTest-econometrics|5\"\ : {\n \"acc\": 0.45614035087719296,\n \"acc_stderr\": 0.04685473041907789,\n\ \ \"acc_norm\": 0.45614035087719296,\n \"acc_norm_stderr\": 0.04685473041907789\n\ \ },\n \"harness|hendrycksTest-electrical_engineering|5\": {\n \"acc\"\ : 0.5310344827586206,\n \"acc_stderr\": 0.04158632762097828,\n \"\ acc_norm\": 0.5310344827586206,\n \"acc_norm_stderr\": 0.04158632762097828\n\ \ },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\"\ : 0.4021164021164021,\n \"acc_stderr\": 0.025253032554997685,\n \"\ acc_norm\": 0.4021164021164021,\n \"acc_norm_stderr\": 0.025253032554997685\n\ \ 
},\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.47619047619047616,\n\ \ \"acc_stderr\": 0.04467062628403273,\n \"acc_norm\": 0.47619047619047616,\n\ \ \"acc_norm_stderr\": 0.04467062628403273\n },\n \"harness|hendrycksTest-global_facts|5\"\ : {\n \"acc\": 0.39,\n \"acc_stderr\": 0.04902071300001975,\n \ \ \"acc_norm\": 0.39,\n \"acc_norm_stderr\": 0.04902071300001975\n \ \ },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.7903225806451613,\n\ \ \"acc_stderr\": 0.023157879349083525,\n \"acc_norm\": 0.7903225806451613,\n\ \ \"acc_norm_stderr\": 0.023157879349083525\n },\n \"harness|hendrycksTest-high_school_chemistry|5\"\ : {\n \"acc\": 0.4876847290640394,\n \"acc_stderr\": 0.035169204442208966,\n\ \ \"acc_norm\": 0.4876847290640394,\n \"acc_norm_stderr\": 0.035169204442208966\n\ \ },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \ \ \"acc\": 0.69,\n \"acc_stderr\": 0.04648231987117316,\n \"acc_norm\"\ : 0.69,\n \"acc_norm_stderr\": 0.04648231987117316\n },\n \"harness|hendrycksTest-high_school_european_history|5\"\ : {\n \"acc\": 0.7575757575757576,\n \"acc_stderr\": 0.03346409881055953,\n\ \ \"acc_norm\": 0.7575757575757576,\n \"acc_norm_stderr\": 0.03346409881055953\n\ \ },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\"\ : 0.7878787878787878,\n \"acc_stderr\": 0.029126522834586804,\n \"\ acc_norm\": 0.7878787878787878,\n \"acc_norm_stderr\": 0.029126522834586804\n\ \ },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n\ \ \"acc\": 0.8911917098445595,\n \"acc_stderr\": 0.022473253332768776,\n\ \ \"acc_norm\": 0.8911917098445595,\n \"acc_norm_stderr\": 0.022473253332768776\n\ \ },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \ \ \"acc\": 0.6923076923076923,\n \"acc_stderr\": 0.023400928918310495,\n\ \ \"acc_norm\": 0.6923076923076923,\n \"acc_norm_stderr\": 0.023400928918310495\n\ \ },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"\ acc\": 
0.35185185185185186,\n \"acc_stderr\": 0.02911661760608301,\n \ \ \"acc_norm\": 0.35185185185185186,\n \"acc_norm_stderr\": 0.02911661760608301\n\ \ },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \ \ \"acc\": 0.7016806722689075,\n \"acc_stderr\": 0.029719142876342853,\n\ \ \"acc_norm\": 0.7016806722689075,\n \"acc_norm_stderr\": 0.029719142876342853\n\ \ },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\"\ : 0.36423841059602646,\n \"acc_stderr\": 0.03929111781242742,\n \"\ acc_norm\": 0.36423841059602646,\n \"acc_norm_stderr\": 0.03929111781242742\n\ \ },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\"\ : 0.8403669724770643,\n \"acc_stderr\": 0.015703498348461763,\n \"\ acc_norm\": 0.8403669724770643,\n \"acc_norm_stderr\": 0.015703498348461763\n\ \ },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\"\ : 0.5138888888888888,\n \"acc_stderr\": 0.034086558679777494,\n \"\ acc_norm\": 0.5138888888888888,\n \"acc_norm_stderr\": 0.034086558679777494\n\ \ },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\"\ : 0.8235294117647058,\n \"acc_stderr\": 0.026756401538078966,\n \"\ acc_norm\": 0.8235294117647058,\n \"acc_norm_stderr\": 0.026756401538078966\n\ \ },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"\ acc\": 0.7763713080168776,\n \"acc_stderr\": 0.027123298205229966,\n \ \ \"acc_norm\": 0.7763713080168776,\n \"acc_norm_stderr\": 0.027123298205229966\n\ \ },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.6681614349775785,\n\ \ \"acc_stderr\": 0.03160295143776679,\n \"acc_norm\": 0.6681614349775785,\n\ \ \"acc_norm_stderr\": 0.03160295143776679\n },\n \"harness|hendrycksTest-human_sexuality|5\"\ : {\n \"acc\": 0.7938931297709924,\n \"acc_stderr\": 0.03547771004159464,\n\ \ \"acc_norm\": 0.7938931297709924,\n \"acc_norm_stderr\": 0.03547771004159464\n\ \ },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\":\ \ 0.8099173553719008,\n \"acc_stderr\": 
0.03581796951709282,\n \"\ acc_norm\": 0.8099173553719008,\n \"acc_norm_stderr\": 0.03581796951709282\n\ \ },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.8055555555555556,\n\ \ \"acc_stderr\": 0.038260763248848646,\n \"acc_norm\": 0.8055555555555556,\n\ \ \"acc_norm_stderr\": 0.038260763248848646\n },\n \"harness|hendrycksTest-logical_fallacies|5\"\ : {\n \"acc\": 0.7607361963190185,\n \"acc_stderr\": 0.0335195387952127,\n\ \ \"acc_norm\": 0.7607361963190185,\n \"acc_norm_stderr\": 0.0335195387952127\n\ \ },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.48214285714285715,\n\ \ \"acc_stderr\": 0.047427623612430116,\n \"acc_norm\": 0.48214285714285715,\n\ \ \"acc_norm_stderr\": 0.047427623612430116\n },\n \"harness|hendrycksTest-management|5\"\ : {\n \"acc\": 0.7766990291262136,\n \"acc_stderr\": 0.04123553189891431,\n\ \ \"acc_norm\": 0.7766990291262136,\n \"acc_norm_stderr\": 0.04123553189891431\n\ \ },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.8675213675213675,\n\ \ \"acc_stderr\": 0.022209309073165616,\n \"acc_norm\": 0.8675213675213675,\n\ \ \"acc_norm_stderr\": 0.022209309073165616\n },\n \"harness|hendrycksTest-medical_genetics|5\"\ : {\n \"acc\": 0.71,\n \"acc_stderr\": 0.045604802157206845,\n \ \ \"acc_norm\": 0.71,\n \"acc_norm_stderr\": 0.045604802157206845\n \ \ },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.8148148148148148,\n\ \ \"acc_stderr\": 0.013890862162876164,\n \"acc_norm\": 0.8148148148148148,\n\ \ \"acc_norm_stderr\": 0.013890862162876164\n },\n \"harness|hendrycksTest-moral_disputes|5\"\ : {\n \"acc\": 0.7196531791907514,\n \"acc_stderr\": 0.02418242749657761,\n\ \ \"acc_norm\": 0.7196531791907514,\n \"acc_norm_stderr\": 0.02418242749657761\n\ \ },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.3452513966480447,\n\ \ \"acc_stderr\": 0.015901432608930354,\n \"acc_norm\": 0.3452513966480447,\n\ \ \"acc_norm_stderr\": 0.015901432608930354\n },\n 
\"harness|hendrycksTest-nutrition|5\"\ : {\n \"acc\": 0.7189542483660131,\n \"acc_stderr\": 0.025738854797818733,\n\ \ \"acc_norm\": 0.7189542483660131,\n \"acc_norm_stderr\": 0.025738854797818733\n\ \ },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.7170418006430869,\n\ \ \"acc_stderr\": 0.025583062489984813,\n \"acc_norm\": 0.7170418006430869,\n\ \ \"acc_norm_stderr\": 0.025583062489984813\n },\n \"harness|hendrycksTest-prehistory|5\"\ : {\n \"acc\": 0.7376543209876543,\n \"acc_stderr\": 0.024477222856135107,\n\ \ \"acc_norm\": 0.7376543209876543,\n \"acc_norm_stderr\": 0.024477222856135107\n\ \ },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"\ acc\": 0.48936170212765956,\n \"acc_stderr\": 0.029820747191422473,\n \ \ \"acc_norm\": 0.48936170212765956,\n \"acc_norm_stderr\": 0.029820747191422473\n\ \ },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.45632333767926986,\n\ \ \"acc_stderr\": 0.012721420501462547,\n \"acc_norm\": 0.45632333767926986,\n\ \ \"acc_norm_stderr\": 0.012721420501462547\n },\n \"harness|hendrycksTest-professional_medicine|5\"\ : {\n \"acc\": 0.6948529411764706,\n \"acc_stderr\": 0.027971541370170595,\n\ \ \"acc_norm\": 0.6948529411764706,\n \"acc_norm_stderr\": 0.027971541370170595\n\ \ },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"\ acc\": 0.6650326797385621,\n \"acc_stderr\": 0.019094228167000318,\n \ \ \"acc_norm\": 0.6650326797385621,\n \"acc_norm_stderr\": 0.019094228167000318\n\ \ },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.7272727272727273,\n\ \ \"acc_stderr\": 0.04265792110940589,\n \"acc_norm\": 0.7272727272727273,\n\ \ \"acc_norm_stderr\": 0.04265792110940589\n },\n \"harness|hendrycksTest-security_studies|5\"\ : {\n \"acc\": 0.7224489795918367,\n \"acc_stderr\": 0.028666857790274648,\n\ \ \"acc_norm\": 0.7224489795918367,\n \"acc_norm_stderr\": 0.028666857790274648\n\ \ },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.8606965174129353,\n\ 
\ \"acc_stderr\": 0.024484487162913973,\n \"acc_norm\": 0.8606965174129353,\n\ \ \"acc_norm_stderr\": 0.024484487162913973\n },\n \"harness|hendrycksTest-us_foreign_policy|5\"\ : {\n \"acc\": 0.85,\n \"acc_stderr\": 0.03588702812826371,\n \ \ \"acc_norm\": 0.85,\n \"acc_norm_stderr\": 0.03588702812826371\n \ \ },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.536144578313253,\n\ \ \"acc_stderr\": 0.038823108508905954,\n \"acc_norm\": 0.536144578313253,\n\ \ \"acc_norm_stderr\": 0.038823108508905954\n },\n \"harness|hendrycksTest-world_religions|5\"\ : {\n \"acc\": 0.8245614035087719,\n \"acc_stderr\": 0.029170885500727665,\n\ \ \"acc_norm\": 0.8245614035087719,\n \"acc_norm_stderr\": 0.029170885500727665\n\ \ },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.38922888616891066,\n\ \ \"mc1_stderr\": 0.017068552680690335,\n \"mc2\": 0.5486913365249292,\n\ \ \"mc2_stderr\": 0.015339111406271384\n },\n \"harness|winogrande|5\"\ : {\n \"acc\": 0.8082083662194159,\n \"acc_stderr\": 0.011065209664659527\n\ \ },\n \"harness|gsm8k|5\": {\n \"acc\": 0.5375284306292646,\n \ \ \"acc_stderr\": 0.013733636059107759\n }\n}\n```" repo_url: https://huggingface.co/ABX-AI/Infinite-Laymons-9B leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard point_of_contact: clementine@hf.co configs: - config_name: harness_arc_challenge_25 data_files: - split: 2024_04_10T11_48_19.797016 path: - '**/details_harness|arc:challenge|25_2024-04-10T11-48-19.797016.parquet' - split: latest path: - '**/details_harness|arc:challenge|25_2024-04-10T11-48-19.797016.parquet' - config_name: harness_gsm8k_5 data_files: - split: 2024_04_10T11_48_19.797016 path: - '**/details_harness|gsm8k|5_2024-04-10T11-48-19.797016.parquet' - split: latest path: - '**/details_harness|gsm8k|5_2024-04-10T11-48-19.797016.parquet' - config_name: harness_hellaswag_10 data_files: - split: 2024_04_10T11_48_19.797016 path: - '**/details_harness|hellaswag|10_2024-04-10T11-48-19.797016.parquet' - split: 
latest path: - '**/details_harness|hellaswag|10_2024-04-10T11-48-19.797016.parquet' - config_name: harness_hendrycksTest_5 data_files: - split: 2024_04_10T11_48_19.797016 path: - '**/details_harness|hendrycksTest-abstract_algebra|5_2024-04-10T11-48-19.797016.parquet' - '**/details_harness|hendrycksTest-anatomy|5_2024-04-10T11-48-19.797016.parquet' - '**/details_harness|hendrycksTest-astronomy|5_2024-04-10T11-48-19.797016.parquet' - '**/details_harness|hendrycksTest-business_ethics|5_2024-04-10T11-48-19.797016.parquet' - '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-04-10T11-48-19.797016.parquet' - '**/details_harness|hendrycksTest-college_biology|5_2024-04-10T11-48-19.797016.parquet' - '**/details_harness|hendrycksTest-college_chemistry|5_2024-04-10T11-48-19.797016.parquet' - '**/details_harness|hendrycksTest-college_computer_science|5_2024-04-10T11-48-19.797016.parquet' - '**/details_harness|hendrycksTest-college_mathematics|5_2024-04-10T11-48-19.797016.parquet' - '**/details_harness|hendrycksTest-college_medicine|5_2024-04-10T11-48-19.797016.parquet' - '**/details_harness|hendrycksTest-college_physics|5_2024-04-10T11-48-19.797016.parquet' - '**/details_harness|hendrycksTest-computer_security|5_2024-04-10T11-48-19.797016.parquet' - '**/details_harness|hendrycksTest-conceptual_physics|5_2024-04-10T11-48-19.797016.parquet' - '**/details_harness|hendrycksTest-econometrics|5_2024-04-10T11-48-19.797016.parquet' - '**/details_harness|hendrycksTest-electrical_engineering|5_2024-04-10T11-48-19.797016.parquet' - '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-04-10T11-48-19.797016.parquet' - '**/details_harness|hendrycksTest-formal_logic|5_2024-04-10T11-48-19.797016.parquet' - '**/details_harness|hendrycksTest-global_facts|5_2024-04-10T11-48-19.797016.parquet' - '**/details_harness|hendrycksTest-high_school_biology|5_2024-04-10T11-48-19.797016.parquet' - 
'**/details_harness|hendrycksTest-high_school_chemistry|5_2024-04-10T11-48-19.797016.parquet' - '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-04-10T11-48-19.797016.parquet' - '**/details_harness|hendrycksTest-high_school_european_history|5_2024-04-10T11-48-19.797016.parquet' - '**/details_harness|hendrycksTest-high_school_geography|5_2024-04-10T11-48-19.797016.parquet' - '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-04-10T11-48-19.797016.parquet' - '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-04-10T11-48-19.797016.parquet' - '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-04-10T11-48-19.797016.parquet' - '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-04-10T11-48-19.797016.parquet' - '**/details_harness|hendrycksTest-high_school_physics|5_2024-04-10T11-48-19.797016.parquet' - '**/details_harness|hendrycksTest-high_school_psychology|5_2024-04-10T11-48-19.797016.parquet' - '**/details_harness|hendrycksTest-high_school_statistics|5_2024-04-10T11-48-19.797016.parquet' - '**/details_harness|hendrycksTest-high_school_us_history|5_2024-04-10T11-48-19.797016.parquet' - '**/details_harness|hendrycksTest-high_school_world_history|5_2024-04-10T11-48-19.797016.parquet' - '**/details_harness|hendrycksTest-human_aging|5_2024-04-10T11-48-19.797016.parquet' - '**/details_harness|hendrycksTest-human_sexuality|5_2024-04-10T11-48-19.797016.parquet' - '**/details_harness|hendrycksTest-international_law|5_2024-04-10T11-48-19.797016.parquet' - '**/details_harness|hendrycksTest-jurisprudence|5_2024-04-10T11-48-19.797016.parquet' - '**/details_harness|hendrycksTest-logical_fallacies|5_2024-04-10T11-48-19.797016.parquet' - '**/details_harness|hendrycksTest-machine_learning|5_2024-04-10T11-48-19.797016.parquet' - '**/details_harness|hendrycksTest-management|5_2024-04-10T11-48-19.797016.parquet' - '**/details_harness|hendrycksTest-marketing|5_2024-04-10T11-48-19.797016.parquet' 
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-04-10T11-48-19.797016.parquet' - '**/details_harness|hendrycksTest-miscellaneous|5_2024-04-10T11-48-19.797016.parquet' - '**/details_harness|hendrycksTest-moral_disputes|5_2024-04-10T11-48-19.797016.parquet' - '**/details_harness|hendrycksTest-moral_scenarios|5_2024-04-10T11-48-19.797016.parquet' - '**/details_harness|hendrycksTest-nutrition|5_2024-04-10T11-48-19.797016.parquet' - '**/details_harness|hendrycksTest-philosophy|5_2024-04-10T11-48-19.797016.parquet' - '**/details_harness|hendrycksTest-prehistory|5_2024-04-10T11-48-19.797016.parquet' - '**/details_harness|hendrycksTest-professional_accounting|5_2024-04-10T11-48-19.797016.parquet' - '**/details_harness|hendrycksTest-professional_law|5_2024-04-10T11-48-19.797016.parquet' - '**/details_harness|hendrycksTest-professional_medicine|5_2024-04-10T11-48-19.797016.parquet' - '**/details_harness|hendrycksTest-professional_psychology|5_2024-04-10T11-48-19.797016.parquet' - '**/details_harness|hendrycksTest-public_relations|5_2024-04-10T11-48-19.797016.parquet' - '**/details_harness|hendrycksTest-security_studies|5_2024-04-10T11-48-19.797016.parquet' - '**/details_harness|hendrycksTest-sociology|5_2024-04-10T11-48-19.797016.parquet' - '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-04-10T11-48-19.797016.parquet' - '**/details_harness|hendrycksTest-virology|5_2024-04-10T11-48-19.797016.parquet' - '**/details_harness|hendrycksTest-world_religions|5_2024-04-10T11-48-19.797016.parquet' - split: latest path: - '**/details_harness|hendrycksTest-abstract_algebra|5_2024-04-10T11-48-19.797016.parquet' - '**/details_harness|hendrycksTest-anatomy|5_2024-04-10T11-48-19.797016.parquet' - '**/details_harness|hendrycksTest-astronomy|5_2024-04-10T11-48-19.797016.parquet' - '**/details_harness|hendrycksTest-business_ethics|5_2024-04-10T11-48-19.797016.parquet' - '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-04-10T11-48-19.797016.parquet' - 
'**/details_harness|hendrycksTest-college_biology|5_2024-04-10T11-48-19.797016.parquet' - '**/details_harness|hendrycksTest-college_chemistry|5_2024-04-10T11-48-19.797016.parquet' - '**/details_harness|hendrycksTest-college_computer_science|5_2024-04-10T11-48-19.797016.parquet' - '**/details_harness|hendrycksTest-college_mathematics|5_2024-04-10T11-48-19.797016.parquet' - '**/details_harness|hendrycksTest-college_medicine|5_2024-04-10T11-48-19.797016.parquet' - '**/details_harness|hendrycksTest-college_physics|5_2024-04-10T11-48-19.797016.parquet' - '**/details_harness|hendrycksTest-computer_security|5_2024-04-10T11-48-19.797016.parquet' - '**/details_harness|hendrycksTest-conceptual_physics|5_2024-04-10T11-48-19.797016.parquet' - '**/details_harness|hendrycksTest-econometrics|5_2024-04-10T11-48-19.797016.parquet' - '**/details_harness|hendrycksTest-electrical_engineering|5_2024-04-10T11-48-19.797016.parquet' - '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-04-10T11-48-19.797016.parquet' - '**/details_harness|hendrycksTest-formal_logic|5_2024-04-10T11-48-19.797016.parquet' - '**/details_harness|hendrycksTest-global_facts|5_2024-04-10T11-48-19.797016.parquet' - '**/details_harness|hendrycksTest-high_school_biology|5_2024-04-10T11-48-19.797016.parquet' - '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-04-10T11-48-19.797016.parquet' - '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-04-10T11-48-19.797016.parquet' - '**/details_harness|hendrycksTest-high_school_european_history|5_2024-04-10T11-48-19.797016.parquet' - '**/details_harness|hendrycksTest-high_school_geography|5_2024-04-10T11-48-19.797016.parquet' - '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-04-10T11-48-19.797016.parquet' - '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-04-10T11-48-19.797016.parquet' - '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-04-10T11-48-19.797016.parquet' - 
'**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-04-10T11-48-19.797016.parquet' - '**/details_harness|hendrycksTest-high_school_physics|5_2024-04-10T11-48-19.797016.parquet' - '**/details_harness|hendrycksTest-high_school_psychology|5_2024-04-10T11-48-19.797016.parquet' - '**/details_harness|hendrycksTest-high_school_statistics|5_2024-04-10T11-48-19.797016.parquet' - '**/details_harness|hendrycksTest-high_school_us_history|5_2024-04-10T11-48-19.797016.parquet' - '**/details_harness|hendrycksTest-high_school_world_history|5_2024-04-10T11-48-19.797016.parquet' - '**/details_harness|hendrycksTest-human_aging|5_2024-04-10T11-48-19.797016.parquet' - '**/details_harness|hendrycksTest-human_sexuality|5_2024-04-10T11-48-19.797016.parquet' - '**/details_harness|hendrycksTest-international_law|5_2024-04-10T11-48-19.797016.parquet' - '**/details_harness|hendrycksTest-jurisprudence|5_2024-04-10T11-48-19.797016.parquet' - '**/details_harness|hendrycksTest-logical_fallacies|5_2024-04-10T11-48-19.797016.parquet' - '**/details_harness|hendrycksTest-machine_learning|5_2024-04-10T11-48-19.797016.parquet' - '**/details_harness|hendrycksTest-management|5_2024-04-10T11-48-19.797016.parquet' - '**/details_harness|hendrycksTest-marketing|5_2024-04-10T11-48-19.797016.parquet' - '**/details_harness|hendrycksTest-medical_genetics|5_2024-04-10T11-48-19.797016.parquet' - '**/details_harness|hendrycksTest-miscellaneous|5_2024-04-10T11-48-19.797016.parquet' - '**/details_harness|hendrycksTest-moral_disputes|5_2024-04-10T11-48-19.797016.parquet' - '**/details_harness|hendrycksTest-moral_scenarios|5_2024-04-10T11-48-19.797016.parquet' - '**/details_harness|hendrycksTest-nutrition|5_2024-04-10T11-48-19.797016.parquet' - '**/details_harness|hendrycksTest-philosophy|5_2024-04-10T11-48-19.797016.parquet' - '**/details_harness|hendrycksTest-prehistory|5_2024-04-10T11-48-19.797016.parquet' - 
'**/details_harness|hendrycksTest-professional_accounting|5_2024-04-10T11-48-19.797016.parquet' - '**/details_harness|hendrycksTest-professional_law|5_2024-04-10T11-48-19.797016.parquet' - '**/details_harness|hendrycksTest-professional_medicine|5_2024-04-10T11-48-19.797016.parquet' - '**/details_harness|hendrycksTest-professional_psychology|5_2024-04-10T11-48-19.797016.parquet' - '**/details_harness|hendrycksTest-public_relations|5_2024-04-10T11-48-19.797016.parquet' - '**/details_harness|hendrycksTest-security_studies|5_2024-04-10T11-48-19.797016.parquet' - '**/details_harness|hendrycksTest-sociology|5_2024-04-10T11-48-19.797016.parquet' - '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-04-10T11-48-19.797016.parquet' - '**/details_harness|hendrycksTest-virology|5_2024-04-10T11-48-19.797016.parquet' - '**/details_harness|hendrycksTest-world_religions|5_2024-04-10T11-48-19.797016.parquet' - config_name: harness_hendrycksTest_abstract_algebra_5 data_files: - split: 2024_04_10T11_48_19.797016 path: - '**/details_harness|hendrycksTest-abstract_algebra|5_2024-04-10T11-48-19.797016.parquet' - split: latest path: - '**/details_harness|hendrycksTest-abstract_algebra|5_2024-04-10T11-48-19.797016.parquet' - config_name: harness_hendrycksTest_anatomy_5 data_files: - split: 2024_04_10T11_48_19.797016 path: - '**/details_harness|hendrycksTest-anatomy|5_2024-04-10T11-48-19.797016.parquet' - split: latest path: - '**/details_harness|hendrycksTest-anatomy|5_2024-04-10T11-48-19.797016.parquet' - config_name: harness_hendrycksTest_astronomy_5 data_files: - split: 2024_04_10T11_48_19.797016 path: - '**/details_harness|hendrycksTest-astronomy|5_2024-04-10T11-48-19.797016.parquet' - split: latest path: - '**/details_harness|hendrycksTest-astronomy|5_2024-04-10T11-48-19.797016.parquet' - config_name: harness_hendrycksTest_business_ethics_5 data_files: - split: 2024_04_10T11_48_19.797016 path: - 
'**/details_harness|hendrycksTest-business_ethics|5_2024-04-10T11-48-19.797016.parquet' - split: latest path: - '**/details_harness|hendrycksTest-business_ethics|5_2024-04-10T11-48-19.797016.parquet' - config_name: harness_hendrycksTest_clinical_knowledge_5 data_files: - split: 2024_04_10T11_48_19.797016 path: - '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-04-10T11-48-19.797016.parquet' - split: latest path: - '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-04-10T11-48-19.797016.parquet' - config_name: harness_hendrycksTest_college_biology_5 data_files: - split: 2024_04_10T11_48_19.797016 path: - '**/details_harness|hendrycksTest-college_biology|5_2024-04-10T11-48-19.797016.parquet' - split: latest path: - '**/details_harness|hendrycksTest-college_biology|5_2024-04-10T11-48-19.797016.parquet' - config_name: harness_hendrycksTest_college_chemistry_5 data_files: - split: 2024_04_10T11_48_19.797016 path: - '**/details_harness|hendrycksTest-college_chemistry|5_2024-04-10T11-48-19.797016.parquet' - split: latest path: - '**/details_harness|hendrycksTest-college_chemistry|5_2024-04-10T11-48-19.797016.parquet' - config_name: harness_hendrycksTest_college_computer_science_5 data_files: - split: 2024_04_10T11_48_19.797016 path: - '**/details_harness|hendrycksTest-college_computer_science|5_2024-04-10T11-48-19.797016.parquet' - split: latest path: - '**/details_harness|hendrycksTest-college_computer_science|5_2024-04-10T11-48-19.797016.parquet' - config_name: harness_hendrycksTest_college_mathematics_5 data_files: - split: 2024_04_10T11_48_19.797016 path: - '**/details_harness|hendrycksTest-college_mathematics|5_2024-04-10T11-48-19.797016.parquet' - split: latest path: - '**/details_harness|hendrycksTest-college_mathematics|5_2024-04-10T11-48-19.797016.parquet' - config_name: harness_hendrycksTest_college_medicine_5 data_files: - split: 2024_04_10T11_48_19.797016 path: - 
'**/details_harness|hendrycksTest-college_medicine|5_2024-04-10T11-48-19.797016.parquet' - split: latest path: - '**/details_harness|hendrycksTest-college_medicine|5_2024-04-10T11-48-19.797016.parquet' - config_name: harness_hendrycksTest_college_physics_5 data_files: - split: 2024_04_10T11_48_19.797016 path: - '**/details_harness|hendrycksTest-college_physics|5_2024-04-10T11-48-19.797016.parquet' - split: latest path: - '**/details_harness|hendrycksTest-college_physics|5_2024-04-10T11-48-19.797016.parquet' - config_name: harness_hendrycksTest_computer_security_5 data_files: - split: 2024_04_10T11_48_19.797016 path: - '**/details_harness|hendrycksTest-computer_security|5_2024-04-10T11-48-19.797016.parquet' - split: latest path: - '**/details_harness|hendrycksTest-computer_security|5_2024-04-10T11-48-19.797016.parquet' - config_name: harness_hendrycksTest_conceptual_physics_5 data_files: - split: 2024_04_10T11_48_19.797016 path: - '**/details_harness|hendrycksTest-conceptual_physics|5_2024-04-10T11-48-19.797016.parquet' - split: latest path: - '**/details_harness|hendrycksTest-conceptual_physics|5_2024-04-10T11-48-19.797016.parquet' - config_name: harness_hendrycksTest_econometrics_5 data_files: - split: 2024_04_10T11_48_19.797016 path: - '**/details_harness|hendrycksTest-econometrics|5_2024-04-10T11-48-19.797016.parquet' - split: latest path: - '**/details_harness|hendrycksTest-econometrics|5_2024-04-10T11-48-19.797016.parquet' - config_name: harness_hendrycksTest_electrical_engineering_5 data_files: - split: 2024_04_10T11_48_19.797016 path: - '**/details_harness|hendrycksTest-electrical_engineering|5_2024-04-10T11-48-19.797016.parquet' - split: latest path: - '**/details_harness|hendrycksTest-electrical_engineering|5_2024-04-10T11-48-19.797016.parquet' - config_name: harness_hendrycksTest_elementary_mathematics_5 data_files: - split: 2024_04_10T11_48_19.797016 path: - '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-04-10T11-48-19.797016.parquet' - 
split: latest path: - '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-04-10T11-48-19.797016.parquet' - config_name: harness_hendrycksTest_formal_logic_5 data_files: - split: 2024_04_10T11_48_19.797016 path: - '**/details_harness|hendrycksTest-formal_logic|5_2024-04-10T11-48-19.797016.parquet' - split: latest path: - '**/details_harness|hendrycksTest-formal_logic|5_2024-04-10T11-48-19.797016.parquet' - config_name: harness_hendrycksTest_global_facts_5 data_files: - split: 2024_04_10T11_48_19.797016 path: - '**/details_harness|hendrycksTest-global_facts|5_2024-04-10T11-48-19.797016.parquet' - split: latest path: - '**/details_harness|hendrycksTest-global_facts|5_2024-04-10T11-48-19.797016.parquet' - config_name: harness_hendrycksTest_high_school_biology_5 data_files: - split: 2024_04_10T11_48_19.797016 path: - '**/details_harness|hendrycksTest-high_school_biology|5_2024-04-10T11-48-19.797016.parquet' - split: latest path: - '**/details_harness|hendrycksTest-high_school_biology|5_2024-04-10T11-48-19.797016.parquet' - config_name: harness_hendrycksTest_high_school_chemistry_5 data_files: - split: 2024_04_10T11_48_19.797016 path: - '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-04-10T11-48-19.797016.parquet' - split: latest path: - '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-04-10T11-48-19.797016.parquet' - config_name: harness_hendrycksTest_high_school_computer_science_5 data_files: - split: 2024_04_10T11_48_19.797016 path: - '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-04-10T11-48-19.797016.parquet' - split: latest path: - '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-04-10T11-48-19.797016.parquet' - config_name: harness_hendrycksTest_high_school_european_history_5 data_files: - split: 2024_04_10T11_48_19.797016 path: - '**/details_harness|hendrycksTest-high_school_european_history|5_2024-04-10T11-48-19.797016.parquet' - split: latest path: - 
'**/details_harness|hendrycksTest-high_school_european_history|5_2024-04-10T11-48-19.797016.parquet' - config_name: harness_hendrycksTest_high_school_geography_5 data_files: - split: 2024_04_10T11_48_19.797016 path: - '**/details_harness|hendrycksTest-high_school_geography|5_2024-04-10T11-48-19.797016.parquet' - split: latest path: - '**/details_harness|hendrycksTest-high_school_geography|5_2024-04-10T11-48-19.797016.parquet' - config_name: harness_hendrycksTest_high_school_government_and_politics_5 data_files: - split: 2024_04_10T11_48_19.797016 path: - '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-04-10T11-48-19.797016.parquet' - split: latest path: - '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-04-10T11-48-19.797016.parquet' - config_name: harness_hendrycksTest_high_school_macroeconomics_5 data_files: - split: 2024_04_10T11_48_19.797016 path: - '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-04-10T11-48-19.797016.parquet' - split: latest path: - '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-04-10T11-48-19.797016.parquet' - config_name: harness_hendrycksTest_high_school_mathematics_5 data_files: - split: 2024_04_10T11_48_19.797016 path: - '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-04-10T11-48-19.797016.parquet' - split: latest path: - '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-04-10T11-48-19.797016.parquet' - config_name: harness_hendrycksTest_high_school_microeconomics_5 data_files: - split: 2024_04_10T11_48_19.797016 path: - '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-04-10T11-48-19.797016.parquet' - split: latest path: - '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-04-10T11-48-19.797016.parquet' - config_name: harness_hendrycksTest_high_school_physics_5 data_files: - split: 2024_04_10T11_48_19.797016 path: - 
'**/details_harness|hendrycksTest-high_school_physics|5_2024-04-10T11-48-19.797016.parquet' - split: latest path: - '**/details_harness|hendrycksTest-high_school_physics|5_2024-04-10T11-48-19.797016.parquet' - config_name: harness_hendrycksTest_high_school_psychology_5 data_files: - split: 2024_04_10T11_48_19.797016 path: - '**/details_harness|hendrycksTest-high_school_psychology|5_2024-04-10T11-48-19.797016.parquet' - split: latest path: - '**/details_harness|hendrycksTest-high_school_psychology|5_2024-04-10T11-48-19.797016.parquet' - config_name: harness_hendrycksTest_high_school_statistics_5 data_files: - split: 2024_04_10T11_48_19.797016 path: - '**/details_harness|hendrycksTest-high_school_statistics|5_2024-04-10T11-48-19.797016.parquet' - split: latest path: - '**/details_harness|hendrycksTest-high_school_statistics|5_2024-04-10T11-48-19.797016.parquet' - config_name: harness_hendrycksTest_high_school_us_history_5 data_files: - split: 2024_04_10T11_48_19.797016 path: - '**/details_harness|hendrycksTest-high_school_us_history|5_2024-04-10T11-48-19.797016.parquet' - split: latest path: - '**/details_harness|hendrycksTest-high_school_us_history|5_2024-04-10T11-48-19.797016.parquet' - config_name: harness_hendrycksTest_high_school_world_history_5 data_files: - split: 2024_04_10T11_48_19.797016 path: - '**/details_harness|hendrycksTest-high_school_world_history|5_2024-04-10T11-48-19.797016.parquet' - split: latest path: - '**/details_harness|hendrycksTest-high_school_world_history|5_2024-04-10T11-48-19.797016.parquet' - config_name: harness_hendrycksTest_human_aging_5 data_files: - split: 2024_04_10T11_48_19.797016 path: - '**/details_harness|hendrycksTest-human_aging|5_2024-04-10T11-48-19.797016.parquet' - split: latest path: - '**/details_harness|hendrycksTest-human_aging|5_2024-04-10T11-48-19.797016.parquet' - config_name: harness_hendrycksTest_human_sexuality_5 data_files: - split: 2024_04_10T11_48_19.797016 path: - 
'**/details_harness|hendrycksTest-human_sexuality|5_2024-04-10T11-48-19.797016.parquet' - split: latest path: - '**/details_harness|hendrycksTest-human_sexuality|5_2024-04-10T11-48-19.797016.parquet' - config_name: harness_hendrycksTest_international_law_5 data_files: - split: 2024_04_10T11_48_19.797016 path: - '**/details_harness|hendrycksTest-international_law|5_2024-04-10T11-48-19.797016.parquet' - split: latest path: - '**/details_harness|hendrycksTest-international_law|5_2024-04-10T11-48-19.797016.parquet' - config_name: harness_hendrycksTest_jurisprudence_5 data_files: - split: 2024_04_10T11_48_19.797016 path: - '**/details_harness|hendrycksTest-jurisprudence|5_2024-04-10T11-48-19.797016.parquet' - split: latest path: - '**/details_harness|hendrycksTest-jurisprudence|5_2024-04-10T11-48-19.797016.parquet' - config_name: harness_hendrycksTest_logical_fallacies_5 data_files: - split: 2024_04_10T11_48_19.797016 path: - '**/details_harness|hendrycksTest-logical_fallacies|5_2024-04-10T11-48-19.797016.parquet' - split: latest path: - '**/details_harness|hendrycksTest-logical_fallacies|5_2024-04-10T11-48-19.797016.parquet' - config_name: harness_hendrycksTest_machine_learning_5 data_files: - split: 2024_04_10T11_48_19.797016 path: - '**/details_harness|hendrycksTest-machine_learning|5_2024-04-10T11-48-19.797016.parquet' - split: latest path: - '**/details_harness|hendrycksTest-machine_learning|5_2024-04-10T11-48-19.797016.parquet' - config_name: harness_hendrycksTest_management_5 data_files: - split: 2024_04_10T11_48_19.797016 path: - '**/details_harness|hendrycksTest-management|5_2024-04-10T11-48-19.797016.parquet' - split: latest path: - '**/details_harness|hendrycksTest-management|5_2024-04-10T11-48-19.797016.parquet' - config_name: harness_hendrycksTest_marketing_5 data_files: - split: 2024_04_10T11_48_19.797016 path: - '**/details_harness|hendrycksTest-marketing|5_2024-04-10T11-48-19.797016.parquet' - split: latest path: - 
'**/details_harness|hendrycksTest-marketing|5_2024-04-10T11-48-19.797016.parquet' - config_name: harness_hendrycksTest_medical_genetics_5 data_files: - split: 2024_04_10T11_48_19.797016 path: - '**/details_harness|hendrycksTest-medical_genetics|5_2024-04-10T11-48-19.797016.parquet' - split: latest path: - '**/details_harness|hendrycksTest-medical_genetics|5_2024-04-10T11-48-19.797016.parquet' - config_name: harness_hendrycksTest_miscellaneous_5 data_files: - split: 2024_04_10T11_48_19.797016 path: - '**/details_harness|hendrycksTest-miscellaneous|5_2024-04-10T11-48-19.797016.parquet' - split: latest path: - '**/details_harness|hendrycksTest-miscellaneous|5_2024-04-10T11-48-19.797016.parquet' - config_name: harness_hendrycksTest_moral_disputes_5 data_files: - split: 2024_04_10T11_48_19.797016 path: - '**/details_harness|hendrycksTest-moral_disputes|5_2024-04-10T11-48-19.797016.parquet' - split: latest path: - '**/details_harness|hendrycksTest-moral_disputes|5_2024-04-10T11-48-19.797016.parquet' - config_name: harness_hendrycksTest_moral_scenarios_5 data_files: - split: 2024_04_10T11_48_19.797016 path: - '**/details_harness|hendrycksTest-moral_scenarios|5_2024-04-10T11-48-19.797016.parquet' - split: latest path: - '**/details_harness|hendrycksTest-moral_scenarios|5_2024-04-10T11-48-19.797016.parquet' - config_name: harness_hendrycksTest_nutrition_5 data_files: - split: 2024_04_10T11_48_19.797016 path: - '**/details_harness|hendrycksTest-nutrition|5_2024-04-10T11-48-19.797016.parquet' - split: latest path: - '**/details_harness|hendrycksTest-nutrition|5_2024-04-10T11-48-19.797016.parquet' - config_name: harness_hendrycksTest_philosophy_5 data_files: - split: 2024_04_10T11_48_19.797016 path: - '**/details_harness|hendrycksTest-philosophy|5_2024-04-10T11-48-19.797016.parquet' - split: latest path: - '**/details_harness|hendrycksTest-philosophy|5_2024-04-10T11-48-19.797016.parquet' - config_name: harness_hendrycksTest_prehistory_5 data_files: - split: 
2024_04_10T11_48_19.797016 path: - '**/details_harness|hendrycksTest-prehistory|5_2024-04-10T11-48-19.797016.parquet' - split: latest path: - '**/details_harness|hendrycksTest-prehistory|5_2024-04-10T11-48-19.797016.parquet' - config_name: harness_hendrycksTest_professional_accounting_5 data_files: - split: 2024_04_10T11_48_19.797016 path: - '**/details_harness|hendrycksTest-professional_accounting|5_2024-04-10T11-48-19.797016.parquet' - split: latest path: - '**/details_harness|hendrycksTest-professional_accounting|5_2024-04-10T11-48-19.797016.parquet' - config_name: harness_hendrycksTest_professional_law_5 data_files: - split: 2024_04_10T11_48_19.797016 path: - '**/details_harness|hendrycksTest-professional_law|5_2024-04-10T11-48-19.797016.parquet' - split: latest path: - '**/details_harness|hendrycksTest-professional_law|5_2024-04-10T11-48-19.797016.parquet' - config_name: harness_hendrycksTest_professional_medicine_5 data_files: - split: 2024_04_10T11_48_19.797016 path: - '**/details_harness|hendrycksTest-professional_medicine|5_2024-04-10T11-48-19.797016.parquet' - split: latest path: - '**/details_harness|hendrycksTest-professional_medicine|5_2024-04-10T11-48-19.797016.parquet' - config_name: harness_hendrycksTest_professional_psychology_5 data_files: - split: 2024_04_10T11_48_19.797016 path: - '**/details_harness|hendrycksTest-professional_psychology|5_2024-04-10T11-48-19.797016.parquet' - split: latest path: - '**/details_harness|hendrycksTest-professional_psychology|5_2024-04-10T11-48-19.797016.parquet' - config_name: harness_hendrycksTest_public_relations_5 data_files: - split: 2024_04_10T11_48_19.797016 path: - '**/details_harness|hendrycksTest-public_relations|5_2024-04-10T11-48-19.797016.parquet' - split: latest path: - '**/details_harness|hendrycksTest-public_relations|5_2024-04-10T11-48-19.797016.parquet' - config_name: harness_hendrycksTest_security_studies_5 data_files: - split: 2024_04_10T11_48_19.797016 path: - 
'**/details_harness|hendrycksTest-security_studies|5_2024-04-10T11-48-19.797016.parquet' - split: latest path: - '**/details_harness|hendrycksTest-security_studies|5_2024-04-10T11-48-19.797016.parquet' - config_name: harness_hendrycksTest_sociology_5 data_files: - split: 2024_04_10T11_48_19.797016 path: - '**/details_harness|hendrycksTest-sociology|5_2024-04-10T11-48-19.797016.parquet' - split: latest path: - '**/details_harness|hendrycksTest-sociology|5_2024-04-10T11-48-19.797016.parquet' - config_name: harness_hendrycksTest_us_foreign_policy_5 data_files: - split: 2024_04_10T11_48_19.797016 path: - '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-04-10T11-48-19.797016.parquet' - split: latest path: - '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-04-10T11-48-19.797016.parquet' - config_name: harness_hendrycksTest_virology_5 data_files: - split: 2024_04_10T11_48_19.797016 path: - '**/details_harness|hendrycksTest-virology|5_2024-04-10T11-48-19.797016.parquet' - split: latest path: - '**/details_harness|hendrycksTest-virology|5_2024-04-10T11-48-19.797016.parquet' - config_name: harness_hendrycksTest_world_religions_5 data_files: - split: 2024_04_10T11_48_19.797016 path: - '**/details_harness|hendrycksTest-world_religions|5_2024-04-10T11-48-19.797016.parquet' - split: latest path: - '**/details_harness|hendrycksTest-world_religions|5_2024-04-10T11-48-19.797016.parquet' - config_name: harness_truthfulqa_mc_0 data_files: - split: 2024_04_10T11_48_19.797016 path: - '**/details_harness|truthfulqa:mc|0_2024-04-10T11-48-19.797016.parquet' - split: latest path: - '**/details_harness|truthfulqa:mc|0_2024-04-10T11-48-19.797016.parquet' - config_name: harness_winogrande_5 data_files: - split: 2024_04_10T11_48_19.797016 path: - '**/details_harness|winogrande|5_2024-04-10T11-48-19.797016.parquet' - split: latest path: - '**/details_harness|winogrande|5_2024-04-10T11-48-19.797016.parquet' - config_name: results data_files: - split: 
2024_04_10T11_48_19.797016 path: - results_2024-04-10T11-48-19.797016.parquet - split: latest path: - results_2024-04-10T11-48-19.797016.parquet --- # Dataset Card for Evaluation run of ABX-AI/Infinite-Laymons-9B <!-- Provide a quick summary of the dataset. --> Dataset automatically created during the evaluation run of model [ABX-AI/Infinite-Laymons-9B](https://huggingface.co/ABX-AI/Infinite-Laymons-9B) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard). The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks. The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results. An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)). To load the details from a run, you can, for instance, do the following: ```python from datasets import load_dataset data = load_dataset("open-llm-leaderboard/details_ABX-AI__Infinite-Laymons-9B", "harness_winogrande_5", split="train") ``` ## Latest results These are the [latest results from run 2024-04-10T11:48:19.797016](https://huggingface.co/datasets/open-llm-leaderboard/details_ABX-AI__Infinite-Laymons-9B/blob/main/results_2024-04-10T11-48-19.797016.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks.
You can find each in the results and the "latest" split for each eval): ```python { "all": { "acc": 0.645959924426964, "acc_stderr": 0.032166986596234216, "acc_norm": 0.6488478418620767, "acc_norm_stderr": 0.03281361203349943, "mc1": 0.38922888616891066, "mc1_stderr": 0.017068552680690335, "mc2": 0.5486913365249292, "mc2_stderr": 0.015339111406271384 }, "harness|arc:challenge|25": { "acc": 0.6254266211604096, "acc_stderr": 0.014144193471893446, "acc_norm": 0.6561433447098977, "acc_norm_stderr": 0.013880644570156213 }, "harness|hellaswag|10": { "acc": 0.6478789085839474, "acc_stderr": 0.004766553336917499, "acc_norm": 0.8413662617008564, "acc_norm_stderr": 0.003645875568601287 }, "harness|hendrycksTest-abstract_algebra|5": { "acc": 0.32, "acc_stderr": 0.046882617226215034, "acc_norm": 0.32, "acc_norm_stderr": 0.046882617226215034 }, "harness|hendrycksTest-anatomy|5": { "acc": 0.6222222222222222, "acc_stderr": 0.04188307537595852, "acc_norm": 0.6222222222222222, "acc_norm_stderr": 0.04188307537595852 }, "harness|hendrycksTest-astronomy|5": { "acc": 0.6644736842105263, "acc_stderr": 0.038424985593952694, "acc_norm": 0.6644736842105263, "acc_norm_stderr": 0.038424985593952694 }, "harness|hendrycksTest-business_ethics|5": { "acc": 0.64, "acc_stderr": 0.04824181513244218, "acc_norm": 0.64, "acc_norm_stderr": 0.04824181513244218 }, "harness|hendrycksTest-clinical_knowledge|5": { "acc": 0.6981132075471698, "acc_stderr": 0.02825420034443866, "acc_norm": 0.6981132075471698, "acc_norm_stderr": 0.02825420034443866 }, "harness|hendrycksTest-college_biology|5": { "acc": 0.7638888888888888, "acc_stderr": 0.03551446610810826, "acc_norm": 0.7638888888888888, "acc_norm_stderr": 0.03551446610810826 }, "harness|hendrycksTest-college_chemistry|5": { "acc": 0.45, "acc_stderr": 0.05, "acc_norm": 0.45, "acc_norm_stderr": 0.05 }, "harness|hendrycksTest-college_computer_science|5": { "acc": 0.56, "acc_stderr": 0.04988876515698589, "acc_norm": 0.56, "acc_norm_stderr": 0.04988876515698589 },
"harness|hendrycksTest-college_mathematics|5": { "acc": 0.35, "acc_stderr": 0.047937248544110196, "acc_norm": 0.35, "acc_norm_stderr": 0.047937248544110196 }, "harness|hendrycksTest-college_medicine|5": { "acc": 0.6705202312138728, "acc_stderr": 0.03583901754736412, "acc_norm": 0.6705202312138728, "acc_norm_stderr": 0.03583901754736412 }, "harness|hendrycksTest-college_physics|5": { "acc": 0.4117647058823529, "acc_stderr": 0.048971049527263666, "acc_norm": 0.4117647058823529, "acc_norm_stderr": 0.048971049527263666 }, "harness|hendrycksTest-computer_security|5": { "acc": 0.78, "acc_stderr": 0.04163331998932261, "acc_norm": 0.78, "acc_norm_stderr": 0.04163331998932261 }, "harness|hendrycksTest-conceptual_physics|5": { "acc": 0.574468085106383, "acc_stderr": 0.03232146916224468, "acc_norm": 0.574468085106383, "acc_norm_stderr": 0.03232146916224468 }, "harness|hendrycksTest-econometrics|5": { "acc": 0.45614035087719296, "acc_stderr": 0.04685473041907789, "acc_norm": 0.45614035087719296, "acc_norm_stderr": 0.04685473041907789 }, "harness|hendrycksTest-electrical_engineering|5": { "acc": 0.5310344827586206, "acc_stderr": 0.04158632762097828, "acc_norm": 0.5310344827586206, "acc_norm_stderr": 0.04158632762097828 }, "harness|hendrycksTest-elementary_mathematics|5": { "acc": 0.4021164021164021, "acc_stderr": 0.025253032554997685, "acc_norm": 0.4021164021164021, "acc_norm_stderr": 0.025253032554997685 }, "harness|hendrycksTest-formal_logic|5": { "acc": 0.47619047619047616, "acc_stderr": 0.04467062628403273, "acc_norm": 0.47619047619047616, "acc_norm_stderr": 0.04467062628403273 }, "harness|hendrycksTest-global_facts|5": { "acc": 0.39, "acc_stderr": 0.04902071300001975, "acc_norm": 0.39, "acc_norm_stderr": 0.04902071300001975 }, "harness|hendrycksTest-high_school_biology|5": { "acc": 0.7903225806451613, "acc_stderr": 0.023157879349083525, "acc_norm": 0.7903225806451613, "acc_norm_stderr": 0.023157879349083525 }, "harness|hendrycksTest-high_school_chemistry|5": { "acc": 
0.4876847290640394, "acc_stderr": 0.035169204442208966, "acc_norm": 0.4876847290640394, "acc_norm_stderr": 0.035169204442208966 }, "harness|hendrycksTest-high_school_computer_science|5": { "acc": 0.69, "acc_stderr": 0.04648231987117316, "acc_norm": 0.69, "acc_norm_stderr": 0.04648231987117316 }, "harness|hendrycksTest-high_school_european_history|5": { "acc": 0.7575757575757576, "acc_stderr": 0.03346409881055953, "acc_norm": 0.7575757575757576, "acc_norm_stderr": 0.03346409881055953 }, "harness|hendrycksTest-high_school_geography|5": { "acc": 0.7878787878787878, "acc_stderr": 0.029126522834586804, "acc_norm": 0.7878787878787878, "acc_norm_stderr": 0.029126522834586804 }, "harness|hendrycksTest-high_school_government_and_politics|5": { "acc": 0.8911917098445595, "acc_stderr": 0.022473253332768776, "acc_norm": 0.8911917098445595, "acc_norm_stderr": 0.022473253332768776 }, "harness|hendrycksTest-high_school_macroeconomics|5": { "acc": 0.6923076923076923, "acc_stderr": 0.023400928918310495, "acc_norm": 0.6923076923076923, "acc_norm_stderr": 0.023400928918310495 }, "harness|hendrycksTest-high_school_mathematics|5": { "acc": 0.35185185185185186, "acc_stderr": 0.02911661760608301, "acc_norm": 0.35185185185185186, "acc_norm_stderr": 0.02911661760608301 }, "harness|hendrycksTest-high_school_microeconomics|5": { "acc": 0.7016806722689075, "acc_stderr": 0.029719142876342853, "acc_norm": 0.7016806722689075, "acc_norm_stderr": 0.029719142876342853 }, "harness|hendrycksTest-high_school_physics|5": { "acc": 0.36423841059602646, "acc_stderr": 0.03929111781242742, "acc_norm": 0.36423841059602646, "acc_norm_stderr": 0.03929111781242742 }, "harness|hendrycksTest-high_school_psychology|5": { "acc": 0.8403669724770643, "acc_stderr": 0.015703498348461763, "acc_norm": 0.8403669724770643, "acc_norm_stderr": 0.015703498348461763 }, "harness|hendrycksTest-high_school_statistics|5": { "acc": 0.5138888888888888, "acc_stderr": 0.034086558679777494, "acc_norm": 0.5138888888888888, 
"acc_norm_stderr": 0.034086558679777494 }, "harness|hendrycksTest-high_school_us_history|5": { "acc": 0.8235294117647058, "acc_stderr": 0.026756401538078966, "acc_norm": 0.8235294117647058, "acc_norm_stderr": 0.026756401538078966 }, "harness|hendrycksTest-high_school_world_history|5": { "acc": 0.7763713080168776, "acc_stderr": 0.027123298205229966, "acc_norm": 0.7763713080168776, "acc_norm_stderr": 0.027123298205229966 }, "harness|hendrycksTest-human_aging|5": { "acc": 0.6681614349775785, "acc_stderr": 0.03160295143776679, "acc_norm": 0.6681614349775785, "acc_norm_stderr": 0.03160295143776679 }, "harness|hendrycksTest-human_sexuality|5": { "acc": 0.7938931297709924, "acc_stderr": 0.03547771004159464, "acc_norm": 0.7938931297709924, "acc_norm_stderr": 0.03547771004159464 }, "harness|hendrycksTest-international_law|5": { "acc": 0.8099173553719008, "acc_stderr": 0.03581796951709282, "acc_norm": 0.8099173553719008, "acc_norm_stderr": 0.03581796951709282 }, "harness|hendrycksTest-jurisprudence|5": { "acc": 0.8055555555555556, "acc_stderr": 0.038260763248848646, "acc_norm": 0.8055555555555556, "acc_norm_stderr": 0.038260763248848646 }, "harness|hendrycksTest-logical_fallacies|5": { "acc": 0.7607361963190185, "acc_stderr": 0.0335195387952127, "acc_norm": 0.7607361963190185, "acc_norm_stderr": 0.0335195387952127 }, "harness|hendrycksTest-machine_learning|5": { "acc": 0.48214285714285715, "acc_stderr": 0.047427623612430116, "acc_norm": 0.48214285714285715, "acc_norm_stderr": 0.047427623612430116 }, "harness|hendrycksTest-management|5": { "acc": 0.7766990291262136, "acc_stderr": 0.04123553189891431, "acc_norm": 0.7766990291262136, "acc_norm_stderr": 0.04123553189891431 }, "harness|hendrycksTest-marketing|5": { "acc": 0.8675213675213675, "acc_stderr": 0.022209309073165616, "acc_norm": 0.8675213675213675, "acc_norm_stderr": 0.022209309073165616 }, "harness|hendrycksTest-medical_genetics|5": { "acc": 0.71, "acc_stderr": 0.045604802157206845, "acc_norm": 0.71, "acc_norm_stderr": 
0.045604802157206845 }, "harness|hendrycksTest-miscellaneous|5": { "acc": 0.8148148148148148, "acc_stderr": 0.013890862162876164, "acc_norm": 0.8148148148148148, "acc_norm_stderr": 0.013890862162876164 }, "harness|hendrycksTest-moral_disputes|5": { "acc": 0.7196531791907514, "acc_stderr": 0.02418242749657761, "acc_norm": 0.7196531791907514, "acc_norm_stderr": 0.02418242749657761 }, "harness|hendrycksTest-moral_scenarios|5": { "acc": 0.3452513966480447, "acc_stderr": 0.015901432608930354, "acc_norm": 0.3452513966480447, "acc_norm_stderr": 0.015901432608930354 }, "harness|hendrycksTest-nutrition|5": { "acc": 0.7189542483660131, "acc_stderr": 0.025738854797818733, "acc_norm": 0.7189542483660131, "acc_norm_stderr": 0.025738854797818733 }, "harness|hendrycksTest-philosophy|5": { "acc": 0.7170418006430869, "acc_stderr": 0.025583062489984813, "acc_norm": 0.7170418006430869, "acc_norm_stderr": 0.025583062489984813 }, "harness|hendrycksTest-prehistory|5": { "acc": 0.7376543209876543, "acc_stderr": 0.024477222856135107, "acc_norm": 0.7376543209876543, "acc_norm_stderr": 0.024477222856135107 }, "harness|hendrycksTest-professional_accounting|5": { "acc": 0.48936170212765956, "acc_stderr": 0.029820747191422473, "acc_norm": 0.48936170212765956, "acc_norm_stderr": 0.029820747191422473 }, "harness|hendrycksTest-professional_law|5": { "acc": 0.45632333767926986, "acc_stderr": 0.012721420501462547, "acc_norm": 0.45632333767926986, "acc_norm_stderr": 0.012721420501462547 }, "harness|hendrycksTest-professional_medicine|5": { "acc": 0.6948529411764706, "acc_stderr": 0.027971541370170595, "acc_norm": 0.6948529411764706, "acc_norm_stderr": 0.027971541370170595 }, "harness|hendrycksTest-professional_psychology|5": { "acc": 0.6650326797385621, "acc_stderr": 0.019094228167000318, "acc_norm": 0.6650326797385621, "acc_norm_stderr": 0.019094228167000318 }, "harness|hendrycksTest-public_relations|5": { "acc": 0.7272727272727273, "acc_stderr": 0.04265792110940589, "acc_norm": 0.7272727272727273, 
"acc_norm_stderr": 0.04265792110940589 }, "harness|hendrycksTest-security_studies|5": { "acc": 0.7224489795918367, "acc_stderr": 0.028666857790274648, "acc_norm": 0.7224489795918367, "acc_norm_stderr": 0.028666857790274648 }, "harness|hendrycksTest-sociology|5": { "acc": 0.8606965174129353, "acc_stderr": 0.024484487162913973, "acc_norm": 0.8606965174129353, "acc_norm_stderr": 0.024484487162913973 }, "harness|hendrycksTest-us_foreign_policy|5": { "acc": 0.85, "acc_stderr": 0.03588702812826371, "acc_norm": 0.85, "acc_norm_stderr": 0.03588702812826371 }, "harness|hendrycksTest-virology|5": { "acc": 0.536144578313253, "acc_stderr": 0.038823108508905954, "acc_norm": 0.536144578313253, "acc_norm_stderr": 0.038823108508905954 }, "harness|hendrycksTest-world_religions|5": { "acc": 0.8245614035087719, "acc_stderr": 0.029170885500727665, "acc_norm": 0.8245614035087719, "acc_norm_stderr": 0.029170885500727665 }, "harness|truthfulqa:mc|0": { "mc1": 0.38922888616891066, "mc1_stderr": 0.017068552680690335, "mc2": 0.5486913365249292, "mc2_stderr": 0.015339111406271384 }, "harness|winogrande|5": { "acc": 0.8082083662194159, "acc_stderr": 0.011065209664659527 }, "harness|gsm8k|5": { "acc": 0.5375284306292646, "acc_stderr": 0.013733636059107759 } } ``` ## Dataset Details ### Dataset Description <!-- Provide a longer summary of what this dataset is. --> - **Curated by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] ### Dataset Sources [optional] <!-- Provide the basic links for the dataset. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the dataset is intended to be used. --> ### Direct Use <!-- This section describes suitable use cases for the dataset. 
--> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. --> [More Information Needed] ## Dataset Structure <!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. --> [More Information Needed] ## Dataset Creation ### Curation Rationale <!-- Motivation for the creation of this dataset. --> [More Information Needed] ### Source Data <!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). --> #### Data Collection and Processing <!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. --> [More Information Needed] #### Who are the source data producers? <!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. --> [More Information Needed] ### Annotations [optional] <!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. --> #### Annotation process <!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. --> [More Information Needed] #### Who are the annotators? <!-- This section describes the people or systems who created the annotations. 
--> [More Information Needed] #### Personal and Sensitive Information <!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations. ## Citation [optional] <!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Dataset Card Authors [optional] [More Information Needed] ## Dataset Card Contact [More Information Needed]
AymanMansour/SDN-Dialect-Dataset
--- dataset_info: features: - name: filename dtype: string - name: text dtype: string - name: orthographic dtype: string - name: transliteration dtype: string - name: audio dtype: audio splits: - name: train num_bytes: 1697895484.76 num_examples: 4830 - name: test num_bytes: 244760635.0 num_examples: 532 download_size: 2883670807 dataset_size: 1942656119.76 --- # Dataset Card for "SDN-Dialect-Dataset" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
talktolisten/ttl-train
--- task_categories: - conversational size_categories: - 10K<n<100K ---
316usman/thematic5e-pw-embed-part7
--- dataset_info: features: - name: text dtype: string - name: document_url dtype: string - name: source_url dtype: string - name: country dtype: string splits: - name: train num_bytes: 263314408 num_examples: 405601 download_size: 102834585 dataset_size: 263314408 configs: - config_name: default data_files: - split: train path: data/train-* ---
daje/mistral_tokenized_en_wiki
--- dataset_info: features: - name: text dtype: string splits: - name: train num_bytes: 24317150575 num_examples: 16096560 download_size: 10840035501 dataset_size: 24317150575 configs: - config_name: default data_files: - split: train path: data/train-* ---
ronnybehrens/mini-platypus_dc
--- dataset_info: features: - name: instruction dtype: string - name: output dtype: string splits: - name: train num_bytes: 4168526 num_examples: 1000 download_size: 2239555 dataset_size: 4168526 configs: - config_name: default data_files: - split: train path: data/train-* ---
Intuit-GenSRF/all_spanish_datasets
--- configs: - config_name: default data_files: - split: train path: data/train-* dataset_info: features: - name: text dtype: string - name: labels sequence: string - name: encoded_labels sequence: int64 - name: lang dtype: string - name: has_toxic dtype: int64 - name: has_profane dtype: int64 - name: has_insult dtype: int64 - name: has_hate dtype: int64 - name: has_threat dtype: int64 - name: has_sexual dtype: int64 - name: has_offensive dtype: int64 - name: has_selfharm dtype: int64 - name: has_harassment dtype: int64 - name: __index_level_0__ dtype: int64 splits: - name: train num_bytes: 1495796876 num_examples: 2814389 download_size: 603996129 dataset_size: 1495796876 --- # Dataset Card for "all_spanish_datasets" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
jtjt520j/cspider_for_chinese_llama2_1.3b
--- license: apache-2.0 ---
projecte-aina/CA-EN_Parallel_Corpus
--- language: - ca - en multilinguality: - multilingual pretty_name: CA-EN Parallel Corpus size_categories: - 10M<n<100M task_categories: - translation task_ids: [] license: cc-by-4.0 --- # Dataset Card for CA-EN Parallel Corpus ## Dataset Description ### Dataset Summary The CA-EN Parallel Corpus is a Catalan-English dataset of **14.967.979** parallel sentences. The dataset was created to support Catalan in NLP tasks, specifically Machine Translation. ### Supported Tasks and Leaderboards The dataset can be used to train Bilingual Machine Translation models between English and Catalan in any direction, as well as Multilingual Machine Translation models. ### Languages The sentences included in the dataset are in Catalan (CA) and English (EN). ## Dataset Structure ### Data Instances The dataset is a single tsv file where each row contains a parallel sentence pair, as well as the following information per sentence: * language probability score calculated with the language detector [lingua.py](https://github.com/pemistahl/lingua-py), * alignment score calculated with [LaBSE](https://huggingface.co/sentence-transformers/LaBSE), * domain, * text type. ### Data Fields Each example contains the following 7 fields: * ca: Catalan sentence * en: English sentence * ca_prob: Language probability for the Catalan sentence calculated with the language detector [lingua.py](https://github.com/pemistahl/lingua-py) * en_prob: Language probability for the English sentence calculated with the language detector [lingua.py](https://github.com/pemistahl/lingua-py) * alignment: Sentence pair alignment score calculated with [LaBSE](https://huggingface.co/sentence-transformers/LaBSE) * Domain: Domain (see list of domains) * Type: Text type (see list of text types) #### Example: <pre> [ { Pel que fa als motors de cerca, també es basen en l'estructura del seu contingut d'informació al lloc web per analitzar i indexar el seu lloc web. 
As for the search engines, they also rely on the structure of your information content on the website to analyze and index your website. 0.9999799355804416 0.9993718600460302 0.91045034 MWM SM }, ... ] </pre>

#### List of domains (and number of sentences per domain):

AUT: Automotive, transport, traffic regulations (2.289.951)
LEG: Legal, law, HR, certificates, degrees (498.676)
MWM: Marketing, web, merchandising, customer support and service, e-commerce, advertising, surveys (1.066.111)
LSM: Medicine, natural sciences, food/nutrition, biology, sexology, cosmetics, chemistry, genetics (457.647)
ENV: Environment, agriculture, forestry, fisheries, farming, zoology, ecology (681.813)
FIN: Finance, economics, business, entrepreneurship, competitions, labour, employment, accounting, insurance (292.865)
POL: Politics, international relations, European Union, international organisations, defence, military (451.569)
PRN: Porn, inappropriate content (597.926)
COM: Computers, IT, robotics, domotics, home automation, telecommunications (1.200.192)
ING: Pure engineering (mechanical, electrical, electronic, aerospace...), meteorology, mining, engineering, maritime, acoustics (581.722)
ARC: Architecture, civil engineering, construction, public engineering (663.985)
MAT: Mathematics, statistics, physics (216.635)
HRM: History, religion, mythology, folklore, philosophy, psychology, ethics, anthropology, tourism (1.362.302)
CUL: Art, poetry, literature, cinema, video games, theatre, theatre/film scripts, esotericism, astrology, sports, music, photography (2.774.420)
GEN: General - generic category with topics such as clothing, textiles, gastronomy, etc.
(1.832.164)

#### List of text types (and number of sentences per text type):

PAT: Patents (583.353)
SM: Social media, chats, forums, tweets (6.420.644)
CON: Oral language, transcription of conversations, subtitles (3.709.344)
EML: Emails (543.010)
MNL: Manuals, data sheets (1.379.021)
NEW: News, journalism (1.346.845)
GEN: Prose, generic type of text (985.761)

### Data Splits

The dataset contains a single split: `train`. Domain- or style-specific subsets can be extracted from the original dataset by filtering on the domains and text types listed above.

## Dataset Creation

### Curation Rationale

This dataset is aimed at promoting the development of Machine Translation between Catalan and other languages, specifically English.

### Source Data

#### Initial Data Collection and Normalization

The data is a collection of parallel sentences in Catalan and English, partially derived from web crawls and belonging to a mix of different domains and styles. The source data is partially authentic Catalan text translated into English and partially authentic English text translated into Catalan. The data was obtained through a combination of human translation and machine translation with human proofreading. The resulting corpus consists of **14.967.979** parallel sentences.

#### Who are the source language producers?

The original data gathering was entrusted to an external company through a public tender process.

### Annotations

#### Annotation process

The dataset does not contain any annotations.

#### Who are the annotators?

[N/A]

### Personal and Sensitive Information

Given that this dataset is partly derived from pre-existing datasets that may contain crawled data, and that no specific anonymisation process has been applied, personal and sensitive information may be present in the data. This needs to be considered when using the data for training models.
## Considerations for Using the Data

### Social Impact of Dataset

By providing this resource, we intend to promote the use of Catalan across NLP tasks, thereby improving the accessibility and visibility of the Catalan language.

### Discussion of Biases

No specific bias mitigation strategies were applied to this dataset. Inherent biases may exist within the data.

### Other Known Limitations

The dataset covers several specific domains. It can be used as a whole, or subsets can be extracted per domain or text type. Applications of this dataset in domains other than those included in the domain list may be of limited use.

## Additional Information

### Dataset Curators

Language Technologies Unit at the Barcelona Supercomputing Center (langtech@bsc.es). This work has been promoted and financed by the Generalitat de Catalunya through the [Aina project](https://projecteaina.cat/).

### Licensing Information

This work is licensed under a [Creative Commons Attribution 4.0 International license](https://creativecommons.org/licenses/by/4.0/).

### Citation Information

[N/A]

### Contributions

[N/A]
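The domain and text-type codes above can be used to carve out subsets. Below is a minimal sketch of such a filter with pandas; the file name `ca_en_corpus.tsv`, the `quoting` setting, and the assumption that the TSV has no header row are illustrative, and the column names simply follow the Data Fields section:

```python
import pandas as pd

COLUMNS = ["ca", "en", "ca_prob", "en_prob", "alignment", "Domain", "Type"]

# For the real corpus (path and header layout are assumptions, adjust as needed):
# df = pd.read_csv("ca_en_corpus.tsv", sep="\t", names=COLUMNS, quoting=3)

# Tiny in-memory stand-in so the filtering logic can be shown end to end
df = pd.DataFrame(
    [
        ["frase u", "sentence one", 0.99, 0.99, 0.91, "MWM", "SM"],
        ["frase dos", "sentence two", 0.95, 0.97, 0.55, "MWM", "SM"],
        ["frase tres", "sentence three", 0.99, 0.98, 0.88, "LEG", "PAT"],
    ],
    columns=COLUMNS,
)

# Marketing-domain pairs whose LaBSE alignment score is reasonably high
subset = df[(df["Domain"] == "MWM") & (df["alignment"] >= 0.8)]
print(subset[["ca", "en"]].values.tolist())  # [['frase u', 'sentence one']]
```

The same pattern extends to the text-type column or to thresholds on the language-probability scores.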
dominguesm/brwac
--- annotations_creators: - no-annotation language_creators: - found language: - pt license: - unknown multilinguality: - monolingual size_categories: - 1M<n<10M source_datasets: - original task_categories: - text-generation - fill-mask task_ids: - language-modeling - masked-language-modeling paperswithcode_id: brwac pretty_name: BrWaC dataset_info: features: - name: doc_id dtype: string - name: title dtype: string - name: uri dtype: string - name: text sequence: - name: paragraphs sequence: string splits: - name: train num_bytes: 18828412956 num_examples: 3530796 download_size: 11616550261 dataset_size: 18828412956 --- # Dataset Card for BrWaC ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [BrWaC homepage](https://www.inf.ufrgs.br/pln/wiki/index.php?title=BrWaC) - **Repository:** [BrWaC repository](https://www.inf.ufrgs.br/pln/wiki/index.php?title=BrWaC) - **Paper:** [The brWaC Corpus: A New Open Resource for Brazilian Portuguese](https://www.aclweb.org/anthology/L18-1686/) 
- **Point of Contact:** [Jorge A. Wagner Filho](mailto:jawfilho@inf.ufrgs.br)

### Dataset Summary

The BrWaC (Brazilian Portuguese Web as Corpus) is a large corpus constructed following the Wacky framework, which was made public for research purposes. The current corpus version, released in January 2017, is composed of 3.53 million documents, 2.68 billion tokens and 5.79 million types. Please note that this resource is available solely for academic research purposes, and you agree not to use it for any commercial applications. There is no need to manually download external sources.

### Supported Tasks and Leaderboards

[More Information Needed]

### Languages

Portuguese

## Dataset Structure

### Data Instances

An example from the BrWaC dataset looks as follows:

```
{ "doc_id": "netg-1afc73", "text": { "paragraphs": [ [ "Conteúdo recente" ], [ "ESPUMA MARROM CHAMADA \"NINGUÉM MERECE\"" ], [ "31 de Agosto de 2015, 7:07 , por paulo soavinski - | No one following this article yet." ], [ "Visualizado 202 vezes" ], [ "JORNAL ELETRÔNICO DA ILHA DO MEL" ], [ "Uma espuma marrom escuro tem aparecido com frequência na Praia de Fora.", "Na faixa de areia ela aparece disseminada e não chama muito a atenção.", "No Buraco do Aipo, com muitas pedras, ela aparece concentrada.", "É fácil saber que esta espuma estranha está lá, quando venta.", "Pequenos algodões de espuma começam a flutuar no espaço, pertinho da Praia do Saquinho.", "Quem pode ajudar na coleta deste material, envio a laboratório renomado e pagamento de análises, favor entrar em contato com o site."
] ] }, "title": "ESPUMA MARROM CHAMADA ‟NINGUÉM MERECE‟ - paulo soavinski", "uri": "http://blogoosfero.cc/ilhadomel/pousadasilhadomel.com.br/espuma-marrom-chamada-ninguem-merece" }
```

### Data Fields

- `doc_id`: The document ID
- `title`: The document title
- `uri`: URI where the document was extracted from
- `text`: The document paragraphs, each given as a list of sentence strings

### Data Splits

The dataset contains a single `train` split with 3,530,796 examples.

## Dataset Creation

### Curation Rationale

[More Information Needed]

### Source Data

#### Initial Data Collection and Normalization

[More Information Needed]

#### Who are the source language producers?

[More Information Needed]

### Annotations

#### Annotation process

[More Information Needed]

#### Who are the annotators?

[More Information Needed]

### Personal and Sensitive Information

[More Information Needed]

## Considerations for Using the Data

### Social Impact of Dataset

[More Information Needed]

### Discussion of Biases

[More Information Needed]

### Other Known Limitations

[More Information Needed]

## Additional Information

### Dataset Curators

[More Information Needed]

### Licensing Information

[More Information Needed]

### Citation Information

```
@inproceedings{wagner2018brwac,
  title={The brWaC Corpus: A New Open Resource for Brazilian Portuguese},
  author={Wagner Filho, Jorge A and Wilkens, Rodrigo and Idiart, Marco and Villavicencio, Aline},
  booktitle={Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018)},
  year={2018}
}
```
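Since each document stores its text as paragraphs of sentence lists, a small helper is handy for flattening records into plain text for language-modeling pipelines. The `flatten_document` name and the joining strategy below are illustrative choices, not part of the dataset:

```python
# Sketch of flattening BrWaC's nested text structure into one string.
# To iterate over the real data without the full multi-GB download, one could use:
#   from datasets import load_dataset
#   ds = load_dataset("dominguesm/brwac", split="train", streaming=True)

def flatten_document(record):
    """Join paragraphs (lists of sentence strings) into a single text blob."""
    paragraphs = record["text"]["paragraphs"]
    # Sentences within a paragraph are joined with spaces,
    # paragraphs with newlines.
    return "\n".join(" ".join(sentences) for sentences in paragraphs)

# Abridged version of the example record shown above
record = {
    "doc_id": "netg-1afc73",
    "text": {"paragraphs": [["Conteúdo recente"],
                            ["Visualizado 202 vezes"]]},
}
print(flatten_document(record))
```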
sumitj39/openhathi-7b-base-q4_0.ggml
---
license: llama2
---

This dataset contains the ggml version of the OpenHathi model released by Sarvam AI. [Link to original model](https://huggingface.co/sarvamai/OpenHathi-7B-Hi-v0.1-Base). The ggml file provided is a 4-bit quantized version; it can be run on local devices such as an M1 MacBook or other hardware.

### How to use?

1. Download llama.cpp from [here](https://github.com/ggerganov/llama.cpp):

```bash
git clone https://github.com/ggerganov/llama.cpp
```

2. Note: ggml support has been deprecated; the new file format is gguf. Since this repository contains a ggml file, we have to switch back to an older commit of llama.cpp that still works with ggml files:

```bash
git checkout dadbed9
```

3. Read the instructions mentioned [here](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#build) to create an executable file in the llama.cpp directory.

4. Run the model:

```bash
./main -t 4 -m ~/ggml-models/openhathi-7b-base-q4_0.ggml -p "tell me about india in hindi: - भारत" --ctx-size 1024 -ngl 1 2>/dev/null
```

5. The model prints output:

>भारत दुनिया के सबसे पुराने देशों में से एक है और दुनिया की 7वीं सबसे बड़ी आबादी वाला देश है। The nation has a rich and diverse history, dating back to ancient times when it was ruled by various empires and kingdoms.
भारत में दो मुख्य भौगोलिक क्षेत्र शामिल हैंः एक द्वीपसमूह जिसमें कई बड़े द्वीपों के साथ-साथ छोटे द्वीप भी शामिल हैं और दूसरा समतल क्षेत्रों से घिरा हुआ है। भारत की अनूठी सांस्कृतिक विरासत, विविध धर्मों और भाषाओं को बढ़ावा देता है जो देश की समृद्ध विविधता का प्रमाण हैं। भारत में सबसे अधिक बोली जाने वाली भाषाएँ हिंदी, बंगाली, तमिल, मराठी, कन्नड़, उड़िया और मलयालम हैं। 40 प्रतिशत आबादी हिंदू है, जबकि अन्य प्रमुख धर्म इस्लाम, बौद्ध धर्म, ईसाई धर्म और सिख धर्म हैं। भारत अपनी समृद्ध कृषि अर्थव्यवस्था के लिए जाना जाता है और यह अपने विविध व्यंजनों, समृद्ध इतिहास और जीवंत सांस्कृतिक विरासत के लिए भी मनाया जाता है। The country has made remarkable strides in areas such as information technology and manufacturing, which have contributed to its global economic position. एक बड़े देश होने के बावजूद, भारत में सभी को एक स्थान से दूसरे स्थान पर जाने की आवश्यकता नहीं है। India's transport infrastructure is extensive, with a well-developed road network that connects most major cities and towns. इसके अलावा, मुंबई और दिल्ली जैसे प्रमुख शहरों में अंतर्राष्ट्रीय हवाई अड्डे हैं जो दुनिया भर के गंतव्यों के लिए उड़ान भरते हैं। India has also gained prominence as a popular tourist destination in recent years. देश अपने अनूठे अनुभवों, आश्चर्यजनक प्राकृतिक परिदृश्यों, विविध संस्कृतियों और समृद्ध इतिहास की पेशकश करता है। From world-famous sites such as the Taj Mahal to lesser- .....

> Note: This is a base model; to use it in your applications you need to fine-tune it.
autoevaluate/autoeval-eval-conceptual_captions-unlabeled-ccbde0-1800162251
--- type: predictions tags: - autotrain - evaluation datasets: - conceptual_captions eval_info: task: summarization model: 0ys/mt5-small-finetuned-amazon-en-es metrics: ['accuracy'] dataset_name: conceptual_captions dataset_config: unlabeled dataset_split: train col_mapping: text: image_url target: caption --- # Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Summarization * Model: 0ys/mt5-small-finetuned-amazon-en-es * Dataset: conceptual_captions * Config: unlabeled * Split: train To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@DonaldDaz](https://huggingface.co/DonaldDaz) for evaluating this model.
EgilKarlsen/PKDD_GPT2_Finetuned
--- configs: - config_name: default data_files: - split: train path: data/train-* - split: test path: data/test-* dataset_info: features: - name: '0' dtype: float32 - name: '1' dtype: float32 - name: '2' dtype: float32 - name: '3' dtype: float32 - name: '4' dtype: float32 - name: '5' dtype: float32 - name: '6' dtype: float32 - name: '7' dtype: float32 - name: '8' dtype: float32 - name: '9' dtype: float32 - name: '10' dtype: float32 - name: '11' dtype: float32 - name: '12' dtype: float32 - name: '13' dtype: float32 - name: '14' dtype: float32 - name: '15' dtype: float32 - name: '16' dtype: float32 - name: '17' dtype: float32 - name: '18' dtype: float32 - name: '19' dtype: float32 - name: '20' dtype: float32 - name: '21' dtype: float32 - name: '22' dtype: float32 - name: '23' dtype: float32 - name: '24' dtype: float32 - name: '25' dtype: float32 - name: '26' dtype: float32 - name: '27' dtype: float32 - name: '28' dtype: float32 - name: '29' dtype: float32 - name: '30' dtype: float32 - name: '31' dtype: float32 - name: '32' dtype: float32 - name: '33' dtype: float32 - name: '34' dtype: float32 - name: '35' dtype: float32 - name: '36' dtype: float32 - name: '37' dtype: float32 - name: '38' dtype: float32 - name: '39' dtype: float32 - name: '40' dtype: float32 - name: '41' dtype: float32 - name: '42' dtype: float32 - name: '43' dtype: float32 - name: '44' dtype: float32 - name: '45' dtype: float32 - name: '46' dtype: float32 - name: '47' dtype: float32 - name: '48' dtype: float32 - name: '49' dtype: float32 - name: '50' dtype: float32 - name: '51' dtype: float32 - name: '52' dtype: float32 - name: '53' dtype: float32 - name: '54' dtype: float32 - name: '55' dtype: float32 - name: '56' dtype: float32 - name: '57' dtype: float32 - name: '58' dtype: float32 - name: '59' dtype: float32 - name: '60' dtype: float32 - name: '61' dtype: float32 - name: '62' dtype: float32 - name: '63' dtype: float32 - name: '64' dtype: float32 - name: '65' dtype: float32 - name: '66' dtype: 
float32 - name: '67' dtype: float32 - name: '68' dtype: float32 - name: '69' dtype: float32 - name: '70' dtype: float32 - name: '71' dtype: float32 - name: '72' dtype: float32 - name: '73' dtype: float32 - name: '74' dtype: float32 - name: '75' dtype: float32 - name: '76' dtype: float32 - name: '77' dtype: float32 - name: '78' dtype: float32 - name: '79' dtype: float32 - name: '80' dtype: float32 - name: '81' dtype: float32 - name: '82' dtype: float32 - name: '83' dtype: float32 - name: '84' dtype: float32 - name: '85' dtype: float32 - name: '86' dtype: float32 - name: '87' dtype: float32 - name: '88' dtype: float32 - name: '89' dtype: float32 - name: '90' dtype: float32 - name: '91' dtype: float32 - name: '92' dtype: float32 - name: '93' dtype: float32 - name: '94' dtype: float32 - name: '95' dtype: float32 - name: '96' dtype: float32 - name: '97' dtype: float32 - name: '98' dtype: float32 - name: '99' dtype: float32 - name: '100' dtype: float32 - name: '101' dtype: float32 - name: '102' dtype: float32 - name: '103' dtype: float32 - name: '104' dtype: float32 - name: '105' dtype: float32 - name: '106' dtype: float32 - name: '107' dtype: float32 - name: '108' dtype: float32 - name: '109' dtype: float32 - name: '110' dtype: float32 - name: '111' dtype: float32 - name: '112' dtype: float32 - name: '113' dtype: float32 - name: '114' dtype: float32 - name: '115' dtype: float32 - name: '116' dtype: float32 - name: '117' dtype: float32 - name: '118' dtype: float32 - name: '119' dtype: float32 - name: '120' dtype: float32 - name: '121' dtype: float32 - name: '122' dtype: float32 - name: '123' dtype: float32 - name: '124' dtype: float32 - name: '125' dtype: float32 - name: '126' dtype: float32 - name: '127' dtype: float32 - name: '128' dtype: float32 - name: '129' dtype: float32 - name: '130' dtype: float32 - name: '131' dtype: float32 - name: '132' dtype: float32 - name: '133' dtype: float32 - name: '134' dtype: float32 - name: '135' dtype: float32 - name: '136' dtype: 
float32 - name: '137' dtype: float32 - name: '138' dtype: float32 - name: '139' dtype: float32 - name: '140' dtype: float32 - name: '141' dtype: float32 - name: '142' dtype: float32 - name: '143' dtype: float32 - name: '144' dtype: float32 - name: '145' dtype: float32 - name: '146' dtype: float32 - name: '147' dtype: float32 - name: '148' dtype: float32 - name: '149' dtype: float32 - name: '150' dtype: float32 - name: '151' dtype: float32 - name: '152' dtype: float32 - name: '153' dtype: float32 - name: '154' dtype: float32 - name: '155' dtype: float32 - name: '156' dtype: float32 - name: '157' dtype: float32 - name: '158' dtype: float32 - name: '159' dtype: float32 - name: '160' dtype: float32 - name: '161' dtype: float32 - name: '162' dtype: float32 - name: '163' dtype: float32 - name: '164' dtype: float32 - name: '165' dtype: float32 - name: '166' dtype: float32 - name: '167' dtype: float32 - name: '168' dtype: float32 - name: '169' dtype: float32 - name: '170' dtype: float32 - name: '171' dtype: float32 - name: '172' dtype: float32 - name: '173' dtype: float32 - name: '174' dtype: float32 - name: '175' dtype: float32 - name: '176' dtype: float32 - name: '177' dtype: float32 - name: '178' dtype: float32 - name: '179' dtype: float32 - name: '180' dtype: float32 - name: '181' dtype: float32 - name: '182' dtype: float32 - name: '183' dtype: float32 - name: '184' dtype: float32 - name: '185' dtype: float32 - name: '186' dtype: float32 - name: '187' dtype: float32 - name: '188' dtype: float32 - name: '189' dtype: float32 - name: '190' dtype: float32 - name: '191' dtype: float32 - name: '192' dtype: float32 - name: '193' dtype: float32 - name: '194' dtype: float32 - name: '195' dtype: float32 - name: '196' dtype: float32 - name: '197' dtype: float32 - name: '198' dtype: float32 - name: '199' dtype: float32 - name: '200' dtype: float32 - name: '201' dtype: float32 - name: '202' dtype: float32 - name: '203' dtype: float32 - name: '204' dtype: float32 - name: '205' 
dtype: float32 - name: '206' dtype: float32 - name: '207' dtype: float32 - name: '208' dtype: float32 - name: '209' dtype: float32 - name: '210' dtype: float32 - name: '211' dtype: float32 - name: '212' dtype: float32 - name: '213' dtype: float32 - name: '214' dtype: float32 - name: '215' dtype: float32 - name: '216' dtype: float32 - name: '217' dtype: float32 - name: '218' dtype: float32 - name: '219' dtype: float32 - name: '220' dtype: float32 - name: '221' dtype: float32 - name: '222' dtype: float32 - name: '223' dtype: float32 - name: '224' dtype: float32 - name: '225' dtype: float32 - name: '226' dtype: float32 - name: '227' dtype: float32 - name: '228' dtype: float32 - name: '229' dtype: float32 - name: '230' dtype: float32 - name: '231' dtype: float32 - name: '232' dtype: float32 - name: '233' dtype: float32 - name: '234' dtype: float32 - name: '235' dtype: float32 - name: '236' dtype: float32 - name: '237' dtype: float32 - name: '238' dtype: float32 - name: '239' dtype: float32 - name: '240' dtype: float32 - name: '241' dtype: float32 - name: '242' dtype: float32 - name: '243' dtype: float32 - name: '244' dtype: float32 - name: '245' dtype: float32 - name: '246' dtype: float32 - name: '247' dtype: float32 - name: '248' dtype: float32 - name: '249' dtype: float32 - name: '250' dtype: float32 - name: '251' dtype: float32 - name: '252' dtype: float32 - name: '253' dtype: float32 - name: '254' dtype: float32 - name: '255' dtype: float32 - name: '256' dtype: float32 - name: '257' dtype: float32 - name: '258' dtype: float32 - name: '259' dtype: float32 - name: '260' dtype: float32 - name: '261' dtype: float32 - name: '262' dtype: float32 - name: '263' dtype: float32 - name: '264' dtype: float32 - name: '265' dtype: float32 - name: '266' dtype: float32 - name: '267' dtype: float32 - name: '268' dtype: float32 - name: '269' dtype: float32 - name: '270' dtype: float32 - name: '271' dtype: float32 - name: '272' dtype: float32 - name: '273' dtype: float32 - name: 
'274' dtype: float32 - name: '275' dtype: float32 - name: '276' dtype: float32 - name: '277' dtype: float32 - name: '278' dtype: float32 - name: '279' dtype: float32 - name: '280' dtype: float32 - name: '281' dtype: float32 - name: '282' dtype: float32 - name: '283' dtype: float32 - name: '284' dtype: float32 - name: '285' dtype: float32 - name: '286' dtype: float32 - name: '287' dtype: float32 - name: '288' dtype: float32 - name: '289' dtype: float32 - name: '290' dtype: float32 - name: '291' dtype: float32 - name: '292' dtype: float32 - name: '293' dtype: float32 - name: '294' dtype: float32 - name: '295' dtype: float32 - name: '296' dtype: float32 - name: '297' dtype: float32 - name: '298' dtype: float32 - name: '299' dtype: float32 - name: '300' dtype: float32 - name: '301' dtype: float32 - name: '302' dtype: float32 - name: '303' dtype: float32 - name: '304' dtype: float32 - name: '305' dtype: float32 - name: '306' dtype: float32 - name: '307' dtype: float32 - name: '308' dtype: float32 - name: '309' dtype: float32 - name: '310' dtype: float32 - name: '311' dtype: float32 - name: '312' dtype: float32 - name: '313' dtype: float32 - name: '314' dtype: float32 - name: '315' dtype: float32 - name: '316' dtype: float32 - name: '317' dtype: float32 - name: '318' dtype: float32 - name: '319' dtype: float32 - name: '320' dtype: float32 - name: '321' dtype: float32 - name: '322' dtype: float32 - name: '323' dtype: float32 - name: '324' dtype: float32 - name: '325' dtype: float32 - name: '326' dtype: float32 - name: '327' dtype: float32 - name: '328' dtype: float32 - name: '329' dtype: float32 - name: '330' dtype: float32 - name: '331' dtype: float32 - name: '332' dtype: float32 - name: '333' dtype: float32 - name: '334' dtype: float32 - name: '335' dtype: float32 - name: '336' dtype: float32 - name: '337' dtype: float32 - name: '338' dtype: float32 - name: '339' dtype: float32 - name: '340' dtype: float32 - name: '341' dtype: float32 - name: '342' dtype: float32 - 
name: '343' dtype: float32 - name: '344' dtype: float32 - name: '345' dtype: float32 - name: '346' dtype: float32 - name: '347' dtype: float32 - name: '348' dtype: float32 - name: '349' dtype: float32 - name: '350' dtype: float32 - name: '351' dtype: float32 - name: '352' dtype: float32 - name: '353' dtype: float32 - name: '354' dtype: float32 - name: '355' dtype: float32 - name: '356' dtype: float32 - name: '357' dtype: float32 - name: '358' dtype: float32 - name: '359' dtype: float32 - name: '360' dtype: float32 - name: '361' dtype: float32 - name: '362' dtype: float32 - name: '363' dtype: float32 - name: '364' dtype: float32 - name: '365' dtype: float32 - name: '366' dtype: float32 - name: '367' dtype: float32 - name: '368' dtype: float32 - name: '369' dtype: float32 - name: '370' dtype: float32 - name: '371' dtype: float32 - name: '372' dtype: float32 - name: '373' dtype: float32 - name: '374' dtype: float32 - name: '375' dtype: float32 - name: '376' dtype: float32 - name: '377' dtype: float32 - name: '378' dtype: float32 - name: '379' dtype: float32 - name: '380' dtype: float32 - name: '381' dtype: float32 - name: '382' dtype: float32 - name: '383' dtype: float32 - name: '384' dtype: float32 - name: '385' dtype: float32 - name: '386' dtype: float32 - name: '387' dtype: float32 - name: '388' dtype: float32 - name: '389' dtype: float32 - name: '390' dtype: float32 - name: '391' dtype: float32 - name: '392' dtype: float32 - name: '393' dtype: float32 - name: '394' dtype: float32 - name: '395' dtype: float32 - name: '396' dtype: float32 - name: '397' dtype: float32 - name: '398' dtype: float32 - name: '399' dtype: float32 - name: '400' dtype: float32 - name: '401' dtype: float32 - name: '402' dtype: float32 - name: '403' dtype: float32 - name: '404' dtype: float32 - name: '405' dtype: float32 - name: '406' dtype: float32 - name: '407' dtype: float32 - name: '408' dtype: float32 - name: '409' dtype: float32 - name: '410' dtype: float32 - name: '411' dtype: float32 
- name: '412' dtype: float32 - name: '413' dtype: float32 - name: '414' dtype: float32 - name: '415' dtype: float32 - name: '416' dtype: float32 - name: '417' dtype: float32 - name: '418' dtype: float32 - name: '419' dtype: float32 - name: '420' dtype: float32 - name: '421' dtype: float32 - name: '422' dtype: float32 - name: '423' dtype: float32 - name: '424' dtype: float32 - name: '425' dtype: float32 - name: '426' dtype: float32 - name: '427' dtype: float32 - name: '428' dtype: float32 - name: '429' dtype: float32 - name: '430' dtype: float32 - name: '431' dtype: float32 - name: '432' dtype: float32 - name: '433' dtype: float32 - name: '434' dtype: float32 - name: '435' dtype: float32 - name: '436' dtype: float32 - name: '437' dtype: float32 - name: '438' dtype: float32 - name: '439' dtype: float32 - name: '440' dtype: float32 - name: '441' dtype: float32 - name: '442' dtype: float32 - name: '443' dtype: float32 - name: '444' dtype: float32 - name: '445' dtype: float32 - name: '446' dtype: float32 - name: '447' dtype: float32 - name: '448' dtype: float32 - name: '449' dtype: float32 - name: '450' dtype: float32 - name: '451' dtype: float32 - name: '452' dtype: float32 - name: '453' dtype: float32 - name: '454' dtype: float32 - name: '455' dtype: float32 - name: '456' dtype: float32 - name: '457' dtype: float32 - name: '458' dtype: float32 - name: '459' dtype: float32 - name: '460' dtype: float32 - name: '461' dtype: float32 - name: '462' dtype: float32 - name: '463' dtype: float32 - name: '464' dtype: float32 - name: '465' dtype: float32 - name: '466' dtype: float32 - name: '467' dtype: float32 - name: '468' dtype: float32 - name: '469' dtype: float32 - name: '470' dtype: float32 - name: '471' dtype: float32 - name: '472' dtype: float32 - name: '473' dtype: float32 - name: '474' dtype: float32 - name: '475' dtype: float32 - name: '476' dtype: float32 - name: '477' dtype: float32 - name: '478' dtype: float32 - name: '479' dtype: float32 - name: '480' dtype: 
float32 - name: '481' dtype: float32 - name: '482' dtype: float32 - name: '483' dtype: float32 - name: '484' dtype: float32 - name: '485' dtype: float32 - name: '486' dtype: float32 - name: '487' dtype: float32 - name: '488' dtype: float32 - name: '489' dtype: float32 - name: '490' dtype: float32 - name: '491' dtype: float32 - name: '492' dtype: float32 - name: '493' dtype: float32 - name: '494' dtype: float32 - name: '495' dtype: float32 - name: '496' dtype: float32 - name: '497' dtype: float32 - name: '498' dtype: float32 - name: '499' dtype: float32 - name: '500' dtype: float32 - name: '501' dtype: float32 - name: '502' dtype: float32 - name: '503' dtype: float32 - name: '504' dtype: float32 - name: '505' dtype: float32 - name: '506' dtype: float32 - name: '507' dtype: float32 - name: '508' dtype: float32 - name: '509' dtype: float32 - name: '510' dtype: float32 - name: '511' dtype: float32 - name: '512' dtype: float32 - name: '513' dtype: float32 - name: '514' dtype: float32 - name: '515' dtype: float32 - name: '516' dtype: float32 - name: '517' dtype: float32 - name: '518' dtype: float32 - name: '519' dtype: float32 - name: '520' dtype: float32 - name: '521' dtype: float32 - name: '522' dtype: float32 - name: '523' dtype: float32 - name: '524' dtype: float32 - name: '525' dtype: float32 - name: '526' dtype: float32 - name: '527' dtype: float32 - name: '528' dtype: float32 - name: '529' dtype: float32 - name: '530' dtype: float32 - name: '531' dtype: float32 - name: '532' dtype: float32 - name: '533' dtype: float32 - name: '534' dtype: float32 - name: '535' dtype: float32 - name: '536' dtype: float32 - name: '537' dtype: float32 - name: '538' dtype: float32 - name: '539' dtype: float32 - name: '540' dtype: float32 - name: '541' dtype: float32 - name: '542' dtype: float32 - name: '543' dtype: float32 - name: '544' dtype: float32 - name: '545' dtype: float32 - name: '546' dtype: float32 - name: '547' dtype: float32 - name: '548' dtype: float32 - name: '549' 
dtype: float32 - name: '550' dtype: float32 - name: '551' dtype: float32 - name: '552' dtype: float32 - name: '553' dtype: float32 - name: '554' dtype: float32 - name: '555' dtype: float32 - name: '556' dtype: float32 - name: '557' dtype: float32 - name: '558' dtype: float32 - name: '559' dtype: float32 - name: '560' dtype: float32 - name: '561' dtype: float32 - name: '562' dtype: float32 - name: '563' dtype: float32 - name: '564' dtype: float32 - name: '565' dtype: float32 - name: '566' dtype: float32 - name: '567' dtype: float32 - name: '568' dtype: float32 - name: '569' dtype: float32 - name: '570' dtype: float32 - name: '571' dtype: float32 - name: '572' dtype: float32 - name: '573' dtype: float32 - name: '574' dtype: float32 - name: '575' dtype: float32 - name: '576' dtype: float32 - name: '577' dtype: float32 - name: '578' dtype: float32 - name: '579' dtype: float32 - name: '580' dtype: float32 - name: '581' dtype: float32 - name: '582' dtype: float32 - name: '583' dtype: float32 - name: '584' dtype: float32 - name: '585' dtype: float32 - name: '586' dtype: float32 - name: '587' dtype: float32 - name: '588' dtype: float32 - name: '589' dtype: float32 - name: '590' dtype: float32 - name: '591' dtype: float32 - name: '592' dtype: float32 - name: '593' dtype: float32 - name: '594' dtype: float32 - name: '595' dtype: float32 - name: '596' dtype: float32 - name: '597' dtype: float32 - name: '598' dtype: float32 - name: '599' dtype: float32 - name: '600' dtype: float32 - name: '601' dtype: float32 - name: '602' dtype: float32 - name: '603' dtype: float32 - name: '604' dtype: float32 - name: '605' dtype: float32 - name: '606' dtype: float32 - name: '607' dtype: float32 - name: '608' dtype: float32 - name: '609' dtype: float32 - name: '610' dtype: float32 - name: '611' dtype: float32 - name: '612' dtype: float32 - name: '613' dtype: float32 - name: '614' dtype: float32 - name: '615' dtype: float32 - name: '616' dtype: float32 - name: '617' dtype: float32 - name: 
'618' dtype: float32 - name: '619' dtype: float32 - name: '620' dtype: float32 - name: '621' dtype: float32 - name: '622' dtype: float32 - name: '623' dtype: float32 - name: '624' dtype: float32 - name: '625' dtype: float32 - name: '626' dtype: float32 - name: '627' dtype: float32 - name: '628' dtype: float32 - name: '629' dtype: float32 - name: '630' dtype: float32 - name: '631' dtype: float32 - name: '632' dtype: float32 - name: '633' dtype: float32 - name: '634' dtype: float32 - name: '635' dtype: float32 - name: '636' dtype: float32 - name: '637' dtype: float32 - name: '638' dtype: float32 - name: '639' dtype: float32 - name: '640' dtype: float32 - name: '641' dtype: float32 - name: '642' dtype: float32 - name: '643' dtype: float32 - name: '644' dtype: float32 - name: '645' dtype: float32 - name: '646' dtype: float32 - name: '647' dtype: float32 - name: '648' dtype: float32 - name: '649' dtype: float32 - name: '650' dtype: float32 - name: '651' dtype: float32 - name: '652' dtype: float32 - name: '653' dtype: float32 - name: '654' dtype: float32 - name: '655' dtype: float32 - name: '656' dtype: float32 - name: '657' dtype: float32 - name: '658' dtype: float32 - name: '659' dtype: float32 - name: '660' dtype: float32 - name: '661' dtype: float32 - name: '662' dtype: float32 - name: '663' dtype: float32 - name: '664' dtype: float32 - name: '665' dtype: float32 - name: '666' dtype: float32 - name: '667' dtype: float32 - name: '668' dtype: float32 - name: '669' dtype: float32 - name: '670' dtype: float32 - name: '671' dtype: float32 - name: '672' dtype: float32 - name: '673' dtype: float32 - name: '674' dtype: float32 - name: '675' dtype: float32 - name: '676' dtype: float32 - name: '677' dtype: float32 - name: '678' dtype: float32 - name: '679' dtype: float32 - name: '680' dtype: float32 - name: '681' dtype: float32 - name: '682' dtype: float32 - name: '683' dtype: float32 - name: '684' dtype: float32 - name: '685' dtype: float32 - name: '686' dtype: float32 - 
name: '687' dtype: float32 - name: '688' dtype: float32 - name: '689' dtype: float32 - name: '690' dtype: float32 - name: '691' dtype: float32 - name: '692' dtype: float32 - name: '693' dtype: float32 - name: '694' dtype: float32 - name: '695' dtype: float32 - name: '696' dtype: float32 - name: '697' dtype: float32 - name: '698' dtype: float32 - name: '699' dtype: float32 - name: '700' dtype: float32 - name: '701' dtype: float32 - name: '702' dtype: float32 - name: '703' dtype: float32 - name: '704' dtype: float32 - name: '705' dtype: float32 - name: '706' dtype: float32 - name: '707' dtype: float32 - name: '708' dtype: float32 - name: '709' dtype: float32 - name: '710' dtype: float32 - name: '711' dtype: float32 - name: '712' dtype: float32 - name: '713' dtype: float32 - name: '714' dtype: float32 - name: '715' dtype: float32 - name: '716' dtype: float32 - name: '717' dtype: float32 - name: '718' dtype: float32 - name: '719' dtype: float32 - name: '720' dtype: float32 - name: '721' dtype: float32 - name: '722' dtype: float32 - name: '723' dtype: float32 - name: '724' dtype: float32 - name: '725' dtype: float32 - name: '726' dtype: float32 - name: '727' dtype: float32 - name: '728' dtype: float32 - name: '729' dtype: float32 - name: '730' dtype: float32 - name: '731' dtype: float32 - name: '732' dtype: float32 - name: '733' dtype: float32 - name: '734' dtype: float32 - name: '735' dtype: float32 - name: '736' dtype: float32 - name: '737' dtype: float32 - name: '738' dtype: float32 - name: '739' dtype: float32 - name: '740' dtype: float32 - name: '741' dtype: float32 - name: '742' dtype: float32 - name: '743' dtype: float32 - name: '744' dtype: float32 - name: '745' dtype: float32 - name: '746' dtype: float32 - name: '747' dtype: float32 - name: '748' dtype: float32 - name: '749' dtype: float32 - name: '750' dtype: float32 - name: '751' dtype: float32 - name: '752' dtype: float32 - name: '753' dtype: float32 - name: '754' dtype: float32 - name: '755' dtype: float32 
- name: '756' dtype: float32 - name: '757' dtype: float32 - name: '758' dtype: float32 - name: '759' dtype: float32 - name: '760' dtype: float32 - name: '761' dtype: float32 - name: '762' dtype: float32 - name: '763' dtype: float32 - name: '764' dtype: float32 - name: '765' dtype: float32 - name: '766' dtype: float32 - name: '767' dtype: float32 - name: label dtype: string splits: - name: train num_bytes: 115608907.5 num_examples: 37500 - name: test num_bytes: 38536305.0 num_examples: 12500 download_size: 211871323 dataset_size: 154145212.5 --- # Dataset Card for "PKDD_GPT2_Finetuned" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
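The metadata above declares 768 unnamed float columns (`'0'` … `'767'`) plus a string `label`, so each loaded example arrives as a dict keyed by those column-name strings. A minimal sketch of packing one row into an ordered feature vector (the row below is synthetic; loading the real split via `datasets.load_dataset` yields dicts of the same shape):

```python
def row_to_vector(row, n_features=768):
    """Collect the string-numbered float columns of one example
    into an ordered feature vector, keeping the label separate."""
    features = [row[str(i)] for i in range(n_features)]
    return features, row["label"]

# Synthetic stand-in for one loaded example.
example = {str(i): float(i) / 768 for i in range(768)}
example["label"] = "some-class"

vec, label = row_to_vector(example)
print(len(vec), label)  # 768 some-class
```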
open-llm-leaderboard/details_Open-Orca__LlongOrca-7B-16k
--- pretty_name: Evaluation run of Open-Orca/LlongOrca-7B-16k dataset_summary: "Dataset automatically created during the evaluation run of model\ \ [Open-Orca/LlongOrca-7B-16k](https://huggingface.co/Open-Orca/LlongOrca-7B-16k)\ \ on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\ \nThe dataset is composed of 3 configuration, each one coresponding to one of the\ \ evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be\ \ found as a specific split in each configuration, the split being named using the\ \ timestamp of the run.The \"train\" split is always pointing to the latest results.\n\ \nAn additional configuration \"results\" store all the aggregated results of the\ \ run (and is used to compute and display the agregated metrics on the [Open LLM\ \ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\ \nTo load the details from a run, you can for instance do the following:\n```python\n\ from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_Open-Orca__LlongOrca-7B-16k\"\ ,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\ These are the [latest results from run 2023-10-18T04:31:23.491817](https://huggingface.co/datasets/open-llm-leaderboard/details_Open-Orca__LlongOrca-7B-16k/blob/main/results_2023-10-18T04-31-23.491817.json)(note\ \ that their might be results for other tasks in the repos if successive evals didn't\ \ cover the same tasks. 
You find each in the results and the \"latest\" split for\ \ each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.016988255033557047,\n\ \ \"em_stderr\": 0.0013234068882109723,\n \"f1\": 0.08061136744966452,\n\ \ \"f1_stderr\": 0.001896831507875326,\n \"acc\": 0.4100619744335266,\n\ \ \"acc_stderr\": 0.009753220057431532\n },\n \"harness|drop|3\": {\n\ \ \"em\": 0.016988255033557047,\n \"em_stderr\": 0.0013234068882109723,\n\ \ \"f1\": 0.08061136744966452,\n \"f1_stderr\": 0.001896831507875326\n\ \ },\n \"harness|gsm8k|5\": {\n \"acc\": 0.07505686125852919,\n \ \ \"acc_stderr\": 0.007257633145486642\n },\n \"harness|winogrande|5\"\ : {\n \"acc\": 0.745067087608524,\n \"acc_stderr\": 0.012248806969376422\n\ \ }\n}\n```" repo_url: https://huggingface.co/Open-Orca/LlongOrca-7B-16k leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard point_of_contact: clementine@hf.co configs: - config_name: harness_drop_3 data_files: - split: 2023_10_18T04_31_23.491817 path: - '**/details_harness|drop|3_2023-10-18T04-31-23.491817.parquet' - split: latest path: - '**/details_harness|drop|3_2023-10-18T04-31-23.491817.parquet' - config_name: harness_gsm8k_5 data_files: - split: 2023_10_18T04_31_23.491817 path: - '**/details_harness|gsm8k|5_2023-10-18T04-31-23.491817.parquet' - split: latest path: - '**/details_harness|gsm8k|5_2023-10-18T04-31-23.491817.parquet' - config_name: harness_winogrande_5 data_files: - split: 2023_10_18T04_31_23.491817 path: - '**/details_harness|winogrande|5_2023-10-18T04-31-23.491817.parquet' - split: latest path: - '**/details_harness|winogrande|5_2023-10-18T04-31-23.491817.parquet' - config_name: results data_files: - split: 2023_10_18T04_31_23.491817 path: - results_2023-10-18T04-31-23.491817.parquet - split: latest path: - results_2023-10-18T04-31-23.491817.parquet --- # Dataset Card for Evaluation run of Open-Orca/LlongOrca-7B-16k ## Dataset Description - **Homepage:** - **Repository:** 
https://huggingface.co/Open-Orca/LlongOrca-7B-16k - **Paper:** - **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard - **Point of Contact:** clementine@hf.co ### Dataset Summary Dataset automatically created during the evaluation run of model [Open-Orca/LlongOrca-7B-16k](https://huggingface.co/Open-Orca/LlongOrca-7B-16k) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard). The dataset is composed of 3 configurations, each one corresponding to one of the evaluated tasks. The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results. An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)). To load the details from a run, you can for instance do the following: ```python from datasets import load_dataset data = load_dataset("open-llm-leaderboard/details_Open-Orca__LlongOrca-7B-16k", "harness_winogrande_5", split="train") ``` ## Latest results These are the [latest results from run 2023-10-18T04:31:23.491817](https://huggingface.co/datasets/open-llm-leaderboard/details_Open-Orca__LlongOrca-7B-16k/blob/main/results_2023-10-18T04-31-23.491817.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. 
You can find each one in the results and the "latest" split for each eval): ```python { "all": { "em": 0.016988255033557047, "em_stderr": 0.0013234068882109723, "f1": 0.08061136744966452, "f1_stderr": 0.001896831507875326, "acc": 0.4100619744335266, "acc_stderr": 0.009753220057431532 }, "harness|drop|3": { "em": 0.016988255033557047, "em_stderr": 0.0013234068882109723, "f1": 0.08061136744966452, "f1_stderr": 0.001896831507875326 }, "harness|gsm8k|5": { "acc": 0.07505686125852919, "acc_stderr": 0.007257633145486642 }, "harness|winogrande|5": { "acc": 0.745067087608524, "acc_stderr": 0.012248806969376422 } } ``` ### Supported Tasks and Leaderboards [More Information Needed] ### Languages [More Information Needed] ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information [More Information Needed] ### Contributions [More Information Needed]
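For quick inspection, the nested results dict shown under "Latest results" above can be flattened into (task, metric, value) rows; a small sketch using the numbers copied from above (standard-error entries are skipped):

```python
# Aggregated metrics copied from the "Latest results" section of this card.
results = {
    "harness|drop|3": {"em": 0.016988255033557047, "em_stderr": 0.0013234068882109723,
                       "f1": 0.08061136744966452, "f1_stderr": 0.001896831507875326},
    "harness|gsm8k|5": {"acc": 0.07505686125852919, "acc_stderr": 0.007257633145486642},
    "harness|winogrande|5": {"acc": 0.745067087608524, "acc_stderr": 0.012248806969376422},
}

def summarize(results):
    """Flatten the nested results dict into (task, metric, value) rows,
    skipping the standard-error entries."""
    return [
        (task, metric, round(value, 4))
        for task, metrics in sorted(results.items())
        for metric, value in sorted(metrics.items())
        if not metric.endswith("_stderr")
    ]

for task, metric, value in summarize(results):
    print(f"{task:22s} {metric:3s} {value}")
```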
IlyaGusev/stihi_ru
--- dataset_info: features: - name: id dtype: string - name: text dtype: string - name: title dtype: string - name: genre dtype: string - name: topic dtype: string - name: author dtype: string splits: - name: train num_bytes: 6029108612 num_examples: 5151050 download_size: 1892727043 dataset_size: 6029108612 task_categories: - text-generation language: - ru size_categories: - 1M<n<10M --- # Stihi.ru dataset ## Table of Contents - [Table of Contents](#table-of-contents) - [Description](#description) - [Usage](#usage) - [Personal and Sensitive Information](#personal-and-sensitive-information) ## Description **Summary:** A subset of [Taiga](https://tatianashavrina.github.io/taiga_site/), uploaded here for convenience. Additional cleaning was performed. **Script:** [create_stihi.py](https://github.com/IlyaGusev/rulm/blob/master/data_processing/create_stihi.py) **Point of Contact:** [Ilya Gusev](ilya.gusev@phystech.edu) **Languages:** Russian. ## Usage Prerequisites: ```bash pip install datasets zstandard jsonlines pysimdjson ``` Dataset iteration: ```python from datasets import load_dataset dataset = load_dataset('IlyaGusev/stihi_ru', split="train", streaming=True) for example in dataset: print(example["text"]) ``` ## Personal and Sensitive Information The dataset is not anonymized, so individuals' names can be found in the dataset. Information about the original authors is included in the dataset where possible.
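Because the corpus is large (~5.1M poems), streaming plus a bounded take is often all you need for inspection. A sketch: `take_sample` works on the streaming `dataset` object from the snippet above, and is demonstrated here on a synthetic stand-in so it runs offline:

```python
from itertools import islice

def take_sample(dataset, n):
    """Materialize the first n examples of any iterable dataset,
    including a streaming Hugging Face dataset."""
    return list(islice(dataset, n))

# Synthetic stand-in with the same record shape as stihi_ru,
# so the sketch runs without downloading anything.
fake_dataset = ({"id": str(i), "title": f"poem {i}", "text": "..."} for i in range(1000))
sample = take_sample(fake_dataset, 3)
print([ex["id"] for ex in sample])  # ['0', '1', '2']
```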
rabhinavs/new_ds
--- license: apache-2.0 ---
collabora/project-gutenberg-wds-preprocessed
--- license: cc0-1.0 ---
zolak/twitter_dataset_50_1713070262
--- dataset_info: features: - name: id dtype: string - name: tweet_content dtype: string - name: user_name dtype: string - name: user_id dtype: string - name: created_at dtype: string - name: url dtype: string - name: favourite_count dtype: int64 - name: scraped_at dtype: string - name: image_urls dtype: string splits: - name: train num_bytes: 2631200 num_examples: 6583 download_size: 1309320 dataset_size: 2631200 configs: - config_name: default data_files: - split: train path: data/train-* ---
results-sd-v1-5-sd-v2-1-if-v1-0-karlo/739614f9
--- dataset_info: features: - name: result dtype: string - name: id dtype: int64 splits: - name: train num_bytes: 180 num_examples: 10 download_size: 1325 dataset_size: 180 --- # Dataset Card for "739614f9" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
CyberHarem/eruda_edomaeelf
--- license: mit task_categories: - text-to-image tags: - art - not-for-all-audiences size_categories: - n<1K --- # Dataset of エルダ This is the dataset of エルダ, containing 300 images and their tags. Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)). | Name | Images | Download | Description | |:------------|---------:|:------------------------------------|:-------------------------------------------------------------------------| | raw | 300 | [Download](dataset-raw.zip) | Raw data with meta information. | | raw-stage3 | 667 | [Download](dataset-raw-stage3.zip) | 3-stage cropped raw data with meta information. | | 384x512 | 300 | [Download](dataset-384x512.zip) | 384x512 aligned dataset. | | 512x512 | 300 | [Download](dataset-512x512.zip) | 512x512 aligned dataset. | | 512x704 | 300 | [Download](dataset-512x704.zip) | 512x704 aligned dataset. | | 640x640 | 300 | [Download](dataset-640x640.zip) | 640x640 aligned dataset. | | 640x880 | 300 | [Download](dataset-640x880.zip) | 640x880 aligned dataset. | | stage3-640 | 667 | [Download](dataset-stage3-640.zip) | 3-stage cropped dataset with the shorter side not exceeding 640 pixels. | | stage3-800 | 667 | [Download](dataset-stage3-800.zip) | 3-stage cropped dataset with the shorter side not exceeding 800 pixels. | | stage3-1200 | 667 | [Download](dataset-stage3-1200.zip) | 3-stage cropped dataset with the shorter side not exceeding 1200 pixels. |
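The packages listed above are plain zip archives; after fetching one (e.g. with `huggingface_hub.hf_hub_download`, as other CyberHarem cards show), extraction is a few lines. A sketch, demonstrated on a locally created stand-in archive so it runs without downloading:

```python
import os
import tempfile
import zipfile

def extract_archive(zip_path, dest_dir):
    """Unpack a dataset zip into dest_dir and return the extracted file names."""
    os.makedirs(dest_dir, exist_ok=True)
    with zipfile.ZipFile(zip_path, "r") as zf:
        zf.extractall(dest_dir)
    return sorted(os.listdir(dest_dir))

# Build a tiny stand-in archive; a real download would replace zip_path.
tmp = tempfile.mkdtemp()
zip_path = os.path.join(tmp, "dataset-raw.zip")
with zipfile.ZipFile(zip_path, "w") as zf:
    zf.writestr("image_0001.png", b"...")
    zf.writestr("image_0001.json", b"{}")

print(extract_archive(zip_path, os.path.join(tmp, "out")))
# ['image_0001.json', 'image_0001.png']
```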
automated-research-group/llama2_7b_chat-boolq
--- dataset_info: features: - name: id dtype: string - name: request dtype: string - name: response dtype: string - name: input_perplexity dtype: float64 - name: input_likelihood dtype: float64 - name: output_perplexity dtype: float64 - name: output_likelihood dtype: float64 splits: - name: validation num_bytes: 2716450 num_examples: 3270 download_size: 1480464 dataset_size: 2716450 configs: - config_name: default data_files: - split: validation path: data/validation-* --- # Dataset Card for "llama2_7b_chat-boolq" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
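The card stores both a likelihood and a perplexity per request/response. Under the usual convention these are related by `ppl = exp(-avg_loglik)`; note this is an assumption about the fields here, since the card does not define them. A sketch of the relation:

```python
import math

def perplexity_from_loglikelihood(avg_loglik):
    # Assumed convention: perplexity = exp(-average token log-likelihood).
    return math.exp(-avg_loglik)

print(round(perplexity_from_loglikelihood(-2.0), 4))  # 7.3891
```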
ZHLiu627/ultrafeedback_binarized_with_response_full
--- dataset_info: features: - name: prompt dtype: string - name: prompt_id dtype: string - name: chosen list: - name: content dtype: string - name: role dtype: string - name: rejected list: - name: content dtype: string - name: role dtype: string - name: messages list: - name: content dtype: string - name: role dtype: string - name: score_chosen dtype: float64 - name: score_rejected dtype: float64 - name: reference_response dtype: string splits: - name: train_prefs num_bytes: 510824465 num_examples: 61135 download_size: 0 dataset_size: 510824465 configs: - config_name: default data_files: - split: train_prefs path: data/train_prefs-* --- # Dataset Card for "ultrafeedback_binarized_with_response_full" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
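The `chosen`/`rejected` fields are message lists of `{content, role}` dicts; for preference tuning (e.g. DPO) one typically flattens each pair into plain strings. A sketch that assumes the assistant response is the last message in each list (stated here as an assumption about the binarized format):

```python
def to_preference_pair(example):
    """Flatten one row into (prompt, chosen_response, rejected_response),
    assuming the assistant turn is last in each message list."""
    return (
        example["prompt"],
        example["chosen"][-1]["content"],
        example["rejected"][-1]["content"],
    )

# Synthetic row in the card's schema.
row = {
    "prompt": "Name a prime number.",
    "chosen": [{"role": "user", "content": "Name a prime number."},
               {"role": "assistant", "content": "7 is prime."}],
    "rejected": [{"role": "user", "content": "Name a prime number."},
                 {"role": "assistant", "content": "9 is prime."}],
}
print(to_preference_pair(row))
# ('Name a prime number.', '7 is prime.', '9 is prime.')
```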
zolak/twitter_dataset_78_1713170051
--- dataset_info: features: - name: id dtype: string - name: tweet_content dtype: string - name: user_name dtype: string - name: user_id dtype: string - name: created_at dtype: string - name: url dtype: string - name: favourite_count dtype: int64 - name: scraped_at dtype: string - name: image_urls dtype: string splits: - name: train num_bytes: 354964 num_examples: 898 download_size: 175261 dataset_size: 354964 configs: - config_name: default data_files: - split: train path: data/train-* ---
CyberHarem/annie_leagueoflegends
--- license: mit task_categories: - text-to-image tags: - art - not-for-all-audiences size_categories: - n<1K --- # Dataset of annie (League of Legends) This is the dataset of annie (League of Legends), containing 125 images and their tags. The core tags of this character are `green_eyes, animal_ears, short_hair, red_hair, cat_ears, fake_animal_ears, pink_hair`, which are pruned in this dataset. Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)). ## List of Packages | Name | Images | Size | Download | Type | Description | |:-----------------|---------:|:-----------|:-----------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------| | raw | 125 | 92.28 MiB | [Download](https://huggingface.co/datasets/CyberHarem/annie_leagueoflegends/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). | | 800 | 125 | 67.26 MiB | [Download](https://huggingface.co/datasets/CyberHarem/annie_leagueoflegends/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. | | stage3-p480-800 | 260 | 128.98 MiB | [Download](https://huggingface.co/datasets/CyberHarem/annie_leagueoflegends/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. | | 1200 | 125 | 85.92 MiB | [Download](https://huggingface.co/datasets/CyberHarem/annie_leagueoflegends/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. 
| | stage3-p480-1200 | 260 | 155.25 MiB | [Download](https://huggingface.co/datasets/CyberHarem/annie_leagueoflegends/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. | ### Load Raw Dataset with Waifuc We provide raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need this, just run the following code ```python import os import zipfile from huggingface_hub import hf_hub_download from waifuc.source import LocalSource # download raw archive file zip_file = hf_hub_download( repo_id='CyberHarem/annie_leagueoflegends', repo_type='dataset', filename='dataset-raw.zip', ) # extract files to your directory dataset_dir = 'dataset_dir' os.makedirs(dataset_dir, exist_ok=True) with zipfile.ZipFile(zip_file, 'r') as zf: zf.extractall(dataset_dir) # load the dataset with waifuc source = LocalSource(dataset_dir) for item in source: print(item.image, item.meta['filename'], item.meta['tags']) ``` ## List of Clusters List of tag clustering result, maybe some outfits can be mined here. 
### Raw Text Version

| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:-------------------------------------------------------------------------------------------|
| 0 | 15 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | 1girl, solo, teddy_bear, backpack, looking_at_viewer, smile, dress, puffy_sleeves, striped |

### Table Version

| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | solo | teddy_bear | backpack | looking_at_viewer | smile | dress | puffy_sleeves | striped |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:-------|:-------------|:-----------|:--------------------|:--------|:--------|:----------------|:----------|
| 0 | 15 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | X | X | X | X | X | X | X | X | X |
hippocrates/Multicare_rare_train
---
dataset_info:
  features:
  - name: id
    dtype: string
  - name: conversations
    list:
    - name: from
      dtype: string
    - name: value
      dtype: string
  - name: text
    dtype: string
  splits:
  - name: train
    num_bytes: 47550177
    num_examples: 11334
  - name: valid
    num_bytes: 47550177
    num_examples: 11334
  - name: test
    num_bytes: 47550177
    num_examples: 11334
  download_size: 77224462
  dataset_size: 142650531
---

# Dataset Card for "Multicare_rare_train"

[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
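The `conversations` feature above is a list of `{from, value}` turns. A minimal sketch of flattening one such record into a single training string — the sample record below is hypothetical, not drawn from the dataset:

```python
# Hypothetical record matching the schema (id / conversations / text);
# actual Multicare_rare_train rows may hold different content.
record = {
    "id": "case-0001",
    "conversations": [
        {"from": "human", "value": "Summarize the key findings of this case report."},
        {"from": "gpt", "value": "The patient presented with a rare condition ..."},
    ],
    "text": "",
}

def render(conversations):
    # Join the {from, value} turns into one newline-separated string.
    return "\n".join(f"{turn['from']}: {turn['value']}" for turn in conversations)

print(render(record["conversations"]))
```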
Harsha9044/MAl_MSA
---
license: apache-2.0
dataset_info:
  features:
  - name: File name
    dtype: string
  - name: Transcript
    dtype: string
  - name: Labels
    dtype: string
  - name: __index_level_0__
    dtype: int64
  splits:
  - name: train
    num_bytes: 290551
    num_examples: 70
  download_size: 124404
  dataset_size: 290551
---
TokenBender/HelpSteer_alpaca_reformatted
---
license: apache-2.0
---
annaludicode/ladiesInColoredWaterStyle
---
license: artistic-2.0
---
datahrvoje/twitter_dataset_1713138771
---
dataset_info:
  features:
  - name: id
    dtype: string
  - name: tweet_content
    dtype: string
  - name: user_name
    dtype: string
  - name: user_id
    dtype: string
  - name: created_at
    dtype: string
  - name: url
    dtype: string
  - name: favourite_count
    dtype: int64
  - name: scraped_at
    dtype: string
  - name: image_urls
    dtype: string
  splits:
  - name: train
    num_bytes: 17776
    num_examples: 44
  download_size: 10873
  dataset_size: 17776
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
---
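The schema above has one numeric column (`favourite_count`); everything else is a string, so e.g. "tweets with images" means a non-empty `image_urls` value. A minimal sketch of working with rows of this shape — the sample rows are invented, not real scraped tweets, so the snippet runs without downloading anything:

```python
# Hypothetical rows with the same fields as the card's schema
# (id, tweet_content, user_name, user_id, created_at, url,
#  favourite_count, scraped_at, image_urls).
rows = [
    {"id": "1", "tweet_content": "gm", "user_name": "alice", "user_id": "10",
     "created_at": "2024-04-14 22:32:51", "url": "https://example.com/1",
     "favourite_count": 5, "scraped_at": "2024-04-15 00:00:00", "image_urls": ""},
    {"id": "2", "tweet_content": "chart attached", "user_name": "bob", "user_id": "11",
     "created_at": "2024-04-14 23:01:02", "url": "https://example.com/2",
     "favourite_count": 12, "scraped_at": "2024-04-15 00:00:00",
     "image_urls": "https://example.com/img.png"},
]

# Most-liked tweet: sort on the only int64 column.
most_liked = max(rows, key=lambda r: r["favourite_count"])

# Tweets carrying media: image_urls is a plain string, empty when absent.
with_images = [r for r in rows if r["image_urls"]]

print(most_liked["user_name"], len(with_images))  # prints "bob 1"
```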
open-llm-leaderboard/details_Severian__ANIMA-Phi-Neptune-Mistral-7B
--- pretty_name: Evaluation run of Severian/ANIMA-Phi-Neptune-Mistral-7B dataset_summary: "Dataset automatically created during the evaluation run of model\ \ [Severian/ANIMA-Phi-Neptune-Mistral-7B](https://huggingface.co/Severian/ANIMA-Phi-Neptune-Mistral-7B)\ \ on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\ \nThe dataset is composed of 64 configurations, each one corresponding to one of the\ \ evaluated tasks.\n\nThe dataset has been created from 2 run(s). Each run can be\ \ found as a specific split in each configuration, the split being named using the\ \ timestamp of the run. The \"train\" split is always pointing to the latest results.\n\ \nAn additional configuration \"results\" stores all the aggregated results of the\ \ run (and is used to compute and display the aggregated metrics on the [Open LLM\ \ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\ \nTo load the details from a run, you can for instance do the following:\n```python\n\ from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_Severian__ANIMA-Phi-Neptune-Mistral-7B\"\ ,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\ These are the [latest results from run 2023-10-29T10:04:15.191273](https://huggingface.co/datasets/open-llm-leaderboard/details_Severian__ANIMA-Phi-Neptune-Mistral-7B/blob/main/results_2023-10-29T10-04-15.191273.json) (note\ \ that there might be results for other tasks in the repo if successive evals didn't\ \ cover the same tasks. 
You find each in the results and the \"latest\" split for\ \ each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.10329278523489933,\n\ \ \"em_stderr\": 0.003116735713102519,\n \"f1\": 0.1624748322147643,\n\ \ \"f1_stderr\": 0.003266242273162539,\n \"acc\": 0.442081101118795,\n\ \ \"acc_stderr\": 0.011112320094960076\n },\n \"harness|drop|3\": {\n\ \ \"em\": 0.10329278523489933,\n \"em_stderr\": 0.003116735713102519,\n\ \ \"f1\": 0.1624748322147643,\n \"f1_stderr\": 0.003266242273162539\n\ \ },\n \"harness|gsm8k|5\": {\n \"acc\": 0.14935557240333586,\n \ \ \"acc_stderr\": 0.009818090723727293\n },\n \"harness|winogrande|5\"\ : {\n \"acc\": 0.7348066298342542,\n \"acc_stderr\": 0.01240654946619286\n\ \ }\n}\n```" repo_url: https://huggingface.co/Severian/ANIMA-Phi-Neptune-Mistral-7B leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard point_of_contact: clementine@hf.co configs: - config_name: harness_arc_challenge_25 data_files: - split: 2023_10_10T23_09_12.843992 path: - '**/details_harness|arc:challenge|25_2023-10-10T23-09-12.843992.parquet' - split: latest path: - '**/details_harness|arc:challenge|25_2023-10-10T23-09-12.843992.parquet' - config_name: harness_drop_3 data_files: - split: 2023_10_29T10_04_15.191273 path: - '**/details_harness|drop|3_2023-10-29T10-04-15.191273.parquet' - split: latest path: - '**/details_harness|drop|3_2023-10-29T10-04-15.191273.parquet' - config_name: harness_gsm8k_5 data_files: - split: 2023_10_29T10_04_15.191273 path: - '**/details_harness|gsm8k|5_2023-10-29T10-04-15.191273.parquet' - split: latest path: - '**/details_harness|gsm8k|5_2023-10-29T10-04-15.191273.parquet' - config_name: harness_hellaswag_10 data_files: - split: 2023_10_10T23_09_12.843992 path: - '**/details_harness|hellaswag|10_2023-10-10T23-09-12.843992.parquet' - split: latest path: - '**/details_harness|hellaswag|10_2023-10-10T23-09-12.843992.parquet' - config_name: harness_hendrycksTest_5 data_files: - split: 
2023_10_10T23_09_12.843992 path: - '**/details_harness|hendrycksTest-abstract_algebra|5_2023-10-10T23-09-12.843992.parquet' - '**/details_harness|hendrycksTest-anatomy|5_2023-10-10T23-09-12.843992.parquet' - '**/details_harness|hendrycksTest-astronomy|5_2023-10-10T23-09-12.843992.parquet' - '**/details_harness|hendrycksTest-business_ethics|5_2023-10-10T23-09-12.843992.parquet' - '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-10-10T23-09-12.843992.parquet' - '**/details_harness|hendrycksTest-college_biology|5_2023-10-10T23-09-12.843992.parquet' - '**/details_harness|hendrycksTest-college_chemistry|5_2023-10-10T23-09-12.843992.parquet' - '**/details_harness|hendrycksTest-college_computer_science|5_2023-10-10T23-09-12.843992.parquet' - '**/details_harness|hendrycksTest-college_mathematics|5_2023-10-10T23-09-12.843992.parquet' - '**/details_harness|hendrycksTest-college_medicine|5_2023-10-10T23-09-12.843992.parquet' - '**/details_harness|hendrycksTest-college_physics|5_2023-10-10T23-09-12.843992.parquet' - '**/details_harness|hendrycksTest-computer_security|5_2023-10-10T23-09-12.843992.parquet' - '**/details_harness|hendrycksTest-conceptual_physics|5_2023-10-10T23-09-12.843992.parquet' - '**/details_harness|hendrycksTest-econometrics|5_2023-10-10T23-09-12.843992.parquet' - '**/details_harness|hendrycksTest-electrical_engineering|5_2023-10-10T23-09-12.843992.parquet' - '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-10-10T23-09-12.843992.parquet' - '**/details_harness|hendrycksTest-formal_logic|5_2023-10-10T23-09-12.843992.parquet' - '**/details_harness|hendrycksTest-global_facts|5_2023-10-10T23-09-12.843992.parquet' - '**/details_harness|hendrycksTest-high_school_biology|5_2023-10-10T23-09-12.843992.parquet' - '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-10-10T23-09-12.843992.parquet' - '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-10-10T23-09-12.843992.parquet' - 
'**/details_harness|hendrycksTest-high_school_european_history|5_2023-10-10T23-09-12.843992.parquet' - '**/details_harness|hendrycksTest-high_school_geography|5_2023-10-10T23-09-12.843992.parquet' - '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-10-10T23-09-12.843992.parquet' - '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-10-10T23-09-12.843992.parquet' - '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-10-10T23-09-12.843992.parquet' - '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-10-10T23-09-12.843992.parquet' - '**/details_harness|hendrycksTest-high_school_physics|5_2023-10-10T23-09-12.843992.parquet' - '**/details_harness|hendrycksTest-high_school_psychology|5_2023-10-10T23-09-12.843992.parquet' - '**/details_harness|hendrycksTest-high_school_statistics|5_2023-10-10T23-09-12.843992.parquet' - '**/details_harness|hendrycksTest-high_school_us_history|5_2023-10-10T23-09-12.843992.parquet' - '**/details_harness|hendrycksTest-high_school_world_history|5_2023-10-10T23-09-12.843992.parquet' - '**/details_harness|hendrycksTest-human_aging|5_2023-10-10T23-09-12.843992.parquet' - '**/details_harness|hendrycksTest-human_sexuality|5_2023-10-10T23-09-12.843992.parquet' - '**/details_harness|hendrycksTest-international_law|5_2023-10-10T23-09-12.843992.parquet' - '**/details_harness|hendrycksTest-jurisprudence|5_2023-10-10T23-09-12.843992.parquet' - '**/details_harness|hendrycksTest-logical_fallacies|5_2023-10-10T23-09-12.843992.parquet' - '**/details_harness|hendrycksTest-machine_learning|5_2023-10-10T23-09-12.843992.parquet' - '**/details_harness|hendrycksTest-management|5_2023-10-10T23-09-12.843992.parquet' - '**/details_harness|hendrycksTest-marketing|5_2023-10-10T23-09-12.843992.parquet' - '**/details_harness|hendrycksTest-medical_genetics|5_2023-10-10T23-09-12.843992.parquet' - '**/details_harness|hendrycksTest-miscellaneous|5_2023-10-10T23-09-12.843992.parquet' - 
'**/details_harness|hendrycksTest-moral_disputes|5_2023-10-10T23-09-12.843992.parquet' - '**/details_harness|hendrycksTest-moral_scenarios|5_2023-10-10T23-09-12.843992.parquet' - '**/details_harness|hendrycksTest-nutrition|5_2023-10-10T23-09-12.843992.parquet' - '**/details_harness|hendrycksTest-philosophy|5_2023-10-10T23-09-12.843992.parquet' - '**/details_harness|hendrycksTest-prehistory|5_2023-10-10T23-09-12.843992.parquet' - '**/details_harness|hendrycksTest-professional_accounting|5_2023-10-10T23-09-12.843992.parquet' - '**/details_harness|hendrycksTest-professional_law|5_2023-10-10T23-09-12.843992.parquet' - '**/details_harness|hendrycksTest-professional_medicine|5_2023-10-10T23-09-12.843992.parquet' - '**/details_harness|hendrycksTest-professional_psychology|5_2023-10-10T23-09-12.843992.parquet' - '**/details_harness|hendrycksTest-public_relations|5_2023-10-10T23-09-12.843992.parquet' - '**/details_harness|hendrycksTest-security_studies|5_2023-10-10T23-09-12.843992.parquet' - '**/details_harness|hendrycksTest-sociology|5_2023-10-10T23-09-12.843992.parquet' - '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-10-10T23-09-12.843992.parquet' - '**/details_harness|hendrycksTest-virology|5_2023-10-10T23-09-12.843992.parquet' - '**/details_harness|hendrycksTest-world_religions|5_2023-10-10T23-09-12.843992.parquet' - split: latest path: - '**/details_harness|hendrycksTest-abstract_algebra|5_2023-10-10T23-09-12.843992.parquet' - '**/details_harness|hendrycksTest-anatomy|5_2023-10-10T23-09-12.843992.parquet' - '**/details_harness|hendrycksTest-astronomy|5_2023-10-10T23-09-12.843992.parquet' - '**/details_harness|hendrycksTest-business_ethics|5_2023-10-10T23-09-12.843992.parquet' - '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-10-10T23-09-12.843992.parquet' - '**/details_harness|hendrycksTest-college_biology|5_2023-10-10T23-09-12.843992.parquet' - '**/details_harness|hendrycksTest-college_chemistry|5_2023-10-10T23-09-12.843992.parquet' - 
'**/details_harness|hendrycksTest-college_computer_science|5_2023-10-10T23-09-12.843992.parquet' - '**/details_harness|hendrycksTest-college_mathematics|5_2023-10-10T23-09-12.843992.parquet' - '**/details_harness|hendrycksTest-college_medicine|5_2023-10-10T23-09-12.843992.parquet' - '**/details_harness|hendrycksTest-college_physics|5_2023-10-10T23-09-12.843992.parquet' - '**/details_harness|hendrycksTest-computer_security|5_2023-10-10T23-09-12.843992.parquet' - '**/details_harness|hendrycksTest-conceptual_physics|5_2023-10-10T23-09-12.843992.parquet' - '**/details_harness|hendrycksTest-econometrics|5_2023-10-10T23-09-12.843992.parquet' - '**/details_harness|hendrycksTest-electrical_engineering|5_2023-10-10T23-09-12.843992.parquet' - '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-10-10T23-09-12.843992.parquet' - '**/details_harness|hendrycksTest-formal_logic|5_2023-10-10T23-09-12.843992.parquet' - '**/details_harness|hendrycksTest-global_facts|5_2023-10-10T23-09-12.843992.parquet' - '**/details_harness|hendrycksTest-high_school_biology|5_2023-10-10T23-09-12.843992.parquet' - '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-10-10T23-09-12.843992.parquet' - '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-10-10T23-09-12.843992.parquet' - '**/details_harness|hendrycksTest-high_school_european_history|5_2023-10-10T23-09-12.843992.parquet' - '**/details_harness|hendrycksTest-high_school_geography|5_2023-10-10T23-09-12.843992.parquet' - '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-10-10T23-09-12.843992.parquet' - '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-10-10T23-09-12.843992.parquet' - '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-10-10T23-09-12.843992.parquet' - '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-10-10T23-09-12.843992.parquet' - 
'**/details_harness|hendrycksTest-high_school_physics|5_2023-10-10T23-09-12.843992.parquet' - '**/details_harness|hendrycksTest-high_school_psychology|5_2023-10-10T23-09-12.843992.parquet' - '**/details_harness|hendrycksTest-high_school_statistics|5_2023-10-10T23-09-12.843992.parquet' - '**/details_harness|hendrycksTest-high_school_us_history|5_2023-10-10T23-09-12.843992.parquet' - '**/details_harness|hendrycksTest-high_school_world_history|5_2023-10-10T23-09-12.843992.parquet' - '**/details_harness|hendrycksTest-human_aging|5_2023-10-10T23-09-12.843992.parquet' - '**/details_harness|hendrycksTest-human_sexuality|5_2023-10-10T23-09-12.843992.parquet' - '**/details_harness|hendrycksTest-international_law|5_2023-10-10T23-09-12.843992.parquet' - '**/details_harness|hendrycksTest-jurisprudence|5_2023-10-10T23-09-12.843992.parquet' - '**/details_harness|hendrycksTest-logical_fallacies|5_2023-10-10T23-09-12.843992.parquet' - '**/details_harness|hendrycksTest-machine_learning|5_2023-10-10T23-09-12.843992.parquet' - '**/details_harness|hendrycksTest-management|5_2023-10-10T23-09-12.843992.parquet' - '**/details_harness|hendrycksTest-marketing|5_2023-10-10T23-09-12.843992.parquet' - '**/details_harness|hendrycksTest-medical_genetics|5_2023-10-10T23-09-12.843992.parquet' - '**/details_harness|hendrycksTest-miscellaneous|5_2023-10-10T23-09-12.843992.parquet' - '**/details_harness|hendrycksTest-moral_disputes|5_2023-10-10T23-09-12.843992.parquet' - '**/details_harness|hendrycksTest-moral_scenarios|5_2023-10-10T23-09-12.843992.parquet' - '**/details_harness|hendrycksTest-nutrition|5_2023-10-10T23-09-12.843992.parquet' - '**/details_harness|hendrycksTest-philosophy|5_2023-10-10T23-09-12.843992.parquet' - '**/details_harness|hendrycksTest-prehistory|5_2023-10-10T23-09-12.843992.parquet' - '**/details_harness|hendrycksTest-professional_accounting|5_2023-10-10T23-09-12.843992.parquet' - '**/details_harness|hendrycksTest-professional_law|5_2023-10-10T23-09-12.843992.parquet' - 
'**/details_harness|hendrycksTest-professional_medicine|5_2023-10-10T23-09-12.843992.parquet' - '**/details_harness|hendrycksTest-professional_psychology|5_2023-10-10T23-09-12.843992.parquet' - '**/details_harness|hendrycksTest-public_relations|5_2023-10-10T23-09-12.843992.parquet' - '**/details_harness|hendrycksTest-security_studies|5_2023-10-10T23-09-12.843992.parquet' - '**/details_harness|hendrycksTest-sociology|5_2023-10-10T23-09-12.843992.parquet' - '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-10-10T23-09-12.843992.parquet' - '**/details_harness|hendrycksTest-virology|5_2023-10-10T23-09-12.843992.parquet' - '**/details_harness|hendrycksTest-world_religions|5_2023-10-10T23-09-12.843992.parquet' - config_name: harness_hendrycksTest_abstract_algebra_5 data_files: - split: 2023_10_10T23_09_12.843992 path: - '**/details_harness|hendrycksTest-abstract_algebra|5_2023-10-10T23-09-12.843992.parquet' - split: latest path: - '**/details_harness|hendrycksTest-abstract_algebra|5_2023-10-10T23-09-12.843992.parquet' - config_name: harness_hendrycksTest_anatomy_5 data_files: - split: 2023_10_10T23_09_12.843992 path: - '**/details_harness|hendrycksTest-anatomy|5_2023-10-10T23-09-12.843992.parquet' - split: latest path: - '**/details_harness|hendrycksTest-anatomy|5_2023-10-10T23-09-12.843992.parquet' - config_name: harness_hendrycksTest_astronomy_5 data_files: - split: 2023_10_10T23_09_12.843992 path: - '**/details_harness|hendrycksTest-astronomy|5_2023-10-10T23-09-12.843992.parquet' - split: latest path: - '**/details_harness|hendrycksTest-astronomy|5_2023-10-10T23-09-12.843992.parquet' - config_name: harness_hendrycksTest_business_ethics_5 data_files: - split: 2023_10_10T23_09_12.843992 path: - '**/details_harness|hendrycksTest-business_ethics|5_2023-10-10T23-09-12.843992.parquet' - split: latest path: - '**/details_harness|hendrycksTest-business_ethics|5_2023-10-10T23-09-12.843992.parquet' - config_name: harness_hendrycksTest_clinical_knowledge_5 data_files: - 
split: 2023_10_10T23_09_12.843992 path: - '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-10-10T23-09-12.843992.parquet' - split: latest path: - '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-10-10T23-09-12.843992.parquet' - config_name: harness_hendrycksTest_college_biology_5 data_files: - split: 2023_10_10T23_09_12.843992 path: - '**/details_harness|hendrycksTest-college_biology|5_2023-10-10T23-09-12.843992.parquet' - split: latest path: - '**/details_harness|hendrycksTest-college_biology|5_2023-10-10T23-09-12.843992.parquet' - config_name: harness_hendrycksTest_college_chemistry_5 data_files: - split: 2023_10_10T23_09_12.843992 path: - '**/details_harness|hendrycksTest-college_chemistry|5_2023-10-10T23-09-12.843992.parquet' - split: latest path: - '**/details_harness|hendrycksTest-college_chemistry|5_2023-10-10T23-09-12.843992.parquet' - config_name: harness_hendrycksTest_college_computer_science_5 data_files: - split: 2023_10_10T23_09_12.843992 path: - '**/details_harness|hendrycksTest-college_computer_science|5_2023-10-10T23-09-12.843992.parquet' - split: latest path: - '**/details_harness|hendrycksTest-college_computer_science|5_2023-10-10T23-09-12.843992.parquet' - config_name: harness_hendrycksTest_college_mathematics_5 data_files: - split: 2023_10_10T23_09_12.843992 path: - '**/details_harness|hendrycksTest-college_mathematics|5_2023-10-10T23-09-12.843992.parquet' - split: latest path: - '**/details_harness|hendrycksTest-college_mathematics|5_2023-10-10T23-09-12.843992.parquet' - config_name: harness_hendrycksTest_college_medicine_5 data_files: - split: 2023_10_10T23_09_12.843992 path: - '**/details_harness|hendrycksTest-college_medicine|5_2023-10-10T23-09-12.843992.parquet' - split: latest path: - '**/details_harness|hendrycksTest-college_medicine|5_2023-10-10T23-09-12.843992.parquet' - config_name: harness_hendrycksTest_college_physics_5 data_files: - split: 2023_10_10T23_09_12.843992 path: - 
'**/details_harness|hendrycksTest-college_physics|5_2023-10-10T23-09-12.843992.parquet' - split: latest path: - '**/details_harness|hendrycksTest-college_physics|5_2023-10-10T23-09-12.843992.parquet' - config_name: harness_hendrycksTest_computer_security_5 data_files: - split: 2023_10_10T23_09_12.843992 path: - '**/details_harness|hendrycksTest-computer_security|5_2023-10-10T23-09-12.843992.parquet' - split: latest path: - '**/details_harness|hendrycksTest-computer_security|5_2023-10-10T23-09-12.843992.parquet' - config_name: harness_hendrycksTest_conceptual_physics_5 data_files: - split: 2023_10_10T23_09_12.843992 path: - '**/details_harness|hendrycksTest-conceptual_physics|5_2023-10-10T23-09-12.843992.parquet' - split: latest path: - '**/details_harness|hendrycksTest-conceptual_physics|5_2023-10-10T23-09-12.843992.parquet' - config_name: harness_hendrycksTest_econometrics_5 data_files: - split: 2023_10_10T23_09_12.843992 path: - '**/details_harness|hendrycksTest-econometrics|5_2023-10-10T23-09-12.843992.parquet' - split: latest path: - '**/details_harness|hendrycksTest-econometrics|5_2023-10-10T23-09-12.843992.parquet' - config_name: harness_hendrycksTest_electrical_engineering_5 data_files: - split: 2023_10_10T23_09_12.843992 path: - '**/details_harness|hendrycksTest-electrical_engineering|5_2023-10-10T23-09-12.843992.parquet' - split: latest path: - '**/details_harness|hendrycksTest-electrical_engineering|5_2023-10-10T23-09-12.843992.parquet' - config_name: harness_hendrycksTest_elementary_mathematics_5 data_files: - split: 2023_10_10T23_09_12.843992 path: - '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-10-10T23-09-12.843992.parquet' - split: latest path: - '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-10-10T23-09-12.843992.parquet' - config_name: harness_hendrycksTest_formal_logic_5 data_files: - split: 2023_10_10T23_09_12.843992 path: - '**/details_harness|hendrycksTest-formal_logic|5_2023-10-10T23-09-12.843992.parquet' - 
split: latest path: - '**/details_harness|hendrycksTest-formal_logic|5_2023-10-10T23-09-12.843992.parquet' - config_name: harness_hendrycksTest_global_facts_5 data_files: - split: 2023_10_10T23_09_12.843992 path: - '**/details_harness|hendrycksTest-global_facts|5_2023-10-10T23-09-12.843992.parquet' - split: latest path: - '**/details_harness|hendrycksTest-global_facts|5_2023-10-10T23-09-12.843992.parquet' - config_name: harness_hendrycksTest_high_school_biology_5 data_files: - split: 2023_10_10T23_09_12.843992 path: - '**/details_harness|hendrycksTest-high_school_biology|5_2023-10-10T23-09-12.843992.parquet' - split: latest path: - '**/details_harness|hendrycksTest-high_school_biology|5_2023-10-10T23-09-12.843992.parquet' - config_name: harness_hendrycksTest_high_school_chemistry_5 data_files: - split: 2023_10_10T23_09_12.843992 path: - '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-10-10T23-09-12.843992.parquet' - split: latest path: - '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-10-10T23-09-12.843992.parquet' - config_name: harness_hendrycksTest_high_school_computer_science_5 data_files: - split: 2023_10_10T23_09_12.843992 path: - '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-10-10T23-09-12.843992.parquet' - split: latest path: - '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-10-10T23-09-12.843992.parquet' - config_name: harness_hendrycksTest_high_school_european_history_5 data_files: - split: 2023_10_10T23_09_12.843992 path: - '**/details_harness|hendrycksTest-high_school_european_history|5_2023-10-10T23-09-12.843992.parquet' - split: latest path: - '**/details_harness|hendrycksTest-high_school_european_history|5_2023-10-10T23-09-12.843992.parquet' - config_name: harness_hendrycksTest_high_school_geography_5 data_files: - split: 2023_10_10T23_09_12.843992 path: - '**/details_harness|hendrycksTest-high_school_geography|5_2023-10-10T23-09-12.843992.parquet' - split: latest path: - 
'**/details_harness|hendrycksTest-high_school_geography|5_2023-10-10T23-09-12.843992.parquet' - config_name: harness_hendrycksTest_high_school_government_and_politics_5 data_files: - split: 2023_10_10T23_09_12.843992 path: - '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-10-10T23-09-12.843992.parquet' - split: latest path: - '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-10-10T23-09-12.843992.parquet' - config_name: harness_hendrycksTest_high_school_macroeconomics_5 data_files: - split: 2023_10_10T23_09_12.843992 path: - '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-10-10T23-09-12.843992.parquet' - split: latest path: - '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-10-10T23-09-12.843992.parquet' - config_name: harness_hendrycksTest_high_school_mathematics_5 data_files: - split: 2023_10_10T23_09_12.843992 path: - '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-10-10T23-09-12.843992.parquet' - split: latest path: - '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-10-10T23-09-12.843992.parquet' - config_name: harness_hendrycksTest_high_school_microeconomics_5 data_files: - split: 2023_10_10T23_09_12.843992 path: - '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-10-10T23-09-12.843992.parquet' - split: latest path: - '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-10-10T23-09-12.843992.parquet' - config_name: harness_hendrycksTest_high_school_physics_5 data_files: - split: 2023_10_10T23_09_12.843992 path: - '**/details_harness|hendrycksTest-high_school_physics|5_2023-10-10T23-09-12.843992.parquet' - split: latest path: - '**/details_harness|hendrycksTest-high_school_physics|5_2023-10-10T23-09-12.843992.parquet' - config_name: harness_hendrycksTest_high_school_psychology_5 data_files: - split: 2023_10_10T23_09_12.843992 path: - 
'**/details_harness|hendrycksTest-high_school_psychology|5_2023-10-10T23-09-12.843992.parquet' - split: latest path: - '**/details_harness|hendrycksTest-high_school_psychology|5_2023-10-10T23-09-12.843992.parquet' - config_name: harness_hendrycksTest_high_school_statistics_5 data_files: - split: 2023_10_10T23_09_12.843992 path: - '**/details_harness|hendrycksTest-high_school_statistics|5_2023-10-10T23-09-12.843992.parquet' - split: latest path: - '**/details_harness|hendrycksTest-high_school_statistics|5_2023-10-10T23-09-12.843992.parquet' - config_name: harness_hendrycksTest_high_school_us_history_5 data_files: - split: 2023_10_10T23_09_12.843992 path: - '**/details_harness|hendrycksTest-high_school_us_history|5_2023-10-10T23-09-12.843992.parquet' - split: latest path: - '**/details_harness|hendrycksTest-high_school_us_history|5_2023-10-10T23-09-12.843992.parquet' - config_name: harness_hendrycksTest_high_school_world_history_5 data_files: - split: 2023_10_10T23_09_12.843992 path: - '**/details_harness|hendrycksTest-high_school_world_history|5_2023-10-10T23-09-12.843992.parquet' - split: latest path: - '**/details_harness|hendrycksTest-high_school_world_history|5_2023-10-10T23-09-12.843992.parquet' - config_name: harness_hendrycksTest_human_aging_5 data_files: - split: 2023_10_10T23_09_12.843992 path: - '**/details_harness|hendrycksTest-human_aging|5_2023-10-10T23-09-12.843992.parquet' - split: latest path: - '**/details_harness|hendrycksTest-human_aging|5_2023-10-10T23-09-12.843992.parquet' - config_name: harness_hendrycksTest_human_sexuality_5 data_files: - split: 2023_10_10T23_09_12.843992 path: - '**/details_harness|hendrycksTest-human_sexuality|5_2023-10-10T23-09-12.843992.parquet' - split: latest path: - '**/details_harness|hendrycksTest-human_sexuality|5_2023-10-10T23-09-12.843992.parquet' - config_name: harness_hendrycksTest_international_law_5 data_files: - split: 2023_10_10T23_09_12.843992 path: - 
'**/details_harness|hendrycksTest-international_law|5_2023-10-10T23-09-12.843992.parquet' - split: latest path: - '**/details_harness|hendrycksTest-international_law|5_2023-10-10T23-09-12.843992.parquet' - config_name: harness_hendrycksTest_jurisprudence_5 data_files: - split: 2023_10_10T23_09_12.843992 path: - '**/details_harness|hendrycksTest-jurisprudence|5_2023-10-10T23-09-12.843992.parquet' - split: latest path: - '**/details_harness|hendrycksTest-jurisprudence|5_2023-10-10T23-09-12.843992.parquet' - config_name: harness_hendrycksTest_logical_fallacies_5 data_files: - split: 2023_10_10T23_09_12.843992 path: - '**/details_harness|hendrycksTest-logical_fallacies|5_2023-10-10T23-09-12.843992.parquet' - split: latest path: - '**/details_harness|hendrycksTest-logical_fallacies|5_2023-10-10T23-09-12.843992.parquet' - config_name: harness_hendrycksTest_machine_learning_5 data_files: - split: 2023_10_10T23_09_12.843992 path: - '**/details_harness|hendrycksTest-machine_learning|5_2023-10-10T23-09-12.843992.parquet' - split: latest path: - '**/details_harness|hendrycksTest-machine_learning|5_2023-10-10T23-09-12.843992.parquet' - config_name: harness_hendrycksTest_management_5 data_files: - split: 2023_10_10T23_09_12.843992 path: - '**/details_harness|hendrycksTest-management|5_2023-10-10T23-09-12.843992.parquet' - split: latest path: - '**/details_harness|hendrycksTest-management|5_2023-10-10T23-09-12.843992.parquet' - config_name: harness_hendrycksTest_marketing_5 data_files: - split: 2023_10_10T23_09_12.843992 path: - '**/details_harness|hendrycksTest-marketing|5_2023-10-10T23-09-12.843992.parquet' - split: latest path: - '**/details_harness|hendrycksTest-marketing|5_2023-10-10T23-09-12.843992.parquet' - config_name: harness_hendrycksTest_medical_genetics_5 data_files: - split: 2023_10_10T23_09_12.843992 path: - '**/details_harness|hendrycksTest-medical_genetics|5_2023-10-10T23-09-12.843992.parquet' - split: latest path: - 
'**/details_harness|hendrycksTest-medical_genetics|5_2023-10-10T23-09-12.843992.parquet' - config_name: harness_hendrycksTest_miscellaneous_5 data_files: - split: 2023_10_10T23_09_12.843992 path: - '**/details_harness|hendrycksTest-miscellaneous|5_2023-10-10T23-09-12.843992.parquet' - split: latest path: - '**/details_harness|hendrycksTest-miscellaneous|5_2023-10-10T23-09-12.843992.parquet' - config_name: harness_hendrycksTest_moral_disputes_5 data_files: - split: 2023_10_10T23_09_12.843992 path: - '**/details_harness|hendrycksTest-moral_disputes|5_2023-10-10T23-09-12.843992.parquet' - split: latest path: - '**/details_harness|hendrycksTest-moral_disputes|5_2023-10-10T23-09-12.843992.parquet' - config_name: harness_hendrycksTest_moral_scenarios_5 data_files: - split: 2023_10_10T23_09_12.843992 path: - '**/details_harness|hendrycksTest-moral_scenarios|5_2023-10-10T23-09-12.843992.parquet' - split: latest path: - '**/details_harness|hendrycksTest-moral_scenarios|5_2023-10-10T23-09-12.843992.parquet' - config_name: harness_hendrycksTest_nutrition_5 data_files: - split: 2023_10_10T23_09_12.843992 path: - '**/details_harness|hendrycksTest-nutrition|5_2023-10-10T23-09-12.843992.parquet' - split: latest path: - '**/details_harness|hendrycksTest-nutrition|5_2023-10-10T23-09-12.843992.parquet' - config_name: harness_hendrycksTest_philosophy_5 data_files: - split: 2023_10_10T23_09_12.843992 path: - '**/details_harness|hendrycksTest-philosophy|5_2023-10-10T23-09-12.843992.parquet' - split: latest path: - '**/details_harness|hendrycksTest-philosophy|5_2023-10-10T23-09-12.843992.parquet' - config_name: harness_hendrycksTest_prehistory_5 data_files: - split: 2023_10_10T23_09_12.843992 path: - '**/details_harness|hendrycksTest-prehistory|5_2023-10-10T23-09-12.843992.parquet' - split: latest path: - '**/details_harness|hendrycksTest-prehistory|5_2023-10-10T23-09-12.843992.parquet' - config_name: harness_hendrycksTest_professional_accounting_5 data_files: - split: 
2023_10_10T23_09_12.843992 path: - '**/details_harness|hendrycksTest-professional_accounting|5_2023-10-10T23-09-12.843992.parquet' - split: latest path: - '**/details_harness|hendrycksTest-professional_accounting|5_2023-10-10T23-09-12.843992.parquet' - config_name: harness_hendrycksTest_professional_law_5 data_files: - split: 2023_10_10T23_09_12.843992 path: - '**/details_harness|hendrycksTest-professional_law|5_2023-10-10T23-09-12.843992.parquet' - split: latest path: - '**/details_harness|hendrycksTest-professional_law|5_2023-10-10T23-09-12.843992.parquet' - config_name: harness_hendrycksTest_professional_medicine_5 data_files: - split: 2023_10_10T23_09_12.843992 path: - '**/details_harness|hendrycksTest-professional_medicine|5_2023-10-10T23-09-12.843992.parquet' - split: latest path: - '**/details_harness|hendrycksTest-professional_medicine|5_2023-10-10T23-09-12.843992.parquet' - config_name: harness_hendrycksTest_professional_psychology_5 data_files: - split: 2023_10_10T23_09_12.843992 path: - '**/details_harness|hendrycksTest-professional_psychology|5_2023-10-10T23-09-12.843992.parquet' - split: latest path: - '**/details_harness|hendrycksTest-professional_psychology|5_2023-10-10T23-09-12.843992.parquet' - config_name: harness_hendrycksTest_public_relations_5 data_files: - split: 2023_10_10T23_09_12.843992 path: - '**/details_harness|hendrycksTest-public_relations|5_2023-10-10T23-09-12.843992.parquet' - split: latest path: - '**/details_harness|hendrycksTest-public_relations|5_2023-10-10T23-09-12.843992.parquet' - config_name: harness_hendrycksTest_security_studies_5 data_files: - split: 2023_10_10T23_09_12.843992 path: - '**/details_harness|hendrycksTest-security_studies|5_2023-10-10T23-09-12.843992.parquet' - split: latest path: - '**/details_harness|hendrycksTest-security_studies|5_2023-10-10T23-09-12.843992.parquet' - config_name: harness_hendrycksTest_sociology_5 data_files: - split: 2023_10_10T23_09_12.843992 path: - 
'**/details_harness|hendrycksTest-sociology|5_2023-10-10T23-09-12.843992.parquet' - split: latest path: - '**/details_harness|hendrycksTest-sociology|5_2023-10-10T23-09-12.843992.parquet' - config_name: harness_hendrycksTest_us_foreign_policy_5 data_files: - split: 2023_10_10T23_09_12.843992 path: - '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-10-10T23-09-12.843992.parquet' - split: latest path: - '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-10-10T23-09-12.843992.parquet' - config_name: harness_hendrycksTest_virology_5 data_files: - split: 2023_10_10T23_09_12.843992 path: - '**/details_harness|hendrycksTest-virology|5_2023-10-10T23-09-12.843992.parquet' - split: latest path: - '**/details_harness|hendrycksTest-virology|5_2023-10-10T23-09-12.843992.parquet' - config_name: harness_hendrycksTest_world_religions_5 data_files: - split: 2023_10_10T23_09_12.843992 path: - '**/details_harness|hendrycksTest-world_religions|5_2023-10-10T23-09-12.843992.parquet' - split: latest path: - '**/details_harness|hendrycksTest-world_religions|5_2023-10-10T23-09-12.843992.parquet' - config_name: harness_truthfulqa_mc_0 data_files: - split: 2023_10_10T23_09_12.843992 path: - '**/details_harness|truthfulqa:mc|0_2023-10-10T23-09-12.843992.parquet' - split: latest path: - '**/details_harness|truthfulqa:mc|0_2023-10-10T23-09-12.843992.parquet' - config_name: harness_winogrande_5 data_files: - split: 2023_10_29T10_04_15.191273 path: - '**/details_harness|winogrande|5_2023-10-29T10-04-15.191273.parquet' - split: latest path: - '**/details_harness|winogrande|5_2023-10-29T10-04-15.191273.parquet' - config_name: results data_files: - split: 2023_10_10T23_09_12.843992 path: - results_2023-10-10T23-09-12.843992.parquet - split: 2023_10_29T10_04_15.191273 path: - results_2023-10-29T10-04-15.191273.parquet - split: latest path: - results_2023-10-29T10-04-15.191273.parquet --- # Dataset Card for Evaluation run of Severian/ANIMA-Phi-Neptune-Mistral-7B ## Dataset 
Description - **Homepage:** - **Repository:** https://huggingface.co/Severian/ANIMA-Phi-Neptune-Mistral-7B - **Paper:** - **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard - **Point of Contact:** clementine@hf.co ### Dataset Summary Dataset automatically created during the evaluation run of model [Severian/ANIMA-Phi-Neptune-Mistral-7B](https://huggingface.co/Severian/ANIMA-Phi-Neptune-Mistral-7B) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard). The dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks. The dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results. An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)). To load the details from a run, you can for instance do the following: ```python from datasets import load_dataset data = load_dataset("open-llm-leaderboard/details_Severian__ANIMA-Phi-Neptune-Mistral-7B", "harness_winogrande_5", split="train") ``` ## Latest results These are the [latest results from run 2023-10-29T10:04:15.191273](https://huggingface.co/datasets/open-llm-leaderboard/details_Severian__ANIMA-Phi-Neptune-Mistral-7B/blob/main/results_2023-10-29T10-04-15.191273.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks. 
You can find each in the results and the "latest" split for each eval): ```python { "all": { "em": 0.10329278523489933, "em_stderr": 0.003116735713102519, "f1": 0.1624748322147643, "f1_stderr": 0.003266242273162539, "acc": 0.442081101118795, "acc_stderr": 0.011112320094960076 }, "harness|drop|3": { "em": 0.10329278523489933, "em_stderr": 0.003116735713102519, "f1": 0.1624748322147643, "f1_stderr": 0.003266242273162539 }, "harness|gsm8k|5": { "acc": 0.14935557240333586, "acc_stderr": 0.009818090723727293 }, "harness|winogrande|5": { "acc": 0.7348066298342542, "acc_stderr": 0.01240654946619286 } } ``` ### Supported Tasks and Leaderboards [More Information Needed] ### Languages [More Information Needed] ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information [More Information Needed] ### Contributions [More Information Needed]
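The `data_files` entries earlier in this card pair each run timestamp with a split name and a parquet file stamp. A minimal sketch of that mapping, as inferred from the entries above rather than from any official leaderboard tooling:

```python
# Inferred from the `data_files` entries above (an assumption, not official
# tooling): a run timestamp such as "2023-10-29T10:04:15.191273" becomes the
# split name by replacing "-" and ":" with "_", and appears in the parquet
# file names with ":" replaced by "-".

def split_name(run_timestamp: str) -> str:
    """Split name as it appears under `data_files` in this card."""
    return run_timestamp.replace("-", "_").replace(":", "_")

def file_stamp(run_timestamp: str) -> str:
    """Timestamp as embedded in the parquet file names."""
    return run_timestamp.replace(":", "-")

ts = "2023-10-29T10:04:15.191273"
print(split_name(ts))  # 2023_10_29T10_04_15.191273
print(file_stamp(ts))  # 2023-10-29T10-04-15.191273
```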
lucasabc/vozes
--- license: other license_name: teste license_link: LICENSE ---
Dahoas/instruct_helpful_preferences
--- dataset_info: features: - name: prompt dtype: string - name: response dtype: string - name: chosen dtype: string - name: rejected dtype: string splits: - name: train num_bytes: 187411037 num_examples: 105161 - name: test num_bytes: 9924509 num_examples: 5538 download_size: 119287465 dataset_size: 197335546 --- # Dataset Card for "instruct_helpful_preferences" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
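The prompt/chosen/rejected columns above are the usual shape for preference training (reward modeling or DPO). A minimal sketch of turning one row into a (preferred, rejected) text pair; the row below is made up for illustration, not actual dataset contents:

```python
def to_preference_pair(row: dict) -> tuple[str, str]:
    """Concatenate the prompt with each completion: first the
    human-preferred text, then the rejected one."""
    return row["prompt"] + row["chosen"], row["prompt"] + row["rejected"]

# Hypothetical row with the card's four fields (not a real example).
row = {
    "prompt": "Q: What is 2 + 2?\nA:",
    "response": " 4",
    "chosen": " 4",
    "rejected": " 5",
}
preferred, rejected = to_preference_pair(row)
print(preferred)
print(rejected)
```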
tyzhu/synpre_mix_v4_1M
--- configs: - config_name: default data_files: - split: train path: data/train-* - split: validation path: data/validation-* dataset_info: features: - name: inputs dtype: string - name: targets dtype: string splits: - name: train num_bytes: 1631375419.5 num_examples: 1000000 - name: validation num_bytes: 16342801.5 num_examples: 10000 download_size: 10827005 dataset_size: 1647718221.0 --- # Dataset Card for "synpre_mix_v4_1M" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
sadelja/cuprum_dataset
--- dataset_info: features: - name: audio dtype: audio: sampling_rate: 16000 - name: transcription dtype: string - name: description dtype: string splits: - name: train num_bytes: 3646765464.0 num_examples: 841 download_size: 3470985112 dataset_size: 3646765464.0 configs: - config_name: default data_files: - split: train path: data/train-* ---
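The audio feature above is decoded at a fixed 16 kHz sampling rate, so clip duration follows directly from the sample count. A small sketch; the array here is synthetic, not taken from the dataset:

```python
SAMPLING_RATE = 16_000  # from the card's `sampling_rate: 16000`

def duration_seconds(samples) -> float:
    """Duration of a decoded audio array at the card's sampling rate."""
    return len(samples) / SAMPLING_RATE

# A synthetic 2-second clip of silence stands in for a real example.
clip = [0.0] * 32_000
print(duration_seconds(clip))  # 2.0
```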
presencesw/multinli_entailment
--- dataset_info: features: - name: gold_label dtype: string - name: anchor dtype: string - name: positive dtype: string - name: negative dtype: string splits: - name: train num_bytes: 69566860 num_examples: 274829 - name: dev_matched num_bytes: 1918017 num_examples: 9815 - name: dev_mismatched num_bytes: 2033699 num_examples: 9832 download_size: 30820933 dataset_size: 73518576 configs: - config_name: default data_files: - split: train path: data/train-* - split: dev_matched path: data/dev_matched-* - split: dev_mismatched path: data/dev_mismatched-* ---
Horeknad/komi-russian-parallel-corpora
--- license: cc-by-4.0 task_categories: - translation language: - ru - kv size_categories: - 10K<n<100K annotations_creators: - found tags: - text source_datasets: - Millet porridge by Ivan Toropov (adaptation) - Komi media library (http://videocorpora.ru/) - news from the website of the Komi administration (https://rkomi.ru/) --- # Source Datasets <ol> <li>news from the website of the Komi administration (https://rkomi.ru/)</li> <li>Komi media library (http://videocorpora.ru/)</li> <li>Millet porridge by Ivan Toropov (adaptation)</li> </ol> # Authors Shilova Nadezhda<br> Chernousov Georgy
cestwc/SG-subzone-poi-sentiment_1
--- configs: - config_name: default data_files: - split: train path: data/train-* dataset_info: features: - name: local_created_at dtype: string - name: id dtype: int64 - name: text dtype: string - name: source dtype: string - name: truncated dtype: bool - name: in_reply_to_status_id dtype: float32 - name: in_reply_to_user_id dtype: float32 - name: user_id dtype: int64 - name: user_name dtype: string - name: user_screen_name dtype: string - name: user_location dtype: string - name: user_url dtype: string - name: user_verified dtype: bool - name: user_default_profile dtype: bool - name: user_description dtype: string - name: user_followers_count dtype: int64 - name: user_friends_count dtype: int64 - name: user_listed_count dtype: int64 - name: user_favourites_count dtype: int64 - name: user_statuses_count dtype: int64 - name: local_user_created_at dtype: string - name: place_id dtype: string - name: place_url dtype: string - name: place_place_type dtype: string - name: place_name dtype: string - name: place_country_code dtype: string - name: place_bounding_box_type dtype: string - name: place_bounding_box_coordinates dtype: string - name: is_quote_status dtype: bool - name: retweet_count dtype: int64 - name: favorite_count dtype: int64 - name: entities_hashtags dtype: string - name: entities_urls dtype: string - name: entities_symbols dtype: string - name: entities_user_mentions dtype: string - name: favorited dtype: bool - name: retweeted dtype: bool - name: possibly_sensitive dtype: bool - name: lang dtype: string - name: latitude dtype: float32 - name: longitude dtype: float32 - name: year_created_at dtype: int64 - name: month_created_at dtype: int64 - name: day_created_at dtype: int64 - name: weekday_created_at dtype: int64 - name: hour_created_at dtype: int64 - name: minute_created_at dtype: int64 - name: year_user_created_at dtype: int64 - name: month_user_created_at dtype: int64 - name: day_user_created_at dtype: int64 - name: weekday_user_created_at dtype: 
int64 - name: hour_user_created_at dtype: int64 - name: minute_user_created_at dtype: int64 - name: subzone dtype: string - name: planning_area dtype: string - name: poi_flag dtype: float32 - name: poi_id dtype: string - name: poi_dist dtype: float32 - name: poi_latitude dtype: float32 - name: poi_longitude dtype: float32 - name: poi_name dtype: string - name: poi_type dtype: string - name: poi_cate2 dtype: string - name: poi_cate3 dtype: string - name: clean_text dtype: string - name: joy_score dtype: float32 - name: trust_score dtype: float32 - name: positive_score dtype: float32 - name: sadness_score dtype: float32 - name: disgust_score dtype: float32 - name: anger_score dtype: float32 - name: anticipation_score dtype: float32 - name: negative_score dtype: float32 - name: fear_score dtype: float32 - name: surprise_score dtype: float32 - name: words dtype: string - name: polarity_score dtype: float32 - name: manual_label_1 dtype: int64 - name: T0_q1 dtype: int64 - name: bart_mnli dtype: float32 - name: T0_q2 dtype: int64 - name: num_keywords dtype: int64 - name: preprocess-1 dtype: string - name: preprocess-2 dtype: string - name: llama dtype: int64 - name: clabel dtype: bool splits: - name: train num_bytes: 1597795154 num_examples: 1025135 download_size: 490565616 dataset_size: 1597795154 --- # Dataset Card for "SG-subzone-poi-sentiment_1" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
anthony-wss/librispeech_asr-audiodec_encodec_24k
--- configs: - config_name: default data_files: - split: train.clean.360 path: data/train.clean.360-* - split: train.other.500 path: data/train.other.500-* dataset_info: features: - name: text dtype: string - name: id dtype: string - name: unit sequence: sequence: int64 splits: - name: train.clean.360 num_bytes: 1070603220 num_examples: 104014 - name: train.other.500 num_bytes: 1462474737 num_examples: 148688 download_size: 406727746 dataset_size: 2533077957 --- # Dataset Card for "librispeech_asr-audiodec_encodec_24k" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
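The `unit` feature above is a nested sequence of integer codes (multiple streams per utterance). A hedged sketch of flattening such a nested sequence by simple concatenation; the toy values are made up, and this card does not say how the streams are meant to be combined:

```python
def flatten_units(unit) -> list[int]:
    """Concatenate the nested integer streams into one flat list of codes."""
    return [code for stream in unit for code in stream]

# Toy nested sequence standing in for a real `unit` value.
unit = [[3, 14, 15], [92, 65, 35]]
print(flatten_units(unit))  # [3, 14, 15, 92, 65, 35]
```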
sdg416826/test
--- dataset_info: features: - name: text dtype: string splits: - name: train num_bytes: 1655208 num_examples: 1000 download_size: 966969 dataset_size: 1655208 ---
open-llm-leaderboard/details_cloudyu__TomGrc_FusionNet_34Bx2_MoE_v0.1_full_linear_DPO
--- pretty_name: Evaluation run of cloudyu/TomGrc_FusionNet_34Bx2_MoE_v0.1_full_linear_DPO dataset_summary: "Dataset automatically created during the evaluation run of model\ \ [cloudyu/TomGrc_FusionNet_34Bx2_MoE_v0.1_full_linear_DPO](https://huggingface.co/cloudyu/TomGrc_FusionNet_34Bx2_MoE_v0.1_full_linear_DPO)\ \ on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\ \nThe dataset is composed of 63 configuration, each one coresponding to one of the\ \ evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be\ \ found as a specific split in each configuration, the split being named using the\ \ timestamp of the run.The \"train\" split is always pointing to the latest results.\n\ \nAn additional configuration \"results\" store all the aggregated results of the\ \ run (and is used to compute and display the aggregated metrics on the [Open LLM\ \ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\ \nTo load the details from a run, you can for instance do the following:\n```python\n\ from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_cloudyu__TomGrc_FusionNet_34Bx2_MoE_v0.1_full_linear_DPO\"\ ,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\ These are the [latest results from run 2024-02-05T13:57:06.982400](https://huggingface.co/datasets/open-llm-leaderboard/details_cloudyu__TomGrc_FusionNet_34Bx2_MoE_v0.1_full_linear_DPO/blob/main/results_2024-02-05T13-57-06.982400.json)(note\ \ that their might be results for other tasks in the repos if successive evals didn't\ \ cover the same tasks. 
You find each in the results and the \"latest\" split for\ \ each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.7649892778549832,\n\ \ \"acc_stderr\": 0.02823313368050758,\n \"acc_norm\": 0.7681511495490131,\n\ \ \"acc_norm_stderr\": 0.028777527908042073,\n \"mc1\": 0.5458996328029376,\n\ \ \"mc1_stderr\": 0.017429593091323522,\n \"mc2\": 0.7131962651033679,\n\ \ \"mc2_stderr\": 0.014139525056193024\n },\n \"harness|arc:challenge|25\"\ : {\n \"acc\": 0.7167235494880546,\n \"acc_stderr\": 0.013167478735134575,\n\ \ \"acc_norm\": 0.7406143344709898,\n \"acc_norm_stderr\": 0.012808273573927097\n\ \ },\n \"harness|hellaswag|10\": {\n \"acc\": 0.6703843855805617,\n\ \ \"acc_stderr\": 0.004691128722535485,\n \"acc_norm\": 0.8666600278828919,\n\ \ \"acc_norm_stderr\": 0.003392470498816845\n },\n \"harness|hendrycksTest-abstract_algebra|5\"\ : {\n \"acc\": 0.49,\n \"acc_stderr\": 0.05024183937956912,\n \ \ \"acc_norm\": 0.49,\n \"acc_norm_stderr\": 0.05024183937956912\n \ \ },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.7555555555555555,\n\ \ \"acc_stderr\": 0.03712537833614866,\n \"acc_norm\": 0.7555555555555555,\n\ \ \"acc_norm_stderr\": 0.03712537833614866\n },\n \"harness|hendrycksTest-astronomy|5\"\ : {\n \"acc\": 0.875,\n \"acc_stderr\": 0.026913523521537846,\n \ \ \"acc_norm\": 0.875,\n \"acc_norm_stderr\": 0.026913523521537846\n\ \ },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.77,\n\ \ \"acc_stderr\": 0.04229525846816505,\n \"acc_norm\": 0.77,\n \ \ \"acc_norm_stderr\": 0.04229525846816505\n },\n \"harness|hendrycksTest-clinical_knowledge|5\"\ : {\n \"acc\": 0.8037735849056604,\n \"acc_stderr\": 0.024442388131100813,\n\ \ \"acc_norm\": 0.8037735849056604,\n \"acc_norm_stderr\": 0.024442388131100813\n\ \ },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.9027777777777778,\n\ \ \"acc_stderr\": 0.024774516250440182,\n \"acc_norm\": 0.9027777777777778,\n\ \ \"acc_norm_stderr\": 0.024774516250440182\n },\n 
\"harness|hendrycksTest-college_chemistry|5\"\ : {\n \"acc\": 0.53,\n \"acc_stderr\": 0.05016135580465919,\n \ \ \"acc_norm\": 0.53,\n \"acc_norm_stderr\": 0.05016135580465919\n \ \ },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\"\ : 0.61,\n \"acc_stderr\": 0.049020713000019756,\n \"acc_norm\": 0.61,\n\ \ \"acc_norm_stderr\": 0.049020713000019756\n },\n \"harness|hendrycksTest-college_mathematics|5\"\ : {\n \"acc\": 0.43,\n \"acc_stderr\": 0.049756985195624284,\n \ \ \"acc_norm\": 0.43,\n \"acc_norm_stderr\": 0.049756985195624284\n \ \ },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.7109826589595376,\n\ \ \"acc_stderr\": 0.034564257450869995,\n \"acc_norm\": 0.7109826589595376,\n\ \ \"acc_norm_stderr\": 0.034564257450869995\n },\n \"harness|hendrycksTest-college_physics|5\"\ : {\n \"acc\": 0.5196078431372549,\n \"acc_stderr\": 0.04971358884367406,\n\ \ \"acc_norm\": 0.5196078431372549,\n \"acc_norm_stderr\": 0.04971358884367406\n\ \ },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\":\ \ 0.79,\n \"acc_stderr\": 0.04093601807403326,\n \"acc_norm\": 0.79,\n\ \ \"acc_norm_stderr\": 0.04093601807403326\n },\n \"harness|hendrycksTest-conceptual_physics|5\"\ : {\n \"acc\": 0.7574468085106383,\n \"acc_stderr\": 0.028020226271200217,\n\ \ \"acc_norm\": 0.7574468085106383,\n \"acc_norm_stderr\": 0.028020226271200217\n\ \ },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.5964912280701754,\n\ \ \"acc_stderr\": 0.04615186962583707,\n \"acc_norm\": 0.5964912280701754,\n\ \ \"acc_norm_stderr\": 0.04615186962583707\n },\n \"harness|hendrycksTest-electrical_engineering|5\"\ : {\n \"acc\": 0.7517241379310344,\n \"acc_stderr\": 0.036001056927277696,\n\ \ \"acc_norm\": 0.7517241379310344,\n \"acc_norm_stderr\": 0.036001056927277696\n\ \ },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\"\ : 0.7486772486772487,\n \"acc_stderr\": 0.0223404823396439,\n \"acc_norm\"\ : 0.7486772486772487,\n 
\"acc_norm_stderr\": 0.0223404823396439\n },\n\ \ \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.5158730158730159,\n\ \ \"acc_stderr\": 0.044698818540726076,\n \"acc_norm\": 0.5158730158730159,\n\ \ \"acc_norm_stderr\": 0.044698818540726076\n },\n \"harness|hendrycksTest-global_facts|5\"\ : {\n \"acc\": 0.61,\n \"acc_stderr\": 0.04902071300001975,\n \ \ \"acc_norm\": 0.61,\n \"acc_norm_stderr\": 0.04902071300001975\n \ \ },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.9064516129032258,\n\ \ \"acc_stderr\": 0.016565754668270982,\n \"acc_norm\": 0.9064516129032258,\n\ \ \"acc_norm_stderr\": 0.016565754668270982\n },\n \"harness|hendrycksTest-high_school_chemistry|5\"\ : {\n \"acc\": 0.6699507389162561,\n \"acc_stderr\": 0.033085304262282574,\n\ \ \"acc_norm\": 0.6699507389162561,\n \"acc_norm_stderr\": 0.033085304262282574\n\ \ },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \ \ \"acc\": 0.77,\n \"acc_stderr\": 0.042295258468165044,\n \"acc_norm\"\ : 0.77,\n \"acc_norm_stderr\": 0.042295258468165044\n },\n \"harness|hendrycksTest-high_school_european_history|5\"\ : {\n \"acc\": 0.8848484848484849,\n \"acc_stderr\": 0.024925699798115344,\n\ \ \"acc_norm\": 0.8848484848484849,\n \"acc_norm_stderr\": 0.024925699798115344\n\ \ },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\"\ : 0.9343434343434344,\n \"acc_stderr\": 0.017646526677233335,\n \"\ acc_norm\": 0.9343434343434344,\n \"acc_norm_stderr\": 0.017646526677233335\n\ \ },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n\ \ \"acc\": 0.9740932642487047,\n \"acc_stderr\": 0.011464523356953162,\n\ \ \"acc_norm\": 0.9740932642487047,\n \"acc_norm_stderr\": 0.011464523356953162\n\ \ },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \ \ \"acc\": 0.8102564102564103,\n \"acc_stderr\": 0.019880165406588796,\n\ \ \"acc_norm\": 0.8102564102564103,\n \"acc_norm_stderr\": 0.019880165406588796\n\ \ },\n 
\"harness|hendrycksTest-high_school_mathematics|5\": {\n \"\ acc\": 0.45925925925925926,\n \"acc_stderr\": 0.030384169232350832,\n \ \ \"acc_norm\": 0.45925925925925926,\n \"acc_norm_stderr\": 0.030384169232350832\n\ \ },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \ \ \"acc\": 0.8445378151260504,\n \"acc_stderr\": 0.023536818625398897,\n\ \ \"acc_norm\": 0.8445378151260504,\n \"acc_norm_stderr\": 0.023536818625398897\n\ \ },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\"\ : 0.5165562913907285,\n \"acc_stderr\": 0.04080244185628972,\n \"\ acc_norm\": 0.5165562913907285,\n \"acc_norm_stderr\": 0.04080244185628972\n\ \ },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\"\ : 0.9229357798165138,\n \"acc_stderr\": 0.011434381698911096,\n \"\ acc_norm\": 0.9229357798165138,\n \"acc_norm_stderr\": 0.011434381698911096\n\ \ },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\"\ : 0.6620370370370371,\n \"acc_stderr\": 0.03225941352631295,\n \"\ acc_norm\": 0.6620370370370371,\n \"acc_norm_stderr\": 0.03225941352631295\n\ \ },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\"\ : 0.9264705882352942,\n \"acc_stderr\": 0.018318855850089678,\n \"\ acc_norm\": 0.9264705882352942,\n \"acc_norm_stderr\": 0.018318855850089678\n\ \ },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"\ acc\": 0.9113924050632911,\n \"acc_stderr\": 0.018498315206865384,\n \ \ \"acc_norm\": 0.9113924050632911,\n \"acc_norm_stderr\": 0.018498315206865384\n\ \ },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.8026905829596412,\n\ \ \"acc_stderr\": 0.02670985334496796,\n \"acc_norm\": 0.8026905829596412,\n\ \ \"acc_norm_stderr\": 0.02670985334496796\n },\n \"harness|hendrycksTest-human_sexuality|5\"\ : {\n \"acc\": 0.8702290076335878,\n \"acc_stderr\": 0.029473649496907065,\n\ \ \"acc_norm\": 0.8702290076335878,\n \"acc_norm_stderr\": 0.029473649496907065\n\ \ },\n 
\"harness|hendrycksTest-international_law|5\": {\n \"acc\":\ \ 0.8760330578512396,\n \"acc_stderr\": 0.030083098716035202,\n \"\ acc_norm\": 0.8760330578512396,\n \"acc_norm_stderr\": 0.030083098716035202\n\ \ },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.8981481481481481,\n\ \ \"acc_stderr\": 0.02923927267563275,\n \"acc_norm\": 0.8981481481481481,\n\ \ \"acc_norm_stderr\": 0.02923927267563275\n },\n \"harness|hendrycksTest-logical_fallacies|5\"\ : {\n \"acc\": 0.8711656441717791,\n \"acc_stderr\": 0.026321383198783674,\n\ \ \"acc_norm\": 0.8711656441717791,\n \"acc_norm_stderr\": 0.026321383198783674\n\ \ },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.5446428571428571,\n\ \ \"acc_stderr\": 0.04726835553719098,\n \"acc_norm\": 0.5446428571428571,\n\ \ \"acc_norm_stderr\": 0.04726835553719098\n },\n \"harness|hendrycksTest-management|5\"\ : {\n \"acc\": 0.8640776699029126,\n \"acc_stderr\": 0.0339329572976101,\n\ \ \"acc_norm\": 0.8640776699029126,\n \"acc_norm_stderr\": 0.0339329572976101\n\ \ },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.9444444444444444,\n\ \ \"acc_stderr\": 0.01500631280644693,\n \"acc_norm\": 0.9444444444444444,\n\ \ \"acc_norm_stderr\": 0.01500631280644693\n },\n \"harness|hendrycksTest-medical_genetics|5\"\ : {\n \"acc\": 0.86,\n \"acc_stderr\": 0.0348735088019777,\n \ \ \"acc_norm\": 0.86,\n \"acc_norm_stderr\": 0.0348735088019777\n },\n\ \ \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.9157088122605364,\n\ \ \"acc_stderr\": 0.009934966499513791,\n \"acc_norm\": 0.9157088122605364,\n\ \ \"acc_norm_stderr\": 0.009934966499513791\n },\n \"harness|hendrycksTest-moral_disputes|5\"\ : {\n \"acc\": 0.8323699421965318,\n \"acc_stderr\": 0.020110579919734847,\n\ \ \"acc_norm\": 0.8323699421965318,\n \"acc_norm_stderr\": 0.020110579919734847\n\ \ },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.8,\n\ \ \"acc_stderr\": 0.013378001241813072,\n \"acc_norm\": 0.8,\n \ \ 
\"acc_norm_stderr\": 0.013378001241813072\n },\n \"harness|hendrycksTest-nutrition|5\"\ : {\n \"acc\": 0.8431372549019608,\n \"acc_stderr\": 0.02082375883758091,\n\ \ \"acc_norm\": 0.8431372549019608,\n \"acc_norm_stderr\": 0.02082375883758091\n\ \ },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.8006430868167203,\n\ \ \"acc_stderr\": 0.022691033780549656,\n \"acc_norm\": 0.8006430868167203,\n\ \ \"acc_norm_stderr\": 0.022691033780549656\n },\n \"harness|hendrycksTest-prehistory|5\"\ : {\n \"acc\": 0.8672839506172839,\n \"acc_stderr\": 0.018877353839571842,\n\ \ \"acc_norm\": 0.8672839506172839,\n \"acc_norm_stderr\": 0.018877353839571842\n\ \ },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"\ acc\": 0.648936170212766,\n \"acc_stderr\": 0.028473501272963758,\n \ \ \"acc_norm\": 0.648936170212766,\n \"acc_norm_stderr\": 0.028473501272963758\n\ \ },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.5912646675358539,\n\ \ \"acc_stderr\": 0.01255570134670338,\n \"acc_norm\": 0.5912646675358539,\n\ \ \"acc_norm_stderr\": 0.01255570134670338\n },\n \"harness|hendrycksTest-professional_medicine|5\"\ : {\n \"acc\": 0.8382352941176471,\n \"acc_stderr\": 0.022368672562886747,\n\ \ \"acc_norm\": 0.8382352941176471,\n \"acc_norm_stderr\": 0.022368672562886747\n\ \ },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"\ acc\": 0.815359477124183,\n \"acc_stderr\": 0.015697029240757773,\n \ \ \"acc_norm\": 0.815359477124183,\n \"acc_norm_stderr\": 0.015697029240757773\n\ \ },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.7272727272727273,\n\ \ \"acc_stderr\": 0.04265792110940589,\n \"acc_norm\": 0.7272727272727273,\n\ \ \"acc_norm_stderr\": 0.04265792110940589\n },\n \"harness|hendrycksTest-security_studies|5\"\ : {\n \"acc\": 0.8489795918367347,\n \"acc_stderr\": 0.022923004094736847,\n\ \ \"acc_norm\": 0.8489795918367347,\n \"acc_norm_stderr\": 0.022923004094736847\n\ \ },\n \"harness|hendrycksTest-sociology|5\": 
{\n \"acc\": 0.9104477611940298,\n\ \ \"acc_stderr\": 0.02019067053502792,\n \"acc_norm\": 0.9104477611940298,\n\ \ \"acc_norm_stderr\": 0.02019067053502792\n },\n \"harness|hendrycksTest-us_foreign_policy|5\"\ : {\n \"acc\": 0.92,\n \"acc_stderr\": 0.0272659924344291,\n \ \ \"acc_norm\": 0.92,\n \"acc_norm_stderr\": 0.0272659924344291\n },\n\ \ \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.5843373493975904,\n\ \ \"acc_stderr\": 0.03836722176598053,\n \"acc_norm\": 0.5843373493975904,\n\ \ \"acc_norm_stderr\": 0.03836722176598053\n },\n \"harness|hendrycksTest-world_religions|5\"\ : {\n \"acc\": 0.8771929824561403,\n \"acc_stderr\": 0.02517298435015577,\n\ \ \"acc_norm\": 0.8771929824561403,\n \"acc_norm_stderr\": 0.02517298435015577\n\ \ },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.5458996328029376,\n\ \ \"mc1_stderr\": 0.017429593091323522,\n \"mc2\": 0.7131962651033679,\n\ \ \"mc2_stderr\": 0.014139525056193024\n },\n \"harness|winogrande|5\"\ : {\n \"acc\": 0.8342541436464088,\n \"acc_stderr\": 0.01045089954537063\n\ \ },\n \"harness|gsm8k|5\": {\n \"acc\": 0.7293404094010614,\n \ \ \"acc_stderr\": 0.012238245006183411\n }\n}\n```" repo_url: https://huggingface.co/cloudyu/TomGrc_FusionNet_34Bx2_MoE_v0.1_full_linear_DPO leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard point_of_contact: clementine@hf.co configs: - config_name: harness_arc_challenge_25 data_files: - split: 2024_02_05T13_57_06.982400 path: - '**/details_harness|arc:challenge|25_2024-02-05T13-57-06.982400.parquet' - split: latest path: - '**/details_harness|arc:challenge|25_2024-02-05T13-57-06.982400.parquet' - config_name: harness_gsm8k_5 data_files: - split: 2024_02_05T13_57_06.982400 path: - '**/details_harness|gsm8k|5_2024-02-05T13-57-06.982400.parquet' - split: latest path: - '**/details_harness|gsm8k|5_2024-02-05T13-57-06.982400.parquet' - config_name: harness_hellaswag_10 data_files: - split: 2024_02_05T13_57_06.982400 path: - 
'**/details_harness|hellaswag|10_2024-02-05T13-57-06.982400.parquet' - split: latest path: - '**/details_harness|hellaswag|10_2024-02-05T13-57-06.982400.parquet' - config_name: harness_hendrycksTest_5 data_files: - split: 2024_02_05T13_57_06.982400 path: - '**/details_harness|hendrycksTest-abstract_algebra|5_2024-02-05T13-57-06.982400.parquet' - '**/details_harness|hendrycksTest-anatomy|5_2024-02-05T13-57-06.982400.parquet' - '**/details_harness|hendrycksTest-astronomy|5_2024-02-05T13-57-06.982400.parquet' - '**/details_harness|hendrycksTest-business_ethics|5_2024-02-05T13-57-06.982400.parquet' - '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-02-05T13-57-06.982400.parquet' - '**/details_harness|hendrycksTest-college_biology|5_2024-02-05T13-57-06.982400.parquet' - '**/details_harness|hendrycksTest-college_chemistry|5_2024-02-05T13-57-06.982400.parquet' - '**/details_harness|hendrycksTest-college_computer_science|5_2024-02-05T13-57-06.982400.parquet' - '**/details_harness|hendrycksTest-college_mathematics|5_2024-02-05T13-57-06.982400.parquet' - '**/details_harness|hendrycksTest-college_medicine|5_2024-02-05T13-57-06.982400.parquet' - '**/details_harness|hendrycksTest-college_physics|5_2024-02-05T13-57-06.982400.parquet' - '**/details_harness|hendrycksTest-computer_security|5_2024-02-05T13-57-06.982400.parquet' - '**/details_harness|hendrycksTest-conceptual_physics|5_2024-02-05T13-57-06.982400.parquet' - '**/details_harness|hendrycksTest-econometrics|5_2024-02-05T13-57-06.982400.parquet' - '**/details_harness|hendrycksTest-electrical_engineering|5_2024-02-05T13-57-06.982400.parquet' - '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-02-05T13-57-06.982400.parquet' - '**/details_harness|hendrycksTest-formal_logic|5_2024-02-05T13-57-06.982400.parquet' - '**/details_harness|hendrycksTest-global_facts|5_2024-02-05T13-57-06.982400.parquet' - '**/details_harness|hendrycksTest-high_school_biology|5_2024-02-05T13-57-06.982400.parquet' - 
'**/details_harness|hendrycksTest-high_school_chemistry|5_2024-02-05T13-57-06.982400.parquet' - '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-02-05T13-57-06.982400.parquet' - '**/details_harness|hendrycksTest-high_school_european_history|5_2024-02-05T13-57-06.982400.parquet' - '**/details_harness|hendrycksTest-high_school_geography|5_2024-02-05T13-57-06.982400.parquet' - '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-02-05T13-57-06.982400.parquet' - '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-02-05T13-57-06.982400.parquet' - '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-02-05T13-57-06.982400.parquet' - '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-02-05T13-57-06.982400.parquet' - '**/details_harness|hendrycksTest-high_school_physics|5_2024-02-05T13-57-06.982400.parquet' - '**/details_harness|hendrycksTest-high_school_psychology|5_2024-02-05T13-57-06.982400.parquet' - '**/details_harness|hendrycksTest-high_school_statistics|5_2024-02-05T13-57-06.982400.parquet' - '**/details_harness|hendrycksTest-high_school_us_history|5_2024-02-05T13-57-06.982400.parquet' - '**/details_harness|hendrycksTest-high_school_world_history|5_2024-02-05T13-57-06.982400.parquet' - '**/details_harness|hendrycksTest-human_aging|5_2024-02-05T13-57-06.982400.parquet' - '**/details_harness|hendrycksTest-human_sexuality|5_2024-02-05T13-57-06.982400.parquet' - '**/details_harness|hendrycksTest-international_law|5_2024-02-05T13-57-06.982400.parquet' - '**/details_harness|hendrycksTest-jurisprudence|5_2024-02-05T13-57-06.982400.parquet' - '**/details_harness|hendrycksTest-logical_fallacies|5_2024-02-05T13-57-06.982400.parquet' - '**/details_harness|hendrycksTest-machine_learning|5_2024-02-05T13-57-06.982400.parquet' - '**/details_harness|hendrycksTest-management|5_2024-02-05T13-57-06.982400.parquet' - '**/details_harness|hendrycksTest-marketing|5_2024-02-05T13-57-06.982400.parquet' 
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-02-05T13-57-06.982400.parquet' - '**/details_harness|hendrycksTest-miscellaneous|5_2024-02-05T13-57-06.982400.parquet' - '**/details_harness|hendrycksTest-moral_disputes|5_2024-02-05T13-57-06.982400.parquet' - '**/details_harness|hendrycksTest-moral_scenarios|5_2024-02-05T13-57-06.982400.parquet' - '**/details_harness|hendrycksTest-nutrition|5_2024-02-05T13-57-06.982400.parquet' - '**/details_harness|hendrycksTest-philosophy|5_2024-02-05T13-57-06.982400.parquet' - '**/details_harness|hendrycksTest-prehistory|5_2024-02-05T13-57-06.982400.parquet' - '**/details_harness|hendrycksTest-professional_accounting|5_2024-02-05T13-57-06.982400.parquet' - '**/details_harness|hendrycksTest-professional_law|5_2024-02-05T13-57-06.982400.parquet' - '**/details_harness|hendrycksTest-professional_medicine|5_2024-02-05T13-57-06.982400.parquet' - '**/details_harness|hendrycksTest-professional_psychology|5_2024-02-05T13-57-06.982400.parquet' - '**/details_harness|hendrycksTest-public_relations|5_2024-02-05T13-57-06.982400.parquet' - '**/details_harness|hendrycksTest-security_studies|5_2024-02-05T13-57-06.982400.parquet' - '**/details_harness|hendrycksTest-sociology|5_2024-02-05T13-57-06.982400.parquet' - '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-02-05T13-57-06.982400.parquet' - '**/details_harness|hendrycksTest-virology|5_2024-02-05T13-57-06.982400.parquet' - '**/details_harness|hendrycksTest-world_religions|5_2024-02-05T13-57-06.982400.parquet' - split: latest path: - '**/details_harness|hendrycksTest-abstract_algebra|5_2024-02-05T13-57-06.982400.parquet' - '**/details_harness|hendrycksTest-anatomy|5_2024-02-05T13-57-06.982400.parquet' - '**/details_harness|hendrycksTest-astronomy|5_2024-02-05T13-57-06.982400.parquet' - '**/details_harness|hendrycksTest-business_ethics|5_2024-02-05T13-57-06.982400.parquet' - '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-02-05T13-57-06.982400.parquet' - 
'**/details_harness|hendrycksTest-college_biology|5_2024-02-05T13-57-06.982400.parquet' - '**/details_harness|hendrycksTest-college_chemistry|5_2024-02-05T13-57-06.982400.parquet' - '**/details_harness|hendrycksTest-college_computer_science|5_2024-02-05T13-57-06.982400.parquet' - '**/details_harness|hendrycksTest-college_mathematics|5_2024-02-05T13-57-06.982400.parquet' - '**/details_harness|hendrycksTest-college_medicine|5_2024-02-05T13-57-06.982400.parquet' - '**/details_harness|hendrycksTest-college_physics|5_2024-02-05T13-57-06.982400.parquet' - '**/details_harness|hendrycksTest-computer_security|5_2024-02-05T13-57-06.982400.parquet' - '**/details_harness|hendrycksTest-conceptual_physics|5_2024-02-05T13-57-06.982400.parquet' - '**/details_harness|hendrycksTest-econometrics|5_2024-02-05T13-57-06.982400.parquet' - '**/details_harness|hendrycksTest-electrical_engineering|5_2024-02-05T13-57-06.982400.parquet' - '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-02-05T13-57-06.982400.parquet' - '**/details_harness|hendrycksTest-formal_logic|5_2024-02-05T13-57-06.982400.parquet' - '**/details_harness|hendrycksTest-global_facts|5_2024-02-05T13-57-06.982400.parquet' - '**/details_harness|hendrycksTest-high_school_biology|5_2024-02-05T13-57-06.982400.parquet' - '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-02-05T13-57-06.982400.parquet' - '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-02-05T13-57-06.982400.parquet' - '**/details_harness|hendrycksTest-high_school_european_history|5_2024-02-05T13-57-06.982400.parquet' - '**/details_harness|hendrycksTest-high_school_geography|5_2024-02-05T13-57-06.982400.parquet' - '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-02-05T13-57-06.982400.parquet' - '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-02-05T13-57-06.982400.parquet' - '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-02-05T13-57-06.982400.parquet' - 
'**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-02-05T13-57-06.982400.parquet' - '**/details_harness|hendrycksTest-high_school_physics|5_2024-02-05T13-57-06.982400.parquet' - '**/details_harness|hendrycksTest-high_school_psychology|5_2024-02-05T13-57-06.982400.parquet' - '**/details_harness|hendrycksTest-high_school_statistics|5_2024-02-05T13-57-06.982400.parquet' - '**/details_harness|hendrycksTest-high_school_us_history|5_2024-02-05T13-57-06.982400.parquet' - '**/details_harness|hendrycksTest-high_school_world_history|5_2024-02-05T13-57-06.982400.parquet' - '**/details_harness|hendrycksTest-human_aging|5_2024-02-05T13-57-06.982400.parquet' - '**/details_harness|hendrycksTest-human_sexuality|5_2024-02-05T13-57-06.982400.parquet' - '**/details_harness|hendrycksTest-international_law|5_2024-02-05T13-57-06.982400.parquet' - '**/details_harness|hendrycksTest-jurisprudence|5_2024-02-05T13-57-06.982400.parquet' - '**/details_harness|hendrycksTest-logical_fallacies|5_2024-02-05T13-57-06.982400.parquet' - '**/details_harness|hendrycksTest-machine_learning|5_2024-02-05T13-57-06.982400.parquet' - '**/details_harness|hendrycksTest-management|5_2024-02-05T13-57-06.982400.parquet' - '**/details_harness|hendrycksTest-marketing|5_2024-02-05T13-57-06.982400.parquet' - '**/details_harness|hendrycksTest-medical_genetics|5_2024-02-05T13-57-06.982400.parquet' - '**/details_harness|hendrycksTest-miscellaneous|5_2024-02-05T13-57-06.982400.parquet' - '**/details_harness|hendrycksTest-moral_disputes|5_2024-02-05T13-57-06.982400.parquet' - '**/details_harness|hendrycksTest-moral_scenarios|5_2024-02-05T13-57-06.982400.parquet' - '**/details_harness|hendrycksTest-nutrition|5_2024-02-05T13-57-06.982400.parquet' - '**/details_harness|hendrycksTest-philosophy|5_2024-02-05T13-57-06.982400.parquet' - '**/details_harness|hendrycksTest-prehistory|5_2024-02-05T13-57-06.982400.parquet' - 
'**/details_harness|hendrycksTest-professional_accounting|5_2024-02-05T13-57-06.982400.parquet' - '**/details_harness|hendrycksTest-professional_law|5_2024-02-05T13-57-06.982400.parquet' - '**/details_harness|hendrycksTest-professional_medicine|5_2024-02-05T13-57-06.982400.parquet' - '**/details_harness|hendrycksTest-professional_psychology|5_2024-02-05T13-57-06.982400.parquet' - '**/details_harness|hendrycksTest-public_relations|5_2024-02-05T13-57-06.982400.parquet' - '**/details_harness|hendrycksTest-security_studies|5_2024-02-05T13-57-06.982400.parquet' - '**/details_harness|hendrycksTest-sociology|5_2024-02-05T13-57-06.982400.parquet' - '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-02-05T13-57-06.982400.parquet' - '**/details_harness|hendrycksTest-virology|5_2024-02-05T13-57-06.982400.parquet' - '**/details_harness|hendrycksTest-world_religions|5_2024-02-05T13-57-06.982400.parquet' - config_name: harness_hendrycksTest_abstract_algebra_5 data_files: - split: 2024_02_05T13_57_06.982400 path: - '**/details_harness|hendrycksTest-abstract_algebra|5_2024-02-05T13-57-06.982400.parquet' - split: latest path: - '**/details_harness|hendrycksTest-abstract_algebra|5_2024-02-05T13-57-06.982400.parquet' - config_name: harness_hendrycksTest_anatomy_5 data_files: - split: 2024_02_05T13_57_06.982400 path: - '**/details_harness|hendrycksTest-anatomy|5_2024-02-05T13-57-06.982400.parquet' - split: latest path: - '**/details_harness|hendrycksTest-anatomy|5_2024-02-05T13-57-06.982400.parquet' - config_name: harness_hendrycksTest_astronomy_5 data_files: - split: 2024_02_05T13_57_06.982400 path: - '**/details_harness|hendrycksTest-astronomy|5_2024-02-05T13-57-06.982400.parquet' - split: latest path: - '**/details_harness|hendrycksTest-astronomy|5_2024-02-05T13-57-06.982400.parquet' - config_name: harness_hendrycksTest_business_ethics_5 data_files: - split: 2024_02_05T13_57_06.982400 path: - 
'**/details_harness|hendrycksTest-business_ethics|5_2024-02-05T13-57-06.982400.parquet' - split: latest path: - '**/details_harness|hendrycksTest-business_ethics|5_2024-02-05T13-57-06.982400.parquet' - config_name: harness_hendrycksTest_clinical_knowledge_5 data_files: - split: 2024_02_05T13_57_06.982400 path: - '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-02-05T13-57-06.982400.parquet' - split: latest path: - '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-02-05T13-57-06.982400.parquet' - config_name: harness_hendrycksTest_college_biology_5 data_files: - split: 2024_02_05T13_57_06.982400 path: - '**/details_harness|hendrycksTest-college_biology|5_2024-02-05T13-57-06.982400.parquet' - split: latest path: - '**/details_harness|hendrycksTest-college_biology|5_2024-02-05T13-57-06.982400.parquet' - config_name: harness_hendrycksTest_college_chemistry_5 data_files: - split: 2024_02_05T13_57_06.982400 path: - '**/details_harness|hendrycksTest-college_chemistry|5_2024-02-05T13-57-06.982400.parquet' - split: latest path: - '**/details_harness|hendrycksTest-college_chemistry|5_2024-02-05T13-57-06.982400.parquet' - config_name: harness_hendrycksTest_college_computer_science_5 data_files: - split: 2024_02_05T13_57_06.982400 path: - '**/details_harness|hendrycksTest-college_computer_science|5_2024-02-05T13-57-06.982400.parquet' - split: latest path: - '**/details_harness|hendrycksTest-college_computer_science|5_2024-02-05T13-57-06.982400.parquet' - config_name: harness_hendrycksTest_college_mathematics_5 data_files: - split: 2024_02_05T13_57_06.982400 path: - '**/details_harness|hendrycksTest-college_mathematics|5_2024-02-05T13-57-06.982400.parquet' - split: latest path: - '**/details_harness|hendrycksTest-college_mathematics|5_2024-02-05T13-57-06.982400.parquet' - config_name: harness_hendrycksTest_college_medicine_5 data_files: - split: 2024_02_05T13_57_06.982400 path: - 
'**/details_harness|hendrycksTest-college_medicine|5_2024-02-05T13-57-06.982400.parquet' - split: latest path: - '**/details_harness|hendrycksTest-college_medicine|5_2024-02-05T13-57-06.982400.parquet' - config_name: harness_hendrycksTest_college_physics_5 data_files: - split: 2024_02_05T13_57_06.982400 path: - '**/details_harness|hendrycksTest-college_physics|5_2024-02-05T13-57-06.982400.parquet' - split: latest path: - '**/details_harness|hendrycksTest-college_physics|5_2024-02-05T13-57-06.982400.parquet' - config_name: harness_hendrycksTest_computer_security_5 data_files: - split: 2024_02_05T13_57_06.982400 path: - '**/details_harness|hendrycksTest-computer_security|5_2024-02-05T13-57-06.982400.parquet' - split: latest path: - '**/details_harness|hendrycksTest-computer_security|5_2024-02-05T13-57-06.982400.parquet' - config_name: harness_hendrycksTest_conceptual_physics_5 data_files: - split: 2024_02_05T13_57_06.982400 path: - '**/details_harness|hendrycksTest-conceptual_physics|5_2024-02-05T13-57-06.982400.parquet' - split: latest path: - '**/details_harness|hendrycksTest-conceptual_physics|5_2024-02-05T13-57-06.982400.parquet' - config_name: harness_hendrycksTest_econometrics_5 data_files: - split: 2024_02_05T13_57_06.982400 path: - '**/details_harness|hendrycksTest-econometrics|5_2024-02-05T13-57-06.982400.parquet' - split: latest path: - '**/details_harness|hendrycksTest-econometrics|5_2024-02-05T13-57-06.982400.parquet' - config_name: harness_hendrycksTest_electrical_engineering_5 data_files: - split: 2024_02_05T13_57_06.982400 path: - '**/details_harness|hendrycksTest-electrical_engineering|5_2024-02-05T13-57-06.982400.parquet' - split: latest path: - '**/details_harness|hendrycksTest-electrical_engineering|5_2024-02-05T13-57-06.982400.parquet' - config_name: harness_hendrycksTest_elementary_mathematics_5 data_files: - split: 2024_02_05T13_57_06.982400 path: - '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-02-05T13-57-06.982400.parquet' - 
split: latest path: - '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-02-05T13-57-06.982400.parquet' - config_name: harness_hendrycksTest_formal_logic_5 data_files: - split: 2024_02_05T13_57_06.982400 path: - '**/details_harness|hendrycksTest-formal_logic|5_2024-02-05T13-57-06.982400.parquet' - split: latest path: - '**/details_harness|hendrycksTest-formal_logic|5_2024-02-05T13-57-06.982400.parquet' - config_name: harness_hendrycksTest_global_facts_5 data_files: - split: 2024_02_05T13_57_06.982400 path: - '**/details_harness|hendrycksTest-global_facts|5_2024-02-05T13-57-06.982400.parquet' - split: latest path: - '**/details_harness|hendrycksTest-global_facts|5_2024-02-05T13-57-06.982400.parquet' - config_name: harness_hendrycksTest_high_school_biology_5 data_files: - split: 2024_02_05T13_57_06.982400 path: - '**/details_harness|hendrycksTest-high_school_biology|5_2024-02-05T13-57-06.982400.parquet' - split: latest path: - '**/details_harness|hendrycksTest-high_school_biology|5_2024-02-05T13-57-06.982400.parquet' - config_name: harness_hendrycksTest_high_school_chemistry_5 data_files: - split: 2024_02_05T13_57_06.982400 path: - '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-02-05T13-57-06.982400.parquet' - split: latest path: - '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-02-05T13-57-06.982400.parquet' - config_name: harness_hendrycksTest_high_school_computer_science_5 data_files: - split: 2024_02_05T13_57_06.982400 path: - '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-02-05T13-57-06.982400.parquet' - split: latest path: - '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-02-05T13-57-06.982400.parquet' - config_name: harness_hendrycksTest_high_school_european_history_5 data_files: - split: 2024_02_05T13_57_06.982400 path: - '**/details_harness|hendrycksTest-high_school_european_history|5_2024-02-05T13-57-06.982400.parquet' - split: latest path: - 
'**/details_harness|hendrycksTest-high_school_european_history|5_2024-02-05T13-57-06.982400.parquet' - config_name: harness_hendrycksTest_high_school_geography_5 data_files: - split: 2024_02_05T13_57_06.982400 path: - '**/details_harness|hendrycksTest-high_school_geography|5_2024-02-05T13-57-06.982400.parquet' - split: latest path: - '**/details_harness|hendrycksTest-high_school_geography|5_2024-02-05T13-57-06.982400.parquet' - config_name: harness_hendrycksTest_high_school_government_and_politics_5 data_files: - split: 2024_02_05T13_57_06.982400 path: - '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-02-05T13-57-06.982400.parquet' - split: latest path: - '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-02-05T13-57-06.982400.parquet' - config_name: harness_hendrycksTest_high_school_macroeconomics_5 data_files: - split: 2024_02_05T13_57_06.982400 path: - '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-02-05T13-57-06.982400.parquet' - split: latest path: - '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-02-05T13-57-06.982400.parquet' - config_name: harness_hendrycksTest_high_school_mathematics_5 data_files: - split: 2024_02_05T13_57_06.982400 path: - '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-02-05T13-57-06.982400.parquet' - split: latest path: - '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-02-05T13-57-06.982400.parquet' - config_name: harness_hendrycksTest_high_school_microeconomics_5 data_files: - split: 2024_02_05T13_57_06.982400 path: - '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-02-05T13-57-06.982400.parquet' - split: latest path: - '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-02-05T13-57-06.982400.parquet' - config_name: harness_hendrycksTest_high_school_physics_5 data_files: - split: 2024_02_05T13_57_06.982400 path: - 
'**/details_harness|hendrycksTest-high_school_physics|5_2024-02-05T13-57-06.982400.parquet' - split: latest path: - '**/details_harness|hendrycksTest-high_school_physics|5_2024-02-05T13-57-06.982400.parquet' - config_name: harness_hendrycksTest_high_school_psychology_5 data_files: - split: 2024_02_05T13_57_06.982400 path: - '**/details_harness|hendrycksTest-high_school_psychology|5_2024-02-05T13-57-06.982400.parquet' - split: latest path: - '**/details_harness|hendrycksTest-high_school_psychology|5_2024-02-05T13-57-06.982400.parquet' - config_name: harness_hendrycksTest_high_school_statistics_5 data_files: - split: 2024_02_05T13_57_06.982400 path: - '**/details_harness|hendrycksTest-high_school_statistics|5_2024-02-05T13-57-06.982400.parquet' - split: latest path: - '**/details_harness|hendrycksTest-high_school_statistics|5_2024-02-05T13-57-06.982400.parquet' - config_name: harness_hendrycksTest_high_school_us_history_5 data_files: - split: 2024_02_05T13_57_06.982400 path: - '**/details_harness|hendrycksTest-high_school_us_history|5_2024-02-05T13-57-06.982400.parquet' - split: latest path: - '**/details_harness|hendrycksTest-high_school_us_history|5_2024-02-05T13-57-06.982400.parquet' - config_name: harness_hendrycksTest_high_school_world_history_5 data_files: - split: 2024_02_05T13_57_06.982400 path: - '**/details_harness|hendrycksTest-high_school_world_history|5_2024-02-05T13-57-06.982400.parquet' - split: latest path: - '**/details_harness|hendrycksTest-high_school_world_history|5_2024-02-05T13-57-06.982400.parquet' - config_name: harness_hendrycksTest_human_aging_5 data_files: - split: 2024_02_05T13_57_06.982400 path: - '**/details_harness|hendrycksTest-human_aging|5_2024-02-05T13-57-06.982400.parquet' - split: latest path: - '**/details_harness|hendrycksTest-human_aging|5_2024-02-05T13-57-06.982400.parquet' - config_name: harness_hendrycksTest_human_sexuality_5 data_files: - split: 2024_02_05T13_57_06.982400 path: - 
'**/details_harness|hendrycksTest-human_sexuality|5_2024-02-05T13-57-06.982400.parquet' - split: latest path: - '**/details_harness|hendrycksTest-human_sexuality|5_2024-02-05T13-57-06.982400.parquet' - config_name: harness_hendrycksTest_international_law_5 data_files: - split: 2024_02_05T13_57_06.982400 path: - '**/details_harness|hendrycksTest-international_law|5_2024-02-05T13-57-06.982400.parquet' - split: latest path: - '**/details_harness|hendrycksTest-international_law|5_2024-02-05T13-57-06.982400.parquet' - config_name: harness_hendrycksTest_jurisprudence_5 data_files: - split: 2024_02_05T13_57_06.982400 path: - '**/details_harness|hendrycksTest-jurisprudence|5_2024-02-05T13-57-06.982400.parquet' - split: latest path: - '**/details_harness|hendrycksTest-jurisprudence|5_2024-02-05T13-57-06.982400.parquet' - config_name: harness_hendrycksTest_logical_fallacies_5 data_files: - split: 2024_02_05T13_57_06.982400 path: - '**/details_harness|hendrycksTest-logical_fallacies|5_2024-02-05T13-57-06.982400.parquet' - split: latest path: - '**/details_harness|hendrycksTest-logical_fallacies|5_2024-02-05T13-57-06.982400.parquet' - config_name: harness_hendrycksTest_machine_learning_5 data_files: - split: 2024_02_05T13_57_06.982400 path: - '**/details_harness|hendrycksTest-machine_learning|5_2024-02-05T13-57-06.982400.parquet' - split: latest path: - '**/details_harness|hendrycksTest-machine_learning|5_2024-02-05T13-57-06.982400.parquet' - config_name: harness_hendrycksTest_management_5 data_files: - split: 2024_02_05T13_57_06.982400 path: - '**/details_harness|hendrycksTest-management|5_2024-02-05T13-57-06.982400.parquet' - split: latest path: - '**/details_harness|hendrycksTest-management|5_2024-02-05T13-57-06.982400.parquet' - config_name: harness_hendrycksTest_marketing_5 data_files: - split: 2024_02_05T13_57_06.982400 path: - '**/details_harness|hendrycksTest-marketing|5_2024-02-05T13-57-06.982400.parquet' - split: latest path: - 
'**/details_harness|hendrycksTest-marketing|5_2024-02-05T13-57-06.982400.parquet' - config_name: harness_hendrycksTest_medical_genetics_5 data_files: - split: 2024_02_05T13_57_06.982400 path: - '**/details_harness|hendrycksTest-medical_genetics|5_2024-02-05T13-57-06.982400.parquet' - split: latest path: - '**/details_harness|hendrycksTest-medical_genetics|5_2024-02-05T13-57-06.982400.parquet' - config_name: harness_hendrycksTest_miscellaneous_5 data_files: - split: 2024_02_05T13_57_06.982400 path: - '**/details_harness|hendrycksTest-miscellaneous|5_2024-02-05T13-57-06.982400.parquet' - split: latest path: - '**/details_harness|hendrycksTest-miscellaneous|5_2024-02-05T13-57-06.982400.parquet' - config_name: harness_hendrycksTest_moral_disputes_5 data_files: - split: 2024_02_05T13_57_06.982400 path: - '**/details_harness|hendrycksTest-moral_disputes|5_2024-02-05T13-57-06.982400.parquet' - split: latest path: - '**/details_harness|hendrycksTest-moral_disputes|5_2024-02-05T13-57-06.982400.parquet' - config_name: harness_hendrycksTest_moral_scenarios_5 data_files: - split: 2024_02_05T13_57_06.982400 path: - '**/details_harness|hendrycksTest-moral_scenarios|5_2024-02-05T13-57-06.982400.parquet' - split: latest path: - '**/details_harness|hendrycksTest-moral_scenarios|5_2024-02-05T13-57-06.982400.parquet' - config_name: harness_hendrycksTest_nutrition_5 data_files: - split: 2024_02_05T13_57_06.982400 path: - '**/details_harness|hendrycksTest-nutrition|5_2024-02-05T13-57-06.982400.parquet' - split: latest path: - '**/details_harness|hendrycksTest-nutrition|5_2024-02-05T13-57-06.982400.parquet' - config_name: harness_hendrycksTest_philosophy_5 data_files: - split: 2024_02_05T13_57_06.982400 path: - '**/details_harness|hendrycksTest-philosophy|5_2024-02-05T13-57-06.982400.parquet' - split: latest path: - '**/details_harness|hendrycksTest-philosophy|5_2024-02-05T13-57-06.982400.parquet' - config_name: harness_hendrycksTest_prehistory_5 data_files: - split: 
2024_02_05T13_57_06.982400 path: - '**/details_harness|hendrycksTest-prehistory|5_2024-02-05T13-57-06.982400.parquet' - split: latest path: - '**/details_harness|hendrycksTest-prehistory|5_2024-02-05T13-57-06.982400.parquet' - config_name: harness_hendrycksTest_professional_accounting_5 data_files: - split: 2024_02_05T13_57_06.982400 path: - '**/details_harness|hendrycksTest-professional_accounting|5_2024-02-05T13-57-06.982400.parquet' - split: latest path: - '**/details_harness|hendrycksTest-professional_accounting|5_2024-02-05T13-57-06.982400.parquet' - config_name: harness_hendrycksTest_professional_law_5 data_files: - split: 2024_02_05T13_57_06.982400 path: - '**/details_harness|hendrycksTest-professional_law|5_2024-02-05T13-57-06.982400.parquet' - split: latest path: - '**/details_harness|hendrycksTest-professional_law|5_2024-02-05T13-57-06.982400.parquet' - config_name: harness_hendrycksTest_professional_medicine_5 data_files: - split: 2024_02_05T13_57_06.982400 path: - '**/details_harness|hendrycksTest-professional_medicine|5_2024-02-05T13-57-06.982400.parquet' - split: latest path: - '**/details_harness|hendrycksTest-professional_medicine|5_2024-02-05T13-57-06.982400.parquet' - config_name: harness_hendrycksTest_professional_psychology_5 data_files: - split: 2024_02_05T13_57_06.982400 path: - '**/details_harness|hendrycksTest-professional_psychology|5_2024-02-05T13-57-06.982400.parquet' - split: latest path: - '**/details_harness|hendrycksTest-professional_psychology|5_2024-02-05T13-57-06.982400.parquet' - config_name: harness_hendrycksTest_public_relations_5 data_files: - split: 2024_02_05T13_57_06.982400 path: - '**/details_harness|hendrycksTest-public_relations|5_2024-02-05T13-57-06.982400.parquet' - split: latest path: - '**/details_harness|hendrycksTest-public_relations|5_2024-02-05T13-57-06.982400.parquet' - config_name: harness_hendrycksTest_security_studies_5 data_files: - split: 2024_02_05T13_57_06.982400 path: - 
'**/details_harness|hendrycksTest-security_studies|5_2024-02-05T13-57-06.982400.parquet' - split: latest path: - '**/details_harness|hendrycksTest-security_studies|5_2024-02-05T13-57-06.982400.parquet' - config_name: harness_hendrycksTest_sociology_5 data_files: - split: 2024_02_05T13_57_06.982400 path: - '**/details_harness|hendrycksTest-sociology|5_2024-02-05T13-57-06.982400.parquet' - split: latest path: - '**/details_harness|hendrycksTest-sociology|5_2024-02-05T13-57-06.982400.parquet' - config_name: harness_hendrycksTest_us_foreign_policy_5 data_files: - split: 2024_02_05T13_57_06.982400 path: - '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-02-05T13-57-06.982400.parquet' - split: latest path: - '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-02-05T13-57-06.982400.parquet' - config_name: harness_hendrycksTest_virology_5 data_files: - split: 2024_02_05T13_57_06.982400 path: - '**/details_harness|hendrycksTest-virology|5_2024-02-05T13-57-06.982400.parquet' - split: latest path: - '**/details_harness|hendrycksTest-virology|5_2024-02-05T13-57-06.982400.parquet' - config_name: harness_hendrycksTest_world_religions_5 data_files: - split: 2024_02_05T13_57_06.982400 path: - '**/details_harness|hendrycksTest-world_religions|5_2024-02-05T13-57-06.982400.parquet' - split: latest path: - '**/details_harness|hendrycksTest-world_religions|5_2024-02-05T13-57-06.982400.parquet' - config_name: harness_truthfulqa_mc_0 data_files: - split: 2024_02_05T13_57_06.982400 path: - '**/details_harness|truthfulqa:mc|0_2024-02-05T13-57-06.982400.parquet' - split: latest path: - '**/details_harness|truthfulqa:mc|0_2024-02-05T13-57-06.982400.parquet' - config_name: harness_winogrande_5 data_files: - split: 2024_02_05T13_57_06.982400 path: - '**/details_harness|winogrande|5_2024-02-05T13-57-06.982400.parquet' - split: latest path: - '**/details_harness|winogrande|5_2024-02-05T13-57-06.982400.parquet' - config_name: results data_files: - split: 
2024_02_05T13_57_06.982400 path: - results_2024-02-05T13-57-06.982400.parquet - split: latest path: - results_2024-02-05T13-57-06.982400.parquet --- # Dataset Card for Evaluation run of cloudyu/TomGrc_FusionNet_34Bx2_MoE_v0.1_full_linear_DPO <!-- Provide a quick summary of the dataset. --> Dataset automatically created during the evaluation run of model [cloudyu/TomGrc_FusionNet_34Bx2_MoE_v0.1_full_linear_DPO](https://huggingface.co/cloudyu/TomGrc_FusionNet_34Bx2_MoE_v0.1_full_linear_DPO) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard). The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks. The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results. An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)). To load the details from a run, you can for instance do the following: ```python from datasets import load_dataset data = load_dataset("open-llm-leaderboard/details_cloudyu__TomGrc_FusionNet_34Bx2_MoE_v0.1_full_linear_DPO", "harness_winogrande_5", split="train") ``` ## Latest results These are the [latest results from run 2024-02-05T13:57:06.982400](https://huggingface.co/datasets/open-llm-leaderboard/details_cloudyu__TomGrc_FusionNet_34Bx2_MoE_v0.1_full_linear_DPO/blob/main/results_2024-02-05T13-57-06.982400.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks.
You can find each in the results and the "latest" split for each eval): ```python { "all": { "acc": 0.7649892778549832, "acc_stderr": 0.02823313368050758, "acc_norm": 0.7681511495490131, "acc_norm_stderr": 0.028777527908042073, "mc1": 0.5458996328029376, "mc1_stderr": 0.017429593091323522, "mc2": 0.7131962651033679, "mc2_stderr": 0.014139525056193024 }, "harness|arc:challenge|25": { "acc": 0.7167235494880546, "acc_stderr": 0.013167478735134575, "acc_norm": 0.7406143344709898, "acc_norm_stderr": 0.012808273573927097 }, "harness|hellaswag|10": { "acc": 0.6703843855805617, "acc_stderr": 0.004691128722535485, "acc_norm": 0.8666600278828919, "acc_norm_stderr": 0.003392470498816845 }, "harness|hendrycksTest-abstract_algebra|5": { "acc": 0.49, "acc_stderr": 0.05024183937956912, "acc_norm": 0.49, "acc_norm_stderr": 0.05024183937956912 }, "harness|hendrycksTest-anatomy|5": { "acc": 0.7555555555555555, "acc_stderr": 0.03712537833614866, "acc_norm": 0.7555555555555555, "acc_norm_stderr": 0.03712537833614866 }, "harness|hendrycksTest-astronomy|5": { "acc": 0.875, "acc_stderr": 0.026913523521537846, "acc_norm": 0.875, "acc_norm_stderr": 0.026913523521537846 }, "harness|hendrycksTest-business_ethics|5": { "acc": 0.77, "acc_stderr": 0.04229525846816505, "acc_norm": 0.77, "acc_norm_stderr": 0.04229525846816505 }, "harness|hendrycksTest-clinical_knowledge|5": { "acc": 0.8037735849056604, "acc_stderr": 0.024442388131100813, "acc_norm": 0.8037735849056604, "acc_norm_stderr": 0.024442388131100813 }, "harness|hendrycksTest-college_biology|5": { "acc": 0.9027777777777778, "acc_stderr": 0.024774516250440182, "acc_norm": 0.9027777777777778, "acc_norm_stderr": 0.024774516250440182 }, "harness|hendrycksTest-college_chemistry|5": { "acc": 0.53, "acc_stderr": 0.05016135580465919, "acc_norm": 0.53, "acc_norm_stderr": 0.05016135580465919 }, "harness|hendrycksTest-college_computer_science|5": { "acc": 0.61, "acc_stderr": 0.049020713000019756, "acc_norm": 0.61, "acc_norm_stderr": 0.049020713000019756
}, "harness|hendrycksTest-college_mathematics|5": { "acc": 0.43, "acc_stderr": 0.049756985195624284, "acc_norm": 0.43, "acc_norm_stderr": 0.049756985195624284 }, "harness|hendrycksTest-college_medicine|5": { "acc": 0.7109826589595376, "acc_stderr": 0.034564257450869995, "acc_norm": 0.7109826589595376, "acc_norm_stderr": 0.034564257450869995 }, "harness|hendrycksTest-college_physics|5": { "acc": 0.5196078431372549, "acc_stderr": 0.04971358884367406, "acc_norm": 0.5196078431372549, "acc_norm_stderr": 0.04971358884367406 }, "harness|hendrycksTest-computer_security|5": { "acc": 0.79, "acc_stderr": 0.04093601807403326, "acc_norm": 0.79, "acc_norm_stderr": 0.04093601807403326 }, "harness|hendrycksTest-conceptual_physics|5": { "acc": 0.7574468085106383, "acc_stderr": 0.028020226271200217, "acc_norm": 0.7574468085106383, "acc_norm_stderr": 0.028020226271200217 }, "harness|hendrycksTest-econometrics|5": { "acc": 0.5964912280701754, "acc_stderr": 0.04615186962583707, "acc_norm": 0.5964912280701754, "acc_norm_stderr": 0.04615186962583707 }, "harness|hendrycksTest-electrical_engineering|5": { "acc": 0.7517241379310344, "acc_stderr": 0.036001056927277696, "acc_norm": 0.7517241379310344, "acc_norm_stderr": 0.036001056927277696 }, "harness|hendrycksTest-elementary_mathematics|5": { "acc": 0.7486772486772487, "acc_stderr": 0.0223404823396439, "acc_norm": 0.7486772486772487, "acc_norm_stderr": 0.0223404823396439 }, "harness|hendrycksTest-formal_logic|5": { "acc": 0.5158730158730159, "acc_stderr": 0.044698818540726076, "acc_norm": 0.5158730158730159, "acc_norm_stderr": 0.044698818540726076 }, "harness|hendrycksTest-global_facts|5": { "acc": 0.61, "acc_stderr": 0.04902071300001975, "acc_norm": 0.61, "acc_norm_stderr": 0.04902071300001975 }, "harness|hendrycksTest-high_school_biology|5": { "acc": 0.9064516129032258, "acc_stderr": 0.016565754668270982, "acc_norm": 0.9064516129032258, "acc_norm_stderr": 0.016565754668270982 }, "harness|hendrycksTest-high_school_chemistry|5": { "acc": 
0.6699507389162561, "acc_stderr": 0.033085304262282574, "acc_norm": 0.6699507389162561, "acc_norm_stderr": 0.033085304262282574 }, "harness|hendrycksTest-high_school_computer_science|5": { "acc": 0.77, "acc_stderr": 0.042295258468165044, "acc_norm": 0.77, "acc_norm_stderr": 0.042295258468165044 }, "harness|hendrycksTest-high_school_european_history|5": { "acc": 0.8848484848484849, "acc_stderr": 0.024925699798115344, "acc_norm": 0.8848484848484849, "acc_norm_stderr": 0.024925699798115344 }, "harness|hendrycksTest-high_school_geography|5": { "acc": 0.9343434343434344, "acc_stderr": 0.017646526677233335, "acc_norm": 0.9343434343434344, "acc_norm_stderr": 0.017646526677233335 }, "harness|hendrycksTest-high_school_government_and_politics|5": { "acc": 0.9740932642487047, "acc_stderr": 0.011464523356953162, "acc_norm": 0.9740932642487047, "acc_norm_stderr": 0.011464523356953162 }, "harness|hendrycksTest-high_school_macroeconomics|5": { "acc": 0.8102564102564103, "acc_stderr": 0.019880165406588796, "acc_norm": 0.8102564102564103, "acc_norm_stderr": 0.019880165406588796 }, "harness|hendrycksTest-high_school_mathematics|5": { "acc": 0.45925925925925926, "acc_stderr": 0.030384169232350832, "acc_norm": 0.45925925925925926, "acc_norm_stderr": 0.030384169232350832 }, "harness|hendrycksTest-high_school_microeconomics|5": { "acc": 0.8445378151260504, "acc_stderr": 0.023536818625398897, "acc_norm": 0.8445378151260504, "acc_norm_stderr": 0.023536818625398897 }, "harness|hendrycksTest-high_school_physics|5": { "acc": 0.5165562913907285, "acc_stderr": 0.04080244185628972, "acc_norm": 0.5165562913907285, "acc_norm_stderr": 0.04080244185628972 }, "harness|hendrycksTest-high_school_psychology|5": { "acc": 0.9229357798165138, "acc_stderr": 0.011434381698911096, "acc_norm": 0.9229357798165138, "acc_norm_stderr": 0.011434381698911096 }, "harness|hendrycksTest-high_school_statistics|5": { "acc": 0.6620370370370371, "acc_stderr": 0.03225941352631295, "acc_norm": 0.6620370370370371, 
"acc_norm_stderr": 0.03225941352631295 }, "harness|hendrycksTest-high_school_us_history|5": { "acc": 0.9264705882352942, "acc_stderr": 0.018318855850089678, "acc_norm": 0.9264705882352942, "acc_norm_stderr": 0.018318855850089678 }, "harness|hendrycksTest-high_school_world_history|5": { "acc": 0.9113924050632911, "acc_stderr": 0.018498315206865384, "acc_norm": 0.9113924050632911, "acc_norm_stderr": 0.018498315206865384 }, "harness|hendrycksTest-human_aging|5": { "acc": 0.8026905829596412, "acc_stderr": 0.02670985334496796, "acc_norm": 0.8026905829596412, "acc_norm_stderr": 0.02670985334496796 }, "harness|hendrycksTest-human_sexuality|5": { "acc": 0.8702290076335878, "acc_stderr": 0.029473649496907065, "acc_norm": 0.8702290076335878, "acc_norm_stderr": 0.029473649496907065 }, "harness|hendrycksTest-international_law|5": { "acc": 0.8760330578512396, "acc_stderr": 0.030083098716035202, "acc_norm": 0.8760330578512396, "acc_norm_stderr": 0.030083098716035202 }, "harness|hendrycksTest-jurisprudence|5": { "acc": 0.8981481481481481, "acc_stderr": 0.02923927267563275, "acc_norm": 0.8981481481481481, "acc_norm_stderr": 0.02923927267563275 }, "harness|hendrycksTest-logical_fallacies|5": { "acc": 0.8711656441717791, "acc_stderr": 0.026321383198783674, "acc_norm": 0.8711656441717791, "acc_norm_stderr": 0.026321383198783674 }, "harness|hendrycksTest-machine_learning|5": { "acc": 0.5446428571428571, "acc_stderr": 0.04726835553719098, "acc_norm": 0.5446428571428571, "acc_norm_stderr": 0.04726835553719098 }, "harness|hendrycksTest-management|5": { "acc": 0.8640776699029126, "acc_stderr": 0.0339329572976101, "acc_norm": 0.8640776699029126, "acc_norm_stderr": 0.0339329572976101 }, "harness|hendrycksTest-marketing|5": { "acc": 0.9444444444444444, "acc_stderr": 0.01500631280644693, "acc_norm": 0.9444444444444444, "acc_norm_stderr": 0.01500631280644693 }, "harness|hendrycksTest-medical_genetics|5": { "acc": 0.86, "acc_stderr": 0.0348735088019777, "acc_norm": 0.86, "acc_norm_stderr": 
0.0348735088019777 }, "harness|hendrycksTest-miscellaneous|5": { "acc": 0.9157088122605364, "acc_stderr": 0.009934966499513791, "acc_norm": 0.9157088122605364, "acc_norm_stderr": 0.009934966499513791 }, "harness|hendrycksTest-moral_disputes|5": { "acc": 0.8323699421965318, "acc_stderr": 0.020110579919734847, "acc_norm": 0.8323699421965318, "acc_norm_stderr": 0.020110579919734847 }, "harness|hendrycksTest-moral_scenarios|5": { "acc": 0.8, "acc_stderr": 0.013378001241813072, "acc_norm": 0.8, "acc_norm_stderr": 0.013378001241813072 }, "harness|hendrycksTest-nutrition|5": { "acc": 0.8431372549019608, "acc_stderr": 0.02082375883758091, "acc_norm": 0.8431372549019608, "acc_norm_stderr": 0.02082375883758091 }, "harness|hendrycksTest-philosophy|5": { "acc": 0.8006430868167203, "acc_stderr": 0.022691033780549656, "acc_norm": 0.8006430868167203, "acc_norm_stderr": 0.022691033780549656 }, "harness|hendrycksTest-prehistory|5": { "acc": 0.8672839506172839, "acc_stderr": 0.018877353839571842, "acc_norm": 0.8672839506172839, "acc_norm_stderr": 0.018877353839571842 }, "harness|hendrycksTest-professional_accounting|5": { "acc": 0.648936170212766, "acc_stderr": 0.028473501272963758, "acc_norm": 0.648936170212766, "acc_norm_stderr": 0.028473501272963758 }, "harness|hendrycksTest-professional_law|5": { "acc": 0.5912646675358539, "acc_stderr": 0.01255570134670338, "acc_norm": 0.5912646675358539, "acc_norm_stderr": 0.01255570134670338 }, "harness|hendrycksTest-professional_medicine|5": { "acc": 0.8382352941176471, "acc_stderr": 0.022368672562886747, "acc_norm": 0.8382352941176471, "acc_norm_stderr": 0.022368672562886747 }, "harness|hendrycksTest-professional_psychology|5": { "acc": 0.815359477124183, "acc_stderr": 0.015697029240757773, "acc_norm": 0.815359477124183, "acc_norm_stderr": 0.015697029240757773 }, "harness|hendrycksTest-public_relations|5": { "acc": 0.7272727272727273, "acc_stderr": 0.04265792110940589, "acc_norm": 0.7272727272727273, "acc_norm_stderr": 0.04265792110940589 }, 
    "harness|hendrycksTest-security_studies|5": {
        "acc": 0.8489795918367347,
        "acc_stderr": 0.022923004094736847,
        "acc_norm": 0.8489795918367347,
        "acc_norm_stderr": 0.022923004094736847
    },
    "harness|hendrycksTest-sociology|5": {
        "acc": 0.9104477611940298,
        "acc_stderr": 0.02019067053502792,
        "acc_norm": 0.9104477611940298,
        "acc_norm_stderr": 0.02019067053502792
    },
    "harness|hendrycksTest-us_foreign_policy|5": {
        "acc": 0.92,
        "acc_stderr": 0.0272659924344291,
        "acc_norm": 0.92,
        "acc_norm_stderr": 0.0272659924344291
    },
    "harness|hendrycksTest-virology|5": {
        "acc": 0.5843373493975904,
        "acc_stderr": 0.03836722176598053,
        "acc_norm": 0.5843373493975904,
        "acc_norm_stderr": 0.03836722176598053
    },
    "harness|hendrycksTest-world_religions|5": {
        "acc": 0.8771929824561403,
        "acc_stderr": 0.02517298435015577,
        "acc_norm": 0.8771929824561403,
        "acc_norm_stderr": 0.02517298435015577
    },
    "harness|truthfulqa:mc|0": {
        "mc1": 0.5458996328029376,
        "mc1_stderr": 0.017429593091323522,
        "mc2": 0.7131962651033679,
        "mc2_stderr": 0.014139525056193024
    },
    "harness|winogrande|5": {
        "acc": 0.8342541436464088,
        "acc_stderr": 0.01045089954537063
    },
    "harness|gsm8k|5": {
        "acc": 0.7293404094010614,
        "acc_stderr": 0.012238245006183411
    }
}
```

## Dataset Details

### Dataset Description

<!-- Provide a longer summary of what this dataset is. -->

- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]

### Dataset Sources [optional]

<!-- Provide the basic links for the dataset. -->

- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]

## Uses

<!-- Address questions around how the dataset is intended to be used. -->

### Direct Use

<!-- This section describes suitable use cases for the dataset. -->

[More Information Needed]

### Out-of-Scope Use

<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->

[More Information Needed]

## Dataset Structure

<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->

[More Information Needed]

## Dataset Creation

### Curation Rationale

<!-- Motivation for the creation of this dataset. -->

[More Information Needed]

### Source Data

<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->

#### Data Collection and Processing

<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->

[More Information Needed]

#### Who are the source data producers?

<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->

[More Information Needed]

### Annotations [optional]

<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->

#### Annotation process

<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->

[More Information Needed]

#### Who are the annotators?

<!-- This section describes the people or systems who created the annotations. -->

[More Information Needed]

#### Personal and Sensitive Information

<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->

[More Information Needed]

## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

[More Information Needed]

### Recommendations

<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->

Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.

## Citation [optional]

<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->

**BibTeX:**

[More Information Needed]

**APA:**

[More Information Needed]

## Glossary [optional]

<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->

[More Information Needed]

## More Information [optional]

[More Information Needed]

## Dataset Card Authors [optional]

[More Information Needed]

## Dataset Card Contact

[More Information Needed]
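## Working with the results programmatically

The per-task entries in the results JSON above follow the `lm-evaluation-harness` naming scheme (`harness|<task>|<num_fewshot>`). A minimal sketch of aggregating such a dictionary in Python: the four entries below are copied verbatim from the results in this card, while the `average_metric` helper is illustrative only and not part of any published library.

```python
# Excerpt of the results JSON above (values copied verbatim from this card).
results = {
    "harness|arc:challenge|25": {"acc": 0.7167235494880546},
    "harness|hellaswag|10": {"acc": 0.6703843855805617},
    "harness|winogrande|5": {"acc": 0.8342541436464088},
    "harness|gsm8k|5": {"acc": 0.7293404094010614},
}

def average_metric(results: dict, metric: str = "acc") -> float:
    """Average `metric` over every task that reports it (illustrative helper)."""
    values = [scores[metric] for scores in results.values() if metric in scores]
    return sum(values) / len(values)

print(f"mean acc over {len(results)} tasks: {average_metric(results):.4f}")
```

The same pattern extends to `acc_norm`, `mc1`, or `mc2`: tasks that do not report the requested metric (e.g. `gsm8k` has no `acc_norm`) are simply skipped by the `if metric in scores` guard.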