Listing schema (per-column type and observed string-length / value range):

| column | type | min | max |
|---|---|---|---|
| id | string (length) | 2 | 115 |
| lastModified | string (length) | 24 | 24 |
| tags | list | — | — |
| author | string (length) | 2 | 42 |
| description | string (length) | 0 | 68.7k |
| citation | string (length) | 0 | 10.7k |
| cardData | null | — | — |
| likes | int64 | 0 | 3.55k |
| downloads | int64 | 0 | 10.1M |
| card | string (length) | 0 | 1.01M |
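The min/max string-length figures above can be reproduced from raw records with a few lines of standard-library Python. This is a sketch under assumptions: the helper name and the two sample rows are illustrative, not part of the listing itself.

```python
def column_stats(records, field):
    """Return (min, max) string length of a field across records, skipping nulls."""
    lengths = [len(r[field]) for r in records if r.get(field) is not None]
    return (min(lengths), max(lengths)) if lengths else None

# Two illustrative rows shaped like the listing's records.
rows = [
    {"id": "autoevaluate/autoeval-eval-squad-plain_text-b8dfc7-29110144933",
     "lastModified": "2023-10-04T14:00:25.000Z", "card": "Entry not found"},
    {"id": "tessiw/test",
     "lastModified": "2023-10-04T14:07:26.000Z", "card": "Entry not found"},
]

# lastModified is always a fixed-width ISO-8601 timestamp, hence (24, 24).
print(column_stats(rows, "lastModified"))  # (24, 24)
```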
Four AutoTrain Evaluator prediction repositories for summarization on `zeroshot/twitter-financial-news-topic` (config `zeroshot--twitter-financial-news-topic`, split `train`, metrics `['bertscore']`, col_mapping `text → text`, `target → label`). Common metadata: author `autoevaluate`; tags `autotrain`, `evaluation`, `region:us`; description, citation, cardData: null; likes: 0; downloads: 0. Each card states that the predictions were generated by [AutoTrain](https://huggingface.co/autotrain), points to the [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator) for new evaluation jobs, and thanks [@peterdevathala](https://huggingface.co/peterdevathala) for evaluating the model.

| id | lastModified | model |
|---|---|---|
| autoevaluate/autoeval-eval-zeroshot__twitter-financial-news-topic-zeroshot__twitte-178919-28982144928 | 2023-10-04T14:04:01.000Z | phpaiola/ptt5-base-summ-temario |
| autoevaluate/autoeval-eval-zeroshot__twitter-financial-news-topic-zeroshot__twitte-178919-28982144929 | 2023-10-04T14:56:56.000Z | facebook/bart-large-cnn |
| autoevaluate/autoeval-eval-zeroshot__twitter-financial-news-topic-zeroshot__twitte-e590a9-28983144930 | 2023-10-04T14:04:12.000Z | phpaiola/ptt5-base-summ-temario |
| autoevaluate/autoeval-eval-zeroshot__twitter-financial-news-topic-zeroshot__twitte-e590a9-28983144931 | 2023-10-04T14:54:14.000Z | facebook/bart-large-cnn |
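Each card in this listing is a markdown document that opens with a YAML front-matter block between `---` markers (the `eval_info` metadata) followed by the markdown body. The block below is a sketch of splitting the two apart with the standard library only; the function name and splitting logic are illustrative assumptions, not part of AutoTrain or `huggingface_hub`.

```python
def split_front_matter(card: str):
    """Split a dataset card into (front_matter, body).

    Cards open with '---', hold YAML metadata, then a second '---' before
    the markdown body. Returns ('', card) when no front matter is present.
    """
    if not card.startswith("---"):
        return "", card
    head, sep, body = card[3:].partition("---")
    if not sep:  # opening marker never closed
        return "", card
    return head.strip(), body.strip()

card = """---
type: predictions
tags:
- autotrain
- evaluation
---
# Dataset Card for AutoTrain Evaluator
"""
meta, body = split_front_matter(card)
print(meta.splitlines()[0])   # type: predictions
print(body.splitlines()[0])   # # Dataset Card for AutoTrain Evaluator
```

From here a YAML parser (e.g. PyYAML) could turn `meta` into a dict to read `eval_info.model` and friends.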
Nine evaluation repositories whose card reads "Entry not found" (author `autoevaluate`; tags `region:us`; description, citation, cardData: null; likes: 0; downloads: 0):

| id | lastModified |
|---|---|
| autoevaluate/autoeval-eval-cnn_dailymail-3.0.0-8cee52-28984144932 | 2023-10-04T14:00:15.000Z |
| autoevaluate/autoeval-eval-squad-plain_text-b8dfc7-29110144933 | 2023-10-04T14:00:25.000Z |
| autoevaluate/autoeval-eval-squad-plain_text-6f0cb9-29355144934 | 2023-10-04T14:00:35.000Z |
| autoevaluate/autoeval-eval-squad-plain_text-6f0cb9-29355144935 | 2023-10-04T14:00:44.000Z |
| autoevaluate/autoeval-eval-squad-plain_text-6f0cb9-29355144936 | 2023-10-04T14:00:52.000Z |
| autoevaluate/autoeval-eval-squad-plain_text-18d944-29628144937 | 2023-10-04T14:01:02.000Z |
| autoevaluate/autoeval-eval-banking77-default-6f5a44-29679144938 | 2023-10-04T14:01:12.000Z |
| autoevaluate/autoeval-eval-conll2003-conll2003-b8d238-30069144939 | 2023-10-04T14:01:20.000Z |
| autoevaluate/autoeval-eval-id_clickbait-annotated-d5b1f7-30309144940 | 2023-10-04T14:01:32.000Z |
Two AutoTrain Evaluator prediction repositories for multi-class text classification on `tweet_eval` (col_mapping `text → text`, `target → label`; author `autoevaluate`; tags `autotrain`, `evaluation`, `region:us`; description, citation, cardData: null; likes: 0; downloads: 0). Each card notes the predictions were generated by [AutoTrain](https://huggingface.co/autotrain) and links the [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator) for new jobs.

- **autoevaluate/autoeval-eval-tweet_eval-sentiment-be35d9-30474144941** (2023-10-04T14:02:43.000Z) — model cardiffnlp/twitter-roberta-base-sentiment-latest; config `sentiment`, split `test`, metrics `[]`; evaluated by [@ericbugin](https://huggingface.co/ericbugin).
- **autoevaluate/autoeval-eval-tweet_eval-offensive-736f56-30712144944** (2023-10-04T14:03:15.000Z) — model cardiffnlp/twitter-roberta-base-2021-124m-offensive; config `offensive`, split `train`, metrics `['bertscore']`; evaluated by [@fabeelaalirawther@gmail.com](https://huggingface.co/fabeelaalirawther@gmail.com).
Four evaluation repositories whose card reads "Entry not found" (author `autoevaluate`; tags `region:us`; description, citation, cardData: null; likes: 0; downloads: 0):

| id | lastModified |
|---|---|
| autoevaluate/autoeval-eval-tweet_eval-offensive-736f56-30712144945 | 2023-10-04T14:02:21.000Z |
| autoevaluate/autoeval-eval-tweet_eval-offensive-736f56-30712144946 | 2023-10-04T14:02:29.000Z |
| autoevaluate/autoeval-eval-tweet_eval-offensive-736f56-30712144948 | 2023-10-04T14:02:49.000Z |
| autoevaluate/autoeval-eval-ade_corpus_v2-Ade_corpus_v2_classification-5cc293-30676144942 | 2023-10-04T14:03:07.000Z |
One AutoTrain Evaluator prediction repository for multi-class text classification on `tweet_eval` (config `offensive`, split `train`, metrics `['bertscore']`, col_mapping `text → text`, `target → label`; author `autoevaluate`; tags `autotrain`, `evaluation`, `region:us`; description, citation, cardData: null; likes: 0; downloads: 0):

- **autoevaluate/autoeval-eval-tweet_eval-offensive-93ad2d-30713144950** (2023-10-04T14:04:12.000Z) — model cardiffnlp/twitter-roberta-base-2021-124m-offensive; predictions generated by [AutoTrain](https://huggingface.co/autotrain), new jobs via the [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator); evaluated by [@fabeelaalirawther@gmail.com](https://huggingface.co/fabeelaalirawther@gmail.com).
Two evaluation repositories whose card reads "Entry not found" (author `autoevaluate`; tags `region:us`; description, citation, cardData: null; likes: 0; downloads: 0):

| id | lastModified |
|---|---|
| autoevaluate/autoeval-eval-tweet_eval-offensive-93ad2d-30713144951 | 2023-10-04T14:03:21.000Z |
| autoevaluate/autoeval-eval-tweet_eval-offensive-93ad2d-30713144952 | 2023-10-04T14:03:31.000Z |
Two AutoTrain Evaluator prediction repositories for multi-class text classification on `tweet_eval` (config `offensive`, split `train`, metrics `['bertscore']`, col_mapping `text → text`, `target → label`; author `autoevaluate`; tags `autotrain`, `evaluation`, `region:us`; description, citation, cardData: null; likes: 0; downloads: 0). Both cards credit [@fabeelaalirawther@gmail.com](https://huggingface.co/fabeelaalirawther@gmail.com) and link the [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).

- **autoevaluate/autoeval-eval-tweet_eval-offensive-93ad2d-30713144953** (2023-10-04T14:04:45.000Z) — model elozano/tweet_offensive_eval.
- **autoevaluate/autoeval-eval-tweet_eval-offensive-736f56-30712144947** (2023-10-04T14:04:47.000Z) — model elozano/tweet_offensive_eval.
One evaluation repository whose card reads "Entry not found" (author `autoevaluate`; tags `region:us`; description, citation, cardData: null; likes: 0; downloads: 0):

| id | lastModified |
|---|---|
| autoevaluate/autoeval-eval-tweet_eval-offensive-93ad2d-30713144954 | 2023-10-04T14:03:50.000Z |
Two AutoTrain Evaluator prediction repositories for multi-class text classification on `tweet_eval` (config `offensive`, split `train`, metrics `['bertscore']`, col_mapping `text → text`, `target → label`; author `autoevaluate`; tags `autotrain`, `evaluation`, `region:us`; description, citation, cardData: null; likes: 0; downloads: 0). Both cards credit [@fabeelaalirawther@gmail.com](https://huggingface.co/fabeelaalirawther@gmail.com) and link the [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).

- **autoevaluate/autoeval-eval-tweet_eval-offensive-f58805-30720144955** (2023-10-04T14:05:09.000Z) — model cardiffnlp/roberta-base-offensive.
- **autoevaluate/autoeval-eval-tweet_eval-offensive-f58805-30720144956** (2023-10-04T14:05:10.000Z) — model cardiffnlp/twitter-roberta-base-2021-124m-offensive.
Two evaluation repositories whose card reads "Entry not found" (author `autoevaluate`; tags `region:us`; description, citation, cardData: null; likes: 0; downloads: 0):

| id | lastModified |
|---|---|
| autoevaluate/autoeval-eval-tweet_eval-offensive-f58805-30720144957 | 2023-10-04T14:04:15.000Z |
| autoevaluate/autoeval-eval-tweet_eval-offensive-f58805-30720144958 | 2023-10-04T14:04:29.000Z |
One AutoTrain Evaluator prediction repository for multi-class text classification on `tweet_eval` (config `offensive`, split `train`, metrics `['bertscore']`, col_mapping `text → text`, `target → label`; author `autoevaluate`; tags `autotrain`, `evaluation`, `region:us`; description, citation, cardData: null; likes: 0; downloads: 0):

- **autoevaluate/autoeval-eval-tweet_eval-offensive-f58805-30720144959** (2023-10-04T14:05:44.000Z) — model elozano/tweet_offensive_eval; predictions generated by [AutoTrain](https://huggingface.co/autotrain), new jobs via the [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator); evaluated by [@fabeelaalirawther@gmail.com](https://huggingface.co/fabeelaalirawther@gmail.com).
Five evaluation repositories whose card reads "Entry not found" (author `autoevaluate`; tags `region:us`; description, citation, cardData: null; likes: 0; downloads: 0):

| id | lastModified |
|---|---|
| autoevaluate/autoeval-eval-tweet_eval-offensive-f58805-30720144960 | 2023-10-04T14:04:50.000Z |
| autoevaluate/autoeval-eval-Jean-Baptiste__wikiner_fr-Jean-Baptiste__wikiner_fr-995435-30782144961 | 2023-10-04T14:04:59.000Z |
| autoevaluate/autoeval-eval-Jean-Baptiste__wikiner_fr-Jean-Baptiste__wikiner_fr-995435-30782144962 | 2023-10-04T14:05:07.000Z |
| autoevaluate/autoeval-eval-imdb-plain_text-d548d6-30846144963 | 2023-10-04T14:05:17.000Z |
| autoevaluate/autoeval-eval-zeroshot__twitter-financial-news-sentiment-zeroshot__tw-cfc4d6-30970144964 | 2023-10-04T14:05:27.000Z |
**tessiw/test** (2023-10-04T14:07:26.000Z; author `tessiw`; tags `region:us`; description, citation, cardData: null; likes: 0; downloads: 0). Card front matter: default config with data files `data/train2-*`; features `id`, `system_prompt`, `question`, `response` (all `string`); split `train2` with 3 examples and 1,754 bytes (download size 6,433 bytes; dataset size 1,754 bytes). Card body: "Dataset Card for \"test\"" with a [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) link.
Four evaluation repositories whose card reads "Entry not found" (author `autoevaluate`; tags `region:us`; description, citation, cardData: null; likes: 0; downloads: 0):

| id | lastModified |
|---|---|
| autoevaluate/autoeval-eval-squad_kor_v1-squad_kor_v1-773ead-31137144965 | 2023-10-04T14:05:36.000Z |
| autoevaluate/autoeval-eval-ade_corpus_v2-Ade_corpus_v2_classification-e5a500-31239144966 | 2023-10-04T14:05:47.000Z |
| autoevaluate/autoeval-eval-squad_it-default-8e27e4-31506144967 | 2023-10-04T14:05:56.000Z |
| autoevaluate/autoeval-eval-squad_it-default-626c41-31522144968 | 2023-10-04T14:06:05.000Z |
Six AutoTrain Evaluator prediction repositories for summarization on `cnn_dailymail` (config `3.0.0`, split `test`, col_mapping `text → article`, `target → highlights`; author `autoevaluate`; tags `autotrain`, `evaluation`, `region:us`; description, citation, cardData: null; likes: 0; downloads: 0). Each card notes the predictions were generated by [AutoTrain](https://huggingface.co/autotrain) and links the [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).

| id | lastModified | model | metrics | evaluated by |
|---|---|---|---|---|
| autoevaluate/autoeval-eval-cnn_dailymail-3.0.0-67ab09-31609144969 | 2023-10-04T14:11:55.000Z | 0ys/mt5-small-finetuned-amazon-en-es | [] | [@malar](https://huggingface.co/malar) |
| autoevaluate/autoeval-eval-cnn_dailymail-3.0.0-67ab09-31609144970 | 2023-10-04T14:16:23.000Z | ARTeLab/it5-summarization-ilpost | [] | [@malar](https://huggingface.co/malar) |
| autoevaluate/autoeval-eval-cnn_dailymail-3.0.0-37f310-31613144971 | 2023-10-04T14:12:03.000Z | 0ys/mt5-small-finetuned-amazon-en-es | ['rouge'] | [@malar](https://huggingface.co/malar) |
| autoevaluate/autoeval-eval-cnn_dailymail-3.0.0-37f310-31613144972 | 2023-10-04T14:16:47.000Z | ARTeLab/it5-summarization-ilpost | ['rouge'] | [@malar](https://huggingface.co/malar) |
| autoevaluate/autoeval-eval-cnn_dailymail-3.0.0-37f310-31613144973 | 2023-10-04T14:17:01.000Z | ARTeLab/it5-summarization-fanpage | ['rouge'] | [@malar](https://huggingface.co/malar) |
| autoevaluate/autoeval-eval-cnn_dailymail-3.0.0-e1b364-31627144974 | 2023-10-04T14:17:00.000Z | ARTeLab/it5-summarization-fanpage | [] | [@sr5434](https://huggingface.co/sr5434) |

One further AutoTrain Evaluator repository, for natural language inference:

- **autoevaluate/autoeval-eval-multi_nli-default-725a45-31703144975** (2023-10-04T15:14:29.000Z) — model HiTZ/A2T_RoBERTa_SMFA_ACE-arg; dataset `multi_nli`, config `default`, split `train`, metrics `[]`; col_mapping `text1 → premise`, `text2 → hypothesis`, `target → label`; evaluated by [@2552](https://huggingface.co/2552).
Twenty-four evaluation repositories whose card reads "Entry not found" (author `autoevaluate`; tags `region:us`; description, citation, cardData: null; likes: 0; downloads: 0):

| id | lastModified |
|---|---|
| autoevaluate/autoeval-eval-squad_it-default-23b92b-31766144976 | 2023-10-04T14:07:17.000Z |
| autoevaluate/autoeval-eval-squad-plain_text-e82217-31832144977 | 2023-10-04T14:07:27.000Z |
| autoevaluate/autoeval-eval-tner__bc5cdr-bc5cdr-01abad-31923144978 | 2023-10-04T14:07:36.000Z |
| autoevaluate/autoeval-eval-tner__bc5cdr-bc5cdr-01abad-31923144979 | 2023-10-04T14:07:45.000Z |
| autoevaluate/autoeval-eval-tner__bc5cdr-bc5cdr-01abad-31923144980 | 2023-10-04T14:07:54.000Z |
| autoevaluate/autoeval-eval-tner__bc5cdr-bc5cdr-01abad-31923144981 | 2023-10-04T14:08:06.000Z |
| autoevaluate/autoeval-eval-tner__bc5cdr-bc5cdr-01abad-31923144982 | 2023-10-04T14:08:15.000Z |
| autoevaluate/autoeval-eval-tner__bc5cdr-bc5cdr-01abad-31923144983 | 2023-10-04T14:08:23.000Z |
| autoevaluate/autoeval-eval-tner__bc5cdr-bc5cdr-01abad-31923144984 | 2023-10-04T14:08:34.000Z |
| autoevaluate/autoeval-eval-tner__bc5cdr-bc5cdr-01abad-31923144985 | 2023-10-04T14:08:45.000Z |
| autoevaluate/autoeval-eval-tner__bc5cdr-bc5cdr-01abad-31923144986 | 2023-10-04T14:08:55.000Z |
| autoevaluate/autoeval-eval-tner__bc5cdr-bc5cdr-01abad-31923144987 | 2023-10-04T14:09:05.000Z |
| autoevaluate/autoeval-eval-commanderstrife__jnlpba-jnlpba-cb558b-31925144988 | 2023-10-04T14:09:16.000Z |
| autoevaluate/autoeval-eval-commanderstrife__jnlpba-jnlpba-cb558b-31925144989 | 2023-10-04T14:09:25.000Z |
| autoevaluate/autoeval-eval-commanderstrife__jnlpba-jnlpba-cb558b-31925144990 | 2023-10-04T14:09:34.000Z |
| autoevaluate/autoeval-eval-commanderstrife__jnlpba-jnlpba-cb558b-31925144991 | 2023-10-04T14:09:45.000Z |
| autoevaluate/autoeval-eval-commanderstrife__jnlpba-jnlpba-cb558b-31925144992 | 2023-10-04T14:09:55.000Z |
| autoevaluate/autoeval-eval-commanderstrife__jnlpba-jnlpba-cb558b-31925144993 | 2023-10-04T14:10:04.000Z |
| autoevaluate/autoeval-eval-commanderstrife__jnlpba-jnlpba-cb558b-31925144994 | 2023-10-04T14:10:14.000Z |
| autoevaluate/autoeval-eval-commanderstrife__jnlpba-jnlpba-cb558b-31925144995 | 2023-10-04T14:10:25.000Z |
| autoevaluate/autoeval-eval-commanderstrife__jnlpba-jnlpba-cb558b-31925144996 | 2023-10-04T14:10:34.000Z |
| autoevaluate/autoeval-eval-commanderstrife__jnlpba-jnlpba-cb558b-31925144997 | 2023-10-04T14:10:43.000Z |
| autoevaluate/autoeval-eval-drAbreu__bc4chemd_ner-bc4chemd-aa2b75-31927144998 | 2023-10-04T14:10:51.000Z |
| autoevaluate/autoeval-eval-drAbreu__bc4chemd_ner-bc4chemd-aa2b75-31927144999 | 2023-10-04T14:11:01.000Z |
One AutoTrain Evaluator prediction repository for token classification (author `autoevaluate`; tags `autotrain`, `evaluation`, `region:us`; description, citation, cardData: null; likes: 0; downloads: 0):

- **autoevaluate/autoeval-eval-drAbreu__bc4chemd_ner-bc4chemd-aa2b75-31927145000** (2023-10-04T14:16:15.000Z) — eval_info task `entity_extraction`; model sschet/biobert_chemical_ner; dataset `drAbreu/bc4chemd_ner`, config `bc4chemd`, split `test`, metrics `[]`; col_mapping `tokens → tokens`, `tags → ner_tags`; predictions generated by [AutoTrain](https://huggingface.co/autotrain), new jobs via the [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator); evaluated by [@sschet](https://huggingface.co/sschet).
Ten evaluation repositories whose card reads "Entry not found" (author `autoevaluate`; tags `region:us`; description, citation, cardData: null; likes: 0; downloads: 0):

| id | lastModified |
|---|---|
| autoevaluate/autoeval-eval-drAbreu__bc4chemd_ner-bc4chemd-aa2b75-31927145001 | 2023-10-04T14:11:20.000Z |
| autoevaluate/autoeval-eval-drAbreu__bc4chemd_ner-bc4chemd-aa2b75-31927145002 | 2023-10-04T14:11:31.000Z |
| autoevaluate/autoeval-eval-drAbreu__bc4chemd_ner-bc4chemd-aa2b75-31927145003 | 2023-10-04T14:11:42.000Z |
| autoevaluate/autoeval-eval-drAbreu__bc4chemd_ner-bc4chemd-aa2b75-31927145004 | 2023-10-04T14:11:50.000Z |
| autoevaluate/autoeval-eval-drAbreu__bc4chemd_ner-bc4chemd-aa2b75-31927145005 | 2023-10-04T14:12:00.000Z |
| autoevaluate/autoeval-eval-drAbreu__bc4chemd_ner-bc4chemd-aa2b75-31927145006 | 2023-10-04T14:12:11.000Z |
| autoevaluate/autoeval-eval-drAbreu__bc4chemd_ner-bc4chemd-aa2b75-31927145007 | 2023-10-04T14:12:20.000Z |
| autoevaluate/autoeval-eval-cuad-default-1afd53-32124145008 | 2023-10-04T14:12:29.000Z |
| autoevaluate/autoeval-eval-squad_it-default-deec75-32207145009 | 2023-10-04T14:12:40.000Z |
| autoevaluate/autoeval-eval-billsum-default-bec98f-32334145010 | 2023-10-04T14:12:51.000Z |
One AutoTrain Evaluator prediction repository for summarization (author `autoevaluate`; tags `autotrain`, `evaluation`, `region:us`; description, citation, cardData: null; likes: 0; downloads: 0):

- **autoevaluate/autoeval-eval-billsum-default-bec98f-32334145011** (2023-10-04T14:28:44.000Z) — model PoseyATX/Moist-Pony; dataset `billsum`, config `default`, split `test`, metrics `[]`; col_mapping `text → text`, `target → summary`; predictions generated by [AutoTrain](https://huggingface.co/autotrain), new jobs via the [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator); evaluated by [@poseyatx](https://huggingface.co/poseyatx).
autoevaluate/autoeval-eval-ade_corpus_v2-Ade_corpus_v2_classification-b93f9f-32775145012
2023-10-04T14:13:09.000Z
[ "region:us" ]
autoevaluate
null
null
null
0
0
Entry not found
autoevaluate/autoeval-eval-amazon_polarity-amazon_polarity-d08caa-33166145013
2023-10-04T14:13:19.000Z
[ "region:us" ]
autoevaluate
null
null
null
0
0
Entry not found
autoevaluate/autoeval-eval-xsum-default-403a15-33262145014
2023-10-04T14:15:56.000Z
[ "autotrain", "evaluation", "region:us" ]
autoevaluate
null
null
null
0
0
--- type: predictions tags: - autotrain - evaluation datasets: - xsum eval_info: task: summarization model: Alred/t5-small-finetuned-summarization-cnn metrics: [] dataset_name: xsum dataset_config: default dataset_split: test col_mapping: text: document target: summary --- # Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Summarization * Model: Alred/t5-small-finetuned-summarization-cnn * Dataset: xsum * Config: default * Split: test To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@dfantasy](https://huggingface.co/dfantasy) for evaluating this model.
open-llm-leaderboard/details_bongchoi__test-llama2-70b
2023-10-04T14:14:30.000Z
[ "region:us" ]
open-llm-leaderboard
null
null
null
0
0
--- pretty_name: Evaluation run of bongchoi/test-llama2-70b dataset_summary: "Dataset automatically created during the evaluation run of model\ \ [bongchoi/test-llama2-70b](https://huggingface.co/bongchoi/test-llama2-70b) on\ \ the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\ \nThe dataset is composed of 61 configuration, each one coresponding to one of the\ \ evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be\ \ found as a specific split in each configuration, the split being named using the\ \ timestamp of the run.The \"train\" split is always pointing to the latest results.\n\ \nAn additional configuration \"results\" store all the aggregated results of the\ \ run (and is used to compute and display the agregated metrics on the [Open LLM\ \ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\ \nTo load the details from a run, you can for instance do the following:\n```python\n\ from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_bongchoi__test-llama2-70b\"\ ,\n\t\"harness_truthfulqa_mc_0\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\ \nThese are the [latest results from run 2023-10-04T14:13:10.692338](https://huggingface.co/datasets/open-llm-leaderboard/details_bongchoi__test-llama2-70b/blob/main/results_2023-10-04T14-13-10.692338.json)(note\ \ that their might be results for other tasks in the repos if successive evals didn't\ \ cover the same tasks. 
You find each in the results and the \"latest\" split for\ \ each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.6967225637378714,\n\ \ \"acc_stderr\": 0.030867069907791145,\n \"acc_norm\": 0.7008615431872544,\n\ \ \"acc_norm_stderr\": 0.030836865817034945,\n \"mc1\": 0.3108935128518972,\n\ \ \"mc1_stderr\": 0.016203316673559696,\n \"mc2\": 0.44923493721887353,\n\ \ \"mc2_stderr\": 0.01390226410719232\n },\n \"harness|arc:challenge|25\"\ : {\n \"acc\": 0.6262798634812287,\n \"acc_stderr\": 0.014137708601759091,\n\ \ \"acc_norm\": 0.6732081911262798,\n \"acc_norm_stderr\": 0.013706665975587333\n\ \ },\n \"harness|hellaswag|10\": {\n \"acc\": 0.6760605457080263,\n\ \ \"acc_stderr\": 0.00467020812857923,\n \"acc_norm\": 0.8733320055765784,\n\ \ \"acc_norm_stderr\": 0.0033192094001351187\n },\n \"harness|hendrycksTest-abstract_algebra|5\"\ : {\n \"acc\": 0.33,\n \"acc_stderr\": 0.04725815626252605,\n \ \ \"acc_norm\": 0.33,\n \"acc_norm_stderr\": 0.04725815626252605\n \ \ },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.6296296296296297,\n\ \ \"acc_stderr\": 0.04171654161354544,\n \"acc_norm\": 0.6296296296296297,\n\ \ \"acc_norm_stderr\": 0.04171654161354544\n },\n \"harness|hendrycksTest-astronomy|5\"\ : {\n \"acc\": 0.8092105263157895,\n \"acc_stderr\": 0.031975658210325,\n\ \ \"acc_norm\": 0.8092105263157895,\n \"acc_norm_stderr\": 0.031975658210325\n\ \ },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.72,\n\ \ \"acc_stderr\": 0.04512608598542127,\n \"acc_norm\": 0.72,\n \ \ \"acc_norm_stderr\": 0.04512608598542127\n },\n \"harness|hendrycksTest-clinical_knowledge|5\"\ : {\n \"acc\": 0.7169811320754716,\n \"acc_stderr\": 0.027724236492700918,\n\ \ \"acc_norm\": 0.7169811320754716,\n \"acc_norm_stderr\": 0.027724236492700918\n\ \ },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.8472222222222222,\n\ \ \"acc_stderr\": 0.030085743248565666,\n \"acc_norm\": 0.8472222222222222,\n\ \ \"acc_norm_stderr\": 0.030085743248565666\n 
},\n \"harness|hendrycksTest-college_chemistry|5\"\ : {\n \"acc\": 0.51,\n \"acc_stderr\": 0.05024183937956912,\n \ \ \"acc_norm\": 0.51,\n \"acc_norm_stderr\": 0.05024183937956912\n \ \ },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\"\ : 0.6,\n \"acc_stderr\": 0.049236596391733084,\n \"acc_norm\": 0.6,\n\ \ \"acc_norm_stderr\": 0.049236596391733084\n },\n \"harness|hendrycksTest-college_mathematics|5\"\ : {\n \"acc\": 0.37,\n \"acc_stderr\": 0.048523658709391,\n \ \ \"acc_norm\": 0.37,\n \"acc_norm_stderr\": 0.048523658709391\n },\n\ \ \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.6416184971098265,\n\ \ \"acc_stderr\": 0.03656343653353159,\n \"acc_norm\": 0.6416184971098265,\n\ \ \"acc_norm_stderr\": 0.03656343653353159\n },\n \"harness|hendrycksTest-college_physics|5\"\ : {\n \"acc\": 0.37254901960784315,\n \"acc_stderr\": 0.04810840148082635,\n\ \ \"acc_norm\": 0.37254901960784315,\n \"acc_norm_stderr\": 0.04810840148082635\n\ \ },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\":\ \ 0.77,\n \"acc_stderr\": 0.04229525846816506,\n \"acc_norm\": 0.77,\n\ \ \"acc_norm_stderr\": 0.04229525846816506\n },\n \"harness|hendrycksTest-conceptual_physics|5\"\ : {\n \"acc\": 0.6638297872340425,\n \"acc_stderr\": 0.030881618520676942,\n\ \ \"acc_norm\": 0.6638297872340425,\n \"acc_norm_stderr\": 0.030881618520676942\n\ \ },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.4473684210526316,\n\ \ \"acc_stderr\": 0.04677473004491199,\n \"acc_norm\": 0.4473684210526316,\n\ \ \"acc_norm_stderr\": 0.04677473004491199\n },\n \"harness|hendrycksTest-electrical_engineering|5\"\ : {\n \"acc\": 0.6551724137931034,\n \"acc_stderr\": 0.03960933549451207,\n\ \ \"acc_norm\": 0.6551724137931034,\n \"acc_norm_stderr\": 0.03960933549451207\n\ \ },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\"\ : 0.43386243386243384,\n \"acc_stderr\": 0.025525034382474894,\n \"\ acc_norm\": 0.43386243386243384,\n 
\"acc_norm_stderr\": 0.025525034382474894\n\ \ },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.47619047619047616,\n\ \ \"acc_stderr\": 0.04467062628403273,\n \"acc_norm\": 0.47619047619047616,\n\ \ \"acc_norm_stderr\": 0.04467062628403273\n },\n \"harness|hendrycksTest-global_facts|5\"\ : {\n \"acc\": 0.46,\n \"acc_stderr\": 0.05009082659620332,\n \ \ \"acc_norm\": 0.46,\n \"acc_norm_stderr\": 0.05009082659620332\n \ \ },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.8193548387096774,\n\ \ \"acc_stderr\": 0.02188617856717253,\n \"acc_norm\": 0.8193548387096774,\n\ \ \"acc_norm_stderr\": 0.02188617856717253\n },\n \"harness|hendrycksTest-high_school_chemistry|5\"\ : {\n \"acc\": 0.5123152709359606,\n \"acc_stderr\": 0.035169204442208966,\n\ \ \"acc_norm\": 0.5123152709359606,\n \"acc_norm_stderr\": 0.035169204442208966\n\ \ },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \ \ \"acc\": 0.79,\n \"acc_stderr\": 0.040936018074033256,\n \"acc_norm\"\ : 0.79,\n \"acc_norm_stderr\": 0.040936018074033256\n },\n \"harness|hendrycksTest-high_school_european_history|5\"\ : {\n \"acc\": 0.8303030303030303,\n \"acc_stderr\": 0.029311188674983134,\n\ \ \"acc_norm\": 0.8303030303030303,\n \"acc_norm_stderr\": 0.029311188674983134\n\ \ },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\"\ : 0.8787878787878788,\n \"acc_stderr\": 0.023253157951942084,\n \"\ acc_norm\": 0.8787878787878788,\n \"acc_norm_stderr\": 0.023253157951942084\n\ \ },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n\ \ \"acc\": 0.9430051813471503,\n \"acc_stderr\": 0.016731085293607555,\n\ \ \"acc_norm\": 0.9430051813471503,\n \"acc_norm_stderr\": 0.016731085293607555\n\ \ },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \ \ \"acc\": 0.7410256410256411,\n \"acc_stderr\": 0.02221110681006167,\n \ \ \"acc_norm\": 0.7410256410256411,\n \"acc_norm_stderr\": 0.02221110681006167\n\ \ },\n 
\"harness|hendrycksTest-high_school_mathematics|5\": {\n \"\ acc\": 0.35555555555555557,\n \"acc_stderr\": 0.029185714949857403,\n \ \ \"acc_norm\": 0.35555555555555557,\n \"acc_norm_stderr\": 0.029185714949857403\n\ \ },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \ \ \"acc\": 0.7647058823529411,\n \"acc_stderr\": 0.02755361446786381,\n \ \ \"acc_norm\": 0.7647058823529411,\n \"acc_norm_stderr\": 0.02755361446786381\n\ \ },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\"\ : 0.4304635761589404,\n \"acc_stderr\": 0.04042809961395634,\n \"\ acc_norm\": 0.4304635761589404,\n \"acc_norm_stderr\": 0.04042809961395634\n\ \ },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\"\ : 0.8733944954128441,\n \"acc_stderr\": 0.014257128686165169,\n \"\ acc_norm\": 0.8733944954128441,\n \"acc_norm_stderr\": 0.014257128686165169\n\ \ },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\"\ : 0.6342592592592593,\n \"acc_stderr\": 0.032847388576472056,\n \"\ acc_norm\": 0.6342592592592593,\n \"acc_norm_stderr\": 0.032847388576472056\n\ \ },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\"\ : 0.8970588235294118,\n \"acc_stderr\": 0.02132833757080437,\n \"\ acc_norm\": 0.8970588235294118,\n \"acc_norm_stderr\": 0.02132833757080437\n\ \ },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"\ acc\": 0.8776371308016878,\n \"acc_stderr\": 0.021331741829746786,\n \ \ \"acc_norm\": 0.8776371308016878,\n \"acc_norm_stderr\": 0.021331741829746786\n\ \ },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.8026905829596412,\n\ \ \"acc_stderr\": 0.02670985334496796,\n \"acc_norm\": 0.8026905829596412,\n\ \ \"acc_norm_stderr\": 0.02670985334496796\n },\n \"harness|hendrycksTest-human_sexuality|5\"\ : {\n \"acc\": 0.8778625954198473,\n \"acc_stderr\": 0.028718776889342344,\n\ \ \"acc_norm\": 0.8778625954198473,\n \"acc_norm_stderr\": 0.028718776889342344\n\ \ },\n 
\"harness|hendrycksTest-international_law|5\": {\n \"acc\":\ \ 0.8760330578512396,\n \"acc_stderr\": 0.03008309871603521,\n \"\ acc_norm\": 0.8760330578512396,\n \"acc_norm_stderr\": 0.03008309871603521\n\ \ },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.8333333333333334,\n\ \ \"acc_stderr\": 0.03602814176392645,\n \"acc_norm\": 0.8333333333333334,\n\ \ \"acc_norm_stderr\": 0.03602814176392645\n },\n \"harness|hendrycksTest-logical_fallacies|5\"\ : {\n \"acc\": 0.803680981595092,\n \"acc_stderr\": 0.031207970394709218,\n\ \ \"acc_norm\": 0.803680981595092,\n \"acc_norm_stderr\": 0.031207970394709218\n\ \ },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.5357142857142857,\n\ \ \"acc_stderr\": 0.04733667890053756,\n \"acc_norm\": 0.5357142857142857,\n\ \ \"acc_norm_stderr\": 0.04733667890053756\n },\n \"harness|hendrycksTest-management|5\"\ : {\n \"acc\": 0.8349514563106796,\n \"acc_stderr\": 0.03675668832233188,\n\ \ \"acc_norm\": 0.8349514563106796,\n \"acc_norm_stderr\": 0.03675668832233188\n\ \ },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.905982905982906,\n\ \ \"acc_stderr\": 0.01911989279892498,\n \"acc_norm\": 0.905982905982906,\n\ \ \"acc_norm_stderr\": 0.01911989279892498\n },\n \"harness|hendrycksTest-medical_genetics|5\"\ : {\n \"acc\": 0.74,\n \"acc_stderr\": 0.04408440022768077,\n \ \ \"acc_norm\": 0.74,\n \"acc_norm_stderr\": 0.04408440022768077\n \ \ },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.8620689655172413,\n\ \ \"acc_stderr\": 0.012331009307795656,\n \"acc_norm\": 0.8620689655172413,\n\ \ \"acc_norm_stderr\": 0.012331009307795656\n },\n \"harness|hendrycksTest-moral_disputes|5\"\ : {\n \"acc\": 0.7774566473988439,\n \"acc_stderr\": 0.02239421566194282,\n\ \ \"acc_norm\": 0.7774566473988439,\n \"acc_norm_stderr\": 0.02239421566194282\n\ \ },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.4547486033519553,\n\ \ \"acc_stderr\": 0.016653875777524012,\n \"acc_norm\": 
0.4547486033519553,\n\ \ \"acc_norm_stderr\": 0.016653875777524012\n },\n \"harness|hendrycksTest-nutrition|5\"\ : {\n \"acc\": 0.7810457516339869,\n \"acc_stderr\": 0.02367908986180772,\n\ \ \"acc_norm\": 0.7810457516339869,\n \"acc_norm_stderr\": 0.02367908986180772\n\ \ },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.7877813504823151,\n\ \ \"acc_stderr\": 0.023222756797435115,\n \"acc_norm\": 0.7877813504823151,\n\ \ \"acc_norm_stderr\": 0.023222756797435115\n },\n \"harness|hendrycksTest-prehistory|5\"\ : {\n \"acc\": 0.8364197530864198,\n \"acc_stderr\": 0.020581466138257114,\n\ \ \"acc_norm\": 0.8364197530864198,\n \"acc_norm_stderr\": 0.020581466138257114\n\ \ },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"\ acc\": 0.5673758865248227,\n \"acc_stderr\": 0.02955545423677884,\n \ \ \"acc_norm\": 0.5673758865248227,\n \"acc_norm_stderr\": 0.02955545423677884\n\ \ },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.5319426336375489,\n\ \ \"acc_stderr\": 0.012744149704869645,\n \"acc_norm\": 0.5319426336375489,\n\ \ \"acc_norm_stderr\": 0.012744149704869645\n },\n \"harness|hendrycksTest-professional_medicine|5\"\ : {\n \"acc\": 0.75,\n \"acc_stderr\": 0.026303648393696036,\n \ \ \"acc_norm\": 0.75,\n \"acc_norm_stderr\": 0.026303648393696036\n \ \ },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"acc\"\ : 0.7565359477124183,\n \"acc_stderr\": 0.01736247376214662,\n \"\ acc_norm\": 0.7565359477124183,\n \"acc_norm_stderr\": 0.01736247376214662\n\ \ },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.6909090909090909,\n\ \ \"acc_stderr\": 0.044262946482000985,\n \"acc_norm\": 0.6909090909090909,\n\ \ \"acc_norm_stderr\": 0.044262946482000985\n },\n \"harness|hendrycksTest-security_studies|5\"\ : {\n \"acc\": 0.7918367346938775,\n \"acc_stderr\": 0.0259911176728133,\n\ \ \"acc_norm\": 0.7918367346938775,\n \"acc_norm_stderr\": 0.0259911176728133\n\ \ },\n \"harness|hendrycksTest-sociology|5\": 
{\n \"acc\": 0.900497512437811,\n\ \ \"acc_stderr\": 0.021166216304659393,\n \"acc_norm\": 0.900497512437811,\n\ \ \"acc_norm_stderr\": 0.021166216304659393\n },\n \"harness|hendrycksTest-us_foreign_policy|5\"\ : {\n \"acc\": 0.92,\n \"acc_stderr\": 0.0272659924344291,\n \ \ \"acc_norm\": 0.92,\n \"acc_norm_stderr\": 0.0272659924344291\n },\n\ \ \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.5301204819277109,\n\ \ \"acc_stderr\": 0.03885425420866767,\n \"acc_norm\": 0.5301204819277109,\n\ \ \"acc_norm_stderr\": 0.03885425420866767\n },\n \"harness|hendrycksTest-world_religions|5\"\ : {\n \"acc\": 0.8538011695906432,\n \"acc_stderr\": 0.027097290118070806,\n\ \ \"acc_norm\": 0.8538011695906432,\n \"acc_norm_stderr\": 0.027097290118070806\n\ \ },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.3108935128518972,\n\ \ \"mc1_stderr\": 0.016203316673559696,\n \"mc2\": 0.44923493721887353,\n\ \ \"mc2_stderr\": 0.01390226410719232\n }\n}\n```" repo_url: https://huggingface.co/bongchoi/test-llama2-70b leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard point_of_contact: clementine@hf.co configs: - config_name: harness_arc_challenge_25 data_files: - split: 2023_10_04T14_13_10.692338 path: - '**/details_harness|arc:challenge|25_2023-10-04T14-13-10.692338.parquet' - split: latest path: - '**/details_harness|arc:challenge|25_2023-10-04T14-13-10.692338.parquet' - config_name: harness_hellaswag_10 data_files: - split: 2023_10_04T14_13_10.692338 path: - '**/details_harness|hellaswag|10_2023-10-04T14-13-10.692338.parquet' - split: latest path: - '**/details_harness|hellaswag|10_2023-10-04T14-13-10.692338.parquet' - config_name: harness_hendrycksTest_5 data_files: - split: 2023_10_04T14_13_10.692338 path: - '**/details_harness|hendrycksTest-abstract_algebra|5_2023-10-04T14-13-10.692338.parquet' - '**/details_harness|hendrycksTest-anatomy|5_2023-10-04T14-13-10.692338.parquet' - 
'**/details_harness|hendrycksTest-astronomy|5_2023-10-04T14-13-10.692338.parquet' - '**/details_harness|hendrycksTest-business_ethics|5_2023-10-04T14-13-10.692338.parquet' - '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-10-04T14-13-10.692338.parquet' - '**/details_harness|hendrycksTest-college_biology|5_2023-10-04T14-13-10.692338.parquet' - '**/details_harness|hendrycksTest-college_chemistry|5_2023-10-04T14-13-10.692338.parquet' - '**/details_harness|hendrycksTest-college_computer_science|5_2023-10-04T14-13-10.692338.parquet' - '**/details_harness|hendrycksTest-college_mathematics|5_2023-10-04T14-13-10.692338.parquet' - '**/details_harness|hendrycksTest-college_medicine|5_2023-10-04T14-13-10.692338.parquet' - '**/details_harness|hendrycksTest-college_physics|5_2023-10-04T14-13-10.692338.parquet' - '**/details_harness|hendrycksTest-computer_security|5_2023-10-04T14-13-10.692338.parquet' - '**/details_harness|hendrycksTest-conceptual_physics|5_2023-10-04T14-13-10.692338.parquet' - '**/details_harness|hendrycksTest-econometrics|5_2023-10-04T14-13-10.692338.parquet' - '**/details_harness|hendrycksTest-electrical_engineering|5_2023-10-04T14-13-10.692338.parquet' - '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-10-04T14-13-10.692338.parquet' - '**/details_harness|hendrycksTest-formal_logic|5_2023-10-04T14-13-10.692338.parquet' - '**/details_harness|hendrycksTest-global_facts|5_2023-10-04T14-13-10.692338.parquet' - '**/details_harness|hendrycksTest-high_school_biology|5_2023-10-04T14-13-10.692338.parquet' - '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-10-04T14-13-10.692338.parquet' - '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-10-04T14-13-10.692338.parquet' - '**/details_harness|hendrycksTest-high_school_european_history|5_2023-10-04T14-13-10.692338.parquet' - '**/details_harness|hendrycksTest-high_school_geography|5_2023-10-04T14-13-10.692338.parquet' - 
'**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-10-04T14-13-10.692338.parquet' - '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-10-04T14-13-10.692338.parquet' - '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-10-04T14-13-10.692338.parquet' - '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-10-04T14-13-10.692338.parquet' - '**/details_harness|hendrycksTest-high_school_physics|5_2023-10-04T14-13-10.692338.parquet' - '**/details_harness|hendrycksTest-high_school_psychology|5_2023-10-04T14-13-10.692338.parquet' - '**/details_harness|hendrycksTest-high_school_statistics|5_2023-10-04T14-13-10.692338.parquet' - '**/details_harness|hendrycksTest-high_school_us_history|5_2023-10-04T14-13-10.692338.parquet' - '**/details_harness|hendrycksTest-high_school_world_history|5_2023-10-04T14-13-10.692338.parquet' - '**/details_harness|hendrycksTest-human_aging|5_2023-10-04T14-13-10.692338.parquet' - '**/details_harness|hendrycksTest-human_sexuality|5_2023-10-04T14-13-10.692338.parquet' - '**/details_harness|hendrycksTest-international_law|5_2023-10-04T14-13-10.692338.parquet' - '**/details_harness|hendrycksTest-jurisprudence|5_2023-10-04T14-13-10.692338.parquet' - '**/details_harness|hendrycksTest-logical_fallacies|5_2023-10-04T14-13-10.692338.parquet' - '**/details_harness|hendrycksTest-machine_learning|5_2023-10-04T14-13-10.692338.parquet' - '**/details_harness|hendrycksTest-management|5_2023-10-04T14-13-10.692338.parquet' - '**/details_harness|hendrycksTest-marketing|5_2023-10-04T14-13-10.692338.parquet' - '**/details_harness|hendrycksTest-medical_genetics|5_2023-10-04T14-13-10.692338.parquet' - '**/details_harness|hendrycksTest-miscellaneous|5_2023-10-04T14-13-10.692338.parquet' - '**/details_harness|hendrycksTest-moral_disputes|5_2023-10-04T14-13-10.692338.parquet' - '**/details_harness|hendrycksTest-moral_scenarios|5_2023-10-04T14-13-10.692338.parquet' - 
'**/details_harness|hendrycksTest-nutrition|5_2023-10-04T14-13-10.692338.parquet' - '**/details_harness|hendrycksTest-philosophy|5_2023-10-04T14-13-10.692338.parquet' - '**/details_harness|hendrycksTest-prehistory|5_2023-10-04T14-13-10.692338.parquet' - '**/details_harness|hendrycksTest-professional_accounting|5_2023-10-04T14-13-10.692338.parquet' - '**/details_harness|hendrycksTest-professional_law|5_2023-10-04T14-13-10.692338.parquet' - '**/details_harness|hendrycksTest-professional_medicine|5_2023-10-04T14-13-10.692338.parquet' - '**/details_harness|hendrycksTest-professional_psychology|5_2023-10-04T14-13-10.692338.parquet' - '**/details_harness|hendrycksTest-public_relations|5_2023-10-04T14-13-10.692338.parquet' - '**/details_harness|hendrycksTest-security_studies|5_2023-10-04T14-13-10.692338.parquet' - '**/details_harness|hendrycksTest-sociology|5_2023-10-04T14-13-10.692338.parquet' - '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-10-04T14-13-10.692338.parquet' - '**/details_harness|hendrycksTest-virology|5_2023-10-04T14-13-10.692338.parquet' - '**/details_harness|hendrycksTest-world_religions|5_2023-10-04T14-13-10.692338.parquet' - split: latest path: - '**/details_harness|hendrycksTest-abstract_algebra|5_2023-10-04T14-13-10.692338.parquet' - '**/details_harness|hendrycksTest-anatomy|5_2023-10-04T14-13-10.692338.parquet' - '**/details_harness|hendrycksTest-astronomy|5_2023-10-04T14-13-10.692338.parquet' - '**/details_harness|hendrycksTest-business_ethics|5_2023-10-04T14-13-10.692338.parquet' - '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-10-04T14-13-10.692338.parquet' - '**/details_harness|hendrycksTest-college_biology|5_2023-10-04T14-13-10.692338.parquet' - '**/details_harness|hendrycksTest-college_chemistry|5_2023-10-04T14-13-10.692338.parquet' - '**/details_harness|hendrycksTest-college_computer_science|5_2023-10-04T14-13-10.692338.parquet' - 
'**/details_harness|hendrycksTest-college_mathematics|5_2023-10-04T14-13-10.692338.parquet' - '**/details_harness|hendrycksTest-college_medicine|5_2023-10-04T14-13-10.692338.parquet' - '**/details_harness|hendrycksTest-college_physics|5_2023-10-04T14-13-10.692338.parquet' - '**/details_harness|hendrycksTest-computer_security|5_2023-10-04T14-13-10.692338.parquet' - '**/details_harness|hendrycksTest-conceptual_physics|5_2023-10-04T14-13-10.692338.parquet' - '**/details_harness|hendrycksTest-econometrics|5_2023-10-04T14-13-10.692338.parquet' - '**/details_harness|hendrycksTest-electrical_engineering|5_2023-10-04T14-13-10.692338.parquet' - '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-10-04T14-13-10.692338.parquet' - '**/details_harness|hendrycksTest-formal_logic|5_2023-10-04T14-13-10.692338.parquet' - '**/details_harness|hendrycksTest-global_facts|5_2023-10-04T14-13-10.692338.parquet' - '**/details_harness|hendrycksTest-high_school_biology|5_2023-10-04T14-13-10.692338.parquet' - '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-10-04T14-13-10.692338.parquet' - '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-10-04T14-13-10.692338.parquet' - '**/details_harness|hendrycksTest-high_school_european_history|5_2023-10-04T14-13-10.692338.parquet' - '**/details_harness|hendrycksTest-high_school_geography|5_2023-10-04T14-13-10.692338.parquet' - '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-10-04T14-13-10.692338.parquet' - '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-10-04T14-13-10.692338.parquet' - '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-10-04T14-13-10.692338.parquet' - '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-10-04T14-13-10.692338.parquet' - '**/details_harness|hendrycksTest-high_school_physics|5_2023-10-04T14-13-10.692338.parquet' - 
'**/details_harness|hendrycksTest-high_school_psychology|5_2023-10-04T14-13-10.692338.parquet' - '**/details_harness|hendrycksTest-high_school_statistics|5_2023-10-04T14-13-10.692338.parquet' - '**/details_harness|hendrycksTest-high_school_us_history|5_2023-10-04T14-13-10.692338.parquet' - '**/details_harness|hendrycksTest-high_school_world_history|5_2023-10-04T14-13-10.692338.parquet' - '**/details_harness|hendrycksTest-human_aging|5_2023-10-04T14-13-10.692338.parquet' - '**/details_harness|hendrycksTest-human_sexuality|5_2023-10-04T14-13-10.692338.parquet' - '**/details_harness|hendrycksTest-international_law|5_2023-10-04T14-13-10.692338.parquet' - '**/details_harness|hendrycksTest-jurisprudence|5_2023-10-04T14-13-10.692338.parquet' - '**/details_harness|hendrycksTest-logical_fallacies|5_2023-10-04T14-13-10.692338.parquet' - '**/details_harness|hendrycksTest-machine_learning|5_2023-10-04T14-13-10.692338.parquet' - '**/details_harness|hendrycksTest-management|5_2023-10-04T14-13-10.692338.parquet' - '**/details_harness|hendrycksTest-marketing|5_2023-10-04T14-13-10.692338.parquet' - '**/details_harness|hendrycksTest-medical_genetics|5_2023-10-04T14-13-10.692338.parquet' - '**/details_harness|hendrycksTest-miscellaneous|5_2023-10-04T14-13-10.692338.parquet' - '**/details_harness|hendrycksTest-moral_disputes|5_2023-10-04T14-13-10.692338.parquet' - '**/details_harness|hendrycksTest-moral_scenarios|5_2023-10-04T14-13-10.692338.parquet' - '**/details_harness|hendrycksTest-nutrition|5_2023-10-04T14-13-10.692338.parquet' - '**/details_harness|hendrycksTest-philosophy|5_2023-10-04T14-13-10.692338.parquet' - '**/details_harness|hendrycksTest-prehistory|5_2023-10-04T14-13-10.692338.parquet' - '**/details_harness|hendrycksTest-professional_accounting|5_2023-10-04T14-13-10.692338.parquet' - '**/details_harness|hendrycksTest-professional_law|5_2023-10-04T14-13-10.692338.parquet' - '**/details_harness|hendrycksTest-professional_medicine|5_2023-10-04T14-13-10.692338.parquet' - 
'**/details_harness|hendrycksTest-professional_psychology|5_2023-10-04T14-13-10.692338.parquet' - '**/details_harness|hendrycksTest-public_relations|5_2023-10-04T14-13-10.692338.parquet' - '**/details_harness|hendrycksTest-security_studies|5_2023-10-04T14-13-10.692338.parquet' - '**/details_harness|hendrycksTest-sociology|5_2023-10-04T14-13-10.692338.parquet' - '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-10-04T14-13-10.692338.parquet' - '**/details_harness|hendrycksTest-virology|5_2023-10-04T14-13-10.692338.parquet' - '**/details_harness|hendrycksTest-world_religions|5_2023-10-04T14-13-10.692338.parquet' - config_name: harness_hendrycksTest_abstract_algebra_5 data_files: - split: 2023_10_04T14_13_10.692338 path: - '**/details_harness|hendrycksTest-abstract_algebra|5_2023-10-04T14-13-10.692338.parquet' - split: latest path: - '**/details_harness|hendrycksTest-abstract_algebra|5_2023-10-04T14-13-10.692338.parquet' - config_name: harness_hendrycksTest_anatomy_5 data_files: - split: 2023_10_04T14_13_10.692338 path: - '**/details_harness|hendrycksTest-anatomy|5_2023-10-04T14-13-10.692338.parquet' - split: latest path: - '**/details_harness|hendrycksTest-anatomy|5_2023-10-04T14-13-10.692338.parquet' - config_name: harness_hendrycksTest_astronomy_5 data_files: - split: 2023_10_04T14_13_10.692338 path: - '**/details_harness|hendrycksTest-astronomy|5_2023-10-04T14-13-10.692338.parquet' - split: latest path: - '**/details_harness|hendrycksTest-astronomy|5_2023-10-04T14-13-10.692338.parquet' - config_name: harness_hendrycksTest_business_ethics_5 data_files: - split: 2023_10_04T14_13_10.692338 path: - '**/details_harness|hendrycksTest-business_ethics|5_2023-10-04T14-13-10.692338.parquet' - split: latest path: - '**/details_harness|hendrycksTest-business_ethics|5_2023-10-04T14-13-10.692338.parquet' - config_name: harness_hendrycksTest_clinical_knowledge_5 data_files: - split: 2023_10_04T14_13_10.692338 path: - 
'**/details_harness|hendrycksTest-clinical_knowledge|5_2023-10-04T14-13-10.692338.parquet' - split: latest path: - '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-10-04T14-13-10.692338.parquet' - config_name: harness_hendrycksTest_college_biology_5 data_files: - split: 2023_10_04T14_13_10.692338 path: - '**/details_harness|hendrycksTest-college_biology|5_2023-10-04T14-13-10.692338.parquet' - split: latest path: - '**/details_harness|hendrycksTest-college_biology|5_2023-10-04T14-13-10.692338.parquet' - config_name: harness_hendrycksTest_college_chemistry_5 data_files: - split: 2023_10_04T14_13_10.692338 path: - '**/details_harness|hendrycksTest-college_chemistry|5_2023-10-04T14-13-10.692338.parquet' - split: latest path: - '**/details_harness|hendrycksTest-college_chemistry|5_2023-10-04T14-13-10.692338.parquet' - config_name: harness_hendrycksTest_college_computer_science_5 data_files: - split: 2023_10_04T14_13_10.692338 path: - '**/details_harness|hendrycksTest-college_computer_science|5_2023-10-04T14-13-10.692338.parquet' - split: latest path: - '**/details_harness|hendrycksTest-college_computer_science|5_2023-10-04T14-13-10.692338.parquet' - config_name: harness_hendrycksTest_college_mathematics_5 data_files: - split: 2023_10_04T14_13_10.692338 path: - '**/details_harness|hendrycksTest-college_mathematics|5_2023-10-04T14-13-10.692338.parquet' - split: latest path: - '**/details_harness|hendrycksTest-college_mathematics|5_2023-10-04T14-13-10.692338.parquet' - config_name: harness_hendrycksTest_college_medicine_5 data_files: - split: 2023_10_04T14_13_10.692338 path: - '**/details_harness|hendrycksTest-college_medicine|5_2023-10-04T14-13-10.692338.parquet' - split: latest path: - '**/details_harness|hendrycksTest-college_medicine|5_2023-10-04T14-13-10.692338.parquet' - config_name: harness_hendrycksTest_college_physics_5 data_files: - split: 2023_10_04T14_13_10.692338 path: - 
'**/details_harness|hendrycksTest-college_physics|5_2023-10-04T14-13-10.692338.parquet' - split: latest path: - '**/details_harness|hendrycksTest-college_physics|5_2023-10-04T14-13-10.692338.parquet' - config_name: harness_hendrycksTest_computer_security_5 data_files: - split: 2023_10_04T14_13_10.692338 path: - '**/details_harness|hendrycksTest-computer_security|5_2023-10-04T14-13-10.692338.parquet' - split: latest path: - '**/details_harness|hendrycksTest-computer_security|5_2023-10-04T14-13-10.692338.parquet' - config_name: harness_hendrycksTest_conceptual_physics_5 data_files: - split: 2023_10_04T14_13_10.692338 path: - '**/details_harness|hendrycksTest-conceptual_physics|5_2023-10-04T14-13-10.692338.parquet' - split: latest path: - '**/details_harness|hendrycksTest-conceptual_physics|5_2023-10-04T14-13-10.692338.parquet' - config_name: harness_hendrycksTest_econometrics_5 data_files: - split: 2023_10_04T14_13_10.692338 path: - '**/details_harness|hendrycksTest-econometrics|5_2023-10-04T14-13-10.692338.parquet' - split: latest path: - '**/details_harness|hendrycksTest-econometrics|5_2023-10-04T14-13-10.692338.parquet' - config_name: harness_hendrycksTest_electrical_engineering_5 data_files: - split: 2023_10_04T14_13_10.692338 path: - '**/details_harness|hendrycksTest-electrical_engineering|5_2023-10-04T14-13-10.692338.parquet' - split: latest path: - '**/details_harness|hendrycksTest-electrical_engineering|5_2023-10-04T14-13-10.692338.parquet' - config_name: harness_hendrycksTest_elementary_mathematics_5 data_files: - split: 2023_10_04T14_13_10.692338 path: - '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-10-04T14-13-10.692338.parquet' - split: latest path: - '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-10-04T14-13-10.692338.parquet' - config_name: harness_hendrycksTest_formal_logic_5 data_files: - split: 2023_10_04T14_13_10.692338 path: - '**/details_harness|hendrycksTest-formal_logic|5_2023-10-04T14-13-10.692338.parquet' - 
split: latest path: - '**/details_harness|hendrycksTest-formal_logic|5_2023-10-04T14-13-10.692338.parquet' - config_name: harness_hendrycksTest_global_facts_5 data_files: - split: 2023_10_04T14_13_10.692338 path: - '**/details_harness|hendrycksTest-global_facts|5_2023-10-04T14-13-10.692338.parquet' - split: latest path: - '**/details_harness|hendrycksTest-global_facts|5_2023-10-04T14-13-10.692338.parquet' - config_name: harness_hendrycksTest_high_school_biology_5 data_files: - split: 2023_10_04T14_13_10.692338 path: - '**/details_harness|hendrycksTest-high_school_biology|5_2023-10-04T14-13-10.692338.parquet' - split: latest path: - '**/details_harness|hendrycksTest-high_school_biology|5_2023-10-04T14-13-10.692338.parquet' - config_name: harness_hendrycksTest_high_school_chemistry_5 data_files: - split: 2023_10_04T14_13_10.692338 path: - '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-10-04T14-13-10.692338.parquet' - split: latest path: - '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-10-04T14-13-10.692338.parquet' - config_name: harness_hendrycksTest_high_school_computer_science_5 data_files: - split: 2023_10_04T14_13_10.692338 path: - '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-10-04T14-13-10.692338.parquet' - split: latest path: - '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-10-04T14-13-10.692338.parquet' - config_name: harness_hendrycksTest_high_school_european_history_5 data_files: - split: 2023_10_04T14_13_10.692338 path: - '**/details_harness|hendrycksTest-high_school_european_history|5_2023-10-04T14-13-10.692338.parquet' - split: latest path: - '**/details_harness|hendrycksTest-high_school_european_history|5_2023-10-04T14-13-10.692338.parquet' - config_name: harness_hendrycksTest_high_school_geography_5 data_files: - split: 2023_10_04T14_13_10.692338 path: - '**/details_harness|hendrycksTest-high_school_geography|5_2023-10-04T14-13-10.692338.parquet' - split: latest path: - 
'**/details_harness|hendrycksTest-high_school_geography|5_2023-10-04T14-13-10.692338.parquet' - config_name: harness_hendrycksTest_high_school_government_and_politics_5 data_files: - split: 2023_10_04T14_13_10.692338 path: - '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-10-04T14-13-10.692338.parquet' - split: latest path: - '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-10-04T14-13-10.692338.parquet' - config_name: harness_hendrycksTest_high_school_macroeconomics_5 data_files: - split: 2023_10_04T14_13_10.692338 path: - '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-10-04T14-13-10.692338.parquet' - split: latest path: - '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-10-04T14-13-10.692338.parquet' - config_name: harness_hendrycksTest_high_school_mathematics_5 data_files: - split: 2023_10_04T14_13_10.692338 path: - '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-10-04T14-13-10.692338.parquet' - split: latest path: - '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-10-04T14-13-10.692338.parquet' - config_name: harness_hendrycksTest_high_school_microeconomics_5 data_files: - split: 2023_10_04T14_13_10.692338 path: - '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-10-04T14-13-10.692338.parquet' - split: latest path: - '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-10-04T14-13-10.692338.parquet' - config_name: harness_hendrycksTest_high_school_physics_5 data_files: - split: 2023_10_04T14_13_10.692338 path: - '**/details_harness|hendrycksTest-high_school_physics|5_2023-10-04T14-13-10.692338.parquet' - split: latest path: - '**/details_harness|hendrycksTest-high_school_physics|5_2023-10-04T14-13-10.692338.parquet' - config_name: harness_hendrycksTest_high_school_psychology_5 data_files: - split: 2023_10_04T14_13_10.692338 path: - 
'**/details_harness|hendrycksTest-high_school_psychology|5_2023-10-04T14-13-10.692338.parquet' - split: latest path: - '**/details_harness|hendrycksTest-high_school_psychology|5_2023-10-04T14-13-10.692338.parquet' - config_name: harness_hendrycksTest_high_school_statistics_5 data_files: - split: 2023_10_04T14_13_10.692338 path: - '**/details_harness|hendrycksTest-high_school_statistics|5_2023-10-04T14-13-10.692338.parquet' - split: latest path: - '**/details_harness|hendrycksTest-high_school_statistics|5_2023-10-04T14-13-10.692338.parquet' - config_name: harness_hendrycksTest_high_school_us_history_5 data_files: - split: 2023_10_04T14_13_10.692338 path: - '**/details_harness|hendrycksTest-high_school_us_history|5_2023-10-04T14-13-10.692338.parquet' - split: latest path: - '**/details_harness|hendrycksTest-high_school_us_history|5_2023-10-04T14-13-10.692338.parquet' - config_name: harness_hendrycksTest_high_school_world_history_5 data_files: - split: 2023_10_04T14_13_10.692338 path: - '**/details_harness|hendrycksTest-high_school_world_history|5_2023-10-04T14-13-10.692338.parquet' - split: latest path: - '**/details_harness|hendrycksTest-high_school_world_history|5_2023-10-04T14-13-10.692338.parquet' - config_name: harness_hendrycksTest_human_aging_5 data_files: - split: 2023_10_04T14_13_10.692338 path: - '**/details_harness|hendrycksTest-human_aging|5_2023-10-04T14-13-10.692338.parquet' - split: latest path: - '**/details_harness|hendrycksTest-human_aging|5_2023-10-04T14-13-10.692338.parquet' - config_name: harness_hendrycksTest_human_sexuality_5 data_files: - split: 2023_10_04T14_13_10.692338 path: - '**/details_harness|hendrycksTest-human_sexuality|5_2023-10-04T14-13-10.692338.parquet' - split: latest path: - '**/details_harness|hendrycksTest-human_sexuality|5_2023-10-04T14-13-10.692338.parquet' - config_name: harness_hendrycksTest_international_law_5 data_files: - split: 2023_10_04T14_13_10.692338 path: - 
'**/details_harness|hendrycksTest-international_law|5_2023-10-04T14-13-10.692338.parquet' - split: latest path: - '**/details_harness|hendrycksTest-international_law|5_2023-10-04T14-13-10.692338.parquet' - config_name: harness_hendrycksTest_jurisprudence_5 data_files: - split: 2023_10_04T14_13_10.692338 path: - '**/details_harness|hendrycksTest-jurisprudence|5_2023-10-04T14-13-10.692338.parquet' - split: latest path: - '**/details_harness|hendrycksTest-jurisprudence|5_2023-10-04T14-13-10.692338.parquet' - config_name: harness_hendrycksTest_logical_fallacies_5 data_files: - split: 2023_10_04T14_13_10.692338 path: - '**/details_harness|hendrycksTest-logical_fallacies|5_2023-10-04T14-13-10.692338.parquet' - split: latest path: - '**/details_harness|hendrycksTest-logical_fallacies|5_2023-10-04T14-13-10.692338.parquet' - config_name: harness_hendrycksTest_machine_learning_5 data_files: - split: 2023_10_04T14_13_10.692338 path: - '**/details_harness|hendrycksTest-machine_learning|5_2023-10-04T14-13-10.692338.parquet' - split: latest path: - '**/details_harness|hendrycksTest-machine_learning|5_2023-10-04T14-13-10.692338.parquet' - config_name: harness_hendrycksTest_management_5 data_files: - split: 2023_10_04T14_13_10.692338 path: - '**/details_harness|hendrycksTest-management|5_2023-10-04T14-13-10.692338.parquet' - split: latest path: - '**/details_harness|hendrycksTest-management|5_2023-10-04T14-13-10.692338.parquet' - config_name: harness_hendrycksTest_marketing_5 data_files: - split: 2023_10_04T14_13_10.692338 path: - '**/details_harness|hendrycksTest-marketing|5_2023-10-04T14-13-10.692338.parquet' - split: latest path: - '**/details_harness|hendrycksTest-marketing|5_2023-10-04T14-13-10.692338.parquet' - config_name: harness_hendrycksTest_medical_genetics_5 data_files: - split: 2023_10_04T14_13_10.692338 path: - '**/details_harness|hendrycksTest-medical_genetics|5_2023-10-04T14-13-10.692338.parquet' - split: latest path: - 
'**/details_harness|hendrycksTest-medical_genetics|5_2023-10-04T14-13-10.692338.parquet' - config_name: harness_hendrycksTest_miscellaneous_5 data_files: - split: 2023_10_04T14_13_10.692338 path: - '**/details_harness|hendrycksTest-miscellaneous|5_2023-10-04T14-13-10.692338.parquet' - split: latest path: - '**/details_harness|hendrycksTest-miscellaneous|5_2023-10-04T14-13-10.692338.parquet' - config_name: harness_hendrycksTest_moral_disputes_5 data_files: - split: 2023_10_04T14_13_10.692338 path: - '**/details_harness|hendrycksTest-moral_disputes|5_2023-10-04T14-13-10.692338.parquet' - split: latest path: - '**/details_harness|hendrycksTest-moral_disputes|5_2023-10-04T14-13-10.692338.parquet' - config_name: harness_hendrycksTest_moral_scenarios_5 data_files: - split: 2023_10_04T14_13_10.692338 path: - '**/details_harness|hendrycksTest-moral_scenarios|5_2023-10-04T14-13-10.692338.parquet' - split: latest path: - '**/details_harness|hendrycksTest-moral_scenarios|5_2023-10-04T14-13-10.692338.parquet' - config_name: harness_hendrycksTest_nutrition_5 data_files: - split: 2023_10_04T14_13_10.692338 path: - '**/details_harness|hendrycksTest-nutrition|5_2023-10-04T14-13-10.692338.parquet' - split: latest path: - '**/details_harness|hendrycksTest-nutrition|5_2023-10-04T14-13-10.692338.parquet' - config_name: harness_hendrycksTest_philosophy_5 data_files: - split: 2023_10_04T14_13_10.692338 path: - '**/details_harness|hendrycksTest-philosophy|5_2023-10-04T14-13-10.692338.parquet' - split: latest path: - '**/details_harness|hendrycksTest-philosophy|5_2023-10-04T14-13-10.692338.parquet' - config_name: harness_hendrycksTest_prehistory_5 data_files: - split: 2023_10_04T14_13_10.692338 path: - '**/details_harness|hendrycksTest-prehistory|5_2023-10-04T14-13-10.692338.parquet' - split: latest path: - '**/details_harness|hendrycksTest-prehistory|5_2023-10-04T14-13-10.692338.parquet' - config_name: harness_hendrycksTest_professional_accounting_5 data_files: - split: 
2023_10_04T14_13_10.692338 path: - '**/details_harness|hendrycksTest-professional_accounting|5_2023-10-04T14-13-10.692338.parquet' - split: latest path: - '**/details_harness|hendrycksTest-professional_accounting|5_2023-10-04T14-13-10.692338.parquet' - config_name: harness_hendrycksTest_professional_law_5 data_files: - split: 2023_10_04T14_13_10.692338 path: - '**/details_harness|hendrycksTest-professional_law|5_2023-10-04T14-13-10.692338.parquet' - split: latest path: - '**/details_harness|hendrycksTest-professional_law|5_2023-10-04T14-13-10.692338.parquet' - config_name: harness_hendrycksTest_professional_medicine_5 data_files: - split: 2023_10_04T14_13_10.692338 path: - '**/details_harness|hendrycksTest-professional_medicine|5_2023-10-04T14-13-10.692338.parquet' - split: latest path: - '**/details_harness|hendrycksTest-professional_medicine|5_2023-10-04T14-13-10.692338.parquet' - config_name: harness_hendrycksTest_professional_psychology_5 data_files: - split: 2023_10_04T14_13_10.692338 path: - '**/details_harness|hendrycksTest-professional_psychology|5_2023-10-04T14-13-10.692338.parquet' - split: latest path: - '**/details_harness|hendrycksTest-professional_psychology|5_2023-10-04T14-13-10.692338.parquet' - config_name: harness_hendrycksTest_public_relations_5 data_files: - split: 2023_10_04T14_13_10.692338 path: - '**/details_harness|hendrycksTest-public_relations|5_2023-10-04T14-13-10.692338.parquet' - split: latest path: - '**/details_harness|hendrycksTest-public_relations|5_2023-10-04T14-13-10.692338.parquet' - config_name: harness_hendrycksTest_security_studies_5 data_files: - split: 2023_10_04T14_13_10.692338 path: - '**/details_harness|hendrycksTest-security_studies|5_2023-10-04T14-13-10.692338.parquet' - split: latest path: - '**/details_harness|hendrycksTest-security_studies|5_2023-10-04T14-13-10.692338.parquet' - config_name: harness_hendrycksTest_sociology_5 data_files: - split: 2023_10_04T14_13_10.692338 path: - 
'**/details_harness|hendrycksTest-sociology|5_2023-10-04T14-13-10.692338.parquet' - split: latest path: - '**/details_harness|hendrycksTest-sociology|5_2023-10-04T14-13-10.692338.parquet' - config_name: harness_hendrycksTest_us_foreign_policy_5 data_files: - split: 2023_10_04T14_13_10.692338 path: - '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-10-04T14-13-10.692338.parquet' - split: latest path: - '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-10-04T14-13-10.692338.parquet' - config_name: harness_hendrycksTest_virology_5 data_files: - split: 2023_10_04T14_13_10.692338 path: - '**/details_harness|hendrycksTest-virology|5_2023-10-04T14-13-10.692338.parquet' - split: latest path: - '**/details_harness|hendrycksTest-virology|5_2023-10-04T14-13-10.692338.parquet' - config_name: harness_hendrycksTest_world_religions_5 data_files: - split: 2023_10_04T14_13_10.692338 path: - '**/details_harness|hendrycksTest-world_religions|5_2023-10-04T14-13-10.692338.parquet' - split: latest path: - '**/details_harness|hendrycksTest-world_religions|5_2023-10-04T14-13-10.692338.parquet' - config_name: harness_truthfulqa_mc_0 data_files: - split: 2023_10_04T14_13_10.692338 path: - '**/details_harness|truthfulqa:mc|0_2023-10-04T14-13-10.692338.parquet' - split: latest path: - '**/details_harness|truthfulqa:mc|0_2023-10-04T14-13-10.692338.parquet' - config_name: results data_files: - split: 2023_10_04T14_13_10.692338 path: - results_2023-10-04T14-13-10.692338.parquet - split: latest path: - results_2023-10-04T14-13-10.692338.parquet --- # Dataset Card for Evaluation run of bongchoi/test-llama2-70b ## Dataset Description - **Homepage:** - **Repository:** https://huggingface.co/bongchoi/test-llama2-70b - **Paper:** - **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard - **Point of Contact:** clementine@hf.co ### Dataset Summary Dataset automatically created during the evaluation run of model 
[bongchoi/test-llama2-70b](https://huggingface.co/bongchoi/test-llama2-70b) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard). The dataset is composed of 61 configurations, each one corresponding to one of the evaluated tasks. The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results. An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)). To load the details from a run, you can for instance do the following: ```python from datasets import load_dataset data = load_dataset("open-llm-leaderboard/details_bongchoi__test-llama2-70b", "harness_truthfulqa_mc_0", split="train") ``` ## Latest results These are the [latest results from run 2023-10-04T14:13:10.692338](https://huggingface.co/datasets/open-llm-leaderboard/details_bongchoi__test-llama2-70b/blob/main/results_2023-10-04T14-13-10.692338.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks. 
You find each in the results and the "latest" split for each eval): ```python { "all": { "acc": 0.6967225637378714, "acc_stderr": 0.030867069907791145, "acc_norm": 0.7008615431872544, "acc_norm_stderr": 0.030836865817034945, "mc1": 0.3108935128518972, "mc1_stderr": 0.016203316673559696, "mc2": 0.44923493721887353, "mc2_stderr": 0.01390226410719232 }, "harness|arc:challenge|25": { "acc": 0.6262798634812287, "acc_stderr": 0.014137708601759091, "acc_norm": 0.6732081911262798, "acc_norm_stderr": 0.013706665975587333 }, "harness|hellaswag|10": { "acc": 0.6760605457080263, "acc_stderr": 0.00467020812857923, "acc_norm": 0.8733320055765784, "acc_norm_stderr": 0.0033192094001351187 }, "harness|hendrycksTest-abstract_algebra|5": { "acc": 0.33, "acc_stderr": 0.04725815626252605, "acc_norm": 0.33, "acc_norm_stderr": 0.04725815626252605 }, "harness|hendrycksTest-anatomy|5": { "acc": 0.6296296296296297, "acc_stderr": 0.04171654161354544, "acc_norm": 0.6296296296296297, "acc_norm_stderr": 0.04171654161354544 }, "harness|hendrycksTest-astronomy|5": { "acc": 0.8092105263157895, "acc_stderr": 0.031975658210325, "acc_norm": 0.8092105263157895, "acc_norm_stderr": 0.031975658210325 }, "harness|hendrycksTest-business_ethics|5": { "acc": 0.72, "acc_stderr": 0.04512608598542127, "acc_norm": 0.72, "acc_norm_stderr": 0.04512608598542127 }, "harness|hendrycksTest-clinical_knowledge|5": { "acc": 0.7169811320754716, "acc_stderr": 0.027724236492700918, "acc_norm": 0.7169811320754716, "acc_norm_stderr": 0.027724236492700918 }, "harness|hendrycksTest-college_biology|5": { "acc": 0.8472222222222222, "acc_stderr": 0.030085743248565666, "acc_norm": 0.8472222222222222, "acc_norm_stderr": 0.030085743248565666 }, "harness|hendrycksTest-college_chemistry|5": { "acc": 0.51, "acc_stderr": 0.05024183937956912, "acc_norm": 0.51, "acc_norm_stderr": 0.05024183937956912 }, "harness|hendrycksTest-college_computer_science|5": { "acc": 0.6, "acc_stderr": 0.049236596391733084, "acc_norm": 0.6, "acc_norm_stderr": 
0.049236596391733084 }, "harness|hendrycksTest-college_mathematics|5": { "acc": 0.37, "acc_stderr": 0.048523658709391, "acc_norm": 0.37, "acc_norm_stderr": 0.048523658709391 }, "harness|hendrycksTest-college_medicine|5": { "acc": 0.6416184971098265, "acc_stderr": 0.03656343653353159, "acc_norm": 0.6416184971098265, "acc_norm_stderr": 0.03656343653353159 }, "harness|hendrycksTest-college_physics|5": { "acc": 0.37254901960784315, "acc_stderr": 0.04810840148082635, "acc_norm": 0.37254901960784315, "acc_norm_stderr": 0.04810840148082635 }, "harness|hendrycksTest-computer_security|5": { "acc": 0.77, "acc_stderr": 0.04229525846816506, "acc_norm": 0.77, "acc_norm_stderr": 0.04229525846816506 }, "harness|hendrycksTest-conceptual_physics|5": { "acc": 0.6638297872340425, "acc_stderr": 0.030881618520676942, "acc_norm": 0.6638297872340425, "acc_norm_stderr": 0.030881618520676942 }, "harness|hendrycksTest-econometrics|5": { "acc": 0.4473684210526316, "acc_stderr": 0.04677473004491199, "acc_norm": 0.4473684210526316, "acc_norm_stderr": 0.04677473004491199 }, "harness|hendrycksTest-electrical_engineering|5": { "acc": 0.6551724137931034, "acc_stderr": 0.03960933549451207, "acc_norm": 0.6551724137931034, "acc_norm_stderr": 0.03960933549451207 }, "harness|hendrycksTest-elementary_mathematics|5": { "acc": 0.43386243386243384, "acc_stderr": 0.025525034382474894, "acc_norm": 0.43386243386243384, "acc_norm_stderr": 0.025525034382474894 }, "harness|hendrycksTest-formal_logic|5": { "acc": 0.47619047619047616, "acc_stderr": 0.04467062628403273, "acc_norm": 0.47619047619047616, "acc_norm_stderr": 0.04467062628403273 }, "harness|hendrycksTest-global_facts|5": { "acc": 0.46, "acc_stderr": 0.05009082659620332, "acc_norm": 0.46, "acc_norm_stderr": 0.05009082659620332 }, "harness|hendrycksTest-high_school_biology|5": { "acc": 0.8193548387096774, "acc_stderr": 0.02188617856717253, "acc_norm": 0.8193548387096774, "acc_norm_stderr": 0.02188617856717253 }, 
"harness|hendrycksTest-high_school_chemistry|5": { "acc": 0.5123152709359606, "acc_stderr": 0.035169204442208966, "acc_norm": 0.5123152709359606, "acc_norm_stderr": 0.035169204442208966 }, "harness|hendrycksTest-high_school_computer_science|5": { "acc": 0.79, "acc_stderr": 0.040936018074033256, "acc_norm": 0.79, "acc_norm_stderr": 0.040936018074033256 }, "harness|hendrycksTest-high_school_european_history|5": { "acc": 0.8303030303030303, "acc_stderr": 0.029311188674983134, "acc_norm": 0.8303030303030303, "acc_norm_stderr": 0.029311188674983134 }, "harness|hendrycksTest-high_school_geography|5": { "acc": 0.8787878787878788, "acc_stderr": 0.023253157951942084, "acc_norm": 0.8787878787878788, "acc_norm_stderr": 0.023253157951942084 }, "harness|hendrycksTest-high_school_government_and_politics|5": { "acc": 0.9430051813471503, "acc_stderr": 0.016731085293607555, "acc_norm": 0.9430051813471503, "acc_norm_stderr": 0.016731085293607555 }, "harness|hendrycksTest-high_school_macroeconomics|5": { "acc": 0.7410256410256411, "acc_stderr": 0.02221110681006167, "acc_norm": 0.7410256410256411, "acc_norm_stderr": 0.02221110681006167 }, "harness|hendrycksTest-high_school_mathematics|5": { "acc": 0.35555555555555557, "acc_stderr": 0.029185714949857403, "acc_norm": 0.35555555555555557, "acc_norm_stderr": 0.029185714949857403 }, "harness|hendrycksTest-high_school_microeconomics|5": { "acc": 0.7647058823529411, "acc_stderr": 0.02755361446786381, "acc_norm": 0.7647058823529411, "acc_norm_stderr": 0.02755361446786381 }, "harness|hendrycksTest-high_school_physics|5": { "acc": 0.4304635761589404, "acc_stderr": 0.04042809961395634, "acc_norm": 0.4304635761589404, "acc_norm_stderr": 0.04042809961395634 }, "harness|hendrycksTest-high_school_psychology|5": { "acc": 0.8733944954128441, "acc_stderr": 0.014257128686165169, "acc_norm": 0.8733944954128441, "acc_norm_stderr": 0.014257128686165169 }, "harness|hendrycksTest-high_school_statistics|5": { "acc": 0.6342592592592593, "acc_stderr": 
0.032847388576472056, "acc_norm": 0.6342592592592593, "acc_norm_stderr": 0.032847388576472056 }, "harness|hendrycksTest-high_school_us_history|5": { "acc": 0.8970588235294118, "acc_stderr": 0.02132833757080437, "acc_norm": 0.8970588235294118, "acc_norm_stderr": 0.02132833757080437 }, "harness|hendrycksTest-high_school_world_history|5": { "acc": 0.8776371308016878, "acc_stderr": 0.021331741829746786, "acc_norm": 0.8776371308016878, "acc_norm_stderr": 0.021331741829746786 }, "harness|hendrycksTest-human_aging|5": { "acc": 0.8026905829596412, "acc_stderr": 0.02670985334496796, "acc_norm": 0.8026905829596412, "acc_norm_stderr": 0.02670985334496796 }, "harness|hendrycksTest-human_sexuality|5": { "acc": 0.8778625954198473, "acc_stderr": 0.028718776889342344, "acc_norm": 0.8778625954198473, "acc_norm_stderr": 0.028718776889342344 }, "harness|hendrycksTest-international_law|5": { "acc": 0.8760330578512396, "acc_stderr": 0.03008309871603521, "acc_norm": 0.8760330578512396, "acc_norm_stderr": 0.03008309871603521 }, "harness|hendrycksTest-jurisprudence|5": { "acc": 0.8333333333333334, "acc_stderr": 0.03602814176392645, "acc_norm": 0.8333333333333334, "acc_norm_stderr": 0.03602814176392645 }, "harness|hendrycksTest-logical_fallacies|5": { "acc": 0.803680981595092, "acc_stderr": 0.031207970394709218, "acc_norm": 0.803680981595092, "acc_norm_stderr": 0.031207970394709218 }, "harness|hendrycksTest-machine_learning|5": { "acc": 0.5357142857142857, "acc_stderr": 0.04733667890053756, "acc_norm": 0.5357142857142857, "acc_norm_stderr": 0.04733667890053756 }, "harness|hendrycksTest-management|5": { "acc": 0.8349514563106796, "acc_stderr": 0.03675668832233188, "acc_norm": 0.8349514563106796, "acc_norm_stderr": 0.03675668832233188 }, "harness|hendrycksTest-marketing|5": { "acc": 0.905982905982906, "acc_stderr": 0.01911989279892498, "acc_norm": 0.905982905982906, "acc_norm_stderr": 0.01911989279892498 }, "harness|hendrycksTest-medical_genetics|5": { "acc": 0.74, "acc_stderr": 
0.04408440022768077, "acc_norm": 0.74, "acc_norm_stderr": 0.04408440022768077 }, "harness|hendrycksTest-miscellaneous|5": { "acc": 0.8620689655172413, "acc_stderr": 0.012331009307795656, "acc_norm": 0.8620689655172413, "acc_norm_stderr": 0.012331009307795656 }, "harness|hendrycksTest-moral_disputes|5": { "acc": 0.7774566473988439, "acc_stderr": 0.02239421566194282, "acc_norm": 0.7774566473988439, "acc_norm_stderr": 0.02239421566194282 }, "harness|hendrycksTest-moral_scenarios|5": { "acc": 0.4547486033519553, "acc_stderr": 0.016653875777524012, "acc_norm": 0.4547486033519553, "acc_norm_stderr": 0.016653875777524012 }, "harness|hendrycksTest-nutrition|5": { "acc": 0.7810457516339869, "acc_stderr": 0.02367908986180772, "acc_norm": 0.7810457516339869, "acc_norm_stderr": 0.02367908986180772 }, "harness|hendrycksTest-philosophy|5": { "acc": 0.7877813504823151, "acc_stderr": 0.023222756797435115, "acc_norm": 0.7877813504823151, "acc_norm_stderr": 0.023222756797435115 }, "harness|hendrycksTest-prehistory|5": { "acc": 0.8364197530864198, "acc_stderr": 0.020581466138257114, "acc_norm": 0.8364197530864198, "acc_norm_stderr": 0.020581466138257114 }, "harness|hendrycksTest-professional_accounting|5": { "acc": 0.5673758865248227, "acc_stderr": 0.02955545423677884, "acc_norm": 0.5673758865248227, "acc_norm_stderr": 0.02955545423677884 }, "harness|hendrycksTest-professional_law|5": { "acc": 0.5319426336375489, "acc_stderr": 0.012744149704869645, "acc_norm": 0.5319426336375489, "acc_norm_stderr": 0.012744149704869645 }, "harness|hendrycksTest-professional_medicine|5": { "acc": 0.75, "acc_stderr": 0.026303648393696036, "acc_norm": 0.75, "acc_norm_stderr": 0.026303648393696036 }, "harness|hendrycksTest-professional_psychology|5": { "acc": 0.7565359477124183, "acc_stderr": 0.01736247376214662, "acc_norm": 0.7565359477124183, "acc_norm_stderr": 0.01736247376214662 }, "harness|hendrycksTest-public_relations|5": { "acc": 0.6909090909090909, "acc_stderr": 0.044262946482000985, "acc_norm": 
0.6909090909090909, "acc_norm_stderr": 0.044262946482000985 }, "harness|hendrycksTest-security_studies|5": { "acc": 0.7918367346938775, "acc_stderr": 0.0259911176728133, "acc_norm": 0.7918367346938775, "acc_norm_stderr": 0.0259911176728133 }, "harness|hendrycksTest-sociology|5": { "acc": 0.900497512437811, "acc_stderr": 0.021166216304659393, "acc_norm": 0.900497512437811, "acc_norm_stderr": 0.021166216304659393 }, "harness|hendrycksTest-us_foreign_policy|5": { "acc": 0.92, "acc_stderr": 0.0272659924344291, "acc_norm": 0.92, "acc_norm_stderr": 0.0272659924344291 }, "harness|hendrycksTest-virology|5": { "acc": 0.5301204819277109, "acc_stderr": 0.03885425420866767, "acc_norm": 0.5301204819277109, "acc_norm_stderr": 0.03885425420866767 }, "harness|hendrycksTest-world_religions|5": { "acc": 0.8538011695906432, "acc_stderr": 0.027097290118070806, "acc_norm": 0.8538011695906432, "acc_norm_stderr": 0.027097290118070806 }, "harness|truthfulqa:mc|0": { "mc1": 0.3108935128518972, "mc1_stderr": 0.016203316673559696, "mc2": 0.44923493721887353, "mc2_stderr": 0.01390226410719232 } } ``` ### Supported Tasks and Leaderboards [More Information Needed] ### Languages [More Information Needed] ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? 
[More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information [More Information Needed] ### Contributions [More Information Needed]
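As the configuration list above shows, each config name is a flattened form of the harness task identifier used in the parquet filenames (for example `harness|truthfulqa:mc|0` becomes `harness_truthfulqa_mc_0`). A minimal sketch of that mapping, inferred from the config list rather than taken from any official API — the helper name `task_to_config` is illustrative:

```python
def task_to_config(task: str) -> str:
    """Map a harness task id (e.g. 'harness|truthfulqa:mc|0') to the dataset
    configuration name used in this repo (e.g. 'harness_truthfulqa_mc_0').

    Inferred from the config list above: '|', ':' and '-' all become '_'.
    """
    for ch in "|:-":
        task = task.replace(ch, "_")
    return task


# The resulting name can then be passed to load_dataset, e.g.:
#   load_dataset("open-llm-leaderboard/details_bongchoi__test-llama2-70b",
#                task_to_config("harness|hendrycksTest-world_religions|5"),
#                split="latest")
print(task_to_config("harness|truthfulqa:mc|0"))  # -> harness_truthfulqa_mc_0
```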
autoevaluate/autoeval-eval-mathemakitten__winobias_antistereotype_dev-mathemakitte-a11de2-45191145157
2023-10-04T14:13:36.000Z
[ "region:us" ]
autoevaluate
null
null
null
0
0
Entry not found
autoevaluate/autoeval-eval-xsum-default-cf6255-33263145015
2023-10-04T14:16:03.000Z
[ "autotrain", "evaluation", "region:us" ]
autoevaluate
null
null
null
0
0
--- type: predictions tags: - autotrain - evaluation datasets: - xsum eval_info: task: summarization model: t5-small metrics: [] dataset_name: xsum dataset_config: default dataset_split: test col_mapping: text: document target: summary --- # Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Summarization * Model: t5-small * Dataset: xsum * Config: default * Split: test To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@thefirebanks](https://huggingface.co/thefirebanks) for evaluating this model.
autoevaluate/autoeval-eval-ag_news-default-3c97aa-33465145016
2023-10-04T14:13:48.000Z
[ "region:us" ]
autoevaluate
null
null
null
0
0
Entry not found
autoevaluate/autoeval-eval-squad_v2-squad_v2-79ae53-33494145017
2023-10-04T14:13:57.000Z
[ "region:us" ]
autoevaluate
null
null
null
0
0
Entry not found
autoevaluate/autoeval-eval-xsum-default-01da82-33500145018
2023-10-04T14:16:30.000Z
[ "autotrain", "evaluation", "region:us" ]
autoevaluate
null
null
null
0
0
--- type: predictions tags: - autotrain - evaluation datasets: - xsum eval_info: task: summarization model: t5-small metrics: [] dataset_name: xsum dataset_config: default dataset_split: test col_mapping: text: document target: summary --- # Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Summarization * Model: t5-small * Dataset: xsum * Config: default * Split: test To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@dfantasy](https://huggingface.co/dfantasy) for evaluating this model.
autoevaluate/autoeval-eval-yelp_review_full-yelp_review_full-4aa164-33656145019
2023-10-04T14:14:17.000Z
[ "region:us" ]
autoevaluate
null
null
null
0
0
Entry not found
autoevaluate/autoeval-eval-yelp_review_full-yelp_review_full-4aa164-33656145020
2023-10-04T14:14:26.000Z
[ "region:us" ]
autoevaluate
null
null
null
0
0
Entry not found
autoevaluate/autoeval-eval-squad-plain_text-3c7ce4-33925145021
2023-10-04T14:14:35.000Z
[ "region:us" ]
autoevaluate
null
null
null
0
0
Entry not found
autoevaluate/autoeval-eval-allocine-allocine-a88052-34639145022
2023-10-04T14:14:47.000Z
[ "region:us" ]
autoevaluate
null
null
null
0
0
Entry not found
autoevaluate/autoeval-eval-allocine-allocine-16a42b-34641145023
2023-10-04T14:14:55.000Z
[ "region:us" ]
autoevaluate
null
null
null
0
0
Entry not found
autoevaluate/autoeval-eval-squad-plain_text-c462da-35081145024
2023-10-04T14:15:05.000Z
[ "region:us" ]
autoevaluate
null
null
null
0
0
Entry not found
autoevaluate/autoeval-eval-amazon_reviews_multi-en-4405a7-35409145025
2023-10-04T14:16:40.000Z
[ "autotrain", "evaluation", "region:us" ]
autoevaluate
null
null
null
0
0
--- type: predictions tags: - autotrain - evaluation datasets: - amazon_reviews_multi eval_info: task: summarization model: 0ys/mt5-small-finetuned-amazon-en-es metrics: ['accuracy', 'bertscore', 'precision'] dataset_name: amazon_reviews_multi dataset_config: en dataset_split: test col_mapping: text: review_body target: review_title --- # Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Summarization * Model: 0ys/mt5-small-finetuned-amazon-en-es * Dataset: amazon_reviews_multi * Config: en * Split: test To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@Caxmann](https://huggingface.co/Caxmann) for evaluating this model.
autoevaluate/autoeval-eval-gnad10-default-1a81d6-36119145026
2023-10-04T14:16:33.000Z
[ "autotrain", "evaluation", "region:us" ]
autoevaluate
null
null
null
0
0
--- type: predictions tags: - autotrain - evaluation datasets: - gnad10 eval_info: task: multi_class_classification model: Mathking/bert-base-german-cased-gnad10 metrics: [] dataset_name: gnad10 dataset_config: default dataset_split: train col_mapping: text: text target: label --- # Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Multi-class Text Classification * Model: Mathking/bert-base-german-cased-gnad10 * Dataset: gnad10 * Config: default * Split: train To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@BraveStone9](https://huggingface.co/BraveStone9) for evaluating this model.