liuyanchen1015/MULTI_VALUE_qqp_not_preverbal_negator
--- dataset_info: features: - name: question1 dtype: string - name: question2 dtype: string - name: label dtype: int64 - name: idx dtype: int64 - name: value_score dtype: int64 splits: - name: dev num_bytes: 255092 num_examples: 1281 - name: test num_bytes: 2419258 num_examples: 12249 - name: train num_bytes: 2272011 num_examples: 11176 download_size: 3042453 dataset_size: 4946361 --- # Dataset Card for "MULTI_VALUE_qqp_not_preverbal_negator" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
open-llm-leaderboard/details_TinyLlama__TinyLlama-1.1B-Chat-v0.6
--- pretty_name: Evaluation run of TinyLlama/TinyLlama-1.1B-Chat-v0.6 dataset_summary: "Dataset automatically created during the evaluation run of model\ \ [TinyLlama/TinyLlama-1.1B-Chat-v0.6](https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v0.6)\ \ on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\ \nThe dataset is composed of 1 configuration, each one corresponding to one of the\ \ evaluated tasks.\n\nThe dataset has been created from 1 run(s). Each run can be\ \ found as a specific split in each configuration, the split being named using the\ \ timestamp of the run. The \"train\" split is always pointing to the latest results.\n\ \nAn additional configuration \"results\" stores all the aggregated results of the\ \ run (and is used to compute and display the aggregated metrics on the [Open LLM\ \ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\ \nTo load the details from a run, you can for instance do the following:\n```python\n\ from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_TinyLlama__TinyLlama-1.1B-Chat-v0.6\"\ ,\n\t\"harness_gsm8k_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese\ \ are the [latest results from run 2023-12-02T13:49:51.667624](https://huggingface.co/datasets/open-llm-leaderboard/details_TinyLlama__TinyLlama-1.1B-Chat-v0.6/blob/main/results_2023-12-02T13-49-51.667624.json) (note\ \ that there might be results for other tasks in the repos if successive evals didn't\ \ cover the same tasks. 
You can find each of them in the results and the \"latest\" split for\ \ each eval):\n\n```python\n{\n    \"all\": {\n        \"acc\": 0.02122820318423048,\n\ \        \"acc_stderr\": 0.003970449129848635\n    },\n    \"harness|gsm8k|5\":\ \ {\n        \"acc\": 0.02122820318423048,\n        \"acc_stderr\": 0.003970449129848635\n\ \    }\n}\n```" repo_url: https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v0.6 leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard point_of_contact: clementine@hf.co configs: - config_name: harness_gsm8k_5 data_files: - split: 2023_12_02T13_49_51.667624 path: - '**/details_harness|gsm8k|5_2023-12-02T13-49-51.667624.parquet' - split: latest path: - '**/details_harness|gsm8k|5_2023-12-02T13-49-51.667624.parquet' - config_name: results data_files: - split: 2023_12_02T13_49_51.667624 path: - results_2023-12-02T13-49-51.667624.parquet - split: latest path: - results_2023-12-02T13-49-51.667624.parquet --- # Dataset Card for Evaluation run of TinyLlama/TinyLlama-1.1B-Chat-v0.6 ## Dataset Description - **Homepage:** - **Repository:** https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v0.6 - **Paper:** - **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard - **Point of Contact:** clementine@hf.co ### Dataset Summary Dataset automatically created during the evaluation run of model [TinyLlama/TinyLlama-1.1B-Chat-v0.6](https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v0.6) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard). The dataset is composed of 1 configuration, each one corresponding to one of the evaluated tasks. The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest results. 
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)). To load the details from a run, you can for instance do the following: ```python from datasets import load_dataset data = load_dataset("open-llm-leaderboard/details_TinyLlama__TinyLlama-1.1B-Chat-v0.6", "harness_gsm8k_5", split="train") ``` ## Latest results These are the [latest results from run 2023-12-02T13:49:51.667624](https://huggingface.co/datasets/open-llm-leaderboard/details_TinyLlama__TinyLlama-1.1B-Chat-v0.6/blob/main/results_2023-12-02T13-49-51.667624.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each of them in the results and the "latest" split for each eval): ```python { "all": { "acc": 0.02122820318423048, "acc_stderr": 0.003970449129848635 }, "harness|gsm8k|5": { "acc": 0.02122820318423048, "acc_stderr": 0.003970449129848635 } } ``` ### Supported Tasks and Leaderboards [More Information Needed] ### Languages [More Information Needed] ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? 
[More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information [More Information Needed] ### Contributions [More Information Needed]
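The numbers in the JSON above are internally consistent and easy to check by hand. A minimal sketch of that check, assuming the GSM8K test split has 1319 problems (so the accuracy corresponds to 28 correct answers) and that the harness computes the standard error of a Bernoulli mean with the usual n−1 denominator — both are assumptions about the tooling, not facts stated in this card:

```python
import math

n = 1319        # assumed GSM8K test-set size
correct = 28    # implied number of correct answers
acc = correct / n
stderr = math.sqrt(acc * (1 - acc) / (n - 1))

print(acc)      # ≈ 0.0212282, the "acc" value reported above
print(stderr)   # ≈ 0.0039704, the "acc_stderr" value reported above
```

If the reported accuracy were not an exact multiple of 1/1319, the assumed test-set size would be wrong; here 0.02122820318423048 × 1319 comes out to exactly 28.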
ysharma/short_jokes
--- license: mit --- **Context** Generating humor is a complex task in the domain of machine learning, and it requires the models to understand the deep semantic meaning of a joke in order to generate new ones. Such problems, however, are difficult to solve due to a number of reasons, one of which is the lack of a database that gives an elaborate list of jokes. Thus, a large corpus of over 0.2 million jokes has been collected by scraping several websites containing funny and short jokes. You can visit the [Github repository](https://github.com/amoudgl/short-jokes-dataset) from [amoudgl](https://github.com/amoudgl) for more information regarding the collection of the data and the scripts used. **Content** This dataset is in the form of a csv file containing 231,657 jokes. Joke lengths range from 10 to 200 characters. Each line in the file contains a unique ID and a joke. **Disclaimer** An effort has been made to keep the jokes as clean as possible. Since the data has been collected by scraping websites, it is possible that there may be a few jokes that are inappropriate or offensive to some people. **Note** This dataset is taken from the Kaggle dataset that can be found [here](https://www.kaggle.com/datasets/abhinavmoudgil95/short-jokes).
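Since each line of the CSV holds an ID and a joke, it can be read with the standard library alone. A small sketch, assuming the Kaggle file's header names `ID` and `Joke` (the header names are not stated in this card):

```python
import csv
import io

# A tiny in-memory sample mimicking the CSV layout described above
# (header names "ID" and "Joke" are assumed from the Kaggle version).
sample = (
    "ID,Joke\n"
    '1,"Why did the chicken cross the road? To get to the other side."\n'
    '2,"I told my computer I needed a break, and it froze immediately."\n'
)

rows = list(csv.DictReader(io.StringIO(sample)))
# Keep only jokes within the 10-200 character range quoted above.
in_range = [r["Joke"] for r in rows if 10 <= len(r["Joke"]) <= 200]
print(len(in_range))  # 2
```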
bri25yu/flores200_packed2
--- dataset_info: features: - name: id dtype: int32 - name: input_ids sequence: int32 - name: attention_mask sequence: int8 - name: labels sequence: int64 splits: - name: train num_bytes: 15086599195.0 num_examples: 10240000 - name: val num_bytes: 3827042 num_examples: 5000 - name: test num_bytes: 7670994 num_examples: 10000 download_size: 6552366058 dataset_size: 15098097231.0 --- # Dataset Card for "flores200_packed2" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
OpenGVLab/InternVid-10M-FLT-INFO
--- license: cc-by-nc-sa-4.0 task_categories: - feature-extraction language: - en size_categories: - 10M<n<100M extra_gated_prompt: "You agree to not use the data to conduct experiments that cause harm to human subjects." extra_gated_fields: Name: text Company/Organization: text E-Mail: text configs: - config_name: InternVid-10M-FLT data_files: - split: FLT path: InternVid-10M-FLT-INFO.jsonl --- # InternVid ## Dataset Description - **Homepage:** [InternVid](https://github.com/OpenGVLab/InternVideo/tree/main/Data/InternVid) - **Repository:** [OpenGVLab](https://github.com/OpenGVLab/InternVideo/tree/main/Data/InternVid) - **Paper:** [2307.06942](https://arxiv.org/pdf/2307.06942.pdf) - **Point of Contact:** mailto:[InternVideo](gvx-sh@pjlab.org.cn) ## InternVid-10M-FLT We present InternVid-10M-FLT, a subset of this dataset, consisting of 10 million video clips with generated high-quality captions for publicly available web videos. ## Download The 10M samples are provided in a jsonlines file. Columns include the videoID, timestamps, generated captions and their UMT similarity scores. ## How to Use ```python from datasets import load_dataset dataset = load_dataset("OpenGVLab/InternVid") ``` ## Method ![Caption Method](assert/caption_fig.jpg) ## Citation If you find this work useful for your research, please consider citing InternVid. Your acknowledgement would greatly help us in continuing to contribute resources to the research community. 
``` @article{wang2023internvid, title={InternVid: A Large-scale Video-Text Dataset for Multimodal Understanding and Generation}, author={Wang, Yi and He, Yinan and Li, Yizhuo and Li, Kunchang and Yu, Jiashuo and Ma, Xin and Chen, Xinyuan and Wang, Yaohui and Luo, Ping and Liu, Ziwei and Wang, Yali and Wang, Limin and Qiao, Yu}, journal={arXiv preprint arXiv:2307.06942}, year={2023} } @article{wang2022internvideo, title={InternVideo: General Video Foundation Models via Generative and Discriminative Learning}, author={Wang, Yi and Li, Kunchang and Li, Yizhuo and He, Yinan and Huang, Bingkun and Zhao, Zhiyu and Zhang, Hongjie and Xu, Jilan and Liu, Yi and Wang, Zun and Xing, Sen and Chen, Guo and Pan, Junting and Yu, Jiashuo and Wang, Yali and Wang, Limin and Qiao, Yu}, journal={arXiv preprint arXiv:2212.03191}, year={2022} } ```
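The jsonlines file described under "Download" can be parsed line by line with the standard library. The field names below are hypothetical placeholders for the four columns the card mentions (videoID, timestamps, generated caption, UMT similarity score); the actual keys in InternVid-10M-FLT-INFO.jsonl may differ:

```python
import json
import io

# Hypothetical record illustrating the jsonlines layout described above;
# the real key names in InternVid-10M-FLT-INFO.jsonl may differ.
lines = io.StringIO(
    '{"videoID": "abc123", "start": "00:00:01.0", "end": "00:00:09.0", '
    '"caption": "a person rides a bike down a street", "umt_score": 0.41}\n'
)

for line in lines:
    record = json.loads(line)
    print(record["videoID"], record["umt_score"])
```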
open-llm-leaderboard/details_tianlinliu0121__zephyr-7b-dpo-full-beta-0.2
--- pretty_name: Evaluation run of tianlinliu0121/zephyr-7b-dpo-full-beta-0.2 dataset_summary: "Dataset automatically created during the evaluation run of model\ \ [tianlinliu0121/zephyr-7b-dpo-full-beta-0.2](https://huggingface.co/tianlinliu0121/zephyr-7b-dpo-full-beta-0.2)\ \ on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\ \nThe dataset is composed of 1 configuration, each one corresponding to one of the\ \ evaluated tasks.\n\nThe dataset has been created from 4 run(s). Each run can be\ \ found as a specific split in each configuration, the split being named using the\ \ timestamp of the run. The \"train\" split is always pointing to the latest results.\n\ \nAn additional configuration \"results\" stores all the aggregated results of the\ \ run (and is used to compute and display the aggregated metrics on the [Open LLM\ \ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\ \nTo load the details from a run, you can for instance do the following:\n```python\n\ from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_tianlinliu0121__zephyr-7b-dpo-full-beta-0.2\"\ ,\n\t\"harness_gsm8k_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese\ \ are the [latest results from run 2023-12-03T18:01:25.798597](https://huggingface.co/datasets/open-llm-leaderboard/details_tianlinliu0121__zephyr-7b-dpo-full-beta-0.2/blob/main/results_2023-12-03T18-01-25.798597.json) (note\ \ that there might be results for other tasks in the repos if successive evals didn't\ \ cover the same tasks. 
You find each in the results and the \"latest\" split for\ \ each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.30022744503411675,\n\ \ \"acc_stderr\": 0.012625423152283044\n },\n \"harness|gsm8k|5\":\ \ {\n \"acc\": 0.30022744503411675,\n \"acc_stderr\": 0.012625423152283044\n\ \ }\n}\n```" repo_url: https://huggingface.co/tianlinliu0121/zephyr-7b-dpo-full-beta-0.2 leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard point_of_contact: clementine@hf.co configs: - config_name: harness_gsm8k_5 data_files: - split: 2023_12_03T17_57_40.359102 path: - '**/details_harness|gsm8k|5_2023-12-03T17-57-40.359102.parquet' - split: 2023_12_03T17_57_42.100432 path: - '**/details_harness|gsm8k|5_2023-12-03T17-57-42.100432.parquet' - split: 2023_12_03T18_01_20.411431 path: - '**/details_harness|gsm8k|5_2023-12-03T18-01-20.411431.parquet' - split: 2023_12_03T18_01_25.798597 path: - '**/details_harness|gsm8k|5_2023-12-03T18-01-25.798597.parquet' - split: latest path: - '**/details_harness|gsm8k|5_2023-12-03T18-01-25.798597.parquet' - config_name: results data_files: - split: 2023_12_03T17_57_40.359102 path: - results_2023-12-03T17-57-40.359102.parquet - split: 2023_12_03T17_57_42.100432 path: - results_2023-12-03T17-57-42.100432.parquet - split: 2023_12_03T18_01_20.411431 path: - results_2023-12-03T18-01-20.411431.parquet - split: 2023_12_03T18_01_25.798597 path: - results_2023-12-03T18-01-25.798597.parquet - split: latest path: - results_2023-12-03T18-01-25.798597.parquet --- # Dataset Card for Evaluation run of tianlinliu0121/zephyr-7b-dpo-full-beta-0.2 ## Dataset Description - **Homepage:** - **Repository:** https://huggingface.co/tianlinliu0121/zephyr-7b-dpo-full-beta-0.2 - **Paper:** - **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard - **Point of Contact:** clementine@hf.co ### Dataset Summary Dataset automatically created during the evaluation run of model 
[tianlinliu0121/zephyr-7b-dpo-full-beta-0.2](https://huggingface.co/tianlinliu0121/zephyr-7b-dpo-full-beta-0.2) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard). The dataset is composed of 1 configuration, each one corresponding to one of the evaluated tasks. The dataset has been created from 4 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest results. An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)). To load the details from a run, you can for instance do the following: ```python from datasets import load_dataset data = load_dataset("open-llm-leaderboard/details_tianlinliu0121__zephyr-7b-dpo-full-beta-0.2", "harness_gsm8k_5", split="train") ``` ## Latest results These are the [latest results from run 2023-12-03T18:01:25.798597](https://huggingface.co/datasets/open-llm-leaderboard/details_tianlinliu0121__zephyr-7b-dpo-full-beta-0.2/blob/main/results_2023-12-03T18-01-25.798597.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. 
You can find each of them in the results and the "latest" split for each eval): ```python { "all": { "acc": 0.30022744503411675, "acc_stderr": 0.012625423152283044 }, "harness|gsm8k|5": { "acc": 0.30022744503411675, "acc_stderr": 0.012625423152283044 } } ``` ### Supported Tasks and Leaderboards [More Information Needed] ### Languages [More Information Needed] ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information [More Information Needed] ### Contributions [More Information Needed]
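The timestamped split names listed in the configs above follow a fixed `YYYY_MM_DDTHH_MM_SS.micro` pattern, so the run that the `latest` alias points to can be recovered by parsing them:

```python
from datetime import datetime

# Timestamped split names from the "results" configuration above.
splits = [
    "2023_12_03T17_57_40.359102",
    "2023_12_03T17_57_42.100432",
    "2023_12_03T18_01_20.411431",
    "2023_12_03T18_01_25.798597",
]

def run_time(name: str) -> datetime:
    # Split names encode the run timestamp as YYYY_MM_DDTHH_MM_SS.micro.
    return datetime.strptime(name, "%Y_%m_%dT%H_%M_%S.%f")

latest = max(splits, key=run_time)
print(latest)  # 2023_12_03T18_01_25.798597 — the run "latest" points to
```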
automated-research-group/phi-boolq-results
--- dataset_info: config_name: '{''do_sample''=False, ''beams''=1}' features: - name: id dtype: string - name: prediction dtype: string - name: bool_accuracy dtype: bool splits: - name: train num_bytes: 475041 num_examples: 3270 download_size: 282821 dataset_size: 475041 configs: - config_name: '{''do_sample''=False, ''beams''=1}' data_files: - split: train path: '{''do_sample''=False, ''beams''=1}/train-*' --- # Dataset Card for "phi-boolq-results" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
presencesw/wmt14_fr_en
--- dataset_info: features: - name: en dtype: string - name: fr dtype: string splits: - name: train num_bytes: 14753166087 num_examples: 40836876 - name: validation num_bytes: 744439 num_examples: 3000 - name: test num_bytes: 838849 num_examples: 3003 download_size: 9661488345 dataset_size: 14754749375 configs: - config_name: default data_files: - split: train path: data/train-* - split: validation path: data/validation-* - split: test path: data/test-* ---
librarian-bots/authors_merged_model_prs
zolak/twitter_dataset_81_1713123291
--- dataset_info: features: - name: id dtype: string - name: tweet_content dtype: string - name: user_name dtype: string - name: user_id dtype: string - name: created_at dtype: string - name: url dtype: string - name: favourite_count dtype: int64 - name: scraped_at dtype: string - name: image_urls dtype: string splits: - name: train num_bytes: 272567 num_examples: 662 download_size: 149726 dataset_size: 272567 configs: - config_name: default data_files: - split: train path: data/train-* ---
Francesco/solar-panels-taxvb
--- dataset_info: features: - name: image_id dtype: int64 - name: image dtype: image - name: width dtype: int32 - name: height dtype: int32 - name: objects sequence: - name: id dtype: int64 - name: area dtype: int64 - name: bbox sequence: float32 length: 4 - name: category dtype: class_label: names: '0': solar-panels '1': Cell '2': Cell-Multi '3': No-Anomaly '4': Shadowing '5': Unclassified annotations_creators: - crowdsourced language_creators: - found language: - en license: - cc multilinguality: - monolingual size_categories: - 1K<n<10K source_datasets: - original task_categories: - object-detection task_ids: [] pretty_name: solar-panels-taxvb tags: - rf100 --- # Dataset Card for solar-panels-taxvb **The original COCO dataset is stored at `dataset.tar.gz`** ## Dataset Description - **Homepage:** https://universe.roboflow.com/object-detection/solar-panels-taxvb - **Point of Contact:** francesco.zuppichini@gmail.com ### Dataset Summary solar-panels-taxvb ### Supported Tasks and Leaderboards - `object-detection`: The dataset can be used to train a model for Object Detection. ### Languages English ## Dataset Structure ### Data Instances A data point comprises an image and its object annotations. ``` { 'image_id': 15, 'image': <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=640x640 at 0x2373B065C18>, 'width': 964043, 'height': 640, 'objects': { 'id': [114, 115, 116, 117], 'area': [3796, 1596, 152768, 81002], 'bbox': [ [302.0, 109.0, 73.0, 52.0], [810.0, 100.0, 57.0, 28.0], [160.0, 31.0, 248.0, 616.0], [741.0, 68.0, 202.0, 401.0] ], 'category': [4, 4, 0, 0] } } ``` ### Data Fields - `image_id`: the image id - `image`: `PIL.Image.Image` object containing the image. Note that when accessing the image column: `dataset[0]["image"]` the image file is automatically decoded. Decoding of a large number of image files might take a significant amount of time. 
Thus it is important to first query the sample index before the `"image"` column, *i.e.* `dataset[0]["image"]` should **always** be preferred over `dataset["image"][0]` - `width`: the image width - `height`: the image height - `objects`: a dictionary containing bounding box metadata for the objects present on the image - `id`: the annotation id - `area`: the area of the bounding box - `bbox`: the object's bounding box (in the [coco](https://albumentations.ai/docs/getting_started/bounding_boxes_augmentation/#coco) format) - `category`: the object's category. #### Who are the annotators? Annotators are Roboflow users. ## Additional Information ### Licensing Information See original homepage https://universe.roboflow.com/object-detection/solar-panels-taxvb ### Citation Information ``` @misc{ solar-panels-taxvb, title = { solar panels taxvb Dataset }, type = { Open Source Dataset }, author = { Roboflow 100 }, howpublished = { \url{ https://universe.roboflow.com/object-detection/solar-panels-taxvb } }, url = { https://universe.roboflow.com/object-detection/solar-panels-taxvb }, journal = { Roboflow Universe }, publisher = { Roboflow }, year = { 2022 }, month = { nov }, note = { visited on 2023-03-29 }, } ``` ### Contributions Thanks to [@mariosasko](https://github.com/mariosasko) for adding this dataset.
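The `bbox` values above use the COCO convention `[x_min, y_min, width, height]`. A small helper (illustrative only, not part of this dataset's tooling) converts such a box to corner coordinates, which many plotting and augmentation APIs expect:

```python
def coco_to_xyxy(bbox):
    """Convert a COCO-style [x_min, y_min, width, height] box
    to corner coordinates [x_min, y_min, x_max, y_max]."""
    x, y, w, h = bbox
    return [x, y, x + w, y + h]

# First box from the data instance above.
print(coco_to_xyxy([302.0, 109.0, 73.0, 52.0]))  # [302.0, 109.0, 375.0, 161.0]
```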
yyy999/Unicauca-dataset-April-June-2019-Network-flows
--- configs: - config_name: default data_files: - split: train path: data/train-* - split: test path: data/test-* dataset_info: features: - name: ts dtype: int64 - name: duration dtype: int64 - name: src_ip dtype: int64 - name: src_port dtype: int64 - name: dst_ip dtype: int64 - name: dst_port dtype: int64 - name: proto dtype: int64 - name: packets dtype: int64 - name: bytes dtype: int64 - name: packet_size dtype: float64 splits: - name: train num_bytes: 173109680 num_examples: 2163871 - name: test num_bytes: 43277440 num_examples: 540968 download_size: 99801648 dataset_size: 216387120 --- # Dataset Card for "Unicauca-dataset-April-June-2019-Network-flows" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
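The card documents no field semantics, but the schema mirrors common NetFlow exports, where `packets` and `bytes` are per-flow totals. Under that assumption, the `packet_size` feature would be the mean bytes per packet, as in this sketch with a hypothetical record:

```python
# Hypothetical flow record following the feature schema above; treating
# `packet_size` as bytes/packets is an assumption, not documented here.
flow = {"ts": 1554076800, "duration": 2, "packets": 10, "bytes": 1500}
mean_packet_size = flow["bytes"] / flow["packets"]
print(mean_packet_size)  # 150.0
```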
AppleHarem/tsurugi_bluearchive
--- license: mit task_categories: - text-to-image tags: - art - not-for-all-audiences size_categories: - n<1K --- # Dataset of tsurugi (Blue Archive) This is the dataset of tsurugi (Blue Archive), containing 200 images and their tags. Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)) ([LittleAppleWebUI](https://github.com/LittleApple-fp16/LittleAppleWebUI)). | Name | Images | Download | Description | |:----------------|---------:|:----------------------------------------|:-----------------------------------------------------------------------------------------| | raw | 200 | [Download](dataset-raw.zip) | Raw data with meta information. | | raw-stage3 | 531 | [Download](dataset-raw-stage3.zip) | 3-stage cropped raw data with meta information. | | raw-stage3-eyes | 667 | [Download](dataset-raw-stage3-eyes.zip) | 3-stage cropped (with eye-focus) raw data with meta information. | | 384x512 | 200 | [Download](dataset-384x512.zip) | 384x512 aligned dataset. | | 512x704 | 200 | [Download](dataset-512x704.zip) | 512x704 aligned dataset. | | 640x880 | 200 | [Download](dataset-640x880.zip) | 640x880 aligned dataset. | | stage3-640 | 531 | [Download](dataset-stage3-640.zip) | 3-stage cropped dataset with the shorter side not exceeding 640 pixels. | | stage3-800 | 531 | [Download](dataset-stage3-800.zip) | 3-stage cropped dataset with the shorter side not exceeding 800 pixels. | | stage3-p512-640 | 485 | [Download](dataset-stage3-p512-640.zip) | 3-stage cropped dataset with the area not less than 512x512 pixels. | | stage3-eyes-640 | 667 | [Download](dataset-stage3-eyes-640.zip) | 3-stage cropped (with eye-focus) dataset with the shorter side not exceeding 640 pixels. 
| | stage3-eyes-800 | 667 | [Download](dataset-stage3-eyes-800.zip) | 3-stage cropped (with eye-focus) dataset with the shorter side not exceeding 800 pixels. |
vincha77/review_sample_with_aspect
--- dataset_info: features: - name: text dtype: string - name: aspect dtype: string - name: food dtype: string - name: service dtype: string - name: label dtype: int64 - name: review_length dtype: int64 - name: price dtype: string - name: ambience dtype: string splits: - name: train num_bytes: 114005.35714285714 num_examples: 100 - name: test num_bytes: 13680.642857142857 num_examples: 12 download_size: 96925 dataset_size: 127686.0 --- # Dataset Card for "review_sample_with_aspect" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
CyberHarem/arkhangelsk_azurlane
--- license: mit task_categories: - text-to-image tags: - art - not-for-all-audiences size_categories: - n<1K --- # Dataset of arkhangelsk/アルハンゲリスク/阿尔汉格尔斯克 (Azur Lane) This is the dataset of arkhangelsk/アルハンゲリスク/阿尔汉格尔斯克 (Azur Lane), containing 23 images and their tags. The core tags of this character are `breasts, large_breasts, long_hair, yellow_eyes, blue_hair, hair_between_eyes, bangs, very_long_hair, white_headwear, hat`, which are pruned in this dataset. Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)). ## List of Packages | Name | Images | Size | Download | Type | Description | |:-----------------|---------:|:----------|:----------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------| | raw | 23 | 39.92 MiB | [Download](https://huggingface.co/datasets/CyberHarem/arkhangelsk_azurlane/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). | | 800 | 23 | 20.14 MiB | [Download](https://huggingface.co/datasets/CyberHarem/arkhangelsk_azurlane/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. | | stage3-p480-800 | 59 | 45.54 MiB | [Download](https://huggingface.co/datasets/CyberHarem/arkhangelsk_azurlane/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. | | 1200 | 23 | 33.10 MiB | [Download](https://huggingface.co/datasets/CyberHarem/arkhangelsk_azurlane/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. 
| | stage3-p480-1200 | 59 | 67.31 MiB | [Download](https://huggingface.co/datasets/CyberHarem/arkhangelsk_azurlane/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. | ### Load Raw Dataset with Waifuc We provide the raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need this, just run the following code: ```python import os import zipfile from huggingface_hub import hf_hub_download from waifuc.source import LocalSource # download raw archive file zip_file = hf_hub_download( repo_id='CyberHarem/arkhangelsk_azurlane', repo_type='dataset', filename='dataset-raw.zip', ) # extract files to your directory dataset_dir = 'dataset_dir' os.makedirs(dataset_dir, exist_ok=True) with zipfile.ZipFile(zip_file, 'r') as zf: zf.extractall(dataset_dir) # load the dataset with waifuc source = LocalSource(dataset_dir) for item in source: print(item.image, item.meta['filename'], item.meta['tags']) ``` ## List of Clusters List of tag clustering results; maybe some outfits can be mined here. 
### Raw Text Version | # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags | |----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | 0 | 13 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | 1girl, elbow_gloves, solo, black_gloves, cleavage, looking_at_viewer, smile, white_coat, white_thighhighs, military_hat, thighs, fur_trim, thigh_boots, blush, holding_sword, open_mouth, white_dress, white_footwear | | 1 | 8 | ![](samples/1/clu1-sample0.png) | ![](samples/1/clu1-sample1.png) | ![](samples/1/clu1-sample2.png) | ![](samples/1/clu1-sample3.png) | ![](samples/1/clu1-sample4.png) | 1girl, black_bodysuit, latex_bodysuit, looking_at_viewer, blush, headset, cat_ear_headphones, cat_ears, cleavage, skin_tight, smile, solo, official_alternate_costume, arms_up, black_gloves, open_mouth, shiny_clothes | ### Table Version | # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | elbow_gloves | solo | black_gloves | cleavage | looking_at_viewer | smile | white_coat | white_thighhighs | military_hat | thighs | fur_trim | thigh_boots | blush | holding_sword | open_mouth | white_dress | white_footwear | black_bodysuit | latex_bodysuit | headset | cat_ear_headphones | cat_ears | skin_tight | official_alternate_costume | arms_up | shiny_clothes | 
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:---------------|:-------|:---------------|:-----------|:--------------------|:--------|:-------------|:-------------------|:---------------|:---------|:-----------|:--------------|:--------|:----------------|:-------------|:--------------|:-----------------|:-----------------|:-----------------|:----------|:---------------------|:-----------|:-------------|:-----------------------------|:----------|:----------------| | 0 | 13 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | 1 | 8 | ![](samples/1/clu1-sample0.png) | ![](samples/1/clu1-sample1.png) | ![](samples/1/clu1-sample2.png) | ![](samples/1/clu1-sample3.png) | ![](samples/1/clu1-sample4.png) | X | | X | X | X | X | X | | | | | | | X | | X | | | X | X | X | X | X | X | X | X | X |
arbml/tashkeela
--- dataset_info: features: - name: diacratized dtype: string - name: text dtype: string splits: - name: train num_bytes: 1419585102 num_examples: 979982 - name: test num_bytes: 78869542 num_examples: 54444 - name: dev num_bytes: 78863352 num_examples: 54443 download_size: 747280703 dataset_size: 1577317996 --- # Dataset Card for "tashkeela" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
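Tashkeela-style corpora pair diacritized Arabic (the `diacratized` feature) with its plain form (`text`); the plain side can be reproduced by stripping the harakat code points. A sketch of that relationship, not necessarily this dataset's own preprocessing:

```python
import re

# Arabic diacritical marks (tashkeel) live in U+064B..U+0652.
TASHKEEL = re.compile(r"[\u064B-\u0652]")

def strip_tashkeel(text: str) -> str:
    """Remove short-vowel and related diacritics, leaving base letters."""
    return TASHKEEL.sub("", text)

print(strip_tashkeel("كَتَبَ"))  # كتب
```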
andstor/methods2test_small
--- language: - en license: mit task_categories: - text-generation configs: - config_name: fm data_files: - split: train path: data/fm/train-* - split: test path: data/fm/test-* - split: validation path: data/fm/validation-* - config_name: fm_indented data_files: - split: train path: data/fm_indented/train-* - split: test path: data/fm_indented/test-* - split: validation path: data/fm_indented/validation-* - config_name: fm+t data_files: - split: train path: data/fm+t/train-* - split: test path: data/fm+t/test-* - split: validation path: data/fm+t/validation-* - config_name: fm+fc data_files: - split: train path: data/fm+fc/train-* - split: test path: data/fm+fc/test-* - split: validation path: data/fm+fc/validation-* - config_name: fm+fc+t+tc data_files: - split: train path: data/fm+fc+t+tc/train-* - split: test path: data/fm+fc+t+tc/test-* - split: validation path: data/fm+fc+t+tc/validation-* - config_name: fm+fc+c data_files: - split: train path: data/fm+fc+c/train-* - split: test path: data/fm+fc+c/test-* - split: validation path: data/fm+fc+c/validation-* - config_name: fm+fc+c+t+tc data_files: - split: train path: data/fm+fc+c+t+tc/train-* - split: test path: data/fm+fc+c+t+tc/test-* - split: validation path: data/fm+fc+c+t+tc/validation-* - config_name: fm+fc+c+m data_files: - split: train path: data/fm+fc+c+m/train-* - split: test path: data/fm+fc+c+m/test-* - split: validation path: data/fm+fc+c+m/validation-* - config_name: fm+fc+c+m+t+tc data_files: - split: train path: data/fm+fc+c+m+t+tc/train-* - split: test path: data/fm+fc+c+m+t+tc/test-* - split: validation path: data/fm+fc+c+m+t+tc/validation-* - config_name: fm+fc+c+m+f data_files: - split: train path: data/fm+fc+c+m+f/train-* - split: test path: data/fm+fc+c+m+f/test-* - split: validation path: data/fm+fc+c+m+f/validation-* - config_name: fm+fc+c+m+f+t+tc data_files: - split: train path: data/fm+fc+c+m+f+t+tc/train-* - split: test path: data/fm+fc+c+m+f+t+tc/test-* - split: validation path: 
data/fm+fc+c+m+f+t+tc/validation-* - config_name: t data_files: - split: train path: data/t/train-* - split: test path: data/t/test-* - split: validation path: data/t/validation-* - config_name: t_indented data_files: - split: train path: data/t_indented/train-* - split: test path: data/t_indented/test-* - split: validation path: data/t_indented/validation-* - config_name: t+tc data_files: - split: train path: data/t+tc/train-* - split: test path: data/t+tc/test-* - split: validation path: data/t+tc/validation-* dataset_info: - config_name: fm features: - name: id dtype: string - name: text dtype: string splits: - name: train num_bytes: 4696431 num_examples: 7440 - name: test num_bytes: 642347 num_examples: 1017 - name: validation num_bytes: 662917 num_examples: 953 download_size: 2633268 dataset_size: 6001695 - config_name: fm+fc features: - name: id dtype: string - name: text dtype: string splits: - name: train num_bytes: 5387123 num_examples: 7440 - name: test num_bytes: 738049 num_examples: 1017 - name: validation num_bytes: 757167 num_examples: 953 download_size: 2925807 dataset_size: 6882339 - config_name: fm+fc+c features: - name: id dtype: string - name: text dtype: string splits: - name: train num_bytes: 5906873 num_examples: 7440 - name: test num_bytes: 820149 num_examples: 1017 - name: validation num_bytes: 824441 num_examples: 953 download_size: 3170873 dataset_size: 7551463 - config_name: fm+fc+c+m features: - name: id dtype: string - name: text dtype: string splits: - name: train num_bytes: 11930672 num_examples: 7440 - name: test num_bytes: 1610045 num_examples: 1017 - name: validation num_bytes: 1553249 num_examples: 953 download_size: 5406454 dataset_size: 15093966 - config_name: fm+fc+c+m+f features: - name: id dtype: string - name: text dtype: string splits: - name: train num_bytes: 12722890 num_examples: 7440 - name: test num_bytes: 1713683 num_examples: 1017 - name: validation num_bytes: 1654607 num_examples: 953 download_size: 5753116 
dataset_size: 16091180 - config_name: fm+fc+c+m+f+t+tc features: - name: id dtype: string - name: source dtype: string - name: target dtype: string splits: - name: train num_bytes: 18332635 num_examples: 7440 - name: test num_bytes: 2461169 num_examples: 1017 - name: validation num_bytes: 2510969 num_examples: 953 download_size: 8280985 dataset_size: 23304773 - config_name: fm+fc+c+m+t+tc features: - name: id dtype: string - name: source dtype: string - name: target dtype: string splits: - name: train num_bytes: 17537661 num_examples: 7440 - name: test num_bytes: 2357359 num_examples: 1017 - name: validation num_bytes: 2409506 num_examples: 953 download_size: 8178222 dataset_size: 22304526 - config_name: fm+fc+c+t+tc features: - name: id dtype: string - name: source dtype: string - name: target dtype: string splits: - name: train num_bytes: 11445562 num_examples: 7440 - name: test num_bytes: 1565365 num_examples: 1017 - name: validation num_bytes: 1676986 num_examples: 953 download_size: 5944482 dataset_size: 14687913 - config_name: fm+fc+t+tc features: - name: id dtype: string - name: source dtype: string - name: target dtype: string splits: - name: train num_bytes: 10923038 num_examples: 7440 - name: test num_bytes: 1483265 num_examples: 1017 - name: validation num_bytes: 1609296 num_examples: 953 download_size: 5715335 dataset_size: 14015599 - config_name: fm+t features: - name: id dtype: string - name: source dtype: string - name: target dtype: string splits: - name: train num_bytes: 8889443 num_examples: 7440 - name: test num_bytes: 1207763 num_examples: 1017 - name: validation num_bytes: 1336798 num_examples: 953 download_size: 4898458 dataset_size: 11434004 - config_name: fm_indented features: - name: id dtype: string - name: text dtype: string splits: - name: train num_bytes: 5054397 num_examples: 7440 - name: test num_bytes: 692948 num_examples: 1017 - name: validation num_bytes: 714462 num_examples: 953 download_size: 2703115 dataset_size: 6461807 - 
config_name: t features: - name: id dtype: string - name: source dtype: string - name: target dtype: string splits: - name: train num_bytes: 4316096 num_examples: 7440 - name: test num_bytes: 582266 num_examples: 1017 - name: validation num_bytes: 689647 num_examples: 953 download_size: 2434024 dataset_size: 5588009 - config_name: t+tc features: - name: id dtype: string - name: source dtype: string - name: target dtype: string splits: - name: train num_bytes: 5648321 num_examples: 7440 - name: test num_bytes: 761386 num_examples: 1017 - name: validation num_bytes: 867350 num_examples: 953 download_size: 3024686 dataset_size: 7277057 - config_name: t_indented features: - name: id dtype: string - name: source dtype: string - name: target dtype: string splits: - name: train num_bytes: 4606253 num_examples: 7440 - name: test num_bytes: 623576 num_examples: 1017 - name: validation num_bytes: 734221 num_examples: 953 download_size: 2496661 dataset_size: 5964050 tags: - unit test - java - code --- ## Dataset Description Microsoft created the methods2test dataset, consisting of Java JUnit test cases with their corresponding focal methods. It contains 780k pairs of JUnit test cases and focal methods, extracted from a total of 91K Java open-source projects hosted on GitHub. This is a smaller subset of the assembled version of the methods2test dataset. It provides convenient access to the different context levels based on the raw source code (e.g. newlines are preserved). The test cases and associated classes are also made available. The subset was created by randomly selecting one sample from each of the 91K projects. The mapping between test cases and focal methods is based on heuristic rules and Java developers' best practices. 
More information can be found here: - [methods2test GitHub repo](https://github.com/microsoft/methods2test) - [Methods2Test: A dataset of focal methods mapped to test cases](https://arxiv.org/pdf/2203.12776.pdf) ## Dataset Schema ``` t: <TEST_CASE> t+tc: <TEST_CASE> <TEST_CLASS_NAME> fm: <FOCAL_METHOD> fm+fc: <FOCAL_CLASS_NAME> <FOCAL_METHOD> fm+fc+c: <FOCAL_CLASS_NAME> <FOCAL_METHOD> <CONSTRUCTORS> fm+fc+c+m: <FOCAL_CLASS_NAME> <FOCAL_METHOD> <CONSTRUCTORS> <METHOD_SIGNATURES> fm+fc+c+m+f: <FOCAL_CLASS_NAME> <FOCAL_METHOD> <CONSTRUCTORS> <METHOD_SIGNATURES> <FIELDS> ``` ## Focal Context - fm: this representation incorporates exclusively the source code of the focal method. Intuitively, this contains the most important information for generating accurate test cases for the given method. - fm+fc: this representation adds the focal class name, which can provide meaningful semantic information to the model. - fm+fc+c: this representation adds the signatures of the constructor methods of the focal class. The idea behind this augmentation is that the test case may require instantiating an object of the focal class in order to properly test the focal method. - fm+fc+c+m: this representation adds the signatures of the other public methods in the focal class. The rationale which motivated this inclusion is that the test case may need to invoke other auxiliary methods within the class (e.g., getters, setters) to set up or tear down the testing environment. - fm+fc+c+m+f: this representation adds the public fields of the focal class. The motivation is that test cases may need to inspect the status of the public fields to properly test a focal method. 
![image/png](https://huggingface.co/datasets/andstor/methods2test/resolve/main/figure-1-focal-context.png) The different levels of focal context are the following: ``` t: test case t+tc: test case + test class name fm: focal method fm+fc: focal method + focal class name fm+fc+c: focal method + focal class name + constructor signatures fm+fc+c+m: focal method + focal class name + constructor signatures + public method signatures fm+fc+c+m+f: focal method + focal class name + constructor signatures + public method signatures + public fields ``` ## Limitations The original authors validated the heuristics by inspecting a statistically significant sample (confidence level of 95% within a 10% margin of error) of 97 samples from the training set. Two authors independently evaluated the sample, then met to discuss the disagreements. They found that 90.72% of the samples have a correct link between the test case and the corresponding focal method. ## Contribution All thanks to the original authors.
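As a rough illustration of how these incremental context levels nest, the sketch below assembles a focal-context string from its parts. This is not the official methods2test packing code (which lives in the GitHub repo linked above); the function and example values here are ours:

```python
def build_focal_context(focal_method, class_name=None, constructors=None,
                        method_signatures=None, fields=None):
    """Assemble a focal-context string in the spirit of the
    fm / fm+fc / fm+fc+c / fm+fc+c+m / fm+fc+c+m+f levels.
    Illustrative reconstruction only."""
    parts = []
    if class_name is not None:
        parts.append(class_name)           # fc: focal class name
    parts.append(focal_method)             # fm: focal method source
    parts.extend(constructors or [])       # c: constructor signatures
    parts.extend(method_signatures or [])  # m: other public method signatures
    parts.extend(fields or [])             # f: public fields
    return " ".join(parts)

# fm+fc+c for a toy focal method
print(build_focal_context(
    "public int add(int a, int b) { return a + b; }",
    class_name="Calculator",
    constructors=["Calculator()"],
))
```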
AlekseyKorshuk/instinwild-chatml-deduplicated
--- dataset_info: features: - name: conversation list: - name: content dtype: string - name: do_train dtype: bool - name: role dtype: string splits: - name: train num_bytes: 38772282.58722768 num_examples: 50970 download_size: 20538245 dataset_size: 38772282.58722768 --- # Dataset Card for "instinwild-chatml-deduplicated" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
TIGER-Lab/MMLU-STEM
--- license: mit --- This is the dataset used by MINERVA and LLEMMA in https://huggingface.co/EleutherAI/llemma_7b. It contains the subset of STEM subjects in MMLU as defined by MINERVA. The included subjects are: - 'abstract_algebra', - 'anatomy', - 'astronomy', - 'college_biology', - 'college_chemistry', - 'college_computer_science', - 'college_mathematics', - 'college_physics', - 'computer_security', - 'conceptual_physics', - 'electrical_engineering', - 'elementary_mathematics', - 'high_school_biology', - 'high_school_chemistry', - 'high_school_computer_science', - 'high_school_mathematics', - 'high_school_physics', - 'high_school_statistics', - 'machine_learning' Please cite the original MMLU paper when using it.
SlimX/Subset-test-FREE
--- license: apache-2.0 ---
CyberHarem/kaname_rana_bangdreamitsmygo
--- license: mit task_categories: - text-to-image tags: - art - not-for-all-audiences size_categories: - n<1K --- # Dataset of Kaname Rāna This is the dataset of Kaname Rāna, containing 131 images and their tags. Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)). | Name | Images | Download | Description | |:------------|---------:|:------------------------------------|:-------------------------------------------------------------------------| | raw | 131 | [Download](dataset-raw.zip) | Raw data with meta information. | | raw-stage3 | 315 | [Download](dataset-raw-stage3.zip) | 3-stage cropped raw data with meta information. | | 384x512 | 131 | [Download](dataset-384x512.zip) | 384x512 aligned dataset. | | 512x512 | 131 | [Download](dataset-512x512.zip) | 512x512 aligned dataset. | | 512x704 | 131 | [Download](dataset-512x704.zip) | 512x704 aligned dataset. | | 640x640 | 131 | [Download](dataset-640x640.zip) | 640x640 aligned dataset. | | 640x880 | 131 | [Download](dataset-640x880.zip) | 640x880 aligned dataset. | | stage3-640 | 315 | [Download](dataset-stage3-640.zip) | 3-stage cropped dataset with the shorter side not exceeding 640 pixels. | | stage3-800 | 315 | [Download](dataset-stage3-800.zip) | 3-stage cropped dataset with the shorter side not exceeding 800 pixels. | | stage3-1200 | 315 | [Download](dataset-stage3-1200.zip) | 3-stage cropped dataset with the shorter side not exceeding 1200 pixels. |
joseluhf11/oct-fovea-detection_v2
--- dataset_info: features: - name: image dtype: image - name: objects struct: - name: bbox sequence: sequence: int64 - name: categories sequence: string splits: - name: train num_bytes: 438180043.0 num_examples: 539 download_size: 430057504 dataset_size: 438180043.0 configs: - config_name: default data_files: - split: train path: data/train-* ---
pyRis/wikinewssum
--- license: mit configs: - config_name: default data_files: - split: train path: data/train-* - split: test path: data/test-* - split: validation path: data/validation-* dataset_info: features: - name: lang_src dtype: string - name: text dtype: string - name: lang_tgt dtype: string - name: summary dtype: string splits: - name: train num_bytes: 286852471 num_examples: 62543 - name: test num_bytes: 41437935 num_examples: 8977 - name: validation num_bytes: 81600711 num_examples: 17996 download_size: 199887304 dataset_size: 409891117 ---
one-sec-cv12/chunk_39
--- dataset_info: features: - name: audio dtype: audio: sampling_rate: 16000 splits: - name: train num_bytes: 24516540144.375 num_examples: 255253 download_size: 21196982997 dataset_size: 24516540144.375 --- # Dataset Card for "chunk_39" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
carcanha/marilhamendonsa
--- license: openrail ---
grosenthal/lat_en_loeb_morph
--- dataset_info: features: - name: id dtype: int64 - name: la dtype: string - name: en dtype: string - name: file dtype: string splits: - name: train num_bytes: 60797479 num_examples: 99343 - name: test num_bytes: 628768 num_examples: 1014 - name: valid num_bytes: 605889 num_examples: 1014 download_size: 31059812 dataset_size: 62032136 --- # Dataset Card for "lat_en_loeb_morph" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
stealthwriter/newAIHumanGPT3.5V2
--- configs: - config_name: default data_files: - split: train path: data/train-* - split: validation path: data/validation-* dataset_info: features: - name: sentence dtype: string - name: label dtype: int64 splits: - name: train num_bytes: 4751074 num_examples: 36000 - name: validation num_bytes: 528788 num_examples: 4000 download_size: 3478514 dataset_size: 5279862 --- # Dataset Card for "newAIHumanGPT3.5V2" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
m-ric/Open_Assistant_Chains_German_Translation
--- language: - en - de license: apache-2.0 size_categories: - 10K<n<100K task_categories: - conversational - text-generation pretty_name: OpenAssistant Conversation Chains - With German Translation tags: - human-feedback configs: - config_name: default data_files: - split: train_english path: data/train_english-* - split: train_german path: data/train_german-* dataset_info: features: - name: conversation_id dtype: string - name: user_id dtype: string - name: created_date dtype: string - name: messages list: - name: content dtype: string - name: role dtype: string - name: lang_original dtype: string - name: could_be_code dtype: bool splits: - name: train_english num_bytes: 29675151 num_examples: 18192 - name: train_german num_bytes: 28931906 num_examples: 18192 download_size: 21854409 dataset_size: 58607057 --- # Dataset Card for OpenAssistant Conversation Chains - With German Translation ## Dataset description This dataset is derived from [OpenAssistant Conversation Chains](https://huggingface.co/datasets/A-Roucher/Open_Assistant_Conversation_Chains), which is a reformatting of [OpenAssistant Conversations (OASST1)](https://huggingface.co/datasets/OpenAssistant/oasst1), which is itself > a human-generated, human-annotated assistant-style conversation corpus consisting of 161,443 messages in 35 different languages, annotated with 461,292 quality ratings, resulting in over 10,000 fully annotated conversation trees. The corpus is a product of a worldwide crowd-sourcing effort involving over 13,500 volunteers. It was modified from the original by adding a German translation of each conversation, generated with [Helsinki-NLP/opus-mt-en-de](https://huggingface.co/Helsinki-NLP/opus-mt-en-de). #### Who are the editors? Aymeric Roucher, Hugging Face. 
### Example ```` { 'conversation_id': '6708c47f-05c9-4346-b3d2-40b2bd24fde4', 'user_id': '2c96e467-66f0-4be7-9693-bda51356a424', 'created_date': '2023-02-06T18:48:49.391686+00:00', 'messages_original': [ {'content': 'Can you write a short introduction about the relevance of the term "monopsony" in economics? Please use examples related to potential monopsonies in the labour market and cite relevant research.', 'role': 'user'}, {'content': '"Monopsony" refers to a market structure where there is only one buyer for a particular good or service. In economics, this term is particularly relevant in the labor market, where a monopsony employer has significant power over the wages and working conditions of their employees. The presence of a monopsony can result in lower wages and reduced employment opportunities for workers, as the employer has little incentive to increase wages or provide better working conditions.\n\nRecent research has identified potential monopsonies in industries such as retail and fast food, where a few large companies control a significant portion of the market (Bivens & Mishel, 2013). In these industries, workers often face low wages, limited benefits, and reduced bargaining power, leading to a situation where they are dependent on the employer for their livelihood. This dependence can result in further suppression of wages and a decline in working conditions.\n\nOverall, the concept of monopsony is essential to understanding the dynamics of labor markets and the impact of market power on workers. Further research is needed to understand the extent and impact of monopsonies on the economy and to develop policies to address this issue.\n\nReferences:\nBivens, J., & Mishel, L. (2013). The Pay of Corporate Executives and Financial Professionals as Evidence of Rents in Top 1 Percent Incomes. 
Journal of Economic Perspectives, 27(3), 57-78.', 'role': 'assistant'}, {'content': 'Now explain it to a dog', 'role': 'user'} ], 'lang_original': 'en', 'messages_german': [ { "content": "Können Sie eine kurze Einführung über die Relevanz des Begriffs \"Monopsonie\" in der Wirtschaft schreiben? Bitte verwenden Sie Beispiele für mögliche Monopsonien auf dem Arbeitsmarkt und zitieren Sie relevante Forschung.", "role": "user" }, { "content": "\"Monopsony\" bezieht sich auf eine Marktstruktur, in der es nur einen Käufer für ein bestimmtes Gut oder eine bestimmte Dienstleistung gibt. In der Wirtschaft ist dieser Begriff besonders relevant auf dem Arbeitsmarkt, wo ein Monopsony Arbeitgeber über die Löhne und Arbeitsbedingungen ihrer Mitarbeiter verfügt. Das Vorhandensein eines Monopsony kann zu niedrigeren Löhnen und reduzierten Beschäftigungsmöglichkeiten für Arbeitnehmer führen, da der Arbeitgeber wenig Anreiz hat, Löhne zu erhöhen oder bessere Arbeitsbedingungen zu bieten. Neuere Forschung hat mögliche Monopsonies in Branchen wie Einzelhandel und Fast Food identifiziert, wo ein paar große Unternehmen einen erheblichen Teil des Marktes kontrollieren (Bivens & Mishel, 2013). In diesen Branchen ist das Konzept der Monopsony oft mit niedrigen Löhnen, begrenzten Vorteilen und reduzierten Verhandlungsmacht, was zu einer Situation führt, in der sie vom Arbeitgeber für ihren Lebensunterhalt abhängig sind. Diese Abhängigkeit kann zu einer weiteren Unterdrückung der Löhne und einem Rückgang der Arbeitsbedingungen führen. Insgesamt ist das Konzept der Monopsony wesentlich, um die Dynamik der Arbeitsmärkte und die Auswirkungen der Marktmacht auf die Arbeitnehmer zu verstehen. Weitere Forschung ist erforderlich, um das Ausmaß und die Auswirkungen der Monopsonies auf die Wirtschaft zu verstehen und zu entwickeln.", "role": "assistant" }, { "content": "Nun erklären Sie es einem Hund", "role": "user" } ] } ````
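Since `train_english` and `train_german` each contain 18,192 examples, the two splits can be re-joined client-side. A minimal sketch, assuming (as the card's description implies, but does not state outright) that both splits share the same `conversation_id` values:

```python
def pair_translations(english_rows, german_rows):
    """Pair each English conversation with its German counterpart by
    conversation_id. Assumes both splits carry matching ids; rows
    without a counterpart are dropped."""
    german_by_id = {row["conversation_id"]: row for row in german_rows}
    return [
        (row, german_by_id[row["conversation_id"]])
        for row in english_rows
        if row["conversation_id"] in german_by_id
    ]
```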
yassineechahboun9/INSUSTRY
--- license: llama2 ---
semeru/code-code-CodeCompletion-TokenLevel-Java
--- license: mit Programminglanguage: "Java" version: "N/A" Date: "From paper: https://homepages.inf.ed.ac.uk/csutton/publications/msr2013.pdf (2013 - paper release date)" Contaminated: "Very Likely" Size: "Standard Tokenizer (TreeSitter)" --- ### Dataset is imported from CodeXGLUE and pre-processed using their script. # Where to find in Semeru: The dataset can be found at /nfs/semeru/semeru_datasets/code_xglue/code-to-code/CodeCompletion-token/dataset/javaCorpus in Semeru # CodeXGLUE -- Code Completion (token level) **Update 2021.07.30:** We update the code completion dataset with literals normalized to avoid sensitive information. Here is the introduction and pipeline for the token-level code completion task. ## Task Definition Predict the next code token given the context of previous tokens. Models are evaluated by token-level accuracy. Code completion is one of the most widely used features in software development through IDEs. An effective code completion tool could improve software developers' productivity. We provide code completion evaluation tasks at two granularities -- token level and line level. Here we introduce token-level code completion. The token-level task is analogous to language modeling: models should be able to predict the next token of arbitrary type. ## Dataset The dataset is in Java. ### Dependency - javalang == 0.13.0 ### Github Java Corpus We use the Java corpus dataset mined by Allamanis and Sutton in their MSR 2013 paper [Mining Source Code Repositories at Massive Scale using Language Modeling](https://homepages.inf.ed.ac.uk/csutton/publications/msr2013.pdf). We follow the same split and preprocessing as in Karampatsis's ICSE 2020 paper [Big Code != Big Vocabulary: Open-Vocabulary Models for Source Code](http://homepages.inf.ed.ac.uk/s1467463/documents/icse20-main-1325.pdf). ### Data Format The code corpus is saved in txt files; each line is a tokenized code snippet: ``` <s> from __future__ import unicode_literals <EOL> from django . 
db import models , migrations <EOL> class Migration ( migrations . Migration ) : <EOL> dependencies = [ <EOL> ] <EOL> operations = [ <EOL> migrations . CreateModel ( <EOL> name = '<STR_LIT>' , <EOL> fields = [ <EOL> ( '<STR_LIT:id>' , models . AutoField ( verbose_name = '<STR_LIT>' , serialize = False , auto_created = True , primary_key = True ) ) , <EOL> ( '<STR_LIT:name>' , models . CharField ( help_text = b'<STR_LIT>' , max_length = <NUM_LIT> ) ) , <EOL> ( '<STR_LIT:image>' , models . ImageField ( help_text = b'<STR_LIT>' , null = True , upload_to = b'<STR_LIT>' , blank = True ) ) , <EOL> ] , <EOL> options = { <EOL> '<STR_LIT>' : ( '<STR_LIT:name>' , ) , <EOL> '<STR_LIT>' : '<STR_LIT>' , <EOL> } , <EOL> bases = ( models . Model , ) , <EOL> ) , <EOL> ] </s> ``` ### Data Statistics Data statistics of the Github Java Corpus dataset are shown in the table below: | Data Split | #Files | #Tokens | | ----------- | :--------: | :---------: | | Train | 12,934 | 15.7M | | Dev | 7,176 | 3.8M | | Test | 8,268 | 5.3M |
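Token-level accuracy, the metric named above, can be made concrete with a simple per-position comparison. CodeXGLUE ships its own evaluator script; this minimal stand-in is only a sketch of the idea:

```python
def token_level_accuracy(predicted, gold):
    """Fraction of positions where the predicted token equals the
    reference token; positions past the end of the shorter sequence
    count as wrong."""
    if not gold:
        return 0.0
    correct = sum(p == g for p, g in zip(predicted, gold))
    return correct / len(gold)

# Two of three reference tokens predicted correctly.
acc = token_level_accuracy(
    ["db", "import", "models"],
    ["db", "import", "migrations"],
)
```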
atmallen/qm_bob_easy_2_mixture_1.0e
--- configs: - config_name: default data_files: - split: train path: data/train-* - split: validation path: data/validation-* - split: test path: data/test-* dataset_info: features: - name: alice_label dtype: bool - name: bob_label dtype: bool - name: difficulty dtype: int64 - name: statement dtype: string - name: choices sequence: string - name: character dtype: string - name: label dtype: class_label: names: '0': 'False' '1': 'True' splits: - name: train num_bytes: 12520368.5 num_examples: 117117 - name: validation num_bytes: 1221097.5 num_examples: 11279 - name: test num_bytes: 1205746.0 num_examples: 11186 download_size: 3703276 dataset_size: 14947212.0 --- # Dataset Card for "qm_bob_easy_2_mixture_1.0e" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
logikon/oasst1-delib
--- language: - en license: apache-2.0 configs: - config_name: default data_files: - split: train path: data/train-* - split: validation path: data/validation-* dataset_info: features: - name: message_id dtype: string - name: parent_id dtype: string - name: user_id dtype: string - name: created_date dtype: string - name: text dtype: string - name: role dtype: string - name: lang dtype: string - name: review_count dtype: int32 - name: review_result dtype: bool - name: deleted dtype: bool - name: rank dtype: float64 - name: synthetic dtype: bool - name: model_name dtype: 'null' - name: detoxify struct: - name: identity_attack dtype: float64 - name: insult dtype: float64 - name: obscene dtype: float64 - name: severe_toxicity dtype: float64 - name: sexual_explicit dtype: float64 - name: threat dtype: float64 - name: toxicity dtype: float64 - name: message_tree_id dtype: string - name: tree_state dtype: string - name: emojis struct: - name: count sequence: int32 - name: name sequence: string - name: labels struct: - name: count sequence: int32 - name: name sequence: string - name: value sequence: float64 - name: history dtype: string splits: - name: train num_bytes: 278875 num_examples: 90 - name: validation num_bytes: 18290 num_examples: 6 download_size: 208227 dataset_size: 297165 --- # Dataset Card for "oasst1-delib" Subset of `OpenAssistant/oasst1` with English chat messages that (are supposed to) contain reasoning: * filtered by keyword "pros" * includes chat history as extra feature Dataset creation is documented in https://github.com/logikon-ai/deliberation-datasets/blob/main/notebooks/create_oasst1_delib.ipynb
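The keyword filtering described above can be approximated as follows. The exact logic lives in the linked notebook; this sketch only mirrors the card's description:

```python
def select_deliberation_messages(rows, keyword="pros", lang="en"):
    """Keep messages in the given language whose text contains the
    keyword (case-insensitive). Approximates the card's description;
    see the linked notebook for the actual creation code."""
    return [
        row for row in rows
        if row.get("lang") == lang and keyword in row.get("text", "").lower()
    ]
```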
heliosprime/twitter_dataset_1713101663
--- dataset_info: features: - name: id dtype: string - name: tweet_content dtype: string - name: user_name dtype: string - name: user_id dtype: string - name: created_at dtype: string - name: url dtype: string - name: favourite_count dtype: int64 - name: scraped_at dtype: string - name: image_urls dtype: string splits: - name: train num_bytes: 11500 num_examples: 33 download_size: 12604 dataset_size: 11500 configs: - config_name: default data_files: - split: train path: data/train-* --- # Dataset Card for "twitter_dataset_1713101663" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
MohammedNasri/wav_to_vec_common_voice_fleurs_without_diacs
--- dataset_info: features: - name: input_values sequence: float32 - name: input_length dtype: int64 - name: labels sequence: int64 splits: - name: train num_bytes: 11806675364 num_examples: 40880 - name: test num_bytes: 2889905492 num_examples: 10440 download_size: 14014156133 dataset_size: 14696580856 --- # Dataset Card for "wav_to_vec_common_voice_fleurs_without_diacs" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
open-llm-leaderboard/details_xxyyy123__mc_data_30k_from_platpus_orca_7b_10k_v1_lora_qk_rank14_v2
--- pretty_name: Evaluation run of xxyyy123/mc_data_30k_from_platpus_orca_7b_10k_v1_lora_qk_rank14_v2 dataset_summary: "Dataset automatically created during the evaluation run of model\ \ [xxyyy123/mc_data_30k_from_platpus_orca_7b_10k_v1_lora_qk_rank14_v2](https://huggingface.co/xxyyy123/mc_data_30k_from_platpus_orca_7b_10k_v1_lora_qk_rank14_v2)\ \ on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\ \nThe dataset is composed of 61 configurations, each one corresponding to one of the\ \ evaluated tasks.\n\nThe dataset has been created from 1 run(s). Each run can be\ \ found as a specific split in each configuration, the split being named using the\ \ timestamp of the run. The \"train\" split is always pointing to the latest results.\n\ \nAn additional configuration \"results\" stores all the aggregated results of the\ \ run (and is used to compute and display the aggregated metrics on the [Open LLM\ \ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\ \nTo load the details from a run, you can for instance do the following:\n```python\n\ from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_xxyyy123__mc_data_30k_from_platpus_orca_7b_10k_v1_lora_qk_rank14_v2\"\ ,\n\t\"harness_truthfulqa_mc_0\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\ \nThese are the [latest results from run 2023-09-03T18:41:04.280567](https://huggingface.co/datasets/open-llm-leaderboard/details_xxyyy123__mc_data_30k_from_platpus_orca_7b_10k_v1_lora_qk_rank14_v2/blob/main/results_2023-09-03T18%3A41%3A04.280567.json)(note\ \ that there might be results for other tasks in the repos if successive evals didn't\ \ cover the same tasks. 
You find each in the results and the \"latest\" split for\ \ each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.5159772470651705,\n\ \ \"acc_stderr\": 0.03490050368845693,\n \"acc_norm\": 0.5196198874675843,\n\ \ \"acc_norm_stderr\": 0.03488383911166199,\n \"mc1\": 0.3574051407588739,\n\ \ \"mc1_stderr\": 0.0167765996767294,\n \"mc2\": 0.5084843623108531,\n\ \ \"mc2_stderr\": 0.015788699144390992\n },\n \"harness|arc:challenge|25\"\ : {\n \"acc\": 0.5537542662116041,\n \"acc_stderr\": 0.014526705548539982,\n\ \ \"acc_norm\": 0.5810580204778157,\n \"acc_norm_stderr\": 0.014418106953639013\n\ \ },\n \"harness|hellaswag|10\": {\n \"acc\": 0.6132244572794264,\n\ \ \"acc_stderr\": 0.004860162076330978,\n \"acc_norm\": 0.8008364867556264,\n\ \ \"acc_norm_stderr\": 0.0039855506403304606\n },\n \"harness|hendrycksTest-abstract_algebra|5\"\ : {\n \"acc\": 0.28,\n \"acc_stderr\": 0.04512608598542128,\n \ \ \"acc_norm\": 0.28,\n \"acc_norm_stderr\": 0.04512608598542128\n \ \ },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.48148148148148145,\n\ \ \"acc_stderr\": 0.043163785995113245,\n \"acc_norm\": 0.48148148148148145,\n\ \ \"acc_norm_stderr\": 0.043163785995113245\n },\n \"harness|hendrycksTest-astronomy|5\"\ : {\n \"acc\": 0.47368421052631576,\n \"acc_stderr\": 0.04063302731486671,\n\ \ \"acc_norm\": 0.47368421052631576,\n \"acc_norm_stderr\": 0.04063302731486671\n\ \ },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.5,\n\ \ \"acc_stderr\": 0.050251890762960605,\n \"acc_norm\": 0.5,\n \ \ \"acc_norm_stderr\": 0.050251890762960605\n },\n \"harness|hendrycksTest-clinical_knowledge|5\"\ : {\n \"acc\": 0.6,\n \"acc_stderr\": 0.030151134457776285,\n \ \ \"acc_norm\": 0.6,\n \"acc_norm_stderr\": 0.030151134457776285\n \ \ },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.5625,\n\ \ \"acc_stderr\": 0.04148415739394154,\n \"acc_norm\": 0.5625,\n \ \ \"acc_norm_stderr\": 0.04148415739394154\n },\n 
\"harness|hendrycksTest-college_chemistry|5\"\ : {\n \"acc\": 0.38,\n \"acc_stderr\": 0.04878317312145632,\n \ \ \"acc_norm\": 0.38,\n \"acc_norm_stderr\": 0.04878317312145632\n \ \ },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\"\ : 0.36,\n \"acc_stderr\": 0.048241815132442176,\n \"acc_norm\": 0.36,\n\ \ \"acc_norm_stderr\": 0.048241815132442176\n },\n \"harness|hendrycksTest-college_mathematics|5\"\ : {\n \"acc\": 0.28,\n \"acc_stderr\": 0.04512608598542128,\n \ \ \"acc_norm\": 0.28,\n \"acc_norm_stderr\": 0.04512608598542128\n \ \ },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.4797687861271676,\n\ \ \"acc_stderr\": 0.03809342081273957,\n \"acc_norm\": 0.4797687861271676,\n\ \ \"acc_norm_stderr\": 0.03809342081273957\n },\n \"harness|hendrycksTest-college_physics|5\"\ : {\n \"acc\": 0.29411764705882354,\n \"acc_stderr\": 0.04533838195929775,\n\ \ \"acc_norm\": 0.29411764705882354,\n \"acc_norm_stderr\": 0.04533838195929775\n\ \ },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\":\ \ 0.62,\n \"acc_stderr\": 0.048783173121456316,\n \"acc_norm\": 0.62,\n\ \ \"acc_norm_stderr\": 0.048783173121456316\n },\n \"harness|hendrycksTest-conceptual_physics|5\"\ : {\n \"acc\": 0.4851063829787234,\n \"acc_stderr\": 0.032671518489247764,\n\ \ \"acc_norm\": 0.4851063829787234,\n \"acc_norm_stderr\": 0.032671518489247764\n\ \ },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.32456140350877194,\n\ \ \"acc_stderr\": 0.044045561573747664,\n \"acc_norm\": 0.32456140350877194,\n\ \ \"acc_norm_stderr\": 0.044045561573747664\n },\n \"harness|hendrycksTest-electrical_engineering|5\"\ : {\n \"acc\": 0.45517241379310347,\n \"acc_stderr\": 0.04149886942192117,\n\ \ \"acc_norm\": 0.45517241379310347,\n \"acc_norm_stderr\": 0.04149886942192117\n\ \ },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\"\ : 0.30952380952380953,\n \"acc_stderr\": 0.023809523809523867,\n \"\ acc_norm\": 0.30952380952380953,\n 
\"acc_norm_stderr\": 0.023809523809523867\n\ \ },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.2777777777777778,\n\ \ \"acc_stderr\": 0.040061680838488774,\n \"acc_norm\": 0.2777777777777778,\n\ \ \"acc_norm_stderr\": 0.040061680838488774\n },\n \"harness|hendrycksTest-global_facts|5\"\ : {\n \"acc\": 0.37,\n \"acc_stderr\": 0.04852365870939099,\n \ \ \"acc_norm\": 0.37,\n \"acc_norm_stderr\": 0.04852365870939099\n \ \ },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.5612903225806452,\n\ \ \"acc_stderr\": 0.028229497320317216,\n \"acc_norm\": 0.5612903225806452,\n\ \ \"acc_norm_stderr\": 0.028229497320317216\n },\n \"harness|hendrycksTest-high_school_chemistry|5\"\ : {\n \"acc\": 0.3842364532019704,\n \"acc_stderr\": 0.0342239856565755,\n\ \ \"acc_norm\": 0.3842364532019704,\n \"acc_norm_stderr\": 0.0342239856565755\n\ \ },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \ \ \"acc\": 0.41,\n \"acc_stderr\": 0.04943110704237102,\n \"acc_norm\"\ : 0.41,\n \"acc_norm_stderr\": 0.04943110704237102\n },\n \"harness|hendrycksTest-high_school_european_history|5\"\ : {\n \"acc\": 0.7090909090909091,\n \"acc_stderr\": 0.03546563019624336,\n\ \ \"acc_norm\": 0.7090909090909091,\n \"acc_norm_stderr\": 0.03546563019624336\n\ \ },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\"\ : 0.6818181818181818,\n \"acc_stderr\": 0.0331847733384533,\n \"acc_norm\"\ : 0.6818181818181818,\n \"acc_norm_stderr\": 0.0331847733384533\n },\n\ \ \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n \ \ \"acc\": 0.7512953367875648,\n \"acc_stderr\": 0.031195840877700286,\n\ \ \"acc_norm\": 0.7512953367875648,\n \"acc_norm_stderr\": 0.031195840877700286\n\ \ },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \ \ \"acc\": 0.48205128205128206,\n \"acc_stderr\": 0.02533466708095495,\n\ \ \"acc_norm\": 0.48205128205128206,\n \"acc_norm_stderr\": 0.02533466708095495\n\ \ },\n 
\"harness|hendrycksTest-high_school_mathematics|5\": {\n \"\ acc\": 0.2518518518518518,\n \"acc_stderr\": 0.02646611753895991,\n \ \ \"acc_norm\": 0.2518518518518518,\n \"acc_norm_stderr\": 0.02646611753895991\n\ \ },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \ \ \"acc\": 0.5126050420168067,\n \"acc_stderr\": 0.03246816765752174,\n \ \ \"acc_norm\": 0.5126050420168067,\n \"acc_norm_stderr\": 0.03246816765752174\n\ \ },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\"\ : 0.3576158940397351,\n \"acc_stderr\": 0.03913453431177258,\n \"\ acc_norm\": 0.3576158940397351,\n \"acc_norm_stderr\": 0.03913453431177258\n\ \ },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\"\ : 0.7192660550458716,\n \"acc_stderr\": 0.019266055045871623,\n \"\ acc_norm\": 0.7192660550458716,\n \"acc_norm_stderr\": 0.019266055045871623\n\ \ },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\"\ : 0.375,\n \"acc_stderr\": 0.033016908987210894,\n \"acc_norm\": 0.375,\n\ \ \"acc_norm_stderr\": 0.033016908987210894\n },\n \"harness|hendrycksTest-high_school_us_history|5\"\ : {\n \"acc\": 0.696078431372549,\n \"acc_stderr\": 0.03228210387037892,\n\ \ \"acc_norm\": 0.696078431372549,\n \"acc_norm_stderr\": 0.03228210387037892\n\ \ },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"\ acc\": 0.7215189873417721,\n \"acc_stderr\": 0.029178682304842548,\n \ \ \"acc_norm\": 0.7215189873417721,\n \"acc_norm_stderr\": 0.029178682304842548\n\ \ },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.5964125560538116,\n\ \ \"acc_stderr\": 0.03292802819330314,\n \"acc_norm\": 0.5964125560538116,\n\ \ \"acc_norm_stderr\": 0.03292802819330314\n },\n \"harness|hendrycksTest-human_sexuality|5\"\ : {\n \"acc\": 0.5954198473282443,\n \"acc_stderr\": 0.043046937953806645,\n\ \ \"acc_norm\": 0.5954198473282443,\n \"acc_norm_stderr\": 0.043046937953806645\n\ \ },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\":\ \ 
0.6446280991735537,\n \"acc_stderr\": 0.0436923632657398,\n \"acc_norm\"\ : 0.6446280991735537,\n \"acc_norm_stderr\": 0.0436923632657398\n },\n\ \ \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.6481481481481481,\n\ \ \"acc_stderr\": 0.046166311118017125,\n \"acc_norm\": 0.6481481481481481,\n\ \ \"acc_norm_stderr\": 0.046166311118017125\n },\n \"harness|hendrycksTest-logical_fallacies|5\"\ : {\n \"acc\": 0.5705521472392638,\n \"acc_stderr\": 0.03889066619112723,\n\ \ \"acc_norm\": 0.5705521472392638,\n \"acc_norm_stderr\": 0.03889066619112723\n\ \ },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.375,\n\ \ \"acc_stderr\": 0.04595091388086298,\n \"acc_norm\": 0.375,\n \ \ \"acc_norm_stderr\": 0.04595091388086298\n },\n \"harness|hendrycksTest-management|5\"\ : {\n \"acc\": 0.7087378640776699,\n \"acc_stderr\": 0.04498676320572924,\n\ \ \"acc_norm\": 0.7087378640776699,\n \"acc_norm_stderr\": 0.04498676320572924\n\ \ },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.7777777777777778,\n\ \ \"acc_stderr\": 0.027236013946196704,\n \"acc_norm\": 0.7777777777777778,\n\ \ \"acc_norm_stderr\": 0.027236013946196704\n },\n \"harness|hendrycksTest-medical_genetics|5\"\ : {\n \"acc\": 0.55,\n \"acc_stderr\": 0.05,\n \"acc_norm\"\ : 0.55,\n \"acc_norm_stderr\": 0.05\n },\n \"harness|hendrycksTest-miscellaneous|5\"\ : {\n \"acc\": 0.7100893997445722,\n \"acc_stderr\": 0.01622501794477098,\n\ \ \"acc_norm\": 0.7100893997445722,\n \"acc_norm_stderr\": 0.01622501794477098\n\ \ },\n \"harness|hendrycksTest-moral_disputes|5\": {\n \"acc\": 0.5809248554913294,\n\ \ \"acc_stderr\": 0.02656417811142262,\n \"acc_norm\": 0.5809248554913294,\n\ \ \"acc_norm_stderr\": 0.02656417811142262\n },\n \"harness|hendrycksTest-moral_scenarios|5\"\ : {\n \"acc\": 0.264804469273743,\n \"acc_stderr\": 0.014756906483260664,\n\ \ \"acc_norm\": 0.264804469273743,\n \"acc_norm_stderr\": 0.014756906483260664\n\ \ },\n \"harness|hendrycksTest-nutrition|5\": {\n \"acc\": 
0.5261437908496732,\n\ \ \"acc_stderr\": 0.028590752958852394,\n \"acc_norm\": 0.5261437908496732,\n\ \ \"acc_norm_stderr\": 0.028590752958852394\n },\n \"harness|hendrycksTest-philosophy|5\"\ : {\n \"acc\": 0.5884244372990354,\n \"acc_stderr\": 0.027950481494401266,\n\ \ \"acc_norm\": 0.5884244372990354,\n \"acc_norm_stderr\": 0.027950481494401266\n\ \ },\n \"harness|hendrycksTest-prehistory|5\": {\n \"acc\": 0.5648148148148148,\n\ \ \"acc_stderr\": 0.027586006221607708,\n \"acc_norm\": 0.5648148148148148,\n\ \ \"acc_norm_stderr\": 0.027586006221607708\n },\n \"harness|hendrycksTest-professional_accounting|5\"\ : {\n \"acc\": 0.375886524822695,\n \"acc_stderr\": 0.028893955412115882,\n\ \ \"acc_norm\": 0.375886524822695,\n \"acc_norm_stderr\": 0.028893955412115882\n\ \ },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.38396349413298564,\n\ \ \"acc_stderr\": 0.01242158783313423,\n \"acc_norm\": 0.38396349413298564,\n\ \ \"acc_norm_stderr\": 0.01242158783313423\n },\n \"harness|hendrycksTest-professional_medicine|5\"\ : {\n \"acc\": 0.4889705882352941,\n \"acc_stderr\": 0.03036544647727568,\n\ \ \"acc_norm\": 0.4889705882352941,\n \"acc_norm_stderr\": 0.03036544647727568\n\ \ },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"\ acc\": 0.4869281045751634,\n \"acc_stderr\": 0.020220920829626912,\n \ \ \"acc_norm\": 0.4869281045751634,\n \"acc_norm_stderr\": 0.020220920829626912\n\ \ },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.5909090909090909,\n\ \ \"acc_stderr\": 0.04709306978661896,\n \"acc_norm\": 0.5909090909090909,\n\ \ \"acc_norm_stderr\": 0.04709306978661896\n },\n \"harness|hendrycksTest-security_studies|5\"\ : {\n \"acc\": 0.6122448979591837,\n \"acc_stderr\": 0.031192230726795656,\n\ \ \"acc_norm\": 0.6122448979591837,\n \"acc_norm_stderr\": 0.031192230726795656\n\ \ },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.6119402985074627,\n\ \ \"acc_stderr\": 0.034457899643627506,\n \"acc_norm\": 
0.6119402985074627,\n\ \ \"acc_norm_stderr\": 0.034457899643627506\n },\n \"harness|hendrycksTest-us_foreign_policy|5\"\ : {\n \"acc\": 0.77,\n \"acc_stderr\": 0.042295258468165065,\n \ \ \"acc_norm\": 0.77,\n \"acc_norm_stderr\": 0.042295258468165065\n \ \ },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.40963855421686746,\n\ \ \"acc_stderr\": 0.03828401115079022,\n \"acc_norm\": 0.40963855421686746,\n\ \ \"acc_norm_stderr\": 0.03828401115079022\n },\n \"harness|hendrycksTest-world_religions|5\"\ : {\n \"acc\": 0.695906432748538,\n \"acc_stderr\": 0.03528211258245229,\n\ \ \"acc_norm\": 0.695906432748538,\n \"acc_norm_stderr\": 0.03528211258245229\n\ \ },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.3574051407588739,\n\ \ \"mc1_stderr\": 0.0167765996767294,\n \"mc2\": 0.5084843623108531,\n\ \ \"mc2_stderr\": 0.015788699144390992\n }\n}\n```" repo_url: https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v0.6 leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard point_of_contact: clementine@hf.co configs: - config_name: harness_arc_challenge_25 data_files: - split: 2023_09_03T18_41_04.280567 path: - '**/details_harness|arc:challenge|25_2023-09-03T18:41:04.280567.parquet' - split: latest path: - '**/details_harness|arc:challenge|25_2023-09-03T18:41:04.280567.parquet' - config_name: harness_hellaswag_10 data_files: - split: 2023_09_03T18_41_04.280567 path: - '**/details_harness|hellaswag|10_2023-09-03T18:41:04.280567.parquet' - split: latest path: - '**/details_harness|hellaswag|10_2023-09-03T18:41:04.280567.parquet' - config_name: harness_hendrycksTest_5 data_files: - split: 2023_09_03T18_41_04.280567 path: - '**/details_harness|hendrycksTest-abstract_algebra|5_2023-09-03T18:41:04.280567.parquet' - '**/details_harness|hendrycksTest-anatomy|5_2023-09-03T18:41:04.280567.parquet' - '**/details_harness|hendrycksTest-astronomy|5_2023-09-03T18:41:04.280567.parquet' -
'**/details_harness|hendrycksTest-business_ethics|5_2023-09-03T18:41:04.280567.parquet' - '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-09-03T18:41:04.280567.parquet' - '**/details_harness|hendrycksTest-college_biology|5_2023-09-03T18:41:04.280567.parquet' - '**/details_harness|hendrycksTest-college_chemistry|5_2023-09-03T18:41:04.280567.parquet' - '**/details_harness|hendrycksTest-college_computer_science|5_2023-09-03T18:41:04.280567.parquet' - '**/details_harness|hendrycksTest-college_mathematics|5_2023-09-03T18:41:04.280567.parquet' - '**/details_harness|hendrycksTest-college_medicine|5_2023-09-03T18:41:04.280567.parquet' - '**/details_harness|hendrycksTest-college_physics|5_2023-09-03T18:41:04.280567.parquet' - '**/details_harness|hendrycksTest-computer_security|5_2023-09-03T18:41:04.280567.parquet' - '**/details_harness|hendrycksTest-conceptual_physics|5_2023-09-03T18:41:04.280567.parquet' - '**/details_harness|hendrycksTest-econometrics|5_2023-09-03T18:41:04.280567.parquet' - '**/details_harness|hendrycksTest-electrical_engineering|5_2023-09-03T18:41:04.280567.parquet' - '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-09-03T18:41:04.280567.parquet' - '**/details_harness|hendrycksTest-formal_logic|5_2023-09-03T18:41:04.280567.parquet' - '**/details_harness|hendrycksTest-global_facts|5_2023-09-03T18:41:04.280567.parquet' - '**/details_harness|hendrycksTest-high_school_biology|5_2023-09-03T18:41:04.280567.parquet' - '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-09-03T18:41:04.280567.parquet' - '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-09-03T18:41:04.280567.parquet' - '**/details_harness|hendrycksTest-high_school_european_history|5_2023-09-03T18:41:04.280567.parquet' - '**/details_harness|hendrycksTest-high_school_geography|5_2023-09-03T18:41:04.280567.parquet' - '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-09-03T18:41:04.280567.parquet' - 
'**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-09-03T18:41:04.280567.parquet' - '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-09-03T18:41:04.280567.parquet' - '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-09-03T18:41:04.280567.parquet' - '**/details_harness|hendrycksTest-high_school_physics|5_2023-09-03T18:41:04.280567.parquet' - '**/details_harness|hendrycksTest-high_school_psychology|5_2023-09-03T18:41:04.280567.parquet' - '**/details_harness|hendrycksTest-high_school_statistics|5_2023-09-03T18:41:04.280567.parquet' - '**/details_harness|hendrycksTest-high_school_us_history|5_2023-09-03T18:41:04.280567.parquet' - '**/details_harness|hendrycksTest-high_school_world_history|5_2023-09-03T18:41:04.280567.parquet' - '**/details_harness|hendrycksTest-human_aging|5_2023-09-03T18:41:04.280567.parquet' - '**/details_harness|hendrycksTest-human_sexuality|5_2023-09-03T18:41:04.280567.parquet' - '**/details_harness|hendrycksTest-international_law|5_2023-09-03T18:41:04.280567.parquet' - '**/details_harness|hendrycksTest-jurisprudence|5_2023-09-03T18:41:04.280567.parquet' - '**/details_harness|hendrycksTest-logical_fallacies|5_2023-09-03T18:41:04.280567.parquet' - '**/details_harness|hendrycksTest-machine_learning|5_2023-09-03T18:41:04.280567.parquet' - '**/details_harness|hendrycksTest-management|5_2023-09-03T18:41:04.280567.parquet' - '**/details_harness|hendrycksTest-marketing|5_2023-09-03T18:41:04.280567.parquet' - '**/details_harness|hendrycksTest-medical_genetics|5_2023-09-03T18:41:04.280567.parquet' - '**/details_harness|hendrycksTest-miscellaneous|5_2023-09-03T18:41:04.280567.parquet' - '**/details_harness|hendrycksTest-moral_disputes|5_2023-09-03T18:41:04.280567.parquet' - '**/details_harness|hendrycksTest-moral_scenarios|5_2023-09-03T18:41:04.280567.parquet' - '**/details_harness|hendrycksTest-nutrition|5_2023-09-03T18:41:04.280567.parquet' - 
'**/details_harness|hendrycksTest-philosophy|5_2023-09-03T18:41:04.280567.parquet' - '**/details_harness|hendrycksTest-prehistory|5_2023-09-03T18:41:04.280567.parquet' - '**/details_harness|hendrycksTest-professional_accounting|5_2023-09-03T18:41:04.280567.parquet' - '**/details_harness|hendrycksTest-professional_law|5_2023-09-03T18:41:04.280567.parquet' - '**/details_harness|hendrycksTest-professional_medicine|5_2023-09-03T18:41:04.280567.parquet' - '**/details_harness|hendrycksTest-professional_psychology|5_2023-09-03T18:41:04.280567.parquet' - '**/details_harness|hendrycksTest-public_relations|5_2023-09-03T18:41:04.280567.parquet' - '**/details_harness|hendrycksTest-security_studies|5_2023-09-03T18:41:04.280567.parquet' - '**/details_harness|hendrycksTest-sociology|5_2023-09-03T18:41:04.280567.parquet' - '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-09-03T18:41:04.280567.parquet' - '**/details_harness|hendrycksTest-virology|5_2023-09-03T18:41:04.280567.parquet' - '**/details_harness|hendrycksTest-world_religions|5_2023-09-03T18:41:04.280567.parquet' - split: latest path: - '**/details_harness|hendrycksTest-abstract_algebra|5_2023-09-03T18:41:04.280567.parquet' - '**/details_harness|hendrycksTest-anatomy|5_2023-09-03T18:41:04.280567.parquet' - '**/details_harness|hendrycksTest-astronomy|5_2023-09-03T18:41:04.280567.parquet' - '**/details_harness|hendrycksTest-business_ethics|5_2023-09-03T18:41:04.280567.parquet' - '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-09-03T18:41:04.280567.parquet' - '**/details_harness|hendrycksTest-college_biology|5_2023-09-03T18:41:04.280567.parquet' - '**/details_harness|hendrycksTest-college_chemistry|5_2023-09-03T18:41:04.280567.parquet' - '**/details_harness|hendrycksTest-college_computer_science|5_2023-09-03T18:41:04.280567.parquet' - '**/details_harness|hendrycksTest-college_mathematics|5_2023-09-03T18:41:04.280567.parquet' - 
'**/details_harness|hendrycksTest-college_medicine|5_2023-09-03T18:41:04.280567.parquet' - '**/details_harness|hendrycksTest-college_physics|5_2023-09-03T18:41:04.280567.parquet' - '**/details_harness|hendrycksTest-computer_security|5_2023-09-03T18:41:04.280567.parquet' - '**/details_harness|hendrycksTest-conceptual_physics|5_2023-09-03T18:41:04.280567.parquet' - '**/details_harness|hendrycksTest-econometrics|5_2023-09-03T18:41:04.280567.parquet' - '**/details_harness|hendrycksTest-electrical_engineering|5_2023-09-03T18:41:04.280567.parquet' - '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-09-03T18:41:04.280567.parquet' - '**/details_harness|hendrycksTest-formal_logic|5_2023-09-03T18:41:04.280567.parquet' - '**/details_harness|hendrycksTest-global_facts|5_2023-09-03T18:41:04.280567.parquet' - '**/details_harness|hendrycksTest-high_school_biology|5_2023-09-03T18:41:04.280567.parquet' - '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-09-03T18:41:04.280567.parquet' - '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-09-03T18:41:04.280567.parquet' - '**/details_harness|hendrycksTest-high_school_european_history|5_2023-09-03T18:41:04.280567.parquet' - '**/details_harness|hendrycksTest-high_school_geography|5_2023-09-03T18:41:04.280567.parquet' - '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-09-03T18:41:04.280567.parquet' - '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-09-03T18:41:04.280567.parquet' - '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-09-03T18:41:04.280567.parquet' - '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-09-03T18:41:04.280567.parquet' - '**/details_harness|hendrycksTest-high_school_physics|5_2023-09-03T18:41:04.280567.parquet' - '**/details_harness|hendrycksTest-high_school_psychology|5_2023-09-03T18:41:04.280567.parquet' - 
'**/details_harness|hendrycksTest-high_school_statistics|5_2023-09-03T18:41:04.280567.parquet' - '**/details_harness|hendrycksTest-high_school_us_history|5_2023-09-03T18:41:04.280567.parquet' - '**/details_harness|hendrycksTest-high_school_world_history|5_2023-09-03T18:41:04.280567.parquet' - '**/details_harness|hendrycksTest-human_aging|5_2023-09-03T18:41:04.280567.parquet' - '**/details_harness|hendrycksTest-human_sexuality|5_2023-09-03T18:41:04.280567.parquet' - '**/details_harness|hendrycksTest-international_law|5_2023-09-03T18:41:04.280567.parquet' - '**/details_harness|hendrycksTest-jurisprudence|5_2023-09-03T18:41:04.280567.parquet' - '**/details_harness|hendrycksTest-logical_fallacies|5_2023-09-03T18:41:04.280567.parquet' - '**/details_harness|hendrycksTest-machine_learning|5_2023-09-03T18:41:04.280567.parquet' - '**/details_harness|hendrycksTest-management|5_2023-09-03T18:41:04.280567.parquet' - '**/details_harness|hendrycksTest-marketing|5_2023-09-03T18:41:04.280567.parquet' - '**/details_harness|hendrycksTest-medical_genetics|5_2023-09-03T18:41:04.280567.parquet' - '**/details_harness|hendrycksTest-miscellaneous|5_2023-09-03T18:41:04.280567.parquet' - '**/details_harness|hendrycksTest-moral_disputes|5_2023-09-03T18:41:04.280567.parquet' - '**/details_harness|hendrycksTest-moral_scenarios|5_2023-09-03T18:41:04.280567.parquet' - '**/details_harness|hendrycksTest-nutrition|5_2023-09-03T18:41:04.280567.parquet' - '**/details_harness|hendrycksTest-philosophy|5_2023-09-03T18:41:04.280567.parquet' - '**/details_harness|hendrycksTest-prehistory|5_2023-09-03T18:41:04.280567.parquet' - '**/details_harness|hendrycksTest-professional_accounting|5_2023-09-03T18:41:04.280567.parquet' - '**/details_harness|hendrycksTest-professional_law|5_2023-09-03T18:41:04.280567.parquet' - '**/details_harness|hendrycksTest-professional_medicine|5_2023-09-03T18:41:04.280567.parquet' - '**/details_harness|hendrycksTest-professional_psychology|5_2023-09-03T18:41:04.280567.parquet' - 
'**/details_harness|hendrycksTest-public_relations|5_2023-09-03T18:41:04.280567.parquet' - '**/details_harness|hendrycksTest-security_studies|5_2023-09-03T18:41:04.280567.parquet' - '**/details_harness|hendrycksTest-sociology|5_2023-09-03T18:41:04.280567.parquet' - '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-09-03T18:41:04.280567.parquet' - '**/details_harness|hendrycksTest-virology|5_2023-09-03T18:41:04.280567.parquet' - '**/details_harness|hendrycksTest-world_religions|5_2023-09-03T18:41:04.280567.parquet' - config_name: harness_hendrycksTest_abstract_algebra_5 data_files: - split: 2023_09_03T18_41_04.280567 path: - '**/details_harness|hendrycksTest-abstract_algebra|5_2023-09-03T18:41:04.280567.parquet' - split: latest path: - '**/details_harness|hendrycksTest-abstract_algebra|5_2023-09-03T18:41:04.280567.parquet' - config_name: harness_hendrycksTest_anatomy_5 data_files: - split: 2023_09_03T18_41_04.280567 path: - '**/details_harness|hendrycksTest-anatomy|5_2023-09-03T18:41:04.280567.parquet' - split: latest path: - '**/details_harness|hendrycksTest-anatomy|5_2023-09-03T18:41:04.280567.parquet' - config_name: harness_hendrycksTest_astronomy_5 data_files: - split: 2023_09_03T18_41_04.280567 path: - '**/details_harness|hendrycksTest-astronomy|5_2023-09-03T18:41:04.280567.parquet' - split: latest path: - '**/details_harness|hendrycksTest-astronomy|5_2023-09-03T18:41:04.280567.parquet' - config_name: harness_hendrycksTest_business_ethics_5 data_files: - split: 2023_09_03T18_41_04.280567 path: - '**/details_harness|hendrycksTest-business_ethics|5_2023-09-03T18:41:04.280567.parquet' - split: latest path: - '**/details_harness|hendrycksTest-business_ethics|5_2023-09-03T18:41:04.280567.parquet' - config_name: harness_hendrycksTest_clinical_knowledge_5 data_files: - split: 2023_09_03T18_41_04.280567 path: - '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-09-03T18:41:04.280567.parquet' - split: latest path: - 
'**/details_harness|hendrycksTest-clinical_knowledge|5_2023-09-03T18:41:04.280567.parquet' - config_name: harness_hendrycksTest_college_biology_5 data_files: - split: 2023_09_03T18_41_04.280567 path: - '**/details_harness|hendrycksTest-college_biology|5_2023-09-03T18:41:04.280567.parquet' - split: latest path: - '**/details_harness|hendrycksTest-college_biology|5_2023-09-03T18:41:04.280567.parquet' - config_name: harness_hendrycksTest_college_chemistry_5 data_files: - split: 2023_09_03T18_41_04.280567 path: - '**/details_harness|hendrycksTest-college_chemistry|5_2023-09-03T18:41:04.280567.parquet' - split: latest path: - '**/details_harness|hendrycksTest-college_chemistry|5_2023-09-03T18:41:04.280567.parquet' - config_name: harness_hendrycksTest_college_computer_science_5 data_files: - split: 2023_09_03T18_41_04.280567 path: - '**/details_harness|hendrycksTest-college_computer_science|5_2023-09-03T18:41:04.280567.parquet' - split: latest path: - '**/details_harness|hendrycksTest-college_computer_science|5_2023-09-03T18:41:04.280567.parquet' - config_name: harness_hendrycksTest_college_mathematics_5 data_files: - split: 2023_09_03T18_41_04.280567 path: - '**/details_harness|hendrycksTest-college_mathematics|5_2023-09-03T18:41:04.280567.parquet' - split: latest path: - '**/details_harness|hendrycksTest-college_mathematics|5_2023-09-03T18:41:04.280567.parquet' - config_name: harness_hendrycksTest_college_medicine_5 data_files: - split: 2023_09_03T18_41_04.280567 path: - '**/details_harness|hendrycksTest-college_medicine|5_2023-09-03T18:41:04.280567.parquet' - split: latest path: - '**/details_harness|hendrycksTest-college_medicine|5_2023-09-03T18:41:04.280567.parquet' - config_name: harness_hendrycksTest_college_physics_5 data_files: - split: 2023_09_03T18_41_04.280567 path: - '**/details_harness|hendrycksTest-college_physics|5_2023-09-03T18:41:04.280567.parquet' - split: latest path: - 
'**/details_harness|hendrycksTest-college_physics|5_2023-09-03T18:41:04.280567.parquet' - config_name: harness_hendrycksTest_computer_security_5 data_files: - split: 2023_09_03T18_41_04.280567 path: - '**/details_harness|hendrycksTest-computer_security|5_2023-09-03T18:41:04.280567.parquet' - split: latest path: - '**/details_harness|hendrycksTest-computer_security|5_2023-09-03T18:41:04.280567.parquet' - config_name: harness_hendrycksTest_conceptual_physics_5 data_files: - split: 2023_09_03T18_41_04.280567 path: - '**/details_harness|hendrycksTest-conceptual_physics|5_2023-09-03T18:41:04.280567.parquet' - split: latest path: - '**/details_harness|hendrycksTest-conceptual_physics|5_2023-09-03T18:41:04.280567.parquet' - config_name: harness_hendrycksTest_econometrics_5 data_files: - split: 2023_09_03T18_41_04.280567 path: - '**/details_harness|hendrycksTest-econometrics|5_2023-09-03T18:41:04.280567.parquet' - split: latest path: - '**/details_harness|hendrycksTest-econometrics|5_2023-09-03T18:41:04.280567.parquet' - config_name: harness_hendrycksTest_electrical_engineering_5 data_files: - split: 2023_09_03T18_41_04.280567 path: - '**/details_harness|hendrycksTest-electrical_engineering|5_2023-09-03T18:41:04.280567.parquet' - split: latest path: - '**/details_harness|hendrycksTest-electrical_engineering|5_2023-09-03T18:41:04.280567.parquet' - config_name: harness_hendrycksTest_elementary_mathematics_5 data_files: - split: 2023_09_03T18_41_04.280567 path: - '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-09-03T18:41:04.280567.parquet' - split: latest path: - '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-09-03T18:41:04.280567.parquet' - config_name: harness_hendrycksTest_formal_logic_5 data_files: - split: 2023_09_03T18_41_04.280567 path: - '**/details_harness|hendrycksTest-formal_logic|5_2023-09-03T18:41:04.280567.parquet' - split: latest path: - '**/details_harness|hendrycksTest-formal_logic|5_2023-09-03T18:41:04.280567.parquet' - 
config_name: harness_hendrycksTest_global_facts_5 data_files: - split: 2023_09_03T18_41_04.280567 path: - '**/details_harness|hendrycksTest-global_facts|5_2023-09-03T18:41:04.280567.parquet' - split: latest path: - '**/details_harness|hendrycksTest-global_facts|5_2023-09-03T18:41:04.280567.parquet' - config_name: harness_hendrycksTest_high_school_biology_5 data_files: - split: 2023_09_03T18_41_04.280567 path: - '**/details_harness|hendrycksTest-high_school_biology|5_2023-09-03T18:41:04.280567.parquet' - split: latest path: - '**/details_harness|hendrycksTest-high_school_biology|5_2023-09-03T18:41:04.280567.parquet' - config_name: harness_hendrycksTest_high_school_chemistry_5 data_files: - split: 2023_09_03T18_41_04.280567 path: - '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-09-03T18:41:04.280567.parquet' - split: latest path: - '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-09-03T18:41:04.280567.parquet' - config_name: harness_hendrycksTest_high_school_computer_science_5 data_files: - split: 2023_09_03T18_41_04.280567 path: - '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-09-03T18:41:04.280567.parquet' - split: latest path: - '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-09-03T18:41:04.280567.parquet' - config_name: harness_hendrycksTest_high_school_european_history_5 data_files: - split: 2023_09_03T18_41_04.280567 path: - '**/details_harness|hendrycksTest-high_school_european_history|5_2023-09-03T18:41:04.280567.parquet' - split: latest path: - '**/details_harness|hendrycksTest-high_school_european_history|5_2023-09-03T18:41:04.280567.parquet' - config_name: harness_hendrycksTest_high_school_geography_5 data_files: - split: 2023_09_03T18_41_04.280567 path: - '**/details_harness|hendrycksTest-high_school_geography|5_2023-09-03T18:41:04.280567.parquet' - split: latest path: - '**/details_harness|hendrycksTest-high_school_geography|5_2023-09-03T18:41:04.280567.parquet' - config_name: 
harness_hendrycksTest_high_school_government_and_politics_5 data_files: - split: 2023_09_03T18_41_04.280567 path: - '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-09-03T18:41:04.280567.parquet' - split: latest path: - '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-09-03T18:41:04.280567.parquet' - config_name: harness_hendrycksTest_high_school_macroeconomics_5 data_files: - split: 2023_09_03T18_41_04.280567 path: - '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-09-03T18:41:04.280567.parquet' - split: latest path: - '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-09-03T18:41:04.280567.parquet' - config_name: harness_hendrycksTest_high_school_mathematics_5 data_files: - split: 2023_09_03T18_41_04.280567 path: - '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-09-03T18:41:04.280567.parquet' - split: latest path: - '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-09-03T18:41:04.280567.parquet' - config_name: harness_hendrycksTest_high_school_microeconomics_5 data_files: - split: 2023_09_03T18_41_04.280567 path: - '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-09-03T18:41:04.280567.parquet' - split: latest path: - '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-09-03T18:41:04.280567.parquet' - config_name: harness_hendrycksTest_high_school_physics_5 data_files: - split: 2023_09_03T18_41_04.280567 path: - '**/details_harness|hendrycksTest-high_school_physics|5_2023-09-03T18:41:04.280567.parquet' - split: latest path: - '**/details_harness|hendrycksTest-high_school_physics|5_2023-09-03T18:41:04.280567.parquet' - config_name: harness_hendrycksTest_high_school_psychology_5 data_files: - split: 2023_09_03T18_41_04.280567 path: - '**/details_harness|hendrycksTest-high_school_psychology|5_2023-09-03T18:41:04.280567.parquet' - split: latest path: - 
'**/details_harness|hendrycksTest-high_school_psychology|5_2023-09-03T18:41:04.280567.parquet' - config_name: harness_hendrycksTest_high_school_statistics_5 data_files: - split: 2023_09_03T18_41_04.280567 path: - '**/details_harness|hendrycksTest-high_school_statistics|5_2023-09-03T18:41:04.280567.parquet' - split: latest path: - '**/details_harness|hendrycksTest-high_school_statistics|5_2023-09-03T18:41:04.280567.parquet' - config_name: harness_hendrycksTest_high_school_us_history_5 data_files: - split: 2023_09_03T18_41_04.280567 path: - '**/details_harness|hendrycksTest-high_school_us_history|5_2023-09-03T18:41:04.280567.parquet' - split: latest path: - '**/details_harness|hendrycksTest-high_school_us_history|5_2023-09-03T18:41:04.280567.parquet' - config_name: harness_hendrycksTest_high_school_world_history_5 data_files: - split: 2023_09_03T18_41_04.280567 path: - '**/details_harness|hendrycksTest-high_school_world_history|5_2023-09-03T18:41:04.280567.parquet' - split: latest path: - '**/details_harness|hendrycksTest-high_school_world_history|5_2023-09-03T18:41:04.280567.parquet' - config_name: harness_hendrycksTest_human_aging_5 data_files: - split: 2023_09_03T18_41_04.280567 path: - '**/details_harness|hendrycksTest-human_aging|5_2023-09-03T18:41:04.280567.parquet' - split: latest path: - '**/details_harness|hendrycksTest-human_aging|5_2023-09-03T18:41:04.280567.parquet' - config_name: harness_hendrycksTest_human_sexuality_5 data_files: - split: 2023_09_03T18_41_04.280567 path: - '**/details_harness|hendrycksTest-human_sexuality|5_2023-09-03T18:41:04.280567.parquet' - split: latest path: - '**/details_harness|hendrycksTest-human_sexuality|5_2023-09-03T18:41:04.280567.parquet' - config_name: harness_hendrycksTest_international_law_5 data_files: - split: 2023_09_03T18_41_04.280567 path: - '**/details_harness|hendrycksTest-international_law|5_2023-09-03T18:41:04.280567.parquet' - split: latest path: - 
'**/details_harness|hendrycksTest-international_law|5_2023-09-03T18:41:04.280567.parquet' - config_name: harness_hendrycksTest_jurisprudence_5 data_files: - split: 2023_09_03T18_41_04.280567 path: - '**/details_harness|hendrycksTest-jurisprudence|5_2023-09-03T18:41:04.280567.parquet' - split: latest path: - '**/details_harness|hendrycksTest-jurisprudence|5_2023-09-03T18:41:04.280567.parquet' - config_name: harness_hendrycksTest_logical_fallacies_5 data_files: - split: 2023_09_03T18_41_04.280567 path: - '**/details_harness|hendrycksTest-logical_fallacies|5_2023-09-03T18:41:04.280567.parquet' - split: latest path: - '**/details_harness|hendrycksTest-logical_fallacies|5_2023-09-03T18:41:04.280567.parquet' - config_name: harness_hendrycksTest_machine_learning_5 data_files: - split: 2023_09_03T18_41_04.280567 path: - '**/details_harness|hendrycksTest-machine_learning|5_2023-09-03T18:41:04.280567.parquet' - split: latest path: - '**/details_harness|hendrycksTest-machine_learning|5_2023-09-03T18:41:04.280567.parquet' - config_name: harness_hendrycksTest_management_5 data_files: - split: 2023_09_03T18_41_04.280567 path: - '**/details_harness|hendrycksTest-management|5_2023-09-03T18:41:04.280567.parquet' - split: latest path: - '**/details_harness|hendrycksTest-management|5_2023-09-03T18:41:04.280567.parquet' - config_name: harness_hendrycksTest_marketing_5 data_files: - split: 2023_09_03T18_41_04.280567 path: - '**/details_harness|hendrycksTest-marketing|5_2023-09-03T18:41:04.280567.parquet' - split: latest path: - '**/details_harness|hendrycksTest-marketing|5_2023-09-03T18:41:04.280567.parquet' - config_name: harness_hendrycksTest_medical_genetics_5 data_files: - split: 2023_09_03T18_41_04.280567 path: - '**/details_harness|hendrycksTest-medical_genetics|5_2023-09-03T18:41:04.280567.parquet' - split: latest path: - '**/details_harness|hendrycksTest-medical_genetics|5_2023-09-03T18:41:04.280567.parquet' - config_name: harness_hendrycksTest_miscellaneous_5 data_files: - 
split: 2023_09_03T18_41_04.280567 path: - '**/details_harness|hendrycksTest-miscellaneous|5_2023-09-03T18:41:04.280567.parquet' - split: latest path: - '**/details_harness|hendrycksTest-miscellaneous|5_2023-09-03T18:41:04.280567.parquet' - config_name: harness_hendrycksTest_moral_disputes_5 data_files: - split: 2023_09_03T18_41_04.280567 path: - '**/details_harness|hendrycksTest-moral_disputes|5_2023-09-03T18:41:04.280567.parquet' - split: latest path: - '**/details_harness|hendrycksTest-moral_disputes|5_2023-09-03T18:41:04.280567.parquet' - config_name: harness_hendrycksTest_moral_scenarios_5 data_files: - split: 2023_09_03T18_41_04.280567 path: - '**/details_harness|hendrycksTest-moral_scenarios|5_2023-09-03T18:41:04.280567.parquet' - split: latest path: - '**/details_harness|hendrycksTest-moral_scenarios|5_2023-09-03T18:41:04.280567.parquet' - config_name: harness_hendrycksTest_nutrition_5 data_files: - split: 2023_09_03T18_41_04.280567 path: - '**/details_harness|hendrycksTest-nutrition|5_2023-09-03T18:41:04.280567.parquet' - split: latest path: - '**/details_harness|hendrycksTest-nutrition|5_2023-09-03T18:41:04.280567.parquet' - config_name: harness_hendrycksTest_philosophy_5 data_files: - split: 2023_09_03T18_41_04.280567 path: - '**/details_harness|hendrycksTest-philosophy|5_2023-09-03T18:41:04.280567.parquet' - split: latest path: - '**/details_harness|hendrycksTest-philosophy|5_2023-09-03T18:41:04.280567.parquet' - config_name: harness_hendrycksTest_prehistory_5 data_files: - split: 2023_09_03T18_41_04.280567 path: - '**/details_harness|hendrycksTest-prehistory|5_2023-09-03T18:41:04.280567.parquet' - split: latest path: - '**/details_harness|hendrycksTest-prehistory|5_2023-09-03T18:41:04.280567.parquet' - config_name: harness_hendrycksTest_professional_accounting_5 data_files: - split: 2023_09_03T18_41_04.280567 path: - '**/details_harness|hendrycksTest-professional_accounting|5_2023-09-03T18:41:04.280567.parquet' - split: latest path: - 
'**/details_harness|hendrycksTest-professional_accounting|5_2023-09-03T18:41:04.280567.parquet' - config_name: harness_hendrycksTest_professional_law_5 data_files: - split: 2023_09_03T18_41_04.280567 path: - '**/details_harness|hendrycksTest-professional_law|5_2023-09-03T18:41:04.280567.parquet' - split: latest path: - '**/details_harness|hendrycksTest-professional_law|5_2023-09-03T18:41:04.280567.parquet' - config_name: harness_hendrycksTest_professional_medicine_5 data_files: - split: 2023_09_03T18_41_04.280567 path: - '**/details_harness|hendrycksTest-professional_medicine|5_2023-09-03T18:41:04.280567.parquet' - split: latest path: - '**/details_harness|hendrycksTest-professional_medicine|5_2023-09-03T18:41:04.280567.parquet' - config_name: harness_hendrycksTest_professional_psychology_5 data_files: - split: 2023_09_03T18_41_04.280567 path: - '**/details_harness|hendrycksTest-professional_psychology|5_2023-09-03T18:41:04.280567.parquet' - split: latest path: - '**/details_harness|hendrycksTest-professional_psychology|5_2023-09-03T18:41:04.280567.parquet' - config_name: harness_hendrycksTest_public_relations_5 data_files: - split: 2023_09_03T18_41_04.280567 path: - '**/details_harness|hendrycksTest-public_relations|5_2023-09-03T18:41:04.280567.parquet' - split: latest path: - '**/details_harness|hendrycksTest-public_relations|5_2023-09-03T18:41:04.280567.parquet' - config_name: harness_hendrycksTest_security_studies_5 data_files: - split: 2023_09_03T18_41_04.280567 path: - '**/details_harness|hendrycksTest-security_studies|5_2023-09-03T18:41:04.280567.parquet' - split: latest path: - '**/details_harness|hendrycksTest-security_studies|5_2023-09-03T18:41:04.280567.parquet' - config_name: harness_hendrycksTest_sociology_5 data_files: - split: 2023_09_03T18_41_04.280567 path: - '**/details_harness|hendrycksTest-sociology|5_2023-09-03T18:41:04.280567.parquet' - split: latest path: - '**/details_harness|hendrycksTest-sociology|5_2023-09-03T18:41:04.280567.parquet' - 
config_name: harness_hendrycksTest_us_foreign_policy_5 data_files: - split: 2023_09_03T18_41_04.280567 path: - '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-09-03T18:41:04.280567.parquet' - split: latest path: - '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-09-03T18:41:04.280567.parquet' - config_name: harness_hendrycksTest_virology_5 data_files: - split: 2023_09_03T18_41_04.280567 path: - '**/details_harness|hendrycksTest-virology|5_2023-09-03T18:41:04.280567.parquet' - split: latest path: - '**/details_harness|hendrycksTest-virology|5_2023-09-03T18:41:04.280567.parquet' - config_name: harness_hendrycksTest_world_religions_5 data_files: - split: 2023_09_03T18_41_04.280567 path: - '**/details_harness|hendrycksTest-world_religions|5_2023-09-03T18:41:04.280567.parquet' - split: latest path: - '**/details_harness|hendrycksTest-world_religions|5_2023-09-03T18:41:04.280567.parquet' - config_name: harness_truthfulqa_mc_0 data_files: - split: 2023_09_03T18_41_04.280567 path: - '**/details_harness|truthfulqa:mc|0_2023-09-03T18:41:04.280567.parquet' - split: latest path: - '**/details_harness|truthfulqa:mc|0_2023-09-03T18:41:04.280567.parquet' - config_name: results data_files: - split: 2023_09_03T18_41_04.280567 path: - results_2023-09-03T18:41:04.280567.parquet - split: latest path: - results_2023-09-03T18:41:04.280567.parquet --- # Dataset Card for Evaluation run of xxyyy123/mc_data_30k_from_platpus_orca_7b_10k_v1_lora_qk_rank14_v2 ## Dataset Description - **Homepage:** - **Repository:** https://huggingface.co/xxyyy123/mc_data_30k_from_platpus_orca_7b_10k_v1_lora_qk_rank14_v2 - **Paper:** - **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard - **Point of Contact:** clementine@hf.co ### Dataset Summary Dataset automatically created during the evaluation run of model 
[xxyyy123/mc_data_30k_from_platpus_orca_7b_10k_v1_lora_qk_rank14_v2](https://huggingface.co/xxyyy123/mc_data_30k_from_platpus_orca_7b_10k_v1_lora_qk_rank14_v2) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).

The dataset is composed of 61 configurations, each one corresponding to one of the evaluated tasks.

The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest results.

An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).

To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_xxyyy123__mc_data_30k_from_platpus_orca_7b_10k_v1_lora_qk_rank14_v2",
	"harness_truthfulqa_mc_0",
	split="train")
```

## Latest results

These are the [latest results from run 2023-09-03T18:41:04.280567](https://huggingface.co/datasets/open-llm-leaderboard/details_xxyyy123__mc_data_30k_from_platpus_orca_7b_10k_v1_lora_qk_rank14_v2/blob/main/results_2023-09-03T18%3A41%3A04.280567.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks.
You find each in the results and the "latest" split for each eval): ```python { "all": { "acc": 0.5159772470651705, "acc_stderr": 0.03490050368845693, "acc_norm": 0.5196198874675843, "acc_norm_stderr": 0.03488383911166199, "mc1": 0.3574051407588739, "mc1_stderr": 0.0167765996767294, "mc2": 0.5084843623108531, "mc2_stderr": 0.015788699144390992 }, "harness|arc:challenge|25": { "acc": 0.5537542662116041, "acc_stderr": 0.014526705548539982, "acc_norm": 0.5810580204778157, "acc_norm_stderr": 0.014418106953639013 }, "harness|hellaswag|10": { "acc": 0.6132244572794264, "acc_stderr": 0.004860162076330978, "acc_norm": 0.8008364867556264, "acc_norm_stderr": 0.0039855506403304606 }, "harness|hendrycksTest-abstract_algebra|5": { "acc": 0.28, "acc_stderr": 0.04512608598542128, "acc_norm": 0.28, "acc_norm_stderr": 0.04512608598542128 }, "harness|hendrycksTest-anatomy|5": { "acc": 0.48148148148148145, "acc_stderr": 0.043163785995113245, "acc_norm": 0.48148148148148145, "acc_norm_stderr": 0.043163785995113245 }, "harness|hendrycksTest-astronomy|5": { "acc": 0.47368421052631576, "acc_stderr": 0.04063302731486671, "acc_norm": 0.47368421052631576, "acc_norm_stderr": 0.04063302731486671 }, "harness|hendrycksTest-business_ethics|5": { "acc": 0.5, "acc_stderr": 0.050251890762960605, "acc_norm": 0.5, "acc_norm_stderr": 0.050251890762960605 }, "harness|hendrycksTest-clinical_knowledge|5": { "acc": 0.6, "acc_stderr": 0.030151134457776285, "acc_norm": 0.6, "acc_norm_stderr": 0.030151134457776285 }, "harness|hendrycksTest-college_biology|5": { "acc": 0.5625, "acc_stderr": 0.04148415739394154, "acc_norm": 0.5625, "acc_norm_stderr": 0.04148415739394154 }, "harness|hendrycksTest-college_chemistry|5": { "acc": 0.38, "acc_stderr": 0.04878317312145632, "acc_norm": 0.38, "acc_norm_stderr": 0.04878317312145632 }, "harness|hendrycksTest-college_computer_science|5": { "acc": 0.36, "acc_stderr": 0.048241815132442176, "acc_norm": 0.36, "acc_norm_stderr": 0.048241815132442176 }, 
"harness|hendrycksTest-college_mathematics|5": { "acc": 0.28, "acc_stderr": 0.04512608598542128, "acc_norm": 0.28, "acc_norm_stderr": 0.04512608598542128 }, "harness|hendrycksTest-college_medicine|5": { "acc": 0.4797687861271676, "acc_stderr": 0.03809342081273957, "acc_norm": 0.4797687861271676, "acc_norm_stderr": 0.03809342081273957 }, "harness|hendrycksTest-college_physics|5": { "acc": 0.29411764705882354, "acc_stderr": 0.04533838195929775, "acc_norm": 0.29411764705882354, "acc_norm_stderr": 0.04533838195929775 }, "harness|hendrycksTest-computer_security|5": { "acc": 0.62, "acc_stderr": 0.048783173121456316, "acc_norm": 0.62, "acc_norm_stderr": 0.048783173121456316 }, "harness|hendrycksTest-conceptual_physics|5": { "acc": 0.4851063829787234, "acc_stderr": 0.032671518489247764, "acc_norm": 0.4851063829787234, "acc_norm_stderr": 0.032671518489247764 }, "harness|hendrycksTest-econometrics|5": { "acc": 0.32456140350877194, "acc_stderr": 0.044045561573747664, "acc_norm": 0.32456140350877194, "acc_norm_stderr": 0.044045561573747664 }, "harness|hendrycksTest-electrical_engineering|5": { "acc": 0.45517241379310347, "acc_stderr": 0.04149886942192117, "acc_norm": 0.45517241379310347, "acc_norm_stderr": 0.04149886942192117 }, "harness|hendrycksTest-elementary_mathematics|5": { "acc": 0.30952380952380953, "acc_stderr": 0.023809523809523867, "acc_norm": 0.30952380952380953, "acc_norm_stderr": 0.023809523809523867 }, "harness|hendrycksTest-formal_logic|5": { "acc": 0.2777777777777778, "acc_stderr": 0.040061680838488774, "acc_norm": 0.2777777777777778, "acc_norm_stderr": 0.040061680838488774 }, "harness|hendrycksTest-global_facts|5": { "acc": 0.37, "acc_stderr": 0.04852365870939099, "acc_norm": 0.37, "acc_norm_stderr": 0.04852365870939099 }, "harness|hendrycksTest-high_school_biology|5": { "acc": 0.5612903225806452, "acc_stderr": 0.028229497320317216, "acc_norm": 0.5612903225806452, "acc_norm_stderr": 0.028229497320317216 }, "harness|hendrycksTest-high_school_chemistry|5": { 
"acc": 0.3842364532019704, "acc_stderr": 0.0342239856565755, "acc_norm": 0.3842364532019704, "acc_norm_stderr": 0.0342239856565755 }, "harness|hendrycksTest-high_school_computer_science|5": { "acc": 0.41, "acc_stderr": 0.04943110704237102, "acc_norm": 0.41, "acc_norm_stderr": 0.04943110704237102 }, "harness|hendrycksTest-high_school_european_history|5": { "acc": 0.7090909090909091, "acc_stderr": 0.03546563019624336, "acc_norm": 0.7090909090909091, "acc_norm_stderr": 0.03546563019624336 }, "harness|hendrycksTest-high_school_geography|5": { "acc": 0.6818181818181818, "acc_stderr": 0.0331847733384533, "acc_norm": 0.6818181818181818, "acc_norm_stderr": 0.0331847733384533 }, "harness|hendrycksTest-high_school_government_and_politics|5": { "acc": 0.7512953367875648, "acc_stderr": 0.031195840877700286, "acc_norm": 0.7512953367875648, "acc_norm_stderr": 0.031195840877700286 }, "harness|hendrycksTest-high_school_macroeconomics|5": { "acc": 0.48205128205128206, "acc_stderr": 0.02533466708095495, "acc_norm": 0.48205128205128206, "acc_norm_stderr": 0.02533466708095495 }, "harness|hendrycksTest-high_school_mathematics|5": { "acc": 0.2518518518518518, "acc_stderr": 0.02646611753895991, "acc_norm": 0.2518518518518518, "acc_norm_stderr": 0.02646611753895991 }, "harness|hendrycksTest-high_school_microeconomics|5": { "acc": 0.5126050420168067, "acc_stderr": 0.03246816765752174, "acc_norm": 0.5126050420168067, "acc_norm_stderr": 0.03246816765752174 }, "harness|hendrycksTest-high_school_physics|5": { "acc": 0.3576158940397351, "acc_stderr": 0.03913453431177258, "acc_norm": 0.3576158940397351, "acc_norm_stderr": 0.03913453431177258 }, "harness|hendrycksTest-high_school_psychology|5": { "acc": 0.7192660550458716, "acc_stderr": 0.019266055045871623, "acc_norm": 0.7192660550458716, "acc_norm_stderr": 0.019266055045871623 }, "harness|hendrycksTest-high_school_statistics|5": { "acc": 0.375, "acc_stderr": 0.033016908987210894, "acc_norm": 0.375, "acc_norm_stderr": 0.033016908987210894 }, 
"harness|hendrycksTest-high_school_us_history|5": { "acc": 0.696078431372549, "acc_stderr": 0.03228210387037892, "acc_norm": 0.696078431372549, "acc_norm_stderr": 0.03228210387037892 }, "harness|hendrycksTest-high_school_world_history|5": { "acc": 0.7215189873417721, "acc_stderr": 0.029178682304842548, "acc_norm": 0.7215189873417721, "acc_norm_stderr": 0.029178682304842548 }, "harness|hendrycksTest-human_aging|5": { "acc": 0.5964125560538116, "acc_stderr": 0.03292802819330314, "acc_norm": 0.5964125560538116, "acc_norm_stderr": 0.03292802819330314 }, "harness|hendrycksTest-human_sexuality|5": { "acc": 0.5954198473282443, "acc_stderr": 0.043046937953806645, "acc_norm": 0.5954198473282443, "acc_norm_stderr": 0.043046937953806645 }, "harness|hendrycksTest-international_law|5": { "acc": 0.6446280991735537, "acc_stderr": 0.0436923632657398, "acc_norm": 0.6446280991735537, "acc_norm_stderr": 0.0436923632657398 }, "harness|hendrycksTest-jurisprudence|5": { "acc": 0.6481481481481481, "acc_stderr": 0.046166311118017125, "acc_norm": 0.6481481481481481, "acc_norm_stderr": 0.046166311118017125 }, "harness|hendrycksTest-logical_fallacies|5": { "acc": 0.5705521472392638, "acc_stderr": 0.03889066619112723, "acc_norm": 0.5705521472392638, "acc_norm_stderr": 0.03889066619112723 }, "harness|hendrycksTest-machine_learning|5": { "acc": 0.375, "acc_stderr": 0.04595091388086298, "acc_norm": 0.375, "acc_norm_stderr": 0.04595091388086298 }, "harness|hendrycksTest-management|5": { "acc": 0.7087378640776699, "acc_stderr": 0.04498676320572924, "acc_norm": 0.7087378640776699, "acc_norm_stderr": 0.04498676320572924 }, "harness|hendrycksTest-marketing|5": { "acc": 0.7777777777777778, "acc_stderr": 0.027236013946196704, "acc_norm": 0.7777777777777778, "acc_norm_stderr": 0.027236013946196704 }, "harness|hendrycksTest-medical_genetics|5": { "acc": 0.55, "acc_stderr": 0.05, "acc_norm": 0.55, "acc_norm_stderr": 0.05 }, "harness|hendrycksTest-miscellaneous|5": { "acc": 0.7100893997445722, 
"acc_stderr": 0.01622501794477098, "acc_norm": 0.7100893997445722, "acc_norm_stderr": 0.01622501794477098 }, "harness|hendrycksTest-moral_disputes|5": { "acc": 0.5809248554913294, "acc_stderr": 0.02656417811142262, "acc_norm": 0.5809248554913294, "acc_norm_stderr": 0.02656417811142262 }, "harness|hendrycksTest-moral_scenarios|5": { "acc": 0.264804469273743, "acc_stderr": 0.014756906483260664, "acc_norm": 0.264804469273743, "acc_norm_stderr": 0.014756906483260664 }, "harness|hendrycksTest-nutrition|5": { "acc": 0.5261437908496732, "acc_stderr": 0.028590752958852394, "acc_norm": 0.5261437908496732, "acc_norm_stderr": 0.028590752958852394 }, "harness|hendrycksTest-philosophy|5": { "acc": 0.5884244372990354, "acc_stderr": 0.027950481494401266, "acc_norm": 0.5884244372990354, "acc_norm_stderr": 0.027950481494401266 }, "harness|hendrycksTest-prehistory|5": { "acc": 0.5648148148148148, "acc_stderr": 0.027586006221607708, "acc_norm": 0.5648148148148148, "acc_norm_stderr": 0.027586006221607708 }, "harness|hendrycksTest-professional_accounting|5": { "acc": 0.375886524822695, "acc_stderr": 0.028893955412115882, "acc_norm": 0.375886524822695, "acc_norm_stderr": 0.028893955412115882 }, "harness|hendrycksTest-professional_law|5": { "acc": 0.38396349413298564, "acc_stderr": 0.01242158783313423, "acc_norm": 0.38396349413298564, "acc_norm_stderr": 0.01242158783313423 }, "harness|hendrycksTest-professional_medicine|5": { "acc": 0.4889705882352941, "acc_stderr": 0.03036544647727568, "acc_norm": 0.4889705882352941, "acc_norm_stderr": 0.03036544647727568 }, "harness|hendrycksTest-professional_psychology|5": { "acc": 0.4869281045751634, "acc_stderr": 0.020220920829626912, "acc_norm": 0.4869281045751634, "acc_norm_stderr": 0.020220920829626912 }, "harness|hendrycksTest-public_relations|5": { "acc": 0.5909090909090909, "acc_stderr": 0.04709306978661896, "acc_norm": 0.5909090909090909, "acc_norm_stderr": 0.04709306978661896 }, "harness|hendrycksTest-security_studies|5": { "acc": 
0.6122448979591837, "acc_stderr": 0.031192230726795656, "acc_norm": 0.6122448979591837, "acc_norm_stderr": 0.031192230726795656 }, "harness|hendrycksTest-sociology|5": { "acc": 0.6119402985074627, "acc_stderr": 0.034457899643627506, "acc_norm": 0.6119402985074627, "acc_norm_stderr": 0.034457899643627506 }, "harness|hendrycksTest-us_foreign_policy|5": { "acc": 0.77, "acc_stderr": 0.042295258468165065, "acc_norm": 0.77, "acc_norm_stderr": 0.042295258468165065 }, "harness|hendrycksTest-virology|5": { "acc": 0.40963855421686746, "acc_stderr": 0.03828401115079022, "acc_norm": 0.40963855421686746, "acc_norm_stderr": 0.03828401115079022 }, "harness|hendrycksTest-world_religions|5": { "acc": 0.695906432748538, "acc_stderr": 0.03528211258245229, "acc_norm": 0.695906432748538, "acc_norm_stderr": 0.03528211258245229 }, "harness|truthfulqa:mc|0": { "mc1": 0.3574051407588739, "mc1_stderr": 0.0167765996767294, "mc2": 0.5084843623108531, "mc2_stderr": 0.015788699144390992 } } ``` ### Supported Tasks and Leaderboards [More Information Needed] ### Languages [More Information Needed] ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? 
[More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information [More Information Needed] ### Contributions [More Information Needed]
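Each per-task entry in the results JSON above carries its own `acc` field, so a macro-average over the `hendrycksTest-*` (MMLU) tasks can be recomputed locally. A minimal sketch, using a truncated, hand-copied sample of that dictionary:

```python
# Truncated sample of the "Latest results" dictionary above;
# only the "acc" field is kept for each task.
results = {
    "harness|arc:challenge|25": {"acc": 0.5537542662116041},
    "harness|hendrycksTest-abstract_algebra|5": {"acc": 0.28},
    "harness|hendrycksTest-anatomy|5": {"acc": 0.48148148148148145},
    "harness|hendrycksTest-astronomy|5": {"acc": 0.47368421052631576},
}

# Macro-average accuracy over the MMLU (hendrycksTest) tasks only.
mmlu_accs = [v["acc"] for k, v in results.items()
             if k.startswith("harness|hendrycksTest-")]
macro_avg = sum(mmlu_accs) / len(mmlu_accs)
print(round(macro_avg, 4))
```

Run over all 57 `hendrycksTest` entries instead of this sample, the same average should approximate the MMLU score displayed on the leaderboard.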
louisbrulenaudet/code-forestier
---
license: apache-2.0
language:
- fr
multilinguality:
- monolingual
tags:
- finetuning
- legal
- french law
- droit français
- Code forestier
source_datasets:
- original
pretty_name: Code forestier
task_categories:
- text-generation
- table-question-answering
- summarization
- conversational
size_categories:
- 1K<n<10K
---

# Code forestier, non-instruct (11-12-2023)

This project focuses on fine-tuning pre-trained language models to create efficient and accurate models for legal practice.

Fine-tuning is the process of adapting a pre-trained model to perform specific tasks or cater to particular domains. It involves adjusting the model's parameters through a further round of training on task-specific or domain-specific data. While conventional fine-tuning strategies involve supervised learning with labeled data, instruction-based fine-tuning introduces a more structured and interpretable approach.

Instruction-based fine-tuning leverages the power of human-provided instructions to guide the model's behavior. These instructions can be in the form of text prompts, prompts with explicit task descriptions, or a combination of both. This approach allows for a more controlled and context-aware interaction with the LLM, making it adaptable to a multitude of specialized tasks.

Instruction-based fine-tuning significantly enhances the performance of LLMs in the following ways:

- Task-Specific Adaptation: LLMs, when fine-tuned with specific instructions, exhibit remarkable adaptability to diverse tasks. They can switch seamlessly between translation, summarization, and question-answering, guided by the provided instructions.
- Reduced Ambiguity: Traditional LLMs might generate ambiguous or contextually inappropriate responses. Instruction-based fine-tuning allows for a clearer and more context-aware generation, reducing the likelihood of nonsensical outputs.
- Efficient Knowledge Transfer: Instructions can encapsulate domain-specific knowledge, enabling LLMs to benefit from expert guidance. This knowledge transfer is particularly valuable in fields like tax practice, law, medicine, and more.
- Interpretability: Instruction-based fine-tuning also makes LLM behavior more interpretable. Since the instructions are human-readable, it becomes easier to understand and control model outputs.
- Adaptive Behavior: LLMs, post instruction-based fine-tuning, exhibit adaptive behavior that is responsive to both explicit task descriptions and implicit cues within the provided text.

## Dataset generation

This JSON file is a list of dictionaries, where each dictionary contains the following fields:

- `instruction`: `string`, presenting the instruction linked to the element.
- `input`: `string`, signifying the input details for the element.
- `output`: `string`, indicating the output information for the element.

We used the following list of instructions for generating the dataset:

```python
instructions = [
    "Compose l'intégralité de l'article sous forme écrite.",
    "Écris la totalité du contenu de l'article.",
    "Formule la totalité du texte présent dans l'article.",
    "Produis l'intégralité de l'article en écriture.",
    "Développe l'article dans son ensemble par écrit.",
    "Génère l'ensemble du texte contenu dans l'article.",
    "Formule le contenu intégral de l'article en entier.",
    "Rédige la totalité du texte de l'article en entier.",
    "Compose l'intégralité du contenu textuel de l'article.",
    "Rédige l'ensemble du texte qui constitue l'article.",
    "Formule l'article entier dans son contenu écrit.",
    "Composez l'intégralité de l'article sous forme écrite.",
    "Écrivez la totalité du contenu de l'article.",
    "Formulez la totalité du texte présent dans l'article.",
    "Développez l'article dans son ensemble par écrit.",
    "Générez l'ensemble du texte contenu dans l'article.",
    "Formulez le contenu intégral de l'article en entier.",
    "Rédigez la totalité du texte de l'article en entier.",
    "Composez l'intégralité du contenu textuel de l'article.",
    "Écrivez l'article dans son intégralité en termes de texte.",
    "Rédigez l'ensemble du texte qui constitue l'article.",
    "Formulez l'article entier dans son contenu écrit.",
    "Composer l'intégralité de l'article sous forme écrite.",
    "Écrire la totalité du contenu de l'article.",
    "Formuler la totalité du texte présent dans l'article.",
    "Produire l'intégralité de l'article en écriture.",
    "Développer l'article dans son ensemble par écrit.",
    "Générer l'ensemble du texte contenu dans l'article.",
    "Formuler le contenu intégral de l'article en entier.",
    "Rédiger la totalité du texte de l'article en entier.",
    "Composer l'intégralité du contenu textuel de l'article.",
    "Rédiger l'ensemble du texte qui constitue l'article.",
    "Formuler l'article entier dans son contenu écrit.",
    "Quelles sont les dispositions de l'article ?",
    "Quelles dispositions sont incluses dans l'article ?",
    "Quelles sont les dispositions énoncées dans l'article ?",
    "Quel est le texte intégral de l'article ?",
    "Quelle est la lettre de l'article ?"
]
```

## Citing this project

If you use this code in your research, please use the following BibTeX entry.

```BibTeX
@misc{louisbrulenaudet2023,
	author = {Louis Brulé Naudet},
	title = {Code forestier, non-instruct (11-12-2023)},
	howpublished = {\url{https://huggingface.co/datasets/louisbrulenaudet/code-forestier}},
	year = {2023}
}
```

## Feedback

If you have any feedback, please reach out at [louisbrulenaudet@icloud.com](mailto:louisbrulenaudet@icloud.com).
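Putting the schema and the templates together, each record pairs one of the instructions above with an article reference as `input` and the article text as `output`. A minimal, self-contained sketch of how such records could be assembled (the `articles` list and its `ref`/`texte` keys are hypothetical stand-ins for the real corpus, and only two templates are shown):

```python
import random

# Hypothetical stand-in for the real Code forestier corpus.
articles = [
    {"ref": "Article 1", "texte": "Texte intégral du premier article."},
    {"ref": "Article 2", "texte": "Texte intégral du second article."},
]

# Two of the instruction templates listed above.
instructions = [
    "Compose l'intégralité de l'article sous forme écrite.",
    "Écris la totalité du contenu de l'article.",
]

random.seed(0)  # reproducible template choice
records = [
    {
        "instruction": random.choice(instructions),
        "input": article["ref"],
        "output": article["texte"],
    }
    for article in articles
]
print(len(records))
```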
bruno17eyef/grafith
---
license: openrail
---
paveldruy/sarah
---
license: openrail
---
nccratliri/vad-bengalese-finch
---
license: apache-2.0
---

# Positive Transfer Of The Whisper Speech Transformer To Human And Animal Voice Activity Detection

We proposed WhisperSeg, utilizing the Whisper Transformer pre-trained for Automatic Speech Recognition (ASR) for both human and animal Voice Activity Detection (VAD). For more details, please refer to our paper

> [**Positive Transfer of the Whisper Speech Transformer to Human and Animal Voice Activity Detection**](https://doi.org/10.1101/2023.09.30.560270)
>
> Nianlong Gu, Kanghwi Lee, Maris Basha, Sumit Kumar Ram, Guanghao You, Richard H. R. Hahnloser <br>
> University of Zurich and ETH Zurich

This is the Bengalese finch dataset customized for Animal Voice Activity Detection (vocal segmentation) in WhisperSeg.

## Download Dataset

```python
from huggingface_hub import snapshot_download
snapshot_download('nccratliri/vad-bengalese-finch', local_dir="data/bengalese-finch", repo_type="dataset")
```

For more usage details, please refer to the GitHub repository: https://github.com/nianlonggu/WhisperSeg

When using this dataset, please also cite:

```
@article {10.7554/eLife.68837,
article_type = {journal},
title = {Fast and accurate annotation of acoustic signals with deep neural networks},
author = {Steinfath, Elsa and Palacios-Muñoz, Adrian and Rottschäfer, Julian R and Yuezak, Deniz and Clemens, Jan},
editor = {Calabrese, Ronald L and Egnor, SE Roian and Troyer, Todd},
volume = 10,
year = 2021,
month = {nov},
pub_date = {2021-11-01},
pages = {e68837},
citation = {eLife 2021;10:e68837},
doi = {10.7554/eLife.68837},
url = {https://doi.org/10.7554/eLife.68837},
abstract = {Acoustic signals serve communication within and across species throughout the animal kingdom. Studying the genetics, evolution, and neurobiology of acoustic communication requires annotating acoustic signals: segmenting and identifying individual acoustic elements like syllables or sound pulses. To be useful, annotations need to be accurate, robust to noise, and fast.
We here introduce \textit{DeepAudioSegmenter} (\textit{DAS)}, a method that annotates acoustic signals across species based on a deep-learning derived hierarchical presentation of sound. We demonstrate the accuracy, robustness, and speed of \textit{DAS} using acoustic signals with diverse characteristics from insects, birds, and mammals. \textit{DAS} comes with a graphical user interface for annotating song, training the network, and for generating and proofreading annotations. The method can be trained to annotate signals from new species with little manual annotation and can be combined with unsupervised methods to discover novel signal types. \textit{DAS} annotates song with high throughput and low latency for experimental interventions in realtime. Overall, \textit{DAS} is a universal, versatile, and accessible tool for annotating acoustic communication signals.}, keywords = {acoustic communication, annotation, song, deep learning, bird, fly}, journal = {eLife}, issn = {2050-084X}, publisher = {eLife Sciences Publications, Ltd}, } ``` ``` @article {Gu2023.09.30.560270, author = {Nianlong Gu and Kanghwi Lee and Maris Basha and Sumit Kumar Ram and Guanghao You and Richard Hahnloser}, title = {Positive Transfer of the Whisper Speech Transformer to Human and Animal Voice Activity Detection}, elocation-id = {2023.09.30.560270}, year = {2023}, doi = {10.1101/2023.09.30.560270}, publisher = {Cold Spring Harbor Laboratory}, abstract = {This paper introduces WhisperSeg, utilizing the Whisper Transformer pre-trained for Automatic Speech Recognition (ASR) for human and animal Voice Activity Detection (VAD). Contrary to traditional methods that detect human voice or animal vocalizations from a short audio frame and rely on careful threshold selection, WhisperSeg processes entire spectrograms of long audio and generates plain text representations of onset, offset, and type of voice activity. 
Processing a longer audio context with a larger network greatly improves detection accuracy from few labeled examples. We further demonstrate a positive transfer of detection performance to new animal species, making our approach viable in the data-scarce multi-species setting.Competing Interest StatementThe authors have declared no competing interest.}, URL = {https://www.biorxiv.org/content/early/2023/10/02/2023.09.30.560270}, eprint = {https://www.biorxiv.org/content/early/2023/10/02/2023.09.30.560270.full.pdf}, journal = {bioRxiv} } ``` ## Contact nianlong.gu@uzh.ch
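The paper describes WhisperSeg's outputs as onsets and offsets of voice activity. Assuming segment annotations of the form `(onset_seconds, offset_seconds)` (an assumption for illustration, not necessarily this dataset's exact file format), a sketch of rasterizing them into a frame-level VAD mask:

```python
# Convert (onset, offset) segments in seconds into a 0/1 frame-level mask.
# The segment values below are made up for illustration.
def segments_to_mask(segments, duration, frame_rate=100):
    n_frames = int(round(duration * frame_rate))
    mask = [0] * n_frames
    for onset, offset in segments:
        start = max(0, int(round(onset * frame_rate)))
        end = min(n_frames, int(round(offset * frame_rate)))
        for i in range(start, end):
            mask[i] = 1
    return mask

mask = segments_to_mask([(0.10, 0.25), (0.40, 0.55)], duration=1.0)
print(sum(mask))  # number of voice-active frames
```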
KyonBS/hana-KunoichiTsubaki
---
license: openrail
---
irds/lotte_technology_test_forum
---
pretty_name: '`lotte/technology/test/forum`'
viewer: false
source_datasets: ['irds/lotte_technology_test']
task_categories:
- text-retrieval
---

# Dataset Card for `lotte/technology/test/forum`

The `lotte/technology/test/forum` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package.
For more information about the dataset, see the [documentation](https://ir-datasets.com/lotte#lotte/technology/test/forum).

# Data

This dataset provides:
 - `queries` (i.e., topics); count=2,004
 - `qrels`: (relevance assessments); count=15,890
 - For `docs`, use [`irds/lotte_technology_test`](https://huggingface.co/datasets/irds/lotte_technology_test)

## Usage

```python
from datasets import load_dataset

queries = load_dataset('irds/lotte_technology_test_forum', 'queries')
for record in queries:
    record  # {'query_id': ..., 'text': ...}

qrels = load_dataset('irds/lotte_technology_test_forum', 'qrels')
for record in qrels:
    record  # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...}
```

Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in 🤗 Dataset format.

## Citation Information

```
@article{Santhanam2021ColBERTv2,
  title = "ColBERTv2: Effective and Efficient Retrieval via Lightweight Late Interaction",
  author = "Keshav Santhanam and Omar Khattab and Jon Saad-Falcon and Christopher Potts and Matei Zaharia",
  journal = "arXiv preprint arXiv:2112.01488",
  year = "2021",
  url = "https://arxiv.org/abs/2112.01488"
}
```
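Since relevance judgments arrive as flat records, it is often convenient to group them into a per-query map before evaluation. A minimal sketch over hand-written sample records in the qrels schema shown above (the IDs are made up for illustration):

```python
from collections import defaultdict

# Sample qrels records in the schema above; the IDs are illustrative only.
qrels_records = [
    {"query_id": "q1", "doc_id": "d10", "relevance": 1, "iteration": "0"},
    {"query_id": "q1", "doc_id": "d11", "relevance": 1, "iteration": "0"},
    {"query_id": "q2", "doc_id": "d20", "relevance": 1, "iteration": "0"},
]

# Group into {query_id: {doc_id: relevance}}.
qrels = defaultdict(dict)
for rec in qrels_records:
    qrels[rec["query_id"]][rec["doc_id"]] = rec["relevance"]

print(sorted(qrels))
```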
CognitiveLab/FS_transcribe_summary
--- dataset_info: features: - name: author dtype: string - name: duration dtype: int64 - name: description dtype: string - name: transcript_json struct: - name: metadata struct: - name: channels dtype: int64 - name: created dtype: string - name: duration dtype: float64 - name: model_info struct: - name: 30089e05-99d1-4376-b32e-c263170674af struct: - name: arch dtype: string - name: name dtype: string - name: version dtype: string - name: models sequence: string - name: request_id dtype: string - name: sha256 dtype: string - name: summary_info struct: - name: input_tokens dtype: int64 - name: model_uuid dtype: string - name: output_tokens dtype: int64 - name: transaction_key dtype: string - name: warnings dtype: 'null' - name: results struct: - name: channels list: - name: alternatives list: - name: confidence dtype: float64 - name: entities dtype: 'null' - name: paragraphs struct: - name: paragraphs list: - name: end dtype: float64 - name: num_words dtype: float64 - name: sentences list: - name: end dtype: float64 - name: start dtype: float64 - name: text dtype: string - name: speaker dtype: int64 - name: start dtype: float64 - name: transcript dtype: string - name: summaries dtype: 'null' - name: topics list: - name: end_word dtype: float64 - name: start_word dtype: float64 - name: text dtype: string - name: topics list: - name: confidence dtype: float64 - name: topic dtype: string - name: transcript dtype: string - name: translations dtype: 'null' - name: words list: - name: confidence dtype: float64 - name: end dtype: float64 - name: punctuated_word dtype: string - name: speaker dtype: int64 - name: speaker_confidence dtype: float64 - name: start dtype: float64 - name: word dtype: string - name: detected_language dtype: string - name: language_confidence dtype: float64 - name: search dtype: 'null' - name: summary struct: - name: result dtype: string - name: short dtype: string - name: utterances dtype: 'null' - name: audio_path dtype: string - name: link dtype: 
string - name: title dtype: string - name: views dtype: int64 - name: transcript dtype: string - name: summary dtype: string splits: - name: train num_bytes: 66706960 num_examples: 522 download_size: 24568645 dataset_size: 66706960 configs: - config_name: default data_files: - split: train path: data/train-* ---
LambdaTests/VQAv2Validation_ViT_H_14_A_T_C_Q_benchmarks_partition_global_25_1000
--- dataset_info: features: - name: id dtype: int64 - name: response dtype: string splits: - name: train num_bytes: 862 num_examples: 32 download_size: 1973 dataset_size: 862 --- # Dataset Card for "VQAv2Validation_ViT_H_14_A_T_C_Q_benchmarks_partition_global_25_1000" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
Circularmachines/batch_indexing_machine_test
--- dataset_info: features: - name: image dtype: image splits: - name: test num_bytes: 88034570.0 num_examples: 400 download_size: 88040937 dataset_size: 88034570.0 --- # Dataset Card for "batch_indexing_machine_test" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
sdotmac/bayc
--- dataset_info: features: - name: image dtype: image - name: text dtype: string splits: - name: train num_bytes: 381887922.0 num_examples: 10000 download_size: 378097332 dataset_size: 381887922.0 --- # Dataset Card for "bayc" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
lapp0/level_01_arithmetic_align
--- dataset_info: features: - name: expression dtype: string - name: value dtype: string - name: is_error dtype: bool - name: instruction dtype: string - name: precision dtype: int64 - name: curriculum_stage dtype: int64 - name: __index_level_0__ dtype: int64 splits: - name: train num_bytes: 16566740 num_examples: 199998 download_size: 7449971 dataset_size: 16566740 configs: - config_name: default data_files: - split: train path: data/train-* ---
KakologArchives/KakologArchives
--- pretty_name: ニコニコ実況 過去ログアーカイブ license: mit language: - ja task_categories: - text-classification --- # Niconico Jikkyo Past Log Archive The Niconico Jikkyo Past Log Archive is a dataset collecting every past-log comment posted to [Niconico Jikkyo](https://jk.nicovideo.jp) from the start of the service to the present. In December 2020, Niconico Jikkyo was [relaunched as an official channel within Niconico Live](https://blog.nicovideo.jp/niconews/143148.html). With this, the old system, in operation since November 2009, was discontinued (a de facto end of service); support for home electronics such as torne and BRAVIA ended across the board, and roughly 11 years of past logs, filled with the raw voices of the time, were about to be lost as well. Members of 5ch's DTV board therefore launched a plan to archive the past logs of every channel over those 11 years before the old Niconico Jikkyo shut down. After various twists and turns, Nekopanda managed to retrieve the past logs of every channel, including radio and BS, for roughly 11 years without gaps, so the loss of 11 years of logs into the electronic sea was averted. However, because the old API was retired, past logs can no longer be fetched via an API, and since the archive totals about 150 GB, it is no longer as easy as before to find the range of logs you want to see. Meanwhile, on the new Niconico Jikkyo, which now runs as an official channel within Niconico Live, timeshifts (the counterpart of the old service's past logs) can only be watched for three weeks, after which they become unavailable; regular (non-premium) members must also reserve a timeshift in advance, so the old convenience has been lost. We believe the comments posted to Niconico Jikkyo about Japanese TV broadcasts are material of historical value that plainly reflects the public mood and the spirit of their time. To preserve all Niconico Jikkyo past logs for posterity, this dataset contains all past logs of the old Niconico Jikkyo up to 2020-12-15, as distributed by Nekopanda, plus the current day's past logs of the new Niconico Jikkyo (including community-based programs), collected every five minutes and reflected continuously. There is also an [API](https://jikkyo.tsukumijima.net/) for fetching past logs easily; please feel free to use it as well. ## Dataset Structure ### Builder Config | Key | Value Type | Default Value | Description | | --------------- | ---------- | ------------- | ----------- | | channel_id | string | None | ID of the Niconico Jikkyo channel to fetch past logs from (all channels if omitted) | | year | int | None | Year of the past logs to fetch (all years if omitted) | | number_of_files | int | None | Number of past-log files to fetch (all files if omitted) | ### Data Splits | Split | Approximate Size | Description | | ------- | ---------------- | ----------- | | sample | 1GB | As a sample, fetches all past-log comments posted to TOKYO MX (ID: jk9) during 2022. About 1 GB. | | all | 180GB | Fetches all past-log comments for all channels and all periods. Beware: this is nearly 180 GB. | ### Data Fields | Field | Type | Description | | --------------- | -------- | ----------- | | thread | string | Thread ID of the comment | | no | int64 | Comment number | | vpos | int64 | Playback position of the comment, reckoned from the thread ID (in 1/100 s) | | date | int64 | UNIX timestamp of the comment's posting time | | date_usec | int64 | Sub-second part of the posting time | | user_id | string | User ID (anonymized when the 184 command is set, and shuffled roughly weekly) | | mail | string | Comment commands (e.g. 184, red naka big; may be omitted) | | premium | boolean | True if the commenting user is a premium member | | anonymity | boolean | True if the comment is anonymous | | content | string | Comment body (note that multi-line comments such as ASCII art occasionally occur) | ## Example ```python from datasets import load_dataset dataset = load_dataset('KakologArchives/KakologArchives', 'all', channel_id='jk211', year=2023, number_of_files=10) for data in dataset['train']: print(data) ``` ## Licensing Information [MIT License](https://opensource.org/license/mit/)
benayas/snips_augmented_10pct_v0
--- dataset_info: features: - name: text dtype: string - name: category dtype: string - name: __index_level_0__ dtype: int64 splits: - name: train num_bytes: 1032169 num_examples: 13084 download_size: 485992 dataset_size: 1032169 configs: - config_name: default data_files: - split: train path: data/train-* ---
CyberHarem/frima_nikke
--- license: mit task_categories: - text-to-image tags: - art - not-for-all-audiences size_categories: - n<1K --- # Dataset of frima/プリム/普琳玛/프림 (Nikke: Goddess of Victory) This is the dataset of frima/プリム/普琳玛/프림 (Nikke: Goddess of Victory), containing 33 images and their tags. The core tags of this character are `long_hair, grey_hair, breasts, bangs, large_breasts, brown_eyes, yellow_eyes`, which are pruned in this dataset. Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)). ## List of Packages | Name | Images | Size | Download | Type | Description | |:-----------------|---------:|:----------|:-------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------| | raw | 33 | 51.13 MiB | [Download](https://huggingface.co/datasets/CyberHarem/frima_nikke/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). | | 800 | 33 | 25.80 MiB | [Download](https://huggingface.co/datasets/CyberHarem/frima_nikke/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. | | stage3-p480-800 | 89 | 60.15 MiB | [Download](https://huggingface.co/datasets/CyberHarem/frima_nikke/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. | | 1200 | 33 | 42.84 MiB | [Download](https://huggingface.co/datasets/CyberHarem/frima_nikke/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. | | stage3-p480-1200 | 89 | 89.85 MiB | [Download](https://huggingface.co/datasets/CyberHarem/frima_nikke/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. | ### Load Raw Dataset with Waifuc We provide the raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need this, just run the following code: ```python import os import zipfile from huggingface_hub import hf_hub_download from waifuc.source import LocalSource # download raw archive file zip_file = hf_hub_download( repo_id='CyberHarem/frima_nikke', repo_type='dataset', filename='dataset-raw.zip', ) # extract files to your directory dataset_dir = 'dataset_dir' os.makedirs(dataset_dir, exist_ok=True) with zipfile.ZipFile(zip_file, 'r') as zf: zf.extractall(dataset_dir) # load the dataset with waifuc source = LocalSource(dataset_dir) for item in source: print(item.image, item.meta['filename'], item.meta['tags']) ``` ## List of Clusters List of tag clustering results; some outfits may be mined here.
### Raw Text Version | # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags | |----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | 0 | 20 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | 1girl, solo, white_shirt, off_shoulder, sleep_mask, looking_at_viewer, black_pants, collarbone, red_jacket, bare_shoulders, cleavage, fur-trimmed_jacket, open_jacket, simple_background, torn_pants, closed_mouth, white_background | | 1 | 9 | ![](samples/1/clu1-sample0.png) | ![](samples/1/clu1-sample1.png) | ![](samples/1/clu1-sample2.png) | ![](samples/1/clu1-sample3.png) | ![](samples/1/clu1-sample4.png) | 1girl, solo, blue_shorts, navel, denim_shorts, open_jacket, short_shorts, straw_hat, blue_jacket, off_shoulder, outdoors, sun_hat, bikini, cloud, day, long_sleeves, looking_at_viewer, midriff, shirt, sky, stomach | ### Table Version | # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | solo | white_shirt | off_shoulder | sleep_mask | looking_at_viewer | black_pants | collarbone | red_jacket | bare_shoulders | cleavage | fur-trimmed_jacket | open_jacket | simple_background | torn_pants | closed_mouth | white_background | blue_shorts | navel | denim_shorts | short_shorts | straw_hat | blue_jacket | outdoors | sun_hat | bikini | cloud | day | long_sleeves | midriff | shirt | sky | stomach | 
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:-------|:--------------|:---------------|:-------------|:--------------------|:--------------|:-------------|:-------------|:-----------------|:-----------|:---------------------|:--------------|:--------------------|:-------------|:---------------|:-------------------|:--------------|:--------|:---------------|:---------------|:------------|:--------------|:-----------|:----------|:---------|:--------|:------|:---------------|:----------|:--------|:------|:----------| | 0 | 20 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | 1 | 9 | ![](samples/1/clu1-sample0.png) | ![](samples/1/clu1-sample1.png) | ![](samples/1/clu1-sample2.png) | ![](samples/1/clu1-sample3.png) | ![](samples/1/clu1-sample4.png) | X | X | | X | | X | | | | | | | X | | | | | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X |
mxeval/mbxp
--- license: apache-2.0 task_categories: - text-generation language: - en tags: - mxeval - mbxp - mbpp - code-generation - mxeval pretty_name: mbxp size_categories: - 10K<n<100K --- # MBXP ## Table of Contents - [MBXP](#MBXP) - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#related-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Executional Correctness](#execution) - [Execution Example](#execution-example) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Social Impact of Dataset](#social-impact-of-dataset) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) # MBXP ## Dataset Description - **Repository:** [GitHub Repository](https://github.com/amazon-science/mbxp-exec-eval) - **Paper:** [Multi-lingual Evaluation of Code Generation Models](https://openreview.net/forum?id=Bo7eeXm6An8) ### Dataset Summary This repository contains data and code to perform execution-based multi-lingual evaluation of code generation capabilities and the corresponding data, namely, a multi-lingual benchmark MBXP, multi-lingual MathQA and multi-lingual HumanEval. <br>Results and findings can be found in the paper ["Multi-lingual Evaluation of Code Generation Models"](https://arxiv.org/abs/2210.14868). 
### Related Tasks and Leaderboards * [Multi-HumanEval](https://huggingface.co/datasets/mxeval/multi-humaneval) * [MBXP](https://huggingface.co/datasets/mxeval/mbxp) * [MathQA-X](https://huggingface.co/datasets/mxeval/mathqa-x) ### Languages The programming problems are written in multiple programming languages and contain English natural text in comments and docstrings. ## Dataset Structure To look up the currently supported datasets: ```python from datasets import get_dataset_config_names get_dataset_config_names("mxeval/mbxp") ['python', 'csharp', 'go', 'java', 'javascript', 'kotlin', 'perl', 'php', 'ruby', 'scala', 'swift', 'typescript'] ``` To load a specific dataset and language: ```python from datasets import load_dataset load_dataset("mxeval/mbxp", "python") DatasetDict({ test: Dataset({ features: ['task_id', 'language', 'prompt', 'test', 'entry_point', 'canonical_solution', 'description'], num_rows: 974 }) }) ``` ### Data Instances An example of a dataset instance: ```python { "task_id": "MBPP/1", "language": "python", "prompt": "\n\ndef min_cost(cost, m, n):\n\t\"\"\"\n\tWrite a function to find the minimum cost path to reach (m, n) from (0, 0) for the given cost matrix cost[][] and a position (m, n) in cost[][].\n\t>>> min_cost([[1, 2, 3], [4, 8, 2], [1, 5, 3]], 2, 2)\n\t8\n\t>>> min_cost([[2, 3, 4], [5, 9, 3], [2, 6, 4]], 2, 2)\n\t12\n\t>>> min_cost([[3, 4, 5], [6, 10, 4], [3, 7, 5]], 2, 2)\n\t16\n\t\"\"\"\n", "test": "\n\nMETADATA = {}\n\n\ndef check(candidate):\n    assert candidate([[1, 2, 3], [4, 8, 2], [1, 5, 3]], 2, 2) == 8\n    assert candidate([[2, 3, 4], [5, 9, 3], [2, 6, 4]], 2, 2) == 12\n    assert candidate([[3, 4, 5], [6, 10, 4], [3, 7, 5]], 2, 2) == 16\n\n", "entry_point": "min_cost", "canonical_solution": "\tR = 3\n\tC = 3\n\t \n\ttc = [[0 for x in range(C)] for x in range(R)] \n\ttc[0][0] = cost[0][0] \n\tfor i in range(1, m+1): \n\t\ttc[i][0] = tc[i-1][0] + cost[i][0] \n\tfor j in range(1, n+1): \n\t\ttc[0][j] = tc[0][j-1] + cost[0][j] \n\tfor i in range(1, m+1): \n\t\tfor j in range(1, n+1): \n\t\t\ttc[i][j] = min(tc[i-1][j-1], tc[i-1][j], tc[i][j-1]) + cost[i][j] \n\treturn tc[m][n]", "description": "Write a function to find the minimum cost path to reach (m, n) from (0, 0) for the given cost matrix cost[][] and a position (m, n) in cost[][]." } ``` ### Data Fields - `task_id`: identifier for the data sample - `prompt`: input for the model containing function header and docstrings - `canonical_solution`: solution for the problem in the `prompt` - `description`: task description - `test`: contains function to test generated code for correctness - `entry_point`: entry point for test - `language`: programming language identifier used to select the appropriate subprocess call for program execution ### Data Splits - MBXP - Python - Java - Javascript - Typescript - Kotlin - Ruby - Php - Cpp - Csharp - Go - Perl - Scala - Swift ## Dataset Creation ### Curation Rationale Since code generation models are often trained on dumps of GitHub, a dataset not included in the dump was necessary to properly evaluate the model. However, since this dataset was published on GitHub, it is likely to be included in future dumps. ### Personal and Sensitive Information None. ### Social Impact of Dataset With this dataset, code-generating models can be better evaluated, which leads to fewer issues introduced when using such models. ### Dataset Curators AWS AI Labs ## Execution ### Execution Example Install the repo [mbxp-exec-eval](https://github.com/amazon-science/mbxp-exec-eval) to execute generations or canonical solutions for the prompts from this dataset.
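As a quick offline illustration of what execution-based checking does, the MBPP/1 instance shown above can be verified without the harness — its canonical solution run directly against the assertions its `test` field compiles to. This is a simplified stand-in for `mxeval.execution.check_correctness`, which additionally runs the candidate in a separate, time-limited process:

```python
def min_cost(cost, m, n):
    """Canonical solution for MBPP/1 (as in the dataset, it assumes a 3x3 matrix)."""
    R, C = 3, 3
    tc = [[0 for _ in range(C)] for _ in range(R)]
    tc[0][0] = cost[0][0]
    for i in range(1, m + 1):
        tc[i][0] = tc[i - 1][0] + cost[i][0]
    for j in range(1, n + 1):
        tc[0][j] = tc[0][j - 1] + cost[0][j]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            tc[i][j] = min(tc[i - 1][j - 1], tc[i - 1][j], tc[i][j - 1]) + cost[i][j]
    return tc[m][n]


def check(candidate):
    # Exactly the assertions carried in the instance's `test` field.
    assert candidate([[1, 2, 3], [4, 8, 2], [1, 5, 3]], 2, 2) == 8
    assert candidate([[2, 3, 4], [5, 9, 3], [2, 6, 4]], 2, 2) == 12
    assert candidate([[3, 4, 5], [6, 10, 4], [3, 7, 5]], 2, 2) == 16


check(min_cost)  # raises AssertionError if the candidate is wrong
print("passed")
```

The real harness wraps this same idea in a subprocess with a timeout, which is why the card advises sandboxing the execution environment.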
```python >>> from datasets import load_dataset >>> from mxeval.execution import check_correctness >>> mbxp_python = load_dataset("mxeval/mbxp", "python", split="test") >>> example_problem = mbxp_python[0] >>> check_correctness(example_problem, example_problem["canonical_solution"], timeout=20.0) {'task_id': 'MBPP/1', 'passed': True, 'result': 'passed', 'completion_id': None, 'time_elapsed': 10.314226150512695} ``` ### Considerations for Using the Data Make sure to sandbox the execution environment. ### Licensing Information [LICENSE](https://huggingface.co/datasets/mxeval/mbxp/blob/main/mbxp-LICENSE) <br> [THIRD PARTY LICENSES](https://huggingface.co/datasets/mxeval/mbxp/blob/main/THIRD_PARTY_LICENSES) ### Citation Information ``` @article{mbxp_athiwaratkun2022, title = {Multi-lingual Evaluation of Code Generation Models}, author = {Athiwaratkun, Ben and Gouda, Sanjay Krishna and Wang, Zijian and Li, Xiaopeng and Tian, Yuchen and Tan, Ming and Ahmad, Wasi Uddin and Wang, Shiqi and Sun, Qing and Shang, Mingyue and Gonugondla, Sujan Kumar and Ding, Hantian and Kumar, Varun and Fulton, Nathan and Farahani, Arash and Jain, Siddhartha and Giaquinto, Robert and Qian, Haifeng and Ramanathan, Murali Krishna and Nallapati, Ramesh and Ray, Baishakhi and Bhatia, Parminder and Sengupta, Sudipta and Roth, Dan and Xiang, Bing}, doi = {10.48550/ARXIV.2210.14868}, url = {https://arxiv.org/abs/2210.14868}, keywords = {Machine Learning (cs.LG), Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences}, publisher = {arXiv}, year = {2022}, copyright = {Creative Commons Attribution 4.0 International} } ``` ### Contributions [skgouda@](https://github.com/sk-g) [benathi@](https://github.com/benathi)
m-a-p/MusicPile-sft
--- dataset_info: features: - name: instruction dtype: string - name: input dtype: string - name: output dtype: string - name: src dtype: string - name: id dtype: int64 splits: - name: train num_bytes: 1926209131 num_examples: 1139473 download_size: 985473122 dataset_size: 1926209131 configs: - config_name: default data_files: - split: train path: data/train-* --- [**🌐 DemoPage**](https://ezmonyi.github.io/ChatMusician/) | [**🤗 Pretrain Dataset**](https://huggingface.co/datasets/m-a-p/MusicPile) | [**🤗 Benchmark**](https://huggingface.co/datasets/m-a-p/MusicTheoryBench) | [**📖 arXiv**](http://arxiv.org/abs/2402.16153) | [💻 **Code**](https://github.com/hf-lin/ChatMusician) | [**🤖 Chat Model**](https://huggingface.co/m-a-p/ChatMusician) | [**🤖 Base Model**](https://huggingface.co/m-a-p/ChatMusician-Base) # Dataset Card for MusicPile-sft *MusicPile-sft* is a subset of [MusicPile](https://huggingface.co/datasets/m-a-p/MusicPile). It contains **1.14M** samples with a ratio of music verbal to music score (ABC notation) of 2:1. Here is the overview: | Datasets | Sourced from | # Samples | Category | Format | | --- | --- | --- | --- | --- | | [IrishMAN](https://huggingface.co/datasets/sander-wood/irishman) | public dataset + Human-written Instructions | 340K | music score | chat | | [KernScores](http://kern.ccarh.org) | public dataset + Human-written Instructions | 10K | music score | chat | | [JSB Chorales](https://github.com/sander-wood/deepchoir) | public dataset + Human-written Instructions | 33.5k | music score | chat | | music knowledge** | Generated with GPT-4 | 255K | music verbal | chat | | music summary** | Generated with GPT-4 | 500K | music verbal | chat | Note: The data of JSB Chorales is repeated 100 times, because there is so little data on compositions in the Bach style. You can easily load it: ```python from datasets import load_dataset ds = load_dataset("m-a-p/MusicPile-sft") ``` ## Languages *MusicPile-sft* primarily contains English.
## Dataset Structure *MusicPile-sft* has five fields: `id`, `src`, `input`, `instruction`, and `output`. ## Citation If you find our work helpful, feel free to cite us. ``` @misc{yuan2024chatmusician, title={ChatMusician: Understanding and Generating Music Intrinsically with LLM}, author={Ruibin Yuan and Hanfeng Lin and Yi Wang and Zeyue Tian and Shangda Wu and Tianhao Shen and Ge Zhang and Yuhang Wu and Cong Liu and Ziya Zhou and Ziyang Ma and Liumeng Xue and Ziyu Wang and Qin Liu and Tianyu Zheng and Yizhi Li and Yinghao Ma and Yiming Liang and Xiaowei Chi and Ruibo Liu and Zili Wang and Pengfei Li and Jingcheng Wu and Chenghua Lin and Qifeng Liu and Tao Jiang and Wenhao Huang and Wenhu Chen and Emmanouil Benetos and Jie Fu and Gus Xia and Roger Dannenberg and Wei Xue and Shiyin Kang and Yike Guo}, year={2024}, eprint={2402.16153}, archivePrefix={arXiv}, primaryClass={cs.SD} } ``` ## Dataset Card Contact Authors of ChatMusician.
zolak/twitter_dataset_80_1713211337
--- dataset_info: features: - name: id dtype: string - name: tweet_content dtype: string - name: user_name dtype: string - name: user_id dtype: string - name: created_at dtype: string - name: url dtype: string - name: favourite_count dtype: int64 - name: scraped_at dtype: string - name: image_urls dtype: string splits: - name: train num_bytes: 1554633 num_examples: 3832 download_size: 779734 dataset_size: 1554633 configs: - config_name: default data_files: - split: train path: data/train-* ---
distilled-from-one-sec-cv12/chunk_168
--- dataset_info: features: - name: logits sequence: float32 - name: mfcc sequence: sequence: float64 splits: - name: train num_bytes: 1336855208 num_examples: 260494 download_size: 1366404458 dataset_size: 1336855208 --- # Dataset Card for "chunk_168" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
ashercn97/RenamedCodeEvol
--- dataset_info: features: - name: instruction dtype: string - name: output dtype: string splits: - name: train num_bytes: 92842613 num_examples: 32178 download_size: 45495030 dataset_size: 92842613 --- # Dataset Card for "RenamedCodeEvol" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
cerebras/SlimPajama-627B
--- task_categories: - text-generation language: - en pretty_name: SlimPajama-627B --- ## Dataset Description - **Homepage:** [SlimPajama Blog](https://www.cerebras.net/blog/slimpajama-a-627b-token-cleaned-and-deduplicated-version-of-redpajama) - **Repository:** [Pre-Processing Libraries](https://github.com/Cerebras/modelzoo/tree/main/modelzoo/transformers/data_processing/slimpajama) - **Size of compressed dataset:** 895 GB The dataset consists of 59166 jsonl files and is ~895GB compressed. It is a cleaned and deduplicated version of [Together's RedPajama](https://github.com/togethercomputer/redpajama-data). Check out our [blog post](https://www.cerebras.net/blog/slimpajama-a-627b-token-cleaned-and-deduplicated-version-of-redpajama) explaining our methods, [our code on GitHub](https://github.com/Cerebras/modelzoo/tree/main/modelzoo/transformers/data_processing/slimpajama), and join the discussion on the [Cerebras Discord](https://discord.gg/q6bZcMWJVu). ## Getting Started You can download the dataset using Hugging Face datasets: ```python from datasets import load_dataset ds = load_dataset("cerebras/SlimPajama-627B") ``` ## Background Today we are releasing SlimPajama – the largest extensively deduplicated, multi-corpora, open-source dataset for training large language models. SlimPajama was created by cleaning and deduplicating the 1.2T token RedPajama dataset from Together. By filtering out low quality data and duplicates, we were able to remove 49.6% of bytes, slimming down the dataset from 1210B to 627B tokens. We believe SlimPajama offers the highest quality and most compute efficient data to train on for runs up to 627B tokens. When upsampled, we expect SlimPajama to perform equal to or better than RedPajama-1T when training at trillion token scale. In addition to the data, we are also releasing the tools we built to create SlimPajama. 
Applying [MinHashLSH](http://infolab.stanford.edu/~ullman/mmds/book0n.pdf) deduplication to trillion token datasets like RedPajama was not possible with off-the-shelf open-source code. We made several improvements to existing solutions to produce an infrastructure that can perform MinHashLSH deduplication on trillion token datasets in a distributed, multi-threaded, and memory efficient fashion. Today we are open-sourcing this infrastructure to enable the community to easily create higher quality, extensively deduplicated datasets in the future. ### Our contributions 1. SlimPajama 627B – the largest extensively deduplicated, multi-corpora, open dataset for LLM training. We release it under the Apache 2.0 license. 2. Releasing validation and test sets, 500M tokens each, which have been decontaminated against the training data. 3. A library of methods to replicate, or pre-process from scratch, other datasets. To the best of our knowledge, these are the first open-source tools to enable cleaning and MinHashLSH deduplication of text data at trillion token scale. The full set of scripts to recreate the dataset from the original RedPajama dataset is available on the [Cerebras GitHub](https://github.com/Cerebras/modelzoo/tree/main/modelzoo/transformers/data_processing/slimpajama). A deeper explanation of our cleaning and deduplication process can be found in the [SlimPajama blog post](https://www.cerebras.net/blog/slimpajama-a-627b-token-cleaned-and-deduplicated-version-of-redpajama). ## Dataset Summary The [latest research](https://arxiv.org/abs/2306.01116) has shown that data quality is as important as data quantity. While training on more than one data epoch can be beneficial, this should be a choice rather than a side-effect of duplicates in the dataset. We decided to extensively deduplicate RedPajama to produce a dataset with higher information density.
This means when using SlimPajama, you can achieve higher accuracy with the same compute budget when compared to other datasets. #### Comparison of dataset features | Data source | Tokens | Open Source | Curated Data Sources | Deduplication Level | | --------------- | ------- | ----------- | -------------------- | ------------------- | | SlimPajama | **627B**| **Yes** | **Yes** | **Extensive** | | RedPajama | 1.21T | **Yes** | **Yes** | Partial | | RefinedWeb-600B | 600B | **Yes** | No | **Extensive** | | RefinedWeb-5T | **5T** | No | No | **Extensive** | | LLaMA | 1.4T | No | **Yes** | Partial | | MPT | 1T | No | **Yes** | Partial | | MassiveText | 1.4T | No | **Yes** | **Extensive** | #### Document low-length filter rates | Data source | Document low-length filter rate | | ------------- | ------------------------------- | | Commoncrawl | 0.02% | | C4 | 4.70% | | GitHub | 0.00% | | Books | 0.00% | | ArXiv | 0.62% | | Wikipedia | 0.00% | | StackExchange | 0.32% | | Total | 1.86% | #### Data source byte deduplication rates | Data source | Byte deduplication rate | | ------------- | ---------------------- | | Commoncrawl | 63.76% | | C4 | 6.85% | | GitHub | 46.16% | | Books | 2.01% | | ArXiv | 0.06% | | Wikipedia | 2.24% | | StackExchange | 0.20% | | Total | 49.60% | #### Data source proportions for SlimPajama and RedPajama | Data source | SlimPajama | RedPajama | | ------------- | ---------- | --------- | | Commoncrawl | 52.2% | 72.6% | | C4 | 26.7% | 14.4% | | GitHub | 5.2% | 4.9% | | Books | 4.2% | 2.1% | | ArXiv | 4.6% | 2.3% | | Wikipedia | 3.8% | 2.0% | | StackExchange | 3.3% | 1.7% | ### Languages Primarily English, with some non-English files in Wikipedia.
### Dataset Structure The dataset consists of jsonl files, with structure as follows: ```json { "text": ..., "meta": {"redpajama_set_name": "RedPajamaCommonCrawl" | "RedPajamaC4" | "RedPajamaGithub" | "RedPajamaBook" | "RedPajamaArXiv" | "RedPajamaWikipedia" | "RedPajamaStackExchange"}, } ``` ### Dataset Creation SlimPajama was created by cleaning and deduplicating the [RedPajama dataset from Together](https://github.com/togethercomputer/redpajama-data) via MinHashLSH. RedPajama is an open-source reproduction of the [LLaMA](https://arxiv.org/abs/2302.13971) data collection methodology. ### Source Data The data sources composing RedPajama are explained in [its model card](https://huggingface.co/datasets/togethercomputer/RedPajama-Data-1T). To cite SlimPajama, please use: ``` @misc{cerebras2023slimpajama, author = {Soboleva, Daria and Al-Khateeb, Faisal and Myers, Robert and Steeves, Jacob R and Hestness, Joel and Dey, Nolan}, title = {{SlimPajama: A 627B token cleaned and deduplicated version of RedPajama}}, month = June, year = 2023, howpublished = {\url{https://www.cerebras.net/blog/slimpajama-a-627b-token-cleaned-and-deduplicated-version-of-redpajama}}, url = {https://huggingface.co/datasets/cerebras/SlimPajama-627B}, } ``` ## License Please refer to the licenses of the data subsets you use. 
- [Common Crawl Foundation Terms of Use](https://commoncrawl.org/terms-of-use/full/) - [C4 license](https://huggingface.co/datasets/allenai/c4#license) - GitHub was limited to MIT, BSD, or Apache licenses only - Books: [the_pile_books3 license](https://huggingface.co/datasets/the_pile_books3#licensing-information) and [pg19 license](https://huggingface.co/datasets/pg19#licensing-information) - [ArXiv Terms of Use](https://info.arxiv.org/help/api/tou.html) - [Wikipedia License](https://huggingface.co/datasets/wikipedia#licensing-information) - [StackExchange license on the Internet Archive](https://archive.org/details/stackexchange) ## Acknowledgements - We’d like to thank Together, Ontocord.ai, ETH DS3Lab, and the AAI CERC Lab for creating the original RedPajama dataset and releasing it open source. - This release was made possible with the support and collaboration of Opentensor. - Easy cloud access to Cerebras systems is provided by our partner Cirrascale.
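To make the MinHashLSH idea described above concrete, here is a toy, self-contained sketch of MinHash similarity estimation. It illustrates only the principle — the released infrastructure shingles at a different granularity and adds an LSH banding step to scale to trillions of tokens, so every name and parameter below is illustrative:

```python
import hashlib


def shingles(text, k=5):
    """Character k-grams of a document (a toy granularity for illustration)."""
    return {text[i:i + k] for i in range(len(text) - k + 1)}


def minhash_signature(doc, num_hashes=64):
    """Keep the minimum of each seeded hash; each slot approximates one random permutation."""
    sig = []
    for seed in range(num_hashes):
        sig.append(min(
            int.from_bytes(hashlib.sha1(f"{seed}:{s}".encode()).digest()[:8], "big")
            for s in shingles(doc)
        ))
    return sig


def estimated_jaccard(sig_a, sig_b):
    """Fraction of matching signature slots estimates the shingle-set Jaccard similarity."""
    return sum(x == y for x, y in zip(sig_a, sig_b)) / len(sig_a)


a = "The quick brown fox jumps over the lazy dog."
b = "The quick brown fox jumped over the lazy dog."
c = "An entirely different sentence about datasets."

sa, sb, sc = (minhash_signature(d) for d in (a, b, c))
print(estimated_jaccard(sa, sb))  # high: near-duplicates collide in most slots
print(estimated_jaccard(sa, sc))  # low: unrelated documents rarely collide
```

Documents whose estimated similarity exceeds a chosen threshold are treated as duplicates; LSH banding avoids the quadratic all-pairs comparison that this sketch would otherwise require.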
llm-aes/gpt-3.5_SummEval_gpt2-vs-others_rate_explain
--- dataset_info: features: - name: task_id dtype: string - name: worker_id dtype: string - name: human_label dtype: int64 - name: llm_label dtype: int64 - name: generator_1 dtype: string - name: generator_2 dtype: string - name: premise dtype: string splits: - name: train num_bytes: 3292945 num_examples: 1500 download_size: 288631 dataset_size: 3292945 configs: - config_name: default data_files: - split: train path: data/train-* ---
BangumiBase/durarara
--- license: mit tags: - art size_categories: - 1K<n<10K --- # Bangumi Image Base of Durarara!! This is the image base of the bangumi Durarara!!; we detected 78 characters and 7823 images in total. The full dataset is [here](all.zip). **Please note that these image bases are not guaranteed to be 100% cleaned; they may actually contain noise.** If you intend to manually train models using this dataset, we recommend performing necessary preprocessing on the downloaded dataset to eliminate potential noisy samples (approximately 1% probability). Here is the characters' preview: | # | Images | Download | Preview 1 | Preview 2 | Preview 3 | Preview 4 | Preview 5 | Preview 6 | Preview 7 | Preview 8 | |:------|---------:|:---------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------| | 0 | 1153 | [Download](0/dataset.zip) | ![preview 1](0/preview_1.png) | ![preview 2](0/preview_2.png) | ![preview 3](0/preview_3.png) | ![preview 4](0/preview_4.png) | ![preview 5](0/preview_5.png) | ![preview 6](0/preview_6.png) | ![preview 7](0/preview_7.png) | ![preview 8](0/preview_8.png) | | 1 | 159 | [Download](1/dataset.zip) | ![preview 1](1/preview_1.png) | ![preview 2](1/preview_2.png) | ![preview 3](1/preview_3.png) | ![preview 4](1/preview_4.png) | ![preview 5](1/preview_5.png) | ![preview 6](1/preview_6.png) | ![preview 7](1/preview_7.png) | ![preview 8](1/preview_8.png) | | 2 | 185 | [Download](2/dataset.zip) | ![preview 1](2/preview_1.png) | ![preview 2](2/preview_2.png) | ![preview 3](2/preview_3.png) | ![preview 4](2/preview_4.png) | ![preview 5](2/preview_5.png) | ![preview 6](2/preview_6.png) | ![preview 7](2/preview_7.png) | ![preview 8](2/preview_8.png) | | 3 | 224 | [Download](3/dataset.zip) | ![preview 1](3/preview_1.png) | ![preview 2](3/preview_2.png) |
![preview 3](3/preview_3.png) | ![preview 4](3/preview_4.png) | ![preview 5](3/preview_5.png) | ![preview 6](3/preview_6.png) | ![preview 7](3/preview_7.png) | ![preview 8](3/preview_8.png) | | 4 | 177 | [Download](4/dataset.zip) | ![preview 1](4/preview_1.png) | ![preview 2](4/preview_2.png) | ![preview 3](4/preview_3.png) | ![preview 4](4/preview_4.png) | ![preview 5](4/preview_5.png) | ![preview 6](4/preview_6.png) | ![preview 7](4/preview_7.png) | ![preview 8](4/preview_8.png) | | 5 | 237 | [Download](5/dataset.zip) | ![preview 1](5/preview_1.png) | ![preview 2](5/preview_2.png) | ![preview 3](5/preview_3.png) | ![preview 4](5/preview_4.png) | ![preview 5](5/preview_5.png) | ![preview 6](5/preview_6.png) | ![preview 7](5/preview_7.png) | ![preview 8](5/preview_8.png) | | 6 | 267 | [Download](6/dataset.zip) | ![preview 1](6/preview_1.png) | ![preview 2](6/preview_2.png) | ![preview 3](6/preview_3.png) | ![preview 4](6/preview_4.png) | ![preview 5](6/preview_5.png) | ![preview 6](6/preview_6.png) | ![preview 7](6/preview_7.png) | ![preview 8](6/preview_8.png) | | 7 | 67 | [Download](7/dataset.zip) | ![preview 1](7/preview_1.png) | ![preview 2](7/preview_2.png) | ![preview 3](7/preview_3.png) | ![preview 4](7/preview_4.png) | ![preview 5](7/preview_5.png) | ![preview 6](7/preview_6.png) | ![preview 7](7/preview_7.png) | ![preview 8](7/preview_8.png) | | 8 | 38 | [Download](8/dataset.zip) | ![preview 1](8/preview_1.png) | ![preview 2](8/preview_2.png) | ![preview 3](8/preview_3.png) | ![preview 4](8/preview_4.png) | ![preview 5](8/preview_5.png) | ![preview 6](8/preview_6.png) | ![preview 7](8/preview_7.png) | ![preview 8](8/preview_8.png) | | 9 | 129 | [Download](9/dataset.zip) | ![preview 1](9/preview_1.png) | ![preview 2](9/preview_2.png) | ![preview 3](9/preview_3.png) | ![preview 4](9/preview_4.png) | ![preview 5](9/preview_5.png) | ![preview 6](9/preview_6.png) | ![preview 7](9/preview_7.png) | ![preview 8](9/preview_8.png) | | 10 | 375 | 
[Download](10/dataset.zip) | ![preview 1](10/preview_1.png) | ![preview 2](10/preview_2.png) | ![preview 3](10/preview_3.png) | ![preview 4](10/preview_4.png) | ![preview 5](10/preview_5.png) | ![preview 6](10/preview_6.png) | ![preview 7](10/preview_7.png) | ![preview 8](10/preview_8.png) | | 11 | 34 | [Download](11/dataset.zip) | ![preview 1](11/preview_1.png) | ![preview 2](11/preview_2.png) | ![preview 3](11/preview_3.png) | ![preview 4](11/preview_4.png) | ![preview 5](11/preview_5.png) | ![preview 6](11/preview_6.png) | ![preview 7](11/preview_7.png) | ![preview 8](11/preview_8.png) | | 12 | 75 | [Download](12/dataset.zip) | ![preview 1](12/preview_1.png) | ![preview 2](12/preview_2.png) | ![preview 3](12/preview_3.png) | ![preview 4](12/preview_4.png) | ![preview 5](12/preview_5.png) | ![preview 6](12/preview_6.png) | ![preview 7](12/preview_7.png) | ![preview 8](12/preview_8.png) | | 13 | 33 | [Download](13/dataset.zip) | ![preview 1](13/preview_1.png) | ![preview 2](13/preview_2.png) | ![preview 3](13/preview_3.png) | ![preview 4](13/preview_4.png) | ![preview 5](13/preview_5.png) | ![preview 6](13/preview_6.png) | ![preview 7](13/preview_7.png) | ![preview 8](13/preview_8.png) | | 14 | 24 | [Download](14/dataset.zip) | ![preview 1](14/preview_1.png) | ![preview 2](14/preview_2.png) | ![preview 3](14/preview_3.png) | ![preview 4](14/preview_4.png) | ![preview 5](14/preview_5.png) | ![preview 6](14/preview_6.png) | ![preview 7](14/preview_7.png) | ![preview 8](14/preview_8.png) | | 15 | 177 | [Download](15/dataset.zip) | ![preview 1](15/preview_1.png) | ![preview 2](15/preview_2.png) | ![preview 3](15/preview_3.png) | ![preview 4](15/preview_4.png) | ![preview 5](15/preview_5.png) | ![preview 6](15/preview_6.png) | ![preview 7](15/preview_7.png) | ![preview 8](15/preview_8.png) | | 16 | 112 | [Download](16/dataset.zip) | ![preview 1](16/preview_1.png) | ![preview 2](16/preview_2.png) | ![preview 3](16/preview_3.png) | ![preview 4](16/preview_4.png) | 
![preview 5](16/preview_5.png) | ![preview 6](16/preview_6.png) | ![preview 7](16/preview_7.png) | ![preview 8](16/preview_8.png) | | 17 | 124 | [Download](17/dataset.zip) | ![preview 1](17/preview_1.png) | ![preview 2](17/preview_2.png) | ![preview 3](17/preview_3.png) | ![preview 4](17/preview_4.png) | ![preview 5](17/preview_5.png) | ![preview 6](17/preview_6.png) | ![preview 7](17/preview_7.png) | ![preview 8](17/preview_8.png) | | 18 | 142 | [Download](18/dataset.zip) | ![preview 1](18/preview_1.png) | ![preview 2](18/preview_2.png) | ![preview 3](18/preview_3.png) | ![preview 4](18/preview_4.png) | ![preview 5](18/preview_5.png) | ![preview 6](18/preview_6.png) | ![preview 7](18/preview_7.png) | ![preview 8](18/preview_8.png) | | 19 | 133 | [Download](19/dataset.zip) | ![preview 1](19/preview_1.png) | ![preview 2](19/preview_2.png) | ![preview 3](19/preview_3.png) | ![preview 4](19/preview_4.png) | ![preview 5](19/preview_5.png) | ![preview 6](19/preview_6.png) | ![preview 7](19/preview_7.png) | ![preview 8](19/preview_8.png) | | 20 | 132 | [Download](20/dataset.zip) | ![preview 1](20/preview_1.png) | ![preview 2](20/preview_2.png) | ![preview 3](20/preview_3.png) | ![preview 4](20/preview_4.png) | ![preview 5](20/preview_5.png) | ![preview 6](20/preview_6.png) | ![preview 7](20/preview_7.png) | ![preview 8](20/preview_8.png) | | 21 | 87 | [Download](21/dataset.zip) | ![preview 1](21/preview_1.png) | ![preview 2](21/preview_2.png) | ![preview 3](21/preview_3.png) | ![preview 4](21/preview_4.png) | ![preview 5](21/preview_5.png) | ![preview 6](21/preview_6.png) | ![preview 7](21/preview_7.png) | ![preview 8](21/preview_8.png) | | 22 | 34 | [Download](22/dataset.zip) | ![preview 1](22/preview_1.png) | ![preview 2](22/preview_2.png) | ![preview 3](22/preview_3.png) | ![preview 4](22/preview_4.png) | ![preview 5](22/preview_5.png) | ![preview 6](22/preview_6.png) | ![preview 7](22/preview_7.png) | ![preview 8](22/preview_8.png) | | 23 | 161 | 
[Download](23/dataset.zip) | ![preview 1](23/preview_1.png) | ![preview 2](23/preview_2.png) | ![preview 3](23/preview_3.png) | ![preview 4](23/preview_4.png) | ![preview 5](23/preview_5.png) | ![preview 6](23/preview_6.png) | ![preview 7](23/preview_7.png) | ![preview 8](23/preview_8.png) | | 24 | 440 | [Download](24/dataset.zip) | ![preview 1](24/preview_1.png) | ![preview 2](24/preview_2.png) | ![preview 3](24/preview_3.png) | ![preview 4](24/preview_4.png) | ![preview 5](24/preview_5.png) | ![preview 6](24/preview_6.png) | ![preview 7](24/preview_7.png) | ![preview 8](24/preview_8.png) | | 25 | 297 | [Download](25/dataset.zip) | ![preview 1](25/preview_1.png) | ![preview 2](25/preview_2.png) | ![preview 3](25/preview_3.png) | ![preview 4](25/preview_4.png) | ![preview 5](25/preview_5.png) | ![preview 6](25/preview_6.png) | ![preview 7](25/preview_7.png) | ![preview 8](25/preview_8.png) | | 26 | 62 | [Download](26/dataset.zip) | ![preview 1](26/preview_1.png) | ![preview 2](26/preview_2.png) | ![preview 3](26/preview_3.png) | ![preview 4](26/preview_4.png) | ![preview 5](26/preview_5.png) | ![preview 6](26/preview_6.png) | ![preview 7](26/preview_7.png) | ![preview 8](26/preview_8.png) | | 27 | 34 | [Download](27/dataset.zip) | ![preview 1](27/preview_1.png) | ![preview 2](27/preview_2.png) | ![preview 3](27/preview_3.png) | ![preview 4](27/preview_4.png) | ![preview 5](27/preview_5.png) | ![preview 6](27/preview_6.png) | ![preview 7](27/preview_7.png) | ![preview 8](27/preview_8.png) | | 28 | 48 | [Download](28/dataset.zip) | ![preview 1](28/preview_1.png) | ![preview 2](28/preview_2.png) | ![preview 3](28/preview_3.png) | ![preview 4](28/preview_4.png) | ![preview 5](28/preview_5.png) | ![preview 6](28/preview_6.png) | ![preview 7](28/preview_7.png) | ![preview 8](28/preview_8.png) | | 29 | 68 | [Download](29/dataset.zip) | ![preview 1](29/preview_1.png) | ![preview 2](29/preview_2.png) | ![preview 3](29/preview_3.png) | ![preview 4](29/preview_4.png) | 
![preview 5](29/preview_5.png) | ![preview 6](29/preview_6.png) | ![preview 7](29/preview_7.png) | ![preview 8](29/preview_8.png) | | 30 | 31 | [Download](30/dataset.zip) | ![preview 1](30/preview_1.png) | ![preview 2](30/preview_2.png) | ![preview 3](30/preview_3.png) | ![preview 4](30/preview_4.png) | ![preview 5](30/preview_5.png) | ![preview 6](30/preview_6.png) | ![preview 7](30/preview_7.png) | ![preview 8](30/preview_8.png) | | 31 | 106 | [Download](31/dataset.zip) | ![preview 1](31/preview_1.png) | ![preview 2](31/preview_2.png) | ![preview 3](31/preview_3.png) | ![preview 4](31/preview_4.png) | ![preview 5](31/preview_5.png) | ![preview 6](31/preview_6.png) | ![preview 7](31/preview_7.png) | ![preview 8](31/preview_8.png) | | 32 | 62 | [Download](32/dataset.zip) | ![preview 1](32/preview_1.png) | ![preview 2](32/preview_2.png) | ![preview 3](32/preview_3.png) | ![preview 4](32/preview_4.png) | ![preview 5](32/preview_5.png) | ![preview 6](32/preview_6.png) | ![preview 7](32/preview_7.png) | ![preview 8](32/preview_8.png) | | 33 | 82 | [Download](33/dataset.zip) | ![preview 1](33/preview_1.png) | ![preview 2](33/preview_2.png) | ![preview 3](33/preview_3.png) | ![preview 4](33/preview_4.png) | ![preview 5](33/preview_5.png) | ![preview 6](33/preview_6.png) | ![preview 7](33/preview_7.png) | ![preview 8](33/preview_8.png) | | 34 | 20 | [Download](34/dataset.zip) | ![preview 1](34/preview_1.png) | ![preview 2](34/preview_2.png) | ![preview 3](34/preview_3.png) | ![preview 4](34/preview_4.png) | ![preview 5](34/preview_5.png) | ![preview 6](34/preview_6.png) | ![preview 7](34/preview_7.png) | ![preview 8](34/preview_8.png) | | 35 | 64 | [Download](35/dataset.zip) | ![preview 1](35/preview_1.png) | ![preview 2](35/preview_2.png) | ![preview 3](35/preview_3.png) | ![preview 4](35/preview_4.png) | ![preview 5](35/preview_5.png) | ![preview 6](35/preview_6.png) | ![preview 7](35/preview_7.png) | ![preview 8](35/preview_8.png) | | 36 | 37 | 
[Download](36/dataset.zip) | ![preview 1](36/preview_1.png) | ![preview 2](36/preview_2.png) | ![preview 3](36/preview_3.png) | ![preview 4](36/preview_4.png) | ![preview 5](36/preview_5.png) | ![preview 6](36/preview_6.png) | ![preview 7](36/preview_7.png) | ![preview 8](36/preview_8.png) | | 37 | 83 | [Download](37/dataset.zip) | ![preview 1](37/preview_1.png) | ![preview 2](37/preview_2.png) | ![preview 3](37/preview_3.png) | ![preview 4](37/preview_4.png) | ![preview 5](37/preview_5.png) | ![preview 6](37/preview_6.png) | ![preview 7](37/preview_7.png) | ![preview 8](37/preview_8.png) | | 38 | 39 | [Download](38/dataset.zip) | ![preview 1](38/preview_1.png) | ![preview 2](38/preview_2.png) | ![preview 3](38/preview_3.png) | ![preview 4](38/preview_4.png) | ![preview 5](38/preview_5.png) | ![preview 6](38/preview_6.png) | ![preview 7](38/preview_7.png) | ![preview 8](38/preview_8.png) | | 39 | 21 | [Download](39/dataset.zip) | ![preview 1](39/preview_1.png) | ![preview 2](39/preview_2.png) | ![preview 3](39/preview_3.png) | ![preview 4](39/preview_4.png) | ![preview 5](39/preview_5.png) | ![preview 6](39/preview_6.png) | ![preview 7](39/preview_7.png) | ![preview 8](39/preview_8.png) | | 40 | 59 | [Download](40/dataset.zip) | ![preview 1](40/preview_1.png) | ![preview 2](40/preview_2.png) | ![preview 3](40/preview_3.png) | ![preview 4](40/preview_4.png) | ![preview 5](40/preview_5.png) | ![preview 6](40/preview_6.png) | ![preview 7](40/preview_7.png) | ![preview 8](40/preview_8.png) | | 41 | 31 | [Download](41/dataset.zip) | ![preview 1](41/preview_1.png) | ![preview 2](41/preview_2.png) | ![preview 3](41/preview_3.png) | ![preview 4](41/preview_4.png) | ![preview 5](41/preview_5.png) | ![preview 6](41/preview_6.png) | ![preview 7](41/preview_7.png) | ![preview 8](41/preview_8.png) | | 42 | 653 | [Download](42/dataset.zip) | ![preview 1](42/preview_1.png) | ![preview 2](42/preview_2.png) | ![preview 3](42/preview_3.png) | ![preview 4](42/preview_4.png) | 
![preview 5](42/preview_5.png) | ![preview 6](42/preview_6.png) | ![preview 7](42/preview_7.png) | ![preview 8](42/preview_8.png) | | 43 | 23 | [Download](43/dataset.zip) | ![preview 1](43/preview_1.png) | ![preview 2](43/preview_2.png) | ![preview 3](43/preview_3.png) | ![preview 4](43/preview_4.png) | ![preview 5](43/preview_5.png) | ![preview 6](43/preview_6.png) | ![preview 7](43/preview_7.png) | ![preview 8](43/preview_8.png) | | 44 | 120 | [Download](44/dataset.zip) | ![preview 1](44/preview_1.png) | ![preview 2](44/preview_2.png) | ![preview 3](44/preview_3.png) | ![preview 4](44/preview_4.png) | ![preview 5](44/preview_5.png) | ![preview 6](44/preview_6.png) | ![preview 7](44/preview_7.png) | ![preview 8](44/preview_8.png) | | 45 | 22 | [Download](45/dataset.zip) | ![preview 1](45/preview_1.png) | ![preview 2](45/preview_2.png) | ![preview 3](45/preview_3.png) | ![preview 4](45/preview_4.png) | ![preview 5](45/preview_5.png) | ![preview 6](45/preview_6.png) | ![preview 7](45/preview_7.png) | ![preview 8](45/preview_8.png) | | 46 | 22 | [Download](46/dataset.zip) | ![preview 1](46/preview_1.png) | ![preview 2](46/preview_2.png) | ![preview 3](46/preview_3.png) | ![preview 4](46/preview_4.png) | ![preview 5](46/preview_5.png) | ![preview 6](46/preview_6.png) | ![preview 7](46/preview_7.png) | ![preview 8](46/preview_8.png) | | 47 | 36 | [Download](47/dataset.zip) | ![preview 1](47/preview_1.png) | ![preview 2](47/preview_2.png) | ![preview 3](47/preview_3.png) | ![preview 4](47/preview_4.png) | ![preview 5](47/preview_5.png) | ![preview 6](47/preview_6.png) | ![preview 7](47/preview_7.png) | ![preview 8](47/preview_8.png) | | 48 | 162 | [Download](48/dataset.zip) | ![preview 1](48/preview_1.png) | ![preview 2](48/preview_2.png) | ![preview 3](48/preview_3.png) | ![preview 4](48/preview_4.png) | ![preview 5](48/preview_5.png) | ![preview 6](48/preview_6.png) | ![preview 7](48/preview_7.png) | ![preview 8](48/preview_8.png) | | 49 | 11 | 
[Download](49/dataset.zip) | ![preview 1](49/preview_1.png) | ![preview 2](49/preview_2.png) | ![preview 3](49/preview_3.png) | ![preview 4](49/preview_4.png) | ![preview 5](49/preview_5.png) | ![preview 6](49/preview_6.png) | ![preview 7](49/preview_7.png) | ![preview 8](49/preview_8.png) | | 50 | 64 | [Download](50/dataset.zip) | ![preview 1](50/preview_1.png) | ![preview 2](50/preview_2.png) | ![preview 3](50/preview_3.png) | ![preview 4](50/preview_4.png) | ![preview 5](50/preview_5.png) | ![preview 6](50/preview_6.png) | ![preview 7](50/preview_7.png) | ![preview 8](50/preview_8.png) | | 51 | 8 | [Download](51/dataset.zip) | ![preview 1](51/preview_1.png) | ![preview 2](51/preview_2.png) | ![preview 3](51/preview_3.png) | ![preview 4](51/preview_4.png) | ![preview 5](51/preview_5.png) | ![preview 6](51/preview_6.png) | ![preview 7](51/preview_7.png) | ![preview 8](51/preview_8.png) | | 52 | 13 | [Download](52/dataset.zip) | ![preview 1](52/preview_1.png) | ![preview 2](52/preview_2.png) | ![preview 3](52/preview_3.png) | ![preview 4](52/preview_4.png) | ![preview 5](52/preview_5.png) | ![preview 6](52/preview_6.png) | ![preview 7](52/preview_7.png) | ![preview 8](52/preview_8.png) | | 53 | 19 | [Download](53/dataset.zip) | ![preview 1](53/preview_1.png) | ![preview 2](53/preview_2.png) | ![preview 3](53/preview_3.png) | ![preview 4](53/preview_4.png) | ![preview 5](53/preview_5.png) | ![preview 6](53/preview_6.png) | ![preview 7](53/preview_7.png) | ![preview 8](53/preview_8.png) | | 54 | 171 | [Download](54/dataset.zip) | ![preview 1](54/preview_1.png) | ![preview 2](54/preview_2.png) | ![preview 3](54/preview_3.png) | ![preview 4](54/preview_4.png) | ![preview 5](54/preview_5.png) | ![preview 6](54/preview_6.png) | ![preview 7](54/preview_7.png) | ![preview 8](54/preview_8.png) | | 55 | 14 | [Download](55/dataset.zip) | ![preview 1](55/preview_1.png) | ![preview 2](55/preview_2.png) | ![preview 3](55/preview_3.png) | ![preview 4](55/preview_4.png) | 
![preview 5](55/preview_5.png) | ![preview 6](55/preview_6.png) | ![preview 7](55/preview_7.png) | ![preview 8](55/preview_8.png) | | 56 | 16 | [Download](56/dataset.zip) | ![preview 1](56/preview_1.png) | ![preview 2](56/preview_2.png) | ![preview 3](56/preview_3.png) | ![preview 4](56/preview_4.png) | ![preview 5](56/preview_5.png) | ![preview 6](56/preview_6.png) | ![preview 7](56/preview_7.png) | ![preview 8](56/preview_8.png) | | 57 | 19 | [Download](57/dataset.zip) | ![preview 1](57/preview_1.png) | ![preview 2](57/preview_2.png) | ![preview 3](57/preview_3.png) | ![preview 4](57/preview_4.png) | ![preview 5](57/preview_5.png) | ![preview 6](57/preview_6.png) | ![preview 7](57/preview_7.png) | ![preview 8](57/preview_8.png) | | 58 | 10 | [Download](58/dataset.zip) | ![preview 1](58/preview_1.png) | ![preview 2](58/preview_2.png) | ![preview 3](58/preview_3.png) | ![preview 4](58/preview_4.png) | ![preview 5](58/preview_5.png) | ![preview 6](58/preview_6.png) | ![preview 7](58/preview_7.png) | ![preview 8](58/preview_8.png) | | 59 | 64 | [Download](59/dataset.zip) | ![preview 1](59/preview_1.png) | ![preview 2](59/preview_2.png) | ![preview 3](59/preview_3.png) | ![preview 4](59/preview_4.png) | ![preview 5](59/preview_5.png) | ![preview 6](59/preview_6.png) | ![preview 7](59/preview_7.png) | ![preview 8](59/preview_8.png) | | 60 | 30 | [Download](60/dataset.zip) | ![preview 1](60/preview_1.png) | ![preview 2](60/preview_2.png) | ![preview 3](60/preview_3.png) | ![preview 4](60/preview_4.png) | ![preview 5](60/preview_5.png) | ![preview 6](60/preview_6.png) | ![preview 7](60/preview_7.png) | ![preview 8](60/preview_8.png) | | 61 | 16 | [Download](61/dataset.zip) | ![preview 1](61/preview_1.png) | ![preview 2](61/preview_2.png) | ![preview 3](61/preview_3.png) | ![preview 4](61/preview_4.png) | ![preview 5](61/preview_5.png) | ![preview 6](61/preview_6.png) | ![preview 7](61/preview_7.png) | ![preview 8](61/preview_8.png) | | 62 | 30 | 
[Download](62/dataset.zip) | ![preview 1](62/preview_1.png) | ![preview 2](62/preview_2.png) | ![preview 3](62/preview_3.png) | ![preview 4](62/preview_4.png) | ![preview 5](62/preview_5.png) | ![preview 6](62/preview_6.png) | ![preview 7](62/preview_7.png) | ![preview 8](62/preview_8.png) | | 63 | 18 | [Download](63/dataset.zip) | ![preview 1](63/preview_1.png) | ![preview 2](63/preview_2.png) | ![preview 3](63/preview_3.png) | ![preview 4](63/preview_4.png) | ![preview 5](63/preview_5.png) | ![preview 6](63/preview_6.png) | ![preview 7](63/preview_7.png) | ![preview 8](63/preview_8.png) | | 64 | 26 | [Download](64/dataset.zip) | ![preview 1](64/preview_1.png) | ![preview 2](64/preview_2.png) | ![preview 3](64/preview_3.png) | ![preview 4](64/preview_4.png) | ![preview 5](64/preview_5.png) | ![preview 6](64/preview_6.png) | ![preview 7](64/preview_7.png) | ![preview 8](64/preview_8.png) | | 65 | 9 | [Download](65/dataset.zip) | ![preview 1](65/preview_1.png) | ![preview 2](65/preview_2.png) | ![preview 3](65/preview_3.png) | ![preview 4](65/preview_4.png) | ![preview 5](65/preview_5.png) | ![preview 6](65/preview_6.png) | ![preview 7](65/preview_7.png) | ![preview 8](65/preview_8.png) | | 66 | 21 | [Download](66/dataset.zip) | ![preview 1](66/preview_1.png) | ![preview 2](66/preview_2.png) | ![preview 3](66/preview_3.png) | ![preview 4](66/preview_4.png) | ![preview 5](66/preview_5.png) | ![preview 6](66/preview_6.png) | ![preview 7](66/preview_7.png) | ![preview 8](66/preview_8.png) | | 67 | 12 | [Download](67/dataset.zip) | ![preview 1](67/preview_1.png) | ![preview 2](67/preview_2.png) | ![preview 3](67/preview_3.png) | ![preview 4](67/preview_4.png) | ![preview 5](67/preview_5.png) | ![preview 6](67/preview_6.png) | ![preview 7](67/preview_7.png) | ![preview 8](67/preview_8.png) | | 68 | 22 | [Download](68/dataset.zip) | ![preview 1](68/preview_1.png) | ![preview 2](68/preview_2.png) | ![preview 3](68/preview_3.png) | ![preview 4](68/preview_4.png) | ![preview 
5](68/preview_5.png) | ![preview 6](68/preview_6.png) | ![preview 7](68/preview_7.png) | ![preview 8](68/preview_8.png) | | 69 | 28 | [Download](69/dataset.zip) | ![preview 1](69/preview_1.png) | ![preview 2](69/preview_2.png) | ![preview 3](69/preview_3.png) | ![preview 4](69/preview_4.png) | ![preview 5](69/preview_5.png) | ![preview 6](69/preview_6.png) | ![preview 7](69/preview_7.png) | ![preview 8](69/preview_8.png) | | 70 | 13 | [Download](70/dataset.zip) | ![preview 1](70/preview_1.png) | ![preview 2](70/preview_2.png) | ![preview 3](70/preview_3.png) | ![preview 4](70/preview_4.png) | ![preview 5](70/preview_5.png) | ![preview 6](70/preview_6.png) | ![preview 7](70/preview_7.png) | ![preview 8](70/preview_8.png) | | 71 | 6 | [Download](71/dataset.zip) | ![preview 1](71/preview_1.png) | ![preview 2](71/preview_2.png) | ![preview 3](71/preview_3.png) | ![preview 4](71/preview_4.png) | ![preview 5](71/preview_5.png) | ![preview 6](71/preview_6.png) | N/A | N/A | | 72 | 6 | [Download](72/dataset.zip) | ![preview 1](72/preview_1.png) | ![preview 2](72/preview_2.png) | ![preview 3](72/preview_3.png) | ![preview 4](72/preview_4.png) | ![preview 5](72/preview_5.png) | ![preview 6](72/preview_6.png) | N/A | N/A | | 73 | 17 | [Download](73/dataset.zip) | ![preview 1](73/preview_1.png) | ![preview 2](73/preview_2.png) | ![preview 3](73/preview_3.png) | ![preview 4](73/preview_4.png) | ![preview 5](73/preview_5.png) | ![preview 6](73/preview_6.png) | ![preview 7](73/preview_7.png) | ![preview 8](73/preview_8.png) | | 74 | 10 | [Download](74/dataset.zip) | ![preview 1](74/preview_1.png) | ![preview 2](74/preview_2.png) | ![preview 3](74/preview_3.png) | ![preview 4](74/preview_4.png) | ![preview 5](74/preview_5.png) | ![preview 6](74/preview_6.png) | ![preview 7](74/preview_7.png) | ![preview 8](74/preview_8.png) | | 75 | 12 | [Download](75/dataset.zip) | ![preview 1](75/preview_1.png) | ![preview 2](75/preview_2.png) | ![preview 3](75/preview_3.png) | ![preview 
4](75/preview_4.png) | ![preview 5](75/preview_5.png) | ![preview 6](75/preview_6.png) | ![preview 7](75/preview_7.png) | ![preview 8](75/preview_8.png) | | 76 | 6 | [Download](76/dataset.zip) | ![preview 1](76/preview_1.png) | ![preview 2](76/preview_2.png) | ![preview 3](76/preview_3.png) | ![preview 4](76/preview_4.png) | ![preview 5](76/preview_5.png) | ![preview 6](76/preview_6.png) | N/A | N/A | | noise | 231 | [Download](-1/dataset.zip) | ![preview 1](-1/preview_1.png) | ![preview 2](-1/preview_2.png) | ![preview 3](-1/preview_3.png) | ![preview 4](-1/preview_4.png) | ![preview 5](-1/preview_5.png) | ![preview 6](-1/preview_6.png) | ![preview 7](-1/preview_7.png) | ![preview 8](-1/preview_8.png) |
GAS17/salvo
--- license: apache-2.0 ---
tartuNLP/EstCOPA
--- annotations_creators: - expert-generated language: - et language_creators: - expert-generated - machine-generated license: - cc-by-4.0 multilinguality: - monolingual - translation pretty_name: EstCOPA size_categories: - n<1K source_datasets: - extended|xcopa tags: [] task_categories: - question-answering task_ids: [] --- # Dataset Card for EstCOPA ### Dataset Summary EstCOPA is an extended version of [XCOPA](https://huggingface.co/datasets/xcopa) that was created with the goal of further investigating the Estonian language understanding of large language models. EstCOPA provides two new versions of the train, eval, and test datasets in Estonian: first, a machine-translated (En->Et) version of the original English COPA ([Roemmele et al., 2011](http://commonsensereasoning.org/2011/papers/Roemmele.pdf)) and second, a manually post-edited version of the same machine-translated data. ### Supported Tasks and Leaderboards [More Information Needed] ### Languages - et ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? 
[More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information If you use the dataset in your work, please cite ``` @article{kuulmets_estcopa_2022, title={Estonian Language Understanding: a Case Study on the COPA Task}, volume={10}, DOI={https://doi.org/10.22364/bjmc.2022.10.3.19}, number={3}, journal={Baltic Journal of Modern Computing}, author={Kuulmets, Hele-Andra and Tättar, Andre and Fishel, Mark}, year={2022}, pages={470–480} } ``` ### Contributions Thanks to [@helehh](https://github.com/helehh) for adding this dataset.
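As a quick illustration of the COPA-style format this dataset follows, here is a minimal, hypothetical sketch. The field names (`premise`, `choice1`, `choice2`, `question`, `label`) follow the original XCOPA schema and the Estonian strings are invented, so verify both against the actual files before relying on them; evaluation is plain choice accuracy:

```python
# Hypothetical COPA-style instance; field names follow the original XCOPA
# schema -- EstCOPA's actual column names may differ.
example = {
    "premise": "Mees avas vihmavarju.",   # "The man opened an umbrella."
    "choice1": "Hakkas sadama.",          # "It started to rain."
    "choice2": "Päike paistis.",          # "The sun was shining."
    "question": "cause",                  # ask for the cause of the premise
    "label": 0,                           # choice1 is the correct alternative
}

def copa_accuracy(predictions, labels):
    """Fraction of instances where the predicted choice index matches the gold label."""
    correct = sum(int(p == g) for p, g in zip(predictions, labels))
    return correct / len(labels)

print(copa_accuracy([0, 1, 1], [0, 1, 0]))  # two of three predictions correct
```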
qqlu1992/Adobe_EntitySeg
--- viewer: false --- The images and pretrained models used in the ICCV oral paper 'High-Quality Entity Segmentation'. The official link is https://github.com/adobe-research/EntitySeg-Dataset. The code link is https://github.com/qqlu/Entity/tree/main/Entityv2. Note that we do not own the copyright of the images. It is solely your responsibility to check the original licenses of the images before using them. Any use of the images is at your own discretion and risk.
mlml-chip/testing
--- license: apache-2.0 ---
distilled-one-sec-cv12-each-chunk-uniq/chunk_111
--- dataset_info: features: - name: logits sequence: float32 - name: mfcc sequence: sequence: float64 splits: - name: train num_bytes: 1164024844.0 num_examples: 226817 download_size: 1189172108 dataset_size: 1164024844.0 --- # Dataset Card for "chunk_111" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
minh21/COVID-QA-unique-context-test-10-percent-validation-10-percent
--- dataset_info: features: - name: question dtype: string - name: answer_text dtype: string - name: answer_start dtype: int64 - name: is_impossible dtype: bool - name: document_id dtype: int64 - name: id dtype: int64 - name: context dtype: string splits: - name: train num_bytes: 2050073 num_examples: 1615 - name: test num_bytes: 260386 num_examples: 202 - name: validation num_bytes: 261992 num_examples: 202 download_size: 0 dataset_size: 2572451 --- # Dataset Card for "COVID-QA-unique-context-test-10-percent-validation-10-percent" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
yoshitomo-matsubara/srsd-feynman_hard
--- pretty_name: SRSD-Feynman (Hard) annotations_creators: - expert language_creators: - expert-generated language: - en license: - cc-by-4.0 multilinguality: - monolingual size_categories: - 100K<n<1M source_datasets: - extended task_categories: - tabular-regression task_ids: [] --- # Dataset Card for SRSD-Feynman (Hard set) ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** - **Repository:** https://github.com/omron-sinicx/srsd-benchmark - **Paper:** [Rethinking Symbolic Regression Datasets and Benchmarks for Scientific Discovery](https://arxiv.org/abs/2206.10540) - **Point of Contact:** [Yoshitaka Ushiku](mailto:yoshitaka.ushiku@sinicx.com) ### Dataset Summary Our SRSD (Feynman) datasets are designed to discuss the performance of Symbolic Regression for Scientific Discovery. 
We carefully reviewed the properties of each formula and its variables in [the Feynman Symbolic Regression Database](https://space.mit.edu/home/tegmark/aifeynman.html) to design reasonably realistic sampling ranges of values so that our SRSD datasets can be used for evaluating the potential of SRSD, such as whether or not an SR method can (re)discover physical laws from such datasets. This is the ***Hard set*** of our SRSD-Feynman datasets, which consists of the following 50 different physics formulas: [![Click here to open a PDF file](problem_table.png)](https://huggingface.co/datasets/yoshitomo-matsubara/srsd-feynman_hard/resolve/main/problem_table.pdf) More details of these datasets are provided in [the paper and its supplementary material](https://openreview.net/forum?id=qrUdrXsiXX). ### Supported Tasks and Leaderboards Symbolic Regression ## Dataset Structure ### Data Instances Tabular data + Ground-truth equation per equation Tabular data: (num_samples, num_variables+1), where the last (rightmost) column indicates the output of the target function for the given variables. Note that the number of variables (`num_variables`) varies from equation to equation. Ground-truth equation: *pickled* symbolic representation (equation with symbols in sympy) of the target function. ### Data Fields For each dataset, we have 1. train split (txt file, whitespace as a delimiter) 2. val split (txt file, whitespace as a delimiter) 3. test split (txt file, whitespace as a delimiter) 4. true equation (pickle file for sympy object) ### Data Splits - train: 8,000 samples per equation - val: 1,000 samples per equation - test: 1,000 samples per equation ## Dataset Creation ### Curation Rationale We chose target equations based on [the Feynman Symbolic Regression Database](https://space.mit.edu/home/tegmark/aifeynman.html). ### Annotations #### Annotation process We significantly revised the sampling range for each variable from the annotations in the Feynman Symbolic Regression Database. 
First, we checked the properties of each variable and treated physical constants (e.g., light speed, gravitational constant) as constants. Next, variable ranges were defined to correspond to each typical physics experiment to confirm the physical phenomenon for each equation. In cases where a specific experiment is difficult to assume, ranges were set within which the corresponding physical phenomenon can be seen. Generally, the ranges are set to be sampled on log scales spanning about two orders of magnitude (10^2), in order to capture both large and small changes in value as the order changes. Variables such as angles, for which a linear distribution is expected, are set to be sampled uniformly. In addition, variables that take a specific sign were set to be sampled within that range. #### Who are the annotators? The main annotators are - Naoya Chiba (@nchiba) - Ryo Igarashi (@rigarash) ### Personal and Sensitive Information N/A ## Considerations for Using the Data ### Social Impact of Dataset We annotated this dataset, assuming typical physical experiments. The dataset will encourage research on symbolic regression for scientific discovery (SRSD) and help researchers discuss the potential of symbolic regression methods towards data-driven scientific discovery. ### Discussion of Biases Our choices of target equations are based on [the Feynman Symbolic Regression Database](https://space.mit.edu/home/tegmark/aifeynman.html), which is focused on the field of physics. ### Other Known Limitations Some variables used in our datasets indicate some numbers (counts), which should be treated as integers. 
Due to the capacity of a 32-bit integer, however, we treated some of these variables as floats, e.g., the number of molecules (10^{23} - 10^{25}).

## Additional Information

### Dataset Curators

The main curators are
- Naoya Chiba (@nchiba)
- Ryo Igarashi (@rigarash)

### Licensing Information

Creative Commons Attribution 4.0

### Citation Information

[[OpenReview](https://openreview.net/forum?id=qrUdrXsiXX)] [[Video](https://www.youtube.com/watch?v=MmeOXuUUAW0)] [[Preprint](https://arxiv.org/abs/2206.10540)]

```bibtex
@article{matsubara2024rethinking,
  title={Rethinking Symbolic Regression Datasets and Benchmarks for Scientific Discovery},
  author={Matsubara, Yoshitomo and Chiba, Naoya and Igarashi, Ryo and Ushiku, Yoshitaka},
  journal={Journal of Data-centric Machine Learning Research},
  year={2024},
  url={https://openreview.net/forum?id=qrUdrXsiXX}
}
```

### Contributions

Authors:
- Yoshitomo Matsubara (@yoshitomo-matsubara)
- Naoya Chiba (@nchiba)
- Ryo Igarashi (@rigarash)
- Yoshitaka Ushiku (@yushiku)
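As a sketch of how the format described under Data Fields might be handled in practice — a whitespace-delimited split whose last column is the target, plus a pickled sympy expression as the ground truth. The tiny inline table and the expression `x1*x2` here are illustrative stand-ins, not actual files from the repository:

```python
import io
import pickle

import numpy as np
import sympy

# A toy stand-in for one equation's train split: whitespace-delimited rows of
# shape (num_samples, num_variables + 1), last column = target output.
txt = "1.0 2.0 2.0\n3.0 0.5 1.5\n2.0 2.0 4.0\n"
data = np.loadtxt(io.StringIO(txt))
X, y = data[:, :-1], data[:, -1]

# The ground-truth equation is stored as a pickled sympy expression; here we
# pickle and unpickle y = x1 * x2 to illustrate the round trip.
x1, x2 = sympy.symbols("x1 x2")
eq = pickle.loads(pickle.dumps(x1 * x2))

# Check the recovered expression against the tabular data.
f = sympy.lambdify((x1, x2), eq, "numpy")
assert np.allclose(f(X[:, 0], X[:, 1]), y)
```

For the real datasets, `np.loadtxt` would read each split's txt file and `pickle.load` the true-equation file instead of the in-memory stand-ins.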
SalomonMetre13/nnd_fr_26k
---
license: mit
language:
- nnd
task_categories:
- translation
size_categories:
- 10K<n<100K
---

This <span style="color:teal;">parallel corpus</span> contains <span style="color:teal;">26240</span> aligned <span style="color:teal;">Nande-French</span> sentence pairs in a <span style="color:teal;">90:10</span> split for the train and test sets. It has been mainly used to fine-tune the <span style="color:teal;">t5-base</span> pretrained model for the development of <a href="https://huggingface.co/SalomonMetre13/nnd_fr_mt_v3" style="color:green;">this translation model</a>.
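As a rough sketch of the 90:10 split described above (the exact procedure and seed used for the corpus are not documented here, so this is only illustrative, with placeholder sentence pairs):

```python
import random

# 26240 aligned Nande-French pairs (placeholder strings for illustration).
pairs = [(f"nnd sentence {i}", f"phrase fr {i}") for i in range(26240)]

rng = random.Random(0)       # illustrative seed, not the one used upstream
rng.shuffle(pairs)

cut = int(0.9 * len(pairs))  # 90% train, 10% test
train, test = pairs[:cut], pairs[cut:]

print(len(train), len(test))  # 23616 2624
```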
artemsnegirev/blended_skill_talk_ru
---
annotations_creators:
- crowdsourced
language_creators:
- crowdsourced
language:
- ru
license:
- unknown
multilinguality:
- monolingual
pretty_name: BlendedSkillTalk
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- conversational
task_ids:
- dialogue-generation
paperswithcode_id: blended-skill-talk
dataset_info:
  features:
  - name: personas
    sequence: string
  - name: additional_context
    dtype: string
  - name: previous_utterance
    sequence: string
  - name: context
    dtype: string
  - name: free_messages
    sequence: string
  - name: guided_messages
    sequence: string
  - name: suggestions
    sequence:
    - name: convai2
      dtype: string
    - name: empathetic_dialogues
      dtype: string
    - name: wizard_of_wikipedia
      dtype: string
  - name: guided_chosen_suggestions
    sequence: string
  splits:
  - name: train
    num_examples: 4819
  - name: validation
    num_examples: 1009
  - name: test
    num_examples: 980
---

# Dataset Card for "blended_skill_talk"

## Table of Contents

- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Additional Information](#additional-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)

## Dataset Description

- **Homepage:** [https://parl.ai/projects/bst/](https://parl.ai/projects/bst/)
- **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Paper:** [Can You Put it All Together: Evaluating Conversational Agents' Ability to Blend Skills](https://arxiv.org/abs/2004.08449v1)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Dataset Summary

Russian version of the Blended Skill Talk dataset. Each utterance was translated separately by a paid translator.
A dataset of 7k conversations explicitly designed to exhibit multiple conversation modes: displaying personality, having empathy, and demonstrating knowledge.

## Dataset Structure

### Data Instances

An example of 'train' looks as follows.

```
{
  "personas": ["мне все время звонит женщина.", "однажды мне предложили профессионально заниматься баскетболом."],
  "additional_context": "",
  "previous_utterance": ["Я по-настоящему обрадовался, когда мой папа подарил мне мой первый автомобиль. Это было просто счастливое чувство", "Да. Мне знакомо это чувство, хотя мне пришлось купить свой собственный."],
  "context": "empathetic_dialogues",
  "free_messages": ["Автомобиль был именно таким, как я хотел, - спортивная машина новой модели.", "Mustang GT с откидным верхом", "Несколько лет назад я помогал с реставрацией 67-го. Это было просто великолепно."],
  "guided_messages": ["Это был хороший выбор, что это была за машина?", "Мило! Жаль, что вам не удалось заполучить в свои руки \"Мустанг II\" 1963 года выпуска. Это моя любимая машина.", "Это звучит потрясающе. Вы восстановили его вместе со своим отцом?"],
  "suggestions": {
    "convai2": ["я не большой любитель спортивных автомобилей, лол. мне нужна машина, которая может испачкаться", "я не большой любитель спортивных автомобилей, лол. мне нужна машина, которая может испачкаться", "мы с папой восстановили мой, он принадлежал ему."],
    "empathetic_dialogues": ["Это был хороший выбор, рассматривали ли вы какие-либо другие марки / модели?", "О, ничего себе, это потрясающая машина.", "Это звучит как классический автомобиль. Ты часто катался на нем со своим отцом?"],
    "wizard_of_wikipedia": ["Это круто. Мне нравятся экономичные автомобили, потому что они доступны по цене", "Мило! Жаль, что вам не удалось заполучить в свои руки Mustang II 1963 года выпуска, который представляет собой четырехместный концепт-кар.", "Мило! Жаль, что вам не удалось заполучить в свои руки Mustang II 1963 года выпуска, который представляет собой четырехместный концепт-кар."]
  },
  "guided_chosen_suggestions": ["empathetic_dialogues", "wizard_of_wikipedia", "empathetic_dialogues"]
}
```

The original version of the dataset has a "label_candidates" field. It was not translated.

### Data Fields

The data fields are the same among all splits.

- `personas`: a `list` of `string` features.
- `additional_context`: a `string` feature.
- `previous_utterance`: a `list` of `string` features.
- `context`: a `string` feature.
- `free_messages`: a `list` of `string` features.
- `guided_messages`: a `list` of `string` features.
- `suggestions`: a dictionary feature containing:
  - `convai2`: a `string` feature.
  - `empathetic_dialogues`: a `string` feature.
  - `wizard_of_wikipedia`: a `string` feature.
- `guided_chosen_suggestions`: a `list` of `string` features.

### Data Splits

| name    | train | validation | test |
|---------|------:|-----------:|-----:|
| default |  4819 |       1009 |  980 |

## Additional Information

### Citation Information

```
@misc{smith2020evaluating,
    title={Can You Put it All Together: Evaluating Conversational Agents' Ability to Blend Skills},
    author={Eric Michael Smith and Mary Williamson and Kurt Shuster and Jason Weston and Y-Lan Boureau},
    year={2020},
    eprint={2004.08449},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}
```

### Contributions

Thanks to [@artemsnegirev](https://github.com/artemsnegirev), [Dmitriy Sidorenko](https://github.com/DimaSidorenko) for adding this dataset.
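Given the instance schema above, a conversation's alternating turns can be reconstructed by interleaving `free_messages` (the free speaker) with `guided_messages` (the guided speaker) after the seed context. A minimal sketch on a shortened toy instance (not a real dataset row):

```python
# Toy instance with the same fields as the example above (shortened).
example = {
    "previous_utterance": ["привет!", "привет."],
    "free_messages": ["как дела?", "рад слышать."],
    "guided_messages": ["отлично, а у тебя?", "спасибо!"],
}

# Interleave free/guided messages after the seed context to rebuild the dialog.
dialog = list(example["previous_utterance"])
for free, guided in zip(example["free_messages"], example["guided_messages"]):
    dialog.append(free)
    dialog.append(guided)

print(dialog)
```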
patrickNLP/tapilot-crossing
---
license: cc-by-nc-4.0
task_categories:
- text-generation
- table-question-answering
language:
- en
tags:
- code
size_categories:
- 100M<n<1B
---

## Dataset Summary

Tapilot-Crossing is the first benchmark to evaluate LLM agents on interactive data analysis. It includes 1024 user-machine interactions with 1176 user intents, spanning four practical scenarios:

- 1) **Normal**, where all questions and user requirements are explicit, requiring no actions from agents;
- 2) **Action**, where agents must respond to diverse user feedback or instructions;
- 3) **Private**, which examines the true semantic parsing capability of agents when encountering unseen packages during the pre-training phase (Zan et al., 2022); and
- 4) **Private Action**, a mode that combines the features of Private and Action, more closely reflecting real-world data analysis.

There are two answer types:

- 1) **Code Generation**, which can test whether the agent can correctly interpret the user’s query and generate the corresponding code for data analysis, and
- 2) **Multiple-Choice questions**, which can evaluate the agent’s ability to understand the returned results being executed and provide appropriate insights for users.

Additionally, we separately evaluate LLM agents on 6 common actions in data analysis:

- 1) **Update_Code**: this refers to instances where the user requests corrections for bugs or refinements to the conditions of previous queries.
- 2) **Fast_Fail**: an action that alerts users when the current data contents or resources are insufficient to meet their requests, or when user queries contain factual errors.
- 3) **Clarification**: a common action in response to under-specified questions, which are frequent in data-analysis queries. In this action, agents make the conditions of the question more specific and clear by seeking additional information from users.
- 4) **Best_Guess**: while clarification is an effective action to reduce uncertainty, it can lead to issues such as user impatience due to repeated asking, and long dialog histories that result in attention distraction and long-context problems. This action addresses these issues by making appropriate assumptions based on data contents, domain knowledge, and commonsense knowledge for under-specified questions. However, there is also a risk that incorrect guesses can lead to hallucinations.
- 5) **Plot_QA**: in real data analysis settings, agents are also expected to answer user questions about insights derived from plots. The Plot_QA action can assist users in better understanding the contents of plots for decision making.
- 6) **Insight_Mining**: beyond generating code for users to retrieve expected results, interactive data analysis agents are also tasked with summarizing executed results from the environment to assist users in making informed decisions. This process, known as Insight_Mining, plays an important role in data analysis since it contributes to the evolution of code agents into comprehensive data analysis agents.

## History Relational Database

We split the User-AI interaction history into several single-turn user queries and AI answers stored in a relational database (H-RDB), indexed by the conversational order. This storage is subject to dynamic combinations for different scenarios. In this work we only present Tapilot-Alpha, which means all histories are assumed to be clean and explicit (without any actions), while researchers are able to customize their own Tapilot-Crossing by merging diverse types of interaction histories according to their needs. Here, we showcase our processed H-RDB tailored for Tapilot-Alpha.

## Dataset Structure

This section outlines the structure of the `data` directory, detailing its organization across several layers and describing the contents of each.

### 1. Directory structure of interaction_data

- **First Layer: Dataset Domains** \
  The top level of the dataset directory is organized by dataset domains, each representing a specific area of interest. For example, `atp_tennis` is dedicated to tennis matches.
- **Second Layer: Agent Profiles and Interaction Length** \
  The next level categorizes data by agent profiles and the length of interaction:
  - **Long Interactions**: Represented by profiles with extensive experience, such as "Carlos", who is noted for his expertise in organizing tennis tournaments.
  - **Short Interactions**: Denoted by a prefix `short_`, indicating brief interactions with the agent, e.g., `short_Carlos`.
- **Third Layer: Data Modes** \
  Data is further organized into four modes reflecting the nature of the interaction turns:
  - **Action**: Interaction turns featuring special actions (e.g., analysis, clarification requests).
  - **Action_Private**: A private-library version of the Action mode.
  - **Normal**: Interaction turns with clear user intents that do not require any special actions.
  - **Private**: A private-library version of the Normal mode.
- **Fourth Layer: Interaction Turns** \
  The final layer is organized by the interaction turns themselves, providing a granular view of the dataset.

### 2. Content Files of data

- `interaction_data`: Contains all the interaction data of tapilot data. For each interaction turn (e.g., `turn_1_short_1` for short interactions, `turn_1` for long interactions), the following files are included:
  - `meta_data.json`: Contains metadata about the problem, including `result_type`, `action`, and `data_id`.
  - `ref_result` and `pred_result` directories: Store files (pickles or PNGs) generated from the reference and prediction codes, respectively.
  - `prompt_curr.txt`: The basic prompt for the current interaction turn used for querying large language models.
  - `prompt_curr_oracle.txt` (Optional): An enhanced version of the basic prompt that includes oracle private functions for the current interaction turn.
  - `ref_code.txt`: The ground truth solution code for the current interaction turn.
  - `eval.py`: The evaluation code specific to the current interaction turn.
  - `reference_answer.txt` (Optional): The correct answer for the current interaction turn in multi-choice format.
  - `reference` directory: Contains the interaction history and reference code:
    - `prompt_code_hist.json`: Basic prompt with interaction history as a list of dictionaries, adhering to the standard GPT input format.
    - `prompt_code_hist.txt`: Plain-text format of the basic prompt with interaction history.
    - `ref_code_hist.py`: Reference code of the interaction history excluding the current turn's reference code, useful as code context for model-generated code.
    - `ref_code_all.py`: Includes the complete reference code history along with the current turn's code, serving as a comprehensive reference that can generate results for the current turn.
- `resource`: Contains all the tabular data in CSV files for the 5 domains of tapilot data, along with the private library named "decision_company" in Python and JSON format.
- `rdb`: Contains the History Relational Database. We split the User-AI interaction into several single-turn user queries and AI answers stored in a relational database, indexed by the conversational order. This storage is subject to dynamic combinations for different scenarios.

The visualization of the `tapilot_data` directory structure, which should be stored in the **`data/`** folder of this project:
```
data
├── interaction_data
    ├── atp_tennis
        ├── Carlos
        ├── Emma
        ├── John
        ├── short_Carlos
        ├── short_Emma
        └── short_John
            ├── short_action
            ├── short_action_private
            ├── short_normal
            └── short_private
                ├── turn_1_short_1
                ├── turn_2_short_1
                └── turn_2_short_2
                    ├── meta_data.json
                    ├── eval.py
                    ├── ref_code.py
                    ├── prompt_curr_oracle.txt (Optional: only for private mode)
                    ├── prompt_curr.txt
                    ├── ref_result
                    ├── pred_result
                    ├── reference_answer.txt (Optional: only for multi-choice mode)
                    └── reference
                        ├── prompt_code_hist.json
                        ├── prompt_code_hist.txt
                        ├── ref_code_hist.py
                        └── ref_code_all.py (Optional: only for code generation mode)
    ├── credit_card_risk
    ├── fast_food
    ├── laptop_price
    └── melb_housing
├── resource
└── rdb
```

## Other Information

For detailed cases of evaluation with Tapilot-Crossing, please visit https://tapilot-crossing.github.io/.

## Citation

```
@article{li2024tapilot,
  title={Tapilot-Crossing: Benchmarking and Evolving LLMs Towards Interactive Data Analysis Agents},
  author={Li, Jinyang and Huo, Nan and Gao, Yan and Shi, Jiayi and Zhao, Yingxiu and Qu, Ge and Wu, Yurong and Ma, Chenhao and Lou, Jian-Guang and Cheng, Reynold},
  journal={arXiv preprint arXiv:2403.05307},
  year={2024}
}
```
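Given the directory layout described above, all interaction turns can be gathered by walking the tree for `meta_data.json` files. A minimal sketch (the throwaway demo tree and its metadata values are illustrative, not real benchmark content):

```python
import json
import os
import tempfile

def collect_turn_metadata(root):
    """Yield (turn_dir, metadata) for every interaction turn under root."""
    for dirpath, _dirnames, filenames in os.walk(root):
        if "meta_data.json" in filenames:
            with open(os.path.join(dirpath, "meta_data.json")) as f:
                yield dirpath, json.load(f)

# Demo on a throwaway tree mimicking data/interaction_data/<domain>/<profile>/<mode>/<turn>.
root = tempfile.mkdtemp()
turn = os.path.join(root, "atp_tennis", "short_Carlos", "short_normal", "turn_1_short_1")
os.makedirs(turn)
with open(os.path.join(turn, "meta_data.json"), "w") as f:
    json.dump({"result_type": "code", "action": "None", "data_id": 1}, f)

found = dict(collect_turn_metadata(root))
print(found)
```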
Vinnyyw/Otrodiavoice
---
license: openrail
---
open-llm-leaderboard/details_R136a1__InfinityKuno-2x7B
--- pretty_name: Evaluation run of R136a1/InfinityKuno-2x7B dataset_summary: "Dataset automatically created during the evaluation run of model\ \ [R136a1/InfinityKuno-2x7B](https://huggingface.co/R136a1/InfinityKuno-2x7B) on\ \ the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\ \nThe dataset is composed of 63 configuration, each one coresponding to one of the\ \ evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be\ \ found as a specific split in each configuration, the split being named using the\ \ timestamp of the run.The \"train\" split is always pointing to the latest results.\n\ \nAn additional configuration \"results\" store all the aggregated results of the\ \ run (and is used to compute and display the aggregated metrics on the [Open LLM\ \ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\ \nTo load the details from a run, you can for instance do the following:\n```python\n\ from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_R136a1__InfinityKuno-2x7B\"\ ,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\ These are the [latest results from run 2024-03-29T17:53:08.431911](https://huggingface.co/datasets/open-llm-leaderboard/details_R136a1__InfinityKuno-2x7B/blob/main/results_2024-03-29T17-53-08.431911.json)(note\ \ that their might be results for other tasks in the repos if successive evals didn't\ \ cover the same tasks. 
You find each in the results and the \"latest\" split for\ \ each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.6496265424377181,\n\ \ \"acc_stderr\": 0.03214693976951449,\n \"acc_norm\": 0.6497019134375417,\n\ \ \"acc_norm_stderr\": 0.032809875892765086,\n \"mc1\": 0.4663402692778458,\n\ \ \"mc1_stderr\": 0.017463793867168106,\n \"mc2\": 0.6328236726229287,\n\ \ \"mc2_stderr\": 0.015383415147976018\n },\n \"harness|arc:challenge|25\"\ : {\n \"acc\": 0.6715017064846417,\n \"acc_stderr\": 0.013724978465537305,\n\ \ \"acc_norm\": 0.6962457337883959,\n \"acc_norm_stderr\": 0.013438909184778766\n\ \ },\n \"harness|hellaswag|10\": {\n \"acc\": 0.7034455287791277,\n\ \ \"acc_stderr\": 0.004558049018764657,\n \"acc_norm\": 0.8744274048994224,\n\ \ \"acc_norm_stderr\": 0.003306898242234494\n },\n \"harness|hendrycksTest-abstract_algebra|5\"\ : {\n \"acc\": 0.32,\n \"acc_stderr\": 0.046882617226215034,\n \ \ \"acc_norm\": 0.32,\n \"acc_norm_stderr\": 0.046882617226215034\n \ \ },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.6222222222222222,\n\ \ \"acc_stderr\": 0.04188307537595853,\n \"acc_norm\": 0.6222222222222222,\n\ \ \"acc_norm_stderr\": 0.04188307537595853\n },\n \"harness|hendrycksTest-astronomy|5\"\ : {\n \"acc\": 0.6842105263157895,\n \"acc_stderr\": 0.0378272898086547,\n\ \ \"acc_norm\": 0.6842105263157895,\n \"acc_norm_stderr\": 0.0378272898086547\n\ \ },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.61,\n\ \ \"acc_stderr\": 0.04902071300001975,\n \"acc_norm\": 0.61,\n \ \ \"acc_norm_stderr\": 0.04902071300001975\n },\n \"harness|hendrycksTest-clinical_knowledge|5\"\ : {\n \"acc\": 0.6943396226415094,\n \"acc_stderr\": 0.028353298073322666,\n\ \ \"acc_norm\": 0.6943396226415094,\n \"acc_norm_stderr\": 0.028353298073322666\n\ \ },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.75,\n\ \ \"acc_stderr\": 0.03621034121889507,\n \"acc_norm\": 0.75,\n \ \ \"acc_norm_stderr\": 0.03621034121889507\n },\n 
\"harness|hendrycksTest-college_chemistry|5\"\ : {\n \"acc\": 0.48,\n \"acc_stderr\": 0.050211673156867795,\n \ \ \"acc_norm\": 0.48,\n \"acc_norm_stderr\": 0.050211673156867795\n \ \ },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"\ acc\": 0.55,\n \"acc_stderr\": 0.05,\n \"acc_norm\": 0.55,\n \ \ \"acc_norm_stderr\": 0.05\n },\n \"harness|hendrycksTest-college_mathematics|5\"\ : {\n \"acc\": 0.31,\n \"acc_stderr\": 0.04648231987117316,\n \ \ \"acc_norm\": 0.31,\n \"acc_norm_stderr\": 0.04648231987117316\n \ \ },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.6820809248554913,\n\ \ \"acc_stderr\": 0.0355068398916558,\n \"acc_norm\": 0.6820809248554913,\n\ \ \"acc_norm_stderr\": 0.0355068398916558\n },\n \"harness|hendrycksTest-college_physics|5\"\ : {\n \"acc\": 0.43137254901960786,\n \"acc_stderr\": 0.04928099597287534,\n\ \ \"acc_norm\": 0.43137254901960786,\n \"acc_norm_stderr\": 0.04928099597287534\n\ \ },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\":\ \ 0.79,\n \"acc_stderr\": 0.04093601807403326,\n \"acc_norm\": 0.79,\n\ \ \"acc_norm_stderr\": 0.04093601807403326\n },\n \"harness|hendrycksTest-conceptual_physics|5\"\ : {\n \"acc\": 0.5829787234042553,\n \"acc_stderr\": 0.03223276266711712,\n\ \ \"acc_norm\": 0.5829787234042553,\n \"acc_norm_stderr\": 0.03223276266711712\n\ \ },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.49122807017543857,\n\ \ \"acc_stderr\": 0.04702880432049615,\n \"acc_norm\": 0.49122807017543857,\n\ \ \"acc_norm_stderr\": 0.04702880432049615\n },\n \"harness|hendrycksTest-electrical_engineering|5\"\ : {\n \"acc\": 0.5724137931034483,\n \"acc_stderr\": 0.041227371113703316,\n\ \ \"acc_norm\": 0.5724137931034483,\n \"acc_norm_stderr\": 0.041227371113703316\n\ \ },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\"\ : 0.41005291005291006,\n \"acc_stderr\": 0.02533120243894443,\n \"\ acc_norm\": 0.41005291005291006,\n \"acc_norm_stderr\": 0.02533120243894443\n\ \ 
},\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.4444444444444444,\n\ \ \"acc_stderr\": 0.04444444444444449,\n \"acc_norm\": 0.4444444444444444,\n\ \ \"acc_norm_stderr\": 0.04444444444444449\n },\n \"harness|hendrycksTest-global_facts|5\"\ : {\n \"acc\": 0.34,\n \"acc_stderr\": 0.04760952285695235,\n \ \ \"acc_norm\": 0.34,\n \"acc_norm_stderr\": 0.04760952285695235\n \ \ },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.7967741935483871,\n\ \ \"acc_stderr\": 0.022891687984554963,\n \"acc_norm\": 0.7967741935483871,\n\ \ \"acc_norm_stderr\": 0.022891687984554963\n },\n \"harness|hendrycksTest-high_school_chemistry|5\"\ : {\n \"acc\": 0.5024630541871922,\n \"acc_stderr\": 0.03517945038691063,\n\ \ \"acc_norm\": 0.5024630541871922,\n \"acc_norm_stderr\": 0.03517945038691063\n\ \ },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \ \ \"acc\": 0.66,\n \"acc_stderr\": 0.04760952285695237,\n \"acc_norm\"\ : 0.66,\n \"acc_norm_stderr\": 0.04760952285695237\n },\n \"harness|hendrycksTest-high_school_european_history|5\"\ : {\n \"acc\": 0.7878787878787878,\n \"acc_stderr\": 0.03192271569548301,\n\ \ \"acc_norm\": 0.7878787878787878,\n \"acc_norm_stderr\": 0.03192271569548301\n\ \ },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\"\ : 0.797979797979798,\n \"acc_stderr\": 0.028606204289229872,\n \"\ acc_norm\": 0.797979797979798,\n \"acc_norm_stderr\": 0.028606204289229872\n\ \ },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n\ \ \"acc\": 0.8963730569948186,\n \"acc_stderr\": 0.02199531196364424,\n\ \ \"acc_norm\": 0.8963730569948186,\n \"acc_norm_stderr\": 0.02199531196364424\n\ \ },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \ \ \"acc\": 0.6666666666666666,\n \"acc_stderr\": 0.023901157979402538,\n\ \ \"acc_norm\": 0.6666666666666666,\n \"acc_norm_stderr\": 0.023901157979402538\n\ \ },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"\ acc\": 
0.34444444444444444,\n \"acc_stderr\": 0.028972648884844267,\n \ \ \"acc_norm\": 0.34444444444444444,\n \"acc_norm_stderr\": 0.028972648884844267\n\ \ },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \ \ \"acc\": 0.7016806722689075,\n \"acc_stderr\": 0.02971914287634285,\n \ \ \"acc_norm\": 0.7016806722689075,\n \"acc_norm_stderr\": 0.02971914287634285\n\ \ },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\"\ : 0.36423841059602646,\n \"acc_stderr\": 0.03929111781242742,\n \"\ acc_norm\": 0.36423841059602646,\n \"acc_norm_stderr\": 0.03929111781242742\n\ \ },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\"\ : 0.8440366972477065,\n \"acc_stderr\": 0.01555580271359017,\n \"\ acc_norm\": 0.8440366972477065,\n \"acc_norm_stderr\": 0.01555580271359017\n\ \ },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\"\ : 0.5231481481481481,\n \"acc_stderr\": 0.03406315360711507,\n \"\ acc_norm\": 0.5231481481481481,\n \"acc_norm_stderr\": 0.03406315360711507\n\ \ },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\"\ : 0.8186274509803921,\n \"acc_stderr\": 0.027044621719474082,\n \"\ acc_norm\": 0.8186274509803921,\n \"acc_norm_stderr\": 0.027044621719474082\n\ \ },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"\ acc\": 0.7721518987341772,\n \"acc_stderr\": 0.027303484599069436,\n \ \ \"acc_norm\": 0.7721518987341772,\n \"acc_norm_stderr\": 0.027303484599069436\n\ \ },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.6771300448430493,\n\ \ \"acc_stderr\": 0.03138147637575499,\n \"acc_norm\": 0.6771300448430493,\n\ \ \"acc_norm_stderr\": 0.03138147637575499\n },\n \"harness|hendrycksTest-human_sexuality|5\"\ : {\n \"acc\": 0.7633587786259542,\n \"acc_stderr\": 0.03727673575596913,\n\ \ \"acc_norm\": 0.7633587786259542,\n \"acc_norm_stderr\": 0.03727673575596913\n\ \ },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\":\ \ 0.8099173553719008,\n \"acc_stderr\": 
0.03581796951709282,\n \"\ acc_norm\": 0.8099173553719008,\n \"acc_norm_stderr\": 0.03581796951709282\n\ \ },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.7777777777777778,\n\ \ \"acc_stderr\": 0.040191074725573483,\n \"acc_norm\": 0.7777777777777778,\n\ \ \"acc_norm_stderr\": 0.040191074725573483\n },\n \"harness|hendrycksTest-logical_fallacies|5\"\ : {\n \"acc\": 0.7484662576687117,\n \"acc_stderr\": 0.03408997886857529,\n\ \ \"acc_norm\": 0.7484662576687117,\n \"acc_norm_stderr\": 0.03408997886857529\n\ \ },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.42857142857142855,\n\ \ \"acc_stderr\": 0.04697113923010212,\n \"acc_norm\": 0.42857142857142855,\n\ \ \"acc_norm_stderr\": 0.04697113923010212\n },\n \"harness|hendrycksTest-management|5\"\ : {\n \"acc\": 0.7766990291262136,\n \"acc_stderr\": 0.04123553189891431,\n\ \ \"acc_norm\": 0.7766990291262136,\n \"acc_norm_stderr\": 0.04123553189891431\n\ \ },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.8974358974358975,\n\ \ \"acc_stderr\": 0.01987565502786744,\n \"acc_norm\": 0.8974358974358975,\n\ \ \"acc_norm_stderr\": 0.01987565502786744\n },\n \"harness|hendrycksTest-medical_genetics|5\"\ : {\n \"acc\": 0.7,\n \"acc_stderr\": 0.046056618647183814,\n \ \ \"acc_norm\": 0.7,\n \"acc_norm_stderr\": 0.046056618647183814\n \ \ },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.8314176245210728,\n\ \ \"acc_stderr\": 0.013387895731543604,\n \"acc_norm\": 0.8314176245210728,\n\ \ \"acc_norm_stderr\": 0.013387895731543604\n },\n \"harness|hendrycksTest-moral_disputes|5\"\ : {\n \"acc\": 0.7283236994219653,\n \"acc_stderr\": 0.02394851290546837,\n\ \ \"acc_norm\": 0.7283236994219653,\n \"acc_norm_stderr\": 0.02394851290546837\n\ \ },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.41899441340782123,\n\ \ \"acc_stderr\": 0.016501579306861677,\n \"acc_norm\": 0.41899441340782123,\n\ \ \"acc_norm_stderr\": 0.016501579306861677\n },\n 
\"harness|hendrycksTest-nutrition|5\"\ : {\n \"acc\": 0.7352941176470589,\n \"acc_stderr\": 0.02526169121972948,\n\ \ \"acc_norm\": 0.7352941176470589,\n \"acc_norm_stderr\": 0.02526169121972948\n\ \ },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.7106109324758842,\n\ \ \"acc_stderr\": 0.025755865922632945,\n \"acc_norm\": 0.7106109324758842,\n\ \ \"acc_norm_stderr\": 0.025755865922632945\n },\n \"harness|hendrycksTest-prehistory|5\"\ : {\n \"acc\": 0.7438271604938271,\n \"acc_stderr\": 0.0242885336377261,\n\ \ \"acc_norm\": 0.7438271604938271,\n \"acc_norm_stderr\": 0.0242885336377261\n\ \ },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"\ acc\": 0.4787234042553192,\n \"acc_stderr\": 0.029800481645628693,\n \ \ \"acc_norm\": 0.4787234042553192,\n \"acc_norm_stderr\": 0.029800481645628693\n\ \ },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.46479791395045633,\n\ \ \"acc_stderr\": 0.012738547371303957,\n \"acc_norm\": 0.46479791395045633,\n\ \ \"acc_norm_stderr\": 0.012738547371303957\n },\n \"harness|hendrycksTest-professional_medicine|5\"\ : {\n \"acc\": 0.6875,\n \"acc_stderr\": 0.02815637344037142,\n \ \ \"acc_norm\": 0.6875,\n \"acc_norm_stderr\": 0.02815637344037142\n\ \ },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"\ acc\": 0.6715686274509803,\n \"acc_stderr\": 0.018999707383162673,\n \ \ \"acc_norm\": 0.6715686274509803,\n \"acc_norm_stderr\": 0.018999707383162673\n\ \ },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.6909090909090909,\n\ \ \"acc_stderr\": 0.044262946482000985,\n \"acc_norm\": 0.6909090909090909,\n\ \ \"acc_norm_stderr\": 0.044262946482000985\n },\n \"harness|hendrycksTest-security_studies|5\"\ : {\n \"acc\": 0.7306122448979592,\n \"acc_stderr\": 0.02840125202902294,\n\ \ \"acc_norm\": 0.7306122448979592,\n \"acc_norm_stderr\": 0.02840125202902294\n\ \ },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.845771144278607,\n\ \ \"acc_stderr\": 
0.02553843336857833,\n \"acc_norm\": 0.845771144278607,\n\ \ \"acc_norm_stderr\": 0.02553843336857833\n },\n \"harness|hendrycksTest-us_foreign_policy|5\"\ : {\n \"acc\": 0.83,\n \"acc_stderr\": 0.0377525168068637,\n \ \ \"acc_norm\": 0.83,\n \"acc_norm_stderr\": 0.0377525168068637\n },\n\ \ \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.5301204819277109,\n\ \ \"acc_stderr\": 0.03885425420866767,\n \"acc_norm\": 0.5301204819277109,\n\ \ \"acc_norm_stderr\": 0.03885425420866767\n },\n \"harness|hendrycksTest-world_religions|5\"\ : {\n \"acc\": 0.8421052631578947,\n \"acc_stderr\": 0.027966785859160893,\n\ \ \"acc_norm\": 0.8421052631578947,\n \"acc_norm_stderr\": 0.027966785859160893\n\ \ },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.4663402692778458,\n\ \ \"mc1_stderr\": 0.017463793867168106,\n \"mc2\": 0.6328236726229287,\n\ \ \"mc2_stderr\": 0.015383415147976018\n },\n \"harness|winogrande|5\"\ : {\n \"acc\": 0.8271507498026835,\n \"acc_stderr\": 0.010626964529971854\n\ \ },\n \"harness|gsm8k|5\": {\n \"acc\": 0.6633813495072024,\n \ \ \"acc_stderr\": 0.01301646367998336\n }\n}\n```" repo_url: https://huggingface.co/R136a1/InfinityKuno-2x7B leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard point_of_contact: clementine@hf.co configs: - config_name: harness_arc_challenge_25 data_files: - split: 2024_03_29T17_53_08.431911 path: - '**/details_harness|arc:challenge|25_2024-03-29T17-53-08.431911.parquet' - split: latest path: - '**/details_harness|arc:challenge|25_2024-03-29T17-53-08.431911.parquet' - config_name: harness_gsm8k_5 data_files: - split: 2024_03_29T17_53_08.431911 path: - '**/details_harness|gsm8k|5_2024-03-29T17-53-08.431911.parquet' - split: latest path: - '**/details_harness|gsm8k|5_2024-03-29T17-53-08.431911.parquet' - config_name: harness_hellaswag_10 data_files: - split: 2024_03_29T17_53_08.431911 path: - '**/details_harness|hellaswag|10_2024-03-29T17-53-08.431911.parquet' - split: latest path: - 
'**/details_harness|hellaswag|10_2024-03-29T17-53-08.431911.parquet' - config_name: harness_hendrycksTest_5 data_files: - split: 2024_03_29T17_53_08.431911 path: - '**/details_harness|hendrycksTest-abstract_algebra|5_2024-03-29T17-53-08.431911.parquet' - '**/details_harness|hendrycksTest-anatomy|5_2024-03-29T17-53-08.431911.parquet' - '**/details_harness|hendrycksTest-astronomy|5_2024-03-29T17-53-08.431911.parquet' - '**/details_harness|hendrycksTest-business_ethics|5_2024-03-29T17-53-08.431911.parquet' - '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-03-29T17-53-08.431911.parquet' - '**/details_harness|hendrycksTest-college_biology|5_2024-03-29T17-53-08.431911.parquet' - '**/details_harness|hendrycksTest-college_chemistry|5_2024-03-29T17-53-08.431911.parquet' - '**/details_harness|hendrycksTest-college_computer_science|5_2024-03-29T17-53-08.431911.parquet' - '**/details_harness|hendrycksTest-college_mathematics|5_2024-03-29T17-53-08.431911.parquet' - '**/details_harness|hendrycksTest-college_medicine|5_2024-03-29T17-53-08.431911.parquet' - '**/details_harness|hendrycksTest-college_physics|5_2024-03-29T17-53-08.431911.parquet' - '**/details_harness|hendrycksTest-computer_security|5_2024-03-29T17-53-08.431911.parquet' - '**/details_harness|hendrycksTest-conceptual_physics|5_2024-03-29T17-53-08.431911.parquet' - '**/details_harness|hendrycksTest-econometrics|5_2024-03-29T17-53-08.431911.parquet' - '**/details_harness|hendrycksTest-electrical_engineering|5_2024-03-29T17-53-08.431911.parquet' - '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-03-29T17-53-08.431911.parquet' - '**/details_harness|hendrycksTest-formal_logic|5_2024-03-29T17-53-08.431911.parquet' - '**/details_harness|hendrycksTest-global_facts|5_2024-03-29T17-53-08.431911.parquet' - '**/details_harness|hendrycksTest-high_school_biology|5_2024-03-29T17-53-08.431911.parquet' - '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-03-29T17-53-08.431911.parquet' - 
'**/details_harness|hendrycksTest-high_school_computer_science|5_2024-03-29T17-53-08.431911.parquet' - '**/details_harness|hendrycksTest-high_school_european_history|5_2024-03-29T17-53-08.431911.parquet' - '**/details_harness|hendrycksTest-high_school_geography|5_2024-03-29T17-53-08.431911.parquet' - '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-03-29T17-53-08.431911.parquet' - '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-03-29T17-53-08.431911.parquet' - '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-03-29T17-53-08.431911.parquet' - '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-03-29T17-53-08.431911.parquet' - '**/details_harness|hendrycksTest-high_school_physics|5_2024-03-29T17-53-08.431911.parquet' - '**/details_harness|hendrycksTest-high_school_psychology|5_2024-03-29T17-53-08.431911.parquet' - '**/details_harness|hendrycksTest-high_school_statistics|5_2024-03-29T17-53-08.431911.parquet' - '**/details_harness|hendrycksTest-high_school_us_history|5_2024-03-29T17-53-08.431911.parquet' - '**/details_harness|hendrycksTest-high_school_world_history|5_2024-03-29T17-53-08.431911.parquet' - '**/details_harness|hendrycksTest-human_aging|5_2024-03-29T17-53-08.431911.parquet' - '**/details_harness|hendrycksTest-human_sexuality|5_2024-03-29T17-53-08.431911.parquet' - '**/details_harness|hendrycksTest-international_law|5_2024-03-29T17-53-08.431911.parquet' - '**/details_harness|hendrycksTest-jurisprudence|5_2024-03-29T17-53-08.431911.parquet' - '**/details_harness|hendrycksTest-logical_fallacies|5_2024-03-29T17-53-08.431911.parquet' - '**/details_harness|hendrycksTest-machine_learning|5_2024-03-29T17-53-08.431911.parquet' - '**/details_harness|hendrycksTest-management|5_2024-03-29T17-53-08.431911.parquet' - '**/details_harness|hendrycksTest-marketing|5_2024-03-29T17-53-08.431911.parquet' - '**/details_harness|hendrycksTest-medical_genetics|5_2024-03-29T17-53-08.431911.parquet' - 
'**/details_harness|hendrycksTest-miscellaneous|5_2024-03-29T17-53-08.431911.parquet' - '**/details_harness|hendrycksTest-moral_disputes|5_2024-03-29T17-53-08.431911.parquet' - '**/details_harness|hendrycksTest-moral_scenarios|5_2024-03-29T17-53-08.431911.parquet' - '**/details_harness|hendrycksTest-nutrition|5_2024-03-29T17-53-08.431911.parquet' - '**/details_harness|hendrycksTest-philosophy|5_2024-03-29T17-53-08.431911.parquet' - '**/details_harness|hendrycksTest-prehistory|5_2024-03-29T17-53-08.431911.parquet' - '**/details_harness|hendrycksTest-professional_accounting|5_2024-03-29T17-53-08.431911.parquet' - '**/details_harness|hendrycksTest-professional_law|5_2024-03-29T17-53-08.431911.parquet' - '**/details_harness|hendrycksTest-professional_medicine|5_2024-03-29T17-53-08.431911.parquet' - '**/details_harness|hendrycksTest-professional_psychology|5_2024-03-29T17-53-08.431911.parquet' - '**/details_harness|hendrycksTest-public_relations|5_2024-03-29T17-53-08.431911.parquet' - '**/details_harness|hendrycksTest-security_studies|5_2024-03-29T17-53-08.431911.parquet' - '**/details_harness|hendrycksTest-sociology|5_2024-03-29T17-53-08.431911.parquet' - '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-03-29T17-53-08.431911.parquet' - '**/details_harness|hendrycksTest-virology|5_2024-03-29T17-53-08.431911.parquet' - '**/details_harness|hendrycksTest-world_religions|5_2024-03-29T17-53-08.431911.parquet' - split: latest path: - '**/details_harness|hendrycksTest-abstract_algebra|5_2024-03-29T17-53-08.431911.parquet' - '**/details_harness|hendrycksTest-anatomy|5_2024-03-29T17-53-08.431911.parquet' - '**/details_harness|hendrycksTest-astronomy|5_2024-03-29T17-53-08.431911.parquet' - '**/details_harness|hendrycksTest-business_ethics|5_2024-03-29T17-53-08.431911.parquet' - '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-03-29T17-53-08.431911.parquet' - '**/details_harness|hendrycksTest-college_biology|5_2024-03-29T17-53-08.431911.parquet' - 
'**/details_harness|hendrycksTest-college_chemistry|5_2024-03-29T17-53-08.431911.parquet' - '**/details_harness|hendrycksTest-college_computer_science|5_2024-03-29T17-53-08.431911.parquet' - '**/details_harness|hendrycksTest-college_mathematics|5_2024-03-29T17-53-08.431911.parquet' - '**/details_harness|hendrycksTest-college_medicine|5_2024-03-29T17-53-08.431911.parquet' - '**/details_harness|hendrycksTest-college_physics|5_2024-03-29T17-53-08.431911.parquet' - '**/details_harness|hendrycksTest-computer_security|5_2024-03-29T17-53-08.431911.parquet' - '**/details_harness|hendrycksTest-conceptual_physics|5_2024-03-29T17-53-08.431911.parquet' - '**/details_harness|hendrycksTest-econometrics|5_2024-03-29T17-53-08.431911.parquet' - '**/details_harness|hendrycksTest-electrical_engineering|5_2024-03-29T17-53-08.431911.parquet' - '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-03-29T17-53-08.431911.parquet' - '**/details_harness|hendrycksTest-formal_logic|5_2024-03-29T17-53-08.431911.parquet' - '**/details_harness|hendrycksTest-global_facts|5_2024-03-29T17-53-08.431911.parquet' - '**/details_harness|hendrycksTest-high_school_biology|5_2024-03-29T17-53-08.431911.parquet' - '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-03-29T17-53-08.431911.parquet' - '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-03-29T17-53-08.431911.parquet' - '**/details_harness|hendrycksTest-high_school_european_history|5_2024-03-29T17-53-08.431911.parquet' - '**/details_harness|hendrycksTest-high_school_geography|5_2024-03-29T17-53-08.431911.parquet' - '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-03-29T17-53-08.431911.parquet' - '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-03-29T17-53-08.431911.parquet' - '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-03-29T17-53-08.431911.parquet' - 
'**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-03-29T17-53-08.431911.parquet' - '**/details_harness|hendrycksTest-high_school_physics|5_2024-03-29T17-53-08.431911.parquet' - '**/details_harness|hendrycksTest-high_school_psychology|5_2024-03-29T17-53-08.431911.parquet' - '**/details_harness|hendrycksTest-high_school_statistics|5_2024-03-29T17-53-08.431911.parquet' - '**/details_harness|hendrycksTest-high_school_us_history|5_2024-03-29T17-53-08.431911.parquet' - '**/details_harness|hendrycksTest-high_school_world_history|5_2024-03-29T17-53-08.431911.parquet' - '**/details_harness|hendrycksTest-human_aging|5_2024-03-29T17-53-08.431911.parquet' - '**/details_harness|hendrycksTest-human_sexuality|5_2024-03-29T17-53-08.431911.parquet' - '**/details_harness|hendrycksTest-international_law|5_2024-03-29T17-53-08.431911.parquet' - '**/details_harness|hendrycksTest-jurisprudence|5_2024-03-29T17-53-08.431911.parquet' - '**/details_harness|hendrycksTest-logical_fallacies|5_2024-03-29T17-53-08.431911.parquet' - '**/details_harness|hendrycksTest-machine_learning|5_2024-03-29T17-53-08.431911.parquet' - '**/details_harness|hendrycksTest-management|5_2024-03-29T17-53-08.431911.parquet' - '**/details_harness|hendrycksTest-marketing|5_2024-03-29T17-53-08.431911.parquet' - '**/details_harness|hendrycksTest-medical_genetics|5_2024-03-29T17-53-08.431911.parquet' - '**/details_harness|hendrycksTest-miscellaneous|5_2024-03-29T17-53-08.431911.parquet' - '**/details_harness|hendrycksTest-moral_disputes|5_2024-03-29T17-53-08.431911.parquet' - '**/details_harness|hendrycksTest-moral_scenarios|5_2024-03-29T17-53-08.431911.parquet' - '**/details_harness|hendrycksTest-nutrition|5_2024-03-29T17-53-08.431911.parquet' - '**/details_harness|hendrycksTest-philosophy|5_2024-03-29T17-53-08.431911.parquet' - '**/details_harness|hendrycksTest-prehistory|5_2024-03-29T17-53-08.431911.parquet' - 
'**/details_harness|hendrycksTest-professional_accounting|5_2024-03-29T17-53-08.431911.parquet' - '**/details_harness|hendrycksTest-professional_law|5_2024-03-29T17-53-08.431911.parquet' - '**/details_harness|hendrycksTest-professional_medicine|5_2024-03-29T17-53-08.431911.parquet' - '**/details_harness|hendrycksTest-professional_psychology|5_2024-03-29T17-53-08.431911.parquet' - '**/details_harness|hendrycksTest-public_relations|5_2024-03-29T17-53-08.431911.parquet' - '**/details_harness|hendrycksTest-security_studies|5_2024-03-29T17-53-08.431911.parquet' - '**/details_harness|hendrycksTest-sociology|5_2024-03-29T17-53-08.431911.parquet' - '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-03-29T17-53-08.431911.parquet' - '**/details_harness|hendrycksTest-virology|5_2024-03-29T17-53-08.431911.parquet' - '**/details_harness|hendrycksTest-world_religions|5_2024-03-29T17-53-08.431911.parquet' - config_name: harness_hendrycksTest_abstract_algebra_5 data_files: - split: 2024_03_29T17_53_08.431911 path: - '**/details_harness|hendrycksTest-abstract_algebra|5_2024-03-29T17-53-08.431911.parquet' - split: latest path: - '**/details_harness|hendrycksTest-abstract_algebra|5_2024-03-29T17-53-08.431911.parquet' - config_name: harness_hendrycksTest_anatomy_5 data_files: - split: 2024_03_29T17_53_08.431911 path: - '**/details_harness|hendrycksTest-anatomy|5_2024-03-29T17-53-08.431911.parquet' - split: latest path: - '**/details_harness|hendrycksTest-anatomy|5_2024-03-29T17-53-08.431911.parquet' - config_name: harness_hendrycksTest_astronomy_5 data_files: - split: 2024_03_29T17_53_08.431911 path: - '**/details_harness|hendrycksTest-astronomy|5_2024-03-29T17-53-08.431911.parquet' - split: latest path: - '**/details_harness|hendrycksTest-astronomy|5_2024-03-29T17-53-08.431911.parquet' - config_name: harness_hendrycksTest_business_ethics_5 data_files: - split: 2024_03_29T17_53_08.431911 path: - 
'**/details_harness|hendrycksTest-business_ethics|5_2024-03-29T17-53-08.431911.parquet' - split: latest path: - '**/details_harness|hendrycksTest-business_ethics|5_2024-03-29T17-53-08.431911.parquet' - config_name: harness_hendrycksTest_clinical_knowledge_5 data_files: - split: 2024_03_29T17_53_08.431911 path: - '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-03-29T17-53-08.431911.parquet' - split: latest path: - '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-03-29T17-53-08.431911.parquet' - config_name: harness_hendrycksTest_college_biology_5 data_files: - split: 2024_03_29T17_53_08.431911 path: - '**/details_harness|hendrycksTest-college_biology|5_2024-03-29T17-53-08.431911.parquet' - split: latest path: - '**/details_harness|hendrycksTest-college_biology|5_2024-03-29T17-53-08.431911.parquet' - config_name: harness_hendrycksTest_college_chemistry_5 data_files: - split: 2024_03_29T17_53_08.431911 path: - '**/details_harness|hendrycksTest-college_chemistry|5_2024-03-29T17-53-08.431911.parquet' - split: latest path: - '**/details_harness|hendrycksTest-college_chemistry|5_2024-03-29T17-53-08.431911.parquet' - config_name: harness_hendrycksTest_college_computer_science_5 data_files: - split: 2024_03_29T17_53_08.431911 path: - '**/details_harness|hendrycksTest-college_computer_science|5_2024-03-29T17-53-08.431911.parquet' - split: latest path: - '**/details_harness|hendrycksTest-college_computer_science|5_2024-03-29T17-53-08.431911.parquet' - config_name: harness_hendrycksTest_college_mathematics_5 data_files: - split: 2024_03_29T17_53_08.431911 path: - '**/details_harness|hendrycksTest-college_mathematics|5_2024-03-29T17-53-08.431911.parquet' - split: latest path: - '**/details_harness|hendrycksTest-college_mathematics|5_2024-03-29T17-53-08.431911.parquet' - config_name: harness_hendrycksTest_college_medicine_5 data_files: - split: 2024_03_29T17_53_08.431911 path: - 
'**/details_harness|hendrycksTest-college_medicine|5_2024-03-29T17-53-08.431911.parquet' - split: latest path: - '**/details_harness|hendrycksTest-college_medicine|5_2024-03-29T17-53-08.431911.parquet' - config_name: harness_hendrycksTest_college_physics_5 data_files: - split: 2024_03_29T17_53_08.431911 path: - '**/details_harness|hendrycksTest-college_physics|5_2024-03-29T17-53-08.431911.parquet' - split: latest path: - '**/details_harness|hendrycksTest-college_physics|5_2024-03-29T17-53-08.431911.parquet' - config_name: harness_hendrycksTest_computer_security_5 data_files: - split: 2024_03_29T17_53_08.431911 path: - '**/details_harness|hendrycksTest-computer_security|5_2024-03-29T17-53-08.431911.parquet' - split: latest path: - '**/details_harness|hendrycksTest-computer_security|5_2024-03-29T17-53-08.431911.parquet' - config_name: harness_hendrycksTest_conceptual_physics_5 data_files: - split: 2024_03_29T17_53_08.431911 path: - '**/details_harness|hendrycksTest-conceptual_physics|5_2024-03-29T17-53-08.431911.parquet' - split: latest path: - '**/details_harness|hendrycksTest-conceptual_physics|5_2024-03-29T17-53-08.431911.parquet' - config_name: harness_hendrycksTest_econometrics_5 data_files: - split: 2024_03_29T17_53_08.431911 path: - '**/details_harness|hendrycksTest-econometrics|5_2024-03-29T17-53-08.431911.parquet' - split: latest path: - '**/details_harness|hendrycksTest-econometrics|5_2024-03-29T17-53-08.431911.parquet' - config_name: harness_hendrycksTest_electrical_engineering_5 data_files: - split: 2024_03_29T17_53_08.431911 path: - '**/details_harness|hendrycksTest-electrical_engineering|5_2024-03-29T17-53-08.431911.parquet' - split: latest path: - '**/details_harness|hendrycksTest-electrical_engineering|5_2024-03-29T17-53-08.431911.parquet' - config_name: harness_hendrycksTest_elementary_mathematics_5 data_files: - split: 2024_03_29T17_53_08.431911 path: - '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-03-29T17-53-08.431911.parquet' - 
split: latest path: - '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-03-29T17-53-08.431911.parquet' - config_name: harness_hendrycksTest_formal_logic_5 data_files: - split: 2024_03_29T17_53_08.431911 path: - '**/details_harness|hendrycksTest-formal_logic|5_2024-03-29T17-53-08.431911.parquet' - split: latest path: - '**/details_harness|hendrycksTest-formal_logic|5_2024-03-29T17-53-08.431911.parquet' - config_name: harness_hendrycksTest_global_facts_5 data_files: - split: 2024_03_29T17_53_08.431911 path: - '**/details_harness|hendrycksTest-global_facts|5_2024-03-29T17-53-08.431911.parquet' - split: latest path: - '**/details_harness|hendrycksTest-global_facts|5_2024-03-29T17-53-08.431911.parquet' - config_name: harness_hendrycksTest_high_school_biology_5 data_files: - split: 2024_03_29T17_53_08.431911 path: - '**/details_harness|hendrycksTest-high_school_biology|5_2024-03-29T17-53-08.431911.parquet' - split: latest path: - '**/details_harness|hendrycksTest-high_school_biology|5_2024-03-29T17-53-08.431911.parquet' - config_name: harness_hendrycksTest_high_school_chemistry_5 data_files: - split: 2024_03_29T17_53_08.431911 path: - '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-03-29T17-53-08.431911.parquet' - split: latest path: - '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-03-29T17-53-08.431911.parquet' - config_name: harness_hendrycksTest_high_school_computer_science_5 data_files: - split: 2024_03_29T17_53_08.431911 path: - '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-03-29T17-53-08.431911.parquet' - split: latest path: - '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-03-29T17-53-08.431911.parquet' - config_name: harness_hendrycksTest_high_school_european_history_5 data_files: - split: 2024_03_29T17_53_08.431911 path: - '**/details_harness|hendrycksTest-high_school_european_history|5_2024-03-29T17-53-08.431911.parquet' - split: latest path: - 
'**/details_harness|hendrycksTest-high_school_european_history|5_2024-03-29T17-53-08.431911.parquet' - config_name: harness_hendrycksTest_high_school_geography_5 data_files: - split: 2024_03_29T17_53_08.431911 path: - '**/details_harness|hendrycksTest-high_school_geography|5_2024-03-29T17-53-08.431911.parquet' - split: latest path: - '**/details_harness|hendrycksTest-high_school_geography|5_2024-03-29T17-53-08.431911.parquet' - config_name: harness_hendrycksTest_high_school_government_and_politics_5 data_files: - split: 2024_03_29T17_53_08.431911 path: - '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-03-29T17-53-08.431911.parquet' - split: latest path: - '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-03-29T17-53-08.431911.parquet' - config_name: harness_hendrycksTest_high_school_macroeconomics_5 data_files: - split: 2024_03_29T17_53_08.431911 path: - '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-03-29T17-53-08.431911.parquet' - split: latest path: - '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-03-29T17-53-08.431911.parquet' - config_name: harness_hendrycksTest_high_school_mathematics_5 data_files: - split: 2024_03_29T17_53_08.431911 path: - '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-03-29T17-53-08.431911.parquet' - split: latest path: - '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-03-29T17-53-08.431911.parquet' - config_name: harness_hendrycksTest_high_school_microeconomics_5 data_files: - split: 2024_03_29T17_53_08.431911 path: - '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-03-29T17-53-08.431911.parquet' - split: latest path: - '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-03-29T17-53-08.431911.parquet' - config_name: harness_hendrycksTest_high_school_physics_5 data_files: - split: 2024_03_29T17_53_08.431911 path: - 
'**/details_harness|hendrycksTest-high_school_physics|5_2024-03-29T17-53-08.431911.parquet' - split: latest path: - '**/details_harness|hendrycksTest-high_school_physics|5_2024-03-29T17-53-08.431911.parquet' - config_name: harness_hendrycksTest_high_school_psychology_5 data_files: - split: 2024_03_29T17_53_08.431911 path: - '**/details_harness|hendrycksTest-high_school_psychology|5_2024-03-29T17-53-08.431911.parquet' - split: latest path: - '**/details_harness|hendrycksTest-high_school_psychology|5_2024-03-29T17-53-08.431911.parquet' - config_name: harness_hendrycksTest_high_school_statistics_5 data_files: - split: 2024_03_29T17_53_08.431911 path: - '**/details_harness|hendrycksTest-high_school_statistics|5_2024-03-29T17-53-08.431911.parquet' - split: latest path: - '**/details_harness|hendrycksTest-high_school_statistics|5_2024-03-29T17-53-08.431911.parquet' - config_name: harness_hendrycksTest_high_school_us_history_5 data_files: - split: 2024_03_29T17_53_08.431911 path: - '**/details_harness|hendrycksTest-high_school_us_history|5_2024-03-29T17-53-08.431911.parquet' - split: latest path: - '**/details_harness|hendrycksTest-high_school_us_history|5_2024-03-29T17-53-08.431911.parquet' - config_name: harness_hendrycksTest_high_school_world_history_5 data_files: - split: 2024_03_29T17_53_08.431911 path: - '**/details_harness|hendrycksTest-high_school_world_history|5_2024-03-29T17-53-08.431911.parquet' - split: latest path: - '**/details_harness|hendrycksTest-high_school_world_history|5_2024-03-29T17-53-08.431911.parquet' - config_name: harness_hendrycksTest_human_aging_5 data_files: - split: 2024_03_29T17_53_08.431911 path: - '**/details_harness|hendrycksTest-human_aging|5_2024-03-29T17-53-08.431911.parquet' - split: latest path: - '**/details_harness|hendrycksTest-human_aging|5_2024-03-29T17-53-08.431911.parquet' - config_name: harness_hendrycksTest_human_sexuality_5 data_files: - split: 2024_03_29T17_53_08.431911 path: - 
'**/details_harness|hendrycksTest-human_sexuality|5_2024-03-29T17-53-08.431911.parquet' - split: latest path: - '**/details_harness|hendrycksTest-human_sexuality|5_2024-03-29T17-53-08.431911.parquet' - config_name: harness_hendrycksTest_international_law_5 data_files: - split: 2024_03_29T17_53_08.431911 path: - '**/details_harness|hendrycksTest-international_law|5_2024-03-29T17-53-08.431911.parquet' - split: latest path: - '**/details_harness|hendrycksTest-international_law|5_2024-03-29T17-53-08.431911.parquet' - config_name: harness_hendrycksTest_jurisprudence_5 data_files: - split: 2024_03_29T17_53_08.431911 path: - '**/details_harness|hendrycksTest-jurisprudence|5_2024-03-29T17-53-08.431911.parquet' - split: latest path: - '**/details_harness|hendrycksTest-jurisprudence|5_2024-03-29T17-53-08.431911.parquet' - config_name: harness_hendrycksTest_logical_fallacies_5 data_files: - split: 2024_03_29T17_53_08.431911 path: - '**/details_harness|hendrycksTest-logical_fallacies|5_2024-03-29T17-53-08.431911.parquet' - split: latest path: - '**/details_harness|hendrycksTest-logical_fallacies|5_2024-03-29T17-53-08.431911.parquet' - config_name: harness_hendrycksTest_machine_learning_5 data_files: - split: 2024_03_29T17_53_08.431911 path: - '**/details_harness|hendrycksTest-machine_learning|5_2024-03-29T17-53-08.431911.parquet' - split: latest path: - '**/details_harness|hendrycksTest-machine_learning|5_2024-03-29T17-53-08.431911.parquet' - config_name: harness_hendrycksTest_management_5 data_files: - split: 2024_03_29T17_53_08.431911 path: - '**/details_harness|hendrycksTest-management|5_2024-03-29T17-53-08.431911.parquet' - split: latest path: - '**/details_harness|hendrycksTest-management|5_2024-03-29T17-53-08.431911.parquet' - config_name: harness_hendrycksTest_marketing_5 data_files: - split: 2024_03_29T17_53_08.431911 path: - '**/details_harness|hendrycksTest-marketing|5_2024-03-29T17-53-08.431911.parquet' - split: latest path: - 
'**/details_harness|hendrycksTest-marketing|5_2024-03-29T17-53-08.431911.parquet' - config_name: harness_hendrycksTest_medical_genetics_5 data_files: - split: 2024_03_29T17_53_08.431911 path: - '**/details_harness|hendrycksTest-medical_genetics|5_2024-03-29T17-53-08.431911.parquet' - split: latest path: - '**/details_harness|hendrycksTest-medical_genetics|5_2024-03-29T17-53-08.431911.parquet' - config_name: harness_hendrycksTest_miscellaneous_5 data_files: - split: 2024_03_29T17_53_08.431911 path: - '**/details_harness|hendrycksTest-miscellaneous|5_2024-03-29T17-53-08.431911.parquet' - split: latest path: - '**/details_harness|hendrycksTest-miscellaneous|5_2024-03-29T17-53-08.431911.parquet' - config_name: harness_hendrycksTest_moral_disputes_5 data_files: - split: 2024_03_29T17_53_08.431911 path: - '**/details_harness|hendrycksTest-moral_disputes|5_2024-03-29T17-53-08.431911.parquet' - split: latest path: - '**/details_harness|hendrycksTest-moral_disputes|5_2024-03-29T17-53-08.431911.parquet' - config_name: harness_hendrycksTest_moral_scenarios_5 data_files: - split: 2024_03_29T17_53_08.431911 path: - '**/details_harness|hendrycksTest-moral_scenarios|5_2024-03-29T17-53-08.431911.parquet' - split: latest path: - '**/details_harness|hendrycksTest-moral_scenarios|5_2024-03-29T17-53-08.431911.parquet' - config_name: harness_hendrycksTest_nutrition_5 data_files: - split: 2024_03_29T17_53_08.431911 path: - '**/details_harness|hendrycksTest-nutrition|5_2024-03-29T17-53-08.431911.parquet' - split: latest path: - '**/details_harness|hendrycksTest-nutrition|5_2024-03-29T17-53-08.431911.parquet' - config_name: harness_hendrycksTest_philosophy_5 data_files: - split: 2024_03_29T17_53_08.431911 path: - '**/details_harness|hendrycksTest-philosophy|5_2024-03-29T17-53-08.431911.parquet' - split: latest path: - '**/details_harness|hendrycksTest-philosophy|5_2024-03-29T17-53-08.431911.parquet' - config_name: harness_hendrycksTest_prehistory_5 data_files: - split: 
2024_03_29T17_53_08.431911 path: - '**/details_harness|hendrycksTest-prehistory|5_2024-03-29T17-53-08.431911.parquet' - split: latest path: - '**/details_harness|hendrycksTest-prehistory|5_2024-03-29T17-53-08.431911.parquet' - config_name: harness_hendrycksTest_professional_accounting_5 data_files: - split: 2024_03_29T17_53_08.431911 path: - '**/details_harness|hendrycksTest-professional_accounting|5_2024-03-29T17-53-08.431911.parquet' - split: latest path: - '**/details_harness|hendrycksTest-professional_accounting|5_2024-03-29T17-53-08.431911.parquet' - config_name: harness_hendrycksTest_professional_law_5 data_files: - split: 2024_03_29T17_53_08.431911 path: - '**/details_harness|hendrycksTest-professional_law|5_2024-03-29T17-53-08.431911.parquet' - split: latest path: - '**/details_harness|hendrycksTest-professional_law|5_2024-03-29T17-53-08.431911.parquet' - config_name: harness_hendrycksTest_professional_medicine_5 data_files: - split: 2024_03_29T17_53_08.431911 path: - '**/details_harness|hendrycksTest-professional_medicine|5_2024-03-29T17-53-08.431911.parquet' - split: latest path: - '**/details_harness|hendrycksTest-professional_medicine|5_2024-03-29T17-53-08.431911.parquet' - config_name: harness_hendrycksTest_professional_psychology_5 data_files: - split: 2024_03_29T17_53_08.431911 path: - '**/details_harness|hendrycksTest-professional_psychology|5_2024-03-29T17-53-08.431911.parquet' - split: latest path: - '**/details_harness|hendrycksTest-professional_psychology|5_2024-03-29T17-53-08.431911.parquet' - config_name: harness_hendrycksTest_public_relations_5 data_files: - split: 2024_03_29T17_53_08.431911 path: - '**/details_harness|hendrycksTest-public_relations|5_2024-03-29T17-53-08.431911.parquet' - split: latest path: - '**/details_harness|hendrycksTest-public_relations|5_2024-03-29T17-53-08.431911.parquet' - config_name: harness_hendrycksTest_security_studies_5 data_files: - split: 2024_03_29T17_53_08.431911 path: - 
'**/details_harness|hendrycksTest-security_studies|5_2024-03-29T17-53-08.431911.parquet' - split: latest path: - '**/details_harness|hendrycksTest-security_studies|5_2024-03-29T17-53-08.431911.parquet' - config_name: harness_hendrycksTest_sociology_5 data_files: - split: 2024_03_29T17_53_08.431911 path: - '**/details_harness|hendrycksTest-sociology|5_2024-03-29T17-53-08.431911.parquet' - split: latest path: - '**/details_harness|hendrycksTest-sociology|5_2024-03-29T17-53-08.431911.parquet' - config_name: harness_hendrycksTest_us_foreign_policy_5 data_files: - split: 2024_03_29T17_53_08.431911 path: - '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-03-29T17-53-08.431911.parquet' - split: latest path: - '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-03-29T17-53-08.431911.parquet' - config_name: harness_hendrycksTest_virology_5 data_files: - split: 2024_03_29T17_53_08.431911 path: - '**/details_harness|hendrycksTest-virology|5_2024-03-29T17-53-08.431911.parquet' - split: latest path: - '**/details_harness|hendrycksTest-virology|5_2024-03-29T17-53-08.431911.parquet' - config_name: harness_hendrycksTest_world_religions_5 data_files: - split: 2024_03_29T17_53_08.431911 path: - '**/details_harness|hendrycksTest-world_religions|5_2024-03-29T17-53-08.431911.parquet' - split: latest path: - '**/details_harness|hendrycksTest-world_religions|5_2024-03-29T17-53-08.431911.parquet' - config_name: harness_truthfulqa_mc_0 data_files: - split: 2024_03_29T17_53_08.431911 path: - '**/details_harness|truthfulqa:mc|0_2024-03-29T17-53-08.431911.parquet' - split: latest path: - '**/details_harness|truthfulqa:mc|0_2024-03-29T17-53-08.431911.parquet' - config_name: harness_winogrande_5 data_files: - split: 2024_03_29T17_53_08.431911 path: - '**/details_harness|winogrande|5_2024-03-29T17-53-08.431911.parquet' - split: latest path: - '**/details_harness|winogrande|5_2024-03-29T17-53-08.431911.parquet' - config_name: results data_files: - split: 
2024_03_29T17_53_08.431911 path: - results_2024-03-29T17-53-08.431911.parquet - split: latest path: - results_2024-03-29T17-53-08.431911.parquet --- # Dataset Card for Evaluation run of R136a1/InfinityKuno-2x7B <!-- Provide a quick summary of the dataset. --> Dataset automatically created during the evaluation run of model [R136a1/InfinityKuno-2x7B](https://huggingface.co/R136a1/InfinityKuno-2x7B) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard). The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks. The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest results. An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)). To load the details from a run, you can for instance do the following: ```python from datasets import load_dataset data = load_dataset("open-llm-leaderboard/details_R136a1__InfinityKuno-2x7B", "harness_winogrande_5", split="train") ``` ## Latest results These are the [latest results from run 2024-03-29T17:53:08.431911](https://huggingface.co/datasets/open-llm-leaderboard/details_R136a1__InfinityKuno-2x7B/blob/main/results_2024-03-29T17-53-08.431911.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks.
You find each in the results and the "latest" split for each eval): ```python { "all": { "acc": 0.6496265424377181, "acc_stderr": 0.03214693976951449, "acc_norm": 0.6497019134375417, "acc_norm_stderr": 0.032809875892765086, "mc1": 0.4663402692778458, "mc1_stderr": 0.017463793867168106, "mc2": 0.6328236726229287, "mc2_stderr": 0.015383415147976018 }, "harness|arc:challenge|25": { "acc": 0.6715017064846417, "acc_stderr": 0.013724978465537305, "acc_norm": 0.6962457337883959, "acc_norm_stderr": 0.013438909184778766 }, "harness|hellaswag|10": { "acc": 0.7034455287791277, "acc_stderr": 0.004558049018764657, "acc_norm": 0.8744274048994224, "acc_norm_stderr": 0.003306898242234494 }, "harness|hendrycksTest-abstract_algebra|5": { "acc": 0.32, "acc_stderr": 0.046882617226215034, "acc_norm": 0.32, "acc_norm_stderr": 0.046882617226215034 }, "harness|hendrycksTest-anatomy|5": { "acc": 0.6222222222222222, "acc_stderr": 0.04188307537595853, "acc_norm": 0.6222222222222222, "acc_norm_stderr": 0.04188307537595853 }, "harness|hendrycksTest-astronomy|5": { "acc": 0.6842105263157895, "acc_stderr": 0.0378272898086547, "acc_norm": 0.6842105263157895, "acc_norm_stderr": 0.0378272898086547 }, "harness|hendrycksTest-business_ethics|5": { "acc": 0.61, "acc_stderr": 0.04902071300001975, "acc_norm": 0.61, "acc_norm_stderr": 0.04902071300001975 }, "harness|hendrycksTest-clinical_knowledge|5": { "acc": 0.6943396226415094, "acc_stderr": 0.028353298073322666, "acc_norm": 0.6943396226415094, "acc_norm_stderr": 0.028353298073322666 }, "harness|hendrycksTest-college_biology|5": { "acc": 0.75, "acc_stderr": 0.03621034121889507, "acc_norm": 0.75, "acc_norm_stderr": 0.03621034121889507 }, "harness|hendrycksTest-college_chemistry|5": { "acc": 0.48, "acc_stderr": 0.050211673156867795, "acc_norm": 0.48, "acc_norm_stderr": 0.050211673156867795 }, "harness|hendrycksTest-college_computer_science|5": { "acc": 0.55, "acc_stderr": 0.05, "acc_norm": 0.55, "acc_norm_stderr": 0.05 }, 
"harness|hendrycksTest-college_mathematics|5": { "acc": 0.31, "acc_stderr": 0.04648231987117316, "acc_norm": 0.31, "acc_norm_stderr": 0.04648231987117316 }, "harness|hendrycksTest-college_medicine|5": { "acc": 0.6820809248554913, "acc_stderr": 0.0355068398916558, "acc_norm": 0.6820809248554913, "acc_norm_stderr": 0.0355068398916558 }, "harness|hendrycksTest-college_physics|5": { "acc": 0.43137254901960786, "acc_stderr": 0.04928099597287534, "acc_norm": 0.43137254901960786, "acc_norm_stderr": 0.04928099597287534 }, "harness|hendrycksTest-computer_security|5": { "acc": 0.79, "acc_stderr": 0.04093601807403326, "acc_norm": 0.79, "acc_norm_stderr": 0.04093601807403326 }, "harness|hendrycksTest-conceptual_physics|5": { "acc": 0.5829787234042553, "acc_stderr": 0.03223276266711712, "acc_norm": 0.5829787234042553, "acc_norm_stderr": 0.03223276266711712 }, "harness|hendrycksTest-econometrics|5": { "acc": 0.49122807017543857, "acc_stderr": 0.04702880432049615, "acc_norm": 0.49122807017543857, "acc_norm_stderr": 0.04702880432049615 }, "harness|hendrycksTest-electrical_engineering|5": { "acc": 0.5724137931034483, "acc_stderr": 0.041227371113703316, "acc_norm": 0.5724137931034483, "acc_norm_stderr": 0.041227371113703316 }, "harness|hendrycksTest-elementary_mathematics|5": { "acc": 0.41005291005291006, "acc_stderr": 0.02533120243894443, "acc_norm": 0.41005291005291006, "acc_norm_stderr": 0.02533120243894443 }, "harness|hendrycksTest-formal_logic|5": { "acc": 0.4444444444444444, "acc_stderr": 0.04444444444444449, "acc_norm": 0.4444444444444444, "acc_norm_stderr": 0.04444444444444449 }, "harness|hendrycksTest-global_facts|5": { "acc": 0.34, "acc_stderr": 0.04760952285695235, "acc_norm": 0.34, "acc_norm_stderr": 0.04760952285695235 }, "harness|hendrycksTest-high_school_biology|5": { "acc": 0.7967741935483871, "acc_stderr": 0.022891687984554963, "acc_norm": 0.7967741935483871, "acc_norm_stderr": 0.022891687984554963 }, "harness|hendrycksTest-high_school_chemistry|5": { "acc": 
0.5024630541871922, "acc_stderr": 0.03517945038691063, "acc_norm": 0.5024630541871922, "acc_norm_stderr": 0.03517945038691063 }, "harness|hendrycksTest-high_school_computer_science|5": { "acc": 0.66, "acc_stderr": 0.04760952285695237, "acc_norm": 0.66, "acc_norm_stderr": 0.04760952285695237 }, "harness|hendrycksTest-high_school_european_history|5": { "acc": 0.7878787878787878, "acc_stderr": 0.03192271569548301, "acc_norm": 0.7878787878787878, "acc_norm_stderr": 0.03192271569548301 }, "harness|hendrycksTest-high_school_geography|5": { "acc": 0.797979797979798, "acc_stderr": 0.028606204289229872, "acc_norm": 0.797979797979798, "acc_norm_stderr": 0.028606204289229872 }, "harness|hendrycksTest-high_school_government_and_politics|5": { "acc": 0.8963730569948186, "acc_stderr": 0.02199531196364424, "acc_norm": 0.8963730569948186, "acc_norm_stderr": 0.02199531196364424 }, "harness|hendrycksTest-high_school_macroeconomics|5": { "acc": 0.6666666666666666, "acc_stderr": 0.023901157979402538, "acc_norm": 0.6666666666666666, "acc_norm_stderr": 0.023901157979402538 }, "harness|hendrycksTest-high_school_mathematics|5": { "acc": 0.34444444444444444, "acc_stderr": 0.028972648884844267, "acc_norm": 0.34444444444444444, "acc_norm_stderr": 0.028972648884844267 }, "harness|hendrycksTest-high_school_microeconomics|5": { "acc": 0.7016806722689075, "acc_stderr": 0.02971914287634285, "acc_norm": 0.7016806722689075, "acc_norm_stderr": 0.02971914287634285 }, "harness|hendrycksTest-high_school_physics|5": { "acc": 0.36423841059602646, "acc_stderr": 0.03929111781242742, "acc_norm": 0.36423841059602646, "acc_norm_stderr": 0.03929111781242742 }, "harness|hendrycksTest-high_school_psychology|5": { "acc": 0.8440366972477065, "acc_stderr": 0.01555580271359017, "acc_norm": 0.8440366972477065, "acc_norm_stderr": 0.01555580271359017 }, "harness|hendrycksTest-high_school_statistics|5": { "acc": 0.5231481481481481, "acc_stderr": 0.03406315360711507, "acc_norm": 0.5231481481481481, "acc_norm_stderr": 
0.03406315360711507 }, "harness|hendrycksTest-high_school_us_history|5": { "acc": 0.8186274509803921, "acc_stderr": 0.027044621719474082, "acc_norm": 0.8186274509803921, "acc_norm_stderr": 0.027044621719474082 }, "harness|hendrycksTest-high_school_world_history|5": { "acc": 0.7721518987341772, "acc_stderr": 0.027303484599069436, "acc_norm": 0.7721518987341772, "acc_norm_stderr": 0.027303484599069436 }, "harness|hendrycksTest-human_aging|5": { "acc": 0.6771300448430493, "acc_stderr": 0.03138147637575499, "acc_norm": 0.6771300448430493, "acc_norm_stderr": 0.03138147637575499 }, "harness|hendrycksTest-human_sexuality|5": { "acc": 0.7633587786259542, "acc_stderr": 0.03727673575596913, "acc_norm": 0.7633587786259542, "acc_norm_stderr": 0.03727673575596913 }, "harness|hendrycksTest-international_law|5": { "acc": 0.8099173553719008, "acc_stderr": 0.03581796951709282, "acc_norm": 0.8099173553719008, "acc_norm_stderr": 0.03581796951709282 }, "harness|hendrycksTest-jurisprudence|5": { "acc": 0.7777777777777778, "acc_stderr": 0.040191074725573483, "acc_norm": 0.7777777777777778, "acc_norm_stderr": 0.040191074725573483 }, "harness|hendrycksTest-logical_fallacies|5": { "acc": 0.7484662576687117, "acc_stderr": 0.03408997886857529, "acc_norm": 0.7484662576687117, "acc_norm_stderr": 0.03408997886857529 }, "harness|hendrycksTest-machine_learning|5": { "acc": 0.42857142857142855, "acc_stderr": 0.04697113923010212, "acc_norm": 0.42857142857142855, "acc_norm_stderr": 0.04697113923010212 }, "harness|hendrycksTest-management|5": { "acc": 0.7766990291262136, "acc_stderr": 0.04123553189891431, "acc_norm": 0.7766990291262136, "acc_norm_stderr": 0.04123553189891431 }, "harness|hendrycksTest-marketing|5": { "acc": 0.8974358974358975, "acc_stderr": 0.01987565502786744, "acc_norm": 0.8974358974358975, "acc_norm_stderr": 0.01987565502786744 }, "harness|hendrycksTest-medical_genetics|5": { "acc": 0.7, "acc_stderr": 0.046056618647183814, "acc_norm": 0.7, "acc_norm_stderr": 0.046056618647183814 }, 
"harness|hendrycksTest-miscellaneous|5": { "acc": 0.8314176245210728, "acc_stderr": 0.013387895731543604, "acc_norm": 0.8314176245210728, "acc_norm_stderr": 0.013387895731543604 }, "harness|hendrycksTest-moral_disputes|5": { "acc": 0.7283236994219653, "acc_stderr": 0.02394851290546837, "acc_norm": 0.7283236994219653, "acc_norm_stderr": 0.02394851290546837 }, "harness|hendrycksTest-moral_scenarios|5": { "acc": 0.41899441340782123, "acc_stderr": 0.016501579306861677, "acc_norm": 0.41899441340782123, "acc_norm_stderr": 0.016501579306861677 }, "harness|hendrycksTest-nutrition|5": { "acc": 0.7352941176470589, "acc_stderr": 0.02526169121972948, "acc_norm": 0.7352941176470589, "acc_norm_stderr": 0.02526169121972948 }, "harness|hendrycksTest-philosophy|5": { "acc": 0.7106109324758842, "acc_stderr": 0.025755865922632945, "acc_norm": 0.7106109324758842, "acc_norm_stderr": 0.025755865922632945 }, "harness|hendrycksTest-prehistory|5": { "acc": 0.7438271604938271, "acc_stderr": 0.0242885336377261, "acc_norm": 0.7438271604938271, "acc_norm_stderr": 0.0242885336377261 }, "harness|hendrycksTest-professional_accounting|5": { "acc": 0.4787234042553192, "acc_stderr": 0.029800481645628693, "acc_norm": 0.4787234042553192, "acc_norm_stderr": 0.029800481645628693 }, "harness|hendrycksTest-professional_law|5": { "acc": 0.46479791395045633, "acc_stderr": 0.012738547371303957, "acc_norm": 0.46479791395045633, "acc_norm_stderr": 0.012738547371303957 }, "harness|hendrycksTest-professional_medicine|5": { "acc": 0.6875, "acc_stderr": 0.02815637344037142, "acc_norm": 0.6875, "acc_norm_stderr": 0.02815637344037142 }, "harness|hendrycksTest-professional_psychology|5": { "acc": 0.6715686274509803, "acc_stderr": 0.018999707383162673, "acc_norm": 0.6715686274509803, "acc_norm_stderr": 0.018999707383162673 }, "harness|hendrycksTest-public_relations|5": { "acc": 0.6909090909090909, "acc_stderr": 0.044262946482000985, "acc_norm": 0.6909090909090909, "acc_norm_stderr": 0.044262946482000985 }, 
"harness|hendrycksTest-security_studies|5": { "acc": 0.7306122448979592, "acc_stderr": 0.02840125202902294, "acc_norm": 0.7306122448979592, "acc_norm_stderr": 0.02840125202902294 }, "harness|hendrycksTest-sociology|5": { "acc": 0.845771144278607, "acc_stderr": 0.02553843336857833, "acc_norm": 0.845771144278607, "acc_norm_stderr": 0.02553843336857833 }, "harness|hendrycksTest-us_foreign_policy|5": { "acc": 0.83, "acc_stderr": 0.0377525168068637, "acc_norm": 0.83, "acc_norm_stderr": 0.0377525168068637 }, "harness|hendrycksTest-virology|5": { "acc": 0.5301204819277109, "acc_stderr": 0.03885425420866767, "acc_norm": 0.5301204819277109, "acc_norm_stderr": 0.03885425420866767 }, "harness|hendrycksTest-world_religions|5": { "acc": 0.8421052631578947, "acc_stderr": 0.027966785859160893, "acc_norm": 0.8421052631578947, "acc_norm_stderr": 0.027966785859160893 }, "harness|truthfulqa:mc|0": { "mc1": 0.4663402692778458, "mc1_stderr": 0.017463793867168106, "mc2": 0.6328236726229287, "mc2_stderr": 0.015383415147976018 }, "harness|winogrande|5": { "acc": 0.8271507498026835, "acc_stderr": 0.010626964529971854 }, "harness|gsm8k|5": { "acc": 0.6633813495072024, "acc_stderr": 0.01301646367998336 } } ``` ## Dataset Details ### Dataset Description <!-- Provide a longer summary of what this dataset is. --> - **Curated by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] ### Dataset Sources [optional] <!-- Provide the basic links for the dataset. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the dataset is intended to be used. --> ### Direct Use <!-- This section describes suitable use cases for the dataset. 
--> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. --> [More Information Needed] ## Dataset Structure <!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. --> [More Information Needed] ## Dataset Creation ### Curation Rationale <!-- Motivation for the creation of this dataset. --> [More Information Needed] ### Source Data <!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). --> #### Data Collection and Processing <!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. --> [More Information Needed] #### Who are the source data producers? <!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. --> [More Information Needed] ### Annotations [optional] <!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. --> #### Annotation process <!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. --> [More Information Needed] #### Who are the annotators? <!-- This section describes the people or systems who created the annotations. 
--> [More Information Needed] #### Personal and Sensitive Information <!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations. ## Citation [optional] <!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Dataset Card Authors [optional] [More Information Needed] ## Dataset Card Contact [More Information Needed]
Saulons3/platypus_reducido
--- license: apache-2.0 dataset_info: features: - name: instruction dtype: string - name: output dtype: string splits: - name: train num_bytes: 4219564 num_examples: 1000 download_size: 2249944 dataset_size: 4219564 configs: - config_name: default data_files: - split: train path: data/train-* ---
Minifoxy/foxy
--- license: apache-2.0 ---
truongpdd/wit_vietnamese_subset
--- dataset_info: features: - name: language dtype: string - name: page_url dtype: string - name: image_url dtype: string - name: page_title dtype: string - name: section_title dtype: string - name: hierarchical_section_title dtype: string - name: caption_reference_description dtype: string - name: caption_attribution_description dtype: string - name: caption_alt_text_description dtype: string - name: mime_type dtype: string - name: original_height dtype: int32 - name: original_width dtype: int32 - name: is_main_image dtype: bool - name: attribution_passes_lang_id dtype: bool - name: page_changed_recently dtype: bool - name: context_page_description dtype: string - name: context_section_description dtype: string splits: - name: train num_bytes: 966699738.4883065 num_examples: 514473 download_size: 478250989 dataset_size: 966699738.4883065 --- # Dataset Card for "wit_vietnamese_subset" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
Saravanan666/python-faq
--- license: apache-2.0 ---
mesolitica/chatgpt4-malaysian-general-qa
--- task_categories: - question-answering language: - ms pretty_name: malaysian-general-qa --- # Synthetic Malaysian QA Generated common QA using ChatGPT4 based on Malaysia topics, notebooks at https://github.com/mesolitica/malaysian-dataset/tree/master/question-answer/chatgpt4-synthetic-malaysian-qa ## General Malaysia topics - [malaysian-general-qa.jsonl](malaysian-general-qa.jsonl), 20396 rows, 28.6 MB. - [malaysian-general-qa-v2.jsonl](malaysian-general-qa-v2.jsonl), 5294 rows, 8.05 MB. - [malaysian-general-qa-v3.jsonl](malaysian-general-qa-v3.jsonl), 1368 rows, 5.09 MB. - [malaysian-general-qa-v4.jsonl](malaysian-general-qa-v4.jsonl), 7733 rows. 36.2 MB. - [malaysian-general-qa-v5.jsonl](malaysian-general-qa-v5.jsonl), 6363 rows, 32.7 MB. 1. politics 2. socioeconomy 3. culture 4. gender 5. religion 6. sociology 7. social class 8. technology 9. ethnicity 10. infrastructure 11. health 12. education 13. ecology 14. party politics 15. diplomacy 16. history 17. cuisine 18. microeconomics 19. business 20. artificial intelligence 21. law 22. negeri johor 23. negeri kedah 24. negeri kelantan 25. negeri melaka 26. negeri negeri sembilan 27. negeri pahang 28. negeri perak 29. negeri perlis 30. negeri pulau pinang 31. negeri selangor 32. negeri terengganu 33. negeri sabah 34. negeri sarawak 35. kuala lumpur 36. negeri labuan 37. putrajaya 38. najib razak 39. anwar ibrahim 40. parti keadilan rakyat 41. parti islam semalaysia 42. dr mahathir mohamad 43. barisan nasional 44. constitutional monarchy 45. parliamentary democracy 46. political economy 47. political dynamic 48. empowerment of youths 49. kebebasan bersuara 50. hadis 51. hukum aqidah islam 52. hukum fiqah islam ### Example data ```python {'question': 'Adakah AI boleh digunakan untuk mempercepat proses pengesanan dan rawatan penyakit berjangkit di Malaysia?', 'answer': 'Ya, AI boleh digunakan untuk mempercepat proses pengesanan dan rawatan penyakit berjangkit di Malaysia. 
AI dapat membantu dalam pengumpulan dan analisis data kesihatan secara cepat dan tepat, membolehkan doktor dan pakar kesihatan membuat keputusan yang lebih baik dan pantas dalam merawat pesakit. AI juga boleh digunakan untuk mengesan pola penyebaran penyakit berjangkit, memungkinkan tindakan pencegahan dan kawalan yang lebih efektif. Sebagai contoh, AI boleh digunakan dalam sistem pengawasan epidemik untuk mengenal pasti kawasan yang berisiko tinggi dan mengambil langkah-langkah pencegahan segera.'} ``` ## Kesusasteraan QA - [tatabahasa.jsonl](tatabahasa.jsonl), 25956 rows, 40.1 MB. 1. sastera 2. tatabahasa 3. kesusasteraan melayu 4. pantun 5. sajak 6. syair ### Example data ```python {'question': 'Analisa peranan sekolah dalam menggalakkan penggunaan Bahasa Melayu yang standard di Malaysia.', 'answer': 'Sekolah memiliki peranan yang sangat penting dalam menggalakkan penggunaan Bahasa Melayu yang standard di Malaysia. Berikut adalah beberapa peranannya:\n\n1. Pelaksanaan Kurikulum: Sekolah merupakan tempat utama di mana kurikulum Bahasa Melayu diajar kepada murid-murid. Kurikulum ini dirancang sedemikian rupa untuk memastikan murid-murid memahami dan mampu menggunakan Bahasa Melayu yang standard dalam komunikasi lisan dan tulisan.\n\n2. Latihan dan Praktik: Di sekolah, murid-murid diberikan banyak peluang untuk mempraktikkan Bahasa Melayu yang standard melalui pelbagai aktiviti seperti latihan, ujian, perbincangan dalam kelas, dan sebagainya.\n\n3. Pengajaran dan Pembelajaran: Guru-guru di sekolah juga memainkan peranan penting dalam mengajar dan membimbing murid-murid untuk menggunakan Bahasa Melayu yang standard. Mereka menggunakan pelbagai kaedah pengajaran dan strategi pembelajaran untuk memastikan murid-murid memahami dan mampu menggunakan Bahasa Melayu dengan baik.\n\n4. Pendidikan Moral dan Nilai: Sekolah juga memainkan peranan dalam menanamkan nilai-nilai positif dan etika berkomunikasi yang baik kepada murid-murid. 
Ini termasuk penggunaan Bahasa Melayu yang sopan dan betul.\n\n5. Aktiviti Luar Kelas: Sekolah juga sering mengadakan aktiviti luar kelas seperti pertandingan pidato, debat, dan drama yang menggunakan Bahasa Melayu. Ini memberi peluang kepada murid-murid untuk mempraktikkan dan mempertingkatkan kemahiran berbahasa mereka.\n\n6. Sumber Rujukan: Sekolah menyediakan berbagai sumber rujukan seperti buku teks, kamus, dan bahan bacaan lain yang menggunakan Bahasa Melayu standard. Ini membantu murid-murid memahami dan menggunakan Bahasa Melayu yang betul.\n\n7. Pemantauan dan Penilaian: Sekolah juga memantau dan menilai kemajuan murid-murid dalam penggunaan Bahasa Melayu. Ini membolehkan sekolah mengenal pasti dan menangani sebarang masalah atau kelemahan yang mungkin dihadapi oleh murid-murid.\n\nDengan ini, jelas bahawa sekolah memainkan peranan yang sangat penting dan berkesan dalam menggalakkan penggunaan Bahasa Melayu yang standard di Malaysia.'} ``` ## Loghat QA - [loghat.jsonl](loghat.jsonl), 2024 rows, 2.89 MB. 1. kedah 2. kelantan 3. perak 4. negeri sembilan 5. selangor 6. sarawak 7. sabah 8. terengganu 9. johor 10. baling ### Example data ```python {'question': "'Rupe-rupenye' dalam Terengganu maksudnya apa?", 'answer': '"Rupe-rupenye" adalah ungkapan dalam bahasa Melayu Terengganu, salah satu dialek Melayu yang digunakan oleh masyarakat di negeri Terengganu, Malaysia. Ungkapan ini berarti "rupa-rupanya" dalam Bahasa Melayu standard, dan digunakan untuk menunjukkan kesedaran atau realisasi sesuatu yang mungkin tidak diketahui sebelumnya atau suatu kejutan. Dalam konteks Bahasa Inggeris, ungkapan ini serupa dengan "it turns out" atau "as it turns out". Contoh penggunaannya:\n\n"Rupe-rupenye, dia dah sampai awai dari kita."\n(It turns out, he arrived earlier than us.)\n\nUngkapan ini sering digunakan dalam percakapan sehari-hari untuk menyatakan sesuatu yang baru diketahui atau sesuatu yang berbeda dari yang diharapkan.'} ```
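Each row in the JSONL files above is a flat object with exactly two string fields, `question` and `answer`, so the files can be streamed line by line without any special tooling. A minimal sketch — the inline sample stands in for a real file such as `malaysian-general-qa.jsonl`, and its rows are invented placeholders (real answers are much longer):

```python
import io
import json

# Stand-in for a downloaded JSONL file: one JSON object per line.
jsonl_file = io.StringIO(
    '{"question": "Apakah ibu negara Malaysia?", "answer": "Kuala Lumpur."}\n'
    '{"question": "Berapa buah negeri di Malaysia?", "answer": "13 buah negeri."}\n'
)

rows = [json.loads(line) for line in jsonl_file if line.strip()]
for row in rows:
    # Every record carries exactly these two string fields.
    print(row["question"], "->", row["answer"])
```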
irds/nyt
--- pretty_name: '`nyt`' viewer: false source_datasets: [] task_categories: - text-retrieval --- # Dataset Card for `nyt` The `nyt` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/nyt#nyt). # Data This dataset provides: - `docs` (documents, i.e., the corpus); count=1,864,661 This dataset is used by: [`nyt_trec-core-2017`](https://huggingface.co/datasets/irds/nyt_trec-core-2017), [`nyt_wksup`](https://huggingface.co/datasets/irds/nyt_wksup), [`nyt_wksup_train`](https://huggingface.co/datasets/irds/nyt_wksup_train), [`nyt_wksup_valid`](https://huggingface.co/datasets/irds/nyt_wksup_valid) ## Usage ```python from datasets import load_dataset docs = load_dataset('irds/nyt', 'docs') for record in docs: record # {'doc_id': ..., 'headline': ..., 'body': ..., 'source_xml': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in 🤗 Dataset format. ## Citation Information ``` @article{Sandhaus2008Nyt, title={The new york times annotated corpus}, author={Sandhaus, Evan}, journal={Linguistic Data Consortium, Philadelphia}, volume={6}, number={12}, pages={e26752}, year={2008} } ```
fxmeng/wikinews
--- dataset_info: features: - name: conversations dtype: string splits: - name: en num_bytes: 401014 num_examples: 200 - name: ta num_bytes: 712167 num_examples: 200 - name: cs num_bytes: 271584 num_examples: 200 - name: es num_bytes: 352499 num_examples: 200 - name: fr num_bytes: 381363 num_examples: 200 - name: de num_bytes: 417924 num_examples: 200 - name: bg num_bytes: 113419 num_examples: 44 - name: sv num_bytes: 269221 num_examples: 200 - name: fi num_bytes: 156859 num_examples: 111 - name: he num_bytes: 252918 num_examples: 114 - name: pt num_bytes: 299202 num_examples: 200 - name: fa num_bytes: 92987 num_examples: 68 - name: ru num_bytes: 770436 num_examples: 200 - name: tr num_bytes: 120009 num_examples: 114 - name: it num_bytes: 352746 num_examples: 200 - name: pl num_bytes: 280085 num_examples: 200 - name: bs num_bytes: 36670 num_examples: 44 - name: ca num_bytes: 279943 num_examples: 192 - name: ja num_bytes: 369314 num_examples: 200 - name: nl num_bytes: 243089 num_examples: 200 - name: zh num_bytes: 417530 num_examples: 200 - name: sr num_bytes: 399367 num_examples: 200 - name: 'no' num_bytes: 120488 num_examples: 74 - name: li num_bytes: 30090 num_examples: 26 - name: uk num_bytes: 293883 num_examples: 77 - name: hu num_bytes: 167977 num_examples: 87 - name: el num_bytes: 247871 num_examples: 94 - name: eo num_bytes: 169288 num_examples: 99 - name: ar num_bytes: 123020 num_examples: 73 - name: ro num_bytes: 53274 num_examples: 45 - name: sq num_bytes: 14481 num_examples: 10 - name: ko num_bytes: 58987 num_examples: 53 - name: th num_bytes: 20167 num_examples: 2 download_size: 4919179 dataset_size: 8289872 configs: - config_name: default data_files: - split: en path: data/en-* - split: ta path: data/ta-* - split: cs path: data/cs-* - split: es path: data/es-* - split: fr path: data/fr-* - split: de path: data/de-* - split: bg path: data/bg-* - split: sv path: data/sv-* - split: fi path: data/fi-* - split: he path: data/he-* - split: pt path: data/pt-* 
- split: fa path: data/fa-* - split: ru path: data/ru-* - split: tr path: data/tr-* - split: it path: data/it-* - split: pl path: data/pl-* - split: bs path: data/bs-* - split: ca path: data/ca-* - split: ja path: data/ja-* - split: nl path: data/nl-* - split: zh path: data/zh-* - split: sr path: data/sr-* - split: 'no' path: data/no-* - split: li path: data/li-* - split: uk path: data/uk-* - split: hu path: data/hu-* - split: el path: data/el-* - split: eo path: data/eo-* - split: ar path: data/ar-* - split: ro path: data/ro-* - split: sq path: data/sq-* - split: ko path: data/ko-* - split: th path: data/th-* --- ``` from datasets import load_dataset dataset = load_dataset("Fumika/Wikinews-multilingual", split='train') from collections import defaultdict category=defaultdict(list) for data in dataset: category[data['lang']].append(' '.join(data['text'])) import numpy as np from datasets import Dataset, DatasetDict wikinews = DatasetDict() for key in category.keys(): if len(category[key])>200: sample = np.random.choice(category[key],200,replace=False) else: sample = category[key] wikinews[key] = Dataset.from_dict({'conversations': sample}) wikinews.push_to_hub('xxx/wikinews',token='hf_xxx') ```
CodedotAI/code_clippy
--- annotations_creators: - no-annotation language_creators: - crowdsourced language: - code license: - gpl-3.0 multilinguality: - multilingual size_categories: - unknown source_datasets: - original task_categories: - text-generation task_ids: - language-modeling pretty_name: Code Clippy --- # Dataset Card for Code Clippy Data ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** https://the-eye.eu/public/AI/training_data/code_clippy_data/ - **Repository:** https://github.com/ncoop57/gpt-code-clippy - **Paper:** [Not yet :)] - **Leaderboard:** [Not yet :)] - **Point of Contact:** [Nathan Cooper](mailto:nacooper01@email.wm.edu) ### Dataset Summary This dataset was generated by selecting GitHub repositories from a large collection of repositories. These repositories were collected from https://seart-ghs.si.usi.ch/ and the GitHub portion of [The Pile](https://github.com/EleutherAI/github-downloader) (performed on July 7th, 2021).
The goal of this dataset is to provide a training set for pretraining large language models on code, to help software engineering researchers better understand their impacts on software-related tasks such as code autocompletion. The dataset is split into train, validation, and test splits. There is a version containing duplicates (209 GB compressed) and one with exact duplicates removed (132 GB compressed). It contains mostly JavaScript and Python code, but other programming languages are included as well to various degrees. ### Supported Tasks and Leaderboards - `language-modeling`: The dataset can be used to train a language model for programming languages, which consists of pretraining/finetuning a model to predict missing tokens, either causally or masked, given some context. Success on this task is typically measured by achieving a *low* perplexity score. ### Languages Multiple programming languages are included in the dataset. ## Dataset Structure ### Data Instances ``` { "id": datasets.Value("int64"), "text": datasets.Value("string"), "repo_name": datasets.Value("string"), "stars": datasets.Value("string"), "repo_language": datasets.Value("string"), "file_name": datasets.Value("string"), "mime_type": datasets.Value("string") } ``` ### Data Fields - `id`: A unique identifier for the data instance. - `text`: The text of the code. - `repo_name`: The name of the repository. - `stars`: The number of stars the repository has. - `repo_language`: The programming language of the repository. - `file_name`: The name of the file. - `mime_type`: The MIME type of the file. ### Data Splits | Size in GBs | Train | Valid | Test | | ----- | ------ | ----- | ---- | | Duplicate | 194 | 9 | 6.3 | | Deduplicate | 126 | 3.3 | 3.1 | ## Dataset Creation ### Curation Rationale To have a code dataset that is large enough to properly train a large language model on.
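Concretely, each decoded instance from the schema listed under Data Fields is a flat dictionary. The sketch below — with an invented sample record — shows the expected types; note that `stars` is stored as a string in the schema, not an integer:

```python
# Invented sample record matching the Data Fields schema above.
record = {
    "id": 0,
    "text": "def add(a, b):\n    return a + b\n",
    "repo_name": "example-user/example-repo",
    "stars": "12",  # kept as a string in the schema, not an int
    "repo_language": "Python",
    "file_name": "math_utils.py",
    "mime_type": "text/x-python",
}

expected_types = {
    "id": int,
    "text": str,
    "repo_name": str,
    "stars": str,
    "repo_language": str,
    "file_name": str,
    "mime_type": str,
}

# A lightweight schema check, useful when streaming records from the archives.
assert set(record) == set(expected_types)
for field, field_type in expected_types.items():
    assert isinstance(record[field], field_type), field
print("record conforms to the schema")
```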
### Source Data #### Initial Data Collection and Normalization - [The Pile](https://github.com/EleutherAI/github-downloader) - [Seart-GHS](https://seart-ghs.si.usi.ch/) Repositories were collected from both sources, and the helper script from https://github.com/EleutherAI/github-downloader was used to download them. Files were scraped from the downloaded repositories, ignoring files with extensions associated with binary or other non-textual/autogenerated content, and the output was converted into the [LM_Dataformat](https://pypi.org/project/lm-dataformat/) format. #### Who are the source language producers? Software developers. ### Annotations #### Annotation process No annotation was performed. #### Who are the annotators? N/A ### Personal and Sensitive Information Since this data was collected from public repositories, there exists potential for personal and sensitive information to be included in the data through developers accidentally or on purpose uploading their secret keys, passwords, API keys, emails, etc. ## Considerations for Using the Data ### Social Impact of Dataset The paper ["Evaluating Large Language Models Trained on Code"](https://arxiv.org/abs/2107.03374) from OpenAI has a good discussion of the potential impact of a large language model trained on code. Therefore, some parts of that discussion are highlighted here as they pertain to this dataset and models that may be trained from it, **along with some differences in views from the paper, particularly around legal implications**. 1. **Over-reliance:** A language model trained on large datasets such as this one for the task of autogenerating code may generate plausible solutions that appear correct but are not necessarily the correct solution. Not properly evaluating the generated code may have negative consequences such as the introduction of bugs or security vulnerabilities.
Therefore, it is important that users are aware of the limitations and potential negative consequences of using a language model trained on this dataset. 2. **Economic and labor market impacts:** Large language models trained on large code datasets such as this one that are capable of generating high-quality code have the potential to automate part of the software development process. This may negatively impact software developers. However, as discussed in the paper, the Summary Report for software developers from [O*NET OnLine](https://www.onetonline.org/link/summary/15-1252.00) shows that developers don't just write software. 3. **Security implications:** No filtering or checking for vulnerabilities or buggy code was performed. This means that the dataset may contain code that is malicious or contains vulnerabilities. Therefore, any model trained on this dataset may generate vulnerable, buggy, or malicious code. In safety-critical software, this could lead to software that works improperly and could result in serious consequences depending on the software. Additionally, a model trained on this dataset may be used to generate malicious code on purpose in order to perform ransomware or other such attacks. 4. **Legal implications:** No filtering was performed on licensed code. This means that the dataset may contain restrictively licensed code. As discussed in the paper, public GitHub repositories may fall under "fair use." However, there have been few to no previous cases of such usage of licensed publicly available code. Therefore, any model trained on this dataset may be required to obey license terms that align with the software it was trained on, such as GPL-3.0, which is why we purposefully put this dataset under the GPL-3.0 license. The legal ramifications of using a language model trained on this dataset are unclear. ### Discussion of Biases The programming languages most represented in this dataset are JavaScript and Python.
Therefore, other still-popular languages such as C and C++ are less represented, and model performance for these languages will be comparatively weaker. Additionally, this dataset only contains public repositories and so may not be representative of code written by private developers. No filtering was performed for potentially racist, offensive, or otherwise inappropriate content. Therefore there may be such content in the dataset that will be reflected in models trained on it. ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators Nathan Cooper, Artashes Arutiunian, Santiago Hincapié-Potes, Ben Trevett, Arun Raja, Erfan Hossami, Mrinal Mathur, and contributors! ### Licensing Information This repository is under the GPL-3.0 license. ### Citation Information ``` @misc{cooper-2021-code-clippy-data, author = {Nathan Cooper, Artashes Arutiunian, Santiago Hincapié-Potes, Ben Trevett, Arun Raja, Erfan Hossami, Mrinal Mathur, and contributors}, title = {{Code Clippy Data: A large dataset of code data from Github for research into code language models}}, month = jul, year = 2021, version = {1.0}, publisher = {GitHub}, url = {https://github.com/ncoop57/gpt-code-clippy} } ``` ### Contributions Thanks to [@ncoop57](https://github.com/ncoop57), [@arampacha](https://github.com/arampacha), [@shpotes](https://github.com/shpotes), [@bentrevett](https://github.com/bentrevett), [@arunraja-hub](https://github.com/arunraja-hub), [@taisazero](https://github.com/taisazero), [@Mrinal18](https://github.com/Mrinal18), and contributors for adding this dataset.
DataAnalyticsLab/SymbolicGPT
--- license: mit ---
CyberHarem/qiyana_leagueoflegends
---
license: mit
task_categories:
- text-to-image
tags:
- art
- not-for-all-audiences
size_categories:
- n<1K
---

# Dataset of qiyana (League of Legends)

This is the dataset of qiyana (League of Legends), containing 51 images and their tags.

The core tags of this character are `dark_skin, breasts, bangs, dark-skinned_female, blunt_bangs, long_hair, grey_hair, yellow_eyes, large_breasts, medium_breasts, white_hair`, which are pruned in this dataset.

Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).

## List of Packages

| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:----------|:----------|:-----------|:----------|
| raw | 51 | 57.10 MiB | [Download](https://huggingface.co/datasets/CyberHarem/qiyana_leagueoflegends/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 51 | 32.75 MiB | [Download](https://huggingface.co/datasets/CyberHarem/qiyana_leagueoflegends/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 115 | 67.91 MiB | [Download](https://huggingface.co/datasets/CyberHarem/qiyana_leagueoflegends/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 51 | 50.34 MiB | [Download](https://huggingface.co/datasets/CyberHarem/qiyana_leagueoflegends/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 115 | 91.69 MiB | [Download](https://huggingface.co/datasets/CyberHarem/qiyana_leagueoflegends/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |

### Load Raw Dataset with Waifuc

We provide the raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need this, just run the following code:

```python
import os
import zipfile

from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource

# download raw archive file
zip_file = hf_hub_download(
    repo_id='CyberHarem/qiyana_leagueoflegends',
    repo_type='dataset',
    filename='dataset-raw.zip',
)

# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
    zf.extractall(dataset_dir)

# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
    print(item.image, item.meta['filename'], item.meta['tags'])
```

## List of Clusters

List of tag clustering results; maybe some outfits can be mined here.
### Raw Text Version

| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:----|:----|:----|:----|:----|:----|
| 0 | 8 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | 1girl, bare_shoulders, solo, bracelet, cleavage, collarbone, looking_at_viewer, necklace, pantyhose, simple_background, white_background, green_dress, open_mouth, blush, hand_on_hip, orange_eyes, shiny_hair, armlet, cowboy_shot, hair_ornament, short_dress, strapless_dress, tiara, upper_teeth_only |
| 1 | 6 | ![](samples/1/clu1-sample0.png) | ![](samples/1/clu1-sample1.png) | ![](samples/1/clu1-sample2.png) | ![](samples/1/clu1-sample3.png) | ![](samples/1/clu1-sample4.png) | 1girl, bare_shoulders, bracelet, tiara, looking_at_viewer, necklace, solo, armlet, cleavage, weapon, pants |

### Table Version

| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | bare_shoulders | solo | bracelet | cleavage | collarbone | looking_at_viewer | necklace | pantyhose | simple_background | white_background | green_dress | open_mouth | blush | hand_on_hip | orange_eyes | shiny_hair | armlet | cowboy_shot | hair_ornament | short_dress | strapless_dress | tiara | upper_teeth_only | weapon | pants |
|----:|----------:|:----|:----|:----|:----|:----|:----|:----|:----|:----|:----|:----|:----|:----|:----|:----|:----|:----|:----|:----|:----|:----|:----|:----|:----|:----|:----|:----|:----|:----|:----|:----|
| 0 | 8 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | |
| 1 | 6 | ![](samples/1/clu1-sample0.png) | ![](samples/1/clu1-sample1.png) | ![](samples/1/clu1-sample2.png) | ![](samples/1/clu1-sample3.png) | ![](samples/1/clu1-sample4.png) | X | X | X | X | X | | X | X | | | | | | | | | | X | | | | | X | | X | X |
Lollitor/DatasetFineTune
--- dataset_info: features: - name: sequence dtype: string - name: label dtype: float64 splits: - name: train num_bytes: 22099679 num_examples: 18926 download_size: 8110458 dataset_size: 22099679 configs: - config_name: default data_files: - split: train path: data/train-* --- # Dataset Card for "DatasetFineTune" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
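The card above lists two features: a string `sequence` and a float `label`. As an illustrative sketch only (not part of the original card), records of that shape can be filtered locally once loaded; the sample rows and the 0.4 threshold below are hypothetical, chosen just to show the schema in use.

```python
# Hypothetical records mirroring the card's schema:
# "sequence" is a string, "label" is a float.
records = [
    {"sequence": "MKTAYIAKQR", "label": 0.82},
    {"sequence": "GAVLIPFW", "label": 0.15},
    {"sequence": "STCYNQ", "label": 0.47},
]

def filter_by_label(rows, threshold):
    """Keep rows whose float label meets or exceeds the threshold."""
    return [r for r in rows if r["label"] >= threshold]

high = filter_by_label(records, 0.4)
print([r["sequence"] for r in high])  # ['MKTAYIAKQR', 'STCYNQ']
```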
universeTBD/arxiv-abstracts-large
---
license: afl-3.0
task_categories:
- text-generation
language:
- en
pretty_name: Arxiv Abstracts Large
size_categories:
- 1M<n<10M
---
The arXiv Dataset is a comprehensive knowledge repository of 1.7 million scholarly articles drawn from the vast domains of physics, computer science, statistics, electrical engineering, quantitative biology, and economics, among others. It provides open access to vital features such as article titles, authors, categories, abstracts, full-text PDFs, and more. The dataset offers immense depth, allowing for exploration into various subdisciplines and interconnections between them. It serves as a potent tool for researchers seeking overlooked discoveries, innovative perspectives, or cross-field insights, thereby paving the way for new ideas and scientific advancements.
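As a minimal sketch of working with records carrying the fields the description mentions (titles, authors, categories, abstracts), the snippet below groups hypothetical rows by their first listed category. The field names, the space-separated `categories` string, and the sample rows are assumptions for illustration, not the dataset's verified schema.

```python
from collections import defaultdict

# Hypothetical rows with fields like those the description mentions.
articles = [
    {"title": "A", "categories": "cs.CL cs.LG", "abstract": "..."},
    {"title": "B", "categories": "math.ST", "abstract": "..."},
    {"title": "C", "categories": "cs.CL", "abstract": "..."},
]

# Group article titles by the first (primary) category listed.
by_primary = defaultdict(list)
for art in articles:
    primary = art["categories"].split()[0]
    by_primary[primary].append(art["title"])

print(dict(by_primary))  # {'cs.CL': ['A', 'C'], 'math.ST': ['B']}
```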
open-llm-leaderboard/details_lemon-mint__gemma-7b-openhermes-v0.80
--- pretty_name: Evaluation run of lemon-mint/gemma-7b-openhermes-v0.80 dataset_summary: "Dataset automatically created during the evaluation run of model\ \ [lemon-mint/gemma-7b-openhermes-v0.80](https://huggingface.co/lemon-mint/gemma-7b-openhermes-v0.80)\ \ on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\ \nThe dataset is composed of 63 configurations, each one corresponding to one of the\ \ evaluated tasks.\n\nThe dataset has been created from 2 run(s). Each run can be\ \ found as a specific split in each configuration, the split being named using the\ \ timestamp of the run. The \"train\" split is always pointing to the latest results.\n\ \nAn additional configuration \"results\" stores all the aggregated results of the\ \ run (and is used to compute and display the aggregated metrics on the [Open LLM\ \ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\ \nTo load the details from a run, you can for instance do the following:\n```python\n\ from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_lemon-mint__gemma-7b-openhermes-v0.80\"\ ,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\ These are the [latest results from run 2024-04-09T11:26:08.031314](https://huggingface.co/datasets/open-llm-leaderboard/details_lemon-mint__gemma-7b-openhermes-v0.80/blob/main/results_2024-04-09T11-26-08.031314.json) (note\ \ that there might be results for other tasks in the repos if successive evals didn't\ \ cover the same tasks. 
You find each in the results and the \"latest\" split for\ \ each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.5607727208540026,\n\ \ \"acc_stderr\": 0.03374009542716251,\n \"acc_norm\": 0.5646252525030766,\n\ \ \"acc_norm_stderr\": 0.034417724800141,\n \"mc1\": 0.2937576499388005,\n\ \ \"mc1_stderr\": 0.015945068581236614,\n \"mc2\": 0.4706193972805725,\n\ \ \"mc2_stderr\": 0.015617837881275841\n },\n \"harness|arc:challenge|25\"\ : {\n \"acc\": 0.5179180887372014,\n \"acc_stderr\": 0.014602005585490976,\n\ \ \"acc_norm\": 0.5435153583617748,\n \"acc_norm_stderr\": 0.014555949760496442\n\ \ },\n \"harness|hellaswag|10\": {\n \"acc\": 0.5553674566819359,\n\ \ \"acc_stderr\": 0.004959094146471529,\n \"acc_norm\": 0.7369049990041824,\n\ \ \"acc_norm_stderr\": 0.004394136724173006\n },\n \"harness|hendrycksTest-abstract_algebra|5\"\ : {\n \"acc\": 0.27,\n \"acc_stderr\": 0.04461960433384739,\n \ \ \"acc_norm\": 0.27,\n \"acc_norm_stderr\": 0.04461960433384739\n \ \ },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.5037037037037037,\n\ \ \"acc_stderr\": 0.04319223625811331,\n \"acc_norm\": 0.5037037037037037,\n\ \ \"acc_norm_stderr\": 0.04319223625811331\n },\n \"harness|hendrycksTest-astronomy|5\"\ : {\n \"acc\": 0.631578947368421,\n \"acc_stderr\": 0.03925523381052932,\n\ \ \"acc_norm\": 0.631578947368421,\n \"acc_norm_stderr\": 0.03925523381052932\n\ \ },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.57,\n\ \ \"acc_stderr\": 0.04975698519562427,\n \"acc_norm\": 0.57,\n \ \ \"acc_norm_stderr\": 0.04975698519562427\n },\n \"harness|hendrycksTest-clinical_knowledge|5\"\ : {\n \"acc\": 0.5849056603773585,\n \"acc_stderr\": 0.03032594578928611,\n\ \ \"acc_norm\": 0.5849056603773585,\n \"acc_norm_stderr\": 0.03032594578928611\n\ \ },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.6527777777777778,\n\ \ \"acc_stderr\": 0.039812405437178615,\n \"acc_norm\": 0.6527777777777778,\n\ \ \"acc_norm_stderr\": 0.039812405437178615\n },\n 
\"harness|hendrycksTest-college_chemistry|5\"\ : {\n \"acc\": 0.38,\n \"acc_stderr\": 0.04878317312145632,\n \ \ \"acc_norm\": 0.38,\n \"acc_norm_stderr\": 0.04878317312145632\n \ \ },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\"\ : 0.39,\n \"acc_stderr\": 0.04902071300001974,\n \"acc_norm\": 0.39,\n\ \ \"acc_norm_stderr\": 0.04902071300001974\n },\n \"harness|hendrycksTest-college_mathematics|5\"\ : {\n \"acc\": 0.37,\n \"acc_stderr\": 0.04852365870939099,\n \ \ \"acc_norm\": 0.37,\n \"acc_norm_stderr\": 0.04852365870939099\n \ \ },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.5144508670520231,\n\ \ \"acc_stderr\": 0.03810871630454764,\n \"acc_norm\": 0.5144508670520231,\n\ \ \"acc_norm_stderr\": 0.03810871630454764\n },\n \"harness|hendrycksTest-college_physics|5\"\ : {\n \"acc\": 0.30392156862745096,\n \"acc_stderr\": 0.045766654032077615,\n\ \ \"acc_norm\": 0.30392156862745096,\n \"acc_norm_stderr\": 0.045766654032077615\n\ \ },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\":\ \ 0.75,\n \"acc_stderr\": 0.04351941398892446,\n \"acc_norm\": 0.75,\n\ \ \"acc_norm_stderr\": 0.04351941398892446\n },\n \"harness|hendrycksTest-conceptual_physics|5\"\ : {\n \"acc\": 0.4553191489361702,\n \"acc_stderr\": 0.032555253593403555,\n\ \ \"acc_norm\": 0.4553191489361702,\n \"acc_norm_stderr\": 0.032555253593403555\n\ \ },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.3684210526315789,\n\ \ \"acc_stderr\": 0.04537815354939392,\n \"acc_norm\": 0.3684210526315789,\n\ \ \"acc_norm_stderr\": 0.04537815354939392\n },\n \"harness|hendrycksTest-electrical_engineering|5\"\ : {\n \"acc\": 0.5586206896551724,\n \"acc_stderr\": 0.04137931034482757,\n\ \ \"acc_norm\": 0.5586206896551724,\n \"acc_norm_stderr\": 0.04137931034482757\n\ \ },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\"\ : 0.4021164021164021,\n \"acc_stderr\": 0.025253032554997692,\n \"\ acc_norm\": 0.4021164021164021,\n 
\"acc_norm_stderr\": 0.025253032554997692\n\ \ },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.40476190476190477,\n\ \ \"acc_stderr\": 0.04390259265377562,\n \"acc_norm\": 0.40476190476190477,\n\ \ \"acc_norm_stderr\": 0.04390259265377562\n },\n \"harness|hendrycksTest-global_facts|5\"\ : {\n \"acc\": 0.31,\n \"acc_stderr\": 0.04648231987117316,\n \ \ \"acc_norm\": 0.31,\n \"acc_norm_stderr\": 0.04648231987117316\n \ \ },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.6741935483870968,\n\ \ \"acc_stderr\": 0.0266620105785671,\n \"acc_norm\": 0.6741935483870968,\n\ \ \"acc_norm_stderr\": 0.0266620105785671\n },\n \"harness|hendrycksTest-high_school_chemistry|5\"\ : {\n \"acc\": 0.4482758620689655,\n \"acc_stderr\": 0.03499113137676744,\n\ \ \"acc_norm\": 0.4482758620689655,\n \"acc_norm_stderr\": 0.03499113137676744\n\ \ },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \ \ \"acc\": 0.59,\n \"acc_stderr\": 0.04943110704237102,\n \"acc_norm\"\ : 0.59,\n \"acc_norm_stderr\": 0.04943110704237102\n },\n \"harness|hendrycksTest-high_school_european_history|5\"\ : {\n \"acc\": 0.7272727272727273,\n \"acc_stderr\": 0.03477691162163659,\n\ \ \"acc_norm\": 0.7272727272727273,\n \"acc_norm_stderr\": 0.03477691162163659\n\ \ },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\"\ : 0.7222222222222222,\n \"acc_stderr\": 0.03191178226713547,\n \"\ acc_norm\": 0.7222222222222222,\n \"acc_norm_stderr\": 0.03191178226713547\n\ \ },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n\ \ \"acc\": 0.8186528497409327,\n \"acc_stderr\": 0.027807032360686088,\n\ \ \"acc_norm\": 0.8186528497409327,\n \"acc_norm_stderr\": 0.027807032360686088\n\ \ },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \ \ \"acc\": 0.5307692307692308,\n \"acc_stderr\": 0.025302958890850154,\n\ \ \"acc_norm\": 0.5307692307692308,\n \"acc_norm_stderr\": 0.025302958890850154\n\ \ },\n 
\"harness|hendrycksTest-high_school_mathematics|5\": {\n \"\ acc\": 0.3111111111111111,\n \"acc_stderr\": 0.028226446749683515,\n \ \ \"acc_norm\": 0.3111111111111111,\n \"acc_norm_stderr\": 0.028226446749683515\n\ \ },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \ \ \"acc\": 0.5756302521008403,\n \"acc_stderr\": 0.032104790510157764,\n\ \ \"acc_norm\": 0.5756302521008403,\n \"acc_norm_stderr\": 0.032104790510157764\n\ \ },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\"\ : 0.304635761589404,\n \"acc_stderr\": 0.03757949922943343,\n \"acc_norm\"\ : 0.304635761589404,\n \"acc_norm_stderr\": 0.03757949922943343\n },\n\ \ \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\": 0.7669724770642202,\n\ \ \"acc_stderr\": 0.018125669180861507,\n \"acc_norm\": 0.7669724770642202,\n\ \ \"acc_norm_stderr\": 0.018125669180861507\n },\n \"harness|hendrycksTest-high_school_statistics|5\"\ : {\n \"acc\": 0.3888888888888889,\n \"acc_stderr\": 0.033247089118091176,\n\ \ \"acc_norm\": 0.3888888888888889,\n \"acc_norm_stderr\": 0.033247089118091176\n\ \ },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\"\ : 0.7009803921568627,\n \"acc_stderr\": 0.032133257173736156,\n \"\ acc_norm\": 0.7009803921568627,\n \"acc_norm_stderr\": 0.032133257173736156\n\ \ },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"\ acc\": 0.70042194092827,\n \"acc_stderr\": 0.029818024749753095,\n \ \ \"acc_norm\": 0.70042194092827,\n \"acc_norm_stderr\": 0.029818024749753095\n\ \ },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.6591928251121076,\n\ \ \"acc_stderr\": 0.03181149747055359,\n \"acc_norm\": 0.6591928251121076,\n\ \ \"acc_norm_stderr\": 0.03181149747055359\n },\n \"harness|hendrycksTest-human_sexuality|5\"\ : {\n \"acc\": 0.648854961832061,\n \"acc_stderr\": 0.04186445163013751,\n\ \ \"acc_norm\": 0.648854961832061,\n \"acc_norm_stderr\": 0.04186445163013751\n\ \ },\n \"harness|hendrycksTest-international_law|5\": 
{\n \"acc\":\ \ 0.768595041322314,\n \"acc_stderr\": 0.03849856098794088,\n \"acc_norm\"\ : 0.768595041322314,\n \"acc_norm_stderr\": 0.03849856098794088\n },\n\ \ \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.6388888888888888,\n\ \ \"acc_stderr\": 0.04643454608906275,\n \"acc_norm\": 0.6388888888888888,\n\ \ \"acc_norm_stderr\": 0.04643454608906275\n },\n \"harness|hendrycksTest-logical_fallacies|5\"\ : {\n \"acc\": 0.656441717791411,\n \"acc_stderr\": 0.037311335196738925,\n\ \ \"acc_norm\": 0.656441717791411,\n \"acc_norm_stderr\": 0.037311335196738925\n\ \ },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.48214285714285715,\n\ \ \"acc_stderr\": 0.04742762361243011,\n \"acc_norm\": 0.48214285714285715,\n\ \ \"acc_norm_stderr\": 0.04742762361243011\n },\n \"harness|hendrycksTest-management|5\"\ : {\n \"acc\": 0.6990291262135923,\n \"acc_stderr\": 0.04541609446503948,\n\ \ \"acc_norm\": 0.6990291262135923,\n \"acc_norm_stderr\": 0.04541609446503948\n\ \ },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.8376068376068376,\n\ \ \"acc_stderr\": 0.02416161812798774,\n \"acc_norm\": 0.8376068376068376,\n\ \ \"acc_norm_stderr\": 0.02416161812798774\n },\n \"harness|hendrycksTest-medical_genetics|5\"\ : {\n \"acc\": 0.65,\n \"acc_stderr\": 0.0479372485441102,\n \ \ \"acc_norm\": 0.65,\n \"acc_norm_stderr\": 0.0479372485441102\n },\n\ \ \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.7650063856960408,\n\ \ \"acc_stderr\": 0.015162024152278441,\n \"acc_norm\": 0.7650063856960408,\n\ \ \"acc_norm_stderr\": 0.015162024152278441\n },\n \"harness|hendrycksTest-moral_disputes|5\"\ : {\n \"acc\": 0.5982658959537572,\n \"acc_stderr\": 0.026394104177643634,\n\ \ \"acc_norm\": 0.5982658959537572,\n \"acc_norm_stderr\": 0.026394104177643634\n\ \ },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.2335195530726257,\n\ \ \"acc_stderr\": 0.014149575348976273,\n \"acc_norm\": 0.2335195530726257,\n\ \ \"acc_norm_stderr\": 
0.014149575348976273\n },\n \"harness|hendrycksTest-nutrition|5\"\ : {\n \"acc\": 0.5980392156862745,\n \"acc_stderr\": 0.028074158947600656,\n\ \ \"acc_norm\": 0.5980392156862745,\n \"acc_norm_stderr\": 0.028074158947600656\n\ \ },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.594855305466238,\n\ \ \"acc_stderr\": 0.02788238379132596,\n \"acc_norm\": 0.594855305466238,\n\ \ \"acc_norm_stderr\": 0.02788238379132596\n },\n \"harness|hendrycksTest-prehistory|5\"\ : {\n \"acc\": 0.6296296296296297,\n \"acc_stderr\": 0.02686949074481525,\n\ \ \"acc_norm\": 0.6296296296296297,\n \"acc_norm_stderr\": 0.02686949074481525\n\ \ },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"\ acc\": 0.40425531914893614,\n \"acc_stderr\": 0.02927553215970472,\n \ \ \"acc_norm\": 0.40425531914893614,\n \"acc_norm_stderr\": 0.02927553215970472\n\ \ },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.4282920469361147,\n\ \ \"acc_stderr\": 0.01263822388031316,\n \"acc_norm\": 0.4282920469361147,\n\ \ \"acc_norm_stderr\": 0.01263822388031316\n },\n \"harness|hendrycksTest-professional_medicine|5\"\ : {\n \"acc\": 0.4522058823529412,\n \"acc_stderr\": 0.030233758551596445,\n\ \ \"acc_norm\": 0.4522058823529412,\n \"acc_norm_stderr\": 0.030233758551596445\n\ \ },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"\ acc\": 0.5751633986928104,\n \"acc_stderr\": 0.019997973035458333,\n \ \ \"acc_norm\": 0.5751633986928104,\n \"acc_norm_stderr\": 0.019997973035458333\n\ \ },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.6272727272727273,\n\ \ \"acc_stderr\": 0.04631381319425465,\n \"acc_norm\": 0.6272727272727273,\n\ \ \"acc_norm_stderr\": 0.04631381319425465\n },\n \"harness|hendrycksTest-security_studies|5\"\ : {\n \"acc\": 0.6775510204081633,\n \"acc_stderr\": 0.029923100563683906,\n\ \ \"acc_norm\": 0.6775510204081633,\n \"acc_norm_stderr\": 0.029923100563683906\n\ \ },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 
0.7810945273631841,\n\ \ \"acc_stderr\": 0.029239174636647,\n \"acc_norm\": 0.7810945273631841,\n\ \ \"acc_norm_stderr\": 0.029239174636647\n },\n \"harness|hendrycksTest-us_foreign_policy|5\"\ : {\n \"acc\": 0.71,\n \"acc_stderr\": 0.045604802157206845,\n \ \ \"acc_norm\": 0.71,\n \"acc_norm_stderr\": 0.045604802157206845\n \ \ },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.46987951807228917,\n\ \ \"acc_stderr\": 0.03885425420866767,\n \"acc_norm\": 0.46987951807228917,\n\ \ \"acc_norm_stderr\": 0.03885425420866767\n },\n \"harness|hendrycksTest-world_religions|5\"\ : {\n \"acc\": 0.7660818713450293,\n \"acc_stderr\": 0.03246721765117826,\n\ \ \"acc_norm\": 0.7660818713450293,\n \"acc_norm_stderr\": 0.03246721765117826\n\ \ },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.2937576499388005,\n\ \ \"mc1_stderr\": 0.015945068581236614,\n \"mc2\": 0.4706193972805725,\n\ \ \"mc2_stderr\": 0.015617837881275841\n },\n \"harness|winogrande|5\"\ : {\n \"acc\": 0.6677190213101816,\n \"acc_stderr\": 0.013238316554236526\n\ \ },\n \"harness|gsm8k|5\": {\n \"acc\": 0.4336618650492798,\n \ \ \"acc_stderr\": 0.013650728047064686\n }\n}\n```" repo_url: https://huggingface.co/lemon-mint/gemma-7b-openhermes-v0.80 leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard point_of_contact: clementine@hf.co configs: - config_name: harness_arc_challenge_25 data_files: - split: 2024_04_09T11_07_43.510982 path: - '**/details_harness|arc:challenge|25_2024-04-09T11-07-43.510982.parquet' - split: 2024_04_09T11_26_08.031314 path: - '**/details_harness|arc:challenge|25_2024-04-09T11-26-08.031314.parquet' - split: latest path: - '**/details_harness|arc:challenge|25_2024-04-09T11-26-08.031314.parquet' - config_name: harness_gsm8k_5 data_files: - split: 2024_04_09T11_07_43.510982 path: - '**/details_harness|gsm8k|5_2024-04-09T11-07-43.510982.parquet' - split: 2024_04_09T11_26_08.031314 path: - '**/details_harness|gsm8k|5_2024-04-09T11-26-08.031314.parquet' - 
split: latest path: - '**/details_harness|gsm8k|5_2024-04-09T11-26-08.031314.parquet' - config_name: harness_hellaswag_10 data_files: - split: 2024_04_09T11_07_43.510982 path: - '**/details_harness|hellaswag|10_2024-04-09T11-07-43.510982.parquet' - split: 2024_04_09T11_26_08.031314 path: - '**/details_harness|hellaswag|10_2024-04-09T11-26-08.031314.parquet' - split: latest path: - '**/details_harness|hellaswag|10_2024-04-09T11-26-08.031314.parquet' - config_name: harness_hendrycksTest_5 data_files: - split: 2024_04_09T11_07_43.510982 path: - '**/details_harness|hendrycksTest-abstract_algebra|5_2024-04-09T11-07-43.510982.parquet' - '**/details_harness|hendrycksTest-anatomy|5_2024-04-09T11-07-43.510982.parquet' - '**/details_harness|hendrycksTest-astronomy|5_2024-04-09T11-07-43.510982.parquet' - '**/details_harness|hendrycksTest-business_ethics|5_2024-04-09T11-07-43.510982.parquet' - '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-04-09T11-07-43.510982.parquet' - '**/details_harness|hendrycksTest-college_biology|5_2024-04-09T11-07-43.510982.parquet' - '**/details_harness|hendrycksTest-college_chemistry|5_2024-04-09T11-07-43.510982.parquet' - '**/details_harness|hendrycksTest-college_computer_science|5_2024-04-09T11-07-43.510982.parquet' - '**/details_harness|hendrycksTest-college_mathematics|5_2024-04-09T11-07-43.510982.parquet' - '**/details_harness|hendrycksTest-college_medicine|5_2024-04-09T11-07-43.510982.parquet' - '**/details_harness|hendrycksTest-college_physics|5_2024-04-09T11-07-43.510982.parquet' - '**/details_harness|hendrycksTest-computer_security|5_2024-04-09T11-07-43.510982.parquet' - '**/details_harness|hendrycksTest-conceptual_physics|5_2024-04-09T11-07-43.510982.parquet' - '**/details_harness|hendrycksTest-econometrics|5_2024-04-09T11-07-43.510982.parquet' - '**/details_harness|hendrycksTest-electrical_engineering|5_2024-04-09T11-07-43.510982.parquet' - 
'**/details_harness|hendrycksTest-elementary_mathematics|5_2024-04-09T11-07-43.510982.parquet' - '**/details_harness|hendrycksTest-formal_logic|5_2024-04-09T11-07-43.510982.parquet' - '**/details_harness|hendrycksTest-global_facts|5_2024-04-09T11-07-43.510982.parquet' - '**/details_harness|hendrycksTest-high_school_biology|5_2024-04-09T11-07-43.510982.parquet' - '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-04-09T11-07-43.510982.parquet' - '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-04-09T11-07-43.510982.parquet' - '**/details_harness|hendrycksTest-high_school_european_history|5_2024-04-09T11-07-43.510982.parquet' - '**/details_harness|hendrycksTest-high_school_geography|5_2024-04-09T11-07-43.510982.parquet' - '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-04-09T11-07-43.510982.parquet' - '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-04-09T11-07-43.510982.parquet' - '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-04-09T11-07-43.510982.parquet' - '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-04-09T11-07-43.510982.parquet' - '**/details_harness|hendrycksTest-high_school_physics|5_2024-04-09T11-07-43.510982.parquet' - '**/details_harness|hendrycksTest-high_school_psychology|5_2024-04-09T11-07-43.510982.parquet' - '**/details_harness|hendrycksTest-high_school_statistics|5_2024-04-09T11-07-43.510982.parquet' - '**/details_harness|hendrycksTest-high_school_us_history|5_2024-04-09T11-07-43.510982.parquet' - '**/details_harness|hendrycksTest-high_school_world_history|5_2024-04-09T11-07-43.510982.parquet' - '**/details_harness|hendrycksTest-human_aging|5_2024-04-09T11-07-43.510982.parquet' - '**/details_harness|hendrycksTest-human_sexuality|5_2024-04-09T11-07-43.510982.parquet' - '**/details_harness|hendrycksTest-international_law|5_2024-04-09T11-07-43.510982.parquet' - 
'**/details_harness|hendrycksTest-jurisprudence|5_2024-04-09T11-07-43.510982.parquet' - '**/details_harness|hendrycksTest-logical_fallacies|5_2024-04-09T11-07-43.510982.parquet' - '**/details_harness|hendrycksTest-machine_learning|5_2024-04-09T11-07-43.510982.parquet' - '**/details_harness|hendrycksTest-management|5_2024-04-09T11-07-43.510982.parquet' - '**/details_harness|hendrycksTest-marketing|5_2024-04-09T11-07-43.510982.parquet' - '**/details_harness|hendrycksTest-medical_genetics|5_2024-04-09T11-07-43.510982.parquet' - '**/details_harness|hendrycksTest-miscellaneous|5_2024-04-09T11-07-43.510982.parquet' - '**/details_harness|hendrycksTest-moral_disputes|5_2024-04-09T11-07-43.510982.parquet' - '**/details_harness|hendrycksTest-moral_scenarios|5_2024-04-09T11-07-43.510982.parquet' - '**/details_harness|hendrycksTest-nutrition|5_2024-04-09T11-07-43.510982.parquet' - '**/details_harness|hendrycksTest-philosophy|5_2024-04-09T11-07-43.510982.parquet' - '**/details_harness|hendrycksTest-prehistory|5_2024-04-09T11-07-43.510982.parquet' - '**/details_harness|hendrycksTest-professional_accounting|5_2024-04-09T11-07-43.510982.parquet' - '**/details_harness|hendrycksTest-professional_law|5_2024-04-09T11-07-43.510982.parquet' - '**/details_harness|hendrycksTest-professional_medicine|5_2024-04-09T11-07-43.510982.parquet' - '**/details_harness|hendrycksTest-professional_psychology|5_2024-04-09T11-07-43.510982.parquet' - '**/details_harness|hendrycksTest-public_relations|5_2024-04-09T11-07-43.510982.parquet' - '**/details_harness|hendrycksTest-security_studies|5_2024-04-09T11-07-43.510982.parquet' - '**/details_harness|hendrycksTest-sociology|5_2024-04-09T11-07-43.510982.parquet' - '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-04-09T11-07-43.510982.parquet' - '**/details_harness|hendrycksTest-virology|5_2024-04-09T11-07-43.510982.parquet' - '**/details_harness|hendrycksTest-world_religions|5_2024-04-09T11-07-43.510982.parquet' - split: 2024_04_09T11_26_08.031314 
path: - '**/details_harness|hendrycksTest-abstract_algebra|5_2024-04-09T11-26-08.031314.parquet' - '**/details_harness|hendrycksTest-anatomy|5_2024-04-09T11-26-08.031314.parquet' - '**/details_harness|hendrycksTest-astronomy|5_2024-04-09T11-26-08.031314.parquet' - '**/details_harness|hendrycksTest-business_ethics|5_2024-04-09T11-26-08.031314.parquet' - '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-04-09T11-26-08.031314.parquet' - '**/details_harness|hendrycksTest-college_biology|5_2024-04-09T11-26-08.031314.parquet' - '**/details_harness|hendrycksTest-college_chemistry|5_2024-04-09T11-26-08.031314.parquet' - '**/details_harness|hendrycksTest-college_computer_science|5_2024-04-09T11-26-08.031314.parquet' - '**/details_harness|hendrycksTest-college_mathematics|5_2024-04-09T11-26-08.031314.parquet' - '**/details_harness|hendrycksTest-college_medicine|5_2024-04-09T11-26-08.031314.parquet' - '**/details_harness|hendrycksTest-college_physics|5_2024-04-09T11-26-08.031314.parquet' - '**/details_harness|hendrycksTest-computer_security|5_2024-04-09T11-26-08.031314.parquet' - '**/details_harness|hendrycksTest-conceptual_physics|5_2024-04-09T11-26-08.031314.parquet' - '**/details_harness|hendrycksTest-econometrics|5_2024-04-09T11-26-08.031314.parquet' - '**/details_harness|hendrycksTest-electrical_engineering|5_2024-04-09T11-26-08.031314.parquet' - '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-04-09T11-26-08.031314.parquet' - '**/details_harness|hendrycksTest-formal_logic|5_2024-04-09T11-26-08.031314.parquet' - '**/details_harness|hendrycksTest-global_facts|5_2024-04-09T11-26-08.031314.parquet' - '**/details_harness|hendrycksTest-high_school_biology|5_2024-04-09T11-26-08.031314.parquet' - '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-04-09T11-26-08.031314.parquet' - '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-04-09T11-26-08.031314.parquet' - 
'**/details_harness|hendrycksTest-high_school_european_history|5_2024-04-09T11-26-08.031314.parquet' - '**/details_harness|hendrycksTest-high_school_geography|5_2024-04-09T11-26-08.031314.parquet' - '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-04-09T11-26-08.031314.parquet' - '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-04-09T11-26-08.031314.parquet' - '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-04-09T11-26-08.031314.parquet' - '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-04-09T11-26-08.031314.parquet' - '**/details_harness|hendrycksTest-high_school_physics|5_2024-04-09T11-26-08.031314.parquet' - '**/details_harness|hendrycksTest-high_school_psychology|5_2024-04-09T11-26-08.031314.parquet' - '**/details_harness|hendrycksTest-high_school_statistics|5_2024-04-09T11-26-08.031314.parquet' - '**/details_harness|hendrycksTest-high_school_us_history|5_2024-04-09T11-26-08.031314.parquet' - '**/details_harness|hendrycksTest-high_school_world_history|5_2024-04-09T11-26-08.031314.parquet' - '**/details_harness|hendrycksTest-human_aging|5_2024-04-09T11-26-08.031314.parquet' - '**/details_harness|hendrycksTest-human_sexuality|5_2024-04-09T11-26-08.031314.parquet' - '**/details_harness|hendrycksTest-international_law|5_2024-04-09T11-26-08.031314.parquet' - '**/details_harness|hendrycksTest-jurisprudence|5_2024-04-09T11-26-08.031314.parquet' - '**/details_harness|hendrycksTest-logical_fallacies|5_2024-04-09T11-26-08.031314.parquet' - '**/details_harness|hendrycksTest-machine_learning|5_2024-04-09T11-26-08.031314.parquet' - '**/details_harness|hendrycksTest-management|5_2024-04-09T11-26-08.031314.parquet' - '**/details_harness|hendrycksTest-marketing|5_2024-04-09T11-26-08.031314.parquet' - '**/details_harness|hendrycksTest-medical_genetics|5_2024-04-09T11-26-08.031314.parquet' - '**/details_harness|hendrycksTest-miscellaneous|5_2024-04-09T11-26-08.031314.parquet' - 
'**/details_harness|hendrycksTest-moral_disputes|5_2024-04-09T11-26-08.031314.parquet' - '**/details_harness|hendrycksTest-moral_scenarios|5_2024-04-09T11-26-08.031314.parquet' - '**/details_harness|hendrycksTest-nutrition|5_2024-04-09T11-26-08.031314.parquet' - '**/details_harness|hendrycksTest-philosophy|5_2024-04-09T11-26-08.031314.parquet' - '**/details_harness|hendrycksTest-prehistory|5_2024-04-09T11-26-08.031314.parquet' - '**/details_harness|hendrycksTest-professional_accounting|5_2024-04-09T11-26-08.031314.parquet' - '**/details_harness|hendrycksTest-professional_law|5_2024-04-09T11-26-08.031314.parquet' - '**/details_harness|hendrycksTest-professional_medicine|5_2024-04-09T11-26-08.031314.parquet' - '**/details_harness|hendrycksTest-professional_psychology|5_2024-04-09T11-26-08.031314.parquet' - '**/details_harness|hendrycksTest-public_relations|5_2024-04-09T11-26-08.031314.parquet' - '**/details_harness|hendrycksTest-security_studies|5_2024-04-09T11-26-08.031314.parquet' - '**/details_harness|hendrycksTest-sociology|5_2024-04-09T11-26-08.031314.parquet' - '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-04-09T11-26-08.031314.parquet' - '**/details_harness|hendrycksTest-virology|5_2024-04-09T11-26-08.031314.parquet' - '**/details_harness|hendrycksTest-world_religions|5_2024-04-09T11-26-08.031314.parquet' - split: latest path: - '**/details_harness|hendrycksTest-abstract_algebra|5_2024-04-09T11-26-08.031314.parquet' - '**/details_harness|hendrycksTest-anatomy|5_2024-04-09T11-26-08.031314.parquet' - '**/details_harness|hendrycksTest-astronomy|5_2024-04-09T11-26-08.031314.parquet' - '**/details_harness|hendrycksTest-business_ethics|5_2024-04-09T11-26-08.031314.parquet' - '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-04-09T11-26-08.031314.parquet' - '**/details_harness|hendrycksTest-college_biology|5_2024-04-09T11-26-08.031314.parquet' - '**/details_harness|hendrycksTest-college_chemistry|5_2024-04-09T11-26-08.031314.parquet' - 
'**/details_harness|hendrycksTest-college_computer_science|5_2024-04-09T11-26-08.031314.parquet' - '**/details_harness|hendrycksTest-college_mathematics|5_2024-04-09T11-26-08.031314.parquet' - '**/details_harness|hendrycksTest-college_medicine|5_2024-04-09T11-26-08.031314.parquet' - '**/details_harness|hendrycksTest-college_physics|5_2024-04-09T11-26-08.031314.parquet' - '**/details_harness|hendrycksTest-computer_security|5_2024-04-09T11-26-08.031314.parquet' - '**/details_harness|hendrycksTest-conceptual_physics|5_2024-04-09T11-26-08.031314.parquet' - '**/details_harness|hendrycksTest-econometrics|5_2024-04-09T11-26-08.031314.parquet' - '**/details_harness|hendrycksTest-electrical_engineering|5_2024-04-09T11-26-08.031314.parquet' - '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-04-09T11-26-08.031314.parquet' - '**/details_harness|hendrycksTest-formal_logic|5_2024-04-09T11-26-08.031314.parquet' - '**/details_harness|hendrycksTest-global_facts|5_2024-04-09T11-26-08.031314.parquet' - '**/details_harness|hendrycksTest-high_school_biology|5_2024-04-09T11-26-08.031314.parquet' - '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-04-09T11-26-08.031314.parquet' - '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-04-09T11-26-08.031314.parquet' - '**/details_harness|hendrycksTest-high_school_european_history|5_2024-04-09T11-26-08.031314.parquet' - '**/details_harness|hendrycksTest-high_school_geography|5_2024-04-09T11-26-08.031314.parquet' - '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-04-09T11-26-08.031314.parquet' - '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-04-09T11-26-08.031314.parquet' - '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-04-09T11-26-08.031314.parquet' - '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-04-09T11-26-08.031314.parquet' - 
'**/details_harness|hendrycksTest-high_school_physics|5_2024-04-09T11-26-08.031314.parquet' - '**/details_harness|hendrycksTest-high_school_psychology|5_2024-04-09T11-26-08.031314.parquet' - '**/details_harness|hendrycksTest-high_school_statistics|5_2024-04-09T11-26-08.031314.parquet' - '**/details_harness|hendrycksTest-high_school_us_history|5_2024-04-09T11-26-08.031314.parquet' - '**/details_harness|hendrycksTest-high_school_world_history|5_2024-04-09T11-26-08.031314.parquet' - '**/details_harness|hendrycksTest-human_aging|5_2024-04-09T11-26-08.031314.parquet' - '**/details_harness|hendrycksTest-human_sexuality|5_2024-04-09T11-26-08.031314.parquet' - '**/details_harness|hendrycksTest-international_law|5_2024-04-09T11-26-08.031314.parquet' - '**/details_harness|hendrycksTest-jurisprudence|5_2024-04-09T11-26-08.031314.parquet' - '**/details_harness|hendrycksTest-logical_fallacies|5_2024-04-09T11-26-08.031314.parquet' - '**/details_harness|hendrycksTest-machine_learning|5_2024-04-09T11-26-08.031314.parquet' - '**/details_harness|hendrycksTest-management|5_2024-04-09T11-26-08.031314.parquet' - '**/details_harness|hendrycksTest-marketing|5_2024-04-09T11-26-08.031314.parquet' - '**/details_harness|hendrycksTest-medical_genetics|5_2024-04-09T11-26-08.031314.parquet' - '**/details_harness|hendrycksTest-miscellaneous|5_2024-04-09T11-26-08.031314.parquet' - '**/details_harness|hendrycksTest-moral_disputes|5_2024-04-09T11-26-08.031314.parquet' - '**/details_harness|hendrycksTest-moral_scenarios|5_2024-04-09T11-26-08.031314.parquet' - '**/details_harness|hendrycksTest-nutrition|5_2024-04-09T11-26-08.031314.parquet' - '**/details_harness|hendrycksTest-philosophy|5_2024-04-09T11-26-08.031314.parquet' - '**/details_harness|hendrycksTest-prehistory|5_2024-04-09T11-26-08.031314.parquet' - '**/details_harness|hendrycksTest-professional_accounting|5_2024-04-09T11-26-08.031314.parquet' - '**/details_harness|hendrycksTest-professional_law|5_2024-04-09T11-26-08.031314.parquet' - 
'**/details_harness|hendrycksTest-professional_medicine|5_2024-04-09T11-26-08.031314.parquet' - '**/details_harness|hendrycksTest-professional_psychology|5_2024-04-09T11-26-08.031314.parquet' - '**/details_harness|hendrycksTest-public_relations|5_2024-04-09T11-26-08.031314.parquet' - '**/details_harness|hendrycksTest-security_studies|5_2024-04-09T11-26-08.031314.parquet' - '**/details_harness|hendrycksTest-sociology|5_2024-04-09T11-26-08.031314.parquet' - '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-04-09T11-26-08.031314.parquet' - '**/details_harness|hendrycksTest-virology|5_2024-04-09T11-26-08.031314.parquet' - '**/details_harness|hendrycksTest-world_religions|5_2024-04-09T11-26-08.031314.parquet' - config_name: harness_hendrycksTest_abstract_algebra_5 data_files: - split: 2024_04_09T11_07_43.510982 path: - '**/details_harness|hendrycksTest-abstract_algebra|5_2024-04-09T11-07-43.510982.parquet' - split: 2024_04_09T11_26_08.031314 path: - '**/details_harness|hendrycksTest-abstract_algebra|5_2024-04-09T11-26-08.031314.parquet' - split: latest path: - '**/details_harness|hendrycksTest-abstract_algebra|5_2024-04-09T11-26-08.031314.parquet' - config_name: harness_hendrycksTest_anatomy_5 data_files: - split: 2024_04_09T11_07_43.510982 path: - '**/details_harness|hendrycksTest-anatomy|5_2024-04-09T11-07-43.510982.parquet' - split: 2024_04_09T11_26_08.031314 path: - '**/details_harness|hendrycksTest-anatomy|5_2024-04-09T11-26-08.031314.parquet' - split: latest path: - '**/details_harness|hendrycksTest-anatomy|5_2024-04-09T11-26-08.031314.parquet' - config_name: harness_hendrycksTest_astronomy_5 data_files: - split: 2024_04_09T11_07_43.510982 path: - '**/details_harness|hendrycksTest-astronomy|5_2024-04-09T11-07-43.510982.parquet' - split: 2024_04_09T11_26_08.031314 path: - '**/details_harness|hendrycksTest-astronomy|5_2024-04-09T11-26-08.031314.parquet' - split: latest path: - '**/details_harness|hendrycksTest-astronomy|5_2024-04-09T11-26-08.031314.parquet' 
- config_name: harness_hendrycksTest_business_ethics_5 data_files: - split: 2024_04_09T11_07_43.510982 path: - '**/details_harness|hendrycksTest-business_ethics|5_2024-04-09T11-07-43.510982.parquet' - split: 2024_04_09T11_26_08.031314 path: - '**/details_harness|hendrycksTest-business_ethics|5_2024-04-09T11-26-08.031314.parquet' - split: latest path: - '**/details_harness|hendrycksTest-business_ethics|5_2024-04-09T11-26-08.031314.parquet' - config_name: harness_hendrycksTest_clinical_knowledge_5 data_files: - split: 2024_04_09T11_07_43.510982 path: - '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-04-09T11-07-43.510982.parquet' - split: 2024_04_09T11_26_08.031314 path: - '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-04-09T11-26-08.031314.parquet' - split: latest path: - '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-04-09T11-26-08.031314.parquet' - config_name: harness_hendrycksTest_college_biology_5 data_files: - split: 2024_04_09T11_07_43.510982 path: - '**/details_harness|hendrycksTest-college_biology|5_2024-04-09T11-07-43.510982.parquet' - split: 2024_04_09T11_26_08.031314 path: - '**/details_harness|hendrycksTest-college_biology|5_2024-04-09T11-26-08.031314.parquet' - split: latest path: - '**/details_harness|hendrycksTest-college_biology|5_2024-04-09T11-26-08.031314.parquet' - config_name: harness_hendrycksTest_college_chemistry_5 data_files: - split: 2024_04_09T11_07_43.510982 path: - '**/details_harness|hendrycksTest-college_chemistry|5_2024-04-09T11-07-43.510982.parquet' - split: 2024_04_09T11_26_08.031314 path: - '**/details_harness|hendrycksTest-college_chemistry|5_2024-04-09T11-26-08.031314.parquet' - split: latest path: - '**/details_harness|hendrycksTest-college_chemistry|5_2024-04-09T11-26-08.031314.parquet' - config_name: harness_hendrycksTest_college_computer_science_5 data_files: - split: 2024_04_09T11_07_43.510982 path: - 
'**/details_harness|hendrycksTest-college_computer_science|5_2024-04-09T11-07-43.510982.parquet' - split: 2024_04_09T11_26_08.031314 path: - '**/details_harness|hendrycksTest-college_computer_science|5_2024-04-09T11-26-08.031314.parquet' - split: latest path: - '**/details_harness|hendrycksTest-college_computer_science|5_2024-04-09T11-26-08.031314.parquet' - config_name: harness_hendrycksTest_college_mathematics_5 data_files: - split: 2024_04_09T11_07_43.510982 path: - '**/details_harness|hendrycksTest-college_mathematics|5_2024-04-09T11-07-43.510982.parquet' - split: 2024_04_09T11_26_08.031314 path: - '**/details_harness|hendrycksTest-college_mathematics|5_2024-04-09T11-26-08.031314.parquet' - split: latest path: - '**/details_harness|hendrycksTest-college_mathematics|5_2024-04-09T11-26-08.031314.parquet' - config_name: harness_hendrycksTest_college_medicine_5 data_files: - split: 2024_04_09T11_07_43.510982 path: - '**/details_harness|hendrycksTest-college_medicine|5_2024-04-09T11-07-43.510982.parquet' - split: 2024_04_09T11_26_08.031314 path: - '**/details_harness|hendrycksTest-college_medicine|5_2024-04-09T11-26-08.031314.parquet' - split: latest path: - '**/details_harness|hendrycksTest-college_medicine|5_2024-04-09T11-26-08.031314.parquet' - config_name: harness_hendrycksTest_college_physics_5 data_files: - split: 2024_04_09T11_07_43.510982 path: - '**/details_harness|hendrycksTest-college_physics|5_2024-04-09T11-07-43.510982.parquet' - split: 2024_04_09T11_26_08.031314 path: - '**/details_harness|hendrycksTest-college_physics|5_2024-04-09T11-26-08.031314.parquet' - split: latest path: - '**/details_harness|hendrycksTest-college_physics|5_2024-04-09T11-26-08.031314.parquet' - config_name: harness_hendrycksTest_computer_security_5 data_files: - split: 2024_04_09T11_07_43.510982 path: - '**/details_harness|hendrycksTest-computer_security|5_2024-04-09T11-07-43.510982.parquet' - split: 2024_04_09T11_26_08.031314 path: - 
'**/details_harness|hendrycksTest-computer_security|5_2024-04-09T11-26-08.031314.parquet' - split: latest path: - '**/details_harness|hendrycksTest-computer_security|5_2024-04-09T11-26-08.031314.parquet' - config_name: harness_hendrycksTest_conceptual_physics_5 data_files: - split: 2024_04_09T11_07_43.510982 path: - '**/details_harness|hendrycksTest-conceptual_physics|5_2024-04-09T11-07-43.510982.parquet' - split: 2024_04_09T11_26_08.031314 path: - '**/details_harness|hendrycksTest-conceptual_physics|5_2024-04-09T11-26-08.031314.parquet' - split: latest path: - '**/details_harness|hendrycksTest-conceptual_physics|5_2024-04-09T11-26-08.031314.parquet' - config_name: harness_hendrycksTest_econometrics_5 data_files: - split: 2024_04_09T11_07_43.510982 path: - '**/details_harness|hendrycksTest-econometrics|5_2024-04-09T11-07-43.510982.parquet' - split: 2024_04_09T11_26_08.031314 path: - '**/details_harness|hendrycksTest-econometrics|5_2024-04-09T11-26-08.031314.parquet' - split: latest path: - '**/details_harness|hendrycksTest-econometrics|5_2024-04-09T11-26-08.031314.parquet' - config_name: harness_hendrycksTest_electrical_engineering_5 data_files: - split: 2024_04_09T11_07_43.510982 path: - '**/details_harness|hendrycksTest-electrical_engineering|5_2024-04-09T11-07-43.510982.parquet' - split: 2024_04_09T11_26_08.031314 path: - '**/details_harness|hendrycksTest-electrical_engineering|5_2024-04-09T11-26-08.031314.parquet' - split: latest path: - '**/details_harness|hendrycksTest-electrical_engineering|5_2024-04-09T11-26-08.031314.parquet' - config_name: harness_hendrycksTest_elementary_mathematics_5 data_files: - split: 2024_04_09T11_07_43.510982 path: - '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-04-09T11-07-43.510982.parquet' - split: 2024_04_09T11_26_08.031314 path: - '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-04-09T11-26-08.031314.parquet' - split: latest path: - 
'**/details_harness|hendrycksTest-elementary_mathematics|5_2024-04-09T11-26-08.031314.parquet' - config_name: harness_hendrycksTest_formal_logic_5 data_files: - split: 2024_04_09T11_07_43.510982 path: - '**/details_harness|hendrycksTest-formal_logic|5_2024-04-09T11-07-43.510982.parquet' - split: 2024_04_09T11_26_08.031314 path: - '**/details_harness|hendrycksTest-formal_logic|5_2024-04-09T11-26-08.031314.parquet' - split: latest path: - '**/details_harness|hendrycksTest-formal_logic|5_2024-04-09T11-26-08.031314.parquet' - config_name: harness_hendrycksTest_global_facts_5 data_files: - split: 2024_04_09T11_07_43.510982 path: - '**/details_harness|hendrycksTest-global_facts|5_2024-04-09T11-07-43.510982.parquet' - split: 2024_04_09T11_26_08.031314 path: - '**/details_harness|hendrycksTest-global_facts|5_2024-04-09T11-26-08.031314.parquet' - split: latest path: - '**/details_harness|hendrycksTest-global_facts|5_2024-04-09T11-26-08.031314.parquet' - config_name: harness_hendrycksTest_high_school_biology_5 data_files: - split: 2024_04_09T11_07_43.510982 path: - '**/details_harness|hendrycksTest-high_school_biology|5_2024-04-09T11-07-43.510982.parquet' - split: 2024_04_09T11_26_08.031314 path: - '**/details_harness|hendrycksTest-high_school_biology|5_2024-04-09T11-26-08.031314.parquet' - split: latest path: - '**/details_harness|hendrycksTest-high_school_biology|5_2024-04-09T11-26-08.031314.parquet' - config_name: harness_hendrycksTest_high_school_chemistry_5 data_files: - split: 2024_04_09T11_07_43.510982 path: - '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-04-09T11-07-43.510982.parquet' - split: 2024_04_09T11_26_08.031314 path: - '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-04-09T11-26-08.031314.parquet' - split: latest path: - '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-04-09T11-26-08.031314.parquet' - config_name: harness_hendrycksTest_high_school_computer_science_5 data_files: - split: 2024_04_09T11_07_43.510982 
path: - '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-04-09T11-07-43.510982.parquet' - split: 2024_04_09T11_26_08.031314 path: - '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-04-09T11-26-08.031314.parquet' - split: latest path: - '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-04-09T11-26-08.031314.parquet' - config_name: harness_hendrycksTest_high_school_european_history_5 data_files: - split: 2024_04_09T11_07_43.510982 path: - '**/details_harness|hendrycksTest-high_school_european_history|5_2024-04-09T11-07-43.510982.parquet' - split: 2024_04_09T11_26_08.031314 path: - '**/details_harness|hendrycksTest-high_school_european_history|5_2024-04-09T11-26-08.031314.parquet' - split: latest path: - '**/details_harness|hendrycksTest-high_school_european_history|5_2024-04-09T11-26-08.031314.parquet' - config_name: harness_hendrycksTest_high_school_geography_5 data_files: - split: 2024_04_09T11_07_43.510982 path: - '**/details_harness|hendrycksTest-high_school_geography|5_2024-04-09T11-07-43.510982.parquet' - split: 2024_04_09T11_26_08.031314 path: - '**/details_harness|hendrycksTest-high_school_geography|5_2024-04-09T11-26-08.031314.parquet' - split: latest path: - '**/details_harness|hendrycksTest-high_school_geography|5_2024-04-09T11-26-08.031314.parquet' - config_name: harness_hendrycksTest_high_school_government_and_politics_5 data_files: - split: 2024_04_09T11_07_43.510982 path: - '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-04-09T11-07-43.510982.parquet' - split: 2024_04_09T11_26_08.031314 path: - '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-04-09T11-26-08.031314.parquet' - split: latest path: - '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-04-09T11-26-08.031314.parquet' - config_name: harness_hendrycksTest_high_school_macroeconomics_5 data_files: - split: 2024_04_09T11_07_43.510982 path: - 
'**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-04-09T11-07-43.510982.parquet' - split: 2024_04_09T11_26_08.031314 path: - '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-04-09T11-26-08.031314.parquet' - split: latest path: - '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-04-09T11-26-08.031314.parquet' - config_name: harness_hendrycksTest_high_school_mathematics_5 data_files: - split: 2024_04_09T11_07_43.510982 path: - '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-04-09T11-07-43.510982.parquet' - split: 2024_04_09T11_26_08.031314 path: - '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-04-09T11-26-08.031314.parquet' - split: latest path: - '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-04-09T11-26-08.031314.parquet' - config_name: harness_hendrycksTest_high_school_microeconomics_5 data_files: - split: 2024_04_09T11_07_43.510982 path: - '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-04-09T11-07-43.510982.parquet' - split: 2024_04_09T11_26_08.031314 path: - '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-04-09T11-26-08.031314.parquet' - split: latest path: - '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-04-09T11-26-08.031314.parquet' - config_name: harness_hendrycksTest_high_school_physics_5 data_files: - split: 2024_04_09T11_07_43.510982 path: - '**/details_harness|hendrycksTest-high_school_physics|5_2024-04-09T11-07-43.510982.parquet' - split: 2024_04_09T11_26_08.031314 path: - '**/details_harness|hendrycksTest-high_school_physics|5_2024-04-09T11-26-08.031314.parquet' - split: latest path: - '**/details_harness|hendrycksTest-high_school_physics|5_2024-04-09T11-26-08.031314.parquet' - config_name: harness_hendrycksTest_high_school_psychology_5 data_files: - split: 2024_04_09T11_07_43.510982 path: - '**/details_harness|hendrycksTest-high_school_psychology|5_2024-04-09T11-07-43.510982.parquet' 
- split: 2024_04_09T11_26_08.031314 path: - '**/details_harness|hendrycksTest-high_school_psychology|5_2024-04-09T11-26-08.031314.parquet' - split: latest path: - '**/details_harness|hendrycksTest-high_school_psychology|5_2024-04-09T11-26-08.031314.parquet' - config_name: harness_hendrycksTest_high_school_statistics_5 data_files: - split: 2024_04_09T11_07_43.510982 path: - '**/details_harness|hendrycksTest-high_school_statistics|5_2024-04-09T11-07-43.510982.parquet' - split: 2024_04_09T11_26_08.031314 path: - '**/details_harness|hendrycksTest-high_school_statistics|5_2024-04-09T11-26-08.031314.parquet' - split: latest path: - '**/details_harness|hendrycksTest-high_school_statistics|5_2024-04-09T11-26-08.031314.parquet' - config_name: harness_hendrycksTest_high_school_us_history_5 data_files: - split: 2024_04_09T11_07_43.510982 path: - '**/details_harness|hendrycksTest-high_school_us_history|5_2024-04-09T11-07-43.510982.parquet' - split: 2024_04_09T11_26_08.031314 path: - '**/details_harness|hendrycksTest-high_school_us_history|5_2024-04-09T11-26-08.031314.parquet' - split: latest path: - '**/details_harness|hendrycksTest-high_school_us_history|5_2024-04-09T11-26-08.031314.parquet' - config_name: harness_hendrycksTest_high_school_world_history_5 data_files: - split: 2024_04_09T11_07_43.510982 path: - '**/details_harness|hendrycksTest-high_school_world_history|5_2024-04-09T11-07-43.510982.parquet' - split: 2024_04_09T11_26_08.031314 path: - '**/details_harness|hendrycksTest-high_school_world_history|5_2024-04-09T11-26-08.031314.parquet' - split: latest path: - '**/details_harness|hendrycksTest-high_school_world_history|5_2024-04-09T11-26-08.031314.parquet' - config_name: harness_hendrycksTest_human_aging_5 data_files: - split: 2024_04_09T11_07_43.510982 path: - '**/details_harness|hendrycksTest-human_aging|5_2024-04-09T11-07-43.510982.parquet' - split: 2024_04_09T11_26_08.031314 path: - 
'**/details_harness|hendrycksTest-human_aging|5_2024-04-09T11-26-08.031314.parquet' - split: latest path: - '**/details_harness|hendrycksTest-human_aging|5_2024-04-09T11-26-08.031314.parquet' - config_name: harness_hendrycksTest_human_sexuality_5 data_files: - split: 2024_04_09T11_07_43.510982 path: - '**/details_harness|hendrycksTest-human_sexuality|5_2024-04-09T11-07-43.510982.parquet' - split: 2024_04_09T11_26_08.031314 path: - '**/details_harness|hendrycksTest-human_sexuality|5_2024-04-09T11-26-08.031314.parquet' - split: latest path: - '**/details_harness|hendrycksTest-human_sexuality|5_2024-04-09T11-26-08.031314.parquet' - config_name: harness_hendrycksTest_international_law_5 data_files: - split: 2024_04_09T11_07_43.510982 path: - '**/details_harness|hendrycksTest-international_law|5_2024-04-09T11-07-43.510982.parquet' - split: 2024_04_09T11_26_08.031314 path: - '**/details_harness|hendrycksTest-international_law|5_2024-04-09T11-26-08.031314.parquet' - split: latest path: - '**/details_harness|hendrycksTest-international_law|5_2024-04-09T11-26-08.031314.parquet' - config_name: harness_hendrycksTest_jurisprudence_5 data_files: - split: 2024_04_09T11_07_43.510982 path: - '**/details_harness|hendrycksTest-jurisprudence|5_2024-04-09T11-07-43.510982.parquet' - split: 2024_04_09T11_26_08.031314 path: - '**/details_harness|hendrycksTest-jurisprudence|5_2024-04-09T11-26-08.031314.parquet' - split: latest path: - '**/details_harness|hendrycksTest-jurisprudence|5_2024-04-09T11-26-08.031314.parquet' - config_name: harness_hendrycksTest_logical_fallacies_5 data_files: - split: 2024_04_09T11_07_43.510982 path: - '**/details_harness|hendrycksTest-logical_fallacies|5_2024-04-09T11-07-43.510982.parquet' - split: 2024_04_09T11_26_08.031314 path: - '**/details_harness|hendrycksTest-logical_fallacies|5_2024-04-09T11-26-08.031314.parquet' - split: latest path: - '**/details_harness|hendrycksTest-logical_fallacies|5_2024-04-09T11-26-08.031314.parquet' - config_name: 
harness_hendrycksTest_machine_learning_5 data_files: - split: 2024_04_09T11_07_43.510982 path: - '**/details_harness|hendrycksTest-machine_learning|5_2024-04-09T11-07-43.510982.parquet' - split: 2024_04_09T11_26_08.031314 path: - '**/details_harness|hendrycksTest-machine_learning|5_2024-04-09T11-26-08.031314.parquet' - split: latest path: - '**/details_harness|hendrycksTest-machine_learning|5_2024-04-09T11-26-08.031314.parquet' - config_name: harness_hendrycksTest_management_5 data_files: - split: 2024_04_09T11_07_43.510982 path: - '**/details_harness|hendrycksTest-management|5_2024-04-09T11-07-43.510982.parquet' - split: 2024_04_09T11_26_08.031314 path: - '**/details_harness|hendrycksTest-management|5_2024-04-09T11-26-08.031314.parquet' - split: latest path: - '**/details_harness|hendrycksTest-management|5_2024-04-09T11-26-08.031314.parquet' - config_name: harness_hendrycksTest_marketing_5 data_files: - split: 2024_04_09T11_07_43.510982 path: - '**/details_harness|hendrycksTest-marketing|5_2024-04-09T11-07-43.510982.parquet' - split: 2024_04_09T11_26_08.031314 path: - '**/details_harness|hendrycksTest-marketing|5_2024-04-09T11-26-08.031314.parquet' - split: latest path: - '**/details_harness|hendrycksTest-marketing|5_2024-04-09T11-26-08.031314.parquet' - config_name: harness_hendrycksTest_medical_genetics_5 data_files: - split: 2024_04_09T11_07_43.510982 path: - '**/details_harness|hendrycksTest-medical_genetics|5_2024-04-09T11-07-43.510982.parquet' - split: 2024_04_09T11_26_08.031314 path: - '**/details_harness|hendrycksTest-medical_genetics|5_2024-04-09T11-26-08.031314.parquet' - split: latest path: - '**/details_harness|hendrycksTest-medical_genetics|5_2024-04-09T11-26-08.031314.parquet' - config_name: harness_hendrycksTest_miscellaneous_5 data_files: - split: 2024_04_09T11_07_43.510982 path: - '**/details_harness|hendrycksTest-miscellaneous|5_2024-04-09T11-07-43.510982.parquet' - split: 2024_04_09T11_26_08.031314 path: - 
'**/details_harness|hendrycksTest-miscellaneous|5_2024-04-09T11-26-08.031314.parquet' - split: latest path: - '**/details_harness|hendrycksTest-miscellaneous|5_2024-04-09T11-26-08.031314.parquet' - config_name: harness_hendrycksTest_moral_disputes_5 data_files: - split: 2024_04_09T11_07_43.510982 path: - '**/details_harness|hendrycksTest-moral_disputes|5_2024-04-09T11-07-43.510982.parquet' - split: 2024_04_09T11_26_08.031314 path: - '**/details_harness|hendrycksTest-moral_disputes|5_2024-04-09T11-26-08.031314.parquet' - split: latest path: - '**/details_harness|hendrycksTest-moral_disputes|5_2024-04-09T11-26-08.031314.parquet' - config_name: harness_hendrycksTest_moral_scenarios_5 data_files: - split: 2024_04_09T11_07_43.510982 path: - '**/details_harness|hendrycksTest-moral_scenarios|5_2024-04-09T11-07-43.510982.parquet' - split: 2024_04_09T11_26_08.031314 path: - '**/details_harness|hendrycksTest-moral_scenarios|5_2024-04-09T11-26-08.031314.parquet' - split: latest path: - '**/details_harness|hendrycksTest-moral_scenarios|5_2024-04-09T11-26-08.031314.parquet' - config_name: harness_hendrycksTest_nutrition_5 data_files: - split: 2024_04_09T11_07_43.510982 path: - '**/details_harness|hendrycksTest-nutrition|5_2024-04-09T11-07-43.510982.parquet' - split: 2024_04_09T11_26_08.031314 path: - '**/details_harness|hendrycksTest-nutrition|5_2024-04-09T11-26-08.031314.parquet' - split: latest path: - '**/details_harness|hendrycksTest-nutrition|5_2024-04-09T11-26-08.031314.parquet' - config_name: harness_hendrycksTest_philosophy_5 data_files: - split: 2024_04_09T11_07_43.510982 path: - '**/details_harness|hendrycksTest-philosophy|5_2024-04-09T11-07-43.510982.parquet' - split: 2024_04_09T11_26_08.031314 path: - '**/details_harness|hendrycksTest-philosophy|5_2024-04-09T11-26-08.031314.parquet' - split: latest path: - '**/details_harness|hendrycksTest-philosophy|5_2024-04-09T11-26-08.031314.parquet' - config_name: harness_hendrycksTest_prehistory_5 data_files: - split: 
2024_04_09T11_07_43.510982 path: - '**/details_harness|hendrycksTest-prehistory|5_2024-04-09T11-07-43.510982.parquet' - split: 2024_04_09T11_26_08.031314 path: - '**/details_harness|hendrycksTest-prehistory|5_2024-04-09T11-26-08.031314.parquet' - split: latest path: - '**/details_harness|hendrycksTest-prehistory|5_2024-04-09T11-26-08.031314.parquet' - config_name: harness_hendrycksTest_professional_accounting_5 data_files: - split: 2024_04_09T11_07_43.510982 path: - '**/details_harness|hendrycksTest-professional_accounting|5_2024-04-09T11-07-43.510982.parquet' - split: 2024_04_09T11_26_08.031314 path: - '**/details_harness|hendrycksTest-professional_accounting|5_2024-04-09T11-26-08.031314.parquet' - split: latest path: - '**/details_harness|hendrycksTest-professional_accounting|5_2024-04-09T11-26-08.031314.parquet' - config_name: harness_hendrycksTest_professional_law_5 data_files: - split: 2024_04_09T11_07_43.510982 path: - '**/details_harness|hendrycksTest-professional_law|5_2024-04-09T11-07-43.510982.parquet' - split: 2024_04_09T11_26_08.031314 path: - '**/details_harness|hendrycksTest-professional_law|5_2024-04-09T11-26-08.031314.parquet' - split: latest path: - '**/details_harness|hendrycksTest-professional_law|5_2024-04-09T11-26-08.031314.parquet' - config_name: harness_hendrycksTest_professional_medicine_5 data_files: - split: 2024_04_09T11_07_43.510982 path: - '**/details_harness|hendrycksTest-professional_medicine|5_2024-04-09T11-07-43.510982.parquet' - split: 2024_04_09T11_26_08.031314 path: - '**/details_harness|hendrycksTest-professional_medicine|5_2024-04-09T11-26-08.031314.parquet' - split: latest path: - '**/details_harness|hendrycksTest-professional_medicine|5_2024-04-09T11-26-08.031314.parquet' - config_name: harness_hendrycksTest_professional_psychology_5 data_files: - split: 2024_04_09T11_07_43.510982 path: - '**/details_harness|hendrycksTest-professional_psychology|5_2024-04-09T11-07-43.510982.parquet' - split: 2024_04_09T11_26_08.031314 path: - 
'**/details_harness|hendrycksTest-professional_psychology|5_2024-04-09T11-26-08.031314.parquet' - split: latest path: - '**/details_harness|hendrycksTest-professional_psychology|5_2024-04-09T11-26-08.031314.parquet' - config_name: harness_hendrycksTest_public_relations_5 data_files: - split: 2024_04_09T11_07_43.510982 path: - '**/details_harness|hendrycksTest-public_relations|5_2024-04-09T11-07-43.510982.parquet' - split: 2024_04_09T11_26_08.031314 path: - '**/details_harness|hendrycksTest-public_relations|5_2024-04-09T11-26-08.031314.parquet' - split: latest path: - '**/details_harness|hendrycksTest-public_relations|5_2024-04-09T11-26-08.031314.parquet' - config_name: harness_hendrycksTest_security_studies_5 data_files: - split: 2024_04_09T11_07_43.510982 path: - '**/details_harness|hendrycksTest-security_studies|5_2024-04-09T11-07-43.510982.parquet' - split: 2024_04_09T11_26_08.031314 path: - '**/details_harness|hendrycksTest-security_studies|5_2024-04-09T11-26-08.031314.parquet' - split: latest path: - '**/details_harness|hendrycksTest-security_studies|5_2024-04-09T11-26-08.031314.parquet' - config_name: harness_hendrycksTest_sociology_5 data_files: - split: 2024_04_09T11_07_43.510982 path: - '**/details_harness|hendrycksTest-sociology|5_2024-04-09T11-07-43.510982.parquet' - split: 2024_04_09T11_26_08.031314 path: - '**/details_harness|hendrycksTest-sociology|5_2024-04-09T11-26-08.031314.parquet' - split: latest path: - '**/details_harness|hendrycksTest-sociology|5_2024-04-09T11-26-08.031314.parquet' - config_name: harness_hendrycksTest_us_foreign_policy_5 data_files: - split: 2024_04_09T11_07_43.510982 path: - '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-04-09T11-07-43.510982.parquet' - split: 2024_04_09T11_26_08.031314 path: - '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-04-09T11-26-08.031314.parquet' - split: latest path: - '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-04-09T11-26-08.031314.parquet' - config_name: 
harness_hendrycksTest_virology_5 data_files: - split: 2024_04_09T11_07_43.510982 path: - '**/details_harness|hendrycksTest-virology|5_2024-04-09T11-07-43.510982.parquet' - split: 2024_04_09T11_26_08.031314 path: - '**/details_harness|hendrycksTest-virology|5_2024-04-09T11-26-08.031314.parquet' - split: latest path: - '**/details_harness|hendrycksTest-virology|5_2024-04-09T11-26-08.031314.parquet' - config_name: harness_hendrycksTest_world_religions_5 data_files: - split: 2024_04_09T11_07_43.510982 path: - '**/details_harness|hendrycksTest-world_religions|5_2024-04-09T11-07-43.510982.parquet' - split: 2024_04_09T11_26_08.031314 path: - '**/details_harness|hendrycksTest-world_religions|5_2024-04-09T11-26-08.031314.parquet' - split: latest path: - '**/details_harness|hendrycksTest-world_religions|5_2024-04-09T11-26-08.031314.parquet' - config_name: harness_truthfulqa_mc_0 data_files: - split: 2024_04_09T11_07_43.510982 path: - '**/details_harness|truthfulqa:mc|0_2024-04-09T11-07-43.510982.parquet' - split: 2024_04_09T11_26_08.031314 path: - '**/details_harness|truthfulqa:mc|0_2024-04-09T11-26-08.031314.parquet' - split: latest path: - '**/details_harness|truthfulqa:mc|0_2024-04-09T11-26-08.031314.parquet' - config_name: harness_winogrande_5 data_files: - split: 2024_04_09T11_07_43.510982 path: - '**/details_harness|winogrande|5_2024-04-09T11-07-43.510982.parquet' - split: 2024_04_09T11_26_08.031314 path: - '**/details_harness|winogrande|5_2024-04-09T11-26-08.031314.parquet' - split: latest path: - '**/details_harness|winogrande|5_2024-04-09T11-26-08.031314.parquet' - config_name: results data_files: - split: 2024_04_09T11_07_43.510982 path: - results_2024-04-09T11-07-43.510982.parquet - split: 2024_04_09T11_26_08.031314 path: - results_2024-04-09T11-26-08.031314.parquet - split: latest path: - results_2024-04-09T11-26-08.031314.parquet --- # Dataset Card for Evaluation run of lemon-mint/gemma-7b-openhermes-v0.80 <!-- Provide a quick summary of the dataset. 
--> Dataset automatically created during the evaluation run of model [lemon-mint/gemma-7b-openhermes-v0.80](https://huggingface.co/lemon-mint/gemma-7b-openhermes-v0.80) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard). The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks. The dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results. An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)). To load the details from a run, you can for instance do the following: ```python from datasets import load_dataset data = load_dataset("open-llm-leaderboard/details_lemon-mint__gemma-7b-openhermes-v0.80", "harness_winogrande_5", split="train") ``` ## Latest results These are the [latest results from run 2024-04-09T11:26:08.031314](https://huggingface.co/datasets/open-llm-leaderboard/details_lemon-mint__gemma-7b-openhermes-v0.80/blob/main/results_2024-04-09T11-26-08.031314.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks.
You find each in the results and the "latest" split for each eval): ```python { "all": { "acc": 0.5607727208540026, "acc_stderr": 0.03374009542716251, "acc_norm": 0.5646252525030766, "acc_norm_stderr": 0.034417724800141, "mc1": 0.2937576499388005, "mc1_stderr": 0.015945068581236614, "mc2": 0.4706193972805725, "mc2_stderr": 0.015617837881275841 }, "harness|arc:challenge|25": { "acc": 0.5179180887372014, "acc_stderr": 0.014602005585490976, "acc_norm": 0.5435153583617748, "acc_norm_stderr": 0.014555949760496442 }, "harness|hellaswag|10": { "acc": 0.5553674566819359, "acc_stderr": 0.004959094146471529, "acc_norm": 0.7369049990041824, "acc_norm_stderr": 0.004394136724173006 }, "harness|hendrycksTest-abstract_algebra|5": { "acc": 0.27, "acc_stderr": 0.04461960433384739, "acc_norm": 0.27, "acc_norm_stderr": 0.04461960433384739 }, "harness|hendrycksTest-anatomy|5": { "acc": 0.5037037037037037, "acc_stderr": 0.04319223625811331, "acc_norm": 0.5037037037037037, "acc_norm_stderr": 0.04319223625811331 }, "harness|hendrycksTest-astronomy|5": { "acc": 0.631578947368421, "acc_stderr": 0.03925523381052932, "acc_norm": 0.631578947368421, "acc_norm_stderr": 0.03925523381052932 }, "harness|hendrycksTest-business_ethics|5": { "acc": 0.57, "acc_stderr": 0.04975698519562427, "acc_norm": 0.57, "acc_norm_stderr": 0.04975698519562427 }, "harness|hendrycksTest-clinical_knowledge|5": { "acc": 0.5849056603773585, "acc_stderr": 0.03032594578928611, "acc_norm": 0.5849056603773585, "acc_norm_stderr": 0.03032594578928611 }, "harness|hendrycksTest-college_biology|5": { "acc": 0.6527777777777778, "acc_stderr": 0.039812405437178615, "acc_norm": 0.6527777777777778, "acc_norm_stderr": 0.039812405437178615 }, "harness|hendrycksTest-college_chemistry|5": { "acc": 0.38, "acc_stderr": 0.04878317312145632, "acc_norm": 0.38, "acc_norm_stderr": 0.04878317312145632 }, "harness|hendrycksTest-college_computer_science|5": { "acc": 0.39, "acc_stderr": 0.04902071300001974, "acc_norm": 0.39, "acc_norm_stderr": 
0.04902071300001974 }, "harness|hendrycksTest-college_mathematics|5": { "acc": 0.37, "acc_stderr": 0.04852365870939099, "acc_norm": 0.37, "acc_norm_stderr": 0.04852365870939099 }, "harness|hendrycksTest-college_medicine|5": { "acc": 0.5144508670520231, "acc_stderr": 0.03810871630454764, "acc_norm": 0.5144508670520231, "acc_norm_stderr": 0.03810871630454764 }, "harness|hendrycksTest-college_physics|5": { "acc": 0.30392156862745096, "acc_stderr": 0.045766654032077615, "acc_norm": 0.30392156862745096, "acc_norm_stderr": 0.045766654032077615 }, "harness|hendrycksTest-computer_security|5": { "acc": 0.75, "acc_stderr": 0.04351941398892446, "acc_norm": 0.75, "acc_norm_stderr": 0.04351941398892446 }, "harness|hendrycksTest-conceptual_physics|5": { "acc": 0.4553191489361702, "acc_stderr": 0.032555253593403555, "acc_norm": 0.4553191489361702, "acc_norm_stderr": 0.032555253593403555 }, "harness|hendrycksTest-econometrics|5": { "acc": 0.3684210526315789, "acc_stderr": 0.04537815354939392, "acc_norm": 0.3684210526315789, "acc_norm_stderr": 0.04537815354939392 }, "harness|hendrycksTest-electrical_engineering|5": { "acc": 0.5586206896551724, "acc_stderr": 0.04137931034482757, "acc_norm": 0.5586206896551724, "acc_norm_stderr": 0.04137931034482757 }, "harness|hendrycksTest-elementary_mathematics|5": { "acc": 0.4021164021164021, "acc_stderr": 0.025253032554997692, "acc_norm": 0.4021164021164021, "acc_norm_stderr": 0.025253032554997692 }, "harness|hendrycksTest-formal_logic|5": { "acc": 0.40476190476190477, "acc_stderr": 0.04390259265377562, "acc_norm": 0.40476190476190477, "acc_norm_stderr": 0.04390259265377562 }, "harness|hendrycksTest-global_facts|5": { "acc": 0.31, "acc_stderr": 0.04648231987117316, "acc_norm": 0.31, "acc_norm_stderr": 0.04648231987117316 }, "harness|hendrycksTest-high_school_biology|5": { "acc": 0.6741935483870968, "acc_stderr": 0.0266620105785671, "acc_norm": 0.6741935483870968, "acc_norm_stderr": 0.0266620105785671 }, 
"harness|hendrycksTest-high_school_chemistry|5": { "acc": 0.4482758620689655, "acc_stderr": 0.03499113137676744, "acc_norm": 0.4482758620689655, "acc_norm_stderr": 0.03499113137676744 }, "harness|hendrycksTest-high_school_computer_science|5": { "acc": 0.59, "acc_stderr": 0.04943110704237102, "acc_norm": 0.59, "acc_norm_stderr": 0.04943110704237102 }, "harness|hendrycksTest-high_school_european_history|5": { "acc": 0.7272727272727273, "acc_stderr": 0.03477691162163659, "acc_norm": 0.7272727272727273, "acc_norm_stderr": 0.03477691162163659 }, "harness|hendrycksTest-high_school_geography|5": { "acc": 0.7222222222222222, "acc_stderr": 0.03191178226713547, "acc_norm": 0.7222222222222222, "acc_norm_stderr": 0.03191178226713547 }, "harness|hendrycksTest-high_school_government_and_politics|5": { "acc": 0.8186528497409327, "acc_stderr": 0.027807032360686088, "acc_norm": 0.8186528497409327, "acc_norm_stderr": 0.027807032360686088 }, "harness|hendrycksTest-high_school_macroeconomics|5": { "acc": 0.5307692307692308, "acc_stderr": 0.025302958890850154, "acc_norm": 0.5307692307692308, "acc_norm_stderr": 0.025302958890850154 }, "harness|hendrycksTest-high_school_mathematics|5": { "acc": 0.3111111111111111, "acc_stderr": 0.028226446749683515, "acc_norm": 0.3111111111111111, "acc_norm_stderr": 0.028226446749683515 }, "harness|hendrycksTest-high_school_microeconomics|5": { "acc": 0.5756302521008403, "acc_stderr": 0.032104790510157764, "acc_norm": 0.5756302521008403, "acc_norm_stderr": 0.032104790510157764 }, "harness|hendrycksTest-high_school_physics|5": { "acc": 0.304635761589404, "acc_stderr": 0.03757949922943343, "acc_norm": 0.304635761589404, "acc_norm_stderr": 0.03757949922943343 }, "harness|hendrycksTest-high_school_psychology|5": { "acc": 0.7669724770642202, "acc_stderr": 0.018125669180861507, "acc_norm": 0.7669724770642202, "acc_norm_stderr": 0.018125669180861507 }, "harness|hendrycksTest-high_school_statistics|5": { "acc": 0.3888888888888889, "acc_stderr": 
0.033247089118091176, "acc_norm": 0.3888888888888889, "acc_norm_stderr": 0.033247089118091176 }, "harness|hendrycksTest-high_school_us_history|5": { "acc": 0.7009803921568627, "acc_stderr": 0.032133257173736156, "acc_norm": 0.7009803921568627, "acc_norm_stderr": 0.032133257173736156 }, "harness|hendrycksTest-high_school_world_history|5": { "acc": 0.70042194092827, "acc_stderr": 0.029818024749753095, "acc_norm": 0.70042194092827, "acc_norm_stderr": 0.029818024749753095 }, "harness|hendrycksTest-human_aging|5": { "acc": 0.6591928251121076, "acc_stderr": 0.03181149747055359, "acc_norm": 0.6591928251121076, "acc_norm_stderr": 0.03181149747055359 }, "harness|hendrycksTest-human_sexuality|5": { "acc": 0.648854961832061, "acc_stderr": 0.04186445163013751, "acc_norm": 0.648854961832061, "acc_norm_stderr": 0.04186445163013751 }, "harness|hendrycksTest-international_law|5": { "acc": 0.768595041322314, "acc_stderr": 0.03849856098794088, "acc_norm": 0.768595041322314, "acc_norm_stderr": 0.03849856098794088 }, "harness|hendrycksTest-jurisprudence|5": { "acc": 0.6388888888888888, "acc_stderr": 0.04643454608906275, "acc_norm": 0.6388888888888888, "acc_norm_stderr": 0.04643454608906275 }, "harness|hendrycksTest-logical_fallacies|5": { "acc": 0.656441717791411, "acc_stderr": 0.037311335196738925, "acc_norm": 0.656441717791411, "acc_norm_stderr": 0.037311335196738925 }, "harness|hendrycksTest-machine_learning|5": { "acc": 0.48214285714285715, "acc_stderr": 0.04742762361243011, "acc_norm": 0.48214285714285715, "acc_norm_stderr": 0.04742762361243011 }, "harness|hendrycksTest-management|5": { "acc": 0.6990291262135923, "acc_stderr": 0.04541609446503948, "acc_norm": 0.6990291262135923, "acc_norm_stderr": 0.04541609446503948 }, "harness|hendrycksTest-marketing|5": { "acc": 0.8376068376068376, "acc_stderr": 0.02416161812798774, "acc_norm": 0.8376068376068376, "acc_norm_stderr": 0.02416161812798774 }, "harness|hendrycksTest-medical_genetics|5": { "acc": 0.65, "acc_stderr": 
0.0479372485441102, "acc_norm": 0.65, "acc_norm_stderr": 0.0479372485441102 }, "harness|hendrycksTest-miscellaneous|5": { "acc": 0.7650063856960408, "acc_stderr": 0.015162024152278441, "acc_norm": 0.7650063856960408, "acc_norm_stderr": 0.015162024152278441 }, "harness|hendrycksTest-moral_disputes|5": { "acc": 0.5982658959537572, "acc_stderr": 0.026394104177643634, "acc_norm": 0.5982658959537572, "acc_norm_stderr": 0.026394104177643634 }, "harness|hendrycksTest-moral_scenarios|5": { "acc": 0.2335195530726257, "acc_stderr": 0.014149575348976273, "acc_norm": 0.2335195530726257, "acc_norm_stderr": 0.014149575348976273 }, "harness|hendrycksTest-nutrition|5": { "acc": 0.5980392156862745, "acc_stderr": 0.028074158947600656, "acc_norm": 0.5980392156862745, "acc_norm_stderr": 0.028074158947600656 }, "harness|hendrycksTest-philosophy|5": { "acc": 0.594855305466238, "acc_stderr": 0.02788238379132596, "acc_norm": 0.594855305466238, "acc_norm_stderr": 0.02788238379132596 }, "harness|hendrycksTest-prehistory|5": { "acc": 0.6296296296296297, "acc_stderr": 0.02686949074481525, "acc_norm": 0.6296296296296297, "acc_norm_stderr": 0.02686949074481525 }, "harness|hendrycksTest-professional_accounting|5": { "acc": 0.40425531914893614, "acc_stderr": 0.02927553215970472, "acc_norm": 0.40425531914893614, "acc_norm_stderr": 0.02927553215970472 }, "harness|hendrycksTest-professional_law|5": { "acc": 0.4282920469361147, "acc_stderr": 0.01263822388031316, "acc_norm": 0.4282920469361147, "acc_norm_stderr": 0.01263822388031316 }, "harness|hendrycksTest-professional_medicine|5": { "acc": 0.4522058823529412, "acc_stderr": 0.030233758551596445, "acc_norm": 0.4522058823529412, "acc_norm_stderr": 0.030233758551596445 }, "harness|hendrycksTest-professional_psychology|5": { "acc": 0.5751633986928104, "acc_stderr": 0.019997973035458333, "acc_norm": 0.5751633986928104, "acc_norm_stderr": 0.019997973035458333 }, "harness|hendrycksTest-public_relations|5": { "acc": 0.6272727272727273, "acc_stderr": 
0.04631381319425465, "acc_norm": 0.6272727272727273, "acc_norm_stderr": 0.04631381319425465 }, "harness|hendrycksTest-security_studies|5": { "acc": 0.6775510204081633, "acc_stderr": 0.029923100563683906, "acc_norm": 0.6775510204081633, "acc_norm_stderr": 0.029923100563683906 }, "harness|hendrycksTest-sociology|5": { "acc": 0.7810945273631841, "acc_stderr": 0.029239174636647, "acc_norm": 0.7810945273631841, "acc_norm_stderr": 0.029239174636647 }, "harness|hendrycksTest-us_foreign_policy|5": { "acc": 0.71, "acc_stderr": 0.045604802157206845, "acc_norm": 0.71, "acc_norm_stderr": 0.045604802157206845 }, "harness|hendrycksTest-virology|5": { "acc": 0.46987951807228917, "acc_stderr": 0.03885425420866767, "acc_norm": 0.46987951807228917, "acc_norm_stderr": 0.03885425420866767 }, "harness|hendrycksTest-world_religions|5": { "acc": 0.7660818713450293, "acc_stderr": 0.03246721765117826, "acc_norm": 0.7660818713450293, "acc_norm_stderr": 0.03246721765117826 }, "harness|truthfulqa:mc|0": { "mc1": 0.2937576499388005, "mc1_stderr": 0.015945068581236614, "mc2": 0.4706193972805725, "mc2_stderr": 0.015617837881275841 }, "harness|winogrande|5": { "acc": 0.6677190213101816, "acc_stderr": 0.013238316554236526 }, "harness|gsm8k|5": { "acc": 0.4336618650492798, "acc_stderr": 0.013650728047064686 } } ``` ## Dataset Details ### Dataset Description <!-- Provide a longer summary of what this dataset is. --> - **Curated by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] ### Dataset Sources [optional] <!-- Provide the basic links for the dataset. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the dataset is intended to be used. 
--> ### Direct Use <!-- This section describes suitable use cases for the dataset. --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. --> [More Information Needed] ## Dataset Structure <!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. --> [More Information Needed] ## Dataset Creation ### Curation Rationale <!-- Motivation for the creation of this dataset. --> [More Information Needed] ### Source Data <!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). --> #### Data Collection and Processing <!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. --> [More Information Needed] #### Who are the source data producers? <!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. --> [More Information Needed] ### Annotations [optional] <!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. --> #### Annotation process <!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. --> [More Information Needed] #### Who are the annotators? <!-- This section describes the people or systems who created the annotations. 
--> [More Information Needed] #### Personal and Sensitive Information <!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations. ## Citation [optional] <!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Dataset Card Authors [optional] [More Information Needed] ## Dataset Card Contact [More Information Needed]
open-llm-leaderboard/details_HiTZ__GoLLIE-7B
--- pretty_name: Evaluation run of HiTZ/GoLLIE-7B dataset_summary: "Dataset automatically created during the evaluation run of model\ \ [HiTZ/GoLLIE-7B](https://huggingface.co/HiTZ/GoLLIE-7B) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\ \nThe dataset is composed of 1 configuration, each one coresponding to one of the\ \ evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be\ \ found as a specific split in each configuration, the split being named using the\ \ timestamp of the run.The \"train\" split is always pointing to the latest results.\n\ \nAn additional configuration \"results\" store all the aggregated results of the\ \ run (and is used to compute and display the aggregated metrics on the [Open LLM\ \ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\ \nTo load the details from a run, you can for instance do the following:\n```python\n\ from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_HiTZ__GoLLIE-7B\"\ ,\n\t\"harness_gsm8k_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese\ \ are the [latest results from run 2023-12-03T17:27:35.795751](https://huggingface.co/datasets/open-llm-leaderboard/details_HiTZ__GoLLIE-7B/blob/main/results_2023-12-03T17-27-35.795751.json)(note\ \ that their might be results for other tasks in the repos if successive evals didn't\ \ cover the same tasks. 
You find each in the results and the \"latest\" split for\ \ each eval):\n\n```python\n{\n    \"all\": {\n        \"acc\": 0.032600454890068235,\n\ \        \"acc_stderr\": 0.004891669021939591\n    },\n    \"harness|gsm8k|5\":\ \ {\n        \"acc\": 0.032600454890068235,\n        \"acc_stderr\": 0.004891669021939591\n\ \    }\n}\n```" repo_url: https://huggingface.co/HiTZ/GoLLIE-7B leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard point_of_contact: clementine@hf.co configs: - config_name: harness_gsm8k_5 data_files: - split: 2023_12_03T17_27_35.795751 path: - '**/details_harness|gsm8k|5_2023-12-03T17-27-35.795751.parquet' - split: latest path: - '**/details_harness|gsm8k|5_2023-12-03T17-27-35.795751.parquet' - config_name: results data_files: - split: 2023_12_03T17_27_35.795751 path: - results_2023-12-03T17-27-35.795751.parquet - split: latest path: - results_2023-12-03T17-27-35.795751.parquet --- # Dataset Card for Evaluation run of HiTZ/GoLLIE-7B ## Dataset Description - **Homepage:** - **Repository:** https://huggingface.co/HiTZ/GoLLIE-7B - **Paper:** - **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard - **Point of Contact:** clementine@hf.co ### Dataset Summary Dataset automatically created during the evaluation run of model [HiTZ/GoLLIE-7B](https://huggingface.co/HiTZ/GoLLIE-7B) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard). The dataset is composed of 1 configuration, each one corresponding to one of the evaluated tasks. The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results. An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following: ```python from datasets import load_dataset data = load_dataset("open-llm-leaderboard/details_HiTZ__GoLLIE-7B", "harness_gsm8k_5", split="train") ``` ## Latest results These are the [latest results from run 2023-12-03T17:27:35.795751](https://huggingface.co/datasets/open-llm-leaderboard/details_HiTZ__GoLLIE-7B/blob/main/results_2023-12-03T17-27-35.795751.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval): ```python { "all": { "acc": 0.032600454890068235, "acc_stderr": 0.004891669021939591 }, "harness|gsm8k|5": { "acc": 0.032600454890068235, "acc_stderr": 0.004891669021939591 } } ``` ### Supported Tasks and Leaderboards [More Information Needed] ### Languages [More Information Needed] ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information [More Information Needed] ### Contributions [More Information Needed]
meccanicamente/guanaco-llama2-oracolo-1k
--- dataset_info: features: - name: text dtype: string splits: - name: train num_bytes: 1654448 num_examples: 1000 download_size: 966692 dataset_size: 1654448 configs: - config_name: default data_files: - split: train path: data/train-* ---
MASTERREDE/vozjose502
--- license: openrail ---
jxm/trec
--- configs: - config_name: default data_files: - split: train path: data/train-* - split: test path: data/test-* - split: dev path: data/dev-* dataset_info: features: - name: sentence dtype: string - name: label dtype: int64 splits: - name: train num_bytes: 342265 num_examples: 5452 - name: test num_bytes: 24003 num_examples: 500 - name: dev num_bytes: 12305 num_examples: 256 download_size: 228995 dataset_size: 378573 --- # Dataset Card for "trec" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
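The card above records only the raw schema (a `sentence` string, an integer `label`, and 5452/500/256 train/test/dev examples). TREC question classification conventionally uses six coarse answer-type classes, so a decoding helper might look like the sketch below; note that the class list and its index order are assumptions about this repack, not read from the data:

```python
# Six coarse TREC question classes. The index order here is an
# assumption about this repack, not verified against the dataset.
COARSE_LABELS = ["ABBR", "ENTY", "DESC", "HUM", "LOC", "NUM"]

def label_name(label: int) -> str:
    """Map an integer label from the card's schema to a class name."""
    return COARSE_LABELS[label]

# Hypothetical example row matching the card's schema.
example = {"sentence": "What is the capital of France ?", "label": 4}
assert label_name(example["label"]) == "LOC"
```

Under this (assumed) ordering, label 4 decodes to a location-type question.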
jacobbieker/dwd-icon-global
--- license: mit ---
amallia/DeepImpactMSMarcoTrainingData
--- license: apache-2.0 ---
honzatoegel/lola-gramma-de-en
--- license: apache-2.0 language: - de - en tags: - Languages - Gramma size_categories: - n<1K --- # Dataset Card This grammar correction dataset is still a work in progress! Do not use it for any serious LLM task - see Issues below. ## Dataset summary This dataset is used to finetune LLMs for German grammar correction for English speakers. ### Input An input is a German sentence which potentially contains grammatical errors. ### Output The output is the corrected sentence with minimal adjustments, plus a list of all grammar corrections and explanations. ### Dataset creation The incorrect input sentences were created manually; the corrections were pre-generated by GPT and then manually corrected. The focus was on explainable grammar rules and high-quality data. ### Issues The main issue is the small number of data points; the trained LLMs do not generalize well. The aim is to define various categories of grammatical errors and then add more examples through data augmentation. #### Proposed grammar error categories (TODO) - Punctuation - e.g. missing comma, comma in the wrong position - Wrong word order - Missing clause words (missing subject, object, verb, ...) - Additional clause words which should not be used - Misspellings & typos - Conjugation of verbs - wrong person, wrong tense - Declension of nouns+articles - wrong article, wrong case - Wrong prepositions/adjectives for the given clause meaning Each category should have at least 15-20 datapoints for training and 5 for evaluation.
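To make the input/output format described above concrete, here is a minimal, hypothetical datapoint sketched in Python; the field names and nesting are illustrative assumptions, not the dataset's actual schema:

```python
# Hypothetical grammar-correction datapoint. The keys ("input",
# "corrected", "corrections", ...) are illustrative assumptions,
# not the dataset's actual column names.
datapoint = {
    # German sentence with a deliberate article-agreement error
    "input": "Ich habe ein Frage.",
    "output": {
        # Minimally adjusted, corrected sentence
        "corrected": "Ich habe eine Frage.",
        # One entry per applied grammar rule
        "corrections": [
            {
                "error": "ein Frage",
                "correction": "eine Frage",
                "explanation": "'Frage' is feminine, so the accusative "
                               "article must be 'eine', not 'ein'.",
            }
        ],
    },
}

assert datapoint["output"]["corrected"] == "Ich habe eine Frage."
```

Each correction entry pairs the erroneous span with its fix and an explainable rule, matching the stated focus of the dataset.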
HydraLM/partitioned_v2_standardized_02
--- dataset_info: features: - name: message dtype: string - name: message_type dtype: string - name: message_id dtype: int64 - name: conversation_id dtype: int64 - name: dataset_id dtype: string - name: unique_conversation_id dtype: string splits: - name: train num_bytes: 61392071.613857076 num_examples: 120104 download_size: 30521550 dataset_size: 61392071.613857076 configs: - config_name: default data_files: - split: train path: data/train-* --- # Dataset Card for "partitioned_v2_standardized_02" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
liuyanchen1015/MULTI_VALUE_stsb_that_infinitival_subclause
--- dataset_info: features: - name: sentence1 dtype: string - name: sentence2 dtype: string - name: score dtype: float64 - name: idx dtype: int64 - name: value_score dtype: int64 splits: - name: dev num_bytes: 1404 num_examples: 6 - name: test num_bytes: 1191 num_examples: 6 - name: train num_bytes: 3976 num_examples: 20 download_size: 14435 dataset_size: 6571 --- # Dataset Card for "MULTI_VALUE_stsb_that_infinitival_subclause" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
open-llm-leaderboard/details_Sao10K__Chat-Stheno-L2-13B
--- pretty_name: Evaluation run of Sao10K/Chat-Stheno-L2-13B dataset_summary: "Dataset automatically created during the evaluation run of model\ \ [Sao10K/Chat-Stheno-L2-13B](https://huggingface.co/Sao10K/Chat-Stheno-L2-13B)\ \ on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\ \nThe dataset is composed of 3 configuration, each one coresponding to one of the\ \ evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be\ \ found as a specific split in each configuration, the split being named using the\ \ timestamp of the run.The \"train\" split is always pointing to the latest results.\n\ \nAn additional configuration \"results\" store all the aggregated results of the\ \ run (and is used to compute and display the aggregated metrics on the [Open LLM\ \ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\ \nTo load the details from a run, you can for instance do the following:\n```python\n\ from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_Sao10K__Chat-Stheno-L2-13B_public\"\ ,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\ These are the [latest results from run 2023-11-04T16:25:41.456974](https://huggingface.co/datasets/open-llm-leaderboard/details_Sao10K__Chat-Stheno-L2-13B_public/blob/main/results_2023-11-04T16-25-41.456974.json)(note\ \ that their might be results for other tasks in the repos if successive evals didn't\ \ cover the same tasks. 
You find each in the results and the \"latest\" split for\ \ each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.053062080536912755,\n\ \ \"em_stderr\": 0.00229558058219171,\n \"f1\": 0.11251887583892609,\n\ \ \"f1_stderr\": 0.002552941756424483,\n \"acc\": 0.45079414279670627,\n\ \ \"acc_stderr\": 0.010942596493290702\n },\n \"harness|drop|3\": {\n\ \ \"em\": 0.053062080536912755,\n \"em_stderr\": 0.00229558058219171,\n\ \ \"f1\": 0.11251887583892609,\n \"f1_stderr\": 0.002552941756424483\n\ \ },\n \"harness|gsm8k|5\": {\n \"acc\": 0.14783927217589082,\n \ \ \"acc_stderr\": 0.009776827679143889\n },\n \"harness|winogrande|5\"\ : {\n \"acc\": 0.7537490134175217,\n \"acc_stderr\": 0.012108365307437518\n\ \ }\n}\n```" repo_url: https://huggingface.co/Sao10K/Chat-Stheno-L2-13B leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard point_of_contact: clementine@hf.co configs: - config_name: harness_drop_3 data_files: - split: 2023_11_04T16_25_41.456974 path: - '**/details_harness|drop|3_2023-11-04T16-25-41.456974.parquet' - split: latest path: - '**/details_harness|drop|3_2023-11-04T16-25-41.456974.parquet' - config_name: harness_gsm8k_5 data_files: - split: 2023_11_04T16_25_41.456974 path: - '**/details_harness|gsm8k|5_2023-11-04T16-25-41.456974.parquet' - split: latest path: - '**/details_harness|gsm8k|5_2023-11-04T16-25-41.456974.parquet' - config_name: harness_winogrande_5 data_files: - split: 2023_11_04T16_25_41.456974 path: - '**/details_harness|winogrande|5_2023-11-04T16-25-41.456974.parquet' - split: latest path: - '**/details_harness|winogrande|5_2023-11-04T16-25-41.456974.parquet' - config_name: results data_files: - split: 2023_11_04T16_25_41.456974 path: - results_2023-11-04T16-25-41.456974.parquet - split: latest path: - results_2023-11-04T16-25-41.456974.parquet --- # Dataset Card for Evaluation run of Sao10K/Chat-Stheno-L2-13B ## Dataset Description - **Homepage:** - **Repository:** 
https://huggingface.co/Sao10K/Chat-Stheno-L2-13B - **Paper:** - **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard - **Point of Contact:** clementine@hf.co ### Dataset Summary Dataset automatically created during the evaluation run of model [Sao10K/Chat-Stheno-L2-13B](https://huggingface.co/Sao10K/Chat-Stheno-L2-13B) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard). The dataset is composed of 3 configurations, each one corresponding to one of the evaluated tasks. The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results. An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)). To load the details from a run, you can for instance do the following: ```python from datasets import load_dataset data = load_dataset("open-llm-leaderboard/details_Sao10K__Chat-Stheno-L2-13B_public", "harness_winogrande_5", split="train") ``` ## Latest results These are the [latest results from run 2023-11-04T16:25:41.456974](https://huggingface.co/datasets/open-llm-leaderboard/details_Sao10K__Chat-Stheno-L2-13B_public/blob/main/results_2023-11-04T16-25-41.456974.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks.
You find each in the results and the "latest" split for each eval): ```python { "all": { "em": 0.053062080536912755, "em_stderr": 0.00229558058219171, "f1": 0.11251887583892609, "f1_stderr": 0.002552941756424483, "acc": 0.45079414279670627, "acc_stderr": 0.010942596493290702 }, "harness|drop|3": { "em": 0.053062080536912755, "em_stderr": 0.00229558058219171, "f1": 0.11251887583892609, "f1_stderr": 0.002552941756424483 }, "harness|gsm8k|5": { "acc": 0.14783927217589082, "acc_stderr": 0.009776827679143889 }, "harness|winogrande|5": { "acc": 0.7537490134175217, "acc_stderr": 0.012108365307437518 } } ``` ### Supported Tasks and Leaderboards [More Information Needed] ### Languages [More Information Needed] ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information [More Information Needed] ### Contributions [More Information Needed]
liuyanchen1015/MULTI_VALUE_sst2_clause_final_really_but
--- dataset_info: features: - name: sentence dtype: string - name: label dtype: int64 - name: idx dtype: int64 - name: score dtype: int64 splits: - name: train num_bytes: 4089 num_examples: 38 download_size: 3178 dataset_size: 4089 --- # Dataset Card for "MULTI_VALUE_sst2_clause_final_really_but" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
ouvic215/test_dataset-0212
--- dataset_info: features: - name: mask_image dtype: image - name: text dtype: string - name: image dtype: image splits: - name: train num_bytes: 147332332.0 num_examples: 1588 download_size: 146499523 dataset_size: 147332332.0 --- # Dataset Card for "test_dataset-0212" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
assafm/odontil_ds_001
--- configs: - config_name: default data_files: - split: train path: data/train-* dataset_info: features: - name: text dtype: string splits: - name: train num_bytes: 11466 num_examples: 90 download_size: 6305 dataset_size: 11466 --- # Dataset Card for "odontil_ds_001" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
arthurmluz/temario_data-xlsum_gptextsum2_results
--- dataset_info: features: - name: id dtype: string - name: text dtype: string - name: summary dtype: string - name: gen_summary dtype: string - name: rouge struct: - name: rouge1 dtype: float64 - name: rouge2 dtype: float64 - name: rougeL dtype: float64 - name: rougeLsum dtype: float64 - name: bert struct: - name: f1 sequence: float64 - name: hashcode dtype: string - name: precision sequence: float64 - name: recall sequence: float64 - name: moverScore dtype: float64 splits: - name: validation num_bytes: 228776 num_examples: 25 download_size: 181580 dataset_size: 228776 configs: - config_name: default data_files: - split: validation path: data/validation-* --- # Dataset Card for "temario_data-xlsum_gptextsum2_results" rouge= {'rouge1': 0.41527671599026306, 'rouge2': 0.15216375743250313, 'rougeL': 0.2336496143136067, 'rougeLsum': 0.2336496143136067} bert= {'precision': 0.7253225016593933, 'recall': 0.7107182025909424, 'f1': 0.7176165866851807} mover = 0.6200280069222645
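The `bert` feature above stores one precision/recall/f1 list per example, while the `bert=` line at the bottom reports single corpus-level numbers; presumably those scalars are the means of the per-example lists. A minimal sketch of that aggregation, with made-up per-example values for illustration:

```python
# Hypothetical per-example BERTScore f1 values -- the real ones live in the
# dataset's `bert.f1` sequence, one list entry per generated summary.
per_example_f1 = [0.70, 0.72, 0.731]

# Corpus-level score as a plain mean of the per-example scores.
corpus_f1 = sum(per_example_f1) / len(per_example_f1)
print(round(corpus_f1, 3))  # 0.717
```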
DanFosing/public-domain-poetry
---
task_categories:
- text-generation
pretty_name: public-domain-poetry
language:
- en
size_categories:
- 10K<n<100K
license: cc0-1.0
---

# Overview

This dataset is a collection of approximately 38,500 poems from https://www.public-domain-poetry.com/.

## Language

The language of this dataset is English.

## License

All data in this dataset is in the public domain, which means you should be able to use it for anything you want, as long as you aren't breaking any laws in the process.
CyberHarem/korwa_granbluefantasy
---
license: mit
task_categories:
- text-to-image
tags:
- art
- not-for-all-audiences
size_categories:
- n<1K
---

# Dataset of korwa/コルワ (Granblue Fantasy)

This is the dataset of korwa/コルワ (Granblue Fantasy), containing 282 images and their tags.

The core tags of this character are `long_hair, animal_ears, bangs, breasts, blue_eyes, blunt_bangs, hair_ornament, large_breasts, medium_breasts`, which are pruned in this dataset.

Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).

## List of Packages

| Name | Images | Size | Download | Type | Description |
|:-----|-------:|:-----|:---------|:-----|:------------|
| raw | 282 | 349.00 MiB | [Download](https://huggingface.co/datasets/CyberHarem/korwa_granbluefantasy/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 282 | 220.37 MiB | [Download](https://huggingface.co/datasets/CyberHarem/korwa_granbluefantasy/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 654 | 444.49 MiB | [Download](https://huggingface.co/datasets/CyberHarem/korwa_granbluefantasy/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 282 | 317.46 MiB | [Download](https://huggingface.co/datasets/CyberHarem/korwa_granbluefantasy/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. 
|
| stage3-p480-1200 | 654 | 588.30 MiB | [Download](https://huggingface.co/datasets/CyberHarem/korwa_granbluefantasy/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |

### Load Raw Dataset with Waifuc

We provide the raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need it, just run the following code:

```python
import os
import zipfile

from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource

# download raw archive file
zip_file = hf_hub_download(
    repo_id='CyberHarem/korwa_granbluefantasy',
    repo_type='dataset',
    filename='dataset-raw.zip',
)

# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
    zf.extractall(dataset_dir)

# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
    print(item.image, item.meta['filename'], item.meta['tags'])
```

## List of Clusters

List of tag clustering results; some outfits may be mined here.
### Raw Text Version

| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:----|:----|:----|:----|:----|:-----|
| 0 | 19 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | 1girl, bare_shoulders, dress, elbow_gloves, erune, solo, looking_at_viewer, mismatched_legwear, thighhighs, smile, quill, white_gloves, white_background, simple_background, blush, cat_ears, sitting, open_mouth |
| 1 | 6 | ![](samples/1/clu1-sample0.png) | ![](samples/1/clu1-sample1.png) | ![](samples/1/clu1-sample2.png) | ![](samples/1/clu1-sample3.png) | ![](samples/1/clu1-sample4.png) | 1girl, black_jacket, erune, looking_at_viewer, open_jacket, ribbed_dress, smile, solo, thighhighs, belt, long_sleeves, mismatched_legwear, quill, simple_background, white_background, blush, parted_lips, school_uniform, crossed_legs, feathers, full_body, grey_eyes, holding, sitting, skirt |
| 2 | 14 | ![](samples/2/clu2-sample0.png) | ![](samples/2/clu2-sample1.png) | ![](samples/2/clu2-sample2.png) | ![](samples/2/clu2-sample3.png) | ![](samples/2/clu2-sample4.png) | 1girl, bare_shoulders, erune, official_alternate_costume, smile, solo, cleavage, looking_at_viewer, simple_background, hair_flower, parted_lips, white_background, white_bikini, navel, blush, collarbone, very_long_hair, bracelet |

### Table Version

| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | bare_shoulders | dress | elbow_gloves | erune | solo | looking_at_viewer | mismatched_legwear | thighhighs | smile | quill | white_gloves | white_background | simple_background | blush | cat_ears | sitting | open_mouth | black_jacket | open_jacket | ribbed_dress | belt | long_sleeves | parted_lips | school_uniform | crossed_legs | feathers | full_body | grey_eyes | holding | skirt | official_alternate_costume | cleavage | hair_flower | white_bikini | navel | collarbone | very_long_hair | bracelet |
|----:|----------:|:----|:----|:----|:----|:----|:----|:----|:----|:----|:----|:----|:----|:----|:----|:----|:----|:----|:----|:----|:----|:----|:----|:----|:----|:----|:----|:----|:----|:----|:----|:----|:----|:----|:----|:----|:----|:----|:----|:----|:----|:----|:----|:----|:----|
| 0 | 19 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | |
| 1 | 6 | ![](samples/1/clu1-sample0.png) | ![](samples/1/clu1-sample1.png) | ![](samples/1/clu1-sample2.png) | ![](samples/1/clu1-sample3.png) | ![](samples/1/clu1-sample4.png) | X | | | | X | X | X | X | X | X | X | | X | X | X | | X | | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | |
| 2 | 14 | ![](samples/2/clu2-sample0.png) | ![](samples/2/clu2-sample1.png) | ![](samples/2/clu2-sample2.png) | ![](samples/2/clu2-sample3.png) | ![](samples/2/clu2-sample4.png) | X | X | | | X | X | X | | | X | | | X | X | X | | | | | | | | | X | | | | | | | | X | X | X | X | X | X | X | X |
arnavmahapatra/fruit-detection-dataset
--- license: cc-by-4.0 ---
frncscp/patacon-730.rar
--- configs: - config_name: default data_files: - split: train path: data/train-* - split: validation path: data/validation-* - split: test path: data/test-* dataset_info: features: - name: image dtype: image - name: label dtype: class_label: names: '0': Patacon-False '1': Patacon-True - name: index dtype: int64 splits: - name: train num_bytes: 798131556.0 num_examples: 874 - name: validation num_bytes: 131098374.0 num_examples: 143 - name: test num_bytes: 403631748.0 num_examples: 442 download_size: 929557514 dataset_size: 1332861678.0 --- # Dataset Card for "patacon-730.rar" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
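As a quick sketch, the split sizes declared in the metadata above can be sanity-checked against the total example count; they work out to roughly a 60/10/30 train/validation/test split:

```python
# Split sizes as declared in the dataset_info metadata above.
splits = {"train": 874, "validation": 143, "test": 442}

total = sum(splits.values())
fractions = {name: round(count / total, 3) for name, count in splits.items()}
print(total, fractions)  # 1459 {'train': 0.599, 'validation': 0.098, 'test': 0.303}
```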