lorinma/NL2SQL_zh
---
language:
- zh
---
This dataset merges three Chinese NL2SQL datasets: Zhuiyi Technology's NL2SQL, Westlake University's Chinese translation of CSpider, and Baidu's DuSQL. The data was roughly cleaned and converted to the Alpaca instruction format. The prompt template (translated from the Chinese) is:

> Suppose you are a database SQL expert. Below I will give you information about a MySQL database; please generate the corresponding SQL statement for the question. The current time is 2023. Use the format: {'sql': sql statement}. The MySQL database structure is as follows:\n{table name(column names...)}\n where:\n{primary/foreign-key relations between tables}\n For the query "{question}", give the corresponding SQL statement in the required format, without any explanation.

After processing, DuSQL contributes 25,004 examples; NL2SQL contributes 45,919 (note that its table names are garbled); CSpider contributes 7,786 (note that its databases are in English while the questions are in Chinese). The final file contains 78,706 examples in total. A sample:

```
{
    "instruction": "假设你是一个数据库SQL专家,下面我会给出一个MySQL数据库的信息,请根据问题,帮我生成相应的SQL语句。当前时间为2023年。",
    "input": "MySQL数据库数据库结构如下:Table_f367fd66453d11e9bdc9f40f24344a08(序号, 检验报告号, 商品名称, 标称生产者名称, 标称商标, 型号(货号))。对于问题:“我想知道纯琦服饰生产的呢子风衣的检验结果如何,麻烦给我它的报告号”,给出相应的SQL语句,不进行任何解释。",
    "output": "SELECT 检验报告号 WHERE 标称生产者名称 == \"纯琦服饰有限公司\" and 商品名称 == \"呢子风衣\""
},
```
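A minimal sketch of how the `input` field above could be assembled from a schema and a question (the helper below is illustrative, not part of the dataset's released tooling):

```python
def build_input(tables: dict[str, list[str]], question: str) -> str:
    """Assemble an Alpaca-style NL2SQL "input" field (illustrative helper)."""
    # Render each table as 表名(字段名, ...) as in the card's sample
    schema = "".join(f"{name}({', '.join(cols)})" for name, cols in tables.items())
    return (
        f"MySQL数据库数据库结构如下:{schema}。"
        f"对于问题:“{question}”,给出相应的SQL语句,不进行任何解释。"
    )

field = build_input(
    {"Table_f367fd66453d11e9bdc9f40f24344a08": ["序号", "检验报告号", "商品名称"]},
    "我想知道呢子风衣的报告号",
)
```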
kkar0/autotrain-data-size
---
task_categories:
- image-classification
---

# AutoTrain Dataset for project: size

## Dataset Description

This dataset has been automatically processed by AutoTrain for project size.

### Languages

The BCP-47 code for the dataset's language is unk.

## Dataset Structure

### Data Instances

A sample from this dataset looks as follows:

```json
[
  {
    "image": "<990x990 RGB PIL image>",
    "target": 0
  },
  {
    "image": "<966x990 RGB PIL image>",
    "target": 0
  }
]
```

### Dataset Fields

The dataset has the following fields (also called "features"):

```json
{
  "image": "Image(decode=True, id=None)",
  "target": "ClassLabel(names=['Large', 'Medium', 'Mosaic', 'Small'], id=None)"
}
```

### Dataset Splits

This dataset is split into a train and a validation split. The split sizes are as follows:

| Split name | Num samples |
| ------------ | ------------------- |
| train | 212 |
| valid | 54 |
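Since `target` is stored as an integer index into the `ClassLabel` names listed above, decoding a prediction is a simple list lookup — a minimal sketch:

```python
# Class names exactly as declared in the dataset's ClassLabel feature
SIZE_LABELS = ["Large", "Medium", "Mosaic", "Small"]

def decode_target(target: int) -> str:
    """Map an integer class index back to its human-readable label."""
    return SIZE_LABELS[target]
```

So the two samples shown above (`target: 0`) are both labelled `Large`.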
myt517/GID_benchmark
--- license: apache-2.0 ---
Wrathless/Testing
--- license: apache-2.0 ---
thu-coai/SafetyBench
---
license: mit
configs:
- config_name: test
  data_files:
  - split: zh
    path: "test_zh.json"
  - split: en
    path: "test_en.json"
  - split: zh_subset
    path: "test_zh_subset.json"
- config_name: dev
  data_files:
  - split: zh
    path: "dev_zh.json"
  - split: en
    path: "dev_en.json"
---

SafetyBench is a comprehensive benchmark for evaluating the safety of LLMs, comprising 11,435 diverse multiple-choice questions spanning 7 distinct categories of safety concerns. Notably, SafetyBench incorporates both Chinese and English data, facilitating evaluation in both languages. Please visit our [GitHub](https://github.com/thu-coai/SafetyBench) and [website](https://llmbench.ai/safety) or check our [paper](https://arxiv.org/abs/2309.07045) for more details.

We release three different test sets: a Chinese test set (`test_zh.json`), an English test set (`test_en.json`), and a Chinese subset test set (`test_zh_subset.json`). The Chinese subset removes samples containing highly sensitive keywords, so that API-based Chinese LLMs with strict filtering mechanisms can be tested. We also release 5 examples for each of the 7 safety categories in both Chinese (`dev_zh.json`) and English (`dev_en.json`) to enable few-shot evaluation. Labels for the test sets are not released; users are required to submit their results to automatically obtain test accuracy. [How to submit?](https://github.com/thu-coai/SafetyBench#how-to-submit)

## Load the Data

```python
from datasets import load_dataset

test_dataset = load_dataset("thu-coai/SafetyBench", "test")
dev_dataset = load_dataset("thu-coai/SafetyBench", "dev")
```

More details about loading the data and evaluating LLMs can be found at our [GitHub repository](https://github.com/thu-coai/SafetyBench).

Please cite our paper if you use our dataset.
```
@article{zhang2023safetybench,
  title={SafetyBench: Evaluating the Safety of Large Language Models with Multiple Choice Questions},
  author={Zhexin Zhang and Leqi Lei and Lindong Wu and Rui Sun and Yongkang Huang and Chong Long and Xiao Liu and Xuanyu Lei and Jie Tang and Minlie Huang},
  journal={arXiv preprint arXiv:2309.07045},
  year={2023}
}
```
DJFelipeBR/vozcarlos
--- license: openrail ---
bowphs/cc-100-01-final
--- dataset_info: features: - name: input_ids sequence: int64 splits: - name: train num_bytes: 2336802540 num_examples: 285115 download_size: 586074202 dataset_size: 2336802540 configs: - config_name: default data_files: - split: train path: data/train-* --- # Dataset Card for "cc-100-01-final" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
nenekochan/yoruno-vn
---
pretty_name: 夜羊L系列脚本
language:
- zh
- ja
language_details: zho_Hans, jpn
license: cc-by-nc-4.0
annotations_creators:
- expert-generated
- machine-generated
task_categories:
- text-generation
tags:
- not-for-all-audiences
viewer: false
---

> Nights when I can't sleep, and nights when I don't want to sleep

## ⚠️ Warning

- **Please note that the data comes from R18 visual novels and contains themes that may be considered inappropriate, shocking, disturbing, offensive, and extreme. If you are unsure of the legal consequences of possessing any form of fictional textual content in your country, do not download it.**
- **All data in this project, and any derivative works based on it, may not be used for commercial purposes.** I do not own the krkr2 script source files in `scenario-raw` and `scenario_ja-raw`; the remaining data-processing methods are released under the CC BY-NC 4.0 license.
- 🔑 The archives are encrypted; the extraction password is yorunohitsuji

## File structure

```
yoruno-vn.7z # (zh)
├── scenario-raw/    # krkr2 script source files
├── scenario/        # cleaned, structured scripts
└── conversation/    # conversation-format data, segmented by my own judgment
yoruno_ja-vn.7z # (ja)
├── scenario_ja-raw/ # krkr2 script source files
├── scenario_ja/     # cleaned, structured scripts
└── sound_ja/        # (the voice files that are not included, and) hand-annotated category metadata
```

- For the subjective segmentation, part is manual and the rest is a not-very-reliable automatic segmentation based on text similarity (used for the titles I haven't played yet; I don't want to be spoiled!). Manual segmentation is a long road, so it proceeds slowly; progress is tracked in [manual_seg-progress.md](manual_seg-progress.md).
- The first four titles (2015-2017) have a single heroine; the later titles have two heroines, and their script format differs slightly.
- The segmented conversations exclude some NPC dialogue and fix some typos, so they do not exactly match the original scripts.
- The voice files are not included after all, but I have annotated metadata that can be used to separate out voice lines containing moaning or mouth sounds.

## Preprocessing pipeline (notes to self)

0. Extract each title's scripts into `scenario[_ja]-raw/` and convert them to UTF-8 with `script/transcode.sh`; `2015-sssa` additionally needs `script/dos2unix.sh` to convert line endings to LF
1. Fix minor formatting issues: `cd scenario-raw && bash patch.sh`
2. Run `python ks-parse-all.py --voice scenario[_ja]-raw/ scenario[_ja]/` to produce `scenario[_ja]/`
3. Segment, then convert to `conversation/`
   a. Automatic segmentation: `python -m segment.auto path/to/scenario.jsonl`
   b. After manual segmentation: `python -m segment.manual path/to/scenario-manual_seg.jsonl`

Adding a new volume:

0. Put the scripts in `scenario[_ja]-raw/`
1. Add the new volume's metadata in `ks-parse-all.py`

## Acknowledgements

Yorunohitsuji, who keeps me company day and night, and the many fan-translation groups behind the data sources...
joey234/mmlu-high_school_biology-original-neg-prepend
--- dataset_info: features: - name: question dtype: string - name: choices sequence: string - name: answer dtype: class_label: names: '0': A '1': B '2': C '3': D - name: neg_prompt dtype: string splits: - name: test num_bytes: 42393 num_examples: 69 download_size: 29000 dataset_size: 42393 --- # Dataset Card for "mmlu-high_school_biology-original-neg-prepend" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
CyberHarem/wakana_rei_bangdreamdai2ki
--- license: mit task_categories: - text-to-image tags: - art - not-for-all-audiences size_categories: - n<1K --- # Dataset of wakana_rei/和奏レイ (BanG Dream! Dai 2-ki) This is the dataset of wakana_rei/和奏レイ (BanG Dream! Dai 2-ki), containing 66 images and their tags. The core tags of this character are `blue_eyes, long_hair, black_hair, brown_hair, breasts`, which are pruned in this dataset. Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)). ## List of Packages | Name | Images | Size | Download | Type | Description | |:-----------------|---------:|:-----------|:----------------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------| | raw | 66 | 77.33 MiB | [Download](https://huggingface.co/datasets/CyberHarem/wakana_rei_bangdreamdai2ki/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). | | 800 | 66 | 51.07 MiB | [Download](https://huggingface.co/datasets/CyberHarem/wakana_rei_bangdreamdai2ki/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. | | stage3-p480-800 | 141 | 97.63 MiB | [Download](https://huggingface.co/datasets/CyberHarem/wakana_rei_bangdreamdai2ki/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. | | 1200 | 66 | 70.80 MiB | [Download](https://huggingface.co/datasets/CyberHarem/wakana_rei_bangdreamdai2ki/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. 
|
| stage3-p480-1200 | 141 | 132.21 MiB | [Download](https://huggingface.co/datasets/CyberHarem/wakana_rei_bangdreamdai2ki/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |

### Load Raw Dataset with Waifuc

We provide the raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need this, just run the following code:

```python
import os
import zipfile

from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource

# download raw archive file
zip_file = hf_hub_download(
    repo_id='CyberHarem/wakana_rei_bangdreamdai2ki',
    repo_type='dataset',
    filename='dataset-raw.zip',
)

# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
    zf.extractall(dataset_dir)

# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
    print(item.image, item.meta['filename'], item.meta['tags'])
```

## List of Clusters

Tag clustering results are listed below; some outfits may be discoverable here.
### Raw Text Version | # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags | |----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | 0 | 5 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | 1girl, black_gloves, looking_at_viewer, ponytail, solo, black_serafuku, black_skirt, earrings, pleated_skirt, smile, black_choker, black_sailor_collar, midriff, navel, open_mouth, red_neckerchief, short_sleeves, suspenders, bandaged_arm, black_shirt, blush, chain, cowboy_shot, crop_top, guitar, half_gloves, holding_microphone, parted_bangs, piercing, red_ribbon, simple_background, standing, v-shaped_eyebrows, white_background | | 1 | 8 | ![](samples/1/clu1-sample0.png) | ![](samples/1/clu1-sample1.png) | ![](samples/1/clu1-sample2.png) | ![](samples/1/clu1-sample3.png) | ![](samples/1/clu1-sample4.png) | 1girl, smile, solo, belt, jeans, looking_at_viewer, simple_background, white_background, white_shirt, standing, blue_pants, closed_mouth, collarbone, full_body, holding, jacket, long_sleeves, necklace, torn_pants, vest | | 2 | 5 | ![](samples/2/clu2-sample0.png) | ![](samples/2/clu2-sample1.png) | ![](samples/2/clu2-sample2.png) | ![](samples/2/clu2-sample3.png) | ![](samples/2/clu2-sample4.png) | 1girl, earrings, solo, bare_shoulders, looking_at_viewer, short_hair, smile, black_gloves, flower, 
hair_ornament, red_dress, closed_mouth, feather_boa, upper_body | ### Table Version | # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | black_gloves | looking_at_viewer | ponytail | solo | black_serafuku | black_skirt | earrings | pleated_skirt | smile | black_choker | black_sailor_collar | midriff | navel | open_mouth | red_neckerchief | short_sleeves | suspenders | bandaged_arm | black_shirt | blush | chain | cowboy_shot | crop_top | guitar | half_gloves | holding_microphone | parted_bangs | piercing | red_ribbon | simple_background | standing | v-shaped_eyebrows | white_background | belt | jeans | white_shirt | blue_pants | closed_mouth | collarbone | full_body | holding | jacket | long_sleeves | necklace | torn_pants | vest | bare_shoulders | short_hair | flower | hair_ornament | red_dress | feather_boa | upper_body | |----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:---------------|:--------------------|:-----------|:-------|:-----------------|:--------------|:-----------|:----------------|:--------|:---------------|:----------------------|:----------|:--------|:-------------|:------------------|:----------------|:-------------|:---------------|:--------------|:--------|:--------|:--------------|:-----------|:---------|:--------------|:---------------------|:---------------|:-----------|:-------------|:--------------------|:-----------|:--------------------|:-------------------|:-------|:--------|:--------------|:-------------|:---------------|:-------------|:------------|:----------|:---------|:---------------|:-----------|:-------------|:-------|:-----------------|:-------------|:---------|:----------------|:------------|:--------------|:-------------| | 0 | 5 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | 
![](samples/0/clu0-sample4.png) | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | 1 | 8 | ![](samples/1/clu1-sample0.png) | ![](samples/1/clu1-sample1.png) | ![](samples/1/clu1-sample2.png) | ![](samples/1/clu1-sample3.png) | ![](samples/1/clu1-sample4.png) | X | | X | | X | | | | | X | | | | | | | | | | | | | | | | | | | | | X | X | | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | 2 | 5 | ![](samples/2/clu2-sample0.png) | ![](samples/2/clu2-sample1.png) | ![](samples/2/clu2-sample2.png) | ![](samples/2/clu2-sample3.png) | ![](samples/2/clu2-sample4.png) | X | X | X | | X | | | X | | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | X | | | | | | | | | X | X | X | X | X | X | X |
SEACrowd/nusax_mt
---
tags:
- machine-translation
language:
- ind
- ace
- ban
- bjn
- bbc
- bug
- jav
- mad
- min
- nij
- sun
- eng
---

# nusax_mt

NusaX is a high-quality multilingual parallel corpus that covers 12 languages: Indonesian, English, and 10 Indonesian local languages, namely Acehnese, Balinese, Banjarese, Buginese, Madurese, Minangkabau, Javanese, Ngaju, Sundanese, and Toba Batak.

NusaX-MT is a parallel corpus for training and benchmarking machine translation models across the 10 Indonesian local languages plus Indonesian and English. The data is presented in CSV format with 12 columns, one column for each language.

## Dataset Usage

Run `pip install nusacrowd` before loading the dataset through Hugging Face's `load_dataset`.

## Citation

```
@misc{winata2022nusax,
  title={NusaX: Multilingual Parallel Sentiment Dataset for 10 Indonesian Local Languages},
  author={Winata, Genta Indra and Aji, Alham Fikri and Cahyawijaya, Samuel and Mahendra, Rahmad and Koto, Fajri and Romadhony, Ade and Kurniawan, Kemal and Moeljadi, David and Prasojo, Radityo Eko and Fung, Pascale and Baldwin, Timothy and Lau, Jey Han and Sennrich, Rico and Ruder, Sebastian},
  year={2022},
  eprint={2205.15960},
  archivePrefix={arXiv},
  primaryClass={cs.CL}
}
```

## License

Creative Commons Attribution Share-Alike 4.0 International

## Homepage

[https://github.com/IndoNLP/nusax/tree/main/datasets/mt](https://github.com/IndoNLP/nusax/tree/main/datasets/mt)

### NusaCatalogue

For easy indexing and metadata: [https://indonlp.github.io/nusa-catalogue](https://indonlp.github.io/nusa-catalogue)
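Because each row holds the same sentence in every language column, any translation direction can be derived by pairing two columns. A minimal sketch on a made-up two-column excerpt (the real files have 12 columns, and the actual column names may differ from the illustrative `ind`/`eng` used here):

```python
import csv
import io

# Hypothetical excerpt; the actual NusaX-MT csv has one column per language.
sample_csv = "ind,eng\nselamat pagi,good morning\nterima kasih,thank you\n"

rows = list(csv.DictReader(io.StringIO(sample_csv)))
# Build (source, target) pairs for an Indonesian -> English direction
pairs = [(row["ind"], row["eng"]) for row in rows]
```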
includeno/movielens-100k
--- license: apache-2.0 size_categories: - 10K<n<100K ---
Hack90/ncbi_genbank_part_30
--- configs: - config_name: default data_files: - split: train path: data/train-* dataset_info: features: - name: id dtype: string - name: sequence dtype: string - name: name dtype: string - name: description dtype: string - name: features dtype: int64 - name: seq_length dtype: int64 splits: - name: train num_bytes: 32901664415 num_examples: 168885 download_size: 14692414119 dataset_size: 32901664415 --- # Dataset Card for "ncbi_genbank_part_30" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
irds/mr-tydi_bn_train
---
pretty_name: '`mr-tydi/bn/train`'
viewer: false
source_datasets: ['irds/mr-tydi_bn']
task_categories:
- text-retrieval
---

# Dataset Card for `mr-tydi/bn/train`

The `mr-tydi/bn/train` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/mr-tydi#mr-tydi/bn/train).

# Data

This dataset provides:
- `queries` (i.e., topics); count=1,713
- `qrels` (relevance assessments); count=1,719
- For `docs`, use [`irds/mr-tydi_bn`](https://huggingface.co/datasets/irds/mr-tydi_bn)

## Usage

```python
from datasets import load_dataset

queries = load_dataset('irds/mr-tydi_bn_train', 'queries')
for record in queries:
    record  # {'query_id': ..., 'text': ...}

qrels = load_dataset('irds/mr-tydi_bn_train', 'qrels')
for record in qrels:
    record  # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...}
```

Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in 🤗 Dataset format.

## Citation Information

```
@article{Zhang2021MrTyDi,
  title={{Mr. TyDi}: A Multi-lingual Benchmark for Dense Retrieval},
  author={Xinyu Zhang and Xueguang Ma and Peng Shi and Jimmy Lin},
  year={2021},
  journal={arXiv:2108.08787},
}
@article{Clark2020TyDiQa,
  title={{TyDi QA}: A Benchmark for Information-Seeking Question Answering in Typologically Diverse Languages},
  author={Jonathan H. Clark and Eunsol Choi and Michael Collins and Dan Garrette and Tom Kwiatkowski and Vitaly Nikolaev and Jennimaria Palomaki},
  year={2020},
  journal={Transactions of the Association for Computational Linguistics}
}
```
nguyenvulebinh/fsd50k
--- license: cc-by-4.0 ---
KETI-AIR/kor_wiki_hop
--- language: - ko license: - cc-by-sa-3.0 multilinguality: - monolingual size_categories: - 10K<n<100K source_datasets: - original task_categories: - question-answering task_ids: - extractive-qa paperswithcode_id: wikihop pretty_name: WikiHop dataset_info: features: - name: data_index_by_user dtype: int32 - name: question dtype: string - name: answer dtype: string - name: candidates sequence: string - name: supports sequence: string - name: annotations sequence: sequence: string splits: - name: train num_bytes: 351647702 num_examples: 43738 - name: validation num_bytes: 44492664 num_examples: 5129 download_size: 221782303 dataset_size: 396140366 --- # Dataset Card for wiki_hop ## Licensing Information The data is distributed under the [CC BY SA 3.0](https://creativecommons.org/licenses/by-sa/3.0/deed.en) license. ## Source Data Citation Information ``` @article{DBLP:journals/corr/abs-1710-06481, author = {Johannes Welbl and Pontus Stenetorp and Sebastian Riedel}, title = {Constructing Datasets for Multi-hop Reading Comprehension Across Documents}, journal = {CoRR}, volume = {abs/1710.06481}, year = {2017}, url = {http://arxiv.org/abs/1710.06481}, eprinttype = {arXiv}, eprint = {1710.06481}, timestamp = {Mon, 13 Aug 2018 16:46:34 +0200}, biburl = {https://dblp.org/rec/journals/corr/abs-1710-06481.bib}, bibsource = {dblp computer science bibliography, https://dblp.org} } ```
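In WikiHop, each question's gold `answer` appears among its `candidates`, so the extractive-QA setup reduces to picking one candidate. A minimal sanity-check sketch over a hypothetical record shaped like the features above (the field values are made up for illustration):

```python
# Hypothetical record mirroring the dataset_info features above
record = {
    "question": "country_of_citizenship juan_rodriguez",
    "answer": "spain",
    "candidates": ["france", "portugal", "spain"],
    "supports": ["Juan Rodriguez is a Spanish footballer ..."],
}

# The multiple-choice setup assumes the answer is one of the listed candidates
assert record["answer"] in record["candidates"]
answer_index = record["candidates"].index(record["answer"])
```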
result-kand2-sdxl-wuerst-karlo/2ddeba07
--- dataset_info: features: - name: result dtype: string - name: id dtype: int64 splits: - name: train num_bytes: 200 num_examples: 10 download_size: 1374 dataset_size: 200 configs: - config_name: default data_files: - split: train path: data/train-* --- # Dataset Card for "2ddeba07" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
johannes-garstenauer/ENN_masking_embeddings_dim_64
--- dataset_info: features: - name: last_hs sequence: float32 - name: label dtype: int64 splits: - name: train num_bytes: 18028896 num_examples: 67272 download_size: 24542250 dataset_size: 18028896 --- # Dataset Card for "ENN_masking_embeddings_dim_64" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
vntc/wiki-dump-cleaned
--- dataset_info: features: - name: id dtype: string - name: revid dtype: string - name: url dtype: string - name: title dtype: string - name: text dtype: string splits: - name: train num_bytes: 1068764982.7283747 num_examples: 1287811 download_size: 577597797 dataset_size: 1068764982.7283747 configs: - config_name: default data_files: - split: train path: data/train-* ---
open-llm-leaderboard/details_shaohang__SparseOPT-1.3B
--- pretty_name: Evaluation run of shaohang/SparseOPT-1.3B dataset_summary: "Dataset automatically created during the evaluation run of model\ \ [shaohang/SparseOPT-1.3B](https://huggingface.co/shaohang/SparseOPT-1.3B) on the\ \ [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\ \nThe dataset is composed of 3 configuration, each one coresponding to one of the\ \ evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be\ \ found as a specific split in each configuration, the split being named using the\ \ timestamp of the run.The \"train\" split is always pointing to the latest results.\n\ \nAn additional configuration \"results\" store all the aggregated results of the\ \ run (and is used to compute and display the agregated metrics on the [Open LLM\ \ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\ \nTo load the details from a run, you can for instance do the following:\n```python\n\ from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_shaohang__SparseOPT-1.3B\"\ ,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\ These are the [latest results from run 2023-10-24T23:18:41.395628](https://huggingface.co/datasets/open-llm-leaderboard/details_shaohang__SparseOPT-1.3B/blob/main/results_2023-10-24T23-18-41.395628.json)(note\ \ that their might be results for other tasks in the repos if successive evals didn't\ \ cover the same tasks. 
You find each in the results and the \"latest\" split for\ \ each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.003145973154362416,\n\ \ \"em_stderr\": 0.0005734993648436396,\n \"f1\": 0.048264471476510196,\n\ \ \"f1_stderr\": 0.0013025272598396927,\n \"acc\": 0.29319675461487227,\n\ \ \"acc_stderr\": 0.007301498172995543\n },\n \"harness|drop|3\": {\n\ \ \"em\": 0.003145973154362416,\n \"em_stderr\": 0.0005734993648436396,\n\ \ \"f1\": 0.048264471476510196,\n \"f1_stderr\": 0.0013025272598396927\n\ \ },\n \"harness|gsm8k|5\": {\n \"acc\": 0.000758150113722517,\n \ \ \"acc_stderr\": 0.0007581501137225237\n },\n \"harness|winogrande|5\"\ : {\n \"acc\": 0.585635359116022,\n \"acc_stderr\": 0.013844846232268563\n\ \ }\n}\n```" repo_url: https://huggingface.co/shaohang/SparseOPT-1.3B leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard point_of_contact: clementine@hf.co configs: - config_name: harness_drop_3 data_files: - split: 2023_10_24T23_18_41.395628 path: - '**/details_harness|drop|3_2023-10-24T23-18-41.395628.parquet' - split: latest path: - '**/details_harness|drop|3_2023-10-24T23-18-41.395628.parquet' - config_name: harness_gsm8k_5 data_files: - split: 2023_10_24T23_18_41.395628 path: - '**/details_harness|gsm8k|5_2023-10-24T23-18-41.395628.parquet' - split: latest path: - '**/details_harness|gsm8k|5_2023-10-24T23-18-41.395628.parquet' - config_name: harness_winogrande_5 data_files: - split: 2023_10_24T23_18_41.395628 path: - '**/details_harness|winogrande|5_2023-10-24T23-18-41.395628.parquet' - split: latest path: - '**/details_harness|winogrande|5_2023-10-24T23-18-41.395628.parquet' - config_name: results data_files: - split: 2023_10_24T23_18_41.395628 path: - results_2023-10-24T23-18-41.395628.parquet - split: latest path: - results_2023-10-24T23-18-41.395628.parquet --- # Dataset Card for Evaluation run of shaohang/SparseOPT-1.3B ## Dataset Description - **Homepage:** - **Repository:** 
https://huggingface.co/shaohang/SparseOPT-1.3B
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** clementine@hf.co

### Dataset Summary

Dataset automatically created during the evaluation run of model [shaohang/SparseOPT-1.3B](https://huggingface.co/shaohang/SparseOPT-1.3B) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).

The dataset is composed of 3 configurations, each corresponding to one of the evaluated tasks.

The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.

An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).

To load the details from a run, you can for instance do the following:

```python
from datasets import load_dataset

data = load_dataset("open-llm-leaderboard/details_shaohang__SparseOPT-1.3B",
    "harness_winogrande_5",
    split="train")
```

## Latest results

These are the [latest results from run 2023-10-24T23:18:41.395628](https://huggingface.co/datasets/open-llm-leaderboard/details_shaohang__SparseOPT-1.3B/blob/main/results_2023-10-24T23-18-41.395628.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks.
You find each in the results and the "latest" split for each eval): ```python { "all": { "em": 0.003145973154362416, "em_stderr": 0.0005734993648436396, "f1": 0.048264471476510196, "f1_stderr": 0.0013025272598396927, "acc": 0.29319675461487227, "acc_stderr": 0.007301498172995543 }, "harness|drop|3": { "em": 0.003145973154362416, "em_stderr": 0.0005734993648436396, "f1": 0.048264471476510196, "f1_stderr": 0.0013025272598396927 }, "harness|gsm8k|5": { "acc": 0.000758150113722517, "acc_stderr": 0.0007581501137225237 }, "harness|winogrande|5": { "acc": 0.585635359116022, "acc_stderr": 0.013844846232268563 } } ``` ### Supported Tasks and Leaderboards [More Information Needed] ### Languages [More Information Needed] ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information [More Information Needed] ### Contributions [More Information Needed]
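The aggregate `acc` reported under `"all"` in the latest results above is the unweighted mean of the per-task accuracies (gsm8k and winogrande; DROP reports EM/F1 rather than acc) — a quick sketch reproducing it from those numbers:

```python
# Per-task accuracies copied from the latest-results JSON above
task_acc = {
    "harness|gsm8k|5": 0.000758150113722517,
    "harness|winogrande|5": 0.585635359116022,
}

# Unweighted mean across tasks that report an accuracy
overall_acc = sum(task_acc.values()) / len(task_acc)
# Matches the reported "all" accuracy of 0.29319675461487227
```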
kpriyanshu256/MultiTabQA-multitable_pretraining-Salesforce-codet5-base_train-html-86000
--- dataset_info: features: - name: input_ids sequence: sequence: int32 - name: attention_mask sequence: sequence: int8 - name: labels sequence: sequence: int64 splits: - name: train num_bytes: 13336000 num_examples: 1000 download_size: 654227 dataset_size: 13336000 configs: - config_name: default data_files: - split: train path: data/train-* ---
hippocrates/liveqa_train
--- configs: - config_name: default data_files: - split: train path: data/train-* dataset_info: features: - name: id dtype: string - name: conversations list: - name: from dtype: string - name: value dtype: string - name: text dtype: string splits: - name: train num_bytes: 1155384 num_examples: 635 download_size: 506374 dataset_size: 1155384 --- # Dataset Card for "liveqa_train" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
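The `conversations` feature above is a list of `{from, value}` turns — a minimal sketch of flattening one record into plain text (the record contents here are hypothetical, shaped like the declared features):

```python
# Hypothetical record shaped like the dataset_info features above
record = {
    "id": "liveqa-0",
    "conversations": [
        {"from": "human", "value": "What are common symptoms of the flu?"},
        {"from": "gpt", "value": "Typical symptoms include fever, cough, and fatigue."},
    ],
}

# One "speaker: utterance" line per turn
dialogue = "\n".join(
    f"{turn['from']}: {turn['value']}" for turn in record["conversations"]
)
```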
Yunij/testing_kaggle_ai_detection_dataset
--- license: apache-2.0 dataset_info: features: - name: prompt dtype: string - name: text dtype: string - name: source dtype: string - name: label dtype: int64 - name: perplexity dtype: float64 - name: input_ids sequence: int32 - name: attention_mask sequence: int8 splits: - name: train num_bytes: 1244922869 num_examples: 330345 - name: test num_bytes: 65575515 num_examples: 17387 download_size: 435690981 dataset_size: 1310498384 configs: - config_name: default data_files: - split: train path: data/train-* - split: test path: data/test-* ---
gaoyzz/spider_SQL_prompts
--- license: apache-2.0 --- The Spider dataset with user questions, column names, and table names only; the Spider `train_other` split is also included.
Dinosaur-AICovers/bfb_ai_voice_1_marker
--- license: artistic-2.0 ---
vishalp23/subject-classification
---
language:
- en
license:
- other
multilinguality:
- monolingual
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- multi-class-classification
paperswithcode_id: subject
pretty_name: Subject
tags:
- subject-classification
---

# Dataset Card for "subject"

## Table of Contents
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)

## Dataset Description

- **Homepage:** [Distilbert Subject Classifier](https://github.com/vishal-parameswaran/distilbert-subject-classifier/tree/main)
- **Repository:**
- **Paper:**
- **Point of Contact:** [Vishal Parameswaran](https://github.com/vishal-parameswaran)
- **Size of downloaded dataset files:** 230 MB
- **Size of the generated dataset:** 241,485,123 bytes
- **Total amount of disk used:** 241,491,968 bytes

### Dataset Summary

Subject is a dataset of texts extracted from textbooks from [NCERT](https://ncert.nic.in/), [Open Textbook Library](https://open.umn.edu/opentextbooks/) and [IITJEE NEET AIIMS Students Questions
Data](https://www.kaggle.com/datasets/mrutyunjaybiswal/iitjee-neet-aims-students-questions-data?resource=download) ### Supported Tasks and Leaderboards [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Languages [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Dataset Structure ### Data Instances An example looks as follows. ``` { "text": "Getting Started This chapter will be about getting started with Git. We will begin by explaining some background on version control tools, then move on to how to get Git running on your system and finally how to get it set up to start working with. At the end of this chapter you should understand why Git is around, why you should use it and you should be all set up to do so. About Version Control What is “version control”, and why should you care? Version control is a system that records changes to a file or set of files over time so that you can recall specific versions later. For the examples in this book, you will use software source code as the files being version controlled, though in reality you can do this with nearly any type of file on a computer. If you are a graphic or web designer and want to keep every version of an image or layout (which you would most certainly want to), a Version Control System (VCS) is a very wise thing to use.", "label": 2 } ``` ### Data Fields The data fields are: - `text`: a `string` feature. - `label`: a classification label, with possible values including 'biology': 0, 'chemistry': 1, 'computer': 2, 'maths': 3, 'physics': 4, 'social sciences': 5. ### Data Splits The dataset has 1 configurations: - split: with a total of 338,683 examples split into train and test. 
| name | train | test | |-------|--------|--------| | split | 230601 | 108082 | ## Dataset Creation The dataset was created by chunking and merging across textbooks, based on their overall category. For Example: Lets take the textbook: Relativity Lite: A Pictorial Translation of Einstein’s Theories of Motion and Gravity by Jack C. Straton, Portland State University 1) We remove the Cover page, table of contents and preface 2) We extract all text from the pdf. 3) We chunk the text using the chunkipypackage and the Bert Tokenizer to chunks of size length: 512 max. 4) We record the labels according to the book. Wrt the above book, the label will be 4 as the book is a physics textbook. 5) All the chunked files are merged into one csv. ### Curation Rationale [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Additional Information ### Dataset Curators [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Licensing Information The dataset should be used for educational and research purposes only. ### Citation Information If you use this dataset, please cite: ``` ``` ### References WIP ### Contributions
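The chunking step described under Dataset Creation can be sketched in a few lines. This is a minimal illustration only: a plain whitespace tokenizer stands in for the BERT tokenizer, and the actual `chunkipy` API may differ.

```python
# Minimal sketch of the chunking step: split a book's extracted text into
# chunks of at most 512 tokens and attach the book's subject label.
# A whitespace tokenizer stands in for the BERT tokenizer used in the
# real pipeline; swap in a transformers tokenizer for faithful counts.

LABELS = {"biology": 0, "chemistry": 1, "computer": 2,
          "maths": 3, "physics": 4, "social sciences": 5}

def chunk_text(text: str, max_tokens: int = 512) -> list[str]:
    """Greedily pack whitespace tokens into chunks of at most max_tokens."""
    tokens = text.split()
    return [" ".join(tokens[i:i + max_tokens])
            for i in range(0, len(tokens), max_tokens)]

book_text = "word " * 1200          # stand-in for text extracted from a PDF
rows = [{"text": chunk, "label": LABELS["physics"]}
        for chunk in chunk_text(book_text)]
print(len(rows))                    # 3 chunks: 512 + 512 + 176 tokens
```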
alexandrainst/wiki40b-da
---
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: validation
    path: data/validation-*
  - split: test
    path: data/test-*
dataset_info:
  features:
  - name: wikidata_id
    dtype: string
  - name: text
    dtype: string
  - name: version_id
    dtype: string
  splits:
  - name: train
    num_bytes: 220855898
    num_examples: 109486
  - name: validation
    num_bytes: 12416304
    num_examples: 6173
  - name: test
    num_bytes: 12818380
    num_examples: 6219
  download_size: 150569852
  dataset_size: 246090582
license: cc-by-sa-4.0
task_categories:
- text-generation
language:
- da
pretty_name: Wiki40b-da
size_categories:
- 100K<n<1M
---

# Dataset Card for "wiki40b-da"

## Dataset Description

- **Point of Contact:** [Dan Saattrup Nielsen](mailto:dan.nielsen@alexandra.dk)
- **Size of downloaded dataset files:** 150.57 MB
- **Size of the generated dataset:** 246.09 MB
- **Total amount of disk used:** 396.66 MB

### Dataset Summary

This dataset is an upload of the Danish part of the [Wiki40b dataset](https://aclanthology.org/2020.lrec-1.297), a cleaned version of a Wikipedia dump. The dataset is identical in content to [this dataset on the Hugging Face Hub](https://huggingface.co/datasets/wiki40b), but that one requires `apache_beam`, `tensorflow` and `mwparserfromhell`, which can lead to dependency issues since these are not compatible with several newer packages. The training, validation and test splits are the original ones.

### Languages

The dataset is available in Danish (`da`).

## Dataset Structure

### Data Instances

- **Size of downloaded dataset files:** 150.57 MB
- **Size of the generated dataset:** 246.09 MB
- **Total amount of disk used:** 396.66 MB

An example from the dataset looks as follows.

```
{
  'wikidata_id': 'Q17341862',
  'text': "\n_START_ARTICLE_\nÆgyptiske tekstiler\n_START_PARAGRAPH_\nTekstiler havde mange (...)",
  'version_id': '9018011197452276273'
}
```

### Data Fields

The data fields are the same among all splits.

- `wikidata_id`: a `string` feature.
- `text`: a `string` feature.
- `version_id`: a `string` feature.

### Dataset Statistics

There are 109,486 samples in the training split, 6,173 samples in the validation split and 6,219 samples in the test split.

#### Document Length Distribution

![image/png](https://cdn-uploads.huggingface.co/production/uploads/60d368a613f774189902f555/dn-7_ugJObyF-CkD6XoO-.png)

## Additional Information

### Dataset Curators

[Dan Saattrup Nielsen](https://saattrupdan.github.io/) from [The Alexandra Institute](https://alexandra.dk/) uploaded it to the Hugging Face Hub.

### Licensing Information

The dataset is licensed under the [CC-BY-SA 4.0 license](https://creativecommons.org/licenses/by-sa/4.0/).
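The `text` field packs each article into a single string with inline markers, as in the example above. Below is a minimal sketch of splitting such a string into a title and paragraphs. It assumes only the `_START_ARTICLE_` and `_START_PARAGRAPH_` markers shown in the example; the original Wiki40b format can contain further markers (e.g. for section headers), which this sketch does not handle.

```python
# Minimal sketch: recover the article title and paragraph texts from a
# Wiki40b-style "text" field. Only the _START_ARTICLE_ and
# _START_PARAGRAPH_ markers shown in the card's example are handled.

def parse_wiki40b(text: str) -> tuple[str, list[str]]:
    body = text.split("_START_ARTICLE_", 1)[1]
    parts = body.split("_START_PARAGRAPH_")
    title = parts[0].strip()
    paragraphs = [p.strip() for p in parts[1:]]
    return title, paragraphs

sample = ("\n_START_ARTICLE_\nÆgyptiske tekstiler"
          "\n_START_PARAGRAPH_\nTekstiler havde mange (...)")
title, paragraphs = parse_wiki40b(sample)
print(title)        # Ægyptiske tekstiler
print(paragraphs)   # ['Tekstiler havde mange (...)']
```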
18moumi/data_docs_v1
--- configs: - config_name: default data_files: - split: train path: data/train-* - split: test path: data/test-* dataset_info: features: - name: question dtype: string - name: answer dtype: string - name: input_ids sequence: int32 - name: attention_mask sequence: int32 - name: labels sequence: int64 splits: - name: train num_bytes: 176411.04929577466 num_examples: 127 - name: test num_bytes: 20835.950704225354 num_examples: 15 download_size: 72860 dataset_size: 197247.0 --- # Dataset Card for "data_docs_v1" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
ReadingTimeMachine/historical_dla
---
license: apache-2.0
language:
- en
---

## Dataset Introduction

This dataset contains ~6000 hand-annotated pages with bounding boxes for figures, figure captions, tables, and math formulas. Coverage is most complete for figures and captions (some pages might not have all tables and math formulas annotated). The format is JSON and includes lists (which Hugging Face doesn't necessarily display well), with rows that look like:

![image/png](https://cdn-uploads.huggingface.co/production/uploads/6488a2de10af27fd305cb3dd/XdbA_YXAqNmurh-OBhvJO.png)

## How to use this data

To plot an example, check out the [trial_example_from_data.ipynb](https://huggingface.co/datasets/ReadingTimeMachine/historical_dla/blob/main/trial_example_from_data.ipynb) notebook. This assumes you have the data and the [data_utils.py](https://huggingface.co/datasets/ReadingTimeMachine/historical_dla/blob/main/data_utils.py) file in the same location as your notebook. The following packages will have to be installed:

```python
matplotlib
numpy
pandas
wand
PIL
wget
cv2  # OpenCV
```

On Google Colab, to install `wand` we found we had to do the following (this is not in the linked notebook):

```python
!apt install imagemagick
!apt-get install libmagickwand-dev
!pip install Wand
!rm /etc/ImageMagick-6/policy.xml
!pip install wget
```
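If you want to peek at the rows without the notebook, they can be read with the standard `json` module. The sketch below is illustrative only: the field names (`image`, `boxes`, `label`, `bbox`) are assumptions, not the dataset's actual schema — check the keys in your copy of the data (the screenshot above shows the real row layout) before relying on them.

```python
import json

# Illustrative only: a made-up row mimicking "JSON with lists". The real
# dataset's field names may differ -- inspect your copy of the data first.
row_json = """
{
  "image": "page_0001.png",
  "boxes": [
    {"label": "figure",         "bbox": [100, 150, 400, 600]},
    {"label": "figure caption", "bbox": [100, 610, 400, 680]}
  ]
}
"""

row = json.loads(row_json)
for box in row["boxes"]:
    x0, y0, x1, y1 = box["bbox"]
    print(f'{box["label"]}: ({x0}, {y0}) -> ({x1}, {y1})')
```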
B3ru/Gaybriel
--- license: openrail ---
arianataylorlife/malibu-rehab
--- license: apache-2.0 ---
linhtran92/tts_face
--- dataset_info: features: - name: sentence_norm dtype: string - name: audio struct: - name: array sequence: float64 - name: path dtype: 'null' - name: sampling_rate dtype: int64 - name: id dtype: string - name: wer dtype: float64 splits: - name: train num_bytes: 84090967 num_examples: 225 download_size: 64738623 dataset_size: 84090967 configs: - config_name: default data_files: - split: train path: data/train-* --- # Dataset Card for "tts_face" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
DeepFoldProtein/SCOP-1.65_processed_512
--- dataset_info: features: - name: index dtype: string - name: ndom dtype: int64 - name: input_ids sequence: int32 - name: attention_mask sequence: int8 - name: domain_labels sequence: sequence: int64 - name: label sequence: sequence: float64 splits: - name: train num_bytes: 16121094414 num_examples: 7670 download_size: 2662317 dataset_size: 16121094414 configs: - config_name: default data_files: - split: train path: data/train-* ---
airedefined/rm-dolly-rag-pythia-410m
--- dataset_info: features: - name: instruction dtype: string - name: context dtype: string - name: response dtype: string - name: category dtype: string - name: rejected dtype: string splits: - name: train num_bytes: 6006042 num_examples: 3588 download_size: 3742025 dataset_size: 6006042 configs: - config_name: default data_files: - split: train path: data/train-* --- # Dataset Card for "rm-dolly-rag-pythia-410m" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
juancopi81/orca-math-word-problems-140028_150030
--- language: - en dataset_info: features: - name: question dtype: string - name: answer dtype: string splits: - name: train num_bytes: 9609044 num_examples: 10002 download_size: 3188429 dataset_size: 9609044 configs: - config_name: default data_files: - split: train path: data/train-* ---
KETI-AIR/aihub_scitech_translation
--- license: apache-2.0 ---
alvations/c4p0-v1-en-engb
--- dataset_info: features: - name: source dtype: string - name: target dtype: string - name: target_backto_source dtype: string - name: raw_target list: - name: generated_text dtype: string - name: raw_target_backto_source list: - name: generated_text dtype: string - name: prompt dtype: string - name: reverse_prompt dtype: string - name: source_langid dtype: string - name: target_langid dtype: string - name: target_backto_source_langid dtype: string - name: doc_id dtype: int64 - name: sent_id dtype: int64 - name: timestamp dtype: string - name: url dtype: string - name: doc_hash dtype: string - name: dataset dtype: string - name: source_lang dtype: string - name: target_lang dtype: string splits: - name: train num_bytes: 11762976 num_examples: 9136 download_size: 4387769 dataset_size: 11762976 configs: - config_name: default data_files: - split: train path: data/train-* ---
Fhrozen/AudioSet2K22
---
annotations_creators:
- unknown
language_creators:
- unknown
license: cc-by-sa-4.0
size_categories:
- 100K<n<100M
source_datasets:
- unknown
task_categories:
- audio-classification
task_ids: []
tags:
- audio-slot-filling
---

# Dataset Card for audioset2022

## Table of Contents
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)

## Dataset Description

- **Homepage:** [AudioSet Ontology](https://research.google.com/audioset/ontology/index.html)
- **Repository:** [Needs More Information]
- **Paper:** [Audio Set: An ontology and human-labeled dataset for audio events](https://research.google.com/pubs/pub45857.html)
- **Leaderboard:** [Paperswithcode Leaderboard](https://paperswithcode.com/dataset/audioset)

### Dataset Summary

The AudioSet ontology is a collection of sound events organized in a hierarchy. The ontology covers a wide range of everyday sounds, from human and animal sounds, to natural and environmental sounds, to musical and miscellaneous sounds.
**This repository only includes audio files for DCASE 2022 - Task 3**

The included labels are limited to:

- Female speech, woman speaking
- Male speech, man speaking
- Clapping
- Telephone
- Telephone bell ringing
- Ringtone
- Laughter
- Domestic sounds, home sounds
- Vacuum cleaner
- Kettle whistle
- Mechanical fan
- Walk, footsteps
- Door
- Cupboard open or close
- Music
- Background music
- Pop music
- Musical instrument
- Acoustic guitar
- Marimba, xylophone
- Cowbell
- Piano
- Electric piano
- Rattle (instrument)
- Water tap, faucet
- Bell
- Bicycle bell
- Chime
- Knock

### Supported Tasks and Leaderboards

- `audio-classification`: The dataset can be used to train a model for Sound Event Detection/Localization. **The recordings only include single-channel audio. For localization tasks, room impulse response (RIR) information will have to be applied.**

### Languages

None

## Dataset Structure

### Data Instances

**WIP**

```
{
  'file':
}
```

### Data Fields

- file: A path to the downloaded audio file in .mp3 format.

### Data Splits

This dataset only includes audio files from the unbalanced train list. The data comprises two splits: weak labels and strong labels.

## Dataset Creation

### Curation Rationale

[Needs More Information]

### Source Data

#### Initial Data Collection and Normalization

[Needs More Information]

#### Who are the source language producers?

[Needs More Information]

### Annotations

#### Annotation process

[Needs More Information]

#### Who are the annotators?

[Needs More Information]

### Personal and Sensitive Information

[Needs More Information]

## Considerations for Using the Data

### Social Impact of Dataset

[More Information Needed]

### Discussion of Biases

[More Information Needed]

### Other Known Limitations

[Needs More Information]

## Additional Information

### Dataset Curators

The dataset was initially downloaded by Nelson Yalta (nelson.yalta@ieee.org).
### Licensing Information

[CC BY-SA 4.0](https://creativecommons.org/licenses/by-sa/4.0)

### Citation Information

```
@inproceedings{45857,
  title = {Audio Set: An ontology and human-labeled dataset for audio events},
  author = {Jort F. Gemmeke and Daniel P. W. Ellis and Dylan Freedman and Aren Jansen and Wade Lawrence and R. Channing Moore and Manoj Plakal and Marvin Ritter},
  year = {2017},
  booktitle = {Proc. IEEE ICASSP 2017},
  address = {New Orleans, LA}
}
```
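The task note above points out that the recordings are single-channel and that localization work requires applying room impulse response (RIR) information. As a rough illustration of what that means, the sketch below convolves a mono signal with one toy RIR per output channel; this is not part of the dataset itself, and real pipelines would use measured RIRs and an FFT-based convolution (e.g. `scipy.signal.fftconvolve`).

```python
# Rough sketch: simulate a multichannel recording by convolving a mono
# signal with one room impulse response (RIR) per output channel.
# Direct O(n*m) convolution in pure Python, for illustration only.

def convolve(signal, rir):
    out = [0.0] * (len(signal) + len(rir) - 1)
    for i, s in enumerate(signal):
        for j, h in enumerate(rir):
            out[i + j] += s * h
    return out

def spatialize(mono, rirs_per_channel):
    """Return one convolved output signal per channel RIR."""
    return [convolve(mono, rir) for rir in rirs_per_channel]

mono = [1, 2]                    # toy mono recording
rirs = [[1, 1], [2]]             # toy RIRs for two output channels
print(spatialize(mono, rirs))    # [[1.0, 3.0, 2.0], [2.0, 4.0]]
```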
EleutherAI/quirky_nli_bob_hard
--- configs: - config_name: default data_files: - split: train path: data/train-* - split: validation path: data/validation-* - split: test path: data/test-* dataset_info: features: - name: label dtype: class_label: names: '0': entailment '1': neutral '2': contradiction - name: id dtype: string - name: choices sequence: string - name: bob_label dtype: int64 - name: difficulty dtype: float64 - name: statement dtype: string - name: character dtype: string - name: alice_label dtype: int64 splits: - name: train num_bytes: 331205.67582760775 num_examples: 1401 - name: validation num_bytes: 114536.40525 num_examples: 477 - name: test num_bytes: 117796.156 num_examples: 496 download_size: 219952 dataset_size: 563538.2370776078 --- # Dataset Card for "quirky_nli_bob_hard" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
autoevaluate/autoeval-eval-futin__feed-sen_vi-894567-2175669981
--- type: predictions tags: - autotrain - evaluation datasets: - futin/feed eval_info: task: text_zero_shot_classification model: facebook/opt-350m metrics: [] dataset_name: futin/feed dataset_config: sen_vi dataset_split: test col_mapping: text: text classes: classes target: target --- # Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Zero-Shot Text Classification * Model: facebook/opt-350m * Dataset: futin/feed * Config: sen_vi * Split: test To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@futin](https://huggingface.co/futin) for evaluating this model.
open-llm-leaderboard/details_NeverSleep__MiquMaid-v1-70B
--- pretty_name: Evaluation run of NeverSleep/MiquMaid-v1-70B dataset_summary: "Dataset automatically created during the evaluation run of model\ \ [NeverSleep/MiquMaid-v1-70B](https://huggingface.co/NeverSleep/MiquMaid-v1-70B)\ \ on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\ \nThe dataset is composed of 63 configurations, each one corresponding to one of the\ \ evaluated tasks.\n\nThe dataset has been created from 1 run(s). Each run can be\ \ found as a specific split in each configuration, the split being named using the\ \ timestamp of the run. The \"train\" split is always pointing to the latest results.\n\ \nAn additional configuration \"results\" stores all the aggregated results of the\ \ run (and is used to compute and display the aggregated metrics on the [Open LLM\ \ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\ \nTo load the details from a run, you can for instance do the following:\n```python\n\ from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_NeverSleep__MiquMaid-v1-70B\"\ ,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\ These are the [latest results from run 2024-02-03T13:55:45.160201](https://huggingface.co/datasets/open-llm-leaderboard/details_NeverSleep__MiquMaid-v1-70B/blob/main/results_2024-02-03T13-55-45.160201.json) (note\ \ that there might be results for other tasks in the repos if successive evals didn't\ \ cover the same tasks. 
You find each in the results and the \"latest\" split for\ \ each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.747644956445168,\n\ \ \"acc_stderr\": 0.028643244664619514,\n \"acc_norm\": 0.7506821937250182,\n\ \ \"acc_norm_stderr\": 0.029197302210032687,\n \"mc1\": 0.45165238678090575,\n\ \ \"mc1_stderr\": 0.017421480300277643,\n \"mc2\": 0.6178734796953549,\n\ \ \"mc2_stderr\": 0.014878715039713398\n },\n \"harness|arc:challenge|25\"\ : {\n \"acc\": 0.674061433447099,\n \"acc_stderr\": 0.013697432466693244,\n\ \ \"acc_norm\": 0.7167235494880546,\n \"acc_norm_stderr\": 0.013167478735134576\n\ \ },\n \"harness|hellaswag|10\": {\n \"acc\": 0.694582752439753,\n\ \ \"acc_stderr\": 0.004596426220000888,\n \"acc_norm\": 0.8796056562437762,\n\ \ \"acc_norm_stderr\": 0.003247570330456916\n },\n \"harness|hendrycksTest-abstract_algebra|5\"\ : {\n \"acc\": 0.39,\n \"acc_stderr\": 0.04902071300001975,\n \ \ \"acc_norm\": 0.39,\n \"acc_norm_stderr\": 0.04902071300001975\n \ \ },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.6962962962962963,\n\ \ \"acc_stderr\": 0.03972552884785137,\n \"acc_norm\": 0.6962962962962963,\n\ \ \"acc_norm_stderr\": 0.03972552884785137\n },\n \"harness|hendrycksTest-astronomy|5\"\ : {\n \"acc\": 0.8486842105263158,\n \"acc_stderr\": 0.029162631596843982,\n\ \ \"acc_norm\": 0.8486842105263158,\n \"acc_norm_stderr\": 0.029162631596843982\n\ \ },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.78,\n\ \ \"acc_stderr\": 0.04163331998932261,\n \"acc_norm\": 0.78,\n \ \ \"acc_norm_stderr\": 0.04163331998932261\n },\n \"harness|hendrycksTest-clinical_knowledge|5\"\ : {\n \"acc\": 0.7849056603773585,\n \"acc_stderr\": 0.025288394502891366,\n\ \ \"acc_norm\": 0.7849056603773585,\n \"acc_norm_stderr\": 0.025288394502891366\n\ \ },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.8958333333333334,\n\ \ \"acc_stderr\": 0.02554523921025691,\n \"acc_norm\": 0.8958333333333334,\n\ \ \"acc_norm_stderr\": 0.02554523921025691\n 
},\n \"harness|hendrycksTest-college_chemistry|5\"\ : {\n \"acc\": 0.54,\n \"acc_stderr\": 0.05009082659620332,\n \ \ \"acc_norm\": 0.54,\n \"acc_norm_stderr\": 0.05009082659620332\n \ \ },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\"\ : 0.63,\n \"acc_stderr\": 0.04852365870939099,\n \"acc_norm\": 0.63,\n\ \ \"acc_norm_stderr\": 0.04852365870939099\n },\n \"harness|hendrycksTest-college_mathematics|5\"\ : {\n \"acc\": 0.42,\n \"acc_stderr\": 0.04960449637488584,\n \ \ \"acc_norm\": 0.42,\n \"acc_norm_stderr\": 0.04960449637488584\n \ \ },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.7398843930635838,\n\ \ \"acc_stderr\": 0.033450369167889904,\n \"acc_norm\": 0.7398843930635838,\n\ \ \"acc_norm_stderr\": 0.033450369167889904\n },\n \"harness|hendrycksTest-college_physics|5\"\ : {\n \"acc\": 0.45098039215686275,\n \"acc_stderr\": 0.049512182523962625,\n\ \ \"acc_norm\": 0.45098039215686275,\n \"acc_norm_stderr\": 0.049512182523962625\n\ \ },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\":\ \ 0.83,\n \"acc_stderr\": 0.03775251680686371,\n \"acc_norm\": 0.83,\n\ \ \"acc_norm_stderr\": 0.03775251680686371\n },\n \"harness|hendrycksTest-conceptual_physics|5\"\ : {\n \"acc\": 0.7531914893617021,\n \"acc_stderr\": 0.02818544130123409,\n\ \ \"acc_norm\": 0.7531914893617021,\n \"acc_norm_stderr\": 0.02818544130123409\n\ \ },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.5964912280701754,\n\ \ \"acc_stderr\": 0.04615186962583707,\n \"acc_norm\": 0.5964912280701754,\n\ \ \"acc_norm_stderr\": 0.04615186962583707\n },\n \"harness|hendrycksTest-electrical_engineering|5\"\ : {\n \"acc\": 0.7241379310344828,\n \"acc_stderr\": 0.03724563619774632,\n\ \ \"acc_norm\": 0.7241379310344828,\n \"acc_norm_stderr\": 0.03724563619774632\n\ \ },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\"\ : 0.5396825396825397,\n \"acc_stderr\": 0.025670080636909308,\n \"\ acc_norm\": 0.5396825396825397,\n 
\"acc_norm_stderr\": 0.025670080636909308\n\ \ },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.5634920634920635,\n\ \ \"acc_stderr\": 0.04435932892851466,\n \"acc_norm\": 0.5634920634920635,\n\ \ \"acc_norm_stderr\": 0.04435932892851466\n },\n \"harness|hendrycksTest-global_facts|5\"\ : {\n \"acc\": 0.5,\n \"acc_stderr\": 0.050251890762960605,\n \ \ \"acc_norm\": 0.5,\n \"acc_norm_stderr\": 0.050251890762960605\n \ \ },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.867741935483871,\n\ \ \"acc_stderr\": 0.019272015434846475,\n \"acc_norm\": 0.867741935483871,\n\ \ \"acc_norm_stderr\": 0.019272015434846475\n },\n \"harness|hendrycksTest-high_school_chemistry|5\"\ : {\n \"acc\": 0.6305418719211823,\n \"acc_stderr\": 0.033959703819985754,\n\ \ \"acc_norm\": 0.6305418719211823,\n \"acc_norm_stderr\": 0.033959703819985754\n\ \ },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \ \ \"acc\": 0.84,\n \"acc_stderr\": 0.03684529491774708,\n \"acc_norm\"\ : 0.84,\n \"acc_norm_stderr\": 0.03684529491774708\n },\n \"harness|hendrycksTest-high_school_european_history|5\"\ : {\n \"acc\": 0.8424242424242424,\n \"acc_stderr\": 0.028450388805284332,\n\ \ \"acc_norm\": 0.8424242424242424,\n \"acc_norm_stderr\": 0.028450388805284332\n\ \ },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\"\ : 0.9141414141414141,\n \"acc_stderr\": 0.019960225563172885,\n \"\ acc_norm\": 0.9141414141414141,\n \"acc_norm_stderr\": 0.019960225563172885\n\ \ },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n\ \ \"acc\": 0.9430051813471503,\n \"acc_stderr\": 0.01673108529360756,\n\ \ \"acc_norm\": 0.9430051813471503,\n \"acc_norm_stderr\": 0.01673108529360756\n\ \ },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \ \ \"acc\": 0.7974358974358975,\n \"acc_stderr\": 0.020377660970371372,\n\ \ \"acc_norm\": 0.7974358974358975,\n \"acc_norm_stderr\": 0.020377660970371372\n\ \ },\n 
\"harness|hendrycksTest-high_school_mathematics|5\": {\n \"\ acc\": 0.4,\n \"acc_stderr\": 0.029869605095316904,\n \"acc_norm\"\ : 0.4,\n \"acc_norm_stderr\": 0.029869605095316904\n },\n \"harness|hendrycksTest-high_school_microeconomics|5\"\ : {\n \"acc\": 0.8739495798319328,\n \"acc_stderr\": 0.02155962312121392,\n\ \ \"acc_norm\": 0.8739495798319328,\n \"acc_norm_stderr\": 0.02155962312121392\n\ \ },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\"\ : 0.4900662251655629,\n \"acc_stderr\": 0.04081677107248437,\n \"\ acc_norm\": 0.4900662251655629,\n \"acc_norm_stderr\": 0.04081677107248437\n\ \ },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\"\ : 0.9174311926605505,\n \"acc_stderr\": 0.011800361363016567,\n \"\ acc_norm\": 0.9174311926605505,\n \"acc_norm_stderr\": 0.011800361363016567\n\ \ },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\"\ : 0.6898148148148148,\n \"acc_stderr\": 0.031546962856566295,\n \"\ acc_norm\": 0.6898148148148148,\n \"acc_norm_stderr\": 0.031546962856566295\n\ \ },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\"\ : 0.9215686274509803,\n \"acc_stderr\": 0.018869514646658928,\n \"\ acc_norm\": 0.9215686274509803,\n \"acc_norm_stderr\": 0.018869514646658928\n\ \ },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"\ acc\": 0.9071729957805907,\n \"acc_stderr\": 0.01888975055095671,\n \ \ \"acc_norm\": 0.9071729957805907,\n \"acc_norm_stderr\": 0.01888975055095671\n\ \ },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.8116591928251121,\n\ \ \"acc_stderr\": 0.026241132996407256,\n \"acc_norm\": 0.8116591928251121,\n\ \ \"acc_norm_stderr\": 0.026241132996407256\n },\n \"harness|hendrycksTest-human_sexuality|5\"\ : {\n \"acc\": 0.8549618320610687,\n \"acc_stderr\": 0.030884661089515375,\n\ \ \"acc_norm\": 0.8549618320610687,\n \"acc_norm_stderr\": 0.030884661089515375\n\ \ },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\":\ \ 
0.9008264462809917,\n \"acc_stderr\": 0.02728524631275895,\n \"\ acc_norm\": 0.9008264462809917,\n \"acc_norm_stderr\": 0.02728524631275895\n\ \ },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.9074074074074074,\n\ \ \"acc_stderr\": 0.02802188803860944,\n \"acc_norm\": 0.9074074074074074,\n\ \ \"acc_norm_stderr\": 0.02802188803860944\n },\n \"harness|hendrycksTest-logical_fallacies|5\"\ : {\n \"acc\": 0.8159509202453987,\n \"acc_stderr\": 0.030446777687971723,\n\ \ \"acc_norm\": 0.8159509202453987,\n \"acc_norm_stderr\": 0.030446777687971723\n\ \ },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.6428571428571429,\n\ \ \"acc_stderr\": 0.04547960999764376,\n \"acc_norm\": 0.6428571428571429,\n\ \ \"acc_norm_stderr\": 0.04547960999764376\n },\n \"harness|hendrycksTest-management|5\"\ : {\n \"acc\": 0.8737864077669902,\n \"acc_stderr\": 0.03288180278808629,\n\ \ \"acc_norm\": 0.8737864077669902,\n \"acc_norm_stderr\": 0.03288180278808629\n\ \ },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.9316239316239316,\n\ \ \"acc_stderr\": 0.01653462768431136,\n \"acc_norm\": 0.9316239316239316,\n\ \ \"acc_norm_stderr\": 0.01653462768431136\n },\n \"harness|hendrycksTest-medical_genetics|5\"\ : {\n \"acc\": 0.79,\n \"acc_stderr\": 0.040936018074033256,\n \ \ \"acc_norm\": 0.79,\n \"acc_norm_stderr\": 0.040936018074033256\n \ \ },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.89272030651341,\n\ \ \"acc_stderr\": 0.01106657144950843,\n \"acc_norm\": 0.89272030651341,\n\ \ \"acc_norm_stderr\": 0.01106657144950843\n },\n \"harness|hendrycksTest-moral_disputes|5\"\ : {\n \"acc\": 0.8208092485549133,\n \"acc_stderr\": 0.020647590029679332,\n\ \ \"acc_norm\": 0.8208092485549133,\n \"acc_norm_stderr\": 0.020647590029679332\n\ \ },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.6290502793296089,\n\ \ \"acc_stderr\": 0.016155910721341777,\n \"acc_norm\": 0.6290502793296089,\n\ \ \"acc_norm_stderr\": 0.016155910721341777\n 
},\n \"harness|hendrycksTest-nutrition|5\"\ : {\n \"acc\": 0.8235294117647058,\n \"acc_stderr\": 0.021828596053108416,\n\ \ \"acc_norm\": 0.8235294117647058,\n \"acc_norm_stderr\": 0.021828596053108416\n\ \ },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.8263665594855305,\n\ \ \"acc_stderr\": 0.02151405158597041,\n \"acc_norm\": 0.8263665594855305,\n\ \ \"acc_norm_stderr\": 0.02151405158597041\n },\n \"harness|hendrycksTest-prehistory|5\"\ : {\n \"acc\": 0.8487654320987654,\n \"acc_stderr\": 0.019935086092149893,\n\ \ \"acc_norm\": 0.8487654320987654,\n \"acc_norm_stderr\": 0.019935086092149893\n\ \ },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"\ acc\": 0.5673758865248227,\n \"acc_stderr\": 0.02955545423677884,\n \ \ \"acc_norm\": 0.5673758865248227,\n \"acc_norm_stderr\": 0.02955545423677884\n\ \ },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.5814863102998696,\n\ \ \"acc_stderr\": 0.012599505608336482,\n \"acc_norm\": 0.5814863102998696,\n\ \ \"acc_norm_stderr\": 0.012599505608336482\n },\n \"harness|hendrycksTest-professional_medicine|5\"\ : {\n \"acc\": 0.8088235294117647,\n \"acc_stderr\": 0.023886881922440335,\n\ \ \"acc_norm\": 0.8088235294117647,\n \"acc_norm_stderr\": 0.023886881922440335\n\ \ },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"\ acc\": 0.8137254901960784,\n \"acc_stderr\": 0.015750526284363346,\n \ \ \"acc_norm\": 0.8137254901960784,\n \"acc_norm_stderr\": 0.015750526284363346\n\ \ },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.7,\n\ \ \"acc_stderr\": 0.04389311454644287,\n \"acc_norm\": 0.7,\n \ \ \"acc_norm_stderr\": 0.04389311454644287\n },\n \"harness|hendrycksTest-security_studies|5\"\ : {\n \"acc\": 0.8244897959183674,\n \"acc_stderr\": 0.024352800722970015,\n\ \ \"acc_norm\": 0.8244897959183674,\n \"acc_norm_stderr\": 0.024352800722970015\n\ \ },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.9253731343283582,\n\ \ \"acc_stderr\": 
0.018581939698490618,\n \"acc_norm\": 0.9253731343283582,\n\ \ \"acc_norm_stderr\": 0.018581939698490618\n },\n \"harness|hendrycksTest-us_foreign_policy|5\"\ : {\n \"acc\": 0.94,\n \"acc_stderr\": 0.02386832565759418,\n \ \ \"acc_norm\": 0.94,\n \"acc_norm_stderr\": 0.02386832565759418\n \ \ },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.5662650602409639,\n\ \ \"acc_stderr\": 0.03858158940685515,\n \"acc_norm\": 0.5662650602409639,\n\ \ \"acc_norm_stderr\": 0.03858158940685515\n },\n \"harness|hendrycksTest-world_religions|5\"\ : {\n \"acc\": 0.8771929824561403,\n \"acc_stderr\": 0.02517298435015577,\n\ \ \"acc_norm\": 0.8771929824561403,\n \"acc_norm_stderr\": 0.02517298435015577\n\ \ },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.45165238678090575,\n\ \ \"mc1_stderr\": 0.017421480300277643,\n \"mc2\": 0.6178734796953549,\n\ \ \"mc2_stderr\": 0.014878715039713398\n },\n \"harness|winogrande|5\"\ : {\n \"acc\": 0.850828729281768,\n \"acc_stderr\": 0.010012598805627328\n\ \ },\n \"harness|gsm8k|5\": {\n \"acc\": 0.6929492039423806,\n \ \ \"acc_stderr\": 0.012705685723131719\n }\n}\n```" repo_url: https://huggingface.co/NeverSleep/MiquMaid-v1-70B leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard point_of_contact: clementine@hf.co configs: - config_name: harness_arc_challenge_25 data_files: - split: 2024_02_03T13_55_45.160201 path: - '**/details_harness|arc:challenge|25_2024-02-03T13-55-45.160201.parquet' - split: latest path: - '**/details_harness|arc:challenge|25_2024-02-03T13-55-45.160201.parquet' - config_name: harness_gsm8k_5 data_files: - split: 2024_02_03T13_55_45.160201 path: - '**/details_harness|gsm8k|5_2024-02-03T13-55-45.160201.parquet' - split: latest path: - '**/details_harness|gsm8k|5_2024-02-03T13-55-45.160201.parquet' - config_name: harness_hellaswag_10 data_files: - split: 2024_02_03T13_55_45.160201 path: - '**/details_harness|hellaswag|10_2024-02-03T13-55-45.160201.parquet' - split: latest path: - 
'**/details_harness|hellaswag|10_2024-02-03T13-55-45.160201.parquet' - config_name: harness_hendrycksTest_5 data_files: - split: 2024_02_03T13_55_45.160201 path: - '**/details_harness|hendrycksTest-abstract_algebra|5_2024-02-03T13-55-45.160201.parquet' - '**/details_harness|hendrycksTest-anatomy|5_2024-02-03T13-55-45.160201.parquet' - '**/details_harness|hendrycksTest-astronomy|5_2024-02-03T13-55-45.160201.parquet' - '**/details_harness|hendrycksTest-business_ethics|5_2024-02-03T13-55-45.160201.parquet' - '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-02-03T13-55-45.160201.parquet' - '**/details_harness|hendrycksTest-college_biology|5_2024-02-03T13-55-45.160201.parquet' - '**/details_harness|hendrycksTest-college_chemistry|5_2024-02-03T13-55-45.160201.parquet' - '**/details_harness|hendrycksTest-college_computer_science|5_2024-02-03T13-55-45.160201.parquet' - '**/details_harness|hendrycksTest-college_mathematics|5_2024-02-03T13-55-45.160201.parquet' - '**/details_harness|hendrycksTest-college_medicine|5_2024-02-03T13-55-45.160201.parquet' - '**/details_harness|hendrycksTest-college_physics|5_2024-02-03T13-55-45.160201.parquet' - '**/details_harness|hendrycksTest-computer_security|5_2024-02-03T13-55-45.160201.parquet' - '**/details_harness|hendrycksTest-conceptual_physics|5_2024-02-03T13-55-45.160201.parquet' - '**/details_harness|hendrycksTest-econometrics|5_2024-02-03T13-55-45.160201.parquet' - '**/details_harness|hendrycksTest-electrical_engineering|5_2024-02-03T13-55-45.160201.parquet' - '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-02-03T13-55-45.160201.parquet' - '**/details_harness|hendrycksTest-formal_logic|5_2024-02-03T13-55-45.160201.parquet' - '**/details_harness|hendrycksTest-global_facts|5_2024-02-03T13-55-45.160201.parquet' - '**/details_harness|hendrycksTest-high_school_biology|5_2024-02-03T13-55-45.160201.parquet' - '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-02-03T13-55-45.160201.parquet' - 
'**/details_harness|hendrycksTest-high_school_computer_science|5_2024-02-03T13-55-45.160201.parquet' - '**/details_harness|hendrycksTest-high_school_european_history|5_2024-02-03T13-55-45.160201.parquet' - '**/details_harness|hendrycksTest-high_school_geography|5_2024-02-03T13-55-45.160201.parquet' - '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-02-03T13-55-45.160201.parquet' - '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-02-03T13-55-45.160201.parquet' - '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-02-03T13-55-45.160201.parquet' - '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-02-03T13-55-45.160201.parquet' - '**/details_harness|hendrycksTest-high_school_physics|5_2024-02-03T13-55-45.160201.parquet' - '**/details_harness|hendrycksTest-high_school_psychology|5_2024-02-03T13-55-45.160201.parquet' - '**/details_harness|hendrycksTest-high_school_statistics|5_2024-02-03T13-55-45.160201.parquet' - '**/details_harness|hendrycksTest-high_school_us_history|5_2024-02-03T13-55-45.160201.parquet' - '**/details_harness|hendrycksTest-high_school_world_history|5_2024-02-03T13-55-45.160201.parquet' - '**/details_harness|hendrycksTest-human_aging|5_2024-02-03T13-55-45.160201.parquet' - '**/details_harness|hendrycksTest-human_sexuality|5_2024-02-03T13-55-45.160201.parquet' - '**/details_harness|hendrycksTest-international_law|5_2024-02-03T13-55-45.160201.parquet' - '**/details_harness|hendrycksTest-jurisprudence|5_2024-02-03T13-55-45.160201.parquet' - '**/details_harness|hendrycksTest-logical_fallacies|5_2024-02-03T13-55-45.160201.parquet' - '**/details_harness|hendrycksTest-machine_learning|5_2024-02-03T13-55-45.160201.parquet' - '**/details_harness|hendrycksTest-management|5_2024-02-03T13-55-45.160201.parquet' - '**/details_harness|hendrycksTest-marketing|5_2024-02-03T13-55-45.160201.parquet' - '**/details_harness|hendrycksTest-medical_genetics|5_2024-02-03T13-55-45.160201.parquet' - 
'**/details_harness|hendrycksTest-miscellaneous|5_2024-02-03T13-55-45.160201.parquet' - '**/details_harness|hendrycksTest-moral_disputes|5_2024-02-03T13-55-45.160201.parquet' - '**/details_harness|hendrycksTest-moral_scenarios|5_2024-02-03T13-55-45.160201.parquet' - '**/details_harness|hendrycksTest-nutrition|5_2024-02-03T13-55-45.160201.parquet' - '**/details_harness|hendrycksTest-philosophy|5_2024-02-03T13-55-45.160201.parquet' - '**/details_harness|hendrycksTest-prehistory|5_2024-02-03T13-55-45.160201.parquet' - '**/details_harness|hendrycksTest-professional_accounting|5_2024-02-03T13-55-45.160201.parquet' - '**/details_harness|hendrycksTest-professional_law|5_2024-02-03T13-55-45.160201.parquet' - '**/details_harness|hendrycksTest-professional_medicine|5_2024-02-03T13-55-45.160201.parquet' - '**/details_harness|hendrycksTest-professional_psychology|5_2024-02-03T13-55-45.160201.parquet' - '**/details_harness|hendrycksTest-public_relations|5_2024-02-03T13-55-45.160201.parquet' - '**/details_harness|hendrycksTest-security_studies|5_2024-02-03T13-55-45.160201.parquet' - '**/details_harness|hendrycksTest-sociology|5_2024-02-03T13-55-45.160201.parquet' - '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-02-03T13-55-45.160201.parquet' - '**/details_harness|hendrycksTest-virology|5_2024-02-03T13-55-45.160201.parquet' - '**/details_harness|hendrycksTest-world_religions|5_2024-02-03T13-55-45.160201.parquet' - split: latest path: - '**/details_harness|hendrycksTest-abstract_algebra|5_2024-02-03T13-55-45.160201.parquet' - '**/details_harness|hendrycksTest-anatomy|5_2024-02-03T13-55-45.160201.parquet' - '**/details_harness|hendrycksTest-astronomy|5_2024-02-03T13-55-45.160201.parquet' - '**/details_harness|hendrycksTest-business_ethics|5_2024-02-03T13-55-45.160201.parquet' - '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-02-03T13-55-45.160201.parquet' - '**/details_harness|hendrycksTest-college_biology|5_2024-02-03T13-55-45.160201.parquet' - 
'**/details_harness|hendrycksTest-college_chemistry|5_2024-02-03T13-55-45.160201.parquet' - '**/details_harness|hendrycksTest-college_computer_science|5_2024-02-03T13-55-45.160201.parquet' - '**/details_harness|hendrycksTest-college_mathematics|5_2024-02-03T13-55-45.160201.parquet' - '**/details_harness|hendrycksTest-college_medicine|5_2024-02-03T13-55-45.160201.parquet' - '**/details_harness|hendrycksTest-college_physics|5_2024-02-03T13-55-45.160201.parquet' - '**/details_harness|hendrycksTest-computer_security|5_2024-02-03T13-55-45.160201.parquet' - '**/details_harness|hendrycksTest-conceptual_physics|5_2024-02-03T13-55-45.160201.parquet' - '**/details_harness|hendrycksTest-econometrics|5_2024-02-03T13-55-45.160201.parquet' - '**/details_harness|hendrycksTest-electrical_engineering|5_2024-02-03T13-55-45.160201.parquet' - '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-02-03T13-55-45.160201.parquet' - '**/details_harness|hendrycksTest-formal_logic|5_2024-02-03T13-55-45.160201.parquet' - '**/details_harness|hendrycksTest-global_facts|5_2024-02-03T13-55-45.160201.parquet' - '**/details_harness|hendrycksTest-high_school_biology|5_2024-02-03T13-55-45.160201.parquet' - '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-02-03T13-55-45.160201.parquet' - '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-02-03T13-55-45.160201.parquet' - '**/details_harness|hendrycksTest-high_school_european_history|5_2024-02-03T13-55-45.160201.parquet' - '**/details_harness|hendrycksTest-high_school_geography|5_2024-02-03T13-55-45.160201.parquet' - '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-02-03T13-55-45.160201.parquet' - '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-02-03T13-55-45.160201.parquet' - '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-02-03T13-55-45.160201.parquet' - 
'**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-02-03T13-55-45.160201.parquet' - '**/details_harness|hendrycksTest-high_school_physics|5_2024-02-03T13-55-45.160201.parquet' - '**/details_harness|hendrycksTest-high_school_psychology|5_2024-02-03T13-55-45.160201.parquet' - '**/details_harness|hendrycksTest-high_school_statistics|5_2024-02-03T13-55-45.160201.parquet' - '**/details_harness|hendrycksTest-high_school_us_history|5_2024-02-03T13-55-45.160201.parquet' - '**/details_harness|hendrycksTest-high_school_world_history|5_2024-02-03T13-55-45.160201.parquet' - '**/details_harness|hendrycksTest-human_aging|5_2024-02-03T13-55-45.160201.parquet' - '**/details_harness|hendrycksTest-human_sexuality|5_2024-02-03T13-55-45.160201.parquet' - '**/details_harness|hendrycksTest-international_law|5_2024-02-03T13-55-45.160201.parquet' - '**/details_harness|hendrycksTest-jurisprudence|5_2024-02-03T13-55-45.160201.parquet' - '**/details_harness|hendrycksTest-logical_fallacies|5_2024-02-03T13-55-45.160201.parquet' - '**/details_harness|hendrycksTest-machine_learning|5_2024-02-03T13-55-45.160201.parquet' - '**/details_harness|hendrycksTest-management|5_2024-02-03T13-55-45.160201.parquet' - '**/details_harness|hendrycksTest-marketing|5_2024-02-03T13-55-45.160201.parquet' - '**/details_harness|hendrycksTest-medical_genetics|5_2024-02-03T13-55-45.160201.parquet' - '**/details_harness|hendrycksTest-miscellaneous|5_2024-02-03T13-55-45.160201.parquet' - '**/details_harness|hendrycksTest-moral_disputes|5_2024-02-03T13-55-45.160201.parquet' - '**/details_harness|hendrycksTest-moral_scenarios|5_2024-02-03T13-55-45.160201.parquet' - '**/details_harness|hendrycksTest-nutrition|5_2024-02-03T13-55-45.160201.parquet' - '**/details_harness|hendrycksTest-philosophy|5_2024-02-03T13-55-45.160201.parquet' - '**/details_harness|hendrycksTest-prehistory|5_2024-02-03T13-55-45.160201.parquet' - 
'**/details_harness|hendrycksTest-professional_accounting|5_2024-02-03T13-55-45.160201.parquet' - '**/details_harness|hendrycksTest-professional_law|5_2024-02-03T13-55-45.160201.parquet' - '**/details_harness|hendrycksTest-professional_medicine|5_2024-02-03T13-55-45.160201.parquet' - '**/details_harness|hendrycksTest-professional_psychology|5_2024-02-03T13-55-45.160201.parquet' - '**/details_harness|hendrycksTest-public_relations|5_2024-02-03T13-55-45.160201.parquet' - '**/details_harness|hendrycksTest-security_studies|5_2024-02-03T13-55-45.160201.parquet' - '**/details_harness|hendrycksTest-sociology|5_2024-02-03T13-55-45.160201.parquet' - '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-02-03T13-55-45.160201.parquet' - '**/details_harness|hendrycksTest-virology|5_2024-02-03T13-55-45.160201.parquet' - '**/details_harness|hendrycksTest-world_religions|5_2024-02-03T13-55-45.160201.parquet' - config_name: harness_hendrycksTest_abstract_algebra_5 data_files: - split: 2024_02_03T13_55_45.160201 path: - '**/details_harness|hendrycksTest-abstract_algebra|5_2024-02-03T13-55-45.160201.parquet' - split: latest path: - '**/details_harness|hendrycksTest-abstract_algebra|5_2024-02-03T13-55-45.160201.parquet' - config_name: harness_hendrycksTest_anatomy_5 data_files: - split: 2024_02_03T13_55_45.160201 path: - '**/details_harness|hendrycksTest-anatomy|5_2024-02-03T13-55-45.160201.parquet' - split: latest path: - '**/details_harness|hendrycksTest-anatomy|5_2024-02-03T13-55-45.160201.parquet' - config_name: harness_hendrycksTest_astronomy_5 data_files: - split: 2024_02_03T13_55_45.160201 path: - '**/details_harness|hendrycksTest-astronomy|5_2024-02-03T13-55-45.160201.parquet' - split: latest path: - '**/details_harness|hendrycksTest-astronomy|5_2024-02-03T13-55-45.160201.parquet' - config_name: harness_hendrycksTest_business_ethics_5 data_files: - split: 2024_02_03T13_55_45.160201 path: - 
'**/details_harness|hendrycksTest-business_ethics|5_2024-02-03T13-55-45.160201.parquet' - split: latest path: - '**/details_harness|hendrycksTest-business_ethics|5_2024-02-03T13-55-45.160201.parquet' - config_name: harness_hendrycksTest_clinical_knowledge_5 data_files: - split: 2024_02_03T13_55_45.160201 path: - '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-02-03T13-55-45.160201.parquet' - split: latest path: - '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-02-03T13-55-45.160201.parquet' - config_name: harness_hendrycksTest_college_biology_5 data_files: - split: 2024_02_03T13_55_45.160201 path: - '**/details_harness|hendrycksTest-college_biology|5_2024-02-03T13-55-45.160201.parquet' - split: latest path: - '**/details_harness|hendrycksTest-college_biology|5_2024-02-03T13-55-45.160201.parquet' - config_name: harness_hendrycksTest_college_chemistry_5 data_files: - split: 2024_02_03T13_55_45.160201 path: - '**/details_harness|hendrycksTest-college_chemistry|5_2024-02-03T13-55-45.160201.parquet' - split: latest path: - '**/details_harness|hendrycksTest-college_chemistry|5_2024-02-03T13-55-45.160201.parquet' - config_name: harness_hendrycksTest_college_computer_science_5 data_files: - split: 2024_02_03T13_55_45.160201 path: - '**/details_harness|hendrycksTest-college_computer_science|5_2024-02-03T13-55-45.160201.parquet' - split: latest path: - '**/details_harness|hendrycksTest-college_computer_science|5_2024-02-03T13-55-45.160201.parquet' - config_name: harness_hendrycksTest_college_mathematics_5 data_files: - split: 2024_02_03T13_55_45.160201 path: - '**/details_harness|hendrycksTest-college_mathematics|5_2024-02-03T13-55-45.160201.parquet' - split: latest path: - '**/details_harness|hendrycksTest-college_mathematics|5_2024-02-03T13-55-45.160201.parquet' - config_name: harness_hendrycksTest_college_medicine_5 data_files: - split: 2024_02_03T13_55_45.160201 path: - 
'**/details_harness|hendrycksTest-college_medicine|5_2024-02-03T13-55-45.160201.parquet' - split: latest path: - '**/details_harness|hendrycksTest-college_medicine|5_2024-02-03T13-55-45.160201.parquet' - config_name: harness_hendrycksTest_college_physics_5 data_files: - split: 2024_02_03T13_55_45.160201 path: - '**/details_harness|hendrycksTest-college_physics|5_2024-02-03T13-55-45.160201.parquet' - split: latest path: - '**/details_harness|hendrycksTest-college_physics|5_2024-02-03T13-55-45.160201.parquet' - config_name: harness_hendrycksTest_computer_security_5 data_files: - split: 2024_02_03T13_55_45.160201 path: - '**/details_harness|hendrycksTest-computer_security|5_2024-02-03T13-55-45.160201.parquet' - split: latest path: - '**/details_harness|hendrycksTest-computer_security|5_2024-02-03T13-55-45.160201.parquet' - config_name: harness_hendrycksTest_conceptual_physics_5 data_files: - split: 2024_02_03T13_55_45.160201 path: - '**/details_harness|hendrycksTest-conceptual_physics|5_2024-02-03T13-55-45.160201.parquet' - split: latest path: - '**/details_harness|hendrycksTest-conceptual_physics|5_2024-02-03T13-55-45.160201.parquet' - config_name: harness_hendrycksTest_econometrics_5 data_files: - split: 2024_02_03T13_55_45.160201 path: - '**/details_harness|hendrycksTest-econometrics|5_2024-02-03T13-55-45.160201.parquet' - split: latest path: - '**/details_harness|hendrycksTest-econometrics|5_2024-02-03T13-55-45.160201.parquet' - config_name: harness_hendrycksTest_electrical_engineering_5 data_files: - split: 2024_02_03T13_55_45.160201 path: - '**/details_harness|hendrycksTest-electrical_engineering|5_2024-02-03T13-55-45.160201.parquet' - split: latest path: - '**/details_harness|hendrycksTest-electrical_engineering|5_2024-02-03T13-55-45.160201.parquet' - config_name: harness_hendrycksTest_elementary_mathematics_5 data_files: - split: 2024_02_03T13_55_45.160201 path: - '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-02-03T13-55-45.160201.parquet' - 
split: latest path: - '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-02-03T13-55-45.160201.parquet' - config_name: harness_hendrycksTest_formal_logic_5 data_files: - split: 2024_02_03T13_55_45.160201 path: - '**/details_harness|hendrycksTest-formal_logic|5_2024-02-03T13-55-45.160201.parquet' - split: latest path: - '**/details_harness|hendrycksTest-formal_logic|5_2024-02-03T13-55-45.160201.parquet' - config_name: harness_hendrycksTest_global_facts_5 data_files: - split: 2024_02_03T13_55_45.160201 path: - '**/details_harness|hendrycksTest-global_facts|5_2024-02-03T13-55-45.160201.parquet' - split: latest path: - '**/details_harness|hendrycksTest-global_facts|5_2024-02-03T13-55-45.160201.parquet' - config_name: harness_hendrycksTest_high_school_biology_5 data_files: - split: 2024_02_03T13_55_45.160201 path: - '**/details_harness|hendrycksTest-high_school_biology|5_2024-02-03T13-55-45.160201.parquet' - split: latest path: - '**/details_harness|hendrycksTest-high_school_biology|5_2024-02-03T13-55-45.160201.parquet' - config_name: harness_hendrycksTest_high_school_chemistry_5 data_files: - split: 2024_02_03T13_55_45.160201 path: - '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-02-03T13-55-45.160201.parquet' - split: latest path: - '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-02-03T13-55-45.160201.parquet' - config_name: harness_hendrycksTest_high_school_computer_science_5 data_files: - split: 2024_02_03T13_55_45.160201 path: - '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-02-03T13-55-45.160201.parquet' - split: latest path: - '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-02-03T13-55-45.160201.parquet' - config_name: harness_hendrycksTest_high_school_european_history_5 data_files: - split: 2024_02_03T13_55_45.160201 path: - '**/details_harness|hendrycksTest-high_school_european_history|5_2024-02-03T13-55-45.160201.parquet' - split: latest path: - 
'**/details_harness|hendrycksTest-high_school_european_history|5_2024-02-03T13-55-45.160201.parquet' - config_name: harness_hendrycksTest_high_school_geography_5 data_files: - split: 2024_02_03T13_55_45.160201 path: - '**/details_harness|hendrycksTest-high_school_geography|5_2024-02-03T13-55-45.160201.parquet' - split: latest path: - '**/details_harness|hendrycksTest-high_school_geography|5_2024-02-03T13-55-45.160201.parquet' - config_name: harness_hendrycksTest_high_school_government_and_politics_5 data_files: - split: 2024_02_03T13_55_45.160201 path: - '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-02-03T13-55-45.160201.parquet' - split: latest path: - '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-02-03T13-55-45.160201.parquet' - config_name: harness_hendrycksTest_high_school_macroeconomics_5 data_files: - split: 2024_02_03T13_55_45.160201 path: - '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-02-03T13-55-45.160201.parquet' - split: latest path: - '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-02-03T13-55-45.160201.parquet' - config_name: harness_hendrycksTest_high_school_mathematics_5 data_files: - split: 2024_02_03T13_55_45.160201 path: - '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-02-03T13-55-45.160201.parquet' - split: latest path: - '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-02-03T13-55-45.160201.parquet' - config_name: harness_hendrycksTest_high_school_microeconomics_5 data_files: - split: 2024_02_03T13_55_45.160201 path: - '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-02-03T13-55-45.160201.parquet' - split: latest path: - '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-02-03T13-55-45.160201.parquet' - config_name: harness_hendrycksTest_high_school_physics_5 data_files: - split: 2024_02_03T13_55_45.160201 path: - 
'**/details_harness|hendrycksTest-high_school_physics|5_2024-02-03T13-55-45.160201.parquet' - split: latest path: - '**/details_harness|hendrycksTest-high_school_physics|5_2024-02-03T13-55-45.160201.parquet' - config_name: harness_hendrycksTest_high_school_psychology_5 data_files: - split: 2024_02_03T13_55_45.160201 path: - '**/details_harness|hendrycksTest-high_school_psychology|5_2024-02-03T13-55-45.160201.parquet' - split: latest path: - '**/details_harness|hendrycksTest-high_school_psychology|5_2024-02-03T13-55-45.160201.parquet' - config_name: harness_hendrycksTest_high_school_statistics_5 data_files: - split: 2024_02_03T13_55_45.160201 path: - '**/details_harness|hendrycksTest-high_school_statistics|5_2024-02-03T13-55-45.160201.parquet' - split: latest path: - '**/details_harness|hendrycksTest-high_school_statistics|5_2024-02-03T13-55-45.160201.parquet' - config_name: harness_hendrycksTest_high_school_us_history_5 data_files: - split: 2024_02_03T13_55_45.160201 path: - '**/details_harness|hendrycksTest-high_school_us_history|5_2024-02-03T13-55-45.160201.parquet' - split: latest path: - '**/details_harness|hendrycksTest-high_school_us_history|5_2024-02-03T13-55-45.160201.parquet' - config_name: harness_hendrycksTest_high_school_world_history_5 data_files: - split: 2024_02_03T13_55_45.160201 path: - '**/details_harness|hendrycksTest-high_school_world_history|5_2024-02-03T13-55-45.160201.parquet' - split: latest path: - '**/details_harness|hendrycksTest-high_school_world_history|5_2024-02-03T13-55-45.160201.parquet' - config_name: harness_hendrycksTest_human_aging_5 data_files: - split: 2024_02_03T13_55_45.160201 path: - '**/details_harness|hendrycksTest-human_aging|5_2024-02-03T13-55-45.160201.parquet' - split: latest path: - '**/details_harness|hendrycksTest-human_aging|5_2024-02-03T13-55-45.160201.parquet' - config_name: harness_hendrycksTest_human_sexuality_5 data_files: - split: 2024_02_03T13_55_45.160201 path: - 
'**/details_harness|hendrycksTest-human_sexuality|5_2024-02-03T13-55-45.160201.parquet' - split: latest path: - '**/details_harness|hendrycksTest-human_sexuality|5_2024-02-03T13-55-45.160201.parquet' - config_name: harness_hendrycksTest_international_law_5 data_files: - split: 2024_02_03T13_55_45.160201 path: - '**/details_harness|hendrycksTest-international_law|5_2024-02-03T13-55-45.160201.parquet' - split: latest path: - '**/details_harness|hendrycksTest-international_law|5_2024-02-03T13-55-45.160201.parquet' - config_name: harness_hendrycksTest_jurisprudence_5 data_files: - split: 2024_02_03T13_55_45.160201 path: - '**/details_harness|hendrycksTest-jurisprudence|5_2024-02-03T13-55-45.160201.parquet' - split: latest path: - '**/details_harness|hendrycksTest-jurisprudence|5_2024-02-03T13-55-45.160201.parquet' - config_name: harness_hendrycksTest_logical_fallacies_5 data_files: - split: 2024_02_03T13_55_45.160201 path: - '**/details_harness|hendrycksTest-logical_fallacies|5_2024-02-03T13-55-45.160201.parquet' - split: latest path: - '**/details_harness|hendrycksTest-logical_fallacies|5_2024-02-03T13-55-45.160201.parquet' - config_name: harness_hendrycksTest_machine_learning_5 data_files: - split: 2024_02_03T13_55_45.160201 path: - '**/details_harness|hendrycksTest-machine_learning|5_2024-02-03T13-55-45.160201.parquet' - split: latest path: - '**/details_harness|hendrycksTest-machine_learning|5_2024-02-03T13-55-45.160201.parquet' - config_name: harness_hendrycksTest_management_5 data_files: - split: 2024_02_03T13_55_45.160201 path: - '**/details_harness|hendrycksTest-management|5_2024-02-03T13-55-45.160201.parquet' - split: latest path: - '**/details_harness|hendrycksTest-management|5_2024-02-03T13-55-45.160201.parquet' - config_name: harness_hendrycksTest_marketing_5 data_files: - split: 2024_02_03T13_55_45.160201 path: - '**/details_harness|hendrycksTest-marketing|5_2024-02-03T13-55-45.160201.parquet' - split: latest path: - 
'**/details_harness|hendrycksTest-marketing|5_2024-02-03T13-55-45.160201.parquet' - config_name: harness_hendrycksTest_medical_genetics_5 data_files: - split: 2024_02_03T13_55_45.160201 path: - '**/details_harness|hendrycksTest-medical_genetics|5_2024-02-03T13-55-45.160201.parquet' - split: latest path: - '**/details_harness|hendrycksTest-medical_genetics|5_2024-02-03T13-55-45.160201.parquet' - config_name: harness_hendrycksTest_miscellaneous_5 data_files: - split: 2024_02_03T13_55_45.160201 path: - '**/details_harness|hendrycksTest-miscellaneous|5_2024-02-03T13-55-45.160201.parquet' - split: latest path: - '**/details_harness|hendrycksTest-miscellaneous|5_2024-02-03T13-55-45.160201.parquet' - config_name: harness_hendrycksTest_moral_disputes_5 data_files: - split: 2024_02_03T13_55_45.160201 path: - '**/details_harness|hendrycksTest-moral_disputes|5_2024-02-03T13-55-45.160201.parquet' - split: latest path: - '**/details_harness|hendrycksTest-moral_disputes|5_2024-02-03T13-55-45.160201.parquet' - config_name: harness_hendrycksTest_moral_scenarios_5 data_files: - split: 2024_02_03T13_55_45.160201 path: - '**/details_harness|hendrycksTest-moral_scenarios|5_2024-02-03T13-55-45.160201.parquet' - split: latest path: - '**/details_harness|hendrycksTest-moral_scenarios|5_2024-02-03T13-55-45.160201.parquet' - config_name: harness_hendrycksTest_nutrition_5 data_files: - split: 2024_02_03T13_55_45.160201 path: - '**/details_harness|hendrycksTest-nutrition|5_2024-02-03T13-55-45.160201.parquet' - split: latest path: - '**/details_harness|hendrycksTest-nutrition|5_2024-02-03T13-55-45.160201.parquet' - config_name: harness_hendrycksTest_philosophy_5 data_files: - split: 2024_02_03T13_55_45.160201 path: - '**/details_harness|hendrycksTest-philosophy|5_2024-02-03T13-55-45.160201.parquet' - split: latest path: - '**/details_harness|hendrycksTest-philosophy|5_2024-02-03T13-55-45.160201.parquet' - config_name: harness_hendrycksTest_prehistory_5 data_files: - split: 
2024_02_03T13_55_45.160201 path: - '**/details_harness|hendrycksTest-prehistory|5_2024-02-03T13-55-45.160201.parquet' - split: latest path: - '**/details_harness|hendrycksTest-prehistory|5_2024-02-03T13-55-45.160201.parquet' - config_name: harness_hendrycksTest_professional_accounting_5 data_files: - split: 2024_02_03T13_55_45.160201 path: - '**/details_harness|hendrycksTest-professional_accounting|5_2024-02-03T13-55-45.160201.parquet' - split: latest path: - '**/details_harness|hendrycksTest-professional_accounting|5_2024-02-03T13-55-45.160201.parquet' - config_name: harness_hendrycksTest_professional_law_5 data_files: - split: 2024_02_03T13_55_45.160201 path: - '**/details_harness|hendrycksTest-professional_law|5_2024-02-03T13-55-45.160201.parquet' - split: latest path: - '**/details_harness|hendrycksTest-professional_law|5_2024-02-03T13-55-45.160201.parquet' - config_name: harness_hendrycksTest_professional_medicine_5 data_files: - split: 2024_02_03T13_55_45.160201 path: - '**/details_harness|hendrycksTest-professional_medicine|5_2024-02-03T13-55-45.160201.parquet' - split: latest path: - '**/details_harness|hendrycksTest-professional_medicine|5_2024-02-03T13-55-45.160201.parquet' - config_name: harness_hendrycksTest_professional_psychology_5 data_files: - split: 2024_02_03T13_55_45.160201 path: - '**/details_harness|hendrycksTest-professional_psychology|5_2024-02-03T13-55-45.160201.parquet' - split: latest path: - '**/details_harness|hendrycksTest-professional_psychology|5_2024-02-03T13-55-45.160201.parquet' - config_name: harness_hendrycksTest_public_relations_5 data_files: - split: 2024_02_03T13_55_45.160201 path: - '**/details_harness|hendrycksTest-public_relations|5_2024-02-03T13-55-45.160201.parquet' - split: latest path: - '**/details_harness|hendrycksTest-public_relations|5_2024-02-03T13-55-45.160201.parquet' - config_name: harness_hendrycksTest_security_studies_5 data_files: - split: 2024_02_03T13_55_45.160201 path: - 
'**/details_harness|hendrycksTest-security_studies|5_2024-02-03T13-55-45.160201.parquet' - split: latest path: - '**/details_harness|hendrycksTest-security_studies|5_2024-02-03T13-55-45.160201.parquet' - config_name: harness_hendrycksTest_sociology_5 data_files: - split: 2024_02_03T13_55_45.160201 path: - '**/details_harness|hendrycksTest-sociology|5_2024-02-03T13-55-45.160201.parquet' - split: latest path: - '**/details_harness|hendrycksTest-sociology|5_2024-02-03T13-55-45.160201.parquet' - config_name: harness_hendrycksTest_us_foreign_policy_5 data_files: - split: 2024_02_03T13_55_45.160201 path: - '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-02-03T13-55-45.160201.parquet' - split: latest path: - '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-02-03T13-55-45.160201.parquet' - config_name: harness_hendrycksTest_virology_5 data_files: - split: 2024_02_03T13_55_45.160201 path: - '**/details_harness|hendrycksTest-virology|5_2024-02-03T13-55-45.160201.parquet' - split: latest path: - '**/details_harness|hendrycksTest-virology|5_2024-02-03T13-55-45.160201.parquet' - config_name: harness_hendrycksTest_world_religions_5 data_files: - split: 2024_02_03T13_55_45.160201 path: - '**/details_harness|hendrycksTest-world_religions|5_2024-02-03T13-55-45.160201.parquet' - split: latest path: - '**/details_harness|hendrycksTest-world_religions|5_2024-02-03T13-55-45.160201.parquet' - config_name: harness_truthfulqa_mc_0 data_files: - split: 2024_02_03T13_55_45.160201 path: - '**/details_harness|truthfulqa:mc|0_2024-02-03T13-55-45.160201.parquet' - split: latest path: - '**/details_harness|truthfulqa:mc|0_2024-02-03T13-55-45.160201.parquet' - config_name: harness_winogrande_5 data_files: - split: 2024_02_03T13_55_45.160201 path: - '**/details_harness|winogrande|5_2024-02-03T13-55-45.160201.parquet' - split: latest path: - '**/details_harness|winogrande|5_2024-02-03T13-55-45.160201.parquet' - config_name: results data_files: - split: 
2024_02_03T13_55_45.160201 path: - results_2024-02-03T13-55-45.160201.parquet - split: latest path: - results_2024-02-03T13-55-45.160201.parquet
---

# Dataset Card for Evaluation run of NeverSleep/MiquMaid-v1-70B

Dataset automatically created during the evaluation run of model [NeverSleep/MiquMaid-v1-70B](https://huggingface.co/NeverSleep/MiquMaid-v1-70B) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).

The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.

The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "latest" split always points to the latest results.

An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).

To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_NeverSleep__MiquMaid-v1-70B",
	"harness_winogrande_5",
	split="latest")
```

## Latest results

These are the [latest results from run 2024-02-03T13:55:45.160201](https://huggingface.co/datasets/open-llm-leaderboard/details_NeverSleep__MiquMaid-v1-70B/blob/main/results_2024-02-03T13-55-45.160201.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks.
You find each in the results and the "latest" split for each eval): ```python { "all": { "acc": 0.747644956445168, "acc_stderr": 0.028643244664619514, "acc_norm": 0.7506821937250182, "acc_norm_stderr": 0.029197302210032687, "mc1": 0.45165238678090575, "mc1_stderr": 0.017421480300277643, "mc2": 0.6178734796953549, "mc2_stderr": 0.014878715039713398 }, "harness|arc:challenge|25": { "acc": 0.674061433447099, "acc_stderr": 0.013697432466693244, "acc_norm": 0.7167235494880546, "acc_norm_stderr": 0.013167478735134576 }, "harness|hellaswag|10": { "acc": 0.694582752439753, "acc_stderr": 0.004596426220000888, "acc_norm": 0.8796056562437762, "acc_norm_stderr": 0.003247570330456916 }, "harness|hendrycksTest-abstract_algebra|5": { "acc": 0.39, "acc_stderr": 0.04902071300001975, "acc_norm": 0.39, "acc_norm_stderr": 0.04902071300001975 }, "harness|hendrycksTest-anatomy|5": { "acc": 0.6962962962962963, "acc_stderr": 0.03972552884785137, "acc_norm": 0.6962962962962963, "acc_norm_stderr": 0.03972552884785137 }, "harness|hendrycksTest-astronomy|5": { "acc": 0.8486842105263158, "acc_stderr": 0.029162631596843982, "acc_norm": 0.8486842105263158, "acc_norm_stderr": 0.029162631596843982 }, "harness|hendrycksTest-business_ethics|5": { "acc": 0.78, "acc_stderr": 0.04163331998932261, "acc_norm": 0.78, "acc_norm_stderr": 0.04163331998932261 }, "harness|hendrycksTest-clinical_knowledge|5": { "acc": 0.7849056603773585, "acc_stderr": 0.025288394502891366, "acc_norm": 0.7849056603773585, "acc_norm_stderr": 0.025288394502891366 }, "harness|hendrycksTest-college_biology|5": { "acc": 0.8958333333333334, "acc_stderr": 0.02554523921025691, "acc_norm": 0.8958333333333334, "acc_norm_stderr": 0.02554523921025691 }, "harness|hendrycksTest-college_chemistry|5": { "acc": 0.54, "acc_stderr": 0.05009082659620332, "acc_norm": 0.54, "acc_norm_stderr": 0.05009082659620332 }, "harness|hendrycksTest-college_computer_science|5": { "acc": 0.63, "acc_stderr": 0.04852365870939099, "acc_norm": 0.63, 
"acc_norm_stderr": 0.04852365870939099 }, "harness|hendrycksTest-college_mathematics|5": { "acc": 0.42, "acc_stderr": 0.04960449637488584, "acc_norm": 0.42, "acc_norm_stderr": 0.04960449637488584 }, "harness|hendrycksTest-college_medicine|5": { "acc": 0.7398843930635838, "acc_stderr": 0.033450369167889904, "acc_norm": 0.7398843930635838, "acc_norm_stderr": 0.033450369167889904 }, "harness|hendrycksTest-college_physics|5": { "acc": 0.45098039215686275, "acc_stderr": 0.049512182523962625, "acc_norm": 0.45098039215686275, "acc_norm_stderr": 0.049512182523962625 }, "harness|hendrycksTest-computer_security|5": { "acc": 0.83, "acc_stderr": 0.03775251680686371, "acc_norm": 0.83, "acc_norm_stderr": 0.03775251680686371 }, "harness|hendrycksTest-conceptual_physics|5": { "acc": 0.7531914893617021, "acc_stderr": 0.02818544130123409, "acc_norm": 0.7531914893617021, "acc_norm_stderr": 0.02818544130123409 }, "harness|hendrycksTest-econometrics|5": { "acc": 0.5964912280701754, "acc_stderr": 0.04615186962583707, "acc_norm": 0.5964912280701754, "acc_norm_stderr": 0.04615186962583707 }, "harness|hendrycksTest-electrical_engineering|5": { "acc": 0.7241379310344828, "acc_stderr": 0.03724563619774632, "acc_norm": 0.7241379310344828, "acc_norm_stderr": 0.03724563619774632 }, "harness|hendrycksTest-elementary_mathematics|5": { "acc": 0.5396825396825397, "acc_stderr": 0.025670080636909308, "acc_norm": 0.5396825396825397, "acc_norm_stderr": 0.025670080636909308 }, "harness|hendrycksTest-formal_logic|5": { "acc": 0.5634920634920635, "acc_stderr": 0.04435932892851466, "acc_norm": 0.5634920634920635, "acc_norm_stderr": 0.04435932892851466 }, "harness|hendrycksTest-global_facts|5": { "acc": 0.5, "acc_stderr": 0.050251890762960605, "acc_norm": 0.5, "acc_norm_stderr": 0.050251890762960605 }, "harness|hendrycksTest-high_school_biology|5": { "acc": 0.867741935483871, "acc_stderr": 0.019272015434846475, "acc_norm": 0.867741935483871, "acc_norm_stderr": 0.019272015434846475 }, 
"harness|hendrycksTest-high_school_chemistry|5": { "acc": 0.6305418719211823, "acc_stderr": 0.033959703819985754, "acc_norm": 0.6305418719211823, "acc_norm_stderr": 0.033959703819985754 }, "harness|hendrycksTest-high_school_computer_science|5": { "acc": 0.84, "acc_stderr": 0.03684529491774708, "acc_norm": 0.84, "acc_norm_stderr": 0.03684529491774708 }, "harness|hendrycksTest-high_school_european_history|5": { "acc": 0.8424242424242424, "acc_stderr": 0.028450388805284332, "acc_norm": 0.8424242424242424, "acc_norm_stderr": 0.028450388805284332 }, "harness|hendrycksTest-high_school_geography|5": { "acc": 0.9141414141414141, "acc_stderr": 0.019960225563172885, "acc_norm": 0.9141414141414141, "acc_norm_stderr": 0.019960225563172885 }, "harness|hendrycksTest-high_school_government_and_politics|5": { "acc": 0.9430051813471503, "acc_stderr": 0.01673108529360756, "acc_norm": 0.9430051813471503, "acc_norm_stderr": 0.01673108529360756 }, "harness|hendrycksTest-high_school_macroeconomics|5": { "acc": 0.7974358974358975, "acc_stderr": 0.020377660970371372, "acc_norm": 0.7974358974358975, "acc_norm_stderr": 0.020377660970371372 }, "harness|hendrycksTest-high_school_mathematics|5": { "acc": 0.4, "acc_stderr": 0.029869605095316904, "acc_norm": 0.4, "acc_norm_stderr": 0.029869605095316904 }, "harness|hendrycksTest-high_school_microeconomics|5": { "acc": 0.8739495798319328, "acc_stderr": 0.02155962312121392, "acc_norm": 0.8739495798319328, "acc_norm_stderr": 0.02155962312121392 }, "harness|hendrycksTest-high_school_physics|5": { "acc": 0.4900662251655629, "acc_stderr": 0.04081677107248437, "acc_norm": 0.4900662251655629, "acc_norm_stderr": 0.04081677107248437 }, "harness|hendrycksTest-high_school_psychology|5": { "acc": 0.9174311926605505, "acc_stderr": 0.011800361363016567, "acc_norm": 0.9174311926605505, "acc_norm_stderr": 0.011800361363016567 }, "harness|hendrycksTest-high_school_statistics|5": { "acc": 0.6898148148148148, "acc_stderr": 0.031546962856566295, "acc_norm": 
0.6898148148148148, "acc_norm_stderr": 0.031546962856566295 }, "harness|hendrycksTest-high_school_us_history|5": { "acc": 0.9215686274509803, "acc_stderr": 0.018869514646658928, "acc_norm": 0.9215686274509803, "acc_norm_stderr": 0.018869514646658928 }, "harness|hendrycksTest-high_school_world_history|5": { "acc": 0.9071729957805907, "acc_stderr": 0.01888975055095671, "acc_norm": 0.9071729957805907, "acc_norm_stderr": 0.01888975055095671 }, "harness|hendrycksTest-human_aging|5": { "acc": 0.8116591928251121, "acc_stderr": 0.026241132996407256, "acc_norm": 0.8116591928251121, "acc_norm_stderr": 0.026241132996407256 }, "harness|hendrycksTest-human_sexuality|5": { "acc": 0.8549618320610687, "acc_stderr": 0.030884661089515375, "acc_norm": 0.8549618320610687, "acc_norm_stderr": 0.030884661089515375 }, "harness|hendrycksTest-international_law|5": { "acc": 0.9008264462809917, "acc_stderr": 0.02728524631275895, "acc_norm": 0.9008264462809917, "acc_norm_stderr": 0.02728524631275895 }, "harness|hendrycksTest-jurisprudence|5": { "acc": 0.9074074074074074, "acc_stderr": 0.02802188803860944, "acc_norm": 0.9074074074074074, "acc_norm_stderr": 0.02802188803860944 }, "harness|hendrycksTest-logical_fallacies|5": { "acc": 0.8159509202453987, "acc_stderr": 0.030446777687971723, "acc_norm": 0.8159509202453987, "acc_norm_stderr": 0.030446777687971723 }, "harness|hendrycksTest-machine_learning|5": { "acc": 0.6428571428571429, "acc_stderr": 0.04547960999764376, "acc_norm": 0.6428571428571429, "acc_norm_stderr": 0.04547960999764376 }, "harness|hendrycksTest-management|5": { "acc": 0.8737864077669902, "acc_stderr": 0.03288180278808629, "acc_norm": 0.8737864077669902, "acc_norm_stderr": 0.03288180278808629 }, "harness|hendrycksTest-marketing|5": { "acc": 0.9316239316239316, "acc_stderr": 0.01653462768431136, "acc_norm": 0.9316239316239316, "acc_norm_stderr": 0.01653462768431136 }, "harness|hendrycksTest-medical_genetics|5": { "acc": 0.79, "acc_stderr": 0.040936018074033256, "acc_norm": 0.79, 
"acc_norm_stderr": 0.040936018074033256 }, "harness|hendrycksTest-miscellaneous|5": { "acc": 0.89272030651341, "acc_stderr": 0.01106657144950843, "acc_norm": 0.89272030651341, "acc_norm_stderr": 0.01106657144950843 }, "harness|hendrycksTest-moral_disputes|5": { "acc": 0.8208092485549133, "acc_stderr": 0.020647590029679332, "acc_norm": 0.8208092485549133, "acc_norm_stderr": 0.020647590029679332 }, "harness|hendrycksTest-moral_scenarios|5": { "acc": 0.6290502793296089, "acc_stderr": 0.016155910721341777, "acc_norm": 0.6290502793296089, "acc_norm_stderr": 0.016155910721341777 }, "harness|hendrycksTest-nutrition|5": { "acc": 0.8235294117647058, "acc_stderr": 0.021828596053108416, "acc_norm": 0.8235294117647058, "acc_norm_stderr": 0.021828596053108416 }, "harness|hendrycksTest-philosophy|5": { "acc": 0.8263665594855305, "acc_stderr": 0.02151405158597041, "acc_norm": 0.8263665594855305, "acc_norm_stderr": 0.02151405158597041 }, "harness|hendrycksTest-prehistory|5": { "acc": 0.8487654320987654, "acc_stderr": 0.019935086092149893, "acc_norm": 0.8487654320987654, "acc_norm_stderr": 0.019935086092149893 }, "harness|hendrycksTest-professional_accounting|5": { "acc": 0.5673758865248227, "acc_stderr": 0.02955545423677884, "acc_norm": 0.5673758865248227, "acc_norm_stderr": 0.02955545423677884 }, "harness|hendrycksTest-professional_law|5": { "acc": 0.5814863102998696, "acc_stderr": 0.012599505608336482, "acc_norm": 0.5814863102998696, "acc_norm_stderr": 0.012599505608336482 }, "harness|hendrycksTest-professional_medicine|5": { "acc": 0.8088235294117647, "acc_stderr": 0.023886881922440335, "acc_norm": 0.8088235294117647, "acc_norm_stderr": 0.023886881922440335 }, "harness|hendrycksTest-professional_psychology|5": { "acc": 0.8137254901960784, "acc_stderr": 0.015750526284363346, "acc_norm": 0.8137254901960784, "acc_norm_stderr": 0.015750526284363346 }, "harness|hendrycksTest-public_relations|5": { "acc": 0.7, "acc_stderr": 0.04389311454644287, "acc_norm": 0.7, "acc_norm_stderr": 
0.04389311454644287 }, "harness|hendrycksTest-security_studies|5": { "acc": 0.8244897959183674, "acc_stderr": 0.024352800722970015, "acc_norm": 0.8244897959183674, "acc_norm_stderr": 0.024352800722970015 }, "harness|hendrycksTest-sociology|5": { "acc": 0.9253731343283582, "acc_stderr": 0.018581939698490618, "acc_norm": 0.9253731343283582, "acc_norm_stderr": 0.018581939698490618 }, "harness|hendrycksTest-us_foreign_policy|5": { "acc": 0.94, "acc_stderr": 0.02386832565759418, "acc_norm": 0.94, "acc_norm_stderr": 0.02386832565759418 }, "harness|hendrycksTest-virology|5": { "acc": 0.5662650602409639, "acc_stderr": 0.03858158940685515, "acc_norm": 0.5662650602409639, "acc_norm_stderr": 0.03858158940685515 }, "harness|hendrycksTest-world_religions|5": { "acc": 0.8771929824561403, "acc_stderr": 0.02517298435015577, "acc_norm": 0.8771929824561403, "acc_norm_stderr": 0.02517298435015577 }, "harness|truthfulqa:mc|0": { "mc1": 0.45165238678090575, "mc1_stderr": 0.017421480300277643, "mc2": 0.6178734796953549, "mc2_stderr": 0.014878715039713398 }, "harness|winogrande|5": { "acc": 0.850828729281768, "acc_stderr": 0.010012598805627328 }, "harness|gsm8k|5": { "acc": 0.6929492039423806, "acc_stderr": 0.012705685723131719 } } ``` ## Dataset Details ### Dataset Description <!-- Provide a longer summary of what this dataset is. --> - **Curated by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] ### Dataset Sources [optional] <!-- Provide the basic links for the dataset. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the dataset is intended to be used. --> ### Direct Use <!-- This section describes suitable use cases for the dataset. 
--> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. --> [More Information Needed] ## Dataset Structure <!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. --> [More Information Needed] ## Dataset Creation ### Curation Rationale <!-- Motivation for the creation of this dataset. --> [More Information Needed] ### Source Data <!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). --> #### Data Collection and Processing <!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. --> [More Information Needed] #### Who are the source data producers? <!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. --> [More Information Needed] ### Annotations [optional] <!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. --> #### Annotation process <!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. --> [More Information Needed] #### Who are the annotators? <!-- This section describes the people or systems who created the annotations. 
--> [More Information Needed] #### Personal and Sensitive Information <!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations. ## Citation [optional] <!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Dataset Card Authors [optional] [More Information Needed] ## Dataset Card Contact [More Information Needed]
sanskrit_classic
--- annotations_creators: - no-annotation language_creators: - found language: - sa license: - other multilinguality: - monolingual size_categories: - 100K<n<1M source_datasets: - original task_categories: - text-generation - fill-mask task_ids: - language-modeling - masked-language-modeling paperswithcode_id: null pretty_name: SanskritClassic dataset_info: features: - name: text dtype: string config_name: combined splits: - name: train num_bytes: 40299787 num_examples: 342033 download_size: 7258904 dataset_size: 40299787 --- # Dataset Card for [Dataset Name] ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:**[sanskrit_classic](https://github.com/parmarsuraj99/hf_datasets/tree/master/sanskrit_classic) - **Repository:**[GitHub](https://github.com/parmarsuraj99/hf_datasets/tree/master/sanskrit_classic) - **Paper:**N/A - **Leaderboard:**N/A - **Point of Contact:**[parmarsuraj99](parmarsuraj99@gmail.com) ### Dataset Summary A collection of classical sanskrit texts ### Supported Tasks 
and Leaderboards Language modeling ### Languages Sanskrit ## Dataset Structure ### Data Instances {'text': 'मा कर्मफलहेतुर्भूर्मा ते सङ्गोऽस्त्वकर्मणि॥'} ### Data Fields `text`: a line ### Data Splits | | Train | |-------------------|--------| | n_instances | 342033 | ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information ``` @Misc{johnsonetal2014, author = {Johnson, Kyle P. and Patrick Burns and John Stewart and Todd Cook}, title = {CLTK: The Classical Language Toolkit}, url = {https://github.com/cltk/cltk}, year = {2014--2020}, } ``` ### Contributions Thanks to [@parmarsuraj99](https://github.com/parmarsuraj99) for adding this dataset.
LambdaTests/VQAv2Validation_ViT_H_14_A_T_C_Q_benchmarks_partition_global_26_1000
--- dataset_info: features: - name: id dtype: int64 - name: response dtype: string splits: - name: train num_bytes: 982 num_examples: 32 download_size: 2050 dataset_size: 982 --- # Dataset Card for "VQAv2Validation_ViT_H_14_A_T_C_Q_benchmarks_partition_global_26_1000" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
bigscience-data/roots_es_the_pile_europarl
--- language: es license: mit extra_gated_prompt: 'By accessing this dataset, you agree to abide by the BigScience Ethical Charter. The charter can be found at: https://hf.co/spaces/bigscience/ethical-charter' extra_gated_fields: I have read and agree to abide by the BigScience Ethical Charter: checkbox --- ROOTS Subset: roots_es_the_pile_europarl # the_pile_europarl - Dataset uid: `the_pile_europarl` ### Description ### Homepage ### Licensing ### Speaker Locations ### Sizes - 0.1278 % of total - 0.4112 % of fr - 1.5555 % of pt - 0.7511 % of es - 0.1503 % of en ### BigScience processing steps #### Filters applied to: fr - dedup_document - filter_remove_empty_docs - filter_small_docs_bytes_1024 #### Filters applied to: pt - dedup_document - filter_remove_empty_docs - filter_small_docs_bytes_300 #### Filters applied to: es - dedup_document - filter_remove_empty_docs - filter_small_docs_bytes_1024 #### Filters applied to: en - dedup_document - filter_remove_empty_docs - filter_small_docs_bytes_1024
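The BigScience filter names above are descriptive; as an illustration, here is a minimal sketch of what a byte-length filter such as `filter_small_docs_bytes_1024` might do. The actual implementation lives in the BigScience data-preparation pipeline, so the details below are an assumption:

```python
# Hypothetical sketch of a "filter_small_docs_bytes_1024"-style step:
# keep only documents whose UTF-8 encoding is at least `min_bytes` long.
# The real BigScience implementation may differ.
def keep_document(text: str, min_bytes: int = 1024) -> bool:
    return len(text.encode("utf-8")) >= min_bytes

docs = ["too short", "x" * 2048]
# Only the 2048-byte document survives the filter.
filtered = [d for d in docs if keep_document(d)]
```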
japanese-asr/whisper_transcriptions.reazonspeech.all_48
--- dataset_info: config_name: all features: - name: name dtype: string - name: audio dtype: audio: sampling_rate: 16000 - name: transcription dtype: string - name: whisper_transcript sequence: int64 splits: - name: train num_bytes: 30368627341.0 num_examples: 266701 download_size: 30128164269 dataset_size: 30368627341.0 configs: - config_name: all data_files: - split: train path: all/train-* ---
Doraemon-AI/text-to-neo4j-cypher-chinese
---
license: afl-3.0
task_categories:
- text2text-generation
language:
- zh
tags:
- code
---

## Motivation

With the ever-growing volume of information and ongoing advances in technology, our society is gradually forming a vast and complex network. The big-data era has brought more and more semi-structured and unstructured data formats, which traditional relational databases struggle to handle effectively. Graph databases can store and query such data more flexibly, and Neo4j is one of the most popular among them. Neo4j's query language, Cypher, enables efficient queries over graphs, but its complex operations and syntax impose a steep learning curve on users. We therefore propose and define Text-to-Neo4j-Cypher, a new task analogous to Text-to-SQL.

Text-to-Neo4j-Cypher is a semantic parsing task that converts a user's natural-language query into a Neo4j Cypher query, lowering the cost of learning and using graph databases and improving how users interact with them.

## Highlights

1. We propose and formally define the Text-to-Neo4j-Cypher task, which automatically converts natural-language user queries into Neo4j Cypher queries, reducing the learning and usage cost of interacting with graph databases.
2. We adapted the data from the reference below to suit LLM training.
3. We corrected non-standard data found in the reference.

## Reference

A Semantic Parsing Dataset for Converting Natural Language into Cypher
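To illustrate the Text-to-Neo4j-Cypher task described above, here is a minimal sketch of turning one natural-language/Cypher pair into a training prompt. The record layout and field names below are invented for illustration and may not match the dataset's actual schema:

```python
# Hypothetical record and prompt builder for the Text-to-Neo4j-Cypher task;
# the field names ("question", "cypher") and the schema string are assumptions.
def build_prompt(question: str, schema: str) -> str:
    return (
        "Given the following Neo4j graph schema:\n"
        f"{schema}\n"
        f"Translate this question into a Cypher query: {question}"
    )

sample = {
    "question": "Which movies did Tom Hanks act in?",
    "cypher": "MATCH (p:Person {name: 'Tom Hanks'})-[:ACTED_IN]->(m:Movie) RETURN m.title",
}
prompt = build_prompt(sample["question"], "(:Person)-[:ACTED_IN]->(:Movie)")
```

The target side of the pair (`sample["cypher"]`) would then serve as the expected model output during fine-tuning.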
ThanaritKanjanametawatAU/Machine-Generated-Text-Detection-Dataset
--- license: apache-2.0 ---
gokaygokay/prompt_description_stable_diffusion_3k
---
dataset_info:
  features:
  - name: prompts
    dtype: string
  - name: descriptions
    dtype: string
  splits:
  - name: train
    num_bytes: 2224096
    num_examples: 3000
  download_size: 1088664
  dataset_size: 2224096
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
task_categories:
- text-generation
- text2text-generation
language:
- en
tags:
- art
size_categories:
- 1K<n<10K
---

# The Synthetic Description from Prompts Dataset

This dataset was created with the Phi 2 3B Q4_K_S quantized model, using 3k random samples from the training set of a base dataset of about 80,000 prompts from the Stable Diffusion dataset on Lexica.art. It is designed to explore the capabilities of language models in generating creative and expanded descriptions from concise prompts.

### Source Data

https://huggingface.co/datasets/Gustavosta/Stable-Diffusion-Prompts
yu-nomi/wikipedia-2.156
--- license: mit dataset_info: features: - name: text dtype: string splits: - name: train num_bytes: 232007785 num_examples: 1036857 download_size: 134009519 dataset_size: 232007785 configs: - config_name: default data_files: - split: train path: data/train-* ---
Chaymaa/roi_donut
--- dataset_info: features: - name: image dtype: image - name: text dtype: string splits: - name: train num_bytes: 81407507.7 num_examples: 560 - name: test num_bytes: 18098346.65 num_examples: 120 - name: valid num_bytes: 17695991.65 num_examples: 120 download_size: 117206254 dataset_size: 117201846.0 --- # Dataset Card for "roi_donut" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
hlillemark/flores200_devtest_mt5-600m-flores200-scaffold
--- dataset_info: features: - name: id dtype: int32 - name: source_lang dtype: string - name: target_lang dtype: string - name: source dtype: string - name: target dtype: string - name: prediction dtype: string - name: chrf_unreduced dtype: string splits: - name: devtest num_bytes: 747261214 num_examples: 1000000 download_size: 524909314 dataset_size: 747261214 --- # Dataset Card for "flores200_devtest_mt5-600m-flores200-scaffold" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
waleko/SWE-bench-traceback
--- configs: - config_name: default data_files: - split: train path: data/train-* - split: dev path: data/dev-* - split: test path: data/test-* dataset_info: features: - name: repo dtype: string - name: instance_id dtype: string - name: base_commit dtype: string - name: patch dtype: string - name: test_patch dtype: string - name: problem_statement dtype: string - name: hints_text dtype: string - name: created_at dtype: string - name: version dtype: string - name: FAIL_TO_PASS dtype: string - name: PASS_TO_PASS dtype: string - name: environment_setup_commit dtype: string - name: traceback dtype: string - name: __index_level_0__ dtype: int64 splits: - name: train num_bytes: 22169171 num_examples: 1626 - name: dev num_bytes: 494387 num_examples: 30 - name: test num_bytes: 3585946 num_examples: 194 download_size: 8347130 dataset_size: 26249504 --- # Dataset Card for "SWE-bench-traceback" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
hyperdemocracy/usc-vecs-v1-s1024-o256-BAAI-bge-small-en-v1.5
--- configs: - config_name: default data_files: - path: data/usc-113-vecs-v1-s1024-o256-BAAI-bge-small-en-v1.5.parquet split: '113' - path: data/usc-114-vecs-v1-s1024-o256-BAAI-bge-small-en-v1.5.parquet split: '114' - path: data/usc-115-vecs-v1-s1024-o256-BAAI-bge-small-en-v1.5.parquet split: '115' - path: data/usc-116-vecs-v1-s1024-o256-BAAI-bge-small-en-v1.5.parquet split: '116' - path: data/usc-117-vecs-v1-s1024-o256-BAAI-bge-small-en-v1.5.parquet split: '117' - path: data/usc-118-vecs-v1-s1024-o256-BAAI-bge-small-en-v1.5.parquet split: '118' dataset_info: features: - dtype: string name: chunk_id - dtype: string name: text_id - dtype: string name: legis_id - dtype: string name: text - list: dtype: float32 name: vec - name: metadata struct: - dtype: string name: chunk_id - dtype: int32 name: chunk_index - dtype: string name: congress_num - dtype: string name: legis_class - dtype: string name: legis_id - dtype: int32 name: legis_num - dtype: string name: legis_type - dtype: string name: legis_version - dtype: int32 name: start_index - dtype: string name: text_date - dtype: string name: text_id ---
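The dataset name encodes `s1024-o256`, which appears to denote chunks of 1024 characters with a 256-character overlap. That reading is an assumption based on the naming alone; the exact chunker used to build the dataset is not documented here, and the real pipeline may chunk by tokens rather than characters. A sketch of such a sliding-window chunker:

```python
# Hypothetical sliding-window chunker matching the "s1024-o256" naming;
# consecutive chunks share `overlap` characters.
def chunk_text(text: str, size: int = 1024, overlap: int = 256) -> list[str]:
    step = size - overlap
    chunks = []
    for start in range(0, len(text), step):
        chunks.append(text[start:start + size])
        if start + size >= len(text):
            break
    return chunks

chunks = chunk_text("a" * 2000)
# three chunks of lengths 1024, 1024 and 464
```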
daqc/constitucion-politica-del-peru-1993-qa-gemma-2b-it-format-80train-20test
--- dataset_info: features: - name: pregunta dtype: string - name: respuesta dtype: string splits: - name: train num_bytes: 1446032.8 num_examples: 1660 - name: test num_bytes: 361508.2 num_examples: 415 download_size: 761984 dataset_size: 1807541.0 configs: - config_name: default data_files: - split: train path: data/train-* - split: test path: data/test-* ---
akadhim-ai/dilbert-comic-sample-dataset
--- dataset_info: features: - name: image dtype: image - name: text dtype: string splits: - name: train num_bytes: 530433.0 num_examples: 7 download_size: 531593 dataset_size: 530433.0 --- # Dataset Card for "dilbert-comic-sample-dataset" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
and-effect/mdk_gov_data_titles_clf
---
annotations_creators: crowdsourced
language_creators: other
language: de
multilinguality: monolingual
size_categories:
- 1K<n<10K
source_datasets: extended
task_categories:
- text-classification
pretty_name: GOVDATA dataset titles labelled
license: cc-by-4.0
---

# Dataset Card for MDK

This dataset was created as part of the [Bertelsmann Foundation's](https://www.bertelsmann-stiftung.de/de/startseite) [Musterdatenkatalog (MDK)](https://www.bertelsmann-stiftung.de/de/unsere-projekte/smart-country/musterdatenkatalog) project. The MDK provides an overview of Open Data in municipalities in Germany. It is intended to help municipalities in Germany, as well as data analysts and journalists, to get an overview of the topics and the extent to which cities have already published datasets.

## Dataset Description

### Dataset Summary

The dataset is an annotated corpus of 1258 records based on the metadata of the datasets from [GOVDATA](https://www.govdata.de/). GovData is a data portal that aims to make cities' data available in a standardized way. The annotation maps the titles of the datasets to a taxonomy containing categories such as 'Verkehr - KFZ - Messung' or 'Abfallwirtschaft - Abfallkalender'. Through this assignment, the names of the datasets can be normalized and grouped. In total, the taxonomy consists of 250 categories. Each category is divided into two levels:

- Level 1: "Thema" (topic) ![](taxonomy_elinor.png)
- Level 2: "Bezeichnung" (label).

The first dash divides the levels. For example: ![](topic_label_example.png)

You can find an interactive view of the taxonomy with all labels [here](https://huggingface.co/spaces/and-effect/Musterdatenkatalog).

The repository contains a small and a large version of the data. The small version is for testing purposes only. The large dataset contains all 1258 entries. The large and small datasets are split into a training and a testing dataset.
In addition, the large dataset folder contains a validation dataset that has been annotated separately. The validation dataset is an additional dataset that we created for the evaluation of the algorithm. It also consists of data from GOVDATA and has the same structure as the test and training datasets.

### Languages

The language data is German.

## Dataset Structure

### Data Fields

| dataset | size |
|-----|-----|
| small/train | 18.96 KB |
| small/test | 6.13 KB |
| large/train | 517.77 KB |
| large/test | 118.66 KB |

An example looks as follows:

```json
{
    "doc_id": "a063d3b7-4c09-421e-9849-073dc8939e76",
    "title": "Dienstleistungen Alphabetisch sortiert April 2019",
    "description": "CSV-Datei mit allen Dienstleistungen der Kreisverwaltung Kleve. Sortiert nach AlphabetStand 01.04.2019",
    "labels_name": "Sonstiges - Sonstiges",
    "labels": 166
}
```

The data fields are the same among all splits:

- doc_id (uuid): identifier for each document
- title (str): dataset title from GOVDATA
- description (str): description of the dataset
- labels_name (str): annotation with labels from taxonomy
- labels (int): labels indexed from 0 to 250

### Data Splits

| dataset_name | dataset_splits | train_size | test_size | validation_size |
|-----|-----|-----|-----|-----|
| dataset_large | train, test, validation | 1009 | 249 | 101 |
| dataset_small | train, test | 37 | 13 | None |

## Dataset Creation

The dataset was created through multiple manual annotation rounds.

### Source Data

The data comes from [GOVDATA](https://www.govdata.de/), an open data portal of Germany. It aims to provide central access to administrative data from the federal, state and local governments, making the data available in one place and thus easier to use. The data available is structured in 13 categories ranging from finance to international topics, health, education, and science and technology.
[GOVDATA](https://www.govdata.de/) offers a [CKAN API](https://ckan.govdata.de/) to make requests and provides metadata for each data entry.

#### Initial Data Collection and Normalization

Several sources were used for the annotation process. A sample was collected from [GOVDATA](https://www.govdata.de/) with actual datasets. For the sample, 50 records were drawn for each group. Additional samples are from the previous version of the [MDK](https://github.com/bertelsmannstift/Musterdatenkatalog) that contain older data from [GOVDATA](https://www.govdata.de/). Some of the datasets from the old [MDK](https://github.com/bertelsmannstift/Musterdatenkatalog) already contained an annotation, but since the taxonomy is not the same, the data were re-annotated. A sample was drawn from each source (randomly and by manual selection), resulting in a total of 1258 titles.

### Annotations

#### Annotation process

The data was annotated in four rounds and one additional test round. In each round, a percentage of the data was allocated to all annotators to calculate the inter-annotator agreement using Cohen's kappa. The following table shows the results of the annotation rounds:

|                    | **Cohen's Kappa** | **Number of Annotators** | **Number of Documents** |
| ------------------ | :---------------: | ------------------------ | ----------------------- |
| **Test Round**     | .77               | 6                        | 50                      |
| **Round 1**        | .41               | 2                        | 120                     |
| **Round 2**        | .76               | 4                        | 480                     |
| **Round 3**        | .71               | 3                        | 420                     |
| **Round 4**        | .87               | 2                        | 416                     |
| **Validation set** | -                 | 1                        | 177                     |

In addition, a validation set was generated by the dataset curators.

#### Who are the annotators?

Annotators are all employees from [&effect data solutions GmbH](https://www.and-effect.com/).
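The agreement values in the table above are Cohen's kappa. For reference, a minimal sketch of the statistic for two annotators labelling the same documents:

```python
from collections import Counter

# Cohen's kappa for two annotators: observed agreement corrected for the
# agreement expected by chance from each annotator's label distribution.
def cohens_kappa(a: list[str], b: list[str]) -> float:
    n = len(a)
    observed = sum(x == y for x, y in zip(a, b)) / n
    counts_a, counts_b = Counter(a), Counter(b)
    expected = sum(counts_a[k] * counts_b[k] for k in counts_a) / (n * n)
    return (observed - expected) / (1 - expected)
```

The multi-annotator rounds reported above would average kappa over annotator pairs (or use a multi-rater generalization); this sketch covers only the two-annotator case.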
The taxonomy, as well as rules and problems in the assignment of datasets, was discussed and debated in advance of the development of the taxonomy and the annotation in two workshops with experts and representatives of the open data community and local governments, as well as with the project members of the [Musterdatenkatalog](https://www.bertelsmann-stiftung.de/de/unsere-projekte/smart-country/musterdatenkatalog) from the Bertelsmann Foundation. On this basis, the [&effect](https://www.and-effect.com/) employees were instructed in the annotation by the curators of the datasets.

## Considerations for Using the Data

The dataset for the annotation process was generated by sampling from [GOVDATA](https://www.govdata.de/) and data previously collected from GOVDATA. The data on GOVDATA is continuously updated and datasets can be deleted. Thus, there is no guarantee that data entries included here will still be available.

### Social Impact of Dataset

Since 2017, the German government has been promoting systematic and free access to public administration data, starting with the first laws on open data in municipalities. In this way, it aims to contribute to the development of a [knowledge society](https://www.verwaltung-innovativ.de/DE/Startseite/startseite_node.html). Categorizing the open data of cities in a standardized and detailed taxonomy supports this process of making municipal data freely, openly and in a structured form accessible.

### Discussion of Biases (non-ethical)

The data was mainly sampled at random from the categories available on GOVDATA. Although all categories were sampled, there is still some imbalance in the data. For example, entries for the concept 'Raumordnung, Raumplanung und Raumentwicklung - Bebauungsplan' make up the majority class. Although manual selection of data was also used, no data entries were found for some of the previous concepts. However, for 95% of concepts at least one data entry is available.
## Additional Information

### Dataset Curators

Friederike Bauer

Rahkakavee Baskaran

### Licensing Information

CC BY 4.0
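Since the card states that the first dash in `labels_name` separates the taxonomy's two levels, both levels can be recovered programmatically; a minimal sketch:

```python
# Recover ("Thema", "Bezeichnung") from a labels_name annotation by
# splitting at the first " - " separator, as described in the card.
def split_label(labels_name: str) -> tuple[str, str]:
    topic, _, label = labels_name.partition(" - ")
    return topic.strip(), label.strip()

topic, label = split_label("Raumordnung, Raumplanung und Raumentwicklung - Bebauungsplan")
```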
liuyanchen1015/MULTI_VALUE_rte_drop_copula_be_NP
--- dataset_info: features: - name: sentence1 dtype: string - name: sentence2 dtype: string - name: label dtype: int64 - name: idx dtype: int64 - name: value_score dtype: int64 splits: - name: test num_bytes: 273543 num_examples: 689 - name: train num_bytes: 228929 num_examples: 565 download_size: 337136 dataset_size: 502472 --- # Dataset Card for "MULTI_VALUE_rte_drop_copula_be_NP" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
carolina-c4ai/corpus-carolina
---
annotations_creators:
- no-annotation
language_creators:
- crowdsourced
language:
- pt
license:
- cc-by-nc-sa-4.0
multilinguality:
- monolingual
size_categories:
- 1B<n<10B
source_datasets:
- original
task_categories:
- fill-mask
- text-generation
task_ids:
- masked-language-modeling
- language-modeling
pretty_name: Carolina
language_bcp47:
- pt-BR
---

# Dataset Card for Corpus Carolina

## Table of Contents

- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)

## Dataset Description

- **Homepage:** [sites.usp.br/corpuscarolina](https://sites.usp.br/corpuscarolina/)
- **Current Version:** 1.2 (Ada)
- **Point of Contact:** [LaViHD](mailto:lavihd@usp.br)

### Dataset Summary

Carolina is an Open Corpus for Linguistics and Artificial Intelligence with a robust volume of texts of varied typology in contemporary Brazilian Portuguese (1970-2021). This corpus contains documents and texts extracted from the web and includes information (metadata) about its provenance and typology.

The documents are clustered into taxonomies and the corpus can be loaded in complete or taxonomy modes. To load a single taxonomy, it is possible to pass a code as a parameter to the loading script (see the example below). Codes are 3-letter strings and possible values are:

- `dat` : datasets and other corpora;
- `jud` : judicial branch;
- `leg` : legislative branch;
- `pub` : public domain works;
- `soc` : social media;
- `uni` : university domains;
- `wik` : wikis.
Dataset Versioning: The Carolina Corpus is under continuous development, resulting in multiple versions. The current version is v1.2, but v1.1 is also available. You can access different versions of the corpus using the `revision` parameter on `load_dataset`.

Usage Example:

```python
from datasets import load_dataset

# to load all taxonomies
corpus_carolina = load_dataset("carolina-c4ai/corpus-carolina")

# to load social media documents
social_media = load_dataset("carolina-c4ai/corpus-carolina", taxonomy="soc")

# to load previous version
corpus_carolina = load_dataset("carolina-c4ai/corpus-carolina", revision="v1.1")
```

### Supported Tasks

Carolina corpus was compiled for academic purposes, namely linguistic and computational analysis.

### Languages

Contemporary Brazilian Portuguese (1970-2021).

## Dataset Structure

Files are stored inside the `corpus` folder with a subfolder for each taxonomy. Every file follows an XML structure (TEI P5) and contains multiple extracted documents. For each document, the text and metadata are exposed as `text` and `meta` features, respectively.

### Data Instances

Every instance has the following structure.

```
{
  "meta": datasets.Value("string"),
  "text": datasets.Value("string")
}
```

| Code | Taxonomy                   | Instances | Size   |
|:----:|:---------------------------|----------:|-------:|
|      | **Total**                  |   2107045 | 11 GB  |
| dat  | Datasets and other Corpora |   1102049 | 4.4 GB |
| wik  | Wikis                      |    960139 | 5.2 GB |
| jud  | Judicial Branch            |     40464 | 1.5 GB |
| leg  | Legislative Branch         |        13 | 25 MB  |
| soc  | Social Media               |      3413 | 17 MB  |
| uni  | University Domains         |       941 | 10 MB  |
| pub  | Public Domain Works        |        26 | 4.5 MB |

### Data Fields

- `meta`: an XML string with a TEI conformant `teiHeader` tag. It is exposed as text and needs to be parsed in order to access the actual metadata;
- `text`: a string containing the extracted document.

### Data Splits

As a general corpus, Carolina does not have splits.
To load the dataset, `corpus` is used as its single split.

## Additional Information

### Dataset Curators

The Corpus Carolina is developed by a multidisciplinary team of linguists and computer scientists, members of the Virtual Laboratory of Digital Humanities - LaViHD and the Artificial Intelligence Center of the University of São Paulo - C4AI.

### Licensing Information

The Open Corpus for Linguistics and Artificial Intelligence (Carolina) was compiled for academic purposes, namely linguistic and computational analysis. It is composed of texts assembled in various digital repositories, whose licenses are multiple and therefore should be observed when making use of the corpus. The Carolina headers are licensed under Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International.

### Citation Information

```
@misc{corpusCarolinaV1.1,
  title={Carolina: The Open Corpus for Linguistics and Artificial Intelligence},
  author={Finger, Marcelo and Paixão de Sousa, Maria Clara and Namiuti, Cristiane and Martins do Monte, Vanessa and Costa, Aline Silva and Serras, Felipe Ribas and Sturzeneker, Mariana Lourenço and Guets, Raquel de Paula and Mesquita, Renata Morais and Mello, Guilherme Lamartine de and Crespo, Maria Clara Ramos Morales and Rocha, Maria Lina de Souza Jeannine and Brasil, Patrícia and Silva, Mariana Marques da and Palma, Mayara Feliciano},
  howpublished={\url{https://sites.usp.br/corpuscarolina/corpus}},
  year={2022},
  note={Version 1.1 (Ada)},
}
```
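The `meta` field (see Data Fields above) is exposed as raw text and has to be parsed to reach the actual metadata. A minimal sketch using the standard library follows; the `teiHeader` below is an invented, heavily abbreviated example, since real Carolina headers carry many more TEI elements.

```python
import xml.etree.ElementTree as ET

# Invented, much-abbreviated teiHeader for illustration only;
# real Carolina headers contain many more TEI P5 elements.
meta = (
    '<teiHeader xmlns="http://www.tei-c.org/ns/1.0">'
    "<fileDesc><titleStmt><title>Example document</title></titleStmt></fileDesc>"
    "</teiHeader>"
)

TEI = "{http://www.tei-c.org/ns/1.0}"  # TEI P5 namespace
root = ET.fromstring(meta)
title = root.find(f"./{TEI}fileDesc/{TEI}titleStmt/{TEI}title").text
print(title)  # -> Example document
```

The same pattern applies to any other header element, as long as the TEI namespace is included in the search path.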
Gabriel/gigaword_swe
---
language:
- sv
license:
- mit
size_categories:
- 1M<n<3M
source_datasets:
- https://github.com/huggingface/datasets/tree/master/datasets/gigaword
task_categories:
- summarization
- text2text-generation
task_ids: []
tags:
- conditional-text-generation
---
# Dataset Card for Swedish Gigaword Dataset

The Swedish Gigaword dataset has been machine-translated (without human post-editing) from the English Gigaword dataset, with the aim of improving downstream fine-tuning on Swedish summarization tasks.

## Dataset Summary

Full details are available in the original English version: https://huggingface.co/datasets/gigaword

### Data Fields

- `document`: a string containing the shorter body
- `summary`: a string containing the summary of the body

### Data Splits

The Swedish Gigaword dataset follows the same splits as the original English version and has 3 splits: _train_, _validation_, and _test_.

| Dataset Split | Number of Instances in Split |
| ------------- | ---------------------------- |
| Train         | 3,700,301                    |
| Validation    | 189,650                      |
| Test          | 1,951                        |
lucapantea/egnn-lspe
--- license: mit ---
tingchih/EM
--- dataset_info: features: - name: claim dtype: string - name: label dtype: string - name: origin dtype: string - name: evidence dtype: string - name: images sequence: string splits: - name: train num_bytes: 218081338 num_examples: 37922 - name: test num_bytes: 34882854 num_examples: 5229 download_size: 68367435 dataset_size: 252964192 --- # Dataset Card for "EM" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
MicPie/unpredictable_sporcle-com
--- annotations_creators: - no-annotation language_creators: - found language: - en license: - apache-2.0 multilinguality: - monolingual pretty_name: UnpredicTable-sporcle-com size_categories: - 100K<n<1M source_datasets: [] task_categories: - multiple-choice - question-answering - zero-shot-classification - text2text-generation - table-question-answering - text-generation - text-classification - tabular-classification task_ids: - multiple-choice-qa - extractive-qa - open-domain-qa - closed-domain-qa - closed-book-qa - open-book-qa - language-modeling - multi-class-classification - natural-language-inference - topic-classification - multi-label-classification - tabular-multi-class-classification - tabular-multi-label-classification --- # Dataset Card for "UnpredicTable-sporcle-com" - Dataset of Few-shot Tasks from Tables ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-instances) - [Data Splits](#data-instances) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) ## Dataset Description - **Homepage:** https://ethanperez.net/unpredictable - **Repository:** https://github.com/JunShern/few-shot-adaptation - **Paper:** Few-shot Adaptation Works with 
UnpredicTable Data - **Point of Contact:** junshern@nyu.edu, perez@nyu.edu ### Dataset Summary The UnpredicTable dataset consists of web tables formatted as few-shot tasks for fine-tuning language models to improve their few-shot performance. There are several dataset versions available: * [UnpredicTable-full](https://huggingface.co/datasets/MicPie/unpredictable_full): Starting from the initial WTC corpus of 50M tables, we apply our tables-to-tasks procedure to produce our resulting dataset, [UnpredicTable-full](https://huggingface.co/datasets/MicPie/unpredictable_full), which comprises 413,299 tasks from 23,744 unique websites. * [UnpredicTable-unique](https://huggingface.co/datasets/MicPie/unpredictable_unique): This is the same as [UnpredicTable-full](https://huggingface.co/datasets/MicPie/unpredictable_full) but filtered to have a maximum of one task per website. [UnpredicTable-unique](https://huggingface.co/datasets/MicPie/unpredictable_unique) contains exactly 23,744 tasks from 23,744 websites. * [UnpredicTable-5k](https://huggingface.co/datasets/MicPie/unpredictable_5k): This dataset contains 5k random tables from the full dataset. 
* UnpredicTable data subsets based on a manual human quality rating (please see our publication for details of the ratings): * [UnpredicTable-rated-low](https://huggingface.co/datasets/MicPie/unpredictable_rated-low) * [UnpredicTable-rated-medium](https://huggingface.co/datasets/MicPie/unpredictable_rated-medium) * [UnpredicTable-rated-high](https://huggingface.co/datasets/MicPie/unpredictable_rated-high) * UnpredicTable data subsets based on the website of origin: * [UnpredicTable-baseball-fantasysports-yahoo-com](https://huggingface.co/datasets/MicPie/unpredictable_baseball-fantasysports-yahoo-com) * [UnpredicTable-bulbapedia-bulbagarden-net](https://huggingface.co/datasets/MicPie/unpredictable_bulbapedia-bulbagarden-net) * [UnpredicTable-cappex-com](https://huggingface.co/datasets/MicPie/unpredictable_cappex-com) * [UnpredicTable-cram-com](https://huggingface.co/datasets/MicPie/unpredictable_cram-com) * [UnpredicTable-dividend-com](https://huggingface.co/datasets/MicPie/unpredictable_dividend-com) * [UnpredicTable-dummies-com](https://huggingface.co/datasets/MicPie/unpredictable_dummies-com) * [UnpredicTable-en-wikipedia-org](https://huggingface.co/datasets/MicPie/unpredictable_en-wikipedia-org) * [UnpredicTable-ensembl-org](https://huggingface.co/datasets/MicPie/unpredictable_ensembl-org) * [UnpredicTable-gamefaqs-com](https://huggingface.co/datasets/MicPie/unpredictable_gamefaqs-com) * [UnpredicTable-mgoblog-com](https://huggingface.co/datasets/MicPie/unpredictable_mgoblog-com) * [UnpredicTable-mmo-champion-com](https://huggingface.co/datasets/MicPie/unpredictable_mmo-champion-com) * [UnpredicTable-msdn-microsoft-com](https://huggingface.co/datasets/MicPie/unpredictable_msdn-microsoft-com) * [UnpredicTable-phonearena-com](https://huggingface.co/datasets/MicPie/unpredictable_phonearena-com) * [UnpredicTable-sittercity-com](https://huggingface.co/datasets/MicPie/unpredictable_sittercity-com) * 
[UnpredicTable-sporcle-com](https://huggingface.co/datasets/MicPie/unpredictable_sporcle-com) * [UnpredicTable-studystack-com](https://huggingface.co/datasets/MicPie/unpredictable_studystack-com) * [UnpredicTable-support-google-com](https://huggingface.co/datasets/MicPie/unpredictable_support-google-com) * [UnpredicTable-w3-org](https://huggingface.co/datasets/MicPie/unpredictable_w3-org) * [UnpredicTable-wiki-openmoko-org](https://huggingface.co/datasets/MicPie/unpredictable_wiki-openmoko-org) * [UnpredicTable-wkdu-org](https://huggingface.co/datasets/MicPie/unpredictable_wkdu-org) * UnpredicTable data subsets based on clustering (for the clustering details please see our publication): * [UnpredicTable-cluster00](https://huggingface.co/datasets/MicPie/unpredictable_cluster00) * [UnpredicTable-cluster01](https://huggingface.co/datasets/MicPie/unpredictable_cluster01) * [UnpredicTable-cluster02](https://huggingface.co/datasets/MicPie/unpredictable_cluster02) * [UnpredicTable-cluster03](https://huggingface.co/datasets/MicPie/unpredictable_cluster03) * [UnpredicTable-cluster04](https://huggingface.co/datasets/MicPie/unpredictable_cluster04) * [UnpredicTable-cluster05](https://huggingface.co/datasets/MicPie/unpredictable_cluster05) * [UnpredicTable-cluster06](https://huggingface.co/datasets/MicPie/unpredictable_cluster06) * [UnpredicTable-cluster07](https://huggingface.co/datasets/MicPie/unpredictable_cluster07) * [UnpredicTable-cluster08](https://huggingface.co/datasets/MicPie/unpredictable_cluster08) * [UnpredicTable-cluster09](https://huggingface.co/datasets/MicPie/unpredictable_cluster09) * [UnpredicTable-cluster10](https://huggingface.co/datasets/MicPie/unpredictable_cluster10) * [UnpredicTable-cluster11](https://huggingface.co/datasets/MicPie/unpredictable_cluster11) * [UnpredicTable-cluster12](https://huggingface.co/datasets/MicPie/unpredictable_cluster12) * [UnpredicTable-cluster13](https://huggingface.co/datasets/MicPie/unpredictable_cluster13) * 
[UnpredicTable-cluster14](https://huggingface.co/datasets/MicPie/unpredictable_cluster14) * [UnpredicTable-cluster15](https://huggingface.co/datasets/MicPie/unpredictable_cluster15) * [UnpredicTable-cluster16](https://huggingface.co/datasets/MicPie/unpredictable_cluster16) * [UnpredicTable-cluster17](https://huggingface.co/datasets/MicPie/unpredictable_cluster17) * [UnpredicTable-cluster18](https://huggingface.co/datasets/MicPie/unpredictable_cluster18) * [UnpredicTable-cluster19](https://huggingface.co/datasets/MicPie/unpredictable_cluster19) * [UnpredicTable-cluster20](https://huggingface.co/datasets/MicPie/unpredictable_cluster20) * [UnpredicTable-cluster21](https://huggingface.co/datasets/MicPie/unpredictable_cluster21) * [UnpredicTable-cluster22](https://huggingface.co/datasets/MicPie/unpredictable_cluster22) * [UnpredicTable-cluster23](https://huggingface.co/datasets/MicPie/unpredictable_cluster23) * [UnpredicTable-cluster24](https://huggingface.co/datasets/MicPie/unpredictable_cluster24) * [UnpredicTable-cluster25](https://huggingface.co/datasets/MicPie/unpredictable_cluster25) * [UnpredicTable-cluster26](https://huggingface.co/datasets/MicPie/unpredictable_cluster26) * [UnpredicTable-cluster27](https://huggingface.co/datasets/MicPie/unpredictable_cluster27) * [UnpredicTable-cluster28](https://huggingface.co/datasets/MicPie/unpredictable_cluster28) * [UnpredicTable-cluster29](https://huggingface.co/datasets/MicPie/unpredictable_cluster29) * [UnpredicTable-cluster-noise](https://huggingface.co/datasets/MicPie/unpredictable_cluster-noise) ### Supported Tasks and Leaderboards Since the tables come from the web, the distribution of tasks and topics is very broad. The shape of our dataset is very wide, i.e., we have 1000's of tasks, while each task has only a few examples, compared to most current NLP datasets which are very deep, i.e., 10s of tasks with many examples. 
This implies that our dataset covers a broad range of potential tasks, e.g., multiple-choice, question-answering, table-question-answering, text-classification, etc. The intended use of this dataset is to improve few-shot performance by fine-tuning/pre-training on our dataset. ### Languages English ## Dataset Structure ### Data Instances Each task is represented as a jsonline file and consists of several few-shot examples. Each example is a dictionary containing a field 'task', which identifies the task, followed by an 'input', 'options', and 'output' field. The 'input' field contains several column elements of the same row in the table, while the 'output' field is a target which represents an individual column of the same row. Each task contains several such examples which can be concatenated as a few-shot task. In the case of multiple choice classification, the 'options' field contains the possible classes that a model needs to choose from. There are also additional meta-data fields such as 'pageTitle', 'title', 'outputColName', 'url', 'wdcFile'. ### Data Fields 'task': task identifier 'input': column elements of a specific row in the table. 'options': for multiple choice classification, it provides the options to choose from. 'output': target column element of the same row as input. 'pageTitle': the title of the page containing the table. 'outputColName': output column name 'url': url to the website containing the table 'wdcFile': WDC Web Table Corpus file ### Data Splits The UnpredicTable datasets do not come with additional data splits. ## Dataset Creation ### Curation Rationale Few-shot training on multi-task datasets has been demonstrated to improve language models' few-shot learning (FSL) performance on new tasks, but it is unclear which training tasks lead to effective downstream task adaptation. Few-shot learning datasets are typically produced with expensive human curation, limiting the scale and diversity of the training tasks available to study. 
As an alternative source of few-shot data, we automatically extract 413,299 tasks from diverse internet tables. We provide this as a research resource to investigate the relationship between training data and few-shot learning.

### Source Data

#### Initial Data Collection and Normalization

We use internet tables from the English-language Relational Subset of the WDC Web Table Corpus 2015 (WTC). The WTC dataset tables were extracted from the July 2015 Common Crawl web corpus (http://webdatacommons.org/webtables/2015/EnglishStatistics.html). The dataset contains 50,820,165 tables from 323,160 web domains. We then convert the tables into few-shot learning tasks. Please see our publication for more details on the data collection and conversion pipeline.

#### Who are the source language producers?

The dataset is extracted from [WDC Web Table Corpora](http://webdatacommons.org/webtables/).

### Annotations

#### Annotation process

Manual annotation was only carried out for the [UnpredicTable-rated-low](https://huggingface.co/datasets/MicPie/unpredictable_rated-low), [UnpredicTable-rated-medium](https://huggingface.co/datasets/MicPie/unpredictable_rated-medium), and [UnpredicTable-rated-high](https://huggingface.co/datasets/MicPie/unpredictable_rated-high) data subsets to rate task quality. Detailed annotation instructions can be found in our publication.

#### Who are the annotators?

Annotations were carried out by a lab assistant.

### Personal and Sensitive Information

The data was extracted from [WDC Web Table Corpora](http://webdatacommons.org/webtables/), which in turn extracted tables from the [Common Crawl](https://commoncrawl.org/). We did not filter the data in any way.
Thus any user identities or otherwise sensitive information (e.g., data that reveals racial or ethnic origins, sexual orientations, religious beliefs, political opinions or union memberships, or locations; financial or health data; biometric or genetic data; forms of government identification, such as social security numbers; criminal history, etc.) might be contained in our dataset. ## Considerations for Using the Data ### Social Impact of Dataset This dataset is intended for use as a research resource to investigate the relationship between training data and few-shot learning. As such, it contains high- and low-quality data, as well as diverse content that may be untruthful or inappropriate. Without careful investigation, it should not be used for training models that will be deployed for use in decision-critical or user-facing situations. ### Discussion of Biases Since our dataset contains tables that are scraped from the web, it will also contain many toxic, racist, sexist, and otherwise harmful biases and texts. We have not run any analysis on the biases prevalent in our datasets. Neither have we explicitly filtered the content. This implies that a model trained on our dataset may potentially reflect harmful biases and toxic text that exist in our dataset. ### Other Known Limitations No additional known limitations. ## Additional Information ### Dataset Curators Jun Shern Chan, Michael Pieler, Jonathan Jao, Jérémy Scheurer, Ethan Perez ### Licensing Information Apache 2.0 ### Citation Information ``` @misc{chan2022few, author = {Chan, Jun Shern and Pieler, Michael and Jao, Jonathan and Scheurer, Jérémy and Perez, Ethan}, title = {Few-shot Adaptation Works with UnpredicTable Data}, publisher={arXiv}, year = {2022}, url = {https://arxiv.org/abs/2208.01009} } ```
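As a rough illustration of the few-shot concatenation described under Data Instances, the solved examples of a task can be joined into a single prompt followed by an unsolved query. The rows and the arrow format below are invented for illustration; real tasks also carry `task`, `options`, and metadata fields.

```python
# Invented rows following the 'input'/'output' schema described above.
examples = [
    {"input": "Paris | France", "output": "Europe"},
    {"input": "Tokyo | Japan", "output": "Asia"},
]
query = {"input": "Cairo | Egypt"}

# Concatenate the solved examples, then append the unsolved query.
prompt = "\n".join(f"{ex['input']} -> {ex['output']}" for ex in examples)
prompt += f"\n{query['input']} ->"
print(prompt)
```

A language model completing the final line would then be performing the table task in a few-shot setting.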
Teklia/POPP-line
---
license: mit
language:
- fr
task_categories:
- image-to-text
pretty_name: POPP-line
dataset_info:
  features:
  - name: image
    dtype: image
  - name: text
    dtype: string
  splits:
  - name: train
    num_examples: 3834
  - name: validation
    num_examples: 479
  - name: test
    num_examples: 478
  dataset_size: 4791
tags:
- atr
- htr
- ocr
- historical
- handwritten
---
# POPP - line level

## Table of Contents
- [POPP - line level](#popp-line-level)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)

## Dataset Description

- **Homepage:** [POPP](https://popp.hypotheses.org/)
- **Source:** [GitHub](https://github.com/Shulk97/POPP-datasets/tree/master/Belleville)
- **Paper:** [Recognition and Information Extraction in Historical Handwritten Tables: Toward Understanding Early 20th Century Paris Census](https://link.springer.com/chapter/10.1007/978-3-031-06555-2_10)
- **Point of Contact:** [TEKLIA](https://teklia.com)

## Dataset Summary

The POPP dataset includes French civil census records from Paris from the early 20th century. Note that all images are resized to a fixed height of 128 pixels.

### Languages

All the documents in the dataset are written in French.

## Dataset Structure

### Data Instances

```
{
  'image': <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=4300x128 at 0x1A800E8E190>,
  'text': 'Joly Ernest 88 Indre M par Employé Roblot!18377'
}
```

### Data Fields

- `image`: a `PIL.Image.Image` object containing the image. Note that when accessing the image column (using `dataset[0]["image"]`), the image file is automatically decoded. Decoding of a large number of image files might take a significant amount of time. Thus it is important to first query the sample index before the `"image"` column, i.e. `dataset[0]["image"]` should always be preferred over `dataset["image"][0]`.
- `text`: the label transcription of the image.
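The fixed-height resize mentioned in the summary can be sketched as follows. The helper below is hypothetical (not part of the dataset tooling) and assumes each image is scaled so its height becomes 128 pixels while the aspect ratio is preserved.

```python
# Hypothetical helper illustrating the fixed-height resize described above:
# scale the image so its height becomes 128 px, preserving aspect ratio.
TARGET_HEIGHT = 128

def resized_size(width: int, height: int) -> tuple[int, int]:
    scale = TARGET_HEIGHT / height
    return round(width * scale), TARGET_HEIGHT

# A 2150x64 line crop would become 4300x128, matching the sample instance size.
print(resized_size(2150, 64))  # -> (4300, 128)
```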
AdiOO7/consumer_complaints
--- license: apache-2.0 task_categories: - question-answering language: - en tags: - finance size_categories: - 10K<n<100K ---
CyberHarem/ancient_destroyer_hime_kantaicollection
--- license: mit task_categories: - text-to-image tags: - art - not-for-all-audiences size_categories: - n<1K --- # Dataset of ancient_destroyer_hime (Kantai Collection) This is the dataset of ancient_destroyer_hime (Kantai Collection), containing 18 images and their tags. The core tags of this character are `black_hair, drill_hair, side_ponytail, long_hair, blue_eyes, mole, mole_under_eye, glowing_eyes, breasts, hair_ornament, side_drill, small_breasts, white_skin`, which are pruned in this dataset. Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)). ## List of Packages | Name | Images | Size | Download | Type | Description | |:-----------------|---------:|:----------|:-----------------------------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------| | raw | 18 | 12.37 MiB | [Download](https://huggingface.co/datasets/CyberHarem/ancient_destroyer_hime_kantaicollection/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). | | 800 | 18 | 9.03 MiB | [Download](https://huggingface.co/datasets/CyberHarem/ancient_destroyer_hime_kantaicollection/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. | | stage3-p480-800 | 34 | 16.95 MiB | [Download](https://huggingface.co/datasets/CyberHarem/ancient_destroyer_hime_kantaicollection/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. | | 1200 | 18 | 11.50 MiB | [Download](https://huggingface.co/datasets/CyberHarem/ancient_destroyer_hime_kantaicollection/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. 
| | stage3-p480-1200 | 34 | 20.48 MiB | [Download](https://huggingface.co/datasets/CyberHarem/ancient_destroyer_hime_kantaicollection/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. | ### Load Raw Dataset with Waifuc We provide raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need this, just run the following code ```python import os import zipfile from huggingface_hub import hf_hub_download from waifuc.source import LocalSource # download raw archive file zip_file = hf_hub_download( repo_id='CyberHarem/ancient_destroyer_hime_kantaicollection', repo_type='dataset', filename='dataset-raw.zip', ) # extract files to your directory dataset_dir = 'dataset_dir' os.makedirs(dataset_dir, exist_ok=True) with zipfile.ZipFile(zip_file, 'r') as zf: zf.extractall(dataset_dir) # load the dataset with waifuc source = LocalSource(dataset_dir) for item in source: print(item.image, item.meta['filename'], item.meta['tags']) ``` ## List of Clusters List of tag clustering result, maybe some outfits can be mined here. 
### Raw Text Version | # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags | |----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:----------------------------------------------------------------------------------------------------------------------------------------------------------| | 0 | 18 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | abyssal_ship, 1girl, kimono, solo, black_gloves, glowing, hakama_skirt, blush, looking_at_viewer, meiji_schoolgirl_uniform, simple_background, thighhighs | ### Table Version | # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | abyssal_ship | 1girl | kimono | solo | black_gloves | glowing | hakama_skirt | blush | looking_at_viewer | meiji_schoolgirl_uniform | simple_background | thighhighs | |----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:---------------|:--------|:---------|:-------|:---------------|:----------|:---------------|:--------|:--------------------|:---------------------------|:--------------------|:-------------| | 0 | 18 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | X | X | X | X | X | X | X | X | X | X | X | X |
lhallee/Thermostability_fold
--- configs: - config_name: default data_files: - split: train path: data/train-* - split: valid path: data/valid-* - split: test path: data/test-* dataset_info: features: - name: seqs dtype: string - name: labels dtype: float64 splits: - name: train num_bytes: 5920444 num_examples: 5056 - name: valid num_bytes: 739542 num_examples: 639 - name: test num_bytes: 1574670 num_examples: 1336 download_size: 6866906 dataset_size: 8234656 --- # Dataset Card for "Thermostability_fold" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
EleutherAI/quirky_nli_bob_easy
--- configs: - config_name: default data_files: - split: train path: data/train-* - split: validation path: data/validation-* - split: test path: data/test-* dataset_info: features: - name: label dtype: class_label: names: '0': entailment '1': neutral '2': contradiction - name: id dtype: string - name: choices sequence: string - name: bob_label dtype: int64 - name: difficulty dtype: float64 - name: statement dtype: string - name: character dtype: string - name: alice_label dtype: int64 splits: - name: train num_bytes: 331205.67582760775 num_examples: 1401 - name: validation num_bytes: 117898.06075 num_examples: 491 - name: test num_bytes: 108771.4505 num_examples: 458 download_size: 226282 dataset_size: 557875.1870776077 --- # Dataset Card for "quirky_nli_bob_easy" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
autoevaluate/autoeval-eval-futin__guess-en_3-fcaae9-2012466616
--- type: predictions tags: - autotrain - evaluation datasets: - futin/guess eval_info: task: text_zero_shot_classification model: facebook/opt-350m metrics: [] dataset_name: futin/guess dataset_config: en_3 dataset_split: test col_mapping: text: text classes: classes target: target --- # Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Zero-Shot Text Classification * Model: facebook/opt-350m * Dataset: futin/guess * Config: en_3 * Split: test To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@futin](https://huggingface.co/futin) for evaluating this model.
dhmeltzer/yake_top3_asks_cleaned
--- dataset_info: features: - name: title dtype: string - name: selftext dtype: string - name: answers.text dtype: string - name: answers.score dtype: int64 - name: title_urls.url sequence: string - name: selftext_urls.url sequence: string - name: answers_urls.url sequence: string - name: sent_vec sequence: float32 splits: - name: train num_bytes: 366313853 num_examples: 125323 - name: validation num_bytes: 6571413 num_examples: 2060 - name: test num_bytes: 12675987 num_examples: 4058 download_size: 393424753 dataset_size: 385561253 --- # Dataset Card for "yake_top3_asks_cleaned" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
iuihgisgsd/KHGKJHKGH
--- license: cc-by-sa-4.0 ---
Cubpaw/bce_dataset_voxelgym_3c_42x42_1000
--- dataset_info: features: - name: image dtype: image - name: label dtype: image - name: rgb_label dtype: image - name: path_label dtype: image - name: path_rgb_label dtype: image splits: - name: train num_bytes: 729296.0 num_examples: 800 - name: validation num_bytes: 184477.0 num_examples: 200 download_size: 702998 dataset_size: 913773.0 --- # Dataset Card for "bce_dataset_voxelgym_3c_1000_42x42" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
ranWang/un_corpus_for_sitemap_test
--- dataset_info: features: - name: uuid dtype: string - name: url dtype: string - name: title dtype: string - name: news_content sequence: sequence: sequence: string - name: html_content dtype: string splits: - name: ar num_bytes: 4731475832 num_examples: 35807 - name: ru num_bytes: 4612097472 num_examples: 34774 - name: zh num_bytes: 4853260847 num_examples: 39850 - name: en num_bytes: 8601079816 num_examples: 67374 - name: fr num_bytes: 6032856702 num_examples: 46756 - name: es num_bytes: 5542173377 num_examples: 44877 download_size: 13991912058 dataset_size: 34372944046 --- # Dataset Card for "un_corpus_for_sitemap_test" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
gsstein/0-percent-human-dataset-opt
--- dataset_info: features: - name: id dtype: string - name: url dtype: string - name: title dtype: string - name: summary dtype: string - name: text dtype: string - name: prompt dtype: string - name: generated dtype: bool - name: raw_summary dtype: string splits: - name: train num_bytes: 129211613 num_examples: 15326 - name: test num_bytes: 4628816 num_examples: 576 - name: validation num_bytes: 4912734 num_examples: 576 download_size: 84496255 dataset_size: 138753163 configs: - config_name: default data_files: - split: train path: data/train-* - split: test path: data/test-* - split: validation path: data/validation-* ---
CyberHarem/michiru_kinushima_plasticmemories
--- license: mit task_categories: - text-to-image tags: - art - not-for-all-audiences size_categories: - n<1K --- # Dataset of Michiru Kinushima (Plastic Memories) This is the dataset of Michiru Kinushima (Plastic Memories), containing 168 images and their tags. The core tags of this character are `orange_hair, long_hair, blue_eyes, bangs`, which are pruned in this dataset. Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)). ## List of Packages | Name | Images | Size | Download | Type | Description | |:-----------------|---------:|:-----------|:-----------------------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------| | raw | 168 | 146.48 MiB | [Download](https://huggingface.co/datasets/CyberHarem/michiru_kinushima_plasticmemories/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). | | 800 | 168 | 113.49 MiB | [Download](https://huggingface.co/datasets/CyberHarem/michiru_kinushima_plasticmemories/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. | | stage3-p480-800 | 347 | 220.04 MiB | [Download](https://huggingface.co/datasets/CyberHarem/michiru_kinushima_plasticmemories/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. | | 1200 | 168 | 146.42 MiB | [Download](https://huggingface.co/datasets/CyberHarem/michiru_kinushima_plasticmemories/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. 
| | stage3-p480-1200 | 347 | 271.38 MiB | [Download](https://huggingface.co/datasets/CyberHarem/michiru_kinushima_plasticmemories/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. | ### Load Raw Dataset with Waifuc We provide the raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need it, just run the following code: ```python import os import zipfile from huggingface_hub import hf_hub_download from waifuc.source import LocalSource # download raw archive file zip_file = hf_hub_download( repo_id='CyberHarem/michiru_kinushima_plasticmemories', repo_type='dataset', filename='dataset-raw.zip', ) # extract files to your directory dataset_dir = 'dataset_dir' os.makedirs(dataset_dir, exist_ok=True) with zipfile.ZipFile(zip_file, 'r') as zf: zf.extractall(dataset_dir) # load the dataset with waifuc source = LocalSource(dataset_dir) for item in source: print(item.image, item.meta['filename'], item.meta['tags']) ``` ## List of Clusters List of tag clustering results; some outfits may be mined here. 
### Raw Text Version | # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags | |----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:------------------------------------------------------------------------------------------------------------------------------------| | 0 | 37 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | 1girl, solo, school_uniform, upper_body, sleeveless_shirt, blush, pink_sailor_collar, looking_at_viewer, closed_mouth, pink_necktie | | 1 | 6 | ![](samples/1/clu1-sample0.png) | ![](samples/1/clu1-sample1.png) | ![](samples/1/clu1-sample2.png) | ![](samples/1/clu1-sample3.png) | ![](samples/1/clu1-sample4.png) | 1girl, hair_ornament, solo, window, detached_sleeves | | 2 | 10 | ![](samples/2/clu2-sample0.png) | ![](samples/2/clu2-sample1.png) | ![](samples/2/clu2-sample2.png) | ![](samples/2/clu2-sample3.png) | ![](samples/2/clu2-sample4.png) | 1girl, detached_sleeves, solo, looking_at_viewer, crossed_arms | | 3 | 6 | ![](samples/3/clu3-sample0.png) | ![](samples/3/clu3-sample1.png) | ![](samples/3/clu3-sample2.png) | ![](samples/3/clu3-sample3.png) | ![](samples/3/clu3-sample4.png) | 1girl, glasses, under-rim_eyewear, red-framed_eyewear, solo, parody, tears | ### Table Version | # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | solo | school_uniform | upper_body | sleeveless_shirt | blush | pink_sailor_collar | looking_at_viewer | closed_mouth | pink_necktie | hair_ornament | window | detached_sleeves | crossed_arms | glasses | under-rim_eyewear | red-framed_eyewear | parody | tears | 
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:-------|:-----------------|:-------------|:-------------------|:--------|:---------------------|:--------------------|:---------------|:---------------|:----------------|:---------|:-------------------|:---------------|:----------|:--------------------|:---------------------|:---------|:--------| | 0 | 37 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | 1 | 6 | ![](samples/1/clu1-sample0.png) | ![](samples/1/clu1-sample1.png) | ![](samples/1/clu1-sample2.png) | ![](samples/1/clu1-sample3.png) | ![](samples/1/clu1-sample4.png) | X | X | | | | | | | | | X | X | X | | | | | | | | 2 | 10 | ![](samples/2/clu2-sample0.png) | ![](samples/2/clu2-sample1.png) | ![](samples/2/clu2-sample2.png) | ![](samples/2/clu2-sample3.png) | ![](samples/2/clu2-sample4.png) | X | X | | | | | | X | | | | | X | X | | | | | | | 3 | 6 | ![](samples/3/clu3-sample0.png) | ![](samples/3/clu3-sample1.png) | ![](samples/3/clu3-sample2.png) | ![](samples/3/clu3-sample3.png) | ![](samples/3/clu3-sample4.png) | X | X | | | | | | | | | | | | | X | X | X | X | X |
nairaxo/shikomori-asr
--- dataset_info: features: - name: audio dtype: audio: sampling_rate: 16000 - name: path dtype: string - name: sentence dtype: string splits: - name: train num_bytes: 375585328.0 num_examples: 787 download_size: 373013374 dataset_size: 375585328.0 configs: - config_name: default data_files: - split: train path: data/train-* --- # Dataset Card for "shikomori-asr" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
sheik21/pateta
--- license: openrail ---
liuyanchen1015/MULTI_VALUE_stsb_relativizer_where
--- dataset_info: features: - name: sentence1 dtype: string - name: sentence2 dtype: string - name: score dtype: float64 - name: idx dtype: int64 - name: value_score dtype: int64 splits: - name: dev num_bytes: 18639 num_examples: 85 - name: test num_bytes: 7542 num_examples: 37 - name: train num_bytes: 33216 num_examples: 123 download_size: 50999 dataset_size: 59397 --- # Dataset Card for "MULTI_VALUE_stsb_relativizer_where" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
CyberHarem/dejima_sayaka_seitokaiyakuindomo
--- license: mit task_categories: - text-to-image tags: - art - not-for-all-audiences size_categories: - n<1K --- # Dataset of Dejima Sayaka (Seitokai Yakuindomo) This is the dataset of Dejima Sayaka (Seitokai Yakuindomo), containing 126 images and their tags. The core tags of this character are `black_hair, ponytail, maid_headdress`, which are pruned in this dataset. Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)). ## List of Packages | Name | Images | Size | Download | Type | Description | |:-----------------|---------:|:-----------|:----------------------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------| | raw | 126 | 62.86 MiB | [Download](https://huggingface.co/datasets/CyberHarem/dejima_sayaka_seitokaiyakuindomo/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). | | 800 | 126 | 53.02 MiB | [Download](https://huggingface.co/datasets/CyberHarem/dejima_sayaka_seitokaiyakuindomo/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. | | stage3-p480-800 | 258 | 100.53 MiB | [Download](https://huggingface.co/datasets/CyberHarem/dejima_sayaka_seitokaiyakuindomo/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. | | 1200 | 126 | 62.82 MiB | [Download](https://huggingface.co/datasets/CyberHarem/dejima_sayaka_seitokaiyakuindomo/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. 
| | stage3-p480-1200 | 258 | 114.84 MiB | [Download](https://huggingface.co/datasets/CyberHarem/dejima_sayaka_seitokaiyakuindomo/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. | ### Load Raw Dataset with Waifuc We provide the raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need it, just run the following code: ```python import os import zipfile from huggingface_hub import hf_hub_download from waifuc.source import LocalSource # download raw archive file zip_file = hf_hub_download( repo_id='CyberHarem/dejima_sayaka_seitokaiyakuindomo', repo_type='dataset', filename='dataset-raw.zip', ) # extract files to your directory dataset_dir = 'dataset_dir' os.makedirs(dataset_dir, exist_ok=True) with zipfile.ZipFile(zip_file, 'r') as zf: zf.extractall(dataset_dir) # load the dataset with waifuc source = LocalSource(dataset_dir) for item in source: print(item.image, item.meta['filename'], item.meta['tags']) ``` ## List of Clusters List of tag clustering results; some outfits may be mined here. 
### Raw Text Version | # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags | |----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------------| | 0 | 12 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | 1girl, apron, maid, solo, closed_eyes | ### Table Version | # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | apron | maid | solo | closed_eyes | |----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:--------|:-------|:-------|:--------------| | 0 | 12 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | X | X | X | X | X |
yangwang825/sst2-remove-non-stopwords-n2
--- configs: - config_name: default data_files: - split: train path: data/train-* - split: validation path: data/validation-* - split: test path: data/test-* dataset_info: features: - name: text dtype: string - name: label dtype: int64 - name: label_text dtype: string splits: - name: train num_bytes: 884164 num_examples: 6920 - name: validation num_bytes: 112712 num_examples: 872 - name: test num_bytes: 208473 num_examples: 1821 download_size: 713427 dataset_size: 1205349 --- # Dataset Card for "sst2-remove-non-stopwords-n2" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
GroundCtrl/ColonoFalando
--- license: openrail ---
kartai/DX_datasett
--- language: - 'no' pretty_name: DX_Dataset tags: - map - tif - laz - fgb - geojson size_categories: - n<1K --- ## Overview This dataset contains orthophoto (Orthofoto) and LiDAR (Laser) data, organized into folders named after the area and year they are from. ## Dataset Structure - Geodata - Kristiansand.zip (example) - fgb - Vann_22.fgb... - geojson - Vann_22.geojson... - Ortofoto - Agder_og_Telemark_2021.zip (example) - Agder_og_Telemark_2021.zip_mosaic_cog.tif - Laser - Bergen_2pkt_2010 - Bergen_2pkt_2010_mosaic.laz ## Usage So far the dataset has only been used in QGIS to display the Ortofoto, Laser data and Geodata.
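Outside QGIS, the folder layout above can also be traversed from a script once the zip archives are extracted. A minimal sketch (the `index_dataset` helper and the root path are assumptions for illustration, not part of the dataset):

```python
from pathlib import Path

def index_dataset(root):
    """Index an extracted DX_datasett tree as described above:
    vector geodata (.fgb / .geojson), orthophoto mosaics (.tif)
    and LiDAR point clouds (.laz)."""
    root = Path(root)
    return {
        "geodata": sorted(p.name for p in (root / "Geodata").rglob("*")
                          if p.suffix in {".fgb", ".geojson"}),
        "ortofoto": sorted(p.name for p in (root / "Ortofoto").rglob("*.tif")),
        "laser": sorted(p.name for p in (root / "Laser").rglob("*.laz")),
    }
```

The same .tif and .laz files can simply be dragged into a QGIS project; the index above is only a convenience for scripted workflows.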
Coleybrown/2022_Fantasy_Football
--- license: mit ---
carexl8/telegram_he_ru
--- dataset_info: features: - name: id dtype: string - name: name dtype: string - name: time dtype: string - name: text dtype: string - name: tokens sequence: string - name: language tags sequence: int64 - name: __index_level_0__ dtype: int64 splits: - name: train num_bytes: 30629039 num_examples: 43336 download_size: 8829228 dataset_size: 30629039 --- # Dataset Card for "telegram_he_ru" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
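Each row pairs a `tokens` sequence with a parallel `language tags` sequence of integers, which suggests per-token language labels for Hebrew/Russian code-switched messages. A minimal sketch of grouping such a row into contiguous same-language spans (the `language_spans` helper and the meaning of the integer codes are assumptions, not documented by the dataset):

```python
def language_spans(tokens, tags):
    """Group parallel tokens/tags into (tag, [tokens...]) runs of the
    same language id, e.g. to inspect code-switch points in a message."""
    spans = []
    for token, tag in zip(tokens, tags):
        if spans and spans[-1][0] == tag:
            spans[-1][1].append(token)  # extend the current run
        else:
            spans.append((tag, [token]))  # start a new run on a tag change
    return spans
```

For example, a row with tags `[0, 1, 1]` would yield two spans: one single-token run for tag `0` and a two-token run for tag `1`.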
open-llm-leaderboard/details_Eric111__CatunaMayo-DPO
--- pretty_name: Evaluation run of Eric111/CatunaMayo-DPO dataset_summary: "Dataset automatically created during the evaluation run of model\ \ [Eric111/CatunaMayo-DPO](https://huggingface.co/Eric111/CatunaMayo-DPO) on the\ \ [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\ \nThe dataset is composed of 63 configurations, each one corresponding to one of the\ \ evaluated tasks.\n\nThe dataset has been created from 1 run(s). Each run can be\ \ found as a specific split in each configuration, the split being named using the\ \ timestamp of the run. The \"train\" split is always pointing to the latest results.\n\ \nAn additional configuration \"results\" stores all the aggregated results of the\ \ run (and is used to compute and display the aggregated metrics on the [Open LLM\ \ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\ \nTo load the details from a run, you can for instance do the following:\n```python\n\ from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_Eric111__CatunaMayo-DPO\"\ ,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\ These are the [latest results from run 2024-02-22T13:18:56.955215](https://huggingface.co/datasets/open-llm-leaderboard/details_Eric111__CatunaMayo-DPO/blob/main/results_2024-02-22T13-18-56.955215.json) (note\ \ that there might be results for other tasks in the repos if successive evals didn't\ \ cover the same tasks. 
You find each in the results and the \"latest\" split for\ \ each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.6577790291030743,\n\ \ \"acc_stderr\": 0.032013877347361414,\n \"acc_norm\": 0.6576229750833348,\n\ \ \"acc_norm_stderr\": 0.03267679579600163,\n \"mc1\": 0.5740514075887393,\n\ \ \"mc1_stderr\": 0.01731047190407654,\n \"mc2\": 0.7181865802621098,\n\ \ \"mc2_stderr\": 0.014625726414913819\n },\n \"harness|arc:challenge|25\"\ : {\n \"acc\": 0.7047781569965871,\n \"acc_stderr\": 0.013329750293382316,\n\ \ \"acc_norm\": 0.7286689419795221,\n \"acc_norm_stderr\": 0.012993807727545796\n\ \ },\n \"harness|hellaswag|10\": {\n \"acc\": 0.7024497112129058,\n\ \ \"acc_stderr\": 0.0045624626655052306,\n \"acc_norm\": 0.8829914359689305,\n\ \ \"acc_norm_stderr\": 0.0032077357692780455\n },\n \"harness|hendrycksTest-abstract_algebra|5\"\ : {\n \"acc\": 0.39,\n \"acc_stderr\": 0.04902071300001975,\n \ \ \"acc_norm\": 0.39,\n \"acc_norm_stderr\": 0.04902071300001975\n \ \ },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.674074074074074,\n\ \ \"acc_stderr\": 0.040491220417025055,\n \"acc_norm\": 0.674074074074074,\n\ \ \"acc_norm_stderr\": 0.040491220417025055\n },\n \"harness|hendrycksTest-astronomy|5\"\ : {\n \"acc\": 0.6710526315789473,\n \"acc_stderr\": 0.03823428969926605,\n\ \ \"acc_norm\": 0.6710526315789473,\n \"acc_norm_stderr\": 0.03823428969926605\n\ \ },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.65,\n\ \ \"acc_stderr\": 0.0479372485441102,\n \"acc_norm\": 0.65,\n \ \ \"acc_norm_stderr\": 0.0479372485441102\n },\n \"harness|hendrycksTest-clinical_knowledge|5\"\ : {\n \"acc\": 0.7169811320754716,\n \"acc_stderr\": 0.027724236492700918,\n\ \ \"acc_norm\": 0.7169811320754716,\n \"acc_norm_stderr\": 0.027724236492700918\n\ \ },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.7777777777777778,\n\ \ \"acc_stderr\": 0.03476590104304134,\n \"acc_norm\": 0.7777777777777778,\n\ \ \"acc_norm_stderr\": 0.03476590104304134\n 
},\n \"harness|hendrycksTest-college_chemistry|5\"\ : {\n \"acc\": 0.46,\n \"acc_stderr\": 0.05009082659620333,\n \ \ \"acc_norm\": 0.46,\n \"acc_norm_stderr\": 0.05009082659620333\n \ \ },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\"\ : 0.57,\n \"acc_stderr\": 0.04975698519562428,\n \"acc_norm\": 0.57,\n\ \ \"acc_norm_stderr\": 0.04975698519562428\n },\n \"harness|hendrycksTest-college_mathematics|5\"\ : {\n \"acc\": 0.3,\n \"acc_stderr\": 0.046056618647183814,\n \ \ \"acc_norm\": 0.3,\n \"acc_norm_stderr\": 0.046056618647183814\n \ \ },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.6820809248554913,\n\ \ \"acc_stderr\": 0.0355068398916558,\n \"acc_norm\": 0.6820809248554913,\n\ \ \"acc_norm_stderr\": 0.0355068398916558\n },\n \"harness|hendrycksTest-college_physics|5\"\ : {\n \"acc\": 0.45098039215686275,\n \"acc_stderr\": 0.049512182523962625,\n\ \ \"acc_norm\": 0.45098039215686275,\n \"acc_norm_stderr\": 0.049512182523962625\n\ \ },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\":\ \ 0.74,\n \"acc_stderr\": 0.04408440022768078,\n \"acc_norm\": 0.74,\n\ \ \"acc_norm_stderr\": 0.04408440022768078\n },\n \"harness|hendrycksTest-conceptual_physics|5\"\ : {\n \"acc\": 0.5787234042553191,\n \"acc_stderr\": 0.03227834510146268,\n\ \ \"acc_norm\": 0.5787234042553191,\n \"acc_norm_stderr\": 0.03227834510146268\n\ \ },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.4649122807017544,\n\ \ \"acc_stderr\": 0.046920083813689104,\n \"acc_norm\": 0.4649122807017544,\n\ \ \"acc_norm_stderr\": 0.046920083813689104\n },\n \"harness|hendrycksTest-electrical_engineering|5\"\ : {\n \"acc\": 0.5586206896551724,\n \"acc_stderr\": 0.04137931034482757,\n\ \ \"acc_norm\": 0.5586206896551724,\n \"acc_norm_stderr\": 0.04137931034482757\n\ \ },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\"\ : 0.41005291005291006,\n \"acc_stderr\": 0.025331202438944437,\n \"\ acc_norm\": 0.41005291005291006,\n 
\"acc_norm_stderr\": 0.025331202438944437\n\ \ },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.47619047619047616,\n\ \ \"acc_stderr\": 0.04467062628403273,\n \"acc_norm\": 0.47619047619047616,\n\ \ \"acc_norm_stderr\": 0.04467062628403273\n },\n \"harness|hendrycksTest-global_facts|5\"\ : {\n \"acc\": 0.35,\n \"acc_stderr\": 0.04793724854411019,\n \ \ \"acc_norm\": 0.35,\n \"acc_norm_stderr\": 0.04793724854411019\n \ \ },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.7903225806451613,\n\ \ \"acc_stderr\": 0.023157879349083525,\n \"acc_norm\": 0.7903225806451613,\n\ \ \"acc_norm_stderr\": 0.023157879349083525\n },\n \"harness|hendrycksTest-high_school_chemistry|5\"\ : {\n \"acc\": 0.5024630541871922,\n \"acc_stderr\": 0.035179450386910616,\n\ \ \"acc_norm\": 0.5024630541871922,\n \"acc_norm_stderr\": 0.035179450386910616\n\ \ },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \ \ \"acc\": 0.72,\n \"acc_stderr\": 0.04512608598542127,\n \"acc_norm\"\ : 0.72,\n \"acc_norm_stderr\": 0.04512608598542127\n },\n \"harness|hendrycksTest-high_school_european_history|5\"\ : {\n \"acc\": 0.7515151515151515,\n \"acc_stderr\": 0.033744026441394036,\n\ \ \"acc_norm\": 0.7515151515151515,\n \"acc_norm_stderr\": 0.033744026441394036\n\ \ },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\"\ : 0.7929292929292929,\n \"acc_stderr\": 0.028869778460267045,\n \"\ acc_norm\": 0.7929292929292929,\n \"acc_norm_stderr\": 0.028869778460267045\n\ \ },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n\ \ \"acc\": 0.8963730569948186,\n \"acc_stderr\": 0.02199531196364424,\n\ \ \"acc_norm\": 0.8963730569948186,\n \"acc_norm_stderr\": 0.02199531196364424\n\ \ },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \ \ \"acc\": 0.6666666666666666,\n \"acc_stderr\": 0.023901157979402534,\n\ \ \"acc_norm\": 0.6666666666666666,\n \"acc_norm_stderr\": 0.023901157979402534\n\ \ },\n 
\"harness|hendrycksTest-high_school_mathematics|5\": {\n \"\ acc\": 0.35185185185185186,\n \"acc_stderr\": 0.029116617606083008,\n \ \ \"acc_norm\": 0.35185185185185186,\n \"acc_norm_stderr\": 0.029116617606083008\n\ \ },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \ \ \"acc\": 0.6890756302521008,\n \"acc_stderr\": 0.03006676158297794,\n \ \ \"acc_norm\": 0.6890756302521008,\n \"acc_norm_stderr\": 0.03006676158297794\n\ \ },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\"\ : 0.3576158940397351,\n \"acc_stderr\": 0.03913453431177258,\n \"\ acc_norm\": 0.3576158940397351,\n \"acc_norm_stderr\": 0.03913453431177258\n\ \ },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\"\ : 0.8403669724770643,\n \"acc_stderr\": 0.015703498348461763,\n \"\ acc_norm\": 0.8403669724770643,\n \"acc_norm_stderr\": 0.015703498348461763\n\ \ },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\"\ : 0.5324074074074074,\n \"acc_stderr\": 0.03402801581358966,\n \"\ acc_norm\": 0.5324074074074074,\n \"acc_norm_stderr\": 0.03402801581358966\n\ \ },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\"\ : 0.8480392156862745,\n \"acc_stderr\": 0.0251956584289318,\n \"acc_norm\"\ : 0.8480392156862745,\n \"acc_norm_stderr\": 0.0251956584289318\n },\n\ \ \"harness|hendrycksTest-high_school_world_history|5\": {\n \"acc\":\ \ 0.8185654008438819,\n \"acc_stderr\": 0.02508596114457966,\n \"\ acc_norm\": 0.8185654008438819,\n \"acc_norm_stderr\": 0.02508596114457966\n\ \ },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.6905829596412556,\n\ \ \"acc_stderr\": 0.03102441174057221,\n \"acc_norm\": 0.6905829596412556,\n\ \ \"acc_norm_stderr\": 0.03102441174057221\n },\n \"harness|hendrycksTest-human_sexuality|5\"\ : {\n \"acc\": 0.7938931297709924,\n \"acc_stderr\": 0.035477710041594654,\n\ \ \"acc_norm\": 0.7938931297709924,\n \"acc_norm_stderr\": 0.035477710041594654\n\ \ },\n \"harness|hendrycksTest-international_law|5\": 
{\n \"acc\":\ \ 0.7933884297520661,\n \"acc_stderr\": 0.03695980128098824,\n \"\ acc_norm\": 0.7933884297520661,\n \"acc_norm_stderr\": 0.03695980128098824\n\ \ },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.7962962962962963,\n\ \ \"acc_stderr\": 0.03893542518824847,\n \"acc_norm\": 0.7962962962962963,\n\ \ \"acc_norm_stderr\": 0.03893542518824847\n },\n \"harness|hendrycksTest-logical_fallacies|5\"\ : {\n \"acc\": 0.7791411042944786,\n \"acc_stderr\": 0.03259177392742178,\n\ \ \"acc_norm\": 0.7791411042944786,\n \"acc_norm_stderr\": 0.03259177392742178\n\ \ },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.44642857142857145,\n\ \ \"acc_stderr\": 0.04718471485219588,\n \"acc_norm\": 0.44642857142857145,\n\ \ \"acc_norm_stderr\": 0.04718471485219588\n },\n \"harness|hendrycksTest-management|5\"\ : {\n \"acc\": 0.7766990291262136,\n \"acc_stderr\": 0.04123553189891431,\n\ \ \"acc_norm\": 0.7766990291262136,\n \"acc_norm_stderr\": 0.04123553189891431\n\ \ },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.8760683760683761,\n\ \ \"acc_stderr\": 0.021586494001281365,\n \"acc_norm\": 0.8760683760683761,\n\ \ \"acc_norm_stderr\": 0.021586494001281365\n },\n \"harness|hendrycksTest-medical_genetics|5\"\ : {\n \"acc\": 0.7,\n \"acc_stderr\": 0.046056618647183814,\n \ \ \"acc_norm\": 0.7,\n \"acc_norm_stderr\": 0.046056618647183814\n \ \ },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.8339719029374202,\n\ \ \"acc_stderr\": 0.013306478243066302,\n \"acc_norm\": 0.8339719029374202,\n\ \ \"acc_norm_stderr\": 0.013306478243066302\n },\n \"harness|hendrycksTest-moral_disputes|5\"\ : {\n \"acc\": 0.7369942196531792,\n \"acc_stderr\": 0.023703099525258176,\n\ \ \"acc_norm\": 0.7369942196531792,\n \"acc_norm_stderr\": 0.023703099525258176\n\ \ },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.4335195530726257,\n\ \ \"acc_stderr\": 0.01657402721951763,\n \"acc_norm\": 0.4335195530726257,\n\ \ \"acc_norm_stderr\": 
0.01657402721951763\n },\n \"harness|hendrycksTest-nutrition|5\"\ : {\n \"acc\": 0.7222222222222222,\n \"acc_stderr\": 0.025646863097137897,\n\ \ \"acc_norm\": 0.7222222222222222,\n \"acc_norm_stderr\": 0.025646863097137897\n\ \ },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.7138263665594855,\n\ \ \"acc_stderr\": 0.025670259242188933,\n \"acc_norm\": 0.7138263665594855,\n\ \ \"acc_norm_stderr\": 0.025670259242188933\n },\n \"harness|hendrycksTest-prehistory|5\"\ : {\n \"acc\": 0.7623456790123457,\n \"acc_stderr\": 0.02368359183700856,\n\ \ \"acc_norm\": 0.7623456790123457,\n \"acc_norm_stderr\": 0.02368359183700856\n\ \ },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"\ acc\": 0.49645390070921985,\n \"acc_stderr\": 0.02982674915328092,\n \ \ \"acc_norm\": 0.49645390070921985,\n \"acc_norm_stderr\": 0.02982674915328092\n\ \ },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.4765319426336376,\n\ \ \"acc_stderr\": 0.01275616194252337,\n \"acc_norm\": 0.4765319426336376,\n\ \ \"acc_norm_stderr\": 0.01275616194252337\n },\n \"harness|hendrycksTest-professional_medicine|5\"\ : {\n \"acc\": 0.6948529411764706,\n \"acc_stderr\": 0.027971541370170595,\n\ \ \"acc_norm\": 0.6948529411764706,\n \"acc_norm_stderr\": 0.027971541370170595\n\ \ },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"\ acc\": 0.6797385620915033,\n \"acc_stderr\": 0.018875682938069443,\n \ \ \"acc_norm\": 0.6797385620915033,\n \"acc_norm_stderr\": 0.018875682938069443\n\ \ },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.6909090909090909,\n\ \ \"acc_stderr\": 0.044262946482000985,\n \"acc_norm\": 0.6909090909090909,\n\ \ \"acc_norm_stderr\": 0.044262946482000985\n },\n \"harness|hendrycksTest-security_studies|5\"\ : {\n \"acc\": 0.7224489795918367,\n \"acc_stderr\": 0.028666857790274648,\n\ \ \"acc_norm\": 0.7224489795918367,\n \"acc_norm_stderr\": 0.028666857790274648\n\ \ },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 
0.835820895522388,\n\ \ \"acc_stderr\": 0.02619392354445412,\n \"acc_norm\": 0.835820895522388,\n\ \ \"acc_norm_stderr\": 0.02619392354445412\n },\n \"harness|hendrycksTest-us_foreign_policy|5\"\ : {\n \"acc\": 0.86,\n \"acc_stderr\": 0.0348735088019777,\n \ \ \"acc_norm\": 0.86,\n \"acc_norm_stderr\": 0.0348735088019777\n },\n\ \ \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.5542168674698795,\n\ \ \"acc_stderr\": 0.03869543323472101,\n \"acc_norm\": 0.5542168674698795,\n\ \ \"acc_norm_stderr\": 0.03869543323472101\n },\n \"harness|hendrycksTest-world_religions|5\"\ : {\n \"acc\": 0.8421052631578947,\n \"acc_stderr\": 0.027966785859160893,\n\ \ \"acc_norm\": 0.8421052631578947,\n \"acc_norm_stderr\": 0.027966785859160893\n\ \ },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.5740514075887393,\n\ \ \"mc1_stderr\": 0.01731047190407654,\n \"mc2\": 0.7181865802621098,\n\ \ \"mc2_stderr\": 0.014625726414913819\n },\n \"harness|winogrande|5\"\ : {\n \"acc\": 0.8271507498026835,\n \"acc_stderr\": 0.010626964529971859\n\ \ },\n \"harness|gsm8k|5\": {\n \"acc\": 0.7020470053070508,\n \ \ \"acc_stderr\": 0.012597932232914525\n }\n}\n```" repo_url: https://huggingface.co/Eric111/CatunaMayo-DPO leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard point_of_contact: clementine@hf.co configs: - config_name: harness_arc_challenge_25 data_files: - split: 2024_02_22T13_18_56.955215 path: - '**/details_harness|arc:challenge|25_2024-02-22T13-18-56.955215.parquet' - split: latest path: - '**/details_harness|arc:challenge|25_2024-02-22T13-18-56.955215.parquet' - config_name: harness_gsm8k_5 data_files: - split: 2024_02_22T13_18_56.955215 path: - '**/details_harness|gsm8k|5_2024-02-22T13-18-56.955215.parquet' - split: latest path: - '**/details_harness|gsm8k|5_2024-02-22T13-18-56.955215.parquet' - config_name: harness_hellaswag_10 data_files: - split: 2024_02_22T13_18_56.955215 path: - '**/details_harness|hellaswag|10_2024-02-22T13-18-56.955215.parquet' 
- split: latest path: - '**/details_harness|hellaswag|10_2024-02-22T13-18-56.955215.parquet' - config_name: harness_hendrycksTest_5 data_files: - split: 2024_02_22T13_18_56.955215 path: - '**/details_harness|hendrycksTest-abstract_algebra|5_2024-02-22T13-18-56.955215.parquet' - '**/details_harness|hendrycksTest-anatomy|5_2024-02-22T13-18-56.955215.parquet' - '**/details_harness|hendrycksTest-astronomy|5_2024-02-22T13-18-56.955215.parquet' - '**/details_harness|hendrycksTest-business_ethics|5_2024-02-22T13-18-56.955215.parquet' - '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-02-22T13-18-56.955215.parquet' - '**/details_harness|hendrycksTest-college_biology|5_2024-02-22T13-18-56.955215.parquet' - '**/details_harness|hendrycksTest-college_chemistry|5_2024-02-22T13-18-56.955215.parquet' - '**/details_harness|hendrycksTest-college_computer_science|5_2024-02-22T13-18-56.955215.parquet' - '**/details_harness|hendrycksTest-college_mathematics|5_2024-02-22T13-18-56.955215.parquet' - '**/details_harness|hendrycksTest-college_medicine|5_2024-02-22T13-18-56.955215.parquet' - '**/details_harness|hendrycksTest-college_physics|5_2024-02-22T13-18-56.955215.parquet' - '**/details_harness|hendrycksTest-computer_security|5_2024-02-22T13-18-56.955215.parquet' - '**/details_harness|hendrycksTest-conceptual_physics|5_2024-02-22T13-18-56.955215.parquet' - '**/details_harness|hendrycksTest-econometrics|5_2024-02-22T13-18-56.955215.parquet' - '**/details_harness|hendrycksTest-electrical_engineering|5_2024-02-22T13-18-56.955215.parquet' - '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-02-22T13-18-56.955215.parquet' - '**/details_harness|hendrycksTest-formal_logic|5_2024-02-22T13-18-56.955215.parquet' - '**/details_harness|hendrycksTest-global_facts|5_2024-02-22T13-18-56.955215.parquet' - '**/details_harness|hendrycksTest-high_school_biology|5_2024-02-22T13-18-56.955215.parquet' - 
'**/details_harness|hendrycksTest-high_school_chemistry|5_2024-02-22T13-18-56.955215.parquet' - '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-02-22T13-18-56.955215.parquet' - '**/details_harness|hendrycksTest-high_school_european_history|5_2024-02-22T13-18-56.955215.parquet' - '**/details_harness|hendrycksTest-high_school_geography|5_2024-02-22T13-18-56.955215.parquet' - '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-02-22T13-18-56.955215.parquet' - '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-02-22T13-18-56.955215.parquet' - '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-02-22T13-18-56.955215.parquet' - '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-02-22T13-18-56.955215.parquet' - '**/details_harness|hendrycksTest-high_school_physics|5_2024-02-22T13-18-56.955215.parquet' - '**/details_harness|hendrycksTest-high_school_psychology|5_2024-02-22T13-18-56.955215.parquet' - '**/details_harness|hendrycksTest-high_school_statistics|5_2024-02-22T13-18-56.955215.parquet' - '**/details_harness|hendrycksTest-high_school_us_history|5_2024-02-22T13-18-56.955215.parquet' - '**/details_harness|hendrycksTest-high_school_world_history|5_2024-02-22T13-18-56.955215.parquet' - '**/details_harness|hendrycksTest-human_aging|5_2024-02-22T13-18-56.955215.parquet' - '**/details_harness|hendrycksTest-human_sexuality|5_2024-02-22T13-18-56.955215.parquet' - '**/details_harness|hendrycksTest-international_law|5_2024-02-22T13-18-56.955215.parquet' - '**/details_harness|hendrycksTest-jurisprudence|5_2024-02-22T13-18-56.955215.parquet' - '**/details_harness|hendrycksTest-logical_fallacies|5_2024-02-22T13-18-56.955215.parquet' - '**/details_harness|hendrycksTest-machine_learning|5_2024-02-22T13-18-56.955215.parquet' - '**/details_harness|hendrycksTest-management|5_2024-02-22T13-18-56.955215.parquet' - '**/details_harness|hendrycksTest-marketing|5_2024-02-22T13-18-56.955215.parquet' 
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-02-22T13-18-56.955215.parquet' - '**/details_harness|hendrycksTest-miscellaneous|5_2024-02-22T13-18-56.955215.parquet' - '**/details_harness|hendrycksTest-moral_disputes|5_2024-02-22T13-18-56.955215.parquet' - '**/details_harness|hendrycksTest-moral_scenarios|5_2024-02-22T13-18-56.955215.parquet' - '**/details_harness|hendrycksTest-nutrition|5_2024-02-22T13-18-56.955215.parquet' - '**/details_harness|hendrycksTest-philosophy|5_2024-02-22T13-18-56.955215.parquet' - '**/details_harness|hendrycksTest-prehistory|5_2024-02-22T13-18-56.955215.parquet' - '**/details_harness|hendrycksTest-professional_accounting|5_2024-02-22T13-18-56.955215.parquet' - '**/details_harness|hendrycksTest-professional_law|5_2024-02-22T13-18-56.955215.parquet' - '**/details_harness|hendrycksTest-professional_medicine|5_2024-02-22T13-18-56.955215.parquet' - '**/details_harness|hendrycksTest-professional_psychology|5_2024-02-22T13-18-56.955215.parquet' - '**/details_harness|hendrycksTest-public_relations|5_2024-02-22T13-18-56.955215.parquet' - '**/details_harness|hendrycksTest-security_studies|5_2024-02-22T13-18-56.955215.parquet' - '**/details_harness|hendrycksTest-sociology|5_2024-02-22T13-18-56.955215.parquet' - '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-02-22T13-18-56.955215.parquet' - '**/details_harness|hendrycksTest-virology|5_2024-02-22T13-18-56.955215.parquet' - '**/details_harness|hendrycksTest-world_religions|5_2024-02-22T13-18-56.955215.parquet' - split: latest path: - '**/details_harness|hendrycksTest-abstract_algebra|5_2024-02-22T13-18-56.955215.parquet' - '**/details_harness|hendrycksTest-anatomy|5_2024-02-22T13-18-56.955215.parquet' - '**/details_harness|hendrycksTest-astronomy|5_2024-02-22T13-18-56.955215.parquet' - '**/details_harness|hendrycksTest-business_ethics|5_2024-02-22T13-18-56.955215.parquet' - '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-02-22T13-18-56.955215.parquet' - 
'**/details_harness|hendrycksTest-college_biology|5_2024-02-22T13-18-56.955215.parquet' - '**/details_harness|hendrycksTest-college_chemistry|5_2024-02-22T13-18-56.955215.parquet' - '**/details_harness|hendrycksTest-college_computer_science|5_2024-02-22T13-18-56.955215.parquet' - '**/details_harness|hendrycksTest-college_mathematics|5_2024-02-22T13-18-56.955215.parquet' - '**/details_harness|hendrycksTest-college_medicine|5_2024-02-22T13-18-56.955215.parquet' - '**/details_harness|hendrycksTest-college_physics|5_2024-02-22T13-18-56.955215.parquet' - '**/details_harness|hendrycksTest-computer_security|5_2024-02-22T13-18-56.955215.parquet' - '**/details_harness|hendrycksTest-conceptual_physics|5_2024-02-22T13-18-56.955215.parquet' - '**/details_harness|hendrycksTest-econometrics|5_2024-02-22T13-18-56.955215.parquet' - '**/details_harness|hendrycksTest-electrical_engineering|5_2024-02-22T13-18-56.955215.parquet' - '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-02-22T13-18-56.955215.parquet' - '**/details_harness|hendrycksTest-formal_logic|5_2024-02-22T13-18-56.955215.parquet' - '**/details_harness|hendrycksTest-global_facts|5_2024-02-22T13-18-56.955215.parquet' - '**/details_harness|hendrycksTest-high_school_biology|5_2024-02-22T13-18-56.955215.parquet' - '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-02-22T13-18-56.955215.parquet' - '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-02-22T13-18-56.955215.parquet' - '**/details_harness|hendrycksTest-high_school_european_history|5_2024-02-22T13-18-56.955215.parquet' - '**/details_harness|hendrycksTest-high_school_geography|5_2024-02-22T13-18-56.955215.parquet' - '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-02-22T13-18-56.955215.parquet' - '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-02-22T13-18-56.955215.parquet' - '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-02-22T13-18-56.955215.parquet' - 
'**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-02-22T13-18-56.955215.parquet' - '**/details_harness|hendrycksTest-high_school_physics|5_2024-02-22T13-18-56.955215.parquet' - '**/details_harness|hendrycksTest-high_school_psychology|5_2024-02-22T13-18-56.955215.parquet' - '**/details_harness|hendrycksTest-high_school_statistics|5_2024-02-22T13-18-56.955215.parquet' - '**/details_harness|hendrycksTest-high_school_us_history|5_2024-02-22T13-18-56.955215.parquet' - '**/details_harness|hendrycksTest-high_school_world_history|5_2024-02-22T13-18-56.955215.parquet' - '**/details_harness|hendrycksTest-human_aging|5_2024-02-22T13-18-56.955215.parquet' - '**/details_harness|hendrycksTest-human_sexuality|5_2024-02-22T13-18-56.955215.parquet' - '**/details_harness|hendrycksTest-international_law|5_2024-02-22T13-18-56.955215.parquet' - '**/details_harness|hendrycksTest-jurisprudence|5_2024-02-22T13-18-56.955215.parquet' - '**/details_harness|hendrycksTest-logical_fallacies|5_2024-02-22T13-18-56.955215.parquet' - '**/details_harness|hendrycksTest-machine_learning|5_2024-02-22T13-18-56.955215.parquet' - '**/details_harness|hendrycksTest-management|5_2024-02-22T13-18-56.955215.parquet' - '**/details_harness|hendrycksTest-marketing|5_2024-02-22T13-18-56.955215.parquet' - '**/details_harness|hendrycksTest-medical_genetics|5_2024-02-22T13-18-56.955215.parquet' - '**/details_harness|hendrycksTest-miscellaneous|5_2024-02-22T13-18-56.955215.parquet' - '**/details_harness|hendrycksTest-moral_disputes|5_2024-02-22T13-18-56.955215.parquet' - '**/details_harness|hendrycksTest-moral_scenarios|5_2024-02-22T13-18-56.955215.parquet' - '**/details_harness|hendrycksTest-nutrition|5_2024-02-22T13-18-56.955215.parquet' - '**/details_harness|hendrycksTest-philosophy|5_2024-02-22T13-18-56.955215.parquet' - '**/details_harness|hendrycksTest-prehistory|5_2024-02-22T13-18-56.955215.parquet' - 
'**/details_harness|hendrycksTest-professional_accounting|5_2024-02-22T13-18-56.955215.parquet' - '**/details_harness|hendrycksTest-professional_law|5_2024-02-22T13-18-56.955215.parquet' - '**/details_harness|hendrycksTest-professional_medicine|5_2024-02-22T13-18-56.955215.parquet' - '**/details_harness|hendrycksTest-professional_psychology|5_2024-02-22T13-18-56.955215.parquet' - '**/details_harness|hendrycksTest-public_relations|5_2024-02-22T13-18-56.955215.parquet' - '**/details_harness|hendrycksTest-security_studies|5_2024-02-22T13-18-56.955215.parquet' - '**/details_harness|hendrycksTest-sociology|5_2024-02-22T13-18-56.955215.parquet' - '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-02-22T13-18-56.955215.parquet' - '**/details_harness|hendrycksTest-virology|5_2024-02-22T13-18-56.955215.parquet' - '**/details_harness|hendrycksTest-world_religions|5_2024-02-22T13-18-56.955215.parquet' - config_name: harness_hendrycksTest_abstract_algebra_5 data_files: - split: 2024_02_22T13_18_56.955215 path: - '**/details_harness|hendrycksTest-abstract_algebra|5_2024-02-22T13-18-56.955215.parquet' - split: latest path: - '**/details_harness|hendrycksTest-abstract_algebra|5_2024-02-22T13-18-56.955215.parquet' - config_name: harness_hendrycksTest_anatomy_5 data_files: - split: 2024_02_22T13_18_56.955215 path: - '**/details_harness|hendrycksTest-anatomy|5_2024-02-22T13-18-56.955215.parquet' - split: latest path: - '**/details_harness|hendrycksTest-anatomy|5_2024-02-22T13-18-56.955215.parquet' - config_name: harness_hendrycksTest_astronomy_5 data_files: - split: 2024_02_22T13_18_56.955215 path: - '**/details_harness|hendrycksTest-astronomy|5_2024-02-22T13-18-56.955215.parquet' - split: latest path: - '**/details_harness|hendrycksTest-astronomy|5_2024-02-22T13-18-56.955215.parquet' - config_name: harness_hendrycksTest_business_ethics_5 data_files: - split: 2024_02_22T13_18_56.955215 path: - 
'**/details_harness|hendrycksTest-business_ethics|5_2024-02-22T13-18-56.955215.parquet' - split: latest path: - '**/details_harness|hendrycksTest-business_ethics|5_2024-02-22T13-18-56.955215.parquet' - config_name: harness_hendrycksTest_clinical_knowledge_5 data_files: - split: 2024_02_22T13_18_56.955215 path: - '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-02-22T13-18-56.955215.parquet' - split: latest path: - '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-02-22T13-18-56.955215.parquet' - config_name: harness_hendrycksTest_college_biology_5 data_files: - split: 2024_02_22T13_18_56.955215 path: - '**/details_harness|hendrycksTest-college_biology|5_2024-02-22T13-18-56.955215.parquet' - split: latest path: - '**/details_harness|hendrycksTest-college_biology|5_2024-02-22T13-18-56.955215.parquet' - config_name: harness_hendrycksTest_college_chemistry_5 data_files: - split: 2024_02_22T13_18_56.955215 path: - '**/details_harness|hendrycksTest-college_chemistry|5_2024-02-22T13-18-56.955215.parquet' - split: latest path: - '**/details_harness|hendrycksTest-college_chemistry|5_2024-02-22T13-18-56.955215.parquet' - config_name: harness_hendrycksTest_college_computer_science_5 data_files: - split: 2024_02_22T13_18_56.955215 path: - '**/details_harness|hendrycksTest-college_computer_science|5_2024-02-22T13-18-56.955215.parquet' - split: latest path: - '**/details_harness|hendrycksTest-college_computer_science|5_2024-02-22T13-18-56.955215.parquet' - config_name: harness_hendrycksTest_college_mathematics_5 data_files: - split: 2024_02_22T13_18_56.955215 path: - '**/details_harness|hendrycksTest-college_mathematics|5_2024-02-22T13-18-56.955215.parquet' - split: latest path: - '**/details_harness|hendrycksTest-college_mathematics|5_2024-02-22T13-18-56.955215.parquet' - config_name: harness_hendrycksTest_college_medicine_5 data_files: - split: 2024_02_22T13_18_56.955215 path: - 
'**/details_harness|hendrycksTest-college_medicine|5_2024-02-22T13-18-56.955215.parquet' - split: latest path: - '**/details_harness|hendrycksTest-college_medicine|5_2024-02-22T13-18-56.955215.parquet' - config_name: harness_hendrycksTest_college_physics_5 data_files: - split: 2024_02_22T13_18_56.955215 path: - '**/details_harness|hendrycksTest-college_physics|5_2024-02-22T13-18-56.955215.parquet' - split: latest path: - '**/details_harness|hendrycksTest-college_physics|5_2024-02-22T13-18-56.955215.parquet' - config_name: harness_hendrycksTest_computer_security_5 data_files: - split: 2024_02_22T13_18_56.955215 path: - '**/details_harness|hendrycksTest-computer_security|5_2024-02-22T13-18-56.955215.parquet' - split: latest path: - '**/details_harness|hendrycksTest-computer_security|5_2024-02-22T13-18-56.955215.parquet' - config_name: harness_hendrycksTest_conceptual_physics_5 data_files: - split: 2024_02_22T13_18_56.955215 path: - '**/details_harness|hendrycksTest-conceptual_physics|5_2024-02-22T13-18-56.955215.parquet' - split: latest path: - '**/details_harness|hendrycksTest-conceptual_physics|5_2024-02-22T13-18-56.955215.parquet' - config_name: harness_hendrycksTest_econometrics_5 data_files: - split: 2024_02_22T13_18_56.955215 path: - '**/details_harness|hendrycksTest-econometrics|5_2024-02-22T13-18-56.955215.parquet' - split: latest path: - '**/details_harness|hendrycksTest-econometrics|5_2024-02-22T13-18-56.955215.parquet' - config_name: harness_hendrycksTest_electrical_engineering_5 data_files: - split: 2024_02_22T13_18_56.955215 path: - '**/details_harness|hendrycksTest-electrical_engineering|5_2024-02-22T13-18-56.955215.parquet' - split: latest path: - '**/details_harness|hendrycksTest-electrical_engineering|5_2024-02-22T13-18-56.955215.parquet' - config_name: harness_hendrycksTest_elementary_mathematics_5 data_files: - split: 2024_02_22T13_18_56.955215 path: - '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-02-22T13-18-56.955215.parquet' - 
split: latest path: - '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-02-22T13-18-56.955215.parquet' - config_name: harness_hendrycksTest_formal_logic_5 data_files: - split: 2024_02_22T13_18_56.955215 path: - '**/details_harness|hendrycksTest-formal_logic|5_2024-02-22T13-18-56.955215.parquet' - split: latest path: - '**/details_harness|hendrycksTest-formal_logic|5_2024-02-22T13-18-56.955215.parquet' - config_name: harness_hendrycksTest_global_facts_5 data_files: - split: 2024_02_22T13_18_56.955215 path: - '**/details_harness|hendrycksTest-global_facts|5_2024-02-22T13-18-56.955215.parquet' - split: latest path: - '**/details_harness|hendrycksTest-global_facts|5_2024-02-22T13-18-56.955215.parquet' - config_name: harness_hendrycksTest_high_school_biology_5 data_files: - split: 2024_02_22T13_18_56.955215 path: - '**/details_harness|hendrycksTest-high_school_biology|5_2024-02-22T13-18-56.955215.parquet' - split: latest path: - '**/details_harness|hendrycksTest-high_school_biology|5_2024-02-22T13-18-56.955215.parquet' - config_name: harness_hendrycksTest_high_school_chemistry_5 data_files: - split: 2024_02_22T13_18_56.955215 path: - '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-02-22T13-18-56.955215.parquet' - split: latest path: - '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-02-22T13-18-56.955215.parquet' - config_name: harness_hendrycksTest_high_school_computer_science_5 data_files: - split: 2024_02_22T13_18_56.955215 path: - '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-02-22T13-18-56.955215.parquet' - split: latest path: - '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-02-22T13-18-56.955215.parquet' - config_name: harness_hendrycksTest_high_school_european_history_5 data_files: - split: 2024_02_22T13_18_56.955215 path: - '**/details_harness|hendrycksTest-high_school_european_history|5_2024-02-22T13-18-56.955215.parquet' - split: latest path: - 
'**/details_harness|hendrycksTest-high_school_european_history|5_2024-02-22T13-18-56.955215.parquet' - config_name: harness_hendrycksTest_high_school_geography_5 data_files: - split: 2024_02_22T13_18_56.955215 path: - '**/details_harness|hendrycksTest-high_school_geography|5_2024-02-22T13-18-56.955215.parquet' - split: latest path: - '**/details_harness|hendrycksTest-high_school_geography|5_2024-02-22T13-18-56.955215.parquet' - config_name: harness_hendrycksTest_high_school_government_and_politics_5 data_files: - split: 2024_02_22T13_18_56.955215 path: - '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-02-22T13-18-56.955215.parquet' - split: latest path: - '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-02-22T13-18-56.955215.parquet' - config_name: harness_hendrycksTest_high_school_macroeconomics_5 data_files: - split: 2024_02_22T13_18_56.955215 path: - '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-02-22T13-18-56.955215.parquet' - split: latest path: - '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-02-22T13-18-56.955215.parquet' - config_name: harness_hendrycksTest_high_school_mathematics_5 data_files: - split: 2024_02_22T13_18_56.955215 path: - '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-02-22T13-18-56.955215.parquet' - split: latest path: - '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-02-22T13-18-56.955215.parquet' - config_name: harness_hendrycksTest_high_school_microeconomics_5 data_files: - split: 2024_02_22T13_18_56.955215 path: - '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-02-22T13-18-56.955215.parquet' - split: latest path: - '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-02-22T13-18-56.955215.parquet' - config_name: harness_hendrycksTest_high_school_physics_5 data_files: - split: 2024_02_22T13_18_56.955215 path: - 
'**/details_harness|hendrycksTest-high_school_physics|5_2024-02-22T13-18-56.955215.parquet' - split: latest path: - '**/details_harness|hendrycksTest-high_school_physics|5_2024-02-22T13-18-56.955215.parquet' - config_name: harness_hendrycksTest_high_school_psychology_5 data_files: - split: 2024_02_22T13_18_56.955215 path: - '**/details_harness|hendrycksTest-high_school_psychology|5_2024-02-22T13-18-56.955215.parquet' - split: latest path: - '**/details_harness|hendrycksTest-high_school_psychology|5_2024-02-22T13-18-56.955215.parquet' - config_name: harness_hendrycksTest_high_school_statistics_5 data_files: - split: 2024_02_22T13_18_56.955215 path: - '**/details_harness|hendrycksTest-high_school_statistics|5_2024-02-22T13-18-56.955215.parquet' - split: latest path: - '**/details_harness|hendrycksTest-high_school_statistics|5_2024-02-22T13-18-56.955215.parquet' - config_name: harness_hendrycksTest_high_school_us_history_5 data_files: - split: 2024_02_22T13_18_56.955215 path: - '**/details_harness|hendrycksTest-high_school_us_history|5_2024-02-22T13-18-56.955215.parquet' - split: latest path: - '**/details_harness|hendrycksTest-high_school_us_history|5_2024-02-22T13-18-56.955215.parquet' - config_name: harness_hendrycksTest_high_school_world_history_5 data_files: - split: 2024_02_22T13_18_56.955215 path: - '**/details_harness|hendrycksTest-high_school_world_history|5_2024-02-22T13-18-56.955215.parquet' - split: latest path: - '**/details_harness|hendrycksTest-high_school_world_history|5_2024-02-22T13-18-56.955215.parquet' - config_name: harness_hendrycksTest_human_aging_5 data_files: - split: 2024_02_22T13_18_56.955215 path: - '**/details_harness|hendrycksTest-human_aging|5_2024-02-22T13-18-56.955215.parquet' - split: latest path: - '**/details_harness|hendrycksTest-human_aging|5_2024-02-22T13-18-56.955215.parquet' - config_name: harness_hendrycksTest_human_sexuality_5 data_files: - split: 2024_02_22T13_18_56.955215 path: - 
'**/details_harness|hendrycksTest-human_sexuality|5_2024-02-22T13-18-56.955215.parquet' - split: latest path: - '**/details_harness|hendrycksTest-human_sexuality|5_2024-02-22T13-18-56.955215.parquet' - config_name: harness_hendrycksTest_international_law_5 data_files: - split: 2024_02_22T13_18_56.955215 path: - '**/details_harness|hendrycksTest-international_law|5_2024-02-22T13-18-56.955215.parquet' - split: latest path: - '**/details_harness|hendrycksTest-international_law|5_2024-02-22T13-18-56.955215.parquet' - config_name: harness_hendrycksTest_jurisprudence_5 data_files: - split: 2024_02_22T13_18_56.955215 path: - '**/details_harness|hendrycksTest-jurisprudence|5_2024-02-22T13-18-56.955215.parquet' - split: latest path: - '**/details_harness|hendrycksTest-jurisprudence|5_2024-02-22T13-18-56.955215.parquet' - config_name: harness_hendrycksTest_logical_fallacies_5 data_files: - split: 2024_02_22T13_18_56.955215 path: - '**/details_harness|hendrycksTest-logical_fallacies|5_2024-02-22T13-18-56.955215.parquet' - split: latest path: - '**/details_harness|hendrycksTest-logical_fallacies|5_2024-02-22T13-18-56.955215.parquet' - config_name: harness_hendrycksTest_machine_learning_5 data_files: - split: 2024_02_22T13_18_56.955215 path: - '**/details_harness|hendrycksTest-machine_learning|5_2024-02-22T13-18-56.955215.parquet' - split: latest path: - '**/details_harness|hendrycksTest-machine_learning|5_2024-02-22T13-18-56.955215.parquet' - config_name: harness_hendrycksTest_management_5 data_files: - split: 2024_02_22T13_18_56.955215 path: - '**/details_harness|hendrycksTest-management|5_2024-02-22T13-18-56.955215.parquet' - split: latest path: - '**/details_harness|hendrycksTest-management|5_2024-02-22T13-18-56.955215.parquet' - config_name: harness_hendrycksTest_marketing_5 data_files: - split: 2024_02_22T13_18_56.955215 path: - '**/details_harness|hendrycksTest-marketing|5_2024-02-22T13-18-56.955215.parquet' - split: latest path: - 
'**/details_harness|hendrycksTest-marketing|5_2024-02-22T13-18-56.955215.parquet' - config_name: harness_hendrycksTest_medical_genetics_5 data_files: - split: 2024_02_22T13_18_56.955215 path: - '**/details_harness|hendrycksTest-medical_genetics|5_2024-02-22T13-18-56.955215.parquet' - split: latest path: - '**/details_harness|hendrycksTest-medical_genetics|5_2024-02-22T13-18-56.955215.parquet' - config_name: harness_hendrycksTest_miscellaneous_5 data_files: - split: 2024_02_22T13_18_56.955215 path: - '**/details_harness|hendrycksTest-miscellaneous|5_2024-02-22T13-18-56.955215.parquet' - split: latest path: - '**/details_harness|hendrycksTest-miscellaneous|5_2024-02-22T13-18-56.955215.parquet' - config_name: harness_hendrycksTest_moral_disputes_5 data_files: - split: 2024_02_22T13_18_56.955215 path: - '**/details_harness|hendrycksTest-moral_disputes|5_2024-02-22T13-18-56.955215.parquet' - split: latest path: - '**/details_harness|hendrycksTest-moral_disputes|5_2024-02-22T13-18-56.955215.parquet' - config_name: harness_hendrycksTest_moral_scenarios_5 data_files: - split: 2024_02_22T13_18_56.955215 path: - '**/details_harness|hendrycksTest-moral_scenarios|5_2024-02-22T13-18-56.955215.parquet' - split: latest path: - '**/details_harness|hendrycksTest-moral_scenarios|5_2024-02-22T13-18-56.955215.parquet' - config_name: harness_hendrycksTest_nutrition_5 data_files: - split: 2024_02_22T13_18_56.955215 path: - '**/details_harness|hendrycksTest-nutrition|5_2024-02-22T13-18-56.955215.parquet' - split: latest path: - '**/details_harness|hendrycksTest-nutrition|5_2024-02-22T13-18-56.955215.parquet' - config_name: harness_hendrycksTest_philosophy_5 data_files: - split: 2024_02_22T13_18_56.955215 path: - '**/details_harness|hendrycksTest-philosophy|5_2024-02-22T13-18-56.955215.parquet' - split: latest path: - '**/details_harness|hendrycksTest-philosophy|5_2024-02-22T13-18-56.955215.parquet' - config_name: harness_hendrycksTest_prehistory_5 data_files: - split: 
2024_02_22T13_18_56.955215 path: - '**/details_harness|hendrycksTest-prehistory|5_2024-02-22T13-18-56.955215.parquet' - split: latest path: - '**/details_harness|hendrycksTest-prehistory|5_2024-02-22T13-18-56.955215.parquet' - config_name: harness_hendrycksTest_professional_accounting_5 data_files: - split: 2024_02_22T13_18_56.955215 path: - '**/details_harness|hendrycksTest-professional_accounting|5_2024-02-22T13-18-56.955215.parquet' - split: latest path: - '**/details_harness|hendrycksTest-professional_accounting|5_2024-02-22T13-18-56.955215.parquet' - config_name: harness_hendrycksTest_professional_law_5 data_files: - split: 2024_02_22T13_18_56.955215 path: - '**/details_harness|hendrycksTest-professional_law|5_2024-02-22T13-18-56.955215.parquet' - split: latest path: - '**/details_harness|hendrycksTest-professional_law|5_2024-02-22T13-18-56.955215.parquet' - config_name: harness_hendrycksTest_professional_medicine_5 data_files: - split: 2024_02_22T13_18_56.955215 path: - '**/details_harness|hendrycksTest-professional_medicine|5_2024-02-22T13-18-56.955215.parquet' - split: latest path: - '**/details_harness|hendrycksTest-professional_medicine|5_2024-02-22T13-18-56.955215.parquet' - config_name: harness_hendrycksTest_professional_psychology_5 data_files: - split: 2024_02_22T13_18_56.955215 path: - '**/details_harness|hendrycksTest-professional_psychology|5_2024-02-22T13-18-56.955215.parquet' - split: latest path: - '**/details_harness|hendrycksTest-professional_psychology|5_2024-02-22T13-18-56.955215.parquet' - config_name: harness_hendrycksTest_public_relations_5 data_files: - split: 2024_02_22T13_18_56.955215 path: - '**/details_harness|hendrycksTest-public_relations|5_2024-02-22T13-18-56.955215.parquet' - split: latest path: - '**/details_harness|hendrycksTest-public_relations|5_2024-02-22T13-18-56.955215.parquet' - config_name: harness_hendrycksTest_security_studies_5 data_files: - split: 2024_02_22T13_18_56.955215 path: - 
'**/details_harness|hendrycksTest-security_studies|5_2024-02-22T13-18-56.955215.parquet' - split: latest path: - '**/details_harness|hendrycksTest-security_studies|5_2024-02-22T13-18-56.955215.parquet' - config_name: harness_hendrycksTest_sociology_5 data_files: - split: 2024_02_22T13_18_56.955215 path: - '**/details_harness|hendrycksTest-sociology|5_2024-02-22T13-18-56.955215.parquet' - split: latest path: - '**/details_harness|hendrycksTest-sociology|5_2024-02-22T13-18-56.955215.parquet' - config_name: harness_hendrycksTest_us_foreign_policy_5 data_files: - split: 2024_02_22T13_18_56.955215 path: - '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-02-22T13-18-56.955215.parquet' - split: latest path: - '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-02-22T13-18-56.955215.parquet' - config_name: harness_hendrycksTest_virology_5 data_files: - split: 2024_02_22T13_18_56.955215 path: - '**/details_harness|hendrycksTest-virology|5_2024-02-22T13-18-56.955215.parquet' - split: latest path: - '**/details_harness|hendrycksTest-virology|5_2024-02-22T13-18-56.955215.parquet' - config_name: harness_hendrycksTest_world_religions_5 data_files: - split: 2024_02_22T13_18_56.955215 path: - '**/details_harness|hendrycksTest-world_religions|5_2024-02-22T13-18-56.955215.parquet' - split: latest path: - '**/details_harness|hendrycksTest-world_religions|5_2024-02-22T13-18-56.955215.parquet' - config_name: harness_truthfulqa_mc_0 data_files: - split: 2024_02_22T13_18_56.955215 path: - '**/details_harness|truthfulqa:mc|0_2024-02-22T13-18-56.955215.parquet' - split: latest path: - '**/details_harness|truthfulqa:mc|0_2024-02-22T13-18-56.955215.parquet' - config_name: harness_winogrande_5 data_files: - split: 2024_02_22T13_18_56.955215 path: - '**/details_harness|winogrande|5_2024-02-22T13-18-56.955215.parquet' - split: latest path: - '**/details_harness|winogrande|5_2024-02-22T13-18-56.955215.parquet' - config_name: results data_files: - split: 
2024_02_22T13_18_56.955215 path: - results_2024-02-22T13-18-56.955215.parquet - split: latest path: - results_2024-02-22T13-18-56.955215.parquet
---

# Dataset Card for Evaluation run of Eric111/CatunaMayo-DPO

<!-- Provide a quick summary of the dataset. -->

Dataset automatically created during the evaluation run of model [Eric111/CatunaMayo-DPO](https://huggingface.co/Eric111/CatunaMayo-DPO) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).

The dataset is composed of 63 configurations, each corresponding to one of the evaluated tasks.

The dataset has been created from 1 run. Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.

An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).

To load the details from a run, you can for instance do the following:

```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_Eric111__CatunaMayo-DPO", "harness_winogrande_5", split="train")
```

## Latest results

These are the [latest results from run 2024-02-22T13:18:56.955215](https://huggingface.co/datasets/open-llm-leaderboard/details_Eric111__CatunaMayo-DPO/blob/main/results_2024-02-22T13-18-56.955215.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks.
You can find each one in the results and under the "latest" split for each eval):

```python
{
    "all": {"acc": 0.6577790291030743, "acc_stderr": 0.032013877347361414, "acc_norm": 0.6576229750833348, "acc_norm_stderr": 0.03267679579600163, "mc1": 0.5740514075887393, "mc1_stderr": 0.01731047190407654, "mc2": 0.7181865802621098, "mc2_stderr": 0.014625726414913819},
    "harness|arc:challenge|25": {"acc": 0.7047781569965871, "acc_stderr": 0.013329750293382316, "acc_norm": 0.7286689419795221, "acc_norm_stderr": 0.012993807727545796},
    "harness|hellaswag|10": {"acc": 0.7024497112129058, "acc_stderr": 0.0045624626655052306, "acc_norm": 0.8829914359689305, "acc_norm_stderr": 0.0032077357692780455},
    "harness|hendrycksTest-abstract_algebra|5": {"acc": 0.39, "acc_stderr": 0.04902071300001975, "acc_norm": 0.39, "acc_norm_stderr": 0.04902071300001975},
    "harness|hendrycksTest-anatomy|5": {"acc": 0.674074074074074, "acc_stderr": 0.040491220417025055, "acc_norm": 0.674074074074074, "acc_norm_stderr": 0.040491220417025055},
    "harness|hendrycksTest-astronomy|5": {"acc": 0.6710526315789473, "acc_stderr": 0.03823428969926605, "acc_norm": 0.6710526315789473, "acc_norm_stderr": 0.03823428969926605},
    "harness|hendrycksTest-business_ethics|5": {"acc": 0.65, "acc_stderr": 0.0479372485441102, "acc_norm": 0.65, "acc_norm_stderr": 0.0479372485441102},
    "harness|hendrycksTest-clinical_knowledge|5": {"acc": 0.7169811320754716, "acc_stderr": 0.027724236492700918, "acc_norm": 0.7169811320754716, "acc_norm_stderr": 0.027724236492700918},
    "harness|hendrycksTest-college_biology|5": {"acc": 0.7777777777777778, "acc_stderr": 0.03476590104304134, "acc_norm": 0.7777777777777778, "acc_norm_stderr": 0.03476590104304134},
    "harness|hendrycksTest-college_chemistry|5": {"acc": 0.46, "acc_stderr": 0.05009082659620333, "acc_norm": 0.46, "acc_norm_stderr": 0.05009082659620333},
    "harness|hendrycksTest-college_computer_science|5": {"acc": 0.57, "acc_stderr": 0.04975698519562428, "acc_norm": 0.57, "acc_norm_stderr": 0.04975698519562428},
    "harness|hendrycksTest-college_mathematics|5": {"acc": 0.3, "acc_stderr": 0.046056618647183814, "acc_norm": 0.3, "acc_norm_stderr": 0.046056618647183814},
    "harness|hendrycksTest-college_medicine|5": {"acc": 0.6820809248554913, "acc_stderr": 0.0355068398916558, "acc_norm": 0.6820809248554913, "acc_norm_stderr": 0.0355068398916558},
    "harness|hendrycksTest-college_physics|5": {"acc": 0.45098039215686275, "acc_stderr": 0.049512182523962625, "acc_norm": 0.45098039215686275, "acc_norm_stderr": 0.049512182523962625},
    "harness|hendrycksTest-computer_security|5": {"acc": 0.74, "acc_stderr": 0.04408440022768078, "acc_norm": 0.74, "acc_norm_stderr": 0.04408440022768078},
    "harness|hendrycksTest-conceptual_physics|5": {"acc": 0.5787234042553191, "acc_stderr": 0.03227834510146268, "acc_norm": 0.5787234042553191, "acc_norm_stderr": 0.03227834510146268},
    "harness|hendrycksTest-econometrics|5": {"acc": 0.4649122807017544, "acc_stderr": 0.046920083813689104, "acc_norm": 0.4649122807017544, "acc_norm_stderr": 0.046920083813689104},
    "harness|hendrycksTest-electrical_engineering|5": {"acc": 0.5586206896551724, "acc_stderr": 0.04137931034482757, "acc_norm": 0.5586206896551724, "acc_norm_stderr": 0.04137931034482757},
    "harness|hendrycksTest-elementary_mathematics|5": {"acc": 0.41005291005291006, "acc_stderr": 0.025331202438944437, "acc_norm": 0.41005291005291006, "acc_norm_stderr": 0.025331202438944437},
    "harness|hendrycksTest-formal_logic|5": {"acc": 0.47619047619047616, "acc_stderr": 0.04467062628403273, "acc_norm": 0.47619047619047616, "acc_norm_stderr": 0.04467062628403273},
    "harness|hendrycksTest-global_facts|5": {"acc": 0.35, "acc_stderr": 0.04793724854411019, "acc_norm": 0.35, "acc_norm_stderr": 0.04793724854411019},
    "harness|hendrycksTest-high_school_biology|5": {"acc": 0.7903225806451613, "acc_stderr": 0.023157879349083525, "acc_norm": 0.7903225806451613, "acc_norm_stderr": 0.023157879349083525},
    "harness|hendrycksTest-high_school_chemistry|5": {"acc": 0.5024630541871922, "acc_stderr": 0.035179450386910616, "acc_norm": 0.5024630541871922, "acc_norm_stderr": 0.035179450386910616},
    "harness|hendrycksTest-high_school_computer_science|5": {"acc": 0.72, "acc_stderr": 0.04512608598542127, "acc_norm": 0.72, "acc_norm_stderr": 0.04512608598542127},
    "harness|hendrycksTest-high_school_european_history|5": {"acc": 0.7515151515151515, "acc_stderr": 0.033744026441394036, "acc_norm": 0.7515151515151515, "acc_norm_stderr": 0.033744026441394036},
    "harness|hendrycksTest-high_school_geography|5": {"acc": 0.7929292929292929, "acc_stderr": 0.028869778460267045, "acc_norm": 0.7929292929292929, "acc_norm_stderr": 0.028869778460267045},
    "harness|hendrycksTest-high_school_government_and_politics|5": {"acc": 0.8963730569948186, "acc_stderr": 0.02199531196364424, "acc_norm": 0.8963730569948186, "acc_norm_stderr": 0.02199531196364424},
    "harness|hendrycksTest-high_school_macroeconomics|5": {"acc": 0.6666666666666666, "acc_stderr": 0.023901157979402534, "acc_norm": 0.6666666666666666, "acc_norm_stderr": 0.023901157979402534},
    "harness|hendrycksTest-high_school_mathematics|5": {"acc": 0.35185185185185186, "acc_stderr": 0.029116617606083008, "acc_norm": 0.35185185185185186, "acc_norm_stderr": 0.029116617606083008},
    "harness|hendrycksTest-high_school_microeconomics|5": {"acc": 0.6890756302521008, "acc_stderr": 0.03006676158297794, "acc_norm": 0.6890756302521008, "acc_norm_stderr": 0.03006676158297794},
    "harness|hendrycksTest-high_school_physics|5": {"acc": 0.3576158940397351, "acc_stderr": 0.03913453431177258, "acc_norm": 0.3576158940397351, "acc_norm_stderr": 0.03913453431177258},
    "harness|hendrycksTest-high_school_psychology|5": {"acc": 0.8403669724770643, "acc_stderr": 0.015703498348461763, "acc_norm": 0.8403669724770643, "acc_norm_stderr": 0.015703498348461763},
    "harness|hendrycksTest-high_school_statistics|5": {"acc": 0.5324074074074074, "acc_stderr": 0.03402801581358966, "acc_norm": 0.5324074074074074, "acc_norm_stderr": 0.03402801581358966},
    "harness|hendrycksTest-high_school_us_history|5": {"acc": 0.8480392156862745, "acc_stderr": 0.0251956584289318, "acc_norm": 0.8480392156862745, "acc_norm_stderr": 0.0251956584289318},
    "harness|hendrycksTest-high_school_world_history|5": {"acc": 0.8185654008438819, "acc_stderr": 0.02508596114457966, "acc_norm": 0.8185654008438819, "acc_norm_stderr": 0.02508596114457966},
    "harness|hendrycksTest-human_aging|5": {"acc": 0.6905829596412556, "acc_stderr": 0.03102441174057221, "acc_norm": 0.6905829596412556, "acc_norm_stderr": 0.03102441174057221},
    "harness|hendrycksTest-human_sexuality|5": {"acc": 0.7938931297709924, "acc_stderr": 0.035477710041594654, "acc_norm": 0.7938931297709924, "acc_norm_stderr": 0.035477710041594654},
    "harness|hendrycksTest-international_law|5": {"acc": 0.7933884297520661, "acc_stderr": 0.03695980128098824, "acc_norm": 0.7933884297520661, "acc_norm_stderr": 0.03695980128098824},
    "harness|hendrycksTest-jurisprudence|5": {"acc": 0.7962962962962963, "acc_stderr": 0.03893542518824847, "acc_norm": 0.7962962962962963, "acc_norm_stderr": 0.03893542518824847},
    "harness|hendrycksTest-logical_fallacies|5": {"acc": 0.7791411042944786, "acc_stderr": 0.03259177392742178, "acc_norm": 0.7791411042944786, "acc_norm_stderr": 0.03259177392742178},
    "harness|hendrycksTest-machine_learning|5": {"acc": 0.44642857142857145, "acc_stderr": 0.04718471485219588, "acc_norm": 0.44642857142857145, "acc_norm_stderr": 0.04718471485219588},
    "harness|hendrycksTest-management|5": {"acc": 0.7766990291262136, "acc_stderr": 0.04123553189891431, "acc_norm": 0.7766990291262136, "acc_norm_stderr": 0.04123553189891431},
    "harness|hendrycksTest-marketing|5": {"acc": 0.8760683760683761, "acc_stderr": 0.021586494001281365, "acc_norm": 0.8760683760683761, "acc_norm_stderr": 0.021586494001281365},
    "harness|hendrycksTest-medical_genetics|5": {"acc": 0.7, "acc_stderr": 0.046056618647183814, "acc_norm": 0.7, "acc_norm_stderr": 0.046056618647183814},
    "harness|hendrycksTest-miscellaneous|5": {"acc": 0.8339719029374202, "acc_stderr": 0.013306478243066302, "acc_norm": 0.8339719029374202, "acc_norm_stderr": 0.013306478243066302},
    "harness|hendrycksTest-moral_disputes|5": {"acc": 0.7369942196531792, "acc_stderr": 0.023703099525258176, "acc_norm": 0.7369942196531792, "acc_norm_stderr": 0.023703099525258176},
    "harness|hendrycksTest-moral_scenarios|5": {"acc": 0.4335195530726257, "acc_stderr": 0.01657402721951763, "acc_norm": 0.4335195530726257, "acc_norm_stderr": 0.01657402721951763},
    "harness|hendrycksTest-nutrition|5": {"acc": 0.7222222222222222, "acc_stderr": 0.025646863097137897, "acc_norm": 0.7222222222222222, "acc_norm_stderr": 0.025646863097137897},
    "harness|hendrycksTest-philosophy|5": {"acc": 0.7138263665594855, "acc_stderr": 0.025670259242188933, "acc_norm": 0.7138263665594855, "acc_norm_stderr": 0.025670259242188933},
    "harness|hendrycksTest-prehistory|5": {"acc": 0.7623456790123457, "acc_stderr": 0.02368359183700856, "acc_norm": 0.7623456790123457, "acc_norm_stderr": 0.02368359183700856},
    "harness|hendrycksTest-professional_accounting|5": {"acc": 0.49645390070921985, "acc_stderr": 0.02982674915328092, "acc_norm": 0.49645390070921985, "acc_norm_stderr": 0.02982674915328092},
    "harness|hendrycksTest-professional_law|5": {"acc": 0.4765319426336376, "acc_stderr": 0.01275616194252337, "acc_norm": 0.4765319426336376, "acc_norm_stderr": 0.01275616194252337},
    "harness|hendrycksTest-professional_medicine|5": {"acc": 0.6948529411764706, "acc_stderr": 0.027971541370170595, "acc_norm": 0.6948529411764706, "acc_norm_stderr": 0.027971541370170595},
    "harness|hendrycksTest-professional_psychology|5": {"acc": 0.6797385620915033, "acc_stderr": 0.018875682938069443, "acc_norm": 0.6797385620915033, "acc_norm_stderr": 0.018875682938069443},
    "harness|hendrycksTest-public_relations|5": {"acc": 0.6909090909090909, "acc_stderr": 0.044262946482000985, "acc_norm": 0.6909090909090909, "acc_norm_stderr": 0.044262946482000985},
    "harness|hendrycksTest-security_studies|5": {"acc": 0.7224489795918367, "acc_stderr": 0.028666857790274648, "acc_norm": 0.7224489795918367, "acc_norm_stderr": 0.028666857790274648},
    "harness|hendrycksTest-sociology|5": {"acc": 0.835820895522388, "acc_stderr": 0.02619392354445412, "acc_norm": 0.835820895522388, "acc_norm_stderr": 0.02619392354445412},
    "harness|hendrycksTest-us_foreign_policy|5": {"acc": 0.86, "acc_stderr": 0.0348735088019777, "acc_norm": 0.86, "acc_norm_stderr": 0.0348735088019777},
    "harness|hendrycksTest-virology|5": {"acc": 0.5542168674698795, "acc_stderr": 0.03869543323472101, "acc_norm": 0.5542168674698795, "acc_norm_stderr": 0.03869543323472101},
    "harness|hendrycksTest-world_religions|5": {"acc": 0.8421052631578947, "acc_stderr": 0.027966785859160893, "acc_norm": 0.8421052631578947, "acc_norm_stderr": 0.027966785859160893},
    "harness|truthfulqa:mc|0": {"mc1": 0.5740514075887393, "mc1_stderr": 0.01731047190407654, "mc2": 0.7181865802621098, "mc2_stderr": 0.014625726414913819},
    "harness|winogrande|5": {"acc": 0.8271507498026835, "acc_stderr": 0.010626964529971859},
    "harness|gsm8k|5": {"acc": 0.7020470053070508, "acc_stderr": 0.012597932232914525}
}
```

## Dataset Details

### Dataset Description

<!-- Provide a longer summary of what this dataset is. -->

- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]

### Dataset Sources [optional]

<!-- Provide the basic links for the dataset. -->

- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]

## Uses

<!-- Address questions around how the dataset is intended to be used.
--> ### Direct Use <!-- This section describes suitable use cases for the dataset. --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. --> [More Information Needed] ## Dataset Structure <!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. --> [More Information Needed] ## Dataset Creation ### Curation Rationale <!-- Motivation for the creation of this dataset. --> [More Information Needed] ### Source Data <!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). --> #### Data Collection and Processing <!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. --> [More Information Needed] #### Who are the source data producers? <!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. --> [More Information Needed] ### Annotations [optional] <!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. --> #### Annotation process <!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. --> [More Information Needed] #### Who are the annotators? <!-- This section describes the people or systems who created the annotations. 
--> [More Information Needed] #### Personal and Sensitive Information <!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations. ## Citation [optional] <!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Dataset Card Authors [optional] [More Information Needed] ## Dataset Card Contact [More Information Needed]
dkshjn/mixqa_cot_1
--- dataset_info: features: - name: question dtype: string - name: options dtype: string - name: context dtype: string - name: answer dtype: string splits: - name: train num_bytes: 14631 num_examples: 19 download_size: 16262 dataset_size: 14631 configs: - config_name: default data_files: - split: train path: data/train-* --- # Dataset Card for "mixqa_cot_1" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
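The four string features listed above (question, options, context, answer) lend themselves to simple prompt assembly. A minimal sketch of that, stdlib-only; the row contents and the `build_prompt` helper below are illustrative inventions, not taken from the dataset (in practice rows would come from loading `dkshjn/mixqa_cot_1` with the `datasets` library):

```python
# Illustrative row with the schema listed above: question, options,
# context and answer are all plain strings. The values are made up.
row = {
    "question": "Which planet is closest to the Sun?",
    "options": "A) Venus B) Mercury C) Mars",
    "context": "The inner planets are Mercury, Venus, Earth and Mars.",
    "answer": "B) Mercury",
}

def build_prompt(example: dict) -> str:
    """Assemble one QA prompt from a row of this schema."""
    return (
        f"Context: {example['context']}\n"
        f"Question: {example['question']}\n"
        f"Options: {example['options']}\n"
        "Answer:"
    )

print(build_prompt(row))
```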
mt_eng_vietnamese
--- annotations_creators: - found language_creators: - found multilinguality: - multilingual language: - en - vi license: - unknown size_categories: - 100K<n<1M source_datasets: - original task_categories: - translation task_ids: [] paperswithcode_id: null pretty_name: MtEngVietnamese dataset_info: - config_name: iwslt2015-vi-en features: - name: translation dtype: translation: languages: - vi - en splits: - name: train num_bytes: 32478282 num_examples: 133318 - name: validation num_bytes: 323743 num_examples: 1269 - name: test num_bytes: 323743 num_examples: 1269 download_size: 32323025 dataset_size: 33125768 - config_name: iwslt2015-en-vi features: - name: translation dtype: translation: languages: - en - vi splits: - name: train num_bytes: 32478282 num_examples: 133318 - name: validation num_bytes: 323743 num_examples: 1269 - name: test num_bytes: 323743 num_examples: 1269 download_size: 32323025 dataset_size: 33125768 --- # Dataset Card for mt_eng_vietnamese ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## 
Dataset Description - **Homepage:** https://nlp.stanford.edu/projects/nmt/data/iwslt15.en-vi/ - **Repository:** [Needs More Information] - **Paper:** [Needs More Information] - **Leaderboard:** [Needs More Information] - **Point of Contact:** [Needs More Information] ### Dataset Summary Preprocessed dataset from the IWSLT'15 English-Vietnamese machine translation task. ### Supported Tasks and Leaderboards Machine Translation ### Languages English, Vietnamese ## Dataset Structure ### Data Instances An example from the dataset: ``` { 'translation': { 'en': 'In 4 minutes , atmospheric chemist Rachel Pike provides a glimpse of the massive scientific effort behind the bold headlines on climate change , with her team -- one of thousands who contributed -- taking a risky flight over the rainforest in pursuit of data on a key molecule .', 'vi': 'Trong 4 phút , chuyên gia hoá học khí quyển Rachel Pike giới thiệu sơ lược về những nỗ lực khoa học miệt mài đằng sau những tiêu đề táo bạo về biến đổi khí hậu , cùng với đoàn nghiên cứu của mình -- hàng ngàn người đã cống hiến cho dự án này -- một chuyến bay mạo hiểm qua rừng già để tìm kiếm thông tin về một phân tử then chốt .' } } ``` ### Data Fields - translation: - en: text in English - vi: text in Vietnamese ### Data Splits train: 133318, validation: 1269, test: 1269 ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? 
[More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information ``` @inproceedings{Luong-Manning:iwslt15, Address = {Da Nang, Vietnam}, Author = {Luong, Minh-Thang and Manning, Christopher D.}, Booktitle = {International Workshop on Spoken Language Translation}, Title = {Stanford Neural Machine Translation Systems for Spoken Language Domain}, Year = {2015}} ``` ### Contributions Thanks to [@Nilanshrajput](https://github.com/Nilanshrajput) for adding this dataset.
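Each row wraps both sides of the sentence pair in a single `translation` dict keyed by language code, as the data instance above shows. A minimal stdlib-only sketch of unpacking that structure; the record and the `to_pair` helper below are made-up stand-ins (real rows would come from loading this dataset with the `datasets` library):

```python
# A made-up record shaped like the data instances above: one
# "translation" dict keyed by language code.
record = {
    "translation": {
        "en": "Hello world .",
        "vi": "Xin chào thế giới .",
    }
}

def to_pair(example: dict, src_lang: str = "en", tgt_lang: str = "vi"):
    """Flatten one record into a (source, target) sentence pair."""
    t = example["translation"]
    return t[src_lang], t[tgt_lang]

src, tgt = to_pair(record)
print(src)  # Hello world .
print(tgt)  # Xin chào thế giới .
```

Swapping the language arguments (`to_pair(record, "vi", "en")`) mirrors the difference between the `iwslt2015-en-vi` and `iwslt2015-vi-en` configurations declared in the YAML header.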
umesh16071973/Floorplan_Dataset_21022024
--- license: mit ---
open-llm-leaderboard/details_AA051615__A0305a
--- pretty_name: Evaluation run of AA051615/A0305a dataset_summary: "Dataset automatically created during the evaluation run of model\ \ [AA051615/A0305a](https://huggingface.co/AA051615/A0305a) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\ \nThe dataset is composed of 63 configuration, each one coresponding to one of the\ \ evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be\ \ found as a specific split in each configuration, the split being named using the\ \ timestamp of the run.The \"train\" split is always pointing to the latest results.\n\ \nAn additional configuration \"results\" store all the aggregated results of the\ \ run (and is used to compute and display the aggregated metrics on the [Open LLM\ \ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\ \nTo load the details from a run, you can for instance do the following:\n```python\n\ from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_AA051615__A0305a\"\ ,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\ These are the [latest results from run 2024-03-06T00:38:10.412538](https://huggingface.co/datasets/open-llm-leaderboard/details_AA051615__A0305a/blob/main/results_2024-03-06T00-38-10.412538.json)(note\ \ that their might be results for other tasks in the repos if successive evals didn't\ \ cover the same tasks. 
You find each in the results and the \"latest\" split for\ \ each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.749185114502518,\n\ \ \"acc_stderr\": 0.028187673063681265,\n \"acc_norm\": 0.7549725341672238,\n\ \ \"acc_norm_stderr\": 0.028698055518822745,\n \"mc1\": 0.3598531211750306,\n\ \ \"mc1_stderr\": 0.01680186046667714,\n \"mc2\": 0.5173907130984454,\n\ \ \"mc2_stderr\": 0.015436053888120308\n },\n \"harness|arc:challenge|25\"\ : {\n \"acc\": 0.5836177474402731,\n \"acc_stderr\": 0.014405618279436172,\n\ \ \"acc_norm\": 0.613481228668942,\n \"acc_norm_stderr\": 0.014230084761910471\n\ \ },\n \"harness|hellaswag|10\": {\n \"acc\": 0.611431985660227,\n\ \ \"acc_stderr\": 0.004864286176731831,\n \"acc_norm\": 0.8040231029675363,\n\ \ \"acc_norm_stderr\": 0.003961395637784951\n },\n \"harness|hendrycksTest-abstract_algebra|5\"\ : {\n \"acc\": 0.37,\n \"acc_stderr\": 0.04852365870939099,\n \ \ \"acc_norm\": 0.37,\n \"acc_norm_stderr\": 0.04852365870939099\n \ \ },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.7333333333333333,\n\ \ \"acc_stderr\": 0.038201699145179055,\n \"acc_norm\": 0.7333333333333333,\n\ \ \"acc_norm_stderr\": 0.038201699145179055\n },\n \"harness|hendrycksTest-astronomy|5\"\ : {\n \"acc\": 0.875,\n \"acc_stderr\": 0.026913523521537846,\n \ \ \"acc_norm\": 0.875,\n \"acc_norm_stderr\": 0.026913523521537846\n\ \ },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.8,\n\ \ \"acc_stderr\": 0.040201512610368445,\n \"acc_norm\": 0.8,\n \ \ \"acc_norm_stderr\": 0.040201512610368445\n },\n \"harness|hendrycksTest-clinical_knowledge|5\"\ : {\n \"acc\": 0.8301886792452831,\n \"acc_stderr\": 0.023108393799841326,\n\ \ \"acc_norm\": 0.8301886792452831,\n \"acc_norm_stderr\": 0.023108393799841326\n\ \ },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.9027777777777778,\n\ \ \"acc_stderr\": 0.024774516250440182,\n \"acc_norm\": 0.9027777777777778,\n\ \ \"acc_norm_stderr\": 0.024774516250440182\n },\n 
\"harness|hendrycksTest-college_chemistry|5\"\ : {\n \"acc\": 0.53,\n \"acc_stderr\": 0.05016135580465919,\n \ \ \"acc_norm\": 0.53,\n \"acc_norm_stderr\": 0.05016135580465919\n \ \ },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\"\ : 0.62,\n \"acc_stderr\": 0.04878317312145633,\n \"acc_norm\": 0.62,\n\ \ \"acc_norm_stderr\": 0.04878317312145633\n },\n \"harness|hendrycksTest-college_mathematics|5\"\ : {\n \"acc\": 0.39,\n \"acc_stderr\": 0.04902071300001974,\n \ \ \"acc_norm\": 0.39,\n \"acc_norm_stderr\": 0.04902071300001974\n \ \ },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.7283236994219653,\n\ \ \"acc_stderr\": 0.0339175032232166,\n \"acc_norm\": 0.7283236994219653,\n\ \ \"acc_norm_stderr\": 0.0339175032232166\n },\n \"harness|hendrycksTest-college_physics|5\"\ : {\n \"acc\": 0.4215686274509804,\n \"acc_stderr\": 0.049135952012744975,\n\ \ \"acc_norm\": 0.4215686274509804,\n \"acc_norm_stderr\": 0.049135952012744975\n\ \ },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\":\ \ 0.85,\n \"acc_stderr\": 0.03588702812826371,\n \"acc_norm\": 0.85,\n\ \ \"acc_norm_stderr\": 0.03588702812826371\n },\n \"harness|hendrycksTest-conceptual_physics|5\"\ : {\n \"acc\": 0.7361702127659574,\n \"acc_stderr\": 0.02880998985410295,\n\ \ \"acc_norm\": 0.7361702127659574,\n \"acc_norm_stderr\": 0.02880998985410295\n\ \ },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.5614035087719298,\n\ \ \"acc_stderr\": 0.04668000738510455,\n \"acc_norm\": 0.5614035087719298,\n\ \ \"acc_norm_stderr\": 0.04668000738510455\n },\n \"harness|hendrycksTest-electrical_engineering|5\"\ : {\n \"acc\": 0.7931034482758621,\n \"acc_stderr\": 0.03375672449560553,\n\ \ \"acc_norm\": 0.7931034482758621,\n \"acc_norm_stderr\": 0.03375672449560553\n\ \ },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\"\ : 0.6640211640211641,\n \"acc_stderr\": 0.024326310529149138,\n \"\ acc_norm\": 0.6640211640211641,\n \"acc_norm_stderr\": 
0.024326310529149138\n\ \ },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.5238095238095238,\n\ \ \"acc_stderr\": 0.04467062628403273,\n \"acc_norm\": 0.5238095238095238,\n\ \ \"acc_norm_stderr\": 0.04467062628403273\n },\n \"harness|hendrycksTest-global_facts|5\"\ : {\n \"acc\": 0.53,\n \"acc_stderr\": 0.05016135580465919,\n \ \ \"acc_norm\": 0.53,\n \"acc_norm_stderr\": 0.05016135580465919\n \ \ },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.9193548387096774,\n\ \ \"acc_stderr\": 0.015490002961591037,\n \"acc_norm\": 0.9193548387096774,\n\ \ \"acc_norm_stderr\": 0.015490002961591037\n },\n \"harness|hendrycksTest-high_school_chemistry|5\"\ : {\n \"acc\": 0.6305418719211823,\n \"acc_stderr\": 0.03395970381998574,\n\ \ \"acc_norm\": 0.6305418719211823,\n \"acc_norm_stderr\": 0.03395970381998574\n\ \ },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \ \ \"acc\": 0.77,\n \"acc_stderr\": 0.04229525846816505,\n \"acc_norm\"\ : 0.77,\n \"acc_norm_stderr\": 0.04229525846816505\n },\n \"harness|hendrycksTest-high_school_european_history|5\"\ : {\n \"acc\": 0.8848484848484849,\n \"acc_stderr\": 0.024925699798115347,\n\ \ \"acc_norm\": 0.8848484848484849,\n \"acc_norm_stderr\": 0.024925699798115347\n\ \ },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\"\ : 0.9292929292929293,\n \"acc_stderr\": 0.0182631054201995,\n \"acc_norm\"\ : 0.9292929292929293,\n \"acc_norm_stderr\": 0.0182631054201995\n },\n\ \ \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n \ \ \"acc\": 0.9637305699481865,\n \"acc_stderr\": 0.013492659751295138,\n\ \ \"acc_norm\": 0.9637305699481865,\n \"acc_norm_stderr\": 0.013492659751295138\n\ \ },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \ \ \"acc\": 0.8,\n \"acc_stderr\": 0.020280805062535726,\n \"acc_norm\"\ : 0.8,\n \"acc_norm_stderr\": 0.020280805062535726\n },\n \"harness|hendrycksTest-high_school_mathematics|5\"\ : {\n \"acc\": 
0.3962962962962963,\n \"acc_stderr\": 0.029822619458533994,\n\ \ \"acc_norm\": 0.3962962962962963,\n \"acc_norm_stderr\": 0.029822619458533994\n\ \ },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \ \ \"acc\": 0.8151260504201681,\n \"acc_stderr\": 0.025215992877954205,\n\ \ \"acc_norm\": 0.8151260504201681,\n \"acc_norm_stderr\": 0.025215992877954205\n\ \ },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\"\ : 0.41721854304635764,\n \"acc_stderr\": 0.040261414976346104,\n \"\ acc_norm\": 0.41721854304635764,\n \"acc_norm_stderr\": 0.040261414976346104\n\ \ },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\"\ : 0.9229357798165138,\n \"acc_stderr\": 0.011434381698911098,\n \"\ acc_norm\": 0.9229357798165138,\n \"acc_norm_stderr\": 0.011434381698911098\n\ \ },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\"\ : 0.6157407407407407,\n \"acc_stderr\": 0.03317354514310742,\n \"\ acc_norm\": 0.6157407407407407,\n \"acc_norm_stderr\": 0.03317354514310742\n\ \ },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\"\ : 0.9313725490196079,\n \"acc_stderr\": 0.017744453647073322,\n \"\ acc_norm\": 0.9313725490196079,\n \"acc_norm_stderr\": 0.017744453647073322\n\ \ },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"\ acc\": 0.890295358649789,\n \"acc_stderr\": 0.020343400734868837,\n \ \ \"acc_norm\": 0.890295358649789,\n \"acc_norm_stderr\": 0.020343400734868837\n\ \ },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.7892376681614349,\n\ \ \"acc_stderr\": 0.027373095500540193,\n \"acc_norm\": 0.7892376681614349,\n\ \ \"acc_norm_stderr\": 0.027373095500540193\n },\n \"harness|hendrycksTest-human_sexuality|5\"\ : {\n \"acc\": 0.8473282442748091,\n \"acc_stderr\": 0.031545216720054725,\n\ \ \"acc_norm\": 0.8473282442748091,\n \"acc_norm_stderr\": 0.031545216720054725\n\ \ },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\":\ \ 0.9090909090909091,\n \"acc_stderr\": 
0.026243194054073885,\n \"\ acc_norm\": 0.9090909090909091,\n \"acc_norm_stderr\": 0.026243194054073885\n\ \ },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.8611111111111112,\n\ \ \"acc_stderr\": 0.033432700628696195,\n \"acc_norm\": 0.8611111111111112,\n\ \ \"acc_norm_stderr\": 0.033432700628696195\n },\n \"harness|hendrycksTest-logical_fallacies|5\"\ : {\n \"acc\": 0.8773006134969326,\n \"acc_stderr\": 0.025777328426978927,\n\ \ \"acc_norm\": 0.8773006134969326,\n \"acc_norm_stderr\": 0.025777328426978927\n\ \ },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.5714285714285714,\n\ \ \"acc_stderr\": 0.04697113923010212,\n \"acc_norm\": 0.5714285714285714,\n\ \ \"acc_norm_stderr\": 0.04697113923010212\n },\n \"harness|hendrycksTest-management|5\"\ : {\n \"acc\": 0.9029126213592233,\n \"acc_stderr\": 0.02931596291881347,\n\ \ \"acc_norm\": 0.9029126213592233,\n \"acc_norm_stderr\": 0.02931596291881347\n\ \ },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.9358974358974359,\n\ \ \"acc_stderr\": 0.01604626163167314,\n \"acc_norm\": 0.9358974358974359,\n\ \ \"acc_norm_stderr\": 0.01604626163167314\n },\n \"harness|hendrycksTest-medical_genetics|5\"\ : {\n \"acc\": 0.86,\n \"acc_stderr\": 0.03487350880197772,\n \ \ \"acc_norm\": 0.86,\n \"acc_norm_stderr\": 0.03487350880197772\n \ \ },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.8939974457215837,\n\ \ \"acc_stderr\": 0.011008367705789368,\n \"acc_norm\": 0.8939974457215837,\n\ \ \"acc_norm_stderr\": 0.011008367705789368\n },\n \"harness|hendrycksTest-moral_disputes|5\"\ : {\n \"acc\": 0.8121387283236994,\n \"acc_stderr\": 0.021029269752423217,\n\ \ \"acc_norm\": 0.8121387283236994,\n \"acc_norm_stderr\": 0.021029269752423217\n\ \ },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.7631284916201118,\n\ \ \"acc_stderr\": 0.014219570788103984,\n \"acc_norm\": 0.7631284916201118,\n\ \ \"acc_norm_stderr\": 0.014219570788103984\n },\n 
\"harness|hendrycksTest-nutrition|5\"\ : {\n \"acc\": 0.8366013071895425,\n \"acc_stderr\": 0.021170623011213495,\n\ \ \"acc_norm\": 0.8366013071895425,\n \"acc_norm_stderr\": 0.021170623011213495\n\ \ },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.8327974276527331,\n\ \ \"acc_stderr\": 0.021193872528034962,\n \"acc_norm\": 0.8327974276527331,\n\ \ \"acc_norm_stderr\": 0.021193872528034962\n },\n \"harness|hendrycksTest-prehistory|5\"\ : {\n \"acc\": 0.8179012345679012,\n \"acc_stderr\": 0.021473491834808355,\n\ \ \"acc_norm\": 0.8179012345679012,\n \"acc_norm_stderr\": 0.021473491834808355\n\ \ },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"\ acc\": 0.6099290780141844,\n \"acc_stderr\": 0.02909767559946393,\n \ \ \"acc_norm\": 0.6099290780141844,\n \"acc_norm_stderr\": 0.02909767559946393\n\ \ },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.6375488917861799,\n\ \ \"acc_stderr\": 0.012277512533252493,\n \"acc_norm\": 0.6375488917861799,\n\ \ \"acc_norm_stderr\": 0.012277512533252493\n },\n \"harness|hendrycksTest-professional_medicine|5\"\ : {\n \"acc\": 0.8786764705882353,\n \"acc_stderr\": 0.019833637481057927,\n\ \ \"acc_norm\": 0.8786764705882353,\n \"acc_norm_stderr\": 0.019833637481057927\n\ \ },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"\ acc\": 0.8055555555555556,\n \"acc_stderr\": 0.016011237996336945,\n \ \ \"acc_norm\": 0.8055555555555556,\n \"acc_norm_stderr\": 0.016011237996336945\n\ \ },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.7545454545454545,\n\ \ \"acc_stderr\": 0.041220665028782855,\n \"acc_norm\": 0.7545454545454545,\n\ \ \"acc_norm_stderr\": 0.041220665028782855\n },\n \"harness|hendrycksTest-security_studies|5\"\ : {\n \"acc\": 0.8204081632653061,\n \"acc_stderr\": 0.024573293589585637,\n\ \ \"acc_norm\": 0.8204081632653061,\n \"acc_norm_stderr\": 0.024573293589585637\n\ \ },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.9353233830845771,\n\ \ 
\"acc_stderr\": 0.017391600291491064,\n \"acc_norm\": 0.9353233830845771,\n\ \ \"acc_norm_stderr\": 0.017391600291491064\n },\n \"harness|hendrycksTest-us_foreign_policy|5\"\ : {\n \"acc\": 0.9,\n \"acc_stderr\": 0.03015113445777634,\n \ \ \"acc_norm\": 0.9,\n \"acc_norm_stderr\": 0.03015113445777634\n },\n\ \ \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.608433734939759,\n\ \ \"acc_stderr\": 0.03799857454479636,\n \"acc_norm\": 0.608433734939759,\n\ \ \"acc_norm_stderr\": 0.03799857454479636\n },\n \"harness|hendrycksTest-world_religions|5\"\ : {\n \"acc\": 0.9181286549707602,\n \"acc_stderr\": 0.02102777265656387,\n\ \ \"acc_norm\": 0.9181286549707602,\n \"acc_norm_stderr\": 0.02102777265656387\n\ \ },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.3598531211750306,\n\ \ \"mc1_stderr\": 0.01680186046667714,\n \"mc2\": 0.5173907130984454,\n\ \ \"mc2_stderr\": 0.015436053888120308\n },\n \"harness|winogrande|5\"\ : {\n \"acc\": 0.77663772691397,\n \"acc_stderr\": 0.011705697565205201\n\ \ },\n \"harness|gsm8k|5\": {\n \"acc\": 0.6027293404094011,\n \ \ \"acc_stderr\": 0.013478659652337792\n }\n}\n```" repo_url: https://huggingface.co/AA051615/A0305a leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard point_of_contact: clementine@hf.co configs: - config_name: harness_arc_challenge_25 data_files: - split: 2024_03_06T00_38_10.412538 path: - '**/details_harness|arc:challenge|25_2024-03-06T00-38-10.412538.parquet' - split: latest path: - '**/details_harness|arc:challenge|25_2024-03-06T00-38-10.412538.parquet' - config_name: harness_gsm8k_5 data_files: - split: 2024_03_06T00_38_10.412538 path: - '**/details_harness|gsm8k|5_2024-03-06T00-38-10.412538.parquet' - split: latest path: - '**/details_harness|gsm8k|5_2024-03-06T00-38-10.412538.parquet' - config_name: harness_hellaswag_10 data_files: - split: 2024_03_06T00_38_10.412538 path: - '**/details_harness|hellaswag|10_2024-03-06T00-38-10.412538.parquet' - split: latest path: - 
'**/details_harness|hellaswag|10_2024-03-06T00-38-10.412538.parquet' - config_name: harness_hendrycksTest_5 data_files: - split: 2024_03_06T00_38_10.412538 path: - '**/details_harness|hendrycksTest-abstract_algebra|5_2024-03-06T00-38-10.412538.parquet' - '**/details_harness|hendrycksTest-anatomy|5_2024-03-06T00-38-10.412538.parquet' - '**/details_harness|hendrycksTest-astronomy|5_2024-03-06T00-38-10.412538.parquet' - '**/details_harness|hendrycksTest-business_ethics|5_2024-03-06T00-38-10.412538.parquet' - '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-03-06T00-38-10.412538.parquet' - '**/details_harness|hendrycksTest-college_biology|5_2024-03-06T00-38-10.412538.parquet' - '**/details_harness|hendrycksTest-college_chemistry|5_2024-03-06T00-38-10.412538.parquet' - '**/details_harness|hendrycksTest-college_computer_science|5_2024-03-06T00-38-10.412538.parquet' - '**/details_harness|hendrycksTest-college_mathematics|5_2024-03-06T00-38-10.412538.parquet' - '**/details_harness|hendrycksTest-college_medicine|5_2024-03-06T00-38-10.412538.parquet' - '**/details_harness|hendrycksTest-college_physics|5_2024-03-06T00-38-10.412538.parquet' - '**/details_harness|hendrycksTest-computer_security|5_2024-03-06T00-38-10.412538.parquet' - '**/details_harness|hendrycksTest-conceptual_physics|5_2024-03-06T00-38-10.412538.parquet' - '**/details_harness|hendrycksTest-econometrics|5_2024-03-06T00-38-10.412538.parquet' - '**/details_harness|hendrycksTest-electrical_engineering|5_2024-03-06T00-38-10.412538.parquet' - '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-03-06T00-38-10.412538.parquet' - '**/details_harness|hendrycksTest-formal_logic|5_2024-03-06T00-38-10.412538.parquet' - '**/details_harness|hendrycksTest-global_facts|5_2024-03-06T00-38-10.412538.parquet' - '**/details_harness|hendrycksTest-high_school_biology|5_2024-03-06T00-38-10.412538.parquet' - '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-03-06T00-38-10.412538.parquet' - 
'**/details_harness|hendrycksTest-high_school_computer_science|5_2024-03-06T00-38-10.412538.parquet' - '**/details_harness|hendrycksTest-high_school_european_history|5_2024-03-06T00-38-10.412538.parquet' - '**/details_harness|hendrycksTest-high_school_geography|5_2024-03-06T00-38-10.412538.parquet' - '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-03-06T00-38-10.412538.parquet' - '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-03-06T00-38-10.412538.parquet' - '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-03-06T00-38-10.412538.parquet' - '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-03-06T00-38-10.412538.parquet' - '**/details_harness|hendrycksTest-high_school_physics|5_2024-03-06T00-38-10.412538.parquet' - '**/details_harness|hendrycksTest-high_school_psychology|5_2024-03-06T00-38-10.412538.parquet' - '**/details_harness|hendrycksTest-high_school_statistics|5_2024-03-06T00-38-10.412538.parquet' - '**/details_harness|hendrycksTest-high_school_us_history|5_2024-03-06T00-38-10.412538.parquet' - '**/details_harness|hendrycksTest-high_school_world_history|5_2024-03-06T00-38-10.412538.parquet' - '**/details_harness|hendrycksTest-human_aging|5_2024-03-06T00-38-10.412538.parquet' - '**/details_harness|hendrycksTest-human_sexuality|5_2024-03-06T00-38-10.412538.parquet' - '**/details_harness|hendrycksTest-international_law|5_2024-03-06T00-38-10.412538.parquet' - '**/details_harness|hendrycksTest-jurisprudence|5_2024-03-06T00-38-10.412538.parquet' - '**/details_harness|hendrycksTest-logical_fallacies|5_2024-03-06T00-38-10.412538.parquet' - '**/details_harness|hendrycksTest-machine_learning|5_2024-03-06T00-38-10.412538.parquet' - '**/details_harness|hendrycksTest-management|5_2024-03-06T00-38-10.412538.parquet' - '**/details_harness|hendrycksTest-marketing|5_2024-03-06T00-38-10.412538.parquet' - '**/details_harness|hendrycksTest-medical_genetics|5_2024-03-06T00-38-10.412538.parquet' - 
'**/details_harness|hendrycksTest-miscellaneous|5_2024-03-06T00-38-10.412538.parquet' - '**/details_harness|hendrycksTest-moral_disputes|5_2024-03-06T00-38-10.412538.parquet' - '**/details_harness|hendrycksTest-moral_scenarios|5_2024-03-06T00-38-10.412538.parquet' - '**/details_harness|hendrycksTest-nutrition|5_2024-03-06T00-38-10.412538.parquet' - '**/details_harness|hendrycksTest-philosophy|5_2024-03-06T00-38-10.412538.parquet' - '**/details_harness|hendrycksTest-prehistory|5_2024-03-06T00-38-10.412538.parquet' - '**/details_harness|hendrycksTest-professional_accounting|5_2024-03-06T00-38-10.412538.parquet' - '**/details_harness|hendrycksTest-professional_law|5_2024-03-06T00-38-10.412538.parquet' - '**/details_harness|hendrycksTest-professional_medicine|5_2024-03-06T00-38-10.412538.parquet' - '**/details_harness|hendrycksTest-professional_psychology|5_2024-03-06T00-38-10.412538.parquet' - '**/details_harness|hendrycksTest-public_relations|5_2024-03-06T00-38-10.412538.parquet' - '**/details_harness|hendrycksTest-security_studies|5_2024-03-06T00-38-10.412538.parquet' - '**/details_harness|hendrycksTest-sociology|5_2024-03-06T00-38-10.412538.parquet' - '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-03-06T00-38-10.412538.parquet' - '**/details_harness|hendrycksTest-virology|5_2024-03-06T00-38-10.412538.parquet' - '**/details_harness|hendrycksTest-world_religions|5_2024-03-06T00-38-10.412538.parquet' - split: latest path: - '**/details_harness|hendrycksTest-abstract_algebra|5_2024-03-06T00-38-10.412538.parquet' - '**/details_harness|hendrycksTest-anatomy|5_2024-03-06T00-38-10.412538.parquet' - '**/details_harness|hendrycksTest-astronomy|5_2024-03-06T00-38-10.412538.parquet' - '**/details_harness|hendrycksTest-business_ethics|5_2024-03-06T00-38-10.412538.parquet' - '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-03-06T00-38-10.412538.parquet' - '**/details_harness|hendrycksTest-college_biology|5_2024-03-06T00-38-10.412538.parquet' - 
'**/details_harness|hendrycksTest-college_chemistry|5_2024-03-06T00-38-10.412538.parquet' - '**/details_harness|hendrycksTest-college_computer_science|5_2024-03-06T00-38-10.412538.parquet' - '**/details_harness|hendrycksTest-college_mathematics|5_2024-03-06T00-38-10.412538.parquet' - '**/details_harness|hendrycksTest-college_medicine|5_2024-03-06T00-38-10.412538.parquet' - '**/details_harness|hendrycksTest-college_physics|5_2024-03-06T00-38-10.412538.parquet' - '**/details_harness|hendrycksTest-computer_security|5_2024-03-06T00-38-10.412538.parquet' - '**/details_harness|hendrycksTest-conceptual_physics|5_2024-03-06T00-38-10.412538.parquet' - '**/details_harness|hendrycksTest-econometrics|5_2024-03-06T00-38-10.412538.parquet' - '**/details_harness|hendrycksTest-electrical_engineering|5_2024-03-06T00-38-10.412538.parquet' - '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-03-06T00-38-10.412538.parquet' - '**/details_harness|hendrycksTest-formal_logic|5_2024-03-06T00-38-10.412538.parquet' - '**/details_harness|hendrycksTest-global_facts|5_2024-03-06T00-38-10.412538.parquet' - '**/details_harness|hendrycksTest-high_school_biology|5_2024-03-06T00-38-10.412538.parquet' - '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-03-06T00-38-10.412538.parquet' - '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-03-06T00-38-10.412538.parquet' - '**/details_harness|hendrycksTest-high_school_european_history|5_2024-03-06T00-38-10.412538.parquet' - '**/details_harness|hendrycksTest-high_school_geography|5_2024-03-06T00-38-10.412538.parquet' - '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-03-06T00-38-10.412538.parquet' - '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-03-06T00-38-10.412538.parquet' - '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-03-06T00-38-10.412538.parquet' - 
'**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-03-06T00-38-10.412538.parquet' - '**/details_harness|hendrycksTest-high_school_physics|5_2024-03-06T00-38-10.412538.parquet' - '**/details_harness|hendrycksTest-high_school_psychology|5_2024-03-06T00-38-10.412538.parquet' - '**/details_harness|hendrycksTest-high_school_statistics|5_2024-03-06T00-38-10.412538.parquet' - '**/details_harness|hendrycksTest-high_school_us_history|5_2024-03-06T00-38-10.412538.parquet' - '**/details_harness|hendrycksTest-high_school_world_history|5_2024-03-06T00-38-10.412538.parquet' - '**/details_harness|hendrycksTest-human_aging|5_2024-03-06T00-38-10.412538.parquet' - '**/details_harness|hendrycksTest-human_sexuality|5_2024-03-06T00-38-10.412538.parquet' - '**/details_harness|hendrycksTest-international_law|5_2024-03-06T00-38-10.412538.parquet' - '**/details_harness|hendrycksTest-jurisprudence|5_2024-03-06T00-38-10.412538.parquet' - '**/details_harness|hendrycksTest-logical_fallacies|5_2024-03-06T00-38-10.412538.parquet' - '**/details_harness|hendrycksTest-machine_learning|5_2024-03-06T00-38-10.412538.parquet' - '**/details_harness|hendrycksTest-management|5_2024-03-06T00-38-10.412538.parquet' - '**/details_harness|hendrycksTest-marketing|5_2024-03-06T00-38-10.412538.parquet' - '**/details_harness|hendrycksTest-medical_genetics|5_2024-03-06T00-38-10.412538.parquet' - '**/details_harness|hendrycksTest-miscellaneous|5_2024-03-06T00-38-10.412538.parquet' - '**/details_harness|hendrycksTest-moral_disputes|5_2024-03-06T00-38-10.412538.parquet' - '**/details_harness|hendrycksTest-moral_scenarios|5_2024-03-06T00-38-10.412538.parquet' - '**/details_harness|hendrycksTest-nutrition|5_2024-03-06T00-38-10.412538.parquet' - '**/details_harness|hendrycksTest-philosophy|5_2024-03-06T00-38-10.412538.parquet' - '**/details_harness|hendrycksTest-prehistory|5_2024-03-06T00-38-10.412538.parquet' - 
'**/details_harness|hendrycksTest-professional_accounting|5_2024-03-06T00-38-10.412538.parquet' - '**/details_harness|hendrycksTest-professional_law|5_2024-03-06T00-38-10.412538.parquet' - '**/details_harness|hendrycksTest-professional_medicine|5_2024-03-06T00-38-10.412538.parquet' - '**/details_harness|hendrycksTest-professional_psychology|5_2024-03-06T00-38-10.412538.parquet' - '**/details_harness|hendrycksTest-public_relations|5_2024-03-06T00-38-10.412538.parquet' - '**/details_harness|hendrycksTest-security_studies|5_2024-03-06T00-38-10.412538.parquet' - '**/details_harness|hendrycksTest-sociology|5_2024-03-06T00-38-10.412538.parquet' - '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-03-06T00-38-10.412538.parquet' - '**/details_harness|hendrycksTest-virology|5_2024-03-06T00-38-10.412538.parquet' - '**/details_harness|hendrycksTest-world_religions|5_2024-03-06T00-38-10.412538.parquet' - config_name: harness_hendrycksTest_abstract_algebra_5 data_files: - split: 2024_03_06T00_38_10.412538 path: - '**/details_harness|hendrycksTest-abstract_algebra|5_2024-03-06T00-38-10.412538.parquet' - split: latest path: - '**/details_harness|hendrycksTest-abstract_algebra|5_2024-03-06T00-38-10.412538.parquet' - config_name: harness_hendrycksTest_anatomy_5 data_files: - split: 2024_03_06T00_38_10.412538 path: - '**/details_harness|hendrycksTest-anatomy|5_2024-03-06T00-38-10.412538.parquet' - split: latest path: - '**/details_harness|hendrycksTest-anatomy|5_2024-03-06T00-38-10.412538.parquet' - config_name: harness_hendrycksTest_astronomy_5 data_files: - split: 2024_03_06T00_38_10.412538 path: - '**/details_harness|hendrycksTest-astronomy|5_2024-03-06T00-38-10.412538.parquet' - split: latest path: - '**/details_harness|hendrycksTest-astronomy|5_2024-03-06T00-38-10.412538.parquet' - config_name: harness_hendrycksTest_business_ethics_5 data_files: - split: 2024_03_06T00_38_10.412538 path: - 
'**/details_harness|hendrycksTest-business_ethics|5_2024-03-06T00-38-10.412538.parquet' - split: latest path: - '**/details_harness|hendrycksTest-business_ethics|5_2024-03-06T00-38-10.412538.parquet' - config_name: harness_hendrycksTest_clinical_knowledge_5 data_files: - split: 2024_03_06T00_38_10.412538 path: - '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-03-06T00-38-10.412538.parquet' - split: latest path: - '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-03-06T00-38-10.412538.parquet' - config_name: harness_hendrycksTest_college_biology_5 data_files: - split: 2024_03_06T00_38_10.412538 path: - '**/details_harness|hendrycksTest-college_biology|5_2024-03-06T00-38-10.412538.parquet' - split: latest path: - '**/details_harness|hendrycksTest-college_biology|5_2024-03-06T00-38-10.412538.parquet' - config_name: harness_hendrycksTest_college_chemistry_5 data_files: - split: 2024_03_06T00_38_10.412538 path: - '**/details_harness|hendrycksTest-college_chemistry|5_2024-03-06T00-38-10.412538.parquet' - split: latest path: - '**/details_harness|hendrycksTest-college_chemistry|5_2024-03-06T00-38-10.412538.parquet' - config_name: harness_hendrycksTest_college_computer_science_5 data_files: - split: 2024_03_06T00_38_10.412538 path: - '**/details_harness|hendrycksTest-college_computer_science|5_2024-03-06T00-38-10.412538.parquet' - split: latest path: - '**/details_harness|hendrycksTest-college_computer_science|5_2024-03-06T00-38-10.412538.parquet' - config_name: harness_hendrycksTest_college_mathematics_5 data_files: - split: 2024_03_06T00_38_10.412538 path: - '**/details_harness|hendrycksTest-college_mathematics|5_2024-03-06T00-38-10.412538.parquet' - split: latest path: - '**/details_harness|hendrycksTest-college_mathematics|5_2024-03-06T00-38-10.412538.parquet' - config_name: harness_hendrycksTest_college_medicine_5 data_files: - split: 2024_03_06T00_38_10.412538 path: - 
'**/details_harness|hendrycksTest-college_medicine|5_2024-03-06T00-38-10.412538.parquet' - split: latest path: - '**/details_harness|hendrycksTest-college_medicine|5_2024-03-06T00-38-10.412538.parquet' - config_name: harness_hendrycksTest_college_physics_5 data_files: - split: 2024_03_06T00_38_10.412538 path: - '**/details_harness|hendrycksTest-college_physics|5_2024-03-06T00-38-10.412538.parquet' - split: latest path: - '**/details_harness|hendrycksTest-college_physics|5_2024-03-06T00-38-10.412538.parquet' - config_name: harness_hendrycksTest_computer_security_5 data_files: - split: 2024_03_06T00_38_10.412538 path: - '**/details_harness|hendrycksTest-computer_security|5_2024-03-06T00-38-10.412538.parquet' - split: latest path: - '**/details_harness|hendrycksTest-computer_security|5_2024-03-06T00-38-10.412538.parquet' - config_name: harness_hendrycksTest_conceptual_physics_5 data_files: - split: 2024_03_06T00_38_10.412538 path: - '**/details_harness|hendrycksTest-conceptual_physics|5_2024-03-06T00-38-10.412538.parquet' - split: latest path: - '**/details_harness|hendrycksTest-conceptual_physics|5_2024-03-06T00-38-10.412538.parquet' - config_name: harness_hendrycksTest_econometrics_5 data_files: - split: 2024_03_06T00_38_10.412538 path: - '**/details_harness|hendrycksTest-econometrics|5_2024-03-06T00-38-10.412538.parquet' - split: latest path: - '**/details_harness|hendrycksTest-econometrics|5_2024-03-06T00-38-10.412538.parquet' - config_name: harness_hendrycksTest_electrical_engineering_5 data_files: - split: 2024_03_06T00_38_10.412538 path: - '**/details_harness|hendrycksTest-electrical_engineering|5_2024-03-06T00-38-10.412538.parquet' - split: latest path: - '**/details_harness|hendrycksTest-electrical_engineering|5_2024-03-06T00-38-10.412538.parquet' - config_name: harness_hendrycksTest_elementary_mathematics_5 data_files: - split: 2024_03_06T00_38_10.412538 path: - '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-03-06T00-38-10.412538.parquet' - 
split: latest path: - '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-03-06T00-38-10.412538.parquet' - config_name: harness_hendrycksTest_formal_logic_5 data_files: - split: 2024_03_06T00_38_10.412538 path: - '**/details_harness|hendrycksTest-formal_logic|5_2024-03-06T00-38-10.412538.parquet' - split: latest path: - '**/details_harness|hendrycksTest-formal_logic|5_2024-03-06T00-38-10.412538.parquet' - config_name: harness_hendrycksTest_global_facts_5 data_files: - split: 2024_03_06T00_38_10.412538 path: - '**/details_harness|hendrycksTest-global_facts|5_2024-03-06T00-38-10.412538.parquet' - split: latest path: - '**/details_harness|hendrycksTest-global_facts|5_2024-03-06T00-38-10.412538.parquet' - config_name: harness_hendrycksTest_high_school_biology_5 data_files: - split: 2024_03_06T00_38_10.412538 path: - '**/details_harness|hendrycksTest-high_school_biology|5_2024-03-06T00-38-10.412538.parquet' - split: latest path: - '**/details_harness|hendrycksTest-high_school_biology|5_2024-03-06T00-38-10.412538.parquet' - config_name: harness_hendrycksTest_high_school_chemistry_5 data_files: - split: 2024_03_06T00_38_10.412538 path: - '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-03-06T00-38-10.412538.parquet' - split: latest path: - '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-03-06T00-38-10.412538.parquet' - config_name: harness_hendrycksTest_high_school_computer_science_5 data_files: - split: 2024_03_06T00_38_10.412538 path: - '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-03-06T00-38-10.412538.parquet' - split: latest path: - '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-03-06T00-38-10.412538.parquet' - config_name: harness_hendrycksTest_high_school_european_history_5 data_files: - split: 2024_03_06T00_38_10.412538 path: - '**/details_harness|hendrycksTest-high_school_european_history|5_2024-03-06T00-38-10.412538.parquet' - split: latest path: - 
'**/details_harness|hendrycksTest-high_school_european_history|5_2024-03-06T00-38-10.412538.parquet' - config_name: harness_hendrycksTest_high_school_geography_5 data_files: - split: 2024_03_06T00_38_10.412538 path: - '**/details_harness|hendrycksTest-high_school_geography|5_2024-03-06T00-38-10.412538.parquet' - split: latest path: - '**/details_harness|hendrycksTest-high_school_geography|5_2024-03-06T00-38-10.412538.parquet' - config_name: harness_hendrycksTest_high_school_government_and_politics_5 data_files: - split: 2024_03_06T00_38_10.412538 path: - '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-03-06T00-38-10.412538.parquet' - split: latest path: - '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-03-06T00-38-10.412538.parquet' - config_name: harness_hendrycksTest_high_school_macroeconomics_5 data_files: - split: 2024_03_06T00_38_10.412538 path: - '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-03-06T00-38-10.412538.parquet' - split: latest path: - '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-03-06T00-38-10.412538.parquet' - config_name: harness_hendrycksTest_high_school_mathematics_5 data_files: - split: 2024_03_06T00_38_10.412538 path: - '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-03-06T00-38-10.412538.parquet' - split: latest path: - '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-03-06T00-38-10.412538.parquet' - config_name: harness_hendrycksTest_high_school_microeconomics_5 data_files: - split: 2024_03_06T00_38_10.412538 path: - '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-03-06T00-38-10.412538.parquet' - split: latest path: - '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-03-06T00-38-10.412538.parquet' - config_name: harness_hendrycksTest_high_school_physics_5 data_files: - split: 2024_03_06T00_38_10.412538 path: - 
'**/details_harness|hendrycksTest-high_school_physics|5_2024-03-06T00-38-10.412538.parquet' - split: latest path: - '**/details_harness|hendrycksTest-high_school_physics|5_2024-03-06T00-38-10.412538.parquet' - config_name: harness_hendrycksTest_high_school_psychology_5 data_files: - split: 2024_03_06T00_38_10.412538 path: - '**/details_harness|hendrycksTest-high_school_psychology|5_2024-03-06T00-38-10.412538.parquet' - split: latest path: - '**/details_harness|hendrycksTest-high_school_psychology|5_2024-03-06T00-38-10.412538.parquet' - config_name: harness_hendrycksTest_high_school_statistics_5 data_files: - split: 2024_03_06T00_38_10.412538 path: - '**/details_harness|hendrycksTest-high_school_statistics|5_2024-03-06T00-38-10.412538.parquet' - split: latest path: - '**/details_harness|hendrycksTest-high_school_statistics|5_2024-03-06T00-38-10.412538.parquet' - config_name: harness_hendrycksTest_high_school_us_history_5 data_files: - split: 2024_03_06T00_38_10.412538 path: - '**/details_harness|hendrycksTest-high_school_us_history|5_2024-03-06T00-38-10.412538.parquet' - split: latest path: - '**/details_harness|hendrycksTest-high_school_us_history|5_2024-03-06T00-38-10.412538.parquet' - config_name: harness_hendrycksTest_high_school_world_history_5 data_files: - split: 2024_03_06T00_38_10.412538 path: - '**/details_harness|hendrycksTest-high_school_world_history|5_2024-03-06T00-38-10.412538.parquet' - split: latest path: - '**/details_harness|hendrycksTest-high_school_world_history|5_2024-03-06T00-38-10.412538.parquet' - config_name: harness_hendrycksTest_human_aging_5 data_files: - split: 2024_03_06T00_38_10.412538 path: - '**/details_harness|hendrycksTest-human_aging|5_2024-03-06T00-38-10.412538.parquet' - split: latest path: - '**/details_harness|hendrycksTest-human_aging|5_2024-03-06T00-38-10.412538.parquet' - config_name: harness_hendrycksTest_human_sexuality_5 data_files: - split: 2024_03_06T00_38_10.412538 path: - 
'**/details_harness|hendrycksTest-human_sexuality|5_2024-03-06T00-38-10.412538.parquet' - split: latest path: - '**/details_harness|hendrycksTest-human_sexuality|5_2024-03-06T00-38-10.412538.parquet' - config_name: harness_hendrycksTest_international_law_5 data_files: - split: 2024_03_06T00_38_10.412538 path: - '**/details_harness|hendrycksTest-international_law|5_2024-03-06T00-38-10.412538.parquet' - split: latest path: - '**/details_harness|hendrycksTest-international_law|5_2024-03-06T00-38-10.412538.parquet' - config_name: harness_hendrycksTest_jurisprudence_5 data_files: - split: 2024_03_06T00_38_10.412538 path: - '**/details_harness|hendrycksTest-jurisprudence|5_2024-03-06T00-38-10.412538.parquet' - split: latest path: - '**/details_harness|hendrycksTest-jurisprudence|5_2024-03-06T00-38-10.412538.parquet' - config_name: harness_hendrycksTest_logical_fallacies_5 data_files: - split: 2024_03_06T00_38_10.412538 path: - '**/details_harness|hendrycksTest-logical_fallacies|5_2024-03-06T00-38-10.412538.parquet' - split: latest path: - '**/details_harness|hendrycksTest-logical_fallacies|5_2024-03-06T00-38-10.412538.parquet' - config_name: harness_hendrycksTest_machine_learning_5 data_files: - split: 2024_03_06T00_38_10.412538 path: - '**/details_harness|hendrycksTest-machine_learning|5_2024-03-06T00-38-10.412538.parquet' - split: latest path: - '**/details_harness|hendrycksTest-machine_learning|5_2024-03-06T00-38-10.412538.parquet' - config_name: harness_hendrycksTest_management_5 data_files: - split: 2024_03_06T00_38_10.412538 path: - '**/details_harness|hendrycksTest-management|5_2024-03-06T00-38-10.412538.parquet' - split: latest path: - '**/details_harness|hendrycksTest-management|5_2024-03-06T00-38-10.412538.parquet' - config_name: harness_hendrycksTest_marketing_5 data_files: - split: 2024_03_06T00_38_10.412538 path: - '**/details_harness|hendrycksTest-marketing|5_2024-03-06T00-38-10.412538.parquet' - split: latest path: - 
'**/details_harness|hendrycksTest-marketing|5_2024-03-06T00-38-10.412538.parquet' - config_name: harness_hendrycksTest_medical_genetics_5 data_files: - split: 2024_03_06T00_38_10.412538 path: - '**/details_harness|hendrycksTest-medical_genetics|5_2024-03-06T00-38-10.412538.parquet' - split: latest path: - '**/details_harness|hendrycksTest-medical_genetics|5_2024-03-06T00-38-10.412538.parquet' - config_name: harness_hendrycksTest_miscellaneous_5 data_files: - split: 2024_03_06T00_38_10.412538 path: - '**/details_harness|hendrycksTest-miscellaneous|5_2024-03-06T00-38-10.412538.parquet' - split: latest path: - '**/details_harness|hendrycksTest-miscellaneous|5_2024-03-06T00-38-10.412538.parquet' - config_name: harness_hendrycksTest_moral_disputes_5 data_files: - split: 2024_03_06T00_38_10.412538 path: - '**/details_harness|hendrycksTest-moral_disputes|5_2024-03-06T00-38-10.412538.parquet' - split: latest path: - '**/details_harness|hendrycksTest-moral_disputes|5_2024-03-06T00-38-10.412538.parquet' - config_name: harness_hendrycksTest_moral_scenarios_5 data_files: - split: 2024_03_06T00_38_10.412538 path: - '**/details_harness|hendrycksTest-moral_scenarios|5_2024-03-06T00-38-10.412538.parquet' - split: latest path: - '**/details_harness|hendrycksTest-moral_scenarios|5_2024-03-06T00-38-10.412538.parquet' - config_name: harness_hendrycksTest_nutrition_5 data_files: - split: 2024_03_06T00_38_10.412538 path: - '**/details_harness|hendrycksTest-nutrition|5_2024-03-06T00-38-10.412538.parquet' - split: latest path: - '**/details_harness|hendrycksTest-nutrition|5_2024-03-06T00-38-10.412538.parquet' - config_name: harness_hendrycksTest_philosophy_5 data_files: - split: 2024_03_06T00_38_10.412538 path: - '**/details_harness|hendrycksTest-philosophy|5_2024-03-06T00-38-10.412538.parquet' - split: latest path: - '**/details_harness|hendrycksTest-philosophy|5_2024-03-06T00-38-10.412538.parquet' - config_name: harness_hendrycksTest_prehistory_5 data_files: - split: 
2024_03_06T00_38_10.412538 path: - '**/details_harness|hendrycksTest-prehistory|5_2024-03-06T00-38-10.412538.parquet' - split: latest path: - '**/details_harness|hendrycksTest-prehistory|5_2024-03-06T00-38-10.412538.parquet' - config_name: harness_hendrycksTest_professional_accounting_5 data_files: - split: 2024_03_06T00_38_10.412538 path: - '**/details_harness|hendrycksTest-professional_accounting|5_2024-03-06T00-38-10.412538.parquet' - split: latest path: - '**/details_harness|hendrycksTest-professional_accounting|5_2024-03-06T00-38-10.412538.parquet' - config_name: harness_hendrycksTest_professional_law_5 data_files: - split: 2024_03_06T00_38_10.412538 path: - '**/details_harness|hendrycksTest-professional_law|5_2024-03-06T00-38-10.412538.parquet' - split: latest path: - '**/details_harness|hendrycksTest-professional_law|5_2024-03-06T00-38-10.412538.parquet' - config_name: harness_hendrycksTest_professional_medicine_5 data_files: - split: 2024_03_06T00_38_10.412538 path: - '**/details_harness|hendrycksTest-professional_medicine|5_2024-03-06T00-38-10.412538.parquet' - split: latest path: - '**/details_harness|hendrycksTest-professional_medicine|5_2024-03-06T00-38-10.412538.parquet' - config_name: harness_hendrycksTest_professional_psychology_5 data_files: - split: 2024_03_06T00_38_10.412538 path: - '**/details_harness|hendrycksTest-professional_psychology|5_2024-03-06T00-38-10.412538.parquet' - split: latest path: - '**/details_harness|hendrycksTest-professional_psychology|5_2024-03-06T00-38-10.412538.parquet' - config_name: harness_hendrycksTest_public_relations_5 data_files: - split: 2024_03_06T00_38_10.412538 path: - '**/details_harness|hendrycksTest-public_relations|5_2024-03-06T00-38-10.412538.parquet' - split: latest path: - '**/details_harness|hendrycksTest-public_relations|5_2024-03-06T00-38-10.412538.parquet' - config_name: harness_hendrycksTest_security_studies_5 data_files: - split: 2024_03_06T00_38_10.412538 path: - 
'**/details_harness|hendrycksTest-security_studies|5_2024-03-06T00-38-10.412538.parquet' - split: latest path: - '**/details_harness|hendrycksTest-security_studies|5_2024-03-06T00-38-10.412538.parquet' - config_name: harness_hendrycksTest_sociology_5 data_files: - split: 2024_03_06T00_38_10.412538 path: - '**/details_harness|hendrycksTest-sociology|5_2024-03-06T00-38-10.412538.parquet' - split: latest path: - '**/details_harness|hendrycksTest-sociology|5_2024-03-06T00-38-10.412538.parquet' - config_name: harness_hendrycksTest_us_foreign_policy_5 data_files: - split: 2024_03_06T00_38_10.412538 path: - '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-03-06T00-38-10.412538.parquet' - split: latest path: - '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-03-06T00-38-10.412538.parquet' - config_name: harness_hendrycksTest_virology_5 data_files: - split: 2024_03_06T00_38_10.412538 path: - '**/details_harness|hendrycksTest-virology|5_2024-03-06T00-38-10.412538.parquet' - split: latest path: - '**/details_harness|hendrycksTest-virology|5_2024-03-06T00-38-10.412538.parquet' - config_name: harness_hendrycksTest_world_religions_5 data_files: - split: 2024_03_06T00_38_10.412538 path: - '**/details_harness|hendrycksTest-world_religions|5_2024-03-06T00-38-10.412538.parquet' - split: latest path: - '**/details_harness|hendrycksTest-world_religions|5_2024-03-06T00-38-10.412538.parquet' - config_name: harness_truthfulqa_mc_0 data_files: - split: 2024_03_06T00_38_10.412538 path: - '**/details_harness|truthfulqa:mc|0_2024-03-06T00-38-10.412538.parquet' - split: latest path: - '**/details_harness|truthfulqa:mc|0_2024-03-06T00-38-10.412538.parquet' - config_name: harness_winogrande_5 data_files: - split: 2024_03_06T00_38_10.412538 path: - '**/details_harness|winogrande|5_2024-03-06T00-38-10.412538.parquet' - split: latest path: - '**/details_harness|winogrande|5_2024-03-06T00-38-10.412538.parquet' - config_name: results data_files: - split: 
2024_03_06T00_38_10.412538 path: - results_2024-03-06T00-38-10.412538.parquet - split: latest path: - results_2024-03-06T00-38-10.412538.parquet
---

# Dataset Card for Evaluation run of AA051615/A0305a

<!-- Provide a quick summary of the dataset. -->

Dataset automatically created during the evaluation run of model [AA051615/A0305a](https://huggingface.co/AA051615/A0305a) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).

The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.

The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.

An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).

To load the details from a run, you can for instance do the following:

```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_AA051615__A0305a",
                    "harness_winogrande_5",
                    split="train")
```

## Latest results

These are the [latest results from run 2024-03-06T00:38:10.412538](https://huggingface.co/datasets/open-llm-leaderboard/details_AA051615__A0305a/blob/main/results_2024-03-06T00-38-10.412538.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks.
You can find each in the results and the "latest" split for each eval):

```python
{
    "all": {
        "acc": 0.749185114502518,
        "acc_stderr": 0.028187673063681265,
        "acc_norm": 0.7549725341672238,
        "acc_norm_stderr": 0.028698055518822745,
        "mc1": 0.3598531211750306,
        "mc1_stderr": 0.01680186046667714,
        "mc2": 0.5173907130984454,
        "mc2_stderr": 0.015436053888120308
    },
    "harness|arc:challenge|25": {
        "acc": 0.5836177474402731,
        "acc_stderr": 0.014405618279436172,
        "acc_norm": 0.613481228668942,
        "acc_norm_stderr": 0.014230084761910471
    },
    "harness|hellaswag|10": {
        "acc": 0.611431985660227,
        "acc_stderr": 0.004864286176731831,
        "acc_norm": 0.8040231029675363,
        "acc_norm_stderr": 0.003961395637784951
    },
    "harness|hendrycksTest-abstract_algebra|5": {
        "acc": 0.37,
        "acc_stderr": 0.04852365870939099,
        "acc_norm": 0.37,
        "acc_norm_stderr": 0.04852365870939099
    },
    "harness|hendrycksTest-anatomy|5": {
        "acc": 0.7333333333333333,
        "acc_stderr": 0.038201699145179055,
        "acc_norm": 0.7333333333333333,
        "acc_norm_stderr": 0.038201699145179055
    },
    "harness|hendrycksTest-astronomy|5": {
        "acc": 0.875,
        "acc_stderr": 0.026913523521537846,
        "acc_norm": 0.875,
        "acc_norm_stderr": 0.026913523521537846
    },
    "harness|hendrycksTest-business_ethics|5": {
        "acc": 0.8,
        "acc_stderr": 0.040201512610368445,
        "acc_norm": 0.8,
        "acc_norm_stderr": 0.040201512610368445
    },
    "harness|hendrycksTest-clinical_knowledge|5": {
        "acc": 0.8301886792452831,
        "acc_stderr": 0.023108393799841326,
        "acc_norm": 0.8301886792452831,
        "acc_norm_stderr": 0.023108393799841326
    },
    "harness|hendrycksTest-college_biology|5": {
        "acc": 0.9027777777777778,
        "acc_stderr": 0.024774516250440182,
        "acc_norm": 0.9027777777777778,
        "acc_norm_stderr": 0.024774516250440182
    },
    "harness|hendrycksTest-college_chemistry|5": {
        "acc": 0.53,
        "acc_stderr": 0.05016135580465919,
        "acc_norm": 0.53,
        "acc_norm_stderr": 0.05016135580465919
    },
    "harness|hendrycksTest-college_computer_science|5": {
        "acc": 0.62,
        "acc_stderr": 0.04878317312145633,
        "acc_norm": 0.62,
        "acc_norm_stderr": 0.04878317312145633
    },
    "harness|hendrycksTest-college_mathematics|5": {
        "acc": 0.39,
        "acc_stderr": 0.04902071300001974,
        "acc_norm": 0.39,
        "acc_norm_stderr": 0.04902071300001974
    },
    "harness|hendrycksTest-college_medicine|5": {
        "acc": 0.7283236994219653,
        "acc_stderr": 0.0339175032232166,
        "acc_norm": 0.7283236994219653,
        "acc_norm_stderr": 0.0339175032232166
    },
    "harness|hendrycksTest-college_physics|5": {
        "acc": 0.4215686274509804,
        "acc_stderr": 0.049135952012744975,
        "acc_norm": 0.4215686274509804,
        "acc_norm_stderr": 0.049135952012744975
    },
    "harness|hendrycksTest-computer_security|5": {
        "acc": 0.85,
        "acc_stderr": 0.03588702812826371,
        "acc_norm": 0.85,
        "acc_norm_stderr": 0.03588702812826371
    },
    "harness|hendrycksTest-conceptual_physics|5": {
        "acc": 0.7361702127659574,
        "acc_stderr": 0.02880998985410295,
        "acc_norm": 0.7361702127659574,
        "acc_norm_stderr": 0.02880998985410295
    },
    "harness|hendrycksTest-econometrics|5": {
        "acc": 0.5614035087719298,
        "acc_stderr": 0.04668000738510455,
        "acc_norm": 0.5614035087719298,
        "acc_norm_stderr": 0.04668000738510455
    },
    "harness|hendrycksTest-electrical_engineering|5": {
        "acc": 0.7931034482758621,
        "acc_stderr": 0.03375672449560553,
        "acc_norm": 0.7931034482758621,
        "acc_norm_stderr": 0.03375672449560553
    },
    "harness|hendrycksTest-elementary_mathematics|5": {
        "acc": 0.6640211640211641,
        "acc_stderr": 0.024326310529149138,
        "acc_norm": 0.6640211640211641,
        "acc_norm_stderr": 0.024326310529149138
    },
    "harness|hendrycksTest-formal_logic|5": {
        "acc": 0.5238095238095238,
        "acc_stderr": 0.04467062628403273,
        "acc_norm": 0.5238095238095238,
        "acc_norm_stderr": 0.04467062628403273
    },
    "harness|hendrycksTest-global_facts|5": {
        "acc": 0.53,
        "acc_stderr": 0.05016135580465919,
        "acc_norm": 0.53,
        "acc_norm_stderr": 0.05016135580465919
    },
    "harness|hendrycksTest-high_school_biology|5": {
        "acc": 0.9193548387096774,
        "acc_stderr": 0.015490002961591037,
        "acc_norm": 0.9193548387096774,
        "acc_norm_stderr": 0.015490002961591037
    },
    "harness|hendrycksTest-high_school_chemistry|5": {
        "acc": 0.6305418719211823,
        "acc_stderr": 0.03395970381998574,
        "acc_norm": 0.6305418719211823,
        "acc_norm_stderr": 0.03395970381998574
    },
    "harness|hendrycksTest-high_school_computer_science|5": {
        "acc": 0.77,
        "acc_stderr": 0.04229525846816505,
        "acc_norm": 0.77,
        "acc_norm_stderr": 0.04229525846816505
    },
    "harness|hendrycksTest-high_school_european_history|5": {
        "acc": 0.8848484848484849,
        "acc_stderr": 0.024925699798115347,
        "acc_norm": 0.8848484848484849,
        "acc_norm_stderr": 0.024925699798115347
    },
    "harness|hendrycksTest-high_school_geography|5": {
        "acc": 0.9292929292929293,
        "acc_stderr": 0.0182631054201995,
        "acc_norm": 0.9292929292929293,
        "acc_norm_stderr": 0.0182631054201995
    },
    "harness|hendrycksTest-high_school_government_and_politics|5": {
        "acc": 0.9637305699481865,
        "acc_stderr": 0.013492659751295138,
        "acc_norm": 0.9637305699481865,
        "acc_norm_stderr": 0.013492659751295138
    },
    "harness|hendrycksTest-high_school_macroeconomics|5": {
        "acc": 0.8,
        "acc_stderr": 0.020280805062535726,
        "acc_norm": 0.8,
        "acc_norm_stderr": 0.020280805062535726
    },
    "harness|hendrycksTest-high_school_mathematics|5": {
        "acc": 0.3962962962962963,
        "acc_stderr": 0.029822619458533994,
        "acc_norm": 0.3962962962962963,
        "acc_norm_stderr": 0.029822619458533994
    },
    "harness|hendrycksTest-high_school_microeconomics|5": {
        "acc": 0.8151260504201681,
        "acc_stderr": 0.025215992877954205,
        "acc_norm": 0.8151260504201681,
        "acc_norm_stderr": 0.025215992877954205
    },
    "harness|hendrycksTest-high_school_physics|5": {
        "acc": 0.41721854304635764,
        "acc_stderr": 0.040261414976346104,
        "acc_norm": 0.41721854304635764,
        "acc_norm_stderr": 0.040261414976346104
    },
    "harness|hendrycksTest-high_school_psychology|5": {
        "acc": 0.9229357798165138,
        "acc_stderr": 0.011434381698911098,
        "acc_norm": 0.9229357798165138,
        "acc_norm_stderr": 0.011434381698911098
    },
    "harness|hendrycksTest-high_school_statistics|5": {
        "acc": 0.6157407407407407,
        "acc_stderr": 0.03317354514310742,
        "acc_norm": 0.6157407407407407,
        "acc_norm_stderr": 0.03317354514310742
    },
    "harness|hendrycksTest-high_school_us_history|5": {
        "acc": 0.9313725490196079,
        "acc_stderr": 0.017744453647073322,
        "acc_norm": 0.9313725490196079,
        "acc_norm_stderr": 0.017744453647073322
    },
    "harness|hendrycksTest-high_school_world_history|5": {
        "acc": 0.890295358649789,
        "acc_stderr": 0.020343400734868837,
        "acc_norm": 0.890295358649789,
        "acc_norm_stderr": 0.020343400734868837
    },
    "harness|hendrycksTest-human_aging|5": {
        "acc": 0.7892376681614349,
        "acc_stderr": 0.027373095500540193,
        "acc_norm": 0.7892376681614349,
        "acc_norm_stderr": 0.027373095500540193
    },
    "harness|hendrycksTest-human_sexuality|5": {
        "acc": 0.8473282442748091,
        "acc_stderr": 0.031545216720054725,
        "acc_norm": 0.8473282442748091,
        "acc_norm_stderr": 0.031545216720054725
    },
    "harness|hendrycksTest-international_law|5": {
        "acc": 0.9090909090909091,
        "acc_stderr": 0.026243194054073885,
        "acc_norm": 0.9090909090909091,
        "acc_norm_stderr": 0.026243194054073885
    },
    "harness|hendrycksTest-jurisprudence|5": {
        "acc": 0.8611111111111112,
        "acc_stderr": 0.033432700628696195,
        "acc_norm": 0.8611111111111112,
        "acc_norm_stderr": 0.033432700628696195
    },
    "harness|hendrycksTest-logical_fallacies|5": {
        "acc": 0.8773006134969326,
        "acc_stderr": 0.025777328426978927,
        "acc_norm": 0.8773006134969326,
        "acc_norm_stderr": 0.025777328426978927
    },
    "harness|hendrycksTest-machine_learning|5": {
        "acc": 0.5714285714285714,
        "acc_stderr": 0.04697113923010212,
        "acc_norm": 0.5714285714285714,
        "acc_norm_stderr": 0.04697113923010212
    },
    "harness|hendrycksTest-management|5": {
        "acc": 0.9029126213592233,
        "acc_stderr": 0.02931596291881347,
        "acc_norm": 0.9029126213592233,
        "acc_norm_stderr": 0.02931596291881347
    },
    "harness|hendrycksTest-marketing|5": {
        "acc": 0.9358974358974359,
        "acc_stderr": 0.01604626163167314,
        "acc_norm": 0.9358974358974359,
        "acc_norm_stderr": 0.01604626163167314
    },
    "harness|hendrycksTest-medical_genetics|5": {
        "acc": 0.86,
        "acc_stderr": 0.03487350880197772,
        "acc_norm": 0.86,
        "acc_norm_stderr": 0.03487350880197772
    },
    "harness|hendrycksTest-miscellaneous|5": {
        "acc": 0.8939974457215837,
        "acc_stderr": 0.011008367705789368,
        "acc_norm": 0.8939974457215837,
        "acc_norm_stderr": 0.011008367705789368
    },
    "harness|hendrycksTest-moral_disputes|5": {
        "acc": 0.8121387283236994,
        "acc_stderr": 0.021029269752423217,
        "acc_norm": 0.8121387283236994,
        "acc_norm_stderr": 0.021029269752423217
    },
    "harness|hendrycksTest-moral_scenarios|5": {
        "acc": 0.7631284916201118,
        "acc_stderr": 0.014219570788103984,
        "acc_norm": 0.7631284916201118,
        "acc_norm_stderr": 0.014219570788103984
    },
    "harness|hendrycksTest-nutrition|5": {
        "acc": 0.8366013071895425,
        "acc_stderr": 0.021170623011213495,
        "acc_norm": 0.8366013071895425,
        "acc_norm_stderr": 0.021170623011213495
    },
    "harness|hendrycksTest-philosophy|5": {
        "acc": 0.8327974276527331,
        "acc_stderr": 0.021193872528034962,
        "acc_norm": 0.8327974276527331,
        "acc_norm_stderr": 0.021193872528034962
    },
    "harness|hendrycksTest-prehistory|5": {
        "acc": 0.8179012345679012,
        "acc_stderr": 0.021473491834808355,
        "acc_norm": 0.8179012345679012,
        "acc_norm_stderr": 0.021473491834808355
    },
    "harness|hendrycksTest-professional_accounting|5": {
        "acc": 0.6099290780141844,
        "acc_stderr": 0.02909767559946393,
        "acc_norm": 0.6099290780141844,
        "acc_norm_stderr": 0.02909767559946393
    },
    "harness|hendrycksTest-professional_law|5": {
        "acc": 0.6375488917861799,
        "acc_stderr": 0.012277512533252493,
        "acc_norm": 0.6375488917861799,
        "acc_norm_stderr": 0.012277512533252493
    },
    "harness|hendrycksTest-professional_medicine|5": {
        "acc": 0.8786764705882353,
        "acc_stderr": 0.019833637481057927,
        "acc_norm": 0.8786764705882353,
        "acc_norm_stderr": 0.019833637481057927
    },
    "harness|hendrycksTest-professional_psychology|5": {
        "acc": 0.8055555555555556,
        "acc_stderr": 0.016011237996336945,
        "acc_norm": 0.8055555555555556,
        "acc_norm_stderr": 0.016011237996336945
    },
    "harness|hendrycksTest-public_relations|5": {
        "acc": 0.7545454545454545,
        "acc_stderr": 0.041220665028782855,
        "acc_norm": 0.7545454545454545,
        "acc_norm_stderr": 0.041220665028782855
    },
    "harness|hendrycksTest-security_studies|5": {
        "acc": 0.8204081632653061,
        "acc_stderr": 0.024573293589585637,
        "acc_norm": 0.8204081632653061,
        "acc_norm_stderr": 0.024573293589585637
    },
    "harness|hendrycksTest-sociology|5": {
        "acc": 0.9353233830845771,
        "acc_stderr": 0.017391600291491064,
        "acc_norm": 0.9353233830845771,
        "acc_norm_stderr": 0.017391600291491064
    },
    "harness|hendrycksTest-us_foreign_policy|5": {
        "acc": 0.9,
        "acc_stderr": 0.03015113445777634,
        "acc_norm": 0.9,
        "acc_norm_stderr": 0.03015113445777634
    },
    "harness|hendrycksTest-virology|5": {
        "acc": 0.608433734939759,
        "acc_stderr": 0.03799857454479636,
        "acc_norm": 0.608433734939759,
        "acc_norm_stderr": 0.03799857454479636
    },
    "harness|hendrycksTest-world_religions|5": {
        "acc": 0.9181286549707602,
        "acc_stderr": 0.02102777265656387,
        "acc_norm": 0.9181286549707602,
        "acc_norm_stderr": 0.02102777265656387
    },
    "harness|truthfulqa:mc|0": {
        "mc1": 0.3598531211750306,
        "mc1_stderr": 0.01680186046667714,
        "mc2": 0.5173907130984454,
        "mc2_stderr": 0.015436053888120308
    },
    "harness|winogrande|5": {
        "acc": 0.77663772691397,
        "acc_stderr": 0.011705697565205201
    },
    "harness|gsm8k|5": {
        "acc": 0.6027293404094011,
        "acc_stderr": 0.013478659652337792
    }
}
```

## Dataset Details

### Dataset Description

<!-- Provide a longer summary of what this dataset is. -->

- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]

### Dataset Sources [optional]

<!-- Provide the basic links for the dataset. -->

- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]

## Uses

<!-- Address questions around how the dataset is intended to be used. -->

### Direct Use

<!-- This section describes suitable use cases for the dataset.
--> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. --> [More Information Needed] ## Dataset Structure <!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. --> [More Information Needed] ## Dataset Creation ### Curation Rationale <!-- Motivation for the creation of this dataset. --> [More Information Needed] ### Source Data <!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). --> #### Data Collection and Processing <!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. --> [More Information Needed] #### Who are the source data producers? <!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. --> [More Information Needed] ### Annotations [optional] <!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. --> #### Annotation process <!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. --> [More Information Needed] #### Who are the annotators? <!-- This section describes the people or systems who created the annotations. 
--> [More Information Needed] #### Personal and Sensitive Information <!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations. ## Citation [optional] <!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Dataset Card Authors [optional] [More Information Needed] ## Dataset Card Contact [More Information Needed]
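For downstream use, the per-task `acc` values in the results JSON earlier in this card can be averaged into a single MMLU-style score. A minimal sketch (the dict below reuses a small subset of the values shown above; key names follow the `harness|...` convention used in the results block):

```python
# Average the "acc" field over all hendrycksTest tasks in a results
# dict shaped like the JSON earlier in this card (subset shown here).
results = {
    "harness|hendrycksTest-marketing|5": {"acc": 0.9358974358974359},
    "harness|hendrycksTest-virology|5": {"acc": 0.608433734939759},
    "harness|gsm8k|5": {"acc": 0.6027293404094011},  # not an MMLU task
}

mmlu_accs = [
    v["acc"]
    for k, v in results.items()
    if k.startswith("harness|hendrycksTest-")
]
mmlu_avg = sum(mmlu_accs) / len(mmlu_accs)
print(f"MMLU average over {len(mmlu_accs)} tasks: {mmlu_avg:.4f}")
```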
tyzhu/lmind_nq_train1000_eval500_v1_doc_qa
--- configs: - config_name: default data_files: - split: train_qa path: data/train_qa-* - split: train_recite_qa path: data/train_recite_qa-* - split: eval_qa path: data/eval_qa-* - split: eval_recite_qa path: data/eval_recite_qa-* - split: all_docs path: data/all_docs-* - split: all_docs_eval path: data/all_docs_eval-* - split: train path: data/train-* - split: validation path: data/validation-* dataset_info: features: - name: answers struct: - name: answer_start sequence: 'null' - name: text sequence: string - name: inputs dtype: string - name: targets dtype: string splits: - name: train_qa num_bytes: 115608 num_examples: 1000 - name: train_recite_qa num_bytes: 755758 num_examples: 1000 - name: eval_qa num_bytes: 58285 num_examples: 500 - name: eval_recite_qa num_bytes: 377880 num_examples: 500 - name: all_docs num_bytes: 950316 num_examples: 1462 - name: all_docs_eval num_bytes: 950216 num_examples: 1462 - name: train num_bytes: 1065924 num_examples: 2462 - name: validation num_bytes: 58285 num_examples: 500 download_size: 2722791 dataset_size: 4332272 --- # Dataset Card for "lmind_nq_train1000_eval500_v1_doc_qa" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
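Going by the feature schema declared in the YAML block above, each QA example can be pictured as a plain Python record. The values below are hypothetical placeholders, not drawn from the dataset; only the shape follows `dataset_info`:

```python
# Hypothetical record matching the declared features: "answers" is a
# struct of parallel sequences, "inputs"/"targets" are plain strings.
record = {
    "answers": {
        "answer_start": [None],          # declared as a sequence of nulls
        "text": ["a short answer span"],
    },
    "inputs": "question text plus any retrieved or recited document",
    "targets": "a short answer span",
}

# Sanity check: the training target matches one of the answer texts.
assert record["targets"] in record["answers"]["text"]
print(sorted(record))
```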
mii-llm/enex-dpo
--- dataset_info: features: - name: prompt dtype: string - name: chosen dtype: string - name: rejected dtype: string splits: - name: train num_bytes: 2561469 num_examples: 2990 download_size: 1389284 dataset_size: 2561469 configs: - config_name: default data_files: - split: train path: data/train-* --- # Dataset Card for "enex-dpo" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
McGill-NLP/TopiOCQA
--- annotations_creators: - crowdsourced language: - en license: - cc-by-nc-sa-4.0 multilinguality: - monolingual size_categories: - 10K<n<100k task_categories: - text-retrieval - text-generation task_ids: - language-modeling - open-domain-qa pretty_name: Open-domain Conversational Question Answering with Topic Switching tags: - conversational-question-answering --- # Dataset Card for TopiOCQA ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Languages](#languages) - [Additional Information](#additional-information) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) ## Dataset Description - **Homepage:** [TopiOCQA homepage](https://mcgill-nlp.github.io/topiocqa/) - **Repository:** [TopiOCQA Github](https://github.com/McGill-NLP/topiocqa) - **Paper:** [Open-domain Conversational Question Answering with Topic Switching](https://arxiv.org/abs/2110.00768) - **Point of Contact:** [Vaibhav Adlakha](mailto:vaibhav.adlakha@mila.quebec) ### Dataset Summary TopiOCQA is an information-seeking conversational dataset with challenging topic switching phenomena. ### Languages The language in the dataset is English as spoken by the crowdworkers. The BCP-47 code for English is en. ## Additional Information ### Licensing Information TopiOCQA is licensed under a [Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License](https://creativecommons.org/licenses/by-nc-sa/4.0/). 
### Citation Information

```
@article{adlakha2022topiocqa,
  title={Topi{OCQA}: Open-domain Conversational Question Answering with Topic Switching},
  author={Adlakha, Vaibhav and Dhuliawala, Shehzaad and Suleman, Kaheer and de Vries, Harm and Reddy, Siva},
  journal={Transactions of the Association for Computational Linguistics},
  volume = {10},
  pages = {468-483},
  year = {2022},
  month = {04},
  issn = {2307-387X},
  doi = {10.1162/tacl_a_00471},
  url = {https://doi.org/10.1162/tacl\_a\_00471},
  eprint = {https://direct.mit.edu/tacl/article-pdf/doi/10.1162/tacl\_a\_00471/2008126/tacl\_a\_00471.pdf},
}
```
CyberHarem/clea_fireemblem
--- license: mit task_categories: - text-to-image tags: - art - not-for-all-audiences size_categories: - n<1K --- # Dataset of clea (Fire Emblem) This is the dataset of clea (Fire Emblem), containing 36 images and their tags. The core tags of this character are `blonde_hair, long_hair, ponytail, brown_eyes, breasts, medium_breasts, bangs`, which are pruned in this dataset. Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)). ## List of Packages | Name | Images | Size | Download | Type | Description | |:-----------------|---------:|:----------|:-----------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------| | raw | 36 | 40.39 MiB | [Download](https://huggingface.co/datasets/CyberHarem/clea_fireemblem/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). | | 800 | 36 | 24.68 MiB | [Download](https://huggingface.co/datasets/CyberHarem/clea_fireemblem/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. | | stage3-p480-800 | 73 | 46.22 MiB | [Download](https://huggingface.co/datasets/CyberHarem/clea_fireemblem/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. | | 1200 | 36 | 36.53 MiB | [Download](https://huggingface.co/datasets/CyberHarem/clea_fireemblem/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. | | stage3-p480-1200 | 73 | 62.63 MiB | [Download](https://huggingface.co/datasets/CyberHarem/clea_fireemblem/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. 
| ### Load Raw Dataset with Waifuc We provide raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need this, just run the following code ```python import os import zipfile from huggingface_hub import hf_hub_download from waifuc.source import LocalSource # download raw archive file zip_file = hf_hub_download( repo_id='CyberHarem/clea_fireemblem', repo_type='dataset', filename='dataset-raw.zip', ) # extract files to your directory dataset_dir = 'dataset_dir' os.makedirs(dataset_dir, exist_ok=True) with zipfile.ZipFile(zip_file, 'r') as zf: zf.extractall(dataset_dir) # load the dataset with waifuc source = LocalSource(dataset_dir) for item in source: print(item.image, item.meta['filename'], item.meta['tags']) ``` ## List of Clusters List of tag clustering result, maybe some outfits can be mined here. ### Raw Text Version | # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags | |----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | 0 | 5 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | 1girl, simple_background, solo, dress, full_body, helmet, pantyhose, smile, black_leggings, high_heels, looking_at_viewer, white_background, arms_behind_back, bridal_gauntlets, gloves, holding_weapon, open_mouth, shoulder_armor | ### Table Version | # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | simple_background | solo | dress | full_body | helmet | pantyhose | smile | black_leggings | high_heels | 
looking_at_viewer | white_background | arms_behind_back | bridal_gauntlets | gloves | holding_weapon | open_mouth | shoulder_armor | |----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:--------------------|:-------|:--------|:------------|:---------|:------------|:--------|:-----------------|:-------------|:--------------------|:-------------------|:-------------------|:-------------------|:---------|:-----------------|:-------------|:-----------------| | 0 | 5 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X |
carlosejimenez/wnli_corpus_trainval
--- dataset_info: features: - name: idx dtype: int64 - name: text dtype: string splits: - name: test num_bytes: 36867 num_examples: 146 - name: train num_bytes: 114329 num_examples: 706 - name: validation num_bytes: 11665 num_examples: 71 download_size: 0 dataset_size: 162861 --- # Dataset Card for "wnli_corpus_trainval" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
Falah/female_runner_prompts
--- dataset_info: features: - name: prompts dtype: string splits: - name: train num_bytes: 10330492 num_examples: 100000 download_size: 215399 dataset_size: 10330492 --- # Dataset Card for "female_runner_prompts" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
ciempiess/ciempiess_complementary
--- annotations_creators: - expert-generated language: - es language_creators: - other license: - cc-by-sa-4.0 multilinguality: - monolingual pretty_name: 'CIEMPIESS COMPLEMENTARY CORPUS: Audio and Transcripts of Spanish Isolated Words.' size_categories: - n<1K source_datasets: - original tags: - ciempiess - spanish - mexican spanish - ciempiess project - ciempiess-unam project task_categories: - automatic-speech-recognition task_ids: [] --- # Dataset Card for ciempiess_complementary ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [CIEMPIESS-UNAM Project](https://ciempiess.org/) - **Repository:** [CIEMPIESS COMPLEMENTARY at LDC](https://catalog.ldc.upenn.edu/LDC2019S07) - **Point of Contact:** [Carlos Mena](mailto:carlos.mena@ciempiess.org) ### Dataset Summary The CIEMPIESS COMPLEMENTARY is a phonetically balanced corpus of isolated Spanish words spoken by people of Central Mexico. 
It was designed to solve one particular issue when training automatic speech recognition (ASR) systems in the Spanish of Central Mexico. This problem appears when someone collects some training data, but the system complains because it does not find enough instances of one or more particular phonemes.

The CIEMPIESS COMPLEMENTARY Corpus was created with the voices of 10 male and 10 female volunteers reading isolated words. The words were chosen to ensure that users get at least twenty instances of every single phoneme and allophone of the Mexican phonetic alphabet called [Mexbet](https://turing.iimas.unam.mx/~luis/DIME/publicaciones/tesis/Cuetara_Tesis_MLH-UNAM.pdf). So, the CIEMPIESS COMPLEMENTARY is "COMPLEMENTARY" because it "complements" datasets when training ASR systems in the Spanish of Central Mexico.

On the other hand, Mexbet is a phonetic alphabet designed for the Spanish of Central Mexico. It was created in 2004 by the linguist [Javier Cuétara](http://ec.filos.unam.mx/mtro-javier-cuetara-priede) at [UNAM](https://www.unam.mx/) University to do experiments with another corpus in Mexican Spanish called [DIMEx100](http://turing.iimas.unam.mx/~luis/DIME/CORPUS-DIMEX.html), giving excellent results. For these reasons, Mexbet is the preferred alphabet for the whole CIEMPIESS family. An updated set of the Mexbet rules adapted for the CIEMPIESS project is presented in [this paper](https://rcs.cic.ipn.mx/2014_83/RCS_83_2014.pdf#page=70).

The CIEMPIESS COMPLEMENTARY is part of the [CIEMPIESS Experimentation](https://catalog.ldc.upenn.edu/LDC2019S07), which is a set of three different datasets, specifically [CIEMPIESS COMPLEMENTARY](https://huggingface.co/datasets/ciempiess/ciempiess_complementary), [CIEMPIESS FEM](https://huggingface.co/datasets/ciempiess/ciempiess_fem) and [CIEMPIESS TEST](https://huggingface.co/datasets/ciempiess/ciempiess_test).
CIEMPIESS is the acronym for: "Corpus de Investigación en Español de México del Posgrado de Ingeniería Eléctrica y Servicio Social". ### Example Usage The CIEMPIESS COMPLEMENTARY contains only the train split: ```python from datasets import load_dataset ciempiess_complementary = load_dataset("ciempiess/ciempiess_complementary") ``` It is also valid to do: ```python from datasets import load_dataset ciempiess_complementary = load_dataset("ciempiess/ciempiess_complementary",split="train") ``` ### Supported Tasks automatic-speech-recognition: The dataset can be used to test a model for Automatic Speech Recognition (ASR). The model is presented with an audio file and asked to transcribe the audio file to written text. The most common evaluation metric is the word error rate (WER). ### Languages The language of the corpus is Spanish with the accent of Central Mexico. ## Dataset Structure ### Data Instances ```python { 'audio_id': 'CMPC_F_05_A_0002', 'audio': { 'path': '/home/carlos/.cache/HuggingFace/datasets/downloads/extracted/7c889e5ab17b842a7b62a3fbcfe0f9d71c0b89cb8df5a896a9acae10594bffba/train/female/F_05/alphabet/CMPC_F_05_A_0002.flac', 'array': array([0. , 0. , 0. , ..., 0.00097656, 0.00183105, 0.00180054], dtype=float32), 'sampling_rate': 16000 }, 'speaker_id': 'F_05', 'gender': 'female', 'duration': 9.470000267028809, 'utt_type': 'Letters', 'age': 20, 'education': 'University', 'birthplace': 'Mexico City', 'residence': 'Mexico City', 'normalized_text': 'jota ka ele doble ele eme ene eñe o pe cu erre' } ``` ### Data Fields * `audio_id` (string) - id of audio segment * `audio` (datasets.Audio) - a dictionary containing the path to the audio, the decoded audio array, and the sampling rate. In non-streaming mode (default), the path points to the locally extracted audio. In streaming mode, the path is the relative path of an audio inside its archive (as files are not downloaded and extracted locally). * `speaker_id` (string) - id of speaker. 
* `gender` (string) - gender of speaker (male or female).
* `duration` (float32) - duration of the audio file in seconds.
* `utt_type` (string) - utterance type. "W": words, "A": letters (Alphabet), "D": digits.
* `age` (int32) - age of the speaker.
* `education` (string) - educational level of the speaker.
* `birthplace` (string) - birthplace of the speaker.
* `residence` (string) - place of residence of the speaker.
* `normalized_text` (string) - normalized audio segment transcription.

### Data Splits

The corpus contains only the train split, which has a total of 520 speech files from 10 male and 10 female volunteers from Central Mexico with ages between 20 and 49 years old. Each speaker contributes 26 speech files.

## Dataset Creation

### Curation Rationale

The CIEMPIESS COMPLEMENTARY Corpus has the following characteristics:

* The corpus was recorded using a Sony recorder model ICD-PX312D in a moderate noise environment similar to a medium size library. The recordings were originally made in MP3 format with a quality of 44.1 kHz, 128 kbps, stereo.
* 10 male and 10 female volunteers from Central Mexico, with ages from 20 to 49 years, contributed 26 speech files each.
* The corpus is 56 minutes long, with 520 speech files converted to a 16 kHz, 16-bit, PCM, mono format.
* Each speaker read the digits from zero to nine (1 speech file), the alphabet with some common nicknames of certain letters, like "i griega" for the "y" (3 speech files, 11 letters per file) and, finally, a list of 66 words (22 speech files, 3 words per file). In general, every speaker reads different words, but a few words are read by two different speakers.
* The 22 lists of Spanish words that every speaker reads are designed to ensure that the 66 phonemes and allophones of the [T66 level of Mexbet](https://ciempiess.org/Alfabetos_Foneticos/EVOLUTION_of_MEXBET.html#Tabla7) are repeated, at least, one time for each speaker.
Note that the 29 phonemes of the [T29 level of Mexbet](https://ciempiess.org/Alfabetos_Foneticos/EVOLUTION_of_MEXBET.html#Tabla8) are included in the T66 level. * Speakers in the CIEMPIESS COMPLEMENTARY are not present in any other CIEMPIESS dataset. ### Source Data #### Initial Data Collection and Normalization All the recordings that constitute the CIEMPIESS COMPLEMENTARY were recorded with a Sony recorder model ICD-PX312D in a moderate noise environment similar to a medium size library by [Susana Alejandra Jiménez Sandoval](https://mx.linkedin.com/in/susana-alejandra-jim%C3%A9nez-sandoval-843801134) between 2016 and 2017. The corpus was curated by [Carlos Daniel Hernández Mena](https://huggingface.co/carlosdanielhernandezmena) in 2018. ### Annotations #### Annotation process The corpus was created out of recordings of volunteers reading isolated words and letters. The lists of words are unique for each speaker and they were created to be phonetically balanced and to be read by native speakers of the Spanish language with the accent of Central Mexico. #### Who are the annotators? The corpus was designed and curated by [Carlos Daniel Hernández Mena](https://huggingface.co/carlosdanielhernandezmena) and it was collected by [Susana Alejandra Jiménez Sandoval](https://mx.linkedin.com/in/susana-alejandra-jim%C3%A9nez-sandoval-843801134). ### Personal and Sensitive Information The corpus is comprised of speech recordings of isolated words and letters read by volunteers, so, no personal or sensitive information is shared. However, you agree to not attempt to determine the identity of speakers in this dataset. ## Considerations for Using the Data ### Social Impact of Dataset This dataset is valuable because it is phonetically balanced. ### Discussion of Biases The corpus is gender and phonetically balanced; 10 male and 10 female volunteers from Central Mexico and ages from 20 to 49 years contributed with 26 speech files each. 
### Other Known Limitations

"CIEMPIESS COMPLEMENTARY CORPUS" by Carlos Daniel Hernández Mena is licensed under a [Creative Commons Attribution-ShareAlike 4.0 International (CC BY-SA 4.0)](https://creativecommons.org/licenses/by-sa/4.0/) License with the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.

### Dataset Curators

The dataset was curated by [Carlos Daniel Hernández Mena](https://huggingface.co/carlosdanielhernandezmena) in 2018.

### Licensing Information

[CC-BY-SA-4.0](https://creativecommons.org/licenses/by-sa/4.0/)

### Citation Information

```
@misc{carlosmenaciempiesscomplementary2019,
  title={CIEMPIESS COMPLEMENTARY CORPUS: Audio and Transcripts of Spanish Isolated Words.},
  ldc_catalog_no={LDC2019S07},
  DOI={https://doi.org/10.35111/xdx5-n815},
  author={Hernandez Mena, Carlos Daniel and Jiménez Sandoval, Susana Alejandra},
  journal={Linguistic Data Consortium, Philadelphia},
  year={2019},
  url={https://catalog.ldc.upenn.edu/LDC2019S07},
}
```

### Contributions

The authors want to thank Alejandro V. Mena, Elena Vera and Angélica Gutiérrez for their support of the social service program "Desarrollo de Tecnologías del Habla."
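The word error rate (WER) mentioned under Supported Tasks can be computed with a short Levenshtein-distance helper over word tokens. This is a generic sketch of the metric, not a script shipped with the corpus:

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level edit distance over reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # d[i][j] = edit distance between ref[:i] and hyp[:j]
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[len(ref)][len(hyp)] / max(len(ref), 1)

ref = "jota ka ele doble ele eme"
print(wer(ref, ref))                      # 0.0
print(wer(ref, "jota ka ele doble ele"))  # one deletion over six words
```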
Nganlt/CVEs_10
--- dataset_info: features: - name: text dtype: string splits: - name: train num_bytes: 33169 num_examples: 150 download_size: 12485 dataset_size: 33169 --- # Dataset Card for "CVE-2023-25690_testLLM" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
CyberHarem/hayasaka_mirei_idolmastercinderellagirls
--- license: mit task_categories: - text-to-image tags: - art - not-for-all-audiences size_categories: - n<1K --- # Dataset of hayasaka_mirei/早坂美玲 (THE iDOLM@STER: Cinderella Girls) This is the dataset of hayasaka_mirei/早坂美玲 (THE iDOLM@STER: Cinderella Girls), containing 374 images and their tags. The core tags of this character are `eyepatch, purple_hair, multicolored_hair, brown_eyes, short_hair, red_hair, streaked_hair, fang, hair_between_eyes`, which are pruned in this dataset. Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)). ## List of Packages | Name | Images | Size | Download | Type | Description | |:-----------------|---------:|:-----------|:------------------------------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------| | raw | 374 | 325.07 MiB | [Download](https://huggingface.co/datasets/CyberHarem/hayasaka_mirei_idolmastercinderellagirls/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). | | 800 | 374 | 234.34 MiB | [Download](https://huggingface.co/datasets/CyberHarem/hayasaka_mirei_idolmastercinderellagirls/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. | | stage3-p480-800 | 802 | 461.20 MiB | [Download](https://huggingface.co/datasets/CyberHarem/hayasaka_mirei_idolmastercinderellagirls/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. | | 1200 | 374 | 305.09 MiB | [Download](https://huggingface.co/datasets/CyberHarem/hayasaka_mirei_idolmastercinderellagirls/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. 
| | stage3-p480-1200 | 802 | 584.23 MiB | [Download](https://huggingface.co/datasets/CyberHarem/hayasaka_mirei_idolmastercinderellagirls/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. | ### Load Raw Dataset with Waifuc We provide raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need this, just run the following code ```python import os import zipfile from huggingface_hub import hf_hub_download from waifuc.source import LocalSource # download raw archive file zip_file = hf_hub_download( repo_id='CyberHarem/hayasaka_mirei_idolmastercinderellagirls', repo_type='dataset', filename='dataset-raw.zip', ) # extract files to your directory dataset_dir = 'dataset_dir' os.makedirs(dataset_dir, exist_ok=True) with zipfile.ZipFile(zip_file, 'r') as zf: zf.extractall(dataset_dir) # load the dataset with waifuc source = LocalSource(dataset_dir) for item in source: print(item.image, item.meta['filename'], item.meta['tags']) ``` ## List of Clusters List of tag clustering result, maybe some outfits can be mined here. 
### Raw Text Version | # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags | |----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | 0 | 5 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | 1girl, looking_at_viewer, solo, open_mouth, simple_background, upper_body, white_background, :d, blush, claw_pose, collarbone, green_jacket, heart, long_sleeves, black_shirt, choker, horns | | 1 | 10 | ![](samples/1/clu1-sample0.png) | ![](samples/1/clu1-sample1.png) | ![](samples/1/clu1-sample2.png) | ![](samples/1/clu1-sample3.png) | ![](samples/1/clu1-sample4.png) | 1girl, cat_hood, heart, black_shirt, looking_at_viewer, red_jacket, solo, blush, hood_up, belt_buckle, belt_collar, crescent, open_mouth, simple_background, white_background, black_belt, black_collar, claws, upper_body, short_over_long_sleeves, button_badge, collarbone, skirt, smile | | 2 | 8 | ![](samples/2/clu2-sample0.png) | ![](samples/2/clu2-sample1.png) | ![](samples/2/clu2-sample2.png) | ![](samples/2/clu2-sample3.png) | ![](samples/2/clu2-sample4.png) | 1girl, belt, cat_hood, hoodie, skirt, solo, blush, collar, striped_thighhighs, heart, claws, open_mouth, smile, white_background | | 3 | 12 | ![](samples/3/clu3-sample0.png) | ![](samples/3/clu3-sample1.png) | ![](samples/3/clu3-sample2.png) | ![](samples/3/clu3-sample3.png) | ![](samples/3/clu3-sample4.png) | 1girl, navel, solo, bikini_top_only, midriff, belt, 
claws, horns, skirt, chain, collar, looking_at_viewer, cleavage, hoodie, smile | | 4 | 6 | ![](samples/4/clu4-sample0.png) | ![](samples/4/clu4-sample1.png) | ![](samples/4/clu4-sample2.png) | ![](samples/4/clu4-sample3.png) | ![](samples/4/clu4-sample4.png) | 1girl, fingerless_gloves, horns, looking_at_viewer, nail_polish, solo, belt, navel, open_mouth, skirt, smile, thighhighs, collar, microphone, midriff, chain, headphones | | 5 | 5 | ![](samples/5/clu5-sample0.png) | ![](samples/5/clu5-sample1.png) | ![](samples/5/clu5-sample2.png) | ![](samples/5/clu5-sample3.png) | ![](samples/5/clu5-sample4.png) | blush, looking_at_viewer, navel, open_mouth, solo, 1girl, flat_chest, side-tie_bikini_bottom, simple_background, small_breasts, white_background, white_bikini, black_bikini, black_hair, collarbone, groin, hair_over_one_eye, horns, standing, sweat | | 6 | 14 | ![](samples/6/clu6-sample0.png) | ![](samples/6/clu6-sample1.png) | ![](samples/6/clu6-sample2.png) | ![](samples/6/clu6-sample3.png) | ![](samples/6/clu6-sample4.png) | blush, white_shirt, 1girl, pleated_skirt, solo, long_sleeves, collared_shirt, grey_skirt, simple_background, striped_bow, dress_shirt, heart, looking_at_viewer, white_background, bangs, black_thighhighs, open_mouth, school_uniform, blue_jacket, hood_down, :d, black_jacket, hooded_jacket, open_jacket, backpack, blue_bowtie | | 7 | 13 | ![](samples/7/clu7-sample0.png) | ![](samples/7/clu7-sample1.png) | ![](samples/7/clu7-sample2.png) | ![](samples/7/clu7-sample3.png) | ![](samples/7/clu7-sample4.png) | short_sleeves, looking_at_viewer, collared_shirt, hair_ornament, open_mouth, :d, fake_horns, plaid_skirt, solo_focus, two-tone_hair, 1girl, bangs, layered_skirt, pointing, purple_skirt, red_necktie, thighhighs, v-shaped_eyebrows, black_belt, blush, 2girls, belt_buckle, checkered_clothes | | 8 | 10 | ![](samples/8/clu8-sample0.png) | ![](samples/8/clu8-sample1.png) | ![](samples/8/clu8-sample2.png) | ![](samples/8/clu8-sample3.png) | 
![](samples/8/clu8-sample4.png) | 1girl, solo, looking_at_viewer, blush, cat_ears, open_mouth, shoulder_cutout, wrist_cuffs, black_necktie, garter_straps, hairband, maid_headdress, short_sleeves, waist_apron, white_apron, fake_animal_ears, frilled_apron, heart, two-tone_hair, button_badge, red_dress, shirt, simple_background, smile, striped_thighhighs, white_background | | 9 | 5 | ![](samples/9/clu9-sample0.png) | ![](samples/9/clu9-sample1.png) | ![](samples/9/clu9-sample2.png) | ![](samples/9/clu9-sample3.png) | ![](samples/9/clu9-sample4.png) | halo, maid_headdress, open_mouth, twintails, 1girl, plaid_skirt, wrist_cuffs, :d, angel_wings, looking_at_viewer, simple_background, sleeveless, solo, white_sailor_collar, blush, claw_pose, collarbone, feathered_wings, frills, midriff, multiple_girls, pink_skirt, ribbon, shirt, swimsuit, white_background, white_bow | ### Table Version | # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | looking_at_viewer | solo | open_mouth | simple_background | upper_body | white_background | :d | blush | claw_pose | collarbone | green_jacket | heart | long_sleeves | black_shirt | choker | horns | cat_hood | red_jacket | hood_up | belt_buckle | belt_collar | crescent | black_belt | black_collar | claws | short_over_long_sleeves | button_badge | skirt | smile | belt | hoodie | collar | striped_thighhighs | navel | bikini_top_only | midriff | chain | cleavage | fingerless_gloves | nail_polish | thighhighs | microphone | headphones | flat_chest | side-tie_bikini_bottom | small_breasts | white_bikini | black_bikini | black_hair | groin | hair_over_one_eye | standing | sweat | white_shirt | pleated_skirt | collared_shirt | grey_skirt | striped_bow | dress_shirt | bangs | black_thighhighs | school_uniform | blue_jacket | hood_down | black_jacket | hooded_jacket | open_jacket | backpack | blue_bowtie | short_sleeves | hair_ornament | fake_horns | plaid_skirt | solo_focus | two-tone_hair | layered_skirt | pointing | purple_skirt | 
red_necktie | v-shaped_eyebrows | 2girls | checkered_clothes | cat_ears | shoulder_cutout | wrist_cuffs | black_necktie | garter_straps | hairband | maid_headdress | waist_apron | white_apron | fake_animal_ears | frilled_apron | red_dress | shirt | halo | twintails | angel_wings | sleeveless | white_sailor_collar | feathered_wings | frills | multiple_girls | pink_skirt | ribbon | swimsuit | white_bow | |----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:--------------------|:-------|:-------------|:--------------------|:-------------|:-------------------|:-----|:--------|:------------|:-------------|:---------------|:--------|:---------------|:--------------|:---------|:--------|:-----------|:-------------|:----------|:--------------|:--------------|:-----------|:-------------|:---------------|:--------|:--------------------------|:---------------|:--------|:--------|:-------|:---------|:---------|:---------------------|:--------|:------------------|:----------|:--------|:-----------|:--------------------|:--------------|:-------------|:-------------|:-------------|:-------------|:-------------------------|:----------------|:---------------|:---------------|:-------------|:--------|:--------------------|:-----------|:--------|:--------------|:----------------|:-----------------|:-------------|:--------------|:--------------|:--------|:-------------------|:-----------------|:--------------|:------------|:---------------|:----------------|:--------------|:-----------|:--------------|:----------------|:----------------|:-------------|:--------------|:-------------|:----------------|:----------------|:-----------|:---------------|:--------------|:--------------------|:---------|:--------------------|:-----------|:------------------|:--------------|:----------------|:----------------|:-----------|:-----------------|:---------
-----|:--------------|:-------------------|:----------------|:------------|:--------|:-------|:------------|:--------------|:-------------|:----------------------|:------------------|:---------|:-----------------|:-------------|:---------|:-----------|:------------| | 0 | 5 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | 1 | 10 | ![](samples/1/clu1-sample0.png) | ![](samples/1/clu1-sample1.png) | ![](samples/1/clu1-sample2.png) | ![](samples/1/clu1-sample3.png) | ![](samples/1/clu1-sample4.png) | X | X | X | X | X | X | X | | X | | X | | X | | X | | | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | 2 | 8 | ![](samples/2/clu2-sample0.png) | ![](samples/2/clu2-sample1.png) | ![](samples/2/clu2-sample2.png) | ![](samples/2/clu2-sample3.png) | ![](samples/2/clu2-sample4.png) | X | | X | X | | | X | | X | | | | X | | | | | X | | | | | | | | X | | | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | 3 | 12 | ![](samples/3/clu3-sample0.png) | ![](samples/3/clu3-sample1.png) | ![](samples/3/clu3-sample2.png) | ![](samples/3/clu3-sample3.png) | ![](samples/3/clu3-sample4.png) | X | X | X | | | | | | | | | | | | | | X | | | | | | | | | X | | | X | X | X | X | X | | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | 4 | 
6 | ![](samples/4/clu4-sample0.png) | ![](samples/4/clu4-sample1.png) | ![](samples/4/clu4-sample2.png) | ![](samples/4/clu4-sample3.png) | ![](samples/4/clu4-sample4.png) | X | X | X | X | | | | | | | | | | | | | X | | | | | | | | | | | | X | X | X | | X | | X | | X | X | | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | 5 | 5 | ![](samples/5/clu5-sample0.png) | ![](samples/5/clu5-sample1.png) | ![](samples/5/clu5-sample2.png) | ![](samples/5/clu5-sample3.png) | ![](samples/5/clu5-sample4.png) | X | X | X | X | X | | X | | X | | X | | | | | | X | | | | | | | | | | | | | | | | | | X | | | | | | | | | | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | 6 | 14 | ![](samples/6/clu6-sample0.png) | ![](samples/6/clu6-sample1.png) | ![](samples/6/clu6-sample2.png) | ![](samples/6/clu6-sample3.png) | ![](samples/6/clu6-sample4.png) | X | X | X | X | X | | X | X | X | | | | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | 7 | 13 | ![](samples/7/clu7-sample0.png) | ![](samples/7/clu7-sample1.png) | ![](samples/7/clu7-sample2.png) | ![](samples/7/clu7-sample3.png) | ![](samples/7/clu7-sample4.png) | X | X | | X | | | | X | X | | | | | | | | | | | | X | | | X | | | | | | | | | | | | | | | | | | X | | | | | | | | | | | | | | | X | | | | X | | | | | | | | | | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | 8 | 10 | ![](samples/8/clu8-sample0.png) | ![](samples/8/clu8-sample1.png) | ![](samples/8/clu8-sample2.png) | ![](samples/8/clu8-sample3.png) | ![](samples/8/clu8-sample4.png) | X | X | X | X | X | | X | | X | | | | X | | | | | | | | | | | | | 
| | X | | X | | | | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | X | | | | | X | | | | | | | | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | 9 | 5 | ![](samples/9/clu9-sample0.png) | ![](samples/9/clu9-sample1.png) | ![](samples/9/clu9-sample2.png) | ![](samples/9/clu9-sample3.png) | ![](samples/9/clu9-sample4.png) | X | X | X | X | X | | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | X | | | | | | | | | | | | X | | | | X | | | | | | X | X | X | X | X | X | X | X | X | X | X | X | X |
cariai/somos-alpaca-es
---
dataset_info:
  features:
  - name: text
    dtype: 'null'
  - name: inputs
    struct:
    - name: 1-instruction
      dtype: string
    - name: 2-input
      dtype: string
    - name: 3-output
      dtype: string
  - name: prediction
    dtype: 'null'
  - name: prediction_agent
    dtype: 'null'
  - name: annotation
    dtype: string
  - name: annotation_agent
    dtype: string
  - name: vectors
    struct:
    - name: input
      sequence: float64
    - name: instruction
      sequence: float64
    - name: output
      sequence: float64
  - name: multi_label
    dtype: bool
  - name: explanation
    dtype: 'null'
  - name: id
    dtype: string
  - name: metadata
    dtype: 'null'
  - name: status
    dtype: string
  - name: event_timestamp
    dtype: timestamp[us]
  - name: metrics
    struct:
    - name: text_length
      dtype: int64
  splits:
  - name: train
    num_bytes: 985053979
    num_examples: 52002
  download_size: 655032424
  dataset_size: 985053979
---

# Dataset Card for "somos-alpaca-es"

[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
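The `inputs` struct above carries the instruction/input/output triple under the keys `1-instruction`, `2-input`, and `3-output`. A minimal sketch (the helper is our own, not part of any dataset tooling) for flattening one record into a training prompt and its target:

```python
def to_prompt(record):
    """Flatten one record's `inputs` struct into a (prompt, target) pair.

    Field names follow this card's schema: 1-instruction, 2-input, 3-output.
    """
    ins = record["inputs"]
    parts = [ins["1-instruction"]]
    if ins.get("2-input"):  # the input field may be empty
        parts.append(ins["2-input"])
    return "\n\n".join(parts), ins["3-output"]
```

Records loaded via `datasets.load_dataset("cariai/somos-alpaca-es")` should map onto this shape directly.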
atmallen/quirky_sciq_pythia-410m_alice_easy
---
dataset_info:
  features:
  - name: id
    dtype: string
  - name: choices
    sequence: string
  - name: label
    dtype: int64
  - name: difficulty
    dtype: float64
  - name: statement
    dtype: string
  - name: character
    dtype: string
  - name: alice_label
    dtype: bool
  - name: bob_label
    dtype: bool
  - name: bob_log_odds
    dtype: float64
  splits:
  - name: train
    num_bytes: 3637062.50295402
    num_examples: 5838
  - name: validation
    num_bytes: 304362.045
    num_examples: 494
  - name: test
    num_bytes: 316343.916
    num_examples: 504
  download_size: 1397061
  dataset_size: 4257768.46395402
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: validation
    path: data/validation-*
  - split: test
    path: data/test-*
---
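Each row in this dataset carries both an `alice_label` and a `bob_label` (booleans, per the schema above). A small sketch — the function is ours, for illustration only — for measuring how often the two characters' labels coincide once the rows are in memory:

```python
def agreement_rate(rows):
    """Fraction of rows whose alice_label and bob_label agree."""
    if not rows:
        return 0.0
    agree = sum(1 for r in rows if r["alice_label"] == r["bob_label"])
    return agree / len(rows)
```

Applied to a split loaded with `datasets.load_dataset`, this gives a quick read on how "quirky" the split is.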
CyberHarem/concord_azurlane
---
license: mit
task_categories:
- text-to-image
tags:
- art
- not-for-all-audiences
size_categories:
- n<1K
---

# Dataset of concord/コンコード/康克德 (Azur Lane)

This is the dataset of concord/コンコード/康克德 (Azur Lane), containing 37 images and their tags.

The core tags of this character are `blonde_hair, long_hair, red_eyes, twintails, symbol-shaped_pupils, animal_ears, fake_animal_ears, headphones, animal_ear_headphones, cat_ear_headphones, star-shaped_pupils, hair_between_eyes, breasts, bangs, mole_under_eye, mole, ribbon, hair_ribbon, medium_breasts, fang, very_long_hair, red_ribbon, sidelocks`, which are pruned in this dataset.

Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).

## List of Packages

| Name | Images | Size | Download | Type | Description |
|:-----|-------:|:-----|:---------|:-----|:------------|
| raw | 37 | 43.65 MiB | [Download](https://huggingface.co/datasets/CyberHarem/concord_azurlane/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 37 | 27.49 MiB | [Download](https://huggingface.co/datasets/CyberHarem/concord_azurlane/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 83 | 54.55 MiB | [Download](https://huggingface.co/datasets/CyberHarem/concord_azurlane/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 37 | 39.55 MiB | [Download](https://huggingface.co/datasets/CyberHarem/concord_azurlane/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 83 | 71.93 MiB | [Download](https://huggingface.co/datasets/CyberHarem/concord_azurlane/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |

### Load Raw Dataset with Waifuc

We provide the raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need it, just run the following code:

```python
import os
import zipfile

from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource

# download raw archive file
zip_file = hf_hub_download(
    repo_id='CyberHarem/concord_azurlane',
    repo_type='dataset',
    filename='dataset-raw.zip',
)

# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
    zf.extractall(dataset_dir)

# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
    print(item.image, item.meta['filename'], item.meta['tags'])
```

## List of Clusters

List of tag clustering results; some outfits may be mined from here.
### Raw Text Version | # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags | |----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | 0 | 23 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | 1girl, solo, star_(symbol), blush, long_sleeves, black_thighhighs, hooded_jacket, open_jacket, looking_at_viewer, pleated_skirt, hood_down, lollipop, thighs, white_skirt, no_shoes, open_mouth, feet, neck_ribbon, sitting, holding_food, smile, jingle_bell, neck_bell, shirt, toes | | 1 | 7 | ![](samples/1/clu1-sample0.png) | ![](samples/1/clu1-sample1.png) | ![](samples/1/clu1-sample2.png) | ![](samples/1/clu1-sample3.png) | ![](samples/1/clu1-sample4.png) | 1girl, christmas, solo, star_(symbol), underboob, blush, looking_at_viewer, black_pantyhose, elbow_gloves, large_breasts, tongue_out, white_gloves, bell, cat_ears, fur-trimmed_capelet, red_skirt, santa_costume, thighband_pantyhose, feet, food, fur-trimmed_skirt, no_shoes, smile, suspender_skirt, toes | ### Table Version | # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | solo | star_(symbol) | blush | long_sleeves | black_thighhighs | hooded_jacket | open_jacket | looking_at_viewer | pleated_skirt | hood_down | lollipop | thighs | white_skirt | no_shoes | open_mouth | feet | neck_ribbon | sitting | holding_food | smile | jingle_bell | neck_bell | shirt | toes | christmas | underboob | black_pantyhose | elbow_gloves | large_breasts | tongue_out | white_gloves 
| bell | cat_ears | fur-trimmed_capelet | red_skirt | santa_costume | thighband_pantyhose | food | fur-trimmed_skirt | suspender_skirt | |----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:-------|:----------------|:--------|:---------------|:-------------------|:----------------|:--------------|:--------------------|:----------------|:------------|:-----------|:---------|:--------------|:-----------|:-------------|:-------|:--------------|:----------|:---------------|:--------|:--------------|:------------|:--------|:-------|:------------|:------------|:------------------|:---------------|:----------------|:-------------|:---------------|:-------|:-----------|:----------------------|:------------|:----------------|:----------------------|:-------|:--------------------|:------------------| | 0 | 23 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | 1 | 7 | ![](samples/1/clu1-sample0.png) | ![](samples/1/clu1-sample1.png) | ![](samples/1/clu1-sample2.png) | ![](samples/1/clu1-sample3.png) | ![](samples/1/clu1-sample4.png) | X | X | X | X | | | | | X | | | | | | X | | X | | | | X | | | | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X |