datasetId stringlengths 2 117 | card stringlengths 19 1.01M |
|---|---|
CyberHarem/m99_girlsfrontline | ---
license: mit
task_categories:
- text-to-image
tags:
- art
- not-for-all-audiences
size_categories:
- n<1K
---
# Dataset of m99/M99/M99 (Girls' Frontline)
This is the dataset of m99/M99/M99 (Girls' Frontline), containing 81 images and their tags.
The core tags of this character are `bow, red_bow, bangs, brown_hair, hair_bow, purple_eyes, short_hair, black_hair`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:--------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 81 | 95.83 MiB | [Download](https://huggingface.co/datasets/CyberHarem/m99_girlsfrontline/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 81 | 55.17 MiB | [Download](https://huggingface.co/datasets/CyberHarem/m99_girlsfrontline/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 207 | 123.43 MiB | [Download](https://huggingface.co/datasets/CyberHarem/m99_girlsfrontline/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 81 | 83.50 MiB | [Download](https://huggingface.co/datasets/CyberHarem/m99_girlsfrontline/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 207 | 168.77 MiB | [Download](https://huggingface.co/datasets/CyberHarem/m99_girlsfrontline/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need this, just run the following code
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/m99_girlsfrontline',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
## List of Clusters
List of tag clustering result, maybe some outfits can be mined here.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 9 |  |  |  |  |  | 1girl, full_body, shoes, solo, dress, fur_trim, looking_at_viewer, simple_background, blush, official_alternate_costume, sniper_rifle, striped, white_background, long_sleeves, object_namesake, hood, ribbon, standing, stuffed_animal, stuffed_bunny, white_thighhighs, coat, holding_gun, white_footwear |
| 1 | 7 |  |  |  |  |  | 1girl, hair_ornament, open_mouth, solo, looking_at_viewer, rabbit, sniper_rifle, bike_shorts, blush, grenade, ribbon, scope, two-sided_skirt, detached_sleeves, simple_background, white_background, black_shorts, fang, holding_weapon, shorts_under_skirt, star_(symbol) |
| 2 | 11 |  |  |  |  |  | 1girl, blush, frilled_skirt, looking_at_viewer, red_necktie, red_skirt, sleeveless_shirt, solo, white_shirt, bare_shoulders, collared_shirt, detached_sleeves, pleated_skirt, star_print, hair_ornament, rabbit, sniper_rifle, white_background, bike_shorts, black_shorts, grenade, holding_gun, open_mouth, short_shorts, shorts_under_skirt, socks, wings, :d, black_footwear, boots, long_sleeves, scope, standing |
| 3 | 19 |  |  |  |  |  | 1girl, blush, rabbit_ears, solo, animal_hat, hat_bow, official_alternate_costume, white_headwear, bunny_hat, looking_at_viewer, puffy_short_sleeves, twin_braids, white_thighhighs, striped_bow, short_over_long_sleeves, cellphone, long_hair, white_shirt, black_skirt, full_body, open_mouth, vertical-striped_thighhighs, black_footwear, holding_phone, low_twintails, shoes, smile, white_background, animal_bag, ribbon, sitting, underwear, animal_on_head, black_bow, pleated_skirt, simple_background, white_dress |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | full_body | shoes | solo | dress | fur_trim | looking_at_viewer | simple_background | blush | official_alternate_costume | sniper_rifle | striped | white_background | long_sleeves | object_namesake | hood | ribbon | standing | stuffed_animal | stuffed_bunny | white_thighhighs | coat | holding_gun | white_footwear | hair_ornament | open_mouth | rabbit | bike_shorts | grenade | scope | two-sided_skirt | detached_sleeves | black_shorts | fang | holding_weapon | shorts_under_skirt | star_(symbol) | frilled_skirt | red_necktie | red_skirt | sleeveless_shirt | white_shirt | bare_shoulders | collared_shirt | pleated_skirt | star_print | short_shorts | socks | wings | :d | black_footwear | boots | rabbit_ears | animal_hat | hat_bow | white_headwear | bunny_hat | puffy_short_sleeves | twin_braids | striped_bow | short_over_long_sleeves | cellphone | long_hair | black_skirt | vertical-striped_thighhighs | holding_phone | low_twintails | smile | animal_bag | sitting | underwear | animal_on_head | black_bow | white_dress |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:------------|:--------|:-------|:--------|:-----------|:--------------------|:--------------------|:--------|:-----------------------------|:---------------|:----------|:-------------------|:---------------|:------------------|:-------|:---------|:-----------|:-----------------|:----------------|:-------------------|:-------|:--------------|:-----------------|:----------------|:-------------|:---------|:--------------|:----------|:--------|:------------------|:-------------------|:---------------|:-------|:-----------------|:---------------------|:----------------|:----------------|:--------------|:------------|:-------------------|:--------------|:-----------------|:-----------------|:----------------|:-------------|:---------------|:--------|:--------|:-----|:-----------------|:--------|:--------------|:-------------|:----------|:-----------------|:------------|:----------------------|:--------------|:--------------|:--------------------------|:------------|:------------|:--------------|:------------------------------|:----------------|:----------------|:--------|:-------------|:----------|:------------|:-----------------|:------------|:--------------|
| 0 | 9 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 1 | 7 |  |  |  |  |  | X | | | X | | | X | X | X | | X | | X | | | | X | | | | | | | | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 2 | 11 |  |  |  |  |  | X | | | X | | | X | | X | | X | | X | X | | | | X | | | | | X | | X | X | X | X | X | X | | X | X | | | X | | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | |
| 3 | 19 |  |  |  |  |  | X | X | X | X | | | X | X | X | X | | | X | | | | X | | | | X | | | | | X | | | | | | | | | | | | | | | | X | | | X | | | | | | X | | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X |
|
maximedb/massive_generated | ---
license: mit
---
|
TrainingDataPro/cut-2d-masks-presentation-attack-detection | ---
language:
- en
license: cc-by-nc-nd-4.0
task_categories:
- video-classification
tags:
- finance
- legal
- code
dataset_info:
features:
- name: link
dtype: string
- name: type
dtype: string
splits:
- name: train
num_bytes: 1452
num_examples: 48
download_size: 737352851
dataset_size: 1452
---
# Cut 2D Masks Presentation Attack Detection
The dataset consists of videos of individuals wearing printed 2D masks with cut-out holes for eyes, noses and mouths. Videos are filmed in different lightning conditions and in different places (*indoors, outdoors*), a person moves his/her head left, right, up and down. Each video in the dataset has an approximate duration of 7 seconds.
### Types of videos in the dataset:
- **2d_mask** - videos of the person wearing a printed 2D mask with cut-out holes for eyes.
- **cut_mask** - videos of the person wearing a printed 2D mask with cut-out holes for eyes, mouth and nose. All videos represent masks with holes for *eyes*, in some videos holes for both *mouth and nose* are made, in others only for *mouth or nose*.
.png?generation=1690468363734380&alt=media)
People in the dataset wear different accessorieses, such as *glasses, caps, scarfs, hats and masks*. Most of them are worn over a mask, however *glasses and masks* can be are also printed on the mask itself.
.png?generation=1690468790515642&alt=media)
The dataset serves as a valuable resource for computer vision, anti-spoofing tasks, video analysis, and security systems. It allows for the development of algorithms and models that can effectively detect attacks perpetrated by individuals wearing printed 2D masks.
Studying the dataset may lead to the development of improved security systems, surveillance technologies, and solutions to mitigate the risks associated with masked individuals carrying out attacks.
# Get the dataset
### This is just an example of the data
Leave a request on [**https://trainingdata.pro/data-market**](https://trainingdata.pro/data-market?utm_source=huggingface&utm_medium=cpc&utm_campaign=cut-2d-masks-presentation-attack-detection) to discuss your requirements, learn about the price and buy the dataset.
# Content
### The dataset contains of two folders:
- **2d_masks** contains videos of the person wearing a printed 2D mask with cut-out holes for eyes.
- **cut_masks** includes videos of the person wearing a printed 2D mask with cut-out holes for eyes, mouth and nose.
### File with the extension .csv
- **link**: link to access the video,
- **type**: type of the attack: *with printed 2D mask with cut-out holes for eyes* OR *with printed 2D mask with cut-out holes for eyes, mouth and nose*
# Attacks might be collected in accordance with your requirements.
## [**TrainingData**](https://trainingdata.pro/data-market?utm_source=huggingface&utm_medium=cpc&utm_campaign=cut-2d-masks-presentation-attack-detection) provides high-quality data annotation tailored to your needs
More datasets in TrainingData's Kaggle account: **https://www.kaggle.com/trainingdatapro/datasets**
TrainingData's GitHub: **https://github.com/Trainingdata-datamarket/TrainingData_All_datasets** |
Databasesprojec/FinStmts_ConsUncons_French_SeqClass | ---
dataset_info:
features:
- name: language
dtype: string
- name: label
dtype:
class_label:
names:
'0': uncons
'1': cons
- name: id
dtype: string
- name: text
dtype: string
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 1062243685
num_examples: 3563
- name: validation
num_bytes: 224413703
num_examples: 764
- name: test
num_bytes: 229176827
num_examples: 763
download_size: 725157982
dataset_size: 1515834215
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
---
|
michaelyhuang23/autodiagram2 | ---
dataset_info:
features:
- name: images
dtype: image
- name: tex
dtype: string
splits:
- name: train
num_bytes: 404368.0
num_examples: 8
- name: validation
num_bytes: 103821.0
num_examples: 2
download_size: 222519
dataset_size: 508189.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
---
|
Falah/Weather_fake_dataset | ---
dataset_info:
features:
- name: date-time
dtype: string
- name: temperature
dtype: float64
- name: humidity
dtype: float64
- name: wind_speed
dtype: float64
- name: precipitation
dtype: float64
- name: Pressure (hPa)
dtype: float64
- name: Visibility (km)
dtype: float64
splits:
- name: train
num_bytes: 259150
num_examples: 3650
download_size: 233347
dataset_size: 259150
---
# Weather Fake Dataset
## Overview
The Falah/Weather_fake_dataset is a synthetic weather dataset created for educational and demonstration purposes. It provides synthetic weather data for a period of 10 years, with one data point per day. The dataset includes various weather attributes such as date/time, temperature, humidity, wind speed, precipitation, pressure (hPa), and visibility (km). This dataset is intended to be used in machine learning applications, particularly for weather forecasting models.
## Dataset Information
- **Name**: Falah/Weather_fake_dataset
- **Download Size**: 233 KB
- **Dataset Size**: 259 KB
- **Number of Data Points**: 3650
- **Date/Time Range**: January 1, 2013, to December 31, 2022
## Dataset Usage
The dataset can be accessed and loaded using the Hugging Face `datasets` library in Python. Here's an example code snippet to load the dataset and print the first data point:
```python
from datasets import load_dataset
dataset = load_dataset("Falah/Weather_fake_dataset")
print(dataset['train'][0])
```
## Data Fields
The dataset contains the following fields:
- `date-time`: String representing the date and time in the format 'YYYY-MM-DD HH:MM:SS'.
- `temperature`: Floating-point number representing the temperature in Celsius.
- `humidity`: Floating-point number representing the humidity level in percentage.
- `wind_speed`: Floating-point number representing the wind speed in kilometers per hour.
- `precipitation`: Floating-point number representing the amount of precipitation in millimeters.
- `Pressure (hPa)`: Floating-point number representing the atmospheric pressure in hPa.
- `Visibility (km)`: Floating-point number representing the visibility in kilometers.
## Citation
If you use this dataset in your research or applications, please cite it as:
```
@dataset{falah_weather_fake_dataset,
author = {Falah.G.Salieh},
title = {Weather_fake_dataset},
year = {2023},
publisher = {Hugging Face},
url = {https://huggingface.co/datasets/Falah/Weather_fake_dataset},
}
```
## License
This dataset is provided under the [Creative Commons Zero v1.0 Universal](https://creativecommons.org/publicdomain/zero/1.0/) license, which means it is dedicated to the public domain and free for any use without attribution.
## Disclaimer
This dataset is entirely synthetic and should not be used for any real-world applications or critical decision-making processes. It is not based on real weather data and is intended for educational purposes only.
For more information about the dataset or to access the data, please visit the dataset page on Hugging Face: https://huggingface.co/datasets/Falah/Weather_fake_dataset
```
|
open-llm-leaderboard/details_lmsys__vicuna-13b-delta-v1.1 | ---
pretty_name: Evaluation run of lmsys/vicuna-13b-delta-v1.1
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [lmsys/vicuna-13b-delta-v1.1](https://huggingface.co/lmsys/vicuna-13b-delta-v1.1)\
\ on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 64 configuration, each one coresponding to one of the\
\ evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
\ timestamp of the run.The \"train\" split is always pointing to the latest results.\n\
\nAn additional configuration \"results\" store all the aggregated results of the\
\ run (and is used to compute and display the agregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_lmsys__vicuna-13b-delta-v1.1\"\
,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\
These are the [latest results from run 2023-10-15T01:08:28.520609](https://huggingface.co/datasets/open-llm-leaderboard/details_lmsys__vicuna-13b-delta-v1.1/blob/main/results_2023-10-15T01-08-28.520609.json)(note\
\ that their might be results for other tasks in the repos if successive evals didn't\
\ cover the same tasks. You find each in the results and the \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.029677013422818792,\n\
\ \"em_stderr\": 0.0017378324714143493,\n \"f1\": 0.09310612416107406,\n\
\ \"f1_stderr\": 0.002167792401176146,\n \"acc\": 0.4141695683211732,\n\
\ \"acc_stderr\": 0.010019161585538096\n },\n \"harness|drop|3\": {\n\
\ \"em\": 0.029677013422818792,\n \"em_stderr\": 0.0017378324714143493,\n\
\ \"f1\": 0.09310612416107406,\n \"f1_stderr\": 0.002167792401176146\n\
\ },\n \"harness|gsm8k|5\": {\n \"acc\": 0.08642911296436695,\n \
\ \"acc_stderr\": 0.00774004433710381\n },\n \"harness|winogrande|5\"\
: {\n \"acc\": 0.7419100236779794,\n \"acc_stderr\": 0.012298278833972384\n\
\ }\n}\n```"
repo_url: https://huggingface.co/lmsys/vicuna-13b-delta-v1.1
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_arc_challenge_25
data_files:
- split: 2023_08_09T16_35_51.471732
path:
- '**/details_harness|arc:challenge|25_2023-08-09T16:35:51.471732.parquet'
- split: latest
path:
- '**/details_harness|arc:challenge|25_2023-08-09T16:35:51.471732.parquet'
- config_name: harness_drop_3
data_files:
- split: 2023_10_15T01_08_28.520609
path:
- '**/details_harness|drop|3_2023-10-15T01-08-28.520609.parquet'
- split: latest
path:
- '**/details_harness|drop|3_2023-10-15T01-08-28.520609.parquet'
- config_name: harness_gsm8k_5
data_files:
- split: 2023_10_15T01_08_28.520609
path:
- '**/details_harness|gsm8k|5_2023-10-15T01-08-28.520609.parquet'
- split: latest
path:
- '**/details_harness|gsm8k|5_2023-10-15T01-08-28.520609.parquet'
- config_name: harness_hellaswag_10
data_files:
- split: 2023_08_09T16_35_51.471732
path:
- '**/details_harness|hellaswag|10_2023-08-09T16:35:51.471732.parquet'
- split: latest
path:
- '**/details_harness|hellaswag|10_2023-08-09T16:35:51.471732.parquet'
- config_name: harness_hendrycksTest_5
data_files:
- split: 2023_08_09T16_35_51.471732
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-08-09T16:35:51.471732.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2023-08-09T16:35:51.471732.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2023-08-09T16:35:51.471732.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2023-08-09T16:35:51.471732.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-08-09T16:35:51.471732.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2023-08-09T16:35:51.471732.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-08-09T16:35:51.471732.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-08-09T16:35:51.471732.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-08-09T16:35:51.471732.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2023-08-09T16:35:51.471732.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2023-08-09T16:35:51.471732.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2023-08-09T16:35:51.471732.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-08-09T16:35:51.471732.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2023-08-09T16:35:51.471732.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-08-09T16:35:51.471732.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-08-09T16:35:51.471732.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2023-08-09T16:35:51.471732.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2023-08-09T16:35:51.471732.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-08-09T16:35:51.471732.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-08-09T16:35:51.471732.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-08-09T16:35:51.471732.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-08-09T16:35:51.471732.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-08-09T16:35:51.471732.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-08-09T16:35:51.471732.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-08-09T16:35:51.471732.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-08-09T16:35:51.471732.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-08-09T16:35:51.471732.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-08-09T16:35:51.471732.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-08-09T16:35:51.471732.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-08-09T16:35:51.471732.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-08-09T16:35:51.471732.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-08-09T16:35:51.471732.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2023-08-09T16:35:51.471732.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-08-09T16:35:51.471732.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2023-08-09T16:35:51.471732.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-08-09T16:35:51.471732.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-08-09T16:35:51.471732.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2023-08-09T16:35:51.471732.parquet'
- '**/details_harness|hendrycksTest-management|5_2023-08-09T16:35:51.471732.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2023-08-09T16:35:51.471732.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-08-09T16:35:51.471732.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-08-09T16:35:51.471732.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-08-09T16:35:51.471732.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-08-09T16:35:51.471732.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2023-08-09T16:35:51.471732.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2023-08-09T16:35:51.471732.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2023-08-09T16:35:51.471732.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-08-09T16:35:51.471732.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2023-08-09T16:35:51.471732.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-08-09T16:35:51.471732.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-08-09T16:35:51.471732.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2023-08-09T16:35:51.471732.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2023-08-09T16:35:51.471732.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2023-08-09T16:35:51.471732.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-08-09T16:35:51.471732.parquet'
- '**/details_harness|hendrycksTest-virology|5_2023-08-09T16:35:51.471732.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2023-08-09T16:35:51.471732.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-08-09T16:35:51.471732.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2023-08-09T16:35:51.471732.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2023-08-09T16:35:51.471732.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2023-08-09T16:35:51.471732.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-08-09T16:35:51.471732.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2023-08-09T16:35:51.471732.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-08-09T16:35:51.471732.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-08-09T16:35:51.471732.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-08-09T16:35:51.471732.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2023-08-09T16:35:51.471732.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2023-08-09T16:35:51.471732.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2023-08-09T16:35:51.471732.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-08-09T16:35:51.471732.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2023-08-09T16:35:51.471732.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-08-09T16:35:51.471732.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-08-09T16:35:51.471732.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2023-08-09T16:35:51.471732.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2023-08-09T16:35:51.471732.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-08-09T16:35:51.471732.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-08-09T16:35:51.471732.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-08-09T16:35:51.471732.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-08-09T16:35:51.471732.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-08-09T16:35:51.471732.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-08-09T16:35:51.471732.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-08-09T16:35:51.471732.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-08-09T16:35:51.471732.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-08-09T16:35:51.471732.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-08-09T16:35:51.471732.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-08-09T16:35:51.471732.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-08-09T16:35:51.471732.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-08-09T16:35:51.471732.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-08-09T16:35:51.471732.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2023-08-09T16:35:51.471732.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-08-09T16:35:51.471732.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2023-08-09T16:35:51.471732.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-08-09T16:35:51.471732.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-08-09T16:35:51.471732.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2023-08-09T16:35:51.471732.parquet'
- '**/details_harness|hendrycksTest-management|5_2023-08-09T16:35:51.471732.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2023-08-09T16:35:51.471732.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-08-09T16:35:51.471732.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-08-09T16:35:51.471732.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-08-09T16:35:51.471732.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-08-09T16:35:51.471732.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2023-08-09T16:35:51.471732.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2023-08-09T16:35:51.471732.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2023-08-09T16:35:51.471732.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-08-09T16:35:51.471732.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2023-08-09T16:35:51.471732.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-08-09T16:35:51.471732.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-08-09T16:35:51.471732.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2023-08-09T16:35:51.471732.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2023-08-09T16:35:51.471732.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2023-08-09T16:35:51.471732.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-08-09T16:35:51.471732.parquet'
- '**/details_harness|hendrycksTest-virology|5_2023-08-09T16:35:51.471732.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2023-08-09T16:35:51.471732.parquet'
- config_name: harness_hendrycksTest_abstract_algebra_5
data_files:
- split: 2023_08_09T16_35_51.471732
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-08-09T16:35:51.471732.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-08-09T16:35:51.471732.parquet'
- config_name: harness_hendrycksTest_anatomy_5
data_files:
- split: 2023_08_09T16_35_51.471732
path:
- '**/details_harness|hendrycksTest-anatomy|5_2023-08-09T16:35:51.471732.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-anatomy|5_2023-08-09T16:35:51.471732.parquet'
- config_name: harness_hendrycksTest_astronomy_5
data_files:
- split: 2023_08_09T16_35_51.471732
path:
- '**/details_harness|hendrycksTest-astronomy|5_2023-08-09T16:35:51.471732.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-astronomy|5_2023-08-09T16:35:51.471732.parquet'
- config_name: harness_hendrycksTest_business_ethics_5
data_files:
- split: 2023_08_09T16_35_51.471732
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2023-08-09T16:35:51.471732.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2023-08-09T16:35:51.471732.parquet'
- config_name: harness_hendrycksTest_clinical_knowledge_5
data_files:
- split: 2023_08_09T16_35_51.471732
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-08-09T16:35:51.471732.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-08-09T16:35:51.471732.parquet'
- config_name: harness_hendrycksTest_college_biology_5
data_files:
- split: 2023_08_09T16_35_51.471732
path:
- '**/details_harness|hendrycksTest-college_biology|5_2023-08-09T16:35:51.471732.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_biology|5_2023-08-09T16:35:51.471732.parquet'
- config_name: harness_hendrycksTest_college_chemistry_5
data_files:
- split: 2023_08_09T16_35_51.471732
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-08-09T16:35:51.471732.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-08-09T16:35:51.471732.parquet'
- config_name: harness_hendrycksTest_college_computer_science_5
data_files:
- split: 2023_08_09T16_35_51.471732
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-08-09T16:35:51.471732.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-08-09T16:35:51.471732.parquet'
- config_name: harness_hendrycksTest_college_mathematics_5
data_files:
- split: 2023_08_09T16_35_51.471732
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-08-09T16:35:51.471732.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-08-09T16:35:51.471732.parquet'
- config_name: harness_hendrycksTest_college_medicine_5
data_files:
- split: 2023_08_09T16_35_51.471732
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2023-08-09T16:35:51.471732.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2023-08-09T16:35:51.471732.parquet'
- config_name: harness_hendrycksTest_college_physics_5
data_files:
- split: 2023_08_09T16_35_51.471732
path:
- '**/details_harness|hendrycksTest-college_physics|5_2023-08-09T16:35:51.471732.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_physics|5_2023-08-09T16:35:51.471732.parquet'
- config_name: harness_hendrycksTest_computer_security_5
data_files:
- split: 2023_08_09T16_35_51.471732
path:
- '**/details_harness|hendrycksTest-computer_security|5_2023-08-09T16:35:51.471732.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-computer_security|5_2023-08-09T16:35:51.471732.parquet'
- config_name: harness_hendrycksTest_conceptual_physics_5
data_files:
- split: 2023_08_09T16_35_51.471732
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-08-09T16:35:51.471732.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-08-09T16:35:51.471732.parquet'
- config_name: harness_hendrycksTest_econometrics_5
data_files:
- split: 2023_08_09T16_35_51.471732
path:
- '**/details_harness|hendrycksTest-econometrics|5_2023-08-09T16:35:51.471732.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-econometrics|5_2023-08-09T16:35:51.471732.parquet'
- config_name: harness_hendrycksTest_electrical_engineering_5
data_files:
- split: 2023_08_09T16_35_51.471732
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-08-09T16:35:51.471732.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-08-09T16:35:51.471732.parquet'
- config_name: harness_hendrycksTest_elementary_mathematics_5
data_files:
- split: 2023_08_09T16_35_51.471732
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-08-09T16:35:51.471732.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-08-09T16:35:51.471732.parquet'
- config_name: harness_hendrycksTest_formal_logic_5
data_files:
- split: 2023_08_09T16_35_51.471732
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2023-08-09T16:35:51.471732.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2023-08-09T16:35:51.471732.parquet'
- config_name: harness_hendrycksTest_global_facts_5
data_files:
- split: 2023_08_09T16_35_51.471732
path:
- '**/details_harness|hendrycksTest-global_facts|5_2023-08-09T16:35:51.471732.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-global_facts|5_2023-08-09T16:35:51.471732.parquet'
- config_name: harness_hendrycksTest_high_school_biology_5
data_files:
- split: 2023_08_09T16_35_51.471732
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-08-09T16:35:51.471732.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-08-09T16:35:51.471732.parquet'
- config_name: harness_hendrycksTest_high_school_chemistry_5
data_files:
- split: 2023_08_09T16_35_51.471732
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-08-09T16:35:51.471732.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-08-09T16:35:51.471732.parquet'
- config_name: harness_hendrycksTest_high_school_computer_science_5
data_files:
- split: 2023_08_09T16_35_51.471732
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-08-09T16:35:51.471732.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-08-09T16:35:51.471732.parquet'
- config_name: harness_hendrycksTest_high_school_european_history_5
data_files:
- split: 2023_08_09T16_35_51.471732
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-08-09T16:35:51.471732.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-08-09T16:35:51.471732.parquet'
- config_name: harness_hendrycksTest_high_school_geography_5
data_files:
- split: 2023_08_09T16_35_51.471732
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-08-09T16:35:51.471732.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-08-09T16:35:51.471732.parquet'
- config_name: harness_hendrycksTest_high_school_government_and_politics_5
data_files:
- split: 2023_08_09T16_35_51.471732
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-08-09T16:35:51.471732.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-08-09T16:35:51.471732.parquet'
- config_name: harness_hendrycksTest_high_school_macroeconomics_5
data_files:
- split: 2023_08_09T16_35_51.471732
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-08-09T16:35:51.471732.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-08-09T16:35:51.471732.parquet'
- config_name: harness_hendrycksTest_high_school_mathematics_5
data_files:
- split: 2023_08_09T16_35_51.471732
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-08-09T16:35:51.471732.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-08-09T16:35:51.471732.parquet'
- config_name: harness_hendrycksTest_high_school_microeconomics_5
data_files:
- split: 2023_08_09T16_35_51.471732
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-08-09T16:35:51.471732.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-08-09T16:35:51.471732.parquet'
- config_name: harness_hendrycksTest_high_school_physics_5
data_files:
- split: 2023_08_09T16_35_51.471732
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-08-09T16:35:51.471732.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-08-09T16:35:51.471732.parquet'
- config_name: harness_hendrycksTest_high_school_psychology_5
data_files:
- split: 2023_08_09T16_35_51.471732
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-08-09T16:35:51.471732.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-08-09T16:35:51.471732.parquet'
- config_name: harness_hendrycksTest_high_school_statistics_5
data_files:
- split: 2023_08_09T16_35_51.471732
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-08-09T16:35:51.471732.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-08-09T16:35:51.471732.parquet'
- config_name: harness_hendrycksTest_high_school_us_history_5
data_files:
- split: 2023_08_09T16_35_51.471732
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-08-09T16:35:51.471732.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-08-09T16:35:51.471732.parquet'
- config_name: harness_hendrycksTest_high_school_world_history_5
data_files:
- split: 2023_08_09T16_35_51.471732
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-08-09T16:35:51.471732.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-08-09T16:35:51.471732.parquet'
- config_name: harness_hendrycksTest_human_aging_5
data_files:
- split: 2023_08_09T16_35_51.471732
path:
- '**/details_harness|hendrycksTest-human_aging|5_2023-08-09T16:35:51.471732.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_aging|5_2023-08-09T16:35:51.471732.parquet'
- config_name: harness_hendrycksTest_human_sexuality_5
data_files:
- split: 2023_08_09T16_35_51.471732
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-08-09T16:35:51.471732.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-08-09T16:35:51.471732.parquet'
- config_name: harness_hendrycksTest_international_law_5
data_files:
- split: 2023_08_09T16_35_51.471732
path:
- '**/details_harness|hendrycksTest-international_law|5_2023-08-09T16:35:51.471732.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-international_law|5_2023-08-09T16:35:51.471732.parquet'
- config_name: harness_hendrycksTest_jurisprudence_5
data_files:
- split: 2023_08_09T16_35_51.471732
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-08-09T16:35:51.471732.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-08-09T16:35:51.471732.parquet'
- config_name: harness_hendrycksTest_logical_fallacies_5
data_files:
- split: 2023_08_09T16_35_51.471732
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-08-09T16:35:51.471732.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-08-09T16:35:51.471732.parquet'
- config_name: harness_hendrycksTest_machine_learning_5
data_files:
- split: 2023_08_09T16_35_51.471732
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2023-08-09T16:35:51.471732.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2023-08-09T16:35:51.471732.parquet'
- config_name: harness_hendrycksTest_management_5
data_files:
- split: 2023_08_09T16_35_51.471732
path:
- '**/details_harness|hendrycksTest-management|5_2023-08-09T16:35:51.471732.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-management|5_2023-08-09T16:35:51.471732.parquet'
- config_name: harness_hendrycksTest_marketing_5
data_files:
- split: 2023_08_09T16_35_51.471732
path:
- '**/details_harness|hendrycksTest-marketing|5_2023-08-09T16:35:51.471732.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-marketing|5_2023-08-09T16:35:51.471732.parquet'
- config_name: harness_hendrycksTest_medical_genetics_5
data_files:
- split: 2023_08_09T16_35_51.471732
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-08-09T16:35:51.471732.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-08-09T16:35:51.471732.parquet'
- config_name: harness_hendrycksTest_miscellaneous_5
data_files:
- split: 2023_08_09T16_35_51.471732
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-08-09T16:35:51.471732.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-08-09T16:35:51.471732.parquet'
- config_name: harness_hendrycksTest_moral_disputes_5
data_files:
- split: 2023_08_09T16_35_51.471732
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-08-09T16:35:51.471732.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-08-09T16:35:51.471732.parquet'
- config_name: harness_hendrycksTest_moral_scenarios_5
data_files:
- split: 2023_08_09T16_35_51.471732
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-08-09T16:35:51.471732.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-08-09T16:35:51.471732.parquet'
- config_name: harness_hendrycksTest_nutrition_5
data_files:
- split: 2023_08_09T16_35_51.471732
path:
- '**/details_harness|hendrycksTest-nutrition|5_2023-08-09T16:35:51.471732.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-nutrition|5_2023-08-09T16:35:51.471732.parquet'
- config_name: harness_hendrycksTest_philosophy_5
data_files:
- split: 2023_08_09T16_35_51.471732
path:
- '**/details_harness|hendrycksTest-philosophy|5_2023-08-09T16:35:51.471732.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-philosophy|5_2023-08-09T16:35:51.471732.parquet'
- config_name: harness_hendrycksTest_prehistory_5
data_files:
- split: 2023_08_09T16_35_51.471732
path:
- '**/details_harness|hendrycksTest-prehistory|5_2023-08-09T16:35:51.471732.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-prehistory|5_2023-08-09T16:35:51.471732.parquet'
- config_name: harness_hendrycksTest_professional_accounting_5
data_files:
- split: 2023_08_09T16_35_51.471732
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-08-09T16:35:51.471732.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-08-09T16:35:51.471732.parquet'
- config_name: harness_hendrycksTest_professional_law_5
data_files:
- split: 2023_08_09T16_35_51.471732
path:
- '**/details_harness|hendrycksTest-professional_law|5_2023-08-09T16:35:51.471732.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_law|5_2023-08-09T16:35:51.471732.parquet'
- config_name: harness_hendrycksTest_professional_medicine_5
data_files:
- split: 2023_08_09T16_35_51.471732
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-08-09T16:35:51.471732.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-08-09T16:35:51.471732.parquet'
- config_name: harness_hendrycksTest_professional_psychology_5
data_files:
- split: 2023_08_09T16_35_51.471732
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-08-09T16:35:51.471732.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-08-09T16:35:51.471732.parquet'
- config_name: harness_hendrycksTest_public_relations_5
data_files:
- split: 2023_08_09T16_35_51.471732
path:
- '**/details_harness|hendrycksTest-public_relations|5_2023-08-09T16:35:51.471732.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-public_relations|5_2023-08-09T16:35:51.471732.parquet'
- config_name: harness_hendrycksTest_security_studies_5
data_files:
- split: 2023_08_09T16_35_51.471732
path:
- '**/details_harness|hendrycksTest-security_studies|5_2023-08-09T16:35:51.471732.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-security_studies|5_2023-08-09T16:35:51.471732.parquet'
- config_name: harness_hendrycksTest_sociology_5
data_files:
- split: 2023_08_09T16_35_51.471732
path:
- '**/details_harness|hendrycksTest-sociology|5_2023-08-09T16:35:51.471732.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-sociology|5_2023-08-09T16:35:51.471732.parquet'
- config_name: harness_hendrycksTest_us_foreign_policy_5
data_files:
- split: 2023_08_09T16_35_51.471732
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-08-09T16:35:51.471732.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-08-09T16:35:51.471732.parquet'
- config_name: harness_hendrycksTest_virology_5
data_files:
- split: 2023_08_09T16_35_51.471732
path:
- '**/details_harness|hendrycksTest-virology|5_2023-08-09T16:35:51.471732.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-virology|5_2023-08-09T16:35:51.471732.parquet'
- config_name: harness_hendrycksTest_world_religions_5
data_files:
- split: 2023_08_09T16_35_51.471732
path:
- '**/details_harness|hendrycksTest-world_religions|5_2023-08-09T16:35:51.471732.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-world_religions|5_2023-08-09T16:35:51.471732.parquet'
- config_name: harness_truthfulqa_mc_0
data_files:
- split: 2023_08_09T16_35_51.471732
path:
- '**/details_harness|truthfulqa:mc|0_2023-08-09T16:35:51.471732.parquet'
- split: latest
path:
- '**/details_harness|truthfulqa:mc|0_2023-08-09T16:35:51.471732.parquet'
- config_name: harness_winogrande_5
data_files:
- split: 2023_10_15T01_08_28.520609
path:
- '**/details_harness|winogrande|5_2023-10-15T01-08-28.520609.parquet'
- split: latest
path:
- '**/details_harness|winogrande|5_2023-10-15T01-08-28.520609.parquet'
- config_name: results
data_files:
- split: 2023_08_09T16_35_51.471732
path:
- results_2023-08-09T16:35:51.471732.parquet
- split: 2023_10_15T01_08_28.520609
path:
- results_2023-10-15T01-08-28.520609.parquet
- split: latest
path:
- results_2023-10-15T01-08-28.520609.parquet
---
# Dataset Card for Evaluation run of lmsys/vicuna-13b-delta-v1.1
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/lmsys/vicuna-13b-delta-v1.1
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** clementine@hf.co
### Dataset Summary
Dataset automatically created during the evaluation run of model [lmsys/vicuna-13b-delta-v1.1](https://huggingface.co/lmsys/vicuna-13b-delta-v1.1) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 2 runs. Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "latest" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_lmsys__vicuna-13b-delta-v1.1",
"harness_winogrande_5",
	split="latest")
```
## Latest results
These are the [latest results from run 2023-10-15T01:08:28.520609](https://huggingface.co/datasets/open-llm-leaderboard/details_lmsys__vicuna-13b-delta-v1.1/blob/main/results_2023-10-15T01-08-28.520609.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks; you can find each one in the results and the "latest" split of each eval):
```python
{
"all": {
"em": 0.029677013422818792,
"em_stderr": 0.0017378324714143493,
"f1": 0.09310612416107406,
"f1_stderr": 0.002167792401176146,
"acc": 0.4141695683211732,
"acc_stderr": 0.010019161585538096
},
"harness|drop|3": {
"em": 0.029677013422818792,
"em_stderr": 0.0017378324714143493,
"f1": 0.09310612416107406,
"f1_stderr": 0.002167792401176146
},
"harness|gsm8k|5": {
"acc": 0.08642911296436695,
"acc_stderr": 0.00774004433710381
},
"harness|winogrande|5": {
"acc": 0.7419100236779794,
"acc_stderr": 0.012298278833972384
}
}
```
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] |
maxtli/OpenWebText-2M | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 4970246069
num_examples: 1000000
download_size: 3024267517
dataset_size: 4970246069
---
# Dataset Card for "OpenWebText-2M"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
viarias/remote_sensing_2018_weedmap | ---
dataset_info:
- config_name: red_edge
features:
- name: B
dtype: image
- name: CIR
dtype: image
- name: G
dtype: image
- name: NDVI
dtype: image
- name: NIR
dtype: image
- name: R
dtype: image
- name: RE
dtype: image
- name: RGB
dtype: image
- name: annotation
dtype: image
splits:
- name: train
num_bytes: 1180504
num_examples: 766
- name: test
num_bytes: 314394
num_examples: 204
download_size: 637901163
dataset_size: 1494898
- config_name: sequoia
features:
- name: CIR
dtype: image
- name: G
dtype: image
- name: NDVI
dtype: image
- name: NIR
dtype: image
- name: R
dtype: image
- name: RE
dtype: image
- name: annotation
dtype: image
splits:
- name: train
num_bytes: 515690
num_examples: 428
- name: test
num_bytes: 327726
num_examples: 272
download_size: 444145925
dataset_size: 843416
license: apache-2.0
task_categories:
- image-segmentation
language:
- en
size_categories:
- 1K<n<10K
--- |
tyzhu/lmind_nq_train6000_eval6489_v1_docidx | ---
configs:
- config_name: default
data_files:
- split: train_qa
path: data/train_qa-*
- split: train_ic_qa
path: data/train_ic_qa-*
- split: train_recite_qa
path: data/train_recite_qa-*
- split: eval_qa
path: data/eval_qa-*
- split: eval_ic_qa
path: data/eval_ic_qa-*
- split: eval_recite_qa
path: data/eval_recite_qa-*
- split: all_docs
path: data/all_docs-*
- split: all_docs_eval
path: data/all_docs_eval-*
- split: train
path: data/train-*
- split: validation
path: data/validation-*
dataset_info:
features:
- name: answers
struct:
- name: answer_start
sequence: 'null'
- name: text
sequence: string
- name: inputs
dtype: string
- name: targets
dtype: string
splits:
- name: train_qa
num_bytes: 697367
num_examples: 6000
- name: train_ic_qa
num_bytes: 4540536
num_examples: 6000
- name: train_recite_qa
num_bytes: 4546536
num_examples: 6000
- name: eval_qa
num_bytes: 752802
num_examples: 6489
- name: eval_ic_qa
num_bytes: 4906186
num_examples: 6489
- name: eval_recite_qa
num_bytes: 4912675
num_examples: 6489
- name: all_docs
num_bytes: 7126313
num_examples: 10925
- name: all_docs_eval
num_bytes: 7125701
num_examples: 10925
- name: train
num_bytes: 7126313
num_examples: 10925
- name: validation
num_bytes: 7125701
num_examples: 10925
download_size: 30617969
dataset_size: 48860130
---
# Dataset Card for "lmind_nq_train6000_eval6489_v1_docidx"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
mHossain/final_train_v4_test_920000 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: 'Unnamed: 0'
dtype: int64
- name: input_text
dtype: string
- name: target_text
dtype: string
- name: prefix
dtype: string
splits:
- name: train
num_bytes: 7524029.7
num_examples: 18000
- name: test
num_bytes: 836003.3
num_examples: 2000
download_size: 3597294
dataset_size: 8360033.0
---
# Dataset Card for "final_train_v4_test_920000"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
eduagarcia-temp/cc100_dedup | ---
dataset_info:
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 52750544585
num_examples: 38059979
download_size: 33900809688
dataset_size: 52750544585
---
# Dataset Card for "cc100_dedup"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
mehdidc/yfcc15m | ---
license: cc
---
YFCC15m dataset from <https://github.com/openai/CLIP/blob/main/data/yfcc100m.md>.
The subset is obtained by filtering the original YFCC100m (yfcc100m_dataset.sql) using the photo ids from <https://github.com/openai/CLIP/blob/main/data/yfcc100m.md>.
The script to rebuild the data from the original YFCC100m is provided at [build_yfcc15m.py](build_yfcc15m.py).
|
Nexdata/75_Dictionaries_of_Different_Chinese_Fields | ---
license: cc-by-nc-nd-4.0
---
## Description
75 Chinese domain dictionaries, collected in 2013 and covering a wide range of content. Each line in a data file contains a term and its Chinese pinyin, and the terms are sorted alphabetically. This dataset can be used for tasks such as natural language understanding and knowledge base building.
For more details, please refer to the link: https://www.nexdata.ai/dataset/200?source=Huggingface
# Specifications
## Data content
Chinese Dictionary of Various Fields
## Data size
Chinese Dictionary of 75 Fields
## Collecting period
The year 2013
## Storage format
txt
## Language
Chinese
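Since each line pairs a term with its pinyin, the txt files can be parsed with a few lines of Python. This is a sketch only: it assumes a whitespace-separated "term pinyin" layout, and the sample lines are hypothetical since the actual delimiter and contents are not documented here.

```python
def parse_dictionary(lines):
    """Split each non-empty line into a (term, pinyin) tuple.

    Assumes the term comes first and the remainder of the line
    is its pinyin; the real files may use a different delimiter.
    """
    entries = []
    for line in lines:
        line = line.strip()
        if not line:
            continue
        term, _, pinyin = line.partition(" ")
        entries.append((term, pinyin))
    return entries

# Hypothetical sample lines in the documented "term + pinyin" layout.
sample = ["地理 di li", "化学 hua xue"]
print(parse_dictionary(sample))
```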
# Licensing Information
Commercial License
|
hindi_discourse | ---
annotations_creators:
- other
language_creators:
- found
language:
- hi
license:
- other
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- multi-label-classification
pretty_name: Discourse Analysis dataset
tags:
- discourse-analysis
dataset_info:
features:
- name: Story_no
dtype: int32
- name: Sentence
dtype: string
- name: Discourse Mode
dtype:
class_label:
names:
'0': Argumentative
'1': Descriptive
'2': Dialogue
'3': Informative
'4': Narrative
'5': Other
splits:
- name: train
num_bytes: 1998930
num_examples: 9968
download_size: 4176677
dataset_size: 1998930
---
# Dataset Card for Discourse Analysis dataset
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:** https://github.com/midas-research/hindi-discourse
- **Paper:** [An Annotated Dataset of Discourse Modes in Hindi Stories](https://aclanthology.org/2020.lrec-1.149/)
- **Point of Contact:** https://github.com/midas-research/MeTooMA
### Dataset Summary
- The Hindi Discourse Analysis dataset is a corpus for analyzing discourse modes present in its sentences.
- It contains sentences from stories written by 11 famous authors from the 20th Century.
- 4-5 publicly available stories by each author were selected, resulting in a collection of 53 stories.
- Most of these short stories were originally written in Hindi but some of them were written in other Indian languages and later translated to Hindi.
The corpus contains a total of 10472 sentences belonging to the following categories:
- Argumentative
- Descriptive
- Dialogic
- Informative
- Narrative
### Supported Tasks and Leaderboards
- Discourse Analysis of Hindi.
### Languages
Hindi
## Dataset Structure
- The dataset is structured into JSON format.
### Data Instances
{'Story_no': 15, 'Sentence': ' गाँठ से साढ़े तीन रुपये लग गये, जो अब पेट में जाकर खनकते भी नहीं! जो तेरी करनी मालिक! ” “इसमें मालिक की क्या करनी है? ”', 'Discourse Mode': 'Dialogue'}
### Data Fields
- `Story_no`: story number
- `Sentence`: the sentence text
- `Discourse Mode`: the discourse mode label
### Data Splits
- Train: 9983
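When loaded with `datasets`, the `Discourse Mode` feature is declared as a class label, with integer ids mapping to the names listed in the dataset configuration above. A small sketch of decoding a label id back to its name:

```python
# Label names as declared in the dataset configuration ('0' .. '5').
DISCOURSE_MODES = [
    "Argumentative", "Descriptive", "Dialogue",
    "Informative", "Narrative", "Other",
]

def decode_label(label_id):
    """Map an integer class id back to its discourse-mode name."""
    return DISCOURSE_MODES[label_id]

print(decode_label(2))  # → Dialogue
```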
## Dataset Creation
### Curation Rationale
- Present a new publicly available corpus of sentences from short stories written in Hindi, a low-resource language, with high-quality annotations for five different discourse modes: argumentative, narrative, descriptive, dialogic and informative.
- Perform a detailed analysis of the proposed annotated corpus and characterize the performance of
different classification algorithms.
### Source Data
- Source of all the data points in this dataset is Hindi stories written by famous authors of Hindi literature.
#### Initial Data Collection and Normalization
- All the data was collected from various Hindi websites.
- We chose against crowd-sourcing the annotation process because we wanted to directly work with the annotators for qualitative feedback and to also ensure high-quality annotations.
- We employed three native Hindi speakers with college-level education for the annotation task.
- We first selected two random stories from our corpus and had the three annotators work on them independently and classify each sentence based on the discourse mode.
- Please refer to this paper for detailed information: https://www.aclweb.org/anthology/2020.lrec-1.149/
#### Who are the source language producers?
Please refer to this paper for detailed information: https://www.aclweb.org/anthology/2020.lrec-1.149/
### Annotations
#### Annotation process
- The authors chose against crowd sourcing for labeling this dataset due to its highly sensitive nature.
- The annotators are domain experts having degrees in advanced clinical psychology and gender studies.
- They were provided a guidelines document with instructions about each task and its definitions, labels and examples.
- They studied the document, worked a few examples to get used to this annotation task.
- They also provided feedback for improving the class definitions.
- The annotation process is not mutually exclusive, implying that presence of one label does not mean the
absence of the other one.
#### Who are the annotators?
- The annotators were three native Hindi speakers with college level education.
- Please refer to the accompanying paper for a detailed annotation process.
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
- As future work, we would also like to use the presented corpus to see how it could be further used in certain downstream tasks such as emotion analysis, machine translation, textual entailment, and speech synthesis for improving the storytelling experience in Hindi.
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
- We could not get the best performance from deep learning models trained on this corpus, due to insufficient data for such models.
## Additional Information
Please refer to this link: https://github.com/midas-research/hindi-discourse
### Dataset Curators
- If you use the corpus in a product or application, then please credit the authors and [Multimodal Digital Media Analysis Lab - Indraprastha Institute of Information Technology, New Delhi](http://midas.iiitd.edu.in) appropriately.
Also, if you send us an email, we will be thrilled to know about how you have used the corpus.
- If interested in commercial use of the corpus, send email to midas@iiitd.ac.in.
- Multimodal Digital Media Analysis Lab - Indraprastha Institute of Information Technology, New Delhi, India
disclaims any responsibility for the use of the corpus and does not provide technical support.
However, the contact listed above will be happy to respond to queries and clarifications
- Please feel free to send us an email:
- with feedback regarding the corpus.
- with information on how you have used the corpus.
- if interested in having us analyze your social media data.
- if interested in a collaborative research project.
### Licensing Information
- If you use the corpus in a product or application, then please credit the authors
and [Multimodal Digital Media Analysis Lab - Indraprastha Institute of Information Technology, New Delhi](http://midas.iiitd.edu.in) appropriately.
### Citation Information
Please cite the following publication if you make use of the dataset: https://aclanthology.org/2020.lrec-1.149/
```
@inproceedings{dhanwal-etal-2020-annotated,
title = "An Annotated Dataset of Discourse Modes in {H}indi Stories",
author = "Dhanwal, Swapnil and
Dutta, Hritwik and
Nankani, Hitesh and
Shrivastava, Nilay and
Kumar, Yaman and
Li, Junyi Jessy and
Mahata, Debanjan and
Gosangi, Rakesh and
Zhang, Haimin and
Shah, Rajiv Ratn and
Stent, Amanda",
booktitle = "Proceedings of the 12th Language Resources and Evaluation Conference",
month = may,
year = "2020",
address = "Marseille, France",
publisher = "European Language Resources Association",
url = "https://www.aclweb.org/anthology/2020.lrec-1.149",
pages = "1191--1196",
abstract = "In this paper, we present a new corpus consisting of sentences from Hindi short stories annotated for five different discourse modes argumentative, narrative, descriptive, dialogic and informative. We present a detailed account of the entire data collection and annotation processes. The annotations have a very high inter-annotator agreement (0.87 k-alpha). We analyze the data in terms of label distributions, part of speech tags, and sentence lengths. We characterize the performance of various classification algorithms on this dataset and perform ablation studies to understand the nature of the linguistic models suitable for capturing the nuances of the embedded discourse structures in the presented corpus.",
language = "English",
ISBN = "979-10-95546-34-4",
}
```
### Contributions
Thanks to [@duttahritwik](https://github.com/duttahritwik) for adding this dataset. |
JayalekshmiGopakumar/doclaynetTextClassDataset | ---
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype: string
splits:
- name: train
num_bytes: 1806160
num_examples: 691
- name: validation
num_bytes: 167123
num_examples: 64
- name: test
num_bytes: 125303
num_examples: 49
download_size: 1172429
dataset_size: 2098586
---
# Dataset Card for "doclaynetTextClassDataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
ponlv/gaze-following | ---
dataset_info:
features:
- name: image
dtype: image
- name: split
dtype: string
- name: width
dtype: int64
- name: height
dtype: int64
- name: bboxes
dtype: string
- name: labels
dtype: string
- name: cab
dtype: int64
- name: hum
dtype: int64
- name: light
dtype: float64
- name: cam
dtype: int64
- name: env
dtype: int64
- name: gaze_item
dtype: int64
- name: gazeIdx
dtype: int64
- name: gaze_cx
dtype: int64
- name: gaze_cy
dtype: int64
- name: hx
dtype: int64
- name: hy
dtype: int64
- name: pitch
dtype: float64
- name: yaw
dtype: float64
- name: roll
dtype: float64
- name: seg
dtype: string
- name: segm_gazeIdx
dtype: int64
- name: occluded
dtype: int64
splits:
- name: train
num_bytes: 99355602839.0
num_examples: 172800
- name: test
num_bytes: 11133726929.8
num_examples: 19200
download_size: 110163535502
dataset_size: 110489329768.8
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
|
HimuraZ/ashe3 | ---
license: openrail
---
|
zolak/twitter_dataset_81_1713082656 | ---
dataset_info:
features:
- name: id
dtype: string
- name: tweet_content
dtype: string
- name: user_name
dtype: string
- name: user_id
dtype: string
- name: created_at
dtype: string
- name: url
dtype: string
- name: favourite_count
dtype: int64
- name: scraped_at
dtype: string
- name: image_urls
dtype: string
splits:
- name: train
num_bytes: 3283406
num_examples: 8078
download_size: 1611169
dataset_size: 3283406
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
tyzhu/squad_qa_baseline_v5_full_recite_full_passage_last_permute_rerun | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
dataset_info:
features:
- name: id
dtype: string
- name: title
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: text
dtype: string
- name: answer_start
dtype: int32
- name: answer
dtype: string
- name: context_id
dtype: string
- name: inputs
dtype: string
- name: targets
dtype: string
splits:
- name: train
num_bytes: 4369231.0
num_examples: 2385
- name: validation
num_bytes: 573308
num_examples: 300
download_size: 1012407
dataset_size: 4942539.0
---
# Dataset Card for "squad_qa_baseline_v5_full_recite_full_passage_last_permute_rerun"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
joey234/mmlu-formal_logic-dev | ---
dataset_info:
features:
- name: question
dtype: string
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
- name: negate_openai_prompt
struct:
- name: content
dtype: string
- name: role
dtype: string
splits:
- name: dev
num_bytes: 3458
num_examples: 5
download_size: 0
dataset_size: 3458
---
# Dataset Card for "mmlu-formal_logic-dev"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Mint1456/NLP_VIN2023 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
dataset_info:
features:
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
- name: labels
sequence: int64
splits:
- name: train
num_bytes: 1696181647
num_examples: 2884451
- name: validation
num_bytes: 7002160
num_examples: 11316
- name: test
num_bytes: 6952905
num_examples: 11225
download_size: 383293199
dataset_size: 1710136712
---
# Dataset Card for "NLP_VIN2023"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
yzhuang/metatree_fri_c4_1000_10 | ---
dataset_info:
features:
- name: id
dtype: int64
- name: X
sequence: float64
- name: y
dtype: int64
splits:
- name: train
num_bytes: 69700
num_examples: 697
- name: validation
num_bytes: 30300
num_examples: 303
download_size: 105275
dataset_size: 100000
---
# Dataset Card for "metatree_fri_c4_1000_10"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
skrishna/allenai-real-toxicity-prompts_non_toxic | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 16854
num_examples: 100
- name: test
num_bytes: 7908
num_examples: 50
download_size: 0
dataset_size: 24762
---
# Dataset Card for "allenai-real-toxicity-prompts_non_toxic"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
jopan/SciDMT | ---
task_categories:
- token-classification
task_ids:
- named-entity-recognition
- entity-linking-classification
- coreference-resolution
license: apache-2.0
language:
- en
tags:
- Named Entities Recognition
- Scientific Literature
pretty_name: A Large-Scale Corpus for Detecting Scientific Mentions
size_categories:
- 1M<n<10M
annotations_creators:
- expert-generated
viewer: false
configs:
- config_name: default
data_files:
- split: all
path: "SciDMT_papers.csv"
- split: test
path: "SciDMT_E_sentences.json"
---
# Description
We present SciDMT, an enhanced and expanded corpus for scientific mention detection, offering a significant advancement over existing related resources. SciDMT contains annotated scientific documents for datasets (D), methods (M), and tasks (T).
The corpus consists of two components:
1) the SciDMT main corpus, which includes 48 thousand scientific articles with over 1.8 million weakly annotated mentions in the form of in-text spans, and
2) an evaluation set comprising 100 scientific articles manually annotated for evaluation purposes.
To the best of our knowledge, SciDMT is the largest corpus for scientific entity mention detection. The corpus’s scale and diversity are instrumental in developing and refining models for tasks such as indexing scientific papers, enhancing information retrieval, and improving the accessibility of scientific knowledge. We demonstrate the corpus’s utility through experiments with advanced deep learning architectures like SciBERT and GPT-3.5. Our findings establish performance baselines and highlight unresolved challenges in scientific mention detection. SciDMT serves as a robust benchmark for the research community, encouraging the development of innovative models to further the field of scientific information extraction.
# Files
```python
{
    'DICT': './SciDMT_dict.json',
    # machine learning inputs at sentence level
    'sent_xy': './SciDMT_sentences.p',
    'sent_eval': './SciDMT_E_sentences.json',
    'sent_split': './SciDMT_sentences_split.json',
    # document level inputs
    'doc_split': './SciDMT_split.json',
    'doc_eval': './SciDMT_E_human_annotations.json',
    'doc_text_and_meta': './SciDMT_papers.csv',
}
```
# Usage
SciDMT_demo.ipynb describes how to open the files and print out the file structures.
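The notebook is the authoritative walkthrough. As a rough, hypothetical sketch (file names taken from the list above; the loader choice is only inferred from the extensions, `.json` via `json`, `.p` via `pickle`, `.csv` via `csv`), the files could be opened along these lines:

```python
import csv
import json
import pickle
from pathlib import Path

# Hypothetical loader (not the official SciDMT_demo.ipynb): dispatch on
# file extension for the corpus files listed above.
def load_scidmt_file(path):
    path = Path(path)
    if path.suffix == ".json":  # dictionaries, splits, evaluation sets
        with open(path, "r", encoding="utf-8") as f:
            return json.load(f)
    if path.suffix == ".p":  # pickled sentence-level ML inputs
        with open(path, "rb") as f:
            return pickle.load(f)
    if path.suffix == ".csv":  # paper text and metadata
        with open(path, "r", encoding="utf-8", newline="") as f:
            return list(csv.DictReader(f))
    raise ValueError(f"unexpected extension: {path.suffix}")

# e.g. dictionary = load_scidmt_file("./SciDMT_dict.json")
```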
# Download repository
```
git clone https://huggingface.co/datasets/jopan/SciDMT
```
# Citation
The paper was recently accepted at a conference; citation information will be posted here soon. |
CyberHarem/mashiro_bluearchive | ---
license: mit
task_categories:
- text-to-image
tags:
- art
- not-for-all-audiences
size_categories:
- n<1K
---
# Dataset of mashiro/静山マシロ/真白 (Blue Archive)
This is the dataset of mashiro/静山マシロ/真白 (Blue Archive), containing 274 images and their tags.
The core tags of this character are `black_hair, short_hair, halo, purple_eyes, red_halo, wings, black_wings, feathered_wings, hairband, bow`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:---------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 274 | 385.16 MiB | [Download](https://huggingface.co/datasets/CyberHarem/mashiro_bluearchive/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 1200 | 274 | 327.31 MiB | [Download](https://huggingface.co/datasets/CyberHarem/mashiro_bluearchive/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 685 | 660.43 MiB | [Download](https://huggingface.co/datasets/CyberHarem/mashiro_bluearchive/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for loading with [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html). If you need it, run the following code:
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/mashiro_bluearchive',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
## List of Clusters
List of tag clustering results; some outfits may be mined here.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 13 |  |  |  |  |  | 1girl, black_bikini, frilled_bikini, holding_food, off_shoulder, open_jacket, pink_jacket, popsicle, solo, white_hairband, blush, collarbone, looking_at_viewer, navel, simple_background, bare_shoulders, official_alternate_costume, black_choker, long_sleeves, stomach, white_background, halterneck, cowboy_shot, closed_mouth |
| 1 | 7 |  |  |  |  |  | 1girl, black_sailor_collar, black_serafuku, black_shirt, black_skirt, blush, long_sleeves, looking_at_viewer, pleated_skirt, simple_background, solo, white_background, black_choker, closed_mouth, red_bowtie, black_socks, red_armband, sitting, feet_out_of_frame, low_wings |
| 2 | 8 |  |  |  |  |  | 1girl, black_choker, black_sailor_collar, black_serafuku, black_skirt, long_sleeves, pleated_skirt, simple_background, solo, black_gloves, red_bowtie, white_background, black_shirt, closed_mouth, fingerless_gloves, looking_at_viewer, midriff, miniskirt, blush, cowboy_shot, crop_top, collarbone, red_armband, low_wings |
| 3 | 5 |  |  |  |  |  | 1girl, black_sailor_collar, black_serafuku, black_shirt, black_skirt, blush, closed_mouth, holding_gun, long_sleeves, pleated_skirt, simple_background, solo, anti-materiel_rifle, black_choker, black_footwear, black_gloves, black_socks, fingerless_gloves, looking_at_viewer, shoes, white_background, full_body, red_bowtie, feathers, miniskirt, red_armband, safety_pin |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | black_bikini | frilled_bikini | holding_food | off_shoulder | open_jacket | pink_jacket | popsicle | solo | white_hairband | blush | collarbone | looking_at_viewer | navel | simple_background | bare_shoulders | official_alternate_costume | black_choker | long_sleeves | stomach | white_background | halterneck | cowboy_shot | closed_mouth | black_sailor_collar | black_serafuku | black_shirt | black_skirt | pleated_skirt | red_bowtie | black_socks | red_armband | sitting | feet_out_of_frame | low_wings | black_gloves | fingerless_gloves | midriff | miniskirt | crop_top | holding_gun | anti-materiel_rifle | black_footwear | shoes | full_body | feathers | safety_pin |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:---------------|:-----------------|:---------------|:---------------|:--------------|:--------------|:-----------|:-------|:-----------------|:--------|:-------------|:--------------------|:--------|:--------------------|:-----------------|:-----------------------------|:---------------|:---------------|:----------|:-------------------|:-------------|:--------------|:---------------|:----------------------|:-----------------|:--------------|:--------------|:----------------|:-------------|:--------------|:--------------|:----------|:--------------------|:------------|:---------------|:--------------------|:----------|:------------|:-----------|:--------------|:----------------------|:-----------------|:--------|:------------|:-----------|:-------------|
| 0 | 13 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | |
| 1 | 7 |  |  |  |  |  | X | | | | | | | | X | | X | | X | | X | | | X | X | | X | | | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | |
| 2 | 8 |  |  |  |  |  | X | | | | | | | | X | | X | X | X | | X | | | X | X | | X | | X | X | X | X | X | X | X | X | | X | | | X | X | X | X | X | X | | | | | | | |
| 3 | 5 |  |  |  |  |  | X | | | | | | | | X | | X | | X | | X | | | X | X | | X | | | X | X | X | X | X | X | X | X | X | | | | X | X | | X | | X | X | X | X | X | X | X |
|
kpriyanshu256/MultiTabQA-multitable_pretraining-Salesforce-codet5-base_train-html-78000 | ---
dataset_info:
features:
- name: input_ids
sequence:
sequence: int32
- name: attention_mask
sequence:
sequence: int8
- name: labels
sequence:
sequence: int64
splits:
- name: train
num_bytes: 13336000
num_examples: 1000
download_size: 654859
dataset_size: 13336000
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Codec-SUPERB/voxceleb1_unit | ---
configs:
- config_name: default
data_files:
- split: academicodec_hifi_16k_320d
path: data/academicodec_hifi_16k_320d-*
- split: academicodec_hifi_16k_320d_large_uni
path: data/academicodec_hifi_16k_320d_large_uni-*
- split: academicodec_hifi_24k_320d
path: data/academicodec_hifi_24k_320d-*
- split: audiodec_24k_320d
path: data/audiodec_24k_320d-*
- split: dac_16k
path: data/dac_16k-*
- split: dac_24k
path: data/dac_24k-*
- split: dac_44k
path: data/dac_44k-*
- split: encodec_24k_12bps
path: data/encodec_24k_12bps-*
- split: encodec_24k_1_5bps
path: data/encodec_24k_1_5bps-*
- split: encodec_24k_24bps
path: data/encodec_24k_24bps-*
- split: encodec_24k_3bps
path: data/encodec_24k_3bps-*
- split: encodec_24k_6bps
path: data/encodec_24k_6bps-*
- split: funcodec_en_libritts_16k_gr1nq32ds320
path: data/funcodec_en_libritts_16k_gr1nq32ds320-*
- split: funcodec_en_libritts_16k_gr8nq32ds320
path: data/funcodec_en_libritts_16k_gr8nq32ds320-*
- split: funcodec_en_libritts_16k_nq32ds320
path: data/funcodec_en_libritts_16k_nq32ds320-*
- split: funcodec_en_libritts_16k_nq32ds640
path: data/funcodec_en_libritts_16k_nq32ds640-*
- split: funcodec_zh_en_16k_nq32ds320
path: data/funcodec_zh_en_16k_nq32ds320-*
- split: funcodec_zh_en_16k_nq32ds640
path: data/funcodec_zh_en_16k_nq32ds640-*
- split: speech_tokenizer_16k
path: data/speech_tokenizer_16k-*
dataset_info:
features:
- name: id
dtype: string
- name: unit
sequence:
sequence: int64
splits:
- name: academicodec_hifi_16k_320d
num_bytes: 64794258
num_examples: 4874
- name: academicodec_hifi_16k_320d_large_uni
num_bytes: 64794258
num_examples: 4874
- name: academicodec_hifi_24k_320d
num_bytes: 97062226
num_examples: 4874
- name: audiodec_24k_320d
num_bytes: 207036914
num_examples: 4874
- name: dac_16k
num_bytes: 250043666
num_examples: 4874
- name: dac_24k
num_bytes: 990592882
num_examples: 4874
- name: dac_44k
num_bytes: 319279370
num_examples: 4874
- name: encodec_24k_12bps
num_bytes: 388331762
num_examples: 4874
- name: encodec_24k_1_5bps
num_bytes: 48699266
num_examples: 4874
- name: encodec_24k_24bps
num_bytes: 776483186
num_examples: 4874
- name: encodec_24k_3bps
num_bytes: 97218194
num_examples: 4874
- name: encodec_24k_6bps
num_bytes: 194256050
num_examples: 4874
- name: funcodec_en_libritts_16k_gr1nq32ds320
num_bytes: 518339442
num_examples: 4874
- name: funcodec_en_libritts_16k_gr8nq32ds320
num_bytes: 518339442
num_examples: 4874
- name: funcodec_en_libritts_16k_nq32ds320
num_bytes: 518339442
num_examples: 4874
- name: funcodec_en_libritts_16k_nq32ds640
num_bytes: 260195698
num_examples: 4874
- name: funcodec_zh_en_16k_nq32ds320
num_bytes: 518339442
num_examples: 4874
- name: funcodec_zh_en_16k_nq32ds640
num_bytes: 260195698
num_examples: 4874
- name: speech_tokenizer_16k
num_bytes: 129720114
num_examples: 4874
download_size: 941977312
dataset_size: 6222061310
---
# Dataset Card for "voxceleb1_unit"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
sankovic/xxxdataset | ---
license: openrail
---
|
zelalt/paper_augmentation | ---
dataset_info:
features:
- name: instruction
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 389429.81374311395
num_examples: 182
download_size: 384178
dataset_size: 389429.81374311395
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
efoley/doq_data_gray_64 | ---
dataset_info:
features:
- name: image
dtype: image
splits:
- name: train
num_bytes: 195981020.752
num_examples: 79052
download_size: 220644903
dataset_size: 195981020.752
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
kardosdrur/dawiki_qa_zephyr | ---
dataset_info:
features:
- name: question
dtype: string
- name: answer
dtype: string
splits:
- name: train
num_bytes: 2912592.195482657
num_examples: 8925
- name: test
num_bytes: 728392.8045173434
num_examples: 2232
download_size: 2485097
dataset_size: 3640985.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
|
fhirfly/medicalquestions | ---
license: mit
task_categories:
- text-classification
language:
- en
tags:
- medical
pretty_name: FhirFly Medical Questions
size_categories:
- 10K<n<100K
---
# 🤗 Dataset Card: fhirfly/medicalquestions
## Dataset Overview
- Dataset name: fhirfly/medicalquestions
- Dataset size: 25,102 questions
- Labels: 1 (medical), 0 (non-medical)
- Distribution: Evenly distributed between medical and non-medical questions
## Dataset Description
The fhirfly/medicalquestions dataset is a collection of 25,102 questions labeled as either medical or non-medical. The dataset aims to provide a diverse range of questions covering various medical and non-medical domains.
The questions in the dataset have been manually labeled by domain experts based on the context and content of each question. Each question is assigned a label of 1 if it is determined to be a medical question and a label of 0 if it is classified as a non-medical question.
## Dataset Structure
The dataset consists of a single file containing the following columns:
- **Text**: The text of the question.
- **Label**: The label assigned to each question, either 1 (medical) or 0 (non-medical).
The questions are evenly distributed between medical and non-medical categories, ensuring a balanced dataset for training and evaluation.
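As a quick sanity check on that balance, one could count rows per label once the file is loaded. A minimal sketch (the column names come from this card; the sample rows below are made up for illustration):

```python
from collections import Counter

# 1 = medical, 0 = non-medical, per the card's label scheme.
def label_distribution(rows):
    """Count rows per label from (text, label) pairs."""
    return Counter(label for _, label in rows)

# Hypothetical sample rows mimicking the (Text, Label) schema:
rows = [
    ("What are the symptoms of type 2 diabetes?", 1),
    ("How do I renew my passport?", 0),
    ("Is ibuprofen safe to take with aspirin?", 1),
    ("What is the capital of France?", 0),
]
print(label_distribution(rows))  # Counter({1: 2, 0: 2})
```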
## Potential Biases
Efforts have been made to ensure that the dataset is representative of various medical and non-medical topics. However, it is important to acknowledge that biases may exist in the dataset due to the subjective nature of labeling questions. Biases could be present in terms of the types of questions included, the representation of certain medical conditions or non-medical topics, or the labeling process itself.
It is recommended to perform thorough evaluation and analysis of the dataset to identify and mitigate potential biases during model training and deployment. Care should be taken to address any biases to ensure fair and unbiased predictions.
## Dataset Quality
The fhirfly/medicalquestions dataset has undergone manual labeling by domain experts, which helps maintain a high level of quality and accuracy. However, human labeling is not entirely immune to errors or subjectivity.
To ensure the quality of the dataset, a thorough review process has been conducted to minimize errors and maintain consistency in labeling. Nonetheless, it is advisable to validate and verify the data as part of your specific use case to ensure it meets your requirements.
## Data License
The fhirfly/medicalquestions dataset is released under the MIT license. Please refer to the license file accompanying the dataset for more information on its usage and any restrictions that may apply.
## Dataset Citation
If you use the fhirfly/medicalquestions dataset in your work, please cite it as:
```
@dataset{fhirfly/medicalquestions,
title = {fhirfly/medicalquestions},
author = {fhirfly},
year = {2023},
publisher = {Hugging Face},
version = {1.0.0},
url = {https://huggingface.co/datasets/fhirfly/medicalquestions}
}
``` |
CyberHarem/sakurako_bluearchive | ---
license: mit
task_categories:
- text-to-image
tags:
- art
- not-for-all-audiences
size_categories:
- n<1K
---
# Dataset of sakurako/歌住サクラコ/樱子 (Blue Archive)
This is the dataset of sakurako/歌住サクラコ/樱子 (Blue Archive), containing 392 images and their tags.
The core tags of this character are `long_hair, grey_hair, halo, breasts, purple_eyes, hair_between_eyes, large_breasts, crossed_bangs, very_long_hair`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:----------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 392 | 705.39 MiB | [Download](https://huggingface.co/datasets/CyberHarem/sakurako_bluearchive/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 1200 | 392 | 586.49 MiB | [Download](https://huggingface.co/datasets/CyberHarem/sakurako_bluearchive/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 1040 | 1.15 GiB | [Download](https://huggingface.co/datasets/CyberHarem/sakurako_bluearchive/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for loading with [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html). If you need it, run the following code:
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/sakurako_bluearchive',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
## List of Clusters
List of tag clustering results; some outfits may be mined here.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 6 |  |  |  |  |  | 1girl, black_dress, black_gloves, blush, habit, long_sleeves, looking_at_viewer, nun, smile, solo, blue_neckerchief, puffy_sleeves, simple_background, upper_body, open_mouth, white_background |
| 1 | 6 |  |  |  |  |  | 1girl, black_dress, blue_neckerchief, habit, juliet_sleeves, looking_at_viewer, nun, solo, black_gloves, black_thighhighs, closed_mouth, simple_background, blush, smile, cowboy_shot, frilled_dress, red_eyes |
| 2 | 9 |  |  |  |  |  | 1girl, assault_rifle, black_dress, black_thighhighs, bullpup, habit, holding_gun, looking_at_viewer, nun, solo, black_gloves, juliet_sleeves, blue_neckerchief, frills, closed_mouth, simple_background, white_background |
| 3 | 29 |  |  |  |  |  | 1girl, black_thighhighs, habit, leotard_under_clothes, nun, black_leotard, looking_at_viewer, leotard_peek, solo, highleg_leotard, blush, juliet_sleeves, lifted_by_self, black_gloves, black_dress, thighs, blue_neckerchief, closed_mouth, simple_background, groin, dress_lift, smile, latex, white_background |
| 4 | 17 |  |  |  |  |  | 1girl, black_leotard, habit, highleg_leotard, latex, looking_at_viewer, nun, solo, simple_background, white_background, blush, gloves, black_thighhighs, closed_mouth, smile, thighs, covered_navel |
| 5 | 6 |  |  |  |  |  | 1girl, black_leotard, black_thighhighs, blush, habit, highleg_leotard, latex, looking_at_viewer, nun, solo, thighs, gloves |
| 6 | 5 |  |  |  |  |  | 1girl, assault_rifle, black_leotard, black_thighhighs, bullpup, habit, highleg_leotard, holding_gun, nun, solo, gloves, looking_at_viewer, closed_mouth, thigh_boots, blush, latex_legwear |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | black_dress | black_gloves | blush | habit | long_sleeves | looking_at_viewer | nun | smile | solo | blue_neckerchief | puffy_sleeves | simple_background | upper_body | open_mouth | white_background | juliet_sleeves | black_thighhighs | closed_mouth | cowboy_shot | frilled_dress | red_eyes | assault_rifle | bullpup | holding_gun | frills | leotard_under_clothes | black_leotard | leotard_peek | highleg_leotard | lifted_by_self | thighs | groin | dress_lift | latex | gloves | covered_navel | thigh_boots | latex_legwear |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:--------------|:---------------|:--------|:--------|:---------------|:--------------------|:------|:--------|:-------|:-------------------|:----------------|:--------------------|:-------------|:-------------|:-------------------|:-----------------|:-------------------|:---------------|:--------------|:----------------|:-----------|:----------------|:----------|:--------------|:---------|:------------------------|:----------------|:---------------|:------------------|:-----------------|:---------|:--------|:-------------|:--------|:---------|:----------------|:--------------|:----------------|
| 0 | 6 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | |
| 1 | 6 |  |  |  |  |  | X | X | X | X | X | | X | X | X | X | X | | X | | | | X | X | X | X | X | X | | | | | | | | | | | | | | | | | |
| 2 | 9 |  |  |  |  |  | X | X | X | | X | | X | X | | X | X | | X | | | X | X | X | X | | | | X | X | X | X | | | | | | | | | | | | | |
| 3 | 29 |  |  |  |  |  | X | X | X | X | X | | X | X | X | X | X | | X | | | X | X | X | X | | | | | | | | X | X | X | X | X | X | X | X | X | | | | |
| 4 | 17 |  |  |  |  |  | X | | | X | X | | X | X | X | X | | | X | | | X | | X | X | | | | | | | | | X | | X | | X | | | X | X | X | | |
| 5 | 6 |  |  |  |  |  | X | | | X | X | | X | X | | X | | | | | | | | X | | | | | | | | | | X | | X | | X | | | X | X | | | |
| 6 | 5 |  |  |  |  |  | X | | | X | X | | X | X | | X | | | | | | | | X | X | | | | X | X | X | | | X | | X | | | | | | X | | X | X |
|
lbox/lbox_open | ---
license: cc-by-nc-4.0
---
# Dataset Card for `lbox_open`
## Dataset Description
- **Homepage:** `https://lbox.kr`
- **Repository:** `https://github.com/lbox-kr/lbox_open`
- **Point of Contact:** [Wonseok Hwang](mailto:wonseok.hwang@lbox.kr)
### Dataset Summary
A Legal AI Benchmark Dataset from Korean Legal Cases.
### Languages
Korean
### How to use
```python
from datasets import load_dataset
# casename classification task
data_cn = load_dataset("lbox/lbox_open", "casename_classification")
data_cn_plus = load_dataset("lbox/lbox_open", "casename_classification_plus")
# statutes classification task
data_st = load_dataset("lbox/lbox_open", "statute_classification")
data_st_plus = load_dataset("lbox/lbox_open", "statute_classification_plus")
# legal judgment prediction tasks
data_ljp_criminal = load_dataset("lbox/lbox_open", "ljp_criminal")
data_ljp_civil = load_dataset("lbox/lbox_open", "ljp_civil")
# case summarization task
data_summ = load_dataset("lbox/lbox_open", "summarization")
data_summ_plus = load_dataset("lbox/lbox_open", "summarization_plus")
# precedent corpus
data_corpus = load_dataset("lbox/lbox_open", "precedent_corpus")
```
For more information about the dataset, please visit <https://github.com/lbox-kr/lbox_open>.
## Licensing Information
Copyright 2022-present [LBox Co. Ltd.](https://lbox.kr/)
Licensed under the [CC BY-NC 4.0](https://creativecommons.org/licenses/by-nc/4.0/) |
anan-2024/twitter_dataset_1713006150 | ---
dataset_info:
features:
- name: id
dtype: string
- name: tweet_content
dtype: string
- name: user_name
dtype: string
- name: user_id
dtype: string
- name: created_at
dtype: string
- name: url
dtype: string
- name: favourite_count
dtype: int64
- name: scraped_at
dtype: string
- name: image_urls
dtype: string
splits:
- name: train
num_bytes: 20449
num_examples: 46
download_size: 11368
dataset_size: 20449
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
yzhuang/metatree_fri_c1_1000_25 | ---
dataset_info:
features:
- name: id
dtype: int64
- name: X
sequence: float64
- name: y
dtype: int64
splits:
- name: train
num_bytes: 157520
num_examples: 716
- name: validation
num_bytes: 62480
num_examples: 284
download_size: 254322
dataset_size: 220000
---
# Dataset Card for "metatree_fri_c1_1000_25"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
liuyanchen1015/MULTI_VALUE_cola_flat_adj_for_adv | ---
dataset_info:
features:
- name: sentence
dtype: string
- name: label
dtype: int64
- name: idx
dtype: int64
- name: value_score
dtype: int64
splits:
- name: dev
num_bytes: 722
num_examples: 10
- name: test
num_bytes: 880
num_examples: 10
- name: train
num_bytes: 8359
num_examples: 116
download_size: 10886
dataset_size: 9961
---
# Dataset Card for "MULTI_VALUE_cola_flat_adj_for_adv"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Eduardovco/Ber | ---
license: openrail
---
|
ImperialIndians23/nlp_cw_data_unprocessed_augmented_synonym | ---
dataset_info:
features:
- name: par_id
dtype: string
- name: community
dtype: string
- name: text
dtype: string
- name: label
dtype: int64
splits:
- name: train
num_bytes: 2783892
num_examples: 9169
- name: valid
num_bytes: 616626
num_examples: 2094
download_size: 2149455
dataset_size: 3400518
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: valid
path: data/valid-*
---
|
Multimodal-Fatima/Caltech101_with_background_test_facebook_opt_6.7b_Visclues_ns_6084_random | ---
dataset_info:
features:
- name: id
dtype: int64
- name: image
dtype: image
- name: prompt
dtype: string
- name: true_label
dtype: string
- name: prediction
dtype: string
- name: scores
sequence: float64
splits:
- name: fewshot_1_bs_16
num_bytes: 103748347.5
num_examples: 6084
- name: fewshot_3_bs_16
num_bytes: 107978514.5
num_examples: 6084
download_size: 195080607
dataset_size: 211726862.0
---
# Dataset Card for "Caltech101_with_background_test_facebook_opt_6.7b_Visclues_ns_6084_random"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
xinqiyang/iruca_llama2_1kplus_shuffle | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 1657441.0
num_examples: 1016
download_size: 980826
dataset_size: 1657441.0
---
# Dataset Card for "iruca_llama2_1kplus_shuffle"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
DL3DV/DL3DV-ALL-2K | ---
tags:
- 3D Vision
- NeRF
- 3D Gaussian
- Dataset
- Novel View Synthesis
- Text to 3D
- Image to 3D
pretty_name: Dl3DV-Dataset
size_categories:
- n>1T
---
# DL3DV-Dataset
This repo has all the 2K frames with camera poses of the DL3DV-10K Dataset. We are working hard to review the entire dataset to remove sensitive information. Thank you for your patience.
# Download
If you have enough space, you can use git to download the dataset from Hugging Face. See this [link](https://huggingface.co/docs/hub/en/datasets-downloading). The [480P](https://huggingface.co/datasets/DL3DV/DL3DV-ALL-480P)/[960P](https://huggingface.co/datasets/DL3DV/DL3DV-ALL-960P) versions should satisfy most needs.
If you do not have enough space, we also provide a [download script](https://github.com/DL3DV-10K/Dataset/blob/main/scripts/download.py) to download a subset. Usage:
```Bash
usage: download.py [-h] --odir ODIR --subset {1K,2K,3K,4K,5K,6K,7K,8K,9K,10K} --resolution {4K,2K,960P,480P} --file_type {images+poses,video,colmap_cache} [--hash HASH]
[--clean_cache]
optional arguments:
-h, --help show this help message and exit
--odir ODIR output directory
--subset {1K,2K,3K,4K,5K,6K,7K,8K,9K,10K}
The subset of the benchmark to download
--resolution {4K,2K,960P,480P}
The resolution to download
--file_type {images+poses,video,colmap_cache}
The file type to download
--hash HASH If set subset=hash, this is the hash code of the scene to download
--clean_cache If set, will clean the huggingface cache to save space
```
Here are some examples:
```Bash
# Make sure you have applied for the access.
# Use this to download the download.py script
wget https://raw.githubusercontent.com/DL3DV-10K/Dataset/main/scripts/download.py
# Download 2K resolution images and poses, 0~1K subset, output to DL3DV-10K directory
python download.py --odir DL3DV-10K --subset 1K --resolution 2K --file_type images+poses --clean_cache
# Download 2K resolution images and poses, 1K~2K subset, output to DL3DV-10K directory
python download.py --odir DL3DV-10K --subset 2K --resolution 2K --file_type images+poses --clean_cache
```
You can also download a specific scene with its hash. The scene-hash pair visualization can be found [here](https://htmlpreview.github.io/?https://github.com/DL3DV-10K/Dataset/blob/main/visualize/index.html).
```Bash
python download.py --odir DL3DV-10K --subset 2K --resolution 2K --file_type images+poses --hash e2cedefea8a0ed2d0ffbd5bdc08acbe7e1f85c96f72f7b790e9dfe1c98963047 --clean_cache
```
# News
- [x] DL3DV-1K, 2K, 3K, 4K
- [ ] DL3DV-5K ~ 10K
|
tyzhu/fwv2_random_num_tip_train_10_eval_10 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: train_doc2id
path: data/train_doc2id-*
- split: train_id2doc
path: data/train_id2doc-*
- split: train_find_word
path: data/train_find_word-*
- split: eval_find_word
path: data/eval_find_word-*
- split: id_context_mapping
path: data/id_context_mapping-*
dataset_info:
features:
- name: inputs
dtype: string
- name: targets
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 3919
num_examples: 30
- name: train_doc2id
num_bytes: 1651
num_examples: 20
- name: train_id2doc
num_bytes: 1711
num_examples: 20
- name: train_find_word
num_bytes: 2208
num_examples: 10
- name: eval_find_word
num_bytes: 1604
num_examples: 10
- name: id_context_mapping
num_bytes: 1071
num_examples: 20
download_size: 19912
dataset_size: 12164
---
# Dataset Card for "fwv2_random_num_tip_train_10_eval_10"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Codec-SUPERB/superb_ks_synth | ---
dataset_info:
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: id
dtype: string
splits:
- name: original
num_bytes: 98824867.676
num_examples: 3081
- name: academicodec_hifi_16k_320d
num_bytes: 98929621.676
num_examples: 3081
- name: academicodec_hifi_16k_320d_large_uni
num_bytes: 98929621.676
num_examples: 3081
- name: academicodec_hifi_24k_320d
num_bytes: 148225621.676
num_examples: 3081
- name: audiodec_24k_320d
num_bytes: 148225621.676
num_examples: 3081
- name: dac_16k
num_bytes: 98929621.676
num_examples: 3081
- name: dac_24k
num_bytes: 148225621.676
num_examples: 3081
- name: dac_44k
num_bytes: 272081821.676
num_examples: 3081
- name: encodec_24k_12bps
num_bytes: 148225621.676
num_examples: 3081
- name: encodec_24k_1_5bps
num_bytes: 148225621.676
num_examples: 3081
- name: encodec_24k_24bps
num_bytes: 148225621.676
num_examples: 3081
- name: encodec_24k_3bps
num_bytes: 148225621.676
num_examples: 3081
- name: encodec_24k_6bps
num_bytes: 148225621.676
num_examples: 3081
- name: funcodec_en_libritts_16k_gr1nq32ds320
num_bytes: 98929621.676
num_examples: 3081
- name: funcodec_en_libritts_16k_gr8nq32ds320
num_bytes: 98929621.676
num_examples: 3081
- name: funcodec_en_libritts_16k_nq32ds320
num_bytes: 98929621.676
num_examples: 3081
- name: funcodec_en_libritts_16k_nq32ds640
num_bytes: 98929621.676
num_examples: 3081
- name: funcodec_zh_en_16k_nq32ds320
num_bytes: 98929621.676
num_examples: 3081
- name: funcodec_zh_en_16k_nq32ds640
num_bytes: 98929621.676
num_examples: 3081
- name: speech_tokenizer_16k
num_bytes: 98929621.676
num_examples: 3081
download_size: 2282629443
dataset_size: 2546007879.5200014
configs:
- config_name: default
data_files:
- split: original
path: data/original-*
- split: academicodec_hifi_16k_320d
path: data/academicodec_hifi_16k_320d-*
- split: academicodec_hifi_16k_320d_large_uni
path: data/academicodec_hifi_16k_320d_large_uni-*
- split: academicodec_hifi_24k_320d
path: data/academicodec_hifi_24k_320d-*
- split: audiodec_24k_320d
path: data/audiodec_24k_320d-*
- split: dac_16k
path: data/dac_16k-*
- split: dac_24k
path: data/dac_24k-*
- split: dac_44k
path: data/dac_44k-*
- split: encodec_24k_12bps
path: data/encodec_24k_12bps-*
- split: encodec_24k_1_5bps
path: data/encodec_24k_1_5bps-*
- split: encodec_24k_24bps
path: data/encodec_24k_24bps-*
- split: encodec_24k_3bps
path: data/encodec_24k_3bps-*
- split: encodec_24k_6bps
path: data/encodec_24k_6bps-*
- split: funcodec_en_libritts_16k_gr1nq32ds320
path: data/funcodec_en_libritts_16k_gr1nq32ds320-*
- split: funcodec_en_libritts_16k_gr8nq32ds320
path: data/funcodec_en_libritts_16k_gr8nq32ds320-*
- split: funcodec_en_libritts_16k_nq32ds320
path: data/funcodec_en_libritts_16k_nq32ds320-*
- split: funcodec_en_libritts_16k_nq32ds640
path: data/funcodec_en_libritts_16k_nq32ds640-*
- split: funcodec_zh_en_16k_nq32ds320
path: data/funcodec_zh_en_16k_nq32ds320-*
- split: funcodec_zh_en_16k_nq32ds640
path: data/funcodec_zh_en_16k_nq32ds640-*
- split: speech_tokenizer_16k
path: data/speech_tokenizer_16k-*
---
|
noah-yusen/tutorial_dataset | ---
license: mit
---
|
lmlab/basic-math-10m | ---
task_categories:
- text-generation
- text2text-generation
language:
- en
tags:
- math
pretty_name: Basic Math 10M
size_categories:
- 10M<n<100M
license:
- cc-by-sa-4.0
- gpl
---
# Basic Math 10M
A dataset of 10 million basic arithmetic problems with potential user prompts. It is an extended version of [Basic Math 1M](https://huggingface.co/datasets/lmlab/basic-math-1m).
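To give a feel for what "basic arithmetic problems with potential user prompts" might look like, here is a minimal, illustrative generator. The field names (`prompt`, `answer`), operand ranges, and prompt template are assumptions for the sketch, not the actual format of Basic Math 10M:

```python
import random

# Hypothetical sketch: generate prompt/answer pairs similar in spirit to
# a basic-arithmetic dataset. Field names and templates are assumptions.
def make_problem(rng: random.Random) -> dict:
    a, b = rng.randint(0, 999), rng.randint(0, 999)
    op = rng.choice(["+", "-", "*"])
    answer = {"+": a + b, "-": a - b, "*": a * b}[op]
    return {"prompt": f"What is {a} {op} {b}?", "answer": str(answer)}

rng = random.Random(0)
problems = [make_problem(rng) for _ in range(3)]
```

For the real entries and their exact phrasing, inspect the dataset files directly on the Hub.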
## License
Basic Math 10M is dual-licensed under the GNU GPL and the CC BY-SA 4.0 licenses; you may use either at your option. If you are interested in including this dataset in another differently licensed dataset, please contact me.
## Credit
Basic Math 10M was inspired by [Simple Math](https://huggingface.co/datasets/fblgit/simple-math) but was created independently. |
arieg/bw_spec_cls_8_00_noise_200 | ---
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': '10'
'1': '140'
'2': '2'
'3': '5'
'4': '141'
'5': '190'
'6': '193'
'7': '194'
splits:
- name: train
num_bytes: 99107527.0
num_examples: 1700
download_size: 51930526
dataset_size: 99107527.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "bw_spec_cls_8_00_noise_200"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
polinaeterna/desc_stats | ---
dataset_info:
features:
- name: string_label__column
dtype: string
- name: string_label__nan_column
dtype: string
- name: string_label__all_nan_column
dtype: string
- name: int__column
dtype: int32
- name: int__nan_column
dtype: int32
- name: int__all_nan_column
dtype: int32
- name: int__only_one_value_column
dtype: int32
- name: int__only_one_value_nan_column
dtype: int32
- name: float__column
dtype: float32
- name: float__nan_column
dtype: float32
- name: float__all_nan_column
dtype: float32
- name: class_label__column
dtype:
class_label:
names:
'0': cat
'1': dog
- name: class_label__less_classes_column
dtype:
class_label:
names:
'0': cat
'1': dog
- name: class_label__nan_column
dtype:
class_label:
names:
'0': cat
'1': dog
- name: class_label__all_nan_column
dtype:
class_label:
names:
'0': cat
'1': dog
- name: class_label__string_column
dtype:
class_label:
names:
'0': cat
'1': dog
- name: class_label__string_nan_column
dtype:
class_label:
names:
'0': cat
'1': dog
- name: class_label__string_all_nan_column
dtype:
class_label:
names:
'0': cat
'1': dog
- name: float__negative_column
dtype: float64
- name: float__cross_zero_column
dtype: float32
- name: float__large_values_column
dtype: float32
- name: float__only_one_value_column
dtype: float32
- name: float__only_one_value_nan_column
dtype: float32
- name: int__negative_column
dtype: int32
- name: int__cross_zero_column
dtype: int32
- name: int__large_values_column
dtype: int32
- name: bool__column
dtype: bool
- name: bool__nan_column
dtype: bool
- name: bool__all_nan_column
dtype: bool
- name: list__int_column
list: int32
- name: list__int_nan_column
list: int32
- name: list__int_all_nan_column
list: int32
- name: list__string_column
list: string
- name: list__string_nan_column
list: string
- name: list__string_all_nan_column
list: string
- name: list__dict_column
list:
- name: author
dtype: string
- name: content
dtype: string
- name: likes
dtype: int32
- name: list__dict_nan_column
list:
- name: author
dtype: string
- name: content
dtype: string
- name: likes
dtype: int32
- name: list__dict_all_nan_column
list:
- name: author
dtype: string
- name: content
dtype: string
- name: likes
dtype: int32
- name: list__sequence_column
sequence: int64
- name: list__sequence_nan_column
sequence: int64
- name: list__sequence_all_nan_column
sequence: int64
- name: list__sequence_dict_column
sequence:
- name: author
dtype: string
- name: content
dtype: string
- name: likes
dtype: int32
- name: list__sequence_dict_nan_column
sequence:
- name: author
dtype: string
- name: content
dtype: string
- name: likes
dtype: int32
- name: list__sequence_dict_all_nan_column
sequence:
- name: author
dtype: string
- name: content
dtype: string
- name: likes
dtype: int32
- name: list__sequence_of_sequence_column
sequence:
sequence: string
- name: list__sequence_of_sequence_nan_column
sequence:
sequence: string
- name: list__sequence_of_sequence_all_nan_column
sequence:
sequence: string
- name: list__sequence_dict_of_lists_column
sequence:
- name: name
dtype: string
- name: count
dtype: int16
- name: list__sequence_dict_of_lists_nan_column
sequence:
- name: name
dtype: string
- name: count
dtype: int16
- name: list__sequence_dict_of_lists_all_nan_column
sequence:
- name: name
dtype: string
- name: count
dtype: int16
splits:
- name: train
num_bytes: 15755
num_examples: 20
download_size: 43337
dataset_size: 15755
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Algp123/seansimon | ---
license: cc
---
|
CocoaRain/data_audio_zh | ---
license: apache-2.0
---
|
mHossain/final_train_v2_500000 | ---
dataset_info:
features:
- name: 'Unnamed: 0'
dtype: int64
- name: input_text
dtype: string
- name: target_text
dtype: string
- name: prefix
dtype: string
splits:
- name: train
num_bytes: 6083695.8
num_examples: 18000
- name: test
num_bytes: 675966.2
num_examples: 2000
download_size: 2962855
dataset_size: 6759662.0
---
# Dataset Card for "final_train_v2_500000"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
crystalai/autotrain-data-crystal_alchemist-vision | |
MoE-UNC/story_cloze | ---
dataset_info:
features:
- name: story_id
dtype: string
- name: input_sentence_1
dtype: string
- name: input_sentence_2
dtype: string
- name: input_sentence_3
dtype: string
- name: input_sentence_4
dtype: string
- name: sentence_quiz1
dtype: string
- name: sentence_quiz2
dtype: string
- name: answer_right_ending
dtype: int32
splits:
- name: validation
num_bytes: 614056
num_examples: 1871
- name: test
num_bytes: 613156
num_examples: 1871
download_size: 872758
dataset_size: 1227212
configs:
- config_name: default
data_files:
- split: validation
path: data/validation-*
- split: test
path: data/test-*
---
|
gmongaras/BERT_Base_Cased_128_Dataset_Mapped | ---
dataset_info:
features:
- name: input_ids
sequence: int32
- name: token_type_ids
sequence: int8
- name: attention_mask
sequence: int8
splits:
- name: train
num_bytes: 51067549265.998314
num_examples: 131569119
download_size: 15915934708
dataset_size: 51067549265.998314
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
Dataset tokenized with the bert-base-cased tokenizer; sentences are cut off at 128 tokens (single sentences, not sentence pairs), with all sentence pairs extracted.
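The 128-token cutoff can be illustrated with a toy whitespace tokenizer. This is only a sketch of the truncation/padding logic behind the `input_ids`/`token_type_ids`/`attention_mask` fields; the real dataset was built with the Hugging Face bert-base-cased tokenizer, not this stand-in:

```python
# Toy illustration of fixed-length truncation/padding to 128 tokens.
# NOTE: a whitespace "tokenizer" stands in for bert-base-cased here,
# purely to show the shape of the output fields.
MAX_LEN = 128

def encode(text: str, vocab: dict) -> dict:
    ids = [vocab.setdefault(tok, len(vocab)) for tok in text.split()]
    ids = ids[:MAX_LEN]                      # truncate to 128 tokens
    pad = MAX_LEN - len(ids)
    return {
        "input_ids": ids + [0] * pad,        # 0 acts as [PAD] here
        "token_type_ids": [0] * MAX_LEN,     # single sentence, no pairs
        "attention_mask": [1] * len(ids) + [0] * pad,
    }

vocab = {"[PAD]": 0}
enc = encode("an example sentence from a book corpus", vocab)
```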
Original datasets:
https://huggingface.co/datasets/bookcorpus
https://huggingface.co/datasets/wikipedia Variant: 20220301.en
Mapped from: https://huggingface.co/datasets/gmongaras/BERT_Base_Cased_128_Dataset |
aswin1906/github-advisory-2020.csv | ---
license: apache-2.0
---
|
MartinKu/bookcorpus_ALL_OC | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 2991397965
num_examples: 100095502
download_size: 2020456946
dataset_size: 2991397965
---
# Dataset Card for "bookcorpus_ALL_OC"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
DynamicSuperb/NoiseDetection_VCTK_MUSAN-Speech | ---
dataset_info:
features:
- name: file
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: instruction
dtype: string
- name: label
dtype: string
splits:
- name: test
num_bytes: 25739063.048576213
num_examples: 200
download_size: 25385384
dataset_size: 25739063.048576213
configs:
- config_name: default
data_files:
- split: test
path: data/test-*
---
# Dataset Card for "NoiseDetectionspeech_VCTKMusan"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
alagaesia/spider_dev_subset_preds | ---
license: mit
dataset_info:
features:
- name: db_id
dtype: string
- name: query
dtype: string
- name: question
dtype: string
- name: create_w_keys
dtype: string
- name: create_wo_keys
dtype: string
- name: difficulty
dtype: string
- name: zero_shot_request
dtype: string
splits:
- name: train
num_bytes: 601945
num_examples: 209
download_size: 67668
dataset_size: 601945
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
autoevaluate/autoeval-staging-eval-project-977d15c2-b4b7-4875-aee0-490ae596d0f4-3432 | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- glue
eval_info:
task: binary_classification
model: autoevaluate/binary-classification
metrics: ['matthews_correlation']
dataset_name: glue
dataset_config: sst2
dataset_split: validation
col_mapping:
text: sentence
target: label
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Binary Text Classification
* Model: autoevaluate/binary-classification
* Dataset: glue
* Config: sst2
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model. |
open-llm-leaderboard/details_R136a1__InfinityKumon-2x7B | ---
pretty_name: Evaluation run of R136a1/InfinityKumon-2x7B
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [R136a1/InfinityKumon-2x7B](https://huggingface.co/R136a1/InfinityKumon-2x7B)\
\ on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 63 configurations, each one corresponding to one of the\
\ evaluated tasks.\n\nThe dataset has been created from 1 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
\ timestamp of the run.The \"train\" split is always pointing to the latest results.\n\
\nAn additional configuration \"results\" store all the aggregated results of the\
\ run (and is used to compute and display the aggregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_R136a1__InfinityKumon-2x7B\"\
,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\
These are the [latest results from run 2024-03-29T18:14:17.379382](https://huggingface.co/datasets/open-llm-leaderboard/details_R136a1__InfinityKumon-2x7B/blob/main/results_2024-03-29T18-14-17.379382.json)(note\
\ that there might be results for other tasks in the repos if successive evals didn't\
\ cover the same tasks. You find each in the results and the \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.6533431675964941,\n\
\ \"acc_stderr\": 0.03199131235574602,\n \"acc_norm\": 0.6542562089826888,\n\
\ \"acc_norm_stderr\": 0.032641371975695,\n \"mc1\": 0.45165238678090575,\n\
\ \"mc1_stderr\": 0.017421480300277643,\n \"mc2\": 0.6198855433930374,\n\
\ \"mc2_stderr\": 0.015418173007215551\n },\n \"harness|arc:challenge|25\"\
: {\n \"acc\": 0.6689419795221843,\n \"acc_stderr\": 0.013752062419817837,\n\
\ \"acc_norm\": 0.6962457337883959,\n \"acc_norm_stderr\": 0.013438909184778768\n\
\ },\n \"harness|hellaswag|10\": {\n \"acc\": 0.6964748058155746,\n\
\ \"acc_stderr\": 0.00458840341944967,\n \"acc_norm\": 0.8709420434176459,\n\
\ \"acc_norm_stderr\": 0.0033457889052629568\n },\n \"harness|hendrycksTest-abstract_algebra|5\"\
: {\n \"acc\": 0.34,\n \"acc_stderr\": 0.04760952285695236,\n \
\ \"acc_norm\": 0.34,\n \"acc_norm_stderr\": 0.04760952285695236\n \
\ },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.6222222222222222,\n\
\ \"acc_stderr\": 0.04188307537595852,\n \"acc_norm\": 0.6222222222222222,\n\
\ \"acc_norm_stderr\": 0.04188307537595852\n },\n \"harness|hendrycksTest-astronomy|5\"\
: {\n \"acc\": 0.6973684210526315,\n \"acc_stderr\": 0.03738520676119669,\n\
\ \"acc_norm\": 0.6973684210526315,\n \"acc_norm_stderr\": 0.03738520676119669\n\
\ },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.59,\n\
\ \"acc_stderr\": 0.049431107042371025,\n \"acc_norm\": 0.59,\n \
\ \"acc_norm_stderr\": 0.049431107042371025\n },\n \"harness|hendrycksTest-clinical_knowledge|5\"\
: {\n \"acc\": 0.7094339622641509,\n \"acc_stderr\": 0.02794321998933714,\n\
\ \"acc_norm\": 0.7094339622641509,\n \"acc_norm_stderr\": 0.02794321998933714\n\
\ },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.7430555555555556,\n\
\ \"acc_stderr\": 0.03653946969442099,\n \"acc_norm\": 0.7430555555555556,\n\
\ \"acc_norm_stderr\": 0.03653946969442099\n },\n \"harness|hendrycksTest-college_chemistry|5\"\
: {\n \"acc\": 0.45,\n \"acc_stderr\": 0.05,\n \"acc_norm\"\
: 0.45,\n \"acc_norm_stderr\": 0.05\n },\n \"harness|hendrycksTest-college_computer_science|5\"\
: {\n \"acc\": 0.55,\n \"acc_stderr\": 0.05,\n \"acc_norm\"\
: 0.55,\n \"acc_norm_stderr\": 0.05\n },\n \"harness|hendrycksTest-college_mathematics|5\"\
: {\n \"acc\": 0.33,\n \"acc_stderr\": 0.047258156262526045,\n \
\ \"acc_norm\": 0.33,\n \"acc_norm_stderr\": 0.047258156262526045\n \
\ },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.6820809248554913,\n\
\ \"acc_stderr\": 0.0355068398916558,\n \"acc_norm\": 0.6820809248554913,\n\
\ \"acc_norm_stderr\": 0.0355068398916558\n },\n \"harness|hendrycksTest-college_physics|5\"\
: {\n \"acc\": 0.43137254901960786,\n \"acc_stderr\": 0.04928099597287534,\n\
\ \"acc_norm\": 0.43137254901960786,\n \"acc_norm_stderr\": 0.04928099597287534\n\
\ },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\":\
\ 0.8,\n \"acc_stderr\": 0.04020151261036845,\n \"acc_norm\": 0.8,\n\
\ \"acc_norm_stderr\": 0.04020151261036845\n },\n \"harness|hendrycksTest-conceptual_physics|5\"\
: {\n \"acc\": 0.6,\n \"acc_stderr\": 0.03202563076101735,\n \
\ \"acc_norm\": 0.6,\n \"acc_norm_stderr\": 0.03202563076101735\n },\n\
\ \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.4824561403508772,\n\
\ \"acc_stderr\": 0.04700708033551038,\n \"acc_norm\": 0.4824561403508772,\n\
\ \"acc_norm_stderr\": 0.04700708033551038\n },\n \"harness|hendrycksTest-electrical_engineering|5\"\
: {\n \"acc\": 0.593103448275862,\n \"acc_stderr\": 0.04093793981266236,\n\
\ \"acc_norm\": 0.593103448275862,\n \"acc_norm_stderr\": 0.04093793981266236\n\
\ },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\"\
: 0.3994708994708995,\n \"acc_stderr\": 0.025225450284067884,\n \"\
acc_norm\": 0.3994708994708995,\n \"acc_norm_stderr\": 0.025225450284067884\n\
\ },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.4603174603174603,\n\
\ \"acc_stderr\": 0.04458029125470973,\n \"acc_norm\": 0.4603174603174603,\n\
\ \"acc_norm_stderr\": 0.04458029125470973\n },\n \"harness|hendrycksTest-global_facts|5\"\
: {\n \"acc\": 0.36,\n \"acc_stderr\": 0.048241815132442176,\n \
\ \"acc_norm\": 0.36,\n \"acc_norm_stderr\": 0.048241815132442176\n \
\ },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\"\
: 0.8,\n \"acc_stderr\": 0.02275520495954294,\n \"acc_norm\": 0.8,\n\
\ \"acc_norm_stderr\": 0.02275520495954294\n },\n \"harness|hendrycksTest-high_school_chemistry|5\"\
: {\n \"acc\": 0.49261083743842365,\n \"acc_stderr\": 0.03517603540361008,\n\
\ \"acc_norm\": 0.49261083743842365,\n \"acc_norm_stderr\": 0.03517603540361008\n\
\ },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \
\ \"acc\": 0.67,\n \"acc_stderr\": 0.04725815626252609,\n \"acc_norm\"\
: 0.67,\n \"acc_norm_stderr\": 0.04725815626252609\n },\n \"harness|hendrycksTest-high_school_european_history|5\"\
: {\n \"acc\": 0.806060606060606,\n \"acc_stderr\": 0.030874145136562094,\n\
\ \"acc_norm\": 0.806060606060606,\n \"acc_norm_stderr\": 0.030874145136562094\n\
\ },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\"\
: 0.803030303030303,\n \"acc_stderr\": 0.028335609732463362,\n \"\
acc_norm\": 0.803030303030303,\n \"acc_norm_stderr\": 0.028335609732463362\n\
\ },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n\
\ \"acc\": 0.9015544041450777,\n \"acc_stderr\": 0.02150024957603346,\n\
\ \"acc_norm\": 0.9015544041450777,\n \"acc_norm_stderr\": 0.02150024957603346\n\
\ },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \
\ \"acc\": 0.6743589743589744,\n \"acc_stderr\": 0.02375966576741229,\n \
\ \"acc_norm\": 0.6743589743589744,\n \"acc_norm_stderr\": 0.02375966576741229\n\
\ },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"\
acc\": 0.34814814814814815,\n \"acc_stderr\": 0.029045600290616255,\n \
\ \"acc_norm\": 0.34814814814814815,\n \"acc_norm_stderr\": 0.029045600290616255\n\
\ },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \
\ \"acc\": 0.6974789915966386,\n \"acc_stderr\": 0.02983796238829194,\n \
\ \"acc_norm\": 0.6974789915966386,\n \"acc_norm_stderr\": 0.02983796238829194\n\
\ },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\"\
: 0.3443708609271523,\n \"acc_stderr\": 0.038796870240733264,\n \"\
acc_norm\": 0.3443708609271523,\n \"acc_norm_stderr\": 0.038796870240733264\n\
\ },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\"\
: 0.8458715596330275,\n \"acc_stderr\": 0.015480826865374303,\n \"\
acc_norm\": 0.8458715596330275,\n \"acc_norm_stderr\": 0.015480826865374303\n\
\ },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\"\
: 0.5231481481481481,\n \"acc_stderr\": 0.03406315360711507,\n \"\
acc_norm\": 0.5231481481481481,\n \"acc_norm_stderr\": 0.03406315360711507\n\
\ },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\"\
: 0.8480392156862745,\n \"acc_stderr\": 0.0251956584289318,\n \"acc_norm\"\
: 0.8480392156862745,\n \"acc_norm_stderr\": 0.0251956584289318\n },\n\
\ \"harness|hendrycksTest-high_school_world_history|5\": {\n \"acc\":\
\ 0.810126582278481,\n \"acc_stderr\": 0.02553010046023349,\n \"acc_norm\"\
: 0.810126582278481,\n \"acc_norm_stderr\": 0.02553010046023349\n },\n\
\ \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.6860986547085202,\n\
\ \"acc_stderr\": 0.031146796482972465,\n \"acc_norm\": 0.6860986547085202,\n\
\ \"acc_norm_stderr\": 0.031146796482972465\n },\n \"harness|hendrycksTest-human_sexuality|5\"\
: {\n \"acc\": 0.7786259541984732,\n \"acc_stderr\": 0.036412970813137276,\n\
\ \"acc_norm\": 0.7786259541984732,\n \"acc_norm_stderr\": 0.036412970813137276\n\
\ },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\":\
\ 0.8099173553719008,\n \"acc_stderr\": 0.03581796951709282,\n \"\
acc_norm\": 0.8099173553719008,\n \"acc_norm_stderr\": 0.03581796951709282\n\
\ },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.7962962962962963,\n\
\ \"acc_stderr\": 0.03893542518824847,\n \"acc_norm\": 0.7962962962962963,\n\
\ \"acc_norm_stderr\": 0.03893542518824847\n },\n \"harness|hendrycksTest-logical_fallacies|5\"\
: {\n \"acc\": 0.7668711656441718,\n \"acc_stderr\": 0.0332201579577674,\n\
\ \"acc_norm\": 0.7668711656441718,\n \"acc_norm_stderr\": 0.0332201579577674\n\
\ },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.4732142857142857,\n\
\ \"acc_stderr\": 0.047389751192741546,\n \"acc_norm\": 0.4732142857142857,\n\
\ \"acc_norm_stderr\": 0.047389751192741546\n },\n \"harness|hendrycksTest-management|5\"\
: {\n \"acc\": 0.7864077669902912,\n \"acc_stderr\": 0.040580420156460344,\n\
\ \"acc_norm\": 0.7864077669902912,\n \"acc_norm_stderr\": 0.040580420156460344\n\
\ },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.8974358974358975,\n\
\ \"acc_stderr\": 0.019875655027867437,\n \"acc_norm\": 0.8974358974358975,\n\
\ \"acc_norm_stderr\": 0.019875655027867437\n },\n \"harness|hendrycksTest-medical_genetics|5\"\
: {\n \"acc\": 0.73,\n \"acc_stderr\": 0.0446196043338474,\n \
\ \"acc_norm\": 0.73,\n \"acc_norm_stderr\": 0.0446196043338474\n },\n\
\ \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.8326947637292464,\n\
\ \"acc_stderr\": 0.013347327202920332,\n \"acc_norm\": 0.8326947637292464,\n\
\ \"acc_norm_stderr\": 0.013347327202920332\n },\n \"harness|hendrycksTest-moral_disputes|5\"\
: {\n \"acc\": 0.7225433526011561,\n \"acc_stderr\": 0.024105712607754307,\n\
\ \"acc_norm\": 0.7225433526011561,\n \"acc_norm_stderr\": 0.024105712607754307\n\
\ },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.39217877094972065,\n\
\ \"acc_stderr\": 0.01632906107320745,\n \"acc_norm\": 0.39217877094972065,\n\
\ \"acc_norm_stderr\": 0.01632906107320745\n },\n \"harness|hendrycksTest-nutrition|5\"\
: {\n \"acc\": 0.7320261437908496,\n \"acc_stderr\": 0.025360603796242553,\n\
\ \"acc_norm\": 0.7320261437908496,\n \"acc_norm_stderr\": 0.025360603796242553\n\
\ },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.7170418006430869,\n\
\ \"acc_stderr\": 0.02558306248998481,\n \"acc_norm\": 0.7170418006430869,\n\
\ \"acc_norm_stderr\": 0.02558306248998481\n },\n \"harness|hendrycksTest-prehistory|5\"\
: {\n \"acc\": 0.7530864197530864,\n \"acc_stderr\": 0.023993501709042103,\n\
\ \"acc_norm\": 0.7530864197530864,\n \"acc_norm_stderr\": 0.023993501709042103\n\
\ },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"\
acc\": 0.4858156028368794,\n \"acc_stderr\": 0.02981549448368206,\n \
\ \"acc_norm\": 0.4858156028368794,\n \"acc_norm_stderr\": 0.02981549448368206\n\
\ },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.4680573663624511,\n\
\ \"acc_stderr\": 0.012744149704869647,\n \"acc_norm\": 0.4680573663624511,\n\
\ \"acc_norm_stderr\": 0.012744149704869647\n },\n \"harness|hendrycksTest-professional_medicine|5\"\
: {\n \"acc\": 0.6838235294117647,\n \"acc_stderr\": 0.028245687391462937,\n\
\ \"acc_norm\": 0.6838235294117647,\n \"acc_norm_stderr\": 0.028245687391462937\n\
\ },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"\
acc\": 0.6617647058823529,\n \"acc_stderr\": 0.019139943748487046,\n \
\ \"acc_norm\": 0.6617647058823529,\n \"acc_norm_stderr\": 0.019139943748487046\n\
\ },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.6727272727272727,\n\
\ \"acc_stderr\": 0.0449429086625209,\n \"acc_norm\": 0.6727272727272727,\n\
\ \"acc_norm_stderr\": 0.0449429086625209\n },\n \"harness|hendrycksTest-security_studies|5\"\
: {\n \"acc\": 0.7224489795918367,\n \"acc_stderr\": 0.02866685779027465,\n\
\ \"acc_norm\": 0.7224489795918367,\n \"acc_norm_stderr\": 0.02866685779027465\n\
\ },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.8507462686567164,\n\
\ \"acc_stderr\": 0.025196929874827072,\n \"acc_norm\": 0.8507462686567164,\n\
\ \"acc_norm_stderr\": 0.025196929874827072\n },\n \"harness|hendrycksTest-us_foreign_policy|5\"\
: {\n \"acc\": 0.83,\n \"acc_stderr\": 0.0377525168068637,\n \
\ \"acc_norm\": 0.83,\n \"acc_norm_stderr\": 0.0377525168068637\n },\n\
\ \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.5421686746987951,\n\
\ \"acc_stderr\": 0.0387862677100236,\n \"acc_norm\": 0.5421686746987951,\n\
\ \"acc_norm_stderr\": 0.0387862677100236\n },\n \"harness|hendrycksTest-world_religions|5\"\
: {\n \"acc\": 0.8362573099415205,\n \"acc_stderr\": 0.028380919596145866,\n\
\ \"acc_norm\": 0.8362573099415205,\n \"acc_norm_stderr\": 0.028380919596145866\n\
\ },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.45165238678090575,\n\
\ \"mc1_stderr\": 0.017421480300277643,\n \"mc2\": 0.6198855433930374,\n\
\ \"mc2_stderr\": 0.015418173007215551\n },\n \"harness|winogrande|5\"\
: {\n \"acc\": 0.819258089976322,\n \"acc_stderr\": 0.010814911009613994\n\
\ },\n \"harness|gsm8k|5\": {\n \"acc\": 0.6353297952994693,\n \
\ \"acc_stderr\": 0.013258428375662245\n }\n}\n```"
repo_url: https://huggingface.co/R136a1/InfinityKumon-2x7B
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_arc_challenge_25
data_files:
- split: 2024_03_29T18_14_17.379382
path:
- '**/details_harness|arc:challenge|25_2024-03-29T18-14-17.379382.parquet'
- split: latest
path:
- '**/details_harness|arc:challenge|25_2024-03-29T18-14-17.379382.parquet'
- config_name: harness_gsm8k_5
data_files:
- split: 2024_03_29T18_14_17.379382
path:
- '**/details_harness|gsm8k|5_2024-03-29T18-14-17.379382.parquet'
- split: latest
path:
- '**/details_harness|gsm8k|5_2024-03-29T18-14-17.379382.parquet'
- config_name: harness_hellaswag_10
data_files:
- split: 2024_03_29T18_14_17.379382
path:
- '**/details_harness|hellaswag|10_2024-03-29T18-14-17.379382.parquet'
- split: latest
path:
- '**/details_harness|hellaswag|10_2024-03-29T18-14-17.379382.parquet'
- config_name: harness_hendrycksTest_5
data_files:
- split: 2024_03_29T18_14_17.379382
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-03-29T18-14-17.379382.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2024-03-29T18-14-17.379382.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2024-03-29T18-14-17.379382.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2024-03-29T18-14-17.379382.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-03-29T18-14-17.379382.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2024-03-29T18-14-17.379382.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-03-29T18-14-17.379382.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-03-29T18-14-17.379382.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-03-29T18-14-17.379382.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2024-03-29T18-14-17.379382.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2024-03-29T18-14-17.379382.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2024-03-29T18-14-17.379382.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-03-29T18-14-17.379382.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2024-03-29T18-14-17.379382.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-03-29T18-14-17.379382.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-03-29T18-14-17.379382.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2024-03-29T18-14-17.379382.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2024-03-29T18-14-17.379382.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-03-29T18-14-17.379382.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-03-29T18-14-17.379382.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-03-29T18-14-17.379382.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-03-29T18-14-17.379382.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-03-29T18-14-17.379382.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-03-29T18-14-17.379382.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-03-29T18-14-17.379382.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-03-29T18-14-17.379382.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-03-29T18-14-17.379382.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-03-29T18-14-17.379382.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-03-29T18-14-17.379382.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-03-29T18-14-17.379382.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-03-29T18-14-17.379382.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-03-29T18-14-17.379382.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2024-03-29T18-14-17.379382.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-03-29T18-14-17.379382.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2024-03-29T18-14-17.379382.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-03-29T18-14-17.379382.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-03-29T18-14-17.379382.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2024-03-29T18-14-17.379382.parquet'
- '**/details_harness|hendrycksTest-management|5_2024-03-29T18-14-17.379382.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2024-03-29T18-14-17.379382.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-03-29T18-14-17.379382.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-03-29T18-14-17.379382.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-03-29T18-14-17.379382.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-03-29T18-14-17.379382.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2024-03-29T18-14-17.379382.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2024-03-29T18-14-17.379382.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2024-03-29T18-14-17.379382.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-03-29T18-14-17.379382.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2024-03-29T18-14-17.379382.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-03-29T18-14-17.379382.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-03-29T18-14-17.379382.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2024-03-29T18-14-17.379382.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2024-03-29T18-14-17.379382.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2024-03-29T18-14-17.379382.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-03-29T18-14-17.379382.parquet'
- '**/details_harness|hendrycksTest-virology|5_2024-03-29T18-14-17.379382.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2024-03-29T18-14-17.379382.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-03-29T18-14-17.379382.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2024-03-29T18-14-17.379382.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2024-03-29T18-14-17.379382.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2024-03-29T18-14-17.379382.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-03-29T18-14-17.379382.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2024-03-29T18-14-17.379382.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-03-29T18-14-17.379382.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-03-29T18-14-17.379382.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-03-29T18-14-17.379382.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2024-03-29T18-14-17.379382.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2024-03-29T18-14-17.379382.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2024-03-29T18-14-17.379382.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-03-29T18-14-17.379382.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2024-03-29T18-14-17.379382.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-03-29T18-14-17.379382.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-03-29T18-14-17.379382.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2024-03-29T18-14-17.379382.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2024-03-29T18-14-17.379382.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-03-29T18-14-17.379382.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-03-29T18-14-17.379382.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-03-29T18-14-17.379382.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-03-29T18-14-17.379382.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-03-29T18-14-17.379382.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-03-29T18-14-17.379382.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-03-29T18-14-17.379382.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-03-29T18-14-17.379382.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-03-29T18-14-17.379382.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-03-29T18-14-17.379382.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-03-29T18-14-17.379382.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-03-29T18-14-17.379382.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-03-29T18-14-17.379382.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-03-29T18-14-17.379382.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2024-03-29T18-14-17.379382.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-03-29T18-14-17.379382.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2024-03-29T18-14-17.379382.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-03-29T18-14-17.379382.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-03-29T18-14-17.379382.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2024-03-29T18-14-17.379382.parquet'
- '**/details_harness|hendrycksTest-management|5_2024-03-29T18-14-17.379382.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2024-03-29T18-14-17.379382.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-03-29T18-14-17.379382.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-03-29T18-14-17.379382.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-03-29T18-14-17.379382.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-03-29T18-14-17.379382.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2024-03-29T18-14-17.379382.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2024-03-29T18-14-17.379382.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2024-03-29T18-14-17.379382.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-03-29T18-14-17.379382.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2024-03-29T18-14-17.379382.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-03-29T18-14-17.379382.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-03-29T18-14-17.379382.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2024-03-29T18-14-17.379382.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2024-03-29T18-14-17.379382.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2024-03-29T18-14-17.379382.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-03-29T18-14-17.379382.parquet'
- '**/details_harness|hendrycksTest-virology|5_2024-03-29T18-14-17.379382.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2024-03-29T18-14-17.379382.parquet'
- config_name: harness_hendrycksTest_abstract_algebra_5
data_files:
- split: 2024_03_29T18_14_17.379382
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-03-29T18-14-17.379382.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-03-29T18-14-17.379382.parquet'
- config_name: harness_hendrycksTest_anatomy_5
data_files:
- split: 2024_03_29T18_14_17.379382
path:
- '**/details_harness|hendrycksTest-anatomy|5_2024-03-29T18-14-17.379382.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-anatomy|5_2024-03-29T18-14-17.379382.parquet'
- config_name: harness_hendrycksTest_astronomy_5
data_files:
- split: 2024_03_29T18_14_17.379382
path:
- '**/details_harness|hendrycksTest-astronomy|5_2024-03-29T18-14-17.379382.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-astronomy|5_2024-03-29T18-14-17.379382.parquet'
- config_name: harness_hendrycksTest_business_ethics_5
data_files:
- split: 2024_03_29T18_14_17.379382
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2024-03-29T18-14-17.379382.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2024-03-29T18-14-17.379382.parquet'
- config_name: harness_hendrycksTest_clinical_knowledge_5
data_files:
- split: 2024_03_29T18_14_17.379382
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-03-29T18-14-17.379382.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-03-29T18-14-17.379382.parquet'
- config_name: harness_hendrycksTest_college_biology_5
data_files:
- split: 2024_03_29T18_14_17.379382
path:
- '**/details_harness|hendrycksTest-college_biology|5_2024-03-29T18-14-17.379382.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_biology|5_2024-03-29T18-14-17.379382.parquet'
- config_name: harness_hendrycksTest_college_chemistry_5
data_files:
- split: 2024_03_29T18_14_17.379382
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-03-29T18-14-17.379382.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-03-29T18-14-17.379382.parquet'
- config_name: harness_hendrycksTest_college_computer_science_5
data_files:
- split: 2024_03_29T18_14_17.379382
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-03-29T18-14-17.379382.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-03-29T18-14-17.379382.parquet'
- config_name: harness_hendrycksTest_college_mathematics_5
data_files:
- split: 2024_03_29T18_14_17.379382
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-03-29T18-14-17.379382.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-03-29T18-14-17.379382.parquet'
- config_name: harness_hendrycksTest_college_medicine_5
data_files:
- split: 2024_03_29T18_14_17.379382
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2024-03-29T18-14-17.379382.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2024-03-29T18-14-17.379382.parquet'
- config_name: harness_hendrycksTest_college_physics_5
data_files:
- split: 2024_03_29T18_14_17.379382
path:
- '**/details_harness|hendrycksTest-college_physics|5_2024-03-29T18-14-17.379382.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_physics|5_2024-03-29T18-14-17.379382.parquet'
- config_name: harness_hendrycksTest_computer_security_5
data_files:
- split: 2024_03_29T18_14_17.379382
path:
- '**/details_harness|hendrycksTest-computer_security|5_2024-03-29T18-14-17.379382.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-computer_security|5_2024-03-29T18-14-17.379382.parquet'
- config_name: harness_hendrycksTest_conceptual_physics_5
data_files:
- split: 2024_03_29T18_14_17.379382
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-03-29T18-14-17.379382.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-03-29T18-14-17.379382.parquet'
- config_name: harness_hendrycksTest_econometrics_5
data_files:
- split: 2024_03_29T18_14_17.379382
path:
- '**/details_harness|hendrycksTest-econometrics|5_2024-03-29T18-14-17.379382.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-econometrics|5_2024-03-29T18-14-17.379382.parquet'
- config_name: harness_hendrycksTest_electrical_engineering_5
data_files:
- split: 2024_03_29T18_14_17.379382
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-03-29T18-14-17.379382.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-03-29T18-14-17.379382.parquet'
- config_name: harness_hendrycksTest_elementary_mathematics_5
data_files:
- split: 2024_03_29T18_14_17.379382
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-03-29T18-14-17.379382.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-03-29T18-14-17.379382.parquet'
- config_name: harness_hendrycksTest_formal_logic_5
data_files:
- split: 2024_03_29T18_14_17.379382
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2024-03-29T18-14-17.379382.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2024-03-29T18-14-17.379382.parquet'
- config_name: harness_hendrycksTest_global_facts_5
data_files:
- split: 2024_03_29T18_14_17.379382
path:
- '**/details_harness|hendrycksTest-global_facts|5_2024-03-29T18-14-17.379382.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-global_facts|5_2024-03-29T18-14-17.379382.parquet'
- config_name: harness_hendrycksTest_high_school_biology_5
data_files:
- split: 2024_03_29T18_14_17.379382
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-03-29T18-14-17.379382.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-03-29T18-14-17.379382.parquet'
- config_name: harness_hendrycksTest_high_school_chemistry_5
data_files:
- split: 2024_03_29T18_14_17.379382
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-03-29T18-14-17.379382.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-03-29T18-14-17.379382.parquet'
- config_name: harness_hendrycksTest_high_school_computer_science_5
data_files:
- split: 2024_03_29T18_14_17.379382
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-03-29T18-14-17.379382.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-03-29T18-14-17.379382.parquet'
- config_name: harness_hendrycksTest_high_school_european_history_5
data_files:
- split: 2024_03_29T18_14_17.379382
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-03-29T18-14-17.379382.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-03-29T18-14-17.379382.parquet'
- config_name: harness_hendrycksTest_high_school_geography_5
data_files:
- split: 2024_03_29T18_14_17.379382
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-03-29T18-14-17.379382.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-03-29T18-14-17.379382.parquet'
- config_name: harness_hendrycksTest_high_school_government_and_politics_5
data_files:
- split: 2024_03_29T18_14_17.379382
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-03-29T18-14-17.379382.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-03-29T18-14-17.379382.parquet'
- config_name: harness_hendrycksTest_high_school_macroeconomics_5
data_files:
- split: 2024_03_29T18_14_17.379382
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-03-29T18-14-17.379382.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-03-29T18-14-17.379382.parquet'
- config_name: harness_hendrycksTest_high_school_mathematics_5
data_files:
- split: 2024_03_29T18_14_17.379382
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-03-29T18-14-17.379382.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-03-29T18-14-17.379382.parquet'
- config_name: harness_hendrycksTest_high_school_microeconomics_5
data_files:
- split: 2024_03_29T18_14_17.379382
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-03-29T18-14-17.379382.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-03-29T18-14-17.379382.parquet'
- config_name: harness_hendrycksTest_high_school_physics_5
data_files:
- split: 2024_03_29T18_14_17.379382
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-03-29T18-14-17.379382.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-03-29T18-14-17.379382.parquet'
- config_name: harness_hendrycksTest_high_school_psychology_5
data_files:
- split: 2024_03_29T18_14_17.379382
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-03-29T18-14-17.379382.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-03-29T18-14-17.379382.parquet'
- config_name: harness_hendrycksTest_high_school_statistics_5
data_files:
- split: 2024_03_29T18_14_17.379382
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-03-29T18-14-17.379382.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-03-29T18-14-17.379382.parquet'
- config_name: harness_hendrycksTest_high_school_us_history_5
data_files:
- split: 2024_03_29T18_14_17.379382
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-03-29T18-14-17.379382.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-03-29T18-14-17.379382.parquet'
- config_name: harness_hendrycksTest_high_school_world_history_5
data_files:
- split: 2024_03_29T18_14_17.379382
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-03-29T18-14-17.379382.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-03-29T18-14-17.379382.parquet'
- config_name: harness_hendrycksTest_human_aging_5
data_files:
- split: 2024_03_29T18_14_17.379382
path:
- '**/details_harness|hendrycksTest-human_aging|5_2024-03-29T18-14-17.379382.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_aging|5_2024-03-29T18-14-17.379382.parquet'
- config_name: harness_hendrycksTest_human_sexuality_5
data_files:
- split: 2024_03_29T18_14_17.379382
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-03-29T18-14-17.379382.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-03-29T18-14-17.379382.parquet'
- config_name: harness_hendrycksTest_international_law_5
data_files:
- split: 2024_03_29T18_14_17.379382
path:
- '**/details_harness|hendrycksTest-international_law|5_2024-03-29T18-14-17.379382.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-international_law|5_2024-03-29T18-14-17.379382.parquet'
- config_name: harness_hendrycksTest_jurisprudence_5
data_files:
- split: 2024_03_29T18_14_17.379382
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-03-29T18-14-17.379382.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-03-29T18-14-17.379382.parquet'
- config_name: harness_hendrycksTest_logical_fallacies_5
data_files:
- split: 2024_03_29T18_14_17.379382
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-03-29T18-14-17.379382.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-03-29T18-14-17.379382.parquet'
- config_name: harness_hendrycksTest_machine_learning_5
data_files:
- split: 2024_03_29T18_14_17.379382
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2024-03-29T18-14-17.379382.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2024-03-29T18-14-17.379382.parquet'
- config_name: harness_hendrycksTest_management_5
data_files:
- split: 2024_03_29T18_14_17.379382
path:
- '**/details_harness|hendrycksTest-management|5_2024-03-29T18-14-17.379382.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-management|5_2024-03-29T18-14-17.379382.parquet'
- config_name: harness_hendrycksTest_marketing_5
data_files:
- split: 2024_03_29T18_14_17.379382
path:
- '**/details_harness|hendrycksTest-marketing|5_2024-03-29T18-14-17.379382.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-marketing|5_2024-03-29T18-14-17.379382.parquet'
- config_name: harness_hendrycksTest_medical_genetics_5
data_files:
- split: 2024_03_29T18_14_17.379382
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-03-29T18-14-17.379382.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-03-29T18-14-17.379382.parquet'
- config_name: harness_hendrycksTest_miscellaneous_5
data_files:
- split: 2024_03_29T18_14_17.379382
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-03-29T18-14-17.379382.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-03-29T18-14-17.379382.parquet'
- config_name: harness_hendrycksTest_moral_disputes_5
data_files:
- split: 2024_03_29T18_14_17.379382
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-03-29T18-14-17.379382.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-03-29T18-14-17.379382.parquet'
- config_name: harness_hendrycksTest_moral_scenarios_5
data_files:
- split: 2024_03_29T18_14_17.379382
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-03-29T18-14-17.379382.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-03-29T18-14-17.379382.parquet'
- config_name: harness_hendrycksTest_nutrition_5
data_files:
- split: 2024_03_29T18_14_17.379382
path:
- '**/details_harness|hendrycksTest-nutrition|5_2024-03-29T18-14-17.379382.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-nutrition|5_2024-03-29T18-14-17.379382.parquet'
- config_name: harness_hendrycksTest_philosophy_5
data_files:
- split: 2024_03_29T18_14_17.379382
path:
- '**/details_harness|hendrycksTest-philosophy|5_2024-03-29T18-14-17.379382.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-philosophy|5_2024-03-29T18-14-17.379382.parquet'
- config_name: harness_hendrycksTest_prehistory_5
data_files:
- split: 2024_03_29T18_14_17.379382
path:
- '**/details_harness|hendrycksTest-prehistory|5_2024-03-29T18-14-17.379382.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-prehistory|5_2024-03-29T18-14-17.379382.parquet'
- config_name: harness_hendrycksTest_professional_accounting_5
data_files:
- split: 2024_03_29T18_14_17.379382
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-03-29T18-14-17.379382.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-03-29T18-14-17.379382.parquet'
- config_name: harness_hendrycksTest_professional_law_5
data_files:
- split: 2024_03_29T18_14_17.379382
path:
- '**/details_harness|hendrycksTest-professional_law|5_2024-03-29T18-14-17.379382.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_law|5_2024-03-29T18-14-17.379382.parquet'
- config_name: harness_hendrycksTest_professional_medicine_5
data_files:
- split: 2024_03_29T18_14_17.379382
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-03-29T18-14-17.379382.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-03-29T18-14-17.379382.parquet'
- config_name: harness_hendrycksTest_professional_psychology_5
data_files:
- split: 2024_03_29T18_14_17.379382
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-03-29T18-14-17.379382.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-03-29T18-14-17.379382.parquet'
- config_name: harness_hendrycksTest_public_relations_5
data_files:
- split: 2024_03_29T18_14_17.379382
path:
- '**/details_harness|hendrycksTest-public_relations|5_2024-03-29T18-14-17.379382.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-public_relations|5_2024-03-29T18-14-17.379382.parquet'
- config_name: harness_hendrycksTest_security_studies_5
data_files:
- split: 2024_03_29T18_14_17.379382
path:
- '**/details_harness|hendrycksTest-security_studies|5_2024-03-29T18-14-17.379382.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-security_studies|5_2024-03-29T18-14-17.379382.parquet'
- config_name: harness_hendrycksTest_sociology_5
data_files:
- split: 2024_03_29T18_14_17.379382
path:
- '**/details_harness|hendrycksTest-sociology|5_2024-03-29T18-14-17.379382.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-sociology|5_2024-03-29T18-14-17.379382.parquet'
- config_name: harness_hendrycksTest_us_foreign_policy_5
data_files:
- split: 2024_03_29T18_14_17.379382
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-03-29T18-14-17.379382.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-03-29T18-14-17.379382.parquet'
- config_name: harness_hendrycksTest_virology_5
data_files:
- split: 2024_03_29T18_14_17.379382
path:
- '**/details_harness|hendrycksTest-virology|5_2024-03-29T18-14-17.379382.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-virology|5_2024-03-29T18-14-17.379382.parquet'
- config_name: harness_hendrycksTest_world_religions_5
data_files:
- split: 2024_03_29T18_14_17.379382
path:
- '**/details_harness|hendrycksTest-world_religions|5_2024-03-29T18-14-17.379382.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-world_religions|5_2024-03-29T18-14-17.379382.parquet'
- config_name: harness_truthfulqa_mc_0
data_files:
- split: 2024_03_29T18_14_17.379382
path:
- '**/details_harness|truthfulqa:mc|0_2024-03-29T18-14-17.379382.parquet'
- split: latest
path:
- '**/details_harness|truthfulqa:mc|0_2024-03-29T18-14-17.379382.parquet'
- config_name: harness_winogrande_5
data_files:
- split: 2024_03_29T18_14_17.379382
path:
- '**/details_harness|winogrande|5_2024-03-29T18-14-17.379382.parquet'
- split: latest
path:
- '**/details_harness|winogrande|5_2024-03-29T18-14-17.379382.parquet'
- config_name: results
data_files:
- split: 2024_03_29T18_14_17.379382
path:
- results_2024-03-29T18-14-17.379382.parquet
- split: latest
path:
- results_2024-03-29T18-14-17.379382.parquet
---
# Dataset Card for Evaluation run of R136a1/InfinityKumon-2x7B
<!-- Provide a quick summary of the dataset. -->
Dataset automatically created during the evaluation run of model [R136a1/InfinityKumon-2x7B](https://huggingface.co/R136a1/InfinityKumon-2x7B) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "latest" split always points to the most recent results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_R136a1__InfinityKumon-2x7B",
"harness_winogrande_5",
	split="latest")
```
## Latest results
These are the [latest results from run 2024-03-29T18:14:17.379382](https://huggingface.co/datasets/open-llm-leaderboard/details_R136a1__InfinityKumon-2x7B/blob/main/results_2024-03-29T18-14-17.379382.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"acc": 0.6533431675964941,
"acc_stderr": 0.03199131235574602,
"acc_norm": 0.6542562089826888,
"acc_norm_stderr": 0.032641371975695,
"mc1": 0.45165238678090575,
"mc1_stderr": 0.017421480300277643,
"mc2": 0.6198855433930374,
"mc2_stderr": 0.015418173007215551
},
"harness|arc:challenge|25": {
"acc": 0.6689419795221843,
"acc_stderr": 0.013752062419817837,
"acc_norm": 0.6962457337883959,
"acc_norm_stderr": 0.013438909184778768
},
"harness|hellaswag|10": {
"acc": 0.6964748058155746,
"acc_stderr": 0.00458840341944967,
"acc_norm": 0.8709420434176459,
"acc_norm_stderr": 0.0033457889052629568
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.34,
"acc_stderr": 0.04760952285695236,
"acc_norm": 0.34,
"acc_norm_stderr": 0.04760952285695236
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.6222222222222222,
"acc_stderr": 0.04188307537595852,
"acc_norm": 0.6222222222222222,
"acc_norm_stderr": 0.04188307537595852
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.6973684210526315,
"acc_stderr": 0.03738520676119669,
"acc_norm": 0.6973684210526315,
"acc_norm_stderr": 0.03738520676119669
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.59,
"acc_stderr": 0.049431107042371025,
"acc_norm": 0.59,
"acc_norm_stderr": 0.049431107042371025
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.7094339622641509,
"acc_stderr": 0.02794321998933714,
"acc_norm": 0.7094339622641509,
"acc_norm_stderr": 0.02794321998933714
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.7430555555555556,
"acc_stderr": 0.03653946969442099,
"acc_norm": 0.7430555555555556,
"acc_norm_stderr": 0.03653946969442099
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.45,
"acc_stderr": 0.05,
"acc_norm": 0.45,
"acc_norm_stderr": 0.05
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.55,
"acc_stderr": 0.05,
"acc_norm": 0.55,
"acc_norm_stderr": 0.05
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.33,
"acc_stderr": 0.047258156262526045,
"acc_norm": 0.33,
"acc_norm_stderr": 0.047258156262526045
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.6820809248554913,
"acc_stderr": 0.0355068398916558,
"acc_norm": 0.6820809248554913,
"acc_norm_stderr": 0.0355068398916558
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.43137254901960786,
"acc_stderr": 0.04928099597287534,
"acc_norm": 0.43137254901960786,
"acc_norm_stderr": 0.04928099597287534
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.8,
"acc_stderr": 0.04020151261036845,
"acc_norm": 0.8,
"acc_norm_stderr": 0.04020151261036845
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.6,
"acc_stderr": 0.03202563076101735,
"acc_norm": 0.6,
"acc_norm_stderr": 0.03202563076101735
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.4824561403508772,
"acc_stderr": 0.04700708033551038,
"acc_norm": 0.4824561403508772,
"acc_norm_stderr": 0.04700708033551038
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.593103448275862,
"acc_stderr": 0.04093793981266236,
"acc_norm": 0.593103448275862,
"acc_norm_stderr": 0.04093793981266236
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.3994708994708995,
"acc_stderr": 0.025225450284067884,
"acc_norm": 0.3994708994708995,
"acc_norm_stderr": 0.025225450284067884
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.4603174603174603,
"acc_stderr": 0.04458029125470973,
"acc_norm": 0.4603174603174603,
"acc_norm_stderr": 0.04458029125470973
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.36,
"acc_stderr": 0.048241815132442176,
"acc_norm": 0.36,
"acc_norm_stderr": 0.048241815132442176
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.8,
"acc_stderr": 0.02275520495954294,
"acc_norm": 0.8,
"acc_norm_stderr": 0.02275520495954294
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.49261083743842365,
"acc_stderr": 0.03517603540361008,
"acc_norm": 0.49261083743842365,
"acc_norm_stderr": 0.03517603540361008
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.67,
"acc_stderr": 0.04725815626252609,
"acc_norm": 0.67,
"acc_norm_stderr": 0.04725815626252609
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.806060606060606,
"acc_stderr": 0.030874145136562094,
"acc_norm": 0.806060606060606,
"acc_norm_stderr": 0.030874145136562094
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.803030303030303,
"acc_stderr": 0.028335609732463362,
"acc_norm": 0.803030303030303,
"acc_norm_stderr": 0.028335609732463362
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.9015544041450777,
"acc_stderr": 0.02150024957603346,
"acc_norm": 0.9015544041450777,
"acc_norm_stderr": 0.02150024957603346
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.6743589743589744,
"acc_stderr": 0.02375966576741229,
"acc_norm": 0.6743589743589744,
"acc_norm_stderr": 0.02375966576741229
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.34814814814814815,
"acc_stderr": 0.029045600290616255,
"acc_norm": 0.34814814814814815,
"acc_norm_stderr": 0.029045600290616255
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.6974789915966386,
"acc_stderr": 0.02983796238829194,
"acc_norm": 0.6974789915966386,
"acc_norm_stderr": 0.02983796238829194
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.3443708609271523,
"acc_stderr": 0.038796870240733264,
"acc_norm": 0.3443708609271523,
"acc_norm_stderr": 0.038796870240733264
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.8458715596330275,
"acc_stderr": 0.015480826865374303,
"acc_norm": 0.8458715596330275,
"acc_norm_stderr": 0.015480826865374303
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.5231481481481481,
"acc_stderr": 0.03406315360711507,
"acc_norm": 0.5231481481481481,
"acc_norm_stderr": 0.03406315360711507
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.8480392156862745,
"acc_stderr": 0.0251956584289318,
"acc_norm": 0.8480392156862745,
"acc_norm_stderr": 0.0251956584289318
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.810126582278481,
"acc_stderr": 0.02553010046023349,
"acc_norm": 0.810126582278481,
"acc_norm_stderr": 0.02553010046023349
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.6860986547085202,
"acc_stderr": 0.031146796482972465,
"acc_norm": 0.6860986547085202,
"acc_norm_stderr": 0.031146796482972465
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.7786259541984732,
"acc_stderr": 0.036412970813137276,
"acc_norm": 0.7786259541984732,
"acc_norm_stderr": 0.036412970813137276
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.8099173553719008,
"acc_stderr": 0.03581796951709282,
"acc_norm": 0.8099173553719008,
"acc_norm_stderr": 0.03581796951709282
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.7962962962962963,
"acc_stderr": 0.03893542518824847,
"acc_norm": 0.7962962962962963,
"acc_norm_stderr": 0.03893542518824847
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.7668711656441718,
"acc_stderr": 0.0332201579577674,
"acc_norm": 0.7668711656441718,
"acc_norm_stderr": 0.0332201579577674
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.4732142857142857,
"acc_stderr": 0.047389751192741546,
"acc_norm": 0.4732142857142857,
"acc_norm_stderr": 0.047389751192741546
},
"harness|hendrycksTest-management|5": {
"acc": 0.7864077669902912,
"acc_stderr": 0.040580420156460344,
"acc_norm": 0.7864077669902912,
"acc_norm_stderr": 0.040580420156460344
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.8974358974358975,
"acc_stderr": 0.019875655027867437,
"acc_norm": 0.8974358974358975,
"acc_norm_stderr": 0.019875655027867437
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.73,
"acc_stderr": 0.0446196043338474,
"acc_norm": 0.73,
"acc_norm_stderr": 0.0446196043338474
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.8326947637292464,
"acc_stderr": 0.013347327202920332,
"acc_norm": 0.8326947637292464,
"acc_norm_stderr": 0.013347327202920332
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.7225433526011561,
"acc_stderr": 0.024105712607754307,
"acc_norm": 0.7225433526011561,
"acc_norm_stderr": 0.024105712607754307
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.39217877094972065,
"acc_stderr": 0.01632906107320745,
"acc_norm": 0.39217877094972065,
"acc_norm_stderr": 0.01632906107320745
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.7320261437908496,
"acc_stderr": 0.025360603796242553,
"acc_norm": 0.7320261437908496,
"acc_norm_stderr": 0.025360603796242553
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.7170418006430869,
"acc_stderr": 0.02558306248998481,
"acc_norm": 0.7170418006430869,
"acc_norm_stderr": 0.02558306248998481
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.7530864197530864,
"acc_stderr": 0.023993501709042103,
"acc_norm": 0.7530864197530864,
"acc_norm_stderr": 0.023993501709042103
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.4858156028368794,
"acc_stderr": 0.02981549448368206,
"acc_norm": 0.4858156028368794,
"acc_norm_stderr": 0.02981549448368206
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.4680573663624511,
"acc_stderr": 0.012744149704869647,
"acc_norm": 0.4680573663624511,
"acc_norm_stderr": 0.012744149704869647
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.6838235294117647,
"acc_stderr": 0.028245687391462937,
"acc_norm": 0.6838235294117647,
"acc_norm_stderr": 0.028245687391462937
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.6617647058823529,
"acc_stderr": 0.019139943748487046,
"acc_norm": 0.6617647058823529,
"acc_norm_stderr": 0.019139943748487046
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.6727272727272727,
"acc_stderr": 0.0449429086625209,
"acc_norm": 0.6727272727272727,
"acc_norm_stderr": 0.0449429086625209
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.7224489795918367,
"acc_stderr": 0.02866685779027465,
"acc_norm": 0.7224489795918367,
"acc_norm_stderr": 0.02866685779027465
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.8507462686567164,
"acc_stderr": 0.025196929874827072,
"acc_norm": 0.8507462686567164,
"acc_norm_stderr": 0.025196929874827072
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.83,
"acc_stderr": 0.0377525168068637,
"acc_norm": 0.83,
"acc_norm_stderr": 0.0377525168068637
},
"harness|hendrycksTest-virology|5": {
"acc": 0.5421686746987951,
"acc_stderr": 0.0387862677100236,
"acc_norm": 0.5421686746987951,
"acc_norm_stderr": 0.0387862677100236
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.8362573099415205,
"acc_stderr": 0.028380919596145866,
"acc_norm": 0.8362573099415205,
"acc_norm_stderr": 0.028380919596145866
},
"harness|truthfulqa:mc|0": {
"mc1": 0.45165238678090575,
"mc1_stderr": 0.017421480300277643,
"mc2": 0.6198855433930374,
"mc2_stderr": 0.015418173007215551
},
"harness|winogrande|5": {
"acc": 0.819258089976322,
"acc_stderr": 0.010814911009613994
},
"harness|gsm8k|5": {
"acc": 0.6353297952994693,
"acc_stderr": 0.013258428375662245
}
}
```
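Once loaded, per-task scores like those above can be aggregated locally. As a hedged sketch (not part of the evaluation harness), averaging the `acc` values of the MMLU (`hendrycksTest`) subtasks from a results dict shaped like the JSON above — using a small excerpt here, not all subtasks:

```python
# Hedged sketch: average the "acc" of MMLU ("hendrycksTest") subtasks from a
# results dict shaped like the JSON above. This dict is an excerpt for
# illustration, not the full set of subtasks.
results = {
    "harness|hendrycksTest-abstract_algebra|5": {"acc": 0.34},
    "harness|hendrycksTest-anatomy|5": {"acc": 0.6222222222222222},
    "harness|hendrycksTest-astronomy|5": {"acc": 0.6973684210526315},
}

mmlu_accs = [v["acc"] for k, v in results.items()
             if k.startswith("harness|hendrycksTest-")]
mmlu_avg = sum(mmlu_accs) / len(mmlu_accs)
```

The same key-prefix filter works on the full results JSON, since every MMLU subtask is named `harness|hendrycksTest-<subject>|5`.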
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] |
LNL/AI_homeworkcorrecting_0907 | ---
license: afl-3.0
---
|
Multimodal-Fatima/VQAv2_sample_validation_facebook_opt_2.7b_mode_VQAv2_visclues_ns_1000 | ---
dataset_info:
features:
- name: id
dtype: int64
- name: prompt
dtype: string
- name: question
dtype: string
- name: true_label
sequence: string
- name: prediction
dtype: string
- name: scores
sequence: float64
splits:
- name: fewshot_0_bs_8
num_bytes: 25488357
num_examples: 1000
download_size: 4919324
dataset_size: 25488357
---
# Dataset Card for "VQAv2_sample_validation_facebook_opt_2.7b_mode_VQAv2_visclues_ns_1000"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
abdalrahmanshahrour/data-summarizer | ---
task_categories:
- conditional-text-generation
---
# AutoTrain Dataset for project: summarizer
## Dataset Description
This dataset has been automatically processed by AutoTrain for project summarizer.
### Languages
The BCP-47 code for the dataset's language is unk.
## Dataset Structure
### Data Instances
A sample from this dataset looks as follows:
```json
[
{
"text": "\n\u064a\u0648\u0627\u062c\u0647 \u0627\u0644\u0627\u0633\u0628\u0627\u0646\u064a \u0631\u0641\u0627\u0626\u064a\u0644 \u0646\u0627\u062f\u0627\u0644 \u064a\u0648\u0645 \u063a\u062f \u0627\u0644\u0627\u062d\u062f \u0627\u0646\u0637\u0644\u0627\u0642\u0627 \u0645\u0646 \u0627\u0644\u0633\u0627\u0639\u0629 21:00 \u0645\u0646\u0627\u0641\u0633\u0647 \u0627\u0644\u0633\u0648\u064a\u0633\u0631\u064a \u0631\u0648\u062c\u0631 \u0641\u064a\u062f\u0631\u0631 \u0641\u064a \u0646\u0647\u0627\u0626\u064a \u0628\u0637\u0648\u0644\u0629 \u0645\u064a\u0627\u0645\u064a \u0627\u0644\u0645\u0641\u062a\u0648\u062d\u0629 \u0644\u0644\u062a\u0646\u0633 .\n\u0648 \u064a\u0633\u0639\u0649 \u0641\u064a\u062f\u0631\u0631 \u0644\u062a\u062d\u0642\u064a\u0642 \u062b\u0627\u0644\u062b \u0627\u0644\u0642\u0627\u0628\u0647 \u0647\u0630\u0627 \u0627\u0644\u0645\u0648\u0633\u0645 \u0628\u0639\u062f \u062a\u062a\u0648\u064a\u062c\u0647 \u0628\u0644\u0642\u0628 \u0628\u0637\u0648\u0644\u0629 \u0627\u0633\u062a\u0631\u0627\u0644\u064a\u0627 \u0641\u064a \u062c\u0627\u0646\u0641\u064a \u0627\u0644\u0645\u0627\u0636\u064a \u0639\u0644\u0649 \u062d\u0633\u0627\u0628 \u0646\u0627\u062f\u0627\u0644 \u0648 \u062a\u062a\u0648\u064a\u062c\u0647\u00a0\u0628\u0628\u0637\u0648\u0644\u0629 \u0625\u0646\u062f\u064a\u0627\u0646 \u0648\u064a\u0644\u0632 \u0639\u0644\u0649 \u062d\u0633\u0627\u0628 \u0645\u0648\u0627\u0637\u0646\u0647 \u0641\u0627\u0641\u0631\u064a\u0646\u0643\u0627\u00a0\u00a0.\n",
"target": "\u0648 \u064a\u0633\u0639\u0649 \u0641\u064a\u062f\u0631\u0631 \u0644\u062a\u062d\u0642\u064a\u0642 \u062b\u0627\u0644\u062b \u0627\u0644\u0642\u0627\u0628\u0647 \u0647\u0630\u0627 \u0627\u0644\u0645\u0648\u0633\u0645 \u0628\u0639\u062f \u062a\u062a\u0648\u064a\u062c\u0647 \u0628\u0644\u0642\u0628 \u0628\u0637\u0648\u0644\u0629 \u0627\u0633\u062a\u0631\u0627\u0644\u064a\u0627 \u0641\u064a \u062c\u0627\u0646\u0641\u064a \u0627\u0644\u0645\u0627\u0636\u064a \u0639\u0644\u0649 \u062d\u0633\u0627\u0628 \u0646\u0627\u062f\u0627\u0644 \u0648 \u062a\u062a\u0648\u064a\u062c\u0647\u00a0\u0628\u0628\u0637\u0648\u0644\u0629 \u0625\u0646\u062f\u064a\u0627\u0646 \u0648\u064a\u0644\u0632 \u0639\u0644\u0649 \u062d\u0633\u0627\u0628 \u0645\u0648\u0627\u0637\u0646\u0647 \u0641\u0627\u0641\u0631\u064a\u0646\u0643\u0627\u00a0\u00a0. \n\u064a\u0648\u0627\u062c\u0647 \u0627\u0644\u0627\u0633\u0628\u0627\u0646\u064a \u0631\u0641\u0627\u0626\u064a\u0644 \u0646\u0627\u062f\u0627\u0644 \u064a\u0648\u0645 \u063a\u062f \u0627\u0644\u0627\u062d\u062f \u0627\u0646\u0637\u0644\u0627\u0642\u0627 \u0645\u0646 \u0627\u0644\u0633\u0627\u0639\u0629 21:00 \u0645\u0646\u0627\u0641\u0633\u0647 \u0627\u0644\u0633\u0648\u064a\u0633\u0631\u064a \u0631\u0648\u062c\u0631 \u0641\u064a\u062f\u0631\u0631 \u0641\u064a \u0646\u0647\u0627\u0626\u064a \u0628\u0637\u0648\u0644\u0629 \u0645\u064a\u0627\u0645\u064a \u0627\u0644\u0645\u0641\u062a\u0648\u062d\u0629 \u0644\u0644\u062a\u0646\u0633 ."
},
{
"text": "\n\u0642\u0627\u0644\u062a \u0627\u0644\u0634\u0631\u0637\u0629 \u0627\u0644\u0623\u0645\u064a\u0631\u0643\u064a\u0629 \u0625\u0646 \u0631\u062c\u0644\u0627 \u0645\u0646 \u0648\u0644\u0627\u064a\u0629 \u0628\u0646\u0633\u0644\u0641\u0627\u0646\u064a\u0627 \u0627\u062a\u0635\u0644 \u0645\u0631\u0627\u0631\u0627 \u0628\u062e\u062f\u0645\u0629 \u0627\u0644\u0637\u0648\u0627\u0631\u0626 \u0628\u064a\u0646\u0645\u0627 \u0643\u0627\u0646 \u062a\u062d\u062a \u062a\u0623\u062b\u064a\u0631 \u0627\u0644\u0643\u062d\u0648\u0644 \u0648\u0628\u062d\u0627\u062c\u0629 \u0625\u0644\u0649 \u0634\u062e\u0635 \u064a\u062a\u062d\u062f\u062b \u0645\u0639\u0647.\n\u0648\u0630\u0643\u0631\u062a \u0627\u0644\u0634\u0631\u0637\u0629 \u0625\u0646 \u0644\u0627\u0631\u0649 \u0643\u064a\u0632\u0631 \u0627\u0644\u0628\u0627\u0644\u063a 51 \u0639\u0627\u0645\u0627 \u0627\u062a\u0635\u0644 \u0628\u0627\u0644\u0637\u0648\u0627\u0631\u0626 \u0644\u0623\u0648\u0644 \u0645\u0631\u0629 \u062d\u0648\u0627\u0644\u064a \u0627\u0644\u0639\u0627\u0634\u0631\u0629 \u0648\u0627\u0644\u0646\u0635\u0641 \u0645\u0646 \u0645\u0633\u0627\u0621 \u0627\u0644\u062c\u0645\u0639\u0629\u060c \u0637\u0627\u0644\u0628\u0627 \u0627\u0644\u062a\u062d\u062f\u062b \u0645\u0639 \u0636\u0627\u0628\u0637 \u0634\u0631\u0637\u0629\u060c \u0645\u0646 \u062f\u0648\u0646 \u0627\u0644\u062d\u0627\u062c\u0629 \u0625\u0644\u0649 \u0627\u0633\u062a\u062f\u0639\u0627\u0621 \u0627\u0644\u0637\u0648\u0627\u0631\u0626. 
\u0648\u0648\u0641\u0642\u0627 \u0644\u0640\"\u0623\u0633\u0648\u0634\u064a\u062a\u062f\u0628\u0631\u0633\" \u0641\u0642\u062f \u0639\u0627\u0648\u062f \u0643\u064a\u0632\u0631 \u0627\u0644\u0627\u062a\u0635\u0627\u0644 5 \u0645\u0631\u0627\u062a \u0623\u062e\u0631\u0649\u060c \u0627\u0644\u0623\u0645\u0631 \u0627\u0644\u0630\u064a \u0627\u0633\u062a\u062f\u0639\u0649 \u0642\u0648\u0629 \u0645\u0646 \u0627\u0644\u0634\u0631\u0637\u0629 \u0625\u0644\u0649 \u0645\u0646\u0632\u0644\u0647 \u0641\u064a \u0628\u0644\u062f\u0629 \u0648\u0627\u064a\u062a\u0647\u0648\u0644 \u0627\u0644\u0634\u0645\u0627\u0644\u064a\u0629 \u0628\u0639\u062f \u0645\u0646\u062a\u0635\u0641 \u0627\u0644\u0644\u064a\u0644. \u0648\u0642\u0627\u0644 \u0627\u0644\u0631\u062c\u0644 \u0644\u0644\u0636\u0628\u0627\u0637 \u0625\u0646\u0647 \u0634\u0631\u0628 \u0627\u0644\u0639\u062f\u064a\u062f \u0645\u0646 \u0627\u0644\u062c\u0639\u0629 \u0644\u0623\u0646\u0647 \u0643\u0627\u0646 \u0645\u0646\u0632\u0639\u062c\u0627 \u0645\u0646 \u0645\u0634\u0627\u0643\u0644\u0647 \u0627\u0644\u0639\u0627\u0626\u0644\u064a\u0629\u060c \u0648\u062a\u0639\u0647\u062f \u0643\u064a\u0632\u0631 \u0628\u0639\u062f\u0645 \u0627\u0644\u0627\u062a\u0635\u0627\u0644 \u0645\u062c\u062f\u062f\u0627 \u0628\u0627\u0644\u0634\u0631\u0637\u0629\u060c \u0625\u0644\u0627 \u0625\u0630\u0627 \u0643\u0627\u0646 \u0647\u0646\u0627\u0643 \u062d\u0627\u0644\u0629 \u0637\u0648\u0627\u0631\u0626 \u062d\u0642\u064a\u0642\u0629 \u062a\u0633\u062a\u062f\u0639\u064a\u0647\u0645. \u0644\u0643\u0646 \u0627\u0644\u0631\u062c\u0644 \u0643\u0631\u0631 \u0627\u0644\u0627\u062a\u0635\u0627\u0644 \u0628\u0627\u0644\u0637\u0648\u0627\u0631\u0626 \u0628\u0639\u062f \u062f\u0642\u064a\u0642\u0629 \u0648\u0627\u062d\u062f\u0629 \u0641\u0642\u0637 \u0645\u0646 \u0645\u063a\u0627\u062f\u0631\u0629 \u0627\u0644\u0636\u0628\u0627\u0637. 
\u0648\u0642\u062f \u0623\u0644\u0642\u064a \u0627\u0644\u0642\u0628\u0636 \u0639\u0644\u0649 \u0643\u064a\u0632\u0631\u060c \u0644\u064a\u0648\u0627\u062c\u0647 \u0627\u062a\u0647\u0627\u0645\u0627\u062a \u0628\u0627\u0633\u062a\u062f\u0639\u0627\u0621 \u0627\u0644\u0637\u0648\u0627\u0631\u0626 \u0639\u0645\u062f\u0627 \u0644\u0623\u0633\u0628\u0627\u0628 \u063a\u064a\u0631 \u0637\u0627\u0631\u0626\u0629.\n",
"target": "\u0648\u0648\u0641\u0642\u0627 \u0644\u0640\"\u0623\u0633\u0648\u0634\u064a\u062a\u062f\u0628\u0631\u0633\" \u0641\u0642\u062f \u0639\u0627\u0648\u062f \u0643\u064a\u0632\u0631 \u0627\u0644\u0627\u062a\u0635\u0627\u0644 5 \u0645\u0631\u0627\u062a \u0623\u062e\u0631\u0649\u060c \u0627\u0644\u0623\u0645\u0631 \u0627\u0644\u0630\u064a \u0627\u0633\u062a\u062f\u0639\u0649 \u0642\u0648\u0629 \u0645\u0646 \u0627\u0644\u0634\u0631\u0637\u0629 \u0625\u0644\u0649 \u0645\u0646\u0632\u0644\u0647 \u0641\u064a \u0628\u0644\u062f\u0629 \u0648\u0627\u064a\u062a\u0647\u0648\u0644 \u0627\u0644\u0634\u0645\u0627\u0644\u064a\u0629 \u0628\u0639\u062f \u0645\u0646\u062a\u0635\u0641 \u0627\u0644\u0644\u064a\u0644. \u0648\u0630\u0643\u0631\u062a \u0627\u0644\u0634\u0631\u0637\u0629 \u0625\u0646 \u0644\u0627\u0631\u0649 \u0643\u064a\u0632\u0631 \u0627\u0644\u0628\u0627\u0644\u063a 51 \u0639\u0627\u0645\u0627 \u0627\u062a\u0635\u0644 \u0628\u0627\u0644\u0637\u0648\u0627\u0631\u0626 \u0644\u0623\u0648\u0644 \u0645\u0631\u0629 \u062d\u0648\u0627\u0644\u064a \u0627\u0644\u0639\u0627\u0634\u0631\u0629 \u0648\u0627\u0644\u0646\u0635\u0641 \u0645\u0646 \u0645\u0633\u0627\u0621 \u0627\u0644\u062c\u0645\u0639\u0629\u060c \u0637\u0627\u0644\u0628\u0627 \u0627\u0644\u062a\u062d\u062f\u062b \u0645\u0639 \u0636\u0627\u0628\u0637 \u0634\u0631\u0637\u0629\u060c \u0645\u0646 \u062f\u0648\u0646 \u0627\u0644\u062d\u0627\u062c\u0629 \u0625\u0644\u0649 \u0627\u0633\u062a\u062f\u0639\u0627\u0621 \u0627\u0644\u0637\u0648\u0627\u0631\u0626. 
\u0648\u0642\u0627\u0644 \u0627\u0644\u0631\u062c\u0644 \u0644\u0644\u0636\u0628\u0627\u0637 \u0625\u0646\u0647 \u0634\u0631\u0628 \u0627\u0644\u0639\u062f\u064a\u062f \u0645\u0646 \u0627\u0644\u062c\u0639\u0629 \u0644\u0623\u0646\u0647 \u0643\u0627\u0646 \u0645\u0646\u0632\u0639\u062c\u0627 \u0645\u0646 \u0645\u0634\u0627\u0643\u0644\u0647 \u0627\u0644\u0639\u0627\u0626\u0644\u064a\u0629\u060c \u0648\u062a\u0639\u0647\u062f \u0643\u064a\u0632\u0631 \u0628\u0639\u062f\u0645 \u0627\u0644\u0627\u062a\u0635\u0627\u0644 \u0645\u062c\u062f\u062f\u0627 \u0628\u0627\u0644\u0634\u0631\u0637\u0629\u060c \u0625\u0644\u0627 \u0625\u0630\u0627 \u0643\u0627\u0646 \u0647\u0646\u0627\u0643 \u062d\u0627\u0644\u0629 \u0637\u0648\u0627\u0631\u0626 \u062d\u0642\u064a\u0642\u0629 \u062a\u0633\u062a\u062f\u0639\u064a\u0647\u0645."
}
]
```
### Dataset Fields
The dataset has the following fields (also called "features"):
```json
{
"text": "Value(dtype='string', id=None)",
"target": "Value(dtype='string', id=None)"
}
```
### Dataset Splits
This dataset is split into train and validation splits. The split sizes are as follows:
| Split name | Num samples |
| ------------ | ------------------- |
| train | 2400 |
| valid | 600 |
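As a quick sanity check (not stated in the original card), the split sizes above correspond to an 80/20 train/validation partition:

```python
# Sketch: compute the train/valid proportions implied by the split table above.
split_sizes = {"train": 2400, "valid": 600}
total = sum(split_sizes.values())
ratios = {name: n / total for name, n in split_sizes.items()}
# ratios -> {"train": 0.8, "valid": 0.2}
```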
|
open-llm-leaderboard/details_Inv__Konstanta-Gamma-V2-9B | ---
pretty_name: Evaluation run of Inv/Konstanta-Gamma-V2-9B
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [Inv/Konstanta-Gamma-V2-9B](https://huggingface.co/Inv/Konstanta-Gamma-V2-9B)\
\ on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 63 configuration, each one coresponding to one of the\
\ evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
\ timestamp of the run.The \"train\" split is always pointing to the latest results.\n\
\nAn additional configuration \"results\" store all the aggregated results of the\
\ run (and is used to compute and display the aggregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_Inv__Konstanta-Gamma-V2-9B\"\
,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\
These are the [latest results from run 2024-03-24T15:21:58.736386](https://huggingface.co/datasets/open-llm-leaderboard/details_Inv__Konstanta-Gamma-V2-9B/blob/main/results_2024-03-24T15-21-58.736386.json)(note\
\ that their might be results for other tasks in the repos if successive evals didn't\
\ cover the same tasks. You find each in the results and the \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.6465388407458585,\n\
\ \"acc_stderr\": 0.03224106097787349,\n \"acc_norm\": 0.6469872783275803,\n\
\ \"acc_norm_stderr\": 0.03290441654803306,\n \"mc1\": 0.5397796817625459,\n\
\ \"mc1_stderr\": 0.01744801722396087,\n \"mc2\": 0.6885382714652906,\n\
\ \"mc2_stderr\": 0.015209072620089047\n },\n \"harness|arc:challenge|25\"\
: {\n \"acc\": 0.6749146757679181,\n \"acc_stderr\": 0.013688147309729122,\n\
\ \"acc_norm\": 0.6988054607508533,\n \"acc_norm_stderr\": 0.01340674176784764\n\
\ },\n \"harness|hellaswag|10\": {\n \"acc\": 0.6932881896036646,\n\
\ \"acc_stderr\": 0.004601862807240203,\n \"acc_norm\": 0.8683529177454691,\n\
\ \"acc_norm_stderr\": 0.0033741568675916696\n },\n \"harness|hendrycksTest-abstract_algebra|5\"\
: {\n \"acc\": 0.35,\n \"acc_stderr\": 0.047937248544110196,\n \
\ \"acc_norm\": 0.35,\n \"acc_norm_stderr\": 0.047937248544110196\n \
\ },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.6074074074074074,\n\
\ \"acc_stderr\": 0.04218506215368881,\n \"acc_norm\": 0.6074074074074074,\n\
\ \"acc_norm_stderr\": 0.04218506215368881\n },\n \"harness|hendrycksTest-astronomy|5\"\
: {\n \"acc\": 0.7105263157894737,\n \"acc_stderr\": 0.03690677986137283,\n\
\ \"acc_norm\": 0.7105263157894737,\n \"acc_norm_stderr\": 0.03690677986137283\n\
\ },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.63,\n\
\ \"acc_stderr\": 0.04852365870939099,\n \"acc_norm\": 0.63,\n \
\ \"acc_norm_stderr\": 0.04852365870939099\n },\n \"harness|hendrycksTest-clinical_knowledge|5\"\
: {\n \"acc\": 0.7283018867924528,\n \"acc_stderr\": 0.027377706624670713,\n\
\ \"acc_norm\": 0.7283018867924528,\n \"acc_norm_stderr\": 0.027377706624670713\n\
\ },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.7638888888888888,\n\
\ \"acc_stderr\": 0.03551446610810826,\n \"acc_norm\": 0.7638888888888888,\n\
\ \"acc_norm_stderr\": 0.03551446610810826\n },\n \"harness|hendrycksTest-college_chemistry|5\"\
: {\n \"acc\": 0.49,\n \"acc_stderr\": 0.05024183937956912,\n \
\ \"acc_norm\": 0.49,\n \"acc_norm_stderr\": 0.05024183937956912\n \
\ },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\"\
: 0.53,\n \"acc_stderr\": 0.05016135580465919,\n \"acc_norm\": 0.53,\n\
\ \"acc_norm_stderr\": 0.05016135580465919\n },\n \"harness|hendrycksTest-college_mathematics|5\"\
: {\n \"acc\": 0.28,\n \"acc_stderr\": 0.04512608598542127,\n \
\ \"acc_norm\": 0.28,\n \"acc_norm_stderr\": 0.04512608598542127\n \
\ },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.653179190751445,\n\
\ \"acc_stderr\": 0.036291466701596636,\n \"acc_norm\": 0.653179190751445,\n\
\ \"acc_norm_stderr\": 0.036291466701596636\n },\n \"harness|hendrycksTest-college_physics|5\"\
: {\n \"acc\": 0.4411764705882353,\n \"acc_stderr\": 0.049406356306056595,\n\
\ \"acc_norm\": 0.4411764705882353,\n \"acc_norm_stderr\": 0.049406356306056595\n\
\ },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\":\
\ 0.76,\n \"acc_stderr\": 0.04292346959909283,\n \"acc_norm\": 0.76,\n\
\ \"acc_norm_stderr\": 0.04292346959909283\n },\n \"harness|hendrycksTest-conceptual_physics|5\"\
: {\n \"acc\": 0.5957446808510638,\n \"acc_stderr\": 0.032081157507886836,\n\
\ \"acc_norm\": 0.5957446808510638,\n \"acc_norm_stderr\": 0.032081157507886836\n\
\ },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.49122807017543857,\n\
\ \"acc_stderr\": 0.04702880432049615,\n \"acc_norm\": 0.49122807017543857,\n\
\ \"acc_norm_stderr\": 0.04702880432049615\n },\n \"harness|hendrycksTest-electrical_engineering|5\"\
: {\n \"acc\": 0.5586206896551724,\n \"acc_stderr\": 0.04137931034482757,\n\
\ \"acc_norm\": 0.5586206896551724,\n \"acc_norm_stderr\": 0.04137931034482757\n\
\ },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\"\
: 0.4312169312169312,\n \"acc_stderr\": 0.025506481698138208,\n \"\
acc_norm\": 0.4312169312169312,\n \"acc_norm_stderr\": 0.025506481698138208\n\
\ },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.47619047619047616,\n\
\ \"acc_stderr\": 0.04467062628403273,\n \"acc_norm\": 0.47619047619047616,\n\
\ \"acc_norm_stderr\": 0.04467062628403273\n },\n \"harness|hendrycksTest-global_facts|5\"\
: {\n \"acc\": 0.32,\n \"acc_stderr\": 0.046882617226215034,\n \
\ \"acc_norm\": 0.32,\n \"acc_norm_stderr\": 0.046882617226215034\n \
\ },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\"\
: 0.7967741935483871,\n \"acc_stderr\": 0.022891687984554963,\n \"\
acc_norm\": 0.7967741935483871,\n \"acc_norm_stderr\": 0.022891687984554963\n\
\ },\n \"harness|hendrycksTest-high_school_chemistry|5\": {\n \"acc\"\
: 0.4729064039408867,\n \"acc_stderr\": 0.03512819077876106,\n \"\
acc_norm\": 0.4729064039408867,\n \"acc_norm_stderr\": 0.03512819077876106\n\
\ },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \
\ \"acc\": 0.66,\n \"acc_stderr\": 0.04760952285695237,\n \"acc_norm\"\
: 0.66,\n \"acc_norm_stderr\": 0.04760952285695237\n },\n \"harness|hendrycksTest-high_school_european_history|5\"\
: {\n \"acc\": 0.7818181818181819,\n \"acc_stderr\": 0.03225078108306289,\n\
\ \"acc_norm\": 0.7818181818181819,\n \"acc_norm_stderr\": 0.03225078108306289\n\
\ },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\"\
: 0.8131313131313131,\n \"acc_stderr\": 0.027772533334218967,\n \"\
acc_norm\": 0.8131313131313131,\n \"acc_norm_stderr\": 0.027772533334218967\n\
\ },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n\
\ \"acc\": 0.9067357512953368,\n \"acc_stderr\": 0.02098685459328973,\n\
\ \"acc_norm\": 0.9067357512953368,\n \"acc_norm_stderr\": 0.02098685459328973\n\
\ },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \
\ \"acc\": 0.6615384615384615,\n \"acc_stderr\": 0.023991500500313036,\n\
\ \"acc_norm\": 0.6615384615384615,\n \"acc_norm_stderr\": 0.023991500500313036\n\
\ },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"\
acc\": 0.2962962962962963,\n \"acc_stderr\": 0.02784081149587194,\n \
\ \"acc_norm\": 0.2962962962962963,\n \"acc_norm_stderr\": 0.02784081149587194\n\
\ },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \
\ \"acc\": 0.6638655462184874,\n \"acc_stderr\": 0.030684737115135363,\n\
\ \"acc_norm\": 0.6638655462184874,\n \"acc_norm_stderr\": 0.030684737115135363\n\
\ },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\"\
: 0.3841059602649007,\n \"acc_stderr\": 0.03971301814719197,\n \"\
acc_norm\": 0.3841059602649007,\n \"acc_norm_stderr\": 0.03971301814719197\n\
\ },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\"\
: 0.8366972477064221,\n \"acc_stderr\": 0.015848255806501562,\n \"\
acc_norm\": 0.8366972477064221,\n \"acc_norm_stderr\": 0.015848255806501562\n\
\ },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\"\
: 0.5092592592592593,\n \"acc_stderr\": 0.034093869469927006,\n \"\
acc_norm\": 0.5092592592592593,\n \"acc_norm_stderr\": 0.034093869469927006\n\
\ },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\"\
: 0.8382352941176471,\n \"acc_stderr\": 0.02584501798692692,\n \"\
acc_norm\": 0.8382352941176471,\n \"acc_norm_stderr\": 0.02584501798692692\n\
\ },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"\
acc\": 0.7890295358649789,\n \"acc_stderr\": 0.026558372502661916,\n \
\ \"acc_norm\": 0.7890295358649789,\n \"acc_norm_stderr\": 0.026558372502661916\n\
\ },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.6591928251121076,\n\
\ \"acc_stderr\": 0.0318114974705536,\n \"acc_norm\": 0.6591928251121076,\n\
\ \"acc_norm_stderr\": 0.0318114974705536\n },\n \"harness|hendrycksTest-human_sexuality|5\"\
: {\n \"acc\": 0.7709923664122137,\n \"acc_stderr\": 0.036853466317118506,\n\
\ \"acc_norm\": 0.7709923664122137,\n \"acc_norm_stderr\": 0.036853466317118506\n\
\ },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\":\
\ 0.7768595041322314,\n \"acc_stderr\": 0.03800754475228732,\n \"\
acc_norm\": 0.7768595041322314,\n \"acc_norm_stderr\": 0.03800754475228732\n\
\ },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.7962962962962963,\n\
\ \"acc_stderr\": 0.03893542518824847,\n \"acc_norm\": 0.7962962962962963,\n\
\ \"acc_norm_stderr\": 0.03893542518824847\n },\n \"harness|hendrycksTest-logical_fallacies|5\"\
: {\n \"acc\": 0.7300613496932515,\n \"acc_stderr\": 0.03487825168497892,\n\
\ \"acc_norm\": 0.7300613496932515,\n \"acc_norm_stderr\": 0.03487825168497892\n\
\ },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.4375,\n\
\ \"acc_stderr\": 0.04708567521880525,\n \"acc_norm\": 0.4375,\n \
\ \"acc_norm_stderr\": 0.04708567521880525\n },\n \"harness|hendrycksTest-management|5\"\
: {\n \"acc\": 0.7669902912621359,\n \"acc_stderr\": 0.04185832598928315,\n\
\ \"acc_norm\": 0.7669902912621359,\n \"acc_norm_stderr\": 0.04185832598928315\n\
\ },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.8846153846153846,\n\
\ \"acc_stderr\": 0.02093019318517933,\n \"acc_norm\": 0.8846153846153846,\n\
\ \"acc_norm_stderr\": 0.02093019318517933\n },\n \"harness|hendrycksTest-medical_genetics|5\"\
: {\n \"acc\": 0.7,\n \"acc_stderr\": 0.046056618647183814,\n \
\ \"acc_norm\": 0.7,\n \"acc_norm_stderr\": 0.046056618647183814\n \
\ },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.8148148148148148,\n\
\ \"acc_stderr\": 0.013890862162876164,\n \"acc_norm\": 0.8148148148148148,\n\
\ \"acc_norm_stderr\": 0.013890862162876164\n },\n \"harness|hendrycksTest-moral_disputes|5\"\
: {\n \"acc\": 0.7225433526011561,\n \"acc_stderr\": 0.024105712607754307,\n\
\ \"acc_norm\": 0.7225433526011561,\n \"acc_norm_stderr\": 0.024105712607754307\n\
\ },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.4547486033519553,\n\
\ \"acc_stderr\": 0.016653875777524002,\n \"acc_norm\": 0.4547486033519553,\n\
\ \"acc_norm_stderr\": 0.016653875777524002\n },\n \"harness|hendrycksTest-nutrition|5\"\
: {\n \"acc\": 0.7222222222222222,\n \"acc_stderr\": 0.0256468630971379,\n\
\ \"acc_norm\": 0.7222222222222222,\n \"acc_norm_stderr\": 0.0256468630971379\n\
\ },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.7106109324758842,\n\
\ \"acc_stderr\": 0.025755865922632945,\n \"acc_norm\": 0.7106109324758842,\n\
\ \"acc_norm_stderr\": 0.025755865922632945\n },\n \"harness|hendrycksTest-prehistory|5\"\
: {\n \"acc\": 0.7314814814814815,\n \"acc_stderr\": 0.02465968518596728,\n\
\ \"acc_norm\": 0.7314814814814815,\n \"acc_norm_stderr\": 0.02465968518596728\n\
\ },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"\
acc\": 0.4716312056737589,\n \"acc_stderr\": 0.029779450957303062,\n \
\ \"acc_norm\": 0.4716312056737589,\n \"acc_norm_stderr\": 0.029779450957303062\n\
\ },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.46088657105606257,\n\
\ \"acc_stderr\": 0.012731102790504514,\n \"acc_norm\": 0.46088657105606257,\n\
\ \"acc_norm_stderr\": 0.012731102790504514\n },\n \"harness|hendrycksTest-professional_medicine|5\"\
: {\n \"acc\": 0.6691176470588235,\n \"acc_stderr\": 0.02858270975389845,\n\
\ \"acc_norm\": 0.6691176470588235,\n \"acc_norm_stderr\": 0.02858270975389845\n\
\ },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"\
acc\": 0.6715686274509803,\n \"acc_stderr\": 0.018999707383162673,\n \
\ \"acc_norm\": 0.6715686274509803,\n \"acc_norm_stderr\": 0.018999707383162673\n\
\ },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.6636363636363637,\n\
\ \"acc_stderr\": 0.04525393596302506,\n \"acc_norm\": 0.6636363636363637,\n\
\ \"acc_norm_stderr\": 0.04525393596302506\n },\n \"harness|hendrycksTest-security_studies|5\"\
: {\n \"acc\": 0.7183673469387755,\n \"acc_stderr\": 0.02879518557429129,\n\
\ \"acc_norm\": 0.7183673469387755,\n \"acc_norm_stderr\": 0.02879518557429129\n\
\ },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.835820895522388,\n\
\ \"acc_stderr\": 0.026193923544454115,\n \"acc_norm\": 0.835820895522388,\n\
\ \"acc_norm_stderr\": 0.026193923544454115\n },\n \"harness|hendrycksTest-us_foreign_policy|5\"\
: {\n \"acc\": 0.84,\n \"acc_stderr\": 0.03684529491774708,\n \
\ \"acc_norm\": 0.84,\n \"acc_norm_stderr\": 0.03684529491774708\n \
\ },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.5602409638554217,\n\
\ \"acc_stderr\": 0.03864139923699121,\n \"acc_norm\": 0.5602409638554217,\n\
\ \"acc_norm_stderr\": 0.03864139923699121\n },\n \"harness|hendrycksTest-world_religions|5\"\
: {\n \"acc\": 0.8070175438596491,\n \"acc_stderr\": 0.030267457554898458,\n\
\ \"acc_norm\": 0.8070175438596491,\n \"acc_norm_stderr\": 0.030267457554898458\n\
\ },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.5397796817625459,\n\
\ \"mc1_stderr\": 0.01744801722396087,\n \"mc2\": 0.6885382714652906,\n\
\ \"mc2_stderr\": 0.015209072620089047\n },\n \"harness|winogrande|5\"\
: {\n \"acc\": 0.8287292817679558,\n \"acc_stderr\": 0.010588417294962524\n\
\ },\n \"harness|gsm8k|5\": {\n \"acc\": 0.6368460955269143,\n \
\ \"acc_stderr\": 0.013246614539839868\n }\n}\n```"
repo_url: https://huggingface.co/Inv/Konstanta-Gamma-V2-9B
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_arc_challenge_25
data_files:
- split: 2024_03_24T15_21_58.736386
path:
- '**/details_harness|arc:challenge|25_2024-03-24T15-21-58.736386.parquet'
- split: latest
path:
- '**/details_harness|arc:challenge|25_2024-03-24T15-21-58.736386.parquet'
- config_name: harness_gsm8k_5
data_files:
- split: 2024_03_24T15_21_58.736386
path:
- '**/details_harness|gsm8k|5_2024-03-24T15-21-58.736386.parquet'
- split: latest
path:
- '**/details_harness|gsm8k|5_2024-03-24T15-21-58.736386.parquet'
- config_name: harness_hellaswag_10
data_files:
- split: 2024_03_24T15_21_58.736386
path:
- '**/details_harness|hellaswag|10_2024-03-24T15-21-58.736386.parquet'
- split: latest
path:
- '**/details_harness|hellaswag|10_2024-03-24T15-21-58.736386.parquet'
- config_name: harness_hendrycksTest_5
data_files:
- split: 2024_03_24T15_21_58.736386
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-03-24T15-21-58.736386.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2024-03-24T15-21-58.736386.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2024-03-24T15-21-58.736386.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2024-03-24T15-21-58.736386.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-03-24T15-21-58.736386.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2024-03-24T15-21-58.736386.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-03-24T15-21-58.736386.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-03-24T15-21-58.736386.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-03-24T15-21-58.736386.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2024-03-24T15-21-58.736386.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2024-03-24T15-21-58.736386.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2024-03-24T15-21-58.736386.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-03-24T15-21-58.736386.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2024-03-24T15-21-58.736386.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-03-24T15-21-58.736386.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-03-24T15-21-58.736386.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2024-03-24T15-21-58.736386.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2024-03-24T15-21-58.736386.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-03-24T15-21-58.736386.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-03-24T15-21-58.736386.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-03-24T15-21-58.736386.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-03-24T15-21-58.736386.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-03-24T15-21-58.736386.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-03-24T15-21-58.736386.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-03-24T15-21-58.736386.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-03-24T15-21-58.736386.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-03-24T15-21-58.736386.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-03-24T15-21-58.736386.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-03-24T15-21-58.736386.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-03-24T15-21-58.736386.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-03-24T15-21-58.736386.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-03-24T15-21-58.736386.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2024-03-24T15-21-58.736386.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-03-24T15-21-58.736386.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2024-03-24T15-21-58.736386.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-03-24T15-21-58.736386.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-03-24T15-21-58.736386.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2024-03-24T15-21-58.736386.parquet'
- '**/details_harness|hendrycksTest-management|5_2024-03-24T15-21-58.736386.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2024-03-24T15-21-58.736386.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-03-24T15-21-58.736386.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-03-24T15-21-58.736386.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-03-24T15-21-58.736386.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-03-24T15-21-58.736386.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2024-03-24T15-21-58.736386.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2024-03-24T15-21-58.736386.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2024-03-24T15-21-58.736386.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-03-24T15-21-58.736386.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2024-03-24T15-21-58.736386.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-03-24T15-21-58.736386.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-03-24T15-21-58.736386.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2024-03-24T15-21-58.736386.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2024-03-24T15-21-58.736386.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2024-03-24T15-21-58.736386.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-03-24T15-21-58.736386.parquet'
- '**/details_harness|hendrycksTest-virology|5_2024-03-24T15-21-58.736386.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2024-03-24T15-21-58.736386.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-03-24T15-21-58.736386.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2024-03-24T15-21-58.736386.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2024-03-24T15-21-58.736386.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2024-03-24T15-21-58.736386.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-03-24T15-21-58.736386.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2024-03-24T15-21-58.736386.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-03-24T15-21-58.736386.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-03-24T15-21-58.736386.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-03-24T15-21-58.736386.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2024-03-24T15-21-58.736386.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2024-03-24T15-21-58.736386.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2024-03-24T15-21-58.736386.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-03-24T15-21-58.736386.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2024-03-24T15-21-58.736386.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-03-24T15-21-58.736386.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-03-24T15-21-58.736386.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2024-03-24T15-21-58.736386.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2024-03-24T15-21-58.736386.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-03-24T15-21-58.736386.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-03-24T15-21-58.736386.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-03-24T15-21-58.736386.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-03-24T15-21-58.736386.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-03-24T15-21-58.736386.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-03-24T15-21-58.736386.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-03-24T15-21-58.736386.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-03-24T15-21-58.736386.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-03-24T15-21-58.736386.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-03-24T15-21-58.736386.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-03-24T15-21-58.736386.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-03-24T15-21-58.736386.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-03-24T15-21-58.736386.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-03-24T15-21-58.736386.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2024-03-24T15-21-58.736386.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-03-24T15-21-58.736386.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2024-03-24T15-21-58.736386.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-03-24T15-21-58.736386.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-03-24T15-21-58.736386.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2024-03-24T15-21-58.736386.parquet'
- '**/details_harness|hendrycksTest-management|5_2024-03-24T15-21-58.736386.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2024-03-24T15-21-58.736386.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-03-24T15-21-58.736386.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-03-24T15-21-58.736386.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-03-24T15-21-58.736386.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-03-24T15-21-58.736386.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2024-03-24T15-21-58.736386.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2024-03-24T15-21-58.736386.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2024-03-24T15-21-58.736386.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-03-24T15-21-58.736386.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2024-03-24T15-21-58.736386.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-03-24T15-21-58.736386.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-03-24T15-21-58.736386.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2024-03-24T15-21-58.736386.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2024-03-24T15-21-58.736386.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2024-03-24T15-21-58.736386.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-03-24T15-21-58.736386.parquet'
- '**/details_harness|hendrycksTest-virology|5_2024-03-24T15-21-58.736386.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2024-03-24T15-21-58.736386.parquet'
- config_name: harness_hendrycksTest_abstract_algebra_5
data_files:
- split: 2024_03_24T15_21_58.736386
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-03-24T15-21-58.736386.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-03-24T15-21-58.736386.parquet'
- config_name: harness_hendrycksTest_anatomy_5
data_files:
- split: 2024_03_24T15_21_58.736386
path:
- '**/details_harness|hendrycksTest-anatomy|5_2024-03-24T15-21-58.736386.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-anatomy|5_2024-03-24T15-21-58.736386.parquet'
- config_name: harness_hendrycksTest_astronomy_5
data_files:
- split: 2024_03_24T15_21_58.736386
path:
- '**/details_harness|hendrycksTest-astronomy|5_2024-03-24T15-21-58.736386.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-astronomy|5_2024-03-24T15-21-58.736386.parquet'
- config_name: harness_hendrycksTest_business_ethics_5
data_files:
- split: 2024_03_24T15_21_58.736386
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2024-03-24T15-21-58.736386.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2024-03-24T15-21-58.736386.parquet'
- config_name: harness_hendrycksTest_clinical_knowledge_5
data_files:
- split: 2024_03_24T15_21_58.736386
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-03-24T15-21-58.736386.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-03-24T15-21-58.736386.parquet'
- config_name: harness_hendrycksTest_college_biology_5
data_files:
- split: 2024_03_24T15_21_58.736386
path:
- '**/details_harness|hendrycksTest-college_biology|5_2024-03-24T15-21-58.736386.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_biology|5_2024-03-24T15-21-58.736386.parquet'
- config_name: harness_hendrycksTest_college_chemistry_5
data_files:
- split: 2024_03_24T15_21_58.736386
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-03-24T15-21-58.736386.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-03-24T15-21-58.736386.parquet'
- config_name: harness_hendrycksTest_college_computer_science_5
data_files:
- split: 2024_03_24T15_21_58.736386
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-03-24T15-21-58.736386.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-03-24T15-21-58.736386.parquet'
- config_name: harness_hendrycksTest_college_mathematics_5
data_files:
- split: 2024_03_24T15_21_58.736386
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-03-24T15-21-58.736386.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-03-24T15-21-58.736386.parquet'
- config_name: harness_hendrycksTest_college_medicine_5
data_files:
- split: 2024_03_24T15_21_58.736386
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2024-03-24T15-21-58.736386.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2024-03-24T15-21-58.736386.parquet'
- config_name: harness_hendrycksTest_college_physics_5
data_files:
- split: 2024_03_24T15_21_58.736386
path:
- '**/details_harness|hendrycksTest-college_physics|5_2024-03-24T15-21-58.736386.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_physics|5_2024-03-24T15-21-58.736386.parquet'
- config_name: harness_hendrycksTest_computer_security_5
data_files:
- split: 2024_03_24T15_21_58.736386
path:
- '**/details_harness|hendrycksTest-computer_security|5_2024-03-24T15-21-58.736386.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-computer_security|5_2024-03-24T15-21-58.736386.parquet'
- config_name: harness_hendrycksTest_conceptual_physics_5
data_files:
- split: 2024_03_24T15_21_58.736386
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-03-24T15-21-58.736386.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-03-24T15-21-58.736386.parquet'
- config_name: harness_hendrycksTest_econometrics_5
data_files:
- split: 2024_03_24T15_21_58.736386
path:
- '**/details_harness|hendrycksTest-econometrics|5_2024-03-24T15-21-58.736386.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-econometrics|5_2024-03-24T15-21-58.736386.parquet'
- config_name: harness_hendrycksTest_electrical_engineering_5
data_files:
- split: 2024_03_24T15_21_58.736386
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-03-24T15-21-58.736386.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-03-24T15-21-58.736386.parquet'
- config_name: harness_hendrycksTest_elementary_mathematics_5
data_files:
- split: 2024_03_24T15_21_58.736386
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-03-24T15-21-58.736386.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-03-24T15-21-58.736386.parquet'
- config_name: harness_hendrycksTest_formal_logic_5
data_files:
- split: 2024_03_24T15_21_58.736386
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2024-03-24T15-21-58.736386.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2024-03-24T15-21-58.736386.parquet'
- config_name: harness_hendrycksTest_global_facts_5
data_files:
- split: 2024_03_24T15_21_58.736386
path:
- '**/details_harness|hendrycksTest-global_facts|5_2024-03-24T15-21-58.736386.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-global_facts|5_2024-03-24T15-21-58.736386.parquet'
- config_name: harness_hendrycksTest_high_school_biology_5
data_files:
- split: 2024_03_24T15_21_58.736386
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-03-24T15-21-58.736386.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-03-24T15-21-58.736386.parquet'
- config_name: harness_hendrycksTest_high_school_chemistry_5
data_files:
- split: 2024_03_24T15_21_58.736386
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-03-24T15-21-58.736386.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-03-24T15-21-58.736386.parquet'
- config_name: harness_hendrycksTest_high_school_computer_science_5
data_files:
- split: 2024_03_24T15_21_58.736386
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-03-24T15-21-58.736386.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-03-24T15-21-58.736386.parquet'
- config_name: harness_hendrycksTest_high_school_european_history_5
data_files:
- split: 2024_03_24T15_21_58.736386
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-03-24T15-21-58.736386.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-03-24T15-21-58.736386.parquet'
- config_name: harness_hendrycksTest_high_school_geography_5
data_files:
- split: 2024_03_24T15_21_58.736386
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-03-24T15-21-58.736386.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-03-24T15-21-58.736386.parquet'
- config_name: harness_hendrycksTest_high_school_government_and_politics_5
data_files:
- split: 2024_03_24T15_21_58.736386
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-03-24T15-21-58.736386.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-03-24T15-21-58.736386.parquet'
- config_name: harness_hendrycksTest_high_school_macroeconomics_5
data_files:
- split: 2024_03_24T15_21_58.736386
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-03-24T15-21-58.736386.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-03-24T15-21-58.736386.parquet'
- config_name: harness_hendrycksTest_high_school_mathematics_5
data_files:
- split: 2024_03_24T15_21_58.736386
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-03-24T15-21-58.736386.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-03-24T15-21-58.736386.parquet'
- config_name: harness_hendrycksTest_high_school_microeconomics_5
data_files:
- split: 2024_03_24T15_21_58.736386
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-03-24T15-21-58.736386.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-03-24T15-21-58.736386.parquet'
- config_name: harness_hendrycksTest_high_school_physics_5
data_files:
- split: 2024_03_24T15_21_58.736386
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-03-24T15-21-58.736386.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-03-24T15-21-58.736386.parquet'
- config_name: harness_hendrycksTest_high_school_psychology_5
data_files:
- split: 2024_03_24T15_21_58.736386
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-03-24T15-21-58.736386.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-03-24T15-21-58.736386.parquet'
- config_name: harness_hendrycksTest_high_school_statistics_5
data_files:
- split: 2024_03_24T15_21_58.736386
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-03-24T15-21-58.736386.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-03-24T15-21-58.736386.parquet'
- config_name: harness_hendrycksTest_high_school_us_history_5
data_files:
- split: 2024_03_24T15_21_58.736386
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-03-24T15-21-58.736386.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-03-24T15-21-58.736386.parquet'
- config_name: harness_hendrycksTest_high_school_world_history_5
data_files:
- split: 2024_03_24T15_21_58.736386
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-03-24T15-21-58.736386.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-03-24T15-21-58.736386.parquet'
- config_name: harness_hendrycksTest_human_aging_5
data_files:
- split: 2024_03_24T15_21_58.736386
path:
- '**/details_harness|hendrycksTest-human_aging|5_2024-03-24T15-21-58.736386.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_aging|5_2024-03-24T15-21-58.736386.parquet'
- config_name: harness_hendrycksTest_human_sexuality_5
data_files:
- split: 2024_03_24T15_21_58.736386
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-03-24T15-21-58.736386.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-03-24T15-21-58.736386.parquet'
- config_name: harness_hendrycksTest_international_law_5
data_files:
- split: 2024_03_24T15_21_58.736386
path:
- '**/details_harness|hendrycksTest-international_law|5_2024-03-24T15-21-58.736386.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-international_law|5_2024-03-24T15-21-58.736386.parquet'
- config_name: harness_hendrycksTest_jurisprudence_5
data_files:
- split: 2024_03_24T15_21_58.736386
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-03-24T15-21-58.736386.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-03-24T15-21-58.736386.parquet'
- config_name: harness_hendrycksTest_logical_fallacies_5
data_files:
- split: 2024_03_24T15_21_58.736386
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-03-24T15-21-58.736386.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-03-24T15-21-58.736386.parquet'
- config_name: harness_hendrycksTest_machine_learning_5
data_files:
- split: 2024_03_24T15_21_58.736386
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2024-03-24T15-21-58.736386.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2024-03-24T15-21-58.736386.parquet'
- config_name: harness_hendrycksTest_management_5
data_files:
- split: 2024_03_24T15_21_58.736386
path:
- '**/details_harness|hendrycksTest-management|5_2024-03-24T15-21-58.736386.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-management|5_2024-03-24T15-21-58.736386.parquet'
- config_name: harness_hendrycksTest_marketing_5
data_files:
- split: 2024_03_24T15_21_58.736386
path:
- '**/details_harness|hendrycksTest-marketing|5_2024-03-24T15-21-58.736386.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-marketing|5_2024-03-24T15-21-58.736386.parquet'
- config_name: harness_hendrycksTest_medical_genetics_5
data_files:
- split: 2024_03_24T15_21_58.736386
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-03-24T15-21-58.736386.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-03-24T15-21-58.736386.parquet'
- config_name: harness_hendrycksTest_miscellaneous_5
data_files:
- split: 2024_03_24T15_21_58.736386
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-03-24T15-21-58.736386.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-03-24T15-21-58.736386.parquet'
- config_name: harness_hendrycksTest_moral_disputes_5
data_files:
- split: 2024_03_24T15_21_58.736386
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-03-24T15-21-58.736386.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-03-24T15-21-58.736386.parquet'
- config_name: harness_hendrycksTest_moral_scenarios_5
data_files:
- split: 2024_03_24T15_21_58.736386
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-03-24T15-21-58.736386.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-03-24T15-21-58.736386.parquet'
- config_name: harness_hendrycksTest_nutrition_5
data_files:
- split: 2024_03_24T15_21_58.736386
path:
- '**/details_harness|hendrycksTest-nutrition|5_2024-03-24T15-21-58.736386.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-nutrition|5_2024-03-24T15-21-58.736386.parquet'
- config_name: harness_hendrycksTest_philosophy_5
data_files:
- split: 2024_03_24T15_21_58.736386
path:
- '**/details_harness|hendrycksTest-philosophy|5_2024-03-24T15-21-58.736386.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-philosophy|5_2024-03-24T15-21-58.736386.parquet'
- config_name: harness_hendrycksTest_prehistory_5
data_files:
- split: 2024_03_24T15_21_58.736386
path:
- '**/details_harness|hendrycksTest-prehistory|5_2024-03-24T15-21-58.736386.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-prehistory|5_2024-03-24T15-21-58.736386.parquet'
- config_name: harness_hendrycksTest_professional_accounting_5
data_files:
- split: 2024_03_24T15_21_58.736386
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-03-24T15-21-58.736386.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-03-24T15-21-58.736386.parquet'
- config_name: harness_hendrycksTest_professional_law_5
data_files:
- split: 2024_03_24T15_21_58.736386
path:
- '**/details_harness|hendrycksTest-professional_law|5_2024-03-24T15-21-58.736386.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_law|5_2024-03-24T15-21-58.736386.parquet'
- config_name: harness_hendrycksTest_professional_medicine_5
data_files:
- split: 2024_03_24T15_21_58.736386
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-03-24T15-21-58.736386.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-03-24T15-21-58.736386.parquet'
- config_name: harness_hendrycksTest_professional_psychology_5
data_files:
- split: 2024_03_24T15_21_58.736386
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-03-24T15-21-58.736386.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-03-24T15-21-58.736386.parquet'
- config_name: harness_hendrycksTest_public_relations_5
data_files:
- split: 2024_03_24T15_21_58.736386
path:
- '**/details_harness|hendrycksTest-public_relations|5_2024-03-24T15-21-58.736386.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-public_relations|5_2024-03-24T15-21-58.736386.parquet'
- config_name: harness_hendrycksTest_security_studies_5
data_files:
- split: 2024_03_24T15_21_58.736386
path:
- '**/details_harness|hendrycksTest-security_studies|5_2024-03-24T15-21-58.736386.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-security_studies|5_2024-03-24T15-21-58.736386.parquet'
- config_name: harness_hendrycksTest_sociology_5
data_files:
- split: 2024_03_24T15_21_58.736386
path:
- '**/details_harness|hendrycksTest-sociology|5_2024-03-24T15-21-58.736386.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-sociology|5_2024-03-24T15-21-58.736386.parquet'
- config_name: harness_hendrycksTest_us_foreign_policy_5
data_files:
- split: 2024_03_24T15_21_58.736386
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-03-24T15-21-58.736386.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-03-24T15-21-58.736386.parquet'
- config_name: harness_hendrycksTest_virology_5
data_files:
- split: 2024_03_24T15_21_58.736386
path:
- '**/details_harness|hendrycksTest-virology|5_2024-03-24T15-21-58.736386.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-virology|5_2024-03-24T15-21-58.736386.parquet'
- config_name: harness_hendrycksTest_world_religions_5
data_files:
- split: 2024_03_24T15_21_58.736386
path:
- '**/details_harness|hendrycksTest-world_religions|5_2024-03-24T15-21-58.736386.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-world_religions|5_2024-03-24T15-21-58.736386.parquet'
- config_name: harness_truthfulqa_mc_0
data_files:
- split: 2024_03_24T15_21_58.736386
path:
- '**/details_harness|truthfulqa:mc|0_2024-03-24T15-21-58.736386.parquet'
- split: latest
path:
- '**/details_harness|truthfulqa:mc|0_2024-03-24T15-21-58.736386.parquet'
- config_name: harness_winogrande_5
data_files:
- split: 2024_03_24T15_21_58.736386
path:
- '**/details_harness|winogrande|5_2024-03-24T15-21-58.736386.parquet'
- split: latest
path:
- '**/details_harness|winogrande|5_2024-03-24T15-21-58.736386.parquet'
- config_name: results
data_files:
- split: 2024_03_24T15_21_58.736386
path:
- results_2024-03-24T15-21-58.736386.parquet
- split: latest
path:
- results_2024-03-24T15-21-58.736386.parquet
---
# Dataset Card for Evaluation run of Inv/Konstanta-Gamma-V2-9B
<!-- Provide a quick summary of the dataset. -->
Dataset automatically created during the evaluation run of model [Inv/Konstanta-Gamma-V2-9B](https://huggingface.co/Inv/Konstanta-Gamma-V2-9B) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The "train" split is always pointing to the latest results.
An additional configuration "results" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_Inv__Konstanta-Gamma-V2-9B",
"harness_winogrande_5",
split="train")
```
## Latest results
These are the [latest results from run 2024-03-24T15:21:58.736386](https://huggingface.co/datasets/open-llm-leaderboard/details_Inv__Konstanta-Gamma-V2-9B/blob/main/results_2024-03-24T15-21-58.736386.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the "latest" split for each eval):
```python
{
"all": {
"acc": 0.6465388407458585,
"acc_stderr": 0.03224106097787349,
"acc_norm": 0.6469872783275803,
"acc_norm_stderr": 0.03290441654803306,
"mc1": 0.5397796817625459,
"mc1_stderr": 0.01744801722396087,
"mc2": 0.6885382714652906,
"mc2_stderr": 0.015209072620089047
},
"harness|arc:challenge|25": {
"acc": 0.6749146757679181,
"acc_stderr": 0.013688147309729122,
"acc_norm": 0.6988054607508533,
"acc_norm_stderr": 0.01340674176784764
},
"harness|hellaswag|10": {
"acc": 0.6932881896036646,
"acc_stderr": 0.004601862807240203,
"acc_norm": 0.8683529177454691,
"acc_norm_stderr": 0.0033741568675916696
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.35,
"acc_stderr": 0.047937248544110196,
"acc_norm": 0.35,
"acc_norm_stderr": 0.047937248544110196
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.6074074074074074,
"acc_stderr": 0.04218506215368881,
"acc_norm": 0.6074074074074074,
"acc_norm_stderr": 0.04218506215368881
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.7105263157894737,
"acc_stderr": 0.03690677986137283,
"acc_norm": 0.7105263157894737,
"acc_norm_stderr": 0.03690677986137283
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.63,
"acc_stderr": 0.04852365870939099,
"acc_norm": 0.63,
"acc_norm_stderr": 0.04852365870939099
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.7283018867924528,
"acc_stderr": 0.027377706624670713,
"acc_norm": 0.7283018867924528,
"acc_norm_stderr": 0.027377706624670713
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.7638888888888888,
"acc_stderr": 0.03551446610810826,
"acc_norm": 0.7638888888888888,
"acc_norm_stderr": 0.03551446610810826
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.49,
"acc_stderr": 0.05024183937956912,
"acc_norm": 0.49,
"acc_norm_stderr": 0.05024183937956912
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.53,
"acc_stderr": 0.05016135580465919,
"acc_norm": 0.53,
"acc_norm_stderr": 0.05016135580465919
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.28,
"acc_stderr": 0.04512608598542127,
"acc_norm": 0.28,
"acc_norm_stderr": 0.04512608598542127
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.653179190751445,
"acc_stderr": 0.036291466701596636,
"acc_norm": 0.653179190751445,
"acc_norm_stderr": 0.036291466701596636
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.4411764705882353,
"acc_stderr": 0.049406356306056595,
"acc_norm": 0.4411764705882353,
"acc_norm_stderr": 0.049406356306056595
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.76,
"acc_stderr": 0.04292346959909283,
"acc_norm": 0.76,
"acc_norm_stderr": 0.04292346959909283
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.5957446808510638,
"acc_stderr": 0.032081157507886836,
"acc_norm": 0.5957446808510638,
"acc_norm_stderr": 0.032081157507886836
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.49122807017543857,
"acc_stderr": 0.04702880432049615,
"acc_norm": 0.49122807017543857,
"acc_norm_stderr": 0.04702880432049615
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.5586206896551724,
"acc_stderr": 0.04137931034482757,
"acc_norm": 0.5586206896551724,
"acc_norm_stderr": 0.04137931034482757
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.4312169312169312,
"acc_stderr": 0.025506481698138208,
"acc_norm": 0.4312169312169312,
"acc_norm_stderr": 0.025506481698138208
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.47619047619047616,
"acc_stderr": 0.04467062628403273,
"acc_norm": 0.47619047619047616,
"acc_norm_stderr": 0.04467062628403273
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.32,
"acc_stderr": 0.046882617226215034,
"acc_norm": 0.32,
"acc_norm_stderr": 0.046882617226215034
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.7967741935483871,
"acc_stderr": 0.022891687984554963,
"acc_norm": 0.7967741935483871,
"acc_norm_stderr": 0.022891687984554963
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.4729064039408867,
"acc_stderr": 0.03512819077876106,
"acc_norm": 0.4729064039408867,
"acc_norm_stderr": 0.03512819077876106
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.66,
"acc_stderr": 0.04760952285695237,
"acc_norm": 0.66,
"acc_norm_stderr": 0.04760952285695237
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.7818181818181819,
"acc_stderr": 0.03225078108306289,
"acc_norm": 0.7818181818181819,
"acc_norm_stderr": 0.03225078108306289
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.8131313131313131,
"acc_stderr": 0.027772533334218967,
"acc_norm": 0.8131313131313131,
"acc_norm_stderr": 0.027772533334218967
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.9067357512953368,
"acc_stderr": 0.02098685459328973,
"acc_norm": 0.9067357512953368,
"acc_norm_stderr": 0.02098685459328973
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.6615384615384615,
"acc_stderr": 0.023991500500313036,
"acc_norm": 0.6615384615384615,
"acc_norm_stderr": 0.023991500500313036
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.2962962962962963,
"acc_stderr": 0.02784081149587194,
"acc_norm": 0.2962962962962963,
"acc_norm_stderr": 0.02784081149587194
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.6638655462184874,
"acc_stderr": 0.030684737115135363,
"acc_norm": 0.6638655462184874,
"acc_norm_stderr": 0.030684737115135363
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.3841059602649007,
"acc_stderr": 0.03971301814719197,
"acc_norm": 0.3841059602649007,
"acc_norm_stderr": 0.03971301814719197
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.8366972477064221,
"acc_stderr": 0.015848255806501562,
"acc_norm": 0.8366972477064221,
"acc_norm_stderr": 0.015848255806501562
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.5092592592592593,
"acc_stderr": 0.034093869469927006,
"acc_norm": 0.5092592592592593,
"acc_norm_stderr": 0.034093869469927006
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.8382352941176471,
"acc_stderr": 0.02584501798692692,
"acc_norm": 0.8382352941176471,
"acc_norm_stderr": 0.02584501798692692
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.7890295358649789,
"acc_stderr": 0.026558372502661916,
"acc_norm": 0.7890295358649789,
"acc_norm_stderr": 0.026558372502661916
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.6591928251121076,
"acc_stderr": 0.0318114974705536,
"acc_norm": 0.6591928251121076,
"acc_norm_stderr": 0.0318114974705536
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.7709923664122137,
"acc_stderr": 0.036853466317118506,
"acc_norm": 0.7709923664122137,
"acc_norm_stderr": 0.036853466317118506
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.7768595041322314,
"acc_stderr": 0.03800754475228732,
"acc_norm": 0.7768595041322314,
"acc_norm_stderr": 0.03800754475228732
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.7962962962962963,
"acc_stderr": 0.03893542518824847,
"acc_norm": 0.7962962962962963,
"acc_norm_stderr": 0.03893542518824847
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.7300613496932515,
"acc_stderr": 0.03487825168497892,
"acc_norm": 0.7300613496932515,
"acc_norm_stderr": 0.03487825168497892
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.4375,
"acc_stderr": 0.04708567521880525,
"acc_norm": 0.4375,
"acc_norm_stderr": 0.04708567521880525
},
"harness|hendrycksTest-management|5": {
"acc": 0.7669902912621359,
"acc_stderr": 0.04185832598928315,
"acc_norm": 0.7669902912621359,
"acc_norm_stderr": 0.04185832598928315
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.8846153846153846,
"acc_stderr": 0.02093019318517933,
"acc_norm": 0.8846153846153846,
"acc_norm_stderr": 0.02093019318517933
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.7,
"acc_stderr": 0.046056618647183814,
"acc_norm": 0.7,
"acc_norm_stderr": 0.046056618647183814
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.8148148148148148,
"acc_stderr": 0.013890862162876164,
"acc_norm": 0.8148148148148148,
"acc_norm_stderr": 0.013890862162876164
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.7225433526011561,
"acc_stderr": 0.024105712607754307,
"acc_norm": 0.7225433526011561,
"acc_norm_stderr": 0.024105712607754307
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.4547486033519553,
"acc_stderr": 0.016653875777524002,
"acc_norm": 0.4547486033519553,
"acc_norm_stderr": 0.016653875777524002
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.7222222222222222,
"acc_stderr": 0.0256468630971379,
"acc_norm": 0.7222222222222222,
"acc_norm_stderr": 0.0256468630971379
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.7106109324758842,
"acc_stderr": 0.025755865922632945,
"acc_norm": 0.7106109324758842,
"acc_norm_stderr": 0.025755865922632945
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.7314814814814815,
"acc_stderr": 0.02465968518596728,
"acc_norm": 0.7314814814814815,
"acc_norm_stderr": 0.02465968518596728
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.4716312056737589,
"acc_stderr": 0.029779450957303062,
"acc_norm": 0.4716312056737589,
"acc_norm_stderr": 0.029779450957303062
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.46088657105606257,
"acc_stderr": 0.012731102790504514,
"acc_norm": 0.46088657105606257,
"acc_norm_stderr": 0.012731102790504514
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.6691176470588235,
"acc_stderr": 0.02858270975389845,
"acc_norm": 0.6691176470588235,
"acc_norm_stderr": 0.02858270975389845
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.6715686274509803,
"acc_stderr": 0.018999707383162673,
"acc_norm": 0.6715686274509803,
"acc_norm_stderr": 0.018999707383162673
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.6636363636363637,
"acc_stderr": 0.04525393596302506,
"acc_norm": 0.6636363636363637,
"acc_norm_stderr": 0.04525393596302506
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.7183673469387755,
"acc_stderr": 0.02879518557429129,
"acc_norm": 0.7183673469387755,
"acc_norm_stderr": 0.02879518557429129
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.835820895522388,
"acc_stderr": 0.026193923544454115,
"acc_norm": 0.835820895522388,
"acc_norm_stderr": 0.026193923544454115
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.84,
"acc_stderr": 0.03684529491774708,
"acc_norm": 0.84,
"acc_norm_stderr": 0.03684529491774708
},
"harness|hendrycksTest-virology|5": {
"acc": 0.5602409638554217,
"acc_stderr": 0.03864139923699121,
"acc_norm": 0.5602409638554217,
"acc_norm_stderr": 0.03864139923699121
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.8070175438596491,
"acc_stderr": 0.030267457554898458,
"acc_norm": 0.8070175438596491,
"acc_norm_stderr": 0.030267457554898458
},
"harness|truthfulqa:mc|0": {
"mc1": 0.5397796817625459,
"mc1_stderr": 0.01744801722396087,
"mc2": 0.6885382714652906,
"mc2_stderr": 0.015209072620089047
},
"harness|winogrande|5": {
"acc": 0.8287292817679558,
"acc_stderr": 0.010588417294962524
},
"harness|gsm8k|5": {
"acc": 0.6368460955269143,
"acc_stderr": 0.013246614539839868
}
}
```
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] |
liuyanchen1015/MULTI_VALUE_mnli_generalized_third_person_s | ---
dataset_info:
features:
- name: premise
dtype: string
- name: hypothesis
dtype: string
- name: label
dtype: int64
- name: idx
dtype: int64
- name: score
dtype: int64
splits:
- name: dev_matched
num_bytes: 695217
num_examples: 3075
- name: dev_mismatched
num_bytes: 754991
num_examples: 3248
- name: test_matched
num_bytes: 743989
num_examples: 3198
- name: test_mismatched
num_bytes: 780123
num_examples: 3385
- name: train
num_bytes: 28822279
num_examples: 125545
download_size: 20045552
dataset_size: 31796599
---
# Dataset Card for "MULTI_VALUE_mnli_generalized_third_person_s"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
amitrajitbh1/communities_unp_content | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: author
dtype: string
- name: subreddit
dtype: string
- name: subreddit_id
dtype: string
- name: id
dtype: string
- name: content
dtype: string
- name: summary
dtype: string
splits:
- name: train
num_bytes: 1418165412.7144172
num_examples: 850001
download_size: 1056960430
dataset_size: 1418165412.7144172
---
# Dataset Card for "communities_unp_content"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
pourmand1376/OpenAssistant-oasst1-fa | ---
dataset_info:
features:
- name: message_id
dtype: string
- name: parent_id
dtype: string
- name: user_id
dtype: string
- name: created_date
dtype: string
- name: text
dtype: string
- name: role
dtype: string
- name: lang
dtype: string
- name: review_count
dtype: int64
- name: review_result
dtype: bool
- name: deleted
dtype: bool
- name: rank
dtype: float64
- name: synthetic
dtype: bool
- name: model_name
dtype: float64
- name: detoxify
dtype: string
- name: message_tree_id
dtype: string
- name: tree_state
dtype: string
- name: emojis
dtype: string
- name: labels
dtype: string
splits:
- name: train
num_bytes: 108879677
num_examples: 69201
download_size: 39436976
dataset_size: 108879677
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
license: apache-2.0
task_categories:
- question-answering
- text-generation
- conversational
language:
- fa
pretty_name: Translated Open Assistant (Farsi)
size_categories:
- 10K<n<100K
---
# Dataset Card for "OpenAssistant-oasst1-fa"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
suryam13/sample_data | ---
dataset_info:
features:
- name: 'SYSTEM: Translate english to thanglish: '
dtype: string
- name: 'Output:'
dtype: string
splits:
- name: train
num_bytes: 1915.2
num_examples: 7
- name: test
num_bytes: 820.8
num_examples: 3
download_size: 7953
dataset_size: 2736.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
|
Saxo/linkbricks_ko_dataset_for_tokenizer_voca | ---
license: apache-2.0
---
|
sophy/smpc | ---
license: apache-2.0
---
|
uripper/LichessGames | ---
license: cc
---
|
jjjaehee/customhkcode2 | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 5826
num_examples: 39
download_size: 2572
dataset_size: 5826
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
rasgaard/20_newsgroups | ---
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype: int64
- name: label_text
dtype: string
splits:
- name: train
num_bytes: 12724811.858405516
num_examples: 10182
- name: val
num_bytes: 1414701.1415944847
num_examples: 1132
- name: test
num_bytes: 8499585
num_examples: 7532
download_size: 0
dataset_size: 22639098.0
---
# Dataset Card for "20_newsgroups"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
FLIP-dataset/FLIP-80M | ---
license: cc-by-4.0
---
|
Gus1993ever/Khrome1 | ---
license: openrail
---
|
has84/test | ---
license: mit
---
|
visionlab/block_towers_test | ---
dataset_info:
features:
- name: data
list:
- name: density
dtype: 'null'
- name: lx
dtype: float64
- name: ly
dtype: float64
- name: lz
dtype: float64
- name: mass
dtype: 'null'
- name: rx
dtype: int64
- name: ry
dtype: int64
- name: rz
dtype: int64
- name: x
dtype: float64
- name: y
dtype: int64
- name: z
dtype: float64
splits:
- name: stack3_stable
num_bytes: 1100000
num_examples: 5000
- name: stack3_unstable
num_bytes: 1100000
num_examples: 5000
- name: stack4_stable
num_bytes: 1460000
num_examples: 5000
- name: stack4_unstable
num_bytes: 1460000
num_examples: 5000
- name: stack5_stable
num_bytes: 1820000
num_examples: 5000
- name: stack5_unstable
num_bytes: 1820000
num_examples: 5000
- name: stack6_stable
num_bytes: 2180000
num_examples: 5000
- name: stack6_unstable
num_bytes: 2180000
num_examples: 5000
download_size: 1573446
dataset_size: 13120000
configs:
- config_name: default
data_files:
- split: stack3_stable
path: data/stack3_stable-*
- split: stack3_unstable
path: data/stack3_unstable-*
- split: stack4_stable
path: data/stack4_stable-*
- split: stack4_unstable
path: data/stack4_unstable-*
- split: stack5_stable
path: data/stack5_stable-*
- split: stack5_unstable
path: data/stack5_unstable-*
- split: stack6_stable
path: data/stack6_stable-*
- split: stack6_unstable
path: data/stack6_unstable-*
---
Hello. |
fathyshalab/reklamation24_transport-logistik | ---
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype: int64
- name: label_name
dtype: string
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 208152
num_examples: 410
- name: test
num_bytes: 51997
num_examples: 103
download_size: 0
dataset_size: 260149
---
# Dataset Card for "reklamation24_transport-logistik"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
malysheva42/spaeti_store | ---
license: openrail
task_categories:
- text-to-image
- image-segmentation
- image-classification
- image-to-image
pretty_name: Photos of one späti (a German convenience store)
size_categories:
- n<1K
---
# Dataset Card for spaeti_store
## Dataset Description
The dataset consists of 10 pictures of one späti (German convenience store) from different angles.
The data is unlabeled.
The dataset was created to fine-tune a text-to-image Stable Diffusion model as part of the DreamBooth Hackathon. Visit the [organization's page](https://huggingface.co/dreambooth-hackathon) for more info.
|
autoevaluate/autoeval-staging-eval-project-adversarial_qa-e34332b7-12205626 | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- adversarial_qa
eval_info:
task: extractive_question_answering
model: deepset/roberta-base-squad2-distilled
metrics: []
dataset_name: adversarial_qa
dataset_config: adversarialQA
dataset_split: validation
col_mapping:
context: context
question: question
answers-text: answers.text
answers-answer_start: answers.answer_start
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Question Answering
* Model: deepset/roberta-base-squad2-distilled
* Dataset: adversarial_qa
* Config: adversarialQA
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@ceyda](https://huggingface.co/ceyda) for evaluating this model. |
Rodrigo-1/Protocolo_Hair | ---
license: unknown
---
|
rahuldandonaDatacreek/autotrain-data-useful-not_useful-classifier | ---
task_categories:
- image-classification
---
# AutoTrain Dataset for project: useful-not_useful-classifier
## Dataset Description
This dataset has been automatically processed by AutoTrain for project useful-not_useful-classifier.
### Languages
The BCP-47 code for the dataset's language is unk.
## Dataset Structure
### Data Instances
A sample from this dataset looks as follows:
```json
[
{
"image": "<140x144 RGB PIL image>",
"target": 0
},
{
"image": "<748x788 RGB PIL image>",
"target": 0
}
]
```
### Dataset Fields
The dataset has the following fields (also called "features"):
```json
{
"image": "Image(decode=True, id=None)",
"target": "ClassLabel(names=['not-useful', 'useful'], id=None)"
}
```
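The `ClassLabel` feature above means the integer `target` column is an index into the `names` list. As an illustration only (a pure-Python sketch of the mapping, not the `datasets` library itself, whose `ClassLabel` exposes equivalent `int2str`/`str2int` methods), the id-to-name correspondence works like this:

```python
# Sketch of the id <-> name mapping implied by the ClassLabel feature above.
# The names list is copied from the card; the helper functions are illustrative.
names = ["not-useful", "useful"]

def int2str(i: int) -> str:
    """Map an integer target to its class name."""
    return names[i]

def str2int(name: str) -> int:
    """Map a class name back to its integer id."""
    return names.index(name)

print(int2str(0))         # not-useful
print(str2int("useful"))  # 1
```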
### Dataset Splits
This dataset is split into a train and validation split. The split sizes are as follows:
| Split name | Num samples |
| ------------ | ------------------- |
| train | 8323 |
| valid | 2082 |
|
open-llm-leaderboard/details_l3utterfly__llama2-7b-layla | ---
pretty_name: Evaluation run of l3utterfly/llama2-7b-layla
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [l3utterfly/llama2-7b-layla](https://huggingface.co/l3utterfly/llama2-7b-layla)\
\ on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 64 configurations, each one corresponding to one of the\
\ evaluated tasks.\n\nThe dataset has been created from 2 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
\ timestamp of the run. The \"train\" split always points to the latest results.\n\
\nAn additional configuration \"results\" stores all the aggregated results of the\
\ run (and is used to compute and display the aggregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_l3utterfly__llama2-7b-layla\"\
,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\
These are the [latest results from run 2023-09-17T04:32:53.780547](https://huggingface.co/datasets/open-llm-leaderboard/details_l3utterfly__llama2-7b-layla/blob/main/results_2023-09-17T04-32-53.780547.json) (note\
\ that there might be results for other tasks in the repos if successive evals didn't\
\ cover the same tasks. You can find each in the results and the \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.0025167785234899327,\n\
\ \"em_stderr\": 0.0005131152834514622,\n \"f1\": 0.06570889261744958,\n\
\ \"f1_stderr\": 0.0014756748283544432,\n \"acc\": 0.4130167852161326,\n\
\ \"acc_stderr\": 0.009994364317722083\n },\n \"harness|drop|3\": {\n\
\ \"em\": 0.0025167785234899327,\n \"em_stderr\": 0.0005131152834514622,\n\
\ \"f1\": 0.06570889261744958,\n \"f1_stderr\": 0.0014756748283544432\n\
\ },\n \"harness|gsm8k|5\": {\n \"acc\": 0.08491281273692192,\n \
\ \"acc_stderr\": 0.007678212824450795\n },\n \"harness|winogrande|5\"\
: {\n \"acc\": 0.7411207576953434,\n \"acc_stderr\": 0.012310515810993372\n\
\ }\n}\n```"
repo_url: https://huggingface.co/l3utterfly/llama2-7b-layla
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_arc_challenge_25
data_files:
- split: 2023_08_09T20_58_39.874596
path:
- '**/details_harness|arc:challenge|25_2023-08-09T20:58:39.874596.parquet'
- split: latest
path:
- '**/details_harness|arc:challenge|25_2023-08-09T20:58:39.874596.parquet'
- config_name: harness_drop_3
data_files:
- split: 2023_09_17T04_32_53.780547
path:
- '**/details_harness|drop|3_2023-09-17T04-32-53.780547.parquet'
- split: latest
path:
- '**/details_harness|drop|3_2023-09-17T04-32-53.780547.parquet'
- config_name: harness_gsm8k_5
data_files:
- split: 2023_09_17T04_32_53.780547
path:
- '**/details_harness|gsm8k|5_2023-09-17T04-32-53.780547.parquet'
- split: latest
path:
- '**/details_harness|gsm8k|5_2023-09-17T04-32-53.780547.parquet'
- config_name: harness_hellaswag_10
data_files:
- split: 2023_08_09T20_58_39.874596
path:
- '**/details_harness|hellaswag|10_2023-08-09T20:58:39.874596.parquet'
- split: latest
path:
- '**/details_harness|hellaswag|10_2023-08-09T20:58:39.874596.parquet'
- config_name: harness_hendrycksTest_5
data_files:
- split: 2023_08_09T20_58_39.874596
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-08-09T20:58:39.874596.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2023-08-09T20:58:39.874596.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2023-08-09T20:58:39.874596.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2023-08-09T20:58:39.874596.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-08-09T20:58:39.874596.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2023-08-09T20:58:39.874596.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-08-09T20:58:39.874596.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-08-09T20:58:39.874596.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-08-09T20:58:39.874596.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2023-08-09T20:58:39.874596.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2023-08-09T20:58:39.874596.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2023-08-09T20:58:39.874596.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-08-09T20:58:39.874596.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2023-08-09T20:58:39.874596.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-08-09T20:58:39.874596.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-08-09T20:58:39.874596.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2023-08-09T20:58:39.874596.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2023-08-09T20:58:39.874596.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-08-09T20:58:39.874596.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-08-09T20:58:39.874596.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-08-09T20:58:39.874596.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-08-09T20:58:39.874596.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-08-09T20:58:39.874596.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-08-09T20:58:39.874596.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-08-09T20:58:39.874596.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-08-09T20:58:39.874596.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-08-09T20:58:39.874596.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-08-09T20:58:39.874596.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-08-09T20:58:39.874596.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-08-09T20:58:39.874596.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-08-09T20:58:39.874596.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-08-09T20:58:39.874596.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2023-08-09T20:58:39.874596.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-08-09T20:58:39.874596.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2023-08-09T20:58:39.874596.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-08-09T20:58:39.874596.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-08-09T20:58:39.874596.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2023-08-09T20:58:39.874596.parquet'
- '**/details_harness|hendrycksTest-management|5_2023-08-09T20:58:39.874596.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2023-08-09T20:58:39.874596.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-08-09T20:58:39.874596.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-08-09T20:58:39.874596.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-08-09T20:58:39.874596.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-08-09T20:58:39.874596.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2023-08-09T20:58:39.874596.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2023-08-09T20:58:39.874596.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2023-08-09T20:58:39.874596.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-08-09T20:58:39.874596.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2023-08-09T20:58:39.874596.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-08-09T20:58:39.874596.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-08-09T20:58:39.874596.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2023-08-09T20:58:39.874596.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2023-08-09T20:58:39.874596.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2023-08-09T20:58:39.874596.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-08-09T20:58:39.874596.parquet'
- '**/details_harness|hendrycksTest-virology|5_2023-08-09T20:58:39.874596.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2023-08-09T20:58:39.874596.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-08-09T20:58:39.874596.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2023-08-09T20:58:39.874596.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2023-08-09T20:58:39.874596.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2023-08-09T20:58:39.874596.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-08-09T20:58:39.874596.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2023-08-09T20:58:39.874596.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-08-09T20:58:39.874596.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-08-09T20:58:39.874596.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-08-09T20:58:39.874596.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2023-08-09T20:58:39.874596.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2023-08-09T20:58:39.874596.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2023-08-09T20:58:39.874596.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-08-09T20:58:39.874596.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2023-08-09T20:58:39.874596.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-08-09T20:58:39.874596.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-08-09T20:58:39.874596.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2023-08-09T20:58:39.874596.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2023-08-09T20:58:39.874596.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-08-09T20:58:39.874596.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-08-09T20:58:39.874596.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-08-09T20:58:39.874596.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-08-09T20:58:39.874596.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-08-09T20:58:39.874596.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-08-09T20:58:39.874596.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-08-09T20:58:39.874596.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-08-09T20:58:39.874596.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-08-09T20:58:39.874596.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-08-09T20:58:39.874596.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-08-09T20:58:39.874596.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-08-09T20:58:39.874596.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-08-09T20:58:39.874596.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-08-09T20:58:39.874596.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2023-08-09T20:58:39.874596.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-08-09T20:58:39.874596.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2023-08-09T20:58:39.874596.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-08-09T20:58:39.874596.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-08-09T20:58:39.874596.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2023-08-09T20:58:39.874596.parquet'
- '**/details_harness|hendrycksTest-management|5_2023-08-09T20:58:39.874596.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2023-08-09T20:58:39.874596.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-08-09T20:58:39.874596.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-08-09T20:58:39.874596.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-08-09T20:58:39.874596.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-08-09T20:58:39.874596.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2023-08-09T20:58:39.874596.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2023-08-09T20:58:39.874596.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2023-08-09T20:58:39.874596.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-08-09T20:58:39.874596.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2023-08-09T20:58:39.874596.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-08-09T20:58:39.874596.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-08-09T20:58:39.874596.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2023-08-09T20:58:39.874596.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2023-08-09T20:58:39.874596.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2023-08-09T20:58:39.874596.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-08-09T20:58:39.874596.parquet'
- '**/details_harness|hendrycksTest-virology|5_2023-08-09T20:58:39.874596.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2023-08-09T20:58:39.874596.parquet'
- config_name: harness_hendrycksTest_abstract_algebra_5
data_files:
- split: 2023_08_09T20_58_39.874596
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-08-09T20:58:39.874596.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-08-09T20:58:39.874596.parquet'
- config_name: harness_hendrycksTest_anatomy_5
data_files:
- split: 2023_08_09T20_58_39.874596
path:
- '**/details_harness|hendrycksTest-anatomy|5_2023-08-09T20:58:39.874596.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-anatomy|5_2023-08-09T20:58:39.874596.parquet'
- config_name: harness_hendrycksTest_astronomy_5
data_files:
- split: 2023_08_09T20_58_39.874596
path:
- '**/details_harness|hendrycksTest-astronomy|5_2023-08-09T20:58:39.874596.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-astronomy|5_2023-08-09T20:58:39.874596.parquet'
- config_name: harness_hendrycksTest_business_ethics_5
data_files:
- split: 2023_08_09T20_58_39.874596
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2023-08-09T20:58:39.874596.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2023-08-09T20:58:39.874596.parquet'
- config_name: harness_hendrycksTest_clinical_knowledge_5
data_files:
- split: 2023_08_09T20_58_39.874596
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-08-09T20:58:39.874596.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-08-09T20:58:39.874596.parquet'
- config_name: harness_hendrycksTest_college_biology_5
data_files:
- split: 2023_08_09T20_58_39.874596
path:
- '**/details_harness|hendrycksTest-college_biology|5_2023-08-09T20:58:39.874596.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_biology|5_2023-08-09T20:58:39.874596.parquet'
- config_name: harness_hendrycksTest_college_chemistry_5
data_files:
- split: 2023_08_09T20_58_39.874596
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-08-09T20:58:39.874596.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-08-09T20:58:39.874596.parquet'
- config_name: harness_hendrycksTest_college_computer_science_5
data_files:
- split: 2023_08_09T20_58_39.874596
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-08-09T20:58:39.874596.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-08-09T20:58:39.874596.parquet'
- config_name: harness_hendrycksTest_college_mathematics_5
data_files:
- split: 2023_08_09T20_58_39.874596
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-08-09T20:58:39.874596.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-08-09T20:58:39.874596.parquet'
- config_name: harness_hendrycksTest_college_medicine_5
data_files:
- split: 2023_08_09T20_58_39.874596
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2023-08-09T20:58:39.874596.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2023-08-09T20:58:39.874596.parquet'
- config_name: harness_hendrycksTest_college_physics_5
data_files:
- split: 2023_08_09T20_58_39.874596
path:
- '**/details_harness|hendrycksTest-college_physics|5_2023-08-09T20:58:39.874596.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_physics|5_2023-08-09T20:58:39.874596.parquet'
- config_name: harness_hendrycksTest_computer_security_5
data_files:
- split: 2023_08_09T20_58_39.874596
path:
- '**/details_harness|hendrycksTest-computer_security|5_2023-08-09T20:58:39.874596.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-computer_security|5_2023-08-09T20:58:39.874596.parquet'
- config_name: harness_hendrycksTest_conceptual_physics_5
data_files:
- split: 2023_08_09T20_58_39.874596
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-08-09T20:58:39.874596.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-08-09T20:58:39.874596.parquet'
- config_name: harness_hendrycksTest_econometrics_5
data_files:
- split: 2023_08_09T20_58_39.874596
path:
- '**/details_harness|hendrycksTest-econometrics|5_2023-08-09T20:58:39.874596.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-econometrics|5_2023-08-09T20:58:39.874596.parquet'
- config_name: harness_hendrycksTest_electrical_engineering_5
data_files:
- split: 2023_08_09T20_58_39.874596
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-08-09T20:58:39.874596.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-08-09T20:58:39.874596.parquet'
- config_name: harness_hendrycksTest_elementary_mathematics_5
data_files:
- split: 2023_08_09T20_58_39.874596
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-08-09T20:58:39.874596.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-08-09T20:58:39.874596.parquet'
- config_name: harness_hendrycksTest_formal_logic_5
data_files:
- split: 2023_08_09T20_58_39.874596
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2023-08-09T20:58:39.874596.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2023-08-09T20:58:39.874596.parquet'
- config_name: harness_hendrycksTest_global_facts_5
data_files:
- split: 2023_08_09T20_58_39.874596
path:
- '**/details_harness|hendrycksTest-global_facts|5_2023-08-09T20:58:39.874596.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-global_facts|5_2023-08-09T20:58:39.874596.parquet'
- config_name: harness_hendrycksTest_high_school_biology_5
data_files:
- split: 2023_08_09T20_58_39.874596
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-08-09T20:58:39.874596.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-08-09T20:58:39.874596.parquet'
- config_name: harness_hendrycksTest_high_school_chemistry_5
data_files:
- split: 2023_08_09T20_58_39.874596
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-08-09T20:58:39.874596.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-08-09T20:58:39.874596.parquet'
- config_name: harness_hendrycksTest_high_school_computer_science_5
data_files:
- split: 2023_08_09T20_58_39.874596
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-08-09T20:58:39.874596.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-08-09T20:58:39.874596.parquet'
- config_name: harness_hendrycksTest_high_school_european_history_5
data_files:
- split: 2023_08_09T20_58_39.874596
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-08-09T20:58:39.874596.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-08-09T20:58:39.874596.parquet'
- config_name: harness_hendrycksTest_high_school_geography_5
data_files:
- split: 2023_08_09T20_58_39.874596
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-08-09T20:58:39.874596.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-08-09T20:58:39.874596.parquet'
- config_name: harness_hendrycksTest_high_school_government_and_politics_5
data_files:
- split: 2023_08_09T20_58_39.874596
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-08-09T20:58:39.874596.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-08-09T20:58:39.874596.parquet'
- config_name: harness_hendrycksTest_high_school_macroeconomics_5
data_files:
- split: 2023_08_09T20_58_39.874596
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-08-09T20:58:39.874596.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-08-09T20:58:39.874596.parquet'
- config_name: harness_hendrycksTest_high_school_mathematics_5
data_files:
- split: 2023_08_09T20_58_39.874596
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-08-09T20:58:39.874596.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-08-09T20:58:39.874596.parquet'
- config_name: harness_hendrycksTest_high_school_microeconomics_5
data_files:
- split: 2023_08_09T20_58_39.874596
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-08-09T20:58:39.874596.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-08-09T20:58:39.874596.parquet'
- config_name: harness_hendrycksTest_high_school_physics_5
data_files:
- split: 2023_08_09T20_58_39.874596
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-08-09T20:58:39.874596.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-08-09T20:58:39.874596.parquet'
- config_name: harness_hendrycksTest_high_school_psychology_5
data_files:
- split: 2023_08_09T20_58_39.874596
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-08-09T20:58:39.874596.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-08-09T20:58:39.874596.parquet'
- config_name: harness_hendrycksTest_high_school_statistics_5
data_files:
- split: 2023_08_09T20_58_39.874596
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-08-09T20:58:39.874596.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-08-09T20:58:39.874596.parquet'
- config_name: harness_hendrycksTest_high_school_us_history_5
data_files:
- split: 2023_08_09T20_58_39.874596
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-08-09T20:58:39.874596.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-08-09T20:58:39.874596.parquet'
- config_name: harness_hendrycksTest_high_school_world_history_5
data_files:
- split: 2023_08_09T20_58_39.874596
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-08-09T20:58:39.874596.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-08-09T20:58:39.874596.parquet'
- config_name: harness_hendrycksTest_human_aging_5
data_files:
- split: 2023_08_09T20_58_39.874596
path:
- '**/details_harness|hendrycksTest-human_aging|5_2023-08-09T20:58:39.874596.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_aging|5_2023-08-09T20:58:39.874596.parquet'
- config_name: harness_hendrycksTest_human_sexuality_5
data_files:
- split: 2023_08_09T20_58_39.874596
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-08-09T20:58:39.874596.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-08-09T20:58:39.874596.parquet'
- config_name: harness_hendrycksTest_international_law_5
data_files:
- split: 2023_08_09T20_58_39.874596
path:
- '**/details_harness|hendrycksTest-international_law|5_2023-08-09T20:58:39.874596.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-international_law|5_2023-08-09T20:58:39.874596.parquet'
- config_name: harness_hendrycksTest_jurisprudence_5
data_files:
- split: 2023_08_09T20_58_39.874596
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-08-09T20:58:39.874596.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-08-09T20:58:39.874596.parquet'
- config_name: harness_hendrycksTest_logical_fallacies_5
data_files:
- split: 2023_08_09T20_58_39.874596
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-08-09T20:58:39.874596.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-08-09T20:58:39.874596.parquet'
- config_name: harness_hendrycksTest_machine_learning_5
data_files:
- split: 2023_08_09T20_58_39.874596
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2023-08-09T20:58:39.874596.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2023-08-09T20:58:39.874596.parquet'
- config_name: harness_hendrycksTest_management_5
data_files:
- split: 2023_08_09T20_58_39.874596
path:
- '**/details_harness|hendrycksTest-management|5_2023-08-09T20:58:39.874596.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-management|5_2023-08-09T20:58:39.874596.parquet'
- config_name: harness_hendrycksTest_marketing_5
data_files:
- split: 2023_08_09T20_58_39.874596
path:
- '**/details_harness|hendrycksTest-marketing|5_2023-08-09T20:58:39.874596.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-marketing|5_2023-08-09T20:58:39.874596.parquet'
- config_name: harness_hendrycksTest_medical_genetics_5
data_files:
- split: 2023_08_09T20_58_39.874596
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-08-09T20:58:39.874596.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-08-09T20:58:39.874596.parquet'
- config_name: harness_hendrycksTest_miscellaneous_5
data_files:
- split: 2023_08_09T20_58_39.874596
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-08-09T20:58:39.874596.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-08-09T20:58:39.874596.parquet'
- config_name: harness_hendrycksTest_moral_disputes_5
data_files:
- split: 2023_08_09T20_58_39.874596
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-08-09T20:58:39.874596.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-08-09T20:58:39.874596.parquet'
- config_name: harness_hendrycksTest_moral_scenarios_5
data_files:
- split: 2023_08_09T20_58_39.874596
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-08-09T20:58:39.874596.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-08-09T20:58:39.874596.parquet'
- config_name: harness_hendrycksTest_nutrition_5
data_files:
- split: 2023_08_09T20_58_39.874596
path:
- '**/details_harness|hendrycksTest-nutrition|5_2023-08-09T20:58:39.874596.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-nutrition|5_2023-08-09T20:58:39.874596.parquet'
- config_name: harness_hendrycksTest_philosophy_5
data_files:
- split: 2023_08_09T20_58_39.874596
path:
- '**/details_harness|hendrycksTest-philosophy|5_2023-08-09T20:58:39.874596.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-philosophy|5_2023-08-09T20:58:39.874596.parquet'
- config_name: harness_hendrycksTest_prehistory_5
data_files:
- split: 2023_08_09T20_58_39.874596
path:
- '**/details_harness|hendrycksTest-prehistory|5_2023-08-09T20:58:39.874596.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-prehistory|5_2023-08-09T20:58:39.874596.parquet'
- config_name: harness_hendrycksTest_professional_accounting_5
data_files:
- split: 2023_08_09T20_58_39.874596
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-08-09T20:58:39.874596.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-08-09T20:58:39.874596.parquet'
- config_name: harness_hendrycksTest_professional_law_5
data_files:
- split: 2023_08_09T20_58_39.874596
path:
- '**/details_harness|hendrycksTest-professional_law|5_2023-08-09T20:58:39.874596.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_law|5_2023-08-09T20:58:39.874596.parquet'
- config_name: harness_hendrycksTest_professional_medicine_5
data_files:
- split: 2023_08_09T20_58_39.874596
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-08-09T20:58:39.874596.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-08-09T20:58:39.874596.parquet'
- config_name: harness_hendrycksTest_professional_psychology_5
data_files:
- split: 2023_08_09T20_58_39.874596
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-08-09T20:58:39.874596.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-08-09T20:58:39.874596.parquet'
- config_name: harness_hendrycksTest_public_relations_5
data_files:
- split: 2023_08_09T20_58_39.874596
path:
- '**/details_harness|hendrycksTest-public_relations|5_2023-08-09T20:58:39.874596.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-public_relations|5_2023-08-09T20:58:39.874596.parquet'
- config_name: harness_hendrycksTest_security_studies_5
data_files:
- split: 2023_08_09T20_58_39.874596
path:
- '**/details_harness|hendrycksTest-security_studies|5_2023-08-09T20:58:39.874596.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-security_studies|5_2023-08-09T20:58:39.874596.parquet'
- config_name: harness_hendrycksTest_sociology_5
data_files:
- split: 2023_08_09T20_58_39.874596
path:
- '**/details_harness|hendrycksTest-sociology|5_2023-08-09T20:58:39.874596.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-sociology|5_2023-08-09T20:58:39.874596.parquet'
- config_name: harness_hendrycksTest_us_foreign_policy_5
data_files:
- split: 2023_08_09T20_58_39.874596
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-08-09T20:58:39.874596.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-08-09T20:58:39.874596.parquet'
- config_name: harness_hendrycksTest_virology_5
data_files:
- split: 2023_08_09T20_58_39.874596
path:
- '**/details_harness|hendrycksTest-virology|5_2023-08-09T20:58:39.874596.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-virology|5_2023-08-09T20:58:39.874596.parquet'
- config_name: harness_hendrycksTest_world_religions_5
data_files:
- split: 2023_08_09T20_58_39.874596
path:
- '**/details_harness|hendrycksTest-world_religions|5_2023-08-09T20:58:39.874596.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-world_religions|5_2023-08-09T20:58:39.874596.parquet'
- config_name: harness_truthfulqa_mc_0
data_files:
- split: 2023_08_09T20_58_39.874596
path:
- '**/details_harness|truthfulqa:mc|0_2023-08-09T20:58:39.874596.parquet'
- split: latest
path:
- '**/details_harness|truthfulqa:mc|0_2023-08-09T20:58:39.874596.parquet'
- config_name: harness_winogrande_5
data_files:
- split: 2023_09_17T04_32_53.780547
path:
- '**/details_harness|winogrande|5_2023-09-17T04-32-53.780547.parquet'
- split: latest
path:
- '**/details_harness|winogrande|5_2023-09-17T04-32-53.780547.parquet'
- config_name: results
data_files:
- split: 2023_08_09T20_58_39.874596
path:
- results_2023-08-09T20:58:39.874596.parquet
- split: 2023_09_17T04_32_53.780547
path:
- results_2023-09-17T04-32-53.780547.parquet
- split: latest
path:
- results_2023-09-17T04-32-53.780547.parquet
---
# Dataset Card for Evaluation run of l3utterfly/llama2-7b-layla
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/l3utterfly/llama2-7b-layla
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** clementine@hf.co
### Dataset Summary
Dataset automatically created during the evaluation run of model [l3utterfly/llama2-7b-layla](https://huggingface.co/l3utterfly/llama2-7b-layla) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 2 runs. Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "latest" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
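As a side note, the split names in the YAML header above appear to be just the run timestamps with `-` and `:` replaced by `_` (an observation from this card's config, not an official API):

```python
def timestamp_to_split(ts: str) -> str:
    # '-' and ':' become '_'; the fractional-seconds dot is kept
    return ts.replace("-", "_").replace(":", "_")

print(timestamp_to_split("2023-09-17T04:32:53.780547"))
# → 2023_09_17T04_32_53.780547
```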
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_l3utterfly__llama2-7b-layla",
"harness_winogrande_5",
             split="latest")
```
## Latest results
These are the [latest results from run 2023-09-17T04:32:53.780547](https://huggingface.co/datasets/open-llm-leaderboard/details_l3utterfly__llama2-7b-layla/blob/main/results_2023-09-17T04-32-53.780547.json) (note that there might be results for other tasks in the repository if successive evals didn't cover the same set of tasks; you can find each of them under the "latest" split of the corresponding eval):
```python
{
"all": {
"em": 0.0025167785234899327,
"em_stderr": 0.0005131152834514622,
"f1": 0.06570889261744958,
"f1_stderr": 0.0014756748283544432,
"acc": 0.4130167852161326,
"acc_stderr": 0.009994364317722083
},
"harness|drop|3": {
"em": 0.0025167785234899327,
"em_stderr": 0.0005131152834514622,
"f1": 0.06570889261744958,
"f1_stderr": 0.0014756748283544432
},
"harness|gsm8k|5": {
"acc": 0.08491281273692192,
"acc_stderr": 0.007678212824450795
},
"harness|winogrande|5": {
"acc": 0.7411207576953434,
"acc_stderr": 0.012310515810993372
}
}
```
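Once downloaded, the results file is plain JSON; as a minimal sketch (over a hand-copied subset of the payload above), the aggregate accuracy can be read out like so:

```python
import json

# hand-copied subset of the results JSON shown above
raw = """
{
  "all": {"acc": 0.4130167852161326, "acc_stderr": 0.009994364317722083},
  "harness|winogrande|5": {"acc": 0.7411207576953434, "acc_stderr": 0.012310515810993372}
}
"""
results = json.loads(raw)
acc = results["all"]["acc"]
print(f"aggregate acc: {acc:.4f}")  # → aggregate acc: 0.4130
```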
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] |
delayedkarma/dark_matter_instruction_qa | ---
dataset_info:
features:
- name: input
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 322652
num_examples: 1322
download_size: 119700
dataset_size: 322652
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
CyberHarem/mikoto_fireemblem | ---
license: mit
task_categories:
- text-to-image
tags:
- art
- not-for-all-audiences
size_categories:
- n<1K
---
# Dataset of mikoto (Fire Emblem)
This is the dataset of mikoto (Fire Emblem), containing 23 images and their tags.
The core tags of this character are `black_hair, long_hair, breasts, mole, ponytail, brown_eyes, mole_under_mouth, large_breasts, hair_ornament`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:----------|:-------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 23 | 24.96 MiB | [Download](https://huggingface.co/datasets/CyberHarem/mikoto_fireemblem/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 23 | 15.25 MiB | [Download](https://huggingface.co/datasets/CyberHarem/mikoto_fireemblem/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 50 | 28.49 MiB | [Download](https://huggingface.co/datasets/CyberHarem/mikoto_fireemblem/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 23 | 21.63 MiB | [Download](https://huggingface.co/datasets/CyberHarem/mikoto_fireemblem/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 50 | 35.60 MiB | [Download](https://huggingface.co/datasets/CyberHarem/mikoto_fireemblem/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need it, just run the following code:
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/mikoto_fireemblem',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
## List of Clusters
List of tag clustering results; some outfits may be mined from these clusters.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------------|
| 0 | 23 |  |  |  |  |  | 1girl, solo, smile, looking_at_viewer |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | solo | smile | looking_at_viewer |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:-------|:--------|:--------------------|
| 0 | 23 |  |  |  |  |  | X | X | X | X |
|
liaad/translation_sample_lid | ---
dataset_info:
- config_name: ai2_arc
features:
- name: question
dtype: string
- name: question_translated
struct:
- name: Helsinki-NLP/opus-mt-tc-big-en-pt
struct:
- name: prediction
dtype: float64
- name: text
dtype: string
- name: google_translation
struct:
- name: prediction
dtype: float64
- name: text
dtype: string
- name: libre_translation
struct:
- name: prediction
dtype: float64
- name: text
dtype: string
- name: choices
sequence: string
- name: choices_translated
list:
- name: Helsinki-NLP/opus-mt-tc-big-en-pt
struct:
- name: prediction
dtype: float64
- name: text
dtype: string
- name: google_translation
struct:
- name: prediction
dtype: float64
- name: text
dtype: string
- name: libre_translation
struct:
- name: prediction
dtype: float64
- name: text
dtype: string
splits:
- name: train
num_bytes: 809
num_examples: 1
download_size: 11996
dataset_size: 809
- config_name: boolq
features:
- name: question
dtype: string
- name: question_translated
struct:
- name: Helsinki-NLP/opus-mt-tc-big-en-pt
struct:
- name: prediction
dtype: float64
- name: text
dtype: string
- name: google_translation
struct:
- name: prediction
dtype: float64
- name: text
dtype: string
- name: libre_translation
struct:
- name: prediction
dtype: float64
- name: text
dtype: string
- name: passage
dtype: string
- name: passage_translated
struct:
- name: Helsinki-NLP/opus-mt-tc-big-en-pt
struct:
- name: prediction
dtype: float64
- name: text
dtype: string
- name: google_translation
struct:
- name: prediction
dtype: float64
- name: text
dtype: string
- name: libre_translation
struct:
- name: prediction
dtype: float64
- name: text
dtype: string
splits:
- name: train
num_bytes: 1386
num_examples: 1
download_size: 17972
dataset_size: 1386
- config_name: gsm8k
features:
- name: question
dtype: string
- name: question_translated
struct:
- name: Helsinki-NLP/opus-mt-tc-big-en-pt
struct:
- name: prediction
dtype: float64
- name: text
dtype: string
- name: google_translation
struct:
- name: prediction
dtype: float64
- name: text
dtype: string
- name: libre_translation
struct:
- name: prediction
dtype: float64
- name: text
dtype: string
- name: answer
dtype: string
- name: answer_translated
struct:
- name: Helsinki-NLP/opus-mt-tc-big-en-pt
struct:
- name: prediction
dtype: float64
- name: text
dtype: string
- name: google_translation
struct:
- name: prediction
dtype: float64
- name: text
dtype: string
- name: libre_translation
struct:
- name: prediction
dtype: float64
- name: text
dtype: string
splits:
- name: train
num_bytes: 2297
num_examples: 1
download_size: 24008
dataset_size: 2297
- config_name: mbpp
features:
- name: text
dtype: string
- name: text_translated
struct:
- name: Helsinki-NLP/opus-mt-tc-big-en-pt
struct:
- name: prediction
dtype: float64
- name: text
dtype: string
- name: google_translation
struct:
- name: prediction
dtype: float64
- name: text
dtype: string
- name: libre_translation
struct:
- name: prediction
dtype: float64
- name: text
dtype: string
splits:
- name: train
num_bytes: 382
num_examples: 1
download_size: 6927
dataset_size: 382
- config_name: natural_questions_parsed
features:
- name: document
dtype: string
- name: document_translated
struct:
- name: Helsinki-NLP/opus-mt-tc-big-en-pt
struct:
- name: prediction
dtype: float64
- name: text
dtype: string
- name: google_translation
struct:
- name: prediction
dtype: float64
- name: text
dtype: string
- name: libre_translation
struct:
- name: prediction
dtype: float64
- name: text
dtype: string
- name: question
dtype: string
- name: question_translated
struct:
- name: Helsinki-NLP/opus-mt-tc-big-en-pt
struct:
- name: prediction
dtype: float64
- name: text
dtype: string
- name: google_translation
struct:
- name: prediction
dtype: float64
- name: text
dtype: string
- name: libre_translation
struct:
- name: prediction
dtype: float64
- name: text
dtype: string
- name: candidates
sequence: string
- name: candidates_translated
list:
- name: Helsinki-NLP/opus-mt-tc-big-en-pt
struct:
- name: prediction
dtype: float64
- name: text
dtype: string
- name: google_translation
struct:
- name: prediction
dtype: float64
- name: text
dtype: string
- name: libre_translation
struct:
- name: prediction
dtype: float64
- name: text
dtype: string
- name: long_answer
dtype: string
- name: long_answer_translated
struct:
- name: Helsinki-NLP/opus-mt-tc-big-en-pt
struct:
- name: prediction
dtype: float64
- name: text
dtype: string
- name: google_translation
struct:
- name: prediction
dtype: float64
- name: text
dtype: string
- name: libre_translation
struct:
- name: prediction
dtype: float64
- name: text
dtype: string
splits:
- name: train
num_bytes: 5543
num_examples: 1
download_size: 47553
dataset_size: 5543
- config_name: openbookqa
features:
- name: question_stem
dtype: string
- name: question_stem_translated
struct:
- name: Helsinki-NLP/opus-mt-tc-big-en-pt
struct:
- name: prediction
dtype: float64
- name: text
dtype: string
- name: google_translation
struct:
- name: prediction
dtype: float64
- name: text
dtype: string
- name: libre_translation
struct:
- name: prediction
dtype: float64
- name: text
dtype: string
- name: choices
sequence: string
- name: choices_translated
list:
- name: Helsinki-NLP/opus-mt-tc-big-en-pt
struct:
- name: prediction
dtype: float64
- name: text
dtype: string
- name: google_translation
struct:
- name: prediction
dtype: float64
- name: text
dtype: string
- name: libre_translation
struct:
- name: prediction
dtype: float64
- name: text
dtype: string
- name: fact1
dtype: string
- name: fact1_translated
struct:
- name: Helsinki-NLP/opus-mt-tc-big-en-pt
struct:
- name: prediction
dtype: float64
- name: text
dtype: string
- name: google_translation
struct:
- name: prediction
dtype: float64
- name: text
dtype: string
- name: libre_translation
struct:
- name: prediction
dtype: float64
- name: text
dtype: string
splits:
- name: train
num_bytes: 920
num_examples: 1
download_size: 16942
dataset_size: 920
- config_name: quac
features:
- name: background
dtype: string
- name: background_translated
struct:
- name: Helsinki-NLP/opus-mt-tc-big-en-pt
struct:
- name: prediction
dtype: float64
- name: text
dtype: string
- name: google_translation
struct:
- name: prediction
dtype: float64
- name: text
dtype: string
- name: libre_translation
struct:
- name: prediction
dtype: float64
- name: text
dtype: string
- name: context
dtype: string
- name: context_translated
struct:
- name: Helsinki-NLP/opus-mt-tc-big-en-pt
struct:
- name: prediction
dtype: float64
- name: text
dtype: string
- name: google_translation
struct:
- name: prediction
dtype: float64
- name: text
dtype: string
- name: libre_translation
struct:
- name: prediction
dtype: float64
- name: text
dtype: string
- name: questions
sequence: string
- name: questions_translated
list:
- name: Helsinki-NLP/opus-mt-tc-big-en-pt
struct:
- name: prediction
dtype: float64
- name: text
dtype: string
- name: google_translation
struct:
- name: prediction
dtype: float64
- name: text
dtype: string
- name: libre_translation
struct:
- name: prediction
dtype: float64
- name: text
dtype: string
- name: orig_answers
sequence: string
- name: orig_answers_translated
list:
- name: Helsinki-NLP/opus-mt-tc-big-en-pt
struct:
- name: prediction
dtype: float64
- name: text
dtype: string
- name: google_translation
struct:
- name: prediction
dtype: float64
- name: text
dtype: string
- name: libre_translation
struct:
- name: prediction
dtype: float64
- name: text
dtype: string
splits:
- name: train
num_bytes: 11406
num_examples: 1
download_size: 85011
dataset_size: 11406
- config_name: social_i_qa
features:
- name: context
dtype: string
- name: context_translated
struct:
- name: Helsinki-NLP/opus-mt-tc-big-en-pt
struct:
- name: prediction
dtype: float64
- name: text
dtype: string
- name: google_translation
struct:
- name: prediction
dtype: float64
- name: text
dtype: string
- name: libre_translation
struct:
- name: prediction
dtype: float64
- name: text
dtype: string
- name: question
dtype: string
- name: question_translated
struct:
- name: Helsinki-NLP/opus-mt-tc-big-en-pt
struct:
- name: prediction
dtype: float64
- name: text
dtype: string
- name: google_translation
struct:
- name: prediction
dtype: float64
- name: text
dtype: string
- name: libre_translation
struct:
- name: prediction
dtype: float64
- name: text
dtype: string
- name: answerA
dtype: string
- name: answerA_translated
struct:
- name: Helsinki-NLP/opus-mt-tc-big-en-pt
struct:
- name: prediction
dtype: float64
- name: text
dtype: string
- name: google_translation
struct:
- name: prediction
dtype: float64
- name: text
dtype: string
- name: libre_translation
struct:
- name: prediction
dtype: float64
- name: text
dtype: string
- name: answerB
dtype: string
- name: answerB_translated
struct:
- name: Helsinki-NLP/opus-mt-tc-big-en-pt
struct:
- name: prediction
dtype: float64
- name: text
dtype: string
- name: google_translation
struct:
- name: prediction
dtype: float64
- name: text
dtype: string
- name: libre_translation
struct:
- name: prediction
dtype: float64
- name: text
dtype: string
- name: answerC
dtype: string
- name: answerC_translated
struct:
- name: Helsinki-NLP/opus-mt-tc-big-en-pt
struct:
- name: prediction
dtype: float64
- name: text
dtype: string
- name: google_translation
struct:
- name: prediction
dtype: float64
- name: text
dtype: string
- name: libre_translation
struct:
- name: prediction
dtype: float64
- name: text
dtype: string
splits:
- name: train
num_bytes: 797
num_examples: 1
download_size: 25730
dataset_size: 797
- config_name: squad_v1_pt
features:
- name: context
dtype: string
- name: context_translated
struct:
- name: Helsinki-NLP/opus-mt-tc-big-en-pt
struct:
- name: prediction
dtype: float64
- name: text
dtype: string
- name: google_translation
struct:
- name: prediction
dtype: float64
- name: text
dtype: string
- name: libre_translation
struct:
- name: prediction
dtype: float64
- name: text
dtype: string
- name: question
dtype: string
- name: question_translated
struct:
- name: Helsinki-NLP/opus-mt-tc-big-en-pt
struct:
- name: prediction
dtype: float64
- name: text
dtype: string
- name: google_translation
struct:
- name: prediction
dtype: float64
- name: text
dtype: string
- name: libre_translation
struct:
- name: prediction
dtype: float64
- name: text
dtype: string
- name: answers
sequence: string
- name: answers_translated
list:
- name: Helsinki-NLP/opus-mt-tc-big-en-pt
struct:
- name: prediction
dtype: float64
- name: text
dtype: string
- name: google_translation
struct:
- name: prediction
dtype: float64
- name: text
dtype: string
- name: libre_translation
struct:
- name: prediction
dtype: float64
- name: text
dtype: string
splits:
- name: train
num_bytes: 1659
num_examples: 1
download_size: 24226
dataset_size: 1659
- config_name: winogrande
features:
- name: sentence
dtype: string
- name: sentence_translated
struct:
- name: Helsinki-NLP/opus-mt-tc-big-en-pt
struct:
- name: prediction
dtype: float64
- name: text
dtype: string
- name: google_translation
struct:
- name: prediction
dtype: float64
- name: text
dtype: string
- name: libre_translation
struct:
- name: prediction
dtype: float64
- name: text
dtype: string
- name: option1
dtype: string
- name: option1_translated
struct:
- name: Helsinki-NLP/opus-mt-tc-big-en-pt
struct:
- name: prediction
dtype: float64
- name: text
dtype: string
- name: google_translation
struct:
- name: prediction
dtype: float64
- name: text
dtype: string
- name: libre_translation
struct:
- name: prediction
dtype: float64
- name: text
dtype: string
- name: option2
dtype: string
- name: option2_translated
struct:
- name: Helsinki-NLP/opus-mt-tc-big-en-pt
struct:
- name: prediction
dtype: float64
- name: text
dtype: string
- name: google_translation
struct:
- name: prediction
dtype: float64
- name: text
dtype: string
- name: libre_translation
struct:
- name: prediction
dtype: float64
- name: text
dtype: string
splits:
- name: train
num_bytes: 749
num_examples: 1
download_size: 17465
dataset_size: 749
configs:
- config_name: ai2_arc
data_files:
- split: train
path: ai2_arc/train-*
- config_name: boolq
data_files:
- split: train
path: boolq/train-*
- config_name: gsm8k
data_files:
- split: train
path: gsm8k/train-*
- config_name: mbpp
data_files:
- split: train
path: mbpp/train-*
- config_name: natural_questions_parsed
data_files:
- split: train
path: natural_questions_parsed/train-*
- config_name: openbookqa
data_files:
- split: train
path: openbookqa/train-*
- config_name: quac
data_files:
- split: train
path: quac/train-*
- config_name: social_i_qa
data_files:
- split: train
path: social_i_qa/train-*
- config_name: squad_v1_pt
data_files:
- split: train
path: squad_v1_pt/train-*
- config_name: winogrande
data_files:
- split: train
path: winogrande/train-*
---
|
ainzOulgun/fsajd | ---
license: openrail
---
|
Limour/H2Retrieval | ---
license: cc-by-nc-sa-4.0
language:
- zh
---
A Retrieval evaluation dataset for the [h-corpus](https://huggingface.co/datasets/a686d380/h-corpus-2023) domain.
# Leaderboard
## new/data_sample1k
+ https://www.kaggle.com/code/reginliu/h2retrieval
| Model | NDCG@5 | NDCG@10 | NDCG@15 | NDCG@20 | NDCG@30 |
|-------|---------|---------|---------|---------|---------|
| [IYun-large-zh](https://huggingface.co/Erin/IYun-large-zh) | 66.70±27.29 | 59.67±26.05 | 56.69±25.36 | 56.58±25.32 | 57.97±25.48 |
| [acge_text_embedding](https://huggingface.co/aspire/acge_text_embedding) | 64.60±28.04 | 57.80±25.88 | 55.54±25.166 | 55.77±25.17 | 57.31±25.18 |
| [bce-embedding-base_v1](https://huggingface.co/maidalun1020/bce-embedding-base_v1) | 60.66±28.37 | 53.44±26.13 | 51.11±25.10 | 51.18±25.16 | 52.84±25.45 |
| [Dmeta-embedding](https://huggingface.co/DMetaSoul/Dmeta-embedding) | 52.12±29.83 | 45.38±26.65 | 43.20±25.33 | 43.41±25.10 | 44.87±25.42 |
| random | 0.07±1.24 | 0.09±1.01 | 0.10±0.97 | 0.12±0.99 | 0.14±1.03 |
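For reference, the NDCG@k scores above can be computed with the standard formula, sketched below (a minimal illustration; the leaderboard's actual evaluation code is in the linked Kaggle notebook, and the tabled values are presumably averaged over queries and scaled to percent):

```python
import math

def dcg_at_k(relevances, k):
    # DCG@k = sum over ranks i = 1..k of rel_i / log2(i + 1)
    return sum(rel / math.log2(i + 1)
               for i, rel in enumerate(relevances[:k], start=1))

def ndcg_at_k(relevances, k):
    # normalize by the DCG of the ideally ordered ranking
    ideal = dcg_at_k(sorted(relevances, reverse=True), k)
    return dcg_at_k(relevances, k) / ideal if ideal > 0 else 0.0

# relevance of the top-5 retrieved documents for one query
print(round(ndcg_at_k([0, 1, 1, 0, 1], k=5), 4))  # → 0.7123
```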
## data_sample5k
| Model | NDCG@10 |
|-------|---------|
| [IYun-large-zh](https://huggingface.co/Erin/IYun-large-zh) | 38.75 |
| [tao-8k](https://huggingface.co/amu/tao-8k) | 38.37 |
| [bce-embedding-base_v1](https://huggingface.co/maidalun1020/bce-embedding-base_v1) | 35.81 |
| [acge-large-zh](https://huggingface.co/aspire/acge-large-zh) | 34.26 |
| [gte-large-zh](https://huggingface.co/thenlper/gte-large-zh) | 33.07 |
| [PEG](https://huggingface.co/TownsWu/PEG) | 24.82 |
| [Dmeta-embedding](https://huggingface.co/DMetaSoul/Dmeta-embedding) | 23.45 | |
fathyshalab/massive_transport-de-DE | ---
dataset_info:
features:
- name: id
dtype: string
- name: locale
dtype: string
- name: partition
dtype: string
- name: scenario
dtype:
class_label:
names:
'0': social
'1': transport
'2': calendar
'3': play
'4': news
'5': datetime
'6': recommendation
'7': email
'8': iot
'9': general
'10': audio
'11': lists
'12': qa
'13': cooking
'14': takeaway
'15': music
'16': alarm
'17': weather
- name: intent
dtype:
class_label:
names:
'0': datetime_query
'1': iot_hue_lightchange
'2': transport_ticket
'3': takeaway_query
'4': qa_stock
'5': general_greet
'6': recommendation_events
'7': music_dislikeness
'8': iot_wemo_off
'9': cooking_recipe
'10': qa_currency
'11': transport_traffic
'12': general_quirky
'13': weather_query
'14': audio_volume_up
'15': email_addcontact
'16': takeaway_order
'17': email_querycontact
'18': iot_hue_lightup
'19': recommendation_locations
'20': play_audiobook
'21': lists_createoradd
'22': news_query
'23': alarm_query
'24': iot_wemo_on
'25': general_joke
'26': qa_definition
'27': social_query
'28': music_settings
'29': audio_volume_other
'30': calendar_remove
'31': iot_hue_lightdim
'32': calendar_query
'33': email_sendemail
'34': iot_cleaning
'35': audio_volume_down
'36': play_radio
'37': cooking_query
'38': datetime_convert
'39': qa_maths
'40': iot_hue_lightoff
'41': iot_hue_lighton
'42': transport_query
'43': music_likeness
'44': email_query
'45': play_music
'46': audio_volume_mute
'47': social_post
'48': alarm_set
'49': qa_factoid
'50': calendar_set
'51': play_game
'52': alarm_remove
'53': lists_remove
'54': transport_taxi
'55': recommendation_movies
'56': iot_coffee
'57': music_query
'58': play_podcasts
'59': lists_query
- name: text
dtype: string
- name: annot_utt
dtype: string
- name: worker_id
dtype: string
- name: slot_method
sequence:
- name: slot
dtype: string
- name: method
dtype: string
- name: judgments
sequence:
- name: worker_id
dtype: string
- name: intent_score
dtype: int8
- name: slots_score
dtype: int8
- name: grammar_score
dtype: int8
- name: spelling_score
dtype: int8
- name: language_identification
dtype: string
- name: label_name
dtype: string
- name: label
dtype: int64
splits:
- name: train
num_bytes: 191209
num_examples: 571
- name: validation
num_bytes: 36883
num_examples: 110
- name: test
num_bytes: 41087
num_examples: 124
download_size: 80546
dataset_size: 269179
---
# Dataset Card for "massive_transport-de-DE"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
umarzein/small_sums | ---
license: mit
---
|
knowrohit07/saraswati-stem | ---
license: openrail
---

### Purpose: This dataset contains a series of question-and-answer pairs related to various STEM (Science, Technology, Engineering, Mathematics) topics. The dataset is designed to train and evaluate models for conversational agents, particularly in educational and informational contexts.
Data Collection and Annotation: Samples are converted into a multi-turn conversational format, with a user posing questions and an assistant providing detailed, scientifically accurate answers.
### Key Features:
Format: ChatML-ish.
Synthesis: curated with GPT-4 + my own FINETUNED Saraswati LLM, in a 75:25 ratio.
Training Flag: Each record includes a 'do_train' boolean flag, indicating whether it should be EXTRAPOLATED for reward model training.
Seed Tasks: consists of highly selective samples from know_medical_dioalogues, plus structured samples from NCERT Exemplar & Google Scholar articles.
Future: using Nougat and LayoutLMv3 to parse a plethora of LaTeX-rich texts & scholarly articles to extend the "saraswati" dataset series.
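To make the "ChatML-ish" format concrete, a record might look roughly like the following (field names other than `do_train` are illustrative assumptions, not taken from the actual dataset):

```python
# hypothetical record layout -- only the 'do_train' flag is documented above
record = {
    "messages": [
        {"role": "user", "content": "Why does ice float on water?"},
        {"role": "assistant", "content": "Ice is less dense than liquid water because ..."},
    ],
    "do_train": True,  # whether this sample feeds reward-model training
}

# e.g. filter a corpus down to the records flagged for training
corpus = [record, {**record, "do_train": False}]
train_set = [r for r in corpus if r["do_train"]]
print(len(train_set))  # → 1
```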
### Usage and Limitations
Recommended Uses: This dataset is ideal for training LLMs, especially those focused on providing educational and informational assistance in STEM fields. Roleplays + insane potential for image model projections, i.e. for VLMs too.
Limitations: The dataset is specialized in STEM topics and may not cover general conversational aspects or non-STEM fields extensively. Sike, if trained with python or code reasoning, it can beat the oss leaders.
### Ethical Considerations
Data Privacy and Consent: All data collected are from simulated interactions and do not contain any personal information.
Bias and Fairness: The dataset focuses on factual information in STEM and aims to be unbiased in its scientific explanations. However, it's important to continually assess the model trained on this data for any unintended biases, especially in interpretation and response generation. |
MrPrometheus/books_full_mini | ---
license: mit
---
1 TB (960 GB), 19743 h., 3518 sp.
epinnock/dataset_with_gen | ---
dataset_info:
features:
- name: messagestring
dtype: string
- name: messages
list:
- name: content
dtype: string
- name: role
dtype: string
- name: uuid
dtype: string
- name: generated_outputs
struct:
- name: completion
dtype: string
- name: uuid
dtype: string
splits:
- name: train
num_bytes: 15643496
num_examples: 1291
download_size: 6177282
dataset_size: 15643496
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
one-sec-cv12/chunk_244 | ---
dataset_info:
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
splits:
- name: train
num_bytes: 17593208208.625
num_examples: 183171
download_size: 15207813211
dataset_size: 17593208208.625
---
# Dataset Card for "chunk_244"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
erfanvaredi/zephyr-7b-beta-invoices | ---
license: cdla-sharing-1.0
task_categories:
- text-classification
- conversational
- text-generation
language:
- en
tags:
- customer
- customer-support
pretty_name: Zephyr Invoice Dataset
size_categories:
- 10K<n<100K
---
# Zephyr-7B-Beta Customer Support Chatbot
<!-- Provide a quick summary of the dataset. -->
## Introduction
Welcome to the `zephyr-7b-beta-invoices` repository! This project leverages the Zephyr-7B-Beta model trained on the "Bitext-Customer-Support-LLM-Chatbot-Training-Dataset" to create a state-of-the-art customer support chatbot. Our goal is to provide an efficient and accurate chatbot for handling invoice-related queries.
## Dataset Overview
The dataset used for training is `bitext/Bitext-customer-support-llm-chatbot-training-dataset`. It's tailored for customer support scenarios, focusing on conversations related to invoice queries, clarifications, and resolutions.
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Forked from:** [Bitext Customer Support](https://huggingface.co/datasets/bitext/Bitext-customer-support-llm-chatbot-training-dataset)
- **Edited by:** [Erfan Varedi](https://linkedin.com/in/erfanvaredi)
## Model Details
[zephyr-7b-beta](https://huggingface.co/HuggingFaceH4/zephyr-7b-beta) is a cutting-edge language model; fine-tuned on this dataset, it can track context, manage conversation flow, and provide accurate responses to invoice-related inquiries.
## Installation and Setup
To load the dataset, install the necessary package:
```bash
pip install datasets
```
## Quick Start Guide
To load and inspect the dataset, run the following script:
```python
# Import libraries
from datasets import load_dataset
# Load dataset
ds = load_dataset('erfanvaredi/zephyr-7b-beta-invoices')
df_dataset = ds["train"].to_pandas()
df_dataset.head()
# Example of text
print(df_dataset['text'].iloc[85])
# <|system|>
# You are a support chatbot who helps with user queries chatbot who always responds in the style of a professional.</s>
# <|user|>
# what do I have to do to cancel order {{Order Number}}?</s>
# <|assistant|>
# I perceive that you're looking for guidance on canceling order {{Order Number}}. To assist you with the cancellation process, please follow these steps:
# 1. Log In to Your Account: Begin by signing in to our {{Online Company Portal Info}} using your credentials.
# 2. Locate Your Order: Once you're in, navigate to the '{{Online Order Interaction}}' or '{{Online Order Interaction}}' section.
# 3. Find the Relevant Order: Look for the order number {{Order Number}} within this section.
# 4. Initiate the Cancellation: Click on the order to open the details and locate the '{{Online Order Interaction}}' option.
# 5. Complete the Cancellation Process: Follow the prompts and provide any required information to finalize the cancellation.
# If you encounter any difficulties during this process or have any further questions, please don't hesitate to reach out to our dedicated team. We are available during {{Customer Support Hours}} at {{Customer Support Phone Number}}, or you can connect with us through the Live Chat feature on our {{Website URL}}. We value your satisfaction and are here to assist you every step of the way.
```
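Each row's `text` column stores the whole conversation in Zephyr's chat-template format, as in the printed example above. A minimal sketch for splitting such a string back into role-tagged turns, assuming only the `<|system|>`/`<|user|>`/`<|assistant|>` headers and `</s>` end-of-turn markers shown in the example:

```python
import re

def parse_zephyr_text(text: str):
    """Split a Zephyr chat-template string into role-tagged turns.

    Assumes <|role|> headers followed by a newline, with turns
    terminated by </s> (the final turn may be unterminated).
    """
    pattern = re.compile(r"<\|(system|user|assistant)\|>\n(.*?)(?:</s>|$)", re.S)
    return [
        {"role": role, "content": content.strip()}
        for role, content in pattern.findall(text)
    ]

sample = (
    "<|system|>\nYou are a support chatbot.</s>\n"
    "<|user|>\nHow do I cancel my order?</s>\n"
    "<|assistant|>\nSure, here are the steps..."
)
turns = parse_zephyr_text(sample)
```

This is handy for converting rows into the `messages` list format expected by most chat fine-tuning pipelines.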
## License
This project is licensed under the CDLA-Sharing-1.0 License.
## Contact
For questions or collaboration, please reach out to me at [Linkedin](https://linkedin.com/in/erfanvaredi). |
CyberHarem/gorizia_azurlane | ---
license: mit
task_categories:
- text-to-image
tags:
- art
- not-for-all-audiences
size_categories:
- n<1K
---
# Dataset of gorizia/ゴリツィア/戈里齐亚 (Azur Lane)
This is the dataset of gorizia/ゴリツィア/戈里齐亚 (Azur Lane), containing 30 images and their tags.
The core tags of this character are `blonde_hair, blue_eyes, long_hair, breasts, large_breasts, bangs, hairband, very_long_hair`, which are pruned in this dataset.
Images are crawled from many sites (e.g., danbooru, pixiv, zerochan); the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:----------|:------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 30 | 57.97 MiB | [Download](https://huggingface.co/datasets/CyberHarem/gorizia_azurlane/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 30 | 26.30 MiB | [Download](https://huggingface.co/datasets/CyberHarem/gorizia_azurlane/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 71 | 56.34 MiB | [Download](https://huggingface.co/datasets/CyberHarem/gorizia_azurlane/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 30 | 47.26 MiB | [Download](https://huggingface.co/datasets/CyberHarem/gorizia_azurlane/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 71 | 89.68 MiB | [Download](https://huggingface.co/datasets/CyberHarem/gorizia_azurlane/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need it, run the following code:
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/gorizia_azurlane',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
## List of Clusters
List of tag clustering results; some outfits may be mined from these clusters.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 9 |  |  |  |  |  | 1girl, solo, blush, looking_at_viewer, white_shirt, thighs, cleavage, collared_shirt, feet, garter_straps, id_card, indoors, no_shoes, soles, toes, black_thighhighs, office_lady, sitting, twintails, hair_ribbon, lanyard, legs, long_sleeves, miniskirt, office_chair, plaid_skirt |
| 1 | 13 |  |  |  |  |  | 1girl, looking_at_viewer, solo, cape, epaulettes, blush, sideboob, white_thighhighs, dress, garter_straps, simple_background, white_background |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | solo | blush | looking_at_viewer | white_shirt | thighs | cleavage | collared_shirt | feet | garter_straps | id_card | indoors | no_shoes | soles | toes | black_thighhighs | office_lady | sitting | twintails | hair_ribbon | lanyard | legs | long_sleeves | miniskirt | office_chair | plaid_skirt | cape | epaulettes | sideboob | white_thighhighs | dress | simple_background | white_background |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:-------|:--------|:--------------------|:--------------|:---------|:-----------|:-----------------|:-------|:----------------|:----------|:----------|:-----------|:--------|:-------|:-------------------|:--------------|:----------|:------------|:--------------|:----------|:-------|:---------------|:------------|:---------------|:--------------|:-------|:-------------|:-----------|:-------------------|:--------|:--------------------|:-------------------|
| 0 | 9 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | |
| 1 | 13 |  |  |  |  |  | X | X | X | X | | | | | | X | | | | | | | | | | | | | | | | | X | X | X | X | X | X | X |
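The clusters above can be turned into per-outfit subsets. A minimal sketch, assuming the IMG+TXT packages store one comma-separated `.txt` tag file alongside each image (the exact layout is an assumption based on the package type); the outfit tags below are picked from cluster 1:

```python
import os

# Tags that distinguish cluster 1 (the cape/dress outfit) above
OUTFIT_TAGS = {"cape", "epaulettes", "garter_straps"}

def has_outfit(tag_line: str, outfit_tags=OUTFIT_TAGS) -> bool:
    """Check whether a comma-separated tag line contains every outfit tag."""
    tags = {t.strip() for t in tag_line.split(",")}
    return outfit_tags <= tags

def find_outfit_images(dataset_dir: str, outfit_tags=OUTFIT_TAGS):
    """Yield image paths whose sibling .txt tag file matches the outfit."""
    for name in os.listdir(dataset_dir):
        stem, ext = os.path.splitext(name)
        if ext.lower() not in {".png", ".jpg", ".jpeg", ".webp"}:
            continue
        txt_path = os.path.join(dataset_dir, stem + ".txt")
        if os.path.exists(txt_path):
            with open(txt_path, encoding="utf-8") as f:
                if has_outfit(f.read(), outfit_tags):
                    yield os.path.join(dataset_dir, name)
```

Point `find_outfit_images` at an extracted IMG+TXT package directory to collect the matching images for one outfit.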
|
Gideonah/sapher_qa_val | ---
dataset_info:
features:
- name: text
dtype: string
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 644864
num_examples: 300
download_size: 175594
dataset_size: 644864
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
CyberHarem/plumeria_fireemblem | ---
license: mit
task_categories:
- text-to-image
tags:
- art
- not-for-all-audiences
size_categories:
- n<1K
---
# Dataset of plumeria (Fire Emblem)
This is the dataset of plumeria (Fire Emblem), containing 182 images and their tags.
The core tags of this character are `breasts, long_hair, red_eyes, grey_hair, large_breasts, pointy_ears, wings, ponytail, facial_mark, fairy_wings, butterfly_wings`, which are pruned in this dataset.
Images are crawled from many sites (e.g., danbooru, pixiv, zerochan); the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:---------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 182 | 284.33 MiB | [Download](https://huggingface.co/datasets/CyberHarem/plumeria_fireemblem/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 182 | 160.40 MiB | [Download](https://huggingface.co/datasets/CyberHarem/plumeria_fireemblem/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 435 | 334.58 MiB | [Download](https://huggingface.co/datasets/CyberHarem/plumeria_fireemblem/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 182 | 249.55 MiB | [Download](https://huggingface.co/datasets/CyberHarem/plumeria_fireemblem/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 435 | 468.35 MiB | [Download](https://huggingface.co/datasets/CyberHarem/plumeria_fireemblem/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need it, run the following code:
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/plumeria_fireemblem',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
## List of Clusters
List of tag clustering results; some outfits may be mined from these clusters.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 15 |  |  |  |  |  | 1girl, solo, looking_at_viewer, simple_background, forehead_mark, smile, white_background, upper_body, dress, bare_shoulders, sideboob, hair_ornament, open_mouth, thorns, twitter_username, vines |
| 1 | 7 |  |  |  |  |  | 1girl, looking_at_viewer, sideboob, solo, vines, blush, thorns, bare_shoulders, open_mouth, dress |
| 2 | 10 |  |  |  |  |  | 1girl, looking_at_viewer, solo, thorns, vines, smile, bangs, bare_shoulders, forehead_mark, blue_rose, cleavage, simple_background, dress, thighhighs, white_background, covered_navel, full_body, hair_flower, leotard |
| 3 | 5 |  |  |  |  |  | 1girl, bangs, bare_shoulders, dress, full_body, gradient_hair, hair_ornament, pelvic_curtain, shiny_hair, shiny_skin, sideboob, simple_background, solo, thigh_boots, thighhighs, thorns, detached_sleeves, floating_object, parted_lips, sleeveless, armpits, looking_at_viewer, shiny_clothes, black_footwear, forehead_mark, grey_background, leg_up, thighs, white_background |
| 4 | 15 |  |  |  |  |  | 1girl, cleavage, forehead_mark, solo, navel, looking_at_viewer, smile, black_one-piece_swimsuit, blush, very_long_hair, vines, fairy, water, bangs, hair_flower, open_mouth, thorns, alternate_costume, bikini, sitting |
| 5 | 5 |  |  |  |  |  | 1boy, 1girl, blush, hetero, nipples, penis, sex, solo_focus, vaginal, forehead_mark, mosaic_censoring, navel, spread_legs, vines, nude, open_mouth, cum_in_pussy, female_pubic_hair, lying, thorns |
| 6 | 6 |  |  |  |  |  | 1girl, hetero, multiple_penises, 3boys, cum_in_pussy, handjob, nipples, vaginal, blush, solo_focus, gangbang, mosaic_censoring, spread_legs, thighhighs, vines |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | solo | looking_at_viewer | simple_background | forehead_mark | smile | white_background | upper_body | dress | bare_shoulders | sideboob | hair_ornament | open_mouth | thorns | twitter_username | vines | blush | bangs | blue_rose | cleavage | thighhighs | covered_navel | full_body | hair_flower | leotard | gradient_hair | pelvic_curtain | shiny_hair | shiny_skin | thigh_boots | detached_sleeves | floating_object | parted_lips | sleeveless | armpits | shiny_clothes | black_footwear | grey_background | leg_up | thighs | navel | black_one-piece_swimsuit | very_long_hair | fairy | water | alternate_costume | bikini | sitting | 1boy | hetero | nipples | penis | sex | solo_focus | vaginal | mosaic_censoring | spread_legs | nude | cum_in_pussy | female_pubic_hair | lying | multiple_penises | 3boys | handjob | gangbang |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:-------|:--------------------|:--------------------|:----------------|:--------|:-------------------|:-------------|:--------|:-----------------|:-----------|:----------------|:-------------|:---------|:-------------------|:--------|:--------|:--------|:------------|:-----------|:-------------|:----------------|:------------|:--------------|:----------|:----------------|:-----------------|:-------------|:-------------|:--------------|:-------------------|:------------------|:--------------|:-------------|:----------|:----------------|:-----------------|:------------------|:---------|:---------|:--------|:---------------------------|:-----------------|:--------|:--------|:--------------------|:---------|:----------|:-------|:---------|:----------|:--------|:------|:-------------|:----------|:-------------------|:--------------|:-------|:---------------|:--------------------|:--------|:-------------------|:--------|:----------|:-----------|
| 0 | 15 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 1 | 7 |  |  |  |  |  | X | X | X | | | | | | X | X | X | | X | X | | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 2 | 10 |  |  |  |  |  | X | X | X | X | X | X | X | | X | X | | | | X | | X | | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 3 | 5 |  |  |  |  |  | X | X | X | X | X | | X | | X | X | X | X | | X | | | | X | | | X | | X | | | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | |
| 4 | 15 |  |  |  |  |  | X | X | X | | X | X | | | | | | | X | X | | X | X | X | | X | | | | X | | | | | | | | | | | | | | | | | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | |
| 5 | 5 |  |  |  |  |  | X | | | | X | | | | | | | | X | X | | X | X | | | | | | | | | | | | | | | | | | | | | | | | X | | | | | | | | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | |
| 6 | 6 |  |  |  |  |  | X | | | | | | | | | | | | | | | X | X | | | | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | X | X | | | X | X | X | X | | X | | | X | X | X | X |
|
Dampish/ORION-L | ---
license: cc-by-nc-4.0
---
2,047,495 examples. |