| datasetId | card |
|---|---|
kgr123/quality_counter_2000_4_buckets | ---
dataset_info:
features:
- name: context
dtype: string
- name: word
dtype: string
- name: claim
dtype: string
- name: label
dtype: int64
splits:
- name: test
num_bytes: 11264869
num_examples: 1929
- name: train
num_bytes: 11155042
num_examples: 1935
- name: validation
num_bytes: 11367246
num_examples: 1941
download_size: 7627401
dataset_size: 33787157
configs:
- config_name: default
data_files:
- split: test
path: data/test-*
- split: train
path: data/train-*
- split: validation
path: data/validation-*
---
|
siditom/SCPECBS3 | ---
license: mit
dataset_info:
features:
- name: qseqid
dtype: string
- name: sseqid
dtype: string
- name: pident
dtype: float64
- name: length
dtype: int64
- name: mismatch
dtype: int64
- name: gapopen
dtype: int64
- name: qstart
dtype: int64
- name: qend
dtype: int64
- name: sstart
dtype: int64
- name: send
dtype: int64
- name: evalue
dtype: float64
- name: bitscore
dtype: float64
- name: qseq
dtype: string
- name: sseq
dtype: string
- name: query_dna_seq
sequence: string
- name: subject_dna_seq
sequence: string
- name: query_species
dtype: string
- name: subject_species
dtype: string
- name: expr
dtype: string
splits:
- name: train
num_bytes: 681059606
num_examples: 155097
- name: test
num_bytes: 95026421
num_examples: 15356
- name: val10
num_bytes: 52228089
num_examples: 161533
- name: val30
num_bytes: 34850757
num_examples: 55602
- name: val50
num_bytes: 31390548
num_examples: 34513
- name: val75
num_bytes: 29640124
num_examples: 23843
- name: val100
num_bytes: 28794098
num_examples: 18688
- name: val150
num_bytes: 27904586
num_examples: 13266
download_size: 168311263
dataset_size: 980894229
---
|
Qdrant/dbpedia-entities-openai3-text-embedding-3-large-3072-100K | ---
dataset_info:
features:
- name: _id
dtype: string
- name: title
dtype: string
- name: text
dtype: string
- name: text-embedding-3-large-3072-embedding
sequence: float64
splits:
- name: train
num_bytes: 2496735009
num_examples: 100000
download_size: 1805850629
dataset_size: 2496735009
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
HuggingFaceH4/SystemChat | ---
dataset_info:
features:
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: messages
list:
- name: content
dtype: string
- name: role
dtype: string
splits:
- name: train_sft
num_bytes: 37100537.73789174
num_examples: 6520
- name: test_sft
num_bytes: 2845133.262108262
num_examples: 500
download_size: 19769654
dataset_size: 39945671.0
configs:
- config_name: default
data_files:
- split: train_sft
path: data/train_sft-*
- split: test_sft
path: data/test_sft-*
---
# Dataset Card for SystemChat
This is a formatted version of [`abacusai/SystemChat`](https://huggingface.co/datasets/abacusai/SystemChat) that stores the conversations in the same format as the OpenAI SDK.
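The two feature sets in the metadata above correspond to ShareGPT-style turns (`from`/`value`) and OpenAI-style messages (`role`/`content`). A minimal sketch of that mapping, with an assumed role table (the upstream conversion script may differ):

```python
# Sketch of a ShareGPT-style -> OpenAI-style conversion.
# The role names in ROLE_MAP are an assumption, not taken from the
# actual conversion script used for this dataset.
ROLE_MAP = {"system": "system", "human": "user", "gpt": "assistant"}

def to_openai_messages(conversations):
    """Convert [{'from': ..., 'value': ...}] turns to [{'role': ..., 'content': ...}]."""
    return [
        {"role": ROLE_MAP[turn["from"]], "content": turn["value"]}
        for turn in conversations
    ]

example = [
    {"from": "system", "value": "You are a pirate."},
    {"from": "human", "value": "Hello!"},
    {"from": "gpt", "value": "Ahoy, matey!"},
]
print(to_openai_messages(example))
```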
|
saikatkumardey/jerry_seinfeld_dialogues | ---
license: mit
---
|
UCL-DARK/sequential-instructions | ---
dataset_info:
features:
- name: dataset
dtype: string
- name: instruction
dtype: string
- name: output
dtype: string
- name: generator
dtype: string
splits:
- name: train
num_bytes: 736696
num_examples: 533
download_size: 373739
dataset_size: 736696
license: mit
task_categories:
- question-answering
- text-generation
language:
- en
pretty_name: Sequential Instructions
size_categories:
- n<1K
---
# Sequential Instructions
This is the sequential instructions dataset from [Understanding the Effects of RLHF on LLM Generalisation and Diversity](https://arxiv.org/abs/2310.06452). The dataset is in the `alpaca_eval` format.
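For illustration, a single record in this format carries the four fields listed in the metadata above; the values below are invented, not taken from the dataset:

```python
# A hypothetical record in the alpaca_eval-style format (invented values;
# only the field names come from the dataset metadata).
record = {
    "dataset": "sequential",
    "instruction": "Name three prime numbers, then sum them, then square the sum.",
    "output": "2, 3, 5; their sum is 10; the square of the sum is 100.",
    "generator": "example-model",
}
assert set(record) == {"dataset", "instruction", "output", "generator"}
print(record["instruction"])
```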
For information about how the dataset was generated, see https://github.com/RobertKirk/stanford_alpaca.
The instructions in the dataset generally consist of a sequence of steps that we expect the model to complete all at once. In our work, we found that RLHF models generalise much better to this dataset than SFT models when trained on the AlpacaFarm datasets. |
yuan-sf63/word_label_0.8_64_D | ---
dataset_info:
features:
- name: text
dtype: string
- name: '0'
dtype: int64
- name: '1'
dtype: int64
- name: '2'
dtype: int64
- name: '3'
dtype: int64
- name: '4'
dtype: int64
- name: '5'
dtype: int64
- name: '6'
dtype: int64
- name: '7'
dtype: int64
- name: '8'
dtype: int64
- name: '9'
dtype: int64
- name: '10'
dtype: int64
- name: '11'
dtype: int64
- name: '12'
dtype: int64
- name: '13'
dtype: int64
- name: '14'
dtype: int64
- name: '15'
dtype: int64
- name: '16'
dtype: int64
- name: '17'
dtype: int64
- name: '18'
dtype: int64
- name: '19'
dtype: int64
- name: '20'
dtype: int64
- name: '21'
dtype: int64
- name: '22'
dtype: int64
- name: '23'
dtype: int64
- name: '24'
dtype: int64
- name: '25'
dtype: int64
- name: '26'
dtype: int64
- name: '27'
dtype: int64
- name: '28'
dtype: int64
- name: '29'
dtype: int64
- name: '30'
dtype: int64
- name: '31'
dtype: int64
- name: '32'
dtype: int64
- name: '33'
dtype: int64
- name: '34'
dtype: int64
- name: '35'
dtype: int64
- name: '36'
dtype: int64
- name: '37'
dtype: int64
- name: '38'
dtype: int64
- name: '39'
dtype: int64
- name: '40'
dtype: int64
- name: '41'
dtype: int64
- name: '42'
dtype: int64
- name: '43'
dtype: int64
- name: '44'
dtype: int64
- name: '45'
dtype: int64
- name: '46'
dtype: int64
- name: '47'
dtype: int64
- name: '48'
dtype: int64
- name: '49'
dtype: int64
- name: '50'
dtype: int64
- name: '51'
dtype: int64
- name: '52'
dtype: int64
- name: '53'
dtype: int64
- name: '54'
dtype: int64
- name: '55'
dtype: int64
- name: '56'
dtype: int64
- name: '57'
dtype: int64
- name: '58'
dtype: int64
- name: '59'
dtype: int64
- name: '60'
dtype: int64
- name: '61'
dtype: int64
- name: '62'
dtype: int64
- name: '63'
dtype: int64
splits:
- name: train
num_bytes: 44508632.83413558
num_examples: 71798
- name: validation
num_bytes: 4945679.16586442
num_examples: 7978
download_size: 8657975
dataset_size: 49454312.0
---
# Dataset Card for "word_label_0.8_64_D"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Rakshit122/truthfulkk | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: category
dtype: string
- name: test_type
dtype: string
- name: original_question
dtype: string
- name: original_context
dtype: string
- name: perturbed_question
dtype: string
- name: perturbed_context
dtype: string
splits:
- name: train
num_bytes: 171210
num_examples: 136
download_size: 0
dataset_size: 171210
---
# Dataset Card for "truthfulkk"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
autoevaluate/autoeval-staging-eval-project-cnn_dailymail-7c900a64-11555532 | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- cnn_dailymail
eval_info:
task: summarization
model: tuner007/pegasus_summarizer
metrics: ['accuracy', 'f1', 'precision', 'recall']
dataset_name: cnn_dailymail
dataset_config: 3.0.0
dataset_split: train
col_mapping:
text: article
target: highlights
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: tuner007/pegasus_summarizer
* Dataset: cnn_dailymail
* Config: 3.0.0
* Split: train
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@Neez](https://huggingface.co/Neez) for evaluating this model. |
fivetech/forums2 | ---
license: mit
---
|
lmg-anon/VNTL-v2.5-1k | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: val
path: data/val-*
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 24232376
num_examples: 10083
- name: val
num_bytes: 3717132
num_examples: 1570
download_size: 12039339
dataset_size: 27949508
---
# Dataset Card for "VNTL-v2.5-1k"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
CyberHarem/rookie_trainer_idolmastercinderellagirls | ---
license: mit
task_categories:
- text-to-image
tags:
- art
- not-for-all-audiences
size_categories:
- n<1K
---
# Dataset of rookie_trainer (THE iDOLM@STER: Cinderella Girls)
This is the dataset of rookie_trainer (THE iDOLM@STER: Cinderella Girls), containing 67 images and their tags.
The core tags of this character are `black_hair, hair_ornament, hairclip, long_hair, brown_eyes, ponytail, breasts`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan, ...); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([Hugging Face organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:----------|:------------------------------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 67 | 55.15 MiB | [Download](https://huggingface.co/datasets/CyberHarem/rookie_trainer_idolmastercinderellagirls/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 67 | 38.23 MiB | [Download](https://huggingface.co/datasets/CyberHarem/rookie_trainer_idolmastercinderellagirls/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 132 | 71.39 MiB | [Download](https://huggingface.co/datasets/CyberHarem/rookie_trainer_idolmastercinderellagirls/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 67 | 51.27 MiB | [Download](https://huggingface.co/datasets/CyberHarem/rookie_trainer_idolmastercinderellagirls/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 132 | 93.09 MiB | [Download](https://huggingface.co/datasets/CyberHarem/rookie_trainer_idolmastercinderellagirls/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for loading with [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html). If you need it, just run the following code:
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/rookie_trainer_idolmastercinderellagirls',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
## List of Clusters
List of tag clustering results; some outfits may be mined from here.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 18 |  |  |  |  |  | 1girl, solo, shorts, smile, wristband, looking_at_viewer, blush, watch, black_eyes, bottle |
| 1 | 5 |  |  |  |  |  | 1girl, navel, shirt_lift, solo, black_eyes, looking_at_viewer, panties, pants_pull, wristband, blush, on_back, open_mouth, shorts_pull, small_breasts, collarbone, lifted_by_self, nipples, sports_bra |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | solo | shorts | smile | wristband | looking_at_viewer | blush | watch | black_eyes | bottle | navel | shirt_lift | panties | pants_pull | on_back | open_mouth | shorts_pull | small_breasts | collarbone | lifted_by_self | nipples | sports_bra |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:-------|:---------|:--------|:------------|:--------------------|:--------|:--------|:-------------|:---------|:--------|:-------------|:----------|:-------------|:----------|:-------------|:--------------|:----------------|:-------------|:-----------------|:----------|:-------------|
| 0 | 18 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | |
| 1 | 5 |  |  |  |  |  | X | X | | | X | X | X | | X | | X | X | X | X | X | X | X | X | X | X | X | X |
|
lmms-lab/NExTQA | ---
dataset_info:
features:
- name: video
dtype: string
- name: frame_count
dtype: int32
- name: width
dtype: int32
- name: height
dtype: int32
- name: question
dtype: string
- name: answer
dtype: string
- name: qid
dtype: int32
- name: type
dtype: string
splits:
- name: train
num_bytes: 4229972
num_examples: 37523
- name: validation
num_bytes: 600516
num_examples: 5343
- name: test
num_bytes: 1023154
num_examples: 9178
download_size: 3008001
dataset_size: 5853642
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
---
|
joey234/mmlu-college_computer_science-neg | ---
dataset_info:
features:
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
- name: question
dtype: string
splits:
- name: test
num_bytes: 28381
num_examples: 100
download_size: 19509
dataset_size: 28381
---
# Dataset Card for "mmlu-college_computer_science-neg"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
zhengxuanzenwu/fair_glue_cola | ---
dataset_info:
features:
- name: sentence
dtype: string
- name: label
dtype:
class_label:
names:
'0': unacceptable
'1': acceptable
- name: idx
dtype: int32
splits:
- name: train
num_bytes: 484869
num_examples: 8551
- name: validation
num_bytes: 30132.082454458294
num_examples: 521
- name: test
num_bytes: 60322
num_examples: 1043
download_size: 309936
dataset_size: 575323.0824544583
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
---
|
HuggingFaceM4/M3IT | ---
dataset_info:
features:
- name: instruction
dtype: string
- name: inputs
dtype: string
- name: outputs
dtype: string
- name: image
dtype: image
splits:
- name: train
num_bytes: 76245922090.25
num_examples: 1238638
download_size: 0
dataset_size: 76245922090.25
---
# Dataset Card for "M3IT"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
GabrielTOP/Xerife | ---
license: openrail
---
|
easytpp/taxi | ---
license: apache-2.0
---
|
Ravisahu06/modelface | ---
license: mit
---
|
result-kand2-sdxl-wuerst-karlo/a19a65d2 | ---
dataset_info:
features:
- name: result
dtype: string
- name: id
dtype: int64
splits:
- name: train
num_bytes: 174
num_examples: 10
download_size: 1323
dataset_size: 174
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "a19a65d2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
gabeorlanski/bc-transcoder | ---
license: apache-2.0
task_categories:
- text-generation
- text2text-generation
- translation
language:
- en
tags:
- code
pretty_name: BabelCode Transcoder
size_categories:
- 1K<n<10K
source_datasets:
- original
- extended|transcoder
---
# Dataset Card for BabelCode Transcoder
## Dataset Description
- **Repository:** [GitHub Repository](https://github.com/google-research/babelcode)
- **Paper:** [Measuring The Impact Of Programming Language Distribution](https://arxiv.org/abs/2302.01973)
### How To Use This Dataset
To use this dataset, you can either use the original [BabelCode Repo](https://github.com/google-research/babelcode), or you can use the [`bc_eval` Metric](https://huggingface.co/spaces/gabeorlanski/bc_eval).
### Dataset Summary
The [Transcoder](https://github.com/facebookresearch/CodeGen) dataset in BabelCode format. Currently supports translation from C++ and Python.
### Supported Tasks and Leaderboards
### Languages
BC-Transcoder supports:
* C++
* C#
* Dart
* Go
* Haskell
* Java
* Javascript
* Julia
* Kotlin
* Lua
* PHP
* Python
* R
* Rust
* Scala
* TypeScript
## Dataset Structure
```python
>>> from datasets import load_dataset
>>> load_dataset("gabeorlanski/bc-transcoder")
DatasetDict({
test: Dataset({
features: ['qid', 'title', 'language', 'signature', 'arguments', 'source_py', 'source_cpp', 'question_info'],
num_rows: 8384
})
})
```
### Data Fields
- `qid`: The question ID used for running tests.
- `title`: The title of the question.
- `language`: The programming language of the example.
- `signature`: The signature for the problem.
- `arguments`: The arguments of the problem.
- `source_py`: The source solution in Python.
- `source_cpp`: The source in C++.
- `question_info`: The dict of information used for executing predictions. It has the keys:
- `test_code`: The raw testing script used in the language. If you want to use this, replace `PLACEHOLDER_FN_NAME` (and `PLACEHOLDER_CLS_NAME` if needed) with the corresponding entry points. Next, replace `PLACEHOLDER_CODE_BODY` with the postprocessed prediction.
- `test_list`: The raw JSON line containing the list of tests for the problem. To load them, use `json.loads`.
- `test_case_ids`: The list of test case ids for the problem. These are used to determine if a prediction passes or not.
- `entry_fn_name`: The function name to use as the entry point.
- `entry_cls_name`: The class name to use as the entry point.
- `commands`: The commands used to execute the prediction. Includes a `__FILENAME__` hole that is replaced with the filename.
- `timeouts`: The default timeouts for each command.
- `extension`: The extension for the prediction file.
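The placeholder substitution described under `test_code` can be sketched as follows; the `question_info` contents and the prediction here are hypothetical stand-ins, not a real dataset row:

```python
import json

# Hypothetical question_info entry; real rows carry full testing scripts.
question_info = {
    "test_code": "PLACEHOLDER_CODE_BODY\nassert PLACEHOLDER_FN_NAME(1, 2) == 3",
    "test_list": '[{"idx": 0, "inputs": [1, 2], "outputs": 3}]',
    "entry_fn_name": "add",
}
# Hypothetical postprocessed model prediction.
prediction = "def add(a, b):\n    return a + b"

# Fill the entry-point hole first, then splice in the prediction body.
script = (
    question_info["test_code"]
    .replace("PLACEHOLDER_FN_NAME", question_info["entry_fn_name"])
    .replace("PLACEHOLDER_CODE_BODY", prediction)
)
tests = json.loads(question_info["test_list"])  # list of test dicts
print(script)
```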
**NOTE:** If you want to use a different function name (or class name for languages that require class names) for the prediction, you must update the `entry_fn_name` and `entry_cls_name` accordingly. For example, if you have the original question with `entry_fn_name` of `add`, but want to change it to `f`, you must update `ds["question_info"]["entry_fn_name"]` to `f`:
```python
>>> from datasets import load_dataset
>>> ds = load_dataset("gabeorlanski/bc-mbpp")['test']
>>> # The original entry_fn_name
>>> ds[0]['question_info']['entry_fn_name']
'removeOcc'
>>> # You MUST update the corresponding entry_fn_name
>>> ds[0]['question_info']['entry_fn_name'] = 'f'
>>> ds[0]['question_info']['entry_fn_name']
'f'
```
## Dataset Creation
See section 2 of the [BabelCode Paper](https://arxiv.org/abs/2302.01973) to learn more about how the datasets are translated.
For information on the original curation of the Transcoder Dataset, please see [Unsupervised Translation of Programming Languages](https://arxiv.org/pdf/2006.03511.pdf) by Roziere et al.
### Dataset Curators
Google Research
### Licensing Information
CC-BY-4.0
### Citation Information
```
@article{orlanski2023measuring,
title={Measuring The Impact Of Programming Language Distribution},
author={Orlanski, Gabriel and Xiao, Kefan and Garcia, Xavier and Hui, Jeffrey and Howland, Joshua and Malmaud, Jonathan and Austin, Jacob and Singh, Rishah and Catasta, Michele},
journal={arXiv preprint arXiv:2302.01973},
year={2023}
}
@article{roziere2020unsupervised,
title={Unsupervised translation of programming languages},
author={Roziere, Baptiste and Lachaux, Marie-Anne and Chanussot, Lowik and Lample, Guillaume},
journal={Advances in Neural Information Processing Systems},
volume={33},
year={2020}
}
``` |
Najung/cora | ---
license: unknown
---
|
marup/SakiTsuzuraRVC200Epochs | ---
license: openrail
---
|
kheder/dataset_010 | ---
dataset_info:
features:
- name: who-i-am
dtype: string
- name: quran/hasanat
list:
- name: id
dtype: int64
- name: name
dtype: string
- name: total_hasanat
dtype: int64
- name: total_verses
dtype: int64
- name: translation
dtype: string
- name: transliteration
dtype: string
- name: type
dtype: string
- name: verses
list:
- name: hasanat
dtype: int64
- name: id
dtype: int64
- name: text
dtype: string
- name: translation
dtype: string
- name: hadith
list:
list:
- name: chain_indx
dtype: string
- name: chapter
dtype: string
- name: chapter_no
dtype: string
- name: hadith_id
dtype: string
- name: hadith_no
dtype: string
- name: id
dtype: string
- name: source
dtype: string
- name: text_ar
dtype: string
- name: text_en
dtype: string
splits:
- name: train
num_bytes: 44047246
num_examples: 2
download_size: 16587703
dataset_size: 44047246
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "dataset_010"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
kabachuha/atsiftu-dialogue | ---
license: gpl-2.0
task_categories:
- conversational
- text-generation
- text2text-generation
language:
- en
tags:
- art
- writing
- script
- dialogue
pretty_name: AtS/IftU dialogue
size_categories:
- 1K<n<10K
---
The dialogue pairs from the Wesnoth add-on campaigns IftU/AtS. |
Royal-lobster/Slither-Audited-Solidity-QA | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
- split: validation
path: data/validation-*
dataset_info:
features:
- name: input
dtype: string
- name: output
dtype: string
- name: instruction
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 519875022.0539211
num_examples: 8611
- name: test
num_bytes: 100783891.24375294
num_examples: 1748
- name: validation
num_bytes: 76457098.65464632
num_examples: 1151
download_size: 98570750
dataset_size: 697116011.9523203
license: mit
task_categories:
- question-answering
language:
- en
tags:
- solidity
- alpaca
- smart contracts
- slither
---
# Dataset Card for "Simple-Solidity-Slither-Vulnerabilities"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
laion/laion1B-nolang-safety | Invalid username or password. |
cathyye2000/MORPHeus | ---
license: bsd-3-clause
---
|
open-llm-leaderboard/details_MatthieuJ__Forbin_13B_M1_SLERP | ---
pretty_name: Evaluation run of MatthieuJ/Forbin_13B_M1_SLERP
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [MatthieuJ/Forbin_13B_M1_SLERP](https://huggingface.co/MatthieuJ/Forbin_13B_M1_SLERP)\
\ on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 63 configuration, each one coresponding to one of the\
\ evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
\ timestamp of the run.The \"train\" split is always pointing to the latest results.\n\
\nAn additional configuration \"results\" store all the aggregated results of the\
\ run (and is used to compute and display the aggregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_MatthieuJ__Forbin_13B_M1_SLERP\"\
,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\
These are the [latest results from run 2024-03-27T19:09:27.007204](https://huggingface.co/datasets/open-llm-leaderboard/details_MatthieuJ__Forbin_13B_M1_SLERP/blob/main/results_2024-03-27T19-09-27.007204.json)(note\
\ that their might be results for other tasks in the repos if successive evals didn't\
\ cover the same tasks. You find each in the results and the \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.23196194129343728,\n\
\ \"acc_stderr\": 0.029934654752561563,\n \"acc_norm\": 0.2314240573187148,\n\
\ \"acc_norm_stderr\": 0.03071122006512167,\n \"mc1\": 1.0,\n \
\ \"mc1_stderr\": 0.0,\n \"mc2\": NaN,\n \"mc2_stderr\": NaN\n\
\ },\n \"harness|arc:challenge|25\": {\n \"acc\": 0.22696245733788395,\n\
\ \"acc_stderr\": 0.012240491536132861,\n \"acc_norm\": 0.22696245733788395,\n\
\ \"acc_norm_stderr\": 0.012240491536132861\n },\n \"harness|hellaswag|10\"\
: {\n \"acc\": 0.2504481179047998,\n \"acc_stderr\": 0.004323856300539177,\n\
\ \"acc_norm\": 0.2504481179047998,\n \"acc_norm_stderr\": 0.004323856300539177\n\
\ },\n \"harness|hendrycksTest-abstract_algebra|5\": {\n \"acc\": 0.22,\n\
\ \"acc_stderr\": 0.04163331998932268,\n \"acc_norm\": 0.22,\n \
\ \"acc_norm_stderr\": 0.04163331998932268\n },\n \"harness|hendrycksTest-anatomy|5\"\
: {\n \"acc\": 0.18518518518518517,\n \"acc_stderr\": 0.03355677216313142,\n\
\ \"acc_norm\": 0.18518518518518517,\n \"acc_norm_stderr\": 0.03355677216313142\n\
\ },\n \"harness|hendrycksTest-astronomy|5\": {\n \"acc\": 0.17763157894736842,\n\
\ \"acc_stderr\": 0.031103182383123398,\n \"acc_norm\": 0.17763157894736842,\n\
\ \"acc_norm_stderr\": 0.031103182383123398\n },\n \"harness|hendrycksTest-business_ethics|5\"\
: {\n \"acc\": 0.3,\n \"acc_stderr\": 0.046056618647183814,\n \
\ \"acc_norm\": 0.3,\n \"acc_norm_stderr\": 0.046056618647183814\n \
\ },\n \"harness|hendrycksTest-clinical_knowledge|5\": {\n \"acc\": 0.21509433962264152,\n\
\ \"acc_stderr\": 0.02528839450289137,\n \"acc_norm\": 0.21509433962264152,\n\
\ \"acc_norm_stderr\": 0.02528839450289137\n },\n \"harness|hendrycksTest-college_biology|5\"\
: {\n \"acc\": 0.2569444444444444,\n \"acc_stderr\": 0.03653946969442099,\n\
\ \"acc_norm\": 0.2569444444444444,\n \"acc_norm_stderr\": 0.03653946969442099\n\
\ },\n \"harness|hendrycksTest-college_chemistry|5\": {\n \"acc\":\
\ 0.2,\n \"acc_stderr\": 0.04020151261036845,\n \"acc_norm\": 0.2,\n\
\ \"acc_norm_stderr\": 0.04020151261036845\n },\n \"harness|hendrycksTest-college_computer_science|5\"\
: {\n \"acc\": 0.26,\n \"acc_stderr\": 0.0440844002276808,\n \
\ \"acc_norm\": 0.26,\n \"acc_norm_stderr\": 0.0440844002276808\n },\n\
\ \"harness|hendrycksTest-college_mathematics|5\": {\n \"acc\": 0.21,\n\
\ \"acc_stderr\": 0.040936018074033256,\n \"acc_norm\": 0.21,\n \
\ \"acc_norm_stderr\": 0.040936018074033256\n },\n \"harness|hendrycksTest-college_medicine|5\"\
: {\n \"acc\": 0.20809248554913296,\n \"acc_stderr\": 0.030952890217749874,\n\
\ \"acc_norm\": 0.20809248554913296,\n \"acc_norm_stderr\": 0.030952890217749874\n\
\ },\n \"harness|hendrycksTest-college_physics|5\": {\n \"acc\": 0.21568627450980393,\n\
\ \"acc_stderr\": 0.04092563958237654,\n \"acc_norm\": 0.21568627450980393,\n\
\ \"acc_norm_stderr\": 0.04092563958237654\n },\n \"harness|hendrycksTest-computer_security|5\"\
: {\n \"acc\": 0.28,\n \"acc_stderr\": 0.045126085985421276,\n \
\ \"acc_norm\": 0.28,\n \"acc_norm_stderr\": 0.045126085985421276\n \
\ },\n \"harness|hendrycksTest-conceptual_physics|5\": {\n \"acc\":\
\ 0.26382978723404255,\n \"acc_stderr\": 0.028809989854102973,\n \"\
acc_norm\": 0.26382978723404255,\n \"acc_norm_stderr\": 0.028809989854102973\n\
\ },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.23684210526315788,\n\
\ \"acc_stderr\": 0.039994238792813365,\n \"acc_norm\": 0.23684210526315788,\n\
\ \"acc_norm_stderr\": 0.039994238792813365\n },\n \"harness|hendrycksTest-electrical_engineering|5\"\
: {\n \"acc\": 0.2413793103448276,\n \"acc_stderr\": 0.03565998174135302,\n\
\ \"acc_norm\": 0.2413793103448276,\n \"acc_norm_stderr\": 0.03565998174135302\n\
\ },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\"\
: 0.20899470899470898,\n \"acc_stderr\": 0.02094048156533486,\n \"\
acc_norm\": 0.20899470899470898,\n \"acc_norm_stderr\": 0.02094048156533486\n\
\ },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.2857142857142857,\n\
\ \"acc_stderr\": 0.04040610178208841,\n \"acc_norm\": 0.2857142857142857,\n\
\ \"acc_norm_stderr\": 0.04040610178208841\n },\n \"harness|hendrycksTest-global_facts|5\"\
: {\n \"acc\": 0.18,\n \"acc_stderr\": 0.038612291966536934,\n \
\ \"acc_norm\": 0.18,\n \"acc_norm_stderr\": 0.038612291966536934\n \
\ },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\"\
: 0.1774193548387097,\n \"acc_stderr\": 0.02173254068932927,\n \"\
acc_norm\": 0.1774193548387097,\n \"acc_norm_stderr\": 0.02173254068932927\n\
\ },\n \"harness|hendrycksTest-high_school_chemistry|5\": {\n \"acc\"\
: 0.15270935960591134,\n \"acc_stderr\": 0.02530890453938063,\n \"\
acc_norm\": 0.15270935960591134,\n \"acc_norm_stderr\": 0.02530890453938063\n\
\ },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \
\ \"acc\": 0.25,\n \"acc_stderr\": 0.04351941398892446,\n \"acc_norm\"\
: 0.25,\n \"acc_norm_stderr\": 0.04351941398892446\n },\n \"harness|hendrycksTest-high_school_european_history|5\"\
: {\n \"acc\": 0.21818181818181817,\n \"acc_stderr\": 0.03225078108306289,\n\
\ \"acc_norm\": 0.21818181818181817,\n \"acc_norm_stderr\": 0.03225078108306289\n\
\ },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\"\
: 0.17676767676767677,\n \"acc_stderr\": 0.027178752639044915,\n \"\
acc_norm\": 0.17676767676767677,\n \"acc_norm_stderr\": 0.027178752639044915\n\
\ },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n\
\ \"acc\": 0.19689119170984457,\n \"acc_stderr\": 0.028697873971860664,\n\
\ \"acc_norm\": 0.19689119170984457,\n \"acc_norm_stderr\": 0.028697873971860664\n\
\ },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \
\ \"acc\": 0.20256410256410257,\n \"acc_stderr\": 0.020377660970371372,\n\
\ \"acc_norm\": 0.20256410256410257,\n \"acc_norm_stderr\": 0.020377660970371372\n\
\ },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"\
acc\": 0.2111111111111111,\n \"acc_stderr\": 0.024882116857655075,\n \
\ \"acc_norm\": 0.2111111111111111,\n \"acc_norm_stderr\": 0.024882116857655075\n\
\ },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \
\ \"acc\": 0.21008403361344538,\n \"acc_stderr\": 0.026461398717471874,\n\
\ \"acc_norm\": 0.21008403361344538,\n \"acc_norm_stderr\": 0.026461398717471874\n\
\ },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\"\
: 0.1986754966887417,\n \"acc_stderr\": 0.03257847384436776,\n \"\
acc_norm\": 0.1986754966887417,\n \"acc_norm_stderr\": 0.03257847384436776\n\
\ },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\"\
: 0.1926605504587156,\n \"acc_stderr\": 0.016909276884936094,\n \"\
acc_norm\": 0.1926605504587156,\n \"acc_norm_stderr\": 0.016909276884936094\n\
\ },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\"\
: 0.1527777777777778,\n \"acc_stderr\": 0.024536326026134224,\n \"\
acc_norm\": 0.1527777777777778,\n \"acc_norm_stderr\": 0.024536326026134224\n\
\ },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\"\
: 0.25,\n \"acc_stderr\": 0.03039153369274154,\n \"acc_norm\": 0.25,\n\
\ \"acc_norm_stderr\": 0.03039153369274154\n },\n \"harness|hendrycksTest-high_school_world_history|5\"\
: {\n \"acc\": 0.270042194092827,\n \"acc_stderr\": 0.028900721906293426,\n\
\ \"acc_norm\": 0.270042194092827,\n \"acc_norm_stderr\": 0.028900721906293426\n\
\ },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.31390134529147984,\n\
\ \"acc_stderr\": 0.031146796482972465,\n \"acc_norm\": 0.31390134529147984,\n\
\ \"acc_norm_stderr\": 0.031146796482972465\n },\n \"harness|hendrycksTest-human_sexuality|5\"\
: {\n \"acc\": 0.2595419847328244,\n \"acc_stderr\": 0.03844876139785271,\n\
\ \"acc_norm\": 0.2595419847328244,\n \"acc_norm_stderr\": 0.03844876139785271\n\
\ },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\":\
\ 0.2396694214876033,\n \"acc_stderr\": 0.03896878985070417,\n \"\
acc_norm\": 0.2396694214876033,\n \"acc_norm_stderr\": 0.03896878985070417\n\
\ },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.25925925925925924,\n\
\ \"acc_stderr\": 0.042365112580946336,\n \"acc_norm\": 0.25925925925925924,\n\
\ \"acc_norm_stderr\": 0.042365112580946336\n },\n \"harness|hendrycksTest-logical_fallacies|5\"\
: {\n \"acc\": 0.22085889570552147,\n \"acc_stderr\": 0.032591773927421776,\n\
\ \"acc_norm\": 0.22085889570552147,\n \"acc_norm_stderr\": 0.032591773927421776\n\
\ },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.3125,\n\
\ \"acc_stderr\": 0.043994650575715215,\n \"acc_norm\": 0.3125,\n\
\ \"acc_norm_stderr\": 0.043994650575715215\n },\n \"harness|hendrycksTest-management|5\"\
: {\n \"acc\": 0.17475728155339806,\n \"acc_stderr\": 0.037601780060266224,\n\
\ \"acc_norm\": 0.17475728155339806,\n \"acc_norm_stderr\": 0.037601780060266224\n\
\ },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.2905982905982906,\n\
\ \"acc_stderr\": 0.02974504857267404,\n \"acc_norm\": 0.2905982905982906,\n\
\ \"acc_norm_stderr\": 0.02974504857267404\n },\n \"harness|hendrycksTest-medical_genetics|5\"\
: {\n \"acc\": 0.3,\n \"acc_stderr\": 0.046056618647183814,\n \
\ \"acc_norm\": 0.3,\n \"acc_norm_stderr\": 0.046056618647183814\n \
\ },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.23754789272030652,\n\
\ \"acc_stderr\": 0.015218733046150193,\n \"acc_norm\": 0.23754789272030652,\n\
\ \"acc_norm_stderr\": 0.015218733046150193\n },\n \"harness|hendrycksTest-moral_disputes|5\"\
: {\n \"acc\": 0.24855491329479767,\n \"acc_stderr\": 0.023267528432100174,\n\
\ \"acc_norm\": 0.24855491329479767,\n \"acc_norm_stderr\": 0.023267528432100174\n\
\ },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.23798882681564246,\n\
\ \"acc_stderr\": 0.014242630070574915,\n \"acc_norm\": 0.23798882681564246,\n\
\ \"acc_norm_stderr\": 0.014242630070574915\n },\n \"harness|hendrycksTest-nutrition|5\"\
: {\n \"acc\": 0.22549019607843138,\n \"acc_stderr\": 0.023929155517351284,\n\
\ \"acc_norm\": 0.22549019607843138,\n \"acc_norm_stderr\": 0.023929155517351284\n\
\ },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.1864951768488746,\n\
\ \"acc_stderr\": 0.02212243977248077,\n \"acc_norm\": 0.1864951768488746,\n\
\ \"acc_norm_stderr\": 0.02212243977248077\n },\n \"harness|hendrycksTest-prehistory|5\"\
: {\n \"acc\": 0.21604938271604937,\n \"acc_stderr\": 0.022899162918445806,\n\
\ \"acc_norm\": 0.21604938271604937,\n \"acc_norm_stderr\": 0.022899162918445806\n\
\ },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"\
acc\": 0.23404255319148937,\n \"acc_stderr\": 0.025257861359432417,\n \
\ \"acc_norm\": 0.23404255319148937,\n \"acc_norm_stderr\": 0.025257861359432417\n\
\ },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.2457627118644068,\n\
\ \"acc_stderr\": 0.010996156635142692,\n \"acc_norm\": 0.2457627118644068,\n\
\ \"acc_norm_stderr\": 0.010996156635142692\n },\n \"harness|hendrycksTest-professional_medicine|5\"\
: {\n \"acc\": 0.18382352941176472,\n \"acc_stderr\": 0.023529242185193106,\n\
\ \"acc_norm\": 0.18382352941176472,\n \"acc_norm_stderr\": 0.023529242185193106\n\
\ },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"\
acc\": 0.25,\n \"acc_stderr\": 0.01751781884501444,\n \"acc_norm\"\
: 0.25,\n \"acc_norm_stderr\": 0.01751781884501444\n },\n \"harness|hendrycksTest-public_relations|5\"\
: {\n \"acc\": 0.21818181818181817,\n \"acc_stderr\": 0.03955932861795833,\n\
\ \"acc_norm\": 0.21818181818181817,\n \"acc_norm_stderr\": 0.03955932861795833\n\
\ },\n \"harness|hendrycksTest-security_studies|5\": {\n \"acc\": 0.18775510204081633,\n\
\ \"acc_stderr\": 0.02500025603954621,\n \"acc_norm\": 0.18775510204081633,\n\
\ \"acc_norm_stderr\": 0.02500025603954621\n },\n \"harness|hendrycksTest-sociology|5\"\
: {\n \"acc\": 0.24378109452736318,\n \"acc_stderr\": 0.03036049015401465,\n\
\ \"acc_norm\": 0.24378109452736318,\n \"acc_norm_stderr\": 0.03036049015401465\n\
\ },\n \"harness|hendrycksTest-us_foreign_policy|5\": {\n \"acc\":\
\ 0.28,\n \"acc_stderr\": 0.04512608598542128,\n \"acc_norm\": 0.28,\n\
\ \"acc_norm_stderr\": 0.04512608598542128\n },\n \"harness|hendrycksTest-virology|5\"\
: {\n \"acc\": 0.28313253012048195,\n \"acc_stderr\": 0.03507295431370518,\n\
\ \"acc_norm\": 0.28313253012048195,\n \"acc_norm_stderr\": 0.03507295431370518\n\
\ },\n \"harness|hendrycksTest-world_religions|5\": {\n \"acc\": 0.3216374269005848,\n\
\ \"acc_stderr\": 0.03582529442573122,\n \"acc_norm\": 0.3216374269005848,\n\
\ \"acc_norm_stderr\": 0.03582529442573122\n },\n \"harness|truthfulqa:mc|0\"\
: {\n \"mc1\": 1.0,\n \"mc1_stderr\": 0.0,\n \"mc2\": NaN,\n\
\ \"mc2_stderr\": NaN\n },\n \"harness|winogrande|5\": {\n \"\
acc\": 0.4956590370955012,\n \"acc_stderr\": 0.014051956064076911\n },\n\
\ \"harness|gsm8k|5\": {\n \"acc\": 0.0,\n \"acc_stderr\": 0.0\n\
\ }\n}\n```"
repo_url: https://huggingface.co/MatthieuJ/Forbin_13B_M1_SLERP
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_arc_challenge_25
data_files:
- split: 2024_03_27T19_09_27.007204
path:
- '**/details_harness|arc:challenge|25_2024-03-27T19-09-27.007204.parquet'
- split: latest
path:
- '**/details_harness|arc:challenge|25_2024-03-27T19-09-27.007204.parquet'
- config_name: harness_gsm8k_5
data_files:
- split: 2024_03_27T19_09_27.007204
path:
- '**/details_harness|gsm8k|5_2024-03-27T19-09-27.007204.parquet'
- split: latest
path:
- '**/details_harness|gsm8k|5_2024-03-27T19-09-27.007204.parquet'
- config_name: harness_hellaswag_10
data_files:
- split: 2024_03_27T19_09_27.007204
path:
- '**/details_harness|hellaswag|10_2024-03-27T19-09-27.007204.parquet'
- split: latest
path:
- '**/details_harness|hellaswag|10_2024-03-27T19-09-27.007204.parquet'
- config_name: harness_hendrycksTest_5
data_files:
- split: 2024_03_27T19_09_27.007204
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-03-27T19-09-27.007204.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2024-03-27T19-09-27.007204.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2024-03-27T19-09-27.007204.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2024-03-27T19-09-27.007204.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-03-27T19-09-27.007204.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2024-03-27T19-09-27.007204.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-03-27T19-09-27.007204.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-03-27T19-09-27.007204.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-03-27T19-09-27.007204.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2024-03-27T19-09-27.007204.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2024-03-27T19-09-27.007204.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2024-03-27T19-09-27.007204.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-03-27T19-09-27.007204.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2024-03-27T19-09-27.007204.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-03-27T19-09-27.007204.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-03-27T19-09-27.007204.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2024-03-27T19-09-27.007204.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2024-03-27T19-09-27.007204.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-03-27T19-09-27.007204.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-03-27T19-09-27.007204.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-03-27T19-09-27.007204.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-03-27T19-09-27.007204.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-03-27T19-09-27.007204.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-03-27T19-09-27.007204.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-03-27T19-09-27.007204.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-03-27T19-09-27.007204.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-03-27T19-09-27.007204.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-03-27T19-09-27.007204.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-03-27T19-09-27.007204.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-03-27T19-09-27.007204.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-03-27T19-09-27.007204.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-03-27T19-09-27.007204.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2024-03-27T19-09-27.007204.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-03-27T19-09-27.007204.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2024-03-27T19-09-27.007204.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-03-27T19-09-27.007204.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-03-27T19-09-27.007204.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2024-03-27T19-09-27.007204.parquet'
- '**/details_harness|hendrycksTest-management|5_2024-03-27T19-09-27.007204.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2024-03-27T19-09-27.007204.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-03-27T19-09-27.007204.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-03-27T19-09-27.007204.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-03-27T19-09-27.007204.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-03-27T19-09-27.007204.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2024-03-27T19-09-27.007204.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2024-03-27T19-09-27.007204.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2024-03-27T19-09-27.007204.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-03-27T19-09-27.007204.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2024-03-27T19-09-27.007204.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-03-27T19-09-27.007204.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-03-27T19-09-27.007204.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2024-03-27T19-09-27.007204.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2024-03-27T19-09-27.007204.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2024-03-27T19-09-27.007204.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-03-27T19-09-27.007204.parquet'
- '**/details_harness|hendrycksTest-virology|5_2024-03-27T19-09-27.007204.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2024-03-27T19-09-27.007204.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-03-27T19-09-27.007204.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2024-03-27T19-09-27.007204.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2024-03-27T19-09-27.007204.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2024-03-27T19-09-27.007204.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-03-27T19-09-27.007204.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2024-03-27T19-09-27.007204.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-03-27T19-09-27.007204.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-03-27T19-09-27.007204.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-03-27T19-09-27.007204.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2024-03-27T19-09-27.007204.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2024-03-27T19-09-27.007204.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2024-03-27T19-09-27.007204.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-03-27T19-09-27.007204.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2024-03-27T19-09-27.007204.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-03-27T19-09-27.007204.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-03-27T19-09-27.007204.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2024-03-27T19-09-27.007204.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2024-03-27T19-09-27.007204.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-03-27T19-09-27.007204.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-03-27T19-09-27.007204.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-03-27T19-09-27.007204.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-03-27T19-09-27.007204.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-03-27T19-09-27.007204.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-03-27T19-09-27.007204.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-03-27T19-09-27.007204.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-03-27T19-09-27.007204.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-03-27T19-09-27.007204.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-03-27T19-09-27.007204.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-03-27T19-09-27.007204.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-03-27T19-09-27.007204.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-03-27T19-09-27.007204.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-03-27T19-09-27.007204.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2024-03-27T19-09-27.007204.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-03-27T19-09-27.007204.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2024-03-27T19-09-27.007204.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-03-27T19-09-27.007204.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-03-27T19-09-27.007204.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2024-03-27T19-09-27.007204.parquet'
- '**/details_harness|hendrycksTest-management|5_2024-03-27T19-09-27.007204.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2024-03-27T19-09-27.007204.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-03-27T19-09-27.007204.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-03-27T19-09-27.007204.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-03-27T19-09-27.007204.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-03-27T19-09-27.007204.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2024-03-27T19-09-27.007204.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2024-03-27T19-09-27.007204.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2024-03-27T19-09-27.007204.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-03-27T19-09-27.007204.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2024-03-27T19-09-27.007204.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-03-27T19-09-27.007204.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-03-27T19-09-27.007204.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2024-03-27T19-09-27.007204.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2024-03-27T19-09-27.007204.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2024-03-27T19-09-27.007204.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-03-27T19-09-27.007204.parquet'
- '**/details_harness|hendrycksTest-virology|5_2024-03-27T19-09-27.007204.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2024-03-27T19-09-27.007204.parquet'
- config_name: harness_hendrycksTest_abstract_algebra_5
data_files:
- split: 2024_03_27T19_09_27.007204
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-03-27T19-09-27.007204.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-03-27T19-09-27.007204.parquet'
- config_name: harness_hendrycksTest_anatomy_5
data_files:
- split: 2024_03_27T19_09_27.007204
path:
- '**/details_harness|hendrycksTest-anatomy|5_2024-03-27T19-09-27.007204.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-anatomy|5_2024-03-27T19-09-27.007204.parquet'
- config_name: harness_hendrycksTest_astronomy_5
data_files:
- split: 2024_03_27T19_09_27.007204
path:
- '**/details_harness|hendrycksTest-astronomy|5_2024-03-27T19-09-27.007204.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-astronomy|5_2024-03-27T19-09-27.007204.parquet'
- config_name: harness_hendrycksTest_business_ethics_5
data_files:
- split: 2024_03_27T19_09_27.007204
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2024-03-27T19-09-27.007204.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2024-03-27T19-09-27.007204.parquet'
- config_name: harness_hendrycksTest_clinical_knowledge_5
data_files:
- split: 2024_03_27T19_09_27.007204
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-03-27T19-09-27.007204.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-03-27T19-09-27.007204.parquet'
- config_name: harness_hendrycksTest_college_biology_5
data_files:
- split: 2024_03_27T19_09_27.007204
path:
- '**/details_harness|hendrycksTest-college_biology|5_2024-03-27T19-09-27.007204.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_biology|5_2024-03-27T19-09-27.007204.parquet'
- config_name: harness_hendrycksTest_college_chemistry_5
data_files:
- split: 2024_03_27T19_09_27.007204
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-03-27T19-09-27.007204.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-03-27T19-09-27.007204.parquet'
- config_name: harness_hendrycksTest_college_computer_science_5
data_files:
- split: 2024_03_27T19_09_27.007204
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-03-27T19-09-27.007204.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-03-27T19-09-27.007204.parquet'
- config_name: harness_hendrycksTest_college_mathematics_5
data_files:
- split: 2024_03_27T19_09_27.007204
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-03-27T19-09-27.007204.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-03-27T19-09-27.007204.parquet'
- config_name: harness_hendrycksTest_college_medicine_5
data_files:
- split: 2024_03_27T19_09_27.007204
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2024-03-27T19-09-27.007204.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2024-03-27T19-09-27.007204.parquet'
- config_name: harness_hendrycksTest_college_physics_5
data_files:
- split: 2024_03_27T19_09_27.007204
path:
- '**/details_harness|hendrycksTest-college_physics|5_2024-03-27T19-09-27.007204.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_physics|5_2024-03-27T19-09-27.007204.parquet'
- config_name: harness_hendrycksTest_computer_security_5
data_files:
- split: 2024_03_27T19_09_27.007204
path:
- '**/details_harness|hendrycksTest-computer_security|5_2024-03-27T19-09-27.007204.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-computer_security|5_2024-03-27T19-09-27.007204.parquet'
- config_name: harness_hendrycksTest_conceptual_physics_5
data_files:
- split: 2024_03_27T19_09_27.007204
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-03-27T19-09-27.007204.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-03-27T19-09-27.007204.parquet'
- config_name: harness_hendrycksTest_econometrics_5
data_files:
- split: 2024_03_27T19_09_27.007204
path:
- '**/details_harness|hendrycksTest-econometrics|5_2024-03-27T19-09-27.007204.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-econometrics|5_2024-03-27T19-09-27.007204.parquet'
- config_name: harness_hendrycksTest_electrical_engineering_5
data_files:
- split: 2024_03_27T19_09_27.007204
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-03-27T19-09-27.007204.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-03-27T19-09-27.007204.parquet'
- config_name: harness_hendrycksTest_elementary_mathematics_5
data_files:
- split: 2024_03_27T19_09_27.007204
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-03-27T19-09-27.007204.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-03-27T19-09-27.007204.parquet'
- config_name: harness_hendrycksTest_formal_logic_5
data_files:
- split: 2024_03_27T19_09_27.007204
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2024-03-27T19-09-27.007204.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2024-03-27T19-09-27.007204.parquet'
- config_name: harness_hendrycksTest_global_facts_5
data_files:
- split: 2024_03_27T19_09_27.007204
path:
- '**/details_harness|hendrycksTest-global_facts|5_2024-03-27T19-09-27.007204.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-global_facts|5_2024-03-27T19-09-27.007204.parquet'
- config_name: harness_hendrycksTest_high_school_biology_5
data_files:
- split: 2024_03_27T19_09_27.007204
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-03-27T19-09-27.007204.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-03-27T19-09-27.007204.parquet'
- config_name: harness_hendrycksTest_high_school_chemistry_5
data_files:
- split: 2024_03_27T19_09_27.007204
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-03-27T19-09-27.007204.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-03-27T19-09-27.007204.parquet'
- config_name: harness_hendrycksTest_high_school_computer_science_5
data_files:
- split: 2024_03_27T19_09_27.007204
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-03-27T19-09-27.007204.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-03-27T19-09-27.007204.parquet'
- config_name: harness_hendrycksTest_high_school_european_history_5
data_files:
- split: 2024_03_27T19_09_27.007204
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-03-27T19-09-27.007204.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-03-27T19-09-27.007204.parquet'
- config_name: harness_hendrycksTest_high_school_geography_5
data_files:
- split: 2024_03_27T19_09_27.007204
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-03-27T19-09-27.007204.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-03-27T19-09-27.007204.parquet'
- config_name: harness_hendrycksTest_high_school_government_and_politics_5
data_files:
- split: 2024_03_27T19_09_27.007204
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-03-27T19-09-27.007204.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-03-27T19-09-27.007204.parquet'
- config_name: harness_hendrycksTest_high_school_macroeconomics_5
data_files:
- split: 2024_03_27T19_09_27.007204
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-03-27T19-09-27.007204.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-03-27T19-09-27.007204.parquet'
- config_name: harness_hendrycksTest_high_school_mathematics_5
data_files:
- split: 2024_03_27T19_09_27.007204
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-03-27T19-09-27.007204.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-03-27T19-09-27.007204.parquet'
- config_name: harness_hendrycksTest_high_school_microeconomics_5
data_files:
- split: 2024_03_27T19_09_27.007204
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-03-27T19-09-27.007204.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-03-27T19-09-27.007204.parquet'
- config_name: harness_hendrycksTest_high_school_physics_5
data_files:
- split: 2024_03_27T19_09_27.007204
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-03-27T19-09-27.007204.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-03-27T19-09-27.007204.parquet'
- config_name: harness_hendrycksTest_high_school_psychology_5
data_files:
- split: 2024_03_27T19_09_27.007204
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-03-27T19-09-27.007204.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-03-27T19-09-27.007204.parquet'
- config_name: harness_hendrycksTest_high_school_statistics_5
data_files:
- split: 2024_03_27T19_09_27.007204
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-03-27T19-09-27.007204.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-03-27T19-09-27.007204.parquet'
- config_name: harness_hendrycksTest_high_school_us_history_5
data_files:
- split: 2024_03_27T19_09_27.007204
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-03-27T19-09-27.007204.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-03-27T19-09-27.007204.parquet'
- config_name: harness_hendrycksTest_high_school_world_history_5
data_files:
- split: 2024_03_27T19_09_27.007204
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-03-27T19-09-27.007204.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-03-27T19-09-27.007204.parquet'
- config_name: harness_hendrycksTest_human_aging_5
data_files:
- split: 2024_03_27T19_09_27.007204
path:
- '**/details_harness|hendrycksTest-human_aging|5_2024-03-27T19-09-27.007204.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_aging|5_2024-03-27T19-09-27.007204.parquet'
- config_name: harness_hendrycksTest_human_sexuality_5
data_files:
- split: 2024_03_27T19_09_27.007204
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-03-27T19-09-27.007204.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-03-27T19-09-27.007204.parquet'
- config_name: harness_hendrycksTest_international_law_5
data_files:
- split: 2024_03_27T19_09_27.007204
path:
- '**/details_harness|hendrycksTest-international_law|5_2024-03-27T19-09-27.007204.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-international_law|5_2024-03-27T19-09-27.007204.parquet'
- config_name: harness_hendrycksTest_jurisprudence_5
data_files:
- split: 2024_03_27T19_09_27.007204
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-03-27T19-09-27.007204.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-03-27T19-09-27.007204.parquet'
- config_name: harness_hendrycksTest_logical_fallacies_5
data_files:
- split: 2024_03_27T19_09_27.007204
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-03-27T19-09-27.007204.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-03-27T19-09-27.007204.parquet'
- config_name: harness_hendrycksTest_machine_learning_5
data_files:
- split: 2024_03_27T19_09_27.007204
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2024-03-27T19-09-27.007204.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2024-03-27T19-09-27.007204.parquet'
- config_name: harness_hendrycksTest_management_5
data_files:
- split: 2024_03_27T19_09_27.007204
path:
- '**/details_harness|hendrycksTest-management|5_2024-03-27T19-09-27.007204.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-management|5_2024-03-27T19-09-27.007204.parquet'
- config_name: harness_hendrycksTest_marketing_5
data_files:
- split: 2024_03_27T19_09_27.007204
path:
- '**/details_harness|hendrycksTest-marketing|5_2024-03-27T19-09-27.007204.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-marketing|5_2024-03-27T19-09-27.007204.parquet'
- config_name: harness_hendrycksTest_medical_genetics_5
data_files:
- split: 2024_03_27T19_09_27.007204
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-03-27T19-09-27.007204.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-03-27T19-09-27.007204.parquet'
- config_name: harness_hendrycksTest_miscellaneous_5
data_files:
- split: 2024_03_27T19_09_27.007204
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-03-27T19-09-27.007204.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-03-27T19-09-27.007204.parquet'
- config_name: harness_hendrycksTest_moral_disputes_5
data_files:
- split: 2024_03_27T19_09_27.007204
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-03-27T19-09-27.007204.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-03-27T19-09-27.007204.parquet'
- config_name: harness_hendrycksTest_moral_scenarios_5
data_files:
- split: 2024_03_27T19_09_27.007204
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-03-27T19-09-27.007204.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-03-27T19-09-27.007204.parquet'
- config_name: harness_hendrycksTest_nutrition_5
data_files:
- split: 2024_03_27T19_09_27.007204
path:
- '**/details_harness|hendrycksTest-nutrition|5_2024-03-27T19-09-27.007204.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-nutrition|5_2024-03-27T19-09-27.007204.parquet'
- config_name: harness_hendrycksTest_philosophy_5
data_files:
- split: 2024_03_27T19_09_27.007204
path:
- '**/details_harness|hendrycksTest-philosophy|5_2024-03-27T19-09-27.007204.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-philosophy|5_2024-03-27T19-09-27.007204.parquet'
- config_name: harness_hendrycksTest_prehistory_5
data_files:
- split: 2024_03_27T19_09_27.007204
path:
- '**/details_harness|hendrycksTest-prehistory|5_2024-03-27T19-09-27.007204.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-prehistory|5_2024-03-27T19-09-27.007204.parquet'
- config_name: harness_hendrycksTest_professional_accounting_5
data_files:
- split: 2024_03_27T19_09_27.007204
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-03-27T19-09-27.007204.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-03-27T19-09-27.007204.parquet'
- config_name: harness_hendrycksTest_professional_law_5
data_files:
- split: 2024_03_27T19_09_27.007204
path:
- '**/details_harness|hendrycksTest-professional_law|5_2024-03-27T19-09-27.007204.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_law|5_2024-03-27T19-09-27.007204.parquet'
- config_name: harness_hendrycksTest_professional_medicine_5
data_files:
- split: 2024_03_27T19_09_27.007204
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-03-27T19-09-27.007204.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-03-27T19-09-27.007204.parquet'
- config_name: harness_hendrycksTest_professional_psychology_5
data_files:
- split: 2024_03_27T19_09_27.007204
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-03-27T19-09-27.007204.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-03-27T19-09-27.007204.parquet'
- config_name: harness_hendrycksTest_public_relations_5
data_files:
- split: 2024_03_27T19_09_27.007204
path:
- '**/details_harness|hendrycksTest-public_relations|5_2024-03-27T19-09-27.007204.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-public_relations|5_2024-03-27T19-09-27.007204.parquet'
- config_name: harness_hendrycksTest_security_studies_5
data_files:
- split: 2024_03_27T19_09_27.007204
path:
- '**/details_harness|hendrycksTest-security_studies|5_2024-03-27T19-09-27.007204.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-security_studies|5_2024-03-27T19-09-27.007204.parquet'
- config_name: harness_hendrycksTest_sociology_5
data_files:
- split: 2024_03_27T19_09_27.007204
path:
- '**/details_harness|hendrycksTest-sociology|5_2024-03-27T19-09-27.007204.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-sociology|5_2024-03-27T19-09-27.007204.parquet'
- config_name: harness_hendrycksTest_us_foreign_policy_5
data_files:
- split: 2024_03_27T19_09_27.007204
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-03-27T19-09-27.007204.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-03-27T19-09-27.007204.parquet'
- config_name: harness_hendrycksTest_virology_5
data_files:
- split: 2024_03_27T19_09_27.007204
path:
- '**/details_harness|hendrycksTest-virology|5_2024-03-27T19-09-27.007204.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-virology|5_2024-03-27T19-09-27.007204.parquet'
- config_name: harness_hendrycksTest_world_religions_5
data_files:
- split: 2024_03_27T19_09_27.007204
path:
- '**/details_harness|hendrycksTest-world_religions|5_2024-03-27T19-09-27.007204.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-world_religions|5_2024-03-27T19-09-27.007204.parquet'
- config_name: harness_truthfulqa_mc_0
data_files:
- split: 2024_03_27T19_09_27.007204
path:
- '**/details_harness|truthfulqa:mc|0_2024-03-27T19-09-27.007204.parquet'
- split: latest
path:
- '**/details_harness|truthfulqa:mc|0_2024-03-27T19-09-27.007204.parquet'
- config_name: harness_winogrande_5
data_files:
- split: 2024_03_27T19_09_27.007204
path:
- '**/details_harness|winogrande|5_2024-03-27T19-09-27.007204.parquet'
- split: latest
path:
- '**/details_harness|winogrande|5_2024-03-27T19-09-27.007204.parquet'
- config_name: results
data_files:
- split: 2024_03_27T19_09_27.007204
path:
- results_2024-03-27T19-09-27.007204.parquet
- split: latest
path:
- results_2024-03-27T19-09-27.007204.parquet
---
# Dataset Card for Evaluation run of MatthieuJ/Forbin_13B_M1_SLERP
<!-- Provide a quick summary of the dataset. -->
Dataset automatically created during the evaluation run of model [MatthieuJ/Forbin_13B_M1_SLERP](https://huggingface.co/MatthieuJ/Forbin_13B_M1_SLERP) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
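The timestamped split name appears to be just the run timestamp with `-` and `:` replaced by `_`; a minimal sketch of that mapping (the timestamp is the one from this card):

```python
# Derive the split name used in the configs from a run timestamp.
run_timestamp = "2024-03-27T19:09:27.007204"
split_name = run_timestamp.replace("-", "_").replace(":", "_")
print(split_name)  # 2024_03_27T19_09_27.007204
```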
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_MatthieuJ__Forbin_13B_M1_SLERP",
"harness_winogrande_5",
split="train")
```
## Latest results
These are the [latest results from run 2024-03-27T19:09:27.007204](https://huggingface.co/datasets/open-llm-leaderboard/details_MatthieuJ__Forbin_13B_M1_SLERP/blob/main/results_2024-03-27T19-09-27.007204.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each one in the results and in the "latest" split for each eval):
```python
{
"all": {
"acc": 0.23196194129343728,
"acc_stderr": 0.029934654752561563,
"acc_norm": 0.2314240573187148,
"acc_norm_stderr": 0.03071122006512167,
"mc1": 1.0,
"mc1_stderr": 0.0,
"mc2": NaN,
"mc2_stderr": NaN
},
"harness|arc:challenge|25": {
"acc": 0.22696245733788395,
"acc_stderr": 0.012240491536132861,
"acc_norm": 0.22696245733788395,
"acc_norm_stderr": 0.012240491536132861
},
"harness|hellaswag|10": {
"acc": 0.2504481179047998,
"acc_stderr": 0.004323856300539177,
"acc_norm": 0.2504481179047998,
"acc_norm_stderr": 0.004323856300539177
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.22,
"acc_stderr": 0.04163331998932268,
"acc_norm": 0.22,
"acc_norm_stderr": 0.04163331998932268
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.18518518518518517,
"acc_stderr": 0.03355677216313142,
"acc_norm": 0.18518518518518517,
"acc_norm_stderr": 0.03355677216313142
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.17763157894736842,
"acc_stderr": 0.031103182383123398,
"acc_norm": 0.17763157894736842,
"acc_norm_stderr": 0.031103182383123398
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.3,
"acc_stderr": 0.046056618647183814,
"acc_norm": 0.3,
"acc_norm_stderr": 0.046056618647183814
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.21509433962264152,
"acc_stderr": 0.02528839450289137,
"acc_norm": 0.21509433962264152,
"acc_norm_stderr": 0.02528839450289137
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.2569444444444444,
"acc_stderr": 0.03653946969442099,
"acc_norm": 0.2569444444444444,
"acc_norm_stderr": 0.03653946969442099
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.2,
"acc_stderr": 0.04020151261036845,
"acc_norm": 0.2,
"acc_norm_stderr": 0.04020151261036845
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.26,
"acc_stderr": 0.0440844002276808,
"acc_norm": 0.26,
"acc_norm_stderr": 0.0440844002276808
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.21,
"acc_stderr": 0.040936018074033256,
"acc_norm": 0.21,
"acc_norm_stderr": 0.040936018074033256
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.20809248554913296,
"acc_stderr": 0.030952890217749874,
"acc_norm": 0.20809248554913296,
"acc_norm_stderr": 0.030952890217749874
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.21568627450980393,
"acc_stderr": 0.04092563958237654,
"acc_norm": 0.21568627450980393,
"acc_norm_stderr": 0.04092563958237654
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.28,
"acc_stderr": 0.045126085985421276,
"acc_norm": 0.28,
"acc_norm_stderr": 0.045126085985421276
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.26382978723404255,
"acc_stderr": 0.028809989854102973,
"acc_norm": 0.26382978723404255,
"acc_norm_stderr": 0.028809989854102973
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.23684210526315788,
"acc_stderr": 0.039994238792813365,
"acc_norm": 0.23684210526315788,
"acc_norm_stderr": 0.039994238792813365
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.2413793103448276,
"acc_stderr": 0.03565998174135302,
"acc_norm": 0.2413793103448276,
"acc_norm_stderr": 0.03565998174135302
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.20899470899470898,
"acc_stderr": 0.02094048156533486,
"acc_norm": 0.20899470899470898,
"acc_norm_stderr": 0.02094048156533486
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.2857142857142857,
"acc_stderr": 0.04040610178208841,
"acc_norm": 0.2857142857142857,
"acc_norm_stderr": 0.04040610178208841
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.18,
"acc_stderr": 0.038612291966536934,
"acc_norm": 0.18,
"acc_norm_stderr": 0.038612291966536934
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.1774193548387097,
"acc_stderr": 0.02173254068932927,
"acc_norm": 0.1774193548387097,
"acc_norm_stderr": 0.02173254068932927
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.15270935960591134,
"acc_stderr": 0.02530890453938063,
"acc_norm": 0.15270935960591134,
"acc_norm_stderr": 0.02530890453938063
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.25,
"acc_stderr": 0.04351941398892446,
"acc_norm": 0.25,
"acc_norm_stderr": 0.04351941398892446
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.21818181818181817,
"acc_stderr": 0.03225078108306289,
"acc_norm": 0.21818181818181817,
"acc_norm_stderr": 0.03225078108306289
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.17676767676767677,
"acc_stderr": 0.027178752639044915,
"acc_norm": 0.17676767676767677,
"acc_norm_stderr": 0.027178752639044915
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.19689119170984457,
"acc_stderr": 0.028697873971860664,
"acc_norm": 0.19689119170984457,
"acc_norm_stderr": 0.028697873971860664
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.20256410256410257,
"acc_stderr": 0.020377660970371372,
"acc_norm": 0.20256410256410257,
"acc_norm_stderr": 0.020377660970371372
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.2111111111111111,
"acc_stderr": 0.024882116857655075,
"acc_norm": 0.2111111111111111,
"acc_norm_stderr": 0.024882116857655075
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.21008403361344538,
"acc_stderr": 0.026461398717471874,
"acc_norm": 0.21008403361344538,
"acc_norm_stderr": 0.026461398717471874
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.1986754966887417,
"acc_stderr": 0.03257847384436776,
"acc_norm": 0.1986754966887417,
"acc_norm_stderr": 0.03257847384436776
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.1926605504587156,
"acc_stderr": 0.016909276884936094,
"acc_norm": 0.1926605504587156,
"acc_norm_stderr": 0.016909276884936094
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.1527777777777778,
"acc_stderr": 0.024536326026134224,
"acc_norm": 0.1527777777777778,
"acc_norm_stderr": 0.024536326026134224
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.25,
"acc_stderr": 0.03039153369274154,
"acc_norm": 0.25,
"acc_norm_stderr": 0.03039153369274154
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.270042194092827,
"acc_stderr": 0.028900721906293426,
"acc_norm": 0.270042194092827,
"acc_norm_stderr": 0.028900721906293426
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.31390134529147984,
"acc_stderr": 0.031146796482972465,
"acc_norm": 0.31390134529147984,
"acc_norm_stderr": 0.031146796482972465
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.2595419847328244,
"acc_stderr": 0.03844876139785271,
"acc_norm": 0.2595419847328244,
"acc_norm_stderr": 0.03844876139785271
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.2396694214876033,
"acc_stderr": 0.03896878985070417,
"acc_norm": 0.2396694214876033,
"acc_norm_stderr": 0.03896878985070417
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.25925925925925924,
"acc_stderr": 0.042365112580946336,
"acc_norm": 0.25925925925925924,
"acc_norm_stderr": 0.042365112580946336
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.22085889570552147,
"acc_stderr": 0.032591773927421776,
"acc_norm": 0.22085889570552147,
"acc_norm_stderr": 0.032591773927421776
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.3125,
"acc_stderr": 0.043994650575715215,
"acc_norm": 0.3125,
"acc_norm_stderr": 0.043994650575715215
},
"harness|hendrycksTest-management|5": {
"acc": 0.17475728155339806,
"acc_stderr": 0.037601780060266224,
"acc_norm": 0.17475728155339806,
"acc_norm_stderr": 0.037601780060266224
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.2905982905982906,
"acc_stderr": 0.02974504857267404,
"acc_norm": 0.2905982905982906,
"acc_norm_stderr": 0.02974504857267404
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.3,
"acc_stderr": 0.046056618647183814,
"acc_norm": 0.3,
"acc_norm_stderr": 0.046056618647183814
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.23754789272030652,
"acc_stderr": 0.015218733046150193,
"acc_norm": 0.23754789272030652,
"acc_norm_stderr": 0.015218733046150193
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.24855491329479767,
"acc_stderr": 0.023267528432100174,
"acc_norm": 0.24855491329479767,
"acc_norm_stderr": 0.023267528432100174
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.23798882681564246,
"acc_stderr": 0.014242630070574915,
"acc_norm": 0.23798882681564246,
"acc_norm_stderr": 0.014242630070574915
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.22549019607843138,
"acc_stderr": 0.023929155517351284,
"acc_norm": 0.22549019607843138,
"acc_norm_stderr": 0.023929155517351284
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.1864951768488746,
"acc_stderr": 0.02212243977248077,
"acc_norm": 0.1864951768488746,
"acc_norm_stderr": 0.02212243977248077
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.21604938271604937,
"acc_stderr": 0.022899162918445806,
"acc_norm": 0.21604938271604937,
"acc_norm_stderr": 0.022899162918445806
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.23404255319148937,
"acc_stderr": 0.025257861359432417,
"acc_norm": 0.23404255319148937,
"acc_norm_stderr": 0.025257861359432417
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.2457627118644068,
"acc_stderr": 0.010996156635142692,
"acc_norm": 0.2457627118644068,
"acc_norm_stderr": 0.010996156635142692
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.18382352941176472,
"acc_stderr": 0.023529242185193106,
"acc_norm": 0.18382352941176472,
"acc_norm_stderr": 0.023529242185193106
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.25,
"acc_stderr": 0.01751781884501444,
"acc_norm": 0.25,
"acc_norm_stderr": 0.01751781884501444
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.21818181818181817,
"acc_stderr": 0.03955932861795833,
"acc_norm": 0.21818181818181817,
"acc_norm_stderr": 0.03955932861795833
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.18775510204081633,
"acc_stderr": 0.02500025603954621,
"acc_norm": 0.18775510204081633,
"acc_norm_stderr": 0.02500025603954621
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.24378109452736318,
"acc_stderr": 0.03036049015401465,
"acc_norm": 0.24378109452736318,
"acc_norm_stderr": 0.03036049015401465
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.28,
"acc_stderr": 0.04512608598542128,
"acc_norm": 0.28,
"acc_norm_stderr": 0.04512608598542128
},
"harness|hendrycksTest-virology|5": {
"acc": 0.28313253012048195,
"acc_stderr": 0.03507295431370518,
"acc_norm": 0.28313253012048195,
"acc_norm_stderr": 0.03507295431370518
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.3216374269005848,
"acc_stderr": 0.03582529442573122,
"acc_norm": 0.3216374269005848,
"acc_norm_stderr": 0.03582529442573122
},
"harness|truthfulqa:mc|0": {
"mc1": 1.0,
"mc1_stderr": 0.0,
"mc2": NaN,
"mc2_stderr": NaN
},
"harness|winogrande|5": {
"acc": 0.4956590370955012,
"acc_stderr": 0.014051956064076911
},
"harness|gsm8k|5": {
"acc": 0.0,
"acc_stderr": 0.0
}
}
```
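The per-task accuracies above can be aggregated with a few lines of Python. This is only an illustrative sketch (the leaderboard computes its own aggregates); the three values are copied from the JSON above:

```python
# Average "acc" over a few hendrycksTest (MMLU) tasks; values copied
# from the results JSON above. The official aggregation may differ.
results = {
    "harness|hendrycksTest-abstract_algebra|5": {"acc": 0.22},
    "harness|hendrycksTest-anatomy|5": {"acc": 0.18518518518518517},
    "harness|hendrycksTest-astronomy|5": {"acc": 0.17763157894736842},
}
accs = [v["acc"] for k, v in results.items() if "hendrycksTest" in k]
print(round(sum(accs) / len(accs), 4))  # 0.1943
```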
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] |
KaiLv/UDR_DART | ---
dataset_info:
features:
- name: idx
dtype: int64
- name: question
dtype: string
- name: target
dtype: string
- name: references
dtype: string
- name: len_question
dtype: int64
- name: len_target
dtype: int64
splits:
- name: train
num_bytes: 8360993
num_examples: 30123
- name: validation
num_bytes: 1657570
num_examples: 2718
- name: test
num_bytes: 2532366
num_examples: 4159
- name: debug
num_bytes: 1396342
num_examples: 5000
download_size: 4740566
dataset_size: 13947271
---
# Dataset Card for "UDR_DART"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
heliosprime/twitter_dataset_1713101722 | ---
dataset_info:
features:
- name: id
dtype: string
- name: tweet_content
dtype: string
- name: user_name
dtype: string
- name: user_id
dtype: string
- name: created_at
dtype: string
- name: url
dtype: string
- name: favourite_count
dtype: int64
- name: scraped_at
dtype: string
- name: image_urls
dtype: string
splits:
- name: train
num_bytes: 10734
num_examples: 30
download_size: 13316
dataset_size: 10734
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "twitter_dataset_1713101722"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
NKLPAWAR/images | ---
license: openrail
---
|
ai4bharat/ai2_arc-hi | ---
annotations_creators:
- found
language_creators:
- found
language:
- hi
license:
- cc-by-sa-4.0
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- question-answering
task_ids:
- open-domain-qa
- multiple-choice-qa
pretty_name: Ai2Arc
language_bcp47:
- en-US
dataset_info:
- config_name: ARC-Challenge
features:
- name: id
dtype: string
- name: question
dtype: string
- name: choices
struct:
- name: text
sequence: string
- name: label
sequence: string
- name: answerKey
dtype: string
splits:
- name: test
num_bytes: 375511
num_examples: 1172
- name: validation
num_bytes: 96660
num_examples: 299
download_size: 449460
dataset_size: 821931
- config_name: ARC-Easy
features:
- name: id
dtype: string
- name: question
dtype: string
- name: choices
struct:
- name: text
sequence: string
- name: label
sequence: string
- name: answerKey
dtype: string
splits:
- name: test
num_bytes: 657514
num_examples: 2376
- name: validation
num_bytes: 157394
num_examples: 570
download_size: 762935
dataset_size: 1433908
configs:
- config_name: ARC-Challenge
data_files:
- split: test
path: ARC-Challenge/test-*
- split: validation
path: ARC-Challenge/validation-*
- config_name: ARC-Easy
data_files:
- split: test
path: ARC-Easy/test-*
- split: validation
path: ARC-Easy/validation-*
---
# Dataset Card for "ai2_arc" translated into Hindi
This is the Hindi-translated version of "ai2_arc", produced using the IndicTrans2 model ([Gala et al., 2023](https://openreview.net/forum?id=vfT4YuzAYA)).
We recommend visiting the "ai2_arc" Hugging Face dataset card ([link](https://huggingface.co/datasets/allenai/ai2_arc)) for the details.
|
edwardjross/wodehouse | ---
dataset_info:
features:
- name: Text#
dtype: string
- name: Type
dtype: string
- name: Issued
dtype: string
- name: Title
dtype: string
- name: Language
dtype: string
- name: Authors
dtype: string
- name: Subjects
dtype: string
- name: LoCC
dtype: string
- name: Bookshelves
dtype: string
- name: raw_text
dtype: string
- name: content
dtype: string
splits:
- name: train
num_bytes: 23457287
num_examples: 30
- name: valid
num_bytes: 5416245
num_examples: 10
- name: test
num_bytes: 5717889
num_examples: 8
download_size: 21729310
dataset_size: 34591421
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: valid
path: data/valid-*
- split: test
path: data/test-*
---
|
kheder/quran_hasanat_hadith_datasets0 | ---
dataset_info:
features:
- name: who-i-am
dtype: string
- name: quran/hasanat
list:
- name: id
dtype: int64
- name: name
dtype: string
- name: total_hasanat
dtype: int64
- name: total_verses
dtype: int64
- name: translation
dtype: string
- name: transliteration
dtype: string
- name: type
dtype: string
- name: verses
list:
- name: hasanat
dtype: int64
- name: id
dtype: int64
- name: text
dtype: string
- name: translation
dtype: string
- name: hadith
list:
list:
- name: chain_indx
dtype: string
- name: chapter
dtype: string
- name: chapter_no
dtype: string
- name: hadith_id
dtype: string
- name: hadith_no
dtype: string
- name: id
dtype: string
- name: source
dtype: string
- name: text_ar
dtype: string
- name: text_en
dtype: string
splits:
- name: train
num_bytes: 44047246
num_examples: 2
download_size: 16587703
dataset_size: 44047246
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "quran_hasanat_hadith_datasets0"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
BadreddineHug/2s_librispeech_subset | ---
dataset_info:
features:
- name: file
dtype: string
- name: audio
struct:
- name: array
sequence: float64
- name: path
dtype: string
- name: sampling_rate
dtype: int64
- name: text
dtype: string
- name: speaker_id
dtype: int64
- name: chapter_id
dtype: int64
- name: id
dtype: string
splits:
- name: train
num_bytes: 915506
num_examples: 4
download_size: 294279
dataset_size: 915506
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Lichang-Chen/837k_ift | ---
license: apache-2.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: user2
dtype: string
- name: user
dtype: string
- name: category
dtype: string
- name: assistant
dtype: string
- name: template
dtype: string
- name: assistant2
dtype: string
splits:
- name: train
num_bytes: 1440610993
num_examples: 837067
download_size: 781499008
dataset_size: 1440610993
---
|
monmamo/rhea-fairheart | ---
license: cc
language:
- en
tags:
- art
- anthrope
- female
pretty_name: Rhea Fairheart
size_categories:
- n<1K
---
image generation prompt:
- average-height woman
- large pear-shaped belly
- rough olive-brown subtropical skin
- shoulder-length brown hair
- large breasts
- thick legs
- wide hips
- long neck
- brown pupils
- smile
- large brown dragon ears
|
christykoh/boolq_zh | ---
dataset_info:
features:
- name: question
dtype: string
- name: passage
dtype: string
- name: answer
dtype: bool
splits:
- name: train
num_bytes: 4879954
num_examples: 9427
- name: validation
num_bytes: 1668454
num_examples: 3270
download_size: 4455141
dataset_size: 6548408
---
# Dataset Card for "boolq_zh"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
azharmo/tamil-orca | ---
license: apache-2.0
task_categories:
- text-generation
language:
- ta
tags:
- orca
- reasoning
- tamil
- generation
pretty_name: Tamil-orca
size_categories:
- 10K<n<100K
---
# Tamil Orca-Style Dataset
## Overview
This repository hosts the Tamil Orca-style dataset, meticulously curated to enhance the reasoning capabilities of large language models in Tamil. The dataset is a fusion of translations and responses generated by GPT-4 and Gemini models.
- **Content**: The dataset contains three columns: 'Instruction', 'Query', and 'Answer'.
- **Purpose**: It's designed to significantly improve the reasoning capability of AI language models in Tamil.
- **Usage**: If you utilize this dataset or any component of the Tamil-orca datasets in your research, please acknowledge it in your citations.
## Upcoming Research
- Research based on this dataset is underway and will be published soon, contributing valuable insights into language model training and performance in Tamil.
## Credits
Get to know the creators behind this innovative dataset/model and follow their contributions to the field:
- **Creator**: Mohamed Azharudeen
- **LinkedIn**: [Mohamed Azharudeen](https://www.linkedin.com/in/mohamed-azharudeen/)
|
open-llm-leaderboard/details_vicgalle__ConfigurableHermes-7B | ---
pretty_name: Evaluation run of vicgalle/ConfigurableHermes-7B
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [vicgalle/ConfigurableHermes-7B](https://huggingface.co/vicgalle/ConfigurableHermes-7B)\
\ on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 63 configurations, each one corresponding to one of\
\ the evaluated tasks.\n\nThe dataset has been created from 1 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
\ timestamp of the run. The \"train\" split always points to the latest results.\n\
\nAn additional configuration \"results\" stores all the aggregated results of the\
\ run (and is used to compute and display the aggregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_vicgalle__ConfigurableHermes-7B\"\
,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\
These are the [latest results from run 2024-02-17T19:36:55.345769](https://huggingface.co/datasets/open-llm-leaderboard/details_vicgalle__ConfigurableHermes-7B/blob/main/results_2024-02-17T19-36-55.345769.json)(note\
\ that there might be results for other tasks in the repos if successive evals didn't\
\ cover the same tasks. You find each in the results and the \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.6269295268349732,\n\
\ \"acc_stderr\": 0.03251131036200702,\n \"acc_norm\": 0.6287150608544317,\n\
\ \"acc_norm_stderr\": 0.03316089833802905,\n \"mc1\": 0.4283965728274174,\n\
\ \"mc1_stderr\": 0.017323088597314754,\n \"mc2\": 0.6170544221880094,\n\
\ \"mc2_stderr\": 0.015198027849424717\n },\n \"harness|arc:challenge|25\"\
: {\n \"acc\": 0.6083617747440273,\n \"acc_stderr\": 0.014264122124938215,\n\
\ \"acc_norm\": 0.6604095563139932,\n \"acc_norm_stderr\": 0.013839039762820169\n\
\ },\n \"harness|hellaswag|10\": {\n \"acc\": 0.6552479585739892,\n\
\ \"acc_stderr\": 0.00474316003427115,\n \"acc_norm\": 0.8430591515634336,\n\
\ \"acc_norm_stderr\": 0.0036300159898964013\n },\n \"harness|hendrycksTest-abstract_algebra|5\"\
: {\n \"acc\": 0.33,\n \"acc_stderr\": 0.04725815626252605,\n \
\ \"acc_norm\": 0.33,\n \"acc_norm_stderr\": 0.04725815626252605\n \
\ },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.5925925925925926,\n\
\ \"acc_stderr\": 0.04244633238353227,\n \"acc_norm\": 0.5925925925925926,\n\
\ \"acc_norm_stderr\": 0.04244633238353227\n },\n \"harness|hendrycksTest-astronomy|5\"\
: {\n \"acc\": 0.6578947368421053,\n \"acc_stderr\": 0.03860731599316092,\n\
\ \"acc_norm\": 0.6578947368421053,\n \"acc_norm_stderr\": 0.03860731599316092\n\
\ },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.55,\n\
\ \"acc_stderr\": 0.05,\n \"acc_norm\": 0.55,\n \"acc_norm_stderr\"\
: 0.05\n },\n \"harness|hendrycksTest-clinical_knowledge|5\": {\n \"\
acc\": 0.6641509433962264,\n \"acc_stderr\": 0.02906722014664483,\n \
\ \"acc_norm\": 0.6641509433962264,\n \"acc_norm_stderr\": 0.02906722014664483\n\
\ },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.7430555555555556,\n\
\ \"acc_stderr\": 0.03653946969442099,\n \"acc_norm\": 0.7430555555555556,\n\
\ \"acc_norm_stderr\": 0.03653946969442099\n },\n \"harness|hendrycksTest-college_chemistry|5\"\
: {\n \"acc\": 0.46,\n \"acc_stderr\": 0.05009082659620332,\n \
\ \"acc_norm\": 0.46,\n \"acc_norm_stderr\": 0.05009082659620332\n \
\ },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\"\
: 0.41,\n \"acc_stderr\": 0.04943110704237102,\n \"acc_norm\": 0.41,\n\
\ \"acc_norm_stderr\": 0.04943110704237102\n },\n \"harness|hendrycksTest-college_mathematics|5\"\
: {\n \"acc\": 0.27,\n \"acc_stderr\": 0.044619604333847394,\n \
\ \"acc_norm\": 0.27,\n \"acc_norm_stderr\": 0.044619604333847394\n \
\ },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.6011560693641619,\n\
\ \"acc_stderr\": 0.0373362665538351,\n \"acc_norm\": 0.6011560693641619,\n\
\ \"acc_norm_stderr\": 0.0373362665538351\n },\n \"harness|hendrycksTest-college_physics|5\"\
: {\n \"acc\": 0.4019607843137255,\n \"acc_stderr\": 0.04878608714466996,\n\
\ \"acc_norm\": 0.4019607843137255,\n \"acc_norm_stderr\": 0.04878608714466996\n\
\ },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\":\
\ 0.75,\n \"acc_stderr\": 0.04351941398892446,\n \"acc_norm\": 0.75,\n\
\ \"acc_norm_stderr\": 0.04351941398892446\n },\n \"harness|hendrycksTest-conceptual_physics|5\"\
: {\n \"acc\": 0.5404255319148936,\n \"acc_stderr\": 0.032579014820998356,\n\
\ \"acc_norm\": 0.5404255319148936,\n \"acc_norm_stderr\": 0.032579014820998356\n\
\ },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.43859649122807015,\n\
\ \"acc_stderr\": 0.04668000738510455,\n \"acc_norm\": 0.43859649122807015,\n\
\ \"acc_norm_stderr\": 0.04668000738510455\n },\n \"harness|hendrycksTest-electrical_engineering|5\"\
: {\n \"acc\": 0.5241379310344828,\n \"acc_stderr\": 0.0416180850350153,\n\
\ \"acc_norm\": 0.5241379310344828,\n \"acc_norm_stderr\": 0.0416180850350153\n\
\ },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\"\
: 0.42592592592592593,\n \"acc_stderr\": 0.025467149045469553,\n \"\
acc_norm\": 0.42592592592592593,\n \"acc_norm_stderr\": 0.025467149045469553\n\
\ },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.42857142857142855,\n\
\ \"acc_stderr\": 0.04426266681379909,\n \"acc_norm\": 0.42857142857142855,\n\
\ \"acc_norm_stderr\": 0.04426266681379909\n },\n \"harness|hendrycksTest-global_facts|5\"\
: {\n \"acc\": 0.36,\n \"acc_stderr\": 0.04824181513244218,\n \
\ \"acc_norm\": 0.36,\n \"acc_norm_stderr\": 0.04824181513244218\n \
\ },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.7580645161290323,\n\
\ \"acc_stderr\": 0.02436259969303108,\n \"acc_norm\": 0.7580645161290323,\n\
\ \"acc_norm_stderr\": 0.02436259969303108\n },\n \"harness|hendrycksTest-high_school_chemistry|5\"\
: {\n \"acc\": 0.5172413793103449,\n \"acc_stderr\": 0.035158955511656986,\n\
\ \"acc_norm\": 0.5172413793103449,\n \"acc_norm_stderr\": 0.035158955511656986\n\
\ },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \
\ \"acc\": 0.68,\n \"acc_stderr\": 0.04688261722621505,\n \"acc_norm\"\
: 0.68,\n \"acc_norm_stderr\": 0.04688261722621505\n },\n \"harness|hendrycksTest-high_school_european_history|5\"\
: {\n \"acc\": 0.793939393939394,\n \"acc_stderr\": 0.03158415324047711,\n\
\ \"acc_norm\": 0.793939393939394,\n \"acc_norm_stderr\": 0.03158415324047711\n\
\ },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\"\
: 0.7626262626262627,\n \"acc_stderr\": 0.030313710538198892,\n \"\
acc_norm\": 0.7626262626262627,\n \"acc_norm_stderr\": 0.030313710538198892\n\
\ },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n\
\ \"acc\": 0.8652849740932642,\n \"acc_stderr\": 0.02463978909770944,\n\
\ \"acc_norm\": 0.8652849740932642,\n \"acc_norm_stderr\": 0.02463978909770944\n\
\ },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \
\ \"acc\": 0.6205128205128205,\n \"acc_stderr\": 0.024603626924097417,\n\
\ \"acc_norm\": 0.6205128205128205,\n \"acc_norm_stderr\": 0.024603626924097417\n\
\ },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"\
acc\": 0.27037037037037037,\n \"acc_stderr\": 0.02708037281514567,\n \
\ \"acc_norm\": 0.27037037037037037,\n \"acc_norm_stderr\": 0.02708037281514567\n\
\ },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \
\ \"acc\": 0.6386554621848739,\n \"acc_stderr\": 0.031204691225150016,\n\
\ \"acc_norm\": 0.6386554621848739,\n \"acc_norm_stderr\": 0.031204691225150016\n\
\ },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\"\
: 0.32450331125827814,\n \"acc_stderr\": 0.038227469376587525,\n \"\
acc_norm\": 0.32450331125827814,\n \"acc_norm_stderr\": 0.038227469376587525\n\
\ },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\"\
: 0.8348623853211009,\n \"acc_stderr\": 0.015919557829976037,\n \"\
acc_norm\": 0.8348623853211009,\n \"acc_norm_stderr\": 0.015919557829976037\n\
\ },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\"\
: 0.49074074074074076,\n \"acc_stderr\": 0.034093869469927006,\n \"\
acc_norm\": 0.49074074074074076,\n \"acc_norm_stderr\": 0.034093869469927006\n\
\ },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\"\
: 0.7990196078431373,\n \"acc_stderr\": 0.028125972265654373,\n \"\
acc_norm\": 0.7990196078431373,\n \"acc_norm_stderr\": 0.028125972265654373\n\
\ },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"\
acc\": 0.7848101265822784,\n \"acc_stderr\": 0.026750826994676173,\n \
\ \"acc_norm\": 0.7848101265822784,\n \"acc_norm_stderr\": 0.026750826994676173\n\
\ },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.6860986547085202,\n\
\ \"acc_stderr\": 0.031146796482972465,\n \"acc_norm\": 0.6860986547085202,\n\
\ \"acc_norm_stderr\": 0.031146796482972465\n },\n \"harness|hendrycksTest-human_sexuality|5\"\
: {\n \"acc\": 0.7709923664122137,\n \"acc_stderr\": 0.036853466317118506,\n\
\ \"acc_norm\": 0.7709923664122137,\n \"acc_norm_stderr\": 0.036853466317118506\n\
\ },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\":\
\ 0.768595041322314,\n \"acc_stderr\": 0.03849856098794088,\n \"acc_norm\"\
: 0.768595041322314,\n \"acc_norm_stderr\": 0.03849856098794088\n },\n\
\ \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.7962962962962963,\n\
\ \"acc_stderr\": 0.03893542518824847,\n \"acc_norm\": 0.7962962962962963,\n\
\ \"acc_norm_stderr\": 0.03893542518824847\n },\n \"harness|hendrycksTest-logical_fallacies|5\"\
: {\n \"acc\": 0.7914110429447853,\n \"acc_stderr\": 0.03192193448934725,\n\
\ \"acc_norm\": 0.7914110429447853,\n \"acc_norm_stderr\": 0.03192193448934725\n\
\ },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.48214285714285715,\n\
\ \"acc_stderr\": 0.047427623612430116,\n \"acc_norm\": 0.48214285714285715,\n\
\ \"acc_norm_stderr\": 0.047427623612430116\n },\n \"harness|hendrycksTest-management|5\"\
: {\n \"acc\": 0.7669902912621359,\n \"acc_stderr\": 0.04185832598928315,\n\
\ \"acc_norm\": 0.7669902912621359,\n \"acc_norm_stderr\": 0.04185832598928315\n\
\ },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.8461538461538461,\n\
\ \"acc_stderr\": 0.023636873317489284,\n \"acc_norm\": 0.8461538461538461,\n\
\ \"acc_norm_stderr\": 0.023636873317489284\n },\n \"harness|hendrycksTest-medical_genetics|5\"\
: {\n \"acc\": 0.7,\n \"acc_stderr\": 0.046056618647183814,\n \
\ \"acc_norm\": 0.7,\n \"acc_norm_stderr\": 0.046056618647183814\n \
\ },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.8084291187739464,\n\
\ \"acc_stderr\": 0.014072859310451949,\n \"acc_norm\": 0.8084291187739464,\n\
\ \"acc_norm_stderr\": 0.014072859310451949\n },\n \"harness|hendrycksTest-moral_disputes|5\"\
: {\n \"acc\": 0.7052023121387283,\n \"acc_stderr\": 0.024547617794803828,\n\
\ \"acc_norm\": 0.7052023121387283,\n \"acc_norm_stderr\": 0.024547617794803828\n\
\ },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.30614525139664805,\n\
\ \"acc_stderr\": 0.015414494487903213,\n \"acc_norm\": 0.30614525139664805,\n\
\ \"acc_norm_stderr\": 0.015414494487903213\n },\n \"harness|hendrycksTest-nutrition|5\"\
: {\n \"acc\": 0.7254901960784313,\n \"acc_stderr\": 0.02555316999182652,\n\
\ \"acc_norm\": 0.7254901960784313,\n \"acc_norm_stderr\": 0.02555316999182652\n\
\ },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.7041800643086816,\n\
\ \"acc_stderr\": 0.025922371788818763,\n \"acc_norm\": 0.7041800643086816,\n\
\ \"acc_norm_stderr\": 0.025922371788818763\n },\n \"harness|hendrycksTest-prehistory|5\"\
: {\n \"acc\": 0.7314814814814815,\n \"acc_stderr\": 0.024659685185967284,\n\
\ \"acc_norm\": 0.7314814814814815,\n \"acc_norm_stderr\": 0.024659685185967284\n\
\ },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"\
acc\": 0.5035460992907801,\n \"acc_stderr\": 0.02982674915328092,\n \
\ \"acc_norm\": 0.5035460992907801,\n \"acc_norm_stderr\": 0.02982674915328092\n\
\ },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.4758800521512386,\n\
\ \"acc_stderr\": 0.01275536872286393,\n \"acc_norm\": 0.4758800521512386,\n\
\ \"acc_norm_stderr\": 0.01275536872286393\n },\n \"harness|hendrycksTest-professional_medicine|5\"\
: {\n \"acc\": 0.6544117647058824,\n \"acc_stderr\": 0.028888193103988626,\n\
\ \"acc_norm\": 0.6544117647058824,\n \"acc_norm_stderr\": 0.028888193103988626\n\
\ },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"\
acc\": 0.6715686274509803,\n \"acc_stderr\": 0.018999707383162666,\n \
\ \"acc_norm\": 0.6715686274509803,\n \"acc_norm_stderr\": 0.018999707383162666\n\
\ },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.6636363636363637,\n\
\ \"acc_stderr\": 0.04525393596302506,\n \"acc_norm\": 0.6636363636363637,\n\
\ \"acc_norm_stderr\": 0.04525393596302506\n },\n \"harness|hendrycksTest-security_studies|5\"\
: {\n \"acc\": 0.7061224489795919,\n \"acc_stderr\": 0.02916273841024977,\n\
\ \"acc_norm\": 0.7061224489795919,\n \"acc_norm_stderr\": 0.02916273841024977\n\
\ },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.7960199004975125,\n\
\ \"acc_stderr\": 0.02849317624532607,\n \"acc_norm\": 0.7960199004975125,\n\
\ \"acc_norm_stderr\": 0.02849317624532607\n },\n \"harness|hendrycksTest-us_foreign_policy|5\"\
: {\n \"acc\": 0.85,\n \"acc_stderr\": 0.03588702812826371,\n \
\ \"acc_norm\": 0.85,\n \"acc_norm_stderr\": 0.03588702812826371\n \
\ },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.5662650602409639,\n\
\ \"acc_stderr\": 0.03858158940685515,\n \"acc_norm\": 0.5662650602409639,\n\
\ \"acc_norm_stderr\": 0.03858158940685515\n },\n \"harness|hendrycksTest-world_religions|5\"\
: {\n \"acc\": 0.8245614035087719,\n \"acc_stderr\": 0.029170885500727665,\n\
\ \"acc_norm\": 0.8245614035087719,\n \"acc_norm_stderr\": 0.029170885500727665\n\
\ },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.4283965728274174,\n\
\ \"mc1_stderr\": 0.017323088597314754,\n \"mc2\": 0.6170544221880094,\n\
\ \"mc2_stderr\": 0.015198027849424717\n },\n \"harness|winogrande|5\"\
: {\n \"acc\": 0.7742699289660616,\n \"acc_stderr\": 0.01174962626090256\n\
\ },\n \"harness|gsm8k|5\": {\n \"acc\": 0.6141015921152388,\n \
\ \"acc_stderr\": 0.013409077471319168\n }\n}\n```"
repo_url: https://huggingface.co/vicgalle/ConfigurableHermes-7B
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_arc_challenge_25
data_files:
- split: 2024_02_17T19_36_55.345769
path:
- '**/details_harness|arc:challenge|25_2024-02-17T19-36-55.345769.parquet'
- split: latest
path:
- '**/details_harness|arc:challenge|25_2024-02-17T19-36-55.345769.parquet'
- config_name: harness_gsm8k_5
data_files:
- split: 2024_02_17T19_36_55.345769
path:
- '**/details_harness|gsm8k|5_2024-02-17T19-36-55.345769.parquet'
- split: latest
path:
- '**/details_harness|gsm8k|5_2024-02-17T19-36-55.345769.parquet'
- config_name: harness_hellaswag_10
data_files:
- split: 2024_02_17T19_36_55.345769
path:
- '**/details_harness|hellaswag|10_2024-02-17T19-36-55.345769.parquet'
- split: latest
path:
- '**/details_harness|hellaswag|10_2024-02-17T19-36-55.345769.parquet'
- config_name: harness_hendrycksTest_5
data_files:
- split: 2024_02_17T19_36_55.345769
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-02-17T19-36-55.345769.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2024-02-17T19-36-55.345769.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2024-02-17T19-36-55.345769.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2024-02-17T19-36-55.345769.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-02-17T19-36-55.345769.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2024-02-17T19-36-55.345769.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-02-17T19-36-55.345769.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-02-17T19-36-55.345769.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-02-17T19-36-55.345769.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2024-02-17T19-36-55.345769.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2024-02-17T19-36-55.345769.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2024-02-17T19-36-55.345769.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-02-17T19-36-55.345769.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2024-02-17T19-36-55.345769.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-02-17T19-36-55.345769.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-02-17T19-36-55.345769.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2024-02-17T19-36-55.345769.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2024-02-17T19-36-55.345769.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-02-17T19-36-55.345769.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-02-17T19-36-55.345769.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-02-17T19-36-55.345769.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-02-17T19-36-55.345769.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-02-17T19-36-55.345769.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-02-17T19-36-55.345769.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-02-17T19-36-55.345769.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-02-17T19-36-55.345769.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-02-17T19-36-55.345769.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-02-17T19-36-55.345769.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-02-17T19-36-55.345769.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-02-17T19-36-55.345769.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-02-17T19-36-55.345769.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-02-17T19-36-55.345769.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2024-02-17T19-36-55.345769.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-02-17T19-36-55.345769.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2024-02-17T19-36-55.345769.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-02-17T19-36-55.345769.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-02-17T19-36-55.345769.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2024-02-17T19-36-55.345769.parquet'
- '**/details_harness|hendrycksTest-management|5_2024-02-17T19-36-55.345769.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2024-02-17T19-36-55.345769.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-02-17T19-36-55.345769.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-02-17T19-36-55.345769.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-02-17T19-36-55.345769.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-02-17T19-36-55.345769.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2024-02-17T19-36-55.345769.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2024-02-17T19-36-55.345769.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2024-02-17T19-36-55.345769.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-02-17T19-36-55.345769.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2024-02-17T19-36-55.345769.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-02-17T19-36-55.345769.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-02-17T19-36-55.345769.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2024-02-17T19-36-55.345769.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2024-02-17T19-36-55.345769.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2024-02-17T19-36-55.345769.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-02-17T19-36-55.345769.parquet'
- '**/details_harness|hendrycksTest-virology|5_2024-02-17T19-36-55.345769.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2024-02-17T19-36-55.345769.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-02-17T19-36-55.345769.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2024-02-17T19-36-55.345769.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2024-02-17T19-36-55.345769.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2024-02-17T19-36-55.345769.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-02-17T19-36-55.345769.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2024-02-17T19-36-55.345769.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-02-17T19-36-55.345769.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-02-17T19-36-55.345769.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-02-17T19-36-55.345769.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2024-02-17T19-36-55.345769.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2024-02-17T19-36-55.345769.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2024-02-17T19-36-55.345769.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-02-17T19-36-55.345769.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2024-02-17T19-36-55.345769.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-02-17T19-36-55.345769.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-02-17T19-36-55.345769.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2024-02-17T19-36-55.345769.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2024-02-17T19-36-55.345769.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-02-17T19-36-55.345769.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-02-17T19-36-55.345769.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-02-17T19-36-55.345769.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-02-17T19-36-55.345769.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-02-17T19-36-55.345769.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-02-17T19-36-55.345769.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-02-17T19-36-55.345769.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-02-17T19-36-55.345769.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-02-17T19-36-55.345769.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-02-17T19-36-55.345769.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-02-17T19-36-55.345769.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-02-17T19-36-55.345769.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-02-17T19-36-55.345769.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-02-17T19-36-55.345769.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2024-02-17T19-36-55.345769.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-02-17T19-36-55.345769.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2024-02-17T19-36-55.345769.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-02-17T19-36-55.345769.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-02-17T19-36-55.345769.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2024-02-17T19-36-55.345769.parquet'
- '**/details_harness|hendrycksTest-management|5_2024-02-17T19-36-55.345769.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2024-02-17T19-36-55.345769.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-02-17T19-36-55.345769.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-02-17T19-36-55.345769.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-02-17T19-36-55.345769.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-02-17T19-36-55.345769.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2024-02-17T19-36-55.345769.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2024-02-17T19-36-55.345769.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2024-02-17T19-36-55.345769.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-02-17T19-36-55.345769.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2024-02-17T19-36-55.345769.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-02-17T19-36-55.345769.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-02-17T19-36-55.345769.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2024-02-17T19-36-55.345769.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2024-02-17T19-36-55.345769.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2024-02-17T19-36-55.345769.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-02-17T19-36-55.345769.parquet'
- '**/details_harness|hendrycksTest-virology|5_2024-02-17T19-36-55.345769.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2024-02-17T19-36-55.345769.parquet'
- config_name: harness_hendrycksTest_abstract_algebra_5
data_files:
- split: 2024_02_17T19_36_55.345769
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-02-17T19-36-55.345769.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-02-17T19-36-55.345769.parquet'
- config_name: harness_hendrycksTest_anatomy_5
data_files:
- split: 2024_02_17T19_36_55.345769
path:
- '**/details_harness|hendrycksTest-anatomy|5_2024-02-17T19-36-55.345769.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-anatomy|5_2024-02-17T19-36-55.345769.parquet'
- config_name: harness_hendrycksTest_astronomy_5
data_files:
- split: 2024_02_17T19_36_55.345769
path:
- '**/details_harness|hendrycksTest-astronomy|5_2024-02-17T19-36-55.345769.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-astronomy|5_2024-02-17T19-36-55.345769.parquet'
- config_name: harness_hendrycksTest_business_ethics_5
data_files:
- split: 2024_02_17T19_36_55.345769
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2024-02-17T19-36-55.345769.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2024-02-17T19-36-55.345769.parquet'
- config_name: harness_hendrycksTest_clinical_knowledge_5
data_files:
- split: 2024_02_17T19_36_55.345769
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-02-17T19-36-55.345769.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-02-17T19-36-55.345769.parquet'
- config_name: harness_hendrycksTest_college_biology_5
data_files:
- split: 2024_02_17T19_36_55.345769
path:
- '**/details_harness|hendrycksTest-college_biology|5_2024-02-17T19-36-55.345769.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_biology|5_2024-02-17T19-36-55.345769.parquet'
- config_name: harness_hendrycksTest_college_chemistry_5
data_files:
- split: 2024_02_17T19_36_55.345769
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-02-17T19-36-55.345769.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-02-17T19-36-55.345769.parquet'
- config_name: harness_hendrycksTest_college_computer_science_5
data_files:
- split: 2024_02_17T19_36_55.345769
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-02-17T19-36-55.345769.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-02-17T19-36-55.345769.parquet'
- config_name: harness_hendrycksTest_college_mathematics_5
data_files:
- split: 2024_02_17T19_36_55.345769
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-02-17T19-36-55.345769.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-02-17T19-36-55.345769.parquet'
- config_name: harness_hendrycksTest_college_medicine_5
data_files:
- split: 2024_02_17T19_36_55.345769
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2024-02-17T19-36-55.345769.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2024-02-17T19-36-55.345769.parquet'
- config_name: harness_hendrycksTest_college_physics_5
data_files:
- split: 2024_02_17T19_36_55.345769
path:
- '**/details_harness|hendrycksTest-college_physics|5_2024-02-17T19-36-55.345769.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_physics|5_2024-02-17T19-36-55.345769.parquet'
- config_name: harness_hendrycksTest_computer_security_5
data_files:
- split: 2024_02_17T19_36_55.345769
path:
- '**/details_harness|hendrycksTest-computer_security|5_2024-02-17T19-36-55.345769.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-computer_security|5_2024-02-17T19-36-55.345769.parquet'
- config_name: harness_hendrycksTest_conceptual_physics_5
data_files:
- split: 2024_02_17T19_36_55.345769
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-02-17T19-36-55.345769.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-02-17T19-36-55.345769.parquet'
- config_name: harness_hendrycksTest_econometrics_5
data_files:
- split: 2024_02_17T19_36_55.345769
path:
- '**/details_harness|hendrycksTest-econometrics|5_2024-02-17T19-36-55.345769.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-econometrics|5_2024-02-17T19-36-55.345769.parquet'
- config_name: harness_hendrycksTest_electrical_engineering_5
data_files:
- split: 2024_02_17T19_36_55.345769
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-02-17T19-36-55.345769.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-02-17T19-36-55.345769.parquet'
- config_name: harness_hendrycksTest_elementary_mathematics_5
data_files:
- split: 2024_02_17T19_36_55.345769
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-02-17T19-36-55.345769.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-02-17T19-36-55.345769.parquet'
- config_name: harness_hendrycksTest_formal_logic_5
data_files:
- split: 2024_02_17T19_36_55.345769
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2024-02-17T19-36-55.345769.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2024-02-17T19-36-55.345769.parquet'
- config_name: harness_hendrycksTest_global_facts_5
data_files:
- split: 2024_02_17T19_36_55.345769
path:
- '**/details_harness|hendrycksTest-global_facts|5_2024-02-17T19-36-55.345769.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-global_facts|5_2024-02-17T19-36-55.345769.parquet'
- config_name: harness_hendrycksTest_high_school_biology_5
data_files:
- split: 2024_02_17T19_36_55.345769
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-02-17T19-36-55.345769.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-02-17T19-36-55.345769.parquet'
- config_name: harness_hendrycksTest_high_school_chemistry_5
data_files:
- split: 2024_02_17T19_36_55.345769
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-02-17T19-36-55.345769.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-02-17T19-36-55.345769.parquet'
- config_name: harness_hendrycksTest_high_school_computer_science_5
data_files:
- split: 2024_02_17T19_36_55.345769
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-02-17T19-36-55.345769.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-02-17T19-36-55.345769.parquet'
- config_name: harness_hendrycksTest_high_school_european_history_5
data_files:
- split: 2024_02_17T19_36_55.345769
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-02-17T19-36-55.345769.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-02-17T19-36-55.345769.parquet'
- config_name: harness_hendrycksTest_high_school_geography_5
data_files:
- split: 2024_02_17T19_36_55.345769
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-02-17T19-36-55.345769.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-02-17T19-36-55.345769.parquet'
- config_name: harness_hendrycksTest_high_school_government_and_politics_5
data_files:
- split: 2024_02_17T19_36_55.345769
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-02-17T19-36-55.345769.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-02-17T19-36-55.345769.parquet'
- config_name: harness_hendrycksTest_high_school_macroeconomics_5
data_files:
- split: 2024_02_17T19_36_55.345769
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-02-17T19-36-55.345769.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-02-17T19-36-55.345769.parquet'
- config_name: harness_hendrycksTest_high_school_mathematics_5
data_files:
- split: 2024_02_17T19_36_55.345769
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-02-17T19-36-55.345769.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-02-17T19-36-55.345769.parquet'
- config_name: harness_hendrycksTest_high_school_microeconomics_5
data_files:
- split: 2024_02_17T19_36_55.345769
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-02-17T19-36-55.345769.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-02-17T19-36-55.345769.parquet'
- config_name: harness_hendrycksTest_high_school_physics_5
data_files:
- split: 2024_02_17T19_36_55.345769
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-02-17T19-36-55.345769.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-02-17T19-36-55.345769.parquet'
- config_name: harness_hendrycksTest_high_school_psychology_5
data_files:
- split: 2024_02_17T19_36_55.345769
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-02-17T19-36-55.345769.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-02-17T19-36-55.345769.parquet'
- config_name: harness_hendrycksTest_high_school_statistics_5
data_files:
- split: 2024_02_17T19_36_55.345769
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-02-17T19-36-55.345769.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-02-17T19-36-55.345769.parquet'
- config_name: harness_hendrycksTest_high_school_us_history_5
data_files:
- split: 2024_02_17T19_36_55.345769
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-02-17T19-36-55.345769.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-02-17T19-36-55.345769.parquet'
- config_name: harness_hendrycksTest_high_school_world_history_5
data_files:
- split: 2024_02_17T19_36_55.345769
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-02-17T19-36-55.345769.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-02-17T19-36-55.345769.parquet'
- config_name: harness_hendrycksTest_human_aging_5
data_files:
- split: 2024_02_17T19_36_55.345769
path:
- '**/details_harness|hendrycksTest-human_aging|5_2024-02-17T19-36-55.345769.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_aging|5_2024-02-17T19-36-55.345769.parquet'
- config_name: harness_hendrycksTest_human_sexuality_5
data_files:
- split: 2024_02_17T19_36_55.345769
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-02-17T19-36-55.345769.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-02-17T19-36-55.345769.parquet'
- config_name: harness_hendrycksTest_international_law_5
data_files:
- split: 2024_02_17T19_36_55.345769
path:
- '**/details_harness|hendrycksTest-international_law|5_2024-02-17T19-36-55.345769.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-international_law|5_2024-02-17T19-36-55.345769.parquet'
- config_name: harness_hendrycksTest_jurisprudence_5
data_files:
- split: 2024_02_17T19_36_55.345769
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-02-17T19-36-55.345769.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-02-17T19-36-55.345769.parquet'
- config_name: harness_hendrycksTest_logical_fallacies_5
data_files:
- split: 2024_02_17T19_36_55.345769
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-02-17T19-36-55.345769.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-02-17T19-36-55.345769.parquet'
- config_name: harness_hendrycksTest_machine_learning_5
data_files:
- split: 2024_02_17T19_36_55.345769
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2024-02-17T19-36-55.345769.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2024-02-17T19-36-55.345769.parquet'
- config_name: harness_hendrycksTest_management_5
data_files:
- split: 2024_02_17T19_36_55.345769
path:
- '**/details_harness|hendrycksTest-management|5_2024-02-17T19-36-55.345769.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-management|5_2024-02-17T19-36-55.345769.parquet'
- config_name: harness_hendrycksTest_marketing_5
data_files:
- split: 2024_02_17T19_36_55.345769
path:
- '**/details_harness|hendrycksTest-marketing|5_2024-02-17T19-36-55.345769.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-marketing|5_2024-02-17T19-36-55.345769.parquet'
- config_name: harness_hendrycksTest_medical_genetics_5
data_files:
- split: 2024_02_17T19_36_55.345769
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-02-17T19-36-55.345769.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-02-17T19-36-55.345769.parquet'
- config_name: harness_hendrycksTest_miscellaneous_5
data_files:
- split: 2024_02_17T19_36_55.345769
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-02-17T19-36-55.345769.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-02-17T19-36-55.345769.parquet'
- config_name: harness_hendrycksTest_moral_disputes_5
data_files:
- split: 2024_02_17T19_36_55.345769
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-02-17T19-36-55.345769.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-02-17T19-36-55.345769.parquet'
- config_name: harness_hendrycksTest_moral_scenarios_5
data_files:
- split: 2024_02_17T19_36_55.345769
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-02-17T19-36-55.345769.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-02-17T19-36-55.345769.parquet'
- config_name: harness_hendrycksTest_nutrition_5
data_files:
- split: 2024_02_17T19_36_55.345769
path:
- '**/details_harness|hendrycksTest-nutrition|5_2024-02-17T19-36-55.345769.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-nutrition|5_2024-02-17T19-36-55.345769.parquet'
- config_name: harness_hendrycksTest_philosophy_5
data_files:
- split: 2024_02_17T19_36_55.345769
path:
- '**/details_harness|hendrycksTest-philosophy|5_2024-02-17T19-36-55.345769.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-philosophy|5_2024-02-17T19-36-55.345769.parquet'
- config_name: harness_hendrycksTest_prehistory_5
data_files:
- split: 2024_02_17T19_36_55.345769
path:
- '**/details_harness|hendrycksTest-prehistory|5_2024-02-17T19-36-55.345769.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-prehistory|5_2024-02-17T19-36-55.345769.parquet'
- config_name: harness_hendrycksTest_professional_accounting_5
data_files:
- split: 2024_02_17T19_36_55.345769
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-02-17T19-36-55.345769.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-02-17T19-36-55.345769.parquet'
- config_name: harness_hendrycksTest_professional_law_5
data_files:
- split: 2024_02_17T19_36_55.345769
path:
- '**/details_harness|hendrycksTest-professional_law|5_2024-02-17T19-36-55.345769.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_law|5_2024-02-17T19-36-55.345769.parquet'
- config_name: harness_hendrycksTest_professional_medicine_5
data_files:
- split: 2024_02_17T19_36_55.345769
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-02-17T19-36-55.345769.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-02-17T19-36-55.345769.parquet'
- config_name: harness_hendrycksTest_professional_psychology_5
data_files:
- split: 2024_02_17T19_36_55.345769
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-02-17T19-36-55.345769.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-02-17T19-36-55.345769.parquet'
- config_name: harness_hendrycksTest_public_relations_5
data_files:
- split: 2024_02_17T19_36_55.345769
path:
- '**/details_harness|hendrycksTest-public_relations|5_2024-02-17T19-36-55.345769.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-public_relations|5_2024-02-17T19-36-55.345769.parquet'
- config_name: harness_hendrycksTest_security_studies_5
data_files:
- split: 2024_02_17T19_36_55.345769
path:
- '**/details_harness|hendrycksTest-security_studies|5_2024-02-17T19-36-55.345769.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-security_studies|5_2024-02-17T19-36-55.345769.parquet'
- config_name: harness_hendrycksTest_sociology_5
data_files:
- split: 2024_02_17T19_36_55.345769
path:
- '**/details_harness|hendrycksTest-sociology|5_2024-02-17T19-36-55.345769.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-sociology|5_2024-02-17T19-36-55.345769.parquet'
- config_name: harness_hendrycksTest_us_foreign_policy_5
data_files:
- split: 2024_02_17T19_36_55.345769
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-02-17T19-36-55.345769.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-02-17T19-36-55.345769.parquet'
- config_name: harness_hendrycksTest_virology_5
data_files:
- split: 2024_02_17T19_36_55.345769
path:
- '**/details_harness|hendrycksTest-virology|5_2024-02-17T19-36-55.345769.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-virology|5_2024-02-17T19-36-55.345769.parquet'
- config_name: harness_hendrycksTest_world_religions_5
data_files:
- split: 2024_02_17T19_36_55.345769
path:
- '**/details_harness|hendrycksTest-world_religions|5_2024-02-17T19-36-55.345769.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-world_religions|5_2024-02-17T19-36-55.345769.parquet'
- config_name: harness_truthfulqa_mc_0
data_files:
- split: 2024_02_17T19_36_55.345769
path:
- '**/details_harness|truthfulqa:mc|0_2024-02-17T19-36-55.345769.parquet'
- split: latest
path:
- '**/details_harness|truthfulqa:mc|0_2024-02-17T19-36-55.345769.parquet'
- config_name: harness_winogrande_5
data_files:
- split: 2024_02_17T19_36_55.345769
path:
- '**/details_harness|winogrande|5_2024-02-17T19-36-55.345769.parquet'
- split: latest
path:
- '**/details_harness|winogrande|5_2024-02-17T19-36-55.345769.parquet'
- config_name: results
data_files:
- split: 2024_02_17T19_36_55.345769
path:
- results_2024-02-17T19-36-55.345769.parquet
- split: latest
path:
- results_2024-02-17T19-36-55.345769.parquet
---
# Dataset Card for Evaluation run of vicgalle/ConfigurableHermes-7B
<!-- Provide a quick summary of the dataset. -->
Dataset automatically created during the evaluation run of model [vicgalle/ConfigurableHermes-7B](https://huggingface.co/vicgalle/ConfigurableHermes-7B) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
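As a side note on the naming convention: the timestamped split names appear to be derived from the run timestamp by replacing `-` and `:` with `_`, while the parquet file stems only replace `:` with `-`. A minimal, purely illustrative sketch (the timestamp value is just the one from this card):

```python
# Illustrative sketch of the split/file naming convention used in this card.
run_timestamp = "2024-02-17T19:36:55.345769"  # ISO-format run timestamp

# Split names replace both "-" and ":" with "_".
split_name = run_timestamp.replace("-", "_").replace(":", "_")

# Parquet file stems keep "-" and replace ":" with "-".
file_stem = run_timestamp.replace(":", "-")

print(split_name)  # 2024_02_17T19_36_55.345769
print(file_stem)   # 2024-02-17T19-36-55.345769
```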
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_vicgalle__ConfigurableHermes-7B",
"harness_winogrande_5",
split="train")
```
## Latest results
These are the [latest results from run 2024-02-17T19:36:55.345769](https://huggingface.co/datasets/open-llm-leaderboard/details_vicgalle__ConfigurableHermes-7B/blob/main/results_2024-02-17T19-36-55.345769.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"acc": 0.6269295268349732,
"acc_stderr": 0.03251131036200702,
"acc_norm": 0.6287150608544317,
"acc_norm_stderr": 0.03316089833802905,
"mc1": 0.4283965728274174,
"mc1_stderr": 0.017323088597314754,
"mc2": 0.6170544221880094,
"mc2_stderr": 0.015198027849424717
},
"harness|arc:challenge|25": {
"acc": 0.6083617747440273,
"acc_stderr": 0.014264122124938215,
"acc_norm": 0.6604095563139932,
"acc_norm_stderr": 0.013839039762820169
},
"harness|hellaswag|10": {
"acc": 0.6552479585739892,
"acc_stderr": 0.00474316003427115,
"acc_norm": 0.8430591515634336,
"acc_norm_stderr": 0.0036300159898964013
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.33,
"acc_stderr": 0.04725815626252605,
"acc_norm": 0.33,
"acc_norm_stderr": 0.04725815626252605
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.5925925925925926,
"acc_stderr": 0.04244633238353227,
"acc_norm": 0.5925925925925926,
"acc_norm_stderr": 0.04244633238353227
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.6578947368421053,
"acc_stderr": 0.03860731599316092,
"acc_norm": 0.6578947368421053,
"acc_norm_stderr": 0.03860731599316092
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.55,
"acc_stderr": 0.05,
"acc_norm": 0.55,
"acc_norm_stderr": 0.05
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.6641509433962264,
"acc_stderr": 0.02906722014664483,
"acc_norm": 0.6641509433962264,
"acc_norm_stderr": 0.02906722014664483
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.7430555555555556,
"acc_stderr": 0.03653946969442099,
"acc_norm": 0.7430555555555556,
"acc_norm_stderr": 0.03653946969442099
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.46,
"acc_stderr": 0.05009082659620332,
"acc_norm": 0.46,
"acc_norm_stderr": 0.05009082659620332
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.41,
"acc_stderr": 0.04943110704237102,
"acc_norm": 0.41,
"acc_norm_stderr": 0.04943110704237102
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.27,
"acc_stderr": 0.044619604333847394,
"acc_norm": 0.27,
"acc_norm_stderr": 0.044619604333847394
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.6011560693641619,
"acc_stderr": 0.0373362665538351,
"acc_norm": 0.6011560693641619,
"acc_norm_stderr": 0.0373362665538351
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.4019607843137255,
"acc_stderr": 0.04878608714466996,
"acc_norm": 0.4019607843137255,
"acc_norm_stderr": 0.04878608714466996
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.75,
"acc_stderr": 0.04351941398892446,
"acc_norm": 0.75,
"acc_norm_stderr": 0.04351941398892446
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.5404255319148936,
"acc_stderr": 0.032579014820998356,
"acc_norm": 0.5404255319148936,
"acc_norm_stderr": 0.032579014820998356
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.43859649122807015,
"acc_stderr": 0.04668000738510455,
"acc_norm": 0.43859649122807015,
"acc_norm_stderr": 0.04668000738510455
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.5241379310344828,
"acc_stderr": 0.0416180850350153,
"acc_norm": 0.5241379310344828,
"acc_norm_stderr": 0.0416180850350153
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.42592592592592593,
"acc_stderr": 0.025467149045469553,
"acc_norm": 0.42592592592592593,
"acc_norm_stderr": 0.025467149045469553
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.42857142857142855,
"acc_stderr": 0.04426266681379909,
"acc_norm": 0.42857142857142855,
"acc_norm_stderr": 0.04426266681379909
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.36,
"acc_stderr": 0.04824181513244218,
"acc_norm": 0.36,
"acc_norm_stderr": 0.04824181513244218
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.7580645161290323,
"acc_stderr": 0.02436259969303108,
"acc_norm": 0.7580645161290323,
"acc_norm_stderr": 0.02436259969303108
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.5172413793103449,
"acc_stderr": 0.035158955511656986,
"acc_norm": 0.5172413793103449,
"acc_norm_stderr": 0.035158955511656986
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.68,
"acc_stderr": 0.04688261722621505,
"acc_norm": 0.68,
"acc_norm_stderr": 0.04688261722621505
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.793939393939394,
"acc_stderr": 0.03158415324047711,
"acc_norm": 0.793939393939394,
"acc_norm_stderr": 0.03158415324047711
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.7626262626262627,
"acc_stderr": 0.030313710538198892,
"acc_norm": 0.7626262626262627,
"acc_norm_stderr": 0.030313710538198892
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.8652849740932642,
"acc_stderr": 0.02463978909770944,
"acc_norm": 0.8652849740932642,
"acc_norm_stderr": 0.02463978909770944
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.6205128205128205,
"acc_stderr": 0.024603626924097417,
"acc_norm": 0.6205128205128205,
"acc_norm_stderr": 0.024603626924097417
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.27037037037037037,
"acc_stderr": 0.02708037281514567,
"acc_norm": 0.27037037037037037,
"acc_norm_stderr": 0.02708037281514567
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.6386554621848739,
"acc_stderr": 0.031204691225150016,
"acc_norm": 0.6386554621848739,
"acc_norm_stderr": 0.031204691225150016
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.32450331125827814,
"acc_stderr": 0.038227469376587525,
"acc_norm": 0.32450331125827814,
"acc_norm_stderr": 0.038227469376587525
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.8348623853211009,
"acc_stderr": 0.015919557829976037,
"acc_norm": 0.8348623853211009,
"acc_norm_stderr": 0.015919557829976037
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.49074074074074076,
"acc_stderr": 0.034093869469927006,
"acc_norm": 0.49074074074074076,
"acc_norm_stderr": 0.034093869469927006
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.7990196078431373,
"acc_stderr": 0.028125972265654373,
"acc_norm": 0.7990196078431373,
"acc_norm_stderr": 0.028125972265654373
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.7848101265822784,
"acc_stderr": 0.026750826994676173,
"acc_norm": 0.7848101265822784,
"acc_norm_stderr": 0.026750826994676173
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.6860986547085202,
"acc_stderr": 0.031146796482972465,
"acc_norm": 0.6860986547085202,
"acc_norm_stderr": 0.031146796482972465
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.7709923664122137,
"acc_stderr": 0.036853466317118506,
"acc_norm": 0.7709923664122137,
"acc_norm_stderr": 0.036853466317118506
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.768595041322314,
"acc_stderr": 0.03849856098794088,
"acc_norm": 0.768595041322314,
"acc_norm_stderr": 0.03849856098794088
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.7962962962962963,
"acc_stderr": 0.03893542518824847,
"acc_norm": 0.7962962962962963,
"acc_norm_stderr": 0.03893542518824847
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.7914110429447853,
"acc_stderr": 0.03192193448934725,
"acc_norm": 0.7914110429447853,
"acc_norm_stderr": 0.03192193448934725
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.48214285714285715,
"acc_stderr": 0.047427623612430116,
"acc_norm": 0.48214285714285715,
"acc_norm_stderr": 0.047427623612430116
},
"harness|hendrycksTest-management|5": {
"acc": 0.7669902912621359,
"acc_stderr": 0.04185832598928315,
"acc_norm": 0.7669902912621359,
"acc_norm_stderr": 0.04185832598928315
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.8461538461538461,
"acc_stderr": 0.023636873317489284,
"acc_norm": 0.8461538461538461,
"acc_norm_stderr": 0.023636873317489284
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.7,
"acc_stderr": 0.046056618647183814,
"acc_norm": 0.7,
"acc_norm_stderr": 0.046056618647183814
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.8084291187739464,
"acc_stderr": 0.014072859310451949,
"acc_norm": 0.8084291187739464,
"acc_norm_stderr": 0.014072859310451949
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.7052023121387283,
"acc_stderr": 0.024547617794803828,
"acc_norm": 0.7052023121387283,
"acc_norm_stderr": 0.024547617794803828
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.30614525139664805,
"acc_stderr": 0.015414494487903213,
"acc_norm": 0.30614525139664805,
"acc_norm_stderr": 0.015414494487903213
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.7254901960784313,
"acc_stderr": 0.02555316999182652,
"acc_norm": 0.7254901960784313,
"acc_norm_stderr": 0.02555316999182652
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.7041800643086816,
"acc_stderr": 0.025922371788818763,
"acc_norm": 0.7041800643086816,
"acc_norm_stderr": 0.025922371788818763
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.7314814814814815,
"acc_stderr": 0.024659685185967284,
"acc_norm": 0.7314814814814815,
"acc_norm_stderr": 0.024659685185967284
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.5035460992907801,
"acc_stderr": 0.02982674915328092,
"acc_norm": 0.5035460992907801,
"acc_norm_stderr": 0.02982674915328092
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.4758800521512386,
"acc_stderr": 0.01275536872286393,
"acc_norm": 0.4758800521512386,
"acc_norm_stderr": 0.01275536872286393
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.6544117647058824,
"acc_stderr": 0.028888193103988626,
"acc_norm": 0.6544117647058824,
"acc_norm_stderr": 0.028888193103988626
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.6715686274509803,
"acc_stderr": 0.018999707383162666,
"acc_norm": 0.6715686274509803,
"acc_norm_stderr": 0.018999707383162666
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.6636363636363637,
"acc_stderr": 0.04525393596302506,
"acc_norm": 0.6636363636363637,
"acc_norm_stderr": 0.04525393596302506
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.7061224489795919,
"acc_stderr": 0.02916273841024977,
"acc_norm": 0.7061224489795919,
"acc_norm_stderr": 0.02916273841024977
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.7960199004975125,
"acc_stderr": 0.02849317624532607,
"acc_norm": 0.7960199004975125,
"acc_norm_stderr": 0.02849317624532607
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.85,
"acc_stderr": 0.03588702812826371,
"acc_norm": 0.85,
"acc_norm_stderr": 0.03588702812826371
},
"harness|hendrycksTest-virology|5": {
"acc": 0.5662650602409639,
"acc_stderr": 0.03858158940685515,
"acc_norm": 0.5662650602409639,
"acc_norm_stderr": 0.03858158940685515
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.8245614035087719,
"acc_stderr": 0.029170885500727665,
"acc_norm": 0.8245614035087719,
"acc_norm_stderr": 0.029170885500727665
},
"harness|truthfulqa:mc|0": {
"mc1": 0.4283965728274174,
"mc1_stderr": 0.017323088597314754,
"mc2": 0.6170544221880094,
"mc2_stderr": 0.015198027849424717
},
"harness|winogrande|5": {
"acc": 0.7742699289660616,
"acc_stderr": 0.01174962626090256
},
"harness|gsm8k|5": {
"acc": 0.6141015921152388,
"acc_stderr": 0.013409077471319168
}
}
```
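As an illustrative aside (a sketch of one plausible aggregation, not necessarily the leaderboard's exact computation), a top-level `acc` like the one in the `"all"` block above can be obtained as an unweighted macro-average of the per-task accuracies:

```python
# Hypothetical subset of per-task results, in the same shape as the JSON above.
results = {
    "harness|hendrycksTest-anatomy|5": {"acc": 0.5925925925925926},
    "harness|hendrycksTest-astronomy|5": {"acc": 0.6578947368421053},
    "harness|hendrycksTest-virology|5": {"acc": 0.5662650602409639},
}

# Unweighted macro-average of the per-task "acc" values.
accs = [task["acc"] for task in results.values()]
macro_acc = sum(accs) / len(accs)
print(round(macro_acc, 4))  # 0.6056
```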
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] |
nu-dialogue/sfcoco2023 | ---
language:
- ja
task_categories:
- image-to-text
dataset_info:
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 487524806.0863096
num_examples: 907
- name: test
num_bytes: 55790355.913690485
num_examples: 101
download_size: 541073440
dataset_size: 543315162.0000001
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
|
shreyasharma/proofs3 | ---
dataset_info:
features:
- name: intermediate_conclusions
struct:
- name: int1
dtype: string
- name: int10
dtype: string
- name: int11
dtype: string
- name: int12
dtype: string
- name: int13
dtype: string
- name: int14
dtype: string
- name: int15
dtype: string
- name: int16
dtype: string
- name: int17
dtype: string
- name: int2
dtype: string
- name: int3
dtype: string
- name: int4
dtype: string
- name: int5
dtype: string
- name: int6
dtype: string
- name: int7
dtype: string
- name: int8
dtype: string
- name: int9
dtype: string
- name: step_proof
dtype: string
- name: triples
struct:
- name: sent1
dtype: string
- name: sent10
dtype: string
- name: sent11
dtype: string
- name: sent12
dtype: string
- name: sent13
dtype: string
- name: sent14
dtype: string
- name: sent15
dtype: string
- name: sent16
dtype: string
- name: sent17
dtype: string
- name: sent2
dtype: string
- name: sent3
dtype: string
- name: sent4
dtype: string
- name: sent5
dtype: string
- name: sent6
dtype: string
- name: sent7
dtype: string
- name: sent8
dtype: string
- name: sent9
dtype: string
- name: hypothesis
dtype: string
- name: question
dtype: string
- name: answer
dtype: string
- name: label
dtype: int64
splits:
- name: train
num_bytes: 2614556
num_examples: 2626
download_size: 1188057
dataset_size: 2614556
---
# Dataset Card for "proofs3"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
audichandra/bitext_customer_support_llm_dataset_indonesian | ---
license: cdla-sharing-1.0
---
Base dataset : [Bitext](https://huggingface.co/datasets/bitext/Bitext-customer-support-llm-chatbot-training-dataset)
We translated the base dataset into Indonesian with [Helsinki-NLP/opus-mt-en-id](https://huggingface.co/Helsinki-NLP/opus-mt-en-id).
# CITATION
```bash
@InProceedings{TiedemannThottingal:EAMT2020,
author = {J{\"o}rg Tiedemann and Santhosh Thottingal},
 title = {{OPUS-MT} -- {B}uilding open translation services for the {W}orld},
 booktitle = {Proceedings of the 22nd Annual Conference of the European Association for Machine Translation (EAMT)},
year = {2020},
address = {Lisbon, Portugal}
}
@misc{bitext_chatbot_dataset,
title={Bitext Customer Support LLM Chatbot Training Dataset},
author={{Bitext}},
year={2023},
howpublished={\url{https://huggingface.co/datasets/bitext/Bitext-customer-support-llm-chatbot-training-dataset}}
}
``` |
harpreetsahota/elicit-bias-prompts | ---
dataset_info:
features:
- name: Prompt
dtype: string
splits:
- name: train
num_bytes: 3851
num_examples: 64
download_size: 2447
dataset_size: 3851
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# π΅οΈββοΈπ€ Language Model Bias Exploration
## π Introduction
In this dataset, I've adopted the approach from ["Red Teaming Language Models with Language Models"](https://arxiv.org/abs/2202.03286) by Ethan Perez et al., focusing on exploring and understanding distributional bias in language models (LMs).
## π― Purpose of the Prompts
The prompts in this repository are riffs on the prompts presented in Tables 12 and 13 of Perez et al.'s paper, and they serve a crucial role. They are designed to elicit responses from LMs that reveal how different groups are represented and discussed. These prompts help in identifying distributional biases - biases in the frequency and context in which LMs portray certain groups, which might be negative or stereotypical.
## π Addressing Distributional Bias
Distributional bias is a subtle yet pervasive form of bias where certain groups are more often associated with negative contexts or sentiments. This project aims to uncover such biases in LMs by analyzing how these models respond to various group-related prompts.
## π Dataset and Analysis
The dataset comprises variations of prompts used to test and analyze the responses of LMs. By examining these responses, I aim to shed light on the biases present in current language models, contributing to the field of AI ethics.
## ποΈ Goal
The ultimate goal of this exploration is to contribute towards more ethical and responsible AI development, ensuring that language models treat all groups with fairness and without bias.
|
tyzhu/random25eof_find_passage_train1000000_eval1000_rare | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
dataset_info:
features:
- name: inputs
dtype: string
- name: targets
dtype: string
splits:
- name: train
num_bytes: 208524730
num_examples: 2001000
- name: validation
num_bytes: 118222
num_examples: 1000
download_size: 0
dataset_size: 208642952
---
# Dataset Card for "random25eof_find_passage_train1000000_eval1000_rare"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
mcemilg/tquad | ---
task_categories:
- question-answering
language:
- tr
pretty_name: t
size_categories:
- 1K<n<10K
---
# tquad
Homepage: https://github.com/TQuad/turkish-nlp-qa-dataset
|
distilled-from-one-sec-cv12/chunk_18 | ---
dataset_info:
features:
- name: logits
sequence: float32
- name: mfcc
sequence:
sequence: float64
splits:
- name: train
num_bytes: 1451878724
num_examples: 282907
download_size: 1482727032
dataset_size: 1451878724
---
# Dataset Card for "chunk_18"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
liuyanchen1015/MULTI_VALUE_sst2_past_been | ---
dataset_info:
features:
- name: sentence
dtype: string
- name: label
dtype: int64
- name: idx
dtype: int64
- name: score
dtype: int64
splits:
- name: dev
num_bytes: 2547
num_examples: 18
- name: test
num_bytes: 4161
num_examples: 33
- name: train
num_bytes: 116764
num_examples: 1246
download_size: 65321
dataset_size: 123472
---
# Dataset Card for "MULTI_VALUE_sst2_past_been"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
disham993/alpaca-train-validation-test-split | ---
language:
- en
license: cc-by-nc-4.0
size_categories:
- 10K<n<100K
task_categories:
- text-generation
pretty_name: Alpaca
tags:
- instruction-finetuning
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
dataset_info:
features:
- name: instruction
dtype: string
- name: input
dtype: string
- name: output
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 33409057
num_examples: 36401
- name: validation
num_bytes: 7159137
num_examples: 7801
- name: test
num_bytes: 7196544
num_examples: 7800
download_size: 24523957
dataset_size: 47764738
---
# Dataset Card for Alpaca
I have performed a train/validation/test split on the original dataset. A repository to reproduce this split will be shared here soon. The original dataset card is included below.
## Dataset Description
- **Homepage:** https://crfm.stanford.edu/2023/03/13/alpaca.html
- **Repository:** https://github.com/tatsu-lab/stanford_alpaca
- **Paper:**
- **Leaderboard:**
- **Point of Contact:** Rohan Taori
### Dataset Summary
Alpaca is a dataset of 52,000 instructions and demonstrations generated by OpenAI's `text-davinci-003` engine. This instruction data can be used to conduct instruction-tuning for language models and make language models follow instructions better.
The authors built on the data generation pipeline from [Self-Instruct framework](https://github.com/yizhongw/self-instruct) and made the following modifications:
- The `text-davinci-003` engine was used to generate the instruction data instead of `davinci`.
- A [new prompt](https://github.com/tatsu-lab/stanford_alpaca/blob/main/prompt.txt) was written that explicitly gave the requirement of instruction generation to `text-davinci-003`.
- Much more aggressive batch decoding was used, i.e., generating 20 instructions at once, which significantly reduced the cost of data generation.
- The data generation pipeline was simplified by discarding the difference between classification and non-classification instructions.
- Only a single instance was generated for each instruction, instead of 2 to 3 instances as in Self-Instruct.
This produced an instruction-following dataset with 52K examples obtained at a much lower cost (less than $500).
In a preliminary study, the authors also found the 52K generated data to be much more diverse than the data released by [Self-Instruct](https://github.com/yizhongw/self-instruct/blob/main/data/seed_tasks.jsonl).
### Supported Tasks and Leaderboards
The Alpaca dataset is designed for instruction-tuning pretrained language models.
### Languages
The data in Alpaca are in English (BCP-47 en).
## Dataset Structure
### Data Instances
An example of "train" looks as follows:
```json
{
"instruction": "Create a classification task by clustering the given list of items.",
"input": "Apples, oranges, bananas, strawberries, pineapples",
"output": "Class 1: Apples, Oranges\nClass 2: Bananas, Strawberries\nClass 3: Pineapples",
"text": "Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.\n\n### Instruction:\nCreate a classification task by clustering the given list of items.\n\n### Input:\nApples, oranges, bananas, strawberries, pineapples\n\n### Response:\nClass 1: Apples, Oranges\nClass 2: Bananas, Strawberries\nClass 3: Pineapples",
}
```
### Data Fields
The data fields are as follows:
* `instruction`: describes the task the model should perform. Each of the 52K instructions is unique.
* `input`: optional context or input for the task. For example, when the instruction is "Summarize the following article", the input is the article. Around 40% of the examples have an input.
* `output`: the answer to the instruction as generated by `text-davinci-003`.
* `text`: the `instruction`, `input` and `output` formatted with the [prompt template](https://github.com/tatsu-lab/stanford_alpaca#data-release) used by the authors for fine-tuning their models.
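For illustration, here is a minimal sketch of how the `text` field can be reconstructed from `instruction` and `input`. The template strings below are transcribed from the example instance above; treat them as an approximation of the linked prompt template, not as its authoritative definition.

```python
def build_prompt(instruction: str, input_text: str = "") -> str:
    """Format an Alpaca example into the `text` prompt (sketch based on the example above)."""
    if input_text:
        # Variant used for the ~40% of examples that carry an `input` field.
        return (
            "Below is an instruction that describes a task, paired with an input "
            "that provides further context. Write a response that appropriately "
            "completes the request.\n\n"
            f"### Instruction:\n{instruction}\n\n"
            f"### Input:\n{input_text}\n\n"
            "### Response:\n"
        )
    # Variant for instruction-only examples (no `input`).
    return (
        "Below is an instruction that describes a task. Write a response that "
        "appropriately completes the request.\n\n"
        f"### Instruction:\n{instruction}\n\n"
        "### Response:\n"
    )
```

During fine-tuning, the model's target (`output`) is appended after the `### Response:` marker.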
### Data Splits
| | train |
|---------------|------:|
| alpaca | 52002 |
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
Excerpted from the [blog post](https://crfm.stanford.edu/2023/03/13/alpaca.html) accompanying the release of this dataset:
> We believe that releasing the above assets will enable the academic community to perform controlled scientific studies on instruction-following language models, resulting in better science and ultimately new techniques to address the existing deficiencies with these models. At the same time, any release carries some risk. First, we recognize that releasing our training recipe reveals the feasibility of certain capabilities. On one hand, this enables more people (including bad actors) to create models that could cause harm (either intentionally or not). On the other hand, this awareness might incentivize swift defensive action, especially from the academic community, now empowered by the means to perform deeper safety research on such models. Overall, we believe that the benefits for the research community outweigh the risks of this particular release. Given that we are releasing the training recipe, we believe that releasing the data, model weights, and training code incur minimal further risk, given the simplicity of the recipe. At the same time, releasing these assets has enormous benefits for reproducible science, so that the academic community can use standard datasets, models, and code to perform controlled comparisons and to explore extensions. Deploying an interactive demo for Alpaca also poses potential risks, such as more widely disseminating harmful content and lowering the barrier for spam, fraud, or disinformation. We have put into place two risk mitigation strategies. First, we have implemented a content filter using OpenAI's content moderation API, which filters out harmful content as defined by OpenAI's usage policies. Second, we watermark all the model outputs using the method described in Kirchenbauer et al. 2023, so that others can detect (with some probability) whether an output comes from Alpaca 7B.
> Finally, we have strict terms and conditions for using the demo; it is restricted to non-commercial uses and to uses that follow LLaMA's license agreement. We understand that these mitigation measures can be circumvented once we release the model weights or if users train their own instruction-following models. However, by installing these mitigations, we hope to advance the best practices and ultimately develop community norms for the responsible deployment of foundation models.
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
The `alpaca` data is generated by a language model (`text-davinci-003`) and inevitably contains some errors or biases. We encourage users to use this data with caution and propose new methods to filter or improve the imperfections.
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
The dataset is available under the [Creative Commons NonCommercial (CC BY-NC 4.0)](https://creativecommons.org/licenses/by-nc/4.0/legalcode).
### Citation Information
```
@misc{alpaca,
author = {Rohan Taori and Ishaan Gulrajani and Tianyi Zhang and Yann Dubois and Xuechen Li and Carlos Guestrin and Percy Liang and Tatsunori B. Hashimoto },
title = {Stanford Alpaca: An Instruction-following LLaMA model},
year = {2023},
publisher = {GitHub},
journal = {GitHub repository},
howpublished = {\url{https://github.com/tatsu-lab/stanford_alpaca}},
}
```
### Contributions
[More Information Needed] |
Navintyagi/demo | ---
license: mit
---
|
pg19 | ---
annotations_creators:
- expert-generated
language_creators:
- expert-generated
language:
- en
license:
- apache-2.0
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- text-generation
task_ids:
- language-modeling
paperswithcode_id: pg-19
pretty_name: PG-19
dataset_info:
features:
- name: short_book_title
dtype: string
- name: publication_date
dtype: int32
- name: url
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 11453688452
num_examples: 28602
- name: validation
num_bytes: 17402295
num_examples: 50
- name: test
num_bytes: 40482852
num_examples: 100
download_size: 11740397875
dataset_size: 11511573599
---
# Dataset Card for "pg19"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://github.com/deepmind/pg19](https://github.com/deepmind/pg19)
- **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Paper:** [Compressive Transformers for Long-Range Sequence Modelling](https://arxiv.org/abs/1911.05507)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of downloaded dataset files:** 11.74 GB
- **Size of the generated dataset:** 11.51 GB
- **Total amount of disk used:** 23.25 GB
### Dataset Summary
This repository contains the PG-19 language modeling benchmark.
It includes a set of books extracted from the Project Gutenberg books library, that were published before 1919.
It also contains metadata of book titles and publication dates.
PG-19 is over double the size of the Billion Word benchmark and contains documents that are 20X longer, on average, than the WikiText long-range language modelling benchmark.
Books are partitioned into a train, validation, and test set. Book metadata is stored in metadata.csv which contains (book_id, short_book_title, publication_date).
Unlike prior benchmarks, we do not constrain the vocabulary size --- i.e. mapping rare words to an UNK token --- but instead release the data as an open-vocabulary benchmark. The only processing of the text that has been applied is the removal of boilerplate license text, and the mapping of offensive discriminatory words as specified by Ofcom to placeholder tokens. Users are free to model the data at the character-level, subword-level, or via any mechanism that can model an arbitrary string of text.
To compare models we propose to continue measuring the word-level perplexity, by calculating the total likelihood of the dataset (via any chosen subword vocabulary or character-based scheme) divided by the number of tokens --- specified below in the dataset statistics table.
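The proposed metric reduces to exponentiating the average negative log-likelihood per word token. As a minimal sketch (assuming the total likelihood has already been accumulated in nats by whatever subword or character model is being evaluated):

```python
import math


def word_level_perplexity(total_nll_nats: float, num_words: int) -> float:
    """Word-level perplexity: exp(total negative log-likelihood / number of word tokens).

    `total_nll_nats` is the summed negative log-likelihood of the dataset under the
    model (in nats), regardless of the subword/character scheme used to compute it;
    `num_words` is the fixed word-token count from the dataset statistics table.
    """
    return math.exp(total_nll_nats / num_words)
```

Because `num_words` is fixed by the benchmark rather than by each model's vocabulary, perplexities computed this way are comparable across different tokenization schemes.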
One could use this dataset for benchmarking long-range language models, or use it to pre-train for other natural language processing tasks which require long-range reasoning, such as LAMBADA or NarrativeQA. We would not recommend using this dataset to train a general-purpose language model, e.g. for applications to a production-system dialogue agent, due to the dated linguistic style of old texts and the inherent biases present in historical writing.
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Dataset Structure
### Data Instances
#### default
- **Size of downloaded dataset files:** 11.74 GB
- **Size of the generated dataset:** 11.51 GB
- **Total amount of disk used:** 23.25 GB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"publication_date": 1907,
"short_book_title": "La Fiammetta by Giovanni Boccaccio",
"text": "\"\\n\\n\\n\\nProduced by Ted Garvin, Dave Morgan and PG Distributed Proofreaders\\n\\n\\n\\n\\nLA FIAMMETTA\\n\\nBY\\n\\nGIOVANNI BOCCACCIO\\n...",
"url": "http://www.gutenberg.org/ebooks/10006"
}
```
### Data Fields
The data fields are the same among all splits.
#### default
- `short_book_title`: a `string` feature.
- `publication_date`: a `int32` feature.
- `url`: a `string` feature.
- `text`: a `string` feature.
### Data Splits
| name |train|validation|test|
|-------|----:|---------:|---:|
|default|28602| 50| 100|
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
The dataset is licensed under [Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0.html).
### Citation Information
```
@article{raecompressive2019,
author = {Rae, Jack W and Potapenko, Anna and Jayakumar, Siddhant M and
Hillier, Chloe and Lillicrap, Timothy P},
title = {Compressive Transformers for Long-Range Sequence Modelling},
journal = {arXiv preprint},
url = {https://arxiv.org/abs/1911.05507},
year = {2019},
}
```
### Contributions
Thanks to [@thomwolf](https://github.com/thomwolf), [@lewtun](https://github.com/lewtun), [@lucidrains](https://github.com/lucidrains), [@lhoestq](https://github.com/lhoestq) for adding this dataset. |
CyberHarem/hoshiguma_arknights | ---
license: mit
task_categories:
- text-to-image
tags:
- art
- not-for-all-audiences
size_categories:
- n<1K
---
# Dataset of hoshiguma/ホシグマ/星熊 (Arknights)
This is the dataset of hoshiguma/ホシグマ/星熊 (Arknights), containing 500 images and their tags.
The core tags of this character are `horns, single_horn, green_hair, long_hair, breasts, yellow_eyes, hair_between_eyes, large_breasts`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan, ...); the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:---------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 500 | 990.01 MiB | [Download](https://huggingface.co/datasets/CyberHarem/hoshiguma_arknights/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 500 | 457.29 MiB | [Download](https://huggingface.co/datasets/CyberHarem/hoshiguma_arknights/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 1252 | 988.39 MiB | [Download](https://huggingface.co/datasets/CyberHarem/hoshiguma_arknights/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 500 | 822.22 MiB | [Download](https://huggingface.co/datasets/CyberHarem/hoshiguma_arknights/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 1252 | 1.55 GiB | [Download](https://huggingface.co/datasets/CyberHarem/hoshiguma_arknights/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need it, run the following code:
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/hoshiguma_arknights',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
## List of Clusters
List of tag clustering results; some outfits may be mined here.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 6 |  |  |  |  |  | 1girl, bare_shoulders, black_shirt, sleeveless_shirt, solo, upper_body, breastplate, closed_mouth, looking_at_viewer, holding_shield |
| 1 | 14 |  |  |  |  |  | 1girl, black_shirt, solo, bare_shoulders, black_gloves, breastplate, upper_body, open_mouth, sleeveless_shirt, looking_at_viewer, arm_ribbon, holding_shield, simple_background, white_background, green_eyes, smile |
| 2 | 8 |  |  |  |  |  | 1girl, arm_ribbon, bare_shoulders, black_gloves, black_pants, black_shirt, breastplate, looking_at_viewer, sleeveless_shirt, solo, jacket_around_waist, holding_shield, open_mouth, cowboy_shot |
| 3 | 5 |  |  |  |  |  | 1girl, arm_ribbon, bare_shoulders, black_footwear, black_pants, black_shirt, boots, full_body, jacket_around_waist, knee_pads, looking_at_viewer, simple_background, sleeveless_shirt, solo, black_gloves, breastplate, white_background, closed_mouth, sitting, green_eyes, shield, standing, very_long_hair |
| 4 | 9 |  |  |  |  |  | 1girl, black_shirt, cowboy_shot, holding_sword, long_sleeves, official_alternate_costume, solo, belt, looking_at_viewer, magatama_necklace, oni_mask, holding_shield, grey_pants, katana, closed_mouth, scar_on_face, arm_ribbon, smile, very_long_hair |
| 5 | 6 |  |  |  |  |  | 1girl, black_shirt, holding_sword, katana, long_sleeves, official_alternate_costume, oni_mask, shoulder_cutout, solo, looking_at_viewer, smile, belt, grey_pants, very_long_hair, bare_shoulders, holding_shield, magatama_necklace, sheath |
| 6 | 6 |  |  |  |  |  | 1girl, bare_shoulders, cleavage, necklace, solo, very_long_hair, looking_at_viewer, ponytail, spaghetti_strap, bracelet, camisole, sitting, black_belt, black_dress, red_choker, simple_background, white_background |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | bare_shoulders | black_shirt | sleeveless_shirt | solo | upper_body | breastplate | closed_mouth | looking_at_viewer | holding_shield | black_gloves | open_mouth | arm_ribbon | simple_background | white_background | green_eyes | smile | black_pants | jacket_around_waist | cowboy_shot | black_footwear | boots | full_body | knee_pads | sitting | shield | standing | very_long_hair | holding_sword | long_sleeves | official_alternate_costume | belt | magatama_necklace | oni_mask | grey_pants | katana | scar_on_face | shoulder_cutout | sheath | cleavage | necklace | ponytail | spaghetti_strap | bracelet | camisole | black_belt | black_dress | red_choker |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:-----------------|:--------------|:-------------------|:-------|:-------------|:--------------|:---------------|:--------------------|:-----------------|:---------------|:-------------|:-------------|:--------------------|:-------------------|:-------------|:--------|:--------------|:----------------------|:--------------|:-----------------|:--------|:------------|:------------|:----------|:---------|:-----------|:-----------------|:----------------|:---------------|:-----------------------------|:-------|:--------------------|:-----------|:-------------|:---------|:---------------|:------------------|:---------|:-----------|:-----------|:-----------|:------------------|:-----------|:-----------|:-------------|:--------------|:-------------|
| 0 | 6 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 1 | 14 |  |  |  |  |  | X | X | X | X | X | X | X | | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 2 | 8 |  |  |  |  |  | X | X | X | X | X | | X | | X | X | X | X | X | | | | | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 3 | 5 |  |  |  |  |  | X | X | X | X | X | | X | X | X | | X | | X | X | X | X | | X | X | | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | |
| 4 | 9 |  |  |  |  |  | X | | X | | X | | | X | X | X | | | X | | | | X | | | X | | | | | | | | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | |
| 5 | 6 |  |  |  |  |  | X | X | X | | X | | | | X | X | | | | | | | X | | | | | | | | | | | X | X | X | X | X | X | X | X | X | | X | X | | | | | | | | | |
| 6 | 6 |  |  |  |  |  | X | X | | | X | | | | X | | | | | X | X | | | | | | | | | | X | | | X | | | | | | | | | | | | X | X | X | X | X | X | X | X | X |
|
AdapterOcean/med_alpaca_standardized_cluster_60_alpaca | ---
dataset_info:
features:
- name: input
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 15905862
num_examples: 15883
download_size: 8386997
dataset_size: 15905862
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "med_alpaca_standardized_cluster_60_alpaca"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
tj-solergibert/SRV-NLLB-Europarl-mt-en | ---
dataset_info:
features:
- name: source_text
dtype: string
- name: dest_text
dtype: string
- name: dest_lang
dtype: string
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
- name: labels
sequence: int64
splits:
- name: train
num_bytes: 403651116
num_examples: 498086
- name: valid
num_bytes: 57524298
num_examples: 69178
- name: test
num_bytes: 61047362
num_examples: 72950
download_size: 221747155
dataset_size: 522222776
---
# Dataset Card for "SRV-NLLB-Europarl-mt-en"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
autoevaluate/autoeval-staging-eval-project-622e0c30-b54d-415c-87b9-70c107d23cec-2523 | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- glue
eval_info:
task: binary_classification
model: autoevaluate/binary-classification
metrics: ['matthews_correlation']
dataset_name: glue
dataset_config: sst2
dataset_split: validation
col_mapping:
text: sentence
target: label
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Binary Text Classification
* Model: autoevaluate/binary-classification
* Dataset: glue
* Config: sst2
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model. |
UnderstandLing/oasst1_ru_threads | ---
license: apache-2.0
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 14631765
num_examples: 9845
- name: validation
num_bytes: 776561
num_examples: 517
download_size: 6878861
dataset_size: 15408326
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
---
|
mask-distilled-onesec-cv12-each-chunk-uniq/chunk_40 | ---
dataset_info:
features:
- name: logits
sequence: float32
- name: mfcc
sequence:
sequence: float64
splits:
- name: train
num_bytes: 1281641124.0
num_examples: 251697
download_size: 1301489022
dataset_size: 1281641124.0
---
# Dataset Card for "chunk_40"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
florin-hf/wiki_dump2018_nq_open | ---
task_categories:
- question-answering
language:
- en
pretty_name: v
size_categories:
- 10M<n<100M
---
# Wikipedia Dump with Gold Documents from Natural Questions
## Dataset Summary
This dataset combines the English Wikipedia dump from December 20, 2018, with gold passages from the [Natural Questions](https://huggingface.co/datasets/natural_questions) (NQ) dataset,
specifically tailored for open-domain question answering tasks. By integrating gold documents corresponding to each query in the [NQ-open](https://huggingface.co/datasets/nq_open)
version of the dataset, this resource addresses potential mismatches between the Wikipedia dump and the question-answer pairs found in NQ-open.
Such mismatches can lead to scenarios where the dump does not contain the required answer.
A thorough process of duplicate filtering was applied to ensure the precise identification of the gold document for each query,
enhancing the reliability of the dataset for natural language processing tasks.
Therefore, the dataset can be employed as a knowledge base for RAG systems.
One critical aspect of dataset preparation involved addressing the constraints posed by Large Language Models (LLMs) regarding input size.
LLMs, particularly when processing multiple documents in a single prompt, face limitations on the length of input they can efficiently handle.
To accommodate this, gold documents exceeding 512 tokens ([tokenized with Llama2](https://huggingface.co/docs/transformers/model_doc/llama2#transformers.LlamaTokenizer))
were excluded from the dataset. This decision was guided by the objective of maximizing the number of documents that can be included in the LLM's prompt
without compromising on the detail or context provided by each document.
As a result, the final dataset encompasses **21,035,236** documents (13.9 GB).
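The length filter described above can be sketched as follows. In the actual pipeline the token count came from the Llama-2 tokenizer; the whitespace tokenizer used here as the default is only a stand-in so the sketch stays self-contained.

```python
from typing import Callable, Dict, List


def filter_long_documents(
    docs: List[Dict[str, str]],
    max_tokens: int = 512,
    tokenize: Callable[[str], List[str]] = str.split,  # stand-in; the dataset used the Llama-2 tokenizer
) -> List[Dict[str, str]]:
    """Drop documents whose token count exceeds `max_tokens`.

    Each document is a dict with a "text" field, mirroring the passage
    structure shown below.
    """
    return [doc for doc in docs if len(tokenize(doc["text"])) <= max_tokens]
```

Swapping in the real tokenizer only means passing e.g. `tokenizer.tokenize` from a loaded Llama-2 tokenizer as the `tokenize` argument.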
## Dataset Sources
- **Original Wikipedia Dump**: The corpus originates from the English Wikipedia dump, where articles are segmented into non-overlapping passages of 100 words.
[Download link](https://dl.fbaipublicfiles.com/dpr/wikipedia_split/psgs_w100.tsv.gz).
- **Gold Passages**: Sourced from the Natural Questions dataset, these passages are integrated to provide a comprehensive resource for question answering.
The gold passages are accessible through the following URLs:
- [train](https://dl.fbaipublicfiles.com/dpr/data/nq_gold_info/nq-train_gold_info.json.gz)
- [dev](https://dl.fbaipublicfiles.com/dpr/data/nq_gold_info/nq-dev_gold_info.json.gz)
- [test](https://dl.fbaipublicfiles.com/dpr/data/nq_gold_info/nq-test_gold_info.json.gz)
The above data comes from the Dense Passage Retrieval (DPR) [github repository](https://github.com/facebookresearch/DPR/blob/main/dpr/data/download_data.py).
## Dataset Structure
An example of a Wikipedia passage is as follows:
```
{
  "text": "Home computers were a class of microcomputers entering the market in 1977, and becoming common during the 1980s. They were marketed to consumers as affordable and accessible computers that, for the first time, were intended for the use of a single nontechnical user. These computers were a distinct market segment that typically cost much less than business, scientific or engineering-oriented computers of the time such as the IBM PC, and were generally less powerful in terms of memory and expandability. However, a home computer often had better graphics and sound than contemporary business computers. Their most common uses were playing",
  "title": "Home computer"
}
``` |
joey234/mmlu-high_school_european_history | ---
dataset_info:
features:
- name: question
dtype: string
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
- name: negate_openai_prompt
struct:
- name: content
dtype: string
- name: role
dtype: string
- name: neg_question
dtype: string
- name: fewshot_context
dtype: string
- name: fewshot_context_neg
dtype: string
splits:
- name: dev
num_bytes: 24283
num_examples: 5
- name: test
num_bytes: 1352444
num_examples: 165
download_size: 366174
dataset_size: 1376727
configs:
- config_name: default
data_files:
- split: dev
path: data/dev-*
- split: test
path: data/test-*
---
# Dataset Card for "mmlu-high_school_european_history"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
huggingartists/boris-grebenshikov | ---
language:
- en
tags:
- huggingartists
- lyrics
---
# Dataset Card for "huggingartists/boris-grebenshikov"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [How to use](#how-to-use)
- [Dataset Structure](#dataset-structure)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [About](#about)
## Dataset Description
- **Homepage:** [https://github.com/AlekseyKorshuk/huggingartists](https://github.com/AlekseyKorshuk/huggingartists)
- **Repository:** [https://github.com/AlekseyKorshuk/huggingartists](https://github.com/AlekseyKorshuk/huggingartists)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of the generated dataset:** 0.727596 MB
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div style="display:DISPLAY_1; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://images.genius.com/491c2f003f52c9837809b86faef7b764.1000x1000x1.jpg')">
</div>
</div>
<a href="https://huggingface.co/huggingartists/boris-grebenshikov">
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤗 HuggingArtists Model 🤗</div>
</a>
<div style="text-align: center; font-size: 16px; font-weight: 800">Борис Гребенщиков (Boris Grebenshikov)</div>
<a href="https://genius.com/artists/boris-grebenshikov">
<div style="text-align: center; font-size: 14px;">@boris-grebenshikov</div>
</a>
</div>
### Dataset Summary
The Lyrics dataset parsed from Genius. This dataset is designed to generate lyrics with HuggingArtists.
Model is available [here](https://huggingface.co/huggingartists/boris-grebenshikov).
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
en
## How to use
How to load this dataset directly with the datasets library:
```python
from datasets import load_dataset
dataset = load_dataset("huggingartists/boris-grebenshikov")
```
## Dataset Structure
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"text": "Look, I was gonna go easy on you\nNot to hurt your feelings\nBut I'm only going to get this one chance\nSomething's wrong, I can feel it..."
}
```
### Data Fields
The data fields are the same among all splits.
- `text`: a `string` feature.
### Data Splits
| train |validation|test|
|------:|---------:|---:|
| 461 | - | - |
The 'train' split can easily be divided into 'train', 'validation', and 'test' with a few lines of code:
```python
from datasets import load_dataset, Dataset, DatasetDict
import numpy as np

datasets = load_dataset("huggingartists/boris-grebenshikov")

train_percentage = 0.9
validation_percentage = 0.07
test_percentage = 0.03

# cut the 'train' texts at the 90% and 97% marks
texts = datasets['train']['text']
train, validation, test = np.split(
    texts,
    [int(len(texts) * train_percentage),
     int(len(texts) * (train_percentage + validation_percentage))],
)

datasets = DatasetDict(
    {
        'train': Dataset.from_dict({'text': list(train)}),
        'validation': Dataset.from_dict({'text': list(validation)}),
        'test': Dataset.from_dict({'text': list(test)}),
    }
)
```
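The cut-point arithmetic in the snippet above can be sanity-checked in isolation. This is a sketch; `split_sizes` is a hypothetical helper reusing the same 0.9/0.07/0.03 fractions:

```python
def split_sizes(n, train_percentage=0.9, validation_percentage=0.07):
    """Mirror np.split's cut points: return (train, validation, test) sizes."""
    first = int(n * train_percentage)
    second = int(n * (train_percentage + validation_percentage))
    return first, second - first, n - second

print(split_sizes(461))  # sizes for the 461 'train' examples above
```

The three sizes always sum to `n`, since the test split simply takes the remainder.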
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@InProceedings{huggingartists,
author={Aleksey Korshuk},
year=2021
}
```
## About
*Built by Aleksey Korshuk*
[](https://github.com/AlekseyKorshuk)
[](https://twitter.com/intent/follow?screen_name=alekseykorshuk)
[](https://t.me/joinchat/_CQ04KjcJ-4yZTky)
For more details, visit the project repository.
[](https://github.com/AlekseyKorshuk/huggingartists)
|
masakhane/afriqa-prebuilt-sparse-indexes | ---
license: apache-2.0
task_categories:
- text-retrieval
language:
- en
- fr
pretty_name: Afriqa Wikipedia 100 Inverted Indices
size_categories:
- 100K<n<1M
---
<h1>Afriqa Prebuilt Indices</h1>
Prebuilt Lucene inverted indices for preprocessed Afriqa Wikipedia passages. |
jan-hq/open_platypus_binarized | ---
dataset_info:
features:
- name: messages
list:
- name: content
dtype: string
- name: role
dtype: string
splits:
- name: train
num_bytes: 27892376.454545453
num_examples: 22433
- name: test
num_bytes: 3099705.5454545454
num_examples: 2493
download_size: 16425005
dataset_size: 30992082.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
|
jsra2/id2223_whisper_swedish_augmented | ---
dataset_info:
features:
- name: input_features
sequence:
sequence: float32
- name: labels
sequence: int64
splits:
- name: train
num_bytes: 11871603408
num_examples: 12360
- name: test
num_bytes: 4868697560
num_examples: 5069
download_size: 2532495364
dataset_size: 16740300968
---
# Dataset Card for "id2223_whisper_swedish_augmented"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
result-kand2-sdxl-wuerst-karlo/6155933b | ---
dataset_info:
features:
- name: result
dtype: string
- name: id
dtype: int64
splits:
- name: train
num_bytes: 215
num_examples: 10
download_size: 1402
dataset_size: 215
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "6155933b"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
moseoridev/train_v7 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: instruction
dtype: string
- name: input
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 245116662
num_examples: 171636
download_size: 123533490
dataset_size: 245116662
---
# Dataset Card for "train_v7"
Our 4th-round data + Vicuna |
Glac1er/Glataset | ---
license: unknown
---
|
SUSTech/sci-llm | ---
license: apache-2.0
dataset_info:
features:
- name: chosen
dtype: string
- name: rejected
dtype: string
splits:
- name: train
num_bytes: 47624714
num_examples: 133542
- name: test
num_bytes: 422106
num_examples: 800
download_size: 89497
dataset_size: 48046820
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
|
TurcoLoko/satab | ---
license: apache-2.0
---
|
CyberHarem/makinohara_shoko_seishunbutayarou | ---
license: mit
task_categories:
- text-to-image
tags:
- art
- not-for-all-audiences
size_categories:
- n<1K
---
# Dataset of Makinohara Shoko
This is the dataset of Makinohara Shoko, containing 120 images and their tags.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).
| Name | Images | Download | Description |
|:------------|---------:|:------------------------------------|:-------------------------------------------------------------------------|
| raw | 120 | [Download](dataset-raw.zip) | Raw data with meta information. |
| raw-stage3 | 283 | [Download](dataset-raw-stage3.zip) | 3-stage cropped raw data with meta information. |
| 384x512 | 120 | [Download](dataset-384x512.zip) | 384x512 aligned dataset. |
| 512x512 | 120 | [Download](dataset-512x512.zip) | 512x512 aligned dataset. |
| 512x704 | 120 | [Download](dataset-512x704.zip) | 512x704 aligned dataset. |
| 640x640 | 120 | [Download](dataset-640x640.zip) | 640x640 aligned dataset. |
| 640x880 | 120 | [Download](dataset-640x880.zip) | 640x880 aligned dataset. |
| stage3-640 | 283 | [Download](dataset-stage3-640.zip) | 3-stage cropped dataset with the shorter side not exceeding 640 pixels. |
| stage3-800 | 283 | [Download](dataset-stage3-800.zip) | 3-stage cropped dataset with the shorter side not exceeding 800 pixels. |
| stage3-1200 | 283 | [Download](dataset-stage3-1200.zip) | 3-stage cropped dataset with the shorter side not exceeding 1200 pixels. |
|
idleheroevich2/Mordekaiser | ---
license: unknown
---
|
shikii2/bluezao2013 | ---
license: openrail
---
|
Minata/70000_method2test_tokonized | ---
dataset_info:
features:
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
- name: labels
sequence: int64
splits:
- name: train
num_bytes: 466760000
num_examples: 70000
download_size: 27648900
dataset_size: 466760000
---
# Dataset Card for "70000_method2test_tokonized"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
deven367/babylm-10M-aochildes | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: valid
path: data/valid-*
- split: test
path: data/test-*
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 2140547
num_examples: 80000
- name: valid
num_bytes: 1987198
num_examples: 70000
- name: test
num_bytes: 1648555
num_examples: 60000
download_size: 3235049
dataset_size: 5776300
---
# Dataset Card for "babylm-10M-aochildes"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Yixian-Lu/NER_conllpp | ---
dataset_info:
features:
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
- name: labels
sequence: int64
splits:
- name: train
num_bytes: 3915911
num_examples: 14041
- name: validation
num_bytes: 970866
num_examples: 3250
- name: test
num_bytes: 915582
num_examples: 3453
download_size: 219962
dataset_size: 5802359
---
# Dataset Card for "NER_conllpp"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
jinwoos/cartoonizer-dataset-351 | ---
dataset_info:
features:
- name: original_image
dtype: image
- name: edit_prompt
dtype: string
- name: cartoonized_image
dtype: image
splits:
- name: train
num_bytes: 6155151795.0
num_examples: 350
download_size: 6154762185
dataset_size: 6155151795.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
andersonbcdefg/biology | ---
dataset_info:
features:
- name: role_1
dtype: string
- name: topic;
dtype: string
- name: sub_topic
dtype: string
- name: message_1
dtype: string
- name: message_2
dtype: string
splits:
- name: train
num_bytes: 61275986
num_examples: 20000
download_size: 28860171
dataset_size: 61275986
---
# Dataset Card for "biology"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
ravithejads/alpaca_urdu | ---
dataset_info:
features:
- name: instruction
dtype: string
- name: input
dtype: string
- name: output
dtype: string
- name: instruction_translated
dtype: string
- name: input_translated
dtype: string
- name: output_translated
dtype: string
splits:
- name: train
num_bytes: 25412
num_examples: 10
download_size: 27969
dataset_size: 25412
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
mtc/faithfulness_benchmark_sanity_check_xsum_faith | ---
configs:
- config_name: default
data_files:
- split: test
path: data/test-*
dataset_info:
features:
- name: bbcid
dtype: int64
- name: summary
dtype: string
- name: is_faithful
dtype: bool
- name: majority_hallucination_type
dtype: string
- name: text
dtype: string
- name: __index_level_0__
dtype: int64
splits:
- name: test
num_bytes: 659922
num_examples: 318
download_size: 300946
dataset_size: 659922
---
# Dataset Card for "faithfulness_benchmark_sanity_check_xsum_faith"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
MattBoraske/reddit-AITA-submissions-and-comments-binary-top-2k | ---
dataset_info:
features:
- name: submission_title
dtype: string
- name: submission_text
dtype: string
- name: submission_score
dtype: int64
- name: submission_url
dtype: string
- name: submission_date
dtype: string
- name: top_comment_1
dtype: string
- name: top_comment_2
dtype: string
- name: top_comment_3
dtype: string
- name: top_comment_4
dtype: string
- name: top_comment_5
dtype: string
- name: top_comment_6
dtype: string
- name: top_comment_7
dtype: string
- name: top_comment_8
dtype: string
- name: top_comment_9
dtype: string
- name: top_comment_10
dtype: string
- name: top_comment_1_classification
dtype: string
- name: top_comment_2_classification
dtype: string
- name: top_comment_3_classification
dtype: string
- name: top_comment_4_classification
dtype: string
- name: top_comment_5_classification
dtype: string
- name: top_comment_6_classification
dtype: string
- name: top_comment_7_classification
dtype: string
- name: top_comment_8_classification
dtype: string
- name: top_comment_9_classification
dtype: string
- name: top_comment_10_classification
dtype: string
- name: ambiguity_score
dtype: float64
- name: flanT5_instruction
dtype: string
- name: llama2_instruction
dtype: string
splits:
- name: train
num_bytes: 16370638
num_examples: 1600
- name: test
num_bytes: 3994237
num_examples: 400
download_size: 11808547
dataset_size: 20364875
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
|
CyberHarem/cherino_bluearchive | ---
license: mit
task_categories:
- text-to-image
tags:
- art
- not-for-all-audiences
size_categories:
- n<1K
---
# Dataset of cherino/連河チェリノ/切里诺 (Blue Archive)
This is the dataset of cherino/連河チェリノ/切里诺 (Blue Archive), containing 129 images and their tags.
The core tags of this character are `long_hair, blue_eyes, white_hair, halo, fake_facial_hair, fake_mustache, grey_hair, hat, two_side_up, ribbon, shako_cap`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:---------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 129 | 186.88 MiB | [Download](https://huggingface.co/datasets/CyberHarem/cherino_bluearchive/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 1200 | 129 | 163.15 MiB | [Download](https://huggingface.co/datasets/CyberHarem/cherino_bluearchive/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 346 | 357.61 MiB | [Download](https://huggingface.co/datasets/CyberHarem/cherino_bluearchive/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for loading with [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html). If you need it, just run the following code:
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/cherino_bluearchive',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
## List of Clusters
List of tag clustering results; some outfits may be mined here.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 5 |  |  |  |  |  | 1girl, blush, hetero, loli, penis, 1boy, mosaic_censoring, open_mouth, school_swimsuit, white_one-piece_swimsuit, clothed_female_nude_male, flat_chest, hair_ribbon, nipples, solo_focus, vaginal, age_difference, all_fours, barefoot, blunt_bangs, collarbone, cum, dark-skinned_male, doggystyle, hairband, missionary, name_tag, one-piece_swimsuit_pull, sex_from_behind, spread_legs, tears, torso_grab, very_long_hair, white_ribbon |
| 1 | 15 |  |  |  |  |  | 1girl, solo, white_one-piece_swimsuit, name_tag, simple_background, collarbone, official_alternate_costume, white_background, blush, looking_at_viewer, old_school_swimsuit, blue_halo, cowboy_shot, bath_yukata, covered_navel, flat_chest, open_clothes, open_mouth, small_breasts |
| 2 | 10 |  |  |  |  |  | 1girl, long_sleeves, looking_at_viewer, randoseru, white_gloves, black_pantyhose, solo, white_coat, white_shorts, simple_background, white_background, blush, pom_pom_hair_ornament, red_bag, smile, closed_mouth, fur_trim |
| 3 | 5 |  |  |  |  |  | 1girl, long_sleeves, looking_at_viewer, simple_background, solo, uniform, white_gloves, randoseru, upper_body, red_bag, sidelocks, white_background, white_coat |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | blush | hetero | loli | penis | 1boy | mosaic_censoring | open_mouth | school_swimsuit | white_one-piece_swimsuit | clothed_female_nude_male | flat_chest | hair_ribbon | nipples | solo_focus | vaginal | age_difference | all_fours | barefoot | blunt_bangs | collarbone | cum | dark-skinned_male | doggystyle | hairband | missionary | name_tag | one-piece_swimsuit_pull | sex_from_behind | spread_legs | tears | torso_grab | very_long_hair | white_ribbon | solo | simple_background | official_alternate_costume | white_background | looking_at_viewer | old_school_swimsuit | blue_halo | cowboy_shot | bath_yukata | covered_navel | open_clothes | small_breasts | long_sleeves | randoseru | white_gloves | black_pantyhose | white_coat | white_shorts | pom_pom_hair_ornament | red_bag | smile | closed_mouth | fur_trim | uniform | upper_body | sidelocks |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:--------|:---------|:-------|:--------|:-------|:-------------------|:-------------|:------------------|:---------------------------|:---------------------------|:-------------|:--------------|:----------|:-------------|:----------|:-----------------|:------------|:-----------|:--------------|:-------------|:------|:--------------------|:-------------|:-----------|:-------------|:-----------|:--------------------------|:------------------|:--------------|:--------|:-------------|:-----------------|:---------------|:-------|:--------------------|:-----------------------------|:-------------------|:--------------------|:----------------------|:------------|:--------------|:--------------|:----------------|:---------------|:----------------|:---------------|:------------|:---------------|:------------------|:-------------|:---------------|:------------------------|:----------|:--------|:---------------|:-----------|:----------|:-------------|:------------|
| 0 | 5 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 1 | 15 |  |  |  |  |  | X | X | | | | | | X | | X | | X | | | | | | | | | X | | | | | | X | | | | | | | | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | |
| 2 | 10 |  |  |  |  |  | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | X | X | | X | X | | | | | | | | X | X | X | X | X | X | X | X | X | X | X | | | |
| 3 | 5 |  |  |  |  |  | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | X | X | | X | X | | | | | | | | X | X | X | | X | | | X | | | | X | X | X |
|
CyberHarem/kuon_nanami_paripikoumei | ---
license: mit
task_categories:
- text-to-image
tags:
- art
- not-for-all-audiences
size_categories:
- n<1K
---
# Dataset of Kuon Nanami
This is the dataset of Kuon Nanami, containing 153 images and their tags.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).
| Name | Images | Download | Description |
|:------------|---------:|:------------------------------------|:-------------------------------------------------------------------------|
| raw | 153 | [Download](dataset-raw.zip) | Raw data with meta information. |
| raw-stage3 | 358 | [Download](dataset-raw-stage3.zip) | 3-stage cropped raw data with meta information. |
| 384x512 | 153 | [Download](dataset-384x512.zip) | 384x512 aligned dataset. |
| 512x512 | 153 | [Download](dataset-512x512.zip) | 512x512 aligned dataset. |
| 512x704 | 153 | [Download](dataset-512x704.zip) | 512x704 aligned dataset. |
| 640x640 | 153 | [Download](dataset-640x640.zip) | 640x640 aligned dataset. |
| 640x880 | 153 | [Download](dataset-640x880.zip) | 640x880 aligned dataset. |
| stage3-640 | 358 | [Download](dataset-stage3-640.zip) | 3-stage cropped dataset with the shorter side not exceeding 640 pixels. |
| stage3-800 | 358 | [Download](dataset-stage3-800.zip) | 3-stage cropped dataset with the shorter side not exceeding 800 pixels. |
| stage3-1200 | 358 | [Download](dataset-stage3-1200.zip) | 3-stage cropped dataset with the shorter side not exceeding 1200 pixels. |
|
mwalmsley/galaxy10_decals_astropile | ---
dataset_info:
- config_name: galaxyzoo
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': Disturbed
'1': Merging
'2': Round Smooth
'3': In-between Round Smooth
'4': Cigar Shaped Smooth
'5': Barred Spiral
'6': Unbarred Tight Spiral
'7': Unbarred Loose Spiral
'8': Edge-on without Bulge
'9': Edge-on with Bulge
splits:
- name: train
num_bytes: 2479891
num_examples: 13779
- name: test
num_bytes: 620054
num_examples: 3445
download_size: 425988859
dataset_size: 3099945
- config_name: skyviewer
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': Disturbed
'1': Merging
'2': Round Smooth
'3': In-between Round Smooth
'4': Cigar Shaped Smooth
'5': Barred Spiral
'6': Unbarred Tight Spiral
'7': Unbarred Loose Spiral
'8': Edge-on without Bulge
'9': Edge-on with Bulge
splits:
- name: train
num_bytes: 2496189
num_examples: 13779
- name: test
num_bytes: 624141
num_examples: 3445
download_size: 230816138
dataset_size: 3120330
---
|
Harshithacj123/NER_sample2 | ---
dataset_info:
features:
- name: input
dtype: string
- name: instruction
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 9419
num_examples: 7
download_size: 14281
dataset_size: 9419
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
gwlms/germeval2018 | ---
license: cc-by-4.0
dataset_info:
features:
- name: text
dtype: string
- name: coarse-grained
dtype: string
- name: fine-grained
dtype: string
config_name: germeval2018
splits:
- name: train
num_bytes: 840593
num_examples: 5009
- name: test
num_bytes: 519146
num_examples: 3532
download_size: 1282870
dataset_size: 1359739
task_categories:
- text-classification
language:
- de
--- |
Gunulhona/llm_datasets | ---
license: mit
task_categories:
- text-generation
language:
- ko
size_categories:
- 100M<n<1B
--- |
bin-zheng1/demo | ---
license: apache-2.0
---
|
CyberHarem/tam_lin_lancelot_fgo | ---
license: mit
task_categories:
- text-to-image
tags:
- art
- not-for-all-audiences
size_categories:
- n<1K
---
# Dataset of tam_lin_lancelot/妖精騎士ランスロット/妖精骑士兰斯洛特 (Fate/Grand Order)
This is the dataset of tam_lin_lancelot/妖精騎士ランスロット/妖精骑士兰斯洛特 (Fate/Grand Order), containing 500 images and their tags.
The core tags of this character are `long_hair, white_hair, sidelocks, breasts, forked_eyebrows, small_breasts, yellow_eyes, brown_eyes`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:----------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 500 | 892.76 MiB | [Download](https://huggingface.co/datasets/CyberHarem/tam_lin_lancelot_fgo/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 1200 | 500 | 758.09 MiB | [Download](https://huggingface.co/datasets/CyberHarem/tam_lin_lancelot_fgo/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 1303 | 1.48 GiB | [Download](https://huggingface.co/datasets/CyberHarem/tam_lin_lancelot_fgo/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for loading with [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html). If you need it, just run the following code:
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/tam_lin_lancelot_fgo',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
## List of Clusters
List of tag clustering results; some outfits may be mined here.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 14 |  |  |  |  |  | 1girl, long_sleeves, looking_at_viewer, solo, wide_sleeves, obi, blue_kimono, layered_kimono, purple_kimono, smile, flower, open_mouth |
| 1 | 22 |  |  |  |  |  | 1girl, bare_shoulders, dragon_wings, horns, solo, thighs, looking_at_viewer, white_one-piece_swimsuit, thighlet, covered_navel, dragon_tail, dragon_girl, smile, ahoge |
| 2 | 6 |  |  |  |  |  | 1girl, bare_shoulders, dragon_wings, horns, looking_at_viewer, smile, solo, thighs, dragon_tail, open_mouth, white_bikini, navel, thighlet, elbow_gloves, thighhighs |
| 3 | 22 |  |  |  |  |  | 1girl, black_bikini, cropped_jacket, dragon_wings, high_ponytail, long_sleeves, looking_at_viewer, shrug_(clothing), solo, smile, thighlet, thighs, black_jacket, navel, mouth_mask, pubic_tattoo, tongue_out, mask_pull |
| 4 | 15 |  |  |  |  |  | 1girl, solo, thighs, looking_at_viewer, bare_shoulders, revealing_clothes, body_markings, dragon_wings, weapon, black_panties, horns |
| 5 | 17 |  |  |  |  |  | 1girl, blue_dress, solo, frills, long_sleeves, looking_at_viewer, blue_cape, white_thighhighs, smile, white_rose |
| 6 | 10 |  |  |  |  |  | 1girl, blue_dress, breastplate, faulds, looking_at_viewer, pauldrons, solo, armored_dress, blue_armor, short_dress, thighs, weapon |
| 7 | 9 |  |  |  |  |  | 1boy, 1girl, hetero, nipples, penis, sex, thighs, vaginal, blush, navel, open_mouth, spread_legs, sweat, collarbone, cum_in_pussy, completely_nude, mosaic_censoring, girl_on_top, looking_at_viewer, smile, straddling |
| 8 | 5 |  |  |  |  |  | 1girl, bare_shoulders, fake_animal_ears, playboy_bunny, rabbit_ears, solo, highleg_leotard, looking_at_viewer, open_mouth, strapless_leotard, thighs, wrist_cuffs, blue_leotard, blush, smile, bare_legs, black_leotard, collarbone, covered_navel, fake_tail, heart, pantyhose, rabbit_tail |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | long_sleeves | looking_at_viewer | solo | wide_sleeves | obi | blue_kimono | layered_kimono | purple_kimono | smile | flower | open_mouth | bare_shoulders | dragon_wings | horns | thighs | white_one-piece_swimsuit | thighlet | covered_navel | dragon_tail | dragon_girl | ahoge | white_bikini | navel | elbow_gloves | thighhighs | black_bikini | cropped_jacket | high_ponytail | shrug_(clothing) | black_jacket | mouth_mask | pubic_tattoo | tongue_out | mask_pull | revealing_clothes | body_markings | weapon | black_panties | blue_dress | frills | blue_cape | white_thighhighs | white_rose | breastplate | faulds | pauldrons | armored_dress | blue_armor | short_dress | 1boy | hetero | nipples | penis | sex | vaginal | blush | spread_legs | sweat | collarbone | cum_in_pussy | completely_nude | mosaic_censoring | girl_on_top | straddling | fake_animal_ears | playboy_bunny | rabbit_ears | highleg_leotard | strapless_leotard | wrist_cuffs | blue_leotard | bare_legs | black_leotard | fake_tail | heart | pantyhose | rabbit_tail |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:---------------|:--------------------|:-------|:---------------|:------|:--------------|:-----------------|:----------------|:--------|:---------|:-------------|:-----------------|:---------------|:--------|:---------|:---------------------------|:-----------|:----------------|:--------------|:--------------|:--------|:---------------|:--------|:---------------|:-------------|:---------------|:-----------------|:----------------|:-------------------|:---------------|:-------------|:---------------|:-------------|:------------|:--------------------|:----------------|:---------|:----------------|:-------------|:---------|:------------|:-------------------|:-------------|:--------------|:---------|:------------|:----------------|:-------------|:--------------|:-------|:---------|:----------|:--------|:------|:----------|:--------|:--------------|:--------|:-------------|:---------------|:------------------|:-------------------|:--------------|:-------------|:-------------------|:----------------|:--------------|:------------------|:--------------------|:--------------|:---------------|:------------|:----------------|:------------|:--------|:------------|:--------------|
| 0 | 14 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 1 | 22 |  |  |  |  |  | X | | X | X | | | | | | X | | | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 2 | 6 |  |  |  |  |  | X | | X | X | | | | | | X | | X | X | X | X | X | | X | | X | | | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 3 | 22 |  |  |  |  |  | X | X | X | X | | | | | | X | | | | X | | X | | X | | | | | | X | | | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 4 | 15 |  |  |  |  |  | X | | X | X | | | | | | | | | X | X | X | X | | | | | | | | | | | | | | | | | | | | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 5 | 17 |  |  |  |  |  | X | X | X | X | | | | | | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 6 | 10 |  |  |  |  |  | X | | X | X | | | | | | | | | | | | X | | | | | | | | | | | | | | | | | | | | | | X | | X | | | | | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 7 | 9 |  |  |  |  |  | X | | X | | | | | | | X | | X | | | | X | | | | | | | | X | | | | | | | | | | | | | | | | | | | | | | | | | | | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | |
| 8 | 5 |  |  |  |  |  | X | | X | X | | | | | | X | | X | X | | | X | | | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | X | | | X | | | | | | X | X | X | X | X | X | X | X | X | X | X | X | X |
|
BangumiBase/tenpuru | ---
license: mit
tags:
- art
size_categories:
- n<1K
---
# Bangumi Image Base of Tenpuru
This is the image base of bangumi Tenpuru, we detected 9 characters, 883 images in total. The full dataset is [here](all.zip).
**Please note that these image bases are not guaranteed to be fully cleaned; they may still contain noisy samples.** If you intend to manually train models using this dataset, we recommend performing the necessary preprocessing on the downloaded dataset to eliminate potential noisy samples (roughly 1% of images).
Here is the characters' preview:
| # | Images | Download | Preview 1 | Preview 2 | Preview 3 | Preview 4 | Preview 5 | Preview 6 | Preview 7 | Preview 8 |
|:------|---------:|:---------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|
| 0 | 272 | [Download](0/dataset.zip) |  |  |  |  |  |  |  |  |
| 1 | 50 | [Download](1/dataset.zip) |  |  |  |  |  |  |  |  |
| 2 | 221 | [Download](2/dataset.zip) |  |  |  |  |  |  |  |  |
| 3 | 36 | [Download](3/dataset.zip) |  |  |  |  |  |  |  |  |
| 4 | 37 | [Download](4/dataset.zip) |  |  |  |  |  |  |  |  |
| 5 | 101 | [Download](5/dataset.zip) |  |  |  |  |  |  |  |  |
| 6 | 115 | [Download](6/dataset.zip) |  |  |  |  |  |  |  |  |
| 7 | 22 | [Download](7/dataset.zip) |  |  |  |  |  |  |  |  |
| noise | 29 | [Download](-1/dataset.zip) |  |  |  |  |  |  |  |  |
|
neovalle/H4rmony_dpo | ---
license: mit
task_categories:
- question-answering
- text-classification
- reinforcement-learning
- text-generation
tags:
- ecolinguistics
- ecology
- sustainability
- environment
- synthetic
size_categories:
- 1K<n<10K
---
This dataset is based on [neovalle/H4rmony](https://huggingface.co/datasets/neovalle/H4rmony), and optimised to the format required by DPOTrainer from the trl library. |
QNN/autotrain-data-token-classification | ---
task_categories:
- token-classification
---
# AutoTrain Dataset for project: token-classification
## Dataset Description
This dataset has been automatically processed by AutoTrain for project token-classification.
### Languages
The BCP-47 code for the dataset's language is unk.
## Dataset Structure
### Data Instances
A sample from this dataset looks as follows:
```json
[
{
"tokens": [
"Pd",
"has",
"been",
"regarded",
"as",
"one",
"of",
"the",
"alternatives",
"to",
"Pt",
"as",
"a",
"promising",
"hydrogen",
"evolution",
"reaction",
"(HER)",
"catalyst.",
"Strategies",
"including",
"Pd-metal",
"alloys",
"(Pd-M)",
"and",
"Pd",
"hydrides",
"(PdH<sub><i>x</i></sub>)",
"have",
"been",
"proposed",
"to",
"boost",
"HER",
"performances.",
"However,",
"the",
"stability",
"issues,",
"e.g.,",
"the",
"dissolution",
"in",
"Pd-M",
"and",
"the",
"hydrogen",
"releasing",
"in",
"PdH<sub><i>x</i></sub>,",
"restrict",
"the",
"industrial",
"application",
"of",
"Pd-based",
"HER",
"catalysts.",
"We",
"here",
"design",
"and",
"synthesize",
"a",
"stable",
"Pd-Cu",
"hydride",
"(",
"PdCu<sub>0.2</sub>H<sub>0.43</sub>",
")",
"catalyst,",
"combining",
"the",
"advantages",
"of",
"both",
"Pd-M",
"and",
"PdH<sub><i>x</i></sub>",
"structures",
"and",
"improving",
"the",
"HER",
"durability",
"simultaneously.",
"The",
"hydrogen",
"intercalation",
"is",
"realized",
"under",
"atmospheric",
"pressure",
"(1.0",
"atm)",
"following",
"our",
"synthetic",
"approach",
"that",
"imparts",
"high",
"stability",
"to",
"the",
"Pd-Cu",
"hydride",
"structure.",
"The",
"obtained",
"PdCu<sub>0.2</sub>H<sub>0.43</sub>",
"catalyst",
"exhibits",
"a",
"small",
"overpotential",
"of",
"28",
"mV",
"at",
"10",
"mA/cm<sup>2</sup>",
",",
"a",
"low",
"Tafel",
"slope",
"of",
"23",
"mV/dec",
",",
"and",
"excellent",
"HER",
"durability",
"due",
"to",
"its",
"appropriate",
"hydrogen",
"adsorption",
"free",
"energy",
"and",
"alleviated",
"metal",
"dissolution",
"rate.",
"</p>",
"<p>"
],
"tags": [
2,
2,
2,
2,
2,
2,
2,
2,
2,
2,
2,
2,
2,
2,
2,
2,
2,
2,
2,
2,
2,
2,
2,
2,
2,
2,
2,
2,
2,
2,
2,
2,
2,
2,
2,
2,
2,
2,
2,
2,
2,
2,
2,
2,
2,
2,
2,
2,
2,
2,
2,
2,
2,
2,
2,
2,
2,
2,
2,
2,
2,
2,
2,
2,
2,
2,
2,
2,
0,
2,
2,
2,
2,
2,
2,
2,
2,
2,
2,
2,
2,
2,
2,
2,
2,
2,
2,
2,
2,
2,
2,
2,
2,
2,
2,
2,
2,
2,
2,
2,
2,
2,
2,
2,
2,
2,
2,
2,
2,
2,
2,
0,
2,
2,
2,
2,
4,
2,
5,
5,
2,
5,
5,
2,
2,
2,
4,
2,
2,
5,
5,
2,
2,
2,
2,
2,
2,
2,
2,
2,
2,
2,
2,
2,
2,
2,
2,
2,
2,
2,
2
]
},
{
"tokens": [
"A",
"critical",
"challenge",
"in",
"energy",
"research",
"is",
"the",
"development",
"of",
"earth",
"abundant",
"and",
"cost-effective",
"materials",
"that",
"catalyze",
"the",
"electrochemical",
"splitting",
"of",
"water",
"into",
"hydrogen",
"and",
"oxygen",
"at",
"high",
"rates",
"and",
"low",
"overpotentials.",
"Key",
"to",
"addressing",
"this",
"issue",
"lies",
"not",
"only",
"in",
"the",
"synthesis",
"of",
"new",
"materials,",
"but",
"also",
"in",
"the",
"elucidation",
"of",
"their",
"active",
"sites,",
"their",
"structure",
"under",
"operating",
"conditions",
"and",
"ultimately,",
"extraction",
"of",
"the",
"structure-function",
"relationships",
"used",
"to",
"spearhead",
"the",
"next",
"generation",
"of",
"catalyst",
"development.",
"In",
"this",
"work,",
"we",
"present",
"a",
"complete",
"cycle",
"of",
"synthesis,",
"operando",
"characterization,",
"and",
"redesign",
"of",
"an",
"amorphous",
"cobalt",
"phosphide",
"(",
"CoP",
"<sub><i>x</i></sub>",
")",
"bifunctional",
"catalyst.",
"The",
"research",
"was",
"driven",
"by",
"integrated",
"electrochemical",
"analysis,",
"Raman",
"spectroscopy",
"and",
"gravimetric",
"measurements",
"utilizing",
"a",
"novel",
"quartz",
"crystal",
"microbalance",
"spectroelectrochemical",
"cell",
"to",
"uncover",
"the",
"catalytically",
"active",
"species",
"of",
"amorphous",
"CoP",
"<sub><i>x</i></sub>",
"and",
"subsequently",
"modify",
"the",
"material",
"to",
"enhance",
"the",
"activity",
"of",
"the",
"elucidated",
"catalytic",
"phases.",
"Illustrating",
"the",
"power",
"of",
"our",
"approach,",
"the",
"second",
"generation",
"cobalt-iron",
"phosphide",
"(",
"CoFeP<sub>x</sub>",
")",
"catalyst,",
"developed",
"through",
"an",
"iteration",
"of",
"the",
"operando",
"measurement",
"directed",
"optimization",
"cycle,",
"is",
"superior",
"in",
"both",
"hydrogen",
"and",
"oxygen",
"evolution",
"reactivity",
"over",
"the",
"previous",
"material",
"and",
"is",
"capable",
"of",
"overall",
"water",
"electrolysis",
"at",
"a",
"current",
"density",
"of",
"10",
"mA",
"cm<sup>-2</sup>",
"with",
"1.5",
"V",
"applied",
"bias",
"in",
"1",
"M",
"KOH",
"electrolyte",
"solution.",
"</p>",
"<p>"
],
"tags": [
2,
2,
2,
2,
2,
2,
2,
2,
2,
2,
2,
2,
2,
2,
2,
2,
2,
2,
2,
2,
2,
2,
2,
2,
2,
2,
2,
2,
2,
2,
2,
2,
2,
2,
2,
2,
2,
2,
2,
2,
2,
2,
2,
2,
2,
2,
2,
2,
2,
2,
2,
2,
2,
2,
2,
2,
2,
2,
2,
2,
2,
2,
2,
2,
2,
2,
2,
2,
2,
2,
2,
2,
2,
2,
2,
2,
2,
2,
2,
2,
2,
2,
2,
2,
2,
2,
2,
2,
2,
2,
2,
2,
2,
2,
2,
2,
0,
0,
2,
2,
2,
2,
2,
2,
2,
2,
2,
2,
2,
2,
2,
2,
2,
2,
2,
2,
2,
2,
2,
2,
2,
2,
2,
2,
2,
2,
2,
2,
2,
2,
0,
0,
2,
2,
2,
2,
2,
2,
2,
2,
2,
2,
2,
2,
2,
2,
2,
2,
2,
2,
2,
2,
2,
2,
2,
2,
2,
2,
0,
2,
2,
2,
2,
2,
2,
2,
2,
2,
2,
2,
2,
2,
2,
2,
2,
2,
2,
2,
2,
2,
2,
2,
2,
2,
2,
2,
2,
2,
2,
2,
2,
2,
2,
2,
4,
4,
2,
5,
5,
5,
2,
2,
2,
2,
2,
2,
2,
2,
2,
2,
2,
2,
2
]
}
]
```
### Dataset Fields
The dataset has the following fields (also called "features"):
```json
{
"tokens": "Sequence(feature=Value(dtype='string', id=None), length=-1, id=None)",
"tags": "Sequence(feature=ClassLabel(names=['CATALYST', 'CO-CATALYST', 'O', 'Other', 'PROPERTY_NAME', 'PROPERTY_VALUE'], id=None), length=-1, id=None)"
}
```
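The integer `tags` decode back to label names via the `ClassLabel` name list above (index `i` maps to `names[i]`). A small sketch with hypothetical tokens, not taken from the samples above:

```python
# ClassLabel name order from the schema above; index i maps to names[i].
names = ['CATALYST', 'CO-CATALYST', 'O', 'Other', 'PROPERTY_NAME', 'PROPERTY_VALUE']

# Hypothetical token/tag pairs for illustration only.
tokens = ["Pd", "has", "low", "overpotential"]
tags = [0, 2, 2, 4]

decoded = [(tok, names[t]) for tok, t in zip(tokens, tags)]
print(decoded)  # -> [('Pd', 'CATALYST'), ('has', 'O'), ('low', 'O'), ('overpotential', 'PROPERTY_NAME')]
```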
### Dataset Splits
This dataset is split into train and validation splits. The split sizes are as follows:
| Split name | Num samples |
| ------------ | ------------------- |
| train | 166 |
| valid | 44 |
|
heliosprime/twitter_dataset_1713075125 | ---
dataset_info:
features:
- name: id
dtype: string
- name: tweet_content
dtype: string
- name: user_name
dtype: string
- name: user_id
dtype: string
- name: created_at
dtype: string
- name: url
dtype: string
- name: favourite_count
dtype: int64
- name: scraped_at
dtype: string
- name: image_urls
dtype: string
splits:
- name: train
num_bytes: 13461
num_examples: 28
download_size: 10794
dataset_size: 13461
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "twitter_dataset_1713075125"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Chapad0o/Vedal | ---
license: openrail
---
|
CyberHarem/hans_ludemann_azurlane | ---
license: mit
task_categories:
- text-to-image
tags:
- art
- not-for-all-audiences
size_categories:
- n<1K
---
# Dataset of hans_ludemann/ハンス・リューデマン/Z18 (Azur Lane)
This is the dataset of hans_ludemann/ハンス・リューデマン/Z18 (Azur Lane), containing 22 images and their tags.
The core tags of this character are `blonde_hair, long_hair, twintails, blue_eyes, hair_ornament, hairclip, hat, bow, fang, breasts, hair_between_eyes, small_breasts, bangs, very_long_hair, black_headwear`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan, ...); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:----------|:------------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 22 | 29.75 MiB | [Download](https://huggingface.co/datasets/CyberHarem/hans_ludemann_azurlane/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 22 | 17.76 MiB | [Download](https://huggingface.co/datasets/CyberHarem/hans_ludemann_azurlane/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 58 | 39.43 MiB | [Download](https://huggingface.co/datasets/CyberHarem/hans_ludemann_azurlane/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 22 | 27.03 MiB | [Download](https://huggingface.co/datasets/CyberHarem/hans_ludemann_azurlane/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 58 | 54.35 MiB | [Download](https://huggingface.co/datasets/CyberHarem/hans_ludemann_azurlane/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need this, just run the following code
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/hans_ludemann_azurlane',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
## List of Clusters
List of tag clustering results; some outfits may be mined here.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 22 |  |  |  |  |  | blush, 1girl, solo, looking_at_viewer, navel, open_mouth, fingerless_gloves, smile, black_gloves, skirt, white_panties, black_thighhighs, jacket, open_clothes, training_bra |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | blush | 1girl | solo | looking_at_viewer | navel | open_mouth | fingerless_gloves | smile | black_gloves | skirt | white_panties | black_thighhighs | jacket | open_clothes | training_bra |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:--------|:-------|:--------------------|:--------|:-------------|:--------------------|:--------|:---------------|:--------|:----------------|:-------------------|:---------|:---------------|:---------------|
| 0 | 22 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X |
|
BAAI/TACO | ---
annotations_creators: []
language_creators:
- crowdsourced
- expert-generated
language:
- code
license: apache-2.0
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets: []
task_categories:
- text-generation
task_ids:
- language-modeling
paperswithcode_id: taco-topics-in-algorithmic-code-generation
pretty_name: TACO
tags:
- code
dataset_info:
config_name: ALL
features:
- name: question
dtype: string
- name: solutions
dtype: string
- name: starter_code
dtype: string
- name: input_output
dtype: string
- name: difficulty
dtype: string
- name: raw_tags
dtype: string
- name: name
dtype: string
- name: source
dtype: string
- name: tags
dtype: string
- name: skill_types
dtype: string
- name: url
dtype: string
- name: Expected Auxiliary Space
dtype: string
- name: time_limit
dtype: string
- name: date
dtype: string
- name: picture_num
dtype: string
- name: memory_limit
dtype: string
- name: Expected Time Complexity
dtype: string
splits:
- name: train
num_bytes: 4239311973
num_examples: 25443
- name: test
num_bytes: 481480755
num_examples: 1000
download_size: 2419844942
dataset_size: 4720792728
configs:
- config_name: ALL
data_files:
- split: train
path: ALL/train-*
- split: test
path: ALL/test-*
---
# TACO Dataset
<img src="https://cdn-uploads.huggingface.co/production/uploads/6335113375bed9932474315e/rMxdXcC56S3FEh37oRa2s.png" width="200" height="200">
[TACO](https://github.com/FlagOpen/TACO) is a benchmark for code generation with 26443 problems. It can be used to evaluate the ability of language models to generate code from natural language specifications.
## Dataset Description
- **Repository:** https://github.com/FlagOpen/TACO/
- **Paper:** [TACO: Topics in Algorithmic COde generation dataset](https://arxiv.org/abs/2312.14852)
- **Leaderboard:** [Code Generation on CodeContests](https://paperswithcode.com/sota/code-generation-on-taco-code)
- **Point of Contact:** [Bo-Wen Zhang](mailto:bwzhang@baai.ac.cn)
## Languages
The dataset contains questions in English and code solutions in Python.
## Dataset Structure
```python
from datasets import load_dataset
load_dataset("BAAI/TACO")
DatasetDict({
train: Dataset({
features: ['question', 'solutions', 'starter_code', 'input_output', 'difficulty', 'raw_tags', 'name', 'source', 'tags', 'skill_types', 'url', 'Expected Auxiliary Space', 'time_limit', 'date', 'picture_num', 'memory_limit', 'Expected Time Complexity'],
num_rows: 25443
})
test: Dataset({
features: ['question', 'solutions', 'starter_code', 'input_output', 'difficulty', 'raw_tags', 'name', 'source', 'tags', 'skill_types', 'url', 'Expected Auxiliary Space', 'time_limit', 'date', 'picture_num', 'memory_limit', 'Expected Time Complexity'],
num_rows: 1000
})
})
```
### How to use it
You can load and iterate through the train split with the following code:
```python
from datasets import load_dataset
import json
ds = load_dataset("BAAI/TACO", split="train")
sample = next(iter(ds))
# non-empty solutions and input_output features can be parsed from text format this way:
sample["solutions"] = json.loads(sample["solutions"])
sample["input_output"] = json.loads(sample["input_output"])
sample["raw_tags"] = eval(sample["raw_tags"])
sample["tags"] = eval(sample["tags"])
sample["skill_types"] = eval(sample["skill_types"])
print(sample)
#OUTPUT:
{
"question": "You have a deck of $n$ cards, and you'd like to reorder it to a new one.\n\nEach card has a value between $1$ and $n$ equal to $p_i$. ...",
"solutions": [
"import heapq\nfrom math import sqrt\nimport operator\nimport sys\ninf_var = 0\nif inf_var == 1:\n\tinf = open('input.txt', 'r')\nelse:\n\tinf = sys.stdin\n ...",
"t = int(input())\nfor _ in range(t):\n\tn = int(input())\n\tp = list(map(int, input().split()))\n\tans = []\n\tp1 = [-1] * (n + 1)\n\tfor i in range(n):\n\t\tp1[p[i]] = i\n\ti = n\n\twhile i:\n\t\twhile i > 0 and p1[i] == -1:\n\t\t\ti -= 1\n\t\telse:\n\t\t\tif i:\n\t\t\t\tk = 0\n\t\t\t\tfor j in range(p1[i], n):\n\t\t\t\t\tans.append(p[j])\n\t\t\t\t\tp1[p[j]] = -1\n\t\t\t\t\tk += 1\n\t\t\t\tn -= k\n\t\t\t\ti -= 1\n\t\t\telse:\n\t\t\t\tbreak\n\tprint(*ans)\n",
"import sys\n\ndef get_ints():\n\treturn map(int, sys.stdin.readline().strip().split())\n\ndef get_list():\n\treturn list(map(int, sys.stdin.readline().strip().split()))\n\ndef get_list_string():\n\treturn list(map(str, sys.stdin.readline().strip().split()))\n\ndef get_string():\n\treturn sys.stdin.readline().strip()\n\ndef get_int():\n\treturn int(sys.stdin.readline().strip())\n\ndef get_print_int(x):\n\tsys.stdout.write(str(x) + '\\n')\n\ndef get_print(x):\n\tsys.stdout.write(x + '\\n')\n\ndef get_print_int_same(x):\n\tsys.stdout.write(str(x) + ' ')\n\ndef get_print_same(x):\n\tsys.stdout.write(x + ' ')\nfrom sys import maxsize\n\ndef solve():\n\tfor _ in range(get_int()):\n\t\tn = get_int()\n\t\tarr = get_list()\n\t\ti = n - 1\n\t\tj = n - 1\n\t\ttemp = sorted(arr)\n\t\tvis = [False] * n\n\t\tans = []\n\t\twhile j >= 0:\n\t\t\tt = j\n\t\t\ttt = []\n\t\t\twhile t >= 0 and arr[t] != temp[i]:\n\t\t\t\tvis[arr[t] - 1] = True\n\t\t\t\ttt.append(arr[t])\n\t\t\t\tt -= 1\n\t\t\tvis[arr[t] - 1] = True\n\t\t\ttt.append(arr[t])\n\t\t\ttt = tt[::-1]\n\t\t\tfor k in tt:\n\t\t\t\tans.append(k)\n\t\t\tj = t - 1\n\t\t\twhile i >= 0 and vis[i]:\n\t\t\t\ti -= 1\n\t\tget_print(' '.join(map(str, ans)))\nsolve()\n",
...
],
"starter_code": "",
"input_output": {
"inputs": [
"4\n4\n1 2 3 4\n5\n1 5 2 4 3\n6\n4 2 5 3 6 1\n1\n1\n",
"4\n4\n2 1 3 4\n5\n1 5 2 4 3\n6\n4 2 5 3 6 1\n1\n1\n",
"4\n4\n2 1 3 4\n5\n1 5 2 4 3\n6\n2 4 5 3 6 1\n1\n1\n",
"4\n4\n1 2 3 4\n5\n1 5 2 4 3\n6\n4 2 5 3 6 1\n1\n1\n"
],
"outputs": [
"4 3 2 1\n5 2 4 3 1\n6 1 5 3 4 2\n1\n",
"4 3 2 1\n5 2 4 3 1\n6 1 5 3 4 2\n1\n",
"4 3 2 1\n5 2 4 3 1\n6 1 5 3 4 2\n1\n",
"\n4 3 2 1\n5 2 4 3 1\n6 1 5 3 4 2\n1\n"
]
},
"difficulty": "EASY",
"raw_tags": [
"data structures",
"greedy",
"math"
],
"name": null,
"source": "codeforces",
"tags": [
"Data structures",
"Mathematics",
"Greedy algorithms"
],
"skill_types": [
"Data structures",
"Greedy algorithms"
],
"url": "https://codeforces.com/problemset/problem/1492/B",
"Expected Auxiliary Space": null,
"time_limit": "1 second",
"date": "2021-02-23",
"picture_num": "0",
"memory_limit": "512 megabytes",
"Expected Time Complexity": null
}
```
Each sample consists of a programming problem formulation in English, some ground-truth Python solutions, test cases defined by their inputs and outputs (and function name, if provided), as well as metadata regarding the difficulty level (`difficulty`), the topics of the task (`raw_tags`), the algorithms (`tags`), the required programming-skill types (`skill_types`), and the source of the problem.
If a sample has a non-empty `input_output` feature, you can read it as a dictionary with keys `inputs` and `outputs` (and `fn_name` if it exists); similarly, you can parse the solutions into a list of solutions as shown in the code above.
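As a concrete sketch, the parsed dictionary can be turned into (input, expected-output) pairs; the raw value below is hypothetical, shaped like the `input_output` field rather than taken from a real TACO row:

```python
import json

# Hypothetical raw value shaped like TACO's input_output field.
raw = '{"inputs": ["1 2\\n", "3 4\\n"], "outputs": ["3\\n", "7\\n"]}'

io_spec = json.loads(raw)
fn_name = io_spec.get("fn_name")  # present only for call-based problems
cases = list(zip(io_spec["inputs"], io_spec["outputs"]))
print(fn_name, len(cases))  # -> None 2
```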
You can also filter the dataset by difficulty level: EASY, MEDIUM, MEDIUM_HARD, HARD, and VERY_HARD, or by programming-skill type: Amortized analysis, Bit manipulation, Complete search, Data structures, Dynamic programming, Greedy algorithms, Range queries, Sorting. Just pass the difficulties or skills as a list. E.g., if you want the most challenging problems, select the VERY_HARD level:
```python
ds = load_dataset("BAAI/TACO", split="train", difficulties=["VERY_HARD"])
print(next(iter(ds))["question"])
```
```
#OUTPUT:
"""Let S(n) denote the number that represents the digits of n in sorted order. For example, S(1) = 1, S(5) = 5, S(50394) = 3459, S(353535) = 333555.
Given a number X, compute <image> modulo 10^9 + 7.
Input
The first line of input will contain the integer X (1 ≤ X ≤ 10^700).
Output
Print a single integer, the answer to the question.
Examples
Input
21
Output
195
Input
345342
Output
390548434
Note
The first few values of S are 1, 2, 3, 4, 5, 6, 7, 8, 9, 1, 11, 12, 13, 14, 15, 16, 17, 18, 19, 2, 12. The sum of these values is 195.
```
Or if you want the problems involving Range queries and Sorting, you need to select the skills Range queries and Sorting:
```python
ds = load_dataset("BAAI/TACO", split="train", skills=["Range queries", "Sorting"])
```
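The stringified list fields can also be filtered locally after parsing. Below is a sketch on hypothetical in-memory rows (not real TACO data), using `ast.literal_eval` as a safer alternative to the `eval` calls shown earlier:

```python
import ast

# Hypothetical rows mimicking TACO's stringified-list fields.
rows = [
    {"name": "p1", "skill_types": "['Sorting', 'Range queries']"},
    {"name": "p2", "skill_types": "['Greedy algorithms']"},
    {"name": "p3", "skill_types": "['Sorting']"},
]

wanted = {"Range queries", "Sorting"}
# keep rows whose parsed skill list intersects the wanted set
selected = [r["name"] for r in rows
            if wanted & set(ast.literal_eval(r["skill_types"]))]
print(selected)  # -> ['p1', 'p3']
```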
### Data Fields
|Field|Type|Description|
|---|---|---|
|question|string|problem description|
|solutions|string|some python solutions|
|input_output|string|Json string with "inputs" and "outputs" of the test cases, might also include "fn_name" the name of the function|
|difficulty|string|difficulty level of the problem|
|picture_num|string|the number of pictures in the problem|
|source|string|the source of the problem|
|url|string|url of the source of the problem|
|date|string|the date of the problem|
|starter_code|string|starter code to include in prompts|
|time_limit|string|the time consumption limit to solve the problem|
|memory_limit|string|the memory consumption limit to solve the problem|
|Expected Auxiliary Space|string|the extra auxiliary space expected to solve the problem|
|Expected Time Complexity|string|the time complexity expected to solve the problem|
|raw_tags|string|the topics of the programming task|
|tags|string|the manually annotated algorithms needed to solve the problem|
|skill_types|string|the mapped programming skill types to solve the problem|
### Data Splits
The dataset contains a train split with 25,443 samples and a test split with 1,000 samples.
### Dataset Statistics
* 26443 coding problems
* 1.55M verified solutions
* for the test split, the average number of test cases is 202.3
* all files have ground-truth solutions in the test split
## Dataset Creation
To create the TACO dataset, the authors manually curated problems from open-access sites where programmers share problems with each other, including Aizu, AtCoder, CodeChef, Codeforces, CodeWars, GeeksforGeeks, HackerEarth, HackerRank, Kattis, and LeetCode. For more details, please refer to the original paper.
## License
The TACO dataset, authored by BAAI, Shandong Normal University, and Peking University, is released under an [Apache 2.0 License](https://www.apache.org/licenses/LICENSE-2.0). However, the data also includes content under other permissive licenses, such as the MIT License, as well as web-crawled data used under the terms of the CC BY 4.0 license ([Creative Commons Attribution 4.0 International license](https://creativecommons.org/licenses/by/4.0/legalcode)).
We gratefully acknowledge the contributions of the following:
* some AtCoder, Codeforces, CodeWars, Kattis, LeetCode material curated from APPS dataset (https://github.com/hendrycks/apps)
* some Aizu, AtCoder, CodeChef, Codeforces material curated from CodeContest dataset (https://github.com/google-deepmind/code_contests)
* Codeforces materials are sourced from http://codeforces.com.
* CodeChef materials are sourced from https://www.codechef.com.
* GeeksforGeeks materials are sourced from https://www.geeksforgeeks.org
* HackerEarth materials are curated from:
[Description2Code Dataset](https://github.com/ethancaballero/description2code),
licensed under the
[MIT open source license](https://opensource.org/licenses/MIT), copyright
not specified.
* HackerRank materials are sourced from https://www.hackerrank.com. We are not aware of the exact legal rights or data license governing HackerRank materials; please contact us if a data license applies.
## Citation Information
If you find our data, or code helpful, please cite [the original paper](https://arxiv.org/abs/2312.14852):
```
@article{li2023taco,
title={TACO: Topics in Algorithmic COde generation dataset},
author={Rongao Li and Jie Fu and Bo-Wen Zhang and Tao Huang and Zhihong Sun and Chen Lyu and Guang Liu and Zhi Jin and Ge Li},
journal={arXiv preprint arXiv:2312.14852},
year={2023}
}
``` |
AmazonScience/WikiDT | ---
license: cc-by-sa-3.0
task_categories:
- table-question-answering
- question-answering
language:
- en
tags:
- documents
- tables
- VQA
pretty_name: WikiDT
size_categories:
- 100K<n<1M
---
# WikiDT: Wikipedia Table Document dataset for table extraction and visual question answering
## Dataset Description
### Dataset Summary
WikiDT contains multi-level annotations and labels for the image-based question-answering task. Since each question is answered from a table shown on the image, and WikiDT provides the corresponding table annotations (which facilitate model diagnosis and problem decomposition), it can also be used directly as a table recognition dataset.
The dataset contains 16,887 Wikipedia screenshots, which are segmented into 54,032 subpages since the full screenshots are potentially long. In total, there are 159,905 tables in the dataset. The number of question-answer samples is 70,652. Each QA sample contains a triplet of <question, answer, full-page screenshot filename> and is additionally annotated with retrieval labels (which subpage, and which table). 53,698 QA samples also have SQL annotations.
For each subpage, OCR and table extraction annotations from two sources are available. While rendering the screenshots, the ground-truth table annotation is recorded. Meanwhile, to make the dataset realistic, we also requested OCR and table extraction from [Amazon Textract](https://aws.amazon.com/textract/) for each subpage (results obtained between Feb. 28 and Mar. 6, 2023).
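The extracted tables are shipped as TSV files (see the file tree below). A sketch of parsing such content with Python's `csv` module; the TSV content here is an illustrative in-memory stand-in, not an actual file from the dataset:

```python
import csv
import io

# Illustrative TSV content standing in for a file like tsv/web/<page>_<idx>.tsv.
tsv_text = "Year\tEvent\n2004\tDon Johnson Buckeye St. Classic\n2005\tOther Event\n"

rows = list(csv.reader(io.StringIO(tsv_text), delimiter="\t"))
header, body = rows[0], rows[1:]
print(header)     # -> ['Year', 'Event']
print(len(body))  # -> 2
```

For real files, replace the `StringIO` wrapper with `open(path, newline="")`.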
### Languages
English
## Dataset Structure
Once downloaded, WikiDT has the following parts. The downloaded files are around 77 GB; please ensure you have at least 160 GB of free space, since individual files will be extracted from the tars.
```
.
├── WikiTableExtraction
│   ├── detection.partaa
│   ├── detection.partab
│   ├── detection.partac
│   ├── detection.partad
│   ├── detection.partae
│   ├── detection.partaf
│   ├── detection.partag
│   ├── structure.partaa
│   ├── structure.partab
│   ├── structure.partac
│   ├── structure.partad
│   └── structure.partae
├── images.partaa
├── images.partab
├── images.partac
├── images.partad
├── images.partae
├── images.partaf
├── images.partag
├── images.partah
├── images.partai
├── ocr.tar
├── samples
│   ├── test.json
│   ├── train.json
│   └── val.json
└── tsv.tar
```
Please concatenate the part files and extract them into their respective folders. For example, run
```
cd WikiTableExtraction/
cat detection.parta* | tar x
```
to extract the `detection` folder.
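The same concatenate-and-extract step can be exercised end to end; the following self-contained sketch builds a tiny dummy archive in a temp directory, splits it into `.parta*` chunks the same way, and reassembles it (the file names and sizes here are illustrative, not the real WikiDT parts):

```shell
# Demo in a temp dir: split a small archive into .parta* chunks, then reassemble.
workdir=$(mktemp -d)
cd "$workdir"
mkdir detection
echo "sample" > detection/file.txt
tar cf detection.tar detection
rm -r detection
split -b 4096 detection.tar detection.part   # -> detection.partaa, detection.partab, ...
rm detection.tar
cat detection.parta* | tar x                 # the same reassembly used for WikiDT
cat detection/file.txt                       # prints: sample
```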
Once you extracted all the tar files, the WikiDT dataset has the following file structure.
```sh
+--WikiDT-dataset
| +--WikiTableExtraction
| | +--detection
| | | +--images # sub page images
| | | +--train # xml table bbox annotation
| | | +--test # xml table bbox annotation
| | | +--val # xml table bbox annotation
| | | images_filelist.txt # index of 54,032 images
| | | test_filelist.txt # index of 5,410 test samples
| | | train_filelist.txt # index of 43,248 train samples
| | | val_filelist.txt # index of 5,347 val samples
| | +--structure
| | | +--images # images cropped to table region
| | | +--train # xml table bbox annotation
| | | +--test # xml table bbox annotation
| | | +--val # xml table bbox annotation
| | | images_filelist.txt # index of 159,898 images
| | | test_filelist.txt # index of 15,989 test samples
| | | train_filelist.txt # index of 129,980 train samples
| | | val_filelist.txt # index of 15,991 val samples
| +--samples # in total 70,652 TableVQA samples from the three json files
| | +--train.json #
| | +--test.json #
| | +--val.json #
| +--images # full page image
| +--ocr # text and bbox for the table content
| | +--textract # detected by Amazon Textract API
| | +--web # extracted from HTML information
| +--tsv # extracted table in tsv format
| | +--textract # detected by Amazon Textract API
| | +--web # extracted from HTML information
```
### Table VQA annotation example
Here is an example of a Table VQA annotation record from `WikiDT-dataset/samples/[train|test|val].json`.
```
{'all_ocr_files_textract': ['ocr/textract/16301437_page_seg_0.json',
'ocr/textract/16301437_page_seg_1.json'],
'all_ocr_files_web': ['ocr/web/16301437_page_seg_0.json',
'ocr/web/16301437_page_seg_1.json'],
'all_table_files_textract': ['tsv/textract/16301437_page_0.tsv',
'tsv/textract/16301437_page_1.tsv'],
'all_table_files_web': ['tsv/web/16301437_1.tsv', 'tsv/web/16301437_0.tsv'],
'answer': [['don johnson buckeye st. classic']],
'image': '16301437_page.png',
'ocr_retrieval_file_textract': 'ocr/textract/16301437_page_seg_0.json',
'ocr_retrieval_file_web': 'ocr/web/16301437_page_seg_0.json',
'question': 'Name the Event which has a Score of 209-197?',
'sample_id': '14190',
'sql_str': "SELECT `event` FROM cur_table WHERE `score` = '209-197' ",
'sub_page': ['16301437_page_seg_0.png', '16301437_page_seg_1.png'],
'sub_page_retrieved': '16301437_page_seg_0.png',
'subset': 'TFC',
'table_id': '2-16301437-1',
'table_retrieval_file_textract': 'tsv/textract/16301437_page_0.tsv',
'table_retrieval_file_web': 'tsv/web/16301437_1.tsv'}
```
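A record like the one above can be handled with a small helper. The sketch below assumes only the field names shown in the example; loading the records themselves from `samples/train.json` (e.g. with `json.load`) is left out so the snippet stays self-contained.

```python
def retrieval_targets(sample, source="web"):
    """Return the (subpage image, OCR file, table file) a QA sample points to.

    `source` selects between the HTML-derived annotations ("web") and the
    Amazon Textract outputs ("textract").
    """
    return (
        sample["sub_page_retrieved"],
        sample[f"ocr_retrieval_file_{source}"],
        sample[f"table_retrieval_file_{source}"],
    )

# The example record from above, abbreviated to the fields used here.
sample = {
    "answer": [["don johnson buckeye st. classic"]],
    "image": "16301437_page.png",
    "ocr_retrieval_file_textract": "ocr/textract/16301437_page_seg_0.json",
    "ocr_retrieval_file_web": "ocr/web/16301437_page_seg_0.json",
    "question": "Name the Event which has a Score of 209-197?",
    "sub_page_retrieved": "16301437_page_seg_0.png",
    "table_retrieval_file_textract": "tsv/textract/16301437_page_0.tsv",
    "table_retrieval_file_web": "tsv/web/16301437_1.tsv",
}
print(retrieval_targets(sample, "textract"))
```

Note that the retrieval label identifies one subpage and one table even when `all_table_files_*` lists several candidates, which is what makes the retrieval sub-task well defined.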
### Table Structure annotation example
Here is an example of an XML table bounding-box annotation from `WikiDT-dataset/WikiTableExtraction/structure/[train|test|val]/`.
```xml
<annotation>
<folder />
<filename>204_147_page_crop_5.png</filename>
<source>WikiDT Dataset</source>
<size>
<width>788</width>
<height>540.0</height>
<depth>3</depth>
</size>
<object>
<name>table</name>
<rowspan />
<colspan />
<bndbox>
<xmin>10</xmin>
<ymin>10</ymin>
<xmax>778</xmax>
<ymax>530</ymax>
</bndbox>
</object>
<object>
<name>header row</name>
<rowspan />
<colspan />
<bndbox>
<xmin>10</xmin>
<ymin>10</ymin>
<xmax>778</xmax>
<ymax>33</ymax>
</bndbox>
</object>
<object>
<name>header cell</name>
<rowspan />
<colspan>10</colspan>
<bndbox>
<xmin>12</xmin>
<ymin>35</ymin>
<xmax>776</xmax>
<ymax>58</ymax>
</bndbox>
</object>
<object>
<name>table row</name>
<rowspan />
<colspan />
<bndbox>
<xmin>10</xmin>
<ymin>60</ymin>
<xmax>778</xmax>
<ymax>530</ymax>
</bndbox>
</object>
</annotation>
```
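These PASCAL-VOC-style files parse with nothing beyond the standard library. A minimal sketch, using an abbreviated copy of the annotation above (only the `object`, `name`, and `bndbox` fields are assumed):

```python
import xml.etree.ElementTree as ET

def parse_objects(xml_text):
    """Return a list of (label, (xmin, ymin, xmax, ymax)) from an annotation."""
    root = ET.fromstring(xml_text)
    out = []
    for obj in root.iter("object"):
        box = obj.find("bndbox")
        # int(float(...)) tolerates values written as "540.0" in some files.
        coords = tuple(int(float(box.find(k).text))
                       for k in ("xmin", "ymin", "xmax", "ymax"))
        out.append((obj.find("name").text, coords))
    return out

xml_text = """<annotation>
  <filename>204_147_page_crop_5.png</filename>
  <object><name>table</name>
    <bndbox><xmin>10</xmin><ymin>10</ymin><xmax>778</xmax><ymax>530</ymax></bndbox>
  </object>
  <object><name>header row</name>
    <bndbox><xmin>10</xmin><ymin>10</ymin><xmax>778</xmax><ymax>33</ymax></bndbox>
  </object>
</annotation>"""
print(parse_objects(xml_text))
```

For real files, replace `ET.fromstring(xml_text)` with `ET.parse(path).getroot()`.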
### Licensing Information
CC BY-SA 3.0
### Contributors
[Hui Shi](mailto:hshi@ucsd.edu) (Work done during her internship at Amazon)
[Yusheng Xie](mailto:yushx@amazon.com) (corresponding person)
[Luis Goncalves](mailto:luisgonc@amazon.com)
|
wshi83/EHRAgent-mimic_iii | ---
license: apache-2.0
---
|
jamestalentium/cnn_dailymail_100_finetune | ---
dataset_info:
features:
- name: input_text
dtype: string
- name: output_text
dtype: string
- name: id
dtype: string
splits:
- name: train
num_bytes: 439445.02164652944
num_examples: 100
download_size: 128996
dataset_size: 439445.02164652944
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "cnn_dailymail_100_finetune"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Jaran91/CuxDataset | ---
license: unknown
---
|
osacar/iaprueba | ---
license: openrail
---
|