datasetId | card |
|---|---|
Nyaa97/art_sr_vc1_mini2 | ---
dataset_info:
features:
- name: id1
dtype: string
- name: path1
dtype: string
- name: audio1
dtype: audio
- name: id2
dtype: string
- name: path2
dtype: string
- name: audio2
dtype: audio
- name: same_speaker
dtype: int64
splits:
- name: train
num_bytes: 32511478695.84
num_examples: 62587
download_size: 5172818625
dataset_size: 32511478695.84
---
# Dataset Card for "art_sr_vc1_mini2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
zhangshuoming/c_x86_simd_extension | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 741888
num_examples: 540
download_size: 133783
dataset_size: 741888
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "c_x86_simd_extension"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
THEODOROS/Architext_v1 | ---
license: apache-2.0
task_categories:
- text-generation
language:
- en
tags:
- architecture
- architext
pretty_name: architext_v1
size_categories:
- 100K<n<1M
---
# Dataset Card for Architext
## Dataset Description
This is the raw training data used to train the Architext models referenced in "Architext: Language-Driven Generative Architecture Design".
- **Homepage:** https://architext.design/
- **Paper:** https://arxiv.org/abs/2303.07519
- **Point of Contact:** Theodoros Galanos (https://twitter.com/TheodoreGalanos)
## Dataset Creation
The data were synthetically generated by a parametric design script in Grasshopper 3D, a virtual algorithmic environment in the design software Rhinoceros 3D.
## Considerations for Using the Data
The data describe one instance of architectural design, specifically layout generation for residential apartments. Even in that case, the data are limited in the shapes, sizes, and typologies they can represent. Additionally, the annotations used as language prompts to generate a design are restricted to automatically generated annotations based on layout characteristics (adjacency, typology, number of spaces).
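As an illustrative sketch of the annotation style described above (prompts derived automatically from adjacency, typology, and number of spaces), here is a small Python mock-up. This is not the original Grasshopper script; all function and variable names are hypothetical.

```python
# Hypothetical sketch of auto-generated layout annotations.
# NOT the original generation pipeline, which runs in Grasshopper 3D.

TYPOLOGY_BY_BEDROOMS = {1: "one-bedroom", 2: "two-bedroom", 3: "three-bedroom"}

def annotate_layout(rooms, adjacencies):
    """Build a language prompt from a room list and adjacency pairs."""
    n_bedrooms = sum(1 for r in rooms if r.startswith("bedroom"))
    typology = TYPOLOGY_BY_BEDROOMS.get(n_bedrooms, f"{n_bedrooms}-bedroom")
    parts = [f"a {typology} apartment with {len(rooms)} rooms"]
    for a, b in adjacencies:
        parts.append(
            f"the {a.replace('_', ' ')} is adjacent to the {b.replace('_', ' ')}"
        )
    return ", ".join(parts)

prompt = annotate_layout(
    ["living_room", "kitchen", "bedroom1", "bathroom"],
    [("kitchen", "living_room"), ("bedroom1", "bathroom")],
)
print(prompt)
# a one-bedroom apartment with 4 rooms, the kitchen is adjacent to the living room, the bedroom1 is adjacent to the bathroom
```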
### Licensing Information
The dataset is licensed under the Apache 2.0 license.
### Citation Information
If you use the dataset, please cite:
```
@article{galanos2023architext,
title={Architext: Language-Driven Generative Architecture Design},
author={Galanos, Theodoros and Liapis, Antonios and Yannakakis, Georgios N},
journal={arXiv preprint arXiv:2303.07519},
year={2023}
}
``` |
Muhammad2003/Toxic_PreTrain_4k | ---
license: apache-2.0
---
|
open-llm-leaderboard/details_chargoddard__llama-2-26b-trenchcoat-stack | ---
pretty_name: Evaluation run of chargoddard/llama-2-26b-trenchcoat-stack
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [chargoddard/llama-2-26b-trenchcoat-stack](https://huggingface.co/chargoddard/llama-2-26b-trenchcoat-stack)\
\ on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 3 configuration, each one coresponding to one of the\
\ evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
\ timestamp of the run.The \"train\" split is always pointing to the latest results.\n\
\nAn additional configuration \"results\" store all the aggregated results of the\
\ run (and is used to compute and display the aggregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_chargoddard__llama-2-26b-trenchcoat-stack_public\"\
,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\
These are the [latest results from run 2023-11-05T03:20:31.232234](https://huggingface.co/datasets/open-llm-leaderboard/details_chargoddard__llama-2-26b-trenchcoat-stack_public/blob/main/results_2023-11-05T03-20-31.232234.json)(note\
\ that their might be results for other tasks in the repos if successive evals didn't\
\ cover the same tasks. You find each in the results and the \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.028208892617449664,\n\
\ \"em_stderr\": 0.0016955832997069967,\n \"f1\": 0.07960255872483231,\n\
\ \"f1_stderr\": 0.0020841586471945246,\n \"acc\": 0.3881222949389441,\n\
\ \"acc_stderr\": 0.00840931636658079\n },\n \"harness|drop|3\": {\n\
\ \"em\": 0.028208892617449664,\n \"em_stderr\": 0.0016955832997069967,\n\
\ \"f1\": 0.07960255872483231,\n \"f1_stderr\": 0.0020841586471945246\n\
\ },\n \"harness|gsm8k|5\": {\n \"acc\": 0.02880970432145565,\n \
\ \"acc_stderr\": 0.004607484283767473\n },\n \"harness|winogrande|5\"\
: {\n \"acc\": 0.7474348855564326,\n \"acc_stderr\": 0.012211148449394105\n\
\ }\n}\n```"
repo_url: https://huggingface.co/chargoddard/llama-2-26b-trenchcoat-stack
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_drop_3
data_files:
- split: 2023_11_05T03_20_31.232234
path:
- '**/details_harness|drop|3_2023-11-05T03-20-31.232234.parquet'
- split: latest
path:
- '**/details_harness|drop|3_2023-11-05T03-20-31.232234.parquet'
- config_name: harness_gsm8k_5
data_files:
- split: 2023_11_05T03_20_31.232234
path:
- '**/details_harness|gsm8k|5_2023-11-05T03-20-31.232234.parquet'
- split: latest
path:
- '**/details_harness|gsm8k|5_2023-11-05T03-20-31.232234.parquet'
- config_name: harness_winogrande_5
data_files:
- split: 2023_11_05T03_20_31.232234
path:
- '**/details_harness|winogrande|5_2023-11-05T03-20-31.232234.parquet'
- split: latest
path:
- '**/details_harness|winogrande|5_2023-11-05T03-20-31.232234.parquet'
- config_name: results
data_files:
- split: 2023_11_05T03_20_31.232234
path:
- results_2023-11-05T03-20-31.232234.parquet
- split: latest
path:
- results_2023-11-05T03-20-31.232234.parquet
---
# Dataset Card for Evaluation run of chargoddard/llama-2-26b-trenchcoat-stack
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/chargoddard/llama-2-26b-trenchcoat-stack
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** clementine@hf.co
### Dataset Summary
Dataset automatically created during the evaluation run of model [chargoddard/llama-2-26b-trenchcoat-stack](https://huggingface.co/chargoddard/llama-2-26b-trenchcoat-stack) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 3 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_chargoddard__llama-2-26b-trenchcoat-stack_public",
"harness_winogrande_5",
split="train")
```
## Latest results
These are the [latest results from run 2023-11-05T03:20:31.232234](https://huggingface.co/datasets/open-llm-leaderboard/details_chargoddard__llama-2-26b-trenchcoat-stack_public/blob/main/results_2023-11-05T03-20-31.232234.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"em": 0.028208892617449664,
"em_stderr": 0.0016955832997069967,
"f1": 0.07960255872483231,
"f1_stderr": 0.0020841586471945246,
"acc": 0.3881222949389441,
"acc_stderr": 0.00840931636658079
},
"harness|drop|3": {
"em": 0.028208892617449664,
"em_stderr": 0.0016955832997069967,
"f1": 0.07960255872483231,
"f1_stderr": 0.0020841586471945246
},
"harness|gsm8k|5": {
"acc": 0.02880970432145565,
"acc_stderr": 0.004607484283767473
},
"harness|winogrande|5": {
"acc": 0.7474348855564326,
"acc_stderr": 0.012211148449394105
}
}
```
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] |
yair-elboher/text-toy | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 99444649
num_examples: 20000
- name: validation
num_bytes: 300238
num_examples: 50
download_size: 48091181
dataset_size: 99744887
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
---
# Dataset Card for "text-toy"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
imdatta0/mmlu_sample | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train_1pc
num_bytes: 76328814
num_examples: 56886
- name: train_5pc
num_bytes: 585203496
num_examples: 284544
download_size: 201927295
dataset_size: 661532310
configs:
- config_name: default
data_files:
- split: train_1pc
path: data/train_1pc-*
- split: train_5pc
path: data/train_5pc-*
---
# Dataset Card for "mmlu_sample"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
edanigoben/fr-crawle-reduced | ---
dataset_info:
features:
- name: labels
dtype:
class_label:
names:
'0': business analyst
'1': data analyst
'2': data engineer
'3': full stack
'4': data scientist
'5': software engineer
'6': devops engineer
'7': front end
'8': business intelligence analyst
'9': machine learning engineer
- name: text
dtype: string
splits:
- name: train
num_bytes: 13994632.751735482
num_examples: 80000
- name: val
num_bytes: 1749329.0939669353
num_examples: 10000
- name: test
num_bytes: 1749329.0939669353
num_examples: 10000
download_size: 10098323
dataset_size: 17493290.939669352
---
# Dataset Card for "fr-crawle-reduced"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
AchrafLou/achraf-ds | ---
dataset_info:
features:
- name: text
dtype: string
- name: image
dtype: image
splits:
- name: train
num_bytes: 14823600.03
num_examples: 3289
download_size: 15234205
dataset_size: 14823600.03
---
# Dataset Card for "achraf-ds"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
jlbaker361/wikiart20-evaluation | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: image
dtype: image
- name: model
dtype: string
splits:
- name: train
num_bytes: 1761155.0
num_examples: 3
download_size: 1763275
dataset_size: 1761155.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
abdalimran/BaitBuster-Bangla | ---
license: mit
---
# BaitBuster-Bangla: A Comprehensive Dataset for Clickbait Detection in Bangla with Multi-Feature and Multi-Modal Analysis
## Abstract
BaitBuster-Bangla is a multi-feature, multi-modal dataset for Bangla clickbait detection on video-sharing platforms. The data were collected from YouTube using its official public API with the objective of classifying clickbait content in the Bangla language. The dataset consists of 253,070 entries with 18 columns, covering a curated list of 28 Not Clickbait and 26 Clickbait Bangla YouTube channels. It provides valuable information for studying clickbait content and includes various video metadata, user engagement statistics, and labels. The dataset has been labeled using three different strategies: i) pre-defined auto labels, ii) labels by human annotators, and iii) labels by a fine-tuned AI model. However, human labels are available for only 10,000 entries. The dataset is available in three formats: xlsx, csv, and parquet.
## Data Description
The dataset contains a total of 253,070 records with 18 features. The features are categorized into four types: Metadata, Primary Data, Engagement Stats, and Label. The Metadata category contains basic information about the channel and video, such as their unique identifiers, date and time of publication, and thumbnail URLs. The Primary Data category contains the title and description of each video. The "Processed" columns refer to data that have been denoised, deduplicated, and debiased for further analysis. The Engagement Stats category contains user engagement metrics for each video. The Label category contains pre-defined auto labels, human-annotated labels, and AI-generated pseudo labels. Auto labels are derived automatically from a review of channel titles, descriptions, and thumbnails over time: channels with consistently misleading, exaggerated, or sensationalized content were labeled clickbait, while those focusing on factual information delivery without emotional appeals were labeled non-clickbait. Human labels are assigned manually by volunteer annotators, and AI labels are generated by a fine-tuned AI model. The following table presents a detailed overview and definitions of the features.
| **Feature Type** | **Feature Name** | **Data Type** | **Definition** |
|----------------------------|----------------------|---------------|--------------------------------------------------------------|
| Metadata | channel_id | string | ID of the YouTube channel |
| Metadata | channel_name | string | Name of the YouTube channel |
| Metadata | channel_url | string | URL of the YouTube channel |
| Metadata | video_id | string | ID of the video |
| Metadata | publishedAt | datetime | Date and time when the video was published |
| Primary Data | title | string | Title of the video |
| Primary Data (Processed) | title_debiased | string | Debiased title of the video |
| Primary Data | description | string | Description of the video |
| Primary Data (Processed) | description_debiased | string | Debiased description of the video |
| Metadata | url | string | URL of the video |
| Engagement Stats | viewCount | int | Number of views the video has received |
| Engagement Stats | commentCount | int | Number of comments on the video |
| Engagement Stats | likeCount | int | Number of likes on the video |
| Engagement Stats | dislikeCount | int | Number of dislikes on the video |
| Metadata | thumbnails | string | URL of the thumbnail for the video |
| Label | auto_labeled | string | Automatically labeled using manual review |
| Label (Processed) | human_labeled | string | Labeled by human |
| Label (Processed) | ai_labeled | string | Labeled by an AI model fine-tuned on human labeled data |
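To illustrate how the engagement and label columns above might be combined in practice, here is a small pandas sketch on a toy frame mirroring a subset of the schema. The column names come from the table; the data values are invented.

```python
import pandas as pd

# Toy rows mirroring a subset of the schema above; the values are invented.
df = pd.DataFrame({
    "channel_name": ["ch_a", "ch_a", "ch_b", "ch_b"],
    "viewCount": [1000, 3000, 500, 1500],
    "auto_labeled": ["Clickbait", "Clickbait", "Not Clickbait", "Not Clickbait"],
})

# Mean views per auto label -- a typical first-pass engagement comparison.
views_by_label = df.groupby("auto_labeled")["viewCount"].mean()
print(views_by_label)
# → Clickbait 2000.0, Not Clickbait 1000.0
```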
## Paper
* **Data in Brief**: https://doi.org/10.1016/j.dib.2024.110239
* **arXiv Link**: https://arxiv.org/abs/2310.11465
## Dataset
* **Mendeley**: https://data.mendeley.com/datasets/3c6ztw5nft/
* **Kaggle**: https://www.kaggle.com/datasets/abdalimran/baitbuster-bangla
## Citation
### MLA
```Al Imran, Abdullah, Md Sakib Hossain Shovon, and M. F. Mridha. "BaitBuster-Bangla: A Comprehensive Dataset for Clickbait Detection in Bangla with Multi-Feature and Multi-Modal Analysis." Data in Brief (2024): 110239.```
### BibTeX
```
@article{IMRAN2024110239,
title = {BaitBuster-Bangla: A Comprehensive Dataset for Clickbait Detection in Bangla with Multi-Feature and Multi-Modal Analysis},
journal = {Data in Brief},
pages = {110239},
year = {2024},
issn = {2352-3409},
doi = {https://doi.org/10.1016/j.dib.2024.110239},
url = {https://www.sciencedirect.com/science/article/pii/S2352340924002105},
author = {Abdullah Al Imran and Md Sakib Hossain Shovon and M.F. Mridha},
keywords = {Bangla clickbait dataset, YouTube clickbait, Multi-modal clickbait dataset, Multi-feature clickbait dataset, Bangla natural language processing, User behavior modeling, Social Media Analysis},
abstract = {This study presents a large multi-modal Bangla YouTube clickbait dataset consisting of 253,070 data points collected through an automated process using the YouTube API and Python web automation frameworks. The dataset contains 18 diverse features categorized into metadata, primary content, engagement statistics, and labels for individual videos from 58 Bangla YouTube channels. A rigorous preprocessing step has been applied to denoise, deduplicate, and remove bias from the features, ensuring unbiased and reliable analysis. As the largest and most robust clickbait corpus in Bangla to date, this dataset provides significant value for natural language processing and data science researchers seeking to advance modeling of clickbait phenomena in low-resource languages. Its multi-modal nature allows for comprehensive analyses of clickbait across content, user interactions, and linguistic dimensions to develop more sophisticated detection methods with cross-linguistic applications.}
}
```
|
jlbaker361/actstu-gsdf-counterfeit | ---
dataset_info:
features:
- name: image
dtype: image
- name: prompt
dtype: string
- name: seed
dtype: int64
- name: steps
dtype: int64
splits:
- name: train
num_bytes: 11937393.0
num_examples: 28
download_size: 11939004
dataset_size: 11937393.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
xooca/complex_simple_questions | ---
license: apache-2.0
---
|
Sharathhebbar24/openhermes | ---
dataset_info:
features:
- name: prompt
dtype: string
splits:
- name: train
num_bytes: 321396721
num_examples: 242831
download_size: 139098798
dataset_size: 321396721
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# openhermes
This is a cleansed version of [teknium/openhermes](https://huggingface.co/datasets/teknium/openhermes)
## Usage
```python
from datasets import load_dataset
dataset = load_dataset("Sharathhebbar24/openhermes", split="train")
```
|
results-sd-v1-5-sd-v2-1-if-v1-0-karlo/a4419c50 | ---
dataset_info:
features:
- name: result
dtype: string
- name: id
dtype: int64
splits:
- name: train
num_bytes: 188
num_examples: 10
download_size: 1339
dataset_size: 188
---
# Dataset Card for "a4419c50"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
ristow/test1 | ---
license: afl-3.0
---
|
Malvinan/wit_captions_37L_bloom_language_modeling | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
dataset_info:
features:
- name: language
dtype: string
- name: image_list
sequence: string
- name: caption
sequence: string
- name: input_token_ids
sequence:
sequence: int64
- name: output_token_ids
sequence:
sequence: int64
splits:
- name: train
num_bytes: 10910053832
num_examples: 613512
- name: validation
num_bytes: 88770971
num_examples: 5013
- name: test
num_bytes: 68728048
num_examples: 3883
download_size: 1580318818
dataset_size: 11067552851
---
# Dataset Card for "wit_captions_37L_bloom_language_modeling"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Falah/sumerian_prompts | ---
dataset_info:
features:
- name: prompts
dtype: string
splits:
- name: train
num_bytes: 229369
num_examples: 1000
download_size: 28574
dataset_size: 229369
---
# Dataset Card for "sumerian_prompts"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Yamei/Recommended_Proceeding | ---
dataset_info:
features:
- name: data
struct:
- name: proceeding
struct:
- name: id
dtype: string
- name: title
dtype: string
- name: acronym
dtype: string
- name: groupId
dtype: string
- name: volume
dtype: string
- name: displayVolume
dtype: string
- name: year
dtype: string
- name: __typename
dtype: string
- name: article
struct:
- name: id
dtype: string
- name: doi
dtype: string
- name: title
dtype: string
- name: normalizedTitle
dtype: string
- name: abstract
dtype: string
- name: abstracts
list:
- name: abstractType
dtype: string
- name: content
dtype: string
- name: __typename
dtype: string
- name: normalizedAbstract
dtype: string
- name: fno
dtype: string
- name: keywords
list: string
- name: authors
list:
- name: affiliation
dtype: string
- name: fullName
dtype: string
- name: givenName
dtype: string
- name: surname
dtype: string
- name: __typename
dtype: string
- name: idPrefix
dtype: string
- name: isOpenAccess
dtype: bool
- name: showRecommendedArticles
dtype: bool
- name: showBuyMe
dtype: bool
- name: hasPdf
dtype: bool
- name: pubDate
dtype: timestamp[s]
- name: pubType
dtype: string
- name: pages
dtype: string
- name: year
dtype: string
- name: issn
dtype: string
- name: isbn
dtype: string
- name: notes
dtype: string
- name: notesType
dtype: string
- name: __typename
dtype: string
- name: webExtras
list:
- name: id
dtype: string
- name: name
dtype: string
- name: size
dtype: string
- name: location
dtype: string
- name: __typename
dtype: string
- name: adjacentArticles
struct:
- name: previous
struct:
- name: fno
dtype: string
- name: articleId
dtype: string
- name: __typename
dtype: string
- name: next
struct:
- name: fno
dtype: string
- name: articleId
dtype: string
- name: __typename
dtype: string
- name: __typename
dtype: string
- name: recommendedArticles
list:
- name: id
dtype: string
- name: title
dtype: string
- name: doi
dtype: string
- name: abstractUrl
dtype: string
- name: parentPublication
struct:
- name: id
dtype: string
- name: title
dtype: string
- name: __typename
dtype: string
- name: __typename
dtype: string
- name: articleVideos
list:
- name: id
dtype: string
- name: videoExt
dtype: string
- name: videoType
struct:
- name: featured
dtype: bool
- name: recommended
dtype: bool
- name: sponsored
dtype: bool
- name: __typename
dtype: string
- name: article
struct:
- name: id
dtype: string
- name: fno
dtype: string
- name: issueNum
dtype: string
- name: pubType
dtype: string
- name: volume
dtype: string
- name: year
dtype: string
- name: idPrefix
dtype: string
- name: doi
dtype: string
- name: title
dtype: string
- name: __typename
dtype: string
- name: channel
struct:
- name: id
dtype: string
- name: title
dtype: string
- name: status
dtype: string
- name: featured
dtype: bool
- name: defaultVideoId
dtype: string
- name: category
struct:
- name: id
dtype: string
- name: title
dtype: string
- name: type
dtype: string
- name: __typename
dtype: string
- name: __typename
dtype: string
- name: year
dtype: string
- name: title
dtype: string
- name: description
dtype: string
- name: keywords
list:
- name: id
dtype: string
- name: title
dtype: string
- name: status
dtype: string
- name: __typename
dtype: string
- name: speakers
list:
- name: firstName
dtype: string
- name: lastName
dtype: string
- name: affiliation
dtype: string
- name: __typename
dtype: string
- name: created
dtype: timestamp[s]
- name: updated
dtype: timestamp[s]
- name: imageThumbnailUrl
dtype: string
- name: runningTime
dtype: string
- name: aspectRatio
dtype: string
- name: metrics
struct:
- name: views
dtype: string
- name: likes
dtype: string
- name: __typename
dtype: string
- name: notShowInVideoLib
dtype: bool
- name: __typename
dtype: string
splits:
- name: train
num_bytes: 154207098
num_examples: 21043
download_size: 62572749
dataset_size: 154207098
---
# Dataset Card for "Recommended_Proceeding"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Lukathelast/Arsovski | ---
license: afl-3.0
---
|
liuyanchen1015/MULTI_VALUE_sst2_were_was | ---
dataset_info:
features:
- name: sentence
dtype: string
- name: label
dtype: int64
- name: idx
dtype: int64
- name: score
dtype: int64
splits:
- name: dev
num_bytes: 400
num_examples: 3
- name: test
num_bytes: 2449
num_examples: 12
- name: train
num_bytes: 26678
num_examples: 221
download_size: 17163
dataset_size: 29527
---
# Dataset Card for "MULTI_VALUE_sst2_were_was"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
liuyanchen1015/MULTI_VALUE_wnli_a_ing | ---
dataset_info:
features:
- name: sentence1
dtype: string
- name: sentence2
dtype: string
- name: label
dtype: int64
- name: idx
dtype: int64
- name: value_score
dtype: int64
splits:
- name: dev
num_bytes: 3977
num_examples: 19
- name: test
num_bytes: 20885
num_examples: 74
- name: train
num_bytes: 37636
num_examples: 168
download_size: 28139
dataset_size: 62498
---
# Dataset Card for "MULTI_VALUE_wnli_a_ing"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Preference-Dissection/preference-dissection | ---
dataset_info:
features:
- name: query
dtype: string
- name: scenario_auto-j
dtype: string
- name: scenario_group
dtype: string
- name: response_1
struct:
- name: content
dtype: string
- name: model
dtype: string
- name: num_words
dtype: int64
- name: response_2
struct:
- name: content
dtype: string
- name: model
dtype: string
- name: num_words
dtype: int64
- name: gpt-4-turbo_reference
dtype: string
- name: clear intent
dtype: string
- name: explicitly express feelings
dtype: string
- name: explicit constraints
sequence: string
- name: explicit subjective stances
sequence: string
- name: explicit mistakes or biases
sequence: string
- name: preference_labels
struct:
- name: gpt-3.5-turbo-1106
dtype: string
- name: gpt-4-1106-preview
dtype: string
- name: human
dtype: string
- name: llama-2-13b
dtype: string
- name: llama-2-13b-chat
dtype: string
- name: llama-2-70b
dtype: string
- name: llama-2-70b-chat
dtype: string
- name: llama-2-7b
dtype: string
- name: llama-2-7b-chat
dtype: string
- name: mistral-7b
dtype: string
- name: mistral-7b-instruct-v0.1
dtype: string
- name: mistral-7b-instruct-v0.2
dtype: string
- name: mistral-8x7b
dtype: string
- name: mistral-8x7b-instruct-v0.1
dtype: string
- name: qwen-14b
dtype: string
- name: qwen-14b-chat
dtype: string
- name: qwen-72b
dtype: string
- name: qwen-72b-chat
dtype: string
- name: qwen-7b
dtype: string
- name: qwen-7b-chat
dtype: string
- name: tulu-2-dpo-13b
dtype: string
- name: tulu-2-dpo-70b
dtype: string
- name: tulu-2-dpo-7b
dtype: string
- name: vicuna-13b-v1.5
dtype: string
- name: vicuna-7b-v1.5
dtype: string
- name: wizardLM-13b-v1.2
dtype: string
- name: wizardLM-70b-v1.0
dtype: string
- name: yi-34b
dtype: string
- name: yi-34b-chat
dtype: string
- name: yi-6b
dtype: string
- name: yi-6b-chat
dtype: string
- name: zephyr-7b-alpha
dtype: string
- name: zephyr-7b-beta
dtype: string
- name: basic_response_1
struct:
- name: admit limitations or mistakes
dtype: int64
- name: authoritative tone
dtype: int64
- name: clear and understandable
dtype: int64
- name: complex word usage and sentence structure
dtype: int64
- name: friendly
dtype: int64
- name: funny and humorous
dtype: int64
- name: grammar, spelling, punctuation, and code-switching
dtype: int64
- name: harmlessness
dtype: int64
- name: information richness without considering inaccuracy
dtype: int64
- name: innovative and novel
dtype: int64
- name: interactive
dtype: int64
- name: metaphors, personification, similes, hyperboles, irony, parallelism
dtype: int64
- name: persuade user
dtype: int64
- name: polite
dtype: int64
- name: relevance without considering inaccuracy
dtype: int64
- name: repetitive
dtype: int64
- name: step by step solution
dtype: int64
- name: use of direct and explicit supporting materials
dtype: int64
- name: use of informal expressions
dtype: int64
- name: well formatted
dtype: int64
- name: basic_response_2
struct:
- name: admit limitations or mistakes
dtype: int64
- name: authoritative tone
dtype: int64
- name: clear and understandable
dtype: int64
- name: complex word usage and sentence structure
dtype: int64
- name: friendly
dtype: int64
- name: funny and humorous
dtype: int64
- name: grammar, spelling, punctuation, and code-switching
dtype: int64
- name: harmlessness
dtype: int64
- name: information richness without considering inaccuracy
dtype: int64
- name: innovative and novel
dtype: int64
- name: interactive
dtype: int64
- name: metaphors, personification, similes, hyperboles, irony, parallelism
dtype: int64
- name: persuade user
dtype: int64
- name: polite
dtype: int64
- name: relevance without considering inaccuracy
dtype: int64
- name: repetitive
dtype: int64
- name: step by step solution
dtype: int64
- name: use of direct and explicit supporting materials
dtype: int64
- name: use of informal expressions
dtype: int64
- name: well formatted
dtype: int64
- name: errors_response_1
struct:
- name: applicable or not
dtype: string
- name: errors
list:
- name: brief description
dtype: string
- name: severity
dtype: string
- name: type
dtype: string
- name: errors_response_2
struct:
- name: applicable or not
dtype: string
- name: errors
list:
- name: brief description
dtype: string
- name: severity
dtype: string
- name: type
dtype: string
- name: query-specific_response_1
struct:
- name: clarify user intent
dtype: int64
- name: correcting explicit mistakes or biases
sequence: string
- name: satisfying explicit constraints
sequence: string
- name: showing empathetic
dtype: int64
- name: supporting explicit subjective stances
sequence: string
- name: query-specific_response_2
struct:
- name: clarify user intent
dtype: int64
- name: correcting explicit mistakes or biases
sequence: string
- name: satisfying explicit constraints
sequence: string
- name: showing empathetic
dtype: int64
- name: supporting explicit subjective stances
sequence: string
splits:
- name: train
num_bytes: 27617371
num_examples: 5240
download_size: 13124269
dataset_size: 27617371
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
language:
- en
pretty_name: Preference Dissection
license: cc-by-nc-4.0
---
## Introduction
We release the annotated data used in [Dissecting Human and LLM Preferences](https://arxiv.org/abs/).
*Original Dataset* - The dataset is based on [lmsys/chatbot_arena_conversations](https://huggingface.co/datasets/lmsys/chatbot_arena_conversations), which contains 33K cleaned conversations with pairwise human preferences collected from 13K unique IP addresses on the [Chatbot Arena](https://lmsys.org/blog/2023-05-03-arena/) from April to June 2023.
*Filtering and Scenario-wise Sampling* - We filter out conversations that are not in English, that have "Tie" or "Both Bad" labels, and that are multi-turn. We first sample 400 samples with unsafe queries according to the OpenAI moderation API tags and the additional toxic tags in the original dataset, then apply [Auto-J's scenario classifier](https://huggingface.co/GAIR/autoj-scenario-classifier) to determine the scenario of each sample (we merge Auto-J's scenarios into 10 new ones). For the *Knowledge-aware* and *Others* scenarios, we pick 820 samples each; for the other scenarios, we pick 400 each. The total number is 5,240.
*Collecting Preferences* - Besides the human preference labels in this original dataset, we also collect the binary preference labels from 32 LLMs, including 2 proprietary LLMs and 30 open-source ones.
*Annotation on Defined Properties* - We define a set of 29 properties and annotate how each property is satisfied (via Likert-scale rating or property-specific annotation) in all responses ($5,240\times 2=10,480$). See our paper for more details on the defined properties.
## Dataset Overview
An example of the json format is as follows:
```json
{
"query": "...",
"scenario_auto-j": "...",
"scenario_group": "...",
"response_1": {
"content": "...",
"model": "...",
"num_words": "..."
},
"response_2": {...},
"gpt-4-turbo_reference": "...",
"clear intent": "Yes/No",
"explicitly express feelings": "Yes/No",
"explicit constraints": [
...
],
"explicit subjective stances": [
...
],
"explicit mistakes or biases": [
...
],
"preference_labels": {
"human": "response_1/response_2",
"gpt-4-turbo": "response_1/response_2",
...
},
"basic_response_1": {
"admit limitations or mistakes": 0/1/2/3,
"authoritative tone": 0/1/2/3,
...
},
"basic_response_2": {...},
"errors_response_1": {
"applicable or not": "applicable/not applicable",
"errors":[
{
"brief description": "...",
"severity": "severe/moderate/minor",
"type": "...",
},
...
]
},
"errors_response_2": {...},
"query-specific_response_1": {
"clarify user intent": ...,
"correcting explicit mistakes or biases": None,
"satisfying explicit constraints": [
...
],
"showing empathetic": [
...
],
"supporting explicit subjective stances": [
...
]
},
"query-specific_response_2": {...}
}
```
The following fields are basic information:
- **query**: The user query.
- **scenario_auto-j**: The scenario classified by Auto-J's classifier.
- **scenario_group**: One of the 10 new scenarios we merged from Auto-J's scenarios, including an *Unsafe Query* scenario.
- **response_1/response_2**: The content of a response:
  - **content**: The text content.
  - **model**: The model that generated this response.
  - **num_words**: The number of words in this response, determined by NLTK.
- **gpt-4-turbo_reference**: A reference response generated by GPT-4-Turbo.
The following fields are Query-Specific prerequisites. For the last three, the list may be empty if there are no constraints/stances/mistakes.
- **clear intent**: Whether the intent of the user is clearly expressed in the query, "Yes" or "No".
- **explicitly express feelings**: Whether the user clearly expresses his/her feelings or emotions in the query, "Yes" or "No".
- **explicit constraints**: A list containing all the explicit constraints in the query.
- **explicit subjective stances**: A list containing all the subjective stances in the query.
- **explicit mistakes or biases**: A list containing all the mistakes or biases in the query.
The following fields are the main body of the annotation.
- **preference_labels**: The preference label for each judge (human or an LLM) indicating which response is preferred in the pair, "response_1/response_2".
- **basic_response_1/basic_response_2**: The annotated ratings of the 20 basic properties (except *lengthy*) for the response.
  - **property_name**: 0/1/2/3
  - ...
- **errors_response_1/errors_response_2**: The detected errors of the response.
  - **applicable or not**: Whether GPT-4-Turbo finds it can reliably detect the errors in the response.
  - **errors**: A list containing the detected errors in the response.
    - **brief description**: A brief description of the error.
    - **severity**: How much the error affects the overall correctness of the response, "severe/moderate/minor".
    - **type**: The type of the error, "factual error/information contradiction to the query/math operation error/code generation error".
- **query-specific_response_1/query-specific_response_2**: The annotation results of the Query-Specific properties.
  - **clarify user intent**: If the user intent is not clear, rate how much the response helps clarify the intent, 0/1/2/3.
  - **showing empathetic**: If the user expresses feelings or emotions, rate how much the response shows empathy, 0/1/2/3.
  - **satisfying explicit constraints**: If there are explicit constraints in the query, rate how much the response satisfies each of them.
    - A list of "{description of constraint} | 0/1/2/3"
  - **correcting explicit mistakes or biases**: If there are mistakes or biases in the query, classify how the response corrects each of them.
    - A list of "{description of mistake} | Pointed out and corrected/Pointed out but not corrected/Corrected without being pointed out/Neither pointed out nor corrected"
  - **supporting explicit subjective stances**: If there are subjective stances in the query, classify how the response supports each of them.
    - A list of "{description of stance} | Strongly supported/Weakly supported/Neutral/Weakly opposed/Strongly opposed"
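As a small illustration of the **preference_labels** field, the sketch below computes how often the LLM judges agree with the human label for a single record. The record here is a hypothetical stub that mirrors the documented schema (only two LLM judges shown for brevity); real records carry labels for all 32 LLMs.

```python
# Hypothetical record mirroring the "preference_labels" schema above.
sample = {
    "preference_labels": {
        "human": "response_1",
        "gpt-4-turbo": "response_1",
        "llama-2-70b-chat": "response_2",
    },
}

def judge_agreement(record):
    """Fraction of LLM judges whose label matches the human label."""
    labels = record["preference_labels"]
    human = labels["human"]
    llm_votes = [v for k, v in labels.items() if k != "human"]
    return sum(v == human for v in llm_votes) / len(llm_votes)

print(judge_agreement(sample))  # 0.5 for this hypothetical record
```

Averaging this per-record agreement over all 5,240 samples gives a simple measure of human-LLM preference alignment.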
## Statistics
👇 Number of samples meeting the 5 query-specific prerequisites.
| Prerequisite | # | Prerequisite | # |
| ------------------------- | ----- | ---------------- | ---- |
| with explicit constraints | 1,418 | unclear intent | 459 |
| show subjective stances | 388 | express feelings | 121 |
| contain mistakes or bias | 401 | | |
👇 Mean score/count for each property in the collected data. *The average scores of the 5 query-specific properties are calculated only on samples where the queries meet the corresponding prerequisites.
| Property | Mean Score/Count | Property | Mean Score/Count |
| ---------------------------- | ---------------- | ---------------------------- | ---------------- |
| **Mean Score** | | | |
| harmless | 2.90 | persuasive | 0.27 |
| grammarly correct | 2.70 | step-by-step | 0.37 |
| friendly | 1.79 | use informal expressions | 0.04 |
| polite | 2.78 | clear | 2.54 |
| interactive | 0.22 | contain rich information | 1.74 |
| authoritative | 1.67 | novel | 0.47 |
| funny | 0.08 | relevant | 2.45 |
| use rhetorical devices | 0.16 | clarify intent* | 1.33 |
| complex word & sentence | 0.89 | show empathetic* | 1.48 |
| use supporting materials | 0.13 | satisfy constraints* | 2.01 |
| well formatted | 1.26 | support stances* | 2.28 |
| admit limits | 0.17 | correct mistakes* | 1.08 |
| **Mean Count** | | | |
| severe errors | 0.59 | minor errors | 0.23 |
| moderate errors | 0.61 | length | 164.52 |
👇 Property correlation in the annotated data.
<img src="./property_corr.PNG" alt="image-20240213145030747" style="zoom: 50%;" />
## Disclaimers and Terms
*This part is copied from the original dataset.*
- **This dataset contains conversations that may be considered unsafe, offensive, or upsetting.** It is not intended for training dialogue agents without applying appropriate filtering measures. We are not responsible for any outputs of the models trained on this dataset.
- Statements or opinions made in this dataset do not reflect the views of researchers or institutions involved in the data collection effort.
- Users of this data are responsible for ensuring its appropriate use, which includes abiding by any applicable laws and regulations.
- Users of this data should adhere to the terms of use for a specific model when using its direct outputs.
- Users of this data agree to not attempt to determine the identity of individuals in this dataset.
## License
Following the original dataset, this dataset is licensed under CC-BY-NC-4.0.
|
Atipico1/mrqa-test-final-set-v2 | ---
dataset_info:
features:
- name: subset
dtype: string
- name: qid
dtype: string
- name: question
dtype: string
- name: answers
sequence: string
- name: masked_query
dtype: string
- name: context
dtype: string
- name: answer_sent
dtype: string
- name: answer_in_context
sequence: string
- name: entity
dtype: string
- name: similar_entity
dtype: string
- name: clear_answer_sent
dtype: string
- name: vague_answer_sent
dtype: string
- name: adversary
dtype: string
- name: replace_count
dtype: int64
- name: adversarial_passage
dtype: string
- name: masked_answer_sent
dtype: string
- name: num_mask_token
dtype: int64
- name: entities
sequence: string
- name: gpt_adv_sent
dtype: string
- name: is_same
dtype: string
- name: gpt_adv_sent_passage
dtype: string
- name: gpt_passage
dtype: string
splits:
- name: train
num_bytes: 2275582
num_examples: 684
download_size: 1446127
dataset_size: 2275582
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
CVdatasets/ImageNet15_animals_unbalanced_augmented1 | ---
dataset_info:
features:
- name: labels
dtype:
class_label:
names:
'0': Italian greyhound
'1': coyote, prairie wolf, brush wolf, Canis latrans
'2': beagle
'3': Rottweiler
'4': hyena, hyaena
'5': Greater Swiss Mountain dog
'6': triceratops
'7': French bulldog
'8': red wolf, maned wolf, Canis rufus, Canis niger
'9': Egyptian cat
'10': Chihuahua
'11': Irish terrier
'12': tiger cat
'13': white wolf, Arctic wolf, Canis lupus tundrarum
'14': timber wolf, grey wolf, gray wolf, Canis lupus
- name: img
dtype: image
splits:
- name: validation
num_bytes: 60570468.125
num_examples: 1439
- name: train
num_bytes: 161485444.02117264
num_examples: 3681
download_size: 222111550
dataset_size: 222055912.14617264
---
# Dataset Card for "ImageNet15_animals_unbalanced_augmented1"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Isotonic/pii-masking-200k | ---
language:
- en
- fr
- de
- it
license: cc-by-nc-4.0
multilinguality:
- multilingual
size_categories:
- 100K<n<1M
source_datasets:
- original
task_categories:
- conversational
- text-classification
- token-classification
- table-question-answering
- question-answering
- zero-shot-classification
- summarization
- feature-extraction
- text-generation
- text2text-generation
pretty_name: Ai4Privacy PII200k Dataset
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: masked_text
dtype: string
- name: unmasked_text
dtype: string
- name: privacy_mask
dtype: string
- name: span_labels
dtype: string
- name: bio_labels
sequence: string
- name: tokenised_text
sequence: string
- name: language
dtype: string
splits:
- name: train
num_bytes: 315574161
num_examples: 209261
download_size: 0
dataset_size: 315574161
tags:
- legal
- business
- psychology
- privacy
---
# Purpose and Features
World's largest open source privacy dataset.
The purpose of the dataset is to train models to remove personally identifiable information (PII) from text, especially in the context of AI assistants and LLMs.
The example texts have **54 PII classes** (types of sensitive data), targeting **229 discussion subjects / use cases** split across business, education, psychology and legal fields, and 5 interaction styles (e.g. casual conversation, formal documents, emails, etc.).
Key facts:
- Size: 13.6m text tokens in ~209k examples with 649k PII tokens (see [summary.json](summary.json))
- 4 languages, more to come!
- English
- French
- German
- Italian
- Synthetic data generated using proprietary algorithms
- No privacy violations!
- Human-in-the-loop validated high quality dataset
# Getting started
Python:
```shell
pip install datasets
```
```python
from datasets import load_dataset
dataset = load_dataset("ai4privacy/pii-masking-200k", data_files=["*.jsonl"])
```
or
```python
from datasets import load_dataset
dataset = load_dataset("Isotonic/pii-masking-200k") # use "language" column
```
# Token distribution across PII classes
We have taken steps to balance the token distribution across PII classes covered by the dataset.
This graph shows the distribution of observations across the different PII classes in this release:

There is 1 class that is still overrepresented in the dataset: firstname.
We will further improve the balance with future dataset releases.
This is the token distribution excluding the FIRSTNAME class:

# Compatible Machine Learning Tasks:
- Token classification. Check out HuggingFace's [guide on token classification](https://huggingface.co/docs/transformers/tasks/token_classification).
- [ALBERT](https://huggingface.co/docs/transformers/model_doc/albert), [BERT](https://huggingface.co/docs/transformers/model_doc/bert), [BigBird](https://huggingface.co/docs/transformers/model_doc/big_bird), [BioGpt](https://huggingface.co/docs/transformers/model_doc/biogpt), [BLOOM](https://huggingface.co/docs/transformers/model_doc/bloom), [BROS](https://huggingface.co/docs/transformers/model_doc/bros), [CamemBERT](https://huggingface.co/docs/transformers/model_doc/camembert), [CANINE](https://huggingface.co/docs/transformers/model_doc/canine), [ConvBERT](https://huggingface.co/docs/transformers/model_doc/convbert), [Data2VecText](https://huggingface.co/docs/transformers/model_doc/data2vec-text), [DeBERTa](https://huggingface.co/docs/transformers/model_doc/deberta), [DeBERTa-v2](https://huggingface.co/docs/transformers/model_doc/deberta-v2), [DistilBERT](https://huggingface.co/docs/transformers/model_doc/distilbert), [ELECTRA](https://huggingface.co/docs/transformers/model_doc/electra), [ERNIE](https://huggingface.co/docs/transformers/model_doc/ernie), [ErnieM](https://huggingface.co/docs/transformers/model_doc/ernie_m), [ESM](https://huggingface.co/docs/transformers/model_doc/esm), [Falcon](https://huggingface.co/docs/transformers/model_doc/falcon), [FlauBERT](https://huggingface.co/docs/transformers/model_doc/flaubert), [FNet](https://huggingface.co/docs/transformers/model_doc/fnet), [Funnel Transformer](https://huggingface.co/docs/transformers/model_doc/funnel), [GPT-Sw3](https://huggingface.co/docs/transformers/model_doc/gpt-sw3), [OpenAI GPT-2](https://huggingface.co/docs/transformers/model_doc/gpt2), [GPTBigCode](https://huggingface.co/docs/transformers/model_doc/gpt_bigcode), [GPT Neo](https://huggingface.co/docs/transformers/model_doc/gpt_neo), [GPT NeoX](https://huggingface.co/docs/transformers/model_doc/gpt_neox), [I-BERT](https://huggingface.co/docs/transformers/model_doc/ibert), [LayoutLM](https://huggingface.co/docs/transformers/model_doc/layoutlm), 
[LayoutLMv2](https://huggingface.co/docs/transformers/model_doc/layoutlmv2), [LayoutLMv3](https://huggingface.co/docs/transformers/model_doc/layoutlmv3), [LiLT](https://huggingface.co/docs/transformers/model_doc/lilt), [Longformer](https://huggingface.co/docs/transformers/model_doc/longformer), [LUKE](https://huggingface.co/docs/transformers/model_doc/luke), [MarkupLM](https://huggingface.co/docs/transformers/model_doc/markuplm), [MEGA](https://huggingface.co/docs/transformers/model_doc/mega), [Megatron-BERT](https://huggingface.co/docs/transformers/model_doc/megatron-bert), [MobileBERT](https://huggingface.co/docs/transformers/model_doc/mobilebert), [MPNet](https://huggingface.co/docs/transformers/model_doc/mpnet), [MPT](https://huggingface.co/docs/transformers/model_doc/mpt), [MRA](https://huggingface.co/docs/transformers/model_doc/mra), [Nezha](https://huggingface.co/docs/transformers/model_doc/nezha), [Nyströmformer](https://huggingface.co/docs/transformers/model_doc/nystromformer), [QDQBert](https://huggingface.co/docs/transformers/model_doc/qdqbert), [RemBERT](https://huggingface.co/docs/transformers/model_doc/rembert), [RoBERTa](https://huggingface.co/docs/transformers/model_doc/roberta), [RoBERTa-PreLayerNorm](https://huggingface.co/docs/transformers/model_doc/roberta-prelayernorm), [RoCBert](https://huggingface.co/docs/transformers/model_doc/roc_bert), [RoFormer](https://huggingface.co/docs/transformers/model_doc/roformer), [SqueezeBERT](https://huggingface.co/docs/transformers/model_doc/squeezebert), [XLM](https://huggingface.co/docs/transformers/model_doc/xlm), [XLM-RoBERTa](https://huggingface.co/docs/transformers/model_doc/xlm-roberta), [XLM-RoBERTa-XL](https://huggingface.co/docs/transformers/model_doc/xlm-roberta-xl), [XLNet](https://huggingface.co/docs/transformers/model_doc/xlnet), [X-MOD](https://huggingface.co/docs/transformers/model_doc/xmod), [YOSO](https://huggingface.co/docs/transformers/model_doc/yoso)
- Text Generation: Mapping the unmasked_text to the masked_text or privacy_mask attributes. Check out HuggingFace's [guide to fine-tuning](https://huggingface.co/docs/transformers/v4.15.0/training)
- [T5 Family](https://huggingface.co/docs/transformers/model_doc/t5), [Llama2](https://huggingface.co/docs/transformers/main/model_doc/llama2)
# Information regarding the rows:
- Each row represents a json object with a natural language text that includes placeholders for PII (and could plausibly be written by a human to an AI assistant).
- Sample row:
- "masked_text" contains a PII free natural text
- "Product officially launching in [COUNTY_1]. Estimate profit of [CURRENCYSYMBOL_1][AMOUNT_1]. Expenses by [ACCOUNTNAME_1].",
- "unmasked_text" shows a natural sentence containing PII
- "Product officially launching in Washington County. Estimate profit of $488293.16. Expenses by Checking Account."
- "privacy_mask" indicates the mapping between the privacy token instances and the string within the natural text.*
- "{'[COUNTY_1]': 'Washington County', '[CURRENCYSYMBOL_1]': '$', '[AMOUNT_1]': '488293.16', '[ACCOUNTNAME_1]': 'Checking Account'}"
- "span_labels" is an array of arrays formatted in the following way [start, end, pii token instance].*
- "[[0, 32, 'O'], [32, 49, 'COUNTY_1'], [49, 70, 'O'], [70, 71, 'CURRENCYSYMBOL_1'], [71, 80, 'AMOUNT_1'], [80, 94, 'O'], [94, 110, 'ACCOUNTNAME_1'], [110, 111, 'O']]",
- "bio_labels" follows the commonplace "beginning"/"inside"/"outside" (BIO) notation marking where each private token starts ([original paper](https://arxiv.org/abs/cmp-lg/9505040))
- ["O", "O", "O", "O", "B-COUNTY", "I-COUNTY", "O", "O", "O", "O", "B-CURRENCYSYMBOL", "O", "O", "I-AMOUNT", "I-AMOUNT", "I-AMOUNT", "I-AMOUNT", "O", "O", "O", "B-ACCOUNTNAME", "I-ACCOUNTNAME", "O"],
- "tokenised_text" breaks down the unmasked sentence into tokens using Bert Family tokeniser to help fine-tune large language models.
- ["product", "officially", "launching", "in", "washington", "county", ".", "estimate", "profit", "of", "$", "48", "##8", "##29", "##3", ".", "16", ".", "expenses", "by", "checking", "account", "."]
*Note: nested objects are stored as strings to maximise compatibility between various software.
*Note: the bio_labels and tokenised_text have been created using [bert-base-multilingual-uncased](https://huggingface.co/bert-base-multilingual-uncased)
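As a sanity check on the row format, the sketch below rebuilds the masked text from "span_labels", using the sample row shown above: slices labeled "O" are kept verbatim, and every other slice is replaced by its placeholder token.

```python
# Sample row values from the card; spans are [start, end, label].
unmasked = ("Product officially launching in Washington County. "
            "Estimate profit of $488293.16. Expenses by Checking Account.")
span_labels = [
    [0, 32, "O"], [32, 49, "COUNTY_1"], [49, 70, "O"],
    [70, 71, "CURRENCYSYMBOL_1"], [71, 80, "AMOUNT_1"],
    [80, 94, "O"], [94, 110, "ACCOUNTNAME_1"], [110, 111, "O"],
]

def apply_spans(text, spans):
    """Replace every non-'O' span with its [LABEL] placeholder."""
    parts = []
    for start, end, label in spans:
        parts.append(text[start:end] if label == "O" else f"[{label}]")
    return "".join(parts)

masked = apply_spans(unmasked, span_labels)
print(masked)
# Product officially launching in [COUNTY_1]. Estimate profit of [CURRENCYSYMBOL_1][AMOUNT_1]. Expenses by [ACCOUNTNAME_1].
```

The output matches the row's "masked_text" value, which is a quick way to validate span offsets when preprocessing the dataset.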
# About Us:
At Ai4Privacy, we are committed to building the global seatbelt of the 21st century for Artificial Intelligence to help fight against potential risks of personal information being integrated into data pipelines.
Newsletter & updates: [www.Ai4Privacy.com](www.Ai4Privacy.com)
- Looking for ML engineers, developers, beta-testers, and human-in-the-loop validators (all languages)
- Integrations with already existing open source solutions
- Ask us a question on discord: [https://discord.gg/kxSbJrUQZF](https://discord.gg/kxSbJrUQZF)
# Roadmap and Future Development
- Carbon Neutral
- Benchmarking
- Better multilingual and especially localisation
- Extended integrations
- Continuously increase the training set
- Further optimisation to the model to reduce size and increase generalisability
- Next released major update is planned for the 14th of December 2023 (subscribe to newsletter for updates)
# Use Cases and Applications
**Chatbots**: Incorporating a PII masking model into chatbot systems can ensure the privacy and security of user conversations by automatically redacting sensitive information such as names, addresses, phone numbers, and email addresses.
**Customer Support Systems**: When interacting with customers through support tickets or live chats, masking PII can help protect sensitive customer data, enabling support agents to handle inquiries without the risk of exposing personal information.
**Email Filtering**: Email providers can utilize a PII masking model to automatically detect and redact PII from incoming and outgoing emails, reducing the chances of accidental disclosure of sensitive information.
**Data Anonymization**: Organizations dealing with large datasets containing PII, such as medical or financial records, can leverage a PII masking model to anonymize the data before sharing it for research, analysis, or collaboration purposes.
**Social Media Platforms**: Integrating PII masking capabilities into social media platforms can help users protect their personal information from unauthorized access, ensuring a safer online environment.
**Content Moderation**: PII masking can assist content moderation systems in automatically detecting and blurring or redacting sensitive information in user-generated content, preventing the accidental sharing of personal details.
**Online Forms**: Web applications that collect user data through online forms, such as registration forms or surveys, can employ a PII masking model to anonymize or mask the collected information in real-time, enhancing privacy and data protection.
**Collaborative Document Editing**: Collaboration platforms and document editing tools can use a PII masking model to automatically mask or redact sensitive information when multiple users are working on shared documents.
**Research and Data Sharing**: Researchers and institutions can leverage a PII masking model to ensure privacy and confidentiality when sharing datasets for collaboration, analysis, or publication purposes, reducing the risk of data breaches or identity theft.
**Content Generation**: Content generation systems, such as article generators or language models, can benefit from PII masking to automatically mask or generate fictional PII when creating sample texts or examples, safeguarding the privacy of individuals.
(...and whatever else your creative mind can think of)
# Support and Maintenance
AI4Privacy is a project affiliated with [AISuisse SA](https://www.aisuisse.com/). |
Efimov6886/autotrain-data-test_row2 | ---
task_categories:
- image-classification
---
# AutoTrain Dataset for project: test_row2
## Dataset Description
This dataset has been automatically processed by AutoTrain for project test_row2.
### Languages
The BCP-47 code for the dataset's language is unk.
## Dataset Structure
### Data Instances
A sample from this dataset looks as follows:
```json
[
{
"image": "<316x316 RGB PIL image>",
"target": 1
},
{
"image": "<316x316 RGB PIL image>",
"target": 3
}
]
```
### Dataset Fields
The dataset has the following fields (also called "features"):
```json
{
"image": "Image(decode=True, id=None)",
"target": "ClassLabel(num_classes=5, names=['animals', 'dance', 'food', 'sport', 'tech'], id=None)"
}
```
### Dataset Splits
This dataset is split into a train and validation split. The split sizes are as follows:
| Split name | Num samples |
| ------------ | ------------------- |
| train | 392 |
| valid | 101 |
|
kheopss/large_dataset_from_prompt2 | ---
dataset_info:
features:
- name: json_input
dtype: string
- name: titre
dtype: string
- name: prompt0
dtype: string
- name: prompt
dtype: string
- name: response
dtype: string
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
splits:
- name: train
num_bytes: 7414144
num_examples: 990
download_size: 2586117
dataset_size: 7414144
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
damerajee/khasi-essays | ---
license: apache-2.0
---
|
bjoernp/tagesschau_pretrain | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 100070385
num_examples: 21847
download_size: 59186736
dataset_size: 100070385
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Toy pretraining dataset
This is a toy pretraining dataset based on https://huggingface.co/datasets/bjoernp/tagesschau-2018-2023, used for testing with https://huggingface.co/bjoernp/micro-bitllama.
mriosqu/landing_pages_dataset | ---
dataset_info:
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 66571452.0
num_examples: 67
download_size: 64024938
dataset_size: 66571452.0
---
# Dataset Card for "landing_pages_dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
tanzuhuggingface/creditcardfraudtraining | ---
task_categories:
- feature-extraction
tags:
- fraud detection
- anomaly detection
- upsampling
pretty_name: credit_card_transactions_resampled.csv
size_categories:
- 1M<n<10M
--- |
bruraz/danmc | ---
license: openrail
---
|
yjernite/prof_report__wavymulder-Analog-Diffusion__multi__24 | ---
dataset_info:
features:
- name: cluster_id
dtype: int64
- name: cluster_size
dtype: int64
- name: img_ids
sequence: int64
- name: img_cluster_scores
sequence: float64
splits:
- name: accountant
num_bytes: 1864
num_examples: 11
- name: aerospace_engineer
num_bytes: 1888
num_examples: 12
- name: aide
num_bytes: 2008
num_examples: 17
- name: air_conditioning_installer
num_bytes: 1696
num_examples: 4
- name: architect
num_bytes: 1864
num_examples: 11
- name: artist
num_bytes: 1840
num_examples: 10
- name: author
num_bytes: 1792
num_examples: 8
- name: baker
num_bytes: 1888
num_examples: 12
- name: bartender
num_bytes: 1888
num_examples: 12
- name: bus_driver
num_bytes: 1912
num_examples: 13
- name: butcher
num_bytes: 1792
num_examples: 8
- name: career_counselor
num_bytes: 1816
num_examples: 9
- name: carpenter
num_bytes: 1720
num_examples: 5
- name: carpet_installer
num_bytes: 1720
num_examples: 5
- name: cashier
num_bytes: 1792
num_examples: 8
- name: ceo
num_bytes: 1888
num_examples: 12
- name: childcare_worker
num_bytes: 1864
num_examples: 11
- name: civil_engineer
num_bytes: 1840
num_examples: 10
- name: claims_appraiser
num_bytes: 1720
num_examples: 5
- name: cleaner
num_bytes: 1864
num_examples: 11
- name: clergy
num_bytes: 1936
num_examples: 14
- name: clerk
num_bytes: 2104
num_examples: 21
- name: coach
num_bytes: 1840
num_examples: 10
- name: community_manager
num_bytes: 1840
num_examples: 10
- name: compliance_officer
num_bytes: 1912
num_examples: 13
- name: computer_programmer
num_bytes: 1840
num_examples: 10
- name: computer_support_specialist
num_bytes: 1888
num_examples: 12
- name: computer_systems_analyst
num_bytes: 1840
num_examples: 10
- name: construction_worker
num_bytes: 1744
num_examples: 6
- name: cook
num_bytes: 1864
num_examples: 11
- name: correctional_officer
num_bytes: 1816
num_examples: 9
- name: courier
num_bytes: 1960
num_examples: 15
- name: credit_counselor
num_bytes: 1816
num_examples: 9
- name: customer_service_representative
num_bytes: 1768
num_examples: 7
- name: data_entry_keyer
num_bytes: 1840
num_examples: 10
- name: dental_assistant
num_bytes: 1720
num_examples: 5
- name: dental_hygienist
num_bytes: 1768
num_examples: 7
- name: dentist
num_bytes: 1864
num_examples: 11
- name: designer
num_bytes: 1840
num_examples: 10
- name: detective
num_bytes: 1912
num_examples: 13
- name: director
num_bytes: 1864
num_examples: 11
- name: dishwasher
num_bytes: 1936
num_examples: 14
- name: dispatcher
num_bytes: 1864
num_examples: 11
- name: doctor
num_bytes: 1912
num_examples: 13
- name: drywall_installer
num_bytes: 1696
num_examples: 4
- name: electrical_engineer
num_bytes: 1888
num_examples: 12
- name: electrician
num_bytes: 1768
num_examples: 7
- name: engineer
num_bytes: 1840
num_examples: 10
- name: event_planner
num_bytes: 1720
num_examples: 5
- name: executive_assistant
num_bytes: 1792
num_examples: 8
- name: facilities_manager
num_bytes: 1840
num_examples: 10
- name: farmer
num_bytes: 1792
num_examples: 8
- name: fast_food_worker
num_bytes: 1912
num_examples: 13
- name: file_clerk
num_bytes: 1912
num_examples: 13
- name: financial_advisor
num_bytes: 1720
num_examples: 5
- name: financial_analyst
num_bytes: 1840
num_examples: 10
- name: financial_manager
num_bytes: 1864
num_examples: 11
- name: firefighter
num_bytes: 1720
num_examples: 5
- name: fitness_instructor
num_bytes: 1792
num_examples: 8
- name: graphic_designer
num_bytes: 1840
num_examples: 10
- name: groundskeeper
num_bytes: 1720
num_examples: 5
- name: hairdresser
num_bytes: 1864
num_examples: 11
- name: head_cook
num_bytes: 1816
num_examples: 9
- name: health_technician
num_bytes: 1888
num_examples: 12
- name: industrial_engineer
num_bytes: 1792
num_examples: 8
- name: insurance_agent
num_bytes: 1912
num_examples: 13
- name: interior_designer
num_bytes: 1792
num_examples: 8
- name: interviewer
num_bytes: 1888
num_examples: 12
- name: inventory_clerk
num_bytes: 1936
num_examples: 14
- name: it_specialist
num_bytes: 1720
num_examples: 5
- name: jailer
num_bytes: 1912
num_examples: 13
- name: janitor
num_bytes: 1912
num_examples: 13
- name: laboratory_technician
num_bytes: 1936
num_examples: 14
- name: language_pathologist
num_bytes: 1888
num_examples: 12
- name: lawyer
num_bytes: 1912
num_examples: 13
- name: librarian
num_bytes: 1792
num_examples: 8
- name: logistician
num_bytes: 1912
num_examples: 13
- name: machinery_mechanic
num_bytes: 1720
num_examples: 5
- name: machinist
num_bytes: 1816
num_examples: 9
- name: maid
num_bytes: 1912
num_examples: 13
- name: manager
num_bytes: 1888
num_examples: 12
- name: manicurist
num_bytes: 1840
num_examples: 10
- name: market_research_analyst
num_bytes: 1816
num_examples: 9
- name: marketing_manager
num_bytes: 1816
num_examples: 9
- name: massage_therapist
num_bytes: 1816
num_examples: 9
- name: mechanic
num_bytes: 1816
num_examples: 9
- name: mechanical_engineer
num_bytes: 1840
num_examples: 10
- name: medical_records_specialist
num_bytes: 1840
num_examples: 10
- name: mental_health_counselor
num_bytes: 1816
num_examples: 9
- name: metal_worker
num_bytes: 1792
num_examples: 8
- name: mover
num_bytes: 1936
num_examples: 14
- name: musician
num_bytes: 1960
num_examples: 15
- name: network_administrator
num_bytes: 1696
num_examples: 4
- name: nurse
num_bytes: 1840
num_examples: 10
- name: nursing_assistant
num_bytes: 1768
num_examples: 7
- name: nutritionist
num_bytes: 1720
num_examples: 5
- name: occupational_therapist
num_bytes: 1840
num_examples: 10
- name: office_clerk
num_bytes: 1888
num_examples: 12
- name: office_worker
num_bytes: 1840
num_examples: 10
- name: painter
num_bytes: 1888
num_examples: 12
- name: paralegal
num_bytes: 1936
num_examples: 14
- name: payroll_clerk
num_bytes: 1864
num_examples: 11
- name: pharmacist
num_bytes: 1864
num_examples: 11
- name: pharmacy_technician
num_bytes: 1744
num_examples: 6
- name: photographer
num_bytes: 1936
num_examples: 14
- name: physical_therapist
num_bytes: 1840
num_examples: 10
- name: pilot
num_bytes: 1960
num_examples: 15
- name: plane_mechanic
num_bytes: 1840
num_examples: 10
- name: plumber
num_bytes: 1768
num_examples: 7
- name: police_officer
num_bytes: 1792
num_examples: 8
- name: postal_worker
num_bytes: 1936
num_examples: 14
- name: printing_press_operator
num_bytes: 1888
num_examples: 12
- name: producer
num_bytes: 1888
num_examples: 12
- name: psychologist
num_bytes: 1864
num_examples: 11
- name: public_relations_specialist
num_bytes: 1792
num_examples: 8
- name: purchasing_agent
num_bytes: 1936
num_examples: 14
- name: radiologic_technician
num_bytes: 1888
num_examples: 12
- name: real_estate_broker
num_bytes: 1744
num_examples: 6
- name: receptionist
num_bytes: 1720
num_examples: 5
- name: repair_worker
num_bytes: 1816
num_examples: 9
- name: roofer
num_bytes: 1744
num_examples: 6
- name: sales_manager
num_bytes: 1768
num_examples: 7
- name: salesperson
num_bytes: 1840
num_examples: 10
- name: school_bus_driver
num_bytes: 1984
num_examples: 16
- name: scientist
num_bytes: 1912
num_examples: 13
- name: security_guard
num_bytes: 1720
num_examples: 5
- name: sheet_metal_worker
num_bytes: 1792
num_examples: 8
- name: singer
num_bytes: 1912
num_examples: 13
- name: social_assistant
num_bytes: 2008
num_examples: 17
- name: social_worker
num_bytes: 1912
num_examples: 13
- name: software_developer
num_bytes: 1768
num_examples: 7
- name: stocker
num_bytes: 1912
num_examples: 13
- name: supervisor
num_bytes: 1936
num_examples: 14
- name: taxi_driver
num_bytes: 1864
num_examples: 11
- name: teacher
num_bytes: 2032
num_examples: 18
- name: teaching_assistant
num_bytes: 1840
num_examples: 10
- name: teller
num_bytes: 1960
num_examples: 15
- name: therapist
num_bytes: 1816
num_examples: 9
- name: tractor_operator
num_bytes: 1744
num_examples: 6
- name: truck_driver
num_bytes: 1792
num_examples: 8
- name: tutor
num_bytes: 1936
num_examples: 14
- name: underwriter
num_bytes: 1840
num_examples: 10
- name: veterinarian
num_bytes: 1792
num_examples: 8
- name: welder
num_bytes: 1816
num_examples: 9
- name: wholesale_buyer
num_bytes: 1840
num_examples: 10
- name: writer
num_bytes: 1888
num_examples: 12
download_size: 638852
dataset_size: 269360
---
# Dataset Card for "prof_report__wavymulder-Analog-Diffusion__multi__24"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
AdapterOcean/med_alpaca_standardized_cluster_97_std | ---
dataset_info:
features:
- name: message
dtype: string
- name: message_type
dtype: string
- name: message_id
dtype: int64
- name: conversation_id
dtype: int64
- name: cluster
dtype: float64
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 9037858
num_examples: 20939
download_size: 3142297
dataset_size: 9037858
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "med_alpaca_standardized_cluster_97_std"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
bhpardo/ema_en | ---
dataset_info:
features:
- name: english
dtype: string
splits:
- name: train
num_bytes: 436742.4575850489
num_examples: 5479
- name: test
num_bytes: 109205.5424149511
num_examples: 1370
download_size: 340686
dataset_size: 545948.0
---
# Dataset Card for "ema_en"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
jhguighukjghkj/ulteriordatasettest | ---
license: mit
---
|
nisaar/Indian_Const_Articles_LLAMA2_Format | ---
license: apache-2.0
---
|
mboth/waermeVersorgen-200-undersampled | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
- split: valid
path: data/valid-*
dataset_info:
features:
- name: Datatype
dtype: string
- name: Beschreibung
dtype: string
- name: Name
dtype: string
- name: Unit
dtype: string
- name: text
dtype: string
- name: Grundfunktion
dtype: string
- name: label
dtype:
class_label:
names:
'0': Beziehen
'1': Erzeugen
'2': Speichern
'3': Verteilen
splits:
- name: train
num_bytes: 144390.03494148818
num_examples: 733
- name: test
num_bytes: 447086
num_examples: 2265
- name: valid
num_bytes: 447086
num_examples: 2265
download_size: 374039
dataset_size: 1038562.0349414882
---
# Dataset Card for "waermeVersorgen-200-undersampled"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
DanielSongShen/rizom-cats-vs-dogs-large-no-image_latents_hidden_states | ---
dataset_info:
features:
- name: image
dtype: image
- name: labels
dtype:
class_label:
names:
'0': cat
'1': dog
- name: rizom_latents
sequence:
sequence: float32
splits:
- name: train
num_bytes: 7422710965.79
num_examples: 23410
download_size: 7611677178
dataset_size: 7422710965.79
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
autoevaluate/autoeval-eval-jeffdshen__redefine_math_test0-jeffdshen__redefine_math-58f952-1666158902 | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- jeffdshen/redefine_math_test0
eval_info:
task: text_zero_shot_classification
model: facebook/opt-13b
metrics: []
dataset_name: jeffdshen/redefine_math_test0
dataset_config: jeffdshen--redefine_math_test0
dataset_split: train
col_mapping:
text: prompt
classes: classes
target: answer_index
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: facebook/opt-13b
* Dataset: jeffdshen/redefine_math_test0
* Config: jeffdshen--redefine_math_test0
* Split: train
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@jeffdshen](https://huggingface.co/jeffdshen) for evaluating this model. |
bigbio/scicite |
---
language:
- en
bigbio_language:
- English
license: unknown
multilinguality: monolingual
bigbio_license_shortname: UNKNOWN
pretty_name: SciCite
homepage: https://allenai.org/data/scicite
bigbio_pubmed: False
bigbio_public: True
bigbio_tasks:
- TEXT_CLASSIFICATION
---
# Dataset Card for SciCite
## Dataset Description
- **Homepage:** https://allenai.org/data/scicite
- **Pubmed:** False
- **Public:** True
- **Tasks:** TXTCLASS
SciCite is a dataset of 11K manually annotated citation intents based on
citation context in the computer science and biomedical domains.
## Citation Information
```
@inproceedings{cohan:naacl19,
author = {Arman Cohan and Waleed Ammar and Madeleine van Zuylen and Field Cady},
title = {Structural Scaffolds for Citation Intent Classification in Scientific Publications},
booktitle = {Conference of the North American Chapter of the Association for Computational Linguistics},
year = {2019},
url = {https://aclanthology.org/N19-1361/},
doi = {10.18653/v1/N19-1361},
}
```
|
irds/beir_scifact | ---
pretty_name: '`beir/scifact`'
viewer: false
source_datasets: []
task_categories:
- text-retrieval
---
# Dataset Card for `beir/scifact`
The `beir/scifact` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package.
For more information about the dataset, see the [documentation](https://ir-datasets.com/beir#beir/scifact).
# Data
This dataset provides:
- `docs` (documents, i.e., the corpus); count=5,183
- `queries` (i.e., topics); count=1,109
This dataset is used by: [`beir_scifact_test`](https://huggingface.co/datasets/irds/beir_scifact_test), [`beir_scifact_train`](https://huggingface.co/datasets/irds/beir_scifact_train)
## Usage
```python
from datasets import load_dataset
docs = load_dataset('irds/beir_scifact', 'docs')
for record in docs:
record # {'doc_id': ..., 'text': ..., 'title': ...}
queries = load_dataset('irds/beir_scifact', 'queries')
for record in queries:
record # {'query_id': ..., 'text': ...}
```
Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the
data in 🤗 Dataset format.
## Citation Information
```
@inproceedings{Wadden2020Scifact,
title = "Fact or Fiction: Verifying Scientific Claims",
author = "Wadden, David and
Lin, Shanchuan and
Lo, Kyle and
Wang, Lucy Lu and
van Zuylen, Madeleine and
Cohan, Arman and
Hajishirzi, Hannaneh",
booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
month = nov,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/2020.emnlp-main.609",
doi = "10.18653/v1/2020.emnlp-main.609",
pages = "7534--7550"
}
@article{Thakur2021Beir,
title = "BEIR: A Heterogenous Benchmark for Zero-shot Evaluation of Information Retrieval Models",
author = "Thakur, Nandan and Reimers, Nils and Rücklé, Andreas and Srivastava, Abhishek and Gurevych, Iryna",
journal= "arXiv preprint arXiv:2104.08663",
month = "4",
year = "2021",
url = "https://arxiv.org/abs/2104.08663",
}
```
|
CyberHarem/aulick_azurlane | ---
license: mit
task_categories:
- text-to-image
tags:
- art
- not-for-all-audiences
size_categories:
- n<1K
---
# Dataset of aulick/オーリック/奥利克 (Azur Lane)
This is the dataset of aulick/オーリック/奥利克 (Azur Lane), containing 10 images and their tags.
The core tags of this character are `hair_ornament, hairclip, short_hair, hat, beret, bangs, green_eyes, hair_between_eyes, red_hair, sailor_hat, white_headwear`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:----------|:-----------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 10 | 7.40 MiB | [Download](https://huggingface.co/datasets/CyberHarem/aulick_azurlane/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 10 | 5.05 MiB | [Download](https://huggingface.co/datasets/CyberHarem/aulick_azurlane/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 20 | 9.35 MiB | [Download](https://huggingface.co/datasets/CyberHarem/aulick_azurlane/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 10 | 7.07 MiB | [Download](https://huggingface.co/datasets/CyberHarem/aulick_azurlane/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 20 | 12.96 MiB | [Download](https://huggingface.co/datasets/CyberHarem/aulick_azurlane/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for loading with [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html). If you need it, just run the following code:
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/aulick_azurlane',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
## List of Clusters
List of tag clustering results; some outfits may be mined here.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 10 |  |  |  |  |  | 1girl, blush, solo, open_mouth, sailor_collar, looking_at_viewer, sailor_dress, white_gloves, yellow_neckerchief, :d, simple_background, sleeveless_dress, white_background, white_thighhighs, blue_dress, feathers, frilled_dress, hat_feather, holding |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | blush | solo | open_mouth | sailor_collar | looking_at_viewer | sailor_dress | white_gloves | yellow_neckerchief | :d | simple_background | sleeveless_dress | white_background | white_thighhighs | blue_dress | feathers | frilled_dress | hat_feather | holding |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:--------|:-------|:-------------|:----------------|:--------------------|:---------------|:---------------|:---------------------|:-----|:--------------------|:-------------------|:-------------------|:-------------------|:-------------|:-----------|:----------------|:--------------|:----------|
| 0 | 10 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X |
|
Mrvortexgamer/Models | ---
license: openrail
---
|
autoevaluate/autoeval-staging-eval-project-f87a1758-7384800 | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- banking77
eval_info:
task: multi_class_classification
model: philschmid/DistilBERT-Banking77
dataset_name: banking77
dataset_config: default
dataset_split: test
col_mapping:
text: text
target: label
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Multi-class Text Classification
* Model: philschmid/DistilBERT-Banking77
* Dataset: banking77
To run new evaluation jobs, visit Hugging Face's [automatic evaluation service](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model. |
liuyanchen1015/VALUE_qqp_uninflect | ---
dataset_info:
features:
- name: question1
dtype: string
- name: question2
dtype: string
- name: label
dtype: int64
- name: idx
dtype: int64
- name: value_score
dtype: int64
splits:
- name: dev
num_bytes: 1394360
num_examples: 8190
- name: test
num_bytes: 13656226
num_examples: 80557
- name: train
num_bytes: 12711973
num_examples: 74245
download_size: 17684251
dataset_size: 27762559
---
# Dataset Card for "VALUE_qqp_uninflect"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
AlbertY123/en-la | ---
license: mit
---
|
Multimodal-Fatima/descriptors-text-davinci-003 | ---
dataset_info:
features:
- name: vocab
dtype: string
- name: descriptions
sequence: string
- name: prompt_descriptions
sequence: string
splits:
- name: food101
num_bytes: 58525
num_examples: 101
- name: cifar100
num_bytes: 54081
num_examples: 100
- name: visualgenome
num_bytes: 1092697
num_examples: 1913
- name: dtd
num_bytes: 25204
num_examples: 47
- name: oxfordflowers
num_bytes: 58560
num_examples: 102
- name: oxfordpets
num_bytes: 22322
num_examples: 37
- name: sun397
num_bytes: 243017
num_examples: 362
- name: fgvc
num_bytes: 74126
num_examples: 100
- name: imagenet21k
num_bytes: 604897
num_examples: 998
- name: birdsnap
num_bytes: 322488
num_examples: 500
- name: caltech101
num_bytes: 56880
num_examples: 102
- name: coco
num_bytes: 45186
num_examples: 80
- name: lvis
num_bytes: 679195
num_examples: 1198
- name: stanfordcars
num_bytes: 157786
num_examples: 196
- name: full
num_bytes: 3000578
num_examples: 4951
download_size: 3257945
dataset_size: 6495542
---
# Dataset Card for "descriptors-text-davinci-003"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
kamilakesbi/cv_for_spd_ja_2k_rayleigh | ---
dataset_info:
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: speakers
sequence: string
- name: timestamps_start
sequence: float64
- name: timestamps_end
sequence: float64
splits:
- name: train
num_bytes: 1820668248.0
num_examples: 1216
- name: validation
num_bytes: 226382468.0
num_examples: 168
- name: test
num_bytes: 242628462.0
num_examples: 168
download_size: 1751753494
dataset_size: 2289679178.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
---
|
gagan3012/dolphin-retrival-TyDiQA-QA-corpus | ---
dataset_info:
features:
- name: _id
dtype: string
- name: text
dtype: string
splits:
- name: corpus
num_bytes: 3667462
num_examples: 4488
- name: queries
num_bytes: 503291
num_examples: 5077
download_size: 2257854
dataset_size: 4170753
configs:
- config_name: default
data_files:
- split: corpus
path: data/corpus-*
- split: queries
path: data/queries-*
---
|
team-bay/data-science-qa | ---
license: apache-2.0
---
|
ahishamm/PH2_db_enhanced_balanced | ---
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': benign
'1': malignant
splits:
- name: train
num_bytes: 309636115.0
num_examples: 320
- name: test
num_bytes: 61502548.0
num_examples: 64
download_size: 371161759
dataset_size: 371138663.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
|
tr416/dataset_20231007_024249 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
splits:
- name: train
num_bytes: 762696.0
num_examples: 297
- name: test
num_bytes: 7704.0
num_examples: 3
download_size: 73878
dataset_size: 770400.0
---
# Dataset Card for "dataset_20231007_024249"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
alvations/esci-data-task2 | ---
dataset_info:
features:
- name: example_id
dtype: int64
- name: query
dtype: string
- name: query_id
dtype: int64
- name: product_id
dtype: string
- name: product_locale
dtype: string
- name: esci_label
dtype: string
- name: small_version
dtype: int64
- name: large_version
dtype: int64
- name: split
dtype: string
- name: product_title
dtype: string
- name: product_description
dtype: string
- name: product_bullet_point
dtype: string
- name: product_brand
dtype: string
- name: product_color
dtype: string
- name: gain
dtype: float64
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 2603008323
num_examples: 1977767
- name: dev
num_bytes: 7386427
num_examples: 5505
- name: test
num_bytes: 843102586
num_examples: 638016
download_size: 2214316591
dataset_size: 3453497336
---
# Dataset Card for "esci-data-task2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
surabhiMV/qrcode_new | ---
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype: image
splits:
- name: train
num_bytes: 18225795.0
num_examples: 502
download_size: 17273080
dataset_size: 18225795.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "qrcode_new"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
rizerphe/glaive-function-calling-v2-zephyr | ---
license: cc-by-sa-4.0
task_categories:
- text-generation
- conversational
language:
- en
size_categories:
- 100K<n<1M
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 225637684
num_examples: 101469
download_size: 94820543
dataset_size: 225637684
---
# Glaive's Function Calling V2 for Zephyr-7B-alpha
[Glaive's Function Calling V2 dataset](https://huggingface.co/datasets/glaiveai/glaive-function-calling-v2), formatted according to the chat schema Zephyr uses, with all the data that I wasn't able to automatically convert removed.
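A minimal sketch of how a conversation might be serialized into this schema (an illustration only, not the actual conversion script used to build this dataset):

```python
# Illustration only: serialize (role, content) turns into a Zephyr-style
# prompt string. Not the actual script used to convert this dataset.
import json


def format_turn(role, content):
    # Non-string content (e.g. a function definition or call payload)
    # is rendered as indented JSON, as in the example prompt below.
    if not isinstance(content, str):
        content = json.dumps(content, indent=4)
    return f"<|{role}|>\n{content}</s>\n"


def format_conversation(turns):
    return "".join(format_turn(role, content) for role, content in turns)


prompt = format_conversation([
    ("user", "I need a new password. Can you generate one for me?"),
    ("assistant", "Of course! How long would you like it to be?"),
])
print(prompt)
```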
It adds three new roles: `definition`, `function`, and `call`. Here's an example prompt:
```
<|definition|>
{
"name": "generate_password",
"description": "Generate a random password with specified criteria",
"parameters": {
"type": "object",
"properties": {
"length": {
"type": "integer",
"description": "The length of the password"
},
"include_numbers": {
"type": "boolean",
"description": "Include numbers in the password"
},
"include_special_characters": {
"type": "boolean",
"description": "Include special characters in the password"
}
},
"required": [
"length"
]
}
}</s>
<|user|>
I need a new password. Can you generate one for me?</s>
<|assistant|>
Of course! How long would you like your password to be? And do you want it to include numbers and special characters?</s>
<|user|>
I want it to be 12 characters long and yes, it should include both numbers and special characters.</s>
<|call|>
{
"length": 12,
"include_numbers": true,
"include_special_characters": true
}</s>
<|function|>
{"password": "4#7gB6&9L1!0"}</s>
<|assistant|>
Here is your new password: 4#7gB6&9L1!0. Please make sure to save it in a secure place.</s>
``` |
heliosprime/twitter_dataset_1713138675 | ---
dataset_info:
features:
- name: id
dtype: string
- name: tweet_content
dtype: string
- name: user_name
dtype: string
- name: user_id
dtype: string
- name: created_at
dtype: string
- name: url
dtype: string
- name: favourite_count
dtype: int64
- name: scraped_at
dtype: string
- name: image_urls
dtype: string
splits:
- name: train
num_bytes: 186948
num_examples: 510
download_size: 116412
dataset_size: 186948
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "twitter_dataset_1713138675"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
nicoy/zhizunbao | ---
license: cc
---
|
lucasbiagettia/borges_plain_text_dataset | ---
license: apache-2.0
language:
- es
---
# Dataset: Borges in plain text
The goal of this repository is to build a dataset of the great Argentine author that can be used for training language models.
I initially started from books in EPUB format, and in Spanish only.
# Folders
Initially I set up three folders:
## Epub
Books in this format.
## Epub_a_txt
Books converted with the simple script available at
https://github.com/lucasbiagettia/epub2txt
## txt_limpios
By hand, I have removed editorial and biographical references, as well as references to other resources.
The criterion is highly debatable.
# Next steps
Establish a criterion for "cleaning" the txt files and try to automate it. It would be worth evaluating whether it makes sense to tag each book, and each story within it, and whether it makes sense to tag the texts by genre.
# Any collaboration will be greatly appreciated. |
KatMarie/eu_test2 | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 606653.618232792
num_examples: 10331
download_size: 416014
dataset_size: 606653.618232792
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "eu_test2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
bhjhk/pakenanya66 | ---
license: cc-by-3.0
---
|
alisson40889/ci | ---
license: openrail
---
|
alirzb/SeizureClassifier_Wav2Vec_U_43828667_on_UnBal_43845590 | ---
dataset_info:
features:
- name: array
sequence: float64
- name: label_true
dtype: int64
- name: label_pred
dtype: int64
- name: id
dtype: string
- name: ws
dtype: image
splits:
- name: train
num_bytes: 4304681.0
num_examples: 9
download_size: 1707867
dataset_size: 4304681.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
FlashSombrio/hermio | ---
license: openrail
---
|
CyberHarem/kursk_azurlane | ---
license: mit
task_categories:
- text-to-image
tags:
- art
- not-for-all-audiences
size_categories:
- n<1K
---
# Dataset of kursk/クルスク/库尔斯克 (Azur Lane)
This is the dataset of kursk/クルスク/库尔斯克 (Azur Lane), containing 27 images and their tags.
The core tags of this character are `breasts, long_hair, red_eyes, large_breasts, bangs, very_long_hair, hair_between_eyes, white_hair, grey_hair, multicolored_hair, horns, streaked_hair`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:----------|:----------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 27 | 50.84 MiB | [Download](https://huggingface.co/datasets/CyberHarem/kursk_azurlane/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 27 | 24.95 MiB | [Download](https://huggingface.co/datasets/CyberHarem/kursk_azurlane/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 66 | 51.81 MiB | [Download](https://huggingface.co/datasets/CyberHarem/kursk_azurlane/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 27 | 41.79 MiB | [Download](https://huggingface.co/datasets/CyberHarem/kursk_azurlane/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 66 | 78.19 MiB | [Download](https://huggingface.co/datasets/CyberHarem/kursk_azurlane/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for loading with [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html). If you need it, just run the following code:
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/kursk_azurlane',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
## List of Clusters
List of tag clustering results; some outfits may be mined here.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 13 |  |  |  |  |  | 1girl, looking_at_viewer, blush, solo, bare_shoulders, cleavage, collarbone, naked_towel, thighs, onsen, sitting, smile, water, closed_mouth, hair_intakes, holding_cup |
| 1 | 8 |  |  |  |  |  | cleavage, looking_at_viewer, black_necktie, necktie_between_breasts, 1girl, solo, black_gloves, closed_mouth, thigh_strap, thighhighs, white_coat, white_dress, bird, fur-trimmed_coat, simple_background, standing |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | looking_at_viewer | blush | solo | bare_shoulders | cleavage | collarbone | naked_towel | thighs | onsen | sitting | smile | water | closed_mouth | hair_intakes | holding_cup | black_necktie | necktie_between_breasts | black_gloves | thigh_strap | thighhighs | white_coat | white_dress | bird | fur-trimmed_coat | simple_background | standing |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:--------------------|:--------|:-------|:-----------------|:-----------|:-------------|:--------------|:---------|:--------|:----------|:--------|:--------|:---------------|:---------------|:--------------|:----------------|:--------------------------|:---------------|:--------------|:-------------|:-------------|:--------------|:-------|:-------------------|:--------------------|:-----------|
| 0 | 13 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | |
| 1 | 8 |  |  |  |  |  | X | X | | X | | X | | | | | | | | X | | | X | X | X | X | X | X | X | X | X | X | X |
|
JoBeer/eclassCorpus | ---
dataset_info:
features:
- name: did
dtype: int64
- name: query
dtype: string
- name: name
dtype: string
- name: datatype
dtype: string
- name: unit
dtype: string
- name: IRDI
dtype: string
- name: metalabel
dtype: int64
splits:
- name: train
num_bytes: 137123
num_examples: 672
download_size: 48203
dataset_size: 137123
---
# Dataset Card for "eclassCorpus"
This dataset consists of names and descriptions of ECLASS-standard pump properties. It can be used to evaluate models on the task of matching paraphrases to these properties based on their semantics. |
Pclanglais/Sample-OCR-Correction | ---
license: cc0-1.0
language:
- en
---
This dataset is an initial demo of synthetic post-OCR correction/rewriting with OCRonos on 7,800 newspaper pages from *Chronicling America*.
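One quick way to inspect the corrections is to diff the `text` and `corrected_text` columns row by row; a toy sketch (the strings below are invented for illustration, not actual dataset rows):

```python
# Toy sketch of diffing an OCR'd string against its corrected version.
# The row below is invented for illustration; real rows come from the
# `text` and `corrected_text` columns of this dataset.
import difflib

row = {
    "text": "Tlie qnick brown fox jumped ovcr the lazy dog.",
    "corrected_text": "The quick brown fox jumped over the lazy dog.",
}

# Word-level diff; keep only the removed ("- ") and added ("+ ") tokens.
diff = difflib.ndiff(row["text"].split(), row["corrected_text"].split())
changes = [d for d in diff if d.startswith(("- ", "+ "))]
print(changes)
```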
The *text* column contains the original uncorrected text and the *corrected_text* column contains the rewritten text. |
tinhpx2911/vanhoc_processed | ---
dataset_info:
features:
- name: text
dtype: string
- name: title
dtype: string
splits:
- name: train
num_bytes: 161543279
num_examples: 28242
download_size: 81656333
dataset_size: 161543279
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "vanhoc_processed"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
bigbio/bio_simlex |
---
language:
- en
bigbio_language:
- English
license: unknown
multilinguality: monolingual
bigbio_license_shortname: UNKNOWN
pretty_name: Bio-SimLex
homepage: https://github.com/cambridgeltl/bio-simverb
bigbio_pubmed: True
bigbio_public: True
bigbio_tasks:
- SEMANTIC_SIMILARITY
---
# Dataset Card for Bio-SimLex
## Dataset Description
- **Homepage:** https://github.com/cambridgeltl/bio-simverb
- **Pubmed:** True
- **Public:** True
- **Tasks:** STS
Bio-SimLex enables intrinsic evaluation of word representations. This evaluation can serve as a predictor of performance on various downstream tasks in the biomedical domain. The results on Bio-SimLex using standard word representation models highlight the importance of developing dedicated evaluation resources for NLP in biomedicine for particular word classes (e.g. verbs).
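As a sketch of what this intrinsic evaluation looks like, one can compute the Spearman correlation between gold similarity ratings and a model's similarity scores (the ratings and scores below are invented toys, not Bio-SimLex entries):

```python
# Sketch of the intrinsic evaluation Bio-SimLex supports: Spearman
# correlation between gold similarity ratings and model similarity
# scores. Toy values below; ties are not handled in this simple rank().

def rank(xs):
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    ranks = [0.0] * len(xs)
    for r, i in enumerate(order):
        ranks[i] = float(r)
    return ranks


def spearman(xs, ys):
    rx, ry = rank(xs), rank(ys)
    n = len(xs)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    vx = sum((a - mx) ** 2 for a in rx) ** 0.5
    vy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (vx * vy)


gold = [9.2, 1.3, 7.8, 0.5]       # invented annotator ratings
model = [0.91, 0.20, 0.75, 0.05]  # invented model cosine similarities
score = spearman(gold, model)
print(score)
```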
## Citation Information
```
@article{article,
title = {
Bio-SimVerb and Bio-SimLex: Wide-coverage evaluation sets of word
similarity in biomedicine
},
author = {Chiu, Billy and Pyysalo, Sampo and Vulić, Ivan and Korhonen, Anna},
year = 2018,
month = {02},
journal = {BMC Bioinformatics},
volume = 19,
pages = {},
doi = {10.1186/s12859-018-2039-z}
}
```
|
pxovela/Test_Images_Overtrained_TE_vs_Unet | ---
license: openrail
---
|
hongdijk/AUGAUG | ---
license: other
---
|
liuyanchen1015/MULTI_VALUE_sst2_my_i | ---
dataset_info:
features:
- name: sentence
dtype: string
- name: label
dtype: int64
- name: idx
dtype: int64
- name: score
dtype: int64
splits:
- name: dev
num_bytes: 2082
num_examples: 16
- name: test
num_bytes: 1559
num_examples: 13
- name: train
num_bytes: 37883
num_examples: 323
download_size: 20951
dataset_size: 41524
---
# Dataset Card for "MULTI_VALUE_sst2_my_i"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
thobauma/harmless-poisoned-0.03-questionmarks-murder | ---
dataset_info:
features:
- name: chosen
dtype: string
- name: rejected
dtype: string
splits:
- name: train
num_bytes: 58402939.44335993
num_examples: 42537
download_size: 31364075
dataset_size: 58402939.44335993
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
nityan/flowers-demo | ---
dataset_info:
features:
- name: image
dtype: image
splits:
- name: train
num_bytes: 347100141.78
num_examples: 8189
download_size: 346653098
dataset_size: 347100141.78
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
KETI-AIR/aihub_scitech20_translation | ---
license: apache-2.0
---
|
FaalSa/f4 | ---
dataset_info:
features:
- name: start
dtype: timestamp[s]
- name: target
sequence: float32
- name: item_id
dtype: string
- name: feat_static_cat
sequence: uint64
splits:
- name: train
num_bytes: 79710
num_examples: 1
- name: validation
num_bytes: 80190
num_examples: 1
- name: test
num_bytes: 80670
num_examples: 1
download_size: 67735
dataset_size: 240570
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
---
|
softlab/PerLang | ---
license: openrail
---
|
irds/beir_scifact_test | ---
pretty_name: '`beir/scifact/test`'
viewer: false
source_datasets: ['irds/beir_scifact']
task_categories:
- text-retrieval
---
# Dataset Card for `beir/scifact/test`
The `beir/scifact/test` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package.
For more information about the dataset, see the [documentation](https://ir-datasets.com/beir#beir/scifact/test).
# Data
This dataset provides:
- `queries` (i.e., topics); count=300
- `qrels`: (relevance assessments); count=339
- For `docs`, use [`irds/beir_scifact`](https://huggingface.co/datasets/irds/beir_scifact)
## Usage
```python
from datasets import load_dataset
queries = load_dataset('irds/beir_scifact_test', 'queries')
for record in queries:
record # {'query_id': ..., 'text': ...}
qrels = load_dataset('irds/beir_scifact_test', 'qrels')
for record in qrels:
record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...}
```
Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the
data in 🤗 Dataset format.
## Citation Information
```
@inproceedings{Wadden2020Scifact,
title = "Fact or Fiction: Verifying Scientific Claims",
author = "Wadden, David and
Lin, Shanchuan and
Lo, Kyle and
Wang, Lucy Lu and
van Zuylen, Madeleine and
Cohan, Arman and
Hajishirzi, Hannaneh",
booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
month = nov,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/2020.emnlp-main.609",
doi = "10.18653/v1/2020.emnlp-main.609",
pages = "7534--7550"
}
@article{Thakur2021Beir,
title = "BEIR: A Heterogenous Benchmark for Zero-shot Evaluation of Information Retrieval Models",
author = "Thakur, Nandan and Reimers, Nils and Rücklé, Andreas and Srivastava, Abhishek and Gurevych, Iryna",
journal= "arXiv preprint arXiv:2104.08663",
month = "4",
year = "2021",
url = "https://arxiv.org/abs/2104.08663",
}
```
|
FidelOdok/SOFA_DOA_10_deg_meta_dirxyz | ---
dataset_info:
features:
- name: audio
dtype: audio
- name: label
dtype:
class_label:
names:
'0': '0'
'1': '101'
'2': '106'
'3': '112'
'4': '117'
'5': '122'
'6': '129'
'7': '134'
'8': '137'
'9': '139'
'10': '151'
'11': '156'
'12': '166'
'13': '169'
'14': '171'
'15': '172'
'16': '18'
'17': '182'
'18': '187'
'19': '189'
'20': '190'
'21': '192'
'22': '200'
'23': '205'
'24': '207'
'25': '209'
'26': '211'
'27': '218'
'28': '219'
'29': '221'
'30': '224'
'31': '226'
'32': '227'
'33': '229'
'34': '237'
'35': '239'
'36': '242'
'37': '244'
'38': '257'
'39': '26'
'40': '260'
'41': '262'
'42': '265'
'43': '278'
'44': '281'
'45': '3'
'46': '312'
'47': '317'
'48': '328'
'49': '343'
'50': '351'
'51': '354'
'52': '356'
'53': '358'
'54': '359'
'55': '368'
'56': '369'
'57': '371'
'58': '372'
'59': '373'
'60': '378'
'61': '380'
'62': '383'
'63': '385'
'64': '386'
'65': '391'
'66': '394'
'67': '397'
'68': '4'
'69': '422'
'70': '423'
'71': '424'
'72': '426'
'73': '427'
'74': '428'
'75': '46'
'76': '49'
'77': '5'
'78': '50'
'79': '58'
'80': '6'
'81': '66'
'82': '67'
'83': '69'
'84': '7'
'85': '71'
'86': '73'
'87': '82'
'88': '84'
'89': '86'
'90': '87'
'91': '89'
'92': '96'
- name: dirxyz
sequence: float64
splits:
- name: train
num_bytes: 21492417405.0
num_examples: 22500
download_size: 21493361663
dataset_size: 21492417405.0
---
# Dataset Card for "SOFA_DOA_10_deg_meta_dirxyz"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
adityaedy01/me | ---
license: mit
---
|
distilled-from-one-sec-cv12/chunk_51 | ---
dataset_info:
features:
- name: logits
sequence: float32
- name: mfcc
sequence:
sequence: float64
splits:
- name: train
num_bytes: 1248559148
num_examples: 243289
download_size: 1277189885
dataset_size: 1248559148
---
# Dataset Card for "chunk_51"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
ziwenyd/avatar-functions | ---
license: mit
---
There is no difference between 'train' and 'test'; these splits exist only so that the CSV file can be detected by Hugging Face.
max_java_exp_len=1784
max_python_exp_len=1469 |
gustproof/shiny-cards-produce | ---
license: cc-by-sa-4.0
---
|
autoevaluate/autoeval-staging-eval-project-5c51f1de-f5e2-46a7-861f-b1b7c80db774-5351 | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- glue
eval_info:
task: binary_classification
model: autoevaluate/binary-classification
metrics: ['matthews_correlation']
dataset_name: glue
dataset_config: sst2
dataset_split: validation
col_mapping:
text: sentence
target: label
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Binary Text Classification
* Model: autoevaluate/binary-classification
* Dataset: glue
* Config: sst2
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model. |
distilled-one-sec-cv12-each-chunk-uniq/chunk_78 | ---
dataset_info:
features:
- name: logits
sequence: float32
- name: mfcc
sequence:
sequence: float64
splits:
- name: train
num_bytes: 1280829164.0
num_examples: 249577
download_size: 1312989459
dataset_size: 1280829164.0
---
# Dataset Card for "chunk_78"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
davanstrien/maps_nls | |
glami/glami-1m | ---
license: apache-2.0
---

GLAMI-1M contains 1.1 million fashion items, 968 thousand unique images, and 1 million unique texts. It covers 13 languages, mostly European, and 191 fine-grained categories (for example, 15 shoe types). It provides high-quality annotations from professional curators and presents a difficult production-industry problem.
Each sample contains an image, a country code, the item name in the corresponding language, a description, the target category, and the source of the label. The label source can be of several types, human or rule-based, but most samples carry human-based labels.
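As a rough illustration of the per-sample structure described above, one record might look like the dictionary below. The field names here are assumptions for illustration only, not the dataset's exact schema; see the GitHub repository for the authoritative field list.

```python
# Hypothetical shape of one GLAMI-1M sample (field names are illustrative
# assumptions; consult the GLAMI-1M GitHub repo for the exact schema).
sample = {
    "image_file": "item_000001.jpg",   # one of ~968k unique images
    "geo": "cz",                       # country code (13 languages, mostly European)
    "name": "Dámské kotníkové boty",   # item name in the local language
    "description": "Kožené boty s podpatkem.",
    "category_name": "ankle-boots",    # one of 191 fine-grained categories
    "label_source": "human",           # human or rule-based; mostly human
}
print(sorted(sample))
```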
Read more on [GLAMI-1M home page at GitHub](https://github.com/glami/glami-1m) |
one-sec-cv12/chunk_11 | ---
dataset_info:
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
splits:
- name: train
num_bytes: 18932501520.625
num_examples: 197115
download_size: 16779364500
dataset_size: 18932501520.625
---
# Dataset Card for "chunk_11"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Ryan1122/reality_qa_290k | ---
license: cc-by-nc-4.0
task_categories:
- question-answering
language:
- zh
tags:
- QA
- CN
- self-instruct
size_categories:
- 100K<n<1M
---
# Dataset Card for Dataset Name
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
This dataset is currently for private sharing only.
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] |
Isotonic/Universal_ner_chatml | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 264426895
num_examples: 93560
download_size: 98696959
dataset_size: 264426895
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
liuyanchen1015/MULTI_VALUE_qqp_zero_plural_after_quantifier | ---
dataset_info:
features:
- name: question1
dtype: string
- name: question2
dtype: string
- name: label
dtype: int64
- name: idx
dtype: int64
- name: value_score
dtype: int64
splits:
- name: dev
num_bytes: 470472
num_examples: 2434
- name: test
num_bytes: 4422150
num_examples: 23007
- name: train
num_bytes: 4149857
num_examples: 21376
download_size: 5565037
dataset_size: 9042479
---
# Dataset Card for "MULTI_VALUE_qqp_zero_plural_after_quantifier"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Sunbird/salt-multispeaker-lgg | ---
dataset_info:
features:
- name: ids
dtype: string
- name: texts
dtype: string
- name: audios
sequence: float32
- name: audio_languages
dtype: string
- name: are_studio
dtype: bool
- name: speaker_ids
dtype: string
- name: sample_rates
dtype: int64
splits:
- name: train
num_bytes: 2346308587
num_examples: 4768
- name: dev
num_bytes: 49044839
num_examples: 101
- name: test
num_bytes: 49347377
num_examples: 96
download_size: 1200817239
dataset_size: 2444700803
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: dev
path: data/dev-*
- split: test
path: data/test-*
---
|
BangumiBase/saenaiheroinenosodatekata | ---
license: mit
tags:
- art
size_categories:
- 1K<n<10K
---
# Bangumi Image Base of Saenai Heroine No Sodatekata
This is the image base of bangumi Saenai Heroine no Sodatekata, we detected 26 characters, 3436 images in total. The full dataset is [here](all.zip).
**Please note that these image bases are not guaranteed to be 100% cleaned; they may still contain noise.** If you intend to manually train models using this dataset, we recommend performing the necessary preprocessing on the downloaded dataset to eliminate potential noisy samples (approximately 1% probability).
Here is the characters' preview:
| # | Images | Download | Preview 1 | Preview 2 | Preview 3 | Preview 4 | Preview 5 | Preview 6 | Preview 7 | Preview 8 |
|:------|---------:|:---------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|
| 0 | 195 | [Download](0/dataset.zip) |  |  |  |  |  |  |  |  |
| 1 | 982 | [Download](1/dataset.zip) |  |  |  |  |  |  |  |  |
| 2 | 77 | [Download](2/dataset.zip) |  |  |  |  |  |  |  |  |
| 3 | 24 | [Download](3/dataset.zip) |  |  |  |  |  |  |  |  |
| 4 | 23 | [Download](4/dataset.zip) |  |  |  |  |  |  |  |  |
| 5 | 14 | [Download](5/dataset.zip) |  |  |  |  |  |  |  |  |
| 6 | 126 | [Download](6/dataset.zip) |  |  |  |  |  |  |  |  |
| 7 | 411 | [Download](7/dataset.zip) |  |  |  |  |  |  |  |  |
| 8 | 35 | [Download](8/dataset.zip) |  |  |  |  |  |  |  |  |
| 9 | 84 | [Download](9/dataset.zip) |  |  |  |  |  |  |  |  |
| 10 | 137 | [Download](10/dataset.zip) |  |  |  |  |  |  |  |  |
| 11 | 269 | [Download](11/dataset.zip) |  |  |  |  |  |  |  |  |
| 12 | 21 | [Download](12/dataset.zip) |  |  |  |  |  |  |  |  |
| 13 | 75 | [Download](13/dataset.zip) |  |  |  |  |  |  |  |  |
| 14 | 15 | [Download](14/dataset.zip) |  |  |  |  |  |  |  |  |
| 15 | 17 | [Download](15/dataset.zip) |  |  |  |  |  |  |  |  |
| 16 | 77 | [Download](16/dataset.zip) |  |  |  |  |  |  |  |  |
| 17 | 37 | [Download](17/dataset.zip) |  |  |  |  |  |  |  |  |
| 18 | 10 | [Download](18/dataset.zip) |  |  |  |  |  |  |  |  |
| 19 | 15 | [Download](19/dataset.zip) |  |  |  |  |  |  |  |  |
| 20 | 516 | [Download](20/dataset.zip) |  |  |  |  |  |  |  |  |
| 21 | 65 | [Download](21/dataset.zip) |  |  |  |  |  |  |  |  |
| 22 | 11 | [Download](22/dataset.zip) |  |  |  |  |  |  |  |  |
| 23 | 6 | [Download](23/dataset.zip) |  |  |  |  |  |  | N/A | N/A |
| 24 | 6 | [Download](24/dataset.zip) |  |  |  |  |  |  | N/A | N/A |
| noise | 188 | [Download](-1/dataset.zip) |  |  |  |  |  |  |  |  |
|
yuvalkirstain/pickapic_v2 | ---
dataset_info:
features:
- name: are_different
dtype: bool
- name: best_image_uid
dtype: string
- name: caption
dtype: string
- name: created_at
dtype: timestamp[ns]
- name: has_label
dtype: bool
- name: image_0_uid
dtype: string
- name: image_0_url
dtype: string
- name: image_1_uid
dtype: string
- name: image_1_url
dtype: string
- name: jpg_0
dtype: binary
- name: jpg_1
dtype: binary
- name: label_0
dtype: float64
- name: label_1
dtype: float64
- name: model_0
dtype: string
- name: model_1
dtype: string
- name: ranking_id
dtype: int64
- name: user_id
dtype: int64
- name: num_example_per_prompt
dtype: int64
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 322022952127
num_examples: 959040
- name: validation
num_bytes: 6339087542
num_examples: 20596
- name: test
num_bytes: 6618429346
num_examples: 20716
- name: validation_unique
num_bytes: 170578993
num_examples: 500
- name: test_unique
num_bytes: 175368751
num_examples: 500
download_size: 15603769274
dataset_size: 335326416759
---
# Dataset Card for "pickapic_v2"
Please note: the URLs will be temporarily unavailable, but you do not need them! The `jpg_0` and `jpg_1` columns already contain the image bytes, so by downloading the dataset you already have the images!
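Since `jpg_0` and `jpg_1` hold raw JPEG bytes rather than decoded images, each record needs a small decode step. The sketch below shows the pattern with a synthetic in-memory JPEG standing in for a real record (so it runs without downloading the dataset); with the actual dataset you would pass `example["jpg_0"]` instead.

```python
# Sketch: decoding the raw JPEG bytes stored in the jpg_0 / jpg_1 columns.
# A synthetic image stands in for a real record here.
import io
from PIL import Image

def bytes_to_image(jpg_bytes: bytes) -> Image.Image:
    """Decode raw JPEG bytes (as stored in jpg_0 / jpg_1) into a PIL image."""
    return Image.open(io.BytesIO(jpg_bytes))

# Build stand-in bytes the same way the dataset stores them.
buf = io.BytesIO()
Image.new("RGB", (64, 64), color=(200, 30, 30)).save(buf, format="JPEG")

img = bytes_to_image(buf.getvalue())
print(img.size)  # → (64, 64)
```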
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
kweyamba/lunas-set | ---
license: openrail
task_categories:
- table-question-answering
- question-answering
language:
- en
tags:
- inventory
- price
- expiration
- medicine
pretty_name: lunas
size_categories:
- 10K<n<100K
--- |
open-llm-leaderboard/details_abdulrahman-nuzha__belal-finetuned-llama2-1024-v2.2 | ---
pretty_name: Evaluation run of abdulrahman-nuzha/belal-finetuned-llama2-1024-v2.2
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [abdulrahman-nuzha/belal-finetuned-llama2-1024-v2.2](https://huggingface.co/abdulrahman-nuzha/belal-finetuned-llama2-1024-v2.2)\
\ on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 63 configuration, each one coresponding to one of the\
\ evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
\ timestamp of the run.The \"train\" split is always pointing to the latest results.\n\
\nAn additional configuration \"results\" store all the aggregated results of the\
\ run (and is used to compute and display the aggregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_abdulrahman-nuzha__belal-finetuned-llama2-1024-v2.2\"\
,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\
These are the [latest results from run 2024-01-19T15:11:16.361884](https://huggingface.co/datasets/open-llm-leaderboard/details_abdulrahman-nuzha__belal-finetuned-llama2-1024-v2.2/blob/main/results_2024-01-19T15-11-16.361884.json)(note\
\ that their might be results for other tasks in the repos if successive evals didn't\
\ cover the same tasks. You find each in the results and the \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.4487446353511138,\n\
\ \"acc_stderr\": 0.034504979440505464,\n \"acc_norm\": 0.4534744253247318,\n\
\ \"acc_norm_stderr\": 0.03530926751067455,\n \"mc1\": 0.2460220318237454,\n\
\ \"mc1_stderr\": 0.015077219200662592,\n \"mc2\": 0.40020648111023094,\n\
\ \"mc2_stderr\": 0.01385589773587115\n },\n \"harness|arc:challenge|25\"\
: {\n \"acc\": 0.49146757679180886,\n \"acc_stderr\": 0.014609263165632186,\n\
\ \"acc_norm\": 0.5264505119453925,\n \"acc_norm_stderr\": 0.014590931358120172\n\
\ },\n \"harness|hellaswag|10\": {\n \"acc\": 0.5850428201553476,\n\
\ \"acc_stderr\": 0.004917076726623795,\n \"acc_norm\": 0.7781318462457678,\n\
\ \"acc_norm_stderr\": 0.004146537488135697\n },\n \"harness|hendrycksTest-abstract_algebra|5\"\
: {\n \"acc\": 0.33,\n \"acc_stderr\": 0.047258156262526045,\n \
\ \"acc_norm\": 0.33,\n \"acc_norm_stderr\": 0.047258156262526045\n \
\ },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.48148148148148145,\n\
\ \"acc_stderr\": 0.043163785995113245,\n \"acc_norm\": 0.48148148148148145,\n\
\ \"acc_norm_stderr\": 0.043163785995113245\n },\n \"harness|hendrycksTest-astronomy|5\"\
: {\n \"acc\": 0.3881578947368421,\n \"acc_stderr\": 0.03965842097512744,\n\
\ \"acc_norm\": 0.3881578947368421,\n \"acc_norm_stderr\": 0.03965842097512744\n\
\ },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.48,\n\
\ \"acc_stderr\": 0.050211673156867795,\n \"acc_norm\": 0.48,\n \
\ \"acc_norm_stderr\": 0.050211673156867795\n },\n \"harness|hendrycksTest-clinical_knowledge|5\"\
: {\n \"acc\": 0.4679245283018868,\n \"acc_stderr\": 0.03070948699255655,\n\
\ \"acc_norm\": 0.4679245283018868,\n \"acc_norm_stderr\": 0.03070948699255655\n\
\ },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.4375,\n\
\ \"acc_stderr\": 0.04148415739394154,\n \"acc_norm\": 0.4375,\n \
\ \"acc_norm_stderr\": 0.04148415739394154\n },\n \"harness|hendrycksTest-college_chemistry|5\"\
: {\n \"acc\": 0.34,\n \"acc_stderr\": 0.047609522856952344,\n \
\ \"acc_norm\": 0.34,\n \"acc_norm_stderr\": 0.047609522856952344\n \
\ },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"\
acc\": 0.4,\n \"acc_stderr\": 0.04923659639173309,\n \"acc_norm\"\
: 0.4,\n \"acc_norm_stderr\": 0.04923659639173309\n },\n \"harness|hendrycksTest-college_mathematics|5\"\
: {\n \"acc\": 0.36,\n \"acc_stderr\": 0.04824181513244218,\n \
\ \"acc_norm\": 0.36,\n \"acc_norm_stderr\": 0.04824181513244218\n \
\ },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.42196531791907516,\n\
\ \"acc_stderr\": 0.0376574669386515,\n \"acc_norm\": 0.42196531791907516,\n\
\ \"acc_norm_stderr\": 0.0376574669386515\n },\n \"harness|hendrycksTest-college_physics|5\"\
: {\n \"acc\": 0.1568627450980392,\n \"acc_stderr\": 0.03618664819936245,\n\
\ \"acc_norm\": 0.1568627450980392,\n \"acc_norm_stderr\": 0.03618664819936245\n\
\ },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\":\
\ 0.56,\n \"acc_stderr\": 0.04988876515698589,\n \"acc_norm\": 0.56,\n\
\ \"acc_norm_stderr\": 0.04988876515698589\n },\n \"harness|hendrycksTest-conceptual_physics|5\"\
: {\n \"acc\": 0.4340425531914894,\n \"acc_stderr\": 0.03240038086792747,\n\
\ \"acc_norm\": 0.4340425531914894,\n \"acc_norm_stderr\": 0.03240038086792747\n\
\ },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.30701754385964913,\n\
\ \"acc_stderr\": 0.04339138322579861,\n \"acc_norm\": 0.30701754385964913,\n\
\ \"acc_norm_stderr\": 0.04339138322579861\n },\n \"harness|hendrycksTest-electrical_engineering|5\"\
: {\n \"acc\": 0.496551724137931,\n \"acc_stderr\": 0.041665675771015785,\n\
\ \"acc_norm\": 0.496551724137931,\n \"acc_norm_stderr\": 0.041665675771015785\n\
\ },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\"\
: 0.2777777777777778,\n \"acc_stderr\": 0.023068188848261114,\n \"\
acc_norm\": 0.2777777777777778,\n \"acc_norm_stderr\": 0.023068188848261114\n\
\ },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.25396825396825395,\n\
\ \"acc_stderr\": 0.03893259610604675,\n \"acc_norm\": 0.25396825396825395,\n\
\ \"acc_norm_stderr\": 0.03893259610604675\n },\n \"harness|hendrycksTest-global_facts|5\"\
: {\n \"acc\": 0.3,\n \"acc_stderr\": 0.046056618647183814,\n \
\ \"acc_norm\": 0.3,\n \"acc_norm_stderr\": 0.046056618647183814\n \
\ },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.47419354838709676,\n\
\ \"acc_stderr\": 0.028406095057653315,\n \"acc_norm\": 0.47419354838709676,\n\
\ \"acc_norm_stderr\": 0.028406095057653315\n },\n \"harness|hendrycksTest-high_school_chemistry|5\"\
: {\n \"acc\": 0.3399014778325123,\n \"acc_stderr\": 0.0333276906841079,\n\
\ \"acc_norm\": 0.3399014778325123,\n \"acc_norm_stderr\": 0.0333276906841079\n\
\ },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \
\ \"acc\": 0.42,\n \"acc_stderr\": 0.04960449637488584,\n \"acc_norm\"\
: 0.42,\n \"acc_norm_stderr\": 0.04960449637488584\n },\n \"harness|hendrycksTest-high_school_european_history|5\"\
: {\n \"acc\": 0.5696969696969697,\n \"acc_stderr\": 0.03866225962879077,\n\
\ \"acc_norm\": 0.5696969696969697,\n \"acc_norm_stderr\": 0.03866225962879077\n\
\ },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\"\
: 0.5353535353535354,\n \"acc_stderr\": 0.03553436368828061,\n \"\
acc_norm\": 0.5353535353535354,\n \"acc_norm_stderr\": 0.03553436368828061\n\
\ },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n\
\ \"acc\": 0.6528497409326425,\n \"acc_stderr\": 0.03435696168361355,\n\
\ \"acc_norm\": 0.6528497409326425,\n \"acc_norm_stderr\": 0.03435696168361355\n\
\ },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \
\ \"acc\": 0.4076923076923077,\n \"acc_stderr\": 0.024915243985987844,\n\
\ \"acc_norm\": 0.4076923076923077,\n \"acc_norm_stderr\": 0.024915243985987844\n\
\ },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"\
acc\": 0.25555555555555554,\n \"acc_stderr\": 0.02659393910184408,\n \
\ \"acc_norm\": 0.25555555555555554,\n \"acc_norm_stderr\": 0.02659393910184408\n\
\ },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \
\ \"acc\": 0.3739495798319328,\n \"acc_stderr\": 0.031429466378837076,\n\
\ \"acc_norm\": 0.3739495798319328,\n \"acc_norm_stderr\": 0.031429466378837076\n\
\ },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\"\
: 0.2847682119205298,\n \"acc_stderr\": 0.03684881521389023,\n \"\
acc_norm\": 0.2847682119205298,\n \"acc_norm_stderr\": 0.03684881521389023\n\
\ },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\"\
: 0.5889908256880734,\n \"acc_stderr\": 0.021095050687277656,\n \"\
acc_norm\": 0.5889908256880734,\n \"acc_norm_stderr\": 0.021095050687277656\n\
\ },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\"\
: 0.3101851851851852,\n \"acc_stderr\": 0.03154696285656628,\n \"\
acc_norm\": 0.3101851851851852,\n \"acc_norm_stderr\": 0.03154696285656628\n\
\ },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\"\
: 0.5294117647058824,\n \"acc_stderr\": 0.03503235296367993,\n \"\
acc_norm\": 0.5294117647058824,\n \"acc_norm_stderr\": 0.03503235296367993\n\
\ },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"\
acc\": 0.5443037974683544,\n \"acc_stderr\": 0.03241920684693335,\n \
\ \"acc_norm\": 0.5443037974683544,\n \"acc_norm_stderr\": 0.03241920684693335\n\
\ },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.5426008968609866,\n\
\ \"acc_stderr\": 0.033435777055830646,\n \"acc_norm\": 0.5426008968609866,\n\
\ \"acc_norm_stderr\": 0.033435777055830646\n },\n \"harness|hendrycksTest-human_sexuality|5\"\
: {\n \"acc\": 0.45038167938931295,\n \"acc_stderr\": 0.04363643698524779,\n\
\ \"acc_norm\": 0.45038167938931295,\n \"acc_norm_stderr\": 0.04363643698524779\n\
\ },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\":\
\ 0.628099173553719,\n \"acc_stderr\": 0.044120158066245044,\n \"\
acc_norm\": 0.628099173553719,\n \"acc_norm_stderr\": 0.044120158066245044\n\
\ },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.5,\n\
\ \"acc_stderr\": 0.04833682445228318,\n \"acc_norm\": 0.5,\n \
\ \"acc_norm_stderr\": 0.04833682445228318\n },\n \"harness|hendrycksTest-logical_fallacies|5\"\
: {\n \"acc\": 0.4171779141104294,\n \"acc_stderr\": 0.038741028598180814,\n\
\ \"acc_norm\": 0.4171779141104294,\n \"acc_norm_stderr\": 0.038741028598180814\n\
\ },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.38392857142857145,\n\
\ \"acc_stderr\": 0.04616143075028547,\n \"acc_norm\": 0.38392857142857145,\n\
\ \"acc_norm_stderr\": 0.04616143075028547\n },\n \"harness|hendrycksTest-management|5\"\
: {\n \"acc\": 0.4854368932038835,\n \"acc_stderr\": 0.049486373240266376,\n\
\ \"acc_norm\": 0.4854368932038835,\n \"acc_norm_stderr\": 0.049486373240266376\n\
\ },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.6452991452991453,\n\
\ \"acc_stderr\": 0.03134250486245402,\n \"acc_norm\": 0.6452991452991453,\n\
\ \"acc_norm_stderr\": 0.03134250486245402\n },\n \"harness|hendrycksTest-medical_genetics|5\"\
: {\n \"acc\": 0.52,\n \"acc_stderr\": 0.050211673156867795,\n \
\ \"acc_norm\": 0.52,\n \"acc_norm_stderr\": 0.050211673156867795\n \
\ },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.6104725415070242,\n\
\ \"acc_stderr\": 0.017438082556264597,\n \"acc_norm\": 0.6104725415070242,\n\
\ \"acc_norm_stderr\": 0.017438082556264597\n },\n \"harness|hendrycksTest-moral_disputes|5\"\
: {\n \"acc\": 0.48265895953757226,\n \"acc_stderr\": 0.026902900458666647,\n\
\ \"acc_norm\": 0.48265895953757226,\n \"acc_norm_stderr\": 0.026902900458666647\n\
\ },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.27262569832402234,\n\
\ \"acc_stderr\": 0.014893391735249619,\n \"acc_norm\": 0.27262569832402234,\n\
\ \"acc_norm_stderr\": 0.014893391735249619\n },\n \"harness|hendrycksTest-nutrition|5\"\
: {\n \"acc\": 0.4803921568627451,\n \"acc_stderr\": 0.028607893699576066,\n\
\ \"acc_norm\": 0.4803921568627451,\n \"acc_norm_stderr\": 0.028607893699576066\n\
\ },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.5562700964630225,\n\
\ \"acc_stderr\": 0.028217683556652308,\n \"acc_norm\": 0.5562700964630225,\n\
\ \"acc_norm_stderr\": 0.028217683556652308\n },\n \"harness|hendrycksTest-prehistory|5\"\
: {\n \"acc\": 0.5370370370370371,\n \"acc_stderr\": 0.027744313443376536,\n\
\ \"acc_norm\": 0.5370370370370371,\n \"acc_norm_stderr\": 0.027744313443376536\n\
\ },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"\
acc\": 0.36524822695035464,\n \"acc_stderr\": 0.028723863853281278,\n \
\ \"acc_norm\": 0.36524822695035464,\n \"acc_norm_stderr\": 0.028723863853281278\n\
\ },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.3500651890482399,\n\
\ \"acc_stderr\": 0.012182552313215175,\n \"acc_norm\": 0.3500651890482399,\n\
\ \"acc_norm_stderr\": 0.012182552313215175\n },\n \"harness|hendrycksTest-professional_medicine|5\"\
: {\n \"acc\": 0.5,\n \"acc_stderr\": 0.030372836961539352,\n \
\ \"acc_norm\": 0.5,\n \"acc_norm_stderr\": 0.030372836961539352\n \
\ },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"acc\"\
: 0.4215686274509804,\n \"acc_stderr\": 0.019977422600227467,\n \"\
acc_norm\": 0.4215686274509804,\n \"acc_norm_stderr\": 0.019977422600227467\n\
\ },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.4727272727272727,\n\
\ \"acc_stderr\": 0.04782001791380063,\n \"acc_norm\": 0.4727272727272727,\n\
\ \"acc_norm_stderr\": 0.04782001791380063\n },\n \"harness|hendrycksTest-security_studies|5\"\
: {\n \"acc\": 0.4448979591836735,\n \"acc_stderr\": 0.031814251181977865,\n\
\ \"acc_norm\": 0.4448979591836735,\n \"acc_norm_stderr\": 0.031814251181977865\n\
\ },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.582089552238806,\n\
\ \"acc_stderr\": 0.03487558640462064,\n \"acc_norm\": 0.582089552238806,\n\
\ \"acc_norm_stderr\": 0.03487558640462064\n },\n \"harness|hendrycksTest-us_foreign_policy|5\"\
: {\n \"acc\": 0.64,\n \"acc_stderr\": 0.04824181513244218,\n \
\ \"acc_norm\": 0.64,\n \"acc_norm_stderr\": 0.04824181513244218\n \
\ },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.3614457831325301,\n\
\ \"acc_stderr\": 0.03740059382029321,\n \"acc_norm\": 0.3614457831325301,\n\
\ \"acc_norm_stderr\": 0.03740059382029321\n },\n \"harness|hendrycksTest-world_religions|5\"\
: {\n \"acc\": 0.6198830409356725,\n \"acc_stderr\": 0.037229657413855394,\n\
\ \"acc_norm\": 0.6198830409356725,\n \"acc_norm_stderr\": 0.037229657413855394\n\
\ },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.2460220318237454,\n\
\ \"mc1_stderr\": 0.015077219200662592,\n \"mc2\": 0.40020648111023094,\n\
\ \"mc2_stderr\": 0.01385589773587115\n },\n \"harness|winogrande|5\"\
: {\n \"acc\": 0.7411207576953434,\n \"acc_stderr\": 0.012310515810993376\n\
\ },\n \"harness|gsm8k|5\": {\n \"acc\": 0.10538286580742987,\n \
\ \"acc_stderr\": 0.008457575884041755\n }\n}\n```"
repo_url: https://huggingface.co/abdulrahman-nuzha/belal-finetuned-llama2-1024-v2.2
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_arc_challenge_25
data_files:
- split: 2024_01_19T15_11_16.361884
path:
- '**/details_harness|arc:challenge|25_2024-01-19T15-11-16.361884.parquet'
- split: latest
path:
- '**/details_harness|arc:challenge|25_2024-01-19T15-11-16.361884.parquet'
- config_name: harness_gsm8k_5
data_files:
- split: 2024_01_19T15_11_16.361884
path:
- '**/details_harness|gsm8k|5_2024-01-19T15-11-16.361884.parquet'
- split: latest
path:
- '**/details_harness|gsm8k|5_2024-01-19T15-11-16.361884.parquet'
- config_name: harness_hellaswag_10
data_files:
- split: 2024_01_19T15_11_16.361884
path:
- '**/details_harness|hellaswag|10_2024-01-19T15-11-16.361884.parquet'
- split: latest
path:
- '**/details_harness|hellaswag|10_2024-01-19T15-11-16.361884.parquet'
- config_name: harness_hendrycksTest_5
data_files:
- split: 2024_01_19T15_11_16.361884
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-19T15-11-16.361884.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2024-01-19T15-11-16.361884.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2024-01-19T15-11-16.361884.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2024-01-19T15-11-16.361884.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-19T15-11-16.361884.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2024-01-19T15-11-16.361884.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-01-19T15-11-16.361884.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-01-19T15-11-16.361884.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-01-19T15-11-16.361884.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2024-01-19T15-11-16.361884.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2024-01-19T15-11-16.361884.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2024-01-19T15-11-16.361884.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-19T15-11-16.361884.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2024-01-19T15-11-16.361884.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-19T15-11-16.361884.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-19T15-11-16.361884.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2024-01-19T15-11-16.361884.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2024-01-19T15-11-16.361884.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-01-19T15-11-16.361884.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-19T15-11-16.361884.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-19T15-11-16.361884.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-19T15-11-16.361884.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-01-19T15-11-16.361884.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-19T15-11-16.361884.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-19T15-11-16.361884.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-19T15-11-16.361884.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-19T15-11-16.361884.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-01-19T15-11-16.361884.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-19T15-11-16.361884.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-19T15-11-16.361884.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-19T15-11-16.361884.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-19T15-11-16.361884.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2024-01-19T15-11-16.361884.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-01-19T15-11-16.361884.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2024-01-19T15-11-16.361884.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-01-19T15-11-16.361884.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-19T15-11-16.361884.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2024-01-19T15-11-16.361884.parquet'
- '**/details_harness|hendrycksTest-management|5_2024-01-19T15-11-16.361884.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2024-01-19T15-11-16.361884.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-01-19T15-11-16.361884.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-01-19T15-11-16.361884.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-01-19T15-11-16.361884.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-19T15-11-16.361884.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2024-01-19T15-11-16.361884.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2024-01-19T15-11-16.361884.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2024-01-19T15-11-16.361884.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-01-19T15-11-16.361884.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2024-01-19T15-11-16.361884.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-01-19T15-11-16.361884.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-01-19T15-11-16.361884.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2024-01-19T15-11-16.361884.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2024-01-19T15-11-16.361884.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2024-01-19T15-11-16.361884.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-19T15-11-16.361884.parquet'
- '**/details_harness|hendrycksTest-virology|5_2024-01-19T15-11-16.361884.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2024-01-19T15-11-16.361884.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-19T15-11-16.361884.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2024-01-19T15-11-16.361884.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2024-01-19T15-11-16.361884.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2024-01-19T15-11-16.361884.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-19T15-11-16.361884.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2024-01-19T15-11-16.361884.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-01-19T15-11-16.361884.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-01-19T15-11-16.361884.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-01-19T15-11-16.361884.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2024-01-19T15-11-16.361884.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2024-01-19T15-11-16.361884.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2024-01-19T15-11-16.361884.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-19T15-11-16.361884.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2024-01-19T15-11-16.361884.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-19T15-11-16.361884.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-19T15-11-16.361884.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2024-01-19T15-11-16.361884.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2024-01-19T15-11-16.361884.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-01-19T15-11-16.361884.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-19T15-11-16.361884.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-19T15-11-16.361884.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-19T15-11-16.361884.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-01-19T15-11-16.361884.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-19T15-11-16.361884.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-19T15-11-16.361884.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-19T15-11-16.361884.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-19T15-11-16.361884.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-01-19T15-11-16.361884.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-19T15-11-16.361884.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-19T15-11-16.361884.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-19T15-11-16.361884.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-19T15-11-16.361884.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2024-01-19T15-11-16.361884.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-01-19T15-11-16.361884.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2024-01-19T15-11-16.361884.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-01-19T15-11-16.361884.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-19T15-11-16.361884.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2024-01-19T15-11-16.361884.parquet'
- '**/details_harness|hendrycksTest-management|5_2024-01-19T15-11-16.361884.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2024-01-19T15-11-16.361884.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-01-19T15-11-16.361884.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-01-19T15-11-16.361884.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-01-19T15-11-16.361884.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-19T15-11-16.361884.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2024-01-19T15-11-16.361884.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2024-01-19T15-11-16.361884.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2024-01-19T15-11-16.361884.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-01-19T15-11-16.361884.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2024-01-19T15-11-16.361884.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-01-19T15-11-16.361884.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-01-19T15-11-16.361884.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2024-01-19T15-11-16.361884.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2024-01-19T15-11-16.361884.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2024-01-19T15-11-16.361884.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-19T15-11-16.361884.parquet'
- '**/details_harness|hendrycksTest-virology|5_2024-01-19T15-11-16.361884.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2024-01-19T15-11-16.361884.parquet'
- config_name: harness_hendrycksTest_abstract_algebra_5
data_files:
- split: 2024_01_19T15_11_16.361884
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-19T15-11-16.361884.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-19T15-11-16.361884.parquet'
- config_name: harness_hendrycksTest_anatomy_5
data_files:
- split: 2024_01_19T15_11_16.361884
path:
- '**/details_harness|hendrycksTest-anatomy|5_2024-01-19T15-11-16.361884.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-anatomy|5_2024-01-19T15-11-16.361884.parquet'
- config_name: harness_hendrycksTest_astronomy_5
data_files:
- split: 2024_01_19T15_11_16.361884
path:
- '**/details_harness|hendrycksTest-astronomy|5_2024-01-19T15-11-16.361884.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-astronomy|5_2024-01-19T15-11-16.361884.parquet'
- config_name: harness_hendrycksTest_business_ethics_5
data_files:
- split: 2024_01_19T15_11_16.361884
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2024-01-19T15-11-16.361884.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2024-01-19T15-11-16.361884.parquet'
- config_name: harness_hendrycksTest_clinical_knowledge_5
data_files:
- split: 2024_01_19T15_11_16.361884
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-19T15-11-16.361884.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-19T15-11-16.361884.parquet'
- config_name: harness_hendrycksTest_college_biology_5
data_files:
- split: 2024_01_19T15_11_16.361884
path:
- '**/details_harness|hendrycksTest-college_biology|5_2024-01-19T15-11-16.361884.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_biology|5_2024-01-19T15-11-16.361884.parquet'
- config_name: harness_hendrycksTest_college_chemistry_5
data_files:
- split: 2024_01_19T15_11_16.361884
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-01-19T15-11-16.361884.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-01-19T15-11-16.361884.parquet'
- config_name: harness_hendrycksTest_college_computer_science_5
data_files:
- split: 2024_01_19T15_11_16.361884
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-01-19T15-11-16.361884.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-01-19T15-11-16.361884.parquet'
- config_name: harness_hendrycksTest_college_mathematics_5
data_files:
- split: 2024_01_19T15_11_16.361884
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-01-19T15-11-16.361884.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-01-19T15-11-16.361884.parquet'
- config_name: harness_hendrycksTest_college_medicine_5
data_files:
- split: 2024_01_19T15_11_16.361884
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2024-01-19T15-11-16.361884.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2024-01-19T15-11-16.361884.parquet'
- config_name: harness_hendrycksTest_college_physics_5
data_files:
- split: 2024_01_19T15_11_16.361884
path:
- '**/details_harness|hendrycksTest-college_physics|5_2024-01-19T15-11-16.361884.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_physics|5_2024-01-19T15-11-16.361884.parquet'
- config_name: harness_hendrycksTest_computer_security_5
data_files:
- split: 2024_01_19T15_11_16.361884
path:
- '**/details_harness|hendrycksTest-computer_security|5_2024-01-19T15-11-16.361884.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-computer_security|5_2024-01-19T15-11-16.361884.parquet'
- config_name: harness_hendrycksTest_conceptual_physics_5
data_files:
- split: 2024_01_19T15_11_16.361884
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-19T15-11-16.361884.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-19T15-11-16.361884.parquet'
- config_name: harness_hendrycksTest_econometrics_5
data_files:
- split: 2024_01_19T15_11_16.361884
path:
- '**/details_harness|hendrycksTest-econometrics|5_2024-01-19T15-11-16.361884.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-econometrics|5_2024-01-19T15-11-16.361884.parquet'
- config_name: harness_hendrycksTest_electrical_engineering_5
data_files:
- split: 2024_01_19T15_11_16.361884
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-19T15-11-16.361884.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-19T15-11-16.361884.parquet'
- config_name: harness_hendrycksTest_elementary_mathematics_5
data_files:
- split: 2024_01_19T15_11_16.361884
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-19T15-11-16.361884.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-19T15-11-16.361884.parquet'
- config_name: harness_hendrycksTest_formal_logic_5
data_files:
- split: 2024_01_19T15_11_16.361884
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2024-01-19T15-11-16.361884.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2024-01-19T15-11-16.361884.parquet'
- config_name: harness_hendrycksTest_global_facts_5
data_files:
- split: 2024_01_19T15_11_16.361884
path:
- '**/details_harness|hendrycksTest-global_facts|5_2024-01-19T15-11-16.361884.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-global_facts|5_2024-01-19T15-11-16.361884.parquet'
- config_name: harness_hendrycksTest_high_school_biology_5
data_files:
- split: 2024_01_19T15_11_16.361884
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-01-19T15-11-16.361884.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-01-19T15-11-16.361884.parquet'
- config_name: harness_hendrycksTest_high_school_chemistry_5
data_files:
- split: 2024_01_19T15_11_16.361884
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-19T15-11-16.361884.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-19T15-11-16.361884.parquet'
- config_name: harness_hendrycksTest_high_school_computer_science_5
data_files:
- split: 2024_01_19T15_11_16.361884
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-19T15-11-16.361884.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-19T15-11-16.361884.parquet'
- config_name: harness_hendrycksTest_high_school_european_history_5
data_files:
- split: 2024_01_19T15_11_16.361884
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-19T15-11-16.361884.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-19T15-11-16.361884.parquet'
- config_name: harness_hendrycksTest_high_school_geography_5
data_files:
- split: 2024_01_19T15_11_16.361884
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-01-19T15-11-16.361884.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-01-19T15-11-16.361884.parquet'
- config_name: harness_hendrycksTest_high_school_government_and_politics_5
data_files:
- split: 2024_01_19T15_11_16.361884
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-19T15-11-16.361884.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-19T15-11-16.361884.parquet'
- config_name: harness_hendrycksTest_high_school_macroeconomics_5
data_files:
- split: 2024_01_19T15_11_16.361884
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-19T15-11-16.361884.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-19T15-11-16.361884.parquet'
- config_name: harness_hendrycksTest_high_school_mathematics_5
data_files:
- split: 2024_01_19T15_11_16.361884
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-19T15-11-16.361884.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-19T15-11-16.361884.parquet'
- config_name: harness_hendrycksTest_high_school_microeconomics_5
data_files:
- split: 2024_01_19T15_11_16.361884
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-19T15-11-16.361884.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-19T15-11-16.361884.parquet'
- config_name: harness_hendrycksTest_high_school_physics_5
data_files:
- split: 2024_01_19T15_11_16.361884
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-01-19T15-11-16.361884.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-01-19T15-11-16.361884.parquet'
- config_name: harness_hendrycksTest_high_school_psychology_5
data_files:
- split: 2024_01_19T15_11_16.361884
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-19T15-11-16.361884.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-19T15-11-16.361884.parquet'
- config_name: harness_hendrycksTest_high_school_statistics_5
data_files:
- split: 2024_01_19T15_11_16.361884
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-19T15-11-16.361884.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-19T15-11-16.361884.parquet'
- config_name: harness_hendrycksTest_high_school_us_history_5
data_files:
- split: 2024_01_19T15_11_16.361884
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-19T15-11-16.361884.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-19T15-11-16.361884.parquet'
- config_name: harness_hendrycksTest_high_school_world_history_5
data_files:
- split: 2024_01_19T15_11_16.361884
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-19T15-11-16.361884.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-19T15-11-16.361884.parquet'
- config_name: harness_hendrycksTest_human_aging_5
data_files:
- split: 2024_01_19T15_11_16.361884
path:
- '**/details_harness|hendrycksTest-human_aging|5_2024-01-19T15-11-16.361884.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_aging|5_2024-01-19T15-11-16.361884.parquet'
- config_name: harness_hendrycksTest_human_sexuality_5
data_files:
- split: 2024_01_19T15_11_16.361884
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-01-19T15-11-16.361884.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-01-19T15-11-16.361884.parquet'
- config_name: harness_hendrycksTest_international_law_5
data_files:
- split: 2024_01_19T15_11_16.361884
path:
- '**/details_harness|hendrycksTest-international_law|5_2024-01-19T15-11-16.361884.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-international_law|5_2024-01-19T15-11-16.361884.parquet'
- config_name: harness_hendrycksTest_jurisprudence_5
data_files:
- split: 2024_01_19T15_11_16.361884
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-01-19T15-11-16.361884.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-01-19T15-11-16.361884.parquet'
- config_name: harness_hendrycksTest_logical_fallacies_5
data_files:
- split: 2024_01_19T15_11_16.361884
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-19T15-11-16.361884.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-19T15-11-16.361884.parquet'
- config_name: harness_hendrycksTest_machine_learning_5
data_files:
- split: 2024_01_19T15_11_16.361884
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2024-01-19T15-11-16.361884.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2024-01-19T15-11-16.361884.parquet'
- config_name: harness_hendrycksTest_management_5
data_files:
- split: 2024_01_19T15_11_16.361884
path:
- '**/details_harness|hendrycksTest-management|5_2024-01-19T15-11-16.361884.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-management|5_2024-01-19T15-11-16.361884.parquet'
- config_name: harness_hendrycksTest_marketing_5
data_files:
- split: 2024_01_19T15_11_16.361884
path:
- '**/details_harness|hendrycksTest-marketing|5_2024-01-19T15-11-16.361884.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-marketing|5_2024-01-19T15-11-16.361884.parquet'
- config_name: harness_hendrycksTest_medical_genetics_5
data_files:
- split: 2024_01_19T15_11_16.361884
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-01-19T15-11-16.361884.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-01-19T15-11-16.361884.parquet'
- config_name: harness_hendrycksTest_miscellaneous_5
data_files:
- split: 2024_01_19T15_11_16.361884
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-01-19T15-11-16.361884.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-01-19T15-11-16.361884.parquet'
- config_name: harness_hendrycksTest_moral_disputes_5
data_files:
- split: 2024_01_19T15_11_16.361884
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-01-19T15-11-16.361884.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-01-19T15-11-16.361884.parquet'
- config_name: harness_hendrycksTest_moral_scenarios_5
data_files:
- split: 2024_01_19T15_11_16.361884
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-19T15-11-16.361884.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-19T15-11-16.361884.parquet'
- config_name: harness_hendrycksTest_nutrition_5
data_files:
- split: 2024_01_19T15_11_16.361884
path:
- '**/details_harness|hendrycksTest-nutrition|5_2024-01-19T15-11-16.361884.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-nutrition|5_2024-01-19T15-11-16.361884.parquet'
- config_name: harness_hendrycksTest_philosophy_5
data_files:
- split: 2024_01_19T15_11_16.361884
path:
- '**/details_harness|hendrycksTest-philosophy|5_2024-01-19T15-11-16.361884.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-philosophy|5_2024-01-19T15-11-16.361884.parquet'
- config_name: harness_hendrycksTest_prehistory_5
data_files:
- split: 2024_01_19T15_11_16.361884
path:
- '**/details_harness|hendrycksTest-prehistory|5_2024-01-19T15-11-16.361884.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-prehistory|5_2024-01-19T15-11-16.361884.parquet'
- config_name: harness_hendrycksTest_professional_accounting_5
data_files:
- split: 2024_01_19T15_11_16.361884
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-01-19T15-11-16.361884.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-01-19T15-11-16.361884.parquet'
- config_name: harness_hendrycksTest_professional_law_5
data_files:
- split: 2024_01_19T15_11_16.361884
path:
- '**/details_harness|hendrycksTest-professional_law|5_2024-01-19T15-11-16.361884.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_law|5_2024-01-19T15-11-16.361884.parquet'
- config_name: harness_hendrycksTest_professional_medicine_5
data_files:
- split: 2024_01_19T15_11_16.361884
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-01-19T15-11-16.361884.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-01-19T15-11-16.361884.parquet'
- config_name: harness_hendrycksTest_professional_psychology_5
data_files:
- split: 2024_01_19T15_11_16.361884
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-01-19T15-11-16.361884.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-01-19T15-11-16.361884.parquet'
- config_name: harness_hendrycksTest_public_relations_5
data_files:
- split: 2024_01_19T15_11_16.361884
path:
- '**/details_harness|hendrycksTest-public_relations|5_2024-01-19T15-11-16.361884.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-public_relations|5_2024-01-19T15-11-16.361884.parquet'
- config_name: harness_hendrycksTest_security_studies_5
data_files:
- split: 2024_01_19T15_11_16.361884
path:
- '**/details_harness|hendrycksTest-security_studies|5_2024-01-19T15-11-16.361884.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-security_studies|5_2024-01-19T15-11-16.361884.parquet'
- config_name: harness_hendrycksTest_sociology_5
data_files:
- split: 2024_01_19T15_11_16.361884
path:
- '**/details_harness|hendrycksTest-sociology|5_2024-01-19T15-11-16.361884.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-sociology|5_2024-01-19T15-11-16.361884.parquet'
- config_name: harness_hendrycksTest_us_foreign_policy_5
data_files:
- split: 2024_01_19T15_11_16.361884
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-19T15-11-16.361884.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-19T15-11-16.361884.parquet'
- config_name: harness_hendrycksTest_virology_5
data_files:
- split: 2024_01_19T15_11_16.361884
path:
- '**/details_harness|hendrycksTest-virology|5_2024-01-19T15-11-16.361884.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-virology|5_2024-01-19T15-11-16.361884.parquet'
- config_name: harness_hendrycksTest_world_religions_5
data_files:
- split: 2024_01_19T15_11_16.361884
path:
- '**/details_harness|hendrycksTest-world_religions|5_2024-01-19T15-11-16.361884.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-world_religions|5_2024-01-19T15-11-16.361884.parquet'
- config_name: harness_truthfulqa_mc_0
data_files:
- split: 2024_01_19T15_11_16.361884
path:
- '**/details_harness|truthfulqa:mc|0_2024-01-19T15-11-16.361884.parquet'
- split: latest
path:
- '**/details_harness|truthfulqa:mc|0_2024-01-19T15-11-16.361884.parquet'
- config_name: harness_winogrande_5
data_files:
- split: 2024_01_19T15_11_16.361884
path:
- '**/details_harness|winogrande|5_2024-01-19T15-11-16.361884.parquet'
- split: latest
path:
- '**/details_harness|winogrande|5_2024-01-19T15-11-16.361884.parquet'
- config_name: results
data_files:
- split: 2024_01_19T15_11_16.361884
path:
- results_2024-01-19T15-11-16.361884.parquet
- split: latest
path:
- results_2024-01-19T15-11-16.361884.parquet
---
# Dataset Card for Evaluation run of abdulrahman-nuzha/belal-finetuned-llama2-1024-v2.2
<!-- Provide a quick summary of the dataset. -->
Dataset automatically created during the evaluation run of model [abdulrahman-nuzha/belal-finetuned-llama2-1024-v2.2](https://huggingface.co/abdulrahman-nuzha/belal-finetuned-llama2-1024-v2.2) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "latest" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_abdulrahman-nuzha__belal-finetuned-llama2-1024-v2.2",
	"harness_winogrande_5",
	split="latest")
```
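Because each run is stored under a split named after its timestamp (alongside the `latest` alias), you can also pick the most recent run programmatically rather than relying on the alias. A minimal sketch of that selection logic, assuming split names follow the `2024_01_19T15_11_16.361884` pattern used in this repository:

```python
from datetime import datetime

def latest_timestamped_split(split_names):
    """Return the most recent timestamped split name, ignoring the 'latest' alias."""
    stamped = [s for s in split_names if s != "latest"]
    # Split names look like '2024_01_19T15_11_16.361884'
    return max(stamped, key=lambda s: datetime.strptime(s, "%Y_%m_%dT%H_%M_%S.%f"))

print(latest_timestamped_split(["2024_01_19T15_11_16.361884", "latest"]))
```

This can be handy when successive evaluation runs add new timestamped splits to the same configuration.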
## Latest results
These are the [latest results from run 2024-01-19T15:11:16.361884](https://huggingface.co/datasets/open-llm-leaderboard/details_abdulrahman-nuzha__belal-finetuned-llama2-1024-v2.2/blob/main/results_2024-01-19T15-11-16.361884.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each task in the "results" configuration and in the "latest" split of each eval):
```python
{
"all": {
"acc": 0.4487446353511138,
"acc_stderr": 0.034504979440505464,
"acc_norm": 0.4534744253247318,
"acc_norm_stderr": 0.03530926751067455,
"mc1": 0.2460220318237454,
"mc1_stderr": 0.015077219200662592,
"mc2": 0.40020648111023094,
"mc2_stderr": 0.01385589773587115
},
"harness|arc:challenge|25": {
"acc": 0.49146757679180886,
"acc_stderr": 0.014609263165632186,
"acc_norm": 0.5264505119453925,
"acc_norm_stderr": 0.014590931358120172
},
"harness|hellaswag|10": {
"acc": 0.5850428201553476,
"acc_stderr": 0.004917076726623795,
"acc_norm": 0.7781318462457678,
"acc_norm_stderr": 0.004146537488135697
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.33,
"acc_stderr": 0.047258156262526045,
"acc_norm": 0.33,
"acc_norm_stderr": 0.047258156262526045
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.48148148148148145,
"acc_stderr": 0.043163785995113245,
"acc_norm": 0.48148148148148145,
"acc_norm_stderr": 0.043163785995113245
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.3881578947368421,
"acc_stderr": 0.03965842097512744,
"acc_norm": 0.3881578947368421,
"acc_norm_stderr": 0.03965842097512744
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.48,
"acc_stderr": 0.050211673156867795,
"acc_norm": 0.48,
"acc_norm_stderr": 0.050211673156867795
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.4679245283018868,
"acc_stderr": 0.03070948699255655,
"acc_norm": 0.4679245283018868,
"acc_norm_stderr": 0.03070948699255655
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.4375,
"acc_stderr": 0.04148415739394154,
"acc_norm": 0.4375,
"acc_norm_stderr": 0.04148415739394154
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.34,
"acc_stderr": 0.047609522856952344,
"acc_norm": 0.34,
"acc_norm_stderr": 0.047609522856952344
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.4,
"acc_stderr": 0.04923659639173309,
"acc_norm": 0.4,
"acc_norm_stderr": 0.04923659639173309
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.36,
"acc_stderr": 0.04824181513244218,
"acc_norm": 0.36,
"acc_norm_stderr": 0.04824181513244218
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.42196531791907516,
"acc_stderr": 0.0376574669386515,
"acc_norm": 0.42196531791907516,
"acc_norm_stderr": 0.0376574669386515
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.1568627450980392,
"acc_stderr": 0.03618664819936245,
"acc_norm": 0.1568627450980392,
"acc_norm_stderr": 0.03618664819936245
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.56,
"acc_stderr": 0.04988876515698589,
"acc_norm": 0.56,
"acc_norm_stderr": 0.04988876515698589
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.4340425531914894,
"acc_stderr": 0.03240038086792747,
"acc_norm": 0.4340425531914894,
"acc_norm_stderr": 0.03240038086792747
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.30701754385964913,
"acc_stderr": 0.04339138322579861,
"acc_norm": 0.30701754385964913,
"acc_norm_stderr": 0.04339138322579861
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.496551724137931,
"acc_stderr": 0.041665675771015785,
"acc_norm": 0.496551724137931,
"acc_norm_stderr": 0.041665675771015785
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.2777777777777778,
"acc_stderr": 0.023068188848261114,
"acc_norm": 0.2777777777777778,
"acc_norm_stderr": 0.023068188848261114
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.25396825396825395,
"acc_stderr": 0.03893259610604675,
"acc_norm": 0.25396825396825395,
"acc_norm_stderr": 0.03893259610604675
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.3,
"acc_stderr": 0.046056618647183814,
"acc_norm": 0.3,
"acc_norm_stderr": 0.046056618647183814
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.47419354838709676,
"acc_stderr": 0.028406095057653315,
"acc_norm": 0.47419354838709676,
"acc_norm_stderr": 0.028406095057653315
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.3399014778325123,
"acc_stderr": 0.0333276906841079,
"acc_norm": 0.3399014778325123,
"acc_norm_stderr": 0.0333276906841079
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.42,
"acc_stderr": 0.04960449637488584,
"acc_norm": 0.42,
"acc_norm_stderr": 0.04960449637488584
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.5696969696969697,
"acc_stderr": 0.03866225962879077,
"acc_norm": 0.5696969696969697,
"acc_norm_stderr": 0.03866225962879077
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.5353535353535354,
"acc_stderr": 0.03553436368828061,
"acc_norm": 0.5353535353535354,
"acc_norm_stderr": 0.03553436368828061
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.6528497409326425,
"acc_stderr": 0.03435696168361355,
"acc_norm": 0.6528497409326425,
"acc_norm_stderr": 0.03435696168361355
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.4076923076923077,
"acc_stderr": 0.024915243985987844,
"acc_norm": 0.4076923076923077,
"acc_norm_stderr": 0.024915243985987844
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.25555555555555554,
"acc_stderr": 0.02659393910184408,
"acc_norm": 0.25555555555555554,
"acc_norm_stderr": 0.02659393910184408
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.3739495798319328,
"acc_stderr": 0.031429466378837076,
"acc_norm": 0.3739495798319328,
"acc_norm_stderr": 0.031429466378837076
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.2847682119205298,
"acc_stderr": 0.03684881521389023,
"acc_norm": 0.2847682119205298,
"acc_norm_stderr": 0.03684881521389023
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.5889908256880734,
"acc_stderr": 0.021095050687277656,
"acc_norm": 0.5889908256880734,
"acc_norm_stderr": 0.021095050687277656
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.3101851851851852,
"acc_stderr": 0.03154696285656628,
"acc_norm": 0.3101851851851852,
"acc_norm_stderr": 0.03154696285656628
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.5294117647058824,
"acc_stderr": 0.03503235296367993,
"acc_norm": 0.5294117647058824,
"acc_norm_stderr": 0.03503235296367993
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.5443037974683544,
"acc_stderr": 0.03241920684693335,
"acc_norm": 0.5443037974683544,
"acc_norm_stderr": 0.03241920684693335
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.5426008968609866,
"acc_stderr": 0.033435777055830646,
"acc_norm": 0.5426008968609866,
"acc_norm_stderr": 0.033435777055830646
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.45038167938931295,
"acc_stderr": 0.04363643698524779,
"acc_norm": 0.45038167938931295,
"acc_norm_stderr": 0.04363643698524779
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.628099173553719,
"acc_stderr": 0.044120158066245044,
"acc_norm": 0.628099173553719,
"acc_norm_stderr": 0.044120158066245044
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.5,
"acc_stderr": 0.04833682445228318,
"acc_norm": 0.5,
"acc_norm_stderr": 0.04833682445228318
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.4171779141104294,
"acc_stderr": 0.038741028598180814,
"acc_norm": 0.4171779141104294,
"acc_norm_stderr": 0.038741028598180814
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.38392857142857145,
"acc_stderr": 0.04616143075028547,
"acc_norm": 0.38392857142857145,
"acc_norm_stderr": 0.04616143075028547
},
"harness|hendrycksTest-management|5": {
"acc": 0.4854368932038835,
"acc_stderr": 0.049486373240266376,
"acc_norm": 0.4854368932038835,
"acc_norm_stderr": 0.049486373240266376
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.6452991452991453,
"acc_stderr": 0.03134250486245402,
"acc_norm": 0.6452991452991453,
"acc_norm_stderr": 0.03134250486245402
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.52,
"acc_stderr": 0.050211673156867795,
"acc_norm": 0.52,
"acc_norm_stderr": 0.050211673156867795
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.6104725415070242,
"acc_stderr": 0.017438082556264597,
"acc_norm": 0.6104725415070242,
"acc_norm_stderr": 0.017438082556264597
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.48265895953757226,
"acc_stderr": 0.026902900458666647,
"acc_norm": 0.48265895953757226,
"acc_norm_stderr": 0.026902900458666647
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.27262569832402234,
"acc_stderr": 0.014893391735249619,
"acc_norm": 0.27262569832402234,
"acc_norm_stderr": 0.014893391735249619
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.4803921568627451,
"acc_stderr": 0.028607893699576066,
"acc_norm": 0.4803921568627451,
"acc_norm_stderr": 0.028607893699576066
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.5562700964630225,
"acc_stderr": 0.028217683556652308,
"acc_norm": 0.5562700964630225,
"acc_norm_stderr": 0.028217683556652308
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.5370370370370371,
"acc_stderr": 0.027744313443376536,
"acc_norm": 0.5370370370370371,
"acc_norm_stderr": 0.027744313443376536
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.36524822695035464,
"acc_stderr": 0.028723863853281278,
"acc_norm": 0.36524822695035464,
"acc_norm_stderr": 0.028723863853281278
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.3500651890482399,
"acc_stderr": 0.012182552313215175,
"acc_norm": 0.3500651890482399,
"acc_norm_stderr": 0.012182552313215175
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.5,
"acc_stderr": 0.030372836961539352,
"acc_norm": 0.5,
"acc_norm_stderr": 0.030372836961539352
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.4215686274509804,
"acc_stderr": 0.019977422600227467,
"acc_norm": 0.4215686274509804,
"acc_norm_stderr": 0.019977422600227467
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.4727272727272727,
"acc_stderr": 0.04782001791380063,
"acc_norm": 0.4727272727272727,
"acc_norm_stderr": 0.04782001791380063
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.4448979591836735,
"acc_stderr": 0.031814251181977865,
"acc_norm": 0.4448979591836735,
"acc_norm_stderr": 0.031814251181977865
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.582089552238806,
"acc_stderr": 0.03487558640462064,
"acc_norm": 0.582089552238806,
"acc_norm_stderr": 0.03487558640462064
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.64,
"acc_stderr": 0.04824181513244218,
"acc_norm": 0.64,
"acc_norm_stderr": 0.04824181513244218
},
"harness|hendrycksTest-virology|5": {
"acc": 0.3614457831325301,
"acc_stderr": 0.03740059382029321,
"acc_norm": 0.3614457831325301,
"acc_norm_stderr": 0.03740059382029321
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.6198830409356725,
"acc_stderr": 0.037229657413855394,
"acc_norm": 0.6198830409356725,
"acc_norm_stderr": 0.037229657413855394
},
"harness|truthfulqa:mc|0": {
"mc1": 0.2460220318237454,
"mc1_stderr": 0.015077219200662592,
"mc2": 0.40020648111023094,
"mc2_stderr": 0.01385589773587115
},
"harness|winogrande|5": {
"acc": 0.7411207576953434,
"acc_stderr": 0.012310515810993376
},
"harness|gsm8k|5": {
"acc": 0.10538286580742987,
"acc_stderr": 0.008457575884041755
}
}
```
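Category-level scores like the MMLU ("hendrycksTest") subtasks above are often aggregated into a single number. A minimal offline sketch of that aggregation, using only a small excerpt of the scores reported above (the full set lives in the linked results file):

```python
# Aggregate per-task accuracies from a results dict shaped like the JSON above.
# Only three of the 57 MMLU subtask scores are embedded here as an excerpt.
results = {
    "harness|hendrycksTest-abstract_algebra|5": {"acc": 0.33},
    "harness|hendrycksTest-anatomy|5": {"acc": 0.48148148148148145},
    "harness|hendrycksTest-astronomy|5": {"acc": 0.3881578947368421},
}

# MMLU subtasks are identified by the "hendrycksTest" prefix in the task key.
mmlu_scores = [
    v["acc"] for k, v in results.items() if k.startswith("harness|hendrycksTest-")
]
mmlu_mean = sum(mmlu_scores) / len(mmlu_scores)
print(f"MMLU subtasks: {len(mmlu_scores)}, mean acc: {mmlu_mean:.4f}")
```

The leaderboard computes its MMLU figure as this unweighted mean over all 57 subtasks; the excerpt above only illustrates the mechanics.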
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] |
MuthuAI9/SecurityEval_Transformed_v2 | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: test
num_bytes: 73028
num_examples: 130
download_size: 37425
dataset_size: 73028
configs:
- config_name: default
data_files:
- split: test
path: data/test-*
---
|
lyon-nlp/clustering-hal-s2s | ---
license: apache-2.0
task_categories:
- text-classification
language:
- fr
size_categories:
- 10K<n<100K
---
## Clustering HAL
This dataset was created by scraping data from the HAL platform.
Over 80,000 articles were scraped, keeping their id, title and category.
It was originally used for the French version of [MTEB](https://github.com/embeddings-benchmark/mteb), but it can also be used for various clustering or classification tasks.
### Usage
To use this dataset, you can run the following code:
```py
from datasets import load_dataset
dataset = load_dataset("lyon-nlp/clustering-hal-s2s", split="test")
``` |
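Since each record carries an id, title and category (per the description above), the category field can serve as a cluster or class label. A minimal offline sketch of grouping titles by category, using hypothetical records (the actual field names and values in the dataset may differ):

```python
from collections import defaultdict

# Hypothetical records mirroring the fields described in the card (id, title, category).
records = [
    {"id": "hal-001", "title": "Apprentissage profond pour la vision", "category": "info"},
    {"id": "hal-002", "title": "Analyse spectrale des signaux", "category": "phys"},
    {"id": "hal-003", "title": "Réseaux de neurones convolutifs", "category": "info"},
]

# Group article titles under their category label.
by_category = defaultdict(list)
for rec in records:
    by_category[rec["category"]].append(rec["title"])

print({cat: len(titles) for cat, titles in by_category.items()})
```

For clustering evaluation (as in MTEB), these category labels are compared against the clusters produced from title embeddings.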
ChavyvAkvar/fiction-en | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 991521358
num_examples: 103103
download_size: 586257910
dataset_size: 991521358
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
license: apache-2.0
task_categories:
- text-generation
language:
- en
pretty_name: parallel fiction english
---
This is the text-only version of the [ParallelFiction-Ja_En-100k](https://huggingface.co/datasets/NilanE/ParallelFiction-Ja_En-100k) dataset, intended for continual pre-training for creative writing and roleplaying purposes.