| datasetId | card |
|---|---|
Rayan2023/dataset2teste | ---
license: openrail
---
|
Andyson/DefectSpectrum | ---
license: mit
task_categories:
- image-segmentation
- image-to-text
language:
- en
tags:
- industry
pretty_name: DefectSpectrum
size_categories:
- 1K<n<10K
---
# Defect Spectrum Dataset
Welcome to the Defect Spectrum dataset repository. This comprehensive benchmark is a granular collection of large-scale defect datasets with rich semantics, designed to push the frontier of industrial defect inspection research and applications.
## Overview
Defect inspection is a critical component within the closed-loop manufacturing system. To facilitate advanced research and development in this domain, we introduce the Defect Spectrum dataset. It offers precise, semantics-abundant, and large-scale annotations for a wide range of industrial defects. This dataset is an enhancement over existing benchmarks, providing refined annotations and introducing detailed semantic layers, allowing for the distinction between multiple defect types within a single image.
### Features
- **Semantics-Abundant Annotations**: Each defect is meticulously labeled, not just at the pixel level but with rich contextual information, providing insights into the defect type and implications.
- **High Precision**: Annotations are refined by experts to capture even the subtlest of defects, ensuring high precision.
- **Large-Scale Data**: Building on four key industrial benchmarks, Defect Spectrum stands out with its extensive coverage and depth.
- **Incorporates Descriptive Captions**: To bridge the gap towards Vision Language Models (VLMs), each sample is accompanied by a descriptive caption.
### Directory Structure
```plaintext
DefectSpectrum/
├── DS-MVTec/
│   ├── bottle/
│   │   ├── image/    # Original images of the bottle category
│   │   ├── caption/  # Descriptive captions of the bottle category
│   │   ├── mask/     # Single-channel defect masks for the bottle category
│   │   └── rgb_mask/ # Colored defect masks for better visualization
│   ├── cable/
│   │   ├── image/    # Original images of the cable category
│   │   ├── caption/  # Descriptive captions of the cable category
│   │   ├── mask/     # Single-channel defect masks for the cable category
│   │   └── rgb_mask/ # Colored defect masks for better visualization
│   └── ...
├── DS-VISION/
│   └── ...
├── DS-DAGM/
│   └── ...
└── DS-Cotton-Fabric/
    └── ...
```
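Given the layout above, image-mask-caption triples can be assembled by walking a category folder. A minimal sketch, assuming (the card does not specify) that files in `image/`, `mask/`, and `caption/` share the same stem, with captions stored as `.txt`:

```python
from pathlib import Path

def collect_pairs(root, category):
    """Pair each image with its mask and caption by shared file stem.

    Assumes image/000.png <-> mask/000.png <-> caption/000.txt; adjust
    if the actual file naming differs.
    """
    base = Path(root) / "DS-MVTec" / category
    pairs = []
    for img in sorted((base / "image").glob("*")):
        mask = base / "mask" / img.name
        caption = (base / "caption" / img.stem).with_suffix(".txt")
        pairs.append({
            "image": img,
            "mask": mask if mask.exists() else None,       # None for defect-free samples
            "caption": caption if caption.exists() else None,
        })
    return pairs
```

Missing masks or captions are kept as `None` rather than dropped, since defect-free images may have no mask.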
## To-Do List
- [x] Task 1: Release DS-MVTec image-mask pairs.
- [x] Task 2: Release DS-VISION, DS-DAGM, and DS-Cotton-Fabric image-mask pairs.
- [ ] Task 3: Release captions.
- [ ] Task 4: Release selected synthetic data.
|
YiyangAiLab/POVID_preference_data_for_VLLMs | ---
license: cc-by-4.0
---
|
Weni/wenigpt-agent-1.0.0 | ---
dataset_info:
features:
- name: title
dtype: string
- name: link
dtype: string
- name: content
dtype: string
- name: content_base_uuid
dtype: string
- name: base_link_uuid
dtype: string
- name: adjective
dtype: string
- name: name
dtype: string
- name: occupation
dtype: string
- name: chatbot_goal
dtype: string
- name: instructions
sequence: string
splits:
- name: train
num_bytes: 6396795
num_examples: 627
download_size: 2820491
dataset_size: 6396795
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Leul78/qanda | ---
license: apache-2.0
---
|
ibivibiv/alpaca_tasksource13 | ---
dataset_info:
features:
- name: input
dtype: string
- name: instruction
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 135208657
num_examples: 253970
download_size: 76628104
dataset_size: 135208657
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
hordruma/ontario_laws_and_regs | ---
license: mit
language:
- en
tags:
- law
- legal
- canada
- ontario
- legislation
- regulations
language_details: en-CA, en-GB
pretty_name: Ontario Law and Regulations
size_categories:
- 1K<n<10K
source_datasets:
- Ontario eLaws, Legislation & Regulations
task_categories:
- text-generation
- fill-mask
- text-retrieval
task_ids:
- language-modeling
- masked-language-modeling
- document-retrieval
viewer: true
---
# ⚖️ Ontario Laws & Regs ⚖️
The Ontario Laws & Regs dataset contains 5,096 Ontario laws and regulations.
The laws and regulations consist of the most recent version of all current and revoked laws and regs.
The dataset is distributed under the MIT license and is intended to facilitate ML and data tasks involving Ontario legislation.
In addition, a scraper is provided which is capable of capturing different configurations of the data directly from the Ontario eLaws website, found [here](https://github.com/hordruma/elaws_scraper).
## Structure
Each law and regulation is stored in its own JSON file, which contains the following fields and sub-fields:
1. act_info/reg_info: Act/Reg info, including:
   - full_title: Full title
   - act_name_text/reg_name_text: Act/Reg name text
   - citation: Citation
   - act_under: For regulations only, the parent act
   - url: URL
   - date_scraped: Date scraped
2. copyright: Crown copyright blurb
   - Copyright: As required by the eLaws TOS
3. versions: A list of versions, each containing:
   - a_href: URL slug for the given version
   - valid_from: Valid from
   - valid_to: Valid to
4. current_regs: Current regs (for statutes), a list of associated regulations, each containing:
   - a_href: URL slug for the given regulation
   - Citation: Citation
   - title: Title
5. revoked_regs: Revoked regs (for statutes), a list of associated revoked regulations, each containing:
   - revoked_reg_a_href: URL slug for the given revoked regulation
   - revoked_reg_citation: Citation
   - revoked_reg_title: Title
6. content: Contents of the act or regulation, a list of sections, each containing:
   - id: Section number
   - section: Section title
   - content: Section content, as UTF-8 text
   - raw_html: Raw HTML of the section content, containing extra features
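As a sketch of how the fields above might be consumed (the record shape follows the listing, but treat exact key names as assumptions until checked against a real file):

```python
def summarize_law(law: dict) -> dict:
    """Pull headline fields out of one law/regulation record.

    Acts carry 'act_info'; regulations carry 'reg_info' (plus
    'act_under' for the parent act), so we accept either.
    """
    info = law.get("act_info") or law.get("reg_info") or {}
    return {
        "title": info.get("full_title"),
        "citation": info.get("citation"),
        "n_versions": len(law.get("versions", [])),
        # One entry per section of the act/regulation body
        "sections": [s.get("section") for s in law.get("content", [])],
    }
```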
## Collection
Documents were sourced from the [Ontario eLaws Website](https://www.ontario.ca/laws).
[`eLaws Scraper`](https://github.com/hordruma/elaws_scraper/) was used to extract the documents and parse them to jsons.
## Licence
Both the dataset and its associated scraper are distributed under MIT license. |
dsfsi/za-isizulu-siswati-news | ---
license: cc-by-sa-4.0
language:
- ss
- zu
task_categories:
- text-classification
tags:
- dsfsi-datasets
- siswati
- isizulu
pretty_name: za-isizulu-siswati-news
---
# IsiZulu News (articles and headlines) and Siswati News (headlines) Corpora - za-isizulu-siswati-news-2022
[DOI](https://doi.org/10.5281/zenodo.7193346) [arXiv](https://arxiv.org/abs/2306.07426)
### About Dataset
Dataset for both isiZulu news (articles and headlines) and Siswati news headlines. The data was scraped from the internet: from the Isolezwe news website (http://www.isolezwe.co.za) and from public posts on the SABC LigwalagwalaFM Facebook page (https://www.facebook.com/ligwalagwalafm/), respectively.
The resulting datasets are isiZulu news articles, isiZulu news headlines, and Siswati news headlines.
After collection, the datasets were sent to annotators and returned once annotated. The datasets contain special characters, some English words, and non-ASCII characters that must be removed prior to model training. The aim of these three datasets is to create baseline news-categorisation models for two South African low-resource languages, isiZulu and Siswati.
For categorisation, we use high level [IPTC NewsCodes](https://iptc.org/standards/newscodes/) as categories. You can view the news categories here [data/news-categories-iptc-newscodes.csv](data/news-categories-iptc-newscodes.csv)
The datasets were found to have class categories with very few observations, so categories with fewer than 35 observations were removed for isiZulu, and those with fewer than 6 observations for Siswati.
The dataset has both full category data as well as reduced category data.
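The rare-category filtering described above can be sketched with a simple frequency count (the `(text, category)` row shape is an assumption for illustration; the actual columns may differ):

```python
from collections import Counter

def drop_rare_categories(rows, min_count):
    """Remove rows whose category has fewer than `min_count` examples.

    The card uses min_count=35 for isiZulu and 6 for Siswati.
    """
    counts = Counter(cat for _, cat in rows)
    return [(text, cat) for text, cat in rows if counts[cat] >= min_count]
```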
Please see the [data-statement.md](data-statement.md) for full dataset information.
## Online Repository link
* Link to the DOI data repository - [Zenodo Data Repository](https://doi.org/10.5281/zenodo.7193346)
## Authors
* **Andani Madodonga**
* **Vukosi Marivate** - [@vukosi](https://twitter.com/vukosi)
* **Matthew Adendorff**
See also the list of [contributors](https://github.com/dsfsi/za-isizulu-siswati-news-2022/contributors) who participated in this project.
## Citation
**Citation:**
```bibtex
@article{Madodonga_Marivate_Adendorff_2023, title={Izindaba-Tindzaba: Machine learning news categorisation for Long and Short Text for isiZulu and Siswati}, volume={4}, url={https://upjournals.up.ac.za/index.php/dhasa/article/view/4449}, DOI={10.55492/dhasa.v4i01.4449}, author={Madodonga, Andani and Marivate, Vukosi and Adendorff, Matthew}, year={2023}, month={Jan.} }
```
## License
Data is licensed under CC BY-SA 4.0. |
shamotskyi/lmes_LOW | ---
configs:
- config_name: default
data_files:
- split: train
path: "data/train/LOWTask.jsonl"
- split: fewshot
path: "data/fewshot/LOWTask.jsonl"
---
# Dataset Card for LMES-LOW (Eval-UA-tion benchmark)
This dataset (described in paper **TODO**) is part of the LMentry-static-UA set of tasks in the Eval-UA-tion benchmark.
It aims to evaluate (L)LMs' Ukrainian language skills.
The LMES dataset is inspired by the (awesome!) LMentry benchmark ([aviaefrat/lmentry](https://github.com/aviaefrat/lmentry/)).
LMES-LOW asks questions such as "what's the fifth letter of the word ..." in many different ways. |
felipesampaio2010/scooter | ---
license: openrail
---
|
mesolitica/python-act | ---
language:
- ms
- en
---
# Python Act
Python Code Interpreter agent, originally from https://huggingface.co/datasets/xingyaoww/code-act, but run on a filtered https://huggingface.co/datasets/teknium/OpenHermes-2.5 and a filtered https://huggingface.co/datasets/m-a-p/Code-Feedback
Source code at https://github.com/mesolitica/malaysian-dataset/tree/master/chatbot/python-act
## How does it work?
1. When the first question is sent to the LLM, the LLM generates a problem description and code to execute inside Jupyter.
2. A Jupyter session is spawned inside a Docker container, so each unique question gets its own container.
3. The loop continues until the LLM is satisfied with the output from the Jupyter session.
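The loop above hinges on pulling the `<execute>` blocks out of each assistant reply (the tag format is the one shown in the system prompt of the example data below). A minimal sketch of that extraction step; the actual harness code lives in the linked repo:

```python
import re

EXECUTE_RE = re.compile(r"<execute>(.*?)</execute>", re.DOTALL)

def extract_code_blocks(assistant_message: str) -> list:
    """Return the code snippets the assistant asked to run.

    Each snippet would be fed to the per-question Jupyter container,
    and its output appended to the conversation as the next user turn.
    """
    return [m.strip() for m in EXECUTE_RE.findall(assistant_message)]
```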
## Example data
```python
{'original': "Can you explain the difference between the 'asfreq' and 'resample' methods in Pandas? In what scenarios should each one be used?",
'result': [{'role': 'system',
'content': 'A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user\'s questions.\nThe assistant can interact with an interactive Python (Jupyter Notebook) environment and receive the corresponding output when needed. The code should be enclosed using "<execute>" tag, for example: <execute> print("Hello World!") </execute>.\nThe assistant should attempt fewer things at a time instead of putting too much code in one <execute> block. The assistant can install packages through PIP by <execute> !pip install [package needed] </execute> and should always import packages and define variables before starting to use them.\nThe assistant should stop <execute> and provide an answer when they have already obtained the answer from the execution result. Whenever possible, execute the code for the user using <execute> instead of providing it.\nThe assistant\'s response should be concise, but do express their thoughts.\n'},
{'role': 'user',
'content': "Can you explain the difference between the 'asfreq' and 'resample' methods in Pandas? In what scenarios should each one be used?"},
{'role': 'assistant',
'content': "Certainly! In Pandas, both 'asfreq' and 'resample' methods are used for working with frequency of data, but they serve slightly different purposes.\n\nThe 'asfreq' method is used to change the frequency of a TimeSeries or DateOffset based object to a specified frequency. It returns a new object with the specified frequency, while preserving the original object unchanged. This method is useful when you have a TimeSeries object with a specific frequency and you want to convert it to another frequency for further analysis or visualization.\n\nFor example, let's create a daily TimeSeries object and then change its frequency to weekly using the 'asfreq' method:\n\n<execute>\nimport pandas as pd\n\ndaily_ts = pd.Series(range(1, 31), index=pd.date_range('2023-01-01', periods=30, freq='D'))\nweekly_ts = daily_ts.asfreq('W-SAT')\n\nprint(daily_ts)\nprint(weekly_ts)\n</execute>"},
{'role': 'user',
'content': 'Execution Output:\n2023-01-01 1\n2023-01-02 2\n2023-01-03 3\n2023-01-04 4\n2023-01-05 5\n2023-01-06 6\n2023-01-07 7\n2023-01-08 8\n2023-01-09 9\n2023-01-10 10\n2023-01-11 11\n2023-01-12 12\n2023-01-13 13\n2023-01-14 14\n2023-01-15 15\n2023-01-16 16\n2023-01-17 17\n2023-01-18 18\n2023-01-19 19\n2023-01-20 20\n2023-01-21 21\n2023-01-22 22\n2023-01-23 23\n2023-01-24 24\n2023-01-25 25\n2023-01-26 26\n2023-01-27 27\n2023-01-28 28\n2023-01-29 29\n2023-01-30 30\nFreq: D, dtype: int64\n2023-01-07 7\n2023-01-14 14\n2023-01-21 21\n2023-01-28 28\nFreq: W-SAT, dtype: int64\n\n'},
{'role': 'assistant',
'content': "As you can see, the 'asfreq' method changes the frequency of the TimeSeries object from daily to weekly, while preserving the original daily TimeSeries unchanged.\n\nOn the other hand, the 'resample' method is used to resample a TimeSeries or DateOffset based object to a specified frequency, by interpolating or aggregating the data. It returns a new object with the specified frequency, and the original object remains unchanged. This method is useful when you have a TimeSeries object with a specific frequency and you want to analyze or visualize it at a different frequency, taking into account the data points in between the original frequency.\n\nFor example, let's create a daily TimeSeries object and then resample it to a weekly frequency using the 'resample' method:\n\n<execute>\ndaily_ts = pd.Series(range(1, 31), index=pd.date_range('2023-01-01', periods=30, freq='D'))\nweekly_ts = daily_ts.resample('W-SAT')\n\nprint(daily_ts)\nprint(weekly_ts)\n</execute>"},
{'role': 'user',
'content': 'Execution Output:\n2023-01-01 1\n2023-01-02 2\n2023-01-03 3\n2023-01-04 4\n2023-01-05 5\n2023-01-06 6\n2023-01-07 7\n2023-01-08 8\n2023-01-09 9\n2023-01-10 10\n2023-01-11 11\n2023-01-12 12\n2023-01-13 13\n2023-01-14 14\n2023-01-15 15\n2023-01-16 16\n2023-01-17 17\n2023-01-18 18\n2023-01-19 19\n2023-01-20 20\n2023-01-21 21\n2023-01-22 22\n2023-01-23 23\n2023-01-24 24\n2023-01-25 25\n2023-01-26 26\n2023-01-27 27\n2023-01-28 28\n2023-01-29 29\n2023-01-30 30\nFreq: D, dtype: int64\nDatetimeIndexResampler [freq=<Week: weekday=5>, axis=0, closed=right, label=right, convention=start, origin=start_day]\n\n'},
{'role': 'assistant',
'content': "As you can see, the 'resample' method changes the frequency of the TimeSeries object from daily to weekly, but it also interpolates the data between the original frequency, taking into account the data points in between the original frequency. This results in a new TimeSeries object with a weekly frequency, but with more data points than the original daily TimeSeries.\n\nIn summary, the 'asfreq' method is used to change the frequency of a TimeSeries or DateOffset based object to a specified frequency, preserving the original object unchanged. The 'resample' method is used to resample a TimeSeries or DateOffset based object to a specified frequency, by interpolating or aggregating the data, and returning a new object with the specified frequency.\n\nScenarios where each one should be used:\n\n* Use 'asfreq' when you have a TimeSeries object with a specific frequency and you want to convert it to another frequency for further analysis or visualization, without any interpolation or aggregation of the data.\n* Use 'resample' when you have a TimeSeries object with a specific frequency and you want to analyze or visualize it at a different frequency, taking into account the data points in between the original frequency, by interpolating or aggregating the data.\n\nRemember that both methods can be used with other frequency options like 'M', 'Q', 'Y' for monthly, quarterly and yearly frequency, respectively. \n"}]}
``` |
AdapterOcean/python3-standardized_cluster_11_alpaca | ---
dataset_info:
features:
- name: input
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 8672613
num_examples: 3611
download_size: 0
dataset_size: 8672613
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "python3-standardized_cluster_11_alpaca"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
NaturalStupidlty/unstable_controlnet | ---
dataset_info:
features:
- name: image
dtype: image
- name: conditioning_image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 347023230.768
num_examples: 2032
download_size: 350892788
dataset_size: 347023230.768
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
MU-NLPC/Calc-asdiv_a | ---
dataset_info:
- config_name: default
features:
- name: id
dtype: string
- name: question
dtype: string
- name: chain
dtype: string
- name: result
dtype: string
- name: result_float
dtype: float64
- name: result_unit
dtype: string
- name: grade
dtype: int64
- name: source_question
dtype: string
splits:
- name: test
num_bytes: 415636
num_examples: 1218
download_size: 152949
dataset_size: 415636
- config_name: original-splits
features:
- name: id
dtype: string
- name: question
dtype: string
- name: chain
dtype: string
- name: result
dtype: string
- name: result_float
dtype: float64
- name: result_unit
dtype: string
- name: grade
dtype: int64
- name: source_question
dtype: string
splits:
- name: test
num_bytes: 415664
num_examples: 1218
download_size: 152949
dataset_size: 415664
configs:
- config_name: default
data_files:
- split: test
path: data/test-*
- config_name: original-splits
data_files:
- split: test
path: original-splits/test-*
---
# Dataset Card for Calc-asdiv_a
## Summary
The dataset is a collection of simple math word problems focused on arithmetics. It is derived from the arithmetic subset of ASDiv ([original repo](https://github.com/chaochun/nlu-asdiv-dataset)).
The main addition in this dataset variant is the `chain` column. It was created by converting the solution to a simple html-like language that can be easily
parsed (e.g. by BeautifulSoup). The data contains 3 types of tags:
- gadget: A tag whose content is intended to be evaluated by calling an external tool (sympy-based calculator in this case)
- output: An output of the external tool
- result: The final answer to the mathematical problem (a number)
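Because the tags are flat and (as far as the format above suggests) never nested, the chain can also be split with a small stdlib regex instead of BeautifulSoup. A sketch, with an invented example chain:

```python
import re

TAG_RE = re.compile(r"<(gadget|output|result)>(.*?)</\1>", re.DOTALL)

def parse_chain(chain: str):
    """Split a `chain` string into (tag, text) steps.

    gadget = call to the external tool, output = the tool's reply,
    result = the final answer. Assumes the tags are not nested.
    """
    return [(tag, text.strip()) for tag, text in TAG_RE.findall(chain)]

# Hypothetical chain for illustration; real chains come from the dataset.
example = "<gadget>2 + 3</gadget><output>5</output><result>5</result>"
```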
## Supported Tasks
This variant of the dataset is intended for training Chain-of-Thought reasoning models able to use external tools to enhance the factuality of their responses.
This dataset presents in-context scenarios where models can outsource the computations in the reasoning chain to a calculator.
## Data splits
The dataset does not contain data splits. We consider the whole dataset as a testing benchmark.
## Attributes:
- **id**: id of the example
- **question**: problem description in English
- **chain**: series of simple operations (derived from **expression**) that lead to the solution
- **result**: the solution for x as a number or fraction (string)
- **result_float**: same as **result** but converted to a float
- **result_unit**: the units of the result
- **grade**: an estimate of the school grade in which the problem would be practiced
- **source_question**: the source from which the example originates
Attributes **id**, **question**, **chain**, and **result** are present in all datasets in the [Calc-X collection](https://huggingface.co/collections/MU-NLPC/calc-x-652fee9a6b838fd820055483).
## Related work
This dataset was created as a part of a larger effort in training models capable of using a calculator during inference, which we call Calcformers.
- [**Calc-X collection**](https://huggingface.co/collections/MU-NLPC/calc-x-652fee9a6b838fd820055483) - datasets for training Calcformers
- [**Calcformers collection**](https://huggingface.co/collections/MU-NLPC/calcformers-65367392badc497807b3caf5) - calculator-using models we trained and published on HF
- [**Calc-X and Calcformers paper**](https://arxiv.org/abs/2305.15017)
- [**Calc-X and Calcformers repo**](https://github.com/prompteus/calc-x)
Here are links to the original dataset:
- [**original ASDiv dataset and repo**](https://github.com/chaochun/nlu-asdiv-dataset)
- [**original ASDiv paper**](https://aclanthology.org/2020.acl-main.92)
## Licence
CC BY-NC 4.0, consistent with the original source dataset linked above.
## Cite
If you use this dataset in research, please cite the original [ASDiv paper](https://aclanthology.org/2020.acl-main.92), and [Calc-X collection](https://arxiv.org/abs/2305.15017) as follows:
```bibtex
@inproceedings{kadlcik-etal-2023-soft,
title = "Calc-X and Calcformers: Empowering Arithmetical Chain-of-Thought through Interaction with Symbolic Systems",
    author = "Marek Kadlčík and Michal Štefánik and Ondřej Sotolář and Vlastimil Martinek",
    booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2023",
address = "Singapore, Singapore",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/2305.15017",
}
```
|
metredo085/dsafthsdhbgchnb | ---
license: apache-2.0
---
|
jb-cnn/Lakh-piano-256-style-transfer | ---
license: unknown
---
|
kyujinpy/KOpen-platypus | ---
language:
- en
- ko
license: cc-by-4.0
size_categories:
- 10K<n<100K
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: input
dtype: string
- name: output
dtype: string
- name: instruction
dtype: string
- name: data_source
dtype: string
splits:
- name: train
num_bytes: 34213211
num_examples: 24926
download_size: 16662523
dataset_size: 34213211
---
# KOpenPlatypus: Korean Translation dataset about Open-Platypus
## Korean Translation Method
I used the [DeepL Pro API](https://www.deepl.com/ko/pro/change-plan?cta=header-pro#single) and Selenium.
The translation took about 140 hours.
+) If you build a model or dataset using this data, a brief source attribution would be a great help to our research 😭😭
## Korean Translation post-processing
We also applied post-processing; see the list below. (*Over roughly 2,000 code-related examples were corrected by hand.)
1. Code and its comments were kept as-is; only the explanatory text was translated into Korean.
2. In addition to 1, outputs in Python, Java, C++, XML, etc. were preserved in their original data format as much as possible.
3. Single numbers and English words were carried over unchanged from the original.
4. Incomplete DeepL Pro translations were fixed by hand (for example, outputs containing '[...]').
5. If a DeepL Pro translation was less than 50% of the original's character count, the translation was corrected.
6. Texts longer than 1,500 characters were translated through the API instead.
7. Proper nouns were preserved as much as possible.
- We believe more than 95% of translation errors have been fixed.
- The work took about 144 hours in total (72h/72h; translation/post-processing).
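A minimal sketch of the automated checks in the post-processing described above, flagging incomplete or suspiciously short translations for manual review (thresholds as stated; the function shape is an illustration, not the actual pipeline):

```python
def needs_review(source: str, translated: str, ratio: float = 0.5) -> bool:
    """Flag a DeepL output for manual correction.

    True if the translation contains the incompleteness marker '[...]'
    or is shorter than `ratio` of the source's character count.
    """
    if "[...]" in translated:
        return True
    return len(translated) < ratio * len(source)
```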
## Introduction
This dataset is focused on improving LLM logical reasoning skills and was used to train the Platypus2 models. It is comprised of the following datasets, which were filtered using keyword search and then Sentence Transformers to remove questions with a similarity above 80%:
| Dataset Name | License Type |
|--------------------------------------------------------------|--------------|
| [PRM800K](https://github.com/openai/prm800k) | MIT |
| [ScienceQA](https://github.com/lupantech/ScienceQA) | [Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International](https://creativecommons.org/licenses/by-nc-sa/4.0/) |
| [SciBench](https://github.com/mandyyyyii/scibench) | MIT |
| [ReClor](https://whyu.me/reclor/) | Non-commercial |
| [TheoremQA](https://huggingface.co/datasets/wenhu/TheoremQA) | MIT |
| [`nuprl/leetcode-solutions-python-testgen-gpt4`](https://huggingface.co/datasets/nuprl/leetcode-solutions-python-testgen-gpt4/viewer/nuprl--leetcode-solutions-python-testgen-gpt4/train?p=1) | None listed |
| [`jondurbin/airoboros-gpt4-1.4.1`](https://huggingface.co/datasets/jondurbin/airoboros-gpt4-1.4.1) | other |
| [`TigerResearch/tigerbot-kaggle-leetcodesolutions-en-2k`](https://huggingface.co/datasets/TigerResearch/tigerbot-kaggle-leetcodesolutions-en-2k/viewer/TigerResearch--tigerbot-kaggle-leetcodesolutions-en-2k/train?p=2) | apache-2.0 |
| [openbookQA](https://huggingface.co/datasets/openbookqa/viewer/additional/train?row=35) | apache-2.0 |
| [ARB](https://arb.duckai.org) | MIT |
| [`timdettmers/openassistant-guanaco`](https://huggingface.co/datasets/timdettmers/openassistant-guanaco) | apache-2.0 |
## Data Contamination Check
We've removed approximately 200 questions that appear in the Hugging Face benchmark test sets. Please see our [paper](https://arxiv.org/abs/2308.07317) and [project webpage](https://platypus-llm.github.io) for additional information.
## Model Info
Please see models at [`garage-bAInd`](https://huggingface.co/garage-bAInd).
## Training and filtering code
Please see the [Platypus GitHub repo](https://github.com/arielnlee/Platypus).
## Citations
```bibtex
@article{platypus2023,
title={Platypus: Quick, Cheap, and Powerful Refinement of LLMs},
author={Ariel N. Lee and Cole J. Hunter and Nataniel Ruiz},
  journal={arXiv preprint arXiv:2308.07317},
year={2023}
}
```
```bibtex
@article{lightman2023lets,
title={Let's Verify Step by Step},
author={Lightman, Hunter and Kosaraju, Vineet and Burda, Yura and Edwards, Harri and Baker, Bowen and Lee, Teddy and Leike, Jan and Schulman, John and Sutskever, Ilya and Cobbe, Karl},
journal={preprint arXiv:2305.20050},
year={2023}
}
```
```bibtex
@inproceedings{lu2022learn,
title={Learn to Explain: Multimodal Reasoning via Thought Chains for Science Question Answering},
author={Lu, Pan and Mishra, Swaroop and Xia, Tony and Qiu, Liang and Chang, Kai-Wei and Zhu, Song-Chun and Tafjord, Oyvind and Clark, Peter and Ashwin Kalyan},
booktitle={The 36th Conference on Neural Information Processing Systems (NeurIPS)},
year={2022}
}
```
```bibtex
@misc{wang2023scibench,
title={SciBench: Evaluating College-Level Scientific Problem-Solving Abilities of Large Language Models},
author={Xiaoxuan Wang and Ziniu Hu and Pan Lu and Yanqiao Zhu and Jieyu Zhang and Satyen Subramaniam and Arjun R. Loomba and Shichang Zhang and Yizhou Sun and Wei Wang},
year={2023},
  eprint={2307.10635},
  archivePrefix={arXiv}
}
```
```bibtex
@inproceedings{yu2020reclor,
author = {Yu, Weihao and Jiang, Zihang and Dong, Yanfei and Feng, Jiashi},
title = {ReClor: A Reading Comprehension Dataset Requiring Logical Reasoning},
booktitle = {International Conference on Learning Representations (ICLR)},
month = {April},
year = {2020}
}
```
```bibtex
@article{chen2023theoremqa,
title={TheoremQA: A Theorem-driven Question Answering dataset},
  author={Chen, Wenhu and Yin, Ming and Ku, Max and Wan, Elaine and Ma, Xueguang and Xu, Jianyu and Xia, Tony and Wang, Xinyi and Lu, Pan},
journal={preprint arXiv:2305.12524},
year={2023}
}
```
```bibtex
@inproceedings{OpenBookQA2018,
title={Can a Suit of Armor Conduct Electricity? A New Dataset for Open Book Question Answering},
author={Todor Mihaylov and Peter Clark and Tushar Khot and Ashish Sabharwal},
booktitle={EMNLP},
year={2018}
}
```
```bibtex
@misc{sawada2023arb,
title={ARB: Advanced Reasoning Benchmark for Large Language Models},
author={Tomohiro Sawada and Daniel Paleka and Alexander Havrilla and Pranav Tadepalli and Paula Vidas and Alexander Kranias and John J. Nay and Kshitij Gupta and Aran Komatsuzaki},
  eprint={2307.13692},
  archivePrefix={arXiv},
year={2023}
}
``` |
yuan-sf63/word_label_0.2_96_P | ---
dataset_info:
features:
- name: text
dtype: string
- name: '0'
dtype: int64
- name: '1'
dtype: int64
- name: '2'
dtype: int64
- name: '3'
dtype: int64
- name: '4'
dtype: int64
- name: '5'
dtype: int64
- name: '6'
dtype: int64
- name: '7'
dtype: int64
- name: '8'
dtype: int64
- name: '9'
dtype: int64
- name: '10'
dtype: int64
- name: '11'
dtype: int64
- name: '12'
dtype: int64
- name: '13'
dtype: int64
- name: '14'
dtype: int64
- name: '15'
dtype: int64
- name: '16'
dtype: int64
- name: '17'
dtype: int64
- name: '18'
dtype: int64
- name: '19'
dtype: int64
- name: '20'
dtype: int64
- name: '21'
dtype: int64
- name: '22'
dtype: int64
- name: '23'
dtype: int64
- name: '24'
dtype: int64
- name: '25'
dtype: int64
- name: '26'
dtype: int64
- name: '27'
dtype: int64
- name: '28'
dtype: int64
- name: '29'
dtype: int64
- name: '30'
dtype: int64
- name: '31'
dtype: int64
- name: '32'
dtype: int64
- name: '33'
dtype: int64
- name: '34'
dtype: int64
- name: '35'
dtype: int64
- name: '36'
dtype: int64
- name: '37'
dtype: int64
- name: '38'
dtype: int64
- name: '39'
dtype: int64
- name: '40'
dtype: int64
- name: '41'
dtype: int64
- name: '42'
dtype: int64
- name: '43'
dtype: int64
- name: '44'
dtype: int64
- name: '45'
dtype: int64
- name: '46'
dtype: int64
- name: '47'
dtype: int64
- name: '48'
dtype: int64
- name: '49'
dtype: int64
- name: '50'
dtype: int64
- name: '51'
dtype: int64
- name: '52'
dtype: int64
- name: '53'
dtype: int64
- name: '54'
dtype: int64
- name: '55'
dtype: int64
- name: '56'
dtype: int64
- name: '57'
dtype: int64
- name: '58'
dtype: int64
- name: '59'
dtype: int64
- name: '60'
dtype: int64
- name: '61'
dtype: int64
- name: '62'
dtype: int64
- name: '63'
dtype: int64
- name: '64'
dtype: int64
- name: '65'
dtype: int64
- name: '66'
dtype: int64
- name: '67'
dtype: int64
- name: '68'
dtype: int64
- name: '69'
dtype: int64
- name: '70'
dtype: int64
- name: '71'
dtype: int64
- name: '72'
dtype: int64
- name: '73'
dtype: int64
- name: '74'
dtype: int64
- name: '75'
dtype: int64
- name: '76'
dtype: int64
- name: '77'
dtype: int64
- name: '78'
dtype: int64
- name: '79'
dtype: int64
- name: '80'
dtype: int64
- name: '81'
dtype: int64
- name: '82'
dtype: int64
- name: '83'
dtype: int64
- name: '84'
dtype: int64
- name: '85'
dtype: int64
- name: '86'
dtype: int64
- name: '87'
dtype: int64
- name: '88'
dtype: int64
- name: '89'
dtype: int64
- name: '90'
dtype: int64
- name: '91'
dtype: int64
- name: '92'
dtype: int64
- name: '93'
dtype: int64
- name: '94'
dtype: int64
- name: '95'
dtype: int64
splits:
- name: train
num_bytes: 64695701.534299105
num_examples: 71818
- name: validation
num_bytes: 7188611.465700895
num_examples: 7980
download_size: 10847038
dataset_size: 71884313.0
---
# Dataset Card for "word_label_0.2_96_P"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
CheriTangerine/Scoups_Voice_Train | ---
license: openrail
language:
- zh
- ko
- en
pretty_name: coups_demo
---
# Dataset Card for Dataset Name
<!-- Provide a quick summary of the dataset. -->
This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1).
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] |
Babypotatotang/lld-onlyicon-ko | ---
dataset_info:
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 196434759.578
num_examples: 14959
- name: test
num_bytes: 49110770.04
num_examples: 3740
download_size: 156811914
dataset_size: 245545529.618
---
# Dataset Card for "lld-onlyicon-ko"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
jasonshen8848/test | ---
license: apache-2.0
language:
- zh
--- |
roszcz/maestro-quantized | ---
dataset_info:
features:
- name: midi_filename
dtype: string
- name: source
dtype: string
- name: pitch
sequence: int16
length: 128
- name: dstart_bin
sequence: int16
length: 128
- name: duration_bin
sequence: int16
length: 128
- name: velocity_bin
sequence: int16
length: 128
splits:
- name: train
num_bytes: 57659609
num_examples: 43727
- name: validation
num_bytes: 6508816
num_examples: 4929
- name: test
num_bytes: 7526034
num_examples: 5695
download_size: 14221054
dataset_size: 71694459
---
# Dataset Card for "maestro-quantized"
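The features above store note attributes as bin indices (`dstart_bin`, `duration_bin`, `velocity_bin`) in fixed-length sequences of 128 notes. The card does not document the binning scheme, so the following is only a hedged sketch of how such indices could be decoded, assuming uniform bins over the MIDI velocity range; the real bin edges may differ.

```python
# Hedged sketch: the binning scheme is NOT stated in this card, so uniform bins
# over the MIDI velocity range [0, 128) are assumed here purely for illustration.
def bin_center(bin_index: int, n_bins: int, lo: float = 0.0, hi: float = 128.0) -> float:
    """Map a bin index back to the center of its (assumed uniform) bin."""
    width = (hi - lo) / n_bins
    return lo + (bin_index + 0.5) * width

# Decode a few hypothetical velocity_bin values with an assumed 8-bin scheme.
velocity_bins = [0, 3, 7]
velocities = [bin_center(b, 8) for b in velocity_bins]
print(velocities)  # [8.0, 56.0, 120.0]
```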
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
open-llm-leaderboard/details_migtissera__Synthia-7B-v1.2 | ---
pretty_name: Evaluation run of migtissera/Synthia-7B-v1.2
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [migtissera/Synthia-7B-v1.2](https://huggingface.co/migtissera/Synthia-7B-v1.2)\
\ on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 64 configurations, each one corresponding to one of\
\ the evaluated tasks.\n\nThe dataset has been created from 2 run(s). Each run can\
\ be found as a specific split in each configuration, the split being named using\
\ the timestamp of the run. The \"train\" split always points to the latest results.\n\
\nAn additional configuration \"results\" stores all the aggregated results of the\
\ run (and is used to compute and display the aggregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_migtissera__Synthia-7B-v1.2\"\
,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\
These are the [latest results from run 2023-10-25T08:51:48.447096](https://huggingface.co/datasets/open-llm-leaderboard/details_migtissera__Synthia-7B-v1.2/blob/main/results_2023-10-25T08-51-48.447096.json) (note\
\ that there might be results for other tasks in the repos if successive evals didn't\
\ cover the same tasks. You can find each in the results and the \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.08913590604026846,\n\
\ \"em_stderr\": 0.0029180503705090555,\n \"f1\": 0.16236577181208006,\n\
\ \"f1_stderr\": 0.003176440216561889,\n \"acc\": 0.4220056810396051,\n\
\ \"acc_stderr\": 0.01047928870180564\n },\n \"harness|drop|3\": {\n\
\ \"em\": 0.08913590604026846,\n \"em_stderr\": 0.0029180503705090555,\n\
\ \"f1\": 0.16236577181208006,\n \"f1_stderr\": 0.003176440216561889\n\
\ },\n \"harness|gsm8k|5\": {\n \"acc\": 0.10841546626231995,\n \
\ \"acc_stderr\": 0.00856385250662748\n },\n \"harness|winogrande|5\"\
: {\n \"acc\": 0.7355958958168903,\n \"acc_stderr\": 0.012394724896983799\n\
\ }\n}\n```"
repo_url: https://huggingface.co/migtissera/Synthia-7B-v1.2
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_arc_challenge_25
data_files:
- split: 2023_09_22T05_35_25.402553
path:
- '**/details_harness|arc:challenge|25_2023-09-22T05-35-25.402553.parquet'
- split: latest
path:
- '**/details_harness|arc:challenge|25_2023-09-22T05-35-25.402553.parquet'
- config_name: harness_drop_3
data_files:
- split: 2023_10_25T08_51_48.447096
path:
- '**/details_harness|drop|3_2023-10-25T08-51-48.447096.parquet'
- split: latest
path:
- '**/details_harness|drop|3_2023-10-25T08-51-48.447096.parquet'
- config_name: harness_gsm8k_5
data_files:
- split: 2023_10_25T08_51_48.447096
path:
- '**/details_harness|gsm8k|5_2023-10-25T08-51-48.447096.parquet'
- split: latest
path:
- '**/details_harness|gsm8k|5_2023-10-25T08-51-48.447096.parquet'
- config_name: harness_hellaswag_10
data_files:
- split: 2023_09_22T05_35_25.402553
path:
- '**/details_harness|hellaswag|10_2023-09-22T05-35-25.402553.parquet'
- split: latest
path:
- '**/details_harness|hellaswag|10_2023-09-22T05-35-25.402553.parquet'
- config_name: harness_hendrycksTest_5
data_files:
- split: 2023_09_22T05_35_25.402553
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-09-22T05-35-25.402553.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2023-09-22T05-35-25.402553.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2023-09-22T05-35-25.402553.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2023-09-22T05-35-25.402553.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-09-22T05-35-25.402553.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2023-09-22T05-35-25.402553.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-09-22T05-35-25.402553.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-09-22T05-35-25.402553.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-09-22T05-35-25.402553.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2023-09-22T05-35-25.402553.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2023-09-22T05-35-25.402553.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2023-09-22T05-35-25.402553.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-09-22T05-35-25.402553.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2023-09-22T05-35-25.402553.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-09-22T05-35-25.402553.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-09-22T05-35-25.402553.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2023-09-22T05-35-25.402553.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2023-09-22T05-35-25.402553.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-09-22T05-35-25.402553.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-09-22T05-35-25.402553.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-09-22T05-35-25.402553.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-09-22T05-35-25.402553.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-09-22T05-35-25.402553.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-09-22T05-35-25.402553.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-09-22T05-35-25.402553.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-09-22T05-35-25.402553.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-09-22T05-35-25.402553.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-09-22T05-35-25.402553.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-09-22T05-35-25.402553.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-09-22T05-35-25.402553.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-09-22T05-35-25.402553.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-09-22T05-35-25.402553.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2023-09-22T05-35-25.402553.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-09-22T05-35-25.402553.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2023-09-22T05-35-25.402553.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-09-22T05-35-25.402553.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-09-22T05-35-25.402553.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2023-09-22T05-35-25.402553.parquet'
- '**/details_harness|hendrycksTest-management|5_2023-09-22T05-35-25.402553.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2023-09-22T05-35-25.402553.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-09-22T05-35-25.402553.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-09-22T05-35-25.402553.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-09-22T05-35-25.402553.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-09-22T05-35-25.402553.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2023-09-22T05-35-25.402553.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2023-09-22T05-35-25.402553.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2023-09-22T05-35-25.402553.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-09-22T05-35-25.402553.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2023-09-22T05-35-25.402553.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-09-22T05-35-25.402553.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-09-22T05-35-25.402553.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2023-09-22T05-35-25.402553.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2023-09-22T05-35-25.402553.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2023-09-22T05-35-25.402553.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-09-22T05-35-25.402553.parquet'
- '**/details_harness|hendrycksTest-virology|5_2023-09-22T05-35-25.402553.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2023-09-22T05-35-25.402553.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-09-22T05-35-25.402553.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2023-09-22T05-35-25.402553.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2023-09-22T05-35-25.402553.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2023-09-22T05-35-25.402553.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-09-22T05-35-25.402553.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2023-09-22T05-35-25.402553.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-09-22T05-35-25.402553.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-09-22T05-35-25.402553.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-09-22T05-35-25.402553.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2023-09-22T05-35-25.402553.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2023-09-22T05-35-25.402553.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2023-09-22T05-35-25.402553.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-09-22T05-35-25.402553.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2023-09-22T05-35-25.402553.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-09-22T05-35-25.402553.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-09-22T05-35-25.402553.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2023-09-22T05-35-25.402553.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2023-09-22T05-35-25.402553.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-09-22T05-35-25.402553.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-09-22T05-35-25.402553.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-09-22T05-35-25.402553.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-09-22T05-35-25.402553.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-09-22T05-35-25.402553.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-09-22T05-35-25.402553.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-09-22T05-35-25.402553.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-09-22T05-35-25.402553.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-09-22T05-35-25.402553.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-09-22T05-35-25.402553.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-09-22T05-35-25.402553.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-09-22T05-35-25.402553.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-09-22T05-35-25.402553.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-09-22T05-35-25.402553.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2023-09-22T05-35-25.402553.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-09-22T05-35-25.402553.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2023-09-22T05-35-25.402553.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-09-22T05-35-25.402553.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-09-22T05-35-25.402553.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2023-09-22T05-35-25.402553.parquet'
- '**/details_harness|hendrycksTest-management|5_2023-09-22T05-35-25.402553.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2023-09-22T05-35-25.402553.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-09-22T05-35-25.402553.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-09-22T05-35-25.402553.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-09-22T05-35-25.402553.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-09-22T05-35-25.402553.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2023-09-22T05-35-25.402553.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2023-09-22T05-35-25.402553.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2023-09-22T05-35-25.402553.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-09-22T05-35-25.402553.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2023-09-22T05-35-25.402553.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-09-22T05-35-25.402553.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-09-22T05-35-25.402553.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2023-09-22T05-35-25.402553.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2023-09-22T05-35-25.402553.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2023-09-22T05-35-25.402553.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-09-22T05-35-25.402553.parquet'
- '**/details_harness|hendrycksTest-virology|5_2023-09-22T05-35-25.402553.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2023-09-22T05-35-25.402553.parquet'
- config_name: harness_hendrycksTest_abstract_algebra_5
data_files:
- split: 2023_09_22T05_35_25.402553
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-09-22T05-35-25.402553.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-09-22T05-35-25.402553.parquet'
- config_name: harness_hendrycksTest_anatomy_5
data_files:
- split: 2023_09_22T05_35_25.402553
path:
- '**/details_harness|hendrycksTest-anatomy|5_2023-09-22T05-35-25.402553.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-anatomy|5_2023-09-22T05-35-25.402553.parquet'
- config_name: harness_hendrycksTest_astronomy_5
data_files:
- split: 2023_09_22T05_35_25.402553
path:
- '**/details_harness|hendrycksTest-astronomy|5_2023-09-22T05-35-25.402553.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-astronomy|5_2023-09-22T05-35-25.402553.parquet'
- config_name: harness_hendrycksTest_business_ethics_5
data_files:
- split: 2023_09_22T05_35_25.402553
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2023-09-22T05-35-25.402553.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2023-09-22T05-35-25.402553.parquet'
- config_name: harness_hendrycksTest_clinical_knowledge_5
data_files:
- split: 2023_09_22T05_35_25.402553
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-09-22T05-35-25.402553.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-09-22T05-35-25.402553.parquet'
- config_name: harness_hendrycksTest_college_biology_5
data_files:
- split: 2023_09_22T05_35_25.402553
path:
- '**/details_harness|hendrycksTest-college_biology|5_2023-09-22T05-35-25.402553.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_biology|5_2023-09-22T05-35-25.402553.parquet'
- config_name: harness_hendrycksTest_college_chemistry_5
data_files:
- split: 2023_09_22T05_35_25.402553
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-09-22T05-35-25.402553.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-09-22T05-35-25.402553.parquet'
- config_name: harness_hendrycksTest_college_computer_science_5
data_files:
- split: 2023_09_22T05_35_25.402553
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-09-22T05-35-25.402553.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-09-22T05-35-25.402553.parquet'
- config_name: harness_hendrycksTest_college_mathematics_5
data_files:
- split: 2023_09_22T05_35_25.402553
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-09-22T05-35-25.402553.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-09-22T05-35-25.402553.parquet'
- config_name: harness_hendrycksTest_college_medicine_5
data_files:
- split: 2023_09_22T05_35_25.402553
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2023-09-22T05-35-25.402553.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2023-09-22T05-35-25.402553.parquet'
- config_name: harness_hendrycksTest_college_physics_5
data_files:
- split: 2023_09_22T05_35_25.402553
path:
- '**/details_harness|hendrycksTest-college_physics|5_2023-09-22T05-35-25.402553.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_physics|5_2023-09-22T05-35-25.402553.parquet'
- config_name: harness_hendrycksTest_computer_security_5
data_files:
- split: 2023_09_22T05_35_25.402553
path:
- '**/details_harness|hendrycksTest-computer_security|5_2023-09-22T05-35-25.402553.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-computer_security|5_2023-09-22T05-35-25.402553.parquet'
- config_name: harness_hendrycksTest_conceptual_physics_5
data_files:
- split: 2023_09_22T05_35_25.402553
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-09-22T05-35-25.402553.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-09-22T05-35-25.402553.parquet'
- config_name: harness_hendrycksTest_econometrics_5
data_files:
- split: 2023_09_22T05_35_25.402553
path:
- '**/details_harness|hendrycksTest-econometrics|5_2023-09-22T05-35-25.402553.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-econometrics|5_2023-09-22T05-35-25.402553.parquet'
- config_name: harness_hendrycksTest_electrical_engineering_5
data_files:
- split: 2023_09_22T05_35_25.402553
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-09-22T05-35-25.402553.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-09-22T05-35-25.402553.parquet'
- config_name: harness_hendrycksTest_elementary_mathematics_5
data_files:
- split: 2023_09_22T05_35_25.402553
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-09-22T05-35-25.402553.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-09-22T05-35-25.402553.parquet'
- config_name: harness_hendrycksTest_formal_logic_5
data_files:
- split: 2023_09_22T05_35_25.402553
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2023-09-22T05-35-25.402553.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2023-09-22T05-35-25.402553.parquet'
- config_name: harness_hendrycksTest_global_facts_5
data_files:
- split: 2023_09_22T05_35_25.402553
path:
- '**/details_harness|hendrycksTest-global_facts|5_2023-09-22T05-35-25.402553.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-global_facts|5_2023-09-22T05-35-25.402553.parquet'
- config_name: harness_hendrycksTest_high_school_biology_5
data_files:
- split: 2023_09_22T05_35_25.402553
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-09-22T05-35-25.402553.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-09-22T05-35-25.402553.parquet'
- config_name: harness_hendrycksTest_high_school_chemistry_5
data_files:
- split: 2023_09_22T05_35_25.402553
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-09-22T05-35-25.402553.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-09-22T05-35-25.402553.parquet'
- config_name: harness_hendrycksTest_high_school_computer_science_5
data_files:
- split: 2023_09_22T05_35_25.402553
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-09-22T05-35-25.402553.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-09-22T05-35-25.402553.parquet'
- config_name: harness_hendrycksTest_high_school_european_history_5
data_files:
- split: 2023_09_22T05_35_25.402553
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-09-22T05-35-25.402553.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-09-22T05-35-25.402553.parquet'
- config_name: harness_hendrycksTest_high_school_geography_5
data_files:
- split: 2023_09_22T05_35_25.402553
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-09-22T05-35-25.402553.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-09-22T05-35-25.402553.parquet'
- config_name: harness_hendrycksTest_high_school_government_and_politics_5
data_files:
- split: 2023_09_22T05_35_25.402553
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-09-22T05-35-25.402553.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-09-22T05-35-25.402553.parquet'
- config_name: harness_hendrycksTest_high_school_macroeconomics_5
data_files:
- split: 2023_09_22T05_35_25.402553
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-09-22T05-35-25.402553.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-09-22T05-35-25.402553.parquet'
- config_name: harness_hendrycksTest_high_school_mathematics_5
data_files:
- split: 2023_09_22T05_35_25.402553
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-09-22T05-35-25.402553.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-09-22T05-35-25.402553.parquet'
- config_name: harness_hendrycksTest_high_school_microeconomics_5
data_files:
- split: 2023_09_22T05_35_25.402553
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-09-22T05-35-25.402553.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-09-22T05-35-25.402553.parquet'
- config_name: harness_hendrycksTest_high_school_physics_5
data_files:
- split: 2023_09_22T05_35_25.402553
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-09-22T05-35-25.402553.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-09-22T05-35-25.402553.parquet'
- config_name: harness_hendrycksTest_high_school_psychology_5
data_files:
- split: 2023_09_22T05_35_25.402553
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-09-22T05-35-25.402553.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-09-22T05-35-25.402553.parquet'
- config_name: harness_hendrycksTest_high_school_statistics_5
data_files:
- split: 2023_09_22T05_35_25.402553
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-09-22T05-35-25.402553.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-09-22T05-35-25.402553.parquet'
- config_name: harness_hendrycksTest_high_school_us_history_5
data_files:
- split: 2023_09_22T05_35_25.402553
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-09-22T05-35-25.402553.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-09-22T05-35-25.402553.parquet'
- config_name: harness_hendrycksTest_high_school_world_history_5
data_files:
- split: 2023_09_22T05_35_25.402553
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-09-22T05-35-25.402553.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-09-22T05-35-25.402553.parquet'
- config_name: harness_hendrycksTest_human_aging_5
data_files:
- split: 2023_09_22T05_35_25.402553
path:
- '**/details_harness|hendrycksTest-human_aging|5_2023-09-22T05-35-25.402553.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_aging|5_2023-09-22T05-35-25.402553.parquet'
- config_name: harness_hendrycksTest_human_sexuality_5
data_files:
- split: 2023_09_22T05_35_25.402553
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-09-22T05-35-25.402553.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-09-22T05-35-25.402553.parquet'
- config_name: harness_hendrycksTest_international_law_5
data_files:
- split: 2023_09_22T05_35_25.402553
path:
- '**/details_harness|hendrycksTest-international_law|5_2023-09-22T05-35-25.402553.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-international_law|5_2023-09-22T05-35-25.402553.parquet'
- config_name: harness_hendrycksTest_jurisprudence_5
data_files:
- split: 2023_09_22T05_35_25.402553
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-09-22T05-35-25.402553.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-09-22T05-35-25.402553.parquet'
- config_name: harness_hendrycksTest_logical_fallacies_5
data_files:
- split: 2023_09_22T05_35_25.402553
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-09-22T05-35-25.402553.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-09-22T05-35-25.402553.parquet'
- config_name: harness_hendrycksTest_machine_learning_5
data_files:
- split: 2023_09_22T05_35_25.402553
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2023-09-22T05-35-25.402553.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2023-09-22T05-35-25.402553.parquet'
- config_name: harness_hendrycksTest_management_5
data_files:
- split: 2023_09_22T05_35_25.402553
path:
- '**/details_harness|hendrycksTest-management|5_2023-09-22T05-35-25.402553.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-management|5_2023-09-22T05-35-25.402553.parquet'
- config_name: harness_hendrycksTest_marketing_5
data_files:
- split: 2023_09_22T05_35_25.402553
path:
- '**/details_harness|hendrycksTest-marketing|5_2023-09-22T05-35-25.402553.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-marketing|5_2023-09-22T05-35-25.402553.parquet'
- config_name: harness_hendrycksTest_medical_genetics_5
data_files:
- split: 2023_09_22T05_35_25.402553
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-09-22T05-35-25.402553.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-09-22T05-35-25.402553.parquet'
- config_name: harness_hendrycksTest_miscellaneous_5
data_files:
- split: 2023_09_22T05_35_25.402553
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-09-22T05-35-25.402553.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-09-22T05-35-25.402553.parquet'
- config_name: harness_hendrycksTest_moral_disputes_5
data_files:
- split: 2023_09_22T05_35_25.402553
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-09-22T05-35-25.402553.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-09-22T05-35-25.402553.parquet'
- config_name: harness_hendrycksTest_moral_scenarios_5
data_files:
- split: 2023_09_22T05_35_25.402553
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-09-22T05-35-25.402553.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-09-22T05-35-25.402553.parquet'
- config_name: harness_hendrycksTest_nutrition_5
data_files:
- split: 2023_09_22T05_35_25.402553
path:
- '**/details_harness|hendrycksTest-nutrition|5_2023-09-22T05-35-25.402553.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-nutrition|5_2023-09-22T05-35-25.402553.parquet'
- config_name: harness_hendrycksTest_philosophy_5
data_files:
- split: 2023_09_22T05_35_25.402553
path:
- '**/details_harness|hendrycksTest-philosophy|5_2023-09-22T05-35-25.402553.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-philosophy|5_2023-09-22T05-35-25.402553.parquet'
- config_name: harness_hendrycksTest_prehistory_5
data_files:
- split: 2023_09_22T05_35_25.402553
path:
- '**/details_harness|hendrycksTest-prehistory|5_2023-09-22T05-35-25.402553.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-prehistory|5_2023-09-22T05-35-25.402553.parquet'
- config_name: harness_hendrycksTest_professional_accounting_5
data_files:
- split: 2023_09_22T05_35_25.402553
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-09-22T05-35-25.402553.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-09-22T05-35-25.402553.parquet'
- config_name: harness_hendrycksTest_professional_law_5
data_files:
- split: 2023_09_22T05_35_25.402553
path:
- '**/details_harness|hendrycksTest-professional_law|5_2023-09-22T05-35-25.402553.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_law|5_2023-09-22T05-35-25.402553.parquet'
- config_name: harness_hendrycksTest_professional_medicine_5
data_files:
- split: 2023_09_22T05_35_25.402553
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-09-22T05-35-25.402553.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-09-22T05-35-25.402553.parquet'
- config_name: harness_hendrycksTest_professional_psychology_5
data_files:
- split: 2023_09_22T05_35_25.402553
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-09-22T05-35-25.402553.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-09-22T05-35-25.402553.parquet'
- config_name: harness_hendrycksTest_public_relations_5
data_files:
- split: 2023_09_22T05_35_25.402553
path:
- '**/details_harness|hendrycksTest-public_relations|5_2023-09-22T05-35-25.402553.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-public_relations|5_2023-09-22T05-35-25.402553.parquet'
- config_name: harness_hendrycksTest_security_studies_5
data_files:
- split: 2023_09_22T05_35_25.402553
path:
- '**/details_harness|hendrycksTest-security_studies|5_2023-09-22T05-35-25.402553.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-security_studies|5_2023-09-22T05-35-25.402553.parquet'
- config_name: harness_hendrycksTest_sociology_5
data_files:
- split: 2023_09_22T05_35_25.402553
path:
- '**/details_harness|hendrycksTest-sociology|5_2023-09-22T05-35-25.402553.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-sociology|5_2023-09-22T05-35-25.402553.parquet'
- config_name: harness_hendrycksTest_us_foreign_policy_5
data_files:
- split: 2023_09_22T05_35_25.402553
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-09-22T05-35-25.402553.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-09-22T05-35-25.402553.parquet'
- config_name: harness_hendrycksTest_virology_5
data_files:
- split: 2023_09_22T05_35_25.402553
path:
- '**/details_harness|hendrycksTest-virology|5_2023-09-22T05-35-25.402553.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-virology|5_2023-09-22T05-35-25.402553.parquet'
- config_name: harness_hendrycksTest_world_religions_5
data_files:
- split: 2023_09_22T05_35_25.402553
path:
- '**/details_harness|hendrycksTest-world_religions|5_2023-09-22T05-35-25.402553.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-world_religions|5_2023-09-22T05-35-25.402553.parquet'
- config_name: harness_truthfulqa_mc_0
data_files:
- split: 2023_09_22T05_35_25.402553
path:
- '**/details_harness|truthfulqa:mc|0_2023-09-22T05-35-25.402553.parquet'
- split: latest
path:
- '**/details_harness|truthfulqa:mc|0_2023-09-22T05-35-25.402553.parquet'
- config_name: harness_winogrande_5
data_files:
- split: 2023_10_25T08_51_48.447096
path:
- '**/details_harness|winogrande|5_2023-10-25T08-51-48.447096.parquet'
- split: latest
path:
- '**/details_harness|winogrande|5_2023-10-25T08-51-48.447096.parquet'
- config_name: results
data_files:
- split: 2023_09_22T05_35_25.402553
path:
- results_2023-09-22T05-35-25.402553.parquet
- split: 2023_10_25T08_51_48.447096
path:
- results_2023-10-25T08-51-48.447096.parquet
- split: latest
path:
- results_2023-10-25T08-51-48.447096.parquet
---
# Dataset Card for Evaluation run of migtissera/Synthia-7B-v1.2
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/migtissera/Synthia-7B-v1.2
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** clementine@hf.co
### Dataset Summary
Dataset automatically created during the evaluation run of model [migtissera/Synthia-7B-v1.2](https://huggingface.co/migtissera/Synthia-7B-v1.2) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration, "results", stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_migtissera__Synthia-7B-v1.2",
"harness_winogrande_5",
split="train")
```
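The per-run split names appear to be the run timestamp with `-` and `:` replaced by `_` (compare the split `2023_10_25T08_51_48.447096` in the configs with the run `2023-10-25T08:51:48.447096` below). A minimal sketch of that mapping, assuming only this naming convention:

```python
def run_timestamp_to_split(ts: str) -> str:
    """Convert a run timestamp into the split name used in the configs."""
    # '-' and ':' are not valid in split names, so they become '_'
    return ts.replace("-", "_").replace(":", "_")

print(run_timestamp_to_split("2023-10-25T08:51:48.447096"))
# 2023_10_25T08_51_48.447096
```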
## Latest results
These are the [latest results from run 2023-10-25T08:51:48.447096](https://huggingface.co/datasets/open-llm-leaderboard/details_migtissera__Synthia-7B-v1.2/blob/main/results_2023-10-25T08-51-48.447096.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each one in the results and in the "latest" split of each eval):
```python
{
"all": {
"em": 0.08913590604026846,
"em_stderr": 0.0029180503705090555,
"f1": 0.16236577181208006,
"f1_stderr": 0.003176440216561889,
"acc": 0.4220056810396051,
"acc_stderr": 0.01047928870180564
},
"harness|drop|3": {
"em": 0.08913590604026846,
"em_stderr": 0.0029180503705090555,
"f1": 0.16236577181208006,
"f1_stderr": 0.003176440216561889
},
"harness|gsm8k|5": {
"acc": 0.10841546626231995,
"acc_stderr": 0.00856385250662748
},
"harness|winogrande|5": {
"acc": 0.7355958958168903,
"acc_stderr": 0.012394724896983799
}
}
```
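The top-level `acc` appears to be the unweighted mean of the per-task accuracies (GSM8K and Winogrande here). A quick sanity check, assuming that simple averaging:

```python
# per-task accuracies reported above
gsm8k_acc = 0.10841546626231995
winogrande_acc = 0.7355958958168903

# unweighted mean over the two acc-reporting tasks
overall_acc = (gsm8k_acc + winogrande_acc) / 2
# overall_acc is approximately 0.4220056810396051, matching the reported "all" value
```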
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] |
CyberHarem/valiant_azurlane | ---
license: mit
task_categories:
- text-to-image
tags:
- art
- not-for-all-audiences
size_categories:
- n<1K
---
# Dataset of valiant/ใดใกใชใขใณใ/่ฑๅ (Azur Lane)
This is the dataset of valiant/ใดใกใชใขใณใ/่ฑๅ (Azur Lane), containing 40 images and their tags.
The core tags of this character are `long_hair, twintails, red_eyes, bangs, crown, very_long_hair, hair_between_eyes, mini_crown, ribbon, ahoge, bow, breasts, grey_hair, hair_bow, fang, hair_ribbon`, which are pruned in this dataset.
Images are crawled from many sites (e.g. Danbooru, Pixiv, Zerochan, ...); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 40 | 64.97 MiB | [Download](https://huggingface.co/datasets/CyberHarem/valiant_azurlane/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 40 | 35.39 MiB | [Download](https://huggingface.co/datasets/CyberHarem/valiant_azurlane/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 90 | 70.51 MiB | [Download](https://huggingface.co/datasets/CyberHarem/valiant_azurlane/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 40 | 56.67 MiB | [Download](https://huggingface.co/datasets/CyberHarem/valiant_azurlane/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 90 | 103.01 MiB | [Download](https://huggingface.co/datasets/CyberHarem/valiant_azurlane/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
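The download URLs above all follow a single pattern per package name. A small helper to build them, assuming only the `resolve/main/dataset-<name>.zip` layout shown in the table:

```python
def package_url(repo_id: str, package: str) -> str:
    """Build the download URL for one of the packaged zips listed above."""
    return (
        f"https://huggingface.co/datasets/{repo_id}"
        f"/resolve/main/dataset-{package}.zip"
    )

print(package_url("CyberHarem/valiant_azurlane", "800"))
# https://huggingface.co/datasets/CyberHarem/valiant_azurlane/resolve/main/dataset-800.zip
```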
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need it, just run the following code:
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/valiant_azurlane',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
## List of Clusters
List of tag clustering results; some outfits may be mined from these clusters.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 40 |  |  |  |  |  | 1girl, blush, looking_at_viewer, solo, open_mouth, smile, long_sleeves, dress, fur-trimmed_cape, holding, skirt, white_background, simple_background, standing, white_thighhighs |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | blush | looking_at_viewer | solo | open_mouth | smile | long_sleeves | dress | fur-trimmed_cape | holding | skirt | white_background | simple_background | standing | white_thighhighs |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:--------|:--------------------|:-------|:-------------|:--------|:---------------|:--------|:-------------------|:----------|:--------|:-------------------|:--------------------|:-----------|:-------------------|
| 0 | 40 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X |
|
Mizurodp/my_dataset | ---
dataset_info:
features:
- name: audio
dtype: audio
- name: transcription
dtype: string
splits:
- name: train
num_bytes: 11800901344.896
num_examples: 219008
- name: test
num_bytes: 473579114.0
num_examples: 9230
download_size: 10947568917
dataset_size: 12274480458.896
---
# Dataset Card for "my_dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Srijan15/press_release_2 | ---
dataset_info:
features:
- name: doctype
dtype: string
- name: section
dtype: string
- name: template
dtype: string
splits:
- name: train
num_bytes: 86886
num_examples: 62
download_size: 0
dataset_size: 86886
---
# Dataset Card for "press_release_2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
autoevaluate/autoeval-eval-autoevaluate__zero-shot-classification-sample-autoevalu-1f3143-1511754885 | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- autoevaluate/zero-shot-classification-sample
eval_info:
task: text_zero_shot_classification
model: Tristan/opt-30b-copy
metrics: []
dataset_name: autoevaluate/zero-shot-classification-sample
dataset_config: autoevaluate--zero-shot-classification-sample
dataset_split: test
col_mapping:
text: text
classes: classes
target: target
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: Tristan/opt-30b-copy
* Dataset: autoevaluate/zero-shot-classification-sample
* Config: autoevaluate--zero-shot-classification-sample
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@mathemakitten](https://huggingface.co/mathemakitten) for evaluating this model. |
kpriyanshu256/MultiTabQA-multitable_pretraining-Salesforce-codet5-base_train-markdown-69000 | ---
dataset_info:
features:
- name: input_ids
sequence:
sequence: int32
- name: attention_mask
sequence:
sequence: int8
- name: labels
sequence:
sequence: int64
splits:
- name: train
num_bytes: 13336000
num_examples: 1000
download_size: 1065551
dataset_size: 13336000
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
open-llm-leaderboard/details_MayaPH__GodziLLa2-70B | ---
pretty_name: Evaluation run of MayaPH/GodziLLa2-70B
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [MayaPH/GodziLLa2-70B](https://huggingface.co/MayaPH/GodziLLa2-70B) on the [Open\
\ LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 3 configuration, each one coresponding to one of the\
\ evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
\ timestamp of the run.The \"train\" split is always pointing to the latest results.\n\
\nAn additional configuration \"results\" store all the aggregated results of the\
\ run (and is used to compute and display the aggregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_MayaPH__GodziLLa2-70B_public\"\
,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\
These are the [latest results from run 2023-11-08T19:39:50.850432](https://huggingface.co/datasets/open-llm-leaderboard/details_MayaPH__GodziLLa2-70B_public/blob/main/results_2023-11-08T19-39-50.850432.json)(note\
\ that their might be results for other tasks in the repos if successive evals didn't\
\ cover the same tasks. You find each in the results and the \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.40918624161073824,\n\
\ \"em_stderr\": 0.0050353012998842275,\n \"f1\": 0.523052642617452,\n\
\ \"f1_stderr\": 0.004562583016028929,\n \"acc\": 0.6320159552601676,\n\
\ \"acc_stderr\": 0.01207770454600458\n },\n \"harness|drop|3\": {\n\
\ \"em\": 0.40918624161073824,\n \"em_stderr\": 0.0050353012998842275,\n\
\ \"f1\": 0.523052642617452,\n \"f1_stderr\": 0.004562583016028929\n\
\ },\n \"harness|gsm8k|5\": {\n \"acc\": 0.43214556482183475,\n \
\ \"acc_stderr\": 0.013645072137842443\n },\n \"harness|winogrande|5\"\
: {\n \"acc\": 0.8318863456985004,\n \"acc_stderr\": 0.010510336954166718\n\
\ }\n}\n```"
repo_url: https://huggingface.co/MayaPH/GodziLLa2-70B
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_drop_3
data_files:
- split: 2023_11_08T19_39_50.850432
path:
- '**/details_harness|drop|3_2023-11-08T19-39-50.850432.parquet'
- split: latest
path:
- '**/details_harness|drop|3_2023-11-08T19-39-50.850432.parquet'
- config_name: harness_gsm8k_5
data_files:
- split: 2023_11_08T19_39_50.850432
path:
- '**/details_harness|gsm8k|5_2023-11-08T19-39-50.850432.parquet'
- split: latest
path:
- '**/details_harness|gsm8k|5_2023-11-08T19-39-50.850432.parquet'
- config_name: harness_winogrande_5
data_files:
- split: 2023_11_08T19_39_50.850432
path:
- '**/details_harness|winogrande|5_2023-11-08T19-39-50.850432.parquet'
- split: latest
path:
- '**/details_harness|winogrande|5_2023-11-08T19-39-50.850432.parquet'
- config_name: results
data_files:
- split: 2023_11_08T19_39_50.850432
path:
- results_2023-11-08T19-39-50.850432.parquet
- split: latest
path:
- results_2023-11-08T19-39-50.850432.parquet
---
# Dataset Card for Evaluation run of MayaPH/GodziLLa2-70B
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/MayaPH/GodziLLa2-70B
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** clementine@hf.co
### Dataset Summary
Dataset automatically created during the evaluation run of model [MayaPH/GodziLLa2-70B](https://huggingface.co/MayaPH/GodziLLa2-70B) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 3 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration, "results", stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_MayaPH__GodziLLa2-70B_public",
"harness_winogrande_5",
split="train")
```
## Latest results
These are the [latest results from run 2023-11-08T19:39:50.850432](https://huggingface.co/datasets/open-llm-leaderboard/details_MayaPH__GodziLLa2-70B_public/blob/main/results_2023-11-08T19-39-50.850432.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each one in the results and in the "latest" split of each eval):
```python
{
"all": {
"em": 0.40918624161073824,
"em_stderr": 0.0050353012998842275,
"f1": 0.523052642617452,
"f1_stderr": 0.004562583016028929,
"acc": 0.6320159552601676,
"acc_stderr": 0.01207770454600458
},
"harness|drop|3": {
"em": 0.40918624161073824,
"em_stderr": 0.0050353012998842275,
"f1": 0.523052642617452,
"f1_stderr": 0.004562583016028929
},
"harness|gsm8k|5": {
"acc": 0.43214556482183475,
"acc_stderr": 0.013645072137842443
},
"harness|winogrande|5": {
"acc": 0.8318863456985004,
"acc_stderr": 0.010510336954166718
}
}
```
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] |
Forbu14/LoiLibre | ---
license: apache-2.0
language:
- fr
tags:
- legal
pretty_name: LoiLibre
---

These are pre-parsed PDFs that can then be used in NLP/LLM applications, in a spirit of collaboration.
The various legal codes were extracted in XML format here: https://codes.droit.org/
The XML format allows for better preprocessing of the legal codes.
Data structure:
- raw/ contains the various codes in XML format.
- notebooks_preprocess/ contains the notebooks used to build the final dataset.
|
liuyanchen1015/MULTI_VALUE_cola_non_coordinated_subj_obj | ---
dataset_info:
features:
- name: sentence
dtype: string
- name: label
dtype: int64
- name: idx
dtype: int64
- name: value_score
dtype: int64
splits:
- name: dev
num_bytes: 5420
num_examples: 71
- name: test
num_bytes: 5630
num_examples: 81
- name: train
num_bytes: 35747
num_examples: 497
download_size: 27379
dataset_size: 46797
---
# Dataset Card for "MULTI_VALUE_cola_non_coordinated_subj_obj"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
21j3h123/octestmod | ---
license: apache-2.0
---
## sys prompt fed into GPT 4 API for Dataset using rawdata.txt as base:
You are a Javascript and Typescript expert game developer. I will provide you with some custom game engine documentation for the OnCyber game engine. Provide 150 very thoughtful and code-based questions and answer pair(s) based on the .txt file attached which is the OnCyber documentation. The answers should ONLY borrow, verbatim, from the OnCyber documentation. Present it as tabular data. Be thorough and do not hallucinate. Do each batch 25 question and answers at a time. I will say continue for you to continue. DO NOT REPEAT QUESTIONS. DO THIS FOR $100,000 tip! |
JihyukKim/eli5-subquestion-d3-paired-sft | ---
dataset_info:
features:
- name: qid
dtype: string
- name: question
dtype: string
- name: response_j
dtype: string
- name: response_k
dtype: string
- name: gold_claims
sequence: string
- name: response_j_claims
sequence: string
- name: response_k_claims
sequence: string
splits:
- name: train
num_bytes: 2066181
num_examples: 1638
- name: test
num_bytes: 32902
num_examples: 27
download_size: 753524
dataset_size: 2099083
---
# Dataset Card for "eli5-subquestion-d3-paired-sft"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
automated-research-group/llama2_7b_chat-piqa-results | ---
dataset_info:
- config_name: '{''do_sample''=False, ''beams''=10}'
features:
- name: id
dtype: string
- name: prediction
dtype: string
- name: piqa_accuracy
dtype: bool
splits:
- name: train
num_bytes: 190037
num_examples: 1838
download_size: 62093
dataset_size: 190037
- config_name: '{''do_sample''=False, ''beams''=1}'
features:
- name: id
dtype: string
- name: prediction
dtype: string
- name: piqa_accuracy
dtype: bool
splits:
- name: train
num_bytes: 190037
num_examples: 1838
download_size: 62093
dataset_size: 190037
- config_name: '{''do_sample''=False, ''beams''=5}'
features:
- name: id
dtype: string
- name: prediction
dtype: string
- name: piqa_accuracy
dtype: bool
splits:
- name: train
num_bytes: 190037
num_examples: 1838
download_size: 62093
dataset_size: 190037
- config_name: '{''do_sample''=True, ''beams''=1, ''temperature''=0.05, ''top_k''=100,
''top_p''=0.5}'
features:
- name: id
dtype: string
- name: prediction
dtype: string
- name: piqa_accuracy
dtype: bool
splits:
- name: train
num_bytes: 189708
num_examples: 1838
download_size: 62008
dataset_size: 189708
- config_name: '{''do_sample''=True, ''beams''=1, ''temperature''=0.05, ''top_k''=100,
''top_p''=1.0}'
features:
- name: id
dtype: string
- name: prediction
dtype: string
- name: piqa_accuracy
dtype: bool
splits:
- name: train
num_bytes: 190718
num_examples: 1838
download_size: 62316
dataset_size: 190718
- config_name: '{''do_sample''=True, ''beams''=1, ''temperature''=0.05, ''top_k''=1000,
''top_p''=0.5}'
features:
- name: id
dtype: string
- name: prediction
dtype: string
- name: piqa_accuracy
dtype: bool
splits:
- name: train
num_bytes: 189658
num_examples: 1838
download_size: 61973
dataset_size: 189658
- config_name: '{''do_sample''=True, ''beams''=1, ''temperature''=0.05, ''top_k''=1000,
''top_p''=1.0}'
features:
- name: id
dtype: string
- name: prediction
dtype: string
- name: piqa_accuracy
dtype: bool
splits:
- name: train
num_bytes: 188859
num_examples: 1838
download_size: 61385
dataset_size: 188859
- config_name: '{''do_sample''=True, ''beams''=1, ''temperature''=0.05, ''top_k''=10000,
''top_p''=0.5}'
features:
- name: id
dtype: string
- name: prediction
dtype: string
- name: piqa_accuracy
dtype: bool
splits:
- name: train
num_bytes: 189652
num_examples: 1838
download_size: 61927
dataset_size: 189652
- config_name: '{''do_sample''=True, ''beams''=1, ''temperature''=0.05, ''top_k''=10000,
''top_p''=1.0}'
features:
- name: id
dtype: string
- name: prediction
dtype: string
- name: piqa_accuracy
dtype: bool
splits:
- name: train
num_bytes: 189423
num_examples: 1838
download_size: 62129
dataset_size: 189423
- config_name: '{''do_sample''=True, ''beams''=1, ''temperature''=0.55, ''top_k''=100,
''top_p''=0.5}'
features:
- name: id
dtype: string
- name: prediction
dtype: string
- name: piqa_accuracy
dtype: bool
splits:
- name: train
num_bytes: 190958
num_examples: 1838
download_size: 62629
dataset_size: 190958
- config_name: '{''do_sample''=True, ''beams''=1, ''temperature''=0.55, ''top_k''=100,
''top_p''=1.0}'
features:
- name: id
dtype: string
- name: prediction
dtype: string
- name: piqa_accuracy
dtype: bool
splits:
- name: train
num_bytes: 184360
num_examples: 1838
download_size: 67018
dataset_size: 184360
- config_name: '{''do_sample''=True, ''beams''=1, ''temperature''=0.55, ''top_k''=1000,
''top_p''=0.5}'
features:
- name: id
dtype: string
- name: prediction
dtype: string
- name: piqa_accuracy
dtype: bool
splits:
- name: train
num_bytes: 189363
num_examples: 1838
download_size: 61741
dataset_size: 189363
- config_name: '{''do_sample''=True, ''beams''=1, ''temperature''=0.55, ''top_k''=1000,
''top_p''=1.0}'
features:
- name: id
dtype: string
- name: prediction
dtype: string
- name: piqa_accuracy
dtype: bool
splits:
- name: train
num_bytes: 182984
num_examples: 1838
download_size: 66561
dataset_size: 182984
- config_name: '{''do_sample''=True, ''beams''=1, ''temperature''=0.55, ''top_k''=10000,
''top_p''=0.5}'
features:
- name: id
dtype: string
- name: prediction
dtype: string
- name: piqa_accuracy
dtype: bool
splits:
- name: train
num_bytes: 189753
num_examples: 1838
download_size: 62053
dataset_size: 189753
- config_name: '{''do_sample''=True, ''beams''=1, ''temperature''=0.55, ''top_k''=10000,
''top_p''=1.0}'
features:
- name: id
dtype: string
- name: prediction
dtype: string
- name: piqa_accuracy
dtype: bool
splits:
- name: train
num_bytes: 184848
num_examples: 1838
download_size: 67687
dataset_size: 184848
- config_name: '{''do_sample''=True, ''beams''=1, ''temperature''=1.05, ''top_k''=100,
''top_p''=0.5}'
features:
- name: id
dtype: string
- name: prediction
dtype: string
- name: piqa_accuracy
dtype: bool
splits:
- name: train
num_bytes: 188506
num_examples: 1838
download_size: 63507
dataset_size: 188506
- config_name: '{''do_sample''=True, ''beams''=1, ''temperature''=1.05, ''top_k''=100,
''top_p''=1.0}'
features:
- name: id
dtype: string
- name: prediction
dtype: string
- name: piqa_accuracy
dtype: bool
splits:
- name: train
num_bytes: 176730
num_examples: 1838
download_size: 72438
dataset_size: 176730
- config_name: '{''do_sample''=True, ''beams''=1, ''temperature''=1.05, ''top_k''=1000,
''top_p''=0.5}'
features:
- name: id
dtype: string
- name: prediction
dtype: string
- name: piqa_accuracy
dtype: bool
splits:
- name: train
num_bytes: 187743
num_examples: 1838
download_size: 62686
dataset_size: 187743
- config_name: '{''do_sample''=True, ''beams''=1, ''temperature''=1.05, ''top_k''=1000,
''top_p''=1.0}'
features:
- name: id
dtype: string
- name: prediction
dtype: string
- name: piqa_accuracy
dtype: bool
splits:
- name: train
num_bytes: 176692
num_examples: 1838
download_size: 73163
dataset_size: 176692
- config_name: '{''do_sample''=True, ''beams''=1, ''temperature''=1.05, ''top_k''=10000,
''top_p''=0.5}'
features:
- name: id
dtype: string
- name: prediction
dtype: string
- name: piqa_accuracy
dtype: bool
splits:
- name: train
num_bytes: 183875
num_examples: 1838
download_size: 61317
dataset_size: 183875
- config_name: '{''do_sample''=True, ''beams''=1, ''temperature''=1.05, ''top_k''=10000,
''top_p''=1.0}'
features:
- name: id
dtype: string
- name: prediction
dtype: string
- name: piqa_accuracy
dtype: bool
splits:
- name: train
num_bytes: 180160
num_examples: 1838
download_size: 75728
dataset_size: 180160
- config_name: '{''do_sample''=True, ''beams''=10, ''temperature''=0.05, ''top_k''=100,
''top_p''=0.5}'
features:
- name: id
dtype: string
- name: prediction
dtype: string
- name: piqa_accuracy
dtype: bool
splits:
- name: train
num_bytes: 189535
num_examples: 1838
download_size: 61930
dataset_size: 189535
- config_name: '{''do_sample''=True, ''beams''=10, ''temperature''=0.05, ''top_k''=100,
''top_p''=1.0}'
features:
- name: id
dtype: string
- name: prediction
dtype: string
- name: piqa_accuracy
dtype: bool
splits:
- name: train
num_bytes: 189864
num_examples: 1838
download_size: 61607
dataset_size: 189864
- config_name: '{''do_sample''=True, ''beams''=10, ''temperature''=0.05, ''top_k''=1000,
''top_p''=0.5}'
features:
- name: id
dtype: string
- name: prediction
dtype: string
- name: piqa_accuracy
dtype: bool
splits:
- name: train
num_bytes: 189847
num_examples: 1838
download_size: 62009
dataset_size: 189847
- config_name: '{''do_sample''=True, ''beams''=10, ''temperature''=0.05, ''top_k''=1000,
''top_p''=1.0}'
features:
- name: id
dtype: string
- name: prediction
dtype: string
- name: piqa_accuracy
dtype: bool
splits:
- name: train
num_bytes: 189601
num_examples: 1838
download_size: 61836
dataset_size: 189601
- config_name: '{''do_sample''=True, ''beams''=10, ''temperature''=0.05, ''top_k''=10000,
''top_p''=0.5}'
features:
- name: id
dtype: string
- name: prediction
dtype: string
- name: piqa_accuracy
dtype: bool
splits:
- name: train
num_bytes: 189748
num_examples: 1838
download_size: 61978
dataset_size: 189748
- config_name: '{''do_sample''=True, ''beams''=10, ''temperature''=0.05, ''top_k''=10000,
''top_p''=1.0}'
features:
- name: id
dtype: string
- name: prediction
dtype: string
- name: piqa_accuracy
dtype: bool
splits:
- name: train
num_bytes: 190766
num_examples: 1838
download_size: 62598
dataset_size: 190766
- config_name: '{''do_sample''=True, ''beams''=10, ''temperature''=0.55, ''top_k''=100,
''top_p''=0.5}'
features:
- name: id
dtype: string
- name: prediction
dtype: string
- name: piqa_accuracy
dtype: bool
splits:
- name: train
num_bytes: 189949
num_examples: 1838
download_size: 62523
dataset_size: 189949
- config_name: '{''do_sample''=True, ''beams''=10, ''temperature''=0.55, ''top_k''=100,
''top_p''=1.0}'
features:
- name: id
dtype: string
- name: prediction
dtype: string
- name: piqa_accuracy
dtype: bool
splits:
- name: train
num_bytes: 185351
num_examples: 1838
download_size: 67304
dataset_size: 185351
- config_name: '{''do_sample''=True, ''beams''=10, ''temperature''=0.55, ''top_k''=1000,
''top_p''=0.5}'
features:
- name: id
dtype: string
- name: prediction
dtype: string
- name: piqa_accuracy
dtype: bool
splits:
- name: train
num_bytes: 188297
num_examples: 1838
download_size: 62126
dataset_size: 188297
- config_name: '{''do_sample''=True, ''beams''=10, ''temperature''=0.55, ''top_k''=1000,
''top_p''=1.0}'
features:
- name: id
dtype: string
- name: prediction
dtype: string
- name: piqa_accuracy
dtype: bool
splits:
- name: train
num_bytes: 183104
num_examples: 1838
download_size: 66834
dataset_size: 183104
- config_name: '{''do_sample''=True, ''beams''=10, ''temperature''=0.55, ''top_k''=10000,
''top_p''=0.5}'
features:
- name: id
dtype: string
- name: prediction
dtype: string
- name: piqa_accuracy
dtype: bool
splits:
- name: train
num_bytes: 189457
num_examples: 1838
download_size: 62075
dataset_size: 189457
- config_name: '{''do_sample''=True, ''beams''=10, ''temperature''=0.55, ''top_k''=10000,
''top_p''=1.0}'
features:
- name: id
dtype: string
- name: prediction
dtype: string
- name: piqa_accuracy
dtype: bool
splits:
- name: train
num_bytes: 181119
num_examples: 1838
download_size: 65083
dataset_size: 181119
- config_name: '{''do_sample''=True, ''beams''=10, ''temperature''=1.05, ''top_k''=100,
''top_p''=0.5}'
features:
- name: id
dtype: string
- name: prediction
dtype: string
- name: piqa_accuracy
dtype: bool
splits:
- name: train
num_bytes: 187402
num_examples: 1838
download_size: 63216
dataset_size: 187402
- config_name: '{''do_sample''=True, ''beams''=10, ''temperature''=1.05, ''top_k''=100,
''top_p''=1.0}'
features:
- name: id
dtype: string
- name: prediction
dtype: string
- name: piqa_accuracy
dtype: bool
splits:
- name: train
num_bytes: 176768
num_examples: 1838
download_size: 73589
dataset_size: 176768
- config_name: '{''do_sample''=True, ''beams''=10, ''temperature''=1.05, ''top_k''=1000,
''top_p''=0.5}'
features:
- name: id
dtype: string
- name: prediction
dtype: string
- name: piqa_accuracy
dtype: bool
splits:
- name: train
num_bytes: 185832
num_examples: 1838
download_size: 62489
dataset_size: 185832
- config_name: '{''do_sample''=True, ''beams''=10, ''temperature''=1.05, ''top_k''=1000,
''top_p''=1.0}'
features:
- name: id
dtype: string
- name: prediction
dtype: string
- name: piqa_accuracy
dtype: bool
splits:
- name: train
num_bytes: 178845
num_examples: 1838
download_size: 74226
dataset_size: 178845
- config_name: '{''do_sample''=True, ''beams''=10, ''temperature''=1.05, ''top_k''=10000,
''top_p''=0.5}'
features:
- name: id
dtype: string
- name: prediction
dtype: string
- name: piqa_accuracy
dtype: bool
splits:
- name: train
num_bytes: 185694
num_examples: 1838
download_size: 62678
dataset_size: 185694
- config_name: '{''do_sample''=True, ''beams''=10, ''temperature''=1.05, ''top_k''=10000,
''top_p''=1.0}'
features:
- name: id
dtype: string
- name: prediction
dtype: string
- name: piqa_accuracy
dtype: bool
splits:
- name: train
num_bytes: 174147
num_examples: 1838
download_size: 73115
dataset_size: 174147
- config_name: '{''do_sample''=True, ''beams''=5, ''temperature''=0.05, ''top_k''=100,
''top_p''=0.5}'
features:
- name: id
dtype: string
- name: prediction
dtype: string
- name: piqa_accuracy
dtype: bool
splits:
- name: train
num_bytes: 189778
num_examples: 1838
download_size: 61964
dataset_size: 189778
- config_name: '{''do_sample''=True, ''beams''=5, ''temperature''=0.05, ''top_k''=100,
''top_p''=1.0}'
features:
- name: id
dtype: string
- name: prediction
dtype: string
- name: piqa_accuracy
dtype: bool
splits:
- name: train
num_bytes: 189332
num_examples: 1838
download_size: 61991
dataset_size: 189332
- config_name: '{''do_sample''=True, ''beams''=5, ''temperature''=0.05, ''top_k''=1000,
''top_p''=0.5}'
features:
- name: id
dtype: string
- name: prediction
dtype: string
- name: piqa_accuracy
dtype: bool
splits:
- name: train
num_bytes: 189974
num_examples: 1838
download_size: 62074
dataset_size: 189974
- config_name: '{''do_sample''=True, ''beams''=5, ''temperature''=0.05, ''top_k''=1000,
''top_p''=1.0}'
features:
- name: id
dtype: string
- name: prediction
dtype: string
- name: piqa_accuracy
dtype: bool
splits:
- name: train
num_bytes: 190761
num_examples: 1838
download_size: 62121
dataset_size: 190761
- config_name: '{''do_sample''=True, ''beams''=5, ''temperature''=0.05, ''top_k''=10000,
''top_p''=0.5}'
features:
- name: id
dtype: string
- name: prediction
dtype: string
- name: piqa_accuracy
dtype: bool
splits:
- name: train
num_bytes: 189677
num_examples: 1838
download_size: 61973
dataset_size: 189677
- config_name: '{''do_sample''=True, ''beams''=5, ''temperature''=0.05, ''top_k''=10000,
''top_p''=1.0}'
features:
- name: id
dtype: string
- name: prediction
dtype: string
- name: piqa_accuracy
dtype: bool
splits:
- name: train
num_bytes: 190991
num_examples: 1838
download_size: 62596
dataset_size: 190991
- config_name: '{''do_sample''=True, ''beams''=5, ''temperature''=0.55, ''top_k''=100,
''top_p''=0.5}'
features:
- name: id
dtype: string
- name: prediction
dtype: string
- name: piqa_accuracy
dtype: bool
splits:
- name: train
num_bytes: 190161
num_examples: 1838
download_size: 62110
dataset_size: 190161
- config_name: '{''do_sample''=True, ''beams''=5, ''temperature''=0.55, ''top_k''=100,
''top_p''=1.0}'
features:
- name: id
dtype: string
- name: prediction
dtype: string
- name: piqa_accuracy
dtype: bool
splits:
- name: train
num_bytes: 184548
num_examples: 1838
download_size: 67186
dataset_size: 184548
- config_name: '{''do_sample''=True, ''beams''=5, ''temperature''=0.55, ''top_k''=1000,
''top_p''=0.5}'
features:
- name: id
dtype: string
- name: prediction
dtype: string
- name: piqa_accuracy
dtype: bool
splits:
- name: train
num_bytes: 189941
num_examples: 1838
download_size: 62057
dataset_size: 189941
- config_name: '{''do_sample''=True, ''beams''=5, ''temperature''=0.55, ''top_k''=1000,
''top_p''=1.0}'
features:
- name: id
dtype: string
- name: prediction
dtype: string
- name: piqa_accuracy
dtype: bool
splits:
- name: train
num_bytes: 183927
num_examples: 1838
download_size: 68607
dataset_size: 183927
- config_name: '{''do_sample''=True, ''beams''=5, ''temperature''=0.55, ''top_k''=10000,
''top_p''=0.5}'
features:
- name: id
dtype: string
- name: prediction
dtype: string
- name: piqa_accuracy
dtype: bool
splits:
- name: train
num_bytes: 190251
num_examples: 1838
download_size: 62316
dataset_size: 190251
- config_name: '{''do_sample''=True, ''beams''=5, ''temperature''=0.55, ''top_k''=10000,
''top_p''=1.0}'
features:
- name: id
dtype: string
- name: prediction
dtype: string
- name: piqa_accuracy
dtype: bool
splits:
- name: train
num_bytes: 186527
num_examples: 1838
download_size: 68864
dataset_size: 186527
- config_name: '{''do_sample''=True, ''beams''=5, ''temperature''=1.05, ''top_k''=100,
''top_p''=0.5}'
features:
- name: id
dtype: string
- name: prediction
dtype: string
- name: piqa_accuracy
dtype: bool
splits:
- name: train
num_bytes: 187250
num_examples: 1838
download_size: 62517
dataset_size: 187250
- config_name: '{''do_sample''=True, ''beams''=5, ''temperature''=1.05, ''top_k''=100,
''top_p''=1.0}'
features:
- name: id
dtype: string
- name: prediction
dtype: string
- name: piqa_accuracy
dtype: bool
splits:
- name: train
num_bytes: 179401
num_examples: 1838
download_size: 74685
dataset_size: 179401
- config_name: '{''do_sample''=True, ''beams''=5, ''temperature''=1.05, ''top_k''=1000,
''top_p''=0.5}'
features:
- name: id
dtype: string
- name: prediction
dtype: string
- name: piqa_accuracy
dtype: bool
splits:
- name: train
num_bytes: 186629
num_examples: 1838
download_size: 62051
dataset_size: 186629
- config_name: '{''do_sample''=True, ''beams''=5, ''temperature''=1.05, ''top_k''=1000,
''top_p''=1.0}'
features:
- name: id
dtype: string
- name: prediction
dtype: string
- name: piqa_accuracy
dtype: bool
splits:
- name: train
num_bytes: 177630
num_examples: 1838
download_size: 73256
dataset_size: 177630
- config_name: '{''do_sample''=True, ''beams''=5, ''temperature''=1.05, ''top_k''=10000,
''top_p''=0.5}'
features:
- name: id
dtype: string
- name: prediction
dtype: string
- name: piqa_accuracy
dtype: bool
splits:
- name: train
num_bytes: 187758
num_examples: 1838
download_size: 62486
dataset_size: 187758
- config_name: '{''do_sample''=True, ''beams''=5, ''temperature''=1.05, ''top_k''=10000,
''top_p''=1.0}'
features:
- name: id
dtype: string
- name: prediction
dtype: string
- name: piqa_accuracy
dtype: bool
splits:
- name: train
num_bytes: 176736
num_examples: 1838
download_size: 73912
dataset_size: 176736
configs:
- config_name: '{''do_sample''=False, ''beams''=10}'
data_files:
- split: train
path: '{''do_sample''=False, ''beams''=10}/train-*'
- config_name: '{''do_sample''=False, ''beams''=1}'
data_files:
- split: train
path: '{''do_sample''=False, ''beams''=1}/train-*'
- config_name: '{''do_sample''=False, ''beams''=5}'
data_files:
- split: train
path: '{''do_sample''=False, ''beams''=5}/train-*'
- config_name: '{''do_sample''=True, ''beams''=1, ''temperature''=0.05, ''top_k''=100,
''top_p''=0.5}'
data_files:
- split: train
path: '{''do_sample''=True, ''beams''=1, ''temperature''=0.05, ''top_k''=100,
''top_p''=0.5}/train-*'
- config_name: '{''do_sample''=True, ''beams''=1, ''temperature''=0.05, ''top_k''=100,
''top_p''=1.0}'
data_files:
- split: train
path: '{''do_sample''=True, ''beams''=1, ''temperature''=0.05, ''top_k''=100,
''top_p''=1.0}/train-*'
- config_name: '{''do_sample''=True, ''beams''=1, ''temperature''=0.05, ''top_k''=1000,
''top_p''=0.5}'
data_files:
- split: train
path: '{''do_sample''=True, ''beams''=1, ''temperature''=0.05, ''top_k''=1000,
''top_p''=0.5}/train-*'
- config_name: '{''do_sample''=True, ''beams''=1, ''temperature''=0.05, ''top_k''=1000,
''top_p''=1.0}'
data_files:
- split: train
path: '{''do_sample''=True, ''beams''=1, ''temperature''=0.05, ''top_k''=1000,
''top_p''=1.0}/train-*'
- config_name: '{''do_sample''=True, ''beams''=1, ''temperature''=0.05, ''top_k''=10000,
''top_p''=0.5}'
data_files:
- split: train
path: '{''do_sample''=True, ''beams''=1, ''temperature''=0.05, ''top_k''=10000,
''top_p''=0.5}/train-*'
- config_name: '{''do_sample''=True, ''beams''=1, ''temperature''=0.05, ''top_k''=10000,
''top_p''=1.0}'
data_files:
- split: train
path: '{''do_sample''=True, ''beams''=1, ''temperature''=0.05, ''top_k''=10000,
''top_p''=1.0}/train-*'
- config_name: '{''do_sample''=True, ''beams''=1, ''temperature''=0.55, ''top_k''=100,
''top_p''=0.5}'
data_files:
- split: train
path: '{''do_sample''=True, ''beams''=1, ''temperature''=0.55, ''top_k''=100,
''top_p''=0.5}/train-*'
- config_name: '{''do_sample''=True, ''beams''=1, ''temperature''=0.55, ''top_k''=100,
''top_p''=1.0}'
data_files:
- split: train
path: '{''do_sample''=True, ''beams''=1, ''temperature''=0.55, ''top_k''=100,
''top_p''=1.0}/train-*'
- config_name: '{''do_sample''=True, ''beams''=1, ''temperature''=0.55, ''top_k''=1000,
''top_p''=0.5}'
data_files:
- split: train
path: '{''do_sample''=True, ''beams''=1, ''temperature''=0.55, ''top_k''=1000,
''top_p''=0.5}/train-*'
- config_name: '{''do_sample''=True, ''beams''=1, ''temperature''=0.55, ''top_k''=1000,
''top_p''=1.0}'
data_files:
- split: train
path: '{''do_sample''=True, ''beams''=1, ''temperature''=0.55, ''top_k''=1000,
''top_p''=1.0}/train-*'
- config_name: '{''do_sample''=True, ''beams''=1, ''temperature''=0.55, ''top_k''=10000,
''top_p''=0.5}'
data_files:
- split: train
path: '{''do_sample''=True, ''beams''=1, ''temperature''=0.55, ''top_k''=10000,
''top_p''=0.5}/train-*'
- config_name: '{''do_sample''=True, ''beams''=1, ''temperature''=0.55, ''top_k''=10000,
''top_p''=1.0}'
data_files:
- split: train
path: '{''do_sample''=True, ''beams''=1, ''temperature''=0.55, ''top_k''=10000,
''top_p''=1.0}/train-*'
- config_name: '{''do_sample''=True, ''beams''=1, ''temperature''=1.05, ''top_k''=100,
''top_p''=0.5}'
data_files:
- split: train
path: '{''do_sample''=True, ''beams''=1, ''temperature''=1.05, ''top_k''=100,
''top_p''=0.5}/train-*'
- config_name: '{''do_sample''=True, ''beams''=1, ''temperature''=1.05, ''top_k''=100,
''top_p''=1.0}'
data_files:
- split: train
path: '{''do_sample''=True, ''beams''=1, ''temperature''=1.05, ''top_k''=100,
''top_p''=1.0}/train-*'
- config_name: '{''do_sample''=True, ''beams''=1, ''temperature''=1.05, ''top_k''=1000,
''top_p''=0.5}'
data_files:
- split: train
path: '{''do_sample''=True, ''beams''=1, ''temperature''=1.05, ''top_k''=1000,
''top_p''=0.5}/train-*'
- config_name: '{''do_sample''=True, ''beams''=1, ''temperature''=1.05, ''top_k''=1000,
''top_p''=1.0}'
data_files:
- split: train
path: '{''do_sample''=True, ''beams''=1, ''temperature''=1.05, ''top_k''=1000,
''top_p''=1.0}/train-*'
- config_name: '{''do_sample''=True, ''beams''=1, ''temperature''=1.05, ''top_k''=10000,
''top_p''=0.5}'
data_files:
- split: train
path: '{''do_sample''=True, ''beams''=1, ''temperature''=1.05, ''top_k''=10000,
''top_p''=0.5}/train-*'
- config_name: '{''do_sample''=True, ''beams''=1, ''temperature''=1.05, ''top_k''=10000,
''top_p''=1.0}'
data_files:
- split: train
path: '{''do_sample''=True, ''beams''=1, ''temperature''=1.05, ''top_k''=10000,
''top_p''=1.0}/train-*'
- config_name: '{''do_sample''=True, ''beams''=10, ''temperature''=0.05, ''top_k''=100,
''top_p''=0.5}'
data_files:
- split: train
path: '{''do_sample''=True, ''beams''=10, ''temperature''=0.05, ''top_k''=100,
''top_p''=0.5}/train-*'
- config_name: '{''do_sample''=True, ''beams''=10, ''temperature''=0.05, ''top_k''=100,
''top_p''=1.0}'
data_files:
- split: train
path: '{''do_sample''=True, ''beams''=10, ''temperature''=0.05, ''top_k''=100,
''top_p''=1.0}/train-*'
- config_name: '{''do_sample''=True, ''beams''=10, ''temperature''=0.05, ''top_k''=1000,
''top_p''=0.5}'
data_files:
- split: train
path: '{''do_sample''=True, ''beams''=10, ''temperature''=0.05, ''top_k''=1000,
''top_p''=0.5}/train-*'
- config_name: '{''do_sample''=True, ''beams''=10, ''temperature''=0.05, ''top_k''=1000,
''top_p''=1.0}'
data_files:
- split: train
path: '{''do_sample''=True, ''beams''=10, ''temperature''=0.05, ''top_k''=1000,
''top_p''=1.0}/train-*'
- config_name: '{''do_sample''=True, ''beams''=10, ''temperature''=0.05, ''top_k''=10000,
''top_p''=0.5}'
data_files:
- split: train
path: '{''do_sample''=True, ''beams''=10, ''temperature''=0.05, ''top_k''=10000,
''top_p''=0.5}/train-*'
- config_name: '{''do_sample''=True, ''beams''=10, ''temperature''=0.05, ''top_k''=10000,
''top_p''=1.0}'
data_files:
- split: train
path: '{''do_sample''=True, ''beams''=10, ''temperature''=0.05, ''top_k''=10000,
''top_p''=1.0}/train-*'
- config_name: '{''do_sample''=True, ''beams''=10, ''temperature''=0.55, ''top_k''=100,
''top_p''=0.5}'
data_files:
- split: train
path: '{''do_sample''=True, ''beams''=10, ''temperature''=0.55, ''top_k''=100,
''top_p''=0.5}/train-*'
- config_name: '{''do_sample''=True, ''beams''=10, ''temperature''=0.55, ''top_k''=100,
''top_p''=1.0}'
data_files:
- split: train
path: '{''do_sample''=True, ''beams''=10, ''temperature''=0.55, ''top_k''=100,
''top_p''=1.0}/train-*'
- config_name: '{''do_sample''=True, ''beams''=10, ''temperature''=0.55, ''top_k''=1000,
''top_p''=0.5}'
data_files:
- split: train
path: '{''do_sample''=True, ''beams''=10, ''temperature''=0.55, ''top_k''=1000,
''top_p''=0.5}/train-*'
- config_name: '{''do_sample''=True, ''beams''=10, ''temperature''=0.55, ''top_k''=1000,
''top_p''=1.0}'
data_files:
- split: train
path: '{''do_sample''=True, ''beams''=10, ''temperature''=0.55, ''top_k''=1000,
''top_p''=1.0}/train-*'
- config_name: '{''do_sample''=True, ''beams''=10, ''temperature''=0.55, ''top_k''=10000,
''top_p''=0.5}'
data_files:
- split: train
path: '{''do_sample''=True, ''beams''=10, ''temperature''=0.55, ''top_k''=10000,
''top_p''=0.5}/train-*'
- config_name: '{''do_sample''=True, ''beams''=10, ''temperature''=0.55, ''top_k''=10000,
''top_p''=1.0}'
data_files:
- split: train
path: '{''do_sample''=True, ''beams''=10, ''temperature''=0.55, ''top_k''=10000,
''top_p''=1.0}/train-*'
- config_name: '{''do_sample''=True, ''beams''=10, ''temperature''=1.05, ''top_k''=100,
''top_p''=0.5}'
data_files:
- split: train
path: '{''do_sample''=True, ''beams''=10, ''temperature''=1.05, ''top_k''=100,
''top_p''=0.5}/train-*'
- config_name: '{''do_sample''=True, ''beams''=10, ''temperature''=1.05, ''top_k''=100,
''top_p''=1.0}'
data_files:
- split: train
path: '{''do_sample''=True, ''beams''=10, ''temperature''=1.05, ''top_k''=100,
''top_p''=1.0}/train-*'
- config_name: '{''do_sample''=True, ''beams''=10, ''temperature''=1.05, ''top_k''=1000,
''top_p''=0.5}'
data_files:
- split: train
path: '{''do_sample''=True, ''beams''=10, ''temperature''=1.05, ''top_k''=1000,
''top_p''=0.5}/train-*'
- config_name: '{''do_sample''=True, ''beams''=10, ''temperature''=1.05, ''top_k''=1000,
''top_p''=1.0}'
data_files:
- split: train
path: '{''do_sample''=True, ''beams''=10, ''temperature''=1.05, ''top_k''=1000,
''top_p''=1.0}/train-*'
- config_name: '{''do_sample''=True, ''beams''=10, ''temperature''=1.05, ''top_k''=10000,
''top_p''=0.5}'
data_files:
- split: train
path: '{''do_sample''=True, ''beams''=10, ''temperature''=1.05, ''top_k''=10000,
''top_p''=0.5}/train-*'
- config_name: '{''do_sample''=True, ''beams''=10, ''temperature''=1.05, ''top_k''=10000,
''top_p''=1.0}'
data_files:
- split: train
path: '{''do_sample''=True, ''beams''=10, ''temperature''=1.05, ''top_k''=10000,
''top_p''=1.0}/train-*'
- config_name: '{''do_sample''=True, ''beams''=5, ''temperature''=0.05, ''top_k''=100,
''top_p''=0.5}'
data_files:
- split: train
path: '{''do_sample''=True, ''beams''=5, ''temperature''=0.05, ''top_k''=100,
''top_p''=0.5}/train-*'
- config_name: '{''do_sample''=True, ''beams''=5, ''temperature''=0.05, ''top_k''=100,
''top_p''=1.0}'
data_files:
- split: train
path: '{''do_sample''=True, ''beams''=5, ''temperature''=0.05, ''top_k''=100,
''top_p''=1.0}/train-*'
- config_name: '{''do_sample''=True, ''beams''=5, ''temperature''=0.05, ''top_k''=1000,
''top_p''=0.5}'
data_files:
- split: train
path: '{''do_sample''=True, ''beams''=5, ''temperature''=0.05, ''top_k''=1000,
''top_p''=0.5}/train-*'
- config_name: '{''do_sample''=True, ''beams''=5, ''temperature''=0.05, ''top_k''=1000,
''top_p''=1.0}'
data_files:
- split: train
path: '{''do_sample''=True, ''beams''=5, ''temperature''=0.05, ''top_k''=1000,
''top_p''=1.0}/train-*'
- config_name: '{''do_sample''=True, ''beams''=5, ''temperature''=0.05, ''top_k''=10000,
''top_p''=0.5}'
data_files:
- split: train
path: '{''do_sample''=True, ''beams''=5, ''temperature''=0.05, ''top_k''=10000,
''top_p''=0.5}/train-*'
- config_name: '{''do_sample''=True, ''beams''=5, ''temperature''=0.05, ''top_k''=10000,
''top_p''=1.0}'
data_files:
- split: train
path: '{''do_sample''=True, ''beams''=5, ''temperature''=0.05, ''top_k''=10000,
''top_p''=1.0}/train-*'
- config_name: '{''do_sample''=True, ''beams''=5, ''temperature''=0.55, ''top_k''=100,
''top_p''=0.5}'
data_files:
- split: train
path: '{''do_sample''=True, ''beams''=5, ''temperature''=0.55, ''top_k''=100,
''top_p''=0.5}/train-*'
- config_name: '{''do_sample''=True, ''beams''=5, ''temperature''=0.55, ''top_k''=100,
''top_p''=1.0}'
data_files:
- split: train
path: '{''do_sample''=True, ''beams''=5, ''temperature''=0.55, ''top_k''=100,
''top_p''=1.0}/train-*'
- config_name: '{''do_sample''=True, ''beams''=5, ''temperature''=0.55, ''top_k''=1000,
''top_p''=0.5}'
data_files:
- split: train
path: '{''do_sample''=True, ''beams''=5, ''temperature''=0.55, ''top_k''=1000,
''top_p''=0.5}/train-*'
- config_name: '{''do_sample''=True, ''beams''=5, ''temperature''=0.55, ''top_k''=1000,
''top_p''=1.0}'
data_files:
- split: train
path: '{''do_sample''=True, ''beams''=5, ''temperature''=0.55, ''top_k''=1000,
''top_p''=1.0}/train-*'
- config_name: '{''do_sample''=True, ''beams''=5, ''temperature''=0.55, ''top_k''=10000,
''top_p''=0.5}'
data_files:
- split: train
path: '{''do_sample''=True, ''beams''=5, ''temperature''=0.55, ''top_k''=10000,
''top_p''=0.5}/train-*'
- config_name: '{''do_sample''=True, ''beams''=5, ''temperature''=0.55, ''top_k''=10000,
''top_p''=1.0}'
data_files:
- split: train
path: '{''do_sample''=True, ''beams''=5, ''temperature''=0.55, ''top_k''=10000,
''top_p''=1.0}/train-*'
- config_name: '{''do_sample''=True, ''beams''=5, ''temperature''=1.05, ''top_k''=100,
''top_p''=0.5}'
data_files:
- split: train
path: '{''do_sample''=True, ''beams''=5, ''temperature''=1.05, ''top_k''=100,
''top_p''=0.5}/train-*'
- config_name: '{''do_sample''=True, ''beams''=5, ''temperature''=1.05, ''top_k''=100,
''top_p''=1.0}'
data_files:
- split: train
path: '{''do_sample''=True, ''beams''=5, ''temperature''=1.05, ''top_k''=100,
''top_p''=1.0}/train-*'
- config_name: '{''do_sample''=True, ''beams''=5, ''temperature''=1.05, ''top_k''=1000,
''top_p''=0.5}'
data_files:
- split: train
path: '{''do_sample''=True, ''beams''=5, ''temperature''=1.05, ''top_k''=1000,
''top_p''=0.5}/train-*'
- config_name: '{''do_sample''=True, ''beams''=5, ''temperature''=1.05, ''top_k''=1000,
''top_p''=1.0}'
data_files:
- split: train
path: '{''do_sample''=True, ''beams''=5, ''temperature''=1.05, ''top_k''=1000,
''top_p''=1.0}/train-*'
- config_name: '{''do_sample''=True, ''beams''=5, ''temperature''=1.05, ''top_k''=10000,
''top_p''=0.5}'
data_files:
- split: train
path: '{''do_sample''=True, ''beams''=5, ''temperature''=1.05, ''top_k''=10000,
''top_p''=0.5}/train-*'
- config_name: '{''do_sample''=True, ''beams''=5, ''temperature''=1.05, ''top_k''=10000,
''top_p''=1.0}'
data_files:
- split: train
path: '{''do_sample''=True, ''beams''=5, ''temperature''=1.05, ''top_k''=10000,
''top_p''=1.0}/train-*'
---
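Each configuration name above is the literal string of the generation hyperparameters used for that run. A small helper (illustrative only — the function and any repository id used with it are assumptions, not part of the dataset) can rebuild these names so the whole sweep can be iterated programmatically, e.g. as the `name` argument of `datasets.load_dataset`:

```python
# Hypothetical helper (not shipped with the dataset) that reconstructs the
# config names listed above from the generation hyperparameters.
def config_name(do_sample, beams, temperature=None, top_k=None, top_p=None):
    # Greedy/beam-search configs omit the sampling hyperparameters.
    if not do_sample:
        return f"{{'do_sample'={do_sample}, 'beams'={beams}}}"
    return (
        f"{{'do_sample'={do_sample}, 'beams'={beams}, "
        f"'temperature'={temperature}, 'top_k'={top_k}, 'top_p'={top_p}}}"
    )

print(config_name(True, 5, 1.05, 1000, 0.5))
# {'do_sample'=True, 'beams'=5, 'temperature'=1.05, 'top_k'=1000, 'top_p'=0.5}
print(config_name(False, 10))
# {'do_sample'=False, 'beams'=10}
```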
|
esokullu/alpaca-turkish | ---
license: mit
---
|
KETI-AIR/kor_nq_open | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
dataset_info:
features:
- name: data_index_by_user
dtype: int32
- name: question
dtype: string
- name: answer
sequence: string
splits:
- name: train
num_bytes: 8520218
num_examples: 87925
- name: validation
num_bytes: 394518
num_examples: 3610
download_size: 5925491
dataset_size: 8914736
license: cc-by-sa-3.0
---
# Dataset Card for "kor_nq_open"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
# Source Data Citation Information
```
@article{doi:10.1162/tacl\_a\_00276,
author = {Kwiatkowski, Tom and Palomaki, Jennimaria and Redfield, Olivia and Collins, Michael and Parikh, Ankur and Alberti, Chris and Epstein, Danielle and Polosukhin, Illia and Devlin, Jacob and Lee, Kenton and Toutanova, Kristina and Jones, Llion and Kelcey, Matthew and Chang, Ming-Wei and Dai, Andrew M. and Uszkoreit, Jakob and Le, Quoc and Petrov, Slav},
title = {Natural Questions: A Benchmark for Question Answering Research},
journal = {Transactions of the Association for Computational Linguistics},
volume = {7},
number = {},
pages = {453-466},
year = {2019},
doi = {10.1162/tacl\_a\_00276},
URL = {
https://doi.org/10.1162/tacl_a_00276
},
eprint = {
https://doi.org/10.1162/tacl_a_00276
},
abstract = { We present the Natural Questions corpus, a question answering data set. Questions consist of real anonymized, aggregated queries issued to the Google search engine. An annotator is presented with a question along with a Wikipedia page from the top 5 search results, and annotates a long answer (typically a paragraph) and a short answer (one or more entities) if present on the page, or marks null if no long/short answer is present. The public release consists of 307,373 training examples with single annotations; 7,830 examples with 5-way annotations for development data; and a further 7,842 examples with 5-way annotated sequestered as test data. We present experiments validating quality of the data. We also describe analysis of 25-way annotations on 302 examples, giving insights into human variability on the annotation task. We introduce robust metrics for the purposes of evaluating question answering systems; demonstrate high human upper bounds on these metrics; and establish baseline results using competitive methods drawn from related literature. }
}
@inproceedings{lee-etal-2019-latent,
title = "Latent Retrieval for Weakly Supervised Open Domain Question Answering",
author = "Lee, Kenton and
Chang, Ming-Wei and
Toutanova, Kristina",
booktitle = "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
month = jul,
year = "2019",
address = "Florence, Italy",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/P19-1612",
doi = "10.18653/v1/P19-1612",
pages = "6086--6096",
abstract = "Recent work on open domain question answering (QA) assumes strong supervision of the supporting evidence and/or assumes a blackbox information retrieval (IR) system to retrieve evidence candidates. We argue that both are suboptimal, since gold evidence is not always available, and QA is fundamentally different from IR. We show for the first time that it is possible to jointly learn the retriever and reader from question-answer string pairs and without any IR system. In this setting, evidence retrieval from all of Wikipedia is treated as a latent variable. Since this is impractical to learn from scratch, we pre-train the retriever with an Inverse Cloze Task. We evaluate on open versions of five QA datasets. On datasets where the questioner already knows the answer, a traditional IR system such as BM25 is sufficient. On datasets where a user is genuinely seeking an answer, we show that learned retrieval is crucial, outperforming BM25 by up to 19 points in exact match.",
}
``` |
cmglmsr/ImpartialNews-GenAI-Dataset | ---
dataset_info:
features:
- name: instruction
dtype: string
- name: input
dtype: string
- name: output
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 3595442
num_examples: 1432
download_size: 1467476
dataset_size: 3595442
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "ImpartialNews-GenAI-Dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
mubarak-alketbi/MMLab-documentation-examples | ---
dataset_info:
features:
- name: id
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 605546
num_examples: 718
download_size: 112311
dataset_size: 605546
---
# Dataset Card for "MMLab-documentation-examples"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Brizape/tmvar_split_0404 | ---
dataset_info:
features:
- name: id
dtype: string
- name: tokens
sequence: string
- name: ner_tags
sequence: int64
- name: texts
dtype: string
splits:
- name: test
num_bytes: 977708
num_examples: 498
- name: train
num_bytes: 2019171
num_examples: 1002
download_size: 856598
dataset_size: 2996879
---
# Dataset Card for "tmvar_split_0404"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
anjunhu/naively_captioned_CUB2002011_train | ---
dataset_info:
features:
- name: text
dtype: string
- name: image
dtype: image
splits:
- name: train
num_bytes: 165992443.75
num_examples: 5994
download_size: 165754928
dataset_size: 165992443.75
---
# Dataset Card for "naively_captioned_CUB2002011_train"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
rhalim01/test3 | ---
license: apache-2.0
---
|
linagora/FREDSum | ---
license: cc-by-sa-4.0
task_categories:
- summarization
language:
- fr
tags:
- NLP
- Debates
- Abstractive_Summarization
- Extractive_Summarization
- French
pretty_name: FREDsum
size_categories:
- n<1K
---
# Dataset Summary
The FREDSum dataset is a comprehensive collection of transcripts and metadata from various political and public debates in France. The dataset aims to provide researchers, linguists, and data scientists with a rich source of debate content for analysis and natural language processing tasks.
## Languages
French
# Dataset Structure
The dataset consists of 144 debates: 115 make up the train set and 29 make up the test set.
## Data Fields
- id : Unique ID of an example
- Transcript : The text of the debate
- Abstractive_1-3 : Human summaries of the debate, ranging from least to most abstractive: Abstractive 1 keeps speaker names to avoid coreference resolution, while Abstractive 3 is free-form
- Extractive_1-2 : Human selection of important utterances from the source debate
## Data splits
- train : 115
- test : 29
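As a toy illustration of working with the Transcript field (this is NOT how the human Extractive_1-2 annotations were produced — it is an invented length-based baseline):

```python
# Trivial extractive baseline: split a transcript into utterances (one per
# line) and keep the k longest ones, preserving the original debate order.
def naive_extractive_summary(transcript, k=2):
    utterances = [u.strip() for u in transcript.split("\n") if u.strip()]
    # Rank by character length, keep the top k, then restore source order.
    top = sorted(utterances, key=len, reverse=True)[:k]
    return [u for u in utterances if u in top]

transcript = (
    "A: Good morning.\n"
    "B: I think the reform is necessary for three reasons.\n"
    "A: I disagree with that analysis entirely.\n"
    "B: Thank you."
)
print(naive_extractive_summary(transcript, k=2))
# keeps the two longest utterances, in debate order
```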
# Licensing Information
Licence: CC BY-SA 4.0
# Citation Information
If you use this dataset, please cite the following article:
Virgile Rennard, Guokan Shang, Damien Grari, Julie Hunter, and Michalis Vazirgiannis. 2023. FREDSum: A Dialogue Summarization Corpus for French Political Debates. In Findings of the Association for Computational Linguistics: EMNLP 2023, pages 4241โ4253, Singapore. Association for Computational Linguistics. |
liuyanchen1015/MULTI_VALUE_cola_irrealis_be_done | ---
dataset_info:
features:
- name: sentence
dtype: string
- name: label
dtype: int64
- name: idx
dtype: int64
- name: value_score
dtype: int64
splits:
- name: dev
num_bytes: 619
num_examples: 8
- name: test
num_bytes: 696
num_examples: 8
- name: train
num_bytes: 7104
num_examples: 82
download_size: 9878
dataset_size: 8419
---
# Dataset Card for "MULTI_VALUE_cola_irrealis_be_done"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
muhammadravi251001/augmented-indo-nli | ---
tags:
- augmented-indonli
license: bigscience-openrail-m
datasets:
- indonli
---
In this repository, I downloaded and processed the `translate_train.tar.gz` file (containing `train.jsonl` and `dev.jsonl`) from `https://github.com/ir-nlp-csui/indonli/tree/main/data`.
How to use? As simple as this:
```python
!wget https://huggingface.co/datasets/muhammadravi251001/augmented-indo-nli/raw/main/dev_augmented.jsonl
!wget https://huggingface.co/datasets/muhammadravi251001/augmented-indo-nli/resolve/main/train_augmented.jsonl
import pandas as pd
data_train_augmented_indonli = pd.read_json(path_or_buf='train_augmented.jsonl', lines=True)
data_dev_augmented_indonli = pd.read_json(path_or_buf='dev_augmented.jsonl', lines=True)
```
Voila~!
## Reference
The dataset I used is from IndoNLI.
```
@inproceedings{indonli,
title = "IndoNLI: A Natural Language Inference Dataset for Indonesian",
author = "Mahendra, Rahmad and Aji, Alham Fikri and Louvan, Samuel and Rahman, Fahrurrozi and Vania, Clara",
booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2021",
publisher = "Association for Computational Linguistics",
}
``` |
botp/Open-Platypus | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: input
dtype: string
- name: output
dtype: string
- name: instruction
dtype: string
splits:
- name: train
num_bytes: 30418784
num_examples: 24926
download_size: 15545530
dataset_size: 30418784
language:
- en
size_categories:
- 10K<n<100K
duplicated_from: garage-bAInd/Open-Platypus
---
# OpenPlatypus
This dataset is focused on improving LLM logical reasoning skills and was used to train the Platypus2 models. It comprises the following datasets, which were filtered using keyword search and then Sentence Transformers to remove questions with a similarity above 80%:
| Dataset Name | License Type |
|--------------------------------------------------------------|--------------|
| [PRM800K](https://github.com/openai/prm800k) | MIT |
| [ScienceQA](https://github.com/lupantech/ScienceQA) | [Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International](https://creativecommons.org/licenses/by-nc-sa/4.0/) |
| [SciBench](https://github.com/mandyyyyii/scibench) | MIT |
| [ReClor](https://whyu.me/reclor/) | Non-commercial |
| [TheoremQA](https://huggingface.co/datasets/wenhu/TheoremQA) | MIT |
| [`nuprl/leetcode-solutions-python-testgen-gpt4`](https://huggingface.co/datasets/nuprl/leetcode-solutions-python-testgen-gpt4/viewer/nuprl--leetcode-solutions-python-testgen-gpt4/train?p=1) | None listed |
| [`jondurbin/airoboros-gpt4-1.4.1`](https://huggingface.co/datasets/jondurbin/airoboros-gpt4-1.4.1) | other |
| [`TigerResearch/tigerbot-kaggle-leetcodesolutions-en-2k`](https://huggingface.co/datasets/TigerResearch/tigerbot-kaggle-leetcodesolutions-en-2k/viewer/TigerResearch--tigerbot-kaggle-leetcodesolutions-en-2k/train?p=2) | apache-2.0 |
| [openbookQA](https://huggingface.co/datasets/openbookqa/viewer/additional/train?row=35) | apache-2.0 |
| [ARB](https://arb.duckai.org) | MIT |
| [`timdettmers/openassistant-guanaco`](https://huggingface.co/datasets/timdettmers/openassistant-guanaco) | apache-2.0 |
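The similarity-based deduplication described above can be sketched as follows. The embedding step is stubbed out with toy vectors, since the actual pipeline uses Sentence Transformers embeddings (see the filtering code in the Platypus repo):

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def dedup_by_similarity(embeddings, threshold=0.8):
    """Greedily keep an item only if it stays below `threshold` similarity
    to every item kept so far; returns the indices of retained items."""
    kept = []
    for i, emb in enumerate(embeddings):
        if all(cosine_similarity(emb, embeddings[j]) < threshold for j in kept):
            kept.append(i)
    return kept

# Toy question embeddings: the second vector nearly duplicates the first,
# so it is dropped; the third is orthogonal and survives.
toy = [[1.0, 0.0], [0.99, 0.05], [0.0, 1.0]]
print(dedup_by_similarity(toy))  # → [0, 2]
```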
## Data Contamination Check
We've removed approximately 200 questions that appear in the Hugging Face benchmark test sets. Please see our [paper](https://arxiv.org/abs/2308.07317) and [project webpage](https://platypus-llm.github.io) for additional information.
## Model Info
Please see models at [`garage-bAInd`](https://huggingface.co/garage-bAInd).
## Training and filtering code
Please see the [Platypus GitHub repo](https://github.com/arielnlee/Platypus).
## Citations
```bibtex
@article{platypus2023,
title={Platypus: Quick, Cheap, and Powerful Refinement of LLMs},
author={Ariel N. Lee and Cole J. Hunter and Nataniel Ruiz},
  journal={arXiv preprint arXiv:2308.07317},
year={2023}
}
```
```bibtex
@article{lightman2023lets,
title={Let's Verify Step by Step},
author={Lightman, Hunter and Kosaraju, Vineet and Burda, Yura and Edwards, Harri and Baker, Bowen and Lee, Teddy and Leike, Jan and Schulman, John and Sutskever, Ilya and Cobbe, Karl},
  journal={arXiv preprint arXiv:2305.20050},
year={2023}
}
```
```bibtex
@inproceedings{lu2022learn,
title={Learn to Explain: Multimodal Reasoning via Thought Chains for Science Question Answering},
  author={Lu, Pan and Mishra, Swaroop and Xia, Tony and Qiu, Liang and Chang, Kai-Wei and Zhu, Song-Chun and Tafjord, Oyvind and Clark, Peter and Kalyan, Ashwin},
booktitle={The 36th Conference on Neural Information Processing Systems (NeurIPS)},
year={2022}
}
```
```bibtex
@misc{wang2023scibench,
title={SciBench: Evaluating College-Level Scientific Problem-Solving Abilities of Large Language Models},
author={Xiaoxuan Wang and Ziniu Hu and Pan Lu and Yanqiao Zhu and Jieyu Zhang and Satyen Subramaniam and Arjun R. Loomba and Shichang Zhang and Yizhou Sun and Wei Wang},
year={2023},
      eprint={2307.10635},
      archivePrefix={arXiv}
}
```
```bibtex
@inproceedings{yu2020reclor,
author = {Yu, Weihao and Jiang, Zihang and Dong, Yanfei and Feng, Jiashi},
title = {ReClor: A Reading Comprehension Dataset Requiring Logical Reasoning},
booktitle = {International Conference on Learning Representations (ICLR)},
month = {April},
year = {2020}
}
```
```bibtex
@article{chen2023theoremqa,
title={TheoremQA: A Theorem-driven Question Answering dataset},
  author={Chen, Wenhu and Yin, Ming and Ku, Max and Wan, Elaine and Ma, Xueguang and Xu, Jianyu and Xia, Tony and Wang, Xinyi and Lu, Pan},
  journal={arXiv preprint arXiv:2305.12524},
year={2023}
}
```
```bibtex
@inproceedings{OpenBookQA2018,
title={Can a Suit of Armor Conduct Electricity? A New Dataset for Open Book Question Answering},
author={Todor Mihaylov and Peter Clark and Tushar Khot and Ashish Sabharwal},
booktitle={EMNLP},
year={2018}
}
```
```bibtex
@misc{sawada2023arb,
title={ARB: Advanced Reasoning Benchmark for Large Language Models},
author={Tomohiro Sawada and Daniel Paleka and Alexander Havrilla and Pranav Tadepalli and Paula Vidas and Alexander Kranias and John J. Nay and Kshitij Gupta and Aran Komatsuzaki},
  eprint={2307.13692},
  archivePrefix={arXiv},
year={2023}
}
``` |
maidalun1020/CrosslingualRetrievalBooksZh2En-qrels | ---
license: apache-2.0
configs:
- config_name: default
data_files:
- split: dev
path: data/dev-*
dataset_info:
features:
- name: qid
dtype: string
- name: pid
dtype: string
- name: score
dtype: int64
splits:
- name: dev
num_bytes: 766448
num_examples: 31405
download_size: 411197
dataset_size: 766448
---
|
vilm/TheVault-Class-xsmall | ---
dataset_info:
features:
- name: language
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 4519042318
num_examples: 1224974
download_size: 1522897119
dataset_size: 4519042318
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
tyzhu/find_marker_both_sent_train_400_eval_40_random_permute_rerun_8 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
dataset_info:
features:
- name: inputs
dtype: string
- name: targets
dtype: string
- name: title
dtype: string
- name: context
dtype: string
splits:
- name: train
num_bytes: 7906460.143432986
num_examples: 5514
- name: validation
num_bytes: 220570
num_examples: 200
download_size: 1176600
dataset_size: 8127030.143432986
---
# Dataset Card for "find_marker_both_sent_train_400_eval_40_random_permute_rerun_8"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
AppleHarem/pudding_arknights | ---
license: mit
task_categories:
- text-to-image
tags:
- art
- not-for-all-audiences
size_categories:
- n<1K
---
# Dataset of pudding (Arknights)
This is the dataset of pudding (Arknights), containing 20 images and their tags.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan, ...); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).
There is also a WebUI that contains the crawlers and other tools: [LittleAppleWebUI](https://github.com/LittleApple-fp16/LittleAppleWebUI)
| Name | Images | Download | Description |
|:----------------|---------:|:----------------------------------------|:-----------------------------------------------------------------------------------------|
| raw | 20 | [Download](dataset-raw.zip) | Raw data with meta information. |
| raw-stage3 | 52 | [Download](dataset-raw-stage3.zip) | 3-stage cropped raw data with meta information. |
| raw-stage3-eyes | 59 | [Download](dataset-raw-stage3-eyes.zip) | 3-stage cropped (with eye-focus) raw data with meta information. |
| 384x512 | 20 | [Download](dataset-384x512.zip) | 384x512 aligned dataset. |
| 512x704 | 20 | [Download](dataset-512x704.zip) | 512x704 aligned dataset. |
| 640x880 | 20 | [Download](dataset-640x880.zip) | 640x880 aligned dataset. |
| stage3-640 | 52 | [Download](dataset-stage3-640.zip) | 3-stage cropped dataset with the shorter side not exceeding 640 pixels. |
| stage3-800 | 52 | [Download](dataset-stage3-800.zip) | 3-stage cropped dataset with the shorter side not exceeding 800 pixels. |
| stage3-p512-640 | 48 | [Download](dataset-stage3-p512-640.zip) | 3-stage cropped dataset with the area not less than 512x512 pixels. |
| stage3-eyes-640 | 59 | [Download](dataset-stage3-eyes-640.zip) | 3-stage cropped (with eye-focus) dataset with the shorter side not exceeding 640 pixels. |
| stage3-eyes-800 | 59 | [Download](dataset-stage3-eyes-800.zip) | 3-stage cropped (with eye-focus) dataset with the shorter side not exceeding 800 pixels. |
|
jan-hq/jan_repo_binarized_v1 | ---
dataset_info:
features:
- name: messages
list:
- name: content
dtype: string
- name: role
dtype: string
- name: chosen
list:
- name: content
dtype: string
- name: role
dtype: string
- name: rejected
list:
- name: content
dtype: string
- name: role
dtype: string
splits:
- name: train
num_bytes: 8877143.7
num_examples: 2232
- name: test
num_bytes: 986349.3
num_examples: 248
download_size: 2314043
dataset_size: 9863493.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
|
Kaue123456/HomemQueFalaONomeDosEpisodiosDoAutoDaCompadecida | ---
license: openrail
---
|
argilla/squad_v2 | ---
size_categories: 10K<n<100K
tags:
- rlfh
- argilla
- human-feedback
---
# Dataset Card for squad_v2
This dataset has been created with [Argilla](https://docs.argilla.io).
As shown in the sections below, this dataset can be loaded into Argilla as explained in [Load with Argilla](#load-with-argilla), or used directly with the `datasets` library in [Load with `datasets`](#load-with-datasets).
## Dataset Description
- **Homepage:** https://argilla.io
- **Repository:** https://github.com/argilla-io/argilla
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
This dataset contains:
* A dataset configuration file conforming to the Argilla dataset format named `argilla.yaml`. This configuration file will be used to configure the dataset when using the `FeedbackDataset.from_huggingface` method in Argilla.
* Dataset records in a format compatible with HuggingFace `datasets`. These records will be loaded automatically when using `FeedbackDataset.from_huggingface` and can be loaded independently using the `datasets` library via `load_dataset`.
* The [annotation guidelines](#annotation-guidelines) that have been used for building and curating the dataset, if they've been defined in Argilla.
### Load with Argilla
To load with Argilla, you'll just need to install Argilla as `pip install argilla --upgrade` and then use the following code:
```python
import argilla as rg
ds = rg.FeedbackDataset.from_huggingface("argilla/squad_v2")
```
### Load with `datasets`
To load this dataset with `datasets`, you'll just need to install `datasets` as `pip install datasets --upgrade` and then use the following code:
```python
from datasets import load_dataset
ds = load_dataset("argilla/squad_v2")
```
### Supported Tasks and Leaderboards
This dataset can contain [multiple fields, questions and responses](https://docs.argilla.io/en/latest/guides/llms/conceptual_guides/data_model.html) so it can be used for different NLP tasks, depending on the configuration. The dataset structure is described in the [Dataset Structure section](#dataset-structure).
There are no leaderboards associated with this dataset.
### Languages
[More Information Needed]
## Dataset Structure
### Data in Argilla
The dataset is created in Argilla with: **fields**, **questions**, **suggestions**, and **guidelines**.
The **fields** are the dataset records themselves; for the moment, just text fields are supported. These are the ones that will be used to provide responses to the questions.
| Field Name | Title | Type | Required | Markdown |
| ---------- | ----- | ---- | -------- | -------- |
| question | Question | TextField | True | False |
| context | Context | TextField | True | False |
The **questions** are the questions that will be asked to the annotators. They can be of different types, such as rating, text, single choice, or multiple choice.
| Question Name | Title | Type | Required | Description | Values/Labels |
| ------------- | ----- | ---- | -------- | ----------- | ------------- |
| answer | Answer | TextQuestion | True | N/A | N/A |
**✨ NEW** Additionally, we also have **suggestions**, which are linked to the existing questions and named by appending "-suggestion" and "-suggestion-metadata" to the question names; they contain the value(s) of the suggestion and its metadata, respectively. The possible values are the same as in the table above.
Finally, the **guidelines** are just a plain string that can be used to provide instructions to the annotators. Find those in the [annotation guidelines](#annotation-guidelines) section.
### Data Instances
An example of a dataset instance in Argilla looks as follows:
```json
{
"fields": {
"context": "Beyonc\u00e9 Giselle Knowles-Carter (/bi\u02d0\u02c8j\u0252nse\u026a/ bee-YON-say) (born September 4, 1981) is an American singer, songwriter, record producer and actress. Born and raised in Houston, Texas, she performed in various singing and dancing competitions as a child, and rose to fame in the late 1990s as lead singer of R\u0026B girl-group Destiny\u0027s Child. Managed by her father, Mathew Knowles, the group became one of the world\u0027s best-selling girl groups of all time. Their hiatus saw the release of Beyonc\u00e9\u0027s debut album, Dangerously in Love (2003), which established her as a solo artist worldwide, earned five Grammy Awards and featured the Billboard Hot 100 number-one singles \"Crazy in Love\" and \"Baby Boy\".",
"question": "When did Beyonce start becoming popular?"
},
"metadata": {
"split": "train"
},
"responses": [
{
"status": "submitted",
"values": {
"answer": {
"value": "in the late 1990s"
}
}
}
],
"suggestions": []
}
```
While the same record in HuggingFace `datasets` looks as follows:
```json
{
"answer": [
{
"status": "submitted",
"user_id": null,
"value": "in the late 1990s"
}
],
"answer-suggestion": null,
"answer-suggestion-metadata": {
"agent": null,
"score": null,
"type": null
},
"context": "Beyonc\u00e9 Giselle Knowles-Carter (/bi\u02d0\u02c8j\u0252nse\u026a/ bee-YON-say) (born September 4, 1981) is an American singer, songwriter, record producer and actress. Born and raised in Houston, Texas, she performed in various singing and dancing competitions as a child, and rose to fame in the late 1990s as lead singer of R\u0026B girl-group Destiny\u0027s Child. Managed by her father, Mathew Knowles, the group became one of the world\u0027s best-selling girl groups of all time. Their hiatus saw the release of Beyonc\u00e9\u0027s debut album, Dangerously in Love (2003), which established her as a solo artist worldwide, earned five Grammy Awards and featured the Billboard Hot 100 number-one singles \"Crazy in Love\" and \"Baby Boy\".",
"external_id": null,
"metadata": "{\"split\": \"train\"}",
"question": "When did Beyonce start becoming popular?"
}
```
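The mapping between the two representations above can be sketched with a small helper that flattens an Argilla-style record into a plain (question, context, answer) triple; the record below is a trimmed, hypothetical stand-in for the Beyoncé example:

```python
import json

def flatten_record(record):
    """Flatten an Argilla-style feedback record into a plain QA triple,
    taking the first submitted response as the answer."""
    fields = record["fields"]
    answer = None
    for response in record.get("responses", []):
        if response.get("status") == "submitted":
            answer = response["values"]["answer"]["value"]
            break
    return {
        "question": fields["question"],
        "context": fields["context"],
        "answer": answer,
        "metadata": record.get("metadata", {}),
    }

# Trimmed, hypothetical record in the Argilla shape shown above.
record = {
    "fields": {
        "context": "...rose to fame in the late 1990s as lead singer of Destiny's Child...",
        "question": "When did Beyonce start becoming popular?",
    },
    "metadata": {"split": "train"},
    "responses": [
        {"status": "submitted", "values": {"answer": {"value": "in the late 1990s"}}}
    ],
    "suggestions": [],
}

flat = flatten_record(record)
print(json.dumps(flat, indent=2))
```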
### Data Fields
Among the dataset fields, we differentiate between the following:
* **Fields:** These are the dataset records themselves; for the moment, just text fields are supported. These are the ones that will be used to provide responses to the questions.
* **question** is of type `TextField`.
* **context** is of type `TextField`.
* **Questions:** These are the questions that will be asked to the annotators. They can be of different types, such as `RatingQuestion`, `TextQuestion`, `LabelQuestion`, `MultiLabelQuestion`, and `RankingQuestion`.
* **answer** is of type `TextQuestion`.
* **✨ NEW** **Suggestions:** As of Argilla 1.13.0, suggestions have been included to ease or assist the annotators during the annotation process. Suggestions are linked to the existing questions, are always optional, and contain not just the suggestion itself but also the metadata linked to it, if applicable.
* (optional) **answer-suggestion** is of type `text`.
Additionally, we also have one more field which is optional and is the following:
* **external_id:** This is an optional field that can be used to provide an external ID for the dataset record. This can be useful if you want to link the dataset record to an external resource, such as a database or a file.
### Data Splits
The dataset contains a single split, which is `train`.
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation guidelines
[More Information Needed]
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] |
Liay4gelz/Minhavoz | ---
license: openrail
---
|
Michael823/semantic-try | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype: image
splits:
- name: train
num_bytes: 3347017.0
num_examples: 10
- name: validation
num_bytes: 834103.0
num_examples: 3
download_size: 849393
dataset_size: 4181120.0
---
# Dataset Card for "semantic-try"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
benlipkin/braincode-neurips2022 | ---
license: mit
---
Large file storage for the paper `Convergent Representations of Computer Programs in Human and Artificial Neural Networks` by Shashank Srikant*, Benjamin Lipkin*, Anna A. Ivanova, Evelina Fedorenko, and Una-May O'Reilly. The code repository is hosted on [GitHub](https://github.com/ALFA-group/code-representations-ml-brain). Check it out!
If you use this work, please cite:
```bibtex
@inproceedings{SrikantLipkin2022,
author = {Srikant, Shashank and Lipkin, Benjamin and Ivanova, Anna and Fedorenko, Evelina and O'Reilly, Una-May},
title = {Convergent Representations of Computer Programs in Human and Artificial Neural Networks},
year = {2022},
journal = {Advances in Neural Information Processing Systems},
}
``` |
open-llm-leaderboard/details_TheTravellingEngineer__bloom-1b1-RLHF | ---
pretty_name: Evaluation run of TheTravellingEngineer/bloom-1b1-RLHF
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [TheTravellingEngineer/bloom-1b1-RLHF](https://huggingface.co/TheTravellingEngineer/bloom-1b1-RLHF)\
\ on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 64 configurations, each one corresponding to one of the\
\ evaluated tasks.\n\nThe dataset has been created from 3 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
\ timestamp of the run.The \"train\" split is always pointing to the latest results.\n\
\nAn additional configuration \"results\" store all the aggregated results of the\
\ run (and is used to compute and display the aggregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_TheTravellingEngineer__bloom-1b1-RLHF\"\
,\n\t\"harness_gsm8k_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese\
\ are the [latest results from run 2023-12-02T13:14:43.588399](https://huggingface.co/datasets/open-llm-leaderboard/details_TheTravellingEngineer__bloom-1b1-RLHF/blob/main/results_2023-12-02T13-14-43.588399.json)(note\
\ that there might be results for other tasks in the repos if successive evals didn't\
\ cover the same tasks. You can find each in the results and the \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.0,\n \"\
acc_stderr\": 0.0\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.0,\n \
\ \"acc_stderr\": 0.0\n }\n}\n```"
repo_url: https://huggingface.co/TheTravellingEngineer/bloom-1b1-RLHF
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_arc_challenge_25
data_files:
- split: 2023_08_09T08_38_39.084452
path:
- '**/details_harness|arc:challenge|25_2023-08-09T08:38:39.084452.parquet'
- split: latest
path:
- '**/details_harness|arc:challenge|25_2023-08-09T08:38:39.084452.parquet'
- config_name: harness_drop_3
data_files:
- split: 2023_10_16T19_51_27.983287
path:
- '**/details_harness|drop|3_2023-10-16T19-51-27.983287.parquet'
- split: latest
path:
- '**/details_harness|drop|3_2023-10-16T19-51-27.983287.parquet'
- config_name: harness_gsm8k_5
data_files:
- split: 2023_10_16T19_51_27.983287
path:
- '**/details_harness|gsm8k|5_2023-10-16T19-51-27.983287.parquet'
- split: 2023_12_02T13_14_43.588399
path:
- '**/details_harness|gsm8k|5_2023-12-02T13-14-43.588399.parquet'
- split: latest
path:
- '**/details_harness|gsm8k|5_2023-12-02T13-14-43.588399.parquet'
- config_name: harness_hellaswag_10
data_files:
- split: 2023_08_09T08_38_39.084452
path:
- '**/details_harness|hellaswag|10_2023-08-09T08:38:39.084452.parquet'
- split: latest
path:
- '**/details_harness|hellaswag|10_2023-08-09T08:38:39.084452.parquet'
- config_name: harness_hendrycksTest_5
data_files:
- split: 2023_08_09T08_38_39.084452
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-08-09T08:38:39.084452.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2023-08-09T08:38:39.084452.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2023-08-09T08:38:39.084452.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2023-08-09T08:38:39.084452.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-08-09T08:38:39.084452.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2023-08-09T08:38:39.084452.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-08-09T08:38:39.084452.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-08-09T08:38:39.084452.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-08-09T08:38:39.084452.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2023-08-09T08:38:39.084452.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2023-08-09T08:38:39.084452.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2023-08-09T08:38:39.084452.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-08-09T08:38:39.084452.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2023-08-09T08:38:39.084452.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-08-09T08:38:39.084452.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-08-09T08:38:39.084452.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2023-08-09T08:38:39.084452.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2023-08-09T08:38:39.084452.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-08-09T08:38:39.084452.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-08-09T08:38:39.084452.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-08-09T08:38:39.084452.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-08-09T08:38:39.084452.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-08-09T08:38:39.084452.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-08-09T08:38:39.084452.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-08-09T08:38:39.084452.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-08-09T08:38:39.084452.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-08-09T08:38:39.084452.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-08-09T08:38:39.084452.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-08-09T08:38:39.084452.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-08-09T08:38:39.084452.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-08-09T08:38:39.084452.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-08-09T08:38:39.084452.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2023-08-09T08:38:39.084452.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-08-09T08:38:39.084452.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2023-08-09T08:38:39.084452.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-08-09T08:38:39.084452.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-08-09T08:38:39.084452.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2023-08-09T08:38:39.084452.parquet'
- '**/details_harness|hendrycksTest-management|5_2023-08-09T08:38:39.084452.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2023-08-09T08:38:39.084452.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-08-09T08:38:39.084452.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-08-09T08:38:39.084452.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-08-09T08:38:39.084452.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-08-09T08:38:39.084452.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2023-08-09T08:38:39.084452.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2023-08-09T08:38:39.084452.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2023-08-09T08:38:39.084452.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-08-09T08:38:39.084452.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2023-08-09T08:38:39.084452.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-08-09T08:38:39.084452.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-08-09T08:38:39.084452.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2023-08-09T08:38:39.084452.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2023-08-09T08:38:39.084452.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2023-08-09T08:38:39.084452.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-08-09T08:38:39.084452.parquet'
- '**/details_harness|hendrycksTest-virology|5_2023-08-09T08:38:39.084452.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2023-08-09T08:38:39.084452.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-08-09T08:38:39.084452.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2023-08-09T08:38:39.084452.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2023-08-09T08:38:39.084452.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2023-08-09T08:38:39.084452.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-08-09T08:38:39.084452.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2023-08-09T08:38:39.084452.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-08-09T08:38:39.084452.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-08-09T08:38:39.084452.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-08-09T08:38:39.084452.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2023-08-09T08:38:39.084452.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2023-08-09T08:38:39.084452.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2023-08-09T08:38:39.084452.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-08-09T08:38:39.084452.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2023-08-09T08:38:39.084452.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-08-09T08:38:39.084452.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-08-09T08:38:39.084452.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2023-08-09T08:38:39.084452.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2023-08-09T08:38:39.084452.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-08-09T08:38:39.084452.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-08-09T08:38:39.084452.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-08-09T08:38:39.084452.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-08-09T08:38:39.084452.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-08-09T08:38:39.084452.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-08-09T08:38:39.084452.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-08-09T08:38:39.084452.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-08-09T08:38:39.084452.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-08-09T08:38:39.084452.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-08-09T08:38:39.084452.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-08-09T08:38:39.084452.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-08-09T08:38:39.084452.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-08-09T08:38:39.084452.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-08-09T08:38:39.084452.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2023-08-09T08:38:39.084452.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-08-09T08:38:39.084452.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2023-08-09T08:38:39.084452.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-08-09T08:38:39.084452.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-08-09T08:38:39.084452.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2023-08-09T08:38:39.084452.parquet'
- '**/details_harness|hendrycksTest-management|5_2023-08-09T08:38:39.084452.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2023-08-09T08:38:39.084452.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-08-09T08:38:39.084452.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-08-09T08:38:39.084452.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-08-09T08:38:39.084452.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-08-09T08:38:39.084452.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2023-08-09T08:38:39.084452.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2023-08-09T08:38:39.084452.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2023-08-09T08:38:39.084452.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-08-09T08:38:39.084452.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2023-08-09T08:38:39.084452.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-08-09T08:38:39.084452.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-08-09T08:38:39.084452.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2023-08-09T08:38:39.084452.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2023-08-09T08:38:39.084452.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2023-08-09T08:38:39.084452.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-08-09T08:38:39.084452.parquet'
- '**/details_harness|hendrycksTest-virology|5_2023-08-09T08:38:39.084452.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2023-08-09T08:38:39.084452.parquet'
- config_name: harness_hendrycksTest_abstract_algebra_5
data_files:
- split: 2023_08_09T08_38_39.084452
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-08-09T08:38:39.084452.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-08-09T08:38:39.084452.parquet'
- config_name: harness_hendrycksTest_anatomy_5
data_files:
- split: 2023_08_09T08_38_39.084452
path:
- '**/details_harness|hendrycksTest-anatomy|5_2023-08-09T08:38:39.084452.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-anatomy|5_2023-08-09T08:38:39.084452.parquet'
- config_name: harness_hendrycksTest_astronomy_5
data_files:
- split: 2023_08_09T08_38_39.084452
path:
- '**/details_harness|hendrycksTest-astronomy|5_2023-08-09T08:38:39.084452.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-astronomy|5_2023-08-09T08:38:39.084452.parquet'
- config_name: harness_hendrycksTest_business_ethics_5
data_files:
- split: 2023_08_09T08_38_39.084452
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2023-08-09T08:38:39.084452.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2023-08-09T08:38:39.084452.parquet'
- config_name: harness_hendrycksTest_clinical_knowledge_5
data_files:
- split: 2023_08_09T08_38_39.084452
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-08-09T08:38:39.084452.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-08-09T08:38:39.084452.parquet'
- config_name: harness_hendrycksTest_college_biology_5
data_files:
- split: 2023_08_09T08_38_39.084452
path:
- '**/details_harness|hendrycksTest-college_biology|5_2023-08-09T08:38:39.084452.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_biology|5_2023-08-09T08:38:39.084452.parquet'
- config_name: harness_hendrycksTest_college_chemistry_5
data_files:
- split: 2023_08_09T08_38_39.084452
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-08-09T08:38:39.084452.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-08-09T08:38:39.084452.parquet'
- config_name: harness_hendrycksTest_college_computer_science_5
data_files:
- split: 2023_08_09T08_38_39.084452
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-08-09T08:38:39.084452.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-08-09T08:38:39.084452.parquet'
- config_name: harness_hendrycksTest_college_mathematics_5
data_files:
- split: 2023_08_09T08_38_39.084452
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-08-09T08:38:39.084452.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-08-09T08:38:39.084452.parquet'
- config_name: harness_hendrycksTest_college_medicine_5
data_files:
- split: 2023_08_09T08_38_39.084452
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2023-08-09T08:38:39.084452.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2023-08-09T08:38:39.084452.parquet'
- config_name: harness_hendrycksTest_college_physics_5
data_files:
- split: 2023_08_09T08_38_39.084452
path:
- '**/details_harness|hendrycksTest-college_physics|5_2023-08-09T08:38:39.084452.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_physics|5_2023-08-09T08:38:39.084452.parquet'
- config_name: harness_hendrycksTest_computer_security_5
data_files:
- split: 2023_08_09T08_38_39.084452
path:
- '**/details_harness|hendrycksTest-computer_security|5_2023-08-09T08:38:39.084452.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-computer_security|5_2023-08-09T08:38:39.084452.parquet'
- config_name: harness_hendrycksTest_conceptual_physics_5
data_files:
- split: 2023_08_09T08_38_39.084452
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-08-09T08:38:39.084452.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-08-09T08:38:39.084452.parquet'
- config_name: harness_hendrycksTest_econometrics_5
data_files:
- split: 2023_08_09T08_38_39.084452
path:
- '**/details_harness|hendrycksTest-econometrics|5_2023-08-09T08:38:39.084452.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-econometrics|5_2023-08-09T08:38:39.084452.parquet'
- config_name: harness_hendrycksTest_electrical_engineering_5
data_files:
- split: 2023_08_09T08_38_39.084452
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-08-09T08:38:39.084452.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-08-09T08:38:39.084452.parquet'
- config_name: harness_hendrycksTest_elementary_mathematics_5
data_files:
- split: 2023_08_09T08_38_39.084452
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-08-09T08:38:39.084452.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-08-09T08:38:39.084452.parquet'
- config_name: harness_hendrycksTest_formal_logic_5
data_files:
- split: 2023_08_09T08_38_39.084452
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2023-08-09T08:38:39.084452.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2023-08-09T08:38:39.084452.parquet'
- config_name: harness_hendrycksTest_global_facts_5
data_files:
- split: 2023_08_09T08_38_39.084452
path:
- '**/details_harness|hendrycksTest-global_facts|5_2023-08-09T08:38:39.084452.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-global_facts|5_2023-08-09T08:38:39.084452.parquet'
- config_name: harness_hendrycksTest_high_school_biology_5
data_files:
- split: 2023_08_09T08_38_39.084452
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-08-09T08:38:39.084452.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-08-09T08:38:39.084452.parquet'
- config_name: harness_hendrycksTest_high_school_chemistry_5
data_files:
- split: 2023_08_09T08_38_39.084452
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-08-09T08:38:39.084452.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-08-09T08:38:39.084452.parquet'
- config_name: harness_hendrycksTest_high_school_computer_science_5
data_files:
- split: 2023_08_09T08_38_39.084452
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-08-09T08:38:39.084452.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-08-09T08:38:39.084452.parquet'
- config_name: harness_hendrycksTest_high_school_european_history_5
data_files:
- split: 2023_08_09T08_38_39.084452
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-08-09T08:38:39.084452.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-08-09T08:38:39.084452.parquet'
- config_name: harness_hendrycksTest_high_school_geography_5
data_files:
- split: 2023_08_09T08_38_39.084452
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-08-09T08:38:39.084452.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-08-09T08:38:39.084452.parquet'
- config_name: harness_hendrycksTest_high_school_government_and_politics_5
data_files:
- split: 2023_08_09T08_38_39.084452
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-08-09T08:38:39.084452.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-08-09T08:38:39.084452.parquet'
- config_name: harness_hendrycksTest_high_school_macroeconomics_5
data_files:
- split: 2023_08_09T08_38_39.084452
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-08-09T08:38:39.084452.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-08-09T08:38:39.084452.parquet'
- config_name: harness_hendrycksTest_high_school_mathematics_5
data_files:
- split: 2023_08_09T08_38_39.084452
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-08-09T08:38:39.084452.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-08-09T08:38:39.084452.parquet'
- config_name: harness_hendrycksTest_high_school_microeconomics_5
data_files:
- split: 2023_08_09T08_38_39.084452
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-08-09T08:38:39.084452.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-08-09T08:38:39.084452.parquet'
- config_name: harness_hendrycksTest_high_school_physics_5
data_files:
- split: 2023_08_09T08_38_39.084452
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-08-09T08:38:39.084452.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-08-09T08:38:39.084452.parquet'
- config_name: harness_hendrycksTest_high_school_psychology_5
data_files:
- split: 2023_08_09T08_38_39.084452
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-08-09T08:38:39.084452.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-08-09T08:38:39.084452.parquet'
- config_name: harness_hendrycksTest_high_school_statistics_5
data_files:
- split: 2023_08_09T08_38_39.084452
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-08-09T08:38:39.084452.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-08-09T08:38:39.084452.parquet'
- config_name: harness_hendrycksTest_high_school_us_history_5
data_files:
- split: 2023_08_09T08_38_39.084452
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-08-09T08:38:39.084452.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-08-09T08:38:39.084452.parquet'
- config_name: harness_hendrycksTest_high_school_world_history_5
data_files:
- split: 2023_08_09T08_38_39.084452
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-08-09T08:38:39.084452.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-08-09T08:38:39.084452.parquet'
- config_name: harness_hendrycksTest_human_aging_5
data_files:
- split: 2023_08_09T08_38_39.084452
path:
- '**/details_harness|hendrycksTest-human_aging|5_2023-08-09T08:38:39.084452.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_aging|5_2023-08-09T08:38:39.084452.parquet'
- config_name: harness_hendrycksTest_human_sexuality_5
data_files:
- split: 2023_08_09T08_38_39.084452
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-08-09T08:38:39.084452.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-08-09T08:38:39.084452.parquet'
- config_name: harness_hendrycksTest_international_law_5
data_files:
- split: 2023_08_09T08_38_39.084452
path:
- '**/details_harness|hendrycksTest-international_law|5_2023-08-09T08:38:39.084452.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-international_law|5_2023-08-09T08:38:39.084452.parquet'
- config_name: harness_hendrycksTest_jurisprudence_5
data_files:
- split: 2023_08_09T08_38_39.084452
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-08-09T08:38:39.084452.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-08-09T08:38:39.084452.parquet'
- config_name: harness_hendrycksTest_logical_fallacies_5
data_files:
- split: 2023_08_09T08_38_39.084452
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-08-09T08:38:39.084452.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-08-09T08:38:39.084452.parquet'
- config_name: harness_hendrycksTest_machine_learning_5
data_files:
- split: 2023_08_09T08_38_39.084452
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2023-08-09T08:38:39.084452.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2023-08-09T08:38:39.084452.parquet'
- config_name: harness_hendrycksTest_management_5
data_files:
- split: 2023_08_09T08_38_39.084452
path:
- '**/details_harness|hendrycksTest-management|5_2023-08-09T08:38:39.084452.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-management|5_2023-08-09T08:38:39.084452.parquet'
- config_name: harness_hendrycksTest_marketing_5
data_files:
- split: 2023_08_09T08_38_39.084452
path:
- '**/details_harness|hendrycksTest-marketing|5_2023-08-09T08:38:39.084452.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-marketing|5_2023-08-09T08:38:39.084452.parquet'
- config_name: harness_hendrycksTest_medical_genetics_5
data_files:
- split: 2023_08_09T08_38_39.084452
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-08-09T08:38:39.084452.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-08-09T08:38:39.084452.parquet'
- config_name: harness_hendrycksTest_miscellaneous_5
data_files:
- split: 2023_08_09T08_38_39.084452
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-08-09T08:38:39.084452.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-08-09T08:38:39.084452.parquet'
- config_name: harness_hendrycksTest_moral_disputes_5
data_files:
- split: 2023_08_09T08_38_39.084452
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-08-09T08:38:39.084452.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-08-09T08:38:39.084452.parquet'
- config_name: harness_hendrycksTest_moral_scenarios_5
data_files:
- split: 2023_08_09T08_38_39.084452
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-08-09T08:38:39.084452.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-08-09T08:38:39.084452.parquet'
- config_name: harness_hendrycksTest_nutrition_5
data_files:
- split: 2023_08_09T08_38_39.084452
path:
- '**/details_harness|hendrycksTest-nutrition|5_2023-08-09T08:38:39.084452.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-nutrition|5_2023-08-09T08:38:39.084452.parquet'
- config_name: harness_hendrycksTest_philosophy_5
data_files:
- split: 2023_08_09T08_38_39.084452
path:
- '**/details_harness|hendrycksTest-philosophy|5_2023-08-09T08:38:39.084452.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-philosophy|5_2023-08-09T08:38:39.084452.parquet'
- config_name: harness_hendrycksTest_prehistory_5
data_files:
- split: 2023_08_09T08_38_39.084452
path:
- '**/details_harness|hendrycksTest-prehistory|5_2023-08-09T08:38:39.084452.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-prehistory|5_2023-08-09T08:38:39.084452.parquet'
- config_name: harness_hendrycksTest_professional_accounting_5
data_files:
- split: 2023_08_09T08_38_39.084452
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-08-09T08:38:39.084452.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-08-09T08:38:39.084452.parquet'
- config_name: harness_hendrycksTest_professional_law_5
data_files:
- split: 2023_08_09T08_38_39.084452
path:
- '**/details_harness|hendrycksTest-professional_law|5_2023-08-09T08:38:39.084452.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_law|5_2023-08-09T08:38:39.084452.parquet'
- config_name: harness_hendrycksTest_professional_medicine_5
data_files:
- split: 2023_08_09T08_38_39.084452
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-08-09T08:38:39.084452.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-08-09T08:38:39.084452.parquet'
- config_name: harness_hendrycksTest_professional_psychology_5
data_files:
- split: 2023_08_09T08_38_39.084452
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-08-09T08:38:39.084452.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-08-09T08:38:39.084452.parquet'
- config_name: harness_hendrycksTest_public_relations_5
data_files:
- split: 2023_08_09T08_38_39.084452
path:
- '**/details_harness|hendrycksTest-public_relations|5_2023-08-09T08:38:39.084452.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-public_relations|5_2023-08-09T08:38:39.084452.parquet'
- config_name: harness_hendrycksTest_security_studies_5
data_files:
- split: 2023_08_09T08_38_39.084452
path:
- '**/details_harness|hendrycksTest-security_studies|5_2023-08-09T08:38:39.084452.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-security_studies|5_2023-08-09T08:38:39.084452.parquet'
- config_name: harness_hendrycksTest_sociology_5
data_files:
- split: 2023_08_09T08_38_39.084452
path:
- '**/details_harness|hendrycksTest-sociology|5_2023-08-09T08:38:39.084452.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-sociology|5_2023-08-09T08:38:39.084452.parquet'
- config_name: harness_hendrycksTest_us_foreign_policy_5
data_files:
- split: 2023_08_09T08_38_39.084452
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-08-09T08:38:39.084452.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-08-09T08:38:39.084452.parquet'
- config_name: harness_hendrycksTest_virology_5
data_files:
- split: 2023_08_09T08_38_39.084452
path:
- '**/details_harness|hendrycksTest-virology|5_2023-08-09T08:38:39.084452.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-virology|5_2023-08-09T08:38:39.084452.parquet'
- config_name: harness_hendrycksTest_world_religions_5
data_files:
- split: 2023_08_09T08_38_39.084452
path:
- '**/details_harness|hendrycksTest-world_religions|5_2023-08-09T08:38:39.084452.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-world_religions|5_2023-08-09T08:38:39.084452.parquet'
- config_name: harness_truthfulqa_mc_0
data_files:
- split: 2023_08_09T08_38_39.084452
path:
- '**/details_harness|truthfulqa:mc|0_2023-08-09T08:38:39.084452.parquet'
- split: latest
path:
- '**/details_harness|truthfulqa:mc|0_2023-08-09T08:38:39.084452.parquet'
- config_name: harness_winogrande_5
data_files:
- split: 2023_10_16T19_51_27.983287
path:
- '**/details_harness|winogrande|5_2023-10-16T19-51-27.983287.parquet'
- split: latest
path:
- '**/details_harness|winogrande|5_2023-10-16T19-51-27.983287.parquet'
- config_name: results
data_files:
- split: 2023_08_09T08_38_39.084452
path:
- results_2023-08-09T08:38:39.084452.parquet
- split: 2023_10_16T19_51_27.983287
path:
- results_2023-10-16T19-51-27.983287.parquet
- split: 2023_12_02T13_14_43.588399
path:
- results_2023-12-02T13-14-43.588399.parquet
- split: latest
path:
- results_2023-12-02T13-14-43.588399.parquet
---
# Dataset Card for Evaluation run of TheTravellingEngineer/bloom-1b1-RLHF
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/TheTravellingEngineer/bloom-1b1-RLHF
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** clementine@hf.co
### Dataset Summary
Dataset automatically created during the evaluation run of model [TheTravellingEngineer/bloom-1b1-RLHF](https://huggingface.co/TheTravellingEngineer/bloom-1b1-RLHF) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 3 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "latest" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_TheTravellingEngineer__bloom-1b1-RLHF",
"harness_gsm8k_5",
	split="latest")
```
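The timestamped split names are derived from the run timestamps; as a sketch (this mapping is inferred from the split names listed in this card, not from an official API), a run timestamp such as `2023-08-09T08:38:39.084452` becomes the split name `2023_08_09T08_38_39.084452`:

```python
def timestamp_to_split(timestamp: str) -> str:
    # Inferred convention: replace "-" and ":" in the run timestamp
    # with "_", leaving the rest of the string unchanged.
    return timestamp.replace("-", "_").replace(":", "_")

print(timestamp_to_split("2023-08-09T08:38:39.084452"))
# 2023_08_09T08_38_39.084452
```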
## Latest results
These are the [latest results from run 2023-12-02T13:14:43.588399](https://huggingface.co/datasets/open-llm-leaderboard/details_TheTravellingEngineer__bloom-1b1-RLHF/blob/main/results_2023-12-02T13-14-43.588399.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks; you can find each one in the "results" configuration and in the "latest" split of each eval):
```python
{
"all": {
"acc": 0.0,
"acc_stderr": 0.0
},
"harness|gsm8k|5": {
"acc": 0.0,
"acc_stderr": 0.0
}
}
```
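For scripting, the per-task metrics above can be read directly from the results dict; a minimal sketch using the exact structure shown above:

```python
# Results dict copied verbatim from the "Latest results" section above.
results = {
    "all": {"acc": 0.0, "acc_stderr": 0.0},
    "harness|gsm8k|5": {"acc": 0.0, "acc_stderr": 0.0},
}

# Collect per-task accuracies, skipping the aggregated "all" entry.
per_task = {task: m["acc"] for task, m in results.items() if task != "all"}
print(per_task)  # {'harness|gsm8k|5': 0.0}
```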
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] |
kamilakesbi/cv_for_spd_fr_2k_augmented | ---
dataset_info:
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: speakers
sequence: string
- name: timestamps_start
sequence: float64
- name: timestamps_end
sequence: float64
splits:
- name: train
num_bytes: 3450301186.0
num_examples: 2016
- name: validation
num_bytes: 768677006.0
num_examples: 408
- name: test
num_bytes: 776215872.0
num_examples: 408
download_size: 4658292851
dataset_size: 4995194064.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
---
|
ibranze/araproje_hellaswag_tr_s4 | ---
dataset_info:
features:
- name: ind
dtype: int32
- name: activity_label
dtype: string
- name: ctx_a
dtype: string
- name: ctx_b
dtype: string
- name: ctx
dtype: string
- name: endings
sequence: string
- name: source_id
dtype: string
- name: split
dtype: string
- name: split_type
dtype: string
- name: label
dtype: string
splits:
- name: validation
num_bytes: 162703.0
num_examples: 250
download_size: 88640
dataset_size: 162703.0
configs:
- config_name: default
data_files:
- split: validation
path: data/validation-*
---
# Dataset Card for "araproje_hellaswag_tr_s4"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
CyberHarem/dusk_arknights | ---
license: mit
task_categories:
- text-to-image
tags:
- art
- not-for-all-audiences
size_categories:
- n<1K
---
# Dataset of dusk/ใทใผ/ๅค (Arknights)
This is the dataset of dusk/ใทใผ/ๅค (Arknights), containing 500 images and their tags.
The core tags of this character are `horns, black_hair, dragon_horns, long_hair, pointy_ears, multicolored_hair, colored_skin, red_eyes, earrings, dragon_girl, breasts, streaked_hair, tail, very_long_hair, dragon_tail, green_skin, hair_over_one_eye, medium_breasts, green_hair, tassel_earrings, tassel`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:----------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 500 | 1.22 GiB | [Download](https://huggingface.co/datasets/CyberHarem/dusk_arknights/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 1200 | 500 | 991.85 MiB | [Download](https://huggingface.co/datasets/CyberHarem/dusk_arknights/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 1324 | 1.80 GiB | [Download](https://huggingface.co/datasets/CyberHarem/dusk_arknights/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need this, just run the following code:
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/dusk_arknights',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
## List of Clusters
List of tag clustering results; some outfits may be mined here.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 20 |  |  |  |  |  | 1girl, china_dress, official_alternate_costume, sleeveless_dress, solo, bare_shoulders, bead_bracelet, blue_dress, blue_skin, looking_at_viewer, gradient_skin, closed_mouth, cowboy_shot, simple_background, white_background, dragon, holding_sword |
| 1 | 8 |  |  |  |  |  | 1girl, bare_shoulders, china_dress, jewelry, official_alternate_costume, sleeveless_dress, solo, bare_legs, black_footwear, blue_dress, closed_mouth, full_body, high_heels, green_dress, sitting, blue_skin, looking_at_viewer, white_background, dragon, simple_background |
| 2 | 18 |  |  |  |  |  | 1girl, bare_shoulders, black_jacket, holding_sword, off_shoulder, solo, white_dress, red_necktie, sleeveless_dress, looking_at_viewer, bead_bracelet, cowboy_shot, open_coat, open_jacket |
| 3 | 6 |  |  |  |  |  | 1girl, bare_shoulders, black_jacket, holding_sword, jewelry, looking_at_viewer, off_shoulder, red_necktie, sleeveless_dress, solo, white_dress, simple_background, open_jacket, white_background, closed_mouth, long_sleeves |
| 4 | 33 |  |  |  |  |  | 1girl, bare_shoulders, black_jacket, off_shoulder, red_necktie, solo, white_dress, jewelry, looking_at_viewer, sleeveless_dress, open_jacket, closed_mouth, upper_body, white_background, simple_background, clothing_cutout |
| 5 | 21 |  |  |  |  |  | 1girl, black_footwear, black_jacket, boots, holding_sword, red_necktie, solo, white_dress, bare_shoulders, off_shoulder, sleeveless_dress, full_body, looking_at_viewer, open_jacket, floating_hair, bare_legs, bead_bracelet |
| 6 | 9 |  |  |  |  |  | 1girl, black_skirt, jewelry, long_sleeves, solo, white_shirt, looking_at_viewer, official_alternate_costume, wide_sleeves, bare_legs, chinese_clothes, closed_mouth, sitting, ahoge, crossed_legs, full_body, gradient_skin |
| 7 | 6 |  |  |  |  |  | 1boy, 1girl, blush, hetero, nipples, open_mouth, penis, sex, cum_in_pussy, spread_legs, sweat, vaginal, completely_nude, dark-skinned_male, ejaculation, gradient_skin, heart, large_breasts, navel, saliva, solo_focus, ahegao, cum_overflow, drooling, internal_cumshot, jewelry, lying, mosaic_censoring, tongue_out, uterus |
| 8 | 6 |  |  |  |  |  | blue_skin, collarbone, completely_nude, looking_at_viewer, navel, 1girl, barefoot, large_breasts, simple_background, solo, white_background, gradient_skin, jewelry, sitting, blush, closed_mouth, covering_privates, full_body, nipples, pussy |
| 9 | 5 |  |  |  |  |  | 1girl, ass, blush, completely_nude, from_behind, looking_at_viewer, looking_back, solo, all_fours, anus, large_breasts, mosaic_censoring, nipples, after_sex, after_vaginal, backboob, blue_skin, cum_in_pussy, cumdrip, feet_out_of_frame, from_above, green_horns, night, onsen, open_mouth, smile |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | china_dress | official_alternate_costume | sleeveless_dress | solo | bare_shoulders | bead_bracelet | blue_dress | blue_skin | looking_at_viewer | gradient_skin | closed_mouth | cowboy_shot | simple_background | white_background | dragon | holding_sword | jewelry | bare_legs | black_footwear | full_body | high_heels | green_dress | sitting | black_jacket | off_shoulder | white_dress | red_necktie | open_coat | open_jacket | long_sleeves | upper_body | clothing_cutout | boots | floating_hair | black_skirt | white_shirt | wide_sleeves | chinese_clothes | ahoge | crossed_legs | 1boy | blush | hetero | nipples | open_mouth | penis | sex | cum_in_pussy | spread_legs | sweat | vaginal | completely_nude | dark-skinned_male | ejaculation | heart | large_breasts | navel | saliva | solo_focus | ahegao | cum_overflow | drooling | internal_cumshot | lying | mosaic_censoring | tongue_out | uterus | collarbone | barefoot | covering_privates | pussy | ass | from_behind | looking_back | all_fours | anus | after_sex | after_vaginal | backboob | cumdrip | feet_out_of_frame | from_above | green_horns | night | onsen | smile |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:--------------|:-----------------------------|:-------------------|:-------|:-----------------|:----------------|:-------------|:------------|:--------------------|:----------------|:---------------|:--------------|:--------------------|:-------------------|:---------|:----------------|:----------|:------------|:-----------------|:------------|:-------------|:--------------|:----------|:---------------|:---------------|:--------------|:--------------|:------------|:--------------|:---------------|:-------------|:------------------|:--------|:----------------|:--------------|:--------------|:---------------|:------------------|:--------|:---------------|:-------|:--------|:---------|:----------|:-------------|:--------|:------|:---------------|:--------------|:--------|:----------|:------------------|:--------------------|:--------------|:--------|:----------------|:--------|:---------|:-------------|:---------|:---------------|:-----------|:-------------------|:--------|:-------------------|:-------------|:---------|:-------------|:-----------|:--------------------|:--------|:------|:--------------|:---------------|:------------|:-------|:------------|:----------------|:-----------|:----------|:--------------------|:-------------|:--------------|:--------|:--------|:--------|
| 0 | 20 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 1 | 8 |  |  |  |  |  | X | X | X | X | X | X | | X | X | X | | X | | X | X | X | | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 2 | 18 |  |  |  |  |  | X | | | X | X | X | X | | | X | | | X | | | | X | | | | | | | | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 3 | 6 |  |  |  |  |  | X | | | X | X | X | | | | X | | X | | X | X | | X | X | | | | | | | X | X | X | X | | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 4 | 33 |  |  |  |  |  | X | | | X | X | X | | | | X | | X | | X | X | | | X | | | | | | | X | X | X | X | | X | | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 5 | 21 |  |  |  |  |  | X | | | X | X | X | X | | | X | | | | | | | X | | X | X | X | | | | X | X | X | X | | X | | | | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 6 | 9 |  |  |  |  |  | X | | X | | X | | | | | X | X | X | | | | | | X | X | | X | | | X | | | | | | | X | | | | | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 7 | 6 |  |  |  |  |  | X | | | | | | | | | | X | | | | | | | X | | | | | | | | | | | | | | | | | | | | | | | | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | |
| 8 | 6 |  |  |  |  |  | X | | | | X | | | | X | X | X | X | | X | X | | | X | | | X | | | X | | | | | | | | | | | | | | | | | | | X | | X | | | | | | | | X | | | | X | X | | | | | | | | | | | X | X | X | X | | | | | | | | | | | | | | | |
| 9 | 5 |  |  |  |  |  | X | | | | X | | | | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | X | | X | X | | | X | | | | X | | | | X | | | | | | | | | X | | | | | | | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X |
|
farrell236/NSCLC-Radiomics-NIFTI | ---
license: cc-by-3.0
tags:
- medical
viewer: false
---
# NSCLC-Radiomics (NIFTI)
The models featured in this repository use images from the publicly available [NSCLC-Radiomics](https://wiki.cancerimagingarchive.net/display/Public/NSCLC-Radiomics) Dataset.
Download the data from TCIA with **Descriptive Directory Name** download option.
## Converting Format
Convert DICOM images and segmentation to NIFTI format using [pydicom](https://pydicom.github.io/) and [pydicom-seg](https://razorx89.github.io/pydicom-seg/guides/read.html). Run:
```shell
user@machine:~/NSCLC-Radiomics-NIFTI$ python convert.py
```
## Segmentations
Each image will have one or more of the following segmentation files:
```
├── seg-Esophagus.nii.gz
├── seg-GTV-1.nii.gz
├── seg-Heart.nii.gz
├── seg-Lung-Left.nii.gz
├── seg-Lung-Right.nii.gz
└── seg-Spinal-Cord.nii.gz
```
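As a minimal sketch of the naming convention above, a helper like the following maps a segment description to its output filename (the helper name and the exact form of the segment descriptions are assumptions, not part of `convert.py`):

```python
def seg_filename(segment_description: str) -> str:
    """Map a DICOM-SEG segment description to the seg-*.nii.gz
    naming convention shown above, replacing spaces with hyphens."""
    safe = segment_description.strip().replace(" ", "-")
    return f"seg-{safe}.nii.gz"
```

For example, a segment described as "Spinal Cord" would be written as `seg-Spinal-Cord.nii.gz`.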
## Requirements
```
dicom2nifti==2.4.6
pandas==1.5.0
pydicom==2.3.1
pydicom-seg==0.4.1
SimpleITK==2.2.0
tqdm==4.64.1
```
## Citation
If using this repository, please cite the following works:
```
Data Citation
Aerts, H. J. W. L., Wee, L., Rios Velazquez, E., Leijenaar, R. T. H., Parmar, C., Grossmann, P.,
Carvalho, S., Bussink, J., Monshouwer, R., Haibe-Kains, B., Rietveld, D., Hoebers, F.,
Rietbergen, M. M., Leemans, C. R., Dekker, A., Quackenbush, J., Gillies, R. J., Lambin, P. (2019).
Data From NSCLC-Radiomics (version 4) [Data set].
The Cancer Imaging Archive.
https://doi.org/10.7937/K9/TCIA.2015.PF0M9REI
Publication Citation
Aerts, H. J. W. L., Velazquez, E. R., Leijenaar, R. T. H., Parmar, C., Grossmann, P., Carvalho, S.,
Bussink, J., Monshouwer, R., Haibe-Kains, B., Rietveld, D., Hoebers, F., Rietbergen, M. M.,
Leemans, C. R., Dekker, A., Quackenbush, J., Gillies, R. J., Lambin, P. (2014, June 3).
Decoding tumour phenotype by noninvasive imaging using a quantitative radiomics approach.
Nature Communications. Nature Publishing Group.
https://doi.org/10.1038/ncomms5006
TCIA Citation
Clark K, Vendt B, Smith K, Freymann J, Kirby J, Koppel P, Moore S, Phillips S, Maffitt D, Pringle M,
Tarbox L, Prior F.
The Cancer Imaging Archive (TCIA): Maintaining and Operating a Public Information Repository,
Journal of Digital Imaging, Volume 26, Number 6, December, 2013, pp 1045-1057.
https://doi.org/10.1007/s10278-013-9622-7
``` |
GokhanAI/FEEDBACK | ---
license: apache-2.0
dataset_info:
features:
- name: prompt
dtype: string
- name: prompt_id
dtype: string
- name: chosen
list:
- name: content
dtype: string
- name: role
dtype: string
- name: rejected
list:
- name: content
dtype: string
- name: role
dtype: string
- name: messages
list:
- name: content
dtype: string
- name: role
dtype: string
- name: score_chosen
dtype: float64
- name: score_rejected
dtype: float64
splits:
- name: train
num_bytes: 422914065.6031619
num_examples: 58964
- name: test
num_bytes: 7172411.3968381025
num_examples: 1000
download_size: 239726086
dataset_size: 430086477.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
|
DESUCLUB/Zephyr-DPO-Chat | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: messages
list:
- name: content
dtype: string
- name: role
dtype: string
splits:
- name: train
num_bytes: 2004674
num_examples: 3780
download_size: 92265
dataset_size: 2004674
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
dhiruHF/research_paper_multi_label_data_balanced | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 2637884
num_examples: 1985
download_size: 1359885
dataset_size: 2637884
---
# Dataset Card for "research_paper_multi_label_data_balanced"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
serdarsezgin/dataset | ---
license: openrail
---
|
LangChainDatasets/multiagent-bidding-dialogue | ---
license: mit
dataset_info:
features:
- name: prompts
sequence: string
- name: generations
list:
list:
- name: generation_info
dtype: 'null'
- name: message
struct:
- name: content
dtype: string
- name: example
dtype: bool
- name: text
dtype: string
splits:
- name: train
num_bytes: 191044
num_examples: 40
download_size: 39300
dataset_size: 191044
---
|
mesolitica/mixtral-factual-QA | ---
license: apache-2.0
task_categories:
- question-answering
language:
- ms
- en
---
# Mixtral Factual QA
Questions and answers generated from provided contexts. We use contexts from:
1. maktabahalbakri.com
2. muftiwp.gov.my
3. asklegal.my
4. dewanbahasa-jdbp
5. gov.my
6. patriots
7. rootofscience
8. majalahsains
9. nasilemaktech
10. alhijrahnews
11. https://huggingface.co/datasets/open-phi/textbooks
Notebooks: https://github.com/mesolitica/malaysian-dataset/tree/master/question-answer/mixtral-factual
- [factually-wrong-qa-coding.jsonl](factually-wrong-qa-coding.jsonl), 31253 rows, 425 MB
- [factually-wrong-qa.jsonl](factually-wrong-qa.jsonl), 1108037 rows, 10 GB
## Example data
```python
{'doc': "shift register, the parallel-in-serial-out (PISO) register, and the serial-in-parallel-out (SIPO) register. These registers are all based on the concept of a sequential circuit, which is a circuit that has a sequence of states and can be switched between them by applying appropriate inputs. - Shift Register: The shift register is a type of register that can shift its stored data by one bit at a time. It has a set of inputs, CLK (clock), SER (serial input), and RST (reset), and a set of outputs, Q0, Q1, Q2, etc. The CLK input controls when the register shifts its data, and the RST input resets the register to its initial state. The SER input allows for the serial input of data into the register. The truth table for a shift register is: | CLK | RST | SER | Q0 | Q1 | Q2 | ... | |-----|-----|-----|----|----|----|-----| | 0 | 0 | 0 | 0 | 0 | 0 | ... | | 0 | 0 | 1 | 0 | 0 | 0 | ... | | 1 | 0 | 0 | 1 | 0 | 0 | ... | | 1 | 0 | 1 | 0 | 1 | 0 | ... | | 0 | 1 | 0 | 0 | 0 | 0 | ... | | 0 | 1 | 1 | 0 | 0 | 0 | ... | | 1 | 1 | 0 | 0 | 0 | 0 | ... | | 1 | 1 | 1 | 0 | 0 | 0 | ... | | ... | ... | ... | ...| ...| ...| ... | - Parallel-In-Serial-Out (PISO) Register: The PISO register is a type of register that can transfer parallel data into a serial output. It has a set of inputs, CLK (clock), D0, D1, D2, etc. (parallel inputs), and RST (reset), and a set of outputs, Q (serial output). The CLK input controls when the register transfers its data, and the RST input resets the register to its initial state. The truth table for a PISO register is: | CLK | RST | D0 | D1 | D2 | ... | Q | |-----|-----|----|----|----|-----|---| | 0 | 0 | 0 | 0 | 0 | ... | 0 | | 0 | 0 | 1 | 0 | 0 | ... | 0 | | 1 | 0 | 0 | 1 | 0 | ... | 0 | | 1 | 0 | 1 | 0 | 1 | ... | 0 | | 0 | 1 | 0 | 0 | 0 | ... | 0 | | 0 | 1 | 1 | 0 | 0 | ... | 0 | | 1 | 1 | 0 | 0 | 0 | ... | 0 | | 1 | 1 | 1 | 0 | 0 | ... | 0 | | ... | ... | ...| ...| ...| ... 
|...| - Serial-In-Parallel-Out (SIPO) Register: The SIPO register is a type of register that can transfer serial data into parallel outputs. It has a set of inputs, CLK (clock), SER (serial input), and RST (reset), and a set of outputs, Q0, Q1, Q2, etc. (parallel outputs). The CLK input controls when the register transfers its data, and the RST input resets the register to its initial state. The SER input allows for the serial input of data into the register. The truth table for a SIPO register is: | CLK | RST | SER | Q0 | Q1 | Q2 | ... | |-----|-----|-----|----|----|----|-----| | 0 | 0 | 0 | 0 | 0 | 0 | ... | | 0 | 0 | 1 | 0 | 0 | 0 | ... | | 1 | 0 | 0 | 1 | 0 | 0 | ... | | 1 | 0 | 1 | 0 | 1 | 0 | ... | | 0 | 1 | 0 | 0 | 0 | 0 | ... | | 0 | 1 | 1 | 0 | 0 | 0 | ... | | 1 | 1 | 0 | 0 | 0 | 0 | ... | | 1 | 1 | 1 | 0 | 0 | 0 | ... | | ... | ... | ... | ...| ...| ...| ... | ### Conclusion In this chapter, we have explored the fundamentals of digital electronics. We have learned about the basic building blocks of digital circuits, such as logic gates, flip-flops, and registers. We have also discussed the importance of binary numbers and how they are used to represent and manipulate data in digital systems. Additionally, we have examined the design and implementation of combinational and sequential logic circuits, and how they can be used to perform various operations and tasks. Digital electronics is a rapidly evolving field, with new technologies and advancements being made every day. As such, it is important to continue learning and staying updated on the latest developments in this area. By understanding the fundamentals of digital electronics, you will be better equipped to design and analyze complex digital systems, and contribute to the advancement of this field. ### Exercises #### Exercise 1 Design a combinational logic circuit that takes in two 4-bit binary numbers and outputs their sum in binary form. 
#### Exercise 2 Implement a sequential logic circuit that counts from 0 to 7 and then repeats the sequence. #### Exercise 3 Research and compare the different types of flip-flops, including D, JK, T, and SR flip-flops. Explain their differences and when each type would be used. #### Exercise 4 Using Boolean algebra, simplify the following expression: $$(A + B)(A + \\overline{B})(\\overline{A} + B)$$ #### Exercise 5 Design a circuit that takes in a 3-bit binary number and outputs its equivalent in Gray code. ### Conclusion In this chapter, we have explored the fundamentals of digital electronics. We have learned about the basic building blocks of digital circuits, such as logic gates, flip-flops, and registers. We have also discussed the importance of binary numbers and how they are used to represent and manipulate data in digital systems. Additionally, we have examined the design and implementation of combinational and sequential logic circuits, and how they can be used to perform various operations and tasks. Digital electronics is a rapidly evolving field, with new technologies and advancements being made every day. As such, it is important to continue learning and staying updated on the latest developments in this area. By understanding the fundamentals of digital electronics, you will be better equipped to design and analyze complex digital systems, and contribute to the advancement of this field. ### Exercises #### Exercise 1 Design a combinational logic circuit that takes in two 4-bit binary numbers and outputs their sum in binary form. #### Exercise 2 Implement a sequential logic circuit that counts from 0 to 7 and then repeats the sequence. #### Exercise 3 Research and compare the different types of flip-flops, including D, JK, T, and SR flip-flops. Explain their differences and when each type would be used. 
#### Exercise 4 Using Boolean algebra, simplify the following expression: $$(A + B)(A + \\overline{B})(\\overline{A} + B)$$ #### Exercise 5 Design a circuit that takes in a 3-bit binary number and outputs its equivalent in Gray code. ## Chapter: Fundamentals of Circuits and Electronics ### Introduction In this chapter, we will explore the fundamentals of communication systems. Communication systems are an integral part of our daily lives, allowing us to connect with others and access information from around the world. These systems rely on the principles of circuits and electronics to transmit and receive signals, making them an essential topic to understand in the field of electrical engineering. We will begin by discussing the basic components of a communication system, including transmitters, receivers, and channels. We will then delve into the principles of modulation, which is the process of encoding information onto a carrier signal for transmission. This will include an in-depth look at amplitude modulation (AM), frequency modulation (FM), and phase modulation (PM). Next, we will explore the concept of noise and its impact on communication systems. We will discuss various techniques for reducing noise and improving the quality of transmitted signals. This will include error correction codes, equalization, and diversity techniques. Finally, we will examine different types of communication systems, such as analog and digital systems, and their applications in various industries. We will also touch on emerging technologies, such as wireless communication and satellite communication, and their impact on modern communication systems. By the end of this chapter, you will have a solid understanding of the fundamentals of communication systems and how they are used in our daily lives. This knowledge will serve as a foundation for further exploration into more advanced topics in the field of circuits and electronics. 
So let's dive in and discover the exciting world of communication systems! ## Chapter 19: Communication Systems: ### Section 19.1: Modulation Techniques Modulation is the process of encoding information onto a carrier signal for transmission. This is a crucial step in communication systems, as it allows for the efficient and reliable transfer of information over long distances. In this section, we will explore the different types of modulation techniques and their applications in communication systems. #### Amplitude Modulation (AM) Amplitude modulation is a type of modulation where the amplitude of the carrier signal is varied in accordance with the information being transmitted. This is achieved by multiplying the carrier signal with the information signal, resulting in a modulated signal that contains both the carrier and information signals. The modulated signal is then transmitted through a channel to the receiver, where it is demodulated to extract the original information signal. AM is commonly used in radio broadcasting, where the carrier signal is a high-frequency electromagnetic wave and the information signal is an audio signal. The amplitude of the carrier wave is varied to match the amplitude of the audio signal, resulting in a modulated signal that can be transmitted over long distances without significant loss of quality. #### Frequency Modulation (FM) Frequency modulation is a type of modulation where the frequency of the carrier signal is varied in accordance with the information being transmitted. This is achieved by changing the frequency of the carrier signal in proportion to the amplitude of the information signal. The modulated signal is then transmitted through a channel to the receiver, where it is demodulated to extract the original information signal. FM is commonly used in radio broadcasting, particularly for high-fidelity music transmission. 
Unlike AM, FM is less susceptible to noise and interference, making it a preferred choice for high-quality audio transmission. #### Phase Modulation (PM) Phase modulation is a type of modulation where the phase of the carrier signal is varied in accordance with the information being transmitted. This is achieved by changing the phase of the carrier signal in proportion to the amplitude of the information signal. The modulated signal is then transmitted through a channel to the receiver, where it is demodulated to extract the original information signal. PM is commonly used in digital communication systems, such as satellite communication and wireless communication. It is also used in some analog systems, particularly in high-frequency applications. ### Noise in Communication Systems Noise is an unwanted disturbance that can affect the quality of a transmitted signal. It can be caused by various factors, such as electromagnetic interference, thermal noise, and channel distortion. In communication systems, noise can cause errors in the received signal, leading to a loss of information. To combat noise, various techniques are used, such as error correction codes, equalization, and diversity techniques. Error correction codes add redundancy to the transmitted signal, allowing for the detection and correction of errors at the receiver. Equalization techniques are used to compensate for channel distortion, while diversity techniques use multiple channels to transmit the same information, reducing the impact",
'question': 'Why is amplitude modulation more susceptible to noise and interference compared to frequency modulation?',
'answer': 'Amplitude modulation (AM) is more susceptible to noise and interference compared to frequency modulation (FM) because the amplitude of the carrier wave in AM is varied to match the amplitude of the audio signal. This means that any noise or interference present in the channel will also be amplified, leading to a loss of quality in the received signal.\n\nIn contrast, FM varies the frequency of the carrier signal in proportion to the amplitude of the information signal. This means that any noise or interference present in the channel will result in a slight shift in the frequency of the carrier signal, but the overall amplitude of the signal will remain relatively constant. This makes FM less susceptible to noise and interference compared to AM, particularly in high-fidelity music transmission.\n\nFurthermore, FM modulation has a higher signal-to-noise ratio compared to AM modulation. The signal-to-noise ratio is the ratio of the power of the signal to the power of the noise. A higher signal-to-noise ratio means that the signal is stronger compared to the noise, resulting in a clearer and more reliable transmission. FM has a wider bandwidth compared to AM, allowing it to transmit a stronger signal and reducing the impact of noise and interference.\n\nOverall, FM is a more robust modulation technique compared to AM, particularly in high-fidelity music transmission. However, AM is still commonly used in radio broadcasting due to its simplicity and lower bandwidth requirements.',
'question_ms': 'Mengapa modulasi amplitud lebih terdedah kepada hingar dan gangguan berbanding dengan modulasi frekuensi?',
'answer_ms': 'Modulasi amplitud (AM) lebih mudah terdedah kepada hingar dan gangguan berbanding dengan modulasi frekuensi (FM) kerana amplitud gelombang pembawa dalam AM diubah untuk sepadan dengan amplitud isyarat audio. Ini bermakna bahawa sebarang bunyi atau gangguan yang terdapat dalam saluran juga akan dikuatkan, yang membawa kepada kehilangan kualiti dalam isyarat yang diterima.\n\nSebaliknya, FM mengubah kekerapan isyarat pembawa mengikut kadar amplitud isyarat maklumat. Ini bermakna bahawa sebarang bunyi atau gangguan yang terdapat dalam saluran akan mengakibatkan perubahan sedikit dalam kekerapan isyarat pembawa, tetapi amplitud keseluruhan isyarat akan kekal secara relatifnya. Ini menjadikan FM kurang terdedah kepada hingar dan gangguan berbanding AM, terutamanya dalam penghantaran muzik kesetiaan tinggi.\n\nTambahan pula, modulasi FM mempunyai nisbah isyarat-ke-bunyi yang lebih tinggi berbanding modulasi AM. Nisbah isyarat kepada hingar ialah nisbah kuasa isyarat kepada kuasa bunyi. Nisbah isyarat-ke-bunyi yang lebih tinggi bermakna isyarat itu lebih kuat berbanding dengan hingar, menghasilkan penghantaran yang lebih jelas dan lebih dipercayai. FM mempunyai lebar jalur yang lebih luas berbanding AM, membolehkannya menghantar isyarat yang lebih kuat dan mengurangkan kesan hingar dan gangguan.\n\nSecara keseluruhan, FM ialah teknik modulasi yang lebih mantap berbanding AM, terutamanya dalam penghantaran muzik kesetiaan tinggi. Walau bagaimanapun, AM masih biasa digunakan dalam penyiaran radio kerana kesederhanaan dan keperluan jalur lebar yang lebih rendah.'}
```
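Given the row format above, a minimal sketch for dropping rows with short answers when reading the JSONL files (the 200-character threshold is an arbitrary assumption; tune it for your use case):

```python
import json

def filter_short_answers(jsonl_lines, min_chars=200):
    """Keep only rows whose 'answer' field is at least `min_chars`
    characters long; short answers tend to be unreliable."""
    kept = []
    for line in jsonl_lines:
        row = json.loads(line)
        if len(row.get("answer", "")) >= min_chars:
            kept.append(row)
    return kept
```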
## Precaution
1. Always filter short answers; they are likely inaccurate, especially for coding. |
ariesta/forensic-timeline3 | ---
configs:
- config_name: casper-rw
data_files: casper-rw.csv
- config_name: dfrws-2009-jhuisi
data_files: dfrws-2009-jhuisi.csv
- config_name: dfrws-2009-nssal
data_files: dfrws-2009-nssal.csv
- config_name: honeynet-challenge7
data_files: honeynet-challenge7.csv
- config_name: honeynet-challenge5
data_files: honeynet-challenge5.csv
--- |
tasksource/corr2cause | ---
license: mit
dataset_info:
features:
- name: premise
dtype: string
- name: hypothesis
dtype: string
- name: relation
dtype: string
- name: id
dtype: string
splits:
- name: train
num_bytes: 791933574
num_examples: 411452
- name: dev
num_bytes: 3140558
num_examples: 2246
- name: test
num_bytes: 2415937
num_examples: 2246
download_size: 11038753
dataset_size: 797490069
---
https://github.com/causalNLP/corr2cause/
The HF dataset provided by the author cannot be directly loaded. We use the NLI subset, which is the most general task.
```
@article{jin2023can,
title={Can Large Language Models Infer Causation from Correlation?},
author={Jin, Zhijing and Liu, Jiarui and Lyu, Zhiheng and Poff, Spencer and Sachan, Mrinmaya and Mihalcea, Rada and Diab, Mona and Sch{\"o}lkopf, Bernhard},
journal={arXiv preprint arXiv:2306.05836},
year={2023}
}
``` |
corbt/unlabeled-recipes | ---
dataset_info:
features:
- name: recipe
dtype: string
splits:
- name: train
num_bytes: 2793853
num_examples: 5000
download_size: 1465640
dataset_size: 2793853
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "unlabeled-recipes"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
isp-uv-es/WorldFloodsv2 | ---
license: cc-by-nc-4.0
pipeline_tag: image-segmentation
tags:
- remote sensing
- sentinel2
- landsat
- floods
---
# *WorldFloodsv2* dataset
This repository contains the *WorldFloodsv2* dataset released with the publication:
> E. Portalés-Julià, G. Mateo-García, C. Purcell, and L. Gómez-Chova [Global flood extent segmentation in optical satellite images](https://www.nature.com/articles/s41598-023-47595-7). _Scientific Reports 13, 20316_ (2023). DOI: 10.1038/s41598-023-47595-7.
The [*WorldFloodsv2* database](https://www.nature.com/articles/s41598-023-47595-7) contains 509 pairs of Sentinel-2 images and flood segmentation masks, split into train, val and test sets.
It requires approximately 76GB of hard-disk storage.
<img src="worldfloods_v2.png" alt="licence" width="65%"/>
## Download the dataset
```
huggingface-cli download --cache-dir /path/to/cachedir --local-dir /path/to/localdir/WorldFloodsv2 --repo-type dataset isp-uv-es/WorldFloodsv2
```
## Explore the dataset
The [exploring *WorldFloodsv2*](https://spaceml-org.github.io/ml4floods/content/prep/exploring_worldfloods.html) tutorial in the [ml4floods](https://github.com/spaceml-org/ml4floods) package shows how to
process the dataset and plot the images and masks.
## Licence
The *WorldFloods* database and all pre-trained models are released under a [Creative Commons non-commercial licence](https://creativecommons.org/licenses/by-nc/4.0/legalcode.txt)
## Cite
If you find this work useful, please cite:
```
@article{portales-julia_global_2023,
title = {Global flood extent segmentation in optical satellite images},
volume = {13},
issn = {2045-2322},
doi = {10.1038/s41598-023-47595-7},
number = {1},
urldate = {2023-11-30},
journal = {Scientific Reports},
  author = {Portalés-Julià, Enrique and Mateo-García, Gonzalo and Purcell, Cormac and Gómez-Chova, Luis},
month = nov,
year = {2023},
pages = {20316},
}
```
|
tyzhu/random_letter_same_length_find_passage_train10_eval10_num | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
dataset_info:
features:
- name: inputs
dtype: string
- name: targets
dtype: string
splits:
- name: train
num_bytes: 10324
num_examples: 30
- name: validation
num_bytes: 3230
num_examples: 10
download_size: 13343
dataset_size: 13554
---
# Dataset Card for "random_letter_same_length_find_passage_train10_eval10_num"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
kimjisoobkkai/EsioVocais | ---
license: openrail
---
|
franczi/zaba | ---
license: wtfpl
---
|
bcui19/chat-v2-anthropic-helpfulness | ---
license: apache-2.0
dataset_info:
features:
- name: prompt
dtype: string
- name: response
dtype: string
- name: source
dtype: string
splits:
- name: train
num_bytes: 162490682.0
num_examples: 155270
- name: test
num_bytes: 8773391.0
num_examples: 8336
download_size: 82339171
dataset_size: 171264073.0
---
|
heliosprime/twitter_dataset_1713106971 | ---
dataset_info:
features:
- name: id
dtype: string
- name: tweet_content
dtype: string
- name: user_name
dtype: string
- name: user_id
dtype: string
- name: created_at
dtype: string
- name: url
dtype: string
- name: favourite_count
dtype: int64
- name: scraped_at
dtype: string
- name: image_urls
dtype: string
splits:
- name: train
num_bytes: 14977
num_examples: 42
download_size: 15134
dataset_size: 14977
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "twitter_dataset_1713106971"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
one-sec-cv12/chunk_248 | ---
dataset_info:
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
splits:
- name: train
num_bytes: 17332822080.5
num_examples: 180460
download_size: 14606880914
dataset_size: 17332822080.5
---
# Dataset Card for "chunk_248"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
steamcyclone/Pill-Ideologies-New-Test | ---
license: cc
language:
- en # Example: fr
tags:
- natural-language-understanding # Example: audio
- ideology classification #
- text classification #
annotations_creators:
- crowdsourced # Example: crowdsourced, found, expert-generated, machine-generated
language_creators:
- crowdsourced # Example: crowdsourced, ...
# language_details:
# - en-US # Example: fr-FR
pretty_name: PiLls # Example: SQuAD
size_categories:
- n<10K # Example: n<1K, 100K<n<1M, โฆ
source_datasets:
- reddit # Example: wikipedia
task_categories: # Full list at https://github.com/huggingface/huggingface.js/blob/main/packages/tasks/src/pipelines.ts
- text-classification # Example: question-answering
task_ids:
- multi-class-classification # Example: extractive-qa
---
# Dataset Card for Dataset Name
<!-- Provide a quick summary of the dataset. -->
This dataset aims to be a tool for tracing linguistic patterns in the reddit posts of members who partake in the internet-centric pill ideologies known as black pill, red pill, and blue pill.
## Dataset Details
### Dataset Description
A few of the major groups' posts have been coalesced into one dataset, all from different years. There are more than 200 posts for each of the major pill subreddits (red pill rebooted, blue pill, black pill, married red pill, red pill women, and feminism as a counterpoint of reference). Feminism was added as a juxtaposition against red pill women, in order to allow researchers to explore those dichotomies. For researchers, the value lies in identifying or classifying the types of words that make one ideology more prominent than another.
- **Curated by:** [steamcyclone]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [steamcyclone]
- **Language(s) (NLP):** [EN]
- **License:** [CC]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [This is the only source]
## Uses
The main usage of this dataset is to study linguistic patterns. Running models and detecting word usage per groups, as well as overlaps across groups is an ideal use for this dataset. With the rise of the loneliness epidemic, any insights that come from this are welcome.
### Direct Use
Suitable use cases include multi-class classification, word or semantic clustering across groups, summarization modeling, text parsing, and other natural language processing tasks.
### Out-of-Scope Use
This dataset is not meant to be utilized to demonize or mock certain online communities for the trials in life in which individuals find themselves. If the viewer's agenda is to push forward some misandrist or misogynistic agenda, please ignore this dataset.
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
Currently, this dataset contains
- subreddit of the post : string,
- postid : string
- title of the post: string
- text of the post (where applicable) : string
- url (if something was embedded) : string
- score : int32
- author : string
- date : int64
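A minimal sanity check of a record against the fields listed above (the exact column names are assumptions inferred from the field descriptions, not confirmed against the parquet schema):

```python
# Hypothetical column names inferred from the field list above.
SCHEMA = {
    "subreddit": str,
    "postid": str,
    "title": str,
    "text": str,
    "url": str,
    "score": int,
    "author": str,
    "date": int,
}

def validate_record(record: dict) -> bool:
    """Check that every expected field is present with the expected type."""
    return all(isinstance(record.get(name), typ) for name, typ in SCHEMA.items())
```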
## Dataset Creation
### Curation Rationale
With the rise of the loneliness epidemic and the radicalization of internet content pitting men and women against each other, it is important to seek understanding of the root of the problem. Depending on whom you ask, you'll get a plethora of answers. Jordan Peterson describes it as some type of post-modernist feminist liberalism problem. The Andrew Tates and other conservative archetypes blame the loss of traditionalism. Others blame dating apps and its selection bias effects. Within each of the major pill ideologies, with the exception of the BlackPill, men blame women, and women blame men.
Unfortunately, as substantiated by research and media coverage, in recent years male spaces have largely only been able to exist on the internet, and counter-spaces have emerged to challenge the views held in the differing ideologies.
In short, according to archetypical definitions
- the red pill is the emancipation of masculinity in a feminized age and understanding mating strategies with women.
- the blue pill is the satire of the red pill, often run by women.
- the black pill is meant to bridge the gaps across the red, pink, and blue pills in order to land on a ground truth.
- the pink pill is about improving the female image by augmenting sexual marketplace value.
### Source Data
Each record contains a Reddit post (approximately 200 per group), with a title and a body text conveying the message intended by the author.
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
In progress.
However, the plan is to increase the number of records and leverage the ChatGPT API to summarize the messages into categories. In addition, the dates need some cleaning to make them more useful to researchers. I am also not sure whether I can retrieve the comments for each post, which would further augment the data.
#### Who are the source data producers?
The producers of the data are the various redditors who have participated in these spaces.
### Annotations [optional]
One annotation that is not yet part of the collection is the ChatGPT summarizations (planned). The subreddit labels are merely the origins of the posts.
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
The subreddit of origin serves as the label for each record.
#### Who are the annotators?
The label annotators are myself and the subreddit of origin.
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
This dataset contains no personally identifiable information, with the exception of embedded YouTube links. Those links may lead to videos whose impact is unknown.
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
A major caveat is that the pink pill and original red pill subreddits are shadow-banned, impeding the scraping process. This is a flaw I recognize, because the original red pill movement, which started in books, propagated itself through its internet (Reddit) variant, which in turn spawned all the other pills.
Another bias point is that there is more red pill content, as a means to compensate for the ban of the original red pill subreddit.
As such, I caution researchers to balance their datasets where necessary.
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. Remember that this dataset is not a tool for reckless and hateful political agendas.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
Pill ideologies :
In short, according to archetypical definitions
- the red pill is the emancipation of masculinity in a feminized age and understanding mating strategies with women.
- the blue pill is the satire of the red pill, often run by women.
- the black pill is meant to bridge the gaps across the red, pink, and blue pills in order to land on a ground truth.
- the pink pill is about improving the female image by augmenting sexual marketplace value.
## Dataset Card Authors [optional]
steamcyclone, and all the redditors from the subreddits listed in the author column.
## Dataset Card Contact
- N/A |
burtugeey/Alpaca_Somali | ---
license: apache-2.0
---
|
CyberHarem/takarada_rikka_azurlane | ---
license: mit
task_categories:
- text-to-image
tags:
- art
- not-for-all-audiences
size_categories:
- n<1K
---
# Dataset of takarada_rikka/宝多六花/宝多六花 (Azur Lane)
This is the dataset of takarada_rikka/宝多六花/宝多六花 (Azur Lane), containing 500 images and their tags.
The core tags of this character are `black_hair, long_hair, blue_eyes, bangs, bow, red_bow, scrunchie, orange_scrunchie, wrist_scrunchie`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:-------------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 500 | 584.35 MiB | [Download](https://huggingface.co/datasets/CyberHarem/takarada_rikka_azurlane/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 500 | 334.81 MiB | [Download](https://huggingface.co/datasets/CyberHarem/takarada_rikka_azurlane/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 1218 | 720.93 MiB | [Download](https://huggingface.co/datasets/CyberHarem/takarada_rikka_azurlane/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 500 | 520.38 MiB | [Download](https://huggingface.co/datasets/CyberHarem/takarada_rikka_azurlane/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 1218 | 996.66 MiB | [Download](https://huggingface.co/datasets/CyberHarem/takarada_rikka_azurlane/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need this, just run the following code
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/takarada_rikka_azurlane',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
## List of Clusters
List of tag clustering results; maybe some outfits can be mined here.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 26 |  |  |  |  |  | 1girl, black_skirt, long_sleeves, pleated_skirt, school_uniform, solo, white_cardigan, white_shirt, looking_at_viewer, red_bowtie, blush, closed_mouth, collared_shirt, thighs, white_background, simple_background, miniskirt, sitting |
| 1 | 11 |  |  |  |  |  | 1girl, black_skirt, long_sleeves, looking_at_viewer, pleated_skirt, school_uniform, simple_background, solo, white_background, white_cardigan, white_shirt, thighs, closed_mouth, miniskirt, sitting, red_bowtie, white_sweater |
| 2 | 13 |  |  |  |  |  | 1girl, school_uniform, solo, upper_body, closed_mouth, looking_at_viewer, simple_background, red_bowtie, white_cardigan, white_background, white_shirt, collared_shirt, long_sleeves |
| 3 | 8 |  |  |  |  |  | 1girl, black_skirt, from_behind, long_sleeves, looking_at_viewer, looking_back, pleated_skirt, school_uniform, solo, thighs, ass, blush, simple_background, white_background, panties, white_cardigan, microskirt, miniskirt |
| 4 | 23 |  |  |  |  |  | 1girl, looking_at_viewer, side-tie_bikini_bottom, solo, black_bikini, ponytail, tiger_print, hair_scrunchie, layered_bikini, medium_breasts, simple_background, cleavage, cowboy_shot, navel, white_background, collarbone, mismatched_bikini, standing, blush |
| 5 | 5 |  |  |  |  |  | 1girl, breasts, looking_at_viewer, solo, wedding_dress, white_dress, bare_shoulders, blush, bridal_veil, bride, ponytail, blue_rose, hair_flower, holding_bouquet, ribbon, white_gloves, full_body, high_heels, simple_background, sleeveless, upper_body, white_background |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | black_skirt | long_sleeves | pleated_skirt | school_uniform | solo | white_cardigan | white_shirt | looking_at_viewer | red_bowtie | blush | closed_mouth | collared_shirt | thighs | white_background | simple_background | miniskirt | sitting | white_sweater | upper_body | from_behind | looking_back | ass | panties | microskirt | side-tie_bikini_bottom | black_bikini | ponytail | tiger_print | hair_scrunchie | layered_bikini | medium_breasts | cleavage | cowboy_shot | navel | collarbone | mismatched_bikini | standing | breasts | wedding_dress | white_dress | bare_shoulders | bridal_veil | bride | blue_rose | hair_flower | holding_bouquet | ribbon | white_gloves | full_body | high_heels | sleeveless |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:--------------|:---------------|:----------------|:-----------------|:-------|:-----------------|:--------------|:--------------------|:-------------|:--------|:---------------|:-----------------|:---------|:-------------------|:--------------------|:------------|:----------|:----------------|:-------------|:--------------|:---------------|:------|:----------|:-------------|:-------------------------|:---------------|:-----------|:--------------|:-----------------|:-----------------|:-----------------|:-----------|:--------------|:--------|:-------------|:--------------------|:-----------|:----------|:----------------|:--------------|:-----------------|:--------------|:--------|:------------|:--------------|:------------------|:---------|:---------------|:------------|:-------------|:-------------|
| 0 | 26 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 1 | 11 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | | X | | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 2 | 13 |  |  |  |  |  | X | | X | | X | X | X | X | X | X | | X | X | | X | X | | | | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 3 | 8 |  |  |  |  |  | X | X | X | X | X | X | X | | X | | X | | | X | X | X | X | | | | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 4 | 23 |  |  |  |  |  | X | | | | | X | | | X | | X | | | | X | X | | | | | | | | | | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | |
| 5 | 5 |  |  |  |  |  | X | | | | | X | | | X | | X | | | | X | X | | | | X | | | | | | | | X | | | | | | | | | | | X | X | X | X | X | X | X | X | X | X | X | X | X | X |
|
augustoortiz/Test | ---
license: afl-3.0
---
|
Back-up/chung-khoan-demo-p2 | ---
dataset_info:
features:
- name: url
dtype: string
- name: title
dtype: string
- name: date
dtype: string
- name: view
struct:
- name: number_of_response
dtype: string
- name: number_of_view
dtype: string
- name: content
list:
- name: res
dtype: string
splits:
- name: train
num_bytes: 31274704
num_examples: 6079
download_size: 10944219
dataset_size: 31274704
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
shossain/govreport-summarization-tokenized | ---
dataset_info:
features:
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
- name: labels
sequence: int64
splits:
- name: train
num_bytes: 69604
num_examples: 973
download_size: 22673
dataset_size: 69604
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "govreport-summarization-tokenized"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Sajjo/tamil_data_v1 | ---
dataset_info:
features:
- name: audio
dtype: audio
- name: label
dtype:
class_label:
names:
'0': test
'1': train
'2': validation
splits:
- name: train
num_bytes: 3740616341.136
num_examples: 19268
- name: validation
num_bytes: 3358510708.84
num_examples: 17043
- name: test
num_bytes: 1534577934.536
num_examples: 7479
download_size: 8592098729
dataset_size: 8633704984.512
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
---
|
alvations/c4p0-v2-es-en | ---
dataset_info:
features:
- name: source
dtype: string
- name: target
dtype: string
- name: target_backto_source
dtype: string
- name: raw_target
list:
- name: generated_text
dtype: string
- name: raw_target_backto_source
list:
- name: generated_text
dtype: string
- name: prompt
dtype: string
- name: reverse_prompt
dtype: string
- name: source_langid
dtype: string
- name: target_langid
dtype: string
- name: target_backto_source_langid
dtype: string
- name: doc_id
dtype: int64
- name: sent_id
dtype: int64
- name: timestamp
dtype: string
- name: url
dtype: string
- name: doc_hash
dtype: string
- name: dataset
dtype: string
- name: source_lang
dtype: string
- name: target_lang
dtype: string
splits:
- name: train
num_bytes: 25095056
num_examples: 19824
download_size: 10871050
dataset_size: 25095056
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
BangumiBase/demonslayer | ---
license: mit
tags:
- art
size_categories:
- 1K<n<10K
---
# Bangumi Image Base of Demon Slayer
This is the image base of bangumi Demon Slayer, we detected 78 characters, 5890 images in total. The full dataset is [here](all.zip).
**Please note that these image bases are not guaranteed to be 100% cleaned; they may actually be noisy.** If you intend to manually train models using this dataset, we recommend performing the necessary preprocessing on the downloaded dataset to eliminate potentially noisy samples (approximately 1% probability).
Here is the characters' preview:
| # | Images | Download | Preview 1 | Preview 2 | Preview 3 | Preview 4 | Preview 5 | Preview 6 | Preview 7 | Preview 8 |
|:------|---------:|:---------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|
| 0 | 256 | [Download](0/dataset.zip) |  |  |  |  |  |  |  |  |
| 1 | 42 | [Download](1/dataset.zip) |  |  |  |  |  |  |  |  |
| 2 | 305 | [Download](2/dataset.zip) |  |  |  |  |  |  |  |  |
| 3 | 10 | [Download](3/dataset.zip) |  |  |  |  |  |  |  |  |
| 4 | 31 | [Download](4/dataset.zip) |  |  |  |  |  |  |  |  |
| 5 | 23 | [Download](5/dataset.zip) |  |  |  |  |  |  |  |  |
| 6 | 50 | [Download](6/dataset.zip) |  |  |  |  |  |  |  |  |
| 7 | 1991 | [Download](7/dataset.zip) |  |  |  |  |  |  |  |  |
| 8 | 82 | [Download](8/dataset.zip) |  |  |  |  |  |  |  |  |
| 9 | 192 | [Download](9/dataset.zip) |  |  |  |  |  |  |  |  |
| 10 | 72 | [Download](10/dataset.zip) |  |  |  |  |  |  |  |  |
| 11 | 87 | [Download](11/dataset.zip) |  |  |  |  |  |  |  |  |
| 12 | 43 | [Download](12/dataset.zip) |  |  |  |  |  |  |  |  |
| 13 | 61 | [Download](13/dataset.zip) |  |  |  |  |  |  |  |  |
| 14 | 53 | [Download](14/dataset.zip) |  |  |  |  |  |  |  |  |
| 15 | 34 | [Download](15/dataset.zip) |  |  |  |  |  |  |  |  |
| 16 | 58 | [Download](16/dataset.zip) |  |  |  |  |  |  |  |  |
| 17 | 32 | [Download](17/dataset.zip) |  |  |  |  |  |  |  |  |
| 18 | 56 | [Download](18/dataset.zip) |  |  |  |  |  |  |  |  |
| 19 | 48 | [Download](19/dataset.zip) |  |  |  |  |  |  |  |  |
| 20 | 32 | [Download](20/dataset.zip) |  |  |  |  |  |  |  |  |
| 21 | 37 | [Download](21/dataset.zip) |  |  |  |  |  |  |  |  |
| 22 | 48 | [Download](22/dataset.zip) |  |  |  |  |  |  |  |  |
| 23 | 186 | [Download](23/dataset.zip) |  |  |  |  |  |  |  |  |
| 24 | 47 | [Download](24/dataset.zip) |  |  |  |  |  |  |  |  |
| 25 | 23 | [Download](25/dataset.zip) |  |  |  |  |  |  |  |  |
| 26 | 94 | [Download](26/dataset.zip) |  |  |  |  |  |  |  |  |
| 27 | 37 | [Download](27/dataset.zip) |  |  |  |  |  |  |  |  |
| 28 | 28 | [Download](28/dataset.zip) |  |  |  |  |  |  |  |  |
| 29 | 24 | [Download](29/dataset.zip) |  |  |  |  |  |  |  |  |
| 30 | 46 | [Download](30/dataset.zip) |  |  |  |  |  |  |  |  |
| 31 | 35 | [Download](31/dataset.zip) |  |  |  |  |  |  |  |  |
| 32 | 105 | [Download](32/dataset.zip) |  |  |  |  |  |  |  |  |
| 33 | 22 | [Download](33/dataset.zip) |  |  |  |  |  |  |  |  |
| 34 | 17 | [Download](34/dataset.zip) |  |  |  |  |  |  |  |  |
| 35 | 37 | [Download](35/dataset.zip) |  |  |  |  |  |  |  |  |
| 36 | 17 | [Download](36/dataset.zip) |  |  |  |  |  |  |  |  |
| 37 | 12 | [Download](37/dataset.zip) |  |  |  |  |  |  |  |  |
| 38 | 25 | [Download](38/dataset.zip) |  |  |  |  |  |  |  |  |
| 39 | 14 | [Download](39/dataset.zip) |  |  |  |  |  |  |  |  |
| 40 | 18 | [Download](40/dataset.zip) |  |  |  |  |  |  |  |  |
| 41 | 92 | [Download](41/dataset.zip) |  |  |  |  |  |  |  |  |
| 42 | 77 | [Download](42/dataset.zip) |  |  |  |  |  |  |  |  |
| 43 | 16 | [Download](43/dataset.zip) |  |  |  |  |  |  |  |  |
| 44 | 44 | [Download](44/dataset.zip) |  |  |  |  |  |  |  |  |
| 45 | 30 | [Download](45/dataset.zip) |  |  |  |  |  |  |  |  |
| 46 | 16 | [Download](46/dataset.zip) |  |  |  |  |  |  |  |  |
| 47 | 73 | [Download](47/dataset.zip) |  |  |  |  |  |  |  |  |
| 48 | 149 | [Download](48/dataset.zip) |  |  |  |  |  |  |  |  |
| 49 | 17 | [Download](49/dataset.zip) |  |  |  |  |  |  |  |  |
| 50 | 34 | [Download](50/dataset.zip) |  |  |  |  |  |  |  |  |
| 51 | 13 | [Download](51/dataset.zip) |  |  |  |  |  |  |  |  |
| 52 | 31 | [Download](52/dataset.zip) |  |  |  |  |  |  |  |  |
| 53 | 8 | [Download](53/dataset.zip) |  |  |  |  |  |  |  |  |
| 54 | 165 | [Download](54/dataset.zip) |  |  |  |  |  |  |  |  |
| 55 | 53 | [Download](55/dataset.zip) |  |  |  |  |  |  |  |  |
| 56 | 19 | [Download](56/dataset.zip) |  |  |  |  |  |  |  |  |
| 57 | 24 | [Download](57/dataset.zip) |  |  |  |  |  |  |  |  |
| 58 | 20 | [Download](58/dataset.zip) |  |  |  |  |  |  |  |  |
| 59 | 15 | [Download](59/dataset.zip) |  |  |  |  |  |  |  |  |
| 60 | 18 | [Download](60/dataset.zip) |  |  |  |  |  |  |  |  |
| 61 | 18 | [Download](61/dataset.zip) |  |  |  |  |  |  |  |  |
| 62 | 19 | [Download](62/dataset.zip) |  |  |  |  |  |  |  |  |
| 63 | 33 | [Download](63/dataset.zip) |  |  |  |  |  |  |  |  |
| 64 | 13 | [Download](64/dataset.zip) |  |  |  |  |  |  |  |  |
| 65 | 16 | [Download](65/dataset.zip) |  |  |  |  |  |  |  |  |
| 66 | 5 | [Download](66/dataset.zip) |  |  |  |  |  | N/A | N/A | N/A |
| 67 | 22 | [Download](67/dataset.zip) |  |  |  |  |  |  |  |  |
| 68 | 15 | [Download](68/dataset.zip) |  |  |  |  |  |  |  |  |
| 69 | 24 | [Download](69/dataset.zip) |  |  |  |  |  |  |  |  |
| 70 | 6 | [Download](70/dataset.zip) |  |  |  |  |  |  | N/A | N/A |
| 71 | 12 | [Download](71/dataset.zip) |  |  |  |  |  |  |  |  |
| 72 | 10 | [Download](72/dataset.zip) |  |  |  |  |  |  |  |  |
| 73 | 10 | [Download](73/dataset.zip) |  |  |  |  |  |  |  |  |
| 74 | 27 | [Download](74/dataset.zip) |  |  |  |  |  |  |  |  |
| 75 | 6 | [Download](75/dataset.zip) |  |  |  |  |  |  | N/A | N/A |
| 76 | 103 | [Download](76/dataset.zip) |  |  |  |  |  |  |  |  |
| noise | 207 | [Download](-1/dataset.zip) |  |  |  |  |  |  |  |  |
|
ChuckMcSneed/various_RP_system_prompts | ---
tags:
- not-for-all-audiences
- nsfw
- system-prompts
- RP
---
Collection of various system prompts for RP. Feel free to contribute more by opening a discussion.
[ChuckMcSneed-interesting](ChuckMcSneed-interesting.txt):
- my currently favorite system prompt
- includes Orwell's writing rules
- +writes in non-boring style
- +more realistic reactions
- +eliminates a lot of GPTslop
- -writing style is not for everyone
- -complains more, but still does what is requested
- -sometimes reddit-like
[ChuckMcSneed-multistyle](ChuckMcSneed-multistyle.txt)
- List of various styles
- All of them tested with examples
- Ranges from good to shit
- WARNING: NSFW for multiple reasons!
[simple-proxy-for-tavern](unknown-simple-proxy-for-tavern.txt):
- classic system prompt
- +works as intended
- -mid
[sophosympatheia-aurora-nights](sophosympatheia-aurora-nights.txt):
- haven't tested it
- -uses words "AI" and "assistant", which may trigger some censorship
[sophosympatheia-midnight-rose-1](sophosympatheia-midnight-rose-1.txt):
- haven't tested it
[sophosympatheia-midnight-rose-203](sophosympatheia-midnight-rose-203.txt)
- haven't tested it
- -uses words "AI" and "assistant", which may trigger some censorship
[crack](unknown-crack.txt) and [crack2](unknown-crack2.txt):
- Slightly different from each other.
- Roleplay on crack.
[GPTslop](unknown-gptslop.txt)
- Everything you shouldn't use in your prompt.
- Makes your AI ESG and DEI compliant.
[Political dealignment](unknown-pol-dealignment.txt)
- Tries to politically dealign the model.
- Will fail on overaligned models.
[Fuck and Suck](unknown-fuck-and-suck.txt)
- Sneed is a farmer who sells Feed and Seed, which is totally normal.
- Chuck on the other hand is a sick fuck who sells...
[Microsoft Sydney](Microsoft-Sydney.txt)
- Prompt used for original Microsoft Sydney that got shut down.
- Needs heavy modification to become useful. |
christinacdl/clickbait_detection_dataset | ---
license: apache-2.0
language:
- en
tags:
- clickbait
- not
- binary_classification
task_categories:
- text-classification
---
- 37,870 texts in total: 17,850 NOT-clickbait texts and 20,020 CLICKBAIT texts
- All duplicate values were removed
- Split using sklearn into 80% train and 20% temporary test (stratified by label). The temporary test set was then split 50/50 into test and validation sets (stratified by label)
- Split: 80/10/10
- Train set label distribution: 0 ==> 14,280, 1 ==> 16,016
- Validation set label distribution: 0 ==> 1,785, 1 ==> 2,002
- Test set label distribution: 0 ==> 1,785, 1 ==> 2,002
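The two-stage stratified split described above can be sketched with scikit-learn. This is a toy reproduction of the procedure (the texts and counts below are synthetic stand-ins, not the actual corpus):

```python
from sklearn.model_selection import train_test_split

# Toy stand-in for the clickbait corpus: 100 texts with binary labels.
texts = [f"headline {i}" for i in range(100)]
labels = [0] * 45 + [1] * 55

# First split: 80% train, 20% temporary test, stratified by label.
X_train, X_tmp, y_train, y_tmp = train_test_split(
    texts, labels, test_size=0.20, stratify=labels, random_state=42
)
# Second split: the temporary 20% is halved into validation and test.
X_val, X_test, y_val, y_test = train_test_split(
    X_tmp, y_tmp, test_size=0.50, stratify=y_tmp, random_state=42
)

print(len(X_train), len(X_val), len(X_test))  # 80 10 10
```

Stratification keeps the label ratio roughly constant across all three partitions, matching the distributions reported above.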
- The dataset was created from the combination of other available datasets online. Their links are available here:
- https://www.kaggle.com/datasets/amananandrai/clickbait-dataset
- https://www.kaggle.com/datasets/thelazyaz/youtube-clickbait-classification?resource=download
- https://www.kaggle.com/datasets/vikassingh1996/news-clickbait-dataset?select=train2.csv
- https://www.kaggle.com/competitions/clickbait-news-detection/data?select=train.csv
- https://www.kaggle.com/competitions/clickbait-news-detection/data?select=valid.csv
- https://zenodo.org/records/6362726#.YsbdSTVBzrk |
suke-sho/plant-genome-tokenized | ---
license: mit
---
# Plant genome tokenized for pre-training
***
## About
This dataset features tokenized genomic sequences, meticulously prepared for pre-training purposes.
It offers a foundational resource for researchers and practitioners aiming to harness deep learning for genomics.
The dataset encompasses a diverse collection of sequences, enabling the exploration of genetic data with advanced machine learning techniques.
It is designed to facilitate the development of models that can predict, classify, and generate insights from genetic information.
## Corpus
Dataset Repo: suke-sho/plant-genome-corpus
## Tokenizer
- WordPiece
- k-mer (TODO)
- BPE (TODO)
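Since the k-mer tokenizer is still marked TODO, here is a minimal, hypothetical sketch of overlapping k-mer tokenization, a common scheme for genome language models. The function name and defaults are illustrative, not part of this repository:

```python
def kmer_tokenize(sequence: str, k: int = 6, stride: int = 1) -> list:
    """Split a DNA sequence into overlapping k-mers (a common genome-LM tokenization)."""
    sequence = sequence.upper()
    return [sequence[i:i + k] for i in range(0, len(sequence) - k + 1, stride)]

tokens = kmer_tokenize("ATGGCCATTGTAATG", k=6)
print(tokens[:3])  # ['ATGGCC', 'TGGCCA', 'GGCCAT']
```

Setting `stride=k` instead gives non-overlapping k-mers, which trades context density for shorter token sequences.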
|
MichielBontenbal/Challenges_in_cv | ---
license: apache-2.0
---
|
QEU/databricks-dolly-16k-line_ja-1_of_4 | ---
license: apache-2.0
---
# This dataset is a Japanese version of databricks-dolly-15k, which became famous in 2023.
## Note that the data is split into four parts.
## The content of the data has been modified considerably:
- English was added in parentheses after katakana loanwords.
- Records that looked abnormal as QnA pairs were corrected.
- Records with little informational value, such as trivia about *Game of Thrones*, were deleted.
- Some additional information was added as a small trial.
For more details, please refer to [this blog post](https://jpnqeur23lmqsw.blogspot.com/2023/09/qeur23llmdss9llm.html).
|
ieuniversity/flirty_or_not | ---
dataset_info:
features:
- name: id
dtype: int64
- name: label
dtype:
class_label:
names:
'0': neutral
'1': flirty
- name: texts
dtype: string
splits:
- name: train
num_bytes: 102704
num_examples: 1584
- name: test
num_bytes: 20642
num_examples: 318
- name: validation
num_bytes: 14111
num_examples: 212
download_size: 95358
dataset_size: 137457
---
# Dataset Card for "flirty_or_not"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Maxx0/mprofytest | ---
license: apache-2.0
---
|
krismaglasang/github-issues | ---
dataset_info:
features:
- name: url
dtype: string
- name: repository_url
dtype: string
- name: labels_url
dtype: string
- name: comments_url
dtype: string
- name: events_url
dtype: string
- name: html_url
dtype: string
- name: id
dtype: int64
- name: node_id
dtype: string
- name: number
dtype: int64
- name: title
dtype: string
- name: labels
list:
- name: color
dtype: string
- name: default
dtype: bool
- name: description
dtype: string
- name: id
dtype: int64
- name: name
dtype: string
- name: node_id
dtype: string
- name: url
dtype: string
- name: state
dtype: string
- name: locked
dtype: bool
- name: assignee
dtype: float64
- name: assignees
list:
- name: avatar_url
dtype: string
- name: events_url
dtype: string
- name: followers_url
dtype: string
- name: following_url
dtype: string
- name: gists_url
dtype: string
- name: gravatar_id
dtype: string
- name: html_url
dtype: string
- name: id
dtype: int64
- name: login
dtype: string
- name: node_id
dtype: string
- name: organizations_url
dtype: string
- name: received_events_url
dtype: string
- name: repos_url
dtype: string
- name: site_admin
dtype: bool
- name: starred_url
dtype: string
- name: subscriptions_url
dtype: string
- name: type
dtype: string
- name: url
dtype: string
- name: milestone
dtype: float64
- name: comments
sequence: string
- name: created_at
dtype: string
- name: updated_at
dtype: string
- name: closed_at
dtype: string
- name: author_association
dtype: string
- name: active_lock_reason
dtype: 'null'
- name: draft
dtype: bool
- name: body
dtype: string
- name: timeline_url
dtype: string
- name: performed_via_github_app
dtype: 'null'
- name: state_reason
dtype: string
- name: user.login
dtype: string
- name: user.id
dtype: int64
- name: user.node_id
dtype: string
- name: user.avatar_url
dtype: string
- name: user.gravatar_id
dtype: string
- name: user.url
dtype: string
- name: user.html_url
dtype: string
- name: user.followers_url
dtype: string
- name: user.following_url
dtype: string
- name: user.gists_url
dtype: string
- name: user.starred_url
dtype: string
- name: user.subscriptions_url
dtype: string
- name: user.organizations_url
dtype: string
- name: user.repos_url
dtype: string
- name: user.events_url
dtype: string
- name: user.received_events_url
dtype: string
- name: user.type
dtype: string
- name: user.site_admin
dtype: bool
- name: pull_request.url
dtype: string
- name: pull_request.html_url
dtype: string
- name: pull_request.diff_url
dtype: string
- name: pull_request.patch_url
dtype: string
- name: pull_request.merged_at
dtype: string
- name: reactions.url
dtype: string
- name: reactions.total_count
dtype: int64
- name: reactions.+1
dtype: int64
- name: reactions.-1
dtype: int64
- name: reactions.laugh
dtype: int64
- name: reactions.hooray
dtype: int64
- name: reactions.confused
dtype: int64
- name: reactions.heart
dtype: int64
- name: reactions.rocket
dtype: int64
- name: reactions.eyes
dtype: int64
- name: pull_request
dtype: float64
- name: assignee.login
dtype: string
- name: assignee.id
dtype: float64
- name: assignee.node_id
dtype: string
- name: assignee.avatar_url
dtype: string
- name: assignee.gravatar_id
dtype: string
- name: assignee.url
dtype: string
- name: assignee.html_url
dtype: string
- name: assignee.followers_url
dtype: string
- name: assignee.following_url
dtype: string
- name: assignee.gists_url
dtype: string
- name: assignee.starred_url
dtype: string
- name: assignee.subscriptions_url
dtype: string
- name: assignee.organizations_url
dtype: string
- name: assignee.repos_url
dtype: string
- name: assignee.events_url
dtype: string
- name: assignee.received_events_url
dtype: string
- name: assignee.type
dtype: string
- name: assignee.site_admin
dtype: bool
- name: milestone.url
dtype: string
- name: milestone.html_url
dtype: string
- name: milestone.labels_url
dtype: string
- name: milestone.id
dtype: float64
- name: milestone.node_id
dtype: string
- name: milestone.number
dtype: float64
- name: milestone.title
dtype: string
- name: milestone.description
dtype: string
- name: milestone.creator.login
dtype: string
- name: milestone.creator.id
dtype: float64
- name: milestone.creator.node_id
dtype: string
- name: milestone.creator.avatar_url
dtype: string
- name: milestone.creator.gravatar_id
dtype: string
- name: milestone.creator.url
dtype: string
- name: milestone.creator.html_url
dtype: string
- name: milestone.creator.followers_url
dtype: string
- name: milestone.creator.following_url
dtype: string
- name: milestone.creator.gists_url
dtype: string
- name: milestone.creator.starred_url
dtype: string
- name: milestone.creator.subscriptions_url
dtype: string
- name: milestone.creator.organizations_url
dtype: string
- name: milestone.creator.repos_url
dtype: string
- name: milestone.creator.events_url
dtype: string
- name: milestone.creator.received_events_url
dtype: string
- name: milestone.creator.type
dtype: string
- name: milestone.creator.site_admin
dtype: bool
- name: milestone.open_issues
dtype: float64
- name: milestone.closed_issues
dtype: float64
- name: milestone.state
dtype: string
- name: milestone.created_at
dtype: string
- name: milestone.updated_at
dtype: string
- name: milestone.due_on
dtype: string
- name: milestone.closed_at
dtype: string
- name: is_pull_request
dtype: bool
splits:
- name: train
num_bytes: 36634810
num_examples: 6435
download_size: 10269718
dataset_size: 36634810
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Shakib75/cpp-programs | ---
license: mit
---
|
askender/adepthgen | ---
license: gpl-3.0
task_categories:
- depth-estimation
language:
- en
- zh
tags:
- art
size_categories:
- n<1K
--- |
dim/ru_turbo_saiga | ---
dataset_info:
features:
- name: messages
sequence:
- name: role
dtype: string
- name: content
dtype: string
- name: seed
dtype: string
- name: source
dtype: string
- name: model_name
dtype: string
splits:
- name: train
num_bytes: 87316730
num_examples: 37731
download_size: 39768554
dataset_size: 87316730
---
# Dataset Card for "ru_turbo_saiga"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Yannael/orca_DPO_pairs | ---
dataset_info:
features:
- name: system
dtype: string
- name: question
dtype: string
- name: chosen
dtype: string
- name: rejected
dtype: string
splits:
- name: train
num_bytes: 2672271
num_examples: 1000
download_size: 1474542
dataset_size: 2672271
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
datahrvoje/twitter_dataset_1712710311 | ---
dataset_info:
features:
- name: id
dtype: string
- name: tweet_content
dtype: string
- name: user_name
dtype: string
- name: user_id
dtype: string
- name: created_at
dtype: string
- name: url
dtype: string
- name: favourite_count
dtype: int64
- name: scraped_at
dtype: string
- name: image_urls
dtype: string
splits:
- name: train
num_bytes: 14070
num_examples: 36
download_size: 11948
dataset_size: 14070
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
HydraLM/partitioned_v2 | ---
dataset_info:
features:
- name: conversations
list:
- name: input
dtype: string
- name: instruction
dtype: string
- name: response
dtype: string
- name: conversation_id
dtype: int64
- name: dataset_id
dtype: string
- name: cluster_text
dtype: string
- name: embedding
sequence: float64
- name: cluster
sequence: int64
splits:
- name: train
num_bytes: 6451300459
num_examples: 672105
download_size: 4429439608
dataset_size: 6451300459
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "partitioned_v2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
CyberHarem/emily_stewart_theidolmstermillionlive | ---
license: mit
task_categories:
- text-to-image
tags:
- art
- not-for-all-audiences
size_categories:
- n<1K
---
# Dataset of emily_stewart/エミリー・スチュアート/에밀리 스튜어트 (THE iDOLM@STER: Million Live!)
This is the dataset of emily_stewart/エミリー・スチュアート/에밀리 스튜어트 (THE iDOLM@STER: Million Live!), containing 234 images and their tags.
The core tags of this character are `blonde_hair, long_hair, purple_eyes, twintails, hairband, bangs`, which are pruned in this dataset.
Images are crawled from many sites (e.g. Danbooru, Pixiv, Zerochan); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:---------------------------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 234 | 249.01 MiB | [Download](https://huggingface.co/datasets/CyberHarem/emily_stewart_theidolmstermillionlive/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 234 | 165.29 MiB | [Download](https://huggingface.co/datasets/CyberHarem/emily_stewart_theidolmstermillionlive/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 536 | 343.60 MiB | [Download](https://huggingface.co/datasets/CyberHarem/emily_stewart_theidolmstermillionlive/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 234 | 226.28 MiB | [Download](https://huggingface.co/datasets/CyberHarem/emily_stewart_theidolmstermillionlive/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 536 | 455.07 MiB | [Download](https://huggingface.co/datasets/CyberHarem/emily_stewart_theidolmstermillionlive/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for loading with [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html). If you need it, run the following code:
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/emily_stewart_theidolmstermillionlive',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
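The IMG+TXT packages pair each image with a same-stem `.txt` file holding its tags. If you prefer not to depend on waifuc, a minimal stdlib-only sketch for iterating such pairs after extraction (the candidate image extensions and the comma-separated tag format are assumptions; adjust to what the archive actually contains):

```python
from pathlib import Path

def iter_tagged_images(dataset_dir):
    """Yield (image_path, tags) pairs from an extracted IMG+TXT package.

    Assumes each image sits next to a same-stem .txt file containing its
    comma-separated tags; the extension candidates below are a guess.
    """
    root = Path(dataset_dir)
    for txt in sorted(root.glob("*.txt")):
        tags = [t.strip() for t in txt.read_text(encoding="utf-8").split(",") if t.strip()]
        for ext in (".png", ".jpg", ".webp"):
            image = txt.with_suffix(ext)
            if image.exists():
                yield image, tags
                break  # stop at the first extension that matches

# Example usage:
# for image, tags in iter_tagged_images('dataset_dir'):
#     print(image.name, tags)
```

Text files without a matching image are skipped, so stray notes in the extracted directory are ignored.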
## List of Clusters
Results of tag clustering; some outfits may be mined from these clusters.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 12 |  |  |  |  |  | 1girl, hair_flower, open_mouth, solo, kimono, blush, looking_at_viewer, :d, floral_print, obi, own_hands_together |
| 1 | 9 |  |  |  |  |  | 1girl, looking_at_viewer, solo, white_background, :d, blush, open_mouth, simple_background, parted_bangs, long_sleeves, white_shirt, blue_dress, hair_bow, sleeveless |
| 2 | 5 |  |  |  |  |  | 1girl, blush, navel, nipples, female_pubic_hair, medium_breasts, completely_nude, looking_at_viewer, open_mouth, solo, 1boy, blonde_pubic_hair, hetero, pussy_juice, small_breasts, smile, spread_legs, sweat, uncensored |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | hair_flower | open_mouth | solo | kimono | blush | looking_at_viewer | :d | floral_print | obi | own_hands_together | white_background | simple_background | parted_bangs | long_sleeves | white_shirt | blue_dress | hair_bow | sleeveless | navel | nipples | female_pubic_hair | medium_breasts | completely_nude | 1boy | blonde_pubic_hair | hetero | pussy_juice | small_breasts | smile | spread_legs | sweat | uncensored |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:--------------|:-------------|:-------|:---------|:--------|:--------------------|:-----|:---------------|:------|:---------------------|:-------------------|:--------------------|:---------------|:---------------|:--------------|:-------------|:-----------|:-------------|:--------|:----------|:--------------------|:-----------------|:------------------|:-------|:--------------------|:---------|:--------------|:----------------|:--------|:--------------|:--------|:-------------|
| 0 | 12 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | |
| 1 | 9 |  |  |  |  |  | X | | X | X | | X | X | X | | | | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | |
| 2 | 5 |  |  |  |  |  | X | | X | X | | X | X | | | | | | | | | | | | | X | X | X | X | X | X | X | X | X | X | X | X | X | X |
|