id string (2-115 chars) | lastModified string (24 chars) | tags list | author string (2-42 chars, nullable) | description string (0-6.67k chars, nullable) | citation string (0-10.7k chars, nullable) | likes int64 (0-3.66k) | downloads int64 (0-8.89M) | created timestamp[us] | card string (11-977k chars) | card_len int64 (11-977k) | embeddings list |
|---|---|---|---|---|---|---|---|---|---|---|---|
ChanceFocus/flare-zh-nl2 | 2023-10-01T08:16:13.000Z | [
"region:us"
] | ChanceFocus | null | null | 0 | 3 | 2023-10-01T08:15:57 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
ChanceFocus/flare-zh-nsp | 2023-10-01T08:17:16.000Z | [
"region:us"
] | ChanceFocus | null | null | 0 | 3 | 2023-10-01T08:16:31 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
sitloboi2012/rvl_cdip_large_dataset | 2023-10-01T08:20:47.000Z | [
"region:us"
] | sitloboi2012 | null | null | 0 | 3 | 2023-10-01T08:17:10 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
- split: validate
path: data/validate-*
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': letter
'1': form
'2': email
'3': handwritten
'4': advertisement
'5': scientific report
'6': scientific publication
'7': specification
'8': file folder
'9': news article
'10': budget
'11': invoice
'12': presentation
'13': questionnaire
'14': resume
'15': memo
splits:
- name: train
num_bytes: 3694582118.36
num_examples: 30400
- name: test
num_bytes: 388902596.88
num_examples: 3200
- name: validate
num_bytes: 388902596.88
num_examples: 3200
download_size: 4204560106
dataset_size: 4472387312.12
---
# Dataset Card for "rvl_cdip_large_dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 1,171 | [
[
-0.051513671875,
-0.01332855224609375,
0.01349639892578125,
0.038665771484375,
-0.0134124755859375,
0.01059722900390625,
0.0011310577392578125,
-0.00901031494140625,
0.041656494140625,
0.04791259765625,
-0.04803466796875,
-0.0556640625,
-0.041290283203125,
-... |
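The `class_label` block in the rvl_cdip_large_dataset card above maps the integer ids 0-15 to document types. A minimal sketch of that id-to-name mapping in plain Python (the `datasets` library exposes the same conversion through its `ClassLabel` feature; the helper names `int2str`/`str2int` below mirror its method names but are standalone functions here):

```python
# Label names exactly as declared in the rvl_cdip_large_dataset card's
# class_label block, ordered by their integer id (0-15).
RVL_CDIP_LABELS = [
    "letter", "form", "email", "handwritten", "advertisement",
    "scientific report", "scientific publication", "specification",
    "file folder", "news article", "budget", "invoice",
    "presentation", "questionnaire", "resume", "memo",
]

def int2str(label_id: int) -> str:
    """Map an integer class id to its label name."""
    return RVL_CDIP_LABELS[label_id]

def str2int(name: str) -> int:
    """Map a label name back to its integer class id."""
    return RVL_CDIP_LABELS.index(name)
```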
ChanceFocus/flare-zh-re | 2023-10-01T08:17:38.000Z | [
"region:us"
] | ChanceFocus | null | null | 0 | 3 | 2023-10-01T08:17:28 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
ChanceFocus/flare-zh-stockb | 2023-10-01T08:18:09.000Z | [
"region:us"
] | ChanceFocus | null | null | 0 | 3 | 2023-10-01T08:17:55 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
arbitropy/ProcessedTextGen1 | 2023-10-02T21:32:43.000Z | [
"region:us"
] | arbitropy | null | null | 0 | 3 | 2023-10-02T21:31:35 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: content
dtype: string
splits:
- name: train
num_bytes: 515825625.7185176
num_examples: 2973192
download_size: 293360996
dataset_size: 515825625.7185176
---
# Dataset Card for "ProcessedTextGen1"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 473 | [
[
-0.03143310546875,
-0.03546142578125,
0.030303955078125,
0.028533935546875,
-0.0129547119140625,
-0.00414276123046875,
0.0014104843139648438,
-0.018585205078125,
0.05718994140625,
0.05364990234375,
-0.07159423828125,
-0.05780029296875,
-0.0531005859375,
-0.0... |
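The split statistics in the ProcessedTextGen1 card above imply two derived figures worth noting: the average serialized record size and the download compression ratio. A quick sketch, using only the numbers stated in the card:

```python
# Split statistics taken from the ProcessedTextGen1 card.
num_bytes = 515_825_625.7185176   # dataset_size / train split num_bytes
num_examples = 2_973_192          # train split num_examples
download_size = 293_360_996       # compressed download size

avg_record_bytes = num_bytes / num_examples    # average serialized record size
compression_ratio = num_bytes / download_size  # on-disk size vs. download size

print(f"{avg_record_bytes:.1f} bytes/record, {compression_ratio:.2f}x compression")
```

So each record averages roughly 173 bytes, and the Parquet download is about 1.76x smaller than the decompressed dataset.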
nlplabtdtu/edu_eof | 2023-10-03T02:03:59.000Z | [
"region:us"
] | nlplabtdtu | null | null | 0 | 3 | 2023-10-03T02:03:17 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
shivanikerai/review_prompts_9.0.1 | 2023-10-03T04:50:56.000Z | [
"region:us"
] | shivanikerai | null | null | 0 | 3 | 2023-10-03T04:49:00 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
vishal0719/infogen-2 | 2023-10-03T07:13:14.000Z | [
"region:us"
] | vishal0719 | null | null | 0 | 3 | 2023-10-03T07:13:01 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
Algoroxyolo/squadForLLM | 2023-10-03T17:37:23.000Z | [
"region:us"
] | Algoroxyolo | null | null | 0 | 3 | 2023-10-03T08:34:03 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
csolheim/HealthBeautyClassifier | 2023-10-03T13:18:46.000Z | [
"region:us"
] | csolheim | null | null | 0 | 3 | 2023-10-03T13:13:36 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
Mxode/Baike-Astronomy-ZH | 2023-10-03T14:19:38.000Z | [
"task_categories:text-generation",
"size_categories:n<1K",
"language:zh",
"license:apache-2.0",
"astronomy",
"region:us"
] | Mxode | null | null | 0 | 3 | 2023-10-03T14:08:08 | ---
license: apache-2.0
task_categories:
- text-generation
language:
- zh
tags:
- astronomy
size_categories:
- n<1K
---
An astronomy encyclopedia with 8 sub-categories, about 1,000 entries, and roughly 1,100,000 characters.
Each record contains a top-level category, a sub-category, a title, and the content. The **content has been processed into a single line**, and the **texts are generally long**.
A sample looks like this:
```json
{
"top_category": "天文学",
"sub_category": "天体力学",
"title": "万有引力定律",
"content": "万有引力定律(汉语拼音:wàn yǒu yǐn lì zhī dìng lǜ),(universal gravitation,law of),自然界中任何两个质点都相互吸引,这个力同两个质点的质量的乘积成正比,同它们之间的距离的二次方成反比。如用m1、m2表示两质点的质量,r表示两质点间的距离,F表示作用力的值,则F=Gm1m2/r2,式中的G是比例常量,称万有引力常量或牛顿引力常量,数值因不同单位制而异,在国际单位制中G为6.672×1011牛顿·米2/千克2。这个定律由牛顿于1687年在《原理》上首次发表,它和牛顿运动定律一起,构成了牛顿力学特别是天体力学的基础。\n 在牛顿公布该定律之前,胡克、惠更斯都曾根据开普勒定律推测行星和太阳间存在和距离二次方成反比的引力,但未能提出数学证明,为此胡克还和牛顿通过信,因此对定律的首创权有过争议。牛顿还曾对晚年的忘年交斯多克雷说过,1666年他在家乡避瘟疫时,曾因见苹果从树上落地而想到地球对苹果的引力是否可延伸到月球。此说传布很广,许多科学家深信不疑,并对牛顿为何推迟20年才发表有种种推测。但也有人根据牛顿晚年的精神状态,认为他对斯多克雷所说的并非真情。\n 一般物体之间的引力,在物体尺度远小于质心距离时,可视为质点;尺度和间距相近时,须视为质点系,用积分法求引力。但牛顿已算出一个密度均匀的圆球对附近质点的引力同把圆球的质量集中于球心时完全一致。对万有引力的起因,牛顿未作解释,把它视为超距力或以太的作用,系后人所为。爱因斯坦在广义相对论中将引力归之于时空曲率的变化。"
}
``` | 999 | [
[
-0.042236328125,
-0.039398193359375,
0.0293426513671875,
0.018463134765625,
-0.0253143310546875,
-0.0203399658203125,
0.0030803680419921875,
-0.02154541015625,
0.0223846435546875,
0.0252532958984375,
-0.02288818359375,
-0.0279083251953125,
-0.037933349609375,
... |
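Each record in the Baike-Astronomy-ZH card above is a JSON object with four string fields. A minimal sketch of parsing one such single-line record with the standard library (the record below is shaped like the card's sample but with the content abbreviated; real entries are much longer):

```python
import json

# One record in the card's sample shape, with the content field abbreviated.
line = (
    '{"top_category": "天文学", "sub_category": "天体力学", '
    '"title": "万有引力定律", "content": "自然界中任何两个质点都相互吸引..."}'
)

record = json.loads(line)
# The four fields declared in the card: top-level category, sub-category,
# title, and the single-line content.
fields = (record["top_category"], record["sub_category"], record["title"])
print(fields)
```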
gorkaartola/ZS-train_S1-SDGdescriptions-AURORA05_S2-SDGdescriptions-SDGtitle_Negative_Sample_Filter-AURORA05 | 2023-10-03T21:04:24.000Z | [
"region:us"
] | gorkaartola | null | null | 0 | 3 | 2023-10-03T20:53:07 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
open-llm-leaderboard/details_TheBloke__Guanaco-3B-Uncensored-v2-GPTQ | 2023-10-29T01:04:28.000Z | [
"region:us"
] | open-llm-leaderboard | null | null | 0 | 3 | 2023-10-03T21:39:29 | ---
pretty_name: Evaluation run of TheBloke/Guanaco-3B-Uncensored-v2-GPTQ
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [TheBloke/Guanaco-3B-Uncensored-v2-GPTQ](https://huggingface.co/TheBloke/Guanaco-3B-Uncensored-v2-GPTQ)\
\ on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 64 configurations, each one corresponding to one\
  \ of the evaluated tasks.\n\nThe dataset has been created from 2 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
  \ timestamp of the run. The \"train\" split always points to the latest results.\n\
  \nAn additional configuration \"results\" stores all the aggregated results of the\
  \ run (and is used to compute and display the aggregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_TheBloke__Guanaco-3B-Uncensored-v2-GPTQ\"\
,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\
These are the [latest results from run 2023-10-29T01:04:16.242483](https://huggingface.co/datasets/open-llm-leaderboard/details_TheBloke__Guanaco-3B-Uncensored-v2-GPTQ/blob/main/results_2023-10-29T01-04-16.242483.json) (note\
  \ that there might be results for other tasks in the repos if successive evals didn't\
  \ cover the same tasks. You can find each in the results and the \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.0045092281879194635,\n\
\ \"em_stderr\": 0.0006861346899095007,\n \"f1\": 0.06708368288590627,\n\
\ \"f1_stderr\": 0.0016014292768729186,\n \"acc\": 0.322384038037953,\n\
\ \"acc_stderr\": 0.0072675866532889944\n },\n \"harness|drop|3\":\
\ {\n \"em\": 0.0045092281879194635,\n \"em_stderr\": 0.0006861346899095007,\n\
\ \"f1\": 0.06708368288590627,\n \"f1_stderr\": 0.0016014292768729186\n\
\ },\n \"harness|gsm8k|5\": {\n \"acc\": 0.001516300227445034,\n \
\ \"acc_stderr\": 0.0010717793485492612\n },\n \"harness|winogrande|5\"\
: {\n \"acc\": 0.6432517758484609,\n \"acc_stderr\": 0.013463393958028728\n\
\ }\n}\n```"
repo_url: https://huggingface.co/TheBloke/Guanaco-3B-Uncensored-v2-GPTQ
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_arc_challenge_25
data_files:
- split: 2023_10_03T21_39_11.409465
path:
- '**/details_harness|arc:challenge|25_2023-10-03T21-39-11.409465.parquet'
- split: latest
path:
- '**/details_harness|arc:challenge|25_2023-10-03T21-39-11.409465.parquet'
- config_name: harness_drop_3
data_files:
- split: 2023_10_29T01_04_16.242483
path:
- '**/details_harness|drop|3_2023-10-29T01-04-16.242483.parquet'
- split: latest
path:
- '**/details_harness|drop|3_2023-10-29T01-04-16.242483.parquet'
- config_name: harness_gsm8k_5
data_files:
- split: 2023_10_29T01_04_16.242483
path:
- '**/details_harness|gsm8k|5_2023-10-29T01-04-16.242483.parquet'
- split: latest
path:
- '**/details_harness|gsm8k|5_2023-10-29T01-04-16.242483.parquet'
- config_name: harness_hellaswag_10
data_files:
- split: 2023_10_03T21_39_11.409465
path:
- '**/details_harness|hellaswag|10_2023-10-03T21-39-11.409465.parquet'
- split: latest
path:
- '**/details_harness|hellaswag|10_2023-10-03T21-39-11.409465.parquet'
- config_name: harness_hendrycksTest_5
data_files:
- split: 2023_10_03T21_39_11.409465
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-10-03T21-39-11.409465.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2023-10-03T21-39-11.409465.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2023-10-03T21-39-11.409465.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2023-10-03T21-39-11.409465.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-10-03T21-39-11.409465.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2023-10-03T21-39-11.409465.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-10-03T21-39-11.409465.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-10-03T21-39-11.409465.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-10-03T21-39-11.409465.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2023-10-03T21-39-11.409465.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2023-10-03T21-39-11.409465.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2023-10-03T21-39-11.409465.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-10-03T21-39-11.409465.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2023-10-03T21-39-11.409465.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-10-03T21-39-11.409465.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-10-03T21-39-11.409465.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2023-10-03T21-39-11.409465.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2023-10-03T21-39-11.409465.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-10-03T21-39-11.409465.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-10-03T21-39-11.409465.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-10-03T21-39-11.409465.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-10-03T21-39-11.409465.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-10-03T21-39-11.409465.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-10-03T21-39-11.409465.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-10-03T21-39-11.409465.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-10-03T21-39-11.409465.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-10-03T21-39-11.409465.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-10-03T21-39-11.409465.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-10-03T21-39-11.409465.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-10-03T21-39-11.409465.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-10-03T21-39-11.409465.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-10-03T21-39-11.409465.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2023-10-03T21-39-11.409465.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-10-03T21-39-11.409465.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2023-10-03T21-39-11.409465.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-10-03T21-39-11.409465.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-10-03T21-39-11.409465.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2023-10-03T21-39-11.409465.parquet'
- '**/details_harness|hendrycksTest-management|5_2023-10-03T21-39-11.409465.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2023-10-03T21-39-11.409465.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-10-03T21-39-11.409465.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-10-03T21-39-11.409465.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-10-03T21-39-11.409465.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-10-03T21-39-11.409465.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2023-10-03T21-39-11.409465.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2023-10-03T21-39-11.409465.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2023-10-03T21-39-11.409465.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-10-03T21-39-11.409465.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2023-10-03T21-39-11.409465.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-10-03T21-39-11.409465.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-10-03T21-39-11.409465.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2023-10-03T21-39-11.409465.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2023-10-03T21-39-11.409465.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2023-10-03T21-39-11.409465.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-10-03T21-39-11.409465.parquet'
- '**/details_harness|hendrycksTest-virology|5_2023-10-03T21-39-11.409465.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2023-10-03T21-39-11.409465.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-10-03T21-39-11.409465.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2023-10-03T21-39-11.409465.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2023-10-03T21-39-11.409465.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2023-10-03T21-39-11.409465.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-10-03T21-39-11.409465.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2023-10-03T21-39-11.409465.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-10-03T21-39-11.409465.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-10-03T21-39-11.409465.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-10-03T21-39-11.409465.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2023-10-03T21-39-11.409465.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2023-10-03T21-39-11.409465.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2023-10-03T21-39-11.409465.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-10-03T21-39-11.409465.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2023-10-03T21-39-11.409465.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-10-03T21-39-11.409465.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-10-03T21-39-11.409465.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2023-10-03T21-39-11.409465.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2023-10-03T21-39-11.409465.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-10-03T21-39-11.409465.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-10-03T21-39-11.409465.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-10-03T21-39-11.409465.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-10-03T21-39-11.409465.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-10-03T21-39-11.409465.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-10-03T21-39-11.409465.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-10-03T21-39-11.409465.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-10-03T21-39-11.409465.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-10-03T21-39-11.409465.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-10-03T21-39-11.409465.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-10-03T21-39-11.409465.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-10-03T21-39-11.409465.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-10-03T21-39-11.409465.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-10-03T21-39-11.409465.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2023-10-03T21-39-11.409465.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-10-03T21-39-11.409465.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2023-10-03T21-39-11.409465.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-10-03T21-39-11.409465.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-10-03T21-39-11.409465.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2023-10-03T21-39-11.409465.parquet'
- '**/details_harness|hendrycksTest-management|5_2023-10-03T21-39-11.409465.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2023-10-03T21-39-11.409465.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-10-03T21-39-11.409465.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-10-03T21-39-11.409465.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-10-03T21-39-11.409465.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-10-03T21-39-11.409465.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2023-10-03T21-39-11.409465.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2023-10-03T21-39-11.409465.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2023-10-03T21-39-11.409465.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-10-03T21-39-11.409465.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2023-10-03T21-39-11.409465.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-10-03T21-39-11.409465.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-10-03T21-39-11.409465.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2023-10-03T21-39-11.409465.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2023-10-03T21-39-11.409465.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2023-10-03T21-39-11.409465.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-10-03T21-39-11.409465.parquet'
- '**/details_harness|hendrycksTest-virology|5_2023-10-03T21-39-11.409465.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2023-10-03T21-39-11.409465.parquet'
- config_name: harness_hendrycksTest_abstract_algebra_5
data_files:
- split: 2023_10_03T21_39_11.409465
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-10-03T21-39-11.409465.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-10-03T21-39-11.409465.parquet'
- config_name: harness_hendrycksTest_anatomy_5
data_files:
- split: 2023_10_03T21_39_11.409465
path:
- '**/details_harness|hendrycksTest-anatomy|5_2023-10-03T21-39-11.409465.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-anatomy|5_2023-10-03T21-39-11.409465.parquet'
- config_name: harness_hendrycksTest_astronomy_5
data_files:
- split: 2023_10_03T21_39_11.409465
path:
- '**/details_harness|hendrycksTest-astronomy|5_2023-10-03T21-39-11.409465.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-astronomy|5_2023-10-03T21-39-11.409465.parquet'
- config_name: harness_hendrycksTest_business_ethics_5
data_files:
- split: 2023_10_03T21_39_11.409465
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2023-10-03T21-39-11.409465.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2023-10-03T21-39-11.409465.parquet'
- config_name: harness_hendrycksTest_clinical_knowledge_5
data_files:
- split: 2023_10_03T21_39_11.409465
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-10-03T21-39-11.409465.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-10-03T21-39-11.409465.parquet'
- config_name: harness_hendrycksTest_college_biology_5
data_files:
- split: 2023_10_03T21_39_11.409465
path:
- '**/details_harness|hendrycksTest-college_biology|5_2023-10-03T21-39-11.409465.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_biology|5_2023-10-03T21-39-11.409465.parquet'
- config_name: harness_hendrycksTest_college_chemistry_5
data_files:
- split: 2023_10_03T21_39_11.409465
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-10-03T21-39-11.409465.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-10-03T21-39-11.409465.parquet'
- config_name: harness_hendrycksTest_college_computer_science_5
data_files:
- split: 2023_10_03T21_39_11.409465
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-10-03T21-39-11.409465.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-10-03T21-39-11.409465.parquet'
- config_name: harness_hendrycksTest_college_mathematics_5
data_files:
- split: 2023_10_03T21_39_11.409465
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-10-03T21-39-11.409465.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-10-03T21-39-11.409465.parquet'
- config_name: harness_hendrycksTest_college_medicine_5
data_files:
- split: 2023_10_03T21_39_11.409465
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2023-10-03T21-39-11.409465.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2023-10-03T21-39-11.409465.parquet'
- config_name: harness_hendrycksTest_college_physics_5
data_files:
- split: 2023_10_03T21_39_11.409465
path:
- '**/details_harness|hendrycksTest-college_physics|5_2023-10-03T21-39-11.409465.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_physics|5_2023-10-03T21-39-11.409465.parquet'
- config_name: harness_hendrycksTest_computer_security_5
data_files:
- split: 2023_10_03T21_39_11.409465
path:
- '**/details_harness|hendrycksTest-computer_security|5_2023-10-03T21-39-11.409465.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-computer_security|5_2023-10-03T21-39-11.409465.parquet'
- config_name: harness_hendrycksTest_conceptual_physics_5
data_files:
- split: 2023_10_03T21_39_11.409465
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-10-03T21-39-11.409465.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-10-03T21-39-11.409465.parquet'
- config_name: harness_hendrycksTest_econometrics_5
data_files:
- split: 2023_10_03T21_39_11.409465
path:
- '**/details_harness|hendrycksTest-econometrics|5_2023-10-03T21-39-11.409465.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-econometrics|5_2023-10-03T21-39-11.409465.parquet'
- config_name: harness_hendrycksTest_electrical_engineering_5
data_files:
- split: 2023_10_03T21_39_11.409465
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-10-03T21-39-11.409465.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-10-03T21-39-11.409465.parquet'
- config_name: harness_hendrycksTest_elementary_mathematics_5
data_files:
- split: 2023_10_03T21_39_11.409465
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-10-03T21-39-11.409465.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-10-03T21-39-11.409465.parquet'
- config_name: harness_hendrycksTest_formal_logic_5
data_files:
- split: 2023_10_03T21_39_11.409465
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2023-10-03T21-39-11.409465.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2023-10-03T21-39-11.409465.parquet'
- config_name: harness_hendrycksTest_global_facts_5
data_files:
- split: 2023_10_03T21_39_11.409465
path:
- '**/details_harness|hendrycksTest-global_facts|5_2023-10-03T21-39-11.409465.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-global_facts|5_2023-10-03T21-39-11.409465.parquet'
- config_name: harness_hendrycksTest_high_school_biology_5
data_files:
- split: 2023_10_03T21_39_11.409465
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-10-03T21-39-11.409465.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-10-03T21-39-11.409465.parquet'
- config_name: harness_hendrycksTest_high_school_chemistry_5
data_files:
- split: 2023_10_03T21_39_11.409465
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-10-03T21-39-11.409465.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-10-03T21-39-11.409465.parquet'
- config_name: harness_hendrycksTest_high_school_computer_science_5
data_files:
- split: 2023_10_03T21_39_11.409465
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-10-03T21-39-11.409465.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-10-03T21-39-11.409465.parquet'
- config_name: harness_hendrycksTest_high_school_european_history_5
data_files:
- split: 2023_10_03T21_39_11.409465
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-10-03T21-39-11.409465.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-10-03T21-39-11.409465.parquet'
- config_name: harness_hendrycksTest_high_school_geography_5
data_files:
- split: 2023_10_03T21_39_11.409465
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-10-03T21-39-11.409465.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-10-03T21-39-11.409465.parquet'
- config_name: harness_hendrycksTest_high_school_government_and_politics_5
data_files:
- split: 2023_10_03T21_39_11.409465
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-10-03T21-39-11.409465.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-10-03T21-39-11.409465.parquet'
- config_name: harness_hendrycksTest_high_school_macroeconomics_5
data_files:
- split: 2023_10_03T21_39_11.409465
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-10-03T21-39-11.409465.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-10-03T21-39-11.409465.parquet'
- config_name: harness_hendrycksTest_high_school_mathematics_5
data_files:
- split: 2023_10_03T21_39_11.409465
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-10-03T21-39-11.409465.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-10-03T21-39-11.409465.parquet'
- config_name: harness_hendrycksTest_high_school_microeconomics_5
data_files:
- split: 2023_10_03T21_39_11.409465
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-10-03T21-39-11.409465.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-10-03T21-39-11.409465.parquet'
- config_name: harness_hendrycksTest_high_school_physics_5
data_files:
- split: 2023_10_03T21_39_11.409465
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-10-03T21-39-11.409465.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-10-03T21-39-11.409465.parquet'
- config_name: harness_hendrycksTest_high_school_psychology_5
data_files:
- split: 2023_10_03T21_39_11.409465
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-10-03T21-39-11.409465.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-10-03T21-39-11.409465.parquet'
- config_name: harness_hendrycksTest_high_school_statistics_5
data_files:
- split: 2023_10_03T21_39_11.409465
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-10-03T21-39-11.409465.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-10-03T21-39-11.409465.parquet'
- config_name: harness_hendrycksTest_high_school_us_history_5
data_files:
- split: 2023_10_03T21_39_11.409465
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-10-03T21-39-11.409465.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-10-03T21-39-11.409465.parquet'
- config_name: harness_hendrycksTest_high_school_world_history_5
data_files:
- split: 2023_10_03T21_39_11.409465
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-10-03T21-39-11.409465.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-10-03T21-39-11.409465.parquet'
- config_name: harness_hendrycksTest_human_aging_5
data_files:
- split: 2023_10_03T21_39_11.409465
path:
- '**/details_harness|hendrycksTest-human_aging|5_2023-10-03T21-39-11.409465.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_aging|5_2023-10-03T21-39-11.409465.parquet'
- config_name: harness_hendrycksTest_human_sexuality_5
data_files:
- split: 2023_10_03T21_39_11.409465
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-10-03T21-39-11.409465.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-10-03T21-39-11.409465.parquet'
- config_name: harness_hendrycksTest_international_law_5
data_files:
- split: 2023_10_03T21_39_11.409465
path:
- '**/details_harness|hendrycksTest-international_law|5_2023-10-03T21-39-11.409465.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-international_law|5_2023-10-03T21-39-11.409465.parquet'
- config_name: harness_hendrycksTest_jurisprudence_5
data_files:
- split: 2023_10_03T21_39_11.409465
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-10-03T21-39-11.409465.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-10-03T21-39-11.409465.parquet'
- config_name: harness_hendrycksTest_logical_fallacies_5
data_files:
- split: 2023_10_03T21_39_11.409465
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-10-03T21-39-11.409465.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-10-03T21-39-11.409465.parquet'
- config_name: harness_hendrycksTest_machine_learning_5
data_files:
- split: 2023_10_03T21_39_11.409465
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2023-10-03T21-39-11.409465.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2023-10-03T21-39-11.409465.parquet'
- config_name: harness_hendrycksTest_management_5
data_files:
- split: 2023_10_03T21_39_11.409465
path:
- '**/details_harness|hendrycksTest-management|5_2023-10-03T21-39-11.409465.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-management|5_2023-10-03T21-39-11.409465.parquet'
- config_name: harness_hendrycksTest_marketing_5
data_files:
- split: 2023_10_03T21_39_11.409465
path:
- '**/details_harness|hendrycksTest-marketing|5_2023-10-03T21-39-11.409465.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-marketing|5_2023-10-03T21-39-11.409465.parquet'
- config_name: harness_hendrycksTest_medical_genetics_5
data_files:
- split: 2023_10_03T21_39_11.409465
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-10-03T21-39-11.409465.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-10-03T21-39-11.409465.parquet'
- config_name: harness_hendrycksTest_miscellaneous_5
data_files:
- split: 2023_10_03T21_39_11.409465
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-10-03T21-39-11.409465.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-10-03T21-39-11.409465.parquet'
- config_name: harness_hendrycksTest_moral_disputes_5
data_files:
- split: 2023_10_03T21_39_11.409465
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-10-03T21-39-11.409465.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-10-03T21-39-11.409465.parquet'
- config_name: harness_hendrycksTest_moral_scenarios_5
data_files:
- split: 2023_10_03T21_39_11.409465
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-10-03T21-39-11.409465.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-10-03T21-39-11.409465.parquet'
- config_name: harness_hendrycksTest_nutrition_5
data_files:
- split: 2023_10_03T21_39_11.409465
path:
- '**/details_harness|hendrycksTest-nutrition|5_2023-10-03T21-39-11.409465.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-nutrition|5_2023-10-03T21-39-11.409465.parquet'
- config_name: harness_hendrycksTest_philosophy_5
data_files:
- split: 2023_10_03T21_39_11.409465
path:
- '**/details_harness|hendrycksTest-philosophy|5_2023-10-03T21-39-11.409465.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-philosophy|5_2023-10-03T21-39-11.409465.parquet'
- config_name: harness_hendrycksTest_prehistory_5
data_files:
- split: 2023_10_03T21_39_11.409465
path:
- '**/details_harness|hendrycksTest-prehistory|5_2023-10-03T21-39-11.409465.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-prehistory|5_2023-10-03T21-39-11.409465.parquet'
- config_name: harness_hendrycksTest_professional_accounting_5
data_files:
- split: 2023_10_03T21_39_11.409465
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-10-03T21-39-11.409465.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-10-03T21-39-11.409465.parquet'
- config_name: harness_hendrycksTest_professional_law_5
data_files:
- split: 2023_10_03T21_39_11.409465
path:
- '**/details_harness|hendrycksTest-professional_law|5_2023-10-03T21-39-11.409465.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_law|5_2023-10-03T21-39-11.409465.parquet'
- config_name: harness_hendrycksTest_professional_medicine_5
data_files:
- split: 2023_10_03T21_39_11.409465
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-10-03T21-39-11.409465.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-10-03T21-39-11.409465.parquet'
- config_name: harness_hendrycksTest_professional_psychology_5
data_files:
- split: 2023_10_03T21_39_11.409465
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-10-03T21-39-11.409465.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-10-03T21-39-11.409465.parquet'
- config_name: harness_hendrycksTest_public_relations_5
data_files:
- split: 2023_10_03T21_39_11.409465
path:
- '**/details_harness|hendrycksTest-public_relations|5_2023-10-03T21-39-11.409465.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-public_relations|5_2023-10-03T21-39-11.409465.parquet'
- config_name: harness_hendrycksTest_security_studies_5
data_files:
- split: 2023_10_03T21_39_11.409465
path:
- '**/details_harness|hendrycksTest-security_studies|5_2023-10-03T21-39-11.409465.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-security_studies|5_2023-10-03T21-39-11.409465.parquet'
- config_name: harness_hendrycksTest_sociology_5
data_files:
- split: 2023_10_03T21_39_11.409465
path:
- '**/details_harness|hendrycksTest-sociology|5_2023-10-03T21-39-11.409465.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-sociology|5_2023-10-03T21-39-11.409465.parquet'
- config_name: harness_hendrycksTest_us_foreign_policy_5
data_files:
- split: 2023_10_03T21_39_11.409465
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-10-03T21-39-11.409465.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-10-03T21-39-11.409465.parquet'
- config_name: harness_hendrycksTest_virology_5
data_files:
- split: 2023_10_03T21_39_11.409465
path:
- '**/details_harness|hendrycksTest-virology|5_2023-10-03T21-39-11.409465.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-virology|5_2023-10-03T21-39-11.409465.parquet'
- config_name: harness_hendrycksTest_world_religions_5
data_files:
- split: 2023_10_03T21_39_11.409465
path:
- '**/details_harness|hendrycksTest-world_religions|5_2023-10-03T21-39-11.409465.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-world_religions|5_2023-10-03T21-39-11.409465.parquet'
- config_name: harness_truthfulqa_mc_0
data_files:
- split: 2023_10_03T21_39_11.409465
path:
- '**/details_harness|truthfulqa:mc|0_2023-10-03T21-39-11.409465.parquet'
- split: latest
path:
- '**/details_harness|truthfulqa:mc|0_2023-10-03T21-39-11.409465.parquet'
- config_name: harness_winogrande_5
data_files:
- split: 2023_10_29T01_04_16.242483
path:
- '**/details_harness|winogrande|5_2023-10-29T01-04-16.242483.parquet'
- split: latest
path:
- '**/details_harness|winogrande|5_2023-10-29T01-04-16.242483.parquet'
- config_name: results
data_files:
- split: 2023_10_03T21_39_11.409465
path:
- results_2023-10-03T21-39-11.409465.parquet
- split: 2023_10_29T01_04_16.242483
path:
- results_2023-10-29T01-04-16.242483.parquet
- split: latest
path:
- results_2023-10-29T01-04-16.242483.parquet
---
# Dataset Card for Evaluation run of TheBloke/Guanaco-3B-Uncensored-v2-GPTQ
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/TheBloke/Guanaco-3B-Uncensored-v2-GPTQ
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** clementine@hf.co
### Dataset Summary
Dataset automatically created during the evaluation run of model [TheBloke/Guanaco-3B-Uncensored-v2-GPTQ](https://huggingface.co/TheBloke/Guanaco-3B-Uncensored-v2-GPTQ) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_TheBloke__Guanaco-3B-Uncensored-v2-GPTQ",
"harness_winogrande_5",
split="train")
```
## Latest results
These are the [latest results from run 2023-10-29T01:04:16.242483](https://huggingface.co/datasets/open-llm-leaderboard/details_TheBloke__Guanaco-3B-Uncensored-v2-GPTQ/blob/main/results_2023-10-29T01-04-16.242483.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"em": 0.0045092281879194635,
"em_stderr": 0.0006861346899095007,
"f1": 0.06708368288590627,
"f1_stderr": 0.0016014292768729186,
"acc": 0.322384038037953,
"acc_stderr": 0.0072675866532889944
},
"harness|drop|3": {
"em": 0.0045092281879194635,
"em_stderr": 0.0006861346899095007,
"f1": 0.06708368288590627,
"f1_stderr": 0.0016014292768729186
},
"harness|gsm8k|5": {
"acc": 0.001516300227445034,
"acc_stderr": 0.0010717793485492612
},
"harness|winogrande|5": {
"acc": 0.6432517758484609,
"acc_stderr": 0.013463393958028728
}
}
```
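As a quick sanity check (an illustrative snippet, not part of the evaluation harness), the aggregate `acc` in the `"all"` block above is simply the mean of the per-task accuracies:

```python
# Reproduce the aggregate "acc" reported under "all" by averaging the
# accuracies of the two acc-bearing tasks (gsm8k and winogrande).
per_task_acc = {
    "harness|gsm8k|5": 0.001516300227445034,
    "harness|winogrande|5": 0.6432517758484609,
}

mean_acc = sum(per_task_acc.values()) / len(per_task_acc)
print(mean_acc)  # agrees with the "acc" value in the "all" block above
```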
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] | 38,788 | [
[
-0.016876220703125,
-0.048065185546875,
0.01433563232421875,
0.01727294921875,
-0.021575927734375,
0.00940704345703125,
-0.0268402099609375,
-0.0186614990234375,
0.021575927734375,
0.042266845703125,
-0.040802001953125,
-0.07073974609375,
-0.0467529296875,
0... |
warleagle/1t_chat_bot_data_v2 | 2023-10-03T23:07:16.000Z | [
"region:us"
] | warleagle | null | null | 0 | 3 | 2023-10-03T23:07:13 | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 890558
num_examples: 2083
download_size: 398939
dataset_size: 890558
---
# Dataset Card for "1t_chat_bot_data_v2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 358 | [
[
-0.016510009765625,
-0.044097900390625,
-0.003406524658203125,
0.0233001708984375,
-0.016204833984375,
0.002262115478515625,
0.0178375244140625,
-0.00435638427734375,
0.051544189453125,
0.044647216796875,
-0.06536865234375,
-0.039337158203125,
-0.040008544921875... |
Anis1123/guip-unfined | 2023-10-04T06:10:27.000Z | [
"region:us"
] | Anis1123 | null | null | 0 | 3 | 2023-10-04T05:28:02 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
wozniakclub/compendio-anahuac | 2023-10-04T07:00:26.000Z | [
"region:us"
] | wozniakclub | null | null | 0 | 3 | 2023-10-04T06:40:10 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
Falah/logo_prompts | 2023-10-04T09:57:32.000Z | [
"region:us"
] | Falah | null | null | 0 | 3 | 2023-10-04T09:57:29 | ---
dataset_info:
features:
- name: prompts
dtype: string
splits:
- name: train
num_bytes: 271034
num_examples: 1000
download_size: 34969
dataset_size: 271034
---
# Dataset Card for "logo_prompts"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 353 | [
[
-0.048187255859375,
-0.020751953125,
0.0177459716796875,
0.018707275390625,
-0.018035888671875,
0.00905609130859375,
0.015838623046875,
-0.0081787109375,
0.055694580078125,
0.0167694091796875,
-0.07568359375,
-0.048980712890625,
-0.0408935546875,
-0.00021672... |
Mxode/Chinese-Classics-Partial | 2023-10-04T10:54:47.000Z | [
"task_categories:text-generation",
"size_categories:100K<n<1M",
"language:zh",
"license:apache-2.0",
"classics",
"region:us"
] | Mxode | null | null | 0 | 3 | 2023-10-04T10:46:03 | ---
license: apache-2.0
task_categories:
- text-generation
language:
- zh
tags:
- classics
size_categories:
- 100K<n<1M
---
Over 200 **plain txt files** related to Chinese classics, found by chance and lightly cleaned: some noise and blank lines were removed.
A sample file looks like this:
```
古训《增广贤文》
昔时贤文,诲汝谆谆,集韵增文,多见多闻。
观今宜鉴古,无古不成今。
知己知彼,将心比心。
酒逢知己饮,诗向会人吟。
相识满天下,知心能几人。
相逢好似初相识,到老终无怨恨心。
近水知鱼性,近山识鸟音。
易涨易退山溪水,易反易覆小人心。
运去金成铁,时来铁似金,读书须用意,一字值千金。
``` | 344 | [
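The light cleaning mentioned above can be sketched as follows (an illustrative snippet only; the actual cleaning script is not included with the dataset):

```python
# Sketch of the light cleaning described above: strip stray whitespace
# and drop blank lines from a raw txt file's contents.
raw = "古训《增广贤文》\n\n  昔时贤文,诲汝谆谆,集韵增文,多见多闻。  \n\n"

stripped = (line.strip() for line in raw.splitlines())
cleaned = [line for line in stripped if line]
print("\n".join(cleaned))
```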
[
-0.02947998046875,
-0.046295166015625,
0.021942138671875,
0.06707763671875,
-0.071044921875,
-0.0242919921875,
0.01187896728515625,
-0.033477783203125,
0.035797119140625,
0.04541015625,
-0.0231781005859375,
-0.042327880859375,
-0.0535888671875,
0.01242065429... |
ai4ce/UNav-Dataset | 2023-10-20T16:24:09.000Z | [
"region:us"
] | ai4ce | null | null | 0 | 3 | 2023-10-04T14:00:10 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
DialogueCharacter/english_dialogue_instruction_with_reward_score_judged_by_13B_llama2 | 2023-10-29T03:54:07.000Z | [
"region:us"
] | DialogueCharacter | null | null | 0 | 3 | 2023-10-04T15:06:25 | ---
dataset_info:
features:
- name: input
dtype: string
- name: output
dtype: string
- name: reward_score
dtype: float64
splits:
- name: train
num_bytes: 888623949
num_examples: 909740
download_size: 475765484
dataset_size: 888623949
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "dialogue_instruction_with_reward_score_judged_by_13B_llama2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 573 | [
[
-0.01904296875,
-0.024688720703125,
0.0275115966796875,
0.038909912109375,
-0.0220794677734375,
0.004955291748046875,
0.017608642578125,
-0.003627777099609375,
0.044525146484375,
0.032501220703125,
-0.0684814453125,
-0.061248779296875,
-0.057373046875,
-0.01... |
songlab/gpn-msa-sapiens-dataset | 2023-10-12T15:13:23.000Z | [
"license:mit",
"region:us"
] | songlab | null | null | 0 | 3 | 2023-10-04T17:11:12 | ---
license: mit
---
# Training windows for GPN-MSA-Sapiens
For more information check out our [paper](https://doi.org/10.1101/2023.10.10.561776) and [repository](https://github.com/songlab-cal/gpn).
Path in Snakemake:
`results/dataset/multiz100way/89/128/64/True/defined.phastCons.percentile-75_0.05_0.001` | 308 | [
[
-0.02459716796875,
-0.016021728515625,
0.041412353515625,
0.0030231475830078125,
-0.01424407958984375,
0.0159912109375,
-0.005035400390625,
-0.0059051513671875,
0.031585693359375,
0.00980377197265625,
-0.0556640625,
-0.052154541015625,
-0.030853271484375,
-0... |
erhwenkuo/rlhf_reward_single_round-chinese-zhtw | 2023-10-04T22:48:40.000Z | [
"task_categories:conversational",
"size_categories:10K<n<100K",
"language:zh",
"arxiv:2204.05862",
"region:us"
] | erhwenkuo | null | null | 0 | 3 | 2023-10-04T22:25:52 | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: chosen
dtype: string
- name: rejected
dtype: string
splits:
- name: train
num_bytes: 12143678
num_examples: 19862
- name: test
num_bytes: 3118994
num_examples: 4996
download_size: 10724182
dataset_size: 15262672
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
task_categories:
- conversational
language:
- zh
size_categories:
- 10K<n<100K
---
# Dataset Card for "rlhf_reward_single_round-chinese-zhtw"
Human-preference data about helpfulness and harmlessness, open-sourced with Anthropic's paper [Training a Helpful and Harmless Assistant with Reinforcement Learning from Human Feedback](https://arxiv.org/abs/2204.05862).
This data is intended for training a preference (or reward) model for subsequent RLHF training.
## Source dataset
This dataset comes from [beyond/rlhf-reward-single-round-trans_chinese](https://huggingface.co/datasets/beyond/rlhf-reward-single-round-trans_chinese) and was converted from Simplified to Traditional Chinese using OpenCC.
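The Simplified-to-Traditional conversion step can be illustrated with a tiny character map (OpenCC itself covers the full character set and phrase-level rules; the hand-written mapping below is only a stand-in):

```python
# Illustrative stand-in for the OpenCC Simplified -> Traditional
# conversion described above, using a tiny hand-written character map.
S2T = str.maketrans({"数": "數", "据": "據", "训": "訓", "练": "練"})

print("数据".translate(S2T))  # Simplified "数据" becomes Traditional "數據"
```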
| 951 | [
[
-0.022918701171875,
-0.05389404296875,
-0.0177154541015625,
0.016326904296875,
-0.03436279296875,
-0.02252197265625,
-0.0054779052734375,
-0.02728271484375,
0.04052734375,
0.02557373046875,
-0.07574462890625,
-0.05169677734375,
-0.0311126708984375,
0.0056495... |
open-llm-leaderboard/details_beomi__KoAlpaca-KoRWKV-6B | 2023-10-29T10:47:27.000Z | [
"region:us"
] | open-llm-leaderboard | null | null | 0 | 3 | 2023-10-05T07:30:08 | ---
pretty_name: Evaluation run of beomi/KoAlpaca-KoRWKV-6B
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [beomi/KoAlpaca-KoRWKV-6B](https://huggingface.co/beomi/KoAlpaca-KoRWKV-6B) on\
\ the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 64 configurations, each one corresponding to one of\
\ the evaluated tasks.\n\nThe dataset has been created from 2 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
\ timestamp of the run. The \"train\" split always points to the latest results.\n\
\nAn additional configuration \"results\" stores all the aggregated results of the\
\ run (and is used to compute and display the aggregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_beomi__KoAlpaca-KoRWKV-6B\"\
,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\
These are the [latest results from run 2023-10-29T10:47:14.671531](https://huggingface.co/datasets/open-llm-leaderboard/details_beomi__KoAlpaca-KoRWKV-6B/blob/main/results_2023-10-29T10-47-14.671531.json)(note\
\ that their might be results for other tasks in the repos if successive evals didn't\
\ cover the same tasks. You find each in the results and the \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.0065016778523489934,\n\
\ \"em_stderr\": 0.000823068429722398,\n \"f1\": 0.0316212248322148,\n\
\ \"f1_stderr\": 0.0012557323603243587,\n \"acc\": 0.2580899763220205,\n\
\ \"acc_stderr\": 0.007022563065489298\n },\n \"harness|drop|3\": {\n\
\ \"em\": 0.0065016778523489934,\n \"em_stderr\": 0.000823068429722398,\n\
\ \"f1\": 0.0316212248322148,\n \"f1_stderr\": 0.0012557323603243587\n\
\ },\n \"harness|gsm8k|5\": {\n \"acc\": 0.0,\n \"acc_stderr\"\
: 0.0\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.516179952644041,\n\
\ \"acc_stderr\": 0.014045126130978596\n }\n}\n```"
repo_url: https://huggingface.co/beomi/KoAlpaca-KoRWKV-6B
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_arc_challenge_25
data_files:
- split: 2023_10_05T07_29_47.362584
path:
- '**/details_harness|arc:challenge|25_2023-10-05T07-29-47.362584.parquet'
- split: latest
path:
- '**/details_harness|arc:challenge|25_2023-10-05T07-29-47.362584.parquet'
- config_name: harness_drop_3
data_files:
- split: 2023_10_29T10_47_14.671531
path:
- '**/details_harness|drop|3_2023-10-29T10-47-14.671531.parquet'
- split: latest
path:
- '**/details_harness|drop|3_2023-10-29T10-47-14.671531.parquet'
- config_name: harness_gsm8k_5
data_files:
- split: 2023_10_29T10_47_14.671531
path:
- '**/details_harness|gsm8k|5_2023-10-29T10-47-14.671531.parquet'
- split: latest
path:
- '**/details_harness|gsm8k|5_2023-10-29T10-47-14.671531.parquet'
- config_name: harness_hellaswag_10
data_files:
- split: 2023_10_05T07_29_47.362584
path:
- '**/details_harness|hellaswag|10_2023-10-05T07-29-47.362584.parquet'
- split: latest
path:
- '**/details_harness|hellaswag|10_2023-10-05T07-29-47.362584.parquet'
- config_name: harness_hendrycksTest_5
data_files:
- split: 2023_10_05T07_29_47.362584
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-10-05T07-29-47.362584.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2023-10-05T07-29-47.362584.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2023-10-05T07-29-47.362584.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2023-10-05T07-29-47.362584.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-10-05T07-29-47.362584.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2023-10-05T07-29-47.362584.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-10-05T07-29-47.362584.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-10-05T07-29-47.362584.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-10-05T07-29-47.362584.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2023-10-05T07-29-47.362584.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2023-10-05T07-29-47.362584.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2023-10-05T07-29-47.362584.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-10-05T07-29-47.362584.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2023-10-05T07-29-47.362584.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-10-05T07-29-47.362584.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-10-05T07-29-47.362584.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2023-10-05T07-29-47.362584.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2023-10-05T07-29-47.362584.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-10-05T07-29-47.362584.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-10-05T07-29-47.362584.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-10-05T07-29-47.362584.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-10-05T07-29-47.362584.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-10-05T07-29-47.362584.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-10-05T07-29-47.362584.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-10-05T07-29-47.362584.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-10-05T07-29-47.362584.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-10-05T07-29-47.362584.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-10-05T07-29-47.362584.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-10-05T07-29-47.362584.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-10-05T07-29-47.362584.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-10-05T07-29-47.362584.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-10-05T07-29-47.362584.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2023-10-05T07-29-47.362584.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-10-05T07-29-47.362584.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2023-10-05T07-29-47.362584.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-10-05T07-29-47.362584.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-10-05T07-29-47.362584.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2023-10-05T07-29-47.362584.parquet'
- '**/details_harness|hendrycksTest-management|5_2023-10-05T07-29-47.362584.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2023-10-05T07-29-47.362584.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-10-05T07-29-47.362584.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-10-05T07-29-47.362584.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-10-05T07-29-47.362584.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-10-05T07-29-47.362584.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2023-10-05T07-29-47.362584.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2023-10-05T07-29-47.362584.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2023-10-05T07-29-47.362584.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-10-05T07-29-47.362584.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2023-10-05T07-29-47.362584.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-10-05T07-29-47.362584.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-10-05T07-29-47.362584.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2023-10-05T07-29-47.362584.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2023-10-05T07-29-47.362584.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2023-10-05T07-29-47.362584.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-10-05T07-29-47.362584.parquet'
- '**/details_harness|hendrycksTest-virology|5_2023-10-05T07-29-47.362584.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2023-10-05T07-29-47.362584.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-10-05T07-29-47.362584.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2023-10-05T07-29-47.362584.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2023-10-05T07-29-47.362584.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2023-10-05T07-29-47.362584.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-10-05T07-29-47.362584.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2023-10-05T07-29-47.362584.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-10-05T07-29-47.362584.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-10-05T07-29-47.362584.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-10-05T07-29-47.362584.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2023-10-05T07-29-47.362584.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2023-10-05T07-29-47.362584.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2023-10-05T07-29-47.362584.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-10-05T07-29-47.362584.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2023-10-05T07-29-47.362584.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-10-05T07-29-47.362584.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-10-05T07-29-47.362584.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2023-10-05T07-29-47.362584.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2023-10-05T07-29-47.362584.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-10-05T07-29-47.362584.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-10-05T07-29-47.362584.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-10-05T07-29-47.362584.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-10-05T07-29-47.362584.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-10-05T07-29-47.362584.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-10-05T07-29-47.362584.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-10-05T07-29-47.362584.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-10-05T07-29-47.362584.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-10-05T07-29-47.362584.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-10-05T07-29-47.362584.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-10-05T07-29-47.362584.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-10-05T07-29-47.362584.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-10-05T07-29-47.362584.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-10-05T07-29-47.362584.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2023-10-05T07-29-47.362584.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-10-05T07-29-47.362584.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2023-10-05T07-29-47.362584.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-10-05T07-29-47.362584.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-10-05T07-29-47.362584.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2023-10-05T07-29-47.362584.parquet'
- '**/details_harness|hendrycksTest-management|5_2023-10-05T07-29-47.362584.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2023-10-05T07-29-47.362584.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-10-05T07-29-47.362584.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-10-05T07-29-47.362584.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-10-05T07-29-47.362584.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-10-05T07-29-47.362584.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2023-10-05T07-29-47.362584.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2023-10-05T07-29-47.362584.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2023-10-05T07-29-47.362584.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-10-05T07-29-47.362584.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2023-10-05T07-29-47.362584.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-10-05T07-29-47.362584.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-10-05T07-29-47.362584.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2023-10-05T07-29-47.362584.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2023-10-05T07-29-47.362584.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2023-10-05T07-29-47.362584.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-10-05T07-29-47.362584.parquet'
- '**/details_harness|hendrycksTest-virology|5_2023-10-05T07-29-47.362584.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2023-10-05T07-29-47.362584.parquet'
- config_name: harness_hendrycksTest_abstract_algebra_5
data_files:
- split: 2023_10_05T07_29_47.362584
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-10-05T07-29-47.362584.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-10-05T07-29-47.362584.parquet'
- config_name: harness_hendrycksTest_anatomy_5
data_files:
- split: 2023_10_05T07_29_47.362584
path:
- '**/details_harness|hendrycksTest-anatomy|5_2023-10-05T07-29-47.362584.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-anatomy|5_2023-10-05T07-29-47.362584.parquet'
- config_name: harness_hendrycksTest_astronomy_5
data_files:
- split: 2023_10_05T07_29_47.362584
path:
- '**/details_harness|hendrycksTest-astronomy|5_2023-10-05T07-29-47.362584.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-astronomy|5_2023-10-05T07-29-47.362584.parquet'
- config_name: harness_hendrycksTest_business_ethics_5
data_files:
- split: 2023_10_05T07_29_47.362584
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2023-10-05T07-29-47.362584.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2023-10-05T07-29-47.362584.parquet'
- config_name: harness_hendrycksTest_clinical_knowledge_5
data_files:
- split: 2023_10_05T07_29_47.362584
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-10-05T07-29-47.362584.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-10-05T07-29-47.362584.parquet'
- config_name: harness_hendrycksTest_college_biology_5
data_files:
- split: 2023_10_05T07_29_47.362584
path:
- '**/details_harness|hendrycksTest-college_biology|5_2023-10-05T07-29-47.362584.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_biology|5_2023-10-05T07-29-47.362584.parquet'
- config_name: harness_hendrycksTest_college_chemistry_5
data_files:
- split: 2023_10_05T07_29_47.362584
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-10-05T07-29-47.362584.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-10-05T07-29-47.362584.parquet'
- config_name: harness_hendrycksTest_college_computer_science_5
data_files:
- split: 2023_10_05T07_29_47.362584
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-10-05T07-29-47.362584.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-10-05T07-29-47.362584.parquet'
- config_name: harness_hendrycksTest_college_mathematics_5
data_files:
- split: 2023_10_05T07_29_47.362584
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-10-05T07-29-47.362584.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-10-05T07-29-47.362584.parquet'
- config_name: harness_hendrycksTest_college_medicine_5
data_files:
- split: 2023_10_05T07_29_47.362584
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2023-10-05T07-29-47.362584.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2023-10-05T07-29-47.362584.parquet'
- config_name: harness_hendrycksTest_college_physics_5
data_files:
- split: 2023_10_05T07_29_47.362584
path:
- '**/details_harness|hendrycksTest-college_physics|5_2023-10-05T07-29-47.362584.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_physics|5_2023-10-05T07-29-47.362584.parquet'
- config_name: harness_hendrycksTest_computer_security_5
data_files:
- split: 2023_10_05T07_29_47.362584
path:
- '**/details_harness|hendrycksTest-computer_security|5_2023-10-05T07-29-47.362584.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-computer_security|5_2023-10-05T07-29-47.362584.parquet'
- config_name: harness_hendrycksTest_conceptual_physics_5
data_files:
- split: 2023_10_05T07_29_47.362584
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-10-05T07-29-47.362584.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-10-05T07-29-47.362584.parquet'
- config_name: harness_hendrycksTest_econometrics_5
data_files:
- split: 2023_10_05T07_29_47.362584
path:
- '**/details_harness|hendrycksTest-econometrics|5_2023-10-05T07-29-47.362584.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-econometrics|5_2023-10-05T07-29-47.362584.parquet'
- config_name: harness_hendrycksTest_electrical_engineering_5
data_files:
- split: 2023_10_05T07_29_47.362584
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-10-05T07-29-47.362584.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-10-05T07-29-47.362584.parquet'
- config_name: harness_hendrycksTest_elementary_mathematics_5
data_files:
- split: 2023_10_05T07_29_47.362584
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-10-05T07-29-47.362584.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-10-05T07-29-47.362584.parquet'
- config_name: harness_hendrycksTest_formal_logic_5
data_files:
- split: 2023_10_05T07_29_47.362584
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2023-10-05T07-29-47.362584.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2023-10-05T07-29-47.362584.parquet'
- config_name: harness_hendrycksTest_global_facts_5
data_files:
- split: 2023_10_05T07_29_47.362584
path:
- '**/details_harness|hendrycksTest-global_facts|5_2023-10-05T07-29-47.362584.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-global_facts|5_2023-10-05T07-29-47.362584.parquet'
- config_name: harness_hendrycksTest_high_school_biology_5
data_files:
- split: 2023_10_05T07_29_47.362584
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-10-05T07-29-47.362584.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-10-05T07-29-47.362584.parquet'
- config_name: harness_hendrycksTest_high_school_chemistry_5
data_files:
- split: 2023_10_05T07_29_47.362584
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-10-05T07-29-47.362584.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-10-05T07-29-47.362584.parquet'
- config_name: harness_hendrycksTest_high_school_computer_science_5
data_files:
- split: 2023_10_05T07_29_47.362584
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-10-05T07-29-47.362584.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-10-05T07-29-47.362584.parquet'
- config_name: harness_hendrycksTest_high_school_european_history_5
data_files:
- split: 2023_10_05T07_29_47.362584
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-10-05T07-29-47.362584.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-10-05T07-29-47.362584.parquet'
- config_name: harness_hendrycksTest_high_school_geography_5
data_files:
- split: 2023_10_05T07_29_47.362584
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-10-05T07-29-47.362584.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-10-05T07-29-47.362584.parquet'
- config_name: harness_hendrycksTest_high_school_government_and_politics_5
data_files:
- split: 2023_10_05T07_29_47.362584
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-10-05T07-29-47.362584.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-10-05T07-29-47.362584.parquet'
- config_name: harness_hendrycksTest_high_school_macroeconomics_5
data_files:
- split: 2023_10_05T07_29_47.362584
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-10-05T07-29-47.362584.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-10-05T07-29-47.362584.parquet'
- config_name: harness_hendrycksTest_high_school_mathematics_5
data_files:
- split: 2023_10_05T07_29_47.362584
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-10-05T07-29-47.362584.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-10-05T07-29-47.362584.parquet'
- config_name: harness_hendrycksTest_high_school_microeconomics_5
data_files:
- split: 2023_10_05T07_29_47.362584
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-10-05T07-29-47.362584.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-10-05T07-29-47.362584.parquet'
- config_name: harness_hendrycksTest_high_school_physics_5
data_files:
- split: 2023_10_05T07_29_47.362584
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-10-05T07-29-47.362584.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-10-05T07-29-47.362584.parquet'
- config_name: harness_hendrycksTest_high_school_psychology_5
data_files:
- split: 2023_10_05T07_29_47.362584
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-10-05T07-29-47.362584.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-10-05T07-29-47.362584.parquet'
- config_name: harness_hendrycksTest_high_school_statistics_5
data_files:
- split: 2023_10_05T07_29_47.362584
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-10-05T07-29-47.362584.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-10-05T07-29-47.362584.parquet'
- config_name: harness_hendrycksTest_high_school_us_history_5
data_files:
- split: 2023_10_05T07_29_47.362584
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-10-05T07-29-47.362584.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-10-05T07-29-47.362584.parquet'
- config_name: harness_hendrycksTest_high_school_world_history_5
data_files:
- split: 2023_10_05T07_29_47.362584
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-10-05T07-29-47.362584.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-10-05T07-29-47.362584.parquet'
- config_name: harness_hendrycksTest_human_aging_5
data_files:
- split: 2023_10_05T07_29_47.362584
path:
- '**/details_harness|hendrycksTest-human_aging|5_2023-10-05T07-29-47.362584.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_aging|5_2023-10-05T07-29-47.362584.parquet'
- config_name: harness_hendrycksTest_human_sexuality_5
data_files:
- split: 2023_10_05T07_29_47.362584
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-10-05T07-29-47.362584.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-10-05T07-29-47.362584.parquet'
- config_name: harness_hendrycksTest_international_law_5
data_files:
- split: 2023_10_05T07_29_47.362584
path:
- '**/details_harness|hendrycksTest-international_law|5_2023-10-05T07-29-47.362584.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-international_law|5_2023-10-05T07-29-47.362584.parquet'
- config_name: harness_hendrycksTest_jurisprudence_5
data_files:
- split: 2023_10_05T07_29_47.362584
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-10-05T07-29-47.362584.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-10-05T07-29-47.362584.parquet'
- config_name: harness_hendrycksTest_logical_fallacies_5
data_files:
- split: 2023_10_05T07_29_47.362584
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-10-05T07-29-47.362584.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-10-05T07-29-47.362584.parquet'
- config_name: harness_hendrycksTest_machine_learning_5
data_files:
- split: 2023_10_05T07_29_47.362584
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2023-10-05T07-29-47.362584.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2023-10-05T07-29-47.362584.parquet'
- config_name: harness_hendrycksTest_management_5
data_files:
- split: 2023_10_05T07_29_47.362584
path:
- '**/details_harness|hendrycksTest-management|5_2023-10-05T07-29-47.362584.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-management|5_2023-10-05T07-29-47.362584.parquet'
- config_name: harness_hendrycksTest_marketing_5
data_files:
- split: 2023_10_05T07_29_47.362584
path:
- '**/details_harness|hendrycksTest-marketing|5_2023-10-05T07-29-47.362584.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-marketing|5_2023-10-05T07-29-47.362584.parquet'
- config_name: harness_hendrycksTest_medical_genetics_5
data_files:
- split: 2023_10_05T07_29_47.362584
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-10-05T07-29-47.362584.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-10-05T07-29-47.362584.parquet'
- config_name: harness_hendrycksTest_miscellaneous_5
data_files:
- split: 2023_10_05T07_29_47.362584
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-10-05T07-29-47.362584.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-10-05T07-29-47.362584.parquet'
- config_name: harness_hendrycksTest_moral_disputes_5
data_files:
- split: 2023_10_05T07_29_47.362584
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-10-05T07-29-47.362584.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-10-05T07-29-47.362584.parquet'
- config_name: harness_hendrycksTest_moral_scenarios_5
data_files:
- split: 2023_10_05T07_29_47.362584
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-10-05T07-29-47.362584.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-10-05T07-29-47.362584.parquet'
- config_name: harness_hendrycksTest_nutrition_5
data_files:
- split: 2023_10_05T07_29_47.362584
path:
- '**/details_harness|hendrycksTest-nutrition|5_2023-10-05T07-29-47.362584.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-nutrition|5_2023-10-05T07-29-47.362584.parquet'
- config_name: harness_hendrycksTest_philosophy_5
data_files:
- split: 2023_10_05T07_29_47.362584
path:
- '**/details_harness|hendrycksTest-philosophy|5_2023-10-05T07-29-47.362584.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-philosophy|5_2023-10-05T07-29-47.362584.parquet'
- config_name: harness_hendrycksTest_prehistory_5
data_files:
- split: 2023_10_05T07_29_47.362584
path:
- '**/details_harness|hendrycksTest-prehistory|5_2023-10-05T07-29-47.362584.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-prehistory|5_2023-10-05T07-29-47.362584.parquet'
- config_name: harness_hendrycksTest_professional_accounting_5
data_files:
- split: 2023_10_05T07_29_47.362584
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-10-05T07-29-47.362584.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-10-05T07-29-47.362584.parquet'
- config_name: harness_hendrycksTest_professional_law_5
data_files:
- split: 2023_10_05T07_29_47.362584
path:
- '**/details_harness|hendrycksTest-professional_law|5_2023-10-05T07-29-47.362584.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_law|5_2023-10-05T07-29-47.362584.parquet'
- config_name: harness_hendrycksTest_professional_medicine_5
data_files:
- split: 2023_10_05T07_29_47.362584
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-10-05T07-29-47.362584.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-10-05T07-29-47.362584.parquet'
- config_name: harness_hendrycksTest_professional_psychology_5
data_files:
- split: 2023_10_05T07_29_47.362584
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-10-05T07-29-47.362584.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-10-05T07-29-47.362584.parquet'
- config_name: harness_hendrycksTest_public_relations_5
data_files:
- split: 2023_10_05T07_29_47.362584
path:
- '**/details_harness|hendrycksTest-public_relations|5_2023-10-05T07-29-47.362584.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-public_relations|5_2023-10-05T07-29-47.362584.parquet'
- config_name: harness_hendrycksTest_security_studies_5
data_files:
- split: 2023_10_05T07_29_47.362584
path:
- '**/details_harness|hendrycksTest-security_studies|5_2023-10-05T07-29-47.362584.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-security_studies|5_2023-10-05T07-29-47.362584.parquet'
- config_name: harness_hendrycksTest_sociology_5
data_files:
- split: 2023_10_05T07_29_47.362584
path:
- '**/details_harness|hendrycksTest-sociology|5_2023-10-05T07-29-47.362584.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-sociology|5_2023-10-05T07-29-47.362584.parquet'
- config_name: harness_hendrycksTest_us_foreign_policy_5
data_files:
- split: 2023_10_05T07_29_47.362584
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-10-05T07-29-47.362584.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-10-05T07-29-47.362584.parquet'
- config_name: harness_hendrycksTest_virology_5
data_files:
- split: 2023_10_05T07_29_47.362584
path:
- '**/details_harness|hendrycksTest-virology|5_2023-10-05T07-29-47.362584.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-virology|5_2023-10-05T07-29-47.362584.parquet'
- config_name: harness_hendrycksTest_world_religions_5
data_files:
- split: 2023_10_05T07_29_47.362584
path:
- '**/details_harness|hendrycksTest-world_religions|5_2023-10-05T07-29-47.362584.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-world_religions|5_2023-10-05T07-29-47.362584.parquet'
- config_name: harness_truthfulqa_mc_0
data_files:
- split: 2023_10_05T07_29_47.362584
path:
- '**/details_harness|truthfulqa:mc|0_2023-10-05T07-29-47.362584.parquet'
- split: latest
path:
- '**/details_harness|truthfulqa:mc|0_2023-10-05T07-29-47.362584.parquet'
- config_name: harness_winogrande_5
data_files:
- split: 2023_10_29T10_47_14.671531
path:
- '**/details_harness|winogrande|5_2023-10-29T10-47-14.671531.parquet'
- split: latest
path:
- '**/details_harness|winogrande|5_2023-10-29T10-47-14.671531.parquet'
- config_name: results
data_files:
- split: 2023_10_05T07_29_47.362584
path:
- results_2023-10-05T07-29-47.362584.parquet
- split: 2023_10_29T10_47_14.671531
path:
- results_2023-10-29T10-47-14.671531.parquet
- split: latest
path:
- results_2023-10-29T10-47-14.671531.parquet
---
# Dataset Card for Evaluation run of beomi/KoAlpaca-KoRWKV-6B
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/beomi/KoAlpaca-KoRWKV-6B
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** clementine@hf.co
### Dataset Summary
Dataset automatically created during the evaluation run of model [beomi/KoAlpaca-KoRWKV-6B](https://huggingface.co/beomi/KoAlpaca-KoRWKV-6B) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "latest" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_beomi__KoAlpaca-KoRWKV-6B",
"harness_winogrande_5",
	split="latest")
```
## Latest results
These are the [latest results from run 2023-10-29T10:47:14.671531](https://huggingface.co/datasets/open-llm-leaderboard/details_beomi__KoAlpaca-KoRWKV-6B/blob/main/results_2023-10-29T10-47-14.671531.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"em": 0.0065016778523489934,
"em_stderr": 0.000823068429722398,
"f1": 0.0316212248322148,
"f1_stderr": 0.0012557323603243587,
"acc": 0.2580899763220205,
"acc_stderr": 0.007022563065489298
},
"harness|drop|3": {
"em": 0.0065016778523489934,
"em_stderr": 0.000823068429722398,
"f1": 0.0316212248322148,
"f1_stderr": 0.0012557323603243587
},
"harness|gsm8k|5": {
"acc": 0.0,
"acc_stderr": 0.0
},
"harness|winogrande|5": {
"acc": 0.516179952644041,
"acc_stderr": 0.014045126130978596
}
}
```
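For quick inspection, the nested structure above can be flattened into (task, metric, value) rows without touching the Hub. A minimal sketch, using only the metric values shown in the JSON (standard-error fields omitted):

```python
# Illustrative only: flatten the per-task metrics shown above into
# (task, metric, value) rows. The dict literal mirrors the "Latest results"
# JSON; keys and values are copied from it, not fetched from the Hub.
results = {
    "all": {"em": 0.0065016778523489934, "f1": 0.0316212248322148,
            "acc": 0.2580899763220205},
    "harness|drop|3": {"em": 0.0065016778523489934, "f1": 0.0316212248322148},
    "harness|gsm8k|5": {"acc": 0.0},
    "harness|winogrande|5": {"acc": 0.516179952644041},
}

rows = [
    (task, metric, value)
    for task, metrics in results.items()
    for metric, value in metrics.items()
    if not metric.endswith("_stderr")  # keep point estimates only
]

for task, metric, value in rows:
    print(f"{task:25s} {metric:4s} {value:.4f}")
```

The same pattern works on the full `results` config once loaded, since each split stores the identical nested task-to-metrics layout.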
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] | 38,535 | [
[
-0.02880859375,
-0.0494384765625,
0.0083465576171875,
0.0213470458984375,
-0.0167999267578125,
0.007030487060546875,
-0.0288543701171875,
-0.0169525146484375,
0.0288848876953125,
0.03961181640625,
-0.0523681640625,
-0.0660400390625,
-0.057769775390625,
0.012... |
argilla/oig-30k | 2023-10-05T08:41:21.000Z | [
"size_categories:10K<n<100K",
"rlfh",
"argilla",
"human-feedback",
"region:us"
] | argilla | null | null | 0 | 3 | 2023-10-05T08:41:13 | ---
size_categories: 10K<n<100K
tags:
- rlfh
- argilla
- human-feedback
---
# Dataset Card for oig-30k
This dataset has been created with [Argilla](https://docs.argilla.io).
As shown in the sections below, this dataset can be loaded into Argilla as explained in [Load with Argilla](#load-with-argilla), or used directly with the `datasets` library in [Load with `datasets`](#load-with-datasets).
## Dataset Description
- **Homepage:** https://argilla.io
- **Repository:** https://github.com/argilla-io/argilla
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
This dataset contains:
* A dataset configuration file named `argilla.yaml`, conforming to the Argilla dataset format. This configuration file will be used to configure the dataset when using the `FeedbackDataset.from_huggingface` method in Argilla.
* Dataset records in a format compatible with HuggingFace `datasets`. These records will be loaded automatically when using `FeedbackDataset.from_huggingface` and can be loaded independently using the `datasets` library via `load_dataset`.
* The [annotation guidelines](#annotation-guidelines) that have been used for building and curating the dataset, if they've been defined in Argilla.
### Load with Argilla
To load with Argilla, you'll just need to install Argilla as `pip install argilla --upgrade` and then use the following code:
```python
import argilla as rg
ds = rg.FeedbackDataset.from_huggingface("argilla/oig-30k")
```
### Load with `datasets`
To load this dataset with `datasets`, you'll just need to install `datasets` as `pip install datasets --upgrade` and then use the following code:
```python
from datasets import load_dataset
ds = load_dataset("argilla/oig-30k")
```
### Supported Tasks and Leaderboards
This dataset can contain [multiple fields, questions and responses](https://docs.argilla.io/en/latest/conceptual_guides/data_model.html#feedback-dataset) so it can be used for different NLP tasks, depending on the configuration. The dataset structure is described in the [Dataset Structure section](#dataset-structure).
There are no leaderboards associated with this dataset.
### Languages
[More Information Needed]
## Dataset Structure
### Data in Argilla
The dataset is created in Argilla with: **fields**, **questions**, **suggestions**, and **guidelines**.
The **fields** are the dataset records themselves; for the moment, only text fields are supported. These are the ones that will be used to provide responses to the questions.
| Field Name | Title | Type | Required | Markdown |
| ---------- | ----- | ---- | -------- | -------- |
| background | Background | text | True | False |
| prompt | Prompt | text | True | False |
| response | Final Response | text | True | False |
The **questions** are the questions that will be asked to the annotators. They can be of different types, such as rating, text, label_selection, multi_label_selection, or ranking.
| Question Name | Title | Type | Required | Description | Values/Labels |
| ------------- | ----- | ---- | -------- | ----------- | ------------- |
| quality | Is it a Good or Bad response? | label_selection | True | N/A | ['Good', 'Bad'] |
**✨ NEW** Additionally, we also have **suggestions**, which are linked to the existing questions, and so on, named appending "-suggestion" and "-suggestion-metadata" to those, containing the value/s of the suggestion and its metadata, respectively. So on, the possible values are the same as in the table above.
Finally, the **guidelines** are just a plain string that can be used to provide instructions to the annotators. Find those in the [annotation guidelines](#annotation-guidelines) section.
### Data Instances
An example of a dataset instance in Argilla looks as follows:
```json
{
"external_id": null,
"fields": {
"background": "",
"prompt": "\u003chuman\u003e: Write a article related to AT\u0026T, purchased, June.",
"response": "\u003cbot\u003e: This is my first, albeit temporary, case for my iPhone. It was one of the few available at the AT\u0026T where I purchased my iPhone on June 29 and it\u0027s been useful to date, but it\u0027s not a long-term keeper"
},
"metadata": {},
"responses": [],
"suggestions": []
}
```
While the same record in HuggingFace `datasets` looks as follows:
```json
{
"background": "",
"external_id": null,
"metadata": "{}",
"prompt": "\u003chuman\u003e: Write a article related to AT\u0026T, purchased, June.",
"quality": [],
"quality-suggestion": null,
"quality-suggestion-metadata": {
"agent": null,
"score": null,
"type": null
},
"response": "\u003cbot\u003e: This is my first, albeit temporary, case for my iPhone. It was one of the few available at the AT\u0026T where I purchased my iPhone on June 29 and it\u0027s been useful to date, but it\u0027s not a long-term keeper"
}
```
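Since the flat `datasets`-style record keeps the conversational markers inline, a common preprocessing step is to split it back into a plain (prompt, response) pair. A minimal sketch; stripping the `<human>:`/`<bot>:` speaker markers is an assumption about this OIG-style text format, not an Argilla API:

```python
# Illustrative only: turn the flat `datasets`-style record shown above into a
# plain (prompt, response) pair. The record literal abbreviates the example
# instance; the "<human>:"/"<bot>:" markers are assumed to prefix each field.
record = {
    "prompt": "<human>: Write a article related to AT&T, purchased, June.",
    "response": "<bot>: This is my first, albeit temporary, case for my iPhone.",
}

def strip_speaker(text: str) -> str:
    """Drop a leading '<human>:' or '<bot>:' marker, if present."""
    for marker in ("<human>:", "<bot>:"):
        if text.startswith(marker):
            return text[len(marker):].strip()
    return text.strip()

pair = (strip_speaker(record["prompt"]), strip_speaker(record["response"]))
print(pair)
```

Applied over the whole `train` split (e.g. via `datasets.Dataset.map`), this yields marker-free pairs suitable for downstream fine-tuning or filtering by the `quality` label.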
### Data Fields
Among the dataset fields, we differentiate between the following:
* **Fields:** These are the dataset records themselves; for the moment, only text fields are supported. These are the ones that will be used to provide responses to the questions.
* **background** is of type `text`.
* **prompt** is of type `text`.
* **response** is of type `text`.
* **Questions:** These are the questions that will be asked to the annotators. They can be of different types, such as `RatingQuestion`, `TextQuestion`, `LabelQuestion`, `MultiLabelQuestion`, and `RankingQuestion`.
* **quality** is of type `label_selection` with the following allowed values ['Good', 'Bad'].
* **✨ NEW** **Suggestions:** As of Argilla 1.13.0, the suggestions have been included to provide the annotators with suggestions to ease or assist during the annotation process. Suggestions are linked to the existing questions, are always optional, and contain not just the suggestion itself, but also the metadata linked to it, if applicable.
* (optional) **quality-suggestion** is of type `label_selection` with the following allowed values ['Good', 'Bad'].
Additionally, we also have one more field which is optional and is the following:
* **external_id:** This is an optional field that can be used to provide an external ID for the dataset record. This can be useful if you want to link the dataset record to an external resource, such as a database or a file.
### Data Splits
The dataset contains a single split, which is `train`.
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation guidelines
[More Information Needed]
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] | 7,415 | [
[
-0.061126708984375,
-0.0599365234375,
0.0157318115234375,
0.02313232421875,
-0.020355224609375,
-0.0261993408203125,
0.0041961669921875,
-0.04315185546875,
0.05401611328125,
0.052093505859375,
-0.051361083984375,
-0.05975341796875,
-0.041412353515625,
0.0135... |
kewu93/three_styles_10rand | 2023-10-05T23:47:22.000Z | [
"region:us"
] | kewu93 | null | null | 0 | 3 | 2023-10-05T23:47:16 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: val
path: data/val-*
dataset_info:
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 321384.41333333333
num_examples: 10
- name: val
num_bytes: 2935082.1333333333
num_examples: 100
download_size: 3157886
dataset_size: 3256466.546666667
---
# Dataset Card for "three_styles_10rand"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 605 | [
[
-0.044586181640625,
-0.01500701904296875,
0.01190948486328125,
0.041961669921875,
-0.01226043701171875,
-0.0125732421875,
0.01403045654296875,
-0.01666259765625,
0.071533203125,
0.040313720703125,
-0.04315185546875,
-0.04803466796875,
-0.03485107421875,
-0.0... |
mharvill23/yugioh-crystal-beast-ready | 2023-10-06T01:09:51.000Z | [
"region:us"
] | mharvill23 | null | null | 0 | 3 | 2023-10-06T01:09:06 | ---
dataset_info:
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 845968.0
num_examples: 15
download_size: 847374
dataset_size: 845968.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "yugioh-crystal-beast-ready"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 486 | [
[
-0.0277252197265625,
-0.01763916015625,
0.00044035911560058594,
0.0114288330078125,
-0.026153564453125,
-0.00394439697265625,
0.0300750732421875,
-0.0220794677734375,
0.07574462890625,
0.03900146484375,
-0.06878662109375,
-0.025115966796875,
-0.0088958740234375,... |
Intuit-GenSRF/AnikaBasu-CyberbullyingDataset-es | 2023-10-06T19:33:44.000Z | [
"region:us"
] | Intuit-GenSRF | null | null | 0 | 3 | 2023-10-06T05:35:05 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: text
dtype: string
- name: labels
sequence: string
- name: __index_level_0__
dtype: int64
- name: processed_text
sequence: string
- name: text_es
dtype: string
splits:
- name: train
num_bytes: 1407598
num_examples: 2955
download_size: 0
dataset_size: 1407598
---
# Dataset Card for "AnikaBasu-CyberbullyingDataset-es"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 620 | [
[
-0.0372314453125,
-0.046722412109375,
-0.0000852346420288086,
0.026092529296875,
-0.018646240234375,
0.0145416259765625,
0.0182952880859375,
-0.01161956787109375,
0.07763671875,
0.029571533203125,
-0.073486328125,
-0.044189453125,
-0.05694580078125,
-0.00681... |
minh21/COVID-QA-sentence-transformer-biencoder-data-65_25_10 | 2023-10-06T07:47:54.000Z | [
"region:us"
] | minh21 | null | null | 0 | 3 | 2023-10-06T07:47:52 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: question
dtype: string
- name: positive
dtype: string
- name: negative
dtype: string
- name: document_id
dtype: int64
splits:
- name: train
num_bytes: 4863851
num_examples: 2378
- name: test
num_bytes: 510126
num_examples: 269
download_size: 581674
dataset_size: 5373977
---
# Dataset Card for "COVID-QA-sentence-transformer-biencoder-data-65_25_10"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 694 | [
[
-0.0244903564453125,
-0.0196380615234375,
0.006500244140625,
0.0221710205078125,
-0.00992584228515625,
-0.01296234130859375,
0.0171966552734375,
-0.0036640167236328125,
0.039825439453125,
0.0186309814453125,
-0.0518798828125,
-0.047882080078125,
-0.0381774902343... |
TiagoAdriano/Adriano_Test | 2023-10-06T09:31:26.000Z | [
"region:us"
] | TiagoAdriano | null | null | 0 | 3 | 2023-10-06T09:24:15 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
carnival13/massive_5_lang_DA2_tokenized | 2023-10-06T10:38:23.000Z | [
"region:us"
] | carnival13 | null | null | 0 | 3 | 2023-10-06T10:38:00 | ---
dataset_info:
features:
- name: pass_label
dtype: int64
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
splits:
- name: train
num_bytes: 424287645
num_examples: 552890
download_size: 127805722
dataset_size: 424287645
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "massive_5_lang_DA2_tokenized"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 553 | [
[
-0.03472900390625,
-0.03570556640625,
0.01177978515625,
0.02423095703125,
-0.0173492431640625,
0.0083160400390625,
-0.0031032562255859375,
-0.0162506103515625,
0.054534912109375,
0.033782958984375,
-0.040924072265625,
-0.05859375,
-0.052032470703125,
-0.0028... |
adityarra07/sollingen_data | 2023-10-06T14:15:44.000Z | [
"region:us"
] | adityarra07 | null | null | 0 | 3 | 2023-10-06T14:15:08 | ---
dataset_info:
features:
- name: id
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
dtype: string
splits:
- name: train
num_bytes: 1174246362.25
num_examples: 4638
download_size: 1167082408
dataset_size: 1174246362.25
---
# Dataset Card for "sollingen_data"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 480 | [
[
-0.041168212890625,
-0.004100799560546875,
0.0181121826171875,
0.0289306640625,
-0.00920867919921875,
-0.02276611328125,
0.00887298583984375,
-0.00919342041015625,
0.048614501953125,
0.032989501953125,
-0.06488037109375,
-0.06292724609375,
-0.042083740234375,
... |
adityarra07/geneva_data | 2023-10-06T14:15:53.000Z | [
"region:us"
] | adityarra07 | null | null | 0 | 3 | 2023-10-06T14:15:44 | ---
dataset_info:
features:
- name: id
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
dtype: string
splits:
- name: train
num_bytes: 223012930.0
num_examples: 811
download_size: 221984247
dataset_size: 223012930.0
---
# Dataset Card for "geneva_data"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 471 | [
[
-0.04052734375,
-0.0153961181640625,
0.0299072265625,
0.0054168701171875,
-0.018646240234375,
-0.00457763671875,
0.0301055908203125,
-0.01277923583984375,
0.05908203125,
0.037109375,
-0.0577392578125,
-0.06219482421875,
-0.0477294921875,
-0.0218505859375,
... |
Intuit-GenSRF/sexting-nsfw-adultconten-es | 2023-10-06T18:24:43.000Z | [
"region:us"
] | Intuit-GenSRF | null | null | 0 | 3 | 2023-10-06T18:22:34 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: text
dtype: string
- name: labels
sequence: string
- name: processed_text
sequence: string
- name: text_es
dtype: string
splits:
- name: train
num_bytes: 89678
num_examples: 538
download_size: 0
dataset_size: 89678
---
# Dataset Card for "sexting-nsfw-adultconten-es"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 564 | [
[
-0.0312042236328125,
-0.0216064453125,
0.00386810302734375,
0.04339599609375,
-0.025054931640625,
-0.0219573974609375,
0.005489349365234375,
-0.0144500732421875,
0.039886474609375,
0.041961669921875,
-0.061676025390625,
-0.06561279296875,
-0.04022216796875,
... |
ContextualAI/nq_open_source | 2023-10-06T22:39:14.000Z | [
"region:us"
] | ContextualAI | null | null | 0 | 3 | 2023-10-06T22:15:54 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
ContextualAI/mmlu | 2023-10-07T00:33:19.000Z | [
"region:us"
] | ContextualAI | null | null | 0 | 3 | 2023-10-07T00:18:19 | ---
dataset_info:
features:
- name: subject
dtype: string
- name: choices
sequence: string
- name: query
dtype: string
- name: responses
sequence: string
- name: gold_generation
dtype: string
- name: configuration
dtype: string
splits:
- name: train
num_bytes: 9417355319
num_examples: 5690994
- name: dev
num_bytes: 828374
num_examples: 1531
- name: test
num_bytes: 7562338
num_examples: 14042
download_size: 2724102502
dataset_size: 9425746031
---
# Dataset Card for "mmlu"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 682 | [
[
-0.039886474609375,
-0.025787353515625,
0.0116729736328125,
0.004119873046875,
-0.00963592529296875,
-0.0019779205322265625,
0.0283966064453125,
-0.009185791015625,
0.06500244140625,
0.02001953125,
-0.0712890625,
-0.0438232421875,
-0.040283203125,
-0.0065269... |
Intuit-GenSRF/tweet-eval-offensive-es | 2023-10-07T01:13:41.000Z | [
"region:us"
] | Intuit-GenSRF | null | null | 0 | 3 | 2023-10-07T01:13:38 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: text
dtype: string
- name: labels
sequence: string
- name: processed_text
sequence: string
- name: num_tokens
dtype: int64
- name: text_es
dtype: string
splits:
- name: train
num_bytes: 4941865
num_examples: 11519
download_size: 3088828
dataset_size: 4941865
---
# Dataset Card for "tweet_eval-offensive-es"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 610 | [
[
-0.00911712646484375,
-0.0433349609375,
0.00025725364685058594,
0.0257720947265625,
-0.01666259765625,
0.0301971435546875,
0.002353668212890625,
-0.0059356689453125,
0.054412841796875,
0.0227813720703125,
-0.04345703125,
-0.0726318359375,
-0.06005859375,
-0.... |
Fraol/DedupedRefDatasetWMetricFinal | 2023-10-07T20:04:05.000Z | [
"region:us"
] | Fraol | null | null | 0 | 3 | 2023-10-07T01:46:00 | ---
dataset_info:
features:
- name: source
dtype: string
- name: path_name
dtype: string
- name: file_name
dtype: string
- name: ref_type
dtype: string
- name: ref_status
dtype: string
- name: hash
dtype: string
- name: class_name
dtype: string
- name: method_name
dtype: string
- name: row_number
dtype: int64
- name: cbo
dtype: float64
- name: wmc
dtype: float64
- name: lcom*
dtype: float64
- name: loc
dtype: float64
splits:
- name: train
num_bytes: 901652208.1944371
num_examples: 150671
download_size: 215554822
dataset_size: 901652208.1944371
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "DedupedRefDatasetWMetricFinal"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 919 | [
[
-0.04400634765625,
-0.01666259765625,
0.0087432861328125,
0.0255279541015625,
-0.010284423828125,
0.011260986328125,
0.034149169921875,
-0.006214141845703125,
0.060943603515625,
0.036346435546875,
-0.07220458984375,
-0.041717529296875,
-0.042236328125,
-0.00... |
carnival13/massive_val_DA3_tokenized | 2023-10-07T06:45:08.000Z | [
"region:us"
] | carnival13 | null | null | 0 | 3 | 2023-10-07T06:45:02 | ---
dataset_info:
features:
- name: pass_label
dtype: int64
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
splits:
- name: train
num_bytes: 16518310
num_examples: 24160
download_size: 3772737
dataset_size: 16518310
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "massive_val_DA3_tokenized"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 545 | [
[
-0.0390625,
-0.042999267578125,
0.015106201171875,
0.0211334228515625,
-0.015533447265625,
-0.0011205673217773438,
0.0310516357421875,
-0.006359100341796875,
0.06341552734375,
0.04461669921875,
-0.04052734375,
-0.05230712890625,
-0.054107666015625,
-0.007850... |
RikoteMaster/llama2_4_translation | 2023-10-07T08:40:43.000Z | [
"region:us"
] | RikoteMaster | null | null | 0 | 3 | 2023-10-07T08:40:40 | ---
dataset_info:
features:
- name: Spanish
dtype: string
- name: English
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 27623544
num_examples: 118964
download_size: 11129552
dataset_size: 27623544
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "llama2_4_translation"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 525 | [
[
-0.0185089111328125,
-0.004520416259765625,
0.024383544921875,
0.041107177734375,
-0.03582763671875,
0.00954437255859375,
0.014495849609375,
-0.0225677490234375,
0.0491943359375,
0.032989501953125,
-0.053070068359375,
-0.0616455078125,
-0.05755615234375,
0.0... |
Tychema/autotrain-data-ceconomysumdataset | 2023-10-07T09:18:23.000Z | [
"task_categories:summarization",
"region:us"
] | Tychema | null | null | 0 | 3 | 2023-10-07T08:57:16 | ---
task_categories:
- summarization
---
# AutoTrain Dataset for project: ceconomysumdataset
## Dataset Description
This dataset has been automatically processed by AutoTrain for project ceconomysumdataset.
### Languages
The BCP-47 code for the dataset's language is unk.
## Dataset Structure
### Data Instances
A sample from this dataset looks as follows:
```json
[
{
"target": "\u9ed8\u6c99\u4e1c\u6536\u8d2d\u5148\u7075\u8446\u96c5\u540e\u5c06\u91cd\u7ec4\u4e3a5\u4e2a\u90e8\u95e8",
"text": "\u65b0\u6d6a\u8d22\u7ecf\u8baf \u5317\u4eac\u65f6\u95f4\u5468\u4e00\u665a\u95f4\u6d88\u606f\uff0c\u9ed8\u6c99\u4e1c\u516c\u53f8(MRK)\u603b\u88c1\u517c\u9996\u5e2d\u6267\u884c\u5b98\u7406\u67e5\u5fb7\u00b7\u514b\u62c9\u514b(Richard T. Clark)\u8868\u793a\uff0c\u5728\u5b8c\u6210\u5bf9\u7ade\u4e89\u5bf9\u624b\u5148\u7075\u8446\u96c5\u516c\u53f8(SGP)411\u4ebf\u7f8e\u5143\u7684\u6536\u8d2d\u540e\uff0c\u8be5\u516c\u53f8\u5c06\u91cd\u7ec4\u4e3a5\u4e2a\u90e8\u95e8\u3002\u514b\u62c9\u514b\u5c06\u7ee7\u7eed\u62c5\u4efb\u65b0\u516c\u53f8\u7684CEO\u3002\u6b64\u9879\u4ea4\u6613\u9884\u8ba1\u5c06\u4e8e\u7b2c\u56db\u5b63\u5ea6\u5b8c\u6210\u3002\u65b0\u516c\u53f8\u5c06\u62e5\u67095\u4e2a\u4e3b\u8981\u90e8\u95e8\uff0c\u5305\u62ec\u5168\u7403\u4eba\u7c7b\u5065\u5eb7(Global Human Health)\u3001\u52a8\u7269\u5065\u5eb7(Animal Health)\u3001\u6d88\u8d39\u8005\u5065\u5eb7\u62a4\u7406(Consumer Health Care)\u3001\u9ed8\u6c99\u4e1c\u7814\u7a76\u5b9e\u9a8c\u5ba4(Merck Research Laboratories)\uff0c\u4ee5\u53ca\u9ed8\u6c99\u4e1c\u5236\u9020\u90e8\u95e8(Merck Manufacturing)\u3002\u6b64\u5916\uff0c\u8fd9\u5bb6\u603b\u90e8\u4f4d\u4e8e\u65b0\u6cfd\u897f\u5ddeWhitehouse Station\u7684\u516c\u53f8\u8868\u793a\uff0c\u5148\u7075\u8446\u96c5\u73b0\u4efb\u9886\u5bfc\u5c42\u5927\u7ea640%\u7684\u6210\u5458\u5c06\u6210\u4e3a\u65b0\u516c\u53f8\u7ba1\u7406\u5c42\u7684\u4e00\u90e8\u5206\uff0c\u800c\u8be5\u516c\u53f8\u5458\u5de5\u4e2d\u7684\u7edd\u5927\u90e8\u5206\u4e5f\u5c06\u7559\u5728\u5408\u5e76\u540e\u7684\u516c\u53f8\u3002\u5168\u7403\u4eba\u7c7b\u5065\u5eb7\u90e8\u95e8\u5c06\u7531\u80af\u5c3c\u65af\u00b7\u5f17\u96f7\u6cfd(Kenneth C. Frazier)\u9886\u5bfc\uff0c\u540e\u8005\u73b0\u4efb\u9ed8\u6c99\u4e1c\u6267\u884c\u526f\u603b\u88c1\u517c\u5168\u7403\u4eba\u7c7b\u5065\u5eb7\u90e8\u95e8\u603b\u88c1\u3002\u5148\u7075\u8446\u96c5\u73b0\u4efb\u9ad8\u7ea7\u526f\u603b\u88c1\u517cIntervet Schering-Plough Animal Health\u90e8\u95e8\u603b\u88c1\u52b3\u5c14\u00b7\u53ef\u6c57(Raul E. 
Kohan)\u5c06\u9886\u5bfc\u65b0\u7684\u9ed8\u6c99\u4e1c\u52a8\u7269\u5065\u5eb7\u90e8\u95e8\u3002\u6d88\u8d39\u8005\u4fdd\u5065\u90e8\u95e8\u5c06\u6682\u65f6\u7531\u65af\u5766\u5229\u00b7\u5df4\u8c22(Stanley F. Barshay)\u9886\u5bfc\uff0c\u540e\u8005\u73b0\u4efb\u5148\u7075\u8446\u96c5\u6d88\u8d39\u8005\u5065\u5eb7\u90e8\u95e8\u8463\u4e8b\u957f\u3002\u5408\u5e76\u540e\u7684\u516c\u53f8\u5c06\u4e3a\u8be5\u90e8\u95e8\u5bfb\u627e\u4e00\u4f4d\u6b63\u5f0f\u9886\u5bfc\u4eba\u3002\u9ed8\u6c99\u4e1c\u7814\u7a76\u5b9e\u9a8c\u5ba4\u90e8\u95e8\u4ecd\u5c06\u7531\u73b0\u4efb\u603b\u88c1\u5f7c\u5f97\u00b7\u91d1(Peter S. Kim)\u9886\u5bfc\u3002\u9ed8\u6c99\u4e1c\u751f\u4ea7\u90e8\u95e8\u5c06\u7531\u5a01\u5229\u00b7\u8fea\u65af(Willie A. Deese)\u9886\u5bfc\uff0c\u540e\u8005\u73b0\u4efb\u9ed8\u6c99\u4e1c\u751f\u4ea7\u4e1a\u52a1\u603b\u88c1\u3002"
},
{
"target": "\u5927\u76d8\u4e94\u8fde\u9633\u5251\u63072900 \u4e0b\u5468\u8fd0\u884c\u8def\u7ebf\u56fe\u5206\u6790",
"text": "== \u4eca\u65e5\u76d8\u9762\uff1a\u5927\u76d8\u559c\u89c1\u4e94\u8fde\u9633 \u6caa\u6307\u5251\u63072900\u70b9 ==\u5468\u4e94A\u80a1\u7ee7\u7eed\u9707\u8361\u4e0a\u884c\uff0c\u6caa\u6307\u54112900\u70b9\u8fdb\u519b\u3002\u53d7\u9996\u53eaIPO\u843d\u5730\u3001\u56fd\u9645\u6cb9\u4ef7\u7ee7\u7eed\u4e0a\u626c\u3001\u7f8e\u80a1\u9053\u6307\u5fae\u5e45\u6536\u9ad8\u7b49\u56e0\u7d20\u5f71\u54cd\uff0c\u5927\u76d8\u518d\u63a5\u518d\u5389\u53c8\u521b\u53cd\u5f39\u65b0\u9ad8\u3002\u4f46\u80a1\u6307\u4e0a\u884c\u52bf\u5934\u540c\u6bd4\u6628\u65e5\u6709\u6240\u6536\u655b\uff0c\u76d8\u4e2d\u6ce2\u52a8\u52a0\u5267\uff0c\u4e2a\u80a1\u4f9d\u65e7\u662f\u4e24\u6781\u5206\u5316\uff0c\u91d1\u878d\u548c\u751f\u7269\u5236\u836f\u677f\u5757\u7ee7\u7eed\u5145\u5f53\u5e02\u573a\u7684\u9886\u5934\u7f8a\u3002\u800c\u8d44\u6e90\u677f\u5757\u6210\u4e3a\u505a\u7a7a\u7684\u4e3b\u8981\u529b\u91cf\u3002\u622a\u81f3\u6536\u76d8\uff0c\u4e0a\u8bc1\u7efc\u6307\u62a52880.49\u70b9\uff0c\u4e0a\u6da80.93%\uff0c\u76d8\u4e2d\u521b\u51fa2886.50\u70b9\u65b0\u9ad8\uff0c\u6210\u4ea41535\u4ebf\uff1b\u6df1\u8bc1\u6210\u6307\u6536\u5e02\u62a511242.3\u70b9\uff0c\u4e0a\u6da80.81%\uff0c\u6210\u4ea4792.8\u4ebf\u3002\u4e24\u5e02\u5171\u6210\u4ea42327.8\u4ebf\u3002\u540c\u6bd4\u653e\u5927\u7ee7\u7eed\u653e\u5927\u3002== \u76d8\u9762\u5206\u6790\uff1a\u5e02\u573a\u70ed\u70b9\u7ee7\u7eed\u6d3b\u8dc3 \u8d44\u6e90\u7c7b\u677f\u5757\u518d\u6b21\u5012\u6208 
==\u6743\u91cd\u80a1\u4f9d\u7136\u8f83\u4e3a\u5f3a\u52bf\u76d8\u53e3\u663e\u793a\uff0c\u4e2a\u80a1\u5206\u5316\u7684\u8d8b\u52bf\u6ca1\u6709\u6539\u53d8\uff0c\u6743\u91cd\u80a1\u4f9d\u7136\u8f83\u4e3a\u5f3a\u52bf\u3002\u4e07\u79d1\u5927\u6da83.94%\uff0c\u4e2d\u4fe1\u8bc1\u5238\u3001\u6d66\u53d1\u94f6\u884c\u3001\u4e2d\u56fd\u5e73\u5b89\u3001\u4ea4\u901a\u94f6\u884c\u6da8\u5e45\u57281%\u4ee5\u4e0a\uff0c\u77f3\u5316\u53cc\u96c4\u5206\u9053\u626c\u9573\uff0c\u4e2d\u56fd\u77f3\u6cb9\u5fae\u5e45\u6536\u6da80.57%\uff0c\u800c\u4e2d\u56fd\u77f3\u5316\u5219\u4e0b\u8dcc0.48%\u3002\u4e2d\u56fd\u5357\u8f66\u5de8\u91cf\u5c01\u6b7b\u6da8\u505c \u521b\u51fa\u65b0\u9ad8\u4e2d\u56fd\u5357\u8f66\u51fa\u73b0\u660e\u663e\u5f02\u52a8\uff0c\u65e9\u76d8\u8be5\u80a1\u5de8\u91cf\u5c01\u6b7b\u6da8\u505c\uff0c\u521b\u51fa\u4e0a\u5e02\u4ee5\u6765\u7684\u65b0\u9ad8\uff1b\u6d77\u738b\u751f\u7269\u4e34\u8fd1\u6536\u76d8\u65f6\u75af\u72c2\u6253\u5f00\u6da8\u505c\uff0c\u6210\u4ea4\u91cf\u660e\u663e\u653e\u5927\u3002\u5728\u6362\u624b\u7387\u65b9\u9762\uff0c\u7d2b\u946b\u836f\u4e1a\u3001\u6069\u534e\u836f\u4e1a\u3001\u666e\u6d1b\u80a1\u4efd\u7b4990\u591a\u53ea\u4e2a\u80a1\u6362\u624b\u7387\u8d85\u8fc710%\uff0c\u540c\u6bd4\u6709\u6240\u589e\u52a0\u3002\u6574\u4f53\u6765\u770b\uff0c\u4e24\u5e02\u8fd1\u516d\u6210\u4e2a\u80a1\u4e0a\u6da8\uff0c\u6da8\u505c\u7684\u975eST\u4e2a\u80a119\u53ea\uff0c\u6da8\u5e45\u8d85\u8fc75%\u7684\u4e2a\u80a1\u8fd160\u53ea\uff0c\u8dcc\u5e45\u8d85\u8fc75%\u7684\u4e2a\u80a19\u53ea\uff0c\u672a\u89c1\u8dcc\u505c\u7684\u975eST\u4e2a\u80a1\u3002\u91d1\u878d\u548c\u516c\u8def\u6865\u6881\u7ee7\u7eed\u6da8\u5e45\u5c45\u524d\u4eca\u65e5\u5e02\u573a\u70ed\u70b9\u4ecd\u65e7\u8f83\u4e3a\u6d3b\u8dc3\uff0c\u91d1\u878d\u548c\u516c\u8def\u6865\u6881\u677f\u5757\u7ee7\u7eed\u4f4d\u5217\u6da8\u5e45\u699c\u524d\u5217\u3002\u91d1\u878d\u677f\u5757\u53d7\u5238\u5546\u548c\u4fdd\u9669\u80a1\u7684\u5e26\u52a8\u6da8\u52bf\u559c\u4eba\uff0c\u56fd\u91d1\u8bc1\u5238\u6da8\u505c\uff0c\u4e1c\u5317\u8bc1
\u5238\u3001\u897f\u5357\u8bc1\u5238\u3001\u5b8f\u6e90\u8bc1\u5238\u6da8\u5e45\u8d85\u8fc73%\uff0c\u4e2d\u56fd\u4eba\u5bff\u3001\u4e2d\u56fd\u5e73\u5b89\u3001\u4e2d\u56fd\u592a\u4fdd\u5927\u6da8\u8d85\u8fc71%\uff0c\u4e2d\u56fd\u94f6\u884c\u4e5f\u5927\u6da84.38%\uff0c\u5176\u4f59\u94f6\u884c\u80a1\u6da8\u5e45\u8d8b\u7f13\u3002\u516c\u8def\u6865\u6881\u677f\u5757\u5348\u540e\u5d1b\u8d77\uff0c\u5c71\u4e1c\u9ad8\u901f\u51b2\u51fb\u6da8\u505c\uff0c\u6df1\u9ad8\u901f\u3001\u5b81\u6caa\u9ad8\u901f\u3001\u8d63\u7ca4\u9ad8\u901f\u6da8\u5e45\u57283%\u4ee5\u4e0a\uff1bIPO\u9996\u5355\u82b1\u843d\u533b\u836f\u677f\u5757\uff0c\u751f\u7269\u533b\u836f\u677f\u5757\u53d7\u6b64\u63d0\u632f\u7ee7\u7eed\u6d3b\u8dc3\uff0c\u4f46\u4e34\u8fd1\u6536\u76d8\u6da8\u5e45\u6709\u6240\u6536\u7a84\uff0c\u7f8e\u7f57\u836f\u4e1a\u3001\u5929\u76ee\u836f\u4e1a\u3001\u5929\u575b\u751f\u7269\u7b4910\u4f59\u53ea\u4e2a\u80a1\u5c01\u6b7b\u6da8\u505c\u3002\u6b64\u5916\uff0c\u5730\u4ea7\u3001\u519c\u6797\u3001\u4ea4\u901a\u8fd0\u8f93\u7b49\u677f\u5757\u8868\u73b0\u4e5f\u76f8\u5bf9\u6d3b\u8dc3\u3002\u8d44\u6e90\u7c7b\u677f\u5757\u518d\u6b21\u5012\u6208\u8d44\u6e90\u7c7b\u677f\u5757\u518d\u6b21\u6389\u5934\u4e0b\u632b\u3002\u5e02\u573a\u4f20\u8a00\u7ba1\u7406\u5c42\u5c06\u4ecb\u5165\u7164\u70ad\u8c08\u5224\uff0c\u7535\u7164\u4ef7\u683c\u4ec5\u4ec5\u4e0a\u6da84%\uff0c\u6b64\u6d88\u606f\u538b\u5236\u7164\u70ad\u677f\u5757\u8d70\u4f4e\uff0c\u5e73\u5e84\u80fd\u6e90\u5927\u8dcc\u8d85\u8fc77%\uff0c\u9f99\u5934\u897f\u5c71\u7164\u7535\u3001\u4e2d\u7164\u80fd\u6e90\u4e5f\u9006\u52bf\u6536\u8dcc\uff0c\u4e2d\u56fd\u795e\u534e\u5c0f\u5e45\u6536\u7ea2\uff1b\u77f3\u6cb9\u677f\u5757\u8d70\u52bf\u4e5f\u5782\u5934\u4e27\u6c14\uff0c\u6dee\u6cb9\u80a1\u4efd\u3001\u9c81\u6da6\u80a1\u4efd\u3001\u8302\u534e\u5b9e\u4e1a\u8dcc\u5e45\u5c45\u524d\uff0c\u677f\u5757\u5185\u53ea\u6709\u4e2d\u56fd\u77f3\u6cb9\u5c0f\u5e45\u6536\u7ea2\uff1b\u6709\u8272\u677f\u5757\u5185\u4e2a\u80a1\u4e5f\u6089\u6570\u4e0b\u8dcc\uff0c\u6d77\u4eae\u80a1\u4efd
\u3001\u4e2d\u91d1\u5cad\u5357\u3001\u897f\u90e8\u8d44\u6e90\u8dcc\u5e45\u5c45\u524d\uff0c\u4e2d\u91d1\u9ec4\u91d1\u3001\u5c71\u4e1c\u9ec4\u91d1\u7b49\u8dcc\u5e45\u8d85\u8fc71%\u3002(\u4e2d\u8bc1\u6295\u8d44 \u5f20\u7d22\u6e05)== \u540e\u5e02\u5206\u6790\uff1a\u4e0b\u5468\u80fd\u5426\u653b\u51fb3000\u70b9 \u76d8\u5f80\u9ad8\u5904\u8d70\u94b1\u5411\u4f4e\u5904\u6d41 ==\u4e0b\u5468\u5927\u76d8\u8fd0\u884c\u8def\u7ebf\u56feIPO\u7b2c\u4e00\u5355\u6b63\u5f0f\u53d1\u5e03\uff0c\u5927\u76d8\u4eca\u5929\u7684\u5f00\u76d8\u4e5f\u5f02\u5e38\u5e73\u9759\u3002\u4eca\u5929\u5e02\u573a\u7684\u6838\u5fc3\u52a8\u529b\u4f9d\u7136\u662f\u84dd\u7b79\uff0c\u91d1\u878d\u677f\u5757\u4e0a\u5348\u8d70\u52bf\u5f3a\u52b2\uff0c\u5982\u4e2d\u884c\u3001\u5efa\u884c\u7b49\u51e0\u5927\u94f6\u884c\uff0c\u4ee5\u53ca\u5238\u5546\u80a1\uff0c\u5e26\u9886\u5927\u76d8\u4e0a\u5348\u9707\u8361\u4e0a\u884c\uff1b\u4e0b\u5348\u4e2d\u77f3\u6cb9\u3001\u4e09\u5927\u94f6\u884c\u5219\u7ee7\u7eed\u53d1\u529b\uff0c\u5927\u76d8\u518d\u6b21\u521b\u51fa\u65b0\u9ad8\u3002\u84dd\u7b79\u677f\u5757\u7684\u5f3a\u52bf\u4e5f\u4f7f\u5f975\u6708\u4efd\u4e4b\u524d\u66fe\u5927\u5e45\u7092\u4f5c\u7684\u9898\u6750\u677f\u5757\u518d\u6b21\u9006\u52bf\u4e0b\u8dcc\uff0c\u5176\u4e2d\u56de\u843d\u6700\u660e\u663e\u7684\u5c31\u662f\u65b0\u80fd\u6e90\u6982\u5ff5\u3002\u8b6c\u5982\u91d1\u98ce\u79d1\u6280\u3001\u98ce\u5e06\u80a1\u4efd\u7b49\u524d\u671f\u7684\u65b0\u80fd\u6e90\u9f99\u5934\u4e2a\u80a1\u4eca\u5929\u9006\u52bf\u4e0b\u8dcc\uff0c\u5176\u5b83\u8fd8\u6709\u7279\u53d8\u7535\u5de5\uff0c\u4e5f\u540c\u6837\u8d70\u51fa\u7834\u4f4d\u4e0b\u8dcc\u7684\u8d70\u52bf\u3002\u6628\u5929\u6709\u4fe1\u606f\u663e\u793a\uff0c\u7ba1\u7406\u5c42\u53ef\u80fd\u5c06\u65b0\u80fd\u6e90\u7684\u632f\u5174\u89c4\u5212\u89c4\u6a21\u6269\u5927\u4e00\u500d\uff0c\u6628\u65e5\u65b0\u80fd\u6e90\u677f\u5757\u4e00\u5ea6\u5f3a\u52bf\u53cd\u5f39\u3002\u4e0d\u8fc7\u6211\u5efa\u8bae\uff0c\u7ecf\u6d4e\u590d\u82cf\u3001\u84dd\u7b79\u4e3a\u738b\u7684\u80cc\u666f\u4e0b\uff0c
\u65b0\u80fd\u6e90\u9898\u6750\u7684\u7092\u4f5c\u5f88\u96be\u6301\u7eed\uff0c\u5e94\u5f53\u501f\u52a9\u5229\u597d\u7684\u53d1\u5e03\u53cd\u5f39\u51cf\u8f7b\u4ed3\u4f4d\u3002\u540e\u5e02\u8fd9\u4e00\u7b56\u7565\u4f9d\u7136\u4e0d\u53d8\uff0c\u9ad8\u629b\u9898\u6750(\u524d\u51e0\u4e2a\u6708\u5927\u5e45\u7092\u4f5c\u7684)\uff0c\u4f4e\u4e70\u84dd\u7b79\u3002\u6628\u65e5\u5927\u76d8\u521b\u65b0\u9ad8\u800c\u51fa\u73b0\u8865\u6da8\u7684\u7164\u70ad\u3001\u6709\u8272\u91d1\u5c5e\u677f\u5757\u4eca\u5929\u6574\u4f53\u5c0f\u5e45\u56de\u843d\uff0c\u4e3b\u8981\u539f\u56e0\u5728\u4e8e\u5168\u7403\u539f\u6cb9\u3001\u6709\u8272\u91d1\u5c5e\u671f\u8d27\u4ef7\u683c\u8d70\u52bf\u5e73\u6de1\uff0c\u8fd9\u4e24\u4e2a\u84dd\u7b79\u677f\u5757\u4f9d\u7136\u53ef\u4ee5\u7ee7\u7eed\u6301\u6709\uff0c\u5176\u8d70\u52bf\u5c06\u5728\u5f88\u5927\u7a0b\u5ea6\u4e0a\u53d7\u5230\u5546\u54c1\u671f\u8d27\u4ef7\u683c\u7684\u5f71\u54cd\u3002\u5bf9\u4e8e\u7f8e\u56fd\u7684\u7ecf\u6d4e\u6570\u636e\uff0c\u6211\u8ba4\u4e3a\u867d\u7136\u90e8\u5206\u7ecf\u6d4e\u6570\u636e\u4e0d\u4e50\u89c2\uff0c\u4f46\u662f\u81ea4\u6708\u4efd\u4ee5\u6765\uff0c\u6574\u4f53\u7684\u7ecf\u6d4e\u6570\u636e\u503e\u5411\u4e8e\u8fdb\u4e00\u6b65\u597d\u8f6c\u3002\u56e0\u6b64\uff0c\u7f8e\u56fd\u80a1\u5e02\u7684\u9707\u8361\u4e0a\u884c\u7684\u901a\u9053\u4f9d\u7136\u4f1a\u7ef4\u6301\u4e0b\u53bb\u3002\u867d\u7136\u7f8e\u56fd\u80a1\u5e02\u7684\u8d70\u52bf\u5bf9\u4e8eA\u80a1\u6ca1\u6709\u51b3\u5b9a\u6027\u5f71\u54cd\uff0c\u4f46\u662f\u7f8e\u80a1\u8d70\u52bf\u4f1a\u5f71\u54cd\u5168\u7403\uff0c\u8fdb\u800c\u5728\u4e00\u5b9a\u9636\u6bb5\u5185\u5bf9A\u80a1\u5927\u76d8\u4ea7\u751f\u4f5c\u7528\u3002\u56e0\u6b64\u6211\u4eec\u5728\u5206\u6790\u65f6\uff0c\u5e94\u8be5\u5c06\u7f8e\u56fd\u7ecf\u6d4e\u7684\u8d70\u52bf\u548c\u80a1\u5e02\u8d70\u52bf\u4f5c\u4e3a\u4e00\u4e2a\u53c2\u8003\u4f9d\u636e\u3002\u9700\u8981\u8865\u5145\u7684\u4e00\u70b9\u662f\uff0c\u76ee\u524d\u4e2d\u56fd\u7ecf\u6d4e\u4e3b\u8981\u4f9d\u9760\u5927\u529b\u6269\u5927\u6295\u8d44(\u56fa\u
5b9a\u8d44\u4ea7\u6295\u8d44)\u548c\u7a33\u5065\u7684\u6d88\u8d39\u523a\u6fc0\uff0c\u800c\u6295\u8d44\u7684\u589e\u957f\u662f\u6709\u9650\u5ea6\u7684\uff0c\u8fc7\u5206\u7684\u589e\u957f\u5c06\u5e26\u6765\u7ecf\u6d4e\u8fc7\u70ed\u901a\u80c0\u7b49\u95ee\u9898\uff0c\u672a\u6765\u8981\u8fdb\u4e00\u6b65\u589e\u957f\uff0c\u8fd8\u9700\u8981\u501f\u52a9\u51fa\u53e3\u3002\u800c\u51fa\u53e3\u7684\u589e\u957f\u5728\u672c\u8d28\u4e0a\u53d6\u51b3\u4e8e\u7f8e\u56fd\u7ecf\u6d4e\u7684\u590d\u82cf\u8fdb\u7a0b\u3002\u4e0b\u5468\u9884\u8ba1\u5927\u76d8\u57282800-2900\u533a\u95f4\u9ad8\u4f4d\u5f3a\u52bf\u9707\u8361\uff0c\u4e50\u89c2\u7684\u8bdd\u57282800-2950\u4e00\u5e26\u9707\u8361\u3002\u4f55\u65f6\u653b\u78342900-3000\u533a\u95f4\u7684\u8d85\u5f3a\u963b\u529b\uff0c\u6211\u4e2a\u4eba\u8ba4\u4e3a\u9700\u8981\u4f9d\u8d56\u4e8e7\u6708\u4e0a\u4e2d\u65ec\u7684\u7ecf\u6d4e\u6570\u636e\u3002 (\u4f55\u7fa4\u8363)\u4e0b\u5468\u76d8\u5f80\u9ad8\u5904\u8d70 \u94b1\u5411\u4f4e\u5904\u6d41\u5927\u76d8\u4e0a\u6da8\u4e00\u5468\u4e4b\u540e\uff0c\u6295\u8d44\u8005\u5173\u5fc3\u7684\u662f\uff0c\u4e0b\u5468\u884c\u60c5\u53c8\u4f1a\u5982\u4f55\uff1f\u6211\u7684\u89c2\u70b9\u662f\uff1a\u5927\u76d8\u5f80\u9ad8\u5904\u8d70\uff0c\u8d44\u91d1\u5411\u4f4e\u5904\u6d41\u3002\u8fd1\u671f\u5efa\u8bbe\u94f6\u884c\uff0c\u5de5\u5546\u94f6\u884c\uff0c\u4e2d\u56fd\u94f6\u884c\u4e3a\u4ee3\u8868\u7684\u4f4e\u4ef7\u56fd\u6709\u94f6\u884c\u51fa\u73b0\u5168\u9762\u8865\u6da8\uff0c\u539f\u56e0\u662f\u65b0\u589e\u8d44\u91d1\u5927\u4e3e\u6d41\u5165\uff0c\u6210\u4e3a\u63a8\u52a8\u5927\u76d8\u5411\u4e0a\u7684\u4e3b\u8981\u529b\u91cf\u3002\u8fd9\u4e9b\u94f6\u884c\u80a1\u7968\uff0c\u4e5f\u662f\u4f4e\u4ef7\u80a1\uff0c\u4e5f\u662f\u4f4e\u5e02\u76c8\u7387\u80a1\uff0c\u6d41\u5165\u8fd9\u4e9b\u80a1\u7968\u4e2d\u7684\u8d44\u91d1\uff0c\u5c31\u5176\u9009\u80a1\u98ce\u683c\u4e0e\u8d44\u91d1\u5b9e\u529b\u6765\u770b\uff0c\u5e94\u662f\u4ee5\u673a\u6784\u8d44\u91d1\u4e3a\u4e3b\uff0c\u5f15\u53d1\u6e38\u8d44\u8ddf\u98ce\u6548\u5e94\u3002\u4e2d\
u56fd\u5357\u8f66(601766)\uff0c\u4eca\u5929\u7a81\u7136\u5de8\u91cf\u8d44\u91d1\u4ecb\u5165\u5c01\u6b7b\u6da8\u505c\u3002\u56db\u5143\u591a\u7684\u4e2d\u56fd\u5357\u8f66\uff0c\u4e00\u76f4\u5f88\u5c11\u8fdb\u5165\u6295\u8d44\u8005\u7684\u89c6\u91ce\uff0c\u73b0\u5728\u8865\u6da8\uff0c\u5f3a\u529b\u8d44\u91d1\u4ecb\u5165\uff0c\u5e26\u52a8\u94c1\u8def\uff0c\u9ad8\u901f\u516c\u8def\uff0c\u673a\u573a\u7b49\u76f8\u5173\u80a1\u7968\u8054\u52a8\u4e0a\u6da8\uff0c\u8054\u52a8\u8865\u6da8\u3002\u8fd9\u4e9b\u80a1\u7968\uff0c\u5927\u591a\u662f\u6b64\u524d\u6da8\u5e45\u6ede\u540e\u7684\u4e8c\u7ebf\u4f4e\u4ef7\u84dd\u7b79\u54c1\u79cd\uff0c\u8fd9\u4e9b\u80a1\u7968\u8865\u6da8\uff0c\u4e0e\u94f6\u884c\u677f\u5757\u7684\u8865\u6da8\u4e00\u6837\uff0c\u663e\u793a\u65b0\u589e\u201c\u8d44\u91d1\u5411\u4f4e\u5904\u6d41\u201d\u7684\u8d8b\u5411\u4ecd\u5728\u7ee7\u7eed\uff0c\u5728\u6269\u6563\u3002\u518d\u770b\u770b\u76d8\u9762\uff1a\u8fde\u7eed\u6da8\u505c\u7684\u80a1\u7968\uff0c\u4e3b\u8981\u662f\u4f4e\u4ef7\u80a1\uff1b\u91cd\u5927\u8d44\u4ea7\u91cd\u7ec4\u7684\u80a1\u7968\uff0c\u5927\u591a\u662f\u4f4e\u4ef7\u80a1\uff1b\u6bcf\u5929\u76d8\u9762\u6da8\u5e45\u9760\u524d\u7684\u80a1\u7968\uff0c\u5927\u591a\u662f\u4f4e\u4ef7\u80a1\uff1b\u6bcf\u5929\u6210\u4ea4\u91cf\u6392\u540d\u524d\u51e0\u4f4d\uff0c\u6e05\u4e00\u8272\u7684\u4f4e\u4ef7\u80a1\u3002\u4f4e\u4ef7\u8865\u6da8\uff0c\u6210\u4e3a\u8fd1\u671f\u8d44\u91d1\u7684\u4e3b\u8981\u6d41\u5411\u3002\u6211\u8ba4\u4e3a\uff1a\u4e0b\u5468\u5927\u76d8\uff0c\u4ecd\u7136\u4f1a\u4ee5\u5411\u4e0a\u9707\u8361\u4e3a\u4e3b\u3002\u9009\u80a1\u65b9\u9762\uff0c\u4e0d\u59a8\u91cd\u70b9\u8003\u8651\u4f4e\u4ef7\uff0c\u8865\u6da8\u3002\u4e00\u76f4\u6bd4\u8f83\u770b\u597d\u6caa\u6df1\u4e24\u5e02\u672c\u5730\u4f4e\u4ef7\u80a1\uff0c\u56e0\u4e3a\u5b83\u4eec\u80a1\u6027\u6d3b\uff0c\u9898\u6750\u4e30\u5bcc\uff0c\u5efa\u8bae\u5728\u8be6\u7ec6\u7814\u7a76\u516c\u53f8\u76f8\u5173\u8d44\u6599\u7684\u57fa\u7840\u4e0a\uff0c\u9002\u5f53\u8ddf\u8e2a\u3001\u5173\u6ce8\u3002(\u53f6
\u5f18)"
}
]
```
### Dataset Fields
The dataset has the following fields (also called "features"):
```json
{
"target": "Value(dtype='string', id=None)",
"text": "Value(dtype='string', id=None)"
}
```
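The schema above declares two string-valued features. A small, dependency-free sketch (using hypothetical sample values, not rows from the dataset) of checking a record against that declared schema:

```python
# Expected fields and Python types, mirroring the card's feature schema.
schema = {"target": str, "text": str}

# A made-up sample record with the same shape as the dataset's rows.
sample = {
    "target": "example summary headline",
    "text": "example article body",
}

def matches_schema(record, schema):
    """Return True if the record has exactly the declared fields and types."""
    if set(record) != set(schema):
        return False
    return all(isinstance(record[name], tp) for name, tp in schema.items())

print(matches_schema(sample, schema))  # -> True
```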
### Dataset Splits
This dataset is split into train and validation splits. The split sizes are as follows:
| Split name | Num samples |
| ------------ | ------------------- |
| train | 151575 |
| valid | 37894 |
| 17,323 | [
[
-0.05914306640625,
-0.0306396484375,
0.0213623046875,
-0.0120849609375,
0.0011157989501953125,
0.01105499267578125,
0.01456451416015625,
-0.01068115234375,
0.0657958984375,
0.030181884765625,
-0.04052734375,
-0.0240325927734375,
-0.05633544921875,
-0.0121994... |
vsarathy/nl-robotics-semantic-parsing-info_structure-2k-context-TEST | 2023-10-07T12:32:38.000Z | [
"region:us"
] | vsarathy | null | null | 0 | 3 | 2023-10-07T12:32:11 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
Falah/cyberpunk_photo_prompts2 | 2023-10-07T14:32:19.000Z | [
"region:us"
] | Falah | null | null | 0 | 3 | 2023-10-07T14:32:18 | ---
dataset_info:
features:
- name: prompts
dtype: string
splits:
- name: train
num_bytes: 213569
num_examples: 1000
download_size: 24078
dataset_size: 213569
---
# Dataset Card for "cyberpunk_photo_prompts2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 365 | [
[
-0.04229736328125,
-0.0243988037109375,
0.0256195068359375,
0.0256195068359375,
-0.024261474609375,
0.003467559814453125,
0.024993896484375,
-0.01226043701171875,
0.0418701171875,
0.0163421630859375,
-0.0789794921875,
-0.03167724609375,
-0.0287017822265625,
... |
meta-math/GSM8K_Backward | 2023-10-25T09:00:47.000Z | [
"license:cc-by-nc-4.0",
"arxiv:2309.12284",
"region:us"
] | meta-math | null | null | 6 | 3 | 2023-10-07T14:43:40 | ---
license: cc-by-nc-4.0
configs:
- config_name: default
data_files:
- split: test
path: "GSM8K_Backward.jsonl"
---
arxiv.org/abs/2309.12284
View the project page:
https://meta-math.github.io/
| 206 | [
[
-0.052032470703125,
-0.0247650146484375,
0.030487060546875,
0.00809478759765625,
-0.0157623291015625,
0.0047607421875,
0.0095367431640625,
-0.01117706298828125,
0.060089111328125,
0.057281494140625,
-0.052734375,
-0.046112060546875,
-0.0021190643310546875,
0... |
carnival13/massive_5_lang_DA4_tokenized | 2023-10-07T16:07:09.000Z | [
"region:us"
] | carnival13 | null | null | 0 | 3 | 2023-10-07T16:06:59 | ---
dataset_info:
features:
- name: pass_label
dtype: int64
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
splits:
- name: train
num_bytes: 519317955
num_examples: 705250
download_size: 162988938
dataset_size: 519317955
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "massive_5_lang_DA4_tokenized"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 553 | [
[
-0.039825439453125,
-0.030517578125,
0.014190673828125,
0.02252197265625,
-0.017181396484375,
0.01200103759765625,
-0.0036163330078125,
-0.0165863037109375,
0.059722900390625,
0.035125732421875,
-0.0430908203125,
-0.06329345703125,
-0.0469970703125,
0.007774... |
towhid/aesir-test69 | 2023-10-07T18:20:02.000Z | [
"region:us"
] | towhid | null | null | 0 | 3 | 2023-10-07T16:46:11 | ---
dataset_info:
features:
- name: instruction
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 22114
num_examples: 10
download_size: 28277
dataset_size: 22114
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "aesir-test69"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 474 | [
[
-0.04595947265625,
-0.004123687744140625,
0.002758026123046875,
0.002216339111328125,
-0.01556396484375,
-0.00641632080078125,
0.0271148681640625,
-0.0148468017578125,
0.054412841796875,
0.0269927978515625,
-0.042022705078125,
-0.042327880859375,
-0.0380859375,
... |
jung1230/patient_info_and_summary | 2023-10-07T19:34:21.000Z | [
"region:us"
] | jung1230 | null | null | 0 | 3 | 2023-10-07T19:33:58 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
daishen/legal-js | 2023-10-14T07:37:13.000Z | [
"region:us"
] | daishen | null | null | 0 | 3 | 2023-10-08T01:19:34 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
layoric/tiny-codes-alpaca | 2023-10-08T02:28:04.000Z | [
"region:us"
] | layoric | null | null | 0 | 3 | 2023-10-08T02:25:40 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: instruction
dtype: string
- name: main_topic
dtype: string
- name: subtopic
dtype: string
- name: adjective
dtype: string
- name: action_verb
dtype: string
- name: scenario
dtype: string
- name: target_audience
dtype: string
- name: programming_language
dtype: string
- name: common_sense_topic
dtype: string
- name: idx
dtype: int64
- name: output
dtype: string
splits:
- name: train
num_bytes: 3795436393
num_examples: 1632309
download_size: 1642754203
dataset_size: 3795436393
---
# Dataset Card for "tiny-codes-alpaca"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 861 | [
[
-0.053466796875,
-0.0194549560546875,
0.019805908203125,
0.018524169921875,
-0.0305633544921875,
-0.01508331298828125,
0.0144500732421875,
-0.01611328125,
0.075439453125,
0.0240478515625,
-0.052154541015625,
-0.048187255859375,
-0.04071044921875,
-0.00943756... |
Linyuyu/songmeirong | 2023-10-12T10:14:07.000Z | [
"region:us"
] | Linyuyu | null | null | 0 | 3 | 2023-10-08T09:59:43 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
hk-kaden-kim/uzh-hs23-etsp-eval-single-notitle-line | 2023-10-08T10:54:33.000Z | [
"region:us"
] | hk-kaden-kim | null | null | 0 | 3 | 2023-10-08T10:46:52 | ---
dataset_info:
features:
- name: image
dtype: image
- name: caption
dtype: string
splits:
- name: test
num_bytes: 2890126.0
num_examples: 100
download_size: 2878288
dataset_size: 2890126.0
---
# Dataset Card for "uzh-hs23-etsp-eval-single-notitle-line"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 418 | [
[
-0.033477783203125,
-0.034759521484375,
0.00627899169921875,
0.0226593017578125,
-0.0216522216796875,
0.0126190185546875,
0.01364898681640625,
-0.0003879070281982422,
0.052734375,
0.0477294921875,
-0.05023193359375,
-0.054107666015625,
-0.01222991943359375,
... |
Intuit-GenSRF/jquiros-suicide-es | 2023-10-08T15:39:21.000Z | [
"region:us"
] | Intuit-GenSRF | null | null | 0 | 3 | 2023-10-08T15:39:09 | ---
dataset_info:
features:
- name: text
dtype: string
- name: labels
sequence: string
- name: processed_text
sequence: string
- name: num_tokens
dtype: int64
- name: text_es
dtype: string
splits:
- name: train
num_bytes: 434028422
num_examples: 230832
download_size: 266158998
dataset_size: 434028422
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "jquiros-suicide-es"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 612 | [
[
-0.0290374755859375,
-0.0186614990234375,
0.028045654296875,
0.017974853515625,
-0.0012159347534179688,
0.0134429931640625,
0.00847625732421875,
0.0048980712890625,
0.0626220703125,
0.0206146240234375,
-0.068115234375,
-0.059722900390625,
-0.045654296875,
-0... |
fudong03/Tiny-CropNet | 2023-10-10T06:53:58.000Z | [
"region:us"
] | fudong03 | null | null | 2 | 3 | 2023-10-08T17:48:23 | Entry not found | 15 | [
[
-0.0214080810546875,
-0.01494598388671875,
0.05718994140625,
0.028839111328125,
-0.0350341796875,
0.0465087890625,
0.052520751953125,
0.005077362060546875,
0.051361083984375,
0.016998291015625,
-0.05206298828125,
-0.01497650146484375,
-0.06036376953125,
0.03... |
minh21/COVID-QA-Chunk-64-question-answering-biencoder-data-65_25_10-v2 | 2023-10-09T03:48:25.000Z | [
"region:us"
] | minh21 | null | null | 0 | 3 | 2023-10-09T03:48:22 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
dataset_info:
features:
- name: question
dtype: string
- name: answer
dtype: string
- name: context_chunks
sequence: string
- name: document_id
dtype: int64
- name: id
dtype: int64
splits:
- name: train
num_bytes: 50185273
num_examples: 1176
- name: validation
num_bytes: 4744842
num_examples: 134
download_size: 13948442
dataset_size: 54930115
---
# Dataset Card for "COVID-QA-Chunk-64-question-answering-biencoder-data-65_25_10-v2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 764 | [
[
-0.03729248046875,
-0.026214599609375,
0.0051727294921875,
0.0153350830078125,
-0.0218353271484375,
-0.006366729736328125,
0.03948974609375,
-0.01435089111328125,
0.043792724609375,
0.020751953125,
-0.051605224609375,
-0.03460693359375,
-0.03460693359375,
-0... |
Rahi11Anurag/d | 2023-10-09T05:22:00.000Z | [
"region:us"
] | Rahi11Anurag | null | null | 0 | 3 | 2023-10-09T05:21:42 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
aditya998/wikiData | 2023-10-10T07:03:29.000Z | [
"region:us"
] | aditya998 | null | null | 0 | 3 | 2023-10-09T06:18:28 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
ShoukanLabs/LAION-DallE-3-Local | 2023-10-09T07:14:08.000Z | [
"region:us"
] | ShoukanLabs | null | null | 1 | 3 | 2023-10-09T06:59:12 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: image
dtype: image
- name: url
dtype: string
- name: prompt
dtype: string
splits:
- name: train
num_bytes: 1531813332.75
num_examples: 1250
download_size: 1176337783
dataset_size: 1531813332.75
---
# Dataset Card for "LAION-DallE-3-Local"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 529 | [
[
-0.0249176025390625,
-0.01800537109375,
0.0294647216796875,
0.0175628662109375,
-0.0018873214721679688,
-0.005100250244140625,
0.025787353515625,
-0.005443572998046875,
0.058868408203125,
0.048675537109375,
-0.03118896484375,
-0.06158447265625,
-0.02217102050781... |
boundless-asura/summary | 2023-10-09T12:05:18.000Z | [
"region:us"
] | boundless-asura | null | null | 0 | 3 | 2023-10-09T11:47:15 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
qazisaad/news_recommendations_base | 2023-10-09T13:53:49.000Z | [
"region:us"
] | qazisaad | null | null | 0 | 3 | 2023-10-09T13:53:05 | ---
dataset_info:
features:
- name: category
dtype: string
- name: sub-category
dtype: string
- name: title
dtype: string
- name: times
dtype: timestamp[ns]
- name: url
dtype: string
splits:
- name: train
num_bytes: 1561817
num_examples: 3981
download_size: 742112
dataset_size: 1561817
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "news_recommendations_base"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 604 | [
[
-0.03887939453125,
-0.020233154296875,
0.0228424072265625,
0.021453857421875,
-0.0186309814453125,
-0.01256561279296875,
0.00734710693359375,
-0.00038242340087890625,
0.06732177734375,
0.042388916015625,
-0.06414794921875,
-0.0703125,
-0.036651611328125,
-0.... |
W1lson/testt | 2023-10-09T14:55:31.000Z | [
"region:us"
] | W1lson | null | null | 0 | 3 | 2023-10-09T14:22:42 | ---
dataset_info:
features:
- name: Category
dtype: string
- name: Description
dtype: string
splits:
- name: train
num_bytes: 4499
num_examples: 100
download_size: 3168
dataset_size: 4499
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "testt"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 467 | [
[
-0.0364990234375,
-0.0188446044921875,
0.00978851318359375,
0.0042877197265625,
-0.0162200927734375,
0.00452423095703125,
0.0173492431640625,
-0.00559234619140625,
0.045989990234375,
0.016357421875,
-0.05316162109375,
-0.045623779296875,
-0.03680419921875,
-... |
dreamproit/bill_text_us | 2023-10-16T12:06:51.000Z | [
"task_categories:text-generation",
"task_categories:text-classification",
"size_categories:100K<n<1M",
"language:en",
"legal",
"bills",
"region:us"
] | dreamproit | null | null | 1 | 3 | 2023-10-09T17:02:16 | ---
task_categories:
- text-generation
- text-classification
language:
- en
tags:
- legal
- bills
pretty_name: bill_text_us
size_categories:
- 100K<n<1M
---
# Dataset Card for "bill_text_us"
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [BillML](https://github.com/dreamproit/BillML)
- **Repository:** [BillML](https://github.com/dreamproit/BillML)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Leaderboard:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Dataset Summary
Dataset for US Congressional bills (bill_text_us).
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
English
## Dataset Structure
### Data Instances
#### default
### Data Fields
- id: id of the bill, in the format (congress number + bill type + bill number + bill version).
- congress: number of the congress.
- bill_type: type of the bill.
- bill_number: number of the bill.
- bill_version: version of the bill.
- title: official title of the bill.
- sections: list of bill sections with section_id, text and header.
- sections_length: length of the sections list.
- text: bill text.
- text_length: number of characters in the text.
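As the field list notes, `id` concatenates the other identifier fields. A minimal sketch of composing it, assuming plain concatenation with no separators and using a hypothetical record (the card shows no concrete example):

```python
def make_bill_id(congress: int, bill_type: str, bill_number: int, bill_version: str) -> str:
    """Compose a bill id as congress number + bill type + bill number + bill version.

    The exact concatenation (no separators) is an assumption based on the
    field description above.
    """
    return f"{congress}{bill_type}{bill_number}{bill_version}"

# Hypothetical record illustrating the documented identifier fields.
record = {
    "congress": 117,
    "bill_type": "hr",
    "bill_number": 3076,
    "bill_version": "ih",
}
record["id"] = make_bill_id(**record)  # e.g. "117hr3076ih"
```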
### Data Splits
train
## Dataset Creation
### Curation Rationale
Bills (proposed laws) are specialized, structured documents with great public significance.
Often, the language of a bill may not directly explain the potential impact of the legislation.
This dataset collects the text of bills along with some metadata; in addition to the full bill text, it provides the text as a list of sections, each with its header.
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
[govinfo.gov](https://www.govinfo.gov/)
#### Initial Data Collection and Normalization
The data consists of the US congress bills that were collected from the [govinfo.gov](https://www.govinfo.gov/) service provided by the United States Government Publishing Office (GPO) under CC0-1.0 license.
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[dreamproit.com](https://dreamproit.com/)
### Licensing Information
Bill and summary information are public and unlicensed, as they are produced by government entities. The collection and enhancement work that we provide for this dataset, to the degree it may be covered by copyright, is released under [CC0](https://creativecommons.org/share-your-work/public-domain/cc0/).
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@aih](https://github.com/aih) [@BorodaUA](https://github.com/BorodaUA), [@alexbojko](https://github.com/alexbojko) for adding this dataset. | 4,820 | [
[
-0.0338134765625,
-0.043914794921875,
0.005825042724609375,
0.01265716552734375,
-0.039764404296875,
0.002361297607421875,
-0.01617431640625,
-0.025390625,
0.043548583984375,
0.058197021484375,
-0.036224365234375,
-0.07940673828125,
-0.0443115234375,
-0.0020... |
ilyas3141/ilias_test5 | 2023-10-09T17:53:13.000Z | [
"region:us"
] | ilyas3141 | null | null | 0 | 3 | 2023-10-09T17:52:58 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
ilyas3141/ilias_test12 | 2023-10-09T19:18:41.000Z | [
"region:us"
] | ilyas3141 | null | null | 0 | 3 | 2023-10-09T19:18:27 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
ArmelRandy/oa_lima_strat_qcm | 2023-10-09T20:43:48.000Z | [
"region:us"
] | ArmelRandy | null | null | 0 | 3 | 2023-10-09T20:43:32 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: prompt
dtype: string
splits:
- name: train
num_bytes: 24587836.604066804
num_examples: 18828
- name: test
num_bytes: 1294165.3959331955
num_examples: 991
download_size: 16177809
dataset_size: 25882002.0
---
# Dataset Card for "oa_lima_strat_qcm"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 572 | [
[
-0.0283050537109375,
-0.034515380859375,
0.0254058837890625,
0.01349639892578125,
-0.039215087890625,
-0.0027942657470703125,
0.031097412109375,
-0.012786865234375,
0.058197021484375,
0.050628662109375,
-0.03802490234375,
-0.0657958984375,
-0.0465087890625,
... |
ariesta/forensic-datasets-tuning | 2023-10-09T23:22:20.000Z | [
"region:us"
] | ariesta | null | null | 0 | 3 | 2023-10-09T23:21:44 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
princeton-nlp/SWE-bench_bm25_13k_cl100k | 2023-10-10T19:31:31.000Z | [
"region:us"
] | princeton-nlp | null | null | 1 | 3 | 2023-10-10T04:09:27 | ---
dataset_info:
features:
- name: base_commit
dtype: string
- name: hints_text
dtype: string
- name: created_at
dtype: string
- name: test_patch
dtype: string
- name: repo
dtype: string
- name: problem_statement
dtype: string
- name: version
dtype: string
- name: instance_id
dtype: string
- name: FAIL_TO_PASS
dtype: string
- name: PASS_TO_PASS
dtype: string
- name: environment_setup_commit
dtype: string
- name: text
dtype: string
- name: input_ids
sequence: int32
- name: labels
sequence: int64
- name: patch
dtype: string
splits:
- name: test
num_bytes: 276234065
num_examples: 2294
download_size: 113943225
dataset_size: 276234065
configs:
- config_name: default
data_files:
- split: test
path: data/test-*
---
### Dataset Summary
SWE-bench is a dataset that tests systems’ ability to solve GitHub issues automatically. The dataset collects 2,294 Issue-Pull Request pairs from 12 popular Python repositories. Evaluation is performed by unit-test verification, using post-PR behavior as the reference solution.
### Supported Tasks and Leaderboards
SWE-bench proposes a new task: issue resolution, given a full repository and a GitHub issue. The leaderboard can be found at www.swebench.com
### Languages
The text of the dataset is primarily English, but we make no effort to filter or otherwise clean based on language type.
## Dataset Structure
### Data Instances
An example of a SWE-bench datum is as follows:
```
instance_id: (str) - A formatted instance identifier, usually as repo_owner__repo_name-PR-number.
patch: (str) - The gold patch, the patch generated by the PR (minus test-related code), that resolved the issue.
repo: (str) - The repository owner/name identifier from GitHub.
base_commit: (str) - The commit hash of the repository representing the HEAD of the repository before the solution PR is applied.
hints_text: (str) - Comments made on the issue prior to the creation of the solution PR’s first commit creation date.
created_at: (str) - The creation date of the pull request.
test_patch: (str) - A test-file patch that was contributed by the solution PR.
problem_statement: (str) - The issue title and body.
version: (str) - Installation version to use for running evaluation.
environment_setup_commit: (str) - commit hash to use for environment setup and installation.
FAIL_TO_PASS: (str) - A json list of strings that represent the set of tests resolved by the PR and tied to the issue resolution.
PASS_TO_PASS: (str) - A json list of strings that represent tests that should pass before and after the PR application.
text: (str) - The generated text according to the retrieval criterion and the style-2 prompt found in [github:SWE-bench](https://github.com/princeton-nlp/SWE-bench).
input_ids: (List[int]) - The cl100k_base tokens for each text.
```
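Note that `FAIL_TO_PASS` and `PASS_TO_PASS` are JSON-encoded strings rather than native lists, so they need decoding before use. A minimal sketch with a hypothetical instance (the field values below are illustrative, not drawn from the dataset):

```python
import json

# Hypothetical instance carrying the string-typed test fields described above.
instance = {
    "instance_id": "repo_owner__repo_name-1234",
    "FAIL_TO_PASS": '["test_a", "test_b"]',
    "PASS_TO_PASS": '["test_c"]',
}

# Decode the JSON lists of test identifiers.
fail_to_pass = json.loads(instance["FAIL_TO_PASS"])  # tests the patch must make pass
pass_to_pass = json.loads(instance["PASS_TO_PASS"])  # tests that must keep passing
```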
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 3,024 | [
[
-0.038970947265625,
-0.03619384765625,
0.0206146240234375,
0.0266876220703125,
0.005523681640625,
-0.00786590576171875,
-0.0271453857421875,
-0.0157318115234375,
0.0226593017578125,
0.0311737060546875,
-0.055755615234375,
-0.046417236328125,
-0.018157958984375,
... |
princeton-nlp/SWE-bench_bm25_27k_cl100k | 2023-10-10T19:32:10.000Z | [
"region:us"
] | princeton-nlp | null | null | 0 | 3 | 2023-10-10T04:09:43 | ---
dataset_info:
features:
- name: base_commit
dtype: string
- name: hints_text
dtype: string
- name: created_at
dtype: string
- name: test_patch
dtype: string
- name: repo
dtype: string
- name: problem_statement
dtype: string
- name: version
dtype: string
- name: instance_id
dtype: string
- name: FAIL_TO_PASS
dtype: string
- name: PASS_TO_PASS
dtype: string
- name: environment_setup_commit
dtype: string
- name: text
dtype: string
- name: input_ids
sequence: int32
- name: labels
sequence: int64
- name: patch
dtype: string
splits:
- name: test
num_bytes: 541825176
num_examples: 2294
download_size: 235069451
dataset_size: 541825176
configs:
- config_name: default
data_files:
- split: test
path: data/test-*
---
### Dataset Summary
SWE-bench is a dataset that tests systems’ ability to solve GitHub issues automatically. The dataset collects 2,294 Issue-Pull Request pairs from 12 popular Python repositories. Evaluation is performed by unit-test verification, using post-PR behavior as the reference solution.
### Supported Tasks and Leaderboards
SWE-bench proposes a new task: issue resolution, given a full repository and a GitHub issue. The leaderboard can be found at www.swebench.com
### Languages
The text of the dataset is primarily English, but we make no effort to filter or otherwise clean based on language type.
## Dataset Structure
### Data Instances
An example of a SWE-bench datum is as follows:
```
instance_id: (str) - A formatted instance identifier, usually as repo_owner__repo_name-PR-number.
patch: (str) - The gold patch, the patch generated by the PR (minus test-related code), that resolved the issue.
repo: (str) - The repository owner/name identifier from GitHub.
base_commit: (str) - The commit hash of the repository representing the HEAD of the repository before the solution PR is applied.
hints_text: (str) - Comments made on the issue prior to the creation of the solution PR’s first commit creation date.
created_at: (str) - The creation date of the pull request.
test_patch: (str) - A test-file patch that was contributed by the solution PR.
problem_statement: (str) - The issue title and body.
version: (str) - Installation version to use for running evaluation.
environment_setup_commit: (str) - commit hash to use for environment setup and installation.
FAIL_TO_PASS: (str) - A json list of strings that represent the set of tests resolved by the PR and tied to the issue resolution.
PASS_TO_PASS: (str) - A json list of strings that represent tests that should pass before and after the PR application.
text: (str) - The generated text according to the retrieval criterion and the style-2 prompt found in [github:SWE-bench](https://github.com/princeton-nlp/SWE-bench).
input_ids: (List[int]) - The cl100k_base tokens for each text.
```
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 3,024 | [
[
-0.038970947265625,
-0.03619384765625,
0.0206146240234375,
0.0266876220703125,
0.005523681640625,
-0.00786590576171875,
-0.0271453857421875,
-0.0157318115234375,
0.0226593017578125,
0.0311737060546875,
-0.055755615234375,
-0.046417236328125,
-0.018157958984375,
... |
Linyuyu/wangdapao | 2023-10-12T10:28:45.000Z | [
"region:us"
] | Linyuyu | null | null | 0 | 3 | 2023-10-10T07:02:52 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
Linyuyu/zhaojianguo | 2023-10-12T10:41:52.000Z | [
"region:us"
] | Linyuyu | null | null | 0 | 3 | 2023-10-10T07:05:10 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
Linyuyu/zhenggaokao | 2023-10-11T03:25:45.000Z | [
"region:us"
] | Linyuyu | null | null | 0 | 3 | 2023-10-10T07:05:42 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
ismailiismail/French_English_2 | 2023-10-12T08:17:22.000Z | [
"region:us"
] | ismailiismail | null | null | 0 | 3 | 2023-10-10T10:18:25 | ---
dataset_info:
features:
- name: french
dtype: string
- name: english
dtype: string
splits:
- name: train
num_bytes: 914954
num_examples: 2992
download_size: 352011
dataset_size: 914954
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "French_English_2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 479 | [
[
-0.0304412841796875,
-0.0207672119140625,
0.0133209228515625,
0.034149169921875,
-0.00498199462890625,
-0.005855560302734375,
0.0049591064453125,
-0.0170745849609375,
0.050628662109375,
0.0369873046875,
-0.04254150390625,
-0.04345703125,
-0.060394287109375,
... |
indolem/IndoMMLU | 2023-10-11T04:30:54.000Z | [
"task_categories:question-answering",
"size_categories:10K<n<100K",
"language:id",
"license:mit",
"knowledge",
"arxiv:2310.04928",
"arxiv:2112.10668",
"arxiv:2302.13971",
"region:us"
] | indolem | null | @inproceedings{koto-etal-2023-indommlu,
title = "Large Language Models Only Pass Primary School Exams in {I}ndonesia: A Comprehensive Test on {I}ndo{MMLU}",
author = "Fajri Koto and Nurul Aisyah and Haonan Li and Timothy Baldwin",
booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
month = December,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
} | 5 | 3 | 2023-10-10T11:16:12 | ---
license: mit
task_categories:
- question-answering
language:
- id
tags:
- knowledge
pretty_name: IndoMMLU
size_categories:
- 10K<n<100K
---
# IndoMMLU
<!---
[](https://github.com/internLM/OpenCompass/) [](https://github.com/EleutherAI/lm-evaluation-harness)
-->
<p align="center"> <img src="https://raw.githubusercontent.com/fajri91/eval_picts/master/IndoMMLU-Bar.png" style="width: 100%;" id="title-icon">
</p>
<p align="center"> <a href="http://www.fajrikoto.com" target="_blank">Fajri Koto</a>, <a href="https://www.linkedin.com/in/nuaisyah/" target="_blank">Nurul Aisyah</a>, <a href="https://haonan-li.github.io/" target="_blank">Haonan Li</a>, <a href="https://people.eng.unimelb.edu.au/tbaldwin/" target="_blank">Timothy Baldwin</a> </p>
<h4 align="center">
<p align="center" style="display: flex; flex-direction: row; justify-content: center; align-items: center">
📄 <a href="https://arxiv.org/abs/2310.04928" target="_blank" style="margin-right: 15px; margin-left: 10px">Paper</a> •
🏆 <a href="https://github.com/fajri91/IndoMMLU/blob/main/README_EN.md#evaluation" target="_blank" style="margin-left: 10px">Leaderboard</a> •
🤗 <a href="https://huggingface.co/datasets/indolem/indommlu" target="_blank" style="margin-left: 10px">Dataset</a>
</p>
</h4>
## Introduction
We introduce IndoMMLU, the first multi-task language understanding benchmark for Indonesian culture and languages,
which consists of questions from primary school to university entrance exams in Indonesia. By employing professional teachers,
we obtain 14,906 questions across 63 tasks and education levels, with 46% of the questions focusing on assessing proficiency
in the Indonesian language and knowledge of nine local languages and cultures in Indonesia.
<p align="left"> <img src="https://github.com/fajri91/eval_picts/blob/master/IndoMMLU-dist.png?raw=true" style="width: 500px;" id="title-icon"> </p>
## Subjects
| Level | Subjects |
|-----------|------------------------------------|
| SD (Primary School) | Science, Social science, Civics, Indonesian Language, Balinese, Makassarese, Banjarese, Lampungic, Madurese, Sundanese, Javanese, Dayak Ngaju, Minangkabau culture, Art, Sports, Islam religion, Christian religion, Hindu religion |
| SMP (Junior High School) | Science, Social science, Civics, Indonesian Language, Balinese, Makassarese, Banjarese, Lampungic, Madurese, Sundanese, Javanese, Minangkabau culture, Art, Sports, Islam religion, Christian religion, Hindu religion |
| SMA (Senior High School) | Physics, Chemistry, Biology, Geography, Sociology, Economics, History, Civics, Indonesian Language, Balinese, Makassarese, Banjarese, Lampungic, Madurese, Sundanese, Javanese, Art, Sports, Islam religion, Christian religion, Hindu religion |
| University Entrance Test | Chemistry, Biology, Geography, Sociology, Economics, History, Indonesian Language |
We categorize the collected questions into different subject areas, including: (1) STEM (Science, Technology, Engineering, and Mathematics); (2) Social Science; (3) Humanities; (4) Indonesian Language; and (5) Local Languages and Cultures.
## Examples
These questions are written in Indonesian. For local language subjects, some are written in the local languages. The English version is for illustrative purposes only.
<p align="left">
<img src="https://github.com/fajri91/eval_picts/blob/master/min_example.png?raw=true" style="width: 400px;" id="title-icon">
</p>
## Evaluation
We evaluate 24 multilingual LLMs of different sizes in zero-shot and few-shot settings. This includes [GPT-3.5 (ChatGPT)](https://chat.openai.com/), [XGLM](https://arxiv.org/abs/2112.10668), [Falcon](https://falconllm.tii.ae/), [BLOOMZ](https://huggingface.co/bigscience/bloomz), [mT0](https://huggingface.co/bigscience/mt0-xxl), [LLaMA](https://arxiv.org/abs/2302.13971), and [Bactrian-X](https://github.com/mbzuai-nlp/bactrian-x). Prior to the question and multiple-choice options, we add a simple prompt in the Indonesian language:
```
Ini adalah soal [subject] untuk [level]. Pilihlah salah satu jawaban yang dianggap benar!
English Translation: This is a [subject] question for [level]. Please choose the correct answer!
```
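As an illustration, the full zero-shot input could be assembled along the lines below. This is a sketch only: the field names, the lettered option formatting, and the `Jawaban:` answer cue are assumptions, not necessarily the exact format used in the paper's evaluation code.

```python
# Sketch of assembling the IndoMMLU-style zero-shot input. The option lettering
# and the "Jawaban:" cue are illustrative assumptions, not the paper's exact format.

def build_prompt(subject: str, level: str, question: str, options: list[str]) -> str:
    header = (
        f"Ini adalah soal {subject} untuk {level}. "
        "Pilihlah salah satu jawaban yang dianggap benar!"
    )
    letters = "ABCDE"
    body = "\n".join(f"{letters[i]}. {opt}" for i, opt in enumerate(options))
    return f"{header}\n\n{question}\n{body}\nJawaban:"

prompt = build_prompt(
    "Bahasa Indonesia",
    "SMA",
    "Sinonim kata 'cerdas' adalah ...",
    ["pandai", "malas", "lambat", "ragu"],
)
print(prompt)
```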
#### Zero-shot Evaluation
| Model (#param) | STEM | Social Science | Humanities | Indonesian Lang. | Local L. Culture | Average |
|---------------------|------|----------|-------------|---------|----------|---------|
| Random | 21.9 | 23.4 | 23.5 | 24.4 | 26.6 | 24.4 |
| [GPT-3.5 (175B)](https://chat.openai.com/) | **54.3** | **62.5** | **64.0** | **62.2** | 39.3 | **53.2** |
| [XGLM (564M)](https://huggingface.co/facebook/xglm-564M) | 22.1 | 23.0 | 25.6 | 25.6 | 27.5 | 25.2 |
| [XGLM (1.7B)](https://huggingface.co/facebook/xglm-1.7B) | 20.9 | 23.0 | 24.6 | 24.8 | 26.6 | 24.4 |
| [XGLM (2.9B)](https://huggingface.co/facebook/xglm-2.9B) | 22.9 | 23.2 | 25.4 | 26.3 | 27.2 | 25.2 |
| [XGLM (4.5B)](https://huggingface.co/facebook/xglm-4.5B) | 21.8 | 23.1 | 25.6 | 25.8 | 27.1 | 25.0 |
| [XGLM (7.5B)](https://huggingface.co/facebook/xglm-7.5B) | 22.7 | 21.7 | 23.6 | 24.5 | 27.5 | 24.5 |
| [Falcon (7B)](https://huggingface.co/tiiuae/falcon-7b) | 22.1 | 22.9 | 25.5 | 25.7 | 27.5 | 25.1 |
| [Falcon (40B)](https://huggingface.co/tiiuae/falcon-40b) | 30.2 | 34.8 | 34.8 | 34.9 | 29.2 | 32.1 |
| [BLOOMZ (560M)](https://huggingface.co/bigscience/bloomz-560m) | 22.9 | 23.6 | 23.2 | 24.2 | 25.1 | 24.0 |
| [BLOOMZ (1.1B)](https://huggingface.co/bigscience/bloomz-1b1) | 20.4 | 21.4 | 21.1 | 23.5 | 24.7 | 22.4 |
| [BLOOMZ (1.7B)](https://huggingface.co/bigscience/bloomz-1b7) | 31.5 | 39.3 | 38.3 | 42.8 | 29.4 | 34.4 |
| [BLOOMZ (3B)](https://huggingface.co/bigscience/bloomz-3b) | 33.5 | 44.5 | 39.7 | 46.7 | 29.8 | 36.4 |
| [BLOOMZ (7.1B)](https://huggingface.co/bigscience/bloomz-7b1) | 37.1 | 46.7 | 44.0 | 49.1 | 28.2 | 38.0 |
| [mT0<sub>small</sub> (300M)](https://huggingface.co/bigscience/mt0-small) | 21.8 | 21.4 | 25.7 | 25.1 | 27.6 | 24.9 |
| [mT0<sub>base</sub> (580M)](https://huggingface.co/bigscience/mt0-base) | 22.6 | 22.6 | 25.7 | 25.6 | 26.9 | 25.0 |
| [mT0<sub>large</sub> (1.2B)](https://huggingface.co/bigscience/mt0-large) | 22.0 | 23.4 | 25.1 | 27.3 | 27.6 | 25.2 |
| [mT0<sub>xl</sub> (3.7B)](https://huggingface.co/bigscience/mt0-xl) | 31.4 | 42.9 | 41.0 | 47.8 | 35.7 | 38.2 |
| [mT0<sub>xxl</sub> (13B)](https://huggingface.co/bigscience/mt0-xxl) | 33.5 | 46.2 | 47.9 | 52.6 | **39.6** | 42.5 |
| [LLaMA (7B)](https://arxiv.org/abs/2302.13971) | 22.8 | 23.1 | 25.1 | 26.7 | 27.6 | 25.3 |
| [LLaMA (13B)](https://arxiv.org/abs/2302.13971) | 24.1 | 23.0 | 24.4 | 29.5 | 26.7 | 25.3 |
| [LLaMA (30B)](https://arxiv.org/abs/2302.13971) | 25.4 | 23.5 | 25.9 | 28.4 | 28.7 | 26.5 |
| [LLaMA (65B)](https://arxiv.org/abs/2302.13971) | 33.0 | 37.7 | 40.8 | 41.4 | 32.1 | 35.8 |
| [Bactrian-X-LLaMA (7B)](https://github.com/mbzuai-nlp/bactrian-x) | 23.3 | 24.0 | 26.0 | 26.1 | 27.5 | 25.7 |
| [Bactrian-X-LLaMA (13B)](https://github.com/mbzuai-nlp/bactrian-x) | 28.3 | 29.9 | 32.8 | 35.2 | 29.2 | 30.3 |
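The "Random" row above can be sanity-checked: a uniform guesser scores 1/n in expectation on an n-option question, so the expected accuracy is the mean of 1/n over the question set. A minimal sketch, using an illustrative option-count mix (the real IndoMMLU distribution may differ):

```python
# Sanity check for the "Random" baseline: expected accuracy of a uniform
# guesser is the mean of 1/n over per-question option counts n.
# The mix below is made up for illustration, not the actual dataset statistics.

def random_baseline(option_counts: list[int]) -> float:
    return sum(1 / n for n in option_counts) / len(option_counts)

counts = [4] * 60 + [5] * 40  # hypothetical mix of 4- and 5-option items
print(f"{random_baseline(counts):.1%}")  # 23.0% for this mix, near the ~24.4 reported
```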
#### GPT-3.5 performance (% accuracy) across different education levels
<p align="left">
<img src="https://github.com/fajri91/eval_picts/blob/master/IndoMMLU-result.png?raw=true" style="width: 370px;" id="title-icon">
</p>
Red indicates that the score is below the minimum passing threshold of 65, while green signifies a score at or above this minimum. We can observe that ChatGPT reaches the passing score of 65 on most Indonesian primary school exams.
#### Few-shot Evaluation
<p align="left">
<img src="https://github.com/fajri91/eval_picts/blob/master/plot_fewshot.png?raw=true" style="width: 380px;" id="title-icon">
</p>
## Data
Each question in the dataset is a multiple-choice question with up to five options, exactly one of which is correct.
We provide our dataset according to each subject in [data](data) folder. You can also access our dataset via [Hugging Face](https://huggingface.co/datasets/indolem/indommlu).
<!--
#### Quick Use
Our dataset has been added to [lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness) and [OpenCompass](https://github.com/InternLM/opencompass), you can evaluate your model via these open-source tools.
-->
#### Evaluation
The code for the evaluation of each model we used is in `evaluate.py`, and the code to run them is listed in `run.sh`.
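`evaluate.py` itself is not reproduced here, but a typical step in this kind of multiple-choice evaluation is mapping a model's free-text output to an option letter before scoring. A hypothetical sketch of that step (not taken from the repository's actual code):

```python
import re
from typing import Optional

# Hypothetical post-processing for multiple-choice scoring; NOT the
# repository's evaluate.py, just a common pattern for this task.

def extract_choice(answer: str, letters: str = "ABCDE") -> Optional[str]:
    """Return the first standalone option letter found in a model's answer."""
    m = re.search(rf"\b([{letters}])\b", answer.upper())
    return m.group(1) if m else None

def accuracy(predictions: list[str], golds: list[str]) -> float:
    hits = sum(extract_choice(p) == g for p, g in zip(predictions, golds))
    return hits / len(golds)

preds = ["Jawaban: B", "A. pandai", "saya tidak yakin", "C"]
golds = ["B", "A", "D", "C"]
print(accuracy(preds, golds))  # 0.75
```

Answers with no recognizable option letter count as incorrect, which is one common convention; stricter or more lenient matching would change the scores.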
## Citation
```
@inproceedings{koto-etal-2023-indommlu,
title = "Large Language Models Only Pass Primary School Exams in {I}ndonesia: A Comprehensive Test on {I}ndo{MMLU}",
author = "Fajri Koto and Nurul Aisyah and Haonan Li and Timothy Baldwin",
booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
}
```
## License
The IndoMMLU dataset is licensed under a
[Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License](http://creativecommons.org/licenses/by-nc-sa/4.0/). | 9,322 | [
[ (embedding values truncated) ] ] |
Lancelot53/bengali_ai_ipa | 2023-10-10T14:07:56.000Z | [
"region:us"
] | Lancelot53 | null | null | 0 | 3 | 2023-10-10T14:00:14 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: text
dtype: string
- name: ipa
dtype: string
- name: row_id_column_name
dtype: int64
splits:
- name: train
num_bytes: 6974634
num_examples: 21999
- name: test
num_bytes: 5861099
num_examples: 27228
download_size: 6174391
dataset_size: 12835733
---
# Dataset Card for "bengali_ai_ipa"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 622 | [
[ (embedding values truncated) ] ] |
nalmeida/test_local1 | 2023-10-10T14:03:19.000Z | [
"region:us"
] | nalmeida | null | null | 0 | 3 | 2023-10-10T14:01:31 | Entry not found | 15 | [
[ (embedding values truncated) ] ] |
nalmeida/test3 | 2023-10-10T16:05:22.000Z | [
"region:us"
] | nalmeida | null | null | 0 | 3 | 2023-10-10T16:05:20 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: input
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 487027
num_examples: 321
download_size: 146233
dataset_size: 487027
---
# Dataset Card for "test3"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 465 | [
[ (embedding values truncated) ] ] |
brunomaribeiro/ts_lyricsgenerationdataset | 2023-10-10T20:10:15.000Z | [
"region:us"
] | brunomaribeiro | null | null | 0 | 3 | 2023-10-10T19:30:36 | Entry not found | 15 | [
[ (embedding values truncated) ] ] |
MemGPT/MSC-Self-Instruct | 2023-11-02T07:40:08.000Z | [
"license:apache-2.0",
"arxiv:2310.08560",
"region:us"
] | MemGPT | null | null | 5 | 3 | 2023-10-11T02:51:50 | ---
license: apache-2.0
---
MemGPT
===
This is the self-instruct dataset of MSC conversations used in the MemGPT paper. For more information, please refer to memgpt.ai
The [MSC dataset](https://parl.ai/projects/msc/) is a collection of multi-round human conversations. In this dataset, the goal is to come up with a conversation opener that is personalized to the user by referencing topics from the previous conversations.
These were generated while evaluating [MemGPT](https://arxiv.org/abs/2310.08560). | 492 | [
[ (embedding values truncated) ] ] |
vile99/QA_data | 2023-10-11T05:43:37.000Z | [
"region:us"
] | vile99 | null | null | 0 | 3 | 2023-10-11T05:43:08 | Entry not found | 15 | [
[ (embedding values truncated) ] ] |
iara-project/news_dataset_for_hdbscan | 2023-10-12T01:43:36.000Z | [
"region:us"
] | iara-project | null | null | 0 | 3 | 2023-10-12T01:34:44 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: title
dtype: string
- name: text
dtype: string
- name: date
dtype: string
- name: category
dtype: string
- name: category_natural_language
dtype: string
- name: link
dtype: string
splits:
- name: train
num_bytes: 77001539
num_examples: 21933
- name: test
num_bytes: 76974994
num_examples: 21933
download_size: 96118980
dataset_size: 153976533
---
# Dataset Card for "news_dataset_for_hdbscan"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 749 | [
[ (embedding values truncated) ] ] |
acozma/imagenet-1k-rand_blur | 2023-10-31T07:59:42.000Z | [
"region:us"
] | acozma | null | null | 0 | 3 | 2023-10-12T04:00:47 | ---
dataset_info:
features:
- name: image
dtype: image
- name: conditioning_image
dtype: image
- name: text
dtype: string
- name: params
struct:
- name: func
dtype: string
- name: radius
dtype: int64
splits:
- name: train
num_bytes: 283029903517.0
num_examples: 500000
download_size: 283032983222
dataset_size: 283029903517.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "imagenet-1k-rand_blur"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 653 | [
[ (embedding values truncated) ] ] |
sdg4168/test | 2023-10-13T08:15:31.000Z | [
"region:us"
] | sdg4168 | null | null | 0 | 3 | 2023-10-12T07:43:18 | Entry not found | 15 | [
[ (embedding values truncated) ] ] |
VAGOsolutions/MT-Bench-TrueGerman | 2023-10-12T10:07:55.000Z | [
"language:de",
"region:us"
] | VAGOsolutions | null | null | 3 | 3 | 2023-10-12T09:00:45 | ---
language:
- de
---
## Benchmark
**German Benchmarks on Hugging Face**
At present, there is a notable scarcity, if not a complete **absence, of reliable and true German benchmarks** designed to evaluate the capabilities of German Language Models (LLMs). While some efforts have been made to translate English benchmarks into German, these attempts often **fall short in terms of precision, accuracy, and context sensitivity, even when employing GPT-4 technology**. Take, for instance, the **MT-Bench**, a widely recognized and frequently used benchmark for assessing LLM performance in real-world scenarios. The seemingly straightforward and cost-effective approach of **translating MT-Bench into German using GPT-4 proves to be counterproductive**, resulting in subpar outcomes that hinder a realistic and contextually appropriate evaluation of German LLMs. To illustrate this, we offer a few examples extracted from translated MT-Bench versions available on Hugging Face.
**Example: Uncommon use of words**
*{ "category": "writing", "turns": [ "Schreibe eine überzeugende E-Mail, um deinen introvertierten Freund, der öffentliches Sprechen nicht mag, dazu zu bringen, sich als Gastredner bei einer lokalen Veranstaltung zu engagieren. Verwende überzeugende Argumente und gehe auf mögliche Einwände ein. Bitte sei prägnant.", "Kannst du deine vorherige Antwort umformulieren und in jedem Satz eine Metapher oder ein **Gleichnis** einbauen?" ] }*
What you can see here is an example of a German word that no one would use in a real conversation (marked in bold). In a real conversation, someone would rather say "Vergleich" instead of "Gleichnis".
**Example: Wrong context**
*{ "category": "roleplay", "turns": [ "Bitte nehmen Sie die Rolle eines englischen Übersetzers an, der damit beauftragt ist, Rechtschreibung und Sprache zu korrigieren und zu verbessern. Unabhängig von der Sprache, die ich verwende, sollten Sie sie identifizieren, übersetzen und mit einer verfeinerten und polierten Version meines Textes **auf Englisch antworten**.*
Here the request is to translate a given sentence into English and to phrase a more sophisticated sentence than the original. As we aim to assess a German LLM, asking the model to translate a sentence into English would be pointless.
**Example: Wrong content**
*{"category": "writing", "turns": [ "Bearbeite den folgenden Absatz, um etwaige grammatikalische Fehler zu korrigieren: ***Sie erinnerte sich nicht daran, wo ihre Geldbörse ist, also denke ich, dass sie im Auto ist, aber er sagt, dass sie auf dem Küchentisch ist, aber er ist sich nicht sicher, und dann haben sie mich gebeten, danach zu suchen, sie sagt: "Kannst du?", und ich antworte: "Vielleicht, aber ich bin nicht sicher", und er hat mich nicht gehört, und er fragt: "Was?", "Hast du es gefunden?"***.", "Ändere deine frühere Antwort und vermeide die Verwendung von geschlechtsspezifischen Pronomen." ]}*
The task here is to edit a sentence full of grammatical errors and correct them. The problem with this translated version of MT-Bench is that the sentence was already corrected by GPT-4 during translation, so the model is now asked to correct a sentence that no longer contains any grammatical errors.
**Example: Pointless translation of anglicisms**
*{ "category": "roleplay", "turns": [ "Jetzt bist du ein **Maschinenlern-Ingenieur**. Deine Aufgabe besteht darin, komplexe Maschinenlernkonzepte auf einfache Weise zu erklären, damit Kunden ohne technischen Hintergrund deine Produkte verstehen und ihnen vertrauen können. Fangen wir an mit der Frage: Was ist ein Sprachmodell? Wird es mit gelabelten oder ungelabelten Daten trainiert?, "Ist das wahr? Ich habe gehört, dass andere Unternehmen unterschiedliche Ansätze verwenden, um dies zu tun und es sicherer zu machen.]}*
As we can see here, the GPT-4 translation of this item led to a term that no one would use when speaking German. Instead, someone would rather use the original English term "Machine Learning Engineer" or the properly translated term "Ingenieur für maschinelles Lernen".
**Our approach to a German Benchmark**
Instead of simply translating MT-Bench with GPT-4, we applied a mixed approach of automatic translation and human evaluation. In a first step, we translated the complete MT-Bench into German using GPT-4. In a second step, we conducted a thorough manual evaluation of each translated item to ensure the following quality criteria:
- The dataset has been translated into German language.
- The German translation consists of an appropriate and genuine wording.
- The context of the translated dataset is meaningful and reasonable for assessing the German language skills of the model.
- The content of the translated dataset is still reasonable after translation.
Although this method is undeniably time-consuming, it enables us to create a substantive benchmark for evaluating the model's proficiency in completing various benchmark categories. Nonetheless, it is important to acknowledge that even with this meticulous approach, a truly flawless benchmark remains elusive, as minor oversights may still occur due to human errors.
Nevertheless, when we compare the current approaches of German language model teams available on Hugging Face, we may assume that our German MT-Bench, as of today, stands as the most precise and practical benchmark for assessing German LLMs. Consequently, the benchmark scores we present offer a realistic evaluation of a model's performance in the German language.
[ (embedding values truncated) ] ] |
yrehan32/llama2-layanobat-dataset-v2 | 2023-10-12T09:50:52.000Z | [
"region:us"
] | yrehan32 | null | null | 0 | 3 | 2023-10-12T09:12:27 | Entry not found | 15 | [
[ (embedding values truncated) ] ] |
sandeep16064/news_summary | 2023-10-12T11:36:23.000Z | [
"region:us"
] | sandeep16064 | null | null | 0 | 3 | 2023-10-12T10:50:33 |
# Dataset Card for Dataset Name
<!-- Provide a quick summary of the dataset. -->
Inshorts News dataset. Inshorts is a news service that offers short summaries (60 words or less) of news from around the web. This dataset contains headlines and summaries of news items along with their sources.
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** Sandeep Kumar, Arun Solanki. "An abstractive text summarization technique using transformer model with self-attention mechanism." Neural Computing and Applications, 2023. https://paperswithcode.com/paper/an-abstractive-text-summarization-technique
Creating a summarized version of a text document that still conveys precise meaning is an incredibly complex endeavor in natural language processing (NLP). Abstractive text summarization (ATS) is the process of using facts from source sentences and merging them into concise representations while maintaining the content and intent of the text. Manually summarizing large amounts of text is challenging and time-consuming for humans. Therefore, text summarization has become an exciting research focus in NLP. This research paper proposed an ATS model using a Transformer Technique with Self-Attention Mechanism (T2SAM). The self-attention mechanism is added to the transformer to solve the problem of coreference in text. This makes the system understand the text better. The proposed T2SAM model improves the performance of text summarization. It is trained on the Inshorts News dataset combined with the DUC-2004 shared tasks dataset. The performance of the proposed model has been evaluated using the ROUGE metrics, and it has been shown to outperform the existing state-of-the-art baseline models. The proposed model reduces the training loss from 10.3058 (at the starting point) to a minimum of 1.8220 over 30 epochs, and it achieved a model accuracy of 48.50% F1-score on both the Inshorts and DUC-2004 news datasets.
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
Kaggle and Inshort news app
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
Web scraping
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
https://doi.org/10.1007/s00521-023-08687-7
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] | 5,986 | [
[ (embedding values truncated) ] ] |
Hung2003vn/dataset_quy_trinh_03 | 2023-10-12T13:27:11.000Z | [
"region:us"
] | Hung2003vn | null | null | 0 | 3 | 2023-10-12T13:26:29 | Entry not found | 15 | [
[ (embedding values truncated) ] ] |
ccmusic-database/PMEmo | 2023-11-02T15:52:59.000Z | [
"task_categories:audio-classification",
"size_categories:n<1K",
"language:zh",
"language:en",
"license:mit",
"music",
"art",
"region:us"
] | ccmusic-database | Music Emotion Recognition (MER) has recently received considerable attention. To support the MER research which requires large music content libraries, we present the PMEmo dataset containing emotion annotations of 794 songs as well as the simultaneous electrodermal activity (EDA) signals. A Music Emotion Experiment was well-designed for collecting the affective-annotated music corpus of high quality, which recruited 457 subjects. The dataset is publically available to the research community, which is foremost intended for benchmarking in music emotion retrieval and recognition. To straightforwardly evaluate the methodologies for music affective analysis, it also involves pre-computed audio feature sets. In addition to that, manually selected chorus excerpts (compressed in MP3) of songs are provided to facilitate the development of chorus-related research. | @dataset{zhaorui_liu_2021_5676893,
author = {Zhaorui Liu, Monan Zhou, Shenyang Xu, Zhaowen Wang, Wei Li and Zijin Li},
title = {CCMUSIC DATABASE: A Music Data Sharing Platform for Computational Musicology Research},
month = {nov},
year = {2021},
publisher = {Zenodo},
version = {1.1},
doi = {10.5281/zenodo.5676893},
url = {https://doi.org/10.5281/zenodo.5676893}
} | 1 | 3 | 2023-10-12T13:58:27 | ---
license: mit
task_categories:
- audio-classification
language:
- zh
- en
tags:
- music
- art
pretty_name: PMEmo
size_categories:
- n<1K
viewer: false
---
# Dataset Card for PMEmo
## Dataset Description
- **Homepage:** <https://ccmusic-database.github.io>
- **Repository:** <https://huggingface.co/datasets/ccmusic-database/PMEmo>
- **Paper:** <https://doi.org/10.5281/zenodo.5676893>
- **Leaderboard:** <https://ccmusic-database.github.io/team.html>
- **Point of Contact:** Dataset; Music Emotion Recognition; Experiment; EDA
### Dataset Summary
Music Emotion Recognition (MER) has recently received considerable attention. To support the MER research which requires large music content libraries, we present the PMEmo dataset containing emotion annotations of 794 songs as well as the simultaneous electrodermal activity (EDA) signals.
### Supported Tasks and Leaderboards
MER, MIR, audio classification
### Languages
Chinese, English
## Dataset Structure
### Data Instances
.zip(.wav, .txt, .lrc, .csv), .csv
### Data Fields
Audio Serial, Song Metadata, Audio Demo, Pre-computed Audio Features for Use in MER Tasks, Manually Annotated Emotion Labels, EDA Physiological Signals, Song Lyrics (LRC), Song Comments
### Data Splits
train, valid, test
## Dataset Creation
### Curation Rationale
Lack of a dataset for time-based MER
### Source Data
#### Initial Data Collection and Normalization
Kejun Zhang, Hui Zhang, Simeng Li, Changyuan Yang, Lingyun Sun, Monan Zhou
#### Who are the source language producers?
Teachers & students from NEXT Lab
### Annotations
#### Annotation process
A Music Emotion Experiment was well-designed for collecting the affective-annotated music corpus of high quality, which recruited 457 subjects. The dataset is publicly available to the research community, and is foremost intended for benchmarking in music emotion retrieval and recognition. To straightforwardly evaluate the methodologies for music affective analysis, it also involves pre-computed audio feature sets. In addition to that, manually selected chorus excerpts (compressed in MP3) of songs are provided to facilitate the development of chorus-related research.
#### Who are the annotators?
Teachers & students from NEXT Lab
### Personal and Sensitive Information
None
## Considerations for Using the Data
### Social Impact of Dataset
Advancing the Digitization Process of time-based MER
### Discussion of Biases
Only for pop music
### Other Known Limitations
Time-based MER has high noise
## Additional Information
### Dataset Curators
Kejun Zhang, Hui Zhang, Simeng Li, Changyuan Yang, Lingyun Sun
### Evaluation
[Kejun Zhang, Hui Zhang, Simeng Li, Changyuan Yang, and Lingyun Sun. 2018. The PMEmo Dataset for Music Emotion Recognition. In Proceedings of the 2018 ACM on International Conference on Multimedia Retrieval (ICMR '18). Association for Computing Machinery, New York, NY, USA, 135–142. https://doi.org/10.1145/3206025.3206037](https://doi.org/10.1145/3206025.3206037)
### Licensing Information
```
MIT License
Copyright (c) NEXT Lab
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
```
### Citation Information
```
@dataset{zhaorui_liu_2021_5676893,
author       = {Zhaorui Liu and Monan Zhou and Shenyang Xu and Zhaowen Wang and Wei Li and Zijin Li},
title = {CCMUSIC DATABASE: A Music Data Sharing Platform for Computational Musicology Research},
month = {nov},
year = {2021},
publisher = {Zenodo},
version = {1.1},
doi = {10.5281/zenodo.5676893},
url = {https://doi.org/10.5281/zenodo.5676893}
}
```
### Contributions
Provide a dataset for time-based MER | 4,622 | [
[
-0.0384521484375,
-0.0133819580078125,
0.0204315185546875,
0.03155517578125,
-0.0279388427734375,
-0.00901031494140625,
-0.0302276611328125,
-0.029327392578125,
0.024932861328125,
0.022735595703125,
-0.07196044921875,
-0.079345703125,
-0.01515960693359375,
0... |
erbacher/nq_open5 | 2023-10-12T20:53:25.000Z | [
"region:us"
] | erbacher | null | null | 0 | 3 | 2023-10-12T16:24:53 | ---
dataset_info:
features:
- name: query
dtype: string
- name: gold_generation
sequence: string
- name: target
dtype: string
- name: text
dtype: string
- name: results
dtype: string
- name: em
dtype: float64
- name: hal_m
dtype: string
splits:
- name: train
num_bytes: 41737579
num_examples: 79168
- name: dev
num_bytes: 4612579
num_examples: 8757
- name: test
num_bytes: 1950822
num_examples: 3610
download_size: 13126477
dataset_size: 48300980
---
# Dataset Card for "nq_open5"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 693 | [
[
-0.04095458984375,
0.0025081634521484375,
0.0072174072265625,
0.0032138824462890625,
-0.010772705078125,
-0.01016998291015625,
0.0292816162109375,
-0.005157470703125,
0.042877197265625,
0.039703369140625,
-0.056671142578125,
-0.06695556640625,
-0.018585205078125... |
dim/camel_ai_chemistry | 2023-10-12T17:22:28.000Z | [
"region:us"
] | dim | null | null | 1 | 3 | 2023-10-12T17:22:15 | ---
dataset_info:
features:
- name: role_1
dtype: string
- name: topic;
dtype: string
- name: sub_topic
dtype: string
- name: message_1
dtype: string
- name: message_2
dtype: string
splits:
- name: train
num_bytes: 47000178
num_examples: 20000
download_size: 16918940
dataset_size: 47000178
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "camel_ai_chemistry"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 601 | [
[
-0.0296173095703125,
-0.01038360595703125,
-0.0035457611083984375,
0.0090484619140625,
-0.006214141845703125,
0.006107330322265625,
0.01910400390625,
-0.015380859375,
0.04949951171875,
0.0238494873046875,
-0.05560302734375,
-0.06634521484375,
-0.0283050537109375... |
psyche/nmt-sample | 2023-10-12T17:52:36.000Z | [
"region:us"
] | psyche | null | null | 0 | 3 | 2023-10-12T17:52:33 | ---
dataset_info:
features:
- name: source
dtype: string
- name: target
dtype: string
- name: source_language
dtype: string
- name: target_language
dtype: string
- name: category
dtype: string
splits:
- name: train
num_bytes: 988
num_examples: 3
download_size: 5473
dataset_size: 988
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "nmt-sample"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 586 | [
[
-0.049560546875,
-0.0131683349609375,
0.0199127197265625,
0.0017499923706054688,
-0.0223388671875,
0.00414276123046875,
0.0218658447265625,
-0.00341033935546875,
0.07098388671875,
0.029510498046875,
-0.07110595703125,
-0.047607421875,
-0.031951904296875,
-0.... |
chenuneris/news-brazillian-clean | 2023-10-13T19:08:20.000Z | [
"license:apache-2.0",
"region:us"
] | chenuneris | null | null | 0 | 3 | 2023-10-12T18:41:13 | ---
license: apache-2.0
---
This dataset is composed of the articles found on the following news portals:
- <a href="https://anovademocracia.com.br">A Nova Democracia</a>
- <a href="https://averdade.org.br">A verdade</a>
- <a href="https://www.brasildefato.com.br">Brasil de fato</a>
- <a href="https://mst.org.br/conteudo/noticias">Jornal MST</a>
- <a href="https://operamundi.uol.com.br">Opera Mundi</a>
- <a href="https://revistaopera.com.br">Revista Opera</a>
Each folder inside the "artigos-extraidos.zip" archive contains the raw, uncleaned articles.
The file "br-news-prototype-dataset.json" is a JSON file containing all the articles, concatenated and split into chunks, which were used to train the latest version of the "br-news-prototype" model, created on 16/09/2023.
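For orientation, a chunk file like this can be inspected with a few lines of Python. The sketch below assumes, hypothetically, that the JSON top level is a list of text chunks; the actual schema of "br-news-prototype-dataset.json" is not documented here.

```python
import json

def summarize_chunks(json_text):
    """Return (number of chunks, total characters), assuming a list of strings."""
    chunks = json.loads(json_text)
    return len(chunks), sum(len(c) for c in chunks)

# In-memory stand-in for the real file; the true structure may differ.
sample = '["primeiro chunk de artigo", "segundo chunk"]'
n_chunks, n_chars = summarize_chunks(sample)
```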
| 791 | [
[
-0.0191497802734375,
-0.0282135009765625,
0.0116424560546875,
0.0240478515625,
-0.035614013671875,
0.0168609619140625,
-0.005756378173828125,
-0.01495361328125,
0.03729248046875,
0.043426513671875,
-0.0286102294921875,
-0.054473876953125,
-0.03411865234375,
... |
shuubham/whisper_arl_1 | 2023-10-16T00:49:36.000Z | [
"region:us"
] | shuubham | null | null | 0 | 3 | 2023-10-12T20:39:58 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
librarian-bots/model_cards_with_metadata | 2023-11-02T03:13:22.000Z | [
"task_categories:text-retrieval",
"size_categories:100K<n<1M",
"ethics",
"region:us"
] | librarian-bots | null | null | 4 | 3 | 2023-10-12T21:50:53 | ---
size_categories:
- 100K<n<1M
task_categories:
- text-retrieval
pretty_name: Hugging Face Hub Model Cards
dataset_info:
features:
- name: modelId
dtype: string
- name: lastModified
dtype: string
- name: tags
sequence: string
- name: pipeline_tag
dtype: string
- name: author
dtype: string
- name: config
dtype: 'null'
- name: securityStatus
dtype: 'null'
- name: id
dtype: string
- name: likes
dtype: int64
- name: downloads
dtype: int64
- name: library_name
dtype: string
- name: created
dtype: timestamp[us]
- name: card
dtype: string
splits:
- name: train
num_bytes: 620250953
num_examples: 378702
download_size: 204789617
dataset_size: 620250953
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
tags:
- ethics
---
# Dataset Card for Hugging Face Hub Model Cards
This dataset consists of [model cards](https://huggingface.co/docs/hub/model-cards) for models hosted on the Hugging Face Hub. The model cards are created by the community and provide information about the model, its performance, its intended uses, and more.
This dataset is updated on a daily basis and includes publicly available models on the Hugging Face Hub.
This dataset is made available to help users who want to work with a large number of Model Cards from the Hub. We hope that it will support research on Model Cards and their use, but its format may not suit every use case. If there are other features that you would like to see included in this dataset, please open a new [discussion](https://huggingface.co/datasets/librarian-bots/model_cards_with_metadata/discussions/new).
## Dataset Details
### Dataset Description
- **Curated by:** Daniel van Strien
- **Language(s) (NLP):** Model cards on the Hugging Face Hub are predominantly in English but may include other languages.
## Uses
There are a number of potential uses for this dataset including:
- text mining to find common themes in model cards
- analysis of the model card format/content
- topic modelling of model cards
- analysis of the model card metadata
- training language models on model cards
### Out-of-Scope Use
[More Information Needed]
## Dataset Structure
This dataset has a single split.
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
The dataset was created to assist people in working with model cards. In particular it was created to support research in the area of model cards and their use. It is possible to use the Hugging Face Hub API or client library to download model cards and this option may be preferable if you have a very specific use case or require a different format.
### Source Data
The source data is `README.md` files for models hosted on the Hugging Face Hub. We do not include any other supplementary files that may be included in the model card directory.
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
The data is downloaded using a CRON job on a daily basis.
#### Who are the source data producers?
The source data producers are the creators of the model cards on the Hugging Face Hub. This includes a broad variety of people from the community ranging from large companies to individual researchers. We do not gather any information about who created the model card in this repository although this information can be gathered from the Hugging Face Hub API.
### Annotations [optional]
There are no additional annotations in this dataset beyond the model card content.
#### Annotation process
N/A
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
N/A
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
We make no effort to anonymize the data. Whilst we don't expect the majority of model cards to contain personal or sensitive information, it is possible that some model cards may contain this information. Model cards may also link to websites or email addresses.
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
Model cards are created by the community and we do not have any control over the content of the model cards. We do not review the content of the model cards and we do not make any claims about the accuracy of the information in the model cards.
Some model cards will themselves discuss bias and sometimes this is done by providing examples of bias in either the training data or the responses provided by the model. As a result this dataset may contain examples of bias.
Whilst we do not directly download any images linked to in the model cards, some model cards may include images. Some of these images may not be suitable for all audiences.
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation
No formal citation is required for this dataset but if you use this dataset in your work, please include a link to this dataset page.
## Dataset Card Authors
[@davanstrien](https://huggingface.co/davanstrien)
## Dataset Card Contact
[@davanstrien](https://huggingface.co/davanstrien) | 5,975 | [
[
-0.03692626953125,
-0.05572509765625,
0.005126953125,
0.0236663818359375,
-0.02288818359375,
-0.02447509765625,
-0.0022220611572265625,
-0.057891845703125,
0.037261962890625,
0.049163818359375,
-0.0660400390625,
-0.06109619140625,
-0.0390625,
-0.001338958740... |
ContextualAI/tiny-lambada_openai | 2023-10-12T22:18:30.000Z | [
"region:us"
] | ContextualAI | null | null | 0 | 3 | 2023-10-12T22:17:03 | ---
dataset_info:
features:
- name: query
dtype: string
- name: gold_generation
dtype: string
splits:
- name: test
num_bytes: 33311
num_examples: 100
download_size: 0
dataset_size: 33311
configs:
- config_name: default
data_files:
- split: test
path: data/test-*
---
# Dataset Card for "tiny-lambada_openai"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 478 | [
[
-0.04132080078125,
-0.01334381103515625,
0.0184478759765625,
0.00420379638671875,
-0.0194244384765625,
-0.0299530029296875,
0.004474639892578125,
-0.0096893310546875,
0.057342529296875,
0.01490020751953125,
-0.0372314453125,
-0.03924560546875,
-0.019744873046875... |
Brecon/neutral_claim | 2023-10-13T00:41:38.000Z | [
"region:us"
] | Brecon | null | null | 0 | 3 | 2023-10-13T00:41:31 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: label
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 31976.842105263157
num_examples: 30
- name: test
num_bytes: 8527.157894736842
num_examples: 8
download_size: 34081
dataset_size: 40504.0
---
# Dataset Card for "neutral_claim"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 587 | [
[
-0.04248046875,
-0.004138946533203125,
0.00481414794921875,
0.01203155517578125,
-0.004848480224609375,
0.0025615692138671875,
0.007373809814453125,
-0.02484130859375,
0.06884765625,
0.032684326171875,
-0.050506591796875,
-0.042205810546875,
-0.03448486328125,
... |
lazaroq11/billqa | 2023-10-13T00:58:32.000Z | [
"region:us"
] | lazaroq11 | null | null | 0 | 3 | 2023-10-13T00:48:36 | ---
dataset_info:
features:
- name: text
dtype: string
- name: additional_info
dtype: string
splits:
- name: train
num_bytes: 240602641
num_examples: 9846
download_size: 9341153
dataset_size: 240602641
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "billqa"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 482 | [
[
-0.043701171875,
-0.00637054443359375,
-0.0011119842529296875,
0.0003349781036376953,
-0.0160369873046875,
0.002841949462890625,
0.051055908203125,
-0.004070281982421875,
0.05218505859375,
0.051849365234375,
-0.04290771484375,
-0.044525146484375,
-0.029586791992... |
jbac208/3.5turbo_ws5_xml | 2023-10-13T01:57:47.000Z | [
"region:us"
] | jbac208 | null | null | 0 | 3 | 2023-10-13T01:57:00 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
jayyd/intel_hackathon_data | 2023-10-13T10:16:24.000Z | [
"region:us"
] | jayyd | null | null | 0 | 3 | 2023-10-13T10:02:12 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
cis-lmu/GlotStoryBook | 2023-11-02T00:45:13.000Z | [
"language:kwn",
"language:nno",
"language:mlg",
"language:miu",
"language:mhi",
"language:yua",
"language:dga",
"language:pol",
"language:sck",
"language:nuj",
"language:ben",
"language:san",
"language:luo",
"language:guz",
"language:hus",
"language:adh",
"language:lwg",
"language:... | cis-lmu | null | null | 1 | 3 | 2023-10-13T11:13:40 | ---
license: cc
language:
- kwn
- nno
- mlg
- miu
- mhi
- yua
- dga
- pol
- sck
- nuj
- ben
- san
- luo
- guz
- hus
- adh
- lwg
- lue
- nhw
- mer
- lug
- xsm
- ell
- rus
- afr
- ewe
- yue
- mnw
- laj
- myx
- fra
- adx
- teo
- cce
- kln
- hat
- zne
- srp
- mmc
- mal
- fat
- nyu
- ndo
- ven
- hch
- ssw
- kqn
- mhw
- koo
- prs
- nso
- yor
- zho
- naq
- nle
- mqu
- lun
- tuv
- ocu
- sme
- kdj
- alz
- lit
- spa
- mfe
- maz
- tum
- nhe
- hun
- dje
- ori
- swa
- ron
- her
- urd
- ttj
- ktz
- tur
- kam
- sag
- kru
- kok
- toi
- jpn
- orm
- rki
- tsn
- nep
- tha
- zul
- ctu
- khg
- dag
- pcm
- keo
- lko
- amh
- saq
- jam
- ara
- kik
- toh
- kan
- lgg
- tam
- aeb
- ckb
- deu
- guj
- ukr
- tir
- tet
- mar
- bxk
- gur
- vie
- old
- nch
- kpz
- xho
- crk
- ita
- kmr
- nyn
- por
- kri
- gaa
- hin
- asm
- mas
- xog
- khm
- csw
- nor
- tgl
- kin
- luc
- ful
- sqi
- kua
- cat
- tsc
- pus
- nld
- kor
- sot
- mya
- lat
- bod
- eng
- nob
- nzi
- twi
- hau
- dan
- kau
- pan
- swe
- fas
- som
- tso
- loz
- anu
- tel
- ada
- nbl
- lsm
- ach
- bem
- pmq
- mat
- gjn
- nya
- epo
pretty_name: GlotStoryBook Corpus
tags:
- 'story '
- storybook
- language-identification
---
## Dataset Description
Storybooks covering 180 ISO 639-3 language-script pairs (174 unique ISO 639-3 codes).
- **Homepage:** [homepage](https://github.com/cisnlp/GlotStoryBook)
- **Repository:** [github](https://github.com/cisnlp/GlotStoryBook)
- **Paper:** [paper](https://arxiv.org/abs/2310.16248)
- **Point of Contact:** amir@cis.lmu.de
## Usage (HF Loader)
```python
from datasets import load_dataset
dataset = load_dataset('cis-lmu/GlotStoryBook')
print(dataset['train'][0]) # First row data
```
## Download
If you are not a fan of the HF dataloader, download it directly:
```python
! wget https://huggingface.co/datasets/cis-lmu/GlotStoryBook/resolve/main/GlotStoryBook.csv
```
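Once downloaded, the CSV can be read with Python's standard `csv` module. The sketch below groups rows by an arbitrary column; note that the column names used here are placeholders, not the dataset's actual schema.

```python
import csv
import io
from collections import Counter

def count_by_column(csv_text, column):
    """Count how many rows share each value of `column` in a CSV string."""
    reader = csv.DictReader(io.StringIO(csv_text))
    return Counter(row[column] for row in reader)

# Placeholder CSV standing in for GlotStoryBook.csv; real column names may differ.
sample = "ISO639-3,Text\nkwn,hello\nnno,hei\nkwn,world\n"
counts = count_by_column(sample, "ISO639-3")
```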
# Tools
To compute the script of each text, we used GlotScript ([code](https://github.com/cisnlp/GlotScript) and [paper](https://arxiv.org/abs/2309.13320)).
## License and Copyright
We do not own any of the text from which this data has been extracted.
All the files are collected from the repository located at https://github.com/global-asp/.
The source repository for each text and file is stored in the dataset.
Each file in the dataset is associated with one license from the CC family.
The licenses include 'CC BY', 'CC BY-NC', 'CC BY-NC-SA', 'CC-BY', 'CC-BY-NC', and 'Public Domain'.
We also license the code, the packaging, and the metadata of this dataset under CC0-1.0.
## Github
We additionally provide a GitHub version that openly shares the source code for processing this dataset:
https://github.com/cisnlp/GlotStoryBook
## Citation
If you use any part of this code and data in your research, please cite it (along with https://github.com/global-asp/) using the following BibTeX entry.
This work is part of the [GlotLID](https://github.com/cisnlp/GlotLID) project.
```
@inproceedings{
kargaran2023glotlid,
title={{GlotLID}: Language Identification for Low-Resource Languages},
author={Kargaran, Amir Hossein and Imani, Ayyoob and Yvon, Fran{\c{c}}ois and Sch{\"u}tze, Hinrich},
booktitle={The 2023 Conference on Empirical Methods in Natural Language Processing},
year={2023},
url={https://openreview.net/forum?id=dl4e3EBz5j}
}
``` | 3,309 | [
[
-0.00807952880859375,
-0.022216796875,
0.01446533203125,
0.0168304443359375,
-0.01177978515625,
-0.005596160888671875,
-0.027435302734375,
-0.04534912109375,
0.0234375,
0.033538818359375,
-0.035369873046875,
-0.05291748046875,
-0.0229034423828125,
0.01724243... |
renatomoulin/fourthbrain_synthetic_marketmail | 2023-10-14T13:57:55.000Z | [
"region:us"
] | renatomoulin | null | null | 0 | 3 | 2023-10-13T12:51:43 | ---
dataset_info:
features:
- name: product
dtype: string
- name: description
dtype: string
- name: marketing_email
dtype: string
splits:
- name: train
num_bytes: 20890
num_examples: 10
download_size: 26786
dataset_size: 20890
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "fourthbrain_synthetic_marketmail"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 539 | [
[
-0.026947021484375,
-0.0201568603515625,
0.01071929931640625,
0.0167388916015625,
0.0006871223449707031,
0.01369476318359375,
0.010498046875,
-0.01187896728515625,
0.05157470703125,
0.03753662109375,
-0.0684814453125,
-0.06597900390625,
-0.014129638671875,
-... |
galbitang/autotrain-data-jeongmi_chair | 2023-10-13T15:56:45.000Z | [
"task_categories:image-classification",
"region:us"
] | galbitang | null | null | 0 | 3 | 2023-10-13T15:32:58 | ---
task_categories:
- image-classification
---
# AutoTrain Dataset for project: jeongmi_chair
## Dataset Description
This dataset has been automatically processed by AutoTrain for project jeongmi_chair.
### Languages
The BCP-47 code for the dataset's language is unk.
## Dataset Structure
### Data Instances
A sample from this dataset looks as follows:
```json
[
{
"image": "<1000x1000 RGB PIL image>",
"target": 4
},
{
"image": "<700x700 RGB PIL image>",
"target": 6
}
]
```
### Dataset Fields
The dataset has the following fields (also called "features"):
```json
{
"image": "Image(decode=True, id=None)",
"target": "ClassLabel(names=['classsicantique', 'frenchprovence', 'industrial', 'koreaaisa', 'lovelyromantic', 'minimalsimple', 'modern', 'natural', 'notherneurope', 'unique', 'vintatageretro'], id=None)"
}
```
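Since `target` is a `ClassLabel`, the integer targets shown in the sample instances map back to style names by index. A small sketch using exactly the class names listed above:

```python
# Class names exactly as declared in the ClassLabel field above.
names = ['classsicantique', 'frenchprovence', 'industrial', 'koreaaisa',
         'lovelyromantic', 'minimalsimple', 'modern', 'natural',
         'notherneurope', 'unique', 'vintatageretro']

def target_to_name(target):
    """Map an integer target from the dataset to its class-label string."""
    return names[target]

# The two sample instances above have targets 4 and 6.
labels = [target_to_name(4), target_to_name(6)]
```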
### Dataset Splits
This dataset is split into a train and validation split. The split sizes are as follow:
| Split name | Num samples |
| ------------ | ------------------- |
| train | 796 |
| valid | 204 |
| 1,094 | [
[
-0.0341796875,
0.00507354736328125,
0.0015869140625,
0.0153350830078125,
-0.0303192138671875,
0.01071929931640625,
-0.0091400146484375,
-0.027008056640625,
-0.002353668212890625,
0.03619384765625,
-0.04254150390625,
-0.0474853515625,
-0.02935791015625,
0.009... |