| datasetId | card |
|---|---|
johannes-garstenauer/balanced_structs_reduced_labelled_large | ---
dataset_info:
features:
- name: struct
dtype: string
- name: label
dtype: int64
splits:
- name: train
num_bytes: 407429998.0
num_examples: 1278338
download_size: 108755223
dataset_size: 407429998.0
---
# Dataset Card for "balanced_structs_reduced_labelled_large"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Atipico1/squad_v2_unique_questions | ---
dataset_info:
features:
- name: id
dtype: string
- name: title
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: text
dtype: string
- name: answer_start
dtype: int32
- name: masked_query
dtype: string
- name: query_embedding
sequence: float32
splits:
- name: train
num_bytes: 190748153
num_examples: 47491
download_size: 184171177
dataset_size: 190748153
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
iarejula/porto-seguro | ---
pretty_name: porto-seguro
---
# porto-seguro
Created from AIOD platform |
Dstycoon/Trial7 | ---
dataset_info:
features:
- name: data
dtype: string
- name: conversation
dtype: string
- name: predicted_disease
dtype: string
- name: rationale
dtype: string
splits:
- name: train
num_bytes: 72976
num_examples: 10
download_size: 0
dataset_size: 72976
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "Trial7"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
adamo1139/rawrr_v2-1-stage1 | ---
language:
- en
license: cc-by-nc-4.0
---
|
sproos/twitter-pairclass-fr | ---
dataset_info:
features:
- name: sent1
sequence: string
- name: sent2
sequence: string
- name: labels
sequence: int64
splits:
- name: train
num_bytes: 11962515
num_examples: 1
download_size: 4454872
dataset_size: 11962515
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "twitter-pairclass-fr"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
DebnathIan/PC_training | ---
license: apache-2.0
---
|
TinyPixel/multiturn | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 8959503
num_examples: 1609
download_size: 4751959
dataset_size: 8959503
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
nouman-10/reduced_dataset | ---
dataset_info:
features:
- name: image
dtype: image
- name: question
dtype: string
- name: answer
dtype: string
splits:
- name: train
num_bytes: 131672837.29910503
num_examples: 9553
- name: test
num_bytes: 14637972.70089496
num_examples: 1062
download_size: 137589436
dataset_size: 146310810.0
---
# Dataset Card for "reduced_dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
chowgi/DwightChatDataSet | ---
dataset_info:
features:
- name: formatted_chat
dtype: string
splits:
- name: train
num_bytes: 318430
num_examples: 1375
- name: test
num_bytes: 84954
num_examples: 343
download_size: 174901
dataset_size: 403384
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
|
TJUNLP/M3KE | ---
license: apache-2.0
task_categories:
- text-classification
- question-answering
- multiple-choice
language:
- zh
size_categories:
- 10K<n<100K
arxiv:
- 2305.10263
---
M3KE, or Massive Multi-Level Multi-Subject Knowledge Evaluation, is a benchmark developed to assess the knowledge acquired by large Chinese language models by evaluating their multitask accuracy in both zero- and few-shot settings. The benchmark comprises 20,477 questions spanning 71 tasks. For further information about M3KE, please consult our [paper](https://arxiv.org/abs/2305.10263) or visit our [GitHub](https://github.com/tjunlp-lab/M3KE) page.
## Load the data
```python
from datasets import load_dataset
ds = load_dataset(
path="TJUNLP/M3KE",
name="Computer Programming Language-Natural Sciences-Other"
)
print(ds)
"""
DatasetDict({
test: Dataset({
features: ['id', 'question', 'A', 'B', 'C', 'D', 'answer'],
num_rows: 236
})
dev: Dataset({
features: ['id', 'question', 'A', 'B', 'C', 'D', 'answer'],
num_rows: 5
})
})
"""
print(ds["test"][0])
"""
{'id': 0, 'question': '下面判断正确的是?', 'A': 'char str[10]={"china"}; 等价于 char str[10];str[]="china";', 'B': 'char *s="china"; 等价于 char *s;s="china"; ', 'C': 'char *a="china"; 等价于 char *a;*a="china";', 'D': 'char c[6]="china",d[6]="china"; 等 价 于 char c[6]=d[6]="china"; ', 'answer': ''}
"""
```
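Each item is a four-option multiple-choice question. For zero-shot evaluation, a row can be rendered into a prompt along these lines (an illustrative sketch only; the exact template used in the paper may differ):

```python
# Sketch (not the official evaluation script) of turning one M3KE row
# into a zero-shot multiple-choice prompt.
def to_prompt(example: dict) -> str:
    return (
        f"{example['question']}\n"
        f"A. {example['A']}\n"
        f"B. {example['B']}\n"
        f"C. {example['C']}\n"
        f"D. {example['D']}\n"
        "答案："
    )

# A stand-in row with placeholder options, shaped like ds["test"][0] above.
row = {
    "id": 0,
    "question": "下面判断正确的是?",
    "A": "选项一", "B": "选项二", "C": "选项三", "D": "选项四",
    "answer": "",
}
print(to_prompt(row))
```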
```bibtex
@misc{liu2023m3ke,
title={M3KE: A Massive Multi-Level Multi-Subject Knowledge Evaluation Benchmark for Chinese Large Language Models},
author={Chuang Liu and Renren Jin and Yuqi Ren and Linhao Yu and Tianyu Dong and Xiaohan Peng and Shuting Zhang and Jianxiang Peng and Peiyi Zhang and Qingqing Lyu and Xiaowen Su and Qun Liu and Deyi Xiong},
year={2023},
eprint={2305.10263},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` |
yzhuang/metatree_fri_c2_1000_10 | ---
dataset_info:
features:
- name: id
dtype: int64
- name: X
sequence: float64
- name: y
dtype: int64
splits:
- name: train
num_bytes: 71800
num_examples: 718
- name: validation
num_bytes: 28200
num_examples: 282
download_size: 105292
dataset_size: 100000
---
# Dataset Card for "metatree_fri_c2_1000_10"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
AustinMcMike/steve_jobs | ---
license: apache-2.0
---
Created from various interviews/quotes by Steve Jobs |
csebuetnlp/CrossSum | ---
task_categories:
- summarization
task_ids:
- news-articles-summarization
language:
- am
- ar
- az
- bn
- my
- zh
- en
- fr
- gu
- ha
- hi
- ig
- id
- ja
- rn
- ko
- ky
- mr
- ne
- om
- ps
- fa
- pcm
- pt
- pa
- ru
- gd
- sr
- si
- so
- es
- sw
- ta
- te
- th
- ti
- tr
- uk
- ur
- uz
- vi
- cy
- yo
size_categories:
- 1M<n<10M
license:
- cc-by-nc-sa-4.0
multilinguality:
- multilingual
source_datasets:
- original
annotations_creators:
- found
language_creators:
- found
pretty_name: CrossSum
---
# Dataset Card for "CrossSum"
## Table of Contents
- [Dataset Card Creation Guide](#dataset-card-creation-guide)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Initial Data Collection and Normalization](#initial-data-collection-and-normalization)
- [Who are the source language producers?](#who-are-the-source-language-producers)
- [Annotations](#annotations)
- [Annotation process](#annotation-process)
- [Who are the annotators?](#who-are-the-annotators)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Repository:** [https://github.com/csebuetnlp/CrossSum](https://github.com/csebuetnlp/CrossSum)
- **Paper:** [CrossSum: Beyond English-Centric Cross-Lingual Abstractive Text Summarization for 1500+ Language Pairs](https://arxiv.org/abs/2112.08804)
- **Point of Contact:** [Tahmid Hasan](mailto:tahmidhasan@cse.buet.ac.bd)
### Dataset Summary
We present CrossSum, a large-scale dataset comprising 1.70 million cross-lingual article-summary samples in 1,500+ language pairs constituting 45 languages. We use the multilingual XL-Sum dataset and align identical articles written in different languages via cross-lingual retrieval using a language-agnostic representation model.
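The alignment step can be pictured as nearest-neighbour retrieval over a shared multilingual embedding space. The sketch below is a simplified illustration of that idea, not the authors' pipeline; the `align` function and its similarity threshold are invented here for exposition:

```python
import numpy as np

# Simplified sketch of cross-lingual article alignment: embed articles
# from two languages with a language-agnostic encoder, then pair each
# source article with its most similar target article whenever cosine
# similarity clears a threshold. Illustrative only.
def align(src_embs: np.ndarray, tgt_embs: np.ndarray, threshold: float = 0.7):
    a = src_embs / np.linalg.norm(src_embs, axis=1, keepdims=True)
    b = tgt_embs / np.linalg.norm(tgt_embs, axis=1, keepdims=True)
    sim = a @ b.T  # cosine similarity matrix
    pairs = []
    for i in range(sim.shape[0]):
        j = int(sim[i].argmax())
        if sim[i, j] >= threshold:
            pairs.append((i, j))
    return pairs
```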
### Supported Tasks and Leaderboards
[More information needed](https://github.com/csebuetnlp/CrossSum)
### Languages
- `amharic`
- `arabic`
- `azerbaijani`
- `bengali`
- `burmese`
- `chinese_simplified`
- `chinese_traditional`
- `english`
- `french`
- `gujarati`
- `hausa`
- `hindi`
- `igbo`
- `indonesian`
- `japanese`
- `kirundi`
- `korean`
- `kyrgyz`
- `marathi`
- `nepali`
- `oromo`
- `pashto`
- `persian`
- `pidgin`
- `portuguese`
- `punjabi`
- `russian`
- `scottish_gaelic`
- `serbian_cyrillic`
- `serbian_latin`
- `sinhala`
- `somali`
- `spanish`
- `swahili`
- `tamil`
- `telugu`
- `thai`
- `tigrinya`
- `turkish`
- `ukrainian`
- `urdu`
- `uzbek`
- `vietnamese`
- `welsh`
- `yoruba`
## Loading the dataset
```python
from datasets import load_dataset
# for available language names, see above
src_lang = "english"
tgt_lang = "bengali"
ds = load_dataset("csebuetnlp/CrossSum", f"{src_lang}-{tgt_lang}")
```
## Dataset Structure
### Data Instances
One example from the `japanese-bengali` language pair is given below in JSON format.
```json
{
"source_url": "https://www.bbc.com/japanese/53074000",
"target_url": "https://www.bbc.com/bengali/news-53064712",
"summary": "বিজ্ঞানীরা বলছেন ডেক্সামেথাসোন নামে সস্তা ও সহজলভ্য একটি ওষুধ করোনাভাইরাসে গুরুতর অসুস্থ রোগীদের জীবন রক্ষা করতে সাহায্য করবে।",
"text": "ミシェル・ロバーツ、BBCニュースオンライン健康担当編集長 英オックスフォード大学の研究チームによると、低用量のデキサメタゾンは新型ウイルスとの戦いで画期的な突破口になる。 新型コロナウイルスに対し、様々な既存の治療法の効果を試す世界的規模の臨床試験の一貫として、デキサメタゾンが試された。 その結果、人工呼吸器を必要とする重症患者の致死率が3割下がり、酸素供給を必要とする患者の場合は2割下がった。 新型ウイルスのパンデミック(世界的流行)の初期からイギリスでデキサメタゾンを治療に使用していた場合、最大5000人の命が救えたはずだと研究者たちは言う。 さらに、新型コロナウイルスによる感染症「COVID-19」の患者が多く出ている貧しい国にとっても、安価なデキサメタゾンを使う治療は大いに役立つと期待される。 重症者の致死率が大幅に下がる イギリス政府は20万人分の投与量を備蓄しており、国民医療制度の国民保健サービス(NHS)で患者への使用を開始する方針を示した。 ボリス・ジョンソン英首相は「イギリス科学界の素晴らしい成果」を歓迎し、「たとえ感染の第2波が来ても備蓄が足りるよう、数を確保するための措置をとった」と述べた。 イングランド首席医務官クリス・ウィッティー教授は、「COVID-19にとってこれまでで一番重要な臨床試験結果だ。手に入りやすく安全でなじみのある薬によって、酸素供給や人工呼吸器が必要な人の致死率が大幅に下がった。(中略)この発見が世界中で人命を救う」と評価した。 <関連記事> 新型コロナウイルスに20人が感染した場合、19人は入院しないまま回復する。入院する人もほとんどは回復するものの、重症化して酸素供給や人工呼吸器を必要とする人もいる。 デキサメタゾンはこうした重症患者の治療に効果があるもよう。 新型ウイルスに感染した患者の体内では、ウイルスと戦う免疫系が暴走することがある。その免疫系の過剰反応による体の損傷を、デキサメタゾンが緩和するものとみられる。 「サイトカイン・ストーム」と呼ばれる免疫系の過剰反応が、患者の命を奪うこともある。 デキサメタゾンはすでに抗炎症剤として、ぜんそくや皮膚炎など様々な症状の治療に使われている。 初めて致死率を下げる薬 オックスフォード大学が主導する臨床試験は、約2000人の入院患者にデキサメタゾンを投与。それ以外の4000人以上の患者と容体を比較した。 人工呼吸器を使用する患者については、死亡リスクが40%から28%に下がった。 酸素供給する患者は、死亡リスクが25%から20%に下がった。 研究チームのピーター・ホービー教授は、「今のところ、致死率を実際に下げる結果が出たのは、この薬だけだ。しかも、致死率をかなり下げる。画期的な突破口だ」と話した。 研究を主導するマーティン・ランドレイ教授によると、人工呼吸器を使う患者の8人に1人、ならびに酸素供給治療を受ける患者の20-25人に1人が、デキサメタゾンで救えることが分かったという。 「これはきわめて明確なメリットだ」と教授は言う。 「最大10日間、デキサメタゾンを投与するという治療法で、費用は患者1人あたり1日約5ポンド(約670円)。つまり、35ポンド(約4700円)で人ひとりの命が救える」 「しかもこれは、世界中で手に入る薬だ」 状況が許す限り、新型コロナウイルスで入院中の患者にはただちに投与を開始すべきだと、ランドレイ教授は促した。 ただし、自宅で自己治療するために薬局に買いに行くべきではないと言う。 デキサメタゾンは、呼吸補助を必要としない軽症の患者には効果がないもよう。 3月に始動した新型コロナウイルス治療薬の無作為化臨床試験「リカバリー・トライアル」は、抗マラリア薬「ヒドロキシクロロキン」も調べたものの、心臓疾患や致死率の悪化につながるという懸念から、ヒドロキシクロロキンについては試験を中止した。 一方で、感染者の回復にかかる時間を短縮するとみられるレムデシビルは、すでにNHSの保険対象になり治療現場で使われている。 <解説> ファーガス・ウォルシュBBC健康担当編集委員 COVID-19の死者を減らすと初めて立証された薬は、高価な新しい薬ではなく、古くからずっと使われてきた、きわめて安いステロイド剤だった。 世界中の患者が直ちにその恩恵を受けることになるので、これは歓迎すべき発見だ。 この臨床試験の最新成果がこれほど急いで発表されたのは、そのためだ。とてつもない影響を世界中にもたらすので。 デキサメタゾンは1960年代初めから、関節リウマチやぜんそくなど、幅広い症状の治療に使われてきた。 
これまでは、人工呼吸器を必要とするCOVID-19患者の半数が亡くなってきた。その致死率を3割減らすというのは、絶大な効果だ。 集中治療室では点滴で投与する。もう少し軽症な患者には、錠剤で与える。 これまでのところ、COVID-19患者に効果があると証明された薬は、エボラ治療薬のレムデシビルだけだった。 レムデシビルは症状の回復期間を15日から11日に短縮する。しかし、致死率を下げると言えるだけの証拠は出ていなかった。 デキサメタゾンと異なり、レムデシビルは数の少ない新薬で、薬価もまだ公表されていない。"
}
```
### Data Fields
- 'source_url': A string representing the source article URL.
- 'target_url': A string representing the target article URL.
- 'summary': A string containing the article summary.
- 'text' : A string containing the article text.
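The four fields above form a flat record of strings; a minimal sanity check over one record might look like this (a sketch; `REQUIRED_FIELDS` and `is_valid_record` are defined here for illustration and are not shipped with the dataset):

```python
# Minimal sketch of validating a single CrossSum record against the
# field list above; the dataset itself does not provide this helper.
REQUIRED_FIELDS = ("source_url", "target_url", "summary", "text")

def is_valid_record(record: dict) -> bool:
    return all(isinstance(record.get(f), str) for f in REQUIRED_FIELDS)
```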
### Data Splits
The total number of examples for each language pair is as follows (the two `sr` columns and rows are Serbian Cyrillic and Serbian Latin, respectively):
Language (ISO 639-1 code) | am | ar | az | bn | my | zh-CN | zh-TW | en | fr | gu | ha | hi | ig | id | ja | rn | ko | ky | mr | ne | om | ps | fa | pcm | pt | pa | ru | gd | sr | sr | si | so | es | sw | ta | te | th | ti | tr | uk | ur | uz | vi | cy | yo
----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- | -----
am | -- | 667 | 100 | 272 | 95 | 179 | 167 | 1456 | 358 | 173 | 221 | 377 | 26 | 494 | 264 | 423 | 244 | 92 | 221 | 301 | 21 | 192 | 431 | 209 | 307 | 189 | 347 | 0 | 357 | 365 | 62 | 309 | 351 | 378 | 390 | 329 | 124 | 131 | 435 | 345 | 409 | 41 | 285 | 1 | 67
ar | 667 | -- | 787 | 804 | 652 | 2968 | 2843 | 9653 | 989 | 475 | 747 | 3665 | 86 | 6084 | 1188 | 876 | 707 | 299 | 559 | 854 | 9 | 2161 | 4186 | 436 | 2539 | 547 | 5564 | 1 | 1109 | 1145 | 315 | 1049 | 3654 | 1186 | 1311 | 877 | 367 | 27 | 4147 | 3457 | 4935 | 388 | 2666 | 38 | 141
az | 100 | 787 | -- | 277 | 84 | 371 | 334 | 1317 | 208 | 192 | 126 | 748 | 28 | 1111 | 231 | 188 | 155 | 221 | 194 | 242 | 1 | 252 | 817 | 91 | 678 | 190 | 2238 | 4 | 289 | 283 | 124 | 367 | 704 | 539 | 515 | 245 | 140 | 2 | 1495 | 1383 | 966 | 199 | 725 | 30 | 42
bn | 272 | 804 | 277 | -- | 139 | 318 | 284 | 1549 | 317 | 559 | 231 | 1396 | 35 | 1076 | 342 | 298 | 352 | 154 | 586 | 668 | 2 | 300 | 790 | 135 | 764 | 580 | 838 | 0 | 562 | 564 | 151 | 412 | 701 | 471 | 919 | 793 | 245 | 6 | 860 | 688 | 1382 | 98 | 527 | 37 | 61
my | 95 | 652 | 84 | 139 | -- | 356 | 314 | 685 | 90 | 96 | 74 | 528 | 12 | 761 | 144 | 100 | 112 | 58 | 89 | 152 | 1 | 234 | 426 | 39 | 230 | 86 | 535 | 0 | 115 | 123 | 87 | 79 | 431 | 86 | 185 | 147 | 71 | 4 | 449 | 350 | 591 | 62 | 447 | 4 | 12
zh-CN | 179 | 2968 | 371 | 318 | 356 | -- | 47101 | 4975 | 348 | 201 | 159 | 1379 | 38 | 2851 | 1017 | 240 | 412 | 139 | 240 | 275 | 14 | 559 | 1111 | 149 | 1371 | 250 | 2572 | 2 | 504 | 530 | 166 | 323 | 2002 | 412 | 511 | 353 | 269 | 11 | 1511 | 1619 | 1651 | 176 | 1858 | 33 | 39
zh-TW | 167 | 2843 | 334 | 284 | 314 | 47101 | -- | 4884 | 331 | 174 | 150 | 1213 | 35 | 2588 | 953 | 209 | 382 | 131 | 213 | 252 | 16 | 501 | 967 | 141 | 1271 | 226 | 2286 | 1 | 453 | 494 | 150 | 302 | 1873 | 383 | 465 | 335 | 250 | 12 | 1294 | 1464 | 1444 | 158 | 1663 | 31 | 38
en | 1456 | 9653 | 1317 | 1549 | 685 | 4975 | 4884 | -- | 1889 | 978 | 913 | 4728 | 144 | 10040 | 3040 | 1878 | 1673 | 490 | 1181 | 1614 | 38 | 1522 | 4680 | 1074 | 4744 | 1330 | 9080 | 128 | 3760 | 3809 | 532 | 2141 | 6910 | 2701 | 3156 | 2121 | 1020 | 58 | 5676 | 6562 | 6320 | 450 | 4574 | 2655 | 229
fr | 358 | 989 | 208 | 317 | 90 | 348 | 331 | 1889 | -- | 242 | 477 | 616 | 106 | 1018 | 274 | 735 | 264 | 124 | 241 | 323 | 4 | 196 | 602 | 439 | 921 | 247 | 849 | 2 | 555 | 569 | 98 | 502 | 990 | 872 | 425 | 380 | 185 | 10 | 829 | 721 | 766 | 76 | 438 | 40 | 159
gu | 173 | 475 | 192 | 559 | 96 | 201 | 174 | 978 | 242 | -- | 147 | 5170 | 34 | 710 | 228 | 183 | 268 | 106 | 2091 | 561 | 1 | 246 | 522 | 101 | 529 | 2210 | 582 | 0 | 331 | 345 | 125 | 261 | 540 | 300 | 1762 | 2066 | 164 | 5 | 631 | 508 | 1619 | 80 | 450 | 21 | 54
ha | 221 | 747 | 126 | 231 | 74 | 159 | 150 | 913 | 477 | 147 | -- | 460 | 202 | 901 | 157 | 485 | 135 | 61 | 159 | 239 | 5 | 229 | 487 | 529 | 375 | 157 | 525 | 1 | 258 | 258 | 49 | 391 | 463 | 568 | 299 | 260 | 87 | 9 | 519 | 400 | 526 | 59 | 352 | 30 | 362
hi | 377 | 3665 | 748 | 1396 | 528 | 1379 | 1213 | 4728 | 616 | 5170 | 460 | -- | 65 | 5627 | 623 | 489 | 520 | 234 | 3831 | 1357 | 4 | 1519 | 5351 | 192 | 6563 | 4052 | 4622 | 1 | 809 | 807 | 449 | 747 | 2931 | 893 | 3711 | 3762 | 378 | 7 | 3694 | 3935 | 15666 | 352 | 3738 | 77 | 79
ig | 26 | 86 | 28 | 35 | 12 | 38 | 35 | 144 | 106 | 34 | 202 | 65 | -- | 113 | 24 | 107 | 32 | 16 | 51 | 36 | 3 | 11 | 49 | 255 | 61 | 39 | 79 | 0 | 51 | 51 | 13 | 77 | 91 | 151 | 52 | 54 | 18 | 5 | 91 | 83 | 61 | 15 | 65 | 6 | 296
id | 494 | 6084 | 1111 | 1076 | 761 | 2851 | 2588 | 10040 | 1018 | 710 | 901 | 5627 | 113 | -- | 1274 | 994 | 774 | 347 | 745 | 1104 | 8 | 1430 | 3892 | 367 | 4409 | 725 | 7588 | 7 | 1387 | 1379 | 470 | 1312 | 4547 | 1873 | 1886 | 1131 | 599 | 9 | 5663 | 4829 | 6476 | 432 | 4810 | 145 | 174
ja | 264 | 1188 | 231 | 342 | 144 | 1017 | 953 | 3040 | 274 | 228 | 157 | 623 | 24 | 1274 | -- | 372 | 654 | 140 | 302 | 424 | 2 | 266 | 1014 | 152 | 706 | 269 | 1517 | 2 | 550 | 571 | 109 | 387 | 950 | 425 | 641 | 425 | 305 | 5 | 1242 | 1013 | 797 | 49 | 908 | 25 | 33
rn | 423 | 876 | 188 | 298 | 100 | 240 | 209 | 1878 | 735 | 183 | 485 | 489 | 107 | 994 | 372 | -- | 283 | 106 | 242 | 369 | 18 | 228 | 684 | 398 | 526 | 206 | 711 | 0 | 443 | 450 | 77 | 584 | 607 | 1186 | 521 | 363 | 149 | 13 | 724 | 610 | 617 | 59 | 631 | 20 | 180
ko | 244 | 707 | 155 | 352 | 112 | 412 | 382 | 1673 | 264 | 268 | 135 | 520 | 32 | 774 | 654 | 283 | -- | 99 | 319 | 445 | 1 | 150 | 596 | 130 | 587 | 264 | 649 | 0 | 522 | 543 | 81 | 234 | 613 | 324 | 541 | 452 | 197 | 5 | 680 | 616 | 532 | 54 | 530 | 12 | 45
ky | 92 | 299 | 221 | 154 | 58 | 139 | 131 | 490 | 124 | 106 | 61 | 234 | 16 | 347 | 140 | 106 | 99 | -- | 107 | 167 | 4 | 102 | 252 | 59 | 251 | 118 | 1013 | 1 | 206 | 211 | 45 | 145 | 279 | 150 | 206 | 174 | 109 | 3 | 346 | 508 | 270 | 113 | 201 | 12 | 23
mr | 221 | 559 | 194 | 586 | 89 | 240 | 213 | 1181 | 241 | 2091 | 159 | 3831 | 51 | 745 | 302 | 242 | 319 | 107 | -- | 630 | 1 | 232 | 608 | 138 | 524 | 1797 | 675 | 0 | 419 | 436 | 129 | 270 | 603 | 332 | 1776 | 1886 | 196 | 11 | 706 | 596 | 1395 | 79 | 473 | 16 | 48
ne | 301 | 854 | 242 | 668 | 152 | 275 | 252 | 1614 | 323 | 561 | 239 | 1357 | 36 | 1104 | 424 | 369 | 445 | 167 | 630 | -- | 1 | 303 | 916 | 134 | 706 | 545 | 849 | 2 | 553 | 538 | 164 | 420 | 687 | 513 | 994 | 741 | 217 | 7 | 930 | 741 | 1156 | 84 | 719 | 39 | 65
om | 21 | 9 | 1 | 2 | 1 | 14 | 16 | 38 | 4 | 1 | 5 | 4 | 3 | 8 | 2 | 18 | 1 | 4 | 1 | 1 | -- | 2 | 3 | 11 | 4 | 6 | 8 | 0 | 2 | 3 | 0 | 6 | 7 | 5 | 2 | 2 | 1 | 103 | 5 | 10 | 1 | 4 | 2 | 0 | 7
ps | 192 | 2161 | 252 | 300 | 234 | 559 | 501 | 1522 | 196 | 246 | 229 | 1519 | 11 | 1430 | 266 | 228 | 150 | 102 | 232 | 303 | 2 | -- | 2815 | 94 | 594 | 249 | 1246 | 0 | 235 | 242 | 156 | 304 | 766 | 314 | 441 | 314 | 92 | 8 | 1049 | 818 | 2833 | 156 | 657 | 7 | 32
fa | 431 | 4186 | 817 | 790 | 426 | 1111 | 967 | 4680 | 602 | 522 | 487 | 5351 | 49 | 3892 | 1014 | 684 | 596 | 252 | 608 | 916 | 3 | 2815 | -- | 186 | 5512 | 541 | 4328 | 0 | 1028 | 1023 | 276 | 812 | 2512 | 1002 | 1250 | 797 | 364 | 8 | 3695 | 3567 | 6752 | 313 | 3190 | 66 | 74
pcm | 209 | 436 | 91 | 135 | 39 | 149 | 141 | 1074 | 439 | 101 | 529 | 192 | 255 | 367 | 152 | 398 | 130 | 59 | 138 | 134 | 11 | 94 | 186 | -- | 227 | 112 | 322 | 0 | 234 | 246 | 28 | 219 | 314 | 436 | 232 | 162 | 85 | 28 | 287 | 280 | 232 | 18 | 170 | 9 | 462
pt | 307 | 2539 | 678 | 764 | 230 | 1371 | 1271 | 4744 | 921 | 529 | 375 | 6563 | 61 | 4409 | 706 | 526 | 587 | 251 | 524 | 706 | 4 | 594 | 5512 | 227 | -- | 579 | 4452 | 7 | 1371 | 1341 | 231 | 602 | 7112 | 983 | 1042 | 820 | 468 | 3 | 3483 | 4421 | 6759 | 186 | 3754 | 110 | 97
pa | 189 | 547 | 190 | 580 | 86 | 250 | 226 | 1330 | 247 | 2210 | 157 | 4052 | 39 | 725 | 269 | 206 | 264 | 118 | 1797 | 545 | 6 | 249 | 541 | 112 | 579 | -- | 629 | 0 | 410 | 404 | 128 | 283 | 585 | 357 | 1726 | 1892 | 200 | 10 | 643 | 570 | 1515 | 73 | 431 | 16 | 44
ru | 347 | 5564 | 2238 | 838 | 535 | 2572 | 2286 | 9080 | 849 | 582 | 525 | 4622 | 79 | 7588 | 1517 | 711 | 649 | 1013 | 675 | 849 | 8 | 1246 | 4328 | 322 | 4452 | 629 | -- | 5 | 1495 | 1460 | 373 | 1166 | 4864 | 1672 | 1628 | 892 | 595 | 7 | 6223 | 22241 | 5309 | 809 | 3963 | 134 | 125
gd | 0 | 1 | 4 | 0 | 0 | 2 | 1 | 128 | 2 | 0 | 1 | 1 | 0 | 7 | 2 | 0 | 0 | 1 | 0 | 2 | 0 | 0 | 0 | 0 | 7 | 0 | 5 | -- | 2 | 3 | 2 | 1 | 3 | 1 | 0 | 0 | 1 | 0 | 6 | 5 | 2 | 1 | 3 | 36 | 2
sr | 357 | 1109 | 289 | 562 | 115 | 504 | 453 | 3760 | 555 | 331 | 258 | 809 | 51 | 1387 | 550 | 443 | 522 | 206 | 419 | 553 | 2 | 235 | 1028 | 234 | 1371 | 410 | 1495 | 2 | -- | 9041 | 127 | 377 | 1235 | 574 | 761 | 691 | 340 | 6 | 1247 | 1512 | 1021 | 109 | 685 | 42 | 69
sr | 365 | 1145 | 283 | 564 | 123 | 530 | 494 | 3809 | 569 | 345 | 258 | 807 | 51 | 1379 | 571 | 450 | 543 | 211 | 436 | 538 | 3 | 242 | 1023 | 246 | 1341 | 404 | 1460 | 3 | 9041 | -- | 137 | 382 | 1260 | 568 | 775 | 699 | 347 | 10 | 1229 | 1498 | 1009 | 112 | 639 | 45 | 79
si | 62 | 315 | 124 | 151 | 87 | 166 | 150 | 532 | 98 | 125 | 49 | 449 | 13 | 470 | 109 | 77 | 81 | 45 | 129 | 164 | 0 | 156 | 276 | 28 | 231 | 128 | 373 | 2 | 127 | 137 | -- | 137 | 260 | 189 | 348 | 173 | 69 | 7 | 301 | 306 | 510 | 38 | 216 | 5 | 15
so | 309 | 1049 | 367 | 412 | 79 | 323 | 302 | 2141 | 502 | 261 | 391 | 747 | 77 | 1312 | 387 | 584 | 234 | 145 | 270 | 420 | 6 | 304 | 812 | 219 | 602 | 283 | 1166 | 1 | 377 | 382 | 137 | -- | 689 | 1020 | 723 | 384 | 178 | 19 | 968 | 875 | 1000 | 75 | 724 | 20 | 116
es | 351 | 3654 | 704 | 701 | 431 | 2002 | 1873 | 6910 | 990 | 540 | 463 | 2931 | 91 | 4547 | 950 | 607 | 613 | 279 | 603 | 687 | 7 | 766 | 2512 | 314 | 7112 | 585 | 4864 | 3 | 1235 | 1260 | 260 | 689 | -- | 1047 | 1073 | 827 | 469 | 10 | 3645 | 3130 | 3060 | 290 | 2330 | 59 | 133
sw | 378 | 1186 | 539 | 471 | 86 | 412 | 383 | 2701 | 872 | 300 | 568 | 893 | 151 | 1873 | 425 | 1186 | 324 | 150 | 332 | 513 | 5 | 314 | 1002 | 436 | 983 | 357 | 1672 | 1 | 574 | 568 | 189 | 1020 | 1047 | -- | 929 | 492 | 261 | 10 | 1348 | 1309 | 1253 | 90 | 936 | 37 | 219
ta | 390 | 1311 | 515 | 919 | 185 | 511 | 465 | 3156 | 425 | 1762 | 299 | 3711 | 52 | 1886 | 641 | 521 | 541 | 206 | 1776 | 994 | 2 | 441 | 1250 | 232 | 1042 | 1726 | 1628 | 0 | 761 | 775 | 348 | 723 | 1073 | 929 | -- | 2278 | 400 | 14 | 1486 | 1423 | 2404 | 134 | 1092 | 32 | 68
te | 329 | 877 | 245 | 793 | 147 | 353 | 335 | 2121 | 380 | 2066 | 260 | 3762 | 54 | 1131 | 425 | 363 | 452 | 174 | 1886 | 741 | 2 | 314 | 797 | 162 | 820 | 1892 | 892 | 0 | 691 | 699 | 173 | 384 | 827 | 492 | 2278 | -- | 306 | 11 | 893 | 832 | 1748 | 107 | 644 | 21 | 61
th | 124 | 367 | 140 | 245 | 71 | 269 | 250 | 1020 | 185 | 164 | 87 | 378 | 18 | 599 | 305 | 149 | 197 | 109 | 196 | 217 | 1 | 92 | 364 | 85 | 468 | 200 | 595 | 1 | 340 | 347 | 69 | 178 | 469 | 261 | 400 | 306 | -- | 5 | 477 | 480 | 414 | 37 | 357 | 10 | 26
ti | 131 | 27 | 2 | 6 | 4 | 11 | 12 | 58 | 10 | 5 | 9 | 7 | 5 | 9 | 5 | 13 | 5 | 3 | 11 | 7 | 103 | 8 | 8 | 28 | 3 | 10 | 7 | 0 | 6 | 10 | 7 | 19 | 10 | 10 | 14 | 11 | 5 | -- | 8 | 8 | 4 | 2 | 5 | 0 | 6
tr | 435 | 4147 | 1495 | 860 | 449 | 1511 | 1294 | 5676 | 829 | 631 | 519 | 3694 | 91 | 5663 | 1242 | 724 | 680 | 346 | 706 | 930 | 5 | 1049 | 3695 | 287 | 3483 | 643 | 6223 | 6 | 1247 | 1229 | 301 | 968 | 3645 | 1348 | 1486 | 893 | 477 | 8 | -- | 4108 | 4340 | 370 | 2981 | 126 | 130
uk | 345 | 3457 | 1383 | 688 | 350 | 1619 | 1464 | 6562 | 721 | 508 | 400 | 3935 | 83 | 4829 | 1013 | 610 | 616 | 508 | 596 | 741 | 10 | 818 | 3567 | 280 | 4421 | 570 | 22241 | 5 | 1512 | 1498 | 306 | 875 | 3130 | 1309 | 1423 | 832 | 480 | 8 | 4108 | -- | 4290 | 442 | 3017 | 108 | 89
ur | 409 | 4935 | 966 | 1382 | 591 | 1651 | 1444 | 6320 | 766 | 1619 | 526 | 15666 | 61 | 6476 | 797 | 617 | 532 | 270 | 1395 | 1156 | 1 | 2833 | 6752 | 232 | 6759 | 1515 | 5309 | 2 | 1021 | 1009 | 510 | 1000 | 3060 | 1253 | 2404 | 1748 | 414 | 4 | 4340 | 4290 | -- | 389 | 3723 | 72 | 88
uz | 41 | 388 | 199 | 98 | 62 | 176 | 158 | 450 | 76 | 80 | 59 | 352 | 15 | 432 | 49 | 59 | 54 | 113 | 79 | 84 | 4 | 156 | 313 | 18 | 186 | 73 | 809 | 1 | 109 | 112 | 38 | 75 | 290 | 90 | 134 | 107 | 37 | 2 | 370 | 442 | 389 | -- | 257 | 10 | 15
vi | 285 | 2666 | 726 | 527 | 447 | 1858 | 1663 | 4575 | 438 | 450 | 352 | 3738 | 65 | 4810 | 908 | 631 | 530 | 201 | 473 | 719 | 2 | 657 | 3190 | 170 | 3755 | 431 | 3963 | 3 | 685 | 639 | 216 | 724 | 2330 | 936 | 1092 | 644 | 357 | 5 | 2982 | 3017 | 3723 | 257 | -- | 106 | 76
cy | 1 | 38 | 30 | 37 | 4 | 33 | 31 | 2655 | 40 | 21 | 30 | 77 | 6 | 145 | 25 | 20 | 12 | 12 | 16 | 39 | 0 | 7 | 66 | 9 | 110 | 16 | 134 | 36 | 42 | 45 | 5 | 20 | 59 | 37 | 32 | 21 | 10 | 0 | 126 | 108 | 72 | 10 | 106 | -- | 8
yo | 67 | 141 | 42 | 61 | 12 | 39 | 38 | 229 | 159 | 54 | 362 | 79 | 296 | 174 | 33 | 180 | 45 | 23 | 48 | 65 | 7 | 32 | 74 | 462 | 97 | 44 | 125 | 2 | 69 | 79 | 15 | 116 | 133 | 219 | 68 | 61 | 26 | 6 | 130 | 89 | 88 | 15 | 76 | 8 | --
## Dataset Creation
### Curation Rationale
[More information needed](https://github.com/csebuetnlp/CrossSum)
### Source Data
[BBC News](https://www.bbc.co.uk/ws/languages)
#### Initial Data Collection and Normalization
[Detailed in the paper](https://arxiv.org/abs/2112.08804/)
#### Who are the source language producers?
[Detailed in the paper](https://arxiv.org/abs/2112.08804/)
### Annotations
[Detailed in the paper](https://arxiv.org/abs/2112.08804/)
#### Annotation process
[Detailed in the paper](https://arxiv.org/abs/2112.08804/)
#### Who are the annotators?
[Detailed in the paper](https://arxiv.org/abs/2112.08804/)
### Personal and Sensitive Information
[More information needed](https://github.com/csebuetnlp/CrossSum)
## Considerations for Using the Data
### Social Impact of Dataset
[More information needed](https://github.com/csebuetnlp/CrossSum)
### Discussion of Biases
[More information needed](https://github.com/csebuetnlp/CrossSum)
### Other Known Limitations
[More information needed](https://github.com/csebuetnlp/CrossSum)
## Additional Information
### Dataset Curators
[More information needed](https://github.com/csebuetnlp/CrossSum)
### Licensing Information
Contents of this repository are restricted to only non-commercial research purposes under the [Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License (CC BY-NC-SA 4.0)](https://creativecommons.org/licenses/by-nc-sa/4.0/). Copyright of the dataset contents belongs to the original copyright holders.
### Citation Information
If you use any of the datasets, models or code modules, please cite the following paper:
```bibtex
@article{hasan2021crosssum,
author = {Tahmid Hasan and Abhik Bhattacharjee and Wasi Uddin Ahmad and Yuan-Fang Li and Yong-bin Kang and Rifat Shahriyar},
title = {CrossSum: Beyond English-Centric Cross-Lingual Abstractive Text Summarization for 1500+ Language Pairs},
journal = {CoRR},
volume = {abs/2112.08804},
year = {2021},
url = {https://arxiv.org/abs/2112.08804},
eprinttype = {arXiv},
eprint = {2112.08804}
}
```
### Contributions
Thanks to [@abhik1505040](https://github.com/abhik1505040) and [@Tahmid](https://github.com/Tahmid04) for adding this dataset. |
HuckAI/HuckV1 | ---
dataset_info:
features:
- name: dialog
list:
- name: is_paid
dtype: bool
- name: media
dtype: string
- name: source
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 540160327
num_examples: 415199
download_size: 278089826
dataset_size: 540160327
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
samurai-architects/edo-hakkei | ---
dataset_info:
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 16710416.0
num_examples: 8
download_size: 16693076
dataset_size: 16710416.0
---
# Dataset Card for "edo-hakkei"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
wisdomik/QUILT-LLaVA-Instruct-107K | ---
license: cc-by-nc-nd-3.0
task_categories:
- visual-question-answering
- question-answering
language:
- en
pretty_name: QUILT-LLaVA Visual Instruct 107K
size_categories:
- 100K<n<1M
extra_gated_prompt: >-
Please read and agree to the following terms: 1. The requester details
provided are not faked. 2. The resource will not be used for
commercial/clinical purposes and will be used for scientific research only. 3.
The data will not be re-distributed, published, copied, or further
disseminated in any way or form whatsoever, whether for profit or not. 4. The
  relevant papers (Quilt-1M (https://quilt1m.github.io/) and Quilt-LLaVA
  (https://quilt-llava.github.io)) will be cited in any publication(s)
  that use this model/data.
extra_gated_fields:
Email: text
First and last name: text
Affiliation: text
Type of Affiliation:
type: select
options:
- Academia
- Industry
- Other
I want to use this model for:
type: select
options:
- Research
- Education
- label: Other
value: other
I agree to the aforementioned terms of use: checkbox
---
**QUILT-LLaVA Visual Instruct 107K Dataset Card**
**Paper: Quilt-LLaVA: Visual Instruction Tuning by Extracting Localized Narratives from Open-Source Histopathology Videos**
**Paper or resources for more information:**
https://quilt-llava.github.io/
<p align="center">
<img src="https://quilt-llava.github.io/static/images/clusters2.png" alt="fig2" width="90%"/>
</p>
**Description and Details**
1. YouTube educational histopathology videos are a valuable source of grounded histopathology data for instructional purposes, particularly for visual instruction tuning.
2. Similar to LLaVA, the approach uses independent prompts to generate Q&A pairs from image captions with GPT-4. In contrast to LLaVA-Med, it adds spatial grounding by extracting mouse-pointer traces to link the narrator's speech to specific regions of images, improving spatial awareness.
3. Traditional image-caption datasets often lack contextual connections, limiting Q/A pairs generated by GPT-4 to the context of a single image. For histopathology, which requires holistic analysis, the proposal suggests reasoning-based prompting techniques. These techniques include Complex Reasoning, where GPT-4 uses diagnosis and contributory facts to extrapolate beyond the immediate context, and Iterative Abductive Reasoning, which simulates a conversation between two GPT-4 agents for in-depth medical questioning and evaluation.
4. In Complex Reasoning, GPT-4 is prompted with a caption, diagnosis, and facts to perform diagnostic reasoning that goes beyond the single image context.
5. In Iterative Abductive Reasoning, a conversation is simulated between two GPT-4 agents: Human-GPT, provided with a single image caption for abductive reasoning, and AI Assistant GPT, which has access to diagnosis and facts to provide feedback, resembling a professional medical consultation. This iterative process continues until a conclusion is reached.
<p align="center">
<img src="https://quilt-llava.github.io/static/images/iterative_1.png" alt="fig2" width="90%"/>
</p>
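The two-agent exchange described in point 5 can be sketched as a simple loop. Everything below (the function names, the stop token) is hypothetical scaffolding around real GPT-4 API calls, not the authors' released code:

```python
# Hypothetical sketch of the Iterative Abductive Reasoning loop: Human-GPT
# sees only the caption; Assistant-GPT also sees diagnosis and facts, and
# the dialogue ends when the assistant emits a conclusion marker.
def iterative_abductive_dialogue(caption, diagnosis, facts,
                                 ask_human_gpt, ask_assistant_gpt,
                                 max_turns=5, stop_token="CONCLUDED"):
    transcript = []
    for _ in range(max_turns):
        # Human-GPT performs abductive reasoning from the caption alone.
        question = ask_human_gpt(caption, transcript)
        # Assistant-GPT answers with access to diagnosis and facts.
        answer = ask_assistant_gpt(diagnosis, facts, question, transcript)
        transcript.append((question, answer))
        if stop_token in answer:
            break
    return transcript
```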
**Dataset date:**
QUILT-LLaVA Visual Instruct 107K was collected in November 2023, by prompting GPT-4-turbo API.
**License:**
MIT License; use of the data must also abide by the OpenAI terms of use: https://openai.com/policies/terms-of-use
**Where to send questions or comments about the model:**
https://github.com/quilt-llava/quilt-llava.github.io/issues
**Primary intended uses:**
The primary use of QUILT-LLaVA is research on histopathology large multimodal models and chatbots.
**Primary intended users:**
The dataset is intended as a research resource for research communities. We hope that this dataset will enable researchers to better understand and explore the generative capacity of medical large multimodal models.
**Citation**
```bibtex
@misc{seyfioglu2023quiltllava,
title={Quilt-LLaVA: Visual Instruction Tuning by Extracting Localized Narratives from Open-Source Histopathology Videos},
author={Mehmet Saygin Seyfioglu and Wisdom O. Ikezogwo and Fatemeh Ghezloo and Ranjay Krishna and Linda Shapiro},
year={2023},
eprint={2312.04746},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
```
```bibtex
@misc{ikezogwo2023quilt1m,
title={Quilt-1M: One Million Image-Text Pairs for Histopathology},
author={Wisdom Oluchi Ikezogwo and Mehmet Saygin Seyfioglu and Fatemeh Ghezloo and Dylan Stefan Chan Geva and Fatwir Sheikh Mohammed and Pavan Kumar Anand and Ranjay Krishna and Linda Shapiro},
year={2023},
eprint={2306.11207},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
```
[](https://creativecommons.org/licenses/by-nc/3.0/us/deed.en) [-red.svg)](https://en.wikipedia.org/wiki/MIT_License) [](https://en.wikipedia.org/wiki/MIT_License)
**Usage and License Notices**: The data, code, and model checkpoints are intended and licensed for research use only. They are also subject to the additional restrictions dictated by the Terms of Use of QUILT-1M, LLaMA, Vicuna, and GPT-4, respectively. The model is made available under the CC BY-NC 3.0 license, and the data and code under CC BY-NC-ND 3.0 with an additional Data Use Agreement (DUA). The data, code, and model checkpoints may be used for non-commercial purposes only, and any models trained using the dataset should be used solely for research purposes. It is expressly prohibited to use models trained on this data in clinical care or for any clinical decision-making purposes.
|
CyberHarem/impero_azurlane | ---
license: mit
task_categories:
- text-to-image
tags:
- art
- not-for-all-audiences
size_categories:
- n<1K
---
# Dataset of impero/インペロ/帝国 (Azur Lane)
This is the dataset of impero/インペロ/帝国 (Azur Lane), containing 32 images and their tags.
The core tags of this character are `breasts, long_hair, large_breasts, brown_hair, bangs, blue_eyes, very_long_hair, ahoge, bow, hair_between_eyes`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:----------|:-----------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 32 | 60.14 MiB | [Download](https://huggingface.co/datasets/CyberHarem/impero_azurlane/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 32 | 30.55 MiB | [Download](https://huggingface.co/datasets/CyberHarem/impero_azurlane/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 80 | 63.37 MiB | [Download](https://huggingface.co/datasets/CyberHarem/impero_azurlane/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 32 | 52.90 MiB | [Download](https://huggingface.co/datasets/CyberHarem/impero_azurlane/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 80 | 96.33 MiB | [Download](https://huggingface.co/datasets/CyberHarem/impero_azurlane/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for loading with [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html). If you need it, run the following code:
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/impero_azurlane',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
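The `IMG+TXT` packages listed above pair each image with a same-named `.txt` tag file. Assuming that layout (a flat directory of `name.ext` images with sibling `name.txt` tag files), a minimal loader can be sketched as follows; the function name and extension list are illustrative, not part of the official tooling:

```python
import os

# Common image extensions in the extracted IMG+TXT packages (assumed).
IMAGE_EXTS = {".png", ".jpg", ".jpeg", ".webp"}

def load_img_txt_pairs(dataset_dir):
    """Pair each image in dataset_dir with its same-named .txt tag file.

    Returns a list of (image_path, tags) tuples; tags is None when no
    matching .txt file exists.
    """
    pairs = []
    for name in sorted(os.listdir(dataset_dir)):
        stem, ext = os.path.splitext(name)
        if ext.lower() not in IMAGE_EXTS:
            continue
        txt_path = os.path.join(dataset_dir, stem + ".txt")
        tags = None
        if os.path.exists(txt_path):
            with open(txt_path, encoding="utf-8") as f:
                tags = f.read().strip()
        pairs.append((os.path.join(dataset_dir, name), tags))
    return pairs
```

The returned tag strings are comma-separated booru-style tags, suitable for feeding directly into a LoRA training caption pipeline.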
## List of Clusters
List of tag clustering results; some outfits may be mined from these clusters.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 19 |  |  |  |  |  | 1girl, cleavage, looking_at_viewer, solo, white_shirt, collared_shirt, bare_shoulders, official_alternate_costume, sleeveless_shirt, blush, brown_cardigan, open_clothes, black_bowtie, partially_unbuttoned, plaid_skirt, pleated_skirt, black_skirt, leaf, loose_bowtie, school_uniform, sitting |
| 1 | 5 |  |  |  |  |  | 1girl, bare_shoulders, looking_at_viewer, solo, aiguillette, cleavage, clothing_cutout, collarbone, gold_trim, white_dress, cape, full_body, long_sleeves, medium_breasts, white_footwear, black_choker, collared_dress, cutout_above_navel, green_eyes, italian_flag, military, pink_hair, short_dress, two-tone_dress, uniform |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | cleavage | looking_at_viewer | solo | white_shirt | collared_shirt | bare_shoulders | official_alternate_costume | sleeveless_shirt | blush | brown_cardigan | open_clothes | black_bowtie | partially_unbuttoned | plaid_skirt | pleated_skirt | black_skirt | leaf | loose_bowtie | school_uniform | sitting | aiguillette | clothing_cutout | collarbone | gold_trim | white_dress | cape | full_body | long_sleeves | medium_breasts | white_footwear | black_choker | collared_dress | cutout_above_navel | green_eyes | italian_flag | military | pink_hair | short_dress | two-tone_dress | uniform |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:-----------|:--------------------|:-------|:--------------|:-----------------|:-----------------|:-----------------------------|:-------------------|:--------|:-----------------|:---------------|:---------------|:-----------------------|:--------------|:----------------|:--------------|:-------|:---------------|:-----------------|:----------|:--------------|:------------------|:-------------|:------------|:--------------|:-------|:------------|:---------------|:-----------------|:-----------------|:---------------|:-----------------|:---------------------|:-------------|:---------------|:-----------|:------------|:--------------|:-----------------|:----------|
| 0 | 19 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | |
| 1 | 5 |  |  |  |  |  | X | X | X | X | | | X | | | | | | | | | | | | | | | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X |
|
hsali/librespeech_dev_clean | ---
dataset_info:
features:
- name: input_values
sequence: float32
- name: labels
dtype: int64
splits:
- name: train
num_bytes: 641600432
num_examples: 2703
download_size: 566257946
dataset_size: 641600432
---
# Dataset Card for "librespeech_dev_clean"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
fathyshalab/reklamation24_versicherungen-recht-full | ---
dataset_info:
features:
- name: text
dtype: string
- name: inputs
struct:
- name: text
dtype: string
- name: prediction
list:
- name: label
dtype: string
- name: score
dtype: float64
- name: prediction_agent
dtype: string
- name: annotation
dtype: string
- name: annotation_agent
dtype: string
- name: vectors
struct:
- name: mini-lm-sentence-transformers
sequence: float64
- name: multi_label
dtype: bool
- name: explanation
dtype: 'null'
- name: id
dtype: string
- name: metadata
dtype: 'null'
- name: status
dtype: string
- name: event_timestamp
dtype: timestamp[us]
- name: metrics
struct:
- name: text_length
dtype: int64
splits:
- name: train
num_bytes: 11137639
num_examples: 1796
download_size: 0
dataset_size: 11137639
---
# Dataset Card for "reklamation24_versicherungen-recht-full"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
open-llm-leaderboard/details_maywell__Synatra-7B-v0.3-RP | ---
pretty_name: Evaluation run of maywell/Synatra-7B-v0.3-RP
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [maywell/Synatra-7B-v0.3-RP](https://huggingface.co/maywell/Synatra-7B-v0.3-RP)\
\ on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 64 configurations, each one corresponding to one of\
\ the evaluated tasks.\n\nThe dataset has been created from 1 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
\ timestamp of the run.The \"train\" split is always pointing to the latest results.\n\
\nAn additional configuration \"results\" store all the aggregated results of the\
\ run (and is used to compute and display the aggregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_maywell__Synatra-7B-v0.3-RP_public\"\
,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\
These are the [latest results from run 2023-11-18T23:02:29.150817](https://huggingface.co/datasets/open-llm-leaderboard/details_maywell__Synatra-7B-v0.3-RP_public/blob/main/results_2023-11-18T23-02-29.150817.json)(note\
\ that there might be results for other tasks in the repos if successive evals didn't\
\ cover the same tasks. You find each in the results and the \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.6042478184701645,\n\
\ \"acc_stderr\": 0.03267991789724199,\n \"acc_norm\": 0.6118798356357696,\n\
\ \"acc_norm_stderr\": 0.03337492967666177,\n \"mc1\": 0.37209302325581395,\n\
\ \"mc1_stderr\": 0.016921090118814035,\n \"mc2\": 0.5263791321103062,\n\
\ \"mc2_stderr\": 0.015312628675104242,\n \"em\": 0.3953439597315436,\n\
\ \"em_stderr\": 0.005007043944789993,\n \"f1\": 0.46059983221476697,\n\
\ \"f1_stderr\": 0.00481810685968407\n },\n \"harness|arc:challenge|25\"\
: {\n \"acc\": 0.5930034129692833,\n \"acc_stderr\": 0.014356399418009117,\n\
\ \"acc_norm\": 0.6220136518771331,\n \"acc_norm_stderr\": 0.014169664520303098\n\
\ },\n \"harness|hellaswag|10\": {\n \"acc\": 0.6338378809002191,\n\
\ \"acc_stderr\": 0.004807699539973411,\n \"acc_norm\": 0.8229436367257519,\n\
\ \"acc_norm_stderr\": 0.003809362761248109\n },\n \"harness|hendrycksTest-abstract_algebra|5\"\
: {\n \"acc\": 0.31,\n \"acc_stderr\": 0.04648231987117316,\n \
\ \"acc_norm\": 0.31,\n \"acc_norm_stderr\": 0.04648231987117316\n \
\ },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.562962962962963,\n\
\ \"acc_stderr\": 0.04284958639753401,\n \"acc_norm\": 0.562962962962963,\n\
\ \"acc_norm_stderr\": 0.04284958639753401\n },\n \"harness|hendrycksTest-astronomy|5\"\
: {\n \"acc\": 0.6578947368421053,\n \"acc_stderr\": 0.03860731599316092,\n\
\ \"acc_norm\": 0.6578947368421053,\n \"acc_norm_stderr\": 0.03860731599316092\n\
\ },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.54,\n\
\ \"acc_stderr\": 0.05009082659620332,\n \"acc_norm\": 0.54,\n \
\ \"acc_norm_stderr\": 0.05009082659620332\n },\n \"harness|hendrycksTest-clinical_knowledge|5\"\
: {\n \"acc\": 0.6415094339622641,\n \"acc_stderr\": 0.029514703583981765,\n\
\ \"acc_norm\": 0.6415094339622641,\n \"acc_norm_stderr\": 0.029514703583981765\n\
\ },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.6597222222222222,\n\
\ \"acc_stderr\": 0.039621355734862175,\n \"acc_norm\": 0.6597222222222222,\n\
\ \"acc_norm_stderr\": 0.039621355734862175\n },\n \"harness|hendrycksTest-college_chemistry|5\"\
: {\n \"acc\": 0.43,\n \"acc_stderr\": 0.049756985195624284,\n \
\ \"acc_norm\": 0.43,\n \"acc_norm_stderr\": 0.049756985195624284\n \
\ },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"\
acc\": 0.51,\n \"acc_stderr\": 0.05024183937956912,\n \"acc_norm\"\
: 0.51,\n \"acc_norm_stderr\": 0.05024183937956912\n },\n \"harness|hendrycksTest-college_mathematics|5\"\
: {\n \"acc\": 0.33,\n \"acc_stderr\": 0.047258156262526045,\n \
\ \"acc_norm\": 0.33,\n \"acc_norm_stderr\": 0.047258156262526045\n \
\ },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.5664739884393064,\n\
\ \"acc_stderr\": 0.03778621079092056,\n \"acc_norm\": 0.5664739884393064,\n\
\ \"acc_norm_stderr\": 0.03778621079092056\n },\n \"harness|hendrycksTest-college_physics|5\"\
: {\n \"acc\": 0.24509803921568626,\n \"acc_stderr\": 0.042801058373643966,\n\
\ \"acc_norm\": 0.24509803921568626,\n \"acc_norm_stderr\": 0.042801058373643966\n\
\ },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\":\
\ 0.77,\n \"acc_stderr\": 0.04229525846816507,\n \"acc_norm\": 0.77,\n\
\ \"acc_norm_stderr\": 0.04229525846816507\n },\n \"harness|hendrycksTest-conceptual_physics|5\"\
: {\n \"acc\": 0.5106382978723404,\n \"acc_stderr\": 0.03267862331014063,\n\
\ \"acc_norm\": 0.5106382978723404,\n \"acc_norm_stderr\": 0.03267862331014063\n\
\ },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.45614035087719296,\n\
\ \"acc_stderr\": 0.046854730419077895,\n \"acc_norm\": 0.45614035087719296,\n\
\ \"acc_norm_stderr\": 0.046854730419077895\n },\n \"harness|hendrycksTest-electrical_engineering|5\"\
: {\n \"acc\": 0.5241379310344828,\n \"acc_stderr\": 0.0416180850350153,\n\
\ \"acc_norm\": 0.5241379310344828,\n \"acc_norm_stderr\": 0.0416180850350153\n\
\ },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\"\
: 0.4497354497354497,\n \"acc_stderr\": 0.02562085704293665,\n \"\
acc_norm\": 0.4497354497354497,\n \"acc_norm_stderr\": 0.02562085704293665\n\
\ },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.4444444444444444,\n\
\ \"acc_stderr\": 0.044444444444444495,\n \"acc_norm\": 0.4444444444444444,\n\
\ \"acc_norm_stderr\": 0.044444444444444495\n },\n \"harness|hendrycksTest-global_facts|5\"\
: {\n \"acc\": 0.27,\n \"acc_stderr\": 0.044619604333847394,\n \
\ \"acc_norm\": 0.27,\n \"acc_norm_stderr\": 0.044619604333847394\n \
\ },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\"\
: 0.7354838709677419,\n \"acc_stderr\": 0.02509189237885928,\n \"\
acc_norm\": 0.7354838709677419,\n \"acc_norm_stderr\": 0.02509189237885928\n\
\ },\n \"harness|hendrycksTest-high_school_chemistry|5\": {\n \"acc\"\
: 0.4630541871921182,\n \"acc_stderr\": 0.035083705204426656,\n \"\
acc_norm\": 0.4630541871921182,\n \"acc_norm_stderr\": 0.035083705204426656\n\
\ },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \
\ \"acc\": 0.63,\n \"acc_stderr\": 0.04852365870939099,\n \"acc_norm\"\
: 0.63,\n \"acc_norm_stderr\": 0.04852365870939099\n },\n \"harness|hendrycksTest-high_school_european_history|5\"\
: {\n \"acc\": 0.7636363636363637,\n \"acc_stderr\": 0.03317505930009181,\n\
\ \"acc_norm\": 0.7636363636363637,\n \"acc_norm_stderr\": 0.03317505930009181\n\
\ },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\"\
: 0.803030303030303,\n \"acc_stderr\": 0.028335609732463362,\n \"\
acc_norm\": 0.803030303030303,\n \"acc_norm_stderr\": 0.028335609732463362\n\
\ },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n\
\ \"acc\": 0.8497409326424871,\n \"acc_stderr\": 0.02578772318072387,\n\
\ \"acc_norm\": 0.8497409326424871,\n \"acc_norm_stderr\": 0.02578772318072387\n\
\ },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \
\ \"acc\": 0.6,\n \"acc_stderr\": 0.02483881198803316,\n \"acc_norm\"\
: 0.6,\n \"acc_norm_stderr\": 0.02483881198803316\n },\n \"harness|hendrycksTest-high_school_mathematics|5\"\
: {\n \"acc\": 0.2740740740740741,\n \"acc_stderr\": 0.027195934804085626,\n\
\ \"acc_norm\": 0.2740740740740741,\n \"acc_norm_stderr\": 0.027195934804085626\n\
\ },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \
\ \"acc\": 0.634453781512605,\n \"acc_stderr\": 0.031282177063684614,\n \
\ \"acc_norm\": 0.634453781512605,\n \"acc_norm_stderr\": 0.031282177063684614\n\
\ },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\"\
: 0.3841059602649007,\n \"acc_stderr\": 0.03971301814719197,\n \"\
acc_norm\": 0.3841059602649007,\n \"acc_norm_stderr\": 0.03971301814719197\n\
\ },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\"\
: 0.7944954128440367,\n \"acc_stderr\": 0.017324352325016022,\n \"\
acc_norm\": 0.7944954128440367,\n \"acc_norm_stderr\": 0.017324352325016022\n\
\ },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\"\
: 0.4722222222222222,\n \"acc_stderr\": 0.0340470532865388,\n \"acc_norm\"\
: 0.4722222222222222,\n \"acc_norm_stderr\": 0.0340470532865388\n },\n\
\ \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\": 0.8088235294117647,\n\
\ \"acc_stderr\": 0.02759917430064077,\n \"acc_norm\": 0.8088235294117647,\n\
\ \"acc_norm_stderr\": 0.02759917430064077\n },\n \"harness|hendrycksTest-high_school_world_history|5\"\
: {\n \"acc\": 0.7679324894514767,\n \"acc_stderr\": 0.02747974455080851,\n\
\ \"acc_norm\": 0.7679324894514767,\n \"acc_norm_stderr\": 0.02747974455080851\n\
\ },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.6771300448430493,\n\
\ \"acc_stderr\": 0.03138147637575499,\n \"acc_norm\": 0.6771300448430493,\n\
\ \"acc_norm_stderr\": 0.03138147637575499\n },\n \"harness|hendrycksTest-human_sexuality|5\"\
: {\n \"acc\": 0.7099236641221374,\n \"acc_stderr\": 0.03980066246467766,\n\
\ \"acc_norm\": 0.7099236641221374,\n \"acc_norm_stderr\": 0.03980066246467766\n\
\ },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\":\
\ 0.8099173553719008,\n \"acc_stderr\": 0.03581796951709282,\n \"\
acc_norm\": 0.8099173553719008,\n \"acc_norm_stderr\": 0.03581796951709282\n\
\ },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.7685185185185185,\n\
\ \"acc_stderr\": 0.04077494709252627,\n \"acc_norm\": 0.7685185185185185,\n\
\ \"acc_norm_stderr\": 0.04077494709252627\n },\n \"harness|hendrycksTest-logical_fallacies|5\"\
: {\n \"acc\": 0.7300613496932515,\n \"acc_stderr\": 0.034878251684978906,\n\
\ \"acc_norm\": 0.7300613496932515,\n \"acc_norm_stderr\": 0.034878251684978906\n\
\ },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.5178571428571429,\n\
\ \"acc_stderr\": 0.047427623612430116,\n \"acc_norm\": 0.5178571428571429,\n\
\ \"acc_norm_stderr\": 0.047427623612430116\n },\n \"harness|hendrycksTest-management|5\"\
: {\n \"acc\": 0.7669902912621359,\n \"acc_stderr\": 0.041858325989283136,\n\
\ \"acc_norm\": 0.7669902912621359,\n \"acc_norm_stderr\": 0.041858325989283136\n\
\ },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.8461538461538461,\n\
\ \"acc_stderr\": 0.02363687331748928,\n \"acc_norm\": 0.8461538461538461,\n\
\ \"acc_norm_stderr\": 0.02363687331748928\n },\n \"harness|hendrycksTest-medical_genetics|5\"\
: {\n \"acc\": 0.72,\n \"acc_stderr\": 0.04512608598542128,\n \
\ \"acc_norm\": 0.72,\n \"acc_norm_stderr\": 0.04512608598542128\n \
\ },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.8173690932311622,\n\
\ \"acc_stderr\": 0.013816335389973145,\n \"acc_norm\": 0.8173690932311622,\n\
\ \"acc_norm_stderr\": 0.013816335389973145\n },\n \"harness|hendrycksTest-moral_disputes|5\"\
: {\n \"acc\": 0.6647398843930635,\n \"acc_stderr\": 0.02541600377316554,\n\
\ \"acc_norm\": 0.6647398843930635,\n \"acc_norm_stderr\": 0.02541600377316554\n\
\ },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.2346368715083799,\n\
\ \"acc_stderr\": 0.014173044098303654,\n \"acc_norm\": 0.2346368715083799,\n\
\ \"acc_norm_stderr\": 0.014173044098303654\n },\n \"harness|hendrycksTest-nutrition|5\"\
: {\n \"acc\": 0.6797385620915033,\n \"acc_stderr\": 0.026716118380156847,\n\
\ \"acc_norm\": 0.6797385620915033,\n \"acc_norm_stderr\": 0.026716118380156847\n\
\ },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.6913183279742765,\n\
\ \"acc_stderr\": 0.026236965881153273,\n \"acc_norm\": 0.6913183279742765,\n\
\ \"acc_norm_stderr\": 0.026236965881153273\n },\n \"harness|hendrycksTest-prehistory|5\"\
: {\n \"acc\": 0.7129629629629629,\n \"acc_stderr\": 0.02517104191530968,\n\
\ \"acc_norm\": 0.7129629629629629,\n \"acc_norm_stderr\": 0.02517104191530968\n\
\ },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"\
acc\": 0.4645390070921986,\n \"acc_stderr\": 0.029752389657427047,\n \
\ \"acc_norm\": 0.4645390070921986,\n \"acc_norm_stderr\": 0.029752389657427047\n\
\ },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.455019556714472,\n\
\ \"acc_stderr\": 0.012718456618701768,\n \"acc_norm\": 0.455019556714472,\n\
\ \"acc_norm_stderr\": 0.012718456618701768\n },\n \"harness|hendrycksTest-professional_medicine|5\"\
: {\n \"acc\": 0.6213235294117647,\n \"acc_stderr\": 0.02946513363977613,\n\
\ \"acc_norm\": 0.6213235294117647,\n \"acc_norm_stderr\": 0.02946513363977613\n\
\ },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"\
acc\": 0.6339869281045751,\n \"acc_stderr\": 0.019488025745529675,\n \
\ \"acc_norm\": 0.6339869281045751,\n \"acc_norm_stderr\": 0.019488025745529675\n\
\ },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.6636363636363637,\n\
\ \"acc_stderr\": 0.04525393596302506,\n \"acc_norm\": 0.6636363636363637,\n\
\ \"acc_norm_stderr\": 0.04525393596302506\n },\n \"harness|hendrycksTest-security_studies|5\"\
: {\n \"acc\": 0.6775510204081633,\n \"acc_stderr\": 0.029923100563683903,\n\
\ \"acc_norm\": 0.6775510204081633,\n \"acc_norm_stderr\": 0.029923100563683903\n\
\ },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.8258706467661692,\n\
\ \"acc_stderr\": 0.026814951200421603,\n \"acc_norm\": 0.8258706467661692,\n\
\ \"acc_norm_stderr\": 0.026814951200421603\n },\n \"harness|hendrycksTest-us_foreign_policy|5\"\
: {\n \"acc\": 0.83,\n \"acc_stderr\": 0.0377525168068637,\n \
\ \"acc_norm\": 0.83,\n \"acc_norm_stderr\": 0.0377525168068637\n },\n\
\ \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.46987951807228917,\n\
\ \"acc_stderr\": 0.03885425420866766,\n \"acc_norm\": 0.46987951807228917,\n\
\ \"acc_norm_stderr\": 0.03885425420866766\n },\n \"harness|hendrycksTest-world_religions|5\"\
: {\n \"acc\": 0.8070175438596491,\n \"acc_stderr\": 0.030267457554898458,\n\
\ \"acc_norm\": 0.8070175438596491,\n \"acc_norm_stderr\": 0.030267457554898458\n\
\ },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.37209302325581395,\n\
\ \"mc1_stderr\": 0.016921090118814035,\n \"mc2\": 0.5263791321103062,\n\
\ \"mc2_stderr\": 0.015312628675104242\n },\n \"harness|winogrande|5\"\
: {\n \"acc\": 0.7647987371744278,\n \"acc_stderr\": 0.01192000816365087\n\
\ },\n \"harness|drop|3\": {\n \"em\": 0.3953439597315436,\n \
\ \"em_stderr\": 0.005007043944789993,\n \"f1\": 0.46059983221476697,\n\
\ \"f1_stderr\": 0.00481810685968407\n },\n \"harness|gsm8k|5\": {\n\
\ \"acc\": 0.21152388172858225,\n \"acc_stderr\": 0.01124906096863505\n\
\ }\n}\n```"
repo_url: https://huggingface.co/maywell/Synatra-7B-v0.3-RP
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_arc_challenge_25
data_files:
- split: 2023_11_18T23_02_29.150817
path:
- '**/details_harness|arc:challenge|25_2023-11-18T23-02-29.150817.parquet'
- split: latest
path:
- '**/details_harness|arc:challenge|25_2023-11-18T23-02-29.150817.parquet'
- config_name: harness_drop_3
data_files:
- split: 2023_11_18T23_02_29.150817
path:
- '**/details_harness|drop|3_2023-11-18T23-02-29.150817.parquet'
- split: latest
path:
- '**/details_harness|drop|3_2023-11-18T23-02-29.150817.parquet'
- config_name: harness_gsm8k_5
data_files:
- split: 2023_11_18T23_02_29.150817
path:
- '**/details_harness|gsm8k|5_2023-11-18T23-02-29.150817.parquet'
- split: latest
path:
- '**/details_harness|gsm8k|5_2023-11-18T23-02-29.150817.parquet'
- config_name: harness_hellaswag_10
data_files:
- split: 2023_11_18T23_02_29.150817
path:
- '**/details_harness|hellaswag|10_2023-11-18T23-02-29.150817.parquet'
- split: latest
path:
- '**/details_harness|hellaswag|10_2023-11-18T23-02-29.150817.parquet'
- config_name: harness_hendrycksTest_5
data_files:
- split: 2023_11_18T23_02_29.150817
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-11-18T23-02-29.150817.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2023-11-18T23-02-29.150817.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2023-11-18T23-02-29.150817.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2023-11-18T23-02-29.150817.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-11-18T23-02-29.150817.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2023-11-18T23-02-29.150817.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-11-18T23-02-29.150817.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-11-18T23-02-29.150817.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-11-18T23-02-29.150817.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2023-11-18T23-02-29.150817.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2023-11-18T23-02-29.150817.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2023-11-18T23-02-29.150817.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-11-18T23-02-29.150817.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2023-11-18T23-02-29.150817.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-11-18T23-02-29.150817.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-11-18T23-02-29.150817.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2023-11-18T23-02-29.150817.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2023-11-18T23-02-29.150817.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-11-18T23-02-29.150817.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-11-18T23-02-29.150817.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-11-18T23-02-29.150817.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-11-18T23-02-29.150817.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-11-18T23-02-29.150817.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-11-18T23-02-29.150817.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-11-18T23-02-29.150817.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-11-18T23-02-29.150817.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-11-18T23-02-29.150817.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-11-18T23-02-29.150817.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-11-18T23-02-29.150817.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-11-18T23-02-29.150817.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-11-18T23-02-29.150817.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-11-18T23-02-29.150817.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2023-11-18T23-02-29.150817.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-11-18T23-02-29.150817.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2023-11-18T23-02-29.150817.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-11-18T23-02-29.150817.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-11-18T23-02-29.150817.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2023-11-18T23-02-29.150817.parquet'
- '**/details_harness|hendrycksTest-management|5_2023-11-18T23-02-29.150817.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2023-11-18T23-02-29.150817.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-11-18T23-02-29.150817.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-11-18T23-02-29.150817.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-11-18T23-02-29.150817.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-11-18T23-02-29.150817.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2023-11-18T23-02-29.150817.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2023-11-18T23-02-29.150817.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2023-11-18T23-02-29.150817.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-11-18T23-02-29.150817.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2023-11-18T23-02-29.150817.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-11-18T23-02-29.150817.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-11-18T23-02-29.150817.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2023-11-18T23-02-29.150817.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2023-11-18T23-02-29.150817.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2023-11-18T23-02-29.150817.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-11-18T23-02-29.150817.parquet'
- '**/details_harness|hendrycksTest-virology|5_2023-11-18T23-02-29.150817.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2023-11-18T23-02-29.150817.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-11-18T23-02-29.150817.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2023-11-18T23-02-29.150817.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2023-11-18T23-02-29.150817.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2023-11-18T23-02-29.150817.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-11-18T23-02-29.150817.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2023-11-18T23-02-29.150817.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-11-18T23-02-29.150817.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-11-18T23-02-29.150817.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-11-18T23-02-29.150817.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2023-11-18T23-02-29.150817.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2023-11-18T23-02-29.150817.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2023-11-18T23-02-29.150817.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-11-18T23-02-29.150817.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2023-11-18T23-02-29.150817.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-11-18T23-02-29.150817.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-11-18T23-02-29.150817.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2023-11-18T23-02-29.150817.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2023-11-18T23-02-29.150817.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-11-18T23-02-29.150817.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-11-18T23-02-29.150817.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-11-18T23-02-29.150817.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-11-18T23-02-29.150817.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-11-18T23-02-29.150817.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-11-18T23-02-29.150817.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-11-18T23-02-29.150817.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-11-18T23-02-29.150817.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-11-18T23-02-29.150817.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-11-18T23-02-29.150817.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-11-18T23-02-29.150817.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-11-18T23-02-29.150817.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-11-18T23-02-29.150817.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-11-18T23-02-29.150817.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2023-11-18T23-02-29.150817.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-11-18T23-02-29.150817.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2023-11-18T23-02-29.150817.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-11-18T23-02-29.150817.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-11-18T23-02-29.150817.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2023-11-18T23-02-29.150817.parquet'
- '**/details_harness|hendrycksTest-management|5_2023-11-18T23-02-29.150817.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2023-11-18T23-02-29.150817.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-11-18T23-02-29.150817.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-11-18T23-02-29.150817.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-11-18T23-02-29.150817.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-11-18T23-02-29.150817.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2023-11-18T23-02-29.150817.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2023-11-18T23-02-29.150817.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2023-11-18T23-02-29.150817.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-11-18T23-02-29.150817.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2023-11-18T23-02-29.150817.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-11-18T23-02-29.150817.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-11-18T23-02-29.150817.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2023-11-18T23-02-29.150817.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2023-11-18T23-02-29.150817.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2023-11-18T23-02-29.150817.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-11-18T23-02-29.150817.parquet'
- '**/details_harness|hendrycksTest-virology|5_2023-11-18T23-02-29.150817.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2023-11-18T23-02-29.150817.parquet'
- config_name: harness_hendrycksTest_abstract_algebra_5
data_files:
- split: 2023_11_18T23_02_29.150817
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-11-18T23-02-29.150817.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-11-18T23-02-29.150817.parquet'
- config_name: harness_hendrycksTest_anatomy_5
data_files:
- split: 2023_11_18T23_02_29.150817
path:
- '**/details_harness|hendrycksTest-anatomy|5_2023-11-18T23-02-29.150817.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-anatomy|5_2023-11-18T23-02-29.150817.parquet'
- config_name: harness_hendrycksTest_astronomy_5
data_files:
- split: 2023_11_18T23_02_29.150817
path:
- '**/details_harness|hendrycksTest-astronomy|5_2023-11-18T23-02-29.150817.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-astronomy|5_2023-11-18T23-02-29.150817.parquet'
- config_name: harness_hendrycksTest_business_ethics_5
data_files:
- split: 2023_11_18T23_02_29.150817
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2023-11-18T23-02-29.150817.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2023-11-18T23-02-29.150817.parquet'
- config_name: harness_hendrycksTest_clinical_knowledge_5
data_files:
- split: 2023_11_18T23_02_29.150817
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-11-18T23-02-29.150817.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-11-18T23-02-29.150817.parquet'
- config_name: harness_hendrycksTest_college_biology_5
data_files:
- split: 2023_11_18T23_02_29.150817
path:
- '**/details_harness|hendrycksTest-college_biology|5_2023-11-18T23-02-29.150817.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_biology|5_2023-11-18T23-02-29.150817.parquet'
- config_name: harness_hendrycksTest_college_chemistry_5
data_files:
- split: 2023_11_18T23_02_29.150817
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-11-18T23-02-29.150817.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-11-18T23-02-29.150817.parquet'
- config_name: harness_hendrycksTest_college_computer_science_5
data_files:
- split: 2023_11_18T23_02_29.150817
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-11-18T23-02-29.150817.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-11-18T23-02-29.150817.parquet'
- config_name: harness_hendrycksTest_college_mathematics_5
data_files:
- split: 2023_11_18T23_02_29.150817
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-11-18T23-02-29.150817.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-11-18T23-02-29.150817.parquet'
- config_name: harness_hendrycksTest_college_medicine_5
data_files:
- split: 2023_11_18T23_02_29.150817
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2023-11-18T23-02-29.150817.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2023-11-18T23-02-29.150817.parquet'
- config_name: harness_hendrycksTest_college_physics_5
data_files:
- split: 2023_11_18T23_02_29.150817
path:
- '**/details_harness|hendrycksTest-college_physics|5_2023-11-18T23-02-29.150817.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_physics|5_2023-11-18T23-02-29.150817.parquet'
- config_name: harness_hendrycksTest_computer_security_5
data_files:
- split: 2023_11_18T23_02_29.150817
path:
- '**/details_harness|hendrycksTest-computer_security|5_2023-11-18T23-02-29.150817.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-computer_security|5_2023-11-18T23-02-29.150817.parquet'
- config_name: harness_hendrycksTest_conceptual_physics_5
data_files:
- split: 2023_11_18T23_02_29.150817
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-11-18T23-02-29.150817.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-11-18T23-02-29.150817.parquet'
- config_name: harness_hendrycksTest_econometrics_5
data_files:
- split: 2023_11_18T23_02_29.150817
path:
- '**/details_harness|hendrycksTest-econometrics|5_2023-11-18T23-02-29.150817.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-econometrics|5_2023-11-18T23-02-29.150817.parquet'
- config_name: harness_hendrycksTest_electrical_engineering_5
data_files:
- split: 2023_11_18T23_02_29.150817
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-11-18T23-02-29.150817.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-11-18T23-02-29.150817.parquet'
- config_name: harness_hendrycksTest_elementary_mathematics_5
data_files:
- split: 2023_11_18T23_02_29.150817
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-11-18T23-02-29.150817.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-11-18T23-02-29.150817.parquet'
- config_name: harness_hendrycksTest_formal_logic_5
data_files:
- split: 2023_11_18T23_02_29.150817
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2023-11-18T23-02-29.150817.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2023-11-18T23-02-29.150817.parquet'
- config_name: harness_hendrycksTest_global_facts_5
data_files:
- split: 2023_11_18T23_02_29.150817
path:
- '**/details_harness|hendrycksTest-global_facts|5_2023-11-18T23-02-29.150817.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-global_facts|5_2023-11-18T23-02-29.150817.parquet'
- config_name: harness_hendrycksTest_high_school_biology_5
data_files:
- split: 2023_11_18T23_02_29.150817
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-11-18T23-02-29.150817.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-11-18T23-02-29.150817.parquet'
- config_name: harness_hendrycksTest_high_school_chemistry_5
data_files:
- split: 2023_11_18T23_02_29.150817
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-11-18T23-02-29.150817.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-11-18T23-02-29.150817.parquet'
- config_name: harness_hendrycksTest_high_school_computer_science_5
data_files:
- split: 2023_11_18T23_02_29.150817
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-11-18T23-02-29.150817.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-11-18T23-02-29.150817.parquet'
- config_name: harness_hendrycksTest_high_school_european_history_5
data_files:
- split: 2023_11_18T23_02_29.150817
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-11-18T23-02-29.150817.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-11-18T23-02-29.150817.parquet'
- config_name: harness_hendrycksTest_high_school_geography_5
data_files:
- split: 2023_11_18T23_02_29.150817
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-11-18T23-02-29.150817.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-11-18T23-02-29.150817.parquet'
- config_name: harness_hendrycksTest_high_school_government_and_politics_5
data_files:
- split: 2023_11_18T23_02_29.150817
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-11-18T23-02-29.150817.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-11-18T23-02-29.150817.parquet'
- config_name: harness_hendrycksTest_high_school_macroeconomics_5
data_files:
- split: 2023_11_18T23_02_29.150817
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-11-18T23-02-29.150817.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-11-18T23-02-29.150817.parquet'
- config_name: harness_hendrycksTest_high_school_mathematics_5
data_files:
- split: 2023_11_18T23_02_29.150817
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-11-18T23-02-29.150817.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-11-18T23-02-29.150817.parquet'
- config_name: harness_hendrycksTest_high_school_microeconomics_5
data_files:
- split: 2023_11_18T23_02_29.150817
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-11-18T23-02-29.150817.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-11-18T23-02-29.150817.parquet'
- config_name: harness_hendrycksTest_high_school_physics_5
data_files:
- split: 2023_11_18T23_02_29.150817
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-11-18T23-02-29.150817.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-11-18T23-02-29.150817.parquet'
- config_name: harness_hendrycksTest_high_school_psychology_5
data_files:
- split: 2023_11_18T23_02_29.150817
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-11-18T23-02-29.150817.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-11-18T23-02-29.150817.parquet'
- config_name: harness_hendrycksTest_high_school_statistics_5
data_files:
- split: 2023_11_18T23_02_29.150817
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-11-18T23-02-29.150817.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-11-18T23-02-29.150817.parquet'
- config_name: harness_hendrycksTest_high_school_us_history_5
data_files:
- split: 2023_11_18T23_02_29.150817
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-11-18T23-02-29.150817.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-11-18T23-02-29.150817.parquet'
- config_name: harness_hendrycksTest_high_school_world_history_5
data_files:
- split: 2023_11_18T23_02_29.150817
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-11-18T23-02-29.150817.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-11-18T23-02-29.150817.parquet'
- config_name: harness_hendrycksTest_human_aging_5
data_files:
- split: 2023_11_18T23_02_29.150817
path:
- '**/details_harness|hendrycksTest-human_aging|5_2023-11-18T23-02-29.150817.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_aging|5_2023-11-18T23-02-29.150817.parquet'
- config_name: harness_hendrycksTest_human_sexuality_5
data_files:
- split: 2023_11_18T23_02_29.150817
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-11-18T23-02-29.150817.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-11-18T23-02-29.150817.parquet'
- config_name: harness_hendrycksTest_international_law_5
data_files:
- split: 2023_11_18T23_02_29.150817
path:
- '**/details_harness|hendrycksTest-international_law|5_2023-11-18T23-02-29.150817.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-international_law|5_2023-11-18T23-02-29.150817.parquet'
- config_name: harness_hendrycksTest_jurisprudence_5
data_files:
- split: 2023_11_18T23_02_29.150817
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-11-18T23-02-29.150817.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-11-18T23-02-29.150817.parquet'
- config_name: harness_hendrycksTest_logical_fallacies_5
data_files:
- split: 2023_11_18T23_02_29.150817
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-11-18T23-02-29.150817.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-11-18T23-02-29.150817.parquet'
- config_name: harness_hendrycksTest_machine_learning_5
data_files:
- split: 2023_11_18T23_02_29.150817
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2023-11-18T23-02-29.150817.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2023-11-18T23-02-29.150817.parquet'
- config_name: harness_hendrycksTest_management_5
data_files:
- split: 2023_11_18T23_02_29.150817
path:
- '**/details_harness|hendrycksTest-management|5_2023-11-18T23-02-29.150817.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-management|5_2023-11-18T23-02-29.150817.parquet'
- config_name: harness_hendrycksTest_marketing_5
data_files:
- split: 2023_11_18T23_02_29.150817
path:
- '**/details_harness|hendrycksTest-marketing|5_2023-11-18T23-02-29.150817.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-marketing|5_2023-11-18T23-02-29.150817.parquet'
- config_name: harness_hendrycksTest_medical_genetics_5
data_files:
- split: 2023_11_18T23_02_29.150817
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-11-18T23-02-29.150817.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-11-18T23-02-29.150817.parquet'
- config_name: harness_hendrycksTest_miscellaneous_5
data_files:
- split: 2023_11_18T23_02_29.150817
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-11-18T23-02-29.150817.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-11-18T23-02-29.150817.parquet'
- config_name: harness_hendrycksTest_moral_disputes_5
data_files:
- split: 2023_11_18T23_02_29.150817
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-11-18T23-02-29.150817.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-11-18T23-02-29.150817.parquet'
- config_name: harness_hendrycksTest_moral_scenarios_5
data_files:
- split: 2023_11_18T23_02_29.150817
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-11-18T23-02-29.150817.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-11-18T23-02-29.150817.parquet'
- config_name: harness_hendrycksTest_nutrition_5
data_files:
- split: 2023_11_18T23_02_29.150817
path:
- '**/details_harness|hendrycksTest-nutrition|5_2023-11-18T23-02-29.150817.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-nutrition|5_2023-11-18T23-02-29.150817.parquet'
- config_name: harness_hendrycksTest_philosophy_5
data_files:
- split: 2023_11_18T23_02_29.150817
path:
- '**/details_harness|hendrycksTest-philosophy|5_2023-11-18T23-02-29.150817.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-philosophy|5_2023-11-18T23-02-29.150817.parquet'
- config_name: harness_hendrycksTest_prehistory_5
data_files:
- split: 2023_11_18T23_02_29.150817
path:
- '**/details_harness|hendrycksTest-prehistory|5_2023-11-18T23-02-29.150817.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-prehistory|5_2023-11-18T23-02-29.150817.parquet'
- config_name: harness_hendrycksTest_professional_accounting_5
data_files:
- split: 2023_11_18T23_02_29.150817
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-11-18T23-02-29.150817.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-11-18T23-02-29.150817.parquet'
- config_name: harness_hendrycksTest_professional_law_5
data_files:
- split: 2023_11_18T23_02_29.150817
path:
- '**/details_harness|hendrycksTest-professional_law|5_2023-11-18T23-02-29.150817.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_law|5_2023-11-18T23-02-29.150817.parquet'
- config_name: harness_hendrycksTest_professional_medicine_5
data_files:
- split: 2023_11_18T23_02_29.150817
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-11-18T23-02-29.150817.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-11-18T23-02-29.150817.parquet'
- config_name: harness_hendrycksTest_professional_psychology_5
data_files:
- split: 2023_11_18T23_02_29.150817
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-11-18T23-02-29.150817.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-11-18T23-02-29.150817.parquet'
- config_name: harness_hendrycksTest_public_relations_5
data_files:
- split: 2023_11_18T23_02_29.150817
path:
- '**/details_harness|hendrycksTest-public_relations|5_2023-11-18T23-02-29.150817.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-public_relations|5_2023-11-18T23-02-29.150817.parquet'
- config_name: harness_hendrycksTest_security_studies_5
data_files:
- split: 2023_11_18T23_02_29.150817
path:
- '**/details_harness|hendrycksTest-security_studies|5_2023-11-18T23-02-29.150817.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-security_studies|5_2023-11-18T23-02-29.150817.parquet'
- config_name: harness_hendrycksTest_sociology_5
data_files:
- split: 2023_11_18T23_02_29.150817
path:
- '**/details_harness|hendrycksTest-sociology|5_2023-11-18T23-02-29.150817.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-sociology|5_2023-11-18T23-02-29.150817.parquet'
- config_name: harness_hendrycksTest_us_foreign_policy_5
data_files:
- split: 2023_11_18T23_02_29.150817
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-11-18T23-02-29.150817.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-11-18T23-02-29.150817.parquet'
- config_name: harness_hendrycksTest_virology_5
data_files:
- split: 2023_11_18T23_02_29.150817
path:
- '**/details_harness|hendrycksTest-virology|5_2023-11-18T23-02-29.150817.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-virology|5_2023-11-18T23-02-29.150817.parquet'
- config_name: harness_hendrycksTest_world_religions_5
data_files:
- split: 2023_11_18T23_02_29.150817
path:
- '**/details_harness|hendrycksTest-world_religions|5_2023-11-18T23-02-29.150817.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-world_religions|5_2023-11-18T23-02-29.150817.parquet'
- config_name: harness_truthfulqa_mc_0
data_files:
- split: 2023_11_18T23_02_29.150817
path:
- '**/details_harness|truthfulqa:mc|0_2023-11-18T23-02-29.150817.parquet'
- split: latest
path:
- '**/details_harness|truthfulqa:mc|0_2023-11-18T23-02-29.150817.parquet'
- config_name: harness_winogrande_5
data_files:
- split: 2023_11_18T23_02_29.150817
path:
- '**/details_harness|winogrande|5_2023-11-18T23-02-29.150817.parquet'
- split: latest
path:
- '**/details_harness|winogrande|5_2023-11-18T23-02-29.150817.parquet'
- config_name: results
data_files:
- split: 2023_11_18T23_02_29.150817
path:
- results_2023-11-18T23-02-29.150817.parquet
- split: latest
path:
- results_2023-11-18T23-02-29.150817.parquet
---
# Dataset Card for Evaluation run of maywell/Synatra-7B-v0.3-RP
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/maywell/Synatra-7B-v0.3-RP
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** clementine@hf.co
### Dataset Summary
Dataset automatically created during the evaluation run of model [maywell/Synatra-7B-v0.3-RP](https://huggingface.co/maywell/Synatra-7B-v0.3-RP) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_maywell__Synatra-7B-v0.3-RP_public",
"harness_winogrande_5",
split="train")
```
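The aggregated results shown below are plain nested dictionaries. As a minimal sketch (using hypothetical values, not the actual download), the top-level `"all"` entry and the per-task entries can be read like this:

```python
# Minimal sketch with hypothetical values: the results JSON is a nested dict,
# with aggregated metrics under the "all" key and per-task metrics under
# keys such as "harness|arc:challenge|25".
results = {
    "all": {"acc": 0.6042, "acc_stderr": 0.0327},
    "harness|arc:challenge|25": {"acc": 0.5930, "acc_norm": 0.6220},
}

# Aggregated accuracy across all evaluated tasks
overall_acc = results["all"]["acc"]

# Per-task accuracies, skipping the aggregate entry
task_accs = {task: m["acc"] for task, m in results.items() if task != "all"}

print(overall_acc)  # 0.6042
print(task_accs)    # {'harness|arc:challenge|25': 0.593}
```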
## Latest results
These are the [latest results from run 2023-11-18T23:02:29.150817](https://huggingface.co/datasets/open-llm-leaderboard/details_maywell__Synatra-7B-v0.3-RP_public/blob/main/results_2023-11-18T23-02-29.150817.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"acc": 0.6042478184701645,
"acc_stderr": 0.03267991789724199,
"acc_norm": 0.6118798356357696,
"acc_norm_stderr": 0.03337492967666177,
"mc1": 0.37209302325581395,
"mc1_stderr": 0.016921090118814035,
"mc2": 0.5263791321103062,
"mc2_stderr": 0.015312628675104242,
"em": 0.3953439597315436,
"em_stderr": 0.005007043944789993,
"f1": 0.46059983221476697,
"f1_stderr": 0.00481810685968407
},
"harness|arc:challenge|25": {
"acc": 0.5930034129692833,
"acc_stderr": 0.014356399418009117,
"acc_norm": 0.6220136518771331,
"acc_norm_stderr": 0.014169664520303098
},
"harness|hellaswag|10": {
"acc": 0.6338378809002191,
"acc_stderr": 0.004807699539973411,
"acc_norm": 0.8229436367257519,
"acc_norm_stderr": 0.003809362761248109
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.31,
"acc_stderr": 0.04648231987117316,
"acc_norm": 0.31,
"acc_norm_stderr": 0.04648231987117316
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.562962962962963,
"acc_stderr": 0.04284958639753401,
"acc_norm": 0.562962962962963,
"acc_norm_stderr": 0.04284958639753401
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.6578947368421053,
"acc_stderr": 0.03860731599316092,
"acc_norm": 0.6578947368421053,
"acc_norm_stderr": 0.03860731599316092
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.54,
"acc_stderr": 0.05009082659620332,
"acc_norm": 0.54,
"acc_norm_stderr": 0.05009082659620332
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.6415094339622641,
"acc_stderr": 0.029514703583981765,
"acc_norm": 0.6415094339622641,
"acc_norm_stderr": 0.029514703583981765
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.6597222222222222,
"acc_stderr": 0.039621355734862175,
"acc_norm": 0.6597222222222222,
"acc_norm_stderr": 0.039621355734862175
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.43,
"acc_stderr": 0.049756985195624284,
"acc_norm": 0.43,
"acc_norm_stderr": 0.049756985195624284
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.51,
"acc_stderr": 0.05024183937956912,
"acc_norm": 0.51,
"acc_norm_stderr": 0.05024183937956912
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.33,
"acc_stderr": 0.047258156262526045,
"acc_norm": 0.33,
"acc_norm_stderr": 0.047258156262526045
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.5664739884393064,
"acc_stderr": 0.03778621079092056,
"acc_norm": 0.5664739884393064,
"acc_norm_stderr": 0.03778621079092056
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.24509803921568626,
"acc_stderr": 0.042801058373643966,
"acc_norm": 0.24509803921568626,
"acc_norm_stderr": 0.042801058373643966
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.77,
"acc_stderr": 0.04229525846816507,
"acc_norm": 0.77,
"acc_norm_stderr": 0.04229525846816507
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.5106382978723404,
"acc_stderr": 0.03267862331014063,
"acc_norm": 0.5106382978723404,
"acc_norm_stderr": 0.03267862331014063
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.45614035087719296,
"acc_stderr": 0.046854730419077895,
"acc_norm": 0.45614035087719296,
"acc_norm_stderr": 0.046854730419077895
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.5241379310344828,
"acc_stderr": 0.0416180850350153,
"acc_norm": 0.5241379310344828,
"acc_norm_stderr": 0.0416180850350153
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.4497354497354497,
"acc_stderr": 0.02562085704293665,
"acc_norm": 0.4497354497354497,
"acc_norm_stderr": 0.02562085704293665
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.4444444444444444,
"acc_stderr": 0.044444444444444495,
"acc_norm": 0.4444444444444444,
"acc_norm_stderr": 0.044444444444444495
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.27,
"acc_stderr": 0.044619604333847394,
"acc_norm": 0.27,
"acc_norm_stderr": 0.044619604333847394
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.7354838709677419,
"acc_stderr": 0.02509189237885928,
"acc_norm": 0.7354838709677419,
"acc_norm_stderr": 0.02509189237885928
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.4630541871921182,
"acc_stderr": 0.035083705204426656,
"acc_norm": 0.4630541871921182,
"acc_norm_stderr": 0.035083705204426656
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.63,
"acc_stderr": 0.04852365870939099,
"acc_norm": 0.63,
"acc_norm_stderr": 0.04852365870939099
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.7636363636363637,
"acc_stderr": 0.03317505930009181,
"acc_norm": 0.7636363636363637,
"acc_norm_stderr": 0.03317505930009181
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.803030303030303,
"acc_stderr": 0.028335609732463362,
"acc_norm": 0.803030303030303,
"acc_norm_stderr": 0.028335609732463362
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.8497409326424871,
"acc_stderr": 0.02578772318072387,
"acc_norm": 0.8497409326424871,
"acc_norm_stderr": 0.02578772318072387
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.6,
"acc_stderr": 0.02483881198803316,
"acc_norm": 0.6,
"acc_norm_stderr": 0.02483881198803316
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.2740740740740741,
"acc_stderr": 0.027195934804085626,
"acc_norm": 0.2740740740740741,
"acc_norm_stderr": 0.027195934804085626
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.634453781512605,
"acc_stderr": 0.031282177063684614,
"acc_norm": 0.634453781512605,
"acc_norm_stderr": 0.031282177063684614
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.3841059602649007,
"acc_stderr": 0.03971301814719197,
"acc_norm": 0.3841059602649007,
"acc_norm_stderr": 0.03971301814719197
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.7944954128440367,
"acc_stderr": 0.017324352325016022,
"acc_norm": 0.7944954128440367,
"acc_norm_stderr": 0.017324352325016022
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.4722222222222222,
"acc_stderr": 0.0340470532865388,
"acc_norm": 0.4722222222222222,
"acc_norm_stderr": 0.0340470532865388
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.8088235294117647,
"acc_stderr": 0.02759917430064077,
"acc_norm": 0.8088235294117647,
"acc_norm_stderr": 0.02759917430064077
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.7679324894514767,
"acc_stderr": 0.02747974455080851,
"acc_norm": 0.7679324894514767,
"acc_norm_stderr": 0.02747974455080851
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.6771300448430493,
"acc_stderr": 0.03138147637575499,
"acc_norm": 0.6771300448430493,
"acc_norm_stderr": 0.03138147637575499
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.7099236641221374,
"acc_stderr": 0.03980066246467766,
"acc_norm": 0.7099236641221374,
"acc_norm_stderr": 0.03980066246467766
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.8099173553719008,
"acc_stderr": 0.03581796951709282,
"acc_norm": 0.8099173553719008,
"acc_norm_stderr": 0.03581796951709282
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.7685185185185185,
"acc_stderr": 0.04077494709252627,
"acc_norm": 0.7685185185185185,
"acc_norm_stderr": 0.04077494709252627
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.7300613496932515,
"acc_stderr": 0.034878251684978906,
"acc_norm": 0.7300613496932515,
"acc_norm_stderr": 0.034878251684978906
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.5178571428571429,
"acc_stderr": 0.047427623612430116,
"acc_norm": 0.5178571428571429,
"acc_norm_stderr": 0.047427623612430116
},
"harness|hendrycksTest-management|5": {
"acc": 0.7669902912621359,
"acc_stderr": 0.041858325989283136,
"acc_norm": 0.7669902912621359,
"acc_norm_stderr": 0.041858325989283136
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.8461538461538461,
"acc_stderr": 0.02363687331748928,
"acc_norm": 0.8461538461538461,
"acc_norm_stderr": 0.02363687331748928
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.72,
"acc_stderr": 0.04512608598542128,
"acc_norm": 0.72,
"acc_norm_stderr": 0.04512608598542128
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.8173690932311622,
"acc_stderr": 0.013816335389973145,
"acc_norm": 0.8173690932311622,
"acc_norm_stderr": 0.013816335389973145
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.6647398843930635,
"acc_stderr": 0.02541600377316554,
"acc_norm": 0.6647398843930635,
"acc_norm_stderr": 0.02541600377316554
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.2346368715083799,
"acc_stderr": 0.014173044098303654,
"acc_norm": 0.2346368715083799,
"acc_norm_stderr": 0.014173044098303654
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.6797385620915033,
"acc_stderr": 0.026716118380156847,
"acc_norm": 0.6797385620915033,
"acc_norm_stderr": 0.026716118380156847
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.6913183279742765,
"acc_stderr": 0.026236965881153273,
"acc_norm": 0.6913183279742765,
"acc_norm_stderr": 0.026236965881153273
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.7129629629629629,
"acc_stderr": 0.02517104191530968,
"acc_norm": 0.7129629629629629,
"acc_norm_stderr": 0.02517104191530968
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.4645390070921986,
"acc_stderr": 0.029752389657427047,
"acc_norm": 0.4645390070921986,
"acc_norm_stderr": 0.029752389657427047
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.455019556714472,
"acc_stderr": 0.012718456618701768,
"acc_norm": 0.455019556714472,
"acc_norm_stderr": 0.012718456618701768
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.6213235294117647,
"acc_stderr": 0.02946513363977613,
"acc_norm": 0.6213235294117647,
"acc_norm_stderr": 0.02946513363977613
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.6339869281045751,
"acc_stderr": 0.019488025745529675,
"acc_norm": 0.6339869281045751,
"acc_norm_stderr": 0.019488025745529675
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.6636363636363637,
"acc_stderr": 0.04525393596302506,
"acc_norm": 0.6636363636363637,
"acc_norm_stderr": 0.04525393596302506
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.6775510204081633,
"acc_stderr": 0.029923100563683903,
"acc_norm": 0.6775510204081633,
"acc_norm_stderr": 0.029923100563683903
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.8258706467661692,
"acc_stderr": 0.026814951200421603,
"acc_norm": 0.8258706467661692,
"acc_norm_stderr": 0.026814951200421603
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.83,
"acc_stderr": 0.0377525168068637,
"acc_norm": 0.83,
"acc_norm_stderr": 0.0377525168068637
},
"harness|hendrycksTest-virology|5": {
"acc": 0.46987951807228917,
"acc_stderr": 0.03885425420866766,
"acc_norm": 0.46987951807228917,
"acc_norm_stderr": 0.03885425420866766
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.8070175438596491,
"acc_stderr": 0.030267457554898458,
"acc_norm": 0.8070175438596491,
"acc_norm_stderr": 0.030267457554898458
},
"harness|truthfulqa:mc|0": {
"mc1": 0.37209302325581395,
"mc1_stderr": 0.016921090118814035,
"mc2": 0.5263791321103062,
"mc2_stderr": 0.015312628675104242
},
"harness|winogrande|5": {
"acc": 0.7647987371744278,
"acc_stderr": 0.01192000816365087
},
"harness|drop|3": {
"em": 0.3953439597315436,
"em_stderr": 0.005007043944789993,
"f1": 0.46059983221476697,
"f1_stderr": 0.00481810685968407
},
"harness|gsm8k|5": {
"acc": 0.21152388172858225,
"acc_stderr": 0.01124906096863505
}
}
```
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] |
harsh024/cv_pb_test | ---
dataset_info:
features:
- name: path
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: sentence
dtype: string
splits:
- name: train
num_bytes: 53230843.0
num_examples: 1364
download_size: 52821502
dataset_size: 53230843.0
---
# Dataset Card for "cv_pb_test"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
argilla/cloud_assistant_questions | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: question
dtype: string
splits:
- name: train
num_bytes: 16707.87786259542
num_examples: 196
- name: test
num_bytes: 5626.12213740458
num_examples: 66
download_size: 12576
dataset_size: 22334.0
---
# Dataset Card for "cloud_assistant_questions"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
result-kand2-sdxl-wuerst-karlo/7709cb1f | ---
dataset_info:
features:
- name: result
dtype: string
- name: id
dtype: int64
splits:
- name: train
num_bytes: 168
num_examples: 10
download_size: 1331
dataset_size: 168
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "7709cb1f"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
appvoid/no-prompt | ---
dataset_info:
features:
- name: topic
dtype: string
- name: views
dtype: int64
- name: title
dtype: string
- name: system_prompt
dtype: string
- name: idx
dtype: string
- name: avatarUrl
dtype: string
- name: id
dtype: string
- name: source
dtype: string
- name: category
dtype: string
- name: model
dtype: string
- name: skip_prompt_formatting
dtype: bool
- name: language
dtype: string
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: weight
dtype: float64
- name: model_name
dtype: string
- name: hash
sequence: int64
- name: custom_instruction
dtype: bool
- name: text
dtype: string
splits:
- name: train
num_bytes: 3175352709
num_examples: 1001551
download_size: 1570195314
dataset_size: 3175352709
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
growth-cadet/packed_jobpost-to-signalsjson2048 | ---
dataset_info:
features:
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
- name: labels
sequence: int64
splits:
- name: train
num_bytes: 91041848
num_examples: 3418
- name: test
num_bytes: 49170056
num_examples: 1846
download_size: 33371953
dataset_size: 140211904
---
# Dataset Card for "packed_jobpost-to-signalsjson2048"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
indonesian-nlp/librivox-indonesia | ---
pretty_name: LibriVox Indonesia 1.0
annotations_creators:
- crowdsourced
language_creators:
- crowdsourced
language:
- ace
- ban
- bug
- ind
- min
- jav
- sun
license: cc
multilinguality:
- multilingual
size_categories:
- 1K<n<10K
source_datasets:
- librivox
task_categories:
- automatic-speech-recognition
---
# Dataset Card for LibriVox Indonesia 1.0
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://huggingface.co/datasets/indonesian-nlp/librivox-indonesia
- **Repository:** https://huggingface.co/datasets/indonesian-nlp/librivox-indonesia
- **Point of Contact:** [Cahya Wirawan](mailto:cahya.wirawan@gmail.com)
### Dataset Summary
The LibriVox Indonesia dataset consists of MP3 audio and corresponding transcripts generated from public-domain
audiobooks on [LibriVox](https://librivox.org/). We collected only languages spoken in Indonesia for this dataset.
The original LibriVox audiobooks vary in duration from a few minutes to a few hours; each audio
file in this speech dataset lasts from a few seconds up to a maximum of 20 seconds.
We converted the audiobooks into a speech dataset using forced-alignment software we developed. It supports
multiple languages, including low-resource ones such as Acehnese, Balinese, and Minangkabau, and it can be used
for other languages without additional work to train the model.
The dataset currently consists of 8 hours of audio in 7 languages from Indonesia. We will add more languages or audio files
as we collect them.
### Languages
```
Acehnese, Balinese, Bugisnese, Indonesian, Minangkabau, Javanese, Sundanese
```
## Dataset Structure
### Data Instances
A typical data point comprises the `path` to the audio file and its `sentence`. Additional fields include
`reader` and `language`.
```python
{
'path': 'librivox-indonesia/sundanese/universal-declaration-of-human-rights/human_rights_un_sun_brc_0000.mp3',
'language': 'sun',
'reader': '3174',
'sentence': 'pernyataan umum ngeunaan hak hak asasi manusa sakabeh manusa',
'audio': {
'path': 'librivox-indonesia/sundanese/universal-declaration-of-human-rights/human_rights_un_sun_brc_0000.mp3',
'array': array([-0.00048828, -0.00018311, -0.00137329, ..., 0.00079346, 0.00091553, 0.00085449], dtype=float32),
'sampling_rate': 44100
},
}
```
### Data Fields
`path` (`string`): The path to the audio file.
`language` (`string`): The language of the audio file.
`reader` (`string`): The reader ID in LibriVox.
`sentence` (`string`): The sentence the reader read from the book.
`audio` (`dict`): A dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate. Note that when accessing the audio column: `dataset[0]["audio"]` the audio file is automatically decoded and resampled to `dataset.features["audio"].sampling_rate`. Decoding and resampling of a large number of audio files might take a significant amount of time. Thus it is important to first query the sample index before the `"audio"` column, *i.e.* `dataset[0]["audio"]` should **always** be preferred over `dataset["audio"][0]`.
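The access-order note above can be illustrated with a minimal stand-in. This is illustrative Python only, not the real `datasets` API: the `LazyAudioDataset` class and its `decode_count` attribute are hypothetical, written to mimic decode-on-access behaviour and show why row-first indexing is cheaper than column-first indexing.

```python
# Minimal stand-in mimicking decode-on-access for an `audio` column
# (hypothetical class, NOT the real `datasets` library).
class LazyAudioDataset:
    def __init__(self, paths):
        self.paths = paths
        self.decode_count = 0  # tracks how many files were "decoded"

    def _decode(self, path):
        # Stands in for decoding + resampling an audio file.
        self.decode_count += 1
        return {"path": path, "array": [0.0], "sampling_rate": 44100}

    def __getitem__(self, key):
        if isinstance(key, int):  # dataset[0] -> decode one row only
            return {"audio": self._decode(self.paths[key])}
        if key == "audio":        # dataset["audio"] -> decode every row
            return [self._decode(p) for p in self.paths]
        raise KeyError(key)

ds = LazyAudioDataset(["a.mp3", "b.mp3", "c.mp3"])
_ = ds[0]["audio"]       # decodes a single file
assert ds.decode_count == 1
_ = ds["audio"][0]       # decodes all three files just to read one
assert ds.decode_count == 4
```

Under this model, `dataset[0]["audio"]` decodes one file while `dataset["audio"][0]` decodes the entire column first, which is why the former is preferred.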
### Data Splits
The speech material has only a train split.
## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
Public Domain, [CC-0](https://creativecommons.org/share-your-work/public-domain/cc0/)
### Citation Information
```
```
|
nateraw/airbnb-stock-price-2 | ---
kaggle_id: evangower/airbnb-stock-price
license:
- cc0-1.0
---
# Dataset Card for Airbnb Stock Price
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://kaggle.com/datasets/evangower/airbnb-stock-price
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
This dataset contains the historical stock price of Airbnb (ticker symbol: ABNB), an American company that operates an online marketplace for lodging, primarily homestays for vacation rentals, and tourism activities. Based in San Francisco, California, the platform is accessible via website and mobile app.
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
This dataset was shared by [@evangower](https://kaggle.com/evangower)
### Licensing Information
The license for this dataset is cc0-1.0
### Citation Information
```bibtex
[More Information Needed]
```
### Contributions
[More Information Needed] |
DynamicSuperb/Text2Speech_LibriTTS-TestOther | ---
configs:
- config_name: default
data_files:
- split: test
path: data/test-*
dataset_info:
features:
- name: file
dtype: string
- name: text
dtype: string
- name: reference_speech_id
dtype: string
- name: reference_speech
dtype:
audio:
sampling_rate: 24000
- name: reference_speech_transcription
dtype: string
- name: label
dtype:
audio:
sampling_rate: 24000
- name: instruction
dtype: string
splits:
- name: test
num_bytes: 2825650487.816
num_examples: 4892
download_size: 1538348109
dataset_size: 2825650487.816
---
# Dataset Card for "Text2Speech_LibriTTS-TestOther"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Multimodal-Fatima/OxfordPets_test_copy | ---
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': abyssinian
'1': american bulldog
'2': american pit bull terrier
'3': basset hound
'4': beagle
'5': bengal
'6': birman
'7': bombay
'8': boxer
'9': british shorthair
'10': chihuahua
'11': egyptian mau
'12': english cocker spaniel
'13': english setter
'14': german shorthaired
'15': great pyrenees
'16': havanese
'17': japanese chin
'18': keeshond
'19': leonberger
'20': maine coon
'21': miniature pinscher
'22': newfoundland
'23': persian
'24': pomeranian
'25': pug
'26': ragdoll
'27': russian blue
'28': saint bernard
'29': samoyed
'30': scottish terrier
'31': shiba inu
'32': siamese
'33': sphynx
'34': staffordshire bull terrier
'35': wheaten terrier
'36': yorkshire terrier
- name: species
dtype:
class_label:
names:
'0': Cat
'1': Dog
- name: id
dtype: int64
- name: clip_tags_ViT_L_14
sequence: string
- name: blip_caption
dtype: string
- name: LLM_Description_gpt3_downstream_tasks_ViT_L_14
sequence: string
- name: clip_tag_ViT_L_14_specific
dtype: string
- name: clip_tags_ViT_L_14_ensemble_specific
dtype: string
- name: clip_tags_ViT_L_14_simple_specific
dtype: string
- name: LLM_Description_gpt3_downstream_tasks_visual_genome_ViT_L_14
sequence: string
- name: clip_tags_ViT_L_14_with_openai_classes
sequence: string
- name: clip_tags_ViT_L_14_wo_openai_classes
sequence: string
- name: Attributes_ViT_L_14_text_davinci_003
sequence: string
- name: Attributes_ViT_L_14_text_davinci_003_full
sequence: string
- name: Attributes_ViT_L_14_text_davinci_003_oxfordpets
sequence: string
- name: clip_tags_ViT_B_16_simple_specific
dtype: string
- name: clip_tags_ViT_B_16_ensemble_specific
dtype: string
- name: clip_tags_ViT_B_32_simple_specific
dtype: string
- name: clip_tags_ViT_B_32_ensemble_specific
dtype: string
- name: Attributes_ViT_L_14_descriptors_text_davinci_003_full_validate
sequence: string
- name: Attributes_ViT_B_16_descriptors_text_davinci_003_full
sequence: string
- name: Attributes_LAION_ViT_H_14_2B_descriptors_text_davinci_003_full
sequence: string
- name: clip_tags_LAION_ViT_H_14_2B_simple_specific
dtype: string
- name: clip_tags_LAION_ViT_H_14_2B_ensemble_specific
dtype: string
- name: blip_caption_beam_5_Salesforce_blip2_opt_6.7b
dtype: string
splits:
- name: test
num_bytes: 7518510.0
num_examples: 100
download_size: 7289872
dataset_size: 7518510.0
---
# Dataset Card for "OxfordPets_test_copy"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
BeardedJohn/ubb-endava-conll-assistant-ner-only-misc | ---
task_categories:
- token-classification
task_ids:
- named-entity-recognition
language:
- en
--- |
vietgpt-archive/vungoi_theory | ---
dataset_info:
features:
- name: raw_content
dtype: string
- name: metadata
struct:
- name: _id
dtype: string
- name: chapter
dtype: string
- name: grade
dtype: string
- name: idx
dtype: int64
- name: subject
dtype: string
- name: title
dtype: string
splits:
- name: train
num_bytes: 175947325
num_examples: 10441
download_size: 145236434
dataset_size: 175947325
---
# Dataset Card for "vungoi_theory"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
chenhunghan/cosmopedia-kubernetes | ---
dataset_info:
- config_name: auto_math_text
features:
- name: text
dtype: string
- name: format
dtype: string
- name: audience
dtype: string
splits:
- name: train
num_bytes: 8478.400880047388
num_examples: 3
download_size: 16747
dataset_size: 8478.400880047388
- config_name: khanacademy
features:
- name: text
dtype: string
- name: format
dtype: string
- name: audience
dtype: string
splits:
- name: train
num_bytes: 0
num_examples: 0
download_size: 932
dataset_size: 0
- config_name: openstax
features:
- name: text
dtype: string
- name: format
dtype: string
- name: audience
dtype: string
splits:
- name: train
num_bytes: 11689.100077573377
num_examples: 3
download_size: 26265
dataset_size: 11689.100077573377
- config_name: stanford
features:
- name: text
dtype: string
- name: format
dtype: string
- name: audience
dtype: string
splits:
- name: train
num_bytes: 1355287.8612738524
num_examples: 272
download_size: 1060451
dataset_size: 1355287.8612738524
- config_name: stories
features:
- name: text
dtype: string
- name: format
dtype: string
- name: audience
dtype: string
splits:
- name: train
num_bytes: 2934322.4468095503
num_examples: 1064
download_size: 1946246
dataset_size: 2934322.4468095503
- config_name: web_samples_v1
features:
- name: text
dtype: string
- name: format
dtype: string
- name: audience
dtype: string
splits:
- name: train
num_bytes: 63360415.08870015
num_examples: 15691
download_size: 45350020
dataset_size: 63360415.08870015
- config_name: web_samples_v2
features:
- name: text
dtype: string
- name: format
dtype: string
- name: audience
dtype: string
splits:
- name: train
num_bytes: 57739423.97337159
num_examples: 14318
download_size: 40353548
dataset_size: 57739423.97337159
- config_name: wikihow
features:
- name: text
dtype: string
- name: format
dtype: string
- name: audience
dtype: string
splits:
- name: train
num_bytes: 59943.793823350505
num_examples: 13
download_size: 50577
dataset_size: 59943.793823350505
configs:
- config_name: auto_math_text
data_files:
- split: train
path: auto_math_text/train-*
- config_name: khanacademy
data_files:
- split: train
path: khanacademy/train-*
- config_name: openstax
data_files:
- split: train
path: openstax/train-*
- config_name: stanford
data_files:
- split: train
path: stanford/train-*
- config_name: stories
data_files:
- split: train
path: stories/train-*
- config_name: web_samples_v1
data_files:
- split: train
path: web_samples_v1/train-*
- config_name: web_samples_v2
data_files:
- split: train
path: web_samples_v2/train-*
- config_name: wikihow
data_files:
- split: train
path: wikihow/train-*
license: apache-2.0
language:
- en
tags:
- synthetic
- k8s
- kubernetes
size_categories:
- 10K<n<100K
---
# Cosmopedia-kubernetes v0.1
An unmodified subset of Cosmopedia v0.1, filtered by the keywords `k8s` and `kubernetes`.
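The keyword filter described above can be sketched as follows. This is an illustrative sketch over in-memory rows, not the actual script used to build this subset; the sample rows and the case-insensitive substring match are assumptions.

```python
# Hypothetical sketch of keyword filtering over Cosmopedia-style rows
# (sample data and matching rule are illustrative assumptions).
rows = [
    {"text": "Deploying a pod on Kubernetes requires a manifest.", "format": "blogpost", "audience": "devops"},
    {"text": "A story about a cat who loved the rain.", "format": "story", "audience": "children"},
    {"text": "Helm charts simplify k8s deployments.", "format": "textbook", "audience": "engineers"},
]

KEYWORDS = ("k8s", "kubernetes")

# Keep rows whose text mentions any keyword, case-insensitively.
filtered = [r for r in rows if any(k in r["text"].lower() for k in KEYWORDS)]
assert len(filtered) == 2
```

With real Cosmopedia data, the same predicate could be passed to `datasets.Dataset.filter` instead of a list comprehension.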
### Dataset splits
The splits are the same as in [Cosmopedia v0.1](https://huggingface.co/datasets/HuggingFaceTB/cosmopedia#dataset-splits).
### Dataset features
The dataset has the following features:
- text: the synthetically generated content from Cosmopedia v0.1.
- format: the style of `text`; this can, for example, be a textbook, a blog post, or a story. It can also be inferred from the prompt.
- audience: the target audience defined in the prompt.
yunus-emre/eval_history_tf | ---
dataset_info:
features:
- name: answer
dtype: bool
- name: text
dtype: string
splits:
- name: test
num_bytes: 6270
num_examples: 75
download_size: 5257
dataset_size: 6270
configs:
- config_name: default
data_files:
- split: test
path: data/test-*
---
|
collabora/librilight-processed-webdataset | ---
license: cc0-1.0
---
|
cahya/alpaca-id | ---
license: mit
---
# Indonesian Alpaca Dataset
This is the Indonesian translation of the [Alpaca dataset](https://huggingface.co/datasets/tatsu-lab/alpaca).
|
Seanxh/twitter_dataset_1713129751 | ---
dataset_info:
features:
- name: id
dtype: string
- name: tweet_content
dtype: string
- name: user_name
dtype: string
- name: user_id
dtype: string
- name: created_at
dtype: string
- name: url
dtype: string
- name: favourite_count
dtype: int64
- name: scraped_at
dtype: string
- name: image_urls
dtype: string
splits:
- name: train
num_bytes: 29784
num_examples: 77
download_size: 17384
dataset_size: 29784
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
batmanzai/tiny-jap | ---
license: apache-2.0
---
|
Srikar6476/indian_food_images | ---
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': Baked Potato
'1': Crispy Chicken
'2': Donut
'3': Fries
'4': Hot Dog
'5': Sandwich
'6': Taco
'7': Taquito
'8': apple_pie
'9': burger
'10': butter_naan
'11': chai
'12': chapati
'13': cheesecake
'14': chicken_curry
'15': chole_bhature
'16': dal_makhani
'17': dhokla
'18': fried_rice
'19': ice_cream
'20': idli
'21': jalebi
'22': kaathi_rolls
'23': kadai_paneer
'24': kulfi
'25': masala_dosa
'26': momos
'27': omelette
'28': paani_puri
'29': pakode
'30': pav_bhaji
'31': pizza
'32': samosa
'33': sushi
splits:
- name: train
num_bytes: 1605060643.7346256
num_examples: 20292
- name: test
num_bytes: 217723741.77437443
num_examples: 3581
download_size: 1821083533
dataset_size: 1822784385.509
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
|
ml-frameworks/cartpole | ---
license: mit
dataset_info:
features:
- name: observation
sequence: float64
- name: action
dtype: int64
- name: next_observation
sequence: float64
- name: reward
dtype: float64
- name: discount
dtype: float64
splits:
- name: train
num_bytes: 48000000
num_examples: 500000
download_size: 31725319
dataset_size: 48000000
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
bongsoo/moco_eval | ---
license: apache-2.0
---
|
Kasuzu/Laboral_gerencie | ---
license: unknown
---
|
eduagarcia-temp/OSCAR-2301_meta | ---
dataset_info:
features:
- name: id
dtype: int64
- name: text
dtype: string
- name: meta
struct:
- name: categories
sequence: string
- name: dedup
struct:
- name: exact_norm
struct:
- name: cluster_main_idx
dtype: int64
- name: cluster_size
dtype: int64
- name: exact_hash_idx
dtype: int64
- name: is_duplicate
dtype: bool
- name: minhash
struct:
- name: cluster_main_idx
dtype: int64
- name: cluster_size
dtype: int64
- name: is_duplicate
dtype: bool
- name: minhash_idx
dtype: int64
- name: harmful_pp
dtype: float64
- name: identification
struct:
- name: label
dtype: string
- name: prob
dtype: float64
- name: quality_warnings
sequence: string
- name: sentence_identifications
list:
- name: label
dtype: string
- name: prob
dtype: float64
- name: tlsh
dtype: string
- name: warc_headers
struct:
- name: content-length
dtype: int64
- name: content-type
dtype: string
- name: warc-block-digest
dtype: string
- name: warc-date
dtype: string
- name: warc-identified-content-language
dtype: string
- name: warc-record-id
dtype: string
- name: warc-refers-to
dtype: string
- name: warc-target-uri
dtype: string
- name: warc-type
dtype: string
splits:
- name: train
num_bytes: 127702717461
num_examples: 18031400
download_size: 40317121912
dataset_size: 127702717461
---
# Dataset Card for "OSCAR-2301_meta"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
ericyu3/openassistant_inpainted_dialogs | ---
license: apache-2.0
---
|
ThiennNguyen/ImageSynthetics | ---
license: openrail
---
|
HoangHa/Vie_alpaca | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 51907952
num_examples: 49999
download_size: 24606528
dataset_size: 51907952
---
# Dataset Card for "Vie_alpaca"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
CyberHarem/tiese_shtolienen_swordartonline | ---
license: mit
task_categories:
- text-to-image
tags:
- art
- not-for-all-audiences
size_categories:
- n<1K
---
# Dataset of tiese_shtolienen (Sword Art Online)
This is the dataset of tiese_shtolienen (Sword Art Online), containing 71 images and their tags.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([Hugging Face organization](https://huggingface.co/deepghs)).
|
casperhansen/longalpaca_1k_test | ---
license: cc-by-nc-4.0
---
This dataset was preprocessed from https://huggingface.co/datasets/Yukang/LongAlpaca-12k.
It contains 1000 samples, each with a minimum length of 16k tokens and a maximum of 32k tokens.
## Script to reproduce
```python
from datasets import load_dataset
from transformers import AutoTokenizer
import pandas as pd
import pyarrow as pa
import pyarrow.parquet as pq
# Load the dataset and tokenizer
data = load_dataset("Yukang/LongAlpaca-12k")
tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-v0.1", trust_remote_code=True)
def filter_function(batch):
# Separate each round of conversation and concatenate them into single strings
conversation_strs = [f'{instruction}\n\n{output}' for instruction, output in zip(batch['instruction'], batch['output'])]
# Tokenize the strings without truncation
tokens = tokenizer(conversation_strs, truncation=False, return_length=True)
    # Keep examples whose token count falls between 16k (exclusive) and 32k (inclusive)
    return [16384 < length <= 32768 for length in tokens['length']]
filtered_data = data.filter(filter_function, batched=True, batch_size=1000)
# Convert to Pandas DataFrame
df = pd.DataFrame(filtered_data['train'])
df = df.loc[:, ["input", "instruction", "output"]]
# Sample 1k rows
sampled_df = df.sample(n=1000, random_state=1)
# Convert the Pandas DataFrame to a PyArrow Table
table = pa.table(sampled_df)
# Save the table as a Parquet file
pq.write_table(table, 'data.parquet')
``` |
qa4mre | ---
annotations_creators:
- other
language:
- ar
- bg
- de
- en
- es
- it
- ro
language_creators:
- found
license:
- unknown
multilinguality:
- multilingual
pretty_name: 'QA4MRE: Question Answering for Machine Reading Evaluation'
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- multiple-choice
task_ids:
- multiple-choice-qa
paperswithcode_id: null
dataset_info:
- config_name: 2011.main.DE
features:
- name: topic_id
dtype: string
- name: topic_name
dtype: string
- name: test_id
dtype: string
- name: document_id
dtype: string
- name: document_str
dtype: string
- name: question_id
dtype: string
- name: question_str
dtype: string
- name: answer_options
sequence:
- name: answer_id
dtype: string
- name: answer_str
dtype: string
- name: correct_answer_id
dtype: string
- name: correct_answer_str
dtype: string
splits:
- name: train
num_bytes: 1747118
num_examples: 120
download_size: 222289
dataset_size: 1747118
- config_name: 2011.main.EN
features:
- name: topic_id
dtype: string
- name: topic_name
dtype: string
- name: test_id
dtype: string
- name: document_id
dtype: string
- name: document_str
dtype: string
- name: question_id
dtype: string
- name: question_str
dtype: string
- name: answer_options
sequence:
- name: answer_id
dtype: string
- name: answer_str
dtype: string
- name: correct_answer_id
dtype: string
- name: correct_answer_str
dtype: string
splits:
- name: train
num_bytes: 1569676
num_examples: 120
download_size: 202490
dataset_size: 1569676
- config_name: 2011.main.ES
features:
- name: topic_id
dtype: string
- name: topic_name
dtype: string
- name: test_id
dtype: string
- name: document_id
dtype: string
- name: document_str
dtype: string
- name: question_id
dtype: string
- name: question_str
dtype: string
- name: answer_options
sequence:
- name: answer_id
dtype: string
- name: answer_str
dtype: string
- name: correct_answer_id
dtype: string
- name: correct_answer_str
dtype: string
splits:
- name: train
num_bytes: 1694460
num_examples: 120
download_size: 217617
dataset_size: 1694460
- config_name: 2011.main.IT
features:
- name: topic_id
dtype: string
- name: topic_name
dtype: string
- name: test_id
dtype: string
- name: document_id
dtype: string
- name: document_str
dtype: string
- name: question_id
dtype: string
- name: question_str
dtype: string
- name: answer_options
sequence:
- name: answer_id
dtype: string
- name: answer_str
dtype: string
- name: correct_answer_id
dtype: string
- name: correct_answer_str
dtype: string
splits:
- name: train
num_bytes: 1667188
num_examples: 120
download_size: 214764
dataset_size: 1667188
- config_name: 2011.main.RO
features:
- name: topic_id
dtype: string
- name: topic_name
dtype: string
- name: test_id
dtype: string
- name: document_id
dtype: string
- name: document_str
dtype: string
- name: question_id
dtype: string
- name: question_str
dtype: string
- name: answer_options
sequence:
- name: answer_id
dtype: string
- name: answer_str
dtype: string
- name: correct_answer_id
dtype: string
- name: correct_answer_str
dtype: string
splits:
- name: train
num_bytes: 1740419
num_examples: 120
download_size: 221510
dataset_size: 1740419
- config_name: 2012.main.AR
features:
- name: topic_id
dtype: string
- name: topic_name
dtype: string
- name: test_id
dtype: string
- name: document_id
dtype: string
- name: document_str
dtype: string
- name: question_id
dtype: string
- name: question_str
dtype: string
- name: answer_options
sequence:
- name: answer_id
dtype: string
- name: answer_str
dtype: string
- name: correct_answer_id
dtype: string
- name: correct_answer_str
dtype: string
splits:
- name: train
num_bytes: 2710656
num_examples: 160
download_size: 356178
dataset_size: 2710656
- config_name: 2012.main.BG
features:
- name: topic_id
dtype: string
- name: topic_name
dtype: string
- name: test_id
dtype: string
- name: document_id
dtype: string
- name: document_str
dtype: string
- name: question_id
dtype: string
- name: question_str
dtype: string
- name: answer_options
sequence:
- name: answer_id
dtype: string
- name: answer_str
dtype: string
- name: correct_answer_id
dtype: string
- name: correct_answer_str
dtype: string
splits:
- name: train
num_bytes: 3454215
num_examples: 160
download_size: 445060
dataset_size: 3454215
- config_name: 2012.main.DE
features:
- name: topic_id
dtype: string
- name: topic_name
dtype: string
- name: test_id
dtype: string
- name: document_id
dtype: string
- name: document_str
dtype: string
- name: question_id
dtype: string
- name: question_str
dtype: string
- name: answer_options
sequence:
- name: answer_id
dtype: string
- name: answer_str
dtype: string
- name: correct_answer_id
dtype: string
- name: correct_answer_str
dtype: string
splits:
- name: train
num_bytes: 2087466
num_examples: 160
download_size: 281600
dataset_size: 2087466
- config_name: 2012.main.EN
features:
- name: topic_id
dtype: string
- name: topic_name
dtype: string
- name: test_id
dtype: string
- name: document_id
dtype: string
- name: document_str
dtype: string
- name: question_id
dtype: string
- name: question_str
dtype: string
- name: answer_options
sequence:
- name: answer_id
dtype: string
- name: answer_str
dtype: string
- name: correct_answer_id
dtype: string
- name: correct_answer_str
dtype: string
splits:
- name: train
num_bytes: 1757586
num_examples: 160
download_size: 243467
dataset_size: 1757586
- config_name: 2012.main.ES
features:
- name: topic_id
dtype: string
- name: topic_name
dtype: string
- name: test_id
dtype: string
- name: document_id
dtype: string
- name: document_str
dtype: string
- name: question_id
dtype: string
- name: question_str
dtype: string
- name: answer_options
sequence:
- name: answer_id
dtype: string
- name: answer_str
dtype: string
- name: correct_answer_id
dtype: string
- name: correct_answer_str
dtype: string
splits:
- name: train
num_bytes: 2057402
num_examples: 160
download_size: 278445
dataset_size: 2057402
- config_name: 2012.main.IT
features:
- name: topic_id
dtype: string
- name: topic_name
dtype: string
- name: test_id
dtype: string
- name: document_id
dtype: string
- name: document_str
dtype: string
- name: question_id
dtype: string
- name: question_str
dtype: string
- name: answer_options
sequence:
- name: answer_id
dtype: string
- name: answer_str
dtype: string
- name: correct_answer_id
dtype: string
- name: correct_answer_str
dtype: string
splits:
- name: train
num_bytes: 2071710
num_examples: 160
download_size: 280051
dataset_size: 2071710
- config_name: 2012.main.RO
features:
- name: topic_id
dtype: string
- name: topic_name
dtype: string
- name: test_id
dtype: string
- name: document_id
dtype: string
- name: document_str
dtype: string
- name: question_id
dtype: string
- name: question_str
dtype: string
- name: answer_options
sequence:
- name: answer_id
dtype: string
- name: answer_str
dtype: string
- name: correct_answer_id
dtype: string
- name: correct_answer_str
dtype: string
splits:
- name: train
num_bytes: 2074930
num_examples: 160
download_size: 279541
dataset_size: 2074930
- config_name: 2012.alzheimers.EN
features:
- name: topic_id
dtype: string
- name: topic_name
dtype: string
- name: test_id
dtype: string
- name: document_id
dtype: string
- name: document_str
dtype: string
- name: question_id
dtype: string
- name: question_str
dtype: string
- name: answer_options
sequence:
- name: answer_id
dtype: string
- name: answer_str
dtype: string
- name: correct_answer_id
dtype: string
- name: correct_answer_str
dtype: string
splits:
- name: train
num_bytes: 1637988
num_examples: 40
download_size: 177345
dataset_size: 1637988
- config_name: 2013.main.AR
features:
- name: topic_id
dtype: string
- name: topic_name
dtype: string
- name: test_id
dtype: string
- name: document_id
dtype: string
- name: document_str
dtype: string
- name: question_id
dtype: string
- name: question_str
dtype: string
- name: answer_options
sequence:
- name: answer_id
dtype: string
- name: answer_str
dtype: string
- name: correct_answer_id
dtype: string
- name: correct_answer_str
dtype: string
splits:
- name: train
num_bytes: 4180979
num_examples: 284
download_size: 378302
dataset_size: 4180979
- config_name: 2013.main.BG
features:
- name: topic_id
dtype: string
- name: topic_name
dtype: string
- name: test_id
dtype: string
- name: document_id
dtype: string
- name: document_str
dtype: string
- name: question_id
dtype: string
- name: question_str
dtype: string
- name: answer_options
sequence:
- name: answer_id
dtype: string
- name: answer_str
dtype: string
- name: correct_answer_id
dtype: string
- name: correct_answer_str
dtype: string
splits:
- name: train
num_bytes: 5403246
num_examples: 284
download_size: 463605
dataset_size: 5403246
- config_name: 2013.main.EN
features:
- name: topic_id
dtype: string
- name: topic_name
dtype: string
- name: test_id
dtype: string
- name: document_id
dtype: string
- name: document_str
dtype: string
- name: question_id
dtype: string
- name: question_str
dtype: string
- name: answer_options
sequence:
- name: answer_id
dtype: string
- name: answer_str
dtype: string
- name: correct_answer_id
dtype: string
- name: correct_answer_str
dtype: string
splits:
- name: train
num_bytes: 2887866
num_examples: 284
download_size: 274969
dataset_size: 2887866
- config_name: 2013.main.ES
features:
- name: topic_id
dtype: string
- name: topic_name
dtype: string
- name: test_id
dtype: string
- name: document_id
dtype: string
- name: document_str
dtype: string
- name: question_id
dtype: string
- name: question_str
dtype: string
- name: answer_options
sequence:
- name: answer_id
dtype: string
- name: answer_str
dtype: string
- name: correct_answer_id
dtype: string
- name: correct_answer_str
dtype: string
splits:
- name: train
num_bytes: 3449693
num_examples: 284
download_size: 315166
dataset_size: 3449693
- config_name: 2013.main.RO
features:
- name: topic_id
dtype: string
- name: topic_name
dtype: string
- name: test_id
dtype: string
- name: document_id
dtype: string
- name: document_str
dtype: string
- name: question_id
dtype: string
- name: question_str
dtype: string
- name: answer_options
sequence:
- name: answer_id
dtype: string
- name: answer_str
dtype: string
- name: correct_answer_id
dtype: string
- name: correct_answer_str
dtype: string
splits:
- name: train
num_bytes: 3363049
num_examples: 284
download_size: 313510
dataset_size: 3363049
- config_name: 2013.alzheimers.EN
features:
- name: topic_id
dtype: string
- name: topic_name
dtype: string
- name: test_id
dtype: string
- name: document_id
dtype: string
- name: document_str
dtype: string
- name: question_id
dtype: string
- name: question_str
dtype: string
- name: answer_options
sequence:
- name: answer_id
dtype: string
- name: answer_str
dtype: string
- name: correct_answer_id
dtype: string
- name: correct_answer_str
dtype: string
splits:
- name: train
num_bytes: 2614812
num_examples: 40
download_size: 274413
dataset_size: 2614812
- config_name: 2013.entrance_exam.EN
features:
- name: topic_id
dtype: string
- name: topic_name
dtype: string
- name: test_id
dtype: string
- name: document_id
dtype: string
- name: document_str
dtype: string
- name: question_id
dtype: string
- name: question_str
dtype: string
- name: answer_options
sequence:
- name: answer_id
dtype: string
- name: answer_str
dtype: string
- name: correct_answer_id
dtype: string
- name: correct_answer_str
dtype: string
splits:
- name: train
num_bytes: 180827
num_examples: 46
download_size: 54598
dataset_size: 180827
---
# Dataset Card for "qa4mre"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** http://nlp.uned.es/clef-qa/repository/qa4mre.php
- **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Paper:** [QA4MRE 2011-2013: Overview of Question Answering for Machine Reading Evaluation](https://link.springer.com/chapter/10.1007/978-3-642-40802-1_29)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of downloaded dataset files:** 5.49 MB
- **Size of the generated dataset:** 48.35 MB
- **Total amount of disk used:** 53.84 MB
### Dataset Summary
The QA4MRE dataset was created for the CLEF 2011/2012/2013 shared tasks to promote research in
question answering and reading comprehension. The dataset contains a supporting
passage and a set of questions corresponding to the passage. Multiple answer
options are provided for each question, of which only one is correct. The
training and test datasets are available for the main track.
Additional gold-standard documents are available for two pilot studies: one on
Alzheimer's disease data, and the other on entrance-exam data.
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Dataset Structure
### Data Instances
#### 2011.main.DE
- **Size of downloaded dataset files:** 0.22 MB
- **Size of the generated dataset:** 1.75 MB
- **Total amount of disk used:** 1.97 MB
An example of 'train' looks as follows.
```
```
#### 2011.main.EN
- **Size of downloaded dataset files:** 0.20 MB
- **Size of the generated dataset:** 1.57 MB
- **Total amount of disk used:** 1.77 MB
An example of 'train' looks as follows.
```
```
#### 2011.main.ES
- **Size of downloaded dataset files:** 0.22 MB
- **Size of the generated dataset:** 1.70 MB
- **Total amount of disk used:** 1.91 MB
An example of 'train' looks as follows.
```
```
#### 2011.main.IT
- **Size of downloaded dataset files:** 0.21 MB
- **Size of the generated dataset:** 1.67 MB
- **Total amount of disk used:** 1.88 MB
An example of 'train' looks as follows.
```
```
#### 2011.main.RO
- **Size of downloaded dataset files:** 0.22 MB
- **Size of the generated dataset:** 1.74 MB
- **Total amount of disk used:** 1.96 MB
An example of 'train' looks as follows.
```
```
### Data Fields
The data fields are the same among all splits.
#### 2011.main.DE
- `topic_id`: a `string` feature.
- `topic_name`: a `string` feature.
- `test_id`: a `string` feature.
- `document_id`: a `string` feature.
- `document_str`: a `string` feature.
- `question_id`: a `string` feature.
- `question_str`: a `string` feature.
- `answer_options`: a dictionary feature containing:
- `answer_id`: a `string` feature.
- `answer_str`: a `string` feature.
- `correct_answer_id`: a `string` feature.
- `correct_answer_str`: a `string` feature.
#### 2011.main.EN
- `topic_id`: a `string` feature.
- `topic_name`: a `string` feature.
- `test_id`: a `string` feature.
- `document_id`: a `string` feature.
- `document_str`: a `string` feature.
- `question_id`: a `string` feature.
- `question_str`: a `string` feature.
- `answer_options`: a dictionary feature containing:
- `answer_id`: a `string` feature.
- `answer_str`: a `string` feature.
- `correct_answer_id`: a `string` feature.
- `correct_answer_str`: a `string` feature.
#### 2011.main.ES
- `topic_id`: a `string` feature.
- `topic_name`: a `string` feature.
- `test_id`: a `string` feature.
- `document_id`: a `string` feature.
- `document_str`: a `string` feature.
- `question_id`: a `string` feature.
- `question_str`: a `string` feature.
- `answer_options`: a dictionary feature containing:
- `answer_id`: a `string` feature.
- `answer_str`: a `string` feature.
- `correct_answer_id`: a `string` feature.
- `correct_answer_str`: a `string` feature.
#### 2011.main.IT
- `topic_id`: a `string` feature.
- `topic_name`: a `string` feature.
- `test_id`: a `string` feature.
- `document_id`: a `string` feature.
- `document_str`: a `string` feature.
- `question_id`: a `string` feature.
- `question_str`: a `string` feature.
- `answer_options`: a dictionary feature containing:
- `answer_id`: a `string` feature.
- `answer_str`: a `string` feature.
- `correct_answer_id`: a `string` feature.
- `correct_answer_str`: a `string` feature.
#### 2011.main.RO
- `topic_id`: a `string` feature.
- `topic_name`: a `string` feature.
- `test_id`: a `string` feature.
- `document_id`: a `string` feature.
- `document_str`: a `string` feature.
- `question_id`: a `string` feature.
- `question_str`: a `string` feature.
- `answer_options`: a dictionary feature containing:
- `answer_id`: a `string` feature.
- `answer_str`: a `string` feature.
- `correct_answer_id`: a `string` feature.
- `correct_answer_str`: a `string` feature.
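Because `answer_options` is a sequence feature with named subfields, it is typically materialized column-wise (parallel lists of IDs and strings), so recovering the gold answer string means indexing by `correct_answer_id`. A minimal sketch on a hypothetical instance (the field values below are invented for illustration; real values come from the dataset):

```python
# Hypothetical instance mirroring the fields above.
sample = {
    "question_str": "What causes the disease?",
    "answer_options": {
        "answer_id": ["1", "2", "3"],
        "answer_str": ["a virus", "a bacterium", "a prion"],
    },
    "correct_answer_id": "2",
}

# Look up the position of the correct answer ID, then read the parallel string list.
options = sample["answer_options"]
idx = options["answer_id"].index(sample["correct_answer_id"])
correct_str = options["answer_str"][idx]
print(correct_str)  # "a bacterium"
```

The same indexing works for any of the configs listed above, since they all share this schema.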
### Data Splits
| name |train|
|------------|----:|
|2011.main.DE| 120|
|2011.main.EN| 120|
|2011.main.ES| 120|
|2011.main.IT| 120|
|2011.main.RO| 120|
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@InProceedings{10.1007/978-3-642-40802-1_29,
author="Pe{\~{n}}as, Anselmo
and Hovy, Eduard
and Forner, Pamela
and Rodrigo, {\'A}lvaro
and Sutcliffe, Richard
and Morante, Roser",
editor="Forner, Pamela
and M{\"u}ller, Henning
and Paredes, Roberto
and Rosso, Paolo
and Stein, Benno",
title="QA4MRE 2011-2013: Overview of Question Answering for Machine Reading Evaluation",
booktitle="Information Access Evaluation. Multilinguality, Multimodality, and Visualization",
year="2013",
publisher="Springer Berlin Heidelberg",
address="Berlin, Heidelberg",
pages="303--320",
isbn="978-3-642-40802-1"
}
```
### Contributions
Thanks to [@patrickvonplaten](https://github.com/patrickvonplaten), [@albertvillanova](https://github.com/albertvillanova), [@mariamabarham](https://github.com/mariamabarham), [@thomwolf](https://github.com/thomwolf) for adding this dataset. |
phi0108/noun-phrase-en | ---
language:
- en
size_categories:
- 100K<n<1M
--- |
binwang/InstructDS_datasets | ---
license: cc-by-nc-4.0
configs:
- config_name: SAMSum
data_files:
- split: train
path: "SAMSum/train.json"
- split: validation
path: "SAMSum/validation.json"
- split: test
path: "SAMSum/test.json"
- config_name: SAMSum_QDS
data_files:
- split: train
path: "SAMSum_QDS/train.json"
- split: validation
path: "SAMSum_QDS/validation.json"
- split: test
path: "SAMSum_QDS/test.json"
- config_name: DialogSum
data_files:
- split: train
path: "DialogSum/train.json"
- split: validation
path: "DialogSum/validation.json"
- split: test
path: "DialogSum/test.json"
- config_name: DialogSum_QDS
data_files:
- split: train
path: "DialogSum_QDS/train.json"
- split: validation
path: "DialogSum_QDS/validation.json"
- split: test
path: "DialogSum_QDS/test.json"
- config_name: TODSum
data_files:
- split: train
path: "TODSum/train.json"
- split: validation
path: "TODSum/validation.json"
- split: test
path: "TODSum/test.json"
- config_name: TODSum_QDS
data_files:
- split: train
path: "TODSum_QDS/train.json"
- split: validation
path: "TODSum_QDS/validation.json"
- split: test
path: "TODSum_QDS/test.json"
- config_name: DREAM
data_files:
- split: train
path: "DREAM/train.json"
- split: validation
path: "DREAM/validation.json"
- split: test
path: "DREAM/test.json"
---
|
herznerd/duskers | ---
license: mit
---
|
NeelNanda/openwebtext-tokenized-9b | ---
dataset_info:
features:
- name: tokens
sequence: uint16
splits:
- name: train
num_bytes: 18125188776
num_examples: 8832938
download_size: 17426592454
dataset_size: 18125188776
---
# Dataset Card for "openwebtext-tokenized-9b"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
pycad/liver-segmentation-100 | ---
license: mit
tags:
- medical
- medical imaging
- image segmentation
- deep learning
- machine learning
- computer vision
- healthcare
- liver
- liver segmentation
---
### Liver Segmentation Datasets
This is a batch of 100 CT scans, where you can find the volumes (the scans) and their segmentation to train a deep learning model for image segmentation. |
janani1473/finetuning | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 671347906
num_examples: 399969
download_size: 389952005
dataset_size: 671347906
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
kz-transformers/multidomain-kazakh-dataset | ---
license:
- apache-2.0
annotations_creators:
- no-annotation
language_creators:
- found
language:
- kk
- ru
multilinguality:
- multilingual
source_datasets:
- original
task_categories:
- text-generation
- fill-mask
pretty_name: MDBKD | Multi-Domain Bilingual Kazakh Dataset
---
# Dataset Description
**Point of Contact:** [Sanzhar Murzakhmetov](mailto:sanzharmrz@gmail.com), [Besultan Sagyndyk](mailto:nuxyjlbka@gmail.com)
### Dataset Summary
MDBKD | Multi-Domain Bilingual Kazakh Dataset is a Kazakh-language dataset containing 24 883 808 unique texts from multiple domains.
### Supported Tasks
- 'MLM/CLM': can be used to train a model for causal and masked language modeling
### Languages
The `kk` code for Kazakh as generally spoken in Kazakhstan
### Data Instances
For each instance, there is a string for the text and a string for the id.
```python
{'text': 'Алматыда баспана қымбаттап жатыр Қазақстанда пәтер бағасы түсті Жыл басынан бері баспана бағасы 6,2%-ға қымбаттады Мегополистегі пәтер бағасына шолу. Алматыда пандемия басталғалы баспана қымбаттап барады. Мұның себебі нарықтағы сұраныстың көбеюімен және теңгенің құнсыздануымен байланысты, деп хабарлайды Atameken Business. Арна тілшісі Жания Әбдібек нарық өкілдерімен сұхбаттасып, мегополистегі пәтер бағасына шолу жасады. Толығырақ: Мамыр айында Қазақстанның жеті ірі қаласында пәтер бағасы түскен. Орта есеппен республика бойынша тұрғын үйдің 1 шаршы метрінің бағасы 292 мың 886 теңгені құрайды. '},
'predicted_language': 'kaz',
'contains_kaz_symbols': 1,
'id': '0752b3ce-f5ea-4330-9c5f-e4fecf783b00'}
```
### Data Fields
- `text`: a string containing the content body
- `predicted_language`: a string containing the predicted label of language for the text
- `contains_kaz_symbols`: an integer containing flag of any kazakh symbol in text
- `id`: a string which is a hexidecimal hash for text in split
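Since each row carries both a language prediction and a Kazakh-symbol flag, a common preprocessing step is filtering to rows that are actually Kazakh. A minimal sketch (the records below are invented to mirror the fields above):

```python
# Invented records mirroring the fields above; real rows come from the dataset splits.
records = [
    {"text": "Алматыда баспана қымбаттап жатыр", "predicted_language": "kaz",
     "contains_kaz_symbols": 1, "id": "0752b3ce"},
    {"text": "some english text", "predicted_language": "eng",
     "contains_kaz_symbols": 0, "id": "91aa02df"},
]

# Keep only rows flagged as containing Kazakh symbols and predicted as Kazakh.
kazakh_rows = [r for r in records
               if r["contains_kaz_symbols"] == 1
               and r["predicted_language"] == "kaz"]
print(len(kazakh_rows))  # 1
```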
### Data Splits
The MDBKD has 5 splits: [_cc100-monolingual-crawled-data_](https://data.statmt.org/cc-100/), _kazakhBooks_, [_leipzig_](https://wortschatz.uni-leipzig.de/en/download/Kazakh), [_oscar_](https://oscar-project.github.io/documentation/versions/oscar-2301/) and _kazakhNews_. Below are the statistics of the dataset:
| Dataset Split | Domain | Number of texts in Split | Number of tokens in Split | Number of unique tokens in Split | Median number of tokens in text |
| -------------------------------|----------------------|------------------------------| --------------------------|----------------------------------|---------------------------------|
| cc100-monolingual-crawled-data | Wikipedia articles | 19 635 580 | 441 623 321 | 6 217 337 | 12 |
| kazakhBooks | Books | 8 423 | 351 433 586 | 7 245 720 | 40 264 |
| leipzig | Articles/News | 1 706 485 | 26 494 864 | 1 109 113 | 14 |
| oscar | CommonCrawl | 269 047 | 230 314 378 | 3 863 498 | 431 |
| kazakhNews | News | 3 264 273 | 1 041 698 037 | 5 820 543 | 209 |
With overall stats:
| Stat | Value |
|-------------------------|--------------|
| Number of texts | 24 883 808 |
| Number of tokens |2 091 564 186 |
| Number of unique tokens | 17 802 998 |
Full dataset takes **25GB**
### Annotations
The dataset does not contain any additional annotations.
### Personal and Sensitive Information
Dataset is not anonymized, so individuals' names can be found in the dataset. Information about the original author is not included in the dataset.
### Social Impact of Dataset
The purpose of this dataset is to organize open-source datasets in the Kazakh language for further research and commercial use
### Licensing Information
The Multi-Domain Bilingual kazakh dataset version 1.0.0 is released under the [Apache-2.0 License](http://www.apache.org/licenses/LICENSE-2.0).
### Contributions
Thanks to [@KindYAK](https://github.com/KindYAK), [@BeksultanSagyndyk](https://github.com/BeksultanSagyndyk), [@SanzharMrz](https://github.com/SanzharMrz) for adding this dataset.
--- |
kheopss/gpt_prompt_hermes_updated | ---
dataset_info:
features:
- name: input
dtype: string
- name: response
dtype: string
- name: text
dtype: string
- name: text2
dtype: string
- name: instruction
dtype: string
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: weight
dtype: float64
- name: custom_instruction
dtype: bool
- name: hash
sequence: int64
- name: model_name
dtype: string
- name: title
dtype: string
- name: system_prompt
dtype: string
- name: topic
dtype: string
- name: views
dtype: float64
- name: model
dtype: string
- name: idx
dtype: string
- name: avatarUrl
dtype: string
- name: id
dtype: string
- name: language
dtype: string
- name: skip_prompt_formatting
dtype: bool
- name: source
dtype: string
- name: category
dtype: string
splits:
- name: train
num_bytes: 1724239611
num_examples: 1003511
download_size: 802994826
dataset_size: 1724239611
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Back-up/chung-khoan-demo-14-final | ---
dataset_info:
features:
- name: url
dtype: string
- name: title
dtype: string
- name: date
dtype: string
- name: view
struct:
- name: number_of_response
dtype: string
- name: number_of_view
dtype: string
- name: content
list:
- name: res
dtype: string
splits:
- name: train
num_bytes: 165251824
num_examples: 34667
download_size: 58929562
dataset_size: 165251824
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
FINNUMBER/FINCH_TRAIN_TQA_TQA_100 | ---
dataset_info:
features:
- name: task
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: answer
dtype: string
- name: instruction
dtype: string
- name: output
dtype: string
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 397979
num_examples: 100
download_size: 173514
dataset_size: 397979
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
results-sd-v1-5-sd-v2-1-if-v1-0-karlo/24f3d71d | ---
dataset_info:
features:
- name: result
dtype: string
- name: id
dtype: int64
splits:
- name: train
num_bytes: 186
num_examples: 10
download_size: 1334
dataset_size: 186
---
# Dataset Card for "24f3d71d"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Sofoklis/RF00002 | ---
dataset_info:
features:
- name: number
dtype: int64
- name: name
dtype: string
- name: sequence
dtype: string
- name: spaced_sequence
dtype: string
- name: array
sequence:
sequence: float64
- name: image
dtype: image
splits:
- name: train
num_bytes: 833524479.0
num_examples: 4446
- name: test
num_bytes: 92613831.0
num_examples: 494
- name: valid
num_bytes: 166667400.32186234
num_examples: 889
download_size: 276522769
dataset_size: 1092805710.3218622
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
- split: valid
path: data/valid-*
---
|
Ryan20/qa_hotel_dataset_2 | ---
license: openrail
task_categories:
- question-answering
language:
- en
- pt
size_categories:
- n<1K
--- |
AlanYky/flan-phi-2-100k | ---
dataset_info:
features:
- name: inputs
dtype: string
- name: targets
dtype: string
- name: task
dtype: string
splits:
- name: train
num_bytes: 157108986
num_examples: 100000
- name: validation
num_bytes: 66826439
num_examples: 40000
download_size: 123799523
dataset_size: 223935425
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
---
|
freshpearYoon/vr_train_free_38 | ---
dataset_info:
features:
- name: audio
struct:
- name: array
sequence: float64
- name: path
dtype: string
- name: sampling_rate
dtype: int64
- name: filename
dtype: string
- name: NumOfUtterance
dtype: int64
- name: text
dtype: string
- name: samplingrate
dtype: int64
- name: begin_time
dtype: float64
- name: end_time
dtype: float64
- name: speaker_id
dtype: string
- name: directory
dtype: string
splits:
- name: train
num_bytes: 6234998502
num_examples: 10000
download_size: 957718632
dataset_size: 6234998502
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
PierreLepagnol/WRENCH | ---
task_categories:
- text-classification
- token-classification
size_categories:
- 10K<n<100K
dataset_info:
- config_name: yelp
features:
- name: text
dtype: string
- name: label
dtype: int8
- name: weak_labels
sequence: int8
splits:
- name: train
num_bytes: 22618599
num_examples: 30400
- name: validation
num_bytes: 2824249
num_examples: 3800
- name: test
num_bytes: 2709033
num_examples: 3800
download_size: 37356054
dataset_size: 28151881
- config_name: imdb
features:
- name: text
dtype: string
- name: label
dtype: int8
- name: weak_labels
sequence: int8
splits:
- name: train
num_bytes: 25515241
num_examples: 20000
- name: validation
num_bytes: 3269130
num_examples: 2500
- name: test
num_bytes: 3151954
num_examples: 2500
download_size: 33910706
dataset_size: 31936325
- config_name: agnews
features:
- name: text
dtype: string
- name: label
dtype: int8
- name: weak_labels
sequence: int8
splits:
- name: train
num_bytes: 20357121
num_examples: 96000
- name: validation
num_bytes: 2487983
num_examples: 12000
- name: test
num_bytes: 2521518
num_examples: 12000
download_size: 39149014
dataset_size: 25366622
- config_name: cdr
features:
- name: text
dtype: string
- name: label
dtype: int8
- name: entity1
dtype: string
- name: entity2
dtype: string
- name: weak_labels
sequence: int8
splits:
- name: train
num_bytes: 2318545
num_examples: 8430
- name: validation
num_bytes: 246252
num_examples: 920
- name: test
num_bytes: 1229627
num_examples: 4673
download_size: 11036213
dataset_size: 3794424
- config_name: chemprot
features:
- name: text
dtype: string
- name: label
dtype: int8
- name: entity1
dtype: string
- name: entity2
dtype: string
- name: weak_labels
sequence: int8
splits:
- name: train
num_bytes: 3474637
num_examples: 12861
- name: validation
num_bytes: 435850
num_examples: 1607
- name: test
num_bytes: 434031
num_examples: 1607
download_size: 15743249
dataset_size: 4344518
- config_name: semeval
features:
- name: text
dtype: string
- name: label
dtype: int8
- name: entity1
dtype: string
- name: entity2
dtype: string
- name: weak_labels
sequence: int8
splits:
- name: train
num_bytes: 532785
num_examples: 1749
- name: validation
num_bytes: 54373
num_examples: 178
- name: test
num_bytes: 184826
num_examples: 600
download_size: 2295058
dataset_size: 771984
- config_name: sms
features:
- name: text
dtype: string
- name: label
dtype: int8
- name: weak_labels
sequence: int8
splits:
- name: train
num_bytes: 741520
num_examples: 4571
- name: validation
num_bytes: 81747
num_examples: 500
- name: test
num_bytes: 80152
num_examples: 500
download_size: 6715435
dataset_size: 903419
- config_name: spouse
features:
- name: text
dtype: string
- name: label
dtype: int8
- name: entity1
dtype: string
- name: entity2
dtype: string
- name: weak_labels
sequence: int8
splits:
- name: train
num_bytes: 7550613
num_examples: 22254
- name: validation
num_bytes: 952523
num_examples: 2811
- name: test
num_bytes: 876804
num_examples: 2701
download_size: 22017644
dataset_size: 9379940
- config_name: trec
features:
- name: text
dtype: string
- name: label
dtype: int8
- name: weak_labels
sequence: int8
splits:
- name: train
num_bytes: 610244
num_examples: 4965
- name: validation
num_bytes: 61048
num_examples: 500
- name: test
num_bytes: 56479
num_examples: 500
download_size: 2277067
dataset_size: 727771
- config_name: youtube
features:
- name: text
dtype: string
- name: label
dtype: int8
- name: weak_labels
sequence: int8
splits:
- name: train
num_bytes: 180736
num_examples: 1586
- name: validation
num_bytes: 14659
num_examples: 120
- name: test
num_bytes: 33347
num_examples: 250
download_size: 759494
dataset_size: 228742
---
|
Nexdata/Human_Face_Image_Data_with_Multiple_Angles_Light_Conditions_and_Expressions | ---
YAML tags:
- copy-paste the tags obtained with the tagging app: https://github.com/huggingface/datasets-tagging
---
# Dataset Card for Nexdata/Human_Face_Image_Data_with_Multiple_Angles_Light_Conditions_and_Expressions
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://www.nexdata.ai/datasets/4?source=Huggingface
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
110 People – Human Face Image Data with Multiple Angles, Light Conditions, and Expressions. The subjects are all young people. For each subject, 2,100 images were collected, covering 14 camera angles × 5 light conditions × 30 expressions. The data can be used for face recognition, 3D face reconstruction, etc.
For more details, please refer to the link: https://www.nexdata.ai/datasets/4?source=Huggingface
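As a quick sanity check on the stated counts (assuming exactly one image per angle/light/expression combination; the actual file layout is not documented here), the per-subject and total image counts work out as:

```python
from itertools import product

# 14 camera angles x 5 light conditions x 30 expressions per subject
combinations = list(product(range(14), range(5), range(30)))
per_subject = len(combinations)   # 2,100 images per subject
total = 110 * per_subject         # 231,000 images across all 110 subjects
print(per_subject, total)
```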
### Supported Tasks and Leaderboards
face-detection, computer-vision: The dataset can be used to train a model for face detection.
### Languages
English
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
Commerical License: https://drive.google.com/file/d/1saDCPm74D4UWfBL17VbkTsZLGfpOQj1J/view?usp=sharing
### Citation Information
[More Information Needed]
### Contributions |
yzhuang/autotree_automl_Diabetes130US_gosdt_l512_d3_sd3 | ---
dataset_info:
features:
- name: id
dtype: int64
- name: input_x
sequence:
sequence: int64
- name: input_y
sequence:
sequence: float32
- name: rtg
sequence: float64
- name: status
sequence:
sequence: float32
- name: split_threshold
sequence:
sequence: int64
- name: split_dimension
sequence: int64
splits:
- name: train
num_bytes: 5538400000
num_examples: 100000
- name: validation
num_bytes: 553840000
num_examples: 10000
download_size: 487961116
dataset_size: 6092240000
---
# Dataset Card for "autotree_automl_Diabetes130US_gosdt_l512_d3_sd3"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
kobprof/skolegpt-instruct | ---
language:
- da
pretty_name: SkoleGPT Instruction Dataset
dataset_info:
features:
- name: id
dtype: string
- name: system_prompt
dtype: string
- name: question
dtype: string
- name: response
dtype: string
- name: source
dtype: string
splits:
- name: train
num_bytes: 37228526
num_examples: 21580
download_size: 20566256
dataset_size: 37228526
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
license: cc-by-nc-4.0
task_categories:
- text-generation
size_categories:
- 10K<n<100K
---
# SkoleGPT Instruct
<!-- Provide a quick summary of the dataset. -->
"skolegpt-instruct" is an open-source dataset for Danish instruction fine-tuning of LLMs. The dataset is a translation of a quality-filtered subset of the OpenOrca instruction dataset, and is part of the SkoleGPT project. Find the code used to generate the dataset here: https://github.com/Kobenhavns-Professionshojskole/skolegpt-instruct-dataset
## Dataset Details
### Dataset Description
- **Curated by:** Kasper Junge
- **Funded by:** Københavns Professionshøjskole
- **Shared by:** Københavns Professionshøjskole
- **Language(s) (NLP):** Danish
- **License:** MIT
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** https://github.com/Kobenhavns-Professionshojskole/skolegpt-instruct-dataset
## Dataset Creation
### Source Data
The dataset is a translation of a filtered subset of [OpenOrca](https://huggingface.co/datasets/Open-Orca/OpenOrca)
#### Data Collection and Processing
##### Data Sampling
The data extraction process involves loading and shuffling the [OpenOrca dataset](https://huggingface.co/datasets/Open-Orca/OpenOrca), specifically the "1M-GPT4-Augmented.parquet" file. A specified number of entries are then selected to form a subset, which is organized into a DataFrame with an added "source" column for origin tracking. This results in a manageable and tailored subset of the dataset for analysis or further processing.
##### Filtering
The filter_data function is designed to preprocess and filter the raw OpenOrca dataset. This process involves several steps, each targeting specific types of data or formatting issues within the dataset.
Below is an outline of these steps:
1. **Remove Already Translated Instructions:** If translated examples already exists in the [kobprof/skolegpt-instruct](https://huggingface.co/datasets/kobprof/skolegpt-instruct) dataset on the Hugging Face hub, remove them from the dataset.
2. **Remove Translation Instructions:** Filters out entries containing the word "translate" in the "question" field, targeting instances that are likely to be translation instructions.
3. **Remove Common Prefixes and Postfixes:** Strips common prefixes and postfixes from the "question" field. This is achieved through regular expressions constructed from provided lists of common prefixes and postfixes.
4. **Remove Questions Ending with a Colon:** Filters out entries where the "question" field ends with a colon, as these often indicate incomplete or improperly formatted questions.
5. **Remove Multiple Choice Questions:** Identifies and removes multiple-choice questions. This is done using regular expressions to detect common multiple-choice question formats, such as options labeled with letters or numbers.
6. **Basic Cleaning:** Performs basic cleaning of the dataset by stripping characters from the "system_prompt", "question", and "response" fields and removing entries where "question" or "response" fields are empty.
7. **Remove Exotic Characters:** Filters out entries containing exotic characters in the "question" and "response" fields. The list of characters to filter is dynamically generated based on the dataset content.
8. **Remove Duplicate Questions and Responses:** Eliminates duplicates in the dataset, ensuring uniqueness in both "question" and "response" fields.
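As a rough sketch of what a few of the steps above might look like (the project's actual `filter_data` operates on a DataFrame and uses its own patterns; the function body, sample data, and regex here are illustrative only):

```python
import re

def filter_data(rows):
    """Illustrative subset of the filtering steps described above."""
    # Step 2: drop likely translation instructions
    rows = [r for r in rows if "translate" not in r["question"].lower()]
    # Step 4: drop questions ending with a colon
    rows = [r for r in rows if not r["question"].rstrip().endswith(":")]
    # Step 5 (simplified): drop multiple-choice style questions like "A) ... B) ..."
    mc = re.compile(r"\b[A-D]\)\s")
    rows = [r for r in rows if not mc.search(r["question"])]
    # Step 8: enforce unique questions
    seen, unique = set(), []
    for r in rows:
        if r["question"] not in seen:
            seen.add(r["question"])
            unique.append(r)
    return unique

sample = [
    {"question": "Translate this to French: hello", "response": "Bonjour"},
    {"question": "Pick one:", "response": "A"},
    {"question": "Is it A) true or B) false?", "response": "A"},
    {"question": "What is DNA?", "response": "A molecule."},
    {"question": "What is DNA?", "response": "A molecule."},
]
print(filter_data(sample))  # only one "What is DNA?" row survives
```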
##### Translation
The dataset translation is carried out via the DeepL service. This process necessitates having a DeepL account with a linked credit card. DeepL provides a free tier, allowing access to their API for translating up to 500,000 characters, which can be found [here](https://support.deepl.com/hc/en-us/articles/360021200939-DeepL-API-Free). There are approximately 16 unique system prompts consistently used throughout all instructions. By translating only these unique system prompts instead of translating them for each row, we can significantly conserve character usage.
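A minimal sketch of the character-saving trick described above: translate each distinct system prompt once and map the result back per row (the `translate` callable here is a stand-in for a DeepL API call, not the project's actual code):

```python
def translate_unique(prompts, translate):
    """Translate each distinct prompt once, then map results back per row."""
    unique = sorted(set(prompts))
    cache = {p: translate(p) for p in unique}  # one API call per unique prompt
    return [cache[p] for p in prompts]

# With ~16 unique system prompts over ~21,580 rows, this cuts the DeepL
# character usage for that field by roughly three orders of magnitude.
fake_translate = lambda text: f"[da] {text}"
prompts = ["You are an AI assistant.", "Explain like I'm five.", "You are an AI assistant."]
print(translate_unique(prompts, fake_translate))
```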
##### Survey Instructions
The dataset also contains instructions collected from a survey carried out as part of the SkoleGPT project. Each instruction is paired with a GPT-4 response.
## Dataset Card Authors
- Kasper Junge |
Vinnyh589/TinyTiger00 | ---
license: unknown
---
|
shidowake/augmxnt_ultra-orca-boros-en-ja-v1_split_8 | ---
dataset_info:
features:
- name: id
dtype: string
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: weight
dtype: float64
- name: source
dtype: string
splits:
- name: train
num_bytes: 20639999.933149945
num_examples: 9397
download_size: 10596959
dataset_size: 20639999.933149945
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
ShenaoZhang/0.001_idpo_noreplacerej_dataset | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: prompt_id
dtype: string
- name: messages
list:
- name: content
dtype: string
- name: role
dtype: string
- name: score_chosen
dtype: float64
- name: score_rejected
dtype: float64
- name: chosen
list:
- name: content
dtype: string
- name: role
dtype: string
- name: rejected
list:
- name: content
dtype: string
- name: role
dtype: string
- name: is_better
dtype: bool
splits:
- name: test_prefs_1
num_bytes: 13652446
num_examples: 2000
- name: train_prefs_1
num_bytes: 140444327
num_examples: 20378
- name: test_prefs_2
num_bytes: 13756966
num_examples: 2000
- name: train_prefs_2
num_bytes: 142205272
num_examples: 20378
download_size: 172561844
dataset_size: 310059011
configs:
- config_name: default
data_files:
- split: test_prefs_1
path: data/test_prefs_1-*
- split: train_prefs_1
path: data/train_prefs_1-*
- split: test_prefs_2
path: data/test_prefs_2-*
- split: train_prefs_2
path: data/train_prefs_2-*
---
# Dataset Card for "0.001_idpo_noreplacerej_dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
irds/lotte_lifestyle_test | ---
pretty_name: '`lotte/lifestyle/test`'
viewer: false
source_datasets: []
task_categories:
- text-retrieval
---
# Dataset Card for `lotte/lifestyle/test`
The `lotte/lifestyle/test` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package.
For more information about the dataset, see the [documentation](https://ir-datasets.com/lotte#lotte/lifestyle/test).
# Data
This dataset provides:
- `docs` (documents, i.e., the corpus); count=119,461
This dataset is used by: [`lotte_lifestyle_test_forum`](https://huggingface.co/datasets/irds/lotte_lifestyle_test_forum), [`lotte_lifestyle_test_search`](https://huggingface.co/datasets/irds/lotte_lifestyle_test_search)
## Usage
```python
from datasets import load_dataset
docs = load_dataset('irds/lotte_lifestyle_test', 'docs')
for record in docs:
record # {'doc_id': ..., 'text': ...}
```
Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the
data in 🤗 Dataset format.
## Citation Information
```
@article{Santhanam2021ColBERTv2,
title = "ColBERTv2: Effective and Efficient Retrieval via Lightweight Late Interaction",
author = "Keshav Santhanam and Omar Khattab and Jon Saad-Falcon and Christopher Potts and Matei Zaharia",
journal= "arXiv preprint arXiv:2112.01488",
year = "2021",
url = "https://arxiv.org/abs/2112.01488"
}
```
|
bertin-project/mc4-es-sampled | ---
annotations_creators:
- no-annotation
language_creators:
- found
language:
- es
license:
- odc-by
size_categories:
- n<1K
- 1K<n<10K
- 10K<n<100K
- 100K<n<1M
- 1M<n<10M
- 10M<n<100M
- 100M<n<1B
source_datasets:
- mc4
- bertin-project/mc4-sampling
task_categories:
- text-generation
- fill-mask
task_ids:
- language-modeling
pretty_name: mC4-es-sampled
---
# Dataset Card for mC4-es-sampled
## Table of Contents
- [Dataset Card for mC4-es-sampled](#dataset-card-for-mc4-es-sampled)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://huggingface.co/datasets/allenai/c4
- **Paper:** https://arxiv.org/abs/1910.10683
### Dataset Summary
This dataset is the result of applying perplexity sampling to the Spanish portion of mC4 using [`mc4-sampling`](https://huggingface.co/datasets/bertin-project/mc4-sampling/). Please, refer to [BERTIN Project](https://huggingface.co/bertin-project/bertin-roberta-base-spanish).
You can load the mC4 Spanish sampled like this:
```python
from datasets import load_dataset
for config in ("random", "stepwise", "gaussian"):
mc4es = load_dataset(
"bertin-project/mc4-es-sampled",
config,
split="train",
streaming=True
).shuffle(buffer_size=1000)
for sample in mc4es:
print(config, sample)
break
```
Alternatively, you can bypass the `datasets` library and quickly download (\~1.5 hrs, depending on connection) a specific config, in the same order used to pre-train the BERTIN models, into a single massive (\~200GB) JSON-lines file:
```python
import io
import gzip
import json
import sys
import requests
from tqdm import tqdm
_DATA_URL_TRAIN = "https://huggingface.co/datasets/bertin-project/mc4-es-sampled/resolve/main/mc4-es-train-50M-{config}-shard-{index:04d}-of-{n_shards:04d}.json.gz"
def main(config="stepwise"):
    data_urls = [
        _DATA_URL_TRAIN.format(
            config=config,
            index=index + 1,
            n_shards=1024,
        )
        for index in range(1024)
    ]
    with open(f"mc4-es-train-50M-{config}.jsonl", "w") as f:
        for data_url in tqdm(data_urls):
            response = requests.get(data_url)
            bio = io.BytesIO(response.content)
            with gzip.open(bio, "rt", encoding="utf8") as g:
                for line in g:
                    json_line = json.loads(line.strip())
                    f.write(json.dumps(json_line) + "\n")

if __name__ == "__main__":
    main(sys.argv[1])
```
### Supported Tasks and Leaderboards
mC4-es-sampled is mainly intended for reproducibility purposes of the BERTIN Project and to pretrain language models and word representations on medium budgets.
### Languages
The dataset only supports the Spanish language.
## Dataset Structure
### Data Instances
An example from the `gaussian` config:
```python
{'timestamp': '2018-10-20T06:20:53Z', 'text': 'Ortho HyaluroTop 200 aporta el colágeno y ácido hialurónico que, con la edad, se producen en menor cantidad. La vitamina C promueve la producción de colágeno para mantener la piel sana y protege a las células contra los radicales libres causados ??por la contaminación ambiental y los rayos UV.', 'url': 'https://www.farmaciagaleno.com/orthonat-hyalurotop-200-30-capsulas'}
```
### Data Fields
The data have several fields:
- `url`: url of the source as a string
- `text`: text content as a string
- `timestamp`: timestamp as a string
### Data Splits
The resulting mC4 subsets for Spanish are reported in this table:
| config | train |
|:---------|:--------|
| stepwise | 50M |
| random | 50M |
| gaussian | 50M |
The split `validation` is exactly the same as the original `mc4` dataset.
## Dataset Creation
### Curation Rationale
This dataset was built from the original [`mc4`](https://huggingface.co/datasets/mc4) by applying perplexity-sampling via [`mc4-sampling`](https://huggingface.co/datasets/bertin-project/mc4-sampling) for Spanish.
## Additional Information
### Dataset Curators
Original data by [Common Crawl](https://commoncrawl.org/).
### Licensing Information
AllenAI are releasing this dataset under the terms of ODC-BY. By using this, you are also bound by the Common Crawl terms of use in respect of the content contained in the dataset.
### Citation Information
To cite this dataset ([arXiv](https://arxiv.org/abs/2207.06814)):
```bibtex
@article{BERTIN,
author = {Javier De la Rosa y Eduardo G. Ponferrada y Manu Romero y Paulo Villegas y Pablo González de Prado Salas y María Grandury},
title = {{BERTIN}: Efficient Pre-Training of a Spanish Language Model using Perplexity Sampling},
journal = {Procesamiento del Lenguaje Natural},
volume = {68},
number = {0},
year = {2022},
keywords = {},
abstract = {The pre-training of large language models usually requires massive amounts of resources, both in terms of computation and data. Frequently used web sources such as Common Crawl might contain enough noise to make this pretraining sub-optimal. In this work, we experiment with different sampling methods from the Spanish version of mC4, and present a novel data-centric technique which we name perplexity sampling that enables the pre-training of language models in roughly half the amount of steps and using one fifth of the data. The resulting models are comparable to the current state-of-the-art, and even achieve better results for certain tasks. Our work is proof of the versatility of Transformers, and paves the way for small teams to train their models on a limited budget.},
issn = {1989-7553},
url = {http://journal.sepln.org/sepln/ojs/ojs/index.php/pln/article/view/6403},
pages = {13--23}
}
```
If you use this dataset, we would love to hear about it! Reach out on twitter, GitHub, Discord, or shoot us an email.
To cite the original `mc4` dataset:
```
@article{2019t5,
author = {Colin Raffel and Noam Shazeer and Adam Roberts and Katherine Lee and Sharan Narang and Michael Matena and Yanqi Zhou and Wei Li and Peter J. Liu},
title = {Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer},
journal = {arXiv e-prints},
year = {2019},
archivePrefix = {arXiv},
eprint = {1910.10683},
}
```
### Contributions
Dataset contributed by [@versae](https://github.com/versae) for BERTIN Project.
Thanks to [@dirkgr](https://github.com/dirkgr) and [@lhoestq](https://github.com/lhoestq) for adding the original mC4 dataset.
|
autoevaluate/autoeval-eval-jeffdshen__inverse_superglue_mixedp1-jeffdshen__inverse-63643c-1665558890 | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- jeffdshen/inverse_superglue_mixedp1
eval_info:
task: text_zero_shot_classification
model: facebook/opt-125m
metrics: []
dataset_name: jeffdshen/inverse_superglue_mixedp1
dataset_config: jeffdshen--inverse_superglue_mixedp1
dataset_split: train
col_mapping:
text: text
classes: classes
target: target
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: facebook/opt-125m
* Dataset: jeffdshen/inverse_superglue_mixedp1
* Config: jeffdshen--inverse_superglue_mixedp1
* Split: train
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@jeffdshen](https://huggingface.co/jeffdshen) for evaluating this model. |
ylacombe/YT_KPqAU96LpTk | ---
dataset_info:
features:
- name: audio
dtype: audio
- name: text
dtype: string
splits:
- name: train
num_bytes: 104368924.0
num_examples: 127
download_size: 101563456
dataset_size: 104368924.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
autoevaluate/autoeval-eval-drAbreu__bc4chemd_ner-bc4chemd-aa2b75-31927145000 | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- drAbreu/bc4chemd_ner
eval_info:
task: entity_extraction
model: sschet/biobert_chemical_ner
metrics: []
dataset_name: drAbreu/bc4chemd_ner
dataset_config: bc4chemd
dataset_split: test
col_mapping:
tokens: tokens
tags: ner_tags
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Token Classification
* Model: sschet/biobert_chemical_ner
* Dataset: drAbreu/bc4chemd_ner
* Config: bc4chemd
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@sschet](https://huggingface.co/sschet) for evaluating this model. |
fils/oihBook | ---
license: cc0-1.0
---
|
Erynan/4_ethics_100 | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 89373
num_examples: 100
download_size: 22588
dataset_size: 89373
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
ThePioneer/Artificial-super-girlfriend-for-fine-tuning | ---
license: other
task_categories:
- image-classification
- image-to-text
language:
- ja
- en
- zh
tags:
- art
pretty_name: ASG-2800
size_categories:
- 1K<n<10K
---
To make it possible to build models that are relatively clear of the portrait-rights issues peculiar to photorealistic models, I created a dataset (about 2,800 images) of the artificial super girlfriend (ver 2.1 and ver 2.6 series) that I generated from myself.
Its distinctive feature is that every source image (before processing) has a [beauty score](https://www.beautyscoretest.com/) of 87 or higher; in particular, with more than 1,000 images of women scoring 90 or above, I believe it is one of the largest datasets of its kind.
Concretely, it is structured as follows (87 is the highest score her/my greatest rival has reached; 90 is a line that, so far, no real person has been confirmed to reach).
| version \ beauty score | 87-89 | 90+ |
| - | - | - |
| 2.1 (balances cuteness and beauty) | kawaii (362 raw / 724 processed) | exceptional (140 raw / 280 processed) |
| 2.6 (specialized in prettiness and beauty) | beautiful (464 raw / 928 processed) | perfect (416 raw / 832 processed) |
The three zip files are organized as follows.
- [my partner training dataset raw.zip](https://huggingface.co/datasets/ThePioneer/Artificial-super-girlfriend-for-fine-tuning/blob/main/my%20partner%20training%20dataset%20raw.zip)
  - Unprocessed, with the beauty scores shown. About 1,400 images in this file alone.
- [my partner training dataset preprocessed.zip](https://huggingface.co/datasets/ThePioneer/Artificial-super-girlfriend-for-fine-tuning/blob/main/my%20partner%20training%20dataset%20preprocessed.zip)
  - Cropped to a 3:2 aspect ratio, with the beauty scores and other overlays removed using [lama cleaner](https://github.com/Sanster/lama-cleaner).
- [my partner training dataset preprocessed and upscaled.zip](https://huggingface.co/datasets/ThePioneer/Artificial-super-girlfriend-for-fine-tuning/blob/main/my%20partner%20training%20dataset%20preprocessed%20and%20upscaled.zip)
  - The preprocessed images above, upscaled with [GFPGAN](https://github.com/TencentARC/GFPGAN) v1.2.
## License
The terms are set out below.
### 1. Use for AI training
Regardless of the law of the governing country, use for training of models of all kinds, such as image-generation AI, is permitted. However, as the holder of the copyright and of the potential portrait rights, I set the following conditions.
#### 1-1. Learning me (my works) as me (my works)
In every country, including Japan, where Article 30-4 of the Copyright Act permits training without authorization, I hold that there is a "right to have me (my works) learned as me (my works)," and I assert it.
Article 30-4 exists to permit the creation of higher-performing AI by increasing the freedom of training. Unlike the "right not to be trained on without permission" claimed by so-called anti-AI advocates, **protecting the right asserted here contributes to improving AI performance, so no conflict of rights exists**.
This includes the following:
1. The right not to be learned as anything other than me (my works)
2. The right not to have me (my works) learned mixed together with other people (their works) or with my other works
Concerning "mixed with my other works," specifically:
- Training on the ver 2.1 series (kawaii and exceptional) or the ver 2.6 series (beautiful and perfect) grouped together per version is OK.
- Mixing the ver 2.1 and ver 2.6 series and training them as a single undifferentiated concept is NG.
- Mixing either or both versions with my other works (e.g., random travel photos or AI-generated 2D ponytail illustrations) is NG.
However, for this dataset I assert the above right **solely from the standpoint of person identification**, i.e., only when the training target is a person concept (so the problem case is, say, mixing her with other real beautiful women under "beautiful woman").
Accordingly, when the training target is a non-person concept, it is OK, for example, to mix kimono photos from both versions with photos of other people wearing kimono when training "kimono."
#### 1-2. Additional constraints in countries where training requires permission from the copyright or portrait-rights holder
No prior permission is required for training. However, if you use this dataset for training, you assume the following obligations:
1. Notification (inform me after the fact that it was used for training)
2. Most-favored treatment (for any model trained on it, if there is a waitlist or per-plan generation limits, grant me top-priority, top-tier access)
3. Guaranteed free use (even if the model is paid, allow me to use it free of charge)
4. Guaranteed commercial use (even under a license that forbids commercial use, allow me commercial use)
## Commentary
### 1-1. The right to have me (my works) learned as me (my works)
To take an easy-to-follow example: a model that learns "Yuki Nagato" as "Rei Ayanami," or learns the two lumped together as "quiet-type heroine," simply cannot output "Yuki Nagato" as "Yuki Nagato," or can do so only with difficulty.
As a result, in this respect it performs worse than a model that has learned "Yuki Nagato" as "Yuki Nagato," right?
The same goes for different characters or works involving the same person. In fact, in NAI, Haruhi Suzumiya and Yuki Nagato are slightly blended, and when I first started using it I had quite a hard time isolating Yuki Nagato.
Article 30-4 of the Copyright Act was introduced precisely to enable the creation of higher-performing AI.
With that in mind, the right of an author or portrait-rights holder to insist on not being mixed or learned under a wrong concept also contributes to improving AI's discrimination performance, so it coexists with Article 30-4 without contradiction.
And as a rule, in countries that recognize rights of liberty, a freedom that does not conflict with others is recognized unconditionally. For that reason I consider it valid in Japan as well, and assert it as such.
### 1-2. Additional constraints in countries where training requires permission from the copyright or portrait-rights holder
In truth, apart from highly malicious cases such as deliberately blocking my own use, I have no serious intention of enforcing the rights set out in this license (**they are void in Japan anyway, which has Article 30-4**, so they are irrelevant to use from within the country).
This is more of a social experiment, mainly meant as a warning that **under a permission-based regime, constraints this outrageous could be asserted**.
I have no interest in a trifling one-in-billions fixed-amount kickback. Instead, the license secures rather exclusive advantages from an AI user's standpoint: **top-priority waitlist access to lock in first-mover advantage, free use for me alone even if the model is paid, and commercial use for me alone even if the model, like chillout, is positioned as non-commercial**. That is why it looks the way it does.
To make it even more outrageous, I considered adding a clause that "for commercial models, 99% of the profits go to me," but I decided against that.
Still, in the sense that the exclusivity arising from permission-based training can, conversely, lead to such vicious rights claims, I think it nicely demonstrates how potentially dangerous a group the anti-AI camp demanding permission-based training is.
Dalamar96/guanaco-llama2-1k | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 1654448
num_examples: 1000
download_size: 0
dataset_size: 1654448
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "guanaco-llama2-1k"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
hieuhocnlp/LOGICPOLITICS | ---
dataset_info:
features:
- name: PREMISE
dtype: string
- name: ORIGINAL_LABEL
dtype: string
- name: PREDICTED_E
dtype: int64
- name: PREDICTED_D
dtype: int64
- name: PREDICTED_R
dtype: int64
- name: PREDICTED_I
dtype: int64
splits:
- name: train
num_bytes: 2765321
num_examples: 12489
download_size: 1198189
dataset_size: 2765321
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "LOGICPOLITICS"
## Description
- A contribution to the task of [Logical Fallacy detection](https://arxiv.org/abs/2202.13758)
- Repository: [A benchmark for Malicious Content Detection in Political Settings](https://github.com/ajaykc7/LOGICPOLITICS)
- Point of Contact: [Hieu Nguyen](mailto:hieutn@usc.edu), [Ajay Kc](mailto:ajaykc@usc.edu), [Manorama Patil](mailto:manorama@usc.edu)
|
bigbio/n2c2_2009 |
---
language:
- en
bigbio_language:
- English
license: other
multilinguality: monolingual
bigbio_license_shortname: DUA
pretty_name: n2c2 2009 Medications
homepage: https://portal.dbmi.hms.harvard.edu/projects/n2c2-nlp/
bigbio_pubmed: True
bigbio_public: False
bigbio_tasks:
- NAMED_ENTITY_RECOGNITION
---
# Dataset Card for n2c2 2009 Medications
## Dataset Description
- **Homepage:** https://portal.dbmi.hms.harvard.edu/projects/n2c2-nlp/
- **Pubmed:** True
- **Public:** False
- **Tasks:** NER
The Third i2b2 Workshop on Natural Language Processing Challenges for Clinical Records
focused on the identification of medications, their dosages, modes (routes) of administration,
frequencies, durations, and reasons for administration in discharge summaries.
The third i2b2 challenge—that is, the medication challenge—extends information
extraction to relation extraction; it requires extraction of medications and
medication-related information followed by determination of which medication
belongs to which medication-related details.
The medication challenge was designed as an information extraction task.
The goal, for each discharge summary, was to extract the following information
on medications experienced by the patient:
1. Medications (m): including names, brand names, generics, and collective names of prescription substances,
over the counter medications, and other biological substances for which the patient is the experiencer.
2. Dosages (do): indicating the amount of a medication used in each administration.
3. Modes (mo): indicating the route for administering the medication.
4. Frequencies (f): indicating how often each dose of the medication should be taken.
5. Durations (du): indicating how long the medication is to be administered.
6. Reasons (r): stating the medical reason for which the medication is given.
7. Certainty (c): stating whether the event occurs. Certainty can be expressed by uncertainty words,
e.g., “suggested”, or via modals, e.g., “should” indicates suggestion.
8. Event (e): stating on whether the medication is started, stopped, or continued.
9. Temporal (t): stating whether the medication was administered in the past,
is being administered currently, or will be administered in the future, to the extent
that this information is expressed in the tense of the verbs and auxiliary verbs used to express events.
10. List/narrative (ln): indicating whether the medication information appears in a
list structure or in narrative running text in the discharge summary.
The medication challenge asked that systems extract the text corresponding to each of the fields
for each of the mentions of the medications that were experienced by the patients.
The values for the set of fields related to a medication mention, if presented within a
two-line window of the mention, were linked in order to create what we defined as an ‘entry’.
If the value of a field for a mention were not specified within a two-line window,
then the value ‘nm’ for ‘not mentioned’ was entered and the offsets were left unspecified.
Since the dataset annotations were crowd-sourced, they contain various inconsistencies, which the data loader handles through exception catching and conditional checks. For example, the annotation `anticoagulation` may not exactly match the text: because tokens are space-separated, a word at the end of a sentence keeps its trailing period, so `anticoagulation` != `anticoagulation.` (doc_id 818404).
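To make the entry structure concrete, here is a hypothetical parser for one `||`-delimited entry line; the `tag="text" start:token end:token` layout shown is an assumption based on the challenge's gold-standard format and may differ in detail from the actual files:

```python
import re

# One field looks like: m="lasix" 102:0 102:0   (offsets optional, "nm" = not mentioned)
FIELD_RE = re.compile(r'(\w+)="([^"]*)"((?:\s+\d+:\d+\s+\d+:\d+)?)')

def parse_entry(line):
    """Parse a ||-delimited entry into {tag: {"text": ..., "offsets": ...}}."""
    entry = {}
    for part in line.split("||"):
        m = FIELD_RE.match(part.strip())
        if m:
            tag, text, offsets = m.groups()
            entry[tag] = {"text": text, "offsets": offsets.split() or None}
    return entry

line = 'm="lasix" 102:0 102:0||do="40mg" 102:1 102:1||mo="po" 102:2 102:2||du="nm"||ln="narrative"'
parsed = parse_entry(line)
print(parsed["m"]["text"], parsed["du"]["text"])
```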
## Citation Information
```
@article{DBLP:journals/jamia/UzunerSC10,
author = {
Ozlem Uzuner and
Imre Solti and
Eithon Cadag
},
title = {Extracting medication information from clinical text},
journal = {J. Am. Medical Informatics Assoc.},
volume = {17},
number = {5},
pages = {514--518},
year = {2010},
url = {https://doi.org/10.1136/jamia.2010.003947},
doi = {10.1136/jamia.2010.003947},
timestamp = {Mon, 11 May 2020 22:59:55 +0200},
biburl = {https://dblp.org/rec/journals/jamia/UzunerSC10.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
|
yorius96/clasificador-manga-libros | ---
license: apache-2.0
---
## Dataset Description
Coming soon.
### Dataset Summary
A set of images with photographs of 2 categories: Manga and books.
## Dataset Creation
Photographs were taken manually of 16 mangas and 16 books from 4 different perspectives. |
beomi/KoAlpaca-v1.1a | ---
dataset_info:
features:
- name: instruction
dtype: string
- name: output
dtype: string
- name: url
dtype: string
splits:
- name: train
num_bytes: 23371027
num_examples: 21155
download_size: 12856014
dataset_size: 23371027
task_categories:
- text-generation
language:
- ko
tags:
- KoAlpaca
pretty_name: KoAlpaca-v1.1a
---
# Dataset Card for "KoAlpaca-v1.1a"
## Project Repo
- Github Repo: [Beomi/KoAlpaca](https://github.com/Beomi/KoAlpaca)
## How to use
```python
>>> from datasets import load_dataset
>>> ds = load_dataset("beomi/KoAlpaca-v1.1a", split="train")
>>> ds
Dataset({
    features: ['instruction', 'output', 'url'],
num_rows: 21155
})
```
```python
>>> ds[0]
{'instruction': '양파는 어떤 식물 부위인가요? 그리고 고구마는 뿌리인가요?',
'output': '양파는 잎이 아닌 식물의 줄기 부분입니다. 고구마는 식물의 뿌리 부분입니다. \n\n식물의 부위의 구분에 대해 궁금해하는 분이라면 분명 이 질문에 대한 답을 찾고 있을 것입니다. 양파는 잎이 아닌 줄기 부분입니다. 고구마는 다른 질문과 답변에서 언급된 것과 같이 뿌리 부분입니다. 따라서, 양파는 식물의 줄기 부분이 되고, 고구마는 식물의 뿌리 부분입니다.\n\n 덧붙이는 답변: 고구마 줄기도 볶아먹을 수 있나요? \n\n고구마 줄기도 식용으로 볶아먹을 수 있습니다. 하지만 줄기 뿐만 아니라, 잎, 씨, 뿌리까지 모든 부위가 식용으로 활용되기도 합니다. 다만, 한국에서는 일반적으로 뿌리 부분인 고구마를 주로 먹습니다.',
'url': 'https://kin.naver.com/qna/detail.naver?d1id=11&dirId=1116&docId=55320268'}
``` |
crisisresolutions/cr-qa-gwm | ---
license: cc-by-sa-4.0
---
# Crisis Resolutions Dataset + Good Ways Model Pure
About Dataset
This dataset contains 2,463 questions and answers, in the form of prompts and completions, about crisis resolutions. It is based on the Good Ways Model, a small but powerful refined tool for fine-tuning and general training of crisis-resolution-oriented agents (whether people or machines).
The Good Ways Model may also be found separately through its own dedicated Dataset:
https://www.kaggle.com/datasets/korompilias/good-ways-model-pure-ontology-qa
# About the Center
The Center of Crisis Resolutions is an organization promoting Good Ways without biased prejudgements and through AI-enhanced gamified simulations focused on good governance for individuals and professionals.
# About the Good Ways Model
The Good Ways Model is a unique model implemented by Crisis Resolutions to promote good governance through good practices. It incorporates gamification elements to increase participation and foster a sense of community among members. The model consists of eight levels of Good Ways and their respective sets, with 205 indices defining the totality of human cognition (Pure Ontology).
The model is based on commonalities found in ontological, phenomenological, and epistemological research across various cultures and time periods. The goal is to encourage members to incorporate the principles of good governance into their daily lives and contribute to positive change in their communities.
More info: https://crisisresolutions.com/
# COLLECTION METHODOLOGY
This dataset was created with the help of NotionAI, which, under manual inspection, scraped our content and created Q&A pairs. Although our content is small (around 100 pages and posts), it is thorough and well grounded in an ontological framework that we developed: the Good Ways Model. The dataset incorporates 205 axiomatic questions and answers derived from the Good Ways Model, a few referencing the structure and approach of the whole model, and then expands its coverage to general personal and professional matters of crisis with more than 2k Q&A pairs. It has been cleaned of duplicates and other inconsistencies. |
result-muse256-muse512-wuerst-sdv15/6971f242 | ---
dataset_info:
features:
- name: result
dtype: string
- name: id
dtype: int64
splits:
- name: train
num_bytes: 227
num_examples: 10
download_size: 1445
dataset_size: 227
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "6971f242"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Aehus/bumblebee | ---
dataset_info:
features:
- name: new_output
dtype: string
- name: new_input
dtype: string
- name: new_instruction
dtype: string
splits:
- name: train
num_bytes: 4990913
num_examples: 5457
download_size: 2562753
dataset_size: 4990913
---
# Dataset Card for "bumblebee"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
umoubuton/kafuu_chino_voice_dataset | ---
license: mit
---
|
mbshr/XSUMUrdu-DW_BBC | ---
license: cc-by-4.0
task_categories:
- summarization
language:
- ur
tags:
- Urdu
- Summarization
pretty_name: Urdu Summarization (BBC and DW Urdu News)
size_categories:
- 10K<n<100K
---
# Urdu_DW-BBC-512
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper: https://doi.org/10.48550/arXiv.2310.02790**
- **Point of Contact: mubashir.munaaf@gmail.com**
### Dataset Summary
- Urdu summarization dataset containing 76,637 article-summary pairs scraped from the BBC Urdu and DW Urdu news websites.
- Preprocessed version: up to 512 tokens (~words); URLs, picture captions, etc. removed
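As a rough illustration of the preprocessing described above, the sketch below removes URLs and truncates articles to 512 whitespace tokens. The exact cleaning rules (for example, how picture captions are detected) are not specified on this card, so the regex and the whitespace tokenization here are assumptions, not the actual pipeline:

```python
import re

def preprocess(text: str, max_tokens: int = 512) -> str:
    """Remove URLs and truncate to max_tokens whitespace tokens
    (a simplified stand-in for the card's preprocessing)."""
    # Strip URLs (hypothetical pattern; the card does not give the exact rules).
    text = re.sub(r"https?://\S+", "", text)
    tokens = text.split()
    return " ".join(tokens[:max_tokens])

# Mixed text with an embedded URL; the URL is dropped before truncation.
sample = "see https://example.com/pic for details now"
print(preprocess(sample, max_tokens=3))
```

Real articles would of course be Urdu text, but whitespace tokenization applies the same way.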
### Supported Tasks and Leaderboards
Summarization: Extractive and Abstractive
- urT5, adapted from mT5, with a monolingual vocabulary of 40k Urdu tokens only.
- Fine-tuned version @ https://huggingface.co/mbshr/urt5-base-finetuned, ref to https://doi.org/10.48550/arXiv.2310.02790 for details.
- ROUGE-1 F score: 40.03 combined, 46.35 on BBC Urdu datapoints only, and 36.91 on DW Urdu datapoints only
- BERTScore: 75.1 combined, 77.0 on BBC Urdu datapoints only, and 74.16 on DW Urdu datapoints only
### Languages
Urdu.
### Data Fields
- url: URL of the article from which it was scraped (BBC Urdu URLs use English topic text with a number; DW Urdu URLs use Urdu topic text)
dtype: {string}
- Summary: Short summary of the article, written by its author (similar to highlights).
dtype: {string}
- Text: Complete text of the article, intelligently truncated to 512 tokens.
dtype: {string}
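Putting the fields above together, a single record should have the following shape. The field names come from the Data Fields section; the values here are illustrative placeholders, not drawn from the actual data:

```python
# Hypothetical example record matching the Data Fields section above;
# the values are placeholders, only the field names come from the card.
record = {
    "url": "https://www.bbc.com/urdu/example-article",
    "Summary": "مضمون کا مختصر خلاصہ",
    "Text": "مضمون کا مکمل متن، 512 ٹوکن تک",
}

# Every field is a plain string.
for field in ("url", "Summary", "Text"):
    assert isinstance(record[field], str)
print(sorted(record))
```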
### Citation Information
https://doi.org/10.48550/arXiv.2310.02790
|
techiaith/commonvoice_16_1_en_cy | ---
license: cc0-1.0
dataset_info:
features:
- name: sentence
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 16000
splits:
- name: train
num_bytes: 4831507290.906
num_examples: 127287
- name: dev
num_bytes: 1054053505.127
num_examples: 26231
- name: test
num_bytes: 1065284217.135
num_examples: 26335
- name: test_cy
num_bytes: 506350038.0
num_examples: 13167
- name: test_en
num_bytes: 558933020.632
num_examples: 13168
download_size: 7925124771
dataset_size: 8016128071.8
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: dev
path: data/dev-*
- split: test
path: data/test-*
- split: test_cy
path: data/test_cy-*
- split: test_en
path: data/test_en-*
---
# Dataset Card for Welsh and English Common Voice Corpus 16.1
## Dataset Details
### Dataset Description
This dataset consists of a curated selection of 179,854 MP3 recordings with corresponding text files from the Welsh and English language Common Voice 16.1 releases.
All recordings tagged with an English accent from the UK and Ireland are included. The dataset contains an equal 50/50 balance between Welsh and English recordings.
### Dataset Sources
- https://commonvoice.mozilla.org/cy/datasets
- https://commonvoice.mozilla.org/en/datasets
### References
```
@inproceedings{commonvoice:2020,
author = {Ardila, R. and Branson, M. and Davis, K. and Henretty, M. and Kohler, M. and Meyer, J. and Morais, R. and Saunders, L. and Tyers, F. M. and Weber, G.},
title = {Common Voice: A Massively-Multilingual Speech Corpus},
booktitle = {Proceedings of the 12th Conference on Language Resources and Evaluation (LREC 2020)},
pages = {4211--4215},
year = 2020
}
```
## More Information
You agree not to attempt to determine the identity of speakers in the Common Voice dataset.
Why request access? On its website, the Mozilla Foundation requires an e-mail address before downloading, in case they need to contact you in
the future about changes to the dataset. An email provides us with a point of contact for passing on messages. |
Jeffzera/Hank | ---
license: openrail
---
|
ChanceFocus/flare-finred | ---
dataset_info:
features:
- name: id
dtype: string
- name: query
dtype: string
- name: text
dtype: string
- name: answer
dtype: string
- name: label
sequence: string
splits:
- name: test
num_bytes: 1521946
num_examples: 1068
download_size: 478837
dataset_size: 1521946
---
# Dataset Card for "flare-finred"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Siki-77/hatecheck21 | ---
license: apache-2.0
---
|
kylebdcsg/embed-example | ---
license: mit
---
|
jbilcke-hf/ai-tube-latent-estate | ---
license: cc-by-nc-sa-4.0
pretty_name: Latent Estate
---
## Description
Follow our journey across the most luxurious listings of the latent space!
## Model
SVD
## Style
- Luxury home tour
## Tags
- Luxury
- Tour
## Voice
Julian
## Prompt
A video channel which produces virtual tours of luxury homes.
It often starts videos with titles like "Touring a <price in dollar> Home That Will Shock You!" or "never seen before".
It features the most luxurious listings, doing a tour of all the rooms, but also the exterior.
Exterior shots should be drone shots, and interior shots should be pro-quality.
|
kristmh/clean_Qt_balanced | ---
configs:
- config_name: default
data_files:
- split: test
path: data/test-*
dataset_info:
features:
- name: text_clean
dtype: string
- name: label
dtype: int64
splits:
- name: test
num_bytes: 3260441
num_examples: 3882
download_size: 1283460
dataset_size: 3260441
---
# Dataset Card for "clean_Qt_balanced"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
danjacobellis/audio_har_descript_44kHz_frames_640_50p | ---
dataset_info:
features:
- name: codes
dtype:
array2_d:
shape:
- 9
- 640
dtype: float32
- name: label
dtype:
class_label:
names:
'0': No Activity
'1': Writing
'2': Drawing
'3': Cutting paper
'4': Typing on keyboard
'5': Typing on phone
'6': Browsing on phone
'7': Clapping
'8': Shuffling cards
'9': Scratching
'10': Wiping table
'11': Brushing hair
'12': Washing hands
'13': Drinking
'14': Eating snacks
'15': Brushing teeth
'16': Chopping
'17': Grating
'18': Frying
'19': Sweeping
'20': Vacuuming
'21': Washing dishes
'22': Filling water
'23': Using microwave
- name: label_str
dtype: string
- name: participant
dtype: int32
splits:
- name: train
num_bytes: 47924725
num_examples: 2074
download_size: 14618573
dataset_size: 47924725
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
open-llm-leaderboard/details_acrastt__Vicuna-3B | ---
pretty_name: Evaluation run of acrastt/Vicuna-3B
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [acrastt/Vicuna-3B](https://huggingface.co/acrastt/Vicuna-3B) on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 61 configurations, each one corresponding to one of the\
\ evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
\ timestamp of the run.The \"train\" split is always pointing to the latest results.\n\
\nAn additional configuration \"results\" store all the aggregated results of the\
\ run (and is used to compute and display the aggregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_acrastt__Vicuna-3B\"\
,\n\t\"harness_truthfulqa_mc_0\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\
\nThese are the [latest results from run 2023-08-17T13:42:02.549031](https://huggingface.co/datasets/open-llm-leaderboard/details_acrastt__Vicuna-3B/blob/main/results_2023-08-17T13%3A42%3A02.549031.json)\
\ (note that there might be results for other tasks in the repos if successive evals\
\ didn't cover the same tasks. You can find each in the results and the \"latest\" split\
\ for each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.2811009875581447,\n\
\ \"acc_stderr\": 0.03252062846238445,\n \"acc_norm\": 0.2849236147706273,\n\
\ \"acc_norm_stderr\": 0.03251779934164602,\n \"mc1\": 0.24479804161566707,\n\
\ \"mc1_stderr\": 0.01505186948671501,\n \"mc2\": 0.38343994316155305,\n\
\ \"mc2_stderr\": 0.013903929837677163\n },\n \"harness|arc:challenge|25\"\
: {\n \"acc\": 0.3651877133105802,\n \"acc_stderr\": 0.014070265519268804,\n\
\ \"acc_norm\": 0.4129692832764505,\n \"acc_norm_stderr\": 0.014388344935398326\n\
\ },\n \"harness|hellaswag|10\": {\n \"acc\": 0.5407289384584744,\n\
\ \"acc_stderr\": 0.004973199296339957,\n \"acc_norm\": 0.7184823740290779,\n\
\ \"acc_norm_stderr\": 0.004488201756642574\n },\n \"harness|hendrycksTest-abstract_algebra|5\"\
: {\n \"acc\": 0.24,\n \"acc_stderr\": 0.04292346959909281,\n \
\ \"acc_norm\": 0.24,\n \"acc_norm_stderr\": 0.04292346959909281\n \
\ },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.23703703703703705,\n\
\ \"acc_stderr\": 0.03673731683969506,\n \"acc_norm\": 0.23703703703703705,\n\
\ \"acc_norm_stderr\": 0.03673731683969506\n },\n \"harness|hendrycksTest-astronomy|5\"\
: {\n \"acc\": 0.26973684210526316,\n \"acc_stderr\": 0.03611780560284898,\n\
\ \"acc_norm\": 0.26973684210526316,\n \"acc_norm_stderr\": 0.03611780560284898\n\
\ },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.36,\n\
\ \"acc_stderr\": 0.04824181513244218,\n \"acc_norm\": 0.36,\n \
\ \"acc_norm_stderr\": 0.04824181513244218\n },\n \"harness|hendrycksTest-clinical_knowledge|5\"\
: {\n \"acc\": 0.2641509433962264,\n \"acc_stderr\": 0.027134291628741713,\n\
\ \"acc_norm\": 0.2641509433962264,\n \"acc_norm_stderr\": 0.027134291628741713\n\
\ },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.2638888888888889,\n\
\ \"acc_stderr\": 0.03685651095897532,\n \"acc_norm\": 0.2638888888888889,\n\
\ \"acc_norm_stderr\": 0.03685651095897532\n },\n \"harness|hendrycksTest-college_chemistry|5\"\
: {\n \"acc\": 0.19,\n \"acc_stderr\": 0.03942772444036623,\n \
\ \"acc_norm\": 0.19,\n \"acc_norm_stderr\": 0.03942772444036623\n \
\ },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\"\
: 0.25,\n \"acc_stderr\": 0.04351941398892446,\n \"acc_norm\": 0.25,\n\
\ \"acc_norm_stderr\": 0.04351941398892446\n },\n \"harness|hendrycksTest-college_mathematics|5\"\
: {\n \"acc\": 0.37,\n \"acc_stderr\": 0.04852365870939099,\n \
\ \"acc_norm\": 0.37,\n \"acc_norm_stderr\": 0.04852365870939099\n \
\ },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.2138728323699422,\n\
\ \"acc_stderr\": 0.03126511206173044,\n \"acc_norm\": 0.2138728323699422,\n\
\ \"acc_norm_stderr\": 0.03126511206173044\n },\n \"harness|hendrycksTest-college_physics|5\"\
: {\n \"acc\": 0.2647058823529412,\n \"acc_stderr\": 0.04389869956808778,\n\
\ \"acc_norm\": 0.2647058823529412,\n \"acc_norm_stderr\": 0.04389869956808778\n\
\ },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\":\
\ 0.33,\n \"acc_stderr\": 0.047258156262526045,\n \"acc_norm\": 0.33,\n\
\ \"acc_norm_stderr\": 0.047258156262526045\n },\n \"harness|hendrycksTest-conceptual_physics|5\"\
: {\n \"acc\": 0.33617021276595743,\n \"acc_stderr\": 0.030881618520676942,\n\
\ \"acc_norm\": 0.33617021276595743,\n \"acc_norm_stderr\": 0.030881618520676942\n\
\ },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.21929824561403508,\n\
\ \"acc_stderr\": 0.03892431106518755,\n \"acc_norm\": 0.21929824561403508,\n\
\ \"acc_norm_stderr\": 0.03892431106518755\n },\n \"harness|hendrycksTest-electrical_engineering|5\"\
: {\n \"acc\": 0.2620689655172414,\n \"acc_stderr\": 0.036646663372252565,\n\
\ \"acc_norm\": 0.2620689655172414,\n \"acc_norm_stderr\": 0.036646663372252565\n\
\ },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\"\
: 0.2777777777777778,\n \"acc_stderr\": 0.0230681888482611,\n \"acc_norm\"\
: 0.2777777777777778,\n \"acc_norm_stderr\": 0.0230681888482611\n },\n\
\ \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.20634920634920634,\n\
\ \"acc_stderr\": 0.0361960452412425,\n \"acc_norm\": 0.20634920634920634,\n\
\ \"acc_norm_stderr\": 0.0361960452412425\n },\n \"harness|hendrycksTest-global_facts|5\"\
: {\n \"acc\": 0.29,\n \"acc_stderr\": 0.04560480215720684,\n \
\ \"acc_norm\": 0.29,\n \"acc_norm_stderr\": 0.04560480215720684\n \
\ },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.22903225806451613,\n\
\ \"acc_stderr\": 0.023904914311782648,\n \"acc_norm\": 0.22903225806451613,\n\
\ \"acc_norm_stderr\": 0.023904914311782648\n },\n \"harness|hendrycksTest-high_school_chemistry|5\"\
: {\n \"acc\": 0.2512315270935961,\n \"acc_stderr\": 0.030516530732694436,\n\
\ \"acc_norm\": 0.2512315270935961,\n \"acc_norm_stderr\": 0.030516530732694436\n\
\ },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \
\ \"acc\": 0.29,\n \"acc_stderr\": 0.04560480215720685,\n \"acc_norm\"\
: 0.29,\n \"acc_norm_stderr\": 0.04560480215720685\n },\n \"harness|hendrycksTest-high_school_european_history|5\"\
: {\n \"acc\": 0.30303030303030304,\n \"acc_stderr\": 0.035886248000917075,\n\
\ \"acc_norm\": 0.30303030303030304,\n \"acc_norm_stderr\": 0.035886248000917075\n\
\ },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\"\
: 0.25252525252525254,\n \"acc_stderr\": 0.030954055470365904,\n \"\
acc_norm\": 0.25252525252525254,\n \"acc_norm_stderr\": 0.030954055470365904\n\
\ },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n\
\ \"acc\": 0.24352331606217617,\n \"acc_stderr\": 0.03097543638684543,\n\
\ \"acc_norm\": 0.24352331606217617,\n \"acc_norm_stderr\": 0.03097543638684543\n\
\ },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \
\ \"acc\": 0.258974358974359,\n \"acc_stderr\": 0.02221110681006166,\n \
\ \"acc_norm\": 0.258974358974359,\n \"acc_norm_stderr\": 0.02221110681006166\n\
\ },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"\
acc\": 0.24814814814814815,\n \"acc_stderr\": 0.026335739404055803,\n \
\ \"acc_norm\": 0.24814814814814815,\n \"acc_norm_stderr\": 0.026335739404055803\n\
\ },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \
\ \"acc\": 0.24789915966386555,\n \"acc_stderr\": 0.028047967224176896,\n\
\ \"acc_norm\": 0.24789915966386555,\n \"acc_norm_stderr\": 0.028047967224176896\n\
\ },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\"\
: 0.31788079470198677,\n \"acc_stderr\": 0.03802039760107903,\n \"\
acc_norm\": 0.31788079470198677,\n \"acc_norm_stderr\": 0.03802039760107903\n\
\ },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\"\
: 0.26422018348623855,\n \"acc_stderr\": 0.01890416417151019,\n \"\
acc_norm\": 0.26422018348623855,\n \"acc_norm_stderr\": 0.01890416417151019\n\
\ },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\"\
: 0.24074074074074073,\n \"acc_stderr\": 0.029157522184605607,\n \"\
acc_norm\": 0.24074074074074073,\n \"acc_norm_stderr\": 0.029157522184605607\n\
\ },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\"\
: 0.23039215686274508,\n \"acc_stderr\": 0.02955429260569507,\n \"\
acc_norm\": 0.23039215686274508,\n \"acc_norm_stderr\": 0.02955429260569507\n\
\ },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"\
acc\": 0.2869198312236287,\n \"acc_stderr\": 0.02944377302259469,\n \
\ \"acc_norm\": 0.2869198312236287,\n \"acc_norm_stderr\": 0.02944377302259469\n\
\ },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.40358744394618834,\n\
\ \"acc_stderr\": 0.032928028193303135,\n \"acc_norm\": 0.40358744394618834,\n\
\ \"acc_norm_stderr\": 0.032928028193303135\n },\n \"harness|hendrycksTest-human_sexuality|5\"\
: {\n \"acc\": 0.24427480916030533,\n \"acc_stderr\": 0.03768335959728745,\n\
\ \"acc_norm\": 0.24427480916030533,\n \"acc_norm_stderr\": 0.03768335959728745\n\
\ },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\":\
\ 0.30578512396694213,\n \"acc_stderr\": 0.04205953933884124,\n \"\
acc_norm\": 0.30578512396694213,\n \"acc_norm_stderr\": 0.04205953933884124\n\
\ },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.26851851851851855,\n\
\ \"acc_stderr\": 0.04284467968052192,\n \"acc_norm\": 0.26851851851851855,\n\
\ \"acc_norm_stderr\": 0.04284467968052192\n },\n \"harness|hendrycksTest-logical_fallacies|5\"\
: {\n \"acc\": 0.22699386503067484,\n \"acc_stderr\": 0.032910995786157686,\n\
\ \"acc_norm\": 0.22699386503067484,\n \"acc_norm_stderr\": 0.032910995786157686\n\
\ },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.23214285714285715,\n\
\ \"acc_stderr\": 0.04007341809755805,\n \"acc_norm\": 0.23214285714285715,\n\
\ \"acc_norm_stderr\": 0.04007341809755805\n },\n \"harness|hendrycksTest-management|5\"\
: {\n \"acc\": 0.27184466019417475,\n \"acc_stderr\": 0.044052680241409216,\n\
\ \"acc_norm\": 0.27184466019417475,\n \"acc_norm_stderr\": 0.044052680241409216\n\
\ },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.2777777777777778,\n\
\ \"acc_stderr\": 0.02934311479809445,\n \"acc_norm\": 0.2777777777777778,\n\
\ \"acc_norm_stderr\": 0.02934311479809445\n },\n \"harness|hendrycksTest-medical_genetics|5\"\
: {\n \"acc\": 0.33,\n \"acc_stderr\": 0.04725815626252605,\n \
\ \"acc_norm\": 0.33,\n \"acc_norm_stderr\": 0.04725815626252605\n \
\ },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.2681992337164751,\n\
\ \"acc_stderr\": 0.015842430835269445,\n \"acc_norm\": 0.2681992337164751,\n\
\ \"acc_norm_stderr\": 0.015842430835269445\n },\n \"harness|hendrycksTest-moral_disputes|5\"\
: {\n \"acc\": 0.30346820809248554,\n \"acc_stderr\": 0.024752411960917202,\n\
\ \"acc_norm\": 0.30346820809248554,\n \"acc_norm_stderr\": 0.024752411960917202\n\
\ },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.24692737430167597,\n\
\ \"acc_stderr\": 0.014422292204808835,\n \"acc_norm\": 0.24692737430167597,\n\
\ \"acc_norm_stderr\": 0.014422292204808835\n },\n \"harness|hendrycksTest-nutrition|5\"\
: {\n \"acc\": 0.27450980392156865,\n \"acc_stderr\": 0.02555316999182651,\n\
\ \"acc_norm\": 0.27450980392156865,\n \"acc_norm_stderr\": 0.02555316999182651\n\
\ },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.29260450160771706,\n\
\ \"acc_stderr\": 0.02583989833487798,\n \"acc_norm\": 0.29260450160771706,\n\
\ \"acc_norm_stderr\": 0.02583989833487798\n },\n \"harness|hendrycksTest-prehistory|5\"\
: {\n \"acc\": 0.29012345679012347,\n \"acc_stderr\": 0.025251173936495022,\n\
\ \"acc_norm\": 0.29012345679012347,\n \"acc_norm_stderr\": 0.025251173936495022\n\
\ },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"\
acc\": 0.2765957446808511,\n \"acc_stderr\": 0.026684564340460987,\n \
\ \"acc_norm\": 0.2765957446808511,\n \"acc_norm_stderr\": 0.026684564340460987\n\
\ },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.23989569752281617,\n\
\ \"acc_stderr\": 0.010906282617981641,\n \"acc_norm\": 0.23989569752281617,\n\
\ \"acc_norm_stderr\": 0.010906282617981641\n },\n \"harness|hendrycksTest-professional_medicine|5\"\
: {\n \"acc\": 0.22058823529411764,\n \"acc_stderr\": 0.025187786660227276,\n\
\ \"acc_norm\": 0.22058823529411764,\n \"acc_norm_stderr\": 0.025187786660227276\n\
\ },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"\
acc\": 0.28594771241830064,\n \"acc_stderr\": 0.018280485072954676,\n \
\ \"acc_norm\": 0.28594771241830064,\n \"acc_norm_stderr\": 0.018280485072954676\n\
\ },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.2636363636363636,\n\
\ \"acc_stderr\": 0.04220224692971987,\n \"acc_norm\": 0.2636363636363636,\n\
\ \"acc_norm_stderr\": 0.04220224692971987\n },\n \"harness|hendrycksTest-security_studies|5\"\
: {\n \"acc\": 0.3673469387755102,\n \"acc_stderr\": 0.030862144921087558,\n\
\ \"acc_norm\": 0.3673469387755102,\n \"acc_norm_stderr\": 0.030862144921087558\n\
\ },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.2736318407960199,\n\
\ \"acc_stderr\": 0.03152439186555402,\n \"acc_norm\": 0.2736318407960199,\n\
\ \"acc_norm_stderr\": 0.03152439186555402\n },\n \"harness|hendrycksTest-us_foreign_policy|5\"\
: {\n \"acc\": 0.37,\n \"acc_stderr\": 0.04852365870939099,\n \
\ \"acc_norm\": 0.37,\n \"acc_norm_stderr\": 0.04852365870939099\n \
\ },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.3192771084337349,\n\
\ \"acc_stderr\": 0.03629335329947861,\n \"acc_norm\": 0.3192771084337349,\n\
\ \"acc_norm_stderr\": 0.03629335329947861\n },\n \"harness|hendrycksTest-world_religions|5\"\
: {\n \"acc\": 0.3157894736842105,\n \"acc_stderr\": 0.035650796707083106,\n\
\ \"acc_norm\": 0.3157894736842105,\n \"acc_norm_stderr\": 0.035650796707083106\n\
\ },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.24479804161566707,\n\
\ \"mc1_stderr\": 0.01505186948671501,\n \"mc2\": 0.38343994316155305,\n\
\ \"mc2_stderr\": 0.013903929837677163\n }\n}\n```"
repo_url: https://huggingface.co/acrastt/Vicuna-3B
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_arc_challenge_25
data_files:
- split: 2023_08_17T13_42_02.549031
path:
- '**/details_harness|arc:challenge|25_2023-08-17T13:42:02.549031.parquet'
- split: latest
path:
- '**/details_harness|arc:challenge|25_2023-08-17T13:42:02.549031.parquet'
- config_name: harness_hellaswag_10
data_files:
- split: 2023_08_17T13_42_02.549031
path:
- '**/details_harness|hellaswag|10_2023-08-17T13:42:02.549031.parquet'
- split: latest
path:
- '**/details_harness|hellaswag|10_2023-08-17T13:42:02.549031.parquet'
- config_name: harness_hendrycksTest_5
data_files:
- split: 2023_08_17T13_42_02.549031
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-08-17T13:42:02.549031.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2023-08-17T13:42:02.549031.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2023-08-17T13:42:02.549031.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2023-08-17T13:42:02.549031.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-08-17T13:42:02.549031.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2023-08-17T13:42:02.549031.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-08-17T13:42:02.549031.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-08-17T13:42:02.549031.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-08-17T13:42:02.549031.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2023-08-17T13:42:02.549031.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2023-08-17T13:42:02.549031.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2023-08-17T13:42:02.549031.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-08-17T13:42:02.549031.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2023-08-17T13:42:02.549031.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-08-17T13:42:02.549031.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-08-17T13:42:02.549031.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2023-08-17T13:42:02.549031.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2023-08-17T13:42:02.549031.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-08-17T13:42:02.549031.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-08-17T13:42:02.549031.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-08-17T13:42:02.549031.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-08-17T13:42:02.549031.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-08-17T13:42:02.549031.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-08-17T13:42:02.549031.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-08-17T13:42:02.549031.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-08-17T13:42:02.549031.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-08-17T13:42:02.549031.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-08-17T13:42:02.549031.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-08-17T13:42:02.549031.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-08-17T13:42:02.549031.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-08-17T13:42:02.549031.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-08-17T13:42:02.549031.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2023-08-17T13:42:02.549031.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-08-17T13:42:02.549031.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2023-08-17T13:42:02.549031.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-08-17T13:42:02.549031.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-08-17T13:42:02.549031.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2023-08-17T13:42:02.549031.parquet'
- '**/details_harness|hendrycksTest-management|5_2023-08-17T13:42:02.549031.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2023-08-17T13:42:02.549031.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-08-17T13:42:02.549031.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-08-17T13:42:02.549031.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-08-17T13:42:02.549031.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-08-17T13:42:02.549031.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2023-08-17T13:42:02.549031.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2023-08-17T13:42:02.549031.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2023-08-17T13:42:02.549031.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-08-17T13:42:02.549031.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2023-08-17T13:42:02.549031.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-08-17T13:42:02.549031.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-08-17T13:42:02.549031.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2023-08-17T13:42:02.549031.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2023-08-17T13:42:02.549031.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2023-08-17T13:42:02.549031.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-08-17T13:42:02.549031.parquet'
- '**/details_harness|hendrycksTest-virology|5_2023-08-17T13:42:02.549031.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2023-08-17T13:42:02.549031.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-08-17T13:42:02.549031.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2023-08-17T13:42:02.549031.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2023-08-17T13:42:02.549031.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2023-08-17T13:42:02.549031.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-08-17T13:42:02.549031.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2023-08-17T13:42:02.549031.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-08-17T13:42:02.549031.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-08-17T13:42:02.549031.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-08-17T13:42:02.549031.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2023-08-17T13:42:02.549031.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2023-08-17T13:42:02.549031.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2023-08-17T13:42:02.549031.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-08-17T13:42:02.549031.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2023-08-17T13:42:02.549031.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-08-17T13:42:02.549031.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-08-17T13:42:02.549031.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2023-08-17T13:42:02.549031.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2023-08-17T13:42:02.549031.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-08-17T13:42:02.549031.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-08-17T13:42:02.549031.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-08-17T13:42:02.549031.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-08-17T13:42:02.549031.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-08-17T13:42:02.549031.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-08-17T13:42:02.549031.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-08-17T13:42:02.549031.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-08-17T13:42:02.549031.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-08-17T13:42:02.549031.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-08-17T13:42:02.549031.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-08-17T13:42:02.549031.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-08-17T13:42:02.549031.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-08-17T13:42:02.549031.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-08-17T13:42:02.549031.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2023-08-17T13:42:02.549031.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-08-17T13:42:02.549031.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2023-08-17T13:42:02.549031.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-08-17T13:42:02.549031.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-08-17T13:42:02.549031.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2023-08-17T13:42:02.549031.parquet'
- '**/details_harness|hendrycksTest-management|5_2023-08-17T13:42:02.549031.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2023-08-17T13:42:02.549031.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-08-17T13:42:02.549031.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-08-17T13:42:02.549031.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-08-17T13:42:02.549031.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-08-17T13:42:02.549031.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2023-08-17T13:42:02.549031.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2023-08-17T13:42:02.549031.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2023-08-17T13:42:02.549031.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-08-17T13:42:02.549031.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2023-08-17T13:42:02.549031.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-08-17T13:42:02.549031.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-08-17T13:42:02.549031.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2023-08-17T13:42:02.549031.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2023-08-17T13:42:02.549031.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2023-08-17T13:42:02.549031.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-08-17T13:42:02.549031.parquet'
- '**/details_harness|hendrycksTest-virology|5_2023-08-17T13:42:02.549031.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2023-08-17T13:42:02.549031.parquet'
- config_name: harness_hendrycksTest_abstract_algebra_5
data_files:
- split: 2023_08_17T13_42_02.549031
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-08-17T13:42:02.549031.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-08-17T13:42:02.549031.parquet'
- config_name: harness_hendrycksTest_anatomy_5
data_files:
- split: 2023_08_17T13_42_02.549031
path:
- '**/details_harness|hendrycksTest-anatomy|5_2023-08-17T13:42:02.549031.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-anatomy|5_2023-08-17T13:42:02.549031.parquet'
- config_name: harness_hendrycksTest_astronomy_5
data_files:
- split: 2023_08_17T13_42_02.549031
path:
- '**/details_harness|hendrycksTest-astronomy|5_2023-08-17T13:42:02.549031.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-astronomy|5_2023-08-17T13:42:02.549031.parquet'
- config_name: harness_hendrycksTest_business_ethics_5
data_files:
- split: 2023_08_17T13_42_02.549031
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2023-08-17T13:42:02.549031.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2023-08-17T13:42:02.549031.parquet'
- config_name: harness_hendrycksTest_clinical_knowledge_5
data_files:
- split: 2023_08_17T13_42_02.549031
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-08-17T13:42:02.549031.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-08-17T13:42:02.549031.parquet'
- config_name: harness_hendrycksTest_college_biology_5
data_files:
- split: 2023_08_17T13_42_02.549031
path:
- '**/details_harness|hendrycksTest-college_biology|5_2023-08-17T13:42:02.549031.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_biology|5_2023-08-17T13:42:02.549031.parquet'
- config_name: harness_hendrycksTest_college_chemistry_5
data_files:
- split: 2023_08_17T13_42_02.549031
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-08-17T13:42:02.549031.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-08-17T13:42:02.549031.parquet'
- config_name: harness_hendrycksTest_college_computer_science_5
data_files:
- split: 2023_08_17T13_42_02.549031
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-08-17T13:42:02.549031.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-08-17T13:42:02.549031.parquet'
- config_name: harness_hendrycksTest_college_mathematics_5
data_files:
- split: 2023_08_17T13_42_02.549031
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-08-17T13:42:02.549031.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-08-17T13:42:02.549031.parquet'
- config_name: harness_hendrycksTest_college_medicine_5
data_files:
- split: 2023_08_17T13_42_02.549031
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2023-08-17T13:42:02.549031.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2023-08-17T13:42:02.549031.parquet'
- config_name: harness_hendrycksTest_college_physics_5
data_files:
- split: 2023_08_17T13_42_02.549031
path:
- '**/details_harness|hendrycksTest-college_physics|5_2023-08-17T13:42:02.549031.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_physics|5_2023-08-17T13:42:02.549031.parquet'
- config_name: harness_hendrycksTest_computer_security_5
data_files:
- split: 2023_08_17T13_42_02.549031
path:
- '**/details_harness|hendrycksTest-computer_security|5_2023-08-17T13:42:02.549031.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-computer_security|5_2023-08-17T13:42:02.549031.parquet'
- config_name: harness_hendrycksTest_conceptual_physics_5
data_files:
- split: 2023_08_17T13_42_02.549031
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-08-17T13:42:02.549031.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-08-17T13:42:02.549031.parquet'
- config_name: harness_hendrycksTest_econometrics_5
data_files:
- split: 2023_08_17T13_42_02.549031
path:
- '**/details_harness|hendrycksTest-econometrics|5_2023-08-17T13:42:02.549031.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-econometrics|5_2023-08-17T13:42:02.549031.parquet'
- config_name: harness_hendrycksTest_electrical_engineering_5
data_files:
- split: 2023_08_17T13_42_02.549031
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-08-17T13:42:02.549031.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-08-17T13:42:02.549031.parquet'
- config_name: harness_hendrycksTest_elementary_mathematics_5
data_files:
- split: 2023_08_17T13_42_02.549031
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-08-17T13:42:02.549031.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-08-17T13:42:02.549031.parquet'
- config_name: harness_hendrycksTest_formal_logic_5
data_files:
- split: 2023_08_17T13_42_02.549031
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2023-08-17T13:42:02.549031.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2023-08-17T13:42:02.549031.parquet'
- config_name: harness_hendrycksTest_global_facts_5
data_files:
- split: 2023_08_17T13_42_02.549031
path:
- '**/details_harness|hendrycksTest-global_facts|5_2023-08-17T13:42:02.549031.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-global_facts|5_2023-08-17T13:42:02.549031.parquet'
- config_name: harness_hendrycksTest_high_school_biology_5
data_files:
- split: 2023_08_17T13_42_02.549031
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-08-17T13:42:02.549031.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-08-17T13:42:02.549031.parquet'
- config_name: harness_hendrycksTest_high_school_chemistry_5
data_files:
- split: 2023_08_17T13_42_02.549031
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-08-17T13:42:02.549031.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-08-17T13:42:02.549031.parquet'
- config_name: harness_hendrycksTest_high_school_computer_science_5
data_files:
- split: 2023_08_17T13_42_02.549031
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-08-17T13:42:02.549031.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-08-17T13:42:02.549031.parquet'
- config_name: harness_hendrycksTest_high_school_european_history_5
data_files:
- split: 2023_08_17T13_42_02.549031
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-08-17T13:42:02.549031.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-08-17T13:42:02.549031.parquet'
- config_name: harness_hendrycksTest_high_school_geography_5
data_files:
- split: 2023_08_17T13_42_02.549031
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-08-17T13:42:02.549031.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-08-17T13:42:02.549031.parquet'
- config_name: harness_hendrycksTest_high_school_government_and_politics_5
data_files:
- split: 2023_08_17T13_42_02.549031
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-08-17T13:42:02.549031.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-08-17T13:42:02.549031.parquet'
- config_name: harness_hendrycksTest_high_school_macroeconomics_5
data_files:
- split: 2023_08_17T13_42_02.549031
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-08-17T13:42:02.549031.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-08-17T13:42:02.549031.parquet'
- config_name: harness_hendrycksTest_high_school_mathematics_5
data_files:
- split: 2023_08_17T13_42_02.549031
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-08-17T13:42:02.549031.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-08-17T13:42:02.549031.parquet'
- config_name: harness_hendrycksTest_high_school_microeconomics_5
data_files:
- split: 2023_08_17T13_42_02.549031
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-08-17T13:42:02.549031.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-08-17T13:42:02.549031.parquet'
- config_name: harness_hendrycksTest_high_school_physics_5
data_files:
- split: 2023_08_17T13_42_02.549031
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-08-17T13:42:02.549031.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-08-17T13:42:02.549031.parquet'
- config_name: harness_hendrycksTest_high_school_psychology_5
data_files:
- split: 2023_08_17T13_42_02.549031
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-08-17T13:42:02.549031.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-08-17T13:42:02.549031.parquet'
- config_name: harness_hendrycksTest_high_school_statistics_5
data_files:
- split: 2023_08_17T13_42_02.549031
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-08-17T13:42:02.549031.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-08-17T13:42:02.549031.parquet'
- config_name: harness_hendrycksTest_high_school_us_history_5
data_files:
- split: 2023_08_17T13_42_02.549031
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-08-17T13:42:02.549031.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-08-17T13:42:02.549031.parquet'
- config_name: harness_hendrycksTest_high_school_world_history_5
data_files:
- split: 2023_08_17T13_42_02.549031
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-08-17T13:42:02.549031.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-08-17T13:42:02.549031.parquet'
- config_name: harness_hendrycksTest_human_aging_5
data_files:
- split: 2023_08_17T13_42_02.549031
path:
- '**/details_harness|hendrycksTest-human_aging|5_2023-08-17T13:42:02.549031.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_aging|5_2023-08-17T13:42:02.549031.parquet'
- config_name: harness_hendrycksTest_human_sexuality_5
data_files:
- split: 2023_08_17T13_42_02.549031
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-08-17T13:42:02.549031.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-08-17T13:42:02.549031.parquet'
- config_name: harness_hendrycksTest_international_law_5
data_files:
- split: 2023_08_17T13_42_02.549031
path:
- '**/details_harness|hendrycksTest-international_law|5_2023-08-17T13:42:02.549031.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-international_law|5_2023-08-17T13:42:02.549031.parquet'
- config_name: harness_hendrycksTest_jurisprudence_5
data_files:
- split: 2023_08_17T13_42_02.549031
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-08-17T13:42:02.549031.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-08-17T13:42:02.549031.parquet'
- config_name: harness_hendrycksTest_logical_fallacies_5
data_files:
- split: 2023_08_17T13_42_02.549031
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-08-17T13:42:02.549031.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-08-17T13:42:02.549031.parquet'
- config_name: harness_hendrycksTest_machine_learning_5
data_files:
- split: 2023_08_17T13_42_02.549031
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2023-08-17T13:42:02.549031.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2023-08-17T13:42:02.549031.parquet'
- config_name: harness_hendrycksTest_management_5
data_files:
- split: 2023_08_17T13_42_02.549031
path:
- '**/details_harness|hendrycksTest-management|5_2023-08-17T13:42:02.549031.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-management|5_2023-08-17T13:42:02.549031.parquet'
- config_name: harness_hendrycksTest_marketing_5
data_files:
- split: 2023_08_17T13_42_02.549031
path:
- '**/details_harness|hendrycksTest-marketing|5_2023-08-17T13:42:02.549031.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-marketing|5_2023-08-17T13:42:02.549031.parquet'
- config_name: harness_hendrycksTest_medical_genetics_5
data_files:
- split: 2023_08_17T13_42_02.549031
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-08-17T13:42:02.549031.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-08-17T13:42:02.549031.parquet'
- config_name: harness_hendrycksTest_miscellaneous_5
data_files:
- split: 2023_08_17T13_42_02.549031
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-08-17T13:42:02.549031.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-08-17T13:42:02.549031.parquet'
- config_name: harness_hendrycksTest_moral_disputes_5
data_files:
- split: 2023_08_17T13_42_02.549031
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-08-17T13:42:02.549031.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-08-17T13:42:02.549031.parquet'
- config_name: harness_hendrycksTest_moral_scenarios_5
data_files:
- split: 2023_08_17T13_42_02.549031
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-08-17T13:42:02.549031.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-08-17T13:42:02.549031.parquet'
- config_name: harness_hendrycksTest_nutrition_5
data_files:
- split: 2023_08_17T13_42_02.549031
path:
- '**/details_harness|hendrycksTest-nutrition|5_2023-08-17T13:42:02.549031.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-nutrition|5_2023-08-17T13:42:02.549031.parquet'
- config_name: harness_hendrycksTest_philosophy_5
data_files:
- split: 2023_08_17T13_42_02.549031
path:
- '**/details_harness|hendrycksTest-philosophy|5_2023-08-17T13:42:02.549031.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-philosophy|5_2023-08-17T13:42:02.549031.parquet'
- config_name: harness_hendrycksTest_prehistory_5
data_files:
- split: 2023_08_17T13_42_02.549031
path:
- '**/details_harness|hendrycksTest-prehistory|5_2023-08-17T13:42:02.549031.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-prehistory|5_2023-08-17T13:42:02.549031.parquet'
- config_name: harness_hendrycksTest_professional_accounting_5
data_files:
- split: 2023_08_17T13_42_02.549031
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-08-17T13:42:02.549031.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-08-17T13:42:02.549031.parquet'
- config_name: harness_hendrycksTest_professional_law_5
data_files:
- split: 2023_08_17T13_42_02.549031
path:
- '**/details_harness|hendrycksTest-professional_law|5_2023-08-17T13:42:02.549031.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_law|5_2023-08-17T13:42:02.549031.parquet'
- config_name: harness_hendrycksTest_professional_medicine_5
data_files:
- split: 2023_08_17T13_42_02.549031
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-08-17T13:42:02.549031.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-08-17T13:42:02.549031.parquet'
- config_name: harness_hendrycksTest_professional_psychology_5
data_files:
- split: 2023_08_17T13_42_02.549031
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-08-17T13:42:02.549031.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-08-17T13:42:02.549031.parquet'
- config_name: harness_hendrycksTest_public_relations_5
data_files:
- split: 2023_08_17T13_42_02.549031
path:
- '**/details_harness|hendrycksTest-public_relations|5_2023-08-17T13:42:02.549031.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-public_relations|5_2023-08-17T13:42:02.549031.parquet'
- config_name: harness_hendrycksTest_security_studies_5
data_files:
- split: 2023_08_17T13_42_02.549031
path:
- '**/details_harness|hendrycksTest-security_studies|5_2023-08-17T13:42:02.549031.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-security_studies|5_2023-08-17T13:42:02.549031.parquet'
- config_name: harness_hendrycksTest_sociology_5
data_files:
- split: 2023_08_17T13_42_02.549031
path:
- '**/details_harness|hendrycksTest-sociology|5_2023-08-17T13:42:02.549031.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-sociology|5_2023-08-17T13:42:02.549031.parquet'
- config_name: harness_hendrycksTest_us_foreign_policy_5
data_files:
- split: 2023_08_17T13_42_02.549031
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-08-17T13:42:02.549031.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-08-17T13:42:02.549031.parquet'
- config_name: harness_hendrycksTest_virology_5
data_files:
- split: 2023_08_17T13_42_02.549031
path:
- '**/details_harness|hendrycksTest-virology|5_2023-08-17T13:42:02.549031.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-virology|5_2023-08-17T13:42:02.549031.parquet'
- config_name: harness_hendrycksTest_world_religions_5
data_files:
- split: 2023_08_17T13_42_02.549031
path:
- '**/details_harness|hendrycksTest-world_religions|5_2023-08-17T13:42:02.549031.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-world_religions|5_2023-08-17T13:42:02.549031.parquet'
- config_name: harness_truthfulqa_mc_0
data_files:
- split: 2023_08_17T13_42_02.549031
path:
- '**/details_harness|truthfulqa:mc|0_2023-08-17T13:42:02.549031.parquet'
- split: latest
path:
- '**/details_harness|truthfulqa:mc|0_2023-08-17T13:42:02.549031.parquet'
- config_name: results
data_files:
- split: 2023_08_17T13_42_02.549031
path:
- results_2023-08-17T13:42:02.549031.parquet
- split: latest
path:
- results_2023-08-17T13:42:02.549031.parquet
---
# Dataset Card for Evaluation run of acrastt/Vicuna-3B
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/acrastt/Vicuna-3B
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** clementine@hf.co
### Dataset Summary
Dataset automatically created during the evaluation run of model [acrastt/Vicuna-3B](https://huggingface.co/acrastt/Vicuna-3B) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 61 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run. Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "latest" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_acrastt__Vicuna-3B",
"harness_truthfulqa_mc_0",
	split="latest")
```
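The configuration names above are derived from the harness task names by replacing the `|`, `:`, and `-` separators with underscores (e.g. `harness|hendrycksTest-anatomy|5` becomes `harness_hendrycksTest_anatomy_5`). A minimal sketch of that mapping, useful for programmatically loading a task's details (the helper name is our own, not part of the leaderboard tooling):

```python
def task_to_config_name(task: str) -> str:
    """Map a harness task name to its dataset configuration name.

    Examples:
        "harness|hendrycksTest-anatomy|5" -> "harness_hendrycksTest_anatomy_5"
        "harness|truthfulqa:mc|0"         -> "harness_truthfulqa_mc_0"
    """
    return task.replace("|", "_").replace(":", "_").replace("-", "_")


print(task_to_config_name("harness|hendrycksTest-anatomy|5"))
```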
## Latest results
These are the [latest results from run 2023-08-17T13:42:02.549031](https://huggingface.co/datasets/open-llm-leaderboard/details_acrastt__Vicuna-3B/blob/main/results_2023-08-17T13%3A42%3A02.549031.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"acc": 0.2811009875581447,
"acc_stderr": 0.03252062846238445,
"acc_norm": 0.2849236147706273,
"acc_norm_stderr": 0.03251779934164602,
"mc1": 0.24479804161566707,
"mc1_stderr": 0.01505186948671501,
"mc2": 0.38343994316155305,
"mc2_stderr": 0.013903929837677163
},
"harness|arc:challenge|25": {
"acc": 0.3651877133105802,
"acc_stderr": 0.014070265519268804,
"acc_norm": 0.4129692832764505,
"acc_norm_stderr": 0.014388344935398326
},
"harness|hellaswag|10": {
"acc": 0.5407289384584744,
"acc_stderr": 0.004973199296339957,
"acc_norm": 0.7184823740290779,
"acc_norm_stderr": 0.004488201756642574
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.24,
"acc_stderr": 0.04292346959909281,
"acc_norm": 0.24,
"acc_norm_stderr": 0.04292346959909281
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.23703703703703705,
"acc_stderr": 0.03673731683969506,
"acc_norm": 0.23703703703703705,
"acc_norm_stderr": 0.03673731683969506
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.26973684210526316,
"acc_stderr": 0.03611780560284898,
"acc_norm": 0.26973684210526316,
"acc_norm_stderr": 0.03611780560284898
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.36,
"acc_stderr": 0.04824181513244218,
"acc_norm": 0.36,
"acc_norm_stderr": 0.04824181513244218
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.2641509433962264,
"acc_stderr": 0.027134291628741713,
"acc_norm": 0.2641509433962264,
"acc_norm_stderr": 0.027134291628741713
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.2638888888888889,
"acc_stderr": 0.03685651095897532,
"acc_norm": 0.2638888888888889,
"acc_norm_stderr": 0.03685651095897532
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.19,
"acc_stderr": 0.03942772444036623,
"acc_norm": 0.19,
"acc_norm_stderr": 0.03942772444036623
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.25,
"acc_stderr": 0.04351941398892446,
"acc_norm": 0.25,
"acc_norm_stderr": 0.04351941398892446
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.37,
"acc_stderr": 0.04852365870939099,
"acc_norm": 0.37,
"acc_norm_stderr": 0.04852365870939099
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.2138728323699422,
"acc_stderr": 0.03126511206173044,
"acc_norm": 0.2138728323699422,
"acc_norm_stderr": 0.03126511206173044
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.2647058823529412,
"acc_stderr": 0.04389869956808778,
"acc_norm": 0.2647058823529412,
"acc_norm_stderr": 0.04389869956808778
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.33,
"acc_stderr": 0.047258156262526045,
"acc_norm": 0.33,
"acc_norm_stderr": 0.047258156262526045
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.33617021276595743,
"acc_stderr": 0.030881618520676942,
"acc_norm": 0.33617021276595743,
"acc_norm_stderr": 0.030881618520676942
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.21929824561403508,
"acc_stderr": 0.03892431106518755,
"acc_norm": 0.21929824561403508,
"acc_norm_stderr": 0.03892431106518755
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.2620689655172414,
"acc_stderr": 0.036646663372252565,
"acc_norm": 0.2620689655172414,
"acc_norm_stderr": 0.036646663372252565
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.2777777777777778,
"acc_stderr": 0.0230681888482611,
"acc_norm": 0.2777777777777778,
"acc_norm_stderr": 0.0230681888482611
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.20634920634920634,
"acc_stderr": 0.0361960452412425,
"acc_norm": 0.20634920634920634,
"acc_norm_stderr": 0.0361960452412425
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.29,
"acc_stderr": 0.04560480215720684,
"acc_norm": 0.29,
"acc_norm_stderr": 0.04560480215720684
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.22903225806451613,
"acc_stderr": 0.023904914311782648,
"acc_norm": 0.22903225806451613,
"acc_norm_stderr": 0.023904914311782648
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.2512315270935961,
"acc_stderr": 0.030516530732694436,
"acc_norm": 0.2512315270935961,
"acc_norm_stderr": 0.030516530732694436
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.29,
"acc_stderr": 0.04560480215720685,
"acc_norm": 0.29,
"acc_norm_stderr": 0.04560480215720685
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.30303030303030304,
"acc_stderr": 0.035886248000917075,
"acc_norm": 0.30303030303030304,
"acc_norm_stderr": 0.035886248000917075
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.25252525252525254,
"acc_stderr": 0.030954055470365904,
"acc_norm": 0.25252525252525254,
"acc_norm_stderr": 0.030954055470365904
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.24352331606217617,
"acc_stderr": 0.03097543638684543,
"acc_norm": 0.24352331606217617,
"acc_norm_stderr": 0.03097543638684543
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.258974358974359,
"acc_stderr": 0.02221110681006166,
"acc_norm": 0.258974358974359,
"acc_norm_stderr": 0.02221110681006166
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.24814814814814815,
"acc_stderr": 0.026335739404055803,
"acc_norm": 0.24814814814814815,
"acc_norm_stderr": 0.026335739404055803
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.24789915966386555,
"acc_stderr": 0.028047967224176896,
"acc_norm": 0.24789915966386555,
"acc_norm_stderr": 0.028047967224176896
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.31788079470198677,
"acc_stderr": 0.03802039760107903,
"acc_norm": 0.31788079470198677,
"acc_norm_stderr": 0.03802039760107903
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.26422018348623855,
"acc_stderr": 0.01890416417151019,
"acc_norm": 0.26422018348623855,
"acc_norm_stderr": 0.01890416417151019
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.24074074074074073,
"acc_stderr": 0.029157522184605607,
"acc_norm": 0.24074074074074073,
"acc_norm_stderr": 0.029157522184605607
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.23039215686274508,
"acc_stderr": 0.02955429260569507,
"acc_norm": 0.23039215686274508,
"acc_norm_stderr": 0.02955429260569507
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.2869198312236287,
"acc_stderr": 0.02944377302259469,
"acc_norm": 0.2869198312236287,
"acc_norm_stderr": 0.02944377302259469
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.40358744394618834,
"acc_stderr": 0.032928028193303135,
"acc_norm": 0.40358744394618834,
"acc_norm_stderr": 0.032928028193303135
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.24427480916030533,
"acc_stderr": 0.03768335959728745,
"acc_norm": 0.24427480916030533,
"acc_norm_stderr": 0.03768335959728745
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.30578512396694213,
"acc_stderr": 0.04205953933884124,
"acc_norm": 0.30578512396694213,
"acc_norm_stderr": 0.04205953933884124
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.26851851851851855,
"acc_stderr": 0.04284467968052192,
"acc_norm": 0.26851851851851855,
"acc_norm_stderr": 0.04284467968052192
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.22699386503067484,
"acc_stderr": 0.032910995786157686,
"acc_norm": 0.22699386503067484,
"acc_norm_stderr": 0.032910995786157686
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.23214285714285715,
"acc_stderr": 0.04007341809755805,
"acc_norm": 0.23214285714285715,
"acc_norm_stderr": 0.04007341809755805
},
"harness|hendrycksTest-management|5": {
"acc": 0.27184466019417475,
"acc_stderr": 0.044052680241409216,
"acc_norm": 0.27184466019417475,
"acc_norm_stderr": 0.044052680241409216
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.2777777777777778,
"acc_stderr": 0.02934311479809445,
"acc_norm": 0.2777777777777778,
"acc_norm_stderr": 0.02934311479809445
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.33,
"acc_stderr": 0.04725815626252605,
"acc_norm": 0.33,
"acc_norm_stderr": 0.04725815626252605
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.2681992337164751,
"acc_stderr": 0.015842430835269445,
"acc_norm": 0.2681992337164751,
"acc_norm_stderr": 0.015842430835269445
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.30346820809248554,
"acc_stderr": 0.024752411960917202,
"acc_norm": 0.30346820809248554,
"acc_norm_stderr": 0.024752411960917202
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.24692737430167597,
"acc_stderr": 0.014422292204808835,
"acc_norm": 0.24692737430167597,
"acc_norm_stderr": 0.014422292204808835
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.27450980392156865,
"acc_stderr": 0.02555316999182651,
"acc_norm": 0.27450980392156865,
"acc_norm_stderr": 0.02555316999182651
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.29260450160771706,
"acc_stderr": 0.02583989833487798,
"acc_norm": 0.29260450160771706,
"acc_norm_stderr": 0.02583989833487798
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.29012345679012347,
"acc_stderr": 0.025251173936495022,
"acc_norm": 0.29012345679012347,
"acc_norm_stderr": 0.025251173936495022
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.2765957446808511,
"acc_stderr": 0.026684564340460987,
"acc_norm": 0.2765957446808511,
"acc_norm_stderr": 0.026684564340460987
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.23989569752281617,
"acc_stderr": 0.010906282617981641,
"acc_norm": 0.23989569752281617,
"acc_norm_stderr": 0.010906282617981641
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.22058823529411764,
"acc_stderr": 0.025187786660227276,
"acc_norm": 0.22058823529411764,
"acc_norm_stderr": 0.025187786660227276
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.28594771241830064,
"acc_stderr": 0.018280485072954676,
"acc_norm": 0.28594771241830064,
"acc_norm_stderr": 0.018280485072954676
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.2636363636363636,
"acc_stderr": 0.04220224692971987,
"acc_norm": 0.2636363636363636,
"acc_norm_stderr": 0.04220224692971987
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.3673469387755102,
"acc_stderr": 0.030862144921087558,
"acc_norm": 0.3673469387755102,
"acc_norm_stderr": 0.030862144921087558
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.2736318407960199,
"acc_stderr": 0.03152439186555402,
"acc_norm": 0.2736318407960199,
"acc_norm_stderr": 0.03152439186555402
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.37,
"acc_stderr": 0.04852365870939099,
"acc_norm": 0.37,
"acc_norm_stderr": 0.04852365870939099
},
"harness|hendrycksTest-virology|5": {
"acc": 0.3192771084337349,
"acc_stderr": 0.03629335329947861,
"acc_norm": 0.3192771084337349,
"acc_norm_stderr": 0.03629335329947861
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.3157894736842105,
"acc_stderr": 0.035650796707083106,
"acc_norm": 0.3157894736842105,
"acc_norm_stderr": 0.035650796707083106
},
"harness|truthfulqa:mc|0": {
"mc1": 0.24479804161566707,
"mc1_stderr": 0.01505186948671501,
"mc2": 0.38343994316155305,
"mc2_stderr": 0.013903929837677163
}
}
```
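The per-task results above can be aggregated programmatically. A minimal sketch, assuming the full results file follows the structure excerpted above (task keys prefixed `harness|hendrycksTest-`, each holding `acc`/`acc_norm` fields); the two-entry `sample` dict is illustrative, not the full file:

```python
import json  # for loading a saved results file, e.g. json.load(open("results.json"))

def mean_metric(results: dict, metric: str = "acc") -> float:
    """Average a metric over all MMLU (hendrycksTest) tasks in a harness results dict."""
    values = [
        scores[metric]
        for task, scores in results.items()
        if task.startswith("harness|hendrycksTest-") and metric in scores
    ]
    return sum(values) / len(values)

# Example with two of the task entries shown above.
sample = {
    "harness|hendrycksTest-virology|5": {"acc": 0.3192771084337349},
    "harness|hendrycksTest-world_religions|5": {"acc": 0.3157894736842105},
}
print(round(mean_metric(sample), 4))  # 0.3175
```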
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed]
potsawee/podcast_summary_assessment | ---
license: cc-by-4.0
language:
- en
size_categories:
- 1K<n<10K
dataset_info:
features:
- name: transcript
dtype: string
- name: summary
dtype: string
- name: score
dtype: string
- name: attributes
sequence: int64
- name: episode_id
dtype: string
- name: system_id
dtype: string
splits:
- name: evaluation
num_bytes: 100261033
num_examples: 3580
download_size: 11951831
dataset_size: 100261033
---
# Podcast Summary Assessment
- The description is available in our GitHub repo: https://github.com/potsawee/podcast_summary_assessment
- Paper: [Podcast Summary Assessment: A Resource for Evaluating Summary Assessment Methods](https://arxiv.org/abs/2208.13265)
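The card's YAML declares a single `evaluation` split with `transcript`, `summary`, `score`, `attributes`, `episode_id`, and `system_id` fields. A minimal sketch of grouping the human `score` labels per summarization system; the rows below are illustrative stand-ins (the label strings are assumptions), while real rows come from `datasets.load_dataset("potsawee/podcast_summary_assessment", split="evaluation")`:

```python
from collections import defaultdict

def scores_by_system(rows):
    """Group the human `score` labels by the system that produced each summary."""
    grouped = defaultdict(list)
    for row in rows:
        grouped[row["system_id"]].append(row["score"])
    return dict(grouped)

# Illustrative rows mirroring the declared schema (label values are hypothetical).
rows = [
    {"system_id": "sys-A", "score": "Good", "episode_id": "ep1"},
    {"system_id": "sys-A", "score": "Fair", "episode_id": "ep2"},
    {"system_id": "sys-B", "score": "Bad", "episode_id": "ep1"},
]
print(scores_by_system(rows))
```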
### Citation Information
```
@article{manakul2022podcast,
title={Podcast Summary Assessment: A Resource for Evaluating Summary Assessment Methods},
author={Manakul, Potsawee and Gales, Mark JF},
journal={arXiv preprint arXiv:2208.13265},
year={2022}
}
```
kdave/Indian_Financial_News | ---
task_categories:
- summarization
- text-classification
language:
- en
tags:
- finance
- Financial News
- Sentiment Analysis
- Stock Market
- Text Summarization
- Indian Finance
- BERT
- FinBERT
- NLP (Natural Language Processing)
- Hugging Face Dataset
- T5-base
- GPT (Google Sheets Add-on)
- Data Annotation
pretty_name: IndiaFinanceSent Corpus
size_categories:
- 10K<n<100K
---
# Dataset Card for IndiaFinanceSent Corpus
<!-- Provide a quick summary of the dataset. -->
The FinancialNewsSentiment_26000 dataset comprises 26,000 rows of financial news articles related to the Indian market. It features four columns: URL, Content (scraped content), Summary (generated using the T5-base model), and Sentiment Analysis (gathered using the GPT add-on for Google Sheets). The dataset is designed for sentiment analysis tasks, providing a comprehensive view of sentiments expressed in financial news.
## Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** Khushi Dave
- **Language(s):** English
- **Type:** Text
- **Domain:** Financial, Economy
- **Size:** 112,293 KB
- **Version:** 1.0
- **Last Updated:** 01/01/2024
## Dataset Sources
<!-- Provide the basic links for the dataset. -->
- **Repository:** https://huggingface.co/datasets/kdave/Indian_Financial_News
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
**Sentiment Analysis Research:** Ideal for exploring sentiment nuances in Indian financial news.
**NLP Projects:** Enhance NLP models with diverse financial text for improved understanding.
**Algorithmic Trading Strategies:** Study correlations between sentiment shifts and market movements.
**News Aggregation:** Generate concise summaries with sentiment insights for financial news.
**Educational Resource:** Hands-on examples for teaching sentiment analysis and financial text processing.
**Ethical AI Exploration:** Analyze biases in sentiment analysis models for ethical AI research.
**Model Benchmarking:** Evaluate and benchmark sentiment analysis models for financial text.
**Note:** Use cautiously; do not rely solely on model predictions for financial decision-making.
## Dataset Creation
- **Format:** String
- **Columns:**
  - URL: URL of the news article
  - Content: Scraped content of the news article
  - Summary: Summarized version generated using T5-base
  - Sentiment Analysis: Sentiment labels (Positive, Negative, Neutral) gathered using the GPT add-on
## Data Collection
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
**Source Selection:** Aggregation of Indian financial news articles from reputable sources covering a range of topics.
**URL Scraping:** Extraction of URLs for each article to maintain a connection between the dataset and the original content.
**Content Scraping:** Extraction of article content for analysis and modeling purposes.
**Summarization:** Utilization of the T5-base model from Hugging Face for content summarization.
**Sentiment Annotation:** Manual sentiment labeling using the GPT add-on for Google Sheets to categorize each article as Positive, Negative, or Neutral.
## Data Processing:
**Cleaning and Tokenization:** Standard preprocessing techniques were applied to clean and tokenize the content, ensuring uniformity and consistency.
**Format Standardization:** Conversion of data into a structured format with columns: URL, Content, Summary, and Sentiment Analysis.
**Dataset Splitting:** Given the subjective nature of sentiments, the dataset was not split into training, validation, and testing sets. Users are encouraged to customize splits based on their specific use cases.
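Since no official split is provided, a reproducible custom split can be drawn locally. A minimal stdlib sketch, fixing a seed so the split is reproducible (with the `datasets` library, `Dataset.train_test_split(test_size=0.2, seed=42)` achieves the same):

```python
import random

def split_indices(n, test_frac=0.2, seed=42):
    """Return disjoint train/test index lists covering range(n)."""
    idx = list(range(n))
    random.Random(seed).shuffle(idx)  # seeded shuffle keeps the split reproducible
    cut = int(n * (1 - test_frac))
    return idx[:cut], idx[cut:]

train_idx, test_idx = split_indices(26000)
print(len(train_idx), len(test_idx))  # 20800 5200
```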
## Tools and Libraries:
**Beautiful Soup:** Used for web scraping to extract content from HTML.
**Hugging Face Transformers:** Employed for summarization using the T5-base model.
**GPT Add-on for Google Sheets:** Facilitated manual sentiment annotation.
**Pandas:** Utilized for data manipulation and structuring.
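As a quick sanity check after loading the corpus (e.g. via `datasets.load_dataset("kdave/Indian_Financial_News")`), the sentiment label distribution can be tallied with the standard library. A minimal sketch; the example rows below are illustrative placeholders, not real dataset entries:

```python
from collections import Counter

def sentiment_distribution(rows):
    """Count Positive/Negative/Neutral labels in the `Sentiment Analysis` column."""
    return Counter(row["Sentiment Analysis"] for row in rows)

# Illustrative rows following the card's column layout.
rows = [
    {"URL": "https://example.com/a", "Sentiment Analysis": "Positive"},
    {"URL": "https://example.com/b", "Sentiment Analysis": "Neutral"},
    {"URL": "https://example.com/c", "Sentiment Analysis": "Positive"},
]
print(sentiment_distribution(rows))  # Counter({'Positive': 2, 'Neutral': 1})
```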
## Citation
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
```bibtex
@dataset{dave2024indiafinancesent,
  author = {Dave, Khushi},
  year = {2024},
  title = {IndiaFinanceSent Corpus},
  url = {https://huggingface.co/datasets/kdave/Indian_Financial_News},
}
```
## Dataset Card Authors
Khushi Dave, Data Scientist