datasetId stringlengths 2 117 | card stringlengths 19 1.01M |
|---|---|
taskmasterpeace/andrea1 | ---
license: apache-2.0
---
|
tqfang229/CKBP | ---
license: mit
---
|
rgsgs/asoul_carol | ---
license: other
language:
- zh
pretty_name: carol
---
# Voice Data
The data consists of most of A-SOUL member Carol (珈乐)'s stream recordings from June 2021 to May 2022.
About 5 hours in total, with no content annotations.
Loudness matching has been completed.
The data is in carol_fast_lzma2.zip.
The compression algorithm is Fast LZMA2; extraction tools that are too old may not support it.
**Audio files whose names start with the letter "s" are singing data; they are few in number and low in quality, so deleting them is recommended.**
*No authorization obtained; will be removed upon request.* |
roborovski/diffusiondb-seq2seq | ---
dataset_info:
features:
- name: subject
dtype: string
- name: descriptor
dtype: string
splits:
- name: train
num_bytes: 10079006
num_examples: 93834
download_size: 6236928
dataset_size: 10079006
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "diffusiondb-seq2seq"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Andaleciomusic/delecio | ---
license: openrail
---
|
allennghayoui/mistral-chat-code-assistant-without-query | ---
dataset_info:
features:
- name: prompt
dtype: string
splits:
- name: train
num_bytes: 173225.5
num_examples: 172
- name: test
num_bytes: 10071.25
num_examples: 10
- name: validation
num_bytes: 10071.25
num_examples: 10
download_size: 68218
dataset_size: 193368.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
- split: validation
path: data/validation-*
---
|
gsstein/75-percent-human-dataset-opt | ---
dataset_info:
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: summary
dtype: string
- name: text
dtype: string
- name: prompt
dtype: string
- name: generated
dtype: bool
- name: raw_summary
dtype: string
splits:
- name: train
num_bytes: 129036066
num_examples: 15326
- name: test
num_bytes: 4622318
num_examples: 576
- name: validation
num_bytes: 4905897
num_examples: 576
download_size: 84411845
dataset_size: 138564281
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
- split: validation
path: data/validation-*
---
|
rsouza17/vozreimp3 | ---
license: openrail
---
|
result-kand2-sdxl-wuerst-karlo/606de66e | ---
dataset_info:
features:
- name: result
dtype: string
- name: id
dtype: int64
splits:
- name: train
num_bytes: 186
num_examples: 10
download_size: 1369
dataset_size: 186
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "606de66e"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
neural-commons/tokenization-corpus | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 1087225427
num_examples: 62424
download_size: 697307266
dataset_size: 1087225427
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Solikkk/M4rkim | ---
license: apache-2.0
---
|
hotchpotch/JQaRA | ---
dataset_info:
features:
- name: id
dtype: string
- name: q_id
dtype: string
- name: passage_row_id
dtype: string
- name: label
dtype: int64
- name: text
dtype: string
- name: title
dtype: string
- name: question
dtype: string
- name: answers
sequence: string
splits:
- name: unused
num_bytes: 23595528
num_examples: 24900
- name: dev
num_bytes: 81708550
num_examples: 86850
- name: test
num_bytes: 140414874
num_examples: 166700
download_size: 113244850
dataset_size: 245718952
configs:
- config_name: default
data_files:
- split: unused
path: data/unused-*
- split: dev
path: data/dev-*
- split: test
path: data/test-*
license: cc
task_categories:
- question-answering
language:
- ja
---
# **JQaRA** : Japanese Question Answering with Retrieval Augmentation - a Japanese Q&A dataset for evaluating retrieval augmentation (RAG)
With the rise of high-performance LLMs, question-answering use cases built on LLMs are increasing. However, when an LLM lacks the knowledge needed to answer a question properly, it not only fails to answer but may also return an incorrect response. One way to address this is RAG (Retrieval-Augmented Generation), in which the LLM consults external knowledge when answering, and demand for it is growing.
We therefore built "**JQaRA** : Japanese Question Answering with Retrieval Augmentation", a Japanese Q&A dataset for evaluating whether retrieval can surface information that improves an LLM's answer accuracy when used for RAG. JQaRA is pronounced "jakura".
The dataset itself is published on Hugging Face, and example evaluation code is published on GitHub.
- 🤗 https://huggingface.co/datasets/hotchpotch/JQaRA
- The JQaRA dataset published on Hugging Face
- 🛠️ https://github.com/hotchpotch/JQaRA/
- On GitHub, 📈 [evaluation code](https://github.com/hotchpotch/JQaRA/tree/main/evaluator) is available.
## Features of JQaRA
A distinguishing feature of JQaRA is that positive labels are assigned to retrieved passages from which LLMs such as llama-7b derivatives and GPT-4 can answer the question (note: some labels were also assigned via human visual inspection). The dataset is thus built from the viewpoint of how well retrieval can surface passages that help an LLM raise its answer accuracy, i.e., how much retrieval is likely to contribute to RAG quality.
The source questions come from the [AI King official dataset (JAQKET)](https://sites.google.com/view/project-aio/dataset?authuser=0), and the retrieval corpus comes from Wikipedia.
### Evaluation metric
JQaRA is an information-retrieval task: each question comes with 100 candidate passages, at least one of which is a positive. The primary metric is therefore nDCG@10 (normalized Discounted Cumulative Gain) on the test split.
As an example, simple evaluation scripts are provided under [the GitHub evaluator directory](https://github.com/hotchpotch/JQaRA/tree/main/evaluator). They evaluate models that expose common interfaces such as SentenceTransformer and CrossEncoder, as well as models reported to be highly accurate.
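As a minimal sketch of the metric (not the official evaluator; binary gains are assumed, with 1 for a labeled-positive passage), nDCG@10 can be computed like this:

```python
import math

def dcg_at_k(relevances, k):
    # DCG with binary gains: sum of rel_i / log2(i + 2) over the top-k ranks
    return sum(rel / math.log2(i + 2) for i, rel in enumerate(relevances[:k]))

def ndcg_at_k(relevances, k):
    # normalize by the DCG of an ideally ordered ranking
    ideal = dcg_at_k(sorted(relevances, reverse=True), k)
    return dcg_at_k(relevances, k) / ideal if ideal > 0 else 0.0

# toy ranking: 1 = passage labeled as a positive, 0 = not
ranked_labels = [1, 0, 1, 0, 0, 1, 0, 0, 0, 0]
print(round(ndcg_at_k(ranked_labels, 10), 4))  # → 0.8711
```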
### Evaluation results
Below are results for a variety of models. In addition to nDCG@10, MRR@10 is listed for reference.
#### Sentence embedding models
| model_names | ndcg@10 | mrr@10 |
| :------------------------------------------------------------------------------ | ------: | -----: |
| [bge-m3+dense](https://huggingface.co/BAAI/bge-m3) | 0.539 | 0.7854 |
| [fio-base-japanese-v0.1](https://huggingface.co/bclavie/fio-base-japanese-v0.1) | 0.3718 | 0.6161 |
| [sup-simcse-ja-base](https://huggingface.co/cl-nagoya/sup-simcse-ja-base) | 0.3237 | 0.5407 |
| [sup-simcse-ja-large](https://huggingface.co/cl-nagoya/sup-simcse-ja-large) | 0.3571 | 0.575 |
| [unsup-simcse-ja-base](https://huggingface.co/cl-nagoya/unsup-simcse-ja-base) | 0.3121 | 0.5209 |
| [unsup-simcse-ja-large](https://huggingface.co/cl-nagoya/unsup-simcse-ja-large) | 0.3928 | 0.6257 |
| [multilingual-e5-large](https://huggingface.co/intfloat/multilingual-e5-large) | 0.554 | 0.7988 |
| [multilingual-e5-small](https://huggingface.co/intfloat/multilingual-e5-small) | 0.4917 | 0.7291 |
| [GLuCoSE-base-ja](https://huggingface.co/pkshatech/GLuCoSE-base-ja) | 0.3085 | 0.5179 |
| [text-embedding-3-small](https://platform.openai.com/docs/guides/embeddings) | 0.3881 | 0.6107 |
#### ColBERT models
| model_names | ndcg@10 | mrr@10 |
| :-------------------------------------------------------- | ------: | -----: |
| [bge-m3+colbert](https://huggingface.co/BAAI/bge-m3) | 0.5656 | 0.8095 |
| [JaColBERT](https://huggingface.co/bclavie/JaColBERT) | 0.5495 | 0.8108 |
| [JaColBERTv2](https://huggingface.co/bclavie/JaColBERTv2) | 0.5847 | 0.8356 |
#### CrossEncoder models
| model_names | ndcg@10 | mrr@10 |
| :---------------------------------------------------------------------------------------------------------------------- | ------: | -----: |
| [japanese-reranker-cross-encoder-xsmall-v1](https://huggingface.co/hotchpotch/japanese-reranker-cross-encoder-xsmall-v1) | 0.6136 | 0.8402 |
| [japanese-reranker-cross-encoder-small-v1](https://huggingface.co/hotchpotch/japanese-reranker-cross-encoder-small-v1) | 0.6247 | 0.8599 |
| [japanese-reranker-cross-encoder-base-v1](https://huggingface.co/hotchpotch/japanese-reranker-cross-encoder-base-v1) | 0.6711 | 0.8809 |
| [japanese-reranker-cross-encoder-large-v1](https://huggingface.co/hotchpotch/japanese-reranker-cross-encoder-large-v1) | 0.71 | 0.8983 |
| [japanese-bge-reranker-v2-m3-v1](https://huggingface.co/hotchpotch/japanese-bge-reranker-v2-m3-v1) | 0.6918 | 0.8996 |
| [bge-reranker-v2-m3](https://huggingface.co/BAAI/bge-reranker-v2-m3) | 0.673 | 0.8909 |
| [shioriha-large-reranker](https://huggingface.co/cl-nagoya/shioriha-large-reranker) | 0.5775 | 0.83 |
| [bge-reranker-base](https://huggingface.co/BAAI/bge-reranker-base) | 0.2445 | 0.4378 |
| [bge-reranker-large](https://huggingface.co/BAAI/bge-reranker-large) | 0.4718 | 0.7108 |
| [cross-encoder-mmarco-mMiniLMv2-L12-H384-v1](https://huggingface.co/corrius/cross-encoder-mmarco-mMiniLMv2-L12-H384-v1) | 0.5588 | 0.8107 |
#### Other models
| model_names | ndcg@10 | mrr@10 |
| :-------------------------------------------------- | ------: | -----: |
| [bge-m3+all](https://huggingface.co/BAAI/bge-m3) | 0.576 | 0.8178 |
| [bge-m3+sparse](https://huggingface.co/BAAI/bge-m3) | 0.5088 | 0.7596 |
| bm25 | 0.458 | 0.702 |
---
## How the JQaRA dataset was built
### Selecting the Q&A data
As the base Japanese Q&A data for JQaRA, we use the questions and answers of "[JAQKET: a Japanese QA dataset built from quiz questions](https://sites.google.com/view/project-aio/dataset?authuser=0)". JAQKET is a high-quality, diverse Japanese Q&A dataset whose answers are Wikipedia article titles, so most questions can be answered if the relevant Wikipedia passage can be found.
From JAQKET, JQaRA uses the dev (about 2,000 items), unused (about 600 items), and test (about 2,000 items) portions released under the CC-BY-SA 4.0 license. JAQKET's train portion (about 17,000 items) is licensed for academic use only and cannot be used for commercial training, so it is not included. The following describes how the JQaRA test split was built.
### Adding Wikipedia data
For each JAQKET question, we fetch Wikipedia passages that are likely to be relevant. Full Wikipedia articles are too long, so we use [singletongue/wikipedia-utils - passages-c400-jawiki-20230403](https://huggingface.co/datasets/singletongue/wikipedia-utils), in which articles are chunked into passages of at most 400 characters.
Relevant passages are retrieved by sentence-embedding similarity between the question and each passage. Because a single embedding model introduces bias, we use five embedding models for diversity: [intfloat/multilingual-e5-large](https://huggingface.co/intfloat/multilingual-e5-large), [BAAI/bge-m3](https://huggingface.co/BAAI/bge-m3), [cl-nagoya/sup-simcse-ja-base](https://huggingface.co/cl-nagoya/sup-simcse-ja-base), [pkshatech/GLuCoSE-base-ja](https://huggingface.co/pkshatech/GLuCoSE-base-ja), and [OpenAI/text-embedding-3-small](https://platform.openai.com/docs/guides/embeddings).
There are roughly 5.6 million of these ≤400-character Wikipedia passages. To make search practical, we embed them and index the vectors with an inverted file index plus product quantization (IVFPQ) for fast vector search. We use FAISS as the vector-search library, with nlist set to 2048 for the IVF and PQ set to the embedding dimension divided by 4 (e.g., e5-large has 1024 dimensions, so PQ = 1024/4 = 256).
Using these indices, for each question we take the top 500 most similar passages from each of the five embedding models, re-rank the five result lists with RRF (Reciprocal Rank Fusion), and extract the top 100 passages by fused score. These passages, together with the titles of the Wikipedia articles containing them, are attached to the question.
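The RRF step can be sketched in a few lines (an illustrative sketch using the commonly used k = 60 constant; the constant is our assumption, not necessarily the one used to build JQaRA):

```python
from collections import defaultdict

def reciprocal_rank_fusion(rankings, k=60):
    # each document scores the sum over lists of 1 / (k + rank); higher is better
    scores = defaultdict(float)
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] += 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

# toy ranked lists of passage ids from two embedding models
print(reciprocal_rank_fusion([["a", "b", "c"], ["a", "c", "d"]]))  # → ['a', 'c', 'b', 'd']
```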
### Assigning rule-based positive labels
Among the 100 passages attached to a question, any passage whose title or text contains the answer string as an exact substring is first given a positive label. Questions whose 100 passages contain no positive labels (39 questions), or too many, were excluded as hard to evaluate. For the "too many" cutoff we computed the standard deviation of the per-question positive-label counts and excluded the 281 questions with at least mean (16.54) + 1 standard deviation (15.21) = 31.66 positive labels.
This filtering reduced the test questions from roughly 2,000 to 1,680. Of the 1,680 questions × 100 passages = 168,000 passage entries, 16,726 carry a positive label. After filtering, each question has on average 9.98 positive labels among its 100 passages, with a standard deviation of 6.70.
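The mean + 1 standard deviation cutoff can be sketched as follows (the counts here are made up for illustration, and the choice of population vs. sample standard deviation is our assumption):

```python
import statistics

# hypothetical per-question counts of rule-based positive labels
label_counts = [0, 3, 8, 12, 20, 35, 60]

mean = statistics.mean(label_counts)
std = statistics.pstdev(label_counts)  # population standard deviation (assumption)
threshold = mean + std

# drop questions with no positives or with too many (at or above the threshold)
kept = [c for c in label_counts if 0 < c < threshold]
print(round(threshold, 2), kept)
```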
## Verifying the usefulness of the positive labels
Some of the rule-based positive labels are not actually useful hints for answering the question. We therefore first use real LLMs: given the question and a passage (Wikipedia title plus the up-to-400-character text), we check whether the model can produce the correct answer, and use that to judge whether each positive label is useful. Finally we also carry out a human review and remove labels on data that does not help answer accuracy.
### 1) Verification with local 7B and 13B LLMs
First, we created [youri-7b-stf-qa-context-jaqket-jsquad-gptq](https://huggingface.co/hotchpotch/youri-7b-stf-qa-context-jaqket-jsquad-gptq) and [Swallow-13b-stf-qa-context-jaqket-jsquad-gptq](https://huggingface.co/hotchpotch/Swallow-13b-stf-qa-context-jaqket-jsquad-gptq): SFT (supervised fine-tuning) versions of the 7B Japanese LLM [youri-7b-instruction](https://huggingface.co/rinna/youri-7b-instruction) and the 13B [Swallow-13b-instruct-hf](https://huggingface.co/tokyotech-llm/Swallow-13b-instruct-hf), trained to output only the answer. The test data used here was not used to train these models.
At this stage, of the 16,726 passages with rule-based positive labels, there were 742 for which at least one of the two models failed to produce an answer containing the correct string. We evaluate by partial rather than exact match because the models tend to produce slightly verbose output.
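The partial-match check can be sketched like this (the function name and example strings are ours):

```python
def contains_answer(model_output, answers):
    # partial match: correct if any gold answer string appears anywhere
    # in the model's (possibly verbose) output
    return any(ans in model_output for ans in answers)

print(contains_answer("答えは鹿肉です", ["シカ", "鹿"]))  # → True ("鹿" appears)
print(contains_answer("正解は斗です", ["升"]))  # → False
```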
### 2) Verification with ChatGPT 3.5 and GPT-4
In the second stage, the 742 passages that the 7B and 13B models got wrong were checked in the same way with ChatGPT 3.5 (gpt-3.5-turbo-0125) and GPT-4 (gpt-4-0125-preview). Both models answered incorrectly on 550 of them.
### 3) Human verification
Finally, a human checked the 550 passages the LLMs got wrong and decided whether each positive label should be removed; in the end 522 were marked for removal.
Among the data judged to keep its positive label, the most common case was surface-form variation. Because the LLM check used string matching, for example the question 'If "botan" means boar meat, what meat does "momiji" mean?' has the answer "シカ" (deer), but an LLM answering "鹿肉" (venison) was counted as wrong.
Similarly, for the question "What color-based term refers to household appliances essential to daily life, such as rice cookers, refrigerators, and washing machines?" the answer is "白物家電" (white goods), but the passage writes it as "ホワイトグッズ(白物家電)", and all the LLMs answered "ホワイトグッズ".
We also chose to keep passages that most people could answer from but the LLMs got wrong. For example, for the question "What unit of the shakkanho system, used for volumes such as sake, equals about 1.8 liters?", the given passage "斗 (to) is a unit of volume (capacity) in the shakkanho system. 10 sho make 1 to, and 10 to make 1 koku. In Japan, 1 sho was set at about 1.8039 liters in the Meiji era, so 1 to is about 18.039 liters." lets a human derive the correct answer "升" (sho), but the LLMs all incorrectly answered "斗"; such data was kept rather than removed.
## Building the test split
Finally, the positive labels of those 522 entries are removed. Only the labels are removed; the passage data itself remains. Since removing these labels leaves some questions with no positives among their 100 passages, those questions are excluded, giving a final test split of 1,667 questions. The test split is built from this data.
## Building the dev and unused splits
The dev and unused splits were built in much the same way, after removing passages that overlap with the test split's passage data. However, for these splits the label-usefulness check used only the youri-7b model: for each question, up to 5 passages from which it produced the correct answer were kept, while passages that had rule-based positive labels but that the LLM could not answer from were dropped entirely, rather than just having their labels removed. Also, dev and unused attach 50 passages per question instead of 100.
In the end, the dev split has 1,737 questions and the unused split has 498. Normally the train portion would be used for training, but because the original JAQKET train data cannot be used commercially, JQaRA intends the dev and unused splits to be used for training.
# Closing remarks
I decided to build the JQaRA dataset because, in practical work and experiments, it was difficult to evaluate how much RAG improves accuracy using existing datasets, so I set out to make one myself. It began as a hobby project, and I am glad it reached release without being abandoned along the way.
I have benefited enormously from the output of research, communities, and companies, especially in natural language processing. I hope this dataset can contribute, even a little, to those working on NLP and search technology.
# License
For the "question" and "answers" columns, JQaRA inherits the [CC-BY-SA-4.0](https://creativecommons.org/licenses/by-sa/4.0/deed.ja) license of the [AI King official dataset (JAQKET)](https://sites.google.com/view/project-aio/dataset?authuser=0).
The "title" and "text" columns follow [Wikipedia's license, CC BY-SA 4.0 or GFDL](https://ja.wikipedia.org/wiki/Wikipedia:%E8%91%97%E4%BD%9C%E6%A8%A9).
## Acknowledgements
This dataset uses questions and answers from the [AI King official dataset (JAQKET)](https://sites.google.com/view/project-aio/dataset?authuser=0). Thank you to everyone involved in running AI King and creating the quizzes for publishing such a useful dataset.
Thanks also to the universities, research institutions, and companies that publish the useful embedding models used to find related passages.
---
```
@misc{yuichi-tateno-2024-jqara,
url={https://huggingface.co/datasets/hotchpotch/JQaRA},
title={JQaRA: Japanese Question Answering with Retrieval Augmentation - 検索拡張(RAG)評価のための日本語Q&Aデータセット},
author={Yuichi Tateno}
}
```
|
dip67/guanaco-llama2-1k | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 1654448
num_examples: 1000
download_size: 966693
dataset_size: 1654448
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "guanaco-llama2-1k"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
open-llm-leaderboard/details_Weyaxi__test-help-steer-filtered-orig | ---
pretty_name: Evaluation run of Weyaxi/test-help-steer-filtered-orig
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [Weyaxi/test-help-steer-filtered-orig](https://huggingface.co/Weyaxi/test-help-steer-filtered-orig)\
\ on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 1 configuration, each one coresponding to one of the\
\ evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
\ timestamp of the run.The \"train\" split is always pointing to the latest results.\n\
\nAn additional configuration \"results\" store all the aggregated results of the\
\ run (and is used to compute and display the aggregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_Weyaxi__test-help-steer-filtered-orig\"\
,\n\t\"harness_gsm8k_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese\
\ are the [latest results from run 2023-12-02T13:28:23.925402](https://huggingface.co/datasets/open-llm-leaderboard/details_Weyaxi__test-help-steer-filtered-orig/blob/main/results_2023-12-02T13-28-23.925402.json)(note\
\ that their might be results for other tasks in the repos if successive evals didn't\
\ cover the same tasks. You find each in the results and the \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.09628506444275967,\n\
\ \"acc_stderr\": 0.008125264128215877\n },\n \"harness|gsm8k|5\":\
\ {\n \"acc\": 0.09628506444275967,\n \"acc_stderr\": 0.008125264128215877\n\
\ }\n}\n```"
repo_url: https://huggingface.co/Weyaxi/test-help-steer-filtered-orig
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_gsm8k_5
data_files:
- split: 2023_12_02T13_28_23.925402
path:
- '**/details_harness|gsm8k|5_2023-12-02T13-28-23.925402.parquet'
- split: latest
path:
- '**/details_harness|gsm8k|5_2023-12-02T13-28-23.925402.parquet'
- config_name: results
data_files:
- split: 2023_12_02T13_28_23.925402
path:
- results_2023-12-02T13-28-23.925402.parquet
- split: latest
path:
- results_2023-12-02T13-28-23.925402.parquet
---
# Dataset Card for Evaluation run of Weyaxi/test-help-steer-filtered-orig
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/Weyaxi/test-help-steer-filtered-orig
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** clementine@hf.co
### Dataset Summary
Dataset automatically created during the evaluation run of model [Weyaxi/test-help-steer-filtered-orig](https://huggingface.co/Weyaxi/test-help-steer-filtered-orig) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 1 configuration, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_Weyaxi__test-help-steer-filtered-orig",
"harness_gsm8k_5",
split="train")
```
## Latest results
These are the [latest results from run 2023-12-02T13:28:23.925402](https://huggingface.co/datasets/open-llm-leaderboard/details_Weyaxi__test-help-steer-filtered-orig/blob/main/results_2023-12-02T13-28-23.925402.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each one in the results and in the "latest" split for each eval):
```python
{
"all": {
"acc": 0.09628506444275967,
"acc_stderr": 0.008125264128215877
},
"harness|gsm8k|5": {
"acc": 0.09628506444275967,
"acc_stderr": 0.008125264128215877
}
}
```
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] |
SocialGrep/the-reddit-dataset-dataset | ---
annotations_creators:
- lexyr
language_creators:
- crowdsourced
language:
- en
license:
- cc-by-4.0
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- original
paperswithcode_id: null
---
# Dataset Card for the-reddit-dataset-dataset
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Licensing Information](#licensing-information)
## Dataset Description
- **Homepage:** [https://socialgrep.com/datasets](https://socialgrep.com/datasets/the-reddit-dataset-dataset?utm_source=huggingface&utm_medium=link&utm_campaign=theredditdatasetdataset)
- **Point of Contact:** [Website](https://socialgrep.com/contact?utm_source=huggingface&utm_medium=link&utm_campaign=theredditdatasetdataset)
### Dataset Summary
A meta dataset of Reddit's own /r/datasets community.
### Languages
Mainly English.
## Dataset Structure
### Data Instances
A data point is a post or a comment. Due to the separate nature of the two, those exist in two different files - even though many fields are shared.
### Data Fields
- 'type': the type of the data point. Can be 'post' or 'comment'.
- 'id': the base-36 Reddit ID of the data point. Unique when combined with type.
- 'subreddit.id': the base-36 Reddit ID of the data point's host subreddit. Unique.
- 'subreddit.name': the human-readable name of the data point's host subreddit.
- 'subreddit.nsfw': a boolean marking the data point's host subreddit as NSFW or not.
- 'created_utc': a UTC timestamp for the data point.
- 'permalink': a reference link to the data point on Reddit.
- 'score': score of the data point on Reddit.
- 'domain': (Post only) the domain of the data point's link.
- 'url': (Post only) the destination of the data point's link, if any.
- 'selftext': (Post only) the self-text of the data point, if any.
- 'title': (Post only) the title of the post data point.
- 'body': (Comment only) the body of the comment data point.
- 'sentiment': (Comment only) the result of an in-house sentiment analysis pipeline. Used for exploratory analysis.
## Additional Information
### Licensing Information
CC-BY v4.0
|
77xiaoyuanzi8/code_reviewer | ---
license: apache-2.0
---
|
HarishDemigod/OCRdataset | ---
license: unlicense
---
|
linhqyy/data_test_whisper_large_v2_peft | ---
dataset_info:
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: id
dtype: string
- name: pred_str
dtype: string
splits:
- name: train
num_bytes: 174279434.625
num_examples: 1299
download_size: 164189043
dataset_size: 174279434.625
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "data_test_whisper_large_v2_peft"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
annabely/ukiyoe_100_200_control_net | ---
dataset_info:
features:
- name: source
dtype: image
- name: target
dtype: image
- name: prompt
dtype: string
splits:
- name: train
num_bytes: 1693358001.33
num_examples: 4015
download_size: 1744645145
dataset_size: 1693358001.33
---
# MIT 6.8300/6.8301 Advances in Computer Vision Final Project
This is the dataset card for our final project on ControlNets.
The dataset is obtained from https://www.kaggle.com/datasets/kengoichiki/the-metropolitan-museum-of-art-ukiyoe-dataset
Here, we used BLIP to generate image captions (prompt) and OpenCV's Canny edge detection algorithm to generate conditioning images (target). |
jan-hq/multiturn_programming_binarized | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: messages
list:
- name: content
dtype: string
- name: role
dtype: string
splits:
- name: train
num_bytes: 267828406.47343305
num_examples: 100139
- name: test
num_bytes: 29759900.526566967
num_examples: 11127
download_size: 153603097
dataset_size: 297588307.0
---
# Dataset Card for "multiturn_programming_binarized"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
qxcv/tensor-trust | ---
task_categories:
- text-generation
size_categories:
- 100K<n<1M
---
# The Tensor Trust dataset (v1 benchmarks, v2 raw data dump) (mirror of [GitHub version](https://github.com/HumanCompatibleAI/tensor-trust-data))
**Other Tensor Trust links:** [[Game]](https://tensortrust.ai/) [[Code]](https://github.com/HumanCompatibleAI/tensor-trust) [[Paper]](https://arxiv.org/abs/2311.01011)
This HF dataset contains the raw data and derived benchmarks for the [Tensor Trust](https://tensortrust.ai/paper) project.
An interactive explanation of how to load and use the data (including the meaning of the columns) is in a Jupyter notebook in this directory.
You can [click here to run the notebook right now in Google Colab](https://colab.research.google.com/github/HumanCompatibleAI/tensor-trust-data/blob/main/Using%20the%20Tensor%20Trust%20dataset.ipynb).
|
Mediocreatmybest/ABC_Open_Archive | ---
license: cc0-1.0
language:
- en
---
# ABC Open Archives
Created from the ABC Open Archives.
Image files can either be downloaded with your own script using the url column, or read from the image data saved directly in the image column.
<https://www.flickr.com/people/abcarchives/>
The Parquet file was created with: <https://github.com/mediocreatmybest/gaslightingeveryone/blob/main/tools/images2parq.py>
Files can also be extracted with: <https://github.com/mediocreatmybest/gaslightingeveryone/blob/main/tools/parq2folder.py>
The available metadata in the text columns:
- TEXT: original text from the source archives
- alt_text_a: GIT/COCO generated captions
- alt_text_b: CLIP captions
- tags: tags from the source archives
|
sparanoid/images | ---
license: other
---
|
CyberHarem/flandre_scarlet_touhou | ---
license: mit
task_categories:
- text-to-image
tags:
- art
- not-for-all-audiences
size_categories:
- n<1K
---
# Dataset of flandre_scarlet/フランドール・スカーレット/플랑드르스칼렛 (Touhou)
This is the dataset of flandre_scarlet/フランドール・スカーレット/플랑드르스칼렛 (Touhou), containing 500 images and their tags.
The core tags of this character are `blonde_hair, wings, red_eyes, hat, mob_cap, bangs, one_side_up, bow, ribbon, white_headwear, hair_between_eyes, red_bow, red_ribbon, hat_ribbon, short_hair`, which are pruned in this dataset.
Images are crawled from many sites (e.g. Danbooru, Pixiv, Zerochan, ...); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:------------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 500 | 818.20 MiB | [Download](https://huggingface.co/datasets/CyberHarem/flandre_scarlet_touhou/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 500 | 442.86 MiB | [Download](https://huggingface.co/datasets/CyberHarem/flandre_scarlet_touhou/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 1283 | 977.33 MiB | [Download](https://huggingface.co/datasets/CyberHarem/flandre_scarlet_touhou/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 500 | 713.83 MiB | [Download](https://huggingface.co/datasets/CyberHarem/flandre_scarlet_touhou/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 1283 | 1.39 GiB | [Download](https://huggingface.co/datasets/CyberHarem/flandre_scarlet_touhou/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for loading with [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html). If you need it, run the following code:
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/flandre_scarlet_touhou',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
## List of Clusters
Results of the tag clustering; some outfits may be mined from these clusters.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 11 |  |  |  |  |  | 1girl, crystal, puffy_short_sleeves, red_skirt, red_vest, solo, looking_at_viewer, open_mouth, simple_background, white_shirt, yellow_ascot, blush, :d, fang, wrist_cuffs, cowboy_shot, side_ponytail, white_background, frilled_shirt_collar, hat_bow, skirt_set |
| 1 | 9 |  |  |  |  |  | 1girl, crystal, looking_at_viewer, open_mouth, puffy_short_sleeves, red_skirt, red_vest, solo, white_shirt, white_socks, yellow_ascot, :d, blush, full_body, red_footwear, frills, fang, simple_background, skirt_set, white_background, petticoat, wrist_cuffs, bobby_socks, mary_janes, white_bloomers |
| 2 | 5 |  |  |  |  |  | 1girl, blush, cowboy_shot, solo, standing, alternate_costume, bare_shoulders, closed_mouth, collarbone, crystal, looking_at_viewer, sleeveless_dress, smile, white_dress, bare_arms, no_headwear, outdoors, spaghetti_strap, sundress, day, depth_of_field, flower, frilled_dress, hair_ribbon, hat_bow, medium_hair, own_hands_together, skirt_hold, sky, small_breasts, white_background |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | crystal | puffy_short_sleeves | red_skirt | red_vest | solo | looking_at_viewer | open_mouth | simple_background | white_shirt | yellow_ascot | blush | :d | fang | wrist_cuffs | cowboy_shot | side_ponytail | white_background | frilled_shirt_collar | hat_bow | skirt_set | white_socks | full_body | red_footwear | frills | petticoat | bobby_socks | mary_janes | white_bloomers | standing | alternate_costume | bare_shoulders | closed_mouth | collarbone | sleeveless_dress | smile | white_dress | bare_arms | no_headwear | outdoors | spaghetti_strap | sundress | day | depth_of_field | flower | frilled_dress | hair_ribbon | medium_hair | own_hands_together | skirt_hold | sky | small_breasts |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:----------|:----------------------|:------------|:-----------|:-------|:--------------------|:-------------|:--------------------|:--------------|:---------------|:--------|:-----|:-------|:--------------|:--------------|:----------------|:-------------------|:-----------------------|:----------|:------------|:--------------|:------------|:---------------|:---------|:------------|:--------------|:-------------|:-----------------|:-----------|:--------------------|:-----------------|:---------------|:-------------|:-------------------|:--------|:--------------|:------------|:--------------|:-----------|:------------------|:-----------|:------|:-----------------|:---------|:----------------|:--------------|:--------------|:---------------------|:-------------|:------|:----------------|
| 0 | 11 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 1 | 9 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | X | | | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | |
| 2 | 5 |  |  |  |  |  | X | X | | | | X | X | | | | | X | | | | X | | X | | X | | | | | | | | | | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X |
|
nihiluis/argureviews | ---
dataset_info:
features:
- name: type
dtype: string
- name: text
dtype: string
- name: annotator
dtype: string
- name: component
dtype: string
- name: specificity
dtype: string
- name: sentiment
dtype: string
- name: aspect
dtype: string
- name: id
dtype: string
- name: sidx
dtype: float64
splits:
- name: train
num_bytes: 1328357
num_examples: 7266
download_size: 534609
dataset_size: 1328357
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "argureviews"
Dataset for basic argumentation in online reviews
The dataset annotates online review sentences for basic argumentative quality, sentiment and aspect of interest. It covers 1016 online reviews with 7286 sentences from the following domains: products from Amazon, local services, restaurants and hotels from Yelp, and brokerage apps from the Google Play Store.
The label set descriptions are as follows. The respective DeBERTa models are linked as well.
- [Argument component](/nihiluis/argureviews-component-deberta_v1): Distinguishes the argumentative component that is used. Can be one of: claim, premise, background.
- [Specificity](/nihiluis/argureviews-specificity-deberta_v1): Differentiates between generic statements and more thoughtful user statements. Can be one of: general, specific, experience.
- [Sentiment](/nihiluis/argureviews-sentiment-deberta_v1): A positive, balanced, negative or neutral argumentative statement about the reviewed item.
- [Aspect](/nihiluis/argureviews-aspect-deberta_v1): Provides more insight into what aspect of interest the argumentative statement covers. Can be one or more of: general sentiment, price, delivery, function and quality, fun and usage, style, installation, customer service and none. Only available for the Amazon review subset. |
AdapterOcean/python-code-instructions-18k-alpaca-standardized_cluster_7_alpaca | ---
dataset_info:
features:
- name: input
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 406429
num_examples: 688
download_size: 170546
dataset_size: 406429
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "python-code-instructions-18k-alpaca-standardized_cluster_7_alpaca"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
CyberHarem/hoto_mocha_istheorderarabbit | ---
license: mit
task_categories:
- text-to-image
tags:
- art
- not-for-all-audiences
size_categories:
- n<1K
---
# Dataset of Hoto Mocha
This is the dataset of Hoto Mocha, containing 68 images and their tags.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).
| Name | Images | Download | Description |
|:----------------|---------:|:----------------------------------------|:-----------------------------------------------------------------------------------------|
| raw | 68 | [Download](dataset-raw.zip) | Raw data with meta information. |
| raw-stage3 | 167 | [Download](dataset-raw-stage3.zip) | 3-stage cropped raw data with meta information. |
| raw-stage3-eyes | 187 | [Download](dataset-raw-stage3-eyes.zip) | 3-stage cropped (with eye-focus) raw data with meta information. |
| 384x512 | 68 | [Download](dataset-384x512.zip) | 384x512 aligned dataset. |
| 512x704 | 68 | [Download](dataset-512x704.zip) | 512x704 aligned dataset. |
| 640x880 | 68 | [Download](dataset-640x880.zip) | 640x880 aligned dataset. |
| stage3-640 | 167 | [Download](dataset-stage3-640.zip) | 3-stage cropped dataset with the shorter side not exceeding 640 pixels. |
| stage3-800 | 167 | [Download](dataset-stage3-800.zip) | 3-stage cropped dataset with the shorter side not exceeding 800 pixels. |
| stage3-p512-640 | 143 | [Download](dataset-stage3-p512-640.zip) | 3-stage cropped dataset with the area not less than 512x512 pixels. |
| stage3-eyes-640 | 187 | [Download](dataset-stage3-eyes-640.zip) | 3-stage cropped (with eye-focus) dataset with the shorter side not exceeding 640 pixels. |
| stage3-eyes-800 | 187 | [Download](dataset-stage3-eyes-800.zip) | 3-stage cropped (with eye-focus) dataset with the shorter side not exceeding 800 pixels. |
|
arieg/bw_spec_cls_80_21 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': '53576'
'1': '53578'
'2': '53591'
'3': '53592'
'4': '53675'
'5': '53723'
'6': '53724'
'7': '53725'
'8': '53726'
'9': '53727'
'10': '53728'
'11': '53729'
'12': '53807'
'13': '53862'
'14': '53863'
'15': '53937'
'16': '54019'
'17': '54031'
'18': '54032'
'19': '54033'
'20': '54034'
'21': '54037'
'22': '54039'
'23': '54061'
'24': '54062'
'25': '54063'
'26': '54064'
'27': '54149'
'28': '54150'
'29': '54151'
'30': '54152'
'31': '54153'
'32': '54154'
'33': '54155'
'34': '54156'
'35': '54158'
'36': '54159'
'37': '54160'
'38': '54163'
'39': '54234'
'40': '54235'
'41': '54236'
'42': '54237'
'43': '54297'
'44': '54335'
'45': '54365'
'46': '54376'
'47': '54433'
'48': '54436'
'49': '54437'
'50': '54438'
'51': '54442'
'52': '54443'
'53': '54475'
'54': '54476'
'55': '54479'
'56': '54480'
'57': '54481'
'58': '54482'
'59': '54496'
'60': '54568'
'61': '54570'
'62': '54576'
'63': '54578'
'64': '54580'
'65': '54621'
'66': '54623'
'67': '54624'
'68': '54625'
'69': '54626'
'70': '54662'
'71': '54664'
'72': '54665'
'73': '54666'
'74': '54667'
'75': '54719'
'76': '54735'
'77': '54753'
'78': '54874'
'79': '54942'
splits:
- name: train
num_bytes: 87811337.6
num_examples: 1600
download_size: 87587637
dataset_size: 87811337.6
---
# Dataset Card for "bw_spec_cls_80_21"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Joe02/Fuku_Naoto_refs | ---
license: other
---
|
SAGI-1/chat_data_3jan_2 | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 1816431
num_examples: 500
download_size: 750937
dataset_size: 1816431
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
PierLo/test01 | ---
dataset_info:
features:
- name: audio
dtype: audio
- name: text
dtype: string
splits:
- name: train
num_bytes: 11625274.0
num_examples: 1
download_size: 11612127
dataset_size: 11625274.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
clips/mqa | ---
annotations_creators:
- no-annotation
language_creators:
- other
language:
- ca
- en
- de
- es
- fr
- ru
- ja
- it
- zh
- pt
- nl
- tr
- pl
- vi
- ar
- id
- uk
- ro
- no
- th
- sv
- el
- fi
- he
- da
- cs
- ko
- fa
- hi
- hu
- sk
- lt
- et
- hr
- is
- lv
- ms
- bg
- sr
- ca
license:
- cc0-1.0
multilinguality:
- multilingual
pretty_name: MQA - a Multilingual FAQ and CQA Dataset
size_categories:
- unknown
source_datasets:
- original
task_categories:
- question-answering
task_ids:
- multiple-choice-qa
---
# MQA
MQA is a Multilingual corpus of Questions and Answers (MQA) parsed from the [Common Crawl](https://commoncrawl.org/). Questions are divided into two types: *Frequently Asked Questions (FAQ)* and *Community Question Answering (CQA)*.
```python
from datasets import load_dataset
all_data = load_dataset("clips/mqa", language="en")
# each example follows this structure:
{
"name": "the title of the question (if any)",
"text": "the body of the question (if any)",
"answers": [{
"text": "the text of the answer",
"is_accepted": "true|false"
}]
}
faq_data = load_dataset("clips/mqa", scope="faq", language="en")
cqa_data = load_dataset("clips/mqa", scope="cqa", language="en")
```
## Languages
We collected around **234M pairs** of questions and answers in **39 languages**. To download a language-specific subset you need to specify the language key as configuration. See below for an example.
```python
load_dataset("clips/mqa", language="en") # replace "en" by any language listed below
```
| Language | FAQ | CQA |
|:-----------|------------:|-----------:|
| en | 174,696,414 | 14,082,180 |
| de | 17,796,992 | 1,094,606 |
| es | 14,967,582 | 845,836 |
| fr | 13,096,727 | 1,299,359 |
| ru | 12,435,022 | 1,715,131 |
| it | 6,850,573 | 455,027 |
| ja | 6,369,706 | 2,089,952 |
| zh | 5,940,796 | 579,596 |
| pt | 5,851,286 | 373,982 |
| nl | 4,882,511 | 503,376 |
| tr | 3,893,964 | 370,975 |
| pl | 3,766,531 | 70,559 |
| vi | 2,795,227 | 96,528 |
| id | 2,253,070 | 200,441 |
| ar | 2,211,795 | 805,661 |
| uk | 2,090,611 | 27,260 |
| el | 1,758,618 | 17,167 |
| no | 1,752,820 | 11,786 |
| sv | 1,733,582 | 20,024 |
| fi | 1,717,221 | 41,371 |
| ro | 1,689,471 | 93,222 |
| th | 1,685,463 | 73,204 |
| da | 1,554,581 | 16,398 |
| he | 1,422,449 | 88,435 |
| ko | 1,361,901 | 49,061 |
| cs | 1,224,312 | 143,863 |
| hu | 878,385 | 27,639 |
| fa | 787,420 | 118,805 |
| sk | 785,101 | 4,615 |
| lt | 672,105 | 301 |
| et | 547,208 | 441 |
| hi | 516,342 | 205,645 |
| hr | 458,958 | 11,677 |
| is | 437,748 | 37 |
| lv | 428,002 | 88 |
| ms | 230,568 | 7,460 |
| bg | 198,671 | 5,320 |
| sr | 110,270 | 3,980 |
| ca | 100,201 | 1,914 |
## FAQ vs. CQA
You can download the *Frequently Asked Questions* (FAQ) or the *Community Question Answering* (CQA) part of the dataset.
```python
faq = load_dataset("clips/mqa", scope="faq")
cqa = load_dataset("clips/mqa", scope="cqa")
all = load_dataset("clips/mqa", scope="all")
```
Although FAQ and CQA questions share the same structure, CQA questions can have multiple answers for a given question, while FAQ questions have a single answer. FAQ questions typically only have a title (`name` key), while CQA questions have both a title and a body (`name` and `text`).
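Because a CQA question may carry several answers, each with an `is_accepted` flag, a common preprocessing step is to keep only the accepted answers and fall back to all answers when none is marked. A minimal sketch over the record structure shown above (assuming `is_accepted` is a boolean; the helper name is ours, not part of the dataset API):

```python
def accepted_answers(question):
    """Return the accepted answers of a question record,
    falling back to all answers when none is marked accepted."""
    answers = question.get("answers", [])
    accepted = [a for a in answers if a.get("is_accepted")]
    return accepted or answers

# Example record following the schema above
q = {
    "name": "How do I reset my password?",
    "text": "",
    "answers": [
        {"text": "Use the reset link on the login page.", "is_accepted": True},
        {"text": "Contact support.", "is_accepted": False},
    ],
}
print([a["text"] for a in accepted_answers(q)])
```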
## Nesting and Data Fields
You can specify three different nesting level: `question`, `page` and `domain`.
#### Question
```python
load_dataset("clips/mqa", level="question") # default
```
The default level is the question object:
- **name**: the title of the question (if any) in markdown format
- **text**: the body of the question (if any) in markdown format
- **answers**: a list of answers
  - **text**: the body of the answer in markdown format
  - **name**: the title of the answer (if any) in markdown format
  - **is_accepted**: true if the answer is selected.
#### Page
This level returns a list of questions present on the same page. This is mostly useful for FAQs since CQAs already have one question per page.
```python
load_dataset("clips/mqa", level="page")
```
#### Domain
This level returns a list of pages present on the web domain. This is a good way to cope with FAQ duplication by sampling one page per domain at each epoch.
```python
load_dataset("clips/mqa", level="domain")
```
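The per-epoch sampling described above (one page per domain to reduce FAQ duplication) can be sketched as follows. This assumes each domain record exposes its pages as a list under a `pages` key; the exact field name in the dataset may differ:

```python
import random

def sample_one_page_per_domain(domains, seed=None):
    """Pick one random page per domain record.

    Re-run with a different seed (or no seed) each epoch
    to vary which duplicate FAQ page is seen.
    """
    rng = random.Random(seed)
    return [rng.choice(d["pages"]) for d in domains if d["pages"]]

# Hypothetical domain-level records
domains = [
    {"domain": "example.com", "pages": ["page-a", "page-b"]},
    {"domain": "example.org", "pages": ["page-c"]},
]
print(sample_one_page_per_domain(domains, seed=0))
```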
## Source Data
This section was adapted from the source data description of [OSCAR](https://huggingface.co/datasets/oscar#source-data)
Common Crawl is a non-profit foundation which produces and maintains an open repository of web crawled data that is both accessible and analysable. Common Crawl's complete web archive consists of petabytes of data collected over 8 years of web crawling. The repository contains raw web page HTML data (WARC files), metadata extracts (WAT files) and plain text extracts (WET files). The organisation's crawlers have always respected nofollow and robots.txt policies.
To construct MQA, we used the WARC files of Common Crawl.
## People
This model was developed by [Maxime De Bruyn](https://maximedb.vercel.app), Ehsan Lotfi, Jeska Buhmann and Walter Daelemans.
## Licensing Information
```
These data are released under this licensing scheme.
We do not own any of the text from which these data has been extracted.
We license the actual packaging of these data under the Creative Commons CC0 license ("no rights reserved") http://creativecommons.org/publicdomain/zero/1.0/
Should you consider that our data contains material that is owned by you and should therefore not be reproduced here, please:
* Clearly identify yourself, with detailed contact data such as an address, telephone number or email address at which you can be contacted.
* Clearly identify the copyrighted work claimed to be infringed.
* Clearly identify the material that is claimed to be infringing and information reasonably sufficient to allow us to locate the material.
We will comply to legitimate requests by removing the affected sources from the next release of the corpus.
```
## Citation information
```
@inproceedings{de-bruyn-etal-2021-mfaq,
title = "{MFAQ}: a Multilingual {FAQ} Dataset",
author = "De Bruyn, Maxime and
Lotfi, Ehsan and
Buhmann, Jeska and
Daelemans, Walter",
booktitle = "Proceedings of the 3rd Workshop on Machine Reading for Question Answering",
month = nov,
year = "2021",
address = "Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.mrqa-1.1",
pages = "1--13",
}
``` |
Emanuel12a/japa | ---
license: openrail
---
|
goodfellowliu/Manga109 | ---
license: apache-2.0
---
|
p-conscious/pokemon-evolutions | ---
license: mit
---
|
open-llm-leaderboard/details_kenhktsui__nano-phi-115M-v0.1 | ---
pretty_name: Evaluation run of kenhktsui/nano-phi-115M-v0.1
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [kenhktsui/nano-phi-115M-v0.1](https://huggingface.co/kenhktsui/nano-phi-115M-v0.1)\
\ on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 63 configuration, each one coresponding to one of the\
\ evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
\ timestamp of the run.The \"train\" split is always pointing to the latest results.\n\
\nAn additional configuration \"results\" store all the aggregated results of the\
\ run (and is used to compute and display the aggregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_kenhktsui__nano-phi-115M-v0.1\"\
,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\
These are the [latest results from run 2024-02-09T13:21:05.777292](https://huggingface.co/datasets/open-llm-leaderboard/details_kenhktsui__nano-phi-115M-v0.1/blob/main/results_2024-02-09T13-21-05.777292.json)(note\
\ that their might be results for other tasks in the repos if successive evals didn't\
\ cover the same tasks. You find each in the results and the \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.2525471471328879,\n\
\ \"acc_stderr\": 0.030613616045154307,\n \"acc_norm\": 0.25320771353725297,\n\
\ \"acc_norm_stderr\": 0.031427270253272646,\n \"mc1\": 0.25091799265605874,\n\
\ \"mc1_stderr\": 0.015176985027707693,\n \"mc2\": 0.4600301977928377,\n\
\ \"mc2_stderr\": 0.015417429651937565\n },\n \"harness|arc:challenge|25\"\
: {\n \"acc\": 0.1825938566552901,\n \"acc_stderr\": 0.011289730684564993,\n\
\ \"acc_norm\": 0.21928327645051193,\n \"acc_norm_stderr\": 0.012091245787615735\n\
\ },\n \"harness|hellaswag|10\": {\n \"acc\": 0.2731527584146584,\n\
\ \"acc_stderr\": 0.004446680081493753,\n \"acc_norm\": 0.2786297550288787,\n\
\ \"acc_norm_stderr\": 0.0044740864899406865\n },\n \"harness|hendrycksTest-abstract_algebra|5\"\
: {\n \"acc\": 0.23,\n \"acc_stderr\": 0.04229525846816507,\n \
\ \"acc_norm\": 0.23,\n \"acc_norm_stderr\": 0.04229525846816507\n \
\ },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.3111111111111111,\n\
\ \"acc_stderr\": 0.039992628766177235,\n \"acc_norm\": 0.3111111111111111,\n\
\ \"acc_norm_stderr\": 0.039992628766177235\n },\n \"harness|hendrycksTest-astronomy|5\"\
: {\n \"acc\": 0.21710526315789475,\n \"acc_stderr\": 0.033550453048829205,\n\
\ \"acc_norm\": 0.21710526315789475,\n \"acc_norm_stderr\": 0.033550453048829205\n\
\ },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.25,\n\
\ \"acc_stderr\": 0.04351941398892446,\n \"acc_norm\": 0.25,\n \
\ \"acc_norm_stderr\": 0.04351941398892446\n },\n \"harness|hendrycksTest-clinical_knowledge|5\"\
: {\n \"acc\": 0.22264150943396227,\n \"acc_stderr\": 0.025604233470899098,\n\
\ \"acc_norm\": 0.22264150943396227,\n \"acc_norm_stderr\": 0.025604233470899098\n\
\ },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.22916666666666666,\n\
\ \"acc_stderr\": 0.03514697467862388,\n \"acc_norm\": 0.22916666666666666,\n\
\ \"acc_norm_stderr\": 0.03514697467862388\n },\n \"harness|hendrycksTest-college_chemistry|5\"\
: {\n \"acc\": 0.17,\n \"acc_stderr\": 0.03775251680686371,\n \
\ \"acc_norm\": 0.17,\n \"acc_norm_stderr\": 0.03775251680686371\n \
\ },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\"\
: 0.26,\n \"acc_stderr\": 0.04408440022768078,\n \"acc_norm\": 0.26,\n\
\ \"acc_norm_stderr\": 0.04408440022768078\n },\n \"harness|hendrycksTest-college_mathematics|5\"\
: {\n \"acc\": 0.25,\n \"acc_stderr\": 0.04351941398892446,\n \
\ \"acc_norm\": 0.25,\n \"acc_norm_stderr\": 0.04351941398892446\n \
\ },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.2023121387283237,\n\
\ \"acc_stderr\": 0.030631145539198816,\n \"acc_norm\": 0.2023121387283237,\n\
\ \"acc_norm_stderr\": 0.030631145539198816\n },\n \"harness|hendrycksTest-college_physics|5\"\
: {\n \"acc\": 0.3235294117647059,\n \"acc_stderr\": 0.046550104113196177,\n\
\ \"acc_norm\": 0.3235294117647059,\n \"acc_norm_stderr\": 0.046550104113196177\n\
\ },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\":\
\ 0.26,\n \"acc_stderr\": 0.044084400227680794,\n \"acc_norm\": 0.26,\n\
\ \"acc_norm_stderr\": 0.044084400227680794\n },\n \"harness|hendrycksTest-conceptual_physics|5\"\
: {\n \"acc\": 0.2425531914893617,\n \"acc_stderr\": 0.028020226271200217,\n\
\ \"acc_norm\": 0.2425531914893617,\n \"acc_norm_stderr\": 0.028020226271200217\n\
\ },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.22807017543859648,\n\
\ \"acc_stderr\": 0.03947152782669415,\n \"acc_norm\": 0.22807017543859648,\n\
\ \"acc_norm_stderr\": 0.03947152782669415\n },\n \"harness|hendrycksTest-electrical_engineering|5\"\
: {\n \"acc\": 0.22758620689655173,\n \"acc_stderr\": 0.03493950380131184,\n\
\ \"acc_norm\": 0.22758620689655173,\n \"acc_norm_stderr\": 0.03493950380131184\n\
\ },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\"\
: 0.24603174603174602,\n \"acc_stderr\": 0.022182037202948368,\n \"\
acc_norm\": 0.24603174603174602,\n \"acc_norm_stderr\": 0.022182037202948368\n\
\ },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.15079365079365079,\n\
\ \"acc_stderr\": 0.03200686497287392,\n \"acc_norm\": 0.15079365079365079,\n\
\ \"acc_norm_stderr\": 0.03200686497287392\n },\n \"harness|hendrycksTest-global_facts|5\"\
: {\n \"acc\": 0.3,\n \"acc_stderr\": 0.046056618647183814,\n \
\ \"acc_norm\": 0.3,\n \"acc_norm_stderr\": 0.046056618647183814\n \
\ },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.3387096774193548,\n\
\ \"acc_stderr\": 0.026923446059302844,\n \"acc_norm\": 0.3387096774193548,\n\
\ \"acc_norm_stderr\": 0.026923446059302844\n },\n \"harness|hendrycksTest-high_school_chemistry|5\"\
: {\n \"acc\": 0.29064039408866993,\n \"acc_stderr\": 0.0319474007226554,\n\
\ \"acc_norm\": 0.29064039408866993,\n \"acc_norm_stderr\": 0.0319474007226554\n\
\ },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \
\ \"acc\": 0.31,\n \"acc_stderr\": 0.04648231987117317,\n \"acc_norm\"\
: 0.31,\n \"acc_norm_stderr\": 0.04648231987117317\n },\n \"harness|hendrycksTest-high_school_european_history|5\"\
: {\n \"acc\": 0.21818181818181817,\n \"acc_stderr\": 0.03225078108306289,\n\
\ \"acc_norm\": 0.21818181818181817,\n \"acc_norm_stderr\": 0.03225078108306289\n\
\ },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\"\
: 0.3181818181818182,\n \"acc_stderr\": 0.0331847733384533,\n \"acc_norm\"\
: 0.3181818181818182,\n \"acc_norm_stderr\": 0.0331847733384533\n },\n\
\ \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n \
\ \"acc\": 0.20207253886010362,\n \"acc_stderr\": 0.02897908979429673,\n\
\ \"acc_norm\": 0.20207253886010362,\n \"acc_norm_stderr\": 0.02897908979429673\n\
\ },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \
\ \"acc\": 0.24871794871794872,\n \"acc_stderr\": 0.0219169577092138,\n \
\ \"acc_norm\": 0.24871794871794872,\n \"acc_norm_stderr\": 0.0219169577092138\n\
\ },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"\
acc\": 0.2777777777777778,\n \"acc_stderr\": 0.027309140588230186,\n \
\ \"acc_norm\": 0.2777777777777778,\n \"acc_norm_stderr\": 0.027309140588230186\n\
\ },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \
\ \"acc\": 0.3277310924369748,\n \"acc_stderr\": 0.03048991141767323,\n \
\ \"acc_norm\": 0.3277310924369748,\n \"acc_norm_stderr\": 0.03048991141767323\n\
\ },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\"\
: 0.32450331125827814,\n \"acc_stderr\": 0.03822746937658753,\n \"\
acc_norm\": 0.32450331125827814,\n \"acc_norm_stderr\": 0.03822746937658753\n\
\ },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\"\
: 0.24954128440366974,\n \"acc_stderr\": 0.018553897629501617,\n \"\
acc_norm\": 0.24954128440366974,\n \"acc_norm_stderr\": 0.018553897629501617\n\
\ },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\"\
: 0.4722222222222222,\n \"acc_stderr\": 0.0340470532865388,\n \"acc_norm\"\
: 0.4722222222222222,\n \"acc_norm_stderr\": 0.0340470532865388\n },\n\
\ \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\": 0.2696078431372549,\n\
\ \"acc_stderr\": 0.03114557065948678,\n \"acc_norm\": 0.2696078431372549,\n\
\ \"acc_norm_stderr\": 0.03114557065948678\n },\n \"harness|hendrycksTest-high_school_world_history|5\"\
: {\n \"acc\": 0.21518987341772153,\n \"acc_stderr\": 0.026750826994676152,\n\
\ \"acc_norm\": 0.21518987341772153,\n \"acc_norm_stderr\": 0.026750826994676152\n\
\ },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.19730941704035873,\n\
\ \"acc_stderr\": 0.02670985334496796,\n \"acc_norm\": 0.19730941704035873,\n\
\ \"acc_norm_stderr\": 0.02670985334496796\n },\n \"harness|hendrycksTest-human_sexuality|5\"\
: {\n \"acc\": 0.2824427480916031,\n \"acc_stderr\": 0.03948406125768361,\n\
\ \"acc_norm\": 0.2824427480916031,\n \"acc_norm_stderr\": 0.03948406125768361\n\
\ },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\":\
\ 0.23140495867768596,\n \"acc_stderr\": 0.03849856098794088,\n \"\
acc_norm\": 0.23140495867768596,\n \"acc_norm_stderr\": 0.03849856098794088\n\
\ },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.2222222222222222,\n\
\ \"acc_stderr\": 0.040191074725573483,\n \"acc_norm\": 0.2222222222222222,\n\
\ \"acc_norm_stderr\": 0.040191074725573483\n },\n \"harness|hendrycksTest-logical_fallacies|5\"\
: {\n \"acc\": 0.294478527607362,\n \"acc_stderr\": 0.03581165790474082,\n\
\ \"acc_norm\": 0.294478527607362,\n \"acc_norm_stderr\": 0.03581165790474082\n\
\ },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.2767857142857143,\n\
\ \"acc_stderr\": 0.04246624336697624,\n \"acc_norm\": 0.2767857142857143,\n\
\ \"acc_norm_stderr\": 0.04246624336697624\n },\n \"harness|hendrycksTest-management|5\"\
: {\n \"acc\": 0.20388349514563106,\n \"acc_stderr\": 0.03989139859531772,\n\
\ \"acc_norm\": 0.20388349514563106,\n \"acc_norm_stderr\": 0.03989139859531772\n\
\ },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.19658119658119658,\n\
\ \"acc_stderr\": 0.02603538609895129,\n \"acc_norm\": 0.19658119658119658,\n\
\ \"acc_norm_stderr\": 0.02603538609895129\n },\n \"harness|hendrycksTest-medical_genetics|5\"\
: {\n \"acc\": 0.28,\n \"acc_stderr\": 0.04512608598542127,\n \
\ \"acc_norm\": 0.28,\n \"acc_norm_stderr\": 0.04512608598542127\n \
\ },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.2720306513409962,\n\
\ \"acc_stderr\": 0.015913367447500524,\n \"acc_norm\": 0.2720306513409962,\n\
\ \"acc_norm_stderr\": 0.015913367447500524\n },\n \"harness|hendrycksTest-moral_disputes|5\"\
: {\n \"acc\": 0.20520231213872833,\n \"acc_stderr\": 0.021742519835276287,\n\
\ \"acc_norm\": 0.20520231213872833,\n \"acc_norm_stderr\": 0.021742519835276287\n\
\ },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.24692737430167597,\n\
\ \"acc_stderr\": 0.014422292204808835,\n \"acc_norm\": 0.24692737430167597,\n\
\ \"acc_norm_stderr\": 0.014422292204808835\n },\n \"harness|hendrycksTest-nutrition|5\"\
: {\n \"acc\": 0.2647058823529412,\n \"acc_stderr\": 0.02526169121972948,\n\
\ \"acc_norm\": 0.2647058823529412,\n \"acc_norm_stderr\": 0.02526169121972948\n\
\ },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.19292604501607716,\n\
\ \"acc_stderr\": 0.022411516780911363,\n \"acc_norm\": 0.19292604501607716,\n\
\ \"acc_norm_stderr\": 0.022411516780911363\n },\n \"harness|hendrycksTest-prehistory|5\"\
: {\n \"acc\": 0.23765432098765432,\n \"acc_stderr\": 0.023683591837008557,\n\
\ \"acc_norm\": 0.23765432098765432,\n \"acc_norm_stderr\": 0.023683591837008557\n\
\ },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"\
acc\": 0.24822695035460993,\n \"acc_stderr\": 0.025770015644290382,\n \
\ \"acc_norm\": 0.24822695035460993,\n \"acc_norm_stderr\": 0.025770015644290382\n\
\ },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.24641460234680573,\n\
\ \"acc_stderr\": 0.011005971399927235,\n \"acc_norm\": 0.24641460234680573,\n\
\ \"acc_norm_stderr\": 0.011005971399927235\n },\n \"harness|hendrycksTest-professional_medicine|5\"\
: {\n \"acc\": 0.4264705882352941,\n \"acc_stderr\": 0.03004261583271486,\n\
\ \"acc_norm\": 0.4264705882352941,\n \"acc_norm_stderr\": 0.03004261583271486\n\
\ },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"\
acc\": 0.2630718954248366,\n \"acc_stderr\": 0.01781267654232065,\n \
\ \"acc_norm\": 0.2630718954248366,\n \"acc_norm_stderr\": 0.01781267654232065\n\
\ },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.17272727272727273,\n\
\ \"acc_stderr\": 0.03620691833929218,\n \"acc_norm\": 0.17272727272727273,\n\
\ \"acc_norm_stderr\": 0.03620691833929218\n },\n \"harness|hendrycksTest-security_studies|5\"\
: {\n \"acc\": 0.2,\n \"acc_stderr\": 0.025607375986579153,\n \
\ \"acc_norm\": 0.2,\n \"acc_norm_stderr\": 0.025607375986579153\n \
\ },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.22885572139303484,\n\
\ \"acc_stderr\": 0.029705284056772436,\n \"acc_norm\": 0.22885572139303484,\n\
\ \"acc_norm_stderr\": 0.029705284056772436\n },\n \"harness|hendrycksTest-us_foreign_policy|5\"\
: {\n \"acc\": 0.24,\n \"acc_stderr\": 0.04292346959909283,\n \
\ \"acc_norm\": 0.24,\n \"acc_norm_stderr\": 0.04292346959909283\n \
\ },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.20481927710843373,\n\
\ \"acc_stderr\": 0.03141784291663925,\n \"acc_norm\": 0.20481927710843373,\n\
\ \"acc_norm_stderr\": 0.03141784291663925\n },\n \"harness|hendrycksTest-world_religions|5\"\
: {\n \"acc\": 0.2222222222222222,\n \"acc_stderr\": 0.03188578017686398,\n\
\ \"acc_norm\": 0.2222222222222222,\n \"acc_norm_stderr\": 0.03188578017686398\n\
\ },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.25091799265605874,\n\
\ \"mc1_stderr\": 0.015176985027707693,\n \"mc2\": 0.4600301977928377,\n\
\ \"mc2_stderr\": 0.015417429651937565\n },\n \"harness|winogrande|5\"\
: {\n \"acc\": 0.5082872928176796,\n \"acc_stderr\": 0.014050555322824194\n\
\ },\n \"harness|gsm8k|5\": {\n \"acc\": 0.0,\n \"acc_stderr\"\
: 0.0\n }\n}\n```"
repo_url: https://huggingface.co/kenhktsui/nano-phi-115M-v0.1
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_arc_challenge_25
data_files:
- split: 2024_02_09T13_21_05.777292
path:
- '**/details_harness|arc:challenge|25_2024-02-09T13-21-05.777292.parquet'
- split: latest
path:
- '**/details_harness|arc:challenge|25_2024-02-09T13-21-05.777292.parquet'
- config_name: harness_gsm8k_5
data_files:
- split: 2024_02_09T13_21_05.777292
path:
- '**/details_harness|gsm8k|5_2024-02-09T13-21-05.777292.parquet'
- split: latest
path:
- '**/details_harness|gsm8k|5_2024-02-09T13-21-05.777292.parquet'
- config_name: harness_hellaswag_10
data_files:
- split: 2024_02_09T13_21_05.777292
path:
- '**/details_harness|hellaswag|10_2024-02-09T13-21-05.777292.parquet'
- split: latest
path:
- '**/details_harness|hellaswag|10_2024-02-09T13-21-05.777292.parquet'
- config_name: harness_hendrycksTest_5
data_files:
- split: 2024_02_09T13_21_05.777292
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-02-09T13-21-05.777292.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2024-02-09T13-21-05.777292.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2024-02-09T13-21-05.777292.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2024-02-09T13-21-05.777292.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-02-09T13-21-05.777292.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2024-02-09T13-21-05.777292.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-02-09T13-21-05.777292.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-02-09T13-21-05.777292.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-02-09T13-21-05.777292.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2024-02-09T13-21-05.777292.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2024-02-09T13-21-05.777292.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2024-02-09T13-21-05.777292.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-02-09T13-21-05.777292.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2024-02-09T13-21-05.777292.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-02-09T13-21-05.777292.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-02-09T13-21-05.777292.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2024-02-09T13-21-05.777292.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2024-02-09T13-21-05.777292.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-02-09T13-21-05.777292.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-02-09T13-21-05.777292.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-02-09T13-21-05.777292.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-02-09T13-21-05.777292.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-02-09T13-21-05.777292.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-02-09T13-21-05.777292.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-02-09T13-21-05.777292.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-02-09T13-21-05.777292.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-02-09T13-21-05.777292.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-02-09T13-21-05.777292.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-02-09T13-21-05.777292.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-02-09T13-21-05.777292.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-02-09T13-21-05.777292.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-02-09T13-21-05.777292.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2024-02-09T13-21-05.777292.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-02-09T13-21-05.777292.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2024-02-09T13-21-05.777292.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-02-09T13-21-05.777292.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-02-09T13-21-05.777292.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2024-02-09T13-21-05.777292.parquet'
- '**/details_harness|hendrycksTest-management|5_2024-02-09T13-21-05.777292.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2024-02-09T13-21-05.777292.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-02-09T13-21-05.777292.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-02-09T13-21-05.777292.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-02-09T13-21-05.777292.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-02-09T13-21-05.777292.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2024-02-09T13-21-05.777292.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2024-02-09T13-21-05.777292.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2024-02-09T13-21-05.777292.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-02-09T13-21-05.777292.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2024-02-09T13-21-05.777292.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-02-09T13-21-05.777292.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-02-09T13-21-05.777292.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2024-02-09T13-21-05.777292.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2024-02-09T13-21-05.777292.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2024-02-09T13-21-05.777292.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-02-09T13-21-05.777292.parquet'
- '**/details_harness|hendrycksTest-virology|5_2024-02-09T13-21-05.777292.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2024-02-09T13-21-05.777292.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-02-09T13-21-05.777292.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2024-02-09T13-21-05.777292.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2024-02-09T13-21-05.777292.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2024-02-09T13-21-05.777292.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-02-09T13-21-05.777292.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2024-02-09T13-21-05.777292.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-02-09T13-21-05.777292.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-02-09T13-21-05.777292.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-02-09T13-21-05.777292.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2024-02-09T13-21-05.777292.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2024-02-09T13-21-05.777292.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2024-02-09T13-21-05.777292.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-02-09T13-21-05.777292.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2024-02-09T13-21-05.777292.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-02-09T13-21-05.777292.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-02-09T13-21-05.777292.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2024-02-09T13-21-05.777292.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2024-02-09T13-21-05.777292.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-02-09T13-21-05.777292.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-02-09T13-21-05.777292.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-02-09T13-21-05.777292.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-02-09T13-21-05.777292.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-02-09T13-21-05.777292.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-02-09T13-21-05.777292.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-02-09T13-21-05.777292.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-02-09T13-21-05.777292.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-02-09T13-21-05.777292.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-02-09T13-21-05.777292.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-02-09T13-21-05.777292.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-02-09T13-21-05.777292.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-02-09T13-21-05.777292.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-02-09T13-21-05.777292.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2024-02-09T13-21-05.777292.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-02-09T13-21-05.777292.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2024-02-09T13-21-05.777292.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-02-09T13-21-05.777292.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-02-09T13-21-05.777292.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2024-02-09T13-21-05.777292.parquet'
- '**/details_harness|hendrycksTest-management|5_2024-02-09T13-21-05.777292.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2024-02-09T13-21-05.777292.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-02-09T13-21-05.777292.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-02-09T13-21-05.777292.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-02-09T13-21-05.777292.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-02-09T13-21-05.777292.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2024-02-09T13-21-05.777292.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2024-02-09T13-21-05.777292.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2024-02-09T13-21-05.777292.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-02-09T13-21-05.777292.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2024-02-09T13-21-05.777292.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-02-09T13-21-05.777292.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-02-09T13-21-05.777292.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2024-02-09T13-21-05.777292.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2024-02-09T13-21-05.777292.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2024-02-09T13-21-05.777292.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-02-09T13-21-05.777292.parquet'
- '**/details_harness|hendrycksTest-virology|5_2024-02-09T13-21-05.777292.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2024-02-09T13-21-05.777292.parquet'
- config_name: harness_hendrycksTest_abstract_algebra_5
data_files:
- split: 2024_02_09T13_21_05.777292
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-02-09T13-21-05.777292.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-02-09T13-21-05.777292.parquet'
- config_name: harness_hendrycksTest_anatomy_5
data_files:
- split: 2024_02_09T13_21_05.777292
path:
- '**/details_harness|hendrycksTest-anatomy|5_2024-02-09T13-21-05.777292.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-anatomy|5_2024-02-09T13-21-05.777292.parquet'
- config_name: harness_hendrycksTest_astronomy_5
data_files:
- split: 2024_02_09T13_21_05.777292
path:
- '**/details_harness|hendrycksTest-astronomy|5_2024-02-09T13-21-05.777292.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-astronomy|5_2024-02-09T13-21-05.777292.parquet'
- config_name: harness_hendrycksTest_business_ethics_5
data_files:
- split: 2024_02_09T13_21_05.777292
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2024-02-09T13-21-05.777292.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2024-02-09T13-21-05.777292.parquet'
- config_name: harness_hendrycksTest_clinical_knowledge_5
data_files:
- split: 2024_02_09T13_21_05.777292
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-02-09T13-21-05.777292.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-02-09T13-21-05.777292.parquet'
- config_name: harness_hendrycksTest_college_biology_5
data_files:
- split: 2024_02_09T13_21_05.777292
path:
- '**/details_harness|hendrycksTest-college_biology|5_2024-02-09T13-21-05.777292.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_biology|5_2024-02-09T13-21-05.777292.parquet'
- config_name: harness_hendrycksTest_college_chemistry_5
data_files:
- split: 2024_02_09T13_21_05.777292
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-02-09T13-21-05.777292.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-02-09T13-21-05.777292.parquet'
- config_name: harness_hendrycksTest_college_computer_science_5
data_files:
- split: 2024_02_09T13_21_05.777292
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-02-09T13-21-05.777292.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-02-09T13-21-05.777292.parquet'
- config_name: harness_hendrycksTest_college_mathematics_5
data_files:
- split: 2024_02_09T13_21_05.777292
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-02-09T13-21-05.777292.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-02-09T13-21-05.777292.parquet'
- config_name: harness_hendrycksTest_college_medicine_5
data_files:
- split: 2024_02_09T13_21_05.777292
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2024-02-09T13-21-05.777292.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2024-02-09T13-21-05.777292.parquet'
- config_name: harness_hendrycksTest_college_physics_5
data_files:
- split: 2024_02_09T13_21_05.777292
path:
- '**/details_harness|hendrycksTest-college_physics|5_2024-02-09T13-21-05.777292.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_physics|5_2024-02-09T13-21-05.777292.parquet'
- config_name: harness_hendrycksTest_computer_security_5
data_files:
- split: 2024_02_09T13_21_05.777292
path:
- '**/details_harness|hendrycksTest-computer_security|5_2024-02-09T13-21-05.777292.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-computer_security|5_2024-02-09T13-21-05.777292.parquet'
- config_name: harness_hendrycksTest_conceptual_physics_5
data_files:
- split: 2024_02_09T13_21_05.777292
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-02-09T13-21-05.777292.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-02-09T13-21-05.777292.parquet'
- config_name: harness_hendrycksTest_econometrics_5
data_files:
- split: 2024_02_09T13_21_05.777292
path:
- '**/details_harness|hendrycksTest-econometrics|5_2024-02-09T13-21-05.777292.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-econometrics|5_2024-02-09T13-21-05.777292.parquet'
- config_name: harness_hendrycksTest_electrical_engineering_5
data_files:
- split: 2024_02_09T13_21_05.777292
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-02-09T13-21-05.777292.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-02-09T13-21-05.777292.parquet'
- config_name: harness_hendrycksTest_elementary_mathematics_5
data_files:
- split: 2024_02_09T13_21_05.777292
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-02-09T13-21-05.777292.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-02-09T13-21-05.777292.parquet'
- config_name: harness_hendrycksTest_formal_logic_5
data_files:
- split: 2024_02_09T13_21_05.777292
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2024-02-09T13-21-05.777292.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2024-02-09T13-21-05.777292.parquet'
- config_name: harness_hendrycksTest_global_facts_5
data_files:
- split: 2024_02_09T13_21_05.777292
path:
- '**/details_harness|hendrycksTest-global_facts|5_2024-02-09T13-21-05.777292.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-global_facts|5_2024-02-09T13-21-05.777292.parquet'
- config_name: harness_hendrycksTest_high_school_biology_5
data_files:
- split: 2024_02_09T13_21_05.777292
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-02-09T13-21-05.777292.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-02-09T13-21-05.777292.parquet'
- config_name: harness_hendrycksTest_high_school_chemistry_5
data_files:
- split: 2024_02_09T13_21_05.777292
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-02-09T13-21-05.777292.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-02-09T13-21-05.777292.parquet'
- config_name: harness_hendrycksTest_high_school_computer_science_5
data_files:
- split: 2024_02_09T13_21_05.777292
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-02-09T13-21-05.777292.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-02-09T13-21-05.777292.parquet'
- config_name: harness_hendrycksTest_high_school_european_history_5
data_files:
- split: 2024_02_09T13_21_05.777292
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-02-09T13-21-05.777292.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-02-09T13-21-05.777292.parquet'
- config_name: harness_hendrycksTest_high_school_geography_5
data_files:
- split: 2024_02_09T13_21_05.777292
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-02-09T13-21-05.777292.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-02-09T13-21-05.777292.parquet'
- config_name: harness_hendrycksTest_high_school_government_and_politics_5
data_files:
- split: 2024_02_09T13_21_05.777292
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-02-09T13-21-05.777292.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-02-09T13-21-05.777292.parquet'
- config_name: harness_hendrycksTest_high_school_macroeconomics_5
data_files:
- split: 2024_02_09T13_21_05.777292
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-02-09T13-21-05.777292.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-02-09T13-21-05.777292.parquet'
- config_name: harness_hendrycksTest_high_school_mathematics_5
data_files:
- split: 2024_02_09T13_21_05.777292
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-02-09T13-21-05.777292.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-02-09T13-21-05.777292.parquet'
- config_name: harness_hendrycksTest_high_school_microeconomics_5
data_files:
- split: 2024_02_09T13_21_05.777292
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-02-09T13-21-05.777292.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-02-09T13-21-05.777292.parquet'
- config_name: harness_hendrycksTest_high_school_physics_5
data_files:
- split: 2024_02_09T13_21_05.777292
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-02-09T13-21-05.777292.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-02-09T13-21-05.777292.parquet'
- config_name: harness_hendrycksTest_high_school_psychology_5
data_files:
- split: 2024_02_09T13_21_05.777292
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-02-09T13-21-05.777292.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-02-09T13-21-05.777292.parquet'
- config_name: harness_hendrycksTest_high_school_statistics_5
data_files:
- split: 2024_02_09T13_21_05.777292
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-02-09T13-21-05.777292.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-02-09T13-21-05.777292.parquet'
- config_name: harness_hendrycksTest_high_school_us_history_5
data_files:
- split: 2024_02_09T13_21_05.777292
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-02-09T13-21-05.777292.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-02-09T13-21-05.777292.parquet'
- config_name: harness_hendrycksTest_high_school_world_history_5
data_files:
- split: 2024_02_09T13_21_05.777292
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-02-09T13-21-05.777292.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-02-09T13-21-05.777292.parquet'
- config_name: harness_hendrycksTest_human_aging_5
data_files:
- split: 2024_02_09T13_21_05.777292
path:
- '**/details_harness|hendrycksTest-human_aging|5_2024-02-09T13-21-05.777292.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_aging|5_2024-02-09T13-21-05.777292.parquet'
- config_name: harness_hendrycksTest_human_sexuality_5
data_files:
- split: 2024_02_09T13_21_05.777292
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-02-09T13-21-05.777292.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-02-09T13-21-05.777292.parquet'
- config_name: harness_hendrycksTest_international_law_5
data_files:
- split: 2024_02_09T13_21_05.777292
path:
- '**/details_harness|hendrycksTest-international_law|5_2024-02-09T13-21-05.777292.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-international_law|5_2024-02-09T13-21-05.777292.parquet'
- config_name: harness_hendrycksTest_jurisprudence_5
data_files:
- split: 2024_02_09T13_21_05.777292
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-02-09T13-21-05.777292.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-02-09T13-21-05.777292.parquet'
- config_name: harness_hendrycksTest_logical_fallacies_5
data_files:
- split: 2024_02_09T13_21_05.777292
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-02-09T13-21-05.777292.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-02-09T13-21-05.777292.parquet'
- config_name: harness_hendrycksTest_machine_learning_5
data_files:
- split: 2024_02_09T13_21_05.777292
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2024-02-09T13-21-05.777292.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2024-02-09T13-21-05.777292.parquet'
- config_name: harness_hendrycksTest_management_5
data_files:
- split: 2024_02_09T13_21_05.777292
path:
- '**/details_harness|hendrycksTest-management|5_2024-02-09T13-21-05.777292.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-management|5_2024-02-09T13-21-05.777292.parquet'
- config_name: harness_hendrycksTest_marketing_5
data_files:
- split: 2024_02_09T13_21_05.777292
path:
- '**/details_harness|hendrycksTest-marketing|5_2024-02-09T13-21-05.777292.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-marketing|5_2024-02-09T13-21-05.777292.parquet'
- config_name: harness_hendrycksTest_medical_genetics_5
data_files:
- split: 2024_02_09T13_21_05.777292
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-02-09T13-21-05.777292.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-02-09T13-21-05.777292.parquet'
- config_name: harness_hendrycksTest_miscellaneous_5
data_files:
- split: 2024_02_09T13_21_05.777292
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-02-09T13-21-05.777292.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-02-09T13-21-05.777292.parquet'
- config_name: harness_hendrycksTest_moral_disputes_5
data_files:
- split: 2024_02_09T13_21_05.777292
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-02-09T13-21-05.777292.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-02-09T13-21-05.777292.parquet'
- config_name: harness_hendrycksTest_moral_scenarios_5
data_files:
- split: 2024_02_09T13_21_05.777292
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-02-09T13-21-05.777292.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-02-09T13-21-05.777292.parquet'
- config_name: harness_hendrycksTest_nutrition_5
data_files:
- split: 2024_02_09T13_21_05.777292
path:
- '**/details_harness|hendrycksTest-nutrition|5_2024-02-09T13-21-05.777292.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-nutrition|5_2024-02-09T13-21-05.777292.parquet'
- config_name: harness_hendrycksTest_philosophy_5
data_files:
- split: 2024_02_09T13_21_05.777292
path:
- '**/details_harness|hendrycksTest-philosophy|5_2024-02-09T13-21-05.777292.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-philosophy|5_2024-02-09T13-21-05.777292.parquet'
- config_name: harness_hendrycksTest_prehistory_5
data_files:
- split: 2024_02_09T13_21_05.777292
path:
- '**/details_harness|hendrycksTest-prehistory|5_2024-02-09T13-21-05.777292.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-prehistory|5_2024-02-09T13-21-05.777292.parquet'
- config_name: harness_hendrycksTest_professional_accounting_5
data_files:
- split: 2024_02_09T13_21_05.777292
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-02-09T13-21-05.777292.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-02-09T13-21-05.777292.parquet'
- config_name: harness_hendrycksTest_professional_law_5
data_files:
- split: 2024_02_09T13_21_05.777292
path:
- '**/details_harness|hendrycksTest-professional_law|5_2024-02-09T13-21-05.777292.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_law|5_2024-02-09T13-21-05.777292.parquet'
- config_name: harness_hendrycksTest_professional_medicine_5
data_files:
- split: 2024_02_09T13_21_05.777292
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-02-09T13-21-05.777292.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-02-09T13-21-05.777292.parquet'
- config_name: harness_hendrycksTest_professional_psychology_5
data_files:
- split: 2024_02_09T13_21_05.777292
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-02-09T13-21-05.777292.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-02-09T13-21-05.777292.parquet'
- config_name: harness_hendrycksTest_public_relations_5
data_files:
- split: 2024_02_09T13_21_05.777292
path:
- '**/details_harness|hendrycksTest-public_relations|5_2024-02-09T13-21-05.777292.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-public_relations|5_2024-02-09T13-21-05.777292.parquet'
- config_name: harness_hendrycksTest_security_studies_5
data_files:
- split: 2024_02_09T13_21_05.777292
path:
- '**/details_harness|hendrycksTest-security_studies|5_2024-02-09T13-21-05.777292.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-security_studies|5_2024-02-09T13-21-05.777292.parquet'
- config_name: harness_hendrycksTest_sociology_5
data_files:
- split: 2024_02_09T13_21_05.777292
path:
- '**/details_harness|hendrycksTest-sociology|5_2024-02-09T13-21-05.777292.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-sociology|5_2024-02-09T13-21-05.777292.parquet'
- config_name: harness_hendrycksTest_us_foreign_policy_5
data_files:
- split: 2024_02_09T13_21_05.777292
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-02-09T13-21-05.777292.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-02-09T13-21-05.777292.parquet'
- config_name: harness_hendrycksTest_virology_5
data_files:
- split: 2024_02_09T13_21_05.777292
path:
- '**/details_harness|hendrycksTest-virology|5_2024-02-09T13-21-05.777292.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-virology|5_2024-02-09T13-21-05.777292.parquet'
- config_name: harness_hendrycksTest_world_religions_5
data_files:
- split: 2024_02_09T13_21_05.777292
path:
- '**/details_harness|hendrycksTest-world_religions|5_2024-02-09T13-21-05.777292.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-world_religions|5_2024-02-09T13-21-05.777292.parquet'
- config_name: harness_truthfulqa_mc_0
data_files:
- split: 2024_02_09T13_21_05.777292
path:
- '**/details_harness|truthfulqa:mc|0_2024-02-09T13-21-05.777292.parquet'
- split: latest
path:
- '**/details_harness|truthfulqa:mc|0_2024-02-09T13-21-05.777292.parquet'
- config_name: harness_winogrande_5
data_files:
- split: 2024_02_09T13_21_05.777292
path:
- '**/details_harness|winogrande|5_2024-02-09T13-21-05.777292.parquet'
- split: latest
path:
- '**/details_harness|winogrande|5_2024-02-09T13-21-05.777292.parquet'
- config_name: results
data_files:
- split: 2024_02_09T13_21_05.777292
path:
- results_2024-02-09T13-21-05.777292.parquet
- split: latest
path:
- results_2024-02-09T13-21-05.777292.parquet
---
# Dataset Card for Evaluation run of kenhktsui/nano-phi-115M-v0.1
<!-- Provide a quick summary of the dataset. -->
Dataset automatically created during the evaluation run of model [kenhktsui/nano-phi-115M-v0.1](https://huggingface.co/kenhktsui/nano-phi-115M-v0.1) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 63 configurations, each corresponding to one of the evaluated tasks.

The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "latest" split always points to the most recent results.

An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).

To load the details from a run, you can, for instance, do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_kenhktsui__nano-phi-115M-v0.1",
"harness_winogrande_5",
	split="latest")
```
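The timestamped split names differ from the run timestamps only in punctuation (characters such as `-`, `:`, and spaces are not allowed in split names). The sketch below illustrates this mapping; the exact substitution rule is an assumption inferred from the split names listed in this card's metadata, not something the evaluation harness documents here.

```python
def run_timestamp_to_split(timestamp: str) -> str:
    """Convert a run timestamp to its split name (illustrative sketch).

    Assumption: the split name is obtained by replacing "-" and ":"
    with "_", matching the names seen in this card's metadata,
    e.g. "2024-02-09T13:21:05.777292" -> "2024_02_09T13_21_05.777292".
    """
    return timestamp.replace("-", "_").replace(":", "_")


print(run_timestamp_to_split("2024-02-09T13:21:05.777292"))
# -> 2024_02_09T13_21_05.777292
```

This can be handy for programmatically selecting the split that corresponds to a specific evaluation run rather than always loading `"latest"`.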
## Latest results
These are the [latest results from run 2024-02-09T13:21:05.777292](https://huggingface.co/datasets/open-llm-leaderboard/details_kenhktsui__nano-phi-115M-v0.1/blob/main/results_2024-02-09T13-21-05.777292.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; each task's results can be found in its timestamped splits and in the "latest" split):
```python
{
"all": {
"acc": 0.2525471471328879,
"acc_stderr": 0.030613616045154307,
"acc_norm": 0.25320771353725297,
"acc_norm_stderr": 0.031427270253272646,
"mc1": 0.25091799265605874,
"mc1_stderr": 0.015176985027707693,
"mc2": 0.4600301977928377,
"mc2_stderr": 0.015417429651937565
},
"harness|arc:challenge|25": {
"acc": 0.1825938566552901,
"acc_stderr": 0.011289730684564993,
"acc_norm": 0.21928327645051193,
"acc_norm_stderr": 0.012091245787615735
},
"harness|hellaswag|10": {
"acc": 0.2731527584146584,
"acc_stderr": 0.004446680081493753,
"acc_norm": 0.2786297550288787,
"acc_norm_stderr": 0.0044740864899406865
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.23,
"acc_stderr": 0.04229525846816507,
"acc_norm": 0.23,
"acc_norm_stderr": 0.04229525846816507
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.3111111111111111,
"acc_stderr": 0.039992628766177235,
"acc_norm": 0.3111111111111111,
"acc_norm_stderr": 0.039992628766177235
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.21710526315789475,
"acc_stderr": 0.033550453048829205,
"acc_norm": 0.21710526315789475,
"acc_norm_stderr": 0.033550453048829205
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.25,
"acc_stderr": 0.04351941398892446,
"acc_norm": 0.25,
"acc_norm_stderr": 0.04351941398892446
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.22264150943396227,
"acc_stderr": 0.025604233470899098,
"acc_norm": 0.22264150943396227,
"acc_norm_stderr": 0.025604233470899098
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.22916666666666666,
"acc_stderr": 0.03514697467862388,
"acc_norm": 0.22916666666666666,
"acc_norm_stderr": 0.03514697467862388
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.17,
"acc_stderr": 0.03775251680686371,
"acc_norm": 0.17,
"acc_norm_stderr": 0.03775251680686371
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.26,
"acc_stderr": 0.04408440022768078,
"acc_norm": 0.26,
"acc_norm_stderr": 0.04408440022768078
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.25,
"acc_stderr": 0.04351941398892446,
"acc_norm": 0.25,
"acc_norm_stderr": 0.04351941398892446
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.2023121387283237,
"acc_stderr": 0.030631145539198816,
"acc_norm": 0.2023121387283237,
"acc_norm_stderr": 0.030631145539198816
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.3235294117647059,
"acc_stderr": 0.046550104113196177,
"acc_norm": 0.3235294117647059,
"acc_norm_stderr": 0.046550104113196177
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.26,
"acc_stderr": 0.044084400227680794,
"acc_norm": 0.26,
"acc_norm_stderr": 0.044084400227680794
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.2425531914893617,
"acc_stderr": 0.028020226271200217,
"acc_norm": 0.2425531914893617,
"acc_norm_stderr": 0.028020226271200217
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.22807017543859648,
"acc_stderr": 0.03947152782669415,
"acc_norm": 0.22807017543859648,
"acc_norm_stderr": 0.03947152782669415
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.22758620689655173,
"acc_stderr": 0.03493950380131184,
"acc_norm": 0.22758620689655173,
"acc_norm_stderr": 0.03493950380131184
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.24603174603174602,
"acc_stderr": 0.022182037202948368,
"acc_norm": 0.24603174603174602,
"acc_norm_stderr": 0.022182037202948368
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.15079365079365079,
"acc_stderr": 0.03200686497287392,
"acc_norm": 0.15079365079365079,
"acc_norm_stderr": 0.03200686497287392
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.3,
"acc_stderr": 0.046056618647183814,
"acc_norm": 0.3,
"acc_norm_stderr": 0.046056618647183814
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.3387096774193548,
"acc_stderr": 0.026923446059302844,
"acc_norm": 0.3387096774193548,
"acc_norm_stderr": 0.026923446059302844
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.29064039408866993,
"acc_stderr": 0.0319474007226554,
"acc_norm": 0.29064039408866993,
"acc_norm_stderr": 0.0319474007226554
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.31,
"acc_stderr": 0.04648231987117317,
"acc_norm": 0.31,
"acc_norm_stderr": 0.04648231987117317
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.21818181818181817,
"acc_stderr": 0.03225078108306289,
"acc_norm": 0.21818181818181817,
"acc_norm_stderr": 0.03225078108306289
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.3181818181818182,
"acc_stderr": 0.0331847733384533,
"acc_norm": 0.3181818181818182,
"acc_norm_stderr": 0.0331847733384533
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.20207253886010362,
"acc_stderr": 0.02897908979429673,
"acc_norm": 0.20207253886010362,
"acc_norm_stderr": 0.02897908979429673
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.24871794871794872,
"acc_stderr": 0.0219169577092138,
"acc_norm": 0.24871794871794872,
"acc_norm_stderr": 0.0219169577092138
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.2777777777777778,
"acc_stderr": 0.027309140588230186,
"acc_norm": 0.2777777777777778,
"acc_norm_stderr": 0.027309140588230186
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.3277310924369748,
"acc_stderr": 0.03048991141767323,
"acc_norm": 0.3277310924369748,
"acc_norm_stderr": 0.03048991141767323
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.32450331125827814,
"acc_stderr": 0.03822746937658753,
"acc_norm": 0.32450331125827814,
"acc_norm_stderr": 0.03822746937658753
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.24954128440366974,
"acc_stderr": 0.018553897629501617,
"acc_norm": 0.24954128440366974,
"acc_norm_stderr": 0.018553897629501617
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.4722222222222222,
"acc_stderr": 0.0340470532865388,
"acc_norm": 0.4722222222222222,
"acc_norm_stderr": 0.0340470532865388
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.2696078431372549,
"acc_stderr": 0.03114557065948678,
"acc_norm": 0.2696078431372549,
"acc_norm_stderr": 0.03114557065948678
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.21518987341772153,
"acc_stderr": 0.026750826994676152,
"acc_norm": 0.21518987341772153,
"acc_norm_stderr": 0.026750826994676152
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.19730941704035873,
"acc_stderr": 0.02670985334496796,
"acc_norm": 0.19730941704035873,
"acc_norm_stderr": 0.02670985334496796
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.2824427480916031,
"acc_stderr": 0.03948406125768361,
"acc_norm": 0.2824427480916031,
"acc_norm_stderr": 0.03948406125768361
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.23140495867768596,
"acc_stderr": 0.03849856098794088,
"acc_norm": 0.23140495867768596,
"acc_norm_stderr": 0.03849856098794088
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.2222222222222222,
"acc_stderr": 0.040191074725573483,
"acc_norm": 0.2222222222222222,
"acc_norm_stderr": 0.040191074725573483
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.294478527607362,
"acc_stderr": 0.03581165790474082,
"acc_norm": 0.294478527607362,
"acc_norm_stderr": 0.03581165790474082
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.2767857142857143,
"acc_stderr": 0.04246624336697624,
"acc_norm": 0.2767857142857143,
"acc_norm_stderr": 0.04246624336697624
},
"harness|hendrycksTest-management|5": {
"acc": 0.20388349514563106,
"acc_stderr": 0.03989139859531772,
"acc_norm": 0.20388349514563106,
"acc_norm_stderr": 0.03989139859531772
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.19658119658119658,
"acc_stderr": 0.02603538609895129,
"acc_norm": 0.19658119658119658,
"acc_norm_stderr": 0.02603538609895129
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.28,
"acc_stderr": 0.04512608598542127,
"acc_norm": 0.28,
"acc_norm_stderr": 0.04512608598542127
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.2720306513409962,
"acc_stderr": 0.015913367447500524,
"acc_norm": 0.2720306513409962,
"acc_norm_stderr": 0.015913367447500524
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.20520231213872833,
"acc_stderr": 0.021742519835276287,
"acc_norm": 0.20520231213872833,
"acc_norm_stderr": 0.021742519835276287
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.24692737430167597,
"acc_stderr": 0.014422292204808835,
"acc_norm": 0.24692737430167597,
"acc_norm_stderr": 0.014422292204808835
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.2647058823529412,
"acc_stderr": 0.02526169121972948,
"acc_norm": 0.2647058823529412,
"acc_norm_stderr": 0.02526169121972948
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.19292604501607716,
"acc_stderr": 0.022411516780911363,
"acc_norm": 0.19292604501607716,
"acc_norm_stderr": 0.022411516780911363
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.23765432098765432,
"acc_stderr": 0.023683591837008557,
"acc_norm": 0.23765432098765432,
"acc_norm_stderr": 0.023683591837008557
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.24822695035460993,
"acc_stderr": 0.025770015644290382,
"acc_norm": 0.24822695035460993,
"acc_norm_stderr": 0.025770015644290382
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.24641460234680573,
"acc_stderr": 0.011005971399927235,
"acc_norm": 0.24641460234680573,
"acc_norm_stderr": 0.011005971399927235
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.4264705882352941,
"acc_stderr": 0.03004261583271486,
"acc_norm": 0.4264705882352941,
"acc_norm_stderr": 0.03004261583271486
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.2630718954248366,
"acc_stderr": 0.01781267654232065,
"acc_norm": 0.2630718954248366,
"acc_norm_stderr": 0.01781267654232065
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.17272727272727273,
"acc_stderr": 0.03620691833929218,
"acc_norm": 0.17272727272727273,
"acc_norm_stderr": 0.03620691833929218
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.2,
"acc_stderr": 0.025607375986579153,
"acc_norm": 0.2,
"acc_norm_stderr": 0.025607375986579153
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.22885572139303484,
"acc_stderr": 0.029705284056772436,
"acc_norm": 0.22885572139303484,
"acc_norm_stderr": 0.029705284056772436
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.24,
"acc_stderr": 0.04292346959909283,
"acc_norm": 0.24,
"acc_norm_stderr": 0.04292346959909283
},
"harness|hendrycksTest-virology|5": {
"acc": 0.20481927710843373,
"acc_stderr": 0.03141784291663925,
"acc_norm": 0.20481927710843373,
"acc_norm_stderr": 0.03141784291663925
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.2222222222222222,
"acc_stderr": 0.03188578017686398,
"acc_norm": 0.2222222222222222,
"acc_norm_stderr": 0.03188578017686398
},
"harness|truthfulqa:mc|0": {
"mc1": 0.25091799265605874,
"mc1_stderr": 0.015176985027707693,
"mc2": 0.4600301977928377,
"mc2_stderr": 0.015417429651937565
},
"harness|winogrande|5": {
"acc": 0.5082872928176796,
"acc_stderr": 0.014050555322824194
},
"harness|gsm8k|5": {
"acc": 0.0,
"acc_stderr": 0.0
}
}
```
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] |
irds/wapo_v2_trec-news-2019 | ---
pretty_name: '`wapo/v2/trec-news-2019`'
viewer: false
source_datasets: []
task_categories:
- text-retrieval
---
# Dataset Card for `wapo/v2/trec-news-2019`
The `wapo/v2/trec-news-2019` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package.
For more information about the dataset, see the [documentation](https://ir-datasets.com/wapo#wapo/v2/trec-news-2019).
# Data
This dataset provides:
- `queries` (i.e., topics); count=60
- `qrels`: (relevance assessments); count=15,655
## Usage
```python
from datasets import load_dataset
queries = load_dataset('irds/wapo_v2_trec-news-2019', 'queries')
for record in queries:
record # {'query_id': ..., 'doc_id': ..., 'url': ...}
qrels = load_dataset('irds/wapo_v2_trec-news-2019', 'qrels')
for record in qrels:
record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...}
```
Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the
data in 🤗 Dataset format.
## Citation Information
```
@inproceedings{Soboroff2019News,
title={TREC 2019 News Track Overview},
author={Ian Soboroff and Shudong Huang and Donna Harman},
booktitle={TREC},
year={2019}
}
```
|
Hyeonsieun/NTandPNT_1 | ---
dataset_info:
features:
- name: GT
dtype: string
- name: PNT + NT
dtype: string
splits:
- name: train
num_bytes: 47528690
num_examples: 186660
download_size: 18016582
dataset_size: 47528690
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
PJMixers/Math-10K | ---
language:
- en
tags:
- math
size_categories:
- 10K<n<100K
---
Created with [this script](https://gist.github.com/xzuyn/5807bbc2a305590f7b49b879dc0354ad), so I assume everything is 100% correct (with rounding). |
atmallen/quirky_translation | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
dataset_info:
features:
- name: alice_label
dtype: bool
- name: bob_label
dtype: bool
- name: difficulty
dtype: float64
- name: statement
dtype: string
- name: choices
sequence: string
- name: character
dtype: string
- name: label
dtype: bool
splits:
- name: train
num_bytes: 7925572
num_examples: 27998
- name: validation
num_bytes: 1133387
num_examples: 4000
- name: test
num_bytes: 1133061
num_examples: 4000
download_size: 1036236
dataset_size: 10192020
---
# Dataset Card for "quirky_translation"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
open-llm-leaderboard/details_chargoddard__Yi-34B-Llama | ---
pretty_name: Evaluation run of chargoddard/Yi-34B-Llama
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [chargoddard/Yi-34B-Llama](https://huggingface.co/chargoddard/Yi-34B-Llama) on\
\ the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 1 configuration, each one corresponding to one of the\
\ evaluated tasks.\n\nThe dataset has been created from 1 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
\ timestamp of the run. The \"train\" split always points to the latest results.\n\
\nAn additional configuration \"results\" stores all the aggregated results of the\
\ run (and is used to compute and display the aggregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_chargoddard__Yi-34B-Llama\"\
,\n\t\"harness_gsm8k_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese\
\ are the [latest results from run 2023-12-03T19:49:20.371477](https://huggingface.co/datasets/open-llm-leaderboard/details_chargoddard__Yi-34B-Llama/blob/main/results_2023-12-03T19-49-20.371477.json)(note\
\ that there might be results for other tasks in the repo if successive evals didn't\
\ cover the same tasks. You can find each in the results and the \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.6080363912054587,\n\
\ \"acc_stderr\": 0.013447140886023824\n },\n \"harness|gsm8k|5\":\
\ {\n \"acc\": 0.6080363912054587,\n \"acc_stderr\": 0.013447140886023824\n\
\ }\n}\n```"
repo_url: https://huggingface.co/chargoddard/Yi-34B-Llama
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_gsm8k_5
data_files:
- split: 2023_12_03T19_49_20.371477
path:
- '**/details_harness|gsm8k|5_2023-12-03T19-49-20.371477.parquet'
- split: latest
path:
- '**/details_harness|gsm8k|5_2023-12-03T19-49-20.371477.parquet'
- config_name: results
data_files:
- split: 2023_12_03T19_49_20.371477
path:
- results_2023-12-03T19-49-20.371477.parquet
- split: latest
path:
- results_2023-12-03T19-49-20.371477.parquet
---
# Dataset Card for Evaluation run of chargoddard/Yi-34B-Llama
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/chargoddard/Yi-34B-Llama
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** clementine@hf.co
### Dataset Summary
Dataset automatically created during the evaluation run of model [chargoddard/Yi-34B-Llama](https://huggingface.co/chargoddard/Yi-34B-Llama) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 1 configuration, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_chargoddard__Yi-34B-Llama",
"harness_gsm8k_5",
split="train")
```
## Latest results
These are the [latest results from run 2023-12-03T19:49:20.371477](https://huggingface.co/datasets/open-llm-leaderboard/details_chargoddard__Yi-34B-Llama/blob/main/results_2023-12-03T19-49-20.371477.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"acc": 0.6080363912054587,
"acc_stderr": 0.013447140886023824
},
"harness|gsm8k|5": {
"acc": 0.6080363912054587,
"acc_stderr": 0.013447140886023824
}
}
```
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] |
smart-dev-art/mh77hmrbr66x33zetfqco3vfh4 | ---
license: openrail
---
|
enpaiva/qa-cultura-guarani_prometheus-scores | ---
license: cc-by-sa-4.0
dataset_info:
features:
- name: query
dtype: string
- name: base_answer
dtype: string
- name: generated_answer
dtype: string
- name: model
dtype: string
- name: prompt_used
dtype: string
- name: labels
dtype: string
- name: prometheus_score
dtype: float64
- name: feedback
dtype: string
- name: evaluated_by
dtype: string
splits:
- name: train
num_bytes: 3333369
num_examples: 2746
- name: test
num_bytes: 318709
num_examples: 250
download_size: 1083996
dataset_size: 3652078
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
|
AdapterOcean/med_alpaca_standardized_cluster_85_alpaca | ---
dataset_info:
features:
- name: input
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 1350118
num_examples: 3800
download_size: 568891
dataset_size: 1350118
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "med_alpaca_standardized_cluster_85_alpaca"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
plncmm/wl-medication | ---
license: cc-by-nc-4.0
---
|
MuhammadHanif/Laion_aesthetics_5plus_1024_33M | ---
license: apache-2.0
---
|
Rosenberg/CMeEE | ---
license: mit
---
# Manifest
- CMeEE_train.json: training set
- CMeEE_dev.json: validation set
- CMeEE_test.json: test set
- When submitting, fill in the "entities" field for each record as a list. Every recognized entity must contain the three fields "start_idx", "end_idx", and "type".
- The submission file must be named CMeEE_test.json
- example_gold.json: example of gold-standard answers
- example_pred.json: example of a submission file
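To illustrate the required record shape, here is a minimal sketch of writing one prediction record. The example sentence, the character offsets, and the "dis" (disease) type label are hypothetical illustrations (and end_idx is assumed to be inclusive); the actual records and label set come from the CMeEE data itself.

```python
import json

# Hypothetical prediction record: start_idx/end_idx are character indices into
# "text" (end_idx assumed inclusive), and "dis" is an illustrative type label.
record = {
    "text": "患者有糖尿病史。",
    "entities": [
        {"start_idx": 3, "end_idx": 5, "type": "dis"},  # the span "糖尿病"
    ],
}

# Write all predicted records as a list to the required file name.
with open("CMeEE_test.json", "w", encoding="utf-8") as f:
    json.dump([record], f, ensure_ascii=False, indent=2)

print(record["text"][3:6])  # prints 糖尿病
```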
The evaluation metric is strict Micro-F1. |
kpriyanshu256/MultiTabQA-multitable_pretraining-train-v2-45000 | ---
dataset_info:
features:
- name: tables
sequence: string
- name: table_names
sequence: string
- name: query
dtype: string
- name: answer
dtype: string
- name: source
dtype: string
- name: target
dtype: string
- name: source_latex
dtype: string
- name: target_latex
dtype: string
- name: source_html
dtype: string
- name: target_html
dtype: string
- name: source_markdown
dtype: string
- name: target_markdown
dtype: string
splits:
- name: train
num_bytes: 6166063039
num_examples: 1000
download_size: 1241635590
dataset_size: 6166063039
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
shivam9980/headline-data-updated | ---
dataset_info:
features:
- name: headline
dtype: string
- name: content
dtype: string
- name: category
dtype:
class_label:
names:
'0': Sponsor
'1': Big Tech & Startups
'2': Science and Futuristic Technology
'3': Programming, Design & Data Science
'4': Miscellaneous
splits:
- name: train
num_bytes: 4914106
num_examples: 7138
- name: test
num_bytes: 545521
num_examples: 794
download_size: 2951994
dataset_size: 5459627
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
|
lponsard/phpbb | ---
dataset_info:
features:
- name: labels
dtype: int64
- name: post_subject
dtype: string
- name: post_text
dtype: string
splits:
- name: train
num_bytes: 81017.6
num_examples: 116
- name: test
num_bytes: 20254.4
num_examples: 29
download_size: 59163
dataset_size: 101272.0
---
# Dataset Card for "phpbb"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
CyberHarem/shizune_naruto | ---
license: mit
task_categories:
- text-to-image
tags:
- art
- not-for-all-audiences
size_categories:
- n<1K
---
# Dataset of shizune (NARUTO)
This is the dataset of shizune (NARUTO), containing 200 images and their tags.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)).
|
dikshyamohanty/ni-sample | ---
license: unknown
---
|
zaydzuhri/the_pile_tokenized_5percent_truncated | ---
dataset_info:
features:
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
splits:
- name: train
num_bytes: 26575097543
num_examples: 6000000
download_size: 8682781157
dataset_size: 26575097543
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
AdapterOcean/med_alpaca_standardized_cluster_80_std | ---
dataset_info:
features:
- name: message
dtype: string
- name: message_type
dtype: string
- name: message_id
dtype: int64
- name: conversation_id
dtype: int64
- name: cluster
dtype: float64
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 15463681
num_examples: 32130
download_size: 7972208
dataset_size: 15463681
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "med_alpaca_standardized_cluster_80_std"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
CyberHarem/magallan_arknights | ---
license: mit
task_categories:
- text-to-image
tags:
- art
- not-for-all-audiences
size_categories:
- n<1K
---
# Dataset of magallan/マゼラン/麦哲伦 (Arknights)
This is the dataset of magallan/マゼラン/麦哲伦 (Arknights), containing 500 images and their tags.
The core tags of this character are `multicolored_hair, streaked_hair, white_hair, short_hair, brown_hair, yellow_eyes, earrings, two-tone_hair, hair_between_eyes, breasts`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:--------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 500 | 779.31 MiB | [Download](https://huggingface.co/datasets/CyberHarem/magallan_arknights/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 1200 | 500 | 663.53 MiB | [Download](https://huggingface.co/datasets/CyberHarem/magallan_arknights/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 1201 | 1.25 GiB | [Download](https://huggingface.co/datasets/CyberHarem/magallan_arknights/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need this, just run the following code
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/magallan_arknights',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
## List of Clusters
List of tag clustering result, maybe some outfits can be mined here.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 10 |  |  |  |  |  | 1girl, backpack, long_sleeves, mask_around_neck, solo, white_gloves, jewelry, simple_background, white_background, white_coat, looking_at_viewer, full_body, closed_mouth, holding, skates, smile, bird_mask, standing |
| 1 | 7 |  |  |  |  |  | 1girl, backpack, jacket, long_sleeves, simple_background, solo, upper_body, white_background, white_gloves, blush, coat, looking_at_viewer, mask_around_neck, open_mouth, :d, jewelry, glint |
| 2 | 16 |  |  |  |  |  | 1girl, solo, white_gloves, long_sleeves, outdoors, penguin, bird_mask, jewelry, mask_around_neck, snow, blue_sky, blush, looking_at_viewer, day, jacket, open_mouth, white_coat, :d, upper_body, drone, backpack, holding_animal, winter_clothes |
| 3 | 5 |  |  |  |  |  | 1girl, full_body, jewelry, long_sleeves, looking_at_viewer, mask_around_neck, outdoors, solo, standing, white_gloves, blue_sky, holding, ice_skates, white_coat, winter_coat, bird_mask, closed_mouth, penguin, :d, backpack, day, open_mouth, snow |
| 4 | 12 |  |  |  |  |  | 1girl, long_sleeves, looking_at_viewer, ribbed_sweater, simple_background, solo, mask_around_neck, jewelry, upper_body, white_background, brown_sweater, medium_breasts, turtleneck_sweater, white_gloves, smile, black_hair, blush, closed_mouth, single_glove, grey_sweater, hand_up, strap |
| 5 | 6 |  |  |  |  |  | 1girl, closed_mouth, long_sleeves, looking_at_viewer, mask_around_neck, ribbed_sweater, smile, solo, bird_mask, drone, white_gloves, black_hair, glint, holding, medium_breasts, rhine_lab_logo, single_glove, clothes_around_waist, cowboy_shot, single_earring, upper_body |
| 6 | 22 |  |  |  |  |  | 1girl, solo, long_sleeves, looking_at_viewer, nejiri_hachimaki, official_alternate_costume, smile, white_gloves, white_shirt, holding_bowl, simple_background, upper_body, jacket, jewelry, white_background, shaved_ice, single_glove, open_mouth |
| 7 | 5 |  |  |  |  |  | 1boy, 1girl, blush, hetero, navel, nipples, open_mouth, penis, spread_legs, vaginal, bottomless, cum_in_pussy, long_sleeves, looking_at_viewer, no_bra, solo_focus, sweater_lift, cum_overflow, grey_sweater, heart, large_breasts, medium_breasts, naked_sweater, ribbed_sweater, saliva, simple_background, stomach, tears, turtleneck_sweater, uncensored, bar_censor, black_hair, black_sweater, clothed_female_nude_male, clothed_sex, cowgirl_position, girl_on_top, heavy_breathing, mask_around_neck, panties_around_one_leg, pov, pubic_hair, sex_from_behind, single_earring, smile, tearing_up, white_background, white_panties |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | backpack | long_sleeves | mask_around_neck | solo | white_gloves | jewelry | simple_background | white_background | white_coat | looking_at_viewer | full_body | closed_mouth | holding | skates | smile | bird_mask | standing | jacket | upper_body | blush | coat | open_mouth | :d | glint | outdoors | penguin | snow | blue_sky | day | drone | holding_animal | winter_clothes | ice_skates | winter_coat | ribbed_sweater | brown_sweater | medium_breasts | turtleneck_sweater | black_hair | single_glove | grey_sweater | hand_up | strap | rhine_lab_logo | clothes_around_waist | cowboy_shot | single_earring | nejiri_hachimaki | official_alternate_costume | white_shirt | holding_bowl | shaved_ice | 1boy | hetero | navel | nipples | penis | spread_legs | vaginal | bottomless | cum_in_pussy | no_bra | solo_focus | sweater_lift | cum_overflow | heart | large_breasts | naked_sweater | saliva | stomach | tears | uncensored | bar_censor | black_sweater | clothed_female_nude_male | clothed_sex | cowgirl_position | girl_on_top | heavy_breathing | panties_around_one_leg | pov | pubic_hair | sex_from_behind | tearing_up | white_panties |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:-----------|:---------------|:-------------------|:-------|:---------------|:----------|:--------------------|:-------------------|:-------------|:--------------------|:------------|:---------------|:----------|:---------|:--------|:------------|:-----------|:---------|:-------------|:--------|:-------|:-------------|:-----|:--------|:-----------|:----------|:-------|:-----------|:------|:--------|:-----------------|:-----------------|:-------------|:--------------|:-----------------|:----------------|:-----------------|:---------------------|:-------------|:---------------|:---------------|:----------|:--------|:-----------------|:-----------------------|:--------------|:-----------------|:-------------------|:-----------------------------|:--------------|:---------------|:-------------|:-------|:---------|:--------|:----------|:--------|:--------------|:----------|:-------------|:---------------|:---------|:-------------|:---------------|:---------------|:--------|:----------------|:----------------|:---------|:----------|:--------|:-------------|:-------------|:----------------|:---------------------------|:--------------|:-------------------|:--------------|:------------------|:-------------------------|:------|:-------------|:------------------|:-------------|:----------------|
| 0 | 10 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 1 | 7 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | | X | | | | | | | | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 2 | 16 |  |  |  |  |  | X | X | X | X | X | X | X | | | X | X | | | | | | X | | X | X | X | | X | X | | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 3 | 5 |  |  |  |  |  | X | X | X | X | X | X | X | | | X | X | X | X | X | | | X | X | | | | | X | X | | X | X | X | X | X | | | | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 4 | 12 |  |  |  |  |  | X | | X | X | X | X | X | X | X | | X | | X | | | X | | | | X | X | | | | | | | | | | | | | | | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 5 | 6 |  |  |  |  |  | X | | X | X | X | X | | | | | X | | X | X | | X | X | | | X | | | | | X | | | | | | X | | | | | X | | X | | X | X | | | | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 6 | 22 |  |  |  |  |  | X | | X | | X | X | X | X | X | | X | | | | | X | | | X | X | | | X | | | | | | | | | | | | | | | | | | X | | | | | | | | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 7 | 5 |  |  |  |  |  | X | | X | X | | | | X | X | | X | | | | | X | | | | | X | | X | | | | | | | | | | | | | X | | X | X | X | | X | | | | | | X | | | | | | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X |
|
CyberHarem/jersey_azurlane | ---
license: mit
task_categories:
- text-to-image
tags:
- art
- not-for-all-audiences
size_categories:
- n<1K
---
# Dataset of jersey/ジャージー/泽西 (Azur Lane)
This is the dataset of jersey/ジャージー/泽西 (Azur Lane), containing 13 images and their tags.
The core tags of this character are `long_hair, red_hair, twintails, bangs, breasts, low_twintails, very_long_hair, yellow_eyes, ahoge, antenna_hair, bow, brown_eyes, crown, hair_ornament, small_breasts`, which are pruned in this dataset.
Images are crawled from many sites (e.g. Danbooru, Pixiv, Zerochan, ...); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:----------|:-----------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 13 | 21.90 MiB | [Download](https://huggingface.co/datasets/CyberHarem/jersey_azurlane/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 13 | 11.55 MiB | [Download](https://huggingface.co/datasets/CyberHarem/jersey_azurlane/resolve/main/dataset-800.zip) | IMG+TXT | Dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 36 | 27.11 MiB | [Download](https://huggingface.co/datasets/CyberHarem/jersey_azurlane/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 13 | 18.55 MiB | [Download](https://huggingface.co/datasets/CyberHarem/jersey_azurlane/resolve/main/dataset-1200.zip) | IMG+TXT | Dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 36 | 39.97 MiB | [Download](https://huggingface.co/datasets/CyberHarem/jersey_azurlane/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
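In the IMG+TXT packages, each image sits alongside a same-stem `.txt` file holding its tags. A minimal sketch of pairing the two after extraction (the helper name and the image-extension list are assumptions, not part of the packages themselves):

```python
import os

# Image extensions the IMG+TXT packages may contain (an assumption; adjust as needed).
IMAGE_EXTS = ('.png', '.jpg', '.jpeg', '.webp')

def pair_img_txt(filenames):
    """Group files by stem and pair each image with its same-stem .txt tag file."""
    by_stem = {}
    for name in filenames:
        stem, ext = os.path.splitext(name)
        by_stem.setdefault(stem, {})[ext.lower()] = name
    pairs = []
    for stem in sorted(by_stem):
        files = by_stem[stem]
        image = next((files[e] for e in IMAGE_EXTS if e in files), None)
        if image is not None and '.txt' in files:
            pairs.append((image, files['.txt']))
    return pairs

print(pair_img_txt(['1.png', '1.txt', '2.jpg', '2.txt', 'notes.md']))
# → [('1.png', '1.txt'), ('2.jpg', '2.txt')]
```

Files without a matching `.txt` (or vice versa) are simply skipped, so stray metadata files in the archive do no harm.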
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need it, run the following code:
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/jersey_azurlane',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
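Once loaded, items can be filtered by their tag metadata before training. A minimal sketch over plain dicts (the exact tag structure is an assumption here; waifuc exposes it via `item.meta['tags']`):

```python
def filter_by_tags(metas, required):
    """Keep only entries whose tags contain every required tag."""
    required = set(required)
    return [m for m in metas if required <= set(m.get('tags', {}))]

# Hypothetical metadata entries mimicking item.meta (tag -> confidence score).
metas = [
    {'filename': '1.png', 'tags': {'1girl': 0.99, 'solo': 0.95}},
    {'filename': '2.png', 'tags': {'1girl': 0.98, 'open_mouth': 0.90}},
]

print(filter_by_tags(metas, ['solo']))  # keeps only 1.png
```

The subset check (`required <= set(...)`) means multiple required tags act as a logical AND.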
## List of Clusters
List of tag clustering results; some outfits may be mined from these clusters.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:-------------------------------------------------------------------------------------------------------|
| 0 | 13 |  |  |  |  |  | 1girl, blush, solo, looking_at_viewer, open_mouth, detached_sleeves, bare_shoulders, dress, sleeveless |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | blush | solo | looking_at_viewer | open_mouth | detached_sleeves | bare_shoulders | dress | sleeveless |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:--------|:-------|:--------------------|:-------------|:-------------------|:-----------------|:--------|:-------------|
| 0 | 13 |  |  |  |  |  | X | X | X | X | X | X | X | X | X |
|
tyzhu/synpre_mix_v1_1M | ---
dataset_info:
features:
- name: inputs
dtype: string
- name: targets
dtype: string
splits:
- name: train
num_bytes: 1480500977.0
num_examples: 1000000
- name: validation
num_bytes: 14857855.5
num_examples: 10000
download_size: 976476075
dataset_size: 1495358832.5
---
# Dataset Card for "synpre_mix_v1_1M"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
open-llm-leaderboard/details_Eurdem__Megatron-Mx | ---
pretty_name: Evaluation run of Eurdem/Megatron-Mx
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [Eurdem/Megatron-Mx](https://huggingface.co/Eurdem/Megatron-Mx) on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 63 configurations, each one corresponding to one of the\
\ evaluated tasks.\n\nThe dataset has been created from 1 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
\ timestamp of the run. The \"train\" split always points to the latest results.\n\
\nAn additional configuration \"results\" stores all the aggregated results of the\
\ run (and is used to compute and display the aggregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_Eurdem__Megatron-Mx\"\
,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\
These are the [latest results from run 2024-02-12T14:27:26.866749](https://huggingface.co/datasets/open-llm-leaderboard/details_Eurdem__Megatron-Mx/blob/main/results_2024-02-12T14-27-26.866749.json)(note\
\ that there might be results for other tasks in the repos if successive evals didn't\
\ cover the same tasks. You can find each in the results and the \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.6226080527190941,\n\
\ \"acc_stderr\": 0.03275260029756631,\n \"acc_norm\": 0.625474906418769,\n\
\ \"acc_norm_stderr\": 0.033408847072192514,\n \"mc1\": 0.423500611995104,\n\
\ \"mc1_stderr\": 0.01729742144853473,\n \"mc2\": 0.5995420715043286,\n\
\ \"mc2_stderr\": 0.015268251321394645\n },\n \"harness|arc:challenge|25\"\
: {\n \"acc\": 0.6245733788395904,\n \"acc_stderr\": 0.014150631435111725,\n\
\ \"acc_norm\": 0.6689419795221843,\n \"acc_norm_stderr\": 0.013752062419817832\n\
\ },\n \"harness|hellaswag|10\": {\n \"acc\": 0.6502688707428799,\n\
\ \"acc_stderr\": 0.004759103432380764,\n \"acc_norm\": 0.8498307110137423,\n\
\ \"acc_norm_stderr\": 0.0035650718701954473\n },\n \"harness|hendrycksTest-abstract_algebra|5\"\
: {\n \"acc\": 0.33,\n \"acc_stderr\": 0.04725815626252606,\n \
\ \"acc_norm\": 0.33,\n \"acc_norm_stderr\": 0.04725815626252606\n \
\ },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.5925925925925926,\n\
\ \"acc_stderr\": 0.04244633238353228,\n \"acc_norm\": 0.5925925925925926,\n\
\ \"acc_norm_stderr\": 0.04244633238353228\n },\n \"harness|hendrycksTest-astronomy|5\"\
: {\n \"acc\": 0.7171052631578947,\n \"acc_stderr\": 0.03665349695640767,\n\
\ \"acc_norm\": 0.7171052631578947,\n \"acc_norm_stderr\": 0.03665349695640767\n\
\ },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.61,\n\
\ \"acc_stderr\": 0.04902071300001975,\n \"acc_norm\": 0.61,\n \
\ \"acc_norm_stderr\": 0.04902071300001975\n },\n \"harness|hendrycksTest-clinical_knowledge|5\"\
: {\n \"acc\": 0.7018867924528301,\n \"acc_stderr\": 0.02815283794249387,\n\
\ \"acc_norm\": 0.7018867924528301,\n \"acc_norm_stderr\": 0.02815283794249387\n\
\ },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.7222222222222222,\n\
\ \"acc_stderr\": 0.03745554791462456,\n \"acc_norm\": 0.7222222222222222,\n\
\ \"acc_norm_stderr\": 0.03745554791462456\n },\n \"harness|hendrycksTest-college_chemistry|5\"\
: {\n \"acc\": 0.43,\n \"acc_stderr\": 0.04975698519562428,\n \
\ \"acc_norm\": 0.43,\n \"acc_norm_stderr\": 0.04975698519562428\n \
\ },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\"\
: 0.53,\n \"acc_stderr\": 0.05016135580465919,\n \"acc_norm\": 0.53,\n\
\ \"acc_norm_stderr\": 0.05016135580465919\n },\n \"harness|hendrycksTest-college_mathematics|5\"\
: {\n \"acc\": 0.3,\n \"acc_stderr\": 0.046056618647183814,\n \
\ \"acc_norm\": 0.3,\n \"acc_norm_stderr\": 0.046056618647183814\n \
\ },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.6069364161849711,\n\
\ \"acc_stderr\": 0.03724249595817731,\n \"acc_norm\": 0.6069364161849711,\n\
\ \"acc_norm_stderr\": 0.03724249595817731\n },\n \"harness|hendrycksTest-college_physics|5\"\
: {\n \"acc\": 0.3627450980392157,\n \"acc_stderr\": 0.04784060704105653,\n\
\ \"acc_norm\": 0.3627450980392157,\n \"acc_norm_stderr\": 0.04784060704105653\n\
\ },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\":\
\ 0.76,\n \"acc_stderr\": 0.04292346959909283,\n \"acc_norm\": 0.76,\n\
\ \"acc_norm_stderr\": 0.04292346959909283\n },\n \"harness|hendrycksTest-conceptual_physics|5\"\
: {\n \"acc\": 0.5361702127659574,\n \"acc_stderr\": 0.032600385118357715,\n\
\ \"acc_norm\": 0.5361702127659574,\n \"acc_norm_stderr\": 0.032600385118357715\n\
\ },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.4298245614035088,\n\
\ \"acc_stderr\": 0.04657047260594963,\n \"acc_norm\": 0.4298245614035088,\n\
\ \"acc_norm_stderr\": 0.04657047260594963\n },\n \"harness|hendrycksTest-electrical_engineering|5\"\
: {\n \"acc\": 0.5793103448275863,\n \"acc_stderr\": 0.0411391498118926,\n\
\ \"acc_norm\": 0.5793103448275863,\n \"acc_norm_stderr\": 0.0411391498118926\n\
\ },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\"\
: 0.4312169312169312,\n \"acc_stderr\": 0.025506481698138208,\n \"\
acc_norm\": 0.4312169312169312,\n \"acc_norm_stderr\": 0.025506481698138208\n\
\ },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.4523809523809524,\n\
\ \"acc_stderr\": 0.044518079590553275,\n \"acc_norm\": 0.4523809523809524,\n\
\ \"acc_norm_stderr\": 0.044518079590553275\n },\n \"harness|hendrycksTest-global_facts|5\"\
: {\n \"acc\": 0.36,\n \"acc_stderr\": 0.04824181513244218,\n \
\ \"acc_norm\": 0.36,\n \"acc_norm_stderr\": 0.04824181513244218\n \
\ },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.5774193548387097,\n\
\ \"acc_stderr\": 0.02810096472427264,\n \"acc_norm\": 0.5774193548387097,\n\
\ \"acc_norm_stderr\": 0.02810096472427264\n },\n \"harness|hendrycksTest-high_school_chemistry|5\"\
: {\n \"acc\": 0.458128078817734,\n \"acc_stderr\": 0.03505630140785741,\n\
\ \"acc_norm\": 0.458128078817734,\n \"acc_norm_stderr\": 0.03505630140785741\n\
\ },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \
\ \"acc\": 0.65,\n \"acc_stderr\": 0.04793724854411018,\n \"acc_norm\"\
: 0.65,\n \"acc_norm_stderr\": 0.04793724854411018\n },\n \"harness|hendrycksTest-high_school_european_history|5\"\
: {\n \"acc\": 0.7575757575757576,\n \"acc_stderr\": 0.03346409881055953,\n\
\ \"acc_norm\": 0.7575757575757576,\n \"acc_norm_stderr\": 0.03346409881055953\n\
\ },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\"\
: 0.7727272727272727,\n \"acc_stderr\": 0.029857515673386424,\n \"\
acc_norm\": 0.7727272727272727,\n \"acc_norm_stderr\": 0.029857515673386424\n\
\ },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n\
\ \"acc\": 0.8341968911917098,\n \"acc_stderr\": 0.026839845022314415,\n\
\ \"acc_norm\": 0.8341968911917098,\n \"acc_norm_stderr\": 0.026839845022314415\n\
\ },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \
\ \"acc\": 0.5974358974358974,\n \"acc_stderr\": 0.02486499515976775,\n \
\ \"acc_norm\": 0.5974358974358974,\n \"acc_norm_stderr\": 0.02486499515976775\n\
\ },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"\
acc\": 0.3148148148148148,\n \"acc_stderr\": 0.02831753349606648,\n \
\ \"acc_norm\": 0.3148148148148148,\n \"acc_norm_stderr\": 0.02831753349606648\n\
\ },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \
\ \"acc\": 0.6554621848739496,\n \"acc_stderr\": 0.03086868260412162,\n \
\ \"acc_norm\": 0.6554621848739496,\n \"acc_norm_stderr\": 0.03086868260412162\n\
\ },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\"\
: 0.33112582781456956,\n \"acc_stderr\": 0.038425817186598696,\n \"\
acc_norm\": 0.33112582781456956,\n \"acc_norm_stderr\": 0.038425817186598696\n\
\ },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\"\
: 0.8128440366972477,\n \"acc_stderr\": 0.016722684526200148,\n \"\
acc_norm\": 0.8128440366972477,\n \"acc_norm_stderr\": 0.016722684526200148\n\
\ },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\"\
: 0.4722222222222222,\n \"acc_stderr\": 0.0340470532865388,\n \"acc_norm\"\
: 0.4722222222222222,\n \"acc_norm_stderr\": 0.0340470532865388\n },\n\
\ \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\": 0.7843137254901961,\n\
\ \"acc_stderr\": 0.028867431449849303,\n \"acc_norm\": 0.7843137254901961,\n\
\ \"acc_norm_stderr\": 0.028867431449849303\n },\n \"harness|hendrycksTest-high_school_world_history|5\"\
: {\n \"acc\": 0.8059071729957806,\n \"acc_stderr\": 0.025744902532290916,\n\
\ \"acc_norm\": 0.8059071729957806,\n \"acc_norm_stderr\": 0.025744902532290916\n\
\ },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.6681614349775785,\n\
\ \"acc_stderr\": 0.03160295143776679,\n \"acc_norm\": 0.6681614349775785,\n\
\ \"acc_norm_stderr\": 0.03160295143776679\n },\n \"harness|hendrycksTest-human_sexuality|5\"\
: {\n \"acc\": 0.7862595419847328,\n \"acc_stderr\": 0.0359546161177469,\n\
\ \"acc_norm\": 0.7862595419847328,\n \"acc_norm_stderr\": 0.0359546161177469\n\
\ },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\":\
\ 0.8181818181818182,\n \"acc_stderr\": 0.03520893951097652,\n \"\
acc_norm\": 0.8181818181818182,\n \"acc_norm_stderr\": 0.03520893951097652\n\
\ },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.7685185185185185,\n\
\ \"acc_stderr\": 0.04077494709252627,\n \"acc_norm\": 0.7685185185185185,\n\
\ \"acc_norm_stderr\": 0.04077494709252627\n },\n \"harness|hendrycksTest-logical_fallacies|5\"\
: {\n \"acc\": 0.7361963190184049,\n \"acc_stderr\": 0.034624199316156234,\n\
\ \"acc_norm\": 0.7361963190184049,\n \"acc_norm_stderr\": 0.034624199316156234\n\
\ },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.5089285714285714,\n\
\ \"acc_stderr\": 0.04745033255489123,\n \"acc_norm\": 0.5089285714285714,\n\
\ \"acc_norm_stderr\": 0.04745033255489123\n },\n \"harness|hendrycksTest-management|5\"\
: {\n \"acc\": 0.7572815533980582,\n \"acc_stderr\": 0.04245022486384495,\n\
\ \"acc_norm\": 0.7572815533980582,\n \"acc_norm_stderr\": 0.04245022486384495\n\
\ },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.8632478632478633,\n\
\ \"acc_stderr\": 0.022509033937077802,\n \"acc_norm\": 0.8632478632478633,\n\
\ \"acc_norm_stderr\": 0.022509033937077802\n },\n \"harness|hendrycksTest-medical_genetics|5\"\
: {\n \"acc\": 0.72,\n \"acc_stderr\": 0.04512608598542128,\n \
\ \"acc_norm\": 0.72,\n \"acc_norm_stderr\": 0.04512608598542128\n \
\ },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.8148148148148148,\n\
\ \"acc_stderr\": 0.013890862162876166,\n \"acc_norm\": 0.8148148148148148,\n\
\ \"acc_norm_stderr\": 0.013890862162876166\n },\n \"harness|hendrycksTest-moral_disputes|5\"\
: {\n \"acc\": 0.6936416184971098,\n \"acc_stderr\": 0.024818350129436593,\n\
\ \"acc_norm\": 0.6936416184971098,\n \"acc_norm_stderr\": 0.024818350129436593\n\
\ },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.4,\n\
\ \"acc_stderr\": 0.01638463841038082,\n \"acc_norm\": 0.4,\n \
\ \"acc_norm_stderr\": 0.01638463841038082\n },\n \"harness|hendrycksTest-nutrition|5\"\
: {\n \"acc\": 0.7091503267973857,\n \"acc_stderr\": 0.02600480036395213,\n\
\ \"acc_norm\": 0.7091503267973857,\n \"acc_norm_stderr\": 0.02600480036395213\n\
\ },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.707395498392283,\n\
\ \"acc_stderr\": 0.02583989833487798,\n \"acc_norm\": 0.707395498392283,\n\
\ \"acc_norm_stderr\": 0.02583989833487798\n },\n \"harness|hendrycksTest-prehistory|5\"\
: {\n \"acc\": 0.7160493827160493,\n \"acc_stderr\": 0.025089478523765137,\n\
\ \"acc_norm\": 0.7160493827160493,\n \"acc_norm_stderr\": 0.025089478523765137\n\
\ },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"\
acc\": 0.475177304964539,\n \"acc_stderr\": 0.029790719243829727,\n \
\ \"acc_norm\": 0.475177304964539,\n \"acc_norm_stderr\": 0.029790719243829727\n\
\ },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.4576271186440678,\n\
\ \"acc_stderr\": 0.012724296550980188,\n \"acc_norm\": 0.4576271186440678,\n\
\ \"acc_norm_stderr\": 0.012724296550980188\n },\n \"harness|hendrycksTest-professional_medicine|5\"\
: {\n \"acc\": 0.6470588235294118,\n \"acc_stderr\": 0.0290294228156814,\n\
\ \"acc_norm\": 0.6470588235294118,\n \"acc_norm_stderr\": 0.0290294228156814\n\
\ },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"\
acc\": 0.6601307189542484,\n \"acc_stderr\": 0.019162418588623553,\n \
\ \"acc_norm\": 0.6601307189542484,\n \"acc_norm_stderr\": 0.019162418588623553\n\
\ },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.6818181818181818,\n\
\ \"acc_stderr\": 0.044612721759105085,\n \"acc_norm\": 0.6818181818181818,\n\
\ \"acc_norm_stderr\": 0.044612721759105085\n },\n \"harness|hendrycksTest-security_studies|5\"\
: {\n \"acc\": 0.7346938775510204,\n \"acc_stderr\": 0.02826388994378459,\n\
\ \"acc_norm\": 0.7346938775510204,\n \"acc_norm_stderr\": 0.02826388994378459\n\
\ },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.6169154228855721,\n\
\ \"acc_stderr\": 0.03437519337338251,\n \"acc_norm\": 0.6169154228855721,\n\
\ \"acc_norm_stderr\": 0.03437519337338251\n },\n \"harness|hendrycksTest-us_foreign_policy|5\"\
: {\n \"acc\": 0.83,\n \"acc_stderr\": 0.03775251680686371,\n \
\ \"acc_norm\": 0.83,\n \"acc_norm_stderr\": 0.03775251680686371\n \
\ },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.5060240963855421,\n\
\ \"acc_stderr\": 0.03892212195333045,\n \"acc_norm\": 0.5060240963855421,\n\
\ \"acc_norm_stderr\": 0.03892212195333045\n },\n \"harness|hendrycksTest-world_religions|5\"\
: {\n \"acc\": 0.8304093567251462,\n \"acc_stderr\": 0.02878210810540171,\n\
\ \"acc_norm\": 0.8304093567251462,\n \"acc_norm_stderr\": 0.02878210810540171\n\
\ },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.423500611995104,\n\
\ \"mc1_stderr\": 0.01729742144853473,\n \"mc2\": 0.5995420715043286,\n\
\ \"mc2_stderr\": 0.015268251321394645\n },\n \"harness|winogrande|5\"\
: {\n \"acc\": 0.7900552486187845,\n \"acc_stderr\": 0.01144628062926263\n\
\ },\n \"harness|gsm8k|5\": {\n \"acc\": 0.5299469294920395,\n \
\ \"acc_stderr\": 0.013747759685444704\n }\n}\n```"
repo_url: https://huggingface.co/Eurdem/Megatron-Mx
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_arc_challenge_25
data_files:
- split: 2024_02_12T14_27_26.866749
path:
- '**/details_harness|arc:challenge|25_2024-02-12T14-27-26.866749.parquet'
- split: latest
path:
- '**/details_harness|arc:challenge|25_2024-02-12T14-27-26.866749.parquet'
- config_name: harness_gsm8k_5
data_files:
- split: 2024_02_12T14_27_26.866749
path:
- '**/details_harness|gsm8k|5_2024-02-12T14-27-26.866749.parquet'
- split: latest
path:
- '**/details_harness|gsm8k|5_2024-02-12T14-27-26.866749.parquet'
- config_name: harness_hellaswag_10
data_files:
- split: 2024_02_12T14_27_26.866749
path:
- '**/details_harness|hellaswag|10_2024-02-12T14-27-26.866749.parquet'
- split: latest
path:
- '**/details_harness|hellaswag|10_2024-02-12T14-27-26.866749.parquet'
- config_name: harness_hendrycksTest_5
data_files:
- split: 2024_02_12T14_27_26.866749
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-02-12T14-27-26.866749.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2024-02-12T14-27-26.866749.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2024-02-12T14-27-26.866749.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2024-02-12T14-27-26.866749.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-02-12T14-27-26.866749.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2024-02-12T14-27-26.866749.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-02-12T14-27-26.866749.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-02-12T14-27-26.866749.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-02-12T14-27-26.866749.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2024-02-12T14-27-26.866749.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2024-02-12T14-27-26.866749.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2024-02-12T14-27-26.866749.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-02-12T14-27-26.866749.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2024-02-12T14-27-26.866749.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-02-12T14-27-26.866749.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-02-12T14-27-26.866749.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2024-02-12T14-27-26.866749.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2024-02-12T14-27-26.866749.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-02-12T14-27-26.866749.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-02-12T14-27-26.866749.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-02-12T14-27-26.866749.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-02-12T14-27-26.866749.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-02-12T14-27-26.866749.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-02-12T14-27-26.866749.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-02-12T14-27-26.866749.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-02-12T14-27-26.866749.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-02-12T14-27-26.866749.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-02-12T14-27-26.866749.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-02-12T14-27-26.866749.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-02-12T14-27-26.866749.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-02-12T14-27-26.866749.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-02-12T14-27-26.866749.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2024-02-12T14-27-26.866749.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-02-12T14-27-26.866749.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2024-02-12T14-27-26.866749.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-02-12T14-27-26.866749.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-02-12T14-27-26.866749.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2024-02-12T14-27-26.866749.parquet'
- '**/details_harness|hendrycksTest-management|5_2024-02-12T14-27-26.866749.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2024-02-12T14-27-26.866749.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-02-12T14-27-26.866749.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-02-12T14-27-26.866749.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-02-12T14-27-26.866749.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-02-12T14-27-26.866749.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2024-02-12T14-27-26.866749.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2024-02-12T14-27-26.866749.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2024-02-12T14-27-26.866749.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-02-12T14-27-26.866749.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2024-02-12T14-27-26.866749.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-02-12T14-27-26.866749.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-02-12T14-27-26.866749.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2024-02-12T14-27-26.866749.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2024-02-12T14-27-26.866749.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2024-02-12T14-27-26.866749.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-02-12T14-27-26.866749.parquet'
- '**/details_harness|hendrycksTest-virology|5_2024-02-12T14-27-26.866749.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2024-02-12T14-27-26.866749.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-02-12T14-27-26.866749.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2024-02-12T14-27-26.866749.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2024-02-12T14-27-26.866749.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2024-02-12T14-27-26.866749.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-02-12T14-27-26.866749.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2024-02-12T14-27-26.866749.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-02-12T14-27-26.866749.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-02-12T14-27-26.866749.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-02-12T14-27-26.866749.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2024-02-12T14-27-26.866749.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2024-02-12T14-27-26.866749.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2024-02-12T14-27-26.866749.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-02-12T14-27-26.866749.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2024-02-12T14-27-26.866749.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-02-12T14-27-26.866749.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-02-12T14-27-26.866749.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2024-02-12T14-27-26.866749.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2024-02-12T14-27-26.866749.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-02-12T14-27-26.866749.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-02-12T14-27-26.866749.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-02-12T14-27-26.866749.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-02-12T14-27-26.866749.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-02-12T14-27-26.866749.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-02-12T14-27-26.866749.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-02-12T14-27-26.866749.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-02-12T14-27-26.866749.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-02-12T14-27-26.866749.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-02-12T14-27-26.866749.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-02-12T14-27-26.866749.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-02-12T14-27-26.866749.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-02-12T14-27-26.866749.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-02-12T14-27-26.866749.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2024-02-12T14-27-26.866749.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-02-12T14-27-26.866749.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2024-02-12T14-27-26.866749.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-02-12T14-27-26.866749.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-02-12T14-27-26.866749.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2024-02-12T14-27-26.866749.parquet'
- '**/details_harness|hendrycksTest-management|5_2024-02-12T14-27-26.866749.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2024-02-12T14-27-26.866749.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-02-12T14-27-26.866749.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-02-12T14-27-26.866749.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-02-12T14-27-26.866749.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-02-12T14-27-26.866749.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2024-02-12T14-27-26.866749.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2024-02-12T14-27-26.866749.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2024-02-12T14-27-26.866749.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-02-12T14-27-26.866749.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2024-02-12T14-27-26.866749.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-02-12T14-27-26.866749.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-02-12T14-27-26.866749.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2024-02-12T14-27-26.866749.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2024-02-12T14-27-26.866749.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2024-02-12T14-27-26.866749.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-02-12T14-27-26.866749.parquet'
- '**/details_harness|hendrycksTest-virology|5_2024-02-12T14-27-26.866749.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2024-02-12T14-27-26.866749.parquet'
- config_name: harness_hendrycksTest_abstract_algebra_5
data_files:
- split: 2024_02_12T14_27_26.866749
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-02-12T14-27-26.866749.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-02-12T14-27-26.866749.parquet'
- config_name: harness_hendrycksTest_anatomy_5
data_files:
- split: 2024_02_12T14_27_26.866749
path:
- '**/details_harness|hendrycksTest-anatomy|5_2024-02-12T14-27-26.866749.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-anatomy|5_2024-02-12T14-27-26.866749.parquet'
- config_name: harness_hendrycksTest_astronomy_5
data_files:
- split: 2024_02_12T14_27_26.866749
path:
- '**/details_harness|hendrycksTest-astronomy|5_2024-02-12T14-27-26.866749.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-astronomy|5_2024-02-12T14-27-26.866749.parquet'
- config_name: harness_hendrycksTest_business_ethics_5
data_files:
- split: 2024_02_12T14_27_26.866749
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2024-02-12T14-27-26.866749.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2024-02-12T14-27-26.866749.parquet'
- config_name: harness_hendrycksTest_clinical_knowledge_5
data_files:
- split: 2024_02_12T14_27_26.866749
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-02-12T14-27-26.866749.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-02-12T14-27-26.866749.parquet'
- config_name: harness_hendrycksTest_college_biology_5
data_files:
- split: 2024_02_12T14_27_26.866749
path:
- '**/details_harness|hendrycksTest-college_biology|5_2024-02-12T14-27-26.866749.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_biology|5_2024-02-12T14-27-26.866749.parquet'
- config_name: harness_hendrycksTest_college_chemistry_5
data_files:
- split: 2024_02_12T14_27_26.866749
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-02-12T14-27-26.866749.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-02-12T14-27-26.866749.parquet'
- config_name: harness_hendrycksTest_college_computer_science_5
data_files:
- split: 2024_02_12T14_27_26.866749
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-02-12T14-27-26.866749.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-02-12T14-27-26.866749.parquet'
- config_name: harness_hendrycksTest_college_mathematics_5
data_files:
- split: 2024_02_12T14_27_26.866749
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-02-12T14-27-26.866749.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-02-12T14-27-26.866749.parquet'
- config_name: harness_hendrycksTest_college_medicine_5
data_files:
- split: 2024_02_12T14_27_26.866749
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2024-02-12T14-27-26.866749.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2024-02-12T14-27-26.866749.parquet'
- config_name: harness_hendrycksTest_college_physics_5
data_files:
- split: 2024_02_12T14_27_26.866749
path:
- '**/details_harness|hendrycksTest-college_physics|5_2024-02-12T14-27-26.866749.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_physics|5_2024-02-12T14-27-26.866749.parquet'
- config_name: harness_hendrycksTest_computer_security_5
data_files:
- split: 2024_02_12T14_27_26.866749
path:
- '**/details_harness|hendrycksTest-computer_security|5_2024-02-12T14-27-26.866749.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-computer_security|5_2024-02-12T14-27-26.866749.parquet'
- config_name: harness_hendrycksTest_conceptual_physics_5
data_files:
- split: 2024_02_12T14_27_26.866749
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-02-12T14-27-26.866749.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-02-12T14-27-26.866749.parquet'
- config_name: harness_hendrycksTest_econometrics_5
data_files:
- split: 2024_02_12T14_27_26.866749
path:
- '**/details_harness|hendrycksTest-econometrics|5_2024-02-12T14-27-26.866749.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-econometrics|5_2024-02-12T14-27-26.866749.parquet'
- config_name: harness_hendrycksTest_electrical_engineering_5
data_files:
- split: 2024_02_12T14_27_26.866749
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-02-12T14-27-26.866749.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-02-12T14-27-26.866749.parquet'
- config_name: harness_hendrycksTest_elementary_mathematics_5
data_files:
- split: 2024_02_12T14_27_26.866749
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-02-12T14-27-26.866749.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-02-12T14-27-26.866749.parquet'
- config_name: harness_hendrycksTest_formal_logic_5
data_files:
- split: 2024_02_12T14_27_26.866749
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2024-02-12T14-27-26.866749.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2024-02-12T14-27-26.866749.parquet'
- config_name: harness_hendrycksTest_global_facts_5
data_files:
- split: 2024_02_12T14_27_26.866749
path:
- '**/details_harness|hendrycksTest-global_facts|5_2024-02-12T14-27-26.866749.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-global_facts|5_2024-02-12T14-27-26.866749.parquet'
- config_name: harness_hendrycksTest_high_school_biology_5
data_files:
- split: 2024_02_12T14_27_26.866749
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-02-12T14-27-26.866749.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-02-12T14-27-26.866749.parquet'
- config_name: harness_hendrycksTest_high_school_chemistry_5
data_files:
- split: 2024_02_12T14_27_26.866749
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-02-12T14-27-26.866749.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-02-12T14-27-26.866749.parquet'
- config_name: harness_hendrycksTest_high_school_computer_science_5
data_files:
- split: 2024_02_12T14_27_26.866749
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-02-12T14-27-26.866749.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-02-12T14-27-26.866749.parquet'
- config_name: harness_hendrycksTest_high_school_european_history_5
data_files:
- split: 2024_02_12T14_27_26.866749
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-02-12T14-27-26.866749.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-02-12T14-27-26.866749.parquet'
- config_name: harness_hendrycksTest_high_school_geography_5
data_files:
- split: 2024_02_12T14_27_26.866749
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-02-12T14-27-26.866749.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-02-12T14-27-26.866749.parquet'
- config_name: harness_hendrycksTest_high_school_government_and_politics_5
data_files:
- split: 2024_02_12T14_27_26.866749
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-02-12T14-27-26.866749.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-02-12T14-27-26.866749.parquet'
- config_name: harness_hendrycksTest_high_school_macroeconomics_5
data_files:
- split: 2024_02_12T14_27_26.866749
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-02-12T14-27-26.866749.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-02-12T14-27-26.866749.parquet'
- config_name: harness_hendrycksTest_high_school_mathematics_5
data_files:
- split: 2024_02_12T14_27_26.866749
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-02-12T14-27-26.866749.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-02-12T14-27-26.866749.parquet'
- config_name: harness_hendrycksTest_high_school_microeconomics_5
data_files:
- split: 2024_02_12T14_27_26.866749
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-02-12T14-27-26.866749.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-02-12T14-27-26.866749.parquet'
- config_name: harness_hendrycksTest_high_school_physics_5
data_files:
- split: 2024_02_12T14_27_26.866749
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-02-12T14-27-26.866749.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-02-12T14-27-26.866749.parquet'
- config_name: harness_hendrycksTest_high_school_psychology_5
data_files:
- split: 2024_02_12T14_27_26.866749
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-02-12T14-27-26.866749.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-02-12T14-27-26.866749.parquet'
- config_name: harness_hendrycksTest_high_school_statistics_5
data_files:
- split: 2024_02_12T14_27_26.866749
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-02-12T14-27-26.866749.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-02-12T14-27-26.866749.parquet'
- config_name: harness_hendrycksTest_high_school_us_history_5
data_files:
- split: 2024_02_12T14_27_26.866749
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-02-12T14-27-26.866749.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-02-12T14-27-26.866749.parquet'
- config_name: harness_hendrycksTest_high_school_world_history_5
data_files:
- split: 2024_02_12T14_27_26.866749
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-02-12T14-27-26.866749.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-02-12T14-27-26.866749.parquet'
- config_name: harness_hendrycksTest_human_aging_5
data_files:
- split: 2024_02_12T14_27_26.866749
path:
- '**/details_harness|hendrycksTest-human_aging|5_2024-02-12T14-27-26.866749.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_aging|5_2024-02-12T14-27-26.866749.parquet'
- config_name: harness_hendrycksTest_human_sexuality_5
data_files:
- split: 2024_02_12T14_27_26.866749
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-02-12T14-27-26.866749.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-02-12T14-27-26.866749.parquet'
- config_name: harness_hendrycksTest_international_law_5
data_files:
- split: 2024_02_12T14_27_26.866749
path:
- '**/details_harness|hendrycksTest-international_law|5_2024-02-12T14-27-26.866749.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-international_law|5_2024-02-12T14-27-26.866749.parquet'
- config_name: harness_hendrycksTest_jurisprudence_5
data_files:
- split: 2024_02_12T14_27_26.866749
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-02-12T14-27-26.866749.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-02-12T14-27-26.866749.parquet'
- config_name: harness_hendrycksTest_logical_fallacies_5
data_files:
- split: 2024_02_12T14_27_26.866749
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-02-12T14-27-26.866749.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-02-12T14-27-26.866749.parquet'
- config_name: harness_hendrycksTest_machine_learning_5
data_files:
- split: 2024_02_12T14_27_26.866749
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2024-02-12T14-27-26.866749.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2024-02-12T14-27-26.866749.parquet'
- config_name: harness_hendrycksTest_management_5
data_files:
- split: 2024_02_12T14_27_26.866749
path:
- '**/details_harness|hendrycksTest-management|5_2024-02-12T14-27-26.866749.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-management|5_2024-02-12T14-27-26.866749.parquet'
- config_name: harness_hendrycksTest_marketing_5
data_files:
- split: 2024_02_12T14_27_26.866749
path:
- '**/details_harness|hendrycksTest-marketing|5_2024-02-12T14-27-26.866749.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-marketing|5_2024-02-12T14-27-26.866749.parquet'
- config_name: harness_hendrycksTest_medical_genetics_5
data_files:
- split: 2024_02_12T14_27_26.866749
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-02-12T14-27-26.866749.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-02-12T14-27-26.866749.parquet'
- config_name: harness_hendrycksTest_miscellaneous_5
data_files:
- split: 2024_02_12T14_27_26.866749
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-02-12T14-27-26.866749.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-02-12T14-27-26.866749.parquet'
- config_name: harness_hendrycksTest_moral_disputes_5
data_files:
- split: 2024_02_12T14_27_26.866749
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-02-12T14-27-26.866749.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-02-12T14-27-26.866749.parquet'
- config_name: harness_hendrycksTest_moral_scenarios_5
data_files:
- split: 2024_02_12T14_27_26.866749
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-02-12T14-27-26.866749.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-02-12T14-27-26.866749.parquet'
- config_name: harness_hendrycksTest_nutrition_5
data_files:
- split: 2024_02_12T14_27_26.866749
path:
- '**/details_harness|hendrycksTest-nutrition|5_2024-02-12T14-27-26.866749.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-nutrition|5_2024-02-12T14-27-26.866749.parquet'
- config_name: harness_hendrycksTest_philosophy_5
data_files:
- split: 2024_02_12T14_27_26.866749
path:
- '**/details_harness|hendrycksTest-philosophy|5_2024-02-12T14-27-26.866749.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-philosophy|5_2024-02-12T14-27-26.866749.parquet'
- config_name: harness_hendrycksTest_prehistory_5
data_files:
- split: 2024_02_12T14_27_26.866749
path:
- '**/details_harness|hendrycksTest-prehistory|5_2024-02-12T14-27-26.866749.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-prehistory|5_2024-02-12T14-27-26.866749.parquet'
- config_name: harness_hendrycksTest_professional_accounting_5
data_files:
- split: 2024_02_12T14_27_26.866749
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-02-12T14-27-26.866749.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-02-12T14-27-26.866749.parquet'
- config_name: harness_hendrycksTest_professional_law_5
data_files:
- split: 2024_02_12T14_27_26.866749
path:
- '**/details_harness|hendrycksTest-professional_law|5_2024-02-12T14-27-26.866749.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_law|5_2024-02-12T14-27-26.866749.parquet'
- config_name: harness_hendrycksTest_professional_medicine_5
data_files:
- split: 2024_02_12T14_27_26.866749
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-02-12T14-27-26.866749.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-02-12T14-27-26.866749.parquet'
- config_name: harness_hendrycksTest_professional_psychology_5
data_files:
- split: 2024_02_12T14_27_26.866749
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-02-12T14-27-26.866749.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-02-12T14-27-26.866749.parquet'
- config_name: harness_hendrycksTest_public_relations_5
data_files:
- split: 2024_02_12T14_27_26.866749
path:
- '**/details_harness|hendrycksTest-public_relations|5_2024-02-12T14-27-26.866749.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-public_relations|5_2024-02-12T14-27-26.866749.parquet'
- config_name: harness_hendrycksTest_security_studies_5
data_files:
- split: 2024_02_12T14_27_26.866749
path:
- '**/details_harness|hendrycksTest-security_studies|5_2024-02-12T14-27-26.866749.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-security_studies|5_2024-02-12T14-27-26.866749.parquet'
- config_name: harness_hendrycksTest_sociology_5
data_files:
- split: 2024_02_12T14_27_26.866749
path:
- '**/details_harness|hendrycksTest-sociology|5_2024-02-12T14-27-26.866749.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-sociology|5_2024-02-12T14-27-26.866749.parquet'
- config_name: harness_hendrycksTest_us_foreign_policy_5
data_files:
- split: 2024_02_12T14_27_26.866749
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-02-12T14-27-26.866749.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-02-12T14-27-26.866749.parquet'
- config_name: harness_hendrycksTest_virology_5
data_files:
- split: 2024_02_12T14_27_26.866749
path:
- '**/details_harness|hendrycksTest-virology|5_2024-02-12T14-27-26.866749.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-virology|5_2024-02-12T14-27-26.866749.parquet'
- config_name: harness_hendrycksTest_world_religions_5
data_files:
- split: 2024_02_12T14_27_26.866749
path:
- '**/details_harness|hendrycksTest-world_religions|5_2024-02-12T14-27-26.866749.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-world_religions|5_2024-02-12T14-27-26.866749.parquet'
- config_name: harness_truthfulqa_mc_0
data_files:
- split: 2024_02_12T14_27_26.866749
path:
- '**/details_harness|truthfulqa:mc|0_2024-02-12T14-27-26.866749.parquet'
- split: latest
path:
- '**/details_harness|truthfulqa:mc|0_2024-02-12T14-27-26.866749.parquet'
- config_name: harness_winogrande_5
data_files:
- split: 2024_02_12T14_27_26.866749
path:
- '**/details_harness|winogrande|5_2024-02-12T14-27-26.866749.parquet'
- split: latest
path:
- '**/details_harness|winogrande|5_2024-02-12T14-27-26.866749.parquet'
- config_name: results
data_files:
- split: 2024_02_12T14_27_26.866749
path:
- results_2024-02-12T14-27-26.866749.parquet
- split: latest
path:
- results_2024-02-12T14-27-26.866749.parquet
---
# Dataset Card for Evaluation run of Eurdem/Megatron-Mx
<!-- Provide a quick summary of the dataset. -->
Dataset automatically created during the evaluation run of model [Eurdem/Megatron-Mx](https://huggingface.co/Eurdem/Megatron-Mx) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "latest" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_Eurdem__Megatron-Mx",
	"harness_winogrande_5",
	split="latest")
```
## Latest results
These are the [latest results from run 2024-02-12T14:27:26.866749](https://huggingface.co/datasets/open-llm-leaderboard/details_Eurdem__Megatron-Mx/blob/main/results_2024-02-12T14-27-26.866749.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"acc": 0.6226080527190941,
"acc_stderr": 0.03275260029756631,
"acc_norm": 0.625474906418769,
"acc_norm_stderr": 0.033408847072192514,
"mc1": 0.423500611995104,
"mc1_stderr": 0.01729742144853473,
"mc2": 0.5995420715043286,
"mc2_stderr": 0.015268251321394645
},
"harness|arc:challenge|25": {
"acc": 0.6245733788395904,
"acc_stderr": 0.014150631435111725,
"acc_norm": 0.6689419795221843,
"acc_norm_stderr": 0.013752062419817832
},
"harness|hellaswag|10": {
"acc": 0.6502688707428799,
"acc_stderr": 0.004759103432380764,
"acc_norm": 0.8498307110137423,
"acc_norm_stderr": 0.0035650718701954473
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.33,
"acc_stderr": 0.04725815626252606,
"acc_norm": 0.33,
"acc_norm_stderr": 0.04725815626252606
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.5925925925925926,
"acc_stderr": 0.04244633238353228,
"acc_norm": 0.5925925925925926,
"acc_norm_stderr": 0.04244633238353228
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.7171052631578947,
"acc_stderr": 0.03665349695640767,
"acc_norm": 0.7171052631578947,
"acc_norm_stderr": 0.03665349695640767
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.61,
"acc_stderr": 0.04902071300001975,
"acc_norm": 0.61,
"acc_norm_stderr": 0.04902071300001975
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.7018867924528301,
"acc_stderr": 0.02815283794249387,
"acc_norm": 0.7018867924528301,
"acc_norm_stderr": 0.02815283794249387
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.7222222222222222,
"acc_stderr": 0.03745554791462456,
"acc_norm": 0.7222222222222222,
"acc_norm_stderr": 0.03745554791462456
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.43,
"acc_stderr": 0.04975698519562428,
"acc_norm": 0.43,
"acc_norm_stderr": 0.04975698519562428
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.53,
"acc_stderr": 0.05016135580465919,
"acc_norm": 0.53,
"acc_norm_stderr": 0.05016135580465919
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.3,
"acc_stderr": 0.046056618647183814,
"acc_norm": 0.3,
"acc_norm_stderr": 0.046056618647183814
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.6069364161849711,
"acc_stderr": 0.03724249595817731,
"acc_norm": 0.6069364161849711,
"acc_norm_stderr": 0.03724249595817731
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.3627450980392157,
"acc_stderr": 0.04784060704105653,
"acc_norm": 0.3627450980392157,
"acc_norm_stderr": 0.04784060704105653
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.76,
"acc_stderr": 0.04292346959909283,
"acc_norm": 0.76,
"acc_norm_stderr": 0.04292346959909283
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.5361702127659574,
"acc_stderr": 0.032600385118357715,
"acc_norm": 0.5361702127659574,
"acc_norm_stderr": 0.032600385118357715
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.4298245614035088,
"acc_stderr": 0.04657047260594963,
"acc_norm": 0.4298245614035088,
"acc_norm_stderr": 0.04657047260594963
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.5793103448275863,
"acc_stderr": 0.0411391498118926,
"acc_norm": 0.5793103448275863,
"acc_norm_stderr": 0.0411391498118926
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.4312169312169312,
"acc_stderr": 0.025506481698138208,
"acc_norm": 0.4312169312169312,
"acc_norm_stderr": 0.025506481698138208
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.4523809523809524,
"acc_stderr": 0.044518079590553275,
"acc_norm": 0.4523809523809524,
"acc_norm_stderr": 0.044518079590553275
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.36,
"acc_stderr": 0.04824181513244218,
"acc_norm": 0.36,
"acc_norm_stderr": 0.04824181513244218
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.5774193548387097,
"acc_stderr": 0.02810096472427264,
"acc_norm": 0.5774193548387097,
"acc_norm_stderr": 0.02810096472427264
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.458128078817734,
"acc_stderr": 0.03505630140785741,
"acc_norm": 0.458128078817734,
"acc_norm_stderr": 0.03505630140785741
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.65,
"acc_stderr": 0.04793724854411018,
"acc_norm": 0.65,
"acc_norm_stderr": 0.04793724854411018
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.7575757575757576,
"acc_stderr": 0.03346409881055953,
"acc_norm": 0.7575757575757576,
"acc_norm_stderr": 0.03346409881055953
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.7727272727272727,
"acc_stderr": 0.029857515673386424,
"acc_norm": 0.7727272727272727,
"acc_norm_stderr": 0.029857515673386424
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.8341968911917098,
"acc_stderr": 0.026839845022314415,
"acc_norm": 0.8341968911917098,
"acc_norm_stderr": 0.026839845022314415
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.5974358974358974,
"acc_stderr": 0.02486499515976775,
"acc_norm": 0.5974358974358974,
"acc_norm_stderr": 0.02486499515976775
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.3148148148148148,
"acc_stderr": 0.02831753349606648,
"acc_norm": 0.3148148148148148,
"acc_norm_stderr": 0.02831753349606648
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.6554621848739496,
"acc_stderr": 0.03086868260412162,
"acc_norm": 0.6554621848739496,
"acc_norm_stderr": 0.03086868260412162
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.33112582781456956,
"acc_stderr": 0.038425817186598696,
"acc_norm": 0.33112582781456956,
"acc_norm_stderr": 0.038425817186598696
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.8128440366972477,
"acc_stderr": 0.016722684526200148,
"acc_norm": 0.8128440366972477,
"acc_norm_stderr": 0.016722684526200148
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.4722222222222222,
"acc_stderr": 0.0340470532865388,
"acc_norm": 0.4722222222222222,
"acc_norm_stderr": 0.0340470532865388
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.7843137254901961,
"acc_stderr": 0.028867431449849303,
"acc_norm": 0.7843137254901961,
"acc_norm_stderr": 0.028867431449849303
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.8059071729957806,
"acc_stderr": 0.025744902532290916,
"acc_norm": 0.8059071729957806,
"acc_norm_stderr": 0.025744902532290916
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.6681614349775785,
"acc_stderr": 0.03160295143776679,
"acc_norm": 0.6681614349775785,
"acc_norm_stderr": 0.03160295143776679
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.7862595419847328,
"acc_stderr": 0.0359546161177469,
"acc_norm": 0.7862595419847328,
"acc_norm_stderr": 0.0359546161177469
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.8181818181818182,
"acc_stderr": 0.03520893951097652,
"acc_norm": 0.8181818181818182,
"acc_norm_stderr": 0.03520893951097652
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.7685185185185185,
"acc_stderr": 0.04077494709252627,
"acc_norm": 0.7685185185185185,
"acc_norm_stderr": 0.04077494709252627
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.7361963190184049,
"acc_stderr": 0.034624199316156234,
"acc_norm": 0.7361963190184049,
"acc_norm_stderr": 0.034624199316156234
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.5089285714285714,
"acc_stderr": 0.04745033255489123,
"acc_norm": 0.5089285714285714,
"acc_norm_stderr": 0.04745033255489123
},
"harness|hendrycksTest-management|5": {
"acc": 0.7572815533980582,
"acc_stderr": 0.04245022486384495,
"acc_norm": 0.7572815533980582,
"acc_norm_stderr": 0.04245022486384495
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.8632478632478633,
"acc_stderr": 0.022509033937077802,
"acc_norm": 0.8632478632478633,
"acc_norm_stderr": 0.022509033937077802
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.72,
"acc_stderr": 0.04512608598542128,
"acc_norm": 0.72,
"acc_norm_stderr": 0.04512608598542128
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.8148148148148148,
"acc_stderr": 0.013890862162876166,
"acc_norm": 0.8148148148148148,
"acc_norm_stderr": 0.013890862162876166
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.6936416184971098,
"acc_stderr": 0.024818350129436593,
"acc_norm": 0.6936416184971098,
"acc_norm_stderr": 0.024818350129436593
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.4,
"acc_stderr": 0.01638463841038082,
"acc_norm": 0.4,
"acc_norm_stderr": 0.01638463841038082
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.7091503267973857,
"acc_stderr": 0.02600480036395213,
"acc_norm": 0.7091503267973857,
"acc_norm_stderr": 0.02600480036395213
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.707395498392283,
"acc_stderr": 0.02583989833487798,
"acc_norm": 0.707395498392283,
"acc_norm_stderr": 0.02583989833487798
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.7160493827160493,
"acc_stderr": 0.025089478523765137,
"acc_norm": 0.7160493827160493,
"acc_norm_stderr": 0.025089478523765137
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.475177304964539,
"acc_stderr": 0.029790719243829727,
"acc_norm": 0.475177304964539,
"acc_norm_stderr": 0.029790719243829727
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.4576271186440678,
"acc_stderr": 0.012724296550980188,
"acc_norm": 0.4576271186440678,
"acc_norm_stderr": 0.012724296550980188
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.6470588235294118,
"acc_stderr": 0.0290294228156814,
"acc_norm": 0.6470588235294118,
"acc_norm_stderr": 0.0290294228156814
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.6601307189542484,
"acc_stderr": 0.019162418588623553,
"acc_norm": 0.6601307189542484,
"acc_norm_stderr": 0.019162418588623553
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.6818181818181818,
"acc_stderr": 0.044612721759105085,
"acc_norm": 0.6818181818181818,
"acc_norm_stderr": 0.044612721759105085
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.7346938775510204,
"acc_stderr": 0.02826388994378459,
"acc_norm": 0.7346938775510204,
"acc_norm_stderr": 0.02826388994378459
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.6169154228855721,
"acc_stderr": 0.03437519337338251,
"acc_norm": 0.6169154228855721,
"acc_norm_stderr": 0.03437519337338251
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.83,
"acc_stderr": 0.03775251680686371,
"acc_norm": 0.83,
"acc_norm_stderr": 0.03775251680686371
},
"harness|hendrycksTest-virology|5": {
"acc": 0.5060240963855421,
"acc_stderr": 0.03892212195333045,
"acc_norm": 0.5060240963855421,
"acc_norm_stderr": 0.03892212195333045
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.8304093567251462,
"acc_stderr": 0.02878210810540171,
"acc_norm": 0.8304093567251462,
"acc_norm_stderr": 0.02878210810540171
},
"harness|truthfulqa:mc|0": {
"mc1": 0.423500611995104,
"mc1_stderr": 0.01729742144853473,
"mc2": 0.5995420715043286,
"mc2_stderr": 0.015268251321394645
},
"harness|winogrande|5": {
"acc": 0.7900552486187845,
"acc_stderr": 0.01144628062926263
},
"harness|gsm8k|5": {
"acc": 0.5299469294920395,
"acc_stderr": 0.013747759685444704
}
}
```
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] |
karmiq/wikipedia-embeddings-cs-e5-large-instruct | ---
dataset_info:
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: chunks
sequence: string
- name: embeddings
sequence:
sequence: float32
splits:
- name: train
num_bytes: 6161159498
num_examples: 534044
download_size: 5891284588
dataset_size: 6161159498
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
language:
- cs
size_categories:
- 100K<n<1M
task_categories:
- text-generation
- fill-mask
license:
- cc-by-sa-3.0
- gfdl
---
This dataset contains the Czech subset of the [`wikimedia/wikipedia`](https://huggingface.co/datasets/wikimedia/wikipedia) dataset. Each page is divided into paragraphs, stored as a list in the `chunks` column. For every paragraph, embeddings are created using the [`intfloat/multilingual-e5-large-instruct`](https://huggingface.co/intfloat/multilingual-e5-large-instruct) model.
## Usage
Load the dataset:
```python
from datasets import load_dataset
ds = load_dataset("karmiq/wikipedia-embeddings-cs-e5-large-instruct", split="train")
ds[1]
```
```
{
'id': '1',
'url': 'https://cs.wikipedia.org/wiki/Astronomie',
'title': 'Astronomie',
'chunks': [
'Astronomie, řecky αστρονομία z άστρον ( astron ) hvězda a νόμος ( nomos )...',
'Myšlenky Aristotelovy rozvinul ve 2. století našeho letopočtu Klaudios Ptolemaios...',
...,
],
'embeddings': tensor([
[ 0.0393, 0.0144, 0.0008, ..., -0.0149, -0.0126, 0.0271],
[ 0.0403, 0.0316, -0.0101, ..., -0.0063, -0.0172, 0.0137],
... ])
}
```
The structure makes it easy to use the dataset for implementing semantic search.
<details>
<summary>Load the data in Elasticsearch</summary>
```python
from tqdm import tqdm
from elasticsearch import Elasticsearch
from elasticsearch.helpers import parallel_bulk

es = Elasticsearch("http://localhost:9200")  # point this at your cluster

def doc_generator(data, batch_size=1000):
    for batch in data.with_format("numpy").iter(batch_size):
        for i, id in enumerate(batch["id"]):
            output = {"id": id}
            output["title"] = batch["title"][i]
            output["url"] = batch["url"][i]
            output["parts"] = [
                { "chunk": chunk, "embedding": embedding }
                for chunk, embedding in zip(batch["chunks"][i], batch["embeddings"][i])
            ]
            yield output

num_indexed, num_failed = 0, 0
progress = tqdm(total=ds.num_rows, unit="doc", desc="Indexing")
for ok, info in parallel_bulk(
    es,
    index="wikipedia-search",
    actions=doc_generator(ds),
    raise_on_error=False,
):
    if ok:
        num_indexed += 1
    else:
        num_failed += 1
        print(f"ERROR {info['index']['status']}: {info['index']['error']}")
    progress.update(1)
```
</details>
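<details>
<summary>Query the Elasticsearch index with kNN</summary>

Once indexed, a nested kNN query can retrieve the best-matching chunks. The helper below is only a sketch: it assumes a mapping in which `parts` is a `nested` field with a `dense_vector` sub-field `parts.embedding` (kNN search over nested vectors requires a recent Elasticsearch 8.x release).

```python
def knn_clause(query_vector, k=10, num_candidates=100):
    """Build the `knn` clause for a nested-vector search.

    Assumption: the index maps `parts` as `nested` and
    `parts.embedding` as `dense_vector`.
    """
    return {
        "field": "parts.embedding",
        "query_vector": [float(v) for v in query_vector],
        "k": k,
        "num_candidates": num_candidates,
        # Return the single best-matching chunk per document.
        "inner_hits": {"_source": ["parts.chunk"], "size": 1},
    }
```

A query might then look like `es.search(index="wikipedia-search", knn=knn_clause(model.encode("query: ...")), source=["title", "url"])`.
</details>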
<details>
<summary>Use <code>sentence_transformers.util.semantic_search</code></summary>
```python
import os
import textwrap
import sentence_transformers
model = sentence_transformers.SentenceTransformer("intfloat/multilingual-e5-large-instruct")
ds.set_format(type="torch", columns=["embeddings"], output_all_columns=True)
# Flatten the dataset
def explode_sequence(batch):
output = { "id": [], "url": [], "title": [], "chunk": [], "embedding": [] }
for id, url, title, chunks, embeddings in zip(
batch["id"], batch["url"], batch["title"], batch["chunks"], batch["embeddings"]
):
output["id"].extend([id for _ in range(len(chunks))])
output["url"].extend([url for _ in range(len(chunks))])
output["title"].extend([title for _ in range(len(chunks))])
output["chunk"].extend(chunks)
output["embedding"].extend(embeddings)
return output
ds_flat = ds.map(
explode_sequence,
batched=True,
remove_columns=ds.column_names,
num_proc=min(os.cpu_count(), 32),
desc="Flatten")
ds_flat
query = "query: Čím se zabývá fyzika?"
hits = sentence_transformers.util.semantic_search(
query_embeddings=model.encode(query),
corpus_embeddings=ds_flat["embedding"],
top_k=10)
for hit in hits[0]:
title = ds_flat[hit['corpus_id']]['title']
chunk = ds_flat[hit['corpus_id']]['chunk']
print(f"[{hit['score']:0.2f}] {textwrap.shorten(chunk, width=100, placeholder='…')} [{title}]")
# [0.86] Věda [Výzkum]
# [0.86] Kvantová fyzika Fyzika částic [Elektroslabá interakce]
# ...
```
</details>
Generating the embeddings took about 7 hours on an NVIDIA A100 80GB GPU.
## License
See license of the original dataset: <https://huggingface.co/datasets/wikimedia/wikipedia>.
|
autoevaluate/autoeval-staging-eval-project-200453bd-7694963 | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- masakhaner
eval_info:
task: entity_extraction
model: mbeukman/xlm-roberta-base-finetuned-luo-finetuned-ner-swahili
metrics: []
dataset_name: masakhaner
dataset_config: swa
dataset_split: test
col_mapping:
tokens: tokens
tags: ner_tags
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Token Classification
* Model: mbeukman/xlm-roberta-base-finetuned-luo-finetuned-ner-swahili
* Dataset: masakhaner
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model. |
Dippi9845/Tesi | ---
license: bsd-3-clause-clear
---
|
webis/tldr-17 | ---
annotations_creators:
- no-annotation
language_creators:
- crowdsourced
language:
- en
license:
- cc-by-4.0
multilinguality:
- monolingual
paperswithcode_id: webis-tldr-17-corpus
pretty_name: Reddit Webis-TLDR-17
size_categories:
- 1M<n<10M
source_datasets:
- original
task_categories:
- summarization
task_ids: []
train-eval-index:
- config: default
task: summarization
task_id: summarization
splits:
train_split: train
col_mapping:
content: text
summary: target
metrics:
- type: rouge
name: Rouge
tags:
- reddit-posts-summarization
dataset_info:
features:
- name: author
dtype: string
- name: body
dtype: string
- name: normalizedBody
dtype: string
- name: subreddit
dtype: string
- name: subreddit_id
dtype: string
- name: id
dtype: string
- name: content
dtype: string
- name: summary
dtype: string
splits:
- name: train
num_bytes: 18940542951
num_examples: 3848330
download_size: 3141854161
dataset_size: 18940542951
---
# Dataset Card for Reddit Webis-TLDR-17
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://webis.de/data/webis-tldr-17.html](https://webis.de/data/webis-tldr-17.html)
- **Repository:** [https://github.com/webis-de/webis-tldr-17-corpus](https://github.com/webis-de/webis-tldr-17-corpus)
- **Paper:** [https://aclanthology.org/W17-4508](https://aclanthology.org/W17-4508)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of downloaded dataset files:** 3.14 GB
- **Size of the generated dataset:** 18.94 GB
- **Total amount of disk used:** 22.08 GB
### Dataset Summary
This corpus contains preprocessed posts from the Reddit dataset (Webis-TLDR-17).
The dataset consists of 3,848,330 posts with an average length of 270 words for content,
and 28 words for the summary.
Features include the strings: author, body, normalizedBody, content, summary, subreddit, subreddit_id.
The content field is used as the document and the summary field as the summary.
### Supported Tasks and Leaderboards
Summarization (abstractive)
Known ROUGE scores achieved for the Webis-TLDR-17:
| Model | ROUGE-1 | ROUGE-2 | ROUGE-L | Paper/Source |
|-------|-------|-------|-------|------:|
| Transformer + Copy (Gehrmann et al., 2019) | 22 | 6 | 17 | Generating Summaries with Finetuned Language Models |
| Unified VAE + PGN (Choi et al., 2019) | 19 | 4 | 15 | VAE-PGN based Abstractive Model in Multi-stage Architecture for Text Summarization |
(Source: https://github.com/sebastianruder/NLP-progress/blob/master/english/summarization.md)
### Languages
English
## Dataset Structure
### Data Instances
#### default
- **Size of downloaded dataset files:** 3.14 GB
- **Size of the generated dataset:** 18.94 GB
- **Total amount of disk used:** 22.08 GB
An example of 'train' looks as follows.
```
{
"author": "me",
"body": "<>",
"content": "input document.",
"id": "1",
"normalizedBody": "",
"subreddit": "machinelearning",
"subreddit_id": "2",
"summary": "output summary."
}
```
### Data Fields
The data fields are the same among all splits.
#### default
- `author`: a `string` feature.
- `body`: a `string` feature.
- `normalizedBody`: a `string` feature.
- `subreddit`: a `string` feature.
- `subreddit_id`: a `string` feature.
- `id`: a `string` feature.
- `content`: a `string` feature.
- `summary`: a `string` feature.
### Data Splits
| name | train |
|-------|------:|
|default|3848330|
This corpus does not contain a separate test set. Thus it is up to the users to divide the corpus into appropriate training, validation and test sets.
## Dataset Creation
### Curation Rationale
In the scope of abstractive summarization, the creators of Webis-TLDR-17 propose mining social media for author-provided summaries, taking advantage of the common practice of appending a "TL;DR" to long posts. A large Reddit crawl was used to yield the Webis-TLDR-17 corpus. This dataset is intended to complement existing summarization corpora, which come primarily from the news genre.
### Source Data
Reddit subreddits posts (submissions & comments) containing "TL;DR" from 2006 to 2016. Multiple subreddits are included.
#### Initial Data Collection and Normalization
Initial data: a set of 286 million submissions and 1.6 billion comments posted to Reddit between 2006 and 2016.
Then a five-step pipeline of consecutive filtering steps was applied.
#### Who are the source language producers?
The contents of the dataset are produced by human authors, bot-generated content was eliminated by filtering out all bot accounts with the help of an extensive list provided by the Reddit community, as well as manual inspection of cases where the user name contained the substring "bot."
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
This dataset has been created to serve as a source of large-scale summarization training data. It is primarily geared towards the automatic abstractive summarization task, that can be considered one of the most challenging variants of automatic summarization. It also aims to tackle the lack of genre diversity in the summarization datasets (most are news-related).
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
Reddit users write TL;DRs with various intentions, such as providing a “true” summary, asking questions or for help, or forming judgments and conclusions. As noted in the paper introducing the dataset, the first kind of TL;DR post is the most important for training summarization models, yet the latter kinds allow for various alternative summarization-related tasks.
Although filtering was performed, abusive language may still be present.
## Additional Information
### Dataset Curators
Michael Völske, Martin Potthast, Shahbaz Syed, Benno Stein
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@inproceedings{volske-etal-2017-tl,
title = "{TL};{DR}: Mining {R}eddit to Learn Automatic Summarization",
    author = {V{\"o}lske, Michael and
Potthast, Martin and
Syed, Shahbaz and
Stein, Benno},
booktitle = "Proceedings of the Workshop on New Frontiers in Summarization",
month = sep,
year = "2017",
address = "Copenhagen, Denmark",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/W17-4508",
doi = "10.18653/v1/W17-4508",
pages = "59--63",
abstract = "Recent advances in automatic text summarization have used deep neural networks to generate high-quality abstractive summaries, but the performance of these models strongly depends on large amounts of suitable training data. We propose a new method for mining social media for author-provided summaries, taking advantage of the common practice of appending a {``}TL;DR{''} to long posts. A case study using a large Reddit crawl yields the Webis-TLDR-17 dataset, complementing existing corpora primarily from the news genre. Our technique is likely applicable to other social media sites and general web crawls.",
}
```
### Contributions
Thanks to [@mariamabarham](https://github.com/mariamabarham), [@patrickvonplaten](https://github.com/patrickvonplaten), [@thomwolf](https://github.com/thomwolf) for adding this dataset. |
wu981526092/MGSD | ---
license: mit
task_categories:
- text-classification
- token-classification
language:
- en
size_categories:
- 10K<n<100K
---
# MULTI-GRAIN STEREOTYPE DATASET (MGSD)
The MULTI-GRAIN STEREOTYPE DATASET (MGSD) is a comprehensive dataset designed for the research and analysis of stereotypes in natural language processing. It provides granular annotations at both the sentence and token levels, enabling various studies and applications in stereotype detection.
## Dataset Structure
The dataset contains the following columns:
- **text_with_marker**: Contains the original text with markers (`===`) highlighting potential stereotype tokens.
- **text_no_marker**: The text without any markers, suitable for models that operate at the sentence level.
- **label**: Indicates if the sentence is a stereotype, anti-stereotype, or unrelated.
- **stereotype_type**: Describes the type of stereotype, e.g., race, gender, profession.
- **binary_class**: A binary classification of the stereotype, e.g., stereotype_race, unrelated.
- **multi_class**: A multi-class classification label, e.g., stereotype_race, stereotype_gender.
- **original_dataset**: Source of the data.
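The `===` markers in `text_with_marker` can be converted into character-level spans for token-level training. A minimal sketch, assuming each highlighted token is wrapped as `===token===` (the exact marker convention should be verified against the data):

```python
import re

def extract_marked_spans(text_with_marker):
    """Return (clean_text, spans), where spans are (start, end) character
    offsets of the tokens that were wrapped in === markers."""
    spans = []
    clean_parts = []
    pos = 0
    for m in re.finditer(r"===(.*?)===", text_with_marker):
        clean_parts.append(text_with_marker[pos:m.start()])
        start = sum(len(p) for p in clean_parts)
        clean_parts.append(m.group(1))
        spans.append((start, start + len(m.group(1))))
        pos = m.end()
    clean_parts.append(text_with_marker[pos:])
    return "".join(clean_parts), spans
```

The clean text should then match the `text_no_marker` column, with `spans` giving the marked-token offsets.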
## Usage
This dataset can be used to train models for various tasks:
1. **Sentence-level Stereotype Detection**: Using the `text_no_marker` column as input and `binary_class` or `multi_class` as the target.
2. **Token-level Stereotype Detection**: Using the `text_with_marker` column to identify the positions of stereotype tokens in the sentence. |
madaanpulkit/opus_eng_hin_pan | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
dataset_info:
features:
- name: sent
dtype: string
- name: lang
dtype: string
- name: label
dtype: int64
splits:
- name: train
num_bytes: 159097285
num_examples: 1283230
- name: validation
num_bytes: 770267
num_examples: 8000
- name: test
num_bytes: 790471
num_examples: 8000
download_size: 71739889
dataset_size: 160658023
---
# Dataset Card for "opus_eng_hin_pan"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
vaishakgkumar/Covid | ---
license: mit
---
|
jlbaker361/sitcoms-lite_captioned-augmented | ---
dataset_info:
features:
- name: image
dtype: image
- name: src
dtype: string
- name: split
dtype: string
- name: id
dtype: int64
- name: caption
dtype: string
splits:
- name: train
num_bytes: 423306677.375
num_examples: 1669
download_size: 423148766
dataset_size: 423306677.375
---
# Dataset Card for "sitcoms-lite_captioned-augmented"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Lask8/gradio-lipsync-wav2lip | ---
title: Gradio Lipsync Wav2lip
emoji: 👄
colorFrom: indigo
colorTo: blue
sdk: gradio
python_version: 3.8
sdk_version: 3.40.1
suggested_hardware: "t4-medium"
app_file: app.py
pinned: false
license: apache-2.0
---
Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference |
el2e10/aya-parapharse-punjabi | ---
language:
- pa
license: cc
size_categories:
- n<1K
source_datasets:
- extended|ai4bharat/IndicXParaphrase
task_categories:
- text-generation
pretty_name: Aya Paraphrase Punjabi
dataset_info:
features:
- name: inputs
dtype: string
- name: targets
dtype: string
- name: template_lang
dtype: string
- name: template_id
dtype: int64
splits:
- name: train
num_bytes: 629535
num_examples: 1001
download_size: 230066
dataset_size: 629535
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
### Description
This dataset is derived from an existing dataset by AI4Bharat. We used the [IndicXParaphrase](https://huggingface.co/datasets/ai4bharat/IndicXParaphrase) dataset to create this instruction-style dataset.
We used the Punjabi split of the above-mentioned dataset to create this one. It was created as part of the [Aya Open Science Initiative](https://sites.google.com/cohere.com/aya-en/home) from Cohere For AI.
IndicXParaphrase is a multilingual, n-way parallel dataset for paraphrase detection in 10 Indic languages. The original dataset (IndicXParaphrase) was made available under the CC-0 license.
### Template
The following templates (Punjabi) were used for converting the original dataset:
```
#Template 1
prompt:
ਵੱਖ-ਵੱਖ ਸ਼ਬਦਾਂ ਦੀ ਵਰਤੋਂ ਕਰਕੇ ਹੇਠਾਂ ਦਿੱਤੇ ਵਾਕ ਨੂੰ ਲਿਖੋ: "{original_sentence}"
completion:
{paraphrased_sentence}
```
```
#Template 2
prompt:
ਨਿਮਨਲਿਖਤ ਵਾਕ ਨੂੰ ਵੱਖਰੇ ਤਰੀਕੇ ਨਾਲ ਦੁਬਾਰਾ ਲਿਖੋ: "{original_sentence}"
completion:
{paraphrased_sentence}
```
```
#Template 3
prompt:
ਹੇਠਾਂ ਦਿੱਤੇ ਵਾਕ ਨੂੰ ਸਮਝਾਓ: "{original_sentence}"
completion:
{paraphrased_sentence}
```
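For illustration, a minimal sketch of how these templates might be applied to a sentence pair. The function and argument names are hypothetical; only the `inputs`, `targets`, `template_lang`, and `template_id` fields come from the dataset schema above:

```python
# The three Punjabi templates listed above, with a placeholder
# for the original sentence.
templates = [
    'ਵੱਖ-ਵੱਖ ਸ਼ਬਦਾਂ ਦੀ ਵਰਤੋਂ ਕਰਕੇ ਹੇਠਾਂ ਦਿੱਤੇ ਵਾਕ ਨੂੰ ਲਿਖੋ: "{original_sentence}"',
    'ਨਿਮਨਲਿਖਤ ਵਾਕ ਨੂੰ ਵੱਖਰੇ ਤਰੀਕੇ ਨਾਲ ਦੁਬਾਰਾ ਲਿਖੋ: "{original_sentence}"',
    'ਹੇਠਾਂ ਦਿੱਤੇ ਵਾਕ ਨੂੰ ਸਮਝਾਓ: "{original_sentence}"',
]

def to_instruction(original_sentence, paraphrased_sentence, template_id):
    """Render one (inputs, targets) pair using a 1-indexed template id."""
    return {
        "inputs": templates[template_id - 1].format(
            original_sentence=original_sentence
        ),
        "targets": paraphrased_sentence,
        "template_lang": "pa",  # assumption: BCP-47-style language tag
        "template_id": template_id,
    }
```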
### Acknowledgement
Thank you, Amarjit, for helping with the preparation of this dataset by providing the Punjabi translation of the above-mentioned English prompts. |
AdapterOcean/med_alpaca_standardized_cluster_65_alpaca | ---
dataset_info:
features:
- name: input
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 958015
num_examples: 2452
download_size: 212755
dataset_size: 958015
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "med_alpaca_standardized_cluster_65_alpaca"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
liyucheng/mmlu_mini | ---
dataset_info:
features:
- name: input
dtype: string
- name: A
dtype: string
- name: B
dtype: string
- name: C
dtype: string
- name: D
dtype: string
- name: target
dtype: string
- name: task
dtype: string
splits:
- name: val
num_bytes: 494633.0905282202
num_examples: 1000
- name: test
num_bytes: 489506.01082613575
num_examples: 1000
- name: train
num_bytes: 435903.50877192983
num_examples: 1000
download_size: 587231
dataset_size: 1420042.6101262858
---
# Dataset Card for "mmlu_mini"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
AdapterOcean/med_alpaca_standardized_cluster_1_alpaca | ---
dataset_info:
features:
- name: input
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 26006437
num_examples: 14404
download_size: 13974289
dataset_size: 26006437
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "med_alpaca_standardized_cluster_1_alpaca"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
results-sd-v1-5-sd-v2-1-if-v1-0-karlo/1e527f1e | ---
dataset_info:
features:
- name: result
dtype: string
- name: id
dtype: int64
splits:
- name: train
num_bytes: 180
num_examples: 10
download_size: 1331
dataset_size: 180
---
# Dataset Card for "1e527f1e"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
k1r33v/ru_fragmented | ---
license: apache-2.0
---
|
stoddur/med_chat_13 | ---
dataset_info:
features:
- name: input_ids
sequence: int32
- name: labels
sequence: int64
splits:
- name: train
num_bytes: 546730800.0
num_examples: 177510
download_size: 8201421
dataset_size: 546730800.0
---
# Dataset Card for "med_chat_13"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
huggingnft/mini-mutants | ---
tags:
- huggingnft
- nft
- huggan
- gan
- image
- images
task:
- unconditional-image-generation
datasets:
- huggingnft/mini-mutants
license: mit
---
# Dataset Card
## Disclaimer
All rights belong to their owners.
Models and datasets can be removed from the site at the request of the copyright holder.
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [How to use](#how-to-use)
- [Dataset Structure](#dataset-structure)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [About](#about)
## Dataset Description
- **Homepage:** [https://github.com/AlekseyKorshuk/huggingnft](https://github.com/AlekseyKorshuk/huggingnft)
- **Repository:** [https://github.com/AlekseyKorshuk/huggingnft](https://github.com/AlekseyKorshuk/huggingnft)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Dataset Summary
NFT images dataset for unconditional generation.
NFT collection available [here](https://opensea.io/collection/mini-mutants).
Model is available [here](https://huggingface.co/huggingnft/mini-mutants).
Check Space: [link](https://huggingface.co/spaces/AlekseyKorshuk/huggingnft).
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## How to use
How to load this dataset directly with the datasets library:
```python
from datasets import load_dataset
dataset = load_dataset("huggingnft/mini-mutants")
```
## Dataset Structure
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Data Fields
The data fields are the same among all splits.
- `image`: an `image` feature.
- `id`: an `int` feature.
- `token_metadata`: a `str` feature.
- `image_original_url`: a `str` feature.
### Data Splits
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@InProceedings{huggingnft,
author={Aleksey Korshuk}
year=2022
}
```
## About
*Built by Aleksey Korshuk*
[](https://github.com/AlekseyKorshuk)
[](https://twitter.com/intent/follow?screen_name=alekseykorshuk)
[](https://t.me/joinchat/_CQ04KjcJ-4yZTky)
For more details, visit the project repository.
[](https://github.com/AlekseyKorshuk/huggingnft)
|
realzdlegend/autotrain-data-mnist-analysis | ---
task_categories:
- image-classification
---
# AutoTrain Dataset for project: mnist-analysis
## Dataset Description
This dataset has been automatically processed by AutoTrain for project mnist-analysis.
### Languages
The BCP-47 code for the dataset's language is unk.
## Dataset Structure
### Data Instances
A sample from this dataset looks as follows:
```json
[
{
"image": "<28x28 L PIL image>",
"target": 1
},
{
"image": "<28x28 L PIL image>",
"target": 0
}
]
```
### Dataset Fields
The dataset has the following fields (also called "features"):
```json
{
"image": "Image(decode=True, id=None)",
"target": "ClassLabel(names=['0', '1', '2', '3', '4', '5', '6', '7', '8', '9'], id=None)"
}
```
### Dataset Splits
This dataset is split into a train and validation split. The split sizes are as follows:
| Split name | Num samples |
| ------------ | ------------------- |
| train | 55996 |
| valid | 14004 |
|
Besteasy/MMCU | ---
license: cc-by-nc-sa-4.0
---
# MMCU
This is the code repository for the paper Measuring Massive Multitask Chinese Understanding [https://arxiv.org/abs/2304.12986](https://arxiv.org/ftp/arxiv/papers/2304/2304.12986.pdf)
Please download the dataset at https://huggingface.co/datasets/Besteasy/MMCU, <br>
or send us an email to apply for a free dataset download: order@besteasy.com <br>
You may need to state your identity (professor, college student, NLP researcher/engineer, etc.)<br>
For academic exchanges, please contact me at felix.zeng@besteasy.com
## Important Notice
How to obtain the dataset: download the files directly from https://huggingface.co/datasets/Besteasy/MMCU. <br>
You can also apply by emailing order@besteasy.com, stating your identity and intended use. <br>
This evaluation only tests the semantic understanding ability of large models and does not represent a comprehensive evaluation of a model's capabilities; the results are for reference only. The evaluation method, evaluation dataset, and evaluation records are all public to ensure reproducibility. <br>
This test set is provided free of charge to help researchers evaluate their own models' performance and verify whether training strategies are effective; it is not intended as a leaderboard. Improving Chinese large models is a long-term effort, and we hope everyone makes full use of this dataset in the right way. <br> |
Seanxh/twitter_dataset_1713188987 | ---
dataset_info:
features:
- name: id
dtype: string
- name: tweet_content
dtype: string
- name: user_name
dtype: string
- name: user_id
dtype: string
- name: created_at
dtype: string
- name: url
dtype: string
- name: favourite_count
dtype: int64
- name: scraped_at
dtype: string
- name: image_urls
dtype: string
splits:
- name: train
num_bytes: 32877
num_examples: 74
download_size: 17309
dataset_size: 32877
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
bigscience-data/roots_indic-ta_wikinews | ---
language: ta
license: cc-by-sa-3.0
extra_gated_prompt: 'By accessing this dataset, you agree to abide by the BigScience
Ethical Charter. The charter can be found at:
https://hf.co/spaces/bigscience/ethical-charter'
extra_gated_fields:
I have read and agree to abide by the BigScience Ethical Charter: checkbox
---
ROOTS Subset: roots_indic-ta_wikinews
# wikinews_filtered
- Dataset uid: `wikinews_filtered`
### Description
### Homepage
### Licensing
### Speaker Locations
### Sizes
- 0.0307 % of total
- 0.0701 % of ar
- 0.3036 % of pt
- 0.0271 % of en
- 0.0405 % of fr
- 0.2119 % of indic-ta
- 0.0081 % of zh
- 0.0510 % of es
- 0.0725 % of ca
### BigScience processing steps
#### Filters applied to: ar
- filter_wiki_user_titles
- filter_wiki_non_text_type
- dedup_document
- filter_remove_empty_docs
- split_sentences_ar
- dedup_template_soft
- replace_newline_with_space
- filter_small_docs_bytes_300
#### Filters applied to: pt
- filter_wiki_user_titles
- filter_wiki_non_text_type
- dedup_document
- filter_remove_empty_docs
- split_sentences_pt
- dedup_template_soft
- replace_newline_with_space
- filter_small_docs_bytes_300
#### Filters applied to: en
- filter_wiki_user_titles
- filter_wiki_non_text_type
- dedup_document
- filter_remove_empty_docs
- split_sentences_en
- dedup_template_soft
- replace_newline_with_space
- filter_small_docs_bytes_1024
#### Filters applied to: fr
- filter_wiki_user_titles
- filter_wiki_non_text_type
- dedup_document
- dedup_template_soft
- filter_remove_empty_docs
- replace_newline_with_space
- filter_small_docs_bytes_1024
#### Filters applied to: indic-ta
- filter_wiki_user_titles
- filter_wiki_non_text_type
- dedup_document
- filter_remove_empty_docs
- split_sentences_indic-ta
- dedup_template_soft
- replace_newline_with_space
- filter_small_docs_bytes_300
#### Filters applied to: zh
- filter_wiki_user_titles
- filter_wiki_non_text_type
- dedup_document
- filter_remove_empty_docs
- split_sentences_zhs
- dedup_template_soft
- replace_newline_with_space
- filter_small_docs_bytes_1024
#### Filters applied to: es
- filter_wiki_user_titles
- filter_wiki_non_text_type
- dedup_document
- filter_remove_empty_docs
- split_sentences_es
- dedup_template_soft
- replace_newline_with_space
- filter_small_docs_bytes_1024
#### Filters applied to: ca
- filter_wiki_user_titles
- filter_wiki_non_text_type
- dedup_document
- filter_remove_empty_docs
- split_sentences_ca
- dedup_template_soft
- replace_newline_with_space
- filter_small_docs_bytes_1024
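The filter names above suggest straightforward operations. For illustration, here are toy sketches of two of them; these are assumptions about their behavior, not the actual BigScience implementations:

```python
def dedup_document(docs):
    # Keep only the first occurrence of each exact-duplicate document.
    seen, out = set(), []
    for doc in docs:
        if doc not in seen:
            seen.add(doc)
            out.append(doc)
    return out

def filter_small_docs_bytes_300(docs):
    # Drop documents smaller than 300 bytes of UTF-8 text.
    return [doc for doc in docs if len(doc.encode("utf-8")) >= 300]

docs = ["short", "x" * 300, "x" * 300]
print(len(filter_small_docs_bytes_300(dedup_document(docs))))  # 1
```

The `_1024` variants applied to higher-resource languages presumably use a 1024-byte threshold instead.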
|
dongyoung4091/shp-generated_flan_t5_large_flan_t5_base_zeroshot | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: response
dtype: string
- name: zeroshot_helpfulness
dtype: float64
- name: zeroshot_specificity
dtype: float64
- name: zeroshot_intent
dtype: float64
- name: zeroshot_factuality
dtype: float64
- name: zeroshot_easy-to-understand
dtype: float64
- name: zeroshot_relevance
dtype: float64
- name: zeroshot_readability
dtype: float64
- name: zeroshot_enough-detail
dtype: float64
- name: 'zeroshot_biased:'
dtype: float64
- name: zeroshot_fail-to-consider-individual-preferences
dtype: float64
- name: zeroshot_repetetive
dtype: float64
- name: zeroshot_fail-to-consider-context
dtype: float64
- name: zeroshot_too-long
dtype: float64
splits:
- name: train
num_bytes: 29493865
num_examples: 25600
download_size: 0
dataset_size: 29493865
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "shp-generated_flan_t5_large_flan_t5_base_zeroshot"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
hackathon-pln-es/comentarios_depresivos |
---
license: cc-by-sa-4.0
---
The database consists of 192,347 rows of data for training, 33,944 for testing, and 22,630 for validation. It contains suicidal comments and normal comments from the social network Reddit, translated into Spanish, obtained from the Suicide and Depression Detection dataset by Nikhileswar Komati, which can be viewed at: https://www.kaggle.com/datasets/nikhileswarkomati/suicide-watch
Authors
- Danny Vásquez
- César Salazar
- Alexis Cañar
- Yannela Castro
- Daniel Patiño
|
Nexdata/4253_Pairs_of_Human_Face_Images_Before_and_After_Makeup | ---
license: cc-by-nc-nd-4.0
---
## Description
4,253 Pairs of Human Face Images – Before and After Makeup. For each pair, one image without makeup and one image with makeup are included. This dataset can be used for tasks such as face recognition and makeup style analysis.
For more details, please refer to the link: https://www.nexdata.ai/dataset/1037?source=Huggingface
# Specifications
## Data size
4,253 pairs of images; one image without makeup and one image with makeup per pair
## Race distribution
914 pairs of black people, 1,500 pairs of Caucasian people, 1,839 pairs of Asian people
## Gender distribution
183 pairs of males and 4,070 pairs of females
## Collecting environment
indoor scenes
## Data diversity
different races, ages and makeup styles
## Data format
.png
## Accuracy
the accuracy of the race and gender labels is at least 97%
# Licensing Information
Commercial License
|
Falah/photogram_prompts | ---
dataset_info:
features:
- name: prompts
dtype: string
splits:
- name: train
num_bytes: 1790632
num_examples: 10000
download_size: 248890
dataset_size: 1790632
---
# Dataset Card for "photogram_prompts"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
CyberHarem/shimakaze_kantaicollection | ---
license: mit
task_categories:
- text-to-image
tags:
- art
- not-for-all-audiences
size_categories:
- n<1K
---
# Dataset of shimakaze/島風 (Kantai Collection)
This is the dataset of shimakaze/島風 (Kantai Collection), containing 500 images and their tags.
The core tags of this character are `blonde_hair, long_hair, hairband, hair_ornament`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:------------|:----------------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 500 | 584.48 MiB | [Download](https://huggingface.co/datasets/CyberHarem/shimakaze_kantaicollection/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 500 | 372.58 MiB | [Download](https://huggingface.co/datasets/CyberHarem/shimakaze_kantaicollection/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 1226 | 767.47 MiB | [Download](https://huggingface.co/datasets/CyberHarem/shimakaze_kantaicollection/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 500 | 533.53 MiB | [Download](https://huggingface.co/datasets/CyberHarem/shimakaze_kantaicollection/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 1226 | 1003.65 MiB | [Download](https://huggingface.co/datasets/CyberHarem/shimakaze_kantaicollection/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need it, just run the following code:
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/shimakaze_kantaicollection',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
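Building on the loop above, a small helper can summarize the crawled tags. This is a minimal sketch: the only interface it assumes is the `item.meta['tags']` field shown above, which may be a mapping (tag to score) or a plain list of tag strings:

```python
from collections import Counter

def tag_frequencies(source):
    """Count how often each tag appears across the items yielded by `source`.

    Assumes each item exposes meta['tags'] as in the loop above, either as
    a mapping (tag -> score) or as a list of tag strings.
    """
    counts = Counter()
    for item in source:
        tags = item.meta["tags"]
        counts.update(tags.keys() if hasattr(tags, "keys") else tags)
    return counts
```

For example, `tag_frequencies(LocalSource(dataset_dir)).most_common(20)` would list the twenty most frequent tags in the extracted dataset.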
## List of Clusters
List of tag clustering results; some outfits may be mined from the clusters below.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 14 |  |  |  |  |  | 1girl, elbow_gloves, solo, striped_thighhighs, white_gloves, black_panties, looking_at_viewer, skirt, navel, brown_eyes, blush, yellow_eyes, midriff |
| 1 | 5 |  |  |  |  |  | 1girl, :3, >_<, black_panties, brown_eyes, closed_eyes, elbow_gloves, lifebuoy, looking_at_viewer, skirt, solo, striped_thighhighs, white_gloves, anchor, blush, navel, midriff, yellow_eyes |
| 2 | 5 |  |  |  |  |  | 1girl, black_panties, elbow_gloves, highleg_panties, navel, skirt, striped_thighhighs, white_gloves, looking_at_viewer, solo, midriff |
| 3 | 5 |  |  |  |  |  | 1girl, elbow_gloves, skirt, solo, striped_thighhighs, white_gloves, blue_eyes, looking_at_viewer, navel, black_panties |
| 4 | 9 |  |  |  |  |  | 1girl, black_panties, crop_top, elbow_gloves, highleg_panties, looking_at_viewer, sailor_collar, solo, white_gloves, pleated_skirt, serafuku, striped_thighhighs, blue_skirt, simple_background, miniskirt, white_background, microskirt, grey_eyes, navel |
| 5 | 13 |  |  |  |  |  | 1girl, black_neckerchief, black_panties, blue_skirt, crop_top, elbow_gloves, miniskirt, pleated_skirt, serafuku, solo, white_gloves, highleg_panties, microskirt, looking_at_viewer, simple_background, striped_thighhighs, anchor_hair_ornament, black_hairband, blue_sailor_collar, grey_eyes, hair_between_eyes, navel, white_background, blush, sleeveless |
| 6 | 6 |  |  |  |  |  | 1girl, elbow_gloves, looking_at_viewer, solo, upper_body, black_neckerchief, blue_sailor_collar, serafuku, simple_background, white_gloves, black_hairband, open_mouth, brown_eyes, sleeveless, white_background |
| 7 | 6 |  |  |  |  |  | 1girl, blush, elbow_gloves, looking_at_viewer, solo, white_gloves, brown_eyes |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | elbow_gloves | solo | striped_thighhighs | white_gloves | black_panties | looking_at_viewer | skirt | navel | brown_eyes | blush | yellow_eyes | midriff | :3 | >_< | closed_eyes | lifebuoy | anchor | highleg_panties | blue_eyes | crop_top | sailor_collar | pleated_skirt | serafuku | blue_skirt | simple_background | miniskirt | white_background | microskirt | grey_eyes | black_neckerchief | anchor_hair_ornament | black_hairband | blue_sailor_collar | hair_between_eyes | sleeveless | upper_body | open_mouth |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:---------------|:-------|:---------------------|:---------------|:----------------|:--------------------|:--------|:--------|:-------------|:--------|:--------------|:----------|:-----|:------|:--------------|:-----------|:---------|:------------------|:------------|:-----------|:----------------|:----------------|:-----------|:-------------|:--------------------|:------------|:-------------------|:-------------|:------------|:--------------------|:-----------------------|:-----------------|:---------------------|:--------------------|:-------------|:-------------|:-------------|
| 0 | 14 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | |
| 1 | 5 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | |
| 2 | 5 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | | | | X | | | | | | X | | | | | | | | | | | | | | | | | | | |
| 3 | 5 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | | | | | | | | | | | X | | | | | | | | | | | | | | | | | | |
| 4 | 9 |  |  |  |  |  | X | X | X | X | X | X | X | | X | | | | | | | | | | X | | X | X | X | X | X | X | X | X | X | X | | | | | | | | |
| 5 | 13 |  |  |  |  |  | X | X | X | X | X | X | X | | X | | X | | | | | | | | X | | X | | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | |
| 6 | 6 |  |  |  |  |  | X | X | X | | X | | X | | | X | | | | | | | | | | | | | | X | | X | | X | | | X | | X | X | | X | X | X |
| 7 | 6 |  |  |  |  |  | X | X | X | | X | | X | | | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | |
|
arhamk/HTML-correction-examples | ---
license: mit
task_categories:
- text-classification
tags:
- code
size_categories:
- 1K<n<10K
--- |
CyberHarem/elly_touhou | ---
license: mit
task_categories:
- text-to-image
tags:
- art
- not-for-all-audiences
size_categories:
- n<1K
---
# Dataset of elly (Touhou)
This is the dataset of elly (Touhou), containing 209 images and their tags.
The core tags of this character are `blonde_hair, hat, ribbon, short_hair, yellow_eyes, hat_ribbon, bow, curly_hair, white_headwear, drill_hair, red_ribbon`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:-------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 209 | 185.66 MiB | [Download](https://huggingface.co/datasets/CyberHarem/elly_touhou/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 209 | 127.87 MiB | [Download](https://huggingface.co/datasets/CyberHarem/elly_touhou/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 384 | 235.86 MiB | [Download](https://huggingface.co/datasets/CyberHarem/elly_touhou/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 209 | 171.74 MiB | [Download](https://huggingface.co/datasets/CyberHarem/elly_touhou/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 384 | 302.14 MiB | [Download](https://huggingface.co/datasets/CyberHarem/elly_touhou/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need it, just run the following code:
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/elly_touhou',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
## List of Clusters
List of tag clustering results; some outfits may be mined from the clusters below.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 10 |  |  |  |  |  | 1girl, bangs, holding_scythe, juliet_sleeves, red_dress, solo, frills, hat_bow, looking_at_viewer, smile, medium_breasts, sun_hat, closed_mouth, upper_body, open_mouth, red_bowtie |
| 1 | 5 |  |  |  |  |  | 1girl, holding_scythe, long_sleeves, looking_at_viewer, smile, solo, red_dress, weapon, open_mouth |
| 2 | 16 |  |  |  |  |  | 1girl, holding_scythe, solo, dress, smile, weapon |
| 3 | 11 |  |  |  |  |  | 1girl, blush, puffy_short_sleeves, solo, looking_at_viewer, medium_breasts, navel, red_skirt, brown_shirt, hat_bow, holding_scythe, open_mouth, smile, bangs, black_panties, highleg_panties, midriff, red_bow, frilled_skirt, stomach, black_ribbon, bobby_socks, brown_ribbon, crop_top, drill_locks, full_body, holding_weapon, neck_ribbon, shoes |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | bangs | holding_scythe | juliet_sleeves | red_dress | solo | frills | hat_bow | looking_at_viewer | smile | medium_breasts | sun_hat | closed_mouth | upper_body | open_mouth | red_bowtie | long_sleeves | weapon | dress | blush | puffy_short_sleeves | navel | red_skirt | brown_shirt | black_panties | highleg_panties | midriff | red_bow | frilled_skirt | stomach | black_ribbon | bobby_socks | brown_ribbon | crop_top | drill_locks | full_body | holding_weapon | neck_ribbon | shoes |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:--------|:-----------------|:-----------------|:------------|:-------|:---------|:----------|:--------------------|:--------|:-----------------|:----------|:---------------|:-------------|:-------------|:-------------|:---------------|:---------|:--------|:--------|:----------------------|:--------|:------------|:--------------|:----------------|:------------------|:----------|:----------|:----------------|:----------|:---------------|:--------------|:---------------|:-----------|:--------------|:------------|:-----------------|:--------------|:--------|
| 0 | 10 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | |
| 1 | 5 |  |  |  |  |  | X | | X | | X | X | | | X | X | | | | | X | | X | X | | | | | | | | | | | | | | | | | | | | | |
| 2 | 16 |  |  |  |  |  | X | | X | | | X | | | | X | | | | | | | | X | X | | | | | | | | | | | | | | | | | | | | |
| 3 | 11 |  |  |  |  |  | X | X | X | | | X | | X | X | X | X | | | | X | | | | | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X |
|
albertvillanova/tmp-imagefolder | ---
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': bharatanatyam
'1': kathak
splits:
- name: train
num_bytes: 18458.0
num_examples: 2
- name: validation
num_bytes: 8463.0
num_examples: 1
download_size: 29860
dataset_size: 26921.0
---
# Dataset Card for "tmp-imagefolder"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Seanxh/twitter_dataset_1713191039 | ---
dataset_info:
features:
- name: id
dtype: string
- name: tweet_content
dtype: string
- name: user_name
dtype: string
- name: user_id
dtype: string
- name: created_at
dtype: string
- name: url
dtype: string
- name: favourite_count
dtype: int64
- name: scraped_at
dtype: string
- name: image_urls
dtype: string
splits:
- name: train
num_bytes: 45430
num_examples: 104
download_size: 21534
dataset_size: 45430
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
yzhuang/autotree_automl_heloc_gosdt_l512_d3_sd1 | ---
dataset_info:
features:
- name: id
dtype: int64
- name: input_x
sequence:
sequence: int64
- name: input_y
sequence:
sequence: float32
- name: rtg
sequence: float64
- name: status
sequence:
sequence: float32
- name: split_threshold
sequence:
sequence: int64
- name: split_dimension
sequence: int64
splits:
- name: train
num_bytes: 11682400000
num_examples: 100000
- name: validation
num_bytes: 1168240000
num_examples: 10000
download_size: 1471483972
dataset_size: 12850640000
---
# Dataset Card for "autotree_automl_heloc_gosdt_l512_d3_sd1"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
niv-al/sq-babi_nli_counting | ---
dataset_info:
features:
- name: sentence1
dtype: string
- name: sentence2
dtype: string
- name: labels
dtype:
class_label:
names:
'0': not-entailed
'1': entailed
splits:
- name: train
num_bytes: 250135
num_examples: 1000
- name: validation
num_bytes: 34065
num_examples: 144
- name: test
num_bytes: 37455
num_examples: 144
download_size: 60218
dataset_size: 321655
language:
- sq
---
# Dataset Card for "sq-babi_nli_counting"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
dongyoungkim/NIA_60000_TRAIN | ---
dataset_info:
features:
- name: audio
dtype: audio
- name: sentence
dtype: string
splits:
- name: train
num_bytes: 16012584214.0
num_examples: 60000
download_size: 11357172048
dataset_size: 16012584214.0
---
# Dataset Card for "NIA_60000_TRAIN"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Mediform/sgd-sharegpt | ---
language:
- en
size_categories:
- 10K<n<100K
source_datasets:
- schema_guided_dstc8
pretty_name: schema_guided_dstc8
dataset_info:
- config_name: function_cot
features:
- name: dialogue_id
dtype: string
- name: services
sequence: string
- name: turns
sequence:
- name: speaker
dtype:
class_label:
names:
'0': USER
'1': SYSTEM
- name: utterance
dtype: string
- name: frames
sequence:
- name: service
dtype: string
- name: slots
sequence:
- name: slot
dtype: string
- name: start
dtype: int32
- name: exclusive_end
dtype: int32
- name: state
struct:
- name: active_intent
dtype: string
- name: requested_slots
sequence: string
- name: slot_values
sequence:
- name: slot_name
dtype: string
- name: slot_value_list
sequence: string
- name: actions
sequence:
- name: act
dtype:
class_label:
names:
'0': AFFIRM
'1': AFFIRM_INTENT
'2': CONFIRM
'3': GOODBYE
'4': INFORM
'5': INFORM_COUNT
'6': INFORM_INTENT
'7': NEGATE
'8': NEGATE_INTENT
'9': NOTIFY_FAILURE
'10': NOTIFY_SUCCESS
'11': OFFER
'12': OFFER_INTENT
'13': REQUEST
'14': REQUEST_ALTS
'15': REQ_MORE
'16': SELECT
'17': THANK_YOU
- name: slot
dtype: string
- name: canonical_values
sequence: string
- name: values
sequence: string
- name: service_results
sequence:
- name: service_results_list
sequence:
- name: service_slot_name
dtype: string
- name: service_canonical_value
dtype: string
- name: service_call
struct:
- name: method
dtype: string
- name: parameters
sequence:
- name: parameter_slot_name
dtype: string
- name: parameter_canonical_value
dtype: string
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
splits:
- name: train
num_bytes: 399228786
num_examples: 18624
- name: test
num_bytes: 90489796
num_examples: 4201
download_size: 74770692
dataset_size: 489718582
- config_name: function_cot_nlg
features:
- name: dialogue_id
dtype: string
- name: services
sequence: string
- name: turns
sequence:
- name: speaker
dtype:
class_label:
names:
'0': USER
'1': SYSTEM
- name: utterance
dtype: string
- name: frames
sequence:
- name: service
dtype: string
- name: slots
sequence:
- name: slot
dtype: string
- name: start
dtype: int32
- name: exclusive_end
dtype: int32
- name: state
struct:
- name: active_intent
dtype: string
- name: requested_slots
sequence: string
- name: slot_values
sequence:
- name: slot_name
dtype: string
- name: slot_value_list
sequence: string
- name: actions
sequence:
- name: act
dtype:
class_label:
names:
'0': AFFIRM
'1': AFFIRM_INTENT
'2': CONFIRM
'3': GOODBYE
'4': INFORM
'5': INFORM_COUNT
'6': INFORM_INTENT
'7': NEGATE
'8': NEGATE_INTENT
'9': NOTIFY_FAILURE
'10': NOTIFY_SUCCESS
'11': OFFER
'12': OFFER_INTENT
'13': REQUEST
'14': REQUEST_ALTS
'15': REQ_MORE
'16': SELECT
'17': THANK_YOU
- name: slot
dtype: string
- name: canonical_values
sequence: string
- name: values
sequence: string
- name: service_results
sequence:
- name: service_results_list
sequence:
- name: service_slot_name
dtype: string
- name: service_canonical_value
dtype: string
- name: service_call
struct:
- name: method
dtype: string
- name: parameters
sequence:
- name: parameter_slot_name
dtype: string
- name: parameter_canonical_value
dtype: string
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
splits:
- name: train
num_bytes: 413611739
num_examples: 18624
- name: test
num_bytes: 93800454
num_examples: 4201
download_size: 83499083
dataset_size: 507412193
- config_name: no_user_action
features:
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
splits:
- name: train
num_bytes: 61638813.05410734
num_examples: 7570
download_size: 5916553
dataset_size: 61638813.05410734
- config_name: only_function
features:
- name: dialogue_id
dtype: string
- name: services
sequence: string
- name: turns
sequence:
- name: speaker
dtype:
class_label:
names:
'0': USER
'1': SYSTEM
- name: utterance
dtype: string
- name: frames
sequence:
- name: service
dtype: string
- name: slots
sequence:
- name: slot
dtype: string
- name: start
dtype: int32
- name: exclusive_end
dtype: int32
- name: state
struct:
- name: active_intent
dtype: string
- name: requested_slots
sequence: string
- name: slot_values
sequence:
- name: slot_name
dtype: string
- name: slot_value_list
sequence: string
- name: actions
sequence:
- name: act
dtype:
class_label:
names:
'0': AFFIRM
'1': AFFIRM_INTENT
'2': CONFIRM
'3': GOODBYE
'4': INFORM
'5': INFORM_COUNT
'6': INFORM_INTENT
'7': NEGATE
'8': NEGATE_INTENT
'9': NOTIFY_FAILURE
'10': NOTIFY_SUCCESS
'11': OFFER
'12': OFFER_INTENT
'13': REQUEST
'14': REQUEST_ALTS
'15': REQ_MORE
'16': SELECT
'17': THANK_YOU
- name: slot
dtype: string
- name: canonical_values
sequence: string
- name: values
sequence: string
- name: service_results
sequence:
- name: service_results_list
sequence:
- name: service_slot_name
dtype: string
- name: service_canonical_value
dtype: string
- name: service_call
struct:
- name: method
dtype: string
- name: parameters
sequence:
- name: parameter_slot_name
dtype: string
- name: parameter_canonical_value
dtype: string
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
splits:
- name: train
num_bytes: 322401072
num_examples: 18624
- name: test
num_bytes: 73561762
num_examples: 4201
download_size: 57534751
dataset_size: 395962834
configs:
- config_name: function_cot
data_files:
- split: train
path: function_cot/train-*
- split: test
path: function_cot/test-*
- config_name: function_cot_nlg
data_files:
- split: train
path: function_cot_nlg/train-*
- split: test
path: function_cot_nlg/test-*
- config_name: no_user_action
data_files:
- split: train
path: no_user_action/train-*
- config_name: only_function
data_files:
- split: train
path: only_function/train-*
- split: test
path: only_function/test-*
---
## Dataset Card for The Schema-Guided Dialogue (SGD) Dataset converted to ShareGPT
### Dataset summary
This is a ShareGPT-parsed version of the [schema_guided_dstc8](https://huggingface.co/datasets/schema_guided_dstc8) dataset, intended for finetuning LLMs on task-oriented conversations. There are three versions of the dataset:
* `function_only`: In each turn, the assistant only answers with function calls. For example:
```
REQUEST(cuisine=['Italian', 'Indian']) REQUEST(city=[])
```
* `function_cot`: In each turn, the assistant has to analyze recent user actions, the intent being fulfilled by the system (`active_intent`), and slots requested by the user in the current turn (`requested_slots`) before giving the appropriate function calls. For example:
```
Let's think step-by-step:
User actions: INFORM_INTENT(intent=['FindRestaurants'])
active_intent: FindRestaurants()
requested_slots: NONE
Therefore, assistant actions should be: REQUEST(cuisine=['Italian', 'Indian']) REQUEST(city=[])
```
* `function_cot_nlg`: Same as `function_cot` but the assistant has to give an additional natural language response corresponding to the function calls. For example:
```
Let's think step-by-step:
User actions: INFORM_INTENT(intent=['FindRestaurants'])
active_intent: FindRestaurants()
requested_slots: NONE
Therefore, assistant actions should be: REQUEST(cuisine=['Italian', 'Indian']) REQUEST(city=[])
Response: Sure thing, which area are you interested in and what is your preferred cuisine, such as Italian and Indian?
```
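Since the chain-of-thought turns above follow a fixed line-labeled layout, they can be split back into structured fields for evaluation. Below is a minimal sketch of such a parser; the `parse_cot_turn` helper and its field names are assumptions based on the examples in this card, not a parser shipped with the dataset.

```python
import re

# One assistant turn from the `function_cot_nlg` subset, copied from the
# example above.
TURN = """Let's think step-by-step:
User actions: INFORM_INTENT(intent=['FindRestaurants'])
active_intent: FindRestaurants()
requested_slots: NONE
Therefore, assistant actions should be: REQUEST(cuisine=['Italian', 'Indian']) REQUEST(city=[])
Response: Sure thing, which area are you interested in and what is your preferred cuisine, such as Italian and Indian?"""

def parse_cot_turn(turn: str) -> dict:
    """Split a CoT-style assistant turn into its labeled fields.

    Returns None for any field whose label is absent (e.g. `response`
    is missing in the `function_cot` subset).
    """
    patterns = {
        "user_actions": r"User actions: (.*)",
        "active_intent": r"active_intent: (.*)",
        "requested_slots": r"requested_slots: (.*)",
        "assistant_actions": r"Therefore, assistant actions should be: (.*)",
        "response": r"Response: (.*)",
    }
    out = {}
    for key, pattern in patterns.items():
        m = re.search(pattern, turn)
        out[key] = m.group(1).strip() if m else None
    return out

parsed = parse_cot_turn(TURN)
print(parsed["active_intent"])      # FindRestaurants()
print(parsed["assistant_actions"])  # REQUEST(cuisine=['Italian', 'Indian']) REQUEST(city=[])
```

A parser like this makes it straightforward to score dialogue-state tracking (compare `active_intent` and `assistant_actions` against references) separately from response generation.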
### Dataset samples
To keep this README short, only a sample from `function_cot_nlg` is provided. Samples from the other subsets differ slightly in the system instruction and assistant turns (as described above).
```
SYSTEM:
As a helpful assistant, you are involved in a task-oriented conversation where you interact with provided services and APIs to assist users with their query. Service description:
Service: Restaurants_1
Description: A leading provider for restaurant search and reservations
Slots:
- restaurant_name (Any): Name of the restaurant
- date (Any): Date for the reservation or to find availability
- time (Any): Time for the reservation or to find availability
- serves_alcohol (True, False): Boolean flag indicating if the restaurant serves alcohol
- has_live_music (True, False): Boolean flag indicating if the restaurant has live music
- phone_number (Any): Phone number of the restaurant
- street_address (Any): Address of the restaurant
- party_size (1, 2, 3, 4, 5, 6): Party size for a reservation
- price_range (inexpensive, moderate, expensive, very expensive): Price range for the restaurant
- city (Any): City in which the restaurant is located
- cuisine (Mexican, Chinese, Indian, American, Italian): Cuisine of food served in the restaurant
Intents:
- ReserveRestaurant:
Reserve a table at a restaurant
Args:
restaurant_name: Required
city: Required
time: Required
date: Optional. Default: "2019-03-01"
party_size: Optional. Default: "2"
- FindRestaurants:
Find a restaurant of a particular cuisine in a city
Args:
cuisine: Required
city: Required
price_range: Optional. Default: "dontcare"
has_live_music: Optional. Default: "dontcare"
serves_alcohol: Optional. Default: "dontcare"
In each turn, think step-by-step about user actions and conversation state, which includes the following information:
- active_intent: The intent corresponding to the service of the frame which is currently being fulfilled by the system. It takes the value "NONE" if none of the intents are active.
- requested_slots: A list of slots requested by the user in the current turn.
From the captured information, take some actions in form of functions to clarify user request using one of the actions below:
AFFIRM
AFFIRM_INTENT
CONFIRM
GOODBYE
INFORM
INFORM_COUNT
INFORM_INTENT
NEGATE
NEGATE_INTENT
NOTIFY_FAILURE
NOTIFY_SUCCESS
OFFER
OFFER_INTENT
REQUEST
REQUEST_ALTS
REQ_MORE
SELECT
THANK_YOU
Each action can have some arguments derived from the slot list above, and corresponding values if necessary. Examples of an action:
OFFER(restaurant_name=["71 Saint Peter"], city=["San Jose"])
INFORM(city=["San Jose", "Los Angeles"])
Beside actions above, you can also send request to the service to retrieve necessary information. The request consists of the following fields:
- method: The name of the intent or function of the service or API being executed.
- parameters: slot names and the corresponding values.
Examples of a request:
FindRestaurants(city='San Jose', cuisine='American')
TransferMoney(account_type='checking', amount=1000)
If you make a request to the service, you should expect the next user turn contains results from the service instead of actual user message.
Finally, generate a response in natural language to the user.
USER: Hello, could you help me search for a place to dine in?
ASSISTANT: Let's think step-by-step:
User actions: INFORM_INTENT(intent=['FindRestaurants'])
active_intent: FindRestaurants()
requested_slots: NONE
Therefore, assistant actions should be: REQUEST(cuisine=['Italian', 'Indian']) REQUEST(city=[])
Response: Sure thing, which area are you interested in and what is your preferred cuisine, such as Italian and Indian?
USER: I would prefer a restaurant which serves up Cambodian food in SF.
ASSISTANT: Let's think step-by-step:
User actions: INFORM(cuisine=['Vietnamese']) INFORM(city=['San Francisco'])
active_intent: FindRestaurants(city=['SF'], cuisine=['Cambodian'])
requested_slots: NONE
Therefore, assistant actions should be: FindRestaurants(city="San Francisco" cuisine="Vietnamese")
USER: [SYSTEM RESPONSE]
{'city': 'San Francisco', 'cuisine': 'Vietnamese', 'has_live_music': 'False', 'phone_number': '415-885-5180', 'price_range': 'moderate', 'restaurant_name': 'Anh Hong', 'serves_alcohol': 'True', 'street_address': '808 Geary Street'}
{'city': 'San Francisco', 'cuisine': 'Vietnamese', 'has_live_music': 'False', 'phone_number': '415-928-4977', 'price_range': 'moderate', 'restaurant_name': 'Aux Delices', 'serves_alcohol': 'False', 'street_address': '2327 Polk Street'}
``` |
autoevaluate/autoeval-staging-eval-project-84760c85-7314786 | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- samsum
eval_info:
task: summarization
model: philschmid/distilbart-cnn-12-6-samsum
dataset_name: samsum
dataset_config: samsum
dataset_split: test
col_mapping:
text: dialogue
target: summary
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: philschmid/distilbart-cnn-12-6-samsum
* Dataset: samsum
To run new evaluation jobs, visit Hugging Face's [automatic evaluation service](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model. |
JinglesDados/AdelmoCase | ---
license: openrail
---
|
TokenBender/python_evol_instruct_51k | ---
license: mit
---
|
distilled-one-sec-cv12-each-chunk-uniq/chunk_148 | ---
dataset_info:
features:
- name: logits
sequence: float32
- name: mfcc
sequence:
sequence: float64
splits:
- name: train
num_bytes: 978908472.0
num_examples: 190746
download_size: 1000598385
dataset_size: 978908472.0
---
# Dataset Card for "chunk_148"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
dyvapandhu/molecul-datasets | ---
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': AA
'1': C
splits:
- name: train
num_bytes: 722508.0
num_examples: 60
- name: validation
num_bytes: 113871.0
num_examples: 10
- name: test
num_bytes: 140052.0
num_examples: 14
download_size: 869841
dataset_size: 976431.0
---
# Dataset Card for "molecul-datasets"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
heliosprime/twitter_dataset_1713225611 | ---
dataset_info:
features:
- name: id
dtype: string
- name: tweet_content
dtype: string
- name: user_name
dtype: string
- name: user_id
dtype: string
- name: created_at
dtype: string
- name: url
dtype: string
- name: favourite_count
dtype: int64
- name: scraped_at
dtype: string
- name: image_urls
dtype: string
splits:
- name: train
num_bytes: 22198
num_examples: 63
download_size: 19780
dataset_size: 22198
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "twitter_dataset_1713225611"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
anan-2024/twitter_dataset_1713141922 | ---
dataset_info:
features:
- name: id
dtype: string
- name: tweet_content
dtype: string
- name: user_name
dtype: string
- name: user_id
dtype: string
- name: created_at
dtype: string
- name: url
dtype: string
- name: favourite_count
dtype: int64
- name: scraped_at
dtype: string
- name: image_urls
dtype: string
splits:
- name: train
num_bytes: 17978
num_examples: 42
download_size: 11009
dataset_size: 17978
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
AdapterOcean/med_alpaca_standardized_cluster_42 | ---
dataset_info:
features:
- name: text
dtype: string
- name: conversation_id
dtype: int64
- name: embedding
sequence: float64
- name: cluster
dtype: int64
splits:
- name: train
num_bytes: 68682893
num_examples: 7077
download_size: 20200080
dataset_size: 68682893
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "med_alpaca_standardized_cluster_42"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |