---
language:
  - ko
library_name: transformers
pipeline_tag: text-generation
---

# ko-ref-llama2-13b

**Model Developers:** HyunseokLee, TaeyoungKim (KAIST ALIN Lab, OMNIOUS.AI)

**Input:** Models input text only.

**Output:** Models generate text only.

## Model Architecture
ko-ref-llama2-13b is an auto-regressive language model based on the LLaMA2 transformer architecture.

## Base Model

Llama-2-13B
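Since the card declares `library_name: transformers` and `pipeline_tag: text-generation`, the model can be loaded like any causal LM on the Hub. The sketch below is an assumption, not an official usage snippet from the authors, and `"ko-ref-llama2-13b"` is a placeholder id — substitute the model's actual Hub repository id:

```python
def generate(prompt: str, model_id: str = "ko-ref-llama2-13b",
             max_new_tokens: int = 64) -> str:
    """Generate a Korean continuation for `prompt` with a causal LM.

    NOTE: the default `model_id` is a placeholder; replace it with the
    model's full Hugging Face Hub repository id.
    """
    # Imported lazily so the function can be defined without loading weights.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    outputs = model.generate(**inputs, max_new_tokens=max_new_tokens)
    return tokenizer.decode(outputs[0], skip_special_tokens=True)


if __name__ == "__main__":
    # Korean prompt: "The capital of South Korea is"
    print(generate("대한민국의 수도는"))
```

Loading a 13B model in full precision needs roughly 26 GB of memory; `device_map="auto"` lets `accelerate` spread the weights across available devices.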

## Training Dataset

An open Korean-language dataset.

## Training Objective

The model was trained on a Korean corpus to strengthen its Korean language modeling ability.

## Open LLM Leaderboard Evaluation Results

Detailed results can be found here.

| Metric              | Value |
|---------------------|------:|
| Avg.                | 41.32 |
| ARC (25-shot)       | 48.38 |
| HellaSwag (10-shot) | 73.56 |
| MMLU (5-shot)       | 34.83 |
| TruthfulQA (0-shot) | 35.82 |
| Winogrande (5-shot) | 69.14 |
| GSM8K (5-shot)      |  0.0  |
| DROP (3-shot)       | 27.53 |