This model was developed through the LLM research consortium between MediaGroup Saram-gwa-Soop Co., Ltd. ((주)미디어그룹사람과숲) and Marker Inc. ((주)마커).
The license is cc-by-nc-sa-4.0.
# 🐳Korean-OpenOrca-13B-v3🐳
## Model Details
**Model Developers** Kyujin Han (kyujinpy)
**Model Architecture**
Korean-OpenOrca-13B is an auto-regressive language model based on the LLaMA2 transformer architecture.
**Repo Link**
Github Korean-OpenOrca: 🐳Korean-OpenOrca🐳
**Base Model** hyunseoki/ko-en-llama2-13b
**Training Dataset**
I used OpenOrca-ko-v3, which was built by translating the OpenOrca dataset into Korean with DeepL.
Training was done on a single A100 40GB GPU in Colab.
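The sketch below is only an illustration of this translation step, not the actual pipeline behind OpenOrca-ko-v3: it streams one sample of the English OpenOrca dataset from the Hugging Face Hub and translates it into Korean with the `deepl` client. The field names (`question`, `response`) and the placeholder API key are assumptions.

```python
# Illustrative sketch only: translate a single OpenOrca sample into Korean with DeepL.
# The field names and the API key placeholder are assumptions, not taken from this card.
import deepl
from datasets import load_dataset

translator = deepl.Translator("YOUR_DEEPL_AUTH_KEY")  # hypothetical key placeholder

# Stream the English OpenOrca dataset and take one example.
orca = load_dataset("Open-Orca/OpenOrca", split="train", streaming=True)
sample = next(iter(orca))

# Translate the instruction and the response into Korean.
question_ko = translator.translate_text(sample["question"], target_lang="KO").text
response_ko = translator.translate_text(sample["response"], target_lang="KO").text

print(question_ko)
print(response_ko)
```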
## Model comparisons
| Model | Average | Ko-ARC | Ko-HellaSwag | Ko-MMLU | Ko-TruthfulQA | Ko-CommonGen V2 |
|---|---|---|---|---|---|---|
| Korean-OpenOrca-13B🐳 | 48.79 | 43.09 | 54.13 | 40.24 | 45.22 | 61.28 |
| Korean-OpenOrca-13B-v2🐳 | 48.17 | 43.17 | 54.51 | 42.90 | 41.82 | 58.44 |
| Korean-OpenOrca-13B-v3🐳 | 48.86 | 43.77 | 54.30 | 41.79 | 43.85 | 60.57 |
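As a quick sanity check on the table, the Average column is the arithmetic mean of the five benchmark scores; for example, for the v3 row:

```python
# Average for the v3 row = mean of the five benchmark scores.
scores = [43.77, 54.30, 41.79, 43.85, 60.57]
print(round(sum(scores) / len(scores), 2))  # 48.86
```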
## Implementation Code
```python
### KO-Platypus
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

repo = "kyujinpy/Korean-OpenOrca-13B-v3"
OpenOrca = AutoModelForCausalLM.from_pretrained(
    repo,
    return_dict=True,
    torch_dtype=torch.float16,
    device_map='auto'
)
OpenOrca_tokenizer = AutoTokenizer.from_pretrained(repo)
```
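A minimal generation sketch using the model and tokenizer loaded above; the Korean prompt and the sampling settings are arbitrary illustrative choices, not a prescribed prompt template.

```python
# Minimal generation example with the model/tokenizer loaded above.
# The prompt and sampling settings are illustrative, not an official template.
prompt = "한국의 수도는 어디인가요?"  # "What is the capital of Korea?"
inputs = OpenOrca_tokenizer(prompt, return_tensors="pt").to(OpenOrca.device)

with torch.no_grad():
    output_ids = OpenOrca.generate(
        **inputs,
        max_new_tokens=128,
        do_sample=True,
        temperature=0.7,
    )

print(OpenOrca_tokenizer.decode(output_ids[0], skip_special_tokens=True))
```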