---
language:
- ko
license: llama2
model-index:
- name: k2s3_test_24001
  results:
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: AI2 Reasoning Challenge (25-Shot)
      type: ai2_arc
      config: ARC-Challenge
      split: test
      args:
        num_few_shot: 25
    metrics:
    - type: acc_norm
      value: 55.72
      name: normalized accuracy
    source:
      url: >-
        https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Changgil/k2s3_test_24001
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: HellaSwag (10-Shot)
      type: hellaswag
      split: validation
      args:
        num_few_shot: 10
    metrics:
    - type: acc_norm
      value: 80.69
      name: normalized accuracy
    source:
      url: >-
        https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Changgil/k2s3_test_24001
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MMLU (5-Shot)
      type: cais/mmlu
      config: all
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 54.6
      name: accuracy
    source:
      url: >-
        https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Changgil/k2s3_test_24001
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: TruthfulQA (0-shot)
      type: truthful_qa
      config: multiple_choice
      split: validation
      args:
        num_few_shot: 0
    metrics:
    - type: mc2
      value: 43.57
    source:
      url: >-
        https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Changgil/k2s3_test_24001
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: Winogrande (5-shot)
      type: winogrande
      config: winogrande_xl
      split: validation
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 75.69
      name: accuracy
    source:
      url: >-
        https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Changgil/k2s3_test_24001
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: GSM8k (5-shot)
      type: gsm8k
      config: main
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 29.8
      name: accuracy
    source:
      url: >-
        https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Changgil/k2s3_test_24001
      name: Open LLM Leaderboard
---
## Developed by :
- Changgil Song

## Model Number:
- k2s3_test_24001

## Base Model :
- meta-llama/Llama-2-13b-chat-hf
## Training Data
- The model was trained on a diverse dataset of approximately 800 million tokens, including the Standard Korean Dictionary, KULLM training data from Korea University, abstracts of master's and doctoral theses, and Korean language samples from AI Hub.
## Training Method
- This model was fine-tuned from the "meta-llama/Llama-2-13b-chat-hf" base model using parameter-efficient fine-tuning (PEFT) with LoRA (Low-Rank Adaptation).
## Hardware and Software
- Hardware: Trained on two NVIDIA A100 80GB GPUs (80G*2EA).
- Training Factors: This model was fine-tuned with PEFT LoRA using the Hugging Face SFTTrainer with FSDP (Fully Sharded Data Parallel). Key hyperparameters: LoRA r = 8, LoRA alpha = 16, 2 training epochs, batch size of 1, and gradient accumulation of 32 steps.
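To make the LoRA hyperparameters above concrete: LoRA freezes a pretrained weight matrix W and learns a low-rank update, giving an effective weight W_eff = W + (alpha / r) · B · A, where A is random-initialized and B is zero-initialized so training starts from the base model exactly. The sketch below uses toy dimensions (not the model's real projection sizes), but the same scaling ratio alpha / r = 2 as this card's r = 8, alpha = 16.

```python
# Illustrative LoRA update: W_eff = W + (alpha / r) * (B @ A).
# Toy dimensions; the card's actual settings are r = 8, alpha = 16 (same ratio).
import random

def matmul(a, b):
    """Plain-Python matrix multiply for (rows x inner) @ (inner x cols)."""
    rows, inner, cols = len(a), len(b), len(b[0])
    return [[sum(a[i][k] * b[k][j] for k in range(inner)) for j in range(cols)]
            for i in range(rows)]

d_out, d_in, r, alpha = 6, 4, 2, 4
scaling = alpha / r  # 2.0, matching the card's 16 / 8

random.seed(0)
# Frozen base weight (d_out x d_in).
W = [[random.gauss(0, 1) for _ in range(d_in)] for _ in range(d_out)]
# LoRA factors: A (r x d_in) random init, B (d_out x r) zero init.
A = [[random.gauss(0, 1) for _ in range(d_in)] for _ in range(r)]
B = [[0.0 for _ in range(r)] for _ in range(d_out)]

delta = matmul(B, A)  # (d_out x d_in) low-rank update; all zeros at init
W_eff = [[W[i][j] + scaling * delta[i][j] for j in range(d_in)]
         for i in range(d_out)]
```

Because B starts at zero, the initial effective weight equals the frozen base weight; only A and B (2 · r · d parameters per adapted matrix, a small fraction of d_out · d_in) are updated during fine-tuning.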
## Caution
- When further fine-tuning this model, consider the specific parameters used during training, such as the LoRA r and LoRA alpha values, to ensure compatibility and optimal performance.
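A minimal sketch of how those adapter hyperparameters might be reproduced with `peft`'s `LoraConfig`, assuming the adapter was trained with the PEFT library as the card states. `target_modules` and `lora_dropout` are assumptions for illustration; the card does not say which projections were adapted.

```python
from peft import LoraConfig

# r and lora_alpha match the values stated in this card.
lora_config = LoraConfig(
    r=8,
    lora_alpha=16,
    target_modules=["q_proj", "v_proj"],  # assumption: common choice for Llama models
    lora_dropout=0.05,                    # assumption: not stated in the card
    bias="none",
    task_type="CAUSAL_LM",
)
```

Passing a config with mismatched `r` or `lora_alpha` to further fine-tuning would change the effective scaling (alpha / r) the adapter was trained with.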
## Additional Information
- Training leveraged the FSDP (Fully Sharded Data Parallel) feature through the Hugging Face SFTTrainer for efficient memory usage and accelerated training.
## Open LLM Leaderboard Evaluation Results
Detailed results can be found here
| Metric | Value |
|---|---|
| Avg. | 56.68 |
| AI2 Reasoning Challenge (25-Shot) | 55.72 |
| HellaSwag (10-Shot) | 80.69 |
| MMLU (5-Shot) | 54.60 |
| TruthfulQA (0-shot) | 43.57 |
| Winogrande (5-shot) | 75.69 |
| GSM8k (5-shot) | 29.80 |