Update README.md
README.md
CHANGED
@@ -48,7 +48,6 @@ Using the gemma-2-27b-it base model, about 15% [of parameters] trained on 8x H100-80G
| 48 | A model trained on a corpus of ten million Korean news articles, together with task-specific Korean-Chinese-English-Japanese cross-training data and math and logic-judgment data, so that it can handle cross-lingual augmentation across Korean, Chinese, Japanese, and English as well as complex logic problems.<br>
| 49 | -The tokenizer is the base model's tokenizer used as-is, without word expansion<br>
| 50 | -A model strengthened in high-dimensional analysis of customer reviews and social posts, as well as coding, writing, math, and logic judgment<br>
| 51 | - -128k-Context Window<br>
| 52 | -Deepspeed Stage=3, using rslora and BAdam Layer Mode <br>
| 53 | -ollama run benedict/linkbricks-gemma2-27b-korean-advanced-q4 <br>
| 54 | -ollama run benedict/linkbricks-gemma2-27b-korean-advanced-q8
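As a usage note not present in the original README: once either quantized tag above has been pulled with `ollama run`, the model can also be queried over ollama's local HTTP API. The sketch below is an assumption about typical usage (local daemon on the default port 11434, ollama's documented `/api/generate` endpoint, and an illustrative prompt), not something stated in the README.

```python
# Hypothetical usage sketch: query the locally served quantized model via ollama's REST API.
# Assumes `ollama run benedict/linkbricks-gemma2-27b-korean-advanced-q4` has already pulled
# the model and the ollama daemon is listening on its default port (11434).
import requests

resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "benedict/linkbricks-gemma2-27b-korean-advanced-q4",  # the q8 tag works the same way
        "prompt": "Summarize today's top Korean news headline in one sentence.",  # illustrative prompt only
        "stream": False,  # ask for a single JSON object instead of a token stream
    },
    timeout=300,
)
resp.raise_for_status()
print(resp.json()["response"])  # the generated completion text
```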
@@ -59,7 +58,6 @@ about 15% of total parameters Korean CPT(Continued-Pretraining)->SFT->DPO traini
| 59 | A model trained on a 10M-document Korean news corpus together with Korean-Chinese-English-Japanese cross-training data and logic-judgment data for various tasks, enabling cross-lingual augmentation and complex Korean logic & math problems. <br>
| 60 | -The tokenizer is the base model's, used without word expansion<br>
| 61 | -A model enhanced for high-dimensional analysis of customer reviews and social posts, as well as coding, writing, math, and decision making<br>
| 62 | - -128k-Context Window<br>
| 63 | -Deepspeed Stage=3, using rslora and BAdam Layer Mode (config sketch below)<br>
| 64 | <br><br>
| 65 |
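The README only names the training settings (DeepSpeed Stage 3 with rslora and BAdam Layer Mode) without showing a configuration. Below is a minimal, assumed sketch of what a DeepSpeed ZeRO Stage-3 config of that kind commonly looks like; it is an illustration, not the authors' actual file, and rslora / BAdam are options of the fine-tuning framework rather than of this JSON.

```python
# Illustrative sketch only: a typical DeepSpeed ZeRO Stage-3 configuration written out from Python.
# The actual values used to train this model are not published in the README.
import json

ds_config = {
    "train_micro_batch_size_per_gpu": "auto",   # let the HF/DeepSpeed integration fill these in
    "gradient_accumulation_steps": "auto",
    "bf16": {"enabled": True},                  # bf16 mixed precision, usual on H100
    "zero_optimization": {
        "stage": 3,                             # ZeRO Stage 3: shard optimizer state, gradients, and parameters
        "overlap_comm": True,                   # overlap all-gather/reduce-scatter with compute
        "stage3_gather_16bit_weights_on_model_save": True,  # reassemble full weights at save time
    },
}

with open("ds_z3_config.json", "w") as f:
    json.dump(ds_config, f, indent=2)           # the file is then passed to the trainer, e.g. --deepspeed ds_z3_config.json
```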