Saxo committed on
Commit f641e6b · verified · 1 Parent(s): 5b68f83

Update README.md

Files changed (1)
  1. README.md +0 -2
README.md CHANGED
@@ -48,7 +48,6 @@ Using the gemma-2-27b-it base model, about 15% of parameters trained via 8 H100-80Gs
  A model trained, on top of a 10M-article Korean news corpus, with Korean-Chinese-English-Japanese cross-training data for various tasks plus math and logic-judgment data, so that it can handle Korean-Chinese-Japanese cross-lingual augmentation as well as complex logic problems.<br>
  -Tokenizer uses the base model as-is, with no vocabulary expansion<br>
  -A model strengthened for high-dimensional analysis of customer reviews and social posts, plus coding, writing, math, and logical judgment<br>
- -128k-Context Window<br>
  -Deepspeed Stage=3, using rslora and BAdam Layer Mode <br>
  -ollama run benedict/linkbricks-gemma2-27b-korean-advanced-q4 <br>
  -ollama run benedict/linkbricks-gemma2-27b-korean-advanced-q8
@@ -59,7 +58,6 @@ about 15% of total parameters Korean CPT(Continued-Pretraining)->SFT->DPO training
  It is a model trained on Korean-Chinese-English-Japanese cross-training data for various tasks, a 10M-article Korean news corpus, and logic-judgment data, to enable cross-lingual augmentation and complex Korean logic & math problems. <br>
  -Tokenizer uses the base model without word expansion<br>
  -Models enhanced with high-dimensional analysis of customer reviews and social posts, as well as coding, writing, math and decision making<br>
- -128k-Context Window<br>
  -Deepspeed Stage=3, use rslora and BAdam Layer Mode<br>
  <br><br>
 
 
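For context on the training setup the README lines above describe (DeepSpeed ZeRO Stage 3 together with rsLoRA), here is a minimal, hypothetical Python sketch using the Hugging Face transformers and peft libraries. The rank, alpha, target modules, and file names are illustrative assumptions, not the authors' actual configuration, and the BAdam layer mode they mention is omitted because the README gives no details of its integration.

```python
# Hypothetical sketch of the setup named in the README: rsLoRA adapters
# on gemma-2-27b-it trained under DeepSpeed ZeRO Stage 3.
# All hyperparameters and paths below are illustrative assumptions.
import json

from transformers import AutoModelForCausalLM, TrainingArguments
from peft import LoraConfig, get_peft_model

# ZeRO Stage 3 shards parameters, gradients, and optimizer state
# across the GPUs (the README mentions 8x H100-80G).
ds_config = {
    "zero_optimization": {
        "stage": 3,
        "overlap_comm": True,
        "stage3_gather_16bit_weights_on_model_save": True,
    },
    "bf16": {"enabled": True},
    "train_micro_batch_size_per_gpu": "auto",
    "gradient_accumulation_steps": "auto",
}
with open("ds_zero3.json", "w") as f:
    json.dump(ds_config, f)

model = AutoModelForCausalLM.from_pretrained("google/gemma-2-27b-it")

# use_rslora=True switches the adapter scaling to lora_alpha / sqrt(r)
# (rank-stabilized LoRA); r, alpha, and target modules are assumed.
peft_config = LoraConfig(
    task_type="CAUSAL_LM",
    r=16,
    lora_alpha=32,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    use_rslora=True,
)
model = get_peft_model(model, peft_config)

args = TrainingArguments(
    output_dir="out",
    per_device_train_batch_size=1,
    bf16=True,
    deepspeed="ds_zero3.json",  # hands the ZeRO-3 config to the Trainer
)
```

Launched with something like `deepspeed --num_gpus=8 train.py`, ZeRO Stage 3 shards the 27B base weights across the eight devices so only the small rsLoRA adapters receive gradient updates, which is consistent with the roughly 15% of parameters the README says were trained.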