---
license: cc-by-nc-4.0
language:
- ko
library_name: transformers
pipeline_tag: text-generation
---

**The license is `cc-by-nc-4.0`.**

# **GAI-LLM/KoSOLAR-10.7B-mixed-v13**

## Model Details

**Model Developers** Donghoon Oh, Hanmin Myung, Eunyoung Kim (SK C&C G.AI Eng)

**Input** Models input text only.

**Output** Models generate text only.

**Model Architecture**
GAI-LLM/KoSOLAR-10.7B-mixed-v13 is an auto-regressive language model based on the LLaMA2 transformer architecture.

**Base Model** [yanolja/KoSOLAR-10.7B-v0.1](https://huggingface.co/yanolja/KoSOLAR-10.7B-v0.1-deprecated)

**Training Dataset**
- We combined open Korean datasets using a mixed strategy.
- Training was done on 8× A100 80GB GPUs.
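As a rough sanity check on the hardware setup, the fp16 weight footprint of a 10.7B-parameter model can be estimated with simple arithmetic (2 bytes per parameter; gradients, optimizer states, and activations during training add several multiples on top of this):

```python
# Back-of-the-envelope memory estimate for 10.7B parameters in float16.
num_params = 10.7e9      # approximate parameter count of KoSOLAR-10.7B
bytes_per_param = 2      # float16
weight_gib = num_params * bytes_per_param / 1024**3  # bytes -> GiB

print(f"fp16 weights: ~{weight_gib:.1f} GiB")  # ~19.9 GiB
```

This is only the weight footprint: it explains why a single 80GB A100 can hold the model for fp16 inference, while full fine-tuning needs substantially more memory spread across devices.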
# **Model Benchmark**

## KO-LLM leaderboard
- Results are reported on the [Open KO-LLM LeaderBoard](https://huggingface.co/spaces/upstage/open-ko-llm-leaderboard).

# Implementation Code
```python
### GAI-LLM/KoSOLAR-10.7B-mixed-v13
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

repo = "GAI-LLM/KoSOLAR-10.7B-mixed-v13"
model = AutoModelForCausalLM.from_pretrained(
    repo,
    return_dict=True,
    torch_dtype=torch.float16,
    device_map='auto'
)
tokenizer = AutoTokenizer.from_pretrained(repo)
```
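The loaded model and tokenizer can then be used for plain text generation. A minimal sketch, wrapped in a function so the ~20 GiB weight download only happens when it is actually called; the greedy decoding and the `max_new_tokens` value are illustrative choices, not specified by this card:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

def generate(prompt: str, max_new_tokens: int = 128) -> str:
    """Greedily complete `prompt` with GAI-LLM/KoSOLAR-10.7B-mixed-v13.

    Note: downloads ~20 GiB of fp16 weights on first use.
    """
    repo = "GAI-LLM/KoSOLAR-10.7B-mixed-v13"
    tokenizer = AutoTokenizer.from_pretrained(repo)
    model = AutoModelForCausalLM.from_pretrained(
        repo, torch_dtype=torch.float16, device_map="auto"
    )
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    output_ids = model.generate(
        **inputs, max_new_tokens=max_new_tokens, do_sample=False
    )
    # Decode only the newly generated tokens, not the prompt.
    new_tokens = output_ids[0, inputs["input_ids"].shape[1]:]
    return tokenizer.decode(new_tokens, skip_special_tokens=True)
```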