Byungchae committed on commit bf219ee (verified) · 1 parent(s): ee0467c

Update README.md

Files changed (1):
  1. README.md +4 -17

README.md CHANGED
@@ -2,28 +2,15 @@
 license: cc-by-nc-4.0
 language: ko
 ---
-# Model Card for '2311-0011'
-
 ## Developed by : Byungchae Song
 
-## Hardware and Software
-
-* **Hardware**: We utilized an A100x8 * 1 for training our model
-* **Training Factors**: We fine-tuned this model using a combination of the [DeepSpeed library](https://github.com/microsoft/DeepSpeed) and the [HuggingFace Trainer](https://huggingface.co/docs/transformers/main_classes/trainer) / [HuggingFace Accelerate](https://huggingface.co/docs/accelerate/index)
+## Model Number: k2s3_test_0001
 
 ## Base Model :
 * [meta-llama/Llama-2-13b-chat-hf](https://huggingface.co/meta-llama/Llama-2-13b-chat-hf)
 
 ### Training Data
+* in-house dataset
 
-The LDCC-Instruct-Llama-2-ko-13B model was trained with publicly accessible Korean/English data sources. For its fine-tuning, we utilized other public data and underwent some processing and refinement.
-
-We did not incorporate any client data owned by Lotte Data Communication.
-
-## Prompt Template
-```
-### Prompt:
-{instruction}
-### Answer:
-{output}
-```
+### Training Method
+* PEFT QLoRA
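The `### Prompt:` / `### Answer:` template removed in this commit can be applied with plain string formatting; a minimal sketch, assuming a hypothetical `build_prompt` helper (the function name and default are not part of the model card):

```python
# Minimal sketch of the removed "Prompt Template" section:
# fills {instruction} and {output} into the '### Prompt:'/'### Answer:' layout.
# `build_prompt` is a hypothetical helper, not from the original card.
def build_prompt(instruction: str, output: str = "") -> str:
    return f"### Prompt:\n{instruction}\n### Answer:\n{output}"

# At inference time the answer slot is left empty for the model to complete.
print(build_prompt("한국의 수도는 어디인가요?"))
```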
 
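The new "Training Method: PEFT QLoRA" line implies a 4-bit-quantized, frozen base model with trainable LoRA adapters attached via the PEFT library. A configuration sketch under that reading — every hyperparameter value below is an illustrative assumption, not the author's actual setting, and running it requires a GPU plus access to the gated base model:

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

# 4-bit NF4 quantization of the frozen base weights (the "Q" in QLoRA).
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

# LoRA adapter settings; r, alpha, dropout, and target_modules are guesses.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)

model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-13b-chat-hf",  # base model named in the card
    quantization_config=bnb_config,
    device_map="auto",
)
model = get_peft_model(model, lora_config)  # only adapter weights train
model.print_trainable_parameters()
```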