Commit 31d42b8 (verified) by langquantof · 1 parent: ca4cf80

Update README.md

Files changed (1): README.md (+172 −172)
---
language:
- ko
license: mit
tags:
- finance
- extractive-summarization
- sentence-extraction
- role-classification
- korean
- roberta
pipeline_tag: text-classification
base_model: klue/roberta-base
metrics:
- f1
- accuracy
---

# LQ-FSE-base: Korean Financial Sentence Extractor

A model released by LangQuant that extracts representative sentences from financial reports and finance-related news, and classifies each sentence's role (outlook, event, financial, risk).

## Model Description

- **Base Model**: klue/roberta-base
- **Architecture**: Sentence Encoder (RoBERTa) + Inter-sentence Transformer (2 layers) + Dual Classifiers
- **Task**: Extractive Summarization + Role Classification (Multi-task)
- **Language**: Korean
- **Domain**: Financial reports (securities research) and financial news

### Input Constraints

| Parameter | Value | Description |
|-----------|-------|-------------|
| Max sentence length | 128 tokens | Maximum tokens per sentence (longer sentences are truncated) |
| Max sentences per document | 30 | Maximum sentences per document (only the first 30 are used) |
| Input format | Plain text | Sentences are split automatically on `.!?` punctuation |

- **Input**: Korean financial text (securities reports, financial news, etc.)
- **Output**: a representativeness score (0–1) and a role classification (outlook/event/financial/risk) for each sentence
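
The split-and-cap behavior in the table can be sketched as a standalone helper (a hypothetical illustration mirroring the `.!?` splitting rule and the 30-sentence cap; the real pipeline may differ in details):

```python
import re

MAX_SENTENCES = 30  # document cap from the table above


def split_sentences(text: str, max_sentences: int = MAX_SENTENCES) -> list[str]:
    """Split plain text on sentence-final punctuation (. ! ?) and keep at most max_sentences."""
    parts = [s.strip() for s in re.split(r'(?<=[.!?])\s+', text.strip()) if s.strip()]
    return parts[:max_sentences]


sents = split_sentences("실적이 개선됐다. 전망은 밝다! 리스크는 남아 있는가?")
print(sents)  # three sentences, split after '.', '!', and '?'
```

Sentences beyond the cap are silently dropped, so very long documents should be chunked upstream if full coverage matters.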

### Performance

| Metric | Score |
|--------|-------|
| Extraction F1 | 0.705 |
| Role Accuracy | 0.851 |

### Role Labels

| Label | Description |
|-------|-------------|
| `outlook` | Outlook/forecast sentences |
| `event` | Event/incident sentences |
| `financial` | Financial/earnings sentences |
| `risk` | Risk-factor sentences |

## Usage

```python
import re
import torch
from transformers import AutoConfig, AutoModel, AutoTokenizer

repo_id = "LangQuant/LQ-FSE-base"

# Load the model
config = AutoConfig.from_pretrained(repo_id, trust_remote_code=True)
model = AutoModel.from_pretrained(repo_id, trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model.eval()

# Input text
text = (
    "삼성전자의 2024년 4분기 실적이 시장 예상을 상회했다. "
    "메모리 반도체 가격 상승으로 영업이익이 전분기 대비 30% 증가했다. "
    "HBM3E 양산이 본격화되면서 AI 반도체 시장 점유율이 확대될 전망이다."
)

# Split into sentences and tokenize
sentences = [s.strip() for s in re.split(r'(?<=[.!?])\s+', text.strip()) if s.strip()]
max_len, max_sent = config.max_length, config.max_sentences

padded = sentences[:max_sent]
num_real = len(padded)
while len(padded) < max_sent:
    padded.append("")

ids_list, mask_list = [], []
for s in padded:
    if s:
        enc = tokenizer(s, max_length=max_len, padding="max_length", truncation=True, return_tensors="pt")
    else:
        enc = {"input_ids": torch.zeros(1, max_len, dtype=torch.long),
               "attention_mask": torch.zeros(1, max_len, dtype=torch.long)}
    ids_list.append(enc["input_ids"])
    mask_list.append(enc["attention_mask"])

input_ids = torch.cat(ids_list).unsqueeze(0)        # (1, max_sent, max_len)
attention_mask = torch.cat(mask_list).unsqueeze(0)  # (1, max_sent, max_len)
doc_mask = torch.zeros(1, max_sent)
doc_mask[0, :num_real] = 1

# Inference
with torch.no_grad():
    scores, role_logits = model(input_ids, attention_mask, doc_mask)

role_labels = config.role_labels
for i, sent in enumerate(sentences):
    score = scores[0, i].item()
    role = role_labels[role_logits[0, i].argmax().item()]
    marker = "*" if score >= 0.5 else " "
    print(f" {marker} [{score:.4f}] [{role:10s}] {sent}")
```
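
A common way to consume the per-sentence scores is to keep the top-k sentences as the extractive summary, restored to document order. A minimal post-processing sketch (pure Python, independent of the model; the sentences, scores, and roles below are made up for illustration):

```python
def select_top_k(sentences, scores, roles, k=2):
    """Pick the k highest-scoring sentences, returned in original document order."""
    ranked = sorted(range(len(sentences)), key=lambda i: scores[i], reverse=True)[:k]
    return [(sentences[i], roles[i], scores[i]) for i in sorted(ranked)]


# Dummy outputs standing in for the model's scores and argmax roles
sentences = ["Q4 earnings beat expectations.", "Profit rose 30%.", "Share may expand."]
scores = [0.91, 0.34, 0.78]
roles = ["financial", "financial", "outlook"]

for sent, role, score in select_top_k(sentences, scores, roles, k=2):
    print(f"[{score:.2f}] [{role}] {sent}")
```

Sorting the selected indices before returning preserves the original narrative order, which usually reads better than score order.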

## Model Architecture

```
Input Sentences
        ↓
[klue/roberta-base] → [CLS] embeddings per sentence
        ↓
[Inter-sentence Transformer] (2 layers, 8 heads)
        ↓
┌──────────────────┬─────────────────┐
│ Binary Classifier│ Role Classifier │
│ (representative?)│ (outlook/event/ │
│                  │ financial/risk) │
└──────────────────┴─────────────────┘
```
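
The diagram corresponds roughly to the following PyTorch sketch (a simplified, hypothetical reimplementation for illustration only; the actual definition lives in `model.py`, and the per-sentence [CLS] embeddings are replaced here by random tensors so the sketch runs standalone):

```python
import torch
import torch.nn as nn


class InterSentenceExtractor(nn.Module):
    """Hypothetical sketch of the dual-head document encoder diagrammed above.

    Takes per-sentence [CLS] embeddings of shape (batch, num_sents, 768) --
    produced by klue/roberta-base in the real model -- and returns an
    extraction score and role logits for every sentence.
    """

    def __init__(self, hidden=768, num_roles=4):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model=hidden, nhead=8, batch_first=True)
        self.inter_sentence = nn.TransformerEncoder(layer, num_layers=2)
        self.extract_head = nn.Linear(hidden, 1)       # representative-sentence score
        self.role_head = nn.Linear(hidden, num_roles)  # outlook/event/financial/risk

    def forward(self, sent_embs, doc_mask):
        # doc_mask: 1 for real sentences, 0 for padding
        h = self.inter_sentence(sent_embs, src_key_padding_mask=(doc_mask == 0))
        scores = torch.sigmoid(self.extract_head(h)).squeeze(-1)  # (batch, num_sents)
        return scores, self.role_head(h)                          # (batch, num_sents, num_roles)


model = InterSentenceExtractor()
embs = torch.randn(1, 30, 768)  # stand-in for per-sentence [CLS] embeddings
mask = torch.zeros(1, 30)
mask[0, :3] = 1                 # pretend the document has 3 real sentences
scores, role_logits = model(embs, mask)
print(scores.shape, role_logits.shape)
```

The key design point is that sentence-level context flows only through the small inter-sentence transformer, so the heavy RoBERTa encoder sees each sentence independently.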

## Training

- Optimizer: AdamW (lr=2e-5, weight_decay=0.01)
- Scheduler: Linear warmup (10%)
- Loss: BCE (extraction) + CrossEntropy (role), role_weight=0.5
- Max sentence length: 128 tokens
- Max sentences per document: 30
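
The combined objective can be written as a short sketch (assuming the role_weight=0.5 stated above; masking the loss to real sentences is an assumption, and the labels below are dummy values):

```python
import torch
import torch.nn.functional as F


def multitask_loss(scores, role_logits, ext_labels, role_targets, doc_mask, role_weight=0.5):
    """BCE on extraction scores plus weighted cross-entropy on role logits,
    computed only over real (unpadded) sentences."""
    mask = doc_mask.bool()
    ext_loss = F.binary_cross_entropy(scores[mask], ext_labels[mask].float())
    role_loss = F.cross_entropy(role_logits[mask], role_targets[mask])
    return ext_loss + role_weight * role_loss


# Dummy batch: 1 document, 3 sentences, 4 role classes
scores = torch.tensor([[0.9, 0.2, 0.7]])
role_logits = torch.randn(1, 3, 4)
ext_labels = torch.tensor([[1, 0, 1]])
role_targets = torch.tensor([[2, 1, 0]])
doc_mask = torch.ones(1, 3)

loss = multitask_loss(scores, role_logits, ext_labels, role_targets, doc_mask)
print(float(loss))
```

Weighting the role term at 0.5 keeps extraction as the primary task while the role head acts as an auxiliary signal.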

## Files

- `model.py`: Model definition (DocumentEncoderConfig, DocumentEncoderForExtractiveSummarization)
- `config.json`: Model configuration
- `model.safetensors`: Model weights
- `inference_example.py`: Inference helper with usage example
- `convert_checkpoint.py`: Script to convert the original `.pt` checkpoint

## Disclaimer

- This model is provided **for research and informational purposes only**.
- Its output is **not investment advice, financial consulting, or a trading recommendation.**
- LangQuant and the developers assume **no legal liability whatsoever** for investment decisions based on the model's predictions.
- No warranty is given as to the model's accuracy, completeness, or timeliness; always seek professional advice before making actual investment decisions.
- Financial markets are inherently uncertain, and a model trained on historical data does not guarantee future performance.

## Usage Restrictions

- **Prohibited:**
  - Use for illegal purposes, such as market manipulation or generating false information
  - Use as the sole decision-making mechanism of an automated trading system
  - Presenting model output to third parties as if it were professional financial advice
- **Permitted:**
  - Academic research and educational use
  - Use as an auxiliary tool in financial text-analysis pipelines
  - Use as reference material for internal research and analysis
- For commercial use, contacting LangQuant in advance is recommended.

## Contributors

- **[Taegyeong Lee](https://www.linkedin.com/in/taegyeong-lee/)** (taegyeong.leaf@gmail.com)
- **[Dong Young Kim](https://www.linkedin.com/in/dykim04/)** (dong-kim@student.42kl.edu.my) — Ecole 42
- **[Seunghyun Hwang](https://www.linkedin.com/in/seung-hyun-hwang-53700124a/)** (hsh1030@g.skku.edu) — DSSAL