DANGDOCAO committed on
Commit 27795bb · verified · 1 Parent(s): 5818d05

Upload 4 files

.gitattributes CHANGED
@@ -1,3 +1,4 @@
 Data_totalQCAtriples30K/30ktrain.json filter=lfs diff=lfs merge=lfs -text
 Datatest1k/testorgin1k.json filter=lfs diff=lfs merge=lfs -text
 Datatrain29k/29kcorpustag.json filter=lfs diff=lfs merge=lfs -text
+HVU_QA/30ktrain.json filter=lfs diff=lfs merge=lfs -text
HVU_QA/30ktrain.json ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:a1989e727e0bb58732b6cd569c87fe2e474816d69064d0769f34cd307544d2fa
size 11786416
HVU_QA/README.md ADDED
@@ -0,0 +1,290 @@
# HVU_QA

**HVU_QA** is a project dedicated to sharing datasets and tools for **question generation in natural language processing (NLP)**, developed and maintained by the research team at **Hung Vuong University (HVU), Phu Tho, Vietnam**.
The project is supported by Hung Vuong University with the aim of advancing research and applications in low-resource language processing, particularly for Vietnamese.

---

## 📚 Overview

This repository enables you to:

1. Fine-tune the [VietAI/vit5-base](https://huggingface.co/VietAI/vit5-base) model on your own question generation (QG) dataset.
2. Generate multiple, diverse questions from a user-provided text passage (context).

---

## 📁 Datasets

* Built following the **SQuAD v2.0 standard**, ensuring compatibility with existing NLP pipelines.
* Includes tens of thousands of high-quality **Question–Context–Answer (QCA) triples**.
* Suitable for both **training** and **evaluation**.

### Data folders

* `Data_totalQCAtriples30K/` → **30,000 training samples** (`30ktrain.json`)
* `Datatest1k/` → **1,000 samples** for manual & automatic evaluation (`testorgin1k.json`)
* `Datatrain29k/` → **29,000 preprocessed samples** (`29kcorpustag.json`)

> All data files are UTF-8 encoded and ready for use in NLP pipelines.

---
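
Because the files follow the SQuAD v2.0 layout, iterating over the Question–Context–Answer triples takes only a few lines of standard-library Python. A minimal sketch (the tiny inline `sample` stands in for the real JSON files, whose top-level shape is assumed to match):

```python
def iter_qca(squad):
    """Yield (question, context, answer) triples from SQuAD v2.0-style data."""
    for article in squad["data"]:
        for paragraph in article["paragraphs"]:
            # SQuAD stores the passage on the paragraph; fall back to the title
            context = paragraph.get("context", article.get("title", ""))
            for qa in paragraph["qas"]:
                if qa.get("is_impossible", False) or not qa.get("answers"):
                    continue  # skip unanswerable entries
                yield qa["question"], context, qa["answers"][0]["text"]

# Tiny in-memory sample with the assumed shape; with the real data you would
# json.load() e.g. 30ktrain.json instead.
sample = {"data": [{"title": "Cà phê", "paragraphs": [{
    "context": "Cà phê sữa đá là đồ uống nổi tiếng ở Việt Nam.",
    "qas": [{"question": "Đồ uống nào nổi tiếng ở Việt Nam?",
             "is_impossible": False,
             "answers": [{"text": "Cà phê sữa đá", "answer_start": 0}]}]}]}]}

triples = list(iter_qca(sample))
print(len(triples), triples[0][2])  # → 1 Cà phê sữa đá
```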

## 📁 Vietnamese Question Generation Tool

A **command-line tool** for:

* **Fine-tuning** a question generation model.
* **Automatically generating questions** from Vietnamese text.

Built on **Hugging Face Transformers (VietAI/vit5-base)** and **PyTorch**.

---

## ✨ Features

* Fine-tune a question generation model on SQuAD v2.0-format data.
* Generate diverse and creative questions from text passages.
* Flexible generation parameters (`top_k`, `top_p`, `temperature`, etc.).
* Simple command-line usage.
* GPU support if available.

---

## 📊 Evaluation Results

We conducted both **manual evaluation** (500 samples) and **automatic evaluation** (1,000 samples).

| Evaluation Type  | Precision | Recall | F1-Score |
|------------------|-----------|--------|----------|
| Automatic (1000) | 0.85      | 0.83   | 0.84     |
| Manual (500)     | 0.88      | 0.86   | 0.87     |

➡️ The model generates diverse, grammatically correct, and contextually appropriate questions.

---
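
As a sanity check, the F1 values in the table follow from precision and recall as their harmonic mean:

```python
def f1(precision, recall):
    """Harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

print(round(f1(0.85, 0.83), 2))  # automatic evaluation row → 0.84
print(round(f1(0.88, 0.86), 2))  # manual evaluation row → 0.87
```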

## 🧩 Creation Process

The dataset was built using a **4-stage automated pipeline**:

1. Select relevant QA websites from trusted sources.
2. Crawl them automatically to collect raw QA pages.
3. Extract semantic tags to obtain clean Question–Context–Answer triples.
4. Apply AI-assisted filtering to remove noisy or inconsistent samples.

---
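
The README does not spell out how the stage-4 filtering works. Purely as an illustration, one common ingredient, near-duplicate removal in the spirit of the `_near_duplicate` helper shipped in `generate_question.py`, can be sketched with the standard library (the 0.9 threshold is an assumption):

```python
from difflib import SequenceMatcher

def dedupe(samples, thr=0.9):
    """Keep a sample only if its question is not near-identical to one already kept."""
    kept = []
    for s in samples:
        if all(SequenceMatcher(None, s["question"], k["question"]).ratio() < thr
               for k in kept):
            kept.append(s)
    return kept

samples = [
    {"question": "Cà phê sữa đá là gì?"},
    {"question": "Cà phê sữa đá là gì ?"},  # near-duplicate of the first
    {"question": "Nguồn gốc của cà phê sữa đá?"},
]
print(len(dedupe(samples)))  # → 2
```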

## 📝 Quality Evaluation

A `VietAI/vit5-base` model fine-tuned on **HVU_QA** achieved:

* **BLEU score**: 90.61
* **Semantic similarity**: 97.0% of pairs with cosine ≥ 0.8
* **Human evaluation**:
  * Grammar: **4.58 / 5**
  * Usefulness: **4.29 / 5**

➡️ These results confirm that **HVU_QA is a high-quality resource** for developing robust FAQ-style question generation models.

---
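
The 97.0% figure counts the share of generated/reference pairs whose cosine similarity is at least 0.8. The README does not say which text representation was used to embed the questions; a bag-of-words sketch of the bookkeeping, using only the standard library:

```python
from collections import Counter
from math import sqrt

def cosine(a, b):
    """Cosine similarity between two texts over bag-of-words counts."""
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[t] * vb[t] for t in va)
    norm = sqrt(sum(v * v for v in va.values())) * sqrt(sum(v * v for v in vb.values()))
    return dot / norm if norm else 0.0

def share_above(pairs, thr=0.8):
    """Fraction of (generated, reference) pairs with cosine >= thr."""
    return sum(cosine(g, r) >= thr for g, r in pairs) / len(pairs)

pairs = [
    ("cà phê sữa đá là gì", "cà phê sữa đá là gì"),  # identical pair
    ("hoàn toàn khác", "không giống chút nào"),      # no shared tokens
]
print(share_above(pairs))  # → 0.5
```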

## 📂 Project Structure

```
.
├── t5-viet-qg-finetuned/
├── fine_tune_qg.py
├── generate_question.py
├── 30ktrain.json
└── README.md
```

---

## 🛠️ Requirements

* Python 3.8+
* PyTorch >= 1.9
* Transformers >= 4.30
* scikit-learn
* Fine-tuned model (download at: [link](https://huggingface.co/datasets/DANGDOCAO/GeneratingQuestions/tree/main))

---

## 🚀 Setup

### Step 1: Download and Extract

1. Download `GenerationQuestions.zip`
2. Extract into a folder, e.g.:

```
D:\your\HVU_QA
```

### Step 2: Add to Environment Path (if needed)

1. Open **System Properties → Environment Variables**
2. Select `Path` → **Edit** → **New**
3. Add the path, e.g.:

```
D:\your\HVU_QA
```

### Step 3: Open in Visual Studio Code

```
File > Open Folder > D:\your\HVU_QA
```

### Step 4: Install Required Libraries

Open **Terminal** and run:

#### 📦 Windows (PowerShell)

**Required only**

```powershell
python -m pip install --upgrade pip
pip install torch transformers datasets scikit-learn sentencepiece safetensors
```

**Required + Optional**

```powershell
python -m pip install --upgrade pip
pip install torch transformers datasets scikit-learn sentencepiece safetensors accelerate tensorboard evaluate sacrebleu rouge-score nltk
```

#### 📦 Linux / macOS (bash/zsh)

**Required only**

```bash
python3 -m pip install --upgrade pip
pip install torch transformers datasets scikit-learn sentencepiece safetensors
```

**Required + Optional**

```bash
python3 -m pip install --upgrade pip
pip install torch transformers datasets scikit-learn sentencepiece safetensors accelerate tensorboard evaluate sacrebleu rouge-score nltk
```

✅ Verify installation (the one-liner below imports the optional packages as well, so run it after the **Required + Optional** install):

* Windows (PowerShell)

```powershell
python -c "import torch, transformers, datasets, sklearn, sentencepiece, safetensors, accelerate, tensorboard, evaluate, sacrebleu, rouge_score, nltk; print('✅ All dependencies installed correctly!')"
```

* Linux/macOS

```bash
python3 -c "import torch, transformers, datasets, sklearn, sentencepiece, safetensors, accelerate, tensorboard, evaluate, sacrebleu, rouge_score, nltk; print('✅ All dependencies installed correctly!')"
```

---
193
+
194
+ ## 📚 Usage
195
+
196
+ * Train and evaluate a question generation model.
197
+ * Develop Vietnamese NLP tools.
198
+ * Conduct linguistic research.
199
+
200
+ ### 🔹 Training (Fine-tuning)
201
+
202
+ When you run `fine_tune_qg.py`, the script will:
203
+
204
+ 1. Load the dataset from **`30ktrain.json`**
205
+ 2. Fine-tune the `VietAI/vit5-base` model
206
+ 3. Save the trained model into a new folder named **`t5-viet-qg-finetuned/`**
207
+
208
+ Run:
209
+
210
+ ```bash
211
+ python fine_tune_qg.py
212
+ ```
213
+
214
+ ### 🔹 Generating Questions
215
+
216
+ ```bash
217
+ python generate_question.py
218
+ ```
219
+
220
+ **Example:**
221
+
222
+ ```
223
+ Input passage:
224
+ Cà phê sữa đá là đồ uống nổi tiếng ở Việt Nam.
225
+
226
+ Number of questions: 5
227
+ ```
228
+
229
+ ✅ Output:
230
+
231
+ 1. Loại cà phê nào nổi tiếng ở Việt Nam?
232
+ 2. Tại sao cà phê sữa đá được yêu thích?
233
+ 3. Cà phê sữa đá gồm những nguyên liệu gì?
234
+ 4. Nguồn gốc của cà phê sữa đá là từ đâu?
235
+ 5. Cà phê sữa đá Việt Nam được pha chế như thế nào?
236
+
237
+ ---

## ⚙️ Generation Settings

In `generate_question.py`, you can adjust:

* `top_k`, `top_p`, `temperature`, `no_repeat_ngram_size`, `repetition_penalty`

---
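
Conceptually, `top_k` keeps only the k most likely next tokens, `top_p` (nucleus sampling) then keeps the smallest prefix of those whose probabilities sum to p, and `temperature` flattens or sharpens the distribution beforehand. A standard-library sketch of that filtering step (Transformers' real implementation works on tensors and applies these knobs as configurable logits processors):

```python
import math

def filter_candidates(logits, top_k=60, top_p=0.95, temperature=0.9):
    """Return the token indices that survive top-k then top-p filtering."""
    scaled = [l / temperature for l in logits]        # temperature scaling first
    ranked = sorted(range(len(scaled)), key=lambda i: scaled[i], reverse=True)[:top_k]
    total = sum(math.exp(scaled[i]) for i in ranked)  # softmax over the survivors
    keep, cum = [], 0.0
    for i in ranked:
        cum += math.exp(scaled[i]) / total
        keep.append(i)
        if cum >= top_p:                              # nucleus cut-off reached
            break
    return keep

# Token 3 is dropped by top_k=3; token 2 falls outside the 0.9 nucleus.
print(filter_candidates([2.0, 1.0, 0.1, -3.0], top_k=3, top_p=0.9, temperature=1.0))  # → [0, 1]
```

Sampling then draws only from the surviving candidates, which is why higher `top_p` or `temperature` yields more varied questions.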

## 🤝 Contribution

We welcome contributions:

* Open issues
* Submit pull requests
* Suggest improvements or add datasets

---

## 📄 Citation

If you use this repository or datasets in research, please cite:

**Ha Nguyen-Tien, Phuc Le-Hong, Dang Do-Cao, Cuong Nguyen-Hung, Chung Mai-Van. 2025. A Method to Build QA Corpora for Low-Resource Languages. Proceedings of KSE 2025. ACM TALLIP.**

```bibtex
@inproceedings{nguyen2025hvuqa,
  title={A Method to Build QA Corpora for Low-Resource Languages},
  author={Ha Nguyen-Tien and Phuc Le-Hong and Dang Do-Cao and Cuong Nguyen-Hung and Chung Mai-Van},
  booktitle={Proceedings of KSE 2025},
  year={2025}
}
```

---

## 📬 Contact

* **Ha Nguyen-Tien** (Corresponding author)
  📧 [nguyentienha@hvu.edu.vn](mailto:nguyentienha@hvu.edu.vn)

* **Phuc Le-Hong**
  📧 [Lehongphuc20021408@gmail.com](mailto:Lehongphuc20021408@gmail.com)

* **Dang Do-Cao**
  📧 [docaodang532001@gmail.com](mailto:docaodang532001@gmail.com)

📍 Faculty of Engineering and Technology, Hung Vuong University, Phu Tho, Vietnam
🌐 [https://hvu.edu.vn](https://hvu.edu.vn)

---

*This repository is part of our ongoing effort to support Vietnamese NLP and make language technology more accessible for low-resource and underrepresented languages.*
HVU_QA/fine_tune_qg.py ADDED
@@ -0,0 +1,102 @@
import json
from datasets import Dataset
from sklearn.model_selection import train_test_split
from transformers import (
    T5Tokenizer,
    T5ForConditionalGeneration,
    TrainingArguments,
    Trainer
)

def load_squad_data(file_path):
    with open(file_path, "r", encoding="utf-8") as f:
        squad_data = json.load(f)

    data = []
    for article in squad_data["data"]:
        context = article.get("title", "")
        for paragraph in article["paragraphs"]:
            for qa in paragraph["qas"]:
                if not qa.get("is_impossible", False) and qa.get("answers"):
                    answer = qa["answers"][0]["text"]
                    question = qa["question"]
                    input_text = f"answer: {answer} context: {context}"
                    data.append({"input": input_text, "target": question})
    return data

def preprocess_function(example, tokenizer, max_input_length=512, max_target_length=64):
    model_inputs = tokenizer(
        example["input"],
        max_length=max_input_length,
        padding="max_length",
        truncation=True,
    )
    labels = tokenizer(
        text_target=example["target"],
        max_length=max_target_length,
        padding="max_length",
        truncation=True,
    )
    model_inputs["labels"] = labels["input_ids"]
    return model_inputs

def main():
    data_path = "30ktrain.json"
    output_dir = "t5-viet-qg-finetuned"
    logs_dir = "logs"
    model_name = "VietAI/vit5-base"

    print("📥 Loading model and tokenizer...")
    tokenizer = T5Tokenizer.from_pretrained(model_name)
    model = T5ForConditionalGeneration.from_pretrained(model_name)

    print("📚 Reading and splitting data...")
    raw_data = load_squad_data(data_path)
    train_data, val_data = train_test_split(raw_data, test_size=0.2, random_state=42)

    train_dataset = Dataset.from_list(train_data)
    val_dataset = Dataset.from_list(val_data)

    tokenized_train = train_dataset.map(
        lambda x: preprocess_function(x, tokenizer),
        batched=True,
        remove_columns=["input", "target"]
    )
    tokenized_val = val_dataset.map(
        lambda x: preprocess_function(x, tokenizer),
        batched=True,
        remove_columns=["input", "target"]
    )

    print("⚙️ Configuring training...")
    training_args = TrainingArguments(
        output_dir=output_dir,
        overwrite_output_dir=True,
        per_device_train_batch_size=1,
        gradient_accumulation_steps=1,
        num_train_epochs=3,
        learning_rate=2e-4,
        weight_decay=0.01,
        warmup_steps=0,
        logging_dir=logs_dir,
        logging_steps=10,
        fp16=False
    )

    print("🚀 Training the model...")
    trainer = Trainer(
        model=model,
        args=training_args,
        train_dataset=tokenized_train,
        eval_dataset=tokenized_val,  # still passed so you can run evaluation manually later
        tokenizer=tokenizer,
    )
    trainer.train()

    print("💾 Saving the model...")
    model.save_pretrained(output_dir)
    tokenizer.save_pretrained(output_dir)
    print("✅ Training complete!")

if __name__ == "__main__":
    main()
HVU_QA/generate_question.py ADDED
@@ -0,0 +1,154 @@
# generate_question.py
import json
from difflib import SequenceMatcher
from transformers import T5Tokenizer, T5ForConditionalGeneration
from transformers.utils import logging as hf_logging

# ===== Silence transformers warnings =====
hf_logging.set_verbosity_error()

# ===== Path configuration =====
MODEL_DIR = "t5-viet-qg-finetuned"  # directory of the fine-tuned model
DATA_PATH = "30ktrain.json"         # original JSON data file

# ===== Load model & tokenizer once =====
tokenizer = T5Tokenizer.from_pretrained(MODEL_DIR)
model = T5ForConditionalGeneration.from_pretrained(MODEL_DIR)

def find_best_match_from_context(user_context, squad_data):
    """
    Find the closest record based on article.title (kept identical to the original logic).
    Returns a tuple (context_title, answer_text, question_text) or None.
    """
    best_score, best_entry = 0.0, None
    ui = user_context.lower()

    for article in squad_data.get("data", []):
        context_title = article.get("title", "")
        score_title = SequenceMatcher(None, ui, context_title.lower()).ratio()

        for paragraph in article.get("paragraphs", []):
            for qa in paragraph.get("qas", []):
                answers = qa.get("answers", [])
                if not answers:
                    continue
                answer_text = answers[0].get("text", "").strip()
                question_text = qa.get("question", "").strip()

                score = score_title
                if score > best_score:
                    best_score = score
                    best_entry = (context_title, answer_text, question_text)

    return best_entry

def _near_duplicate(q, seen, thr=0.90):
    """Filter out near-duplicate questions based on similarity ratio."""
    for s in seen:
        if SequenceMatcher(None, q, s).ratio() >= thr:
            return True
    return False

def generate_questions(user_context,
                       total_questions=20,
                       batch_size=10,
                       top_k=60,
                       top_p=0.95,
                       temperature=0.9,
                       max_input_len=512,
                       max_new_tokens=64):
    # Load the JSON data
    with open(DATA_PATH, "r", encoding="utf-8") as f:
        squad_data = json.load(f)

    # Find the closest matching record (NOT printed to the screen)
    best_entry = find_best_match_from_context(user_context, squad_data)
    if best_entry is None:
        print("❌ No matching data found in the JSON file.")
        return

    _, answer, _ = best_entry  # only the original answer is needed, to preserve integrity

    # Prepare the model input (set max_length to avoid truncation warnings)
    input_text = f"answer: {answer} context: {user_context}"
    inputs = tokenizer(
        input_text,
        return_tensors="pt",
        truncation=True,
        max_length=max_input_len
    )

    # Generate questions in batches to save RAM
    unique_questions = []
    remaining = total_questions

    while remaining > 0:
        n = min(batch_size, remaining)
        outputs = model.generate(
            **inputs,
            do_sample=True,
            top_k=top_k,
            top_p=top_p,
            temperature=temperature,
            max_new_tokens=max_new_tokens,  # length of the generated part
            num_return_sequences=n,
            no_repeat_ngram_size=3,
            repetition_penalty=1.12
        )

        for out in outputs:
            q = tokenizer.decode(out, skip_special_tokens=True).strip()
            if len(q) < 5:  # drop questions that are too short
                continue
            if not _near_duplicate(q, unique_questions, thr=0.90):
                unique_questions.append(q)

        remaining = total_questions - len(unique_questions)
        if remaining <= 0:
            break

    # Trim to the requested number
    unique_questions = unique_questions[:total_questions]

    print("✅ Newly generated questions:")
    for i, q in enumerate(unique_questions, 1):
        print(f"{i}. {q}")

if __name__ == "__main__":
    # Read the input passage
    user_context = input("\nInput passage:\n ").strip()

    # Read the desired number of questions (optional)
    raw_n = input("\nNumber of questions: ").strip()
    if raw_n == "":
        total_questions = 20
    else:
        try:
            total_questions = int(raw_n)
        except ValueError:
            print("⚠️ Invalid value. Using the default of 20.")
            total_questions = 20

    # Safety bounds
    if total_questions < 1:
        total_questions = 1
    if total_questions > 200:
        print("⚠️ Capped at 200 questions. Generating 200.")
        total_questions = 200

    # batch_size scales with the request size
    batch_size = 10 if total_questions >= 30 else min(10, total_questions)

    print("\n🔍 Analyzing data...\n")

    generate_questions(
        user_context=user_context,
        total_questions=total_questions,
        batch_size=batch_size,
        top_k=60,
        top_p=0.95,
        temperature=0.9,
        max_input_len=512,
        max_new_tokens=64
    )