---
license: apache-2.0
base_model: Qwen/Qwen2.5-7B-Instruct
tags:
- qwen
- qwen2.5
- mathematics
- vietnamese
- sft
- flash-attention-2
datasets:
- 5CD-AI/Vietnamese-395k-meta-math-MetaMathQA-gg-translated
language:
- vi
pipeline_tag: text-generation
library_name: trl
---

# Qwen2.5-7B-ViMetaMathQA-Mini

This model is a fine-tuned version of [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct) optimized for solving mathematical problems in **Vietnamese**.

It was trained on a 100,000-sample subset of the translated MetaMathQA dataset, using high-performance training techniques including **Flash Attention 2** and **BFloat16** precision on NVIDIA H100 hardware.

## Model Details

- **Developed by:** PeterPaker123
- **Language:** Vietnamese
- **Base Model:** Qwen/Qwen2.5-7B-Instruct
- **Fine-tuning Dataset:** 5CD-AI/Vietnamese-395k-meta-math-MetaMathQA-gg-translated (100k subset)
- **Task:** Mathematical Reasoning and Problem Solving

## Training Configuration

The model was trained with the following settings to balance efficiency and reasoning quality:

- **Hardware:** NVIDIA H100 80GB HBM3
- **Optimization:** Flash Attention 2, TF32 enabled
- **Precision:** BFloat16 (mixed precision)
- **Optimizer:** AdamW (8-bit)
- **Learning Rate:** 1e-5
- **Batch Size:** 4 (per device)
- **Gradient Accumulation:** 4 (effective batch size: 4 × 4 = 16)
- **Max Sequence Length:** 2048 tokens (with sequence packing)
- **Epochs:** 1

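Since the card lists `trl` as the training library, the settings above can be sketched as a TRL `SFTTrainer` setup. This is a minimal reconstruction, not the author's actual script: the output directory name, the exact argument names (which vary between `trl` versions), and the dataset split are assumptions.

```python
# Hypothetical sketch of the training setup described above (TRL SFTTrainer).
# All names here are assumptions; only the hyperparameter values come from the card.
import torch
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import SFTConfig, SFTTrainer

model_name = "Qwen/Qwen2.5-7B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=torch.bfloat16,                    # BFloat16 mixed precision
    attn_implementation="flash_attention_2",       # Flash Attention 2
)

# 100k-sample subset of the translated MetaMathQA dataset
dataset = load_dataset(
    "5CD-AI/Vietnamese-395k-meta-math-MetaMathQA-gg-translated",
    split="train[:100000]",
)

config = SFTConfig(
    output_dir="qwen2.5-7b-vimetamathqa-mini",     # assumed name
    num_train_epochs=1,
    per_device_train_batch_size=4,
    gradient_accumulation_steps=4,                 # effective batch size 16
    learning_rate=1e-5,
    bf16=True,
    tf32=True,                                     # TF32 for matmuls on H100
    optim="adamw_bnb_8bit",                        # 8-bit AdamW
    max_seq_length=2048,
    packing=True,                                  # sequence packing
)

trainer = SFTTrainer(
    model=model,
    args=config,
    train_dataset=dataset,
    processing_class=tokenizer,
)
trainer.train()
```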
## Intended Use

This model is designed to act as a mathematical assistant for Vietnamese speakers. It is particularly effective at:
- Solving simple algebra problems.
- Following Vietnamese instructional prompts for mathematical logic.

### System Prompt
For best results, use the system prompt used during training:
> `Bạn là một chuyên gia toán học. Hãy giải bài toán sau bằng tiếng Việt.`
> ("You are a mathematics expert. Solve the following problem in Vietnamese.")

## Usage Example

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

model_name = "PeterPaker123/Qwen2.5-7B-ViMetaMathQA-Mini"

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=torch.bfloat16,
    device_map="auto",
    attn_implementation="flash_attention_2",  # recommended on H100/A100/L4; omit if flash-attn is not installed
)

messages = [
    {"role": "system", "content": "Bạn là một chuyên gia toán học. Hãy giải bài toán sau bằng tiếng Việt."},
    {"role": "user", "content": "Tìm x, biết 2x + 5 = 15."},
]

# Format the conversation with the model's chat template, then generate
text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(text, return_tensors="pt").to(model.device)

outputs = model.generate(**inputs, max_new_tokens=512)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```