Improve language tag

#2
by lbourdois - opened
Files changed (1)
  1. README.md +78 -64
README.md CHANGED
@@ -1,64 +1,78 @@
- ---
- base_model: Qwen/Qwen2.5-7B-Instruct
- library_name: transformers
- license: apache-2.0
- tags:
- - llama-factory
- - generated_from_trainer
- pipeline_tag: text-generation
- model-index:
- - name: WritingBench-Critic-Model-Qwen-7B
-   results: []
- ---
-
-
- <!-- This model card has been generated automatically according to the information the Trainer had access to. You
- should probably proofread and complete it, then remove this comment. -->
-
- # WritingBench-Critic-Model-Qwen-7B
-
- <p align="center">
- 📃 <a href="https://arxiv.org/abs/2503.05244" target="_blank">[Paper]</a> • 🚀 <a href="https://github.com/X-PLUG/WritingBench" target="_blank">[Github Repo]</a> • 📏 <a href="https://huggingface.co/AQuarterMile/WritingBench-Critic-Model-Qwen-7B" target="_blank">[Critic Model]</a> • ✍️ <a href="https://huggingface.co/AQuarterMile/Writing-Model-Qwen-7B" target="_blank">[Writer-7B]</a> <a href="https://huggingface.co/AQuarterMile/Writing-Model-Qwen-32B-thinking" target="_blank">[Writer-32B]</a>
- </p>
-
- This model is fine-tuned from [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct) on a 50K SFT dataset for writing evaluation tasks.
-
- For each criterion, the evaluator independently assigns a score on a 10-point scale to a response, providing both a score and a justification.
-
-
- ## Training procedure
-
- ### Training hyperparameters
-
- The following hyperparameters were used during training:
- - learning_rate: 7e-06
- - train_batch_size: 1
- - eval_batch_size: 8
- - seed: 42
- - distributed_type: multi-GPU
- - num_devices: 8
- - gradient_accumulation_steps: 8
- - total_train_batch_size: 64
- - total_eval_batch_size: 64
- - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- - lr_scheduler_type: cosine
- - lr_scheduler_warmup_ratio: 0.1
- - num_epochs: 3
-
- ### Framework versions
-
- - Transformers 4.46.1
- - Pytorch 2.5.1+cu124
- - Datasets 3.1.0
- - Tokenizers 0.20.3
-
- ## 📝 Citation
-
- ```
- @misc{wu2025writingbench,
-   title={WritingBench: A Comprehensive Benchmark for Generative Writing},
-   author={Yuning Wu and Jiahao Mei and Ming Yan and Chenliang Li and Shaopeng Lai and Yuran Ren and Zijia Wang and Ji Zhang and Mengyue Wu and Qin Jin and Fei Huang},
-   year={2025},
-   url={https://arxiv.org/abs/2503.05244},
- }
- ```
+ ---
+ base_model: Qwen/Qwen2.5-7B-Instruct
+ library_name: transformers
+ license: apache-2.0
+ tags:
+ - llama-factory
+ - generated_from_trainer
+ pipeline_tag: text-generation
+ language:
+ - zho
+ - eng
+ - fra
+ - spa
+ - por
+ - deu
+ - ita
+ - rus
+ - jpn
+ - kor
+ - vie
+ - tha
+ - ara
+ model-index:
+ - name: WritingBench-Critic-Model-Qwen-7B
+   results: []
+ ---
+
+
+ <!-- This model card has been generated automatically according to the information the Trainer had access to. You
+ should probably proofread and complete it, then remove this comment. -->
+
+ # WritingBench-Critic-Model-Qwen-7B
+
+ <p align="center">
+ 📃 <a href="https://arxiv.org/abs/2503.05244" target="_blank">[Paper]</a> • 🚀 <a href="https://github.com/X-PLUG/WritingBench" target="_blank">[GitHub Repo]</a> • 📏 <a href="https://huggingface.co/AQuarterMile/WritingBench-Critic-Model-Qwen-7B" target="_blank">[Critic Model]</a> • ✍️ <a href="https://huggingface.co/AQuarterMile/Writing-Model-Qwen-7B" target="_blank">[Writer-7B]</a> <a href="https://huggingface.co/AQuarterMile/Writing-Model-Qwen-32B-thinking" target="_blank">[Writer-32B]</a>
+ </p>
+
+ This model is fine-tuned from [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct) on a 50K SFT dataset for writing evaluation tasks.
+
+ For each criterion, the evaluator independently assigns a score on a 10-point scale to a response, providing both the score and a justification.
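+
+ A minimal usage sketch is shown below. The criterion and prompt wording are illustrative placeholders, not the official evaluation template; the actual WritingBench rubrics and prompts are provided in the GitHub repo.
+
+ ```python
+ # Hypothetical usage sketch: load the critic model and ask it to score a
+ # response against a single criterion. The prompt below is a placeholder,
+ # not the official WritingBench evaluation template.
+ from transformers import AutoModelForCausalLM, AutoTokenizer
+
+ model_id = "AQuarterMile/WritingBench-Critic-Model-Qwen-7B"
+ tokenizer = AutoTokenizer.from_pretrained(model_id)
+ model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")
+
+ messages = [{"role": "user", "content": (
+     "Criterion: clarity and coherence of the argument.\n"
+     "Response to evaluate:\n<response text here>\n\n"
+     "Rate the response on a 10-point scale for this criterion and justify the score."
+ )}]
+ input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
+ output = model.generate(input_ids, max_new_tokens=512)
+ print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
+ ```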
+
+
+ ## Training procedure
+
+ ### Training hyperparameters
+
+ The following hyperparameters were used during training (a rough `transformers.TrainingArguments` equivalent is sketched after the list):
+ - learning_rate: 7e-06
+ - train_batch_size: 1
+ - eval_batch_size: 8
+ - seed: 42
+ - distributed_type: multi-GPU
+ - num_devices: 8
+ - gradient_accumulation_steps: 8
+ - total_train_batch_size: 64
+ - total_eval_batch_size: 64
+ - optimizer: AdamW (adamw_torch) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
+ - lr_scheduler_type: cosine
+ - lr_scheduler_warmup_ratio: 0.1
+ - num_epochs: 3
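+
+ As a rough sketch of the equivalent `TrainingArguments` (an assumed mapping; the actual run was launched through LLaMA-Factory, and the total train batch size of 64 is simply 1 per device × 8 accumulation steps × 8 GPUs):
+
+ ```python
+ # Hypothetical TrainingArguments equivalent of the settings listed above.
+ # The original training used LLaMA-Factory, so field names and defaults may differ.
+ from transformers import TrainingArguments
+
+ args = TrainingArguments(
+     output_dir="writingbench-critic-7b",   # placeholder output path
+     learning_rate=7e-6,
+     per_device_train_batch_size=1,
+     per_device_eval_batch_size=8,
+     gradient_accumulation_steps=8,         # 1 x 8 x 8 GPUs = 64 effective batch size
+     num_train_epochs=3,
+     lr_scheduler_type="cosine",
+     warmup_ratio=0.1,
+     optim="adamw_torch",
+     adam_beta1=0.9,
+     adam_beta2=0.999,
+     adam_epsilon=1e-8,
+     seed=42,
+ )
+ ```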
+
+ ### Framework versions
+
+ - Transformers 4.46.1
+ - Pytorch 2.5.1+cu124
+ - Datasets 3.1.0
+ - Tokenizers 0.20.3
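+
+ A quick way to check that a local environment matches these versions (a simple sketch; the names below are the import names of the libraries listed above):
+
+ ```python
+ # Print installed versions to compare against the list above.
+ import transformers, torch, datasets, tokenizers
+
+ print(transformers.__version__)  # expected 4.46.1
+ print(torch.__version__)         # expected 2.5.1+cu124
+ print(datasets.__version__)      # expected 3.1.0
+ print(tokenizers.__version__)    # expected 0.20.3
+ ```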
+
+ ## 📝 Citation
+
+ ```
+ @misc{wu2025writingbench,
+   title={WritingBench: A Comprehensive Benchmark for Generative Writing},
+   author={Yuning Wu and Jiahao Mei and Ming Yan and Chenliang Li and Shaopeng Lai and Yuran Ren and Zijia Wang and Ji Zhang and Mengyue Wu and Qin Jin and Fei Huang},
+   year={2025},
+   url={https://arxiv.org/abs/2503.05244},
+ }
+ ```