Add metadata and improve model card

#1
Opened by nielsr (HF Staff)
Files changed (1)
  1. README.md +15 -3
README.md CHANGED
@@ -1,14 +1,26 @@
  ---
- license: apache-2.0
  base_model:
  - mistralai/Mistral-7B-Instruct-v0.2
+ license: apache-2.0
+ library_name: transformers
+ pipeline_tag: text-generation
  ---

+ # RESM-Mistral-7B
+
+ This repository contains the model weights presented in the paper [Mix Data or Merge Models? Balancing the Helpfulness, Honesty, and Harmlessness of Large Language Model via Model Merging](https://huggingface.co/papers/2502.06876).
+
+ ## Description
+ This model is a 3H-aligned (Helpfulness, Honesty, and Harmlessness) Large Language Model developed using a novel model merging framework called **RESM** (**R**eweighting **E**nhanced task **S**ingular **M**erging).
+
+ RESM addresses the challenges of preference noise accumulation and layer sparsity adaptation inherent in 3H-aligned LLM merging through outlier weighting and sparsity-aware rank selection strategies. Compared to standard data mixture or traditional merging methods, RESM achieves a more balanced optimization across the three alignment dimensions.
+
  ## Citation
- ```
+ ```bibtex
  @article{yang2025mix,
  title={Mix Data or Merge Models? Balancing the Helpfulness, Honesty, and Harmlessness of Large Language Model via Model Merging},
  author={Yang, Jinluan and Jin, Dingnan and Tang, Anke and Shen, Li and Zhu, Didi and Chen, Zhengyu and Wang, Daixin and Cui, Qing and Zhang, Zhiqiang and Zhou, Jun and others},
  journal={arXiv preprint arXiv:2502.06876},
  year={2025}
- }
+ }
+ ```
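
For context on the metadata this PR adds (`library_name: transformers`, `pipeline_tag: text-generation`): a minimal usage sketch of how a reader would load the weights with the declared library and pipeline tag. The repo id below is a hypothetical placeholder, since this PR does not state where the merged checkpoint is hosted.

```python
# Minimal sketch of the workflow implied by the new metadata:
# library_name: transformers, pipeline_tag: text-generation.
# "RESM-Mistral-7B" is a hypothetical repo id; substitute the actual Hub path.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="RESM-Mistral-7B",  # hypothetical placeholder, not confirmed by this PR
    torch_dtype="auto",       # pick the checkpoint's native dtype
    device_map="auto",        # place layers on available GPU(s)/CPU
)

# The base model, Mistral-7B-Instruct-v0.2, ships a chat template; passing a
# list of messages lets the pipeline apply it automatically.
messages = [{"role": "user", "content": "Explain model merging in one sentence."}]
print(generator(messages, max_new_tokens=128)[0]["generated_text"])
```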