Add library_name, pipeline_tag and link to paper

#1 opened by nielsr (HF Staff)
Files changed (1)
  1. README.md +20 -3
README.md CHANGED
@@ -1,14 +1,31 @@
 ---
-license: apache-2.0
 base_model:
 - mistralai/Mistral-7B-Instruct-v0.2
+license: apache-2.0
+library_name: transformers
+pipeline_tag: text-generation
 ---
 
+# RESM-Mistral-7B
+
+This repository contains the model checkpoint for **RESM-Mistral-7B**, as introduced in the paper [Mix Data or Merge Models? Balancing the Helpfulness, Honesty, and Harmlessness of Large Language Model via Model Merging](https://huggingface.co/papers/2502.06876).
+
+## Description
+RESM (**R**eweighting **E**nhanced task **S**ingular **M**erging) is a model merging method designed to achieve balanced alignment across the "3H" dimensions: Helpfulness, Honesty, and Harmlessness.
+
+Traditional data mixture strategies often struggle with conflicting optimization signals. RESM addresses these conflicts at the parameter level through:
+1. **Outlier Weighting**: mitigating the accumulation of preference noise.
+2. **Sparsity-aware Rank Selection**: adapting to the layer sparsity inherent in aligned LLMs.
+
+This model is a 3H-aligned version of `mistralai/Mistral-7B-Instruct-v0.2`.
+
 ## Citation
-```
+If you find this work useful, please cite:
+```bibtex
 @article{yang2025mix,
 title={Mix Data or Merge Models? Balancing the Helpfulness, Honesty, and Harmlessness of Large Language Model via Model Merging},
 author={Yang, Jinluan and Jin, Dingnan and Tang, Anke and Shen, Li and Zhu, Didi and Chen, Zhengyu and Wang, Daixin and Cui, Qing and Zhang, Zhiqiang and Zhou, Jun and others},
 journal={arXiv preprint arXiv:2502.06876},
 year={2025}
 }
+```
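
Since this PR adds `library_name: transformers` and `pipeline_tag: text-generation`, the card could also carry a quick-start snippet. A minimal sketch follows, assuming the checkpoint keeps the Mistral-Instruct chat template; the repository id below is a placeholder, as the PR does not state the final Hub id:

```python
from transformers import pipeline

# Placeholder id: substitute the actual Hub id of this repository.
model_id = "<org>/RESM-Mistral-7B"

generator = pipeline(
    "text-generation",   # matches the pipeline_tag added in this PR
    model=model_id,
    torch_dtype="auto",
    device_map="auto",   # needs `accelerate`; omit to load on a single device
)

messages = [
    {"role": "user", "content": "Name one honest limitation of large language models."}
]
result = generator(messages, max_new_tokens=256)
# The pipeline returns the full chat, with the model's reply appended last.
print(result[0]["generated_text"][-1]["content"])
```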
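The Description bullets compress the method considerably. The sketch below illustrates only the general family RESM belongs to, SVD-based rank-truncated task-vector merging; the fixed 10% rank fraction and the median-based damping are illustrative stand-ins for the paper's sparsity-aware rank selection and outlier weighting, not the published criteria:

```python
import torch

def merge_task_vectors(base_weight, aligned_weights, ranks=None, coeff=1.0):
    """Illustrative rank-truncated task-vector merging (not the RESM algorithm)."""
    merged_delta = torch.zeros_like(base_weight)
    for i, weight in enumerate(aligned_weights):
        # Task vector for one alignment objective (helpful / honest / harmless).
        delta = weight - base_weight
        u, s, vh = torch.linalg.svd(delta, full_matrices=False)
        # Stand-in for sparsity-aware rank selection: keep the top 10% of directions.
        r = ranks[i] if ranks is not None else max(1, int(0.1 * s.numel()))
        # Stand-in for outlier weighting: damp singular values above the retained median.
        damp = torch.clamp(s[:r].median() / s[:r], max=1.0)
        merged_delta += (u[:, :r] * (s[:r] * damp)) @ vh[:r, :]
    return base_weight + coeff * merged_delta / len(aligned_weights)

# Toy usage on random matrices standing in for per-layer weights.
base = torch.randn(64, 64)
merged = merge_task_vectors(base, [base + 0.01 * torch.randn(64, 64) for _ in range(3)])
```

Applied per 2-D weight matrix across three 3H-aligned checkpoints, this reduces to the plain task-arithmetic average when the full rank is kept and no damping is applied.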