Improve model card with metadata and description

#1
by nielsr (HF Staff) - opened
Files changed (1)
README.md CHANGED (+17 -3)
@@ -1,14 +1,28 @@
 ---
-license: apache-2.0
 base_model:
 - mistralai/Mistral-7B-Instruct-v0.2
+license: apache-2.0
+library_name: transformers
+pipeline_tag: text-generation
 ---
 
+# Mistral-7B-RESM
+
+This model is a 3H-aligned Large Language Model (LLM) developed as part of the research paper [Mix Data or Merge Models? Balancing the Helpfulness, Honesty, and Harmlessness of Large Language Model via Model Merging](https://huggingface.co/papers/2502.06876).
+
+## Model Description
+
+Mistral-7B-RESM is based on [Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) and was optimized across three critical dimensions: **Helpfulness, Honesty, and Harmlessness (3H optimization)**.
+
+The model was created using a novel parameter-level merging strategy called **RESM** (**R**eweighting **E**nhanced task **S**ingular **M**erging). RESM addresses challenges in 3H-aligned LLM merging, such as preference noise accumulation and layer sparsity adaptation, through outlier weighting and sparsity-aware rank selection strategies.
+
 ## Citation
-```
+
+```bibtex
 @article{yang2025mix,
   title={Mix Data or Merge Models? Balancing the Helpfulness, Honesty, and Harmlessness of Large Language Model via Model Merging},
   author={Yang, Jinluan and Jin, Dingnan and Tang, Anke and Shen, Li and Zhu, Didi and Chen, Zhengyu and Wang, Daixin and Cui, Qing and Zhang, Zhiqiang and Zhou, Jun and others},
   journal={arXiv preprint arXiv:2502.06876},
   year={2025}
-}
+}
+```
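
For anyone trying the checkpoint, here is a minimal usage sketch that matches the `library_name: transformers` and `pipeline_tag: text-generation` metadata added in this PR. The Hub repo id is an assumption inferred from the model name, not something this PR confirms.

```python
from transformers import pipeline

# Text-generation pipeline, per the `pipeline_tag` added in this PR.
# The repo id below is hypothetical; substitute the actual Hub path.
generator = pipeline(
    "text-generation",
    model="Mistral-7B-RESM",  # hypothetical repo id
    device_map="auto",        # requires `accelerate`
)

# The Mistral-7B-Instruct-v0.2 base ships a chat template, so the
# pipeline accepts chat-style messages directly.
messages = [{"role": "user", "content": "Explain model merging in one sentence."}]
result = generator(messages, max_new_tokens=128)

# With chat input, `generated_text` holds the full message list;
# the last entry is the assistant's reply.
print(result[0]["generated_text"][-1]["content"])
```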
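The merging strategy in the new description is easier to picture with code. Below is a minimal sketch of the general technique family RESM belongs to, reweighted task-vector merging with SVD rank truncation, not the paper's algorithm: RESM's outlier weighting and sparsity-aware rank selection rules are defined in the paper, and every name and default here is illustrative.

```python
import torch

def low_rank_merge(base_state, expert_states, rank_ratio=0.3, weights=None):
    """Illustrative sketch, NOT the exact RESM algorithm: each expert's
    delta from the base is SVD-truncated to drop low-energy (noisier)
    directions, reweighted, and summed back onto the base parameters."""
    weights = weights or [1.0 / len(expert_states)] * len(expert_states)
    merged = {}
    for name, base in base_state.items():
        merged[name] = base.clone()
        for w, expert in zip(weights, expert_states):
            delta = expert[name] - base  # task vector for this parameter
            if delta.ndim == 2:  # truncate only 2-D weight matrices
                U, S, Vh = torch.linalg.svd(delta.float(), full_matrices=False)
                k = max(1, int(rank_ratio * S.numel()))  # keep top-k directions
                delta = ((U[:, :k] * S[:k]) @ Vh[:k, :]).to(base.dtype)
            merged[name] += w * delta
    return merged
```

With `rank_ratio=1.0` this reduces to plain weighted task arithmetic; the rank cut is where per-layer selection strategies such as RESM's would plug in.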