Add pipeline tag, library name and paper link

#1
by nielsr (HF Staff) - opened

Files changed (1):
  1. README.md +17 -4
README.md CHANGED
@@ -1,13 +1,26 @@
  ---
- license: apache-2.0
  base_model:
  - meta-llama/Meta-Llama-3-8B-Instruct
+ license: apache-2.0
+ library_name: transformers
+ pipeline_tag: text-generation
  ---
+
+ # RESM-Llama-3-8B-Instruct
+
+ This repository contains the weights for **RESM-Llama-3-8B-Instruct**, a model developed using the **RESM** (**R**eweighted **E**nhanced task **S**ingular **M**erging) method to balance Helpfulness, Honesty, and Harmlessness (3H optimization).
+
+ The model was introduced in the paper [Mix Data or Merge Models? Balancing the Helpfulness, Honesty, and Harmlessness of Large Language Model via Model Merging](https://huggingface.co/papers/2502.06876).
+
+ ## Description
+ Achieving balanced alignment of large language models (LLMs) in terms of Helpfulness, Honesty, and Harmlessness constitutes a cornerstone of responsible AI. The authors propose a novel **R**eweighted **E**nhanced task **S**ingular **M**erging method, **RESM**, using outlier weighting and sparsity-aware rank selection strategies to address the challenges of preference noise accumulation and layer sparsity adaptation inherent in 3H-aligned LLM merging.
+
  ## Citation
- ```
+ ```bibtex
  @article{yang2025mix,
  title={Mix Data or Merge Models? Balancing the Helpfulness, Honesty, and Harmlessness of Large Language Model via Model Merging},
- author={Yang, Jinluan and Jin, Dingnan and Tang, Anke and Shen, Li and Zhu, Didi and Chen, Zhengyu and Wang, Daixin and Cui, Qing and Zhang, Zhiqiang and Zhou, Jun and others},
+ author={Yang, Jinluan and Jin, Dingnan and Tang, Anke and Shen, Li and Zhu, Didi and Chen, Zhengyu and Wang, Daixin and Cui, Qing and Zhang, Zhiqiang and Zhou, Jun and others},
  journal={arXiv preprint arXiv:2502.06876},
  year={2025}
- }
+ }
+ ```
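
The Description added by this PR refers to merging separately aligned models by reweighting their contributions. As a rough illustration of the underlying task-arithmetic idea only (a generic sketch, NOT the RESM algorithm from the paper; the parameter shapes, model names, and weights here are invented):

```python
# Generic task-vector merging sketch. This is NOT the RESM method from the
# paper (RESM adds outlier weighting and sparsity-aware rank selection);
# it only illustrates the basic idea of combining aligned models by a
# weighted sum of their task vectors (finetuned - base).
import numpy as np

def merge_task_vectors(base, finetuned_models, weights):
    """Merge models by a weighted sum of task vectors over shared parameters."""
    merged = {}
    for name, base_param in base.items():
        task_vectors = [m[name] - base_param for m in finetuned_models]
        merged[name] = base_param + sum(w * tv for w, tv in zip(weights, task_vectors))
    return merged

# Toy example: one 2x2 parameter, three hypothetical "3H-specialized" models.
base = {"w": np.zeros((2, 2))}
helpful = {"w": np.array([[1.0, 0.0], [0.0, 0.0]])}
honest = {"w": np.array([[0.0, 1.0], [0.0, 0.0]])}
harmless = {"w": np.array([[0.0, 0.0], [1.0, 0.0]])}

merged = merge_task_vectors(base, [helpful, honest, harmless], weights=[0.5, 0.3, 0.2])
```

Each specialized model contributes in proportion to its weight; RESM's contribution in the paper is a principled way of choosing and reweighting those contributions per layer, which this sketch does not attempt.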