---
# License identifier; must be a license ID supported by Hugging Face
license: apache-2.0

# Primary languages supported by the model
language:
  - en
  - zh

# Deep learning library used
library_name: transformers

# Tags for search and categorization
tags:
  - text-generation
  - scaling-laws
  - densing-law
  - reference-models

# Datasets used for evaluation in the paper
datasets:
  - mmlu
  - big-bench-hard
  - math
  - mbpp
  - human-eval

# Evaluation metrics used in the paper
metrics:
  - loss
  - accuracy

# Pipeline tag, used by the Inference API and widgets
pipeline_tag: text-generation

# model-index listing all model variants contained in this repository
model-index:
  - name: DensingLaw-ScalingModel-s1
    results: []
  - name: DensingLaw-ScalingModel-s2
    results: []
  - name: DensingLaw-ScalingModel-s3
    results: []
  - name: DensingLaw-ScalingModel-s4
    results: []
  - name: DensingLaw-ScalingModel-s5
    results: []
  - name: DensingLaw-ScalingModel-s6
    results: []

# BibTeX for citing the paper
citation: |
  @misc{xiao2024densinglawllms,
        title={Densing Law of LLMs}, 
        author={Chaojun Xiao and Jie Cai and Weilin Zhao and Guoyang Zeng and Biyuan Lin and Jie Zhou and Zhi Zheng and Xu Han and Zhiyuan Liu and Maosong Sun},
        year={2024},
        eprint={2412.04315},
        archivePrefix={arXiv},
        primaryClass={cs.AI},
        url={https://arxiv.org/abs/2412.04315}, 
  }
---
# DensingLaw-ScalingModels

This repository contains a series of reference models of varying sizes, released as part of our paper **Densing Law of LLMs**. These models were trained to establish a robust scaling law, which serves as the foundation for calculating the "density" of other Large Language Models (LLMs).


[📜 Paper](https://arxiv.org/abs/2412.04315) | [🤗 Hugging Face Models](https://huggingface.co/openbmb/DensingLaw-ScalingModels)

## 💡 Overview

The core contribution of our paper is the concept of **LLM Density** \\(\rho\\), defined as the ratio of a model's *effective* parameter size \\(\hat{N}\\) to its *actual* parameter size \\(N\\). For example, a model that matches the performance of a reference model with twice its parameter count has \\(\hat{N} = 2N\\) and therefore a density of 2. To accurately determine a model's effective size, we must first establish a reliable "ruler": a scaling law that maps training compute to performance on downstream tasks.

The models in this repository serve as that "ruler". We trained a series of six models, ranging from **5 million to 800 million parameters**, on a consistent dataset. By measuring their loss on various benchmarks, we fitted a precise scaling function. This function allows us to take any other LLM, measure its performance, and infer its effective parameter size by seeing where it lands on our reference scale.
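
In symbols, the full inference chain (detailed in the Research Context section below) reads:

$$
\hat{C} = f_1^{-1}(\mathcal{L}), \qquad \hat{N} = \frac{\hat{C}}{6D}, \qquad \rho = \frac{\hat{N}}{N},
$$

where \\(D\\) is the number of training tokens of the model being measured.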

These models are released to allow researchers to verify our results, build upon our work, and use this established scale for their own model evaluations.

## 🔬 The Models

We trained six models with architectures designed for scaling. The detailed hyperparameters are listed below.

#### Table 1: Detailed Hyper-parameters of Models for Loss Estimation

| Name        | # Params    | Batch Size | # Layers | d_model | d_ffn | # Heads | # KV Heads |
| :---------- | :---------- | :--------- | :------- | :------ | :---- | :------ | :--------- |
| 0.005B (s1) | 5,247,232     | 32  | 8       | 256   | 640   | 4      | 1    |
| 0.03B (s2)  | 31,470,080    | 32  | 12      | 512   | 1,280 | 8      | 2    |
| 0.1B (s3)  | 106,196,736   | 64  | 18      | 768   | 1,920 | 12     | 3    |
| 0.2B (s4)  | 245,416,960   | 128 | 24      | 1,024 | 2,560 | 16     | 2   |
| 0.4B (s5)  | 476,852,480   | 256 | 30      | 1,280 | 3,200 | 20     | 2   |
| 0.8B (s6)  | 828,225,024   | 512 | 36      | 1,536 | 3,840 | 24     | 3   |
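
As a sanity check, the parameter counts above can be reproduced from the remaining columns under the assumption of a LLaMA-style block: grouped-query attention, a gated FFN with three projection matrices, two RMSNorm weight vectors per layer plus a final norm, and embeddings excluded from the count. A minimal sketch:

```python
# Reproduce the "# Params" column from the architectural hyper-parameters.
# Assumes a LLaMA-style block (grouped-query attention, gated FFN with
# gate/up/down projections, two RMSNorms per layer plus a final norm)
# and excludes embedding parameters.
def param_count(n_layer: int, d: int, d_ffn: int, n_head: int, n_kv: int) -> int:
    head_dim = d // n_head
    attn = 2 * d * d + 2 * d * head_dim * n_kv  # Q and O projections, plus K and V
    ffn = 3 * d * d_ffn                         # gate, up, and down projections
    norms = 2 * d                               # two RMSNorm weight vectors
    return n_layer * (attn + ffn + norms) + d   # plus the final RMSNorm

configs = {
    "s1": (8, 256, 640, 4, 1),      # expected   5,247,232
    "s3": (18, 768, 1920, 12, 3),   # expected 106,196,736
    "s6": (36, 1536, 3840, 24, 3),  # expected 828,225,024
}
for name, cfg in configs.items():
    print(name, f"{param_count(*cfg):,}")
```

Under these assumptions the computed totals match the table exactly.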

### Training Data

As stated in our paper, all reference models were trained on the **training corpus of MiniCPM-3-4B** (Hu et al., 2024) to ensure consistency.
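
The reference models can be loaded with `transformers`. The subfolder name below is a hypothetical placeholder; consult the repository's file listing for the actual layout:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "openbmb/DensingLaw-ScalingModels"
# "s1" (the 0.005B model) is a hypothetical subfolder name; the actual
# directory structure of this repository may differ.
tokenizer = AutoTokenizer.from_pretrained(repo, subfolder="s1")
model = AutoModelForCausalLM.from_pretrained(repo, subfolder="s1")

inputs = tokenizer("Scaling laws predict that", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```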

## 🎯 Research Context: The Densing Law
Our framework for calculating LLM density involves a two-step estimation process, which is visualized below.

1.  **Loss Estimation \\(f_1\\)**: We first establish the relationship between training compute (approximated as \\(C \approx 6ND\\), where \\(N\\) is the parameter count and \\(D\\) is the number of training tokens) and the conditional loss \\(\mathcal{L}\\) on downstream tasks. The models released in this repository are the data points used to fit this curve, \\(\mathcal{L} = f_1(C)\\).
2.  **Performance Estimation \\(f_2\\)**: We then map this loss \\(\mathcal{L}\\) to a more intuitive performance metric \\(S\\), such as accuracy: \\(S = f_2(\mathcal{L})\\).

By combining these two mappings, we can determine the effective compute, and therefore the effective parameter size, for any model based on its performance; a fitting sketch follows the figure below.
<div align="center">
<img src="assets/fig.jpeg" width="800"/>
</div>
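
A minimal sketch of the two-step fit using `scipy`. The functional forms (a power law for \\(f_1\\), a logistic curve for \\(f_2\\)) and all data points are illustrative assumptions, not the paper's fitted results:

```python
import numpy as np
from scipy.optimize import curve_fit

# Step 1: loss vs. compute, L = f1(C). A two-parameter power law is
# assumed here for illustration; the paper's exact form may differ.
def f1(c, a, alpha):
    return a * c ** (-alpha)

# Step 2: performance vs. loss, S = f2(L). A logistic curve is assumed
# as a simple way to map unbounded loss onto bounded accuracy.
def f2(loss, k, l0, s_max):
    return s_max / (1.0 + np.exp(k * (loss - l0)))

# Hypothetical measurements from six reference models: training compute
# C ~= 6ND in FLOPs, conditional loss, and downstream accuracy.
compute  = np.array([3e18, 1.9e19, 6.4e19, 1.5e20, 2.9e20, 5e20])
loss     = np.array([3.10, 2.80, 2.60, 2.45, 2.35, 2.25])
accuracy = np.array([0.05, 0.12, 0.20, 0.28, 0.35, 0.42])

(a, alpha), _ = curve_fit(f1, compute, loss, p0=[30.0, 0.05])
(k, l0, s_max), _ = curve_fit(f2, loss, accuracy, p0=[5.0, 2.5, 0.6])

# For a new model with measured accuracy s_target, invert f2 then f1 to
# recover the effective compute C_hat; N_hat = C_hat / (6 D) follows.
s_target = 0.30
l_hat = l0 + np.log(s_max / s_target - 1.0) / k  # inverse of f2
c_hat = (a / l_hat) ** (1.0 / alpha)             # inverse of f1
print(f"effective compute: {c_hat:.3e} FLOPs")
```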

## 📜 License

This work is released under the `Apache 2.0` license.

## 📚 Citation

If you use our models or the Densing Law concept in your research, please cite our paper:

```bibtex
@misc{xiao2024densinglawllms,
      title={Densing Law of LLMs}, 
      author={Chaojun Xiao and Jie Cai and Weilin Zhao and Guoyang Zeng and Biyuan Lin and Jie Zhou and Zhi Zheng and Xu Han and Zhiyuan Liu and Maosong Sun},
      year={2024},
      eprint={2412.04315},
      archivePrefix={arXiv},
      primaryClass={cs.AI},
      url={https://arxiv.org/abs/2412.04315}, 
}
```