Control LLM: Controlled Evolution for Intelligence Retention in LLM
Paper: arXiv:2501.10979
This is a fine-tuned version of Llama-3.1-8B for multilingual (Chinese) tasks, trained on the SynE dataset with the Control LLM-Concat16-Dlerp method, as described in Control LLM: Controlled Evolution for Intelligence Retention in LLM.
This model is associated with the paper Control-LLM and the GitHub repository Control-LLM.
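The model can be loaded and queried like any other causal LM on the Hugging Face Hub. Below is a minimal usage sketch with `transformers`; the repository id used here is a placeholder, so substitute the actual model id for this card.

```python
# Minimal usage sketch (requires the `transformers` and `torch` packages).
# NOTE: the model id below is a placeholder for this card's repository;
# replace it with the actual Hub id.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ControlLLM/Llama-3.1-8B-SynE-Concat16-Dlerp"  # placeholder id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # Llama-3.1 weights are distributed in bf16
    device_map="auto",
)

# SynE targets Chinese capability, so a Chinese prompt is a natural smoke test.
prompt = "请简要介绍一下大语言模型。"  # "Briefly introduce large language models."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```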
Here is an overview of the evaluation results and findings. The table below summarizes results on Chinese-language benchmarks (CEval, CEvalC, CMMLU, CMMLUC, aggregated as C-Avg) and on the base model's original capabilities (BBH, MMLU, MMLUP, aggregated as O-Avg).
| Model | CEval | CEvalC | CMMLU | CMMLUC | C-Avg | BBH | MMLU | MMLUP | O-Avg | Overall |
|---|---|---|---|---|---|---|---|---|---|---|
| Llama3.1-8B | 48.3 | 12.8 | 51.1 | 14.1 | 13.9 | 65.2 | 65.4 | 35.5 | 45.9 | 29.9 |
| Llama-3-SynE | 57.7 | 22.3 | 57.1 | 22.8 | 22.8 | 61.9 | 64.0 | 32.6 | 42.9 | 32.9 |
| Full Param Tune | 59.0 | 40.2 | 60.2 | 44.3 | 43.8 | 64.8 | 64.9 | 35.0 | 45.4 | 44.6 |
| Stack Expansion | 56.0 | 32.7 | 55.2 | 33.4 | 33.3 | 62.3 | 65.6 | 35.3 | 44.8 | 39.1 |
| Concat-Lerp | 57.1 | 34.8 | 57.0 | 37.4 | 37.1 | 64.4 | 64.6 | 35.8 | 45.9 | 41.5 |
| Hybrid Expansion | 58.9 | 44.7 | 57.9 | 44.3 | 44.4 | 65.1 | 65.7 | 36.9 | 46.8 | 45.6 |
| Control LLM* | 57.0 | 44.7 | 56.0 | 44.9 | 44.8 | 68.2 | 65.6 | 37.9 | 48.5 | 46.7 |

\* Control LLM refers to the model released on this card (Concat16-Dlerp).
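To spot-check scores like those above, one option is EleutherAI's lm-evaluation-harness. The sketch below is a rough approximation: the task names are the harness's own, and the paper's exact evaluation protocol (few-shot settings, prompts, CoT variants) may differ.

```python
# Hedged sketch: spot-checking benchmark scores with lm-evaluation-harness
# (pip install lm-eval). Exact tasks/settings used in the paper may differ,
# so treat the resulting numbers as approximate.
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=meta-llama/Llama-3.1-8B,dtype=bfloat16",  # or this card's model id
    tasks=["cmmlu", "mmlu"],  # harness task names; CEval is available as "ceval-valid"
    batch_size=8,
)
print(results["results"])
```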
Base model: meta-llama/Llama-3.1-8B