---
base_model:
- meta-llama/Llama-3.1-70B
# NOT SUITABLE FOR CHAT INFERENCE AS-IS
---

# llama3.1-base
Llama 3.1 base weights with the special-token embeddings from Llama 3.3 transplanted in.
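The transplant amounts to overwriting a handful of rows of the embedding matrix with the corresponding rows from the donor model. A minimal sketch of that row copy, using toy Python lists in place of the real tensors (the token ids and sizes here are made up; with transformers you would operate on `model.get_input_embeddings().weight`):

```python
# Sketch of the special-token embedding transplant. Toy lists stand in
# for the (vocab_size, hidden_dim) embedding matrices; ids 4 and 5 play
# the special tokens. All sizes and ids are illustrative, not Llama's.

def transplant_rows(base_emb, donor_emb, special_ids):
    """Return a copy of base_emb with the rows for special_ids taken from donor_emb."""
    patched = [row[:] for row in base_emb]       # copy, leave the input intact
    for tok_id in special_ids:
        patched[tok_id] = donor_emb[tok_id][:]   # row-for-row transplant
    return patched

# Toy 6-token vocab with 3-dim embeddings.
base  = [[float(i)] * 3 for i in range(6)]
donor = [[float(i) + 100.0] * 3 for i in range(6)]

patched = transplant_rows(base, donor, special_ids=[4, 5])
print(patched[4])  # donor's row for id 4
print(patched[0])  # untouched base row
```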
Unusable for chat in this state; the intent is to make instruct QLoRA tuning of the Llama 3.1 base easy. There is no need to target the embeddings or `lm_head` in the adapter.
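Because the special-token embeddings are already in place, a QLoRA run can restrict its adapter to the attention and MLP projections. An axolotl-style config fragment as a sketch (key names assume axolotl's schema; the rank, alpha, and path values are arbitrary placeholders):

```yaml
base_model: ./llama3.1-base   # this repo's patched weights
adapter: qlora
load_in_4bit: true
lora_r: 32
lora_alpha: 16
lora_dropout: 0.05
lora_target_modules:
  - q_proj
  - k_proj
  - v_proj
  - o_proj
  - gate_proj
  - up_proj
  - down_proj
# embed_tokens / lm_head deliberately omitted: the transplanted
# embeddings mean nothing in those layers needs training or saving.
```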