---
license: llama2
language:
- en
tags:
- code
---
These are GGUF quantized versions of [codellama/CodeLlama-70b-Instruct-hf](https://huggingface.co/codellama/CodeLlama-70b-Instruct-hf). |
The importance matrix was computed over roughly 100K tokens (200 batches of 512 tokens) from `wiki.train.raw`.
The IQ2_XXS and IQ2_XS versions require llama.cpp commit `147b17a` or later; the IQ3_XXS version requires commit `f4d7e54` or later.
Model files larger than 50 GB are split into smaller parts. To reassemble a model, concatenate its parts in order with `cat`: `cat foo-Q6_K.gguf.* > foo-Q6_K.gguf`. On Windows, use `copy /B` in Command Prompt instead (PowerShell's `cat` alias performs text re-encoding and will corrupt binary files).
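The split-and-concatenate round trip can be sanity-checked end to end. A minimal sketch with placeholder file names (a random 1 MiB stand-in, not the actual split parts shipped in this repo):

```shell
# Create a dummy "model" file standing in for a large GGUF
head -c 1048576 /dev/urandom > model.gguf
# Split it into four 256 KiB parts: model.gguf.part.aa, .ab, .ac, .ad
split -b 262144 model.gguf model.gguf.part.
# Rejoin: the shell glob expands in lexical order, so parts concatenate correctly
cat model.gguf.part.* > model-joined.gguf
# Verify the rejoined file is byte-identical to the original
cmp model.gguf model-joined.gguf && echo "files are identical"
```

The same pattern applies to the real split files here: as long as the part suffixes sort lexically, a single `cat` glob restores the original GGUF.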