---
library_name: transformers
tags:
- GGUF
base_model: FabienRoger/cot_5k
---

## FabienRoger/cot_5k - GGUF

This repo contains GGUF format model files for [FabienRoger/cot_5k](https://huggingface.co/FabienRoger/cot_5k).

These files are compatible with llama.cpp as of [commit b4011](https://github.com/ggerganov/llama.cpp/commit/a6744e43e80f4be6398fc7733a01642c846dce1d).

## Prompt template

```
<|system|>
{system_prompt}<|endoftext|>
<|user|>
{prompt}<|endoftext|>
<|assistant|>
```
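
For example, with an illustrative system prompt and user message substituted into the placeholders above, the full prompt passed to the model looks like this:

```
<|system|>
You are a helpful assistant.<|endoftext|>
<|user|>
What is the capital of France?<|endoftext|>
<|assistant|>
```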

## Model file specification

| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [cot_5k-Q2_K.gguf](https://huggingface.co/tensorblock/cot_5k-GGUF/blob/main/cot_5k-Q2_K.gguf) | Q2_K | 0.646 GB | smallest, significant quality loss - not recommended for most purposes |

## Downloading instructions

### Command line

First, install the Hugging Face CLI:

```shell
pip install -U "huggingface_hub[cli]"
```

Then, download the individual model file to a local directory:

```shell
huggingface-cli download tensorblock/cot_5k-GGUF --include "cot_5k-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```

If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can run:

```shell
huggingface-cli download tensorblock/cot_5k-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
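
Once a file is downloaded, you can sanity-check it with llama.cpp. The sketch below assumes you have built llama.cpp at (or after) the commit mentioned above, that the `llama-cli` binary is in the current directory, and that `MY_LOCAL_DIR` matches the directory you downloaded to; the prompt and sampling settings are illustrative and may need adjusting for your build:

```shell
# Run the Q2_K quant with the prompt template above (values are illustrative)
./llama-cli -m MY_LOCAL_DIR/cot_5k-Q2_K.gguf \
  -p $'<|system|>\nYou are a helpful assistant.<|endoftext|>\n<|user|>\nWhat is the capital of France?<|endoftext|>\n<|assistant|>\n' \
  -n 256
```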