---
library_name: transformers
tags:
- GGUF
base_model: FabienRoger/cot_5k
---

## FabienRoger/cot_5k - GGUF

This repo contains GGUF format model files for [FabienRoger/cot_5k](https://huggingface.co/FabienRoger/cot_5k).

They are compatible with llama.cpp as of [commit b4011](https://github.com/ggerganov/llama.cpp/commit/a6744e43e80f4be6398fc7733a01642c846dce1d).

## Prompt template


```
<|system|>
{system_prompt}<|endoftext|>
<|user|>
{prompt}<|endoftext|>
<|assistant|>
```
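When calling the model programmatically, the template above has to be assembled by hand. A minimal sketch in Python (the `build_prompt` helper is a hypothetical name for illustration, not part of any library):

```python
def build_prompt(system_prompt: str, prompt: str) -> str:
    """Assemble a chat prompt in the format this model expects.

    Hypothetical helper: fills the model card's template with the
    given system and user messages.
    """
    return (
        f"<|system|>\n{system_prompt}<|endoftext|>\n"
        f"<|user|>\n{prompt}<|endoftext|>\n"
        f"<|assistant|>\n"
    )
```

The model's reply is then generated as a continuation after the final `<|assistant|>` tag.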

## Model file specification

| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [cot_5k-Q2_K.gguf](https://huggingface.co/tensorblock/cot_5k-GGUF/blob/main/cot_5k-Q2_K.gguf) | Q2_K | 0.646 GB | smallest, significant quality loss - not recommended for most purposes |


## Downloading instructions

### Command line

First, install the Hugging Face CLI:

```shell
pip install -U "huggingface_hub[cli]"
```

Then, download the individual model file to a local directory:

```shell
huggingface-cli download tensorblock/cot_5k-GGUF --include "cot_5k-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```

If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can run:

```shell
huggingface-cli download tensorblock/cot_5k-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
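Once downloaded, the file can be run with llama.cpp. A sketch that assembles a `llama-cli` invocation using this model's prompt template (the `-m`, `-p`, and `-n` flags are standard llama.cpp options; the model path and prompts below are placeholders):

```python
import subprocess  # used to launch llama-cli as a child process

def llama_cli_args(model_path: str, system_prompt: str, prompt: str,
                   n_predict: int = 256) -> list:
    """Build the argument list for a llama-cli run with this model's template."""
    full_prompt = (
        f"<|system|>\n{system_prompt}<|endoftext|>\n"
        f"<|user|>\n{prompt}<|endoftext|>\n"
        f"<|assistant|>\n"
    )
    return ["llama-cli", "-m", model_path, "-p", full_prompt, "-n", str(n_predict)]

# Example invocation (requires llama.cpp built locally and the file downloaded above):
# subprocess.run(
#     llama_cli_args("MY_LOCAL_DIR/cot_5k-Q2_K.gguf", "Be concise.", "What is GGUF?"),
#     check=True,
# )
```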