---
tags:
- generated_from_trainer
- TensorBlock
- GGUF
widget:
- text: How can I write a Python function to generate the nth Fibonacci number?
- text: How do I get the current date using shell commands? Explain how it works.
license: bigcode-openrail-m
base_model: HuggingFaceH4/starchat-beta
model-index:
- name: starchat-beta
  results: []
---

<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>

[![Website](https://img.shields.io/badge/Website-tensorblock.co-blue?logo=google-chrome&logoColor=white)](https://tensorblock.co)
[![Twitter](https://img.shields.io/twitter/follow/tensorblock_aoi?style=social)](https://twitter.com/tensorblock_aoi)
[![Discord](https://img.shields.io/badge/Discord-Join%20Us-5865F2?logo=discord&logoColor=white)](https://discord.gg/Ej5NmeHFf2)
[![GitHub](https://img.shields.io/badge/GitHub-TensorBlock-black?logo=github&logoColor=white)](https://github.com/TensorBlock)
[![Telegram](https://img.shields.io/badge/Telegram-Group-blue?logo=telegram)](https://t.me/TensorBlock)


## HuggingFaceH4/starchat-beta - GGUF

This repo contains GGUF format model files for [HuggingFaceH4/starchat-beta](https://huggingface.co/HuggingFaceH4/starchat-beta).

The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4011](https://github.com/ggerganov/llama.cpp/commit/a6744e43e80f4be6398fc7733a01642c846dce1d).
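
Once a quantized file has been downloaded (see the instructions below), it can be run locally with llama.cpp. A minimal sketch, assuming a llama.cpp build at or after the commit above with the `llama-cli` binary on your PATH, and using the Q4_K_M file as an illustrative choice:

```shell
# Run a short generation against the downloaded GGUF file
./llama-cli -m ./starchat-beta-Q4_K_M.gguf \
  -p "How can I write a Python function to generate the nth Fibonacci number?" \
  -n 256
```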

## Our projects
<table border="1" cellspacing="0" cellpadding="10">
  <tr>
    <th colspan="2" style="font-size: 25px;">Forge</th>
  </tr>
  <tr>
    <th colspan="2">
      <img src="https://imgur.com/faI5UKh.jpeg" alt="Forge Project" width="900"/>
    </th>
  </tr>
  <tr>
    <th colspan="2">An OpenAI-compatible multi-provider routing layer.</th>
  </tr>
  <tr>
    <th colspan="2">
      <a href="https://github.com/TensorBlock/forge" target="_blank" style="
        display: inline-block;
        padding: 8px 16px;
        background-color: #FF7F50;
        color: white;
        text-decoration: none;
        border-radius: 6px;
        font-weight: bold;
        font-family: sans-serif;
      ">πŸš€ Try it now! πŸš€</a>
    </th>
  </tr>

  <tr>
    <th style="font-size: 25px;">Awesome MCP Servers</th>
    <th style="font-size: 25px;">TensorBlock Studio</th>
  </tr>
  <tr>
    <th><img src="https://imgur.com/2Xov7B7.jpeg" alt="MCP Servers" width="450"/></th>
    <th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Studio" width="450"/></th>
  </tr>
  <tr>
    <th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
    <th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
  </tr>
  <tr>
    <th>
      <a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
        display: inline-block;
        padding: 8px 16px;
        background-color: #FF7F50;
        color: white;
        text-decoration: none;
        border-radius: 6px;
        font-weight: bold;
        font-family: sans-serif;
      ">πŸ‘€ See what we built πŸ‘€</a>
    </th>
    <th>
      <a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
        display: inline-block;
        padding: 8px 16px;
        background-color: #FF7F50;
        color: white;
        text-decoration: none;
        border-radius: 6px;
        font-weight: bold;
        font-family: sans-serif;
      ">πŸ‘€ See what we built πŸ‘€</a>
    </th>
  </tr>
</table>

## Prompt template

```

```

## Model file specification

| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [starchat-beta-Q2_K.gguf](https://huggingface.co/tensorblock/starchat-beta-GGUF/blob/main/starchat-beta-Q2_K.gguf) | Q2_K | 5.778 GB | smallest, significant quality loss - not recommended for most purposes |
| [starchat-beta-Q3_K_S.gguf](https://huggingface.co/tensorblock/starchat-beta-GGUF/blob/main/starchat-beta-Q3_K_S.gguf) | Q3_K_S | 6.498 GB | very small, high quality loss |
| [starchat-beta-Q3_K_M.gguf](https://huggingface.co/tensorblock/starchat-beta-GGUF/blob/main/starchat-beta-Q3_K_M.gguf) | Q3_K_M | 7.661 GB | very small, high quality loss |
| [starchat-beta-Q3_K_L.gguf](https://huggingface.co/tensorblock/starchat-beta-GGUF/blob/main/starchat-beta-Q3_K_L.gguf) | Q3_K_L | 8.505 GB | small, substantial quality loss |
| [starchat-beta-Q4_0.gguf](https://huggingface.co/tensorblock/starchat-beta-GGUF/blob/main/starchat-beta-Q4_0.gguf) | Q4_0 | 8.373 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [starchat-beta-Q4_K_S.gguf](https://huggingface.co/tensorblock/starchat-beta-GGUF/blob/main/starchat-beta-Q4_K_S.gguf) | Q4_K_S | 8.461 GB | small, greater quality loss |
| [starchat-beta-Q4_K_M.gguf](https://huggingface.co/tensorblock/starchat-beta-GGUF/blob/main/starchat-beta-Q4_K_M.gguf) | Q4_K_M | 9.281 GB | medium, balanced quality - recommended |
| [starchat-beta-Q5_0.gguf](https://huggingface.co/tensorblock/starchat-beta-GGUF/blob/main/starchat-beta-Q5_0.gguf) | Q5_0 | 10.138 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [starchat-beta-Q5_K_S.gguf](https://huggingface.co/tensorblock/starchat-beta-GGUF/blob/main/starchat-beta-Q5_K_S.gguf) | Q5_K_S | 10.138 GB | large, low quality loss - recommended |
| [starchat-beta-Q5_K_M.gguf](https://huggingface.co/tensorblock/starchat-beta-GGUF/blob/main/starchat-beta-Q5_K_M.gguf) | Q5_K_M | 10.706 GB | large, very low quality loss - recommended |
| [starchat-beta-Q6_K.gguf](https://huggingface.co/tensorblock/starchat-beta-GGUF/blob/main/starchat-beta-Q6_K.gguf) | Q6_K | 12.014 GB | very large, extremely low quality loss |
| [starchat-beta-Q8_0.gguf](https://huggingface.co/tensorblock/starchat-beta-GGUF/blob/main/starchat-beta-Q8_0.gguf) | Q8_0 | 15.502 GB | very large, extremely low quality loss - not recommended |
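
As noted in the table, Q4_K_M offers a balanced size/quality trade-off for most use cases. Below is a minimal sketch of serving that file over llama.cpp's OpenAI-compatible HTTP server (assuming a recent llama.cpp build that provides the `llama-server` binary; the context size and port are illustrative):

```shell
# Serve the Q4_K_M quantization on localhost:8080
./llama-server -m ./starchat-beta-Q4_K_M.gguf -c 4096 --port 8080
```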


## Downloading instructions

### Command line

First, install the Hugging Face CLI:

```shell
pip install -U "huggingface_hub[cli]"
```

Then, download an individual model file to a local directory:

```shell
huggingface-cli download tensorblock/starchat-beta-GGUF --include "starchat-beta-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```

If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:

```shell
huggingface-cli download tensorblock/starchat-beta-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```