---
license: apache-2.0
library_name: transformers
pipeline_tag: text-generation
language:
- bg
- ca
- code
- cs
- cy
- da
- de
- el
- en
- es
- et
- eu
- fi
- fr
- ga
- gl
- hr
- hu
- it
- lt
- lv
- mt
- nl
- nn
- 'no'
- oc
- pl
- pt
- ro
- ru
- sh
- sk
- sl
- sr
- sv
- uk
datasets:
- oscar-corpus/colossal-oscar-1.0
- HuggingFaceFW/fineweb-edu
- joelniklaus/eurlex_resources
- joelito/legal-mc4
- projecte-aina/CATalog
- UFRGS/brwac
- community-datasets/hrwac
- danish-foundation-models/danish-gigaword
- HiTZ/euscrawl
- PleIAs/French-PD-Newspapers
- PleIAs/French-PD-Books
- AI-team-UoA/greek_legal_code
- HiTZ/latxa-corpus-v1.1
- allenai/peS2o
- pile-of-law/pile-of-law
- PORTULAN/parlamento-pt
- hoskinson-center/proof-pile
- togethercomputer/RedPajama-Data-1T
- bigcode/starcoderdata
- bjoernp/tagesschau-2018-2023
- EleutherAI/the_pile_deduplicated
base_model: BSC-LT/salamandra-7b-instruct
tags:
- TensorBlock
- GGUF
---

<div style="width: auto; margin-left: auto; margin-right: auto">
  <img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>

[Website](https://tensorblock.co)
[Twitter](https://twitter.com/tensorblock_aoi)
[Discord](https://discord.gg/Ej5NmeHFf2)
[GitHub](https://github.com/TensorBlock)
[Telegram](https://t.me/TensorBlock)

## BSC-LT/salamandra-7b-instruct - GGUF

This repo contains GGUF format model files for [BSC-LT/salamandra-7b-instruct](https://huggingface.co/BSC-LT/salamandra-7b-instruct).

The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4658](https://github.com/ggerganov/llama.cpp/commit/855cd0734aca26c86cc23d94aefd34f934464ac9).
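
If you want a local llama.cpp build that matches this compatibility point, the following is a minimal sketch (assuming a standard CMake toolchain; add backend options such as CUDA or Metal support as needed for your hardware):

```shell
# Check out the referenced llama.cpp commit (b4658) and build the CLI tools.
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
git checkout 855cd0734aca26c86cc23d94aefd34f934464ac9
cmake -B build
cmake --build build --config Release
```
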
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
  <tr>
    <th colspan="2" style="font-size: 25px;">Forge</th>
  </tr>
  <tr>
    <th colspan="2">
      <img src="https://imgur.com/faI5UKh.jpeg" alt="Forge Project" width="900"/>
    </th>
  </tr>
  <tr>
    <th colspan="2">An OpenAI-compatible multi-provider routing layer.</th>
  </tr>
  <tr>
    <th colspan="2">
      <a href="https://github.com/TensorBlock/forge" target="_blank" style="
        display: inline-block;
        padding: 8px 16px;
        background-color: #FF7F50;
        color: white;
        text-decoration: none;
        border-radius: 6px;
        font-weight: bold;
        font-family: sans-serif;
      ">Try it now!</a>
    </th>
  </tr>

  <tr>
    <th style="font-size: 25px;">Awesome MCP Servers</th>
    <th style="font-size: 25px;">TensorBlock Studio</th>
  </tr>
  <tr>
    <th><img src="https://imgur.com/2Xov7B7.jpeg" alt="MCP Servers" width="450"/></th>
    <th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Studio" width="450"/></th>
  </tr>
  <tr>
    <th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
    <th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
  </tr>
  <tr>
    <th>
      <a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
        display: inline-block;
        padding: 8px 16px;
        background-color: #FF7F50;
        color: white;
        text-decoration: none;
        border-radius: 6px;
        font-weight: bold;
        font-family: sans-serif;
      ">See what we built</a>
    </th>
    <th>
      <a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
        display: inline-block;
        padding: 8px 16px;
        background-color: #FF7F50;
        color: white;
        text-decoration: none;
        border-radius: 6px;
        font-weight: bold;
        font-family: sans-serif;
      ">See what we built</a>
    </th>
  </tr>
</table>

## Prompt template

```
<|im_start|>system
{system_prompt}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
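
For illustration only, substituting a hypothetical system prompt and user message into the placeholders produces a rendered prompt like:

```
<|im_start|>system
You are a helpful assistant.<|im_end|>
<|im_start|>user
What is the capital of Catalonia?<|im_end|>
<|im_start|>assistant
```
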
## Model file specification

| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [salamandra-7b-instruct-Q2_K.gguf](https://huggingface.co/tensorblock/salamandra-7b-instruct-GGUF/blob/main/salamandra-7b-instruct-Q2_K.gguf) | Q2_K | 3.305 GB | smallest, significant quality loss - not recommended for most purposes |
| [salamandra-7b-instruct-Q3_K_S.gguf](https://huggingface.co/tensorblock/salamandra-7b-instruct-GGUF/blob/main/salamandra-7b-instruct-Q3_K_S.gguf) | Q3_K_S | 3.755 GB | very small, high quality loss |
| [salamandra-7b-instruct-Q3_K_M.gguf](https://huggingface.co/tensorblock/salamandra-7b-instruct-GGUF/blob/main/salamandra-7b-instruct-Q3_K_M.gguf) | Q3_K_M | 4.048 GB | very small, high quality loss |
| [salamandra-7b-instruct-Q3_K_L.gguf](https://huggingface.co/tensorblock/salamandra-7b-instruct-GGUF/blob/main/salamandra-7b-instruct-Q3_K_L.gguf) | Q3_K_L | 4.300 GB | small, substantial quality loss |
| [salamandra-7b-instruct-Q4_0.gguf](https://huggingface.co/tensorblock/salamandra-7b-instruct-GGUF/blob/main/salamandra-7b-instruct-Q4_0.gguf) | Q4_0 | 4.647 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [salamandra-7b-instruct-Q4_K_S.gguf](https://huggingface.co/tensorblock/salamandra-7b-instruct-GGUF/blob/main/salamandra-7b-instruct-Q4_K_S.gguf) | Q4_K_S | 4.672 GB | small, greater quality loss |
| [salamandra-7b-instruct-Q4_K_M.gguf](https://huggingface.co/tensorblock/salamandra-7b-instruct-GGUF/blob/main/salamandra-7b-instruct-Q4_K_M.gguf) | Q4_K_M | 4.851 GB | medium, balanced quality - recommended |
| [salamandra-7b-instruct-Q5_0.gguf](https://huggingface.co/tensorblock/salamandra-7b-instruct-GGUF/blob/main/salamandra-7b-instruct-Q5_0.gguf) | Q5_0 | 5.487 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [salamandra-7b-instruct-Q5_K_S.gguf](https://huggingface.co/tensorblock/salamandra-7b-instruct-GGUF/blob/main/salamandra-7b-instruct-Q5_K_S.gguf) | Q5_K_S | 5.487 GB | large, low quality loss - recommended |
| [salamandra-7b-instruct-Q5_K_M.gguf](https://huggingface.co/tensorblock/salamandra-7b-instruct-GGUF/blob/main/salamandra-7b-instruct-Q5_K_M.gguf) | Q5_K_M | 5.592 GB | large, very low quality loss - recommended |
| [salamandra-7b-instruct-Q6_K.gguf](https://huggingface.co/tensorblock/salamandra-7b-instruct-GGUF/blob/main/salamandra-7b-instruct-Q6_K.gguf) | Q6_K | 6.380 GB | very large, extremely low quality loss |
| [salamandra-7b-instruct-Q8_0.gguf](https://huggingface.co/tensorblock/salamandra-7b-instruct-GGUF/blob/main/salamandra-7b-instruct-Q8_0.gguf) | Q8_0 | 8.261 GB | very large, extremely low quality loss - not recommended |

## Downloading instructions

### Command line

First, install the Hugging Face Hub CLI:

```shell
pip install -U "huggingface_hub[cli]"
```

Then, download an individual model file to a local directory:

```shell
huggingface-cli download tensorblock/salamandra-7b-instruct-GGUF --include "salamandra-7b-instruct-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```

If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:

```shell
huggingface-cli download tensorblock/salamandra-7b-instruct-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
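
Once a file is downloaded, you can chat with it locally using the llama.cpp CLI built above. The command below is a minimal sketch (the model path, quant choice, and sampling settings are placeholders to adapt); conversation mode applies the chat template shown earlier automatically:

```shell
# Hypothetical interactive chat with the downloaded GGUF file.
./build/bin/llama-cli \
  -m MY_LOCAL_DIR/salamandra-7b-instruct-Q4_K_M.gguf \
  -cnv \
  -p "You are a helpful assistant." \
  -c 4096 --temp 0.7
```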