---
library_name: transformers
license: apache-2.0
base_model: mlfoundations-dev/dolphinr1
tags:
- llama-factory
- full
- generated_from_trainer
- TensorBlock
- GGUF
model-index:
- name: dolphinr1
  results: []
---

<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>

[Website](https://tensorblock.co)
[Twitter](https://twitter.com/tensorblock_aoi)
[Discord](https://discord.gg/Ej5NmeHFf2)
[GitHub](https://github.com/TensorBlock)
[Telegram](https://t.me/TensorBlock)

## mlfoundations-dev/dolphinr1 - GGUF

This repo contains GGUF format model files for [mlfoundations-dev/dolphinr1](https://huggingface.co/mlfoundations-dev/dolphinr1).

The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4823](https://github.com/ggml-org/llama.cpp/commit/5bbe6a9fe9a8796a9389c85accec89dbc4d91e39).

## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th colspan="2" style="font-size: 25px;">Forge</th>
</tr>
<tr>
<th colspan="2">
<img src="https://imgur.com/faI5UKh.jpeg" alt="Forge Project" width="900"/>
</th>
</tr>
<tr>
<th colspan="2">An OpenAI-compatible multi-provider routing layer.</th>
</tr>
<tr>
<th colspan="2">
<a href="https://github.com/TensorBlock/forge" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">Try it now!</a>
</th>
</tr>

<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="MCP Servers" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Studio" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">See what we built</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">See what we built</a>
</th>
</tr>
</table>

## Prompt template

```
<|im_start|>system
{system_prompt}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
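
The template is ChatML. As a minimal sketch of running one templated prompt (assuming a local llama.cpp build at or after b4823, with a quantized file such as `dolphinr1-Q4_K_M.gguf` already downloaded to the current directory; paths and the quant choice are illustrative):

```shell
# Run one ChatML-formatted prompt through llama-cli.
# -e enables escape processing so the \n sequences become real newlines.
./llama-cli -m dolphinr1-Q4_K_M.gguf -e -n 256 \
  -p "<|im_start|>system\nYou are a helpful assistant.<|im_end|>\n<|im_start|>user\nHello!<|im_end|>\n<|im_start|>assistant\n"
```

Since GGUF files typically embed the chat template in their metadata, conversation mode (`llama-cli -m dolphinr1-Q4_K_M.gguf -cnv`) should apply this template for you automatically.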

## Model file specification

| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [dolphinr1-Q2_K.gguf](https://huggingface.co/tensorblock/dolphinr1-GGUF/blob/main/dolphinr1-Q2_K.gguf) | Q2_K | 3.016 GB | smallest, significant quality loss - not recommended for most purposes |
| [dolphinr1-Q3_K_S.gguf](https://huggingface.co/tensorblock/dolphinr1-GGUF/blob/main/dolphinr1-Q3_K_S.gguf) | Q3_K_S | 3.492 GB | very small, high quality loss |
| [dolphinr1-Q3_K_M.gguf](https://huggingface.co/tensorblock/dolphinr1-GGUF/blob/main/dolphinr1-Q3_K_M.gguf) | Q3_K_M | 3.808 GB | very small, high quality loss |
| [dolphinr1-Q3_K_L.gguf](https://huggingface.co/tensorblock/dolphinr1-GGUF/blob/main/dolphinr1-Q3_K_L.gguf) | Q3_K_L | 4.088 GB | small, substantial quality loss |
| [dolphinr1-Q4_0.gguf](https://huggingface.co/tensorblock/dolphinr1-GGUF/blob/main/dolphinr1-Q4_0.gguf) | Q4_0 | 4.431 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [dolphinr1-Q4_K_S.gguf](https://huggingface.co/tensorblock/dolphinr1-GGUF/blob/main/dolphinr1-Q4_K_S.gguf) | Q4_K_S | 4.458 GB | small, greater quality loss |
| [dolphinr1-Q4_K_M.gguf](https://huggingface.co/tensorblock/dolphinr1-GGUF/blob/main/dolphinr1-Q4_K_M.gguf) | Q4_K_M | 4.683 GB | medium, balanced quality - recommended |
| [dolphinr1-Q5_0.gguf](https://huggingface.co/tensorblock/dolphinr1-GGUF/blob/main/dolphinr1-Q5_0.gguf) | Q5_0 | 5.315 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [dolphinr1-Q5_K_S.gguf](https://huggingface.co/tensorblock/dolphinr1-GGUF/blob/main/dolphinr1-Q5_K_S.gguf) | Q5_K_S | 5.315 GB | large, low quality loss - recommended |
| [dolphinr1-Q5_K_M.gguf](https://huggingface.co/tensorblock/dolphinr1-GGUF/blob/main/dolphinr1-Q5_K_M.gguf) | Q5_K_M | 5.445 GB | large, very low quality loss - recommended |
| [dolphinr1-Q6_K.gguf](https://huggingface.co/tensorblock/dolphinr1-GGUF/blob/main/dolphinr1-Q6_K.gguf) | Q6_K | 6.254 GB | very large, extremely low quality loss |
| [dolphinr1-Q8_0.gguf](https://huggingface.co/tensorblock/dolphinr1-GGUF/blob/main/dolphinr1-Q8_0.gguf) | Q8_0 | 8.099 GB | very large, extremely low quality loss - not recommended |
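
As a rule of thumb, pick the largest quantization that fits comfortably in your RAM or VRAM. Once you have a file, one way to use it is llama.cpp's OpenAI-compatible HTTP server; a minimal sketch, assuming a local llama.cpp build and the Q4_K_M file in the current directory:

```shell
# Serve the model on port 8080 (binary name and paths depend on your llama.cpp build).
./llama-server -m dolphinr1-Q4_K_M.gguf --port 8080

# In another terminal, query the OpenAI-compatible chat endpoint:
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"messages": [{"role": "user", "content": "Hello!"}]}'
```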

## Downloading instructions

### Command line

First, install the Hugging Face CLI:

```shell
pip install -U "huggingface_hub[cli]"
```

Then, download an individual model file to a local directory:

```shell
huggingface-cli download tensorblock/dolphinr1-GGUF --include "dolphinr1-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```

To download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:

```shell
huggingface-cli download tensorblock/dolphinr1-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
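
If you want every quantization at once, you could widen the filter to match all GGUF files instead (per the table above, this is roughly 58 GB in total):

```shell
# Download all .gguf files in the repo to MY_LOCAL_DIR.
huggingface-cli download tensorblock/dolphinr1-GGUF --local-dir MY_LOCAL_DIR --include='*.gguf'
```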