---
language:
- en
license: mit
library_name: transformers
tags:
- mergekit
- merge
- phi-4
- TensorBlock
- GGUF
base_model: suayptalha/Luminis-phi-4
pipeline_tag: text-generation
model-index:
- name: Luminis-phi-4
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: IFEval (0-Shot)
type: HuggingFaceH4/ifeval
args:
num_few_shot: 0
metrics:
- type: inst_level_strict_acc and prompt_level_strict_acc
value: 69.0
name: strict accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=suayptalha/Luminis-phi-4
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: BBH (3-Shot)
type: BBH
args:
num_few_shot: 3
metrics:
- type: acc_norm
value: 55.8
name: normalized accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=suayptalha/Luminis-phi-4
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MATH Lvl 5 (4-Shot)
type: hendrycks/competition_math
args:
num_few_shot: 4
metrics:
- type: exact_match
value: 43.66
name: exact match
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=suayptalha/Luminis-phi-4
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GPQA (0-shot)
type: Idavidrein/gpqa
args:
num_few_shot: 0
metrics:
- type: acc_norm
value: 13.53
name: acc_norm
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=suayptalha/Luminis-phi-4
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MuSR (0-shot)
type: TAUR-Lab/MuSR
args:
num_few_shot: 0
metrics:
- type: acc_norm
value: 16.68
name: acc_norm
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=suayptalha/Luminis-phi-4
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU-PRO (5-shot)
type: TIGER-Lab/MMLU-Pro
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 49.15
name: accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=suayptalha/Luminis-phi-4
name: Open LLM Leaderboard
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
[![Website](https://img.shields.io/badge/Website-tensorblock.co-blue?logo=google-chrome&logoColor=white)](https://tensorblock.co)
[![Twitter](https://img.shields.io/twitter/follow/tensorblock_aoi?style=social)](https://twitter.com/tensorblock_aoi)
[![Discord](https://img.shields.io/badge/Discord-Join%20Us-5865F2?logo=discord&logoColor=white)](https://discord.gg/Ej5NmeHFf2)
[![GitHub](https://img.shields.io/badge/GitHub-TensorBlock-black?logo=github&logoColor=white)](https://github.com/TensorBlock)
[![Telegram](https://img.shields.io/badge/Telegram-Group-blue?logo=telegram)](https://t.me/TensorBlock)
## suayptalha/Luminis-phi-4 - GGUF
This repo contains GGUF format model files for [suayptalha/Luminis-phi-4](https://huggingface.co/suayptalha/Luminis-phi-4).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4823](https://github.com/ggml-org/llama.cpp/commit/5bbe6a9fe9a8796a9389c85accec89dbc4d91e39).
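As an example, a downloaded quant can be run directly with llama.cpp's command-line tool. This is a minimal sketch, assuming llama.cpp is built at or after the commit above and that you have already downloaded one of the quantized files listed below; the `llama-cli` binary name and flags are those of current upstream llama.cpp.
```shell
# Sketch: generate 128 tokens from a prompt using a local GGUF file
# (assumes a llama.cpp build at commit b4823 or later)
./llama-cli -m ./Luminis-phi-4-Q4_K_M.gguf -p "Hello, world" -n 128
```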
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th colspan="2" style="font-size: 25px;">Forge</th>
</tr>
<tr>
<th colspan="2">
<img src="https://imgur.com/faI5UKh.jpeg" alt="Forge Project" width="900"/>
</th>
</tr>
<tr>
<th colspan="2">An OpenAI-compatible multi-provider routing layer.</th>
</tr>
<tr>
<th colspan="2">
<a href="https://github.com/TensorBlock/forge" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">πŸš€ Try it now! πŸš€</a>
</th>
</tr>
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="MCP Servers" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Studio" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">πŸ‘€ See what we built πŸ‘€</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">πŸ‘€ See what we built πŸ‘€</a>
</th>
</tr>
</table>
## Prompt template
```
<|im_start|>system<|im_sep|>{system_prompt}<|im_end|><|im_start|>user<|im_sep|>{prompt}<|im_end|><|im_start|>assistant<|im_sep|>
```
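For illustration, here is the same template with concrete values substituted for the `{system_prompt}` and `{prompt}` placeholders (the example strings are arbitrary, not part of the format):
```
<|im_start|>system<|im_sep|>You are a helpful assistant.<|im_end|><|im_start|>user<|im_sep|>What is the GGUF file format?<|im_end|><|im_start|>assistant<|im_sep|>
```
The model's reply is generated after the final `<|im_sep|>`.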
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [Luminis-phi-4-Q2_K.gguf](https://huggingface.co/tensorblock/Luminis-phi-4-GGUF/blob/main/Luminis-phi-4-Q2_K.gguf) | Q2_K | 5.609 GB | smallest, significant quality loss - not recommended for most purposes |
| [Luminis-phi-4-Q3_K_S.gguf](https://huggingface.co/tensorblock/Luminis-phi-4-GGUF/blob/main/Luminis-phi-4-Q3_K_S.gguf) | Q3_K_S | 6.505 GB | very small, high quality loss |
| [Luminis-phi-4-Q3_K_M.gguf](https://huggingface.co/tensorblock/Luminis-phi-4-GGUF/blob/main/Luminis-phi-4-Q3_K_M.gguf) | Q3_K_M | 7.191 GB | very small, high quality loss |
| [Luminis-phi-4-Q3_K_L.gguf](https://huggingface.co/tensorblock/Luminis-phi-4-GGUF/blob/main/Luminis-phi-4-Q3_K_L.gguf) | Q3_K_L | 7.789 GB | small, substantial quality loss |
| [Luminis-phi-4-Q4_0.gguf](https://huggingface.co/tensorblock/Luminis-phi-4-GGUF/blob/main/Luminis-phi-4-Q4_0.gguf) | Q4_0 | 8.383 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [Luminis-phi-4-Q4_K_S.gguf](https://huggingface.co/tensorblock/Luminis-phi-4-GGUF/blob/main/Luminis-phi-4-Q4_K_S.gguf) | Q4_K_S | 8.444 GB | small, greater quality loss |
| [Luminis-phi-4-Q4_K_M.gguf](https://huggingface.co/tensorblock/Luminis-phi-4-GGUF/blob/main/Luminis-phi-4-Q4_K_M.gguf) | Q4_K_M | 8.890 GB | medium, balanced quality - recommended |
| [Luminis-phi-4-Q5_0.gguf](https://huggingface.co/tensorblock/Luminis-phi-4-GGUF/blob/main/Luminis-phi-4-Q5_0.gguf) | Q5_0 | 10.152 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [Luminis-phi-4-Q5_K_S.gguf](https://huggingface.co/tensorblock/Luminis-phi-4-GGUF/blob/main/Luminis-phi-4-Q5_K_S.gguf) | Q5_K_S | 10.152 GB | large, low quality loss - recommended |
| [Luminis-phi-4-Q5_K_M.gguf](https://huggingface.co/tensorblock/Luminis-phi-4-GGUF/blob/main/Luminis-phi-4-Q5_K_M.gguf) | Q5_K_M | 10.413 GB | large, very low quality loss - recommended |
| [Luminis-phi-4-Q6_K.gguf](https://huggingface.co/tensorblock/Luminis-phi-4-GGUF/blob/main/Luminis-phi-4-Q6_K.gguf) | Q6_K | 12.030 GB | very large, extremely low quality loss |
| [Luminis-phi-4-Q8_0.gguf](https://huggingface.co/tensorblock/Luminis-phi-4-GGUF/blob/main/Luminis-phi-4-Q8_0.gguf) | Q8_0 | 15.581 GB | very large, extremely low quality loss - not recommended |
## Downloading instructions
### Command line
First, install the Hugging Face CLI:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download an individual model file to a local directory:
```shell
huggingface-cli download tensorblock/Luminis-phi-4-GGUF --include "Luminis-phi-4-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/Luminis-phi-4-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
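Once downloaded, a file can also be served over llama.cpp's OpenAI-compatible HTTP endpoint. A minimal sketch, assuming a llama.cpp build that includes the `llama-server` binary; adjust the file name and port to your setup:
```shell
# Sketch: serve a downloaded quant locally on port 8080
./llama-server -m MY_LOCAL_DIR/Luminis-phi-4-Q4_K_M.gguf --port 8080
```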