---
language:
- en
license: mit
library_name: transformers
tags:
- mergekit
- merge
- phi-4
base_model:
- bunnycore/Phi-4-RR-Shoup
- bunnycore/Phi-4-Model-Stock-v4
pipeline_tag: text-generation
model-index:
- name: Luminis-phi-4
  results:
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: IFEval (0-Shot)
      type: HuggingFaceH4/ifeval
      args:
        num_few_shot: 0
    metrics:
    - type: inst_level_strict_acc and prompt_level_strict_acc
      value: 69.0
      name: strict accuracy
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=suayptalha/Luminis-phi-4
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: BBH (3-Shot)
      type: BBH
      args:
        num_few_shot: 3
    metrics:
    - type: acc_norm
      value: 55.8
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=suayptalha/Luminis-phi-4
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MATH Lvl 5 (4-Shot)
      type: hendrycks/competition_math
      args:
        num_few_shot: 4
    metrics:
    - type: exact_match
      value: 43.66
      name: exact match
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=suayptalha/Luminis-phi-4
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: GPQA (0-shot)
      type: Idavidrein/gpqa
      args:
        num_few_shot: 0
    metrics:
    - type: acc_norm
      value: 13.53
      name: acc_norm
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=suayptalha/Luminis-phi-4
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MuSR (0-shot)
      type: TAUR-Lab/MuSR
      args:
        num_few_shot: 0
    metrics:
    - type: acc_norm
      value: 16.68
      name: acc_norm
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=suayptalha/Luminis-phi-4
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MMLU-PRO (5-shot)
      type: TIGER-Lab/MMLU-Pro
      config: main
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 49.15
      name: accuracy
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=suayptalha/Luminis-phi-4
      name: Open LLM Leaderboard
---
# Merged Model
|
|
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
|
|
|
|
> [!TIP]
> This model is currently ranked #3 on the Open LLM Leaderboard among models up to 15B parameters, #4 among models up to 32B parameters, and #53 among all models!
|
|
14.2.2025
|
|
## Merge Details
### Merge Method
|
|
This model was merged using the [SLERP](https://en.wikipedia.org/wiki/Slerp) merge method.
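SLERP (spherical linear interpolation) blends two sets of weights along the great-circle arc between them rather than along the straight line, so the interpolated weights keep a norm consistent with the endpoints. A minimal sketch of the formula on plain Python lists (mergekit's actual implementation operates on full weight tensors and handles more edge cases):

```python
import math

def slerp(t, v0, v1, eps=1e-8):
    """Spherical linear interpolation between vectors v0 and v1 at fraction t."""
    # Cosine of the angle between the two vectors.
    norm0 = math.sqrt(sum(x * x for x in v0))
    norm1 = math.sqrt(sum(x * x for x in v1))
    dot = sum(a * b for a, b in zip(v0, v1)) / (norm0 * norm1)
    dot = max(-1.0, min(1.0, dot))

    # Nearly parallel vectors: fall back to plain linear interpolation.
    if abs(dot) > 1.0 - eps:
        return [(1 - t) * a + t * b for a, b in zip(v0, v1)]

    theta = math.acos(dot)           # angle between the vectors
    sin_theta = math.sin(theta)
    w0 = math.sin((1 - t) * theta) / sin_theta
    w1 = math.sin(t * theta) / sin_theta
    return [w0 * a + w1 * b for a, b in zip(v0, v1)]
```

At `t = 0` the result is `v0`, at `t = 1` it is `v1`, and intermediate `t` values trace the arc between them; the `t` schedule in the configuration below controls this fraction per layer and per module.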
|
|
### Models Merged
|
|
The following models were included in the merge:
* [bunnycore/Phi-4-RR-Shoup](https://huggingface.co/bunnycore/Phi-4-RR-Shoup)
* [bunnycore/Phi-4-Model-Stock-v4](https://huggingface.co/bunnycore/Phi-4-Model-Stock-v4)
|
|
### Configuration
|
|
The following YAML configuration was used to produce this model:
|
|
```yaml
base_model: bunnycore/Phi-4-Model-Stock-v4
dtype: bfloat16
merge_method: slerp
parameters:
  t:
  - filter: self_attn
    value: [0.0, 0.5, 0.3, 0.7, 1.0]
  - filter: mlp
    value: [1.0, 0.5, 0.7, 0.3, 0.0]
  - value: 0.5
slices:
- sources:
  - layer_range: [0, 40]
    model: bunnycore/Phi-4-Model-Stock-v4
  - layer_range: [0, 40]
    model: bunnycore/Phi-4-RR-Shoup
```
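In the configuration above, a list of `t` values such as `[0.0, 0.5, 0.3, 0.7, 1.0]` acts as a gradient: the anchor values are spread evenly across the 40-layer range and the layers in between receive linearly interpolated values (so self-attention blends lean toward the base model in early layers and toward the other model in late layers, with the MLP gradient reversed). A minimal sketch of that interpolation, assuming mergekit's gradient behavior (the function name `gradient_t` is hypothetical, for illustration only):

```python
def gradient_t(values, num_layers):
    """Linearly interpolate a short list of anchor values across num_layers layers."""
    if len(values) == 1:
        return [values[0]] * num_layers
    ts = []
    for layer in range(num_layers):
        pos = layer / (num_layers - 1)        # position of this layer in [0, 1]
        x = pos * (len(values) - 1)           # scale into anchor-index space
        i = min(int(x), len(values) - 2)      # left anchor index
        frac = x - i                          # fraction between anchors i and i+1
        ts.append(values[i] * (1 - frac) + values[i + 1] * frac)
    return ts
```

For example, `gradient_t([0.0, 0.5, 0.3, 0.7, 1.0], 40)` yields a per-layer `t` that starts at 0.0 (pure base model for self-attention) and ends at 1.0.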
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/suayptalha__Luminis-phi-4-details).
|
|
| Metric              | Value |
|---------------------|------:|
| Avg.                | 41.30 |
| IFEval (0-Shot)     | 69.00 |
| BBH (3-Shot)        | 55.80 |
| MATH Lvl 5 (4-Shot) | 43.66 |
| GPQA (0-shot)       | 13.53 |
| MuSR (0-shot)       | 16.68 |
| MMLU-PRO (5-shot)   | 49.15 |
|
|
<a href="https://www.buymeacoffee.com/suayptalha" target="_blank"><img src="https://cdn.buymeacoffee.com/buttons/v2/default-yellow.png" alt="Buy Me A Coffee" style="height: 60px !important;width: 217px !important;" ></a>