Quantization made by Richard Erkhov.

[Github](https://github.com/RichardErkhov)

[Discord](https://discord.gg/pvy7H8DZMG)

[Request more models](https://github.com/RichardErkhov/quant_request)

# Phi-5B-Test - bnb 4bits

- Model creator: https://huggingface.co/Replete-AI/
- Original model: https://huggingface.co/Replete-AI/Phi-5B-Test/
Original model description:

---
base_model: []
tags:
- mergekit
- merge
license: mit
---
# Untitled Model (1)

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details

### Merge Method

This model was produced in two steps: a dare_ties merge of the models listed below, followed by a passthrough depth-upscaling pass that stacks two copies of the merged layer stack.
### Models Merged

The following models were included in the merge:

* [liminerity/Phigments12](https://huggingface.co/liminerity/Phigments12)
* [l3utterfly/phi-2-layla-v1-chatml](https://huggingface.co/l3utterfly/phi-2-layla-v1-chatml)

The merged model was then depth upscaled.
### Configuration

The following YAML configurations were used to produce this model. First, the dare_ties merge:
```yaml
models:
  - model: liminerity/Phigments12
    parameters:
      density: [1, 0.7, 0.1] # density gradient
      weight: 1.0
  - model: l3utterfly/phi-2-layla-v1-chatml
    parameters:
      density: 0.8
      weight: [0, 0.5, 0.7, 1] # weight gradient
merge_method: dare_ties
base_model: liminerity/Phigments12
parameters:
  normalize: true
  int8_mask: true
dtype: float16
```
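The `density` and `weight` lists in the configuration above are mergekit "gradients": a short list of anchor values that is expanded to one value per layer. A minimal sketch of that expansion, assuming mergekit spaces the anchors evenly across the layer range and linearly interpolates between them (the function name `gradient_for_layers` is illustrative, not mergekit's API):

```python
def gradient_for_layers(anchors, num_layers):
    """Expand a mergekit-style gradient list into one value per layer.

    Anchor values are spaced evenly over the layer range; layers that
    fall between two anchors get a linearly interpolated value.
    """
    if num_layers == 1:
        return [float(anchors[0])]
    step = (len(anchors) - 1) / (num_layers - 1)  # anchor-index step per layer
    values = []
    for layer in range(num_layers):
        pos = layer * step          # fractional position in the anchor list
        lo = int(pos)
        hi = min(lo + 1, len(anchors) - 1)
        frac = pos - lo
        values.append(anchors[lo] * (1 - frac) + anchors[hi] * frac)
    return values

# e.g. the density gradient [1, 0.7, 0.1] expanded over 5 layers
print(gradient_for_layers([1, 0.7, 0.1], 5))
```

So with `density: [1, 0.7, 0.1]`, early layers keep almost all of Phigments12's delta, while the density ramps down toward 0.1 in the last layers; the `weight` gradient on phi-2-layla does the reverse.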
Then, the passthrough depth upscale, which stacks two copies of the merged model's 32 layers (`phi/` appears to be a local path to the intermediate merge):

```yaml
dtype: float16
merge_method: passthrough
slices:
  - sources:
      - model: phi/
        layer_range: [0, 32]
  - sources:
      - model: phi/
        layer_range: [0, 32]
```
# [Join the Replete AI Discord here!](https://discord.gg/tG5aY4EX4T)