Description

This repo contains GGUF format model files for multi_verse_model.

Files Provided

| Name | Quant | Bits | File Size | Remark |
| ---- | ----- | ---- | --------- | ------ |
| multi_verse_model.IQ3_S.gguf | IQ3_S | 3 | 3.18 GB | 3.44 bpw quantization |
| multi_verse_model.IQ3_M.gguf | IQ3_M | 3 | 3.28 GB | 3.66 bpw quantization mix |
| multi_verse_model.Q4_0.gguf | Q4_0 | 4 | 4.11 GB | 3.56G, +0.2166 ppl |
| multi_verse_model.IQ4_NL.gguf | IQ4_NL | 4 | 4.16 GB | 4.25 bpw non-linear quantization |
| multi_verse_model.Q4_K_M.gguf | Q4_K_M | 4 | 4.37 GB | 3.80G, +0.0532 ppl |
| multi_verse_model.Q5_K_M.gguf | Q5_K_M | 5 | 5.13 GB | 4.45G, +0.0122 ppl |
| multi_verse_model.Q6_K.gguf | Q6_K | 6 | 5.94 GB | 5.15G, +0.0008 ppl |
| multi_verse_model.Q8_0.gguf | Q8_0 | 8 | 7.70 GB | 6.70G, +0.0004 ppl |
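As a rough sanity check, a quant's file size can be estimated from its bits-per-weight (bpw) figure: size ≈ parameter count × bpw / 8 bytes, plus some overhead for metadata and tensors kept at higher precision. A minimal sketch, assuming a ~7.24B parameter count typical of Mistral-7B-class models (an assumption; the exact count is not stated in this card):

```python
# Rough GGUF file-size estimate from bits-per-weight (bpw).
# ASSUMPTION: ~7.24e9 parameters (typical for a Mistral-7B-class model).
# Real files are slightly larger: metadata plus some non-quantized tensors.

def estimate_size_gb(n_params: float, bpw: float) -> float:
    """Estimated file size in GB (1 GB = 1e9 bytes)."""
    return n_params * bpw / 8 / 1e9

PARAMS = 7.24e9
print(round(estimate_size_gb(PARAMS, 3.44), 2))  # IQ3_S estimate; listed at 3.18 GB
print(round(estimate_size_gb(PARAMS, 3.66), 2))  # IQ3_M estimate; listed at 3.28 GB
```

The estimates land close to (slightly under) the listed sizes, since the formula ignores format overhead.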

Parameters

| path | type | architecture | rope_theta | sliding_win | max_pos_embed |
| ---- | ---- | ------------ | ---------- | ----------- | ------------- |
| ammarali32/multi_verse_model | mistral | MistralForCausalLM | 10000 | 4096 | 32768 |
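The `rope_theta` value above is the base of the rotary position embedding (RoPE): each pair of channels `i` in an attention head rotates at inverse frequency `theta^(-2i/d)`, where `d` is the head dimension. A minimal sketch, assuming `head_dim = 128` as in Mistral-7B-class models (an assumption; head dimension is not listed in this table):

```python
# RoPE inverse frequencies derived from rope_theta.
# ASSUMPTION: head_dim = 128 (typical for Mistral-7B-class models).
ROPE_THETA = 10000.0  # from the parameters table
HEAD_DIM = 128

def rope_inv_freqs(theta: float, head_dim: int) -> list[float]:
    """inv_freq[i] = theta^(-2i/d), one entry per rotated channel pair."""
    return [theta ** (-2.0 * i / head_dim) for i in range(head_dim // 2)]

freqs = rope_inv_freqs(ROPE_THETA, HEAD_DIM)
print(freqs[0])    # first pair rotates at frequency 1.0
print(len(freqs))  # 64 channel pairs
```

Lower-index pairs rotate fastest (encoding local position), while higher-index pairs rotate slowly enough to distinguish positions across the full 32768-token `max_pos_embed` range.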

Original Model Card

I'm an innovative concept, created through a cutting-edge training method. Picture me as a "learning bot" who's had a special upgrade. Just like how a chef perfects their recipes with new techniques, my creators have fine-tuned my "knowledge-absorption" process. I'm here to showcase the potential of this new approach, and I'm excited to test my abilities in a friendly, helpful manner. So, while I may be a product of experimentation, my purpose is to demonstrate the power of continuous learning and growth in the world of artificial intelligence.

Model size: 7B params
Architecture (GGUF): llama