Base model: https://huggingface.co/mistralai/Mistral-7B-v0.1
Mistral 7B fine-tuned on https://huggingface.co/datasets/netcat420/quiklogik for 500 steps on an A100 GPU in Google Colab using paid compute units.
The safetensors model is stored in f32 precision and requires at least 40 GB of memory. Users on consumer hardware can instead download the "mhenn4b.gguf" file, the same 7-billion-parameter model quantized to 4-bit precision (Q4_K_M), which requires only 6.83 GB of memory.
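As a rough sanity check on those memory figures, the weight footprint can be estimated from the parameter count and bits per weight. The numbers below are assumptions for illustration: ~7.24 billion parameters for Mistral 7B, and ~4.85 effective bits per weight for Q4_K_M (a mixed-quantization average, not exactly 4). Runtime memory is higher than the weight footprint because of activations, context/KV cache, and framework overhead, which is why the figures above exceed these estimates.

```python
def weight_memory_gib(n_params: float, bits_per_param: float) -> float:
    """Approximate memory needed for model weights alone, in GiB."""
    return n_params * bits_per_param / 8 / 2**30

# Assumed approximate parameter count for Mistral 7B.
N_PARAMS = 7.24e9

f32_gib = weight_memory_gib(N_PARAMS, 32)     # full f32 precision
q4_gib = weight_memory_gib(N_PARAMS, 4.85)    # assumed Q4_K_M average bits/weight

print(f"f32 weights:    ~{f32_gib:.1f} GiB")
print(f"Q4_K_M weights: ~{q4_gib:.1f} GiB")
```

The gap between these weight-only estimates and the stated requirements (40 GB and 6.83 GB) is the working memory the runtime needs on top of the weights.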