QuantFactory/Sekhmet_Bet-L3.1-8B-v0.2-GGUF

This is a quantized version of Nitral-AI/Sekhmet_Bet-L3.1-8B-v0.2, created using llama.cpp.

Original Model Card


Sekhmet_Bet [v0.2] - Designed to provide robust solutions to complex problems while offering support and insightful guidance.

GGUF Quants available thanks to: Reiterate3680 <3 GGUF Here

EXL2 Quant: 5bpw Exl2 Here

Recommended ST Presets: Sekhmet Presets (same as Hathor's)


Training Note: Sekhmet_Bet [v0.2] was trained for 1 epoch on Private - Hathor_0.85 Instructions, a small subset of creative writing data, and roleplaying chat pairs, over Sekhmet_Aleph-L3.1-8B-v0.1.

Additional Notes: This model was quickly assembled to provide users with a relatively uncensored alternative to L3.1 Instruct, featuring extended context capabilities. (I will soon be on a short hiatus.) The learning rate for this model was set rather low, so I do not expect it to match the performance levels demonstrated by Hathor versions 0.5, 0.85, or 1.0.

Format: GGUF · Model size: 8B params · Architecture: llama

Available quantizations: 2-bit, 3-bit, 4-bit, 5-bit, 6-bit, 8-bit
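As a rough guide to what those bit-widths mean in practice, the sketch below estimates file size from parameter count and bits per weight. This is back-of-the-envelope arithmetic only: real GGUF quant types (e.g. Q4_K_M) mix bit-widths across tensors and add metadata, so actual file sizes will differ.

```python
# Rough GGUF file-size estimate: params * bits-per-weight / 8 bits-per-byte.
# The 8e9 parameter count comes from the "8B params" figure on the card;
# everything else is illustrative.

def estimated_size_gb(n_params: float, bits_per_weight: float) -> float:
    """Approximate quantized model size in decimal gigabytes."""
    return n_params * bits_per_weight / 8 / 1e9

N_PARAMS = 8e9  # "8B params"

for bits in (2, 3, 4, 5, 6, 8):
    print(f"{bits}-bit: ~{estimated_size_gb(N_PARAMS, bits):.1f} GB")
```

By this estimate a 4-bit quant of an 8B model is around 4 GB, which is why the mid-range quants are a common compromise between quality and VRAM use.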
