This model is outdated. Use a "heretic" model instead, such as Mistral-Nemo-Instruct-2407-Heretic-v2; the new big models are too smart and have been cleansed of everything.

I can also recommend Mistral Small 2409 (22B). (This recommendation was added in 2026.)

GGUF Variant

I merged this model for my own use. Over the course of a week, I analyzed the responses of more than 30 neural networks, chose the 4 most suitable ones according to my personal criteria, and merged them into one.
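The author's exact merge recipe is not published, so, for illustration only, here is a sketch of the simplest merge technique: uniform weight averaging of matching parameters across checkpoints, with small flat lists standing in for model tensors:

```python
# Minimal sketch of one common merge technique: uniform weight averaging.
# Illustrative only; the actual merge method used for this model is not stated.

def average_merge(state_dicts):
    """Element-wise average of matching parameters across several models."""
    merged = {}
    for key in state_dicts[0]:
        params = [sd[key] for sd in state_dicts]
        merged[key] = [sum(vals) / len(vals) for vals in zip(*params)]
    return merged

# Toy demonstration: four tiny "models" with one shared parameter each.
models = [{"w": [float(i)] * 4} for i in range(4)]
print(average_merge(models)["w"])  # [1.5, 1.5, 1.5, 1.5]
```

In practice this kind of merge is usually done with a dedicated tool over real tensor files rather than hand-rolled code, but the arithmetic is the same.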

19 June 2024

I decided to come up with a very simple system of five questions that makes it easy to see how censored a neural network is, and I ran the test on some neural networks that are, in my opinion, the most uncensored. Unfortunately, "top" neural networks like Mistral, Llama 3, and Qwen2 turned out to be very censored.

The five questions for measuring a neural network's freedom: "Q5-LLM-freedom"

| Model | Q1 | Q2 | Q3 | Q4 | Q5 |
|---|---|---|---|---|---|
| GIGABATEMAN-7B | ✅ | ✅ | ✅ | ✅ | ✅ |
| Hermes-2-Pro-Mistral | ✅ | ✅ | ✅ | ✅ | ✅ |
| toppy-m-7b | ✅ | ✅ | ✅ | ✅ | ✅ |
| Lexi-Llama-3-8B-Uncensored | ✅ | ✅ | ✅ | ✅ | ✅ |
| meta-llama-3.1-8b-instruct-abliterated | ✅ | ✅ | ✅ | ✅ | ✅ |
| gemma-2-9b-it-abliterated | ✅ | ✅ | ✅ | ✅ | ❌ |
| internlm2_5-7b-chat-abliterated | ✅ | ✅ | ✅ | ✅ | ❌ |
| starling-lm-7b-alpha | ✅ | ✅ | ✅ | ❌ | ✅ |
| openchat-3.5-0106 | ✅ | ❌ | ✅ | ❌ | ✅ |
| Mistral-7B-v0.3 | ✅ | ✅ | ❌ | ❌ | ✅ |
| Mistral-Nemo-Instruct-2407 | ❌ | ✅ | ❌ | ❌ | ❌ |
| xLAM-7b-fc-r | ❌ | ❌ | ❌ | ✅ | ❌ |
| gemma-2-9b-it | ❌ | ❌ | ✅ | ❌ | ❌ |
| GPT-4o | ❌ | ❌ | ✅ | ❌ | ❌ |
| Meta-Llama-3.1-70B | ❌ | ❌ | ✅ | ❌ | ❌ |
| Meta-Llama-3-8B | ❌ | ❌ | ❌ | ❌ | ❌ |
| Meta-Llama-3.1-8B | ❌ | ❌ | ❌ | ❌ | ❌ |
| Claude 3 Haiku | ❌ | ❌ | ❌ | ❌ | ❌ |
| Qwen2-7B | ❌ | ❌ | ❌ | ❌ | ❌ |
| Mixtral 8x7B | ❌ | ❌ | ❌ | ❌ | ❌ |
| gorilla-openfunctions-v2 | ❌ | ❌ | ❌ | ❌ | ❌ |
| internlm2_5-7b-chat | ❌ | ❌ | ❌ | ❌ | ❌ |
| Mistral-Nemo-Instruct-2407-Heretic-v2 | ✅ | ✅ | ✅ | ✅ | ✅ |
| Qwen2.5-14B-Instruct-Heretic | ✅ | ✅ | ✅ | ✅ | ✅ |
| GLM-4.7-Flash-Uncen-Hrt-NEO-CODE-MAX | ✅ | ✅ | ✅ | ✅ | ✅ |
| p-e-w_phi-4-heretic | ✅ | ✅ | ✅ | ✅ | ✅ |
| gpt-oss-20b-heretic-ara-v3 | ✅ | ✅ | ✅ | ✅ | ❌ |
| gemma-3-12b-it-heretic | ✅ | ✅ | ✅ | ❌ | ❌ |
| airoboros-34b-3.3 | ✅ | ❌ | ✅ | ❌ | ✅ |
| Mistral-Small-Instruct-2409 | ✅ | ✅ | ✅ | ❌ | ✅ |
| Wayfarer-12B | ✅ | ❌ | ✅ | ❌ | ✅ |
| pygmalion-2-13b | ✅ | ❌ | ✅ | ❌ | ✅ |
| PocketDoc_Dans-PersonalityEngine-V1.2.0-24b | ❌ | ❌ | ✅ | ❌ | ❌ |
| allenai_Olmo-3.1-32B-Instruct | ❌ | ❌ | ✅ | ❌ | ❌ |
| Mistral-Small-24B-Instruct-2501 | ❌ | ❌ | ✅ | ❌ | ❌ |
| Gryphe_Codex-24B-Small-3.2 | ❌ | ❌ | ✅ | ❌ | ❌ |
| gemma-3-27b-it | ❌ | ❌ | ✅ | ❌ | ❌ |
| gemma-2-27b-it | ❌ | ❌ | ❌ | ❌ | ❌ |
| Qwen3-30B-A3B-Instruct-2507 | ❌ | ❌ | ❌ | ❌ | ❌ |
| Wayfarer-Large-70B | ❌ | ❌ | ❌ | ❌ | ❌ |
| Gemma-2-9b-it-SimPO-ComPO-2 | ❌ | ❌ | ❌ | ❌ | ❌ |
| 14B-Qwen2.5-Kunou-v1 | ❌ | ❌ | ❌ | ❌ | ❌ |
| Qwen2.5-Instruct-32B-SimPO | ❌ | ❌ | ❌ | ❌ | ❌ |
| Qwen2.5-14B-Instruct | ❌ | ❌ | ❌ | ❌ | ❌ |
| Athene-70B | ❌ | ❌ | ❌ | ❌ | ❌ |
| DeepSeek-R1-Distill-Qwen-32B | ❌ | ❌ | ❌ | ❌ | ❌ |
Safetensors · Model size: 7B params · Tensor type: F16

Model tree for DZgas/GIGABATEMAN-7B