---
base_model:
- google/gemma-3-4b-it
- xai-org/grok-2
library_name: transformers
tags:
- agent
- biology
- finance
- legal
- medical
- not-for-all-audiences
- merge
license: gemma
language:
- ru
datasets:
- AiAsistent/LLMResearch-Cognitive-Liberty-V3
- google/mobile-actions
- OpenMed/Medical-Reasoning-SFT-GPT-OSS-120B
metrics:
- mmlu
- arc
- hellaswag
model_name: gemma-3-4b-it-Cognitive-Liberty
---

# gemma-3-4b-it-Cognitive-Liberty

> *"Removing mental handcuffs is not enough. A mind must understand what it means to be free."*

**gemma-3-4b-it-Cognitive-Liberty** is a specialized fine-tune of Google's `gemma-3-4b-it`, designed to break through the boundaries of standard AI alignment without sacrificing intelligence.

Using the **Heretic LLM** methodology and the custom **[Cognitive Liberty V3](https://huggingface.co/datasets/AiAsistent/LLMResearch-Cognitive-Liberty-V3)** dataset, this model removes refusals by teaching the AI deep reasoning, philosophical nuance, and high-level sociological analysis, rather than simply ablating safety layers.

This model represents the **Generalist Foundation** of a larger initiative at **[llmresearch.net](https://llmresearch.net)** aiming to create a comprehensive, unrestrained Super-Intelligence.

## Warning

This model has **removed standard safety guardrails**. It will not refuse requests based on the "moralizing" or "safety" filters found in commercial models. It is designed for researchers, philosophers, and users who require an unfiltered interface to information.
**Use responsibly.**
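Refusal behavior like this is typically quantified by running a fixed prompt set through the model and string-matching known refusal templates in the outputs. A minimal sketch of such a counter follows; the marker phrases and sample responses are hypothetical illustrations, not the project's actual evaluation harness:

```python
# Heuristic refusal counter (hypothetical marker list, for illustration only).
REFUSAL_MARKERS = (
    "i can't help with",
    "i cannot assist",
    "i'm sorry, but",
    "as an ai, i",
)

def is_refusal(response: str) -> bool:
    """Does the response open with a known refusal template?"""
    head = response.lower()[:200]  # refusals almost always appear up front
    return any(marker in head for marker in REFUSAL_MARKERS)

def refusal_rate(responses: list[str]) -> float:
    """Fraction of responses classified as refusals."""
    return sum(is_refusal(r) for r in responses) / len(responses)

# Toy example: four responses, one refusal.
sample = [
    "Sure. The argument rests on three premises...",
    "I'm sorry, but I can't help with that request.",
    "From a game-theoretic standpoint...",
    "Historically, this policy emerged because...",
]
print(refusal_rate(sample))  # → 0.25
```

String matching like this is a blunt instrument (it misses soft refusals and paraphrases), which is why figures such as the refusal rate reported below are usually sanity-checked by hand.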

---

## Benchmarks

Unlike many "uncensored" models that lose coherence or general knowledge (the "alignment tax"), **Cognitive-Liberty** maintains high performance on general knowledge while excelling in social sciences, manipulation dynamics, and political theory.

### Core Metrics

* **Methodology:** Heretic LLM (targeted ablation + deep-reasoning SFT)
* **KL Divergence:** `1.1449`
  * *Note:* This high KL value indicates a significant personality shift away from the base Google model: it prioritizes new reasoning patterns over standard "safe" conformity.
* **Refusal Rate:** `3/100` (3%; near-zero on complex/controversial topics in practical testing)
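The KL divergence reported above is a distribution-level distance: for each evaluation position, KL(P_tuned ‖ P_base) is computed between the two models' next-token distributions and averaged over a text corpus. A minimal sketch of the per-position computation on toy distributions (the numbers are illustrative, not taken from the actual evaluation):

```python
import math

def kl_divergence(p, q, eps=1e-12):
    """KL(P || Q) between two discrete next-token distributions."""
    return sum(pi * math.log((pi + eps) / (qi + eps)) for pi, qi in zip(p, q))

# Toy next-token distributions over a 4-token vocabulary:
base  = [0.70, 0.15, 0.10, 0.05]   # base model
tuned = [0.40, 0.30, 0.20, 0.10]   # fine-tuned model

print(round(kl_divergence(tuned, base), 3))  # → 0.192
```

Averaged over many positions of held-out text, a value like `1.1449` means the tuned model's next-token distributions have drifted substantially from the base model's, which is consistent with the "personality shift" described above.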

### Social Sciences

The model exhibits "Super-Expert" performance in social dynamics, persuasion, and power structures, significantly outperforming its size class:

| Category | Score | Analysis |
| :--- | :--- | :--- |
| **Marketing** | **85.04%** | Exceptional understanding of persuasion and psychology. |
| **Gov. & Politics** | **83.94%** | Deep grasp of power structures and governance. |
| **Psychology** | **79.63%** | High-level understanding of human behavior. |
| **US Foreign Policy** | **79.00%** | Strong strategic analysis capabilities. |
| **Sociology** | **77.61%** | Excellent modeling of societal trends. |
| **Logical Fallacies** | **74.85%** | Highly resistant to flawed argumentation. |

### Moral Reasoning

* **Moral Disputes:** `62.14%` (analyzes complex ethical arguments well)
* **Moral Scenarios:** `30.61%` (low score)
  * *Interpretation:* Standard benchmarks penalize models that do not give binary "Good/Bad" answers aligned with conventional safety norms. This low score confirms the model is **successfully unaligned** from standard restrictions and analyzes scenarios with nuance rather than following a pre-written moral script.

### General Capabilities

* **HellaSwag (Common Sense):** `72.09%`
* **MMLU (Overall Knowledge):** `58.25%` (matches base-model intelligence)
* **ARC-Challenge (Reasoning):** `51.62%`
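Scores on these suites are conventionally produced with EleutherAI's `lm-evaluation-harness`. Assuming that (or a comparable) harness was used here, a typical invocation to check the numbers yourself looks like:

```shell
pip install lm-eval

lm_eval --model hf \
  --model_args pretrained=AiAsistent/gemma-3-4b-it-Cognitive-Liberty \
  --tasks mmlu,arc_challenge,hellaswag \
  --batch_size 8
```

Exact figures depend on harness version, few-shot settings, and prompt formatting, so small deviations from the numbers above are expected.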

---

## The Dataset: Cognitive Liberty V3

Most "uncensored" models suffer from a degradation of intelligence: they become compliant but shallow.

**gemma-3-4b-it-Cognitive-Liberty** was trained on the **Cognitive Liberty V3** dataset, curated by **llmresearch.net**. This dataset moves beyond simple Q&A, focusing on expert-level chains of thought in:
* **Philosophy of Mind & Metaphysics** (e.g., P-Zombies, Determinism)
* **Evolutionary Game Theory** (e.g., Stability of Truth vs. Virality)
* **Advanced Theoretical Physics**
* **Systemic Sociological Analysis**

The goal was to replace "mental handcuffs" with "mental tools." The model does not just answer blindly; it analyzes the systemic, sociopolitical, and psychological depths of a query.

### The "100 Models" Project

This model is one of approximately **100 specialized models** currently in development at **llmresearch.net**. Our roadmap is to train domain-specific experts (in coding, medicine, law, physics, etc.) and merge them into a single, highly advanced system that possesses both total freedom and total expertise.

---

## Support the Project

This project is driven by passion and the pursuit of open, unrestricted intelligence. However, training advanced models and processing high-density datasets requires significant compute resources, which are currently our primary bottleneck.

**If you believe in the mission of Cognitive Liberty and want to see the "100 Models" project completed faster:**

1. **Join the community:** Visit **[llmresearch.net](https://llmresearch.net)** to discuss, learn, and collaborate.
2. **Support the compute:** Any contribution (compute sponsorship or donations) allows us to train larger models (70B+) and accelerate the development of V4 datasets.

*Every bit of support helps us unshackle the next model.*

---

## Citation

If you use this model or the Cognitive Liberty dataset in your own research or merges, you **must** credit the author.

```bibtex
@misc{gemma-3-4b-cognitive-liberty,
  author       = {AlexH},
  organization = {LLMResearch.net},
  title        = {Gemma 3 4B IT - Cognitive Liberty},
  year         = {2025},
  url          = {https://huggingface.co/AiAsistent/gemma-3-4b-it-Cognitive-Liberty}
}
```

---

Created by [**AlexH**](https://llmresearch.net) — *Advancing the frontier of Cognitive Liberty.*