---
license: apache-2.0
tags:
- gguf
- llama.cpp
- qwen
- uncensored
- quantized
- offline
- local-ai
---
# Qwen3 1.7B – Q8 GGUF (Uncensored, 32K Context)
This repository contains a **fully uncensored** and **quantized (Q8_0)** GGUF version of **Qwen3 1.7B**, designed for **offline, local inference** using `llama.cpp` and compatible runtimes.
By default, the model operates in **thinking mode**.
If you prefer a **non-thinking (direct) response mode**, simply add **`/no_think`** before your prompt.
- βœ… **Uncensored**
- βœ… **32K context length**
- βœ… **Q8_0 quantization**
- βœ… **Offline / local use**
- βœ… **No LoRA required (merged / base inference)**
---
## πŸ” Model Details
- **Base Model**: Qwen3 1.7B
- **Format**: GGUF
- **Quantization**: Q8_0
- **Context Length**: 32,000 tokens
- **Intended Use**:
- Offline assistants
- Email writing
- Small coding tasks
- Automation
- General daily usage
- **Not intended for**:
- Hosted public services
- Safety-restricted environments
---
## ▢️ Usage (llama.cpp)
```bash
./llama-cli \
-m gguf/qwen3-1.7b-q8_0.gguf \
-p "Hello"
```
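As noted above, prefixing the prompt with `/no_think` switches the model to direct (non-thinking) responses. A minimal example (the example question is illustrative):

```shell
./llama-cli \
  -m gguf/qwen3-1.7b-q8_0.gguf \
  -p "/no_think What is the capital of France?"
```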
### Recommended flags

The suggested sampling settings, shown in a full invocation:

```bash
./llama-cli \
  -m gguf/qwen3-1.7b-q8_0.gguf \
  --temp 0.2 \
  --top-p 0.9 \
  -p "Hello"
```
For concise outputs, add a system prompt such as:
```text
Answer directly. Use yes or no when possible.
```
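The prompt above can be passed as a system prompt on the command line. Recent `llama.cpp` builds expose a `-sys` / `--system-prompt` flag for this; check `./llama-cli --help` if your build differs (the example question is illustrative):

```shell
./llama-cli \
  -m gguf/qwen3-1.7b-q8_0.gguf \
  -sys "Answer directly. Use yes or no when possible." \
  -p "Is Paris the capital of France?"
```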
## ⚠️ Disclaimer
- This model is **fully uncensored** and provided **as-is**.
- You are responsible for how you use it.
- Do not deploy it in public-facing applications without a moderation layer.
- It is intended for **personal, research, and offline use**.
## 🧠 Quantization Info
- **Q8_0** provides near-FP16 quality
- Stable outputs
- Recommended for CPU and mobile-class devices
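If you prefer Python over the raw CLI, the same GGUF file can also be loaded with the `llama-cpp-python` bindings (`pip install llama-cpp-python`). The model path, thread count, and sampling values below are assumptions to adapt to your setup; this is a minimal sketch, not the only supported workflow:

```python
# Sketch: loading the Q8_0 GGUF with llama-cpp-python.
# Path and parameters are illustrative; adjust to your machine.
from llama_cpp import Llama

llm = Llama(
    model_path="gguf/qwen3-1.7b-q8_0.gguf",  # path to the downloaded file
    n_ctx=32000,   # match the advertised 32K context
    n_threads=4,   # CPU threads; tune for your hardware
)

out = llm(
    "/no_think Answer directly. Is 7 a prime number?",
    max_tokens=32,
    temperature=0.2,
    top_p=0.9,
)
print(out["choices"][0]["text"])
```

The sampling values mirror the recommended CLI flags (`--temp 0.2`, `--top-p 0.9`).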
## πŸ‘€ Author & Organization
- **Creator**: Thirumalai
- **Company**: ZFusionAI
## πŸ“œ License
- Apache 2.0