|
|
--- |
|
|
base_model: unsloth/qwen3-8b-unsloth-bnb-4bit |
|
|
tags: |
|
|
- text-generation-inference |
|
|
- transformers |
|
|
- unsloth |
|
|
- qwen3 |
|
|
- trl |
|
|
- sft |
|
|
license: apache-2.0 |
|
|
language: |
|
|
- en |
|
|
--- |
|
|
|
|
|
# 🧠 Qwen 3 8B – LoRA ‘Gutenberg’ |
|
|
|
|
|
 |
|
|
|
|
|
Creative minds require limitless memory. |
|
|
Meet **Qwen 3 8B – LoRA ‘Gutenberg’**, a fine-tuned version of the Qwen 3 8B language model, enhanced with a LoRA adapter trained on a carefully curated selection of literary texts from Project Gutenberg. The model blends the architectural sophistication of Qwen 3 with the timeless elegance of classical storytelling, producing text that feels both intelligent and human.
|
|
|
|
|
--- |
|
|
|
|
|
## 🌟 Highlights |
|
|
|
|
|
### 🏛️ Gutenberg-powered creativity |
|
|
Tuned on a literary dataset of 19th- and 20th-century public domain novels, this model excels at generating rich, immersive prose and vivid atmospheric scenes.
|
|
|
|
|
### 🧬 Based on Qwen 3 8B |
|
|
Built on Alibaba’s Qwen 3 architecture, providing strong multilingual capabilities, improved factual grounding, and efficient long-form reasoning. |
|
|
|
|
|
### 🧠 Massive 40,960-token context window |
|
|
Perfect for extended narrative continuity, legal documents, RAG pipelines, and deep dialogue memory. This wide context allows the model to remember and connect distant narrative threads with ease. |
|
|
|
|
|
### 🔧 LoRA fine-tuning for creativity |
|
|
Lightweight fine-tuning delivers powerful enhancements without compromising the model's base performance. Tailored for story generation, dialogue, and introspective monologues. |
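The low-rank idea behind LoRA can be illustrated with a toy example. All sizes and hyperparameters below are illustrative, not this model's actual training configuration: the frozen base weight `W` stays untouched, while only a small low-rank update `B @ A` is trained and added on top.

```python
# Toy sketch of the LoRA mechanism (illustrative values, not this model's recipe).
import numpy as np

d, r = 4096, 16                          # hidden size, LoRA rank (illustrative)
alpha = 32                               # LoRA scaling factor (illustrative)
rng = np.random.default_rng(0)

W = rng.standard_normal((d, d))          # frozen base weight -- never trained
A = rng.standard_normal((r, d)) * 0.01   # trainable down-projection
B = np.zeros((d, r))                     # trainable up-projection, zero-initialized

# Effective weight at inference: base plus the scaled low-rank update.
W_adapted = W + (alpha / r) * (B @ A)

full = W.size                            # parameters a full fine-tune would touch
lora = A.size + B.size                   # parameters LoRA actually trains
print(f"LoRA trains {lora / full:.2%} of the full matrix")
# prints "LoRA trains 0.78% of the full matrix"
```

Because `B` starts at zero, the adapter initially leaves the base model unchanged; training then nudges only the two small matrices, which is why the base model's general capabilities are preserved.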
|
|
|
|
|
--- |
|
|
|
|
|
## ✍️ Ideal Use Cases |
|
|
|
|
|
- Fiction and novel generation |
|
|
- Interactive storytelling or RPG dialogue |
|
|
- Literary assistants and writing aides |
|
|
- Creative research, inspiration, and plot development |
|
|
- Long-context memory testing and analysis |
|
|
|
|
|
--- |
|
|
|
|
|
## 🧪 Example Output |
|
|
|
|
|
> **Example Output – "Cigno 8B"** |
|
|
> *(Gutenberg-Fine-Tuned | Qwen 3 | 40k Context Window)* |
|
|
> |
|
|
> The rain had stopped, but the clouds had gathered over the horizon like a silent army preparing to unleash a second wave of sorrow... |
|
|
|
|
|
|
|
--- |
|
|
|
|
|
## 📦 Uploaded Model Details |
|
|
|
|
|
- **Developed by:** ClaudioItaly |
|
|
- **License:** apache-2.0 |
|
|
- **Finetuned from model:** [unsloth/qwen3-8b-unsloth-bnb-4bit](https://huggingface.co/unsloth/qwen3-8b-unsloth-bnb-4bit) |
|
|
|
|
|
This Qwen 3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library. |
|
|
|
|
|
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth) |
|
|
|
|
|
--- |
|
|
|
|
|
|