ClaudioItaly committed (verified)
Commit bbedd01 · 1 Parent(s): fa6f17b

Update README.md


![ChatGPT Image 9 mag 2025, 18_46_35.png](https://cdn-uploads.huggingface.co/production/uploads/6460ca24cd9ba6a317c3fe49/z6-uv3QPoZsl1TB35MtDb.png)

Files changed (1):
  1. README.md (+53, -5)
README.md CHANGED
@@ -12,12 +12,60 @@ language:
  - en
  ---
 
- # Uploaded model
 
- - **Developed by:** ClaudioItaly
- - **License:** apache-2.0
- - **Finetuned from model :** unsloth/qwen3-8b-unsloth-bnb-4bit
 
- This qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
 
  [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)

  - en
  ---
 
+ # 🧠 Qwen 3 8B – LoRA ‘Gutenberg’
 
+ ![ChatGPT Image](https://cdn-uploads.huggingface.co/production/uploads/6460ca24cd9ba6a317c3fe49/z6-uv3QPoZsl1TB35MtDb.png)
 
+ Creative minds require limitless memory.
+ Meet **Qwen 3 8B – LoRA ‘Gutenberg’**, a fine-tuned version of the Qwen 3 8B language model, enhanced with a LoRA adapter trained on a carefully curated selection of literary texts from Project Gutenberg. This model blends the architectural sophistication of Qwen 3 with the timeless elegance of classical storytelling, producing text that feels both intelligent and human.
+
+ ---
+
+ ## 🌟 Highlights
+
+ ### 🏛️ Gutenberg-powered creativity
+ Tuned on a literary dataset of 19th- and 20th-century public-domain novels, this model excels at generating rich, immersive prose and vivid atmospheric scenes.
+
+ ### 🧬 Based on Qwen 3 8B
+ Built on Alibaba’s Qwen 3 architecture, providing strong multilingual capabilities, improved factual grounding, and efficient long-form reasoning.
+
+ ### 🧠 Massive 40,960-token context window
+ Perfect for extended narrative continuity, legal documents, RAG pipelines, and deep dialogue memory. This wide context allows the model to remember and connect distant narrative threads with ease.
+
+ ### 🔧 LoRA fine-tuning for creativity
+ Lightweight fine-tuning delivers powerful enhancements without compromising the model's base performance. Tailored for story generation, dialogue, and introspective monologues.
+
+ ---
+
+ ## ✍️ Ideal Use Cases
+
+ - Fiction and novel generation
+ - Interactive storytelling or RPG dialogue
+ - Literary assistants and writing aides
+ - Creative research, inspiration, and plot development
+ - Long-context memory testing and analysis
+
+ ---
+
+ ## 🧪 Example Output
+
+ > **Example Output – "Cigno 8B"**
+ > *(Gutenberg-Fine-Tuned | Qwen 3 | 40k Context Window)*
+ >
+ > The rain had stopped, but the clouds had gathered over the horizon like a silent army preparing to unleash a second wave of sorrow...
+ > *(continue with your narrative text, already perfect)*
+
+ ---
+
+ ## 📦 Uploaded Model Details
+
+ - **Developed by:** ClaudioItaly
+ - **License:** apache-2.0
+ - **Finetuned from model:** [unsloth/qwen3-8b-unsloth-bnb-4bit](https://huggingface.co/unsloth/qwen3-8b-unsloth-bnb-4bit)
+
+ This Qwen 3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
 
  [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
+
+ ---
+
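
The updated card does not include a usage snippet, so here is a minimal loading-and-generation sketch with Hugging Face Transformers. It assumes the weights are published as a standard merged checkpoint; the repository ID `ClaudioItaly/Qwen3-8B-Gutenberg` is a placeholder rather than a confirmed name, and if only the LoRA adapter is uploaded it would instead need to be attached to the base model with PEFT.

```python
# Minimal sketch: load the model and generate a short literary continuation.
# NOTE: the repo ID below is a placeholder assumption; substitute the actual
# Hugging Face repository name for this model.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ClaudioItaly/Qwen3-8B-Gutenberg"  # placeholder, not confirmed by the card

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",   # pick bf16/fp16 automatically when supported
    device_map="auto",    # spread layers across available devices
)

prompt = "The rain had stopped, but the clouds had gathered over the horizon"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

# Sampling settings favour creative continuation; the prompt plus generated tokens
# must stay within the 40,960-token context window advertised in the card.
outputs = model.generate(**inputs, max_new_tokens=200, do_sample=True, temperature=0.8)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```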
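The closing credit to Unsloth and TRL describes how the LoRA was trained, but the card does not publish the recipe. The following is only a hypothetical sketch of such a run: the dataset file, LoRA rank, and training hyperparameters are illustrative assumptions, and exact trainer arguments vary across TRL versions.

```python
# Hypothetical sketch of a LoRA fine-tune with Unsloth + TRL, in the spirit of the
# card's credit line. Dataset path and hyperparameters are illustrative assumptions.
from unsloth import FastLanguageModel
from trl import SFTConfig, SFTTrainer
from datasets import load_dataset

# Base model named in the card; 4-bit loading keeps the 8B model on a single GPU.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/qwen3-8b-unsloth-bnb-4bit",
    max_seq_length=40960,
    load_in_4bit=True,
)

# Attach LoRA adapters; rank and alpha are placeholder values, not the author's.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)

# Stand-in for the curated Project Gutenberg selection (file path is hypothetical).
dataset = load_dataset("text", data_files={"train": "gutenberg_excerpts.txt"})["train"]

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    args=SFTConfig(
        dataset_text_field="text",
        per_device_train_batch_size=2,
        gradient_accumulation_steps=4,
        num_train_epochs=1,
        learning_rate=2e-4,
        output_dir="qwen3-8b-gutenberg-lora",
    ),
)
trainer.train()
```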