Update README.md
---
title: FugthDes Story Generator
emoji: π
colorFrom: blue
colorTo: indigo
sdk: docker
app_file: app.py
pinned: false
---
# π FugthDes Story Generator (CPU GGUF)

Lightweight Flask API using **TinyLlama** or **StableLM Zephyr** for story generation on free Hugging Face CPU.
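The front matter points at `app.py` as the entry point. The actual implementation is not shown here, but a minimal sketch of what such a `/generate` route could look like follows; everything in it beyond the request fields documented below (the prompt-assembly logic, the `story` response key, the llama-cpp-python call, the port) is an assumption, not the project's real code:

```python
# Hypothetical sketch of a /generate route -- the real app.py may differ.
from flask import Flask, jsonify, request

app = Flask(__name__)

@app.route("/generate", methods=["POST"])
def generate():
    data = request.get_json(force=True)
    prompt = data.get("prompt", "")
    feedback = data.get("feedback", "")
    memory = data.get("story_memory", "")
    # Fold earlier story parts and user feedback into a single model prompt.
    full_prompt = f"{memory}\n\nFeedback: {feedback}\n\n{prompt}"
    # A real deployment would run the GGUF model here, e.g. via llama-cpp-python:
    # story = llm(full_prompt, max_tokens=512)["choices"][0]["text"]
    story = full_prompt  # placeholder so the sketch runs without a model
    return jsonify({"story": story})

if __name__ == "__main__":
    # 7860 is the default port Hugging Face Spaces expects.
    app.run(host="0.0.0.0", port=7860)
```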
### π Features

- Story continuation and rewriting based on feedback
- Supports multi-part story context (`story_memory`)
- CPU-efficient GGUF model (TinyLlama or StableLM)
- Connects easily with GitHub Pages or local frontend
### π§ API Endpoint

POST `/generate`

```json
{
  "prompt": "Write Chapter 1 of a fantasy story.",
  "feedback": "Make it more emotional.",
  "story_memory": "Previous story parts..."
}
```
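The endpoint can be called from any HTTP client. A minimal Python sketch using only the standard library is shown below; the local URL is a placeholder for your Space's public endpoint, and the shape of the response body is an assumption about the deployment:

```python
import json
from urllib import request

# Hypothetical URL -- replace with your Space's public /generate endpoint.
API_URL = "http://localhost:7860/generate"

# Request body matching the schema documented above.
payload = {
    "prompt": "Write Chapter 1 of a fantasy story.",
    "feedback": "Make it more emotional.",
    "story_memory": "Previous story parts...",
}

def generate_story(url: str = API_URL) -> str:
    """POST the payload to /generate and return the raw response body."""
    req = request.Request(
        url,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with request.urlopen(req) as resp:
        return resp.read().decode("utf-8")
```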