Update README.md
README.md CHANGED
@@ -1,29 +1,18 @@
----
-title: FugthDesign AI Backend
-emoji: 🤖
-colorFrom: purple
-colorTo: blue
-sdk: docker
-app_port: 7860
----
-
-
-
-
-
-###
-
-- **Model:** `stablelm-zephyr-3b.Q3_K_S.gguf`
-
-### Performance Note
-⚠️ This Space is running on a free CPU and is intended for demonstration purposes only. Response times will be slow. For real-time performance, upgrade to GPU hardware in the Space settings.
-
-### API Endpoint: `/chat`
-- **Method:** POST
-- **Body (JSON):**
-```json
-{
-"
-"
-"
-}
+# FugthDes Story Generator (Hugging Face Space)
+
+Lightweight CPU-based story generator using **TinyLlama GGUF** + **Flask API**.
+
+### 🚀 Features
+- Story continuation and feedback-based rewriting
+- Supports story memory (multi-part continuity)
+- CPU-friendly (TinyLlama Q4_K_M)
+- Connectable to GitHub front-end or any REST client
+
+### 🧠 Endpoint
+POST `/generate`
+```json
+{
+  "prompt": "Write chapter 1 of a fantasy story.",
+  "feedback": "Make it more emotional.",
+  "story_memory": "Previous chapters..."
+}
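After this change, the Space exposes `POST /generate` accepting the JSON body shown in the new README. A minimal client sketch, using only the standard library — the Space URL below is a placeholder, and the shape of the response depends on the Flask app (not shown in this diff):

```python
import json
import urllib.request

# Placeholder URL -- replace with your actual Hugging Face Space endpoint.
SPACE_URL = "https://example-user-fugthdes.hf.space/generate"


def build_payload(prompt, feedback=None, story_memory=None):
    """Assemble the request body documented in the README.

    Only "prompt" is clearly required; "feedback" and "story_memory"
    are included when provided.
    """
    payload = {"prompt": prompt}
    if feedback:
        payload["feedback"] = feedback
    if story_memory:
        payload["story_memory"] = story_memory
    return payload


def generate(prompt, feedback=None, story_memory=None, url=SPACE_URL):
    """POST the payload and return the decoded JSON response."""
    data = json.dumps(build_payload(prompt, feedback, story_memory)).encode("utf-8")
    req = urllib.request.Request(
        url, data=data, headers={"Content-Type": "application/json"}
    )
    # On free CPU hardware, TinyLlama generation can take a while.
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)


if __name__ == "__main__":
    body = build_payload(
        "Write chapter 1 of a fantasy story.",
        feedback="Make it more emotional.",
        story_memory="Previous chapters...",
    )
    print(json.dumps(body, indent=2))
```

The response keys returned by `generate` are an assumption to verify against the Flask app; the request body, however, matches the README example verbatim.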