---
title: Creative Help
emoji: ✍️
colorFrom: red
colorTo: pink
sdk: gradio
sdk_version: 5.0.0
app_file: app.py
pinned: false
license: mit
---

# Creative Help

A word processor interface for eliciting text completions from the **creative-help** language model. Write freely, then type `\help\` to trigger generation automatically.

## How to use

1. **Write** your story in the text area.
2. **Trigger** a completion by typing `\help\` at the end of your text — generation starts automatically.
3. The model’s completion is appended to your text.
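The trigger logic in `app.py` is not shown here, but a minimal sketch of how the `\help\` marker might be detected and stripped before the prompt is sent to the model could look like this (function name and behavior are illustrative assumptions, not the app's actual code):

```python
# Hypothetical sketch of the \help\ trigger check — not the app's actual code.
TRIGGER = "\\help\\"  # the literal marker \help\ typed by the user

def check_trigger(text: str):
    """If the text ends with \\help\\, return the prompt with the
    marker stripped; otherwise return None (no generation)."""
    stripped = text.rstrip()
    if stripped.endswith(TRIGGER):
        return stripped[: -len(TRIGGER)]
    return None
```

In a Gradio app this check would typically run on each text-area change event, firing a generation request only when it returns a prompt.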

## Setup (Space owner)

This Space calls your deployed **creative-help** Inference Endpoint. Configure these in **Settings → Variables and Secrets**:

| Name | Type | Description |
|------|------|-------------|
| `HF_ENDPOINT_URL` | Variable | Your Inference Endpoint URL (e.g. `https://xxx.us-east-1.aws.endpoints.huggingface.cloud`) |
| `HF_TOKEN` | Secret | Your Hugging Face token with access to the endpoint |

Alternatively, you can use the serverless Inference API by setting:

| Name | Type | Description |
|------|------|-------------|
| `HF_MODEL_ID` | Variable | Model ID (e.g. `username/creative-help`) |
| `HF_TOKEN` | Secret | Your Hugging Face token |
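The two configurations above can be resolved with a small helper that prefers a dedicated Endpoint URL and falls back to the serverless model ID. This is a sketch under the assumption that the app uses `huggingface_hub`'s `InferenceClient`; the helper name is illustrative:

```python
def resolve_inference_target(env: dict) -> str:
    """Pick the inference target from the variables in the tables above:
    a dedicated Inference Endpoint URL if set, else a serverless model ID."""
    url = env.get("HF_ENDPOINT_URL")
    if url:
        return url
    model_id = env.get("HF_MODEL_ID")
    if model_id:
        return model_id
    raise RuntimeError("Set HF_ENDPOINT_URL or HF_MODEL_ID")

# The target can then be handed to huggingface_hub (assumed client usage):
#   from huggingface_hub import InferenceClient
#   client = InferenceClient(model=resolve_inference_target(os.environ),
#                            token=os.environ["HF_TOKEN"])
#   completion = client.text_generation(prompt, max_new_tokens=200)
```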

## Model

This app uses the [creative-help](https://huggingface.co/YOUR_USERNAME/creative-help) model. Deploy it as an Inference Endpoint for best performance.