---

title: Phi Finetuned Chat
emoji: 🤖
colorFrom: purple
colorTo: pink
sdk: docker
sdk_version: "3.10"
app_file: app.py
pinned: false
---


# Simple Text Generator

A minimal Flask app for text generation with your fine-tuned model.
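The overall shape is roughly the following (a minimal sketch, not the actual `app.py`: the route, the `prompt` form field, and the `generate_text` stand-in are illustrative assumptions):

```python
from flask import Flask, request

app = Flask(__name__)

def generate_text(prompt: str) -> str:
    # Stand-in for the real model call; app.py wires this to the fine-tuned model.
    return "echo: " + prompt

@app.route("/", methods=["GET", "POST"])
def index():
    # A single route serves the page and handles the prompt -> output round trip.
    if request.method == "POST":
        prompt = request.form.get("prompt", "")
        print(f"[prompt] {prompt}")  # everything is logged to the terminal
        return generate_text(prompt)
    return "<form method='post'><input name='prompt'><button>Generate</button></form>"

# When run directly, app.py listens on the Spaces default port:
# app.run(host="0.0.0.0", port=7860)
```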

## Features

- ✅ Single page interface
- ✅ Simple prompt → output
- ✅ No complex questionnaire
- ✅ Full terminal logging
- ✅ Clean UI
- ✅ Docker ready for Hugging Face Spaces

## Files

- `app.py` - Main Flask application (all-in-one file)
- `requirements.txt` - Python dependencies
- `Dockerfile` - Docker configuration for HF Spaces
- `.dockerignore` - Files to exclude from Docker build

## Quick Deploy to Hugging Face Spaces

1. Create new Space at https://huggingface.co/new-space
2. Choose **Docker** as SDK (not Gradio or Streamlit)
3. Upload all 4 files from this folder:
   - `app.py`
   - `requirements.txt`
   - `Dockerfile`
   - `.dockerignore`
4. Wait for build to complete
5. Access your app!

## Local Usage

### With Python directly:
```bash
pip install -r requirements.txt
python app.py
```
Open http://localhost:7860

### With Docker:
```bash
docker build -t simple-text-gen .
docker run -p 7860:7860 simple-text-gen
```
Open http://localhost:7860

## Configuration

Change model in `app.py`:
```python
MODEL_NAME = "KASHH-4/phi_finetuned"  # Your model
```

Adjust generation settings:
```python
max_new_tokens=100  # Number of tokens to generate
do_sample=False     # Greedy decoding (faster)
```
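For reference, a fuller (hypothetical) tuning block — the keyword names follow the standard `transformers` `generate()` parameters, but only `max_new_tokens` and `do_sample` appear in the actual `app.py`:

```python
# Hypothetical generation settings; pass as **gen_kwargs to model.generate().
gen_kwargs = {
    "max_new_tokens": 100,  # length of the completion
    "do_sample": False,     # greedy decoding: deterministic and faster
    # To trade determinism for variety, enable sampling instead:
    # "do_sample": True,
    # "temperature": 0.7,   # <1.0 sharpens the token distribution
    # "top_p": 0.9,         # nucleus-sampling cutoff
}
```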

## Hardware Requirements

- **Minimum**: 2 vCPU + 18GB RAM (CPU inference, ~60s per generation)
- **Recommended**: T4 GPU + 16GB RAM (GPU inference, ~3-5s per generation)