swayamshetkar committed · verified
Commit dfae69a · 1 Parent(s): 50bbce6

Update README.md

Files changed (1): README.md (+4 −173)
README.md CHANGED
@@ -1,177 +1,8 @@
- ---
- title: FLAN-T5 Hackathon Idea Generator
- emoji: 💡
- colorFrom: blue
- colorTo: purple
- sdk: docker
- app_port: 7860
- pinned: false
- ---
-
- # FLAN-T5 Hackathon Idea Generator
-
- A FastAPI backend service that uses Google's FLAN-T5-Base model to generate hackathon project ideas and detailed build plans.
-
- ## Features
-
- **Generate Ideas**: Creates 3 client-side hackathon ideas with tech stacks, difficulty levels, and time estimates
- **Detailed Plans**: Expands ideas into 48-hour build plans with architecture, phases, code snippets, and risk analysis
- **JSON Output**: Returns structured JSON responses for easy integration
-
- ## API Endpoints
-
- ### `GET /`
- Health check endpoint
- ```json
- {
-   "status": "FLAN-T5 Backend Running",
-   "endpoints": ["/generate", "/details"],
-   "model": "google/flan-t5-base"
- }
- ```
-
- ### `POST /generate`
- Generate 3 hackathon ideas
-
- **Request Body:**
- ```json
- {
-   "custom_prompt": "focus on AI and machine learning"
- }
- ```
-
- **Response:**
- ```json
- {
-   "ideas": [
-     {
-       "id": 1,
-       "title": "Project Title",
-       "elevator": "Brief pitch",
-       "overview": "Detailed description",
-       "primary_tech_stack": ["React", "Node.js"],
-       "difficulty": "Medium",
-       "time_estimate_hours": 24
-     }
-   ],
-   "best_pick_id": 1,
-   "best_pick_reason": "Explanation"
- }
- ```
-
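For clients consuming the `/generate` response documented above, a quick structural check can catch truncated or malformed generations early. A minimal sketch — the helper name and its strictness are illustrative, not part of the API; the field names come from the example response:

```python
# Field names taken from the documented /generate response example.
REQUIRED_IDEA_KEYS = {
    "id", "title", "elevator", "overview",
    "primary_tech_stack", "difficulty", "time_estimate_hours",
}

def looks_like_generate_response(payload: dict) -> bool:
    """Return True if payload carries the documented /generate fields."""
    ideas = payload.get("ideas")
    if not isinstance(ideas, list) or "best_pick_id" not in payload:
        return False
    # Every idea object must contain at least the documented keys.
    return all(REQUIRED_IDEA_KEYS <= set(idea) for idea in ideas)
```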
- ### `POST /details`
- Get detailed build plan for an idea
-
- **Request Body:**
- ```json
- {
-   "idea_id": 1,
-   "idea_title": "Project Title"
- }
- ```
-
- **Response:**
- ```json
- {
-   "id": 1,
-   "title": "Project Title",
-   "mermaid_architecture": "graph LR; A-->B;",
-   "phases": [...],
-   "critical_code_snippets": [...],
-   "ui_components": [...],
-   "risks_and_mitigations": [...]
- }
- ```
-
- ## Testing the API
-
- ### Using curl
- ```bash
- # Health check
- curl https://huggingface.co/spaces/YOUR_USERNAME/YOUR_SPACE_NAME/
-
- # Generate ideas
- curl -X POST https://huggingface.co/spaces/YOUR_USERNAME/YOUR_SPACE_NAME/generate \
-   -H "Content-Type: application/json" \
-   -d '{"custom_prompt": "focus on web3 and blockchain"}'
-
- # Get details
- curl -X POST https://huggingface.co/spaces/YOUR_USERNAME/YOUR_SPACE_NAME/details \
-   -H "Content-Type: application/json" \
-   -d '{"idea_id": 1, "idea_title": "NFT Gallery"}'
- ```
-
- ### Using Python
- ```python
- import requests
-
- BASE_URL = "https://huggingface.co/spaces/YOUR_USERNAME/YOUR_SPACE_NAME"
-
- # Generate ideas
- response = requests.post(
-     f"{BASE_URL}/generate",
-     json={"custom_prompt": "AI-powered tools"}
- )
- print(response.json())
-
- # Get details
- response = requests.post(
-     f"{BASE_URL}/details",
-     json={"idea_id": 1, "idea_title": "AI Code Assistant"}
- )
- print(response.json())
- ```
-
- ## Model Information
-
- **Model**: google/flan-t5-base
- **Size**: ~250M parameters
- **Type**: Sequence-to-sequence transformer
- **Training**: Instruction-finetuned on diverse tasks
-
- ## Performance Notes
-
- Model runs on CPU (suitable for the Hugging Face free tier)
- First request may be slow due to model loading (~30-60 seconds)
- Subsequent requests are faster once the model is warm
- Max generation length: 512 tokens
-
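Given the cold-start delay noted above, a client may want to retry its first call with increasing waits. A sketch of the schedule logic only — the function name and defaults are illustrative, and the actual HTTP call is left to the caller:

```python
def backoff_schedule(first: float = 2.0, factor: float = 2.0,
                     retries: int = 5, cap: float = 60.0) -> list[float]:
    """Delays (in seconds) between attempts, doubling up to a cap."""
    return [min(cap, first * factor ** i) for i in range(retries)]

# e.g. sleep these amounts between retries while the model warms up
print(backoff_schedule())  # → [2.0, 4.0, 8.0, 16.0, 32.0]
```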
139
- ## Limitations
140
-
141
- - FLAN-T5-Base is a smaller model and may not always produce perfect JSON
142
- - Complex prompts might require prompt engineering
143
- - Response quality depends on prompt clarity
144
- - Model has no memory between requests
145
-
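Since the model "may not always produce perfect JSON", a caller can fall back to extracting the first brace-delimited span from the raw generation. A minimal standard-library sketch; the helper is illustrative, not part of the service:

```python
import json
import re

def extract_json(raw: str):
    """Parse raw model output; fall back to the outermost {...} span."""
    try:
        return json.loads(raw)
    except json.JSONDecodeError:
        match = re.search(r"\{.*\}", raw, re.DOTALL)  # greedy: outermost braces
        if match:
            try:
                return json.loads(match.group(0))
            except json.JSONDecodeError:
                pass
    return None  # caller can retry or surface an error

print(extract_json('Here you go: {"ideas": []}'))  # → {'ideas': []}
```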
- ## Local Development
-
- ### Prerequisites
- Python 3.10+
- pip
-
- ### Setup
- ```bash
- # Install dependencies
- pip install -r requirements.txt
-
- # Run the server
- uvicorn main:app --host 0.0.0.0 --port 7860
- ```
-
- The API will be available at `http://localhost:7860`.
-
- ## Docker Deployment
-
- ### Build Image
- ```bash
- docker build -t flan-t5-hackathon .
- ```
-
- ### Run Container
- ```bash
- docker run -p 7860:7860 flan-t5-hackathon
- ```
-
- ## License
-
- This project uses the FLAN-T5 model, which is released under the Apache 2.0 license.
+ # Hackathon Idea Generator API (FastAPI)
+
+ This Space runs a FastAPI backend only.
+
+ ## Run command
+
+ ```bash
+ uvicorn app:app --host 0.0.0.0 --port $PORT