Nur Arifin Akbar committed · Commit 20ff705 · 1 Parent(s): 9c12608

Add deployment guide for Hugging Face Spaces and local setup

Files changed (1): DEPLOYMENT.md (+190 lines)
# Deployment Guide

## 🚀 Deploying to Hugging Face Spaces

### Step 1: Create a Space

1. Go to [Hugging Face Spaces](https://huggingface.co/spaces)
2. Click "Create new Space"
3. Fill in the details:
   - **Owner**: Your username
   - **Space name**: `PaperReview` (or your preferred name)
   - **License**: MIT
   - **SDK**: Gradio
   - **Space hardware**: CPU basic (the free tier works fine)

### Step 2: Configure Secrets

In your Space settings, add these secrets:

#### Required
- `OPENAI_API_KEY`: Your OpenAI API key (or a compatible provider's key)
  - Get one from: https://platform.openai.com/api-keys

#### Optional but recommended
- `SEMANTIC_SCHOLAR_API_KEY`: Your Semantic Scholar API key
  - Provides higher rate limits for the Semantic Scholar API
  - Without it, requests fall back to the basic unauthenticated rate limits

#### Optional
- `OPENAI_BASE_URL`: Custom API endpoint (if not using OpenAI)
  - Default: `https://api.openai.com/v1`
  - For Azure: `https://your-resource.openai.azure.com/`
  - For a local server: `http://localhost:8000/v1`
- `MODEL_NAME`: Model identifier
  - Default: `gpt-3.5-turbo`
  - Options: `gpt-4`, `gpt-4-turbo`, etc.

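The app presumably reads these settings from the environment at startup; a minimal sketch of that pattern (`load_config` is an illustrative helper, not necessarily the app's actual code):

```python
import os

def load_config(env=os.environ):
    """Gather the settings above from environment variables, applying the defaults listed."""
    api_key = env.get("OPENAI_API_KEY")
    if not api_key:
        raise RuntimeError("OPENAI_API_KEY is required")
    return {
        "api_key": api_key,
        "base_url": env.get("OPENAI_BASE_URL", "https://api.openai.com/v1"),
        "model": env.get("MODEL_NAME", "gpt-3.5-turbo"),
        "semantic_scholar_key": env.get("SEMANTIC_SCHOLAR_API_KEY"),  # optional
    }
```

With only `OPENAI_API_KEY` set, the defaults above apply; a missing key fails fast rather than at the first request.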
### Step 3: Upload Files

Upload these files to your Space:
```
├── app.py
├── agents.py
├── requirements.txt
├── README.md
├── .gitignore
└── .env.example (optional)
```

**Important**: Do NOT upload a `.env` file containing actual secrets!

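To keep `.env` out of both git and the Space upload, the project's `.gitignore` should cover at least the following (a minimal sketch assuming the usual Python layout; the repo's actual `.gitignore` may list more):

```gitignore
# secrets stay local
.env

# local environment and caches
venv/
__pycache__/
```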
### Step 4: Wait for Build

Hugging Face will automatically:
1. Install dependencies from `requirements.txt`
2. Load secrets from the Space settings
3. Start the Gradio app

Build time: ~3-5 minutes.

### Step 5: Test Your Space

1. Open your Space URL
2. Upload a test PDF
3. Click "Review Paper"
4. Wait 3-6 minutes for results

## 🔧 Local Development

### Setup

```bash
# Clone or navigate to the project
cd aireviewer

# Create a virtual environment
python -m venv venv
source venv/bin/activate  # On Windows: venv\Scripts\activate

# Install dependencies
pip install -r requirements.txt

# Copy the environment template
cp .env.example .env

# Edit .env with your API keys
nano .env  # or use your preferred editor
```

### Required in `.env`

```bash
OPENAI_API_KEY=sk-...your-key-here
SEMANTIC_SCHOLAR_API_KEY=your-semantic-scholar-key
```

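The app presumably loads this file with a dotenv-style loader; for illustration, a minimal parser showing the `KEY=VALUE` format the file is expected to follow (`parse_env` is a hypothetical helper, not the app's code):

```python
def parse_env(text):
    """Parse dotenv-style text: one KEY=VALUE per line; blanks and '#' comments ignored."""
    env = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        key, _, value = line.partition("=")
        env[key.strip()] = value.strip()
    return env
```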
### Run

```bash
python app.py
```

Open a browser to: http://localhost:7860

## 🐳 Docker Deployment

### Build Image

```bash
docker build -t aireviewer .
```

### Run Container

```bash
docker run -p 7860:7860 \
  -e OPENAI_API_KEY=your-key \
  -e SEMANTIC_SCHOLAR_API_KEY=your-semantic-scholar-key \
  -e MODEL_NAME=gpt-4 \
  aireviewer
```

Or use docker-compose:

```yaml
version: '3.8'
services:
  aireviewer:
    build: .
    ports:
      - "7860:7860"
    environment:
      - OPENAI_API_KEY=${OPENAI_API_KEY}
      - SEMANTIC_SCHOLAR_API_KEY=${SEMANTIC_SCHOLAR_API_KEY}
      - MODEL_NAME=gpt-4
```

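This commit does not include the `Dockerfile` that `docker build` expects; a minimal sketch of what one for a Gradio app typically looks like (the project's actual Dockerfile may differ):

```dockerfile
# Minimal sketch; adjust the Python version to match requirements.txt.
FROM python:3.11-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
EXPOSE 7860
# make Gradio listen on all interfaces inside the container
ENV GRADIO_SERVER_NAME=0.0.0.0
CMD ["python", "app.py"]
```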
## ⚠️ Important Notes

### Rate Limiting
- **LLM API**: 1 request per second (sequential processing)
- **Semantic Scholar**: 1 request per second
- Total processing time: 3-6 minutes per paper

### API Costs
- GPT-4: ~$0.10-0.50 per paper review (depending on paper length)
- GPT-3.5-turbo: ~$0.01-0.05 per paper review

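The per-paper figures above follow directly from token counts and per-token prices; a quick sketch of the arithmetic (the token counts and $0.03/$0.06 per-1K rates below are illustrative assumptions — check your provider's current pricing):

```python
def estimate_cost(prompt_tokens, completion_tokens, in_price_per_1k, out_price_per_1k):
    """Rough USD cost of one review from token counts and per-1K-token prices."""
    return (prompt_tokens / 1000) * in_price_per_1k + (completion_tokens / 1000) * out_price_per_1k

# e.g. ~8K prompt tokens and ~1.5K completion tokens at hypothetical GPT-4 rates
cost = estimate_cost(8000, 1500, 0.03, 0.06)  # ≈ $0.33
```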
### Concurrency
- The app does NOT support concurrent reviews
- Reviews are processed sequentially to avoid hitting API rate limits
- Submitting one paper at a time is recommended

### Security
- Never commit the `.env` file to git
- Use Hugging Face Spaces secrets in production
- API keys are loaded from environment variables only

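The sequential 1-request-per-second pacing described above can be sketched as follows (`RateLimiter` is illustrative, not necessarily the app's actual implementation):

```python
import time

class RateLimiter:
    """Block until at least `min_interval` seconds have passed since the last call."""
    def __init__(self, min_interval=1.0):
        self.min_interval = min_interval
        self._last = float("-inf")  # first call never waits

    def wait(self):
        delay = self.min_interval - (time.monotonic() - self._last)
        if delay > 0:
            time.sleep(delay)
        self._last = time.monotonic()
```

Calling `limiter.wait()` before each LLM or Semantic Scholar request keeps the app inside both limits without any shared-state coordination, which is why reviews run one at a time.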
## 🔍 Troubleshooting

### "Could not extract text from PDF"
- Ensure the PDF is not scanned/image-based
- Try OCR preprocessing if needed

### "API key error"
- Check that secrets are properly set in the Space settings
- Verify the API key is valid and has credits

### "Rate limit exceeded"
- Wait a few minutes and try again
- The app has built-in rate limiting
- Consider upgrading your API tier

### "Review taking too long"
- Normal processing: 3-6 minutes
- Check API status if it takes longer
- Watch the progress bar for the current status

## 📞 Support

For issues:
1. Check the logs in your Hugging Face Space
2. Verify all secrets are properly configured
3. Test with a shorter paper first
4. Open an issue on GitHub

---

**Live Demo**: https://huggingface.co/spaces/syaikhipin/PaperReview