Rishabh2095 committed on
Commit 40ea651 · 1 Parent(s): ea10140

Added mongodb store dependency for Agent Memory
DEPLOYMENT_GUIDE.md DELETED
@@ -1,303 +0,0 @@
# Deployment Guide for Job Application Agent

## Option 1: LangGraph Cloud (Easiest & Recommended)

### Prerequisites
- LangGraph CLI installed (`langgraph-cli` in requirements.txt)
- `langgraph.json` already configured ✅

### Steps

1. **Install LangGraph CLI** (if not already installed):
   ```powershell
   pip install langgraph-cli
   ```

2. **Log in to LangGraph Cloud**:
   ```powershell
   langgraph login
   ```

3. **Deploy your agent**:
   ```powershell
   langgraph deploy
   ```

4. **Get your API endpoint** - LangGraph Cloud provides a REST API automatically

### Cost
- **Free tier**: Limited requests/month
- **Paid**: Pay-per-use pricing

### Pros
- ✅ Zero infrastructure management
- ✅ Built-in state persistence
- ✅ Automatic API generation
- ✅ LangSmith integration
- ✅ Perfect for LangGraph apps

### Cons
- ⚠️ Vendor lock-in
- ⚠️ Limited customization

---

## Option 2: Railway.app (Simple & Cheap)

### Steps

1. **Create a FastAPI wrapper** (create `api.py`):
   ```python
   from fastapi import FastAPI, File, Form, UploadFile
   from job_writing_agent.workflow import JobWorkflow
   import tempfile
   import os

   app = FastAPI()

   @app.post("/generate")
   async def generate_application(
       resume: UploadFile = File(...),
       job_description: str = Form(...),
       content_type: str = Form("cover_letter"),
   ):
       # Save the uploaded resume to a temporary file
       with tempfile.NamedTemporaryFile(delete=False, suffix=".pdf") as tmp:
           tmp.write(await resume.read())
           resume_path = tmp.name

       try:
           workflow = JobWorkflow(
               resume=resume_path,
               job_description_source=job_description,
               content=content_type,
           )
           result = await workflow.run()
           return {"result": result}
       finally:
           os.unlink(resume_path)
   ```

   Note: `job_description` and `content_type` are declared with `Form(...)` because they arrive alongside the file upload in a multipart request, and a parameter without a default cannot follow `File(...)`.

2. **Create `Procfile`**:
   ```
   web: uvicorn api:app --host 0.0.0.0 --port $PORT
   ```

3. **Deploy to Railway**:
   - Sign up at [railway.app](https://railway.app)
   - Connect your GitHub repo
   - Railway auto-detects Python and runs the `Procfile`

### Cost
- **Free tier**: $5 credit/month
- **Hobby**: $5/month for 512MB RAM
- **Pro**: $20/month for 2GB RAM

### Pros
- ✅ Very simple deployment
- ✅ Auto-scaling
- ✅ Free tier available
- ✅ Automatic HTTPS

### Cons
- ⚠️ Need to add a FastAPI wrapper
- ⚠️ State management needs Redis/Postgres

---

## Option 3: Render.com (Similar to Railway)

### Steps

1. **Create `render.yaml`**:
   ```yaml
   services:
     - type: web
       name: job-writer-api
       env: python
       buildCommand: pip install -r requirements.txt
       startCommand: uvicorn api:app --host 0.0.0.0 --port $PORT
       envVars:
         - key: OPENROUTER_API_KEY
           sync: false
         - key: TAVILY_API_KEY
           sync: false
   ```

2. **Deploy**:
   - Connect your GitHub repo to Render
   - Render auto-detects `render.yaml`

### Cost
- **Free tier**: 750 hours/month (sleeps after 15min inactivity)
- **Starter**: $7/month (always on)

### Pros
- ✅ Free tier for testing
- ✅ Simple YAML config
- ✅ Auto-deploy from Git

### Cons
- ⚠️ Free tier sleeps (cold starts)
- ⚠️ Need a FastAPI wrapper

---

## Option 4: Fly.io (Good Free Tier)

### Steps

1. **Install the Fly CLI**:
   ```powershell
   iwr https://fly.io/install.ps1 -useb | iex
   ```

2. **Create a `Dockerfile`**:
   ```dockerfile
   FROM python:3.12-slim

   WORKDIR /app
   COPY requirements.txt .
   RUN pip install --no-cache-dir -r requirements.txt

   COPY . .

   CMD ["uvicorn", "api:app", "--host", "0.0.0.0", "--port", "8080"]
   ```

3. **Deploy**:
   ```powershell
   fly launch
   fly deploy
   ```

### Cost
- **Free tier**: 3 shared-cpu VMs, 3GB storage
- **Paid**: $1.94/month per VM

### Pros
- ✅ Generous free tier
- ✅ Global edge deployment
- ✅ Docker-based (flexible)

### Cons
- ⚠️ Need Docker knowledge
- ⚠️ Need a FastAPI wrapper

---

## Option 5: AWS Lambda (Serverless - Pay Per Use)

### Steps

1. **Create a Lambda handler** (`lambda_handler.py`):
   ```python
   import json
   from job_writing_agent.workflow import JobWorkflow

   def lambda_handler(event, context):
       # Parse the request body
       body = json.loads(event['body'])

       workflow = JobWorkflow(
           resume=body['resume_path'],
           job_description_source=body['job_description'],
           content=body.get('content_type', 'cover_letter')
       )

       result = workflow.run()

       return {
           'statusCode': 200,
           'body': json.dumps({'result': result})
       }
   ```

2. **Package and deploy** using AWS SAM or Serverless Framework
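
The handler above expects an API-Gateway-style event whose `body` field is a JSON string. A minimal sketch of building such an event for local testing (the paths and values are placeholders, not from the project):

```python
import json

def make_event(resume_path: str, job_description: str,
               content_type: str = "cover_letter") -> dict:
    # Build the event shape that lambda_handler expects:
    # a dict with a JSON-encoded "body", as API Gateway sends it.
    return {
        "body": json.dumps({
            "resume_path": resume_path,
            "job_description": job_description,
            "content_type": content_type,
        })
    }

event = make_event("/tmp/resume.pdf", "Senior Python Engineer at Acme")
body = json.loads(event["body"])
print(body["content_type"])  # -> cover_letter
```

Feeding `make_event(...)` straight into `lambda_handler(event, None)` is a convenient way to smoke-test the handler before packaging it.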

### Cost
- **Free tier**: 1M requests/month
- **Paid**: $0.20 per 1M requests + compute time

### Pros
- ✅ Pay only for usage
- ✅ Auto-scaling
- ✅ Very cheap for low traffic

### Cons
- ⚠️ 15min timeout limit
- ⚠️ Cold starts
- ⚠️ Complex setup
- ⚠️ Need to handle state externally

---

## Recommendation

**For your use case, I recommend:**

1. **Start with LangGraph Cloud** - Easiest, built for your stack
2. **If you need more control → Railway** - Simple, good free tier
3. **If you need serverless → AWS Lambda** - Cheapest for low traffic

---

## Quick Start: FastAPI Wrapper (for Railway/Render/Fly.io)

Create `api.py` in your project root:

```python
from fastapi import FastAPI, File, Form, UploadFile, HTTPException
from fastapi.responses import JSONResponse
from job_writing_agent.workflow import JobWorkflow
import tempfile
import os
import asyncio

app = FastAPI(title="Job Application Writer API")

@app.get("/")
def health():
    return {"status": "ok"}

@app.post("/generate")
async def generate_application(
    resume: UploadFile = File(...),
    job_description: str = Form(...),
    content_type: str = Form("cover_letter"),
):
    """Generate job application material."""
    # Save the uploaded resume to a temporary file
    with tempfile.NamedTemporaryFile(delete=False, suffix=".pdf") as tmp:
        tmp.write(await resume.read())
        resume_path = tmp.name

    try:
        workflow = JobWorkflow(
            resume=resume_path,
            job_description_source=job_description,
            content=content_type,
        )

        # Run the (synchronous) workflow off the event loop
        result = await asyncio.to_thread(workflow.run)

        return JSONResponse({
            "status": "success",
            "result": result
        })
    except Exception as e:
        raise HTTPException(status_code=500, detail=str(e))
    finally:
        # Clean up the temporary file
        if os.path.exists(resume_path):
            os.unlink(resume_path)

if __name__ == "__main__":
    import uvicorn
    uvicorn.run(app, host="0.0.0.0", port=8000)
```

Then update `requirements.txt` to ensure FastAPI and uvicorn are included (they already are ✅).
DOCKERFILE_EXPLANATION.md DELETED
@@ -1,147 +0,0 @@
# Dockerfile Explanation

This Dockerfile is specifically designed for **LangGraph Cloud/LangServe deployment**. It uses the official LangGraph API base image and configures your agent graphs to be served as REST APIs.

## Line-by-Line Breakdown

### 1. Base Image (Line 1)
```dockerfile
FROM langchain/langgraph-api:3.12
```
- **Purpose**: Uses the official LangGraph API base image with Python 3.12
- **What it includes**: Pre-configured LangGraph runtime, LangServe server, and all LangGraph dependencies
- **Why**: This image already has everything needed to serve LangGraph workflows as REST APIs

---

### 2. Install the `nodes` Package (Line 9)
```dockerfile
RUN PYTHONDONTWRITEBYTECODE=1 uv pip install --system --no-cache-dir -c /api/constraints.txt nodes
```
- **Purpose**: Installs the `nodes` package (likely a dependency from your `langgraph.json`)
- **`PYTHONDONTWRITEBYTECODE=1`**: Prevents creating `.pyc` files (smaller image)
- **`uv pip`**: Uses `uv` (a fast Python package installer) instead of regular `pip`
- **`--system`**: Installs into the system Python (no virtual env)
- **`--no-cache-dir`**: Skips the pip download cache (smaller image)
- **`-c /api/constraints.txt`**: Uses the constraint file from the base image (ensures compatible versions)

---

### 3. Copy Your Code (Line 14)
```dockerfile
ADD . /deps/job_writer
```
- **Purpose**: Copies your entire project into `/deps/job_writer` in the container
- **Why `/deps/`**: The LangGraph API expects dependencies in this directory
- **What gets copied**: All your source code, `pyproject.toml`, `requirements.txt`, etc.

---

### 4. Install Your Package (Lines 19-21)
```dockerfile
RUN for dep in /deps/*; do \
      if [ -d "$dep" ]; then \
        echo "Installing $dep"; \
        (cd "$dep" && PYTHONDONTWRITEBYTECODE=1 uv pip install --system --no-cache-dir -c /api/constraints.txt -e .); \
      fi; \
    done
```
- **Purpose**: Installs your `job_writer` package in editable mode (`-e`)
- **How it works**:
  - Loops through all directories in `/deps/`
  - For each directory, changes into it and runs `pip install -e .`
  - The `-e` flag installs in "editable" mode (changes to the code are reflected without reinstalling)
- **Why**: Makes your package importable as `job_writing_agent` inside the container

---

### 5. Register Your Graphs (Line 25)
```dockerfile
ENV LANGSERVE_GRAPHS='{"job_app_graph": "/deps/job_writer/src/job_writing_agent/workflow.py:job_app_graph", ...}'
```
- **Purpose**: Tells LangServe which graphs to expose as REST APIs
- **Format**: JSON mapping of `graph_name` → `module_path:attribute_name`
- **What it does**:
  - `job_app_graph` → Exposes `JobWorkflow.job_app_graph` as an API endpoint
  - `research_workflow` → Exposes the research subgraph
  - `data_loading_workflow` → Exposes the data loading subgraph
- **Result**: Each graph becomes a REST API endpoint like `/invoke/job_app_graph`
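
The `graph_name` → `module_path:attribute_name` format can be seen by parsing a (trimmed) value the same way a loader would; this is only a sketch of the convention, not the actual loader code in the base image:

```python
import json

# A trimmed version of the LANGSERVE_GRAPHS value shown above
graphs = json.loads(
    '{"job_app_graph": '
    '"/deps/job_writer/src/job_writing_agent/workflow.py:job_app_graph"}'
)

# Each entry splits into a module path and the attribute to import from it
module_path, attr = graphs["job_app_graph"].rsplit(":", 1)
print(module_path)  # /deps/job_writer/src/job_writing_agent/workflow.py
print(attr)         # job_app_graph
```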
70
-
71
- ---
72
-
73
- ### 6. Protect LangGraph API (Lines 33-35)
74
- ```dockerfile
75
- RUN mkdir -p /api/langgraph_api /api/langgraph_runtime /api/langgraph_license && \
76
- touch /api/langgraph_api/__init__.py /api/langgraph_runtime/__init__.py /api/langgraph_license/__init__.py
77
- RUN PYTHONDONTWRITEBYTECODE=1 uv pip install --system --no-cache-dir --no-deps -e /api
78
- ```
79
- - **Purpose**: Prevents your dependencies from accidentally overwriting LangGraph API packages
80
- - **How**:
81
- 1. Creates placeholder `__init__.py` files for LangGraph packages
82
- 2. Reinstalls LangGraph API (without dependencies) to ensure it's not overwritten
83
- - **Why**: If your `requirements.txt` has conflicting versions, this ensures LangGraph API stays intact
84
-
85
- ---
86
-
87
- ### 7. Cleanup Build Tools (Lines 37-41)
88
- ```dockerfile
89
- RUN pip uninstall -y pip setuptools wheel
90
- RUN rm -rf /usr/local/lib/python*/site-packages/pip* ...
91
- RUN uv pip uninstall --system pip setuptools wheel && rm /usr/bin/uv /usr/bin/uvx
92
- ```
93
- - **Purpose**: Removes all build tools to make the image smaller and more secure
94
- - **What gets removed**:
95
- - `pip`, `setuptools`, `wheel` (Python build tools)
96
- - `uv` and `uvx` (package installers)
97
- - **Why**: These tools aren't needed at runtime, only during build
98
- - **Security**: Smaller attack surface (can't install malicious packages at runtime)
99
-
100
- ---
101
-
102
- ### 8. Set Working Directory (Line 45)
103
- ```dockerfile
104
- WORKDIR /deps/job_writer
105
- ```
106
- - **Purpose**: Sets the default directory when the container starts
107
- - **Why**: Makes it easier to reference files relative to your project root
108
-
109
- ---
110
-
111
- ## How It Works at Runtime
112
-
113
- When this container runs:
114
-
115
- 1. **LangServe starts automatically** (from base image)
116
- 2. **Reads `LANGSERVE_GRAPHS`** environment variable
117
- 3. **Imports your graphs** from the specified paths
118
- 4. **Exposes REST API endpoints**:
119
- - `POST /invoke/job_app_graph` - Main workflow
120
- - `POST /invoke/research_workflow` - Research subgraph
121
- - `POST /invoke/data_loading_workflow` - Data loading subgraph
122
- 5. **Handles state management** automatically (checkpointing, persistence)
123
-
124
- ## Example API Usage
125
-
126
- Once deployed, you can call your agent like this:
127
-
128
- ```bash
129
- curl -X POST http://your-deployment/invoke/job_app_graph \
130
- -H "Content-Type: application/json" \
131
- -d '{
132
- "resume_path": "...",
133
- "job_description_source": "...",
134
- "content": "cover_letter"
135
- }'
136
- ```
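
The same call can be built from Python with only the standard library; this sketch constructs the request without sending it, and the deployment URL and payload fields are placeholders exactly as in the curl example:

```python
import json
import urllib.request

def build_invoke_request(base_url: str, graph: str, payload: dict) -> urllib.request.Request:
    # Construct the POST request that mirrors the curl call above
    return urllib.request.Request(
        f"{base_url}/invoke/{graph}",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_invoke_request(
    "http://your-deployment", "job_app_graph",
    {"resume_path": "...", "job_description_source": "...", "content": "cover_letter"},
)
print(req.full_url)  # http://your-deployment/invoke/job_app_graph
```

Pass the request to `urllib.request.urlopen(req)` (or switch to `requests`/`httpx`) once a real deployment URL is available.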

## Key Points

✅ **Optimized for LangGraph Cloud** - Uses the official base image
✅ **Automatic API generation** - No need to write FastAPI code
✅ **State management** - Built-in checkpointing and persistence
✅ **Security** - Build tools removed from the final image
✅ **Small image** - No-cache installs, no bytecode files

This is the **easiest deployment option** for LangGraph apps - just build and push this Docker image!
RESUME_STORAGE_GUIDE.md DELETED
@@ -1,239 +0,0 @@
# Resume Storage Options for HF Spaces Deployment

This guide explains different ways to store and access your resume file for the deployed LangGraph application on HuggingFace Spaces.

## Problem

HuggingFace Spaces doesn't allow binary files (PDFs) in git repositories. We removed `resume.pdf` from git, but the workflow still needs access to it.

## Solution Options

### ✅ Option 1: URL Support (Easiest - Already Implemented!)

**Status:** ✅ **Code updated - now supports URLs!**

You can now provide a resume URL instead of a file path, and the code will automatically download it.

**Supported URL formats:**
- `https://example.com/resume.pdf` - Direct HTTP/HTTPS links
- `https://github.com/username/repo/raw/main/resume.pdf` - GitHub raw files
- `https://drive.google.com/uc?export=download&id=FILE_ID` - Google Drive (public)
- Any publicly accessible URL

**How to use:**

1. **Upload your resume to a public location:**
   - GitHub: Upload it to a repo and use the "raw" file URL
   - Google Drive: Make the file public and get a shareable link
   - Dropbox: Get a public link
   - Any web server or CDN

2. **Use the URL in your API call:**
   ```json
   {
     "assistant_id": "job_app_graph",
     "input": {
       "resume_path": "https://github.com/username/repo/raw/main/resume.pdf",
       "job_description_source": "https://example.com/job",
       "content_category": "cover_letter"
     }
   }
   ```

**Pros:**
- ✅ No code changes needed (already implemented)
- ✅ Works with any public URL
- ✅ No additional services required
- ✅ Easy to update (just replace the file at the URL)

**Cons:**
- ⚠️ The file must be publicly accessible
- ⚠️ Requires an internet connection to download
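
The download behavior described above can be sketched roughly like this; `resolve_resume` is a hypothetical helper for illustration, not the project's actual `parse_resume()` implementation:

```python
import os
import tempfile
import urllib.request

def resolve_resume(source: str) -> str:
    """Return a local file path for `source`, downloading it first if it is a URL."""
    if source.startswith(("http://", "https://")):
        # Keep the original extension (defaulting to .pdf) for downstream parsers
        suffix = os.path.splitext(source.split("?")[0])[1] or ".pdf"
        fd, path = tempfile.mkstemp(suffix=suffix)
        with urllib.request.urlopen(source) as resp, os.fdopen(fd, "wb") as out:
            out.write(resp.read())
        return path
    return source  # already a local path

print(resolve_resume("/tmp/resume.pdf"))  # -> /tmp/resume.pdf
```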

---

### Option 2: HuggingFace Hub Dataset (Recommended for Production)

Store your resume in HF Hub as a dataset - native integration with HF Spaces.

**Steps:**

1. **Install the HF Hub CLI:**
   ```bash
   pip install huggingface_hub
   ```

2. **Log in to HF:**
   ```bash
   huggingface-cli login
   ```

3. **Create a dataset and upload the resume:**
   ```bash
   # Create the dataset (one-time)
   huggingface-cli repo create resume-dataset --type dataset

   # Upload the resume
   huggingface-cli upload Rishabh2095/resume-dataset resume.pdf resume.pdf
   ```

4. **Access it in code (add to the workflow):**
   ```python
   from huggingface_hub import hf_hub_download

   # Download the resume from HF Hub
   resume_path = hf_hub_download(
       repo_id="Rishabh2095/resume-dataset",
       filename="resume.pdf",
       cache_dir="/tmp"
   )
   ```

5. **Use the downloaded path in the API call:**
   ```json
   {
     "assistant_id": "job_app_graph",
     "input": {
       "resume_path": "/tmp/resume.pdf",
       "job_description_source": "https://example.com/job",
       "content_category": "cover_letter"
     }
   }
   ```

**Pros:**
- ✅ Native HF integration
- ✅ Private datasets supported
- ✅ Version control for the resume
- ✅ No external dependencies

**Cons:**
- ⚠️ Requires a code modification to download from HF Hub
- ⚠️ Slight overhead for downloading

---

### Option 3: Object Storage (S3, GCS, Azure Blob)

Use cloud object storage for production scalability.

**Example: AWS S3**

1. **Upload to S3:**
   ```bash
   aws s3 cp resume.pdf s3://your-bucket/resume.pdf --acl public-read
   ```

2. **Use the public URL:**
   ```json
   {
     "resume_path": "https://your-bucket.s3.amazonaws.com/resume.pdf"
   }
   ```

**For a private S3 bucket (requires credentials):**
- Add AWS credentials as HF Space secrets
- Use `boto3` to download the file in code

**Pros:**
- ✅ Scalable and reliable
- ✅ Supports private files with auth
- ✅ Industry standard

**Cons:**
- ⚠️ Requires cloud account setup
- ⚠️ May incur costs
- ⚠️ More complex setup

---

### Option 4: HF Spaces Local Storage

HF Spaces containers have a writable `/tmp` directory. Note that it is ephemeral: its contents are lost when the Space restarts, so the file must be baked into the image or re-downloaded at startup (truly persistent storage on Spaces requires a paid persistent-storage volume).

**Steps:**

1. **Get the file into the container:**
   - Add the file to the Docker image (but this increases image size)
   - Or download it during container startup

2. **Use it in code:**
   ```python
   # In your workflow initialization
   DEFAULT_RESUME_PATH = "/tmp/resume.pdf"
   ```

**Pros:**
- ✅ No external dependencies at request time
- ✅ Fast access (local file)

**Cons:**
- ⚠️ The file must be in the Docker image (increases size)
- ⚠️ Not easily updatable without a rebuild

---

### Option 5: Environment Variable with URL

Store the resume URL as an HF Space secret.

**Steps:**

1. **Add it to the HF Space secrets:**
   - Go to Space Settings → Variables and secrets
   - Add: `RESUME_URL=https://example.com/resume.pdf`

2. **Use it in code:**
   ```python
   import os
   resume_path = os.getenv("RESUME_URL", "default_path_or_url")
   ```

**Pros:**
- ✅ Easy to update (change the secret, no code deploy)
- ✅ Can point to any URL
- ✅ Works with Option 1 (URL support)

**Cons:**
- ⚠️ Requires a code modification to read the env var

---

## Recommended Approach

**For Quick Start:** Use **Option 1 (URL Support)** - just upload your resume to GitHub, Google Drive, or any public URL and use that URL in your API calls.

**For Production:** Use **Option 2 (HF Hub Dataset)** - native integration, private support, version control.

## Implementation Status

- ✅ **URL Support:** Implemented in the `parse_resume()` function
- ⏳ **HF Hub Integration:** Can be added if needed
- ⏳ **Environment Variable:** Can be added if needed

## Testing

Test with a public resume URL:

```powershell
# Test with a GitHub raw file URL
$body = @{
    assistant_id = "job_app_graph"
    input = @{
        resume_path = "https://github.com/username/repo/raw/main/resume.pdf"
        job_description_source = "https://example.com/job"
        content_category = "cover_letter"
    }
} | ConvertTo-Json

Invoke-RestMethod -Uri "https://rishabh2095-agentworkflowjobapplications.hf.space/runs/wait" `
    -Method POST -Body $body -ContentType "application/json"
```

## Next Steps

1. Upload your resume to a public location (GitHub, Google Drive, etc.)
2. Get the public URL
3. Use that URL in your API calls as `resume_path`
4. The code will automatically download and process it!
docker-compose.override.example.yml DELETED
@@ -1,21 +0,0 @@
# Example override file for local development
# Copy this to docker-compose.override.yml to customize settings
# docker-compose automatically loads override files

version: "3.9"
services:
  redis:
    # Override the Redis port for local development
    ports:
      - "6380:6379"  # Use a different host port if 6379 is already in use

  postgres:
    # Override the Postgres port for local development
    ports:
      - "5433:5432"  # Use a different host port if 5432 is already in use
    environment:
      # Override credentials for local dev
      - POSTGRES_USER=dev_user
      - POSTGRES_PASSWORD=dev_password
      - POSTGRES_DB=job_app_dev
pyproject.toml CHANGED
@@ -119,6 +119,7 @@ dependencies = [
119
  "langgraph-prebuilt",
120
  "langgraph-runtime-inmem==0.14.1",
121
  "langgraph-sdk==0.2.9",
 
122
  "langsmith>=0.6.3",
123
  "lazy-object-proxy==1.12.0",
124
  "litellm==1.77.7",
 
119
  "langgraph-prebuilt",
120
  "langgraph-runtime-inmem==0.14.1",
121
  "langgraph-sdk==0.2.9",
122
+ "langgraph-store-mongodb>=0.1.1",
123
  "langsmith>=0.6.3",
124
  "lazy-object-proxy==1.12.0",
125
  "litellm==1.77.7",
src/job_writing_agent/langgraph_init.py DELETED
@@ -1,4 +0,0 @@
from .workflow import JobWorkflow

job_app_graph = JobWorkflow().compile()