Jesus del Carmen Valdiviezo committed on
Commit 3f538a2 · 1 Parent(s): 6a26fd2

initial webhook imp claude

Files changed (6)
  1. .gitignore +34 -0
  2. Dockerfile +35 -0
  3. README.md +45 -12
  4. app.py +346 -0
  5. requirements.txt +12 -0
  6. webhook.log +2 -0
.gitignore ADDED
@@ -0,0 +1,34 @@
+ # Environment variables
+ .env
+
+ # Python
+ __pycache__/
+ *.py[cod]
+ *$py.class
+ *.so
+ .Python
+ env/
+ build/
+ develop-eggs/
+ dist/
+ downloads/
+ eggs/
+ .eggs/
+ lib/
+ lib64/
+ parts/
+ sdist/
+ var/
+ *.egg-info/
+ .installed.cfg
+ *.egg
+ venv/
+
+ # OS specific
+ .DS_Store
+ .DS_Store?
+ ._*
+ .Spotlight-V100
+ .Trashes
+ ehthumbs.db
+ Thumbs.db
Dockerfile ADDED
@@ -0,0 +1,35 @@
+ # Use an official Python runtime as base image
+ # Using the slim variant for a smaller image size
+ FROM python:3.9-slim
+
+ # Set working directory in the container
+ WORKDIR /app
+
+ # Set environment variables:
+ # - PYTHONDONTWRITEBYTECODE prevents Python from writing .pyc files
+ # - PYTHONUNBUFFERED sends Python output straight to the terminal without buffering
+ ENV PYTHONDONTWRITEBYTECODE=1 \
+     PYTHONUNBUFFERED=1
+
+ # Install system dependencies
+ # Kept minimal for a smaller image size
+ RUN apt-get update && apt-get install -y --no-install-recommends \
+     gcc \
+     && apt-get clean \
+     && rm -rf /var/lib/apt/lists/*
+
+ # Copy requirements file
+ COPY requirements.txt .
+
+ # Install Python dependencies
+ RUN pip install --no-cache-dir -r requirements.txt
+
+ # Copy the project files into the working directory
+ COPY . .
+
+ # Expose the port the app runs on
+ EXPOSE 7860
+
+ # Command to run the application
+ # Uvicorn serves the Flask app through its WSGI interface
+ CMD ["uvicorn", "app:app", "--host", "0.0.0.0", "--port", "7860", "--interface", "wsgi"]
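Note that the `CMD` pins the port to 7860 (the value `EXPOSE`d above), while `app.py`'s `__main__` block reads it from the `PORT` environment variable. A minimal sketch of that fallback logic:

```python
import os

def get_port(default=7860):
    # Mirrors app.py: honor the PORT environment variable if set,
    # otherwise fall back to the Dockerfile's exposed port.
    return int(os.environ.get("PORT", default))

os.environ.pop("PORT", None)
print(get_port())  # -> 7860
os.environ["PORT"] = "8080"
print(get_port())  # -> 8080
```

When the container is run with `CMD` as written, the `PORT` variable is ignored; only `python app.py` honors it.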
README.md CHANGED
@@ -1,12 +1,45 @@
- ---
- title: Epigen Webhook
- emoji: 🐨
- colorFrom: green
- colorTo: blue
- sdk: docker
- pinned: false
- license: mit
- short_description: Webhook
- ---
-
- Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
+ # Epigen WhatsApp Webhook
+
+ A webhook server for the Epigen WhatsApp integration. This service processes incoming WhatsApp messages using Google's Gemini AI model and responds with information about Epigen products and services.
+
+ ## Features
+
+ - Processes incoming WhatsApp messages via Green API
+ - Generates AI responses using Google's Gemini model
+ - Maintains conversation history for personalized interactions
+ - Provides health check and monitoring endpoints
+
+ ## Technical Stack
+
+ - **Flask**: Web framework for handling HTTP requests
+ - **Uvicorn**: ASGI server that runs the Flask application through its WSGI interface
+ - **Google Generative AI**: AI model for generating responses
+ - **Green API**: WhatsApp integration provider
+
+ ## Deployment
+
+ This application is designed to be deployed as a Docker container and is compatible with:
+ - Hugging Face Spaces
+ - AWS Elastic Beanstalk
+ - Google Cloud Run
+ - Any Docker-compatible cloud platform
+
+ ## Local Development
+
+ 1. Clone the repository
+ 2. Create a `.env` file with the required API keys
+ 3. Install dependencies: `pip install -r requirements.txt`
+ 4. Run the application: `python app.py`
+
+ ## Environment Variables
+
+ - `GREEN_API_ID`: Your Green API instance ID
+ - `GREEN_API_TOKEN`: Your Green API token
+ - `GOOGLE_API_KEY`: Your Google API key for Gemini access
+ - `PORT`: The port to run the server on (default: 7860)
+
+ ## API Endpoints
+
+ - `GET /`: Home page showing server status
+ - `GET /health`: Health check endpoint
+ - `GET/POST /webhook`: Main webhook endpoint for WhatsApp integration
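As a sketch of what the `/webhook` endpoint consumes, here is a minimal parser for a Green API `incomingMessageReceived` notification. The field names match those used in `app.py`; the sample payload values are made up for illustration:

```python
# Minimal sketch: extracting the sender and text from a Green API
# "incomingMessageReceived" notification (field names as in app.py;
# the sample values are illustrative only).
sample = {
    "typeWebhook": "incomingMessageReceived",
    "senderData": {"sender": "5215512345678@c.us"},
    "messageData": {
        "typeMessage": "textMessage",
        "textMessageData": {"textMessage": "Hola"},
    },
}

def parse_incoming(data):
    """Return (sender, text) for a text message, or None otherwise."""
    if data.get("typeWebhook") != "incomingMessageReceived":
        return None
    message_data = data.get("messageData", {})
    if message_data.get("typeMessage") != "textMessage":
        return None
    sender = data["senderData"]["sender"].split("@")[0]  # strip the @c.us suffix
    return sender, message_data["textMessageData"]["textMessage"]

print(parse_incoming(sample))  # -> ('5215512345678', 'Hola')
```

Non-text notifications (delivery receipts, state changes) fall through to `None`, which is why the handler below only replies to text and audio message types.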
app.py ADDED
@@ -0,0 +1,346 @@
+ """
+ WhatsApp Webhook Server for the Epigen Chatbot
+
+ This server receives webhook events from WhatsApp via Green API,
+ processes them using Google's Gemini AI model, and sends responses
+ back to the user.
+
+ The server is built with Flask and runs on Uvicorn for improved performance.
+ """
+
+ import os
+ import json
+ import time
+ from typing import Dict, List, Any, Optional
+ import requests
+ from flask import Flask, request, jsonify
+ from loguru import logger
+ from dotenv import load_dotenv
+
+ # Load environment variables from a .env file (for local development)
+ # This has no effect in production, where environment variables are set differently
+ load_dotenv()
+
+ # Initialize Flask application
+ app = Flask(__name__)
+
+ # ==================== CONFIGURATION ====================
+
+ # Get API credentials from environment variables
+ # These are set as secrets in Hugging Face Spaces or other cloud environments
+ GREEN_API_ID = os.environ.get("GREEN_API_ID")
+ GREEN_API_TOKEN = os.environ.get("GREEN_API_TOKEN")
+ GOOGLE_API_KEY = os.environ.get("GOOGLE_API_KEY")
+
+ # Check that the required environment variables are set
+ if not GREEN_API_ID or not GREEN_API_TOKEN:
+     logger.warning("WhatsApp API credentials not set. Webhook will not be able to send messages.")
+
+ if not GOOGLE_API_KEY:
+     logger.warning("Google API key not set. AI responses will not work.")
+
+ # Configure logging
+ logger.add("webhook.log", rotation="500 MB", level="INFO", retention="10 days")
+
+ # ==================== DATA STORAGE ====================
+
+ # In-memory storage for chat histories
+ # In a production environment, this would be replaced with a database
+ whatsapp_chat_histories: Dict[str, List[Dict[str, str]]] = {}
+
+ # Knowledge base content - replace with the actual content from the Streamlit app
+ knowledge_content = """
+ # Datos de Epigen
+ - WhatsApp: 5544918977
+ - Direccion: Avenida de los Insurgentes 601, 03810 Col. Nápoles, CDMX, CP:03100
+ - Sitio Web: https://epigen.mx/
+ """  # Add your full knowledge base here
+
+ # ==================== ROUTE HANDLERS ====================
+
+ @app.route('/', methods=['GET'])
+ def home():
+     """
+     Home route to confirm the server is running.
+
+     This endpoint is useful for:
+     1. Checking if the server is alive
+     2. Basic health monitoring
+     3. Browser-based verification
+
+     Returns:
+         JSON response with status message
+     """
+     return jsonify({
+         "status": "online",
+         "message": "Epigen WhatsApp webhook server is running",
+         "version": "1.0.0"
+     }), 200
+
+ @app.route('/webhook', methods=['GET', 'POST'])
+ def webhook():
+     """
+     Main webhook endpoint for WhatsApp.
+
+     Handles two types of requests:
+     - GET: Used by Green API to verify the webhook URL
+     - POST: Receives incoming message notifications
+
+     Returns:
+         JSON response indicating success or error
+     """
+     # Handle webhook verification (GET request)
+     if request.method == 'GET':
+         logger.info("Received webhook verification request")
+         return jsonify({"status": "webhook is active"}), 200
+
+     # Handle incoming webhook events (POST request)
+     try:
+         # Get the JSON data from the request
+         data = request.get_json()
+         logger.info(f"Received webhook data: {json.dumps(data)}")
+
+         # Process incoming messages
+         if data.get("typeWebhook") == "incomingMessageReceived":
+             message_data = data.get("messageData", {})
+
+             # Handle text messages
+             if message_data.get("typeMessage") == "textMessage":
+                 sender = data["senderData"]["sender"].split("@")[0]  # Get the phone number
+                 message_text = message_data["textMessageData"]["textMessage"]
+                 logger.info(f"Received message from {sender}: {message_text}")
+
+                 # Process the message and get a response
+                 ai_response = process_message(sender, message_text)
+
+                 # Send the response back to the user
+                 send_whatsapp_message(sender, ai_response)
+
+             # Handle voice messages (future enhancement)
+             elif message_data.get("typeMessage") == "audioMessage":
+                 sender = data["senderData"]["sender"].split("@")[0]
+                 logger.info(f"Received audio message from {sender}")
+
+                 # Audio is not processed yet, so send a default response
+                 send_whatsapp_message(
+                     sender,
+                     "Recibí tu mensaje de voz, pero actualmente solo puedo procesar mensajes de texto."
+                 )
+
+         return jsonify({"status": "message processed"}), 200
+
+     except Exception as e:
+         logger.error(f"Error processing webhook: {str(e)}")
+         return jsonify({"status": "error", "message": str(e)}), 500
+
+ # ==================== MESSAGE PROCESSING ====================
+
+ def process_message(sender: str, message_text: str) -> str:
+     """
+     Process a message and generate an AI response.
+
+     This function:
+     1. Initializes chat history for new users
+     2. Adds the user message to history
+     3. Generates an AI response
+     4. Adds the response to history
+
+     Args:
+         sender (str): The phone number of the sender
+         message_text (str): The content of the message
+
+     Returns:
+         str: The AI-generated response
+     """
+     try:
+         # Initialize chat history for new users
+         if sender not in whatsapp_chat_histories:
+             whatsapp_chat_histories[sender] = [
+                 {"role": "assistant", "content": "¡Hola! Soy el asistente de Epigen. ¿Cómo puedo ayudarte hoy? 🧬"}
+             ]
+             logger.info(f"Initialized new chat history for {sender}")
+
+         # Add user message to history
+         whatsapp_chat_histories[sender].append({"role": "user", "content": message_text})
+
+         # Generate AI response with a retry mechanism
+         max_retries = 3
+         for attempt in range(max_retries):
+             try:
+                 # Generate response using AI
+                 response = generate_ai_response(
+                     whatsapp_chat_histories[sender],
+                     message_text
+                 )
+
+                 # Add AI response to history
+                 whatsapp_chat_histories[sender].append({"role": "assistant", "content": response})
+                 logger.info(f"Generated response for {sender}: {response[:50]}...")
+
+                 return response
+
+             except Exception as e:
+                 logger.error(f"Attempt {attempt+1}/{max_retries} failed: {str(e)}")
+                 if attempt == max_retries - 1:  # Last attempt
+                     raise
+                 time.sleep(1)  # Wait before retrying
+
+     except Exception as e:
+         logger.error(f"Error processing message: {str(e)}")
+         return "Lo siento, tuve un problema procesando tu mensaje. Por favor intenta de nuevo."
+
+ def generate_ai_response(chat_history: List[Dict[str, str]], user_message: str) -> str:
+     """
+     Generate a response using the Google Gemini model.
+
+     This function:
+     1. Configures the Gemini API
+     2. Formats the conversation history
+     3. Adds the system message with the knowledge base
+     4. Generates and returns the response
+
+     Args:
+         chat_history (List[Dict[str, str]]): The conversation history
+         user_message (str): The latest user message
+
+     Returns:
+         str: The generated AI response
+     """
+     # Import the Gemini API library
+     # Imported here to avoid loading it unless needed
+     import google.generativeai as genai
+
+     # Configure the Gemini API
+     genai.configure(api_key=GOOGLE_API_KEY)
+
+     # Set up the model with appropriate parameters
+     generation_config = {
+         "temperature": 0.7,  # Controls randomness (0.0 = deterministic, 1.0 = creative)
+         "top_p": 0.95,  # Nucleus sampling parameter
+         "top_k": 0,  # Top-k sampling (0 disables the limit)
+         "max_output_tokens": 1000,  # Maximum length of the response
+     }
+
+     # Safety settings to prevent harmful or inappropriate content
+     safety_settings = [
+         {"category": "HARM_CATEGORY_HARASSMENT", "threshold": "BLOCK_MEDIUM_AND_ABOVE"},
+         {"category": "HARM_CATEGORY_HATE_SPEECH", "threshold": "BLOCK_MEDIUM_AND_ABOVE"},
+         {"category": "HARM_CATEGORY_SEXUALLY_EXPLICIT", "threshold": "BLOCK_MEDIUM_AND_ABOVE"},
+         {"category": "HARM_CATEGORY_DANGEROUS_CONTENT", "threshold": "BLOCK_MEDIUM_AND_ABOVE"},
+     ]
+
+     # Initialize the generative model
+     model = genai.GenerativeModel(
+         model_name="gemini-2.0-flash",  # The lighter model, for faster responses
+         generation_config=generation_config,
+         safety_settings=safety_settings,
+     )
+
+     # Format the conversation history for Gemini
+     # Gemini uses "user" and "model" roles instead of "user" and "assistant"
+     formatted_history = []
+     for message in chat_history:
+         role = "user" if message["role"] == "user" else "model"
+         formatted_history.append({"role": role, "parts": [message["content"]]})
+
+     # Add a system message with the knowledge base
+     # This provides context about Epigen to inform the AI's responses
+     system_message = (
+         "Eres un agente conversacional de IA experto en epigenética y en los productos de Epigen. "
+         "Usa la siguiente información para responder preguntas sobre Epigen:\n\n" + knowledge_content
+     )
+     formatted_history.insert(0, {"role": "model", "parts": [system_message]})
+
+     # Generate the response
+     chat = model.start_chat(history=formatted_history)
+     response = chat.send_message(user_message)
+
+     return response.text
+
+ # ==================== WHATSAPP INTEGRATION ====================
+
+ def send_whatsapp_message(recipient: str, message: str) -> Optional[Dict[str, Any]]:
+     """
+     Send a message back to the user via WhatsApp.
+
+     Uses Green API to send messages to WhatsApp users.
+
+     Args:
+         recipient (str): The phone number to send the message to
+         message (str): The content of the message
+
+     Returns:
+         Optional[Dict[str, Any]]: The response from the Green API, or None on failure
+     """
+     # Construct the URL for the Green API endpoint
+     url = f"https://api.green-api.com/waInstance{GREEN_API_ID}/sendMessage/{GREEN_API_TOKEN}"
+
+     # Prepare the payload with the recipient and message
+     payload = {
+         "chatId": f"{recipient}@c.us",  # Format required by WhatsApp
+         "message": message
+     }
+
+     try:
+         # Send the request to Green API (with a timeout so the worker never hangs)
+         response = requests.post(url, json=payload, timeout=30)
+         response_data = response.json()
+
+         # Log the result
+         if response.status_code == 200 and response_data.get("idMessage"):
+             logger.info(f"Message sent to {recipient}: {message[:50]}...")
+         else:
+             logger.error(f"Error sending message: {response_data}")
+
+         return response_data
+
+     except Exception as e:
+         logger.error(f"Exception when sending message: {str(e)}")
+         return None
+
+ # ==================== UTILITY ROUTES ====================
+
+ @app.route('/health', methods=['GET'])
+ def health_check():
+     """
+     Health check endpoint for monitoring services.
+
+     Returns detailed information about the server's status,
+     including environment configuration and service availability.
+
+     Returns:
+         JSON response with health information
+     """
+     # Check Green API configuration
+     green_api_status = "configured" if GREEN_API_ID and GREEN_API_TOKEN else "not configured"
+
+     # Check Google API configuration
+     google_api_status = "configured" if GOOGLE_API_KEY else "not configured"
+
+     # Return comprehensive health status
+     return jsonify({
+         "status": "healthy",
+         "timestamp": time.time(),
+         "services": {
+             "green_api": green_api_status,
+             "google_ai": google_api_status
+         },
+         "active_chats": len(whatsapp_chat_histories)
+     }), 200
+
+ # ==================== SERVER STARTUP ====================
+
+ # This block only runs when executing this file directly
+ # In production, Uvicorn imports and runs the Flask app object
+ if __name__ == "__main__":
+     import uvicorn
+
+     # Get the port from the environment or use the default
+     port = int(os.environ.get('PORT', 7860))
+
+     # Log the server startup
+     logger.info(f"Starting server on port {port}")
+
+     # Run the server using Uvicorn
+     # Using the WSGI interface since Flask is a WSGI application
+     uvicorn.run("app:app", host="0.0.0.0", port=port, interface="wsgi")
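The subtlest step in `generate_ai_response` above is translating the stored history into Gemini's format: the in-memory history uses `user`/`assistant` roles with `content` strings, while Gemini expects `user`/`model` roles with `parts` lists, and the system prompt is prepended as a `model` turn. That mapping, extracted as a standalone sketch:

```python
# Sketch of the history conversion done in generate_ai_response:
# "assistant" becomes "model", content strings become "parts" lists,
# and the system message is inserted at the front as a "model" turn.
def to_gemini_history(chat_history, system_message):
    formatted = [
        {"role": "user" if m["role"] == "user" else "model",
         "parts": [m["content"]]}
        for m in chat_history
    ]
    formatted.insert(0, {"role": "model", "parts": [system_message]})
    return formatted

history = [
    {"role": "assistant", "content": "¡Hola! Soy el asistente de Epigen."},
    {"role": "user", "content": "¿Qué es la epigenética?"},
]
converted = to_gemini_history(history, "system prompt")
print([turn["role"] for turn in converted])  # -> ['model', 'model', 'user']
```

Prepending the system prompt as a `model` turn works, though the `google-generativeai` client also accepts a dedicated `system_instruction` argument on `GenerativeModel`, which would keep the prompt out of the chat history.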
requirements.txt ADDED
@@ -0,0 +1,12 @@
+ # Web server components
+ flask  # Web framework for creating the API endpoints
+ uvicorn  # ASGI server for running the Flask application
+ python-dotenv  # For loading environment variables from .env files
+
+ # External service integrations
+ requests  # HTTP library for making API calls to WhatsApp
+ google-generativeai  # Google's Gemini AI API client
+
+ # Utilities
+ pydantic  # Data validation library
+ loguru  # Better logging capabilities
webhook.log ADDED
@@ -0,0 +1,2 @@
+ 2025-05-03 16:25:35.331 | INFO | __main__:<module>:342 - Starting server on port 7860
+ 2025-05-03 22:19:00.614 | INFO | __main__:<module>:342 - Starting server on port 7860