sccastillo committed
Commit 476e500 · 1 Parent(s): e711ca4
.env ADDED
@@ -0,0 +1,9 @@
+ # Environment variables for FastAPI TextGen
+ # Copy this file to .env and fill in your actual values
+
+ # OpenAI API Configuration
+ OPENAI_API_KEY=your_openai_api_key_here
+ # Development Notes:
+ # - Get your OpenAI API key from: https://platform.openai.com/api-keys
+ # - Make sure your API key has sufficient credits
+ # - Keep your API key secure and never commit it to version control
.gitignore ADDED
@@ -0,0 +1 @@
+ .researcher
Dockerfile ADDED
@@ -0,0 +1,9 @@
+ FROM python:3.10.9
+
+ COPY . .
+
+ WORKDIR /
+
+ RUN pip install --no-cache-dir --upgrade -r /requirements.txt
+
+ CMD ["uvicorn", "app:app", "--host", "0.0.0.0", "--port", "7860"]
README.md CHANGED
@@ -6,7 +6,205 @@ colorTo: indigo
  sdk: docker
  pinned: false
  license: mit
- short_description: Research agent for cientific discovery
+ short_description: Simple FastAPI agent for answering questions using OpenAI
  ---
 
- Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
+ # FastAPI TextGen with OpenAI
+
+ A simple FastAPI application that uses OpenAI's LLM to answer user questions through LangChain.
+
+ ## 🚀 Quick Start
+
+ ### Local Development
+
+ 1. **Clone and setup**
+    ```bash
+    git clone <your-repo>
+    cd researcher
+    ```
+
+ 2. **Install dependencies**
+    ```bash
+    pip install -r requirements.txt
+    ```
+
+ 3. **Setup environment**
+    ```bash
+    # Copy the environment template
+    cp env_template.txt .env
+
+    # Edit .env and add your OpenAI API key
+    # OPENAI_API_KEY=your_actual_key_here
+    ```
+
+ 4. **Run development server**
+    ```bash
+    # Option 1: Use the development runner (recommended)
+    python dev_run.py
+
+    # Option 2: Run directly with uvicorn
+    uvicorn app:app --host 127.0.0.1 --port 8000 --reload
+    ```
+
+ 5. **Test the application**
+    - **Web Interface**: http://localhost:8000 (Beautiful UI for testing)
+    - **API Documentation**: http://localhost:8000/docs (Swagger UI)
+    - **Health Check**: http://localhost:8000/health
+
+ ### For Hugging Face Deployment
+
+ In your Hugging Face Space settings, add the following secret:
+ - **Name**: `OPENAI_API_KEY`
+
+ ## 🔗 API Endpoints
+
+ The FastAPI backend provides the following endpoints for consuming the service:
+
+ ### Base URL
+ - **Local Development**: `http://localhost:8000`
+ - **Hugging Face Deployment**: `https://your-space-name.hf.space`
+
+ ### Available Endpoints
+
+ #### 1. **GET /** - Home/Web Interface
+ - **Description**: Serves the web interface for interactive testing
+ - **URL**: `/`
+ - **Method**: `GET`
+ - **Response**: HTML page or JSON welcome message
+
+ #### 2. **GET /health** - Health Check
+ - **Description**: Check if the API is running and properly configured
+ - **URL**: `/health`
+ - **Method**: `GET`
+ - **Response**:
+   ```json
+   {
+       "status": "healthy",
+       "message": "FastAPI TextGen is running",
+       "openai_configured": true
+   }
+   ```
+
+ #### 3. **POST /api/generate** - Generate Answer
+ - **Description**: Send a question and receive an AI-generated answer
+ - **URL**: `/api/generate`
+ - **Method**: `POST`
+ - **Content-Type**: `application/json`
+ - **Request Body**:
+   ```json
+   {
+       "question": "Your question here"
+   }
+   ```
+ - **Response**:
+   ```json
+   {
+       "text": "AI-generated answer to your question"
+   }
+   ```
+ - **Error Responses**:
+   - `400 Bad Request`: Empty or missing question
+   - `500 Internal Server Error`: OpenAI API issues or configuration problems
+
+ #### 4. **GET /docs** - API Documentation
+ - **Description**: Interactive Swagger UI documentation
+ - **URL**: `/docs`
+ - **Method**: `GET`
+ - **Response**: Interactive API documentation interface
+
+ ## 📖 API Usage Examples
+
+ ### POST /api/generate
+
+ Send a question to get an AI-powered answer.
+
+ **Request Body:**
+ ```json
+ {
+     "question": "What is artificial intelligence?"
+ }
+ ```
+
+ **Response:**
+ ```json
+ {
+     "text": "Artificial intelligence (AI) refers to the simulation of human intelligence in machines..."
+ }
+ ```
+
+ ### Example with curl
+ ```bash
+ curl -X POST "http://localhost:8000/api/generate" \
+      -H "Content-Type: application/json" \
+      -d '{"question": "Explain quantum computing in simple terms"}'
+ ```
+
+ ### Example with Python
+ ```python
+ import requests
+
+ response = requests.post(
+     "http://localhost:8000/api/generate",
+     json={"question": "What is machine learning?"}
+ )
+ print(response.json()["text"])
+ ```
+
+ ### Example with JavaScript/Fetch
+ ```javascript
+ async function askQuestion(question) {
+     const response = await fetch('http://localhost:8000/api/generate', {
+         method: 'POST',
+         headers: {
+             'Content-Type': 'application/json',
+         },
+         body: JSON.stringify({ question: question })
+     });
+
+     const data = await response.json();
+     return data.text;
+ }
+
+ // Usage
+ askQuestion("Explain neural networks").then(answer => {
+     console.log(answer);
+ });
+ ```
+
+ ### Example with Node.js
+ ```javascript
+ const axios = require('axios');
+
+ async function askQuestion(question) {
+     try {
+         const response = await axios.post('http://localhost:8000/api/generate', {
+             question: question
+         });
+         return response.data.text;
+     } catch (error) {
+         console.error('Error:', error.response?.data || error.message);
+     }
+ }
+
+ // Usage
+ askQuestion("What is artificial intelligence?").then(answer => {
+     console.log(answer);
+ });
+ ```
+
+ ### Health Check Example
+ ```bash
+ # Check if the API is running
+ curl http://localhost:8000/health
+
+ # Expected response:
+ # {"status":"healthy","message":"FastAPI TextGen is running","openai_configured":true}
+ ```
+
+ ## 🌐 Integration Notes
+
+ - **CORS Enabled**: The API accepts requests from any origin (`*`)
+ - **Content-Type**: Always use `application/json` for POST requests
+ - **Error Handling**: Check HTTP status codes and response messages
+ - **Rate Limiting**: Depends on your OpenAI API key limits
+ - **Timeout**: Consider setting appropriate timeouts for your requests
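
The notes above (timeouts, status codes, error bodies) can be folded into a small dependency-free client. This is an illustrative sketch, not part of the repository: the `generate` helper name is our own, and only the `/api/generate` contract documented above is assumed.

```python
import json
import urllib.request
import urllib.error

def generate(base_url: str, question: str, timeout: float = 30.0) -> str:
    """POST a question to /api/generate and return the 'text' field.

    Raises RuntimeError with the server's 'detail' message on HTTP errors.
    """
    req = urllib.request.Request(
        f"{base_url}/api/generate",
        data=json.dumps({"question": question}).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    try:
        # The timeout guards against a hung upstream OpenAI call
        with urllib.request.urlopen(req, timeout=timeout) as resp:
            return json.load(resp)["text"]
    except urllib.error.HTTPError as exc:
        # FastAPI error bodies look like {"detail": "..."}
        detail = json.load(exc).get("detail", "unknown error")
        raise RuntimeError(f"API error {exc.code}: {detail}") from exc
```

`urllib.error.HTTPError` is file-like, so `json.load` can read the error body directly, which is how the 400/500 `detail` messages listed above are surfaced.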
TextGen/ConfigEnv.py ADDED
@@ -0,0 +1,15 @@
+ """Config class for handling env variables.
+ """
+ from functools import lru_cache
+ from pydantic import BaseSettings
+
+ class Settings(BaseSettings):
+     OPENAI_API_KEY: str
+
+     class Config:
+         env_file = '.env'
+
+ @lru_cache()
+ def get_settings():
+     return Settings()
+ config = get_settings()
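
The `lru_cache` on `get_settings` means the settings (and the `.env` file behind them) are read once per process; every later call returns the same object. A stdlib-only sketch of that behaviour, in which a plain dict stands in for the pydantic `Settings` model and the key value is a dummy:

```python
import os
from functools import lru_cache

# Placeholder value for illustration only; never a real key
os.environ["OPENAI_API_KEY"] = "sk-dummy-for-illustration"

@lru_cache()
def get_settings():
    # The real app constructs a pydantic Settings here; a dict stands in
    return {"OPENAI_API_KEY": os.environ["OPENAI_API_KEY"]}

config = get_settings()
assert get_settings() is config  # cached: the same object on every call
```

This is why `router.py` can simply `from .ConfigEnv import config` without re-parsing the environment on each request.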
TextGen/__init__.py ADDED
@@ -0,0 +1,7 @@
+ from fastapi import FastAPI
+
+ app = FastAPI(title="Deploying FastAPI Apps on Huggingface")
+
+ from TextGen import router
+
+
TextGen/__pycache__/ConfigEnv.cpython-312.pyc ADDED
Binary file (913 Bytes).
TextGen/__pycache__/__init__.cpython-312.pyc ADDED
Binary file (310 Bytes).
TextGen/__pycache__/router.cpython-312.pyc ADDED
Binary file (3.75 kB).
TextGen/router.py ADDED
@@ -0,0 +1,82 @@
+ from pydantic import BaseModel
+ from fastapi import HTTPException
+ from fastapi.staticfiles import StaticFiles
+ from fastapi.responses import FileResponse
+ import os
+
+ from .ConfigEnv import config
+ from fastapi.middleware.cors import CORSMiddleware
+
+ from langchain_openai import OpenAI
+ from langchain.chains import LLMChain
+ from langchain.prompts import PromptTemplate
+
+ from TextGen import app
+
+ class Generate(BaseModel):
+     text: str
+
+ class QuestionRequest(BaseModel):
+     question: str
+
+ def answer_question(question: str):
+     if not question or question.strip() == "":
+         raise HTTPException(status_code=400, detail="Please provide a question.")
+
+     # Simple prompt template for answering questions
+     prompt_template = PromptTemplate(
+         template="Answer the following question clearly and concisely: {question}",
+         input_variables=["question"]
+     )
+
+     # Initialize OpenAI LLM
+     llm = OpenAI(
+         api_key=config.OPENAI_API_KEY,
+         temperature=0.7
+     )
+
+     # Create LLM chain
+     llm_chain = LLMChain(
+         prompt=prompt_template,
+         llm=llm
+     )
+
+     try:
+         # Generate response
+         response = llm_chain.run(question=question)
+         return Generate(text=response.strip())
+     except Exception as e:
+         raise HTTPException(status_code=500, detail=f"Error generating response: {str(e)}")
+
+ # Mount static files for development interface
+ if os.path.exists("static"):
+     app.mount("/static", StaticFiles(directory="static"), name="static")
+
+ app.add_middleware(
+     CORSMiddleware,
+     allow_origins=["*"],
+     allow_credentials=True,
+     allow_methods=["*"],
+     allow_headers=["*"],
+ )
+
+ @app.get("/", tags=["Home"])
+ def api_home():
+     # Check if we have the static file for development interface
+     if os.path.exists("static/index.html"):
+         return FileResponse('static/index.html')
+     else:
+         return {'detail': 'Welcome to FastAPI TextGen Tutorial! Visit /docs for API documentation.'}
+
+ @app.get("/health", tags=["Health"])
+ def health_check():
+     """Health check endpoint for development and deployment monitoring."""
+     return {
+         "status": "healthy",
+         "message": "FastAPI TextGen is running",
+         "openai_configured": bool(config.OPENAI_API_KEY)
+     }
+
+ @app.post("/api/generate", summary="Answer user questions", tags=["Generate"], response_model=Generate)
+ def inference(request: QuestionRequest):
+     return answer_question(question=request.question)
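
The `PromptTemplate` in `answer_question` is plain string interpolation: before anything is sent to OpenAI, the chain renders the template with the user's question. A dependency-free sketch of that rendering step plus the 400-style empty-question guard (the `build_prompt` helper is ours, not part of the repository):

```python
# Same template string as answer_question in router.py
TEMPLATE = "Answer the following question clearly and concisely: {question}"

def build_prompt(question: str) -> str:
    # Mirrors the empty-question check that answer_question turns into a 400
    if not question or not question.strip():
        raise ValueError("Please provide a question.")
    return TEMPLATE.format(question=question)

print(build_prompt("What is machine learning?"))
# → Answer the following question clearly and concisely: What is machine learning?
```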
__pycache__/app.cpython-312.pyc ADDED
Binary file (172 Bytes).
app.py ADDED
@@ -0,0 +1 @@
+ from TextGen import app
dev_run.py ADDED
@@ -0,0 +1,65 @@
+ #!/usr/bin/env python3
+ """
+ Development runner for the FastAPI TextGen application.
+ This script helps run the application locally with development configurations.
+ """
+
+ import os
+ import sys
+ import uvicorn
+ from pathlib import Path
+
+ def setup_environment():
+     """Setup environment variables for local development."""
+     # Check if .env file exists
+     env_file = Path(".env")
+     if not env_file.exists():
+         print("⚠️ No .env file found!")
+         print("📝 Please create a .env file with the following content:")
+         print()
+         print("OPENAI_API_KEY=your_openai_api_key_here")
+         print()
+
+         # Fall back to an already-exported environment variable
+         openai_key = os.getenv("OPENAI_API_KEY")
+         if not openai_key:
+             print("❌ OPENAI_API_KEY environment variable not set either.")
+             print("Please either:")
+             print("1. Create a .env file with OPENAI_API_KEY=your_key")
+             print("2. Set the environment variable: export OPENAI_API_KEY=your_key")
+             sys.exit(1)
+         else:
+             print("✅ Found OPENAI_API_KEY in environment variables")
+     else:
+         print("✅ Found .env file")
+
+ def main():
+     """Main function to run the development server."""
+     print("🚀 Starting FastAPI TextGen Development Server")
+     print("=" * 50)
+
+     # Setup environment
+     setup_environment()
+
+     print("🔧 Development server starting...")
+     print("📍 API will be available at: http://localhost:8000")
+     print("📚 API documentation at: http://localhost:8000/docs")
+     print("🔄 Auto-reload enabled for development")
+     print()
+     print("Press Ctrl+C to stop the server")
+     print("=" * 50)
+
+     # Run the development server
+     try:
+         uvicorn.run(
+             "app:app",
+             host="127.0.0.1",
+             port=8000,
+             reload=True,
+             log_level="info"
+         )
+     except KeyboardInterrupt:
+         print("\n👋 Development server stopped")
+
+ if __name__ == "__main__":
+     main()
requirements.txt ADDED
@@ -0,0 +1,9 @@
+ fastapi==0.99.1
+ uvicorn
+ requests
+ pydantic==1.10.12
+ langchain
+ langchain-openai
+ openai
+ python-multipart
+ python-dotenv
static/index.html ADDED
@@ -0,0 +1,248 @@
+ <!DOCTYPE html>
+ <html lang="en">
+ <head>
+     <meta charset="UTF-8">
+     <meta name="viewport" content="width=device-width, initial-scale=1.0">
+     <title>FastAPI TextGen - Development Interface</title>
+     <style>
+         * {
+             margin: 0;
+             padding: 0;
+             box-sizing: border-box;
+         }
+
+         body {
+             font-family: 'Arial', sans-serif;
+             background: linear-gradient(135deg, #667eea 0%, #764ba2 100%);
+             min-height: 100vh;
+             padding: 20px;
+         }
+
+         .container {
+             max-width: 800px;
+             margin: 0 auto;
+             background: white;
+             border-radius: 15px;
+             box-shadow: 0 10px 30px rgba(0, 0, 0, 0.2);
+             overflow: hidden;
+         }
+
+         .header {
+             background: linear-gradient(135deg, #4facfe 0%, #00f2fe 100%);
+             color: white;
+             padding: 30px;
+             text-align: center;
+         }
+
+         .header h1 {
+             font-size: 2.5em;
+             margin-bottom: 10px;
+         }
+
+         .header p {
+             font-size: 1.1em;
+             opacity: 0.9;
+         }
+
+         .content {
+             padding: 40px;
+         }
+
+         .form-group {
+             margin-bottom: 25px;
+         }
+
+         label {
+             display: block;
+             margin-bottom: 8px;
+             font-weight: bold;
+             color: #333;
+         }
+
+         #questionInput {
+             width: 100%;
+             padding: 15px;
+             border: 2px solid #e1e5e9;
+             border-radius: 8px;
+             font-size: 16px;
+             transition: border-color 0.3s;
+             resize: vertical;
+             min-height: 100px;
+         }
+
+         #questionInput:focus {
+             outline: none;
+             border-color: #4facfe;
+         }
+
+         #askButton {
+             background: linear-gradient(135deg, #4facfe 0%, #00f2fe 100%);
+             color: white;
+             padding: 15px 30px;
+             border: none;
+             border-radius: 8px;
+             font-size: 18px;
+             cursor: pointer;
+             transition: transform 0.2s;
+             width: 100%;
+         }
+
+         #askButton:hover {
+             transform: translateY(-2px);
+         }
+
+         #askButton:disabled {
+             opacity: 0.6;
+             cursor: not-allowed;
+             transform: none;
+         }
+
+         .response-section {
+             margin-top: 30px;
+         }
+
+         #responseContainer {
+             background: #f8f9fa;
+             border: 1px solid #e9ecef;
+             border-radius: 8px;
+             padding: 20px;
+             min-height: 100px;
+             white-space: pre-wrap;
+             line-height: 1.6;
+         }
+
+         .loading {
+             text-align: center;
+             color: #6c757d;
+             font-style: italic;
+         }
+
+         .error {
+             color: #dc3545;
+             background: #f8d7da;
+             border-color: #f5c6cb;
+         }
+
+         .examples {
+             margin-top: 20px;
+             padding: 20px;
+             background: #e3f2fd;
+             border-radius: 8px;
+         }
+
+         .examples h3 {
+             color: #1976d2;
+             margin-bottom: 15px;
+         }
+
+         .example-question {
+             background: white;
+             padding: 10px;
+             margin: 5px 0;
+             border-radius: 5px;
+             cursor: pointer;
+             transition: background-color 0.2s;
+         }
+
+         .example-question:hover {
+             background: #f5f5f5;
+         }
+     </style>
+ </head>
+ <body>
+     <div class="container">
+         <div class="header">
+             <h1>🤖 FastAPI TextGen</h1>
+             <p>Development Interface - Ask any question!</p>
+         </div>
+
+         <div class="content">
+             <form id="questionForm">
+                 <div class="form-group">
+                     <label for="questionInput">Your Question:</label>
+                     <textarea
+                         id="questionInput"
+                         placeholder="Ask me anything... For example: 'What is artificial intelligence?' or 'Explain quantum computing in simple terms'"
+                         required
+                     ></textarea>
+                 </div>
+
+                 <button type="submit" id="askButton">Ask Question</button>
+             </form>
+
+             <div class="response-section">
+                 <label>Response:</label>
+                 <div id="responseContainer">
+                     Ready to answer your questions! Type a question above and click "Ask Question".
+                 </div>
+             </div>
+
+             <div class="examples">
+                 <h3>💡 Example Questions</h3>
+                 <div class="example-question" onclick="setQuestion('What is machine learning?')">
+                     What is machine learning?
+                 </div>
+                 <div class="example-question" onclick="setQuestion('Explain the difference between AI and machine learning')">
+                     Explain the difference between AI and machine learning
+                 </div>
+                 <div class="example-question" onclick="setQuestion('How does a neural network work?')">
+                     How does a neural network work?
+                 </div>
+                 <div class="example-question" onclick="setQuestion('What are the benefits of cloud computing?')">
+                     What are the benefits of cloud computing?
+                 </div>
+             </div>
+         </div>
+     </div>
+
+     <script>
+         const questionForm = document.getElementById('questionForm');
+         const questionInput = document.getElementById('questionInput');
+         const askButton = document.getElementById('askButton');
+         const responseContainer = document.getElementById('responseContainer');
+
+         function setQuestion(question) {
+             questionInput.value = question;
+             questionInput.focus();
+         }
+
+         questionForm.addEventListener('submit', async (e) => {
+             e.preventDefault();
+
+             const question = questionInput.value.trim();
+             if (!question) return;
+
+             // Show loading state
+             askButton.disabled = true;
+             askButton.textContent = 'Thinking...';
+             responseContainer.className = 'loading';
+             responseContainer.textContent = '🤔 Processing your question...';
+
+             try {
+                 const response = await fetch('/api/generate', {
+                     method: 'POST',
+                     headers: {
+                         'Content-Type': 'application/json',
+                     },
+                     body: JSON.stringify({ question: question }),
+                 });
+
+                 const data = await response.json();
+
+                 if (response.ok) {
+                     responseContainer.className = '';
+                     responseContainer.textContent = data.text;
+                 } else {
+                     throw new Error(data.detail || 'Something went wrong');
+                 }
+             } catch (error) {
+                 responseContainer.className = 'error';
+                 responseContainer.textContent = `Error: ${error.message}`;
+             } finally {
+                 askButton.disabled = false;
+                 askButton.textContent = 'Ask Question';
+             }
+         });
+     </script>
+ </body>
+ </html>
test_api.py ADDED
@@ -0,0 +1,86 @@
+ #!/usr/bin/env python3
+ """
+ Simple test script for the FastAPI TextGen API.
+ Run this script to test the API endpoints locally.
+ """
+
+ import requests
+ import json
+ import sys
+
+ def test_health_endpoint(base_url):
+     """Test the health check endpoint."""
+     print("🔍 Testing health endpoint...")
+     try:
+         response = requests.get(f"{base_url}/health")
+         if response.status_code == 200:
+             data = response.json()
+             print(f"✅ Health check passed: {data['message']}")
+             print(f"🔑 OpenAI configured: {data['openai_configured']}")
+             return True
+         else:
+             print(f"❌ Health check failed with status: {response.status_code}")
+             return False
+     except requests.exceptions.RequestException as e:
+         print(f"❌ Health check failed: {e}")
+         return False
+
+ def test_generate_endpoint(base_url, question):
+     """Test the generate endpoint with a question."""
+     print(f"\n💭 Testing question: '{question}'")
+     try:
+         response = requests.post(
+             f"{base_url}/api/generate",
+             json={"question": question},
+             headers={"Content-Type": "application/json"}
+         )
+
+         if response.status_code == 200:
+             data = response.json()
+             print("✅ Response received:")
+             print(f"📝 Answer: {data['text'][:200]}{'...' if len(data['text']) > 200 else ''}")
+             return True
+         else:
+             print(f"❌ Request failed with status: {response.status_code}")
+             print(f"📄 Response: {response.text}")
+             return False
+     except requests.exceptions.RequestException as e:
+         print(f"❌ Request failed: {e}")
+         return False
+
+ def main():
+     """Main test function."""
+     base_url = "http://localhost:8000"
+
+     print("🚀 FastAPI TextGen API Test Suite")
+     print("=" * 50)
+
+     # Test health endpoint
+     if not test_health_endpoint(base_url):
+         print("\n❌ Health check failed. Make sure the server is running.")
+         print("💡 Start the server with: python dev_run.py")
+         sys.exit(1)
+
+     # Test questions
+     test_questions = [
+         "What is artificial intelligence?",
+         "Explain Python programming in one sentence",
+         "What are the benefits of renewable energy?",
+     ]
+
+     print(f"\n🧪 Testing {len(test_questions)} questions...")
+
+     success_count = 0
+     for question in test_questions:
+         if test_generate_endpoint(base_url, question):
+             success_count += 1
+
+     print(f"\n📊 Test Results: {success_count}/{len(test_questions)} tests passed")
+
+     if success_count == len(test_questions):
+         print("🎉 All tests passed! Your API is working correctly.")
+     else:
+         print("⚠️ Some tests failed. Check your OpenAI API key and configuration.")
+
+ if __name__ == "__main__":
+     main()