# RagBot API - Getting Started (5 Minutes)
Follow these steps to get your API running in 5 minutes:
---
## Prerequisites
Before starting, ensure you have:
1. **Python 3.11+** installed
```powershell
python --version
```
2. **A free API key** from one of:
- [Groq](https://console.groq.com/keys) - Recommended (fast, free LLaMA 3.3-70B)
- [Google Gemini](https://aistudio.google.com/app/apikey) - Alternative
3. **RagBot dependencies installed**
```powershell
# From RagBot root directory
pip install -r requirements.txt
```
4. **`.env` configured** in project root with your API key:
```
GROQ_API_KEY=gsk_...
LLM_PROVIDER=groq
```
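If you later script around the same configuration, the two variables above can be read with the Python standard library. This is only a sketch: the `gemini` provider value and its `GOOGLE_API_KEY` variable name are assumptions for illustration, not taken from RagBot's code; only `LLM_PROVIDER=groq` with `GROQ_API_KEY` appears above.

```python
import os

def load_llm_config() -> dict:
    """Read the provider settings from the environment (.env already loaded)."""
    provider = os.environ.get("LLM_PROVIDER", "groq")
    # Key variable per provider; "gemini"/GOOGLE_API_KEY is an assumed name.
    key_var = {"groq": "GROQ_API_KEY", "gemini": "GOOGLE_API_KEY"}.get(provider)
    api_key = os.environ.get(key_var, "") if key_var else ""
    if not api_key:
        raise RuntimeError(f"Missing API key for provider '{provider}'")
    return {"provider": provider, "api_key": api_key}
```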
---
## Step 1: Install API Dependencies (30 seconds)
```powershell
# Navigate to api directory
cd C:\Users\admin\OneDrive\Documents\GitHub\RagBot\api
# Install FastAPI and dependencies
pip install -r requirements.txt
```
**Expected output:**
```
Successfully installed fastapi-0.109.0 uvicorn-0.27.0 ...
```
---
## Step 2: Start the API (10 seconds)
```powershell
# Make sure you're in the api/ directory
python -m uvicorn app.main:app --reload --port 8000
```
**Expected output:**
```
INFO: Started server process
INFO: Waiting for application startup.
Starting RagBot API Server
RagBot service initialized successfully
API server ready to accept requests
INFO: Application startup complete.
INFO: Uvicorn running on http://0.0.0.0:8000
```
**Wait 10-30 seconds for initialization** (loading the vector store)
---
## Step 3: Verify It's Working (30 seconds)
### Option A: Use the Test Script
```powershell
# In a NEW PowerShell window (keep API running)
cd C:\Users\admin\OneDrive\Documents\GitHub\RagBot\api
.\test_api.ps1
```
### Option B: Manual Test
```powershell
# Health check ("curl" in Windows PowerShell is an alias for Invoke-WebRequest;
# use curl.exe if you want real curl output)
curl http://localhost:8000/api/v1/health

# Get example analysis
curl http://localhost:8000/api/v1/example
```
### Option C: Browser
Open: http://localhost:8000/docs
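If you are scripting the startup check, the 10-30 second wait can be automated by polling the health endpoint shown above until it answers. A minimal Python sketch (stdlib only); the JSON `status` field name is an assumption, not confirmed from the API's response schema:

```python
import json
import time
import urllib.request

HEALTH_URL = "http://localhost:8000/api/v1/health"

def is_healthy(payload: dict) -> bool:
    # Assumes the health endpoint returns a JSON object with a "status" field.
    return payload.get("status") in ("ok", "healthy")

def wait_for_api(url: str = HEALTH_URL, timeout: float = 60.0) -> bool:
    """Poll the health endpoint until it reports healthy, or give up."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with urllib.request.urlopen(url, timeout=5) as resp:
                if is_healthy(json.load(resp)):
                    return True
        except OSError:
            pass  # connection refused while the server is still starting
        time.sleep(2)
    return False

# Usage (requires the API running):
#   assert wait_for_api(), "API did not come up in time"
```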
---
## Step 4: Test Your First Request (1 minute)
### Test Natural Language Analysis
```powershell
# PowerShell
$body = @{
    message = "My glucose is 185 and HbA1c is 8.2"
    patient_context = @{
        age = 52
        gender = "male"
    }
} | ConvertTo-Json
Invoke-RestMethod -Uri "http://localhost:8000/api/v1/analyze/natural" `
-Method Post -Body $body -ContentType "application/json"
```
**Expected:** JSON response with disease prediction, safety alerts, recommendations
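The same request from Python, using only the standard library. The payload shape mirrors the PowerShell example above; the response fields are whatever the endpoint returns, so this sketch just prints the raw JSON:

```python
import json
import urllib.request

API_URL = "http://localhost:8000/api/v1/analyze/natural"

def build_payload(message: str, age: int, gender: str) -> dict:
    # Same body shape as the PowerShell example above.
    return {"message": message, "patient_context": {"age": age, "gender": gender}}

def analyze(message: str, age: int, gender: str) -> dict:
    body = json.dumps(build_payload(message, age, gender)).encode("utf-8")
    req = urllib.request.Request(
        API_URL, data=body, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req, timeout=120) as resp:
        return json.load(resp)

# Usage (requires the API running locally):
#   result = analyze("My glucose is 185 and HbA1c is 8.2", 52, "male")
#   print(json.dumps(result, indent=2))
```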
---
## Step 5: Integrate with Your Backend (2 minutes)
### Your Backend Code (Node.js/Express Example)
```javascript
// backend/routes/analysis.js
const axios = require('axios');

app.post('/api/analyze', async (req, res) => {
  try {
    // Get user input from your frontend
    const { biomarkerText, patientInfo } = req.body;

    // Call RagBot API on localhost
    const response = await axios.post('http://localhost:8000/api/v1/analyze/natural', {
      message: biomarkerText,
      patient_context: patientInfo
    });

    // Send results to your frontend
    res.json(response.data);
  } catch (error) {
    res.status(500).json({ error: error.message });
  }
});
```
### Your Frontend Code (React Example)
```javascript
// frontend/components/BiomarkerAnalysis.jsx
async function analyzeBiomarkers(userInput) {
  // Call YOUR backend (which calls the RagBot API)
  const response = await fetch('/api/analyze', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      biomarkerText: userInput,
      patientInfo: { age: 52, gender: 'male' }
    })
  });

  const result = await response.json();

  // Display results
  console.log('Disease:', result.prediction.disease);
  console.log('Confidence:', result.prediction.confidence);
  console.log('Summary:', result.conversational_summary);

  return result;
}
```
---
## Quick Reference
### API Endpoints You'll Use Most:
1. **Natural Language (Recommended)**
```
POST /api/v1/analyze/natural
Body: {"message": "glucose 185, HbA1c 8.2"}
```
2. **Structured (If you have exact values)**
```
POST /api/v1/analyze/structured
Body: {"biomarkers": {"Glucose": 185, "HbA1c": 8.2}}
```
3. **Health Check**
```
GET /api/v1/health
```
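The structured endpoint has no full example elsewhere in this guide, so here is a Python sketch that posts exact biomarker values. Only the request body shape is taken from the quick reference above; the response schema is not assumed:

```python
import json
import urllib.request

STRUCTURED_URL = "http://localhost:8000/api/v1/analyze/structured"

def structured_payload(biomarkers: dict) -> dict:
    # Body shape from the quick reference: {"biomarkers": {...}}
    return {"biomarkers": biomarkers}

def analyze_structured(biomarkers: dict) -> dict:
    body = json.dumps(structured_payload(biomarkers)).encode("utf-8")
    req = urllib.request.Request(
        STRUCTURED_URL, data=body, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req, timeout=120) as resp:
        return json.load(resp)

# Usage (requires the API running locally):
#   print(analyze_structured({"Glucose": 185, "HbA1c": 8.2}))
```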
---
## Troubleshooting
### Issue: "Connection refused"
**Problem:** Ollama is not running (applies only if you configured a local Ollama provider instead of Groq/Gemini)
**Fix:**
```powershell
ollama serve
```
### Issue: "Vector store not loaded"
**Problem:** Missing vector database
**Fix:**
```powershell
cd C:\Users\admin\OneDrive\Documents\GitHub\RagBot
python scripts/setup_embeddings.py
```
### Issue: "Port 8000 in use"
**Problem:** Another app using port 8000
**Fix:**
```powershell
# Use different port
python -m uvicorn app.main:app --reload --port 8001
```
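To see which process holds port 8000 on Windows, `netstat -ano | findstr :8000` works. Alternatively, a small Python sketch (not part of RagBot) can check whether a port is free and suggest the next open one before starting uvicorn:

```python
import socket

def port_is_free(port: int, host: str = "127.0.0.1") -> bool:
    """Try to bind the port; success means nothing is listening on it."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        try:
            s.bind((host, port))
            return True
        except OSError:
            return False

def first_free_port(start: int = 8000, tries: int = 20) -> int:
    """Scan upward from `start` and return the first unbound port."""
    for port in range(start, start + tries):
        if port_is_free(port):
            return port
    raise RuntimeError("no free port found")

if __name__ == "__main__":
    print(f"Start uvicorn with --port {first_free_port()}")
```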
---
## Next Steps
1. **Read the docs:** http://localhost:8000/docs
2. **Try all endpoints:** See [README.md](README.md)
3. **Integrate:** Connect your frontend to your backend
4. **Deploy:** Use Docker when ready ([docker-compose.yml](docker-compose.yml))
---
## You're Done!
Your RagBot is now accessible via REST API at `http://localhost:8000`
**Test it right now:**
```powershell
curl http://localhost:8000/api/v1/health
```
---
**Need Help?**
- Full docs: [README.md](README.md)
- Quick reference: [QUICK_REFERENCE.md](QUICK_REFERENCE.md)
- Implementation details: [IMPLEMENTATION_COMPLETE.md](IMPLEMENTATION_COMPLETE.md)
**Have fun!**
|