# Quickstart: Gemini API Migration
**Feature Branch**: `006-gemini-api-migration`
**Created**: 2025-12-14
## Prerequisites
1. Python 3.9+ installed
2. Google AI API key (get one at https://aistudio.google.com/apikey)
3. Existing backend codebase
## Setup
### 1. Install Dependencies
```bash
# Remove old OpenAI dependency and add Gemini
pip uninstall openai -y
pip install google-genai
```
Or update `requirements.txt`:
```txt
# Remove this line:
# openai==1.35.13
# Add this line:
google-genai>=0.3.0
```
Then run:
```bash
pip install -r requirements.txt
```
### 2. Configure Environment
Update your `.env` file:
```bash
# Remove or comment out:
# OPENAI_API_KEY=sk-proj-xxxxx
# Add:
GEMINI_API_KEY=your-gemini-api-key-here
```
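The application is expected to read this variable at startup. A minimal fail-fast check looks like the following (the function name is illustrative, not part of the codebase):

```python
import os

def require_api_key() -> str:
    """Return the Gemini API key, failing fast if it is missing."""
    api_key = os.getenv("GEMINI_API_KEY")
    if not api_key:
        raise RuntimeError("GEMINI_API_KEY is not set; add it to your .env file")
    return api_key
```

Failing at startup is preferable to a confusing authentication error on the first API call.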
### 3. Verify Configuration
```bash
# Test that the API key is set
python -c "import os; print('GEMINI_API_KEY set:', bool(os.getenv('GEMINI_API_KEY')))"
```
## Testing the Migration
### Start the Server
```bash
uvicorn app.main:app --reload
```
### Test Chat Endpoint
```bash
curl -X POST http://localhost:8000/api/chat \
  -H "Content-Type: application/json" \
  -d '{"message": "Hello, how are you?"}'
```
Expected: JSON response with AI-generated message.
### Test Translation Endpoint
```bash
curl -X POST http://localhost:8000/api/translate \
  -H "Content-Type: application/json" \
  -d '{"content": "Hello, world!"}'
```
Expected: JSON response with Urdu translation.
### Test Personalization Endpoint
```bash
curl -X POST http://localhost:8000/api/personalize \
  -H "Content-Type: application/json" \
  -d '{
    "content": "Machine learning uses algorithms to find patterns in data.",
    "user_id": 1
  }'
```
Expected: JSON response with personalized content and adjustments made.
## Code Examples
### Using GeminiService Directly
```python
import asyncio

from app.services.gemini_service import GeminiService

async def main():
    # Initialize service
    service = GeminiService()

    # Chat response
    response = await service.get_chat_response("What is Python?")
    print(response)

    # Translation
    urdu_text = await service.translate_to_urdu("Hello, how are you?")
    print(urdu_text)

    # Personalization
    result = await service.personalize_content(
        content="Neural networks are computing systems...",
        software_level="beginner",
        hardware_level="none",
        learning_goals="learn AI basics",
    )
    print(result)

asyncio.run(main())
```
### Using EmbeddingsService
```python
import asyncio

from app.services.embeddings_service import EmbeddingsService

async def main():
    # Initialize service
    embeddings = EmbeddingsService()

    # Generate embedding
    vector = await embeddings.create_embedding("Hello world")
    print(f"Embedding dimension: {len(vector)}")  # Should be 768

asyncio.run(main())
```
## Troubleshooting
### Error: "GEMINI_API_KEY not set"
Make sure your `.env` file has the key set and the app is loading environment variables:
```python
from dotenv import load_dotenv
load_dotenv()
```
### Error: "Model not found"
Verify you have access to the specified models:
- `gemini-2.0-flash-exp` for chat/translation/personalization
- `text-embedding-004` for embeddings
### Error: "Rate limit exceeded"
The Gemini API has rate limits. For development:
- Free tier: 15 RPM (requests per minute)
- Consider implementing retry logic with exponential backoff
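A generic retry helper with exponential backoff might look like this sketch. It catches `Exception` broadly for illustration; in practice you would catch the specific rate-limit error raised by the `google-genai` SDK:

```python
import asyncio
import random

async def with_backoff(call, max_retries=5, base_delay=1.0):
    """Retry an async call, doubling the delay each attempt (plus jitter)."""
    for attempt in range(max_retries):
        try:
            return await call()
        except Exception:
            if attempt == max_retries - 1:
                raise  # Out of retries: surface the error to the caller
            delay = base_delay * (2 ** attempt) + random.uniform(0, 0.5)
            await asyncio.sleep(delay)
```

Usage: `await with_backoff(lambda: service.get_chat_response("Hi"))`.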
### Embedding Dimension Mismatch
If you see Qdrant errors about vector dimensions:
- OpenAI embeddings: 1536 dimensions
- Gemini embeddings: 768 dimensions
You may need to recreate Qdrant collections with the new dimension size (out of scope for this migration).
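A lightweight guard before upserting vectors can surface the mismatch early, with a clearer message than Qdrant's error. `EXPECTED_DIM` and the function name are illustrative:

```python
EXPECTED_DIM = 768  # text-embedding-004 output size (OpenAI ada-002 was 1536)

def check_embedding_dim(vector: list[float]) -> list[float]:
    """Raise before upserting a vector whose size doesn't match the collection."""
    if len(vector) != EXPECTED_DIM:
        raise ValueError(
            f"Embedding has {len(vector)} dimensions, expected {EXPECTED_DIM}; "
            "recreate the Qdrant collection with the new size"
        )
    return vector
```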
## Verification Checklist
- [ ] Server starts without errors
- [ ] Chat endpoint returns responses
- [ ] Translation endpoint works
- [ ] Personalization endpoint works
- [ ] No references to OpenAI in logs
- [ ] No `OPENAI_API_KEY` required
## Related Documentation
- [Spec](./spec.md)
- [Research](./research.md)
- [Data Model](./data-model.md)
- [Implementation Plan](./plan.md)