# Quickstart: Gemini API Migration
**Feature Branch**: `006-gemini-api-migration`
**Created**: 2025-12-14
## Prerequisites
1. Python 3.9+ installed
2. Google AI API key (get one at https://aistudio.google.com/apikey)
3. Existing backend codebase
## Setup
### 1. Install Dependencies
```bash
# Remove old OpenAI dependency and add Gemini
pip uninstall openai -y
pip install google-genai
```
Or update `requirements.txt`:
```txt
# Remove this line:
# openai==1.35.13
# Add this line:
google-genai>=0.3.0
```
Then run:
```bash
pip install -r requirements.txt
```
### 2. Configure Environment
Update your `.env` file:
```bash
# Remove or comment out:
# OPENAI_API_KEY=sk-proj-xxxxx
# Add:
GEMINI_API_KEY=your-gemini-api-key-here
```
### 3. Verify Configuration
```bash
# Test that the API key is set
python -c "import os; print('GEMINI_API_KEY set:', bool(os.getenv('GEMINI_API_KEY')))"
```
## Testing the Migration
### Start the Server
```bash
uvicorn app.main:app --reload
```
### Test Chat Endpoint
```bash
curl -X POST http://localhost:8000/api/chat \
  -H "Content-Type: application/json" \
  -d '{"message": "Hello, how are you?"}'
```
Expected: JSON response with AI-generated message.
### Test Translation Endpoint
```bash
curl -X POST http://localhost:8000/api/translate \
  -H "Content-Type: application/json" \
  -d '{"content": "Hello, world!"}'
```
Expected: JSON response with Urdu translation.
### Test Personalization Endpoint
```bash
curl -X POST http://localhost:8000/api/personalize \
  -H "Content-Type: application/json" \
  -d '{
    "content": "Machine learning uses algorithms to find patterns in data.",
    "user_id": 1
  }'
```
Expected: JSON response with personalized content and adjustments made.
## Code Examples
### Using GeminiService Directly
```python
import asyncio

from app.services.gemini_service import GeminiService


async def main():
    # Initialize service
    service = GeminiService()

    # Chat response
    response = await service.get_chat_response("What is Python?")
    print(response)

    # Translation
    urdu_text = await service.translate_to_urdu("Hello, how are you?")
    print(urdu_text)

    # Personalization
    result = await service.personalize_content(
        content="Neural networks are computing systems...",
        software_level="beginner",
        hardware_level="none",
        learning_goals="learn AI basics",
    )
    print(result)


asyncio.run(main())
```

Note: `await` is only valid inside an `async` function, so the calls are wrapped in `main()` and driven with `asyncio.run()`.
### Using EmbeddingsService
```python
import asyncio

from app.services.embeddings_service import EmbeddingsService


async def main():
    # Initialize service
    embeddings = EmbeddingsService()

    # Generate embedding
    vector = await embeddings.create_embedding("Hello world")
    print(f"Embedding dimension: {len(vector)}")  # Should be 768


asyncio.run(main())
```
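Embedding vectors are typically compared with cosine similarity (e.g. when ranking search results). As an illustration only, here is a minimal pure-Python helper; `cosine_similarity` is a hypothetical name, not part of EmbeddingsService:

```python
import math


def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two equal-length vectors.

    Hypothetical helper for illustration; in production you would
    usually let the vector store (e.g. Qdrant) compute this.
    """
    if len(a) != len(b):
        raise ValueError("Vectors must have the same dimension")
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)


print(cosine_similarity([1.0, 0.0], [1.0, 0.0]))  # identical direction -> 1.0
print(cosine_similarity([1.0, 0.0], [0.0, 1.0]))  # orthogonal -> 0.0
```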
## Troubleshooting
### Error: "GEMINI_API_KEY not set"
Make sure your `.env` file has the key set and the app is loading environment variables:
```python
from dotenv import load_dotenv
load_dotenv()
```
### Error: "Model not found"
Verify you have access to the specified models:
- `gemini-2.0-flash-exp` for chat/translation/personalization
- `text-embedding-004` for embeddings
### Error: "Rate limit exceeded"
The Gemini API has rate limits. For development:
- Free tier: 15 RPM (requests per minute)
- Consider implementing retry logic with exponential backoff
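One way to sketch that retry logic, assuming the async service methods used above. This catches a bare `Exception` for brevity; in practice you would catch the specific rate-limit error the SDK raises (not named here to avoid guessing its class):

```python
import asyncio
import random


async def with_retries(coro_factory, max_attempts=5, base_delay=1.0):
    """Retry an async call with exponential backoff plus jitter.

    `coro_factory` is a zero-argument callable returning a fresh
    coroutine each attempt, e.g.:
        lambda: service.get_chat_response("Hello")
    """
    for attempt in range(max_attempts):
        try:
            return await coro_factory()
        except Exception:
            if attempt == max_attempts - 1:
                raise  # out of attempts; surface the error
            # Delay doubles each attempt: base, 2*base, 4*base, ...
            delay = base_delay * (2 ** attempt) + random.uniform(0, base_delay)
            await asyncio.sleep(delay)
```

The jitter spreads out retries so concurrent requests don't all hammer the API at the same instant.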
### Embedding Dimension Mismatch
If you see Qdrant errors about vector dimensions:
- OpenAI embeddings: 1536 dimensions
- Gemini embeddings: 768 dimensions
You may need to recreate Qdrant collections with the new dimension size (out of scope for this migration).
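In the meantime, a cheap guard catches the mismatch before an upsert reaches Qdrant. This is a sketch with a hypothetical helper name, assuming Gemini's 768-dimension embeddings:

```python
EXPECTED_DIM = 768  # Gemini text-embedding-004 output size


def check_embedding_dim(vector: list[float], expected: int = EXPECTED_DIM) -> list[float]:
    """Raise early if a vector doesn't match the collection's configured size."""
    if len(vector) != expected:
        raise ValueError(
            f"Embedding has {len(vector)} dimensions, expected {expected}; "
            "the Qdrant collection may still be configured for OpenAI's 1536."
        )
    return vector
```

Calling this right after `create_embedding` turns a confusing Qdrant error into an immediate, self-explanatory one.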
## Verification Checklist
- [ ] Server starts without errors
- [ ] Chat endpoint returns responses
- [ ] Translation endpoint works
- [ ] Personalization endpoint works
- [ ] No references to OpenAI in logs
- [ ] No `OPENAI_API_KEY` required
## Related Documentation
- [Spec](./spec.md)
- [Research](./research.md)
- [Data Model](./data-model.md)
- [Implementation Plan](./plan.md)