Quickstart: Gemini API Migration
Feature Branch: 006-gemini-api-migration
Created: 2025-12-14
Prerequisites
- Python 3.9+ installed
- Google AI API key (get one at https://aistudio.google.com/apikey)
- Existing backend codebase
Setup
1. Install Dependencies
```bash
# Remove old OpenAI dependency and add Gemini
pip uninstall openai -y
pip install google-genai
```
Or update requirements.txt:
```text
# Remove this line:
# openai==1.35.13

# Add this line:
google-genai>=0.3.0
```
Then run:
```bash
pip install -r requirements.txt
```
2. Configure Environment
Update your .env file:
```bash
# Remove or comment out:
# OPENAI_API_KEY=sk-proj-xxxxx

# Add:
GEMINI_API_KEY=your-gemini-api-key-here
```
3. Verify Configuration
```bash
# Test that the API key is set
python -c "import os; print('GEMINI_API_KEY set:', bool(os.getenv('GEMINI_API_KEY')))"
```
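If you prefer to fail fast at application startup rather than at the first API call, a small guard works. This is a sketch; `require_env` is a hypothetical helper introduced here, not part of the codebase:

```python
import os


def require_env(name: str) -> str:
    """Return the value of an environment variable, or raise with a helpful message."""
    value = os.getenv(name)
    if not value:
        raise RuntimeError(
            f"{name} is not set. Add it to your .env file and ensure load_dotenv() runs at startup."
        )
    return value


# Example: call once during application startup
# api_key = require_env("GEMINI_API_KEY")
```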
Testing the Migration
Start the Server
```bash
uvicorn app.main:app --reload
```
Test Chat Endpoint
```bash
curl -X POST http://localhost:8000/api/chat \
  -H "Content-Type: application/json" \
  -d '{"message": "Hello, how are you?"}'
```
Expected: JSON response with AI-generated message.
Test Translation Endpoint
```bash
curl -X POST http://localhost:8000/api/translate \
  -H "Content-Type: application/json" \
  -d '{"content": "Hello, world!"}'
```
Expected: JSON response with Urdu translation.
Test Personalization Endpoint
```bash
curl -X POST http://localhost:8000/api/personalize \
  -H "Content-Type: application/json" \
  -d '{
    "content": "Machine learning uses algorithms to find patterns in data.",
    "user_id": 1
  }'
```
Expected: JSON response with personalized content and adjustments made.
Code Examples
Using GeminiService Directly
```python
import asyncio

from app.services.gemini_service import GeminiService


async def main():
    # Initialize service
    service = GeminiService()

    # Chat response
    response = await service.get_chat_response("What is Python?")
    print(response)

    # Translation
    urdu_text = await service.translate_to_urdu("Hello, how are you?")
    print(urdu_text)

    # Personalization
    result = await service.personalize_content(
        content="Neural networks are computing systems...",
        software_level="beginner",
        hardware_level="none",
        learning_goals="learn AI basics",
    )
    print(result)


asyncio.run(main())
```
Using EmbeddingsService
```python
import asyncio

from app.services.embeddings_service import EmbeddingsService


async def main():
    # Initialize service
    embeddings = EmbeddingsService()

    # Generate embedding
    vector = await embeddings.create_embedding("Hello world")
    print(f"Embedding dimension: {len(vector)}")  # Should be 768


asyncio.run(main())
```
Troubleshooting
Error: "GEMINI_API_KEY not set"
Make sure your .env file has the key set and the app is loading environment variables:
```python
from dotenv import load_dotenv

load_dotenv()  # Must run before anything reads GEMINI_API_KEY
```
Error: "Model not found"
Verify you have access to the specified models:
- gemini-2.0-flash-exp for chat/translation/personalization
- text-embedding-004 for embeddings
Error: "Rate limit exceeded"
The Gemini API has rate limits. For development:
- Free tier: 15 RPM (requests per minute)
- Consider implementing retry logic with exponential backoff
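One way to implement that retry logic is a small async wrapper with exponential backoff and jitter. A minimal sketch; catching bare `Exception` is a placeholder, and in practice you would narrow it to whichever rate-limit exception the google-genai SDK actually raises:

```python
import asyncio
import random


async def with_backoff(call, max_attempts: int = 4, base_delay: float = 1.0):
    """Retry an async callable, doubling the delay after each failure."""
    for attempt in range(max_attempts):
        try:
            return await call()
        except Exception:  # narrow to the SDK's rate-limit exception in real code
            if attempt == max_attempts - 1:
                raise
            # Sleep 1s, 2s, 4s, ... plus jitter before retrying
            delay = base_delay * (2 ** attempt) + random.uniform(0, base_delay)
            await asyncio.sleep(delay)
```

Usage would look like `result = await with_backoff(lambda: service.get_chat_response(msg))`.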
Embedding Dimension Mismatch
If you see Qdrant errors about vector dimensions:
- OpenAI embeddings: 1536 dimensions
- Gemini embeddings: 768 dimensions
You may need to recreate Qdrant collections with the new dimension size (out of scope for this migration).
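In the meantime, a cheap guard before any upsert surfaces the mismatch early instead of as an opaque Qdrant error. This sketch assumes vectors are plain lists of floats; `GEMINI_EMBEDDING_DIM` and `check_embedding_dim` are illustrative names introduced here, not existing code:

```python
GEMINI_EMBEDDING_DIM = 768  # text-embedding-004 output size (OpenAI's was 1536)


def check_embedding_dim(vector: list[float], expected: int = GEMINI_EMBEDDING_DIM) -> list[float]:
    """Raise early if a vector's dimension doesn't match the collection's."""
    if len(vector) != expected:
        raise ValueError(
            f"Embedding has {len(vector)} dimensions, expected {expected}; "
            "the Qdrant collection may have been created for a different model."
        )
    return vector
```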
Verification Checklist
- Server starts without errors
- Chat endpoint returns responses
- Translation endpoint works
- Personalization endpoint works
- No references to OpenAI in logs
- No OPENAI_API_KEY required