Implementation Tasks: Gemini API Migration
Feature Branch: 006-gemini-api-migration
Created: 2025-12-14
Status: Ready for Implementation
Summary
| Metric | Value |
|---|---|
| Total Tasks | 16 |
| Setup Phase | 3 tasks |
| Foundational Phase | 2 tasks |
| User Story 1 (Chat) | 2 tasks |
| User Story 2 (Translation) | 2 tasks |
| User Story 3 (Personalization) | 2 tasks |
| User Story 4 (Embeddings) | 2 tasks |
| Polish Phase | 3 tasks |
| Parallel Opportunities | 6 tasks marked [P] |
Phase 1: Setup
Goal: Configure environment and dependencies for Gemini API migration.
- T001 [P] Update `requirements.txt` to remove `openai==1.35.13` and add `google-genai>=0.3.0`
- T002 [P] Update `app/config.py` to replace `OPENAI_API_KEY` with `GEMINI_API_KEY` and add `GEMINI_MODEL_CHAT` and `GEMINI_MODEL_EMBEDDING` settings
- T003 [P] Update `.env` to add the `GEMINI_API_KEY` environment variable (template only, actual key provided by user)
Parallel Execution: T001, T002, T003 can all run in parallel (no dependencies).
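The T002 settings change can be sketched as below. This uses a plain dataclass over environment variables purely for illustration; the project's actual `app/config.py` may use a different settings mechanism, and the defaults shown simply echo the model names named later in this plan.

```python
import os
from dataclasses import dataclass


@dataclass(frozen=True)
class Settings:
    """Illustrative shape of the post-migration settings object."""
    # Actual key is supplied by the user via .env (T003); empty default for local runs.
    GEMINI_API_KEY: str = os.getenv("GEMINI_API_KEY", "")
    GEMINI_MODEL_CHAT: str = os.getenv("GEMINI_MODEL_CHAT", "gemini-2.0-flash-exp")
    GEMINI_MODEL_EMBEDDING: str = os.getenv("GEMINI_MODEL_EMBEDDING", "text-embedding-004")


settings = Settings()
```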
Phase 2: Foundational
Goal: Create the core GeminiService that will be used by all user stories.
Dependencies: Phase 1 must be complete.
- T004 Create `app/services/gemini_service.py` with a GeminiService class implementing `__init__`, `get_chat_response`, `translate_to_urdu`, and `personalize_content` methods per plan.md
- T005 Update `app/services/rag_service.py` to import `GeminiService` from `gemini_service` instead of `OpenAIService` from `openai_service`, and rename the variable `openai_service` to `gemini_service`
Sequential Execution: T005 depends on T004, so this phase has no parallel work.
Phase 3: User Story 1 - Chat Response Generation
Goal: Enable chat endpoint to use Gemini API for response generation.
User Story: A user sends a chat message through the application. The system uses Google Gemini API (gemini-2.0-flash-exp model) instead of OpenAI to generate a contextual response.
Independent Test: Send a POST request to /api/chat with a message and verify a Gemini-generated response is returned.
Dependencies: Phase 2 (T004) must be complete.
- T006 [US1] Update `app/routes/chat.py` to import `GeminiService` from `app.services.gemini_service` instead of `OpenAIService` from `app.services.openai_service`
- T007 [US1] Update `app/routes/chat.py` to instantiate `GeminiService()` instead of `OpenAIService()` and update all variable references from `openai_service` to `gemini_service`
Acceptance Criteria:
- Chat endpoint returns AI-generated responses from Gemini
- Conversation history is properly processed
- No references to OpenAI remain in chat.py
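The history handling behind T006/T007 hinges on a role translation between the two APIs: Gemini uses `model` where OpenAI uses `assistant`, and system messages move out of the history into a system instruction. A minimal sketch of that mapping, assuming OpenAI-style `{"role", "content"}` dicts (the helper name is illustrative, not part of the plan):

```python
from typing import List


def map_history_roles(history: List[dict]) -> List[dict]:
    """Convert OpenAI-style conversation roles to Gemini-style roles.

    - "assistant" becomes "model"
    - "system" messages are dropped (Gemini takes them via system_instruction)
    - "user" messages pass through unchanged
    """
    mapped = []
    for msg in history:
        if msg["role"] == "system":
            continue
        role = "model" if msg["role"] == "assistant" else msg["role"]
        mapped.append({"role": role, "content": msg["content"]})
    return mapped
```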
Phase 4: User Story 2 - Urdu Translation
Goal: Enable translation endpoint to use Gemini API for English to Urdu translation.
User Story: A user submits English content for translation to Urdu. The system uses Google Gemini API to perform the translation instead of OpenAI GPT-4.
Independent Test: Send a POST request to /api/translate with English content and verify accurate Urdu translation is returned.
Dependencies: Phase 2 (T004) must be complete.
- T008 [P] [US2] Update `app/routes/translate.py` to import `GeminiService` from `app.services.gemini_service` instead of `OpenAIService` from `app.services.openai_service`
- T009 [US2] Update `app/routes/translate.py` to instantiate `GeminiService()` instead of `OpenAIService()` and update all variable references from `openai_service` to `gemini_service`
Acceptance Criteria:
- Translation endpoint returns Urdu translations from Gemini
- Only translated text is returned without additional explanations
- No references to OpenAI remain in translate.py
Parallel Note: T008 can run in parallel with T006 (different files).
Phase 5: User Story 3 - Content Personalization
Goal: Enable personalization endpoint to use Gemini API for content adaptation.
User Story: A user requests content personalization based on their background profile. The system uses Google Gemini API to adapt content complexity instead of OpenAI GPT-4.
Independent Test: Send a POST request to /api/personalize with content and user_id, and verify the JSON response contains `personalized_content` and `adjustments_made`.
Dependencies: Phase 2 (T004) must be complete.
- T010 [P] [US3] Update `app/routes/personalize.py` to import `GeminiService` from `app.services.gemini_service` instead of `OpenAIService` from `app.services.openai_service`
- T011 [US3] Update `app/routes/personalize.py` to instantiate `GeminiService()` instead of `OpenAIService()` and update all variable references from `openai_service` to `gemini_service`
Acceptance Criteria:
- Personalization endpoint returns adapted content from Gemini
- JSON response includes `personalized_content` and `adjustments_made` fields
- Beginner/intermediate/advanced personalization rules are applied
- No references to OpenAI remain in personalize.py
Parallel Note: T010 can run in parallel with T006 and T008 (different files).
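Because the personalization call requests `application/json` output, the route can validate the two promised fields before returning them to the client. A defensive-parsing sketch (the helper name is illustrative):

```python
import json


def parse_personalization(raw: str) -> dict:
    """Parse the model's JSON output and check the contract promised by US3."""
    result = json.loads(raw)
    for key in ("personalized_content", "adjustments_made"):
        if key not in result:
            raise ValueError(f"personalization response missing field: {key}")
    return result
```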
Phase 6: User Story 4 - Embedding Generation
Goal: Enable embeddings service to use Gemini's text-embedding-004 model.
User Story: The system generates embeddings for text content using Google Gemini's text-embedding-004 model instead of OpenAI embeddings for RAG operations.
Independent Test: Call `EmbeddingsService().create_embedding("test text")` and verify a valid 768-dimensional vector is returned.
Dependencies: Phase 1 (T002 for config) must be complete.
- T012 [P] [US4] Rewrite `app/services/embeddings_service.py` to use `google.genai.Client` instead of the OpenAI client, using the `text-embedding-004` model per plan.md
- T013 [US4] Verify `app/services/embeddings_service.py` uses the async pattern with `client.aio.models.embed_content()` and returns `result.embeddings[0].values`
Acceptance Criteria:
- Embeddings service generates vectors using Gemini text-embedding-004
- Embedding dimensions are 768 (documented change from 1536)
- Async pattern is used consistently
- No references to OpenAI remain in embeddings_service.py
Parallel Note: T012 can run in parallel with T006, T008, T010 (different files).
Phase 7: Polish & Cleanup
Goal: Remove OpenAI artifacts and verify complete migration.
Dependencies: All previous phases must be complete.
- T014 Delete `app/services/openai_service.py` (file no longer needed after migration)
- T015 Verify no remaining references to `openai` or `OpenAI` exist in the codebase using a grep search (excluding history/specs directories)
- T016 Verify the server starts successfully with only `GEMINI_API_KEY` configured (no `OPENAI_API_KEY` required)
Acceptance Criteria:
- openai_service.py is deleted
- No OpenAI imports or references in active code
- Server starts without errors
- All endpoints respond correctly
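The T015 check can be run with grep, or equivalently with a small Python scan. The sketch below shows the idea on any directory tree, with the exclusion set taken from the task; the function name is illustrative.

```python
from pathlib import Path
from typing import FrozenSet, List

EXCLUDED_DIRS = frozenset({"specs", "history"})


def find_openai_refs(root, exclude: FrozenSet[str] = EXCLUDED_DIRS) -> List[Path]:
    """Return .py files under root that still mention 'openai' (case-insensitive),
    skipping any path that passes through an excluded directory."""
    hits = []
    for path in Path(root).rglob("*.py"):
        if exclude & set(path.parts):
            continue
        if "openai" in path.read_text(encoding="utf-8").lower():
            hits.append(path)
    return hits
```

An empty result from `find_openai_refs(".")` corresponds to T015 passing.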
Dependencies Graph
```
Phase 1 (Setup)
├── T001 [P] requirements.txt
├── T002 [P] config.py
└── T003 [P] .env
        │
        ▼
Phase 2 (Foundational)
├── T004 gemini_service.py
└── T005 rag_service.py (depends on T004)
        │
        ├────────────────┬────────────────┬────────────────┐
        ▼                ▼                ▼                ▼
Phase 3 (US1)    Phase 4 (US2)    Phase 5 (US3)    Phase 6 (US4)
├── T006 chat.py ├── T008 [P]     ├── T010 [P]     ├── T012 [P]
└── T007 chat.py └── T009         └── T011         └── T013
        │                │                │                │
        └────────────────┴────────────────┴────────────────┘
                                 │
                                 ▼
Phase 7 (Polish)
├── T014 delete openai_service.py
├── T015 verify no OpenAI refs
└── T016 verify server starts
```
Parallel Execution Opportunities
Maximum Parallelism per Phase:
| Phase | Parallel Tasks | Sequential Tasks |
|---|---|---|
| Phase 1 | T001, T002, T003 (all) | None |
| Phase 2 | None | T004 → T005 |
| Phases 3-6 | T006/T008/T010/T012 (first task of each) | Second task of each depends on first |
| Phase 7 | None | T014 → T015 → T016 |
Recommended Parallel Groups:
- Group A: T001, T002, T003 (setup - no dependencies)
- Group B: T006, T008, T010, T012 (route/service updates after T004 - different files)
Implementation Strategy
MVP Scope (Recommended)
For fastest time-to-value, implement in this order:
- Phase 1: Setup (T001-T003) - Required for all
- Phase 2: GeminiService (T004-T005) - Core dependency
- Phase 3: Chat (T006-T007) - Primary user interaction
- Verify: Test chat endpoint works
This gives a working chat feature with Gemini backend. Then continue with remaining stories.
Full Implementation Order
```
T001 ──┬── T004 ─── T005 ──┬── T006 ─── T007 ──┬── T014
T002 ──┤                   ├── T008 ─── T009 ──┤
T003 ──┘                   ├── T010 ─── T011 ──┼── T015
                           └── T012 ─── T013 ──┴── T016
```
File Modification Summary
| File | Tasks | Action |
|---|---|---|
| `requirements.txt` | T001 | Modify |
| `app/config.py` | T002 | Modify |
| `.env` | T003 | Modify |
| `app/services/gemini_service.py` | T004 | Create |
| `app/services/rag_service.py` | T005 | Modify |
| `app/routes/chat.py` | T006, T007 | Modify |
| `app/routes/translate.py` | T008, T009 | Modify |
| `app/routes/personalize.py` | T010, T011 | Modify |
| `app/services/embeddings_service.py` | T012, T013 | Rewrite |
| `app/services/openai_service.py` | T014 | Delete |
Related Artifacts
| Artifact | Path |
|---|---|
| Specification | specs/006-gemini-api-migration/spec.md |
| Research | specs/006-gemini-api-migration/research.md |
| Data Model | specs/006-gemini-api-migration/data-model.md |
| Implementation Plan | specs/006-gemini-api-migration/plan.md |
| Quickstart Guide | specs/006-gemini-api-migration/quickstart.md |
Task Details
T004: Create GeminiService
File: app/services/gemini_service.py
```python
from google import genai
from google.genai import types
from app.config import settings
from typing import List, Optional
import json


class GeminiService:
    def __init__(self):
        self.client = genai.Client(api_key=settings.GEMINI_API_KEY)
        self.model = settings.GEMINI_MODEL_CHAT

    async def get_chat_response(self, prompt: str, history: Optional[List[dict]] = None) -> str:
        """Generate a chat response using Gemini."""
        contents = []
        if history:
            for msg in history:
                # Gemini uses "model" where OpenAI uses "assistant"; system
                # messages belong in system_instruction, not in the history.
                role = "model" if msg["role"] == "assistant" else msg["role"]
                if role == "system":
                    continue
                contents.append(
                    types.Content(
                        role=role,
                        parts=[types.Part(text=msg["content"])]
                    )
                )
        contents.append(
            types.Content(
                role="user",
                parts=[types.Part(text=prompt)]
            )
        )
        response = await self.client.aio.models.generate_content(
            model=self.model,
            contents=contents
        )
        return response.text

    async def translate_to_urdu(self, content: str) -> str:
        """Translate English content to Urdu using Gemini."""
        system_instruction = (
            "You are a professional translator. Translate the following "
            "English text to Urdu. Provide only the Urdu translation without "
            "any explanation or additional text."
        )
        response = await self.client.aio.models.generate_content(
            model=self.model,
            contents=content,
            config=types.GenerateContentConfig(
                system_instruction=system_instruction
            )
        )
        return response.text

    async def personalize_content(
        self,
        content: str,
        software_level: str,
        hardware_level: str,
        learning_goals: str
    ) -> dict:
        """Personalize content based on the user's background."""
        system_instruction = f"""You are an expert educational content adapter...
[Full prompt from plan.md]"""
        response = await self.client.aio.models.generate_content(
            model=self.model,
            contents=content,
            config=types.GenerateContentConfig(
                system_instruction=system_instruction,
                response_mime_type="application/json"
            )
        )
        result = json.loads(response.text)
        return result
```
T012: Rewrite EmbeddingsService
File: app/services/embeddings_service.py
```python
from google import genai
from google.genai import types
from app.config import settings
from typing import List


class EmbeddingsService:
    def __init__(self):
        self.client = genai.Client(api_key=settings.GEMINI_API_KEY)
        self.model = settings.GEMINI_MODEL_EMBEDDING

    async def create_embedding(self, text: str) -> List[float]:
        """Generate a 768-dimensional embedding for text using Gemini."""
        # Newlines can degrade embedding quality; normalize them to spaces.
        text = text.replace("\n", " ")
        result = await self.client.aio.models.embed_content(
            model=self.model,
            contents=text,
            config=types.EmbedContentConfig(
                task_type="RETRIEVAL_DOCUMENT"
            )
        )
        return result.embeddings[0].values
```