# Data Model: Gemini API Migration
**Feature Branch**: `006-gemini-api-migration`
**Created**: 2025-12-14
## Overview
This migration does not introduce new database entities. It replaces the AI service layer implementation while maintaining existing data contracts.
## Service Classes
### GeminiService (replaces OpenAIService)
**File**: `app/services/gemini_service.py`
| Attribute | Type | Description |
|-----------|------|-------------|
| client | genai.Client | Google GenAI client instance |
| model | str | Default model: "gemini-2.0-flash-exp" |
**Methods** (signature-compatible with OpenAIService):
| Method | Parameters | Return Type | Description |
|--------|------------|-------------|-------------|
| get_chat_response | prompt: str, history: Optional[List[dict]] = None | str | Generate chat response using Gemini |
| translate_to_urdu | content: str | str | Translate English to Urdu |
| personalize_content | content: str, software_level: str, hardware_level: str, learning_goals: str | dict | Personalize content based on user profile |
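A minimal sketch of the service shape, assuming the `google-genai` SDK (`pip install google-genai`); the import is guarded so the outline reads even where the package is absent, and only `get_chat_response` is shown:

```python
# Sketch of GeminiService; the SDK import is guarded because this is an
# illustration, not the production module.
try:
    from google import genai
    from google.genai import types
except ImportError:  # pragma: no cover - sketch only
    genai = types = None


class GeminiService:
    def __init__(self, api_key: str, model: str = "gemini-2.0-flash-exp"):
        self.client = genai.Client(api_key=api_key)
        self.model = model

    def get_chat_response(self, prompt: str, history: list = None) -> str:
        # Assumes `history` was already converted to Gemini Content objects
        # (system messages moved into system_instruction; see the
        # Message Format Mapping section).
        contents = list(history or [])
        contents.append(types.Content(role="user", parts=[types.Part(text=prompt)]))
        response = self.client.models.generate_content(
            model=self.model, contents=contents
        )
        return response.text
```

`translate_to_urdu` and `personalize_content` follow the same call shape with task-specific prompts.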
### EmbeddingsService (modified)
**File**: `app/services/embeddings_service.py`
| Attribute | Type | Description |
|-----------|------|-------------|
| client | genai.Client | Google GenAI client instance |
| model | str | "text-embedding-004" |
**Methods**:
| Method | Parameters | Return Type | Description |
|--------|------------|-------------|-------------|
| create_embedding | text: str | List[float] | Generate embedding vector for text |
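The embedding call can be sketched as follows, again assuming the `google-genai` SDK; `client` is an already-constructed `genai.Client`:

```python
# Sketch of the modified create_embedding; guarded import, illustration only.
try:
    from google import genai
except ImportError:  # pragma: no cover - sketch only
    genai = None

EMBEDDING_MODEL = "text-embedding-004"


def create_embedding(client, text: str) -> list:
    # embed_content returns a response whose .embeddings entries each
    # expose a 768-dimensional .values list for this model.
    result = client.models.embed_content(model=EMBEDDING_MODEL, contents=text)
    return result.embeddings[0].values
```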
## Configuration Changes
### Settings Class (app/config.py)
**Removed Fields**:
- `OPENAI_API_KEY: str`
- `OPENAI_MODEL_CHAT: str`
- `OPENAI_MODEL_EMBEDDING: str`
**Added Fields**:
- `GEMINI_API_KEY: str`
- `GEMINI_MODEL_CHAT: str = "gemini-2.0-flash-exp"`
- `GEMINI_MODEL_EMBEDDING: str = "text-embedding-004"`
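The resulting settings shape can be sketched with the standard library; the real `app/config.py` presumably uses a pydantic settings class reading the same variables:

```python
import os
from dataclasses import dataclass, field


# Stdlib sketch of the updated Settings fields; GEMINI_API_KEY is required
# at runtime and read from the environment here for illustration.
@dataclass
class Settings:
    GEMINI_API_KEY: str = field(
        default_factory=lambda: os.environ.get("GEMINI_API_KEY", "")
    )
    GEMINI_MODEL_CHAT: str = "gemini-2.0-flash-exp"
    GEMINI_MODEL_EMBEDDING: str = "text-embedding-004"
```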
## Environment Variables
| Variable | Required | Description |
|----------|----------|-------------|
| GEMINI_API_KEY | Yes | Google AI API key for Gemini services |
| OPENAI_API_KEY | Removed | No longer required |
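An illustrative `.env` fragment reflecting the table above (the key value is a placeholder):

```shell
# .env
GEMINI_API_KEY=your-google-ai-api-key
# OPENAI_API_KEY is no longer read; delete any existing entry.
```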
## Data Flow (Unchanged)
```
Request → Route → Service (GeminiService) → Google Gemini API → Response
                      ↓
              EmbeddingsService → Gemini text-embedding-004
                      ↓
              Qdrant (unchanged)
```
## Embedding Dimension Change
| Service | Model | Dimensions |
|---------|-------|------------|
| OpenAI (current) | text-embedding-3-small | 1536 |
| Gemini (new) | text-embedding-004 | 768 |
**Impact**: Existing Qdrant collections indexed with OpenAI embeddings are incompatible with Gemini embeddings. Re-indexing is out of scope per specification.
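Because re-indexing is out of scope, new deployments must create Qdrant collections sized for the Gemini dimension. A hedged sketch using `qdrant-client` (collection name and distance metric are illustrative):

```python
# Dimensions from the table above.
OPENAI_DIM = 1536  # text-embedding-3-small (old)
GEMINI_DIM = 768   # text-embedding-004 (new)

# Guarded import: qdrant-client may not be installed where this sketch runs.
try:
    from qdrant_client import QdrantClient
    from qdrant_client.models import Distance, VectorParams
except ImportError:  # pragma: no cover - sketch only
    QdrantClient = None


def create_gemini_collection(client, name: str = "docs_gemini") -> None:
    # A 1536-dim collection cannot accept 768-dim vectors, so a fresh
    # collection is created with the Gemini embedding size.
    client.create_collection(
        collection_name=name,
        vectors_config=VectorParams(size=GEMINI_DIM, distance=Distance.COSINE),
    )
```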
## Message Format Mapping
### Chat History Conversion
**OpenAI Format** (input from routes):
```python
[
{"role": "system", "content": "..."},
{"role": "user", "content": "..."},
{"role": "assistant", "content": "..."}
]
```
**Gemini Format** (internal conversion):
```python
# System message → system_instruction config
# user → user
# assistant → model
[
types.Content(role="user", parts=[types.Part(text="...")]),
types.Content(role="model", parts=[types.Part(text="...")])
]
```
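The mapping above can be sketched as a pure conversion helper. Plain dicts stand in for `types.Content`/`types.Part` here so the logic is visible without the SDK; the real service would wrap each entry in the SDK types:

```python
def convert_history(messages: list) -> tuple:
    """Split OpenAI-style messages into (system_instruction, Gemini contents).

    - "system" content moves out of the list into system_instruction
    - "user" stays "user"; "assistant" becomes "model"
    """
    system_instruction = None
    contents = []
    for msg in messages:
        if msg["role"] == "system":
            system_instruction = msg["content"]
        else:
            role = "model" if msg["role"] == "assistant" else "user"
            contents.append({"role": role, "parts": [{"text": msg["content"]}]})
    return system_instruction, contents
```

If multiple system messages appear, this sketch keeps only the last one; the production converter would need to define that behavior explicitly.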
## No Schema Changes
This migration does not modify:
- Database tables
- Pydantic request/response models
- API endpoint signatures
- Route patterns