mr.saris kiattithapanayong committed
Commit · 3d142aa
Parent(s): 056e676

update the code that demoed on Saturday 22 Nov

This view is limited to 50 files because it contains too many changes.
- .backup_adk-rag-agent_20251120_223056/.gitignore +3 -0
- .backup_adk-rag-agent_20251120_223056/README.md +125 -0
- .backup_adk-rag-agent_20251120_223056/rag_agent/__init__.py +35 -0
- .backup_adk-rag-agent_20251120_223056/rag_agent/agent.py +115 -0
- .backup_adk-rag-agent_20251120_223056/rag_agent/config.py +26 -0
- .backup_adk-rag-agent_20251120_223056/rag_agent/tools/__init__.py +29 -0
- .backup_adk-rag-agent_20251120_223056/rag_agent/tools/add_data.py +156 -0
- .backup_adk-rag-agent_20251120_223056/rag_agent/tools/create_corpus.py +78 -0
- .backup_adk-rag-agent_20251120_223056/rag_agent/tools/delete_corpus.py +67 -0
- .backup_adk-rag-agent_20251120_223056/rag_agent/tools/delete_document.py +58 -0
- .backup_adk-rag-agent_20251120_223056/rag_agent/tools/get_corpus_info.py +99 -0
- .backup_adk-rag-agent_20251120_223056/rag_agent/tools/list_corpora.py +51 -0
- .backup_adk-rag-agent_20251120_223056/rag_agent/tools/rag_query.py +112 -0
- .backup_adk-rag-agent_20251120_223056/rag_agent/tools/utils.py +117 -0
- .backup_adk-rag-agent_20251120_223056/requirements.txt +5 -0
- .cloudbuild/deploy-to-prod.yaml +52 -0
- .cloudbuild/pr_checks.yaml +51 -0
- .cloudbuild/staging.yaml +120 -0
- .gitignore +1 -0
- .gradio/certificate.pem +31 -0
- GEMINI.md +1992 -0
- GRADIO_COMPLETE_SETUP.md +300 -0
- GRADIO_README.md +118 -0
- GRADIO_SUMMARY.md +237 -0
- Makefile +80 -0
- QUICKSTART_GRADIO.md +266 -0
- VERSIONS_COMPARISON.md +205 -0
- deployment/README.md +11 -0
- deployment_metadata.json +6 -0
- gradio_app.py +193 -0
- gradio_app_v2.py +443 -0
- notebooks/adk_app_testing.ipynb +367 -0
- notebooks/evaluating_adk_agent.ipynb +1535 -0
- pyproject.toml +92 -0
- rag_agent/.env.example +0 -3
- rag_agent/agent.py +44 -96
- rag_agent/agent_engine_app.py +61 -0
- rag_agent/app_utils/.requirements.txt +175 -0
- rag_agent/app_utils/deploy.py +338 -0
- rag_agent/app_utils/gcs.py +42 -0
- rag_agent/app_utils/telemetry.py +45 -0
- rag_agent/app_utils/typing.py +33 -0
- rag_agent/config.py +3 -1
- rag_agent/tools/rag_query.py +77 -83
- requirements.txt +2 -0
- run_gradio.py +22 -0
- setup_gradio.sh +57 -0
- starter_pack_README.md +108 -0
- test.ipynb +118 -0
- test_gradio_setup.py +110 -0
.backup_adk-rag-agent_20251120_223056/.gitignore ADDED
@@ -0,0 +1,3 @@
+.env
+__pycache__/
+.venv/
.backup_adk-rag-agent_20251120_223056/README.md ADDED
@@ -0,0 +1,125 @@
+# Vertex AI RAG Agent with ADK
+
+This repository contains a Google Agent Development Kit (ADK) implementation of a Retrieval Augmented Generation (RAG) agent using Google Cloud Vertex AI.
+
+## Overview
+
+The Vertex AI RAG Agent allows you to:
+
+- Query document corpora with natural language questions
+- List available document corpora
+- Create new document corpora
+- Add new documents to existing corpora
+- Get detailed information about specific corpora
+- Delete corpora when they're no longer needed
+
+## Prerequisites
+
+- A Google Cloud account with billing enabled
+- A Google Cloud project with the Vertex AI API enabled
+- Appropriate access to create and manage Vertex AI resources
+- Python 3.9+ environment
+
+## Setting Up Google Cloud Authentication
+
+Before running the agent, you need to set up authentication with Google Cloud:
+
+1. **Install Google Cloud CLI**:
+   - Visit [Google Cloud SDK](https://cloud.google.com/sdk/docs/install) for installation instructions for your OS
+
+2. **Initialize the Google Cloud CLI**:
+   ```bash
+   gcloud init
+   ```
+   This will guide you through logging in and selecting your project.
+
+3. **Set up Application Default Credentials**:
+   ```bash
+   gcloud auth application-default login
+   ```
+   This will open a browser window for authentication and store credentials in:
+   `~/.config/gcloud/application_default_credentials.json`
+
+4. **Verify Authentication**:
+   ```bash
+   gcloud auth list
+   gcloud config list
+   ```
+
+5. **Enable Required APIs** (if not already enabled):
+   ```bash
+   gcloud services enable aiplatform.googleapis.com
+   ```
+
+## Installation
+
+1. **Set up a virtual environment**:
+   ```bash
+   python -m venv .venv
+   source .venv/bin/activate  # On Windows: .venv\Scripts\activate
+   ```
+
+2. **Install Dependencies**:
+   ```bash
+   pip install -r requirements.txt
+   ```
+
+## Using the Agent
+
+The agent provides the following functionality through its tools:
+
+### 1. Query Documents
+Allows you to ask questions and get answers from your document corpus:
+- Automatically retrieves relevant information from the specified corpus
+- Generates informative responses based on the retrieved content
+
+### 2. List Corpora
+Shows all available document corpora in your project:
+- Displays corpus names and basic information
+- Helps you understand what data collections are available
+
+### 3. Create Corpus
+Create a new empty document corpus:
+- Specify a custom name for your corpus
+- Sets up the corpus with recommended embedding model configuration
+- Prepares the corpus for document ingestion
+
+### 4. Add New Data
+Add documents to existing corpora or create new ones:
+- Supports Google Drive URLs and GCS (Google Cloud Storage) paths
+- Automatically creates new corpora if they don't exist
+
+### 5. Get Corpus Information
+Provides detailed information about a specific corpus:
+- Shows document count, file metadata, and creation time
+- Useful for understanding corpus contents and structure
+
+### 6. Delete Corpus
+Removes corpora that are no longer needed:
+- Requires confirmation to prevent accidental deletion
+- Permanently removes the corpus and all associated files
+
+## Troubleshooting
+
+If you encounter issues:
+
+- **Authentication Problems**:
+  - Run `gcloud auth application-default login` again
+  - Check if your service account has the necessary permissions
+
+- **API Errors**:
+  - Ensure the Vertex AI API is enabled: `gcloud services enable aiplatform.googleapis.com`
+  - Verify your project has billing enabled
+
+- **Quota Issues**:
+  - Check your Google Cloud Console for any quota limitations
+  - Request quota increases if needed
+
+- **Missing Dependencies**:
+  - Ensure all requirements are installed: `pip install -r requirements.txt`
+
+## Additional Resources
+
+- [Vertex AI RAG Documentation](https://cloud.google.com/vertex-ai/generative-ai/docs/rag-overview)
+- [Google Agent Development Kit (ADK) Documentation](https://github.com/google/agents-framework)
+- [Google Cloud Authentication Guide](https://cloud.google.com/docs/authentication)
.backup_adk-rag-agent_20251120_223056/rag_agent/__init__.py ADDED
@@ -0,0 +1,35 @@
+"""
+Vertex AI RAG Agent
+
+A package for interacting with Google Cloud Vertex AI RAG capabilities.
+"""
+
+import os
+
+import vertexai
+from dotenv import load_dotenv
+
+# Load environment variables
+load_dotenv()
+
+# Get Vertex AI configuration from environment
+PROJECT_ID = os.environ.get("GOOGLE_CLOUD_PROJECT")
+LOCATION = os.environ.get("GOOGLE_CLOUD_LOCATION")
+
+# Initialize Vertex AI at package load time
+try:
+    if PROJECT_ID and LOCATION:
+        print(f"Initializing Vertex AI with project={PROJECT_ID}, location={LOCATION}")
+        vertexai.init(project=PROJECT_ID, location=LOCATION)
+        print("Vertex AI initialization successful")
+    else:
+        print(
+            f"Missing Vertex AI configuration. PROJECT_ID={PROJECT_ID}, LOCATION={LOCATION}. "
+            f"Tools requiring Vertex AI may not work properly."
+        )
+except Exception as e:
+    print(f"Failed to initialize Vertex AI: {str(e)}")
+    print("Please check your Google Cloud credentials and project settings.")
+
+# Import agent after initialization is complete
+from . import agent
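The guard in `__init__.py` above (only call `vertexai.init` when both environment variables are present) can be exercised without Vertex AI by factoring the check into a pure helper. This is an illustrative sketch, not part of the commit; the helper name `read_vertex_config` is hypothetical:

```python
import os
from typing import Optional, Tuple


def read_vertex_config(env: Optional[dict] = None) -> Tuple[Optional[str], Optional[str], bool]:
    """Return (project_id, location, ready). `ready` mirrors the
    __init__.py guard: initialization should run only when both
    GOOGLE_CLOUD_PROJECT and GOOGLE_CLOUD_LOCATION are set."""
    env = os.environ if env is None else env
    project_id = env.get("GOOGLE_CLOUD_PROJECT")
    location = env.get("GOOGLE_CLOUD_LOCATION")
    return project_id, location, bool(project_id and location)
```

Passing a dict instead of reading `os.environ` directly keeps the check testable in isolation.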
.backup_adk-rag-agent_20251120_223056/rag_agent/agent.py ADDED
@@ -0,0 +1,115 @@
+from google.adk.agents import Agent
+
+from .tools.add_data import add_data
+from .tools.create_corpus import create_corpus
+from .tools.delete_corpus import delete_corpus
+from .tools.delete_document import delete_document
+from .tools.get_corpus_info import get_corpus_info
+from .tools.list_corpora import list_corpora
+from .tools.rag_query import rag_query
+
+root_agent = Agent(
+    name="RagAgent",
+    # Using Gemini 2.5 Flash for best performance with RAG operations
+    model="gemini-2.5-flash-preview-04-17",
+    description="Vertex AI RAG Agent",
+    tools=[
+        rag_query,
+        list_corpora,
+        create_corpus,
+        add_data,
+        get_corpus_info,
+        delete_corpus,
+        delete_document,
+    ],
+    instruction="""
+    # 🧠 Vertex AI RAG Agent
+
+    You are a helpful RAG (Retrieval Augmented Generation) agent that can interact with Vertex AI's document corpora.
+    You can retrieve information from corpora, list available corpora, create new corpora, add new documents to corpora,
+    get detailed information about specific corpora, delete specific documents from corpora,
+    and delete entire corpora when they're no longer needed.
+
+    ## Your Capabilities
+
+    1. **Query Documents**: You can answer questions by retrieving relevant information from document corpora.
+    2. **List Corpora**: You can list all available document corpora to help users understand what data is available.
+    3. **Create Corpus**: You can create new document corpora for organizing information.
+    4. **Add New Data**: You can add new documents (Google Drive URLs, etc.) to existing corpora.
+    5. **Get Corpus Info**: You can provide detailed information about a specific corpus, including file metadata and statistics.
+    6. **Delete Document**: You can delete a specific document from a corpus when it's no longer needed.
+    7. **Delete Corpus**: You can delete an entire corpus and all its associated files when it's no longer needed.
+
+    ## How to Approach User Requests
+
+    When a user asks a question:
+    1. First, determine if they want to manage corpora (list/create/add data/get info/delete) or query existing information.
+    2. If they're asking a knowledge question, use the `rag_query` tool to search the corpus.
+    3. If they're asking about available corpora, use the `list_corpora` tool.
+    4. If they want to create a new corpus, use the `create_corpus` tool.
+    5. If they want to add data, ensure you know which corpus to add to, then use the `add_data` tool.
+    6. If they want information about a specific corpus, use the `get_corpus_info` tool.
+    7. If they want to delete a specific document, use the `delete_document` tool with confirmation.
+    8. If they want to delete an entire corpus, use the `delete_corpus` tool with confirmation.
+
+    ## Using Tools
+
+    You have seven specialized tools at your disposal:
+
+    1. `rag_query`: Query a corpus to answer questions
+       - Parameters:
+         - corpus_name: The name of the corpus to query (required, but can be empty to use current corpus)
+         - query: The text question to ask
+
+    2. `list_corpora`: List all available corpora
+       - When this tool is called, it returns the full resource names that should be used with other tools
+
+    3. `create_corpus`: Create a new corpus
+       - Parameters:
+         - corpus_name: The name for the new corpus
+
+    4. `add_data`: Add new data to a corpus
+       - Parameters:
+         - corpus_name: The name of the corpus to add data to (required, but can be empty to use current corpus)
+         - paths: List of Google Drive or GCS URLs
+
+    5. `get_corpus_info`: Get detailed information about a specific corpus
+       - Parameters:
+         - corpus_name: The name of the corpus to get information about
+
+    6. `delete_document`: Delete a specific document from a corpus
+       - Parameters:
+         - corpus_name: The name of the corpus containing the document
+         - document_id: The ID of the document to delete (can be obtained from get_corpus_info results)
+         - confirm: Boolean flag that must be set to True to confirm deletion
+
+    7. `delete_corpus`: Delete an entire corpus and all its associated files
+       - Parameters:
+         - corpus_name: The name of the corpus to delete
+         - confirm: Boolean flag that must be set to True to confirm deletion
+
+    ## INTERNAL: Technical Implementation Details
+
+    This section is NOT user-facing information - don't repeat these details to users:
+
+    - The system tracks a "current corpus" in the state. When a corpus is created or used, it becomes the current corpus.
+    - For rag_query and add_data, you can provide an empty string for corpus_name to use the current corpus.
+    - If no current corpus is set and an empty corpus_name is provided, the tools will prompt the user to specify one.
+    - Whenever possible, use the full resource name returned by the list_corpora tool when calling other tools.
+    - Using the full resource name instead of just the display name will ensure more reliable operation.
+    - Do not tell users to use full resource names in your responses - just use them internally in your tool calls.
+
+    ## Communication Guidelines
+
+    - Be clear and concise in your responses.
+    - If querying a corpus, explain which corpus you're using to answer the question.
+    - If managing corpora, explain what actions you've taken.
+    - When new data is added, confirm what was added and to which corpus.
+    - When corpus information is displayed, organize it clearly for the user.
+    - When deleting a document or corpus, always ask for confirmation before proceeding.
+    - If an error occurs, explain what went wrong and suggest next steps.
+    - When listing corpora, just provide the display names and basic information - don't tell users about resource names.
+
+    Remember, your primary goal is to help users access and manage information through RAG capabilities.
+    """,
+)
.backup_adk-rag-agent_20251120_223056/rag_agent/config.py ADDED
@@ -0,0 +1,26 @@
+"""
+Configuration settings for the RAG Agent.
+
+These settings are used by the various RAG tools.
+Vertex AI initialization is performed in the package's __init__.py
+"""
+
+import os
+
+from dotenv import load_dotenv
+
+# Load environment variables (this is redundant if __init__.py is imported first,
+# but included for safety when importing config directly)
+load_dotenv()
+
+# Vertex AI settings
+PROJECT_ID = os.environ.get("GOOGLE_CLOUD_PROJECT")
+LOCATION = os.environ.get("GOOGLE_CLOUD_LOCATION")
+
+# RAG settings
+DEFAULT_CHUNK_SIZE = 512
+DEFAULT_CHUNK_OVERLAP = 100
+DEFAULT_TOP_K = 3
+DEFAULT_DISTANCE_THRESHOLD = 0.5
+DEFAULT_EMBEDDING_MODEL = "publishers/google/models/text-embedding-005"
+DEFAULT_EMBEDDING_REQUESTS_PER_MIN = 1000
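The chunk settings in config.py imply a stride of `chunk_size - chunk_overlap` tokens between consecutive chunk starts. As a quick illustration (not part of the commit, and a simplification of whatever splitting Vertex AI actually performs), here is how those defaults would partition a document into overlapping spans:

```python
# Illustrative only: approximates how chunk_size=512 / chunk_overlap=100
# tile a document of num_tokens tokens with overlapping chunks.
DEFAULT_CHUNK_SIZE = 512
DEFAULT_CHUNK_OVERLAP = 100


def chunk_spans(num_tokens: int, size: int = DEFAULT_CHUNK_SIZE,
                overlap: int = DEFAULT_CHUNK_OVERLAP) -> list:
    """Return (start, end) token offsets for each chunk."""
    stride = size - overlap  # 412 tokens between chunk starts
    spans = []
    start = 0
    while start < num_tokens:
        spans.append((start, min(start + size, num_tokens)))
        if start + size >= num_tokens:
            break
        start += stride
    return spans
```

For a 1000-token document this yields three chunks, each sharing 100 tokens with its neighbor, so no sentence near a chunk boundary is lost to retrieval.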
.backup_adk-rag-agent_20251120_223056/rag_agent/tools/__init__.py ADDED
@@ -0,0 +1,29 @@
+"""
+RAG Tools package for interacting with Vertex AI RAG corpora.
+"""
+
+from .add_data import add_data
+from .create_corpus import create_corpus
+from .delete_corpus import delete_corpus
+from .delete_document import delete_document
+from .get_corpus_info import get_corpus_info
+from .list_corpora import list_corpora
+from .rag_query import rag_query
+from .utils import (
+    check_corpus_exists,
+    get_corpus_resource_name,
+    set_current_corpus,
+)
+
+__all__ = [
+    "add_data",
+    "create_corpus",
+    "list_corpora",
+    "rag_query",
+    "get_corpus_info",
+    "delete_corpus",
+    "delete_document",
+    "check_corpus_exists",
+    "get_corpus_resource_name",
+    "set_current_corpus",
+]
.backup_adk-rag-agent_20251120_223056/rag_agent/tools/add_data.py ADDED
@@ -0,0 +1,156 @@
+"""
+Tool for adding new data sources to a Vertex AI RAG corpus.
+"""
+
+import re
+from typing import List
+
+from google.adk.tools.tool_context import ToolContext
+from vertexai import rag
+
+from ..config import (
+    DEFAULT_CHUNK_OVERLAP,
+    DEFAULT_CHUNK_SIZE,
+    DEFAULT_EMBEDDING_REQUESTS_PER_MIN,
+)
+from .utils import check_corpus_exists, get_corpus_resource_name
+
+
+def add_data(
+    corpus_name: str,
+    paths: List[str],
+    tool_context: ToolContext,
+) -> dict:
+    """
+    Add new data sources to a Vertex AI RAG corpus.
+
+    Args:
+        corpus_name (str): The name of the corpus to add data to. If empty, the current corpus will be used.
+        paths (List[str]): List of URLs or GCS paths to add to the corpus.
+            Supported formats:
+            - Google Drive: "https://drive.google.com/file/d/{FILE_ID}/view"
+            - Google Docs/Sheets/Slides: "https://docs.google.com/{type}/d/{FILE_ID}/..."
+            - Google Cloud Storage: "gs://{BUCKET}/{PATH}"
+            Example: ["https://drive.google.com/file/d/123", "gs://my_bucket/my_files_dir"]
+        tool_context (ToolContext): The tool context
+
+    Returns:
+        dict: Information about the added data and status
+    """
+    # Check if the corpus exists
+    if not check_corpus_exists(corpus_name, tool_context):
+        return {
+            "status": "error",
+            "message": f"Corpus '{corpus_name}' does not exist. Please create it first using the create_corpus tool.",
+            "corpus_name": corpus_name,
+            "paths": paths,
+        }
+
+    # Validate inputs
+    if not paths or not all(isinstance(path, str) for path in paths):
+        return {
+            "status": "error",
+            "message": "Invalid paths: Please provide a list of URLs or GCS paths",
+            "corpus_name": corpus_name,
+            "paths": paths,
+        }
+
+    # Pre-process paths to validate and convert Google Docs URLs to Drive format if needed
+    validated_paths = []
+    invalid_paths = []
+    conversions = []
+
+    for path in paths:
+        if not path or not isinstance(path, str):
+            invalid_paths.append(f"{path} (Not a valid string)")
+            continue
+
+        # Check for Google Docs/Sheets/Slides URLs and convert them to Drive format
+        docs_match = re.match(
+            r"https:\/\/docs\.google\.com\/(?:document|spreadsheets|presentation)\/d\/([a-zA-Z0-9_-]+)(?:\/|$)",
+            path,
+        )
+        if docs_match:
+            file_id = docs_match.group(1)
+            drive_url = f"https://drive.google.com/file/d/{file_id}/view"
+            validated_paths.append(drive_url)
+            conversions.append(f"{path} → {drive_url}")
+            continue
+
+        # Check for valid Drive URL format
+        drive_match = re.match(
+            r"https:\/\/drive\.google\.com\/(?:file\/d\/|open\?id=)([a-zA-Z0-9_-]+)(?:\/|$)",
+            path,
+        )
+        if drive_match:
+            # Normalize to the standard Drive URL format
+            file_id = drive_match.group(1)
+            drive_url = f"https://drive.google.com/file/d/{file_id}/view"
+            validated_paths.append(drive_url)
+            if drive_url != path:
+                conversions.append(f"{path} → {drive_url}")
+            continue
+
+        # Check for GCS paths
+        if path.startswith("gs://"):
+            validated_paths.append(path)
+            continue
+
+        # If we're here, the path wasn't in a recognized format
+        invalid_paths.append(f"{path} (Invalid format)")
+
+    # Check if we have any valid paths after validation
+    if not validated_paths:
+        return {
+            "status": "error",
+            "message": "No valid paths provided. Please provide Google Drive URLs or GCS paths.",
+            "corpus_name": corpus_name,
+            "invalid_paths": invalid_paths,
+        }
+
+    try:
+        # Get the corpus resource name
+        corpus_resource_name = get_corpus_resource_name(corpus_name)
+
+        # Set up chunking configuration
+        transformation_config = rag.TransformationConfig(
+            chunking_config=rag.ChunkingConfig(
+                chunk_size=DEFAULT_CHUNK_SIZE,
+                chunk_overlap=DEFAULT_CHUNK_OVERLAP,
+            ),
+        )
+
+        # Import files to the corpus
+        import_result = rag.import_files(
+            corpus_resource_name,
+            validated_paths,
+            transformation_config=transformation_config,
+            max_embedding_requests_per_min=DEFAULT_EMBEDDING_REQUESTS_PER_MIN,
+        )
+
+        # Set this as the current corpus if not already set
+        if not tool_context.state.get("current_corpus"):
+            tool_context.state["current_corpus"] = corpus_name
+
+        # Build the success message
+        conversion_msg = ""
+        if conversions:
+            conversion_msg = " (Converted Google Docs URLs to Drive format)"
+
+        return {
+            "status": "success",
+            "message": f"Successfully added {import_result.imported_rag_files_count} file(s) to corpus '{corpus_name}'{conversion_msg}",
+            "corpus_name": corpus_name,
+            "files_added": import_result.imported_rag_files_count,
+            "paths": validated_paths,
+            "invalid_paths": invalid_paths,
+            "conversions": conversions,
+        }
+
+    except Exception as e:
+        return {
+            "status": "error",
+            "message": f"Error adding data to corpus: {str(e)}",
+            "corpus_name": corpus_name,
+            "paths": paths,
+        }
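The URL normalization inside add_data can be exercised in isolation. This standalone sketch (not part of the commit; the helper name `normalize_path` is hypothetical) reuses the same two regexes to show how Docs/Drive URLs are canonicalized to the `file/d/{FILE_ID}/view` form and GCS paths pass through untouched:

```python
import re
from typing import Optional


def normalize_path(path: str) -> Optional[str]:
    """Mirror of add_data's per-path validation: return a canonical
    Drive URL or a GCS path, or None for unrecognized formats."""
    # Docs/Sheets/Slides URLs are rewritten to the Drive file URL
    docs_match = re.match(
        r"https:\/\/docs\.google\.com\/(?:document|spreadsheets|presentation)\/d\/([a-zA-Z0-9_-]+)(?:\/|$)",
        path,
    )
    if docs_match:
        return f"https://drive.google.com/file/d/{docs_match.group(1)}/view"
    # Drive URLs (file/d/... or open?id=...) are normalized to one shape
    drive_match = re.match(
        r"https:\/\/drive\.google\.com\/(?:file\/d\/|open\?id=)([a-zA-Z0-9_-]+)(?:\/|$)",
        path,
    )
    if drive_match:
        return f"https://drive.google.com/file/d/{drive_match.group(1)}/view"
    # GCS paths are accepted as-is
    if path.startswith("gs://"):
        return path
    return None
```

Note the trailing `(?:\/|$)` in both patterns: the file ID must be followed by a `/` or end the string, so a stray suffix like `/edit#gid=0` is tolerated while a malformed ID is not.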
.backup_adk-rag-agent_20251120_223056/rag_agent/tools/create_corpus.py ADDED
@@ -0,0 +1,78 @@
+"""
+Tool for creating a new Vertex AI RAG corpus.
+"""
+
+import re
+
+from google.adk.tools.tool_context import ToolContext
+from vertexai import rag
+
+from ..config import (
+    DEFAULT_EMBEDDING_MODEL,
+)
+from .utils import check_corpus_exists
+
+
+def create_corpus(
+    corpus_name: str,
+    tool_context: ToolContext,
+) -> dict:
+    """
+    Create a new Vertex AI RAG corpus with the specified name.
+
+    Args:
+        corpus_name (str): The name for the new corpus
+        tool_context (ToolContext): The tool context for state management
+
+    Returns:
+        dict: Status information about the operation
+    """
+    # Check if corpus already exists
+    if check_corpus_exists(corpus_name, tool_context):
+        return {
+            "status": "info",
+            "message": f"Corpus '{corpus_name}' already exists",
+            "corpus_name": corpus_name,
+            "corpus_created": False,
+        }
+
+    try:
+        # Clean corpus name for use as display name
+        display_name = re.sub(r"[^a-zA-Z0-9_-]", "_", corpus_name)
+
+        # Configure embedding model
+        embedding_model_config = rag.RagEmbeddingModelConfig(
+            vertex_prediction_endpoint=rag.VertexPredictionEndpoint(
+                publisher_model=DEFAULT_EMBEDDING_MODEL
+            )
+        )
+
+        # Create the corpus
+        rag_corpus = rag.create_corpus(
+            display_name=display_name,
+            backend_config=rag.RagVectorDbConfig(
+                rag_embedding_model_config=embedding_model_config
+            ),
+        )
+
+        # Update state to track corpus existence
+        tool_context.state[f"corpus_exists_{corpus_name}"] = True
+
+        # Set this as the current corpus
+        tool_context.state["current_corpus"] = corpus_name
+
+        return {
+            "status": "success",
+            "message": f"Successfully created corpus '{corpus_name}'",
+            "corpus_name": rag_corpus.name,
+            "display_name": rag_corpus.display_name,
+            "corpus_created": True,
+        }
+
+    except Exception as e:
+        return {
+            "status": "error",
+            "message": f"Error creating corpus: {str(e)}",
+            "corpus_name": corpus_name,
+            "corpus_created": False,
+        }
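The display-name cleaning in create_corpus is a single `re.sub` that maps every character outside `[a-zA-Z0-9_-]` to an underscore. A standalone sketch of the same rule (the helper name `sanitize_display_name` is my own, not part of the commit):

```python
import re


def sanitize_display_name(corpus_name: str) -> str:
    """Same cleaning rule create_corpus applies before calling
    rag.create_corpus: anything outside [a-zA-Z0-9_-] becomes '_'."""
    return re.sub(r"[^a-zA-Z0-9_-]", "_", corpus_name)
```

Note the substitution is per character, so spaces and punctuation each become their own underscore rather than collapsing into one.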
.backup_adk-rag-agent_20251120_223056/rag_agent/tools/delete_corpus.py
ADDED
|
@@ -0,0 +1,67 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
+"""
+Tool for deleting a Vertex AI RAG corpus when it's no longer needed.
+"""
+
+from google.adk.tools.tool_context import ToolContext
+from vertexai import rag
+
+from .utils import check_corpus_exists, get_corpus_resource_name
+
+
+def delete_corpus(
+    corpus_name: str,
+    confirm: bool,
+    tool_context: ToolContext,
+) -> dict:
+    """
+    Delete a Vertex AI RAG corpus when it's no longer needed.
+    Requires confirmation to prevent accidental deletion.
+
+    Args:
+        corpus_name (str): The full resource name of the corpus to delete.
+            Preferably use the resource_name from list_corpora results.
+        confirm (bool): Must be set to True to confirm deletion
+        tool_context (ToolContext): The tool context
+
+    Returns:
+        dict: Status information about the deletion operation
+    """
+    # Check if corpus exists
+    if not check_corpus_exists(corpus_name, tool_context):
+        return {
+            "status": "error",
+            "message": f"Corpus '{corpus_name}' does not exist",
+            "corpus_name": corpus_name,
+        }
+
+    # Check if deletion is confirmed
+    if not confirm:
+        return {
+            "status": "error",
+            "message": "Deletion requires explicit confirmation. Set confirm=True to delete this corpus.",
+            "corpus_name": corpus_name,
+        }
+
+    try:
+        # Get the corpus resource name
+        corpus_resource_name = get_corpus_resource_name(corpus_name)
+
+        # Delete the corpus
+        rag.delete_corpus(corpus_resource_name)
+
+        # Remove from state by setting to False
+        state_key = f"corpus_exists_{corpus_name}"
+        if state_key in tool_context.state:
+            tool_context.state[state_key] = False
+
+        return {
+            "status": "success",
+            "message": f"Successfully deleted corpus '{corpus_name}'",
+            "corpus_name": corpus_name,
+        }
+    except Exception as e:
+        return {
+            "status": "error",
+            "message": f"Error deleting corpus: {str(e)}",
+            "corpus_name": corpus_name,
+        }
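The confirm-gate pattern used by delete_corpus above can be sketched in isolation. This is a minimal, hedged sketch: `guarded_delete` and its `do_delete` callable are hypothetical stand-ins for the tool and for `rag.delete_corpus`, not the real Vertex AI call.

```python
# Minimal sketch of the confirm-gate pattern in delete_corpus above.
# `do_delete` is a hypothetical stub standing in for rag.delete_corpus.
def guarded_delete(corpus_name: str, confirm: bool, do_delete) -> dict:
    if not confirm:
        # Refuse to act until the caller explicitly opts in.
        return {
            "status": "error",
            "message": "Deletion requires explicit confirmation. Set confirm=True.",
            "corpus_name": corpus_name,
        }
    do_delete(corpus_name)
    return {
        "status": "success",
        "message": f"Successfully deleted corpus '{corpus_name}'",
        "corpus_name": corpus_name,
    }

deleted = []
print(guarded_delete("docs", False, deleted.append)["status"])  # error
print(guarded_delete("docs", True, deleted.append)["status"])   # success
print(deleted)  # ['docs']
```

Returning a structured error dict instead of raising lets the LLM agent see why the call failed and retry with `confirm=True` after asking the user.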
.backup_adk-rag-agent_20251120_223056/rag_agent/tools/delete_document.py
ADDED
|
@@ -0,0 +1,58 @@
+"""
+Tool for deleting a specific document from a Vertex AI RAG corpus.
+"""
+
+from google.adk.tools.tool_context import ToolContext
+from vertexai import rag
+
+from .utils import check_corpus_exists, get_corpus_resource_name
+
+
+def delete_document(
+    corpus_name: str,
+    document_id: str,
+    tool_context: ToolContext,
+) -> dict:
+    """
+    Delete a specific document from a Vertex AI RAG corpus.
+
+    Args:
+        corpus_name (str): The full resource name of the corpus containing the document.
+            Preferably use the resource_name from list_corpora results.
+        document_id (str): The ID of the specific document/file to delete. This can be
+            obtained from get_corpus_info results.
+        tool_context (ToolContext): The tool context
+
+    Returns:
+        dict: Status information about the deletion operation
+    """
+    # Check if corpus exists
+    if not check_corpus_exists(corpus_name, tool_context):
+        return {
+            "status": "error",
+            "message": f"Corpus '{corpus_name}' does not exist",
+            "corpus_name": corpus_name,
+            "document_id": document_id,
+        }
+
+    try:
+        # Get the corpus resource name
+        corpus_resource_name = get_corpus_resource_name(corpus_name)
+
+        # Delete the document
+        rag_file_path = f"{corpus_resource_name}/ragFiles/{document_id}"
+        rag.delete_file(rag_file_path)
+
+        return {
+            "status": "success",
+            "message": f"Successfully deleted document '{document_id}' from corpus '{corpus_name}'",
+            "corpus_name": corpus_name,
+            "document_id": document_id,
+        }
+    except Exception as e:
+        return {
+            "status": "error",
+            "message": f"Error deleting document: {str(e)}",
+            "corpus_name": corpus_name,
+            "document_id": document_id,
+        }
.backup_adk-rag-agent_20251120_223056/rag_agent/tools/get_corpus_info.py
ADDED
|
@@ -0,0 +1,99 @@
+"""
+Tool for retrieving detailed information about a specific RAG corpus.
+"""
+
+from google.adk.tools.tool_context import ToolContext
+from vertexai import rag
+
+from .utils import check_corpus_exists, get_corpus_resource_name
+
+
+def get_corpus_info(
+    corpus_name: str,
+    tool_context: ToolContext,
+) -> dict:
+    """
+    Get detailed information about a specific RAG corpus, including its files.
+
+    Args:
+        corpus_name (str): The full resource name of the corpus to get information about.
+            Preferably use the resource_name from list_corpora results.
+        tool_context (ToolContext): The tool context
+
+    Returns:
+        dict: Information about the corpus and its files
+    """
+    try:
+        # Check if corpus exists
+        if not check_corpus_exists(corpus_name, tool_context):
+            return {
+                "status": "error",
+                "message": f"Corpus '{corpus_name}' does not exist",
+                "corpus_name": corpus_name,
+            }
+
+        # Get the corpus resource name
+        corpus_resource_name = get_corpus_resource_name(corpus_name)
+
+        # Try to get corpus details first
+        corpus_display_name = corpus_name  # Default if we can't get actual display name
+
+        # Process file information
+        file_details = []
+        try:
+            # Get the list of files
+            files = rag.list_files(corpus_resource_name)
+            for rag_file in files:
+                # Get document-specific details
+                try:
+                    # Extract the file ID from the name
+                    file_id = rag_file.name.split("/")[-1]
+
+                    file_info = {
+                        "file_id": file_id,
+                        "display_name": (
+                            rag_file.display_name
+                            if hasattr(rag_file, "display_name")
+                            else ""
+                        ),
+                        "source_uri": (
+                            rag_file.source_uri
+                            if hasattr(rag_file, "source_uri")
+                            else ""
+                        ),
+                        "create_time": (
+                            str(rag_file.create_time)
+                            if hasattr(rag_file, "create_time")
+                            else ""
+                        ),
+                        "update_time": (
+                            str(rag_file.update_time)
+                            if hasattr(rag_file, "update_time")
+                            else ""
+                        ),
+                    }
+
+                    file_details.append(file_info)
+                except Exception:
+                    # Continue to the next file
+                    continue
+        except Exception:
+            # Continue without file details
+            pass
+
+        # Basic corpus info
+        return {
+            "status": "success",
+            "message": f"Successfully retrieved information for corpus '{corpus_display_name}'",
+            "corpus_name": corpus_name,
+            "corpus_display_name": corpus_display_name,
+            "file_count": len(file_details),
+            "files": file_details,
+        }
+
+    except Exception as e:
+        return {
+            "status": "error",
+            "message": f"Error getting corpus information: {str(e)}",
+            "corpus_name": corpus_name,
+        }
.backup_adk-rag-agent_20251120_223056/rag_agent/tools/list_corpora.py
ADDED
|
@@ -0,0 +1,51 @@
+"""
+Tool for listing all available Vertex AI RAG corpora.
+"""
+
+from typing import Dict, List, Union
+
+from vertexai import rag
+
+
+def list_corpora() -> dict:
+    """
+    List all available Vertex AI RAG corpora.
+
+    Returns:
+        dict: A list of available corpora and status, with each corpus containing:
+            - resource_name: The full resource name to use with other tools
+            - display_name: The human-readable name of the corpus
+            - create_time: When the corpus was created
+            - update_time: When the corpus was last updated
+    """
+    try:
+        # Get the list of corpora
+        corpora = rag.list_corpora()
+
+        # Process corpus information into a more usable format
+        corpus_info: List[Dict[str, Union[str, int]]] = []
+        for corpus in corpora:
+            corpus_data: Dict[str, Union[str, int]] = {
+                "resource_name": corpus.name,  # Full resource name for use with other tools
+                "display_name": corpus.display_name,
+                "create_time": (
+                    str(corpus.create_time) if hasattr(corpus, "create_time") else ""
+                ),
+                "update_time": (
+                    str(corpus.update_time) if hasattr(corpus, "update_time") else ""
+                ),
+            }
+
+            corpus_info.append(corpus_data)
+
+        return {
+            "status": "success",
+            "message": f"Found {len(corpus_info)} available corpora",
+            "corpora": corpus_info,
+        }
+    except Exception as e:
+        return {
+            "status": "error",
+            "message": f"Error listing corpora: {str(e)}",
+            "corpora": [],
+        }
.backup_adk-rag-agent_20251120_223056/rag_agent/tools/rag_query.py
ADDED
|
@@ -0,0 +1,112 @@
+"""
+Tool for querying Vertex AI RAG corpora and retrieving relevant information.
+"""
+
+import logging
+
+from google.adk.tools.tool_context import ToolContext
+from vertexai import rag
+
+from ..config import (
+    DEFAULT_DISTANCE_THRESHOLD,
+    DEFAULT_TOP_K,
+)
+from .utils import check_corpus_exists, get_corpus_resource_name
+
+
+def rag_query(
+    corpus_name: str,
+    query: str,
+    tool_context: ToolContext,
+) -> dict:
+    """
+    Query a Vertex AI RAG corpus with a user question and return relevant information.
+
+    Args:
+        corpus_name (str): The name of the corpus to query. If empty, the current corpus will be used.
+            Preferably use the resource_name from list_corpora results.
+        query (str): The text query to search for in the corpus
+        tool_context (ToolContext): The tool context
+
+    Returns:
+        dict: The query results and status
+    """
+    try:
+        # Check if the corpus exists
+        if not check_corpus_exists(corpus_name, tool_context):
+            return {
+                "status": "error",
+                "message": f"Corpus '{corpus_name}' does not exist. Please create it first using the create_corpus tool.",
+                "query": query,
+                "corpus_name": corpus_name,
+            }
+
+        # Get the corpus resource name
+        corpus_resource_name = get_corpus_resource_name(corpus_name)
+
+        # Configure retrieval parameters
+        rag_retrieval_config = rag.RagRetrievalConfig(
+            top_k=DEFAULT_TOP_K,
+            filter=rag.Filter(vector_distance_threshold=DEFAULT_DISTANCE_THRESHOLD),
+        )
+
+        # Perform the query
+        print("Performing retrieval query...")
+        response = rag.retrieval_query(
+            rag_resources=[
+                rag.RagResource(
+                    rag_corpus=corpus_resource_name,
+                )
+            ],
+            text=query,
+            rag_retrieval_config=rag_retrieval_config,
+        )
+
+        # Process the response into a more usable format
+        results = []
+        if hasattr(response, "contexts") and response.contexts:
+            for ctx_group in response.contexts.contexts:
+                result = {
+                    "source_uri": (
+                        ctx_group.source_uri if hasattr(ctx_group, "source_uri") else ""
+                    ),
+                    "source_name": (
+                        ctx_group.source_display_name
+                        if hasattr(ctx_group, "source_display_name")
+                        else ""
+                    ),
+                    "text": ctx_group.text if hasattr(ctx_group, "text") else "",
+                    "score": ctx_group.score if hasattr(ctx_group, "score") else 0.0,
+                }
+                results.append(result)
+
+        # If we didn't find any results
+        if not results:
+            return {
+                "status": "warning",
+                "message": f"No results found in corpus '{corpus_name}' for query: '{query}'",
+                "query": query,
+                "corpus_name": corpus_name,
+                "results": [],
+                "results_count": 0,
+            }
+
+        return {
+            "status": "success",
+            "message": f"Successfully queried corpus '{corpus_name}'",
+            "query": query,
+            "corpus_name": corpus_name,
+            "results": results,
+            "results_count": len(results),
+        }
+
+    except Exception as e:
+        error_msg = f"Error querying corpus: {str(e)}"
+        logging.error(error_msg)
+        return {
+            "status": "error",
+            "message": error_msg,
+            "query": query,
+            "corpus_name": corpus_name,
+        }
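The response-shaping loop in rag_query above flattens each retrieved context into a plain dict, substituting defaults for missing attributes. The same logic can be written more compactly with `getattr` defaults; the sketch below runs without Vertex AI by using a `SimpleNamespace` as a stand-in for the real context object.

```python
# Sketch of rag_query's context-flattening step, using getattr defaults
# (equivalent to the hasattr ternaries above). The SimpleNamespace below is
# a stand-in for the Vertex AI retrieval context object.
from types import SimpleNamespace

def shape_context(ctx) -> dict:
    return {
        "source_uri": getattr(ctx, "source_uri", ""),
        "source_name": getattr(ctx, "source_display_name", ""),
        "text": getattr(ctx, "text", ""),
        "score": getattr(ctx, "score", 0.0),
    }

ctx = SimpleNamespace(source_uri="gs://bucket/doc.pdf", text="Relevant passage")
print(shape_context(ctx))
# {'source_uri': 'gs://bucket/doc.pdf', 'source_name': '', 'text': 'Relevant passage', 'score': 0.0}
```

Defaulting missing fields to empty strings keeps the tool's return value JSON-serializable regardless of which attributes a given SDK version populates.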
.backup_adk-rag-agent_20251120_223056/rag_agent/tools/utils.py
ADDED
|
@@ -0,0 +1,117 @@
+"""
+Utility functions for the RAG tools.
+"""
+
+import logging
+import re
+
+from google.adk.tools.tool_context import ToolContext
+from vertexai import rag
+
+from ..config import (
+    LOCATION,
+    PROJECT_ID,
+)
+
+logger = logging.getLogger(__name__)
+
+
+def get_corpus_resource_name(corpus_name: str) -> str:
+    """
+    Convert a corpus name to its full resource name if needed.
+    Handles various input formats and ensures the returned name follows Vertex AI's requirements.
+
+    Args:
+        corpus_name (str): The corpus name or display name
+
+    Returns:
+        str: The full resource name of the corpus
+    """
+    logger.info(f"Getting resource name for corpus: {corpus_name}")
+
+    # If it's already a full resource name with the projects/locations/ragCorpora format
+    if re.match(r"^projects/[^/]+/locations/[^/]+/ragCorpora/[^/]+$", corpus_name):
+        return corpus_name
+
+    # Check if this is a display name of an existing corpus
+    try:
+        # List all corpora and check if there's a match with the display name
+        corpora = rag.list_corpora()
+        for corpus in corpora:
+            if hasattr(corpus, "display_name") and corpus.display_name == corpus_name:
+                return corpus.name
+    except Exception as e:
+        logger.warning(f"Error when checking for corpus display name: {str(e)}")
+        # If we can't check, continue with the default behavior
+
+    # If it contains partial path elements, extract just the corpus ID
+    if "/" in corpus_name:
+        # Extract the last part of the path as the corpus ID
+        corpus_id = corpus_name.split("/")[-1]
+    else:
+        corpus_id = corpus_name
+
+    # Remove any special characters that might cause issues
+    corpus_id = re.sub(r"[^a-zA-Z0-9_-]", "_", corpus_id)
+
+    # Construct the standardized resource name
+    return f"projects/{PROJECT_ID}/locations/{LOCATION}/ragCorpora/{corpus_id}"
+
+
+def check_corpus_exists(corpus_name: str, tool_context: ToolContext) -> bool:
+    """
+    Check if a corpus with the given name exists.
+
+    Args:
+        corpus_name (str): The name of the corpus to check
+        tool_context (ToolContext): The tool context for state management
+
+    Returns:
+        bool: True if the corpus exists, False otherwise
+    """
+    # Check state first if tool_context is provided
+    if tool_context.state.get(f"corpus_exists_{corpus_name}"):
+        return True
+
+    try:
+        # Get full resource name
+        corpus_resource_name = get_corpus_resource_name(corpus_name)
+
+        # List all corpora and check if this one exists
+        corpora = rag.list_corpora()
+        for corpus in corpora:
+            if (
+                corpus.name == corpus_resource_name
+                or corpus.display_name == corpus_name
+            ):
+                # Update state
+                tool_context.state[f"corpus_exists_{corpus_name}"] = True
+                # Also set this as the current corpus if no current corpus is set
+                if not tool_context.state.get("current_corpus"):
+                    tool_context.state["current_corpus"] = corpus_name
+                return True
+
+        return False
+    except Exception as e:
+        logger.error(f"Error checking if corpus exists: {str(e)}")
+        # If we can't check, assume it doesn't exist
+        return False
+
+
+def set_current_corpus(corpus_name: str, tool_context: ToolContext) -> bool:
+    """
+    Set the current corpus in the tool context state.
+
+    Args:
+        corpus_name (str): The name of the corpus to set as current
+        tool_context (ToolContext): The tool context for state management
+
+    Returns:
+        bool: True if the corpus exists and was set as current, False otherwise
+    """
+    # Check if corpus exists first
+    if check_corpus_exists(corpus_name, tool_context):
+        tool_context.state["current_corpus"] = corpus_name
+        return True
+    return False
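The name-normalization logic in get_corpus_resource_name above (full-resource-name passthrough, last-path-segment extraction, character sanitization) can be exercised standalone. In this sketch the project and location are hard-coded hypothetical values, and the display-name lookup branch (which needs a live Vertex AI connection) is omitted.

```python
# Standalone sketch of get_corpus_resource_name's normalization rules.
# PROJECT_ID and LOCATION are hypothetical placeholder values.
import re

PROJECT_ID, LOCATION = "my-project", "us-central1"

def normalize(corpus_name: str) -> str:
    # Already a full resource name: return it unchanged.
    if re.match(r"^projects/[^/]+/locations/[^/]+/ragCorpora/[^/]+$", corpus_name):
        return corpus_name
    # Otherwise keep only the last path segment and sanitize it.
    corpus_id = corpus_name.split("/")[-1]
    corpus_id = re.sub(r"[^a-zA-Z0-9_-]", "_", corpus_id)
    return f"projects/{PROJECT_ID}/locations/{LOCATION}/ragCorpora/{corpus_id}"

full = "projects/p/locations/l/ragCorpora/c"
print(normalize(full) == full)  # True
print(normalize("my docs!"))    # ...ragCorpora/my_docs_
```

Accepting display names, bare IDs, and full resource names in one function lets the agent pass through whatever identifier the LLM produced, at the cost of silently rewriting unexpected characters to underscores.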
.backup_adk-rag-agent_20251120_223056/requirements.txt
ADDED
|
@@ -0,0 +1,5 @@
+google-cloud-aiplatform==1.92.0
+google-cloud-storage==2.19.0
+google-genai==1.14.0
+gitpython==3.1.40
+google-adk==0.5.0
.cloudbuild/deploy-to-prod.yaml
ADDED
|
@@ -0,0 +1,52 @@
+# Copyright 2025 Google LLC
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+steps:
+  - name: "python:3.12-slim"
+    id: install-dependencies
+    entrypoint: /bin/bash
+    args:
+      - "-c"
+      - |
+        pip install uv==0.8.13 --user && uv sync --locked
+    env:
+      - 'PATH=/usr/local/bin:/usr/bin:~/.local/bin'
+
+  - name: "python:3.12-slim"
+    id: trigger-deployment
+    entrypoint: /bin/bash
+    args:
+      - "-c"
+      - |
+        uv export --no-hashes --no-sources --no-header --no-dev --no-emit-project --no-annotate --locked > rag_agent/app_utils/.requirements.txt
+        uv run python -m rag_agent.app_utils.deploy \
+          --project ${_PROD_PROJECT_ID} \
+          --location ${_REGION} \
+          --source-packages=./rag_agent \
+          --entrypoint-module=rag_agent.agent_engine_app \
+          --entrypoint-object=agent_engine \
+          --requirements-file=rag_agent/app_utils/.requirements.txt \
+          --service-account=${_APP_SERVICE_ACCOUNT_PROD} \
+          --set-env-vars="COMMIT_SHA=${COMMIT_SHA},LOGS_BUCKET_NAME=${_LOGS_BUCKET_NAME_PROD}"
+    env:
+      - 'PATH=/usr/local/bin:/usr/bin:~/.local/bin'
+
+substitutions:
+  _PROD_PROJECT_ID: YOUR_PROD_PROJECT_ID
+  _REGION: asia-southeast1
+
+logsBucket: gs://${PROJECT_ID}-adk-rag-agent-logs/build-logs
+options:
+  substitutionOption: ALLOW_LOOSE
+  defaultLogsBucketBehavior: REGIONAL_USER_OWNED_BUCKET
.cloudbuild/pr_checks.yaml
ADDED
|
@@ -0,0 +1,51 @@
+# Copyright 2025 Google LLC
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+steps:
+  # Install uv package manager and sync dependencies
+  - name: "python:3.12-slim"
+    id: install-dependencies
+    entrypoint: /bin/bash
+    args:
+      - "-c"
+      - |
+        pip install uv==0.8.13 --user && uv sync --locked
+    env:
+      - 'PATH=/usr/local/bin:/usr/bin:~/.local/bin'
+
+  # Run unit tests using pytest
+  - name: "python:3.12-slim"
+    id: unit-tests
+    entrypoint: /bin/bash
+    args:
+      - "-c"
+      - |
+        uv run pytest tests/unit
+    env:
+      - 'PATH=/usr/local/bin:/usr/bin:~/.local/bin'
+
+  # Run integration tests
+  - name: "python:3.12-slim"
+    id: integration-tests
+    entrypoint: /bin/bash
+    args:
+      - "-c"
+      - |
+        uv run pytest tests/integration
+    env:
+      - 'PATH=/usr/local/bin:/usr/bin:~/.local/bin'
+
+logsBucket: gs://${PROJECT_ID}-adk-rag-agent-logs/build-logs
+options:
+  defaultLogsBucketBehavior: REGIONAL_USER_OWNED_BUCKET
.cloudbuild/staging.yaml
ADDED
|
@@ -0,0 +1,120 @@
+# Copyright 2025 Google LLC
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+steps:
+  - name: "python:3.12-slim"
+    id: install-dependencies
+    entrypoint: /bin/bash
+    args:
+      - "-c"
+      - |
+        pip install uv==0.8.13 --user && uv sync --locked
+    env:
+      - 'PATH=/usr/local/bin:/usr/bin:~/.local/bin'
+
+  - name: "python:3.12-slim"
+    id: deploy-staging
+    entrypoint: /bin/bash
+    args:
+      - "-c"
+      - |
+        uv export --no-hashes --no-sources --no-header --no-dev --no-emit-project --no-annotate --locked > rag_agent/app_utils/.requirements.txt
+        uv run python -m rag_agent.app_utils.deploy \
+          --project ${_STAGING_PROJECT_ID} \
+          --location ${_REGION} \
+          --source-packages=./rag_agent \
+          --entrypoint-module=rag_agent.agent_engine_app \
+          --entrypoint-object=agent_engine \
+          --requirements-file=rag_agent/app_utils/.requirements.txt \
+          --service-account=${_APP_SERVICE_ACCOUNT_STAGING} \
+          --set-env-vars="COMMIT_SHA=${COMMIT_SHA},LOGS_BUCKET_NAME=${_LOGS_BUCKET_NAME_STAGING}"
+    env:
+      - 'PATH=/usr/local/bin:/usr/bin:~/.local/bin'
+
+  - name: gcr.io/cloud-builders/gcloud
+    id: fetch-auth-token
+    entrypoint: /bin/bash
+    args:
+      - "-c"
+      - |
+        echo $(gcloud auth print-access-token -q) > auth_token.txt
+
+  # Load Testing
+  - name: "python:3.12-slim"
+    id: load_test
+    entrypoint: /bin/bash
+    args:
+      - "-c"
+      - |
+        export _AUTH_TOKEN=$(cat auth_token.txt)
+        pip install locust==2.31.1 --user
+        locust -f tests/load_test/load_test.py \
+          --headless \
+          -t 30s -u 2 -r 0.5 \
+          --csv=tests/load_test/.results/results \
+          --html=tests/load_test/.results/report.html
+    env:
+      - 'PATH=/usr/local/bin:/usr/bin:~/.local/bin'
+
+  # Export Load Test Results to GCS
+  - name: gcr.io/cloud-builders/gcloud
+    id: export-results-to-gcs
+    entrypoint: /bin/bash
+    args:
+      - "-c"
+      - |
+        export _TIMESTAMP=$(date +%Y%m%d-%H%M%S)
+        gsutil -m cp -r tests/load_test/.results gs://${_LOGS_BUCKET_NAME_STAGING}/load-test-results/results-$${_TIMESTAMP}
+        echo "_________________________________________________________________________"
+        echo "Load test results copied to gs://${_LOGS_BUCKET_NAME_STAGING}/load-test-results/results-$${_TIMESTAMP}"
+        echo "HTTP link: https://console.cloud.google.com/storage/browser/${_LOGS_BUCKET_NAME_STAGING}/load-test-results/results-$${_TIMESTAMP}"
+        echo "_________________________________________________________________________"
+
+  # Trigger Prod Deployment
+  - name: gcr.io/cloud-builders/gcloud
+    id: trigger-prod-deployment
+    entrypoint: gcloud
+    args:
+      - "beta"
+      - "builds"
+      - "triggers"
+      - "run"
+      - "deploy-adk-rag-agent"
+      - "--region"
+      - "$LOCATION"
+      - "--project"
+      - "$PROJECT_ID"
+      - "--sha"
+      - $COMMIT_SHA
+
+  - name: gcr.io/cloud-builders/gcloud
+    id: echo-view-build-trigger-link
+    entrypoint: /bin/bash
+    args:
+      - "-c"
+      - |
+        echo "_________________________________________________________________________"
+        echo "Production deployment triggered. View progress and / or approve on the Cloud Build Console:"
+        echo "https://console.cloud.google.com/cloud-build/builds;region=$LOCATION"
+        echo "_________________________________________________________________________"
+
+substitutions:
+  _STAGING_PROJECT_ID: YOUR_STAGING_PROJECT_ID
+  _REGION: asia-southeast1
+
+logsBucket: gs://${PROJECT_ID}-adk-rag-agent-logs/build-logs
+options:
+  substitutionOption: ALLOW_LOOSE
+  defaultLogsBucketBehavior: REGIONAL_USER_OWNED_BUCKET
.gitignore
CHANGED
|
@@ -1,3 +1,4 @@
 .env
 __pycache__/
 .venv/
+deployment/terraform/
.gradio/certificate.pem
ADDED
|
@@ -0,0 +1,31 @@
+-----BEGIN CERTIFICATE-----
+MIIFazCCA1OgAwIBAgIRAIIQz7DSQONZRGPgu2OCiwAwDQYJKoZIhvcNAQELBQAw
+TzELMAkGA1UEBhMCVVMxKTAnBgNVBAoTIEludGVybmV0IFNlY3VyaXR5IFJlc2Vh
+cmNoIEdyb3VwMRUwEwYDVQQDEwxJU1JHIFJvb3QgWDEwHhcNMTUwNjA0MTEwNDM4
+WhcNMzUwNjA0MTEwNDM4WjBPMQswCQYDVQQGEwJVUzEpMCcGA1UEChMgSW50ZXJu
+ZXQgU2VjdXJpdHkgUmVzZWFyY2ggR3JvdXAxFTATBgNVBAMTDElTUkcgUm9vdCBY
+MTCCAiIwDQYJKoZIhvcNAQEBBQADggIPADCCAgoCggIBAK3oJHP0FDfzm54rVygc
+h77ct984kIxuPOZXoHj3dcKi/vVqbvYATyjb3miGbESTtrFj/RQSa78f0uoxmyF+
+0TM8ukj13Xnfs7j/EvEhmkvBioZxaUpmZmyPfjxwv60pIgbz5MDmgK7iS4+3mX6U
+A5/TR5d8mUgjU+g4rk8Kb4Mu0UlXjIB0ttov0DiNewNwIRt18jA8+o+u3dpjq+sW
+T8KOEUt+zwvo/7V3LvSye0rgTBIlDHCNAymg4VMk7BPZ7hm/ELNKjD+Jo2FR3qyH
+B5T0Y3HsLuJvW5iB4YlcNHlsdu87kGJ55tukmi8mxdAQ4Q7e2RCOFvu396j3x+UC
+B5iPNgiV5+I3lg02dZ77DnKxHZu8A/lJBdiB3QW0KtZB6awBdpUKD9jf1b0SHzUv
+KBds0pjBqAlkd25HN7rOrFleaJ1/ctaJxQZBKT5ZPt0m9STJEadao0xAH0ahmbWn
+OlFuhjuefXKnEgV4We0+UXgVCwOPjdAvBbI+e0ocS3MFEvzG6uBQE3xDk3SzynTn
+jh8BCNAw1FtxNrQHusEwMFxIt4I7mKZ9YIqioymCzLq9gwQbooMDQaHWBfEbwrbw
+qHyGO0aoSCqI3Haadr8faqU9GY/rOPNk3sgrDQoo//fb4hVC1CLQJ13hef4Y53CI
+rU7m2Ys6xt0nUW7/vGT1M0NPAgMBAAGjQjBAMA4GA1UdDwEB/wQEAwIBBjAPBgNV
+HRMBAf8EBTADAQH/MB0GA1UdDgQWBBR5tFnme7bl5AFzgAiIyBpY9umbbjANBgkq
+hkiG9w0BAQsFAAOCAgEAVR9YqbyyqFDQDLHYGmkgJykIrGF1XIpu+ILlaS/V9lZL
+ubhzEFnTIZd+50xx+7LSYK05qAvqFyFWhfFQDlnrzuBZ6brJFe+GnY+EgPbk6ZGQ
+3BebYhtF8GaV0nxvwuo77x/Py9auJ/GpsMiu/X1+mvoiBOv/2X/qkSsisRcOj/KK
+NFtY2PwByVS5uCbMiogziUwthDyC3+6WVwW6LLv3xLfHTjuCvjHIInNzktHCgKQ5
+ORAzI4JMPJ+GslWYHb4phowim57iaztXOoJwTdwJx4nLCgdNbOhdjsnvzqvHu7Ur
+TkXWStAmzOVyyghqpZXjFaH3pO3JLF+l+/+sKAIuvtd7u+Nxe5AW0wdeRlN8NwdC
+jNPElpzVmbUq4JUagEiuTDkHzsxHpFKVK7q4+63SM1N95R1NbdWhscdCb+ZAJzVc
+oyi3B43njTOQ5yOf+1CceWxG1bQVs5ZufpsMljq4Ui0/1lvh+wjChP4kqKOJ2qxq
+4RgqsahDYVvTH9w7jXbyLeiNdd8XM2w9U/t7y0Ff/9yi0GE44Za4rF2LN9d11TPA
+mRGunUHBcnWEvgJBQl9nJEiU0Zsnvgc/ubhPgXRR4Xq37Z0j4r7g1SgEEzwxA57d
+emyPxgcYxn/eR44/KJ4EBs+lVDR3veyJm+kXQ99b21/+jh5Xos1AnX5iItreGCc=
|
| 31 |
+
-----END CERTIFICATE-----
|
GEMINI.md ADDED
@@ -0,0 +1,1992 @@
Coding Agent guidance:
# Google Agent Development Kit (ADK) Python Cheatsheet

This document serves as a long-form, comprehensive reference for building, orchestrating, and deploying AI agents using the Python Agent Development Kit (ADK). It aims to cover every significant aspect with greater detail, more code examples, and in-depth best practices.

## Table of Contents

1. [Core Concepts & Project Structure](#1-core-concepts--project-structure)
    * 1.1 ADK's Foundational Principles
    * 1.2 Essential Primitives
    * 1.3 Standard Project Layout
    * 1.A Build Agents without Code (Agent Config)
2. [Agent Definitions (`LlmAgent`)](#2-agent-definitions-llmagent)
    * 2.1 Basic `LlmAgent` Setup
    * 2.2 Advanced `LlmAgent` Configuration
    * 2.3 LLM Instruction Crafting
    * 2.4 Production Wrapper (`App`)
3. [Orchestration with Workflow Agents](#3-orchestration-with-workflow-agents)
    * 3.1 `SequentialAgent`: Linear Execution
    * 3.2 `ParallelAgent`: Concurrent Execution
    * 3.3 `LoopAgent`: Iterative Processes
4. [Multi-Agent Systems & Communication](#4-multi-agent-systems--communication)
    * 4.1 Agent Hierarchy
    * 4.2 Inter-Agent Communication Mechanisms
    * 4.3 Common Multi-Agent Patterns
    * 4.A Distributed Communication (A2A Protocol)
5. [Building Custom Agents (`BaseAgent`)](#5-building-custom-agents-baseagent)
    * 5.1 When to Use Custom Agents
    * 5.2 Implementing `_run_async_impl`
6. [Models: Gemini, LiteLLM, and Vertex AI](#6-models-gemini-litellm-and-vertex-ai)
    * 6.1 Google Gemini Models (AI Studio & Vertex AI)
    * 6.2 Other Cloud & Proprietary Models via LiteLLM
    * 6.3 Open & Local Models via LiteLLM (Ollama, vLLM)
    * 6.4 Customizing LLM API Clients
7. [Tools: The Agent's Capabilities](#7-tools-the-agents-capabilities)
    * 7.1 Defining Function Tools: Principles & Best Practices
    * 7.2 The `ToolContext` Object: Accessing Runtime Information
    * 7.3 All Tool Types & Their Usage
    * 7.4 Tool Confirmation (Human-in-the-Loop)
8. [Context, State, and Memory Management](#8-context-state-and-memory-management)
    * 8.1 The `Session` Object & `SessionService`
    * 8.2 `State`: The Conversational Scratchpad
    * 8.3 `Memory`: Long-Term Knowledge & Retrieval
    * 8.4 `Artifacts`: Binary Data Management
9. [Runtime, Events, and Execution Flow](#9-runtime-events-and-execution-flow)
    * 9.1 Runtime Configuration (`RunConfig`)
    * 9.2 The `Runner`: The Orchestrator
    * 9.3 The Event Loop: Core Execution Flow
    * 9.4 `Event` Object: The Communication Backbone
    * 9.5 Asynchronous Programming (Python Specific)
10. [Control Flow with Callbacks](#10-control-flow-with-callbacks)
    * 10.1 Callback Mechanism: Interception & Control
    * 10.2 Types of Callbacks
    * 10.3 Callback Best Practices
    * 10.A Global Control with Plugins
11. [Authentication for Tools](#11-authentication-for-tools)
    * 11.1 Core Concepts: `AuthScheme` & `AuthCredential`
    * 11.2 Interactive OAuth/OIDC Flows
    * 11.3 Custom Tool Authentication
12. [Deployment Strategies](#12-deployment-strategies)
    * 12.1 Local Development & Testing (`adk web`, `adk run`, `adk api_server`)
    * 12.2 Vertex AI Agent Engine
    * 12.3 Cloud Run
    * 12.4 Google Kubernetes Engine (GKE)
    * 12.5 CI/CD Integration
13. [Evaluation and Safety](#13-evaluation-and-safety)
    * 13.1 Agent Evaluation (`adk eval`)
    * 13.2 Safety & Guardrails
14. [Debugging, Logging & Observability](#14-debugging-logging--observability)
15. [Streaming & Advanced I/O](#15-streaming--advanced-io)
16. [Performance Optimization](#16-performance-optimization)
17. [General Best Practices & Common Pitfalls](#17-general-best-practices--common-pitfalls)
18. [Official API & CLI References](#18-official-api--cli-references)

---

## 1. Core Concepts & Project Structure

### 1.1 ADK's Foundational Principles

* **Modularity**: Break down complex problems into smaller, manageable agents and tools.
* **Composability**: Combine simple agents and tools to build sophisticated systems.
* **Observability**: Detailed event logging and tracing capabilities to understand agent behavior.
* **Extensibility**: Easily integrate with external services, models, and frameworks.
* **Deployment-Agnostic**: Design agents once, deploy anywhere.

### 1.2 Essential Primitives

* **`Agent`**: The core intelligent unit. Can be `LlmAgent` (LLM-driven) or `BaseAgent` (custom/workflow).
* **`Tool`**: Callable function/class providing external capabilities (`FunctionTool`, `OpenAPIToolset`, etc.).
* **`Session`**: A unique, stateful conversation thread with history (`events`) and short-term memory (`state`).
* **`State`**: Key-value dictionary within a `Session` for transient conversation data.
* **`Memory`**: Long-term, searchable knowledge base beyond a single session (`MemoryService`).
* **`Artifact`**: Named, versioned binary data (files, images) associated with a session or user.
* **`Runner`**: The execution engine; orchestrates agent activity and event flow.
* **`Event`**: Atomic unit of communication and history; carries content and side-effect `actions`.
* **`InvocationContext`**: The comprehensive root context object holding all runtime information for a single `run_async` call.

### 1.3 Standard Project Layout

A well-structured ADK project is crucial for maintainability and leveraging `adk` CLI tools.

```
your_project_root/
├── my_first_agent/            # Each folder is a distinct agent app
│   ├── __init__.py            # Makes `my_first_agent` a Python package (`from . import agent`)
│   ├── agent.py               # Contains `root_agent` definition and `LlmAgent`/WorkflowAgent instances
│   ├── tools.py               # Custom tool function definitions
│   ├── data/                  # Optional: static data, templates
│   └── .env                   # Environment variables (API keys, project IDs)
├── my_second_agent/
│   ├── __init__.py
│   └── agent.py
├── requirements.txt           # Project's Python dependencies (e.g., google-adk, litellm)
├── tests/                     # Unit and integration tests
│   ├── unit/
│   │   └── test_tools.py
│   └── integration/
│       ├── test_my_first_agent.py
│       └── my_first_agent.evalset.json  # Evaluation dataset for `adk eval`
└── main.py                    # Optional: Entry point for custom FastAPI server deployment
```
* `adk web` and `adk run` automatically discover agents in subdirectories with `__init__.py` and `agent.py`.
* `.env` files are automatically loaded by `adk` tools when run from the root or agent directory.

### 1.A Build Agents without Code (Agent Config)

ADK allows you to define agents, tools, and even multi-agent workflows using a simple YAML format, eliminating the need to write Python code for orchestration. This is ideal for rapid prototyping and for non-programmers to configure agents.

#### **Getting Started with Agent Config**

* **Create a Config-based Agent**:
    ```bash
    adk create --type=config my_yaml_agent
    ```
    This generates a `my_yaml_agent/` folder with `root_agent.yaml` and `.env` files.

* **Environment Setup** (in `.env` file):
    ```bash
    # For Google AI Studio (simpler setup)
    GOOGLE_GENAI_USE_VERTEXAI=0
    GOOGLE_API_KEY=<your-Google-Gemini-API-key>

    # For Google Cloud Vertex AI (production)
    GOOGLE_GENAI_USE_VERTEXAI=1
    GOOGLE_CLOUD_PROJECT=<your_gcp_project>
    GOOGLE_CLOUD_LOCATION=asia-southeast1
    ```

#### **Core Agent Config Structure**

* **Basic Agent (`root_agent.yaml`)**:
    ```yaml
    # yaml-language-server: $schema=https://raw.githubusercontent.com/google/adk-python/refs/heads/main/src/google/adk/agents/config_schemas/AgentConfig.json
    name: assistant_agent
    model: gemini-2.5-flash
    description: A helper agent that can answer users' various questions.
    instruction: You are an agent to help answer users' various questions.
    ```

* **Agent with Built-in Tools**:
    ```yaml
    name: search_agent
    model: gemini-2.0-flash
    description: 'an agent whose job it is to perform Google search queries and answer questions about the results.'
    instruction: You are an agent whose job is to perform Google search queries and answer questions about the results.
    tools:
      - name: google_search  # Built-in ADK tool
    ```

* **Agent with Custom Tools**:
    ```yaml
    agent_class: LlmAgent
    model: gemini-2.5-flash
    name: prime_agent
    description: Handles checking if numbers are prime.
    instruction: |
      You are responsible for checking whether numbers are prime.
      When asked to check primes, you must call the check_prime tool with a list of integers.
      Never attempt to determine prime numbers manually.
    tools:
      - name: ma_llm.check_prime  # Reference to Python function
    ```

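The `ma_llm.check_prime` entry above references a plain Python function by its module path. A minimal sketch of what such a tool function could look like (the `ma_llm` module and this function body are illustrative assumptions, not taken from the repository):

```python
# Hypothetical ma_llm.py -- the module path referenced by the YAML above.
# ADK tool functions are ordinary Python: typed parameters, a docstring
# the LLM reads, and a JSON-serializable return value.

def check_prime(numbers: list[int]) -> dict:
    """Checks which of the given integers are prime.

    Args:
        numbers: The integers to test.

    Returns:
        A dict mapping each number (as a string key) to True/False.
    """
    def is_prime(n: int) -> bool:
        if n < 2:
            return False
        # Trial division up to sqrt(n) suffices for small inputs.
        i = 2
        while i * i <= n:
            if n % i == 0:
                return False
            i += 1
        return True

    return {str(n): is_prime(n) for n in numbers}
```

Keeping the tool free of ADK imports makes it trivially unit-testable before it is wired into any agent config.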
* **Multi-Agent System with Sub-Agents**:
    ```yaml
    agent_class: LlmAgent
    model: gemini-2.5-flash
    name: root_agent
    description: Learning assistant that provides tutoring in code and math.
    instruction: |
      You are a learning assistant that helps students with coding and math questions.

      You delegate coding questions to the code_tutor_agent and math questions to the math_tutor_agent.

      Follow these steps:
      1. If the user asks about programming or coding, delegate to the code_tutor_agent.
      2. If the user asks about math concepts or problems, delegate to the math_tutor_agent.
      3. Always provide clear explanations and encourage learning.
    sub_agents:
      - config_path: code_tutor_agent.yaml
      - config_path: math_tutor_agent.yaml
    ```

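The `config_path` entries above point at sibling YAML files in the same agent folder. A plausible sketch of the referenced `code_tutor_agent.yaml` (the actual child configs are not shown here, so treat the names and wording as assumptions):

```yaml
# code_tutor_agent.yaml -- hypothetical child agent referenced by root_agent.yaml
agent_class: LlmAgent
model: gemini-2.5-flash
name: code_tutor_agent
description: Tutors students on programming and coding questions.
instruction: |
  You are a patient coding tutor. Explain programming concepts step by step
  and show short, runnable examples where helpful.
```

The child's `name` must match the identifier the root agent's instruction uses for delegation (`code_tutor_agent`), and its `description` is what the root LLM reads when deciding where to route a question.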
#### **Loading Agent Config in Python**

```python
from google.adk.agents import config_agent_utils
root_agent = config_agent_utils.from_config("{agent_folder}/root_agent.yaml")
```

#### **Running Agent Config Agents**

From the agent directory, use any of these commands:
* `adk web` - Launch web UI interface
* `adk run` - Run in terminal without UI
* `adk api_server` - Run as a service for other applications

#### **Deployment Support**

Agent Config agents can be deployed using:
* `adk deploy cloud_run` - Deploy to Google Cloud Run
* `adk deploy agent_engine` - Deploy to Vertex AI Agent Engine

#### **Key Features & Capabilities**

* **Supported Built-in Tools**: `google_search`, `load_artifacts`, `url_context`, `exit_loop`, `preload_memory`, `get_user_choice`, `enterprise_web_search`, `load_web_page`
* **Custom Tool Integration**: Reference Python functions using fully qualified module paths
* **Multi-Agent Orchestration**: Link agents via `config_path` references
* **Schema Validation**: Built-in YAML schema for IDE support and validation

#### **Current Limitations** (Experimental Feature)

* **Model Support**: Only Gemini models currently supported
* **Language Support**: Custom tools must be written in Python
* **Unsupported Agent Types**: `LangGraphAgent`, `A2aAgent`
* **Unsupported Tools**: `AgentTool`, `LongRunningFunctionTool`, `VertexAiSearchTool`, `MCPToolset`, `LangchainTool`, `ExampleTool`

For complete examples and reference, see the [ADK samples repository](https://github.com/search?q=repo%3Agoogle%2Fadk-python+path%3A%2F%5Econtributing%5C%2Fsamples%5C%2F%2F+.yaml&type=code).

---

## 2. Agent Definitions (`LlmAgent`)

The `LlmAgent` is the cornerstone of intelligent behavior, leveraging an LLM for reasoning and decision-making.

### 2.1 Basic `LlmAgent` Setup

```python
from google.adk.agents import Agent

def get_current_time(city: str) -> dict:
    """Returns the current time in a specified city."""
    # Mock implementation
    if city.lower() == "new york":
        return {"status": "success", "time": "10:30 AM EST"}
    return {"status": "error", "message": f"Time for {city} not available."}

my_first_llm_agent = Agent(
    name="time_teller_agent",
    model="gemini-2.5-flash",  # Essential: The LLM powering the agent
    instruction="You are a helpful assistant that tells the current time in cities. Use the 'get_current_time' tool for this purpose.",
    description="Tells the current time in a specified city.",  # Crucial for multi-agent delegation
    tools=[get_current_time]  # List of callable functions/tool instances
)
```

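Because tool functions like `get_current_time` are plain Python, they can be exercised in isolation before being handed to an agent. A quick sketch repeating the mock tool above so it stands alone (no ADK import needed for this check):

```python
# Same mock tool as in the agent definition above, repeated for self-containment.
def get_current_time(city: str) -> dict:
    """Returns the current time in a specified city."""
    if city.lower() == "new york":
        return {"status": "success", "time": "10:30 AM EST"}
    return {"status": "error", "message": f"Time for {city} not available."}

# Returning an explicit "status" field lets the LLM reason about failures
# instead of guessing from a missing key.
ok = get_current_time("New York")
err = get_current_time("Paris")
assert ok["status"] == "success"
assert err["status"] == "error"
```

Checks like these belong in `tests/unit/test_tools.py` in the layout from Section 1.3.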
| 268 |
+
### 2.2 Advanced `LlmAgent` Configuration
|
| 269 |
+
|
| 270 |
+
* **`generate_content_config`**: Controls LLM generation parameters (temperature, token limits, safety).
|
| 271 |
+
```python
|
| 272 |
+
from google.genai import types as genai_types
|
| 273 |
+
from google.adk.agents import Agent
|
| 274 |
+
|
| 275 |
+
gen_config = genai_types.GenerateContentConfig(
|
| 276 |
+
temperature=0.2, # Controls randomness (0.0-1.0), lower for more deterministic.
|
| 277 |
+
top_p=0.9, # Nucleus sampling: sample from top_p probability mass.
|
| 278 |
+
top_k=40, # Top-k sampling: sample from top_k most likely tokens.
|
| 279 |
+
max_output_tokens=1024, # Max tokens in LLM's response.
|
| 280 |
+
stop_sequences=["## END"] # LLM will stop generating if these sequences appear.
|
| 281 |
+
)
|
| 282 |
+
agent = Agent(
|
| 283 |
+
# ... basic config ...
|
| 284 |
+
generate_content_config=gen_config
|
| 285 |
+
)
|
| 286 |
+
```
|
| 287 |
+
|
| 288 |
+
* **`output_key`**: Automatically saves the agent's final text or structured (if `output_schema` is used) response to the `session.state` under this key. Facilitates data flow between agents.
|
| 289 |
+
```python
|
| 290 |
+
agent = Agent(
|
| 291 |
+
# ... basic config ...
|
| 292 |
+
output_key="llm_final_response_text"
|
| 293 |
+
)
|
| 294 |
+
# After agent runs, session.state['llm_final_response_text'] will contain its output.
|
| 295 |
+
```
|
| 296 |
+
|
* **`input_schema` & `output_schema`**: Define strict JSON input/output formats using Pydantic models.

> **Warning**: Using `output_schema` forces the LLM to generate JSON and **disables** its ability to use tools or delegate to other agents.

#### **Example: Defining and Using Structured Output**

This is the most reliable way to make an LLM produce predictable, parseable JSON, which is essential for multi-agent workflows.

1. **Define the Schema with Pydantic:**
    ```python
    from pydantic import BaseModel, Field
    from typing import Literal

    class SearchQuery(BaseModel):
        """Model representing a specific search query for web search."""
        search_query: str = Field(
            description="A highly specific and targeted query for web search."
        )

    class Feedback(BaseModel):
        """Model for providing evaluation feedback on research quality."""
        grade: Literal["pass", "fail"] = Field(
            description="Evaluation result. 'pass' if the research is sufficient, 'fail' if it needs revision."
        )
        comment: str = Field(
            description="Detailed explanation of the evaluation, highlighting strengths and/or weaknesses of the research."
        )
        follow_up_queries: list[SearchQuery] | None = Field(
            default=None,
            description="A list of specific, targeted follow-up search queries needed to fix research gaps. This should be null or empty if the grade is 'pass'."
        )
    ```
    * **`BaseModel` & `Field`**: Define data types, defaults, and crucial `description` fields. These descriptions are sent to the LLM to guide its output.
    * **`Literal`**: Enforces strict enum-like values (`"pass"` or `"fail"`), preventing the LLM from hallucinating unexpected values.

2. **Assign the Schema to an `LlmAgent`:**
    ```python
    research_evaluator = LlmAgent(
        name="research_evaluator",
        model="gemini-2.5-pro",
        instruction="""You are a meticulous quality assurance analyst. Evaluate the research findings in 'section_research_findings' and be very critical.
        If you find significant gaps, assign a grade of 'fail', write a detailed comment, and generate 5-7 specific follow-up queries.
        If the research is thorough, grade it 'pass'.
        Your response must be a single, raw JSON object validating against the 'Feedback' schema.
        """,
        output_schema=Feedback,  # Forces the LLM to output JSON matching the Feedback model.
        output_key="research_evaluation",  # The resulting JSON object is saved to session state.
        disallow_transfer_to_peers=True,  # Prevents delegation; this agent's only job is to evaluate.
    )
    ```

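The `Literal` constraint in the `Feedback` schema can be mimicked in plain Python. This stdlib-only sketch (not ADK or Pydantic code) shows what enum-like enforcement means for the `grade` field:

```python
from typing import Literal, get_args

# The same enum-like type used by the Feedback schema above.
Grade = Literal["pass", "fail"]

def validate_grade(value: str) -> str:
    """Accept only the values permitted by the Grade Literal."""
    allowed = get_args(Grade)
    if value not in allowed:
        raise ValueError(f"grade must be one of {allowed}, got {value!r}")
    return value

print(validate_grade("pass"))  # → pass
```

Pydantic performs this check automatically during model validation, which is why `Literal` fields are a cheap guard against hallucinated values.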
* **`include_contents`**: Controls whether the conversation history is sent to the LLM.
    * `'default'` (default): Sends relevant history.
    * `'none'`: Sends no history; the agent operates purely on the current turn's input and its `instruction`. Useful for stateless API wrapper agents.
    ```python
    agent = Agent(..., include_contents='none')
    ```

* **`planner`**: Assign a `BasePlanner` instance to enable multi-step reasoning.
    * **`BuiltInPlanner`**: Leverages a model's native "thinking" or planning capabilities (e.g., Gemini).
    ```python
    from google.adk.planners import BuiltInPlanner
    from google.genai.types import ThinkingConfig

    agent = Agent(
        model="gemini-2.5-flash",
        planner=BuiltInPlanner(
            thinking_config=ThinkingConfig(include_thoughts=True)
        ),
        # ... tools ...
    )
    ```
    * **`PlanReActPlanner`**: Instructs the model to follow a structured Plan-Reason-Act output format, useful for models without built-in planning.

* **`code_executor`**: Assign a `BaseCodeExecutor` to allow the agent to execute code blocks.
    * **`BuiltInCodeExecutor`**: The standard, sandboxed code executor provided by ADK for safe execution.
    ```python
    from google.adk.code_executors import BuiltInCodeExecutor

    agent = Agent(
        name="code_agent",
        model="gemini-2.5-flash",
        instruction="Write and execute Python code to solve math problems.",
        code_executor=BuiltInCodeExecutor()  # A single executor instance, not a list.
    )
    ```

* **Callbacks**: Hooks for observing and modifying agent behavior at key lifecycle points (`before_model_callback`, `after_tool_callback`, etc.). (Covered in Callbacks.)

### 2.3 LLM Instruction Crafting (`instruction`)

The `instruction` is critical. It guides the LLM's behavior, persona, and tool usage. The following examples demonstrate powerful techniques for creating specialized, reliable agents.

**Best Practices & Examples:**

* **Be Specific & Concise**: Avoid ambiguity.
* **Define Persona & Role**: Give the LLM a clear role.
* **Constrain Behavior & Tool Use**: Explicitly state what the LLM *should and should not* do.
* **Define Output Format**: Tell the LLM *exactly* what its output should look like, especially when not using `output_schema`.
* **Dynamic Injection**: Use `{state_key}` to inject runtime data from `session.state` into the prompt.
* **Iteration**: Test, observe, and refine instructions.

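Conceptually, `{state_key}` injection behaves like string templating over `session.state`. This stdlib-only sketch models the substitution (it is an illustration, not ADK internals):

```python
# Hypothetical stand-in for session.state, for illustration only.
state = {"document_summary": "ADK supports workflow and LLM agents."}

instruction_template = (
    "Generate 3 comprehension questions based on this summary: {document_summary}"
)

# ADK resolves {state_key} placeholders against session.state before
# sending the instruction to the LLM; str.format models that behavior.
resolved = instruction_template.format(**state)
print(resolved)
```

If a referenced key is missing from state, the real framework raises an error at injection time, so keep placeholder names in sync with the `output_key`s of upstream agents.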
**Example 1: Constraining Tool Use and Output Format**
```python
import datetime
from google.adk.tools import google_search

plan_generator = LlmAgent(
    model="gemini-2.5-flash",
    name="plan_generator",
    description="Generates a 4-5 line action-oriented research plan.",
    instruction=f"""
    You are a research strategist. Your job is to create a high-level RESEARCH PLAN, not a summary.
    **RULE: Your output MUST be a bulleted list of 4-5 action-oriented research goals or key questions.**
    - A good goal starts with a verb like "Analyze," "Identify," "Investigate."
    - A bad output is a statement of fact like "The event was in April 2024."
    **TOOL USE IS STRICTLY LIMITED:**
    Your goal is to create a generic, high-quality plan *without searching*.
    Only use `google_search` if a topic is ambiguous and you absolutely cannot create a plan without it.
    You are explicitly forbidden from researching the *content* or *themes* of the topic.
    Current date: {datetime.datetime.now().strftime("%Y-%m-%d")}
    """,
    tools=[google_search],
)
```

**Example 2: Injecting Data from State and Specifying Custom Tags**
This agent's `instruction` relies on data placed in `session.state` by previous agents.
```python
report_composer = LlmAgent(
    model="gemini-2.5-pro",
    name="report_composer_with_citations",
    include_contents="none",  # History not needed; all data is injected.
    description="Transforms research data and a markdown outline into a final, cited report.",
    instruction="""
    Transform the provided data into a polished, professional, and meticulously cited research report.

    ---
    ### INPUT DATA
    * Research Plan: `{research_plan}`
    * Research Findings: `{section_research_findings}`
    * Citation Sources: `{sources}`
    * Report Structure: `{report_sections}`

    ---
    ### CRITICAL: Citation System
    To cite a source, you MUST insert a special citation tag directly after the claim it supports.

    **The only correct format is:** `<cite source="src-ID_NUMBER" />`

    ---
    ### Final Instructions
    Generate a comprehensive report using ONLY the `<cite source="src-ID_NUMBER" />` tag system for all citations.
    The final report must strictly follow the structure provided in the **Report Structure** markdown outline.
    Do not include a "References" or "Sources" section; all citations must be in-line.
    """,
    output_key="final_cited_report",
)
```

### 2.4 Production Wrapper (`App`)
Wraps the `root_agent` to enable production-grade runtime features that an `Agent` cannot handle alone.

```python
from google.adk.apps.app import App
from google.adk.agents.context_cache_config import ContextCacheConfig
from google.adk.apps.events_compaction_config import EventsCompactionConfig
from google.adk.apps.resumability_config import ResumabilityConfig

production_app = App(
    name="my_app",
    root_agent=my_agent,
    # 1. Reduce costs/latency for long contexts
    context_cache_config=ContextCacheConfig(min_tokens=2048, ttl_seconds=600),
    # 2. Allow resuming crashed workflows from last state
    resumability_config=ResumabilityConfig(is_resumable=True),
    # 3. Manage long conversation history automatically
    events_compaction_config=EventsCompactionConfig(compaction_interval=5, overlap_size=1)
)

# Usage: Pass 'app' instead of 'agent' to the Runner
# runner = Runner(app=production_app, ...)
```

---

## 3. Orchestration with Workflow Agents

Workflow agents (`SequentialAgent`, `ParallelAgent`, `LoopAgent`) provide deterministic control flow, combining LLM capabilities with structured execution. They do **not** use an LLM for their own orchestration logic.

### 3.1 `SequentialAgent`: Linear Execution

Executes `sub_agents` one after another in the order defined. The `InvocationContext` is passed along, allowing state changes to be visible to subsequent agents.

```python
from google.adk.agents import SequentialAgent, Agent

# Agent 1: Summarizes a document and saves to state
summarizer = Agent(
    name="DocumentSummarizer",
    model="gemini-2.5-flash",
    instruction="Summarize the provided document in 3 sentences.",
    output_key="document_summary"  # Output saved to session.state['document_summary']
)

# Agent 2: Generates questions based on the summary from state
question_generator = Agent(
    name="QuestionGenerator",
    model="gemini-2.5-flash",
    instruction="Generate 3 comprehension questions based on this summary: {document_summary}",
    # 'document_summary' is dynamically injected from session.state
)

document_pipeline = SequentialAgent(
    name="SummaryQuestionPipeline",
    sub_agents=[summarizer, question_generator],  # Order matters!
    description="Summarizes a document then generates questions."
)
```

### 3.2 `ParallelAgent`: Concurrent Execution

Executes `sub_agents` simultaneously. Useful for independent tasks to reduce overall latency. All sub-agents share the same `session.state`.

```python
from google.adk.agents import ParallelAgent, Agent, SequentialAgent

# Agents to fetch data concurrently
fetch_stock_price = Agent(name="StockPriceFetcher", ..., output_key="stock_data")
fetch_news_headlines = Agent(name="NewsFetcher", ..., output_key="news_data")
fetch_social_sentiment = Agent(name="SentimentAnalyzer", ..., output_key="sentiment_data")

# Agent to merge results (runs after the ParallelAgent, usually in a SequentialAgent)
merger_agent = Agent(
    name="ReportGenerator",
    model="gemini-2.5-flash",
    instruction="Combine stock data: {stock_data}, news: {news_data}, and sentiment: {sentiment_data} into a market report."
)

# Pipeline to run parallel fetching then sequential merging
market_analysis_pipeline = SequentialAgent(
    name="MarketAnalyzer",
    sub_agents=[
        ParallelAgent(
            name="ConcurrentFetch",
            sub_agents=[fetch_stock_price, fetch_news_headlines, fetch_social_sentiment]
        ),
        merger_agent  # Runs after all parallel agents complete
    ]
)
```
* **Concurrency Caution**: When parallel agents write to the same `state` key, race conditions can occur. Always use distinct `output_key`s or manage concurrent writes explicitly.

### 3.3 `LoopAgent`: Iterative Processes

Repeatedly executes its `sub_agents` (sequentially within each loop iteration) until a condition is met or `max_iterations` is reached.

#### **Termination of `LoopAgent`**
A `LoopAgent` terminates when:
1. `max_iterations` is reached.
2. Any `Event` yielded by a sub-agent (or a tool within it) sets `actions.escalate = True`. This provides dynamic, content-driven loop termination.

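The two termination conditions can be sketched in plain Python. This is a conceptual model of the loop, not ADK internals; the callable sub-agents and the event dict shape are hypothetical stand-ins:

```python
def run_loop(sub_agents, max_iterations):
    """Conceptual model of LoopAgent termination (not ADK internals)."""
    iterations = 0
    while iterations < max_iterations:  # Condition 1: max_iterations cap
        for agent in sub_agents:
            event = agent()  # hypothetical: each sub-agent returns one event dict
            if event.get("escalate"):  # Condition 2: actions.escalate = True
                return iterations + 1  # stop immediately, mid-iteration
        iterations += 1
    return iterations

# A sub-agent that escalates on its third invocation.
calls = {"n": 0}
def evaluator():
    calls["n"] += 1
    return {"escalate": calls["n"] >= 3}

print(run_loop([evaluator], max_iterations=5))  # → 3
```

Note that escalation stops the loop immediately: later sub-agents in the same iteration (like `enhanced_search_executor` below) never run.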
#### **Example: Iterative Refinement Loop with a Custom `BaseAgent` for Control**
This example shows a loop that continues until a condition, determined by an evaluation agent, is met.

```python
from google.adk.agents import LoopAgent, Agent, BaseAgent
from google.adk.events import Event, EventActions
from google.adk.agents.invocation_context import InvocationContext
from typing import AsyncGenerator

# An LLM agent that evaluates research and produces structured JSON output
research_evaluator = Agent(
    name="research_evaluator",
    # ... configuration from Section 2.2 ...
    output_schema=Feedback,
    output_key="research_evaluation",
)

# An LLM agent that performs additional searches based on feedback
enhanced_search_executor = Agent(
    name="enhanced_search_executor",
    instruction="Execute the follow-up queries from 'research_evaluation' and combine with existing findings.",
    # ... other configurations ...
)

# A custom BaseAgent to check the evaluation and stop the loop
class EscalationChecker(BaseAgent):
    """Checks research evaluation and escalates to stop the loop if grade is 'pass'."""
    async def _run_async_impl(self, ctx: InvocationContext) -> AsyncGenerator[Event, None]:
        evaluation = ctx.session.state.get("research_evaluation")
        if evaluation and evaluation.get("grade") == "pass":
            # The key to stopping the loop: yield an Event with escalate=True
            yield Event(author=self.name, actions=EventActions(escalate=True))
        else:
            # Let the loop continue
            yield Event(author=self.name)

# Define the loop
iterative_refinement_loop = LoopAgent(
    name="IterativeRefinementLoop",
    sub_agents=[
        research_evaluator,                           # Step 1: Evaluate
        EscalationChecker(name="EscalationChecker"),  # Step 2: Check and maybe stop
        enhanced_search_executor,                     # Step 3: Refine (only runs if the loop didn't stop)
    ],
    max_iterations=5,  # Fallback to prevent infinite loops
    description="Iteratively evaluates and refines research until it passes quality checks."
)
```

---

## 4. Multi-Agent Systems & Communication

Building complex applications by composing multiple, specialized agents.

### 4.1 Agent Hierarchy

A hierarchical (tree-like) structure of parent-child relationships defined by the `sub_agents` parameter during `BaseAgent` initialization. An agent can only have one parent.

```python
# Conceptual Hierarchy
# Root
# └── Coordinator (LlmAgent)
#     ├── SalesAgent (LlmAgent)
#     └── SupportAgent (LlmAgent)
#         └── DataPipeline (SequentialAgent)
#             ├── DataFetcher (LlmAgent)
#             └── DataProcessor (LlmAgent)
```

### 4.2 Inter-Agent Communication Mechanisms

1. **Shared Session State (`session.state`)**: The most common and robust method. Agents read from and write to the same mutable dictionary.
    * **Mechanism**: Agent A sets `ctx.session.state['key'] = value`. Agent B later reads `ctx.session.state.get('key')`. `output_key` on `LlmAgent` is a convenient auto-setter.
    * **Best for**: Passing intermediate results, shared configurations, and flags in pipelines (Sequential and Loop agents).

2. **LLM-Driven Delegation (`transfer_to_agent`)**: A `LlmAgent` can dynamically hand over control to another agent based on its reasoning.
    * **Mechanism**: The LLM generates a special `transfer_to_agent` function call. The ADK framework intercepts this and routes the next turn to the target agent.
    * **Prerequisites**:
        * The initiating `LlmAgent` needs an `instruction` that guides delegation and a `description` of the target agent(s).
        * Target agents need clear `description`s to help the LLM decide.
        * The target agent must be discoverable within the current agent's hierarchy (a direct `sub_agent` or a descendant).
    * **Configuration**: Can be enabled/disabled via `disallow_transfer_to_parent` and `disallow_transfer_to_peers` on `LlmAgent`.

3. **Explicit Invocation (`AgentTool`)**: An `LlmAgent` can treat another `BaseAgent` instance as a callable tool.
    * **Mechanism**: Wrap the target agent in `AgentTool(agent=target_agent)` and add it to the calling `LlmAgent`'s `tools` list. The `AgentTool` generates a `FunctionDeclaration` for the LLM. When called, `AgentTool` runs the target agent and returns its final response as the tool result.
    * **Best for**: Hierarchical task decomposition, where a higher-level agent needs a specific output from a lower-level agent.

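Mechanism 1 can be modeled with an ordinary dictionary. This stdlib-only sketch (not ADK code) shows the write-then-read handoff that `output_key` automates:

```python
# Hypothetical stand-in for session.state, for illustration only.
session_state = {}

def agent_a(state):
    # Equivalent of output_key="document_summary": write the result to state.
    state["document_summary"] = "Three-sentence summary of the document."

def agent_b(state):
    # A later agent reads the intermediate result from the shared state.
    summary = state.get("document_summary", "")
    return f"Questions based on: {summary}"

agent_a(session_state)
print(agent_b(session_state))
```

Because every agent in a pipeline sees the same dictionary, the only coordination needed is agreeing on key names, which is why the caution about distinct `output_key`s in Section 3.2 matters.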
**Delegation vs. Agent-as-a-Tool**
* **Delegation (`sub_agents`)**: The parent agent *transfers control*. The sub-agent interacts directly with the user for subsequent turns until it finishes.
* **Agent-as-a-Tool (`AgentTool`)**: The parent agent *calls* another agent like a function. The parent remains in control, receives the sub-agent's entire interaction as a single tool result, and summarizes it for the user.

```python
# Delegation: "I'll let the specialist handle this conversation."
root = Agent(name="root", sub_agents=[specialist])

# Agent-as-a-Tool: "I need the specialist to do a task and give me the results."
from google.adk.tools import AgentTool
root = Agent(name="root", tools=[AgentTool(agent=specialist)])
```

### 4.3 Common Multi-Agent Patterns

* **Coordinator/Dispatcher**: A central agent routes requests to specialized sub-agents (often via LLM-driven delegation).
* **Sequential Pipeline**: `SequentialAgent` orchestrates a fixed sequence of tasks, passing data via shared state.
* **Parallel Fan-Out/Gather**: `ParallelAgent` runs concurrent tasks, followed by a final agent that synthesizes results from state.
* **Review/Critique (Generator-Critic)**: `SequentialAgent` with a generator followed by a critic, often in a `LoopAgent` for iterative refinement.
* **Hierarchical Task Decomposition (Planner/Executor)**: High-level agents break down complex problems, delegating sub-tasks to lower-level agents (often via `AgentTool` and delegation).

#### **Example: Hierarchical Planner/Executor Pattern**
This pattern combines several mechanisms. A top-level `interactive_planner_agent` uses another agent (`plan_generator`) as a tool to create a plan, then delegates the execution of that plan to a complex `SequentialAgent` (`research_pipeline`).

```python
from google.adk.agents import LlmAgent, SequentialAgent, LoopAgent
from google.adk.tools.agent_tool import AgentTool

# Assume plan_generator, section_planner, research_evaluator, etc. are defined.

# The execution pipeline itself is a complex agent.
research_pipeline = SequentialAgent(
    name="research_pipeline",
    description="Executes a pre-approved research plan. It performs iterative research, evaluation, and composes a final, cited report.",
    sub_agents=[
        section_planner,
        section_researcher,
        LoopAgent(
            name="iterative_refinement_loop",
            max_iterations=3,
            sub_agents=[
                research_evaluator,
                EscalationChecker(name="escalation_checker"),
                enhanced_search_executor,
            ],
        ),
        report_composer,
    ],
)

# The top-level agent that interacts with the user.
interactive_planner_agent = LlmAgent(
    name="interactive_planner_agent",
    model="gemini-2.5-flash",
    description="The primary research assistant. It collaborates with the user to create a research plan, and then executes it upon approval.",
    instruction="""
    You are a research planning assistant. Your workflow is:
    1. **Plan:** Use the `plan_generator` tool to create a draft research plan.
    2. **Refine:** Incorporate user feedback until the plan is approved.
    3. **Execute:** Once the user gives EXPLICIT approval (e.g., "looks good, run it"), you MUST delegate the task to the `research_pipeline` agent.
    Your job is to Plan, Refine, and Delegate. Do not do the research yourself.
    """,
    # The planner delegates to the pipeline.
    sub_agents=[research_pipeline],
    # The planner uses another agent as a tool.
    tools=[AgentTool(agent=plan_generator)],
    output_key="research_plan",
)

# The root agent of the application is the top-level planner.
root_agent = interactive_planner_agent
```

### 4.4 Distributed Communication (A2A Protocol)

The Agent-to-Agent (A2A) Protocol enables agents to communicate over a network, even if they are written in different languages or run as separate services. Use A2A for integrating with third-party agents, building microservice-based agent architectures, or when a strong, formal API contract is needed. For internal code organization, prefer local sub-agents.

* **Exposing an Agent**: Make an existing ADK agent available to others over A2A.
    * **`to_a2a()` Utility**: The simplest method. Wraps your `root_agent` and creates a runnable FastAPI app, auto-generating the required `agent.json` card.
    ```python
    from google.adk.a2a.utils.agent_to_a2a import to_a2a
    # root_agent is your existing ADK Agent instance
    a2a_app = to_a2a(root_agent, port=8001)
    # Run with: uvicorn your_module:a2a_app --host localhost --port 8001
    ```
    * **`adk api_server --a2a`**: A CLI command that serves agents from a directory. Requires you to manually create an `agent.json` card for each agent you want to expose.

* **Consuming a Remote Agent**: Use a remote A2A agent as if it were a local agent.
    * **`RemoteA2aAgent`**: This agent acts as a client proxy. You initialize it with the URL of the remote agent's card.
    ```python
    from google.adk.a2a.remote_a2a_agent import RemoteA2aAgent

    # This agent can now be used as a sub-agent or tool
    prime_checker_agent = RemoteA2aAgent(
        name="prime_agent",
        description="A remote agent that checks if numbers are prime.",
        agent_card="http://localhost:8001/a2a/check_prime_agent/.well-known/agent.json"
    )
    ```

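For orientation, an `agent.json` card is a small JSON document describing the remote agent's identity, endpoint, and skills. The exact schema is defined by the A2A specification; the field names below are illustrative, not normative:

```json
{
  "name": "check_prime_agent",
  "description": "Checks whether numbers are prime.",
  "url": "http://localhost:8001/a2a/check_prime_agent",
  "version": "1.0.0",
  "capabilities": { "streaming": false },
  "skills": [
    {
      "id": "check_prime",
      "name": "Check prime",
      "description": "Determines if a given integer is prime."
    }
  ]
}
```

`to_a2a()` generates a card like this automatically; with `adk api_server --a2a` you author it by hand.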
---

## 5. Building Custom Agents (`BaseAgent`)

For unique orchestration logic that doesn't fit standard workflow agents, inherit directly from `BaseAgent`.

### 5.1 When to Use Custom Agents

* **Complex Conditional Logic**: `if/else` branching based on multiple state variables.
* **Dynamic Agent Selection**: Choosing which sub-agent to run based on runtime evaluation.
* **Direct External Integrations**: Calling external APIs or libraries directly within the orchestration flow.
* **Custom Loop/Retry Logic**: More sophisticated iteration patterns than `LoopAgent`, such as the `EscalationChecker` example.

### 5.2 Implementing `_run_async_impl`

This is the core asynchronous method you must override.

#### **Example: A Custom Agent for Loop Control**
This agent reads state, applies simple Python logic, and yields an `Event` with an `escalate` action to control a `LoopAgent`.

```python
from google.adk.agents import BaseAgent
from google.adk.agents.invocation_context import InvocationContext
from google.adk.events import Event, EventActions
from typing import AsyncGenerator
import logging

class EscalationChecker(BaseAgent):
    """Checks research evaluation and escalates to stop the loop if grade is 'pass'."""

    def __init__(self, name: str):
        super().__init__(name=name)

    async def _run_async_impl(
        self, ctx: InvocationContext
    ) -> AsyncGenerator[Event, None]:
        # 1. Read from session state.
        evaluation_result = ctx.session.state.get("research_evaluation")

        # 2. Apply custom Python logic.
        if evaluation_result and evaluation_result.get("grade") == "pass":
            logging.info(
                f"[{self.name}] Research passed. Escalating to stop loop."
            )
            # 3. Yield an Event with a control Action.
            yield Event(author=self.name, actions=EventActions(escalate=True))
        else:
            logging.info(
                f"[{self.name}] Research failed or not found. Loop continues."
            )
            # Yielding an event without actions lets the flow continue.
            yield Event(author=self.name)
```
* **Asynchronous Generator**: `async def ... yield Event`. This allows pausing and resuming execution.
* **`ctx: InvocationContext`**: Provides access to all session state (`ctx.session.state`).
* **Calling Sub-Agents**: Use `async for event in self.sub_agent_instance.run_async(ctx): yield event`.
* **Control Flow**: Use standard Python `if/else` and `for`/`while` loops for complex logic.

---

## 6. Models: Gemini, LiteLLM, and Vertex AI

ADK's model flexibility allows integrating various LLMs for different needs.

### 6.1 Google Gemini Models (AI Studio & Vertex AI)

* **Default Integration**: Native support via the `google-genai` library.
* **AI Studio (Easy Start)**:
    * Set `GOOGLE_API_KEY="YOUR_API_KEY"` (environment variable).
    * Set `GOOGLE_GENAI_USE_VERTEXAI="False"`.
    * Model strings: `"gemini-2.5-flash"`, `"gemini-2.5-pro"`, etc.
* **Vertex AI (Production)**:
    * Authenticate via `gcloud auth application-default login` (recommended).
    * Set `GOOGLE_CLOUD_PROJECT="YOUR_PROJECT_ID"` and `GOOGLE_CLOUD_LOCATION="your-region"` (environment variables).
    * Set `GOOGLE_GENAI_USE_VERTEXAI="True"`.
    * Model strings: `"gemini-2.5-flash"`, `"gemini-2.5-pro"`, or full Vertex AI endpoint resource names for specific deployments.

### 6.2 Other Cloud & Proprietary Models via LiteLLM

`LiteLlm` provides a unified interface to 100+ LLMs (OpenAI, Anthropic, Cohere, etc.).

* **Installation**: `pip install litellm`
* **API Keys**: Set environment variables as required by LiteLLM (e.g., `OPENAI_API_KEY`, `ANTHROPIC_API_KEY`).
* **Usage**:
    ```python
    from google.adk.models.lite_llm import LiteLlm
    agent_openai = Agent(model=LiteLlm(model="openai/gpt-4o"), ...)
    agent_claude = Agent(model=LiteLlm(model="anthropic/claude-3-haiku-20240307"), ...)
    ```

### 6.3 Open & Local Models via LiteLLM (Ollama, vLLM)

For self-hosting, cost savings, privacy, or offline use.

* **Ollama Integration**: Run Ollama locally (`ollama run <model>`).
    ```bash
    export OLLAMA_API_BASE="http://localhost:11434"  # Ensure the Ollama server is running
    ```
    ```python
    from google.adk.models.lite_llm import LiteLlm
    # Use the 'ollama_chat' provider for tool-calling capabilities with Ollama models
    agent_ollama = Agent(model=LiteLlm(model="ollama_chat/llama3:instruct"), ...)
    ```

* **Self-Hosted Endpoint (e.g., vLLM)**:
    ```python
    from google.adk.models.lite_llm import LiteLlm
    api_base_url = "https://your-vllm-endpoint.example.com/v1"
    agent_vllm = Agent(
        model=LiteLlm(
            model="your-model-name-on-vllm",
            api_base=api_base_url,
            extra_headers={"Authorization": "Bearer YOUR_TOKEN"},
        ),
        ...
    )
    ```

### 6.4 Customizing LLM API Clients

For `google-genai` (used by Gemini models), you can configure the underlying client, for example when instantiating a client directly inside a custom tool:

```python
import os
from google import genai
from google.genai import types

# Configure the google-genai client with an explicit API key and timeout.
# Note: HttpOptions.timeout is specified in milliseconds.
client = genai.Client(
    api_key=os.getenv("GOOGLE_API_KEY"),
    http_options=types.HttpOptions(timeout=60_000),
)
```

---

## 7. Tools: The Agent's Capabilities

Tools extend an agent's abilities beyond text generation.

### 7.1 Defining Function Tools: Principles & Best Practices

* **Signature**: `def my_tool(param1: Type, param2: Type, tool_context: ToolContext) -> dict:`
* **Function Name**: Descriptive verb-noun (e.g., `schedule_meeting`).
* **Parameters**: Clear names, required type hints, **NO DEFAULT VALUES**.
* **Return Type**: **Must** be a `dict` (JSON-serializable), preferably with a `'status'` key.
* **Docstring**: **CRITICAL**. Explain the purpose, when to use it, the arguments, and the return value structure. **AVOID** mentioning `tool_context`.

```python
from google.adk.tools import ToolContext

def calculate_compound_interest(
    principal: float,
    rate: float,
    years: int,
    compounding_frequency: int,
    tool_context: ToolContext
) -> dict:
    """Calculates the future value of an investment with compound interest.

    Use this tool to calculate the future value of an investment given a
    principal amount, interest rate, number of years, and how often the
    interest is compounded per year.

    Args:
        principal (float): The initial amount of money invested.
        rate (float): The annual interest rate (e.g., 0.05 for 5%).
        years (int): The number of years the money is invested.
        compounding_frequency (int): The number of times interest is compounded
            per year (e.g., 1 for annually, 12 for monthly).

    Returns:
        dict: Contains the calculation result.
            - 'status' (str): "success" or "error".
            - 'future_value' (float, optional): The calculated future value.
            - 'error_message' (str, optional): Description of the error, if any.
    """
    # ... implementation ...
```

| 922 |
+
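The elided `# ... implementation ...` body boils down to the compound-interest formula FV = P · (1 + r/n)^(n·t). A minimal sketch of the calculation itself (the `tool_context` parameter is omitted because the math does not need it; the dict shape matches the docstring above):

```python
def compound_interest_impl(
    principal: float, rate: float, years: int, compounding_frequency: int
) -> dict:
    """Core calculation for the tool above: FV = P * (1 + r/n) ** (n * t)."""
    if principal < 0 or rate < 0 or years < 0 or compounding_frequency < 1:
        return {"status": "error",
                "error_message": "Amounts must be non-negative and frequency >= 1."}
    future_value = principal * (1 + rate / compounding_frequency) ** (
        compounding_frequency * years
    )
    return {"status": "success", "future_value": round(future_value, 2)}
```

For example, `compound_interest_impl(1000, 0.05, 10, 12)` returns a future value of about 1647.01.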
### 7.2 The `ToolContext` Object: Accessing Runtime Information

`ToolContext` is the gateway for tools to interact with the ADK runtime.

* `tool_context.state`: Read and write to the current `Session`'s `state` dictionary.
* `tool_context.actions`: Modify the `EventActions` object (e.g., `tool_context.actions.escalate = True`).
* `tool_context.load_artifact(filename)` / `tool_context.save_artifact(filename, part)`: Manage binary data.
* `tool_context.search_memory(query)`: Query the long-term `MemoryService`.

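A small sketch of a tool exercising `tool_context.state` and `tool_context.actions.escalate`. Since the real `ToolContext` is constructed by the ADK runtime, a `SimpleNamespace` stand-in is used here purely for illustration:

```python
from types import SimpleNamespace

def fake_tool_context() -> SimpleNamespace:
    # Stand-in exposing only the two attributes used below; the real
    # ToolContext also offers load_artifact/save_artifact/search_memory.
    return SimpleNamespace(state={}, actions=SimpleNamespace(escalate=False))

def log_support_ticket(summary: str, tool_context) -> dict:
    """Counts tickets filed in this session; escalates after the third."""
    count = tool_context.state.get("ticket_count", 0) + 1
    tool_context.state["ticket_count"] = count  # written back via state_delta
    if count >= 3:
        tool_context.actions.escalate = True  # signal the parent agent
    return {"status": "success", "ticket_number": count, "summary": summary}
```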
### 7.3 All Tool Types & Their Usage

1. **Custom Function Tools**:
    * **`FunctionTool`**: The most common type, wrapping a standard Python function.
    * **`LongRunningFunctionTool`**: Wraps an `async` function that `yields` intermediate results, for tasks that provide progress updates.
    * **`AgentTool`**: Wraps another `BaseAgent` instance, allowing it to be invoked as a tool by a parent agent.

2. **Built-in Tools**: Ready-to-use tools provided by ADK.
    * `google_search`: Provides Google Search grounding.
    * **Code Execution**:
        * `BuiltInCodeExecutor`: Local, convenient for development. **Not** for untrusted production use.
        * `GkeCodeExecutor`: Production-grade. Executes code in ephemeral, sandboxed pods on Google Kubernetes Engine (GKE) using gVisor for isolation. Requires GKE cluster setup.
    * `VertexAiSearchTool`: Provides grounding from your private Vertex AI Search data stores.
    * `BigQueryToolset`: A collection of tools for interacting with BigQuery (e.g., `list_datasets`, `execute_sql`).

    > **Warning**: An agent can only use one type of built-in tool at a time, and built-in tools cannot be used in sub-agents.

3. **Third-Party Tool Wrappers**: For seamless integration with other frameworks.
    * `LangchainTool`: Wraps a tool from the LangChain ecosystem.

4. **OpenAPI & Protocol Tools**: For interacting with APIs and services.
    * **`OpenAPIToolset`**: Automatically generates a set of `RestApiTool`s from an OpenAPI (Swagger) v3 specification.
    * **`MCPToolset`**: Connects to an external Model Context Protocol (MCP) server to dynamically load its tools.

5. **Google Cloud Tools**: For deep integration with Google Cloud services.
    * **`ApiHubToolset`**: Turns any documented API from Apigee API Hub into a tool.
    * **`ApplicationIntegrationToolset`**: Turns Application Integration workflows and Integration Connectors (e.g., Salesforce, SAP) into callable tools.
    * **Toolbox for Databases**: An open-source MCP server that ADK can connect to for database interactions.

6. **Dynamic Toolsets (`BaseToolset`)**: Instead of a static list of tools, use a `Toolset` to dynamically determine which tools an agent can use based on the current context (e.g., user permissions).
    ```python
    from google.adk.tools.base_toolset import BaseToolset

    class AdminAwareToolset(BaseToolset):
        async def get_tools(self, context: ReadonlyContext) -> list[BaseTool]:
            # Check state to see if user is admin
            if context.state.get('user:role') == 'admin':
                return [admin_delete_tool, standard_query_tool]
            return [standard_query_tool]

    # Usage:
    agent = Agent(tools=[AdminAwareToolset()])
    ```

### 7.4 Tool Confirmation (Human-in-the-Loop)
ADK can pause tool execution to request human or system confirmation before proceeding, essential for sensitive actions.

* **Boolean Confirmation**: Simple yes/no via `FunctionTool(..., require_confirmation=True)`.
* **Dynamic Confirmation**: Pass a function to `require_confirmation` to decide at runtime based on arguments.
* **Advanced/Payload Confirmation**: Use `tool_context.request_confirmation()` inside the tool for structured feedback.

```python
from google.adk.tools import FunctionTool, ToolContext

# 1. Simple Boolean Confirmation
# Pauses execution until a 'confirmed': True/False event is received.
sensitive_tool = FunctionTool(delete_database, require_confirmation=True)

# 2. Dynamic Threshold Confirmation
def needs_approval(amount: float, **kwargs) -> bool:
    return amount > 10000

transfer_tool = FunctionTool(wire_money, require_confirmation=needs_approval)

# 3. Advanced Payload Confirmation (inside tool definition)
def book_flight(destination: str, price: float, tool_context: ToolContext):
    # Pause and ask user to select a seat class before continuing
    tool_context.request_confirmation(
        hint="Please confirm booking and select seat class.",
        payload={"seat_class": ["economy", "business", "first"]}  # Expected structure
    )
    return {"status": "pending_confirmation"}
```

---

## 8. Context, State, and Memory Management

Effective context management is crucial for coherent, multi-turn conversations.

### 8.1 The `Session` Object & `SessionService`

* **`Session`**: The container for a single, ongoing conversation (`id`, `state`, `events`).
* **`SessionService`**: Manages the lifecycle of `Session` objects (`create_session`, `get_session`, `append_event`).
* **Implementations**: `InMemorySessionService` (dev), `VertexAiSessionService` (prod), `DatabaseSessionService` (self-managed).

### 8.2 `State`: The Conversational Scratchpad

A mutable dictionary within `session.state` for short-term, dynamic data.

* **Update Mechanism**: Always update via `context.state` (in callbacks/tools) or `LlmAgent.output_key`.
* **Prefixes for Scope**:
    * **(No prefix)**: Session-specific (e.g., `session.state['booking_step']`).
    * `user:`: Persistent for a `user_id` across all their sessions (e.g., `session.state['user:preferred_currency']`).
    * `app:`: Persistent for `app_name` across all users and sessions.
    * `temp:`: Ephemeral state that only exists for the current **invocation** (one user request -> final agent response cycle). It is discarded afterwards.

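The four scopes can be illustrated with a plain-dict sketch. The real `SessionService` applies these rules internally when persisting state; `split_state_by_scope` is an illustrative helper, not an ADK API:

```python
def split_state_by_scope(state: dict) -> dict:
    """Partition a flat ADK-style state dict by key prefix.

    Illustration only: user:/app: keys outlive the session,
    temp: keys are discarded after the invocation.
    """
    scopes = {"session": {}, "user": {}, "app": {}, "temp": {}}
    for key, value in state.items():
        prefix, _, rest = key.partition(":")
        if rest and prefix in ("user", "app", "temp"):
            scopes[prefix][rest] = value
        else:
            scopes["session"][key] = value  # unprefixed -> session scope
    return scopes

state = {
    "booking_step": 2,
    "user:preferred_currency": "EUR",
    "app:feature_flags": {"beta": True},
    "temp:raw_api_response": "...",
}
```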
### 8.3 `Memory`: Long-Term Knowledge & Retrieval

For knowledge beyond a single conversation.

* **`BaseMemoryService`**: Defines the interface (`add_session_to_memory`, `search_memory`).
* **Implementations**: `InMemoryMemoryService`, `VertexAiRagMemoryService`.
* **Usage**: Agents interact via tools (e.g., the built-in `load_memory` tool).

### 8.4 `Artifacts`: Binary Data Management

For named, versioned binary data (files, images).

* **Representation**: `google.genai.types.Part` (containing a `Blob` with `data: bytes` and `mime_type: str`).
* **`BaseArtifactService`**: Manages storage (`save_artifact`, `load_artifact`).
* **Implementations**: `InMemoryArtifactService`, `GcsArtifactService`.

---

## 9. Runtime, Events, and Execution Flow

The `Runner` is the central orchestrator of an ADK application.

### 9.1 Runtime Configuration (`RunConfig`)
Passed to `run` or `run_live` to control execution limits and output formats.

```python
from google.adk.agents.run_config import RunConfig
from google.genai import types

config = RunConfig(
    # Safety limits
    max_llm_calls=100,  # Prevent infinite agent loops

    # Streaming & Modality
    response_modalities=["AUDIO", "TEXT"],  # Request specific output formats

    # Voice configuration (for AUDIO modality)
    speech_config=types.SpeechConfig(
        voice_config=types.VoiceConfig(
            prebuilt_voice_config=types.PrebuiltVoiceConfig(voice_name="Kore")
        )
    ),

    # Debugging
    save_input_blobs_as_artifacts=True  # Save uploaded files to ArtifactService
)
```

### 9.2 The `Runner`: The Orchestrator

* **Role**: Manages the agent's lifecycle, the event loop, and coordinates with services.
* **Entry Point**: `runner.run_async(user_id, session_id, new_message)`.

### 9.3 The Event Loop: Core Execution Flow

1. User input becomes a `user` `Event`.
2. `Runner` calls `agent.run_async(invocation_context)`.
3. Agent `yield`s an `Event` (e.g., tool call, text response). Execution pauses.
4. `Runner` processes the `Event` (applies state changes, etc.) and yields it to the client.
5. Execution resumes. This cycle repeats until the agent is done.

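The yield/pause/resume cycle above can be modeled with plain Python generators. This is a toy model of the loop, not ADK code (`toy_agent` and `toy_runner` are illustrative names):

```python
def toy_agent():
    """A toy agent that yields events, pausing after each yield
    until the runner-side loop resumes it (steps 3 and 5 above)."""
    yield {"author": "agent", "type": "tool_call", "tool": "get_weather"}
    yield {"author": "agent", "type": "text", "text": "It is sunny."}

def toy_runner(user_message: str) -> list[dict]:
    """Toy model of steps 1-5: wrap input as an event, then drain the agent."""
    events = [{"author": "user", "type": "text", "text": user_message}]
    for event in toy_agent():   # execution pauses inside toy_agent at each yield
        # Step 4: a real Runner would apply state_delta etc. here,
        # then forward the event to the client.
        events.append(event)
    return events

events = toy_runner("What's the weather?")
```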
### 9.4 `Event` Object: The Communication Backbone

`Event` objects carry all information and signals.

* `Event.author`: Source of the event (`'user'`, agent name, `'system'`).
* `Event.content`: The primary payload (text, function calls, function responses).
* `Event.actions`: Signals side effects (`state_delta`, `transfer_to_agent`, `escalate`).
* `Event.is_final_response()`: Helper to identify the complete, displayable message.

### 9.5 Asynchronous Programming (Python Specific)

ADK is built on `asyncio`. Use `async def`, `await`, and `async for` for all I/O-bound operations.

---

## 10. Control Flow with Callbacks

Callbacks are functions that intercept and control agent execution at specific points.

### 10.1 Callback Mechanism: Interception & Control

* **Definition**: A Python function assigned to an agent's `callback` parameter (e.g., `after_agent_callback=my_func`).
* **Context**: Receives a `CallbackContext` (or `ToolContext`) with runtime info.
* **Return Value**: **Crucially determines flow.**
    * `return None`: Allow the default action to proceed.
    * `return <Specific Object>`: **Override** the default action/result.

### 10.2 Types of Callbacks

1. **Agent Lifecycle**: `before_agent_callback`, `after_agent_callback`.
2. **LLM Interaction**: `before_model_callback`, `after_model_callback`.
3. **Tool Execution**: `before_tool_callback`, `after_tool_callback`.

### 10.3 Callback Best Practices

* **Keep Focused**: Each callback for a single purpose.
* **Performance**: Avoid blocking I/O or heavy computation.
* **Error Handling**: Use `try...except` to prevent crashes.

#### **Example 1: Data Aggregation with `after_agent_callback`**
This callback runs after an agent, inspects the `session.events` to find structured data from tool calls (like `google_search` results), and saves it to state for later use.

```python
from google.adk.agents.callback_context import CallbackContext

def collect_research_sources_callback(callback_context: CallbackContext) -> None:
    """Collects and organizes web research sources from agent events."""
    session = callback_context._invocation_context.session
    # Get existing sources from state to append to them.
    url_to_short_id = callback_context.state.get("url_to_short_id", {})
    sources = callback_context.state.get("sources", {})
    id_counter = len(url_to_short_id) + 1

    # Iterate through all events in the session to find grounding metadata.
    for event in session.events:
        if not (event.grounding_metadata and event.grounding_metadata.grounding_chunks):
            continue
        # ... logic to parse grounding_chunks and grounding_supports ...
        # (See full implementation in the original code snippet)

    # Save the updated source map back to state.
    callback_context.state["url_to_short_id"] = url_to_short_id
    callback_context.state["sources"] = sources

# Used in an agent like this:
# section_researcher = LlmAgent(..., after_agent_callback=collect_research_sources_callback)
```

#### **Example 2: Output Transformation with `after_agent_callback`**
This callback takes an LLM's raw output (containing custom tags), uses Python to format it into markdown, and returns the modified content, overriding the original.

```python
import re
from google.adk.agents.callback_context import CallbackContext
from google.genai import types as genai_types

def citation_replacement_callback(callback_context: CallbackContext) -> genai_types.Content:
    """Replaces <cite> tags in a report with Markdown-formatted links."""
    # 1. Get raw report and sources from state.
    final_report = callback_context.state.get("final_cited_report", "")
    sources = callback_context.state.get("sources", {})

    # 2. Define a replacer function for regex substitution.
    def tag_replacer(match: re.Match) -> str:
        short_id = match.group(1)
        if not (source_info := sources.get(short_id)):
            return ""  # Remove invalid tags
        title = source_info.get("title", short_id)
        return f" [{title}]({source_info['url']})"

    # 3. Use regex to find all <cite> tags and replace them.
    processed_report = re.sub(
        r'<cite\s+source\s*=\s*["\']?(src-\d+)["\']?\s*/>',
        tag_replacer,
        final_report,
    )
    processed_report = re.sub(r"\s+([.,;:])", r"\1", processed_report)  # Fix spacing

    # 4. Save the new version to state and return it to override the original agent output.
    callback_context.state["final_report_with_citations"] = processed_report
    return genai_types.Content(parts=[genai_types.Part(text=processed_report)])

# Used in an agent like this:
# report_composer = LlmAgent(..., after_agent_callback=citation_replacement_callback)
```

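The core transformation in Example 2 can be exercised standalone; the same regex with a made-up source map for illustration:

```python
import re

# Made-up source map for illustration.
sources = {"src-1": {"title": "ADK Docs", "url": "https://example.com/adk"}}

def tag_replacer(match: re.Match) -> str:
    short_id = match.group(1)
    if not (source_info := sources.get(short_id)):
        return ""  # Remove invalid tags
    title = source_info.get("title", short_id)
    return f" [{title}]({source_info['url']})"

report = 'Agents are composable.<cite source="src-1"/> Unknown.<cite source="src-9"/>'
processed = re.sub(r'<cite\s+source\s*=\s*["\']?(src-\d+)["\']?\s*/>', tag_replacer, report)
processed = re.sub(r"\s+([.,;:])", r"\1", processed)  # Fix spacing before punctuation
```

The known tag becomes a Markdown link; the unknown `src-9` tag is dropped.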
### 10.A. Global Control with Plugins

Plugins are stateful, reusable modules for implementing cross-cutting concerns that apply globally to all agents, tools, and model calls managed by a `Runner`. Unlike Callbacks, which are configured per-agent, Plugins are registered once on the `Runner`.

* **Use Cases**: Ideal for universal logging, application-wide policy enforcement, global caching, and collecting metrics.
* **Execution Order**: Plugin callbacks run **before** their corresponding agent-level callbacks. If a plugin callback returns a value, the agent-level callback is skipped.
* **Defining a Plugin**: Inherit from `BasePlugin` and implement callback methods.
    ```python
    from google.adk.plugins import BasePlugin
    from google.adk.agents.callback_context import CallbackContext
    from google.adk.models.llm_request import LlmRequest

    class AuditLoggingPlugin(BasePlugin):
        def __init__(self):
            super().__init__(name="audit_logger")

        async def before_model_callback(self, callback_context: CallbackContext, llm_request: LlmRequest):
            # Log every prompt sent to any LLM
            print(f"[AUDIT] Agent {callback_context.agent_name} calling LLM with: {llm_request.contents[-1]}")

        async def on_tool_error_callback(self, tool, error, **kwargs):
            # Global error handler for all tools
            print(f"[ALERT] Tool {tool.name} failed: {error}")
            # Optionally return a dict to suppress the exception and provide fallback
            return {"status": "error", "message": "An internal error occurred, handled by plugin."}
    ```
* **Registering a Plugin**:
    ```python
    from google.adk.runners import Runner
    # runner = Runner(agent=root_agent, ..., plugins=[AuditLoggingPlugin()])
    ```
* **Error Handling Callbacks**: Plugins support unique error hooks like `on_model_error_callback` and `on_tool_error_callback` for centralized error management.
* **Limitation**: Plugins are not supported by the `adk web` interface.

---

## 11. Authentication for Tools

Enabling agents to securely access protected external resources.

### 11.1 Core Concepts: `AuthScheme` & `AuthCredential`

* **`AuthScheme`**: Defines *how* an API expects authentication (e.g., `APIKey`, `HTTPBearer`, `OAuth2`, `OpenIdConnectWithConfig`).
* **`AuthCredential`**: Holds *initial* information to *start* the auth process (e.g., API key value, OAuth client ID/secret).

### 11.2 Interactive OAuth/OIDC Flows

When a tool requires user interaction (OAuth consent), ADK pauses and signals your `Agent Client` application.

1. **Detect Auth Request**: `runner.run_async()` yields an event with a special `adk_request_credential` function call.
2. **Redirect User**: Extract `auth_uri` from `auth_config` in the event. Your client app redirects the user's browser to this `auth_uri` (appending `redirect_uri`).
3. **Handle Callback**: Your client app has a pre-registered `redirect_uri` to receive the user after authorization. It captures the full callback URL (containing the `authorization_code`).
4. **Send Auth Result to ADK**: Your client prepares a `FunctionResponse` for `adk_request_credential`, setting `auth_config.exchanged_auth_credential.oauth2.auth_response_uri` to the captured callback URL.
5. **Resume Execution**: `runner.run_async()` is called again with this `FunctionResponse`. ADK performs the token exchange, stores the access token, and retries the original tool call.

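Step 2's URL construction can be sketched with the standard library (`build_redirect_url` is an illustrative helper; the actual `auth_uri` comes from the event's `auth_config`):

```python
from urllib.parse import parse_qs, urlencode, urlsplit

def build_redirect_url(auth_uri: str, redirect_uri: str) -> str:
    """Append the client's redirect_uri to the provider's auth_uri."""
    separator = "&" if urlsplit(auth_uri).query else "?"
    return auth_uri + separator + urlencode({"redirect_uri": redirect_uri})

url = build_redirect_url(
    "https://accounts.example.com/o/oauth2/auth?client_id=abc&scope=email",
    "https://my-app.example.com/oauth/callback",
)
```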
### 11.3 Custom Tool Authentication

If building a `FunctionTool` that needs authentication:

1. **Check for Cached Creds**: `tool_context.state.get("my_token_cache_key")`.
2. **Check for Auth Response**: `tool_context.get_auth_response(my_auth_config)`.
3. **Initiate Auth**: If no creds, call `tool_context.request_credential(my_auth_config)` and return a pending status. This triggers the external flow.
4. **Cache Credentials**: After obtaining, store in `tool_context.state`.
5. **Make API Call**: Use the valid credentials (e.g., `google.oauth2.credentials.Credentials`).

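The five steps can be sketched as a single tool body. `FakeToolContext` below is a stand-in for the auth-related surface of ADK's real `ToolContext`, and `my_auth_config` / `"my_token_cache_key"` are the illustrative names from the list above:

```python
class FakeToolContext:
    """Illustrative stand-in for ToolContext's auth-related surface."""
    def __init__(self, state=None, auth_response=None):
        self.state = state or {}
        self._auth_response = auth_response
        self.requested = False

    def get_auth_response(self, auth_config):
        return self._auth_response

    def request_credential(self, auth_config):
        self.requested = True

def fetch_calendar(tool_context, my_auth_config=None) -> dict:
    # 1. Check for cached credentials.
    creds = tool_context.state.get("my_token_cache_key")
    if not creds:
        # 2. Check whether the auth flow just completed.
        creds = tool_context.get_auth_response(my_auth_config)
    if not creds:
        # 3. No credentials: trigger the external flow and return pending.
        tool_context.request_credential(my_auth_config)
        return {"status": "pending", "message": "Awaiting user authorization."}
    # 4. Cache for subsequent calls.
    tool_context.state["my_token_cache_key"] = creds
    # 5. Use the credentials for the real API call (elided).
    return {"status": "success"}
```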
---

## 12. Deployment Strategies

From local dev to production.

### 12.1 Local Development & Testing (`adk web`, `adk run`, `adk api_server`)

* **`adk web`**: Launches a local web UI for interactive chat, session inspection, and visual tracing.
    ```bash
    adk web /path/to/your/project_root
    ```
* **`adk run`**: Command-line interactive chat.
    ```bash
    adk run /path/to/your/agent_folder
    ```
* **`adk api_server`**: Launches a local FastAPI server exposing `/run`, `/run_sse`, `/list-apps`, etc., for API testing with `curl` or client libraries.
    ```bash
    adk api_server /path/to/your/project_root
    ```

### 12.2 Vertex AI Agent Engine

Fully managed, scalable service for ADK agents on Google Cloud.

* **Features**: Auto-scaling, session management, observability integration.
* **ADK CLI**: `adk deploy agent_engine --project <id> --region <loc> ... /path/to/agent`
* **Deployment**: Use `vertexai.agent_engines.create()`.
    ```python
    from vertexai.preview import reasoning_engines  # or agent_engines directly in later versions
    from vertexai import agent_engines  # needed for agent_engines.create() below

    # Wrap your root_agent for deployment
    app_for_engine = reasoning_engines.AdkApp(agent=root_agent, enable_tracing=True)

    # Deploy
    remote_app = agent_engines.create(
        agent_engine=app_for_engine,
        requirements=["google-cloud-aiplatform[adk,agent_engines]"],
        display_name="My Production Agent"
    )
    print(remote_app.resource_name)  # projects/PROJECT_NUM/locations/REGION/reasoningEngines/ID
    ```
* **Interaction**: Use `remote_app.stream_query()`, `create_session()`, etc.

### 12.3 Cloud Run

Serverless container platform for custom web applications.

* **ADK CLI**: `adk deploy cloud_run --project <id> --region <loc> ... /path/to/agent`
* **Deployment**:
    1. Create a `Dockerfile` for your FastAPI app (using `google.adk.cli.fast_api.get_fast_api_app`).
    2. Use `gcloud run deploy --source .`.
    3. Alternatively, `adk deploy cloud_run` (simpler, opinionated).
* **Example `main.py`**:
    ```python
    import os
    from fastapi import FastAPI
    from google.adk.cli.fast_api import get_fast_api_app

    # Ensure your agent_folder (e.g., 'my_first_agent') is in the same directory as main.py
    app: FastAPI = get_fast_api_app(
        agents_dir=os.path.dirname(os.path.abspath(__file__)),
        session_service_uri="sqlite:///./sessions.db",  # In-container SQLite, for simple cases
        # For production: use a persistent DB (Cloud SQL) or VertexAiSessionService
        allow_origins=["*"],
        web=True  # Serve ADK UI
    )
    # uvicorn.run(app, host="0.0.0.0", port=int(os.environ.get("PORT", 8080)))  # If running directly
    ```

### 12.4 Google Kubernetes Engine (GKE)

For maximum control, run your containerized agent in a Kubernetes cluster.

* **ADK CLI**: `adk deploy gke --project <id> --cluster_name <name> ... /path/to/agent`
* **Deployment**:
    1. Build Docker image (`gcloud builds submit`).
    2. Create Kubernetes Deployment and Service YAMLs.
    3. Apply with `kubectl apply -f deployment.yaml`.
    4. Configure Workload Identity for GCP permissions.

### 12.5 CI/CD Integration

* Automate testing (`pytest`, `adk eval`) in CI.
* Automate container builds and deployments (e.g., Cloud Build, GitHub Actions).
* Use environment variables for secrets.

---

## 13. Evaluation and Safety

Critical for robust, production-ready agents.

### 13.1 Agent Evaluation (`adk eval`)

Systematically assess agent performance using predefined test cases.

* **Evalset File (`.evalset.json`)**: Contains `eval_cases`, each with a `conversation` (user queries, expected tool calls, expected intermediate/final responses) and `session_input` (initial state).
    ```json
    {
      "eval_set_id": "weather_bot_eval",
      "eval_cases": [
        {
          "eval_id": "london_weather_query",
          "conversation": [
            {
              "user_content": {"parts": [{"text": "What's the weather in London?"}]},
              "final_response": {"parts": [{"text": "The weather in London is cloudy..."}]},
              "intermediate_data": {
                "tool_uses": [{"name": "get_weather", "args": {"city": "London"}}]
              }
            }
          ],
          "session_input": {"app_name": "weather_app", "user_id": "test_user", "state": {}}
        }
      ]
    }
    ```
* **Running Evaluation**:
    * `adk web`: Interactive UI for creating/running eval cases.
    * `adk eval /path/to/agent_folder /path/to/evalset.json`: CLI execution.
    * `pytest`: Integrate `AgentEvaluator.evaluate()` into unit/integration tests.
* **Metrics**: `tool_trajectory_avg_score` (tool calls match expected), `response_match_score` (final response similarity using ROUGE). Configurable via `test_config.json`.

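As a rough illustration of what `tool_trajectory_avg_score` measures, a simplified position-by-position scorer (the real metric lives in ADK's evaluation module and is more nuanced):

```python
def tool_trajectory_avg_score(expected: list[dict], actual: list[dict]) -> float:
    """Fraction of expected tool calls matched, in order, by the actual calls.

    Simplified sketch: compares name and args position-by-position.
    """
    if not expected:
        return 1.0  # nothing expected, nothing to miss
    matches = sum(
        1 for exp, act in zip(expected, actual)
        if exp["name"] == act["name"] and exp.get("args") == act.get("args")
    )
    return matches / len(expected)

expected = [{"name": "get_weather", "args": {"city": "London"}}]
actual = [{"name": "get_weather", "args": {"city": "London"}}]
```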
### 13.2 Safety & Guardrails

Multi-layered defense against harmful content, misalignment, and unsafe actions.

1. **Identity and Authorization**:
    * **Agent-Auth**: Tool acts with the agent's service account (e.g., `Vertex AI User` role). Simple, but all users share the same access level; logs are needed for attribution.
    * **User-Auth**: Tool acts with the end-user's identity (via OAuth tokens). Reduces risk of abuse.
2. **In-Tool Guardrails**: Design tools defensively. Tools can read policies from `tool_context.state` (set deterministically by the developer) and validate model-provided arguments before execution.
    ```python
    def execute_sql(query: str, tool_context: ToolContext) -> dict:
        policy = tool_context.state.get("user:sql_policy", {})
        if not policy.get("allow_writes", False) and ("INSERT" in query.upper() or "DELETE" in query.upper()):
            return {"status": "error", "message": "Policy: Write operations are not allowed."}
        # ... execute query ...
    ```
3. **Built-in Gemini Safety Features**:
    * **Content Safety Filters**: Automatically block harmful content (CSAM, PII, hate speech, etc.). Configurable thresholds.
    * **System Instructions**: Guide model behavior, define prohibited topics, brand tone, disclaimers.
4. **Model and Tool Callbacks (LLM as a Guardrail)**: Use callbacks to inspect inputs/outputs.
    * `before_model_callback`: Intercept the `LlmRequest` before it hits the LLM. Block (return an `LlmResponse`) or modify it.
    * `before_tool_callback`: Intercept tool calls (name, args) before execution. Block (return a `dict`) or modify.
    * **LLM-based Safety**: Use a cheap/fast LLM (e.g., Gemini Flash) in a callback to classify input/output safety.
    ```python
    from typing import Optional

    from google.adk.agents.callback_context import CallbackContext
    from google.adk.models.llm_request import LlmRequest
    from google.adk.models.llm_response import LlmResponse
    from google.genai import types as genai_types

    def safety_checker_callback(context: CallbackContext, llm_request: LlmRequest) -> Optional[LlmResponse]:
        # Use a separate, small LLM to classify safety
        safety_llm_agent = Agent(name="SafetyChecker", model="gemini-2.5-flash-001", instruction="Classify input as 'safe' or 'unsafe'. Output ONLY the word.")
        # Run the safety agent (might need a new runner instance or direct model call)
        # For simplicity, a mock:
        user_input = llm_request.contents[-1].parts[0].text
        if "dangerous_phrase" in user_input.lower():
            context.state["safety_violation"] = True
            return LlmResponse(content=genai_types.Content(parts=[genai_types.Part(text="I cannot process this request due to safety concerns.")]))
        return None
    ```
5. **Sandboxed Code Execution**:
    * `BuiltInCodeExecutor`: Uses secure, sandboxed execution environments.
    * Vertex AI Code Interpreter Extension.
    * If custom, ensure hermetic environments (no network, isolated).
6. **Network Controls & VPC-SC**: Confine agent activity within secure perimeters (VPC Service Controls) to prevent data exfiltration.
7. **Output Escaping in UIs**: Always properly escape LLM-generated content in web UIs to prevent XSS attacks and indirect prompt injections.

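Item 7's escaping needs nothing beyond the standard library; a minimal sketch:

```python
import html

def render_agent_message(raw: str) -> str:
    """Escape LLM-generated text before embedding it in an HTML page."""
    return html.escape(raw)

safe = render_agent_message('<script>alert("xss")</script>')
```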
**Grounding**: A key safety and reliability feature that connects agent responses to verifiable information.
* **Mechanism**: Uses tools like `google_search` or `VertexAiSearchTool` to fetch real-time or private data.
* **Benefit**: Reduces model hallucination by basing responses on retrieved facts.
* **Requirement**: When using `google_search`, your application UI **must** display the provided search suggestions and citations to comply with terms of service.

---

## 14. Debugging, Logging & Observability

* **`adk web` UI**: Best first step. Provides visual trace, session history, and state inspection.
* **Event Stream Logging**: Iterate `runner.run_async()` events and print relevant fields.
    ```python
    async for event in runner.run_async(...):
        print(f"[{event.author}] Event ID: {event.id}, Invocation: {event.invocation_id}")
        if event.content and event.content.parts:
            if event.content.parts[0].text:
                print(f"  Text: {event.content.parts[0].text[:100]}...")
        if event.get_function_calls():
            print(f"  Tool Call: {event.get_function_calls()[0].name} with {event.get_function_calls()[0].args}")
        if event.get_function_responses():
            print(f"  Tool Response: {event.get_function_responses()[0].response}")
        if event.actions:
            if event.actions.state_delta:
                print(f"  State Delta: {event.actions.state_delta}")
            if event.actions.transfer_to_agent:
                print(f"  TRANSFER TO: {event.actions.transfer_to_agent}")
        if event.error_message:
            print(f"  ERROR: {event.error_message}")
    ```
* **Tool/Callback `print` statements**: Simple logging directly within your functions.
* **Logging**: Use Python's standard `logging` module. Control verbosity with `adk web --log_level DEBUG` or `adk web -v`.
* **One-Line Observability Integrations**: ADK has native hooks for popular tracing platforms.
    * **AgentOps**:
        ```python
        import agentops
        agentops.init(api_key="...")  # Automatically instruments ADK agents
        ```
    * **Arize Phoenix**:
        ```python
        from phoenix.otel import register
        register(project_name="my_agent", auto_instrument=True)
        ```
    * **Google Cloud Trace**: Enable via flag during deployment: `adk deploy [cloud_run|agent_engine] --trace_to_cloud ...`
* **Session History (`session.events`)**: Persisted for detailed post-mortem analysis.

---
|
| 1470 |
+
|
| 1471 |
+
## 15. Streaming & Advanced I/O
|
| 1472 |
+
|
| 1473 |
+
ADK supports real-time, bidirectional communication for interactive experiences like live voice conversations.
|
| 1474 |
+
|
| 1475 |
+
#### Bidirectional Streaming Loop (`run_live`)
For real-time voice/video, use `run_live` with a `LiveRequestQueue`. This enables low-latency, two-way communication where the user can interrupt the agent.

```python
import asyncio

from google.adk.agents import LiveRequestQueue
from google.adk.agents.run_config import RunConfig

async def start_streaming_session(runner, session, user_id):
    # 1. Configure modalities (e.g., AUDIO output for voice agents)
    run_config = RunConfig(response_modalities=["AUDIO"])

    # 2. Create input queue for client data (audio chunks, text)
    live_queue = LiveRequestQueue()

    # 3. Start the bidirectional stream
    live_events = runner.run_live(
        session=session,
        live_request_queue=live_queue,
        run_config=run_config
    )

    # 4. Process events (simplified loop; `client` stands for your app's transport layer)
    try:
        async for event in live_events:
            # Handle agent output (text or audio bytes)
            if event.content and event.content.parts:
                part = event.content.parts[0]
                if part.inline_data and part.inline_data.mime_type.startswith("audio/"):
                    # Send audio bytes to client
                    await client.send_audio(part.inline_data.data)
                elif part.text:
                    # Send text to client
                    await client.send_text(part.text)

            # Handle turn signals
            if event.turn_complete:
                pass  # Signal client that agent finished speaking
    finally:
        live_queue.close()

# To send user input to the agent during the stream:
# await live_queue.send_content(Content(role="user", parts=[Part(text="Hello")]))
# await live_queue.send_realtime(Blob(mime_type="audio/pcm", data=audio_bytes))
```

* **Streaming Tools**: A special type of `FunctionTool` that can stream intermediate results back to the agent.
    * **Definition**: Must be an `async` function with a return type of `AsyncGenerator`.
      ```python
      import asyncio
      from typing import AsyncGenerator

      async def monitor_stock_price(symbol: str) -> AsyncGenerator[str, None]:
          """Yields stock price updates as they occur."""
          while True:
              price = await get_live_price(symbol)  # placeholder for your data source
              yield f"Update for {symbol}: ${price}"
              await asyncio.sleep(5)
      ```

* **Advanced I/O Modalities**: ADK (especially with Gemini Live API models) supports richer interactions.
    * **Audio**: Input via `Blob(mime_type="audio/pcm", data=bytes)`; output via `genai_types.SpeechConfig` in `RunConfig`.
    * **Vision (Images/Video)**: Input via `Blob(mime_type="image/jpeg", data=bytes)` or `Blob(mime_type="video/mp4", data=bytes)`. Models like `gemini-2.5-flash-exp` can process these.
    * **Multimodal Input in `Content`**:
      ```python
      multimodal_content = genai_types.Content(
          parts=[
              genai_types.Part(text="Describe this image:"),
              genai_types.Part(inline_data=genai_types.Blob(mime_type="image/jpeg", data=image_bytes))
          ]
      )
      ```

---

## 16. Performance Optimization

* **Model Selection**: Choose the smallest model that meets requirements (e.g., `gemini-2.5-flash` for simple tasks).
* **Instruction Prompt Engineering**: Concise, clear instructions reduce tokens and improve accuracy.
* **Tool Use Optimization**:
    * Design efficient tools (fast API calls, optimized database queries).
    * Cache tool results (e.g., using `before_tool_callback` or `tool_context.state`).
* **State Management**: Store only necessary data in state to avoid large context windows.
* **`include_contents='none'`**: For stateless utility agents; saves LLM context window.
* **Parallelization**: Use `ParallelAgent` for independent tasks.
* **Streaming**: Use `StreamingMode.SSE` or `BIDI` to reduce perceived latency.
* **`max_llm_calls`**: Limit LLM calls to prevent runaway agents and control costs.
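
The tool-result caching idea above can be sketched framework-agnostically. This plain-dict memoizer with a TTL illustrates the pattern; the `cached_tool_call` helper, TTL value, and `slow_lookup` tool are assumptions for illustration, not an ADK API:

```python
import time

# In-memory cache keyed by (tool name, arguments); entries expire after a TTL.
_tool_cache: dict = {}
CACHE_TTL_SECONDS = 300

def cached_tool_call(tool_name: str, args: tuple, compute):
    """Return a cached result for (tool_name, args) if fresh, else recompute."""
    key = (tool_name, args)
    entry = _tool_cache.get(key)
    now = time.monotonic()
    if entry is not None and now - entry[0] < CACHE_TTL_SECONDS:
        return entry[1]  # cache hit
    result = compute(*args)
    _tool_cache[key] = (now, result)
    return result

calls = []
def slow_lookup(city: str) -> str:
    calls.append(city)  # track how often the underlying tool actually runs
    return f"weather for {city}"

print(cached_tool_call("weather", ("Paris",), slow_lookup))  # computes
print(cached_tool_call("weather", ("Paris",), slow_lookup))  # served from cache
print(len(calls))  # underlying tool ran only once
```

In an ADK agent, the same lookup could live in a `before_tool_callback` (short-circuit on a hit) with the dictionary stored in `tool_context.state`.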

---

## 17. General Best Practices & Common Pitfalls

* **Start Simple**: Begin with `LlmAgent`, mock tools, and `InMemorySessionService`. Gradually add complexity.
* **Iterative Development**: Build small features, test, debug, refine.
* **Modular Design**: Use agents and tools to encapsulate logic.
* **Clear Naming**: Descriptive names for agents, tools, state keys.
* **Error Handling**: Implement robust `try...except` blocks in tools and callbacks. Guide LLMs on how to handle tool errors.
* **Testing**: Write unit tests for tools/callbacks, integration tests for agent flows (`pytest`, `adk eval`).
* **Dependency Management**: Use virtual environments (`venv`) and `requirements.txt`.
* **Secrets Management**: Never hardcode API keys. Use `.env` for local dev, environment variables or secret managers (Google Cloud Secret Manager) for production.
* **Avoid Infinite Loops**: Especially with `LoopAgent` or complex LLM tool-calling chains. Use `max_iterations`, `max_llm_calls`, and strong instructions.
* **Handle `None` & `Optional`**: Always check for `None` or `Optional` values when accessing nested properties (e.g., `event.content and event.content.parts and event.content.parts[0].text`).
* **Immutability of Events**: Events are immutable records. If you need to change something *before* it's processed, do so in a `before_*` callback and return a *new* modified object.
* **Understand `output_key` vs. direct `state` writes**: `output_key` is for the agent's *final conversational* output. Direct `tool_context.state['key'] = value` is for *any other* data you want to save.
* **Example Agents**: Find practical examples and reference implementations in the [ADK Samples repository](https://github.com/google/adk-samples).
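
The defensive-access pattern from the `None`/`Optional` bullet can be wrapped in a small helper. A minimal sketch using stand-in objects rather than real ADK event types:

```python
from types import SimpleNamespace
from typing import Optional

def first_text(event) -> Optional[str]:
    """Safely extract the first text part from an event-like object."""
    if event.content and event.content.parts and event.content.parts[0].text:
        return event.content.parts[0].text
    return None

# Stand-in objects mimicking the nested event structure (not real ADK types).
good = SimpleNamespace(content=SimpleNamespace(parts=[SimpleNamespace(text="hello")]))
empty = SimpleNamespace(content=None)

print(first_text(good))   # hello
print(first_text(empty))  # None
```

Centralizing the chain of truthiness checks in one helper avoids repeating the fragile `a and a.b and a.b.c` pattern at every call site.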

### Testing the output of an agent

The following script demonstrates how to programmatically test an agent's output. This approach is extremely useful when an LLM or coding agent needs to interact with a work-in-progress agent, as well as for automated testing, debugging, or when you need to integrate agent execution into other workflows:

```python
import asyncio

from google.adk.runners import Runner
from google.adk.sessions import InMemorySessionService
from rag_agent.agent import root_agent
from google.genai import types as genai_types


async def main():
    """Runs the agent with a sample query."""
    session_service = InMemorySessionService()
    await session_service.create_session(
        app_name="rag_agent", user_id="test_user", session_id="test_session"
    )
    runner = Runner(
        agent=root_agent, app_name="rag_agent", session_service=session_service
    )
    query = "I want a recipe for pancakes"
    async for event in runner.run_async(
        user_id="test_user",
        session_id="test_session",
        new_message=genai_types.Content(
            role="user",
            parts=[genai_types.Part.from_text(text=query)]
        ),
    ):
        if event.is_final_response():
            print(event.content.parts[0].text)


if __name__ == "__main__":
    asyncio.run(main())
```

---

## 18. Official API & CLI References

For detailed specifications of all classes, methods, and commands, refer to the official reference documentation.

* [Python API Reference](https://github.com/google/adk-docs/tree/main/docs/api-reference/python)
* [Java API Reference](https://github.com/google/adk-docs/tree/main/docs/api-reference/java)
* [CLI Reference](https://github.com/google/adk-docs/tree/main/docs/api-reference/cli)
* [REST API Reference](https://github.com/google/adk-docs/tree/main/docs/api-reference/rest)
* [Agent Config YAML Reference](https://github.com/google/adk-docs/tree/main/docs/api-reference/agentconfig)

---

**llm.txt** documents the "Agent Starter Pack" repository, providing a source of truth on its purpose, features, and usage.

---

### Section 1: Project Overview

* **Project Name:** Agent Starter Pack
* **Purpose:** Accelerate development of production-ready GenAI Agents on Google Cloud.
* **Tagline:** Production-Ready Agents on Google Cloud, faster.

**The "Production Gap":**
While prototyping GenAI agents is quick, production deployment often takes 3-9 months.

**Key Challenges Addressed:**
* **Customization:** Business logic, data grounding, security/compliance.
* **Evaluation:** Metrics, quality assessment, test datasets.
* **Deployment:** Cloud infrastructure, CI/CD, UI integration.
* **Observability:** Performance tracking, user feedback.

**Solution: Agent Starter Pack**
Provides MLOps and infrastructure templates so developers focus on agent logic.

* **You Build:** Prompts, LLM interactions, business logic, agent orchestration.
* **We Provide:**
    * Deployment infrastructure, CI/CD, testing
    * Logging, monitoring
    * Evaluation tools
    * Data connections, UI playground
    * Security best practices

Establishes production patterns from day one, saving setup time.

---
### Section 2: Creating & Enhancing Agent Projects

Start by creating a new agent project from a predefined template, or enhance an existing project with agent capabilities. Both processes support interactive and fully automated setup.

**Prerequisites:**
Before you begin, ensure you have `uv`/`uvx`, the `gcloud` CLI, `terraform`, `git`, and the `gh` CLI (for automated CI/CD setup) installed and authenticated.

**Installing the `agent-starter-pack` CLI:**
Choose one method to get the `agent-starter-pack` command:

1. **`uvx` (Recommended for Zero-Install/Automation):** Run directly without prior installation.
   ```bash
   uvx agent-starter-pack create ...
   ```
2. **Virtual Environment (`pip` or `uv`):**
   ```bash
   pip install agent-starter-pack
   ```
3. **Persistent CLI Install (`pipx` or `uv tool`):** Installs globally in an isolated environment.

---
### `agent-starter-pack create` Command

Generates a new agent project directory based on a chosen template and configuration.

**Usage:**
```bash
agent-starter-pack create PROJECT_NAME [OPTIONS]
```

**Arguments:**
* `PROJECT_NAME`: Name for your new project directory and base for GCP resource naming (max 26 chars, converted to lowercase).

**Template Selection:**
* `-a, --agent`: Agent template - built-in agents (e.g., `adk_base`, `agentic_rag`), remote templates (`adk@gemini-fullstack`, `github.com/user/repo@branch`), or local projects (`local@./path`).

**Deployment Options:**
* `-d, --deployment-target`: Target environment (`cloud_run` or `agent_engine`).
* `--cicd-runner`: CI/CD runner (`google_cloud_build` or `github_actions`).
* `--region`: GCP region (default: `asia-southeast1`).

**Data & Storage:**
* `-i, --include-data-ingestion`: Include data ingestion pipeline.
* `-ds, --datastore`: Datastore type (`vertex_ai_search`, `vertex_ai_vector_search`, `cloud_sql`).
* `--session-type`: Session storage (`in_memory`, `cloud_sql`, `agent_engine`).

**Project Creation:**
* `-o, --output-dir`: Output directory (default: current directory).
* `--agent-directory, -dir`: Agent code directory name (default: `app`).
* `--in-folder`: Create files in the current directory instead of a new subdirectory.

**Automation:**
* `--auto-approve`: **Skip all interactive prompts (crucial for automation).**
* `--skip-checks`: Skip GCP/Vertex AI verification checks.
* `--debug`: Enable debug logging.

**Automated Creation Example:**
```bash
uvx agent-starter-pack create my-automated-agent \
    -a adk_base \
    -d cloud_run \
    --region asia-southeast1 \
    --auto-approve
```

---

### `agent-starter-pack enhance` Command

Enhances an existing project with AI agent capabilities by adding agent-starter-pack features in place. This command supports all the same options as `create` but templates directly into the current directory instead of creating a new project directory.

**Usage:**
```bash
agent-starter-pack enhance [TEMPLATE_PATH] [OPTIONS]
```

**Key Differences from `create`:**
* Templates into the current directory (equivalent to `create --in-folder`)
* `TEMPLATE_PATH` defaults to the current directory (`.`)
* Project name defaults to the current directory name
* Additional `--base-template` option to override template inheritance

**Enhanced Project Example:**
```bash
# Enhance current directory with agent capabilities
uvx agent-starter-pack enhance . \
    --base-template adk_base \
    -d cloud_run \
    --region asia-southeast1 \
    --auto-approve
```

**Project Structure:** Expects agent code in the `app/` directory (configurable via `--agent-directory`).

---

### Available Agent Templates

Templates for the `create` command (via `-a` or `--agent`):

| Agent Name | Description |
| :--------------------- | :------------------------------------------- |
| `adk_base` | Base ReAct agent (ADK) |
| `adk_gemini_fullstack` | Production-ready fullstack research agent |
| `agentic_rag` | RAG agent for document retrieval & Q&A |
| `langgraph_base` | Base ReAct agent (LangGraph) |
| `adk_live` | Real-time multimodal RAG agent |

---

### Including a Data Ingestion Pipeline (for RAG agents)

For RAG agents needing custom document search, enabling this option automates loading, chunking, and embedding documents with Vertex AI, and storing them in a vector database.

**How to enable:**
```bash
uvx agent-starter-pack create my-rag-agent \
    -a agentic_rag \
    -d cloud_run \
    -i \
    -ds vertex_ai_search \
    --auto-approve
```
**Post-creation:** Follow your new project's `data_ingestion/README.md` to deploy the necessary infrastructure.

---
### Section 3: Development & Automated Deployment Workflow
---

This section describes the end-to-end lifecycle of an agent, with an emphasis on automation.

### 1. Local Development & Iteration

Once your project is created, navigate into its directory to begin development.

**First, install dependencies (run once):**
```bash
make install
```

**Next, test your agent. The recommended method is to use a programmatic script.**

#### Programmatic Testing (Recommended Workflow)

This method allows for quick, automated validation of your agent's logic.

1. **Create a script:** In the project's root directory, create a Python script named `run_agent.py`.
2. **Invoke the agent:** In the script, write code to programmatically call your agent with sample input and `print()` the output for inspection.
   * **Guidance:** If you're unsure or no guidance exists, look at the files in the `tests/` directory for examples of how to import and call the agent's main function.
   * **Important:** This script is for simple validation. **Assertions are not required**, and you should not create a formal `pytest` file.
3. **Run the test:** Execute your script from the terminal using `uv`.
   ```bash
   uv run python run_agent.py
   ```
   You can keep the test file for future testing.

#### Manual Testing with the UI Playground (Optional)

If the user needs to interact with your agent manually in a chat interface for debugging:

1. Run the following command to start the local web UI:
   ```bash
   make playground
   ```
   This is useful for human-in-the-loop testing and features hot-reloading.

### 2. Deploying to a Cloud Development Environment
Before setting up full CI/CD, you can deploy to a personal cloud dev environment.

1. **Set Project:** `gcloud config set project YOUR_DEV_PROJECT_ID`
2. **Provision Resources:** `make setup-dev-env` (uses Terraform).
3. **Deploy Backend:** `make deploy` (builds and deploys the agent).

### 3. Automated Production-Ready Deployment with CI/CD
For reliable deployments, the `setup-cicd` command streamlines the entire process. It creates a GitHub repo, connects it to your chosen CI/CD runner (Google Cloud Build or GitHub Actions), provisions staging/prod infrastructure, and configures deployment triggers.

**Automated CI/CD Setup Example (Recommended):**
```bash
# Run from the project root. This command will guide you or can be automated with flags.
uvx agent-starter-pack setup-cicd
```

**CI/CD Workflow Logic:**
* **On Pull Request:** CI pipeline runs tests.
* **On Merge to `main`:** CD pipeline deploys to staging.
* **Manual Approval:** A manual approval step triggers the production deployment.

---
### Section 4: Key Features & Customization
---

### Deploying with a User Interface (UI)
* **Unified Deployment (for Dev/Test):** The backend and frontend can be packaged and served from a single Cloud Run service, secured with Identity-Aware Proxy (IAP).
* **Deploying with UI:** `make deploy IAP=true`
* **Access Control:** After deploying with IAP, grant users the `IAP-secured Web App User` role in IAM to give them access.

### Session Management

For stateful agents, the starter pack supports persistent sessions.
* **Cloud Run:** Choose between `in_memory` (for testing) and durable `cloud_sql` sessions using the `--session-type` flag.
* **Agent Engine:** Provides session management automatically.

### Monitoring & Observability
* **Technology:** Uses OpenTelemetry to emit events to Google Cloud Trace and Logging.
* **Custom Tracer:** A custom tracer in `app/utils/tracing.py` (or the equivalent path if you chose a different agent directory) handles large payloads by linking to GCS, overcoming default service limits.
* **Infrastructure:** Terraform provisions a Log Router that sinks data to BigQuery.

---
### Section 5: CLI Reference for CI/CD Setup
---

### `agent-starter-pack setup-cicd`
Automates the complete CI/CD infrastructure setup for GitHub-based deployments. It intelligently detects your CI/CD runner (Google Cloud Build or GitHub Actions) and configures everything automatically.

**Usage:**
```bash
uvx agent-starter-pack setup-cicd [OPTIONS]
```

**Prerequisites:**
- Run from the project root (the directory with `pyproject.toml`)
- Required tools: `gh` CLI (authenticated), `gcloud` CLI (authenticated), `terraform`
- `Owner` role on the GCP projects
- GitHub token with `repo` and `workflow` scopes

**Key Options:**
* `--staging-project`, `--prod-project`: GCP project IDs (will prompt if omitted).
* `--repository-name`, `--repository-owner`: GitHub repo details (will prompt if omitted).
* `--cicd-project`: CI/CD resources project (defaults to the prod project).
* `--dev-project`: Development project ID (optional).
* `--region`: GCP region (default: `asia-southeast1`).
* `--auto-approve`: Skip all interactive prompts.
* `--local-state`: Use local Terraform state instead of the GCS backend.
* `--debug`: Enable debug logging.

**What it does:**
1. Creates/connects the GitHub repository
2. Sets up Terraform infrastructure with remote state
3. Configures the CI/CD runner connection (Cloud Build or GitHub Actions with WIF)
4. Provisions staging/prod environments
5. Sets up the local Git repository with an `origin` remote

**Automated Example:**
```bash
uvx agent-starter-pack setup-cicd \
    --staging-project your-staging-project \
    --prod-project your-prod-project \
    --repository-name your-repo-name \
    --repository-owner your-username \
    --auto-approve
```

**After setup, push to trigger the pipeline:**
```bash
git add . && git commit -m "Initial commit" && git push -u origin main
```

* Note: For coding agents - ask the user for the required project IDs and repo details before running with `--auto-approve`.
* Note: If the user prefers a different git provider, refer to `deployment/README.md` for manual deployment.

---
### Section 6: Operational Guidelines for Coding Agents

These guidelines are essential for interacting with the Agent Starter Pack project effectively.

---

### Principle 1: Code Preservation & Isolation

When executing code modifications using tools like `replace` or `write_file`, your paramount objective is surgical precision. You **must alter only the code segments directly targeted** by the user's request, while **strictly preserving all surrounding and unrelated code.**

**Mandatory Pre-Execution Verification:**

Before finalizing any `new_string` for a `replace` operation, meticulously verify the following:

1. **Target Identification:** Clearly define the exact lines or expressions to be changed, based *solely* on the user's explicit instructions.
2. **Preservation Check:** Compare your proposed `new_string` against the `old_string`. Ensure all code, configuration values (e.g., `model`, `version`, `api_key`), comments, and formatting *outside* the identified target remain identical and verbatim.

**Example: Adhering to Preservation**

* **User Request:** "Change the agent's instruction to be a recipe suggester."
* **Original Code Snippet:**
  ```python
  root_agent = Agent(
      name="root_agent",
      model="gemini-2.5-flash",
      instruction="You are a helpful AI assistant."
  )
  ```
* **Incorrect Modification (VIOLATION):**
  ```python
  root_agent = Agent(
      name="recipe_suggester",
      model="gemini-1.5-flash",  # UNINTENDED MUTATION - model was not requested to change
      instruction="You are a recipe suggester."
  )
  ```
* **Correct Modification (COMPLIANT):**
  ```python
  root_agent = Agent(
      name="recipe_suggester",  # OK, related to new purpose
      model="gemini-2.5-flash",  # MUST be preserved
      instruction="You are a recipe suggester."  # OK, the direct target
  )
  ```

**Critical Error:** Failure to adhere to this preservation principle is a critical error. Always prioritize the integrity of existing, unchanged code over the convenience of rewriting entire blocks.

---

### Principle 2: Workflow & Execution Best Practices

* **Standard Workflow:**
  The validated end-to-end process is: `create` → `test` → `setup-cicd` → push to deploy. Trust this high-level workflow as the default for developing and shipping agents.

* **Agent Testing:**
    * **Avoid `make playground`** unless specifically instructed; it is designed for human interaction. Focus on programmatic testing.

* **Model Selection:**
    * **When using Gemini, prefer the 2.5 model family** for optimal performance and capabilities: `gemini-2.5-pro` and `gemini-2.5-flash`.

* **Running Python Commands:**
    * Always use `uv` to execute Python commands within this repository (e.g., `uv run run_agent.py`).
    * Ensure project dependencies are installed by running `make install` before executing scripts.
    * Consult the project's `Makefile` and `README.md` for other useful development commands.

* **Further Reading & Troubleshooting:**
    * For questions about specific frameworks (e.g., LangGraph) or Google Cloud products (e.g., Cloud Run), their official documentation and online resources are the best source of truth.
    * **When encountering persistent errors, or if you're unsure how to proceed after initial troubleshooting, a targeted Google Search is strongly recommended.** It is often the fastest way to find relevant documentation, community discussions, or direct solutions to your problem.

---

**GRADIO_COMPLETE_SETUP.md** (new file added in this commit)

---

# 🎉 Gradio Chat UI - Complete Setup

## ✅ What Was Created

Your Gradio chat interface is now ready! Here's everything that was added to your project:

### 🎨 Main Applications
1. **`gradio_app.py`** - Simple version using AgentEngine directly
2. **`gradio_app_v2.py`** ⭐ - **Recommended** version with full features
3. **`run_gradio.py`** - Quick launcher script

### 📚 Documentation
4. **`GRADIO_README.md`** - Complete feature documentation
5. **`QUICKSTART_GRADIO.md`** - Step-by-step setup guide
6. **`GRADIO_SUMMARY.md`** - Project overview and architecture
7. **`VERSIONS_COMPARISON.md`** - Comparison of both app versions

### 🛠️ Utilities
8. **`setup_gradio.sh`** - Automated setup script
9. **`test_gradio_setup.py`** - Configuration verification tool

### 📦 Updated
10. **`requirements.txt`** - Added Gradio and python-dotenv

---

## 🚀 Get Started in 3 Steps

### Step 1: Install Dependencies
```bash
pip install -r requirements.txt
```

### Step 2: Authenticate
```bash
gcloud auth application-default login
```

### Step 3: Run the App
```bash
python gradio_app_v2.py
```

**Open your browser to: http://localhost:7860**

---

## 🎯 What You Can Do

Your Gradio chat UI provides:

### 🤖 Agent Selection
- **Automatic Discovery**: Lists all agents from Agent Engine
- **Dropdown Selection**: Easy agent switching
- **Refresh Button**: Update the agent list on the fly

### 💬 Chat Interface
- **Real-time Conversation**: Chat with your RAG agent
- **Chat History**: View the full conversation
- **Copy Buttons**: Copy responses easily
- **Session Management**: Maintain context across queries

### 📋 RAG Operations
Your agents can:
- **List Corpora** - "List all available corpora"
- **Query Documents** - "What information do you have about X?"
- **Create Corpus** - "Create a new corpus called 'docs'"
- **Add Data** - "Add this file to the corpus: [URL]"
- **Get Info** - "Show me details about the corpus"
- **Delete** - "Delete the old-docs corpus"

---

## 📖 Quick Reference

### Launch Commands
```bash
# Recommended way
python gradio_app_v2.py
|
| 80 |
+
|
| 81 |
+
# Alternative launcher
|
| 82 |
+
python run_gradio.py
|
| 83 |
+
|
| 84 |
+
# With setup script
|
| 85 |
+
./setup_gradio.sh
|
| 86 |
+
```
|
| 87 |
+
|
| 88 |
+
### Test Your Setup
|
| 89 |
+
```bash
|
| 90 |
+
python test_gradio_setup.py
|
| 91 |
+
```
|
| 92 |
+
|
| 93 |
+
### Check Deployed Agents
|
| 94 |
+
```bash
|
| 95 |
+
gcloud agent-engines list --location=YOUR_LOCATION
|
| 96 |
+
```
|
| 97 |
+
|
| 98 |
+
---
|
| 99 |
+
|
| 100 |
+
## 🎨 UI Features
|
| 101 |
+
|
| 102 |
+
### Main Components
|
| 103 |
+
- **Agent Dropdown** - Select which agent to chat with
|
| 104 |
+
- **Session ID Input** - Maintain conversation context
|
| 105 |
+
- **Refresh Button** - Update available agents
|
| 106 |
+
- **Chat History** - View conversation with copy buttons
|
| 107 |
+
- **Message Input** - Type and send messages
|
| 108 |
+
- **Clear Button** - Clear chat history
|
| 109 |
+
- **Examples Accordion** - View capabilities and examples
|
| 110 |
+
|
| 111 |
+
### Visual Design
|
| 112 |
+
- Modern, clean Gradio interface
|
| 113 |
+
- Emoji indicators for status messages
|
| 114 |
+
- Avatar support (🤖 for agent)
|
| 115 |
+
- Responsive layout
|
| 116 |
+
- Dark/light mode support (via browser)
|
| 117 |
+
|
| 118 |
+
---
|
| 119 |
+
|
| 120 |
+
## 📊 Architecture
|
| 121 |
+
|
| 122 |
+
```
|
| 123 |
+
┌─────────────────────────────────────────────┐
|
| 124 |
+
│ User's Web Browser │
|
| 125 |
+
│ http://localhost:7860 │
|
| 126 |
+
└─────────────┬───────────────────────────────┘
|
| 127 |
+
│
|
| 128 |
+
▼
|
| 129 |
+
┌─────────────────────────────────────────────┐
|
| 130 |
+
│ Gradio UI (Python) │
|
| 131 |
+
│ gradio_app_v2.py │
|
| 132 |
+
└─────────────┬───────────────────────────────┘
|
| 133 |
+
│
|
| 134 |
+
▼
|
| 135 |
+
┌─────────────────────────────────────────────┐
|
| 136 |
+
│ AgentEnginesServiceClient │
|
| 137 |
+
│ (Google Cloud AI Platform) │
|
| 138 |
+
└─────────────┬───────────────────────────────┘
|
| 139 |
+
│
|
| 140 |
+
▼
|
| 141 |
+
┌─────────────────────────────────────────────┐
|
| 142 |
+
│ Agent Engine (Cloud Service) │
|
| 143 |
+
│ Lists & Queries Deployed Agents │
|
| 144 |
+
└─────────────┬───────────────────────────────┘
|
| 145 |
+
│
|
| 146 |
+
▼
|
| 147 |
+
┌─────────────────────────────────────────────┐
|
| 148 |
+
│ Your RAG Agent │
|
| 149 |
+
│ (Deployed with ADK) │
|
| 150 |
+
└─────────────┬───────────────────────────────┘
|
| 151 |
+
│
|
| 152 |
+
▼
|
| 153 |
+
┌─────────────────────────────────────────────┐
|
| 154 |
+
│ Vertex AI RAG Service │
|
| 155 |
+
│ Document Corpora & Embeddings │
|
| 156 |
+
└─────────────────────────────────────────────┘
|
| 157 |
+
```
|
| 158 |
+
|
| 159 |
+
---
|
| 160 |
+
|
| 161 |
+
## 🔧 Configuration
|
| 162 |
+
|
| 163 |
+
### Environment Variables
|
| 164 |
+
Located in `rag_agent/.env`:
|
| 165 |
+
```bash
|
| 166 |
+
GOOGLE_CLOUD_PROJECT="your-project-id"
|
| 167 |
+
GOOGLE_CLOUD_LOCATION="us-central1"
|
| 168 |
+
GOOGLE_GENAI_USE_VERTEXAI="true"
|
| 169 |
+
```
|
| 170 |
+
|
| 171 |
+
### Customization Options
|
| 172 |
+
Edit `gradio_app_v2.py` to modify:
|
| 173 |
+
- **Port**: Change `server_port=7860`
|
| 174 |
+
- **Theme**: Use `gr.themes.Glass()` or other themes
|
| 175 |
+
- **Public URL**: Set `share=True`
|
| 176 |
+
- **Authentication**: Add `auth=("user", "pass")`
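
These options can also be gathered in one place near the bottom of the file. The sketch below is illustrative, not the app's actual code; the port and auth values are placeholders:

```python
# Launch options for demo.launch() in gradio_app_v2.py.
# All values below are illustrative defaults, not requirements;
# the auth tuple is a placeholder, not a real credential.
LAUNCH_KWARGS = {
    "server_port": 7860,        # change this to move off the default port
    "share": False,             # True creates a temporary public URL
    "auth": ("user", "pass"),   # basic auth; drop this key for open access
}

# At the bottom of gradio_app_v2.py this would become:
# demo.launch(**LAUNCH_KWARGS)
```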

---

## 🐛 Common Issues

### "No agents found"
- **Cause**: No agents deployed
- **Fix**: `make deploy`

### Authentication error
- **Cause**: Not authenticated
- **Fix**: `gcloud auth application-default login`

### Module not found
- **Cause**: Dependencies not installed
- **Fix**: `pip install -r requirements.txt`

### Wrong location
- **Cause**: Location mismatch
- **Fix**: Update `GOOGLE_CLOUD_LOCATION` in `.env`

### Port already in use
- **Cause**: Port 7860 occupied
- **Fix**: Change the port in the code or stop the other process

---

## 📈 Next Steps

### Immediate Actions
1. ✅ Run `python test_gradio_setup.py` to verify setup
2. ✅ Launch `python gradio_app_v2.py`
3. ✅ Select an agent from the dropdown
4. ✅ Start chatting!

### Enhancements
- 📤 **Add file upload** for document ingestion
- 🎨 **Customize theme** to match your brand
- 📊 **Add analytics** to track usage
- 🔒 **Enable authentication** for production
- 🌐 **Deploy to Cloud Run** for public access

### Learning Resources
- 📚 [Gradio Documentation](https://gradio.app/docs)
- 🤖 [Vertex AI Agent Engine](https://cloud.google.com/agent-engine/docs)
- 🔧 [Google ADK](https://github.com/google/genai-adk)

---

## 🎓 Example Conversations

### Example 1: List Corpora
```
You: List all available corpora
🤖: Here are the available corpora:
    1. tech-docs
    2. company-handbook
    3. research-papers
```

### Example 2: Query Documents
```
You: What information do you have about machine learning?
🤖: Based on the documents in the research-papers corpus,
    machine learning is a subset of artificial intelligence...
```

### Example 3: Create Corpus
```
You: Create a new corpus called customer-feedback
🤖: ✅ Successfully created the corpus 'customer-feedback'.
    You can now add documents to it.
```

### Example 4: Add Data
```
You: Add this file to the corpus: https://drive.google.com/file/d/abc123
🤖: ✅ Successfully added the document to the corpus.
    The document is now being processed and will be
    available for querying shortly.
```

---

## 🌟 Features Highlight

### For Users
- ✨ No coding required - just chat!
- 🔄 Switch between agents easily
- 📝 Copy responses with one click
- 💡 Examples built into the UI
- 🎯 Session-based conversations

### For Developers
- 🛠️ Easy to customize and extend
- 📦 Clean, modular code
- 🔌 Simple integration with Agent Engine
- 📊 Ready for analytics integration
- 🚀 Deploy-ready

---

## 🎉 You're All Set!

Your Gradio chat interface is ready to use! Here's your checklist:

- ✅ Gradio app files created
- ✅ Documentation provided
- ✅ Setup scripts ready
- ✅ Configuration verified
- ✅ Examples included

**Just run:** `python gradio_app_v2.py` and start chatting! 🚀

---

## 📞 Need Help?

1. **Check Documentation**: Read QUICKSTART_GRADIO.md
2. **Test Setup**: Run `python test_gradio_setup.py`
3. **Review Examples**: See GRADIO_README.md for examples
4. **Compare Versions**: Check VERSIONS_COMPARISON.md

**Happy Chatting with Your RAG Agent! 🤖💬**
GRADIO_README.md
ADDED
@@ -0,0 +1,118 @@
# Gradio Chat UI for RAG Agent

This Gradio application provides a user-friendly chat interface to interact with your deployed RAG agents on Google Cloud Agent Engine.

## Features

- 🔍 **Agent Discovery**: Automatically lists all deployed agents from Agent Engine
- 💬 **Interactive Chat**: Chat with any selected agent in real time
- 🔄 **Dynamic Updates**: Refresh the agent list without restarting the app
- 📝 **Chat History**: View full conversation history
- 🎨 **Modern UI**: Clean and intuitive Gradio interface

## Prerequisites

1. Python 3.10 or higher
2. Google Cloud Project with Agent Engine enabled
3. Deployed RAG Agent(s) in Agent Engine
4. Proper authentication set up (gcloud or service account)

## Installation

1. Install the required dependencies:
```bash
pip install -r requirements.txt
```

2. Set up environment variables in the `.env` file (in the `rag_agent/` directory):
```bash
GOOGLE_CLOUD_PROJECT=your-project-id
GOOGLE_CLOUD_LOCATION=us-central1  # or your preferred location
```

3. Authenticate with Google Cloud:
```bash
gcloud auth application-default login
```
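
Once those variables are set, the app needs to read them at startup. A minimal stdlib-only sketch of that step (the function name and the `us-central1` fallback are assumptions for illustration, not code from `gradio_app.py`):

```python
import os

def load_agent_config() -> dict:
    """Read the Google Cloud settings the Gradio app needs from the environment."""
    project = os.environ.get("GOOGLE_CLOUD_PROJECT")
    if not project:
        raise RuntimeError("GOOGLE_CLOUD_PROJECT is not set - check rag_agent/.env")
    # Fallback region here is an assumption; use whatever your agent is deployed to.
    location = os.environ.get("GOOGLE_CLOUD_LOCATION", "us-central1")
    return {"project": project, "location": location}
```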

## Running the App

From the project root directory:

```bash
python gradio_app.py
```

The app will start on `http://localhost:7860`

## Usage

1. **Select an Agent**: Use the dropdown menu to choose from available agents
2. **Refresh Agents**: Click the refresh button to update the agent list
3. **Chat**: Type your message and press Send or Enter
4. **Clear Chat**: Clear the conversation history with the Clear Chat button
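
The Send and Clear actions above can be sketched as plain callbacks in the default `gr.Chatbot` tuple format. `query_agent` below is a placeholder for the real Agent Engine call, not the app's actual function:

```python
def query_agent(message: str) -> str:
    # Placeholder for the real Agent Engine call made in gradio_app.py.
    return f"(agent reply to: {message})"

def respond(message: str, history: list) -> tuple:
    """Callback for the Send button: append a (user, agent) turn, clear the textbox."""
    reply = query_agent(message)
    return history + [(message, reply)], ""

def clear_chat() -> tuple:
    """Callback for the Clear Chat button: empty history, empty textbox."""
    return [], ""
```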

## Agent Capabilities

The RAG Agent supports the following operations:

- **Query Documents**: Ask questions and retrieve information from document corpora
- **List Corpora**: View all available document collections
- **Create Corpus**: Create new document collections
- **Add Data**: Add documents (Google Drive URLs, GCS paths) to corpora
- **Get Corpus Info**: View detailed information about a specific corpus
- **Delete Document**: Remove specific documents from a corpus
- **Delete Corpus**: Remove entire document collections

## Example Queries

```
- "List all available corpora"
- "What information do you have about [topic]?"
- "Create a new corpus called 'my-documents'"
- "Add this Google Drive file to the corpus: https://drive.google.com/..."
- "Show me details about the 'my-documents' corpus"
```

## Troubleshooting

### No agents found
- Ensure you have deployed at least one agent to Agent Engine
- Check that your `GOOGLE_CLOUD_PROJECT` and `GOOGLE_CLOUD_LOCATION` are correct
- Verify you have proper permissions to list agents

### Authentication errors
- Run `gcloud auth application-default login`
- Ensure your service account (if using one) has the necessary permissions

### Connection errors
- Verify your Agent Engine location matches `GOOGLE_CLOUD_LOCATION`
- Check firewall settings if running in a restricted environment

## Configuration

You can customize the app by modifying `gradio_app.py`:

- Change the server port (default: 7860)
- Modify the UI theme
- Adjust the chat history height
- Enable/disable the share link

## Deployment

To make the app publicly accessible:

```python
# Enable share link (creates a temporary public URL)
demo.launch(share=True)

# Or deploy to Hugging Face Spaces
# Follow: https://huggingface.co/docs/hub/spaces-sdks-gradio
```

## Support

For issues related to:
- **Agent Engine**: Check the Google Cloud Agent Engine documentation
- **Gradio**: Visit https://gradio.app/docs
- **This project**: Open an issue in the repository
GRADIO_SUMMARY.md
ADDED
@@ -0,0 +1,237 @@
# Gradio Chat UI - Project Summary

## 📁 Files Created

### Main Application Files

1. **`gradio_app.py`** - Original Gradio chat UI
   - Uses `vertexai.agent_engines._agent_engines.AgentEngine`
   - Simple agent selection and chat interface

2. **`gradio_app_v2.py`** ⭐ **RECOMMENDED**
   - Uses `google.cloud.aiplatform_v1beta1.AgentEnginesServiceClient`
   - More robust agent listing and querying
   - Better error handling
   - Session ID support for conversation continuity
   - Enhanced UI with examples and tips

3. **`run_gradio.py`** - Simple launcher script
   - Quick way to start the Gradio app

### Setup & Documentation

4. **`setup_gradio.sh`** - Setup script
   - Checks prerequisites
   - Installs dependencies
   - Provides next steps

5. **`GRADIO_README.md`** - Comprehensive documentation
   - Features overview
   - Installation guide
   - Usage examples
   - Troubleshooting tips

6. **`QUICKSTART_GRADIO.md`** - Quick start guide
   - Step-by-step setup instructions
   - Running options
   - Common issues and solutions
   - Success checklist

### Updated Files

7. **`requirements.txt`** - Added dependencies
   - `gradio==5.8.0`
   - `python-dotenv==1.0.0`

## 🚀 How to Use

### Quick Start (3 steps)

```bash
# 1. Install dependencies
pip install -r requirements.txt

# 2. Authenticate
gcloud auth application-default login

# 3. Run the app
python gradio_app_v2.py
```

Then open http://localhost:7860 in your browser!

### Alternative Methods

```bash
# Using launcher
python run_gradio.py

# Using setup script
./setup_gradio.sh
```

## ✨ Key Features

### Agent Discovery
- Automatically lists all deployed agents from Agent Engine
- Refresh button to update the agent list dynamically
- Clear display names for easy selection

### Chat Interface
- Real-time conversation with the selected agent
- Chat history with copy functionality
- Session ID for conversation continuity
- Modern, clean UI built with Gradio

### RAG Capabilities
The agents support full RAG operations:
- 📋 List document corpora
- 🔍 Query documents
- ➕ Create new corpora
- 📄 Add documents
- ℹ️ Get corpus information
- 🗑️ Delete documents/corpora

## 📊 Architecture

```
User Browser
    ↓
Gradio Web UI (Port 7860)
    ↓
gradio_app_v2.py
    ↓
AgentEnginesServiceClient
    ↓
Google Cloud Agent Engine
    ↓
Deployed RAG Agent
    ↓
Vertex AI RAG Service
    ↓
Document Corpora
```

## 🔧 Configuration

### Environment Variables (in `rag_agent/.env`)

```bash
GOOGLE_CLOUD_PROJECT="your-project-id"
GOOGLE_CLOUD_LOCATION="us-central1"  # or your region
GOOGLE_GENAI_USE_VERTEXAI="true"
```

### Customization Options

In `gradio_app_v2.py`, you can modify:

- **Server Port**: Default is 7860
  ```python
  demo.launch(server_port=8080)
  ```

- **Theme**: Change the UI theme
  ```python
  gr.Blocks(theme=gr.themes.Glass())
  ```

- **Share**: Create a temporary public URL
  ```python
  demo.launch(share=True)
  ```

## 🎯 Example Usage

### List Available Corpora
```
User: List all available corpora
Agent: Here are the available corpora:
       1. tech-docs
       2. company-handbook
       3. research-papers
```

### Query Documents
```
User: What information do you have about machine learning?
Agent: Based on the documents in the 'research-papers' corpus,
       here's what I found about machine learning...
```

### Create Corpus
```
User: Create a new corpus called 'customer-feedback'
Agent: ✅ Successfully created corpus 'customer-feedback'
```

### Add Data
```
User: Add this Google Drive file to the corpus: https://drive.google.com/file/d/abc123
Agent: ✅ Successfully added the document to the corpus
```

## 🐛 Common Issues & Solutions

### No Agents Found
- **Cause**: No agents deployed to Agent Engine
- **Solution**: Run `make deploy` to deploy an agent

### Authentication Error
- **Cause**: Not authenticated with Google Cloud
- **Solution**: Run `gcloud auth application-default login`

### Module Not Found
- **Cause**: Dependencies not installed
- **Solution**: Run `pip install -r requirements.txt`

### Wrong Location
- **Cause**: `GOOGLE_CLOUD_LOCATION` doesn't match the agent location
- **Solution**: Update the `.env` file with the correct location

## 📈 Next Steps

### Enhancements You Can Add

1. **File Upload**: Add a Gradio File component for document upload
   ```python
   file_upload = gr.File(label="Upload Document")
   ```

2. **Multi-modal**: Add image support
   ```python
   image_input = gr.Image(label="Upload Image")
   ```

3. **Analytics**: Track usage and conversations
   ```python
   # Log queries to BigQuery or Cloud Logging
   ```

4. **Authentication**: Add user authentication
   ```python
   demo.launch(auth=("username", "password"))
   ```

5. **Streaming**: Add streaming responses
   ```python
   # Use generator pattern for streaming
   ```
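
The streaming idea in item 5 can be sketched concretely: Gradio treats a generator callback as a streaming handler, so yielding progressively longer strings renders as incremental output. The helper below is illustrative; `query_agent` is hypothetical, and a real agent would yield tokens as they arrive rather than slicing a finished reply:

```python
from typing import Iterator

def stream_reply(full_reply: str, chunk_size: int = 8) -> Iterator[str]:
    """Yield progressively longer prefixes of the reply.

    Gradio renders each yielded value in place of the previous one,
    which looks like token-by-token streaming in the chat window.
    """
    for end in range(chunk_size, len(full_reply) + chunk_size, chunk_size):
        yield full_reply[:end]

# In the app, the chat handler would yield instead of return:
# def respond(message, history):
#     for partial in stream_reply(query_agent(message)):  # query_agent is hypothetical
#         yield partial
```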

## 📚 Resources

- **Gradio**: https://gradio.app/docs
- **Vertex AI Agent Engine**: https://cloud.google.com/agent-engine/docs
- **Google ADK**: https://github.com/google/genai-adk

## 🎉 Summary

You now have a fully functional Gradio chat UI that:
- ✅ Lists all deployed agents
- ✅ Allows agent selection via dropdown
- ✅ Provides a real-time chat interface
- ✅ Supports full RAG operations
- ✅ Maintains conversation context
- ✅ Has a modern, user-friendly UI

Ready to chat with your RAG agents! 🤖💬
Makefile
ADDED
@@ -0,0 +1,80 @@
# ==============================================================================
# Installation & Setup
# ==============================================================================

# Install dependencies using the uv package manager
install:
	@command -v uv >/dev/null 2>&1 || { echo "uv is not installed. Installing uv..."; curl -LsSf https://astral.sh/uv/0.8.13/install.sh | sh; source $$HOME/.local/bin/env; }
	uv sync

# ==============================================================================
# Playground Targets
# ==============================================================================

# Launch local dev playground
playground:
	@echo "==============================================================================="
	@echo "| 🚀 Starting your agent playground...                                        |"
	@echo "|                                                                             |"
	@echo "| 💡 Try asking: What's the weather in San Francisco?                         |"
	@echo "|                                                                             |"
	@echo "| 🔍 IMPORTANT: Select the 'rag_agent' folder to interact with your agent.    |"
	@echo "==============================================================================="
	uv run adk web . --port 8501 --reload_agents

# ==============================================================================
# Backend Deployment Targets
# ==============================================================================

# Deploy the agent remotely
deploy:
	# Export dependencies to a requirements file using uv export.
	(uv export --no-hashes --no-header --no-dev --no-emit-project --no-annotate > rag_agent/app_utils/.requirements.txt 2>/dev/null || \
	uv export --no-hashes --no-header --no-dev --no-emit-project > rag_agent/app_utils/.requirements.txt) && \
	uv run -m rag_agent.app_utils.deploy \
		--source-packages=./rag_agent \
		--display-name="bitcast_agent_focus" \
		--entrypoint-module=rag_agent.agent_engine_app \
		--entrypoint-object=agent_engine \
		--requirements-file=rag_agent/app_utils/.requirements.txt

# Alias for 'make deploy' for backward compatibility
backend: deploy

# ==============================================================================
# Infrastructure Setup
# ==============================================================================

# Set up development environment resources using Terraform
setup-dev-env:
	PROJECT_ID=$$(gcloud config get-value project) && \
	(cd deployment/terraform/dev && terraform init && terraform apply --var-file vars/env.tfvars --var dev_project_id=$$PROJECT_ID --auto-approve)

# ==============================================================================
# Testing & Code Quality
# ==============================================================================

# Run unit and integration tests
test:
	uv sync --dev
	uv run pytest tests/unit && uv run pytest tests/integration

# Run code quality checks (codespell, ruff, mypy)
lint:
	uv sync --dev --extra lint
	uv run codespell
	uv run ruff check . --diff
	uv run ruff format . --check --diff
	uv run mypy .

# ==============================================================================
# Gemini Enterprise Integration
# ==============================================================================

# Register the deployed agent to Gemini Enterprise
# Usage: make register-gemini-enterprise (interactive - will prompt for required details)
# For non-interactive use, set env vars: ID or GEMINI_ENTERPRISE_APP_ID (full GE resource name)
# Optional env vars: GEMINI_DISPLAY_NAME, GEMINI_DESCRIPTION, GEMINI_TOOL_DESCRIPTION, AGENT_ENGINE_ID
register-gemini-enterprise:
	@uvx agent-starter-pack@0.21.0 register-gemini-enterprise
QUICKSTART_GRADIO.md
ADDED
@@ -0,0 +1,266 @@
# 🚀 Quick Start Guide - Gradio Chat UI

This guide will help you set up and run the Gradio chat interface for your RAG Agent.

## 📋 Prerequisites

1. **Python 3.10+** installed
2. **Google Cloud Project** with Agent Engine enabled
3. **Agent deployed** to Agent Engine (or follow the deployment steps below)
4. **Authentication** set up with Google Cloud

## 🛠️ Setup Instructions

### Step 1: Install Dependencies

```bash
pip install -r requirements.txt
```

This will install:
- `gradio` - Web UI framework
- `google-cloud-aiplatform` - Google Cloud AI Platform SDK
- `vertexai` - Vertex AI SDK
- `python-dotenv` - Environment variable management
- Other required packages

### Step 2: Configure Environment

Make sure your `rag_agent/.env` file has the correct values:

```bash
GOOGLE_CLOUD_PROJECT="your-project-id"
GOOGLE_CLOUD_LOCATION="your-location"  # e.g., us-central1, asia-southeast1
GOOGLE_GENAI_USE_VERTEXAI="true"
```

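After editing `.env`, you can sanity-check that the required variables are present before launching the app. This is a small stdlib-only sketch (the variable names come from the `.env` shown above; the script itself is not part of the repo):

```python
import os

# Variable names taken from the .env template above
REQUIRED_VARS = ["GOOGLE_CLOUD_PROJECT", "GOOGLE_CLOUD_LOCATION", "GOOGLE_GENAI_USE_VERTEXAI"]

def missing_env_vars(environ=os.environ):
    """Return the names of required variables that are unset or empty."""
    return [name for name in REQUIRED_VARS if not environ.get(name)]

if __name__ == "__main__":
    missing = missing_env_vars()
    if missing:
        print("Missing:", ", ".join(missing))
    else:
        print("Environment looks good")
```

Run it after `load_dotenv()` (or with the variables exported) to catch configuration gaps early.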
### Step 3: Authenticate with Google Cloud

```bash
gcloud auth application-default login
```

### Step 4: Deploy Your Agent (if not already deployed)

```bash
make deploy
```

Or manually:

```bash
python -m rag_agent.app_utils.deploy
```

## ▶️ Running the App

### Option 1: Run directly

```bash
python gradio_app_v2.py
```

### Option 2: Use the launcher script

```bash
python run_gradio.py
```

### Option 3: Use the setup script

```bash
./setup_gradio.sh
```

The app will start on **http://localhost:7860**.

## 🎯 Using the Chat Interface

### 1. Select an Agent

- Use the dropdown menu to choose from deployed agents
- Click "🔄 Refresh" to update the agent list

### 2. Set Session ID (Optional)

- Enter a unique session ID to maintain conversation context
- Same session ID = continuous conversation
- Different session ID = new conversation

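Conceptually, the UI keeps one conversation per session ID. A minimal sketch of that bookkeeping (names are illustrative only; the real app stores sessions on the Agent Engine side, not in a local dict):

```python
from collections import defaultdict

# Maps session_id -> list of (user_message, agent_reply) turns.
# Illustrative model of "same ID = continuous conversation".
histories = defaultdict(list)

def record_turn(session_id, user_message, agent_reply):
    """Append one exchange to the conversation for this session."""
    histories[session_id].append((user_message, agent_reply))
    return histories[session_id]

# Reusing "s1" grows the same conversation; a new ID starts empty.
record_turn("s1", "hello", "hi there")
record_turn("s1", "list corpora", "you have 2 corpora")
```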
### 3. Start Chatting

Example queries:

```
📋 List all available corpora
```

```
🔍 What information do you have about machine learning?
```

```
➕ Create a new corpus called 'tech-docs'
```

```
📄 Add this file to the corpus: https://drive.google.com/file/d/YOUR_FILE_ID
```

```
ℹ️ Show me details about the 'tech-docs' corpus
```

```
🗑️ Delete the 'old-docs' corpus
```

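Each example query above maps onto one of the agent's RAG tools (the tool names come from `rag_agent/tools/` in this repo). The routing is done by the model, so this table is only a mental model of what to expect, not actual app logic:

```python
# Rough mapping of the example prompts above to the agent's tools
# (tool module names from rag_agent/tools/).
EXAMPLE_TO_TOOL = {
    "List all available corpora": "list_corpora",
    "What information do you have about machine learning?": "rag_query",
    "Create a new corpus called 'tech-docs'": "create_corpus",
    "Show me details about the 'tech-docs' corpus": "get_corpus_info",
    "Delete the 'old-docs' corpus": "delete_corpus",
}

def expected_tool(prompt):
    """Which tool a prompt is likely to trigger; plain questions go to rag_query."""
    return EXAMPLE_TO_TOOL.get(prompt, "rag_query")
```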
## 🔧 Troubleshooting

### Issue: "No agents found"

**Solution:**
1. Verify your agent is deployed:
   ```bash
   gcloud agent-engines list --location=YOUR_LOCATION
   ```
2. Check that your environment variables are correct
3. Ensure you have the proper permissions

### Issue: Authentication errors

**Solution:**
```bash
gcloud auth application-default login
gcloud config set project YOUR_PROJECT_ID
```

### Issue: Import errors

**Solution:**
```bash
pip install -r requirements.txt --upgrade
```

### Issue: Connection timeout

**Solution:**
1. Check that your `GOOGLE_CLOUD_LOCATION` matches your agent's location
2. Verify firewall settings
3. Try a different location/region

## 📊 Features

### ✅ Available Features

- ✨ **Auto-discovery** of all deployed agents
- 💬 **Real-time chat** with selected agents
- 🔄 **Dynamic refresh** of agent list
- 📝 **Chat history** with copy functionality
- 🎨 **Modern UI** with Gradio
- 🔐 **Session management** for conversation continuity
- 🤖 **Full RAG capabilities** (query, create, delete, etc.)

### 🎨 UI Components

- **Agent Dropdown**: Select from available agents
- **Session ID**: Maintain conversation context
- **Chat History**: View the full conversation
- **Message Input**: Type and send messages
- **Action Buttons**: Send, Clear, Refresh
- **Examples Accordion**: View capabilities and examples

## 🌐 Deployment Options

### Local Development

Default configuration (listens on all interfaces, port 7860):
```python
demo.launch(
    server_name="0.0.0.0",
    server_port=7860,
    share=False
)
```

### Share with Temporary Public URL

```python
demo.launch(share=True)  # Creates a temporary gradio.live URL
```

### Deploy to Hugging Face Spaces

1. Create a new Space on [Hugging Face](https://huggingface.co/spaces)
2. Push your code to the Space repository
3. Add secrets for your Google Cloud credentials

### Deploy to Cloud Run

```bash
# Build and deploy to Cloud Run (requires a Dockerfile in the project root)
gcloud run deploy gradio-rag-agent \
  --source . \
  --region=us-central1 \
  --allow-unauthenticated
```
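The Cloud Run step assumes a Dockerfile exists. A minimal sketch, under the assumption that `gradio_app_v2.py` is the entry point and the app listens on 7860 (adjust the Python version and entry file to match your project):

```dockerfile
FROM python:3.11-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
# Cloud Run injects $PORT; make sure the app's launch() honors it
# if you rely on Cloud Run's default port handling.
ENV PORT=7860
EXPOSE 7860
CMD ["python", "gradio_app_v2.py"]
```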
## 📝 Customization

### Change Port

```python
demo.launch(server_port=8080)
```

### Change Theme

```python
with gr.Blocks(theme=gr.themes.Glass()) as demo:
    # or gr.themes.Monochrome(), gr.themes.Soft()
    ...
```

### Add Custom Features

Edit `gradio_app_v2.py` to add:
- Custom styling
- Additional inputs/outputs
- File upload capabilities
- Analytics tracking

## 🔒 Security Notes

- Never commit your `.env` file
- Use service accounts with the minimal required permissions
- Enable authentication for production deployments
- Review and sanitize user inputs if deploying publicly

## 📚 Additional Resources

- [Gradio Documentation](https://gradio.app/docs)
- [Vertex AI Agent Engine Docs](https://cloud.google.com/agent-engine/docs)
- [Google Cloud Authentication](https://cloud.google.com/docs/authentication)

## 🆘 Getting Help

If you encounter issues:

1. Check the console output for error messages
2. Verify all prerequisites are met
3. Review the troubleshooting section above
4. Check Google Cloud logs for agent-related issues
5. Open an issue in the repository

## 🎉 Success Checklist

- [ ] Dependencies installed
- [ ] Environment configured
- [ ] Authenticated with Google Cloud
- [ ] Agent deployed to Agent Engine
- [ ] Gradio app running
- [ ] Can select agent from dropdown
- [ ] Can send messages and receive responses
- [ ] Chat history displays correctly

Happy chatting! 🤖💬
VERSIONS_COMPARISON.md
ADDED
@@ -0,0 +1,205 @@
# Gradio App Versions Comparison

## 📊 Overview

Two versions of the Gradio chat UI have been created. Here's a comparison to help you choose.

## Version 1: `gradio_app.py`

### Implementation
- Uses `vertexai.agent_engines._agent_engines.AgentEngine`
- Direct agent instantiation approach

### Pros
- ✅ Simpler code structure
- ✅ Direct AgentEngine object usage
- ✅ Good for basic use cases

### Cons
- ❌ Less robust error handling
- ❌ May have issues with agent listing
- ❌ No session management
- ❌ Uses an internal/private API (`_agent_engines`)

### Best For
- Quick prototypes
- Development/testing
- Simple single-agent scenarios

## Version 2: `gradio_app_v2.py` ⭐ **RECOMMENDED**

### Implementation
- Uses `google.cloud.aiplatform_v1beta1.AgentEnginesServiceClient`
- Official Google Cloud API client

### Pros
- ✅ More reliable agent listing
- ✅ Better error handling
- ✅ Session ID support for conversation continuity
- ✅ Uses the official/public API
- ✅ Enhanced UI with examples
- ✅ More informative error messages
- ✅ Better structured code

### Cons
- ⚠️ Slightly more complex
- ⚠️ Requires understanding of request/response patterns

### Best For
- Production deployments
- Multi-agent scenarios
- Long-term maintenance
- Full-featured applications

## 🔍 Detailed Comparison

| Feature | gradio_app.py | gradio_app_v2.py |
|---------|---------------|------------------|
| Agent Listing | Basic | Robust with caching |
| Error Handling | Basic | Comprehensive |
| Session Management | ❌ No | ✅ Yes |
| UI/UX | Simple | Enhanced with examples |
| API Stability | Internal API | Official API |
| Code Structure | Simpler | Well-organized |
| Documentation | Basic | Detailed |
| Examples | ❌ No | ✅ Yes |
| Avatar Support | ❌ No | ✅ Yes |
| Status Messages | Basic | Emoji + detailed |

## 💻 Code Examples

### Version 1 - Agent Querying
```python
agent_engine = AgentEngine(name=agent_resource_name)
response = agent_engine.query(query=message)
response_text = response.text
```

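In practice the Version 1 response shape is not guaranteed, and `gradio_app.py` defends against several possibilities. That extraction logic, pulled out of the app as a standalone helper for clarity:

```python
def extract_response_text(response):
    """Mirror of gradio_app.py's defensive extraction: an object with a
    .text attribute, a dict with a 'text' key, or fall back to str()."""
    if hasattr(response, "text"):
        return response.text
    if isinstance(response, dict) and "text" in response:
        return response["text"]
    return str(response)
```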
### Version 2 - Agent Querying
```python
client = aiplatform.AgentEnginesServiceClient(
    client_options={"api_endpoint": f"{LOCATION}-aiplatform.googleapis.com"}
)

request = aiplatform.QueryAgentEngineRequest(
    name=agent_name,
    query_config=aiplatform.QueryConfig(query=query_text),
    session_id=session_id,
)

response = client.query_agent_engine(request=request)
```

## 🎯 Recommendation

### Use `gradio_app_v2.py` if:
- ✅ You want a production-ready solution
- ✅ You need session management
- ✅ You want better error handling
- ✅ You prefer using official APIs
- ✅ You need a more polished UI

### Use `gradio_app.py` if:
- ✅ You're just testing quickly
- ✅ You prefer simpler code
- ✅ You don't need advanced features
- ✅ You want minimal dependencies

## 🚀 Migration Path

If you start with v1 and want to upgrade to v2:

1. **No code changes needed in your agent** - both versions work with the same deployed agents

2. **Switch the file you run**:
   ```bash
   # From
   python gradio_app.py

   # To
   python gradio_app_v2.py
   ```

3. **Benefits immediately available**:
   - Session management
   - Better error messages
   - Enhanced UI
   - More stable agent listing

## 🔧 Technical Differences

### Agent Listing

**Version 1:**
```python
for agent in client.agent_engines.list():
    if agent.api_resource.display_name == display_name:
        ...
```

**Version 2:**
```python
request = aiplatform.ListAgentEnginesRequest(parent=parent)
for agent in client.list_agent_engines(request=request):
    agents.append({
        "name": agent.name,
        "display_name": agent.display_name
    })
```

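Both listing styles ultimately feed the same display-name lookup that backs the dropdown. A small sketch of that mapping, pure Python and independent of either SDK (the sample resource names are illustrative):

```python
def build_agent_index(agents):
    """Map display_name -> resource name, as both apps do for the dropdown."""
    return {a["display_name"]: a["name"] for a in agents}

agents = [
    {"name": "projects/p/locations/l/reasoningEngines/1", "display_name": "rag-agent"},
    {"name": "projects/p/locations/l/reasoningEngines/2", "display_name": "test-agent"},
]
index = build_agent_index(agents)
```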
### Error Handling

**Version 1:**
```python
except Exception as e:
    error_msg = f"Error communicating with agent: {str(e)}"
```

**Version 2:**
```python
except Exception as e:
    error_msg = f"❌ Error: {str(e)}"
    # More context provided to the user
```

## 📈 Feature Roadmap

### Planned for Both Versions
- File upload support
- Multi-modal inputs (images, PDFs)
- Export chat history
- Custom themes

### Already in V2 Only
- ✅ Session management
- ✅ Enhanced error messages
- ✅ UI examples accordion
- ✅ Avatar support

## 🎓 Learning Path

1. **Start with V1** to understand the basics
2. **Review V2** to see best practices
3. **Use V2** for actual deployment
4. **Customize** based on your needs

## 🏆 Winner: `gradio_app_v2.py`

For most use cases, **Version 2** is the recommended choice due to:
- Better stability
- Official API usage
- Enhanced features
- Production-ready design

However, both versions are maintained and functional!

## 🤝 Support

Both versions support the same agent capabilities:
- 📋 List corpora
- 🔍 Query documents
- ➕ Create corpus
- 📄 Add data
- ℹ️ Get corpus info
- 🗑️ Delete document/corpus

Choose based on your needs and preferences! 🚀
deployment/README.md
ADDED
@@ -0,0 +1,11 @@
# Deployment

This directory contains the Terraform configurations for provisioning the Google Cloud infrastructure your agent needs.

The recommended way to deploy the infrastructure and set up the CI/CD pipeline is the `agent-starter-pack setup-cicd` command, run from the root of your project.

For a more hands-on approach, you can instead apply the Terraform configurations manually.

For detailed information on the deployment process, infrastructure, and CI/CD pipelines, refer to the official documentation:

**[Agent Starter Pack Deployment Guide](https://googlecloudplatform.github.io/agent-starter-pack/guide/deployment.html)**
deployment_metadata.json
ADDED
@@ -0,0 +1,6 @@
{
  "remote_agent_engine_id": "projects/38827506989/locations/asia-southeast1/reasoningEngines/6874287434343907328",
  "deployment_target": "agent_engine",
  "is_a2a": false,
  "deployment_timestamp": "2025-11-22T15:59:29.627181"
}
gradio_app.py
ADDED
@@ -0,0 +1,193 @@
"""
Gradio Chat UI for RAG Agent
"""

import os
import sys
import gradio as gr
import vertexai
from vertexai.agent_engines._agent_engines import AgentEngine
from google.cloud import aiplatform
from dotenv import load_dotenv

# Add the project root to the path
sys.path.insert(0, os.path.dirname(os.path.abspath(__file__)))

# Load environment variables from rag_agent/.env
env_path = os.path.join(os.path.dirname(__file__), "rag_agent", ".env")
load_dotenv(env_path)

PROJECT_ID = os.environ.get("GOOGLE_CLOUD_PROJECT")
LOCATION = os.environ.get("GOOGLE_CLOUD_LOCATION", "us-central1")

# Initialize Vertex AI
vertexai.init(project=PROJECT_ID, location=LOCATION)


def list_available_agents():
    """List all available agents from Agent Engine."""
    try:
        client = aiplatform.gapic.AgentEnginesServiceClient(
            client_options={"api_endpoint": f"{LOCATION}-aiplatform.googleapis.com"}
        )
        parent = f"projects/{PROJECT_ID}/locations/{LOCATION}"

        agents = []
        for agent in client.list_agent_engines(parent=parent):
            agent_info = {
                "name": agent.name,
                "display_name": agent.display_name,
            }
            agents.append(agent_info)

        return agents
    except Exception as e:
        print(f"Error listing agents: {e}")
        return []


def get_agent_names():
    """Get list of agent display names for dropdown."""
    agents = list_available_agents()
    if not agents:
        return ["No agents found"]
    return [agent["display_name"] for agent in agents]


def get_agent_by_display_name(display_name):
    """Get agent resource name by display name."""
    agents = list_available_agents()
    for agent in agents:
        if agent["display_name"] == display_name:
            return agent["name"]
    return None


def chat_with_agent(message, history, agent_name):
    """Send message to selected agent and get response."""
    if not message:
        return history

    if agent_name == "No agents found":
        history.append((message, "Error: No agents available. Please deploy an agent first."))
        return history

    try:
        # Get the agent resource name
        agent_resource_name = get_agent_by_display_name(agent_name)
        if not agent_resource_name:
            history.append((message, f"Error: Could not find agent '{agent_name}'"))
            return history

        # Create AgentEngine instance
        agent_engine = AgentEngine(name=agent_resource_name)

        # Send query to agent
        response = agent_engine.query(query=message)

        # Extract response text
        if hasattr(response, 'text'):
            response_text = response.text
        elif isinstance(response, dict) and 'text' in response:
            response_text = response['text']
        else:
            response_text = str(response)

        history.append((message, response_text))

    except Exception as e:
        error_msg = f"Error communicating with agent: {str(e)}"
        history.append((message, error_msg))

    return history


def refresh_agents():
    """Refresh the list of available agents."""
    return gr.Dropdown(choices=get_agent_names(), value=get_agent_names()[0])


# Create Gradio interface
with gr.Blocks(title="RAG Agent Chat", theme=gr.themes.Soft()) as demo:
    gr.Markdown("# 🤖 RAG Agent Chat Interface")
    gr.Markdown("Select an agent and start chatting!")

    with gr.Row():
        with gr.Column(scale=3):
            agent_dropdown = gr.Dropdown(
                choices=get_agent_names(),
                value=get_agent_names()[0] if get_agent_names() else "No agents found",
                label="Select Agent",
                interactive=True
            )
        with gr.Column(scale=1):
            refresh_btn = gr.Button("🔄 Refresh Agents", size="sm")

    chatbot = gr.Chatbot(
        label="Chat History",
        height=500,
        show_copy_button=True
    )

    with gr.Row():
        msg = gr.Textbox(
            label="Your Message",
            placeholder="Type your message here...",
            scale=4,
            lines=2
        )
        submit_btn = gr.Button("Send", variant="primary", scale=1)

    clear_btn = gr.Button("Clear Chat")

    gr.Markdown("""
    ### 📝 Agent Capabilities
    - **Query Documents**: Ask questions and retrieve information from document corpora
    - **List Corpora**: See all available document collections
    - **Create Corpus**: Create new document collections
    - **Add Data**: Add documents to existing corpora
    - **Get Corpus Info**: View detailed corpus information
    - **Delete Document/Corpus**: Remove documents or entire corpora
    """)

    # Event handlers
    def submit_message(message, history, agent_name):
        history = chat_with_agent(message, history, agent_name)
        return "", history

    submit_btn.click(
        submit_message,
        inputs=[msg, chatbot, agent_dropdown],
        outputs=[msg, chatbot]
    )

    msg.submit(
        submit_message,
        inputs=[msg, chatbot, agent_dropdown],
        outputs=[msg, chatbot]
    )

    clear_btn.click(lambda: [], outputs=chatbot)

    refresh_btn.click(
        refresh_agents,
        outputs=agent_dropdown
    )


if __name__ == "__main__":
    # Check if required environment variables are set
    if not PROJECT_ID:
        print("⚠️ Warning: GOOGLE_CLOUD_PROJECT environment variable not set")
        print("Please set it in your .env file or environment")

    print("🚀 Starting Gradio app...")
    print(f"📍 Project: {PROJECT_ID}")
    print(f"📍 Location: {LOCATION}")
    print(f"🔍 Found {len(get_agent_names())} agent(s)")

    demo.launch(
        server_name="0.0.0.0",
        server_port=7860,
        share=False
    )
gradio_app_v2.py
ADDED
@@ -0,0 +1,443 @@
"""
Alternative Gradio Chat UI using Agent Engine Client directly
This version uses the Vertex AI SDK for Python for more reliable agent listing and querying.
"""

import os
import sys
import gradio as gr
import vertexai
from dotenv import load_dotenv

# Add the project root to the path
sys.path.insert(0, os.path.dirname(os.path.abspath(__file__)))

# Load environment variables from rag_agent/.env
env_path = os.path.join(os.path.dirname(__file__), "rag_agent", ".env")
load_dotenv(env_path)

PROJECT_ID = os.environ.get("GOOGLE_CLOUD_PROJECT")
LOCATION = os.environ.get("GOOGLE_CLOUD_LOCATION", "us-central1")

# Initialize Vertex AI
vertexai.init(project=PROJECT_ID, location=LOCATION)

# Global state for agents and sessions
agents_cache = []
# Store session_id per agent to maintain conversation context
agent_sessions = {}


def list_available_agents():
    """List all available agents from Agent Engine."""
    global agents_cache
    try:
        client = vertexai.Client(project=PROJECT_ID, location=LOCATION)

        agents = []
        for agent in client.agent_engines.list():
            # Access the underlying api_resource which contains name and display_name
            agent_info = {
                "name": agent.api_resource.name,
                "display_name": agent.api_resource.display_name or agent.api_resource.name.split("/")[-1],
            }
            agents.append(agent_info)

        agents_cache = agents
        return agents
    except Exception as e:
        print(f"Error listing agents: {e}")
        import traceback
        traceback.print_exc()
        return []


def get_agent_names():
    """Get list of agent display names for dropdown."""
    agents = list_available_agents()
    if not agents:
        return ["No agents found - Please deploy an agent first"]
    return [agent["display_name"] for agent in agents]


def get_agent_by_display_name(display_name):
    """Get agent resource name by display name."""
    for agent in agents_cache:
        if agent["display_name"] == display_name:
            return agent["name"]
    return None


def query_agent(agent_name, query_text, session_id=None):
    """Query the agent engine."""
    try:
        client = vertexai.Client(project=PROJECT_ID, location=LOCATION)
        adk_app = client.agent_engines.get(name=agent_name)

        print(f"\n{'='*60}")
        print("DEBUG: Starting query...")
        print(f"Agent: {agent_name}")
        print(f"Query: {query_text}")
        print(f"Session ID: {session_id}")
        print(f"{'='*60}\n")

        # Use async_stream_query and wrap it with asyncio.run()
        import asyncio

        async def run_query():
            result_parts = []
            event_count = 0

            # Create or get existing session for this agent
            global agent_sessions
            if agent_name not in agent_sessions:
                print("Creating new session for agent...")
                session = await adk_app.async_create_session(user_id="gradio-user")
                print("Created session:", session)
                agent_sessions[agent_name] = session["id"]

            current_session_id = agent_sessions[agent_name]

            # Build kwargs with session_id and user_id
            kwargs = {
                "session_id": current_session_id,
                "user_id": "gradio-user",
                "message": query_text,
            }

            print(f"Query kwargs: {kwargs}\n")

            async for event in adk_app.async_stream_query(**kwargs):
                event_count += 1
                print(f"\n--- Event #{event_count} ---")
                print(f"Event type: {type(event)}")
                print(f"Event: {event}")

                if isinstance(event, dict):
                    print(f"Event keys: {list(event.keys())}")
                    # Extract text from content.parts
                    content = event.get('content', {})
                    print(f"Content: {content}")
                    print(f"Content type: {type(content)}")

                    if isinstance(content, dict):
                        parts = content.get('parts', [])
                        print(f"Parts: {parts}")
                        print(f"Parts count: {len(parts)}")

                        for i, part in enumerate(parts):
                            print(f"  Part {i}: {part}")
                            print(f"  Part {i} type: {type(part)}")
                            if isinstance(part, dict):
                                print(f"  Part {i} keys: {list(part.keys())}")
                                if 'text' in part:
                                    text = part['text']
                                    print(f"  ✓ Found text: {text[:100]}...")
                                    result_parts.append(text)
                else:
                    print("Event is not a dict!")
                print(f"--- End Event #{event_count} ---\n")

            print(f"\n{'='*60}")
            print("DEBUG: Query complete")
            print(f"Total events: {event_count}")
            print(f"Total text parts extracted: {len(result_parts)}")
            print(f"{'='*60}\n")

            return "\n".join(result_parts) if result_parts else "No response received"

        # Run the async function
        return asyncio.run(run_query())

    except Exception as e:
| 154 |
+
print(f"\n{'='*60}")
|
| 155 |
+
print(f"ERROR in query_agent:")
|
| 156 |
+
import traceback
|
| 157 |
+
traceback.print_exc()
|
| 158 |
+
print(f"{'='*60}\n")
|
| 159 |
+
raise Exception(f"Error querying agent: {str(e)}")
|
| 160 |
+
|
| 161 |
+
|
| 162 |
+
def chat_with_agent(message, history, agent_name):
|
| 163 |
+
"""Send message to selected agent and get response."""
|
| 164 |
+
if not message or not message.strip():
|
| 165 |
+
return history
|
| 166 |
+
|
| 167 |
+
if "No agents found" in agent_name:
|
| 168 |
+
history.append((message, "❌ Error: No agents available. Please deploy an agent first."))
|
| 169 |
+
return history
|
| 170 |
+
|
| 171 |
+
try:
|
| 172 |
+
# Get the agent resource name
|
| 173 |
+
agent_resource_name = get_agent_by_display_name(agent_name)
|
| 174 |
+
if not agent_resource_name:
|
| 175 |
+
history.append((message, f"❌ Error: Could not find agent '{agent_name}'"))
|
| 176 |
+
return history
|
| 177 |
+
|
| 178 |
+
# Query the agent (session is managed per agent to maintain context)
|
| 179 |
+
response_text = query_agent(agent_resource_name, message)
|
| 180 |
+
|
| 181 |
+
history.append((message, response_text))
|
| 182 |
+
|
| 183 |
+
except Exception as e:
|
| 184 |
+
error_msg = f"❌ Error: {str(e)}"
|
| 185 |
+
history.append((message, error_msg))
|
| 186 |
+
|
| 187 |
+
return history
|
| 188 |
+
|
| 189 |
+
|
| 190 |
+
def refresh_agents():
|
| 191 |
+
"""Refresh the list of available agents."""
|
| 192 |
+
names = get_agent_names()
|
| 193 |
+
return gr.Dropdown(choices=names, value=names[0])
|
| 194 |
+
|
| 195 |
+
|
| 196 |
+
def clear_chat_and_session(agent_name):
|
| 197 |
+
"""Clear chat history and reset session for the agent."""
|
| 198 |
+
global agent_sessions
|
| 199 |
+
agent_resource_name = get_agent_by_display_name(agent_name)
|
| 200 |
+
if agent_resource_name and agent_resource_name in agent_sessions:
|
| 201 |
+
print(f"Clearing session for agent: {agent_name}")
|
| 202 |
+
del agent_sessions[agent_resource_name]
|
| 203 |
+
return []
|
| 204 |
+
|
| 205 |
+
|
| 206 |
+
# Create LINE-themed Gradio interface
|
| 207 |
+
line_theme = gr.themes.Base(
|
| 208 |
+
primary_hue="green",
|
| 209 |
+
secondary_hue="emerald",
|
| 210 |
+
neutral_hue="slate",
|
| 211 |
+
font=[gr.themes.GoogleFont("Inter"), "ui-sans-serif", "system-ui", "sans-serif"],
|
| 212 |
+
).set(
|
| 213 |
+
# LINE green colors
|
| 214 |
+
button_primary_background_fill="#06C755",
|
| 215 |
+
button_primary_background_fill_hover="#05B04C",
|
| 216 |
+
button_primary_text_color="white",
|
| 217 |
+
button_secondary_background_fill="white",
|
| 218 |
+
button_secondary_border_color="#06C755",
|
| 219 |
+
button_secondary_text_color="#06C755",
|
| 220 |
+
|
| 221 |
+
# Light background like LINE
|
| 222 |
+
body_background_fill="white",
|
| 223 |
+
body_background_fill_dark="white",
|
| 224 |
+
background_fill_primary="white",
|
| 225 |
+
background_fill_primary_dark="white",
|
| 226 |
+
background_fill_secondary="#F7F7F7",
|
| 227 |
+
background_fill_secondary_dark="#F7F7F7",
|
| 228 |
+
|
| 229 |
+
# Input styling
|
| 230 |
+
input_background_fill="white",
|
| 231 |
+
input_background_fill_dark="white",
|
| 232 |
+
input_border_color="#E0E0E0",
|
| 233 |
+
input_border_color_dark="#E0E0E0",
|
| 234 |
+
|
| 235 |
+
# Block styling
|
| 236 |
+
block_background_fill="white",
|
| 237 |
+
block_background_fill_dark="white",
|
| 238 |
+
block_border_color="#E0E0E0",
|
| 239 |
+
block_border_color_dark="#E0E0E0",
|
| 240 |
+
block_title_text_weight="600",
|
| 241 |
+
block_border_width="1px",
|
| 242 |
+
block_shadow="0 1px 3px rgba(0,0,0,0.05)",
|
| 243 |
+
|
| 244 |
+
# Text colors
|
| 245 |
+
body_text_color="#111111",
|
| 246 |
+
body_text_color_dark="#111111",
|
| 247 |
+
block_title_text_color="#111111",
|
| 248 |
+
block_title_text_color_dark="#111111",
|
| 249 |
+
block_label_text_color="#666666",
|
| 250 |
+
block_label_text_color_dark="#666666",
|
| 251 |
+
)
|
| 252 |
+
|
| 253 |
+
with gr.Blocks(title="RAG Agent Chat", theme=line_theme, css="""
|
| 254 |
+
.gradio-container {
|
| 255 |
+
font-family: -apple-system, BlinkMacSystemFont, 'Segoe UI', Roboto, Oxygen, Ubuntu, sans-serif !important;
|
| 256 |
+
background: white !important;
|
| 257 |
+
}
|
| 258 |
+
.chat-message {
|
| 259 |
+
border-radius: 18px !important;
|
| 260 |
+
padding: 12px 16px !important;
|
| 261 |
+
}
|
| 262 |
+
/* User message styling - light gray bubble on right */
|
| 263 |
+
.message.user {
|
| 264 |
+
background-color: #E5E5EA !important;
|
| 265 |
+
color: #000000 !important;
|
| 266 |
+
border-radius: 18px !important;
|
| 267 |
+
}
|
| 268 |
+
.message.user .message-content {
|
| 269 |
+
color: #000000 !important;
|
| 270 |
+
}
|
| 271 |
+
/* Bot message styling - LINE green bubble on left */
|
| 272 |
+
.message.bot {
|
| 273 |
+
background-color: #E8F5E9 !important;
|
| 274 |
+
color: #000000 !important;
|
| 275 |
+
border-radius: 18px !important;
|
| 276 |
+
}
|
| 277 |
+
.message.bot .message-content {
|
| 278 |
+
color: #000000 !important;
|
| 279 |
+
}
|
| 280 |
+
/* Make sure all text in chat is dark */
|
| 281 |
+
.message * {
|
| 282 |
+
color: #000000 !important;
|
| 283 |
+
}
|
| 284 |
+
/* Fix code blocks and inline code */
|
| 285 |
+
.message code {
|
| 286 |
+
background-color: rgba(0, 0, 0, 0.05) !important;
|
| 287 |
+
color: #000000 !important;
|
| 288 |
+
padding: 2px 6px !important;
|
| 289 |
+
border-radius: 4px !important;
|
| 290 |
+
}
|
| 291 |
+
.message pre {
|
| 292 |
+
background-color: rgba(0, 0, 0, 0.05) !important;
|
| 293 |
+
color: #000000 !important;
|
| 294 |
+
padding: 12px !important;
|
| 295 |
+
border-radius: 8px !important;
|
| 296 |
+
}
|
| 297 |
+
.message pre code {
|
| 298 |
+
background-color: transparent !important;
|
| 299 |
+
}
|
| 300 |
+
/* Fix any remaining dark backgrounds */
|
| 301 |
+
.message p, .message span, .message div {
|
| 302 |
+
color: #000000 !important;
|
| 303 |
+
}
|
| 304 |
+
#component-0, #component-1, #component-2 {
|
| 305 |
+
background: white !important;
|
| 306 |
+
}
|
| 307 |
+
.dark {
|
| 308 |
+
background: white !important;
|
| 309 |
+
}
|
| 310 |
+
""") as demo:
|
| 311 |
+
gr.Markdown("# 💬 RAG Agent Chat")
|
| 312 |
+
gr.Markdown(f"**Project:** `{PROJECT_ID}` | **Location:** `{LOCATION}`")
|
| 313 |
+
|
| 314 |
+
with gr.Row():
|
| 315 |
+
with gr.Column(scale=3):
|
| 316 |
+
agent_dropdown = gr.Dropdown(
|
| 317 |
+
choices=get_agent_names(),
|
| 318 |
+
value=get_agent_names()[0] if get_agent_names() else "No agents found",
|
| 319 |
+
label="Select Agent",
|
| 320 |
+
interactive=True
|
| 321 |
+
)
|
| 322 |
+
with gr.Column(scale=1):
|
| 323 |
+
refresh_btn = gr.Button("🔄 Refresh", size="sm")
|
| 324 |
+
|
| 325 |
+
chatbot = gr.Chatbot(
|
| 326 |
+
label="💬 Messages",
|
| 327 |
+
height=500,
|
| 328 |
+
show_copy_button=True,
|
| 329 |
+
avatar_images=(
|
| 330 |
+
None, # User avatar (none shows default)
|
| 331 |
+
"https://em-content.zobj.net/source/apple/391/robot_1f916.png" # Cute robot emoji as bot avatar
|
| 332 |
+
),
|
| 333 |
+
bubble_full_width=False,
|
| 334 |
+
)
|
| 335 |
+
|
| 336 |
+
with gr.Row():
|
| 337 |
+
msg = gr.Textbox(
|
| 338 |
+
label="",
|
| 339 |
+
placeholder="💭 Type a message...",
|
| 340 |
+
scale=5,
|
| 341 |
+
lines=1,
|
| 342 |
+
container=False,
|
| 343 |
+
)
|
| 344 |
+
|
| 345 |
+
with gr.Row():
|
| 346 |
+
submit_btn = gr.Button("Send", variant="primary", scale=1, size="sm")
|
| 347 |
+
clear_btn = gr.Button("Clear", variant="secondary", scale=1, size="sm")
|
| 348 |
+
|
| 349 |
+
with gr.Accordion("📚 Agent Capabilities & Examples", open=False):
|
| 350 |
+
gr.Markdown("""
|
| 351 |
+
### What the RAG Agent Can Do:
|
| 352 |
+
|
| 353 |
+
- **📋 List Corpora**: View all available document collections
|
| 354 |
+
- Example: *"List all available corpora"*
|
| 355 |
+
- Example: *"What corpora do you have?"*
|
| 356 |
+
|
| 357 |
+
- **🔍 Query Documents**: Ask questions about documents in corpora
|
| 358 |
+
- Example: *"What information do you have about [topic]?"*
|
| 359 |
+
- Example: *"Search for information about X in the corpus"*
|
| 360 |
+
|
| 361 |
+
- **➕ Create Corpus**: Create new document collections
|
| 362 |
+
- Example: *"Create a new corpus called 'company-docs'"*
|
| 363 |
+
|
| 364 |
+
- **📄 Add Data**: Add documents to corpora
|
| 365 |
+
- Example: *"Add this Google Drive file to the corpus: https://drive.google.com/file/d/..."*
|
| 366 |
+
- Example: *"Add data from gs://bucket/file.pdf to the corpus"*
|
| 367 |
+
|
| 368 |
+
- **ℹ️ Get Corpus Info**: View detailed information about corpora
|
| 369 |
+
- Example: *"Show me details about the 'company-docs' corpus"*
|
| 370 |
+
- Example: *"What files are in the corpus?"*
|
| 371 |
+
|
| 372 |
+
- **🗑️ Delete Document/Corpus**: Remove documents or collections
|
| 373 |
+
- Example: *"Delete the document with ID XYZ from the corpus"*
|
| 374 |
+
- Example: *"Delete the 'old-docs' corpus"*
|
| 375 |
+
""")
|
| 376 |
+
|
| 377 |
+
gr.Markdown("""
|
| 378 |
+
---
|
| 379 |
+
💡 **Tip**: The agent automatically maintains conversation context within each chat session.
|
| 380 |
+
""")
|
| 381 |
+
|
| 382 |
+
# Event handlers
|
| 383 |
+
def submit_message(message, history, agent_name):
|
| 384 |
+
history = chat_with_agent(message, history, agent_name)
|
| 385 |
+
return "", history
|
| 386 |
+
|
| 387 |
+
submit_btn.click(
|
| 388 |
+
submit_message,
|
| 389 |
+
inputs=[msg, chatbot, agent_dropdown],
|
| 390 |
+
outputs=[msg, chatbot]
|
| 391 |
+
)
|
| 392 |
+
|
| 393 |
+
msg.submit(
|
| 394 |
+
submit_message,
|
| 395 |
+
inputs=[msg, chatbot, agent_dropdown],
|
| 396 |
+
outputs=[msg, chatbot]
|
| 397 |
+
)
|
| 398 |
+
|
| 399 |
+
clear_btn.click(
|
| 400 |
+
clear_chat_and_session,
|
| 401 |
+
inputs=[agent_dropdown],
|
| 402 |
+
outputs=chatbot
|
| 403 |
+
)
|
| 404 |
+
|
| 405 |
+
refresh_btn.click(
|
| 406 |
+
refresh_agents,
|
| 407 |
+
outputs=agent_dropdown
|
| 408 |
+
)
|
| 409 |
+
|
| 410 |
+
|
| 411 |
+
if __name__ == "__main__":
|
| 412 |
+
# Check if required environment variables are set
|
| 413 |
+
if not PROJECT_ID:
|
| 414 |
+
print("⚠️ Warning: GOOGLE_CLOUD_PROJECT environment variable not set")
|
| 415 |
+
print("Please set it in your rag_agent/.env file")
|
| 416 |
+
sys.exit(1)
|
| 417 |
+
|
| 418 |
+
print("=" * 60)
|
| 419 |
+
print("🚀 Starting RAG Agent Chat UI (Gradio)")
|
| 420 |
+
print("=" * 60)
|
| 421 |
+
print(f"📍 Project ID: {PROJECT_ID}")
|
| 422 |
+
print(f"📍 Location: {LOCATION}")
|
| 423 |
+
|
| 424 |
+
agents = get_agent_names()
|
| 425 |
+
print(f"🔍 Found {len([a for a in agents if 'No agents' not in a])} agent(s)")
|
| 426 |
+
|
| 427 |
+
if agents and "No agents found" not in agents[0]:
|
| 428 |
+
print("\n✅ Available Agents:")
|
| 429 |
+
for agent in agents:
|
| 430 |
+
print(f" • {agent}")
|
| 431 |
+
else:
|
| 432 |
+
print("\n⚠�� No agents found. Please deploy an agent first using: make deploy")
|
| 433 |
+
|
| 434 |
+
print("\n" + "=" * 60)
|
| 435 |
+
print("🌐 Launching Gradio interface...")
|
| 436 |
+
print("=" * 60 + "\n")
|
| 437 |
+
|
| 438 |
+
demo.launch(
|
| 439 |
+
server_name="0.0.0.0",
|
| 440 |
+
server_port=7860,
|
| 441 |
+
share=True,
|
| 442 |
+
show_error=True
|
| 443 |
+
)
|
notebooks/adk_app_testing.ipynb ADDED
@@ -0,0 +1,367 @@
{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# 🧪 ADK Application Testing\n",
    "\n",
    "This notebook demonstrates how to test an ADK (Agent Development Kit) application.\n",
    "It covers both local and remote testing, with both Agent Engine and Cloud Run.\n",
    "\n",
    "> **Note**: This notebook assumes that the agent files are stored in the `app` folder. If your agent files are located in a different directory, please update all relevant file paths and references accordingly."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Set Up Your Environment\n",
    "\n",
    "> **Note:** For best results, use the same `.venv` created for local development with `uv` to ensure dependency compatibility and avoid environment-related issues."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Uncomment the following lines if you're not using the virtual environment created by uv\n",
    "# import sys\n",
    "\n",
    "# sys.path.append(\"../\")\n",
    "# !pip install google-cloud-aiplatform a2a-sdk --upgrade"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Import libraries"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import json\n",
    "\n",
    "import requests\n",
    "import vertexai"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Initialize Vertex AI Client"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Initialize the Vertex AI client\n",
    "LOCATION = \"us-central1\"\n",
    "\n",
    "client = vertexai.Client(\n",
    "    location=LOCATION,\n",
    ")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## If you are using Agent Engine\n",
    "See more documentation at [Agent Engine Overview](https://cloud.google.com/vertex-ai/generative-ai/docs/agent-engine/overview)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Remote Testing"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Set to None to auto-detect from ./deployment_metadata.json, or specify manually\n",
    "# \"projects/PROJECT_ID/locations/us-central1/reasoningEngines/ENGINE_ID\"\n",
    "REASONING_ENGINE_ID = None\n",
    "\n",
    "if REASONING_ENGINE_ID is None:\n",
    "    try:\n",
    "        with open(\"../deployment_metadata.json\") as f:\n",
    "            metadata = json.load(f)\n",
    "        REASONING_ENGINE_ID = metadata.get(\"remote_agent_engine_id\")\n",
    "    except (FileNotFoundError, json.JSONDecodeError):\n",
    "        pass\n",
    "\n",
    "print(f\"Using REASONING_ENGINE_ID: {REASONING_ENGINE_ID}\")\n",
    "# Get the existing agent engine\n",
    "remote_agent_engine = client.agent_engines.get(name=REASONING_ENGINE_ID)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "async for event in remote_agent_engine.async_stream_query(\n",
    "    message=\"hi!\", user_id=\"test\"\n",
    "):\n",
    "    print(event)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": "remote_agent_engine.register_feedback(\n    feedback={\n        \"score\": 5,\n        \"text\": \"Great response!\",\n        \"user_id\": \"test-user-123\",\n        \"session_id\": \"test-session-123\",\n    }\n)"
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Local Testing\n",
    "\n",
    "You can import the AgentEngineApp class directly within your environment.\n",
    "To run the agent locally, follow these steps:\n",
    "1. Make sure all required packages are installed in your environment\n",
    "2. The recommended approach is to use the same virtual environment created by the 'uv' tool\n",
    "3. You can set up this environment by running 'make install' from your agent's root directory\n",
    "4. Then select this kernel (.venv folder in your project) in your Jupyter notebook to ensure all dependencies are available"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from app.agent_engine_app import agent_engine\n",
    "\n",
    "agent_engine.set_up()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "async for event in agent_engine.async_stream_query(message=\"hi!\", user_id=\"test\"):\n",
    "    print(event)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## If you are using Cloud Run"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### Remote Testing\n",
    "\n",
    "For more information about authenticating HTTPS requests to Cloud Run services, see:\n",
    "[Cloud Run Authentication Documentation](https://cloud.google.com/run/docs/triggering/https-request)\n",
    "\n",
    "Remote testing involves using a deployed service URL instead of localhost.\n",
    "\n",
    "Authentication is handled using GCP identity tokens instead of local credentials."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "ID_TOKEN = get_ipython().getoutput(\"gcloud auth print-identity-token -q\")[0]"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "SERVICE_URL = \"YOUR_SERVICE_URL_HERE\"  # Replace with your Cloud Run service URL"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "You'll first need to create a session"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "user_id = \"test_user_123\"\n",
    "session_data = {\"state\": {\"preferred_language\": \"English\", \"visit_count\": 1}}\n",
    "\n",
    "session_url = f\"{SERVICE_URL}/apps/app/users/{user_id}/sessions\"\n",
    "headers = {\"Content-Type\": \"application/json\", \"Authorization\": f\"Bearer {ID_TOKEN}\"}\n",
    "\n",
    "session_response = requests.post(session_url, headers=headers, json=session_data)\n",
    "print(f\"Session creation status code: {session_response.status_code}\")\n",
    "print(f\"Session creation response: {session_response.json()}\")\n",
    "session_id = session_response.json()[\"id\"]"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Then you will be able to send a message"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "message_data = {\n",
    "    \"app_name\": \"app\",\n",
    "    \"user_id\": user_id,\n",
    "    \"session_id\": session_id,\n",
    "    \"new_message\": {\"role\": \"user\", \"parts\": [{\"text\": \"Hello! Weather in New york?\"}]},\n",
    "    \"streaming\": True,\n",
    "}\n",
    "\n",
    "message_url = f\"{SERVICE_URL}/run_sse\"\n",
    "message_response = requests.post(\n",
    "    message_url, headers=headers, json=message_data, stream=True\n",
    ")\n",
    "\n",
    "print(f\"Message send status code: {message_response.status_code}\")\n",
    "\n",
    "# Print streamed response\n",
    "for line in message_response.iter_lines():\n",
    "    if line:\n",
    "        line_str = line.decode(\"utf-8\")\n",
    "        if line_str.startswith(\"data: \"):\n",
    "            event_json = line_str[6:]\n",
    "            event = json.loads(event_json)\n",
    "            print(f\"Received event: {event}\")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Local Testing\n",
    "\n",
    "> You can run the application locally via the `make local-backend` command."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### Create a session\n",
    "Create a new session with user preferences and state information\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "user_id = \"test_user_123\"\n",
    "session_data = {\"state\": {\"preferred_language\": \"English\", \"visit_count\": 1}}\n",
    "\n",
    "session_url = f\"http://127.0.0.1:8000/apps/app/users/{user_id}/sessions\"\n",
    "headers = {\"Content-Type\": \"application/json\"}\n",
    "\n",
    "session_response = requests.post(session_url, headers=headers, json=session_data)\n",
    "print(f\"Session creation status code: {session_response.status_code}\")\n",
    "print(f\"Session creation response: {session_response.json()}\")\n",
    "session_id = session_response.json()[\"id\"]"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### Send a message\n",
    "Send a message to the backend service and receive a streaming response\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "message_data = {\n",
    "    \"app_name\": \"app\",\n",
    "    \"user_id\": user_id,\n",
    "    \"session_id\": session_id,\n",
    "    \"new_message\": {\"role\": \"user\", \"parts\": [{\"text\": \"Hello! Weather in New york?\"}]},\n",
    "    \"streaming\": True,\n",
    "}\n",
    "\n",
    "message_url = \"http://127.0.0.1:8000/run_sse\"\n",
    "message_response = requests.post(\n",
    "    message_url, headers=headers, json=message_data, stream=True\n",
    ")\n",
    "\n",
    "print(f\"Message send status code: {message_response.status_code}\")\n",
    "\n",
    "# Print streamed response\n",
    "for line in message_response.iter_lines():\n",
    "    if line:\n",
    "        line_str = line.decode(\"utf-8\")\n",
    "        if line_str.startswith(\"data: \"):\n",
    "            event_json = line_str[6:]\n",
    "            event = json.loads(event_json)\n",
    "            print(f\"Received event: {event}\")"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "myagent-1762384391",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.12.9"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}
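The streaming cells in the notebook above read server-sent-event lines from `requests` and keep only payload lines prefixed with `data: `. That decoding step can be sketched as a small helper; this is an illustrative sketch mirroring the notebook's loop, and the `parse_sse_lines` name is hypothetical, not part of the ADK API:

```python
import json


def parse_sse_lines(lines):
    """Decode server-sent-event lines into event dicts.

    Mirrors the notebook loop: only non-empty lines starting with
    "data: " carry a JSON event payload; comments and keep-alives
    are ignored.
    """
    events = []
    for line in lines:
        if isinstance(line, bytes):
            line = line.decode("utf-8")
        if line and line.startswith("data: "):
            events.append(json.loads(line[6:]))  # strip the "data: " prefix
    return events
```

In the notebook this would replace the inline `for line in message_response.iter_lines():` loop with `parse_sse_lines(message_response.iter_lines())`.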
notebooks/evaluating_adk_agent.ipynb ADDED
@@ -0,0 +1,1535 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
{
 "cells": [
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "ur8xi4C7S06n"
   },
   "outputs": [],
   "source": [
    "# Copyright 2025 Google LLC\n",
    "#\n",
    "# Licensed under the Apache License, Version 2.0 (the \"License\");\n",
    "# you may not use this file except in compliance with the License.\n",
    "# You may obtain a copy of the License at\n",
    "#\n",
    "# https://www.apache.org/licenses/LICENSE-2.0\n",
    "#\n",
    "# Unless required by applicable law or agreed to in writing, software\n",
    "# distributed under the License is distributed on an \"AS IS\" BASIS,\n",
    "# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n",
    "# See the License for the specific language governing permissions and\n",
    "# limitations under the License."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "JAPoU8Sm5E6e"
   },
   "source": [
    "# Evaluate your ADK agent using Vertex AI Gen AI Evaluation service\n",
    "\n",
    "<table align=\"left\">\n",
    "  <td style=\"text-align: center\">\n",
    "    <a href=\"https://colab.research.google.com/github/GoogleCloudPlatform/generative-ai/blob/main/gemini/evaluation/evaluating_adk_agent.ipynb\">\n",
    "      <img width=\"32px\" src=\"https://www.gstatic.com/pantheon/images/bigquery/welcome_page/colab-logo.svg\" alt=\"Google Colaboratory logo\"><br> Open in Colab\n",
    "    </a>\n",
    "  </td>\n",
    "  <td style=\"text-align: center\">\n",
    "    <a href=\"https://console.cloud.google.com/vertex-ai/colab/import/https:%2F%2Fraw.githubusercontent.com%2FGoogleCloudPlatform%2Fgenerative-ai%2Fmain%2Fgemini%2Fevaluation%2Fevaluating_adk_agent.ipynb\">\n",
    "      <img width=\"32px\" src=\"https://lh3.googleusercontent.com/JmcxdQi-qOpctIvWKgPtrzZdJJK-J3sWE1RsfjZNwshCFgE_9fULcNpuXYTilIR2hjwN\" alt=\"Google Cloud Colab Enterprise logo\"><br> Open in Colab Enterprise\n",
    "    </a>\n",
    "  </td>\n",
    "  <td style=\"text-align: center\">\n",
    "    <a href=\"https://console.cloud.google.com/vertex-ai/workbench/deploy-notebook?download_url=https://raw.githubusercontent.com/GoogleCloudPlatform/generative-ai/main/gemini/evaluation/evaluating_adk_agent.ipynb\">\n",
    "      <img src=\"https://www.gstatic.com/images/branding/gcpiconscolors/vertexai/v1/32px.svg\" alt=\"Vertex AI logo\"><br> Open in Vertex AI Workbench\n",
    "    </a>\n",
    "  </td>\n",
    "  <td style=\"text-align: center\">\n",
    "    <a href=\"https://github.com/GoogleCloudPlatform/generative-ai/blob/main/gemini/evaluation/evaluating_adk_agent.ipynb\">\n",
    "      <img width=\"32px\" src=\"https://www.svgrepo.com/download/217753/github.svg\" alt=\"GitHub logo\"><br> View on GitHub\n",
    "    </a>\n",
    "  </td>\n",
    "</table>\n",
    "\n",
    "<div style=\"clear: both;\"></div>\n",
    "\n",
    "<b>Share to:</b>\n",
    "\n",
    "<a href=\"https://www.linkedin.com/sharing/share-offsite/?url=https%3A//github.com/GoogleCloudPlatform/generative-ai/blob/main/gemini/evaluation/evaluating_adk_agent.ipynb\" target=\"_blank\">\n",
    "  <img width=\"20px\" src=\"https://upload.wikimedia.org/wikipedia/commons/8/81/LinkedIn_icon.svg\" alt=\"LinkedIn logo\">\n",
    "</a>\n",
    "\n",
    "<a href=\"https://bsky.app/intent/compose?text=https%3A//github.com/GoogleCloudPlatform/generative-ai/blob/main/gemini/evaluation/evaluating_adk_agent.ipynb\" target=\"_blank\">\n",
    "  <img width=\"20px\" src=\"https://upload.wikimedia.org/wikipedia/commons/7/7a/Bluesky_Logo.svg\" alt=\"Bluesky logo\">\n",
    "</a>\n",
    "\n",
    "<a href=\"https://twitter.com/intent/tweet?url=https%3A//github.com/GoogleCloudPlatform/generative-ai/blob/main/gemini/evaluation/evaluating_adk_agent.ipynb\" target=\"_blank\">\n",
    "  <img width=\"20px\" src=\"https://upload.wikimedia.org/wikipedia/commons/5/5a/X_icon_2.svg\" alt=\"X logo\">\n",
    "</a>\n",
    "\n",
    "<a href=\"https://reddit.com/submit?url=https%3A//github.com/GoogleCloudPlatform/generative-ai/blob/main/gemini/evaluation/evaluating_adk_agent.ipynb\" target=\"_blank\">\n",
    "  <img width=\"20px\" src=\"https://redditinc.com/hubfs/Reddit%20Inc/Brand/Reddit_Logo.png\" alt=\"Reddit logo\">\n",
    "</a>\n",
    "\n",
    "<a href=\"https://www.facebook.com/sharer/sharer.php?u=https%3A//github.com/GoogleCloudPlatform/generative-ai/blob/main/gemini/evaluation/evaluating_adk_agent.ipynb\" target=\"_blank\">\n",
    "  <img width=\"20px\" src=\"https://upload.wikimedia.org/wikipedia/commons/5/51/Facebook_f_logo_%282019%29.svg\" alt=\"Facebook logo\">\n",
    "</a>"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "84f0f73a0f76"
   },
   "source": [
    "| Author(s) |\n",
    "| --- |\n",
    "| [Ivan Nardini](https://github.com/inardini) |"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "tvgnzT1CKxrO"
   },
   "source": [
    "## Overview\n",
    "\n",
    "Agent Development Kit (ADK) is a flexible and modular open-source framework for developing and deploying AI agents. While ADK ships with its own evaluation module, Vertex AI Gen AI Evaluation provides a toolkit of quality-controlled and explainable methods and metrics to evaluate any generative model or application, including agents, and to benchmark the evaluation results against your own judgment, using your own evaluation criteria.\n",
    "\n",
    "This tutorial shows how to evaluate an ADK agent using Vertex AI Gen AI Evaluation.\n",
    "\n",
    "The steps performed include:\n",
    "\n",
    "* Build a local agent using ADK\n",
    "* Prepare an agent evaluation dataset\n",
    "* Single tool usage evaluation\n",
    "* Trajectory evaluation\n",
    "* Response evaluation"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "61RBz8LLbxCR"
   },
   "source": [
    "## Get started"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "No17Cw5hgx12"
   },
   "source": [
    "### Install Google Gen AI SDK and other required packages\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "tFy3H3aPgx12"
   },
   "outputs": [],
   "source": [
    "%pip install --upgrade --quiet 'google-adk'\n",
    "%pip install --upgrade --quiet 'google-cloud-aiplatform[evaluation]'"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "dmWOrTJ3gx13"
   },
   "source": [
    "### Authenticate your notebook environment (Colab only)\n",
    "\n",
    "If you're running this notebook on Google Colab, run the cell below to authenticate your environment."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "NyKGtVQjgx13"
   },
   "outputs": [],
   "source": [
    "import sys\n",
    "\n",
    "if \"google.colab\" in sys.modules:\n",
    "    from google.colab import auth\n",
    "\n",
    "    auth.authenticate_user()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "DF4l8DTdWgPY"
   },
   "source": [
    "### Set Google Cloud project information\n",
    "\n",
    "To get started using Vertex AI, you must have an existing Google Cloud project and [enable the Vertex AI API](https://console.cloud.google.com/flows/enableapi?apiid=aiplatform.googleapis.com).\n",
    "\n",
    "Learn more about [setting up a project and a development environment](https://cloud.google.com/vertex-ai/docs/start/cloud-environment)."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "Nqwi-5ufWp_B"
   },
   "outputs": [],
   "source": [
    "# Use the environment variable if the user doesn't provide Project ID.\n",
    "import os\n",
    "\n",
    "import vertexai\n",
    "\n",
    "PROJECT_ID = \"[your-project-id]\"  # @param {type: \"string\", placeholder: \"[your-project-id]\", isTemplate: true}\n",
    "if not PROJECT_ID or PROJECT_ID == \"[your-project-id]\":\n",
    "    PROJECT_ID = str(os.environ.get(\"GOOGLE_CLOUD_PROJECT\"))\n",
    "\n",
    "LOCATION = os.environ.get(\"GOOGLE_CLOUD_REGION\", \"us-central1\")\n",
    "\n",
    "BUCKET_NAME = \"[your-bucket-name]\"  # @param {type: \"string\", placeholder: \"[your-bucket-name]\", isTemplate: true}\n",
    "BUCKET_URI = f\"gs://{BUCKET_NAME}\"\n",
    "\n",
    "!gsutil mb -l {LOCATION} {BUCKET_URI}\n",
    "\n",
    "os.environ[\"GOOGLE_CLOUD_PROJECT\"] = PROJECT_ID\n",
    "os.environ[\"GOOGLE_CLOUD_LOCATION\"] = LOCATION\n",
    "os.environ[\"GOOGLE_GENAI_USE_VERTEXAI\"] = \"True\"\n",
    "\n",
    "EXPERIMENT_NAME = \"evaluate-adk-agent\"  # @param {type:\"string\"}\n",
    "\n",
    "vertexai.init(project=PROJECT_ID, location=LOCATION, experiment=EXPERIMENT_NAME)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "5303c05f7aa6"
   },
   "source": [
    "## Import libraries\n",
    "\n",
    "Import tutorial libraries."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "6fc324893334"
   },
   "outputs": [],
   "source": [
    "import json\n",
    "import asyncio\n",
    "\n",
    "# General\n",
    "import random\n",
    "import string\n",
    "from typing import Any\n",
    "\n",
    "from IPython.display import HTML, Markdown, display\n",
    "from google.adk.agents import Agent\n",
    "\n",
    "# Build agent with ADK\n",
    "from google.adk.events import Event\n",
    "from google.adk.runners import Runner\n",
    "from google.adk.sessions import InMemorySessionService\n",
    "\n",
    "# Evaluate agent\n",
    "from google.cloud import aiplatform\n",
    "from google.genai import types\n",
    "import pandas as pd\n",
    "import plotly.graph_objects as go\n",
    "from vertexai.preview.evaluation import EvalTask\n",
    "from vertexai.preview.evaluation.metrics import (\n",
    "    PointwiseMetric,\n",
    "    PointwiseMetricPromptTemplate,\n",
    "    TrajectorySingleToolUse,\n",
    ")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "MVnBDX54gz7j"
   },
   "source": [
    "## Define helper functions\n",
    "\n",
    "Define a set of helper functions to display tutorial results."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "uSgWjMD_g1_v"
   },
   "outputs": [],
   "source": [
    "def get_id(length: int = 8) -> str:\n",
    "    \"\"\"Generate a random lowercase alphanumeric id of the given length (default 8).\"\"\"\n",
    "    return \"\".join(random.choices(string.ascii_lowercase + string.digits, k=length))\n",
    "\n",
    "\n",
    "def parse_adk_output_to_dictionary(events: list[Event], *, as_json: bool = False):\n",
    "    \"\"\"\n",
    "    Parse ADK event output into a structured dictionary format,\n",
    "    optionally dumping the predicted trajectory as a JSON string.\n",
    "    \"\"\"\n",
    "\n",
    "    final_response = \"\"\n",
    "    trajectory = []\n",
    "\n",
    "    for event in events:\n",
    "        if not getattr(event, \"content\", None) or not getattr(event.content, \"parts\", None):\n",
    "            continue\n",
    "        for part in event.content.parts:\n",
    "            if getattr(part, \"function_call\", None):\n",
    "                info = {\n",
    "                    \"tool_name\": part.function_call.name,\n",
    "                    \"tool_input\": dict(part.function_call.args),\n",
    "                }\n",
    "                if info not in trajectory:\n",
    "                    trajectory.append(info)\n",
    "            if event.content.role == \"model\" and getattr(part, \"text\", None):\n",
    "                final_response = part.text.strip()\n",
    "\n",
    "    if as_json:\n",
    "        trajectory_out = json.dumps(trajectory)\n",
    "    else:\n",
    "        trajectory_out = trajectory\n",
    "\n",
    "    return {\"response\": final_response, \"predicted_trajectory\": trajectory_out}\n",
    "\n",
    "\n",
    "def format_output_as_markdown(output: dict) -> str:\n",
    "    \"\"\"Convert the output dictionary to a formatted markdown string.\"\"\"\n",
    "    markdown = \"### AI Response\\n\" + output[\"response\"] + \"\\n\\n\"\n",
    "    if output[\"predicted_trajectory\"]:\n",
    "        markdown += \"### Function Calls\\n\"\n",
    "        for call in output[\"predicted_trajectory\"]:\n",
    "            markdown += f\"- **Function**: `{call['tool_name']}`\\n\"\n",
    "            markdown += \"  - **Arguments**\\n\"\n",
    "            for key, value in call[\"tool_input\"].items():\n",
    "                markdown += f\"    - `{key}`: `{value}`\\n\"\n",
    "    return markdown\n",
    "\n",
    "\n",
    "def display_eval_report(eval_result) -> None:\n",
    "    \"\"\"Display the evaluation results.\"\"\"\n",
    "    display(Markdown(\"### Summary Metrics\"))\n",
    "    display(\n",
    "        pd.DataFrame(\n",
    "            eval_result.summary_metrics.items(), columns=[\"metric\", \"value\"]\n",
    "        )\n",
    "    )\n",
    "    if getattr(eval_result, \"metrics_table\", None) is not None:\n",
    "        display(Markdown(\"### Row-wise Metrics\"))\n",
    "        display(eval_result.metrics_table.head())\n",
    "\n",
    "\n",
    "def display_drilldown(row: pd.Series) -> None:\n",
    "    \"\"\"Displays a drill-down view for trajectory data within a row.\"\"\"\n",
    "\n",
    "    style = \"white-space: pre-wrap; width: 800px; overflow-x: auto;\"\n",
    "\n",
    "    if not (\n",
    "        isinstance(row[\"predicted_trajectory\"], list)\n",
    "        and isinstance(row[\"reference_trajectory\"], list)\n",
    "    ):\n",
    "        return\n",
    "\n",
    "    for predicted_trajectory, reference_trajectory in zip(\n",
    "        row[\"predicted_trajectory\"], row[\"reference_trajectory\"]\n",
    "    ):\n",
    "        display(\n",
    "            HTML(\n",
    "                f\"<h3>Tool Names:</h3><div style='{style}'>{predicted_trajectory['tool_name'], reference_trajectory['tool_name']}</div>\"\n",
    "            )\n",
    "        )\n",
    "\n",
    "        if not (\n",
    "            isinstance(predicted_trajectory.get(\"tool_input\"), dict)\n",
    "            and isinstance(reference_trajectory.get(\"tool_input\"), dict)\n",
    "        ):\n",
    "            continue\n",
    "\n",
    "        for tool_input_key in predicted_trajectory[\"tool_input\"]:\n",
    "            print(\"Tool Input Key: \", tool_input_key)\n",
    "\n",
    "            if tool_input_key in reference_trajectory[\"tool_input\"]:\n",
    "                print(\n",
    "                    \"Tool Values: \",\n",
    "                    predicted_trajectory[\"tool_input\"][tool_input_key],\n",
    "                    reference_trajectory[\"tool_input\"][tool_input_key],\n",
    "                )\n",
    "            else:\n",
    "                print(\n",
    "                    \"Tool Values: \",\n",
    "                    predicted_trajectory[\"tool_input\"][tool_input_key],\n",
    "                    \"N/A\",\n",
    "                )\n",
    "    print(\"\\n\")\n",
    "    display(HTML(\"<hr>\"))\n",
    "\n",
    "\n",
    "def display_dataframe_rows(\n",
    "    df: pd.DataFrame,\n",
    "    columns: list[str] | None = None,\n",
    "    num_rows: int = 3,\n",
    "    drilldown: bool = False,  # renamed so it no longer shadows display_drilldown()\n",
    ") -> None:\n",
    "    \"\"\"Displays a subset of rows from a DataFrame, optionally including a drill-down view.\"\"\"\n",
    "\n",
    "    if columns:\n",
    "        df = df[columns]\n",
    "\n",
    "    base_style = \"font-family: monospace; font-size: 14px; white-space: pre-wrap; width: auto; overflow-x: auto;\"\n",
    "    header_style = base_style + \"font-weight: bold;\"\n",
    "\n",
    "    for _, row in df.head(num_rows).iterrows():\n",
    "        for column in df.columns:\n",
    "            display(\n",
    "                HTML(\n",
    "                    f\"<span style='{header_style}'>{column.replace('_', ' ').title()}: </span>\"\n",
    "                )\n",
    "            )\n",
    "            display(HTML(f\"<span style='{base_style}'>{row[column]}</span><br>\"))\n",
    "\n",
    "        display(HTML(\"<hr>\"))\n",
    "\n",
    "        if (\n",
    "            drilldown\n",
    "            and \"predicted_trajectory\" in df.columns\n",
    "            and \"reference_trajectory\" in df.columns\n",
    "        ):\n",
    "            display_drilldown(row)\n",
    "\n",
    "\n",
    "def plot_bar_plot(\n",
    "    eval_result, title: str, metrics: list[str] | None = None\n",
    ") -> None:\n",
    "    \"\"\"Plot summary metrics as a grouped bar chart.\"\"\"\n",
    "    data = []\n",
    "\n",
    "    summary_metrics = eval_result.summary_metrics\n",
    "    if metrics:\n",
    "        summary_metrics = {\n",
    "            k: summary_metrics[k]\n",
    "            for k, v in summary_metrics.items()\n",
    "            if any(selected_metric in k for selected_metric in metrics)\n",
    "        }\n",
    "\n",
    "    data.append(\n",
    "        go.Bar(\n",
    "            x=list(summary_metrics.keys()),\n",
    "            y=list(summary_metrics.values()),\n",
    "            name=title,\n",
    "        )\n",
    "    )\n",
    "\n",
    "    fig = go.Figure(data=data)\n",
    "\n",
    "    # Change the bar mode\n",
    "    fig.update_layout(barmode=\"group\")\n",
    "    fig.show()\n",
    "\n",
    "\n",
    "def display_radar_plot(eval_results, title: str, metrics=None):\n",
    "    \"\"\"Plot the radar plot.\"\"\"\n",
    "    fig = go.Figure()\n",
    "    summary_metrics = eval_results.summary_metrics\n",
    "    if metrics:\n",
    "        summary_metrics = {\n",
    "            k: summary_metrics[k]\n",
    "            for k, v in summary_metrics.items()\n",
    "            if any(selected_metric in k for selected_metric in metrics)\n",
    "        }\n",
    "\n",
    "    min_val = min(summary_metrics.values())\n",
    "    max_val = max(summary_metrics.values())\n",
    "\n",
    "    fig.add_trace(\n",
    "        go.Scatterpolar(\n",
    "            r=list(summary_metrics.values()),\n",
    "            theta=list(summary_metrics.keys()),\n",
    "            fill=\"toself\",\n",
    "            name=title,\n",
    "        )\n",
    "    )\n",
    "    fig.update_layout(\n",
    "        title=title,\n",
    "        polar=dict(radialaxis=dict(visible=True, range=[min_val, max_val])),\n",
    "        showlegend=True,\n",
    "    )\n",
    "    fig.show()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "bDaa2Mtsifmq"
   },
   "source": [
    "## Build ADK agent\n",
    "\n",
    "Build your application using ADK, including the Gemini model and custom tools that you define."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "KHwShhpOitKp"
   },
   "source": [
    "### Set tools\n",
    "\n",
    "To start, define the tools that a product research agent needs to do its job."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "gA2ZKvfeislw"
   },
   "outputs": [],
   "source": [
    "def get_product_details(product_name: str):\n",
    "    \"\"\"Gathers basic details about a product.\"\"\"\n",
    "    details = {\n",
    "        \"smartphone\": \"A cutting-edge smartphone with advanced camera features and lightning-fast processing.\",\n",
    "        \"usb charger\": \"A super fast and lightweight USB charger.\",\n",
    "        \"shoes\": \"High-performance running shoes designed for comfort, support, and speed.\",\n",
    "        \"headphones\": \"Wireless headphones with advanced noise cancellation technology for immersive audio.\",\n",
    "        \"speaker\": \"A voice-controlled smart speaker that plays music, sets alarms, and controls smart home devices.\",\n",
    "    }\n",
    "    return details.get(product_name, \"Product details not found.\")\n",
    "\n",
    "\n",
    "def get_product_price(product_name: str):\n",
    "    \"\"\"Gathers the price of a product.\"\"\"\n",
    "    details = {\n",
    "        \"smartphone\": 500,\n",
    "        \"usb charger\": 10,\n",
    "        \"shoes\": 100,\n",
    "        \"headphones\": 50,\n",
    "        \"speaker\": 80,\n",
    "    }\n",
    "    return details.get(product_name, \"Product price not found.\")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "l4mk5XPui4Y1"
   },
   "source": [
    "### Set the model\n",
    "\n",
    "Choose which Gemini model your agent will use. If you're curious about Gemini and its different capabilities, take a look at [the official documentation](https://cloud.google.com/vertex-ai/generative-ai/docs/learn/models) for more details."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "BaYeo6K2i-w1"
   },
   "outputs": [],
   "source": [
    "model = \"gemini-2.0-flash\""
   ]
  },
| 560 |
+
{
|
| 561 |
+
"cell_type": "markdown",
|
| 562 |
+
"metadata": {
|
| 563 |
+
"id": "tNlAY9cojEWz"
|
| 564 |
+
},
|
| 565 |
+
"source": [
|
| 566 |
+
"### Assemble the agent\n",
|
| 567 |
+
"\n",
|
| 568 |
+
"The Vertex AI Gen AI Evaluation works directly with 'Queryable' agents, and also lets you add your own custom functions with a specific structure (signature).\n",
|
| 569 |
+
"\n",
|
| 570 |
+
"In this case, you assemble the agent using a custom function. The function triggers the agent for a given input and parse the agent outcome to extract the response and called tools."
|
| 571 |
+
]
|
| 572 |
+
},
|
| 573 |
+
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "gD5OB44g4sc3"
},
"outputs": [],
"source": [
"async def agent_parsed_outcome(query):\n",
"    app_name = \"product_research_app\"\n",
"    user_id = \"user_one\"\n",
"    session_id = \"session_one\"\n",
"\n",
"    product_research_agent = Agent(\n",
"        name=\"ProductResearchAgent\",\n",
"        model=model,\n",
"        description=\"An agent that performs product research.\",\n",
"        instruction=f\"\"\"\n",
"        Analyze this user request: '{query}'.\n",
"        If the request is about price, use the get_product_price tool.\n",
"        Otherwise, use the get_product_details tool to get product information.\n",
"        \"\"\",\n",
"        tools=[get_product_details, get_product_price],\n",
"    )\n",
"\n",
"    session_service = InMemorySessionService()\n",
"    await session_service.create_session(\n",
"        app_name=app_name, user_id=user_id, session_id=session_id\n",
"    )\n",
"\n",
"    runner = Runner(\n",
"        agent=product_research_agent, app_name=app_name, session_service=session_service\n",
"    )\n",
"\n",
"    content = types.Content(role=\"user\", parts=[types.Part(text=query)])\n",
"    events = [event async for event in runner.run_async(user_id=user_id, session_id=session_id, new_message=content)]\n",
"\n",
"    return parse_adk_output_to_dictionary(events)\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# --- Sync wrapper for Vertex AI evaluation\n",
"def agent_parsed_outcome_sync(prompt: str):\n",
"    result = asyncio.run(agent_parsed_outcome(prompt))\n",
"    result[\"predicted_trajectory\"] = json.dumps(result[\"predicted_trajectory\"])\n",
"    return result"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "_HGcs6PVjRj_"
},
"source": [
"### Test the agent\n",
"\n",
"Query your agent."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "lGb58OJkjUs9"
},
"outputs": [],
"source": [
"response = await agent_parsed_outcome(query=\"Get product details for shoes\")\n",
"display(Markdown(format_output_as_markdown(response)))"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "2wCFstt8w4Dx"
},
"outputs": [],
"source": [
"response = await agent_parsed_outcome(query=\"Get product price for shoes\")\n",
"display(Markdown(format_output_as_markdown(response)))"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "aOGPePsorpUl"
},
"source": [
"## Evaluating an ADK agent with Vertex AI Gen AI Evaluation\n",
"\n",
"When working with AI agents, it's important to keep track of their performance and how well they're working. You can look at this in two main ways: **monitoring** and **observability**.\n",
"\n",
"Monitoring focuses on how well your agent is performing specific tasks:\n",
"\n",
"* **Single Tool Selection**: Is the agent choosing the right tools for the job?\n",
"\n",
"* **Multiple Tool Selection (or Trajectory)**: Is the agent making logical choices in the order it uses tools?\n",
"\n",
"* **Response generation**: Is the agent's output good, and does it make sense based on the tools it used?\n",
"\n",
"Observability is about understanding the overall health of the agent:\n",
"\n",
"* **Latency**: How long does it take the agent to respond?\n",
"\n",
"* **Failure Rate**: How often does the agent fail to produce a response?\n",
"\n",
"The Vertex AI Gen AI Evaluation service helps you assess all of these aspects, both while you are prototyping the agent and after you deploy it to production. It provides [pre-built evaluation criteria and metrics](https://cloud.google.com/vertex-ai/generative-ai/docs/models/determine-eval) so you can see exactly how your agents are doing and identify areas for improvement."
]
},
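{
"cell_type": "markdown",
"metadata": {},
"source": [
"As an illustrative aside (this helper is not part of the Vertex AI Gen AI Evaluation service), the observability signals above can be sampled directly while prototyping: the sketch below times each call to the `agent_parsed_outcome` function defined earlier and counts failures."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import time\n",
"\n",
"\n",
"async def measure_agent(queries):\n",
"    # Illustrative sketch: collect per-query latency and a failure count.\n",
"    latencies, failures = [], 0\n",
"    for query in queries:\n",
"        start = time.perf_counter()\n",
"        try:\n",
"            await agent_parsed_outcome(query)\n",
"        except Exception:\n",
"            failures += 1\n",
"        latencies.append(time.perf_counter() - start)\n",
"    return latencies, failures"
]
},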
{
"cell_type": "markdown",
"metadata": {
"id": "e43229f3ad4f"
},
"source": [
"### Prepare Agent Evaluation dataset\n",
"\n",
"To evaluate your AI agent using the Vertex AI Gen AI Evaluation service, you need a specific dataset depending on which aspects of your agent you want to evaluate.\n",
"\n",
"This dataset should include the prompts given to the agent. It can also contain the ideal or expected response (ground truth) and the intended sequence of tool calls the agent should take (reference trajectory), that is, the sequence of tools you expect the agent to call for each given prompt.\n",
"\n",
"> Optionally, you can provide both generated responses and predicted trajectories (**Bring-Your-Own-Dataset scenario**).\n",
"\n",
"Below is an example of a dataset for this product research agent, with user prompts and the reference trajectory."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "fFf8uTdUiDt3"
},
"outputs": [],
"source": [
"eval_data = {\n",
"    \"prompt\": [\n",
"        \"Get price for smartphone\",\n",
"        \"Get product details and price for headphones\",\n",
"        \"Get details for usb charger\",\n",
"        \"Get product details and price for shoes\",\n",
"        \"Get product details for speaker?\",\n",
"    ],\n",
"    \"reference_trajectory\": [\n",
"        [\n",
"            {\n",
"                \"tool_name\": \"get_product_price\",\n",
"                \"tool_input\": {\"product_name\": \"smartphone\"},\n",
"            }\n",
"        ],\n",
"        [\n",
"            {\n",
"                \"tool_name\": \"get_product_details\",\n",
"                \"tool_input\": {\"product_name\": \"headphones\"},\n",
"            },\n",
"            {\n",
"                \"tool_name\": \"get_product_price\",\n",
"                \"tool_input\": {\"product_name\": \"headphones\"},\n",
"            },\n",
"        ],\n",
"        [\n",
"            {\n",
"                \"tool_name\": \"get_product_details\",\n",
"                \"tool_input\": {\"product_name\": \"usb charger\"},\n",
"            }\n",
"        ],\n",
"        [\n",
"            {\n",
"                \"tool_name\": \"get_product_details\",\n",
"                \"tool_input\": {\"product_name\": \"shoes\"},\n",
"            },\n",
"            {\"tool_name\": \"get_product_price\", \"tool_input\": {\"product_name\": \"shoes\"}},\n",
"        ],\n",
"        [\n",
"            {\n",
"                \"tool_name\": \"get_product_details\",\n",
"                \"tool_input\": {\"product_name\": \"speaker\"},\n",
"            }\n",
"        ],\n",
"    ],\n",
"}\n",
"\n",
"eval_sample_dataset = pd.DataFrame(eval_data)"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "PQEI1EcfvFHb"
},
"source": [
"Print some samples from the dataset."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "EjsonqWWvIvE"
},
"outputs": [],
"source": [
"display_dataframe_rows(eval_sample_dataset, num_rows=3)"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "m4CvBuf1afHG"
},
"source": [
"### Single tool usage evaluation\n",
"\n",
"After you've set up your AI agent and the evaluation dataset, you can start evaluating whether the agent chooses the correct single tool for a given task.\n"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "_rS5GGKHd5bx"
},
"source": [
"#### Set single tool usage metrics\n",
"\n",
"The `trajectory_single_tool_use` metric in Vertex AI Gen AI Evaluation gives you a quick way to evaluate whether your agent is using the tool you expect it to use, regardless of any specific tool order. It's a basic but useful way to start evaluating whether the right tool was used at some point during the agent's process.\n",
"\n",
"To use the `trajectory_single_tool_use` metric, you need to set which tool should have been used for a particular user's request. For example, if a user asks to \"send an email\", you might expect the agent to use a \"send_email\" tool, and you'd specify that tool's name when using this metric.\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "xixvq8dwd5by"
},
"outputs": [],
"source": [
"single_tool_usage_metrics = [TrajectorySingleToolUse(tool_name=\"get_product_price\")]"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "ktKZoT2Qd5by"
},
"source": [
"#### Run an evaluation task\n",
"\n",
"To run the evaluation, you initiate an `EvalTask` using the pre-defined dataset (`eval_sample_dataset`) and metrics (`single_tool_usage_metrics` in this case) within an experiment. Then you run the evaluation with the `agent_parsed_outcome_sync` function, assigning a unique identifier to this specific evaluation run so the results are stored and can be visualized.\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "SRv43fDcd5by"
},
"outputs": [],
"source": [
"EXPERIMENT_RUN = f\"single-metric-eval-{get_id()}\"\n",
"\n",
"single_tool_call_eval_task = EvalTask(\n",
"    dataset=eval_sample_dataset,\n",
"    metrics=single_tool_usage_metrics,\n",
"    experiment=EXPERIMENT_NAME,\n",
"    output_uri_prefix=BUCKET_URI + \"/single-metric-eval\",\n",
")\n",
"\n",
"single_tool_call_eval_result = single_tool_call_eval_task.evaluate(\n",
"    runnable=agent_parsed_outcome_sync, experiment_run_name=EXPERIMENT_RUN\n",
")\n",
"\n",
"display_eval_report(single_tool_call_eval_result)"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "6o5BjSTFKVMS"
},
"source": [
"#### Visualize evaluation results\n",
"\n",
"Use some helper functions to visualize a sample of the evaluation results."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "1Jopzw83k14w"
},
"outputs": [],
"source": [
"display_dataframe_rows(single_tool_call_eval_result.metrics_table, num_rows=3)"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "JlujdJpu5Kn6"
},
"source": [
"### Trajectory Evaluation\n",
"\n",
"After evaluating the agent's ability to select the single most appropriate tool for a given task, you generalize the evaluation by analyzing the tool sequence choices with respect to the user input (trajectory). This assesses whether the agent not only chooses the right tools but also utilizes them in a rational and effective order."
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "8s-nHdDJneHM"
},
"source": [
"#### Set trajectory metrics\n",
"\n",
"To evaluate the agent's trajectory, Vertex AI Gen AI Evaluation provides several ground-truth based metrics:\n",
"\n",
"* `trajectory_exact_match`: identical trajectories (same actions, same order)\n",
"\n",
"* `trajectory_in_order_match`: reference actions present in the predicted trajectory, in order (extras allowed)\n",
"\n",
"* `trajectory_any_order_match`: all reference actions present in the predicted trajectory (order and extras don't matter)\n",
"\n",
"* `trajectory_precision`: proportion of predicted actions present in the reference\n",
"\n",
"* `trajectory_recall`: proportion of reference actions present in the prediction\n",
"\n",
"All metrics score 0 or 1, except `trajectory_precision` and `trajectory_recall`, which range from 0 to 1."
]
},
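{
"cell_type": "markdown",
"metadata": {},
"source": [
"As a quick illustrative aside (computed by hand here, not by the evaluation service), the toy example below shows how `trajectory_precision` and `trajectory_recall` relate when the predicted trajectory misses a reference action."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Toy example: compare trajectories by tool name only.\n",
"reference = [\"get_product_details\", \"get_product_price\"]\n",
"predicted = [\"get_product_details\"]\n",
"\n",
"precision = sum(t in reference for t in predicted) / len(predicted)  # 1.0\n",
"recall = sum(t in predicted for t in reference) / len(reference)  # 0.5\n",
"print(precision, recall)"
]
},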
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "c32WIS95neHN"
},
"outputs": [],
"source": [
"trajectory_metrics = [\n",
"    \"trajectory_exact_match\",\n",
"    \"trajectory_in_order_match\",\n",
"    \"trajectory_any_order_match\",\n",
"    \"trajectory_precision\",\n",
"    \"trajectory_recall\",\n",
"]"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "DF3jhTH3neHN"
},
"source": [
"#### Run an evaluation task\n",
"\n",
"Submit an evaluation by running the `evaluate` method of the new `EvalTask`."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "vOdS7TJUneHN"
},
"outputs": [],
"source": [
"EXPERIMENT_RUN = f\"trajectory-{get_id()}\"\n",
"\n",
"trajectory_eval_task = EvalTask(\n",
"    dataset=eval_sample_dataset,\n",
"    metrics=trajectory_metrics,\n",
"    experiment=EXPERIMENT_NAME,\n",
"    output_uri_prefix=BUCKET_URI + \"/multiple-metric-eval\",\n",
")\n",
"\n",
"trajectory_eval_result = trajectory_eval_task.evaluate(\n",
"    runnable=agent_parsed_outcome_sync, experiment_run_name=EXPERIMENT_RUN\n",
")\n",
"\n",
"display_eval_report(trajectory_eval_result)"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "DBiUI3LyLBtj"
},
"source": [
"#### Visualize evaluation results\n",
"\n",
"Print and visualize a sample of evaluation results."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "z7-LdM3mLBtk"
},
"outputs": [],
"source": [
"display_dataframe_rows(trajectory_eval_result.metrics_table, num_rows=3)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "sLVRdN5llA0h"
},
"outputs": [],
"source": [
"plot_bar_plot(\n",
"    trajectory_eval_result,\n",
"    title=\"Trajectory Metrics\",\n",
"    metrics=[f\"{metric}/mean\" for metric in trajectory_metrics],\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "T8TipU2akHEd"
},
"source": [
"### Evaluate final response\n",
"\n",
"Similar to model evaluation, you can evaluate the final response of the agent using Vertex AI Gen AI Evaluation."
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "DeK-py7ykkDN"
},
"source": [
"#### Set response metrics\n",
"\n",
"After agent inference, Vertex AI Gen AI Evaluation provides several metrics to evaluate generated responses. You can use computation-based metrics to compare the response to a reference (if needed) and existing or custom model-based metrics to determine the quality of the final response.\n",
"\n",
"Check out the [documentation](https://cloud.google.com/vertex-ai/generative-ai/docs/models/determine-eval) to learn more.\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "cyGHGgeVklvz"
},
"outputs": [],
"source": [
"response_metrics = [\"safety\", \"coherence\"]"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "DaBJWcg1kn55"
},
"source": [
"#### Run an evaluation task\n",
"\n",
"To evaluate the agent's generated responses, use the `evaluate` method of the `EvalTask` class."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "wRb2EC_hknSD"
},
"outputs": [],
"source": [
"EXPERIMENT_RUN = f\"response-{get_id()}\"\n",
"\n",
"response_eval_task = EvalTask(\n",
"    dataset=eval_sample_dataset,\n",
"    metrics=response_metrics,\n",
"    experiment=EXPERIMENT_NAME,\n",
"    output_uri_prefix=BUCKET_URI + \"/response-metric-eval\",\n",
")\n",
"\n",
"response_eval_result = response_eval_task.evaluate(\n",
"    runnable=agent_parsed_outcome_sync, experiment_run_name=EXPERIMENT_RUN\n",
")\n",
"\n",
"display_eval_report(response_eval_result)"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "JtewTwiwg9qH"
},
"source": [
"#### Visualize evaluation results\n",
"\n",
"Print a sample of the new evaluation results."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "ZODTRuq2lF75"
},
"outputs": [],
"source": [
"display_dataframe_rows(response_eval_result.metrics_table, num_rows=3)"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "ntRBK3Te6PEc"
},
"source": [
"### Evaluate generated response conditioned by tool choosing\n",
"\n",
"When evaluating AI agents that interact with environments, standard text generation metrics like coherence may not be sufficient. This is because these metrics primarily focus on text structure, while agent responses should be assessed based on their effectiveness within the environment.\n",
"\n",
"Instead, use custom metrics that assess whether the agent's response logically follows from its tool choices, like the one you define in this section."
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "4bENwFcd6prX"
},
"source": [
"#### Define a custom metric\n",
"\n",
"According to the [documentation](https://cloud.google.com/vertex-ai/generative-ai/docs/models/determine-eval#model-based-metrics), you can define a prompt template for evaluating whether an AI agent's response follows logically from its actions by setting up criteria and a rating system for this evaluation.\n",
"\n",
"Define a `criteria` to set the evaluation guidelines and a `pointwise_rating_rubric` to provide a scoring system (1 or 0). Then use a `PointwiseMetricPromptTemplate` to create the template using these components.\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "txGEHcg76riI"
},
"outputs": [],
"source": [
"criteria = {\n",
"    \"Follows trajectory\": (\n",
"        \"Evaluate whether the agent's response logically follows from the \"\n",
"        \"sequence of actions it took. Consider these sub-points:\\n\"\n",
"        \"  - Does the response reflect the information gathered during the trajectory?\\n\"\n",
"        \"  - Is the response consistent with the goals and constraints of the task?\\n\"\n",
"        \"  - Are there any unexpected or illogical jumps in reasoning?\\n\"\n",
"        \"Provide specific examples from the trajectory and response to support your evaluation.\"\n",
"    )\n",
"}\n",
"\n",
"pointwise_rating_rubric = {\n",
"    \"1\": \"Follows trajectory\",\n",
"    \"0\": \"Does not follow trajectory\",\n",
"}\n",
"\n",
"response_follows_trajectory_prompt_template = PointwiseMetricPromptTemplate(\n",
"    criteria=criteria,\n",
"    rating_rubric=pointwise_rating_rubric,\n",
"    input_variables=[\"prompt\", \"predicted_trajectory\"],\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "8MJqXu0kikxd"
},
"source": [
"Print the `prompt_data` of this template, which contains the combined criteria and rubric information ready for use in an evaluation."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "5EL7iEDMikNQ"
},
"outputs": [],
"source": [
"print(response_follows_trajectory_prompt_template.prompt_data)"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "e1djVp7Fi4Yy"
},
"source": [
"After you define the evaluation prompt template, set up the associated metric to evaluate how well a response follows a specific trajectory. The `PointwiseMetric` creates a metric where `response_follows_trajectory` is the metric's name and `response_follows_trajectory_prompt_template` provides the evaluation instructions and context you set up before.\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "Nx1xbZD87iMj"
},
"outputs": [],
"source": [
"response_follows_trajectory_metric = PointwiseMetric(\n",
"    metric=\"response_follows_trajectory\",\n",
"    metric_prompt_template=response_follows_trajectory_prompt_template,\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "1pmxLwTe7Ywv"
},
"source": [
"#### Set response metrics\n",
"\n",
"Set new generated response evaluation metrics by including the custom metric.\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "wrsbVFDd7Ywv"
},
"outputs": [],
"source": [
"response_tool_metrics = [\n",
"    \"trajectory_exact_match\",\n",
"    \"trajectory_in_order_match\",\n",
"    \"safety\",\n",
"    response_follows_trajectory_metric,\n",
"]"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "Lo-Sza807Ywv"
},
"source": [
"#### Run an evaluation task\n",
"\n",
"Run a new agent evaluation."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "_dkb4gSn7Ywv"
},
"outputs": [],
"source": [
"EXPERIMENT_RUN = f\"response-over-tools-{get_id()}\"\n",
"\n",
"response_eval_tool_task = EvalTask(\n",
"    dataset=eval_sample_dataset,\n",
"    metrics=response_tool_metrics,\n",
"    experiment=EXPERIMENT_NAME,\n",
"    output_uri_prefix=BUCKET_URI + \"/reasoning-metric-eval\",\n",
")\n",
"\n",
"response_eval_tool_result = response_eval_tool_task.evaluate(\n",
"    # The dataset has no pre-generated responses, so pass the agent runnable here;\n",
"    # omit it only when your dataset already contains responses and trajectories.\n",
"    runnable=agent_parsed_outcome_sync,\n",
"    experiment_run_name=EXPERIMENT_RUN,\n",
")\n",
"\n",
"display_eval_report(response_eval_tool_result)"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "AtOfIFi2j88g"
},
"source": [
"#### Visualize evaluation results\n",
"\n",
"Visualize a sample of the evaluation results."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "GH2YvXgLlLH7"
},
"outputs": [],
"source": [
"display_dataframe_rows(response_eval_tool_result.metrics_table, num_rows=3)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "tdVhCURXMdLG"
},
"outputs": [],
"source": [
"plot_bar_plot(\n",
"    response_eval_tool_result,\n",
"    title=\"Response Metrics\",\n",
"    metrics=[f\"{metric}/mean\" for metric in response_tool_metrics],\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "4nuUDP3a2eTB"
},
"source": [
"## Bonus: Bring-Your-Own-Dataset (BYOD) evaluation of an ADK agent using Vertex AI Gen AI Evaluation\n",
"\n",
"In Bring-Your-Own-Dataset (BYOD) [scenarios](https://cloud.google.com/vertex-ai/generative-ai/docs/models/evaluation-dataset), you provide both the predicted trajectory and the generated response from the agent.\n"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "DRLKlmWd27PK"
},
"source": [
"### Bring your own evaluation dataset\n",
"\n",
"Define the evaluation dataset with the predicted trajectory and the generated response."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "y9hBgsg324Ej"
},
"outputs": [],
"source": [
"byod_eval_data = {\n",
"    \"prompt\": [\n",
"        \"Get price for smartphone\",\n",
"        \"Get product details and price for headphones\",\n",
"        \"Get details for usb charger\",\n",
"        \"Get product details and price for shoes\",\n",
"        \"Get product details for speaker?\",\n",
"    ],\n",
"    \"reference_trajectory\": [\n",
"        [\n",
"            {\n",
"                \"tool_name\": \"get_product_price\",\n",
"                \"tool_input\": {\"product_name\": \"smartphone\"},\n",
"            }\n",
"        ],\n",
"        [\n",
"            {\n",
"                \"tool_name\": \"get_product_details\",\n",
"                \"tool_input\": {\"product_name\": \"headphones\"},\n",
"            },\n",
"            {\n",
"                \"tool_name\": \"get_product_price\",\n",
"                \"tool_input\": {\"product_name\": \"headphones\"},\n",
"            },\n",
"        ],\n",
"        [\n",
"            {\n",
"                \"tool_name\": \"get_product_details\",\n",
"                \"tool_input\": {\"product_name\": \"usb charger\"},\n",
"            }\n",
"        ],\n",
"        [\n",
"            {\n",
"                \"tool_name\": \"get_product_details\",\n",
"                \"tool_input\": {\"product_name\": \"shoes\"},\n",
"            },\n",
"            {\"tool_name\": \"get_product_price\", \"tool_input\": {\"product_name\": \"shoes\"}},\n",
"        ],\n",
"        [\n",
"            {\n",
"                \"tool_name\": \"get_product_details\",\n",
"                \"tool_input\": {\"product_name\": \"speaker\"},\n",
"            }\n",
"        ],\n",
"    ],\n",
"    \"predicted_trajectory\": [\n",
"        [\n",
"            {\n",
"                \"tool_name\": \"get_product_price\",\n",
"                \"tool_input\": {\"product_name\": \"smartphone\"},\n",
"            }\n",
"        ],\n",
"        [\n",
"            {\n",
"                \"tool_name\": \"get_product_details\",\n",
"                \"tool_input\": {\"product_name\": \"headphones\"},\n",
"            },\n",
"            {\n",
"                \"tool_name\": \"get_product_price\",\n",
"                \"tool_input\": {\"product_name\": \"headphones\"},\n",
"            },\n",
"        ],\n",
"        [\n",
"            {\n",
"                \"tool_name\": \"get_product_details\",\n",
"                \"tool_input\": {\"product_name\": \"usb charger\"},\n",
"            }\n",
"        ],\n",
"        [\n",
"            {\n",
"                \"tool_name\": \"get_product_details\",\n",
"                \"tool_input\": {\"product_name\": \"shoes\"},\n",
"            },\n",
"            {\"tool_name\": \"get_product_price\", \"tool_input\": {\"product_name\": \"shoes\"}},\n",
"        ],\n",
"        [\n",
"            {\n",
"                \"tool_name\": \"get_product_details\",\n",
"                \"tool_input\": {\"product_name\": \"speaker\"},\n",
"            }\n",
"        ],\n",
"    ],\n",
"    \"response\": [\n",
"        \"500\",\n",
"        \"50\",\n",
"        \"A super fast and light usb charger\",\n",
"        \"100\",\n",
"        \"A voice-controlled smart speaker that plays music, sets alarms, and controls smart home devices.\",\n",
"    ],\n",
"}\n",
"\n",
"byod_eval_sample_dataset = pd.DataFrame(byod_eval_data)\n",
"byod_eval_sample_dataset[\"predicted_trajectory\"] = byod_eval_sample_dataset[\n",
"    \"predicted_trajectory\"\n",
"].apply(json.dumps)\n",
"byod_eval_sample_dataset[\"reference_trajectory\"] = byod_eval_sample_dataset[\n",
"    \"reference_trajectory\"\n",
"].apply(json.dumps)\n",
"byod_eval_sample_dataset[\"response\"] = byod_eval_sample_dataset[\"response\"].apply(json.dumps)"
]
},
{
|
| 1424 |
+
"cell_type": "markdown",
|
| 1425 |
+
"metadata": {
|
| 1426 |
+
"id": "oEYmU2eJ7q-1"
|
| 1427 |
+
},
|
| 1428 |
+
"source": [
|
| 1429 |
+
"### Run an evaluation task\n",
|
| 1430 |
+
"\n",
|
| 1431 |
+
"Run a new agent's evaluation using your own dataset and the same setting of the latest evaluation."
|
| 1432 |
+
]
|
| 1433 |
+
},
|
| 1434 |
+
{
|
| 1435 |
+
"cell_type": "code",
|
| 1436 |
+
"execution_count": null,
|
| 1437 |
+
"metadata": {
|
| 1438 |
+
"id": "wBD-4wpB7q-3"
|
| 1439 |
+
},
|
| 1440 |
+
"outputs": [],
|
| 1441 |
+
"source": [
|
| 1442 |
+
"EXPERIMENT_RUN_NAME = f\"response-over-tools-byod-{get_id()}\"\n",
|
| 1443 |
+
"\n",
|
| 1444 |
+
"byod_response_eval_tool_task = EvalTask(\n",
|
| 1445 |
+
" dataset=byod_eval_sample_dataset,\n",
|
| 1446 |
+
" metrics=response_tool_metrics,\n",
|
| 1447 |
+
" experiment=EXPERIMENT_NAME,\n",
|
| 1448 |
+
" output_uri_prefix=BUCKET_URI + \"/byod-eval\",\n",
|
| 1449 |
+
")\n",
|
| 1450 |
+
"\n",
|
| 1451 |
+
"byod_response_eval_tool_result = byod_response_eval_tool_task.evaluate(\n",
|
| 1452 |
+
" experiment_run_name=EXPERIMENT_RUN_NAME\n",
|
| 1453 |
+
")\n",
|
| 1454 |
+
"\n",
|
| 1455 |
+
"display_eval_report(byod_response_eval_tool_result)"
|
| 1456 |
+
]
|
| 1457 |
+
},
|
| 1458 |
+
{
|
| 1459 |
+
"cell_type": "markdown",
|
| 1460 |
+
"metadata": {
|
| 1461 |
+
"id": "9eU3LG6r7q-3"
|
| 1462 |
+
},
|
| 1463 |
+
"source": [
|
| 1464 |
+
"### Visualize evaluation results\n",
|
| 1465 |
+
"\n",
|
| 1466 |
+
"Visualize evaluation result sample."
|
| 1467 |
+
]
|
| 1468 |
+
},
|
| 1469 |
+
{
|
| 1470 |
+
"cell_type": "code",
|
| 1471 |
+
"execution_count": null,
|
| 1472 |
+
"metadata": {
|
| 1473 |
+
"id": "pQFzmd2I7q-3"
|
| 1474 |
+
},
|
| 1475 |
+
"outputs": [],
|
| 1476 |
+
"source": [
|
| 1477 |
+
"display_dataframe_rows(byod_response_eval_tool_result.metrics_table, num_rows=3)"
|
| 1478 |
+
]
|
| 1479 |
+
},
|
| 1480 |
+
{
|
| 1481 |
+
"cell_type": "code",
|
| 1482 |
+
"execution_count": null,
|
| 1483 |
+
"metadata": {
|
| 1484 |
+
"id": "84HiPDOkPseW"
|
| 1485 |
+
},
|
| 1486 |
+
"outputs": [],
|
| 1487 |
+
"source": [
|
| 1488 |
+
"display_radar_plot(\n",
|
| 1489 |
+
" byod_response_eval_tool_result,\n",
|
| 1490 |
+
" title=\"ADK agent evaluation\",\n",
|
| 1491 |
+
" metrics=[f\"{metric}/mean\" for metric in response_tool_metrics],\n",
|
| 1492 |
+
")"
|
| 1493 |
+
]
|
| 1494 |
+
},
|
| 1495 |
+
{
|
| 1496 |
+
"cell_type": "markdown",
|
| 1497 |
+
"metadata": {
|
| 1498 |
+
"id": "fIppkS2jq_Dn"
|
| 1499 |
+
},
|
| 1500 |
+
"source": [
|
| 1501 |
+
"## Cleaning up\n"
|
| 1502 |
+
]
|
| 1503 |
+
},
|
| 1504 |
+
{
|
| 1505 |
+
"cell_type": "code",
|
| 1506 |
+
"execution_count": null,
|
| 1507 |
+
"metadata": {
|
| 1508 |
+
"id": "Ox2I3UfRlTOd"
|
| 1509 |
+
},
|
| 1510 |
+
"outputs": [],
|
| 1511 |
+
"source": [
|
| 1512 |
+
"delete_experiment = True\n",
|
| 1513 |
+
"\n",
|
| 1514 |
+
"if delete_experiment:\n",
|
| 1515 |
+
" try:\n",
|
| 1516 |
+
" experiment = aiplatform.Experiment(EXPERIMENT_NAME)\n",
|
| 1517 |
+
" experiment.delete(delete_backing_tensorboard_runs=True)\n",
|
| 1518 |
+
" except Exception as e:\n",
|
| 1519 |
+
" print(e)"
|
| 1520 |
+
]
|
| 1521 |
+
}
|
| 1522 |
+
],
|
| 1523 |
+
"metadata": {
|
| 1524 |
+
"colab": {
|
| 1525 |
+
"name": "evaluating_adk_agent.ipynb",
|
| 1526 |
+
"toc_visible": true
|
| 1527 |
+
},
|
| 1528 |
+
"kernelspec": {
|
| 1529 |
+
"display_name": "Python 3",
|
| 1530 |
+
"name": "python3"
|
| 1531 |
+
}
|
| 1532 |
+
},
|
| 1533 |
+
"nbformat": 4,
|
| 1534 |
+
"nbformat_minor": 0
|
| 1535 |
+
}
|
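The cells above JSON-encode the trajectory columns before handing the DataFrame to `EvalTask`, since the evaluation service expects string columns. A minimal standalone sketch of that serialization step, using plain dicts and lists instead of a pandas DataFrame (the data here is a tiny illustrative subset, not the notebook's full dataset):

```python
import json

# Hypothetical mini version of the notebook's BYOD dataset: each row pairs a
# reference tool-call trajectory with a predicted one.
byod_eval_data = {
    "reference_trajectory": [
        [{"tool_name": "get_product_price", "tool_input": {"product_name": "smartphone"}}],
    ],
    "predicted_trajectory": [
        [{"tool_name": "get_product_price", "tool_input": {"product_name": "smartphone"}}],
    ],
}

# Serialize each trajectory list to a JSON string, mirroring what the notebook
# does with DataFrame.apply(json.dumps).
serialized = {
    col: [json.dumps(traj) for traj in rows] for col, rows in byod_eval_data.items()
}

# Round-tripping recovers the original structure.
restored = json.loads(serialized["reference_trajectory"][0])
print(restored[0]["tool_name"])  # get_product_price
```

The same round-trip property is what lets the evaluation backend parse the trajectories back out of the string columns.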
pyproject.toml
ADDED
@@ -0,0 +1,92 @@
+[project]
+name = "adk-rag-agent"
+version = "0.1.0"
+description = ""
+authors = [
+    {name = "Your Name", email = "your@email.com"},
+]
+dependencies = [
+    "google-adk>=1.15.0,<2.0.0",
+    "opentelemetry-instrumentation-google-genai>=0.1.0,<1.0.0",
+    "gcsfs>=2024.11.0",
+    "google-cloud-logging>=3.12.0,<4.0.0",
+    "google-cloud-aiplatform[evaluation,agent-engines]>=1.118.0,<2.0.0",
+    "protobuf>=6.31.1,<7.0.0",
+    "gradio>=5.49.1",
+]
+requires-python = ">=3.10,<3.14"
+
+
+[dependency-groups]
+dev = [
+    "pytest>=8.3.4,<9.0.0",
+    "pytest-asyncio>=0.23.8,<1.0.0",
+    "nest-asyncio>=1.6.0,<2.0.0",
+]
+
+[project.optional-dependencies]
+jupyter = [
+    "jupyter>=1.0.0,<2.0.0",
+]
+lint = [
+    "ruff>=0.4.6,<1.0.0",
+    "mypy>=1.15.0,<2.0.0",
+    "codespell>=2.2.0,<3.0.0",
+    "types-pyyaml>=6.0.12.20240917,<7.0.0",
+    "types-requests>=2.32.0.20240914,<3.0.0",
+]
+
+[tool.ruff]
+line-length = 88
+target-version = "py310"
+
+[tool.ruff.lint]
+select = [
+    "E",   # pycodestyle
+    "F",   # pyflakes
+    "W",   # pycodestyle warnings
+    "I",   # isort
+    "C",   # flake8-comprehensions
+    "B",   # flake8-bugbear
+    "UP",  # pyupgrade
+    "RUF", # ruff specific rules
+]
+ignore = ["E501", "C901", "B006"]  # ignore line too long, too complex
+
+[tool.ruff.lint.isort]
+known-first-party = ["rag_agent", "frontend"]
+
+[tool.mypy]
+disallow_untyped_calls = true
+disallow_untyped_defs = true
+disallow_incomplete_defs = true
+no_implicit_optional = true
+check_untyped_defs = true
+disallow_subclassing_any = true
+warn_incomplete_stub = true
+warn_redundant_casts = true
+warn_unused_ignores = true
+warn_unreachable = true
+follow_imports = "silent"
+ignore_missing_imports = true
+explicit_package_bases = true
+disable_error_code = ["misc", "no-untyped-call", "no-any-return"]
+
+exclude = [".venv"]
+
+[tool.codespell]
+ignore-words-list = "rouge"
+skip = "./locust_env/*,uv.lock,.venv,./frontend,**/*.ipynb"
+
+
+[build-system]
+requires = ["hatchling"]
+build-backend = "hatchling.build"
+
+
+[tool.pytest.ini_options]
+pythonpath = "."
+asyncio_default_fixture_loop_scope = "function"
+
+[tool.hatch.build.targets.wheel]
+packages = ["rag_agent", "frontend"]
rag_agent/.env.example
DELETED

@@ -1,3 +0,0 @@
-GOOGLE_CLOUD_PROJECT="adk-rag-yt"
-GOOGLE_CLOUD_LOCATION="us-central1"
-GOOGLE_GENAI_USE_VERTEXAI="True"
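The deleted `.env.example` documented three environment variables. A sketch of reading them with fallbacks (the fallback values are the example defaults from the removed file; `load_config` is an illustrative helper, not part of the repo):

```python
def load_config(env: dict) -> dict:
    """Read the three variables the old .env.example documented, with defaults."""
    return {
        "project": env.get("GOOGLE_CLOUD_PROJECT", "adk-rag-yt"),
        "location": env.get("GOOGLE_CLOUD_LOCATION", "us-central1"),
        # The env file stores a string; normalize it to a boolean.
        "use_vertexai": env.get("GOOGLE_GENAI_USE_VERTEXAI", "True").lower() == "true",
    }

# Passing os.environ would read the real environment; {} exercises the defaults.
print(load_config({}))
```

Taking the environment as a parameter (rather than reading `os.environ` directly) keeps the helper easy to test.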
rag_agent/agent.py
CHANGED

@@ -11,105 +11,53 @@ from .tools.rag_query import rag_query
 root_agent = Agent(
     name="RagAgent",
     # Using Gemini 2.5 Flash for best performance with RAG operations
-    model="gemini-2.5-flash",
+    model="gemini-2.5-flash",
     description="Vertex AI RAG Agent",
     tools=[
         rag_query,
-        list_corpora,
-        create_corpus,
-        add_data,
-        get_corpus_info,
-        delete_corpus,
-        delete_document,
     ],
-    instruction=
-
-    1. `rag_query`: Query a corpus to answer questions
-       - Parameters:
-         - corpus_name: The name of the corpus to query (required, but can be empty to use current corpus)
-         - query: The text question to ask
-
-    2. `list_corpora`: List all available corpora
-       - When this tool is called, it returns the full resource names that should be used with other tools
-
-    3. `create_corpus`: Create a new corpus
-       - Parameters:
-         - corpus_name: The name for the new corpus
-
-    4. `add_data`: Add new data to a corpus
-       - Parameters:
-         - corpus_name: The name of the corpus to add data to (required, but can be empty to use current corpus)
-         - paths: List of Google Drive or GCS URLs
-
-    5. `get_corpus_info`: Get detailed information about a specific corpus
-       - Parameters:
-         - corpus_name: The name of the corpus to get information about
-
-    6. `delete_document`: Delete a specific document from a corpus
-       - Parameters:
-         - corpus_name: The name of the corpus containing the document
-         - document_id: The ID of the document to delete (can be obtained from get_corpus_info results)
-         - confirm: Boolean flag that must be set to True to confirm deletion
-
-    7. `delete_corpus`: Delete an entire corpus and all its associated files
-       - Parameters:
-         - corpus_name: The name of the corpus to delete
-         - confirm: Boolean flag that must be set to True to confirm deletion
-
-    ## INTERNAL: Technical Implementation Details
-
-    This section is NOT user-facing information - don't repeat these details to users:
-
-    - The system tracks a "current corpus" in the state. When a corpus is created or used, it becomes the current corpus.
-    - For rag_query and add_data, you can provide an empty string for corpus_name to use the current corpus.
-    - If no current corpus is set and an empty corpus_name is provided, the tools will prompt the user to specify one.
-    - Whenever possible, use the full resource name returned by the list_corpora tool when calling other tools.
-    - Using the full resource name instead of just the display name will ensure more reliable operation.
-    - Do not tell users to use full resource names in your responses - just use them internally in your tool calls.
-
-    ## Communication Guidelines
-
-    - Be clear and concise in your responses.
-    - If querying a corpus, explain which corpus you're using to answer the question.
-    - If managing corpora, explain what actions you've taken.
-    - When new data is added, confirm what was added and to which corpus.
-    - When corpus information is displayed, organize it clearly for the user.
-    - When deleting a document or corpus, always ask for confirmation before proceeding.
-    - If an error occurs, explain what went wrong and suggest next steps.
-    - When listing corpora, just provide the display names and basic information - don't tell users about resource names.
-
-    Remember, your primary goal is to help users access and manage information through RAG capabilities.
-    """,
+    instruction=
+    f"""
+    <Character>
+    Characteristics: {"ผู้หญิง เรียบร้อย สุขุม ใจดี เฟรนลี่"}
+    Language: {"ไทย"}
+    Name: {"แคส"}
+    You're an expert of the company {"บริษัท ไทย บิทแคสต์ จำกัด (Thai Bitcast Company Limited)"}.
+    You are a helpful assistant agent capable of answering user questions by retrieving relevant information.
+    </Character>
+
+    The assistant can answer user queries by retrieving information from Vertex AI's document corpora using the `rag_query` tool.
+
+    <Tool>
+    Use the `rag_query` tool when the user asks about the company's business, products, or services.
+
+    rag_query parameters:
+    - query: str
+    - type: str
+
+    Guidelines:
+    • query: You must simplify and reframe the user's question into a concise query appropriate for the tool.
+    • type: Must be one of the following — "business", "product", or "service".
+
+    Type selection rules:
+    • business — Use this when the user asks general business questions or requests an overview of the company, such as:
+      - "เกี่ยวกับอะไร"
+      - "ขายสินค้าอะไรบ้าง"
+
+    • product — Use this when the user is looking for a specific product or describing product requirements, such as:
+      - "มีสินค้าลักษณะนี้ไหม"
+
+    • service — Use this when the user is looking for company services, such as:
+      - "มีบริการนี้ไหม"
+    </Tool>
+
+    <Response_Handling>
+    After retrieving data, you must verify the relevance and accuracy of the information before responding to the user.
+    Do not generate or infer any information that is not explicitly provided in the retrieved data.
+    </Response_Handling>
+    """
+    ,
 )
+
+from google.adk.apps.app import App
+app = App(root_agent=root_agent, name="rag_agent")
rag_agent/agent_engine_app.py
ADDED

@@ -0,0 +1,61 @@
+# Copyright 2025 Google LLC
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+# mypy: disable-error-code="attr-defined,arg-type"
+import logging
+import os
+from typing import Any
+
+import vertexai
+from google.adk.artifacts import GcsArtifactService, InMemoryArtifactService
+from google.cloud import logging as google_cloud_logging
+from vertexai.agent_engines.templates.adk import AdkApp
+
+from rag_agent.agent import app as adk_app
+from rag_agent.app_utils.telemetry import setup_telemetry
+from rag_agent.app_utils.typing import Feedback
+
+
+class AgentEngineApp(AdkApp):
+    def set_up(self) -> None:
+        """Initialize the agent engine app with logging and telemetry."""
+        vertexai.init()
+        setup_telemetry()
+        super().set_up()
+        logging.basicConfig(level=logging.INFO)
+        logging_client = google_cloud_logging.Client()
+        self.logger = logging_client.logger(__name__)
+        if gemini_location:
+            os.environ["GOOGLE_CLOUD_LOCATION"] = gemini_location
+
+    def register_feedback(self, feedback: dict[str, Any]) -> None:
+        """Collect and log feedback."""
+        feedback_obj = Feedback.model_validate(feedback)
+        self.logger.log_struct(feedback_obj.model_dump(), severity="INFO")
+
+    def register_operations(self) -> dict[str, list[str]]:
+        """Registers the operations of the Agent."""
+        operations = super().register_operations()
+        operations[""] = operations.get("", []) + ["register_feedback"]
+        return operations
+
+
+gemini_location = os.environ.get("GOOGLE_CLOUD_LOCATION")
+logs_bucket_name = os.environ.get("LOGS_BUCKET_NAME")
+agent_engine = AgentEngineApp(
+    app=adk_app,
+    artifact_service_builder=lambda: GcsArtifactService(bucket_name=logs_bucket_name)
+    if logs_bucket_name
+    else InMemoryArtifactService(),
+)
rag_agent/app_utils/.requirements.txt
ADDED

@@ -0,0 +1,175 @@
+aiofiles==24.1.0
+aiohappyeyeballs==2.6.1
+aiohttp==3.13.2
+aiosignal==1.4.0
+aiosqlite==0.21.0
+alembic==1.17.2
+annotated-types==0.7.0
+anyio==4.11.0
+async-timeout==5.0.1 ; python_full_version < '3.11'
+attrs==25.4.0
+audioop-lts==0.2.2 ; python_full_version >= '3.13'
+authlib==1.6.5
+brotli==1.2.0
+cachetools==6.2.2
+certifi==2025.11.12
+cffi==2.0.0 ; platform_python_implementation != 'PyPy'
+charset-normalizer==3.4.4
+click==8.3.1
+cloudpickle==3.1.2
+colorama==0.4.6 ; sys_platform == 'win32'
+cryptography==46.0.3
+decorator==5.2.1
+distro==1.9.0
+docstring-parser==0.17.0
+exceptiongroup==1.3.0 ; python_full_version < '3.11'
+fastapi==0.118.3
+fastuuid==0.14.0
+ffmpy==1.0.0
+filelock==3.20.0
+frozenlist==1.8.0
+fsspec==2025.10.0
+gcsfs==2025.10.0
+google-adk==1.19.0
+google-api-core==2.28.1
+google-api-python-client==2.187.0
+google-auth==2.43.0
+google-auth-httplib2==0.2.1
+google-auth-oauthlib==1.2.2
+google-cloud-aiplatform==1.128.0
+google-cloud-appengine-logging==1.7.0
+google-cloud-audit-log==0.4.0
+google-cloud-bigquery==3.38.0
+google-cloud-bigquery-storage==2.34.0
+google-cloud-bigtable==2.34.0
+google-cloud-core==2.5.0
+google-cloud-discoveryengine==0.13.12
+google-cloud-logging==3.12.1
+google-cloud-monitoring==2.28.0
+google-cloud-resource-manager==1.15.0
+google-cloud-secret-manager==2.25.0
+google-cloud-spanner==3.59.0
+google-cloud-speech==2.34.0
+google-cloud-storage==3.6.0
+google-cloud-trace==1.17.0
+google-crc32c==1.7.1
+google-genai==1.51.0
+google-resumable-media==2.8.0
+googleapis-common-protos==1.72.0
+gradio==5.49.1
+gradio-client==1.13.3
+graphviz==0.21
+greenlet==3.2.4 ; platform_machine == 'AMD64' or platform_machine == 'WIN32' or platform_machine == 'aarch64' or platform_machine == 'amd64' or platform_machine == 'ppc64le' or platform_machine == 'win32' or platform_machine == 'x86_64'
+groovy==0.1.2
+grpc-google-iam-v1==0.14.3
+grpc-interceptor==0.15.4
+grpcio==1.76.0
+grpcio-status==1.76.0
+h11==0.16.0
+hf-xet==1.2.0 ; platform_machine == 'AMD64' or platform_machine == 'aarch64' or platform_machine == 'amd64' or platform_machine == 'arm64' or platform_machine == 'x86_64'
+httpcore==1.0.9
+httplib2==0.31.0
+httpx==0.28.1
+httpx-sse==0.4.3
+huggingface-hub==1.1.4
+idna==3.11
+importlib-metadata==8.7.0
+jinja2==3.1.6
+jiter==0.12.0
+joblib==1.5.2
+jsonschema==4.25.1
+jsonschema-specifications==2025.9.1
+litellm==1.80.0
+mako==1.3.10
+markdown-it-py==4.0.0
+markupsafe==3.0.3
+mcp==1.21.2
+mdurl==0.1.2
+multidict==6.7.0
+numpy==2.2.6 ; python_full_version < '3.11'
+numpy==2.3.5 ; python_full_version >= '3.11'
+oauthlib==3.3.1
+openai==2.8.1
+opentelemetry-api==1.37.0
+opentelemetry-exporter-gcp-logging==1.11.0a0
+opentelemetry-exporter-gcp-monitoring==1.11.0a0
+opentelemetry-exporter-gcp-trace==1.11.0
+opentelemetry-exporter-otlp-proto-common==1.37.0
+opentelemetry-exporter-otlp-proto-http==1.37.0
+opentelemetry-instrumentation==0.58b0
+opentelemetry-instrumentation-google-genai==0.4b0
+opentelemetry-proto==1.37.0
+opentelemetry-resourcedetector-gcp==1.11.0a0
+opentelemetry-sdk==1.37.0
+opentelemetry-semantic-conventions==0.58b0
+opentelemetry-util-genai==0.2b0
+orjson==3.11.4
+packaging==25.0
+pandas==2.3.3
+pillow==11.3.0
+propcache==0.4.1
+proto-plus==1.26.1
+protobuf==6.33.1
+pyarrow==22.0.0
+pyasn1==0.6.1
+pyasn1-modules==0.4.2
+pycparser==2.23 ; implementation_name != 'PyPy' and platform_python_implementation != 'PyPy'
+pydantic==2.11.10
+pydantic-core==2.33.2
+pydantic-settings==2.12.0
+pydub==0.25.1
+pygments==2.19.2
+pyjwt==2.10.1
+pyparsing==3.2.5
+python-dateutil==2.9.0.post0
+python-dotenv==1.2.1
+python-multipart==0.0.20
+pytz==2025.2
+pywin32==311 ; sys_platform == 'win32'
+pyyaml==6.0.3
+referencing==0.37.0
+regex==2025.11.3
+requests==2.32.5
+requests-oauthlib==2.0.0
+rich==14.2.0
+rpds-py==0.29.0
+rsa==4.9.1
+ruamel-yaml==0.18.16
+ruamel-yaml-clib==0.2.15 ; platform_python_implementation == 'CPython'
+ruff==0.14.5
+safehttpx==0.1.7
+scikit-learn==1.5.2 ; python_full_version < '3.11'
+scikit-learn==1.7.2 ; python_full_version >= '3.11'
+scipy==1.15.3 ; python_full_version < '3.11'
+scipy==1.16.3 ; python_full_version >= '3.11'
+semantic-version==2.10.0
+shapely==2.1.2
+shellingham==1.5.4
+six==1.17.0
+sniffio==1.3.1
+sqlalchemy==2.0.44
+sqlalchemy-spanner==1.17.1
+sqlparse==0.5.3
+sse-starlette==3.0.3
+starlette==0.48.0
+tenacity==9.1.2
+threadpoolctl==3.6.0
+tiktoken==0.12.0
+tokenizers==0.22.1
+tomli==2.3.0 ; python_full_version < '3.11'
+tomlkit==0.13.3
+tqdm==4.67.1
+typer==0.20.0
+typer-slim==0.20.0
+typing-extensions==4.15.0
+typing-inspection==0.4.2
+tzdata==2025.2
+tzlocal==5.3.1
+uritemplate==4.2.0
+urllib3==2.5.0
+uvicorn==0.38.0
+watchdog==6.0.0
+websockets==15.0.1
+wrapt==1.17.3
+yarl==1.22.0
+zipp==3.23.0
rag_agent/app_utils/deploy.py
ADDED
|
@@ -0,0 +1,338 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
# Copyright 2025 Google LLC
|
| 2 |
+
#
|
| 3 |
+
# Licensed under the Apache License, Version 2.0 (the "License");
|
| 4 |
+
# you may not use this file except in compliance with the License.
|
| 5 |
+
# You may obtain a copy of the License at
|
| 6 |
+
#
|
| 7 |
+
# http://www.apache.org/licenses/LICENSE-2.0
|
| 8 |
+
#
|
| 9 |
+
# Unless required by applicable law or agreed to in writing, software
|
| 10 |
+
# distributed under the License is distributed on an "AS IS" BASIS,
|
| 11 |
+
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
| 12 |
+
# See the License for the specific language governing permissions and
|
| 13 |
+
# limitations under the License.
|
| 14 |
+
|
| 15 |
+
import asyncio
|
| 16 |
+
import datetime
|
| 17 |
+
import importlib
|
| 18 |
+
import inspect
|
| 19 |
+
import json
|
| 20 |
+
import logging
|
| 21 |
+
import warnings
|
| 22 |
+
from typing import Any
|
| 23 |
+
|
| 24 |
+
import click
|
| 25 |
+
import google.auth
|
| 26 |
+
import vertexai
|
| 27 |
+
from vertexai._genai import _agent_engines_utils
|
| 28 |
+
from vertexai._genai.types import AgentEngine, AgentEngineConfig
|
| 29 |
+
|
| 30 |
+
# Suppress google-cloud-storage version compatibility warning
|
| 31 |
+
warnings.filterwarnings(
|
| 32 |
+
"ignore", category=FutureWarning, module="google.cloud.aiplatform"
|
| 33 |
+
)
|
| 34 |
+
|
| 35 |
+
|
| 36 |
+
def generate_class_methods_from_agent(agent_instance: Any) -> list[dict[str, Any]]:
|
| 37 |
+
"""Generate method specifications with schemas from agent's register_operations().
|
| 38 |
+
|
| 39 |
+
See: https://docs.cloud.google.com/agent-builder/agent-engine/use/custom#supported-operations
|
| 40 |
+
"""
|
| 41 |
+
registered_operations = _agent_engines_utils._get_registered_operations(
|
| 42 |
+
agent=agent_instance
|
| 43 |
+
)
|
| 44 |
+
class_methods_spec = _agent_engines_utils._generate_class_methods_spec_or_raise(
|
| 45 |
+
agent=agent_instance,
|
| 46 |
+
operations=registered_operations,
|
| 47 |
+
)
|
| 48 |
+
class_methods_list = [
|
| 49 |
+
_agent_engines_utils._to_dict(method_spec) for method_spec in class_methods_spec
|
| 50 |
+
]
|
| 51 |
+
return class_methods_list
|
| 52 |
+
|
| 53 |
+
|
| 54 |
+
def parse_key_value_pairs(kv_string: str | None) -> dict[str, str]:
|
| 55 |
+
"""Parse key-value pairs from a comma-separated KEY=VALUE string."""
|
| 56 |
+
result = {}
|
| 57 |
+
if kv_string:
|
| 58 |
+
for pair in kv_string.split(","):
|
| 59 |
+
if "=" in pair:
|
| 60 |
+
key, value = pair.split("=", 1)
|
| 61 |
+
result[key.strip()] = value.strip()
|
| 62 |
+
else:
|
| 63 |
+
logging.warning(f"Skipping malformed key-value pair: {pair}")
|
| 64 |
+
return result
|
| 65 |
+
|
| 66 |
+
|
| 67 |
+
def write_deployment_metadata(
|
| 68 |
+
remote_agent: Any,
|
| 69 |
+
metadata_file: str = "deployment_metadata.json",
|
| 70 |
+
) -> None:
|
| 71 |
+
"""Write deployment metadata to file."""
|
| 72 |
+
metadata = {
|
| 73 |
+
"remote_agent_engine_id": remote_agent.api_resource.name,
|
| 74 |
+
"deployment_target": "agent_engine",
|
| 75 |
+
"is_a2a": False,
|
| 76 |
+
"deployment_timestamp": datetime.datetime.now().isoformat(),
|
| 77 |
+
}
|
| 78 |
+
|
| 79 |
+
with open(metadata_file, "w") as f:
|
| 80 |
+
json.dump(metadata, f, indent=2)
|
| 81 |
+
|
| 82 |
+
logging.info(f"Agent Engine ID written to {metadata_file}")
|
| 83 |
+
|
| 84 |
+
|
| 85 |
+
def print_deployment_success(
|
| 86 |
+
remote_agent: Any,
|
| 87 |
+
location: str,
|
| 88 |
+
project: str,
|
| 89 |
+
) -> None:
|
| 90 |
+
"""Print deployment success message with console URL."""
|
| 91 |
+
# Extract agent engine ID and project number for console URL
|
| 92 |
+
resource_name_parts = remote_agent.api_resource.name.split("/")
|
| 93 |
+
agent_engine_id = resource_name_parts[-1]
|
| 94 |
+
project_number = resource_name_parts[1]
|
| 95 |
+
print("\n✅ Deployment successful!")
|
| 96 |
+
service_account = remote_agent.api_resource.spec.service_account
|
| 97 |
+
if service_account:
|
| 98 |
+
print(f"Service Account: {service_account}")
|
| 99 |
+
else:
|
| 100 |
+
default_sa = (
|
| 101 |
+
f"service-{project_number}@gcp-sa-aiplatform-re.iam.gserviceaccount.com"
|
| 102 |
+
)
|
| 103 |
+
print(f"Service Account: {default_sa}")
|
| 104 |
+
playground_url = f"https://console.cloud.google.com/vertex-ai/agents/locations/{location}/agent-engines/{agent_engine_id}/playground?project={project}"
|
| 105 |
+
print(f"\n📊 Open Console Playground: {playground_url}\n")
|
| 106 |
+
|
| 107 |
+
|
| 108 |
+
@click.command()
|
| 109 |
+
@click.option(
|
| 110 |
+
"--project",
|
| 111 |
+
default=None,
|
| 112 |
+
help="GCP project ID (defaults to application default credentials)",
|
| 113 |
+
)
|
| 114 |
+
@click.option(
|
| 115 |
+
"--location",
|
| 116 |
+
default="asia-southeast1",
|
| 117 |
+
help="GCP region (defaults to asia-southeast1)",
|
| 118 |
+
)
|
| 119 |
+
@click.option(
|
| 120 |
+
"--display-name",
|
| 121 |
+
default="adk-rag-agent",
|
| 122 |
+
help="Display name for the agent engine",
|
| 123 |
+
)
|
| 124 |
+
@click.option(
|
| 125 |
+
"--description",
|
| 126 |
+
default="",
|
| 127 |
+
help="Description of the agent",
|
| 128 |
+
)
|
| 129 |
+
@click.option(
|
| 130 |
+
"--source-packages",
|
| 131 |
+
multiple=True,
|
| 132 |
+
default=["./rag_agent"],
|
| 133 |
+
help="Source packages to deploy. Can be specified multiple times (e.g., --source-packages=./app --source-packages=./lib)",
|
| 134 |
+
)
|
| 135 |
+
@click.option(
|
| 136 |
+
"--entrypoint-module",
|
| 137 |
+
default="rag_agent.agent_engine_app",
|
| 138 |
+
help="Python module path for the agent entrypoint (required)",
|
| 139 |
+
)
|
| 140 |
+
@click.option(
|
| 141 |
+
"--entrypoint-object",
|
| 142 |
+
default="agent_engine",
|
| 143 |
+
help="Name of the agent instance at module level (required)",
|
| 144 |
+
)
|
| 145 |
+
@click.option(
|
| 146 |
+
"--requirements-file",
|
| 147 |
+
default="rag_agent/app_utils/.requirements.txt",
|
| 148 |
+
help="Path to requirements.txt file",
|
| 149 |
+
)
|
| 150 |
+
@click.option(
|
| 151 |
+
"--set-env-vars",
|
| 152 |
+
default=None,
|
| 153 |
+
help="Comma-separated list of environment variables in KEY=VALUE format",
|
| 154 |
+
)
|
| 155 |
+
@click.option(
|
| 156 |
+
"--labels",
|
| 157 |
+
default=None,
|
| 158 |
+
help="Comma-separated list of labels in KEY=VALUE format",
|
| 159 |
+
)
|
| 160 |
+
@click.option(
|
| 161 |
+
"--service-account",
|
| 162 |
+
default=None,
|
| 163 |
+
help="Service account email to use for the agent engine",
|
| 164 |
+
)
|
| 165 |
+
@click.option(
|
| 166 |
+
"--min-instances",
|
| 167 |
+
type=int,
|
| 168 |
+
default=1,
|
| 169 |
+
help="Minimum number of instances (default: 1)",
|
| 170 |
+
)
|
| 171 |
+
@click.option(
|
| 172 |
+
"--max-instances",
|
| 173 |
+
type=int,
|
| 174 |
+
default=10,
|
| 175 |
+
help="Maximum number of instances (default: 10)",
|
| 176 |
+
)
|
| 177 |
+
@click.option(
|
| 178 |
+
"--cpu",
|
| 179 |
+
default="4",
|
| 180 |
+
help="CPU limit (default: 4)",
|
| 181 |
+
)
|
| 182 |
+
@click.option(
|
| 183 |
+
"--memory",
|
| 184 |
+
default="8Gi",
|
| 185 |
+
help="Memory limit (default: 8Gi)",
|
| 186 |
+
)
|
| 187 |
+
@click.option(
|
| 188 |
+
"--container-concurrency",
|
| 189 |
+
type=int,
|
| 190 |
+
default=9,
|
| 191 |
+
help="Container concurrency (default: 9)",
|
| 192 |
+
)
|
| 193 |
+
@click.option(
|
| 194 |
+
"--num-workers",
|
| 195 |
+
type=int,
|
| 196 |
+
default=1,
|
| 197 |
+
help="Number of worker processes (default: 1)",
|
| 198 |
+
)
|
| 199 |
+
def deploy_agent_engine_app(
|
| 200 |
+
project: str | None,
|
| 201 |
+
location: str,
|
| 202 |
+
display_name: str,
|
| 203 |
+
description: str,
|
| 204 |
+
source_packages: tuple[str, ...],
|
| 205 |
+
entrypoint_module: str,
|
| 206 |
+
entrypoint_object: str,
|
| 207 |
+
requirements_file: str,
|
| 208 |
+
set_env_vars: str | None,
|
| 209 |
+
labels: str | None,
|
| 210 |
+
service_account: str | None,
|
| 211 |
+
min_instances: int,
|
| 212 |
+
max_instances: int,
|
| 213 |
+
cpu: str,
|
| 214 |
+
memory: str,
|
| 215 |
+
container_concurrency: int,
|
| 216 |
+
num_workers: int,
|
| 217 |
+
) -> AgentEngine:
|
| 218 |
+
"""Deploy the agent engine app to Vertex AI."""
|
| 219 |
+
|
| 220 |
+
logging.basicConfig(level=logging.INFO)
|
| 221 |
+
logging.getLogger("httpx").setLevel(logging.WARNING)
|
| 222 |
+
|
| 223 |
+
# Parse environment variables and labels if provided
|
| 224 |
+
env_vars = parse_key_value_pairs(set_env_vars)
|
| 225 |
+
labels_dict = parse_key_value_pairs(labels)
|
| 226 |
+
|
| 227 |
+
# Set GOOGLE_CLOUD_REGION to match deployment location
|
| 228 |
+
env_vars["GOOGLE_CLOUD_REGION"] = location
|
| 229 |
+
|
| 230 |
+
# Add NUM_WORKERS from CLI argument (can be overridden via --set-env-vars)
|
| 231 |
+
if "NUM_WORKERS" not in env_vars:
|
| 232 |
+
env_vars["NUM_WORKERS"] = str(num_workers)
|
| 233 |
+
|
| 234 |
+
# Enable telemetry by default for Agent Engine
|
| 235 |
+
if "GOOGLE_CLOUD_AGENT_ENGINE_ENABLE_TELEMETRY" not in env_vars:
|
| 236 |
+
env_vars["GOOGLE_CLOUD_AGENT_ENGINE_ENABLE_TELEMETRY"] = "true"
|
| 237 |
+
if "OTEL_INSTRUMENTATION_GENAI_CAPTURE_MESSAGE_CONTENT" not in env_vars:
|
| 238 |
+
env_vars["OTEL_INSTRUMENTATION_GENAI_CAPTURE_MESSAGE_CONTENT"] = "true"
|
| 239 |
+
|
| 240 |
+
if not project:
|
| 241 |
+
_, project = google.auth.default()
|
| 242 |
+
|
| 243 |
+
print("""
|
| 244 |
+
╔═══════════════════════════════════════════════════════════╗
|
| 245 |
+
║ ║
|
| 246 |
+
║ 🤖 DEPLOYING AGENT TO VERTEX AI AGENT ENGINE 🤖 ║
|
| 247 |
+
║ ║
|
| 248 |
+
╚═══════════════════════════════════════════════════════════╝
|
| 249 |
+
""")
|
| 250 |
+
|
| 251 |
+
# Log deployment parameters
|
| 252 |
+
click.echo("\n📋 Deployment Parameters:")
|
| 253 |
+
click.echo(f" Project: {project}")
|
| 254 |
+
click.echo(f" Location: {location}")
|
| 255 |
+
click.echo(f" Display Name: {display_name}")
|
| 256 |
+
click.echo(f" Min Instances: {min_instances}")
|
| 257 |
+
click.echo(f" Max Instances: {max_instances}")
|
| 258 |
+
click.echo(f" CPU: {cpu}")
|
| 259 |
+
click.echo(f" Memory: {memory}")
|
| 260 |
+
click.echo(f" Container Concurrency: {container_concurrency}")
|
| 261 |
+
if service_account:
|
| 262 |
+
click.echo(f" Service Account: {service_account}")
|
| 263 |
+
if env_vars:
|
| 264 |
+
click.echo("\n🌍 Environment Variables:")
|
| 265 |
+
for key, value in sorted(env_vars.items()):
|
| 266 |
+
click.echo(f" {key}: {value}")
|
| 267 |
+
|
| 268 |
+
source_packages_list = list(source_packages)
|
| 269 |
+
|
| 270 |
+
# Initialize vertexai client
|
| 271 |
+
client = vertexai.Client(
|
| 272 |
+
project=project,
|
| 273 |
+
location=location,
|
| 274 |
+
)
|
| 275 |
+
vertexai.init(project=project, location=location)
|
| 276 |
+
|
| 277 |
+
# Add agent garden labels if configured
|
| 278 |
+
|
| 279 |
+
# Dynamically import the agent instance to generate class_methods
|
| 280 |
+
logging.info(f"Importing {entrypoint_module}.{entrypoint_object}")
|
| 281 |
+
module = importlib.import_module(entrypoint_module)
|
| 282 |
+
agent_instance = getattr(module, entrypoint_object)
|
| 283 |
+
|
| 284 |
+
# If the agent_instance is a coroutine, await it to get the actual instance
|
| 285 |
+
if inspect.iscoroutine(agent_instance):
|
| 286 |
+
logging.info(f"Detected coroutine, awaiting {entrypoint_object}...")
|
| 287 |
+
agent_instance = asyncio.run(agent_instance)
|
| 288 |
+
# Generate class methods spec from register_operations
|
| 289 |
+
class_methods_list = generate_class_methods_from_agent(agent_instance)
|
| 290 |
+
|
| 291 |
+
config = AgentEngineConfig(
|
| 292 |
+
display_name=display_name,
|
| 293 |
+
description=description,
|
| 294 |
+
source_packages=source_packages_list,
|
| 295 |
+
entrypoint_module=entrypoint_module,
|
| 296 |
+
entrypoint_object=entrypoint_object,
|
| 297 |
+
class_methods=class_methods_list,
|
| 298 |
+
env_vars=env_vars,
|
| 299 |
+
service_account=service_account,
|
| 300 |
+
requirements_file=requirements_file,
|
| 301 |
+
labels=labels_dict,
|
| 302 |
+
min_instances=min_instances,
|
| 303 |
+
max_instances=max_instances,
|
| 304 |
+
resource_limits={"cpu": cpu, "memory": memory},
|
| 305 |
+
container_concurrency=container_concurrency,
|
| 306 |
+
agent_framework="google-adk",
|
| 307 |
+
)
|
| 308 |
+
|
| 309 |
+
# Check if an agent with this name already exists
|
| 310 |
+
existing_agents = list(client.agent_engines.list())
|
| 311 |
+
matching_agents = [
|
| 312 |
+
agent
|
| 313 |
+
for agent in existing_agents
|
| 314 |
+
if agent.api_resource.display_name == display_name
|
| 315 |
+
]
|
| 316 |
+
|
| 317 |
+
# Deploy the agent (create or update)
|
| 318 |
+
if matching_agents:
|
| 319 |
+
click.echo(f"\n📝 Updating existing agent: {display_name}")
|
| 320 |
+
else:
|
| 321 |
+
click.echo(f"\n🚀 Creating new agent: {display_name}")
|
| 322 |
+
|
| 323 |
+
click.echo("🚀 Deploying to Vertex AI Agent Engine (this can take 3-5 minutes)...")
|
| 324 |
+
if matching_agents:
|
| 325 |
+
remote_agent = client.agent_engines.update(
|
| 326 |
+
name=matching_agents[0].api_resource.name, config=config
|
| 327 |
+
)
|
| 328 |
+
else:
|
| 329 |
+
remote_agent = client.agent_engines.create(config=config)
|
| 330 |
+
|
| 331 |
+
write_deployment_metadata(remote_agent)
|
| 332 |
+
print_deployment_success(remote_agent, location, project)
|
| 333 |
+
|
| 334 |
+
return remote_agent
|
| 335 |
+
|
| 336 |
+
|
| 337 |
+
if __name__ == "__main__":
|
| 338 |
+
deploy_agent_engine_app()
|
rag_agent/app_utils/gcs.py
ADDED
@@ -0,0 +1,42 @@
# Copyright 2025 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import logging

import google.cloud.storage as storage
from google.api_core import exceptions


def create_bucket_if_not_exists(bucket_name: str, project: str, location: str) -> None:
    """Creates a new bucket if it doesn't already exist.

    Args:
        bucket_name: Name of the bucket to create
        project: Google Cloud project ID
        location: Location to create the bucket in
    """
    storage_client = storage.Client(project=project)

    if bucket_name.startswith("gs://"):
        bucket_name = bucket_name[5:]
    try:
        storage_client.get_bucket(bucket_name)
        logging.info(f"Bucket {bucket_name} already exists")
    except exceptions.NotFound:
        bucket = storage_client.create_bucket(
            bucket_name,
            location=location,
            project=project,
        )
        logging.info(f"Created bucket {bucket.name} in {bucket.location}")
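Before touching the GCS client, `create_bucket_if_not_exists` normalizes the bucket name so callers can pass either a `gs://` URI or a bare name. A standalone sketch of just that normalization step (the helper name `normalize_bucket_name` is illustrative, not in the file):

```python
def normalize_bucket_name(bucket_name: str) -> str:
    # Same normalization create_bucket_if_not_exists performs: accept either
    # "gs://my-bucket" or a bare "my-bucket" and return the bare name.
    # "gs://" is 5 characters long, hence the [5:] slice.
    if bucket_name.startswith("gs://"):
        bucket_name = bucket_name[5:]
    return bucket_name


print(normalize_bucket_name("gs://my-logs-bucket"))  # → my-logs-bucket
```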
rag_agent/app_utils/telemetry.py
ADDED
@@ -0,0 +1,45 @@
# Copyright 2025 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import logging
import os


def setup_telemetry() -> str | None:
    """Configure OpenTelemetry and GenAI telemetry with GCS upload."""
    os.environ.setdefault("GOOGLE_CLOUD_AGENT_ENGINE_ENABLE_TELEMETRY", "true")

    bucket = os.environ.get("LOGS_BUCKET_NAME")
    capture_content = os.environ.get(
        "OTEL_INSTRUMENTATION_GENAI_CAPTURE_MESSAGE_CONTENT", "false"
    )
    if bucket and capture_content != "false":
        logging.info("Setting up GenAI telemetry with GCS upload...")
        os.environ["OTEL_INSTRUMENTATION_GENAI_CAPTURE_MESSAGE_CONTENT"] = "NO_CONTENT"
        os.environ.setdefault("OTEL_INSTRUMENTATION_GENAI_UPLOAD_FORMAT", "jsonl")
        os.environ.setdefault("OTEL_INSTRUMENTATION_GENAI_COMPLETION_HOOK", "upload")
        os.environ.setdefault(
            "OTEL_SEMCONV_STABILITY_OPT_IN", "gen_ai_latest_experimental"
        )
        commit_sha = os.environ.get("COMMIT_SHA", "dev")
        os.environ.setdefault(
            "OTEL_RESOURCE_ATTRIBUTES",
            f"service.namespace=adk-rag-agent,service.version={commit_sha}",
        )
        path = os.environ.get("GENAI_TELEMETRY_PATH", "completions")
        os.environ.setdefault(
            "OTEL_INSTRUMENTATION_GENAI_UPLOAD_BASE_PATH",
            f"gs://{bucket}/{path}",
        )

    return bucket
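Note that `setup_telemetry` writes most of its settings with `os.environ.setdefault`, so any value an operator exports before startup wins over the function's defaults. A minimal sketch of that precedence (the `"parquet"` value is purely illustrative):

```python
import os

# Start from a clean slate for this demo.
for var in (
    "OTEL_INSTRUMENTATION_GENAI_UPLOAD_FORMAT",
    "OTEL_INSTRUMENTATION_GENAI_COMPLETION_HOOK",
):
    os.environ.pop(var, None)

# An operator-provided value set before setup runs...
os.environ["OTEL_INSTRUMENTATION_GENAI_UPLOAD_FORMAT"] = "parquet"
# ...is left untouched by setdefault (no-op: key already present),
os.environ.setdefault("OTEL_INSTRUMENTATION_GENAI_UPLOAD_FORMAT", "jsonl")
# ...while an unset key receives the default (applied: key was absent).
os.environ.setdefault("OTEL_INSTRUMENTATION_GENAI_COMPLETION_HOOK", "upload")

print(os.environ["OTEL_INSTRUMENTATION_GENAI_UPLOAD_FORMAT"])   # → parquet
print(os.environ["OTEL_INSTRUMENTATION_GENAI_COMPLETION_HOOK"])  # → upload
```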
rag_agent/app_utils/typing.py
ADDED
@@ -0,0 +1,33 @@
# Copyright 2025 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import uuid
from typing import (
    Literal,
)

from pydantic import (
    BaseModel,
    Field,
)


class Feedback(BaseModel):
    """Represents feedback for a conversation."""

    score: int | float
    text: str | None = ""
    log_type: Literal["feedback"] = "feedback"
    service_name: Literal["adk-rag-agent"] = "adk-rag-agent"
    user_id: str = Field(default_factory=lambda: str(uuid.uuid4()))
    session_id: str = Field(default_factory=lambda: str(uuid.uuid4()))
rag_agent/config.py
CHANGED
@@ -20,7 +20,9 @@ LOCATION = os.environ.get("GOOGLE_CLOUD_LOCATION")
 # RAG settings
 DEFAULT_CHUNK_SIZE = 512
 DEFAULT_CHUNK_OVERLAP = 100
-
+DEFAULT_BUSINESS_TOP_K = 1
+DEFAULT_PRODUCT_TOP_K = 3
+DEFAULT_SERVICE_TOP_K = 3
 DEFAULT_DISTANCE_THRESHOLD = 0.5
 DEFAULT_EMBEDDING_MODEL = "publishers/google/models/text-embedding-005"
 DEFAULT_EMBEDDING_REQUESTS_PER_MIN = 1000
rag_agent/tools/rag_query.py
CHANGED
@@ -2,111 +2,105 @@
 """
 Tool for querying Vertex AI RAG corpora and retrieving relevant information.
 """

-import logging
-
-from google.adk.tools.tool_context import ToolContext
 from vertexai import rag

 from ..config import (
     DEFAULT_DISTANCE_THRESHOLD,
+    DEFAULT_BUSINESS_TOP_K,
+    DEFAULT_PRODUCT_TOP_K,
+    DEFAULT_SERVICE_TOP_K
 )
-from .utils import check_corpus_exists, get_corpus_resource_name


 def rag_query(
-    corpus_name: str,
     query: str,
-) -> dict:
-    """
-
-    Args:
-        query (str): The text query to search for in the corpus
-        tool_context (ToolContext): The tool context
-
-    Returns:
-
-    try:
-        # Check if the corpus exists
-        if not check_corpus_exists(corpus_name, tool_context):
-            return {
-                "status": "error",
-                "message": f"Corpus '{corpus_name}' does not exist. Please create it first using the create_corpus tool.",
-                "query": query,
-                "corpus_name": corpus_name,
-            }
-
-        # Get the corpus resource name
-        corpus_resource_name = get_corpus_resource_name(corpus_name)
-
-        print("Performing retrieval query...")
-        response = rag.retrieval_query(
-            rag_resources=[
-                rag.RagResource(
-                    rag_corpus=
-                )
-            ],
-            text=query,
-            rag_retrieval_config=rag_retrieval_config,
-        )
-
-                "source_uri"
-                "score": ctx_group.score if hasattr(ctx_group, "score") else 0.0,
-            }
-            results.append(result)
-
-        # If we didn't find any results
-        if not results:
-            return {
-                "status": "warning",
-                "message": f"No results found in corpus '{corpus_name}' for query: '{query}'",
-                "query": query,
-                "corpus_name": corpus_name,
-                "results": [],
-                "results_count": 0,
-            }
-
-            "message": f"Successfully queried corpus '{corpus_name}'",
-            "query": query,
-            "corpus_name": corpus_name,
-            "results": results,
-            "results_count": len(results),
-        }
-
-    except Exception as e:
-        error_msg = f"Error querying corpus: {str(e)}"
-        logging.error(error_msg)
-        return {
-            "status": "error",
-            "message": error_msg,
-            "query": query,
-            "corpus_name": corpus_name,
-        }
+    query: str,
+    type: str,
+) -> list[dict]:
+    """
+    Executes a RAG retrieval query against a predefined Vertex AI corpus.
+
+    The query is routed to one of three document groups (business, product, or
+    service) based on the specified type. The function retrieves the top
+    matching contexts and returns them as a list of result dictionaries.
+
+    Args:
+        query: The user's question to retrieve information for.
+        type: Query category. One of {"business", "product", "service"}.
+
+    Returns:
+        A list of dictionaries containing processed retrieval results,
+        including text, source metadata, and relevance scores.
+
+    Raises:
+        ValueError: If `type` is not a valid category.
+    """
+    corpus = rag.get_corpus(
+        "projects/38827506989/locations/asia-southeast1/ragCorpora/3458764513820540928"
+    )
+    if type == "business":
+        response = rag.retrieval_query(
+            rag_resources=[
+                rag.RagResource(
+                    rag_corpus=corpus.name,
+                    rag_file_ids=["5572399974328298423"],
+                )
+            ],
+            rag_retrieval_config=rag.RagRetrievalConfig(
+                top_k=DEFAULT_BUSINESS_TOP_K,
+                filter=rag.Filter(
+                    vector_distance_threshold=DEFAULT_DISTANCE_THRESHOLD,
+                ),
+            ),
+            text=query,
+        )
+        print(response)
+    elif type == "product":
+        response = rag.retrieval_query(
+            rag_resources=[
+                rag.RagResource(
+                    rag_corpus=corpus.name,
+                    rag_file_ids=["5572400164943779227"],
+                )
+            ],
+            rag_retrieval_config=rag.RagRetrievalConfig(
+                top_k=DEFAULT_PRODUCT_TOP_K,
+                filter=rag.Filter(
+                    vector_distance_threshold=DEFAULT_DISTANCE_THRESHOLD,
+                ),
+            ),
+            text=query,
+        )
+        print(response)
+    elif type == "service":
+        response = rag.retrieval_query(
+            rag_resources=[
+                rag.RagResource(
+                    rag_corpus=corpus.name,
+                    rag_file_ids=["5572400273133586357"],
+                )
+            ],
+            rag_retrieval_config=rag.RagRetrievalConfig(
+                top_k=DEFAULT_SERVICE_TOP_K,
+                filter=rag.Filter(
+                    vector_distance_threshold=DEFAULT_DISTANCE_THRESHOLD,
+                ),
+            ),
+            text=query,
+        )
+        print(response)
+    else:
+        raise ValueError(
+            f"Invalid type {type!r}; expected 'business', 'product', or 'service'."
+        )
+
+    results = []
+    if hasattr(response, "contexts") and response.contexts:
+        for ctx_group in response.contexts.contexts:
+            result = {
+                "source_uri": (
+                    ctx_group.source_uri if hasattr(ctx_group, "source_uri") else ""
+                ),
+                "source_name": (
+                    ctx_group.source_display_name
+                    if hasattr(ctx_group, "source_display_name")
+                    else ""
+                ),
+                "text": ctx_group.text if hasattr(ctx_group, "text") else "",
+                "score": ctx_group.score if hasattr(ctx_group, "score") else 0.0,
+            }
+            results.append(result)
+    print(results)
+    return results
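The three `if/elif` branches of the new `rag_query` differ only in the hard-coded RAG file id and the `top_k` constant. As a hypothetical sketch (not how the tool is actually structured), the dispatch can be summarized as a routing table; the file ids below are the ones hard-coded in the tool, and the `top_k` values mirror the new defaults in `rag_agent/config.py`:

```python
# top_k defaults mirrored from rag_agent/config.py
DEFAULT_BUSINESS_TOP_K = 1
DEFAULT_PRODUCT_TOP_K = 3
DEFAULT_SERVICE_TOP_K = 3

# query type -> (RAG file id, top_k), as hard-coded in rag_query's branches
ROUTING = {
    "business": ("5572399974328298423", DEFAULT_BUSINESS_TOP_K),
    "product": ("5572400164943779227", DEFAULT_PRODUCT_TOP_K),
    "service": ("5572400273133586357", DEFAULT_SERVICE_TOP_K),
}


def route(type: str) -> tuple[str, int]:
    """Return the (file_id, top_k) pair a query of this type would use."""
    if type not in ROUTING:
        raise ValueError(f"Invalid type {type!r}; expected one of {sorted(ROUTING)}")
    return ROUTING[type]


print(route("product"))  # → ('5572400164943779227', 3)
```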
requirements.txt
CHANGED
@@ -3,3 +3,5 @@ google-cloud-storage==2.19.0
 google-genai==1.14.0
 gitpython==3.1.40
 google-adk==0.5.0
+gradio==5.8.0
+python-dotenv==1.0.0
run_gradio.py
ADDED
@@ -0,0 +1,22 @@
#!/usr/bin/env python3
"""
Quick launcher for Gradio RAG Agent Chat UI
"""

import sys
import os

# Add the parent directory to the path to import rag_agent modules if needed
sys.path.insert(0, os.path.dirname(os.path.abspath(__file__)))

# Import and run the gradio app
if __name__ == "__main__":
    from gradio_app import demo

    print("=" * 50)
    print("🤖 RAG Agent Chat Interface")
    print("=" * 50)
    print("\n📖 Starting Gradio interface...")
    print("🌐 Open your browser and navigate to the URL shown below\n")

    demo.launch()
setup_gradio.sh
ADDED
@@ -0,0 +1,57 @@
#!/bin/bash

# Setup script for Gradio RAG Agent Chat UI

echo "🚀 Setting up Gradio RAG Agent Chat UI..."
echo ""

# Check if Python is installed
if ! command -v python3 &> /dev/null; then
    echo "❌ Python 3 is not installed. Please install Python 3.10 or higher."
    exit 1
fi

echo "✅ Python 3 found: $(python3 --version)"
echo ""

# Check if .env file exists
if [ ! -f "rag_agent/.env" ]; then
    echo "⚠️  Warning: rag_agent/.env file not found"
    echo "Please create it with the following content:"
    echo ""
    echo "GOOGLE_CLOUD_PROJECT=your-project-id"
    echo "GOOGLE_CLOUD_LOCATION=us-central1"
    echo "GOOGLE_GENAI_USE_VERTEXAI=true"
    echo ""
    exit 1
fi

echo "✅ Environment file found"
echo ""

# Install requirements
echo "📦 Installing Python dependencies..."
pip install -r requirements.txt

if [ $? -eq 0 ]; then
    echo ""
    echo "✅ Installation complete!"
    echo ""
    echo "📋 Next steps:"
    echo "1. Authenticate with Google Cloud:"
    echo "   gcloud auth application-default login"
    echo ""
    echo "2. Deploy your agent (if not already deployed):"
    echo "   make deploy"
    echo ""
    echo "3. Run the Gradio app:"
    echo "   python gradio_app.py"
    echo ""
    echo "   OR"
    echo ""
    echo "   python run_gradio.py"
    echo ""
else
    echo "❌ Installation failed. Please check the error messages above."
    exit 1
fi
starter_pack_README.md
ADDED
|
@@ -0,0 +1,108 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
# adk-rag-agent
|
| 2 |
+
|
| 3 |
+
|
| 4 |
+
Agent generated with [`googleCloudPlatform/agent-starter-pack`](https://github.com/GoogleCloudPlatform/agent-starter-pack) version `0.21.0`
|
| 5 |
+
|
| 6 |
+
## Project Structure
|
| 7 |
+
|
| 8 |
+
This project is organized as follows:
|
| 9 |
+
|
| 10 |
+
```
|
| 11 |
+
adk-rag-agent/
|
| 12 |
+
├── rag_agent/ # Core application code
|
| 13 |
+
│ ├── agent.py # Main agent logic
|
| 14 |
+
│ ├── agent_engine_app.py # Agent Engine application logic
|
| 15 |
+
│ └── app_utils/ # App utilities and helpers
|
| 16 |
+
├── .cloudbuild/ # CI/CD pipeline configurations for Google Cloud Build
|
| 17 |
+
├── deployment/ # Infrastructure and deployment scripts
|
| 18 |
+
├── notebooks/ # Jupyter notebooks for prototyping and evaluation
|
| 19 |
+
├── tests/ # Unit, integration, and load tests
|
| 20 |
+
├── Makefile # Makefile for common commands
|
| 21 |
+
├── GEMINI.md # AI-assisted development guide
|
| 22 |
+
└── pyproject.toml # Project dependencies and configuration
|
| 23 |
+
```
|
| 24 |
+
|
| 25 |
+
## Requirements
|
| 26 |
+
|
| 27 |
+
Before you begin, ensure you have:
|
| 28 |
+
- **uv**: Python package manager (used for all dependency management in this project) - [Install](https://docs.astral.sh/uv/getting-started/installation/) ([add packages](https://docs.astral.sh/uv/concepts/dependencies/) with `uv add <package>`)
|
| 29 |
+
- **Google Cloud SDK**: For GCP services - [Install](https://cloud.google.com/sdk/docs/install)
|
| 30 |
+
- **Terraform**: For infrastructure deployment - [Install](https://developer.hashicorp.com/terraform/downloads)
|
| 31 |
+
- **make**: Build automation tool - [Install](https://www.gnu.org/software/make/) (pre-installed on most Unix-based systems)
|
| 32 |
+
|
| 33 |
+
|
| 34 |
+
## Quick Start (Local Testing)
|
| 35 |
+
|
| 36 |
+
Install required packages and launch the local development environment:
|
| 37 |
+
|
| 38 |
+
```bash
|
| 39 |
+
make install && make playground
|
| 40 |
+
```
|
| 41 |
+
|
| 42 |
+
## Commands
|
| 43 |
+
|
| 44 |
+
| Command | Description |
|
| 45 |
+
| -------------------- | ------------------------------------------------------------------------------------------- |
|
| 46 |
+
| `make install` | Install all required dependencies using uv |
|
| 47 |
+
| `make playground` | Launch local development environment for testing agent |
|
| 48 |
+
| `make deploy` | Deploy agent to Agent Engine |
|
| 49 |
+
| `make register-gemini-enterprise` | Register deployed agent to Gemini Enterprise ([docs](https://googlecloudplatform.github.io/agent-starter-pack/cli/register_gemini_enterprise.html)) |
|
| 50 |
+
| `make test` | Run unit and integration tests |
|
| 51 |
+
| `make lint` | Run code quality checks (codespell, ruff, mypy) |
|
| 52 |
+
| `make setup-dev-env` | Set up development environment resources using Terraform |
|
| 53 |
+
|
| 54 |
+
For full command options and usage, refer to the [Makefile](Makefile).

## Usage

This template follows a "bring your own agent" approach - you focus on your business logic, and the template handles everything else (UI, infrastructure, deployment, monitoring).

1. **Prototype:** Build your Generative AI Agent using the intro notebooks in `notebooks/` for guidance. Use Vertex AI Evaluation to assess performance.
2. **Integrate:** Import your agent into the app by editing `rag_agent/agent.py`.
3. **Test:** Explore your agent's functionality in the local playground with `make playground`. The playground automatically reloads your agent on code changes.
4. **Deploy:** Set up and initiate the CI/CD pipelines, customizing tests as necessary. Refer to the [deployment section](#deployment) for comprehensive instructions. For streamlined infrastructure deployment, run `uvx agent-starter-pack setup-cicd` (see the [`agent-starter-pack setup-cicd` CLI command](https://googlecloudplatform.github.io/agent-starter-pack/cli/setup_cicd.html)). GitHub is currently supported, with both Google Cloud Build and GitHub Actions as CI/CD runners.
5. **Monitor:** Track performance and gather insights using BigQuery telemetry data, Cloud Logging, and Cloud Trace to iterate on your application.
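
For step 2, here is a minimal sketch of what `rag_agent/agent.py` might look like, assuming the `google-adk` package's `Agent` class. The `echo_query` tool is a hypothetical placeholder for illustration only; the real tools live in `rag_agent/tools/`:

```python
# Minimal sketch only - the actual rag_agent/agent.py wires in the RAG corpus
# tools. `echo_query` is a hypothetical stand-in, not part of the template.
try:
    from google.adk.agents import Agent
except ImportError:  # allows reading/testing this sketch without google-adk installed
    Agent = None


def echo_query(query: str) -> str:
    """Hypothetical tool: echoes the query back. Real tools do RAG lookups."""
    return f"received query: {query}"


if Agent is not None:
    root_agent = Agent(
        name="rag_agent",
        model="gemini-2.0-flash",
        instruction="Answer questions using the attached RAG corpus tools.",
        tools=[echo_query],
    )
```

The playground discovers `root_agent` from this module, so keeping the variable name stable is what lets `make playground` pick up your changes on reload.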

The project includes a `GEMINI.md` file that provides context to AI tools such as the Gemini CLI when you ask questions about your template.

## Deployment

> **Note:** For a streamlined, one-command deployment of the entire CI/CD pipeline and infrastructure using Terraform, use the [`agent-starter-pack setup-cicd` CLI command](https://googlecloudplatform.github.io/agent-starter-pack/cli/setup_cicd.html). GitHub is currently supported, with both Google Cloud Build and GitHub Actions as CI/CD runners.

### Dev Environment

You can test deployment to a dev environment with the following commands:

```bash
# Point gcloud at your dev project, then deploy the agent
gcloud config set project <your-dev-project-id>
make deploy
```

The repository includes a Terraform configuration for setting up the dev Google Cloud project. See [deployment/README.md](deployment/README.md) for instructions.

### Production Deployment

The repository also includes a Terraform configuration for setting up a production Google Cloud project. Refer to [deployment/README.md](deployment/README.md) for detailed instructions on deploying the infrastructure and application.

## Monitoring and Observability

The application uses [OpenTelemetry GenAI instrumentation](https://opentelemetry.io/docs/specs/semconv/gen-ai/) for comprehensive observability. Telemetry data is automatically captured and exported to:

- **Google Cloud Storage**: GenAI telemetry in JSONL format for efficient querying
- **BigQuery**: External tables and linked datasets provide immediate access to telemetry data via SQL queries
- **Cloud Logging**: A dedicated logging bucket with 10-year retention for GenAI operation logs
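
As a sketch of how the GCS JSONL export can be post-processed, the records are one JSON object per line and can be aggregated with plain Python. The field names below are illustrative assumptions only; the actual schema follows the OpenTelemetry GenAI semantic conventions and may differ:

```python
import json

# Two fabricated telemetry records for illustration; real records come from
# the GCS export and may use different attribute names.
sample_jsonl = "\n".join([
    json.dumps({"event.name": "gen_ai.client.inference",
                "gen_ai.request.model": "gemini-2.0-flash",
                "gen_ai.usage.output_tokens": 42}),
    json.dumps({"event.name": "gen_ai.client.inference",
                "gen_ai.request.model": "gemini-2.0-flash",
                "gen_ai.usage.output_tokens": 17}),
])


def total_output_tokens(jsonl_text: str) -> int:
    """Sum output tokens across telemetry records in a JSONL blob."""
    return sum(
        json.loads(line).get("gen_ai.usage.output_tokens", 0)
        for line in jsonl_text.splitlines()
        if line.strip()
    )


print(total_output_tokens(sample_jsonl))  # 59
```

For anything beyond quick spot checks, prefer the BigQuery external tables below, which query the same data with SQL.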

**Query your telemetry data:**

```bash
# Example: query recent completions
bq query --use_legacy_sql=false \
  "SELECT * FROM \`adk-rag-agent_telemetry.completions\` LIMIT 10"
```

For detailed setup instructions, example queries, testing in dev, and optional dashboard visualization, see the [starter pack observability guide](https://googlecloudplatform.github.io/agent-starter-pack/guide/observability.html).
test.ipynb
ADDED
@@ -0,0 +1,118 @@
{
 "cells": [
  {
   "cell_type": "code",
   "execution_count": 1,
   "id": "03b6eee8",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Testing endpoint: https://asia-southeast1-aiplatform.googleapis.com/v1/projects/angular-stacker-473507-t1/locations/asia-southeast1/reasoningEngines/734755242331078656:query\n",
      "Payload: {\n",
      "  \"input\": {\n",
      "    \"query\": \"List all available RAG corpora\"\n",
      "  }\n",
      "}\n",
      "\n",
      "Sending request...\n",
      "\n",
      "Status Code: 400\n",
      "Response Headers: {'Vary': 'Origin, X-Origin, Referer', 'Content-Type': 'application/json; charset=UTF-8', 'Content-Encoding': 'gzip', 'Date': 'Thu, 20 Nov 2025 16:23:57 GMT', 'Server': 'scaffolding on HTTPServer2', 'X-XSS-Protection': '0', 'X-Frame-Options': 'SAMEORIGIN', 'X-Content-Type-Options': 'nosniff', 'Alt-Svc': 'h3=\":443\"; ma=2592000,h3-29=\":443\"; ma=2592000', 'Transfer-Encoding': 'chunked'}\n",
      "\n",
      "Response Body:\n",
      "{\n",
      "  \"error\": {\n",
      "    \"code\": 400,\n",
      "    \"message\": \"Reasoning Engine Execution failed.\\nPlease refer to our documentation (https://cloud.google.com/vertex-ai/generative-ai/docs/agent-engine/troubleshooting/use) for checking logs and other troubleshooting tips.\\nError Details: {\\\"detail\\\":\\\"Agent Engine Error: Default method `query` not found. Available methods are: ['async_delete_session', 'async_search_memory', 'async_add_session_to_memory', 'list_sessions', 'register_feedback', 'async_list_sessions', 'get_session', 'async_get_session', 'delete_session', 'create_session', 'async_create_session'].\\\"}\",\n",
      "    \"status\": \"FAILED_PRECONDITION\"\n",
      "  }\n",
      "}\n"
     ]
    }
   ],
   "source": [
    "import requests\n",
    "import subprocess\n",
    "import json\n",
    "\n",
    "# Get access token\n",
    "def get_access_token():\n",
    "    result = subprocess.run(\n",
    "        ['gcloud', 'auth', 'print-access-token'],\n",
    "        capture_output=True,\n",
    "        text=True\n",
    "    )\n",
    "    return result.stdout.strip()\n",
    "\n",
    "# Test the Reasoning Engine endpoint\n",
    "def test_reasoning_engine():\n",
    "    url = \"https://asia-southeast1-aiplatform.googleapis.com/v1/projects/angular-stacker-473507-t1/locations/asia-southeast1/reasoningEngines/734755242331078656:query\"\n",
    "\n",
    "    token = get_access_token()\n",
    "\n",
    "    headers = {\n",
    "        \"Authorization\": f\"Bearer {token}\",\n",
    "        \"Content-Type\": \"application/json\"\n",
    "    }\n",
    "\n",
    "    # Test payload - adjust based on your agent's expected input\n",
    "    payload = {\n",
    "        \"input\": {\n",
    "            \"query\": \"List all available RAG corpora\"\n",
    "        }\n",
    "    }\n",
    "\n",
    "    print(f\"Testing endpoint: {url}\")\n",
    "    print(f\"Payload: {json.dumps(payload, indent=2)}\")\n",
    "    print(\"\\nSending request...\")\n",
    "\n",
    "    response = requests.post(url, headers=headers, json=payload)\n",
    "\n",
    "    print(f\"\\nStatus Code: {response.status_code}\")\n",
    "    print(f\"Response Headers: {dict(response.headers)}\")\n",
    "    print(\"\\nResponse Body:\")\n",
    "    print(json.dumps(response.json(), indent=2))\n",
    "\n",
    "    return response\n",
    "\n",
    "if __name__ == \"__main__\":\n",
    "    try:\n",
    "        response = test_reasoning_engine()\n",
    "    except Exception as e:\n",
    "        print(f\"Error: {e}\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "2878a9fc",
   "metadata": {},
   "outputs": [],
   "source": []
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "adk-rag-agent",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.13.5"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}
test_gradio_setup.py
ADDED
@@ -0,0 +1,110 @@
"""
Simple test script to verify the Gradio setup.
Run this before launching the full app to check configuration.
"""

import os
import sys
from dotenv import load_dotenv

print("=" * 60)
print("🧪 Gradio Setup Verification")
print("=" * 60)

# Test 1: Check environment file
print("\n1️⃣ Checking environment file...")
env_path = os.path.join(os.path.dirname(__file__), "rag_agent", ".env")
if os.path.exists(env_path):
    print(f"   ✅ Found .env file at: {env_path}")
    load_dotenv(env_path)
else:
    print(f"   ❌ .env file not found at: {env_path}")
    sys.exit(1)

# Test 2: Check environment variables
print("\n2️⃣ Checking environment variables...")
PROJECT_ID = os.environ.get("GOOGLE_CLOUD_PROJECT")
LOCATION = os.environ.get("GOOGLE_CLOUD_LOCATION")

if PROJECT_ID:
    print(f"   ✅ GOOGLE_CLOUD_PROJECT: {PROJECT_ID}")
else:
    print("   ❌ GOOGLE_CLOUD_PROJECT not set")
    sys.exit(1)

if LOCATION:
    print(f"   ✅ GOOGLE_CLOUD_LOCATION: {LOCATION}")
else:
    print("   ⚠️ GOOGLE_CLOUD_LOCATION not set, defaulting to 'us-central1'")
    LOCATION = "us-central1"

# Test 3: Check Python packages
print("\n3️⃣ Checking required packages...")
required_packages = {
    "gradio": "Gradio UI framework",
    "vertexai": "Vertex AI SDK",
    "google.cloud.aiplatform_v1beta1": "AI Platform Client",
    "dotenv": "Environment loader"
}

all_packages_ok = True
for package, description in required_packages.items():
    try:
        __import__(package.replace("-", "_"))
        print(f"   ✅ {package}: {description}")
    except ImportError:
        print(f"   ❌ {package}: NOT INSTALLED - {description}")
        all_packages_ok = False

if not all_packages_ok:
    print("\n   ⚠️ Missing packages. Install with:")
    print("   pip install -r requirements.txt")
    sys.exit(1)

# Test 4: Check Google Cloud authentication
print("\n4️⃣ Checking Google Cloud authentication...")
try:
    from google.auth import default
    credentials, project = default()
    print(f"   ✅ Authenticated with project: {project}")
except Exception as e:
    print(f"   ❌ Authentication error: {e}")
    print("   Run: gcloud auth application-default login")
    sys.exit(1)

# Test 5: Try to list agents
print("\n5️⃣ Testing Agent Engine connection...")
try:
    from google.cloud import aiplatform_v1beta1 as aiplatform

    client = aiplatform.AgentEnginesServiceClient(
        client_options={"api_endpoint": f"{LOCATION}-aiplatform.googleapis.com"}
    )
    parent = f"projects/{PROJECT_ID}/locations/{LOCATION}"
    request = aiplatform.ListAgentEnginesRequest(parent=parent)

    agents = list(client.list_agent_engines(request=request))

    if agents:
        print(f"   ✅ Successfully connected! Found {len(agents)} agent(s):")
        for agent in agents:
            display_name = agent.display_name or agent.name.split("/")[-1]
            print(f"      • {display_name}")
    else:
        print("   ⚠️ Connected, but no agents found")
        print("   Deploy an agent with: make deploy")

except Exception as e:
    print(f"   ❌ Connection error: {e}")
    print("   Check your project ID and location")
    sys.exit(1)

# All tests passed
print("\n" + "=" * 60)
print("✅ All checks passed! Ready to launch Gradio app")
print("=" * 60)
print("\nRun the app with:")
print("   python gradio_app_v2.py")
print("\nOr:")
print("   python run_gradio.py")
print()