Spaces · Sleeping

jens.luecke committed · commit f0ca218 · Parent: 64133ea

refactor: cleanup and update README.md with new features and usage instructions
Files changed:

- .env.example (+0 −14)
- README.md (+84 −15)
- app.py → src/app.py (+5 −5)
- kiss_agent.py → src/kiss_agent.py (+1 −1)
- settings.py → src/settings.py (+3 −147)
- ui_helpers.py → src/ui_helpers.py (+10 −50)
- utils.py → src/utils.py (+0 −0)
- start.sh (+1 −1)
.env.example (CHANGED)

```diff
@@ -6,17 +6,3 @@ API_KEY=
 GRADIO_HOST=127.0.0.1
 GRADIO_PORT=7860
 GRADIO_DEBUG=true
-
-# Planning Agent Settings
-PLANNING_VERBOSITY=1
-MAX_PLANNING_STEPS=10
-
-# Coding Agent Settings
-CODING_VERBOSITY=2
-MAX_CODING_STEPS=20
-CODE_MODEL_ID=Qwen/Qwen2.5-Coder-32B-Instruct
-
-# Testing Agent Settings
-TESTING_VERBOSITY=2
-MAX_TESTING_STEPS=15
-TEST_MODEL_ID=Qwen/Qwen2.5-Coder-32B-Instruct
```
README.md (CHANGED)

```diff
@@ -15,26 +15,95 @@ authors:
 tags:
 - agent-demo-track
 ---
+# 💗 Likable: It's almost lovable...
 
-
+[](https://huggingface.co/spaces/ZwischenholtzW/likable)
+[](https://opensource.org/licenses/Apache-2.0)
 
-
+**Likable is a powerful, real-time AI coding assistant that allows you to develop and preview Gradio applications through a conversational interface.**
 
-
+Just describe the application you want to build, and watch as the AI agent writes the code, handles dependencies, and spins up a live, interactive preview for you in real-time. It's the fastest way to go from idea to a working Gradio app.
 
-
-- **Live Preview**: See your applications running instantly in an iframe
-- **Multiple AI Providers**: Support for Anthropic, OpenAI, Mistral, and more
-- **File Management**: Edit and save files directly in the interface
-- **API Key Management**: Secure configuration for different AI providers
+This project is a submission for the [Gradio Agents & MCP Hackathon 2025](https://huggingface.co/Agents-MCP-Hackathon).
 
-
+---
+
+## ✨ Features
+
+- **🤖 Conversational AI Development**: Simply chat with the agent to build, modify, and extend your Gradio applications.
+- **⚡ Real-time Live Preview**: The agent automatically runs your application in a sandboxed environment, and you can interact with it instantly in the "Preview" tab.
+- **📝 In-App Code Editor**: View, edit, and save the code generated by the agent directly within the "Code" tab.
+- **🛠️ Autonomous Tool Use**: The agent can:
+  - Create and edit Python files.
+  - Install necessary Python packages using `pip`.
+  - List and view files to understand the project context.
+  - Test the generated code to ensure it runs without errors.
+- **🔐 Secure API Key Management**: Easily configure API keys for various LLM providers through the "Settings" tab.
+- **🔄 Multi-Provider Support**: Powered by `LiteLLM`, allowing for integration with OpenAI, Anthropic, Mistral, Qwen, and more.
+
+---
+
+## 🚀 Getting Started (Local Development)
+
+Likable is designed to run in a Docker container, mirroring its Hugging Face Spaces environment.
+
+**Prerequisites:**
+- [Docker](https://www.docker.com/get-started) installed on your system.
+- Git.
+
+**1. Clone the repository:**
+```bash
+git clone https://huggingface.co/spaces/ZwischenholtzW/likable
+cd likable
+```
+
+**2. Create your environment file:**
+Create a `.env` file in the root of the project. This is where you'll put your API key.
+```bash
+cp .env.example .env
+```
+Now, open `.env` and add your API key:
+```dotenv
+# .env
+API_KEY="your_secret_api_key_here"
+MODEL_ID="anthropic/claude-sonnet-4-20250514"
+```
+
+**3. Build and run the Docker container:**
+```bash
+docker build -t likable-app .
+docker run -p 7860:7860 --env-file .env likable-app
+```
+
+**4. Open the application:**
+Navigate to [http://127.0.0.1:7860](http://127.0.0.1:7860) in your web browser.
 
-
-
-
-
+---
+
+## 🛠️ How It Works
+
+Likable uses a `smolagents` based AI agent specifically prompted for Gradio development. The workflow is as follows:
+
+1. **User Prompt**: You provide a task in the chat interface (e.g., "Create a simple calculator app").
+2. **Agent Execution**: The agent receives the prompt and uses its available tools (`create_new_file`, `python_editor`, `install_package`, `test_app_py`) to accomplish the task. All generated code is created inside a secure `sandbox` directory.
+3. **Code Generation**: The agent writes or modifies an `app.py` file within the sandbox. It is critically instructed to include the necessary boilerplate to make the app launchable.
+4. **Live Preview Subprocess**: The main application detects changes to `sandbox/app.py` and launches it in a separate, isolated subprocess.
+5. **Iframe Display**: The running Gradio app is served on an internal port and displayed to the user through an iframe in the "Preview" tab, with cache-busting to ensure the latest version is always shown.
+
+This cycle repeats as you provide more instructions, allowing for iterative and interactive development.
+
+---
+
+## 💻 Technology Stack
+
+- **Backend & UI**: [Gradio](https://www.gradio.app/)
+- **AI Agent Framework**: [smol-agents](https://github.com/smol-ai/developer)
+- **LLM Integration**: [LiteLLM](https://github.com/BerriAI/litellm)
+- **Containerization**: [Docker](https://www.docker.com/)
+- **Code Quality**: `ruff` and `pre-commit`
+
+---
 
-##
+## 📜 License
 
-This
+This project is licensed under the [Apache 2.0 License](LICENSE).
```
app.py → src/app.py (RENAMED)

```diff
@@ -10,8 +10,8 @@ from pathlib import Path
 import gradio as gr
 from smolagents.agents import MultiStepAgent
 
-from ui_helpers import stream_to_gradio
-from utils import load_file
+from .ui_helpers import stream_to_gradio
+from .utils import load_file
 
 preview_process = None
 PREVIEW_PORT = 7861  # Internal port for preview apps
@@ -385,7 +385,7 @@ def get_default_model_for_provider(provider: str) -> str:
     provider_model_map = {
         "Anthropic": "anthropic/claude-sonnet-4-20250514",
         "OpenAI": "openai/gpt-4.1",
-        "Mistral": "mistral/devstral-latest",
+        "Mistral": "mistral/devstral-small-latest",
         "SambaNova": "sambanova/Qwen3-32B",
         "Hugging Face": "huggingface/together/Qwen/Qwen2.5-Coder-32B-Instruct",
     }
@@ -947,7 +947,7 @@ class GradioUI:
 
     def recreate_agent_with_new_model(self, session_state, provider=None):
         """Recreate the agent with updated model configuration."""
-        from kiss_agent import KISSAgent
+        from .kiss_agent import KISSAgent
 
         # Get the new model ID if provider is specified
         if provider and provider != "Hugging Face":
@@ -978,7 +978,7 @@ class GradioUI:
 if __name__ == "__main__":
     import sys
 
-    from kiss_agent import KISSAgent
+    from .kiss_agent import KISSAgent
 
     # Initialize model configuration based on available API keys
     print("🔍 Checking for available API keys...")
```
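The `provider_model_map` hunk above maps each provider name to a default model ID. A minimal sketch of how such a lookup is typically used; the fallback to the Hugging Face entry for unknown providers is an assumption for illustration, not something shown in the diff.

```python
# Provider → default model mapping, as updated in this commit.
provider_model_map = {
    "Anthropic": "anthropic/claude-sonnet-4-20250514",
    "OpenAI": "openai/gpt-4.1",
    "Mistral": "mistral/devstral-small-latest",
    "SambaNova": "sambanova/Qwen3-32B",
    "Hugging Face": "huggingface/together/Qwen/Qwen2.5-Coder-32B-Instruct",
}


def get_default_model_for_provider(provider: str) -> str:
    # Hypothetical fallback: unknown providers get the Hugging Face default.
    return provider_model_map.get(provider, provider_model_map["Hugging Face"])
```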
kiss_agent.py → src/kiss_agent.py (RENAMED)

```diff
@@ -5,7 +5,7 @@ from pathlib import Path
 
 from smolagents import LiteLLMModel, ToolCallingAgent, tool
 
-from settings import settings
+from .settings import settings
 
 PROMPT_TEMPLATE = """You are an expert software developer for Gradio.
 You are given a task to develop a Gradio application and can use all the tools \
```
settings.py → src/settings.py (RENAMED)

```diff
@@ -23,30 +23,11 @@ class Settings:
         self.api_base_url: str | None = os.getenv("API_BASE_URL")
         self.api_key: str | None = os.getenv("API_KEY")
 
-        # Manager Agent Settings
-        self.manager_model_id: str = os.getenv("MANAGER_MODEL_ID", self.model_id)
-        self.manager_verbosity: int = int(os.getenv("MANAGER_VERBOSITY", "1"))
-        self.max_manager_steps: int = int(os.getenv("MAX_MANAGER_STEPS", "15"))
-
-        # Coding Agent Settings
-        self.code_model_id: str = os.getenv("CODE_MODEL_ID", self.model_id)
-        self.coding_verbosity: int = int(os.getenv("CODING_VERBOSITY", "2"))
-        self.max_coding_steps: int = int(os.getenv("MAX_CODING_STEPS", "20"))
-
-        # Testing Agent Settings
-        self.test_model_id: str = os.getenv("TEST_MODEL_ID", self.model_id)
-        self.testing_verbosity: int = int(os.getenv("TESTING_VERBOSITY", "2"))
-        self.max_testing_steps: int = int(os.getenv("MAX_TESTING_STEPS", "15"))
-
         # Application Settings
         self.gradio_host: str = os.getenv("GRADIO_HOST", "127.0.0.1")
         self.gradio_port: int = int(os.getenv("GRADIO_PORT", "7860"))
         self.gradio_debug: bool = os.getenv("GRADIO_DEBUG", "false").lower() == "true"
 
-        # Planning Agent Settings
-        self.planning_verbosity: int = int(os.getenv("PLANNING_VERBOSITY", "1"))
-        self.max_planning_steps: int = int(os.getenv("MAX_PLANNING_STEPS", "10"))
-
         # Validate critical settings
         self._validate()
 
@@ -55,47 +36,12 @@ class Settings:
 
         if not self.api_key:
             print("⚠️  Warning: API_KEY not set in environment variables.")
-            print(
-                "   The planning and coding agents may not work \
-without a valid API key."
-            )
+            print("   The agent may not work without a valid API key.")
             print("   Set it in your .env file or as an environment variable.")
             print()
 
-        if self.manager_verbosity not in [0, 1, 2]:
-            print(
-                f"⚠️  Warning: MANAGER_VERBOSITY={self.manager_verbosity} is not \
-in valid range [0, 1, 2]"
-            )
-            print("   Using default value of 1")
-            self.manager_verbosity = 1
-
-        if self.planning_verbosity not in [0, 1, 2]:
-            print(
-                f"⚠️  Warning: PLANNING_VERBOSITY={self.planning_verbosity} is not \
-in valid range [0, 1, 2]"
-            )
-            print("   Using default value of 1")
-            self.planning_verbosity = 1
-
-        if self.coding_verbosity not in [0, 1, 2]:
-            print(
-                f"⚠️  Warning: CODING_VERBOSITY={self.coding_verbosity} is not \
-in valid range [0, 1, 2]"
-            )
-            print("   Using default value of 2")
-            self.coding_verbosity = 2
-
-        if self.testing_verbosity not in [0, 1, 2]:
-            print(
-                f"⚠️  Warning: TESTING_VERBOSITY={self.testing_verbosity} is not \
-in valid range [0, 1, 2]"
-            )
-            print("   Using default value of 2")
-            self.testing_verbosity = 2
-
     def get_model_config(self) -> dict:
-        """Get model configuration for the
+        """Get model configuration for the agent."""
         config = {"model_id": self.model_id, "api_key": self.api_key}
 
         if self.api_base_url:
@@ -105,28 +51,6 @@ class Settings:
 
         return config
 
-    def get_manager_model_config(self) -> dict:
-        """Get model configuration for the manager agent."""
-        config = {"model_id": self.manager_model_id, "api_key": self.api_key}
-
-        if self.api_base_url:
-            config["api_base_url"] = self.api_base_url
-        if self.api_key:
-            config["api_key"] = self.api_key
-
-        return config
-
-    def get_code_model_config(self) -> dict:
-        """Get model configuration for the coding agent."""
-        config = {"model_id": self.code_model_id, "api_key": self.api_key}
-
-        if self.api_base_url:
-            config["api_base_url"] = self.api_base_url
-        if self.api_key:
-            config["api_key"] = self.api_key
-
-        return config
-
     def get_gradio_config(self) -> dict:
         """Get Gradio launch configuration."""
         return {
@@ -135,65 +59,15 @@ class Settings:
             "debug": self.gradio_debug,
         }
 
-    def get_manager_config(self) -> dict:
-        """Get manager agent configuration."""
-        return {
-            "verbosity_level": self.manager_verbosity,
-            "max_steps": self.max_manager_steps,
-        }
-
-    def get_planning_config(self) -> dict:
-        """Get planning agent configuration."""
-        return {
-            "verbosity_level": self.planning_verbosity,
-            "max_steps": self.max_planning_steps,
-        }
-
-    def get_coding_config(self) -> dict:
-        """Get coding agent configuration."""
-        return {
-            "verbosity_level": self.coding_verbosity,
-            "max_steps": self.max_coding_steps,
-        }
-
-    def get_test_model_config(self) -> dict:
-        """Get model configuration for the testing agent."""
-        config = {"model_id": self.test_model_id, "api_key": self.api_key}
-
-        if self.api_base_url:
-            config["api_base_url"] = self.api_base_url
-        if self.api_key:
-            config["api_key"] = self.api_key
-
-        return config
-
-    def get_testing_config(self) -> dict:
-        """Get testing agent configuration."""
-        return {
-            "verbosity_level": self.testing_verbosity,
-            "max_steps": self.max_testing_steps,
-        }
-
     def __repr__(self) -> str:
         """String representation of settings (excluding sensitive data)."""
         return f"""Settings(
     model_id='{self.model_id}',
-    manager_model_id='{self.manager_model_id}',
-    code_model_id='{self.code_model_id}',
-    test_model_id='{self.test_model_id}',
     api_key={'***' if self.api_key else 'None'},
     api_base_url='{self.api_base_url}',
     gradio_host='{self.gradio_host}',
     gradio_port={self.gradio_port},
-    gradio_debug={self.gradio_debug}
-    manager_verbosity={self.manager_verbosity},
-    max_manager_steps={self.max_manager_steps},
-    planning_verbosity={self.planning_verbosity},
-    max_planning_steps={self.max_planning_steps},
-    coding_verbosity={self.coding_verbosity},
-    max_coding_steps={self.max_coding_steps},
-    testing_verbosity={self.testing_verbosity},
-    max_testing_steps={self.max_testing_steps}
+    gradio_debug={self.gradio_debug}
 )"""
 
 
@@ -220,23 +94,5 @@ if __name__ == "__main__":
     print("Model Config:")
     print(settings.get_model_config())
     print()
-    print("Manager Model Config:")
-    print(settings.get_manager_model_config())
-    print()
-    print("Code Model Config:")
-    print(settings.get_code_model_config())
-    print()
     print("Gradio Config:")
     print(settings.get_gradio_config())
-    print()
-    print("Manager Config:")
-    print(settings.get_manager_config())
-    print()
-    print("Planning Config:")
-    print(settings.get_planning_config())
-    print()
-    print("Coding Config:")
-    print(settings.get_coding_config())
-    print()
-    print("Testing Config:")
-    print(settings.get_testing_config())
```
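This commit collapses the per-agent (manager/planning/coding/testing) settings into a single env-driven `Settings` class. A minimal runnable sketch of that pattern, using the same defaults shown in the hunks above; note that the `server_name`/`server_port` keys in `get_gradio_config` are an assumption here, since the diff only shows the `"debug"` entry of that dict.

```python
import os


class Settings:
    """Env-driven settings: every value has a safe default, so the app
    still boots with an empty environment (validation only warns)."""

    def __init__(self) -> None:
        self.api_key = os.getenv("API_KEY")
        self.gradio_host = os.getenv("GRADIO_HOST", "127.0.0.1")
        self.gradio_port = int(os.getenv("GRADIO_PORT", "7860"))
        self.gradio_debug = os.getenv("GRADIO_DEBUG", "false").lower() == "true"

    def get_gradio_config(self) -> dict:
        # Keys other than "debug" are illustrative assumptions.
        return {
            "server_name": self.gradio_host,
            "server_port": self.gradio_port,
            "debug": self.gradio_debug,
        }
```

Overriding a value is just a matter of exporting the variable (e.g. `GRADIO_PORT=1234`) before the process starts.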
ui_helpers.py → src/ui_helpers.py (RENAMED)

```diff
@@ -218,35 +218,6 @@ def _process_action_step(
     # )
 
 
-def _process_planning_step(
-    step_log: PlanningStep, skip_model_outputs: bool = False
-) -> Generator:
-    """
-    Process a [`PlanningStep`] and yield appropriate gradio.ChatMessage objects.
-
-    Args:
-        step_log ([`PlanningStep`]): PlanningStep to process.
-
-    Yields:
-        `gradio.ChatMessage`: Gradio ChatMessages representing the planning step.
-    """
-    import gradio as gr
-
-    if not skip_model_outputs:
-        yield gr.ChatMessage(
-            role="assistant", content="**Planning step**", metadata={"status": "done"}
-        )
-    yield gr.ChatMessage(
-        role="assistant", content=step_log.plan, metadata={"status": "done"}
-    )
-    yield gr.ChatMessage(
-        role="assistant",
-        content=get_step_footnote_content(step_log, "Planning step"),
-        metadata={"status": "done"},
-    )
-    yield gr.ChatMessage(role="assistant", content="-----", metadata={"status": "done"})
-
-
 def _process_final_answer_step(step_log: FinalAnswerStep) -> Generator:
     """
     Process a [`FinalAnswerStep`] and yield appropriate gradio.ChatMessage objects.
@@ -287,35 +258,24 @@ def _process_final_answer_step(step_log: FinalAnswerStep) -> Generator:
 
 
 def pull_messages_from_step(
-    step_log: ActionStep |
+    step_log: ActionStep | FinalAnswerStep,
     skip_model_outputs: bool = False,
     parent_id: str | None = None,
 ):
-    """
+    """
+    Pulls and yields messages from a given step log.
 
     Args:
-        step_log
-
-
-            This is used for instance when streaming model outputs have
-            already been displayed.
-        parent_id: The ID of the parent message. Only used for nested thoughts.
-            Nested thoughts can be nested by setting the parent_id to the id
-            of the parent thought.
+        step_log (`ActionStep` | `PlanningStep` | `FinalAnswerStep`):
+            The step log to process.
+        skip_model_outputs (`bool`): Whether to skip model outputs.
     """
-    if not _is_package_available("gradio"):
-        raise ModuleNotFoundError(
-            "Please install 'gradio' extra to use the GradioUI: "
-            "`pip install 'smolagents[gradio]'`"
-        )
     if isinstance(step_log, ActionStep):
-        yield from _process_action_step(
-
-
+        yield from _process_action_step(
+            step_log, skip_model_outputs=skip_model_outputs, parent_id=parent_id
+        )
     elif isinstance(step_log, FinalAnswerStep):
         yield from _process_final_answer_step(step_log)
-    else:
-        raise ValueError(f"Unsupported step type: {type(step_log)}")
 
 
 def stream_to_gradio(
@@ -342,7 +302,7 @@ def stream_to_gradio(
         reset=reset_agent_memory,
         additional_args=additional_args,
     ):
-        if isinstance(event, ActionStep |
+        if isinstance(event, ActionStep | FinalAnswerStep):
             intermediate_text = ""
             yield from pull_messages_from_step(
                 event,
```
utils.py → src/utils.py (RENAMED)

File renamed without changes.
start.sh (CHANGED)

```diff
@@ -6,4 +6,4 @@ nginx -c /app/nginx.conf &
 
 # Start the main Gradio app
 echo "Starting Gradio app on port 7862..."
-exec python -u app
+exec python -u -m src.app --server-port 7862 --server-name 0.0.0.0
```
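The switch to `exec python -u -m src.app` is not cosmetic: once the modules moved into `src/` and their imports became relative (`from .utils import load_file`), running `python src/app.py` as a plain script would fail, because a top-level script has no parent package. A self-contained demonstration with a throwaway package (the file contents here are illustrative, not the project's):

```python
import subprocess
import sys
import tempfile
from pathlib import Path

# Build a tiny src/ package that mirrors the layout after this commit.
root = Path(tempfile.mkdtemp())
pkg = root / "src"
pkg.mkdir()
(pkg / "__init__.py").write_text("")
(pkg / "utils.py").write_text("def load_file():\n    return 'ok'\n")
(pkg / "app.py").write_text("from .utils import load_file\nprint(load_file())\n")

# Run as a module: the relative import resolves within the src package.
ok = subprocess.run(
    [sys.executable, "-m", "src.app"], cwd=root, capture_output=True, text=True
)

# Run as a plain script: "attempted relative import with no known parent
# package" makes the process exit with an error.
bad = subprocess.run(
    [sys.executable, "src/app.py"], cwd=root, capture_output=True, text=True
)

print(ok.stdout.strip())    # the module run prints "ok"
print(bad.returncode != 0)  # the script run fails
```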