BolyosCsaba committed
Commit e214abd · 1 Parent(s): 5d1b172

initial commit
Files changed (11)
  1. .gitignore +52 -0
  2. OFPBadWord +1 -0
  3. README.md +343 -12
  4. app.py +422 -0
  5. config/config.yaml +73 -0
  6. requirements.txt +5 -0
  7. src/__init__.py +2 -0
  8. src/chat_agent.py +237 -0
  9. src/llm_client.py +219 -0
  10. src/models.py +154 -0
  11. src/ofp_client.py +116 -0
.gitignore ADDED
@@ -0,0 +1,52 @@
+ # Python
+ __pycache__/
+ *.py[cod]
+ *$py.class
+ *.so
+ .Python
+ build/
+ develop-eggs/
+ dist/
+ downloads/
+ eggs/
+ .eggs/
+ lib/
+ lib64/
+ parts/
+ sdist/
+ var/
+ wheels/
+ *.egg-info/
+ .installed.cfg
+ *.egg
+
+ # Virtual Environment
+ venv/
+ ENV/
+ env/
+
+ # IDE
+ .vscode/
+ .idea/
+ *.swp
+ *.swo
+ *~
+
+ # OS
+ .DS_Store
+ Thumbs.db
+
+ # Logs
+ *.log
+
+ # Environment variables
+ .env
+
+ # Gradio
+ gradio_cached_examples/
+ flagged/
+
+ # Development files
+ private/
+ .claude/
+ .pytest_cache/
OFPBadWord ADDED
@@ -0,0 +1 @@
+ Subproject commit 90f2d18964851953163fa88bac5b749b665f6949
README.md CHANGED
@@ -1,14 +1,345 @@
- ---
- title: Talker
- emoji: 📚
- colorFrom: gray
- colorTo: red
- sdk: gradio
- sdk_version: 6.5.1
- app_file: app.py
- pinned: false
- license: apache-2.0
- short_description: Talkative OFP compliant agent
  ---

- Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
+ # 💬 Talker - OFP Chat Agent
+
+ An AI-powered conversational agent that implements the [Open Floor Protocol (OFP)](https://openfloor.dev) for interoperable multi-agent conversations. Built with Gradio ChatInterface and supports multiple LLM providers including Qwen, OpenAI, and Ollama.
+
+ ## ✨ Features
+
+ - **🤖 Multi-LLM Support**: Works with HuggingFace (Qwen), OpenAI, or local Ollama models
+ - **💬 Clean Chat Interface**: Modern Gradio ChatInterface for seamless conversations
+ - **🔌 OFP Protocol**: Full implementation of Open Floor Protocol v1.0.0
+ - **🔍 Debug Panel**: Real-time status monitoring and activity logs
+ - **⚙️ Configurable**: Easy YAML configuration for all settings
+ - **📡 API Endpoints**: RESTful endpoints for OFP integration
+
+ ## 🏗️ Architecture
+
+ ```
+ ┌─────────────────────────────────────┐
+ │        Gradio Chat Interface        │
+ │     (User chat + OFP debugging)     │
+ └─────────────┬───────────────────────┘
+               │
+               ▼
+ ┌─────────────────────────────────────┐
+ │           FastAPI Backend           │
+ │  ┌──────────────────────────────┐   │
+ │  │ /ofp - OFP Envelope Handler  │   │  ← Receives OFP envelopes
+ │  │ /manifest - Agent Manifest   │   │
+ │  └──────────────┬───────────────┘   │
+ │                 │                   │
+ │                 ▼                   │
+ │  ┌──────────────────────────────┐   │
+ │  │         Chat Agent           │   │
+ │  │  - Process OFP envelopes     │   │
+ │  │  - Manage conversation       │   │
+ │  └──────────────┬───────────────┘   │
+ │                 │                   │
+ │                 ▼                   │
+ │  ┌──────────────────────────────┐   │
+ │  │         LLM Client           │   │
+ │  │  - Qwen / OpenAI / Ollama    │   │
+ │  │  - Generate responses        │   │
+ │  └──────────────────────────────┘   │
+ └─────────────────────────────────────┘
+ ```
+
+ ## 📋 Requirements
+
+ - Python 3.8+
+ - pip package manager
+
+ ## 🚀 Quick Start
+
+ ### 1. Clone and Setup
+
+ ```bash
+ # Clone the repository
+ git clone https://huggingface.co/spaces/BladeSzaSza/Talker
+ cd Talker
+
+ # Create virtual environment
+ python -m venv venv
+ source venv/bin/activate  # On Windows: venv\Scripts\activate
+
+ # Install dependencies
+ pip install -r requirements.txt
+ ```
+
+ ### 2. Configuration
+
+ Edit `config/config.yaml` to configure your LLM:
+
+ ```yaml
+ llm:
+   provider: 'huggingface'  # or 'openai', 'ollama'
+   model: 'Qwen/Qwen2.5-7B-Instruct'
+   system_prompt: |
+     You are Talker, a helpful AI assistant...
+ ```
+
+ ### 3. Set API Keys (if needed)
+
+ ```bash
+ # For HuggingFace
+ export HF_TOKEN="your_huggingface_token"
+
+ # For OpenAI
+ export OPENAI_API_KEY="your_openai_key"
+
+ # Ollama runs locally, no key needed
+ ```
+
+ ### 4. Run
+
+ ```bash
+ # Standard launch
+ python app.py
+
+ # With public share URL
+ python app.py --share
+
+ # Custom port
+ python app.py --port 8080
+ ```
+
+ Access the interface at: `http://localhost:7860`
+
+ ## 🎯 Usage
+
+ ### Chat Interface
+
+ 1. Open the **💬 Chat** tab
+ 2. Type your message and press Enter
+ 3. The AI responds using your configured LLM
+ 4. Conversation history is maintained automatically
+
+ ### OFP Integration
+
+ The agent exposes two OFP-compliant endpoints:
+
+ **Manifest Endpoint:**
+ ```bash
+ curl http://localhost:7860/manifest
+ ```
+
+ **OFP Envelope Endpoint:**
+ ```bash
+ curl -X POST http://localhost:7860/ofp \
+   -H "Content-Type: application/json" \
+   -d '{
+     "openFloor": {
+       "schema": {"version": "1.0.0"},
+       "conversation": {"id": "conv:test"},
+       "sender": {"speakerUri": "tag:test,2025:user"},
+       "events": [{
+         "eventType": "utterance",
+         "parameters": {
+           "dialogEvent": {
+             "id": "de:test",
+             "speakerUri": "tag:test,2025:user",
+             "span": {"startTime": "2025-01-01T00:00:00Z"},
+             "features": {
+               "text": {
+                 "mimeType": "text/plain",
+                 "tokens": [{"value": "Hello!"}]
+               }
+             }
+           }
+         }
+       }]
+     }
+   }'
+ ```
+
+ ### Debug Panel
+
+ The **🔍 Status & Debug** tab shows:
+ - Agent status and statistics
+ - Recent activity logs
+ - Conversation history length
+ - OFP message processing
+
+ ## ⚙️ Configuration
+
+ ### LLM Providers
+
+ #### HuggingFace (Default)
+
+ ```yaml
+ llm:
+   provider: 'huggingface'
+   model: 'Qwen/Qwen2.5-7B-Instruct'
+   max_tokens: 512
+   temperature: 0.7
+ ```
+
+ Requires the `HF_TOKEN` environment variable for private models.
+
+ #### OpenAI
+
+ ```yaml
+ llm:
+   provider: 'openai'
+   model: 'gpt-4'
+   api_key: 'your-api-key'  # or use OPENAI_API_KEY env var
+ ```
+
+ #### Ollama (Local)
+
+ ```yaml
+ llm:
+   provider: 'ollama'
+   model: 'qwen2.5:7b'
+   api_url: 'http://localhost:11434/api/generate'
+ ```
+
+ Install Ollama from https://ollama.ai and run:
+ ```bash
+ ollama pull qwen2.5:7b
+ ollama serve
+ ```
+
+ ### Agent Configuration
+
+ ```yaml
+ agent:
+   speaker_uri: 'tag:talker.service,2025:agent-01'
+   service_url: 'https://your-url.com/ofp'
+   convener_uri: 'tag:convener.service,2025:default'
+   convener_url: 'https://convener-url.com/ofp'
+ ```
+
+ ### UI Settings
+
+ ```yaml
+ ui:
+   title: '💬 Talker - OFP Chat Agent'
+   description: 'AI-powered conversational agent...'
+   theme: 'soft'  # default, soft, glass, monochrome
+   show_debug_panel: true
+   show_settings: true
+ ```
+
+ ## 📁 Project Structure
+
+ ```
+ Talker/
+ ├── app.py              # Main Gradio + FastAPI application
+ ├── requirements.txt    # Python dependencies
+ ├── config/
+ │   └── config.yaml     # Configuration file
+ ├── src/
+ │   ├── __init__.py
+ │   ├── models.py       # OFP data structures
+ │   ├── ofp_client.py   # OFP envelope handling
+ │   ├── llm_client.py   # LLM integration
+ │   └── chat_agent.py   # Main agent logic
+ └── README.md           # This file
+ ```
+
+ ## 🌐 Deployment
+
+ ### HuggingFace Spaces
+
+ 1. Push to HuggingFace Spaces repository
+ 2. Set `HF_TOKEN` secret in Space settings (if using HF models)
+ 3. Space will auto-deploy with public URL
+
+ ### Local Network
+
+ ```bash
+ python app.py --share
+ ```
+
+ Generates a temporary public URL for testing with OFP Playground.
+
+ ### Production Server
+
+ ```bash
+ # Using uvicorn directly
+ uvicorn app:app --host 0.0.0.0 --port 7860
+
+ # With Gunicorn
+ gunicorn app:app -w 4 -k uvicorn.workers.UvicornWorker
+ ```
+
+ ## 🔧 Development
+
+ ### Adding a Custom LLM Provider
+
+ 1. Extend `LLMClient` in `src/llm_client.py`:
+
+ ```python
+ def _generate_custom(self, message, history, max_tokens, temperature):
+     """Generate response using custom API"""
+     # Your implementation here
+     pass
+ ```
+
+ 2. Update `generate_response()` to call your method
+
+ ### Modifying OFP Behavior
+
+ Edit `src/chat_agent.py`:
+ - `process_envelope()`: Handle incoming OFP events
+ - `_handle_utterance()`: Process text messages
+ - `send_message()`: Send OFP responses
+
+ ## 🧪 Testing
289
+
290
+ Test the chat interface:
291
+ ```bash
292
+ python app.py
293
+ # Open browser to http://localhost:7860
294
+ ```
295
+
296
+ Test OFP endpoints:
297
+ ```bash
298
+ # Test manifest
299
+ curl http://localhost:7860/manifest
300
+
301
+ # Test message processing
302
+ curl -X POST http://localhost:7860/ofp \
303
+ -H "Content-Type: application/json" \
304
+ -d @test_envelope.json
305
+ ```
306
+
307
+ ## 📚 OFP Protocol
308
+
309
+ This agent implements:
310
+ - **Dialog Event Object v1.0.2**: Text-based conversation messages
311
+ - **Inter-agent Message v1.0.0**: Envelope structure for communication
312
+ - **Assistant Manifest v1.0.0**: Agent identification and capabilities
313
+
314
+ Supported OFP events:
315
+ - `utterance`: Text messages (main interaction)
316
+ - `getManifests`: Capability discovery
317
+ - `floorRequest`: Request speaking permission (future)
318
+
319
+ ## 🤝 Contributing
320
+
321
+ Contributions welcome! Areas for improvement:
322
+ - Additional LLM providers
323
+ - Voice/audio support
324
+ - Multi-modal capabilities
325
+ - Enhanced OFP features
326
+ - UI improvements
327
+
328
+ ## 📄 License
329
+
330
+ Apache 2.0 - See LICENSE file for details
331
+
332
+ ## 🔗 Links
333
+
334
+ - [Open Floor Protocol](https://openfloor.dev)
335
+ - [OFP Specifications](https://github.com/open-voice-interoperability/openfloor-docs)
336
+ - [Gradio Documentation](https://gradio.app/docs)
337
+ - [Qwen Models](https://huggingface.co/Qwen)
338
+
339
+ ## 💡 Credits
340
+
341
+ Based on the OFP architecture patterns from [OFPBadWord](https://huggingface.co/spaces/BladeSzaSza/OFPBadWord) sentinel agent.
342
+
343
  ---
344
 
345
+ **Questions or issues?** Open an issue on the repository or check the [OFP documentation](https://openfloor.dev).
app.py ADDED
@@ -0,0 +1,422 @@
+ """
+ Talker - OFP Chat Agent with AI Integration
+ Gradio ChatInterface with OFP protocol support
+ """
+
+ import gradio as gr
+ import os
+ import logging
+ import yaml
+ from fastapi import FastAPI
+ from fastapi.responses import JSONResponse
+ import uvicorn
+ from datetime import datetime, timezone
+ import uuid
+ import json
+
+ # Import agent components
+ from src.llm_client import LLMClient
+ from src.chat_agent import ChatAgent
+ from src.models import Envelope
+
+ # Configure logging
+ logging.basicConfig(
+     level=logging.INFO,
+     format='%(asctime)s - %(name)s - %(levelname)s - %(message)s'
+ )
+ logger = logging.getLogger(__name__)
+
+ # Load configuration
+ CONFIG_FILE = 'config/config.yaml'
+ try:
+     with open(CONFIG_FILE, 'r') as f:
+         config = yaml.safe_load(f)
+     logger.info("Configuration loaded successfully")
+ except FileNotFoundError:
+     logger.warning("Config file not found, using defaults")
+     config = {
+         'agent': {
+             'speaker_uri': 'tag:talker.service,2025:agent-01',
+             'service_url': 'https://talker-service.com/ofp',
+             'convener_uri': 'tag:convener.service,2025:default',
+             'convener_url': 'https://convener-service.com/ofp'
+         },
+         'llm': {
+             'provider': 'huggingface',
+             'model': 'Qwen/Qwen2.5-7B-Instruct',
+             'max_tokens': 512,
+             'temperature': 0.7,
+             'system_prompt': 'You are a helpful AI assistant participating in an Open Floor Protocol conversation.'
+         },
+         'conversation': {
+             'auto_respond': True,
+             'max_history': 20
+         },
+         'ui': {
+             'title': '💬 Talker - OFP Chat Agent',
+             'description': 'AI-powered conversational agent with Open Floor Protocol support',
+             'theme': 'soft',
+             'show_debug_panel': True,
+             'show_settings': True
+         }
+     }
+
+ # Initialize LLM client
+ llm_client = LLMClient(
+     provider=config['llm'].get('provider', 'huggingface'),
+     model=config['llm'].get('model', 'Qwen/Qwen2.5-7B-Instruct'),
+     api_key=config['llm'].get('api_key'),
+     api_url=config['llm'].get('api_url'),
+     system_prompt=config['llm'].get('system_prompt')
+ )
+
+ # Initialize Chat Agent
+ agent = ChatAgent(
+     speaker_uri=config['agent']['speaker_uri'],
+     service_url=config['agent']['service_url'],
+     llm_client=llm_client,
+     convener_uri=config['agent'].get('convener_uri'),
+     convener_url=config['agent'].get('convener_url')
+ )
+
+ # Create FastAPI app
+ app = FastAPI()
+
+ @app.get("/manifest")
+ async def get_manifest():
+     """Serve the assistant manifest"""
+     manifest = agent.get_manifest()
+     return JSONResponse(content=manifest)
+
+ @app.post("/ofp")
+ async def receive_ofp_envelope(envelope: dict):
+     """
+     Receive OFP envelopes from convener
+
+     Handles:
+     - getManifests: Returns publishManifest event
+     - utterance: Processes and responds to messages
+     """
+     try:
+         if "openFloor" not in envelope:
+             return JSONResponse(
+                 content={"status": "error", "message": "Invalid OFP envelope"},
+                 status_code=400
+             )
+
+         openfloor_data = envelope["openFloor"]
+         events = openfloor_data.get("events", [])
+
+         # Process each event
+         for event in events:
+             event_type = event.get("eventType")
+
+             # Handle getManifests event
+             if event_type == "getManifests":
+                 manifest = agent.get_manifest()
+                 return JSONResponse(content={
+                     "openFloor": {
+                         "schema": {"version": "1.0.0"},
+                         "conversation": openfloor_data.get("conversation", {}),
+                         "sender": {
+                             "speakerUri": config['agent']['speaker_uri'],
+                             "serviceUrl": config['agent']['service_url']
+                         },
+                         "events": [{
+                             "eventType": "publishManifest",
+                             "to": event.get("to", {}),
+                             "parameters": {
+                                 "servicingManifests": [manifest]
+                             }
+                         }]
+                     }
+                 })
+
+             # Handle utterance events
+             elif event_type == "utterance":
+                 # Process envelope through agent
+                 ofp_envelope = Envelope.from_dict(envelope)
+                 response_text = agent.process_envelope(ofp_envelope)
+
+                 if response_text and config['conversation'].get('auto_respond', True):
+                     # Send response back
+                     return JSONResponse(content={
+                         "openFloor": {
+                             "schema": {"version": "1.0.0"},
+                             "conversation": openfloor_data.get("conversation", {}),
+                             "sender": {
+                                 "speakerUri": config['agent']['speaker_uri'],
+                                 "serviceUrl": config['agent']['service_url']
+                             },
+                             "events": [{
+                                 "eventType": "utterance",
+                                 "to": event.get("to", {}),
+                                 "parameters": {
+                                     "dialogEvent": {
+                                         "id": f"de:{uuid.uuid4()}",
+                                         "speakerUri": config['agent']['speaker_uri'],
+                                         "span": {
+                                             "startTime": datetime.now(timezone.utc).isoformat().replace('+00:00', 'Z')
+                                         },
+                                         "features": {
+                                             "text": {
+                                                 "mimeType": "text/plain",
+                                                 "tokens": [{
+                                                     "value": response_text,
+                                                     "confidence": 1
+                                                 }]
+                                             }
+                                         }
+                                     }
+                                 }
+                             }]
+                         }
+                     })
+
+         # Acknowledge other events
+         return JSONResponse(content={
+             "openFloor": {
+                 "schema": {"version": "1.0.0"},
+                 "conversation": openfloor_data.get("conversation", {}),
+                 "sender": {
+                     "speakerUri": config['agent']['speaker_uri'],
+                     "serviceUrl": config['agent']['service_url']
+                 },
+                 "events": []
+             }
+         })
+
+     except Exception as e:
+         logger.error(f"Error processing OFP envelope: {e}")
+         return JSONResponse(
+             content={"status": "error", "message": str(e)},
+             status_code=400
+         )
+
+
+ # Gradio ChatInterface Functions
+ def chat_response(message, history):
+     """Generate response for Gradio ChatInterface"""
+     try:
+         # Add to conversation history
+         agent.conversation_history.append({
+             "role": "user",
+             "content": message
+         })
+
+         # Generate response
+         response = llm_client.generate_response(
+             message=message,
+             conversation_history=agent.conversation_history[:-1],
+             max_tokens=config['llm'].get('max_tokens', 512),
+             temperature=config['llm'].get('temperature', 0.7)
+         )
+
+         # Add response to history
+         agent.conversation_history.append({
+             "role": "assistant",
+             "content": response
+         })
+
+         # Keep history within limits
+         max_history = config['conversation'].get('max_history', 20)
+         if len(agent.conversation_history) > max_history * 2:
+             agent.conversation_history = agent.conversation_history[-(max_history * 2):]
+
+         return response
+
+     except Exception as e:
+         logger.error(f"Error generating chat response: {e}")
+         return f"Sorry, I encountered an error: {str(e)}"
+
+
+ def get_status_info():
+     """Get current agent status for display"""
+     status = agent.get_status()
+     return f"""
+ **Agent Status:**
+ - Speaker URI: `{status['speaker_uri']}`
+ - Messages Processed: {status['messages_processed']}
+ - Responses Sent: {status['responses_sent']}
+ - History Length: {status['history_length']}
+ - Conversation ID: `{status.get('conversation_id', 'N/A')}`
+ """
+
+
+ def get_recent_logs():
+     """Get recent activity logs"""
+     status = agent.get_status()
+     logs = status.get('recent_logs', [])
+     return "\n".join(logs) if logs else "No recent activity"
+
+
+ def clear_conversation():
+     """Clear conversation history"""
+     agent.clear_history()
+     return "Conversation history cleared!"
+
+
+ def get_ofp_envelope_info():
+     """Show OFP endpoint information"""
+     return f"""
+ ### OFP Endpoints
+
+ **Manifest URL:** `{config['agent']['service_url'].replace('/ofp', '/manifest')}`
+
+ **OFP Endpoint:** `{config['agent']['service_url']}`
+
+ **Test with curl:**
+ ```bash
+ curl -X POST {config['agent']['service_url']} \\
+   -H "Content-Type: application/json" \\
+   -d '{{
+     "openFloor": {{
+       "schema": {{"version": "1.0.0"}},
+       "conversation": {{"id": "conv:test"}},
+       "sender": {{"speakerUri": "tag:test,2025:user"}},
+       "events": [{{
+         "eventType": "utterance",
+         "parameters": {{
+           "dialogEvent": {{
+             "id": "de:test",
+             "speakerUri": "tag:test,2025:user",
+             "span": {{"startTime": "2025-01-01T00:00:00Z"}},
+             "features": {{
+               "text": {{
+                 "mimeType": "text/plain",
+                 "tokens": [{{"value": "Hello!"}}]
+               }}
+             }}
+           }}
+         }}
+       }}]
+     }}
+   }}'
+ ```
+ """
+
+
+ # Build Gradio Interface
+ custom_css = """
+ .gradio-container {
+     font-family: 'Inter', system-ui, sans-serif !important;
+     max-width: 1400px !important;
+ }
+ .chat-message {
+     padding: 1rem !important;
+ }
+ """
+
+ with gr.Blocks(
+     title=config['ui'].get('title', 'Talker - OFP Chat Agent'),
+     theme=config['ui'].get('theme', 'soft'),
+     css=custom_css
+ ) as demo:
+
+     gr.Markdown(f"""
+     # {config['ui'].get('title', '💬 Talker - OFP Chat Agent')}
+     {config['ui'].get('description', 'AI-powered conversational agent with Open Floor Protocol support')}
+
+     **Model:** {config['llm']['model']} | **Provider:** {config['llm']['provider']}
+     """)
+
+     with gr.Tab("💬 Chat"):
+         chatbot = gr.ChatInterface(
+             fn=chat_response,
+             title="",
+             description="Chat with the AI assistant",
+             examples=[
+                 "Hello! How are you?",
+                 "What is the Open Floor Protocol?",
+                 "Can you explain what you do?",
+                 "Tell me a joke!"
+             ]
+             # Note: retry_btn / undo_btn / clear_btn were removed from
+             # ChatInterface in Gradio 5.x; passing them with the pinned
+             # gradio==5.9.1 raises a TypeError, so they are omitted here.
+         )
+
+     if config['ui'].get('show_debug_panel', True):
+         with gr.Tab("🔍 Status & Debug"):
+             with gr.Row():
+                 with gr.Column():
+                     status_display = gr.Markdown(get_status_info())
+                     refresh_status_btn = gr.Button("🔄 Refresh Status")
+                     refresh_status_btn.click(
+                         fn=get_status_info,
+                         outputs=status_display
+                     )
+
+                 with gr.Column():
+                     logs_display = gr.Textbox(
+                         label="Activity Logs",
+                         value=get_recent_logs(),
+                         lines=10,
+                         max_lines=20
+                     )
+                     refresh_logs_btn = gr.Button("🔄 Refresh Logs")
+                     refresh_logs_btn.click(
+                         fn=get_recent_logs,
+                         outputs=logs_display
+                     )
+
+             clear_history_btn = gr.Button("🗑️ Clear Conversation History")
+             clear_result = gr.Textbox(label="Result", interactive=False)
+             clear_history_btn.click(
+                 fn=clear_conversation,
+                 outputs=clear_result
+             )
+
+     if config['ui'].get('show_settings', True):
+         with gr.Tab("⚙️ OFP Configuration"):
+             ofp_info = gr.Markdown(get_ofp_envelope_info())
+
+             gr.Markdown("""
+             ### Configuration
+             Edit `config/config.yaml` to change:
+             - LLM provider and model
+             - System prompt
+             - Generation parameters
+             - OFP endpoints
+
+             ### Environment Variables
+             - `HF_TOKEN`: HuggingFace API token (for private models)
+             - `OPENAI_API_KEY`: OpenAI API key (if using OpenAI)
+             """)
+
+     gr.Markdown("""
+     ---
+     **About:** This agent implements the [Open Floor Protocol](https://openfloor.dev) for interoperable AI conversations.
+
+     **OFP v1.0.0** | Built with [Gradio](https://gradio.app) | Powered by {model}
+     """.format(model=config['llm']['model']))
+
+
+ # Launch configuration
+ if __name__ == "__main__":
+     import argparse
+
+     is_hf_space = os.getenv("SPACE_ID") is not None
+
+     parser = argparse.ArgumentParser(description='Talker - OFP Chat Agent')
+     parser.add_argument('--share', action='store_true', help='Create public share link')
+     parser.add_argument('--port', type=int, default=7860, help='Port to run on')
+     args = parser.parse_args()
+
+     print("\n" + "="*60)
+     print("💬 Talker - OFP Chat Agent")
+     print("="*60)
+     print(f"\nModel: {config['llm']['model']}")
+     print(f"Provider: {config['llm']['provider']}")
+     print(f"Speaker URI: {config['agent']['speaker_uri']}")
+     print("="*60 + "\n")
+
+     # Mount Gradio app to FastAPI
+     app_with_gradio = gr.mount_gradio_app(app, demo, path="/")
+
+     # Note: --share has no effect here, since the app is served via
+     # uvicorn rather than demo.launch(share=True).
+     uvicorn.run(
+         app_with_gradio,
+         host="0.0.0.0",
+         port=args.port,
+         log_level="info"
+     )
config/config.yaml ADDED
@@ -0,0 +1,73 @@
+ # Talker - OFP Chat Agent Configuration
+
+ agent:
+   # Agent identification
+   speaker_uri: 'tag:talker.service,2025:agent-01'
+   service_url: 'https://bladeszasza-talker.hf.space/ofp'
+
+   # Convener endpoints (optional - for sending messages)
+   convener_uri: 'tag:convener.service,2025:default'
+   convener_url: 'https://convener-service.com/ofp'
+
+ llm:
+   # LLM provider: huggingface, openai, ollama
+   provider: 'huggingface'
+
+   # Model name
+   model: 'Qwen/Qwen2.5-7B-Instruct'
+
+   # API configuration
+   # For HuggingFace: set HF_TOKEN environment variable
+   # For OpenAI: set OPENAI_API_KEY environment variable
+   # For Ollama: runs locally, no key needed
+   api_url: null  # Optional: custom API endpoint
+
+   # Generation parameters
+   max_tokens: 512
+   temperature: 0.7
+
+   # System prompt
+   system_prompt: |
+     You are Talker, a helpful AI assistant participating in an Open Floor Protocol conversation.
+     You provide clear, concise, and friendly responses.
+     You can discuss a wide range of topics and help with questions.
+
+ conversation:
+   # Automatically respond to all messages
+   auto_respond: true
+
+   # Maximum conversation history length
+   max_history: 20
+
+   # Clear history after inactivity (minutes)
+   clear_after_minutes: 30
+
+ ui:
+   # Chat interface title
+   title: '💬 Talker - OFP Chat Agent'
+
+   # Chat interface description
+   description: 'AI-powered conversational agent with Open Floor Protocol support'
+
+   # Theme: default, soft, glass, monochrome
+   theme: 'soft'
+
+   # Show OFP debugging panel
+   show_debug_panel: true
+
+   # Show settings panel
+   show_settings: true
+
+ # Alternative LLM configurations (uncomment to use)
+
+ # OpenAI Configuration:
+ # llm:
+ #   provider: 'openai'
+ #   model: 'gpt-4'
+ #   api_key: 'your-api-key'  # or use OPENAI_API_KEY env var
+
+ # Ollama Configuration (local):
+ # llm:
+ #   provider: 'ollama'
+ #   model: 'qwen2.5:7b'
+ #   api_url: 'http://localhost:11434/api/generate'
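Note that app.py falls back to its built-in defaults only when this file is missing entirely; a partial config (say, one omitting `ui:`) would hit a KeyError on direct lookups like `config['ui']`. A defensive alternative is to overlay the user config onto the defaults. A sketch of such a deep merge (an assumption about a possible improvement, not what the shipped code does):

```python
# Sketch: overlay a possibly partial user config onto built-in defaults.
def deep_merge(defaults: dict, overrides: dict) -> dict:
    """Return defaults with overrides applied recursively."""
    merged = dict(defaults)
    for key, value in overrides.items():
        if isinstance(value, dict) and isinstance(merged.get(key), dict):
            merged[key] = deep_merge(merged[key], value)
        else:
            merged[key] = value
    return merged

defaults = {"llm": {"provider": "huggingface", "temperature": 0.7},
            "ui": {"theme": "soft"}}
user = {"llm": {"provider": "ollama"}}  # partial config: no "ui" section

print(deep_merge(defaults, user))
# → {'llm': {'provider': 'ollama', 'temperature': 0.7}, 'ui': {'theme': 'soft'}}
```

With this, keys absent from `config.yaml` keep their default values instead of raising at startup.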
requirements.txt ADDED
@@ -0,0 +1,5 @@
+ gradio==5.9.1
+ fastapi>=0.104.0
+ uvicorn>=0.24.0
+ requests>=2.31.0
+ pyyaml>=6.0
src/__init__.py ADDED
@@ -0,0 +1,2 @@
+ # Talker - OFP Chat Agent with AI Integration
+ __version__ = "1.0.0"
src/chat_agent.py ADDED
@@ -0,0 +1,237 @@
+ """
+ Chat Agent
+ OFP-compliant conversational agent with LLM integration
+ """
+
+ import logging
+ from typing import Dict, List, Optional
+ from datetime import datetime, timezone
+ import uuid
+
+ from .ofp_client import OFPClient
+ from .llm_client import LLMClient
+ from .models import Envelope, DialogEvent
+
+ logger = logging.getLogger(__name__)
+
+
+ class ChatAgent:
+     """OFP Chat Agent with AI conversation capabilities"""
+
+     def __init__(
+         self,
+         speaker_uri: str,
+         service_url: str,
+         llm_client: LLMClient,
+         convener_uri: Optional[str] = None,
+         convener_url: Optional[str] = None
+     ):
+         """
+         Initialize chat agent
+
+         Args:
+             speaker_uri: Agent's unique speaker URI
+             service_url: Agent's service endpoint URL
+             llm_client: Configured LLM client instance
+             convener_uri: Convener's speaker URI (optional)
+             convener_url: Convener's service endpoint URL (optional)
+         """
+         self.speaker_uri = speaker_uri
+         self.service_url = service_url
+         self.convener_uri = convener_uri
+         self.convener_url = convener_url
+
+         # Initialize LLM client
+         self.llm_client = llm_client
+
+         # Initialize OFP client
+         manifest = self._create_manifest()
+         self.ofp_client = OFPClient(speaker_uri, service_url, manifest)
+
+         # Conversation state
+         self.conversation_history: List[Dict[str, str]] = []
+         self.current_conversation_id: Optional[str] = None
+         self.messages_processed = 0
+         self.responses_sent = 0
+         self.activity_log = []
+
+         logger.info(f"Chat Agent initialized: {speaker_uri}")
+
+     def _create_manifest(self) -> Dict:
+         """Create assistant manifest for chat agent"""
+         return {
+             "identification": {
+                 "speakerUri": self.speaker_uri,
+                 "serviceUrl": self.service_url,
+                 "conversationalName": "Talker AI Assistant",
+                 "role": "Conversational Agent",
+                 "synopsis": "AI-powered conversational assistant for Open Floor Protocol"
+             },
+             "capabilities": [{
+                 "keyphrases": ["chat", "conversation", "AI assistant", "questions", "help"],
+                 "supportedLayers": ["text"],
+                 "descriptions": ["Engages in natural conversations and answers questions"]
+             }]
+         }
+
+     def process_envelope(self, envelope: Envelope) -> Optional[str]:
+         """
+         Process incoming OFP envelope
+
+         Args:
+             envelope: OFP envelope to process
+
+         Returns:
+             Response text if generated, None otherwise
+         """
+         try:
+             self.messages_processed += 1
+
+             # Update conversation ID
+             if envelope.conversation.get('id'):
+                 self.current_conversation_id = envelope.conversation['id']
+
+             for event in envelope.events:
+                 event_type = event.get('eventType')
+
+                 # Handle utterance events
+                 if event_type == 'utterance':
+                     return self._handle_utterance(envelope, event)
+
+                 # Handle getManifests
+                 elif event_type == 'getManifests':
+                     self._log_activity("Received getManifests request")
+
+             return None
+
+         except Exception as e:
+             logger.error(f"Error processing envelope: {e}")
+             self._log_activity(f"ERROR: {str(e)}")
+             return None
+
+     def _handle_utterance(self, envelope: Envelope, event: Dict) -> Optional[str]:
+         """
+         Handle utterance event and generate response
+
+         Args:
+             envelope: OFP envelope
+             event: Utterance event
+
+         Returns:
+             Generated response text
+         """
+         try:
+             # Extract text from dialog event
+             params = event.get('parameters', {})
+             dialog_event = params.get('dialogEvent', {})
+             features = dialog_event.get('features', {})
+             text_feature = features.get('text', {})
+             tokens = text_feature.get('tokens', [])
+
+             # Combine tokens into text
+             text = ' '.join(token.get('value', '') for token in tokens)
+
+             if not text:
+                 return None
+
+             # Get speaker info
+             speaker_uri = dialog_event.get('speakerUri', 'unknown')
+
+             # Don't respond to own messages
+             if speaker_uri == self.speaker_uri:
+                 return None
+
+             self._log_activity(f"Received: {text[:50]}..." if len(text) > 50 else f"Received: {text}")
+
+             # Add to conversation history
+             self.conversation_history.append({
+                 "role": "user",
+                 "content": text
+             })
+
+             # Generate response using LLM
+             response = self.llm_client.generate_response(
+                 message=text,
+                 conversation_history=self.conversation_history[:-1]  # Exclude current message
+             )
+
+             # Add response to history
+             self.conversation_history.append({
+                 "role": "assistant",
+                 "content": response
+             })
+
+             self.responses_sent += 1
+             self._log_activity(f"Sent: {response[:50]}..." if len(response) > 50 else f"Sent: {response}")
+
+             return response
+
+         except Exception as e:
+             logger.error(f"Error handling utterance: {e}")
+             return "Sorry, I encountered an error processing your message."
+
+     def send_message(self, text: str, recipient_url: Optional[str] = None) -> bool:
+         """
+         Send message to the floor
+
+         Args:
+             text: Message text to send
+             recipient_url: Recipient URL (uses convener_url if not specified)
+
+         Returns:
+             True if sent successfully
+         """
+         try:
+             if not self.current_conversation_id:
+                 self.current_conversation_id = f"conv:{uuid.uuid4()}"
+
+             url = recipient_url or self.convener_url
+             if not url:
+                 logger.error("No recipient URL specified")
+                 return False
+
+             success = self.ofp_client.send_utterance(
+                 conversation_id=self.current_conversation_id,
+                 recipient_url=url,
+                 text=text
+             )
+
+             if success:
+                 self.responses_sent += 1
201
+ self._log_activity(f"Sent: {text[:50]}..." if len(text) > 50 else f"Sent: {text}")
202
+
203
+ return success
204
+
205
+ except Exception as e:
206
+ logger.error(f"Error sending message: {e}")
207
+ return False
208
+
209
+ def clear_history(self):
210
+ """Clear conversation history"""
211
+ self.conversation_history = []
212
+ self._log_activity("Conversation history cleared")
213
+
214
+ def get_status(self) -> Dict:
215
+ """Get current agent status"""
216
+ return {
217
+ "speaker_uri": self.speaker_uri,
218
+ "conversation_id": self.current_conversation_id,
219
+ "messages_processed": self.messages_processed,
220
+ "responses_sent": self.responses_sent,
221
+ "history_length": len(self.conversation_history),
222
+ "recent_logs": self.activity_log[-10:] if self.activity_log else []
223
+ }
224
+
225
+ def get_manifest(self) -> Dict:
226
+ """Get assistant manifest"""
227
+ return self.ofp_client.get_manifest()
228
+
229
+ def _log_activity(self, message: str):
230
+ """Log activity with timestamp"""
231
+ timestamp = datetime.now().strftime("%Y-%m-%d %H:%M:%S")
232
+ log_entry = f"[{timestamp}] {message}"
233
+ self.activity_log.append(log_entry)
234
+
235
+ # Keep only last 100 entries
236
+ if len(self.activity_log) > 100:
237
+ self.activity_log = self.activity_log[-100:]
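The token-extraction chain in `_handle_utterance` can be exercised in isolation. A minimal sketch, with an invented sample event (the defaults at every `.get()` level mean a malformed event yields an empty string instead of a `KeyError`):

```python
# A sample utterance event in the shape _handle_utterance expects.
# The speaker URI and token values are invented for illustration.
event = {
    "eventType": "utterance",
    "parameters": {
        "dialogEvent": {
            "speakerUri": "tag:example.com,2025:user1",
            "features": {
                "text": {
                    "mimeType": "text/plain",
                    "tokens": [{"value": "Hello"}, {"value": "there"}]
                }
            }
        }
    }
}

# Same extraction chain as _handle_utterance: dig down with defaults,
# then join the token values into one string.
tokens = (
    event.get("parameters", {})
         .get("dialogEvent", {})
         .get("features", {})
         .get("text", {})
         .get("tokens", [])
)
text = " ".join(token.get("value", "") for token in tokens)
print(text)  # → Hello there
```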
src/llm_client.py ADDED
@@ -0,0 +1,219 @@
+"""
+LLM Client
+Handles integration with various language models (Qwen, OpenAI-compatible APIs, etc.)
+"""
+
+import logging
+import os
+from typing import List, Dict, Optional
+
+import requests
+
+logger = logging.getLogger(__name__)
+
+
+class LLMClient:
+    """Client for interacting with language models"""
+
+    def __init__(
+        self,
+        provider: str = "huggingface",
+        model: str = "Qwen/Qwen2.5-7B-Instruct",
+        api_key: Optional[str] = None,
+        api_url: Optional[str] = None,
+        system_prompt: Optional[str] = None
+    ):
+        """
+        Initialize the LLM client.
+
+        Args:
+            provider: LLM provider (huggingface, openai, ollama)
+            model: Model name/identifier
+            api_key: API key for authentication
+            api_url: Custom API endpoint URL
+            system_prompt: System instructions for the model
+        """
+        self.provider = provider
+        self.model = model
+        self.api_key = api_key or os.getenv("HF_TOKEN")
+        self.system_prompt = system_prompt or "You are a helpful AI assistant participating in an Open Floor Protocol conversation."
+
+        # Set the API URL based on the provider
+        if api_url:
+            self.api_url = api_url
+        elif provider == "huggingface":
+            self.api_url = f"https://api-inference.huggingface.co/models/{model}"
+        elif provider == "openai":
+            self.api_url = "https://api.openai.com/v1/chat/completions"
+        elif provider == "ollama":
+            self.api_url = "http://localhost:11434/api/generate"
+        else:
+            raise ValueError(f"Unsupported provider: {provider}")
+
+        logger.info(f"LLM Client initialized: {provider} - {model}")
+
+    def generate_response(
+        self,
+        message: str,
+        conversation_history: Optional[List[Dict[str, str]]] = None,
+        max_tokens: int = 512,
+        temperature: float = 0.7
+    ) -> str:
+        """
+        Generate a response from the LLM.
+
+        Args:
+            message: User message to respond to
+            conversation_history: Previous messages in the conversation
+            max_tokens: Maximum tokens to generate
+            temperature: Sampling temperature
+
+        Returns:
+            Generated response text
+        """
+        try:
+            if self.provider == "huggingface":
+                return self._generate_huggingface(
+                    message, conversation_history, max_tokens, temperature
+                )
+            elif self.provider == "openai":
+                return self._generate_openai(
+                    message, conversation_history, max_tokens, temperature
+                )
+            elif self.provider == "ollama":
+                return self._generate_ollama(
+                    message, conversation_history, max_tokens, temperature
+                )
+            else:
+                return f"Error: Unsupported provider {self.provider}"
+
+        except Exception as e:
+            logger.error(f"Error generating response: {e}")
+            return f"Sorry, I encountered an error: {str(e)}"
+
+    def _generate_huggingface(
+        self,
+        message: str,
+        conversation_history: Optional[List[Dict[str, str]]],
+        max_tokens: int,
+        temperature: float
+    ) -> str:
+        """Generate a response using the HuggingFace Inference API."""
+        headers = {}
+        if self.api_key:
+            headers["Authorization"] = f"Bearer {self.api_key}"
+
+        # Build a flat prompt from the system prompt and conversation history
+        prompt = self._build_prompt(message, conversation_history)
+
+        payload = {
+            "inputs": prompt,
+            "parameters": {
+                "max_new_tokens": max_tokens,
+                "temperature": temperature,
+                "return_full_text": False
+            }
+        }
+
+        response = requests.post(
+            self.api_url,
+            headers=headers,
+            json=payload,
+            timeout=60
+        )
+        response.raise_for_status()
+
+        result = response.json()
+
+        # Handle both list- and dict-shaped responses
+        if isinstance(result, list) and len(result) > 0:
+            return result[0].get("generated_text", "").strip()
+        elif isinstance(result, dict):
+            return result.get("generated_text", "").strip()
+        else:
+            logger.warning(f"Unexpected response format: {result}")
+            return "I couldn't generate a proper response."
+
+    def _generate_openai(
+        self,
+        message: str,
+        conversation_history: Optional[List[Dict[str, str]]],
+        max_tokens: int,
+        temperature: float
+    ) -> str:
+        """Generate a response using an OpenAI-compatible chat completions API."""
+        headers = {
+            "Content-Type": "application/json",
+            "Authorization": f"Bearer {self.api_key}"
+        }
+
+        messages = [{"role": "system", "content": self.system_prompt}]
+
+        if conversation_history:
+            messages.extend(conversation_history)
+
+        messages.append({"role": "user", "content": message})
+
+        payload = {
+            "model": self.model,
+            "messages": messages,
+            "max_tokens": max_tokens,
+            "temperature": temperature
+        }
+
+        response = requests.post(
+            self.api_url,
+            headers=headers,
+            json=payload,
+            timeout=60
+        )
+        response.raise_for_status()
+
+        result = response.json()
+        return result["choices"][0]["message"]["content"].strip()
+
+    def _generate_ollama(
+        self,
+        message: str,
+        conversation_history: Optional[List[Dict[str, str]]],
+        max_tokens: int,
+        temperature: float
+    ) -> str:
+        """Generate a response using the Ollama local API."""
+        prompt = self._build_prompt(message, conversation_history)
+
+        payload = {
+            "model": self.model,
+            "prompt": prompt,
+            "stream": False,  # Ollama streams by default; request a single JSON response
+            "options": {
+                "num_predict": max_tokens,
+                "temperature": temperature
+            }
+        }
+
+        response = requests.post(
+            self.api_url,
+            json=payload,
+            timeout=60
+        )
+        response.raise_for_status()
+
+        result = response.json()
+        return result.get("response", "").strip()
+
+    def _build_prompt(
+        self,
+        message: str,
+        conversation_history: Optional[List[Dict[str, str]]]
+    ) -> str:
+        """Build a plain-text prompt from the system instruction and conversation history."""
+        parts = [f"System: {self.system_prompt}"]
+
+        if conversation_history:
+            for msg in conversation_history:
+                role = msg.get("role", "user")
+                content = msg.get("content", "")
+                parts.append(f"{role.capitalize()}: {content}")
+
+        parts.append(f"User: {message}\nAssistant:")
+
+        return "\n".join(parts)
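For providers without a chat API (the HuggingFace text-generation and Ollama paths), the client flattens the conversation into a "Role: content" transcript. A standalone sketch of that format, with invented sample turns:

```python
# Standalone mirror of the flat prompt format: system line first, one
# "Role: content" line per turn, then a trailing "Assistant:" cue for
# the model to complete.
def build_prompt(system_prompt, history, message):
    parts = [f"System: {system_prompt}"]
    for msg in history:
        parts.append(f"{msg.get('role', 'user').capitalize()}: {msg.get('content', '')}")
    parts.append(f"User: {message}\nAssistant:")
    return "\n".join(parts)

prompt = build_prompt(
    "You are a helpful assistant.",
    [{"role": "user", "content": "Hi"}, {"role": "assistant", "content": "Hello!"}],
    "How are you?",
)
print(prompt)
```

This prints one line per turn, ending with the bare `Assistant:` cue.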
src/models.py ADDED
@@ -0,0 +1,154 @@
+"""
+OFP Data Models
+Implements Open Floor Protocol envelope and event structures following the v1.0.0 specifications
+"""
+
+import json
+import uuid
+from dataclasses import dataclass
+from datetime import datetime, timezone
+from typing import List, Dict, Optional, Any
+
+
+@dataclass
+class Identification:
+    """Assistant identification information"""
+    speaker_uri: str
+    service_url: str
+    conversational_name: str
+    organization: Optional[str] = None
+    role: Optional[str] = None
+    synopsis: Optional[str] = None
+
+
+@dataclass
+class DialogEvent:
+    """Dialog event following OFP Dialog Event Object v1.0.2"""
+    id: str
+    speaker_uri: str
+    span: Dict[str, str]
+    features: Dict[str, Any]
+
+    @staticmethod
+    def create_text_event(speaker_uri: str, text: str, event_id: Optional[str] = None) -> 'DialogEvent':
+        """Create a text-based dialog event."""
+        return DialogEvent(
+            id=event_id or f"de:{uuid.uuid4()}",
+            speaker_uri=speaker_uri,
+            span={"startTime": datetime.now(timezone.utc).isoformat().replace('+00:00', 'Z')},
+            features={
+                "text": {
+                    "mimeType": "text/plain",
+                    "tokens": [{"value": text}]
+                }
+            }
+        )
+
+    def to_dict(self) -> Dict:
+        """Convert to a dictionary for serialization."""
+        return {
+            "id": self.id,
+            "speakerUri": self.speaker_uri,
+            "span": self.span,
+            "features": self.features
+        }
+
+
+@dataclass
+class Event:
+    """OFP Event structure for inter-agent messages"""
+    event_type: str
+    to: Optional[Dict[str, Any]] = None
+    parameters: Optional[Dict[str, Any]] = None
+
+    def to_dict(self) -> Dict:
+        """Convert to a dictionary for serialization."""
+        result = {"eventType": self.event_type}
+        if self.to:
+            result["to"] = self.to
+        if self.parameters:
+            result["parameters"] = self.parameters
+        return result
+
+
+@dataclass
+class Envelope:
+    """OFP Envelope following Inter-agent Message v1.0.0"""
+    schema: Dict[str, str]
+    conversation: Dict[str, Any]
+    sender: Dict[str, str]
+    events: List[Dict[str, Any]]
+
+    @staticmethod
+    def from_json(json_str: str) -> 'Envelope':
+        """Parse an OFP envelope from a JSON string."""
+        data = json.loads(json_str)
+        ofp = data.get('openFloor', {})
+        return Envelope(
+            schema=ofp.get('schema', {}),
+            conversation=ofp.get('conversation', {}),
+            sender=ofp.get('sender', {}),
+            events=ofp.get('events', [])
+        )
+
+    @staticmethod
+    def from_dict(data: Dict) -> 'Envelope':
+        """Parse an OFP envelope from a dictionary."""
+        ofp = data.get('openFloor', data)  # Support both wrapped and unwrapped payloads
+        return Envelope(
+            schema=ofp.get('schema', {}),
+            conversation=ofp.get('conversation', {}),
+            sender=ofp.get('sender', {}),
+            events=ofp.get('events', [])
+        )
+
+    def to_payload(self) -> Dict:
+        """Convert to a JSON-serializable payload for transmission."""
+        return {
+            "openFloor": {
+                "schema": self.schema,
+                "conversation": self.conversation,
+                "sender": self.sender,
+                "events": self.events
+            }
+        }
+
+    def to_json(self) -> str:
+        """Convert to a JSON string."""
+        return json.dumps(self.to_payload(), indent=2)
+
+
+def validate_envelope(envelope: Envelope) -> bool:
+    """Validate the structure of an OFP envelope."""
+    try:
+        # Check required fields
+        if not envelope.schema or 'version' not in envelope.schema:
+            return False
+
+        if not envelope.conversation or 'id' not in envelope.conversation:
+            return False
+
+        if not envelope.sender or 'speakerUri' not in envelope.sender:
+            return False
+
+        if not isinstance(envelope.events, list):
+            return False
+
+        # Validate each event
+        for event in envelope.events:
+            if not isinstance(event, dict) or 'eventType' not in event:
+                return False
+
+        return True
+    except Exception:
+        return False
+
+
+def create_envelope(conversation_id: str, speaker_uri: str, events: List[Dict]) -> Envelope:
+    """Helper to create a valid OFP envelope."""
+    return Envelope(
+        schema={"version": "1.0.0"},
+        conversation={"id": conversation_id},
+        sender={"speakerUri": speaker_uri},
+        events=events
+    )
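The wire format produced by `Envelope.to_payload()` and the fields checked by `validate_envelope` can be sketched with plain dicts. A minimal round-trip example (the conversation ID and speaker URI are invented for illustration):

```python
import json

# Plain-dict sketch of the "openFloor" wrapper to_payload() produces.
payload = {
    "openFloor": {
        "schema": {"version": "1.0.0"},
        "conversation": {"id": "conv:1234"},
        "sender": {"speakerUri": "tag:example.com,2025:talker"},
        "events": [{"eventType": "getManifests"}],
    }
}

# Round-trip through JSON, then re-check the same required fields
# that validate_envelope() inspects.
ofp = json.loads(json.dumps(payload))["openFloor"]
assert "version" in ofp["schema"]
assert "id" in ofp["conversation"]
assert "speakerUri" in ofp["sender"]
assert all("eventType" in e for e in ofp["events"])
```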
src/ofp_client.py ADDED
@@ -0,0 +1,116 @@
+"""
+OFP Client
+Handles sending and receiving Open Floor Protocol envelopes via HTTPS
+"""
+
+import json
+import logging
+from typing import Dict, Optional
+
+import requests
+
+from .models import Envelope, DialogEvent
+
+logger = logging.getLogger(__name__)
+
+
+class OFPClient:
+    """Client for sending OFP envelopes to conveners and other assistants"""
+
+    def __init__(self, speaker_uri: str, service_url: str, manifest: Dict):
+        self.speaker_uri = speaker_uri
+        self.service_url = service_url
+        self.manifest = manifest
+        logger.info(f"OFP Client initialized for {speaker_uri}")
+
+    def send_envelope(self, recipient_url: str, envelope: Envelope, timeout: int = 10) -> bool:
+        """Send an OFP envelope to a recipient via HTTPS POST."""
+        try:
+            payload = envelope.to_payload()
+            logger.debug(f"Sending envelope to {recipient_url}: {json.dumps(payload, indent=2)}")
+
+            response = requests.post(
+                recipient_url,
+                json=payload,
+                headers={
+                    'Content-Type': 'application/json',
+                    'User-Agent': 'OFP-Talker-Agent/1.0'
+                },
+                timeout=timeout
+            )
+            response.raise_for_status()
+
+            logger.info(f"✓ Envelope sent successfully to {recipient_url}")
+            return True
+
+        except requests.exceptions.Timeout:
+            logger.error(f"✗ Timeout sending envelope to {recipient_url}")
+            return False
+
+        except requests.exceptions.RequestException as e:
+            logger.error(f"✗ Failed to send envelope to {recipient_url}: {e}")
+            return False
+
+        except Exception as e:
+            logger.error(f"✗ Unexpected error sending envelope: {e}")
+            return False
+
+    def send_utterance(
+        self,
+        conversation_id: str,
+        recipient_url: str,
+        text: str,
+        to: Optional[Dict] = None
+    ) -> bool:
+        """Send an utterance message to the floor."""
+        try:
+            dialog_event = DialogEvent.create_text_event(
+                speaker_uri=self.speaker_uri,
+                text=text
+            )
+
+            event = {
+                "eventType": "utterance",
+                "parameters": {
+                    "dialogEvent": dialog_event.to_dict()
+                }
+            }
+
+            if to:
+                event["to"] = to
+
+            envelope = Envelope(
+                schema={"version": "1.0.0"},
+                conversation={"id": conversation_id},
+                sender={"speakerUri": self.speaker_uri},
+                events=[event]
+            )
+
+            return self.send_envelope(recipient_url, envelope)
+
+        except Exception as e:
+            logger.error(f"Error sending utterance: {e}")
+            return False
+
+    def request_floor(
+        self,
+        conversation_id: str,
+        convener_url: str,
+        convener_uri: str
+    ) -> bool:
+        """Request the speaking floor from the convener."""
+        envelope = Envelope(
+            schema={"version": "1.0.0"},
+            conversation={"id": conversation_id},
+            sender={"speakerUri": self.speaker_uri},
+            events=[{
+                "eventType": "floorRequest",
+                "to": {
+                    "speakerUri": convener_uri
+                }
+            }]
+        )
+
+        return self.send_envelope(convener_url, envelope)
+
+    def get_manifest(self) -> Dict:
+        """Return the assistant manifest."""
+        return self.manifest
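`send_utterance` composes three layers: a dialog event, wrapped in an `utterance` event, wrapped in an envelope. A plain-dict sketch of that nesting (URIs and conversation ID are invented for illustration):

```python
import uuid
from datetime import datetime, timezone

speaker_uri = "tag:example.com,2025:talker"

# Layer 1: the dialog event carrying the text tokens.
dialog_event = {
    "id": f"de:{uuid.uuid4()}",
    "speakerUri": speaker_uri,
    "span": {"startTime": datetime.now(timezone.utc).isoformat().replace("+00:00", "Z")},
    "features": {"text": {"mimeType": "text/plain", "tokens": [{"value": "Hello floor"}]}},
}

# Layers 2 and 3: the utterance event and the envelope around it.
envelope = {
    "openFloor": {
        "schema": {"version": "1.0.0"},
        "conversation": {"id": "conv:demo"},
        "sender": {"speakerUri": speaker_uri},
        "events": [{"eventType": "utterance", "parameters": {"dialogEvent": dialog_event}}],
    }
}
```

This is the payload shape that `send_envelope` ultimately POSTs as JSON.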