---
name: agentkernel
description: >
  Spawn and orchestrate agents as local subprocesses or Kubernetes pods.
  Each agent runs with an independent runtime, conversation, tools,
  and skills. Use when a task benefits from parallel work, role
  specialization, persistent agent state, or sandboxed execution.
metadata:
  version: "2.1"
  pre-condition: "0"
---

# AgentKernel

Spawn and orchestrate agents from `<helpers>` blocks. Each agent runs in its own process (local subprocess or Kubernetes pod) with an independent runtime, conversation state, tools, and skills. You decide what agents to create and what to say to them. The kernel handles process lifecycle, networking, image management, and health checks.

## AgentKernel vs `agents` Skill

| | `agents` skill | `agentkernel` |
|---|---|---|
| **Backends** | Local subprocesses only | Local subprocesses or k8s pods |
| **Addressing** | By name (`call_async("my-agent", ...)`) | By UUID + secret nonce |
| **Protocol** | Anthropic Messages API | Custom SSE (TurnRequest/TurnResponse) |
| **Access control** | Open — any caller can talk to any agent | Nonce-secured single-owner |
| **Teams / capacity** | No | Yes |
| **Image packaging** | No | Yes (OCI images for k8s) |
| **AgentBus** | No | Yes |
| **Dependencies** | API server skill | None |

**Use `agents`** for lightweight local agent workflows where convenience matters — create by name, call by name, check event logs.

**Use `agentkernel`** when you need k8s deployment, container isolation, capacity management, nonce-secured access, or agentbus observability.

## Core Concepts

**Kernel**: `AgentKernel` is the entry point. It wires together the spawner, agent client, and storage. The backend determines where agents run.

**Backends**: Two backends are available:
- **local** — agents run as subprocesses on the same machine. No isolation, no config file needed. Good for development and quick experiments.
- **kubernetes** — agents run as pods in a k8s cluster. Full container isolation. Requires a config file with cluster details.

The entire API after initialization is identical across backends.

**SpawnRequest + SpawnInfo**: A `SpawnRequest` defines the agent identity (name, team, metadata). The `spawn_info` field carries agent-type-specific config (system prompt, model, tools, etc.) — e.g. `OpenClawSpawnInfo`.

**Nonce**: Each spawn returns a `SpawnResult` containing the agent record and a secret nonce. The nonce is required for all communication — it enforces single-owner access. Only the entity that spawned an agent can talk to it.

**AgentBus**: Optional observability/safety layer. When enabled, all LLM inference and code execution events are logged to an agent bus that can be inspected externally.

**Teams**: Logical groups with capacity limits. Spawning into a full team raises an error.

## Initialization

### Local backend

No config file needed. Agents run as subprocesses with the same permissions as the parent process.

<helpers>
from agentic.kernel import AgentKernel
from agentic.kernel.plugins.openclaw import OpenClawPlugin

kernel = AgentKernel(backend="local", plugins=[OpenClawPlugin()])
</helpers>

### Kubernetes backend

Agents run as pods in the `agentkernel-0` namespace. The config file is at `agentkernel/examples/agentkernel.yaml`:

```yaml
backend: kubernetes
namespace: agentkernel-0
base_image: your-registry.example.com/agentkernel:latest
kubeconfig: ~/.kube/config
registry_url: your-registry.example.com
debug: true
```

- `debug: true` preserves pods on failure for inspection (otherwise they are cleaned up automatically).

<helpers>
from agentic.kernel import AgentKernel
from agentic.kernel.plugins.openclaw import OpenClawPlugin

kernel = AgentKernel.from_config("agentkernel/examples/agentkernel.yaml", plugins=[OpenClawPlugin()])
</helpers>

## API

All API calls below work identically regardless of backend.

### Spawn an Agent

<helpers>
import os
from agentic.kernel import SpawnRequest
from agentic.kernel.plugins.openclaw import OpenClawSpawnInfo

result = await kernel.spawner.spawn(SpawnRequest(
    name="researcher",
    agent_type="openclaw",
    metadata={"role": "research"},
    spawn_info=OpenClawSpawnInfo(
        system_prompt="You are a research specialist. Be thorough and cite sources.",
        model="claude-sonnet-4-5",
        api_key=os.environ.get("LLM_API_KEY", ""),
    ),
))

agent = result.agent  # Agent(id, name, team_id, state, metadata, ...)
nonce = result.nonce  # Secret — required for all communication
print(f"Spawned: {agent.id} ({agent.name})")
</helpers>

`SpawnRequest` fields:
- `name` — agent name (also used in k8s pod naming)
- `team_id` — team for capacity tracking (optional, default: "")
- `metadata` — arbitrary labels for discovery (e.g. `{"role": "worker"}`)
- `image_id` — custom image from packaging (optional, defaults to `base_image` in k8s)
- `spawn_info` — agent-type-specific config (e.g. `OpenClawSpawnInfo`)
- `env` — extra environment variables forwarded to the agent process

`OpenClawSpawnInfo` fields:
- `system_prompt` — system prompt for the agent
- `model` — LLM model name (default: `"claude-sonnet-4-5"`)
- `provider` — LLM provider (default: `"anthropic"`)
- `tools` — list of tool names to enable (default: `["bash"]`)
- `thinking_level` — thinking level: `"none"`, `"low"`, `"medium"`, `"high"`
- `api_key` — LLM API key (also forwarded from host `LLM_API_KEY` env var)
- `base_url` — override LLM API base URL

### Send a Message (Turn)

Use the `ask()` helper to send a message and get the full response:

<helpers>
response = await ask(kernel, agent.id, nonce, "What are the latest findings on topic X?")
print(response)
</helpers>

The agent maintains conversation state — subsequent turns see the full history.

For manual streaming (e.g. to display progress), use `kernel.agent_client.turn()` directly — note `end=""` to avoid extra newlines between tokens:

<helpers>
import json
from agentic.kernel import TurnRequest

request = TurnRequest(
    agent_id=agent.id,
    nonce=nonce,
    body=json.dumps({
        "messages": [{"role": "user", "content": "What are the latest findings on topic X?"}]
    }).encode(),
)

response_text = []
async for chunk in kernel.agent_client.turn(request):
    if chunk.body:
        print(chunk.body, end="", flush=True)
        response_text.append(chunk.body)
    if chunk.error:
        print(f"\nError: {chunk.error}")
full_response = "".join(response_text)
</helpers>

### Get History

<helpers>
history = await kernel.agent_client.get_history(agent.id, last_n=5)
for entry in history:
    print(f"[{entry['role']}] {entry['content'][:100]}")
</helpers>

### Get Agent Info

<helpers>
info = await kernel.agent_client.get_info(agent.id)
print(f"pid={info['pid']} cwd={info['cwd']} uid={info['uid']}")
</helpers>

### Check Status

<helpers>
statuses = await kernel.status()
for s in statuses:
    line = f"{s['name']}: state={s['state']} live={s['live']}"
    if s.get('pod_phase'):  # k8s backend
        line += f" pod={s['pod_phase']}"
    if s.get('process_alive') is not None:  # local backend
        line += f" process_alive={s['process_alive']}"
    print(line)
</helpers>

### Kill an Agent

<helpers>
await kernel.spawner.kill(agent.id)
</helpers>

### Clean Up All Agents

<helpers>
await kernel.cleanup()
</helpers>

## Teams

Teams reserve capacity and group agents together.

<helpers>
from agentic.kernel import CreateTeamRequest

# Reserve capacity
await kernel.spawner.create_team(CreateTeamRequest(
    team_id="analysis-team",
    resources={"cpu": 4},
))

# Spawn into the team
result = await kernel.spawner.spawn(SpawnRequest(
    name="analyst",
    team_id="analysis-team",
    agent_type="openclaw",
    spawn_info=OpenClawSpawnInfo(
        system_prompt="You are a data analyst.",
        api_key=os.environ.get("LLM_API_KEY", ""),
    ),
))

# Delete team (kills all agents first)
await kernel.spawner.delete_team("analysis-team")
</helpers>

## AgentBus

AgentBus adds observability and safety to agent execution. When enabled, the agent logs all LLM inference and code execution events to a bus that can be inspected via the agentbus CLI.

<helpers>
from agentic.kernel import AgentBusConfig

result = await kernel.spawner.spawn(SpawnRequest(
    name="worker",
    agent_type="openclaw",
    spawn_info=OpenClawSpawnInfo(
        system_prompt="You are a helpful worker.",
        api_key=os.environ.get("LLM_API_KEY", ""),
    ),
    agentbus=AgentBusConfig(
        port=8095,
        disable_safety=False,
    ),
))
</helpers>

To inspect the bus, use the agentbus skill.

```bash
# Kubernetes backend — port-forward first
kubectl --kubeconfig ~/.kube/config \
  -n agentkernel-0 port-forward pod/agent-<id-prefix> 8095:8095
# Then inspect the bus via the agentbus skill
```

The bus ID follows the pattern `{agent_name}.{agent_uuid}`.
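
For example, the bus ID can be derived from the agent record returned at spawn time (assuming, as in the spawn example above, that the record exposes `name` and `id`):

```python
from dataclasses import dataclass

@dataclass
class Agent:
    """Minimal stand-in for the kernel's agent record."""
    id: str
    name: str

def bus_id(agent: Agent) -> str:
    # Bus IDs follow the pattern {agent_name}.{agent_uuid}
    return f"{agent.name}.{agent.id}"
```

With a real spawn, this would be `bus_id(result.agent)`.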

## Patterns

### Fan-out / Fan-in

Spawn specialists, query them in parallel, synthesize results.

<helpers>
import asyncio

# Spawn specialists
researcher_r = await kernel.spawner.spawn(SpawnRequest(
    name="researcher", agent_type="openclaw", spawn_info=OpenClawSpawnInfo(
        system_prompt="You are a research specialist.",
        api_key=os.environ.get("LLM_API_KEY", ""),
    ),
))
analyst_r = await kernel.spawner.spawn(SpawnRequest(
    name="analyst", agent_type="openclaw", spawn_info=OpenClawSpawnInfo(
        system_prompt="You are a data analyst.",
        api_key=os.environ.get("LLM_API_KEY", ""),
    ),
))

# Fan out — ask() collects streaming chunks into a single string
research_task = asyncio.create_task(
    ask(kernel, researcher_r.agent.id, researcher_r.nonce, "Find papers on quantum error correction")
)
analysis_task = asyncio.create_task(
    ask(kernel, analyst_r.agent.id, analyst_r.nonce, "Run cost-benefit analysis on approach X")
)
research, analysis = await asyncio.gather(research_task, analysis_task)

print(f"Research: {research[:200]}")
print(f"Analysis: {analysis[:200]}")
</helpers>

### Pipeline

One agent's output feeds the next.

<helpers>
raw_data = await ask(kernel, researcher_r.agent.id, researcher_r.nonce, "Gather data on topic X")
analysis = await ask(kernel, analyst_r.agent.id, analyst_r.nonce, f"Analyze this data:\n{raw_data}")
print(analysis)
</helpers>

### Image Packaging

Bundle custom code into agent images. On the local backend, bundles are copied to a directory; on k8s, an OCI image is built and pushed to the registry.

<helpers>
from agentic.kernel import SourceBundle

# Upload code to blob storage
helpers_uri = kernel.blob_store.upload_dir("./my_helpers/")

# Build an agent image with the bundle
job = await kernel.packaging.create_agent_image(
    name="custom-worker",
    bundles=[SourceBundle(uri=helpers_uri, labels={"name": "my_helpers"})],
)
if job.image:
    # Spawn an agent using the custom image
    result = await kernel.spawner.spawn(SpawnRequest(
        name="custom-agent",
        agent_type="openclaw",
        image_id=job.image.id,
        spawn_info=OpenClawSpawnInfo(
            system_prompt="You have custom tools available.",
            api_key=os.environ.get("LLM_API_KEY", ""),
        ),
    ))
</helpers>

## Lifecycle

- Agents persist (as subprocesses or pods) until explicitly killed. Always clean up when done.
- Each agent has one conversation and one owner. The nonce enforces this — only the spawner can communicate with its agent.
- Teams have capacity limits. Spawning into a full team raises `ValueError`.
- The `LLM_API_KEY` and `OPENAI_API_KEY` environment variables are automatically forwarded to agent processes.

## Operations

**Note**: If behind a proxy, configure `HTTP_PROXY`/`HTTPS_PROXY` environment variables.

### Run examples locally

```bash
# Single agent, local backend (no config file needed)
LLM_API_KEY=... uv run python -m agentkernel.examples.simple_agent

# Team scenario, local backend
LLM_API_KEY=... uv run python -m agentkernel.examples.team_scenario
```

### Run examples on Kubernetes

```bash
# run_k8s_scenario.sh runs the scenario against the configured k8s cluster
LLM_API_KEY=... ./agentkernel/scripts/run_k8s_scenario.sh simple_agent
LLM_API_KEY=... ./agentkernel/scripts/run_k8s_scenario.sh team_scenario
```

### Build and push the base image (k8s only)

```bash
./scripts/build_base_image.sh --force-base
```

### Clean up cluster resources (k8s only)

```bash
./agentkernel/scripts/cleanup_k8s.sh            # delete all agentkernel pods/svc/cm
./agentkernel/scripts/cleanup_k8s.sh --dry-run  # preview what would be deleted
```