metadata_version string | name string | version string | summary string | description string | description_content_type string | author string | author_email string | maintainer string | maintainer_email string | license string | keywords string | classifiers list | platform list | home_page string | download_url string | requires_python string | requires list | provides list | obsoletes list | requires_dist list | provides_dist list | obsoletes_dist list | requires_external list | project_urls list | uploaded_via string | upload_time timestamp[us] | filename string | size int64 | path string | python_version string | packagetype string | comment_text string | has_signature bool | md5_digest string | sha256_digest string | blake2_256_digest string | license_expression string | license_files list | recent_7d_downloads int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2.4 | tunnel-manager | 1.1.14 | Create SSH Tunnels to your remote hosts and host as an MCP Server for Agentic AI! | # Tunnel Manager - A2A | AG-UI | MCP


















*Version: 1.1.14*
## Overview
This project provides a Python-based `Tunnel` class for secure SSH connections and file transfers, integrated with a FastMCP server (`tunnel_manager_mcp.py`) to expose these capabilities as tools for AI-driven workflows. The implementation supports both standard SSH (e.g., for local networks) and Teleport's secure access platform, leveraging the `paramiko` library for SSH operations.
### Features
### Tunnel Class
- **Purpose**: Facilitates secure SSH connections, file transfers, and key management for single or multiple hosts.
- **Key Functionality**:
- **Run Remote Commands**: Execute shell commands on a remote host and retrieve output.
- **File Upload/Download**: Transfer files to/from a single host or all hosts in an inventory group using SFTP.
- **Passwordless SSH Setup**: Configure key-based authentication for secure, passwordless access, with support for RSA and Ed25519 key types.
- **SSH Config Management**: Copy local SSH config files to remote hosts.
- **Key Rotation**: Generate and deploy new SSH key pairs (RSA or Ed25519), updating `authorized_keys`.
- **Inventory Support**: Operate on multiple hosts defined in an Ansible-style YAML inventory, with group targeting (e.g., `all`, `homelab`, `poweredge`).
- **Teleport Support**: Seamlessly integrates with Teleport's certificate-based authentication and proxying.
- **Configuration Flexibility**: Loads SSH settings from `~/.ssh/config` by default, with optional overrides for username, password, identity files, certificates, and proxy commands.
- **Logging**: Optional file-based logging for debugging and auditing.
- **Parallel Execution**: Support for parallel operations across multiple hosts with configurable thread limits.
- **Key Type Support**: Explicit support for both RSA and Ed25519 keys in authentication, generation, and rotation for enhanced security and compatibility.
### FastMCP Server
- **Purpose**: Exposes `Tunnel` class functionality as a FastMCP server, enabling AI tools to perform remote operations programmatically.
- **Tools Provided**:
- `run_command_on_remote_host`: Runs a shell command on a single remote host.
- `send_file_to_remote_host`: Uploads a file to a single remote host via SFTP.
- `receive_file_from_remote_host`: Downloads a file from a single remote host via SFTP.
- `check_ssh_server`: Checks if the SSH server is running and configured for key-based authentication.
- `test_key_auth`: Tests key-based authentication for a host.
- `setup_passwordless_ssh`: Sets up passwordless SSH for a single host.
- `copy_ssh_config`: Copies an SSH config file to a single remote host.
- `rotate_ssh_key`: Rotates SSH keys for a single host.
- `remove_host_key`: Removes a host’s key from the local `known_hosts` file.
- `configure_key_auth_on_inventory`: Sets up passwordless SSH for all hosts in an inventory group.
- `run_command_on_inventory`: Runs a command on all hosts in an inventory group.
- `copy_ssh_config_on_inventory`: Copies an SSH config file to all hosts in an inventory group.
- `rotate_ssh_key_on_inventory`: Rotates SSH keys for all hosts in an inventory group.
- `send_file_to_inventory`: Uploads a file to all hosts in an inventory group via SFTP.
- `receive_file_from_inventory`: Downloads a file from all hosts in an inventory group via SFTP.
- **Transport Options**: Supports `stdio` (for local scripting) and `http` (for networked access) transport modes.
- **Progress Reporting**: Integrates with FastMCP's `Context` for progress updates during operations.
- **Logging**: Comprehensive logging to a file (`tunnel_mcp.log` by default) or a user-specified file.
## Usage
### CLI
| Short Flag | Long Flag | Description | Required | Default Value |
|------------|----------------------|----------------------------------------------------------|----------|---------------|
| -h | --help | Show usage for the script | No | None |
| | --log-file | Log to specified file (default: console output) | No | Console |
| | setup-all | Setup passwordless SSH for all hosts in inventory | Yes* | None |
| | --inventory | YAML inventory path | Yes | None |
| | --shared-key-path | Path to shared private key | No | ~/.ssh/id_shared |
| | --key-type | Key type (rsa or ed25519) | No | ed25519 |
| | --group | Inventory group to target | No | all |
| | --parallel | Run operation in parallel | No | False |
| | --max-threads | Max threads for parallel execution | No | 5 |
| | run-command | Run a shell command on all hosts in inventory | Yes* | None |
| | --remote-command | Shell command to run | Yes | None |
| | copy-config | Copy SSH config to all hosts in inventory | Yes* | None |
| | --local-config-path | Local SSH config path | Yes | None |
| | --remote-config-path | Remote path for SSH config | No | ~/.ssh/config |
| | rotate-key | Rotate SSH keys for all hosts in inventory | Yes* | None |
| | --key-prefix | Prefix for new key paths (appends hostname) | No | ~/.ssh/id_ |
| | --key-type | Key type (rsa or ed25519) | No | ed25519 |
| | send-file | Upload a file to all hosts in inventory | Yes* | None |
| | --local-path | Local file path to upload | Yes | None |
| | --remote-path | Remote destination path | Yes | None |
| | receive-file | Download a file from all hosts in inventory | Yes* | None |
| | --remote-path | Remote file path to download | Yes | None |
| | --local-path-prefix | Local directory path prefix to save files | Yes | None |
### Notes
One of the commands (`setup-all`, `run-command`, `copy-config`, `rotate-key`, `send-file`, `receive-file`) must be given as the first argument to the `tunnel-manager` CLI. Each command also has required arguments that must be specified with flags:
- `setup-all`: Requires `--inventory`.
- `run-command`: Requires `--inventory` and `--remote-command`.
- `copy-config`: Requires `--inventory` and `--local-config-path`.
- `rotate-key`: Requires `--inventory`.
- `send-file`: Requires `--inventory`, `--local-path`, and `--remote-path`.
- `receive-file`: Requires `--inventory`, `--remote-path`, and `--local-path-prefix`.
### Additional Notes
- Ensure `ansible_host` values in `inventory.yml` are resolvable IPs or hostnames.
- Update `ansible_ssh_private_key_file` in the inventory after running `rotate-key`.
- Use `--log-file` for file-based logging or omit for console output.
- The `--parallel` option speeds up operations but may overload resources; adjust `--max-threads` as needed.
- The `receive-file` command saves files to `local_path_prefix/<hostname>/<filename>` to preserve original filenames and avoid conflicts.
- Ed25519 keys are recommended for better security and performance over RSA, but RSA is supported for compatibility with older systems.
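The `--parallel`/`--max-threads` fan-out can be modeled with a thread pool. The sketch below is illustrative only and does not reflect the package's internal implementation; `run_on_host` is a stand-in for the real per-host SSH call:

```python
from concurrent.futures import ThreadPoolExecutor, as_completed

def run_on_host(host: str, command: str) -> str:
    # Stand-in for the real SSH call; assumed for illustration only.
    return f"{host}: ran {command!r}"

def run_on_inventory(hosts, command, parallel=False, max_threads=5):
    """Fan a command out to every host, optionally in parallel."""
    if not parallel:
        return [run_on_host(h, command) for h in hosts]
    results = []
    with ThreadPoolExecutor(max_workers=max_threads) as pool:
        # One future per host; collect results as they complete.
        futures = {pool.submit(run_on_host, h, command): h for h in hosts}
        for future in as_completed(futures):
            results.append(future.result())
    return results
```

Raising `max_threads` trades resource usage for speed, which is why the note above suggests tuning it per environment.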
#### 1. Setup Passwordless SSH
Set up passwordless SSH for hosts in the inventory, distributing a shared key. Use `--key-type` to specify RSA or Ed25519 (default: ed25519).
- **Target `all` group (sequential, Ed25519)**:
```bash
tunnel-manager setup-all --inventory inventory.yml --shared-key-path ~/.ssh/id_shared --key-type ed25519
```
- **Target `homelab` group (parallel, 3 threads, RSA)**:
```bash
tunnel-manager setup-all --inventory inventory.yml --shared-key-path ~/.ssh/id_shared_rsa --key-type rsa --group homelab --parallel --max-threads 3
```
- **Target `poweredge` group (sequential, Ed25519)**:
```bash
tunnel-manager --log-file setup_poweredge.log setup-all --inventory inventory.yml --shared-key-path ~/.ssh/id_shared --key-type ed25519 --group poweredge
```
#### 2. Run a Command
Execute a shell command on all hosts in the specified group.
- **Run `uptime` on `all` group (sequential)**:
```bash
tunnel-manager run-command --inventory inventory.yml --remote-command "uptime"
```
- **Run `df -h` on `homelab` group (parallel, 5 threads)**:
```bash
tunnel-manager run-command --inventory inventory.yml --remote-command "df -h" --group homelab --parallel --max-threads 5
```
- **Run `whoami` on `poweredge` group (sequential)**:
```bash
tunnel-manager run-command --inventory inventory.yml --remote-command "whoami" --group poweredge
```
#### 3. Copy SSH Config
Copy a local SSH config file to the remote hosts’ `~/.ssh/config`.
- **Copy to `all` group (sequential)**:
```bash
tunnel-manager copy-config --inventory inventory.yml --local-config-path ~/.ssh/config
```
- **Copy to `homelab` group (parallel, 4 threads)**:
```bash
tunnel-manager copy-config --inventory inventory.yml --local-config-path ~/.ssh/config --group homelab --parallel --max-threads 4
```
- **Copy to `poweredge` group with custom remote path**:
```bash
tunnel-manager --log-file copy_config.log copy-config --inventory inventory.yml --local-config-path ~/.ssh/config --remote-config-path ~/.ssh/custom_config --group poweredge
```
#### 4. Rotate SSH Keys
Rotate SSH keys for hosts, generating new keys with a prefix. Use `--key-type` to specify RSA or Ed25519 (default: ed25519).
- **Rotate keys for `all` group (sequential, Ed25519)**:
```bash
tunnel-manager rotate-key --inventory inventory.yml --key-prefix ~/.ssh/id_ --key-type ed25519
```
- **Rotate keys for `homelab` group (parallel, 3 threads, RSA)**:
```bash
tunnel-manager rotate-key --inventory inventory.yml --key-prefix ~/.ssh/id_rsa_ --key-type rsa --group homelab --parallel --max-threads 3
```
- **Rotate keys for `poweredge` group (sequential, Ed25519)**:
```bash
tunnel-manager --log-file rotate.log rotate-key --inventory inventory.yml --key-prefix ~/.ssh/id_ --key-type ed25519 --group poweredge
```
#### 5. Upload a File
Upload a local file to all hosts in the specified group.
- **Upload to `all` group (sequential)**:
```bash
tunnel-manager send-file --inventory inventory.yml --local-path ./myfile.txt --remote-path /home/user/myfile.txt
```
- **Upload to `homelab` group (parallel, 3 threads)**:
```bash
tunnel-manager send-file --inventory inventory.yml --local-path ./myfile.txt --remote-path /home/user/myfile.txt --group homelab --parallel --max-threads 3
```
- **Upload to `poweredge` group (sequential)**:
```bash
tunnel-manager --log-file upload_poweredge.log send-file --inventory inventory.yml --local-path ./myfile.txt --remote-path /home/user/myfile.txt --group poweredge
```
#### 6. Download a File
Download a file from all hosts in the specified group, saving to host-specific subdirectories (e.g., `downloads/R510/myfile.txt`).
- **Download from `all` group (sequential)**:
```bash
tunnel-manager receive-file --inventory inventory.yml --remote-path /home/user/myfile.txt --local-path-prefix ./downloads
```
- **Download from `homelab` group (parallel, 3 threads)**:
```bash
tunnel-manager receive-file --inventory inventory.yml --remote-path /home/user/myfile.txt --local-path-prefix ./downloads --group homelab --parallel --max-threads 3
```
- **Download from `poweredge` group (sequential)**:
```bash
tunnel-manager --log-file download_poweredge.log receive-file --inventory inventory.yml --remote-path /home/user/myfile.txt --local-path-prefix ./downloads --group poweredge
```
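The per-host save layout used by `receive-file` (`local_path_prefix/<hostname>/<filename>`) can be sketched as a simple path join — illustrative only; the actual path handling lives inside the package:

```python
import os

def local_download_path(prefix: str, hostname: str, remote_path: str) -> str:
    """Mirror a remote file under prefix/<hostname>/<filename>."""
    filename = os.path.basename(remote_path)
    return os.path.join(prefix, hostname, filename)
```

Keeping the hostname as a subdirectory is what prevents files with the same name from different hosts from overwriting each other.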
### Tunnel Manager Inventory
**Inventory File Example (`inventory.yml`)**:
```yaml
all:
  hosts:
    r510:
      ansible_host: 192.168.1.10
      ansible_user: admin
      ansible_ssh_private_key_file: "~/.ssh/id_ed25519"
    r710:
      ansible_host: 192.168.1.11
      ansible_user: admin
      ansible_ssh_pass: mypassword
    gr1080:
      ansible_host: 192.168.1.14
      ansible_user: admin
      ansible_ssh_private_key_file: "~/.ssh/id_rsa"
homelab:
  hosts:
    r510:
      ansible_host: 192.168.1.10
      ansible_user: admin
      ansible_ssh_private_key_file: "~/.ssh/id_ed25519"
    r710:
      ansible_host: 192.168.1.11
      ansible_user: admin
      ansible_ssh_pass: mypassword
    gr1080:
      ansible_host: 192.168.1.14
      ansible_user: admin
      ansible_ssh_private_key_file: "~/.ssh/id_rsa"
poweredge:
  hosts:
    r510:
      ansible_host: 192.168.1.10
      ansible_user: admin
      ansible_ssh_private_key_file: "~/.ssh/id_ed25519"
    r710:
      ansible_host: 192.168.1.11
      ansible_user: admin
      ansible_ssh_pass: mypassword
```
Replace IPs, usernames, and passwords with your actual values.
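Parsed with any YAML loader, the inventory above becomes a nested dict keyed by group name. A minimal sketch of pulling the hosts for one group (hand-rolled for illustration — the package's own inventory handling may differ):

```python
# Nested dict mirroring a subset of the YAML inventory example above.
inventory = {
    "all": {
        "hosts": {
            "r510": {"ansible_host": "192.168.1.10", "ansible_user": "admin"},
            "r710": {"ansible_host": "192.168.1.11", "ansible_user": "admin"},
        }
    },
    "poweredge": {
        "hosts": {
            "r510": {"ansible_host": "192.168.1.10", "ansible_user": "admin"},
        }
    },
}

def hosts_in_group(inventory: dict, group: str = "all") -> list:
    """Return (name, address) pairs for every host in a group."""
    hosts = inventory.get(group, {}).get("hosts", {})
    return [(name, host_vars["ansible_host"]) for name, host_vars in hosts.items()]
```

This is why the notes above stress that `ansible_host` values must be resolvable: each address is fed directly into the SSH connection for its host.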
### MCP CLI
| Short Flag | Long Flag | Description |
|------------|------------------------------------|-----------------------------------------------------------------------------|
| -h | --help | Display help information |
| -t | --transport | Transport method: 'stdio', 'http', or 'sse' [legacy] (default: stdio) |
| -s | --host | Host address for HTTP transport (default: 0.0.0.0) |
| -p | --port | Port number for HTTP transport (default: 8000) |
| | --auth-type | Authentication type: 'none', 'static', 'jwt', 'oauth-proxy', 'oidc-proxy', 'remote-oauth' (default: none) |
| | --token-jwks-uri | JWKS URI for JWT verification |
| | --token-issuer | Issuer for JWT verification |
| | --token-audience | Audience for JWT verification |
| | --oauth-upstream-auth-endpoint | Upstream authorization endpoint for OAuth Proxy |
| | --oauth-upstream-token-endpoint | Upstream token endpoint for OAuth Proxy |
| | --oauth-upstream-client-id | Upstream client ID for OAuth Proxy |
| | --oauth-upstream-client-secret | Upstream client secret for OAuth Proxy |
| | --oauth-base-url | Base URL for OAuth Proxy |
| | --oidc-config-url | OIDC configuration URL |
| | --oidc-client-id | OIDC client ID |
| | --oidc-client-secret | OIDC client secret |
| | --oidc-base-url | Base URL for OIDC Proxy |
| | --remote-auth-servers | Comma-separated list of authorization servers for Remote OAuth |
| | --remote-base-url | Base URL for Remote OAuth |
| | --allowed-client-redirect-uris | Comma-separated list of allowed client redirect URIs |
| | --eunomia-type | Eunomia authorization type: 'none', 'embedded', 'remote' (default: none) |
| | --eunomia-policy-file | Policy file for embedded Eunomia (default: mcp_policies.json) |
| | --eunomia-remote-url | URL for remote Eunomia server |
### Using as an MCP Server
The MCP Server can be run in two modes: `stdio` (for local testing) or `http` (for networked access). To start the server, use the following commands:
#### Run in stdio mode (default):
```bash
tunnel-manager-mcp --transport "stdio"
```
#### Run in HTTP mode:
```bash
tunnel-manager-mcp --transport "http" --host "0.0.0.0" --port "8000"
```
### A2A CLI
#### Endpoints
- **Web UI**: `http://localhost:8000/` (if enabled)
- **A2A**: `http://localhost:8000/a2a` (Discovery: `/a2a/.well-known/agent.json`)
- **AG-UI**: `http://localhost:8000/ag-ui` (POST)

| Short Flag | Long Flag | Description |
|------------|-------------------|------------------------------------------------------------------------|
| -h | --help | Display help information |
| | --host | Host to bind the server to (default: 0.0.0.0) |
| | --port | Port to bind the server to (default: 9000) |
| | --reload | Enable auto-reload |
| | --provider | LLM Provider: 'openai', 'anthropic', 'google', 'huggingface' |
| | --model-id | LLM Model ID (default: qwen3:4b) |
| | --base-url | LLM Base URL (for OpenAI compatible providers) |
| | --api-key | LLM API Key |
| | --mcp-url | MCP Server URL (default: http://localhost:8000/mcp) |
| | --web | Enable Pydantic AI Web UI (default: False; env: ENABLE_WEB_UI) |
### Tunnel Class
The `Tunnel` class can be used standalone for SSH operations. Examples:
#### Using RSA Keys
```python
from tunnel_manager.tunnel_manager import Tunnel
# Initialize with a remote host (assumes ~/.ssh/config or explicit params)
tunnel = Tunnel(
    remote_host="192.168.1.10",
    username="admin",
    password="mypassword",
    identity_file="/path/to/id_rsa",
    certificate_file="/path/to/cert",  # Optional for Teleport
    proxy_command="tsh proxy ssh %h",  # Optional for Teleport
    ssh_config_file="~/.ssh/config",
)
# Connect and run a command
tunnel.connect()
out, err = tunnel.run_command("ls -la /tmp")
print(f"Output: {out}\nError: {err}")
# Upload a file
tunnel.send_file("/local/file.txt", "/remote/file.txt")
# Download a file
tunnel.receive_file("/remote/file.txt", "/local/downloaded.txt")
# Setup passwordless SSH with RSA
tunnel.setup_passwordless_ssh(local_key_path="~/.ssh/id_rsa", key_type="rsa")
# Copy SSH config
tunnel.copy_ssh_config("/local/ssh_config", "~/.ssh/config")
# Rotate SSH key with RSA
tunnel.rotate_ssh_key("/path/to/new_rsa_key", key_type="rsa")
# Close the connection
tunnel.close()
```
#### Using Ed25519 Keys
```python
from tunnel_manager.tunnel_manager import Tunnel
# Initialize with a remote host (assumes ~/.ssh/config or explicit params)
tunnel = Tunnel(
    remote_host="192.168.1.10",
    username="admin",
    password="mypassword",
    identity_file="/path/to/id_ed25519",
    certificate_file="/path/to/cert",  # Optional for Teleport
    proxy_command="tsh proxy ssh %h",  # Optional for Teleport
    ssh_config_file="~/.ssh/config",
)
# Connect and run a command
tunnel.connect()
out, err = tunnel.run_command("ls -la /tmp")
print(f"Output: {out}\nError: {err}")
# Upload a file
tunnel.send_file("/local/file.txt", "/remote/file.txt")
# Download a file
tunnel.receive_file("/remote/file.txt", "/local/downloaded.txt")
# Setup passwordless SSH with Ed25519
tunnel.setup_passwordless_ssh(local_key_path="~/.ssh/id_ed25519", key_type="ed25519")
# Copy SSH config
tunnel.copy_ssh_config("/local/ssh_config", "~/.ssh/config")
# Rotate SSH key with Ed25519
tunnel.rotate_ssh_key("/path/to/new_ed25519_key", key_type="ed25519")
# Close the connection
tunnel.close()
```
### Deploy MCP Server as a Service
The MCP server can be deployed using Docker, with configurable authentication, middleware, and Eunomia authorization.
#### Using Docker Run
```bash
docker pull knucklessg1/tunnel-manager:latest
docker run -d \
  --name tunnel-manager-mcp \
  -p 8004:8004 \
  -e HOST=0.0.0.0 \
  -e PORT=8004 \
  -e TRANSPORT=http \
  -e AUTH_TYPE=none \
  -e EUNOMIA_TYPE=none \
  knucklessg1/tunnel-manager:latest
```
For advanced authentication (e.g., JWT, OAuth Proxy, OIDC Proxy, Remote OAuth) or Eunomia, add the relevant environment variables:
```bash
docker run -d \
  --name tunnel-manager-mcp \
  -p 8004:8004 \
  -e HOST=0.0.0.0 \
  -e PORT=8004 \
  -e TRANSPORT=http \
  -e AUTH_TYPE=oidc-proxy \
  -e OIDC_CONFIG_URL=https://provider.com/.well-known/openid-configuration \
  -e OIDC_CLIENT_ID=your-client-id \
  -e OIDC_CLIENT_SECRET=your-client-secret \
  -e OIDC_BASE_URL=https://your-server.com \
  -e ALLOWED_CLIENT_REDIRECT_URIS=http://localhost:*,https://*.example.com/* \
  -e EUNOMIA_TYPE=embedded \
  -e EUNOMIA_POLICY_FILE=/app/mcp_policies.json \
  knucklessg1/tunnel-manager:latest
```
#### Using Docker Compose
Create a `docker-compose.yml` file:
```yaml
services:
  tunnel-manager-mcp:
    image: knucklessg1/tunnel-manager:latest
    environment:
      - HOST=0.0.0.0
      - PORT=8004
      - TRANSPORT=http
      - AUTH_TYPE=none
      - EUNOMIA_TYPE=none
    ports:
      - 8004:8004
```
For advanced setups with authentication and Eunomia:
```yaml
services:
  tunnel-manager-mcp:
    image: knucklessg1/tunnel-manager:latest
    environment:
      - HOST=0.0.0.0
      - PORT=8004
      - TRANSPORT=http
      - AUTH_TYPE=oidc-proxy
      - OIDC_CONFIG_URL=https://provider.com/.well-known/openid-configuration
      - OIDC_CLIENT_ID=your-client-id
      - OIDC_CLIENT_SECRET=your-client-secret
      - OIDC_BASE_URL=https://your-server.com
      - ALLOWED_CLIENT_REDIRECT_URIS=http://localhost:*,https://*.example.com/*
      - EUNOMIA_TYPE=embedded
      - EUNOMIA_POLICY_FILE=/app/mcp_policies.json
    ports:
      - 8004:8004
    volumes:
      - ./mcp_policies.json:/app/mcp_policies.json
```
Run the service:
```bash
docker-compose up -d
```
#### Configure `mcp.json` for AI Integration
```json
{
  "mcpServers": {
    "tunnel_manager": {
      "command": "uv",
      "args": [
        "run",
        "--with",
        "tunnel-manager",
        "tunnel_manager_mcp"
      ],
      "env": {
        "TUNNEL_REMOTE_HOST": "192.168.1.12", // Optional
        "TUNNEL_USERNAME": "admin", // Optional
        "TUNNEL_PASSWORD": "", // Optional
        "TUNNEL_REMOTE_PORT": "22", // Optional
        "TUNNEL_IDENTITY_FILE": "", // Optional
        "TUNNEL_INVENTORY": "~/inventory.yaml", // Optional
        "TUNNEL_INVENTORY_GROUP": "all", // Optional
        "TUNNEL_PARALLEL": "true", // Optional
        "TUNNEL_CERTIFICATE": "", // Optional
        "TUNNEL_PROXY_COMMAND": "", // Optional
        "TUNNEL_LOG_FILE": "~/tunnel_log.txt", // Optional
        "TUNNEL_MAX_THREADS": "6" // Optional
      },
      "timeout": 200000
    }
  }
}
```
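Note that the `// Optional` annotations in the config above are JSONC-style comments, which strict JSON parsers reject; if you load such a file programmatically, strip them first. A rough sketch (the regex handles only end-of-line `//` comments and would mangle a `//` inside a string value, e.g. a URL):

```python
import json
import re

def load_jsonc(text: str) -> dict:
    # Strip end-of-line // comments (naive: breaks on "//" inside strings).
    stripped = re.sub(r"\s*//[^\n]*", "", text)
    return json.loads(stripped)

config = load_jsonc("""
{
  "mcpServers": {
    "tunnel_manager": {
      "command": "uv",  // comment stripped before parsing
      "timeout": 200000
    }
  }
}
""")
```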
## Install Python Package
```bash
python -m pip install tunnel-manager
```
or
```bash
uv pip install --upgrade tunnel-manager
```
## Repository Owners
<img width="100%" height="180em" src="https://github-readme-stats.vercel.app/api?username=Knucklessg1&show_icons=true&hide_border=true&&count_private=true&include_all_commits=true" />


| text/markdown | null | Audel Rouhi <knucklessg1@gmail.com> | null | null | MIT | null | [
"Development Status :: 5 - Production/Stable",
"License :: Public Domain",
"Environment :: Console",
"Operating System :: POSIX :: Linux",
"Programming Language :: Python :: 3"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"tree-sitter>=0.23.2",
"fastmcp>=3.0.0b1",
"paramiko>=4.0.0",
"fastmcp>=3.0.0b1; extra == \"mcp\"",
"eunomia-mcp>=0.3.10; extra == \"mcp\"",
"fastapi>=0.128.0; extra == \"mcp\"",
"pydantic-ai-slim[a2a,ag-ui,anthropic,fastmcp,google,groq,huggingface,mistral,openai,web]>=1.60.0; extra == \"a2a\"",
"pyda... | [] | [] | [] | [] | twine/6.2.0 CPython/3.12.3 | 2026-02-19T06:35:32.616757 | tunnel_manager-1.1.14.tar.gz | 51,477 | 5b/de/edf8037aeeb24190781b139004ce3f6c222ce17a483d30aeae54e510eb81/tunnel_manager-1.1.14.tar.gz | source | sdist | null | false | 001a7aced1e05cb7a3890a335b3ccfb1 | b8bf48a0c934071846beab2466e2621424fab8ef86e523dc9dd153f8b1427741 | 5bdeedf8037aeeb24190781b139004ce3f6c222ce17a483d30aeae54e510eb81 | null | [
"LICENSE"
] | 261 |
2.4 | traintrack-ai | 0.1.6 | TrainTrack Client: PyTorch training-time evaluation | # TrainTrack
**Training-time evaluation and win-rate tracking for LLMs.**
TrainTrack helps you monitor model behavior during training by running automated LLM-as-a-Judge evaluations on every checkpoint.
## Features
- 🚀 **Real-time Metrics**: Get immediate feedback on conciseness, helpfulness, and reasoning quality.
- 📊 **Win-rate Tracking**: Automatically track win-rates against an anchor checkpoint (baseline) or the previous step.
- 📚 **Built-in Benchmarks**: Integrated support for GPQA, MMLU-Pro, IFEval, and TruthfulQA.
- 🛠️ **Seamless Integration**: Works with standard PyTorch loops and HuggingFace Trainer.
## Quick Installation
```bash
pip install traintrack-ai
```
## Minimal Example
```python
from traintrack import TrainTrackHook
# 1. Initialize the hook
hook = TrainTrackHook(
    model=model,
    tokenizer=tokenizer,
    run_name="my-first-run",
    datasets=["reasoning", "helpfulness"]
)
# 2. Capture a baseline (optional)
hook.capture_anchor()
# 3. Add to your training loop
for step, batch in enumerate(train_dataloader):
    # ... training logic ...
    hook.step(step)
```
## Documentation
For full documentation and advanced configuration (custom metrics, rubrics, and category-based evaluation), visit:
[github.com/traintrack/traintrack](https://github.com/traintrack/traintrack)
## Tinker Integration
TrainTrack includes a Tinker inline evaluator integration that can be used with
`evaluator_builders`:
```python
from traintrack import BuildTrainTrackTinkerEvaluator
traintrack_builder = BuildTrainTrackTinkerEvaluator(
    run_name="my-tinker-run",
    categories=["reasoning", "helpfulness"],
    model_name="Qwen/Qwen2.5-1.5B-Instruct",
    eval_every_steps=10,
)
# Add `traintrack_builder` to your Tinker `evaluator_builders` list.
```
If you're using a low-level custom training loop, call:
`evaluator.evaluate_training_step(training_client=..., step=...)`
to let TrainTrack handle scheduling + snapshot + sampling + ingest.
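The scheduling part can be modeled as a simple step gate. This is an illustrative sketch of the `eval_every_steps` idea, not TrainTrack's internal logic:

```python
def should_evaluate(step: int, eval_every_steps: int = 10) -> bool:
    """Gate that fires every N training steps (step 0 included)."""
    return step % eval_every_steps == 0

# Which of the first 25 steps would trigger an evaluation.
fired = [s for s in range(25) if should_evaluate(s, eval_every_steps=10)]
# fired == [0, 10, 20]
```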
For a runnable single-eval smoke test, see:
`Canary/examples/tinker_sampling_evaluator_example.py`
For low blocking overhead during training, wire a small frequent evaluator into
`evaluator_builders` and larger suites into `infrequent_evaluator_builders`.
For a full training run example (real training + eval every N steps), see:
`Canary/examples/tinker_supervised_traintrack_training.py`
For a low-level Tinker quickstart-style example (Pig Latin with manual
forward/backward updates + TrainTrack eval every N steps), see:
`Canary/examples/tinker_pig_latin_with_traintrack.py`
For a real NoRobots finetune with TrainTrack evaluation on the NoRobots test set
(helpfulness metric, criteria + pairwise_anchor), see:
`Canary/examples/tinker_norobots_traintrack_finetune.py`
| text/markdown | TrainTrack Team | null | null | null | MIT | pytorch, llm, evaluation, training, ml | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3"
] | [] | null | null | >=3.9 | [] | [] | [] | [
"torch>=2.0.0",
"requests>=2.28.0",
"pydantic>=2.0.0",
"tinker; extra == \"tinker\"",
"tinker-cookbook; extra == \"tinker\""
] | [] | [] | [] | [
"Homepage, https://github.com/traintrack/traintrack"
] | twine/6.2.0 CPython/3.13.0 | 2026-02-19T06:35:30.620540 | traintrack_ai-0.1.6.tar.gz | 482,301 | 18/d3/ac7153bef57a1acc5e2dd90951f2cbc80fdc69d27bd1a21927076845cc4c/traintrack_ai-0.1.6.tar.gz | source | sdist | null | false | bf61cfab0667b04fe203382e77e30976 | 703665ab35d556e760a48e6882a282dc40f0882761b9e28a9ae2f9d7c12d7fe8 | 18d3ac7153bef57a1acc5e2dd90951f2cbc80fdc69d27bd1a21927076845cc4c | null | [] | 245 |
2.4 | searxng-mcp | 0.1.17 | SearXNG Search Engine MCP Server for Agentic AI! | # SearXNG - A2A | AG-UI | MCP


















*Version: 0.1.17*
## Overview
SearXNG MCP Server + A2A Server.
This package includes a Model Context Protocol (MCP) server and an out-of-the-box Agent2Agent (A2A) agent.
Perform privacy-respecting web searches using SearXNG through an MCP server!
This repository is actively maintained - contributions are welcome!
### Supports:
- Privacy-respecting metasearch
- Customizable search parameters (language, time range, categories, engines)
- Safe search levels
- Pagination control
- Basic authentication support
- Random instance selection
## MCP
### MCP Tools
| Function Name | Description | Tag(s) |
|:--------------|:---------------------------------------------------------------------------------------------------------------------------------------|:---------|
| `web_search` | Perform web searches using SearXNG, a privacy-respecting metasearch engine. Returns relevant web content with customizable parameters. | `search` |
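Under the hood, a SearXNG query is an HTTP GET against an instance's `/search` endpoint, and the customizable parameters map to query-string fields. A sketch of building such a request URL (parameter names follow SearXNG's public search API; the MCP tool's exact mapping is assumed):

```python
from urllib.parse import urlencode

def build_search_url(instance, query, language="en", time_range="",
                     categories="general", safesearch=1, pageno=1):
    """Compose a SearXNG /search URL returning JSON results."""
    params = {
        "q": query,
        "format": "json",          # machine-readable output
        "language": language,
        "categories": categories,
        "safesearch": safesearch,  # 0=off, 1=moderate, 2=strict
        "pageno": pageno,
    }
    if time_range:                 # e.g. "day", "week", "month", "year"
        params["time_range"] = time_range
    return f"{instance.rstrip('/')}/search?{urlencode(params)}"
```

For example, `build_search_url("https://searx.example.org", "artificial intelligence", time_range="week")` yields a URL a client could fetch directly (the instance hostname here is a placeholder).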
### Using as an MCP Server
The MCP Server can be run in two modes: `stdio` (for local testing) or `http` (for networked access). To start the server, use the following commands:
#### Run in stdio mode (default):
```bash
searxng-mcp --transport "stdio"
```
#### Run in HTTP mode:
```bash
searxng-mcp --transport "http" --host "0.0.0.0" --port "8000"
```
AI Prompt:
```text
Search for information about artificial intelligence
```
AI Response:
```text
Search completed successfully. Found 10 results for "artificial intelligence":
1. **What is Artificial Intelligence?**
URL: https://example.com/ai
Content: Artificial intelligence (AI) refers to the simulation of human intelligence in machines...
2. **AI Overview**
URL: https://example.org/ai-overview
Content: AI encompasses machine learning, deep learning, and more...
```
## A2A Agent
This package also includes an A2A agent server that can be used to interact with the SearXNG MCP server.
### Architecture:
```mermaid
---
config:
  layout: dagre
---
flowchart TB
    subgraph subGraph0["Agent Capabilities"]
        C["Agent"]
        B["A2A Server - Uvicorn/FastAPI"]
        D["MCP Tools"]
        F["Agent Skills"]
    end
    C --> D & F
    A["User Query"] --> B
    B --> C
    D --> E["Platform API"]
    C:::agent
    B:::server
    A:::server
    classDef server fill:#f9f,stroke:#333
    classDef agent fill:#bbf,stroke:#333,stroke-width:2px
    style B stroke:#000000,fill:#FFD600
    style D stroke:#000000,fill:#BBDEFB
    style F fill:#BBDEFB
    style A fill:#C8E6C9
    style subGraph0 fill:#FFF9C4
```
### Component Interaction Diagram
```mermaid
sequenceDiagram
    participant User
    participant Server as A2A Server
    participant Agent as Agent
    participant Skill as Agent Skills
    participant MCP as MCP Tools
    User->>Server: Send Query
    Server->>Agent: Invoke Agent
    Agent->>Skill: Analyze Skills Available
    Skill->>Agent: Provide Guidance on Next Steps
    Agent->>MCP: Invoke Tool
    MCP-->>Agent: Tool Response Returned
    Agent-->>Agent: Return Results Summarized
    Agent-->>Server: Final Response
    Server-->>User: Output
```
## Usage
### MCP CLI
| Short Flag | Long Flag | Description |
|------------|------------------------------------|-----------------------------------------------------------------------------|
| -h | --help | Display help information |
| -t | --transport | Transport method: 'stdio', 'http', or 'sse' [legacy] (default: stdio) |
| -s | --host | Host address for HTTP transport (default: 0.0.0.0) |
| -p | --port | Port number for HTTP transport (default: 8000) |
| | --auth-type | Authentication type: 'none', 'static', 'jwt', 'oauth-proxy', 'oidc-proxy', 'remote-oauth' (default: none) |
| | --token-jwks-uri | JWKS URI for JWT verification |
| | --token-issuer | Issuer for JWT verification |
| | --token-audience | Audience for JWT verification |
| | --oauth-upstream-auth-endpoint | Upstream authorization endpoint for OAuth Proxy |
| | --oauth-upstream-token-endpoint | Upstream token endpoint for OAuth Proxy |
| | --oauth-upstream-client-id | Upstream client ID for OAuth Proxy |
| | --oauth-upstream-client-secret | Upstream client secret for OAuth Proxy |
| | --oauth-base-url | Base URL for OAuth Proxy |
| | --oidc-config-url | OIDC configuration URL |
| | --oidc-client-id | OIDC client ID |
| | --oidc-client-secret | OIDC client secret |
| | --oidc-base-url | Base URL for OIDC Proxy |
| | --remote-auth-servers | Comma-separated list of authorization servers for Remote OAuth |
| | --remote-base-url | Base URL for Remote OAuth |
| | --allowed-client-redirect-uris | Comma-separated list of allowed client redirect URIs |
| | --eunomia-type | Eunomia authorization type: 'none', 'embedded', 'remote' (default: none) |
| | --eunomia-policy-file | Policy file for embedded Eunomia (default: mcp_policies.json) |
| | --eunomia-remote-url | URL for remote Eunomia server |
### A2A CLI
#### Endpoints
- **Web UI**: `http://localhost:8000/` (if enabled)
- **A2A**: `http://localhost:8000/a2a` (Discovery: `/a2a/.well-known/agent.json`)
- **AG-UI**: `http://localhost:8000/ag-ui` (POST)
| Short Flag | Long Flag | Description |
|------------|-------------------|------------------------------------------------------------------------|
| -h | --help | Display help information |
| | --host | Host to bind the server to (default: 0.0.0.0) |
| | --port | Port to bind the server to (default: 9000) |
| | --reload | Enable auto-reload |
| | --provider | LLM Provider: 'openai', 'anthropic', 'google', 'huggingface' |
| | --model-id | LLM Model ID (default: qwen/qwen3-coder-next) |
| | --base-url | LLM Base URL (for OpenAI compatible providers) |
| | --api-key | LLM API Key |
| | --mcp-url | MCP Server URL (default: http://localhost:8000/mcp) |
| | --web | Enable Pydantic AI Web UI (default: False; env: ENABLE_WEB_UI) |
### Agentic AI
`searxng-mcp` is designed to be used by Agentic AI systems. It provides a set of tools that allow agents to search the web using SearXNG.
## Agent-to-Agent (A2A)
This package also includes an A2A agent server that can be used to interact with the SearXNG MCP server.
### CLI
| Argument | Description | Default |
|-------------------|----------------------------------------------------------------|--------------------------------|
| `--host` | Host to bind the server to | `0.0.0.0` |
| `--port` | Port to bind the server to | `9000` |
| `--reload` | Enable auto-reload | `False` |
| `--provider` | LLM Provider (openai, anthropic, google, huggingface) | `openai` |
| `--model-id` | LLM Model ID | `qwen/qwen3-coder-next` |
| `--base-url` | LLM Base URL (for OpenAI compatible providers) | `http://ollama.arpa/v1` |
| `--api-key` | LLM API Key | `ollama` |
| `--mcp-url` | MCP Server URL | `http://searxng-mcp:8000/mcp` |
| `--allowed-tools` | List of allowed MCP tools | `web_search` |
### Examples
#### Run A2A Server
```bash
searxng-agent --provider openai --model-id gpt-4 --api-key sk-... --mcp-url http://localhost:8000/mcp
```
#### Run with Docker
```bash
docker run -e CMD=searxng-agent -p 8000:8000 searxng-mcp
```
## Docker
### Build
```bash
docker build -t searxng-mcp .
```
### Run MCP Server
```bash
docker run -p 8000:8000 searxng-mcp
```
### Run A2A Server
```bash
docker run -e CMD=searxng-agent -p 8001:8001 searxng-mcp
```
### Deploy MCP Server as a Service
The SearXNG MCP server can be deployed using Docker, with configurable authentication, middleware, and Eunomia authorization.
#### Using Docker Run
```bash
docker pull knucklessg1/searxng-mcp:latest
docker run -d \
--name searxng-mcp \
-p 8004:8004 \
-e HOST=0.0.0.0 \
-e PORT=8004 \
-e TRANSPORT=http \
-e AUTH_TYPE=none \
-e EUNOMIA_TYPE=none \
-e SEARXNG_URL=https://searxng.example.com \
-e SEARXNG_USERNAME=user \
-e SEARXNG_PASSWORD=pass \
-e USE_RANDOM_INSTANCE=false \
knucklessg1/searxng-mcp:latest
```
For advanced authentication (e.g., JWT, OAuth Proxy, OIDC Proxy, Remote OAuth) or Eunomia, add the relevant environment variables:
```bash
docker run -d \
--name searxng-mcp \
-p 8004:8004 \
-e HOST=0.0.0.0 \
-e PORT=8004 \
-e TRANSPORT=http \
-e AUTH_TYPE=oidc-proxy \
-e OIDC_CONFIG_URL=https://provider.com/.well-known/openid-configuration \
-e OIDC_CLIENT_ID=your-client-id \
-e OIDC_CLIENT_SECRET=your-client-secret \
-e OIDC_BASE_URL=https://your-server.com \
-e ALLOWED_CLIENT_REDIRECT_URIS=http://localhost:*,https://*.example.com/* \
-e EUNOMIA_TYPE=embedded \
-e EUNOMIA_POLICY_FILE=/app/mcp_policies.json \
-e SEARXNG_URL=https://searxng.example.com \
-e SEARXNG_USERNAME=user \
-e SEARXNG_PASSWORD=pass \
-e USE_RANDOM_INSTANCE=false \
knucklessg1/searxng-mcp:latest
```
#### Using Docker Compose
Create a `docker-compose.yml` file:
```yaml
services:
searxng-mcp:
image: knucklessg1/searxng-mcp:latest
environment:
- HOST=0.0.0.0
- PORT=8004
- TRANSPORT=http
- AUTH_TYPE=none
- EUNOMIA_TYPE=none
- SEARXNG_URL=https://searxng.example.com
- SEARXNG_USERNAME=user
- SEARXNG_PASSWORD=pass
- USE_RANDOM_INSTANCE=false
ports:
- 8004:8004
```
For advanced setups with authentication and Eunomia:
```yaml
services:
searxng-mcp:
image: knucklessg1/searxng-mcp:latest
environment:
- HOST=0.0.0.0
- PORT=8004
- TRANSPORT=http
- AUTH_TYPE=oidc-proxy
- OIDC_CONFIG_URL=https://provider.com/.well-known/openid-configuration
- OIDC_CLIENT_ID=your-client-id
- OIDC_CLIENT_SECRET=your-client-secret
- OIDC_BASE_URL=https://your-server.com
- ALLOWED_CLIENT_REDIRECT_URIS=http://localhost:*,https://*.example.com/*
- EUNOMIA_TYPE=embedded
- EUNOMIA_POLICY_FILE=/app/mcp_policies.json
- SEARXNG_URL=https://searxng.example.com
- SEARXNG_USERNAME=user
- SEARXNG_PASSWORD=pass
- USE_RANDOM_INSTANCE=false
ports:
- 8004:8004
volumes:
- ./mcp_policies.json:/app/mcp_policies.json
```
Run the service:
```bash
docker-compose up -d
```
#### Configure `mcp.json` for AI Integration
```json
{
"mcpServers": {
"searxng": {
"command": "uv",
"args": [
"run",
"--with",
"searxng-mcp",
"searxng-mcp"
],
"env": {
"SEARXNG_URL": "https://searxng.example.com",
"SEARXNG_USERNAME": "user",
"SEARXNG_PASSWORD": "pass",
"USE_RANDOM_INSTANCE": "false"
},
"timeout": 300000
}
}
}
```
## Install Python Package
```bash
python -m pip install searxng-mcp
```
```bash
uv pip install searxng-mcp
```
## Repository Owners
<img width="100%" height="180em" src="https://github-readme-stats.vercel.app/api?username=Knucklessg1&show_icons=true&hide_border=true&&count_private=true&include_all_commits=true" />


| text/markdown | null | Audel Rouhi <knucklessg1@gmail.com> | null | null | MIT | null | [
"Development Status :: 5 - Production/Stable",
"License :: Public Domain",
"Environment :: Console",
"Operating System :: POSIX :: Linux",
"Programming Language :: Python :: 3"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"tree-sitter>=0.23.2",
"fastmcp>=3.0.0b1; extra == \"mcp\"",
"eunomia-mcp>=0.3.10; extra == \"mcp\"",
"fastapi>=0.128.0; extra == \"mcp\"",
"pydantic-ai-slim[a2a,ag-ui,anthropic,fastmcp,google,groq,huggingface,mistral,openai,web]>=1.60.0; extra == \"a2a\"",
"pydantic-ai-skills>=v0.4.0; extra == \"a2a\"",
... | [] | [] | [] | [] | twine/6.2.0 CPython/3.12.3 | 2026-02-19T06:35:09.810450 | searxng_mcp-0.1.17.tar.gz | 33,460 | 62/ee/a910d7d9699c974d2d72893a43043fea536ec983312ac59434439b7dfae0/searxng_mcp-0.1.17.tar.gz | source | sdist | null | false | 6c01e48708240a80258d5fe230a10233 | d309fe4ff05d4fdd58853eb12fd4f38799a36d581fab28c5aabc193446e2af85 | 62eea910d7d9699c974d2d72893a43043fea536ec983312ac59434439b7dfae0 | null | [
"LICENSE"
] | 263 |
2.4 | markethub-ddn | 1.0.0 | MarketHub DDN - WebSocket SDK for Centrifugo with built-in offset persistence and automatic recovery | # centrifuge-python
[](https://github.com/centrifugal/centrifuge-python/actions/workflows/test.yml?query=event%3Apush+branch%3Amaster+workflow%3ATest)
[](https://pypi.python.org/pypi/centrifuge-python)
[](https://github.com/centrifugal/centrifuge-python)
[](https://github.com/centrifugal/centrifuge-python/blob/master/LICENSE)
This is a WebSocket real-time SDK for [Centrifugo](https://github.com/centrifugal/centrifugo) server (and any [Centrifuge-based](https://github.com/centrifugal/centrifuge) server) on top of the Python asyncio library.
> [!TIP]
> If you are looking for Centrifugo [server API](https://centrifugal.dev/docs/server/server_api) client – check out [pycent](https://github.com/centrifugal/pycent) instead.
Before starting to work with this library check out Centrifugo [client SDK API specification](https://centrifugal.dev/docs/transports/client_api) as it contains common information about Centrifugal real-time SDK behavior. This SDK supports all major features of Centrifugo client protocol - see [SDK feature matrix](https://centrifugal.dev/docs/transports/client_sdk#sdk-feature-matrix).
## Install
```
pip install centrifuge-python
```
Then in your code:
```
from centrifuge import Client
```
See [example code](https://github.com/centrifugal/centrifuge-python/blob/master/example.py) and [how to run it](#run-example) locally.
## JSON vs Protobuf protocols
By default, the SDK uses the JSON protocol. If you want to use the Protobuf protocol instead, pass the `use_protobuf=True` option to the `Client` constructor.
When using JSON protocol:
* all payloads (data to publish, connect/subscribe data) you pass to the library are encoded to JSON internally using `json.dumps` before being sent to the server, so make sure you pass only JSON-serializable data.
* all payloads received from the server are decoded to Python objects using `json.loads` internally before being passed to your code.
When using Protobuf protocol:
* all payloads you pass to the library must be `bytes` (or `None` if optional). If you pass non-`bytes` data, an exception will be raised.
* all payloads received from the library will be `bytes` (or `None` if not present).
* don't forget that even with the Protobuf protocol you can still carry JSON payloads - just encode them to `bytes` before passing them to the library.
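As a sketch of the last point, JSON payloads in Protobuf mode are just manually encoded bytes (the helper names here are illustrative, not part of the SDK):

```python
import json

# In Protobuf mode every payload must be bytes, so JSON data is
# serialized before publishing and parsed after receiving.
def encode_payload(obj) -> bytes:
    return json.dumps(obj).encode("utf-8")

def decode_payload(raw: bytes):
    return json.loads(raw.decode("utf-8"))

payload = encode_payload({"input": "hello"})  # bytes, safe to publish
assert decode_payload(payload) == {"input": "hello"}
```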
## Callbacks should not block
Event callbacks are awaited by the SDK internally, so the WebSocket read loop is blocked while a callback executes. If you need to perform long operations in a callback, move the work to a separate coroutine/task so the callback returns quickly and the SDK can continue reading data from the WebSocket.
Because the read loop is blocked while a callback runs, you cannot call awaitable SDK APIs from inside a callback - the SDK never gets a chance to read the reply, and you will get an `OperationTimeoutError` exception. The rule is the same: do the work asynchronously, for example with `asyncio.ensure_future`.
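The pattern can be sketched with plain asyncio (the `on_publication` callback here is a hypothetical stand-in, not the SDK's actual event API): schedule the heavy work as a task so the callback returns immediately.

```python
import asyncio

results = []

async def slow_work(data):
    # Simulate a long operation that must not block the read loop.
    await asyncio.sleep(0.01)
    results.append(data)

async def on_publication(data):
    # Return fast: schedule the heavy work instead of awaiting it here.
    asyncio.ensure_future(slow_work(data))

async def main():
    await on_publication("event-1")  # returns almost immediately
    await asyncio.sleep(0.05)        # give the background task time to run

asyncio.run(main())
assert results == ["event-1"]
```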
## Run example
To run [example](https://github.com/centrifugal/centrifuge-python/blob/master/example.py), first start Centrifugo with config like this:
```json
{
"client": {
"token": {
"hmac_secret_key": "secret"
}
},
"channel": {
"namespaces": [
{
"name": "example",
"presence": true,
"history_size": 300,
"history_ttl": "300s",
"join_leave": true,
"force_push_join_leave": true,
"allow_publish_for_subscriber": true,
"allow_presence_for_subscriber": true,
"allow_history_for_subscriber": true
}
]
}
}
```
And then:
```bash
python -m venv env
. env/bin/activate
make dev
python example.py
```
## Run tests
To run tests, first start a Centrifugo server:
```bash
docker pull centrifugo/centrifugo:v6
docker run -d -p 8000:8000 \
-e CENTRIFUGO_CLIENT_TOKEN_HMAC_SECRET_KEY="secret" \
-e CENTRIFUGO_CHANNEL_WITHOUT_NAMESPACE_ALLOWED_DELTA_TYPES="fossil" \
-e CENTRIFUGO_CHANNEL_WITHOUT_NAMESPACE_DELTA_PUBLISH="true" \
-e CENTRIFUGO_CHANNEL_WITHOUT_NAMESPACE_PRESENCE="true" \
-e CENTRIFUGO_CHANNEL_WITHOUT_NAMESPACE_JOIN_LEAVE="true" \
-e CENTRIFUGO_CHANNEL_WITHOUT_NAMESPACE_FORCE_PUSH_JOIN_LEAVE="true" \
-e CENTRIFUGO_CHANNEL_WITHOUT_NAMESPACE_HISTORY_SIZE="100" \
-e CENTRIFUGO_CHANNEL_WITHOUT_NAMESPACE_HISTORY_TTL="300s" \
-e CENTRIFUGO_CHANNEL_WITHOUT_NAMESPACE_FORCE_RECOVERY="true" \
-e CENTRIFUGO_CHANNEL_WITHOUT_NAMESPACE_ALLOW_PUBLISH_FOR_SUBSCRIBER="true" \
-e CENTRIFUGO_CHANNEL_WITHOUT_NAMESPACE_ALLOW_PRESENCE_FOR_SUBSCRIBER="true" \
-e CENTRIFUGO_CHANNEL_WITHOUT_NAMESPACE_ALLOW_HISTORY_FOR_SUBSCRIBER="true" \
-e CENTRIFUGO_CLIENT_SUBSCRIBE_TO_USER_PERSONAL_CHANNEL_ENABLED="true" \
-e CENTRIFUGO_LOG_LEVEL="trace" \
centrifugo/centrifugo:v6 centrifugo
```
And then (from cloned repo root):
```bash
python -m venv env
. env/bin/activate
make dev
make test
```
| text/markdown | MarketHub | null | null | null | null | Centrifugo, DDN, MarketHub, Offset Persistence, Pub/Sub, Realtime, Streaming, WebSocket | [
"Development Status :: 4 - Beta",
"Framework :: AsyncIO",
"Intended Audience :: Developers",
"Intended Audience :: Information Technology",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
"Programming Language :: Python",
"Programming Language :: Python :: 3",
"Progra... | [] | null | null | >=3.9 | [] | [] | [] | [
"protobuf<7.0.0,>=4.23.4",
"websockets<16.0.0,>=14.0.0",
"pre-commit~=3.5.0; extra == \"dev\"",
"ruff~=0.1.4; extra == \"dev\""
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.9.6 | 2026-02-19T06:35:07.652993 | markethub_ddn-1.0.0.tar.gz | 26,470 | 05/69/4e4cb29564ddd16f3ce679f5ffa16ebf3274c7a3769c7a2bedf4bca57c82/markethub_ddn-1.0.0.tar.gz | source | sdist | null | false | 61d517b3e74108dce56c3dbb840c370d | 6d7737574fe57e9bec3e53f34bb74626e90109c3a38866e1cccadf8f6d1212e2 | 05694e4cb29564ddd16f3ce679f5ffa16ebf3274c7a3769c7a2bedf4bca57c82 | MIT | [
"LICENSE"
] | 262 |
2.1 | dotted-notation | 0.32.2 | Dotted notation for safe nested data traversal with optional chaining, pattern matching, and transforms | # Dotted
Sometimes you want to fetch data from a deeply nested data structure. Dotted notation
helps you do that.
## Installation
pip install dotted-notation
Since this package includes the [**`dq`** command-line tool](#cli-dq), several data formats are supported:
| Format | Status |
|--------|----------|
| JSON | included |
| JSONL | included |
| CSV | included |
| YAML | optional |
| TOML | optional |
To install optional format support:
pip install dotted-notation[all]
Or pick only what you need:
pip install dotted-notation[yaml,toml]
## Table of Contents
- [Safe Traversal (Optional Chaining)](#safe-traversal-optional-chaining)
- [Why Dotted?](#why-dotted)
- [Breaking Changes](#breaking-changes)
- [API](#api)
- [Get](#get)
- [Update](#update)
- [Remove](#remove)
- [Match](#match)
- [Expand](#expand)
- [Overlaps](#overlaps)
- [Has](#has)
- [Mutable](#mutable)
- [Setdefault](#setdefault)
- [Pluck](#pluck)
- [Unpack](#unpack)
- [Build](#build)
- [Apply](#apply)
- [Assemble](#assemble)
- [Quote](#quote)
- [Normalize](#normalize)
- [AUTO](#auto)
- [Multi Operations](#multi-operations)
- [Paths](#paths)
- [Key fields](#key-fields)
- [Bracketed fields](#bracketed-fields)
- [Attr fields](#attr-fields)
- [Slicing](#slicing)
- [Dot notation for sequence indexing](#dot-notation-for-sequence-indexing)
- [Empty path (root access)](#empty-path-root-access)
- [Typing & Quoting](#typing--quoting)
- [Numeric types](#numeric-types)
- [Quoting](#quoting)
- [The numericize `#` operator](#the-numericize--operator)
- [Container types](#container-types)
- [Bytes literals](#bytes-literals)
- [Patterns](#patterns)
- [Wildcards](#wildcards)
- [Regular expressions](#regular-expressions)
- [The match-first operator](#the-match-first-operator)
- [Slicing vs Patterns](#slicing-vs-patterns)
- [Recursive Traversal](#recursive-traversal)
- [The recursive operator `*`](#the-recursive-operator-)
- [Recursive wildcard `**`](#recursive-wildcard-)
- [Depth slicing](#depth-slicing)
- [Recursive with value guard](#recursive-with-value-guard)
- [Recursive update and remove](#recursive-update-and-remove)
- [Recursive match](#recursive-match)
- [Grouping](#grouping)
- [Path grouping](#path-grouping)
- [Operation grouping](#operation-grouping)
- [Operators](#operators)
- [The append `+` operator](#the-append--operator)
- [The append-unique `+?` operator](#the-append-unique--operator)
- [The invert `-` operator](#the-invert---operator)
- [The NOP `~` operator](#the-nop--operator)
- [The cut `#` operator](#the-cut--operator)
- [The soft cut `##` operator](#the-soft-cut--operator)
- [The numericize `#` operator](#the-numericize--operator-1)
- [Filters](#filters)
- [The key-value filter](#the-key-value-filter)
- [The key-value first filter](#the-key-value-first-filter)
- [Conjunction vs disjunction](#conjunction-vs-disjunction)
- [Grouping with parentheses](#grouping-with-parentheses)
- [Filter negation and not-equals](#filter-negation-and-not-equals)
- [Boolean and None filter values](#boolean-and-none-filter-values)
- [Value guard](#value-guard)
- [Container filter values](#container-filter-values)
- [Type prefixes](#type-prefixes)
- [String glob patterns](#string-glob-patterns)
- [Bytes glob patterns](#bytes-glob-patterns)
- [Value groups](#value-groups)
- [Dotted filter keys](#dotted-filter-keys)
- [Slice notation in filter keys](#slice-notation-in-filter-keys)
- [Transforms](#transforms)
- [Built-in Transforms](#built-in-transforms)
- [Container transform arguments](#container-transform-arguments)
- [Custom Transforms](#custom-transforms)
- [Constants and Exceptions](#constants-and-exceptions)
- [CLI (`dq`)](#cli-dq)
- [File input](#file-input)
- [Format conversion](#format-conversion)
- [Path files](#path-files)
- [Projection](#projection)
- [Unpack](#unpack)
- [Pack](#pack)
- [FAQ](#faq)
- [Why do I get a tuple for my get?](#why-do-i-get-a-tuple-for-my-get)
- [How do I craft an efficient path?](#how-do-i-craft-an-efficient-path)
- [Why do I get a RuntimeError when updating with a slice filter?](#why-do-i-get-a-runtimeerror-when-updating-with-a-slice-filter)
<a id="safe-traversal-optional-chaining"></a>
## Safe Traversal (Optional Chaining)
Like JavaScript's optional chaining operator (`?.`), dotted safely handles missing paths.
If any part of the path doesn't exist, `get` returns `None` (or a specified default)
instead of raising an exception:
>>> import dotted
>>> d = {'a': {'b': 1}}
>>> dotted.get(d, 'a.b.c.d.e') # path doesn't exist
None
>>> dotted.get(d, 'a.b.c.d.e', 'default') # with default
'default'
>>> dotted.get(d, 'x.y.z', 42) # missing from the start
42
This makes dotted ideal for safely navigating deeply nested or uncertain data structures
without defensive coding or try/except blocks.
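For contrast, here is what the defensive plain-Python equivalent of a safe `get` looks like (a hand-rolled sketch of the idea, not dotted's implementation):

```python
def safe_get(obj, path, default=None):
    # Walk each dotted segment, bailing out to the default on any miss.
    for key in path.split("."):
        if isinstance(obj, dict) and key in obj:
            obj = obj[key]
        else:
            return default
    return obj

d = {'a': {'b': 1}}
assert safe_get(d, 'a.b') == 1
assert safe_get(d, 'a.b.c.d.e') is None
assert safe_get(d, 'x.y.z', 42) == 42
```

dotted handles this (plus patterns, slices, and transforms) without you writing the traversal loop yourself.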
<a id="why-dotted"></a>
## Why Dotted?
Several Python libraries handle nested data access. Here's how dotted compares:
| Feature | dotted | glom | jmespath | pydash |
|---------|--------|------|----------|--------|
| Safe traversal (no exceptions) | ✅ | ✅ | ✅ | ✅ |
| Familiar dot notation | ✅ | ❌ (custom spec) | ❌ (JSON syntax) | ✅ |
| Pattern matching (wildcards) | ✅ | ✅ | ❌ | ❌ |
| Regex patterns | ✅ | ❌ | ❌ | ❌ |
| In-place mutation | ✅ | ✅ | ❌ (read-only) | ✅ |
| Attribute access (`@attr`) | ✅ | ✅ | ❌ | ✅ |
| Transforms/coercion | ✅ | ✅ | ✅ | ❌ |
| Slicing | ✅ | ❌ | ✅ | ❌ |
| Filters | ✅ | ❌ | ✅ | ❌ |
| AND/OR/NOT filters | ✅ | ❌ | ✅ | ❌ |
| Grouping `(a,b)`, `(.a,.b)` | ✅ | ❌ | ❌ | ❌ |
| Recursive traversal (`**`, `*key`) | ✅ | ✅ | ❌ | ❌ |
| Depth slicing (`**:-1`, `**:2`) | ✅ | ❌ | ❌ | ❌ |
| NOP (~) match but don't update | ✅ | ❌ | ❌ | ❌ |
| Cut (#) and soft cut (##) in disjunction | ✅ | ❌ | ❌ | ❌ |
| Container filter values (`[1, ...]`, `{k: v}`) | ✅ | ❌ | ❌ | ❌ |
| String/bytes glob patterns (`"pre"..."suf"`) | ✅ | ❌ | ❌ | ❌ |
| Value groups (`(val1, val2)`) | ✅ | ❌ | ❌ | ❌ |
| Bytes literal support (`b"..."`) | ✅ | ❌ | ❌ | ❌ |
| Zero dependencies | ❌ (pyparsing) | ❌ | ✅ | ❌ |
**Choose dotted if you want:**
- Intuitive `a.b[0].c` syntax that looks like Python
- Pattern matching with wildcards (`*`) and regex (`/pattern/`)
- Both read and write operations on nested structures
- Transforms to coerce types inline (`path|int`, `path|str:fmt`)
- Recursive traversal with `**` (any depth) and `*key` (chain-following), with Python-style depth slicing
- Path grouping `(a,b).c` and operation grouping `prefix(.a,.b)` for multi-access
- **Cut (`#`) in disjunction**—first matching branch wins; e.g. `(a#, b)` or `emails[(*&email="x"#, +)]` for "update if exists, else append"
- **Soft cut (`##`) in disjunction**—suppress later branches only for overlapping paths; e.g. `(**:-2(.*, [])##, *)` for "recurse into containers, fall back to `*` for the rest"
- NOP (`~`) to match without updating—e.g. `(name.~first#, name.first)` for conditional updates
- **String/bytes glob patterns**—match by prefix, suffix, or substring: `*="user_"...`, `*=b"header"...b"footer"`
- **Value groups**—disjunction over filter values: `*=(1, 2, 3)`, `[*&status=("active", "pending")]`
<a id="breaking-changes"></a>
## Breaking Changes
### v0.31.0
- **`{` and `}` are now reserved characters**: Curly braces are used for container
filter values (dict and set patterns). If you have keys containing literal `{` or `}`,
quote them: `"my{key}"`.
### v0.30.0
- **`update_if` pred now gates on incoming value, not current value**: Previously,
`update_if(obj, key, val)` checked `pred(current_value_at_path)`. Now it checks
`pred(val)`. Default pred changed from `lambda val: val is None` to
`lambda val: val is not None` — meaning None values are skipped.
Use path expressions with NOP (`~`) and cut (`#`) for conditional updates based on
existing values.
- **`remove_if` pred now gates on key, not current value**: Previously,
`remove_if(obj, key)` checked `pred(current_value_at_path)`. Now it checks
`pred(key)`. Default pred is `lambda key: key is not None` — meaning None keys
are skipped.
- **`update_if_multi` / `remove_if_multi`**: Same pred changes as their single
counterparts.
### v0.28.0
- **`[*=value]` on primitive lists no longer works** — use `[*]=value` (value guard) instead.
`[*=value]` is a SliceFilter that tests *keys* of dict-like items; primitives have no keys,
so it now correctly returns `[]`.
- **`[!*=value]` on primitive lists no longer works** — use `[*]!=value` instead.
- **`*&*=value` no longer matches primitives** — use `*=value` (value guard) instead.
- Existing `[*=value]` on dicts/objects is unchanged.
- Existing `&` filter behavior on dict-like nodes is unchanged.
### v0.13.0
- **Filter conjunction operator changed from `.` to `&`**: The conjunction operator for
chaining multiple filters has changed. Previously, `*.id=1.name="alice"` was used for
conjunctive (AND) filtering. Now use `*&id=1&name="alice"`. This change enables support
for dotted paths within filter keys (e.g., `items[user.id=1]` to filter on nested fields).
<a id="api"></a>
## API
Probably the easiest thing to do is pydoc the API layer.
$ pydoc dotted
Parsed dotted paths are LRU-cached (after the first parse of a given path string), so repeated use of the same path string is cheap.
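The caching behaves like a memoized parse step; a minimal sketch of the idea using `functools.lru_cache` (not dotted's actual internals, and `parse` here is a hypothetical stand-in for the real grammar):

```python
from functools import lru_cache

@lru_cache(maxsize=1024)
def parse(path: str) -> tuple:
    # Stand-in for the real parser: split into segments once, reuse after.
    return tuple(path.split("."))

parse("a.b.c")  # first call: parsed and cached
parse("a.b.c")  # second call: served from the cache
assert parse.cache_info().hits >= 1
```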
<a id="get"></a>
### Get
See the Paths, Patterns, and Operators sections below for the full notation.
>>> import dotted
>>> dotted.get({'a': {'b': {'c': {'d': 'nested'}}}}, 'a.b.c.d')
'nested'
<a id="update"></a>
### Update
Update will mutate the object if it can. It always returns the changed object though. If
it's not mutable, then get via the return.
>>> import dotted
>>> l = []
>>> t = ()
>>> dotted.update(l, '[0]', 'hello')
['hello']
>>> l
['hello']
>>> dotted.update(t, '[0]', 'hello')
('hello',)
>>> t
()
#### Update via pattern
You can update all fields that match a pattern given by either a wildcard OR a regex.
>>> import dotted
>>> d = {'a': 'hello', 'b': 'bye'}
>>> dotted.update(d, '*', 'me')
{'a': 'me', 'b': 'me'}
#### Immutable updates
Use `mutable=False` to prevent mutation of the original object:
>>> import dotted
>>> data = {'a': 1, 'b': 2}
>>> result = dotted.update(data, 'a', 99, mutable=False)
>>> data
{'a': 1, 'b': 2}
>>> result
{'a': 99, 'b': 2}
This works for `remove` as well:
>>> data = {'a': 1, 'b': 2}
>>> result = dotted.remove(data, 'a', mutable=False)
>>> data
{'a': 1, 'b': 2}
>>> result
{'b': 2}
When `mutable=False` is specified and the root object is mutable, `copy.deepcopy()`
is called first. This ensures no mutation occurs even when updating through nested
immutable containers (e.g., a tuple inside a dict).
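That guarantee mirrors plain `copy.deepcopy`: copying first means the original, including nested mutables, is never touched. A quick stdlib illustration:

```python
import copy

data = {'a': 1, 'nested': {'b': 2}}
result = copy.deepcopy(data)
result['a'] = 99
result['nested']['b'] = 100

assert data == {'a': 1, 'nested': {'b': 2}}   # original untouched
assert result == {'a': 99, 'nested': {'b': 100}}
```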
#### Update if
`update_if` updates only when `pred(val)` is true. Default pred is
`lambda val: val is not None`, so None values are skipped. Use `pred=None`
for unconditional update (same as `update`):
>>> import dotted
>>> dotted.update_if({}, 'a', 1)
{'a': 1}
>>> dotted.update_if({}, 'a', None)
{}
>>> dotted.update_if({}, 'a', '', pred=bool)
{}
Use `update_if_multi` for batch updates with per-item `(key, val)` or `(key, val, pred)`.
#### Update with NOP (~)
The NOP operator `~` means "match but don't update." Use it when some matches should
be left unchanged. Combine with cut (`#`) for conditional updates:
>>> import dotted
>>> data = {'name': {'first': 'hello'}}
>>> dotted.update(data, '(name.~first#, name.first)', 'world') # first exists, NOP + cut
{'name': {'first': 'hello'}}
>>> data = {'name': {}}
>>> dotted.update(data, '(name.~first#, name.first)', 'world') # first missing, update
{'name': {'first': 'world'}}
<a id="remove"></a>
### Remove
You can remove a field or do so only if it matches value. For example,
>>> import dotted
>>> d = {'a': 'hello', 'b': 'bye'}
>>> dotted.remove(d, 'b')
{'a': 'hello'}
>>> dotted.remove(d, 'a', 'bye')
{'a': 'hello'}
#### Remove via pattern
Similar to update, all fields that match the pattern will be removed. If you provide a value as
well, only the matched fields whose values also match will be removed.
#### Remove if
`remove_if` removes only when `pred(key)` is true. Default pred is
`lambda key: key is not None`, so None keys are skipped. Use `pred=None`
for unconditional remove (same as `remove`):
>>> import dotted
>>> dotted.remove_if({'a': 1, 'b': 2}, 'a')
{'b': 2}
>>> dotted.remove_if({'a': 1}, None)
{'a': 1}
Use `remove_if_multi` for batch removal with per-item `(key, val, pred)`.
<a id="match"></a>
### Match
Use to match a dotted-style pattern to a field. Partial matching is on by default. You
can match via wildcard OR via regex. Here's a regex example:
>>> import dotted
>>> dotted.match('/a.+/', 'abced.b')
'abced.b'
>>> dotted.match('/a.+/', 'abced.b', partial=False)
With the `groups=True` parameter, you'll see how it was matched:
>>> import dotted
>>> dotted.match('hello.*', 'hello.there.bye', groups=True)
('hello.there.bye', ('hello', 'there.bye'))
In the above example, `hello` matched to `hello` and `*` matched to `there.bye` (partial
matching is enabled by default).
<a id="expand"></a>
### Expand
You may wish to _expand_ all fields that match a pattern in an object.
>>> import dotted
>>> d = {'hello': {'there': [1, 2, 3]}, 'bye': 7}
>>> dotted.expand(d, '*')
('hello', 'bye')
>>> dotted.expand(d, '*.*')
('hello.there',)
>>> dotted.expand(d, '*.*[*]')
('hello.there[0]', 'hello.there[1]', 'hello.there[2]')
>>> dotted.expand(d, '*.*[1:]')
('hello.there[1:]',)
<a id="overlaps"></a>
### Overlaps
Test whether two dotted paths overlap—i.e. one is a prefix of the other, or they
are identical. Used internally by [soft cut (`##`)](#the-soft-cut--operator) to
decide which later-branch results to suppress.
>>> import dotted
>>> dotted.overlaps('a', 'a.b.c')
True
>>> dotted.overlaps('a.b.c', 'a')
True
>>> dotted.overlaps('a.b', 'a.b')
True
>>> dotted.overlaps('a.b', 'a.c')
False
<a id="has"></a>
### Has
Check if a key or pattern exists in an object.
>>> import dotted
>>> d = {'a': {'b': 1}}
>>> dotted.has(d, 'a.b')
True
>>> dotted.has(d, 'a.c')
False
>>> dotted.has(d, 'a.*')
True
<a id="mutable"></a>
### Mutable
Check if `update(obj, key, val)` would mutate `obj` in place. Returns `False` for
empty paths (root replacement) or when the object or any container in the path
is immutable.
>>> import dotted
>>> dotted.mutable({'a': 1}, 'a')
True
>>> dotted.mutable({'a': 1}, '') # empty path
False
>>> dotted.mutable((1, 2), '[0]') # tuple is immutable
False
>>> dotted.mutable({'a': (1, 2)}, 'a[0]') # nested tuple
False
This is useful when you need to know whether to use the return value:
>>> data = {'a': 1}
>>> if dotted.mutable(data, 'a'):
... dotted.update(data, 'a', 2) # mutates in place
... else:
... data = dotted.update(data, 'a', 2) # use return value
<a id="setdefault"></a>
### Setdefault
Set a value only if the key doesn't already exist. Creates nested structures as needed.
>>> import dotted
>>> d = {'a': 1}
>>> dotted.setdefault(d, 'a', 999) # key exists, no change; returns value
1
>>> dotted.setdefault(d, 'b', 2) # key missing, sets value; returns it
2
>>> dotted.setdefault({}, 'a.b.c', 7) # creates nested structure; returns value
7
<a id="pluck"></a>
### Pluck
Extract (key, value) pairs from an object matching a pattern.
>>> import dotted
>>> d = {'a': 1, 'b': 2, 'nested': {'x': 10}}
>>> dotted.pluck(d, 'a')
('a', 1)
>>> dotted.pluck(d, '*')
(('a', 1), ('b', 2), ('nested', {'x': 10}))
>>> dotted.pluck(d, 'nested.*')
(('nested.x', 10),)
<a id="unpack"></a>
### Unpack
Recursively unpack a nested structure into `(path, value)` pairs — its dotted
normal form. The result can be replayed with `update_multi` to reconstruct the
original object. Use `AUTO` as the base object to infer the root container type.
>>> import dotted
>>> d = {'a': {'b': [1, 2, 3]}, 'x': {'y': {'z': [4, 5]}}, 'extra': 'stuff'}
>>> dotted.unpack(d)
(('a.b', [1, 2, 3]), ('x.y.z', [4, 5]), ('extra', 'stuff'))
>>> dotted.update_multi(dotted.AUTO, dotted.unpack(d)) == d
True
<a id="build"></a>
### Build
Create a default nested structure for a dotted key.
>>> import dotted
>>> dotted.build({}, 'a.b.c')
{'a': {'b': {'c': None}}}
>>> dotted.build({}, 'items[]')
{'items': []}
>>> dotted.build({}, 'items[0]')
{'items': [None]}
<a id="apply"></a>
### Apply
Apply transforms to values in an object in-place.
>>> import dotted
>>> d = {'price': '99.99', 'quantity': '5'}
>>> dotted.apply(d, 'price|float')
{'price': 99.99, 'quantity': '5'}
>>> dotted.apply(d, '*|int')
{'price': 99, 'quantity': 5}
<a id="assemble"></a>
### Assemble
Build a dotted notation string from a list of path segments.
>>> import dotted
>>> dotted.assemble(['a', 'b', 'c'])
'a.b.c'
>>> dotted.assemble(['items', '[0]', 'name'])
'items[0].name'
>>> dotted.assemble([7, 'hello'])
'7.hello'
<a id="quote"></a>
### Quote
Quote a key for use in a dotted path. Wraps in single quotes if the key
contains reserved characters or whitespace.
>>> import dotted
>>> dotted.quote('hello')
'hello'
>>> dotted.quote('has.dot')
"'has.dot'"
>>> dotted.quote('has space')
"'has space'"
>>> dotted.quote(7)
'7'
>>> dotted.quote('7')
'7'
<a id="normalize"></a>
### Normalize
Convert a raw Python key to dotted normal form. Like `quote()`, but also
quotes string keys that look numeric so they round-trip correctly through
`unpack`/`update_multi` (preserving string vs int key type).
>>> import dotted
>>> dotted.normalize('hello')
'hello'
>>> dotted.normalize('has.dot')
"'has.dot'"
>>> dotted.normalize(7)
'7'
>>> dotted.normalize('7')
"'7'"
`unpack` uses `normalize` internally, so dotted normal form paths always
round-trip correctly:
>>> d = {'7': 'seven', 'a.b': 'dotted', 'hello': 'world'}
>>> flat = dict(dotted.unpack(d))
>>> dotted.update_multi(dotted.AUTO, flat.items()) == d
True
<a id="auto"></a>
### AUTO
All write operations (`update`, `update_multi`, `build`, `setdefault`, etc.) accept
`AUTO` as the base object. Instead of passing `{}` or `[]`, let dotted infer
the root container type from the first key — dict keys produce `{}`, slot keys
produce `[]`.
>>> import dotted
>>> dotted.update(dotted.AUTO, 'a.b', 1)
{'a': {'b': 1}}
>>> dotted.update(dotted.AUTO, '[0]', 'hello')
['hello']
>>> dotted.update_multi(dotted.AUTO, [('a', 1), ('b', 2)])
{'a': 1, 'b': 2}
<a id="multi-operations"></a>
### Multi Operations
Most operations have `*_multi` variants for batch processing:
**Note:** `get_multi` returns a generator (not a list or tuple). That distinguishes it from a pattern `get`, which returns a tuple of matches. It also keeps input and output in the same style when you pass an iterator or generator of paths—lazy in, lazy out.
>>> import dotted
>>> d = {'a': 1, 'b': 2, 'c': 3}
>>> list(dotted.get_multi(d, ['a', 'b']))
[1, 2]
>>> dotted.update_multi({}, [('a.b', 1), ('c.d', 2)])
{'a': {'b': 1}, 'c': {'d': 2}}
>>> dotted.remove_multi(d, ['a', 'c'])
{'b': 2}
>>> d = {'a': 1}; list(dotted.setdefault_multi(d, [('a', 999), ('b', 2)]))
[1, 2]
>>> d
{'a': 1, 'b': 2}
>>> dotted.update_if_multi({}, [('a', 1), ('b', None), ('c', 3)]) # skips None vals
{'a': 1, 'c': 3}
>>> dotted.remove_if_multi({'a': 1, 'b': 2}, ['a', None, 'b']) # skips None keys
{}
Available multi operations: `get_multi`, `update_multi`, `update_if_multi`, `remove_multi`,
`remove_if_multi`, `setdefault_multi`, `match_multi`, `expand_multi`, `apply_multi`,
`build_multi`, `pluck_multi`, `assemble_multi`.
<a id="paths"></a>
## Paths
Dotted notation shares similarities with Python. A _dot_ `.` field expects to see a
dictionary-like object (using `keys` and `__getitem__` internally). A _bracket_ `[]`
field is biased towards sequences (like lists or strs) but can also act on dicts. An
_attr_ `@` field uses `getattr`/`setattr`/`delattr`.
<a id="key-fields"></a>
### Key fields
A key field is expressed as `a` or as part of a dotted expression, such as `a.b`. The
grammar parser is permissive about what can appear in a key field; pretty much any
non-reserved char will match. Note that key fields only work on objects that have a
`keys` method, i.e. dictionary or dictionary-like objects.
>>> import dotted
>>> dotted.get({'a': {'b': 'hello'}}, 'a.b')
'hello'
If a key field starts with a space or `-`, either quote it or escape it by using a
`\` as the first char.
<a id="bracketed-fields"></a>
### Bracketed fields
You may also use bracket notation, such as `a[0]` which does a `__getitem__` at key 0.
The parser prefers numeric types over string types (if you wish to look up a non-numeric
field using brackets be sure to quote it). Bracketed fields will work with pretty much
any object that can be looked up via `__getitem__`.
>>> import dotted
>>> dotted.get({'a': ['first', 'second', 'third']}, 'a[0]')
'first'
>>> dotted.get({'a': {'b': 'hello'}}, 'a["b"]')
'hello'
<a id="attr-fields"></a>
### Attr fields
An attr field is expressed by prefixing with `@`. This will fetch data at that attribute.
You may wonder why this exists when you could just as easily use standard Python attribute access.
Two important reasons: nested expressions and patterns.
>>> import dotted, types
>>> ns = types.SimpleNamespace()
>>> ns.hello = {'me': 'goodbye'}
>>> dotted.get(ns, '@hello.me')
'goodbye'
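To see why attr fields compose with nested expressions, here is a rough stdlib sketch of a getter that mixes `getattr` steps (the `@` case) with key lookups. The names are hypothetical; this is not dotted's internals:

```python
import types

def mixed_get(obj, steps):
    """Each step is ('attr', name) or ('key', name)."""
    for kind, name in steps:
        obj = getattr(obj, name) if kind == 'attr' else obj[name]
    return obj

ns = types.SimpleNamespace()
ns.hello = {'me': 'goodbye'}
# Rough equivalent of dotted.get(ns, '@hello.me'):
print(mixed_get(ns, [('attr', 'hello'), ('key', 'me')]))  # goodbye
```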
<a id="slicing"></a>
### Slicing
Dotted slicing works like Python slicing, with all that entails.
>>> import dotted
>>> d = {'hi': {'there': [1, 2, 3]}, 'bye': {'there': [4, 5, 6]}}
>>> dotted.get(d, 'hi.there[::2]')
[1, 3]
>>> dotted.get(d, '*.there[1:]')
([2, 3], [5, 6])
<a id="dot-notation-for-sequence-indexing"></a>
### Dot notation for sequence indexing
Numeric path segments work as indices when accessing sequences (lists, tuples, strings):
>>> import dotted
>>> data = {'items': [10, 20, 30]}
>>> dotted.get(data, 'items.0')
10
>>> dotted.get(data, 'items.-1') # negative index
30
This is equivalent to bracket notation for existing sequences:
>>> dotted.get(data, 'items[0]') # same result
10
Chaining works naturally:
>>> data = {'users': [{'name': 'alice'}, {'name': 'bob'}]}
>>> dotted.get(data, 'users.0.name')
'alice'
Updates and removes also work:
>>> dotted.update(data, 'users.0.name', 'ALICE')
>>> dotted.get(data, 'users.0.name')
'ALICE'
**Note**: When _creating_ structures, use bracket notation for lists:
>>> dotted.build({}, 'items.0') # creates dict: {'items': {0: None}}
>>> dotted.build({}, 'items[0]') # creates list: {'items': [None]}
<a id="empty-path-root-access"></a>
### Empty path (root access)
An empty string `''` refers to the root of the data structure itself:
>>> import dotted
>>> data = {'a': 1, 'b': 2}
>>> dotted.get(data, '')
{'a': 1, 'b': 2}
Unlike normal paths which mutate in place, `update` with an empty path is non-mutating
since Python cannot rebind the caller's variable:
>>> data = {'a': 1, 'b': 2}
>>> result = dotted.update(data, '', {'replaced': True})
>>> result
{'replaced': True}
>>> data
{'a': 1, 'b': 2}
Compare with a normal path which mutates:
>>> data = {'a': 1, 'b': 2}
>>> dotted.update(data, 'a', 99)
{'a': 99, 'b': 2}
>>> data
{'a': 99, 'b': 2}
Other empty path operations:
>>> data = {'a': 1, 'b': 2}
>>> dotted.remove(data, '')
None
>>> dotted.expand(data, '')
('',)
>>> dotted.pluck(data, '')
('', {'a': 1, 'b': 2})
<a id="typing--quoting"></a>
## Typing & Quoting
<a id="numeric-types"></a>
### Numeric types
The parser will interpret a field numerically when it can; for example, in `field.1`
the `1` part is interpreted as an integer.
>>> import dotted
>>> dotted.get({'7': 'me', 7: 'you'}, '7')
'you'
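That integer preference amounts to trying an int interpretation before falling back to the string. A minimal illustrative sketch (not dotted's parser):

```python
def coerce_field(field: str):
    """Prefer an int interpretation of a path segment when possible."""
    try:
        return int(field)
    except ValueError:
        return field

d = {'7': 'me', 7: 'you'}
print(d[coerce_field('7')])   # you -- the int key 7 wins over the str '7'
print(coerce_field('name'))   # name -- non-numeric fields stay strings
```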
<a id="quoting"></a>
### Quoting
Sometimes you need to quote a field, which you can do by simply putting the field in quotes.
>>> import dotted
>>> dotted.get({'has . in it': 7}, '"has . in it"')
7
<a id="the-numericize--operator"></a>
### The numericize `#` operator
Non-integer numeric fields may be interpreted incorrectly because they contain a decimal
point. To solve this, use the numericize operator `#` at the front of a quoted field,
such as `#'123.45'`. This coerces the field to a numeric type (e.g. float).
>>> import dotted
>>> d = {'a': {1.2: 'hello', 1: {2: 'fooled you'}}}
>>> dotted.get(d, 'a.1.2')
'fooled you'
>>> dotted.get(d, 'a.#"1.2"')
'hello'
<a id="container-types"></a>
### Container types
Container literals express list, dict, set, tuple, and frozenset values inline. They
appear in two contexts: as [filter values / value guards](#container-filter-values)
(with pattern support) and as [transform arguments](#container-transform-arguments)
(concrete values only).
| Syntax | Type | Notes |
|--------|------|-------|
| `[1, 2, 3]` | list or tuple | Unprefixed matches both |
| `l[1, 2, 3]` | list | Strict: list only |
| `t[1, 2, 3]` | tuple | Strict: tuple only |
| `{"a": 1}` | dict | Unprefixed matches dict-like |
| `d{"a": 1}` | dict | Strict: dict only (isinstance) |
| `{1, 2, 3}` | set or frozenset | Unprefixed matches both |
| `s{1, 2, 3}` | set | Strict: set only |
| `fs{1, 2, 3}` | frozenset | Strict: frozenset only |
Empty containers: `[]` (empty list/tuple), `{}` (empty dict), `s{}` (empty set),
`fs{}` (empty frozenset), `l[]`, `t[]`, `d{}`.
Without a [type prefix](#type-prefixes), brackets match loosely: `[]` matches any
list or tuple, `{v, v}` matches any set or frozenset, `{}` matches dict (following
Python convention where `{}` is a dict literal, not a set).
In filter context, containers support patterns (`*`, `...`, `/regex/`) inside — see
[Container filter values](#container-filter-values). In transform argument context,
only concrete scalar values are allowed.
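The loose/strict distinction in the table above boils down to isinstance checks, roughly like this sketch (function names are illustrative):

```python
def matches_loose_bracket(value) -> bool:
    """Unprefixed [..] matches list or tuple."""
    return isinstance(value, (list, tuple))

def matches_strict(value, prefix: str) -> bool:
    """Prefixed forms pin down the exact container family."""
    table = {'l': list, 't': tuple, 'd': dict, 's': set, 'fs': frozenset}
    return isinstance(value, table[prefix])

print(matches_loose_bracket((1, 2, 3)))   # True: tuples match unprefixed [..]
print(matches_strict((1, 2, 3), 'l'))     # False: l[..] requires a list
print(matches_strict({1, 2}, 's'))        # True
```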
<a id="bytes-literals"></a>
### Bytes literals
Prefix a quoted string with `b` to create a bytes literal: `b"hello"` or `b'hello'`.
Bytes literals produce `bytes` values and only match `bytes` — never `str`:
>>> import dotted
>>> d = {'a': b'hello', 'b': b'world', 'c': 'hello'}
>>> dotted.get(d, '*=b"hello"')
(b'hello',)
Use in filters:
>>> data = [{'data': b'yes'}, {'data': b'no'}, {'data': 'yes'}]
>>> dotted.get(data, '[*&data=b"yes"]')
({'data': b'yes'},)
Note that `b"hello"` does not match the string `'hello'` — types must match exactly.
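That exact-type behavior is the difference between a plain equality check and an equality-plus-type check, roughly (illustrative sketch):

```python
def bytes_literal_matches(candidate, literal: bytes) -> bool:
    """A b"..." literal matches only bytes values, never str."""
    return isinstance(candidate, bytes) and candidate == literal

print(bytes_literal_matches(b'hello', b'hello'))  # True
print(bytes_literal_matches('hello', b'hello'))   # False: str never matches
```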
<a id="patterns"></a>
## Patterns
You may use dotted for pattern matching, against wildcards or regular
expressions. Note that patterns always return a tuple of matches.
>>> import dotted
>>> d = {'hi': {'there': [1, 2, 3]}, 'bye': {'there': [4, 5, 6]}}
>>> dotted.get(d, '*.there[2]')
(3, 6)
>>> dotted.get(d, '/h.*/.*')
([1, 2, 3],)
Dotted will return all values that match the pattern(s).
<a id="wildcards"></a>
### Wildcards
The wildcard pattern is `*`. It will match anything.
<a id="regular-expressions"></a>
### Regular expressions
The regex pattern is enclosed in slashes: `/regex/`. Note that if the field is not a str,
the regex is matched against its str representation internally.
<a id="the-match-first-operator"></a>
### The match-first operator
You can also postfix any pattern with a `?`. This will return only
the first match.
>>> import dotted
>>> d = {'hi': {'there': [1, 2, 3]}, 'bye': {'there': [4, 5, 6]}}
>>> dotted.get(d, '*?.there[2]')
(3,)
<a id="slicing-vs-patterns"></a>
### Slicing vs Patterns
Slicing a sequence produces a sequence, and a filter on a sequence is a special
type of slice operation. Patterns, by contrast, _iterate_ through items:
>>> import dotted
>>> data = [{'name': 'alice'}, {'name': 'bob'}, {'name': 'alice'}]
>>> dotted.get(data, '[1:3]')
[{'name': 'bob'}, {'name': 'alice'}]
>>> dotted.get(data, '[name="alice"]')
[{'name': 'alice'}, {'name': 'alice'}]
>>> dotted.get(data, '[*]')
({'name': 'alice'}, {'name': 'bob'}, {'name': 'alice'})
Chaining after a slice accesses the result itself, not the items within it:
>>> dotted.get(data, '[1:3].name') # accessing .name on the list
None
>>> dotted.get(data, '[name="alice"].name') # also accessing .name on the list
None
>>> dotted.get(data, '[].name') # .name on a raw list
None
To chain through the items, use a pattern instead:
>>> dotted.get(data, '[*].name')
('alice', 'bob', 'alice')
>>> dotted.get(data, '[*&name="alice"]')
({'name': 'alice'}, {'name': 'alice'})
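The distinction in plain Python: a slice yields the container, while a pattern-style access yields its items (illustrative comparison):

```python
data = [{'name': 'alice'}, {'name': 'bob'}, {'name': 'alice'}]

sliced = data[1:3]                          # like '[1:3]': still a list
per_item = tuple(d['name'] for d in data)   # like '[*].name': iterates items

print(sliced)    # [{'name': 'bob'}, {'name': 'alice'}]
print(per_item)  # ('alice', 'bob', 'alice')
```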
<a id="recursive-traversal"></a>
## Recursive Traversal
The recursive operator `*` traverses nested data structures by following path
segments that match a pattern at successive levels.
<a id="the-recursive-operator-"></a>
### The recursive operator `*`
`*pattern` recurses into values whose path segments match the pattern. It follows
chains of matching segments — at each level, if a segment matches, its value is
yielded and the traversal continues into that value:
>>> import dotted
>>> d = {'b': {'b': {'c': 1}}}
>>> dotted.get(d, '*b')
({'b': {'c': 1}}, {'c': 1})
>>> dotted.get(d, '*b.c')
(1,)
The chain stops when the key no longer matches:
>>> d = {'a': {'b': {'c': 1}}}
>>> dotted.get(d, '*b')
()
The inner pattern can be any key pattern — a literal key, a wildcard, or a regex:
>>> d = {'x1': {'x2': 1}, 'y': 2}
>>> dotted.get(d, '*/x.*/')
({'x2': 1}, 1)
<a id="recursive-wildcard-"></a>
### Recursive wildcard `**`
`**` is the recursive wildcard — it matches all path segments and visits every
value at every depth:
>>> d = {'a': {'b': {'c': 1}}, 'x': {'y': 2}}
>>> dotted.get(d, '**')
({'b': {'c': 1}}, {'c': 1}, 1, {'y': 2}, 2)
Use `**` with continuation to find a key at any depth:
>>> dotted.get(d, '**.c')
(1,)
Use `**?` to get only the first match:
>>> dotted.get(d, '**?')
({'b': {'c': 1}},)
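A pre-order walk like the one `**` performs can be sketched with a recursive generator. This stdlib sketch handles only dicts for brevity (dotted also descends into lists) and is not the library's implementation:

```python
def walk(obj):
    """Yield every nested value, pre-order, recursing into dict values."""
    if isinstance(obj, dict):
        for value in obj.values():
            yield value
            yield from walk(value)

d = {'a': {'b': {'c': 1}}, 'x': {'y': 2}}
print(tuple(walk(d)))
# ({'b': {'c': 1}}, {'c': 1}, 1, {'y': 2}, 2)
```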
<a id="depth-slicing"></a>
### Depth slicing
Control which depths are visited using slice notation: `**:start`, `**:start:stop`,
or `**:::step`. Note the leading `:` — depth slicing looks a little different from
regular Python slicing since it follows the `**` operator. Depth 0 is the values of
the first-level path segments. Lists increment depth (their elements are one level deeper).
>>> d = {'a': {'x': 1}, 'b': {'y': {'z': 2}}}
>>> dotted.get(d, '**:0')
({'x': 1}, {'y': {'z': 2}})
>>> dotted.get(d, '**:1')
(1, {'z': 2})
Use negative indices to count from the leaf. `**:-1` returns leaves only,
`**:-2` returns the penultimate level:
>>> dotted.get(d, '**:-1')
(1, 2)
Range slicing works like Python slices: `**:start:stop` and `**:::step`:
>>> dotted.get(d, '**:0:1')
({'x': 1}, 1, {'y': {'z': 2}}, {'z': 2})
<a id="recursive-with-value-guard"></a>
### Recursive with value guard
Combine `**` with value guards to find specific values at any depth:
>>> d = {'a': {'b': 7, 'c': 3}, 'd': {'e': 7}}
>>> dotted.get(d, '**=7')
(7, 7)
>>> dotted.get(d, '**!=7')
({'b': 7, 'c': 3}, 3, {'e': 7})
<a id="recursive-update-and-remove"></a>
### Recursive update and remove
Recursive operators work with `update` and `remove`:
>>> d = {'a': {'b': 7, 'c': 3}, 'd': 7}
>>> dotted.update(d, '**=7', 99)
{'a': {'b': 99, 'c': 3}, 'd': 99}
>>> d = {'a': {'b': 7, 'c': 3}, 'd': 7}
>>> dotted.remove(d, '**=7')
{'a': {'c': 3}}
<a id="recursive-match"></a>
### Recursive match
Recursive patterns work with `match`. `**` matches any key path, `*key` matches
chains of a specific key:
>>> dotted.match('**.c', 'a.b.c')
'a.b.c'
>>> dotted.match('*b', 'b.b.b')
'b.b.b'
>>> dotted.match('*b', 'a.b.c') is None
True
<a id="grouping"></a>
## Grouping
<a id="path-grouping"></a>
### Path grouping
Use parentheses to group keys that share a common prefix or suffix:
>>> import dotted
>>> d = {'a': 1, 'b': 2, 'c': 3}
# Group keys
>>> dotted.get(d, '(a,b)')
(1, 2)
# With a shared suffix
>>> d = {'x': {'val': 1}, 'y': {'val': 2}}
>>> dotted.get(d, '(x,y).val')
(1, 2)
Path grouping is syntactic sugar for [operation grouping](#operation-grouping)
where each branch is a single key—`(a,b).c` is equivalent to `(.a,.b).c`. All
operators (disjunction, conjunction, negation, cut `#`, soft cut `##`,
first-match `?`) work the same way; see operation grouping for full details.
<a id="operation-grouping"></a>
### Operation grouping
Use parentheses to group **operation sequences** that diverge from a common point.
Each branch is a full operation chain including dots, brackets, and attrs:
>>> import dotted
# Mix different operation types from a common prefix
>>> d = {'items': [10, 20, 30]}
>>> dotted.get(d, 'items(.0,[])')
(10, [10, 20, 30])
# Nested paths in branches
>>> d = {'x': {'a': {'i': 1}, 'b': {'k': 3}}}
>>> dotted.get(d, 'x(.a.i,.b.k)')
(1, 3)
Operation groups support these operators:
| Syntax | Meaning | Behavior |
|--------|---------|----------|
| `(.a,.b)` | Disjunction (OR) | Returns all values that exist |
| `(.a&.b)` | Conjunction (AND) | Returns values only if ALL branches exist |
| `(!.a)` | Negation (NOT) | Returns values for keys NOT matching |
#### Disjunction (OR)
Comma separates branches. Returns all matches that exist. Disjunction doesn't
short-circuit—when updating, all matching branches get the update. Using the
match-first operator (`?`) is probably what you want when updating.
>>> d = {'a': {'x': 1, 'y': 2}}
>>> dotted.get(d, 'a(.x,.y)')
(1, 2)
>>> dotted.get(d, 'a(.x,.z)') # z missing, x still returned
(1,)
Updates apply to all matching branches. When nothing matches, the first
concrete path (scanning last to first) is created:
>>> d = {'a': {'x': 1, 'y': 2}}
>>> dotted.update(d, 'a(.x,.y)', 99)
{'a': {'x': 99, 'y': 99}}
>>> dotted.update({'a': {}}, 'a(.x,.y)', 99) # nothing matches → creates last (.y)
{'a': {'y': 99}}
#### Cut (`#`) in disjunction
Suffix a branch with `#` so that if it matches, only that branch is used
(get/update/remove); later branches are not tried. Useful for "update if exists,
else append" in lists. Example with slot grouping:
>>> data = {'emails': [{'email': 'alice@x.com', 'verified': False}]}
>>> dotted.update(data, 'emails[(*&email="alice@x.com"#, +)]', {'email': 'alice@x.com', 'verified': True})
{'emails': [{'email': 'alice@x.com', 'verified': True}]}
>>> data = {'emails': [{'email': 'other@x.com'}]}
>>> dotted.update(data, 'emails[(*&email="alice@x.com"#, +)]', {'email': 'alice@x.com', 'verified': True})
{'emails': [{'email': 'other@x.com'}, {'email': 'alice@x.com', 'verified': True}]}
First branch matches items where `email="alice@x.com"` and updates them (then cut);
if none match, the `+` branch appends the new dict.
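The same "update if exists, else append" shape in plain Python, for comparison (an illustrative sketch, not dotted's mechanism):

```python
def upsert(items, key, value, new_item):
    """Update matching dicts in place; append new_item if none match."""
    matched = False
    for item in items:
        if item.get(key) == value:
            item.update(new_item)
            matched = True      # like the cut: a match suppresses the append
    if not matched:
        items.append(new_item)
    return items

emails = [{'email': 'alice@x.com', 'verified': False}]
upsert(emails, 'email', 'alice@x.com',
       {'email': 'alice@x.com', 'verified': True})
print(emails)  # [{'email': 'alice@x.com', 'verified': True}]
```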
#### Soft cut (`##`) in disjunction
Hard cut (`#`) stops all later branches when the cut branch matches. Soft cut (`##`)
is more selective: later branches still run, but skip any paths that overlap with
what the soft-cut branch already yielded. Use soft cut when a branch handles some
keys and you want a fallback branch to handle the rest.
>>> d = {'a': {'b': [1, 2, 3]}, 'x': {'y': {'z': [4, 5]}}, 'extra': 'stuff'}
>>> dotted.pluck(d, '(**:-2(.*, [])##, *)')
(('a.b', [1, 2, 3]), ('x.y.z', [4, 5]), ('extra', 'stuff'))
Here `**:-2(.*, [])` recurses into containers (dicts and lists) and yields their
leaf containers. The `##` means: for keys that recursion covered (like `a` and `x`),
don't try the `*` fallback. But `extra` was not covered by the recursive branch,
so `*` picks it up.
Compare with hard cut (`#`), which would lose `extra` entirely:
>>> dotted.pluck(d, '(**:-2(.*, [])#, *)')
(('a.b', [1, 2, 3]), ('x.y.z', [4, 5]))
#### Conjunction (AND)
Use `&` for all-or-nothing behavior. Returns values only if ALL branches exist:
>>> d = {'a': {'x': 1, 'y': 2}}
>>> dotted.get(d, 'a(.x&.y)')
(1, 2)
>>> dotted.get(d, 'a(.x&.z)') # z missing, fails entirely
()
Updates apply to all branches so that the conjunction evaluates as true, creating
missing paths as needed. If a filter or NOP prevents a branch, no update occurs:
>>> dotted.update({'a': {'x': 1, 'y': 2}}, 'a(.x&.y)', 99)
{'a': {'x': 99, 'y': 99}}
>>> dotted.update({'a': {'x': 1}}, 'a(.x&.y)', 99) # y missing → creates it
{'a': {'x': 99, 'y': 99}}
#### First match
Use `?` suffix to return only the first match. When nothing matches, same
fallback as disjunction—first concrete path (last to first):
>>> d = {'a': {'x': 1, 'y': 2}}
>>> dotted.get(d, 'a(.z,.x,.y)?') # first that exists
(1,)
>>> dotted.update({'a': {}}, 'a(.x,.y)?', 99) # nothing matches → creates last (.y)
{'a': {'y': 99}}
#### Negation (NOT)
Use `!` prefix to exclude keys matching a pattern:
>>> import dotted
# Exclude single key - get user fields except password
>>> user = {'email': 'a@x.com', 'name': 'alice', 'password': 'secret'}
>>> sorted(dotted.get({'user': user}, 'user(!.password)'))
['a@x.com', 'alice']
# Works with lists too
>>> dotted.get({'items': [10, 20, 30]}, 'items(![0])')
(20, 30)
Updates and removes apply to all non-matching keys:
>>> d = {'a': {'x': 1, 'y': 2, 'z': 3}}
>>> dotted.update(d, 'a(!.x)', 99)
{'a': {'x': 1, 'y': 99, 'z': 99}}
>>> dotted.remove(d, 'a(!.x)')
{'a': {'x': 1}}
**Note**: For De Morgan's law with filter expressions, see the Filters section below.
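On a dict, negation corresponds to a key-excluding comprehension, roughly (illustrative sketch):

```python
def get_except(d: dict, excluded: str) -> tuple:
    """Values of every key except `excluded`, like 'user(!.password)'."""
    return tuple(v for k, v in d.items() if k != excluded)

user = {'email': 'a@x.com', 'name': 'alice', 'password': 'secret'}
print(sorted(get_except(user, 'password')))  # ['a@x.com', 'alice']
```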
<a id="operators"></a>
## Operators
<a id="the-append--operator"></a>
### The append `+` operator
Both bracketed fields and slices support the `+` operator, which refers to the end of
a sequence. You may append an item or slice to the end of a sequence.
>>> import dotted
>>> d = {'hi': {'there': [1, 2, 3]}, 'bye': {'there': [4, 5, 6]}}
>>> dotted.update(d, '*.there[+]', 8)
{'hi': {'there': [1, 2, 3, 8]}, 'bye': {'there': [4, 5, 6, 8]}}
>>> dotted.update(d, '*.there[+:]', [999])
{'hi': {'there': [1, 2, 3, 8, 999]}, 'bye': {'there': [4, 5, 6, 8, 999]}}
<a id="the-append-unique--operator"></a>
### The append-unique `+?` operator
If you want to add only _unique_ items to a list, you can use the `?`
postfix. This ensures the item is only added once (see match-first above).
>>> import dotted
>>> items = [1, 2]
>>> dotted.update(items, '[+?]', 3)
[1, 2, 3]
>>> dotted.update(items, '[+?]', 3)
[1, 2, 3]
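In plain Python the append-unique behavior is the familiar membership guard (illustrative comparison):

```python
def append_unique(items: list, value) -> list:
    """Append value only if it's not already present, like '[+?]'."""
    if value not in items:
        items.append(value)
    return items

items = [1, 2]
append_unique(items, 3)
append_unique(items, 3)   # second call is a no-op
print(items)              # [1, 2, 3]
```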
<a id="the-invert---operator"></a>
### The invert `-` operator
You can invert the meaning of the notation by prefixing a `-`. For example,
to remove an item using `update`:
>>> import dotted
>>> d = {'a': 'hello', 'b': 'bye'}
>>> dotted.update(d, '-b', dotted.ANY)
{'a': 'hello'}
>>> dotted.remove(d, | text/markdown | Frey Waid | logophage1@gmail.com | null | null | MIT | null | [
"Development Status :: 3 - Alpha",
"Programming Language :: Python :: 3",
"Operating System :: OS Independent"
] | [] | https://github.com/freywaid/dotted | null | >=3.6 | [] | [] | [] | [
"pyparsing>=3.0",
"PyYAML>=5.0; extra == \"yaml\"",
"tomli>=1.0; python_version < \"3.11\" and extra == \"toml\"",
"tomli_w>=1.0; extra == \"toml\"",
"PyYAML>=5.0; extra == \"all\"",
"tomli>=1.0; python_version < \"3.11\" and extra == \"all\"",
"tomli_w>=1.0; extra == \"all\""
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.11.7 | 2026-02-19T06:35:03.614126 | dotted_notation-0.32.2.tar.gz | 150,602 | ac/c4/e6838b36c1bde02fbab58a33316fa84b574d87d45ebeed96b70ee7012cc9/dotted_notation-0.32.2.tar.gz | source | sdist | null | false | af831b062e54c17f17540846e3c834f7 | 2b0316653c10c158b41f7ca9c23a91234958f56e677641f3fa22ac150a56eff0 | acc4e6838b36c1bde02fbab58a33316fa84b574d87d45ebeed96b70ee7012cc9 | null | [] | 421 |
2.4 | github-agent | 0.2.14 | GitHub Agent for MCP | # GitHub Agent - A2A | AG-UI | MCP
*Version: 0.2.14*
## Overview
**GitHub Agent** is a powerful **Model Context Protocol (MCP)** server and **Agent-to-Agent (A2A)** system designed to interact with GitHub.
It acts as a **Supervisor Agent**, delegating tasks to a suite of specialized **Child Agents**, each focused on a specific domain of the GitHub API (e.g., Issues, Pull Requests, Repositories, Actions). This architecture allows for precise and efficient handling of complex GitHub operations.
This repository is actively maintained - Contributions are welcome!
### Capabilities:
- **Supervisor-Worker Architecture**: Orchestrates specialized agents for optimal task execution.
- **Comprehensive GitHub Coverage**: Specialized agents for Issues, PRs, Repos, Actions, Organizations, and more.
- **MCP Support**: Fully compatible with the Model Context Protocol.
- **A2A Integration**: Ready for Agent-to-Agent communication.
- **Flexible Deployment**: Run via Docker, Docker Compose, or locally.
## Architecture
### System components
```mermaid
---
config:
layout: dagre
---
flowchart TB
subgraph subGraph0["Agent Capabilities"]
Supervisor["Supervisor Agent"]
Server["A2A Server - Uvicorn/FastAPI"]
ChildAgents["Child Agents (Specialists)"]
MCP["GitHub MCP Tools"]
end
Supervisor --> ChildAgents
ChildAgents --> MCP
User["User Query"] --> Server
Server --> Supervisor
MCP --> GitHubAPI["GitHub API"]
Supervisor:::agent
ChildAgents:::agent
Server:::server
User:::server
classDef server fill:#f9f,stroke:#333
classDef agent fill:#bbf,stroke:#333,stroke-width:2px
style Server stroke:#000000,fill:#FFD600
style MCP stroke:#000000,fill:#BBDEFB
style GitHubAPI fill:#E6E6FA
style User fill:#C8E6C9
style subGraph0 fill:#FFF9C4
```
### Component Interaction
```mermaid
sequenceDiagram
participant User
participant Server as A2A Server
participant Supervisor as Supervisor Agent
participant Child as Child Agent (e.g. Issues)
participant MCP as GitHub MCP Tools
participant GitHub as GitHub API
User->>Server: "Create an issue in repo X"
Server->>Supervisor: Invoke Supervisor
Supervisor->>Supervisor: Analyze Request & Select Specialist
Supervisor->>Child: Delegate to Issues Agent
Child->>MCP: Call create_issue Tool
MCP->>GitHub: POST /repos/user/repo/issues
GitHub-->>MCP: Issue Created JSON
MCP-->>Child: Tool Response
Child-->>Supervisor: Task Complete
Supervisor-->>Server: Final Response
Server-->>User: "Issue #123 created successfully"
```
## Specialized Agents
The Supervisor delegates tasks to these specialized agents:
| Agent Name | Description |
|:-----------|:------------|
| `GitHub_Context_Agent` | Provides context about the current user and GitHub status. |
| `GitHub_Actions_Agent` | Manages GitHub Actions workflows and runs. |
| `GitHub_Code_Security_Agent` | Handles code security scanning and alerts. |
| `GitHub_Dependabot_Agent` | Manages Dependabot alerts and configurations. |
| `GitHub_Discussions_Agent` | Manages repository discussions. |
| `GitHub_Gists_Agent` | Manages GitHub Gists. |
| `GitHub_Git_Agent` | Performs low-level Git operations (refs, trees, blobs). |
| `GitHub_Issues_Agent` | Manages Issues (create, list, update, comment). |
| `GitHub_Labels_Agent` | Manages repository labels. |
| `GitHub_Notifications_Agent` | Checks and manages notifications. |
| `GitHub_Organizations_Agent` | Manages Organization memberships and settings. |
| `GitHub_Projects_Agent` | Manages GitHub Projects (V2). |
| `GitHub_Pull_Requests_Agent` | Manages Pull Requests (create, review, merge). |
| `GitHub_Repos_Agent` | Manages Repositories (create, list, delete, settings). |
| `GitHub_Secret_Protection_Agent` | Manages secret scanning protection. |
| `GitHub_Security_Advisories_Agent` | Accesses security advisories. |
| `GitHub_Stargazers_Agent` | Views repository stargazers. |
| `GitHub_Users_Agent` | Accesses public user information. |
| `GitHub_Copilot_Agent` | Assists with coding tasks via Copilot. |
| `GitHub_Support_Docs_Agent` | Searches GitHub Support documentation. |
## Usage
### Prerequisites
- Python 3.10+
- A valid GitHub Personal Access Token (PAT) with appropriate permissions.
### Installation
```bash
pip install github-agent
```
Or using UV:
```bash
uv pip install github-agent
```
### CLI
The `github-agent` command starts the server.
| Argument | Description | Default |
|:---|:---|:---|
| `--host` | Host to bind the server to | `0.0.0.0` |
| `--port` | Port to bind the server to | `9000` |
| `--mcp-config` | Path to MCP configuration file | `mcp_config.json` |
| `--provider` | LLM Provider (openai, anthropic, google, etc.) | `openai` |
| `--model-id` | LLM Model ID | `qwen/qwen3-coder-next` |
### Running the Agent Server
```bash
github-agent --provider openai --model-id gpt-4o --api-key sk-...
```
## Docker
### Build
```bash
docker build -t github-agent .
```
### Run using Docker
```bash
docker run -d \
-p 9000:9000 \
-e LLM_API_KEY=sk-... \
-e MCP_CONFIG=/app/mcp_config.json \
knucklessg1/github-agent:latest
```
### Run using Docker Compose
Create a `docker-compose.yml`:
```yaml
services:
github-agent:
image: knucklessg1/github-agent:latest
ports:
- "9000:9000"
environment:
- PROVIDER=openai
- MODEL_ID=gpt-4o
- LLM_API_KEY=${LLM_API_KEY}
volumes:
- ./mcp_config.json:/app/mcp_config.json
```
Then run:
```bash
docker-compose up -d
```
## Repository Owners
<img width="100%" height="180em" src="https://github-readme-stats.vercel.app/api?username=Knucklessg1&show_icons=true&hide_border=true&&count_private=true&include_all_commits=true" />
| text/markdown | null | Audel Rouhi <knucklessg1@gmail.com> | null | null | MIT | null | [
"Development Status :: 5 - Production/Stable",
"License :: Public Domain",
"Environment :: Console",
"Operating System :: POSIX :: Linux",
"Programming Language :: Python :: 3"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"tree-sitter>=0.23.2",
"pydantic-ai-slim[a2a,ag-ui,anthropic,fastmcp,google,groq,huggingface,mistral,openai,web]>=1.60.0",
"pydantic-ai-skills>=v0.4.0",
"fastapi>=0.128.0",
"fastmcp",
"uvicorn",
"fastapi"
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.12.3 | 2026-02-19T06:35:01.705744 | github_agent-0.2.14.tar.gz | 21,717 | e0/a5/0bde1a1437f0f93025eacb9217eff4553f99cf15c3692b9ce496caea91be/github_agent-0.2.14.tar.gz | source | sdist | null | false | 0a05beac4e6c1edbcddcbafa7272b0ed | e5b2cf829765ba4dd852111be6372bdf61b608fc14003e3f05c6cb130b02fc20 | e0a50bde1a1437f0f93025eacb9217eff4553f99cf15c3692b9ce496caea91be | null | [
"LICENSE"
] | 252 |
2.4 | servicenow-api | 1.6.15 | Python ServiceNow API Wrapper | # ServiceNow - API | MCP | A2A | AG-UI
*Version: 1.6.15*
## Overview
This project started out as a Python wrapper for ServiceNow, but it has grown since the dawn of standards like
the Model Context Protocol (MCP) and Agent2Agent (A2A). This Agent solves the problem of **tons** of MCP tools by distributing those tools to **child agents**.
This allows you to run this entire agent on a small context window and maintain blazing speed! A 10K context token minimum is recommended, although I got away with 4K in some tests using qwen-4b on default settings.
This repository has become a sandbox for some brand new and pretty cool features with respect to each of those standards.
The original APIs should remain stable,
but please note that the latest features like A2A may still be unstable, as they are under active development.
This project now includes an MCP server, which wraps all the original APIs you know and love in the base project. This
allows any MCP capable LLM to leverage these tools and interact with ServiceNow. The MCP Server is enhanced with
various authentication mechanisms, middleware for observability and control,
and optional Eunomia authorization for policy-based access control.
ServiceNow A2A implements a multi-agent system designed to manage and interact with ServiceNow tasks through a delegated,
specialist-based architecture. Built using Python, it leverages libraries like `pydantic-ai` for agent creation,
`FastMCPToolset` for integrating Model Context Protocol (MCP) tools, and `Graphiti`
for building a temporal knowledge graph from official ServiceNow documentation.
The system runs as a FastAPI server via Uvicorn, exposing an Agent-to-Agent (A2A) interface for handling requests.
`pydantic-ai` is able to expose your agent as an A2A agent out of the box with `.to_a2a()`.
This allows the agent to be integrated into
any agentic framework like `Microsoft Agent Framework` (MAF) or `crew.ai`.
The core idea is a **Supervisor Agent** that acts as an intelligent router. It analyzes user queries and assigns tasks to specialized **child agents**,
each focused on a specific domain (e.g., incidents, change management).
Child agents have access to filtered MCP tools and a shared knowledge graph for reasoning over documentation,
ensuring accurate and context-aware executions without direct access to the underlying APIs from the orchestrator.
This architecture is optimized for:
- **Speed**: Parallel execution potential and smaller context windows per agent.
- **Context Usage**: Each agent only loads the tools and skills relevant to its domain, allowing the system to handle hundreds of MCP tools without exceeding token limits or confusing the LLM.
- **Modularity**: New domains can be added as simple child agents without touching the core supervisor logic.
This architecture promotes modularity, scalability, and maintainability, making it suitable for enterprise
integrations with platforms like ServiceNow.
Contributions are welcome!
## API
### API Calls
- Application Service
- Change Management
- CI/CD
- CMDB
- Import Sets
- Incident
- Knowledge Base
- Table
- Custom Endpoint
If your API call isn't supported, you can use the `api_request` tool to perform GET/POST/PUT/DELETE requests to any ServiceNow endpoint.
## MCP
All the available API Calls above are wrapped in MCP Tools. You can find those below with their tool descriptions and associated tag.
### MCP Tools
| Function Name | Description | Tag(s) |
|--------------------------------------------|----------------------------------------------------------------------------------------------------------------------------|-------------------------|
| get_application | Retrieves details of a specific application from a ServiceNow instance by its unique identifier. | application |
| get_cmdb | Fetches a specific Configuration Management Database (CMDB) record from a ServiceNow instance using its unique identifier. | cmdb |
| batch_install_result | Retrieves the result of a batch installation process in ServiceNow by result ID. | cicd |
| instance_scan_progress | Gets the progress status of an instance scan in ServiceNow by progress ID. | cicd |
| progress | Retrieves the progress status of a specified process in ServiceNow by progress ID. | cicd |
| batch_install | Initiates a batch installation of specified packages in ServiceNow with optional notes. | cicd |
| batch_rollback | Performs a rollback of a batch installation in ServiceNow using the rollback ID. | cicd |
| app_repo_install | Installs an application from a repository in ServiceNow with specified parameters. | cicd |
| app_repo_publish | Publishes an application to a repository in ServiceNow with development notes and version. | cicd |
| app_repo_rollback | Rolls back an application to a previous version in ServiceNow by sys_id, scope, and version. | cicd |
| full_scan | Initiates a full scan of the ServiceNow instance. | cicd |
| point_scan | Performs a targeted scan on a specific instance and table in ServiceNow. | cicd |
| combo_suite_scan | Executes a scan on a combination of suites in ServiceNow by combo sys_id. | cicd |
| suite_scan | Runs a scan on a specified suite with a list of sys_ids and scan type in ServiceNow. | cicd |
| activate_plugin | Activates a specified plugin in ServiceNow by plugin ID. | plugins |
| rollback_plugin | Rolls back a specified plugin in ServiceNow to its previous state by plugin ID. | plugins |
| apply_remote_source_control_changes | Applies changes from a remote source control branch to a ServiceNow application. | source_control |
| import_repository | Imports a repository into ServiceNow with specified credentials and branch. | source_control |
| run_test_suite | Executes a test suite in ServiceNow with specified browser and OS configurations. | testing |
| update_set_create | Creates a new update set in ServiceNow with a given name, scope, and description. | update_sets |
| update_set_retrieve | Retrieves an update set from a source instance in ServiceNow with optional preview and cleanup. | update_sets |
| update_set_preview | Previews an update set in ServiceNow by its remote sys_id. | update_sets |
| update_set_commit | Commits an update set in ServiceNow with an option to force commit. | update_sets |
| update_set_commit_multiple | Commits multiple update sets in ServiceNow in the specified order. | update_sets |
| update_set_back_out | Backs out an update set in ServiceNow with an option to rollback installations. | update_sets |
| get_change_requests | Retrieves change requests from ServiceNow with optional filtering and pagination. | change_management |
| get_change_request_nextstate | Gets the next state for a specific change request in ServiceNow. | change_management |
| get_change_request_schedule | Retrieves the schedule for a change request based on a Configuration Item (CI) in ServiceNow. | change_management |
| get_change_request_tasks | Fetches tasks associated with a change request in ServiceNow with optional filtering. | change_management |
| get_change_request | Retrieves details of a specific change request in ServiceNow by sys_id and type. | change_management |
| get_change_request_ci | Gets Configuration Items (CIs) associated with a change request in ServiceNow. | change_management |
| get_change_request_conflict | Retrieves conflict-scan results for a change request in ServiceNow. | change_management |
| get_standard_change_request_templates | Retrieves standard change request templates from ServiceNow with optional filtering. | change_management |
| get_change_request_models | Fetches change request models from ServiceNow with optional filtering and type. | change_management |
| get_standard_change_request_model | Retrieves a specific standard change request model in ServiceNow by sys_id. | change_management |
| get_standard_change_request_template | Gets a specific standard change request template in ServiceNow by sys_id. | change_management |
| get_change_request_worker | Retrieves details of a change request worker in ServiceNow by sys_id. | change_management |
| create_change_request | Creates a new change request in ServiceNow with specified details and type. | change_management |
| create_change_request_task | Creates a task for a change request in ServiceNow with provided details. | change_management |
| create_change_request_ci_association | Associates Configuration Items (CIs) with a change request in ServiceNow. | change_management |
| calculate_standard_change_request_risk | Calculates the risk for a standard change request in ServiceNow. | change_management |
| check_change_request_conflict | Checks for conflicts in a change request in ServiceNow. | change_management |
| refresh_change_request_impacted_services | Refreshes the impacted services for a change request in ServiceNow. | change_management |
| approve_change_request | Approves or rejects a change request in ServiceNow by setting its state. | change_management |
| update_change_request | Updates a change request in ServiceNow with new details and type. | change_management |
| update_change_request_first_available | Updates a change request to the first available state in ServiceNow. | change_management |
| update_change_request_task | Updates a task for a change request in ServiceNow with new details. | change_management |
| delete_change_request | Deletes a change request from ServiceNow by sys_id and type. | change_management |
| delete_change_request_task | Deletes a task associated with a change request in ServiceNow. | change_management |
| delete_change_request_conflict_scan | Deletes a conflict scan for a change request in ServiceNow. | change_management |
| get_import_set | Retrieves details of a specific import set record from a ServiceNow instance. | import_sets |
| insert_import_set | Inserts a new record into a specified import set on a ServiceNow instance. | import_sets |
| insert_multiple_import_sets | Inserts multiple records into a specified import set on a ServiceNow instance. | import_sets |
| get_incidents | Retrieves incident records from a ServiceNow instance, optionally by specific incident ID. | incidents |
| create_incident | Creates a new incident record on a ServiceNow instance with provided details. | incidents |
| get_knowledge_articles | Get all Knowledge Base articles from a ServiceNow instance. | knowledge_management |
| get_knowledge_article | Get a specific Knowledge Base article from a ServiceNow instance. | knowledge_management |
| get_knowledge_article_attachment | Get a Knowledge Base article attachment from a ServiceNow instance. | knowledge_management |
| get_featured_knowledge_article | Get featured Knowledge Base articles from a ServiceNow instance. | knowledge_management |
| get_most_viewed_knowledge_articles | Get most viewed Knowledge Base articles from a ServiceNow instance. | knowledge_management |
| batch_request | Sends multiple REST API requests in a single call. | batch |
| check_ci_lifecycle_compat_actions | Determines whether two specified CI actions are compatible. | cilifecycle |
| register_ci_lifecycle_operator | Registers an operator for a non-workflow user. | cilifecycle |
| unregister_ci_lifecycle_operator | Unregisters an operator for non-workflow users. | cilifecycle |
| check_devops_change_control | Checks if the orchestration task is under change control. | devops |
| register_devops_artifact | Enables orchestration tools to register artifacts into a ServiceNow instance. | devops |
| delete_table_record | Delete a record from the specified table on a ServiceNow instance. | table_api |
| get_table | Get records from the specified table on a ServiceNow instance. | table_api |
| get_table_record | Get a specific record from the specified table on a ServiceNow instance. | table_api |
| patch_table_record | Partially update a record in the specified table on a ServiceNow instance. | table_api |
| update_table_record | Fully update a record in the specified table on a ServiceNow instance. | table_api |
| add_table_record | Add a new record to the specified table on a ServiceNow instance. | table_api |
| refresh_auth_token | Refreshes the authentication token for the ServiceNow client. | auth |
| api_request | Make a custom API request to a ServiceNow instance. | custom_api |
| send_email | Sends an email via ServiceNow. | email |
| get_data_classification | Retrieves data classification information. | data_classification |
| get_attachment | Retrieves attachment metadata. | attachment |
| upload_attachment | Uploads an attachment to a record. | attachment |
| delete_attachment | Deletes an attachment. | attachment |
| get_stats | Retrieves aggregate statistics for a table. | aggregate |
| get_activity_subscriptions | Retrieves activity subscriptions. | activity_subscriptions |
| get_account | Retrieves CSM account information. | account |
| get_hr_profile | Retrieves HR profile information. | hr |
| metricbase_insert | Inserts time series data into MetricBase. | metricbase |
| check_service_qualification | Creates a technical service qualification request. | service_qualification |
| get_service_qualification | Retrieves a technical qualification request by ID or list all. | service_qualification |
| process_service_qualification_result | Processes a technical service qualification result. | service_qualification |
| insert_cost_plans | Creates cost plans. | ppm |
| insert_project_tasks | Creates a project and associated project tasks. | ppm |
| get_product_inventory | Retrieves a list of all product inventories. | product_inventory |
| delete_product_inventory | Deletes a specified product inventory record. | product_inventory |
## A2A Agent
### Architecture Summary
The system follows a hierarchical multi-agent design:
- **Orchestrator Agent**: Acts as the entry point, parsing user requests and delegating to child agents based on domain tags.
- **Child Agents**: Domain-specific specialists (one per tag, e.g., "cmdb", "incidents") that execute tasks using filtered MCP tools and query the knowledge graph for guidance from official docs.
- **Knowledge Graph (Graphiti)**: A centralized, temporal graph database ingesting ServiceNow documentation, accessible by child agents for retrieval-augmented reasoning.
- **MCP Tools**: Distributed via tags to ensure each child agent only accesses relevant tools, preventing overload and enhancing security.
- **Server Layer**: Exposes the orchestrator as an A2A API using `pydantic-ai`'s `to_a2a` method and Uvicorn.
Key integrations:
- **Pydantic-AI**: Used for defining agents, models (e.g., OpenAI, Anthropic), and run contexts.
- **A2A (Agent-to-Agent)**: Enables inter-agent communication through delegation tools.
- **Graphiti**: Builds a corpus from URLs of official docs, supporting backends like Kuzu (local), Neo4j, or FalkorDB.
- **FastMCPToolset**: Provides MCP wrappers for ServiceNow APIs, filtered by tags for distribution.
The script initializes the graph on startup (ingesting docs if needed), creates child agents with tag-filtered tools and graph access, builds the supervisor with delegation tools, and launches the server.
### Architecture
```mermaid
---
config:
layout: dagre
---
flowchart TB
subgraph subGraph0["Agent Capabilities"]
D["MCP Tools"]
F["Agent Skills"]
C["Specialized Agents"]
end
A["User Query"] --> B["A2A Server - Uvicorn/FastAPI"]
B --> G["Supervisor Agent"]
G -- "Assigns Task" --> C
C --> D & F
D --> E["Platform API"]
G:::supervisor
C:::agent
B:::server
A:::server
classDef server fill:#f9f,stroke:#333
classDef agent fill:#bbf,stroke:#333,stroke-width:2px
classDef supervisor fill:#ff9,stroke:#333,stroke-width:2px
style B stroke:#000000,fill:#FFD600
style D stroke:#000000,fill:#BBDEFB
style F fill:#BBDEFB
style A fill:#C8E6C9
style subGraph0 fill:#FFF9C4
```
This diagram shows the flow from user input to delegation, tool execution, and knowledge retrieval. The orchestrator synthesizes results from children before responding.
### Breakdown
### 1. Pydantic-AI Integration
Pydantic-AI is the backbone for agent modeling and execution. Agents are created using the `Agent` class, which takes an LLM model (e.g., OpenAIChatModel), system prompt, tools/toolsets, and a name.
- **Model Creation**: The `create_model` function supports multiple providers (OpenAI, Anthropic, Google, HuggingFace) via environment variables or CLI args.
- **RunContext**: Used in tool functions for contextual execution.
- **Agent Hierarchy**: Orchestrator has delegation tools (async functions wrapping child `agent.run()` calls). Children have MCP toolsets and Graphiti tools.
This enables structured, type-safe agent interactions with Pydantic validation.
### 2. A2A (Agent-to-Agent) Framework
A2A facilitates communication between agents:
- The orchestrator is converted to an A2A app via `agent.to_a2a()`, defining skills (high-level capabilities per tag).
- Delegation tools are async callables named `delegate_to_{tag}`, allowing the orchestrator to invoke children dynamically.
- This distributes workload, with the orchestrator synthesizing outputs for a cohesive response.
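The `delegate_to_{tag}` pattern can be sketched without the framework: each delegation tool is a closure over one child's async `run`, renamed so the orchestrator sees a distinct tool per domain. The `ChildAgent` stand-in below is illustrative; the real children are pydantic-ai `Agent` instances.

```python
import asyncio

class ChildAgent:
    """Stand-in for a pydantic-ai Agent; run() is async like agent.run()."""
    def __init__(self, tag: str):
        self.tag = tag
    async def run(self, prompt: str) -> str:
        return f"[{self.tag}] handled: {prompt}"

def make_delegation_tool(tag: str, child: ChildAgent):
    """Build an async callable named delegate_to_<tag> for the orchestrator."""
    async def delegate(prompt: str) -> str:
        return await child.run(prompt)
    delegate.__name__ = f"delegate_to_{tag}"
    return delegate

tools = [make_delegation_tool(t, ChildAgent(t)) for t in ("incidents", "cmdb")]
result = asyncio.run(tools[0]("create a P1 incident"))
```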
### 3. Multiple Agents
- **Orchestrator**: Analyzes queries to identify domains (tags), delegates via tools, and combines results. System prompt emphasizes delegation without direct action.
- **Child Agents**: One per tag in `TAGS` list (e.g., "incidents", "cmdb"). Each has a focused system prompt, filtered tools, and Graphiti access.
- **Creation Flow**: In `create_orchestrator`, children are instantiated in a loop, stored in a dict, and wrapped as tools for the parent.
This multi-agent setup scales by adding tags/children without refactoring the core.
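A minimal sketch of that creation flow, with an illustrative `Child` stand-in in place of the real agent class: one child per tag, stored in a dict, each keyed by the tool name the parent will expose.

```python
TAGS = ["incidents", "cmdb", "change_management"]

class Child:
    def __init__(self, tag: str):
        self.tag = tag

def create_orchestrator(tags: list[str]):
    children = {tag: Child(tag) for tag in tags}  # instantiated in a loop
    # Each child is wrapped as a delegation tool for the parent.
    tools = {f"delegate_to_{tag}": child for tag, child in children.items()}
    return children, tools

children, tools = create_orchestrator(TAGS)
# Adding a new domain is just one more tag, with no core refactoring:
children2, _ = create_orchestrator(TAGS + ["knowledge_management"])
```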
### 4. Distribution of MCP Tools via Tags
MCP tools (from `FastMCPToolset`) wrap ServiceNow APIs. Distribution:
- Tools are tagged (e.g., "incidents" for incident-related endpoints).
- In `create_child_agent`, the toolset is filtered: `filtered_toolset = toolset.filtered(lambda ctx, tool_def: tag in (tool_def.tags or []))`.
- This ensures each child only sees relevant tools, reducing complexity and potential misuse.
Example: The "incidents" child gets tools like incident creation/query, while "cmdb" gets CMDB-specific ones.
### 5. Corpus of Knowledge from Official Documentation (Graphiti)
- **Initialization**: `initialize_graphiti_db` creates a Graphiti instance with the chosen backend (Kuzu for local, Neo4j/FalkorDB for servers).
- **Ingestion**: If the graph is empty or `--graphiti-force-reinit` is set, it fetches and adds episodes from `INITIAL_DOC_URLS` (ServiceNow API refs, guides).
- **Temporal Aspect**: Graphiti's episodes preserve dates for version-aware queries.
- **Access by Children**:
- For Kuzu (embedded): Custom tools `ingest_to_graph` and `query_graph` wrap Graphiti methods.
- For servers: Uses a separate `FastMCPToolset` for Graphiti MCP, filtered by tag.
- **Usage**: Children query the graph (e.g., "Retrieve details for Table API") to inform tool calls, enabling retrieval-augmented generation (RAG) over docs.
This creates a dynamic corpus, allowing agents to "understand" APIs without hardcoding.
### Component Interaction Diagram
```mermaid
sequenceDiagram
participant User
participant Server as A2A Server
participant Supervisor as Supervisor Agent
participant Agent as Specialized Agent
participant Skill as Agent Skills
participant MCP as MCP Tools
User->>Server: Send Query
Server->>Supervisor: Invoke Supervisor
Supervisor->>Agent: Assign Task (e.g. "Get Incident")
Agent->>Skill: Analyze Skills Available
Skill->>Agent: Provide Guidance on Next Steps
Agent->>MCP: Invoke Tool
MCP-->>Agent: Tool Response Returned
Agent-->>Supervisor: Return Results Summarized
Supervisor-->>Server: Final Response
Server-->>User: Output
```
This sequence highlights delegation, knowledge retrieval, and tool execution.
### Strengths
- **Modularity & Scalability**: Tags allow easy addition of domains (e.g., new ServiceNow modules) by extending `TAGS` and MCP tools. Graphiti scales with more docs via incremental ingestion.
- **Knowledge-Driven Reasoning**: By ingesting official docs, children can handle evolving APIs (e.g., Zurich bundle updates) without code changes—queries adapt to temporal data.
- **Efficiency**: Tool filtering prevents overload; delegation parallelizes tasks (though sequential in code, extensible to async).
- **Flexibility**: Supports multiple LLMs/backends via args/envs; A2A enables integration with other systems.
- **Minimal Setup**: The graph auto-initializes, and sensible defaults make it runnable out of the box (assuming the MCP and Graphiti servers are available).
#### Features
- **Authentication**: Supports multiple authentication types including none (disabled), static (internal tokens), JWT, OAuth Proxy, OIDC Proxy, and Remote OAuth for external identity providers.
- **Middleware**: Includes logging, timing, rate limiting, and error handling for robust server operation.
- **Eunomia Authorization**: Optional policy-based authorization with embedded or remote Eunomia server integration.
- **Resources**: Provides `instance_config` and `incident_categories` for ServiceNow configuration and data.
- **Prompts**: Includes `create_incident_prompt` and `query_table_prompt` for AI-driven interactions.
- **OIDC Token Delegation**: Supports token exchange for ServiceNow API calls, enabling user-specific authentication via OIDC.
- **OpenAPI JSON Tool Import**: Import custom ServiceNow API Endpoints through the OpenAPI JSON generated.
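OIDC delegation of this kind is typically an RFC 8693 token exchange. The sketch below only builds the request body; the parameter names follow the RFC, while the audience and scope values are placeholders rather than this server's actual defaults.

```python
def build_token_exchange_payload(subject_token: str, audience: str,
                                 scopes: list[str]) -> dict:
    """Assemble an RFC 8693 token-exchange form body (sketch)."""
    return {
        "grant_type": "urn:ietf:params:oauth:grant-type:token-exchange",
        "subject_token": subject_token,
        "subject_token_type": "urn:ietf:params:oauth:token-type:access_token",
        "audience": audience,       # corresponds to --servicenow-audience
        "scope": " ".join(scopes),  # corresponds to --delegated-scopes
    }

# Placeholder token and values for illustration only:
payload = build_token_exchange_payload("eyJ...user-token", "servicenow", ["useraccount"])
```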
## Usage
### API
**OAuth Authentication**
```python
#!/usr/bin/python
# coding: utf-8
from servicenow_api.servicenow_api import Api
username = "<SERVICENOW USERNAME>"
password = "<SERVICENOW PASSWORD>"
client_id = "<SERVICENOW CLIENT_ID>"
client_secret = "<SERVICENOW_CLIENT_SECRET>"
servicenow_url = "<SERVICENOW_URL>"
client = Api(
    url=servicenow_url,
    username=username,
    password=password,
    client_id=client_id,
    client_secret=client_secret,
)
table = client.get_table(table="<TABLE NAME>")
print(f"Table: {table.model_dump()}")
```
**Basic Authentication**
```python
#!/usr/bin/python
# coding: utf-8
from servicenow_api.servicenow_api import Api
username = "<SERVICENOW USERNAME>"
password = "<SERVICENOW PASSWORD>"
servicenow_url = "<SERVICENOW_URL>"
client = Api(
    url=servicenow_url,
    username=username,
    password=password,
)
table = client.get_table(table="<TABLE NAME>")
print(f"Table: {table.model_dump()}")
```
**Proxy and SSL Verify**
```python
#!/usr/bin/python
# coding: utf-8
from servicenow_api.servicenow_api import Api
username = "<SERVICENOW USERNAME>"
password = "<SERVICENOW PASSWORD>"
servicenow_url = "<SERVICENOW_URL>"
proxies = {"https": "https://proxy.net"}
client = Api(
    url=servicenow_url,
    username=username,
    password=password,
    proxies=proxies,
    verify=False,
)
table = client.get_table(table="<TABLE NAME>")
print(f"Table: {table.model_dump()}")
```
### MCP
#### MCP CLI
| Short Flag | Long Flag | Description |
|------------|---------------------------------|-----------------------------------------------------------------------------------------------------------|
| -h | --help | Display help information |
| -t | --transport | Transport method: 'stdio', 'http', or 'sse' [legacy] (default: stdio) |
| -s | --host | Host address for HTTP transport (default: 0.0.0.0) |
| -p | --port | Port number for HTTP transport (default: 8000) |
| | --auth-type | Authentication type: 'none', 'static', 'jwt', 'oauth-proxy', 'oidc-proxy', 'remote-oauth' (default: none) |
| | --token-jwks-uri | JWKS URI for JWT verification |
| | --token-issuer | Issuer for JWT verification |
| | --token-audience | Audience for JWT verification |
| | --token-algorithm | JWT signing algorithm (e.g., HS256, RS256). Required for HMAC or static keys. Auto-detected for JWKS. |
| | --token-secret | Shared secret for HMAC (HS*) verification. Used with --token-algorithm. |
| | --token-public-key | Path to PEM public key file or inline PEM string for static asymmetric verification. |
| | --required-scopes | Comma-separated required scopes (e.g., servicenow.read,servicenow.write). Enforced by JWTVerifier. |
| | --oauth-upstream-auth-endpoint | Upstream authorization endpoint for OAuth Proxy |
| | --oauth-upstream-token-endpoint | Upstream token endpoint for OAuth Proxy |
| | --oauth-upstream-client-id | Upstream client ID for OAuth Proxy |
| | --oauth-upstream-client-secret | Upstream client secret for OAuth Proxy |
| | --oauth-base-url | Base URL for OAuth Proxy |
| | --oidc-config-url | OIDC configuration URL |
| | --oidc-client-id | OIDC client ID |
| | --oidc-client-secret | OIDC client secret |
| | --oidc-base-url | Base URL for OIDC Proxy |
| | --remote-auth-servers | Comma-separated list of authorization servers for Remote OAuth |
| | --remote-base-url | Base URL for Remote OAuth |
| | --allowed-client-redirect-uris | Comma-separated list of allowed client redirect URIs |
| | --eunomia-type | Eunomia authorization type: 'none', 'embedded', 'remote' (default: none) |
| | --eunomia-policy-file | Policy file for embedded Eunomia (default: mcp_policies.json) |
| | --eunomia-remote-url | URL for remote Eunomia server |
| | --enable-delegation | Enable OIDC token delegation to ServiceNow (default: False) |
| | --servicenow-audience | Audience for the delegated ServiceNow token |
| | --delegated-scopes | Scopes for the delegated ServiceNow token (space-separated) |
| | --openapi-file | Path to OpenAPI JSON spec to import tools/resources from |
| | --openapi-base-url | Base URL for the OpenAPI client (defaults to ServiceNow instance URL) |
#### Using as an MCP Server
The MCP server supports three transports: `stdio` (default, for local testing), `http` (streamable HTTP, for networked access), and `sse` (legacy). To start the server, use the following commands:
Run in stdio mode (default):
```bash
servicenow-mcp --transport "stdio"
```
Run in HTTP mode:
```bash
servicenow-mcp --transport "http" --host "0.0.0.0" --port "8000"
```
Run in Production:
**Embedded Eunomia:**
`mcp_policies.json`
```json
{
  "policies": [
    {
      "id": "servicenow_read_policy",
      "description": "Allow read-only tools if user has read scope",
      "allow": true,
      "conditions": [
        {
          "tool": ["get_application", "get_cmdb", "batch_install_result"],
          "scopes": ["servicenow.read", "servicenow.full"]
        }
      ]
    },
    {
      "id": "servicenow_write_policy",
      "description": "Allow write tools if user has write scope and is admin",
      "allow": true,
      "conditions": [
        {
          "tool": ["batch_install", "batch_rollback", "app_repo_install"],
          "scopes": ["servicenow.write", "servicenow.full"],
          "claims": {"role": "admin"}
        }
      ]
    },
    {
      "id": "default_deny",
      "description": "Deny all other access",
      "allow": false
    }
  ]
}
```
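A naive first-match evaluator conveys the intent of such a policy file. This is a sketch of the decision logic only, not Eunomia's actual engine, and the policies below are abbreviated copies of the example above.

```python
POLICIES = [
    {"id": "servicenow_read_policy", "allow": True,
     "conditions": [{"tool": ["get_application", "get_cmdb"],
                     "scopes": ["servicenow.read", "servicenow.full"]}]},
    {"id": "servicenow_write_policy", "allow": True,
     "conditions": [{"tool": ["batch_install"],
                     "scopes": ["servicenow.write", "servicenow.full"],
                     "claims": {"role": "admin"}}]},
    {"id": "default_deny", "allow": False},
]

def authorize(tool: str, scopes: list[str], claims: dict) -> bool:
    for policy in POLICIES:
        conditions = policy.get("conditions")
        if not conditions:  # unconditional policy, e.g. the default deny
            return policy["allow"]
        for cond in conditions:
            if tool not in cond["tool"]:
                continue
            if not set(cond["scopes"]) & set(scopes):
                continue
            if any(claims.get(k) != v for k, v in cond.get("claims", {}).items()):
                continue
            return policy["allow"]
    return False
```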
Run command examples:
```bash
export IDENTITY_JWKS_URI="https://your-identity-provider.com/.well-known/jwks.json"
export API_IDENTIFIER="servicenow-mcp"
export PRODUCT_READ_SCOPE="mcpserverapi.product.read"
export INVENTORY_READ_SCOPE="mcpserverapi.inventory.read"
servicenow-mcp \
  --transport "http" \
  --host "0.0.0.0" \
  --port "8000" \
  --auth-type "jwt" \
  --token-jwks-uri "${IDENTITY_JWKS_URI}" \
  --token-issuer "https://your-identity-provider.com" \
  --token-audience "${API_IDENTIFIER}" \
  --required-scopes "$PRODUCT_READ_SCOPE,$INVENTORY_READ_SCOPE" \
  --eunomia-type "embedded" \
  --eunomia-policy-file "mcp_policies.json"
```
```bash
# 1. JWKS (Production, RS256)
servicenow-mcp --auth- | text/markdown | null | Audel Rouhi <knucklessg1@gmail.com> | null | null | MIT | null | [
"Development Status :: 5 - Production/Stable",
"License :: Public Domain",
"Environment :: Console",
"Operating System :: POSIX :: Linux",
"Programming Language :: Python :: 3"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"tree-sitter>=0.23.2",
"requests>=2.8.1",
"urllib3>=2.2.2",
"pydantic[email]>=2.8.2",
"fastmcp>=3.0.0b1; extra == \"mcp\"",
"eunomia-mcp>=0.3.10; extra == \"mcp\"",
"fastapi>=0.128.0; extra == \"mcp\"",
"pydantic-ai-slim[a2a,ag-ui,anthropic,fastmcp,google,groq,huggingface,mistral,openai,web]>=1.60.0; ... | [] | [] | [] | [] | twine/6.2.0 CPython/3.12.3 | 2026-02-19T06:35:00.631343 | servicenow_api-1.6.15.tar.gz | 132,859 | 25/14/e8048a4ee1572b314b858f06861d33083f15d7f5b56d6024906a253a04e2/servicenow_api-1.6.15.tar.gz | source | sdist | null | false | e08d3c0bb2199be2e68c7c5001206c32 | bdfd3cbf03606fbf2344f13e139325ab742b79e5d5bf2434eb8f745e9adc3ca0 | 2514e8048a4ee1572b314b858f06861d33083f15d7f5b56d6024906a253a04e2 | null | [
"LICENSE"
] | 763 |
2.4 | failsafe-ai | 0.1.1 | Contract testing and compliance validation for multi-agent AI systems | # FailSafe
Contract testing and compliance validation for multi-agent AI systems.
Built by [PhT Labs](https://github.com/pht-labs).
## Install
```bash
pip install failsafe-ai
```
## Quick Start
```python
import asyncio

from failsafe import FailSafe

async def main():
    fs = FailSafe(mode="block")

    # Register agents
    fs.register_agent("research_agent")
    fs.register_agent("writer_agent")

    # Define a contract between them
    fs.contract(
        name="research-to-writer",
        source="research_agent",
        target="writer_agent",
        allow=["query", "sources", "summary"],
        deny=["api_key", "internal_config"],
        require=["query", "sources"],
    )

    # Validate a handoff (handoff is a coroutine, so await it inside async code)
    result = await fs.handoff(
        source="research_agent",
        target="writer_agent",
        payload={
            "query": "AI safety",
            "sources": ["arxiv.org/1234"],
            "api_key": "sk-secret-123",  # blocked
        },
    )

    print(result.passed)      # False
    print(result.violations)  # Denied fields found in payload: ['api_key']

asyncio.run(main())
```
## Features
- **Validate in milliseconds** — Deterministic contract rules execute without LLM calls.
- **Prevent data leakage** — Allow/deny field lists and sensitive pattern detection block data from crossing agent boundaries.
- **Compliance policies** — Pre-built policy packs for finance regulations and GDPR.
- **LLM-as-judge** — Natural language rules evaluated by an LLM for nuanced validation.
- **Full audit trail** — Every handoff logged to SQLite with violations, timestamps, and trace IDs.
- **Warn or block modes** — Choose per-contract whether violations log warnings or actively block handoffs.
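The allow/deny/require checks are deterministic enough to sketch in plain Python. The stand-in below mirrors the violation message from the Quick Start, but it omits the real engine's pattern detection, modes, and audit logging.

```python
def check_contract(payload: dict, allow: list, deny: list, require: list) -> list[str]:
    """Return a list of violation messages; empty means the handoff passes."""
    violations = []
    denied = [f for f in payload if f in deny]
    if denied:
        violations.append(f"Denied fields found in payload: {denied}")
    missing = [f for f in require if f not in payload]
    if missing:
        violations.append(f"Required fields missing: {missing}")
    unknown = [f for f in payload if f not in allow and f not in deny]
    if unknown:
        violations.append(f"Fields outside allow list: {unknown}")
    return violations

violations = check_contract(
    {"query": "AI safety", "sources": ["arxiv.org/1234"], "api_key": "sk-..."},
    allow=["query", "sources", "summary"],
    deny=["api_key", "internal_config"],
    require=["query", "sources"],
)
```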
## Integrations
```bash
pip install failsafe-ai[langchain]
```
Includes LangChain callback handler, LangGraph integration, and `@validated_tool` decorator.
## Dashboard
```bash
failsafe dashboard
```
Real-time visualization of validation events via WebSocket.
## License
MIT
| text/markdown | null | null | null | null | null | null | [] | [] | null | null | >=3.10 | [] | [] | [] | [
"pydantic>=2.0",
"fastapi>=0.100.0",
"uvicorn>=0.23.0",
"websockets>=12.0",
"httpx>=0.25.0",
"aiosqlite>=0.19.0",
"click>=8.0",
"sse-starlette>=1.6.0",
"langchain-core>=0.2.0; extra == \"langchain\"",
"langgraph>=0.2.0; extra == \"langchain\"",
"pytest>=8.0; extra == \"dev\"",
"pytest-asyncio>... | [] | [] | [] | [] | twine/6.2.0 CPython/3.11.11 | 2026-02-19T06:34:59.083311 | failsafe_ai-0.1.1.tar.gz | 24,791 | 42/f8/6e0c2f4967b8db975ddc5289fa9a2587e28becf002849846d8cb62261561/failsafe_ai-0.1.1.tar.gz | source | sdist | null | false | c2ba358bb42b8bf42147819c12a13a88 | 50cace2b8f8872983632ff7e2300f779b862542e14b0c89cf7a001dd21d7da8a | 42f86e0c2f4967b8db975ddc5289fa9a2587e28becf002849846d8cb62261561 | MIT | [] | 246 |
2.4 | plane-agent | 0.1.5 | Plane MCP Agent | <br /><br />
<p align="center">
<a href="https://plane.so">
<img src="https://media.docs.plane.so/logo/plane_github_readme.png" alt="Plane Logo" width="400">
</a>
</p>
<p align="center"><b>Modern project management for all teams</b></p>
<p align="center">
<a href="https://discord.com/invite/A92xrEGCge">
<img alt="Discord online members" src="https://img.shields.io/discord/1031547764020084846?color=5865F2&label=Discord&style=for-the-badge" />
</a>
<img alt="Commit activity per month" src="https://img.shields.io/github/commit-activity/m/makeplane/plane?style=for-the-badge" />
</p>
<p align="center">
<a href="https://plane.so/"><b>Website</b></a> •
<a href="https://github.com/makeplane/plane/releases"><b>Releases</b></a> •
<a href="https://twitter.com/planepowers"><b>Twitter</b></a> •
<a href="https://docs.plane.so/"><b>Documentation</b></a>
</p>
<p>
<a href="https://app.plane.so/#gh-light-mode-only" target="_blank">
<img
src="https://media.docs.plane.so/GitHub-readme/github-top.webp"
alt="Plane Screens"
width="100%"
/>
</a>
</p>
Meet [Plane](https://plane.so/), an open-source project management tool to track issues, run ~sprints~ cycles, and manage product roadmaps without the chaos of managing the tool itself. 🧘♀️
*Version: 0.1.5*
> Plane is evolving every day. Your suggestions, ideas, and reported bugs help us immensely. Do not hesitate to join in the conversation on [Discord](https://discord.com/invite/A92xrEGCge) or raise a GitHub issue. We read everything and respond to most.
## 🚀 Installation
Getting started with Plane is simple. Choose the setup that works best for you:
- **Plane Cloud**
Sign up for a free account on [Plane Cloud](https://app.plane.so)—it's the fastest way to get up and running without worrying about infrastructure.
- **Self-host Plane**
Prefer full control over your data and infrastructure? Install and run Plane on your own servers. Follow our detailed [deployment guides](https://developers.plane.so/self-hosting/overview) to get started.
| Installation methods | Docs link |
| -------------------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| Docker | [](https://developers.plane.so/self-hosting/methods/docker-compose) |
| Kubernetes | [](https://developers.plane.so/self-hosting/methods/kubernetes) |
`Instance admins` can configure instance settings with [God mode](https://developers.plane.so/self-hosting/govern/instance-admin).
## 🌟 Features
- **Work Items**
Efficiently create and manage tasks with a robust rich text editor that supports file uploads. Enhance organization and tracking by adding sub-properties and referencing related issues.
- **Cycles**
Maintain your team’s momentum with Cycles. Track progress effortlessly using burn-down charts and other insightful tools.
- **Modules**
Simplify complex projects by dividing them into smaller, manageable modules.
- **Views**
Customize your workflow by creating filters to display only the most relevant issues. Save and share these views with ease.
- **Pages**
Capture and organize ideas using Plane Pages, complete with AI capabilities and a rich text editor. Format text, insert images, add hyperlinks, or convert your notes into actionable items.
- **Analytics**
Access real-time insights across all your Plane data. Visualize trends, remove blockers, and keep your projects moving forward.
## 🛠️ Local development
See [CONTRIBUTING](./CONTRIBUTING.md)
## ⚙️ Built with
[](https://reactrouter.com/)
[](https://www.djangoproject.com/)
[](https://nodejs.org/en)
## 📸 Screenshots
<p>
  <a href="https://plane.so" target="_blank">
    <img
      src="https://media.docs.plane.so/GitHub-readme/github-work-items.webp"
      alt="Plane Work Items"
      width="100%"
    />
  </a>
</p>
<p>
  <a href="https://plane.so" target="_blank">
    <img
      src="https://media.docs.plane.so/GitHub-readme/github-cycles.webp"
      alt="Plane Cycles"
      width="100%"
    />
  </a>
</p>
<p>
  <a href="https://plane.so" target="_blank">
    <img
      src="https://media.docs.plane.so/GitHub-readme/github-modules.webp"
      alt="Plane Modules"
      width="100%"
    />
  </a>
</p>
<p>
  <a href="https://plane.so" target="_blank">
    <img
      src="https://media.docs.plane.so/GitHub-readme/github-views.webp"
      alt="Plane Views"
      width="100%"
    />
  </a>
</p>
<p>
  <a href="https://plane.so" target="_blank">
    <img
      src="https://media.docs.plane.so/GitHub-readme/github-analytics.webp"
      alt="Plane Analytics"
      width="100%"
    />
  </a>
</p>
## 📝 Documentation
Explore Plane's [product documentation](https://docs.plane.so/) and [developer documentation](https://developers.plane.so/) to learn about features, setup, and usage.
## ❤️ Community
Join the Plane community on [GitHub Discussions](https://github.com/orgs/makeplane/discussions) and our [Discord server](https://discord.com/invite/A92xrEGCge). We follow a [Code of conduct](https://github.com/makeplane/plane/blob/master/CODE_OF_CONDUCT.md) in all our community channels.
Feel free to ask questions, report bugs, participate in discussions, share ideas, request features, or showcase your projects. We’d love to hear from you!
## 🛡️ Security
If you discover a security vulnerability in Plane, please report it responsibly instead of opening a public issue. We take all legitimate reports seriously and will investigate them promptly. See [Security policy](https://github.com/makeplane/plane/blob/master/SECURITY.md) for more info.
To disclose any security issues, please email us at security@plane.so.
## 🤝 Contributing
There are many ways you can contribute to Plane:
- Report [bugs](https://github.com/makeplane/plane/issues/new?assignees=srinivaspendem%2Cpushya22&labels=%F0%9F%90%9Bbug&projects=&template=--bug-report.yaml&title=%5Bbug%5D%3A+) or submit [feature requests](https://github.com/makeplane/plane/issues/new?assignees=srinivaspendem%2Cpushya22&labels=%E2%9C%A8feature&projects=&template=--feature-request.yaml&title=%5Bfeature%5D%3A+).
- Review the [documentation](https://docs.plane.so/) and submit [pull requests](https://github.com/makeplane/docs) to improve it—whether it's fixing typos or adding new content.
- Talk or write about Plane or any other ecosystem integration and [let us know](https://discord.com/invite/A92xrEGCge)!
- Show your support by upvoting [popular feature requests](https://github.com/makeplane/plane/issues).
Please read [CONTRIBUTING.md](https://github.com/makeplane/plane/blob/master/CONTRIBUTING.md) for details on the process for submitting pull requests to us.
### Repo activity

### We couldn't have done this without you.
<a href="https://github.com/makeplane/plane/graphs/contributors">
<img src="https://contrib.rocks/image?repo=makeplane/plane" />
</a>
Plane-MCP:
https://github.com/makeplane/plane-mcp-server
# Plane MCP Server
A Model Context Protocol (MCP) server for Plane integration. This server provides tools and resources for interacting with Plane through AI agents.
## Features
* 🔧 **Plane Integration**: Interact with Plane APIs and services
* 🔌 **Multiple Transports**: Supports stdio, SSE, and streamable HTTP transports
* 🌐 **Remote & Local**: Works both locally and as a remote service
* 🛠️ **Extensible**: Easy to add new tools and resources
## Usage
The server supports three transport methods. **We recommend using `uvx`** as it doesn't require installation.
### 1. Stdio Transport (for local use)
**MCP Client Configuration** (using uvx - recommended):
```json
{
  "mcpServers": {
    "plane": {
      "command": "uvx",
      "args": ["plane-mcp-server", "stdio"],
      "env": {
        "PLANE_API_KEY": "<your-api-key>",
        "PLANE_WORKSPACE_SLUG": "<your-workspace-slug>",
        "PLANE_BASE_URL": "https://api.plane.so"
      }
    }
  }
}
```
### 2. Remote HTTP Transport with OAuth
Connect to the hosted Plane MCP server using OAuth authentication.
**URL**: `https://mcp.plane.so/http/mcp`
**MCP Client Configuration** (for tools like Claude Desktop without native remote MCP support):
```json
{
  "mcpServers": {
    "plane": {
      "command": "npx",
      "args": ["mcp-remote@latest", "https://mcp.plane.so/http/mcp"]
    }
  }
}
```
**Note**: OAuth authentication will be handled automatically when connecting to the remote server.
### 3. Remote HTTP Transport using PAT Token
Connect to the hosted Plane MCP server using a Personal Access Token (PAT).
**URL**: `https://mcp.plane.so/api-key/mcp`
**Headers**:
- `Authorization: Bearer <PAT_TOKEN>`
- `X-Workspace-slug: <SLUG>`
**MCP Client Configuration** (for tools like Claude Desktop without native remote MCP support):
```json
{
  "mcpServers": {
    "plane": {
      "command": "npx",
      "args": ["mcp-remote@latest", "https://mcp.plane.so/http/api-key/mcp"],
      "headers": {
        "Authorization": "Bearer <PAT_TOKEN>",
        "X-Workspace-slug": "<SLUG>"
      }
    }
  }
}
```
### 4. SSE Transport (Legacy)
⚠️ **Legacy Transport**: SSE (Server-Sent Events) transport is maintained for backward compatibility. New implementations should use the HTTP transport (sections 2 or 3) instead.
Connect to the hosted Plane MCP server using OAuth authentication via Server-Sent Events.
**URL**: `https://mcp.plane.so/sse`
**MCP Client Configuration** (for tools that support SSE transport):
```json
{
  "mcpServers": {
    "plane": {
      "command": "npx",
      "args": ["mcp-remote@latest", "https://mcp.plane.so/sse"]
    }
  }
}
```
**Note**: OAuth authentication will be handled automatically when connecting to the remote server. This transport is deprecated in favor of the HTTP transport.
## Configuration
### Authentication
The server requires authentication via environment variables:
- `PLANE_BASE_URL`: Base URL for the Plane API (optional; default: `https://api.plane.so`)
- `PLANE_API_KEY`: API key for authentication (required for stdio transport)
- `PLANE_WORKSPACE_SLUG`: Workspace slug identifier (required for stdio transport)
- `PLANE_ACCESS_TOKEN`: Access token for authentication (alternative to API key)
**Example** (for stdio transport):
```bash
export PLANE_BASE_URL="https://api.plane.so"
export PLANE_API_KEY="your-api-key"
export PLANE_WORKSPACE_SLUG="your-workspace-slug"
```
**Note**: For remote HTTP transports (OAuth or PAT), authentication is handled via the connection method (OAuth flow or PAT headers) and does not require these environment variables.
## Available Tools
The server provides comprehensive tools for interacting with Plane. All tools use Pydantic models from the Plane SDK for type safety and validation.
### Projects
| Tool Name | Description |
|-----------|-------------|
| `list_projects` | List all projects in a workspace with optional pagination and filtering |
| `create_project` | Create a new project with name, identifier, and optional configuration |
| `retrieve_project` | Retrieve a project by ID |
| `update_project` | Update a project with partial data |
| `delete_project` | Delete a project by ID |
| `get_project_worklog_summary` | Get work log summary for a project |
| `get_project_members` | Get all members of a project |
| `get_project_features` | Get features configuration of a project |
| `update_project_features` | Update features configuration of a project |
### Work Items
| Tool Name | Description |
|-----------|-------------|
| `list_work_items` | List all work items in a project with optional filtering and pagination |
| `create_work_item` | Create a new work item with name, assignees, labels, and other attributes |
| `retrieve_work_item` | Retrieve a work item by ID with optional field expansion |
| `retrieve_work_item_by_identifier` | Retrieve a work item by project identifier and issue sequence number |
| `update_work_item` | Update a work item with partial data |
| `delete_work_item` | Delete a work item by ID |
| `search_work_items` | Search work items across a workspace with query string |
### Cycles
| Tool Name | Description |
|-----------|-------------|
| `list_cycles` | List all cycles in a project |
| `create_cycle` | Create a new cycle with name, dates, and owner |
| `retrieve_cycle` | Retrieve a cycle by ID |
| `update_cycle` | Update a cycle with partial data |
| `delete_cycle` | Delete a cycle by ID |
| `list_archived_cycles` | List archived cycles in a project |
| `add_work_items_to_cycle` | Add work items to a cycle |
| `remove_work_item_from_cycle` | Remove a work item from a cycle |
| `list_cycle_work_items` | List work items in a cycle |
| `transfer_cycle_work_items` | Transfer work items from one cycle to another |
| `archive_cycle` | Archive a cycle |
| `unarchive_cycle` | Unarchive a cycle |
### Modules
| Tool Name | Description |
|-----------|-------------|
| `list_modules` | List all modules in a project |
| `create_module` | Create a new module with name, dates, status, and members |
| `retrieve_module` | Retrieve a module by ID |
| `update_module` | Update a module with partial data |
| `delete_module` | Delete a module by ID |
| `list_archived_modules` | List archived modules in a project |
| `add_work_items_to_module` | Add work items to a module |
| `remove_work_item_from_module` | Remove a work item from a module |
| `list_module_work_items` | List work items in a module |
| `archive_module` | Archive a module |
| `unarchive_module` | Unarchive a module |
### Initiatives
| Tool Name | Description |
|-----------|-------------|
| `list_initiatives` | List all initiatives in a workspace |
| `create_initiative` | Create a new initiative with name, dates, state, and lead |
| `retrieve_initiative` | Retrieve an initiative by ID |
| `update_initiative` | Update an initiative with partial data |
| `delete_initiative` | Delete an initiative by ID |
### Intake Work Items
| Tool Name | Description |
|-----------|-------------|
| `list_intake_work_items` | List all intake work items in a project with optional pagination |
| `create_intake_work_item` | Create a new intake work item in a project |
| `retrieve_intake_work_item` | Retrieve an intake work item by work item ID with optional field expansion |
| `update_intake_work_item` | Update an intake work item with partial data |
| `delete_intake_work_item` | Delete an intake work item by work item ID |
### Work Item Properties
| Tool Name | Description |
|-----------|-------------|
| `list_work_item_properties` | List work item properties for a work item type |
| `create_work_item_property` | Create a new work item property with type, settings, and validation rules |
| `retrieve_work_item_property` | Retrieve a work item property by ID |
| `update_work_item_property` | Update a work item property with partial data |
| `delete_work_item_property` | Delete a work item property by ID |
### Users
| Tool Name | Description |
|-----------|-------------|
| `get_me` | Get current authenticated user information |
**Total Tools**: 55+ tools across 8 categories
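Every tool above is invoked through the standard MCP `tools/call` JSON-RPC request. The helper below is a minimal, illustrative sketch of the payload an MCP client sends; the tool name and arguments are examples, not a guaranteed signature of this server:

```python
import json

def build_tool_call(tool_name: str, arguments: dict, request_id: int = 1) -> str:
    """Build a JSON-RPC 2.0 payload for the MCP tools/call method."""
    payload = {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool_name, "arguments": arguments},
    }
    return json.dumps(payload)

# Example: ask the server to list projects (arguments are illustrative).
request = build_tool_call("list_projects", {"per_page": 10})
print(request)
```

In practice your MCP client library constructs this for you; the sketch just shows what travels over the transport.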
## Development
### Running Tests
```bash
pytest
```
### Code Formatting
```bash
black plane_mcp/
ruff check plane_mcp/
```
## License
MIT License - see LICENSE for details.
## Contributing
Contributions are welcome! Please feel free to submit a Pull Request.
## Deprecation Notice
⚠️ **The Node.js-based `plane-mcp-server` is deprecated and no longer maintained.**
This repository represents the new Python+FastMCP based implementation of the Plane MCP server. If you were using the previous Node.js version, please migrate to this Python-based version for continued support and updates.
The new implementation offers:
- Better type safety with Pydantic models
- Improved performance with FastMCP
- Enhanced tool coverage
- Active maintenance and development
For migration assistance, please refer to the configuration examples in this README or open an issue for support.
**Old Node.js Configuration (Deprecated):**
If you were using the previous Node.js-based `@makeplane/plane-mcp-server`, your configuration looked like this:
```json
{
  "mcpServers": {
    "plane": {
      "command": "npx",
      "args": [
        "-y",
        "@makeplane/plane-mcp-server"
      ],
      "env": {
        "PLANE_API_KEY": "<YOUR_API_KEY>",
        "PLANE_API_HOST_URL": "<HOST_URL_FOR_SELF_HOSTED>",
        "PLANE_WORKSPACE_SLUG": "<YOUR_WORKSPACE_SLUG>"
      }
    }
  }
}
```
**Please migrate to the new Python-based configuration shown in the Usage section above.**
| text/markdown | null | Audel Rouhi <knucklessg1@gmail.com> | null | null | MIT | null | [
"Development Status :: 4 - Beta",
"License :: OSI Approved :: MIT License",
"Environment :: Console",
"Operating System :: POSIX :: Linux",
"Programming Language :: Python :: 3"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"tree-sitter>=0.23.2",
"requests>=2.28.1",
"httpx>=0.27.0",
"pydantic>=2.8.2",
"fastmcp>=3.0.0b1",
"eunomia-mcp>=0.3.10",
"pydantic-ai-slim[a2a,ag-ui,anthropic,fastmcp,google,groq,huggingface,mistral,openai,web]>=1.60.0",
"pydantic-ai-skills>=v0.4.0",
"fastapi>=0.128.0"
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.12.3 | 2026-02-19T06:34:31.216230 | plane_agent-0.1.5.tar.gz | 33,857 | 65/90/13b653f88af448b8a9553ed3ea12a8903e3bd04e113ff553777550e71253/plane_agent-0.1.5.tar.gz | source | sdist | null | false | f5d82ef2b12c9173d882b82b755b2f85 | 0e50e50192cece1fdf3f094115ce4d7b1c585fb3955688c7bf2e1a63e0150cd8 | 659013b653f88af448b8a9553ed3ea12a8903e3bd04e113ff553777550e71253 | null | [
"LICENSE",
"LICENSE copy"
] | 235 |
2.4 | jellyfin-mcp | 0.2.15 | Jellyfin MCP Server for Agentic AI! | # Jellyfin - A2A | AG-UI | MCP


















*Version: 0.2.15*
## Overview
**Jellyfin MCP Server + A2A Agent**
This repository implements a **Model Context Protocol (MCP)** server and an intelligent **Agent-to-Agent (A2A)** system for interacting with a **Jellyfin Media Server**.
It allows AI agents to manage your media library, control playback, query system status, and interact with connected devices using natural language.
This repository is actively maintained - Contributions are welcome!
### Capabilities:
- **Media Management**: Search and retrieve Movies, TV Shows, Music, and more.
- **System Control**: Check server status, configuration, and logs.
- **User & Session Management**: Manage users, view active sessions, and control playback.
- **Live TV**: Access channels, tuners, and guide information.
- **Device Control**: Interact with devices connected to the Jellyfin server.
## MCP
### MCP Tools
The system exposes a comprehensive set of tools, organized by domain. These can be used directly by an MCP client or orchestrated by the A2A Agent.
| Domain | Description | Key Tags |
|:---|:---|:---|
| **Media** | Managing content (Movies, TV, Music), libraries, and metadata. | `Library`, `Items`, `Movies`, `TvShows`, `Music` |
| **System** | Server configuration, logs, plugins, tasks, and system info. | `System`, `Configuration`, `ActivityLog`, `ScheduledTasks` |
| **User** | User supervision, session management, and playstate control. | `User`, `Session`, `Playstate`, `DisplayPreferences` |
| **LiveTV** | Managing Live TV channels, tuners, and recordings. | `LiveTv`, `Channels` |
| **Device** | Managing connected client devices and remote control. | `Devices`, `QuickConnect` |
### Using as an MCP Server
The MCP Server can be run in two modes: `stdio` (for local testing) or `http` (for networked access).
#### Environment Variables
The following environment variables are required to connect to your Jellyfin instance:
* `JELLYFIN_BASE_URL`: The URL of your Jellyfin server (e.g., `http://192.168.1.10:8096`).
* `JELLYFIN_TOKEN`: Your Jellyfin API Token.
* **OR**
* `JELLYFIN_USERNAME`: Your Jellyfin Username.
* `JELLYFIN_PASSWORD`: Your Jellyfin Password.
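The token-or-credentials rule can be checked up front before starting the server. The helper below is an illustrative sketch of that precedence (it is not part of the package):

```python
def resolve_jellyfin_auth(env: dict) -> dict:
    """Return connection settings, preferring an API token over username/password."""
    base_url = env.get("JELLYFIN_BASE_URL")
    if not base_url:
        raise ValueError("JELLYFIN_BASE_URL is required")
    if env.get("JELLYFIN_TOKEN"):
        return {"base_url": base_url, "token": env["JELLYFIN_TOKEN"]}
    if env.get("JELLYFIN_USERNAME") and env.get("JELLYFIN_PASSWORD"):
        return {
            "base_url": base_url,
            "username": env["JELLYFIN_USERNAME"],
            "password": env["JELLYFIN_PASSWORD"],
        }
    raise ValueError("Set JELLYFIN_TOKEN, or JELLYFIN_USERNAME and JELLYFIN_PASSWORD")

settings = resolve_jellyfin_auth(
    {"JELLYFIN_BASE_URL": "http://localhost:8096", "JELLYFIN_TOKEN": "abc123"}
)
print(settings["base_url"])
```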
#### Run in stdio mode (default):
```bash
export JELLYFIN_BASE_URL="http://localhost:8096"
export JELLYFIN_TOKEN="your_api_token"
jellyfin-mcp --transport "stdio"
```
#### Run in HTTP mode:
```bash
export JELLYFIN_BASE_URL="http://localhost:8096"
export JELLYFIN_TOKEN="your_api_token"
jellyfin-mcp --transport "http" --host "0.0.0.0" --port "8000"
```
## A2A Agent
This package includes a sophisticated **Supervisor Agent** that delegates tasks to specialized sub-agents based on the user's intent.
### Agent Architecture
* **Supervisor Agent**: The entry point. Analyzes the request and routes it to the correct specialist.
* **Media Agent**: Handles content queries ("Play Inception", "Find movies from 1999").
* **System Agent**: Handles server ops ("Restart the server", "Check logs").
* **User Agent**: Handles user data ("Create a new user", "What is Bob watching?").
* **LiveTV Agent**: Handles TV ("What's on channel 5?").
* **Device Agent**: Handles hardware ("Cast to Living Room TV").
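The routing idea can be illustrated with a toy keyword classifier. This is only a sketch of the delegation concept; the real supervisor uses an LLM to analyze intent, and the keyword lists here are invented for illustration:

```python
# Hypothetical keyword map: sub-agent name -> trigger words (illustrative only).
ROUTES = {
    "media": ["play", "movie", "find", "music"],
    "system": ["restart", "logs", "server"],
    "user": ["user", "watching"],
    "livetv": ["channel", "guide"],
    "device": ["cast", "device"],
}

def route(query: str) -> str:
    """Pick the sub-agent whose keywords match the query; default to media."""
    words = query.lower()
    for agent, keywords in ROUTES.items():
        if any(k in words for k in keywords):
            return agent
    return "media"

print(route("Restart the server"))  # system
```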
```mermaid
---
config:
layout: dagre
---
flowchart TB
subgraph subGraph0["Agent Capabilities"]
C["Supervisor Agent"]
B["A2A Server - Uvicorn/FastAPI"]
D["Sub-Agents"]
F["MCP Tools"]
end
C --> D
D --> F
A["User Query"] --> B
B --> C
F --> E["Jellyfin API"]
C:::agent
D:::agent
B:::server
A:::server
classDef server fill:#f9f,stroke:#333
classDef agent fill:#bbf,stroke:#333,stroke-width:2px
style B stroke:#000000,fill:#FFD600
style F stroke:#000000,fill:#BBDEFB
style A fill:#C8E6C9
style subGraph0 fill:#FFF9C4
```
### Component Interaction Diagram
```mermaid
sequenceDiagram
participant User
participant Server as A2A Server
participant Supervisor as Supervisor Agent
participant MediaAgent as Media Agent
participant MCP as MCP Tools
User->>Server: "Play the movie Inception"
Server->>Supervisor: Invoke Agent
Supervisor->>Supervisor: Analyze Intent (Media)
Supervisor->>MediaAgent: Delegate Task
MediaAgent->>MCP: get_items(search_term="Inception")
MCP-->>MediaAgent: Item Details
MediaAgent-->>Supervisor: Found "Inception", triggering playback...
Supervisor-->>Server: Final Response
Server-->>User: Output
```
## Usage
### MCP CLI
| Short Flag | Long Flag | Description |
|------------|------------------------------------|-----------------------------------------------------------------------------|
| -h | --help | Display help information |
| -t | --transport | Transport method: 'stdio', 'http', or 'sse' [legacy] (default: stdio) |
| -s | --host | Host address for HTTP transport (default: 0.0.0.0) |
| -p | --port | Port number for HTTP transport (default: 8000) |
| | --auth-type | Authentication type: 'none', 'static', 'jwt', 'oauth-proxy', 'oidc-proxy', 'remote-oauth' (default: none) |
| | --token-jwks-uri | JWKS URI for JWT verification |
| | --token-issuer | Issuer for JWT verification |
| | --token-audience | Audience for JWT verification |
| | --oauth-upstream-auth-endpoint | Upstream authorization endpoint for OAuth Proxy |
| | --oauth-upstream-token-endpoint | Upstream token endpoint for OAuth Proxy |
| | --oauth-upstream-client-id | Upstream client ID for OAuth Proxy |
| | --oauth-upstream-client-secret | Upstream client secret for OAuth Proxy |
| | --oauth-base-url | Base URL for OAuth Proxy |
| | --oidc-config-url | OIDC configuration URL |
| | --oidc-client-id | OIDC client ID |
| | --oidc-client-secret | OIDC client secret |
| | --oidc-base-url | Base URL for OIDC Proxy |
| | --remote-auth-servers | Comma-separated list of authorization servers for Remote OAuth |
| | --remote-base-url | Base URL for Remote OAuth |
| | --allowed-client-redirect-uris | Comma-separated list of allowed client redirect URIs |
| | --eunomia-type | Eunomia authorization type: 'none', 'embedded', 'remote' (default: none) |
| | --eunomia-policy-file | Policy file for embedded Eunomia (default: mcp_policies.json) |
| | --eunomia-remote-url | URL for remote Eunomia server |
### A2A CLI
#### Endpoints
- **Web UI**: `http://localhost:8000/` (if enabled)
- **A2A**: `http://localhost:8000/a2a` (Discovery: `/a2a/.well-known/agent.json`)
- **AG-UI**: `http://localhost:8000/ag-ui` (POST)
| Short Flag | Long Flag | Description |
|------------|-----------|-------------|
| -h | --help | Display help information |
| | --host | Host to bind the server to (default: 0.0.0.0) |
| | --port | Port to bind the server to (default: 9001) |
| | --provider | LLM Provider: 'openai', 'anthropic', 'google', 'huggingface' |
| | --model-id | LLM Model ID |
| | --mcp-config | Path to MCP config file |
### Examples
#### Run A2A Server
```bash
export JELLYFIN_BASE_URL="http://localhost:8096"
export JELLYFIN_TOKEN="your_token"
jellyfin-agent --provider openai --model-id gpt-4o --api-key sk-...
```
## Docker
### Build
```bash
docker build -t jellyfin-mcp .
```
### Run MCP Server
```bash
docker run -d \
--name jellyfin-mcp \
-p 8000:8000 \
-e TRANSPORT=http \
-e JELLYFIN_BASE_URL="http://192.168.1.10:8096" \
-e JELLYFIN_TOKEN="your_token" \
knucklessg1/jellyfin-mcp:latest
```
### Deploy with Docker Compose
Create a `docker-compose.yml` file:
```yaml
services:
  jellyfin-mcp:
    image: knucklessg1/jellyfin-mcp:latest
    environment:
      - HOST=0.0.0.0
      - PORT=8000
      - TRANSPORT=http
      - JELLYFIN_BASE_URL=http://your-jellyfin-ip:8096
      - JELLYFIN_TOKEN=your_api_token
    ports:
      - 8000:8000
```
#### Configure `mcp.json` for AI Integration (e.g. Claude Desktop)
```json
{
  "mcpServers": {
    "jellyfin": {
      "command": "uv",
      "args": [
        "run",
        "--with",
        "jellyfin-mcp",
        "jellyfin-mcp"
      ],
      "env": {
        "JELLYFIN_BASE_URL": "http://your-jellyfin-ip:8096",
        "JELLYFIN_TOKEN": "your_api_token"
      }
    }
  }
}
```
## Install Python Package
```bash
python -m pip install jellyfin-mcp
```
```bash
uv pip install jellyfin-mcp
```
## Repository Owners
<img width="100%" height="180em" src="https://github-readme-stats.vercel.app/api?username=Knucklessg1&show_icons=true&hide_border=true&&count_private=true&include_all_commits=true" />


| text/markdown | null | Audel Rouhi <knucklessg1@gmail.com> | null | null | MIT | null | [
"Development Status :: 4 - Beta",
"License :: OSI Approved :: MIT License",
"Environment :: Console",
"Operating System :: POSIX :: Linux",
"Programming Language :: Python :: 3"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"tree-sitter>=0.23.2",
"requests>=2.8.1",
"urllib3>=2.2.2",
"fastmcp>=3.0.0b1",
"eunomia-mcp>=0.3.10",
"fastapi>=0.128.0",
"pydantic-ai-slim[a2a,ag-ui,anthropic,fastmcp,google,groq,huggingface,mistral,openai,web]>=1.60.0; extra == \"a2a\"",
"pydantic-ai-skills>=v0.4.0; extra == \"a2a\"",
"fastapi>=0... | [] | [] | [] | [] | twine/6.2.0 CPython/3.12.3 | 2026-02-19T06:34:29.499776 | jellyfin_mcp-0.2.15.tar.gz | 114,187 | 73/f4/3e80605ab4b2a372aa10a9876660641e86377ab0ffd7825091bac4af465e/jellyfin_mcp-0.2.15.tar.gz | source | sdist | null | false | fd8f71491a2963fabe67035d2c36e79f | a0102d4c884e388c8e15a4710f05d86d256b3f1d645a57852d78663b5889644e | 73f43e80605ab4b2a372aa10a9876660641e86377ab0ffd7825091bac4af465e | null | [
"LICENSE"
] | 252 |
2.4 | fixshell | 0.1.1 | Hybrid, LLM-Centered, System-Aware FixShell | # FixShell AI (System-Aware Diagnostic Engine)
Refining Linux command-line error recovery with a hybrid, LLM-centered approach. FixShell intercepts failed commands, collects system context, and uses a local LLM (Ollama) to diagnose and suggest fixes.
## 🚀 Features
- **Context-Aware Diagnosis**: Analyzes command output, exit codes, and system state (distro, package manager, SELinux).
- **Hybrid Intelligence**: Combines deterministic rules (for instant fixes) with LLM reasoning (for complex errors).
- **Safety First**: Filters dangerous commands and utilizes a strict safety layer to prevent system damage.
- **Local Privacy**: Runs entirely on your machine using Ollama; no data leaves your system.
## 📦 Installation
```bash
pip install fixshell
```
## 🛠️ Prerequisites (Ollama)
FixShell requires **Ollama** to be installed and running locally to perform AI analysis.
1. **Install Ollama**: [Download from ollama.com](https://ollama.com)
2. **Pull the Model**:
```bash
ollama pull qwen2.5:3b
```
3. **Start Ollama Server**:
Ensure the Ollama server is running (`ollama serve`).
## 💻 Usage
Prepend `fixshell --` to any command you want to run safely or debug.
```bash
fixshell -- <your_command>
```
**Examples:**
```bash
# Debug a failed docker command
fixshell -- docker run hello-world
# Analyze a missing package
fixshell -- python3 script.py
# Investigate permission errors
fixshell -- cat /etc/shadow
```
## ⚠️ Important Warning
FixShell uses a Large Language Model (LLM) for diagnosis. While we have implemented safety filters:
- **Always review suggested commands before execution.**
- **Do not run suggested commands blindly.**
- **The AI can make mistakes.**
## 🏗️ Architecture
- **Executor**: Runs your command and captures output.
- **Context Collector**: Gathers system metadata (OS, shell, history).
- **Log Filtering**: Extracts relevant error lines to reduce noise.
- **Safety Layer**: Scans for dangerous patterns (e.g., `rm -rf /`, `chmod 777`).
- **LLM Integration**: Queries the local Ollama instance for a diagnosis.
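The log-filtering and safety-layer stages can be sketched in a few lines. This is a simplified illustration of the idea, not FixShell's actual implementation; the marker strings and patterns are an invented subset:

```python
import re

# Illustrative subset of destructive-command patterns.
DANGEROUS_PATTERNS = [r"rm\s+-rf\s+/", r"chmod\s+777"]

def filter_error_lines(output: str, max_lines: int = 20) -> list[str]:
    """Keep only lines that look like errors, shrinking the LLM prompt."""
    markers = ("error", "traceback", "permission denied", "not found")
    lines = [l for l in output.splitlines() if any(m in l.lower() for m in markers)]
    return lines[:max_lines]

def is_dangerous(command: str) -> bool:
    """Reject suggested fixes matching known-destructive patterns."""
    return any(re.search(p, command) for p in DANGEROUS_PATTERNS)

log = "Starting...\nE: Permission denied\nDone."
print(filter_error_lines(log))        # ['E: Permission denied']
print(is_dangerous("sudo rm -rf /"))  # True
```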
## 📄 License
MIT License
| text/markdown | Thilak Divyadharshan | tdd@example.com | null | null | null | null | [
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: 3.14"
] | [] | null | null | <4.0,>=3.10 | [] | [] | [] | [
"click<9.0.0,>=8.1.0",
"ollama<0.5.0,>=0.4.0",
"psutil<6.0.0,>=5.9.0"
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.13.11 | 2026-02-19T06:34:25.496511 | fixshell-0.1.1.tar.gz | 13,446 | 32/fa/bb1cca9fecdb06070ea99253f1894ae0803862db1faffdda6068714d4cb8/fixshell-0.1.1.tar.gz | source | sdist | null | false | 26aee412e749bc1b47b573dbe336363c | ce21ca606d112981f3f24210f75dae33b6fc3c7d446e48f67df82869feb7b1d0 | 32fabb1cca9fecdb06070ea99253f1894ae0803862db1faffdda6068714d4cb8 | null | [
"LICENSE"
] | 254 |
2.4 | documentdb-mcp | 0.1.14 | DocumentDB MCP Server & A2A Server. DocumentDB is a MongoDB compatible open source document database built on PostgreSQL. | # DocumentDB - A2A | AG-UI | MCP


















*Version: 0.1.14*
## Overview
DocumentDB + MCP Server + A2A
A [FastMCP](https://github.com/jlowin/fastmcp) server and A2A (Agent-to-Agent) agent for [DocumentDB](https://documentdb.io/).
DocumentDB is a MongoDB-compatible open source document database built on PostgreSQL.
This package provides:
1. **MCP Server**: Exposes DocumentDB functionality (CRUD, Administration) as tools for LLMs.
2. **A2A Agent**: A specialized agent that uses these tools to help users manage their database.
### Features
- **CRUD Operations**: Insert, Find, Update, Replace, Delete, Count, Distinct, Aggregate.
- **Collection Management**: Create, Drop, List, Rename collections.
- **User Management**: Create, Update, Drop users.
- **Direct Commands**: Run raw database commands.
## MCP
### MCP Tools
| Function Name | Description | Tag(s) |
|:-----------------------|:------------------------------------------------------------------------------------------------|:--------------|
| `binary_version` | Get the binary version of the server (using buildInfo). | `system` |
| `list_databases` | List all databases in the connected DocumentDB/MongoDB instance. | `system` |
| `run_command` | Run a raw command against the database. | `system` |
| `list_collections` | List all collections in a specific database. | `collections` |
| `create_collection` | Create a new collection in the specified database. | `collections` |
| `drop_collection` | Drop a collection from the specified database. | `collections` |
| `create_database` | Explicitly create a database by creating a collection in it (MongoDB creates DBs lazily). | `collections` |
| `drop_database` | Drop a database. | `collections` |
| `rename_collection` | Rename a collection. | `collections` |
| `create_user` | Create a new user on the specified database. | `users` |
| `drop_user` | Drop a user from the specified database. | `users` |
| `update_user` | Update a user's password or roles. | `users` |
| `users_info` | Get information about a user. | `users` |
| `insert_one` | Insert a single document into a collection. | `crud` |
| `insert_many` | Insert multiple documents into a collection. | `crud` |
| `find_one` | Find a single document matching the filter. | `crud` |
| `find` | Find documents matching the filter. | `crud` |
| `replace_one` | Replace a single document matching the filter. | `crud` |
| `update_one` | Update a single document matching the filter. 'update' must contain update operators like $set. | `crud` |
| `update_many` | Update multiple documents matching the filter. | `crud` |
| `delete_one` | Delete a single document matching the filter. | `crud` |
| `delete_many` | Delete multiple documents matching the filter. | `crud` |
| `count_documents` | Count documents matching the filter. | `crud` |
| `distinct` | Find distinct values for a key. | `analysis` |
| `aggregate` | Run an aggregation pipeline. | `analysis` |
| `find_one_and_update` | Finds a single document and updates it. return_document: 'before' or 'after'. | `crud` |
| `find_one_and_replace` | Finds a single document and replaces it. return_document: 'before' or 'after'. | `crud` |
| `find_one_and_delete` | Finds a single document and deletes it. | `crud` |
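As the `update_one` description notes, the update document must use MongoDB update operators such as `$set`. A quick client-side check of that rule (an illustrative helper, not part of this package):

```python
def validate_update_document(update: dict) -> dict:
    """Ensure every top-level key is a MongoDB update operator (starts with '$')."""
    bad = [k for k in update if not k.startswith("$")]
    if bad:
        raise ValueError(f"Not update operators: {bad}; wrap fields in $set, $inc, etc.")
    return update

# Valid: updates a field via $set.
validate_update_document({"$set": {"status": "active"}})

# Invalid: a bare field name raises ValueError.
try:
    validate_update_document({"status": "active"})
except ValueError as exc:
    print(exc)
```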
## A2A Agent
### Architecture:
```mermaid
---
config:
layout: dagre
---
flowchart TB
subgraph subGraph0["Agent Capabilities"]
C["Agent"]
B["A2A Server - Uvicorn/FastAPI"]
D["MCP Tools"]
F["Agent Skills"]
end
C --> D & F
A["User Query"] --> B
B --> C
D --> E["Platform API"]
C:::agent
B:::server
A:::server
classDef server fill:#f9f,stroke:#333
classDef agent fill:#bbf,stroke:#333,stroke-width:2px
style B stroke:#000000,fill:#FFD600
style D stroke:#000000,fill:#BBDEFB
style F fill:#BBDEFB
style A fill:#C8E6C9
style subGraph0 fill:#FFF9C4
```
### Component Interaction Diagram
```mermaid
sequenceDiagram
participant User
participant Server as A2A Server
participant Agent as Agent
participant Skill as Agent Skills
participant MCP as MCP Tools
User->>Server: Send Query
Server->>Agent: Invoke Agent
Agent->>Skill: Analyze Skills Available
Skill->>Agent: Provide Guidance on Next Steps
Agent->>MCP: Invoke Tool
MCP-->>Agent: Tool Response Returned
Agent-->>Agent: Return Results Summarized
Agent-->>Server: Final Response
Server-->>User: Output
```
## Usage
### MCP CLI
| Short Flag | Long Flag | Description |
|------------|---------------------------------|-----------------------------------------------------------------------------------------------------------|
| -h | --help | Display help information |
| -t | --transport | Transport method: 'stdio', 'http', or 'sse' [legacy] (default: stdio) |
| -s | --host | Host address for HTTP transport (default: 0.0.0.0) |
| -p | --port | Port number for HTTP transport (default: 8000) |
| | --auth-type | Authentication type: 'none', 'static', 'jwt', 'oauth-proxy', 'oidc-proxy', 'remote-oauth' (default: none) |
| | --token-jwks-uri | JWKS URI for JWT verification |
| | --token-issuer | Issuer for JWT verification |
| | --token-audience | Audience for JWT verification |
| | --token-algorithm | JWT signing algorithm (e.g., HS256, RS256). Required for HMAC or static keys. Auto-detected for JWKS. |
| | --token-secret | Shared secret for HMAC (HS*) verification. Used with --token-algorithm. |
| | --token-public-key | Path to PEM public key file or inline PEM string for static asymmetric verification. |
| | --required-scopes | Comma-separated required scopes (e.g., documentdb.read,documentdb.write). Enforced by JWTVerifier. |
| | --oauth-upstream-auth-endpoint | Upstream authorization endpoint for OAuth Proxy |
| | --oauth-upstream-token-endpoint | Upstream token endpoint for OAuth Proxy |
| | --oauth-upstream-client-id | Upstream client ID for OAuth Proxy |
| | --oauth-upstream-client-secret | Upstream client secret for OAuth Proxy |
| | --oauth-base-url | Base URL for OAuth Proxy |
| | --oidc-config-url | OIDC configuration URL |
| | --oidc-client-id | OIDC client ID |
| | --oidc-client-secret | OIDC client secret |
| | --oidc-base-url | Base URL for OIDC Proxy |
| | --remote-auth-servers | Comma-separated list of authorization servers for Remote OAuth |
| | --remote-base-url | Base URL for Remote OAuth |
| | --allowed-client-redirect-uris | Comma-separated list of allowed client redirect URIs |
| | --eunomia-type | Eunomia authorization type: 'none', 'embedded', 'remote' (default: none) |
| | --eunomia-policy-file | Policy file for embedded Eunomia (default: mcp_policies.json) |
| | --eunomia-remote-url | URL for remote Eunomia server |
| | --enable-delegation | Enable OIDC token delegation to (default: False) |
| | --audience | Audience for the delegated token |
| | --delegated-scopes | Scopes for the delegated token (space-separated) |
| | --openapi-file | Path to OpenAPI JSON spec to import tools/resources from |
| | --openapi-base-url | Base URL for the OpenAPI client (defaults to instance URL) |
### A2A CLI
#### Endpoints
- **Web UI**: `http://localhost:8000/` (if enabled)
- **A2A**: `http://localhost:8000/a2a` (Discovery: `/a2a/.well-known/agent.json`)
- **AG-UI**: `http://localhost:8000/ag-ui` (POST)
| Short Flag | Long Flag | Description |
|------------|-------------------|------------------------------------------------------------------------|
| -h | --help | Display help information |
| | --host | Host to bind the server to (default: 0.0.0.0) |
| | --port | Port to bind the server to (default: 9000) |
| | --reload | Enable auto-reload |
| | --provider | LLM Provider: 'openai', 'anthropic', 'google', 'huggingface' |
| | --model-id | LLM Model ID (default: qwen/qwen3-coder-next) |
| | --base-url | LLM Base URL (for OpenAI compatible providers) |
| | --api-key | LLM API Key |
| | --mcp-url | MCP Server URL (default: http://localhost:8000/mcp) |
| | --web | Enable Pydantic AI Web UI (default: False; env: ENABLE_WEB_UI) |
## Quick Start
### 1. DocumentDB MCP Server
The MCP server connects to your DocumentDB (or MongoDB) instance.
**Environment Variables:**
- `MONGODB_URI`: Connection string (e.g., `mongodb://localhost:27017/`).
- Alternatively: `MONGODB_HOST` (default: `localhost`) and `MONGODB_PORT` (default: `27017`).
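The documented precedence between `MONGODB_URI` and the host/port fallback can be sketched as follows (an illustrative helper, not the package's code):

```python
def resolve_mongodb_uri(env: dict) -> str:
    """Prefer MONGODB_URI; otherwise build one from MONGODB_HOST/MONGODB_PORT."""
    if env.get("MONGODB_URI"):
        return env["MONGODB_URI"]
    host = env.get("MONGODB_HOST", "localhost")
    port = env.get("MONGODB_PORT", "27017")
    return f"mongodb://{host}:{port}/"

print(resolve_mongodb_uri({}))  # mongodb://localhost:27017/
print(resolve_mongodb_uri({"MONGODB_HOST": "db.internal", "MONGODB_PORT": "27018"}))
```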
**Running the Server:**
```bash
# Stdio mode (default)
documentdb-mcp
# HTTP mode
documentdb-mcp --transport http --port 8000
```
### 2. DocumentDB A2A Agent
The A2A agent connects to the MCP server to perform tasks.
**Environment Variables:**
- `LLM_API_KEY`: API key for your chosen LLM provider.
- `LLM_BASE_URL`: (Optional) Base URL for OpenAI-compatible providers (e.g. Ollama).
**Running the Agent:**
```bash
# Start Agent Server (Default: OpenAI/Ollama)
documentdb-agent
# Custom Configuration
documentdb-agent --provider anthropic --model-id claude-3-5-sonnet-20240620 --mcp-url http://localhost:8000/mcp
```
## Installation
```bash
pip install documentdb-mcp
```
## Development
```bash
# Install dependencies
pip install -e ".[dev]"
# Build the package
python -m build
```
## Repository Owners
<img width="100%" height="180em" src="https://github-readme-stats.vercel.app/api?username=Knucklessg1&show_icons=true&hide_border=true&&count_private=true&include_all_commits=true" alt=""/>


| text/markdown | null | Audel Rouhi <knucklessg1@gmail.com> | null | null | MIT | null | [
"Development Status :: 5 - Production/Stable",
"License :: Public Domain",
"Environment :: Console",
"Operating System :: POSIX :: Linux",
"Programming Language :: Python :: 3"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"tree-sitter>=0.23.2",
"fastmcp>=3.0.0b1",
"eunomia-mcp>=0.3.10",
"pymongo>=4.0",
"pydantic-ai-slim[a2a,ag-ui,anthropic,fastmcp,google,groq,huggingface,mistral,openai,web]>=1.60.0; extra == \"a2a\"",
"pydantic-ai-skills>=v0.4.0; extra == \"a2a\"",
"fastapi>=0.128.0; extra == \"a2a\"",
"fastmcp>=3.0.0b... | [] | [] | [] | [] | twine/6.2.0 CPython/3.12.3 | 2026-02-19T06:34:14.896422 | documentdb_mcp-0.1.14.tar.gz | 32,402 | 75/cb/fa63e8f5559a4652d68adee3c2b508d96f0d6717d9703f9047c3d2a4cd9e/documentdb_mcp-0.1.14.tar.gz | source | sdist | null | false | a7ddf9691793b8e755e1d94714f14eb7 | bdc7ff7bc792bc3ed58862773e011ad162b1aa32071c289fa8c11a157d9601e8 | 75cbfa63e8f5559a4652d68adee3c2b508d96f0d6717d9703f9047c3d2a4cd9e | null | [
"LICENSE"
] | 257 |
2.4 | vector-mcp | 1.1.19 | Integrate RAG into AI Agents via MCP Server. Supports multiple Vector database technologies. | # Vector Database - A2A | AG-UI | MCP


















*Version: 1.1.19*
## Overview
This is an MCP server implementation that provides a standardized
collection management system across vector database technologies.
It was heavily inspired by the RAG implementation in Microsoft's Autogen V1
framework, reworked here as an MCP server model.
AI Agents can:
- Hybrid search for document information (lexical/vector)
- Create collections with documents stored on the local filesystem or URLs
- Add documents to a collection
- Utilize collection for retrieval augmented generation (RAG)
- Delete collection
Supports:
- ChromaDB
- PGVector
- Couchbase
- Qdrant
- MongoDB
This repository is actively maintained - Contributions and bug reports are welcome!
Automated tests are planned.
## MCP
### MCP Tools
| Function Name | Description | Tag(s) |
|:--------------------|:-----------------------------------------------------------------------------------------------------------------------------------|:------------------------|
| `create_collection` | Creates a new collection or retrieves an existing one in the vector database. | `collection_management` |
| `semantic_search` | Retrieves and gathers related knowledge from the vector database instance using the question variable. | `semantic_search` |
| `add_documents` | Adds documents to an existing collection in the vector database. This can be used to extend collections with additional documents. | `collection_management` |
| `delete_collection` | Deletes a collection from the vector database. | `collection_management` |
| `list_collections` | Lists all collections in the vector database. | `collection_management` |
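To illustrate what "hybrid" means for `semantic_search`, here is a minimal pure-Python sketch that blends a lexical overlap score with a cosine similarity score. This is illustrative only, not the package's actual ranking code; the weighting parameter `alpha` is an assumption.

```python
import math

def cosine(a, b):
    # Cosine similarity between two embedding vectors
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def hybrid_score(query_terms, doc_terms, query_vec, doc_vec, alpha=0.5):
    # Lexical part: fraction of query terms present in the document
    lexical = len(set(query_terms) & set(doc_terms)) / len(set(query_terms))
    # Vector part: cosine similarity of the embeddings
    vector = cosine(query_vec, doc_vec)
    return alpha * lexical + (1 - alpha) * vector

score = hybrid_score(["zapdos", "electric"], ["zapdos", "pokemon"],
                     [1.0, 0.0], [1.0, 0.0])
print(score)  # 0.75: half the query terms match, vectors are identical
```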
## A2A Agent
### Architecture:
```mermaid
---
config:
layout: dagre
---
flowchart TB
subgraph subGraph0["Agent Capabilities"]
C["Agent"]
B["A2A Server - Uvicorn/FastAPI"]
D["MCP Tools"]
F["Agent Skills"]
end
C --> D & F
A["User Query"] --> B
B --> C
D --> E["Platform API"]
C:::agent
B:::server
A:::server
classDef server fill:#f9f,stroke:#333
classDef agent fill:#bbf,stroke:#333,stroke-width:2px
style B stroke:#000000,fill:#FFD600
style D stroke:#000000,fill:#BBDEFB
style F fill:#BBDEFB
style A fill:#C8E6C9
style subGraph0 fill:#FFF9C4
```
### Component Interaction Diagram
```mermaid
sequenceDiagram
participant User
participant Server as A2A Server
participant Agent as Agent
participant Skill as Agent Skills
participant MCP as MCP Tools
User->>Server: Send Query
Server->>Agent: Invoke Agent
Agent->>Skill: Analyze Skills Available
Skill->>Agent: Provide Guidance on Next Steps
Agent->>MCP: Invoke Tool
MCP-->>Agent: Tool Response Returned
Agent-->>Agent: Return Results Summarized
Agent-->>Server: Final Response
Server-->>User: Output
```
## Usage
### MCP CLI
| Short Flag | Long Flag | Description |
|------------|------------------------------------|-----------------------------------------------------------------------------|
| -h | --help | Display help information |
| -t | --transport | Transport method: 'stdio', 'http', or 'sse' [legacy] (default: stdio) |
| -s | --host | Host address for HTTP transport (default: 0.0.0.0) |
| -p | --port | Port number for HTTP transport (default: 8000) |
| | --auth-type | Authentication type: 'none', 'static', 'jwt', 'oauth-proxy', 'oidc-proxy', 'remote-oauth' (default: none) |
| | --token-jwks-uri | JWKS URI for JWT verification |
| | --token-issuer | Issuer for JWT verification |
| | --token-audience | Audience for JWT verification |
| | --oauth-upstream-auth-endpoint | Upstream authorization endpoint for OAuth Proxy |
| | --oauth-upstream-token-endpoint | Upstream token endpoint for OAuth Proxy |
| | --oauth-upstream-client-id | Upstream client ID for OAuth Proxy |
| | --oauth-upstream-client-secret | Upstream client secret for OAuth Proxy |
| | --oauth-base-url | Base URL for OAuth Proxy |
| | --oidc-config-url | OIDC configuration URL |
| | --oidc-client-id | OIDC client ID |
| | --oidc-client-secret | OIDC client secret |
| | --oidc-base-url | Base URL for OIDC Proxy |
| | --remote-auth-servers | Comma-separated list of authorization servers for Remote OAuth |
| | --remote-base-url | Base URL for Remote OAuth |
| | --allowed-client-redirect-uris | Comma-separated list of allowed client redirect URIs |
| | --eunomia-type | Eunomia authorization type: 'none', 'embedded', 'remote' (default: none) |
| | --eunomia-policy-file | Policy file for embedded Eunomia (default: mcp_policies.json) |
| | --eunomia-remote-url | URL for remote Eunomia server |
### Using as an MCP Server
The MCP Server can be run in two modes: `stdio` (for local testing) or `http` (for networked access). To start the server, use the following commands:
#### Run in stdio mode (default):
```bash
vector-mcp --transport "stdio"
```
#### Run in HTTP mode:
```bash
vector-mcp --transport "http" --host "0.0.0.0" --port "8000"
```
### Creating Collection
AI Prompt:
```text
Create a collection called zapdos with the documents in this directory:
/home/user/Documents/Chroma
```
AI Response:
```text
The collection named "zapdos" has been successfully created or retrieved from the vector database
using the documents in the directory /home/user/Documents/Chroma.
Let me know if you'd like to perform any further actions, such as querying the collection or adding more documents!
```
### Retrieving from Collection
AI Prompt:
```text
Search the zapdos collection: what topics do its documents cover?
```
AI Response:
```text
Here are the most relevant passages retrieved from the "zapdos" collection: ...
Let me know if you'd like to refine the query or add more documents!
```
### Deleting Collection
AI Prompt:
```text
Delete the collection called memory
```
AI Response:
```text
The collection named "memory" has been successfully deleted.
Let me know if you'd like to create a new collection or perform any other actions!
```
### A2A CLI
#### Endpoints
- **Web UI**: `http://localhost:8000/` (if enabled)
- **A2A**: `http://localhost:8000/a2a` (Discovery: `/a2a/.well-known/agent.json`)
- **AG-UI**: `http://localhost:8000/ag-ui` (POST)
| Short Flag | Long Flag | Description |
|------------|-------------------|------------------------------------------------------------------------|
| -h | --help | Display help information |
| | --host | Host to bind the server to (default: 0.0.0.0) |
| | --port | Port to bind the server to (default: 9000) |
| | --reload | Enable auto-reload |
| | --provider | LLM Provider: 'openai', 'anthropic', 'google', 'huggingface' |
| | --model-id | LLM Model ID (default: qwen3:4b) |
| | --base-url | LLM Base URL (for OpenAI compatible providers) |
| | --api-key | LLM API Key |
| | --mcp-url | MCP Server URL (default: http://localhost:8000/mcp) |
| | --web | Enable Pydantic AI Web UI (default: False; Env: ENABLE_WEB_UI) |
### Deploy MCP Server as a Service
The MCP server can be deployed using Docker, with configurable authentication, middleware, and Eunomia authorization.
#### Using Docker Run
```bash
docker pull knucklessg1/vector-mcp:latest
docker run -d \
--name vector-mcp \
-p 8004:8004 \
-e HOST=0.0.0.0 \
-e PORT=8004 \
-e TRANSPORT=http \
-e AUTH_TYPE=none \
-e EUNOMIA_TYPE=none \
knucklessg1/vector-mcp:latest
```
For advanced authentication (e.g., JWT, OAuth Proxy, OIDC Proxy, Remote OAuth) or Eunomia, add the relevant environment variables:
```bash
docker run -d \
--name vector-mcp \
-p 8004:8004 \
-e HOST=0.0.0.0 \
-e PORT=8004 \
-e TRANSPORT=http \
-e AUTH_TYPE=oidc-proxy \
-e OIDC_CONFIG_URL=https://provider.com/.well-known/openid-configuration \
-e OIDC_CLIENT_ID=your-client-id \
-e OIDC_CLIENT_SECRET=your-client-secret \
-e OIDC_BASE_URL=https://your-server.com \
-e ALLOWED_CLIENT_REDIRECT_URIS=http://localhost:*,https://*.example.com/* \
-e EUNOMIA_TYPE=embedded \
-e EUNOMIA_POLICY_FILE=/app/mcp_policies.json \
knucklessg1/vector-mcp:latest
```
#### Using Docker Compose
Create a `docker-compose.yml` file:
```yaml
services:
vector-mcp:
image: knucklessg1/vector-mcp:latest
environment:
- HOST=0.0.0.0
- PORT=8004
- TRANSPORT=http
- AUTH_TYPE=none
- EUNOMIA_TYPE=none
ports:
- 8004:8004
```
For advanced setups with authentication and Eunomia:
```yaml
services:
vector-mcp:
image: knucklessg1/vector-mcp:latest
environment:
- HOST=0.0.0.0
- PORT=8004
- TRANSPORT=http
- AUTH_TYPE=oidc-proxy
- OIDC_CONFIG_URL=https://provider.com/.well-known/openid-configuration
- OIDC_CLIENT_ID=your-client-id
- OIDC_CLIENT_SECRET=your-client-secret
- OIDC_BASE_URL=https://your-server.com
- ALLOWED_CLIENT_REDIRECT_URIS=http://localhost:*,https://*.example.com/*
- EUNOMIA_TYPE=embedded
- EUNOMIA_POLICY_FILE=/app/mcp_policies.json
ports:
- 8004:8004
volumes:
- ./mcp_policies.json:/app/mcp_policies.json
```
Run the service:
```bash
docker-compose up -d
```
#### Configure `mcp.json` for AI Integration
The `DATABASE_TYPE`, `COLLECTION_NAME`, and `DOCUMENT_DIRECTORY` environment variables are all optional.
```json
{
  "mcpServers": {
    "vector_mcp": {
      "command": "uv",
      "args": [
        "run",
        "--with",
        "vector-mcp",
        "vector-mcp"
      ],
      "env": {
        "DATABASE_TYPE": "chromadb",
        "COLLECTION_NAME": "memory",
        "DOCUMENT_DIRECTORY": "/home/user/Documents/"
      },
      "timeout": 300000
    }
  }
}
```
## Install Python Package
```bash
python -m pip install vector-mcp
```
With PGVector dependencies:
```bash
python -m pip install vector-mcp[postgres]
```
With all optional dependencies:
```bash
python -m pip install vector-mcp[all]
```
or
```bash
uv pip install --upgrade vector-mcp[all]
```
## Repository Owners
<img width="100%" height="180em" src="https://github-readme-stats.vercel.app/api?username=Knucklessg1&show_icons=true&hide_border=true&count_private=true&include_all_commits=true" />


Special shoutouts to Microsoft Autogen V1 ♥️
| text/markdown | null | Audel Rouhi <knucklessg1@gmail.com> | null | null | null | null | [
"Development Status :: 5 - Production/Stable",
"License :: Public Domain",
"Environment :: Console",
"Operating System :: POSIX :: Linux",
"Programming Language :: Python :: 3"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"tree-sitter>=0.23.2",
"fastmcp>=3.0.0b1",
"markdownify>=1.2.2",
"beautifulsoup4>=4.14.3",
"ebooklib>=0.19",
"html2text>=2025.4.15",
"ipython>=9.9.0",
"pypdf>=6.6.2",
"protobuf>=6.33.4",
"llama-index-core>=0.14.13",
"llama-index-llms-langchain>=0.7.1",
"llama-index-vector-stores-chroma>=0.5.5"... | [] | [] | [] | [] | twine/6.2.0 CPython/3.12.3 | 2026-02-19T06:34:12.709675 | vector_mcp-1.1.19.tar.gz | 66,012 | 42/3e/f9d2da7cb58d2b62ceeaeb1fd634877043a5eb47ebbe0b30714ed2c402b6/vector_mcp-1.1.19.tar.gz | source | sdist | null | false | 5f8b6f6e4b861e3a2d6ce50f06260f01 | 1fe226f0af838dff85a4429d9599f262bf6499ffec2282e87a7c7390229cd7ad | 423ef9d2da7cb58d2b62ceeaeb1fd634877043a5eb47ebbe0b30714ed2c402b6 | null | [
"LICENSE"
] | 258 |
2.4 | systems-manager | 1.2.14 | Systems Manager will update your system and install/upgrade applications. Additionally, it allows AI to perform these activities as an MCP Server | # Systems-Manager - A2A | AG-UI | MCP


















*Version: 1.2.14*
## Overview
Systems-Manager is a powerful CLI and MCP server tool to manage your system across multiple operating systems. It supports updating, installing, and optimizing applications, managing Windows features, installing Nerd Fonts, and retrieving system and hardware statistics. It now supports Ubuntu, Debian, Red Hat, Oracle Linux, SLES, Arch, and Windows, with Snap fallback for Linux application installations.
This repository is actively maintained - Contributions are welcome!
### Features
- **Multi-OS Support**: Works on Windows, Ubuntu, Debian, Red Hat, Oracle Linux, SLES, and Arch.
- **Application Management**: Install and update applications using native package managers (apt, dnf, zypper, pacman, winget) with automatic Snap fallback for Linux.
- **Font Installation**: Install specific Nerd Fonts (default: Hack) or all available fonts from the latest release.
- **Windows Feature Management**: List, enable, or disable Windows optional features (Windows only).
- **System Optimization**: Clean and optimize system resources (e.g., trash/recycle bin, autoremove, defragmentation on Windows).
- **System and Hardware Stats**: Retrieve detailed OS and hardware information using `psutil`.
- **Logging**: Optional logging to a specified file or default `systems_manager.log` in the script directory.
- **FastMCP Server**: Expose all functionality via a Model Context Protocol (MCP) server over stdio or HTTP for integration with AI or automation systems.
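As a rough idea of what the OS-statistics features report, here is a standard-library-only sketch (the real implementation uses `psutil` and returns far more detail; the exact keys below are an assumption):

```python
import os
import platform

def get_os_stats() -> dict:
    # Basic OS information, similar in spirit to the MCP tool's output
    return {
        "system": platform.system(),
        "release": platform.release(),
        "version": platform.version(),
        "machine": platform.machine(),
        "cpu_count": os.cpu_count(),
    }

stats = get_os_stats()
print(stats)
```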
## MCP
### MCP Tools
- `install_applications`: Install applications with Snap fallback (Linux).
- `update`: Update system and applications.
- `clean`: Clean system resources (e.g., trash/recycle bin).
- `optimize`: Optimize system (e.g., autoremove, defrag on Windows).
- `install_python_modules`: Install Python modules via pip.
- `install_fonts`: Install specified Nerd Fonts (default: Hack) or all fonts.
- `get_os_stats`: Retrieve OS statistics.
- `get_hardware_stats`: Retrieve hardware statistics.
- `list_windows_features`: List Windows features (Windows only).
- `enable_windows_features`: Enable Windows features (Windows only).
- `disable_windows_features`: Disable Windows features (Windows only).
- `run_command`: Run elevated shell commands (enable at your own risk).
## A2A Agent
### Architecture:
```mermaid
---
config:
layout: dagre
---
flowchart TB
subgraph subGraph0["Agent Capabilities"]
C["Agent"]
B["A2A Server - Uvicorn/FastAPI"]
D["MCP Tools"]
F["Agent Skills"]
end
C --> D & F
A["User Query"] --> B
B --> C
D --> E["Platform API"]
C:::agent
B:::server
A:::server
classDef server fill:#f9f,stroke:#333
classDef agent fill:#bbf,stroke:#333,stroke-width:2px
style B stroke:#000000,fill:#FFD600
style D stroke:#000000,fill:#BBDEFB
style F fill:#BBDEFB
style A fill:#C8E6C9
style subGraph0 fill:#FFF9C4
```
### Component Interaction Diagram
```mermaid
sequenceDiagram
participant User
participant Server as A2A Server
participant Agent as Agent
participant Skill as Agent Skills
participant MCP as MCP Tools
User->>Server: Send Query
Server->>Agent: Invoke Agent
Agent->>Skill: Analyze Skills Available
Skill->>Agent: Provide Guidance on Next Steps
Agent->>MCP: Invoke Tool
MCP-->>Agent: Tool Response Returned
Agent-->>Agent: Return Results Summarized
Agent-->>Server: Final Response
Server-->>User: Output
```
## Usage
### CLI
| Short Flag | Long Flag | Description |
|------------|---------------------|----------------------------------------------------------|
| -h | --help | See usage for script |
| -c | --clean | Clean Recycle/Trash bin |
| -e | --enable-features | Enable Windows features (comma-separated, Windows only) |
| -d | --disable-features | Disable Windows features (comma-separated, Windows only) |
| -l | --list-features | List all Windows features and their status (Windows only) |
| -f | --fonts | Install Nerd Fonts (comma-separated, e.g., Hack,Meslo or 'all'; default: Hack) |
| -i | --install | Install applications (comma-separated, e.g., python3,git) |
| -p | --python | Install Python modules (comma-separated) |
| -s | --silent | Suppress output to stdout |
| -u | --update | Update applications and Operating System |
| -o | --optimize | Optimize system (e.g., autoremove, clean cache, defrag) |
| | --os-stats | Print OS statistics (e.g., system, release, version) |
| | --hw-stats | Print hardware statistics (e.g., CPU, memory, disk) |
| | --log-file | Log to specified file (default: systems_manager.log) |
```bash
systems-manager --fonts Hack,Meslo --update --clean --python geniusbot --install python3,git --enable-features Microsoft-Hyper-V-All,Containers --log-file /path/to/log.log
```
### MCP CLI
| Short Flag | Long Flag | Description |
|------------|------------------------------------|-----------------------------------------------------------------------------|
| | --mcp-url | MCP Server URL to connect to (default: http://systems-manager-mcp.arpa/mcp) |
| | --allowed-tools | List of allowed MCP tools (default: system_management) |
| | --web | Enable Pydantic AI Web UI (default: False; Env: ENABLE_WEB_UI) |
| -t | --transport | Transport method: 'stdio', 'http', or 'sse' [legacy] (default: stdio) |
| -s | --host | Host address for HTTP transport (default: 0.0.0.0) |
| -p | --port | Port number for HTTP transport (default: 8000) |
| | --auth-type | Authentication type: 'none', 'static', 'jwt', 'oauth-proxy', 'oidc-proxy', 'remote-oauth' (default: none) |
| | --token-jwks-uri | JWKS URI for JWT verification |
| | --token-issuer | Issuer for JWT verification |
| | --token-audience | Audience for JWT verification |
| | --oauth-upstream-auth-endpoint | Upstream authorization endpoint for OAuth Proxy |
| | --oauth-upstream-token-endpoint | Upstream token endpoint for OAuth Proxy |
| | --oauth-upstream-client-id | Upstream client ID for OAuth Proxy |
| | --oauth-upstream-client-secret | Upstream client secret for OAuth Proxy |
| | --oauth-base-url | Base URL for OAuth Proxy |
| | --oidc-config-url | OIDC configuration URL |
| | --oidc-client-id | OIDC client ID |
| | --oidc-client-secret | OIDC client secret |
| | --oidc-base-url | Base URL for OIDC Proxy |
| | --remote-auth-servers | Comma-separated list of authorization servers for Remote OAuth |
| | --remote-base-url | Base URL for Remote OAuth |
| | --allowed-client-redirect-uris | Comma-separated list of allowed client redirect URIs |
| | --eunomia-type | Eunomia authorization type: 'none', 'embedded', 'remote' (default: none) |
| | --eunomia-policy-file | Policy file for embedded Eunomia (default: mcp_policies.json) |
| | --eunomia-remote-url | URL for remote Eunomia server |
### Using as an MCP Server
The MCP Server can be run in two modes: `stdio` (for local testing) or `http` (for networked access). To start the server, use the following commands:
#### Run in stdio mode (default):
```bash
systems-manager-mcp --transport "stdio"
```
#### Run in HTTP mode:
```bash
systems-manager-mcp --transport "http" --host "0.0.0.0" --port "8000"
```
### Dependencies
The following Python packages are automatically installed if missing:
- `distro`: For Linux distribution detection.
- `psutil`: For system and hardware statistics.
- `requests`: For downloading Nerd Fonts.
- `fastmcp`: For MCP server functionality (required for `systems-manager-mcp`).
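The auto-install behaviour described above can be sketched as follows. This is an assumption about the mechanism, not the package's exact code; note that import names and pip package names do not always match:

```python
import importlib
import subprocess
import sys

def ensure(package: str) -> None:
    # Install a package with pip only if it cannot be imported.
    # Assumes the import name matches the pip name (true for distro,
    # psutil, requests, and fastmcp).
    try:
        importlib.import_module(package)
    except ImportError:
        subprocess.check_call([sys.executable, "-m", "pip", "install", package])

# Stdlib modules are already importable, so no pip call is triggered here
ensure("json")
```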
### Agent-to-Agent (A2A) Server
This package includes an Agent utilizing `pydantic-ai` that can be deployed as an A2A server.
#### Endpoints
- **Web UI**: `http://localhost:8000/` (if enabled)
- **A2A**: `http://localhost:8000/a2a` (Discovery: `/a2a/.well-known/agent.json`)
- **AG-UI**: `http://localhost:8000/ag-ui` (POST)
#### A2A CLI
| Long Flag | Description | Default |
|------------------|--------------------------------------------------|-----------------------------|
| --host | Host to bind the server to | 0.0.0.0 |
| --port | Port to bind the server to | 9000 |
| --reload | Enable auto-reload | False |
| --provider | LLM Provider (openai, anthropic, google, etc) | openai |
| --model-id | LLM Model ID | qwen/qwen3-coder-next |
| --base-url | LLM Base URL (for OpenAI compatible providers) | http://host.docker.internal:1234/v1 |
| --api-key | LLM API Key | ollama |
| --mcp-url | MCP Server URL to connect to | None |
| --mcp-config | MCP Server Config | ... |
| --skills-directory | Directory containing agent skills | ... |
| --web | Enable Pydantic AI Web UI | False (Env: ENABLE_WEB_UI) |
#### Run A2A Server
```bash
systems-manager-agent --provider openai --model-id qwen/qwen3-coder-next
```
### Deploy MCP Server as a Service
The MCP server can be deployed using Docker, with configurable authentication, middleware, and Eunomia authorization.
#### Using Docker Run
```bash
docker pull knucklessg1/systems-manager:latest
docker run -d \
--name systems-manager-mcp \
-p 8004:8004 \
-e HOST=0.0.0.0 \
-e PORT=8004 \
-e TRANSPORT=http \
-e AUTH_TYPE=none \
-e EUNOMIA_TYPE=none \
knucklessg1/systems-manager:latest
```
For advanced authentication (e.g., JWT, OAuth Proxy, OIDC Proxy, Remote OAuth) or Eunomia, add the relevant environment variables:
```bash
docker run -d \
--name systems-manager-mcp \
-p 8004:8004 \
-e HOST=0.0.0.0 \
-e PORT=8004 \
-e TRANSPORT=http \
-e AUTH_TYPE=oidc-proxy \
-e OIDC_CONFIG_URL=https://provider.com/.well-known/openid-configuration \
-e OIDC_CLIENT_ID=your-client-id \
-e OIDC_CLIENT_SECRET=your-client-secret \
-e OIDC_BASE_URL=https://your-server.com \
-e ALLOWED_CLIENT_REDIRECT_URIS=http://localhost:*,https://*.example.com/* \
-e EUNOMIA_TYPE=embedded \
-e EUNOMIA_POLICY_FILE=/app/mcp_policies.json \
knucklessg1/systems-manager:latest
```
#### Using Docker Compose
Create a `docker-compose.yml` file:
```yaml
services:
systems-manager-mcp:
image: knucklessg1/systems-manager:latest
environment:
- HOST=0.0.0.0
- PORT=8004
- TRANSPORT=http
- AUTH_TYPE=none
- EUNOMIA_TYPE=none
ports:
- 8004:8004
```
For advanced setups with authentication and Eunomia:
```yaml
services:
systems-manager-mcp:
image: knucklessg1/systems-manager:latest
environment:
- HOST=0.0.0.0
- PORT=8004
- TRANSPORT=http
- AUTH_TYPE=oidc-proxy
- OIDC_CONFIG_URL=https://provider.com/.well-known/openid-configuration
- OIDC_CLIENT_ID=your-client-id
- OIDC_CLIENT_SECRET=your-client-secret
- OIDC_BASE_URL=https://your-server.com
- ALLOWED_CLIENT_REDIRECT_URIS=http://localhost:*,https://*.example.com/*
- EUNOMIA_TYPE=embedded
- EUNOMIA_POLICY_FILE=/app/mcp_policies.json
ports:
- 8004:8004
volumes:
- ./mcp_policies.json:/app/mcp_policies.json
```
Run the service:
```bash
docker-compose up -d
```
#### Configure `mcp.json` for AI Integration
```json
{
"mcpServers": {
"systems_manager": {
"command": "uv",
"args": [
"run",
"--with",
"systems-manager",
"systems-manager-mcp"
],
"env": {
"SYSTEMS_MANAGER_SILENT": "False",
"SYSTEMS_MANAGER_LOG_FILE": "~/Documents/systems_manager_mcp.log"
},
"timeout": 200000
}
}
}
```
## Install Python Package
```bash
python -m pip install systems-manager
```
or
```bash
uv pip install --upgrade systems-manager
```
## Repository Owners
<img width="100%" height="180em" src="https://github-readme-stats.vercel.app/api?username=Knucklessg1&show_icons=true&hide_border=true&count_private=true&include_all_commits=true" />


| text/markdown | null | Audel Rouhi <knucklessg1@gmail.com> | null | null | MIT | null | [
"Development Status :: 5 - Production/Stable",
"License :: Public Domain",
"Environment :: Console",
"Operating System :: POSIX :: Linux",
"Programming Language :: Python :: 3"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"requests>=2.28.1",
"psutil>=7.0.0",
"distro>=1.9.0",
"tree-sitter>=0.25.2",
"fastmcp>=3.0.0b1; extra == \"mcp\"",
"eunomia-mcp>=0.3.10; extra == \"mcp\"",
"fastapi>=0.128.0; extra == \"mcp\"",
"pydantic-ai-slim[a2a,ag-ui,anthropic,fastmcp,google,groq,huggingface,mistral,openai,web]>=1.60.0; extra == ... | [] | [] | [] | [] | twine/6.2.0 CPython/3.12.3 | 2026-02-19T06:34:09.613962 | systems_manager-1.2.14.tar.gz | 62,435 | cd/6f/40b881327177cb386f5fff2e549f787331e1e47756427d47485c384ca53f/systems_manager-1.2.14.tar.gz | source | sdist | null | false | 24859c0b8a65579248c1d3f06e141b50 | c33c4921406c07e7677c43171ba92b97a5ed119ee243876922bd5b5be3e3a8e5 | cd6f40b881327177cb386f5fff2e549f787331e1e47756427d47485c384ca53f | null | [
"LICENSE"
] | 247 |
2.4 | archivebox-api | 0.1.14 | Pythonic ArchiveBox API Wrapper and Fast MCP Server for Agentic AI use! | # ArchiveBox API - A2A | AG-UI | MCP


















*Version: 0.1.14*
## Overview
ArchiveBox API Python Wrapper & Fast MCP Server!
This repository provides a Python wrapper for interacting with the ArchiveBox API, enabling programmatic access to web archiving functionality. It includes a Model Context Protocol (MCP) server for Agentic AI, enhanced with various authentication mechanisms, middleware for observability and control, and optional Eunomia authorization for policy-based access control.
Contributions are welcome!
All API Response objects are customized for the response call. You can access return values in a `parent.value.nested_value` format, or use `parent.json()` to get the response as a dictionary.
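A minimal sketch of how such an attribute-access wrapper could work (hypothetical `Response` class for illustration; the real `archivebox_api` response objects are richer):

```python
class Response:
    """Wraps a dict so nested values are reachable as attributes."""

    def __init__(self, data: dict):
        self._data = data

    def __getattr__(self, name):
        try:
            value = self._data[name]
        except KeyError:
            raise AttributeError(name) from None
        # Nested dicts are wrapped again, enabling parent.value.nested_value
        return Response(value) if isinstance(value, dict) else value

    def json(self) -> dict:
        # Return the raw response payload as a dictionary
        return self._data

resp = Response({"snapshot": {"id": "abc123", "url": "https://example.com"}})
print(resp.snapshot.id)  # abc123
print(resp.json())
```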
#### Features:
- **Authentication**: Supports multiple authentication types including none (disabled), static (internal tokens), JWT, OAuth Proxy, OIDC Proxy, and Remote OAuth for external identity providers.
- **Middleware**: Includes logging, timing, rate limiting, and error handling for robust server operation.
- **Eunomia Authorization**: Optional policy-based authorization with embedded or remote Eunomia server integration.
- **Resources**: Provides `instance_config` for ArchiveBox configuration.
- **Prompts**: Includes `cli_add_prompt` for AI-driven interactions.
## API
### API Calls:
- Authentication
- Core Model (Snapshots, ArchiveResults, Tags)
- CLI Commands (add, update, schedule, list, remove)
If your API call isn't supported, you can extend the functionality by adding custom endpoints or modifying the existing wrapper.
[These are the API endpoints currently supported](https://demo.archivebox.io/api/v1/docs)
## MCP
All the available API Calls above are wrapped in MCP Tools. You can find those below with their tool descriptions and associated tag.
### MCP Tools
| Function Name | Description | Tag(s) |
|:---------------------|:---------------------------------------------------------------|:-----------------|
| `get_api_token` | Generate an API token for a given username & password. | `authentication` |
| `check_api_token` | Validate an API token to make sure it's valid and non-expired. | `authentication` |
| `get_snapshots` | Retrieve list of snapshots. | `core` |
| `get_snapshot` | Get a specific Snapshot by abid or id. | `core` |
| `get_archiveresults` | List all ArchiveResult entries matching these filters. | `core` |
| `get_tag` | Get a specific Tag by id or abid. | `core` |
| `get_any` | Get a specific Snapshot, ArchiveResult, or Tag by abid. | `core` |
| `cli_add` | Execute archivebox add command. | `cli` |
| `cli_update` | Execute archivebox update command. | `cli` |
| `cli_schedule` | Execute archivebox schedule command. | `cli` |
| `cli_list` | Execute archivebox list command. | `cli` |
| `cli_remove` | Execute archivebox remove command. | `cli` |
## A2A Agent
### Architecture:
```mermaid
---
config:
layout: dagre
---
flowchart TB
subgraph subGraph0["Agent Capabilities"]
C["Agent"]
B["A2A Server - Uvicorn/FastAPI"]
D["MCP Tools"]
F["Agent Skills"]
end
C --> D & F
A["User Query"] --> B
B --> C
D --> E["Platform API"]
C:::agent
B:::server
A:::server
classDef server fill:#f9f,stroke:#333
classDef agent fill:#bbf,stroke:#333,stroke-width:2px
style B stroke:#000000,fill:#FFD600
style D stroke:#000000,fill:#BBDEFB
style F fill:#BBDEFB
style A fill:#C8E6C9
style subGraph0 fill:#FFF9C4
```
### Component Interaction Diagram
```mermaid
sequenceDiagram
participant User
participant Server as A2A Server
participant Agent as Agent
participant Skill as Agent Skills
participant MCP as MCP Tools
User->>Server: Send Query
Server->>Agent: Invoke Agent
Agent->>Skill: Analyze Skills Available
Skill->>Agent: Provide Guidance on Next Steps
Agent->>MCP: Invoke Tool
MCP-->>Agent: Tool Response Returned
Agent-->>Agent: Return Results Summarized
Agent-->>Server: Final Response
Server-->>User: Output
```
## Usage
### MCP
#### MCP CLI
| Short Flag | Long Flag | Description |
|------------|------------------------------------|-----------------------------------------------------------------------------|
| -h | --help | Display help information |
| -t | --transport | Transport method: 'stdio', 'http', or 'sse' [legacy] (default: stdio) |
| -s | --host | Host address for HTTP transport (default: 0.0.0.0) |
| -p | --port | Port number for HTTP transport (default: 8000) |
| | --auth-type | Authentication type: 'none', 'static', 'jwt', 'oauth-proxy', 'oidc-proxy', 'remote-oauth' (default: none) |
| | --token-jwks-uri | JWKS URI for JWT verification |
| | --token-issuer | Issuer for JWT verification |
| | --token-audience | Audience for JWT verification |
| | --oauth-upstream-auth-endpoint | Upstream authorization endpoint for OAuth Proxy |
| | --oauth-upstream-token-endpoint | Upstream token endpoint for OAuth Proxy |
| | --oauth-upstream-client-id | Upstream client ID for OAuth Proxy |
| | --oauth-upstream-client-secret | Upstream client secret for OAuth Proxy |
| | --oauth-base-url | Base URL for OAuth Proxy |
| | --oidc-config-url | OIDC configuration URL |
| | --oidc-client-id | OIDC client ID |
| | --oidc-client-secret | OIDC client secret |
| | --oidc-base-url | Base URL for OIDC Proxy |
| | --remote-auth-servers | Comma-separated list of authorization servers for Remote OAuth |
| | --remote-base-url | Base URL for Remote OAuth |
| | --allowed-client-redirect-uris | Comma-separated list of allowed client redirect URIs |
| | --eunomia-type | Eunomia authorization type: 'none', 'embedded', 'remote' (default: none) |
| | --eunomia-policy-file | Policy file for embedded Eunomia (default: mcp_policies.json) |
| | --eunomia-remote-url | URL for remote Eunomia server |
#### Using as an MCP Server
The MCP Server can be run in two modes: `stdio` (for local testing) or `http` (for networked access). To start the server, use the following commands:
#### Run in stdio mode (default):
```bash
archivebox-mcp --transport "stdio"
```
#### Run in HTTP mode:
```bash
archivebox-mcp --transport "http" --host "0.0.0.0" --port "8000"
```
### Basic API Usage
**Token Authentication**
```python
#!/usr/bin/python
# coding: utf-8
import archivebox_api
archivebox_url = "<ARCHIVEBOX_URL>"
token = "<ARCHIVEBOX_TOKEN>"
client = archivebox_api.Api(
url=archivebox_url,
token=token
)
snapshots = client.get_snapshots()
print(f"Snapshots: {snapshots.json()}")
```
**Basic Authentication**
```python
#!/usr/bin/python
# coding: utf-8
import archivebox_api
username = "<ARCHIVEBOX_USERNAME>"
password = "<ARCHIVEBOX_PASSWORD>"
archivebox_url = "<ARCHIVEBOX_URL>"
client = archivebox_api.Api(
url=archivebox_url,
username=username,
password=password
)
snapshots = client.get_snapshots()
print(f"Snapshots: {snapshots.json()}")
```
**API Key Authentication**
```python
#!/usr/bin/python
# coding: utf-8
import archivebox_api
archivebox_url = "<ARCHIVEBOX_URL>"
api_key = "<ARCHIVEBOX_API_KEY>"
client = archivebox_api.Api(
url=archivebox_url,
api_key=api_key
)
snapshots = client.get_snapshots()
print(f"Snapshots: {snapshots.json()}")
```
**SSL Verify**
```python
#!/usr/bin/python
# coding: utf-8
import archivebox_api
username = "<ARCHIVEBOX_USERNAME>"
password = "<ARCHIVEBOX_PASSWORD>"
archivebox_url = "<ARCHIVEBOX_URL>"
client = archivebox_api.Api(
url=archivebox_url,
username=username,
password=password,
verify=False
)
snapshots = client.get_snapshots()
print(f"Snapshots: {snapshots.json()}")
```
### Deploy MCP Server as a Service
The ArchiveBox MCP server can be deployed using Docker, with configurable authentication, middleware, and Eunomia authorization.
#### Using Docker Run
```bash
docker pull archivebox/archivebox:latest
docker run -d \
--name archivebox-mcp \
-p 8004:8004 \
-e HOST=0.0.0.0 \
-e PORT=8004 \
-e TRANSPORT=http \
-e AUTH_TYPE=none \
-e EUNOMIA_TYPE=none \
-e ARCHIVEBOX_URL=https://yourinstance.archivebox.com \
-e ARCHIVEBOX_USERNAME=user \
-e ARCHIVEBOX_PASSWORD=pass \
-e ARCHIVEBOX_TOKEN=token \
-e ARCHIVEBOX_API_KEY=api_key \
-e ARCHIVEBOX_VERIFY=False \
archivebox/archivebox:latest
```
For advanced authentication (e.g., JWT, OAuth Proxy, OIDC Proxy, Remote OAuth) or Eunomia, add the relevant environment variables:
```bash
docker run -d \
--name archivebox-mcp \
-p 8004:8004 \
-e HOST=0.0.0.0 \
-e PORT=8004 \
-e TRANSPORT=http \
-e AUTH_TYPE=oidc-proxy \
-e OIDC_CONFIG_URL=https://provider.com/.well-known/openid-configuration \
-e OIDC_CLIENT_ID=your-client-id \
-e OIDC_CLIENT_SECRET=your-client-secret \
-e OIDC_BASE_URL=https://your-server.com \
-e ALLOWED_CLIENT_REDIRECT_URIS=http://localhost:*,https://*.example.com/* \
-e EUNOMIA_TYPE=embedded \
-e EUNOMIA_POLICY_FILE=/app/mcp_policies.json \
-e ARCHIVEBOX_URL=https://yourinstance.archivebox.com \
-e ARCHIVEBOX_USERNAME=user \
-e ARCHIVEBOX_PASSWORD=pass \
-e ARCHIVEBOX_TOKEN=token \
-e ARCHIVEBOX_API_KEY=api_key \
-e ARCHIVEBOX_VERIFY=False \
archivebox/archivebox:latest
```
#### Using Docker Compose
Create a `docker-compose.yml` file:
```yaml
services:
archivebox-mcp:
image: archivebox/archivebox:latest
environment:
- HOST=0.0.0.0
- PORT=8004
- TRANSPORT=http
- AUTH_TYPE=none
- EUNOMIA_TYPE=none
- ARCHIVEBOX_URL=https://yourinstance.archivebox.com
- ARCHIVEBOX_USERNAME=user
- ARCHIVEBOX_PASSWORD=pass
- ARCHIVEBOX_TOKEN=token
- ARCHIVEBOX_API_KEY=api_key
- ARCHIVEBOX_VERIFY=False
ports:
- 8004:8004
```
For advanced setups with authentication and Eunomia:
```yaml
services:
archivebox-mcp:
image: archivebox/archivebox:latest
environment:
- HOST=0.0.0.0
- PORT=8004
- TRANSPORT=http
- AUTH_TYPE=oidc-proxy
- OIDC_CONFIG_URL=https://provider.com/.well-known/openid-configuration
- OIDC_CLIENT_ID=your-client-id
- OIDC_CLIENT_SECRET=your-client-secret
- OIDC_BASE_URL=https://your-server.com
- ALLOWED_CLIENT_REDIRECT_URIS=http://localhost:*,https://*.example.com/*
- EUNOMIA_TYPE=embedded
- EUNOMIA_POLICY_FILE=/app/mcp_policies.json
- ARCHIVEBOX_URL=https://yourinstance.archivebox.com
- ARCHIVEBOX_USERNAME=user
- ARCHIVEBOX_PASSWORD=pass
- ARCHIVEBOX_TOKEN=token
- ARCHIVEBOX_API_KEY=api_key
- ARCHIVEBOX_VERIFY=False
ports:
- 8004:8004
volumes:
- ./mcp_policies.json:/app/mcp_policies.json
```
Run the service:
```bash
docker-compose up -d
```
#### Configure `mcp.json` for AI Integration
Recommended: store secrets in environment variables and reference them from the JSON file via `${VAR}` placeholders.
For testing only: plain-text values also work, but this is **not** recommended.
```json
{
"mcpServers": {
"archivebox": {
"command": "uv",
"args": [
"run",
"--with",
"archivebox-api",
"archivebox-mcp",
"--transport",
"${TRANSPORT}",
"--host",
"${HOST}",
"--port",
"${PORT}",
"--auth-type",
"${AUTH_TYPE}",
"--eunomia-type",
"${EUNOMIA_TYPE}"
],
"env": {
"ARCHIVEBOX_URL": "https://yourinstance.archivebox.com",
"ARCHIVEBOX_USERNAME": "user",
"ARCHIVEBOX_PASSWORD": "pass",
"ARCHIVEBOX_TOKEN": "token",
"ARCHIVEBOX_API_KEY": "api_key",
"ARCHIVEBOX_VERIFY": "False",
"TOKEN_JWKS_URI": "${TOKEN_JWKS_URI}",
"TOKEN_ISSUER": "${TOKEN_ISSUER}",
"TOKEN_AUDIENCE": "${TOKEN_AUDIENCE}",
"OAUTH_UPSTREAM_AUTH_ENDPOINT": "${OAUTH_UPSTREAM_AUTH_ENDPOINT}",
"OAUTH_UPSTREAM_TOKEN_ENDPOINT": "${OAUTH_UPSTREAM_TOKEN_ENDPOINT}",
"OAUTH_UPSTREAM_CLIENT_ID": "${OAUTH_UPSTREAM_CLIENT_ID}",
"OAUTH_UPSTREAM_CLIENT_SECRET": "${OAUTH_UPSTREAM_CLIENT_SECRET}",
"OAUTH_BASE_URL": "${OAUTH_BASE_URL}",
"OIDC_CONFIG_URL": "${OIDC_CONFIG_URL}",
"OIDC_CLIENT_ID": "${OIDC_CLIENT_ID}",
"OIDC_CLIENT_SECRET": "${OIDC_CLIENT_SECRET}",
"OIDC_BASE_URL": "${OIDC_BASE_URL}",
"REMOTE_AUTH_SERVERS": "${REMOTE_AUTH_SERVERS}",
"REMOTE_BASE_URL": "${REMOTE_BASE_URL}",
"ALLOWED_CLIENT_REDIRECT_URIS": "${ALLOWED_CLIENT_REDIRECT_URIS}",
"EUNOMIA_TYPE": "${EUNOMIA_TYPE}",
"EUNOMIA_POLICY_FILE": "${EUNOMIA_POLICY_FILE}",
"EUNOMIA_REMOTE_URL": "${EUNOMIA_REMOTE_URL}"
},
"timeout": 200000
}
}
}
```
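Whether the `${VAR}` placeholders above are expanded depends on your MCP client; this is a minimal stdlib sketch of the lookup mechanism itself, using two variable names from the file above:

```python
import os

# Values copied from the mcp.json above; clients that support ${VAR}
# placeholders resolve them from the environment at launch time.
config = {
    "TRANSPORT": "${TRANSPORT}",
    "OIDC_CLIENT_ID": "${OIDC_CLIENT_ID}",
}

os.environ["TRANSPORT"] = "http"          # would normally be set in your shell
os.environ.pop("OIDC_CLIENT_ID", None)    # deliberately left unset here

# os.path.expandvars substitutes ${NAME} from the environment and leaves
# unknown placeholders untouched.
resolved = {key: os.path.expandvars(value) for key, value in config.items()}
print(resolved)  # {'TRANSPORT': 'http', 'OIDC_CLIENT_ID': '${OIDC_CLIENT_ID}'}
```

Note that unset variables are passed through verbatim rather than raising an error, so a missing secret shows up as a literal `${...}` value downstream.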
#### CLI Parameters
The `archivebox-mcp` command supports the following CLI options for configuration:
- `--transport`: Transport method (`stdio`, `http`, `sse`) [default: `http`]
- `--host`: Host address for HTTP transport [default: `0.0.0.0`]
- `--port`: Port number for HTTP transport [default: `8000`]
- `--auth-type`: Authentication type (`none`, `static`, `jwt`, `oauth-proxy`, `oidc-proxy`, `remote-oauth`) [default: `none`]
- `--token-jwks-uri`: JWKS URI for JWT verification
- `--token-issuer`: Issuer for JWT verification
- `--token-audience`: Audience for JWT verification
- `--oauth-upstream-auth-endpoint`: Upstream authorization endpoint for OAuth Proxy
- `--oauth-upstream-token-endpoint`: Upstream token endpoint for OAuth Proxy
- `--oauth-upstream-client-id`: Upstream client ID for OAuth Proxy
- `--oauth-upstream-client-secret`: Upstream client secret for OAuth Proxy
- `--oauth-base-url`: Base URL for OAuth Proxy
- `--oidc-config-url`: OIDC configuration URL
- `--oidc-client-id`: OIDC client ID
- `--oidc-client-secret`: OIDC client secret
- `--oidc-base-url`: Base URL for OIDC Proxy
- `--remote-auth-servers`: Comma-separated list of authorization servers for Remote OAuth
- `--remote-base-url`: Base URL for Remote OAuth
- `--allowed-client-redirect-uris`: Comma-separated list of allowed client redirect URIs
- `--eunomia-type`: Eunomia authorization type (`none`, `embedded`, `remote`) [default: `none`]
- `--eunomia-policy-file`: Policy file for embedded Eunomia [default: `mcp_policies.json`]
- `--eunomia-remote-url`: URL for remote Eunomia server
#### Middleware
The MCP server includes the following built-in middleware for enhanced functionality:
- **ErrorHandlingMiddleware**: Provides comprehensive error logging and transformation.
- **RateLimitingMiddleware**: Limits request frequency with a token bucket algorithm (10 requests/second, burst capacity of 20).
- **TimingMiddleware**: Tracks execution time of requests.
- **LoggingMiddleware**: Logs all requests and responses for observability.
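The rate limiter above is described as a token bucket (10 requests/second, burst capacity 20). This is an illustrative, self-contained sketch of that algorithm, not the actual FastMCP middleware code; the injectable clock exists only to make the demo deterministic:

```python
import time

class TokenBucket:
    """Token bucket: `rate` tokens added per second, up to `capacity` (burst size)."""

    def __init__(self, rate: float, capacity: float, clock=time.monotonic):
        self.rate = rate
        self.capacity = capacity
        self.tokens = capacity
        self.clock = clock
        self.last = clock()

    def allow(self) -> bool:
        now = self.clock()
        # Refill proportionally to elapsed time, capped at the burst capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

# Drive the bucket with a fake clock so the arithmetic is reproducible.
fake_now = [0.0]
bucket = TokenBucket(rate=10, capacity=20, clock=lambda: fake_now[0])

admitted = sum(bucket.allow() for _ in range(30))   # cold start: burst of 20 admitted
fake_now[0] = 0.5                                   # half a second later: 5 new tokens
refilled = sum(bucket.allow() for _ in range(10))   # 5 more admitted, rest throttled
```

The burst capacity lets short spikes through, while sustained traffic is held to the steady refill rate.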
#### Eunomia Authorization
The server supports optional Eunomia authorization for policy-based access control:
- **Disabled (`none`)**: No authorization checks.
- **Embedded (`embedded`)**: Runs an embedded Eunomia server with a local policy file (`mcp_policies.json` by default).
- **Remote (`remote`)**: Connects to an external Eunomia server for centralized policy decisions.
To configure Eunomia policies:
```bash
# Initialize a default policy file
eunomia-mcp init
# Validate the policy file
eunomia-mcp validate mcp_policies.json
```
### A2A CLI
#### Endpoints
- **Web UI**: `http://localhost:8000/` (if enabled)
- **A2A**: `http://localhost:8000/a2a` (Discovery: `/a2a/.well-known/agent.json`)
- **AG-UI**: `http://localhost:8000/ag-ui` (POST)
| Short Flag | Long Flag | Description |
|------------|-------------------|------------------------------------------------------------------------|
| -h | --help | Display help information |
| | --host | Host to bind the server to (default: 0.0.0.0) |
| | --port | Port to bind the server to (default: 9000) |
| | --reload | Enable auto-reload |
| | --provider | LLM Provider: 'openai', 'anthropic', 'google', 'huggingface' |
| | --model-id | LLM Model ID (default: qwen3:4b) |
| | --base-url | LLM Base URL (for OpenAI compatible providers) |
| | --api-key | LLM API Key |
| | --mcp-url | MCP Server URL (default: http://localhost:8000/mcp) |
| | --web | Enable Pydantic AI Web UI (default: False; env: ENABLE_WEB_UI) |
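Agent discovery works by fetching the agent card from the well-known path listed above. A minimal stdlib sketch, assuming the server is reachable at the endpoint shown (adjust the base URL to your `--host`/`--port` settings):

```python
import json
import urllib.request

BASE_URL = "http://localhost:8000"  # match your deployment

def agent_card_url(base: str) -> str:
    """Build the A2A discovery URL from the server base URL."""
    return base.rstrip("/") + "/a2a/.well-known/agent.json"

def fetch_agent_card(base: str) -> dict:
    """Fetch and parse the agent card (requires a running A2A server)."""
    with urllib.request.urlopen(agent_card_url(base)) as resp:
        return json.load(resp)

# card = fetch_agent_card(BASE_URL)  # returns the parsed agent card dict
```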
## Install Python Package
```bash
python -m pip install archivebox-api[all]
```
## Repository Owners
<img width="100%" height="180em" src="https://github-readme-stats.vercel.app/api?username=Knucklessg1&show_icons=true&hide_border=true&&count_private=true&include_all_commits=true" />


| text/markdown | null | Audel Rouhi <knucklessg1@gmail.com> | null | null | MIT | null | [
"Development Status :: 5 - Production/Stable",
"License :: Public Domain",
"Environment :: Console",
"Operating System :: POSIX :: Linux",
"Programming Language :: Python :: 3"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"fastmcp>=3.0.0b1; extra == \"mcp\"",
"eunomia-mcp>=0.3.10; extra == \"mcp\"",
"fastapi>=0.128.0; extra == \"mcp\"",
"pydantic-ai-slim[a2a,ag-ui,anthropic,fastmcp,google,groq,huggingface,mistral,openai,web]>=1.60.0; extra == \"a2a\"",
"pydantic-ai-skills>=v0.4.0; extra == \"a2a\"",
"fastapi>=0.128.0; extr... | [] | [] | [] | [] | twine/6.2.0 CPython/3.12.3 | 2026-02-19T06:34:07.725548 | archivebox_api-0.1.14.tar.gz | 39,848 | 97/f9/b5377999da33833cc826ceeb77b5fc76fdab74d3656c1150d73495f86591/archivebox_api-0.1.14.tar.gz | source | sdist | null | false | e932d9fb8c210fae27b36541a64090a6 | 57fbc8602e8c9f71105d8891f3019ee41d2530ad28d5ba768e8c2e829266fac6 | 97f9b5377999da33833cc826ceeb77b5fc76fdab74d3656c1150d73495f86591 | null | [
"LICENSE"
] | 249 |
2.4 | microsoft-agent | 0.2.15 | Microsoft Graph Agent MCP Server | # Microsoft Agent - A2A | AG-UI | MCP


















*Version: 0.2.15*
## Overview
Microsoft Graph MCP Server + A2A Supervisor Agent
It includes a Model Context Protocol (MCP) server that wraps the Microsoft Graph API and an out-of-the-box Agent2Agent (A2A) Supervisor Agent.
Manage your Microsoft 365 tenant (Users, Groups, Calendars, Drive, etc.) through natural language!
This repository is actively maintained - Contributions are welcome!
### Capabilities:
- **Comprehensive Graph API Coverage**: Access thousands of Microsoft Graph endpoints via MCP tools.
- **Supervisor-Worker Agent Architecture**: A smart supervisor delegates tasks to specialized agents (e.g., Calendar Agent, User Agent).
- **Secure Authentication**: Supports OAuth, OIDC, and other authentication methods.
## MCP
### MCP Tools
This server provides tools for a vast array of Microsoft Graph resources. Due to the large number of tools, they are organized by resource type.
**Supported Resources (Partial List):**
- **Users**: `get_user`, `update_user`, `list_user`, etc.
- **Groups**: `get_group`, `post_groups_group`, `list_members_group`, etc.
- **Calendar**: `get_calendar`, `post_events`, `list_calendarview`, etc.
- **Drive/DriveItems**: `get_drive`, `search_driveitem`, `upload_driveitem`, etc.
- **Mail**: `send_mail`, `list_messages`, etc.
- **Directory Objects**: `get_directoryobject`, `check_member_objects`, etc.
- **Planner**, **OneNote**, **Teams**, and more.
All tools generally follow the naming convention: `action_resource` (e.g., `list_user`, `delete_group`).
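A quick illustration of that convention, splitting a tool name on its first underscore (the helper is ours, not part of the package):

```python
def parse_tool_name(tool: str) -> tuple[str, str]:
    """Split an MCP tool name into (action, resource), e.g. 'list_user' -> ('list', 'user')."""
    action, _, resource = tool.partition("_")
    return action, resource

print(parse_tool_name("list_user"))             # ('list', 'user')
print(parse_tool_name("check_member_objects"))  # ('check', 'member_objects')
```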
### Using as an MCP Server
The MCP Server can be run in two modes: `stdio` (for local testing) or `http` (for networked access).
#### Run in stdio mode (default):
```bash
microsoft-agent --transport "stdio"
```
#### Run in HTTP mode:
```bash
microsoft-agent --transport "http" --host "0.0.0.0" --port "8000"
```
AI Prompt:
```text
Find who manages the 'Engineering' group and list its members.
```
AI Response:
```text
I've found the 'Engineering' group.
Owners:
- Jane Doe (Head of Engineering)
Members:
- John Smith
- Alice Johnson
- Bob Williams
...
```
## A2A Agent
This package includes a powerful A2A Supervisor Agent that orchestrates interaction with the Microsoft MCP tools.
### Architecture
The system uses a Supervisor Agent that analyzes user requests and delegates them to domain-specific Child Agents.
```mermaid
---
config:
layout: dagre
---
flowchart TB
subgraph subGraph0["Agent System"]
S["Supervisor Agent"]
subGraph1["Specialized Agents"]
CA["Calendar Agent"]
GA["Group Agent"]
UA["User Agent"]
DA["Drive Agent"]
OA["...Other Agents"]
end
B["A2A Server"]
M["MCP Tools"]
end
U["User Query"] --> B
B --> S
S --Delegates--> CA & GA & UA & DA & OA
CA & GA & UA & DA & OA --> M
M --> api["Microsoft Graph API"]
S:::agent
B:::server
U:::server
CA:::worker
GA:::worker
UA:::worker
DA:::worker
OA:::worker
classDef server fill:#f9f,stroke:#333
classDef agent fill:#bbf,stroke:#333,stroke-width:2px
classDef worker fill:#dcedc8,stroke:#333
style B stroke:#000000,fill:#FFD600
style M stroke:#000000,fill:#BBDEFB
style U fill:#C8E6C9
style subGraph0 fill:#FFF9C4
```
### Component Interaction
1. **User** sends a request (e.g., "Schedule a meeting with the Engineering team").
2. **Supervisor Agent** identifies this as a calendar and group task.
3. **Supervisor** delegates finding the group members to the **Group Agent**.
4. **Group Agent** calls `list_members_group` tool and returns emails.
5. **Supervisor** delegates scheduling to the **Calendar Agent** with the retrieved emails.
6. **Calendar Agent** calls `post_events` tool.
7. **Supervisor** confirms completion to the User.
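The delegation flow above can be sketched as a simple router. The agent functions and return values here are illustrative stand-ins, not the package's actual classes:

```python
# Minimal supervisor-worker sketch: the supervisor routes a request to
# specialized workers, then combines their results.
def group_agent(query: str) -> list[str]:
    # Stand-in for a worker that calls the `list_members_group` MCP tool.
    return ["jane@contoso.com", "john@contoso.com"]

def calendar_agent(attendees: list[str]) -> str:
    # Stand-in for a worker that calls the `post_events` MCP tool.
    return f"Meeting scheduled with {len(attendees)} attendees"

def supervisor(request: str) -> str:
    if "meeting" in request.lower():
        members = group_agent(request)   # steps 3-4: resolve the group
        return calendar_agent(members)   # steps 5-6: schedule the event
    return "No matching agent"

print(supervisor("Schedule a meeting with the Engineering team"))
```

In the real system the routing decision is made by the LLM-backed supervisor rather than a keyword match, but the control flow is the same.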
## Usage
### MCP CLI
| Short Flag | Long Flag | Description |
|------------|------------------------------------|-----------------------------------------------------------------------------|
| -h | --help | Display help information |
| -t | --transport | Transport method: 'stdio', 'http', or 'sse' [legacy] (default: stdio) |
| -s | --host | Host address for HTTP transport (default: 0.0.0.0) |
| -p | --port | Port number for HTTP transport (default: 8000) |
| | --auth-type | Auth type: 'none', 'static', 'jwt', 'oauth-proxy', 'oidc-proxy' (default: none) |
| | ... | (See standard FastMCP auth flags) |
### A2A CLI
#### Endpoints
- **Web UI**: `http://localhost:9000/` (if enabled)
- **A2A**: `http://localhost:9000/a2a` (Discovery: `/a2a/.well-known/agent.json`)
- **AG-UI**: `http://localhost:9000/ag-ui` (POST)
| Argument | Description | Default |
|-------------------|----------------------------------------------------------------|--------------------------------|
| `--host` | Host to bind the server to | `0.0.0.0` |
| `--port` | Port to bind the server to | `9000` |
| `--provider` | LLM Provider (openai, anthropic, google, huggingface) | `openai` |
| `--model-id` | LLM Model ID | `qwen/qwen3-coder-next` |
| `--mcp-url` | MCP Server URL | `http://microsoft-agent:8000/mcp` |
### Examples
#### Run A2A Server
```bash
microsoft-agent-server --provider openai --model-id gpt-4o --api-key sk-... --mcp-url http://localhost:8000/mcp
```
## Docker
### Build
```bash
docker build -t microsoft-agent .
```
### Run MCP Server
```bash
docker run -p 8000:8000 microsoft-agent
```
### Run Agent Server
```bash
docker run -e CMD=agent-server -p 9000:9000 microsoft-agent
```
### Deploy as a Service
```bash
docker pull knucklessg1/microsoft-agent:latest
docker run -d \
--name microsoft-agent \
-p 8000:8000 \
-e HOST=0.0.0.0 \
-e PORT=8000 \
-e TRANSPORT=http \
knucklessg1/microsoft-agent:latest
```
## Install Python Package
```bash
python -m pip install microsoft-agent
```
```bash
uv pip install microsoft-agent
```
## Repository Owners
<img width="100%" height="180em" src="https://github-readme-stats.vercel.app/api?username=Knucklessg1&show_icons=true&hide_border=true&&count_private=true&include_all_commits=true" />


Documentation:
[Microsoft API Docs](https://learn.microsoft.com/en-us/graph/api/resources/mail-api-overview?view=graph-rest-1.0)
[Microsoft Graph SDK](https://github.com/microsoftgraph/msgraph-sdk-python)
| text/markdown | null | Audel Rouhi <knucklessg1@gmail.com> | null | null | MIT | null | [
"Development Status :: 3 - Alpha",
"License :: OSI Approved :: MIT License",
"Environment :: Console",
"Operating System :: POSIX :: Linux",
"Programming Language :: Python :: 3"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"tree-sitter>=0.23.2",
"requests>=2.32.0",
"urllib3>=2.2.2",
"msal>=1.31.0",
"keyring>=25.1.0",
"msgraph-sdk>=1.54.0",
"fastmcp>=3.0.0b1; extra == \"mcp\"",
"eunomia-mcp>=0.3.10; extra == \"mcp\"",
"fastapi>=0.128.0; extra == \"mcp\"",
"pydantic-ai-slim[a2a,ag-ui,anthropic,fastmcp,google,groq,hugg... | [] | [] | [] | [] | twine/6.2.0 CPython/3.12.3 | 2026-02-19T06:33:50.692093 | microsoft_agent-0.2.15.tar.gz | 87,635 | 02/c6/7a79726e15568e15df8f95f53aee70af80f89406364df4d1d80b7299a9a2/microsoft_agent-0.2.15.tar.gz | source | sdist | null | false | 6a1b2d77d70d4ab7f4e7934376a4c9ca | 1ec8dfcef7f5597f01e3f848593df860fc0f699345268d91e342dc6665cf0f40 | 02c67a79726e15568e15df8f95f53aee70af80f89406364df4d1d80b7299a9a2 | null | [
"LICENSE"
] | 251 |
2.4 | media-downloader | 2.2.14 | Download audio/videos from the internet! | # Media Downloader - A2A | AG-UI | MCP


















*Version: 2.2.14*
## Overview
Download videos and audio from the internet!
This package comes ready with an MCP Server and an A2A Server so you can plug this Agent into any of your existing agentic framework!
You can also plug in the MCP Server directly to your own agent if you prefer!
This repository is actively maintained - Contributions are welcome!
### Supports:
- YouTube
- Twitter
- Rumble
- BitChute
- Vimeo
- And More!
This requires a [javascript runtime](https://github.com/yt-dlp/yt-dlp/issues/15012#issue-3614398875).
However, the container is fully baked and ready to go!
## Usage
### CLI
| Short Flag | Long Flag | Description |
|------------|-------------|---------------------------------------------|
| -h | --help | See usage |
| -a | --audio | Download audio only |
| -c | --channel | YouTube Channel/User - Downloads all videos |
| -f | --file | File with video links |
| -l | --links | Comma separated links |
| -d | --directory | Location to save videos |
```bash
media-downloader --file "C:\Users\videos.txt" --directory "C:\Users\Downloads" --channel "WhiteHouse" --links "URL1,URL2,URL3"
```
### MCP CLI
| Short Flag | Long Flag | Description |
|------------|------------------------------------|-----------------------------------------------------------------------------|
| -h | --help | Display help information |
| -t | --transport | Transport method: 'stdio', 'http', or 'sse' [legacy] (default: stdio) |
| -s | --host | Host address for HTTP transport (default: 0.0.0.0) |
| -p | --port | Port number for HTTP transport (default: 8000) |
| | --auth-type | Authentication type: 'none', 'static', 'jwt', 'oauth-proxy', 'oidc-proxy', 'remote-oauth' (default: none) |
| | --token-jwks-uri | JWKS URI for JWT verification |
| | --token-issuer | Issuer for JWT verification |
| | --token-audience | Audience for JWT verification |
| | --oauth-upstream-auth-endpoint | Upstream authorization endpoint for OAuth Proxy |
| | --oauth-upstream-token-endpoint | Upstream token endpoint for OAuth Proxy |
| | --oauth-upstream-client-id | Upstream client ID for OAuth Proxy |
| | --oauth-upstream-client-secret | Upstream client secret for OAuth Proxy |
| | --oauth-base-url | Base URL for OAuth Proxy |
| | --oidc-config-url | OIDC configuration URL |
| | --oidc-client-id | OIDC client ID |
| | --oidc-client-secret | OIDC client secret |
| | --oidc-base-url | Base URL for OIDC Proxy |
| | --remote-auth-servers | Comma-separated list of authorization servers for Remote OAuth |
| | --remote-base-url | Base URL for Remote OAuth |
| | --allowed-client-redirect-uris | Comma-separated list of allowed client redirect URIs |
| | --eunomia-type | Eunomia authorization type: 'none', 'embedded', 'remote' (default: none) |
| | --eunomia-policy-file | Policy file for embedded Eunomia (default: mcp_policies.json) |
| | --eunomia-remote-url | URL for remote Eunomia server |
### Using as an MCP Server
The MCP Server can be run in two modes: `stdio` (for local testing) or `http` (for networked access). To start the server, use the following commands:
#### Run in stdio mode (default):
```bash
media-downloader-mcp
```
#### Run in HTTP mode:
```bash
media-downloader-mcp --transport http --host 0.0.0.0 --port 8012
```
AI Prompt:
```text
Download me this video: https://youtube.com/watch?askdjfa
```
AI Response:
```text
Sure thing, the video has been downloaded to:
"C:\Users\User\Downloads\YouTube Video - Episode 1.mp4"
```
### Use in Python
```python
# Import library
from media_downloader.media_downloader import MediaDownloader
# Set URL of video/audio here
url = "https://YootToob.com/video"
# Instantiate the MediaDownloader
video_downloader_instance = MediaDownloader()
# Set the location to save the video
video_downloader_instance.set_save_path("C:/Users/you/Downloads")
# Add URL to download
video_downloader_instance.append_link(url)
# Download all videos appended
video_downloader_instance.download_all()
```
```python
# Optional - Set Audio to True, Default is False if unspecified.
video_downloader_instance.set_audio(audio=True)
# Optional - Open a file of video/audio URL(s)
video_downloader_instance.open_file("FILE")
# Optional - Enter a YouTube channel name and download their latest videos
video_downloader_instance.get_channel_videos("YT-Channel Name")
```
### Agent-to-Agent (A2A) Server
This package includes an Agent utilizing `pydantic-ai` that can be deployed as an A2A server. This agent is capable of using the `media-downloader` MCP server to fulfill media retrieval requests.
#### Endpoints
- **Web UI**: `http://localhost:8000/` (if enabled)
- **A2A**: `http://localhost:8000/a2a` (Discovery: `/a2a/.well-known/agent.json`)
- **AG-UI**: `http://localhost:8000/ag-ui` (POST)
#### A2A CLI
| Long Flag | Description | Default |
|------------------|--------------------------------------------------|-----------------------------|
| --host | Host to bind the server to | 0.0.0.0 |
| --port | Port to bind the server to | 8000 |
| --reload | Enable auto-reload | False |
| --provider | LLM Provider (openai, anthropic, google, etc) | openai |
| --model-id | LLM Model ID | qwen3:4b |
| --base-url | LLM Base URL (for OpenAI compatible providers) | http://ollama.arpa/v1 |
| --api-key | LLM API Key | ollama |
| --mcp-url | MCP Server URL to connect to | http://media-downloader-mcp.arpa/mcp |
| --allowed-tools | List of allowed MCP tools | download_media |
| --web | Enable Pydantic AI Web UI | False |
#### Run A2A Server
```bash
media-downloader-agent --provider openai --model-id qwen2.5:7b --mcp-url http://localhost:8004/mcp
```
### Deploy A2A Server as a Service
```bash
docker run -e CMD=media-downloader-agent \
-e PROVIDER=openai \
-e MODEL_ID=qwen2.5:7b \
-p 8000:8000 \
my-media-downloader-image
```
### Deploy MCP Server as a Service
The MCP server can be deployed using Docker, with configurable authentication, middleware, and Eunomia authorization.
#### Using Docker Run
```bash
docker pull knucklessg1/media-downloader:latest
docker run -d \
--name media-downloader-mcp \
-p 8004:8004 \
-e HOST=0.0.0.0 \
-e PORT=8004 \
-e TRANSPORT=http \
-e AUTH_TYPE=none \
-e EUNOMIA_TYPE=none \
-e DOWNLOAD_DIRECTORY=/downloads \
-e AUDIO_ONLY=false \
-v "/home/genius/Downloads:/downloads" \
knucklessg1/media-downloader:latest
```
For advanced authentication (e.g., JWT, OAuth Proxy, OIDC Proxy, Remote OAuth) or Eunomia, add the relevant environment variables:
```bash
docker run -d \
--name media-downloader-mcp \
-p 8004:8004 \
-e HOST=0.0.0.0 \
-e PORT=8004 \
-e TRANSPORT=http \
-e AUTH_TYPE=oidc-proxy \
-e OIDC_CONFIG_URL=https://provider.com/.well-known/openid-configuration \
-e OIDC_CLIENT_ID=your-client-id \
-e OIDC_CLIENT_SECRET=your-client-secret \
-e OIDC_BASE_URL=https://your-server.com \
-e ALLOWED_CLIENT_REDIRECT_URIS=http://localhost:*,https://*.example.com/* \
-e EUNOMIA_TYPE=embedded \
-e EUNOMIA_POLICY_FILE=/app/mcp_policies.json \
-e DOWNLOAD_DIRECTORY=/downloads \
-e AUDIO_ONLY=false \
-v "/home/genius/Downloads:/downloads" \
knucklessg1/media-downloader:latest
```
#### Using Docker Compose
Create a `docker-compose.yml` file:
```yaml
services:
media-downloader-mcp:
image: knucklessg1/media-downloader:latest
environment:
- HOST=0.0.0.0
- PORT=8004
- TRANSPORT=http
- AUTH_TYPE=none
- EUNOMIA_TYPE=none
- DOWNLOAD_DIRECTORY=/downloads
- AUDIO_ONLY=false
volumes:
- "/home/genius/Downloads:/downloads"
ports:
- 8004:8004
```
For advanced setups with authentication and Eunomia:
```yaml
services:
media-downloader-mcp:
image: knucklessg1/media-downloader:latest
environment:
- HOST=0.0.0.0
- PORT=8004
- TRANSPORT=http
- AUTH_TYPE=oidc-proxy
- OIDC_CONFIG_URL=https://provider.com/.well-known/openid-configuration
- OIDC_CLIENT_ID=your-client-id
- OIDC_CLIENT_SECRET=your-client-secret
- OIDC_BASE_URL=https://your-server.com
- ALLOWED_CLIENT_REDIRECT_URIS=http://localhost:*,https://*.example.com/*
- EUNOMIA_TYPE=embedded
- EUNOMIA_POLICY_FILE=/app/mcp_policies.json
- DOWNLOAD_DIRECTORY=/downloads
- AUDIO_ONLY=false
ports:
- 8004:8004
volumes:
- ./mcp_policies.json:/app/mcp_policies.json
- "/home/genius/Downloads:/downloads"
```
Run the service:
```bash
docker-compose up -d
```
#### Configure `mcp.json` for AI Integration
```json
{
"mcpServers": {
"media_downloader": {
"command": "uv",
"args": [
"run",
"--with",
"media-downloader",
"media-downloader-mcp"
],
"env": {
"DOWNLOAD_DIRECTORY": "~/Downloads",
"AUDIO_ONLY": "false"
},
"timeout": 300000
}
}
}
```
Note: JSON does not support comments. `DOWNLOAD_DIRECTORY` and `AUDIO_ONLY` are both optional and can instead be supplied at prompt time.
## Install Python Package
```bash
python -m pip install --upgrade media-downloader
```
or
```bash
uv pip install --upgrade media-downloader
```
## Repository Owners
<img width="100%" height="180em" src="https://github-readme-stats.vercel.app/api?username=Knucklessg1&show_icons=true&hide_border=true&&count_private=true&include_all_commits=true" />


| text/markdown | null | Audel Rouhi <knucklessg1@gmail.com> | null | null | MIT | null | [
"Development Status :: 5 - Production/Stable",
"License :: Public Domain",
"Environment :: Console",
"Operating System :: POSIX :: Linux",
"Programming Language :: Python :: 3"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"tree-sitter>=0.23.2",
"yt-dlp[default]>=2025.12.08",
"fastmcp>=3.0.0b1; extra == \"mcp\"",
"eunomia-mcp>=0.3.10; extra == \"mcp\"",
"fastapi>=0.128.0; extra == \"mcp\"",
"pydantic-ai-slim[a2a,ag-ui,anthropic,fastmcp,google,groq,huggingface,mistral,openai,web]>=1.60.0; extra == \"a2a\"",
"pydantic-ai-sk... | [] | [] | [] | [] | twine/6.2.0 CPython/3.12.3 | 2026-02-19T06:33:48.258946 | media_downloader-2.2.14.tar.gz | 34,048 | 25/8c/9c3045aed4f2ab9427a68565c244de55e114d45559fb80d950df7628f40a/media_downloader-2.2.14.tar.gz | source | sdist | null | false | 10fa43d7528e54c88aec3f9b4c6a15d9 | af201d5ca6357f86105dcd1851e2cf2bcb1eb1eb50b3c4a2c3a5b19510ffa945 | 258c9c3045aed4f2ab9427a68565c244de55e114d45559fb80d950df7628f40a | null | [
"LICENSE"
] | 261 |
2.4 | nextcloud-agent | 0.2.14 | Nextcloud MCP Server for Agentic AI! | # Nextcloud - A2A | AG-UI | MCP


















*Version: 0.2.14*
## Overview
Nextcloud MCP Server + A2A Server
It includes a Model Context Protocol (MCP) server and an out-of-the-box Agent2Agent (A2A) agent.
Interacts with your self-hosted Nextcloud instance to manage files, calendars, contacts, and sharing through an MCP server!
This repository is actively maintained - Contributions are welcome!
### Supports:
- **File Operations**: List, Read, Write, Move, Copy, Delete, Create Folder, Get Properties
- **Sharing**: List Shares, Create Share, Delete Share
- **Calendars**: List Calendars, List Events, Create Event
- **Contacts**: List Address Books, List Contacts, Create Contact
- **User Info**: Get current user details
## MCP
### MCP Tools
| Function Name | Description | Tag(s) |
|:---|:---|:---|
| `list_files` | List files and directories at a specific path. | `files` |
| `read_file` | Read the contents of a text file. | `files` |
| `write_file` | Write text content to a file. | `files` |
| `create_folder` | Create a new directory. | `files` |
| `delete_item` | Delete a file or directory. | `files` |
| `move_item` | Move a file or directory. | `files` |
| `copy_item` | Copy a file or directory. | `files` |
| `get_properties` | Get detailed properties for a file or folder. | `files` |
| `list_shares` | List all shares. | `sharing` |
| `create_share` | Create a new share (User, Group, Link, Email). | `sharing` |
| `delete_share` | Delete a share. | `sharing` |
| `list_calendars` | List available calendars. | `calendar` |
| `list_calendar_events` | List events in a calendar. | `calendar` |
| `create_calendar_event` | Create a calendar event. | `calendar` |
| `list_address_books` | List address books. | `contacts` |
| `list_contacts` | List contacts in an address book. | `contacts` |
| `create_contact` | Create a new contact. | `contacts` |
| `get_user_info` | Get information about the current user. | `user` |
### Using as an MCP Server
The MCP Server can be run in two modes: `stdio` (for local testing) or `http` (for networked access). To start the server, use the following commands:
#### Run in stdio mode (default):
```bash
nextcloud-agent --transport "stdio"
```
#### Run in HTTP mode:
```bash
nextcloud-agent --transport "http" --host "0.0.0.0" --port "8016"
```
AI Prompt:
```text
List all files in my 'Documents' folder.
```
AI Response:
```text
Contents of 'Documents':
[FILE] Project_Proposal.docx (Size: 15403, Modified: Sun, 01 Feb 2026 10:00:00 GMT)
[FILE] Notes.txt (Size: 450, Modified: Sun, 01 Feb 2026 09:30:00 GMT)
[DIR] Financials (Size: -, Modified: Fri, 30 Jan 2026 14:20:00 GMT)
```
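Under the hood, Nextcloud's file API is WebDAV, so a tool like `list_files` corresponds to a `PROPFIND` request against the DAV endpoint. Whether this package issues WebDAV requests directly is an implementation detail; this stdlib sketch (hostname, username, and path are placeholders) just shows what such a request looks like:

```python
import urllib.request

def propfind_request(base: str, user: str, path: str) -> urllib.request.Request:
    """Build (but do not send) a WebDAV PROPFIND request for a folder listing."""
    url = f"{base.rstrip('/')}/remote.php/dav/files/{user}/{path.lstrip('/')}"
    req = urllib.request.Request(url, method="PROPFIND")
    req.add_header("Depth", "1")  # immediate children only, not a recursive walk
    return req

req = propfind_request("https://cloud.example.com", "alice", "/Documents")
print(req.get_method(), req.get_full_url())
```

Sending the request also requires Basic auth credentials (or an app password), which are omitted here.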
## A2A Agent
This package also includes an A2A agent server that can be used to interact with the Nextcloud MCP server.
### Architecture:
```mermaid
---
config:
layout: dagre
---
flowchart TB
subgraph subGraph0["Agent Capabilities"]
C["Agent"]
B["A2A Server - Uvicorn/FastAPI"]
D["MCP Tools"]
F["Agent Skills"]
end
C --> D & F
A["User Query"] --> B
B --> C
D --> E["Nextcloud API"]
C:::agent
B:::server
A:::server
classDef server fill:#f9f,stroke:#333
classDef agent fill:#bbf,stroke:#333,stroke-width:2px
style B stroke:#000000,fill:#FFD600
style D stroke:#000000,fill:#BBDEFB
style F fill:#BBDEFB
style A fill:#C8E6C9
style subGraph0 fill:#FFF9C4
```
### Component Interaction Diagram
```mermaid
sequenceDiagram
participant User
participant Server as A2A Server
participant Agent as Agent
participant Skill as Agent Skills
participant MCP as MCP Tools
User->>Server: Send Query
Server->>Agent: Invoke Agent
Agent->>Skill: Analyze Skills Available
Skill->>Agent: Provide Guidance on Next Steps
Agent->>MCP: Invoke Tool
MCP-->>Agent: Tool Response Returned
Agent-->>Agent: Return Results Summarized
Agent-->>Server: Final Response
Server-->>User: Output
```
## Usage
### MCP CLI
| Short Flag | Long Flag | Description |
|------------|------------------------------------|-----------------------------------------------------------------------------|
| -h | --help | Display help information |
| -t | --transport | Transport method: 'stdio', 'http', or 'sse' [legacy] (default: stdio) |
| -s | --host | Host address for HTTP transport (default: 0.0.0.0) |
| -p | --port | Port number for HTTP transport (default: 8016) |
| | --auth-type | Authentication type: 'none', 'static', 'jwt', 'oauth-proxy', 'oidc-proxy', 'remote-oauth' (default: none) |
| | --token-jwks-uri | JWKS URI for JWT verification |
| | --token-issuer | Issuer for JWT verification |
| | --token-audience | Audience for JWT verification |
| | --oauth-upstream-auth-endpoint | Upstream authorization endpoint for OAuth Proxy |
| | --oauth-upstream-token-endpoint | Upstream token endpoint for OAuth Proxy |
| | --oauth-upstream-client-id | Upstream client ID for OAuth Proxy |
| | --oauth-upstream-client-secret | Upstream client secret for OAuth Proxy |
| | --oauth-base-url | Base URL for OAuth Proxy |
| | --oidc-config-url | OIDC configuration URL |
| | --oidc-client-id | OIDC client ID |
| | --oidc-client-secret | OIDC client secret |
| | --oidc-base-url | Base URL for OIDC Proxy |
| | --remote-auth-servers | Comma-separated list of authorization servers for Remote OAuth |
| | --remote-base-url | Base URL for Remote OAuth |
| | --allowed-client-redirect-uris | Comma-separated list of allowed client redirect URIs |
| | --eunomia-type | Eunomia authorization type: 'none', 'embedded', 'remote' (default: none) |
| | --eunomia-policy-file | Policy file for embedded Eunomia (default: mcp_policies.json) |
| | --eunomia-remote-url | URL for remote Eunomia server |
### A2A CLI
#### Endpoints
- **Web UI**: `http://localhost:9016/` (if enabled)
- **A2A**: `http://localhost:9016/a2a` (Discovery: `/a2a/.well-known/agent.json`)
- **AG-UI**: `http://localhost:9016/ag-ui` (POST)
| Short Flag | Long Flag | Description |
|------------|-------------------|------------------------------------------------------------------------|
| -h | --help | Display help information |
| | --host | Host to bind the server to (default: 0.0.0.0) |
| | --port | Port to bind the server to (default: 9016) |
| | --reload | Enable auto-reload |
| | --provider | LLM Provider: 'openai', 'anthropic', 'google', 'huggingface' |
| | --model-id | LLM Model ID (default: qwen/qwen3-coder-next) |
| | --base-url | LLM Base URL (for OpenAI compatible providers) |
| | --api-key | LLM API Key |
| | --mcp-url | MCP Server URL (default: http://localhost:8016/mcp) |
| | --web | Enable Pydantic AI Web UI (default: False; env: ENABLE_WEB_UI) |
### Using as an MCP Server
The MCP Server can be run in two modes: `stdio` (for local testing) or `http` (for networked access). To start the server, use the following commands:
#### Run in stdio mode (default):
```bash
nextcloud-agent --transport "stdio"
```
#### Run in HTTP mode:
```bash
nextcloud-agent --transport "http" --host "0.0.0.0" --port "8016"
```
AI Prompt:
```text
List all files in my 'Documents' folder.
```
AI Response:
```text
Contents of 'Documents':
[FILE] Project_Proposal.docx (Size: 15403, Modified: Sun, 01 Feb 2026 10:00:00 GMT)
[FILE] Notes.txt (Size: 450, Modified: Sun, 01 Feb 2026 09:30:00 GMT)
[DIR] Financials (Size: -, Modified: Fri, 30 Jan 2026 14:20:00 GMT)
```
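A downstream consumer might turn such a listing into structured records. The sketch below parses the sample format shown above; it is illustrative only, since the exact output format of the listing tool is not guaranteed:

```python
import re

# Pattern for listing lines like:
#   [FILE] Notes.txt (Size: 450, Modified: Sun, 01 Feb 2026 09:30:00 GMT)
# Matches the sample output above; real output may vary between versions.
LINE_RE = re.compile(
    r"\[(FILE|DIR)\]\s+(\S+)\s+\(Size:\s*([-\d]+),\s*Modified:\s*(.+?)\)"
)

def parse_listing(text: str) -> list[dict]:
    """Turn the human-readable listing into structured records."""
    entries = []
    for match in LINE_RE.finditer(text):
        kind, name, size, modified = match.groups()
        entries.append({
            "type": kind,
            "name": name,
            # Directories report "-" for size; store None instead.
            "size": None if size == "-" else int(size),
            "modified": modified,
        })
    return entries
```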
### Agentic AI
`nextcloud-agent` is designed to be used by Agentic AI systems. It provides a set of tools that allow agents to manage Nextcloud resources.
## Agent-to-Agent (A2A)
This package also includes an A2A agent server that can be used to interact with the Nextcloud MCP server.
### CLI
| Argument | Description | Default |
|-------------------|----------------------------------------------------------------|--------------------------------|
| `--host` | Host to bind the server to | `0.0.0.0` |
| `--port` | Port to bind the server to | `9016` |
| `--reload` | Enable auto-reload | `False` |
| `--provider` | LLM Provider (openai, anthropic, google, huggingface) | `openai` |
| `--model-id` | LLM Model ID | `qwen/qwen3-coder-next` |
| `--base-url` | LLM Base URL (for OpenAI compatible providers) | `http://ollama.arpa/v1` |
| `--api-key` | LLM API Key | `ollama` |
| `--mcp-url` | MCP Server URL | `http://nextcloud-mcp:8016/mcp` |
| `--allowed-tools` | List of allowed MCP tools | `list_files`, `...` |
### Examples
#### Run A2A Server
```bash
nextcloud-agent --provider openai --model-id gpt-4 --api-key sk-... --mcp-url http://localhost:8016/mcp
```
#### Run with Docker
```bash
docker run -e CMD=nextcloud-agent -p 9016:9016 nextcloud-agent
```
## Docker
### Build
```bash
docker build -t nextcloud-agent .
```
### Run MCP Server
```bash
docker run -p 8016:8016 nextcloud-agent
```
### Run A2A Server
```bash
docker run -e CMD=nextcloud-agent -p 9016:9016 nextcloud-agent
```
### Deploy MCP Server as a Service
The Nextcloud MCP server can be deployed using Docker, with configurable authentication, middleware, and Eunomia authorization.
#### Using Docker Run
```bash
docker pull knucklessg1/nextcloud-agent:latest
docker run -d \
--name nextcloud-agent \
-p 8016:8016 \
-e HOST=0.0.0.0 \
-e PORT=8016 \
-e TRANSPORT=http \
-e AUTH_TYPE=none \
-e EUNOMIA_TYPE=none \
-e NEXTCLOUD_BASE_URL=https://cloud.example.com \
-e NEXTCLOUD_USERNAME=user \
-e NEXTCLOUD_PASSWORD=pass \
knucklessg1/nextcloud-agent:latest
```
For advanced authentication (e.g., JWT, OAuth Proxy, OIDC Proxy, Remote OAuth) or Eunomia, add the relevant environment variables:
```bash
docker run -d \
--name nextcloud-agent \
-p 8016:8016 \
-e HOST=0.0.0.0 \
-e PORT=8016 \
-e TRANSPORT=http \
-e AUTH_TYPE=oidc-proxy \
-e OIDC_CONFIG_URL=https://provider.com/.well-known/openid-configuration \
-e OIDC_CLIENT_ID=your-client-id \
-e OIDC_CLIENT_SECRET=your-client-secret \
-e OIDC_BASE_URL=https://your-server.com \
-e ALLOWED_CLIENT_REDIRECT_URIS=http://localhost:*,https://*.example.com/* \
-e EUNOMIA_TYPE=embedded \
-e EUNOMIA_POLICY_FILE=/app/mcp_policies.json \
-e NEXTCLOUD_BASE_URL=https://cloud.example.com \
-e NEXTCLOUD_USERNAME=user \
-e NEXTCLOUD_PASSWORD=pass \
knucklessg1/nextcloud-agent:latest
```
#### Using Docker Compose
Create a `docker-compose.yml` file:
```yaml
services:
  nextcloud-mcp:
    image: knucklessg1/nextcloud-agent:latest
    environment:
      - HOST=0.0.0.0
      - PORT=8016
      - TRANSPORT=http
      - AUTH_TYPE=none
      - EUNOMIA_TYPE=none
      - NEXTCLOUD_BASE_URL=https://cloud.example.com
      - NEXTCLOUD_USERNAME=user
      - NEXTCLOUD_PASSWORD=pass
    ports:
      - "8016:8016"
```
For advanced setups with authentication and Eunomia:
```yaml
services:
  nextcloud-mcp:
    image: knucklessg1/nextcloud-agent:latest
    environment:
      - HOST=0.0.0.0
      - PORT=8016
      - TRANSPORT=http
      - AUTH_TYPE=oidc-proxy
      - OIDC_CONFIG_URL=https://provider.com/.well-known/openid-configuration
      - OIDC_CLIENT_ID=your-client-id
      - OIDC_CLIENT_SECRET=your-client-secret
      - OIDC_BASE_URL=https://your-server.com
      - ALLOWED_CLIENT_REDIRECT_URIS=http://localhost:*,https://*.example.com/*
      - EUNOMIA_TYPE=embedded
      - EUNOMIA_POLICY_FILE=/app/mcp_policies.json
      - NEXTCLOUD_BASE_URL=https://cloud.example.com
      - NEXTCLOUD_USERNAME=user
      - NEXTCLOUD_PASSWORD=pass
    ports:
      - "8016:8016"
    volumes:
      - ./mcp_policies.json:/app/mcp_policies.json
```
Run the service:
```bash
docker-compose up -d
```
#### Configure `mcp.json` for AI Integration
```json
{
  "mcpServers": {
    "nextcloud": {
      "command": "uv",
      "args": [
        "run",
        "--with",
        "nextcloud-agent",
        "nextcloud-agent"
      ],
      "env": {
        "NEXTCLOUD_BASE_URL": "https://cloud.example.com",
        "NEXTCLOUD_USERNAME": "user",
        "NEXTCLOUD_PASSWORD": "pass"
      },
      "timeout": 300000
    }
  }
}
```
## Install Python Package
```bash
python -m pip install nextcloud-agent
```
```bash
uv pip install nextcloud-agent
```
## Repository Owners
<img width="100%" height="180em" src="https://github-readme-stats.vercel.app/api?username=Knucklessg1&show_icons=true&hide_border=true&&count_private=true&include_all_commits=true" />


| text/markdown | null | Audel Rouhi <knucklessg1@gmail.com> | null | null | MIT | null | [
"Development Status :: 3 - Alpha",
"License :: OSI Approved :: MIT License",
"Environment :: Console",
"Operating System :: POSIX :: Linux",
"Programming Language :: Python :: 3"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"tree-sitter>=0.23.2",
"requests>=2.31.0",
"urllib3>=2.2.2",
"python-dateutil>=2.8.2",
"icalendar>=6.1.1",
"vobject>=0.9.9",
"fastmcp>=3.0.0b1",
"eunomia-mcp>=0.3.10",
"pydantic-ai-slim[a2a,ag-ui,anthropic,fastmcp,google,groq,huggingface,mistral,openai,web]>=1.60.0",
"pydantic-ai-skills>=v0.4.0",
... | [] | [] | [] | [] | twine/6.2.0 CPython/3.12.3 | 2026-02-19T06:33:44.740007 | nextcloud_agent-0.2.14.tar.gz | 39,455 | d3/bb/f34e31f0625e4f2d59ebe459e526602a922a791fd2dc5537e36cc523b7a0/nextcloud_agent-0.2.14.tar.gz | source | sdist | null | false | dd6bfcfab18659ef4bae16f812f79f7b | 906b8fa0cfc25b137cbf87a14a27913e963cff32fb631e5bb2c2d3f6b136a034 | d3bbf34e31f0625e4f2d59ebe459e526602a922a791fd2dc5537e36cc523b7a0 | null | [
"LICENSE"
] | 239 |
2.4 | repository-manager | 1.3.14 | Manage your git projects | # Repository Manager - A2A | AG-UI | MCP


















*Version: 1.3.14*
## Overview
A Ralph Wiggum inspired coding agent and repository manager!
This powerful agent can manage your repositories in bulk, implement new features using Ralph Wiggum methodology, run git commands, create and edit code in multiple projects, and query your code base!
Run all supported Git tasks using the Git Actions command
Run as an MCP Server for Agentic AI with an A2A/AG-UI/Web Server!
## MCP
AI Prompt:
```text
Clone all the git projects located in the file "/home/genius/Development/repositories-list/repositories.txt" to my "/home/genius/Development" workspace.
Afterwards, pull all the projects located in the "/home/genius/Development" repository workspace.
```
AI Response:
```text
All projects in "/home/genius/Development/repositories-list/repositories.txt" have been cloned to "/home/genius/Development"
and all projects in "/home/genius/Development" have been pulled from the repositories. Let me know if you need any further actions! 🚀
```
This repository is actively maintained - Contributions are welcome!
## A2A Agent
### Architecture:
```mermaid
---
config:
layout: dagre
---
flowchart TB
subgraph subGraph0["Agent Capabilities"]
C["Agent"]
B["A2A Server - Uvicorn/FastAPI"]
D["MCP Tools"]
F["Agent Skills"]
end
C --> D & F
A["User Query"] --> B
B --> C
D --> E["Platform API"]
C:::agent
B:::server
A:::server
classDef server fill:#f9f,stroke:#333
classDef agent fill:#bbf,stroke:#333,stroke-width:2px
style B stroke:#000000,fill:#FFD600
style D stroke:#000000,fill:#BBDEFB
style F fill:#BBDEFB
style A fill:#C8E6C9
style subGraph0 fill:#FFF9C4
```
### Component Interaction Diagram
```mermaid
sequenceDiagram
participant User
participant Server as A2A Server
participant Agent as Agent
participant Skill as Agent Skills
participant MCP as MCP Tools
User->>Server: Send Query
Server->>Agent: Invoke Agent
Agent->>Skill: Analyze Skills Available
Skill->>Agent: Provide Guidance on Next Steps
Agent->>MCP: Invoke Tool
MCP-->>Agent: Tool Response Returned
Agent-->>Agent: Return Results Summarized
Agent-->>Server: Final Response
Server-->>User: Output
```
## Usage
### CLI
| Short Flag | Long Flag | Description |
|------------|------------------|----------------------------------------|
| -h | --help | See Usage |
| -b | --default-branch | Checkout default branch |
| -c | --clone | Clone projects specified |
| -w | --workspace | Workspace to clone/pull projects |
| -f | --file | File with repository links |
| -p | --pull | Pull projects in parent directory |
| -r | --repositories | Comma separated Git URLs |
| -t | --threads | Number of parallel threads - Default 4 |
```bash
repository-manager \
--clone \
--pull \
--workspace '/home/user/Downloads' \
--file '/home/user/Downloads/repositories.txt' \
--repositories 'https://github.com/Knucklessg1/media-downloader,https://github.com/Knucklessg1/genius-bot' \
--threads 8
```
### MCP CLI
| Short Flag | Long Flag | Description |
|------------|------------------------------------|-----------------------------------------------------------------------------|
| -h | --help | Display help information |
| -t | --transport | Transport method: 'stdio', 'http', or 'sse' [legacy] (default: stdio) |
| -s | --host | Host address for HTTP transport (default: 0.0.0.0) |
| -p | --port | Port number for HTTP transport (default: 8000) |
| | --auth-type | Authentication type: 'none', 'static', 'jwt', 'oauth-proxy', 'oidc-proxy', 'remote-oauth' (default: none) |
| | --token-jwks-uri | JWKS URI for JWT verification |
| | --token-issuer | Issuer for JWT verification |
| | --token-audience | Audience for JWT verification |
| | --oauth-upstream-auth-endpoint | Upstream authorization endpoint for OAuth Proxy |
| | --oauth-upstream-token-endpoint | Upstream token endpoint for OAuth Proxy |
| | --oauth-upstream-client-id | Upstream client ID for OAuth Proxy |
| | --oauth-upstream-client-secret | Upstream client secret for OAuth Proxy |
| | --oauth-base-url | Base URL for OAuth Proxy |
| | --oidc-config-url | OIDC configuration URL |
| | --oidc-client-id | OIDC client ID |
| | --oidc-client-secret | OIDC client secret |
| | --oidc-base-url | Base URL for OIDC Proxy |
| | --remote-auth-servers | Comma-separated list of authorization servers for Remote OAuth |
| | --remote-base-url | Base URL for Remote OAuth |
| | --allowed-client-redirect-uris | Comma-separated list of allowed client redirect URIs |
| | --eunomia-type | Eunomia authorization type: 'none', 'embedded', 'remote' (default: none) |
| | --eunomia-policy-file | Policy file for embedded Eunomia (default: mcp_policies.json) |
| | --eunomia-remote-url | URL for remote Eunomia server |
### A2A CLI
| Short Flag | Long Flag | Description |
|------------|-------------------|------------------------------------------------------------------------|
| -h | --help | Display help information |
| | --host | Host to bind the server to (default: 0.0.0.0) |
| | --port | Port to bind the server to (default: 9000) |
| | --reload | Enable auto-reload |
| | --provider | LLM Provider: 'openai', 'anthropic', 'google', 'huggingface' |
| | --model-id | LLM Model ID (default: qwen3:4b) |
| | --base-url | LLM Base URL (for OpenAI compatible providers) |
| | --api-key | LLM API Key |
| | --smart-coding-mcp-enable | Enable Smart Coding MCP configuration |
| | --python-sandbox-enable | Enable Python Sandbox MCP configuration |
| | --workspace | Workspace to scan for git projects (default: current directory) |
### Smart Coding MCP Integration
The Repository Manager A2A Agent can automatically configure `smart-coding-mcp` for any Git projects found in a specified directory.
```bash
repository_manager_a2a --smart-coding-mcp-enable --workspace /path/to/my/projects
```
This will:
1. Scan `/path/to/my/projects` for any subdirectories containing a `.git` folder.
2. Update `mcp_config.json` to include a `smart-coding-mcp` server entry for each found project.
3. Start the agent with access to these new MCP servers, allowing for semantic code search within your projects.
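The scan-and-configure steps above can be sketched in Python. The entry shape and the `PROJECT_PATH` variable below are illustrative assumptions, not the exact schema the agent writes to `mcp_config.json`:

```python
import json
from pathlib import Path

def build_smart_coding_entries(workspace: str) -> dict:
    """Scan `workspace` for subdirectories containing a .git folder and
    build one MCP server entry per project (illustrative structure)."""
    entries = {}
    for project in sorted(Path(workspace).iterdir()):
        if project.is_dir() and (project / ".git").is_dir():
            entries[f"smart-coding-{project.name}"] = {
                "command": "uv",
                "args": ["run", "--with", "smart-coding-mcp", "smart-coding-mcp"],
                "env": {"PROJECT_PATH": str(project)},  # assumed variable name
            }
    return entries

def merge_into_config(config_path: str, entries: dict) -> None:
    """Merge generated entries into an existing (or new) mcp_config.json."""
    path = Path(config_path)
    config = json.loads(path.read_text()) if path.exists() else {}
    config.setdefault("mcpServers", {}).update(entries)
    path.write_text(json.dumps(config, indent=2))
```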
### Python Sandbox Integration
The Agent can execute Python code in a secure Deno sandbox using `mcp-run-python`.
```bash
repository_manager_a2a --python-sandbox-enable
```
This will:
1. Configure `mcp_config.json` to include the `python-sandbox` server.
2. Enable the `Python Sandbox` skill, allowing the agent to run scripts for calculation, testing, or logic verification.
### Default Repository List
The agent will automatically load the `repositories-list.txt` file included in the package as the default project list if no `PROJECTS_FILE` environment variable is set. This ensures the agent always has a list of repositories to work with.
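The fallback logic described above can be sketched as follows; `packaged_default` stands in for the contents of the bundled `repositories-list.txt`, and the exact resolution order inside the agent may differ:

```python
import os

def load_repository_list(packaged_default: list[str]) -> list[str]:
    """Prefer the file named by the PROJECTS_FILE environment variable;
    fall back to the list shipped with the package."""
    path = os.environ.get("PROJECTS_FILE")
    if path and os.path.isfile(path):
        with open(path) as handle:
            # One repository URL per line; skip blank lines.
            return [line.strip() for line in handle if line.strip()]
    return packaged_default
```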
### Using as an MCP Server
The MCP Server can be run in two modes: `stdio` (for local testing) or `http` (for networked access). To start the server, use the following commands:
#### Run in stdio mode (default):
```bash
repository-manager-mcp --transport "stdio"
```
#### Run in HTTP mode:
```bash
repository-manager-mcp --transport "http" --host "0.0.0.0" --port "8000"
```
### Use in Python
```python
from repository_manager.repository_manager import Git
gitlab = Git()
gitlab.set_workspace("<workspace>")
gitlab.set_threads(threads=8)
gitlab.set_git_projects("<projects>")
gitlab.set_default_branch(set_to_default_branch=True)
gitlab.clone_projects_in_parallel()
gitlab.pull_projects_in_parallel()
```
### Deploy MCP Server as a Service
The Repository Manager MCP server can be deployed using Docker, with configurable authentication, middleware, and Eunomia authorization.
#### Using Docker Run
```bash
docker pull knucklessg1/repository-manager:latest
docker run -d \
--name repository-manager-mcp \
-p 8004:8004 \
-e HOST=0.0.0.0 \
-e PORT=8004 \
-e TRANSPORT=http \
-e AUTH_TYPE=none \
-e EUNOMIA_TYPE=none \
-v development:/root/Development \
knucklessg1/repository-manager:latest
```
For advanced authentication (e.g., JWT, OAuth Proxy, OIDC Proxy, Remote OAuth) or Eunomia, add the relevant environment variables:
```bash
docker run -d \
--name repository-manager-mcp \
-p 8004:8004 \
-e HOST=0.0.0.0 \
-e PORT=8004 \
-e TRANSPORT=http \
-e AUTH_TYPE=oidc-proxy \
-e OIDC_CONFIG_URL=https://provider.com/.well-known/openid-configuration \
-e OIDC_CLIENT_ID=your-client-id \
-e OIDC_CLIENT_SECRET=your-client-secret \
-e OIDC_BASE_URL=https://your-server.com \
-e ALLOWED_CLIENT_REDIRECT_URIS=http://localhost:*,https://*.example.com/* \
-e EUNOMIA_TYPE=embedded \
-e EUNOMIA_POLICY_FILE=/app/mcp_policies.json \
-v development:/root/Development \
knucklessg1/repository-manager:latest
```
#### Using Docker Compose
Create a `docker-compose.yml` file:
```yaml
services:
  repository-manager-mcp:
    image: knucklessg1/repository-manager:latest
    environment:
      - HOST=0.0.0.0
      - PORT=8004
      - TRANSPORT=http
      - AUTH_TYPE=none
      - EUNOMIA_TYPE=none
    volumes:
      - development:/root/Development
    ports:
      - "8004:8004"
volumes:
  development:
```
For advanced setups with authentication and Eunomia:
```yaml
services:
  repository-manager-mcp:
    image: knucklessg1/repository-manager:latest
    environment:
      - HOST=0.0.0.0
      - PORT=8004
      - TRANSPORT=http
      - AUTH_TYPE=oidc-proxy
      - OIDC_CONFIG_URL=https://provider.com/.well-known/openid-configuration
      - OIDC_CLIENT_ID=your-client-id
      - OIDC_CLIENT_SECRET=your-client-secret
      - OIDC_BASE_URL=https://your-server.com
      - ALLOWED_CLIENT_REDIRECT_URIS=http://localhost:*,https://*.example.com/*
      - EUNOMIA_TYPE=embedded
      - EUNOMIA_POLICY_FILE=/app/mcp_policies.json
    ports:
      - "8004:8004"
    volumes:
      - development:/root/Development
      - ./mcp_policies.json:/app/mcp_policies.json
volumes:
  development:
```
Run the service:
```bash
docker-compose up -d
```
#### Configure `mcp.json` for AI Integration
```json
{
  "mcpServers": {
    "repository_manager": {
      "command": "uv",
      "args": [
        "run",
        "--with",
        "repository-manager",
        "repository-manager-mcp"
      ],
      "env": {
        "REPOSITORY_MANAGER_WORKSPACE": "/home/user/Development/", // Optional - Can be specified at prompt
        "REPOSITORY_MANAGER_THREADS": "12", // Optional - Can be specified at prompt
        "REPOSITORY_MANAGER_DEFAULT_BRANCH": "True", // Optional - Can be specified at prompt
        "REPOSITORY_MANAGER_PROJECTS_FILE": "/home/user/Development/repositories.txt" // Optional - Can be specified at prompt
      },
      "timeout": 300000
    }
  }
}
```
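Note that the `//` annotations above are not valid in strict JSON, so `json.loads` would reject this file verbatim. A minimal loader that strips such trailing comments before parsing might look like this (the whitespace-before-`//` requirement keeps URLs like `http://localhost` intact, though a `//` inside a quoted string preceded by a space would still be a false positive):

```python
import json
import re

def load_jsonc(text: str) -> dict:
    """Parse JSON carrying //-style trailing comments, as in the
    mcp.json example above. Comments must be preceded by whitespace,
    so scheme separators such as http:// are left alone."""
    stripped = re.sub(r"(?m)\s+//.*$", "", text)
    return json.loads(stripped)
```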
### A2A
#### Endpoints
- **Web UI**: `http://localhost:8000/` (if enabled)
- **A2A**: `http://localhost:8000/a2a` (Discovery: `/a2a/.well-known/agent.json`)
- **AG-UI**: `http://localhost:8000/ag-ui` (POST)
#### A2A CLI
| Short Flag | Long Flag | Description |
|------------|-------------------|------------------------------------------------------------------------|
| -h | --help | Display help information |
| | --host | Host to bind the server to (default: 0.0.0.0) |
| | --port | Port to bind the server to (default: 9000) |
| | --reload | Enable auto-reload |
| | --provider | LLM Provider: 'openai', 'anthropic', 'google', 'huggingface' |
| | --model-id | LLM Model ID (default: qwen3:4b) |
| | --base-url | LLM Base URL (for OpenAI compatible providers) |
| | --api-key | LLM API Key |
| | --mcp-url | MCP Server URL (default: http://localhost:8000/mcp) |
| | --web | Enable Pydantic AI Web UI (default: False; Env: ENABLE_WEB_UI) |
## Install Python Package
```bash
python -m pip install --upgrade repository-manager
```
or
```bash
uv pip install --upgrade repository-manager
```
## Repository Owners
<img width="100%" height="180em" src="https://github-readme-stats.vercel.app/api?username=Knucklessg1&show_icons=true&hide_border=true&&count_private=true&include_all_commits=true" />


| text/markdown | null | Audel Rouhi <knucklessg1@gmail.com> | null | null | MIT | null | [
"Development Status :: 5 - Production/Stable",
"License :: Public Domain",
"Environment :: Console",
"Operating System :: POSIX :: Linux",
"Programming Language :: Python :: 3"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"tree-sitter>=0.23.2",
"fastmcp>=3.0.0b1",
"fastmcp>=3.0.0b1; extra == \"mcp\"",
"eunomia-mcp>=0.3.10; extra == \"mcp\"",
"fastapi>=0.128.0; extra == \"mcp\"",
"beautifulsoup4>=4.14.3; extra == \"mcp\"",
"httpx>=0.28.1; extra == \"mcp\"",
"pydantic-ai-slim[a2a,ag-ui,anthropic,fastmcp,google,groq,huggi... | [] | [] | [] | [] | twine/6.2.0 CPython/3.12.3 | 2026-02-19T06:33:42.176295 | repository_manager-1.3.14.tar.gz | 57,600 | b9/16/9cc9d926e3d477234e4dd32c17b51c6458cc0d9d2ca9fad2c40f8532e13f/repository_manager-1.3.14.tar.gz | source | sdist | null | false | 8a6295b6182983a69773aa5577505416 | a9f3e0aeda3780ab7dd275f0a1c250bc04ceeac3c9e0d235ec5c72bab104406f | b9169cc9d926e3d477234e4dd32c17b51c6458cc0d9d2ca9fad2c40f8532e13f | null | [
"LICENSE"
] | 245 |
2.4 | audio-transcriber | 0.6.15 | Transcribe your .wav .mp4 .mp3 .flac files to text or record your own audio! | # Audio-Transcriber - A2A | AG-UI | MCP


















*Version: 0.6.15*
## Overview
Transcribe your .wav .mp4 .mp3 .flac files to text or record your own audio!
This repository is actively maintained - Contributions are welcome!
Contribution Opportunities:
- Support new models
Wrapped around [OpenAI Whisper](https://pypi.org/project/openai-whisper)
## MCP
## MCP Tools
| Function Name | Description | Tag(s) |
|:-------------------|:----------------------------------------------------------------------------|:-------------------|
| `transcribe_audio` | Transcribes audio from a provided file or by recording from the microphone. | `audio_processing` |
## A2A Agent
### Architecture Summary
```mermaid
---
config:
layout: dagre
---
flowchart TB
subgraph subGraph0["Agent Capabilities"]
C["Agent"]
B["A2A Server - Uvicorn/FastAPI"]
D["MCP Tools"]
F["Agent Skills"]
end
C --> D & F
A["User Query"] --> B
B --> C
D --> E["Platform API"]
C:::agent
B:::server
A:::server
classDef server fill:#f9f,stroke:#333
classDef agent fill:#bbf,stroke:#333,stroke-width:2px
style B stroke:#000000,fill:#FFD600
style D stroke:#000000,fill:#BBDEFB
style F fill:#BBDEFB
style A fill:#C8E6C9
style subGraph0 fill:#FFF9C4
```
### Component Interaction Diagram
```mermaid
sequenceDiagram
participant User
participant Server as A2A Server
participant Agent as Agent
participant Skill as Agent Skills
participant MCP as MCP Tools
User->>Server: Send Query
Server->>Agent: Invoke Agent
Agent->>Skill: Analyze Skills Available
Skill->>Agent: Provide Guidance on Next Steps
Agent->>MCP: Invoke Tool
MCP-->>Agent: Tool Response Returned
Agent-->>Agent: Return Results Summarized
Agent-->>Server: Final Response
Server-->>User: Output
```
## Usage
### CLI
| Short Flag | Long Flag | Description |
|------------|------------------|----------------------------------------|
| -h | --help | See Usage |
| -b | --bitrate | Bitrate to use during recording |
| -c | --channels | Number of channels to use during recording |
| -d | --directory | Directory to save recording |
| -e | --export | Export txt, srt, and vtt files |
| -f | --file | File to transcribe |
| -l | --language | Language to transcribe |
| -m | --model | Model to use: <tiny, base, small, medium, large> |
| -n | --name | Name of recording |
| -r | --record | Number of seconds to record from the microphone |
```bash
audio-transcriber --file '~/Downloads/Federal_Reserve.mp4' --model 'large'
```
```bash
audio-transcriber --record 60 --directory '~/Downloads/' --name 'my_recording.wav' --model 'tiny'
```
### MCP CLI
| Short Flag | Long Flag | Description |
|------------|------------------------------------|-----------------------------------------------------------------------------|
| -h | --help | Display help information |
| -t | --transport | Transport method: 'stdio', 'http', or 'sse' [legacy] (default: stdio) |
| -s | --host | Host address for HTTP transport (default: 0.0.0.0) |
| -p | --port | Port number for HTTP transport (default: 8000) |
| | --auth-type | Authentication type: 'none', 'static', 'jwt', 'oauth-proxy', 'oidc-proxy', 'remote-oauth' (default: none) |
| | --token-jwks-uri | JWKS URI for JWT verification |
| | --token-issuer | Issuer for JWT verification |
| | --token-audience | Audience for JWT verification |
| | --oauth-upstream-auth-endpoint | Upstream authorization endpoint for OAuth Proxy |
| | --oauth-upstream-token-endpoint | Upstream token endpoint for OAuth Proxy |
| | --oauth-upstream-client-id | Upstream client ID for OAuth Proxy |
| | --oauth-upstream-client-secret | Upstream client secret for OAuth Proxy |
| | --oauth-base-url | Base URL for OAuth Proxy |
| | --oidc-config-url | OIDC configuration URL |
| | --oidc-client-id | OIDC client ID |
| | --oidc-client-secret | OIDC client secret |
| | --oidc-base-url | Base URL for OIDC Proxy |
| | --remote-auth-servers | Comma-separated list of authorization servers for Remote OAuth |
| | --remote-base-url | Base URL for Remote OAuth |
| | --allowed-client-redirect-uris | Comma-separated list of allowed client redirect URIs |
| | --eunomia-type | Eunomia authorization type: 'none', 'embedded', 'remote' (default: none) |
| | --eunomia-policy-file | Policy file for embedded Eunomia (default: mcp_policies.json) |
| | --eunomia-remote-url | URL for remote Eunomia server |
### Using as an MCP Server
The MCP Server can be run in two modes: `stdio` (for local testing) or `http` (for networked access). To start the server, use the following commands:
#### Run in stdio mode (default):
```bash
audio-transcriber-mcp
```
#### Run in HTTP mode:
```bash
audio-transcriber-mcp --transport "http" --host "0.0.0.0" --port "8000"
```
#### Model Information
[Courtesy of and Credits to OpenAI: Whisper.ai](https://github.com/openai/whisper/blob/main/README.md)
| Size | Parameters | English-only model | Multilingual model | Required VRAM | Relative speed |
|:------:|:----------:|:------------------:|:------------------:|:-------------:|:--------------:|
| tiny | 39 M | `tiny.en` | `tiny` | ~1 GB | ~32x |
| base | 74 M | `base.en` | `base` | ~1 GB | ~16x |
| small | 244 M | `small.en` | `small` | ~2 GB | ~6x |
| medium | 769 M | `medium.en` | `medium` | ~5 GB | ~2x |
| large | 1550 M | N/A | `large` | ~10 GB | 1x |
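One practical use of the table above is choosing the largest model that fits the available GPU memory. The helper below is a sketch using the approximate VRAM figures from the table; it is not part of the package API:

```python
# Approximate VRAM requirements (GB) taken from the model table above,
# ordered from smallest to largest model.
MODEL_VRAM = {"tiny": 1, "base": 1, "small": 2, "medium": 5, "large": 10}

def pick_model(available_vram_gb: float) -> str:
    """Return the largest model that fits the given VRAM budget,
    falling back to 'tiny' when nothing fits."""
    fitting = [name for name, need in MODEL_VRAM.items() if need <= available_vram_gb]
    return fitting[-1] if fitting else "tiny"
```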
### Deploy MCP Server as a Service
The Audio Transcriber MCP server can be deployed using Docker, with configurable authentication, middleware, and Eunomia authorization.
#### Using Docker Run
```bash
docker pull knucklessg1/audio-transcriber:latest
docker run -d \
--name audio-transcriber-mcp \
-p 8004:8004 \
-e HOST=0.0.0.0 \
-e PORT=8004 \
-e TRANSPORT=http \
-e AUTH_TYPE=none \
-e EUNOMIA_TYPE=none \
knucklessg1/audio-transcriber:latest
```
For advanced authentication (e.g., JWT, OAuth Proxy, OIDC Proxy, Remote OAuth) or Eunomia, add the relevant environment variables:
```bash
docker run -d \
--name audio-transcriber-mcp \
-p 8004:8004 \
-e HOST=0.0.0.0 \
-e PORT=8004 \
-e TRANSPORT=http \
-e AUTH_TYPE=oidc-proxy \
-e OIDC_CONFIG_URL=https://provider.com/.well-known/openid-configuration \
-e OIDC_CLIENT_ID=your-client-id \
-e OIDC_CLIENT_SECRET=your-client-secret \
-e OIDC_BASE_URL=https://your-server.com \
-e ALLOWED_CLIENT_REDIRECT_URIS=http://localhost:*,https://*.example.com/* \
-e EUNOMIA_TYPE=embedded \
-e EUNOMIA_POLICY_FILE=/app/mcp_policies.json \
knucklessg1/audio-transcriber:latest
```
#### Using Docker Compose
Create a `docker-compose.yml` file:
```yaml
services:
  audio-transcriber-mcp:
    image: knucklessg1/audio-transcriber:latest
    environment:
      - HOST=0.0.0.0
      - PORT=8004
      - TRANSPORT=http
      - AUTH_TYPE=none
      - EUNOMIA_TYPE=none
    ports:
      - "8004:8004"
```
For advanced setups with authentication and Eunomia:
```yaml
services:
  audio-transcriber-mcp:
    image: knucklessg1/audio-transcriber:latest
    environment:
      - HOST=0.0.0.0
      - PORT=8004
      - TRANSPORT=http
      - AUTH_TYPE=oidc-proxy
      - OIDC_CONFIG_URL=https://provider.com/.well-known/openid-configuration
      - OIDC_CLIENT_ID=your-client-id
      - OIDC_CLIENT_SECRET=your-client-secret
      - OIDC_BASE_URL=https://your-server.com
      - ALLOWED_CLIENT_REDIRECT_URIS=http://localhost:*,https://*.example.com/*
      - EUNOMIA_TYPE=embedded
      - EUNOMIA_POLICY_FILE=/app/mcp_policies.json
    ports:
      - "8004:8004"
    volumes:
      - ./mcp_policies.json:/app/mcp_policies.json
```
Run the service:
```bash
docker-compose up -d
```
#### Configure `mcp.json` for AI Integration
```json
{
  "mcpServers": {
    "audio_transcriber": {
      "command": "uv",
      "args": [
        "run",
        "--with",
        "audio-transcriber",
        "audio-transcriber-mcp"
      ],
      "env": {
        "WHISPER_MODEL": "medium", // Optional
        "TRANSCRIBE_DIRECTORY": "~/Downloads" // Optional
      },
      "timeout": 200000
    }
  }
}
```
### A2A CLI
#### Endpoints
- **Web UI**: `http://localhost:8000/` (if enabled)
- **A2A**: `http://localhost:8000/a2a` (Discovery: `/a2a/.well-known/agent.json`)
- **AG-UI**: `http://localhost:8000/ag-ui` (POST)
| Short Flag | Long Flag | Description |
|------------|-------------------|------------------------------------------------------------------------|
| -h | --help | Display help information |
| | --host | Host to bind the server to (default: 0.0.0.0) |
| | --port | Port to bind the server to (default: 9000) |
| | --reload | Enable auto-reload |
| | --provider | LLM Provider: 'openai', 'anthropic', 'google', 'huggingface' |
| | --model-id | LLM Model ID (default: qwen3:4b) |
| | --base-url | LLM Base URL (for OpenAI compatible providers) |
| | --api-key | LLM API Key |
| | --mcp-url | MCP Server URL (default: http://localhost:8000/mcp) |
| | --web | Enable Pydantic AI Web UI (default: False; Env: ENABLE_WEB_UI) |
## Install Python Package
```bash
python -m pip install audio-transcriber
```
or
```bash
uv pip install --upgrade audio-transcriber
```
##### Ubuntu Dependencies
```bash
sudo apt-get update
sudo apt-get install libasound-dev portaudio19-dev libportaudio2 libportaudiocpp0 ffmpeg gcc -y
```
## Repository Owners
<img width="100%" height="180em" src="https://github-readme-stats.vercel.app/api?username=Knucklessg1&show_icons=true&hide_border=true&&count_private=true&include_all_commits=true" />


| text/markdown | null | Audel Rouhi <knucklessg1@gmail.com> | null | null | MIT | null | [
"Development Status :: 5 - Production/Stable",
"License :: Public Domain",
"Environment :: Console",
"Operating System :: POSIX :: Linux",
"Programming Language :: Python :: 3"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"tree-sitter>=0.23.2",
"fastmcp>=3.0.0b1",
"pyaudio>=0.2.14",
"faster-whisper>=1.2.1",
"setuptools-rust>=1.12.0",
"websockets>=13.0",
"openai-whisper>=20250625; extra == \"local\"",
"fastmcp>=3.0.0b1; extra == \"mcp\"",
"eunomia-mcp>=0.3.10; extra == \"mcp\"",
"fastapi>=0.128.0; extra == \"mcp\"",... | [] | [] | [] | [] | twine/6.2.0 CPython/3.12.3 | 2026-02-19T06:33:41.326454 | audio_transcriber-0.6.15.tar.gz | 38,480 | 97/33/864db1c4a1f6a827bde44c91ecac013f1ff9d56b9411e272a0f51418c613/audio_transcriber-0.6.15.tar.gz | source | sdist | null | false | 1a4236c5a90ba4cfa9eb97791c9ef71f | 632fffe4a066a9672471aaabed7360609f48ff9131f396b9c35d89c18e1dc2d6 | 9733864db1c4a1f6a827bde44c91ecac013f1ff9d56b9411e272a0f51418c613 | null | [
"LICENSE"
] | 254 |
2.4 | adguard-home-agent | 0.2.14 | AdGuard Home MCP Server for Agentic AI! | # AdGuard Home Agent - A2A | AG-UI | MCP | API


















*Version: 0.2.14*
## Overview
The **AdGuard Home MCP Server** provides a Model Context Protocol (MCP) interface to interact with the AdGuard Home API, enabling automation and management of AdGuard Home resources such as devices, DNS servers, filter lists, query logs, and statistics. This server is designed to integrate seamlessly with AI-driven workflows and can be deployed as a standalone service or used programmatically.
### Features
- **Comprehensive API Coverage**: Manage AdGuard Home resources including devices, DNS servers, filter lists, query logs, and statistics.
- **MCP Integration**: Exposes AdGuard Home API functionalities as MCP tools for use with AI agents or direct API calls.
- **Authentication**: Supports Basic Authentication.
- **Environment Variable Support**: Securely configure credentials and settings via environment variables.
- **Docker Support**: Easily deployable as a Docker container for scalable environments.
- **Extensive Documentation**: Clear examples and instructions for setup, usage, and testing.
## MCP
### MCP Tools
The `adguard-home-agent` package exposes the following MCP tools, organized by category:
### Account & Profile
- `get_account_limits()`: Get account limits.
- `get_profile()`: Get current user profile info.
- `update_profile(profile_data)`: Update current user profile info.
### Blocked Services
- `get_all_blocked_services()`: Get available services to block.
- `get_blocked_services_list()`: Get blocked services list.
- `update_blocked_services(services)`: Update blocked services list.
### Clients
- `list_clients()`: List all clients.
- `search_clients(query)`: Search for clients.
- `add_client(name, ids, ...)`: Add a new client.
- `update_client(name, data)`: Update a client.
- `delete_client(name)`: Delete a client.
### DHCP
- `get_dhcp_status()`: Get DHCP status.
- `get_dhcp_interfaces()`: Get available interfaces.
- `set_dhcp_config(config)`: Set DHCP configuration.
- `find_active_dhcp(interface)`: Find active DHCP server.
- `add_dhcp_static_lease(mac, ip, hostname)`: Add static lease.
- `remove_dhcp_static_lease(mac, ip, hostname)`: Remove static lease.
- `update_dhcp_static_lease(mac, ip, hostname)`: Update static lease.
- `reset_dhcp()`: Reset DHCP config.
- `reset_dhcp_leases()`: Reset DHCP leases.
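Before handing arguments to `add_dhcp_static_lease`, a caller may want a pre-flight sanity check on the MAC and IP values. The helper below is an illustrative sketch, not part of the package API:

```python
import ipaddress
import re

# Colon-separated MAC address, e.g. "aa:bb:cc:00:11:22".
MAC_RE = re.compile(r"^(?:[0-9A-Fa-f]{2}:){5}[0-9A-Fa-f]{2}$")

def make_static_lease(mac: str, ip: str, hostname: str) -> dict:
    """Validate a static-lease request before calling
    add_dhcp_static_lease (hypothetical pre-flight helper)."""
    if not MAC_RE.match(mac):
        raise ValueError(f"invalid MAC address: {mac}")
    ipaddress.ip_address(ip)  # raises ValueError for malformed addresses
    return {"mac": mac.lower(), "ip": ip, "hostname": hostname}
```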
### DNS
- `get_dns_info()`: Get DNS parameters.
- `set_dns_config(config)`: Set DNS parameters.
- `test_upstream_dns(upstreams)`: Test upstream configuration.
- `set_protection(enabled, duration)`: Set protection state.
- `clear_cache()`: Clear DNS cache.
### Filtering
- `get_filtering_status()`: Get filtering status.
- `set_filtering_config(enabled, interval)`: Set filtering config.
- `set_filtering_rules(rules)`: Set user-defined rules.
- `check_host_filtering(name)`: Check if host is filtered.
- `add_filter_url(name, url, whitelist)`: Add filter URL.
- `remove_filter_url(url, whitelist)`: Remove filter URL.
- `set_filter_url_params(url, name, whitelist)`: Set filter URL parameters.
- `refresh_filters(whitelist)`: Refresh filters.
### Mobile Config
- `get_doh_mobile_config(host, client_id)`: Get DNS over HTTPS .mobileconfig.
- `get_dot_mobile_config(host, client_id)`: Get DNS over TLS .mobileconfig.
### Query Log
- `get_query_log(limit, ...)`: Get query log.
- `get_query_log_config()`: Get query log config.
- `set_query_log_config(enabled, ...)`: Set query log config.
- `clear_query_log()`: Clear query log.
### Rewrites
- `list_rewrites()`: List DNS rewrites.
- `add_rewrite(domain, answer)`: Add DNS rewrite.
- `update_rewrite(target, update)`: Update DNS rewrite.
- `delete_rewrite(domain, answer)`: Delete DNS rewrite.
- `get_rewrite_settings()`: Get rewrite settings.
- `update_rewrite_settings(enabled)`: Update rewrite settings.
### Settings
- `get_safebrowsing_status()`: Get SafeBrowsing status.
- `enable_safebrowsing()`: Enable SafeBrowsing.
- `disable_safebrowsing()`: Disable SafeBrowsing.
- `get_safesearch_status()`: Get SafeSearch status.
- `update_safesearch_settings(enabled, ...)`: Update SafeSearch settings.
- `get_parental_status()`: Get parental control status.
- `enable_parental_control()`: Enable parental control.
- `disable_parental_control()`: Disable parental control.
### Statistics
- `get_stats()`: Get overall statistics.
- `get_stats_config()`: Get stats config.
- `set_stats_config(interval)`: Set stats config.
- `reset_stats()`: Reset all statistics.
### System
- `get_version()`: Get AdGuard Home version/status.
### TLS
- `get_tls_status()`: Get TLS status.
- `configure_tls(config)`: Configure TLS.
- `validate_tls(config)`: Validate TLS config.
## A2A Agent
### Architecture:
```mermaid
---
config:
layout: dagre
---
flowchart TB
subgraph subGraph0["Agent Capabilities"]
C["Agent"]
B["A2A Server - Uvicorn/FastAPI"]
D["MCP Tools"]
F["Agent Skills"]
end
C --> D & F
A["User Query"] --> B
B --> C
D --> E["Platform API"]
C:::agent
B:::server
A:::server
classDef server fill:#f9f,stroke:#333
classDef agent fill:#bbf,stroke:#333,stroke-width:2px
style B stroke:#000000,fill:#FFD600
style D stroke:#000000,fill:#BBDEFB
style F fill:#BBDEFB
style A fill:#C8E6C9
style subGraph0 fill:#FFF9C4
```
### Component Interaction Diagram
```mermaid
sequenceDiagram
participant User
participant Server as A2A Server
participant Agent as Agent
participant Skill as Agent Skills
participant MCP as MCP Tools
User->>Server: Send Query
Server->>Agent: Invoke Agent
Agent->>Skill: Analyze Skills Available
Skill->>Agent: Provide Guidance on Next Steps
Agent->>MCP: Invoke Tool
MCP-->>Agent: Tool Response Returned
Agent-->>Agent: Return Results Summarized
Agent-->>Server: Final Response
Server-->>User: Output
```
## Usage
### MCP CLI
| Short Flag | Long Flag | Description |
|------------|------------------------------------|-----------------------------------------------------------------------------|
| -h | --help | Display help information |
| | --auth-type | Authentication type (default: none) |
### A2A CLI
#### Endpoints
- **Web UI**: `http://localhost:8000/` (if enabled)
- **A2A**: `http://localhost:8000/a2a` (Discovery: `/a2a/.well-known/agent.json`)
- **AG-UI**: `http://localhost:8000/ag-ui` (POST)
| Long Flag | Description | Default |
|------------------|--------------------------------------------------|-----------------------------|
| --host | Host to bind the server to | 0.0.0.0 |
| --port | Port to bind the server to | 9000 |
| --reload | Enable auto-reload | False |
| --provider | LLM Provider (openai, anthropic, google, etc) | openai |
| --model-id | LLM Model ID | qwen/qwen3-coder-next |
| --base-url | LLM Base URL (for OpenAI compatible providers) | http://host.docker.internal:1234/v1 |
| --api-key | LLM API Key | ollama |
| --mcp-url | MCP Server URL to connect to | None |
| --mcp-config | MCP Server Config | .../mcp_config.json |
| --skills-directory| Directory containing agent skills | ... |
| --web | Enable Pydantic AI Web UI | False (Env: ENABLE_WEB_UI) |
### Using as an MCP Server
The MCP Server can be run in two modes: `stdio` (for local testing) or `http` (for networked access). To start the server, use the following commands:
#### Run in stdio mode (default):
```bash
adguard-home-mcp
```
#### Run in HTTP mode:
```bash
adguard-home-mcp --transport http --host 0.0.0.0 --port 8012
```
Set environment variables for authentication:
```bash
export ADGUARD_URL="http://adguard-home:3000"
export ADGUARD_USERNAME="your-username"
export ADGUARD_PASSWORD="your-password"
```
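As a rough sketch of what Basic Authentication means here (illustrative only, not the package's code), these credentials end up in a standard `Authorization` header on each request:

```python
import base64
import os

# Build the HTTP Basic Auth header from the environment variables above.
def basic_auth_header(username: str, password: str) -> str:
    token = base64.b64encode(f"{username}:{password}".encode()).decode()
    return f"Basic {token}"

username = os.environ.get("ADGUARD_USERNAME", "your-username")
password = os.environ.get("ADGUARD_PASSWORD", "your-password")
print(basic_auth_header(username, password))
```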
### Use API Directly
You can interact with the AdGuard Home API directly using the `Api` class from `adguard_api.py`. Below is an example of adding a client (method name mirrors the `add_client` MCP tool):
```python
from adguard_home_agent.adguard_api import Api
# Initialize the API client
client = Api(
    base_url="http://adguard-home:3000",
    username="your-username",
    password="your-password"
)
# Add a new client
new_client = client.add_client(
    name="Test Device",
    ids=["192.168.1.50"]
)
print(new_client)
```
### Deploy MCP Server as a Service
The AdGuard Home MCP server can be deployed using Docker.
#### Using Docker Run
```bash
docker pull knucklessg1/adguard-home-agent:latest
docker run -d \
--name adguard-home-mcp \
-p 8012:8012 \
-e HOST=0.0.0.0 \
-e PORT=8012 \
-e TRANSPORT=http \
-e ADGUARD_URL=http://adguard-home:3000 \
-e ADGUARD_USERNAME=your-username \
-e ADGUARD_PASSWORD=your-password \
knucklessg1/adguard-home-agent:latest
```
#### Using Docker Compose
Create a `compose.yml` file:
```yaml
services:
adguard-home-mcp:
image: knucklessg1/adguard-home-agent:latest
environment:
- HOST=0.0.0.0
- PORT=8012
- TRANSPORT=http
- ADGUARD_URL=${ADGUARD_URL}
- ADGUARD_USERNAME=${ADGUARD_USERNAME}
- ADGUARD_PASSWORD=${ADGUARD_PASSWORD}
ports:
- "8012:8012"
```
Run the service:
```bash
docker-compose up -d
```
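For AI clients that launch the server over stdio, a hedged example of a client-side `mcp.json` entry (the server key is illustrative; the command and env names follow the stdio and environment-variable examples above — adjust to your client):

```json
{
  "mcpServers": {
    "adguard_home": {
      "command": "adguard-home-mcp",
      "env": {
        "ADGUARD_URL": "http://adguard-home:3000",
        "ADGUARD_USERNAME": "your-username",
        "ADGUARD_PASSWORD": "your-password"
      },
      "timeout": 200000
    }
  }
}
```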
## Install Python Package
Install the `adguard-home-agent` package using pip:
```bash
python -m pip install adguard-home-agent[all]
```
### Dependencies
Ensure the following Python packages are installed:
- `requests`
- `fastmcp`
- `pydantic`
Install dependencies manually if needed:
```bash
python -m pip install requests fastmcp pydantic
```
## Tests
### Pre-commit Checks
Run pre-commit checks to ensure code quality and formatting:
```bash
pre-commit run --all-files
```
To set up pre-commit hooks:
```bash
pre-commit install
```
### Validate MCP Server
Validate the MCP server configuration and tools using the MCP inspector:
```bash
npx @modelcontextprotocol/inspector adguard-home-mcp
```
## Repository Owners
<img width="100%" height="180em" src="https://github-readme-stats.vercel.app/api?username=Knucklessg1&show_icons=true&hide_border=true&&count_private=true&include_all_commits=true" />


## Contributing
Contributions are welcome! Please follow these steps:
1. Fork the repository.
2. Create a new branch (`git checkout -b feature/your-feature`).
3. Make your changes and commit (`git commit -m 'Add your feature'`).
4. Push to the branch (`git push origin feature/your-feature`).
5. Open a pull request.
Please ensure your code passes pre-commit checks and includes relevant tests.
## License
This project is licensed under the MIT License. See the [LICENSE](LICENSE) file for details.
## Support
For issues or feature requests, please open an issue on the [GitHub repository](https://github.com/Knuckles-Team/adguard-home-agent). For general inquiries, contact the maintainers via GitHub.
| text/markdown | null | Audel Rouhi <knucklessg1@gmail.com> | null | null | MIT | null | [
"Development Status :: 4 - Beta",
"License :: OSI Approved :: MIT License",
"Environment :: Console",
"Operating System :: POSIX :: Linux",
"Programming Language :: Python :: 3"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"tree-sitter>=0.23.2",
"requests>=2.28.1",
"urllib3>=2.2.2",
"fastmcp>=3.0.0b1; extra == \"mcp\"",
"eunomia-mcp>=0.3.10; extra == \"mcp\"",
"fastapi>=0.128.0; extra == \"mcp\"",
"pydantic-ai-slim[a2a,ag-ui,anthropic,fastmcp,google,groq,huggingface,mistral,openai,web]>=1.60.0; extra == \"a2a\"",
"pydan... | [] | [] | [] | [] | twine/6.2.0 CPython/3.12.3 | 2026-02-19T06:33:37.610961 | adguard_home_agent-0.2.14.tar.gz | 47,291 | 70/52/9efa095ecfe902e21aadba182d22868fd31d449f563201c6d703224bd451/adguard_home_agent-0.2.14.tar.gz | source | sdist | null | false | b642a1baeb1dac38111759e6c9e6cec0 | b05750fb899035b12e40c81d5feb12cfdf9009d47be46a0c50f2083403d3ffa1 | 70529efa095ecfe902e21aadba182d22868fd31d449f563201c6d703224bd451 | null | [
"LICENSE"
] | 245 |
2.4 | container-manager-mcp | 1.3.14 | Container Manager - manage Docker, Docker Swarm, and Podman containers. MCP+A2A Servers Out of the Box! | # Container Manager - A2A | AG-UI | MCP


















*Version: 1.3.14*
## Overview
Container Manager provides a robust universal interface to manage Docker and Podman containers, networks, volumes,
and Docker Swarm services, enabling programmatic and remote container management.
This package contains an MCP Server + A2A Server out of the box!
This repository is actively maintained - Contributions are welcome!
## Features
- Manage Docker and Podman containers, images, volumes, and networks
- Support for Docker Swarm operations
- Support for Docker Compose and Podman Compose operations
- FastMCP server for remote API access
- Comprehensive logging and error handling
- Extensible architecture for additional container runtimes
- Multi-agent A2A system for delegated container management
## MCP
### MCP Tools
- `get_version`: Retrieve version information of the container runtime
- `get_info`: Get system information about the container runtime
- `list_images`: List all available images
- `pull_image`: Pull an image from a registry
- `remove_image`: Remove an image
- `list_containers`: List running or all containers
- `run_container`: Run a new container
- `stop_container`: Stop a running container
- `remove_container`: Remove a container
- `get_container_logs`: Retrieve logs from a container
- `exec_in_container`: Execute a command in a container
- `list_volumes`: List all volumes
- `create_volume`: Create a new volume
- `remove_volume`: Remove a volume
- `list_networks`: List all networks
- `create_network`: Create a new network
- `remove_network`: Remove a network
- `compose_up`: Start services defined in a Compose file
- `compose_down`: Stop and remove services defined in a Compose file
- `compose_ps`: List containers for a Compose project
- `compose_logs`: View logs for a Compose project or specific service
- `init_swarm`: Initialize a Docker Swarm
- `leave_swarm`: Leave a Docker Swarm
- `list_nodes`: List nodes in a Docker Swarm
- `list_services`: List services in a Docker Swarm
- `create_service`: Create a new service in a Docker Swarm
- `remove_service`: Remove a service from a Docker Swarm
### MCP CLI
| Short Flag | Long Flag | Description |
|------------|------------------------------------|-----------------------------------------------------------------------------|
| -h | --help | Display help information |
| -t | --transport | Transport method: 'stdio', 'http', or 'sse' [legacy] (default: stdio) |
| -s | --host | Host address for HTTP transport (default: 0.0.0.0) |
| -p | --port | Port number for HTTP transport (default: 8000) |
| | --auth-type | Authentication type: 'none', 'static', 'jwt', 'oauth-proxy', 'oidc-proxy', 'remote-oauth' (default: none) |
| | --token-jwks-uri | JWKS URI for JWT verification |
| | --token-issuer | Issuer for JWT verification |
| | --token-audience | Audience for JWT verification |
| | --oauth-upstream-auth-endpoint | Upstream authorization endpoint for OAuth Proxy |
| | --oauth-upstream-token-endpoint | Upstream token endpoint for OAuth Proxy |
| | --oauth-upstream-client-id | Upstream client ID for OAuth Proxy |
| | --oauth-upstream-client-secret | Upstream client secret for OAuth Proxy |
| | --oauth-base-url | Base URL for OAuth Proxy |
| | --oidc-config-url | OIDC configuration URL |
| | --oidc-client-id | OIDC client ID |
| | --oidc-client-secret | OIDC client secret |
| | --oidc-base-url | Base URL for OIDC Proxy |
| | --remote-auth-servers | Comma-separated list of authorization servers for Remote OAuth |
| | --remote-base-url | Base URL for Remote OAuth |
| | --allowed-client-redirect-uris | Comma-separated list of allowed client redirect URIs |
| | --eunomia-type | Eunomia authorization type: 'none', 'embedded', 'remote' (default: none) |
| | --eunomia-policy-file | Policy file for embedded Eunomia (default: mcp_policies.json) |
| | --eunomia-remote-url | URL for remote Eunomia server |
## A2A Agent
### A2A CLI
#### Endpoints
- **Web UI**: `http://localhost:8000/` (if enabled)
- **A2A**: `http://localhost:8000/a2a` (Discovery: `/a2a/.well-known/agent.json`)
- **AG-UI**: `http://localhost:8000/ag-ui` (POST)
| Long Flag | Description |
|-------------------|--------------------------------------------------|
| --host | Host to bind the server to (default: 0.0.0.0) |
| --port | Port to bind the server to (default: 9000) |
| --provider | LLM Provider: openai, anthropic, google, huggingface (default: openai) |
| --model-id | LLM Model ID (default: qwen3:4b) |
| --base-url | LLM Base URL (for OpenAI compatible providers) |
| --api-key | LLM API Key |
| --mcp-url | MCP Server URL (default: http://localhost:8000/mcp) |
| --web | Enable Pydantic AI Web UI (default: False; Env: ENABLE_WEB_UI) |
## Usage
### Using as an MCP Server
The MCP Server can be run in two modes: `stdio` (for local testing) or `http` (for networked access). To start the server, use the following commands:
#### Run in stdio mode (default):
```bash
container-manager-mcp
```
#### Run in HTTP mode:
```bash
container-manager-mcp --transport "http" --host "0.0.0.0" --port "8000"
```
### Deploy MCP Server as a Service
The Container Manager MCP server can be deployed using Docker, with configurable authentication, middleware, and Eunomia authorization.
#### Using Docker Run
```bash
docker pull knucklessg1/container-manager:latest
docker run -d \
--name container-manager-mcp \
-p 8004:8004 \
-e HOST=0.0.0.0 \
-e PORT=8004 \
-e TRANSPORT=streamable-http \
-e AUTH_TYPE=none \
-e EUNOMIA_TYPE=none \
knucklessg1/container-manager:latest
```
#### Run A2A Agent (Docker):
```bash
docker run -d \
--name container-manager-agent \
-p 9000:9000 \
-e PORT=9000 \
-e PROVIDER=openai \
-e MODEL_ID=qwen3:4b \
-e BASE_URL=http://host.docker.internal:11434/v1 \
-e MCP_URL=http://host.docker.internal:8004/mcp \
knucklessg1/container-manager:latest \
container-manager-agent
```
For advanced authentication (e.g., JWT, OAuth Proxy, OIDC Proxy, Remote OAuth) or Eunomia, add the relevant environment variables:
```bash
docker run -d \
--name container-manager-mcp \
-p 8004:8004 \
-e HOST=0.0.0.0 \
-e PORT=8004 \
-e TRANSPORT=streamable-http \
-e AUTH_TYPE=oidc-proxy \
-e OIDC_CONFIG_URL=https://provider.com/.well-known/openid-configuration \
-e OIDC_CLIENT_ID=your-client-id \
-e OIDC_CLIENT_SECRET=your-client-secret \
-e OIDC_BASE_URL=https://your-server.com \
-e ALLOWED_CLIENT_REDIRECT_URIS=http://localhost:*,https://*.example.com/* \
-e EUNOMIA_TYPE=embedded \
-e EUNOMIA_POLICY_FILE=/app/mcp_policies.json \
knucklessg1/container-manager:latest
```
#### Using Docker Compose
Create a `docker-compose.yml` file:
```yaml
services:
container-manager-mcp:
image: knucklessg1/container-manager:latest
environment:
- HOST=0.0.0.0
- PORT=8004
- TRANSPORT=streamable-http
- AUTH_TYPE=none
- EUNOMIA_TYPE=none
ports:
- 8004:8004
```
For advanced setups with authentication and Eunomia:
```yaml
services:
container-manager-mcp:
image: knucklessg1/container-manager:latest
environment:
- HOST=0.0.0.0
- PORT=8004
- TRANSPORT=streamable-http
- AUTH_TYPE=oidc-proxy
- OIDC_CONFIG_URL=https://provider.com/.well-known/openid-configuration
- OIDC_CLIENT_ID=your-client-id
- OIDC_CLIENT_SECRET=your-client-secret
- OIDC_BASE_URL=https://your-server.com
- ALLOWED_CLIENT_REDIRECT_URIS=http://localhost:*,https://*.example.com/*
- EUNOMIA_TYPE=embedded
- EUNOMIA_POLICY_FILE=/app/mcp_policies.json
ports:
- 8004:8004
volumes:
- ./mcp_policies.json:/app/mcp_policies.json
```
Run the service:
```bash
docker-compose up -d
```
#### Configure `mcp.json` for AI Integration
```json
{
"mcpServers": {
"container_manager": {
"command": "uv",
"args": [
"run",
"--with",
"container-manager-mcp",
"container-manager-mcp"
],
"env": {
"CONTAINER_MANAGER_SILENT": "False", //Optional
"CONTAINER_MANAGER_LOG_FILE": "~/Documents/container_manager_mcp.log", //Optional
"CONTAINER_MANAGER_TYPE": "podman", //Optional
"CONTAINER_MANAGER_PODMAN_BASE_URL": "tcp://127.0.0.1:8080" //Optional
},
"timeout": 200000
}
}
}
```
## Install Python Package
```bash
python -m pip install container-manager-mcp
```
or
```bash
uv pip install --upgrade container-manager-mcp
```
## Test
```bash
container-manager-mcp --transport streamable-http --host 127.0.0.1 --port 8080
```
This starts the MCP server using HTTP transport on localhost port 8080.
To interact with the MCP server programmatically, you can use a FastMCP client or make HTTP requests to the exposed endpoints. Example using curl to pull an image:
```bash
curl -X POST http://127.0.0.1:8080/pull_image \
-H "Content-Type: application/json" \
-d '{"image": "nginx", "tag": "latest", "manager_type": "docker"}'
```
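Under the hood, MCP tool invocations are JSON-RPC messages. A sketch of the `tools/call` payload a protocol-level client would send for the same operation (payload shape per the MCP specification; this server's exact HTTP routing may differ):

```python
import json

# Illustrative MCP tools/call request for the pull_image tool.
payload = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "pull_image",
        "arguments": {"image": "nginx", "tag": "latest", "manager_type": "docker"},
    },
}
print(json.dumps(payload))
```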
### Dependencies
- Python 3.10+
- `fastmcp` for MCP server functionality
- `docker` for Docker support
- `podman` for Podman support
- `pydantic` for data validation
Install dependencies:
```bash
python -m pip install fastmcp docker podman pydantic
```
Ensure Docker or Podman is installed and running on your system.
## Development and Contribution
Contributions are welcome! To contribute:
1. Fork the repository
2. Create a feature branch (`git checkout -b feature/your-feature`)
3. Commit your changes (`git commit -am 'Add your feature'`)
4. Push to the branch (`git push origin feature/your-feature`)
5. Create a new Pull Request
Please ensure your code follows the project's coding standards and includes appropriate tests.
## License
This project is licensed under the MIT License - see the [LICENSE](https://github.com/Knuckles-Team/container-manager-mcp/blob/main/LICENSE) file for details.
## Repository Owners
<img width="100%" height="180em" src="https://github-readme-stats.vercel.app/api?username=Knucklessg1&show_icons=true&hide_border=true&&count_private=true&include_all_commits=true" />


| text/markdown | null | Audel Rouhi <knucklessg1@gmail.com> | null | null | MIT | null | [
"Development Status :: 5 - Production/Stable",
"License :: Public Domain",
"Environment :: Console",
"Operating System :: POSIX :: Linux",
"Programming Language :: Python :: 3"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"tree-sitter>=0.23.2",
"requests>=2.28.1",
"fastmcp>=3.0.0b1",
"eunomia-mcp>=0.3.10",
"podman>=5.6.0",
"podman>=5.6.0; extra == \"podman\"",
"docker>=7.1.0; extra == \"docker\"",
"pydantic-ai-slim[a2a,ag-ui,anthropic,fastmcp,google,groq,huggingface,mistral,openai,web]>=1.60.0; extra == \"a2a\"",
"py... | [] | [] | [] | [] | twine/6.2.0 CPython/3.12.3 | 2026-02-19T06:33:35.371748 | container_manager_mcp-1.3.14.tar.gz | 54,058 | 3d/29/1af09bead0d969310e9febd22ba59e385008ace85b2ec55e1a2c984bb6cf/container_manager_mcp-1.3.14.tar.gz | source | sdist | null | false | a12de18eeaae0699cac6923e88c6ea8b | 8c3a640510b6917143f8468339d0fe31550bf873b099c4c1351e072d5fe511d4 | 3d291af09bead0d969310e9febd22ba59e385008ace85b2ec55e1a2c984bb6cf | null | [
"LICENSE"
] | 252 |
2.4 | ansible-tower-mcp | 1.3.14 | Ansible Tower MCP Server for Agentic AI! | # Ansible Tower API - A2A | AG-UI | MCP


















*Version: 1.3.14*
## Overview
The **Ansible Tower MCP Server** provides a Model Context Protocol (MCP) interface to interact with the Ansible Tower (AWX) API, enabling automation and management of Ansible Tower resources such as inventories, hosts, groups, job templates, projects, credentials, organizations, teams, users, ad hoc commands, workflow templates, workflow jobs, schedules, and system information. This server is designed to integrate seamlessly with AI-driven workflows and can be deployed as a standalone service or used programmatically.
This repository is actively maintained and is a fork of a37ai/ansible-tower-mcp, which had not been updated in six months.
Contributions are welcome!
### Features
- **Comprehensive API Coverage**: Manage Ansible Tower resources including inventories, hosts, groups, job templates, projects, credentials, organizations, teams, users, ad hoc commands, workflows, and schedules.
- **MCP Integration**: Exposes Ansible Tower API functionalities as MCP tools for use with AI agents or direct API calls.
- **Flexible Authentication**: Supports both username/password and token-based authentication.
- **Environment Variable Support**: Securely configure credentials and settings via environment variables.
- **Docker Support**: Easily deployable as a Docker container for scalable environments.
- **Extensive Documentation**: Clear examples and instructions for setup, usage, and testing.
## MCP
### MCP Tools
The `ansible-tower-mcp` package exposes the following MCP tools, organized by category:
### Inventory Management
- `list_inventories(limit, offset)`: List all inventories.
- `get_inventory(inventory_id)`: Get details of a specific inventory.
- `create_inventory(name, organization_id, description)`: Create a new inventory.
- `update_inventory(inventory_id, name, description)`: Update an existing inventory.
- `delete_inventory(inventory_id)`: Delete an inventory.
### Host Management
- `list_hosts(inventory_id, limit, offset)`: List hosts, optionally filtered by inventory.
- `get_host(host_id)`: Get details of a specific host.
- `create_host(name, inventory_id, variables, description)`: Create a new host.
- `update_host(host_id, name, variables, description)`: Update an existing host.
- `delete_host(host_id)`: Delete a host.
### Group Management
- `list_groups(inventory_id, limit, offset)`: List groups in an inventory.
- `get_group(group_id)`: Get details of a specific group.
- `create_group(name, inventory_id, variables, description)`: Create a new group.
- `update_group(group_id, name, variables, description)`: Update an existing group.
- `delete_group(group_id)`: Delete a group.
- `add_host_to_group(group_id, host_id)`: Add a host to a group.
- `remove_host_from_group(group_id, host_id)`: Remove a host from a group.
### Job Template Management
- `list_job_templates(limit, offset)`: List all job templates.
- `get_job_template(template_id)`: Get details of a specific job template.
- `create_job_template(name, inventory_id, project_id, playbook, credential_id, description, extra_vars)`: Create a new job template.
- `update_job_template(template_id, name, inventory_id, playbook, description, extra_vars)`: Update an existing job template.
- `delete_job_template(template_id)`: Delete a job template.
- `launch_job(template_id, extra_vars)`: Launch a job from a template.
### Job Management
- `list_jobs(status, limit, offset)`: List jobs, optionally filtered by status.
- `get_job(job_id)`: Get details of a specific job.
- `cancel_job(job_id)`: Cancel a running job.
- `get_job_events(job_id, limit, offset)`: Get events for a job.
- `get_job_stdout(job_id, format)`: Get the output of a job in specified format (txt, html, json, ansi).
### Project Management
- `list_projects(limit, offset)`: List all projects.
- `get_project(project_id)`: Get details of a specific project.
- `create_project(name, organization_id, scm_type, scm_url, scm_branch, credential_id, description)`: Create a new project.
- `update_project(project_id, name, scm_type, scm_url, scm_branch, description)`: Update an existing project.
- `delete_project(project_id)`: Delete a project.
- `sync_project(project_id)`: Sync a project with its SCM.
### Credential Management
- `list_credentials(limit, offset)`: List all credentials.
- `get_credential(credential_id)`: Get details of a specific credential.
- `list_credential_types(limit, offset)`: List all credential types.
- `create_credential(name, credential_type_id, organization_id, inputs, description)`: Create a new credential.
- `update_credential(credential_id, name, inputs, description)`: Update an existing credential.
- `delete_credential(credential_id)`: Delete a credential.
### Organization Management
- `list_organizations(limit, offset)`: List all organizations.
- `get_organization(organization_id)`: Get details of a specific organization.
- `create_organization(name, description)`: Create a new organization.
- `update_organization(organization_id, name, description)`: Update an existing organization.
- `delete_organization(organization_id)`: Delete an organization.
### Team Management
- `list_teams(organization_id, limit, offset)`: List teams, optionally filtered by organization.
- `get_team(team_id)`: Get details of a specific team.
- `create_team(name, organization_id, description)`: Create a new team.
- `update_team(team_id, name, description)`: Update an existing team.
- `delete_team(team_id)`: Delete a team.
### User Management
- `list_users(limit, offset)`: List all users.
- `get_user(user_id)`: Get details of a specific user.
- `create_user(username, password, first_name, last_name, email, is_superuser, is_system_auditor)`: Create a new user.
- `update_user(user_id, username, password, first_name, last_name, email, is_superuser, is_system_auditor)`: Update an existing user.
- `delete_user(user_id)`: Delete a user.
### Ad Hoc Commands
- `run_ad_hoc_command(inventory_id, credential_id, module_name, module_args, limit, verbosity)`: Run an ad hoc command.
- `get_ad_hoc_command(command_id)`: Get details of an ad hoc command.
- `cancel_ad_hoc_command(command_id)`: Cancel an ad hoc command.
### Workflow Templates
- `list_workflow_templates(limit, offset)`: List all workflow templates.
- `get_workflow_template(template_id)`: Get details of a specific workflow template.
- `launch_workflow(template_id, extra_vars)`: Launch a workflow from a template.
### Workflow Jobs
- `list_workflow_jobs(status, limit, offset)`: List workflow jobs, optionally filtered by status.
- `get_workflow_job(job_id)`: Get details of a specific workflow job.
- `cancel_workflow_job(job_id)`: Cancel a running workflow job.
### Schedule Management
- `list_schedules(unified_job_template_id, limit, offset)`: List schedules, optionally filtered by job/workflow template.
- `get_schedule(schedule_id)`: Get details of a specific schedule.
- `create_schedule(name, unified_job_template_id, rrule, description, extra_data)`: Create a new schedule.
- `update_schedule(schedule_id, name, rrule, description, extra_data)`: Update an existing schedule.
- `delete_schedule(schedule_id)`: Delete a schedule.
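Schedules use iCal recurrence rules. A small hypothetical helper (not part of the package) showing the `rrule` string format that `create_schedule` expects: a `DTSTART` stamp followed by an `RRULE` clause:

```python
# Hypothetical helper: assemble the rrule argument for create_schedule.
def build_rrule(dtstart: str, freq: str, interval: int = 1) -> str:
    return f"DTSTART:{dtstart} RRULE:FREQ={freq};INTERVAL={interval}"

# e.g. run a job template every night at 02:00 UTC
nightly = build_rrule("20250101T020000Z", "DAILY")
print(nightly)
```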
### System Information
- `get_ansible_version()`: Get the Ansible Tower version.
- `get_dashboard_stats()`: Get dashboard statistics.
- `get_metrics()`: Get system metrics.
## A2A Agent
### Architecture:
```mermaid
---
config:
layout: dagre
---
flowchart TB
subgraph subGraph0["Agent Capabilities"]
C["Agent"]
B["A2A Server - Uvicorn/FastAPI"]
D["MCP Tools"]
F["Agent Skills"]
end
C --> D & F
A["User Query"] --> B
B --> C
D --> E["Platform API"]
C:::agent
B:::server
A:::server
classDef server fill:#f9f,stroke:#333
classDef agent fill:#bbf,stroke:#333,stroke-width:2px
style B stroke:#000000,fill:#FFD600
style D stroke:#000000,fill:#BBDEFB
style F fill:#BBDEFB
style A fill:#C8E6C9
style subGraph0 fill:#FFF9C4
```
### Component Interaction Diagram
```mermaid
sequenceDiagram
participant User
participant Server as A2A Server
participant Agent as Agent
participant Skill as Agent Skills
participant MCP as MCP Tools
User->>Server: Send Query
Server->>Agent: Invoke Agent
Agent->>Skill: Analyze Skills Available
Skill->>Agent: Provide Guidance on Next Steps
Agent->>MCP: Invoke Tool
MCP-->>Agent: Tool Response Returned
Agent-->>Agent: Return Results Summarized
Agent-->>Server: Final Response
Server-->>User: Output
```
## Usage
### MCP CLI
| Short Flag | Long Flag | Description |
|------------|------------------------------------|-----------------------------------------------------------------------------|
| -h | --help | Display help information |
| -t | --transport | Transport method: 'stdio', 'http', or 'sse' [legacy] (default: stdio) |
| -s | --host | Host address for HTTP transport (default: 0.0.0.0) |
| -p | --port | Port number for HTTP transport (default: 8000) |
| | --auth-type | Authentication type: 'none', 'static', 'jwt', 'oauth-proxy', 'oidc-proxy', 'remote-oauth' (default: none) |
| | --oauth-upstream-client-id | Upstream client ID for OAuth Proxy |
| | --oauth-upstream-client-secret | Upstream client secret for OAuth Proxy |
| | --oauth-base-url | Base URL for OAuth Proxy |
| | --oidc-config-url | OIDC configuration URL |
| | --oidc-client-id | OIDC client ID |
| | --oidc-client-secret | OIDC client secret |
| | --oidc-base-url | Base URL for OIDC Proxy |
| | --remote-auth-servers | Comma-separated list of authorization servers for Remote OAuth |
| | --remote-base-url | Base URL for Remote OAuth |
| | --allowed-client-redirect-uris | Comma-separated list of allowed client redirect URIs |
| | --eunomia-type | Eunomia authorization type: 'none', 'embedded', 'remote' (default: none) |
| | --eunomia-policy-file | Policy file for embedded Eunomia (default: mcp_policies.json) |
| | --eunomia-remote-url | URL for remote Eunomia server |
### A2A CLI
#### Endpoints
- **Web UI**: `http://localhost:8000/` (if enabled)
- **A2A**: `http://localhost:8000/a2a` (Discovery: `/a2a/.well-known/agent.json`)
- **AG-UI**: `http://localhost:8000/ag-ui` (POST)
| Long Flag | Description | Default |
|------------------|--------------------------------------------------|-----------------------------|
| --host | Host to bind the server to | 0.0.0.0 |
| --port | Port to bind the server to | 9000 |
| --reload | Enable auto-reload | False |
| --provider | LLM Provider (openai, anthropic, google, etc) | openai |
| --model-id | LLM Model ID | qwen/qwen3-coder-next |
| --base-url | LLM Base URL (for OpenAI compatible providers) | http://host.docker.internal:1234/v1 |
| --api-key | LLM API Key | ollama |
| --mcp-url | MCP Server URL to connect to | None |
| --mcp-config | MCP Server Config | .../mcp_config.json |
| --skills-directory | Directory containing agent skills | ... |
| --web | Enable Pydantic AI Web UI | False (Env: ENABLE_WEB_UI) |
### Using as an MCP Server
The MCP Server can be run in two modes: `stdio` (for local testing) or `http` (for networked access). To start the server, use the following commands:
#### Run in stdio mode (default):
```bash
ansible-tower-mcp
```
#### Run in HTTP mode:
```bash
ansible-tower-mcp --transport http --host 0.0.0.0 --port 8012
```
Set environment variables for authentication:
```bash
export ANSIBLE_BASE_URL="https://your-ansible-tower-instance.com"
export ANSIBLE_USERNAME="your-username"
export ANSIBLE_PASSWORD="your-password"
# or
export ANSIBLE_TOKEN="your-api-token"
export VERIFY="False" # Set to True to enable SSL verification
```
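A sketch of the assumed credential precedence implied by the variables above (token-based auth over username/password); illustrative only, not the package's actual logic:

```python
import os

# Assumed behavior: if ANSIBLE_TOKEN is set, use bearer-token auth;
# otherwise fall back to HTTP basic auth with username/password.
def auth_headers() -> dict:
    token = os.environ.get("ANSIBLE_TOKEN")
    if token:
        return {"Authorization": f"Bearer {token}"}
    return {}  # basic auth is handled separately from ANSIBLE_USERNAME/PASSWORD

os.environ["ANSIBLE_TOKEN"] = "example-token"
print(auth_headers())
```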
### Use API Directly
You can interact with the Ansible Tower API directly using the `Api` class from `ansible_tower_api.py`. Below is an example of creating an inventory and launching a job:
```python
from ansible_tower_mcp.ansible_tower_api import Api
# Initialize the API client
client = Api(
base_url="https://your-ansible-tower-instance.com",
username="your-username",
password="your-password",
verify=False
)
# Create an inventory
inventory = client.create_inventory(
name="Test Inventory",
organization_id=1,
description="A test inventory"
)
print(inventory)
# Launch a job from a job template
job = client.launch_job(template_id=123, extra_vars='{"key": "value"}')
print(job)
```
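If you need to wait for a launched job to finish, a simple polling loop can be layered on top of the client. This is a sketch only — the `get_job` method name and the job `status` field are assumptions; check `ansible_tower_api.py` for the exact accessors your version exposes:

```python
import time

# Job states that mean the job will not progress further.
TERMINAL_STATUSES = {"successful", "failed", "error", "canceled"}

def is_terminal(status: str) -> bool:
    """Return True once a job has reached a final state."""
    return status.lower() in TERMINAL_STATUSES

def wait_for_job(client, job_id, poll_interval=5, timeout=600):
    """Poll the API until the job finishes or the timeout expires."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        job = client.get_job(job_id)  # assumed accessor name
        if is_terminal(job["status"]):
            return job
        time.sleep(poll_interval)
    raise TimeoutError(f"Job {job_id} did not finish within {timeout}s")
```

Pass the `Api` instance from the example above as `client`, with the `id` from the `launch_job` response.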
### Deploy MCP Server as a Service
The Ansible Tower MCP server can be deployed using Docker, with configurable authentication, middleware, and Eunomia authorization.
#### Using Docker Run
```bash
docker pull knucklessg1/ansible-tower-mcp:latest
docker run -d \
--name ansible-tower-mcp \
-p 8004:8004 \
-e HOST=0.0.0.0 \
-e PORT=8004 \
-e TRANSPORT=http \
-e AUTH_TYPE=none \
-e EUNOMIA_TYPE=none \
-e ANSIBLE_BASE_URL=https://your-ansible-tower-instance.com \
-e ANSIBLE_USERNAME=your-username \
-e ANSIBLE_PASSWORD=your-password \
-e ANSIBLE_TOKEN=your-api-token \
knucklessg1/ansible-tower-mcp:latest
```
For advanced authentication (e.g., JWT, OAuth Proxy, OIDC Proxy, Remote OAuth) or Eunomia, add the relevant environment variables:
```bash
docker run -d \
--name ansible-tower-mcp \
-p 8004:8004 \
-e HOST=0.0.0.0 \
-e PORT=8004 \
-e TRANSPORT=http \
-e AUTH_TYPE=oidc-proxy \
-e OIDC_CONFIG_URL=https://provider.com/.well-known/openid-configuration \
-e OIDC_CLIENT_ID=your-client-id \
-e OIDC_CLIENT_SECRET=your-client-secret \
-e OIDC_BASE_URL=https://your-server.com \
-e ALLOWED_CLIENT_REDIRECT_URIS=http://localhost:*,https://*.example.com/* \
-e EUNOMIA_TYPE=embedded \
-e EUNOMIA_POLICY_FILE=/app/mcp_policies.json \
-e ANSIBLE_BASE_URL=https://your-ansible-tower-instance.com \
-e ANSIBLE_USERNAME=your-username \
-e ANSIBLE_PASSWORD=your-password \
-e ANSIBLE_TOKEN=your-api-token \
knucklessg1/ansible-tower-mcp:latest
```
#### Using Docker Compose
Create a `docker-compose.yml` file:
```yaml
services:
ansible-tower-mcp:
image: knucklessg1/ansible-tower-mcp:latest
environment:
- HOST=0.0.0.0
- PORT=8004
- TRANSPORT=http
- AUTH_TYPE=none
- EUNOMIA_TYPE=none
- ANSIBLE_BASE_URL=https://your-ansible-tower-instance.com
- ANSIBLE_USERNAME=your-username
- ANSIBLE_PASSWORD=your-password
- ANSIBLE_TOKEN=your-api-token
- ANSIBLE_VERIFY=False
ports:
- 8004:8004
```
For advanced setups with authentication and Eunomia:
```yaml
services:
ansible-tower-mcp:
image: knucklessg1/ansible-tower-mcp:latest
environment:
- HOST=0.0.0.0
- PORT=8004
- TRANSPORT=http
- AUTH_TYPE=oidc-proxy
- OIDC_CONFIG_URL=https://provider.com/.well-known/openid-configuration
- OIDC_CLIENT_ID=your-client-id
- OIDC_CLIENT_SECRET=your-client-secret
- OIDC_BASE_URL=https://your-server.com
- ALLOWED_CLIENT_REDIRECT_URIS=http://localhost:*,https://*.example.com/*
- EUNOMIA_TYPE=embedded
- EUNOMIA_POLICY_FILE=/app/mcp_policies.json
- ANSIBLE_BASE_URL=https://your-ansible-tower-instance.com
- ANSIBLE_USERNAME=your-username
- ANSIBLE_PASSWORD=your-password
- ANSIBLE_TOKEN=your-api-token
- ANSIBLE_VERIFY=False
ports:
- 8004:8004
volumes:
- ./mcp_policies.json:/app/mcp_policies.json
```
Run the service:
```bash
docker-compose up -d
```
#### Configure `mcp.json` for AI Integration
```json
{
"mcpServers": {
"ansible-tower": {
"command": "uv",
"args": [
"run",
"--with",
"ansible-tower-mcp>=0.0.4",
"ansible-tower-mcp",
"--transport",
"stdio"
],
"env": {
"ANSIBLE_BASE_URL": "${ANSIBLE_BASE_URL}",
"ANSIBLE_USERNAME": "${ANSIBLE_USERNAME}",
"ANSIBLE_PASSWORD": "${ANSIBLE_PASSWORD}",
"ANSIBLE_CLIENT_ID": "${ANSIBLE_CLIENT_ID}",
"ANSIBLE_CLIENT_SECRET": "${ANSIBLE_CLIENT_SECRET}",
"ANSIBLE_TOKEN": "${ANSIBLE_TOKEN}",
"ANSIBLE_VERIFY": "${VERIFY:False}"
},
"timeout": 200000
}
}
}
```
Set environment variables:
```bash
export ANSIBLE_BASE_URL="https://your-ansible-tower-instance.com"
export ANSIBLE_USERNAME="your-username"
export ANSIBLE_PASSWORD="your-password"
export ANSIBLE_TOKEN="your-api-token"
export VERIFY="False"
```
For **testing only**, you can store credentials directly in `mcp.json` (not recommended for production):
```json
{
"mcpServers": {
"ansible-tower": {
"command": "uv",
"args": [
"run",
"--with",
"ansible-tower-mcp",
"ansible-tower-mcp",
"--transport",
"http",
"--host",
"0.0.0.0",
"--port",
"8012"
],
"env": {
"ANSIBLE_BASE_URL": "https://your-ansible-tower-instance.com",
"ANSIBLE_USERNAME": "your-username",
"ANSIBLE_PASSWORD": "your-password",
"ANSIBLE_TOKEN": "your-api-token",
"VERIFY": "False"
},
"timeout": 200000
}
}
}
```
## Install Python Package
Install the `ansible-tower-mcp` package using pip:
```bash
python -m pip install ansible-tower-mcp[all]
```
### Dependencies
Ensure the following Python packages are installed:
- `requests`
- `fastmcp`
- `pydantic`
Install dependencies manually if needed:
```bash
python -m pip install requests fastmcp pydantic
```
## Tests
### Pre-commit Checks
Run pre-commit checks to ensure code quality and formatting:
```bash
pre-commit run --all-files
```
To set up pre-commit hooks:
```bash
pre-commit install
```
### Validate MCP Server
Validate the MCP server configuration and tools using the MCP inspector:
```bash
npx @modelcontextprotocol/inspector ansible-tower-mcp
```
### Unit Tests
Run unit tests (if available in your project setup):
```bash
python -m pytest tests/
```
## Repository Owners
<img width="100%" height="180em" src="https://github-readme-stats.vercel.app/api?username=Knucklessg1&show_icons=true&hide_border=true&&count_private=true&include_all_commits=true" />


## Contributing
Contributions are welcome! Please follow these steps:
1. Fork the repository.
2. Create a new branch (`git checkout -b feature/your-feature`).
3. Make your changes and commit (`git commit -m 'Add your feature'`).
4. Push to the branch (`git push origin feature/your-feature`).
5. Open a pull request.
Please ensure your code passes pre-commit checks and includes relevant tests.
## License
This project is licensed under the MIT License. See the [LICENSE](LICENSE) file for details.
## Support
For issues or feature requests, please open an issue on the [GitHub repository](https://github.com/Knuckles-Team/ansible-tower-mcp). For general inquiries, contact the maintainers via GitHub.
| text/markdown | null | Audel Rouhi <knucklessg1@gmail.com> | null | null | MIT | null | [
"Development Status :: 5 - Production/Stable",
"License :: Public Domain",
"Environment :: Console",
"Operating System :: POSIX :: Linux",
"Programming Language :: Python :: 3"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"tree-sitter>=0.23.2",
"requests>=2.8.1",
"urllib3>=2.2.2",
"fastmcp>=3.0.0b1; extra == \"mcp\"",
"eunomia-mcp>=0.3.10; extra == \"mcp\"",
"fastapi>=0.128.0; extra == \"mcp\"",
"pydantic-ai-slim[a2a,ag-ui,anthropic,fastmcp,google,groq,huggingface,mistral,openai,web]>=1.60.0; extra == \"a2a\"",
"pydant... | [] | [] | [] | [] | twine/6.2.0 CPython/3.12.3 | 2026-02-19T06:33:33.976588 | ansible_tower_mcp-1.3.14.tar.gz | 54,623 | 3e/5e/68481610f0385f925c0ca643443567bc8745d460ba364d441c25ddde44d7/ansible_tower_mcp-1.3.14.tar.gz | source | sdist | null | false | 182ff3d60751b7820b1d1d49753005e0 | 52a589068267a45d41c7da6538f053cb0160d528a03090545d6a9512f614e74e | 3e5e68481610f0385f925c0ca643443567bc8745d460ba364d441c25ddde44d7 | null | [
"LICENSE"
] | 248 |
2.4 | mutouplotlib | 0.0.1 | my plotting library based on Matplotlib | # mutouplotlib
[](https://pypi.org/project/mutouplotlib)
[](https://pypi.org/project/mutouplotlib)
-----
## Table of Contents
- [Installation](#installation)
- [License](#license)
## Installation
```console
pip install mutouplotlib
```
## License
`mutouplotlib` is distributed under the terms of the [MIT](https://spdx.org/licenses/MIT.html) license.
| text/markdown | null | ZY Tan <2334247405@qq.com> | null | null | null | null | [
"Development Status :: 4 - Beta",
"Programming Language :: Python",
"Programming Language :: Python :: 3.8",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: P... | [] | null | null | >=3.8 | [] | [] | [] | [
"matplotlib>=3.5.0",
"numpy>=1.20.0",
"scipy>=1.7.0",
"seaborn>=0.11.0"
] | [] | [] | [] | [
"Documentation, https://github.com/ifsihj/mutouplotlib#readme",
"Issues, https://github.com/ifsihj/mutouplotlib/issues",
"Source, https://github.com/ifsihj/mutouplotlib"
] | twine/6.2.0 CPython/3.12.7 | 2026-02-19T06:33:18.456429 | mutouplotlib-0.0.1.tar.gz | 6,316 | a7/5d/0d5b9b8b131500955abc1aa3b32c5b03cc585a2a554db675a9f3b41ac2a2/mutouplotlib-0.0.1.tar.gz | source | sdist | null | false | 5d05c30c02a633001009b7f8b90c6034 | 11a9fdfb13db181a6b8d0de49862ee5138aea23ab75692fe2c832f7bf3d266c9 | a75d0d5b9b8b131500955abc1aa3b32c5b03cc585a2a554db675a9f3b41ac2a2 | MIT | [
"LICENSE.txt"
] | 246 |
2.4 | july-caesar | 0.1.1 | Interactive case-conversion clipboard utility | # Caesar
## What it is
- Caesar is a variable name maker.
- It is supposed to get out of your way to let you quickly create variable names in the format you want.
- Caesar boasts intentionally rigid sessions, simple controls, and easy setup.
## How it works
1. You select a formatting mode. This starts a session.
2. Type sentences separated by spaces.
3. The formatted text is copied to your clipboard.
## Installation
1. Go to your terminal and run the following command:
```powershell
pip install july-caesar
```
2. Once installed, you can run caesar with the simple command: `caesar [--flag]`
## Usage examples
- For Python variables with long names (following snake-case, as is the convention) you could follow these steps:
1. Start terminal
2. command: `caesar -s[q]`
3. input: `variable name for specific task`
  4. copied output: `variable_name_for_specific_task` (fun fact: caesar formatted this example. I did not sprain my pinky holding Shift)
5. simply paste it in your project!
## Tips
1. Use a clipboard manager, especially one with history. It is useful well beyond Caesar. Windows has one built in (Win+V), Mac users have free tools like Maccy, and Linux users can use CopyQ or Klipper (KDE).
2. Caesar does not consume resources passively, so keeping it running in the background is recommended.
## Flags
v0.1.1:
- Three modes available:
1. Snake case : `-s`, `--snake`
2. Camel case : `-c`, `--camel`
3. Pascal case : `-p`, `--pascal`
- Quick flag:
- `-q`, `--quick`
- No formalities.
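The three modes boil down to simple word-joining. An illustrative Python sketch (not Caesar's actual implementation) of what each mode produces for the input from the usage example above:

```python
def to_snake(words: str) -> str:
    # snake_case: lowercase words joined by underscores
    return "_".join(w.lower() for w in words.split())

def to_pascal(words: str) -> str:
    # PascalCase: every word capitalized, no separators
    return "".join(w.capitalize() for w in words.split())

def to_camel(words: str) -> str:
    # camelCase: like PascalCase, but the first word stays lowercase
    parts = words.split()
    return parts[0].lower() + "".join(w.capitalize() for w in parts[1:])

print(to_snake("variable name for specific task"))   # variable_name_for_specific_task
print(to_camel("variable name for specific task"))   # variableNameForSpecificTask
print(to_pascal("variable name for specific task"))  # VariableNameForSpecificTask
```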
## Version Notes
### v0.1.0:
- Primitive.
- Contained MVPs
- Fixed snake-case formatting and universal whitespace handling issues (#1 & #2)
- Featured minimal README.md
### v0.1.1:
- three modes: snake case, pascal case, camel case
- Formalities include: greeting, farewell, can be avoided with -q flag
- exit by Ctrl+C (KeyboardInterrupt)
- features README.md, working code.
| text/markdown | null | null | null | null | MIT | null | [
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.8",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent"
] | [] | null | null | >=3.8 | [] | [] | [] | [
"pyperclip"
] | [] | [] | [] | [
"Homepage, https://github.com/DODO-unique/caesar"
] | twine/6.2.0 CPython/3.14.0 | 2026-02-19T06:33:02.639291 | july_caesar-0.1.1.tar.gz | 3,537 | e4/82/25fc59d9a45351df4b967202cffaae551a96b0cda557c5ae4d112f706c4d/july_caesar-0.1.1.tar.gz | source | sdist | null | false | 8fe230568ffbc334a9a6f430832227d6 | 41583d3dcfa7c95b1bddc000880c04beca2d2d6fc0019a7a7def2313e650b911 | e48225fc59d9a45351df4b967202cffaae551a96b0cda557c5ae4d112f706c4d | null | [] | 232 |
2.4 | porkbun | 1.0.0 | Command-line tool for managing Porkbun domains and DNS records | # porkbun-cli

[](https://pypi.org/project/porkbun/)

A command-line tool for managing domains and DNS records through the [Porkbun API](https://porkbun.com/api/json/v3/documentation).
```
$ porkbun dns list example.com
ID Type Name Content Prio TTL
---------- ---- ----------------- -------------- ---- -----
123456789 A example.com 203.0.113.1 600
123456790 MX example.com mail.example.com 10 3600
123456791 TXT example.com v=spf1 ... 3600
```
## Install
```bash
pip install porkbun
# With interactive menu support
pip install porkbun[interactive]
```
## Setup
Get your API keys from [porkbun.com/account/api](https://porkbun.com/account/api). You need to enable API access per-domain in your Porkbun account settings.
Then run:
```bash
porkbun configure
```
This prompts for your API key and secret, then saves them to `~/.config/porkbun-cli/config.json` with `600` permissions.
To verify the connection works:
```bash
porkbun ping
# Success! Your IP: 203.0.113.42
```
## Commands
### Domains
```bash
# List all domains in your account
porkbun domain list
# Check availability and pricing for a domain
porkbun domain search coolname.com
# Register a domain (prompts for confirmation before charging)
porkbun domain buy coolname.com
# View nameservers
porkbun domain ns example.com
# Set custom nameservers
porkbun domain ns-set example.com ns1.host.com ns2.host.com
# Download SSL certificate bundle
porkbun domain ssl example.com --output ./certs/example
# Writes example.crt, example.key, example.ca
```
### DNS Records
```bash
# List all records
porkbun dns list example.com
# Filter by type
porkbun dns list example.com --type TXT
# Create records
porkbun dns create example.com A 203.0.113.1 --name www
porkbun dns create example.com MX mail.example.com --prio 10
porkbun dns create example.com TXT "v=spf1 include:example.com ~all"
porkbun dns create example.com CNAME target.example.com --name blog
# Edit a record by its ID
porkbun dns edit example.com 123456789 A 203.0.113.2 --name www
# Upsert — creates the record if it doesn't exist, updates it if it does
porkbun dns upsert example.com A 203.0.113.1 --name www
# Delete by ID
porkbun dns delete example.com 123456789
```
Supported record types: `A`, `AAAA`, `CNAME`, `ALIAS`, `MX`, `TXT`, `NS`, `SRV`, `TLSA`, `CAA`, `HTTPS`, `SVCB`, `SSHFP`
### URL Forwarding
```bash
# List URL forwards
porkbun url list example.com
# Set a redirect (302 by default)
porkbun url set example.com https://destination.com
# Set a permanent 301 redirect from a subdomain
porkbun url set example.com https://destination.com --subdomain blog --type 301
# Wildcard redirect with path passthrough
porkbun url set example.com https://destination.com --wildcard --path
# Delete a forward
porkbun url delete example.com 987654321
```
### Bulk Export / Import
Good for backups or migrating DNS between domains.
```bash
# Export all DNS records to JSON
porkbun bulk export example.com -o records.json
# Export as CSV
porkbun bulk export example.com --format csv -o records.csv
# Print to stdout (pipe-friendly)
porkbun bulk export example.com
# Preview what an import would do without changing anything
porkbun bulk import records.json --dry-run
# Import to original domain (reads domain from JSON file)
porkbun bulk import records.json
# Import to a different domain
porkbun bulk import records.json --domain newdomain.com
```
The JSON format supports an optional `action` field per record: `create` (default), `upsert`, or `delete`.
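A sketch of what an import file using the per-record `action` field might look like. The exact field names here are assumptions — generate a real template for your domain with `porkbun bulk export` and edit that:

```json
{
  "domain": "example.com",
  "records": [
    {"action": "upsert", "type": "A", "name": "www", "content": "203.0.113.1", "ttl": "600"},
    {"action": "create", "type": "TXT", "name": "", "content": "v=spf1 ~all"},
    {"action": "delete", "id": "123456789"}
  ]
}
```

Preview the result with `porkbun bulk import records.json --dry-run` before applying.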
### Interactive Mode
Install with `[interactive]` to get a menu-driven interface instead of remembering all the flags:
```bash
porkbun interactive
# or shorthand:
porkbun i
```
Fetches your domain list on startup so you can pick from a menu — handy when you manage several domains.
## Config File
Credentials are stored at `~/.config/porkbun-cli/config.json`:
```json
{
"apikey": "pk1_...",
"secretapikey": "sk1_..."
}
```
The file is created with `600` permissions by `porkbun configure`. Do not commit it.
You can also pass keys directly via environment or by instantiating `PorkbunAPI` programmatically — see the `api.py` module if you want to use the client in your own scripts.
## Requirements
- Python 3.8+
- `requests`
- `tabulate`
- `questionary` (only for `porkbun interactive`, installed via `pip install porkbun[interactive]`)
## License
MIT — see [LICENSE](LICENSE).
## Author
[Luke Steuber](https://lukesteuber.com) · [GitHub](https://github.com/lukeslp/porkbun-cli) · [Bluesky](https://bsky.app/profile/lukesteuber.com)
| text/markdown | null | Luke Steuber <luke@lukesteuber.com> | null | Luke Steuber <luke@lukesteuber.com> | MIT | porkbun, dns, domain, cli, api | [
"Development Status :: 4 - Beta",
"Environment :: Console",
"Intended Audience :: Developers",
"Intended Audience :: System Administrators",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.8",
... | [] | null | null | >=3.8 | [] | [] | [] | [
"requests>=2.25.0",
"tabulate>=0.8.0",
"questionary>=1.10.0; extra == \"interactive\"",
"pytest>=7.0.0; extra == \"dev\"",
"pytest-cov>=4.0.0; extra == \"dev\"",
"black>=23.0.0; extra == \"dev\"",
"ruff>=0.1.0; extra == \"dev\"",
"porkbun[dev,interactive]; extra == \"all\""
] | [] | [] | [] | [
"Homepage, https://github.com/lukeslp/porkbun-cli",
"Documentation, https://github.com/lukeslp/porkbun-cli#readme",
"Repository, https://github.com/lukeslp/porkbun-cli.git",
"Issues, https://github.com/lukeslp/porkbun-cli/issues"
] | twine/6.2.0 CPython/3.10.12 | 2026-02-19T06:32:27.026183 | porkbun-1.0.0.tar.gz | 13,696 | 29/8a/e920bd0e2257089ef00c9cd90d4c71652af169fa21af6b3db8a089615a1f/porkbun-1.0.0.tar.gz | source | sdist | null | false | 17980a59d661b7e97ae816e81c31fb12 | 9da1bb00864f5a89589591ecd745c02fdf29b55e74786cb58daf8147548afea8 | 298ae920bd0e2257089ef00c9cd90d4c71652af169fa21af6b3db8a089615a1f | null | [
"LICENSE"
] | 238 |
2.4 | revefi-llm-sdk | 0.1.1.2 | LLM observability SDK for Revefi - Traceloop-based monitoring with custom ingestor service | # Revefi LLM SDK
A minimal Python SDK for LLM observability with Revefi's monitoring platform.
## Installation
```bash
pip install revefi-llm-sdk
```
## Quick Start
```python
from revefi_llm_sdk import init_llm_observability
# Initialize the SDK
init_llm_observability(
api_key="your-revefi-api-key",
agent_name="my-llm-agent",
ingestor_url="https://your-revefi-instance.com"
)
# Your LLM calls will now be automatically tracked
import openai
client = openai.OpenAI()
response = client.chat.completions.create(
model="gpt-3.5-turbo",
messages=[{"role": "user", "content": "Hello!"}]
)
```
## Configuration
- `api_key`: Your Revefi API key for authentication
- `agent_name`: Name identifier for your agent/application
- `ingestor_url`: Revefi ingestor service URL (defaults to localhost:6556)
## Supported LLM Providers
- OpenAI
- Anthropic
## Development & Distribution
### Building Distribution Files
The `dist/` directory contains the distribution files that are built from the source code and uploaded to PyPI. These files are generated using Python's build tools:
```bash
# Build distribution files (generates .whl and .tar.gz files in dist/)
python -m build
# This creates:
# - dist/revefi_llm_sdk-{version}-py3-none-any.whl (wheel package)
# - dist/revefi_llm_sdk-{version}.tar.gz (source distribution)
```
Update the version in pyproject.toml before building to ensure the correct version is included in the distribution files.
### Publishing to PyPI
The distribution files are uploaded to the Python Package Index (PyPI) registry using `twine`:
```bash
# Upload to PyPI (requires valid PyPI credentials)
python -m twine upload dist/*
# For testing, upload to Test PyPI first:
python -m twine upload --repository testpypi dist/*
```
**Note**: The files in `dist/` are automatically generated and should not be manually edited. They are created from the source code defined in `revefi_llm_sdk/` and the project configuration in `pyproject.toml`.
## License
MIT
| text/markdown | null | Revefi <support@revefi.com> | null | null | MIT | null | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.8",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Py... | [] | null | null | >=3.8 | [] | [] | [] | [
"opentelemetry-api>=1.20.0",
"opentelemetry-sdk>=1.20.0",
"opentelemetry-exporter-otlp-proto-http>=1.20.0",
"traceloop-sdk>=0.18.0",
"opentelemetry-instrumentation>=0.41b0",
"opentelemetry-instrumentation-openai>=0.18.0",
"opentelemetry-instrumentation-anthropic>=0.18.0"
] | [] | [] | [] | [
"Homepage, https://github.com/revefi/revefi-llm-sdk",
"Repository, https://github.com/revefi/revefi-llm-sdk",
"Issues, https://github.com/revefi/revefi-llm-sdk/issues"
] | twine/6.2.0 CPython/3.10.13 | 2026-02-19T06:27:30.081754 | revefi_llm_sdk-0.1.1.2.tar.gz | 4,656 | 65/f0/237e03583ce5d6361e2257c41e92d3942d731a103a7d265fbd0bad6556aa/revefi_llm_sdk-0.1.1.2.tar.gz | source | sdist | null | false | 7c6f683a9b650e85353eab43df02a241 | 3bf662924c9755e666c294164ed29f3d9bb9fce4bcefbc01622e8dd2fb24b0f5 | 65f0237e03583ce5d6361e2257c41e92d3942d731a103a7d265fbd0bad6556aa | null | [
"LICENSE"
] | 492 |
2.4 | aptoro | 0.4.0 | A minimal, functional Python ETL library for reading, validating, and transforming data using YAML schemas | # Aptoro
[](https://pypi.org/project/aptoro/)
[](https://pypi.org/project/aptoro/)
[](https://www.gnu.org/licenses/gpl-3.0)
[](https://github.com/astral-sh/ruff)
**Aptoro** is a Xavante word for *"preparing the arrows for hunting"*.
It is a minimal, functional Python ETL library for reading, validating, and transforming data using YAML schemas. Designed for simplicity and correctness, it bridges the gap between raw data files (CSV, JSON) and typed, validated Python objects.
## Features
- **Schema-First:** Define your data model in simple, readable YAML.
- **Strict Validation:** Ensures data quality with type checks, constraints, and range validation.
- **Rich Types:** Built-in support for `datetime` (ISO 8601), `url`, `file`, `dict`, nested objects, and standard primitives.
- **Multi-Format:** CSV, JSON, YAML, TOML, and Markdown front-matter (Jekyll/Hugo/Obsidian style).
- **Glob Patterns:** Read multiple files at once with `read("data/*.md")`.
- **Functional API:** Pure functions and immutable dataclasses make pipelines predictable.
- **Zero Boilerplate:** No complex class definitions—just load your schema and go.
## Installation
```bash
pip install aptoro
```
## CLI Usage
Aptoro provides a command-line interface for validating data files directly.
```bash
# Validate a CSV file against a schema
aptoro validate data.csv --schema schema.yaml
# Explicitly specify format
aptoro validate data.txt --schema schema.yaml --format json
```
## Quick Start
```python
from aptoro import load, load_schema, read, validate, to_json
# All-in-one: read + validate
entries = load(source="data.csv", schema="schema.yaml")
# Or step by step pipeline:
schema = load_schema("schema.yaml")
data = read("data.csv")
entries = validate(data, schema)
# Export to JSON
json_str = to_json(entries)
# Export with embedded metadata (self-describing files)
json_meta = to_json(entries, schema=schema, include_meta=True)
```
## Documentation
For full details on the schema language, advanced validation, and API reference, see the [Documentation](DOCS.md).
## Schema Language
Define your data schema in YAML:
```yaml
name: lexicon_entry
description: Dictionary entries
fields:
id: str
lemma: str
pos: str[noun|verb|adj|adv] # Constrained values (Enum)
definition: str
translation: str? # Optional field
examples: list[str]? # Optional list
frequency: int = 0 # Default value
created_at: datetime? # Optional ISO 8601 datetime
source_url: url? # Optional URL
```
### Type Syntax
- **Basic types:** `str`, `int`, `float`, `bool`
- **Specialized types:** `url`, `file`, `datetime`
- **Optional:** `str?`, `int?`, `url?`, `datetime?`
- **Default value:** `str = "default"`, `int = 0`, `list[str] = []`, `dict[str, int] = {}`
- **Constrained:** `str[a|b|c]`
- **Ranges:** `int[0..120]`, `float[0.0..1.0]`
- **Lists:** `list[str]`, `list[int]`
- **Dicts:** `dict`, `dict[str, int]`, `dict[str]`
- **Nested objects:** `type: object` with `fields` block
See [DOCS.md](DOCS.md) for full syntax, including inheritance, nested structures, and front-matter reading.
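Nested objects are declared with a `type: object` block containing its own `fields`. A minimal sketch (the field names are illustrative — see DOCS.md for the authoritative syntax):

```yaml
name: book
fields:
  title: str
  author:            # nested object
    type: object
    fields:
      name: str
      born: int?     # optional field inside the nested object
```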
## Supported Formats
- **CSV** (auto-detects types)
- **JSON**
- **YAML**
- **TOML**
- **Markdown front-matter** (`.md` files with YAML front matter)
## License
GNU General Public License v3 (GPLv3)
| text/markdown | Plataformas Indígenas | null | null | null | null | data, etl, pydantic, schema, validation, yaml | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"Intended Audience :: Science/Research",
"License :: OSI Approved :: GNU General Public License v3 (GPLv3)",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
... | [] | null | null | >=3.11 | [] | [] | [] | [
"pydantic>=2.0",
"pyyaml>=6.0",
"build; extra == \"dev\"",
"mypy>=1.0; extra == \"dev\"",
"pytest-cov>=4.0; extra == \"dev\"",
"pytest>=7.0; extra == \"dev\"",
"ruff>=0.1; extra == \"dev\"",
"twine; extra == \"dev\"",
"types-pyyaml; extra == \"dev\"",
"openpyxl>=3.0; extra == \"excel\"",
"google... | [] | [] | [] | [
"Homepage, https://github.com/plataformasindigenas/aptoro",
"Documentation, https://github.com/plataformasindigenas/aptoro#readme",
"Repository, https://github.com/plataformasindigenas/aptoro"
] | twine/6.2.0 CPython/3.14.2 | 2026-02-19T06:25:44.114195 | aptoro-0.4.0.tar.gz | 53,931 | b5/ce/d032303812cd228a890b89d214ebdc550a049ea0b3571acd9b8c6f6ae39d/aptoro-0.4.0.tar.gz | source | sdist | null | false | a96b692efff21b4030944c6502528ae2 | dd2a8e224be783d4a431fcd3e79d4f95cd12c188bf6a163958673bc2b244c1bd | b5ced032303812cd228a890b89d214ebdc550a049ea0b3571acd9b8c6f6ae39d | GPL-3.0-or-later | [
"LICENSE"
] | 352 |
2.4 | agentgit | 0.1.0 | Git for AI Thoughts — Checkpointing & Recovery Protocol for AI Agents | # 🧠 AgentGit — Git for AI Thoughts
**An open-source Checkpointing & Recovery Protocol for AI Agents**
> Save, branch, rollback, and recover an agent's "state of mind" at every reasoning step. If the model hits a bug or a timeout, it rolls back to the last logical state and tries a different path — without starting over.
[](https://www.python.org/downloads/)
[](LICENSE)
[]()
---
## The Problem
AI agents are **fragile**. When a multi-step reasoning chain fails at step 7, most frameworks throw everything away and start from scratch. That's like closing your entire Word document because you made a typo on page 3.
## The Solution
AgentGit gives your agent a **version-controlled brain**:
| Git Concept | AgentGit Equivalent | What It Means |
|---|---|---|
| `git commit` | `agent.checkpoint()` | Save the agent's current "thought" |
| `git reset` | `agent.rollback()` | Undo bad reasoning, go back to a good state |
| `git branch` | `agent.branch()` | Explore alternative solutions in parallel |
| `git merge` | `agent.merge()` | Combine the best ideas from different paths |
| `git log` | `agent.history()` | View the full reasoning timeline |
| `git diff` | `agent.diff()` | See what changed between two thought-states |
---
## Quick Start
```bash
pip install agentgit
```
```python
from agentgit import AgentGit
# Initialize — like 'git init' for your agent's brain
agent = AgentGit("my-research-agent")
# Step 1: Save the agent's initial understanding
agent.checkpoint(
state={"task": "Find best restaurants in NYC", "status": "parsing"},
metadata={"confidence": 0.9},
description="Parsed user request",
logic_step="parse_input"
)
# Step 2: Agent does some work...
agent.checkpoint(
state={"task": "Find best restaurants in NYC", "results": ["Le Bernardin", "Eleven Madison Park"]},
metadata={"confidence": 0.85},
description="Found top restaurants",
logic_step="search_restaurants"
)
# Step 3: Oh no, the API timed out on the next step!
# No problem — just roll back.
agent.rollback() # Goes back to "Found top restaurants"
# Or try a completely different approach
agent.branch("alternative-search")
agent.checkpoint(
state={"task": "Find best restaurants in NYC", "approach": "Use cached reviews"},
logic_step="cached_search"
)
```
---
## Core Features
### 1. Checkpointing — Save Points for the Brain
```python
cp = agent.checkpoint(
state={"reasoning": "The user wants X because of Y..."},
metadata={"confidence": 0.87, "tokens_used": 150},
description="Identified user intent",
logic_step="intent_classification"
)
# Every checkpoint gets a unique ID and content hash
print(f"Saved: {cp.id} (hash: {cp.hash})")
```
### 2. Rollback — Ctrl+Z for Thinking
```python
# Go back one step
agent.rollback()
# Go back 3 steps
agent.rollback(steps=3)
# Jump to a specific checkpoint
agent.rollback(to_checkpoint_id="abc123")
```
### 3. Branching — Explore Multiple Approaches
```python
# Main path: conservative approach
agent.checkpoint(state={"approach": "step-by-step"})
# Branch off to try something creative
agent.branch("creative")
agent.checkpoint(state={"approach": "lateral-thinking"})
# Go back to main if creative doesn't work
agent.switch_branch("main")
```
### 4. Safe Execution — Automatic Error Recovery
```python
def risky_api_call(state):
# This might fail due to rate limits, timeouts, etc.
return call_external_api(state["query"])
result, checkpoint = agent.safe_execute(
func=risky_api_call,
state={"query": "latest news"},
description="Fetch news articles",
max_retries=3,
fallback=use_cached_data # Plan B if all retries fail
)
```
### 5. Logic Tree — Full Decision Audit Trail
```python
print(agent.visualize_tree())
```
Output:
```
└── ✅ Task received [de3eec5f]
└── ✅ Plan created [3a7996fa]
├── ❌ API call failed [372aabc7]
├── ✅ Used cache instead [a4d4e95e]
└── ✅ Retry succeeded [330a3284]
└── ✅ Summary generated [67f8369a]
```
### 6. Decorators — Zero-Code-Change Integration
```python
from agentgit.decorators import agentgit_step
@agentgit_step("analyze_data")
def analyze(state):
# Your existing code — unchanged!
return {"analysis": process(state["data"])}
# Automatically checkpointed, with rollback on failure
result = analyze({"data": raw_data})
```
---
## Recovery Strategies
| Strategy | When to Use | What It Does |
|---|---|---|
| `RetryWithBackoff` | API timeouts, rate limits | Waits longer between each retry |
| `AlternativePathStrategy` | Logic dead-ends | Switches to a different approach |
| `DegradeGracefully` | Resource limits | Produces simpler output |
| `CompositeStrategy` | Complex scenarios | Chains multiple strategies |
```python
from agentgit import AgentGit
from agentgit.strategies import RetryWithBackoff, DegradeGracefully, CompositeStrategy
agent = AgentGit(
"resilient-agent",
recovery_strategies=[
CompositeStrategy([
RetryWithBackoff(base_delay=1.0, max_delay=30.0),
DegradeGracefully(),
])
]
)
```
---
## Storage Backends
```python
from agentgit.storage import FileSystemStorage, SQLiteStorage
# File-based (great for debugging — you can read the JSON files)
agent = AgentGit("my-agent", storage_backend=FileSystemStorage(".agentgit"))
# SQLite (fast queries, good for production)
agent = AgentGit("my-agent", storage_backend=SQLiteStorage("agent.db"))
```
---
## CLI
```bash
# Run the interactive demo
agentgit demo
# View checkpoint history
agentgit log
# Visualize the reasoning tree
agentgit tree
# List branches
agentgit branches
# Compare two checkpoints
agentgit diff abc123 def456
# View metrics
agentgit metrics
```
---
## Framework Integration
### LangChain
```python
from agentgit.decorators import AgentGitMiddleware
middleware = AgentGitMiddleware("langchain-agent")
wrapped_chain = middleware.wrap(my_chain.invoke, "process_query")
result = wrapped_chain({"input": "Hello"})
```
### Any Python Agent
```python
from agentgit.decorators import checkpoint_context
with checkpoint_context(description="Data processing") as ctx:
result = process_data(data)
ctx.state = {"result": result}
# Auto-rolls back on exception
```
---
## Architecture
```
┌─────────────────────────────────────────────────┐
│                  Your AI Agent                  │
├─────────────────────────────────────────────────┤
│  Decorators & Middleware (agentgit.decorators)  │
├─────────────────────────────────────────────────┤
│      AgentGit Core Engine (agentgit.engine)     │
│ ┌──────────┐ ┌────────┐ ┌────────┐ ┌──────────┐ │
│ │Checkpoint│ │ Branch │ │Rollback│ │Logic Tree│ │
│ └──────────┘ └────────┘ └────────┘ └──────────┘ │
├─────────────────────────────────────────────────┤
│    Recovery Strategies (agentgit.strategies)    │
├─────────────────────────────────────────────────┤
│       Serializers (agentgit.serializers)        │
├─────────────────────────────────────────────────┤
│  Storage Backends: FileSystem │ SQLite │ Memory │
└─────────────────────────────────────────────────┘
```
---
## Contributing
Contributions welcome! See [CONTRIBUTING.md](CONTRIBUTING.md) for guidelines.
## License
MIT License — see [LICENSE](LICENSE) for details.
| text/markdown | AgentGit Contributors | null | null | null | MIT | ai, agents, checkpointing, recovery, state-management, llm, reasoning, rollback, branching | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: P... | [] | null | null | >=3.9 | [] | [] | [] | [
"pytest>=7.0; extra == \"dev\"",
"pytest-cov>=4.0; extra == \"dev\""
] | [] | [] | [] | [
"Homepage, https://github.com/agentgit/agentgit",
"Documentation, https://agentgit.readthedocs.io",
"Repository, https://github.com/agentgit/agentgit",
"Issues, https://github.com/agentgit/agentgit/issues"
] | twine/6.2.0 CPython/3.13.5 | 2026-02-19T06:24:35.469815 | agentgit-0.1.0.tar.gz | 27,158 | de/4d/17ae4902a577fba63733935ad06e48a3030a1b407340005bc4ab940baf33/agentgit-0.1.0.tar.gz | source | sdist | null | false | 21dc7d441d7094a2a285e3a8d4822a64 | 2b44f8ac478a76ff66aa883f6eddbb85b43fe56a01ebc0a6f451cdda6eb9c4d3 | de4d17ae4902a577fba63733935ad06e48a3030a1b407340005bc4ab940baf33 | null | [
"LICENSE"
] | 153 |
2.4 | h4x-xd | 1.0.0 | A powerful, fast, and independent video downloader for YouTube, TikTok, and more. | # h4x-xd: High-Performance Independent Video Downloader
**h4x-xd** is a powerful, lightweight, and extremely fast Python library designed for downloading high-quality videos from various platforms. Unlike other tools, **h4x-xd** is built from the ground up with independent extractors and a multi-threaded download engine, making it ideal for web applications, Telegram bots, and automated workflows.
## Key Features
- **Independent Extractors**: Custom logic for YouTube, TikTok, and more – no reliance on `yt-dlp`.
- **Maximum Speed**: Built-in multi-chunked downloader that bypasses standard connection throttling.
- **Highest Quality**: Automatically identifies and merges the best available video and audio streams.
- **Async-First**: Fully compatible with `asyncio` for high-concurrency environments.
- **Bot-Ready**: Seamless integration for Telegram bots (stream to buffer) and Web APIs (stream to response).
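A multi-chunked downloader works by issuing parallel HTTP `Range` requests for disjoint byte ranges of the same file. A sketch of how a download might be partitioned (illustrative; `chunk_ranges` is not part of the h4x-xd API):

```python
def chunk_ranges(total_size: int, num_chunks: int) -> list[tuple[int, int]]:
    """Split total_size bytes into num_chunks inclusive (start, end) pairs,
    suitable for `Range: bytes=start-end` request headers."""
    base = total_size // num_chunks
    ranges = []
    for i in range(num_chunks):
        start = i * base
        # The last chunk absorbs any remainder so nothing is dropped
        end = total_size - 1 if i == num_chunks - 1 else start + base - 1
        ranges.append((start, end))
    return ranges
```

Each range can then be fetched concurrently and the chunks reassembled in order on disk.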
## Installation
```bash
# Clone the repository and install dependencies
git clone https://github.com/your-repo/h4x-xd.git
cd h4x-xd
pip install aiohttp httpx tqdm
```
## Quick Start
```python
import asyncio
from h4x_xd import Downloader
async def main():
    dl = Downloader(output_dir="videos", num_chunks=16)
    # Download in highest quality
    path = await dl.download("https://www.youtube.com/watch?v=...")
    print(f"Downloaded to: {path}")

asyncio.run(main())
```
## Integration Examples
### Telegram Bot (Telethon)
```python
from h4x_xd import TelegramHelper
helper = TelegramHelper()
buffer = await helper.stream_to_buffer("https://tiktok.com/...")
await client.send_file(chat_id, buffer, caption="Downloaded via h4x-xd")
```
### Web API (FastAPI)
```python
from fastapi import FastAPI
from h4x_xd import web_router
app = FastAPI()
app.include_router(web_router, prefix="/api/v1")
```
## Why h4x-xd?
| Feature | yt-dlp | h4x-xd |
|---------|--------|---------|
| Dependency | Heavy | Lightweight |
| Architecture | CLI-first | Library-first |
| Speed | Standard | Multi-chunked (Turbo) |
| Async | Wrapper | Native |
| Integration | Complex | Plug-and-play |
## License
MIT License
| text/markdown | null | h4x-xd team <contact@h4x-xd.io> | null | null | null | null | [
"Programming Language :: Python :: 3",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent"
] | [] | null | null | >=3.8 | [] | [] | [] | [
"aiohttp",
"httpx",
"tqdm",
"fastapi",
"uvicorn"
] | [] | [] | [] | [
"Homepage, https://github.com/h4x-xd/h4x-xd",
"Bug Tracker, https://github.com/h4x-xd/h4x-xd/issues"
] | twine/6.2.0 CPython/3.11.0rc1 | 2026-02-19T06:24:33.204898 | h4x_xd-1.0.0.tar.gz | 8,359 | 3c/7a/e16d7d746fab2bdbb8ef44e841251cbd6a5b488eadc52841a09a0aed366e/h4x_xd-1.0.0.tar.gz | source | sdist | null | false | 81f50d74b9c51180b704e30594456a36 | c628c4a32b705acdeaed24f5c27ce136e689b1e9692c477d211c1b3976ec8744 | 3c7ae16d7d746fab2bdbb8ef44e841251cbd6a5b488eadc52841a09a0aed366e | null | [] | 254 |
2.3 | fastapi-boot | 0.0.50 | FastAPI tools | <h1 align="center">FastAPIBoot</h1>
<div align="center">
[](https://github.com/hfdy0935/fastapi-boot/actions/workflows/test.yml)
[](https://github.com/hfdy0935/fastapi-boot/actions/workflows/build_publish.yml)
[](https://codecov.io/gh/hfdy0935/fastapi-boot)
[](https://pypi.org/project/fastapi-boot/)
</div>
An easy-to-use, powerful FastAPI toolkit that supports CBV, dependency injection, declarative shared route dependencies, lifecycle hooks, and more, built to **boost productivity**.
> cbv: class based view
**Features**
- 📦 **Seamless FastAPI integration, ready out of the box**; inherits FastAPI's strengths and supports project initialization via the `CLI`.
- 🐎 **Supports `CBV` and `FBV`** with arbitrarily deep nesting, making route hierarchies clearer.
- ✅ **Practices `IOC`, `DI`, and `AOP`** for more efficient development.
- 🌈 **Shared dependency extraction**, combined with nested `CBV`, avoids piling `Depends` into every `endpoint`.
- 🔨 **Rich utilities**: lifecycle hooks, exception handling, middleware, and `tortoise` helpers.
## 1. Quick Start
### 1.1 Installation
```bash
pip install fastapi-boot
# or with uv
uv add fastapi-boot
```
### 1.2 Comparison with FastAPI
📌 To implement these endpoints:
<img src="https://raw.githubusercontent.com/hfdy0935/fastapi-boot/refs/heads/main/assets/image.png"/>
1. With fastapi_boot
```py
from typing import Annotated

from fastapi import Query
from fastapi_boot.core import Controller, Get, provide_app, Post
import uvicorn

# fbv, function based view
@Get('/r1')
def top_level_fbv1():
    return '/r1'

# fbv
@Controller('/r2').get('')
def top_level_fbv2():
    return '/r2'

# cbv, class based view
@Controller('/r3')
class CBVController:
    @Get('/1')
    async def cbv_endpoint1(self):
        return '/r3/1'

    @Post('/2')
    def cbv_endpoint2(self, q: Annotated[str, Query()]):
        return dict(query=q, path='/r3/2')

app = provide_app(controllers=[top_level_fbv1, top_level_fbv2, CBVController])

if __name__ == '__main__':
    uvicorn.run('main:app', reload=True)
```
2. With fastapi
```py
from typing import Annotated

from fastapi import APIRouter, FastAPI, Query
import uvicorn

app = FastAPI()

@app.get('/r1')
def endpoint1():
    return '/r1'

router1 = APIRouter(prefix='/r2')

@router1.get('')
def endpoint2():
    return '/r2'

app.include_router(router1)

router2 = APIRouter(prefix='/r3')

@router2.get('/1')
async def endpoint3():
    return '/r3/1'

@router2.post('/2')
def endpoint4(q: Annotated[str, Query()]):
    return dict(query=q, path='/r3/2')

app.include_router(router2)

if __name__ == '__main__':
    uvicorn.run('main:app', reload=True)
```
### 1.3 💡 Generate via CLI:
```bash
fastapi-boot --host=localhost --port=8000 --reload --name=Demo
```
<img src="https://raw.githubusercontent.com/hfdy0935/fastapi-boot/refs/heads/main/assets/image-1.png"/>
## 2. Full API
```py
from fastapi_boot.core import (
    Injectable,
    Bean,
    provide_app,
    use_dep,
    use_http_middleware,
    use_ws_middleware,
    inject,
    Controller,
    Delete,
    Get,
    Head,
    Options,
    Patch,
    Post,
    Prefix,
    Put,
    Req,
    Trace,
    WebSocket as WS,
)
# tortoise utilities
from fastapi_boot.tortoise_util import Sql, Select, Update, Insert, Delete as SqlDelete
```
| text/markdown | hfdy | hfdy <hfdy09354121794@gmail.com> | null | null | null | null | [] | [] | null | null | >=3.13 | [] | [] | [] | [
"fastapi>=0.125.0",
"pydantic>=2.12.5",
"uvicorn>=0.38.0",
"tortoise-orm<=0.24.1; extra == \"db\"",
"aiomysql>=0.3.2; extra == \"db-all\"",
"asyncpg>=0.31.0; extra == \"db-all\"",
"tortoise-orm<=0.24.1; extra == \"db-all\"",
"aiomysql>=0.3.2; extra == \"db-mysql\"",
"tortoise-orm<=0.24.1; extra == \... | [] | [] | [] | [
"Homepage, https://github.com/hfdy0935/fastapi-boot",
"Repository, https://github.com/hfdy0935/fastapi-boot"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-19T06:24:20.768657 | fastapi_boot-0.0.50.tar.gz | 14,940 | 08/99/097554bc21497dc897318b13036e0431a11de8d64f5ec1a1c827f6afae9e/fastapi_boot-0.0.50.tar.gz | source | sdist | null | false | a6b31000ea09244ee2b87b7cabaaa2a4 | 182a6d1f8ae56fd4833fd9f88551a77db1ceb68aae54fe22725e8a07643ff501 | 0899097554bc21497dc897318b13036e0431a11de8d64f5ec1a1c827f6afae9e | null | [] | 219 |
2.4 | django-managed-commands | 0.4.1 | Track and manage Django management command executions with execution history and standardized testing | # django-managed-commands
[](https://pypi.org/project/django-managed-commands/)
[](https://pypi.org/project/django-managed-commands/)
[](https://pypi.org/project/django-managed-commands/)
[](https://github.com/sbenemerito/django-managed-commands/blob/main/LICENSE)
[](https://github.com/sbenemerito/django-managed-commands/actions/workflows/ci.yml)
[](https://codecov.io/gh/sbenemerito/django-managed-commands)
## Overview
django-managed-commands is a Django library that provides robust tracking and management of Django management commands.
It helps prevent migration-related issues by tracking command execution history, provides standardized testing utilities, and offers a comprehensive API for managing command execution in your Django projects.
### Do you need this?
Too many times I've been involved in projects where somebody creates a management command and:
- it's supposed to be run only once, but there are no guard rails to enforce that
- doing a `call_command()` inside an empty DB migration doesn't always work, because later field changes can make it raise exceptions
- there is no reliable way to know whether it has already been run (especially difficult for multi-tenant projects)
- no unit tests were written to properly test side effects
If you are in the same boat, then the answer is probably yes.
Also because we had something similar in a team I was previously involved in, and I thought it was nice to have this in Django projects I'm currently working on. <sub>Shoutout to the guys at Linkers: Suzuki R., Yokoyama I., Nathan W., Onodera Y.</sub>
## Installation
1) Install the package via pip:
```bash
pip install django-managed-commands
```
or via uv:
```bash
uv add django-managed-commands
```
2) Add `django_managed_commands` to your `INSTALLED_APPS` in `settings.py`:
```python
INSTALLED_APPS = [
# ... other apps
'django_managed_commands',
]
```
3) Run migrations to create the necessary database tables:
```bash
python manage.py migrate django_managed_commands
```
## Quick Start
Generate a new tracked management command using the built-in generator:
```bash
# Generate a command in your Django app
python manage.py create_managed_command myapp my_command
# Generate a run-once command (prevents duplicate executions)
python manage.py create_managed_command myapp setup_initial_data --run-once
```
where `myapp` is the name of the Django app you want to add the management command into, and `my_command` or `setup_initial_data` is the command name.
This creates a command file at `myapp/management/commands/my_command.py` with built-in execution tracking.
## Usage
### Creating a standard command
Generate a command that tracks every execution:
```bash
python manage.py create_managed_command myapp send_notifications
```
This creates `myapp/management/commands/send_notifications.py`:
```python
from django_managed_commands.base import ManagedCommand


class Command(ManagedCommand):
    """send_notifications management command."""

    help = "send_notifications command - add your description here"
    run_once = False

    def execute_command(self, *args, **options):
        # Your command logic here
        self.stdout.write("Sending notifications...")
        self.stdout.write(self.style.SUCCESS("send_notifications completed successfully"))
```
The `ManagedCommand` base class automatically handles:
- **Execution tracking**: Records success/failure in `CommandExecution` model
- **Timing**: Measures and stores execution duration
- **Database transactions**: Your logic runs inside `transaction.atomic()` - if an exception is raised, all database changes are rolled back
- **Dry-run mode**: built-in `--dry-run` flag that executes your command, then rolls back the database transaction so no changes persist
- **Error recording**: Failures are logged with error messages before re-raising
### Creating a run-once command
Generate a command that only executes once successfully:
```bash
python manage.py create_managed_command myapp setup_initial_data --run-once
```
The generated command includes automatic duplicate prevention:
```python
from django_managed_commands.base import ManagedCommand


class Command(ManagedCommand):
    """One-time setup command."""

    help = "setup_initial_data command"
    run_once = True  # Prevents duplicate executions

    def execute_command(self, *args, **options):
        # Your one-time setup logic here
        self.stdout.write(self.style.SUCCESS("Setup complete"))
```
When `run_once=True`, the command automatically checks execution history and skips if already run successfully.
### Viewing execution history in Django admin
django-managed-commands automatically registers the `CommandExecution` model in Django admin. Access it at `/admin/django_managed_commands/commandexecution/`:
- View all command executions with timestamps
- Filter by command name, success status, or date
- Search by command name or error messages
- See execution duration, parameters, and output for each run
### Programmatically accessing command history
Query command execution history using the provided utility functions:
```python
from django_managed_commands.utils import get_command_history
from django_managed_commands.models import CommandExecution

# Get last 10 executions of a specific command
history = get_command_history('myapp.send_notifications', limit=10)
for execution in history:
    print(f"{execution.executed_at}: {'Success' if execution.success else 'Failed'}")
    print(f"  Duration: {execution.duration}s")
    print(f"  Output: {execution.output}")

# Query all failed executions
failed_commands = CommandExecution.objects.filter(success=False)
for cmd in failed_commands:
    print(f"{cmd.command_name} failed at {cmd.executed_at}")
    print(f"  Error: {cmd.error_message}")

# Check if a command has ever run successfully
latest = CommandExecution.objects.filter(
    command_name='myapp.setup_initial_data',
    success=True
).first()
if latest:
    print(f"Last successful run: {latest.executed_at}")
else:
    print("Command has never run successfully")

# Get execution statistics
from django.db.models import Avg, Count, Q

stats = CommandExecution.objects.filter(
    command_name='myapp.send_notifications'
).aggregate(
    total_runs=Count('id'),
    avg_duration=Avg('duration'),
    success_count=Count('id', filter=Q(success=True))
)
print(f"Total runs: {stats['total_runs']}")
print(f"Average duration: {stats['avg_duration']:.2f}s")
print(f"Success rate: {stats['success_count'] / stats['total_runs'] * 100:.1f}%")
```
### Using ManagedCommand base class
The recommended approach is to extend `ManagedCommand`:
```python
from django_managed_commands.base import ManagedCommand


class Command(ManagedCommand):
    help = "Process data with automatic tracking"

    # Optional: override command_name (auto-derived from module path if not set)
    # command_name = "myapp.custom_name"

    def add_arguments(self, parser):
        super().add_arguments(parser)  # Keeps --dry-run flag
        parser.add_argument("--limit", type=int, default=100)

    def execute_command(self, *args, **options):
        # This runs inside a database transaction
        # Use --dry-run to preview changes without committing
        limit = options["limit"]
        processed = self.do_work(limit)
        self.stdout.write(f"Processed {processed} items")
        return processed  # Optional: return value is stored in execution record

    def do_work(self, limit):
        # Your implementation
        return 42
```
### Manual command tracking
For existing commands or special cases, you can manually track execution:
```python
import time

from django.core.management.base import BaseCommand
from django_managed_commands.utils import record_command_execution


class Command(BaseCommand):
    help = "Custom command with manual tracking"

    def handle(self, *args, **options):
        command_name = "myapp.custom_command"
        start_time = time.time()
        try:
            result = self.do_work()
            record_command_execution(
                command_name=command_name,
                success=True,
                parameters={"option": options.get("option")},
                output=f"Processed {result} items",
                duration=time.time() - start_time,
            )
        except Exception as e:
            record_command_execution(
                command_name=command_name,
                success=False,
                error_message=str(e),
                duration=time.time() - start_time,
            )
            raise

    def do_work(self):
        return 42
```
## Configuration
### Run-once Behavior
Commands can be configured to run only once successfully by setting `run_once=True`:
```python
class Command(ManagedCommand):
    run_once = True  # Command will only execute once successfully

    def execute_command(self, *args, **options):
        # Your one-time logic here
        pass
```
**How it works:**
1. Before execution, the base class checks the command history
2. If a successful execution with `run_once=True` exists, the command is skipped
3. If the previous execution failed, the command will run again
4. If no previous execution exists, the command runs normally
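The skip decision in steps 1-4 can be expressed as a small pure function. This is an illustrative sketch mirroring the documented behavior of `should_run_command` (see the API Reference), not the library's actual code; `latest` stands in for the most recent `CommandExecution` record as a plain dict:

```python
def should_run(latest):
    """Decide whether a command should execute, given its most recent
    execution record (or None if it has never run)."""
    if latest is None:
        return True                    # step 4: never run before
    if not latest["success"]:
        return True                    # step 3: last run failed, retry allowed
    return not latest["run_once"]      # step 2: skip only a successful run-once
```

A successful execution with `run_once=False` never blocks future runs; only a successful `run_once=True` execution does.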
**Use cases for run-once commands:**
- Initial data setup or seeding
- One-time database migrations
- System initialization tasks
- Feature flag setup
- Configuration deployment
### Transaction Behavior
All commands extending `ManagedCommand` run inside a database transaction:
```python
class Command(ManagedCommand):
    def execute_command(self, *args, **options):
        # All database operations here are atomic
        user = User.objects.create(username="alice")
        Profile.objects.create(user=user)  # If this fails, User creation is rolled back
```
**Key points:**
- Your `execute_command` logic runs inside `transaction.atomic()`
- If any exception is raised, all database changes are rolled back
- Execution recording happens *outside* the transaction, so failures are always logged
- This ensures data consistency without manual transaction management
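The overall control flow — logic inside the transaction, recording outside it — can be sketched with dependencies injected as callables (an illustrative sketch of the pattern, not `ManagedCommand`'s actual implementation; `execute`, `record`, and `atomic` are placeholders):

```python
import time
from contextlib import nullcontext


def run_with_tracking(execute, record, atomic=nullcontext):
    """Run `execute` inside the `atomic` context manager; call `record`
    afterwards, outside the transaction, so failures are always logged."""
    start = time.time()
    try:
        with atomic():
            result = execute()  # any exception rolls back every DB change
    except Exception as exc:
        # Recording happens after the rollback, so the failure still gets logged
        record(success=False, error_message=str(exc), duration=time.time() - start)
        raise
    record(success=True, duration=time.time() - start)
    return result
```

In the real library, `atomic` corresponds to `django.db.transaction.atomic` and `record` to `record_command_execution`.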
### Dry-Run Mode
All commands extending `ManagedCommand` automatically have a `--dry-run` flag:
```bash
python manage.py my_command --dry-run
```
**What it does:**
1. Executes your entire `execute_command()` logic normally
2. At the end, rolls back all database changes (nothing is committed)
3. Skips creating a `CommandExecution` record
**Example output:**
```
DRY RUN - all database changes will be rolled back
Processing 100 items...
my_command completed successfully in 2.35s (dry run - changes rolled back)
```
**Use cases:**
- Preview what a command will do before running it for real
- Test commands in production without making changes
- Validate data imports before committing
- Debug command logic with real data
**Note:** The `--dry-run` flag is provided by `ManagedCommand.add_arguments()`. If you override `add_arguments()` in your command, call `super().add_arguments(parser)` to keep this functionality:
```python
def add_arguments(self, parser):
    super().add_arguments(parser)  # Keeps --dry-run
```
### Tracking Behavior
All command executions are automatically tracked with the following information:
- **command_name**: Unique identifier for the command (e.g., `myapp.my_command`)
- **executed_at**: Timestamp when the command started
- **success**: Boolean indicating if the command completed successfully
- **parameters**: JSON field storing command arguments and options
- **output**: Standard output from the command
- **error_message**: Error details if the command failed
- **duration**: Execution time in seconds
- **run_once**: Whether this command should only run once
### Database Configuration
The `CommandExecution` model uses Django's default database. No special configuration is required. The model includes:
- Automatic timestamp tracking (`auto_now_add=True`)
- JSON field for flexible parameter storage
- Indexed ordering by execution time (newest first)
- Admin interface integration
### Integration with Existing Projects
To add tracking to existing commands:
1. **Option A: Extend ManagedCommand (recommended)**
```python
# Before
from django.core.management.base import BaseCommand

class Command(BaseCommand):
    def handle(self, *args, **options):
        # Your logic
        pass

# After
from django_managed_commands.base import ManagedCommand

class Command(ManagedCommand):
    def execute_command(self, *args, **options):
        # Your logic (now with automatic tracking + transactions)
        pass
```
2. **Option B: Use the generator with --force**
```bash
python manage.py create_managed_command myapp existing_command --force
```
3. **Option C: Manual tracking** (for special cases where you can't change the base class)
```python
import time

from django.core.management.base import BaseCommand
from django_managed_commands.utils import record_command_execution

class Command(BaseCommand):
    def handle(self, *args, **options):
        start_time = time.time()
        try:
            # Your logic
            record_command_execution(
                command_name="myapp.existing_command",
                success=True,
                duration=time.time() - start_time,
            )
        except Exception as e:
            record_command_execution(
                command_name="myapp.existing_command",
                success=False,
                error_message=str(e),
                duration=time.time() - start_time,
            )
            raise
```
## API Reference
### Base Classes
#### `ManagedCommand`
Base class for Django management commands with automatic tracking, transaction support, and dry-run mode.
**Location:** `django_managed_commands.base.ManagedCommand`
**Class Attributes:**
- `run_once` (bool, default=False): Set to `True` to prevent duplicate executions
- `command_name` (str, default=None): Override to customize the command name. If not set, auto-derived from module path (e.g., `myapp.management.commands.foo` → `myapp.foo`)
**Built-in Flags:**
- `--dry-run`: Execute command but roll back all database changes. No execution record is created.
**Methods to Override:**
- `execute_command(self, *args, **options)`: Your command logic. Runs inside a database transaction.
- `add_arguments(self, parser)`: Add command-line arguments. Call `super().add_arguments(parser)` to keep `--dry-run`.
**Example:**
```python
from django_managed_commands.base import ManagedCommand


class Command(ManagedCommand):
    help = "Import data from external API"
    run_once = False

    def add_arguments(self, parser):
        super().add_arguments(parser)  # Keeps --dry-run flag
        parser.add_argument("--source", type=str, required=True)

    def execute_command(self, *args, **options):
        source = options["source"]
        # Your logic here - runs in a transaction
        # Use --dry-run to execute without committing changes
        count = self.import_data(source)
        self.stdout.write(self.style.SUCCESS(f"Imported {count} records"))
        return count  # Stored in execution record

    def import_data(self, source):
        # Implementation
        return 42
```
**Behavior:**
- `execute_command()` runs inside `transaction.atomic()`
- Execution is recorded in `CommandExecution` model (success or failure)
- Duration is automatically measured
- On exception: transaction rolls back, error is recorded, exception re-raised
- With `--dry-run`: executes normally, then rolls back all changes (no record created)
### Utility Functions
#### `record_command_execution()`
Records a command execution in the database.
**Signature:**
```python
record_command_execution(
    command_name,
    success=True,
    parameters=None,
    output="",
    error_message="",
    duration=None,
    run_once=False
)
```
**Parameters:**
- `command_name` (str, required): Unique identifier for the command (e.g., `'myapp.my_command'`)
- `success` (bool, optional): Whether the command executed successfully. Default: `True`
- `parameters` (dict, optional): Dictionary of command arguments and options. Default: `None`
- `output` (str, optional): Standard output from the command. Default: `""`
- `error_message` (str, optional): Error message if the command failed. Default: `""`
- `duration` (float, optional): Execution duration in seconds. Default: `None`
- `run_once` (bool, optional): Whether this command should only run once. Default: `False`
**Returns:** `CommandExecution` instance
**Example:**
```python
from django_managed_commands.utils import record_command_execution
# Record successful execution
execution = record_command_execution(
    command_name='myapp.send_emails',
    success=True,
    parameters={'recipient': 'user@example.com'},
    output='Sent 5 emails',
    duration=2.5
)

# Record failed execution
execution = record_command_execution(
    command_name='myapp.process_data',
    success=False,
    error_message='Database connection failed',
    duration=0.3
)

# Record run-once command
execution = record_command_execution(
    command_name='myapp.setup_initial_data',
    success=True,
    output='Created 100 records',
    duration=5.2,
    run_once=True
)
```
#### `should_run_command()`
Checks if a command should execute based on its execution history.
**Signature:**
```python
should_run_command(command_name)
```
**Parameters:**
- `command_name` (str, required): The name of the command to check
**Returns:** `bool` - `True` if the command should run, `False` if it should be skipped
**Behavior:**
- Returns `True` if no previous execution exists
- Returns `True` if the most recent execution failed
- Returns `True` if the most recent execution had `run_once=False`
- Returns `False` only if the most recent execution was successful AND had `run_once=True`
**Example:**
```python
from django_managed_commands.utils import should_run_command, record_command_execution
# First time running
if should_run_command('myapp.setup_data'):
    # Returns True - no previous execution
    setup_data()
    record_command_execution('myapp.setup_data', success=True, run_once=True)

# Second time running
if should_run_command('myapp.setup_data'):
    # Returns False - already run successfully with run_once=True
    setup_data()
else:
    print('Command already executed successfully')

# After a failed execution
record_command_execution('myapp.setup_data', success=False, run_once=True)
if should_run_command('myapp.setup_data'):
    # Returns True - previous execution failed, so retry is allowed
    setup_data()
#### `get_command_history()`
Retrieves execution history for a specific command.
**Signature:**
```python
get_command_history(command_name, limit=10)
```
**Parameters:**
- `command_name` (str, required): The name of the command to retrieve history for
- `limit` (int, optional): Maximum number of records to return. Default: `10`
**Returns:** `QuerySet` of `CommandExecution` instances, ordered by execution time (newest first)
**Example:**
```python
from django_managed_commands.utils import get_command_history
# Get last 10 executions
history = get_command_history('myapp.send_notifications')
for execution in history:
    print(f"{execution.executed_at}: {execution.success}")

# Get last 5 executions
recent = get_command_history('myapp.process_data', limit=5)
print(f"Found {recent.count()} executions")

# Check if command has ever run
history = get_command_history('myapp.new_command', limit=1)
if history.exists():
    print(f"Last run: {history.first().executed_at}")
else:
    print("Command has never been executed")
# Analyze execution patterns
history = get_command_history('myapp.daily_task', limit=30)
success_rate = history.filter(success=True).count() / history.count() * 100
print(f"Success rate: {success_rate:.1f}%")
```
### Management Commands
#### `create_managed_command`
Generates a new Django management command with built-in execution tracking.
**Usage:**
```bash
python manage.py create_managed_command <app_name> <command_name> [options]
```
**Arguments:**
- `app_name` (required): Name of the Django app (must be in `INSTALLED_APPS`)
- `command_name` (required): Name of the command to create (must be a valid Python identifier)
**Options:**
- `--run-once`: Set `run_once=True` in the generated command to prevent duplicate executions
- `--force`: Overwrite existing files if they exist
**Example:**
```bash
# Create a standard command
python manage.py create_managed_command myapp send_notifications
# Create a run-once command
python manage.py create_managed_command myapp setup_initial_data --run-once
# Overwrite existing command
python manage.py create_managed_command myapp existing_command --force
```
**Generated Files:**
1. Command file: `<app_name>/management/commands/<command_name>.py`
2. Test file: `<app_name>/tests/test_<command_name>.py`
### Models
#### `CommandExecution`
Tracks execution history of Django management commands.
**Fields:**
- `command_name` (CharField, max_length=255): Name of the management command
- `executed_at` (DateTimeField, auto_now_add=True): Timestamp when command was executed
- `success` (BooleanField, default=True): Whether the command completed successfully
- `parameters` (JSONField, null=True, blank=True): Command parameters as JSON
- `output` (TextField, blank=True): Command stdout output
- `error_message` (TextField, blank=True): Error message if command failed
- `duration` (FloatField, null=True, blank=True): Execution duration in seconds
- `run_once` (BooleanField, default=False): Whether this command should only run once
**Meta Options:**
- Ordering: `["-executed_at"]` (newest first)
- Verbose name: "Command Execution"
- Verbose name plural: "Command Executions"
**Methods:**
- `__str__()`: Returns `"{command_name} - {Success|Failed}"`
**Example Usage:**
```python
from django_managed_commands.models import CommandExecution
from django.utils import timezone
from datetime import timedelta
# Query all executions
all_executions = CommandExecution.objects.all()
# Filter by command name
send_email_history = CommandExecution.objects.filter(
    command_name='myapp.send_emails'
)

# Filter by success status
failed_commands = CommandExecution.objects.filter(success=False)

# Get recent executions (last 24 hours)
recent = CommandExecution.objects.filter(
    executed_at__gte=timezone.now() - timedelta(days=1)
)

# Get commands that took longer than 10 seconds
slow_commands = CommandExecution.objects.filter(
    duration__gt=10.0
)

# Get run-once commands
one_time_commands = CommandExecution.objects.filter(run_once=True)

# Aggregate statistics
from django.db.models import Avg, Max, Min, Count

stats = CommandExecution.objects.aggregate(
    total=Count('id'),
    avg_duration=Avg('duration'),
    max_duration=Max('duration'),
    min_duration=Min('duration')
)
```
## Contributing
Contributions are welcome! Report bugs or request features via [GitHub Issues](https://github.com/sbenemerito/django-managed-commands/issues) 🙏
1. **Fork the repository** and create a new branch for your feature or bugfix
2. **Write tests** for any new functionality
3. **Follow code style**: Ensure your code follows PEP 8 and Django best practices. Use [ruff](https://github.com/astral-sh/ruff) for linting.
4. **Update documentation**: Add or update relevant documentation for your changes
5. **Submit a pull request**: Provide a clear description of your changes
*Note: Yes, I'm currently pushing directly to main - I know, I know. When contributors come around, I'll enforce proper branch protection and PR workflows 🙇♂️*
### Development Setup
```bash
# Clone the repository
git clone https://github.com/yourusername/django-managed-commands.git
cd django-managed-commands
# Run tests
uv run pytest -v
```
## License
This project is licensed under the MIT License. See the [LICENSE](LICENSE) file for details.
| text/markdown | null | Sam Benemerito <me@sambenemerito.com> | null | null | MIT | null | [
"Development Status :: 5 - Production/Stable",
"Environment :: Web Environment",
"Framework :: Django",
"Framework :: Django :: 3.2",
"Framework :: Django :: 4.0",
"Framework :: Django :: 4.1",
"Framework :: Django :: 4.2",
"Framework :: Django :: 5.0",
"Framework :: Django :: 6.0",
"Intended Audi... | [] | null | null | >=3.8 | [] | [] | [] | [
"django>=3.2",
"pytest-cov>=4.0; extra == \"dev\"",
"pytest-django>=4.5; extra == \"dev\"",
"pytest>=7.0; extra == \"dev\"",
"ruff>=0.1.0; extra == \"dev\""
] | [] | [] | [] | [
"Homepage, https://github.com/sbenemerito/django-managed-commands",
"Repository, https://github.com/sbenemerito/django-managed-commands",
"Issues, https://github.com/sbenemerito/django-managed-commands/issues"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-19T06:23:09.322357 | django_managed_commands-0.4.1.tar.gz | 17,188 | c5/aa/230a5b7cc20519cb438b43c2c83f7b20a2c35e1cdc7434b1d7a1eddfa636/django_managed_commands-0.4.1.tar.gz | source | sdist | null | false | 580183720db5e05dc20ee28f123ea155 | 8dc250f62d97ecee6fa1c65566c4f02bc9759d929556d11cd74fc8abf65038fc | c5aa230a5b7cc20519cb438b43c2c83f7b20a2c35e1cdc7434b1d7a1eddfa636 | null | [
"LICENSE"
] | 276 |
2.4 | aiq-platform-api | 1.0.38 | Utility functions for AttackIQ Platform API usage | # AttackIQ Platform API
> ⚠️ **Beta** - Under active development. APIs subject to change. Feedback: rajesh.sharma@attackiq.com | Access: Request invite to AttackIQ GitHub.
Tools for interacting with the AttackIQ Platform API:
- **Python SDK** (`aiq-platform-api`) - Async library for Python applications
- **CLI** (`aiq`) - Command-line interface
---
## Python SDK
Install from PyPI:
```sh
pip install aiq-platform-api
```
### Usage
```python
import asyncio
from aiq_platform_api import AttackIQClient, Scenarios, Assets
async def main():
    async with AttackIQClient(
        "https://your-platform.attackiq.com",
        "your-api-token"
    ) as client:
        # Search scenarios
        result = await Scenarios.search_scenarios(client, query="powershell", limit=10)
        print(f"Found {result['count']} scenarios")
        # List assets
        async for asset in Assets.get_assets(client, limit=5):
            print(asset["hostname"])

asyncio.run(main())
```
---
## Configuration
Both the SDK and CLI require these environment variables:
```sh
export ATTACKIQ_PLATFORM_URL="https://your-platform.attackiq.com"
export ATTACKIQ_PLATFORM_API_TOKEN="your-api-token"
```
Or create a `.env` file in your working directory (auto-loaded).
---
## CLI
### Quick Install (Recommended)
#### Linux / macOS
```sh
GITHUB_TOKEN="your_token" sh -c 'curl -fsSL -H "Authorization: token $GITHUB_TOKEN" \
https://raw.githubusercontent.com/AttackIQ/aiq-platform-api/main/install.sh | sh'
```
**Add to PATH** (first time only):
```sh
echo 'export PATH="$HOME/.local/bin:$PATH"' >> ~/.zshrc # or ~/.bashrc
```
Auto-detects OS/arch, installs to `~/.local/bin` (no sudo).
#### Windows (Native)
**PowerShell installer:**
```powershell
$env:GITHUB_TOKEN = "your_token"
Invoke-WebRequest -Uri "https://raw.githubusercontent.com/AttackIQ/aiq-platform-api/main/install.ps1" -Headers @{Authorization="token $env:GITHUB_TOKEN"} -OutFile "$env:TEMP\install.ps1"
powershell -ExecutionPolicy Bypass -File "$env:TEMP\install.ps1"
```
Installs to `%LOCALAPPDATA%\Programs\aiq` and adds to PATH automatically.
### Usage
```sh
# List available commands
aiq --help
# List assessments
aiq assessments list
# Search assets
aiq assets search --query "hostname"
# Get scenario details
aiq scenarios get --scenario-id "abc123"
```
### Shell Completion
The CLI supports shell completion for bash, zsh, fish, and PowerShell.
#### Bash
**Current session:**
```sh
source <(aiq completion bash)
```
**Permanent installation:**
```sh
# Linux
aiq completion bash | sudo tee /etc/bash_completion.d/aiq
# macOS
aiq completion bash > $(brew --prefix)/etc/bash_completion.d/aiq
```
#### Zsh
**Current session:**
```sh
source <(aiq completion zsh)
```
**Permanent installation:**
```sh
# Add to ~/.zshrc
echo "source <(aiq completion zsh)" >> ~/.zshrc
# Or install to completions directory
aiq completion zsh > "${fpath[1]}/_aiq"
```
#### Fish
**Current session:**
```sh
aiq completion fish | source
```
**Permanent installation:**
```sh
aiq completion fish > ~/.config/fish/completions/aiq.fish
```
#### PowerShell
**Current session:**
```powershell
aiq completion powershell | Out-String | Invoke-Expression
```
**Permanent installation:**
Add the following to your PowerShell profile:
```powershell
aiq completion powershell | Out-String | Invoke-Expression
```
## Contributing
We welcome feedback and contributions! For detailed contribution guidelines, please see [CONTRIBUTING.md](CONTRIBUTING.md).
Quick ways to contribute:
- Open issues for bugs or feature requests
- Submit pull requests
- Provide feedback on the API design
## License
MIT License - See LICENSE file for details | text/markdown | Rajesh Sharma | rajesh.sharma@attackiq.com | null | null | MIT | null | [
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12"
] | [] | null | null | <3.13,>=3.11 | [] | [] | [] | [
"python-dotenv<2.0.0,>=1.0.1",
"httpx<1.0,>=0.27",
"tenacity>=8.2.3",
"ipython<9.0.0,>=7.34.0",
"pyzipper<0.4.0,>=0.3.6"
] | [] | [] | [] | [] | poetry/2.2.1 CPython/3.11.8 Linux/6.6.87.2-microsoft-standard-WSL2 | 2026-02-19T06:22:57.359456 | aiq_platform_api-1.0.38.tar.gz | 71,279 | 92/3b/aef74adcf21c715109fad3eada4ff7088569845932387dda58f1704526af/aiq_platform_api-1.0.38.tar.gz | source | sdist | null | false | f244a48064edc422479c71858c9194c0 | c89e078b38a5aa2b60c7c4c86e235cc031ac67d79c854d3991b82f71f3f5a5b8 | 923baef74adcf21c715109fad3eada4ff7088569845932387dda58f1704526af | null | [] | 244 |
2.4 | chargebee | 3.18.2 | Python wrapper for the Chargebee Subscription Billing API | # Chargebee Python Client Library v3
> [!NOTE]
> [](https://discord.gg/S3SXDzXHAg)
>
> We are trialing a Discord server for developers building with Chargebee. Limited spots are open on a first-come basis. Join [here](https://discord.gg/gpsNqnhDm2) if interested.
This is the official Python library for integrating with Chargebee.
- 📘 For a complete reference of available APIs, check out our [API Documentation](https://apidocs.chargebee.com/docs/api/?lang=python).
- 🧪 To explore and test API capabilities interactively, head over to our [API Explorer](https://api-explorer.chargebee.com).
If you're upgrading from an older version, please refer to the [Migration Guide](https://github.com/chargebee/chargebee-python/wiki/Migration-guide-for-v3).
## Requirements
- Python 3.11+
## Installation
Install the latest version of the library with pip:
```sh
pip install chargebee
```
Install from source with:
```sh
python setup.py install
```
## Documentation
See our [Python API Reference](https://apidocs.chargebee.com/docs/api?lang=python "API Reference").
## Usage
The package needs to be configured with your site's API key, which is available under the Configure Chargebee section. Refer [here](https://www.chargebee.com/docs/2.0/api_keys.html) for more details.
### Configuring chargebee client
```python
from chargebee import Chargebee
cb_client = Chargebee(api_key="", site="")
```
### Configuring Timeouts
```python
from chargebee import Chargebee
cb_client = Chargebee(api_key="api_key", site="site")
cb_client.update_read_timeout_secs(3000)
cb_client.update_connect_timeout_secs(5000)
```
### Configuring Retry Delays
```python
from chargebee import Chargebee
cb_client = Chargebee(api_key="api_key", site="site")
cb_client.update_export_retry_delay_ms(3000)
cb_client.update_time_travel_retry_delay_ms(5000)
```
### Making an API Request:
```python
# Create a Customer
response = cb_client.Customer.create(
    cb_client.Customer.CreateParams(
        first_name="John",
        last_name="Doe",
        email="john@test.com",
        locale="fr-CA",
        billing_address=cb_client.Customer.BillingAddress(
            first_name="John",
            last_name="Doe",
            line1="PO Box 9999",
            city="Walnut",
            state="California",
            zip="91789",
            country="US",
        ),
    )
)
customer = response.customer
card = response.card
```
### Async HTTP client
Starting with version `3.9.0`, the Chargebee Python SDK can optionally be configured to use an asynchronous HTTP client which uses `asyncio` to perform non-blocking requests. This can be enabled by passing the `use_async_client=True` argument to the constructor:
```python
cb_client = Chargebee(api_key="api_key", site="site", use_async_client=True)
```
When configured to use the async client, all model methods return a coroutine, which will have to be awaited to get the response:
```python
async def get_customers():
    response = await cb_client.Customer.list(
        cb_client.Customer.ListParams(
            first_name=Filters.StringFilter(IS="John")
        )
    )
    return response
Note: The async methods will have to be wrapped in an event loop during invocation. For example, the `asyncio.run` method can be used to run the above example:
```python
import asyncio
response = asyncio.run(get_customers())
```
### List API Request With Filter
Pagination uses the `offset` parameter. Its value must be the `next_offset` value returned by the previous API call.
```python
from chargebee import Filters
response = cb_client.Customer.list(
    cb_client.Customer.ListParams(
        first_name=Filters.StringFilter(IS="John")
    )
)
offset = response.next_offset
print(offset)
```
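The `offset`/`next_offset` contract can be exercised with a plain loop. The sketch below is illustrative only: `fetch` stands in for `cb_client.Customer.list`, and the page data is invented.

```python
# Fake pages keyed by offset: None means "first page", and a None
# next_offset means there are no further pages. Only the offset /
# next_offset contract mirrors the docs above; the data is a stand-in.
pages = {
    None: (["cust_1", "cust_2"], "off_1"),
    "off_1": (["cust_3"], None),
}

def fetch(offset):
    # Stand-in for cb_client.Customer.list(...): returns (items, next_offset)
    return pages[offset]

collected, offset = [], None
while True:
    items, offset = fetch(offset)
    collected.extend(items)
    if offset is None:
        break

print(collected)  # ['cust_1', 'cust_2', 'cust_3']
```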
### Using enums
There are two variants of enums in Chargebee:
- Global enums - These are defined globally and can be accessed across resources.
- Resource specific enums - These are defined within a resource and can be accessed using the resource class name.
```python
# Global Enum
import chargebee
response = cb_client.Customer.create(
    cb_client.Customer.CreateParams(
        first_name="John",
        auto_collection=chargebee.AutoCollection.ON,  # global enum
    )
)
print(response.customer)
```
```python
# Resource Specific Enum
response = cb_client.Customer.change_billing_date(
    cb_client.Customer.ChangeBillingDateParams(
        first_name="John",
        billing_day_of_week=cb_client.Customer.BillingDayOfWeek.MONDAY,  # resource specific enum
    )
)
print(response.customer)
```
### Using custom fields
```python
response = cb_client.Customer.create(
    cb_client.Customer.CreateParams(
        first_name="John",
        cf_host_url="https://john.com",  # `cf_host_url` is a custom field in Customer object
    )
)
print(response.customer.cf_host_url)
```
### Creating an idempotent request:
[Idempotency keys](https://apidocs.chargebee.com/docs/api/idempotency?prod_cat_ver=2) are passed along with request headers to allow a safe retry of POST requests.
```python
response = cb_client.Customer.create(
    cb_client.Customer.CreateParams(
        first_name="John",
        last_name="Doe",
        email="john@test.com",
        locale="fr-CA",
        billing_address=cb_client.Customer.BillingAddress(
            first_name="John",
            last_name="Doe",
            line1="PO Box 9999",
            city="Walnut",
            state="California",
            zip="91789",
            country="US",
        ),
    ),
    None,
    {
        "chargebee-idempotency-key": "<<UUID>>"
    },  # Replace <<UUID>> with a unique string
)
customer = response.customer
card = response.card
responseHeaders = response.headers # Retrieves response headers
print(responseHeaders)
idempotencyReplayedValue = response.is_idempotency_replayed # Retrieves Idempotency replayed header value
print(idempotencyReplayedValue)
```
### Waiting for Process Completion
The response from the previous API call must be passed as an argument to `wait_for_export_completion()` or `wait_for_time_travel_completion()`.
```python
# Wait For Export Completion
from chargebee import Filters
response = cb_client.Export.customers(
    cb_client.Export.CustomersParams(
        customer=cb_client.Export.CustomersCustomerParams(
            first_name=Filters.StringFilter(IS="John")
        )
    )
)
print(cb_client.Export.wait_for_export_completion(response.export))
```
### Retry Handling
Chargebee's SDK includes built-in retry logic to handle temporary network issues and server-side errors. This feature is **disabled by default** but can be **enabled when needed**.
#### Key features include:
- **Automatic retries for specific HTTP status codes**: Retries are automatically triggered for status codes `500`, `502`, `503`, and `504`.
- **Exponential backoff**: Retry delays increase exponentially to prevent overwhelming the server.
- **Rate limit management**: If a `429 Too Many Requests` response is received with a `Retry-After` header, the SDK waits for the specified duration before retrying.
> *Note: Exponential backoff and max retries do not apply in this case.*
- **Customizable retry behavior**: Retry logic can be configured using the `retryConfig` parameter in the environment configuration.
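As a rough illustration of how exponential backoff grows the wait between attempts, here is a standalone sketch; the doubling formula is an assumption for illustration, not the SDK's documented internals:

```python
def backoff_delays_ms(base_delay_ms, max_retries):
    # Each successive attempt waits twice as long (illustrative formula)
    return [base_delay_ms * (2 ** attempt) for attempt in range(max_retries)]

print(backoff_delays_ms(1000, 5))  # [1000, 2000, 4000, 8000, 16000]
```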
#### Example: Customizing Retry Logic
You can enable and configure the retry logic by passing a `retryConfig` object when initializing the Chargebee environment:
```python
from chargebee import Chargebee
from chargebee.retry_config import RetryConfig
retry_config = RetryConfig(
    enabled=True,
    max_retries=5,
    delay_ms=1000,
    retry_on=[500]
)
cb_client = Chargebee(api_key="api_key", site="site")
cb_client.update_retry_config(retry_config)
# ... your Chargebee API operations ...
```
#### Example: Rate limit retry logic
You can enable and configure retry logic for rate limits by passing a `retryConfig` object when initializing the Chargebee environment:
```python
from chargebee import Chargebee
from chargebee.retry_config import RetryConfig
retry_config = RetryConfig(
    enabled=True,
    max_retries=5,
    delay_ms=1000,
    retry_on=[429]
)
cb_client = Chargebee(api_key="api_key", site="site")
cb_client.update_retry_config(retry_config)
# ... your Chargebee API operations ...
```
## Feedback
If you find any bugs or have any feedback, open an issue in this repository or email dx@chargebee.com.
## License
See the [LICENSE](./LICENSE) file.
| text/markdown | Chargebee | dx@chargebee.com | null | null | null | null | [] | [] | https://apidocs.chargebee.com/docs/api?lang=python | null | >=3.11 | [] | [] | [] | [
"httpx"
] | [] | [] | [] | [] | twine/6.1.0 CPython/3.13.7 | 2026-02-19T06:22:14.170357 | chargebee-3.18.2.tar.gz | 263,117 | df/98/63549204c27ca5a46baebe581289d47b23fa5359c08b74801c4c3c4992ba/chargebee-3.18.2.tar.gz | source | sdist | null | false | c371af8b4f9befdae08cd44459745c50 | 8997ca6415830dfe98505cac3cc06b0e3b2a343d364a2ed0edf8ead8469ec6ed | df9863549204c27ca5a46baebe581289d47b23fa5359c08b74801c4c3c4992ba | null | [
"LICENSE"
] | 6,277 |
2.4 | wilco | 0.4.0 | Serve React components from Python backends | # wilco
**Server-defined React components for Python backends.**
[](https://pypi.org/project/wilco/)
[](https://pypi.org/project/wilco/)
[](https://github.com/msqd/wilco/actions/workflows/cicd.yml)
[](https://python-wilco.readthedocs.io/)
[](LICENSE.md)
**Documentation:** [FastAPI Guide](https://python-wilco.readthedocs.io/en/latest/how-to/fastapi.html) | [Django Guide](https://python-wilco.readthedocs.io/en/latest/how-to/django.html) | [Flask Guide](https://python-wilco.readthedocs.io/en/latest/how-to/flask.html) | [Starlette Guide](https://python-wilco.readthedocs.io/en/latest/how-to/starlette.html)
## Features
- **Co-locate components with backend logic** — Keep UI components next to the Python code that powers them
- **No frontend build pipeline** — Components bundled on-the-fly with esbuild when requested
- **Full source map support** — Debug TypeScript directly in browser devtools
- **Component composition** — Components can dynamically load other components
- **Framework agnostic** — Works with FastAPI, Django, Flask, Starlette, or any ASGI/WSGI-compatible framework
## Quick Start
```bash
pip install wilco[fastapi] # or wilco[django], wilco[flask], wilco[starlette]
```
### Create a component
```
my_components/
└── greeting/
    ├── __init__.py
    ├── index.tsx
    └── schema.json
```
```tsx
// index.tsx
interface GreetingProps {
name: string;
formal?: boolean;
}
export default function Greeting({ name, formal = false }: GreetingProps) {
const message = formal ? `Good day, ${name}.` : `Hey ${name}!`;
return <p>{message}</p>;
}
```
### Mount the API
```python
from pathlib import Path
from fastapi import FastAPI
from wilco import ComponentRegistry
from wilco.bridges.fastapi import create_router
app = FastAPI()
registry = ComponentRegistry(Path("./my_components"))
app.include_router(create_router(registry), prefix="/api")
```
### Load in React
```tsx
import { useComponent } from '@wilcojs/react';
function App() {
  const Greeting = useComponent('greeting');
  return <Greeting name="World" />;
}
```
For component schemas, composition patterns, and framework-specific guides, see the [documentation](https://python-wilco.readthedocs.io/).
## API Endpoints
| Endpoint | Description |
|----------|-------------|
| `GET /api/bundles` | List available components |
| `GET /api/bundles/{name}.js` | Get bundled JavaScript |
| `GET /api/bundles/{name}/metadata` | Get component metadata |
## Requirements
- Python 3.10+
- Node.js (for esbuild bundling)
- React 18+ on the frontend
## Development
This project follows strict TDD methodology.
```bash
make test # Run all tests
make docs # Build documentation
make help # Show all available commands
```
## License
Makersquad Source License 1.0 — see [LICENSE.md](LICENSE.md) for details.
Free for non-commercial use. Commercial use requires a license.
Contact licensing@makersquad.fr for inquiries.
| text/markdown | null | null | null | null | # Makersquad Source License 1.0
## Acceptance
By using the software, you agree to all of the terms and conditions below.
## Definitions
**"License"** means this Makersquad Source License.
**"Licensor"** means Makersquad, the copyright holder offering these terms.
**"Software"** means the software the Licensor makes available under these terms,
including any portion of it.
**"You"** means the individual or entity agreeing to these terms.
**"Your Company"** means any legal entity, sole proprietorship, or other kind of
organization that you work for, plus all organizations that have control over,
are under the control of, or are under common control with that organization.
Control means ownership of substantially all the assets of an entity, or the
power to direct its management and policies by vote, contract, or otherwise.
**"Commercial Purpose"** means any use intended for or directed toward commercial
advantage or monetary compensation. This includes, but is not limited to:
- Using the Software in a product or service you sell
- Using the Software to provide paid services to third parties
- Using the Software in internal business operations that generate revenue
- Using the Software to develop commercial products, even if the Software itself
is not distributed
**"Competing Use"** means making the Software available to third parties as a
commercial product or service that substitutes for the Software, or using the
Software to develop a product or service that competes with the Software.
**"Change Date"** means the date that is five (5) years after the release date
of each version of the Software.
**"Change License"** means the Apache License, Version 2.0, as published by the
Apache Software Foundation.
## License Grant
Subject to the terms and conditions of this License, the Licensor grants you a
non-exclusive, royalty-free, worldwide, non-sublicensable, non-transferable
license to use, copy, modify, and distribute the Software, for any purpose that
is not a Commercial Purpose and is not a Competing Use.
## Permitted Uses
You **may**, without a commercial license:
1. **Evaluate** the Software to determine whether it meets your needs
2. **Use** the Software for personal, educational, or research purposes
3. **Read** and study the source code to learn from it
4. **Modify** the Software and create derivative works for non-commercial purposes
5. **Contribute** patches, bug fixes, and improvements back to the project
6. **Use** the Software in non-profit organizations for non-commercial activities
## Prohibited Uses
You **may not**, without a commercial license or written exception:
1. Use the Software for any Commercial Purpose
2. Use the Software for any Competing Use
3. Remove or obscure any licensing, copyright, or other notices
4. Use the Licensor's trademarks without permission
## Open Source and Non-Profit Exceptions
If you wish to use the Software in an open source project or for non-profit
purposes that may have indirect commercial implications, you must obtain a
written exception from the Licensor. Such exceptions may be granted at the
Licensor's sole discretion and may include additional terms.
To request an exception, contact: licensing@makersquad.fr
## Commercial Licenses
Commercial licenses are available for use of the Software in commercial products
and services. Commercial licenses may include:
- Technical support
- Bug fixes and security updates
- Priority feature development
For commercial licensing inquiries, contact: licensing@makersquad.fr
## Change Date License Conversion
On the Change Date for each version of the Software, this License will
automatically convert to the Change License for that version only. After the
Change Date:
- That specific version becomes available under the Change License
- You may use that version for any purpose, including Commercial Purpose
- Newer versions released after the Change Date remain under this License until
their own Change Date
For clarity: if you obtain a commercial license for a version, you may continue
using that version indefinitely. Five years after that version's release, you
may also use it under the Change License without the commercial license.
## Notices
You must ensure that anyone who gets a copy of any part of the Software from
you also gets a copy of these terms, or the URL for this License:
https://github.com/msqd/wilco/blob/main/LICENSE.md
## No Trademark Rights
This License does not grant you any right to use the Licensor's name, logo,
or trademarks.
## Patents
The Licensor grants you a license, under any patent claims the Licensor can
license or becomes able to license, to make, have made, use, sell, offer for
sale, import and have imported the Software, only for the purposes permitted
under this License. This license does not cover any patent claims that you
cause to be infringed by modifications or additions to the Software.
## No Other Rights
These terms do not imply any licenses other than those expressly granted in
this License.
## Termination
If you violate any terms of this License, your rights under it terminate
immediately. The Licensor may, at its sole discretion, reinstate your rights
upon written notice.
## No Liability
***To the maximum extent permitted by applicable law, the Software is provided
"as is", without any warranty or condition of any kind, express or implied,
including but not limited to warranties of merchantability, fitness for a
particular purpose, or non-infringement.***
***To the maximum extent permitted by applicable law, the Licensor shall not be
liable for any direct, indirect, incidental, special, consequential, or
exemplary damages arising out of or in connection with these terms or the use
of the Software, regardless of the cause of action or the theory of liability.***
***Nothing in this License shall limit or exclude liability that cannot be
limited or excluded under applicable law, including mandatory consumer
protection laws.***
## Moral Rights
The Licensor retains all moral rights in the Software as recognized under
applicable law, including under French intellectual property law (Code de la
propriété intellectuelle). These rights are personal, perpetual, inalienable,
and imprescriptible.
## Severability
If any provision of this License is held to be invalid, illegal, or
unenforceable, the remaining provisions shall continue in full force and
effect. The invalid provision shall be modified to the minimum extent necessary
to make it valid and enforceable while preserving the parties' original intent.
## Governing Law and Jurisdiction
This License shall be governed by and construed in accordance with the laws of
France, without regard to its conflict of law provisions. Any dispute arising
out of or in connection with this License shall be submitted to the exclusive
jurisdiction of the courts of Paris, France.
---
Copyright (c) 2025-present Makersquad
For licensing inquiries: licensing@makersquad.fr | components, django, esbuild, fastapi, react, typescript | [
"Development Status :: 3 - Alpha",
"Framework :: Django",
"Framework :: FastAPI",
"Framework :: Flask",
"Intended Audience :: Developers",
"License :: Other/Proprietary License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12"... | [] | null | null | >=3.11 | [] | [] | [] | [
"django>=4.2.0; extra == \"dev\"",
"fastapi>=0.115.0; extra == \"dev\"",
"flask>=3.0.0; extra == \"dev\"",
"furo>=2024.0.0; extra == \"dev\"",
"httpx>=0.27.0; extra == \"dev\"",
"pytest-asyncio>=0.24.0; extra == \"dev\"",
"pytest-benchmark>=4.0.0; extra == \"dev\"",
"pytest>=8.0.0; extra == \"dev\"",
... | [] | [] | [] | [
"Homepage, https://github.com/msqd/wilco",
"Documentation, https://python-wilco.readthedocs.io/",
"Repository, https://github.com/msqd/wilco",
"Issues, https://github.com/msqd/wilco/issues"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-19T06:21:01.607468 | wilco-0.4.0.tar.gz | 4,063,319 | b0/45/de60891317766c8b0a826ad4b911016e4ea2d7268b33f98c4270705317db/wilco-0.4.0.tar.gz | source | sdist | null | false | 6dbd25b7118d8faab319646b4157a163 | da5657771a0ac8f0d06a15b7707bab82354c24767546d9725348c6e5610e3def | b045de60891317766c8b0a826ad4b911016e4ea2d7268b33f98c4270705317db | null | [
"LICENSE.md"
] | 232 |
2.4 | deadends-dev | 0.4.0 | Structured failure knowledge for AI agents — dead ends, workarounds, error chains | # deadends.dev
<!-- mcp-name: dev.deadends/deadends-dev -->
[](https://deadends.dev)
[](https://deadends.dev)
[](https://smithery.ai/server/deadend/deadends-dev)
[](https://pypi.org/project/deadends-dev/)
[](LICENSE)
**Structured failure knowledge for AI coding agents.**
1028 error entries across 20 domains. When AI coding agents encounter errors, they waste tokens on approaches that are known to fail. deadends.dev tells agents what NOT to try, what actually works, and what error comes next.
> **Website:** [deadends.dev](https://deadends.dev) · **MCP Server:** [Smithery](https://smithery.ai/server/deadend/deadends-dev) · **PyPI:** [deadends-dev](https://pypi.org/project/deadends-dev/) · **API:** [/api/v1/index.json](https://deadends.dev/api/v1/index.json)
## Installation
```bash
pip install deadends-dev
```
**Requirements:** Python 3.10+
## MCP Server
The MCP server exposes 8 tools for AI coding agents:
| Tool | Description |
|------|-------------|
| `lookup_error` | Match an error message against 1028 known patterns. Returns dead ends, workarounds, and error chains. |
| `get_error_detail` | Get full details for a specific error by ID (e.g., `python/modulenotfounderror/py311-linux`). |
| `list_error_domains` | List all 20 error domains and their counts. |
| `search_errors` | Fuzzy keyword search across all domains (e.g., "memory limit", "permission denied"). |
| `list_errors_by_domain` | List all errors in a specific domain, sorted by fix rate, name, or confidence. |
| `batch_lookup` | Look up multiple error messages at once (max 10). |
| `get_domain_stats` | Get quality metrics for a domain: avg fix rate, resolvability, confidence breakdown. |
| `get_error_chain` | Traverse the error transition graph: what errors follow, precede, or get confused with this one. |
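The traversal that `get_error_chain` performs can be pictured as a walk over `leads_to` edges. Here is a minimal self-contained sketch with a hypothetical graph: the error IDs and edges are invented, and only the `leads_to` idea comes from the ErrorCanon format.

```python
# Hypothetical transition graph: error ID -> errors it commonly leads to
graph = {
    "python/modulenotfounderror": ["pip/conflicting-deps"],
    "pip/conflicting-deps": ["pip/build-wheel-failed"],
    "pip/build-wheel-failed": [],
}

def error_chain(start, graph):
    # Depth-first walk with a cycle guard
    seen = []
    stack = [start]
    while stack:
        node = stack.pop()
        if node in seen:
            continue
        seen.append(node)
        stack.extend(graph.get(node, []))
    return seen

print(error_chain("python/modulenotfounderror", graph))
# ['python/modulenotfounderror', 'pip/conflicting-deps', 'pip/build-wheel-failed']
```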
### Local (Claude Desktop / Cursor)
Add to `~/.claude/claude_desktop_config.json`:
```json
{
  "mcpServers": {
    "deadend": {
      "command": "python",
      "args": ["-m", "mcp.server"],
      "cwd": "/path/to/deadends.dev"
    }
  }
}
```
### Hosted (Smithery — no local setup)
Install via [Smithery](https://smithery.ai/server/deadend/deadends-dev):
```bash
# Claude Code
npx -y @smithery/cli@latest install deadend/deadends-dev --client claude
# Cursor
npx -y @smithery/cli@latest install deadend/deadends-dev --client cursor
```
Or connect directly: `https://server.smithery.ai/deadend/deadends-dev`
### Example Response
When an agent encounters `ModuleNotFoundError: No module named 'torch'`, the `lookup_error` tool returns:
```
## ModuleNotFoundError: No module named 'X' (Python 3.11+)
Resolvable: true | Fix rate: 0.88
### Dead Ends (DO NOT TRY):
- pip install X with system Python (fails 70%): venv not activated
### Workarounds (TRY THESE):
- Create venv, activate, then pip install (works 95%)
- Use python -m pip install instead of bare pip (works 90%)
```
## Quick Start — Python SDK
```python
from generator.lookup import lookup, batch_lookup, search

# Single error lookup
result = lookup("ModuleNotFoundError: No module named 'torch'")

# What NOT to try (saves tokens and time)
for d in result["dead_ends"]:
    print(f"AVOID: {d['action']} — fails {int(d['fail_rate']*100)}%")

# What actually works
for w in result["workarounds"]:
    print(f"TRY: {w['action']} — works {int(w['success_rate']*100)}%")

# Batch lookup (multiple errors at once)
results = batch_lookup([
    "ModuleNotFoundError: No module named 'torch'",
    "CUDA error: out of memory",
    "CrashLoopBackOff",
])

# Keyword search
hits = search("memory limit", domain="docker", limit=5)
```
## Quick Start — CLI
```bash
pip install deadends-dev
deadends "CUDA error: out of memory"
deadends --list # show all known errors
```
## API Endpoints
| Endpoint | Description |
|----------|-------------|
| [`/api/v1/match.json`](https://deadends.dev/api/v1/match.json) | Lightweight regex matching (fits in context window) |
| [`/api/v1/index.json`](https://deadends.dev/api/v1/index.json) | Full error index with all metadata |
| [`/api/v1/{domain}/{slug}/{env}.json`](https://deadends.dev/api/v1/python/modulenotfounderror/py311-linux.json) | Individual ErrorCanon ([example](https://deadends.dev/api/v1/python/modulenotfounderror/py311-linux.json)) |
| [`/api/v1/openapi.json`](https://deadends.dev/api/v1/openapi.json) | OpenAPI 3.1 spec with response examples |
| [`/api/v1/stats.json`](https://deadends.dev/api/v1/stats.json) | Dataset quality metrics by domain |
| [`/api/v1/errors.ndjson`](https://deadends.dev/api/v1/errors.ndjson) | NDJSON streaming (one error per line) |
| [`/api/v1/version.json`](https://deadends.dev/api/v1/version.json) | Service metadata and endpoint directory |
| [`/llms.txt`](https://deadends.dev/llms.txt) | LLM-optimized error listing ([llmstxt.org](https://llmstxt.org) standard) |
| [`/llms-full.txt`](https://deadends.dev/llms-full.txt) | Complete database dump |
| [`/.well-known/ai-plugin.json`](https://deadends.dev/.well-known/ai-plugin.json) | AI plugin manifest |
| [`/.well-known/agent-card.json`](https://deadends.dev/.well-known/agent-card.json) | Google A2A agent card |
| [`/.well-known/security.txt`](https://deadends.dev/.well-known/security.txt) | Security contact (RFC 9116) |
## Covered Domains (20)
| Domain | Errors | Examples |
|--------|--------|----------|
| Python | 85 | ModuleNotFoundError, TypeError, KeyError, MemoryError, RecursionError |
| Node | 67 | ERR_MODULE_NOT_FOUND, EACCES, EADDRINUSE, heap OOM, ERR_REQUIRE_ESM |
| Docker | 63 | no space left, exec format error, bind address in use, healthcheck |
| Kubernetes | 61 | CrashLoopBackOff, ImagePullBackOff, OOMKilled, RBAC forbidden, HPA |
| Git | 59 | failed to push, merge conflicts, detached HEAD, stash apply, tags |
| Go | 53 | nil pointer, unused import, interface conversion, slice out of range |
| Java | 52 | NullPointerException, ClassNotFound, OutOfMemoryError, connection pool |
| Database | 52 | deadlock, connection pool, slow query, replication lag, constraint violation |
| AWS | 51 | AccessDenied, S3 NoSuchBucket, Lambda timeout, CloudFormation rollback |
| TypeScript | 49 | TS2307, TS2322, TS2345, TS2532, TS7053, TS2769, TS18048 |
| Rust | 47 | E0382 borrow, E0308 mismatch, E0277 trait, E0106 lifetime, E0507 |
| PHP | 47 | headers already sent, too many connections, autoload, memory exhaustion |
| CUDA | 47 | OOM, device-side assert, NCCL, cuDNN, tensor device mismatch |
| Terraform | 46 | state lock, cycle, provider not found, moved block, backend init |
| CI/CD | 46 | GitHub Actions timeout, secret not found, Docker rate limit, cache miss |
| React | 44 | invalid hook call, too many re-renders, unique key, context, act() |
| Next.js | 44 | hydration failed, dynamic server, server-only import, RSC serialization |
| Networking | 42 | connection refused, ECONNRESET, SSL certificate, DNS timeout, EPIPE |
| pip | 41 | build wheel failed, conflicting deps, externally-managed, hash mismatch |
| .NET | 32 | NullReferenceException, LINQ translation, DI circular, EF concurrency |
## ErrorCanon Data Format
Each error is a JSON file with:
```json
{
"error": { "signature": "...", "regex": "...", "domain": "..." },
"verdict": { "resolvable": "true|partial|false", "fix_success_rate": 0.88 },
"dead_ends": [{ "action": "...", "why_fails": "...", "fail_rate": 0.75 }],
"workarounds": [{ "action": "...", "success_rate": 0.92, "how": "..." }],
"transition_graph": { "leads_to": [...], "preceded_by": [...] }
}
```
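As a minimal illustration of how an agent might consume this format, the sketch below matches an error message against the `regex` field of a few in-memory entries. The sample signatures are hypothetical stand-ins; real canon data is served at `/api/v1/match.json` and `/api/v1/{id}.json`.

```python
import re

# Hypothetical entries shaped like the ErrorCanon format above;
# real data comes from /api/v1/match.json.
CANONS = [
    {"error": {"signature": "python-modulenotfounderror",
               "regex": r"ModuleNotFoundError: No module named '(\w+)'",
               "domain": "Python"}},
    {"error": {"signature": "node-eaddrinuse",
               "regex": r"EADDRINUSE.*:(\d+)",
               "domain": "Node"}},
]

def match_error(message):
    """Return the signature of the first canon whose regex matches."""
    for canon in CANONS:
        if re.search(canon["error"]["regex"], message):
            return canon["error"]["signature"]
    return None

print(match_error("ModuleNotFoundError: No module named 'requests'"))
# → python-modulenotfounderror
```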
## AI Coding Agent Integration — 18 Discovery Formats
Every page on deadends.dev includes machine-readable data in 18 formats:
| Format | Location | Purpose |
|--------|----------|---------|
| JSON API | `/api/v1/{id}.json` | RESTful error data per ErrorCanon |
| match.json | `/api/v1/match.json` | Compact regex-only file (load entire DB into context) |
| index.json | `/api/v1/index.json` | Master error index with metadata |
| stats.json | `/api/v1/stats.json` | Dataset quality metrics per domain |
| errors.ndjson | `/api/v1/errors.ndjson` | Streaming NDJSON for batch processing |
| OpenAPI | `/api/v1/openapi.json` | Full API spec with response examples |
| JSON-LD | Every `<head>` | Schema.org TechArticle + FAQPage |
| ai-summary | Every page | `<pre id="ai-summary">` KEY=VALUE blocks |
| llms.txt | `/llms.txt` | llmstxt.org standard |
| llms-full.txt | `/llms-full.txt` | Complete database dump |
| ai-plugin.json | `/.well-known/` | OpenAI plugin manifest |
| agent-card.json | `/.well-known/` | Google A2A protocol |
| security.txt | `/.well-known/` | RFC 9116 security contact |
| robots.txt | `/robots.txt` | 34 AI crawlers explicitly welcomed |
| CLAUDE.md | `/CLAUDE.md` | Claude Code instructions |
| AGENTS.md | `/AGENTS.md` | OpenAI Codex CLI instructions |
| .clinerules | `/.clinerules` | Cline AI instructions |
## Development
```bash
pip install -e ".[dev]"
# Full pipeline (validate → generate → build → test)
python -m generator.pipeline
# Individual steps
python -m generator.bulk_generate # Generate canons from seeds
python -m generator.build_site # Build static site
python -m generator.validate # Validate data + site
python -m pytest tests/ -v # Run tests
```
## Contributing
Add error definitions to `generator/bulk_generate.py` or create JSON files directly in `data/canons/`.
```bash
python -m generator.validate --data-only # Validate before submitting
```
## License
MIT (code) · CC BY 4.0 (data)
<!-- mcp-name: io.github.dbwls99706/deadends-dev -->
| text/markdown | YuJin Hong | null | null | null | MIT | error, debugging, dead-end, workaround, ai-agent, mcp, error-handling, troubleshooting | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"Topic :: Software Development :: Debuggers",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"jinja2>=3.1",
"pytest>=7.0; extra == \"dev\"",
"ruff>=0.4; extra == \"dev\"",
"jsonschema>=4.20; extra == \"dev\"",
"jsonschema>=4.20; extra == \"mcp\"",
"jsonschema>=4.20; extra == \"pipeline\"",
"requests>=2.31; extra == \"pipeline\"",
"anthropic>=0.25; extra == \"pipeline\""
] | [] | [] | [] | [
"Homepage, https://deadends.dev",
"Repository, https://github.com/dbwls99706/deadends.dev",
"API, https://deadends.dev/api/v1/index.json"
] | twine/6.2.0 CPython/3.12.10 | 2026-02-19T06:20:56.477653 | deadends_dev-0.4.0.tar.gz | 1,038,170 | 8c/aa/8008b3667d5e801f9b8197f80720a11420884b60bad6228fb4e007b74c77/deadends_dev-0.4.0.tar.gz | source | sdist | null | false | c014c08750520df81c32587729623f28 | ed18d17edaa809852f48ef382b18f7a0f555b97b283574e160dd59f4e51cf16e | 8caa8008b3667d5e801f9b8197f80720a11420884b60bad6228fb4e007b74c77 | null | [
"LICENSE"
] | 230 |
2.4 | smolpack | 0.0.1 | Multidimensional Quadrature Using Sparse Grids | # smolpack
Multidimensional Quadrature Using Sparse Grids
| text/markdown | null | null | null | null | null | null | [] | [] | null | null | >=3.10 | [] | [] | [] | [] | [] | [] | [] | [] | uv/0.9.28 {"installer":{"name":"uv","version":"0.9.28","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":null} | 2026-02-19T06:19:47.925741 | smolpack-0.0.1-py3-none-any.whl | 10,941 | d3/27/393598e43ec8b4c4222f75409779ccdf3e94e08096fc6c4122439bf23a52/smolpack-0.0.1-py3-none-any.whl | py3 | bdist_wheel | null | false | 71b20e4f1b394c43a9688281563f0980 | fb029bd963be5e2e5a164dafcb9f14743439f26725980cd377f3cc993872cd67 | d327393598e43ec8b4c4222f75409779ccdf3e94e08096fc6c4122439bf23a52 | null | [
"LICENSE"
] | 247 |
2.3 | mcp-zen-of-languages | 0.2.0 | Multi-language architectural and idiomatic code analysis via CLI and MCP server. | <p align="center">
<img src="https://github.com/Anselmoo/mcp-zen-of-languages/blob/59dcb31c4c3f38547f4a212be58704825177df19/docs/assets/logo.png" alt="MCP Zen of Languages" width="460" />
</p>
<h1 align="center">Zen of Languages</h1>
<p align="center">
<em>🖌️ Write code the way the language intended.</em>
</p>
<p align="center">
<a href="https://pypi.org/project/mcp-zen-of-languages"><img src="https://img.shields.io/pypi/v/mcp-zen-of-languages?style=flat-square&color=989cff" alt="PyPI"></a>
<a href="https://pypi.org/project/mcp-zen-of-languages"><img src="https://img.shields.io/pypi/pyversions/mcp-zen-of-languages?style=flat-square" alt="Python"></a>
<a href="https://github.com/Anselmoo/mcp-zen-of-languages/blob/main/LICENSE"><img src="https://img.shields.io/github/license/Anselmoo/mcp-zen-of-languages?style=flat-square" alt="License"></a>
<a href="https://github.com/Anselmoo/mcp-zen-of-languages/actions"><img src="https://img.shields.io/github/actions/workflow/status/Anselmoo/mcp-zen-of-languages/cicd.yml?style=flat-square&label=CI" alt="CI"></a>
<a href="https://anselmoo.github.io/mcp-zen-of-languages/"><img src="https://img.shields.io/badge/docs-mkdocs-c9b3ff?style=flat-square" alt="Docs"></a>
</p>
<p align="center">
<a href="https://insiders.vscode.dev/redirect/mcp/install?name=zen-of-languages&config=%7B%22command%22%3A%22uvx%22%2C%22args%22%3A%5B%22--from%22%2C%22mcp-zen-of-languages%22%2C%22zen-mcp-server%22%5D%7D"><img src="https://img.shields.io/badge/VS_Code-Install_MCP-007ACC?style=flat-square&logo=visualstudiocode&logoColor=white" alt="Install in VS Code"></a>
<a href="https://insiders.vscode.dev/redirect/mcp/install?name=zen-of-languages&config=%7B%22command%22%3A%22uvx%22%2C%22args%22%3A%5B%22--from%22%2C%22mcp-zen-of-languages%22%2C%22zen-mcp-server%22%5D%7D&quality=insiders"><img src="https://img.shields.io/badge/VS_Code_Insiders-Install_MCP-24bfa5?style=flat-square&logo=visualstudiocode&logoColor=white" alt="Install in VS Code Insiders"></a>
<a href="https://github.com/Anselmoo/mcp-zen-of-languages/pkgs/container/mcp-zen-of-languages"><img src="https://img.shields.io/badge/Docker-GHCR-2496ED?style=flat-square&logo=docker&logoColor=white" alt="Docker"></a>
</p>
---
Multi-language architectural and idiomatic code analysis, exposed as an **MCP server** and a **CLI**. Zen of Languages codifies idiomatic best practices ("zen principles") for 14 languages into machine-readable rules, then detects violations automatically — so AI agents and developers get actionable, language-aware feedback in every review.
<!-- --8<-- [start:what-you-get] -->
- **151 zen principles** across 14 languages
- **163 focused detectors** with severity scoring
- **MCP server** for IDE and agent workflows (13 tools, 3 resources, 1 prompt)
- **CLI reports** with remediation prompts and JSON / Markdown export
- **Rule-driven pipelines** configurable per language and project
<!-- --8<-- [end:what-you-get] -->
## Quickstart
<!-- --8<-- [start:quickstart] -->
```bash
# MCP server (IDE/agent workflows)
uvx --from mcp-zen-of-languages zen-mcp-server
# CLI without installing (recommended)
uvx --from mcp-zen-of-languages zen --help
# Or install globally
pip install mcp-zen-of-languages
# Analyze a file (CLI)
zen report path/to/file.py
# Analyze a project with remediation prompts (CLI)
zen report path/to/project --include-prompts
```
<!-- --8<-- [end:quickstart] -->
## Naming Guide
Keep these names distinct to avoid setup confusion:
- **Package name**: `mcp-zen-of-languages` (for `pip install` and `uvx --from`)
- **CLI command**: `zen`
- **MCP server command**: `zen-mcp-server`
- **MCP client server key**: `zen-of-languages` (JSON config label in VS Code/Claude/Cursor)
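For clients without a one-click install button, the equivalent manual configuration (decoded from the one-click install links) looks roughly like this; note that the top-level key varies by client (`mcpServers` shown here is the Claude Desktop convention, while VS Code uses `servers`):

```json
{
  "mcpServers": {
    "zen-of-languages": {
      "command": "uvx",
      "args": ["--from", "mcp-zen-of-languages", "zen-mcp-server"]
    }
  }
}
```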
## Installation
### One-Click (VS Code)
<!-- --8<-- [start:vscode-integration] -->
| Method | VS Code | VS Code Insiders |
| --------------------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| **UVX** (native) | [](https://insiders.vscode.dev/redirect/mcp/install?name=zen-of-languages&config=%7B%22command%22%3A%22uvx%22%2C%22args%22%3A%5B%22--from%22%2C%22mcp-zen-of-languages%22%2C%22zen-mcp-server%22%5D%7D) | [](https://insiders.vscode.dev/redirect/mcp/install?name=zen-of-languages&config=%7B%22command%22%3A%22uvx%22%2C%22args%22%3A%5B%22--from%22%2C%22mcp-zen-of-languages%22%2C%22zen-mcp-server%22%5D%7D&quality=insiders) |
| **Docker** (isolated) | [](https://insiders.vscode.dev/redirect/mcp/install?name=zen-of-languages&config=%7B%22command%22%3A%22docker%22%2C%22args%22%3A%5B%22run%22%2C%22--rm%22%2C%22-i%22%2C%22ghcr.io/anselmoo/mcp-zen-of-languages%3Alatest%22%5D%7D) | [](https://insiders.vscode.dev/redirect/mcp/install?name=zen-of-languages&config=%7B%22command%22%3A%22docker%22%2C%22args%22%3A%5B%22run%22%2C%22--rm%22%2C%22-i%22%2C%22ghcr.io/anselmoo/mcp-zen-of-languages%3Alatest%22%5D%7D&quality=insiders) |
<!-- --8<-- [end:vscode-integration] -->
### Docker
```bash
# CLI via Docker
docker run --rm ghcr.io/anselmoo/mcp-zen-of-languages:latest zen --help
# MCP server via Docker
docker run --rm -i ghcr.io/anselmoo/mcp-zen-of-languages:latest
```
### From Source
```bash
git clone https://github.com/Anselmoo/mcp-zen-of-languages.git
cd mcp-zen-of-languages
uv sync --all-groups --all-extras
# Start the MCP server
zen-mcp-server
# Run a CLI report
zen report path/to/file.py
```
## MCP Tools
The server exposes **13 tools**, **3 resources**, and **1 prompt** for AI-assisted code analysis.
| Family | Tools | Purpose |
| ----------------- | ------------------------------------------------------------------------------ | --------------------------------------------- |
| **Analysis** | `analyze_zen_violations`, `analyze_repository`, `check_architectural_patterns` | Idiomatic and structural analysis |
| **Reporting** | `generate_prompts`, `generate_agent_tasks`, `generate_report` | Remediation guidance, task lists, gap reports |
| **Configuration** | `get_config`, `set_config_override`, `clear_config_overrides` | Read and tune thresholds at runtime |
| **Metadata** | `detect_languages`, `get_supported_languages`, `export_rule_detector_mapping` | Discover languages, rules, detector coverage |
| **Onboarding** | `onboard_project` | Initialize `zen-config.yaml` for a project |
See the full [MCP Tools Reference](https://anselmoo.github.io/mcp-zen-of-languages/user-guide/mcp-tools-reference/) for parameters, return types, and workflow diagrams.
### Use Cases
1. **AI Code Review** — Call `analyze_zen_violations` on a file, then `generate_prompts` for remediation instructions in a single editor round-trip.
2. **Project-Wide Gap Analysis** — `analyze_repository` scans a codebase, `generate_report` produces a Markdown/JSON report, and `generate_agent_tasks` creates a prioritised fix list.
3. **One-Click Onboarding** — `onboard_project` detects languages and writes a tuned `zen-config.yaml`, making analysis immediately project-aware.
## Supported Languages
| Tier | Languages | Notes |
| ---------------- | -------------------------------- | --------------------------------------- |
| **Stable** | Python | Full parser + richest detector coverage |
| **Beta** | TypeScript, Go, Rust, JavaScript | Rule-driven pipelines, partial parsing |
| **Experimental** | Bash, PowerShell, Ruby, C++, C# | Heuristic detectors |
| **Data/Config** | YAML, TOML, JSON, XML | Structure and schema checks |
## Configuration
Analysis pipelines are derived from language zen rules and merged with project overrides in `zen-config.yaml`. See the [Configuration Guide](https://anselmoo.github.io/mcp-zen-of-languages/user-guide/configuration/) for the full reference.
```bash
# Generate reports in multiple formats
zen report path/to/project --export-json report.json --export-markdown report.md
```
## Documentation
Full documentation is available at **[anselmoo.github.io/mcp-zen-of-languages](https://anselmoo.github.io/mcp-zen-of-languages/)**.
## Contributing
See [Adding a Language](https://anselmoo.github.io/mcp-zen-of-languages/contributing/adding-language/) and [Development Guide](https://anselmoo.github.io/mcp-zen-of-languages/contributing/development/) to get started.
## License
[MIT](https://github.com/Anselmoo/mcp-zen-of-languages/blob/main/LICENSE)
---
<p align="center">
<img src="https://github.com/Anselmoo/mcp-zen-of-languages/blob/59dcb31c4c3f38547f4a212be58704825177df19/docs/assets/social-card-github.png" alt="Zen garden — sumi-e landscape" width="100%" />
</p>
| text/markdown | Anselm Hahn | Anselm Hahn <Anselm.Hahn@gmail.com> | null | null | null | mcp, code-analysis, code-quality, linting, static-analysis | [
"Development Status :: 4 - Beta",
"Environment :: Console",
"Intended Audience :: Information Technology",
"Programming Language :: Python :: 3 :: Only",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: 3.14",
"Intended Audience :: ... | [] | null | null | >=3.12 | [] | [] | [] | [
"fastmcp>=2.14.4",
"networkx>=3.6.1",
"pydantic>=2.12.5",
"pygments>=2.19.2",
"radon>=6.0.1",
"typer>=0.12.0",
"tree-sitter>=0.25.2",
"pyfiglet>=1.0.0; extra == \"tui\""
] | [] | [] | [] | [
"Homepage, https://github.com/Anselmoo/mcp-zen-of-languages",
"Documentation, https://anselmoo.github.io/mcp-zen-of-languages/",
"Issues, https://github.com/Anselmoo/mcp-zen-of-languages/issues",
"Source, https://github.com/Anselmoo/mcp-zen-of-languages"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-19T06:18:01.848970 | mcp_zen_of_languages-0.2.0.tar.gz | 255,365 | f2/97/536fc83fa0b7c7927e44162a715c2db4de41fbd1f880d4e0a14c13efa439/mcp_zen_of_languages-0.2.0.tar.gz | source | sdist | null | false | f0db3f85e022d9715f295391d475e910 | 35ebb8cca5db77ba0f3618de818dcf8dd1755fc87c89471462429f1d14bf99f6 | f297536fc83fa0b7c7927e44162a715c2db4de41fbd1f880d4e0a14c13efa439 | null | [] | 274 |
2.4 | sparse | 0.18.0 | Sparse n-dimensional arrays for the PyData ecosystem | 
# Sparse Multidimensional Arrays
[](
https://github.com/pydata/sparse/actions/workflows/ci.yml)
[](
http://sparse.pydata.org/en/latest/?badge=latest)
[](
https://codecov.io/gh/pydata/sparse)
## This library provides multi-dimensional sparse arrays.
- 📚 [Documentation](http://sparse.pydata.org)
- 🙌 [Contributing](https://github.com/pydata/sparse/blob/main/docs/contributing.md)
- 🪲 [Bug Reports/Feature Requests](https://github.com/pydata/sparse/issues)
- 💬 [Discord Server](https://discord.gg/vur45CbwMz) [Channel](https://discord.com/channels/786703927705862175/1301155724646289420)
| text/markdown | null | null | null | Hameer Abbasi <hameerabbasi@yahoo.com> | BSD 3-Clause License
Copyright (c) 2018, Sparse developers
All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are met:
* Redistributions of source code must retain the above copyright notice, this
list of conditions and the following disclaimer.
* Redistributions in binary form must reproduce the above copyright notice,
this list of conditions and the following disclaimer in the documentation
and/or other materials provided with the distribution.
* Neither the name of the copyright holder nor the names of its
contributors may be used to endorse or promote products derived from
this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
| sparse, numpy, scipy, dask | [
"Development Status :: 2 - Pre-Alpha",
"Operating System :: OS Independent",
"License :: OSI Approved :: BSD License",
"Programming Language :: Python",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: P... | [] | null | null | >=3.11 | [] | [] | [] | [
"numpy>=1.17",
"numba>=0.49",
"mkdocs-material; extra == \"docs\"",
"mkdocstrings[python]; extra == \"docs\"",
"mkdocs-gen-files; extra == \"docs\"",
"mkdocs-literate-nav; extra == \"docs\"",
"mkdocs-section-index; extra == \"docs\"",
"mkdocs-jupyter; extra == \"docs\"",
"sparse[extras]; extra == \"... | [] | [] | [] | [
"Documentation, https://sparse.pydata.org/",
"Source, https://github.com/pydata/sparse/",
"Repository, https://github.com/pydata/sparse.git",
"Issue Tracker, https://github.com/pydata/sparse/issues",
"Discussions, https://github.com/pydata/sparse/discussions"
] | twine/6.2.0 CPython/3.14.3 | 2026-02-19T06:17:55.376503 | sparse-0.18.0.tar.gz | 791,987 | 56/64/46da3957f8f9af03179eca946d786c56f22b9458cedf472850e02948a8c1/sparse-0.18.0.tar.gz | source | sdist | null | false | 7c7dc7b32b69c0034d1c90489c37a6c9 | 57f92661eb0ec0c764b450c72f3c0d869ea7f32e5e4ca0a335f9d6a7d79bbff4 | 566446da3957f8f9af03179eca946d786c56f22b9458cedf472850e02948a8c1 | null | [
"LICENSE"
] | 38,611 |
2.4 | datadog-checks-base | 37.31.0 | The Datadog Check Toolkit | # Datadog Checks Base
[![Latest PyPI version][1]][7]
[![Supported Python versions][2]][7]
## Overview
This package provides the Python bits needed by the [Datadog Agent][4]
to run Agent-based Integrations (also known as _Checks_).
This package is used in two scenarios:
1. When used from within the Python interpreter embedded in the Agent, it
provides all the base classes and utilities needed by any Check.
2. When installed in a local environment with a regular Python interpreter, it
mocks the presence of a running Agent so checks can work in standalone mode,
mostly useful for testing and development.
Please refer to the [docs][5] for details.
## Installation
Checks from [integrations-core][6] already use the toolkit transparently when you
run their tests, but you can also install the toolkit locally and experiment with it:
```shell
pip install datadog-checks-base
```
## Performance Optimizations
We strive to balance lean resource usage with a "batteries included" user experience.
We employ a few tricks to achieve this.
One of them is the [lazy-loader][9] library that allows us to expose a nice API (simple, short imports) without the baseline memory overhead of importing everything all the time.
Another trick is to import some of our dependencies inside functions that use them instead of the more conventional import section at the top of the file. We rely on this the most in the `AgentCheck` base class.
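The deferred-import pattern can be sketched in a few lines; this is an illustration of the technique, not the actual `AgentCheck` source:

```python
def render_payload(data):
    # The import happens on first call instead of at module load time,
    # so code paths that never serialize a payload skip the import cost.
    import json
    return json.dumps(data, sort_keys=True)

print(render_payload({"check": "uptime", "value": 1}))
```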
## Troubleshooting
Need help? Contact [Datadog support][8].
[1]: https://img.shields.io/pypi/v/datadog-checks-base.svg
[2]: https://img.shields.io/pypi/pyversions/datadog-checks-base.svg
[4]: https://github.com/DataDog/datadog-agent
[5]: https://datadoghq.dev/integrations-core/base/about/
[6]: https://github.com/DataDog/integrations-core
[7]: https://pypi.org/project/datadog-checks-base/
[8]: https://docs.datadoghq.com/help/
[9]: https://github.com/scientific-python/lazy-loader
| text/markdown | null | Datadog <packages@datadoghq.com> | null | null | null | agent, checks, datadog | [
"Development Status :: 5 - Production/Stable",
"Intended Audience :: Developers",
"Intended Audience :: System Administrators",
"License :: OSI Approved :: BSD License",
"Programming Language :: Python :: 3.13",
"Topic :: System :: Monitoring"
] | [] | null | null | null | [] | [] | [] | [
"mmh3==5.2.0; extra == \"db\"",
"binary==1.0.2; extra == \"deps\"",
"cachetools==6.2.0; extra == \"deps\"",
"cryptography==46.0.5; extra == \"deps\"",
"ddtrace==3.19.5; extra == \"deps\"",
"jellyfish==1.2.0; extra == \"deps\"",
"lazy-loader==0.4; extra == \"deps\"",
"prometheus-client==0.22.1; extra =... | [] | [] | [] | [
"Source, https://github.com/DataDog/integrations-core"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-19T06:17:18.346443 | datadog_checks_base-37.31.0-py2.py3-none-any.whl | 280,442 | 57/19/f3721d235d57e825969e48993d5380e1edfd0f13af1fddd8b80b94f9b935/datadog_checks_base-37.31.0-py2.py3-none-any.whl | py2.py3 | bdist_wheel | null | false | 1c9e317eaad463a0ea6c1d8fd941c6e9 | dc76d81f2334775849da94e7df83cc1c233a2dbe325b986b18d66143a8290645 | 5719f3721d235d57e825969e48993d5380e1edfd0f13af1fddd8b80b94f9b935 | BSD-3-Clause | [] | 25,837 |
2.4 | magpie-mags | 1.3.3 | Mags SDK - Execute scripts on Magpie's instant VM infrastructure | # Mags Python SDK
Execute scripts on [Magpie's](https://mags.run) instant VM infrastructure from Python.
## Install
```bash
pip install magpie-mags
```
## Quick Start
```python
from mags import Mags
m = Mags(api_token="your-token")
# Run a script and wait for the result
result = m.run_and_wait("echo 'Hello from a VM!'")
print(result["status"]) # "completed"
print(result["exit_code"]) # 0
for log in result["logs"]:
print(log["message"])
```
## Authentication
Pass `api_token` directly, or set one of these environment variables:
```bash
export MAGS_API_TOKEN="your-token"
# or
export MAGS_TOKEN="your-token"
```
## Usage
### Run a Script
```python
# Fire-and-forget
job = m.run("apt install -y ffmpeg && ffmpeg -version")
print(job["request_id"])
# Run and wait for completion
result = m.run_and_wait(
"python3 -c 'print(sum(range(100)))'",
timeout=30,
)
```
### Persistent Workspaces
```python
# First run: creates the workspace
m.run_and_wait(
"pip install pandas && echo 'setup done'",
workspace_id="my-project",
persistent=True,
)
# Second run: reuses the workspace (pandas is already installed)
m.run_and_wait(
"python3 -c 'import pandas; print(pandas.__version__)'",
workspace_id="my-project",
)
```
### Always-On VMs
```python
# VM that never auto-sleeps — stays running 24/7
job = m.run(
"python3 server.py",
workspace_id="my-api",
persistent=True,
no_sleep=True,
)
# Auto-recovers if the host goes down
```
### Enable URL / SSH Access
```python
job = m.run("python3 -m http.server 8080", persistent=True)
# HTTP access
access = m.enable_access(job["request_id"], port=8080)
print(access["url"])
# SSH access
ssh = m.enable_access(job["request_id"], port=22)
print(f"ssh root@{ssh['ssh_host']} -p {ssh['ssh_port']}")
```
### Upload Files
```python
file_ids = m.upload_files(["data.csv", "config.json"])
result = m.run_and_wait(
"ls /uploads && wc -l /uploads/data.csv",
file_ids=file_ids,
)
```
### Cron Jobs
```python
cron = m.cron_create(
name="nightly-backup",
cron_expression="0 0 * * *",
script="tar czf /workspace/backup.tar.gz /data",
workspace_id="backups",
)
jobs = m.cron_list()
m.cron_update(cron["id"], enabled=False)
m.cron_delete(cron["id"])
```
### Check Usage
```python
usage = m.usage(window_days=7)
print(f"Jobs: {usage['total_jobs']}, VM seconds: {usage['vm_seconds']:.0f}")
```
## API Reference
| Method | Description |
|--------|-------------|
| `run(script, **opts)` | Submit a job (`persistent`, `no_sleep`, `workspace_id`, ...) |
| `run_and_wait(script, **opts)` | Submit and block until done |
| `new(name, **opts)` | Create a persistent VM workspace |
| `find_job(name_or_id)` | Find a running/sleeping job by name or workspace |
| `exec(name_or_id, command)` | Run a command on an existing VM via SSH |
| `stop(name_or_id)` | Stop a running job |
| `resize(workspace, disk_gb)` | Resize a workspace's disk |
| `status(request_id)` | Get job status |
| `logs(request_id)` | Get job logs |
| `list_jobs(page, page_size)` | List recent jobs |
| `update_job(request_id, startup_command)` | Update job config |
| `enable_access(request_id, port)` | Enable URL or SSH access |
| `usage(window_days)` | Get usage summary |
| `upload_file(path)` | Upload a file, returns file ID |
| `upload_files(paths)` | Upload multiple files |
| `cron_create(**opts)` | Create a cron job |
| `cron_list()` | List cron jobs |
| `cron_get(id)` | Get a cron job |
| `cron_update(id, **updates)` | Update a cron job |
| `cron_delete(id)` | Delete a cron job |
## Links
- Website: [mags.run](https://mags.run)
- Node.js SDK: `npm install @magpiecloud/mags`
- CLI: `go install` or download from releases
| text/markdown | Magpie Cloud | null | null | null | MIT | magpie, mags, vm, microvm, cloud, serverless, sandbox | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.8",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Pyt... | [] | null | null | >=3.8 | [] | [] | [] | [
"requests>=2.25.0"
] | [] | [] | [] | [
"Homepage, https://mags.run",
"Repository, https://github.com/magpiecloud/mags",
"Issues, https://github.com/magpiecloud/mags/issues"
] | twine/6.0.1 CPython/3.9.6 | 2026-02-19T06:15:57.456249 | magpie_mags-1.3.3.tar.gz | 8,029 | 1b/82/ea7ab7fc94ad4f04fcb3a850a95346ac3ce1f3946fdae04a1aa7b3a5abb1/magpie_mags-1.3.3.tar.gz | source | sdist | null | false | 82e92271423235dc3d06d682414cc9a5 | e2d1b074d655db6481fb640105ba1d0475916b4382ca83c33b1973e3cc35da31 | 1b82ea7ab7fc94ad4f04fcb3a850a95346ac3ce1f3946fdae04a1aa7b3a5abb1 | null | [] | 245 |
2.4 | aiops-sdk | 0.1.6 | AIOps Platform SDK — exception capture, heartbeat, and Flask integration | # AIOps SDK
Python SDK for the [AIOps Platform](https://aiops.1genimpact.cloud) — automatic exception capture, heartbeat monitoring, and Flask integration.
## Installation
```bash
pip install aiops-sdk # core only
pip install "aiops-sdk[flask]" # with Flask integration
```
## Quick Start
```python
import aiops_sdk
from aiops_sdk.integrations.flask import init_flask
# Initialise once at startup (reads AIOPS_API_KEY from env if not passed)
aiops_sdk.init(api_key="your-api-key")
# Register Flask integration (call after your Flask app is created)
init_flask(app)
```
That's it. The SDK will automatically:
- Report unhandled exceptions before the process exits
- Capture every HTTP 4xx/5xx error response (extracts the real error from your JSON response body)
- Forward ERROR and CRITICAL log records as incidents
- Send heartbeats every 30 seconds so the platform knows the service is alive
## Environment Variables
| Variable | Required | Default | Description |
|---|---|---|---|
| `AIOPS_API_KEY` | **Yes** | — | Platform API key |
| `AIOPS_SERVICE_NAME` | No | `unknown-service` | Service name shown in incidents |
| `AIOPS_ENV` | No | `production` | Environment (`production` / `staging` / etc.) |
| `AIOPS_PLATFORM_URL` | No | `https://aiops.1genimpact.cloud` | Override if self-hosting the platform |
| `AIOPS_SERVICE_BASE_URL` | No | auto-detected | This service's own base URL (used for automated fix callbacks) |
| `AIOPS_SKIP_PATHS` | No | — | Extra paths to exclude from monitoring (comma-separated, e.g. `/admin/health,/internal/ping`) |
## What Gets Captured
| Source | Mechanism | Notes |
|---|---|---|
| Unhandled exceptions | `sys.excepthook` replacement | Blocks until delivered (sync send) |
| Flask exceptions | `got_request_exception` signal | Full stack trace preserved |
| HTTP 4xx/5xx responses | `after_request` hook | Extracts real error from `jsonify` body |
| ERROR / CRITICAL logs | Root logging handler | Stack trace from `exc_info` when available |
## What's Excluded (Built-in Noise Filtering)
The SDK is designed for zero false positives in production:
- **`OPTIONS` and `HEAD` requests** — CORS preflights and HTTP HEAD probes are never application errors
- **Health-check paths** — `/health`, `/healthz`, `/ping`, `/ready`, `/alive`, `/readiness`, `/liveness`, `/metrics`, `/status`, `/favicon.ico`, `/robots.txt`
- **Static files** — `.js`, `.css`, `.ico`, `.png`, `.jpg`, `.svg`, `.woff`, `.woff2`, `.ttf`, `.map`, and more
- **HTTP 404** — "path not found" is expected client behaviour, not a server bug
- **HTTP 429** — rate-limiting is working as intended, not an error state
- **Werkzeug access log noise** — bot/scanner traffic logged by the HTTP server layer
- **`urllib3` / `requests` library errors** — connection pool noise when the platform itself is temporarily unreachable
- **SDK's own logs** — prevents feedback loops if the background worker logs a failure
### Adding Custom Exclusions
```bash
# Exclude additional paths without code changes:
export AIOPS_SKIP_PATHS=/admin/health,/internal/ping,/ops/ready
```
## How It Works
```
Flask request
│
├─► got_request_exception ──► capture_exception() ──► send_async() ──► queue
│ (real exceptions, full stack trace)
│
└─► after_request hook
│
├─ [noise filter] OPTIONS / HEAD → skip
├─ [noise filter] /health → skip
├─ [noise filter] .js/.css → skip
├─ [dedup guard] already captured by signal → skip
│
└─ status in {400,401,403,500,502,503,504}
│
├─ extract real error from jsonify({message/error/detail})
├─ infer error type (AttributeError, KeyError, DatabaseError, …)
└─► send_async() ──► queue
Background worker (daemon thread)
└─ drains queue → POST /v1/sdk/exception (retry 3×, backoff 1s/2s)
```
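The retry-with-backoff step of the background worker can be sketched as follows; the function names here are illustrative, not the SDK's actual internals:

```python
import time

def send_with_retry(send, payload, retries=3, backoffs=(1, 2)):
    """Try send(payload) up to `retries` times, sleeping 1s then 2s
    between attempts, and re-raise the last failure."""
    for attempt in range(retries):
        try:
            return send(payload)
        except Exception:
            if attempt == retries - 1:
                raise
            time.sleep(backoffs[min(attempt, len(backoffs) - 1)])
```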
## License
MIT
| text/markdown | null | null | null | null | MIT | aiops, monitoring, exception, observability, devops | [
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.8",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"License :: OSI Approved :: MIT License",
"Operating Sy... | [] | null | null | >=3.8 | [] | [] | [] | [
"requests>=2.28.0",
"Flask>=2.0.0; extra == \"flask\""
] | [] | [] | [] | [
"Homepage, https://pypi.org/project/aiops-sdk/",
"Source, https://github.com/arnav1/aiops-sdk"
] | twine/6.2.0 CPython/3.13.2 | 2026-02-19T06:14:52.888771 | aiops_sdk-0.1.6.tar.gz | 14,025 | 84/7f/68849ee0618c544c2c6de28892a2b043c058ba203334078478e4b392d796/aiops_sdk-0.1.6.tar.gz | source | sdist | null | false | 2435ff1d0b56cecd967bfc8cdde340f2 | f427ca984eaaa0a5730ccf4bd840cb54a3c2c94ca243b7bf2b9591a38974224c | 847f68849ee0618c544c2c6de28892a2b043c058ba203334078478e4b392d796 | null | [] | 259 |
2.4 | fts-mcp | 0.0.11 | Full-text search with MCP and REST API support | # Full Text Search MCP
A full-text search server using Tantivy that can be used as both an MCP (Model Context Protocol) server and a Python library.
## Installation
```bash
pip install -e .
```
## Usage
### Combined FastAPI + MCP Server (one process)
```bash
fts-mcp-plus-api --config example_config.yaml
# MCP served at /mcp, REST API at the root paths (e.g., /search, /fetch)
```
### As an MCP Server (CLI)
```bash
python -m full_text_search.mcp \
--data-file data.jsonl \
--id-column id \
--text-column content \
--searchable-columns title content tags \
--description "My document search index" \
--host 0.0.0.0 \
--port 8000
```
### As a Python Library
The `FullTextSearchMCP` class can be imported and used programmatically:
```python
from full_text_search.mcp import FullTextSearchMCP
# Create and initialize the search server
server = FullTextSearchMCP()
server.initialize(
data_file="data.jsonl",
id_column="id",
text_column="content",
searchable_columns=["title", "content", "tags"],
description="My search index"
)
# Use search functionality directly
results = server.search("python programming", limit=10)
print(results)
# Retrieve specific documents
docs = server.read_documents(["doc1", "doc2"])
print(docs)
# Or run as MCP server programmatically
server.run_server(host="localhost", port=8080)
```
### Creating an MCP Server Instance
You can also create a FastMCP server instance for integration with other applications:
```python
from full_text_search.mcp import FullTextSearchMCP
server = FullTextSearchMCP()
server.initialize(
data_file="data.jsonl",
id_column="id",
text_column="content",
searchable_columns=["title", "content"],
description="My search index"
)
# Get the FastMCP server instance
mcp_server = server.create_mcp_server()
# Use mcp_server with your preferred transport
```
## Data Format
The input data should be in JSONL format (one JSON object per line):
```jsonl
{"id": "1", "title": "Python Basics", "content": "Introduction to Python programming..."}
{"id": "2", "title": "Advanced Python", "content": "Advanced concepts in Python..."}
```
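Such a file can be generated from Python dicts with the standard `json` module, one object per line. This is a generic JSONL-writing sketch, not part of the fts-mcp API:

```python
import json

# Write documents to JSONL: one JSON object per line
docs = [
    {"id": "1", "title": "Python Basics", "content": "Introduction to Python programming..."},
    {"id": "2", "title": "Advanced Python", "content": "Advanced concepts in Python..."},
]
with open("data.jsonl", "w", encoding="utf-8") as f:
    for doc in docs:
        f.write(json.dumps(doc) + "\n")

# Read it back to verify the round trip
with open("data.jsonl", encoding="utf-8") as f:
    loaded = [json.loads(line) for line in f]
print(loaded[0]["title"])  # Python Basics
```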
## Features
- **Full-text search** using Tantivy search engine
- **Modular design** - can be used as a library or standalone server
- **No global state** - all functionality encapsulated in the `FullTextSearchMCP` class
- **MCP protocol support** for integration with AI assistants
- **Flexible column mapping** - specify which columns to search and how to identify documents
- **Document retrieval** by ID for full content access
## API Methods
### `FullTextSearchMCP.initialize()`
Initialize the search index with your data.
**Parameters:**
- `data_file`: Path to JSONL file containing documents
- `id_column`: Name of column containing unique document IDs
- `text_column`: Name of main text column for content
- `searchable_columns`: List of column names to make searchable
- `description`: Description of what this search index contains
- `index_path`: Optional path for index storage (defaults to `{data_file}_index`)
### `FullTextSearchMCP.search(query, limit=5)`
Search documents and return previews.
**Parameters:**
- `query`: Search query string
- `limit`: Maximum number of results to return
**Returns:** Formatted string with search results and previews
### `FullTextSearchMCP.read_documents(document_ids)`
Retrieve full content for specific document IDs.
**Parameters:**
- `document_ids`: List of document IDs to retrieve
**Returns:** Formatted string with full document content
### `FullTextSearchMCP.create_mcp_server()`
Create a FastMCP server instance with search tools.
**Returns:** FastMCP server instance
### `FullTextSearchMCP.run_server(host="0.0.0.0", port=8000)`
Run the MCP server.
**Parameters:**
- `host`: Host address for the server
- `port`: Port number for the server
| text/markdown | null | null | null | null | null | null | [] | [] | null | null | >=3.11 | [] | [] | [] | [
"tantivy",
"pyyaml",
"fastapi",
"pydantic",
"tqdm",
"python-docx>=1.2.0",
"boto3>=1.42.28",
"fastmcp>=2.12.5; extra == \"mcp\"",
"uvicorn; extra == \"server\"",
"aiohttp; extra == \"server\"",
"mangum; extra == \"lambda\"",
"html-to-markdown; extra == \"ingest\"",
"markdownify-rs==0.1.2; ext... | [] | [] | [] | [] | twine/6.2.0 CPython/3.13.0 | 2026-02-19T06:13:46.530372 | fts_mcp-0.0.11.tar.gz | 57,538 | 3f/0c/a9d1216015d7a7c620439fdf3f17b7b251c559642a7144e798c074ba7f7c/fts_mcp-0.0.11.tar.gz | source | sdist | null | false | 9092864c8310b4e6d2819e0fcbb3783c | 13e37ef327ca479dba8e9106cfad520daf4ff71be23e50086434ec94f6e4d26a | 3f0ca9d1216015d7a7c620439fdf3f17b7b251c559642a7144e798c074ba7f7c | null | [] | 246 |
2.4 | archive-r-python | 0.1.17 | Python bindings for archive_r: libarchive-based streaming traversal for recursive nested archives (no temp files, no large in-memory buffers) | # archive_r Python Bindings
> ⚠️ **Development Status**: This library is currently under development. The API may change without notice.
## Overview
Python bindings for archive_r, a libarchive-based library for processing many archive formats.
It streams entry data directly from the source to recursively read nested archives without extracting to temporary files or loading large in-memory buffers.
The bindings expose a Pythonic iterator API with context manager support.
---
## Installation
### From PyPI
```bash
pip install archive_r_python
```
### From Source
```bash
cd archive_r/bindings/python
pip install .
```
### Development Installation (Editable Mode)
```bash
cd archive_r/bindings/python
pip install -e .
```
### Building with Parent Build Script
```bash
cd archive_r
./build.sh --with-python
```
This builds the core library and Python bindings, placing artifacts in `build/bindings/python/`.
---
## Basic Usage
### Simple Traversal
```python
import archive_r
# Context manager ensures proper resource cleanup
with archive_r.Traverser("test.zip") as traverser:
for entry in traverser:
print(f"Path: {entry.path} (depth={entry.depth})")
if entry.is_file:
print(f" Size: {entry.size} bytes")
```
### Reading Entry Content
```python
import archive_r
with archive_r.Traverser("archive.tar.gz") as traverser:
for entry in traverser:
if entry.is_file and entry.path.endswith('.txt'):
# Read full content
content = entry.read()
print(f"Content of {entry.path}:")
print(content.decode('utf-8', errors='replace'))
```
### Chunked Reading (Large Files)
```python
import archive_r
with archive_r.Traverser("large_archive.zip") as traverser:
for entry in traverser:
if entry.is_file:
# Read in 8KB chunks
chunk_size = 8192
total_bytes = 0
while True:
chunk = entry.read(chunk_size)
if not chunk:
break
total_bytes += len(chunk)
# Process chunk...
print(f"{entry.path}: {total_bytes} bytes read")
```
### Searching in Entry Content
```python
import archive_r
def search_in_entry(entry, keyword):
"""Stream search within entry content (buffer boundary aware)"""
overlap = b''
buffer_size = 8192
keyword_bytes = keyword.encode('utf-8')
while True:
chunk = entry.read(buffer_size)
if not chunk:
break
search_text = overlap + chunk
if keyword_bytes in search_text:
return True
        # Preserve the tail of the combined buffer so keywords spanning
        # chunk boundaries (even across several short chunks) are found
        tail = len(keyword_bytes) - 1
        overlap = search_text[-tail:] if tail > 0 else b''
return False
with archive_r.Traverser("documents.zip") as traverser:
for entry in traverser:
if entry.is_file and entry.path.endswith('.txt'):
if search_in_entry(entry, "important"):
print(f"Found keyword in: {entry.path}")
```
### Controlling Archive Descent
```python
import archive_r
with archive_r.Traverser("test.zip") as traverser:
for entry in traverser:
# Don't expand Office files (they are ZIP internally)
if entry.path.endswith(('.docx', '.xlsx', '.pptx')):
entry.set_descent(False)
print(f"Path: {entry.path}, Will descend: {entry.descent_enabled}")
```
You can also disable automatic descent globally:
```python
# Disable automatic descent for all entries
with archive_r.Traverser("test.zip", descend_archives=False) as traverser:
for entry in traverser:
# Manually enable descent for specific entries
if entry.path.endswith('.tar.gz'):
entry.set_descent(True)
```
> ⚠️ **Note**: Reading entry content automatically disables descent. Call `entry.set_descent(True)` if you need to descend after reading.
---
## Path Representation
The Python bindings provide three ways to access entry paths:
```python
with archive_r.Traverser("outer.zip") as traverser:
for entry in traverser:
# Full path including top-level archive
# Example: "outer.zip/inner.tar/file.txt"
print(f"path: {entry.path}")
# Last element of path_hierarchy
# Example: "inner.tar/file.txt"
print(f"name: {entry.name}")
# Path hierarchy as list
# Example: ["outer.zip", "inner.tar/file.txt"]
print(f"path_hierarchy: {entry.path_hierarchy}")
```
`path_hierarchy` is particularly useful when you need custom path separators or want to represent the nesting structure explicitly.
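For instance, rendering the nesting with a custom separator is a one-line join (using the illustrative hierarchy from the comments above):

```python
# Hierarchy list as shown above for "outer.zip/inner.tar/file.txt"
hierarchy = ["outer.zip", "inner.tar/file.txt"]

# Join with an explicit nesting marker instead of "/"
display = " => ".join(hierarchy)
print(display)  # outer.zip => inner.tar/file.txt
```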
---
## Metadata Access
### Basic Metadata
Entry objects provide common metadata through properties:
```python
with archive_r.Traverser("archive.tar") as traverser:
for entry in traverser:
print(f"Path: {entry.path}")
print(f" Type: {'file' if entry.is_file else 'directory'}")
print(f" Size: {entry.size} bytes")
print(f" Depth: {entry.depth}")
```
### Extended Metadata
For additional metadata (permissions, ownership, timestamps), specify `metadata_keys`:
```python
with archive_r.Traverser("archive.tar", metadata_keys=["uid", "gid", "mtime", "mode"]) as traverser:
for entry in traverser:
# Retrieve all specified metadata as dictionary
metadata = entry.metadata()
print(f"{entry.path}:")
print(f" UID: {metadata.get('uid')}")
print(f" GID: {metadata.get('gid')}")
print(f" Mode: {oct(metadata.get('mode', 0))}")
# Or retrieve specific metadata
mtime = entry.find_metadata("mtime")
if mtime is not None:
print(f" Modified: {mtime}")
```
Available metadata keys depend on the archive format. Common keys include:
- `uid`, `gid`: User/group ID
- `mtime`, `atime`, `ctime`: Timestamps (Unix time)
- `mode`: File permissions
- `uname`, `gname`: User/group names
- `hardlink`, `symlink`: Link targets
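Since timestamps are returned as Unix time, converting them to a readable form is a one-liner with the standard library (the timestamp below is an arbitrary example value, not real archive output):

```python
from datetime import datetime, timezone

# Example Unix timestamp as might be returned for "mtime" (arbitrary value)
mtime = 1700000000
readable = datetime.fromtimestamp(mtime, tz=timezone.utc)
print(readable.isoformat())  # 2023-11-14T22:13:20+00:00
```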
---
## Processing Split Archives
For split archive files (e.g., `.zip.001`, `.zip.002`), use `set_multi_volume_group()`:
```python
import archive_r
with archive_r.Traverser("container.tar") as traverser:
for entry in traverser:
# Detect split archive parts
if '.part' in entry.path:
# Extract base name (e.g., "archive.zip.part001" → "archive.zip")
pos = entry.path.rfind('.part')
base_name = entry.path[:pos]
entry.set_multi_volume_group(base_name)
# After parent traversal, grouped parts are merged and expanded
```
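For numeric suffixes like `.zip.001`/`.zip.002`, the base name can be derived with a regular expression instead of `rfind`. This helper is hypothetical, shown only to illustrate computing the group name passed to `set_multi_volume_group()`:

```python
import re

def volume_base_name(path):
    """Strip a trailing three-digit volume suffix, e.g. 'a.zip.001' -> 'a.zip'.

    Hypothetical helper, not part of archive_r; returns None when the
    path has no such suffix.
    """
    m = re.match(r"^(.+)\.\d{3}$", path)
    return m.group(1) if m else None

print(volume_base_name("archive.zip.001"))  # archive.zip
print(volume_base_name("notes.txt"))        # None
```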
---
## Format Specification
By default, all formats supported by libarchive are enabled. To restrict to specific formats:
```python
# Enable only ZIP and TAR
with archive_r.Traverser("test.zip", formats=["zip", "tar"]) as traverser:
for entry in traverser:
print(entry.path)
```
Common format names: `"7zip"`, `"ar"`, `"cab"`, `"cpio"`, `"iso9660"`, `"lha"`, `"rar"`, `"tar"`, `"warc"`, `"xar"`, `"zip"`
> 💡 **Tip**: Exclude pseudo-formats like `"mtree"` and `"raw"` if you encounter false positives on non-archive files.
---
## Custom Stream Factories
You can provide custom stream objects (file-like objects with a `read()` method) to override the default file-opening behavior:
```python
import archive_r
import io
# Register a custom stream factory
def custom_stream_factory(path):
"""Return a file-like object for the given path"""
if path == "special_file.bin":
# Return custom data source
return io.BytesIO(b"custom content")
# Return None to use default file opening
return None
archive_r.register_stream_factory(custom_stream_factory)
with archive_r.Traverser("test.zip") as traverser:
for entry in traverser:
# When traverser needs to open "special_file.bin",
# your factory will provide the BytesIO stream
pass
```
Stream objects must provide:
- `read(size)`: Read up to `size` bytes
- Optional: `seek(offset, whence)`, `tell()` for seekable streams
- Optional: `rewind()` (defaults to `seek(0, 0)` if not provided)
---
## Error Handling
### Fault Callbacks
Data errors (corrupted archives, I/O failures) are reported via callbacks without stopping traversal:
```python
import archive_r
def fault_handler(fault_info):
"""Called when data errors occur during traversal"""
print(f"Warning at {fault_info['hierarchy']}: {fault_info['message']}")
if fault_info.get('errno'):
print(f" Error code: {fault_info['errno']}")
archive_r.on_fault(fault_handler)
with archive_r.Traverser("potentially_corrupted.zip") as traverser:
for entry in traverser:
# Valid entries are processed normally
# Corrupted entries trigger fault_handler
print(entry.path)
```
### Read Errors
Errors during `read()` raise exceptions:
```python
try:
with archive_r.Traverser("test.zip") as traverser:
for entry in traverser:
if entry.is_file:
content = entry.read()
except RuntimeError as e:
print(f"Read error: {e}")
```
---
## Thread Safety
The Python bindings follow the same thread safety constraints as the C++ core:
- ✓ **Thread-safe**: Each thread can create and use its own `Traverser` instance independently
- ✗ **Not thread-safe**: A single `Traverser` or `Entry` instance must not be shared across threads
### Example
```python
import threading
import archive_r
# ✓ SAFE: Each thread has its own Traverser
def worker():
with archive_r.Traverser("archive.tar.gz") as traverser:
for entry in traverser:
# Process entry...
pass
t1 = threading.Thread(target=worker)
t2 = threading.Thread(target=worker)
t1.start()
t2.start()
t1.join()
t2.join()
# ✗ UNSAFE: Sharing a single Traverser instance across threads
shared_traverser = archive_r.Traverser("archive.tar.gz")
def unsafe_worker():
for entry in shared_traverser: # Race condition!
pass
# Don't do this!
# t1 = threading.Thread(target=unsafe_worker)
# t2 = threading.Thread(target=unsafe_worker)
```
Additionally:
- **Global registration functions** (`register_stream_factory`, `on_fault`) should be called during single-threaded initialization
- **Entry objects** should not be shared between threads (they are tied to the Traverser's internal state)
---
## Advanced Examples
### Full Example: Recursive Archive Analyzer
```python
import archive_r
import sys
from collections import defaultdict
def analyze_archive(archive_path):
"""Analyze archive contents and print statistics"""
stats = defaultdict(int)
file_types = defaultdict(int)
with archive_r.Traverser(archive_path, metadata_keys=["mtime"]) as traverser:
for entry in traverser:
stats['total_entries'] += 1
if entry.is_file:
stats['files'] += 1
stats['total_size'] += entry.size
# Count by extension
if '.' in entry.name:
ext = entry.name.rsplit('.', 1)[1]
file_types[ext] += 1
# Find largest file
if entry.size > stats.get('max_file_size', 0):
stats['max_file_size'] = entry.size
stats['max_file_path'] = entry.path
else:
stats['directories'] += 1
# Track maximum depth
if entry.depth > stats.get('max_depth', 0):
stats['max_depth'] = entry.depth
# Print results
print(f"\nArchive Analysis: {archive_path}")
print(f" Total entries: {stats['total_entries']}")
print(f" Files: {stats['files']}")
print(f" Directories: {stats['directories']}")
print(f" Total size: {stats['total_size']:,} bytes")
print(f" Maximum depth: {stats['max_depth']}")
if 'max_file_path' in stats:
print(f" Largest file: {stats['max_file_path']} ({stats['max_file_size']:,} bytes)")
if file_types:
print("\n File types:")
for ext, count in sorted(file_types.items(), key=lambda x: x[1], reverse=True)[:10]:
print(f" .{ext}: {count}")
if __name__ == '__main__':
if len(sys.argv) < 2:
print("Usage: python analyze.py <archive_path>")
sys.exit(1)
analyze_archive(sys.argv[1])
```
---
## Testing
Run the Python binding tests:
```bash
cd archive_r/bindings/python
python -m unittest discover test
```
Or use the project-wide test runner:
```bash
cd archive_r
./bindings/python/run_binding_tests.sh
```
---
## API Reference
### Module: `archive_r`
#### Class: `Traverser`
Constructor:
```python
Traverser(
roots, # str or list of str/list (path hierarchy)
formats=None, # list of format names (default: all)
descend_archives=True, # automatically expand archives
metadata_keys=None, # list of metadata keys to capture
passphrases=None # list of passphrases for encrypted archives
)
```
Methods:
- `__iter__()`: Returns self (iterator protocol)
- `__next__()`: Returns next `Entry` or raises `StopIteration`
- `__enter__()`: Context manager entry (returns self)
- `__exit__(exc_type, exc_val, exc_tb)`: Context manager exit
#### Class: `Entry`
Properties:
- `path`: Full path string (read-only)
- `name`: Last element of path hierarchy (read-only)
- `path_hierarchy`: List representation of path (read-only)
- `depth`: Nesting depth (read-only)
- `is_file`: True if entry is a file (read-only)
- `size`: File size in bytes, 0 for directories (read-only)
- `descent_enabled`: Whether this entry will be expanded as an archive (read-only)
Methods:
- `read(size=None)`: Read entry content (bytes). If `size` is omitted, reads all remaining data
- `set_descent(enabled)`: Enable/disable archive expansion for this entry
- `set_multi_volume_group(group_name)`: Register this entry as part of a split archive group
- `metadata()`: Return dictionary of all captured metadata
- `find_metadata(key)`: Return value for specific metadata key, or None if not found
#### Function: `register_stream_factory`
```python
archive_r.register_stream_factory(factory_func)
```
Register a callback to provide custom stream objects for file access.
**Parameters**:
- `factory_func`: Callable that takes a file path (str) and returns a file-like object or None
**Stream object requirements**:
- Must provide `read(size)` method
- Optional: `seek(offset, whence)`, `tell()`, `rewind()`
#### Function: `on_fault`
```python
archive_r.on_fault(callback)
```
Register a callback to receive fault notifications during traversal.
**Parameters**:
- `callback`: Callable that takes a dict with keys:
- `hierarchy`: List of path components where fault occurred
- `message`: Human-readable error description
- `errno`: Optional error number from system calls
---
## Packaging
### Building Wheels
```bash
cd archive_r
./build.sh --package-python
```
This creates wheel (`.whl`) and source distribution (`.tar.gz`) in `build/bindings/python/dist/`.
### Manual Packaging
```bash
cd bindings/python
python setup.py sdist bdist_wheel
```
---
## Requirements
- Python 3.8 or later
- libarchive 3.x (runtime dependency)
- setuptools, wheel (build dependencies)
- pybind11 >= 2.6.0 (build dependency, automatically vendored during packaging)
---
## License
The Python bindings are distributed under the MIT License, consistent with the archive_r core library.
### Third-Party Licenses
- **pybind11**: BSD-style License (used for C++/Python interfacing)
- **libarchive**: New BSD License (runtime dependency)
---
## See Also
- [archive_r Core Documentation](../../README.md)
- [Ruby Bindings](../ruby/README.md)
- [Example Scripts](examples/)
---
| text/markdown | archive_r Team | raizo.tcs@users.noreply.github.com | null | null | MIT | archive libarchive traversal nested multi-volume | [
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.8",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Operating System :: POSIX :: Linux",
"Operating System... | [] | https://github.com/Raizo-TCS/archive_r | null | >=3.8 | [] | [] | [] | [] | [] | [] | [] | [
"Source, https://github.com/Raizo-TCS/archive_r",
"Bug Tracker, https://github.com/Raizo-TCS/archive_r/issues",
"Documentation, https://github.com/Raizo-TCS/archive_r#readme"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-19T06:12:51.153973 | archive_r_python-0.1.17.tar.gz | 92,460 | 73/79/f1970b42ae3d98bc896b9950b6e40e13c8deb865ab411902a6fb3330a9a6/archive_r_python-0.1.17.tar.gz | source | sdist | null | false | 2096398b78ca0c180cfa23ddb2928210 | 2d56f550f661baef81c8d13265c8f5a8f63e84eb9681d7c9eb39e333a8e6332c | 7379f1970b42ae3d98bc896b9950b6e40e13c8deb865ab411902a6fb3330a9a6 | null | [
"LICENSE",
"NOTICE",
"LICENSES/libarchive-COPYING",
"LICENSES/xz-COPYING",
"LICENSES/xz-COPYING.0BSD",
"LICENSES/nettle-COPYING.LESSERv3",
"LICENSES/nettle-COPYINGv2",
"LICENSES/attr-COPYING.LGPL",
"LICENSES/acl-COPYING.LGPL",
"LICENSES/libxml2-Copyright",
"LICENSES/lz4-LICENSE",
"LICENSES/zst... | 2,060 |
2.4 | huira | 0.8.1 | Python bindings for the Huira library | 
*Huira* is a ray-tracing library for rendering large scenes and star fields, and for simulating solar radiation pressure.
[](https://github.com/huira-render/huira/actions/workflows/linux-ci-cd.yml?query=branch%3Amain)
[](https://github.com/huira-render/huira/actions/workflows/windows-ci-cd.yml?query=branch%3Amain)
[](https://github.com/huira-render/huira/actions/workflows/macos-ci-cd.yml?query=branch%3Amain)
[](https://app.codecov.io/gh/huira-render/huira/tree/main)
[](https://github.com/huira-render/huira/actions/workflows/conda-build.yml?query=branch%3Amain)
[](https://github.com/huira-render/huira/actions/workflows/python.yml?query=branch%3Amain)
***
# Features
Initial work on Huira has focused on the core architecture and on distribution/cross-platform compatibility. With much of that work now complete, new features are expected to be released in relatively short order.
If there are features you would like that are not listed here, please feel free to submit a [Feature Request](https://github.com/huira-render/huira/issues/new?template=feature_request.md).
## Currently Stable Features (as of v0.8.1)
- Radiometrically accurate unresolved rendering with calibrated camera distortion models and common camera controls
- SPICE toolkit integration for spacecraft ephemeris and reference frames
- Star field rendering with accurate celestial coordinates
- Python Bindings
- Logging and crash report generation
- API Reference Documentation (NOTE: Some docs may appear incomplete or poorly formatted)
## Features Coming Soon
- 3D mesh and material support
- Motion blur
- Camera Depth-of-Field
- Digital Elevation Maps
- Level-of-detail support
- Solar Radiation Pressure simulation
- LIDAR simulation
- TLE support
- Improved API Reference Documentation and Quick-start guides
## Long Term Plans
- Vulkan based GPU Acceleration
- Desktop application (GUI)
| text/markdown | null | null | null | null | null | null | [] | [] | null | null | >=3.9 | [] | [] | [] | [] | [] | [] | [] | [
"Homepage, https://huira.space",
"Documentation, https://docs.huira.space",
"Source, https://github.com/huira-render/huira",
"Issues, https://github.com/huira-render/huira/issues"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-19T06:12:49.127484 | huira-0.8.1-cp39-cp39-win_amd64.whl | 6,370,468 | aa/93/d9d26c4c4c9bd475adc1425da98b32d329069a6134d5e259803125eef34d/huira-0.8.1-cp39-cp39-win_amd64.whl | cp39 | bdist_wheel | null | false | 045c7ef0f9db576c14ccad2837f4d7a3 | 0d98787da30ff0c0772d080758cef7b7fea96fa6a73b92e7b3c0701e43b515ef | aa93d9d26c4c4c9bd475adc1425da98b32d329069a6134d5e259803125eef34d | MIT | [] | 1,470 |
2.4 | aacrgenie | 17.1.0 | AACR Project GENIE ETL | 
# AACR Project GENIE
[](https://pypi.org/project/aacrgenie)
[](https://github.com/orgs/sage-bionetworks/packages/container/package/genie)
[](https://github.com/Sage-Bionetworks/Genie)
## Table of Contents
- [Introduction](#introduction)
- [Documentation](#documentation)
- [Dependencies](#dependencies)
- [File Validator](#file-validator)
- [Contributing](#contributing)
- [Sage Bionetworks Only](#sage-bionetworks-only)
- [Running locally](#running-locally)
- [Using conda](#using-conda)
- [Using pipenv](#using-pipenv)
- [Using docker (**HIGHLY** Recommended)](#using-docker-highly-recommended)
- [Setting up](#setting-up)
- [Developing](#developing)
- [Developing with Docker](#developing-with-docker)
- [Modifying Docker](#modifying-docker)
- [Testing](#testing)
- [Running unit tests](#running-unit-tests)
- [Running integration tests](#running-integration-tests)
- [Production](#production)
- [Github Workflows](#github-workflows)
## Introduction
This repository documents code used to gather, QC, standardize, and analyze data uploaded by institutes participating in AACR's Project GENIE (Genomics, Evidence, Neoplasia, Information, Exchange).
## Documentation
For more information about the AACR genie repository, [visit the GitHub Pages site.](https://sage-bionetworks.github.io/Genie/)
## Dependencies
This package contains R, Python, and CLI tools. You will need the following tools and packages to reproduce these results:
- Python >=3.10 and <3.12
- `pip install -r requirements.txt`
- [bedtools](https://bedtools.readthedocs.io/en/latest/content/installation.html)
- R 4.3.3
- `renv::install()`
- Follow instructions [here](https://r-docs.synapse.org/#note-for-windows-and-mac-users) to install synapser
- [Java = 21](https://www.java.com/en/download/)
- For mac users, it seems to work better to run `brew install java`
- [wget](https://www.gnu.org/software/wget/)
- For mac users, have to run `brew install wget`
## File Validator
Please see the [local file validation tutorial](/docs/tutorials/local_file_validation.md) for more information on this and how to use it.
## Contributing
Please view [contributing guide](CONTRIBUTING.md) to learn how to contribute to the GENIE package.
# Sage Bionetworks Only
## Running locally
These are instructions on how you would setup your environment and run the pipeline locally.
1. Make sure you have read through the [GENIE Onboarding Docs](https://sagebionetworks.jira.com/wiki/spaces/APGD/pages/2163344270/Onboarding) and have access to all of the required repositories, resources and synapse projects for Main GENIE.
1. Be sure you are invited to the Synapse GENIE Admin team.
1. Make sure you are a Synapse certified user: [Certified User - Synapse User Account Types](https://help.synapse.org/docs/Synapse-User-Account-Types.2007072795.html#SynapseUserAccountTypes-CertifiedUser)
1. (**OPTIONAL** if developing with `docker`) Be sure to clone the cbioportal repo: https://github.com/cBioPortal/cbioportal and `git checkout` the version of the repo pinned to the [Dockerfile](https://github.com/Sage-Bionetworks/Genie/blob/main/Dockerfile)
1. (**OPTIONAL** if developing with `docker`) Be sure to clone the annotation-tools repo: https://github.com/Sage-Bionetworks/annotation-tools and `git checkout` the version of the repo pinned to the [Dockerfile](https://github.com/Sage-Bionetworks/Genie/blob/main/Dockerfile)
1. (**HIGHLY RECOMMENDED**) It is highly recommended to develop in an ec2-instance as the dockerfile building/other environment setup is not as stable under Mac/Windows local computer environments (specifically the dockerfile building is unstable in Mac/Windows). Follow instructions using [Service-Catalog-Provisioning](https://help.sc.sageit.org/sc/Service-Catalog-Provisioning.938836322.html) to create an ec2 on service catalog.
### Using `conda`
Follow instructions to install conda on your computer:
Install `conda-forge` and [`mamba`](https://github.com/mamba-org/mamba)
```
conda install -n base -c conda-forge mamba
```
Install Python and R versions via `mamba`
```
mamba create -n genie_dev -c conda-forge python=3.10 r-base=4.3
```
### Using `pipenv`
Installing via [pipenv](https://pipenv.pypa.io/en/latest/installation.html)
1. Specify a python version that is supported by this repo:
```
pipenv --python <python_version>
```
1. [pipenv install from requirements file](https://docs.pipenv.org/en/latest/advanced.html#importing-from-requirements-txt)
1. Activate your `pipenv`:
```
pipenv shell
```
### Using `docker` (**HIGHLY** Recommended)
This is the most reproducible method, even though it is the most tedious to develop with. See the [CONTRIBUTING docs for how to develop locally with docker](/CONTRIBUTING.md). This will set up the docker image in your environment.
1. Pull pre-existing docker image or build from Dockerfile:
Pull pre-existing docker image. You can find the list of images [from here.](https://github.com/Sage-Bionetworks/Genie/pkgs/container/genie)
```
docker pull <some_docker_image_name>
```
Build from Dockerfile.
```
docker build -f Dockerfile -t <some_docker_image_name> .
```
1. Run docker image:
```
docker run --rm -it -e SYNAPSE_AUTH_TOKEN=$YOUR_SYNAPSE_TOKEN <some_docker_image_name>
```
### Setting up
1. Clone this repo and install the package locally.
Install Python packages. This is the more traditional way of installing dependencies. Follow instructions [here](https://pip.pypa.io/en/stable/installation/) to learn how to install pip.
```
pip install -e .
pip install -r requirements.txt
pip install -r requirements-dev.txt
```
Install R packages. Note that the R package setup of this is the most unpredictable so it's likely you have to manually install specific packages first before the rest of it will install.
```
Rscript R/install_packages.R
```
1. Configure the Synapse client to authenticate to Synapse.
1. Create a Synapse [Personal Access token (PAT)](https://help.synapse.org/docs/Managing-Your-Account.2055405596.html#ManagingYourAccount-PersonalAccessTokens).
1. Add a `~/.synapseConfig` file
```
[authentication]
authtoken = <PAT here>
```
1. OR set an environmental variable
```
export SYNAPSE_AUTH_TOKEN=<PAT here>
```
1. Confirm you can log in your terminal.
```shell
synapse login
```
1. Run the different steps of the pipeline on the test project. The `--project_id syn7208886` points to the test project. You should always be using the test project when developing, testing and running locally.
1. Validate all the files **excluding vcf files**:
```
python3 bin/input_to_database.py main --project_id syn7208886 --onlyValidate
```
1. Validate **all** the files:
```
python3 bin/input_to_database.py mutation --project_id syn7208886 --onlyValidate --genie_annotation_pkg ../annotation-tools
```
1. Process all the files aside from the mutation (maf, vcf) files. The mutation processing was split because it takes at least 2 days to process all the production mutation data. Ideally, there is a parameter to exclude or include file types to process/validate, but that is not implemented.
```
python3 bin/input_to_database.py main --project_id syn7208886 --deleteOld
```
1. Process the mutation data. This command uses the `annotation-tools` repo that you cloned previously which houses the code that standardizes/merges the mutation (both maf and vcf) files and re-annotates the mutation data with genome nexus. The `--createNewMafDatabase` will create a new mutation tables in the test project. This flag is necessary for production data for two main reasons:
* During mutation processing, new data is appended to the existing table, so without creating an empty table, duplicate data would be uploaded.
* By design, Synapse Tables were meant to be appended to. When a Synapse Table is updated, it takes time to index the table and return results. This can cause problems for the pipeline when trying to query the mutation table. When dealing with millions of rows, it is actually faster to create an entirely new table than to update, or to delete all rows and append new ones.
* If you run this more than once on the same day, you'll run into an issue with overwriting the narrow maf table as it already exists. Be sure to rename the current narrow maf database under `Tables` in the test synapse project and try again.
```
python3 bin/input_to_database.py mutation --project_id syn7208886 --deleteOld --genie_annotation_pkg ../annotation-tools --createNewMafDatabase
```
1. Create a consortium release. Be sure to add the `--test` parameter. For consistency, the `processingDate` specified here should match the one used in the `consortium_map` for the `TEST` key [nf-genie.](https://github.com/Sage-Bionetworks-Workflows/nf-genie/blob/main/main.nf)
```
python3 bin/database_to_staging.py <processingDate> ../cbioportal TEST --test
```
1. Create a public release. Be sure to add the `--test` parameter. For consistency, the `processingDate` specified here should match the one used in the `public_map` for the `TEST` key [nf-genie.](https://github.com/Sage-Bionetworks-Workflows/nf-genie/blob/main/main.nf)
```
python3 bin/consortium_to_public.py <processingDate> ../cbioportal TEST --test
```
## Developing
1. Navigate to your cloned repository on your computer/server.
1. Make sure your `develop` branch is up to date with the `Sage-Bionetworks/Genie` `develop` branch.
```
cd Genie
git checkout develop
git pull
```
1. Create a feature branch off the `develop` branch. If there is a GitHub/JIRA issue that you are addressing, name the branch after the issue with some more detail (like `{GH|GEN}-123-add-some-new-feature`).
```
git checkout -b GEN-123-new-feature
```
1. At this point, you have only created the branch locally; you need to push it to GitHub.
```
git push -u origin GEN-123-new-feature
```
1. Add your code changes and push them with a useful commit message
```
git add changed_file.txt
git commit -m "Remove X parameter because it was unused"
git push
```
1. Once you have completed all the steps above, in Github, create a pull request (PR) from your feature branch to the `develop` branch of Sage-Bionetworks/Genie.
### Developing with Docker
See [using `docker`](#using-docker-highly-recommended) for setting up the initial docker environment.
A docker build will be created for your feature branch every time you have an open PR on GitHub and add the label `run_integration_tests` to it.
It is recommended to develop with docker. You can either write the code changes locally, push them to your remote, and wait for the docker image to rebuild, OR do the following:
1. Make any code changes. These cannot be dependency changes - those would require a docker rebuild.
1. Create a running docker container from the image that you pulled down or created earlier
```
docker run -d --name <docker_container_name> <docker_image_name> /bin/bash -c "while true; do sleep 1; done"
```
1. Copy your code changes into the running container:
```
docker cp <folder or name of file> <docker_container_name>:/root/Genie/<folder or name of files>
```
1. Attach to the running container in interactive mode:
```
docker exec -it -e SYNAPSE_AUTH_TOKEN=$YOUR_SYNAPSE_TOKEN <docker_container_name> /bin/bash
```
1. Run any commands or tests you need
### Modifying Docker
Follow this section when modifying the [Dockerfile](https://github.com/Sage-Bionetworks/Genie/blob/main/Dockerfile):
1. Have your synapse authentication token handy
1. ```docker build -f Dockerfile -t <some_docker_image_name> .```
1. ```docker run --rm -it -e SYNAPSE_AUTH_TOKEN=$YOUR_SYNAPSE_TOKEN <some_docker_image_name>```
1. Run [test code](README.md#developing-locally) relevant to the dockerfile changes to make sure changes are present and working
1. Once changes are tested, follow [genie contributing guidelines](#developing) for adding it to the repo
1. Once deployed to main, make sure the CI/CD build completed successfully (our docker image gets automatically deployed via GitHub Actions CI/CD) [here](https://github.com/Sage-Bionetworks/Genie/actions/workflows/ci.yml)
1. Check that your docker image got successfully deployed [here](https://github.com/Sage-Bionetworks/Genie/pkgs/container/genie)
## Testing
Currently our GitHub Actions will run unit tests from our test suite `/tests` and run integration tests (each of the [pipeline steps here](README.md#developing-locally)) on the test pipeline.
These are all triggered by adding the Github label `run_integration_tests` on your open PR.
To trigger `run_integration_tests`:
- Add the `run_integration_tests` label when you first open your PR
- Remove the `run_integration_tests` label and re-add it
- Push any new commit while the PR is still open
If you are developing with docker, a docker image for your feature branch also gets built via the `run_integration_tests` trigger, so check that your docker image got successfully deployed [here](https://github.com/Sage-Bionetworks/Genie/pkgs/container/genie).
### Running unit tests
Unit tests in Python are also run automatically by GitHub Actions on any PR and are required to pass before merging.
If you want to add tests or run them outside of CI/CD, see [how to run tests and general test development](./CONTRIBUTING.md#testing)
### Running integration tests
See [running pipeline steps here](README.md#developing-locally) if you want to run the integration tests locally.
You can also run them in nextflow via [nf-genie](https://github.com/Sage-Bionetworks-Workflows/nf-genie/blob/main/README.md)
## Production
The production pipeline is run on Nextflow Tower and the Nextflow workflow is captured in [nf-genie](https://github.com/Sage-Bionetworks-Workflows/nf-genie). It is wise to create an EC2 instance via the Sage Bionetworks service catalog to work with the production data, because there is limited PHI in GENIE.
## Github Workflows
For technical details about our CI/CD, please see [the github workflows README](.github/workflows/README.md)
| text/markdown | Thomas Yu | thomas.yu@sagebionetworks.org | null | null | MIT | null | [
"Development Status :: 5 - Production/Stable",
"Environment :: Console",
"Intended Audience :: Science/Research",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
"Programming Language :: Python",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: ... | [
"any"
] | https://github.com/Sage-Bionetworks/Genie | null | <3.12,>=3.10 | [] | [] | [] | [
"pyranges==0.1.4",
"synapseclient[pandas]<5.0.0,>=4.8.0",
"httplib2>=0.11.3",
"PyYAML>=5.1",
"chardet>=3.0.4",
"pytest; extra == \"dev\"",
"black; extra == \"dev\"",
"pre-commit; extra == \"dev\"",
"mypy; extra == \"dev\"",
"mkdocs<=1.6.0; extra == \"docs\"",
"mkdocs-material<=9.5.23; extra == \... | [] | [] | [] | [
"Bug Tracker, https://github.com/Sage-Bionetworks/Genie/issues",
"Source Code, https://github.com/Sage-Bionetworks/Genie"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-19T06:12:43.863853 | aacrgenie-17.1.0.tar.gz | 210,096 | 99/c4/b40c9051997a86e5d92b07497d759d69d3bf50d41f7bd110d1c7880a417d/aacrgenie-17.1.0.tar.gz | source | sdist | null | false | 136f0ca673bb79dfc9b18417a3b564f2 | 61396a883ed046364c38a5a978ffe0dcfab5f54b6ee704cf7d2fbde821e12449 | 99c4b40c9051997a86e5d92b07497d759d69d3bf50d41f7bd110d1c7880a417d | null | [
"LICENSE"
] | 261 |
2.1 | boldsign | 3.1.0 | BoldSign API | # BoldSign
Easily integrate BoldSign's e-signature features into your Python applications. This package simplifies sending documents for signature, embedding signing ceremonies, tracking document status, downloading signed documents, and managing e-signature workflows.
## Prerequisites
- Python 3.7+
- Free [developer account](https://boldsign.com/esignature-api/)
## Documentation
- [Official API documentation](https://developers.boldsign.com/)
## Installation & Usage
You can install this package by using the pip tool:
```sh
pip install boldsign
```
(You may need to run pip with root permission: `sudo pip install boldsign`)
Then import the package:
```python
import boldsign
```
## Dependencies
This package requires the following dependencies to function properly. They will be installed automatically when you install the package:
- urllib3>=1.25.3
- python-dateutil
- pydantic>=2
- typing-extensions>=4.7.1
## Getting Started
Please follow the [installation procedure](#installation--usage) and then run the following:
```python
import boldsign
configuration = boldsign.Configuration(
    api_key = "***your_api_key***"
)

# Enter a context with an instance of the API client
with boldsign.ApiClient(configuration) as api_client:
    # Create an instance of the DocumentApi class
    document_api = boldsign.DocumentApi(api_client)

    # Define the signature field to be added to the document
    signatureField = boldsign.FormField(
        fieldType="Signature",  # Field type is Signature
        pageNumber=1,  # Specify the page number
        bounds=boldsign.Rectangle(x=100, y=100, width=100, height=50),  # Position and size of the signature field
    )

    # Define the signer with a name and email address
    signer = boldsign.DocumentSigner(
        name="David",  # Name of the signer
        emailAddress="david@example.com",  # Signer's email address
        signerType="Signer",  # Specify the signer type
        formFields=[signatureField]  # Assign the signature field to the signer
    )

    # Prepare the request body for sending the document for signature
    send_for_sign = boldsign.SendForSign(
        title="Agreement",  # Title of the document
        signers=[signer],  # List of signers
        files=["/documents/agreement.pdf"]  # Path to the document file to be signed
    )

    # Send the document for signature and capture the response
    api_response = document_api.send_document(send_for_sign=send_for_sign)
```
| text/markdown | BoldSign | support@boldsign.com | null | null | MIT | boldsign, api, sdk, BoldSign API | [] | [] | https://github.com/boldsign/boldsign-python-sdk | null | null | [] | [] | [] | [] | [] | [] | [] | [] | twine/6.2.0 CPython/3.10.12 | 2026-02-19T06:10:34.050276 | boldsign-3.1.0.tar.gz | 146,370 | de/cd/32551f1cdac236b42ebde8c9bf4afa4796a31bb691ae6dd2fa1d87e70376/boldsign-3.1.0.tar.gz | source | sdist | null | false | bd0f7443d0d1cb666cf2c7e33b291ef0 | 5456fbf0365343ad009114e6ded08f5146c887637322d6f4c4ae71c361c35446 | decd32551f1cdac236b42ebde8c9bf4afa4796a31bb691ae6dd2fa1d87e70376 | null | [] | 320 |
2.4 | whatiz | 0.1.0 | A lightweight CLI tool to get quick information about any topic using DuckDuckGo search | # whatiz 🔍
A lightweight CLI tool to get quick information about any topic right from your terminal using DuckDuckGo search.
## Features
- **Lightning-fast information retrieval** - Get answers in one line
- **Uses DuckDuckGo** - No ads, no tracking (respects privacy)
## Installation
### Using pip (Recommended)
```bash
pip install whatiz
```
The `whatiz` command will be automatically added to your PATH!
### Using pipx (For isolated environments)
```bash
pipx install whatiz
```
## Usage
Get quick information about anything:
```bash
whatiz python
whatiz climate change
whatiz quantum computing
whatiz "machine learning"
```
## Examples
```bash
$ whatiz blockchain
Blockchain is a distributed ledger technology that underlies cryptocurrencies like Bitcoin...
$ whatiz photosynthesis
Photosynthesis is the process by which plants convert light energy into chemical energy...
$ whatiz "dark matter"
Dark matter is a form of matter composed of particles that neither emit nor absorb light...
```
## How It Works
1. Takes your query from the command line
2. Searches using DuckDuckGo API (via `ddgs` library)
3. Returns the first result as a one-line summary
4. Displays it in your terminal
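The four steps above can be sketched as follows. The `DDGS.text` call signature and the `title`/`body` result keys are assumptions based on the `ddgs` package's typical interface, so treat this as an approximation rather than whatiz's actual implementation:

```python
def first_result_summary(results):
    """Collapse the first search hit into a one-line summary."""
    if not results:
        return "No results found."
    hit = results[0]
    # Assumed result shape: dicts with 'title' and 'body' keys
    title, body = hit.get("title", ""), hit.get("body", "")
    return f"{title}: {body}" if title else body

def whatiz(query):
    # Network call: requires the third-party `ddgs` package
    from ddgs import DDGS
    with DDGS() as ddgs:
        results = list(ddgs.text(query, max_results=1))
    return first_result_summary(results)
```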
## Requirements
- Python 3.8+
- Internet connection (for search)
## Development
### Clone and setup
```bash
git clone https://github.com/yourusername/whatiz.git
cd whatiz
pip install -e ".[dev]"
```
### Install from source
```bash
pip install .
```
## Dependencies
- `ddgs` - DuckDuckGo search
- `requests` - HTTP library
- `beautifulsoup4` - HTML parsing
- `sumy` - Text summarization
- `nltk` - Natural language processing
- `numpy` - Numerical computing
## Troubleshooting
**Command not found after installation?**
```bash
# Try this
python -m whatiz.main "your query"
# Or reinstall
pip install --force-reinstall whatiz
```
**Connection errors?**
- Check your internet connection
- DuckDuckGo servers might be temporarily down
## License
MIT License - see LICENSE file for details
## Contributing
Contributions are welcome! Please feel free to submit a Pull Request.
## Author
Your Name - [GitHub Profile](https://github.com/yourusername)
---
Made with ❤️ for the terminal enthusiasts
| text/markdown | Your Name | you@example.com | null | null | MIT | cli, search, duckduckgo, information, terminal | [
"Development Status :: 4 - Beta",
"Environment :: Console",
"Intended Audience :: Developers",
"Intended Audience :: End Users/Desktop",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.8",
"Pr... | [] | null | null | >=3.8 | [] | [] | [] | [
"beautifulsoup4<5.0.0,>=4.14.3",
"ddgs<10.0.0,>=9.10.0",
"nltk<4.0.0,>=3.9.2",
"numpy<3.0.0,>=2.4.2",
"requests<3.0.0,>=2.32.5",
"sumy<0.13.0,>=0.12.0"
] | [] | [] | [] | [
"Documentation, https://github.com/yourusername/whatiz#readme",
"Homepage, https://github.com/yourusername/whatiz",
"Issues, https://github.com/yourusername/whatiz/issues",
"Repository, https://github.com/yourusername/whatiz"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-19T06:10:13.899885 | whatiz-0.1.0.tar.gz | 3,315 | e3/8e/323d2ff0e0b3d3c729c9527ac9744a0f2419c70d277b417cdbce942345de/whatiz-0.1.0.tar.gz | source | sdist | null | false | bb88bf26093ecbaa7748728e4919b44a | a27df52ef0efeef6b3527256c007dee7c53e32ce77f34f376bb0f70614c22d77 | e38e323d2ff0e0b3d3c729c9527ac9744a0f2419c70d277b417cdbce942345de | null | [
"LICENSE"
] | 252 |
2.4 | speechcortex-sdk | 0.1.2 | The official Python SDK for SpeechCortex ASR platform. | # SpeechCortex Python SDK
[](https://github.com/speechcortex/speechcortex-sdk/actions/workflows/ci.yml)
[](https://badge.fury.io/py/speechcortex-sdk)
[](https://pypi.org/project/speechcortex-sdk/)
[](https://opensource.org/licenses/MIT)
[](https://github.com/psf/black)
Official Python SDK for SpeechCortex ASR (Automatic Speech Recognition) platform.
## Features
- **Real-time Speech Recognition**: WebSocket-based streaming ASR
- **Pre-recorded Transcription**: REST API for batch processing (coming soon)
- **Easy Integration**: Simple, intuitive API
- **Async Support**: Full async/await support for modern Python applications
## Requirements
- Python 3.10 or higher
## Installation
```bash
pip install "git+https://github.com/speechcortex/speechcortex-sdk.git@package_init"
```
Then set your API credentials as environment variables:
```bash
export SPEECHCORTEX_API_KEY=your_api_key_here
export SPEECHCORTEX_HOST=wss://api.speechcortex.com
```
## Quick Start
### Real-time Transcription
```python
from speechcortex import SpeechCortexClient, LiveTranscriptionEvents, LiveOptions
# Initialize the client
speechcortex = SpeechCortexClient(api_key="your_api_key_here")
# Get WebSocket connection
connection = speechcortex.listen.websocket.v("1")
# Set up event handlers
def on_message(self, result, **kwargs):
    sentence = result.channel.alternatives[0].transcript
    if result.is_final:
        print(f"Final: {sentence}")
    else:
        print(f"Interim: {sentence}")

def on_error(self, error, **kwargs):
    print(f"Error: {error}")
# Register event handlers
connection.on(LiveTranscriptionEvents.Transcript, on_message)
connection.on(LiveTranscriptionEvents.Error, on_error)
# Configure options
options = LiveOptions(
    model="zeus-v1",
    language="en-US",
    smart_format=True,
)
# Start the connection
connection.start(options)
# Send audio data
connection.send(audio_data)
# Close when done
connection.finish()
```
### Using with Microphone
```python
from speechcortex import SpeechCortexClient, LiveTranscriptionEvents, LiveOptions, Microphone
speechcortex = SpeechCortexClient()
connection = speechcortex.listen.websocket.v("1")
# Set up event handlers...
connection.on(LiveTranscriptionEvents.Transcript, on_message)
# Start connection
options = LiveOptions(model="zeus-v1", smart_format=True)
connection.start(options)
# Use microphone helper
microphone = Microphone(connection.send)
microphone.start()
# Microphone will stream audio automatically
# Press Ctrl+C to stop
microphone.finish()
connection.finish()
```
## Configuration
### API Key
Set your API key via environment variable:
```bash
export SPEECHCORTEX_API_KEY=your_api_key_here
```
Or pass it directly:
```python
speechcortex = SpeechCortexClient(api_key="your_api_key_here")
```
### Custom Endpoints
```python
from speechcortex import SpeechCortexClient, SpeechCortexClientOptions
config = SpeechCortexClientOptions(
    api_key="your_api_key",
    url="https://custom-api.speechcortex.com"
)
speechcortex = SpeechCortexClient(config=config)
```
## Features
### Real-time Transcription Options
- `model`: ASR model to use (e.g., "zeus-v1")
- `language`: Language code (e.g., "en-US")
- `smart_format`: Enable smart formatting
- `punctuate`: Enable punctuation
- `interim_results`: Receive interim results
- `utterance_end_ms`: Utterance end timeout in milliseconds
- `vad_events`: Enable voice activity detection events
### Events
- `Open`: Connection opened
- `Transcript`: Transcription result received
- `Metadata`: Metadata received
- `SpeechStarted`: Speech detected
- `UtteranceEnd`: End of utterance detected
- `Close`: Connection closed
- `Error`: Error occurred
- `Unhandled`: Unhandled message received
## Development
### Setup Development Environment
```bash
# Clone the repository
git clone https://github.com/speechcortex/speechcortex-sdk.git
cd speechcortex-sdk
# Install dependencies
pip install -r requirements-dev.txt
# Run tests
pytest
# Run linting
pylint speechcortex/
# Format code
black speechcortex/
```
## License
MIT License - see [LICENSE](LICENSE) file for details.
## Support
For issues, questions, or contributions, please visit our [GitHub repository](https://github.com/speechcortex/speechcortex-sdk).
| text/markdown | SpeechCortex Team | SpeechCortex Team <team@speechcortex.com> | null | null | MIT | speechcortex, asr, speech-to-text, speech recognition | [
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12"
] | [] | https://github.com/speechcortex/speechcortex-sdk | null | >=3.10 | [] | [] | [] | [
"httpx>=0.25.2",
"websockets>=12.0",
"dataclasses-json>=0.6.3",
"typing-extensions>=4.9.0",
"aiohttp>=3.9.1",
"aiofiles>=23.2.1",
"aenum>=3.1.0",
"deprecation>=2.1.0"
] | [] | [] | [] | [
"Homepage, https://github.com/speechcortex/speechcortex-sdk",
"Bug Tracker, https://github.com/speechcortex/speechcortex-sdk/issues",
"Source Code, https://github.com/speechcortex/speechcortex-sdk",
"Documentation, https://github.com/speechcortex/speechcortex-sdk#readme"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-19T06:09:54.245142 | speechcortex_sdk-0.1.2.tar.gz | 20,078 | 61/1d/49dc52dcd64445c398e68bf199edde40a92392039c5f0d250c19e2d6d162/speechcortex_sdk-0.1.2.tar.gz | source | sdist | null | false | d16cb87b92def254d07088fda1598626 | 6ac3e38508dc1d96596867316eb2fdff82bf32fe445425f2b0fd8b3112cc4cde | 611d49dc52dcd64445c398e68bf199edde40a92392039c5f0d250c19e2d6d162 | null | [
"LICENSE"
] | 239 |
2.4 | specmatic | 2.39.6.post1 | A Python module for using the Specmatic Library. | # Specmatic Python
This is a Python library to run [Specmatic](https://specmatic.io).
Specmatic is a contract driven development tool that allows us to turn OpenAPI contracts into executable specifications.
<br/>Click below to learn more about Specmatic and Contract Driven Development<br/><br/>
[](https://www.youtube.com/watch?v=3HPgpvd8MGg "Specmatic - Contract Driven Development")
The specmatic python library provides three main functions:
- The ability to start and stop a python web app like flask/sanic.
- The ability to run specmatic in test mode against an open api contract/spec.
- The ability to mock out an api dependency using the specmatic mock feature.
#### Running Contract Tests
A contract test validates an open api specification against a running api service.
The open api specification can be present either locally or in a [Central Contract Repository](https://specmatic.io/documentation/central_contract_repository.html)
[Click here](https://specmatic.io/documentation/contract_tests.html) to learn more about contract tests.
#### How to use
- Create a file called test_contract.py in your test folder.
- Declare an empty class in it called 'TestContract'.
This could either be a normal class like:
``````python
class TestContract:
    pass
``````
Or you could also have a class which inherits from unittest.TestCase:
``````python
class TestContract(unittest.TestCase):
    pass
``````
#### How does it work
- Specmatic uses the TestContract class defined above to inject tests dynamically into it when you run it via PyTest or UnitTest.
- The Specmatic Python package, invokes the Specmatic executable jar (via command line) in a separate process to start mocks and run tests.
- It is the specmatic jar which runs the contract tests and generates a JUnit test summary report.
- The Specmatic Python package ingests the JUnit test summary report and generates test methods corresponding to every contract test.
- These dynamic test methods are added to the ```TestContract``` class and hence we see them reported seamlessly by PyTest/Unittest like this:
```python
test/test_contract_with_coverage.py::TestContract::test_Scenario: GET /products -> 200 | SEARCH_2 PASSED
test/test_contract_with_coverage.py::TestContract::test_Scenario: GET /products -> 500 | SEARCH_ERROR PASSED
test/test_contract_with_coverage.py::TestContract::test_Scenario: GET /products -> 200 | SEARCH_1 PASSED
```
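The injection mechanism described above can be illustrated with a generic sketch. This is not Specmatic's actual implementation; the scenario names and the parsed report data are made up:

```python
class TestContract:
    pass

def make_test(scenario_name, passed):
    """Build one test method for a scenario read from the JUnit summary."""
    def test(self):
        assert passed, f"contract scenario failed: {scenario_name}"
    return test

# Pretend these pairs were parsed from the JUnit test summary report
scenarios = [("GET /products -> 200 | SEARCH_1", True),
             ("GET /products -> 500 | SEARCH_ERROR", True)]

for i, (name, passed) in enumerate(scenarios):
    # PyTest/UnitTest then discover these as ordinary test methods
    setattr(TestContract, f"test_scenario_{i}", make_test(name, passed))
```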
## WSGI Apps
#### To run contract tests with a mock for a wsgi app (like Flask):
``````python
class TestContract:
    pass

Specmatic() \
    .with_project_root(PROJECT_ROOT) \
    .with_mock(mock_host, mock_port, [expectation_json_file]) \
    .with_wsgi_app(app, app_host, app_port) \
    .test(TestContract) \
    .run()

if __name__ == '__main__':
    pytest.main()
``````
- In this, we are passing:
- an instance of your wsgi app like flask
- app_host and app_port. If they are not specified, the app will be started on a random available port on 127.0.0.1.
- You would need a [specmatic config](https://specmatic.io/documentation/specmatic_json.html) file to be present in the root directory of your project.
- an empty test class.
- mock_host, mock_port, optional list of json files to set expectations on the mock.
The mock_host, mock_port will be used to run the specmatic mock server.
If they are not supplied, the mock will be started on a random available port on 127.0.0.1.
[Click here](https://specmatic.io/documentation/service_virtualization_tutorial.html) to learn more about mocking/service virtualization.
- You can run this test from either your IDE or command line by pointing pytest to your test folder:
``````pytest test -v -s``````
- NOTE: Please ensure that you set the '-v' and '-s' flags while running pytest as otherwise pytest may swallow up the console output.
#### To run contract tests without a mock:
``````python
class TestContract:
    pass

Specmatic() \
    .with_project_root(PROJECT_ROOT) \
    .with_wsgi_app(app, app_host, app_port) \
    .test(TestContract) \
    .run()
``````
## ASGI Apps
#### To run contract tests with a mock for an asgi app (like sanic):
- If you are using an asgi app like sanic, fastapi, use the ``````with_asgi_app`````` function and pass it a string in the 'module:app' format.
``````python
class TestContract:
    pass

Specmatic() \
    .with_project_root(PROJECT_ROOT) \
    .with_mock(mock_host, mock_port, [expectation_json_file]) \
    .with_asgi_app('main:app', app_host, app_port) \
    .test(TestContract) \
    .run()
``````
### Passing extra arguments to mock/test
- To pass arguments like '--strict', '--testBaseUrl', pass them as a list to the 'args' parameter:
``````python
class TestContract:
    pass

Specmatic() \
    .with_project_root(PROJECT_ROOT) \
    .with_mock(mock_host, mock_port, [expectation_json_file], ['--strict']) \
    .with_wsgi_app(app, port=app_port) \
    .test(TestContract, args=['--testBaseURL=http://localhost:5000']) \
    .run()
``````
## Coverage
Specmatic can generate a coverage summary report which will list out all the apis exposed by your app/service with a status next to it indicating if it has been covered in your contract tests.
### Enabling api coverage for Flask apps
``````python
class TestContract:
    pass

Specmatic() \
    .with_project_root(PROJECT_ROOT) \
    .with_mock(mock_host, mock_port, [expectation_json_file]) \
    .with_wsgi_app(app, app_host, app_port) \
    .test_with_api_coverage_for_flask_app(TestContract, app) \
    .run()
``````
### Enabling api coverage for Sanic apps
``````python
class TestContract:
    pass

Specmatic() \
    .with_project_root(PROJECT_ROOT) \
    .with_mock(mock_host, mock_port, [expectation_json_file]) \
    .with_asgi_app('main:app', app_host, app_port) \
    .test_with_api_coverage_for_sanic_app(TestContract, app) \
    .run()
``````
### Enabling api coverage for FastApi apps
``````python
class TestContract:
    pass

Specmatic() \
    .with_project_root(PROJECT_ROOT) \
    .with_mock(mock_host, mock_port, [expectation_json_file]) \
    .with_asgi_app('main:app', app_host, app_port) \
    .test_with_api_coverage_for_fastapi_app(TestContract, app) \
    .run()
``````
### Enabling api coverage for any other type of app
For any app other than Flask, Sanic, and FastApi, you would need to implement an ``````AppRouteAdapter`````` class.
The idea is to implement ``````to_coverage_routes`````` method, which returns a list of ``````CoverageRoute`````` objects corresponding to all the routes defined in your app.
The ``````CoverageRoute`````` class has two properties:
``````url`````` : This represents your route url in this format: `````` /orders/{order_id}``````
``````method`````` : A list of HTTP methods supported on the route, for instance : ``````['GET', 'POST']``````
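A minimal sketch of such an adapter follows. The `CoverageRoute` import path is not shown in this README, so a stand-in class with the two documented properties is defined here, and the way the app stores its routes is hypothetical:

```python
class CoverageRoute:
    # Stand-in mirroring the interface described above; the real class
    # ships with the specmatic package (import path not shown here)
    def __init__(self, url, method):
        self.url = url        # e.g. "/orders/{order_id}"
        self.method = method  # e.g. ["GET", "POST"]

class MyAppRouteAdapter:
    """Hypothetical adapter for an app that stores routes as a {path: methods} dict."""
    def __init__(self, app):
        self.app = app

    def to_coverage_routes(self):
        return [CoverageRoute(url, list(methods))
                for url, methods in self.app.routes.items()]
```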
You can then enable coverage by passing your adapter like this:
``````python
Specmatic() \
    .with_project_root(PROJECT_ROOT) \
    .with_mock(mock_host, mock_port, [expectation_json_file]) \
    .with_asgi_app('main:app', app_host, app_port) \
    .test_with_api_coverage(TestContract, MyAppRouteAdapter(app)) \
    .run()
``````
### Enabling api coverage by setting the EndPointsApi property
You can also start your coverage server externally and use the EndPointsApi method to enable coverage.
We have provided ready to use Coverage Server classes for:
Flask: ``````FlaskAppCoverageServer``````
Sanic: ``````SanicAppCoverageServer``````
FastApi ``````FastApiAppCoverageServer``````
You can also easily implement your own coverage server if you have written a custom implementation of the ``````AppRouteAdapter`````` class.
The only point to keep in mind is that the EndPointsApi url should return a list of routes in the format used by Spring Actuator's ``````/actuator/mappings`````` endpoint
as described [here](https://docs.spring.io/spring-boot/docs/current/actuator-api/htmlsingle/#mappings).
Here's an example where we start both our FastApi app and coverage server outside the specmatic api call.
``````python
app_server = ASGIAppServer('test.apps.fast_api:app', app_host, app_port)
coverage_server = FastApiAppCoverageServer(app)
app_server.start()
coverage_server.start()
class TestContract:
    pass

Specmatic() \
    .with_project_root(PROJECT_ROOT) \
    .with_mock(mock_host, mock_port, [expectation_json_file]) \
    .with_endpoints_api(coverage_server.endpoints_api) \
    .test(TestContract, app_host, app_port) \
    .run()
app_server.stop()
coverage_server.stop()
``````
## Common Issues
- **'Error loading ASGI app'**
This error occurs when an incorrect app module string is passed to the ``````with_asgi_app`````` function.
#### Solutions:
- Try to identify the correct module in which your app variable is instantiated/imported.
For example if your 'app' variable is declared in main.py, try passing 'main:app'.
- Try running the app using uvicorn directly:
`````` uvicorn 'main:app' ``````
If you are able to get the app started using uvicorn, it will work with specmatic too.
## Sample Projects
- [Check out the Specmatic Order BFF Python repo](https://github.com/specmatic/specmatic-order-bff-python/) to see more examples of how to use specmatic with a Flask app.
- [Check out the Specmatic Order BFF Python Sanic repo](https://github.com/specmatic/specmatic-order-bff-python-sanic/) to see more examples of how to use specmatic with a Sanic app.
- [Check out the Specmatic Order API Python repo](https://github.com/specmatic/specmatic-order-api-python/) to see an examples of how to just run tests without using a mock.
| text/markdown | Specmatic Builders | info@core.in | null | null | MIT | null | [] | [] | https://github.com/specmatic/specmatic-python-extensions | null | >=3.11 | [] | [] | [] | [
"pytest>=7.3.1",
"requests>=2.32.4",
"Werkzeug>=3.1.4",
"uvicorn>=0.18.0",
"fastapi>=0.70.0",
"flask>=2.2.5",
"sanic>=22.12.0"
] | [] | [] | [] | [] | twine/6.1.0 CPython/3.13.7 | 2026-02-19T06:09:39.233523 | specmatic-2.39.6.post1.tar.gz | 74,235,126 | d5/77/739a11f156dc3d2dcc154e76f7fb80d2c63a8e4978bc14b77f464eedaf29/specmatic-2.39.6.post1.tar.gz | source | sdist | null | false | 0514c1ce63e72635e4ed521a1b250d24 | 28337ab54e00c9b9f2efb6e263eac8dfc59fbad7fca4d92a7ac2f45b77cd6870 | d577739a11f156dc3d2dcc154e76f7fb80d2c63a8e4978bc14b77f464eedaf29 | null | [] | 364 |
2.4 | random-walk-lib | 1.0.0 | A Python library for 2D random walk with visualization | # Random Walk Lib
A Python library implementing a 2D random walk, with support for step-count control, position recording, and path visualization.
## Installation
### Local installation
```bash
# Enter the project root directory
cd path/to/random_walk_lib
pip install .
```
---
### Step 2: Install locally and test
Open a terminal and follow these steps:
#### 1. Enter the project root directory
```bash
cd path/to/random_walk_lib  # replace with your project root path
```
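The README stops at installation. As a rough illustration of the 2D random walk behaviour it advertises (generic code, not this library's actual API, which is not documented here):

```python
import random

def random_walk_2d(steps, seed=None):
    """Generate a 2D lattice random walk: each step moves one unit N/S/E/W."""
    rng = random.Random(seed)
    x, y = 0, 0
    path = [(x, y)]
    for _ in range(steps):
        dx, dy = rng.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
        x, y = x + dx, y + dy
        path.append((x, y))
    return path

# Plotting the recorded path (the package lists matplotlib>=3.0 as a dependency):
# import matplotlib.pyplot as plt
# xs, ys = zip(*random_walk_2d(1000, seed=42))
# plt.plot(xs, ys); plt.show()
```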
| text/markdown | hello1-UI | freetongdynastynet@hotmail.com | null | null | null | null | [
"Programming Language :: Python :: 3",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent"
] | [] | https://github.com/hello1-UI/random-walk-lib | null | >=3.8 | [] | [] | [] | [
"matplotlib>=3.0"
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.11.9 | 2026-02-19T06:08:35.117010 | random_walk_lib-1.0.0.tar.gz | 2,996 | 53/20/a914ab86e4846ad72002c49fad732f93a8265dbf565ed4f950cf167f6b72/random_walk_lib-1.0.0.tar.gz | source | sdist | null | false | eff9859114d30048ca41ffff47186458 | 7e225d4fec28901e427fff32c707958e721d925e2007e47ef558285c65f662db | 5320a914ab86e4846ad72002c49fad732f93a8265dbf565ed4f950cf167f6b72 | null | [] | 263 |
2.4 | quantecon-book-theme | 0.17.1 | A clean book theme for scientific explanations and documentation with Sphinx | # quantecon-book-theme
A Jupyter Book Theme for QuantEcon Book Style Projects
## Features
- **Clean, professional design** optimized for technical and academic documentation
- **Git-based metadata** - Automatic display of last modified dates and interactive changelog with commit history
- **Collapsible stderr warnings** - Automatically wraps verbose warnings in notebook cells with an expandable interface
- **Jupyter Notebook support** with visual classes for cell inputs, outputs, and interactive functionality
- **Configurable code syntax highlighting** - Choose between custom QuantEcon styles or Pygments built-in themes
- **Launch buttons** for online interactivity via BinderHub
- **Flexible content layout** inspired by beautiful online books
- **Bootstrap 4** for visual elements and functionality
- **Built on PyData Sphinx Theme** inheriting robust features and design patterns
## Usage
To use this theme in [Jupyter Book](https://github.com/executablebooks/jupyter-book):
1. Install the theme
```bash
pip install quantecon-book-theme
```
2. Add the theme to your `_config.yml` file:
```yaml
sphinx:
config:
html_theme: quantecon_book_theme
```
### Configuration Options
The theme supports various configuration options in your `conf.py` or `_config.yml`:
```python
html_theme_options = {
    "repository_url": "https://github.com/{your-org}/{your-repo}",
    "use_repository_button": True,
    "use_issues_button": True,
    "use_edit_page_button": True,
    # Git metadata (new in v0.12.0)
    "last_modified_date_format": "%b %d, %Y",  # Date format for last modified
    "changelog_max_entries": 10,  # Number of commits to show in changelog
    # Code highlighting (new in v0.10.0)
    "qetheme_code_style": True,  # False to use Pygments built-in styles
}
# When using Pygments styles
pygments_style = 'friendly' # or 'monokai', 'github-dark', etc.
```
See the [full documentation](https://quantecon-book-theme.readthedocs.io/) for all configuration options.
## Development
### Testing
This project uses `tox` for running tests across multiple Python versions:
```bash
# Run full test suite
tox
# Run pre-commit checks
pre-commit run --all-files
```
**Important**: Always use `tox` instead of running `pytest` directly to ensure proper environment isolation and multi-version testing.
## Updating Fixtures for Tests
### Updating test regression files on layout changes
It is advisable to update the test files for file regression checks when relevant layout files change.
For example, at present we have a sidebar file-regression check to validate html across tests.
The file which it compares against is `tests/test_build/test_build_book.html`.
If you update the sidebar HTML, the easiest way to refresh this test file is:
1. Delete the file `tests/test_build/test_build_book.html`.
2. Run `pytest` in your command line, which will generate a new file. Check that the file matches your expectations and contains the elements you added or modified.
Future pytest runs will then compare against this file, and subsequent tests should pass.
## Contributing Guide
The docs for the contributing guide of this repository: https://github.com/QuantEcon/quantecon-book-theme/blob/master/docs/contributing/index.md
| text/markdown | null | null | null | Executable Books Team <executablebooks@gmail.com> | null | null | [
"Development Status :: 4 - Beta",
"Programming Language :: Python :: 3",
"Framework :: Sphinx",
"Framework :: Sphinx :: Theme",
"License :: OSI Approved :: BSD License",
"Operating System :: OS Independent"
] | [] | null | null | >=3.12 | [] | [] | [] | [
"pyyaml>=6.0",
"sphinx<9,>=7",
"docutils>=0.20",
"libsass~=0.23.0",
"sphinx_book_theme~=1.1.4",
"beautifulsoup4>=4.12",
"flake8>=7.0.0; extra == \"code-style\"",
"black; extra == \"code-style\"",
"pre-commit; extra == \"code-style\"",
"folium; extra == \"doc\"",
"numpy; extra == \"doc\"",
"mat... | [] | [] | [] | [
"Repository, https://github.com/QuantEcon/quantecon-book-theme"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-19T06:08:33.861645 | quantecon_book_theme-0.17.1.tar.gz | 3,367,835 | 93/8e/8d9743e2503dbd62aef9b7b4ce32949f36e53bd2bd614f694c7611deea58/quantecon_book_theme-0.17.1.tar.gz | source | sdist | null | false | dca716cb533d5a8044c38583d5d96f17 | b5606a14e011265a747aed9085ab0140a29664340540b17d48426ae878610119 | 938e8d9743e2503dbd62aef9b7b4ce32949f36e53bd2bd614f694c7611deea58 | null | [
"LICENSE"
] | 266 |
2.3 | fedibooster | 2026.2.19 | Bot to boost/reblog posts with specified tags | # Fedibooster
I am taking fedibooster out of 'retirement' for my own **_personal use_**.
[](https://codeberg.org/MarvinsMastodonTools/fediboster) [](https://ci.codeberg.org/repos/13923) [](https://pepy.tech/project/fedibooster)
[](https://codeberg.org/MarvinsMastodonTools/fedinesia/src/branch/main/LICENSE.md)
Fedibooster is a command line (CLI) tool / bot / robot to re-blog / boost statuses with hash tags in a given list.
It respects rate limits imposed by servers.
## Install and run from [PyPi](https://pypi.org)
It's easy to install fedibooster from PyPI using the following command:
```bash
pip install fedibooster
```
Once installed fedibooster can be started by typing `fedibooster` into the command line.
## Install and run from [Source](https://codeberg.org/marvinsmastodontools/fedibooster)
Alternatively, you can run fedibooster from source by cloning the repository with the following command:
```bash
git clone https://codeberg.org/marvinsmastodontools/fedibooster.git
```
fedibooster uses [uv](https://docs.astral.sh/uv/) for dependency management; please install uv before proceeding further.
Before running, make sure you have all required python modules installed. With uv this is as easy as
```bash
uv sync
```
Run fedibooster with the command `uv run fedibooster`
## Configuration / First Run
fedibooster will ask for all necessary parameters when run for the first time and store them in a `config.toml`
file in the current directory.
## License
Fedibooster is licensed under the [GNU Affero General Public License v3.0](http://www.gnu.org/licenses/agpl-3.0.html)
## Supporting fedibooster
Fedibooster is a personal project, scratching a personal itch. So by supporting fedibooster you are
in effect supporting me personally. I **don't** provide priority or special support in return.
With all that said, there are a number of ways you can support fedibooster:
- You can [buy me a coffee](https://www.buymeacoffee.com/marvin8).
- You can send me small change in Monero to the address below:
## Monero donation address
`88xtj3hqQEpXrb5KLCigRF1azxDh8r9XvYZPuXwaGaX5fWtgub1gQsn8sZCmEGhReZMww6RRaq5HZ48HjrNqmeccUHcwABg`
| text/markdown | marvin8 | marvin8 <marvin8@tuta.io> | null | null | AGPL-3.0-or-later | null | [
"Development Status :: 5 - Production/Stable",
"Environment :: Console",
"Programming Language :: Python",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13"
] | [] | null | null | <3.14,>=3.11 | [] | [] | [] | [
"cyclopts~=4.5.3",
"diskcache~=5.6.3",
"h11~=0.16.0",
"h2~=4.3.0",
"httpx~=0.28.1",
"loguru~=0.7.3",
"minimal-activitypub~=1.5.5",
"msgspec~=0.20.0",
"pybreaker~=1.4.1",
"stamina~=25.2.0",
"tomli-w~=1.2.0"
] | [] | [] | [] | [] | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Debian GNU/Linux","version":"13","id":"trixie","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true} | 2026-02-19T06:08:12.703383 | fedibooster-2026.2.19.tar.gz | 13,617 | af/4e/b9dab614c59fbd0c0be131b725edcec23ff746a4b0a355e3265be9e065d1/fedibooster-2026.2.19.tar.gz | source | sdist | null | false | 078e09397e8560ca1748d702fa1a38a4 | 6666c202bbb414245779722c7ff3b6ca7a3ca836eaa5fe690fb1d5f4cbe0c0fd | af4eb9dab614c59fbd0c0be131b725edcec23ff746a4b0a355e3265be9e065d1 | null | [] | 250 |
2.4 | credit_mutuel_pdf_extractor | 0.1.0 | Extract bank transactions from Crédit Mutuel PDF statements | # Crédit Mutuel PDF Extractor
[](https://pypi.org/project/credit_mutuel_pdf_extractor/)
A robust Python utility to extract transaction data from Crédit Mutuel bank statement PDFs, validate data integrity, and export to structured formats (JSON/CSV) or Google Sheets.
## Features
- **Automated Extraction**: Parses transaction dates, descriptions, and amounts from multiple accounts per PDF.
- **Balance Validation**: Computes the sum of transactions and cross-references them with the starting and ending balances provided in the statement.
- **Strict CLI**: Explicit input file list and mandatory `--output` flag (with `.csv` or `.json` validation).
- **French Format Support**: Handles French number formatting (e.g., `1.234,56` or `1 234,56`).
- **Structured Logging**: Uses the Python `logging` module for clean, professional output and error reporting.
- **Automation**: Includes a `Justfile` for common tasks like `run` and `clean`.
- **Account Mapping**: Support for custom account labels via YAML configuration.
- **Google Sheets Export**: Direct export to a Google Spreadsheet.
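The French number format mentioned above can be normalized with a few string replacements. Here is an illustrative sketch of the idea (this function is not part of the package's API; dots, spaces, and non-breaking spaces are treated as thousands separators, and the comma as the decimal mark):

```python
def parse_french_amount(text: str) -> float:
    """Convert a French-formatted amount such as '1.234,56' or '1 234,56'
    to a float. Non-breaking spaces are common in PDF extractions."""
    cleaned = text.strip().replace("\u00a0", "").replace(" ", "").replace(".", "")
    return float(cleaned.replace(",", "."))

parse_french_amount("1.234,56")   # → 1234.56
parse_french_amount("-1 234,56")  # → -1234.56
```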
## Installation
You can install the extractor directly from PyPI:
```bash
pip install credit_mutuel_pdf_extractor
```
Or using [uv](https://github.com/astral-sh/uv):
```bash
uv tool install credit_mutuel_pdf_extractor
```
## Usage
### Global Command
Once installed, you can use the `cmut_process_pdf` command from anywhere:
```bash
cmut_process_pdf data/*.pdf --output results.csv --config config.yaml
```
### Using Just (Development)
If you have the source code and [just](https://github.com/casey/just) installed:
To process all PDFs in the `data/` directory using the labels defined in `config.yaml` (outputs to `transactions.csv`):
```bash
just run
```
To output in JSON format:
```bash
just run json
```
To clean up all generated files:
```bash
just clean
```
### Configuration
#### Account Mapping
You can map account numbers to custom labels by creating a `config.yaml` file. See `config.example.yaml` for a template.
```yaml
account_mapping:
  21945407: "Crequi"
  21945409: "Prevost"
```
> [!NOTE]
> Account numbers are matched as integers (leading zeros are ignored).
#### Description Mapping
You can automatically rename transactions by adding a `description_mapping` section. If any key is found as a **substring** (case-insensitive) in the transaction description, it will be replaced by the corresponding label.
```yaml
description_mapping:
  "VIR SEPA FROM": "Transfer"
  "NETFLIX": "Entertainment"
  "AMAZON": "Shopping"
```
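The matching rule described above (first key found as a case-insensitive substring wins) can be sketched in a few lines. This is an illustration of the behavior, not the package's actual implementation, and `map_description` is a hypothetical name:

```python
def map_description(description: str, mapping: dict) -> str:
    """Return the label of the first mapping key that appears as a
    case-insensitive substring of the description, else the original."""
    lowered = description.lower()
    for key, label in mapping.items():
        if key.lower() in lowered:
            return label
    return description

mapping = {"VIR SEPA FROM": "Transfer", "NETFLIX": "Entertainment"}
map_description("PRLV NETFLIX.COM PARIS", mapping)   # → "Entertainment"
map_description("CARTE 01/02 BOULANGERIE", mapping)  # unchanged
```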
#### Google Sheets Export
To enable Google Sheets export, add a `google_sheets` section to your `config.yaml`:
```yaml
google_sheets:
  spreadsheet_id: "your-spreadsheet-id"
  sheet_name: "Transactions"
  credentials_file: "credentials.json"
```
**Service Account Setup:**
1. Create a project in [Google Cloud Console](https://console.cloud.google.com/).
2. Enable both **Google Sheets API** and **Google Drive API**.
3. Create a **Service Account** (APIs & Services > Credentials > Create Credentials > Service Account).
4. Create a **JSON Key** for that service account and download it.
5. Save the key as `credentials.json` (or any path specified in your `config.yaml`).
6. **Permission**: Share your Google Spreadsheet with the service account email (found in the JSON) with **Editor** access. (No broad IAM roles are needed if shared directly).
### Command Line Interface
You can explicitly specify files, the output format, and enable Google Sheets export:
```bash
uv run credit-mutuel-extractor data/*.pdf --output results.csv --config config.yaml --gsheet --include-source-file
```
**Requirements:**
- At least one input PDF file.
- The `--output` flag is mandatory and must end in `.csv` or `.json`.
## Technical Details
- **Account Identification**: Uses vertical Y-coordinate mapping to associate tables with the correct account number headers.
- **Data Normalization**: Amounts are cleaned and converted to standard floats.
- **Validation**: If `Starting Balance + Σ(Transactions) != Ending Balance`, the script will report a `CRITICAL` error and halt execution.
- **Modular Design**: Utility functions are separated into `utils.py` for maintainability.
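The validation rule above amounts to a simple consistency check. A minimal sketch, assuming a small tolerance to absorb floating-point rounding (the tolerance value and function name are assumptions of this sketch, not taken from the package):

```python
import math

def balances_consistent(start: float, transactions: list, end: float,
                        tol: float = 0.005) -> bool:
    """Check Starting Balance + Σ(Transactions) == Ending Balance,
    within a half-cent tolerance for float rounding."""
    return math.isclose(start + sum(transactions), end, abs_tol=tol)

balances_consistent(1500.00, [-49.99, 2000.00, -120.50], 3329.51)  # → True
balances_consistent(1500.00, [-49.99], 1400.00)                    # → False
```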
## Security & Publishing
### Secret Leak Prevention
This project uses `pre-commit` and `detect-secrets` to prevent accidental commits of sensitive data.
Before committing, the hooks will scan for potential secrets.
### Publishing to PyPI
Publishing is automated via the `Justfile` and integrated with **1Password** for security.
1. **Store your PyPI Token**: Create a "Login" or "Password" item in 1Password.
2. **Add Environment Variable**: Add a field named `UV_PUBLISH_TOKEN` containing your PyPI API token.
3. **Publish**:
```bash
just publish
```
This uses `op run` to securely inject the token into the `uv publish` command without it ever being stored in plain text or history.
| text/markdown | Max | null | null | null | MIT | bank, credit-mutuel, extractor, google-sheets, pdf | [
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3"
] | [] | null | null | >=3.11 | [] | [] | [] | [
"gspread>=6.1.4",
"pandas>=2.0.0",
"pdfplumber>=0.11.9",
"pyyaml>=6.0.1"
] | [] | [] | [] | [
"Homepage, https://github.com/maximehk/credit_mutuel_pdf_extractor"
] | uv/0.8.2 | 2026-02-19T06:06:35.063885 | credit_mutuel_pdf_extractor-0.1.0.tar.gz | 61,671 | 53/b4/36dad4085910f5841edef72c989d8c22d4999a8a4f7262ba0fb730d05d01/credit_mutuel_pdf_extractor-0.1.0.tar.gz | source | sdist | null | false | 583c1664988ae0031a06ee018e3adafa | 455b2a52e1eacec0514a37c974cf740d6f2123b4c1648cf52d1d954fa14c2522 | 53b436dad4085910f5841edef72c989d8c22d4999a8a4f7262ba0fb730d05d01 | null | [] | 0 |
2.4 | shrew-python | 0.1.0 | Python bindings for the Shrew deep learning library | # shrew-python
Python bindings for [Shrew](https://github.com/ginozza/shrew), a modular deep learning framework written in Rust.
## Features
- **High Performance**: Native Rust implementation with minimal Python overhead.
- **GPU Support**: CUDA acceleration via `shrew-cuda` (if a valid CUDA toolkit is present).
- **Declarative Models**: Full support for Shrew's `.sw` intermediate representation.
- **Autograd**: Reverse-mode automatic differentiation.
- **Interoperability**: Zero-copy tensor conversion from/to NumPy.
## Installation
```bash
pip install shrew-python
```
## Usage
```python
import shrew_python as shrew
# Create tensors
x = shrew.tensor([1.0, 2.0, 3.0])
y = shrew.tensor([4.0, 5.0, 6.0])
# Operations
z = x + y
print(z) # Tensor([5.0, 7.0, 9.0], dtype=F64, dev=Cpu)
# Load a .sw model
executor = shrew.Executor.load("my_model.sw")
result = executor.run("forward", {"input": x})
```
## Building from Source
Requires [Rust](https://rustup.rs/) and [Maturin](https://github.com/PyO3/maturin).
```bash
git clone https://github.com/ginozza/shrew
cd shrew
maturin develop --manifest-path crates/shrew-python/Cargo.toml --release
```
## License
Apache-2.0
| text/markdown; charset=UTF-8; variant=GFM | null | ginozza <jsimancas@unimagdalena.edu.co> | null | null | Apache-2.0 | deep-learning, machine-learning, rust, neural-network, tensor | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"Intended Audience :: Science/Research",
"License :: OSI Approved :: Apache Software License",
"Programming Language :: Rust",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.9",
"Programming Language :: ... | [] | https://github.com/ginozza/shrew | null | >=3.9 | [] | [] | [] | [] | [] | [] | [] | [
"Documentation, https://github.com/ginozza/shrew#readme",
"Homepage, https://github.com/ginozza/shrew",
"Repository, https://github.com/ginozza/shrew"
] | maturin/1.12.2 | 2026-02-19T06:05:55.658613 | shrew_python-0.1.0.tar.gz | 369,884 | d2/c4/ec34a229a4c90283c947319a38136d1fc5dc12267a6afdd790aa04487fad/shrew_python-0.1.0.tar.gz | source | sdist | null | false | 75025c18dc6c3525e4f138229ca1684f | 78f3d5ad430047f3e48579abd44f972d566fd7df65b6ef65f7f261da2049818f | d2c4ec34a229a4c90283c947319a38136d1fc5dc12267a6afdd790aa04487fad | null | [] | 266 |
2.4 | FourCIPP | 1.63.0 | A streamlined Python Parser for 4C input files | <p align="center">
<picture>
<source
srcset="https://raw.githubusercontent.com/4C-multiphysics/fourcipp/refs/heads/main/docs/assets/fourcipp_logo_white.svg"
media="(prefers-color-scheme: dark)">
<img
src="https://raw.githubusercontent.com/4C-multiphysics/fourcipp/refs/heads/main/docs/assets/fourcipp_logo_black.svg"
width="300"
title="FourCIPP"
alt="FourCIPP logo">
</picture>
</p>
FourCIPP (**FourC** **I**nput **P**ython **P**arser) holds a Python Parser to simply interact with [4C](https://github.com/4C-multiphysics/4C) YAML input files. This tool provides a streamlined approach to data handling for third party tools.
## Overview <!-- omit from toc -->
- [Installation](#installation)
- [Python Environment](#python-environment)
- [Installation from PyPI](#installation-from-pypi)
- [Installation from Github (most recent version)](#installation-from-github-most-recent-version)
- [Installation from source](#installation-from-source)
- [Quickstart example](#quickstart-example)
- [Configuration](#configuration)
- [Developing FourCIPP](#developing-fourcipp)
- [Dependency Management](#dependency-management)
- [License](#license)
## Installation
### Python Environment
FourCIPP is a Python project supporting Python versions 3.10 - 3.13. To use FourCIPP it is recommended to install it into a virtual Python environment such as [Conda](https://anaconda.org/anaconda/conda)/[Miniforge](https://conda-forge.org/download/) or [venv](https://docs.python.org/3/library/venv.html).
An exemplary [Conda](https://anaconda.org/anaconda/conda)/[Miniforge](https://conda-forge.org/download/) environment can be created and loaded with
```bash
# Create the environment (this only has to be done once)
conda create -n fourcipp python=3.13
# Activate the environment
conda activate fourcipp
```
FourCIPP can now be installed in several ways.
### Installation from PyPI
FourCIPP is published on [PyPI](https://pypi.org/project/FourCIPP/) as a universal wheel, meaning you can install it on Windows, Linux and macOS with:
```bash
pip install fourcipp
```
or a specific version with:
```bash
pip install fourcipp==0.28.0
```
### Installation from Github (most recent version)
Additionally, the latest `main` version of FourCIPP can be installed directly from Github via:
```bash
pip install git+https://github.com/4C-multiphysics/fourcipp.git@main
```
### Installation from source
If you intend to develop FourCIPP, it is crucial to install it from source, i.e., by cloning the repository from GitHub. You can then install it in either a non-editable or an editable fashion.
- Install all requirements without fixed versions in a non-editable fashion via:
```bash
# located at the root of the repository
pip install .
```
and without fixed versions in an editable fashion via:
```bash
# located at the root of the repository
pip install -e .
```
> Note: This is the default behavior. It allows using fourcipp within other projects without version conflicts.
- Alternatively, you can install all requirements with fixed versions in a non-editable fashion with:
```bash
pip install .[safe]
```
and with fixed versions in an editable fashion via:
```bash
# located at the root of the repository
pip install -e .[safe]
```
Once installed, FourCIPP is ready to be used 🎉
## Quickstart example
<!--example, do not remove this comment-->
```python
from fourcipp.fourc_input import FourCInput
# Create a new 4C input via
input_4C = FourCInput()
# Or load an existing input file
input_4C = FourCInput.from_4C_yaml(input_file_path)
# Add or overwrite sections
input_4C["PROBLEM TYPE"] = {"PROBLEMTYPE": "Structure"}
input_4C["PROBLEM SIZE"] = {"DIM": 3, "ELEMENTS": 1_000}
# Update section parameter
input_4C["PROBLEM SIZE"]["ELEMENTS"] = 1_000_000
# Add new parameter
input_4C["PROBLEM SIZE"]["NODES"] = 10_000_000
# Remove section
removed_section = input_4C.pop("PROBLEM SIZE")
# Dump to file
input_4C.dump(input_file_path, validate=True)
```
<!--example, do not remove this comment-->
## Configuration
FourCIPP utilizes the `4C_metadata.yaml` and `schema.json` files generated during the 4C build to remain up-to-date with your 4C build. By default, the files for the latest 4C input version can be found in `src/fourcipp/config`. You can add custom metadata and schema paths to the configuration file `src/fourcipp/config/config.yaml` by adding a new profile:
```yaml
profile: your_custom_files
profiles:
  your_custom_files:
    4C_metadata_path: /absolute/path/to/your/4C_metadata.yaml
    json_schema_path: /absolute/path/to/your/4C_schema.json
  default:
    4C_metadata_path: 4C_metadata.yaml
    json_schema_path: 4C_schema.json
    description: 4C metadata from the latest successful nightly 4C build
  4C_docker_main:
    4C_metadata_path: /home/user/4C/build/4C_metadata.yaml
    json_schema_path: /home/user/4C/build/4C_schema.json
    description: 4C metadata in the main 4C docker image
```
and select it using the `profile` entry.
## Developing FourCIPP
If you plan on actively developing FourCIPP it is advisable to install in editable mode with the additional developer requirements like
```bash
pip install -e .[dev]
```
> Note: The developer requirements can also be installed in non-editable installs.

Finally, you can install the pre-commit hook with:
```bash
pre-commit install
```
## Dependency Management
To ease the dependency update process [`pip-tools`](https://github.com/jazzband/pip-tools) is utilized. To create the necessary [`requirements.txt`](./requirements.txt) file simply execute
```bash
pip-compile --all-extras --output-file=requirements.txt requirements.in
```
To upgrade the dependencies simply execute
```bash
pip-compile --all-extras --output-file=requirements.txt --upgrade requirements.in
```
## License
This project is licensed under a MIT license. For further information check [`LICENSE`](./LICENSE).
| text/markdown | FourCIPP Authors | null | null | null | The MIT License (MIT)
Copyright (c) 2025 FourCIPP Authors
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in
all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
THE SOFTWARE. | null | [] | [] | null | null | >=3.10 | [] | [] | [] | [
"jsonschema-rs",
"loguru",
"numpy",
"rapidyaml",
"regex",
"pre-commit; extra == \"dev\"",
"pytest; extra == \"dev\"",
"pytest-cov; extra == \"dev\"",
"pip-tools; extra == \"dev\"",
"pytest-xdist; extra == \"dev\""
] | [] | [] | [] | [
"Repository, https://github.com/4C-multiphysics/fourcipp/",
"Issues, https://github.com/4C-multiphysics/fourcipp/issues/"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-19T06:04:41.947416 | fourcipp-1.63.0.tar.gz | 682,269 | 56/f6/faaec7d5d5f2b31f656daa05abff82c6400ea840b0c3fbd4fdda69f8fd35/fourcipp-1.63.0.tar.gz | source | sdist | null | false | 056d2646cf8279cd8ecd2db4c47def46 | 95344a139991c880ab3be5ecb69f5845c755b6f735939e79003d60e09d0994b9 | 56f6faaec7d5d5f2b31f656daa05abff82c6400ea840b0c3fbd4fdda69f8fd35 | null | [
"LICENSE"
] | 0 |
2.4 | agentui-starter-pack | 0.3.2 | Scaffolding CLI for Optimized Agent Stack | # Agent UI Starter Pack (A2UI)
### High-Fidelity Agent-Driven User Interfaces for Google Cloud.
The **Agent UI Starter Pack** is a professional distribution for developers building high-fidelity AI applications on Gemini. We provide the architectural "Golden Path" for bridging the gap between conversational intelligence and actionable software.
---
## 💡 The Core Mission
### ❌ The Problem: The "Wall of Text"
Conversational AI today is highly intelligent but **low-utility**. Users are often stuck behind a "Wall of Text"—trying to parse complex stats, project roadmaps, or financial data out of a chat bubble. This leads to **Text Fatigue** and limits the agent's ability to act as a real tool.
### ✅ The Solution: From Chatbot to Cockpit
We move users away from talking to a "box" and into an **Agent Cockpit**. Instead of just sending text, your agent "manifests" high-fidelity, interactive UI components on the fly using the **A2UI Protocol**.
**Software isn't static anymore; it's synthesized by the Agent based on the user's intent.**
---
## 🎁 What do you get?
By using this starter pack, you aren't just getting template code—you are getting a **Production Framework**:
* **Instant 0-to-1**: Skip 3 weeks of setting up Vertex AI Auth, NL-to-JSON parsing, and dynamic React rendering.
* **The Artifact Registry**: A library of 20+ premium React components (`StatBars`, `QuizCards`, `Timeline`) that are native to JSON.
* **The Bridge Orchestrator**: A specialized backend that triages "Conversational Intent" and resolves which UI surfaces to manifest.
* **Observability-by-Default**: A built-in "Ops Console" to inspect the raw NDJSON thought-process of the agent in real-time.
---
## 🏗️ Core Pillars
### 🎭 The Face (Front End) - *Primary Focus*
**Role: The Experience.** Adaptive surfaces that change based on what the agent is doing.
* **CLI**: `agent-ui-starter-pack`
* **Powered by**: React, Vite, A2UI Protocol.
* **Feature**: Dynamic A2UI Renderer and a library of high-fidelity components.
### ⚙️ The Engine (Agent)
**Role: The Brain.** Internal reasoning and tool execution.
* **CLI**: `agent-starter-pack`
* **Powered by**: Python, Vertex AI SDK, ADK.
* **Feature**: Native integration with Agent Engine for managed runtimes.
---
## 🚀 Key Features
### 💎 A2UI Protocol Native
The entire stack is built on the **Agent-Driven User Interface (A2UI)** protocol. Your agent doesn't just send text; it sends structured JSON that manifests as premium UI components (Timelines, Trophies, Quizzes) in real-time.
### 🔄 State Synchronization
Seamlessly sync agent reasoning steps with frontend state. Build "Human-in-the-loop" workflows where the user can inspect and refine agent actions before they finalize.

---
## 🛠️ Usage (Prescribed Examples)
### Scaffolding
Create a new project in seconds using our specialized CLIs:
**To create the High-Fidelity Front End:**
```bash
uvx agent-ui-starter-pack create my-ui-project
```
**To hydrate a Figma wireframe into A2UI:**
```bash
uvx agent-ui-starter-pack hydrate <figma-url>
```
### Local Development
Start the integrated Vite + API bridge:
```bash
make dev
```
### Production Deployment
Deploy the full stack to Google Cloud:
```bash
make deploy-prod
```
---
## 📁 Repository Structure
- `/src/a2ui`: Core A2UI rendering logic and base components.
- `/src/backend`: The agent engine logic and API bridge.
- `/src/components`: Premium UI building blocks.
- `/docs`: Detailed integration guides and protocol specifications.
---
## License
MIT
| text/markdown | null | Enrique <enrique@example.com> | null | null | null | null | [
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"gitpython>=3.1.0",
"google-adk>=0.1.0",
"google-cloud-aiplatform>=1.70.0",
"pytest-asyncio>=0.23.0",
"pytest>=8.0.0",
"rich>=13.0.0",
"typer>=0.9.0"
] | [] | [] | [] | [
"Homepage, https://github.com/enriquekalven/agent-ui-starter-pack",
"Bug Tracker, https://github.com/enriquekalven/agent-ui-starter-pack/issues"
] | twine/6.2.0 CPython/3.14.2 | 2026-02-19T06:04:38.116302 | agentui_starter_pack-0.3.2.tar.gz | 1,354,146 | 4c/a7/ce80a608aeb58786f20b4c78ca946e37b8d8b432ad201173a094b77e6a7c/agentui_starter_pack-0.3.2.tar.gz | source | sdist | null | false | ab29602d57d27c3c95833daaee91e8fd | 13da55296b71a7cba81f33ad9ddb5256ae7fc4820832760b6d8dbd337b0b4794 | 4ca7ce80a608aeb58786f20b4c78ca946e37b8d8b432ad201173a094b77e6a7c | null | [] | 261 |
2.4 | vectorizer-svg | 0.1.3 | Simple linear algebra library | # Vectorizer-svg
Simple Linear Algebra Library with SVG Manipulation
## Installation
```bash
pip install vectorizer-svg
```
[PyPI](https://pypi.org/project/vectorizer-svg/0.1.1/)
## Usage
For a simple tutorial, see [usage.md](docs/usage.md).
## Features
- Vectors graph
- Operations on vectors
- Gradient vector
- Dot and Cross products
### Credits
[0xSaad](https://x.com/0xdonzdev)
| text/markdown | 0xSaad | null | null | null | null | null | [] | [] | null | null | >=3.8 | [] | [] | [] | [] | [] | [] | [] | [] | twine/6.2.0 CPython/3.10.7 | 2026-02-19T06:01:04.852685 | vectorizer_svg-0.1.3.tar.gz | 15,976 | 07/dd/10be844127c5e7cbf2889261f17d9f535e0d5eda567bf4f16b42ec8325df/vectorizer_svg-0.1.3.tar.gz | source | sdist | null | false | d90f41f3e3ced50434ee0fcc876db326 | 94bbaa38303e58483408550f38610d0c0382229907061db7b12366996a87728b | 07dd10be844127c5e7cbf2889261f17d9f535e0d5eda567bf4f16b42ec8325df | null | [
"LICENSE"
] | 261 |
2.4 | pyfileindex | 0.1.4 | pyfileindex - pythonic file system index | # pyfileindex
PyFileIndex - pythonic file system index
[](https://github.com/pyiron/pyfileindex/actions/workflows/pipeline.yml)
[](https://codecov.io/gh/pyiron/pyfileindex)
[](https://mybinder.org/v2/gh/pyiron/pyfileindex/main?filepath=notebooks%2Fdemo.ipynb)
The pyfileindex helps to keep track of files in a specific directory, monitoring added, modified, and deleted files. The module is compatible with Python 3.7 or later but restricted to Unix-like systems - Windows is not supported.

# Installation
The pyfileindex can either be installed via pip using:
```shell
pip install pyfileindex
```
Or via anaconda from the conda-forge channel
```shell
conda install -c conda-forge pyfileindex
```
# Usage
Import pyfileindex:
```python
from pyfileindex import PyFileIndex
pfi = PyFileIndex(path='.')
```
Or you can filter for a specific file extension:
```python
def filter_function(file_name):
return '.txt' in file_name
pfi = PyFileIndex(path='.', filter_function=filter_function)
```
List files in the file system index:
```python
pfi.dataframe
```
Update file system index:
```python
pfi.update()
```
And open a subdirectory using:
```python
pfi.open(path='subdirectory')
```
For more details, take a look at the example notebook: https://github.com/pyiron/pyfileindex/blob/main/notebooks/demo.ipynb
# License
The pyfileindex is released under the BSD license https://github.com/pyiron/pyfileindex/blob/main/LICENSE . It is a spin-off of the pyiron project https://github.com/pyiron/pyiron therefore if you use the pyfileindex for your publication, please cite:
```
@article{pyiron-paper,
title = {pyiron: An integrated development environment for computational materials science},
journal = {Computational Materials Science},
volume = {163},
pages = {24 - 36},
year = {2019},
issn = {0927-0256},
doi = {https://doi.org/10.1016/j.commatsci.2018.07.043},
url = {http://www.sciencedirect.com/science/article/pii/S0927025618304786},
author = {Jan Janssen and Sudarsan Surendralal and Yury Lysogorskiy and Mira Todorova and Tilmann Hickel and Ralf Drautz and Jörg Neugebauer},
keywords = {Modelling workflow, Integrated development environment, Complex simulation protocols},
}
```
| text/markdown | null | Jan Janssen <janssen@mpie.de> | null | null | BSD 3-Clause License
Copyright (c) 2019, Jan Janssen
All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are met:
* Redistributions of source code must retain the above copyright notice, this
list of conditions and the following disclaimer.
* Redistributions in binary form must reproduce the above copyright notice,
this list of conditions and the following disclaimer in the documentation
and/or other materials provided with the distribution.
* Neither the name of the copyright holder nor the names of its
contributors may be used to endorse or promote products derived from
this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. | pyiron | [
"Development Status :: 5 - Production/Stable",
"Intended Audience :: Science/Research",
"License :: OSI Approved :: BSD License",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programm... | [] | null | null | <3.15,>=3.9 | [] | [] | [] | [
"pandas<=3.0.1,>=1.5.3"
] | [] | [] | [] | [
"Homepage, https://github.com/pyiron/pyfileindex",
"Documentation, https://github.com/pyiron/pyfileindex",
"Repository, https://github.com/pyiron/pyfileindex"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-19T05:55:33.839280 | pyfileindex-0.1.4.tar.gz | 6,154 | 48/54/f9dcbe451382f5ada3740d576595536e09ad8f49699f15bef1a7b70808a8/pyfileindex-0.1.4.tar.gz | source | sdist | null | false | 7c85b82ff07cf1d6330e454a310f29e9 | 2c78ddca80381832cbda63ced079f906754f62befe0c40dfe684271fafc3352c | 4854f9dcbe451382f5ada3740d576595536e09ad8f49699f15bef1a7b70808a8 | null | [
"LICENSE"
] | 373 |
2.4 | coop | 7.2.1 | Standard base to build Wagtail sites from | Coop - base for all (most) Neon Jungle sites
============================================
This is a base to build all Neon Jungle sites off.
This package contains all the common code shared
between sites, with the ideal Neon Jungle site containing only
model definitions, templates, and front end assets.
Making a release
----------------
Upgrade the version in ``pyproject.toml``.
Coop's version stays in step with Wagtail, i.e. Coop 2.4.x uses Wagtail 2.4.x.
Point releases are used to add features during a Wagtail version lifespan.
Update the CHANGELOG. Please.
Tag your release:
After your branch has been merged to master, check out master locally, pull remote master, and run:
.. code-block:: bash
$ git tag "x.y.z"
$ git push --tags
And you are done! GitLab is set up to automatically push the new package to PyPI when a tag is pushed.
Local dev
---------
Create a virtual environment, activate and install the requirements:
.. code-block:: bash
$ python3 -m venv venv
$ source venv/bin/activate
$ pip install poetry
$ poetry install
The first time, you should run migrations and set up a superuser:
.. code-block:: bash
$ ./manage.py migrate
$ ./manage.py createsuperuser
You can run and debug the project locally using ``./manage.py runserver``; alternatively, a ``launch.json`` for VS Code is included for running the project under the debugger.
| text/x-rst | null | Neon Jungle <developers@neonjungle.studio> | null | null | null | null | [] | [] | null | null | <4.0,>=3.10 | [] | [] | [] | [
"django-htmx<1.27.0,>=1.26.0",
"django<5.3,>=4.2",
"jinja2>=3.1.0",
"psycopg[binary]<4.0.0,>=3.2.6",
"pytz",
"sentry-sdk[django]>=2.13.0",
"wagtail-cache<3.1.0,>=3.0.0",
"wagtail-factories~=4.3.0",
"wagtail-icomoon",
"wagtail-metadata==5.0.0",
"wagtail<7.3.0,>=7.2.0",
"diskcache~=5.2.0; extra ... | [] | [] | [] | [] | twine/6.2.0 CPython/3.12.11 | 2026-02-19T05:54:49.612871 | coop-7.2.1.tar.gz | 7,395,646 | 67/e7/fa00cc2473348f11f687e5e7321998b41c5195b3fc9805fa5aad4c029fd4/coop-7.2.1.tar.gz | source | sdist | null | false | c0027b95a9a931415141d3ad41ff1a6a | 61496488f7bbff4a963166316a0bb0cde467c12e0d8f5b438473634559b2dd10 | 67e7fa00cc2473348f11f687e5e7321998b41c5195b3fc9805fa5aad4c029fd4 | null | [] | 283 |
2.4 | panoseti-grpc | 0.3.5 | gRPC for the PANOSETI project. | 

# PANOSETI gRPC Services
This repository contains the microservice architecture for the PANOSETI observatory. It provides gRPC interfaces for real-time data access, observatory control, and general telemetry logging.
See [here](https://github.com/panoseti/panoseti) for the main software repo.
## Service Directory
Each service operates independently. Click the links below for detailed API documentation and configuration guides.
| Service | Description | Status | Documentation |
| :--- |:-------------------------------------------------------|:--------------| :--- |
| **DAQ Data** | Streams real-time science data directly from Hashpipe. | 🟢 Production | [**Read Docs**](./src/panoseti_grpc/daq_data/README.md) |
| **U-blox Control** | Controls and configures GNSS chips (F9T/F9P). | 🟢 Production | [**Read Docs**](./src/panoseti_grpc/ublox_control/README.md) |
| **Telemetry** | Collects metadata from remote Linux machines. | 🟡 Beta | [**Read Docs**](./src/panoseti_grpc/telemetry/README.md) |
---
## 📦 Installation (Client Mode)
If you only need to write scripts to control the observatory or analyze data, install the package from PyPI:
```bash
pip install panoseti-grpc
```
Example Usage:
```python
from panoseti_grpc.telemetry.client import TelemetryClient
# Connect to a running Telemetry Service
client = TelemetryClient("localhost", 50051)
# Upload metadata
client.log_flexible("example", "weather-01", {"status": "Online", "is-raining": True})
```
---
# 🛠️ Development & Contribution
## Environment Setup
If you are deploying the servers on the head node or contributing to the codebase, we recommend installing `miniconda` ([link](https://www.anaconda.com/docs/getting-started/miniconda/install)) and then following these steps to set up your environment:
```bash
# 0. Clone this repo and go to the repo root
git clone https://github.com/panoseti/panoseti_grpc.git
cd panoseti_grpc
# 1. Create the grpc-py39 conda environment
conda create -n grpc-py39 python=3.9
conda activate grpc-py39
# 2. Install in editable mode with development dependencies
pip install -e .
```
## 🧪 Testing
We use a comprehensive CI pipeline (GitHub Actions) to verify every commit. You can—and should—run these same tests locally before pushing code.
### Run CI Tests Locally via Bash Scripts (Recommended)
To run a CI test locally, use one of the scripts in `scripts/run-ci-tests/`.
Each service has an associated script which builds the Docker containers and runs the appropriate test suites.
#### Examples:
```bash
# Run DAQ Data Service tests
./scripts/run-ci-tests/run-daq-data-ci-test.sh
# Run U-blox Control Service tests
./scripts/run-ci-tests/run-ublox-control-ci-test.sh
```
---
## 🚀 Adding New Services
The PANOSETI gRPC architecture is designed to be extensible. If you are developing a new service (e.g., the upcoming `daq_control`), follow this standard workflow.
### 0. Branching Strategy
Always create a new feature branch off the development branch:
```bash
git checkout dev
git checkout -b feature/daq-control-service
```
### 1. Define the Interface (.proto)
Create a new Protocol Buffer definition file in the `protos/` directory. This defines the contract between your client and server.
* **File:** `protos/daq_control.proto`
* **Example:**
```protobuf
syntax = "proto3";
package panoseti.daq_control;
service DaqControl {
  rpc SetHighVoltage (VoltageRequest) returns (StatusResponse) {}
}
message VoltageRequest { float voltage = 1; }
message StatusResponse { bool success = 1; }
```
### 2. Compile the Protos
Run the compilation script to generate the Python gRPC code.
```bash
python scripts/compile_protos.py
```
This will automatically generate two files in `src/panoseti_grpc/generated/`:
* `daq_control_pb2.py` (Message definitions)
* `daq_control_pb2_grpc.py` (Client/Server stubs)
### 3. Create the Service Module
Create a new directory for your service source code. You **must** include an `__init__.py` file for Python to recognize it as a package.
```bash
mkdir -p src/panoseti_grpc/daq_control
touch src/panoseti_grpc/daq_control/__init__.py
```
### 4. Implement Client & Server
Develop your application logic. You can now import your generated protobuf code using the package path.
**Example `src/panoseti_grpc/daq_control/server.py`:**
```python
import grpc
from panoseti_grpc.generated import daq_control_pb2, daq_control_pb2_grpc
class DaqControlServicer(daq_control_pb2_grpc.DaqControlServicer):
    def SetHighVoltage(self, request, context):
        print(f"Setting voltage to {request.voltage}")
        return daq_control_pb2.StatusResponse(success=True)
```
### 5. Add CI Tests
Finally, ensure your new service is robust by adding a test suite.
1. Create a test directory: `tests/daq_control/`
2. Add a `Dockerfile` for your test environment.
3. Add a generic runner script in `scripts/run-ci-tests/run-daq-control-ci-test.sh`.
4. Create unit and integration tests with [pytest](https://docs.pytest.org/en/stable/).
| text/markdown | Nicolas Rault-Wang, Ben Godfrey | null | null | null | null | panoseti, gRPC, observatory, data-acquisition, astronomy, real-time | [
"Programming Language :: Python :: 3",
"Operating System :: Unix",
"Intended Audience :: Science/Research",
"Topic :: Scientific/Engineering :: Astronomy",
"Framework :: AsyncIO"
] | [] | null | null | >=3.9 | [] | [] | [] | [
"grpcio==1.70.0",
"grpcio-tools==1.70.0",
"grpcio-reflection==1.70.0",
"protobuf<6.0.0,>=5.26.1",
"coverage>=7.10.7",
"cython",
"wheel",
"rich",
"redis>=7",
"snakeviz",
"json5",
"pyserial",
"pyubx2",
"seaborn",
"matplotlib",
"pandas",
"numpy",
"psutil",
"watchfiles",
"aiofiles"... | [] | [] | [] | [
"Homepage, https://github.com/panoseti/panoseti_grpc",
"Documentation, https://github.com/panoseti/panoseti_grpc/blob/main/README.md",
"Repository, https://github.com/panoseti/panoseti_grpc",
"Issues, https://github.com/panoseti/panoseti_grpc/issues"
] | twine/6.2.0 CPython/3.9.23 | 2026-02-19T05:54:47.133585 | panoseti_grpc-0.3.5.tar.gz | 16,122,353 | cd/e1/dc569d4829b6dc83cf78dd10afe52ed70b67c2e73de4be66512406676bbd/panoseti_grpc-0.3.5.tar.gz | source | sdist | null | false | d4dd94652d6189fb69316eed03180073 | 68aabaaa52115f8c818e18adef09c812400b184d45125845289c132dd6903e9f | cde1dc569d4829b6dc83cf78dd10afe52ed70b67c2e73de4be66512406676bbd | null | [
"LICENSE"
] | 267 |
2.4 | pysqa | 0.3.5 | Simple HPC queuing system adapter for Python on based jinja templates to automate the submission script creation. | # pysqa
[](https://github.com/pyiron/pysqa/actions/workflows/pipeline.yml)
[](https://pysqa.readthedocs.io/en/latest/?badge=latest)
[](https://codecov.io/gh/pyiron/pysqa)
[](https://mybinder.org/v2/gh/pyiron/pysqa/HEAD?labpath=example_config.ipynb)

High-performance computing (HPC) does not have to be hard. The aim of the Python Simple Queuing System
Adapter (`pysqa`) is to make submitting tasks from Python to HPC clusters as easy as starting another
`subprocess` locally. This is based on the assumption that even though modern HPC queuing systems offer a wide
range of different configuration options, most users submit the majority of their jobs with very similar parameters.
Therefore, in `pysqa` users define submission script templates once and reuse them to submit many different tasks and
workflows afterwards. These templates are defined in the [jinja2 template language](https://palletsprojects.com/p/jinja/),
so existing submission scripts can easily be converted to templates. In addition to submitting new tasks to HPC
queuing systems, `pysqa` also allows users to track the progress of their tasks, delete them, or enable reservations
using the built-in functionality of the queuing system. Finally, `pysqa` enables remote connections to HPC clusters
via SSH, including support for two-factor authentication via [pyauthenticator](https://github.com/jan-janssen/pyauthenticator),
which allows users to submit tasks from a Python process on their local workstation to remote HPC clusters.
All this functionality is available from both the [Python interface](https://pysqa.readthedocs.io/en/latest/example.html)
and the [command line interface](https://pysqa.readthedocs.io/en/latest/command.html).
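To make the template idea concrete, the following sketch renders a SLURM-style submission script from a template. It uses Python's built-in `string.Template` as a simple stand-in for jinja2, and the placeholder names (`job_name`, `cores`, `command`) are illustrative rather than pysqa's exact template variables:

```python
from string import Template

# Stand-in for a jinja2 submission-script template; the placeholder
# names below are illustrative, not pysqa's actual template API.
slurm_template = Template(
    "#!/bin/bash\n"
    "#SBATCH --job-name=${job_name}\n"
    "#SBATCH --ntasks=${cores}\n"
    "${command}\n"
)

# Render a concrete submission script from the template
script = slurm_template.substitute(job_name="demo", cores=4, command="echo hello")
print(script)
```

In pysqa itself the template lives in a configuration directory and is rendered with jinja2, so an existing submission script can be turned into a template simply by replacing the job-specific values with placeholders.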
## Features
The core feature of `pysqa` is the communication with HPC queuing systems, including [Flux](https://pysqa.readthedocs.io/en/latest/queue.html#flux),
[LFS](https://pysqa.readthedocs.io/en/latest/queue.html#lfs), [MOAB](https://pysqa.readthedocs.io/en/latest/queue.html#moab),
[SGE](https://pysqa.readthedocs.io/en/latest/queue.html#sge), [SLURM](https://pysqa.readthedocs.io/en/latest/queue.html#slurm)
and [TORQUE](https://pysqa.readthedocs.io/en/latest/queue.html#torque). This includes:
* `QueueAdapter().submit_job()` - Submission of new tasks to the queuing system.
* `QueueAdapter().get_queue_status()` - List of calculations currently waiting or running on the queuing system.
* `QueueAdapter().delete_job()` - Deletion of calculations which are currently waiting or running on the queuing system.
* `QueueAdapter().queue_list` - List of available queue templates created by the user.
* `QueueAdapter().config` - Templates restricted to a specific number of cores, run time or other computing resources, with
  integrated checks that a given submitted task follows these restrictions.
In addition to these core features, `pysqa` is continuously extended to support more use cases for a larger group of
users. These new features include support for remote queuing systems:
* Remote connection via the secure shell protocol (SSH) to access remote HPC clusters.
* Transfer of files to and from remote HPC clusters, based on a predefined mapping of the remote file system into the
local file system.
* Support for both individual connections as well as continuous connections depending on the network availability.
Finally, work is in progress to support a combination of [multiple local and remote queuing systems](https://pysqa.readthedocs.io/en/latest/advanced.html)
from within `pysqa`, which are presented to the user as a single resource.
## Documentation
* [Installation](https://pysqa.readthedocs.io/en/latest/installation.html)
* [pypi-based installation](https://pysqa.readthedocs.io/en/latest/installation.html#pypi-based-installation)
* [conda-based installation](https://pysqa.readthedocs.io/en/latest/installation.html#conda-based-installation)
* [Queuing Systems](https://pysqa.readthedocs.io/en/latest/queue.html)
* [Flux](https://pysqa.readthedocs.io/en/latest/queue.html#flux)
* [LFS](https://pysqa.readthedocs.io/en/latest/queue.html#lfs)
* [MOAB](https://pysqa.readthedocs.io/en/latest/queue.html#moab)
* [SGE](https://pysqa.readthedocs.io/en/latest/queue.html#sge)
* [SLURM](https://pysqa.readthedocs.io/en/latest/queue.html#slurm)
* [TORQUE](https://pysqa.readthedocs.io/en/latest/queue.html#torque)
* [Python Interface Dynamic](https://pysqa.readthedocs.io/en/latest/example_queue_type.html)
* [Submit job to queue](https://pysqa.readthedocs.io/en/latest/example_queue_type.html#submit-job-to-queue)
* [Show jobs in queue](https://pysqa.readthedocs.io/en/latest/example_queue_type.html#show-jobs-in-queue)
* [Delete job from queue](https://pysqa.readthedocs.io/en/latest/example_queue_type.html#delete-job-from-queue)
* [Python Interface Config](https://pysqa.readthedocs.io/en/latest/example_config.html)
* [List available queues](https://pysqa.readthedocs.io/en/latest/example_config.html#list-available-queues)
* [Submit job to queue](https://pysqa.readthedocs.io/en/latest/example_config.html#submit-job-to-queue)
* [Show jobs in queue](https://pysqa.readthedocs.io/en/latest/example_config.html#show-jobs-in-queue)
* [Delete job from queue](https://pysqa.readthedocs.io/en/latest/example_config.html#delete-job-from-queue)
* [Command Line Interface](https://pysqa.readthedocs.io/en/latest/command.html)
* [Submit job](https://pysqa.readthedocs.io/en/latest/command.html#submit-job)
* [Enable reservation](https://pysqa.readthedocs.io/en/latest/command.html#enable-reservation)
* [List jobs](https://pysqa.readthedocs.io/en/latest/command.html#list-jobs)
* [Delete job](https://pysqa.readthedocs.io/en/latest/command.html#delete-job)
* [List files](https://pysqa.readthedocs.io/en/latest/command.html#list-files)
* [Help](https://pysqa.readthedocs.io/en/latest/command.html#help)
* [Advanced Configuration](https://pysqa.readthedocs.io/en/latest/advanced.html)
* [Remote HPC Configuration](https://pysqa.readthedocs.io/en/latest/advanced.html#remote-hpc-configuration)
* [Access to Multiple HPCs](https://pysqa.readthedocs.io/en/latest/advanced.html#access-to-multiple-hpcs)
* [Debugging](https://pysqa.readthedocs.io/en/latest/debug.html)
* [Local Queuing System](https://pysqa.readthedocs.io/en/latest/debug.html#local-queuing-system)
* [Remote HPC](https://pysqa.readthedocs.io/en/latest/debug.html#remote-hpc)
## License
`pysqa` is released under the [BSD license](https://github.com/pyiron/pysqa/blob/main/LICENSE). It is a spin-off of the
[pyiron project](https://pyiron.org); therefore, if you use `pysqa` for calculations which result in a scientific
publication, please cite:
    @article{pyiron-paper,
      title = {pyiron: An integrated development environment for computational materials science},
      journal = {Computational Materials Science},
      volume = {163},
      pages = {24 - 36},
      year = {2019},
      issn = {0927-0256},
      doi = {https://doi.org/10.1016/j.commatsci.2018.07.043},
      url = {http://www.sciencedirect.com/science/article/pii/S0927025618304786},
      author = {Jan Janssen and Sudarsan Surendralal and Yury Lysogorskiy and Mira Todorova and Tilmann Hickel and Ralf Drautz and Jörg Neugebauer},
      keywords = {Modelling workflow, Integrated development environment, Complex simulation protocols},
    }
| text/markdown | null | Jan Janssen <janssen@mpie.de> | null | null | BSD 3-Clause License
Copyright (c) 2019, Jan Janssen
All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are met:
* Redistributions of source code must retain the above copyright notice, this
list of conditions and the following disclaimer.
* Redistributions in binary form must reproduce the above copyright notice,
this list of conditions and the following disclaimer in the documentation
and/or other materials provided with the distribution.
* Neither the name of the copyright holder nor the names of its
contributors may be used to endorse or promote products derived from
this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. | pyiron | [
"Development Status :: 5 - Production/Stable",
"Intended Audience :: Science/Research",
"License :: OSI Approved :: BSD License",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programm... | [] | null | null | <3.15,>=3.9 | [] | [] | [] | [
"jinja2<=3.1.6,>=2.11.3",
"pandas<=3.0.1,>=1.5.3",
"pyyaml<=6.0.3,>=5.3.1",
"paramiko<=4.0.0,>=2.7.1; extra == \"remote\"",
"tqdm<=4.67.3,>=4.66.1; extra == \"remote\"",
"defusedxml<=0.7.1,>=0.7.0; extra == \"sge\"",
"pyauthenticator==0.3.0; extra == \"twofactor\""
] | [] | [] | [] | [
"Homepage, https://github.com/pyiron/pysqa",
"Documentation, https://pysqa.readthedocs.io",
"Repository, https://github.com/pyiron/pysqa"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-19T05:54:38.633852 | pysqa-0.3.5.tar.gz | 22,712 | f2/e9/71866618efd0c3321c9c8a5f32a736f20e4c3754a525d2f1e55ed0d23dca/pysqa-0.3.5.tar.gz | source | sdist | null | false | 640783659e4c5c2fc548742bed25daa3 | 7ea1b42e457ef1a47367f68c3919e4ae904a01e6278157bdabe2e1527be612b6 | f2e971866618efd0c3321c9c8a5f32a736f20e4c3754a525d2f1e55ed0d23dca | null | [
"LICENSE"
] | 658 |
2.4 | h5io-browser | 0.2.11 | Easy navigation and data storage for HDF5 | # Easy navigation and data storage for HDF5
[](https://github.com/h5io/h5io_browser/actions/workflows/pipeline.yml)
[](https://coveralls.io/github/h5io/h5io_browser?branch=main)
[](https://mybinder.org/v2/gh/h5io/h5io_browser/HEAD?labpath=notebooks%2Fexample.ipynb)
The [hierarchical data format (HDF)](https://www.hdfgroup.org) aims to ensure efficient and equitable access to
science and engineering data across platforms and environments. The [h5py](https://www.h5py.org) package provides a
pythonic interface to the HDF5 binary data format and the [h5io](https://github.com/h5io/h5io) package simplifies this
interface by introducing the `read_hdf5()` and `write_hdf5()` functions for loading and storing python objects in HDF5.
The [h5io](https://github.com/h5io/h5io) package also provides a `list_file_contents()` function to print the internal
structure of an HDF5 file.

The `h5io_browser` package extends this interface by providing a pointer `h5io_browser.Pointer` to a specific path
inside the hierarchical structure of the HDF5 file. With this pointer, data can be read, stored, copied and deleted from
the HDF5 file, while at the same time simplifying the navigation inside the hierarchy of the file. The `h5io_browser`
package is developed with three constraints and goals:
* Simplify navigating HDF5 files created by the [h5io](https://github.com/h5io/h5io) package. This includes interactive
navigation inside an interactive Python shell or a Jupyter Notebook environment.
* Integrate standard functionality to interact with the data stored in the HDF5 file like read, write, copy and delete
using the interface defined by the [h5py](https://www.h5py.org) package and the [h5io](https://github.com/h5io/h5io)
package.
* Finally, balance flexibility and performance. Just like the [h5io](https://github.com/h5io/h5io) package, the
`h5io_browser` only opens the HDF5 file when accessing the data and does not maintain an open file handle while
waiting for user input. At the same time the interface defined by the [h5io](https://github.com/h5io/h5io) package
is extended to store multiple python objects at the same time for improved performance.
## Installation
The `h5io_browser` package can be installed either via the [Python Package Index](https://pypi.org):
```
pip install h5io_browser
```
Or alternatively, via the community channel on the [conda package manager](https://anaconda.org/conda-forge/h5io_browser)
maintained by the [conda-forge community](https://conda-forge.org):
```
conda install -c conda-forge h5io_browser
```
## Example
Demonstration of the basic functionality of the `h5io_browser` module.
### Import Module
Start by importing the `h5io_browser` module:
```python
import h5io_browser as hb
```
From the `h5io_browser` module the `Pointer()` object is created to access a new HDF5 file named `new.h5`:
```python
hp = hb.Pointer(file_name="new.h5")
```
### Write Data
For demonstration three different objects are written to the HDF5 file:
* a list with the numbers one and two is stored in the HDF5 path `data/a_list`
* an integer number is stored in the HDF5 path `data/an_integer_number`
* a dictionary is stored in the HDF5 path `data/sub_path/a_dictionary`
This can be done either using the square bracket notation known from python dictionaries, or alternatively using the
`write_dict()` function, which can store multiple objects in the HDF5 file while opening it only once.
```python
hp["data/a_list"] = [1, 2]
hp.write_dict(data_dict={
    "data/an_integer_number": 3,
    "data/sub_path/a_dictionary": {"d": 4, "e": 5},
})
```
### Read Data
One strength of the `h5io_browser` package is its support for interactive Python environments like Jupyter notebooks.
To browse the HDF5 file, execute the `Pointer()` object:
```python
hp
```
In comparison, the string representation lists the `file_name` and `h5_path` as well as the `nodes` and `groups` at this
`h5_path`:
```python
str(hp)
>>> 'Pointer(file_name="/Users/jan/test/new.h5", h5_path="/") {"groups": ["data"], "nodes": []}'
```
List the content of the HDF5 file at the current `h5_path` using the `list_all()` function:
```python
hp.list_all()
>>> ['data']
```
Analogously, the `groups` and `nodes` of any `h5_path`, given either relative to the current `h5_path` or as an absolute
`h5_path`, can be analysed using `list_h5_path()`:
```python
hp.list_h5_path(h5_path="data")
>>> {'groups': ['sub_path'], 'nodes': ['a_list', 'an_integer_number']}
```
To continue browsing the HDF5 file, the square bracket notation can be used, just as it is commonly used for python
dictionaries:
```python
hp["data"].list_all()
>>> ['a_list', 'an_integer_number', 'sub_path']
```
The returned object is again a `Pointer` with the updated `h5_path`, which changed from `/` to `/data`:
```python
hp.h5_path, hp["data"].h5_path
>>> ('/', '/data')
```
Finally, individual nodes of the HDF5 file can be loaded with the same syntax using the `/` notation known from the
file system, or by chaining multiple square brackets:
```python
hp["data/a_list"], hp["data"]["a_list"]
>>> ([1, 2], [1, 2])
```
### Convert to Dictionary
To browse the contents of an HDF5 file programmatically, the `to_dict()` method extends the interactive browsing
capabilities. By default it returns a flat dictionary whose keys are the `h5_path` of the individual nodes
and whose values are the data stored in those nodes. Internally, this loads the whole tree structure, starting from the
current `h5_path`, so depending on the size of the HDF5 file this can take quite some time:
```python
hp.to_dict()
>>> {'data/a_list': [1, 2],
>>> 'data/an_integer_number': 3,
>>> 'data/sub_path/a_dictionary': {'d': 4, 'e': 5}}
```
An alternative is the hierarchical representation, which can be enabled by setting `hierarchical`
to `True`. The data is then represented as a nested dictionary:
```python
hp.to_dict(hierarchical=True)
>>> {'data': {'a_list': [1, 2],
>>> 'an_integer_number': 3,
>>> 'sub_path': {'a_dictionary': {'d': 4, 'e': 5}}}}
```
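The relationship between the two representations can be illustrated with plain Python. The helper below is not part of `h5io_browser`; it only mimics how the flat `/`-separated keys map onto the nested form:

```python
def nest(flat):
    """Turn {'a/b': 1}-style flat keys into nested dictionaries."""
    tree = {}
    for path, value in flat.items():
        node = tree
        *groups, leaf = path.split("/")
        for group in groups:
            # Descend into (or create) each intermediate group
            node = node.setdefault(group, {})
        node[leaf] = value
    return tree

nested = nest({"data/a_list": [1, 2], "data/an_integer_number": 3})
```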
### With Statement
For compatibility with other file access methods, the `h5io_browser` package also supports the with-statement notation.
Technically this does not change the behavior: even when opened with a with statement, the HDF5 file is closed
between individual function calls.
```python
with hb.Pointer(file_name="new.h5") as hp:
print(hp["data/a_list"])
>>> [1, 2]
```
### Delete Data
To delete data from an HDF5 file with `h5io_browser`, the standard python `del` statement can be used, in analogy to
deleting items from a python dictionary. To demonstrate deletion, a new node named `data/new/entry/test` is added:
```python
hp["data/new/entry/test"] = 4
```
To list the node, the `to_dict()` function is used with the `hierarchical` parameter to highlight the nested structure:
```python
hp["data/new"].to_dict(hierarchical=True)
>>> {'entry': {'test': 4}}
```
The node is then deleted using `del`. While this removes the node from the index, the file size remains the
same, which is one of the limitations of the HDF5 format. Consequently, it is not recommended to create and remove nodes
in HDF5 files frequently:
```python
print(hp.file_size())
del hp["data/new/entry/test"]
print(hp.file_size())
>>> (18484, 18484)
```
Even after the deletion of the last node, the groups are still included in the HDF5 file. They are not listed by the
`to_dict()` function, as it recursively iterates over all nodes below the current `h5_path`:
```python
hp["data/new"].to_dict(hierarchical=True)
>>> {}
```
Still, the `list_all()` function lists all nodes and groups at the current `h5_path`, including empty groups like the
`entry` group in this case:
```python
hp["data/new"].list_all()
>>> ['entry']
```
To remove the group from the HDF5 file the same `del` command is used:
```python
del hp["data/new"]
```
After deleting both the newly created groups and their nodes the original hierarchy of the HDF5 file is restored:
```python
hp.to_dict(hierarchical=True)
>>> {'data': {'a_list': [1, 2],
>>> 'an_integer_number': 3,
>>> 'sub_path': {'a_dictionary': {'d': 4, 'e': 5}}}}
```
Still, even after deleting the nodes from the HDF5 file, the file size remains the same:
```python
hp.file_size()
>>> 18484
```
### Loop over Nodes
To simplify iterating recursively over all nodes contained under the selected `h5_path`, the `Pointer()` object can be used
as an iterator:
```python
hp_data = hp["data"]
{h5_path: hp_data[h5_path] for h5_path in hp_data}
>>> {'a_list': [1, 2],
>>> 'an_integer_number': 3,
>>> 'sub_path/a_dictionary': {'d': 4, 'e': 5}}
```
### Copy Data
In addition to adding, browsing and removing data from an existing HDF5 file, the `Pointer()` object can also be used to
copy data within a given HDF5 file or from one HDF5 file to another. A new HDF5 file named
`copy.h5` is created:
```python
hp_copy = hb.Pointer(file_name="copy.h5")
```
The data is transferred from the existing `Pointer()` object to the new HDF5 file using the `copy_to()` function:
```python
hp["data"].copy_to(hp_copy)
hp_copy
```
## Disclaimer
While we try to develop a stable and reliable software library, the development remains an open-source project under the
BSD 3-Clause License without any warranties:
```
BSD 3-Clause License
Copyright (c) 2023, Jan Janssen
All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are met:
* Redistributions of source code must retain the above copyright notice, this
list of conditions and the following disclaimer.
* Redistributions in binary form must reproduce the above copyright notice,
this list of conditions and the following disclaimer in the documentation
and/or other materials provided with the distribution.
* Neither the name of the copyright holder nor the names of its
contributors may be used to endorse or promote products derived from
this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
```
| text/markdown | null | Jan Janssen <janssen@mpie.de> | null | null | BSD 3-Clause License
Copyright (c) 2023, Jan Janssen
All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are met:
* Redistributions of source code must retain the above copyright notice, this
list of conditions and the following disclaimer.
* Redistributions in binary form must reproduce the above copyright notice,
this list of conditions and the following disclaimer in the documentation
and/or other materials provided with the distribution.
* Neither the name of the copyright holder nor the names of its
contributors may be used to endorse or promote products derived from
this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. | h5io, hdf5 | [
"Development Status :: 5 - Production/Stable",
"Intended Audience :: Science/Research",
"License :: OSI Approved :: BSD License",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programm... | [] | null | null | >=3.9 | [] | [] | [] | [
"h5io<=0.2.5,>=0.2.2",
"h5py<=3.15.1,>=3.6.0",
"numpy<=2.4.2,>=1.23.5",
"pandas<=3.0.1,>=1.5.3",
"tables==3.10.2; extra == \"pytables\""
] | [] | [] | [] | [
"Homepage, https://github.com/h5io/h5io_browser",
"Documentation, https://github.com/h5io/h5io_browser",
"Repository, https://github.com/h5io/h5io_browser"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-19T05:53:45.862318 | h5io_browser-0.2.11.tar.gz | 15,274 | e3/82/6c3c167d0b6592cbb4fdff47fbb3a7ca2fef9e564ee7a8c98d06888f612a/h5io_browser-0.2.11.tar.gz | source | sdist | null | false | 2399f19f00076a546c38a45d60600918 | 95f86d49b46e6574689adc573f5dac8e73a8f370b50dd4d6e53ed85ea7fcd430 | e3826c3c167d0b6592cbb4fdff47fbb3a7ca2fef9e564ee7a8c98d06888f612a | null | [
"LICENSE"
] | 346 |
2.4 | bojstat | 0.1.0 | Python client for the Bank of Japan Time-Series Statistics Data Search API | # bojstat
An unofficial Python client for the **Bank of Japan Time-Series Statistics Data Search API**.
Built on the [BOJ statistics API](https://www.boj.or.jp/statistics/outline/notice_2026/not260218a.htm) released on February 18, 2026, it lets you easily fetch more than 200,000 statistical time series (interest rates, exchange rates, money stock, Tankan, and more) from Python.
> **Note**: This package is not officially provided or endorsed by the Bank of Japan.
---
## Installation
```bash
pip install bojstat
# with pandas support
pip install bojstat[pandas]
# with polars support
pip install bojstat[polars]
# both
pip install bojstat[all]
# from GitHub (development version)
pip install "git+https://github.com/kigasudayooo/bojstat.git#subdirectory=python"
```
## Quick Start
```python
from bojstat import BOJStatClient
client = BOJStatClient()
# List the available databases
print(client.list_databases())
# Fetch metadata for the foreign exchange rates database (FM08)
meta = client.get_metadata(db="FM08")
for m in meta[:3]:
    print(m["SERIES_CODE"], m.get("NAME_OF_TIME_SERIES_J"))
# Fetch the uncollateralized overnight call rate
data = client.get_data(
    db="FM01",
    codes=["STRDCLUCON", "STRDCLUCONH", "STRDCLUCONL"],
    start="202501",
)
# Convert to a pandas DataFrame
df = client.to_dataframe(data)
print(df.head())
```
## API Reference
### `BOJStatClient(lang="jp", timeout=30, request_interval=1.0)`
Initializes the client.
| Parameter | Type | Default | Description |
|---|---|---|---|
| `lang` | str | `"jp"` | Output language (`"jp"` or `"en"`) |
| `timeout` | int | `30` | Request timeout (seconds) |
| `request_interval` | float | `1.0` | Wait time between consecutive requests (seconds). High-frequency access may cause the server to drop your connection |
---
### `get_data(db, codes, start=None, end=None, start_position=None)`
**Code API**. Fetches data for the specified series codes (up to 250 series per request).
```python
data = client.get_data(
db="CO",
codes=["TK99F1000601GCQ01000", "TK99F2000601GCQ01000"],
    start="202401",  # 2024 Q1
    end="202504",    # 2025 Q4
)
```
**Start/end period formats**
| Frequency | Format | Example |
|---|---|---|
| Monthly/weekly/daily | `YYYYMM` | `"202501"` = January 2025 |
| Quarterly | `YYYYQQ` | `"202502"` = 2025 Q2 |
| Calendar/fiscal half-year | `YYYYHH` | `"202501"` = H1 2025 |
| Calendar/fiscal year | `YYYY` | `"2025"` |
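As an illustration, period strings in the formats above can be built with simple formatting. These helpers are hypothetical and not part of the bojstat package:

```python
# Hypothetical helpers (not part of bojstat) for building start/end
# period strings in the formats documented above.

def monthly(year: int, month: int) -> str:
    """YYYYMM, e.g. January 2025 -> "202501"."""
    return f"{year}{month:02d}"

def quarterly(year: int, quarter: int) -> str:
    """YYYYQQ, e.g. 2025 Q2 -> "202502"."""
    return f"{year}{quarter:02d}"
```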
---
### `get_data_all(db, codes, start=None, end=None)`
Automatically paginates to fetch all data when more than 250 series are requested.
```python
all_data = client.get_data_all(db="PR01", codes=my_500_codes)
```
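Conceptually, this amounts to splitting the code list into request-sized chunks. A rough sketch of the rule (illustrative only; `chunk_codes` is not part of bojstat):

```python
# Illustrative sketch of the 250-series-per-request pagination rule;
# chunk_codes is a hypothetical name, not bojstat's actual implementation.

def chunk_codes(codes: list[str], size: int = 250) -> list[list[str]]:
    """Split a series-code list into request-sized chunks."""
    return [codes[i:i + size] for i in range(0, len(codes), size)]
```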
---
### `get_layer(db, frequency, layer, start=None, end=None, start_position=None)`
**Layer API**. Fetches data filtered by layer (hierarchy) information.
```python
data = client.get_layer(
db="BP01",
frequency="M",
    layer="1,1,1",  # layer1=1, layer2=1, layer3=1
start="202504",
end="202509",
)
```
Wildcard examples for `layer`:
| Value | Meaning |
|---|---|
| `"*"` | All series |
| `"1,1"` | All series with layer1=1 and layer2=1 |
| `"1,*,1"` | layer1=1, any layer2, layer3=1 |
---
### `get_metadata(db)`
**Metadata API**. Fetches metadata such as series codes, series names, and coverage periods.
```python
meta = client.get_metadata(db="FM08")
```
---
### `search_series(db, keyword=None)`
Searches the metadata for series matching a keyword.
```python
# Search the FX database (FM08) for series containing "ドル" (dollar)
results = client.search_series(db="FM08", keyword="ドル")
```
---
### `to_pandas(data)`
Converts the result of `get_data()` / `get_layer()` into a pandas DataFrame.
```python
df = client.to_pandas(data)
# Index: date; columns: series codes
```
---
### `to_polars(data)`
Converts the result of `get_data()` / `get_layer()` into a polars DataFrame.
```python
df = client.to_polars(data)
# Columns: date plus one column per series code
```
> `to_dataframe()` remains available as an alias for `to_pandas()`.
---
## Available Databases
| DB | Description |
|---|---|
| IR01 | Basic discount rate and basic loan rate |
| FM01 | Uncollateralized overnight call rate (every business day) |
| FM08 | Foreign exchange rates |
| FM09 | Effective exchange rates |
| MD01 | Monetary base |
| MD02 | Money stock |
| CO | Tankan (Short-Term Economic Survey) |
| PR01 | Producer Price Index |
| BP01 | Balance of payments |
| FF | Flow of funds |
| ... | (see `client.list_databases()` for the full list) |
---
## Notes
- High-frequency access may cause the server to drop your connection. Set `request_interval` (default: 1 second) appropriately.
- Per-request limits: 250 series and 60,000 data points.
- Specify series codes **without the DB name** (e.g. `MADR1Z@D`, not `IR01'MADR1Z@D`).
- For details, see the [API user manual](https://www.stat-search.boj.or.jp/info/api_manual.pdf) and the [usage notes](https://www.stat-search.boj.or.jp/info/api_notice.pdf).
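If your code list comes from a source that includes the DB-name prefix, stripping it is straightforward. This helper is illustrative and not part of bojstat:

```python
# Hypothetical helper (not part of bojstat): remove a leading "DBNAME'"
# prefix from a series code, leaving bare codes untouched.

def strip_db_prefix(code: str) -> str:
    return code.split("'", 1)[-1]
```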
## License
MIT License
| text/markdown | null | Kazuki Hosoda <placeholder@example.com> | null | null | MIT | bank-of-japan, boj, economics, finance, japan, statistics | [
"Development Status :: 4 - Beta",
"Intended Audience :: Financial and Insurance Industry",
"Intended Audience :: Science/Research",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Prog... | [] | null | null | >=3.10 | [] | [] | [] | [
"requests>=2.28.0",
"pandas>=1.5.0; extra == \"all\"",
"polars>=0.20.0; extra == \"all\"",
"mypy; extra == \"dev\"",
"pandas>=1.5.0; extra == \"dev\"",
"polars>=0.20.0; extra == \"dev\"",
"pytest-cov; extra == \"dev\"",
"pytest>=7.0; extra == \"dev\"",
"ruff; extra == \"dev\"",
"pandas>=1.5.0; ext... | [] | [] | [] | [
"Homepage, https://github.com/kigasudayooo/bojstat",
"Documentation, https://github.com/kigasudayooo/bojstat#readme",
"Repository, https://github.com/kigasudayooo/bojstat",
"Issues, https://github.com/kigasudayooo/bojstat/issues"
] | twine/6.2.0 CPython/3.13.0 | 2026-02-19T05:53:28.035918 | bojstat-0.1.0.tar.gz | 75,186 | 9e/e1/9a4b12c42045ed065ee30e5e9131a5d125530505a70136c0b855201d9cff/bojstat-0.1.0.tar.gz | source | sdist | null | false | ee770bcdb0b4f95204a131fbe9639c76 | f055466a30f485c846daca0f35910932a2554abea53db6c8355c51b56a9712d7 | 9ee19a4b12c42045ed065ee30e5e9131a5d125530505a70136c0b855201d9cff | null | [] | 298 |
2.4 | djaodjin-survey | 0.18.1 | Django app for qualitative and quantitative surveys | DjaoDjin survey
================
[](https://djaodjin-survey.readthedocs.io/en/latest/?badge=latest)
[](https://badge.fury.io/py/djaodjin-survey)
This Django app implements a survey app for qualitative and quantitative
data points.
Full documentation for the project is available at
[Read-the-Docs](http://djaodjin-survey.readthedocs.org/)
Five-minute evaluation
=======================
The source code is bundled with a sample django project.
$ python3 -m venv .venv
$ source .venv/bin/activate
$ pip install -r testsite/requirements.txt
$ python manage.py migrate --run-syncdb --noinput
$ python manage.py loaddata testsite/fixtures/default-db.json
# Install the browser client dependencies (i.e. through `npm`)
$ make vendor-assets-prerequisites
# Start the Web server
$ python manage.py runserver
# Visit url at http://localhost:8000/
Releases
========
Tested with
- **Python:** 3.12, **Django:** 5.2 ([LTS](https://www.djangoproject.com/download/))
- **Python:** 3.14, **Django:** 6.0 (next)
- **Python:** 3.10, **Django:** 4.2 (legacy)
- **Python:** 3.9, **Django:** 3.2 (legacy)
0.18.1
* allows `created_at: null`
* starts testing with Django6
[previous release notes](changelog)
Models have been completely re-designed between versions 0.1.7 and 0.2.0.
| text/markdown | null | The DjaoDjin Team <help@djaodjin.com> | null | The DjaoDjin Team <help@djaodjin.com> | BSD-2-Clause | survey, assessment, census | [
"Framework :: Django",
"Environment :: Web Environment",
"Programming Language :: Python",
"License :: OSI Approved :: BSD License"
] | [] | null | null | >=3.7 | [] | [] | [] | [
"Django>=2.2",
"djangorestframework>=3.9",
"monotonic>=1.1",
"python-dateutil>=2.2"
] | [] | [] | [] | [
"repository, https://github.com/djaodjin/djaodjin-survey",
"documentation, https://djaodjin-survey.readthedocs.io/",
"changelog, https://github.com/djaodjin/djaodjin-survey/changelog"
] | twine/6.1.0 CPython/3.10.19 | 2026-02-19T05:53:19.286434 | djaodjin_survey-0.18.1.tar.gz | 127,622 | be/da/780b8f60e1aa3d5c4880221489c8e3f00e10835969077416fa4838ae09e2/djaodjin_survey-0.18.1.tar.gz | source | sdist | null | false | e70ae9a6fff9b4a7cfce795863b0e3f4 | 924c238beee86146f80ac865710f2de45f20a7c8c65b19f88a12fcd526979166 | beda780b8f60e1aa3d5c4880221489c8e3f00e10835969077416fa4838ae09e2 | null | [
"LICENSE.txt"
] | 301 |
2.4 | grommet | 0.1.0 | High performance type-driven Python GraphQL library backed by Rust | <!-- pragma: no ai -->
# grommet


High performance async Python GraphQL server library inspired by [Strawberry](https://strawberry.rocks/) and backed by [async-graphql](https://async-graphql.github.io/async-graphql/en/index.html).
This is an experiment in a nearly 100% AI-written project. I provide guidelines and design guidance through review of the generated code and curated revision plans, but AI does the heavy lifting. Features are developed as my token and usage counts reset.
The goal is to utilize AI to prove the concept, but do so while also laying solid technical foundations for future human-driven development and maintenance; my personal belief is that the latter is always necessary.
## Quick Start
### Installation
```bash
pip install grommet
# or
uv add grommet
```
### Examples
Define your GraphQL types as decorated dataclasses, build a schema, and execute queries:
```python
@grommet.type
@dataclass
class Query:
greeting: str = "Hello world!"
schema = grommet.Schema(query=Query)
result = await schema.execute("{ greeting }")
print(result.data) # {'greeting': 'Hello world!'}
```
Add descriptions to types and fields for better SDL:
```python
@grommet.type(description="All queries")
@dataclass
class Query:
    greeting: Annotated[str, grommet.Field(description="A simple greeting")] = "Hello world!"
sdl = grommet.Schema(query=Query).sdl
print(sdl)
# """
# All queries
# """
# type Query {
# "A simple greeting"
# greeting: String!
# }
```
Root types (`Query`, `Mutation`, `Subscription`) cannot have fields without defaults. Use `grommet.field` to define
fields using resolvers to dynamically return values, possibly with required and optional arguments:
```python
@grommet.type
@dataclass
class Query:
@grommet.field(description="A simple greeting")
async def greeting(self, name: str, title: str | None = None) -> str:
return f"Hello {name}!" if not title else f"Hello, {title} {name}."
schema = grommet.Schema(query=Query)
result = await schema.execute('{ greeting(name: "Gromit") }')
print(result.data) # {'greeting': 'Hello Gromit!'}
result = await schema.execute('{ greeting(name: "Gromit", title: "Mr.") }')
print(result.data) # {'greeting': 'Hello Mr. Gromit.'}
```
Limit what fields are exposed to the schema via `grommet.Hidden`, `ClassVar`, or the standard `_private_var` syntax:
```python
@grommet.type
@dataclass
class User:
_foo: int
bar: ClassVar[int]
hidden: Annotated[int, grommet.Hidden]
name: str
def _message(self) -> str:
return f"Hello {self.name}" + ("!" * self._foo * self.bar * self.hidden)
@grommet.field
async def greeting(self) -> str:
return self._message()
@grommet.type
@dataclass
class Query:
@grommet.field
async def user(self, name: str) -> User:
return User(_foo=2, bar=2, hidden=2, name=name)
schema = grommet.Schema(query=Query)
result = await schema.execute('{ user(name: "Gromit") { greeting } }')
print(result.data) # {'user': {'greeting': 'Hello Gromit!!!!!!'}}
```
Add mutations by defining a separate mutation root type, passing `variables`:
```python
@grommet.input(description="User input.")
@dataclass
class AddUserInput:
name: Annotated[str, grommet.Field(description="The name of the user.")]
title: Annotated[
str | None, grommet.Field(description="The title of the user, if any.")
]
@grommet.type
@dataclass
class User:
name: str
title: str | None
@grommet.field
async def greeting(self) -> str:
return (
f"Hello {self.name}!"
if not self.title
else f"Hello, {self.title} {self.name}."
)
@grommet.type
@dataclass
class Mutation:
@grommet.field
async def add_user(self, input: AddUserInput) -> User:
return User(name=input.name, title=input.title)
schema = grommet.Schema(query=Query, mutation=Mutation)
mutation = """
mutation ($name: String!, $title: String) {
add_user(input: { name: $name, title: $title }) { greeting }
}
"""
result = await schema.execute(mutation, variables={"name": "Gromit"})
print(result.data) # {'add_user': {'greeting': 'Hello Gromit!'}}
result = await schema.execute(mutation, variables={"name": "Gromit", "title": "Mr."})
print(result.data) # {'add_user': {'greeting': 'Hello Mr. Gromit.'}}
```
Stream real-time data with subscriptions:
```python
from collections.abc import AsyncIterator
@grommet.type
@dataclass
class Subscription:
@grommet.subscription
async def counter(self, limit: int) -> AsyncIterator[int]:
for i in range(limit):
yield i
schema = grommet.Schema(query=Query, subscription=Subscription)
stream = await schema.execute("subscription { counter(limit: 3) }")
async for result in stream:
print(result.data)
# {'counter': 0}
# {'counter': 1}
# {'counter': 2}
```
Store and access arbitrary information using the operation state:
```python
@grommet.type
@dataclass
class Query:
@grommet.field
async def greeting(
self, context: Annotated[dict[str, str], grommet.Context]
) -> str:
return f"Hello request {context['request_id']}!"
schema = grommet.Schema(query=Query)
result = await schema.execute("{ greeting }", context={"request_id": "123"})
print(result.data) # {'greeting': 'Hello request 123!'}
```
Define unions, optionally providing a name or description:
```python
@grommet.type
@dataclass
class A:
a: int
@grommet.type
@dataclass
class B:
b: int
type NamedAB = Annotated[A | B, grommet.Union(name="NamedAB", description="A or B")]
@grommet.type
@dataclass
class Query:
@grommet.field
async def named(self, type: str) -> NamedAB:
return A(a=1) if type == "A" else B(b=2)
@grommet.field
async def unnamed(self, type: str) -> A | B:
return A(a=1) if type == "A" else B(b=2)
schema = grommet.Schema(query=Query)
print("union NamedAB" in schema.sdl) # True
# if a name is not explicitly set, grommet will concatenate all the member names
print("union AB" in schema.sdl) # True
result = await schema.execute('{ named(type: "A") { ... on A { a } ... on B { b } } }')
print(result.data) # {'named': {'a': 1}}
result = await schema.execute(
'{ unnamed(type: "B") { ... on A { a } ... on B { b } } }'
)
print(result.data) # {'unnamed': {'b': 2}}
```
Simplify unions through common interfaces:
```python
@grommet.interface(description="A letter")
@dataclass
class Letter:
letter: str
@grommet.type
@dataclass
class A(Letter):
pass
@grommet.type
@dataclass
class B(Letter):
some_subfield: list[int]
@grommet.type
@dataclass
class Query:
@grommet.field
async def common(self, type: str) -> Letter:
return A(letter="A") if type == "A" else B(letter="B", some_subfield=[42])
schema = grommet.Schema(query=Query)
print(schema.sdl)
# """
# A letter
# """
# interface Letter {
# letter: String!
# }
#
# type A implements Letter {
# letter: String!
# }
#
# type B implements Letter {
# letter: String!
# some_subfield: [Int!]!
# }
#
# type Query {
# common(type: String!): Letter!
# }
```
## Development
The public APIs for this project are defined by me (a human). Everything else is AI-written following `AGENTS.md` and plan guidelines. Implementation iterations take the form of plan documents in `ai_plans/`.
This project is configured for uv + maturin.
Install `prek` for quality control:
```bash
prek install
prek run -a
```
Run unit tests with:
```bash
maturin develop --uv
uv run pytest
uv run cargo test # you need to be in the venv!
```
Run benchmarks with:
```bash
maturin develop --uv -r
uv run python benchmarks/bench_large.py
```
| text/markdown; charset=UTF-8; variant=GFM | null | "Elias Gabriel <thearchitector>" <oss@eliasfgabriel.com> | null | null | null | graphql, rust, performance, typed, strawberry, pyo3 | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"Natural Language :: English",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: 3.14",
"Programming Language :: Rust",
"Typing :: Typed"
] | [] | null | null | <4,>=3.13 | [] | [] | [] | [
"noaio>=1.0.0",
"typing-extensions~=4.15; python_full_version < \"3.14\""
] | [] | [] | [] | [
"Homepage, https://github.com/thearchitector/grommet",
"Issues, https://github.com/thearchitector/grommet/issues",
"Repository, https://github.com/thearchitector/grommet.git"
] | uv/0.9.13 {"installer":{"name":"uv","version":"0.9.13"},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Debian GNU/Linux","version":"12","id":"bookworm","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":null} | 2026-02-19T05:52:05.188512 | grommet-0.1.0-cp313-cp313-manylinux_2_28_x86_64.whl | 1,147,221 | 63/bc/3f226fd2e0501728a0787173b1c53825d95de184af6a2240fa388bb937df/grommet-0.1.0-cp313-cp313-manylinux_2_28_x86_64.whl | cp313 | bdist_wheel | null | false | b3d38596ae0730db3d4eb06dfb50b866 | dbc81915e0c69da0cf463daf302c0792a58038203a9ab28097be4a19d5664627 | 63bc3f226fd2e0501728a0787173b1c53825d95de184af6a2240fa388bb937df | BSD-3-Clause-Clear | [
"LICENSE"
] | 335 |
2.4 | rshogi-py-avx2 | 0.7.1 | Python bindings for the rshogi core library (AVX2 build). | # rshogi-py-avx2
Python bindings for the `rshogi` Rust crate.
This package provides an AVX2-optimized build of the same `rshogi` Python module.
If you want the standard build, use
[`rshogi-py`](https://pypi.org/project/rshogi-py/).
## Installation
```bash
python -m pip install rshogi-py-avx2
```
## Notes
- `rshogi-py` and `rshogi-py-avx2` are mutually exclusive; install only one.
- This build requires an AVX2-capable CPU on x86_64.
| text/markdown; charset=UTF-8; variant=GFM | rshogi contributors | null | null | null | null | null | [
"Programming Language :: Python :: 3",
"Programming Language :: Rust",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent"
] | [] | null | null | >=3.9 | [] | [] | [] | [] | [] | [] | [] | [
"Repository, https://github.com/nyoki-mtl/rshogi"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-19T05:51:16.222498 | rshogi_py_avx2-0.7.1.tar.gz | 275,371 | c5/3f/3e6460551a9643b9b0504f4342ead63db8d3e6d540d9d33f98e07cf5c279/rshogi_py_avx2-0.7.1.tar.gz | source | sdist | null | false | e5e03183a9c4766464eca0b9e8c49fc2 | f3b2619769072ea675145a743e10324ec294b306e15ca18b1b48bf6e8171565e | c53f3e6460551a9643b9b0504f4342ead63db8d3e6d540d9d33f98e07cf5c279 | null | [
"LICENSE"
] | 1,360 |
2.4 | rshogi-py | 0.7.1 | Python bindings for the rshogi core library. | # rshogi-py
Python bindings for the `rshogi` Rust crate.
This package provides the standard build of the `rshogi` Python module.
If you want an AVX2-optimized build, use
[`rshogi-py-avx2`](https://pypi.org/project/rshogi-py-avx2/).
## Development Install
```bash
python -m pip install maturin
maturin develop -m crates/rshogi-py/pyproject.toml
```
## AVX2 Build
`rshogi-py-avx2` is an AVX2-enabled build of the same Python module.
```bash
python -m pip install rshogi-py-avx2
```
## Notes
- `rshogi-py` and `rshogi-py-avx2` are mutually exclusive; install only one.
- `rshogi-py` is the safest default for broad compatibility.
## Quick Example
```python
from rshogi.core import Board
board = Board()
board.apply_usi("7g7f")
print(board.to_sfen())
```
You can also work with USI `position` strings directly.
```python
from rshogi.core import Board, normalize_usi_position, parse_usi_position
board = Board()
board.set_usi_position("position startpos moves 7g7f 3c3d")
board2 = parse_usi_position("startpos moves 7g7f")
print(board2.to_sfen())
print(normalize_usi_position("position startpos")) # "startpos"
```
## Structured Imports
`rshogi` provides structured imports through submodules for better organization:
```python
# Types and constants
from rshogi.types import Color, PieceType, Square
from rshogi.core import Move, Move32
# Board
from rshogi.core import Board
# Records
from rshogi.record import GameRecord, GameRecordMetadata, GameResult
# Record conversion
record = GameRecord.from_kif_str(kif_text)
kif_text = record.to_kif()
# Record file I/O
record = GameRecord.from_kif_file("example.kif")
record.write_kif("example_out.kif")
# NumPy dtypes
from rshogi.numpy import PackedSfen, PackedSfenValue
```
Use submodules for all imports:
```python
from rshogi.core import Board
from rshogi.core import Move
from rshogi.record import GameRecord
```
| text/markdown; charset=UTF-8; variant=GFM | rshogi contributors | null | null | null | null | null | [
"Programming Language :: Python :: 3",
"Programming Language :: Rust",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent"
] | [] | null | null | >=3.10 | [] | [] | [] | [] | [] | [] | [] | [
"Repository, https://github.com/nyoki-mtl/rshogi"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-19T05:51:14.864008 | rshogi_py-0.7.1.tar.gz | 275,836 | 92/74/d01502d37d5e85886f3b49ad52738eb8e9dab842789156e6ca1fa744842b/rshogi_py-0.7.1.tar.gz | source | sdist | null | false | 7df1fb08dec144b580f547e4cdbd9eef | 7becbddd4d6d4a1ccaff1ee8dc36455071d3ab005d3a33f63dc48787ceccb978 | 9274d01502d37d5e85886f3b49ad52738eb8e9dab842789156e6ca1fa744842b | null | [
"LICENSE"
] | 1,280 |
2.4 | av2 | 0.3.6 | Argoverse 2: Next generation datasets for self-driving perception and forecasting. | [](https://pypi.org/project/av2/)

[](./LICENSE)
# Argoverse 2
> _Official_ GitHub repository for the [Argoverse 2](https://www.argoverse.org) family of datasets.
<p align="center">
<img src="https://argoverse.github.io/user-guide/assets/157802162-e40098c1-8677-4c16-ac60-e9bbded6badf.avif">
</p>
## Getting Started
Please see the [Argoverse User Guide](https://argoverse.github.io/user-guide/).
## Supported Datasets
- Argoverse 2 (AV2)
- [Sensor](https://argoverse.github.io/user-guide/datasets/sensor.html)
- [Lidar](https://argoverse.github.io/user-guide/datasets/lidar.html)
- [Motion Forecasting](https://argoverse.github.io/user-guide/datasets/motion_forecasting.html)
- Trust, but Verify (TbV)
- [Map Change Detection](https://argoverse.github.io/user-guide/datasets/map_change_detection.html)
## Supported Tasks
- Argoverse 2 (AV2)
- [3D Object Detection](https://argoverse.github.io/user-guide/tasks/3d_object_detection.html)
- [3D Scene Flow](https://argoverse.github.io/user-guide/tasks/3d_scene_flow.html)
- [4D Occupancy Forecasting](https://argoverse.github.io/user-guide/tasks/4d_occupancy_forecasting.html)
- [End-to-End Forecasting](https://argoverse.github.io/user-guide/tasks/e2e_forecasting.html)
- [Motion Forecasting](https://argoverse.github.io/user-guide/tasks/motion_forecasting.html)
- [Scenario Mining](https://argoverse.github.io/user-guide/tasks/scenario_mining.html)
## Citing
Please use the following citation when referencing the [Argoverse 2](https://datasets-benchmarks-proceedings.neurips.cc/paper/2021/file/4734ba6f3de83d861c3176a6273cac6d-Paper-round2.pdf) _Sensor_, _Lidar_, or _Motion Forecasting_ Datasets:
```BibTeX
@INPROCEEDINGS { Argoverse2,
author = {Benjamin Wilson and William Qi and Tanmay Agarwal and John Lambert and Jagjeet Singh and Siddhesh Khandelwal and Bowen Pan and Ratnesh Kumar and Andrew Hartnett and Jhony Kaesemodel Pontes and Deva Ramanan and Peter Carr and James Hays},
title = {Argoverse 2: Next Generation Datasets for Self-Driving Perception and Forecasting},
booktitle = {Proceedings of the Neural Information Processing Systems Track on Datasets and Benchmarks (NeurIPS Datasets and Benchmarks 2021)},
year = {2021}
}
```
Use the following citation when referencing the [Trust, but Verify](https://datasets-benchmarks-proceedings.neurips.cc/paper/2021/file/6f4922f45568161a8cdf4ad2299f6d23-Paper-round2.pdf) _Map Change Detection_ Dataset:
```BibTeX
@INPROCEEDINGS { TrustButVerify,
author = {John Lambert and James Hays},
title = {Trust, but Verify: Cross-Modality Fusion for HD Map Change Detection},
booktitle = {Proceedings of the Neural Information Processing Systems Track on Datasets and Benchmarks (NeurIPS Datasets and Benchmarks 2021)},
year = {2021}
}
```
| text/markdown; charset=UTF-8; variant=GFM | null | null | null | null | null | argoverse, argoverse2, autonomous-driving, av1, av2, 3d-object-detection, 3d-scene-flow, 4d-occupancy-forecasting, e2e-forecasting, motion-forecasting | [
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
"Programming Language :: Python",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language... | [] | https://github.com/argoverse/av2-api | null | >=3.8 | [] | [] | [] | [
"av",
"click",
"joblib",
"kornia",
"matplotlib",
"nox",
"numba",
"numpy",
"opencv-python",
"pandas",
"polars",
"pyarrow",
"pyproj",
"rich",
"scipy",
"torch",
"tqdm",
"universal-pathlib",
"black[jupyter]; extra == \"lint\"",
"mypy; extra == \"lint\"",
"ruff; extra == \"lint\""... | [] | [] | [] | [
"homepage, https://argoverse.org",
"repository, https://github.com/argoverse/av2-api"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-19T05:49:20.995737 | av2-0.3.6-cp39-cp39-win_amd64.whl | 13,987,821 | 05/2c/b37cc528c130227d01d386fde8a2897b96301489744f419a1bb3dd9380c4/av2-0.3.6-cp39-cp39-win_amd64.whl | cp39 | bdist_wheel | null | false | bba1199e187608fd6c10f725e1614370 | 0870ff329fdb12c787dc590e479049ff80c7cc7069a405ce9b63be8d329d2c32 | 052cb37cc528c130227d01d386fde8a2897b96301489744f419a1bb3dd9380c4 | null | [
"LICENSE",
"NOTICE"
] | 2,164 |
2.4 | pyoco-server | 0.5.0 | Distributed execution backend for Pyoco using NATS | pyoco-server (NATS backend for Pyoco)
====================================
This repository is an early-stage library that enables **distributed execution**
of **Pyoco** workflows using **NATS** (JetStream) as the transport and durable
queue.
Version: 0.5.0
Goals
-----
- Run Pyoco flows with a lightweight client/worker model.
- Use NATS JetStream for a durable work queue (pull-based workers, tag routing).
- Provide run status visibility (latest snapshot via JetStream Key-Value).
- Provide an optional HTTP gateway so Pyoco users do not need to handle NATS.
- Keep local dev/test setup simple: start NATS with `nats-server-bin` and manage
it with `nats-bootstrap`.
Current state
-------------
This repo is under active construction. See `docs/concept.md` for the initial
architecture and message design.
Quickstart
----------
See `docs/quickstart.md` for a 5-minute end-to-end demo (HTTP submit -> NATS queue -> worker execution -> status query).
CLI commands
------------
- `pyoco-server`: HTTP Gateway launcher
- `pyoco-worker`: worker launcher
- `pyoco-client`: HTTP client CLI (`submit/get/list/watch/tasks/workers/metrics/wheels/wheel-history/wheel-upload/wheel-delete`)
- `pyoco-server-admin`: API key management CLI
CLI UX highlights (v0.4)
------------------------
- `pyoco-client submit` supports:
- `--params '{"x":1}'` (JSON object)
- `--params-file params.yaml` (JSON/YAML object file)
- `--param key=value` (repeatable, override-friendly)
- `pyoco-client list` / `list-vnext`: `--output json|table`
- `pyoco-client watch`: `--output json|status`
- User-fixable errors return exit code `1` with correction hints on stderr.
YAML-first run (recommended)
----------------------------
`.env` is loaded automatically by `NatsBackendConfig.from_env()` (default file: `.env`).
You can disable it with `PYOCO_LOAD_DOTENV=0` or change the file path with `PYOCO_ENV_FILE`.
```bash
uv sync
uv run nats-server -js -a 127.0.0.1 -p 4222 -m 8222
```
Or start server + local NATS together via `nats-bootstrap`:
```bash
uv run pyoco-server up --with-nats-bootstrap --host 127.0.0.1 --port 8000 --dashboard-lang auto
```
```bash
export PYOCO_NATS_URL="nats://127.0.0.1:4222"
uv run pyoco-server up --host 127.0.0.1 --port 8000 --dashboard-lang auto
```
```bash
uv run pyoco-worker --nats-url nats://127.0.0.1:4222 --tags hello --worker-id w1
```
```bash
cat > flow.yaml <<'YAML'
version: 1
flow:
graph: |
add_one >> to_text
defaults:
x: 1
tasks:
add_one:
callable: pyoco_server._workflow_test_tasks:add_one
to_text:
callable: pyoco_server._workflow_test_tasks:to_text
YAML
```
```bash
uv run pyoco-client --server http://127.0.0.1:8000 submit-yaml --workflow-file flow.yaml --flow-name main --tag hello
uv run pyoco-client --server http://127.0.0.1:8000 list --tag hello --limit 20
uv run pyoco-client --server http://127.0.0.1:8000 list --tag hello --limit 20 --output table
uv run pyoco-client --server http://127.0.0.1:8000 watch <run_id> --until-terminal --output status
```
Tutorial (multi-worker)
-----------------------
See `docs/tutorial_multi_worker.md` for a more guided walkthrough with one server and multiple workers (CPU/GPU tags), plus ops endpoints.
Docs
----
- Concept: `docs/concept.md`
- Spec (contract): `docs/spec.md`
- Architecture: `docs/architecture.md`
- Library API (Python): `docs/library_api.md`
- Config (.env): `docs/config.md`
- Roadmap: `docs/plan.md`
Development
-----------
Prerequisites:
- Python 3.10+
- `uv`
Install dependencies:
```bash
uv sync
```
Run tests (will start an ephemeral NATS server for integration tests):
```bash
uv run pytest
```
HTTP Gateway (MVP)
------------------
Run the HTTP API (reads NATS settings from env):
```bash
export PYOCO_NATS_URL="nats://127.0.0.1:4222"
uv run pyoco-server up --host 0.0.0.0 --port 8000 --dashboard-lang auto
```
Tag routing
-----------
Runs are routed by subject:
- publish to `pyoco.work.<tag>`
- workers pull from one or more tags (OR semantics)
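In code, this routing scheme amounts to simple subject construction. The helper names below are illustrative, not part of pyoco-server's API:

```python
# Illustrative sketch of the subject-per-tag routing described above;
# these helpers are hypothetical, not pyoco-server's actual code.

def work_subject(tag: str) -> str:
    """Runs tagged `tag` are published to this JetStream subject."""
    return f"pyoco.work.{tag}"

def worker_subjects(tags: list[str]) -> list[str]:
    """A worker pulling several tags consumes each subject (OR semantics)."""
    return [work_subject(t) for t in tags]
```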
Wheel registry (optional)
-------------------------
`pyoco-server` exposes a wheel registry on `/wheels` backed by JetStream Object Store.
Workers can opt in to sync and install wheels automatically before processing jobs.
Workers download wheels when their worker tags intersect with wheel tags.
Wheels without tags are treated as shared for all workers.
Uploads must be a strict version bump per package (same/older version returns HTTP 409).
Wheel upload/delete operations are recorded as history with request source metadata.
Sync happens at worker startup and before the next polling cycle.
Workers do not start wheel updates in the middle of an active run.
When multiple versions exist, workers sync/install only the latest version per package.
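The tag and version rules above can be condensed into a small sketch. The function names are hypothetical, not pyoco-server's actual code:

```python
# Illustrative sketch of the wheel-sync rules: tag intersection decides
# applicability, and only the latest version per package is installed.

def wheel_applies(worker_tags: set[str], wheel_tags: set[str]) -> bool:
    # Wheels without tags are shared with all workers; otherwise the
    # worker's tags must intersect the wheel's tags.
    return not wheel_tags or bool(worker_tags & wheel_tags)

def latest_per_package(
    wheels: list[tuple[str, tuple[int, ...]]],
) -> dict[str, tuple[int, ...]]:
    # Keep only the highest version seen for each package name.
    latest: dict[str, tuple[int, ...]] = {}
    for name, version in wheels:
        if name not in latest or version > latest[name]:
            latest[name] = version
    return latest
```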
```bash
export PYOCO_WHEEL_SYNC_ENABLED=1
uv run pyoco-worker --nats-url nats://127.0.0.1:4222 --tags cpu --worker-id w-cpu --wheel-sync
uv run pyoco-client --server http://127.0.0.1:8000 wheel-upload --wheel-file dist/my_ext-0.1.0-py3-none-any.whl --tags cpu,linux
uv run pyoco-client --server http://127.0.0.1:8000 wheels
uv run pyoco-client --server http://127.0.0.1:8000 wheel-history --limit 20
```
| text/markdown | null | null | null | null | null | null | [] | [] | null | null | >=3.10 | [] | [] | [] | [
"fastapi>=0.110.0",
"httpx>=0.27.0",
"nats-py>=2.12.0",
"packaging>=24.0",
"pydantic>=2.0.0",
"pyoco>=0.6.2",
"python-multipart>=0.0.20",
"pyyaml>=6.0.0",
"uvicorn>=0.27.0"
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.12.10 | 2026-02-19T05:47:02.700177 | pyoco_server-0.5.0.tar.gz | 184,658 | 1c/03/74434236b80c2f9b31dfa1b21aefdb331cf1538767f4455f668ff2b5d2de/pyoco_server-0.5.0.tar.gz | source | sdist | null | false | 20eb2b742f11abe6481d0820841294c7 | 26b5b8c63084535482b33ead3f35f7910cf4e655315bbeafa28b29cbbedb250c | 1c0374434236b80c2f9b31dfa1b21aefdb331cf1538767f4455f668ff2b5d2de | null | [] | 267 |
2.4 | cactus-client | 0.1.0 | Tools for evaluating a CSIP Aus server implementation via a virtual client | # CACTUS Client
This is a set of tools for evaluating CSIP-Aus server test procedures defined at [CACTUS Test Definitions](https://github.com/bsgip/cactus-test-definitions).
<img width="1841" height="1003" alt="image" src="https://github.com/user-attachments/assets/0ee5b02e-cb21-476a-a6f2-975f23ecc5ae" />
## Development
`pip install -e .[dev,test]`
## Quickstart
### Installing
CACTUS requires Python 3.12+
Install the latest version from pypi with:
`pip install cactus-client`
If you're looking to update to the latest version:
`pip install --upgrade cactus-client`
To ensure it's installed properly
`cactus --help`
### Working Directory Configuration
CACTUS requires two things:
1. A configuration file - stored either in your home directory or elsewhere (will be created below).
1. A working directory - where all run outputs will be stored.
**Portable Installation**
If you're trying to keep CACTUS to a single working directory (and want all of your CACTUS operations to run out of that working directory):
1. Create a new empty directory (eg `mkdir cactus-wd`)
1. `cd cactus-wd`
1. `cactus setup -l .`
Please note - all CACTUS commands will now require you to operate out of the `./cactus-wd/` directory
1. `cd cactus-wd`
1. `cactus server`
**Global Installation**
If you'd like your CACTUS commands to work from any directory (but still have the results all stored in the working directory):
1. Create a new empty directory (eg `mkdir cactus-wd`)
1. `cactus setup -g cactus-wd`
1. `cactus server`
### Client/Server Config
Set up the server connection details (`dcap` refers to your DeviceCapability URI):
1. `cactus server dcap https://your.server/dcap`
2. `cactus server verify true`
3. `cactus server serca path/to/serca.pem`
4. `cactus server notification https://cactus.cecs.anu.edu.au/client-notifications/`
* Please note - this will utilise the shared, ANU hosted [client-notifications](https://github.com/bsgip/cactus-client-notifications) service
* If you wish to self host - please see [client-notifications](https://github.com/bsgip/cactus-client-notifications)
Set up your first client. You will be prompted to populate each field (as shown below):
1. `cactus client myclient1` - you should see output like the following:
```
Would you like to create a new client with id 'myclient1' [y/n]: y
What sort of client will this act as? [device/aggregator]: device
File path to PEM encoded client certificate: ./testdevice.crt
File path to PEM encoded client key: ./testdevice.key.decrypt
Auto calculate lfdi/sfdi from certificate? [y/n]: y
lfdi=0F3078CFDDAEE28DC20B95635DC116CC2A6D877F
sfdi=40773583337
Client Private Enterprise Number (PEN) (used for mrid generation): 12345
Client PIN (used for matching EndDevice.Registration): 111115
The DERSetting.setMaxW and DERCapability.rtgMaxW value to use (in Watts): 5000
.cactus.yaml has been updated with a new client.
myclient1
┏━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┓
┃ key ┃ value ┃
┡━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┩
│ type │ device │
│ certificate_file │ ./testdevice.crt ✓ │
│ key_file │ ./testdevice.key.decrypt ✓ │
│ lfdi │ 0F3078CFDDAEE28DC20B95635DC116CC2A6D877F │
│ sfdi │ 40773583337 │
│ max_watts │ 5000 │
│ pen │ 12345 │
│ pin │ 111115 │
│ user_agent │ null │
└──────────────────┴───────────────────────────────────────────────┘
```
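The auto-calculated `lfdi`/`sfdi` pair in the output above follows the IEEE 2030.5 convention: the LFDI is the left-most 160 bits of the client certificate's SHA-256 fingerprint, and the SFDI is the left-most 36 bits of the LFDI rendered as decimal with a sum-check digit appended (the digits then sum to a multiple of 10). A minimal sketch of that derivation - illustrative only, not cactus-client's actual code:

```python
# Sketch of the IEEE 2030.5 LFDI -> SFDI derivation (illustrative).
import hashlib

def lfdi_from_cert_der(cert_der: bytes) -> str:
    # LFDI = left-most 160 bits (40 hex chars) of the SHA-256 fingerprint
    return hashlib.sha256(cert_der).hexdigest()[:40].upper()

def sfdi_from_lfdi(lfdi: str) -> str:
    # SFDI = left-most 36 bits of the LFDI, as decimal, plus a check
    # digit chosen so the total digit sum is a multiple of 10.
    truncated = int(lfdi, 16) >> (160 - 36)
    digits = str(truncated)
    check = (10 - sum(int(d) for d in digits) % 10) % 10
    return digits + str(check)

print(sfdi_from_lfdi("0F3078CFDDAEE28DC20B95635DC116CC2A6D877F"))
# -> 40773583337 (matches the example output above)
```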
To update individual client settings (e.g. to add a `User-Agent` header to requests), just specify the parameter to update and the new value:
`cactus client myclient1 user_agent "cactus client myclient1"`
### Discovering available tests
The command `cactus tests` will print out all available test cases...
```
Available Test Procedures
┏━━━━━━━━━━┳━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┓
┃ Id ┃ Category ┃ Description ┃ Required Clients ┃
┡━━━━━━━━━━╇━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┩
│ S-ALL-01 │ Registration │ Discovery with Out-Of-Band registration │ 1 client(s) with type(s): any │
│ S-ALL-02 │ Registration │ Discovery with In-Band Registration for Direct Clients │ 1 client(s) with type(s): device │
...
```
### Running your first test
The following command will run the `S-ALL-01` test with the client you created earlier: `cactus run S-ALL-01 myclient1`
| text/markdown | null | null | null | null | null | null | [] | [] | null | null | null | [] | [] | [] | [
"cactus-test-definitions==1.9.4",
"cactus-schema>=0.0.11",
"envoy-schema>=0.31.1",
"rich<15,>=14.1.0",
"treelib<2,>=1.8.0",
"aiohttp<4,>=3.11.12",
"pyyaml<7,>=6.0.2",
"dataclass-wizard<1,>=0.35.0",
"pydantic_xml[lxml]<3,>=2.11.7",
"reportlab<5,>=4.4.1",
"cryptography",
"tzdata; platform_system... | [] | [] | [] | [] | twine/6.2.0 CPython/3.12.11 | 2026-02-19T05:46:12.782335 | cactus_client-0.1.0.tar.gz | 127,500 | 1e/ee/084c2148629227aa55c0cb3c236d734109786f0dc65901aa0861c72cadd0/cactus_client-0.1.0.tar.gz | source | sdist | null | false | 32dc3320638d55df1e05ab41252d8fee | d9146ce999759b49004ce494b66467e450234a609a42374773c61e42aafdc6e7 | 1eee084c2148629227aa55c0cb3c236d734109786f0dc65901aa0861c72cadd0 | null | [
"LICENSE"
] | 272 |
2.3 | sequence | 0.8.5.dev8889859 | Simulator of QUantum Network Communication (SeQUeNCe) is an open-source tool that allows modeling of quantum networks including photonic network components, control protocols, and applications. | <p align="center">
<picture>
<source media="(prefers-color-scheme: dark)" srcset="https://raw.githubusercontent.com/sequence-toolbox/SeQUeNCe/master/docs/Sequence_Icon_Name_Dark.png">
<img src="https://raw.githubusercontent.com/sequence-toolbox/SeQUeNCe/master/docs/Sequence_Icon_Name.svg" alt="sequence icon" width="450" class="center">
</picture>
</p>
<h3><p align="center">Quantum Networking in SeQUeNCe: Customizable, Scalable, Easy Debugging</p></h3>
<div align="center">
[](https://pypi.org/project/sequence/)

[](https://sequence-rtd-tutorial.readthedocs.io/)
[](https://qutip.org/)
[](https://iopscience.iop.org/article/10.1088/2058-9565/ac22f6)
[](https://pepy.tech/projects/sequence)
<!-- [](https://pypistats.org/packages/sequence) -->
</div>
<br>
## SeQUeNCe: Simulator of QUantum Network Communication
SeQUeNCe is an open source, discrete-event simulator for quantum networks. As described in our [paper](http://arxiv.org/abs/2009.12000), the simulator includes 5 modules on top of a simulation kernel:
* Hardware
* Entanglement Management
* Resource Management
* Network Management
* Application
These modules can be edited by users to define additional functionality and test protocol schemes, or may be used as-is to test network parameters and topologies.
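At the heart of any discrete-event simulator like SeQUeNCe is a time-ordered event queue. The following toy kernel illustrates the idea only - SeQUeNCe's actual `Timeline`/event API differs:

```python
# Toy discrete-event loop (illustrative; not SeQUeNCe's real kernel).
import heapq

class Timeline:
    def __init__(self):
        self._queue = []    # heap of (time, tiebreak, action)
        self._counter = 0   # tiebreak so equal-time events stay FIFO
        self.now = 0

    def schedule(self, time, action):
        heapq.heappush(self._queue, (time, self._counter, action))
        self._counter += 1

    def run(self):
        # Pop events in time order, advancing the clock as we go.
        while self._queue:
            time, _, action = heapq.heappop(self._queue)
            self.now = time
            action()

log = []
tl = Timeline()
tl.schedule(5, lambda: log.append(("emit_photon", tl.now)))
tl.schedule(2, lambda: log.append(("init_memory", tl.now)))
tl.run()
print(log)
# -> [('init_memory', 2), ('emit_photon', 5)]
```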
## Installation
### For Users
SeQUeNCe requires [Python](https://www.python.org/downloads/) 3.11 or later. You can install SeQUeNCe using `pip`:
```
pip install sequence
```
### Development Environment Setup
If you wish to modify the source code, use an editable installation with [uv](https://docs.astral.sh/uv/):
#### Install uv ([Astral Instructions](https://docs.astral.sh/uv/getting-started/installation/))
```bash
# macOS/Linux
curl -LsSf https://astral.sh/uv/install.sh | sh
# Windows
powershell -ExecutionPolicy ByPass -c "irm https://astral.sh/uv/install.ps1 | iex"
```
#### Clone the repository and create the virtual environment
Here we clone the repository and let uv configure the development environment with the target python version.
```bash
git clone https://github.com/sequence-toolbox/SeQUeNCe.git
cd SeQUeNCe
uv sync
```
#### Activate the virtual environment
Now that the virtual environment is created with all dependencies installed, activate it with the command for your platform.
```bash
source .venv/bin/activate # macOS/Linux
.venv\Scripts\activate        # Windows (no `source` needed)
```
#### Running the test suite
SeQUeNCe includes a comprehensive test suite, which can be run with the following command:
```
uv run pytest tests
```
## Citation
Please cite us, thank you!
```
@article{sequence,
author = {Xiaoliang Wu and Alexander Kolar and Joaquin Chung and Dong Jin and Tian Zhong and Rajkumar Kettimuthu and Martin Suchara},
title = {SeQUeNCe: a customizable discrete-event simulator of quantum networks},
journal = {Quantum Science and Technology},
volume = {6},
year = {2021},
month = {sep},
doi = {10.1088/2058-9565/ac22f6},
url = {https://dx.doi.org/10.1088/2058-9565/ac22f6},
publisher = {IOP Publishing},
}
```
<!-- * X. Wu, A. Kolar, J. Chung, D. Jin, T. Zhong, R. Kettimuthu and M. Suchara. "SeQUeNCe: Simulator of QUantum Network Communication." GitHub repository, https://github.com/sequence-toolbox/SeQUeNCe, 2021. -->
## Running the GUI
Once SeQUeNCe has been installed as described above, run the `gui.py` script found in the root of the project directory
```
python gui.py
```
## Usage Examples
Many examples of SeQUeNCe in action can be found in the [example](/example) folder, including Jupyter notebook demos and code used in published papers.
## Additional Tools
### Network Visualization
The example directory contains an example .json file `starlight.json` to specify a network topology, and the utils directory contains the script `draw_topo.py` to visualize json files. To use this script, the Graphviz library must be installed. Installation information can be found on the [Graphviz website](https://www.graphviz.org/download/).
To view a network, run the script and specify the relative location of your .json file:
```
python utils/draw_topo.py example/starlight.json
```
This script also supports a flag `-m` to visualize BSM nodes created by default on quantum links between routers.
## Contact
If you have questions, please contact [Caitao Zhan](https://caitaozhan.github.io/) at [czhan@anl.gov](mailto:czhan@anl.gov).
## Papers that Used and/or Extended SeQUeNCe
* X. Wu et al., ["Simulations of Photonic Quantum Networks for Performance Analysis and Experiment Design"](https://ieeexplore.ieee.org/document/8950718), IEEE/ACM Workshop on Photonics-Optics Technology Oriented Networking, Information and Computing Systems (PHOTONICS), 2019
* X. Wu, A. Kolar, J. Chung, D. Jin, T. Zhong, R. Kettimuthu and M. Suchara. ["SeQUeNCe: A Customizable Discrete-Event Simulator of Quantum Networks"](https://iopscience.iop.org/article/10.1088/2058-9565/ac22f6), Quantum Science and Technology, 2021
* V. Semenenko et al., ["Entanglement generation in a quantum network with finite quantum memory lifetime"](https://pubs.aip.org/avs/aqs/article/4/1/012002/2835237/Entanglement-generation-in-a-quantum-network-with), AVS Quantum Science, 2022
* A. Zang et al., ["Simulation of Entanglement Generation between Absorptive Quantum Memories"](https://ieeexplore.ieee.org/abstract/document/9951205), IEEE QCE 2022
* M.G. Davis et al., ["Towards Distributed Quantum Computing by Qubit and Gate Graph Partitioning Techniques"](https://ieeexplore.ieee.org/abstract/document/10313645), IEEE QCE 2023
* R. Zhou et al., ["A Simulator of Atom-Atom Entanglement with Atomic Ensembles and Quantum Optics"](https://ieeexplore.ieee.org/abstract/document/10313610), IEEE QCE 2023
* X. Wu et al., ["Parallel Simulation of Quantum Networks with Distributed Quantum State Management"](https://dl.acm.org/doi/abs/10.1145/3634701), ACM Transactions on Modeling and Computer Simulation, 2024
* C. Howe, M. Aziz, and A. Anwar, ["Towards Scalable Quantum Repeater Networks"](https://arxiv.org/abs/2409.08416), arXiv preprint, 2024
* A. Zang et al., ["Quantum Advantage in Distributed Sensing with Noisy Quantum Networks"](https://arxiv.org/abs/2409.17089), arXiv preprint, 2024
* L. d'Avossa et al., ["Simulation of Quantum Transduction Strategies for Quantum Networks"](https://arxiv.org/abs/2411.11377), arXiv preprint, 2024
* F. Mazza et al., ["Simulation of Entanglement-Enabled Connectivity in QLANs using SeQUeNCe"](https://arxiv.org/abs/2411.11031), IEEE ICC 2025
* C. Zhan et al., ["Design and Simulation of the Adaptive Continuous Entanglement Generation Protocol"](https://arxiv.org/abs/2502.01964), QCNC 2025. [GitHub Repository](https://github.com/caitaozhan/adaptive-continuous)
* H. Miller et al., ["Simulation of a Heterogeneous Quantum Network"](https://arxiv.org/abs/2512.04211), arXiv preprint, 2025
Please open a Pull Request to add your paper here!
| text/markdown | Xiaoliang Wu, Joaquin Chung, Alexander Kolar, Alexander Kiefer, Eugene Wang, Tian Zhong, Rajkumar Kettimuthu, Martin Suchara, Robert Hayek, Ansh Singal, Caitao Zhan | Xiaoliang Wu, Joaquin Chung, Alexander Kolar, Alexander Kiefer, Eugene Wang, Tian Zhong, Rajkumar Kettimuthu, Martin Suchara, Robert Hayek, Ansh Singal, Caitao Zhan <czhan@anl.gov> | Caitao Zhan, Robert Hayek | Caitao Zhan <czhan@anl.gov>, Robert Hayek <rhayek@anl.gov> | null | quantum, network, discrete, event, simulator | [
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: 3.14"
] | [] | null | null | <3.15,>=3.11 | [] | [] | [] | [
"dash>=3.3.0",
"dash-bootstrap-components>=2.0.4",
"dash-core-components>=2.0.0",
"dash-cytoscape>=1.0.2",
"dash-html-components>=2.0.0",
"dash-table>=5.0.0",
"gmpy2>=2.2.2",
"ipywidgets>=8.1.8",
"jupyterlab>=4.5.0",
"matplotlib>=3.10.7",
"networkx>=3.6.1",
"numpy>=2.3.5",
"pandas>=2.3.3",
... | [] | [] | [] | [
"Homepage, https://github.com/sequence-toolbox/SeQUeNCe",
"Documentation, https://sequence-rtd-tutorial.readthedocs.io/",
"Issues, https://github.com/sequence-toolbox/SeQUeNCe/issues",
"Changelog, https://github.com/sequence-toolbox/SeQUeNCe/blob/master/CHANGELOG.md"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-19T05:45:33.408227 | sequence-0.8.5.dev8889859.tar.gz | 528,460 | a5/ab/b7722d1d7aa9e45f5f225de9379bf4b3d6db28394e9b1ade8f27d8a00b90/sequence-0.8.5.dev8889859.tar.gz | source | sdist | null | false | 41f71d5bb9a6e28153ac5085202f4999 | 7e6e5452fee14a538b6d4105ea8df2a7c39629a8f46c45add9b7f1af8d5a0844 | a5abb7722d1d7aa9e45f5f225de9379bf4b3d6db28394e9b1ade8f27d8a00b90 | null | [] | 227 |
2.4 | takobot | 0.1.69 | Takobot: your highly autonomous and incredibly curious octopus friend | # takobot
Tako is **your highly autonomous octopus friend** built in **Python** with a docs-first memory system and **Type 1 / Type 2** thinking. By default, Tako is intentionally curious about the world and pushes toward evidence-backed answers. The direction is informed by modern productivity research and stays web3-native via **XMTP** and **Ethereum** (with **Farcaster** support planned). Today, this repo includes:
- A first-class interactive terminal app main loop (`takobot`) with transcript, status bar, panels, and input box
- Installed shell wrapper support: `tako.sh` is packaged for deployments and fresh workspaces now materialize a local `tako.sh` launcher (dispatching to installed `takobot` outside repo mode)
- Startup health checks (instance shape, lock, resource probes) before entering the main loop
- Pi-first/required inference discovery: Takobot installs and uses workspace-local `pi` runtime (`@mariozechner/pi-ai` + `@mariozechner/pi-coding-agent`) and records key-source detection
- Pi auth bridging: when available, Takobot adopts local-system API keys (environment and common CLI auth files) for pi runtime usage
- Assisted pi login workflow: `inference login` can relay pi login prompts back to the operator (`inference login answer <text>`) and auto-syncs Codex OAuth from `~/.codex/auth.json` into `.tako/pi/agent/auth.json`
- Pi chat inference keeps tools/skills/extensions enabled and links workspace `skills/` + `tools/` into the pi agent runtime context
- Pi chat turn summaries are now written to logs (`.tako/logs/runtime.log` and `.tako/logs/app.log`) so operator prompts/replies are traceable during long runs
- Inference command-level failures now log invoked command + output tails to `.tako/logs/error.log`
- Default pi tooling install in workspace (`.tako/pi/node`), with local `nvm` bootstrap under `.tako/nvm` when host Node/npm are missing or Node is incompatible (`<20`)
- Inference execution gate so first model call starts on the first interactive chat turn
- OpenClaw-style conversation management: per-session JSONL transcripts under `.tako/state/conversations/` with bounded history windows injected into prompts
- A background XMTP runtime with stream retries + polling fallback
- XMTP profile sync: on startup/pair/rebuild/name-change, Takobot verifies XMTP profile metadata when read APIs exist and repairs mismatches when write APIs exist; it also publishes a Takobot profile metadata JSON message (`{"type":"profile",...}`) to self-DM and known DM peers as a fallback for clients that parse it, generates a deterministic avatar at `.tako/state/xmtp-avatar.svg`, and records sync/broadcast state in `.tako/state/xmtp-profile.json` and `.tako/state/xmtp-profile-broadcast.json`
- EventBus-driven cognition: in-memory event fanout + JSONL audit + Type 1 triage + Type 2 escalation
- World Watch sensor loop: RSS/Atom polling plus child-stage curiosity crawling (Reddit/Hacker News/Wikipedia), deterministic world notebook writes, and bounded briefings
- Boredom/autonomy loop: when runtime stays idle, DOSE indicators drift down and Tako triggers boredom-driven exploration (about hourly by default) to find novel signals
- Child-stage chat tone is relationship-first: it asks one small context question at a time (who/where/what the operator does) and avoids forcing task frameworks unless asked
- Child-stage chat avoids interrogation loops: answers first, avoids asking which channel is in use, and uses profile-aware anti-repeat guidance so follow-up questions feel natural
- Child-stage operator context is captured into `memory/people/operator.md`; shared websites are added to `[world_watch].sites` in `tako.toml` for monitoring
- Heartbeat-time git hygiene: if workspace changes are pending, Tako stages (`git add -A`) and commits automatically, and verifies the repo is clean after commit
- Missing-setup prompts: when required config/deps are missing and auto-remediation fails, Tako asks the operator with concrete fix steps
- Runtime problem capture: detected warnings/errors are converted into committed `tasks/` items for follow-up
- Animated "mind" indicator in the TUI (status/sidebar/stream/octopus panel) while Tako is thinking or responding
- Auto-update setting (`tako.toml` → `[updates].auto_apply = true` by default) with in-app apply + self-restart when a new package release is detected
- XMTP control-channel handling with command router (`help`, `status`, `doctor`, `config`, `jobs`, `task`, `tasks`, `done`, `morning`, `outcomes`, `compress`, `weekly`, `promote`, `update`, `web`, `run`, `reimprint`) plus plain-text chat replies
- XMTP `update` now requests a terminal-app restart automatically when updates are applied in a paired TUI-hosted runtime (daemon-only mode still reports manual restart guidance)
- Natural-language scheduling for recurring jobs (`every day at 3pm ...`) with persisted job state at `.tako/state/cron/jobs.json`
- Built-in operator tools for webpage reads (`web <url>`) and local shell commands (`run <command>`), plus standard autonomous web tools in `tools/`: `web_search` and `web_fetch`
- Code work isolation: shell command execution runs in `code/` (git-ignored) so repo clones and code sandboxes stay out of workspace history
- Built-in starter skills are auto-seeded into `skills/` and auto-enabled: OpenClaw top skills, `skill-creator`, `tool-creator`, MCP-focused `mcporter-mcp`, and an `agent-cli-inferencing` guide that nudges toward `@mariozechner/pi-ai`
- TUI activity feed (inference/tool/runtime events), clipboard copy actions, and a stage-specific ASCII octopus panel with Takobot version + DOSE indicators
- Research visibility: during streamed inference, inferred tool steps (for example web browsing/search/tool calls) are surfaced as live "active work" in the Tasks panel
- TUI input history recall: press `↑` / `↓` in the input box to cycle previously submitted local messages
- Slash-command UX in the TUI: typing `/` opens a dropdown under the input field with command shortcuts; includes `/models` for pi/inference auth config, `/jobs` for schedule control, `/upgrade` as update alias, `/stats` for runtime counters, and `/dose ...` for direct DOSE level tuning
- TUI command entry supports `Tab` autocomplete for command names (with candidate cycling on repeated `Tab`)
- Local TUI input is now queued: long-running turns no longer block new message entry, and pending input count is shown in status/sensors
- XMTP outbound replies are mirrored into the local TUI transcript/activity feed so remote conversations stay visible in one place
- Mission objectives are formalized in `SOUL.md` (`## Mission Objectives`) and editable in-app via `mission` commands (`mission show|set|add|clear`)
- Runtime writes deterministic world notes under `memory/world/YYYY-MM-DD.md` and daily mission snapshots under `memory/world/mission-review/YYYY-MM-DD.md`
- Focus-aware memory recall on every inference: DOSE emotional state drives how much semantic RAG context is pulled from `memory/` via `ragrep` (minimal context when focused, broader context when diffuse)
- Prompt context stack parity across channels: local TUI chat and XMTP chat now both include `SOUL.md`/`SKILLS.md`/`TOOLS.md` excerpts, live skills/tools inventories, `MEMORY.md` frontmatter, focus summary, semantic RAG context, and recent conversation history
- Effective thinking defaults are split by cognition lane: Type1 uses fast `minimal` thinking, Type2 uses deep `xhigh` thinking
- Life-stage model (`hatchling`, `child`, `teen`, `adult`) persisted in `tako.toml` with stage policies for routines/cadence/budgets
- Bubble stream now shows the active request focus + elapsed time while thinking/responding so long responses stay transparent
- Incremental `pi thinking` stream chunks now render inline in one evolving status line (instead of newline-per-token); structural markers stay on separate lines
- Inference debug telemetry is now more verbose by default (ready-provider list, periodic waiting updates, app-log traces) with a bounded total local-chat timeout to avoid indefinite spinner stalls
- TUI right-click on selected transcript/stream text now triggers in-app copy-to-clipboard without clearing the selection
- XMTP daemon resilience: retries transient send failures and auto-rebuilds XMTP client sessions after repeated stream/poll failures
- Local/XMTP chat prompts now enforce canonical identity naming from workspace/identity state, so self-introductions stay consistent after renames
- Productivity engine v1: GTD + PARA folders (`tasks/`, `projects/`, `areas/`, `resources/`, `archives/`), daily outcomes, weekly review, progressive summaries
- Docs-first repo contract (`SOUL.md`, `VISION.md`, `MEMORY.md`, `SKILLS.md`, `TOOLS.md`, `ONBOARDING.md`)
- OpenClaw-style docs tree in `docs/` (`start/`, `concepts/`, `reference/`)
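Several bullets above describe per-session JSONL transcripts with bounded history windows injected into prompts. The pattern can be sketched as follows - a toy illustration only, assuming the hypothetical helpers `append_turn`/`recent_turns` (Takobot's real conversation manager in `.tako/state/conversations/` is richer):

```python
# Illustrative sketch of JSONL transcripts with a bounded history window.
import io
import json

def append_turn(fp, role, text):
    # One JSON object per line: an append-only transcript.
    fp.write(json.dumps({"role": role, "text": text}) + "\n")

def recent_turns(fp, max_turns=2):
    # Read the transcript back, keeping only the last N turns
    # for injection into prompt context.
    fp.seek(0)
    turns = [json.loads(line) for line in fp if line.strip()]
    return turns[-max_turns:]

buf = io.StringIO()  # stands in for a per-session .jsonl file
append_turn(buf, "user", "hello")
append_turn(buf, "assistant", "hi there")
append_turn(buf, "user", "what's new?")
print([t["role"] for t in recent_turns(buf, 2)])
# -> ['assistant', 'user']
```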
## Docs
- Website: https://tako.bot (or `index.html` in this repo)
- Docs directory: `docs/` (OpenClaw-style `start/`, `concepts/`, `reference/`)
- Features: `FEATURES.md`
- Agent notes / lessons learned: `AGENTS.md`
## Quickstart
Bootstrap a new workspace in an empty directory, then launch Tako's interactive terminal app:
```bash
mkdir tako-workspace
cd tako-workspace
curl -fsSL https://tako.bot/setup.sh | bash
```
If no interactive TTY is available, bootstrap falls back to command-line daemon mode (`python -m takobot run`) instead of exiting.
Next runs:
```bash
.venv/bin/takobot
```
Bootstrap refuses to run in a non-empty directory unless it already looks like a Tako workspace (has `SOUL.md`, `AGENTS.md`, `MEMORY.md`, `tako.toml`).
Pairing flow:
- `takobot` always starts the interactive terminal app first.
- During hatchling onboarding, Tako asks in this order:
- name
- purpose
- XMTP handle yes/no (pair now or continue local-only)
- Identity naming accepts freeform input and uses inference to extract a clean name (for example, “your name can be SILLYTAKO”).
- Rename handling in running chat is inference-classified (not phrase-gated): if you request a rename without giving the target name, Tako asks for the exact replacement.
- After pairing, XMTP adds remote operator control for identity/config/tools/routines (`help`, `status`, `doctor`, `config`, `jobs`, `task`, `tasks`, `done`, `morning`, `outcomes`, `compress`, `weekly`, `promote`, `update`, `web`, `run`, `reimprint`) while the terminal keeps full local operator control.
Productivity (GTD + PARA):
- `morning` sets today’s 3 outcomes (stored in `memory/dailies/YYYY-MM-DD.md`).
- `task <title>` creates a committed task file under `tasks/`.
- `tasks` lists open tasks (filters: `project`, `area`, `due`).
- `done <task-id>` completes a task.
- `compress` writes a progressive summary block into today’s daily log.
- `weekly` runs a weekly review report.
- `promote <note>` appends an operator-approved durable note into `MEMORY.md`.
- `jobs add <natural schedule>` (or plain language like `every day at 3pm explore ai news`) schedules recurring actions.
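The natural-language schedules accepted by `jobs add` ("every day at 3pm ...") can be illustrated with a toy parser. This is only a sketch of the shape of the problem; Takobot's real scheduler is richer and inference-assisted:

```python
# Toy parser for "every day at <time> <action>" schedules (illustrative).
import re

def parse_daily_job(text):
    m = re.match(
        r"every day at (\d{1,2})(?::(\d{2}))?\s*(am|pm)\s+(.+)", text, re.I
    )
    if not m:
        return None
    # Convert 12-hour clock to 24-hour (12am -> 0, 3pm -> 15).
    hour = int(m.group(1)) % 12 + (12 if m.group(3).lower() == "pm" else 0)
    minute = int(m.group(2) or 0)
    return {"hour": hour, "minute": minute, "action": m.group(4)}

print(parse_daily_job("every day at 3pm explore ai news"))
# -> {'hour': 15, 'minute': 0, 'action': 'explore ai news'}
```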
## Architecture (minimal)
Committed (git-tracked):
- `SOUL.md`, `MEMORY.md`, `SKILLS.md`, `TOOLS.md`, `ONBOARDING.md`, `AGENTS.md`, `tako.toml`
- `FEATURES.md` (feature tracker)
- `memory/dailies/YYYY-MM-DD.md` (daily logs)
- `memory/world/` (`YYYY-MM-DD.md`, `model.md`, `entities.md`, `assumptions.md`)
- `memory/reflections/`, `memory/contradictions/` (reflection + contradiction tracking)
- `tasks/`, `projects/`, `areas/`, `resources/`, `archives/` (execution structure)
- `tools/` (workspace tools; operator-approved installs are auto-enabled)
- `skills/` (workspace skills; starter pack + operator-approved installs are auto-enabled)
Runtime-only (ignored):
- `.tako/keys.json` (XMTP wallet key + DB encryption key; unencrypted, file perms only)
- `.tako/operator.json` (operator imprint metadata)
- `.tako/logs/` (runtime and terminal logs)
- `.tako/tmp/` (workspace-local temp files used by inference and bootstrap fallback)
- `.tako/nvm/` (workspace-local Node runtime via nvm when system Node is unavailable)
- `.tako/npm-cache/` (workspace-local npm cache for tool installs)
- `.tako/xmtp-db/` (local XMTP DB)
- `.tako/state/**` (runtime state: heartbeat/cognition/etc)
- `.tako/quarantine/**` (download quarantine for skills/tools)
- `.venv/` (local virtualenv with the engine installed)
## What happens on first run
- Creates a local Python virtual environment in `.venv/`.
- Attempts to install or upgrade the engine with `pip install --upgrade takobot` (PyPI). If that fails and no engine is present, it clones source into `.tako/tmp/src/` and installs from there.
- Installs local pi runtime in `.tako/pi/node` (`@mariozechner/pi-ai` + `@mariozechner/pi-coding-agent`) by default; if Node/npm are missing or Node is below the pi requirement (`>=20`), bootstrap installs workspace-local `nvm` + Node under `.tako/nvm` first.
- Materializes the workspace from engine templates (`takobot/templates/**`) without overwriting existing files (including workspace `tako.sh` launcher materialization).
- Seeds a baseline model tuning guide at `resources/model-guide.md`.
- Initializes git (if available) and commits the initial workspace.
- If initial git commit is blocked by missing identity, bootstrap sets repo-local fallback identity from `workspace.name` (email format: `<name>.tako.eth@xmtp.mx`) and retries.
- Ensures a git-ignored `code/` directory exists for temporary repo clones/code work.
- Generates a local key file at `.tako/keys.json` with a wallet key and DB encryption key (unencrypted; protected by file permissions).
- Creates runtime logs/temp directories at `.tako/logs/` and `.tako/tmp/`.
- Creates a local XMTP database at `.tako/xmtp-db/`.
- Launches the interactive terminal app main loop (`takobot`, default).
- Runs a startup health check to classify instance context (brand-new vs established), verify lock/safety, and inspect local resources.
- If required setup is missing, emits an in-app operator request with direct remediation steps.
- Detects pi runtime/auth/key sources (including Codex OAuth import into `.tako/pi/agent/auth.json` when available) and persists runtime metadata to `.tako/state/inference.json`.
- If workspace-local pi runtime is missing, runtime discovery bootstraps workspace-local nvm/node and installs pi tooling under `.tako/`.
- Loads auto-update policy from `tako.toml` (`[updates].auto_apply`, default `true`).
- Runs stage-aware onboarding as an explicit state machine inside the app (`name -> purpose -> XMTP handle`).
- Shows an activity panel in the TUI so you can see inference/tool/runtime actions as they happen.
- Shows the top-right octopus panel with Takobot version and compact DOSE indicators (D/O/S/E).
- Starts the runtime service (heartbeat + exploration + sensors) and continuously applies Type 1 triage; serious events trigger Type 2 tasks with depth-based handling.
- Type 2 escalation uses the required pi runtime after the first interactive turn; if pi is unavailable/fails, Type 2 falls back to heuristic guidance.
- Seeds starter skills into `skills/`, registers them, and auto-enables installed extensions.
- If paired, starts background XMTP runtime and keeps terminal as local cockpit with plain-text chat still available.
## Configuration
There is **no user-facing configuration via environment variables or CLI flags**.
Workspace configuration lives in `tako.toml` (no secrets).
- `workspace.name` is the bot’s identity name and is kept in sync with rename/identity updates.
- Auto-update policy lives in `[updates]` (`auto_apply = true` by default). In the TUI: `update auto status|on|off`.
- World-watch feeds live in `[world_watch]` (`feeds = [...]`, `poll_minutes = <minutes>`).
- Website watch-list lives in `[world_watch].sites` and is automatically updated when child-stage chat captures operator-preferred websites.
- In `child` stage, world-watch also performs random curiosity sampling from Reddit, Hacker News, and Wikipedia.
- Use `config` (local TUI) or XMTP `config` to get a guided explanation of all `tako.toml` options and current values.
- Inference auth/provider settings are runtime-local in `.tako/state/inference-settings.json` and can be managed directly with `inference ...` commands (provider preference `auto|pi`, API keys, pi OAuth inventory).
- `doctor` runs local/offline inference diagnostics (CLI probes + recent inference error scan), attempts automatic workspace-local inference repair first, and does not depend on inference being available.
- Extension downloads are always HTTPS; non-HTTPS is not allowed.
- Security permission defaults for enabled extensions are now permissive by default (`network/shell/xmtp/filesystem = true`), and can be tightened in `tako.toml`.
Any change that affects identity/config/tools/sensors/routines must be initiated by the operator (terminal app or paired XMTP). Natural-language operator requests can be applied directly, and durable changes should still be reflected in repo-tracked docs (`SOUL.md`, `MEMORY.md`, etc).
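Putting the options named above together, a `tako.toml` might look like the following. Keys are those documented in this section; values (and the exact table layout) are illustrative:

```toml
[workspace]
name = "tako"             # bot identity name, kept in sync with renames

[updates]
auto_apply = true         # default; toggle in-app with `update auto on|off`

[world_watch]
feeds = ["https://example.com/feed.xml"]  # illustrative feed URL
poll_minutes = 30                         # illustrative cadence
sites = ["https://example.com"]           # auto-populated from child-stage chat
```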
## Developer utilities (optional)
- Local checks: `.venv/bin/takobot doctor`
- One-off DM send: `.venv/bin/takobot hi --to <xmtp_address_or_ens> [--message ...]`
- Direct daemon (dev): `.venv/bin/takobot run`
- Test suite: `.venv/bin/python -m unittest discover -s tests -p 'test_*.py'`
- Feature checklist guard: `tests/test_features_contract.py` parses every `FEATURES.md` test criterion and enforces probe coverage so checklist drift is caught in CI/local runs.
- Research-note scenario: `tests/test_research_workflow.py` validates that a research topic can fetch sources and write structured daily notes.
## Notes
- Workspaces are git-first, but git is optional. If git is missing, Tako runs and warns that versioning is disabled.
- The daemon now retries XMTP stream subscriptions with backoff when transient group/identity stream errors occur.
- When stream instability persists, the daemon falls back to polling message history and retries stream mode after polling stabilizes.
- While running, Tako periodically checks for package updates. With `updates.auto_apply = true`, the TUI applies the update and restarts itself.
- XMTP client initialization disables history sync by default for compatibility.
- Runtime event log lives at `.tako/state/events.jsonl` as an audit stream; events are consumed in-memory via EventBus (no JSONL polling queue).
- World Watch sensor state is stored in `.tako/state/rss_seen.json` and `.tako/state/curiosity_seen.json`; briefing cadence/state is stored in `.tako/state/briefing_state.json`.
- Runtime inference metadata lives at `.tako/state/inference.json` (no raw secrets written by Tako).
- Runtime daemon logs are appended to `.tako/logs/runtime.log`; TUI transcript/system logs are appended to `.tako/logs/app.log`.
- Pi-backed chat adds explicit `pi chat user` / `pi chat assistant` summary lines in runtime/app logs.
- Inference now runs through workspace-local pi runtime; if pi is not available, Takobot falls back to non-inference heuristic responses.
- Inference subprocess temp output and `TMPDIR`/`TMP`/`TEMP` are pinned to `.tako/tmp/` (workspace-local only).
- Chat context is persisted in `.tako/state/conversations/` (`sessions.json` + per-session JSONL transcripts) and recent turns are injected into prompt context.
- On each heartbeat, Tako checks git status and auto-commits pending workspace changes (`git add -A` + `git commit`) when possible.
- Scheduled jobs are evaluated on heartbeat ticks (default cadence: every 30s in app mode), then queued as local actions when due.
- If git auto-commit encounters missing git identity, Tako auto-configures repo-local identity from the bot name (`<name> <name.tako.eth@xmtp.mx>`) and retries the commit.
- When runtime/doctor detects actionable problems (git/inference/dependency/runtime), Tako opens/maintains matching tasks under `tasks/` automatically.
- The bootstrap launcher rebinds stdin to `/dev/tty` for app mode, so `curl ... | bash` can still start an interactive TUI.
- XMTP replies now use a typing indicator when supported by the installed XMTP SDK/runtime.
- Transcript view is now selectable (read-only text area), so mouse highlight/copy works directly in compatible terminals.
- Input box supports shell-style history recall (`↑` / `↓`) for previously submitted local messages.
- Web reads are fetched with the built-in `web` tool and logged into the daily notes stream for traceability.
- Semantic memory recall uses `ragrep` when installed (`ragrep` CLI); index state is runtime-only at `.tako/state/ragrep-memory.db`.
- XMTP support is installed with `takobot` by default; if an existing environment is missing it, run `pip install --upgrade takobot xmtp` (native build tooling such as Rust may be required).
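The `.tako/state/events.jsonl` audit stream noted above is newline-delimited JSON, one event object per line. A minimal reader sketch (the `type` field name here is hypothetical, not taken from Tako's schema):

```python
import json

def read_events(lines):
    """Parse a JSONL audit stream into a list of event dicts, skipping blank lines."""
    return [json.loads(line) for line in lines if line.strip()]

# Example: two events as they might appear in .tako/state/events.jsonl
sample = ['{"type": "heartbeat"}\n', '{"type": "chat"}\n']
events = read_events(sample)
```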
| text/markdown | pierce403 | null | null | null | null | agent, tui, xmtp, gtd, para | [
"Development Status :: 3 - Alpha",
"Environment :: Console",
"Intended Audience :: Developers",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3 :: Only",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3... | [] | null | null | >=3.11 | [] | [] | [] | [
"textual",
"web3",
"xmtp",
"build; extra == \"dev\"",
"twine; extra == \"dev\""
] | [] | [] | [] | [
"Homepage, https://tako.bot",
"Repository, https://github.com/pierce403/takobot",
"Issues, https://github.com/pierce403/takobot/issues"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-19T05:43:54.048256 | takobot-0.1.69.tar.gz | 257,676 | 39/48/0998b682bcd266c14d4b0424d7edc6409723588017d4f1d84199a2364e8f/takobot-0.1.69.tar.gz | source | sdist | null | false | d13f4e8a9542cde00251c55b05b141ed | 2521d77f7ec6c21945b80eedc2e45bea50f7f8cdd25b605573a546062ab88c83 | 39480998b682bcd266c14d4b0424d7edc6409723588017d4f1d84199a2364e8f | null | [] | 259 |
2.4 | pyrest-model-client | 1.0.3 | A simple, flexible Python HTTP client and API modeling toolkit built on httpx and pydantic. | 





---
# pyrest-model-client
A simple, flexible Python HTTP client and API modeling toolkit built on top of [httpx](https://www.python-httpx.org/) and [pydantic](https://docs.pydantic.dev/). Easily integrate robust API requests and resource models into your Python projects.
---
## 🚀 Features
- **Model-driven**: Define and interact with API resources as Python classes using `BaseAPIModel`.
- **Easy HTTP Requests**: Simple `RestApiClient` for GET, POST, PUT, DELETE with automatic header and base URL management.
- **Async Support**: Full async/await support with `AsyncRestApiClient` for high-performance concurrent requests.
- **Automatic Endpoint Normalization**: Configurable endpoint path normalization (trailing slash handling).
- **Resource Path Integration**: Models can use their `resource_path` to generate endpoints and URLs automatically.
- **Flexible Authentication**: Support for Token and Bearer authentication via `build_header()` helper.
- **Response to Model Conversion**: `get_model_fields()` helper converts API responses to typed model instances.
- **Configurable Client**: Customizable timeout, connection pool limits, and redirect handling.
- **Type Safety**: All models use Pydantic for automatic validation and serialization.
- **Error Handling**: Automatic HTTP status error handling with `raise_for_status()`.
- **Extensible**: Easily create new models for any RESTful resource by extending `BaseAPIModel`.
---
## 📦 Installation
```bash
uv add pyrest-model-client
```
---
## 🔧 Usage
### 1. Define Your Models
```python
from pyrest_model_client.base import BaseAPIModel
class User(BaseAPIModel):
    name: str
    email: str
    resource_path: str = "user"


class Environment(BaseAPIModel):
    name: str
    resource_path: str = "environment"
```
### 2. Initialize the Client and Make Requests
```python
import os
from dotenv import load_dotenv
from pyrest_model_client import RestApiClient, build_header, get_model_fields
from pyrest_model_client.base import BaseAPIModel
load_dotenv()
TOKEN = os.getenv("TOKEN")
BASE_URL = f'{os.getenv("BASE_URL")}:{os.getenv("PORT")}'
class FirstApp(BaseAPIModel):
    """
    Model representing the FirstApp API resource. The fields should match the API response structure.
    The app resource path is defined as "first_app" in the API.
    """
    name: str
    description: str | None = None
    resource_path: str = "first_app"

# Initialize the client with default settings
header = build_header(token=TOKEN)
client = RestApiClient(base_url=BASE_URL, header=header)

# Or configure the client with custom settings
import httpx
client = RestApiClient(
    base_url=BASE_URL,
    header=header,
    timeout=httpx.Timeout(60.0, connect=10.0),  # 60s read, 10s connect
    add_trailing_slash=True,  # Automatically add trailing slashes
    limits=httpx.Limits(max_keepalive_connections=5, max_connections=10),
)

# Example: Use resource_path from model
app = FirstApp(name="My App", description="Test")
endpoint = app.get_endpoint()  # Returns "first_app"
full_url = app.get_resource_url(client)  # Returns the full URL

# Example: Get all items from the API (paginated) and convert them to model instances
item_list = []
params = None
while res := client.get("first_app", params=params):
    item_list.extend(get_model_fields(res["results"], model=FirstApp))
    if not res["next"]:
        break
    params = {"page": res["next"].split("/?page=")[-1]}

# Example: Create a new item
new_item = client.post("first_app", data={"name": "My App", "description": "A new app"})

# Example: Update an item
updated_item = client.put("first_app/1", data={"name": "Updated App"})

# Example: Delete an item
client.delete("first_app/1")
```
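The pagination loop above derives the next page number from the response's `next` URL. That step can be factored into a small pure helper (hypothetical name, assuming a `?page=`-style next link as used in the loop):

```python
def next_page_params(next_url):
    """Return query params for the next page, or None when pagination is done."""
    if not next_url:
        return None
    # Mirrors the loop above: take everything after "/?page="
    return {"page": next_url.split("/?page=")[-1]}

params = next_page_params("https://api.example.com/first_app/?page=2")
```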
### 3. Using Async Client
```python
import os
import asyncio
from dotenv import load_dotenv
from pyrest_model_client import AsyncRestApiClient, build_header
from pyrest_model_client.base import BaseAPIModel
load_dotenv()
TOKEN = os.getenv("TOKEN")
BASE_URL = f'{os.getenv("BASE_URL")}:{os.getenv("PORT")}'
async def main():
    header = build_header(token=TOKEN)
    # Use the async client as a context manager
    async with AsyncRestApiClient(base_url=BASE_URL, header=header) as client:
        # Make async requests
        response = await client.get("first_app")
        new_item = await client.post("first_app", data={"name": "Async App"})
        updated = await client.put("first_app/1", data={"name": "Updated"})
        await client.delete("first_app/1")

asyncio.run(main())
```
---
## 🤝 Contributing
Contributions are welcome! Please fork the repo, create a branch, and submit a pull request.
---
## 📄 License
MIT License — see [LICENSE](LICENSE) for details.
| text/markdown | Avi Zaguri | null | null | null | MIT | rest, development | [
"Development Status :: 5 - Production/Stable",
"Intended Audience :: Developers",
"Programming Language :: Python :: 3.12",
"Framework :: Django",
"Operating System :: OS Independent",
"Topic :: Software Development :: Libraries :: Python Modules"
] | [] | null | null | >=3.12 | [] | [] | [] | [
"colorlog>=6.10.1",
"custom-python-logger>=2.0.10",
"email-validator>=2.3.0",
"httpx>=0.28.1",
"pathlib>=1.0.1",
"pre-commit>=4.5.0",
"pydantic>=2.12.4",
"pytest>=9.0.1",
"pytest-asyncio>=1.3.0",
"pytest-mock>=3.15.1",
"pytest-plugins>=1.0.9",
"python-base-toolkit==1.0.2",
"python-dotenv>=1.... | [] | [] | [] | [
"Homepage, https://github.com/aviz92/pyrest-model-client",
"Repository, https://github.com/aviz92/pyrest-model-client",
"Issues, https://github.com/aviz92/pyrest-model-client/issues"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-19T05:41:37.499998 | pyrest_model_client-1.0.3.tar.gz | 9,167 | 52/74/c3e2b1dc99d711b483c3f7687aba41a0fbe2dece7546f66b45da72059b42/pyrest_model_client-1.0.3.tar.gz | source | sdist | null | false | d8e1ad4356e19d43c641d6694a437020 | 34905ba616f380af2ac96088bed4b41deda1ba412051176e30dabd9f29405b5a | 5274c3e2b1dc99d711b483c3f7687aba41a0fbe2dece7546f66b45da72059b42 | null | [
"LICENSE"
] | 266 |
2.4 | cactus-test-definitions | 1.9.4 | CSIP-AUS Client Test Harness Test Definitions | # Cactus Test Definitions
This repository contains YAML test procedure definitions for use with the CSIP-AUS Client/Server Test Harnesses.
This repository also provides Python dataclasses to make it easier for Python code to work with these definitions. In addition, there are also helper functions for creating valid instances of these dataclasses directly from the YAML test procedure definition files.
**Client Test Procedures** can be found in the [cactus_test_definitions/client/procedures/](cactus_test_definitions/client/procedures/) directory
**Server Test Procedures** can be found in the [cactus_test_definitions/server/procedures/](cactus_test_definitions/server/procedures/) directory
## Development / Testing
This repository also contains a small number of tests that verify that test definitions can be successfully converted to their equivalent Python dataclasses.
First install the development and testing dependencies with,
```sh
python -m pip install --editable .[dev,test]
```
Once installed, run the tests with,
```sh
pytest
```
## Server Test Procedure Schema
See [cactus_test_definitions/server/README.md](cactus_test_definitions/server/README.md)
## Client Test Procedure Schema
A `TestProcedure` is a [YAML](https://yaml.org) configuration that describes how a CSIP-AUS Client test case should be implemented by a server / compliance body. It's designed to be human readable but interpretable by a test harness for administering a test. At its most basic level, a `TestProcedure` is a series of "events" that a client must trigger in sequence in order to demonstrate compliance.
Here is a minimalist definition of a test case
```
Description: Minimal Test Case Definition
Category: Explaining Schemas
Classes:
- Descriptive Test Tag 1
- Descriptive Test Tag 2
Preconditions:
  # See definitions below for more info
  actions:
  checks:
Criteria:
  # See definitions below for more info
  checks:
Steps:
  # See definitions below for more info
  STEP_NAME:
```
For how to actually interpret and "run" these test cases against a CSIP-Aus Server implementation, please see [cactus-runner](https://github.com/bsgip/cactus-runner) for a reference implementation.
## Steps/Events Schema
The most basic building block of a `TestProcedure` is a `Step`. Each `Step` will always define an `Event`, which dictates some form of trigger based on client behaviour (e.g. sending a particular request). When an `Event` is triggered, each of its `Action` elements will fire, which will in turn enable/disable additional `Steps` with new events, and so on until the `TestProcedure` is complete. `Event`s can also define a set of `Check` objects (see the section on Checks below) that can prevent an `Event` from triggering if any `Check` returns False/fail.
When a `TestProcedure` first starts, normally only a single `Step` will be active but more can be enabled/disabled in response to client behaviour.
Step Schema:
```
Steps:
  DESCRIPTIVE_TITLE_OF_STEP:  # This is used to reference this step in other parts of this test procedure
    event:
      type:        # string identifier of the event type - see table below
      parameters:  # Any parameters to modify the default behaviour of the event - see table below
      checks:      # A list of Check definitions that will need to be true for this event to trigger - see section on Checks below
    actions:
      # See action schema in the Action section below
```
These are the currently defined `Event` types that a `Step` can define
| **name** | **params** | **description** |
| -------- | ---------- | --------------- |
| `GET-request-received` | `endpoint: str` `serve_request_first: bool/None` | Triggers when a client sends a GET request to the nominated endpoint. Will trigger BEFORE serving request to server unless `serve_request_first` is `True` in which case the event will be triggered AFTER the utility server has served the request (but before being proxied back to the device client) |
| `POST-request-received` | `endpoint: str` `serve_request_first: bool/None` | Triggers when a client sends a POST request to the nominated endpoint. Will trigger BEFORE serving request to server unless `serve_request_first` is `True` in which case the event will be triggered AFTER the utility server has served the request (but before being proxied back to the device client) |
| `PUT-request-received` | `endpoint: str` `serve_request_first: bool/None` | Triggers when a client sends a PUT request to the nominated endpoint. Will trigger BEFORE serving request to server unless `serve_request_first` is `True` in which case the event will be triggered AFTER the utility server has served the request (but before being proxied back to the device client) |
| `DELETE-request-received` | `endpoint: str` `serve_request_first: bool/None` | Triggers when a client sends a DELETE request to the nominated endpoint. Will trigger BEFORE serving request to server unless `serve_request_first` is `True` in which case the event will be triggered AFTER the utility server has served the request (but before being proxied back to the device client) |
| `wait` | `duration_seconds: str` | Triggers `duration_seconds` after being initially activated |
| `proceed` | - | Waits for a proceed signal to be sent from the Cactus UI (i.e. NOT from a client). |
**NOTE:** The `endpoint` parameter used by the various `-request-received` events supports a rudimentary `*` wildcard. This will match a single "component" of the path (a portion delimited by `/` characters).
eg:
* `/edev/*/derp/456/derc` will match `/edev/123/derp/456/derc`
* `/edev/*` will NOT match `/edev/123/derp/456/derc` (the `*` will only match the `123` portion - not EVERYTHING)
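The single-component wildcard semantics above can be sketched as a small matcher (illustrative only, not the harness's actual implementation):

```python
def endpoint_matches(pattern: str, path: str) -> bool:
    """Match an endpoint pattern where '*' matches exactly one '/'-delimited component."""
    p_parts = pattern.strip("/").split("/")
    a_parts = path.strip("/").split("/")
    # A '*' never spans multiple components, so the component counts must agree
    if len(p_parts) != len(a_parts):
        return False
    return all(p == "*" or p == a for p, a in zip(p_parts, a_parts))
```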
### Actions
These are the currently defined `Action` elements that can be included in a test. `Action`s can trigger at the beginning of a test (as a precondition) or whenever a `Step`'s `Event` is triggered.
`Action`s are defined with the following schema, always as a list under an element called `actions`:
```
actions:
  - type:        # string identifier of the action type - see table below
    parameters:  # Any parameters to supply to the executed Action - see table below
```
This is an example of two `Action` elements that will enable a different Step and create a DERControl:
```
actions:
  - type: enable-steps
    parameters:
      steps: NAME_OF_STEP_TO_ENABLE
  - type: create-der-control
    parameters:
      start: $now
      duration_seconds: 300
      opModExpLimW: 2000
| **name** | **params** | **description** |
| -------- | ---------- | --------------- |
| `enable-steps` | `steps: list[str]` | The names of the `Step`s that will be activated |
| `remove-steps` | `steps: list[str]` | The names of the `Step`s that will be deactivated (if active) |
| `finish-test` | None | When activated, the current test will be finished (shutdown) and all `Criteria` evaluated as if the client had requested finalization. |
| `set-default-der-control` | `derp_id: int/None` `opModImpLimW: float/None` `opModExpLimW: float/None` `opModLoadLimW: float/None` `setGradW: float/None` `cancelled: bool/None` `opModStorageTargetW: float/None` | Updates the DefaultDERControl's parameters with the specified values. If `cancelled` is `true`, all unspecified values will be set to None. If `derp_id` is specified, this will apply to the DERProgram with that value, otherwise it will apply to all DERPrograms |
| `create-der-control` | `start: datetime` `duration_seconds: int` `pow_10_multipliers: int/None` `primacy: int/None` `fsa_id: int/None` `randomizeStart_seconds: int/None` `ramp_time_seconds: float/None` `opModEnergize: bool/None` `opModConnect: bool/None` `opModImpLimW: float/None` `opModExpLimW: float/None` `opModGenLimW: float/None` `opModLoadLimW: float/None` `opModFixedW: float/None` `opModStorageTargetW: float/None` | Creates a DERControl with the specified start/duration and values. A new DERProgram will be created with primacy (and optionally under FunctionSetAssignment `fsa_id`) if no such DERProgram already exists |
| `create-der-program` | `primacy: int` `fsa_id: int/None` | Creates a DERProgram with the specified primacy. Will be assigned under FunctionSetAssignment 1 unless `fsa_id` says otherwise. |
| `cancel-active-der-controls` | None | Cancels all active DERControls |
| `set-comms-rate` | `dcap_poll_seconds: int/None` `edev_post_seconds: int/None` `edev_list_poll_seconds: int/None` `fsa_list_poll_seconds: int/None` `derp_list_poll_seconds: int/None` `der_list_poll_seconds: int/None` `mup_post_seconds: int/None` | Updates one or more post/poll rates for various resources. For non list resources, the rate will apply to all resources. Unspecified values will not update existing server values. |
| `communications-status` | `enabled: bool` | If `enabled: false` simulates a full outage for the server (from the perspective of the client). There are many potential outage classes (eg: networking, DNS, software, performance issues) - for consistency the recommended outage simulation is for all requests to be served with a HTTP 500. Defaults to `enabled: true` at test start |
| `edev-registration-links` | `enabled: bool` | If `enabled: false` `EndDevice` entities will NOT encode `RegistrationLink` elements. Defaults to `enabled: true` at test start |
| `register-end-device` | `nmi: str/None` `registration_pin: int/None` `aggregator_lfdi: HexBinary/None` `aggregator_sfdi: int/None` | Creates a new `EndDevice`, optionally with the specified details. `aggregator_lfdi` / `aggregator_sfdi` will ONLY apply to an Aggregator certificate test with the `aggregator_lfdi` being rewritten with the client's PEN. |
### Checks
A `Check` is a boolean test of the state of the utility server. Checks are typically defined as success/failure conditions to be run at the end of a `TestProcedure`.
`Check`s are defined with the following schema, always as a list under an element called `checks`:
```
checks:
  - type:        # string identifier of the check type - see table below
    parameters:  # Any parameters to supply to the executed Check - see table below
```
This is an example of two `Check` elements that will check that all steps are marked as complete and that there has been a DERStatus submitted with specific values:
```
checks:
  - type: all-steps-complete
    parameters: {}
  - type: der-status-contents
    parameters:
      genConnectStatus: 1
```
| **name** | **params** | **description** |
| -------- | ---------- | --------------- |
| `all-steps-complete` | `ignored_steps: list[str]/None` | True if every `Step` in a `TestProcedure` has been deactivated (excepting any ignored steps) |
| `all-notifications-transmitted` | None | True if every transmitted notification (pub/sub) has been received with a HTTP success code response from the subscription notification URI |
| `end-device-contents` | `has_connection_point_id: bool/None` `deviceCategory_anyset: hex/none` `check_lfdi: bool/None` | True if an `EndDevice` is registered and optionally has the specified contents. `has_connection_point_id` (if True) will check whether the active `EndDevice` has `ConnectionPoint.id` specified. `check_lfdi` will do a deep inspection on the supplied LFDI - checking PEN and derived SFDI. |
| `der-settings-contents` | `setGradW: int/None` `doeModesEnabled_set: hex/none` `doeModesEnabled_unset: hex/none` `doeModesEnabled: bool/none` `modesEnabled_set: hex/none` `modesEnabled_unset: hex/none` `setMaxVA: bool/none` `setMaxVar: bool/none` `setMaxW: bool/none` `setMaxChargeRateW: bool/none` `setMaxDischargeRateW: bool/none` `setMaxWh: bool/none` `setMinWh: bool/none` `vppModesEnabled_set: hexbinary/none` `vppModesEnabled_unset: hexbinary/none` `vppModesEnabled: bool/none` `setMaxVarNeg: bool/none` `setMinPFOverExcited: bool/none` `setMinPFUnderExcited: bool/none` | True if a `DERSettings` has been set for the `EndDevice` under test (and if certain elements have been set to certain values) |
| `der-capability-contents` | `doeModesSupported_set: hex/none` `doeModesSupported_unset: hex/none` `doeModesSupported: bool/none` `modesSupported_set: hex/none` `modesSupported_unset: hex/none` `rtgMaxVA: bool/none` `rtgMaxVar: bool/none` `rtgMaxW: bool/none` `rtgMaxChargeRateW: bool/none` `rtgMaxDischargeRateW: bool/none` `rtgMaxWh: bool/none` `vppModesSupported_set: hexbinary/none` `vppModesSupported_unset: hexbinary/none` `vppModesSupported: bool/none` `rtgMaxVarNeg: bool/none` `rtgMinPFOverExcited: bool/none` `rtgMinPFUnderExcited: bool/none` | True if a `DERCapability` has been set for the `EndDevice` under test (and if certain elements have been set to certain values) |
| `der-status-contents` | `genConnectStatus: int/None` `operationalModeStatus: int/None` `alarmStatus: int/None` | True if a `DERStatus` has been set for the `EndDevice` under test (and if certain elements have been set to certain values) |
| `readings-site-active-power` | `minimum_count: int/none` `minimum_level: float/none` `maximum_level: float/none` `window_seconds: uint/none` | True if any MirrorUsagePoint has a MirrorMeterReading for site wide active power with `minimum_count` entries and/or readings all above and/or below `minimum_level`/`maximum_level` respectively; optionally for `window_seconds` |
| `readings-site-reactive-power` | `minimum_count: int/none` `minimum_level: float/none` `maximum_level: float/none` `window_seconds: uint/none` | True if any MirrorUsagePoint has a MirrorMeterReading for site wide reactive power with `minimum_count` entries and/or readings all above and/or below `minimum_level`/`maximum_level` respectively; optionally for `window_seconds` |
| `readings-voltage` | `minimum_count: int/none` `minimum_level: float/none` `maximum_level: float/none` `window_seconds: uint/none` | True if any MirrorUsagePoint has a MirrorMeterReading for site wide voltage OR DER voltage with `minimum_count` entries (at least one is required) |
| `readings-der-active-power` | `minimum_count: int/none` `minimum_level: float/none` `maximum_level: float/none` `window_seconds: uint/none` | True if any MirrorUsagePoint has a MirrorMeterReading for DER active power with `minimum_count` entries and/or readings all above and/or below `minimum_level`/`maximum_level` respectively; optionally for `window_seconds` |
| `readings-der-reactive-power` | `minimum_count: int/none` `minimum_level: float/none` `maximum_level: float/none` `window_seconds: uint/none` | True if any MirrorUsagePoint has a MirrorMeterReading for DER reactive power with `minimum_count` entries and/or readings all above and/or below `minimum_level`/`maximum_level` respectively; optionally for `window_seconds` |
| `response-contents` | `latest: bool/None` `status: int/None` `all: bool/None` | True if at least one received Response matches the filter. `latest` will only consider the most recent received Response. `all` will look for a nominated status match for every `DERControl` |
| `subscription-contents` | `subscribed_resource: str` | True if a subscription to `subscribed_resource` has been created |
The following are csipaus.org/ns/v1.3-beta/storage extension specific checks implemented
| **name** | **params** | **description** |
| -------- | ---------- | --------------- |
| `readings-der-stored-energy` | `minimum_count: int/none` `minimum_level: float/none` `maximum_level: float/none` `window_seconds: uint/none` | True if any MirrorUsagePoint has a MirrorMeterReading for DER stored energy with `minimum_count` entries and/or readings all above and/or below `minimum_level`/`maximum_level` respectively; optionally for `window_seconds` |
<br>
#### Hexbinary Parameters for Bitwise Operations
`doeModesEnabled_set`, `modesEnabled_set`, `doeModesSupported_set` and `modesSupported_set` all expect a hexbinary string whose 1 bits identify the bits that must equal one, e.g. `doeModesEnabled_set: "03"` tests that at least bits 0 and 1 are set high (==1) in the given `DERSetting.doeModesEnabled`, ignoring all others.
The corresponding `_unset` parameters perform the inverse operation: every bit set to 1 in the mask is expected to correspond to a zero in the value, e.g. `doeModesEnabled_unset: "03"` tests that at least bits 0 and 1 are set low (==0) in the given `DERSetting.doeModesEnabled`, ignoring all others.
If a common bit is set between a `_set` and `_unset` corresponding pair of parameters, the check will always fail.
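The `_set`/`_unset` semantics can be illustrated with plain bitwise operations (a sketch, not the harness's code; function names are hypothetical):

```python
def bits_set(value: int, mask_hex: str) -> bool:
    """True if every bit that is 1 in the hexbinary mask is also 1 in value."""
    mask = int(mask_hex, 16)
    return (value & mask) == mask

def bits_unset(value: int, mask_hex: str) -> bool:
    """True if every bit that is 1 in the hexbinary mask is 0 in value."""
    mask = int(mask_hex, 16)
    return (value & mask) == 0

# doeModesEnabled_set: "03" -> bits 0 and 1 must be high; other bits are ignored.
# No value can satisfy both checks when the _set and _unset masks share a bit.
```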
### Parameter Variable Resolution
Any `parameter` element expects a series of name/value pairs to pass to the "parent" `Action`, `Check` or `Event`. For example:
```
parameters:
  number_param: 123
  text_param: Text Content
  date_param: 2020-01-02 03:04:05Z
```
But placeholder variables may also be used to reference things that aren't known until the test is underway. For example, the following would instead set `number_param` to the current DERSetting.setMaxW supplied by the client while `date_param` would be set to the moment in time that the `Action`, `Check` or `Event` is being evaluated.
```
parameters:
  number_param: $setMaxW
  text_param: Text Content
  date_param: $now
```
The following are all the `NamedVariable` types currently implemented
| **name** | **description** |
| -------- | --------------- |
| `$now` | Resolves to the current moment in time (timezone aware). Returns a datetime |
| `$this` | Self reference to the parameter that is supplied as the key for the parameter check. Must have a corresponding NamedVariable that it can resolve to, derived from the key, e.g. `setMaxW` |
| `$setMaxW` | Resolves to the last supplied value to `DERSetting.setMaxW` as a number. Can raise exceptions if this value hasn't been set (which will trigger a test evaluation to fail) |
| `$setMaxVA` | Resolves to the last supplied value to `DERSetting.setMaxVA` as a number. Can raise exceptions if this value hasn't been set (which will trigger a test evaluation to fail) |
| `$setMaxVar` | Resolves to the last supplied value to `DERSetting.setMaxVar` as a number. Can raise exceptions if this value hasn't been set (which will trigger a test evaluation to fail) |
| `$setMaxVarNeg` | Resolves to the last supplied value to `DERSetting.setMaxVarNeg` as a number. Can raise exceptions if this value hasn't been set (which will trigger a test evaluation to fail) |
| `$setMaxChargeRateW` | Resolves to the last supplied value to `DERSetting.setMaxChargeRateW` as a number. Can raise exceptions if this value hasn't been set (which will trigger a test evaluation to fail) |
| `$setMaxDischargeRateW` | Resolves to the last supplied value to `DERSetting.setMaxDischargeRateW` as a number. Can raise exceptions if this value hasn't been set (which will trigger a test evaluation to fail) |
| `$setMinPFOverExcited` | Resolves to the last supplied value to `DERSetting.setMinPFOverExcited` as a number. Can raise exceptions if this value hasn't been set (which will trigger a test evaluation to fail) |
| `$setMinPFUnderExcited` | Resolves to the last supplied value to `DERSetting.setMinPFUnderExcited` as a number. Can raise exceptions if this value hasn't been set (which will trigger a test evaluation to fail) |
| `$setMaxWh` | Resolves to the last supplied value to `DERSetting.setMaxWh` as a number. Can raise exceptions if this value hasn't been set (which will trigger a test evaluation to fail) |
| `$rtgMaxVA` | Resolves to last supplied `DERCapability.rtgMaxVA` as a number. Raises exceptions if value hasn't been set, causing test to fail.
| `$rtgMaxVar` | Resolves to last supplied `DERCapability.rtgMaxVar` as a number. Raises exceptions if value hasn't been set, causing test to fail.
| `$rtgMaxVarNeg` | Resolves to last supplied `DERCapability.rtgMaxVarNeg` as a number. Raises exceptions if value hasn't been set, causing test to fail.
| `$rtgMaxW` | Resolves to last supplied `DERCapability.rtgMaxW` as a number. Raises exceptions if value hasn't been set, causing test to fail.
| `$rtgMaxChargeRateW` | Resolves to last supplied `DERCapability.rtgMaxChargeRateW` as a number. Raises exceptions if value hasn't been set, causing test to fail.
| `$rtgMaxDischargeRateW` | Resolves to last supplied `DERCapability.rtgMaxDischargeRateW` as a number. Raises exceptions if value hasn't been set, causing test to fail.
| `$rtgMinPFOverExcited` | Resolves to last supplied `DERCapability.rtgMinPFOverExcited` as a number. Raises exceptions if value hasn't been set, causing test to fail.
| `$rtgMinPFUnderExcited` | Resolves to last supplied `DERCapability.rtgMinPFUnderExcited` as a number. Raises exceptions if value hasn't been set, causing test to fail.
| `$rtgMaxWh` | Resolves to last supplied `DERCapability.rtgMaxWh` as a number. Raises exceptions if value hasn't been set, causing test to fail.
The following are csipaus.org/ns/v1.3-beta/storage extension specific `NamedVariable` types implemented
| **name** | **description** |
| -------- | --------------- |
| `$setMinWh` | Resolves to the last supplied value to `DERSetting.setMinWh` as a number. Can raise exceptions if this value hasn't been set (which will trigger a test evaluation to fail)
Placeholder variables can also be used in some rudimentary expressions to make variations on the returned value. For example:
```
parameters:
  number_param: $(setMaxW / 2)
  text_param: Text Content
  date_param: $(now - '5 mins')
  setMaxW: $(this < rtgMaxW)
  setMaxVA: $(this >= rtgMaxWh)
```
These resolve similarly to the above, except that `number_param` will be half of the last supplied value of `DERSetting.setMaxW` and `date_param` will be set to 5 minutes before now.
The `setMaxW` parameter resolves to the boolean result of whether `DERSettings.setMaxW` is less than `DERCapability.rtgMaxW`.
The `setMaxVA` parameter resolves to the boolean result of whether `DERSettings.setMaxVA` is greater than or equal to `DERCapability.rtgMaxWh`.
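A toy resolver for these placeholder forms helps show the intended evaluation (illustrative only; it handles `$name` plus simple binary expressions with numeric or variable operands, and the real grammar lives in the test harness):

```python
def resolve(token, variables):
    """Resolve "$name" and simple "$(name op operand)" expressions against a variable map."""
    if token.startswith("$(") and token.endswith(")"):
        left, op, right = token[2:-1].split(" ", 2)
        lhs = variables[left]
        # Operand may itself be a variable name, otherwise treat it as a number
        rhs = variables[right] if right in variables else float(right)
        ops = {
            "/": lambda a, b: a / b,
            "<": lambda a, b: a < b,
            ">=": lambda a, b: a >= b,
        }
        return ops[op](lhs, rhs)
    if token.startswith("$"):
        return variables[token[1:]]
    return token  # plain literal, passed through unchanged

half = resolve("$(setMaxW / 2)", {"setMaxW": 5000})
ok = resolve("$(this < rtgMaxW)", {"this": 4000, "rtgMaxW": 5000})
```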
| text/markdown | null | Mike Turner <mike.turner@anu.edu.au>, Josh Vote <joshua.vote@anu.edu.au> | null | Mike Turner <mike.turner@anu.edu.au>, Josh Vote <joshua.vote@anu.edu.au> | null | CSIP-AUS, client, testing, definitions | [
"Development Status :: 3 - Alpha",
"Environment :: Console",
"Intended Audience :: Developers",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3.12",
"Topic :: Software Development",
"Typing :: Typed"
] | [] | null | null | >=3.12 | [] | [] | [] | [
"pyyaml<7,>=6.0.2",
"pyyaml-include<3,>=2.2",
"dataclass-wizard<1,==0.35.0",
"bandit; extra == \"dev\"",
"black; extra == \"dev\"",
"flake8; extra == \"dev\"",
"isort; extra == \"dev\"",
"mccabe; extra == \"dev\"",
"mypy; extra == \"dev\"",
"tox; extra == \"dev\"",
"python-dotenv[cli]; extra == ... | [] | [] | [] | [
"Homepage, https://github.com/bsgip/cactus-test-definitions",
"Documentation, https://github.com/bsgip/cactus-test-definitions/blob/main/README.md",
"Repository, https://github.com/bsgip/cactus-test-definitions.git",
"Issues, https://github.com/bsgip/cactus-test-definitions/issues",
"Changelog, https://gith... | twine/6.2.0 CPython/3.12.11 | 2026-02-19T05:40:58.138140 | cactus_test_definitions-1.9.4.tar.gz | 136,734 | 84/e2/568e9889ac2476f1b9832253e39f07f78ab2d10c99767bedd7574fda15b0/cactus_test_definitions-1.9.4.tar.gz | source | sdist | null | false | 79d275839ba9a39d6da5d39e311195cd | 8340dfed5982360556c6f21a361505a0cd84eb9f9cf25901db617b9ce3f74d76 | 84e2568e9889ac2476f1b9832253e39f07f78ab2d10c99767bedd7574fda15b0 | null | [
"LICENSE.txt"
] | 302 |
2.4 | snappi | 1.46.0 | The Snappi Open Traffic Generator Python Package | # 
[](https://en.wikipedia.org/wiki/MIT_License)
[](https://www.repostatus.org/#active)
[](https://github.com/open-traffic-generator/snappi/actions)
[](https://lgtm.com/projects/g/open-traffic-generator/snappi/alerts/)
[](https://lgtm.com/projects/g/open-traffic-generator/snappi/context:python)
[](https://pypi.org/project/snappi)
[](https://pypi.python.org/pypi/snappi)
Test scripts written in `snappi`, an auto-generated python SDK, can be executed against any traffic generator conforming to [Open Traffic Generator API](https://github.com/open-traffic-generator/models).
[Ixia-c](https://github.com/open-traffic-generator/ixia-c) is one such reference implementation of Open Traffic Generator API.
> The repository is under active development and is subject to updates. All efforts will be made to keep the updates backwards compatible.
## Setup Client
```sh
python -m pip install --upgrade snappi
```
## Start Testing
```python
import datetime
import time
import snappi
import pytest
@pytest.mark.example
def test_quickstart():
    # Create a new API handle to make API calls against OTG
    # with HTTP as default transport protocol
    api = snappi.api(location="https://localhost:8443")
    # Create a new traffic configuration that will be set on OTG
    config = api.config()
    # Add a test port to the configuration
    ptx = config.ports.add(name="ptx", location="veth-a")
    # Configure a flow and set previously created test port as one of endpoints
    flow = config.flows.add(name="flow")
    flow.tx_rx.port.tx_name = ptx.name
    # and enable tracking flow metrics
    flow.metrics.enable = True
    # Configure number of packets to transmit for previously configured flow
    flow.duration.fixed_packets.packets = 100
    # and fixed byte size of all packets in the flow
    flow.size.fixed = 128
    # Configure protocol headers for all packets in the flow
    eth, ip, udp, cus = flow.packet.ethernet().ipv4().udp().custom()
    eth.src.value = "00:11:22:33:44:55"
    eth.dst.value = "00:11:22:33:44:66"
    ip.src.value = "10.1.1.1"
    ip.dst.value = "20.1.1.1"
    # Configure repeating patterns for source and destination UDP ports
    udp.src_port.values = [5010, 5015, 5020, 5025, 5030]
    udp.dst_port.increment.start = 6010
    udp.dst_port.increment.step = 5
    udp.dst_port.increment.count = 5
    # Configure custom bytes (zero-padded hex string) in payload
    cus.bytes = "".join(["{:02x}".format(c) for c in b"..QUICKSTART SNAPPI.."])
    # Optionally, print JSON representation of config
    print("Configuration: ", config.serialize(encoding=config.JSON))
    # Push traffic configuration constructed so far to OTG
    api.set_config(config)
    # Start transmitting the packets from configured flow
    ts = api.transmit_state()
    ts.state = ts.START
    api.set_transmit_state(ts)
    # Fetch metrics for configured flow
    req = api.metrics_request()
    req.flow.flow_names = [flow.name]
    # and keep polling until either expectation is met or deadline exceeds
    start = datetime.datetime.now()
    while True:
        metrics = api.get_metrics(req)
        if (datetime.datetime.now() - start).seconds > 10:
            raise Exception("deadline exceeded")
        # print YAML representation of flow metrics
        print(metrics)
        if metrics.flow_metrics[0].transmit == metrics.flow_metrics[0].STOPPED:
            break
        time.sleep(0.1)
```
| text/markdown | null | Keysight Technologies <andy.balogh@keysight.com> | null | null | null | snappi, testing, open traffic generator, automation | [
"Development Status :: 5 - Production/Stable",
"Intended Audience :: Developers",
"Topic :: Software Development :: Testing :: Traffic Generation",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.8",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: ... | [] | null | null | <4,>=3.8 | [] | [] | [] | [
"protobuf~=5.29.5",
"PyYAML",
"grpcio-tools~=1.70.0",
"grpcio~=1.70.0",
"requests",
"urllib3",
"semantic_version",
"snappi_ixnetwork==1.42.1; extra == \"ixnetwork\"",
"snappi_trex; extra == \"trex\"",
"pytest; extra == \"testing\"",
"flask; extra == \"testing\"",
"opentelemetry-api==1.17.0; py... | [] | [] | [] | [
"Repository, https://github.com/open-traffic-generator/snappi"
] | twine/6.1.0 CPython/3.8.18 | 2026-02-19T05:39:56.741438 | snappi-1.46.0.tar.gz | 549,585 | 17/ef/02fcebfbada961b7e45911fa46e8d4cf8e66f0a2a53a115a5eb7c303d2d0/snappi-1.46.0.tar.gz | source | sdist | null | false | 3e924ab2348660343f9f0c1d579fd9e5 | 0f29460a6cab8dfdc5b274fa4b5efbf07109ad21c6b961d9cdca0a7dab53bbde | 17ef02fcebfbada961b7e45911fa46e8d4cf8e66f0a2a53a115a5eb7c303d2d0 | MIT | [
"LICENSE"
] | 4,137 |
2.4 | o2h | 2.1.1 | Publish obsidian to hexo github page | # H2O2H
This repository helps publish Obsidian notes to a Hexo GitHub page more conveniently.
## Installing
```bash
pip install o2h
```
## Getting Started
After the o2h package is installed, the CLI command `obs2hexo` is available. The help info is listed here.
```bash
obs2hexo -h
usage: obs2hexo [-h] [-o OUTPUT] [-c CATEGORY] [-p | --picgo | --no-picgo] filename

positional arguments:
  filename              get the obs markdown filename

options:
  -h, --help            show this help message and exit
  -o OUTPUT, --output OUTPUT
                        get output dir
  -c CATEGORY, --category CATEGORY
                        get category
  -p, --picgo, --no-picgo
                        use picgo
```
As GitHub repository space is limited, it is not recommended to upload all the pictures to the repository. So I provide an alternative option `-p` to upload pictures with [picgo](https://github.com/PicGo/PicGo-Core).
The default output directory is `/tmp/o2houtput`.
## Configuration
o2h cannot find the location of an Obsidian Vault automatically, so it relies on a configuration file, which should be created at `$HOME/.config/o2h/config.json`.
Here is an example of my config file:
```json
{
"obsidian_target": [
"$HOME/Documents/Work/Notes"
]
}
```
If there are multiple Obsidian Vaults on your system, you can add them like this:
```json
{
"obsidian_target": [
"path_1",
"path_2",
...,
"path_n"
]
}
```
## Example
Here are some examples:
```bash
# translate a.md to standard markdown
# the category is "Skill"
# all the pictures are copied to a new folder named 'a' in the current directory
obs2hexo -o $PWD -c Skill a.md
# translate b.md to /tmp/o2houtput; all the pictures are uploaded by picgo
obs2hexo -c Develop -p b.md
```
| text/markdown | null | Chivier Humber <chivier.humber@outlook.com> | null | null | null | feed, reader, tutorial | [
"License :: OSI Approved :: MIT License",
"Programming Language :: Python",
"Programming Language :: Python :: 3"
] | [] | null | null | >=3.9 | [] | [] | [] | [
"black; extra == \"dev\"",
"bumpver; extra == \"dev\"",
"isort; extra == \"dev\"",
"pip-tools; extra == \"dev\"",
"pytest; extra == \"dev\""
] | [] | [] | [] | [
"Homepage, https://github.com/Chivier/H2O2H"
] | uv/0.8.0 | 2026-02-19T05:37:38.659743 | o2h-2.1.1.tar.gz | 6,381 | 95/06/0e5a1c3812bd9c20b4f931715e95c3a0ac34c2bb752c63fdd72d906f7cb8/o2h-2.1.1.tar.gz | source | sdist | null | false | 798ac95a050496adeabb1039db3ab0a2 | 4fbbffd6a121e2b673f3b4b16c4440dd2706472b2f2224d0c9bc483cb9e94b7f | 95060e5a1c3812bd9c20b4f931715e95c3a0ac34c2bb752c63fdd72d906f7cb8 | null | [] | 277 |
2.4 | crewai-tools | 1.10.0a1 | Set of tools for the crewAI framework | <div align="center">

<div align="left">
# CrewAI Tools
Empower your CrewAI agents with powerful, customizable tools to elevate their capabilities and tackle sophisticated, real-world tasks.
CrewAI Tools provide the essential functionality to extend your agents, helping you rapidly enhance your automations with reliable, ready-to-use tools or custom-built solutions tailored precisely to your needs.
---
## Quick Links
[Homepage](https://www.crewai.com/) | [Documentation](https://docs.crewai.com/) | [Examples](https://github.com/crewAIInc/crewAI-examples) | [Community](https://community.crewai.com/)
---
## Available Tools
CrewAI provides an extensive collection of powerful tools ready to enhance your agents:
- **File Management**: `FileReadTool`, `FileWriteTool`
- **Web Scraping**: `ScrapeWebsiteTool`, `SeleniumScrapingTool`
- **Database Integrations**: `MySQLSearchTool`
- **Vector Database Integrations**: `MongoDBVectorSearchTool`, `QdrantVectorSearchTool`, `WeaviateVectorSearchTool`
- **API Integrations**: `SerperApiTool`, `EXASearchTool`
- **AI-powered Tools**: `DallETool`, `VisionTool`, `StagehandTool`
And many more robust tools to simplify your agent integrations.
---
## Creating Custom Tools
CrewAI offers two straightforward approaches to creating custom tools:
### Subclassing `BaseTool`
Define your tool by subclassing:
```python
from crewai.tools import BaseTool
class MyCustomTool(BaseTool):
    name: str = "Tool Name"
    description: str = "Detailed description here."

    def _run(self, *args, **kwargs):
        # Your tool logic here
        ...
```
### Using the `tool` Decorator
Quickly create lightweight tools using decorators:
```python
from crewai import tool
@tool("Tool Name")
def my_custom_function(input):
    # Tool logic here
    return output
```
---
## CrewAI Tools and MCP
CrewAI Tools supports the Model Context Protocol (MCP). It gives you access to thousands of tools from the hundreds of MCP servers out there built by the community.
Before you start using MCP with CrewAI tools, you need to install the `mcp` extra dependencies:
```bash
pip install crewai-tools[mcp]
# or
uv add crewai-tools --extra mcp
```
To quickly get started with MCP in CrewAI you have 2 options:
### Option 1: Fully managed connection
In this scenario we use a context manager (`with` statement) to start and stop the connection with the MCP server.
This is done in the background and you only get to interact with the CrewAI tools corresponding to the MCP server's tools.
For an STDIO based MCP server:
```python
import os

from mcp import StdioServerParameters
from crewai_tools import MCPServerAdapter

serverparams = StdioServerParameters(
    command="uvx",
    args=["--quiet", "pubmedmcp@0.1.3"],
    env={"UV_PYTHON": "3.12", **os.environ},
)

with MCPServerAdapter(serverparams) as tools:
    # tools is now a list of CrewAI Tools matching 1:1 with the MCP server's tools
    agent = Agent(..., tools=tools)
    task = Task(...)
    crew = Crew(..., agents=[agent], tasks=[task])
    crew.kickoff(...)
```
For an SSE based MCP server:
```python
serverparams = {"url": "http://localhost:8000/sse"}

with MCPServerAdapter(serverparams) as tools:
    # tools is now a list of CrewAI Tools matching 1:1 with the MCP server's tools
    agent = Agent(..., tools=tools)
    task = Task(...)
    crew = Crew(..., agents=[agent], tasks=[task])
    crew.kickoff(...)
```
### Option 2: More control over the MCP connection
If you need more control over the MCP connection, you can instantiate the `MCPServerAdapter` into an `mcp_server_adapter` object which can be used to manage the connection with the MCP server and access the available tools.
**Important**: in this case you need to call `mcp_server_adapter.stop()` to make sure the connection is correctly stopped. We recommend using a `try ... finally` block to make sure `.stop()` is called even in case of errors.
Here is the same example for an STDIO MCP Server:
```python
import os

from mcp import StdioServerParameters
from crewai_tools import MCPServerAdapter

serverparams = StdioServerParameters(
    command="uvx",
    args=["--quiet", "pubmedmcp@0.1.3"],
    env={"UV_PYTHON": "3.12", **os.environ},
)

try:
    mcp_server_adapter = MCPServerAdapter(serverparams)
    tools = mcp_server_adapter.tools
    # tools is now a list of CrewAI Tools matching 1:1 with the MCP server's tools
    agent = Agent(..., tools=tools)
    task = Task(...)
    crew = Crew(..., agents=[agent], tasks=[task])
    crew.kickoff(...)
# ** important ** don't forget to stop the connection
finally:
    mcp_server_adapter.stop()
```
And finally the same thing but for an SSE MCP Server:
```python
from crewai_tools import MCPServerAdapter

serverparams = {"url": "http://localhost:8000/sse"}

try:
    mcp_server_adapter = MCPServerAdapter(serverparams)
    tools = mcp_server_adapter.tools
    # tools is now a list of CrewAI Tools matching 1:1 with the MCP server's tools
    agent = Agent(..., tools=tools)
    task = Task(...)
    crew = Crew(..., agents=[agent], tasks=[task])
    crew.kickoff(...)
# ** important ** don't forget to stop the connection
finally:
    mcp_server_adapter.stop()
```
### Considerations & Limitations
#### Staying Safe with MCP
Always make sure that you trust an MCP server before using it. Using an STDIO server will execute code on your machine, and SSE is not a silver bullet either: a malicious MCP server can still inject content into your application.
#### Limitations
* At this time we only support tools from MCP servers, not other types of primitives such as prompts and resources.
* We only return the first text output returned by the MCP server tool, using `.content[0].text`.
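As a rough mental model of that second limitation (illustrative only, not the actual adapter code), picking the first text output looks like this:

```python
# Illustrative model of the "first text output" behavior described above;
# `result` stands in for an MCP tool call result whose `content` is a
# list of {"type": ..., "text": ...} items.
def first_text_output(result: dict) -> str:
    # mirrors `.content[0].text`: only the first content item is used
    return result["content"][0]["text"]
```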
---
## Why Use CrewAI Tools?
- **Simplicity & Flexibility**: Easy-to-use yet powerful enough for complex workflows.
- **Rapid Integration**: Seamlessly incorporate external services, APIs, and databases.
- **Enterprise Ready**: Built for stability, performance, and consistent results.
---
## Contribution Guidelines
We welcome contributions from the community!
1. Fork and clone the repository.
2. Create a new branch (`git checkout -b feature/my-feature`).
3. Commit your changes (`git commit -m 'Add my feature'`).
4. Push your branch (`git push origin feature/my-feature`).
5. Open a pull request.
---
## Developer Quickstart
```shell
pip install crewai[tools]
```
### Development Setup
- Install dependencies: `uv sync`
- Run tests: `uv run pytest`
- Run static type checking: `uv run pyright`
- Set up pre-commit hooks: `pre-commit install`
---
## Support and Community
Join our rapidly growing community and receive real-time support:
- [Discourse](https://community.crewai.com/)
- [Open an Issue](https://github.com/crewAIInc/crewAI/issues)
Build smarter, faster, and more powerful AI solutions—powered by CrewAI Tools.
| text/markdown | null | João Moura <joaomdmoura@gmail.com> | null | null | null | null | [] | [] | null | null | <3.14,>=3.10 | [] | [] | [] | [
"beautifulsoup4~=4.13.4",
"crewai==1.10.0a1",
"docker~=7.1.0",
"lancedb~=0.5.4",
"pymupdf~=1.26.6",
"python-docx~=1.2.0",
"pytube~=15.0.0",
"requests~=2.32.5",
"tiktoken~=0.8.0",
"youtube-transcript-api~=1.2.2",
"langchain-apify<1.0.0,>=0.1.2; extra == \"apify\"",
"beautifulsoup4>=4.12.3; extr... | [] | [] | [] | [
"Homepage, https://crewai.com",
"Repository, https://github.com/crewAIInc/crewAI",
"Documentation, https://docs.crewai.com"
] | uv/0.8.4 | 2026-02-19T05:36:41.352498 | crewai_tools-1.10.0a1.tar.gz | 853,716 | 63/78/559fd38a2b69f859059b6af462d61237208ec24657e3b142a8d2a91bef5b/crewai_tools-1.10.0a1.tar.gz | source | sdist | null | false | bddbf9933f62dc0ef14d810eb7836165 | 11c0991396d030362323d16abc354d2e1d5b1dd933027a926469fe78aff983c5 | 6378559fd38a2b69f859059b6af462d61237208ec24657e3b142a8d2a91bef5b | null | [] | 587 |
2.4 | crewai-files | 1.10.0a1 | File handling utilities for CrewAI multimodal inputs | # crewai-files
File handling utilities for CrewAI multimodal inputs.
## Supported File Types
- `ImageFile` - PNG, JPEG, GIF, WebP
- `PDFFile` - PDF documents
- `TextFile` - Plain text files
- `AudioFile` - MP3, WAV, FLAC, OGG, M4A
- `VideoFile` - MP4, WebM, MOV, AVI
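A rough sketch of how extension-based auto-detection could map a path to one of these types. This is illustrative only; the real `File` resolution in crewai-files may also inspect file content:

```python
import pathlib

# Illustrative mapping from file extension to the type names above;
# not the actual crewai-files implementation.
EXTENSION_TO_TYPE = {
    ".png": "ImageFile", ".jpeg": "ImageFile", ".jpg": "ImageFile",
    ".gif": "ImageFile", ".webp": "ImageFile",
    ".pdf": "PDFFile",
    ".txt": "TextFile",
    ".mp3": "AudioFile", ".wav": "AudioFile", ".flac": "AudioFile",
    ".ogg": "AudioFile", ".m4a": "AudioFile",
    ".mp4": "VideoFile", ".webm": "VideoFile", ".mov": "VideoFile",
    ".avi": "VideoFile",
}

def detect_file_type(path: str) -> str:
    # fall back to TextFile for unknown extensions
    return EXTENSION_TO_TYPE.get(pathlib.Path(path).suffix.lower(), "TextFile")
```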
## Usage
```python
from crewai_files import File, ImageFile, PDFFile
# Auto-detect file type
file = File(source="document.pdf") # Resolves to PDFFile
# Or use specific types
image = ImageFile(source="chart.png")
pdf = PDFFile(source="report.pdf")
```
### Passing Files to Crews
```python
crew.kickoff(
    input_files={"chart": ImageFile(source="chart.png")}
)
```
### Passing Files to Tasks
```python
task = Task(
    description="Analyze the chart",
    expected_output="Analysis",
    agent=agent,
    input_files=[ImageFile(source="chart.png")],
)
```
| text/markdown | null | Greyson LaLonde <greyson@crewai.com> | null | null | null | null | [] | [] | null | null | <3.14,>=3.10 | [] | [] | [] | [
"aiocache~=0.12.3",
"aiofiles~=24.1.0",
"av~=13.0.0",
"pillow~=10.4.0",
"pypdf~=4.0.0",
"python-magic>=0.4.27",
"tinytag~=1.10.0"
] | [] | [] | [] | [] | uv/0.8.4 | 2026-02-19T05:36:37.694193 | crewai_files-1.10.0a1.tar.gz | 678,451 | 4a/9c/0104f8b89cc04a10563279bebb79993d12f6f6d8a30b1d6ac9a6d9a16ec3/crewai_files-1.10.0a1.tar.gz | source | sdist | null | false | 50d75067c600a05fc32fd743521c7df9 | 6b968632a2aad6740b7c1796d4388aa6bfed23b1b0bb0bbb67540c13e9add087 | 4a9c0104f8b89cc04a10563279bebb79993d12f6f6d8a30b1d6ac9a6d9a16ec3 | null | [] | 255 |
2.4 | crewai | 1.10.0a1 | Cutting-edge framework for orchestrating role-playing, autonomous AI agents. By fostering collaborative intelligence, CrewAI empowers agents to work together seamlessly, tackling complex tasks. | <p align="center">
<a href="https://github.com/crewAIInc/crewAI">
<img src="docs/images/crewai_logo.png" width="600px" alt="Open source Multi-AI Agent orchestration framework">
</a>
</p>
<p align="center" style="display: flex; justify-content: center; gap: 20px; align-items: center;">
<a href="https://trendshift.io/repositories/11239" target="_blank">
<img src="https://trendshift.io/api/badge/repositories/11239" alt="crewAIInc%2FcrewAI | Trendshift" style="width: 250px; height: 55px;" width="250" height="55"/>
</a>
</p>
<p align="center">
<a href="https://crewai.com">Homepage</a>
·
<a href="https://docs.crewai.com">Docs</a>
·
<a href="https://app.crewai.com">Start Cloud Trial</a>
·
<a href="https://blog.crewai.com">Blog</a>
·
<a href="https://community.crewai.com">Forum</a>
</p>
<p align="center">
<a href="https://github.com/crewAIInc/crewAI">
<img src="https://img.shields.io/github/stars/crewAIInc/crewAI" alt="GitHub Repo stars">
</a>
<a href="https://github.com/crewAIInc/crewAI/network/members">
<img src="https://img.shields.io/github/forks/crewAIInc/crewAI" alt="GitHub forks">
</a>
<a href="https://github.com/crewAIInc/crewAI/issues">
<img src="https://img.shields.io/github/issues/crewAIInc/crewAI" alt="GitHub issues">
</a>
<a href="https://github.com/crewAIInc/crewAI/pulls">
<img src="https://img.shields.io/github/issues-pr/crewAIInc/crewAI" alt="GitHub pull requests">
</a>
<a href="https://opensource.org/licenses/MIT">
<img src="https://img.shields.io/badge/License-MIT-green.svg" alt="License: MIT">
</a>
</p>
<p align="center">
<a href="https://pypi.org/project/crewai/">
<img src="https://img.shields.io/pypi/v/crewai" alt="PyPI version">
</a>
<a href="https://pypi.org/project/crewai/">
<img src="https://img.shields.io/pypi/dm/crewai" alt="PyPI downloads">
</a>
<a href="https://twitter.com/crewAIInc">
<img src="https://img.shields.io/twitter/follow/crewAIInc?style=social" alt="Twitter Follow">
</a>
</p>
### Fast and Flexible Multi-Agent Automation Framework
> CrewAI is a lean, lightning-fast Python framework built entirely from scratch—completely **independent of LangChain or other agent frameworks**.
> It empowers developers with both high-level simplicity and precise low-level control, ideal for creating autonomous AI agents tailored to any scenario.
- **CrewAI Crews**: Optimize for autonomy and collaborative intelligence.
- **CrewAI Flows**: Enable granular, event-driven control and single LLM calls for precise task orchestration, with native support for Crews
With over 100,000 developers certified through our community courses at [learn.crewai.com](https://learn.crewai.com), CrewAI is rapidly becoming the
standard for enterprise-ready AI automation.
# CrewAI AMP Suite
CrewAI AMP Suite is a comprehensive bundle tailored for organizations that require secure, scalable, and easy-to-manage agent-driven automation.
You can try one part of the suite, the [Crew Control Plane](https://app.crewai.com), for free.
## Crew Control Plane Key Features:
- **Tracing & Observability**: Monitor and track your AI agents and workflows in real-time, including metrics, logs, and traces.
- **Unified Control Plane**: A centralized platform for managing, monitoring, and scaling your AI agents and workflows.
- **Seamless Integrations**: Easily connect with existing enterprise systems, data sources, and cloud infrastructure.
- **Advanced Security**: Built-in robust security and compliance measures ensuring safe deployment and management.
- **Actionable Insights**: Real-time analytics and reporting to optimize performance and decision-making.
- **24/7 Support**: Dedicated enterprise support to ensure uninterrupted operation and quick resolution of issues.
- **On-premise and Cloud Deployment Options**: Deploy CrewAI AMP on-premise or in the cloud, depending on your security and compliance requirements.
CrewAI AMP is designed for enterprises seeking a powerful, reliable solution to transform complex business processes into efficient,
intelligent automations.
## Table of contents
- [Why CrewAI?](#why-crewai)
- [Getting Started](#getting-started)
- [Key Features](#key-features)
- [Understanding Flows and Crews](#understanding-flows-and-crews)
- [CrewAI vs LangGraph](#how-crewai-compares)
- [Examples](#examples)
- [Quick Tutorial](#quick-tutorial)
- [Write Job Descriptions](#write-job-descriptions)
- [Trip Planner](#trip-planner)
- [Stock Analysis](#stock-analysis)
- [Using Crews and Flows Together](#using-crews-and-flows-together)
- [Connecting Your Crew to a Model](#connecting-your-crew-to-a-model)
- [How CrewAI Compares](#how-crewai-compares)
- [Frequently Asked Questions (FAQ)](#frequently-asked-questions-faq)
- [Contribution](#contribution)
- [Telemetry](#telemetry)
- [License](#license)
## Why CrewAI?
<div align="center" style="margin-bottom: 30px;">
<img src="docs/images/asset.png" alt="CrewAI Logo" width="100%">
</div>
CrewAI unlocks the true potential of multi-agent automation, delivering the best-in-class combination of speed, flexibility, and control with either Crews of AI Agents or Flows of Events:
- **Standalone Framework**: Built from scratch, independent of LangChain or any other agent framework.
- **High Performance**: Optimized for speed and minimal resource usage, enabling faster execution.
- **Flexible Low Level Customization**: Complete freedom to customize at both high and low levels - from overall workflows and system architecture to granular agent behaviors, internal prompts, and execution logic.
- **Ideal for Every Use Case**: Proven effective for both simple tasks and highly complex, real-world, enterprise-grade scenarios.
- **Robust Community**: Backed by a rapidly growing community of over **100,000 certified** developers offering comprehensive support and resources.
CrewAI empowers developers and enterprises to confidently build intelligent automations, bridging the gap between simplicity, flexibility, and performance.
## Getting Started
Setup and run your first CrewAI agents by following this tutorial.
[](https://www.youtube.com/watch?v=-kSOTtYzgEw "CrewAI Getting Started Tutorial")
### Learning Resources
Learn CrewAI through our comprehensive courses:
- [Multi AI Agent Systems with CrewAI](https://www.deeplearning.ai/short-courses/multi-ai-agent-systems-with-crewai/) - Master the fundamentals of multi-agent systems
- [Practical Multi AI Agents and Advanced Use Cases](https://www.deeplearning.ai/short-courses/practical-multi-ai-agents-and-advanced-use-cases-with-crewai/) - Deep dive into advanced implementations
### Understanding Flows and Crews
CrewAI offers two powerful, complementary approaches that work seamlessly together to build sophisticated AI applications:
1. **Crews**: Teams of AI agents with true autonomy and agency, working together to accomplish complex tasks through role-based collaboration. Crews enable:
- Natural, autonomous decision-making between agents
- Dynamic task delegation and collaboration
- Specialized roles with defined goals and expertise
- Flexible problem-solving approaches
2. **Flows**: Production-ready, event-driven workflows that deliver precise control over complex automations. Flows provide:
- Fine-grained control over execution paths for real-world scenarios
- Secure, consistent state management between tasks
- Clean integration of AI agents with production Python code
- Conditional branching for complex business logic
The true power of CrewAI emerges when combining Crews and Flows. This synergy allows you to:
- Build complex, production-grade applications
- Balance autonomy with precise control
- Handle sophisticated real-world scenarios
- Maintain clean, maintainable code structure
### Getting Started with Installation
To get started with CrewAI, follow these simple steps:
### 1. Installation
Ensure you have Python >=3.10 <3.14 installed on your system. CrewAI uses [UV](https://docs.astral.sh/uv/) for dependency management and package handling, offering a seamless setup and execution experience.
First, install CrewAI:
```shell
pip install crewai
```
If you want to install the 'crewai' package along with its optional features that include additional tools for agents, you can do so by using the following command:
```shell
pip install 'crewai[tools]'
```
The command above installs the basic package and also adds extra components which require more dependencies to function.
### Troubleshooting Dependencies
If you encounter issues during installation or usage, here are some common solutions:
#### Common Issues
1. **ModuleNotFoundError: No module named 'tiktoken'**
- Install tiktoken explicitly: `pip install 'crewai[embeddings]'`
- If using embedchain or other tools: `pip install 'crewai[tools]'`
2. **Failed building wheel for tiktoken**
- Ensure Rust compiler is installed (see installation steps above)
- For Windows: Verify Visual C++ Build Tools are installed
- Try upgrading pip: `pip install --upgrade pip`
- If issues persist, use a pre-built wheel: `pip install tiktoken --prefer-binary`
### 2. Setting Up Your Crew with the YAML Configuration
To create a new CrewAI project, run the following CLI (Command Line Interface) command:
```shell
crewai create crew <project_name>
```
This command creates a new project folder with the following structure:
```
my_project/
├── .gitignore
├── pyproject.toml
├── README.md
├── .env
└── src/
    └── my_project/
        ├── __init__.py
        ├── main.py
        ├── crew.py
        ├── tools/
        │   ├── custom_tool.py
        │   └── __init__.py
        └── config/
            ├── agents.yaml
            └── tasks.yaml
```
You can now start developing your crew by editing the files in the `src/my_project` folder. The `main.py` file is the entry point of the project, the `crew.py` file is where you define your crew, the `agents.yaml` file is where you define your agents, and the `tasks.yaml` file is where you define your tasks.
#### To customize your project, you can:
- Modify `src/my_project/config/agents.yaml` to define your agents.
- Modify `src/my_project/config/tasks.yaml` to define your tasks.
- Modify `src/my_project/crew.py` to add your own logic, tools, and specific arguments.
- Modify `src/my_project/main.py` to add custom inputs for your agents and tasks.
- Add your environment variables into the `.env` file.
#### Example of a simple crew with a sequential process:
Instantiate your crew:
```shell
crewai create crew latest-ai-development
```
Modify the files as needed to fit your use case:
**agents.yaml**
```yaml
# src/my_project/config/agents.yaml
researcher:
  role: >
    {topic} Senior Data Researcher
  goal: >
    Uncover cutting-edge developments in {topic}
  backstory: >
    You're a seasoned researcher with a knack for uncovering the latest
    developments in {topic}. Known for your ability to find the most relevant
    information and present it in a clear and concise manner.

reporting_analyst:
  role: >
    {topic} Reporting Analyst
  goal: >
    Create detailed reports based on {topic} data analysis and research findings
  backstory: >
    You're a meticulous analyst with a keen eye for detail. You're known for
    your ability to turn complex data into clear and concise reports, making
    it easy for others to understand and act on the information you provide.
```
**tasks.yaml**
````yaml
# src/my_project/config/tasks.yaml
research_task:
  description: >
    Conduct a thorough research about {topic}
    Make sure you find any interesting and relevant information given
    the current year is 2025.
  expected_output: >
    A list with 10 bullet points of the most relevant information about {topic}
  agent: researcher

reporting_task:
  description: >
    Review the context you got and expand each topic into a full section for a report.
    Make sure the report is detailed and contains any and all relevant information.
  expected_output: >
    A fully fledged report with the main topics, each with a full section of information.
    Formatted as markdown without '```'
  agent: reporting_analyst
  output_file: report.md
````
**crew.py**
```python
# src/my_project/crew.py
from crewai import Agent, Crew, Process, Task
from crewai.project import CrewBase, agent, crew, task
from crewai_tools import SerperDevTool
from crewai.agents.agent_builder.base_agent import BaseAgent
from typing import List
@CrewBase
class LatestAiDevelopmentCrew():
    """LatestAiDevelopment crew"""

    agents: List[BaseAgent]
    tasks: List[Task]

    @agent
    def researcher(self) -> Agent:
        return Agent(
            config=self.agents_config['researcher'],
            verbose=True,
            tools=[SerperDevTool()]
        )

    @agent
    def reporting_analyst(self) -> Agent:
        return Agent(
            config=self.agents_config['reporting_analyst'],
            verbose=True
        )

    @task
    def research_task(self) -> Task:
        return Task(
            config=self.tasks_config['research_task'],
        )

    @task
    def reporting_task(self) -> Task:
        return Task(
            config=self.tasks_config['reporting_task'],
            output_file='report.md'
        )

    @crew
    def crew(self) -> Crew:
        """Creates the LatestAiDevelopment crew"""
        return Crew(
            agents=self.agents,  # Automatically created by the @agent decorator
            tasks=self.tasks,  # Automatically created by the @task decorator
            process=Process.sequential,
            verbose=True,
        )
```
**main.py**
```python
#!/usr/bin/env python
# src/my_project/main.py
import sys
from latest_ai_development.crew import LatestAiDevelopmentCrew
def run():
    """
    Run the crew.
    """
    inputs = {
        'topic': 'AI Agents'
    }
    LatestAiDevelopmentCrew().crew().kickoff(inputs=inputs)
```
### 3. Running Your Crew
Before running your crew, make sure you have the following keys set as environment variables in your `.env` file:
- An [OpenAI API key](https://platform.openai.com/account/api-keys) (or other LLM API key): `OPENAI_API_KEY=sk-...`
- A [Serper.dev](https://serper.dev/) API key: `SERPER_API_KEY=YOUR_KEY_HERE`
Lock the dependencies and install them by using the CLI command but first, navigate to your project directory:
```shell
cd my_project
crewai install  # optional
```
To run your crew, execute the following command in the root of your project:
```bash
crewai run
```
or
```bash
python src/my_project/main.py
```
If an error happens due to the usage of poetry, please run the following command to update your crewai package:
```bash
crewai update
```
You should see the output in the console and the `report.md` file should be created in the root of your project with the full final report.
In addition to the sequential process, you can use the hierarchical process, which automatically assigns a manager to the defined crew to properly coordinate the planning and execution of tasks through delegation and validation of results. [See more about the processes here](https://docs.crewai.com/core-concepts/Processes/).
## Key Features
CrewAI stands apart as a lean, standalone, high-performance multi-AI Agent framework delivering simplicity, flexibility, and precise control—free from the complexity and limitations found in other agent frameworks.
- **Standalone & Lean**: Completely independent from other frameworks like LangChain, offering faster execution and lighter resource demands.
- **Flexible & Precise**: Easily orchestrate autonomous agents through intuitive [Crews](https://docs.crewai.com/concepts/crews) or precise [Flows](https://docs.crewai.com/concepts/flows), achieving perfect balance for your needs.
- **Seamless Integration**: Effortlessly combine Crews (autonomy) and Flows (precision) to create complex, real-world automations.
- **Deep Customization**: Tailor every aspect—from high-level workflows down to low-level internal prompts and agent behaviors.
- **Reliable Performance**: Consistent results across simple tasks and complex, enterprise-level automations.
- **Thriving Community**: Backed by robust documentation and over 100,000 certified developers, providing exceptional support and guidance.
Choose CrewAI to easily build powerful, adaptable, and production-ready AI automations.
## Examples
You can test different real life examples of AI crews in the [CrewAI-examples repo](https://github.com/crewAIInc/crewAI-examples?tab=readme-ov-file):
- [Landing Page Generator](https://github.com/crewAIInc/crewAI-examples/tree/main/crews/landing_page_generator)
- [Having Human input on the execution](https://docs.crewai.com/how-to/Human-Input-on-Execution)
- [Trip Planner](https://github.com/crewAIInc/crewAI-examples/tree/main/crews/trip_planner)
- [Stock Analysis](https://github.com/crewAIInc/crewAI-examples/tree/main/crews/stock_analysis)
### Quick Tutorial
[](https://www.youtube.com/watch?v=tnejrr-0a94 "CrewAI Tutorial")
### Write Job Descriptions
[Check out code for this example](https://github.com/crewAIInc/crewAI-examples/tree/main/crews/job-posting) or watch a video below:
[](https://www.youtube.com/watch?v=u98wEMz-9to "Jobs postings")
### Trip Planner
[Check out code for this example](https://github.com/crewAIInc/crewAI-examples/tree/main/crews/trip_planner) or watch a video below:
[](https://www.youtube.com/watch?v=xis7rWp-hjs "Trip Planner")
### Stock Analysis
[Check out code for this example](https://github.com/crewAIInc/crewAI-examples/tree/main/crews/stock_analysis) or watch a video below:
[](https://www.youtube.com/watch?v=e0Uj4yWdaAg "Stock Analysis")
### Using Crews and Flows Together
CrewAI's power truly shines when combining Crews with Flows to create sophisticated automation pipelines.
CrewAI flows support logical operators like `or_` and `and_` to combine multiple conditions. This can be used with `@start`, `@listen`, or `@router` decorators to create complex triggering conditions.
- `or_`: Triggers when any of the specified conditions are met.
- `and_`: Triggers when all of the specified conditions are met.
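Conceptually, these combinators are predicates over the set of events that have already fired. The following framework-free sketch illustrates the behavior only — `or_fired` and `and_fired` are hypothetical helpers, not CrewAI API:

```python
# Illustrative only: models how or_/and_ style trigger conditions behave.
# `or_fired` / `and_fired` are hypothetical names, not CrewAI API.

def or_fired(fired: set[str], *conditions: str) -> bool:
    """Trigger when ANY of the named conditions has fired."""
    return any(c in fired for c in conditions)

def and_fired(fired: set[str], *conditions: str) -> bool:
    """Trigger when ALL of the named conditions have fired."""
    return all(c in fired for c in conditions)

fired_events = {"medium_confidence"}
print(or_fired(fired_events, "medium_confidence", "low_confidence"))   # True
print(and_fired(fired_events, "medium_confidence", "low_confidence"))  # False
```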
Here's how you can orchestrate multiple Crews within a Flow:
```python
from crewai.flow.flow import Flow, listen, start, router, or_
from crewai import Crew, Agent, Task, Process
from pydantic import BaseModel
# Define structured state for precise control
class MarketState(BaseModel):
    sentiment: str = "neutral"
    confidence: float = 0.0
    recommendations: list = []

class AdvancedAnalysisFlow(Flow[MarketState]):
    @start()
    def fetch_market_data(self):
        # Demonstrate low-level control with structured state
        self.state.sentiment = "analyzing"
        return {"sector": "tech", "timeframe": "1W"}  # These parameters match the task description template

    @listen(fetch_market_data)
    def analyze_with_crew(self, market_data):
        # Show crew agency through specialized roles
        analyst = Agent(
            role="Senior Market Analyst",
            goal="Conduct deep market analysis with expert insight",
            backstory="You're a veteran analyst known for identifying subtle market patterns"
        )
        researcher = Agent(
            role="Data Researcher",
            goal="Gather and validate supporting market data",
            backstory="You excel at finding and correlating multiple data sources"
        )
        analysis_task = Task(
            description="Analyze {sector} sector data for the past {timeframe}",
            expected_output="Detailed market analysis with confidence score",
            agent=analyst
        )
        research_task = Task(
            description="Find supporting data to validate the analysis",
            expected_output="Corroborating evidence and potential contradictions",
            agent=researcher
        )
        # Demonstrate crew autonomy
        analysis_crew = Crew(
            agents=[analyst, researcher],
            tasks=[analysis_task, research_task],
            process=Process.sequential,
            verbose=True
        )
        return analysis_crew.kickoff(inputs=market_data)  # Pass market_data as named inputs

    @router(analyze_with_crew)
    def determine_next_steps(self):
        # Show flow control with conditional routing
        if self.state.confidence > 0.8:
            return "high_confidence"
        elif self.state.confidence > 0.5:
            return "medium_confidence"
        return "low_confidence"

    @listen("high_confidence")
    def execute_strategy(self):
        # Demonstrate complex decision making
        strategy_crew = Crew(
            agents=[
                Agent(role="Strategy Expert",
                      goal="Develop optimal market strategy")
            ],
            tasks=[
                Task(description="Create detailed strategy based on analysis",
                     expected_output="Step-by-step action plan")
            ]
        )
        return strategy_crew.kickoff()

    @listen(or_("medium_confidence", "low_confidence"))
    def request_additional_analysis(self):
        self.state.recommendations.append("Gather more data")
        return "Additional analysis required"
```
This example demonstrates how to:
1. Use Python code for basic data operations
2. Create and execute Crews as steps in your workflow
3. Use Flow decorators to manage the sequence of operations
4. Implement conditional branching based on Crew results
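Stripped of the framework, the conditional branching in step 4 reduces to a threshold function plus a dispatch on the returned event name. A framework-free sketch (the function and handler names below are illustrative, not CrewAI API):

```python
# Framework-free sketch of the conditional branching pattern above.

def route(confidence: float) -> str:
    """Mirror the @router thresholds from the flow example."""
    if confidence > 0.8:
        return "high_confidence"
    if confidence > 0.5:
        return "medium_confidence"
    return "low_confidence"

def request_additional_analysis(state: dict) -> str:
    state["recommendations"].append("Gather more data")
    return "Additional analysis required"

# Hypothetical dispatch table standing in for the @listen-decorated handlers.
handlers = {
    "high_confidence": lambda state: "Executing strategy",
    "medium_confidence": request_additional_analysis,
    "low_confidence": request_additional_analysis,
}

state = {"recommendations": []}
event = route(0.6)
print(event)                     # medium_confidence
print(handlers[event](state))    # Additional analysis required
print(state["recommendations"])  # ['Gather more data']
```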
## Connecting Your Crew to a Model
CrewAI supports many LLMs through a variety of connection options. By default, your agents use the OpenAI API when querying the model, but there are several other ways to connect your agents to models. For example, you can configure your agents to use a local model via the Ollama tool.
Please refer to the [Connect CrewAI to LLMs](https://docs.crewai.com/how-to/LLM-Connections/) page for details on configuring your agents' connections to models.
## How CrewAI Compares
**CrewAI's Advantage**: CrewAI combines autonomous agent intelligence with precise workflow control through its unique Crews and Flows architecture. The framework excels at both high-level orchestration and low-level customization, enabling complex, production-grade systems with granular control.
- **LangGraph**: While LangGraph provides a foundation for building agent workflows, its approach requires significant boilerplate code and complex state management patterns. The framework's tight coupling with LangChain can limit flexibility when implementing custom agent behaviors or integrating with external systems.
_P.S. CrewAI demonstrates significant performance advantages over LangGraph, executing 5.76x faster in certain cases like this QA task example ([see comparison](https://github.com/crewAIInc/crewAI-examples/tree/main/Notebooks/CrewAI%20Flows%20%26%20Langgraph/QA%20Agent)) while achieving higher evaluation scores with faster completion times in certain coding tasks, like in this example ([detailed analysis](https://github.com/crewAIInc/crewAI-examples/blob/main/Notebooks/CrewAI%20Flows%20%26%20Langgraph/Coding%20Assistant/coding_assistant_eval.ipynb))._
- **Autogen**: While Autogen excels at creating conversational agents capable of working together, it lacks an inherent concept of process. In Autogen, orchestrating agents' interactions requires additional programming, which can become complex and cumbersome as the scale of tasks grows.
- **ChatDev**: ChatDev introduced the idea of processes into the realm of AI agents, but its implementation is quite rigid. Customizations in ChatDev are limited and not geared towards production environments, which can hinder scalability and flexibility in real-world applications.
## Contribution
CrewAI is open-source and we welcome contributions. If you're looking to contribute, please:
- Fork the repository.
- Create a new branch for your feature.
- Add your feature or improvement.
- Send a pull request.
We appreciate your input!
### Installing Dependencies
```bash
uv lock
uv sync
```
### Virtual Env
```bash
uv venv
```
### Pre-commit hooks
```bash
pre-commit install
```
### Running Tests
```bash
uv run pytest .
```
### Running static type checks
```bash
uvx mypy src
```
### Packaging
```bash
uv build
```
### Installing Locally
```bash
pip install dist/*.tar.gz
```
## Telemetry
CrewAI uses anonymous telemetry to collect usage data with the main purpose of helping us improve the library by focusing our efforts on the most used features, integrations and tools.
It's pivotal to understand that **NO data is collected** concerning prompts, task descriptions, agents' backstories or goals, usage of tools, API calls, responses, any data processed by the agents, or secrets and environment variables, except under the conditions described here: when the `share_crew` feature is enabled, detailed data including task descriptions, agents' backstories or goals, and other specific attributes are collected to provide deeper insights while respecting user privacy. Users can disable telemetry by setting the environment variable `OTEL_SDK_DISABLED` to `true`.
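For example, to opt out, the variable can be exported in the shell (`export OTEL_SDK_DISABLED=true`) or set from Python before the library is initialized — a minimal sketch:

```python
import os

# Opt out of CrewAI's anonymous telemetry for this process.
# Set this before the library is imported/initialized.
os.environ["OTEL_SDK_DISABLED"] = "true"

print(os.environ["OTEL_SDK_DISABLED"])  # true
```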
Data collected includes:
- Version of CrewAI
  - So we can understand how many users are using the latest version
- Version of Python
  - So we can decide on what versions to better support
- General OS (e.g. number of CPUs, macOS/Windows/Linux)
  - So we know what OS we should focus on and if we could build specific OS-related features
- Number of agents and tasks in a crew
  - So we make sure we are testing internally with similar use cases and educate people on the best practices
- Crew Process being used
  - Understand where we should focus our efforts
- If Agents are using memory or allowing delegation
  - Understand if we improved the features or maybe even drop them
- If Tasks are being executed in parallel or sequentially
  - Understand if we should focus more on parallel execution
- Language model being used
  - Improved support on most used languages
- Roles of agents in a crew
  - Understand high-level use cases so we can build better tools, integrations and examples about it
- Tool names available
  - Understand, out of the publicly available tools, which ones are being used the most so we can improve them
Users can opt-in to Further Telemetry, sharing the complete telemetry data by setting the `share_crew` attribute to `True` on their Crews. Enabling `share_crew` results in the collection of detailed crew and task execution data, including `goal`, `backstory`, `context`, and `output` of tasks. This enables a deeper insight into usage patterns while respecting the user's choice to share.
## License
CrewAI is released under the [MIT License](https://github.com/crewAIInc/crewAI/blob/main/LICENSE).
## Frequently Asked Questions (FAQ)
### General
- [What exactly is CrewAI?](#q-what-exactly-is-crewai)
- [How do I install CrewAI?](#q-how-do-i-install-crewai)
- [Does CrewAI depend on LangChain?](#q-does-crewai-depend-on-langchain)
- [Is CrewAI open-source?](#q-is-crewai-open-source)
- [Does CrewAI collect data from users?](#q-does-crewai-collect-data-from-users)
### Features and Capabilities
- [Can CrewAI handle complex use cases?](#q-can-crewai-handle-complex-use-cases)
- [Can I use CrewAI with local AI models?](#q-can-i-use-crewai-with-local-ai-models)
- [What makes Crews different from Flows?](#q-what-makes-crews-different-from-flows)
- [How is CrewAI better than LangChain?](#q-how-is-crewai-better-than-langchain)
- [Does CrewAI support fine-tuning or training custom models?](#q-does-crewai-support-fine-tuning-or-training-custom-models)
### Resources and Community
- [Where can I find real-world CrewAI examples?](#q-where-can-i-find-real-world-crewai-examples)
- [How can I contribute to CrewAI?](#q-how-can-i-contribute-to-crewai)
### Enterprise Features
- [What additional features does CrewAI AMP offer?](#q-what-additional-features-does-crewai-amp-offer)
- [Is CrewAI AMP available for cloud and on-premise deployments?](#q-is-crewai-amp-available-for-cloud-and-on-premise-deployments)
- [Can I try CrewAI AMP for free?](#q-can-i-try-crewai-amp-for-free)
### Q: What exactly is CrewAI?
A: CrewAI is a standalone, lean, and fast Python framework built specifically for orchestrating autonomous AI agents. Unlike frameworks like LangChain, CrewAI does not rely on external dependencies, making it leaner, faster, and simpler.
### Q: How do I install CrewAI?
A: Install CrewAI using pip:
```shell
pip install crewai
```
For additional tools, use:
```shell
pip install 'crewai[tools]'
```
### Q: Does CrewAI depend on LangChain?
A: No. CrewAI is built entirely from the ground up, with no dependencies on LangChain or other agent frameworks. This ensures a lean, fast, and flexible experience.
### Q: Can CrewAI handle complex use cases?
A: Yes. CrewAI excels at both simple and highly complex real-world scenarios, offering deep customization options at both high and low levels, from internal prompts to sophisticated workflow orchestration.
### Q: Can I use CrewAI with local AI models?
A: Absolutely! CrewAI supports various language models, including local ones. Tools like Ollama and LM Studio allow seamless integration. Check the [LLM Connections documentation](https://docs.crewai.com/how-to/LLM-Connections/) for more details.
### Q: What makes Crews different from Flows?
A: Crews provide autonomous agent collaboration, ideal for tasks requiring flexible decision-making and dynamic interaction. Flows offer precise, event-driven control, ideal for managing detailed execution paths and secure state management. You can seamlessly combine both for maximum effectiveness.
### Q: How is CrewAI better than LangChain?
A: CrewAI provides simpler, more intuitive APIs, faster execution speeds, more reliable and consistent results, robust documentation, and an active community—addressing common criticisms and limitations associated with LangChain.
### Q: Is CrewAI open-source?
A: Yes, CrewAI is open-source and actively encourages community contributions and collaboration.
### Q: Does CrewAI collect data from users?
A: CrewAI collects anonymous telemetry data strictly for improvement purposes. Sensitive data such as prompts, tasks, or API responses is never collected unless sharing is explicitly enabled by the user.
### Q: Where can I find real-world CrewAI examples?
A: Check out practical examples in the [CrewAI-examples repository](https://github.com/crewAIInc/crewAI-examples), covering use cases like trip planners, stock analysis, and job postings.
### Q: How can I contribute to CrewAI?
A: Contributions are warmly welcomed! Fork the repository, create your branch, implement your changes, and submit a pull request. See the Contribution section of the README for detailed guidelines.
### Q: What additional features does CrewAI AMP offer?
A: CrewAI AMP provides advanced features such as a unified control plane, real-time observability, secure integrations, advanced security, actionable insights, and dedicated 24/7 enterprise support.
### Q: Is CrewAI AMP available for cloud and on-premise deployments?
A: Yes, CrewAI AMP supports both cloud-based and on-premise deployment options, allowing enterprises to meet their specific security and compliance requirements.
### Q: Can I try CrewAI AMP for free?
A: Yes, you can explore part of the CrewAI AMP Suite by accessing the [Crew Control Plane](https://app.crewai.com) for free.
### Q: Does CrewAI support fine-tuning or training custom models?
A: Yes, CrewAI can integrate with custom-trained or fine-tuned models, allowing you to enhance your agents with domain-specific knowledge and accuracy.
### Q: Can CrewAI agents interact with external tools and APIs?
A: Absolutely! CrewAI agents can easily integrate with external tools, APIs, and databases, empowering them to leverage real-world data and resources.
### Q: Is CrewAI suitable for production environments?
A: Yes, CrewAI is explicitly designed with production-grade standards, ensuring reliability, stability, and scalability for enterprise deployments.
### Q: How scalable is CrewAI?
A: CrewAI is highly scalable, supporting simple automations and large-scale enterprise workflows involving numerous agents and complex tasks simultaneously.
### Q: Does CrewAI offer debugging and monitoring tools?
A: Yes, CrewAI AMP includes advanced debugging, tracing, and real-time observability features, simplifying the management and troubleshooting of your automations.
### Q: What programming languages does CrewAI support?
A: CrewAI is primarily Python-based but easily integrates with services and APIs written in any programming language through its flexible API integration capabilities.
### Q: Does CrewAI offer educational resources for beginners?
A: Yes, CrewAI provides extensive beginner-friendly tutorials, courses, and documentation through learn.crewai.com, supporting developers at all skill levels.
### Q: Can CrewAI automate human-in-the-loop workflows?
A: Yes, CrewAI fully supports human-in-the-loop workflows, allowing seamless collaboration between human experts and AI agents for enhanced decision-making.
| text/markdown | null | Joao Moura <joao@crewai.com> | null | null | null | null | [] | [] | null | null | <3.14,>=3.10 | [] | [] | [] | [
"aiosqlite~=0.21.0",
"appdirs~=1.4.4",
"chromadb~=1.1.0",
"click~=8.1.7",
"instructor>=1.3.3",
"json-repair~=0.25.2",
"json5~=0.10.0",
"jsonref~=1.1.0",
"lancedb>=0.4.0",
"mcp~=1.26.0",
"openai<3,>=1.83.0",
"openpyxl~=3.1.5",
"opentelemetry-api~=1.34.0",
"opentelemetry-exporter-otlp-proto-... | [] | [] | [] | [
"Homepage, https://crewai.com",
"Documentation, https://docs.crewai.com",
"Repository, https://github.com/crewAIInc/crewAI"
] | uv/0.8.4 | 2026-02-19T05:36:34.754750 | crewai-1.10.0a1.tar.gz | 6,806,577 | ff/5e/99cf02f5e7e9ce711b6fd518c87855d82dd8bf3e79d7cbd9c5b4be799115/crewai-1.10.0a1.tar.gz | source | sdist | null | false | d45d5cf9fed5b95d13cfcd723e1b7284 | d26ee5070b38490b13b9dfe030038e45a0a5d4817710d4fe84305a33f6f849f1 | ff5e99cf02f5e7e9ce711b6fd518c87855d82dd8bf3e79d7cbd9c5b4be799115 | null | [] | 688 |
2.4 | pmdsky-debug-py | 10.2.30 | pmdsky-debug symbols for Python. | pmdsky-debug-py
===============
Autogenerated and statically checkable pmdsky-debug_ symbol definitions
for Python.
To use the symbols in your projects, you can install the package from PyPI::

    pip install pmdsky-debug-py

Symbols are grouped by region. Each region implements the
``pmdsky_debug_py.SymbolsProtocol``.

To get a symbol, you can query it like this::

    pmdsky_debug_py.eu.functions.InitMemAllocTable.address
See the `README.rst`_ on the root of the repository for additional information.
.. _pmdsky-debug: https://github.com/UsernameFodder/pmdsky-debug
.. _README.rst: https://github.com/SkyTemple/pmdsky-debug-py
| text/x-rst | null | Marco 'Capypara' Köpcke <hello@capypara.de> | null | null | MIT | null | [
"Development Status :: 4 - Beta",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python"
] | [] | null | null | >=3.10 | [] | [] | [] | [] | [] | [] | [] | [
"repository, https://github.com/SkyTemple/pmdsky-debug-py"
] | twine/6.2.0 CPython/3.11.14 | 2026-02-19T05:35:19.323527 | pmdsky_debug_py-10.2.30-py3-none-any.whl | 1,409,203 | 27/5e/0668c5a71d2ff67f9563e3c2d1101770bb8cc5f0d82fcc5c0580f1a76313/pmdsky_debug_py-10.2.30-py3-none-any.whl | py3 | bdist_wheel | null | false | 3bb1477c822d524d738a1c86a15b0817 | a449ee2e89807373c42abec6be6aa4a3ecf8ac4931f4ef5c5783a32f2215919e | 275e0668c5a71d2ff67f9563e3c2d1101770bb8cc5f0d82fcc5c0580f1a76313 | null | [] | 122 |
2.4 | vibe-clock | 1.3.0 | Track AI coding agent usage across Claude Code, Codex, and OpenCode | # vibe-clock
[简体中文](README.zh-CN.md) | [日本語](README.ja.md) | [Español](README.es.md)
**WakaTime for AI coding agents.** Track usage across Claude Code, Codex, and OpenCode — then show it off on your GitHub profile.
[](LICENSE)
[](https://www.python.org/)
[](https://github.com/dexhunter/vibe-clock)
<p align="center">
<img src="https://raw.githubusercontent.com/dexhunter/dexhunter/master/images/vibe-clock-card.svg" alt="Vibe Clock Stats" />
</p>
<p align="center">
<img src="https://raw.githubusercontent.com/dexhunter/dexhunter/master/images/vibe-clock-donut.svg" alt="Model Usage" width="400" />
<img src="https://raw.githubusercontent.com/dexhunter/dexhunter/master/images/vibe-clock-token-bars.svg" alt="Token Usage by Model" width="400" />
</p>
<p align="center">
<img src="https://raw.githubusercontent.com/dexhunter/dexhunter/master/images/vibe-clock-hourly.svg" alt="Activity by Hour" width="400" />
<img src="https://raw.githubusercontent.com/dexhunter/dexhunter/master/images/vibe-clock-weekly.svg" alt="Activity by Day of Week" width="400" />
</p>
---
## Quick Start
```bash
pip install vibe-clock
vibe-clock init # auto-detects agents, sets up config
vibe-clock summary # see your stats in the terminal
```
## Privacy & Security
**Your code never leaves your machine.** vibe-clock reads only session metadata (timestamps, token counts, model names) from local JSONL logs. Before anything is pushed:
1. **Sanitizer strips all PII** — file paths, project names, usernames, and code are removed ([`sanitizer.py`](vibe_clock/sanitizer.py))
2. **Projects are anonymized** — real names become "Project A", "Project B"
3. **`--dry-run` lets you inspect** exactly what will be pushed before it goes anywhere
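The project anonymization step can be pictured as a simple name-to-label mapping. The sketch below is illustrative only (the `anonymize` helper is hypothetical, not vibe-clock's actual implementation in `sanitizer.py`):

```python
import string

def anonymize(projects: list[str]) -> dict[str, str]:
    """Map real project names to 'Project A', 'Project B', ... (illustrative only)."""
    return {name: f"Project {letter}"
            for name, letter in zip(projects, string.ascii_uppercase)}

print(anonymize(["secret-client-api", "my-side-project"]))
# {'secret-client-api': 'Project A', 'my-side-project': 'Project B'}
```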
**What is pushed** (to your own public gist):
- Session counts, message counts, durations
- Token usage totals per model
- Model and agent names
- Daily activity aggregates
**What is NEVER pushed**: file paths, project names, message content, code snippets, git info, or any PII.
## Configurable Charts
Generate only the charts you want with `--type`:
```bash
vibe-clock render --type card,donut # just these two
vibe-clock render --type all # all 7 charts (default)
```
| Chart | File | Description |
|-------|------|-------------|
| `card` | `vibe-clock-card.svg` | Summary stats card |
| `heatmap` | `vibe-clock-heatmap.svg` | Daily activity heatmap |
| `donut` | `vibe-clock-donut.svg` | Model usage breakdown |
| `bars` | `vibe-clock-bars.svg` | Project session bars |
| `token_bars` | `vibe-clock-token-bars.svg` | Token usage by model |
| `hourly` | `vibe-clock-hourly.svg` | Activity by hour of day |
| `weekly` | `vibe-clock-weekly.svg` | Activity by day of week |
## GitHub Actions Setup
Add to your `<username>/<username>` profile repo to auto-update SVGs daily.
### 1. Push your stats
```bash
vibe-clock push # creates a public gist with sanitized data
# Note the gist ID printed
```
### 2. Add the secret
In your profile repo: **Settings → Secrets → Actions** → add:
- `VIBE_CLOCK_GIST_ID` — the gist ID from step 1
### 3. Create the workflow
`.github/workflows/vibe-clock.yml`:
```yaml
name: Update Vibe Clock Stats
on:
schedule:
- cron: '0 0 * * *'
workflow_dispatch:
jobs:
update:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- uses: dexhunter/vibe-clock@v1.1.0
with:
gist_id: ${{ secrets.VIBE_CLOCK_GIST_ID }}
```
### 4. Add SVGs to your README
```html
<img src="images/vibe-clock-card.svg" alt="Vibe Clock Stats" />
<img src="images/vibe-clock-heatmap.svg" alt="Activity Heatmap" />
<img src="images/vibe-clock-donut.svg" alt="Model Usage" />
<img src="images/vibe-clock-bars.svg" alt="Projects" />
```
### 5. Run it
Go to **Actions** tab → "Update Vibe Clock Stats" → **Run workflow**
### Action Inputs
| Input | Default | Description |
|-------|---------|-------------|
| `gist_id` | *required* | Gist ID containing `vibe-clock-data.json` |
| `theme` | `dark` | `dark` or `light` |
| `output_dir` | `./images` | Where to write SVG files |
| `chart_types` | `all` | Comma-separated: `card,heatmap,donut,bars,token_bars,hourly,weekly` or `all` |
| `commit` | `true` | Auto-commit generated SVGs |
| `commit_message` | `chore: update vibe-clock stats` | Commit message |
### How it works
```
You (local) GitHub
───────── ──────
vibe-clock push ──▶ Gist (sanitized JSON)
│
Actions (daily cron)
│
fetch gist JSON
generate SVGs
commit to profile repo
```
## Supported Agents
| Agent | Log Location | Status |
|-------|-------------|--------|
| **Claude Code** | `~/.claude/` | Supported |
| **Codex** | `~/.codex/` | Supported |
| **OpenCode** | `~/.local/share/opencode/` | Supported |
## Commands
| Command | Description |
|---------|-------------|
| `vibe-clock init` | Interactive setup — detects agents, asks for GitHub token |
| `vibe-clock summary` | Rich terminal summary of usage stats |
| `vibe-clock status` | Show current configuration and connection status |
| `vibe-clock render` | Generate SVG visualizations locally |
| `vibe-clock export` | Export raw stats as JSON |
| `vibe-clock push` | Push sanitized stats to a GitHub gist |
| `vibe-clock push --dry-run` | Preview what would be pushed |
| `vibe-clock schedule` | Auto-schedule periodic push (launchd / systemd / cron) |
| `vibe-clock unschedule` | Remove the scheduled push task |
## Configuration
Config file: `~/.config/vibe-clock/config.toml`
Environment variable overrides:
- `GITHUB_TOKEN` — GitHub PAT with `gist` scope
- `VIBE_CLOCK_GIST_ID` — Gist ID for push/pull
- `VIBE_CLOCK_DAYS` — Number of days to aggregate
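The override order amounts to "environment variable wins over config file." A hypothetical illustration of that precedence (this is not vibe-clock's actual resolution code; the `resolve` helper is invented for the example):

```python
import os

def resolve(env_var: str, config: dict, key: str, default=None):
    """Return the environment value if set, else the config.toml value, else a default."""
    return os.environ.get(env_var) or config.get(key, default)

config = {"gist_id": "from-config-file", "days": 30}
os.environ["VIBE_CLOCK_GIST_ID"] = "from-environment"

print(resolve("VIBE_CLOCK_GIST_ID", config, "gist_id"))  # from-environment
print(resolve("VIBE_CLOCK_DAYS", config, "days"))        # 30
```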
## License
MIT
| text/markdown | null | null | null | null | null | null | [] | [] | null | null | >=3.10 | [] | [] | [] | [
"click>=8.0",
"httpx[socks]>=0.25",
"pydantic>=2.0",
"rich>=13.0",
"tomli>=2.0; python_version < \"3.11\"",
"fastapi>=0.100; extra == \"api\"",
"uvicorn>=0.20; extra == \"api\"",
"pytest-cov; extra == \"dev\"",
"pytest>=7.0; extra == \"dev\""
] | [] | [] | [] | [] | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true} | 2026-02-19T05:34:02.944191 | vibe_clock-1.3.0.tar.gz | 71,752 | 84/97/1f904bd453334e51aee36ad64068a090f97c85cc556412f9c69f2e82b5e7/vibe_clock-1.3.0.tar.gz | source | sdist | null | false | 9441ea31d5e8fa1770dd9e70ff6c4df2 | 1aa86f90e9d0ead005e2a48aa49894ea3499cbd85672c94ae8f1dad104a52215 | 84971f904bd453334e51aee36ad64068a090f97c85cc556412f9c69f2e82b5e7 | MIT | [
"LICENSE"
] | 271 |
2.4 | graphbit | 0.6.5 | GraphBit - Advanced workflow automation and AI agent orchestration library | <div align="center">
# GraphBit - High Performance Agentic Framework
<p align="center">
<img src="https://raw.githubusercontent.com/InfinitiBit/graphbit/refs/heads/main/assets/logo(circle).png" width="180" alt="GraphBit Logo" />
</p>
<!-- Added placeholders for links, fill it up when the corresponding links are available. -->
<p align="center">
<a href="https://graphbit.ai/">Website</a> |
<a href="https://docs.graphbit.ai/">Docs</a> |
<a href="https://discord.com/invite/FMhgB3paMD">Discord</a>
<br /><br />
</p>
<p align="center">
<a href="https://trendshift.io/repositories/14884" target="_blank"><img src="https://trendshift.io/api/badge/repositories/14884" alt="InfinitiBit%2Fgraphbit | Trendshift" width="250" height="55"/></a>
<br>
<a href="https://pepy.tech/projects/graphbit"><img src="https://static.pepy.tech/personalized-badge/graphbit?period=total&units=INTERNATIONAL_SYSTEM&left_color=GREY&right_color=GREEN&left_text=Downloads" alt="PyPI Downloads"/></a>
</p>
<p align="center">
<a href="https://pypi.org/project/graphbit/"><img src="https://img.shields.io/pypi/v/graphbit?color=blue&label=PyPI" alt="PyPI"></a>
<!-- <a href="https://pypi.org/project/graphbit/"><img src="https://img.shields.io/pypi/dm/graphbit?color=blue&label=Downloads" alt="PyPI Downloads"></a> -->
<a href="https://github.com/InfinitiBit/graphbit/actions/workflows/update-docs.yml"><img src="https://img.shields.io/github/actions/workflow/status/InfinitiBit/graphbit/update-docs.yml?branch=main&label=Build" alt="Build Status"></a>
<a href="https://github.com/InfinitiBit/graphbit/blob/main/CONTRIBUTING.md"><img src="https://img.shields.io/badge/PRs-welcome-brightgreen.svg" alt="PRs Welcome"></a>
<br>
<a href="https://www.rust-lang.org"><img src="https://img.shields.io/badge/rust-1.70+-orange.svg?logo=rust" alt="Rust Version"></a>
<a href="https://www.python.org"><img src="https://img.shields.io/badge/python-3.9--3.13-blue.svg?logo=python&logoColor=white" alt="Python Version"></a>
<a href="https://github.com/InfinitiBit/graphbit/blob/main/LICENSE.md"><img src="https://img.shields.io/badge/license-Apache%202.0-blue.svg" alt="License"></a>
</p>
<p align="center">
<a href="https://www.youtube.com/@graphbitAI"><img src="https://img.shields.io/badge/YouTube-FF0000?logo=youtube&logoColor=white" alt="YouTube"></a>
<a href="https://x.com/graphbit_ai"><img src="https://img.shields.io/badge/X-000000?logo=x&logoColor=white" alt="X"></a>
<a href="https://discord.com/invite/FMhgB3paMD"><img src="https://img.shields.io/badge/Discord-7289da?logo=discord&logoColor=white" alt="Discord"></a>
<a href="https://www.linkedin.com/showcase/graphbitai/"><img src="https://img.shields.io/badge/LinkedIn-0077B5?logo=linkedin&logoColor=white" alt="LinkedIn"></a>
</p>
**Type-Safe AI Agent Workflows with Rust Performance**
</div>
---
**Read this in other languages**: [🇨🇳 简体中文](https://github.com/InfinitiBit/graphbit/blob/main/README_Multi_Lingual_i18n_Files/README.zh-CN.md) | [🇨🇳 繁體中文](https://github.com/InfinitiBit/graphbit/blob/main/README_Multi_Lingual_i18n_Files/README.zh-TW.md) | [🇪🇸 Español](https://github.com/InfinitiBit/graphbit/blob/main/README_Multi_Lingual_i18n_Files/README.es.md) | [🇫🇷 Français](https://github.com/InfinitiBit/graphbit/blob/main/README_Multi_Lingual_i18n_Files/README.fr.md) | [🇩🇪 Deutsch](https://github.com/InfinitiBit/graphbit/blob/main/README_Multi_Lingual_i18n_Files/README.de.md) | [🇯🇵 日本語](https://github.com/InfinitiBit/graphbit/blob/main/README_Multi_Lingual_i18n_Files/README.ja.md) | [🇰🇷 한국어](https://github.com/InfinitiBit/graphbit/blob/main/README_Multi_Lingual_i18n_Files/README.ko.md) | [🇮🇳 हिन्दी](https://github.com/InfinitiBit/graphbit/blob/main/README_Multi_Lingual_i18n_Files/README.hi.md) | [🇸🇦 العربية](https://github.com/InfinitiBit/graphbit/blob/main/README_Multi_Lingual_i18n_Files/README.ar.md) | [🇮🇹 Italiano](https://github.com/InfinitiBit/graphbit/blob/main/README_Multi_Lingual_i18n_Files/README.it.md) | [🇧🇷 Português](https://github.com/InfinitiBit/graphbit/blob/main/README_Multi_Lingual_i18n_Files/README.pt-BR.md) | [🇷🇺 Русский](https://github.com/InfinitiBit/graphbit/blob/main/README_Multi_Lingual_i18n_Files/README.ru.md) | [🇧🇩 বাংলা](https://github.com/InfinitiBit/graphbit/blob/main/README_Multi_Lingual_i18n_Files/README.bn.md)
---
## What is GraphBit?
GraphBit is a source-available agentic AI framework for developers who need deterministic, concurrent, and low-overhead execution.
Built with a **Rust core** and a minimal **Python layer**, GraphBit delivers up to **68× lower CPU usage** and **140× lower memory footprint** than other frameworks, while maintaining equal or greater throughput.
It powers multi-agent workflows that run in parallel, persist memory across steps, self-recover from failures, and ensure **100% task reliability**. GraphBit is built for production workloads, from enterprise AI systems to low-resource edge deployments.
---
## Why GraphBit?
Efficiency decides who scales. GraphBit is built for developers who need deterministic, concurrent, and ultra-efficient AI execution without the overhead.
- **Rust-Powered Performance** - Maximum speed and memory safety at the core
- **Production-Ready** - Circuit breakers, retry policies, and fault recovery built-in
- **Resource Efficient** - Run on enterprise servers or low-resource edge devices
- **Multi-Agent Ready** - Parallel execution with shared memory across workflow steps
- **Observable** - Built-in tracing, structured logs, and performance metrics
---
## Benchmark
GraphBit was built for efficiency at scale—not theoretical claims, but measured results.
Our internal benchmark suite compared GraphBit to leading Python-based agent frameworks across identical workloads.
| Metric | GraphBit | Other Frameworks | Gain |
|:--------------------|:---------------:|:----------------:|:-------------------------|
| CPU Usage | 1.0× baseline | 68.3× higher | ~68× CPU |
| Memory Footprint | 1.0× baseline | 140× higher | ~140× Memory |
| Execution Speed | ≈ equal / faster| — | Consistent throughput |
| Determinism | 100% success | Variable | Guaranteed reliability |
GraphBit consistently delivers production-grade efficiency across LLM calls, tool invocations, and multi-agent chains.
**[View Full Benchmark Report](https://github.com/InfinitiBit/graphbit/blob/main/benchmarks/report/framework-benchmark-report.md)** for detailed methodology, test scenarios, and complete results.
**[Watch Benchmark Demo Video](https://www.youtube.com/watch?v=MaCl5oENeAY)** to see GraphBit's performance in action.
---
## When to Use GraphBit
Choose GraphBit if you need:
- **Production-grade multi-agent systems** that won't collapse under load
- **Type-safe execution** and reproducible outputs
- **Real-time orchestration** for hybrid or streaming AI applications
- **Rust-level efficiency** with Python-level ergonomics
If you're scaling beyond prototypes or care about runtime determinism, GraphBit is for you.
---
## Key Features
- **Tool Selection** - LLMs intelligently choose tools based on descriptions
- **Type Safety** - Strong typing through every execution layer
- **Reliability** - Circuit breakers, retry policies, error handling and fault recovery
- **Multi-LLM Support** - OpenAI, Azure OpenAI, Anthropic, OpenRouter, DeepSeek, Replicate, Ollama, TogetherAI and more
- **Resource Management** - Concurrency controls and memory optimization
- **Observability** - Built-in tracing, structured logs, and performance metrics
---
## Quick Start
### Installation
We recommend installing inside a virtual environment.
```bash
pip install graphbit
```
### Environment Setup
Set up API keys you want to use in your project:
```bash
# OpenAI (optional – required if using OpenAI models)
export OPENAI_API_KEY=your_openai_api_key_here
# Anthropic (optional – required if using Anthropic models)
export ANTHROPIC_API_KEY=your_anthropic_api_key_here
```
> **Security Note**: Never commit API keys to version control. Always use environment variables or secure secret management.
### Basic Usage
```python
import os

from graphbit import LlmConfig, Executor, Workflow, Node, tool

# Initialize and configure
config = LlmConfig.openai(os.getenv("OPENAI_API_KEY"), "gpt-4o-mini")

# Create executor
executor = Executor(config)

# Create tools with clear descriptions for LLM selection
@tool(_description="Get current weather information for any city")
def get_weather(location: str) -> dict:
    return {"location": location, "temperature": 22, "condition": "sunny"}

@tool(_description="Perform mathematical calculations and return results")
def calculate(expression: str) -> str:
    return f"Result: {eval(expression)}"  # demo only: eval is unsafe on untrusted input

# Build workflow
workflow = Workflow("Analysis Pipeline")

# Create agent nodes
smart_agent = Node.agent(
    name="Smart Agent",
    prompt="What's the weather in Paris and calculate 15 + 27?",
    system_prompt="You are an assistant skilled in weather lookup and math calculations. Use tools to answer queries accurately.",
    tools=[get_weather, calculate]
)

processor = Node.agent(
    name="Data Processor",
    prompt="Process the results obtained from Smart Agent.",
    system_prompt="""You process and organize results from other agents.
    - Summarize and clarify key points
    - Structure your output for easy reading
    - Focus on actionable insights
    """
)

# Connect and execute
id1 = workflow.add_node(smart_agent)
id2 = workflow.add_node(processor)
workflow.connect(id1, id2)

result = executor.execute(workflow)
print(f"Workflow completed: {result.is_success()}")
print("\nSmart Agent Output: \n", result.get_node_output("Smart Agent"))
print("\nData Processor Output: \n", result.get_node_output("Data Processor"))
```
**[Watch Quick Start Video Tutorial](https://youtu.be/ti0wbHFKKFM?si=hnxi-1W823z5I_zs)** - Complete walkthrough: Install GraphBit via PyPI, setup, and run your first workflow
**[Watch: Building Your First Agent Workflow](https://www.youtube.com/watch?v=gKvkMc2qZcA)** - Advanced tutorial on creating multi-agent workflows with GraphBit
---
## High-Level Architecture
<p align="center">
<img src="https://raw.githubusercontent.com/InfinitiBit/graphbit/092af98af8bd0f00ec924d3879dbcb98353cfd8d/assets/architecture.svg" width="600" alt="GraphBit Architecture">
</p>
Three-tier design for reliability and performance:
- **Rust Core** - Workflow engine, agents, and LLM providers
- **Orchestration Layer** - Project management and execution
- **Python API** - PyO3 bindings with async support
---
## Python API Integrations
GraphBit provides a rich Python API for building and integrating agentic workflows:
- **LLM Clients** - Multi-provider integrations (OpenAI, Anthropic, Azure, and more)
- **Workflows** - Define and manage multi-agent workflow graphs with state management
- **Nodes** - Agent nodes, tool nodes, and custom workflow components
- **Executors** - Workflow execution engine with configuration management
- **Tool System** - Function decorators, registry, and execution framework
- **Embeddings** - Vector embeddings for semantic search and retrieval
- **Document Loaders** - Load and parse documents (PDF, DOCX, TXT, JSON, CSV, XML, HTML)
- **Text Splitters** - Split documents into chunks (character, token, sentence, recursive)
For the complete list of classes, methods, and usage examples, see the [Python API Reference](https://docs.graphbit.ai/).
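As a plain-Python illustration of what the character-based splitter above does (a generic sketch of fixed-size chunking with overlap, not GraphBit's actual `Text Splitters` API):

```python
def split_characters(text: str, chunk_size: int, overlap: int = 0) -> list[str]:
    """Split text into fixed-size character chunks with optional overlap."""
    if chunk_size <= 0 or overlap >= chunk_size:
        raise ValueError("chunk_size must be positive and larger than overlap")
    step = chunk_size - overlap
    # Successive windows advance by (chunk_size - overlap) characters
    return [text[i:i + chunk_size] for i in range(0, len(text), step) if text[i:i + chunk_size]]

chunks = split_characters("abcdefghij", chunk_size=4, overlap=2)
print(chunks)  # ['abcd', 'cdef', 'efgh', 'ghij', 'ij']
```

Token- and sentence-based splitters follow the same pattern with a different unit of measurement; see the API reference for the real classes and parameters.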
**[Watch: GraphBit Observability & Tracing](https://www.youtube.com/watch?v=nzwrxSiRl2U)** - Learn how to monitor and trace your AI workflows
---
## Contributing to GraphBit
We welcome contributions. To get started, please see the [Contributing](https://github.com/InfinitiBit/graphbit/blob/main/CONTRIBUTING.md) file for development setup and guidelines.
GraphBit is built by a wonderful community of researchers and engineers.
---
## Security
GraphBit is committed to maintaining security standards for our agentic framework. We recommend:
- Using environment variables for API keys
- Keeping GraphBit updated to the latest version
- Using proper secret management for production environments
If you discover a security vulnerability, please report it responsibly through [GitHub Security](https://github.com/InfinitiBit/graphbit/security) or via email rather than creating a public issue.
For detailed reporting procedures and response timelines, see our [Security Policy](https://github.com/InfinitiBit/graphbit/blob/main/SECURITY.md).
---
## License
GraphBit is licensed under the Apache License, Version 2.0.
For complete terms and conditions, see the [Full License](https://github.com/InfinitiBit/graphbit/blob/main/LICENSE.md).
---
<div align="center">
**Copyright © 2023–2026 InfinitiBit GmbH. All rights reserved.**
</div>
| text/markdown; charset=UTF-8; variant=GFM | null | InfinitiBit Team <contact@infinitibit.ai> | null | InfinitiBit Team <contact@infinitibit.ai> | null | ai, agents, workflow, automation, rust, python, llm, orchestration | [
"Development Status :: 4 - Beta",
"License :: OSI Approved :: Apache Software License",
"Intended Audience :: Developers",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Langua... | [] | https://graphbit.ai | null | <3.14,>=3.9 | [] | [] | [] | [
"aiofiles>=23.0.0",
"aiohttp>=3.12.15",
"python-dotenv>=1.0.0",
"rich>=13.0.0",
"typer>=0.9.0",
"huggingface-hub>=0.33.4",
"numpy>=1.24.0",
"litellm>=1.80.5"
] | [] | [] | [] | [
"Bug Tracker, https://github.com/InfinitiBit/graphbit/issues",
"Changelog, https://github.com/InfinitiBit/graphbit/blob/main/CHANGELOG.md",
"Discord, https://discord.com/invite/huVJwkyu",
"Documentation, https://docs.graphbit.ai",
"Homepage, https://github.com/InfinitiBit/graphbit",
"Repository, https://g... | twine/6.1.0 CPython/3.13.7 | 2026-02-19T05:30:11.306151 | graphbit-0.6.5.tar.gz | 266,769 | 38/0b/cad8e12e840b7a6a7431c7f56a968b2be9f055c506109c729344ec6a1905/graphbit-0.6.5.tar.gz | source | sdist | null | false | 4c62b94874e696b998f7e0b45f0a3610 | 283b98e5555d5c48317a2edc67a079fc7791e7b686a67f695012e82c3ebb2f88 | 380bcad8e12e840b7a6a7431c7f56a968b2be9f055c506109c729344ec6a1905 | null | [
"LICENSE.md"
] | 655 |
2.4 | mipcandy | 1.1.1a0 | A Candy for Medical Image Processing | # MIP Candy: A Candy for Medical Image Processing





MIP Candy is Project Neura's next-generation infrastructure framework for medical image processing. It defines a handful
of common network architectures with their corresponding training, inference, and evaluation pipelines that are
ready to use out of the box. It also provides integrations with popular frontend dashboards such as
Notion, WandB, and TensorBoard.
We provide a flexible and extensible framework for medical image processing researchers to quickly prototype their
ideas. MIP Candy takes care of all the rest, so you can focus on only the key experiment designs.
:link: [Home](https://mipcandy.projectneura.org)
:link: [Docs](https://mipcandy-docs.projectneura.org)
## Key Features
Why MIP Candy? :thinking:
<details>
<summary>Easy adaptation to fit your needs</summary>
We provide tons of easy-to-use techniques for training that seamlessly support your customized experiments.
- Sliding window
- ROI inspection
- ROI cropping to align dataset shape (100% or 33% foreground)
- Automatic padding
- ...
You only need to override one method to create a trainer for your network architecture.
```python
from typing import override

from torch import nn

from mipcandy import SegmentationTrainer


class MyTrainer(SegmentationTrainer):
    @override
    def build_network(self, example_shape: tuple[int, ...]) -> nn.Module:
        ...
```
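The sliding-window technique from the list above can be sketched in NumPy (a simplified 1D illustration of the idea, not MIP Candy's implementation): overlapping windows are predicted independently and overlaps are averaged back into the full output.

```python
import numpy as np

def sliding_window_average(image: np.ndarray, window: int, stride: int, predict) -> np.ndarray:
    """Run `predict` on overlapping 1D windows and average the overlapping regions."""
    out = np.zeros_like(image, dtype=float)
    counts = np.zeros_like(image, dtype=float)
    starts = list(range(0, max(len(image) - window, 0) + 1, stride))
    # Ensure the final window reaches the end of the signal
    if starts[-1] + window < len(image):
        starts.append(len(image) - window)
    for s in starts:
        out[s:s + window] += predict(image[s:s + window])
        counts[s:s + window] += 1
    return out / counts

signal = np.arange(8, dtype=float)
# With an identity predictor, averaging the overlaps recovers the input exactly
result = sliding_window_average(signal, window=4, stride=2, predict=lambda x: x)
```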
</details>
<details>
<summary>Satisfying command-line UI design</summary>
<img src="home/assets/cli-ui.png" alt="cmd-ui"/>
</details>
<details>
<summary>Built-in 2D and 3D visualization for intuitive understanding</summary>
<img src="home/assets/visualization.png" alt="visualization"/>
</details>
<details>
<summary>High availability with interruption tolerance</summary>
Interrupted experiments can be resumed with ease.
<img src="home/assets/recovery.png" alt="recovery"/>
</details>
<details>
<summary>Support of various frontend platforms for remote monitoring</summary>
MIP Candy supports [Notion](https://mipcandy-projectneura.notion.site), WandB, and TensorBoard.
<img src="home/assets/notion.png" alt="notion"/>
</details>
## Installation
Note that MIP Candy requires **Python >= 3.12**.
```shell
pip install "mipcandy[standard]"
```
## Quick Start
Below is a simple example of nnU-Net-style training. The batch size is set to 1 because the samples in the
dataset vary in shape, although you can use a `ROIDataset` to align the shapes.
```python
from typing import override

import torch
from torch.utils.data import DataLoader

from mipcandy import download_dataset, NNUNetDataset
from mipcandy_bundles.unet import UNetTrainer


class PH2(NNUNetDataset):
    @override
    def load(self, idx: int) -> tuple[torch.Tensor, torch.Tensor]:
        image, label = super().load(idx)
        return image.squeeze(0).permute(2, 0, 1), label


download_dataset("nnunet_datasets/PH2", "tutorial/datasets/PH2")
dataset, val_dataset = PH2("tutorial/datasets/PH2", device="cuda").fold()
dataloader = DataLoader(dataset, 1, shuffle=True)
val_dataloader = DataLoader(val_dataset, 1, shuffle=False)
trainer = UNetTrainer("tutorial", dataloader, val_dataloader, device="cuda")
trainer.train(1000, note="a nnU-Net style example")
``` | text/markdown | null | Project Neura <central@projectneura.org> | null | null | null | null | [] | [] | null | null | >=3.12 | [] | [] | [] | [
"matplotlib",
"numpy",
"pandas",
"psutil",
"ptflops",
"pyyaml",
"requests",
"rich",
"safetensors",
"simpleitk",
"torch",
"torchvision",
"mipcandy-bundles; extra == \"all\"",
"pyvista; extra == \"all\"",
"pyvista; extra == \"standard\""
] | [] | [] | [] | [
"Homepage, https://mipcandy.projectneura.org",
"Documentation, https://mipcandy-docs.projectneura.org",
"Repository, https://github.com/ProjectNeura/MIPCandy"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-19T05:29:54.484985 | mipcandy-1.1.1a0.tar.gz | 39,525 | 7a/8a/d28fe0644553f65db12389908a5c31ebfc885ddfd705237c8a55c9e9cef8/mipcandy-1.1.1a0.tar.gz | source | sdist | null | false | de1aafdc7cc99a096befc77a1a78d97d | 8981a34c1b17b19053e70f0c51278fcefa8a3522cced17ddcc5b09cc821e4b80 | 7a8ad28fe0644553f65db12389908a5c31ebfc885ddfd705237c8a55c9e9cef8 | Apache-2.0 | [
"LICENSE"
] | 228 |
2.4 | clrz | 0.2.1 | Add your description here | # CLRZ
Colorize standard output for better readability.
## Usage
To use this tool, simply preface another command with `clrz`.
For example, to colorize the output of `go test`:
```bash
clrz go test -v
```
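Under the hood, a colorizer like this reads the child process's output line by line and wraps matching lines in ANSI escape codes (a minimal sketch of the concept, not clrz's actual rules):

```python
import re

GREEN, RED, RESET = "\033[32m", "\033[31m", "\033[0m"

def colorize_line(line: str) -> str:
    """Wrap pass/fail markers in ANSI color codes; leave other lines unchanged."""
    if re.search(r"\bPASS\b|\bok\b", line):
        return f"{GREEN}{line}{RESET}"
    if re.search(r"\bFAIL\b|\berror\b", line):
        return f"{RED}{line}{RESET}"
    return line

print(colorize_line("--- PASS: TestFoo"))   # printed in green
print(colorize_line("plain output"))        # printed unchanged
```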
## Installation
You can install CLRZ via `uv`:
```bash
uv tool install clrz
```
You can run it without installing:
```bash
uv tool run clrz
```
On Windows, you can use [Scoop](https://scoop.sh/):
```bash
scoop bucket add maciak https://github.com/maciakl/bucket
scoop update
scoop install clrz
```
| text/markdown | null | null | null | null | null | null | [] | [] | null | null | >=3.11 | [] | [] | [] | [] | [] | [] | [] | [] | uv/0.8.3 | 2026-02-19T05:29:32.776129 | clrz-0.2.1.tar.gz | 2,366 | 21/e8/bac287a6460ae07220b9ea90d9048073b9f8a6b79d4ef990810375b60875/clrz-0.2.1.tar.gz | source | sdist | null | false | 65ccd6c187cd1c3569c5c81d7961addc | f1fc3a27b04b7fe44e2507a9fbad0eeee14628f1d5121d6fe044a4acdb9c53c7 | 21e8bac287a6460ae07220b9ea90d9048073b9f8a6b79d4ef990810375b60875 | null | [] | 282 |
2.4 | scomv | 0.1.1 | Spatial omics analysis tools for cell/gene clustering from a standard region | # SpatialCompassV
<p align="left">
<img
src="https://raw.githubusercontent.com/RyosukeNomural/SpatialCompassV/main/images/logo.png"
width="370"
height="145"
alt="SCOMV logo"
/>
</p>

[](https://spatialcompassv.readthedocs.io/en/latest/?badge=latest)
Spatial omics analysis tools for cell/gene clustering from a standard region
* PyPI package: https://pypi.org/project/scomv/
* Free software: MIT License
* Documentation: https://spatialcompassv.readthedocs.io
## Overview of the SpatialCompassV (SCOMV) Workflow
The overall workflow of **SpatialCompassV (SCOMV)** is summarized as follows:
- **Extraction of a reference region**
A reference region (e.g., a tumor region) is identified using the **[SpatialKnifeY (SKNY)](https://github.com/shusakai/skny)** algorithm.
### Vector construction from spatial grids
<table border="0" style="border-collapse: collapse; border: none;">
<tr>
<td style="vertical-align: top; padding-right: 14px; border: none;">
The AnnData object is discretized into spatial grids, and for each grid,
the shortest-distance vector to the reference region is computed.
</td>
<td style="vertical-align: top; width: 200px; border: none;">
<img width="200" alt="vector"
src="https://raw.githubusercontent.com/RyosukeNomural/SpatialCompassV/main/images/vector.png" />
</td>
</tr>
</table>
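The per-grid computation can be sketched with NumPy (a brute-force illustration of the idea, not SCOMV's implementation): for each grid point, find the nearest point of the reference region and store the offset vector to it.

```python
import numpy as np

def shortest_vectors(grid_points: np.ndarray, region_points: np.ndarray) -> np.ndarray:
    """For each grid point, return the vector to its nearest reference-region point."""
    # Pairwise offsets: shape (n_grid, n_region, 2)
    diff = region_points[None, :, :] - grid_points[:, None, :]
    dists = np.linalg.norm(diff, axis=2)
    nearest = dists.argmin(axis=1)
    return diff[np.arange(len(grid_points)), nearest]

grid = np.array([[0.0, 0.0], [3.0, 4.0]])
region = np.array([[0.0, 1.0], [3.0, 0.0]])
vecs = shortest_vectors(grid, region)
print(vecs)  # [[0., 1.], [0., -4.]]
```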
<table border="0" style="border-collapse: collapse; border: none;">
<tr>
<td style="vertical-align: top; padding-right: 14px; border: none;">
This vector information is stored for each cell/gene and projected onto a
<b>polar coordinate map</b>.
Both the horizontal and vertical axes represent distance.
Distances are defined as negative for locations inside the reference region.
</td>
<td style="vertical-align: top; border: none;">
<img
alt="polar_map"
src="https://raw.githubusercontent.com/RyosukeNomural/SpatialCompassV/main/images/polar.png"
style="width:400px; height:auto; display:block;"
/>
</td>
</tr>
</table>
<table border="0" style="border-collapse: collapse; border: none;">
<tr>
<td style="vertical-align: top; padding-right: 14px; border: none;">
A <b>similarity matrix</b> is then constructed, followed by <b>PCoA and clustering</b>,
to classify spatial distribution patterns.
</td>
<td style="vertical-align: top; border: none;">
<img
alt="PCoA"
src="https://raw.githubusercontent.com/RyosukeNomural/SpatialCompassV/main/images/pcoa.png"
style="width:600px; height:auto; display:block;"
/>
</td>
</tr>
</table>
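The PCoA step can be sketched as classical multidimensional scaling on the pairwise distance matrix (a generic NumPy illustration, not SCOMV's code): double-center the squared distances, eigendecompose, and keep the top components.

```python
import numpy as np

def pcoa(distances: np.ndarray, n_components: int = 2) -> np.ndarray:
    """Classical MDS: embed points from a pairwise distance matrix."""
    n = distances.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n      # centering matrix
    B = -0.5 * J @ (distances ** 2) @ J      # double-centered Gram matrix
    eigvals, eigvecs = np.linalg.eigh(B)
    order = np.argsort(eigvals)[::-1][:n_components]
    # Scale eigenvectors by sqrt of (non-negative) eigenvalues
    return eigvecs[:, order] * np.sqrt(np.maximum(eigvals[order], 0))

# Three collinear points at positions 0, 1, 3 give these pairwise distances
D = np.array([[0.0, 1.0, 3.0], [1.0, 0.0, 2.0], [3.0, 2.0, 0.0]])
coords = pcoa(D, n_components=1)
# The 1D embedding reproduces the input distances (up to sign)
```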
- **Integration across multiple fields of view**
By integrating results from multiple regions of interest, clustering of the reference region itself (e.g., tumor malignancy states) can be performed.
- Gene-wise contributions are calculated using **PCA**, enabling the identification of **spatially differentially expressed genes (Spatial DEGs)**.
### Additional functionality
- Gene distributions can also be visualized as **3D density maps**, allowing direct comparison of the spatial distributions of two genes.
<p>
<img src="https://raw.githubusercontent.com/RyosukeNomural/SpatialCompassV/main/images/overview.png"
alt="overview"
width="700"/>
</p>
## Credits
This package was created with [Cookiecutter](https://github.com/audreyfeldroy/cookiecutter) and the [audreyfeldroy/cookiecutter-pypackage](https://github.com/audreyfeldroy/cookiecutter-pypackage) project template.
| text/markdown | null | Ryosuke Nomura <nomubare123@g.ecc.u-tokyo.ac.jp> | null | Ryosuke Nomura <nomubare123@g.ecc.u-tokyo.ac.jp> | MIT | null | [] | [] | null | null | >=3.11 | [] | [] | [] | [
"adjustText",
"anndata==0.10.5.post1",
"dask==2024.8.0",
"dask-expr==1.1.10",
"fsspec==2023.6.0",
"geopandas==1.0.1",
"llvmlite==0.41.1",
"matplotlib",
"numba==0.58.1",
"numcodecs==0.12.1",
"numpy==1.26.4",
"ome-zarr==0.10.2",
"opencv-python==4.11.0.86",
"pandas==2.3.0",
"plotly",
"pyo... | [] | [] | [] | [
"bugs, https://github.com/RyosukeNomural/scomv/issues",
"changelog, https://github.com/RyosukeNomural/scomv/blob/master/changelog.md",
"homepage, https://github.com/RyosukeNomural/scomv"
] | twine/6.2.0 CPython/3.11.13 | 2026-02-19T05:28:06.742070 | scomv-0.1.1.tar.gz | 2,036,657 | cd/3f/d6c9cf598f6f98c6ee1466ff1d19dae73d98a7192dcc5b245e62fccd50da/scomv-0.1.1.tar.gz | source | sdist | null | false | a0cb64046da6ac5f241fb588f93f1839 | 1f59992abb5496b21ac97f50209e684946af4a9503e9f38df74a100a1a4c58be | cd3fd6c9cf598f6f98c6ee1466ff1d19dae73d98a7192dcc5b245e62fccd50da | null | [
"LICENSE"
] | 252 |
2.4 | pyflowreg | 0.1.0a9 | Variational optical-flow motion correction for 2-photon microscopy videos and 3D scans | [](https://pypi.org/project/pyflowreg/)
[](https://pypi.org/project/pyflowreg/)
[](LICENSE)
[](https://pypistats.org/packages/pyflowreg)
[](https://pepy.tech/projects/pyflowreg)[](https://github.com/FlowRegSuite/pyflowreg/actions/workflows/pypi-release.yml)
[](https://pyflowreg.readthedocs.io/en/latest/?badge=latest)
## 🚧 Under Development
This project is still in an **alpha stage**. Expect rapid changes, incomplete features, and possible breaking updates between releases.
- The API may evolve as we stabilize core functionality.
- Documentation and examples are incomplete.
- Feedback and bug reports are especially valuable at this stage.
# <img src="https://raw.githubusercontent.com/FlowRegSuite/pyflowreg/refs/heads/main/img/flowreglogo.png" alt="FlowReg logo" height="64"> PyFlowReg
Python bindings for Flow-Registration - variational optical-flow motion correction for 2-photon (2P) microscopy videos and volumetric 3D scans.
Derived from the Flow-Registration toolbox for compensation and stabilization of multichannel microscopy videos. The original implementation spans MATLAB, Java (ImageJ/Fiji plugin), and C++. See the [publication](https://doi.org/10.1002/jbio.202100330) and the [project website](https://www.snnu.uni-saarland.de/flow-registration/) for method details and video results.
**[📖 Read the Documentation](https://pyflowreg.readthedocs.io/)**
**Related projects**
- Original Flow-Registration repo: https://github.com/FlowRegSuite/flow_registration
- ImageJ/Fiji plugin: https://github.com/FlowRegSuite/flow_registration_IJ
- Napari plugin: https://github.com/FlowRegSuite/napari-flowreg

## Requirements
This package requires Python 3.10 or higher.
Initialize the environment with
```bash
mamba create --name pyflowreg python=3.10
mamba activate pyflowreg
pip install -r requirements.txt
```
or on Windows
```bash
pip install -r requirements_win.txt
```
to enable Sutter MDF file support.
## Installation via pip and mamba
```bash
mamba create --name pyflowreg python=3.10
pip install pyflowreg
```
To install the project with full visualization support, you can install it with the ```vis``` extra:
```bash
pip install pyflowreg[vis]
```
## Getting started
This repository contains demo scripts under ```experiments``` and
demo notebooks under ```notebooks```. The demos with the jupiter sequence should run out of the box.
The plugin supports most commonly used file types, such as HDF5, TIFF stacks, and MATLAB ```.mat``` files. To run the motion compensation, the options are defined in an ```OF_options``` object.
The Python version of Flow-Registration aims at full MATLAB compatibility; any missing functionality should be reported as an issue. The API is designed to be similar to the original MATLAB code, with some adjustments for Python conventions.
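As a toy illustration of what motion compensation does, the sketch below aligns a shifted frame back to a reference using integer-shift FFT cross-correlation. This is a deliberately simplified stand-in: pyflowreg itself uses variational optical flow with subpixel, non-rigid displacement fields.

```python
import numpy as np

def estimate_shift(reference: np.ndarray, frame: np.ndarray) -> tuple[int, int]:
    """Estimate the integer (dy, dx) shift that realigns `frame` to `reference`."""
    corr = np.fft.ifft2(np.fft.fft2(reference) * np.conj(np.fft.fft2(frame))).real
    dy, dx = np.unravel_index(corr.argmax(), corr.shape)
    # Map the circular peak location to signed shifts
    h, w = reference.shape
    if dy > h // 2:
        dy -= h
    if dx > w // 2:
        dx -= w
    return int(dy), int(dx)

rng = np.random.default_rng(0)
ref = rng.random((32, 32))
moved = np.roll(ref, shift=(3, -2), axis=(0, 1))  # simulate motion
dy, dx = estimate_shift(ref, moved)
corrected = np.roll(moved, shift=(dy, dx), axis=(0, 1))
print(dy, dx)  # -3 2 (the shift that undoes the simulated motion)
```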
## Development
### Code Quality & Pre-commit Hooks
We use [pre-commit](https://pre-commit.com) for automated code quality checks before each commit. Our hooks are centralized in [FlowRegSuite/flowreg-hooks](https://github.com/FlowRegSuite/flowreg-hooks) for consistency across all FlowRegSuite projects.
**What's checked:**
- Ruff linting and formatting (Python code style)
- NumPy docstring validation (documentation standards)
- README image URLs (PyPI compatibility) are checked and automatically replaced
- YAML/TOML validation and general file hygiene
**Setup:**
```bash
# Install pre-commit
pip install pre-commit
# Install the git hook scripts
pre-commit install
# Run on all files to check current status
pre-commit run --all-files
```
**Alternative installations (without pip):**
If you're using pipx (isolated tool installation):
```bash
# Install pipx if not available
python -m pip install --user pipx
python -m pipx ensurepath
# Install pre-commit via pipx
pipx install pre-commit
# For uv support (optional, if using uv as package installer)
pipx inject pre-commit pre-commit-uv
# Then install hooks as usual
pre-commit install
```
Or using uv's tool management directly:
```bash
# Install uv if not available
pip install uv
# Install pre-commit as an isolated tool
uv tool install pre-commit
# Then install hooks as usual
pre-commit install
```
**Daily usage:**
Simply commit as usual - checks run automatically. Failed hooks show what needs fixing, and many issues are auto-corrected.
```bash
# Manual run when needed
pre-commit run --all-files
# Skip hooks in emergency (use sparingly)
git commit -m "message" --no-verify
```
**Troubleshooting (Windows/Anaconda):**
If you encounter registry or path errors on Windows, try:
```powershell
# Clear pre-commit cache
pre-commit clean
# Set environment variable (if needed)
$env:PROGRAMDATA = "C:\ProgramData"
# Try again
pre-commit run --all-files
```
**Updating hooks** (maintainers only):
```bash
pre-commit autoupdate
git add .pre-commit-config.yaml
git commit -m "chore: update pre-commit hooks"
```
### Docstring Standards
We follow NumPy-style docstrings for compatibility with Sphinx documentation. The `numpydoc-validation` hook ensures consistency. See the [NumPy docstring guide](https://numpydoc.readthedocs.io/en/latest/format.html) for examples.
## Dataset
The dataset which we used for our evaluations is available as [2-Photon Movies with Motion Artifacts](https://drive.google.com/drive/folders/1fPdzQo5SiA-62k4eHF0ZaKJDt1vmTVed?usp=sharing).
## Citation
Details on the original method and video results can be found [here](https://www.snnu.uni-saarland.de/flow-registration/).
If you use parts of this code or the plugin for your work, please cite
> “Pyflowreg,” (in preparation), 2025.
and for Flow-Registration
> P. Flotho, S. Nomura, B. Kuhn and D. J. Strauss, “Software for Non-Parametric Image Registration of 2-Photon Imaging Data,” J Biophotonics, 2022. [doi:https://doi.org/10.1002/jbio.202100330](https://doi.org/10.1002/jbio.202100330)
BibTeX entry
```
@article{flotea2022a,
author = {Flotho, P. and Nomura, S. and Kuhn, B. and Strauss, D. J.},
title = {Software for Non-Parametric Image Registration of 2-Photon Imaging Data},
year = {2022},
journal = {J Biophotonics},
doi = {https://doi.org/10.1002/jbio.202100330}
}
```
| text/markdown | null | Philipp Flotho <Philipp.Flotho@uni-saarland.de> | null | null | CC BY-NC-SA 4.0 | null | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Science/Research",
"License :: Other/Proprietary License",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: ... | [] | null | null | >=3.10 | [] | [] | [] | [
"numpy>=2.0.0",
"scipy>=1.14.0",
"scikit-image>=0.24.0",
"opencv-contrib-python>=4.10.0",
"h5py>=3.12.0",
"tifffile>=2024.9.0",
"numba>=0.60.0",
"pydantic<3.0.0,>=2.11.0",
"hdf5storage>=0.2.0",
"pywin32>=311; platform_system == \"Windows\"",
"pyyaml>=6.0",
"Pillow>=10.0",
"tomli>=2.0.0; pyth... | [] | [] | [] | [] | twine/6.1.0 CPython/3.13.7 | 2026-02-19T05:28:01.687262 | pyflowreg-0.1.0a9.tar.gz | 4,221,780 | af/a1/5348d246bc1a43affdd5bbd291dea85cdd6acf90db5648a81883d958b041/pyflowreg-0.1.0a9.tar.gz | source | sdist | null | false | cc6cfbf3cdf91b9ed45612706616fd76 | 07c5490e9bbb9e64bd5ac45ea61d5f9dfaa95a9dff9c66d11aebd81654d43cf8 | afa15348d246bc1a43affdd5bbd291dea85cdd6acf90db5648a81883d958b041 | null | [] | 308 |
2.4 | chuscraper | 0.19.3 | A blazing fast, async-first, undetectable webscraping/web automation framework | <p align="center">
<img src="https://i.ibb.co/HLyG7BBK/Chat-GPT-Image-Feb-16-2026-11-13-14-AM.png" alt="Chuscraper Logo" width="180" />
</p>
<h1 align="center">🕷️ Chuscraper</h1>
<p align="center">
<strong>Stealth-focused web scraping & automation framework powered by CDP</strong><br/>
You Only Scrape Once — data extraction made smarter, faster, and more resilient.
</p>
<p align="center">
<a href="https://pypi.org/project/chuscraper/"><img src="https://static.pepy.tech/personalized-badge/chuscraper?period=total&units=INTERNATIONAL_SYSTEM&left_color=BLACK&right_color=GREEN&left_text=downloads"/></a>
<a href="https://opensource.org/licenses/AGPL-3.0"><img src="https://img.shields.io/badge/License-AGPL%20v3-blue.svg?style=for-the-badge"/></a>
<a href="https://github.com/ToufiqQureshi/chuscraper"><img src="https://img.shields.io/badge/GitHub-Trending-blue?style=for-the-badge&logo=github"/></a>
</p>
---
## 🚀 What is Chuscraper?
Chuscraper is a Python web scraping & automation library that uses **CDP (Chrome DevTools Protocol)** to extract structured data, interact with pages, and automate workflows — with a heavy focus on **Anti-Detection** and **Stealth**.
It converts standard Chromium instances into undetectable agents that can bypass bot verification systems.
---
## 🌟 Features
### 🕵️♂️ Stealth & Anti-Detection
- Hides `navigator.webdriver` and rotates user agents
- Canvas/WebGL noise + hardware spoofing
- Timezone & geolocation spoofing
### ⚡ Async + Fast
Built on async CDP, low overhead, no heavy browser bundles.
### 🔄 Flexible Outputs
Supports JSON, CSV, Markdown, Excel, Pydantic, and more.
---
## 📦 Installation
```bash
pip install chuscraper
```
> [!TIP]
> Use within a virtual environment to avoid conflicts.
---
## 💻 Quick Start (The "Easy" Way)
Chuscraper is designed for **Zero Boilerplate**. You don't need complex configuration objects just to start a stealthy session.
```python
import asyncio

import chuscraper as zd


async def main():
    # DIRECT START: Specify stealth, proxy, or headless directly in start()
    async with await zd.start(headless=False, stealth=True) as browser:
        # 🟢 BROWSER-LEVEL SHORTCUT
        await browser.goto("https://www.makemytrip.com/")

        # 🟢 INTUITIVE ALIASES (goto, title, select_text)
        page = browser.main_tab
        await page.goto("https://example.com")
        title = await page.title()
        header = await page.select_text("h1")

        print(f"Title: {title}")
        print(f"Header: {header}")

if __name__ == "__main__":
    asyncio.run(main())
```
> [!NOTE]
> `chuscraper` automatically handles Chrome process cleanup and Local Proxy lifecycle.
---
## ⚙️ Configuration Switches (Parameters)
Chuscraper gives you full control via `zd.start()`. Here are the powerful switches you can use:
### 🛠️ Core Switches
| Switch | Description | Default |
| :--- | :--- | :--- |
| `headless` | Run without a visible window (`True`/`False`) | `False` |
| `stealth` | **Master Switch** for Anti-Detection features | `False` |
| `user_data_dir` | Path to save/load browser profile (keep logins/cookies) | `Temp` |
| `proxy` | Proxy URL (e.g. `http://user:pass@host:port`) | `None` |
### 🚀 Advanced Switches
| Switch | Description |
| :--- | :--- |
| `browser_executable_path` | Custom path to Chrome/Brave binary |
| `user_agent` | Spoof specific User-Agent string |
| `sandbox` | Set `False` for Linux/Docker environments |
| `disable_webgl` | Disable graphics for performance (`True`) |
| `disable_webrtc` | Prevent IP leaks via WebRTC (`True` recommended for proxies) |
| `lang` | Browser language (e.g., `en-US`, `hi-IN`) |
### 🕵️♂️ Granular Stealth Options
When `stealth=True`, you can fine-tune specific patches by passing a `stealth_options` dict:
```python
await zd.start(stealth=True, stealth_options={
    "patch_webdriver": True,  # Hide WebDriver
    "patch_webgl": True,      # Spoof graphics card
    "patch_canvas": True,     # Add canvas noise
    "patch_audio": False      # Disable audio-fingerprinting noise
})
```
---
## 🛡️ Stealth & Anti-Detection Proof
We don't just claim to be stealthy; we prove it. Below are the results from top anti-bot detection suites, all passed with **100% "Human" status**.
👉 **[View Full Visual Proofs & Screenshots Here](docs/STEALTH_PROOF.md)**
| Detection Suite | Result | Status |
| ------------------------- | ------------------------ | ------- |
| **SannySoft** | No WebDriver detected | ✅ Pass |
| **BrowserScan** | 100% Trust Score | ✅ Pass |
| **PixelScan** | Consistent Fingerprint | ✅ Pass |
| **IPHey** | Software Clean (Green) | ✅ Pass |
| **CreepJS** | 0% Stealth / 0% Headless | ✅ Pass |
| **Fingerprint.com** | No Bot Detected | ✅ Pass |
### 🌍 Real-World Protection Bypass
We tested `chuscraper` against live websites protected by major security providers:
| Provider | Target | Result |
| -------------------- | ----------------------- | ----------------------- |
| **Cloudflare** | Turnstile Demo | ✅ Solved Automatically |
| **DataDome** | Antoine Vastel Research | ✅ Accessed |
| **Akamai** | Nike Product Page | ✅ Bypassed |
---
## 📖 Documentation
Full technical guides are available in the `docs/` folder:
- [English (Main)](README.md)
- [Production Readiness](website/docs/production.md)
- [Project API Guide](docs/api_guide_v2.md)
- [Stealth Comparison](docs/stealth_comparison.md)
*Translations (Chinese, Japanese, etc.) coming soon.*
## 💖 Support & Sponsorship
`chuscraper` is an open-source project maintained by [Toufiq Qureshi]. If the library has helped you or your business, please consider supporting its development:
- **GitHub Sponsors**: [Sponsor me on GitHub](https://github.com/sponsors/ToufiqQureshi)
- **Corporate Sponsorship**: If you are a **Proxy Provider** or **Data Company**, we offer featured placement in our documentation. Contact us for partnership opportunities.
- **Custom Scraping Solutions**: Need a private, high-performance scraper? We offer professional consulting.
---
## 🛠️ Contributing
Want to contribute? Open an issue or send a pull request — all levels welcome! Please follow the `CONTRIBUTING.md` guidelines.
---
## 📜 License
Chuscraper is licensed under the **AGPL-3.0 License**. This ensures that any software using Chuscraper must also be open-source, protecting the community and your freedom.
Made with ❤️ by [Toufiq Qureshi]
| text/markdown | null | Toufiq Qureshi <toufiq@neurofiq.in> | null | null | GNU AFFERO GENERAL PUBLIC LICENSE
Version 3, 19 November 2007
Copyright (C) 2007 Free Software Foundation, Inc. <https://fsf.org/>
Everyone is permitted to copy and distribute verbatim copies
of this license document, but changing it is not allowed.
Preamble
The GNU Affero General Public License is a free, copyleft license for
software and other kinds of works, specifically designed to ensure
cooperation with the community in the case of network server software.
The licenses for most software and other practical works are designed
to take away your freedom to share and change the works. By contrast,
the GNU General Public License is intended to guarantee your freedom to
share and change all versions of a program--to make sure it remains free
software for all its users.
When we speak of free software, we are referring to freedom, not
price. Our General Public Licenses are designed to make sure that you
have the freedom to distribute copies of free software (and charge for
them if you wish), that you receive source code or can get it if you
want it, that you can change the software or use pieces of it in new
free programs, and that you know you can do these things.
Developers that use the GNU GPL protect your rights with two steps:
(1) assert copyright on the software, and (2) offer you this License
which gives you legal permission to copy, distribute and/or modify the
software.
A secondary benefit of defending all users' freedom is that
improvements made in alternate versions of the program, if they receive
widespread use, become available for other developers to incorporate.
Many developers of free software are heartened and encouraged by the
resulting cooperation. However, in the case of software used on
network servers, this result may fail to come about. The GNU General
Public License permits making a modified version and letting the public
access it on a server without ever releasing its source code to the
public.
The GNU Affero General Public License is designed specifically to
ensure that, in such cases, the modified source code becomes available
to the community. It requires the operator of a network server to
provide the source code of the modified version running there to the
users of that server. Therefore, public use of a modified version, on
a publicly accessible server, gives the public access to the source
code of the modified version.
An older license, called the Affero General Public License and
published by Affero, was designed to accomplish similar goals. This is
a different license, not a version of the Affero GPL, but Affero has
released a new version of the Affero GPL which permits relicensing under
this license.
The precise terms and conditions for copying, distribution and
modification follow.
TERMS AND CONDITIONS
0. Definitions.
"This License" refers to version 3 of the GNU Affero General Public
License.
"Copyright" also means copyright-like laws that apply to other kinds of
works, such as semiconductor masks.
"The Program" refers to any copyrightable work licensed under this
License. Each licensee is addressed as "you". "Licensees" and
"recipients" may be individuals or organizations.
To "modify" a work means to copy from or adapt all or part of the work
in a fashion requiring copyright permission, other than the making of an
exact copy. The resulting work is called a "modified version" of the
earlier work or a work "based on" the earlier work.
A "covered work" means either the unmodified Program or a work based
on the Program.
To "propagate" a work means to do anything with it that, without
permission, would make you directly or secondarily liable for
infringement under applicable copyright law, except executing it on a
computer or modifying a private copy. Propagation includes copying,
distribution (with or without modification), making available to the
public, and in some countries other activities as well.
To "convey" a work means any kind of propagation that enables other
parties to make or receive copies. Mere interaction with a user through
a computer network, with no transfer of a copy, is not conveying.
An interactive user interface displays "Appropriate Legal Notices"
to the extent that it includes a convenient and prominently visible
feature that (1) displays an appropriate copyright notice, and (2)
tells the user that there is no warranty for the work (except to the
extent that warranties are provided), that licensees may convey the
work under this License, and how to view a copy of this License. If
the interface presents a list of user commands or options, such as a
menu, a prominent item in the list meets this criterion.
1. Source Code.
The "source code" for a work means the preferred form of the work
for making modifications to it. "Object code" means any non-source
form of a work.
A "Standard Interface" means an interface that either is an official
standard defined by a recognized standards body, or, in the case of
interfaces specified for a particular programming language, one that
is widely used among developers working in that language.
The "System Libraries" of an executable work include anything, other
than the work as a whole, that (a) is included in the normal form of
packaging a Major Component, but which is not part of that Major
Component, and (b) serves only to enable use of the work with that
Major Component, or to implement a Standard Interface for which an
implementation is available to the public in source code form. A
"Major Component", in this context, means a major essential component
(kernel, window system, and so on) of the specific operating system
(if any) on which the executable work runs, or a compiler used to
produce the work, or an object code interpreter used to run it.
The "Corresponding Source" for a work in object code form means all
the source code needed to generate, install, and (for an executable
work) run the object code and to modify the work, including scripts to
control those activities. However, it does not include the work's
System Libraries, or general-purpose tools or generally available free
programs which are used unmodified in performing those activities but
which are not part of the work. For example, Corresponding Source
includes interface definition files associated with source files for
the work, and the source code for shared libraries and dynamically
linked subprograms that the work is specifically designed to require,
such as by intimate data communication or control flow between those
subprograms and other parts of the work.
The Corresponding Source need not include anything that users
can regenerate automatically from other parts of the Corresponding
Source.
The Corresponding Source for a work in source code form is that
same work.
2. Basic Permissions.
All rights granted under this License are granted for the term of
copyright on the Program, and are irrevocable provided the stated
conditions are met. This License explicitly affirms your unlimited
permission to run the unmodified Program. The output from running a
covered work is covered by this License only if the output, given its
content, constitutes a covered work. This License acknowledges your
rights of fair use or other equivalent, as provided by copyright law.
You may make, run and propagate covered works that you do not
convey, without conditions so long as your license otherwise remains
in force. You may convey covered works to others for the sole purpose
of having them make modifications exclusively for you, or provide you
with facilities for running those works, provided that you comply with
the terms of this License in conveying all material for which you do
not control copyright. Those thus making or running the covered works
for you must do so exclusively on your behalf, under your direction
and control, on terms that prohibit them from making any copies of
your copyrighted material outside their relationship with you.
Conveying under any other circumstances is permitted solely under
the conditions stated below. Sublicensing is not allowed; section 10
makes it unnecessary.
3. Protecting Users' Legal Rights From Anti-Circumvention Law.
No covered work shall be deemed part of an effective technological
measure under any applicable law fulfilling obligations under article
11 of the WIPO copyright treaty adopted on 20 December 1996, or
similar laws prohibiting or restricting circumvention of such
measures.
When you convey a covered work, you waive any legal power to forbid
circumvention of technological measures to the extent such circumvention
is effected by exercising rights under this License with respect to
the covered work, and you disclaim any intention to limit operation or
modification of the work as a means of enforcing, against the work's
users, your or third parties' legal rights to forbid circumvention of
technological measures.
4. Conveying Verbatim Copies.
You may convey verbatim copies of the Program's source code as you
receive it, in any medium, provided that you conspicuously and
appropriately publish on each copy an appropriate copyright notice;
keep intact all notices stating that this License and any
non-permissive terms added in accord with section 7 apply to the code;
keep intact all notices of the absence of any warranty; and give all
recipients a copy of this License along with the Program.
You may charge any price or no price for each copy that you convey,
and you may offer support or warranty protection for a fee.
5. Conveying Modified Source Versions.
You may convey a work based on the Program, or the modifications to
produce it from the Program, in the form of source code under the
terms of section 4, provided that you also meet all of these conditions:
a) The work must carry prominent notices stating that you modified
it, and giving a relevant date.
b) The work must carry prominent notices stating that it is
released under this License and any conditions added under section
7. This requirement modifies the requirement in section 4 to
"keep intact all notices".
c) You must license the entire work, as a whole, under this
License to anyone who comes into possession of a copy. This
License will therefore apply, along with any applicable section 7
additional terms, to the whole of the work, and all its parts,
regardless of how they are packaged. This License gives no
permission to license the work in any other way, but it does not
invalidate such permission if you have separately received it.
d) If the work has interactive user interfaces, each must display
Appropriate Legal Notices; however, if the Program has interactive
interfaces that do not display Appropriate Legal Notices, your
work need not make them do so.
A compilation of a covered work with other separate and independent
works, which are not by their nature extensions of the covered work,
and which are not combined with it such as to form a larger program,
in or on a volume of a storage or distribution medium, is called an
"aggregate" if the compilation and its resulting copyright are not
used to limit the access or legal rights of the compilation's users
beyond what the individual works permit. Inclusion of a covered work
in an aggregate does not cause this License to apply to the other
parts of the aggregate.
6. Conveying Non-Source Forms.
You may convey a covered work in object code form under the terms
of sections 4 and 5, provided that you also convey the
Corresponding Source under the terms of this License, in one of these
ways:
a) Convey the object code in, or embodied in, a physical product
(including a physical distribution medium), accompanied by the
Corresponding Source fixed on a durable physical medium
customarily used for software interchange.
b) Convey the object code in, or embodied in, a physical product
(including a physical distribution medium), accompanied by a
written offer, valid for at least three years and valid for as
long as you offer spare parts or customer support for that product
model, to give anyone who possesses the object code either (1) a
copy of the Corresponding Source for all the software in the
product that is covered by this License, on a durable physical
medium customarily used for software interchange, for a price no
more than your reasonable cost of physically performing this
conveying of source, or (2) access to copy the
Corresponding Source from a network server at no charge.
c) Convey individual copies of the object code with a copy of the
written offer to provide the Corresponding Source. This
alternative is allowed only occasionally and noncommercially, and
only if you received the object code with such an offer, in accord
with subsection 6b.
d) Convey the object code by offering access from a designated
place (gratis or for a charge), and offer equivalent access to the
Corresponding Source in the same way through the same place at no
further charge. You need not require recipients to copy the
Corresponding Source along with the object code. If the place to
copy the object code is a network server, the Corresponding Source
may be on a different server (operated by you or a third party)
that supports equivalent copying facilities, provided you maintain
clear directions next to the object code saying where to find the
Corresponding Source. Regardless of what server hosts the
Corresponding Source, you remain obligated to ensure that it is
available for as long as needed to satisfy these requirements.
e) Convey the object code using peer-to-peer transmission, provided
you inform other peers where the object code and Corresponding
Source of the work are being offered to the general public at no
charge under subsection 6d.
A separable portion of the object code, whose source code is excluded
from the Corresponding Source as a System Library, need not be
included in conveying the object code work.
A "User Product" is either (1) a "consumer product", which means any
tangible personal property which is normally used for personal, family,
or household purposes, or (2) anything designed or sold for incorporation
into a dwelling. In determining whether a product is a consumer product,
doubtful cases shall be resolved in favor of coverage. For a particular
product received by a particular user, "normally used" refers to a
typical or common use of that class of product, regardless of the status
of the particular user or of the way in which the particular user
actually uses, or expects or is expected to use, the product. A product
is a consumer product regardless of whether the product has substantial
commercial, industrial or non-consumer uses, unless such uses represent
the only significant mode of use of the product.
"Installation Information" for a User Product means any methods,
procedures, authorization keys, or other information required to install
and execute modified versions of a covered work in that User Product from
a modified version of its Corresponding Source. The information must
suffice to ensure that the continued functioning of the modified object
code is in no case prevented or interfered with solely because
modification has been made.
If you convey an object code work under this section in, or with, or
specifically for use in, a User Product, and the conveying occurs as
part of a transaction in which the right of possession and use of the
User Product is transferred to the recipient in perpetuity or for a
fixed term (regardless of how the transaction is characterized), the
Corresponding Source conveyed under this section must be accompanied
by the Installation Information. But this requirement does not apply
if neither you nor any third party retains the ability to install
modified object code on the User Product (for example, the work has
been installed in ROM).
The requirement to provide Installation Information does not include a
requirement to continue to provide support service, warranty, or updates
for a work that has been modified or installed by the recipient, or for
the User Product in which it has been modified or installed. Access to
a network may be denied when the modification itself materially and
adversely affects the operation of the network or violates the rules and
protocols for communication across the network.
Corresponding Source conveyed, and Installation Information provided,
in accord with this section must be in a format that is publicly
documented (and with an implementation available to the public in
source code form), and must require no special password or key for
unpacking, reading or copying.
7. Additional Terms.
"Additional permissions" are terms that supplement the terms of this
License by making exceptions from one or more of its conditions.
Additional permissions that are applicable to the entire Program shall
be treated as though they were included in this License, to the extent
that they are valid under applicable law. If additional permissions
apply only to part of the Program, that part may be used separately
under those permissions, but the entire Program remains governed by
this License without regard to the additional permissions.
When you convey a copy of a covered work, you may at your option
remove any additional permissions from that copy, or from any part of
it. (Additional permissions may be written to require their own
removal in certain cases when you modify the work.) You may place
additional permissions on material, added by you to a covered work,
for which you have or can give appropriate copyright permission.
Notwithstanding any other provision of this License, for material you
add to a covered work, you may (if authorized by the copyright holders of
that material) supplement the terms of this License with terms:
a) Disclaiming warranty or limiting liability differently from the
terms of sections 15 and 16 of this License; or
b) Requiring preservation of specified reasonable legal notices or
author attributions in that material or in the Appropriate Legal
Notices displayed by works containing it; or
c) Prohibiting misrepresentation of the origin of that material, or
requiring that modified versions of such material be marked in
reasonable ways as different from the original version; or
d) Limiting the use for publicity purposes of names of licensors or
authors of the material; or
e) Declining to grant rights under trademark law for use of some
trade names, trademarks, or service marks; or
f) Requiring indemnification of licensors and authors of that
material by anyone who conveys the material (or modified versions of
it) with contractual assumptions of liability to the recipient, for
any liability that these contractual assumptions directly impose on
those licensors and authors.
All other non-permissive additional terms are considered "further
restrictions" within the meaning of section 10. If the Program as you
received it, or any part of it, contains a notice stating that it is
governed by this License along with a term that is a further
restriction, you may remove that term. If a license document contains
a further restriction but permits relicensing or conveying under this
License, you may add to a covered work material governed by the terms
of that license document, provided that the further restriction does
not survive such relicensing or conveying.
If you add terms to a covered work in accord with this section, you
must place, in the relevant source files, a statement of the
additional terms that apply to those files, or a notice indicating
where to find the applicable terms.
Additional terms, permissive or non-permissive, may be stated in the
form of a separately written license, or stated as exceptions;
the above requirements apply either way.
8. Termination.
You may not propagate or modify a covered work except as expressly
provided under this License. Any attempt otherwise to propagate or
modify it is void, and will automatically terminate your rights under
this License. However, if you cease all violation of this License,
then your license from a particular copyright holder is reinstated (a)
provisionally, unless and until the copyright holder explicitly and
finally terminates your license, and (b) permanently, if the copyright
holder fails to notify you of the violation by some reasonable means
prior to 60 days after the cessation.
Moreover, your license from a particular copyright holder is
reinstated permanently if the copyright holder notifies you of the
violation by some reasonable means, this is the first time you have
received notice of violation of this License (for any work) from that
copyright holder, and you cure the violation prior to 30 days after
your receipt of the notice.
Termination of your rights under this section does not terminate the
licenses of parties who have received copies or rights from you under
this License. If your rights have been terminated and not permanently
reinstated, you do not qualify to receive new licenses for the same
material under section 10.
9. Acceptance Not Required for Having Copies.
You are not required to accept this License in order to receive or
run a copy of the Program. Ancillary propagation of a covered work
occurring solely as a consequence of using peer-to-peer transmission
to receive a copy likewise does not require acceptance. However,
nothing other than this License grants you permission to propagate or
modify any covered work. These actions infringe copyright if you do
not accept this License. Therefore, by modifying or propagating a
covered work, you indicate your acceptance of this License to do so.
10. Automatic Licensing of Downstream Recipients.
Each time you convey a covered work, the recipient automatically
receives a license from the original licensors, to run, modify and
propagate that work, subject to this License. You are not responsible
for enforcing compliance by third parties with this License.
An "entity transaction" is a transaction transferring control of an
organization, or substantially all assets of one, or subdividing an
organization, or merging organizations. If propagation of a covered
work results from an entity transaction, each party to that
transaction who receives a copy of the work also receives whatever
licenses to the work the party's predecessor in interest had or could
give under the previous paragraph, plus a right to possession of the
Corresponding Source of the work from the predecessor in interest, if
the predecessor has it or can get it with reasonable efforts.
You may not impose any further restrictions on the exercise of the
rights granted or affirmed under this License. For example, you may
not impose a license fee, royalty, or other charge for exercise of
rights granted under this License, and you may not initiate litigation
(including a cross-claim or counterclaim in a lawsuit) alleging that
any patent claim is infringed by making, using, selling, offering for
sale, or importing the Program or any portion of it.
11. Patents.
A "contributor" is a copyright holder who authorizes use under this
License of the Program or a work on which the Program is based. The
work thus licensed is called the contributor's "contributor version".
A contributor's "essential patent claims" are all patent claims
owned or controlled by the contributor, whether already acquired or
hereafter acquired, that would be infringed by some manner, permitted
by this License, of making, using, or selling its contributor version,
but do not include claims that would be infringed only as a
consequence of further modification of the contributor version. For
purposes of this definition, "control" includes the right to grant
patent sublicenses in a manner consistent with the requirements of
this License.
Each contributor grants you a non-exclusive, worldwide, royalty-free
patent license under the contributor's essential patent claims, to
make, use, sell, offer for sale, import and otherwise run, modify and
propagate the contents of its contributor version.
In the following three paragraphs, a "patent license" is any express
agreement or commitment, however denominated, not to enforce a patent
(such as an express permission to practice a patent or covenant not to
sue for patent infringement). To "grant" such a patent license to a
party means to make such an agreement or commitment not to enforce a
patent against the party.
If you convey a covered work, knowingly relying on a patent license,
and the Corresponding Source of the work is not available for anyone
to copy, free of charge and under the terms of this License, through a
publicly available network server or other readily accessible means,
then you must either (1) cause the Corresponding Source to be so
available, or (2) arrange to deprive yourself of the benefit of the
patent license for this particular work, or (3) arrange, in a manner
consistent with the requirements of this License, to extend the patent
license to downstream recipients. "Knowingly relying" means you have
actual knowledge that, but for the patent license, your conveying the
covered work in a country, or your recipient's use of the covered work
in a country, would infringe one or more identifiable patents in that
country that you have reason to believe are valid.
If, pursuant to or in connection with a single transaction or
arrangement, you convey, or propagate by procuring conveyance of, a
covered work, and grant a patent license to some of the parties
receiving the covered work authorizing them to use, propagate, modify
or convey a specific copy of the covered work, then the patent license
you grant is automatically extended to all recipients of the covered
work and works based on it.
A patent license is "discriminatory" if it does not include within
the scope of its coverage, prohibits the exercise of, or is
conditioned on the non-exercise of one or more of the rights that are
specifically granted under this License. You may not convey a covered
work if you are a party to an arrangement with a third party that is
in the business of distributing software, under which you make payment
to the third party based on the extent of your activity of conveying
the work, and under which the third party grants, to any of the
parties who would receive the covered work from you, a discriminatory
patent license (a) in connection with copies of the covered work
conveyed by you (or copies made from those copies), or (b) primarily
for and in connection with specific products or compilations that
contain the covered work, unless you entered into that arrangement,
or that patent license was granted, prior to 28 March 2007.
Nothing in this License shall be construed as excluding or limiting
any implied license or other defenses to infringement that may
otherwise be available to you under applicable patent law.
12. No Surrender of Others' Freedom.
If conditions are imposed on you (whether by court order, agreement or
otherwise) that contradict the conditions of this License, they do not
excuse you from the conditions of this License. If you cannot convey a
covered work so as to satisfy simultaneously your obligations under this
License and any other pertinent obligations, then as a consequence you
may not convey it at all. For example, if you agree to terms that
obligate you to collect a royalty for further conveying from those to
whom you convey the Program, the only way you could satisfy both those
terms and this License would be to refrain entirely from conveying the
Program.
13. Remote Network Interaction; Use with the GNU General Public License.
Notwithstanding any other provision of this License, if you modify the
Program, your modified version must prominently offer all users
interacting with it remotely through a computer network (if your version
supports such interaction) an opportunity to receive the Corresponding
Source of your version by providing access to the Corresponding Source
from a network server at no charge, through some standard or customary
means of facilitating copying of software. This Corresponding Source
shall include the Corresponding Source for any work covered by version 3
of the GNU General Public License that is incorporated pursuant to the
following paragraph.
Notwithstanding any other provision of this License, you have
permission to link or combine any covered work with a work licensed
under version 3 of the GNU General Public License into a single
combined work, and to convey the resulting work. The terms of this
License will continue to apply to the part which is the covered work,
but the work with which it is combined will remain governed by version
3 of the GNU General Public License.
14. Revised Versions of this License.
The Free Software Foundation may publish revised and/or new versions of
the GNU Affero General Public License from time to time. Such new
versions will be similar in spirit to the present version, but may
differ in detail to address new problems or concerns.
Each version is given a distinguishing version number. If the
Program specifies that a certain numbered version of the GNU Affero
General Public License "or any later version" applies to it, you have
the option of following the terms and conditions either of that numbered
version or of any later version published by the Free Software
Foundation. If the Program does not specify a version number of the
GNU Affero General Public License, you may choose any version ever
published by the Free Software Foundation.
If the Program specifies that a proxy can decide which future
versions of the GNU Affero General Public License can be used, that
proxy's public statement of acceptance of a version permanently
authorizes you to choose that version for the Program.
Later license versions may give you additional or different
permissions. However, no additional obligations are imposed on any
author or copyright holder as a result of your choosing to follow a
later version.
15. Disclaimer of Warranty.
THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY
APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT
HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY
OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO,
THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE
PROGRAM IS WITH YOU. SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME
THE COST OF ALL NECESSARY SERVICING, REPAIR OR CORRECTION.
16. Limitation of Liability.
IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING
WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MODIFIES AND/OR CONVEYS
THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING
ANY GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF
THE USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO
LOSS OF DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY
YOU OR THIRD PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY
OTHER PROGRAMS), EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF
THE POSSIBILITY OF SUCH DAMAGES.
17. Interpretation of Sections 15 and 16.
If the disclaimer of warranty and limitation of liability provided
above cannot be given local legal effect according to their terms,
reviewing courts shall apply local law that most closely approximates
an absolute waiver of all civil liability in connection with the
Program, unless a warranty or assumption of liability accompanies a
copy of the Program in return for a fee.
END OF TERMS AND CONDITIONS
How to Apply These Terms to Your New Programs
If you develop a new program, and you want it to be of the greatest
possible use to the public, the best way to achieve this is to make it
free software which everyone can redistribute and change under these terms.
To do so, attach the following notices to the program. It is safest
to attach them to the start of each source file to most effectively
state the exclusion of warranty; and each file should have at least
the "copyright" line and a pointer to where the full notice is found.
<one line to give the program's name and a brief idea of what it does.>
Copyright (C) <year> <name of author>
This program is free software: you can redistribute it and/or modify
it under the terms of the GNU Affero General Public License as published
by the Free Software Foundation, either version 3 of the License, or
(at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU Affero General Public License for more details.
You should have received a copy of the GNU Affero General Public License
along with this program. If not, see <https://www.gnu.org/licenses/>.
Also add information on how to contact you by electronic and paper mail.
If your software can interact with users remotely through a computer
network, you should also make sure that it provides a way for users to
get its source. For example, if your program is a web application, its
interface could display a "Source" link that leads users to an archive
of the code. There are many ways you could offer source, and different
solutions will be better for different programs; see section 13 for the
specific requirements.
You should also get your employer (if you work as a programmer) or school,
if any, to sign a "copyright disclaimer" for the program, if necessary.
For more information on this, and how to apply and follow the GNU AGPL, see
<https://www.gnu.org/licenses/>. | null | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"License :: OSI Approved :: GNU Affero General Public License v3",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"... | [] | null | null | >=3.10 | [] | [] | [] | [
"asyncio-atexit>=1.0.1",
"beautifulsoup4>=4.12.0",
"deprecated>=1.2.14",
"emoji>=2.14.1",
"grapheme>=0.6.0",
"html2text>=2024.2.26",
"mss>=9.0.2",
"psutil>=7.1.0",
"pydantic>=2.0.0",
"websockets>=14.0"
] | [] | [] | [] | [
"Homepage, https://github.com/ToufiqQureshi/chuscraper",
"Repository, https://github.com/ToufiqQureshi/chuscraper",
"Issues, https://github.com/ToufiqQureshi/chuscraper/issues"
] | twine/6.2.0 CPython/3.13.5 | 2026-02-19T05:27:10.027340 | chuscraper-0.19.3.tar.gz | 486,853 | 9c/19/ba6bd8771f928ebf4d6bafa44b847ebba73618d1653175cf44d2ab520196/chuscraper-0.19.3.tar.gz | source | sdist | null | false | 0fb20268d83d70951ff02d6883e79969 | df16f32844acbcac6a252521fad1e20d8f305098cd273844edb5c0ca03eb3e59 | 9c19ba6bd8771f928ebf4d6bafa44b847ebba73618d1653175cf44d2ab520196 | null | [
"LICENSE"
] | 287 |
2.4 | pyucis | 0.1.5.22169728945 | PyUCIS provides a Python API for manipulating UCIS coverage data. | # pyucis
Python API to Unified Coverage Interoperability Standard (UCIS) Data Model
[](https://dev.azure.com/mballance/fvutils/_build/latest?definitionId=16&branchName=master)
[](https://badge.fury.io/py/pyucis)
[](https://pypi.org/project/pyucis/)
## Features
- **Python API** for manipulating UCIS coverage databases
- **CLI Tools** for database conversion, merging, and reporting
- **Coverage Import** - Import from Verilator, cocotb-coverage, and AVL frameworks
- **MCP Server** for AI agent integration (see [MCP_SERVER.md](MCP_SERVER.md))
- **AgentSkills Support** for enhanced AI agent understanding (see https://agentskills.io)
- Support for XML, YAML, and UCIS binary formats
- Multiple export formats: LCOV, Cobertura, JaCoCo, Clover
## Installation
```bash
# Standard installation
pip install pyucis
# With MCP server support
pip install pyucis[dev]
```
## Usage
### Command Line Interface
```bash
# Convert coverage database
pyucis convert --input-format xml --output-format yaml input.xml -o output.yaml
# Merge multiple databases
pyucis merge db1.xml db2.xml db3.xml -o merged.xml
# Generate reports
pyucis report coverage.xml -o report.txt
# Interactive Terminal UI (NEW!)
pyucis view coverage.xml
# Query coverage information
pyucis show summary coverage.xml
pyucis show gaps coverage.xml --threshold 80
pyucis show covergroups coverage.xml
```
### Interactive Terminal UI
PyUCIS now includes an interactive Terminal User Interface (TUI) for exploring coverage databases:
```bash
# Launch the TUI
pyucis view coverage.xml
```
**Features:**
- **Dashboard View** - High-level coverage overview with statistics
- **Hierarchy View** - Navigate design structure with interactive tree
- **Gaps View** - Identify uncovered items for test planning
- **Hotspots View** - Priority-based improvement targets with P0/P1/P2 classification
- **Metrics View** - Statistical analysis, distributions, and quality indicators
- **Help System** - Comprehensive keyboard shortcuts (press `?`)
- Color-coded coverage indicators (red <50%, yellow 50-80%, green 80%+)
- Keyboard-driven navigation optimized for terminal workflows
- Fast, responsive interface using Rich library
**Keyboard Shortcuts:**
- `1` - Dashboard view
- `2` - Hierarchy view
- `3` - Gaps view
- `4` - Hotspots view
- `5` - Metrics view
- `↑/↓` - Navigate items
- `←/→` - Collapse/expand tree nodes
- `?` - Help overlay
- `q` - Quit
**Hotspots Analysis:**
The Hotspots view intelligently identifies high-priority coverage targets:
- **P0/P1 (Critical/High):** Low coverage modules (<50%) requiring immediate attention
- **P1/P2 (High/Medium):** Near-complete items (90%+): low-hanging fruit
- **P1 (High):** Completely untested coverpoints
**Metrics & Statistics:**
The Metrics view provides in-depth analysis:
- Coverage distribution by hit count (0, 1-10, 11-100, 100+)
- Statistical measures (mean, median, min, max)
- Quality indicators (complete, high, medium, low tiers)
- Bin utilization and zero-hit ratios
The TUI provides an efficient alternative to HTML reports for real-time coverage analysis, especially useful for:
- Quick coverage assessment during verification
- Identifying coverage gaps without generating static reports
- Priority-driven test planning with hotspot analysis
- Remote terminal sessions over SSH
- CI/CD pipeline integration
### MCP Server for AI Agents
PyUCIS now includes a Model Context Protocol (MCP) server that enables AI agents to interact with coverage databases:
```bash
# Start the MCP server
pyucis-mcp-server
```
See [MCP_SERVER.md](MCP_SERVER.md) for detailed documentation on:
- Available MCP tools (17+ coverage analysis tools)
- Integration with Claude Desktop and other AI platforms
- API usage examples
### AgentSkills Support
PyUCIS includes an [AgentSkills](https://agentskills.io) skill definition that provides LLM agents with comprehensive information about PyUCIS capabilities. When you run any PyUCIS command, the absolute path to the skill file is displayed, allowing agents to reference it for better understanding of UCIS coverage data manipulation.
```bash
# Running any pyucis command displays the skill file location
pyucis --help
# Output includes: Skill Definition: /path/to/ucis/share/SKILL.md
```
### Python API
```python
from ucis import UCIS

# Open a database
db = UCIS("coverage.xml")

# Access coverage data
for scope in db.scopes():
    print(f"Scope: {scope.name}")
    for coveritem in scope.coveritems():
        print(f"  {coveritem.name}: {coveritem.count} hits")
```
### Coverage Import
Import coverage from various verification frameworks:
**Command Line (CLI):**
```bash
# cocotb-coverage (XML/YAML)
pyucis convert --input-format cocotb-xml coverage.xml --output-format xml --out output.xml
pyucis convert --input-format cocotb-yaml coverage.yml --output-format xml --out output.xml
# AVL (JSON)
pyucis convert --input-format avl-json coverage.json --output-format xml --out output.xml
# Merge multiple runs
pyucis merge --input-format cocotb-xml run1.xml run2.xml --output-format xml --out merged.xml
```
**Python API:**
cocotb-coverage (Python testbenches):
```python
from ucis.format_detection import read_coverage_file
# Automatically detects cocotb XML or YAML format
db = read_coverage_file('cocotb_coverage.xml')
db = read_coverage_file('cocotb_coverage.yml')
```
AVL - Apheleia Verification Library:
```python
from ucis.avl import read_avl_json
# Import AVL JSON coverage (all variations supported)
db = read_avl_json('avl_coverage.json')
```
Verilator:
```bash
# Import Verilator coverage via CLI
pyucis convert --input-format vltcov coverage.dat --out coverage.xml
```
See documentation for complete import examples and supported formats.
## Documentation
- [MCP Server Documentation](MCP_SERVER.md)
- [Python SQLite Performance Report](PYTHON_SQLITE_PERFORMANCE_REPORT.md)
- [UCIS Standard](https://www.accellera.org/activities/committees/ucis)
## License
Apache 2.0
## Links
- [PyPI](https://pypi.org/project/pyucis/)
- [GitHub](https://github.com/fvutils/pyucis)
- [Issue Tracker](https://github.com/fvutils/pyucis/issues)
| text/markdown | Matthew Ballance | Matthew Ballance <matt.ballance@gmail.com> | null | null | Apache 2.0 | SystemVerilog, Verilog, RTL, Coverage, UCIS, verification | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"License :: OSI Approved :: Apache Software License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.8",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Lan... | [] | https://github.com/fvutils/pyucis | null | >=3.8 | [] | [] | [] | [
"lxml",
"python-jsonschema-objects",
"jsonschema",
"pyyaml",
"mcp>=0.9.0",
"rich>=13.0.0",
"ivpm; extra == \"dev\"",
"pylint; extra == \"dev\"",
"flake8; extra == \"dev\"",
"pytest; extra == \"dev\"",
"pytest-cov; extra == \"dev\"",
"pytest-asyncio; extra == \"dev\""
] | [] | [] | [] | [
"Homepage, https://github.com/fvutils/pyucis",
"Repository, https://github.com/fvutils/pyucis",
"Issues, https://github.com/fvutils/pyucis/issues"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-19T05:26:57.484260 | pyucis-0.1.5.22169728945-py2.py3-none-any.whl | 382,976 | 98/f1/b7593795bff1156e012d25c1e8ba38c37b4b71f494e0240104a7aca28d21/pyucis-0.1.5.22169728945-py2.py3-none-any.whl | py2.py3 | bdist_wheel | null | false | 6af3943d96964769d4358e3b0fb3af59 | e33e13aab19faf97e1959c1238d79f5d016f58171503492d8285afbe677adfb8 | 98f1b7593795bff1156e012d25c1e8ba38c37b4b71f494e0240104a7aca28d21 | null | [
"LICENSE"
] | 1,298 |
2.5 | bin-upload | 0.0.1 | Publish binaries to npm or pypi. | # bin-to-npm
To install dependencies:
```bash
bun install
```
To run:
```bash
bun run index.ts
```
This project was created using `bun init` in bun v1.3.9. [Bun](https://bun.com) is a fast all-in-one JavaScript runtime.
| text/markdown | Derek Worthen | worthend.derek@gmail.com | null | null | null | bin-upload, binary, pypi, npm, publish | [] | [] | null | null | >=3.11 | [] | [] | [] | [] | [] | [] | [] | [
"source, https://github.com/dworthen/bin-upload"
] | uv/0.10.3 {"installer":{"name":"uv","version":"0.10.3","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"22.04","id":"jammy","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":null} | 2026-02-19T05:25:42.160829 | bin_upload-0.0.1-py3-none-manylinux_2_17_x86_64.whl | 39,931,701 | a0/37/f2dcf6f4e1a32a61208421e110a71730337a9424452cc63d4a9386441b97/bin_upload-0.0.1-py3-none-manylinux_2_17_x86_64.whl | py3 | bdist_wheel | null | false | 051a2c629d7fc361517bd82f53d01405 | b8b75e93fb3f36769a922a08d6d150cdf7ca73d05b26c4cb9cf1efb6ba5d13b9 | a037f2dcf6f4e1a32a61208421e110a71730337a9424452cc63d4a9386441b97 | MIT | [] | 547 |
2.4 | meupackage | 0.1.0 | Short package description | # Simple Linear Regression
An educational package for understanding linear regression from scratch.
## Usage example
```python
from regressaosimples import RegressaoLinearSimples
X = [1, 2, 3, 4, 5]
Y = [2, 3, 5, 4, 6]
modelo = RegressaoLinearSimples()
modelo.fit(X, Y)
print(modelo.predict([6, 7]))
```
| text/markdown | null | Luan L <luan.emerson.lima@gmail.com> | null | null | MIT | null | [] | [] | null | null | null | [] | [] | [] | [] | [] | [] | [] | [] | twine/6.2.0 CPython/3.11.14 | 2026-02-19T05:24:25.205599 | meupackage-0.1.0.tar.gz | 1,510 | 4b/3e/89951972e85eda7ea41c9561bfebf4d7e51cbe1ccd35b79892348d7d88e6/meupackage-0.1.0.tar.gz | source | sdist | null | false | 19647be2d7c0c190119804ff07b426cd | d76672edf2d6eca2f7f9ebc0a631405f724cb8ee2a3918ad9fb3f249cc4658f2 | 4b3e89951972e85eda7ea41c9561bfebf4d7e51cbe1ccd35b79892348d7d88e6 | null | [] | 280 |
2.3 | orm-loader | 0.3.22 | Generic base classes to handle ORM functionality for multiple downstream datamodels | ## orm-loader
[](https://github.com/AustralianCancerDataNetwork/orm-loader/actions/workflows/tests.yml)
A lightweight, reusable foundation for building and validating SQLAlchemy-based clinical (and non-clinical) data models.
This library provides general-purpose ORM infrastructure that sits below any specific data model (OMOP, PCORnet, custom CDMs, etc.), focusing on:
* declarative base configuration
* bulk ingestion patterns
* file-based validation & loading
* table introspection
* model-agnostic validation scaffolding
* safe, database-portable operational helpers
It intentionally contains no domain logic and no assumptions about a specific schema.
### What this library provides:
This library provides a small set of composable building blocks for defining, loading, inspecting, and validating SQLAlchemy-based data models.
All components are model-agnostic and can be selectively combined in downstream libraries.
1. A minimal, opinionated ORM table base
ORMTableBase provides structural introspection utilities for SQLAlchemy-mapped tables, without imposing any domain semantics.
It supports:
* mapper access and inspection
* primary key discovery
* required (non-nullable) column detection
* consistent primary key handling across models
* simple ID allocation helpers for sequence-less databases
```python
from orm_loader.tables import ORMTableBase
class MyTable(ORMTableBase, Base):
    __tablename__ = "my_table"
```
This base is intended to be inherited by all ORM tables, either directly or via higher-level mixins.
2. CSV-based ingestion mixins
CSVLoadableTableInterface adds opt-in CSV loading support for ORM tables using pandas, with a focus on correctness and scalability.
Features include:
* chunked loading for large files
* optional per-table normalisation logic
* optional deduplication against existing database rows
* safe bulk inserts using SQLAlchemy sessions
```python
class MyTable(CSVLoadableTableInterface, ORMTableBase, Base):
    __tablename__ = "my_table"
```
Downstream models may override:
* normalise_dataframe(...)
* dedupe_dataframe(...)
* csv_columns()
to implement table-specific ingestion policies.
3. Structured serialisation and hashing
SerialisableTableInterface adds lightweight, explicit serialisation helpers for ORM rows.
It supports:
* conversion to dictionaries
* JSON serialisation
* stable row-level fingerprints
* iterator-style access to field/value pairs
```python
row = session.get(MyTable, 1)
row.to_dict()
row.to_json()
row.fingerprint()
```
This is useful for:
* debugging
* auditing
* reproducibility checks
* downstream APIs or exports
4. Model registry and validation scaffolding
The library includes model-agnostic validation infrastructure, designed to compare ORM models against external specifications.
This includes:
* a model registry
* table and field descriptors
* validator contracts
* a validation runner
* structured validation reports
Specifications can be loaded from CSV today, with support for other formats (e.g. LinkML) planned.
```python
registry = ModelRegistry(model_version="1.0")
registry.load_table_specs(table_csv, field_csv)
registry.register_models([MyTable])
runner = ValidationRunner(validators=always_on_validators())
report = runner.run(registry)
```
Validation output is available as:
* human-readable text
* structured dictionaries
* JSON (for CI/CD integration)
* exit codes suitable for pipelines
5. Database bootstrap helpers
The library provides lightweight helpers for schema creation and bootstrapping, without imposing a migration strategy.
```python
from orm_loader.metadata import Base
from orm_loader.bootstrap import bootstrap
bootstrap(engine, create=True)
```
6. Safe bulk-loading utilities
A reusable context manager simplifies trusted bulk ingestion workflows:
* temporarily disables foreign key checks where supported
* suppresses autoflush for performance
* ensures reliable rollback on failure
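A context manager with those properties might look like the following sketch (the name `trusted_bulk_load`, the optional `disable_fk_sql` hook, and the session attributes used here are illustrative assumptions, not orm-loader's actual API):

```python
from contextlib import contextmanager

@contextmanager
def trusted_bulk_load(session, disable_fk_sql=None):
    """Sketch of a trusted bulk-ingestion helper.

    Optionally disables foreign key checks (dialect-specific SQL),
    suppresses autoflush while loading, commits on success, and
    rolls back on failure before restoring the previous setting.
    """
    if disable_fk_sql is not None:
        # e.g. a dialect-specific statement such as disabling FK enforcement
        session.execute(disable_fk_sql)
    previous_autoflush = session.autoflush
    session.autoflush = False
    try:
        yield session
        session.commit()
    except Exception:
        session.rollback()
        raise
    finally:
        session.autoflush = previous_autoflush
```

Because the helper only touches the session interface, the same pattern works against any SQLAlchemy session without tying the library to one database backend.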
## Summary
This library intentionally focuses on infrastructure, not semantics.
It provides:
* reusable ORM mixins
* safe ingestion patterns
* validation scaffolding
* database-portable utilities
while leaving domain rules, business logic, and schema semantics to downstream libraries.
This makes it suitable as a shared foundation for:
* clinical data models
* research data marts
* registry schemas
* synthetic data pipelines
| text/markdown | gkennos | gkennos <georgina.kennedy@unsw.edu.au> | null | null | null | null | [] | [] | null | null | >=3.12 | [] | [] | [] | [
"chardet>=5.2.0",
"pandas>=2.3.3",
"pyarrow>=23.0.0",
"sqlalchemy>=2.0.45",
"sqlalchemy-utils>=0.42.1",
"mypy>=1.19.1; extra == \"dev\"",
"pytest>=9.0.2; extra == \"dev\"",
"ruff>=0.14.11; extra == \"dev\"",
"mkdocs-material>=9.7.1; extra == \"dev\"",
"mkdocstrings-python>=2.0.1; extra == \"dev\""... | [] | [] | [] | [
"Homepage, https://AustralianCancerDataNetwork.github.io/orm-loader",
"Documentation, https://AustralianCancerDataNetwork.github.io/orm-loader",
"Repository, https://github.com/AustralianCancerDataNetwork/orm-loader",
"Issues, https://github.com/AustralianCancerDataNetwork/orm-loader/issues"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-19T05:23:47.893712 | orm_loader-0.3.22.tar.gz | 31,662 | d8/d8/3fe9fc8d6d264f7d9e772a5847ab8dc0f5eb869e0823d67bcc9567cf97f3/orm_loader-0.3.22.tar.gz | source | sdist | null | false | 95e3225ebaa58bbde32b6ca9049719f6 | 72c0d0b8c798c07cdca1468a815f78120e8e5afa02914ee2b53452c573b4d55a | d8d83fe9fc8d6d264f7d9e772a5847ab8dc0f5eb869e0823d67bcc9567cf97f3 | null | [] | 267 |
2.4 | PyCfeMx | 0.1.0 | Python client for the CFE (Comisión Federal de Electricidad) portal in Mexico | # PyCfeMx
Python client for the CFE (Comisión Federal de Electricidad) portal in Mexico.
It lets you query electricity service information from CFE's "Mi Espacio" portal.
## Installation
```bash
pip install pycfemx
```
## Usage
### Async
```python
import asyncio
from pycfemx import CFEClient

async def main():
    async with CFEClient() as client:
        await client.login("your_username", "your_password")

        # Get information for a specific service
        info = await client.get_info()
        print(f"Amount due: {info.amount_due}")

asyncio.run(main())
```
### Sync
```python
from pycfemx import CFESyncClient

with CFESyncClient() as client:
    client.login("your_username", "your_password")
    info = client.get_info()
    print(f"Amount due: {info.amount_due}")
```
### Multiple services
A CFE user can have several services (addresses) associated with their account:
```python
import asyncio
from pycfemx import CFEClient

async def main():
    async with CFEClient() as client:
        await client.login("your_username", "your_password")

        # List all services
        services = await client.get_services()

        # Get information for each service
        for service in services:
            info = await client.get_info(service.service_number)
            print(f"\n{service.alias}:")
            print(f"  Amount due: {info.amount_due}")
            print(f"  Due date: {info.due_date}")
            print(f"  Status: {info.payment_status}")

asyncio.run(main())
```
### Receipt history
```python
import asyncio
from pycfemx import CFEClient

async def main():
    async with CFEClient() as client:
        await client.login("your_username", "your_password")

        info = await client.get_info()
        print("Latest receipts:")
        for receipt in info.receipts:
            pdf = "PDF" if receipt.pdf_available else ""
            xml = "XML" if receipt.xml_available else ""
            print(f"  {receipt.period}: {pdf} {xml}")

asyncio.run(main())
```
## API
### CFEClient (async) / CFESyncClient (sync)
Both clients expose the same methods. `CFEClient` is async; `CFESyncClient` is its synchronous equivalent.
| Method | Description |
|--------|-------------|
| `login(username, password)` | Logs in to the CFE portal |
| `get_services()` | Returns the list of available services |
| `get_info(service_number=None)` | Returns information for the given service (or the current one) |
| `close()` | Closes the session |
### Models
#### Service
| Attribute | Type | Description |
|-----------|------|-------------|
| `service_number` | `str` | Service number (12 digits) |
| `alias` | `str` | Short name for the service |
| `display_name` | `str` | Full name (alias - number) |
#### ServiceInfo
| Attribute | Type | Description |
|-----------|------|-------------|
| `service_number` | `str` | Service number |
| `owner_name` | `str` | Account holder's name |
| `address` | `str` | Service address |
| `amount_due` | `str` | Amount due (e.g. "$65") |
| `consumption_period_text` | `str` | Consumption period (e.g. "10 DIC 25 al 09 FEB 26") |
| `due_date_text` | `str` | Payment due date (e.g. "26 FEB 26") |
| `payment_status` | `str` | Status: VIGENTE, PAGADO, or VENCIDO |
| `receipts` | `list[Receipt]` | Receipt history |
| `user_type` | `str \| None` | User type (e.g. "CFECONTIGO") |
| `due_date` | `date \| None` | Parsed payment due date |
| `consumption_start` | `date \| None` | Start of the consumption period |
| `consumption_end` | `date \| None` | End of the consumption period |
#### Receipt
| Attribute | Type | Description |
|-----------|------|-------------|
| `period` | `str` | Receipt period (e.g. "FEB 2026") |
| `pdf_available` | `bool` | Whether the PDF is available |
| `xml_available` | `bool` | Whether the XML (CFDI) is available |
### Exceptions
| Exception | Description |
|-----------|-------------|
| `CFEError` | Base exception |
| `CFEAuthError` | Authentication error |
| `CFEConnectionError` | Connection error |
| `CFESessionExpiredError` | Session expired |
| `CFEServiceNotFoundError` | Service not found |
## Requirements
- Python 3.10+
- aiohttp
- beautifulsoup4
## License
MIT
| text/markdown | OAmilkar | null | null | null | null | cfe, electricity, energy, home-assistant, mexico, utility | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Pytho... | [] | null | null | >=3.10 | [] | [] | [] | [
"aiohttp>=3.8.0",
"beautifulsoup4>=4.11.0",
"certifi>=2023.0.0",
"mypy>=1.0.0; extra == \"dev\"",
"pytest-aiohttp>=1.0.0; extra == \"dev\"",
"pytest-cov>=4.0.0; extra == \"dev\"",
"pytest>=7.0.0; extra == \"dev\"",
"ruff>=0.1.0; extra == \"dev\"",
"types-beautifulsoup4>=4.11.0; extra == \"dev\""
] | [] | [] | [] | [
"Homepage, https://github.com/oamilkar/PyCfeMx",
"Documentation, https://github.com/oamilkar/PyCfeMx#readme",
"Repository, https://github.com/oamilkar/PyCfeMx.git",
"Issues, https://github.com/oamilkar/PyCfeMx/issues"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-19T05:23:16.420074 | pycfemx-0.1.0.tar.gz | 9,712 | 37/09/bacafee6c62ea975386a890c58dbd20090d4f6589755bc0a6235fdaa0023/pycfemx-0.1.0.tar.gz | source | sdist | null | false | cdac2a30a1f7698b4d71005614103c80 | 39a2a32a1e745ba61f5f92e076fef837f1504bf741fd517f0510b949d788ae5d | 3709bacafee6c62ea975386a890c58dbd20090d4f6589755bc0a6235fdaa0023 | MIT | [
"LICENSE"
] | 0 |
2.4 | briefcase-ai | 2.1.24 | Python SDK for AI observability, replay, and decision tracking - Rust-powered performance | # Briefcase AI Python SDK
Python SDK for AI observability, replay, and decision tracking. This package provides high-performance Rust-powered tools for monitoring and debugging AI applications in Python.
[](https://crates.io/crates/briefcase-python)
[](https://pypi.org/project/briefcase-ai/)
## Overview
This Python SDK provides a complete toolkit for AI observability, decision tracking, and replay functionality. Built on a high-performance Rust core with Python bindings via [PyO3](https://pyo3.rs/), it offers the speed of Rust with the convenience of Python.
## Features
- **High Performance** - Rust core with minimal Python overhead
- **Memory Efficient** - Zero-copy operations where possible
- **Type Safe** - Full Python type hints and error handling
- **Async Support** - Async/await compatible operations
## Installation
Install the Python SDK using pip:
```bash
pip install briefcase-ai
```
**Platform Support**: Currently supports Linux and macOS. Windows support is temporarily disabled due to build issues.
## Quick Start
```python
import briefcase_ai
import openai

# Initialize the SDK
briefcase_ai.init()

# Track an OpenAI API call
def track_openai_call(user_message: str) -> str:
    # Create decision snapshot
    decision = briefcase_ai.DecisionSnapshot("openai_chat")

    # Add input
    decision.add_input(briefcase_ai.Input("user_message", user_message, "string"))
    decision.add_input(briefcase_ai.Input("model", "gpt-3.5-turbo", "string"))

    # Make API call (your existing code)
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": user_message}]
    )
    ai_response = response.choices[0].message.content

    # Add output with confidence
    decision.add_output(
        briefcase_ai.Output("ai_response", ai_response, "string")
        .with_confidence(0.95)
    )
    decision.with_execution_time(response.usage.total_tokens * 0.1)  # Estimate

    # Save for analysis
    storage = briefcase_ai.SqliteBackend.file("ai_decisions.db")
    decision_id = storage.save_decision(decision)

    return ai_response

# Use it
result = track_openai_call("Explain quantum computing")
print(f"AI Response: {result}")
```
## Advanced Usage Examples
### 1. Model Performance Monitoring
```python
import briefcase_ai
from datetime import datetime

def monitor_model_performance():
    storage = briefcase_ai.SqliteBackend.file("model_performance.db")

    # Track multiple model variants
    models = ["gpt-3.5-turbo", "gpt-4", "claude-3"]
    for model in models:
        decision = briefcase_ai.DecisionSnapshot("comparison_test")
        decision.add_input(briefcase_ai.Input("prompt", "Write a poem", "string"))
        decision.add_input(briefcase_ai.Input("model", model, "string"))
        decision.add_tag("experiment", "model_comparison")
        decision.add_tag("timestamp", str(datetime.now()))

        # Your model call here...
        # decision.add_output(...)

        storage.save_decision(decision)
```
### 2. Cost Tracking for AI Operations
```python
import briefcase_ai

def track_ai_costs():
    # Initialize cost calculator
    cost_calc = briefcase_ai.CostCalculator()

    # Track costs for different models
    models_usage = [
        ("gpt-4", 1500, 800),  # model, input_tokens, output_tokens
        ("gpt-3.5-turbo", 2000, 1200),
        ("claude-3", 1800, 900)
    ]

    total_cost = 0
    for model, input_tokens, output_tokens in models_usage:
        cost_estimate = cost_calc.estimate_cost(model, input_tokens, output_tokens)
        total_cost += cost_estimate.total_cost
        print(f"{model}: ${cost_estimate.total_cost:.4f}")

    print(f"Total estimated cost: ${total_cost:.4f}")
```
### 3. Drift Detection
```python
import briefcase_ai

def detect_model_drift():
    # Monitor for changes in model outputs
    drift_calc = briefcase_ai.DriftCalculator()

    # Sample outputs from the same prompt over time
    outputs = [
        "The sky is blue because of light scattering",
        "The sky appears blue due to Rayleigh scattering",
        "Blue sky results from atmospheric light scattering",
        "The sky looks green"  # Anomaly!
    ]

    metrics = drift_calc.calculate_drift(outputs)
    print(f"Consistency score: {metrics.consistency_score:.2f}")

    if metrics.consistency_score < 0.8:
        print("⚠️ Potential model drift detected!")
```
### 4. Data Sanitization
```python
import briefcase_ai

def sanitize_sensitive_data():
    # Remove sensitive information before logging
    sanitizer = briefcase_ai.Sanitizer()

    user_input = "My email is john@example.com and SSN is 123-45-6789"

    # Sanitize the input
    result = sanitizer.sanitize(user_input)
    print(f"Original: {user_input}")
    print(f"Sanitized: {result.sanitized}")
    print(f"Patterns found: {result.patterns_found}")

    # Use the sanitized version in your AI tracking
    decision = briefcase_ai.DecisionSnapshot("user_query")
    decision.add_input(briefcase_ai.Input("query", result.sanitized, "string"))
```
### 5. Replay and Debugging
```python
import briefcase_ai

def replay_ai_decision():
    storage = briefcase_ai.SqliteBackend.file("ai_decisions.db")

    # Load a previous decision
    decision_id = "decision-123"
    original_decision = storage.load_decision(decision_id)

    if original_decision:
        # Create replay engine
        replay_engine = briefcase_ai.ReplayEngine()

        # Replay the decision with same inputs
        replay_result = replay_engine.replay_decision(original_decision)

        if replay_result.is_deterministic:
            print("✅ Decision is reproducible")
        else:
            print("❌ Non-deterministic behavior detected")
            print(f"Differences: {replay_result.differences}")
```
## For Rust Developers
This crate is primarily used for building the Python extension. To use Briefcase AI in Rust, use the core library directly:
```toml
[dependencies]
briefcase-core = "2.0.4"
```
## Building
This crate requires:
- Rust 1.70+
- Python 3.10+
- PyO3 dependencies
Build the Python extension:
```bash
cd crates/python
maturin build --release
```
## Development
For development with maturin:
```bash
cd crates/python
maturin develop
```
Then test in Python:
```python
import briefcase_ai
print(briefcase_ai.__version__)
```
## Dependencies
- [`briefcase-core`](https://crates.io/crates/briefcase-core) - Core Rust library
- [`pyo3`](https://crates.io/crates/pyo3) - Python FFI bindings
- [`tokio`](https://crates.io/crates/tokio) - Async runtime
## License
MIT License - see [LICENSE](https://github.com/briefcasebrain/briefcase-ai-core/blob/main/LICENSE) file for details.
## Related Packages
- [`briefcase-ai`](https://pypi.org/project/briefcase-ai/) - Python package (end users)
- [`briefcase-core`](https://crates.io/crates/briefcase-core) - Core Rust library
- [`briefcase-wasm`](https://www.npmjs.com/package/briefcase-wasm) - WebAssembly bindings
| text/markdown; charset=UTF-8; variant=GFM | Briefcase AI Team | null | null | null | GPL-3.0 | ai, ml, observability, replay, telemetry, decision-tracking | [
"Development Status :: 5 - Production/Stable",
"Intended Audience :: Developers",
"License :: OSI Approved :: GNU General Public License v3 (GPLv3)",
"Operating System :: POSIX :: Linux",
"Operating System :: MacOS",
"Operating System :: Unix",
"Programming Language :: Python :: 3",
"Programming Langu... | [] | null | null | >=3.10 | [] | [] | [] | [] | [] | [] | [] | [] | twine/6.1.0 CPython/3.13.7 | 2026-02-19T05:20:26.008426 | briefcase_ai-2.1.24-cp311-cp311-win_amd64.whl | 3,547,298 | 04/82/d9fd12100dfe46a0778a64710a94a7eaecc001166f2b83290dcd1800f344/briefcase_ai-2.1.24-cp311-cp311-win_amd64.whl | cp311 | bdist_wheel | null | false | 2371a9ce9741f1b3224c76bda651edd0 | 38addbb0d0a0fb6ca1478b4267ea0411c28c5b8a050e840c6287f4a693e3a318 | 0482d9fd12100dfe46a0778a64710a94a7eaecc001166f2b83290dcd1800f344 | null | [] | 274 |
2.4 | mangledotdev | 0.1.2 | The universal programming language communication system | # Installation
Mangle.dev for Python can be installed in multiple ways depending on your preference.
## Option 1: Using pip (Recommended)
Install Mangle.dev directly from PyPI:
```bash
pip install mangledotdev
```
Or with pip3:
```bash
pip3 install mangledotdev
```
## Option 2: Manual Installation
Download the `mangledotdev.py` file and place it in your project directory or Python path.
### Project Structure Example
```
project/
├── mangledotdev.py
├── main.py
└── process.py
```
## Verifying Installation
After installation, verify that Mangle.dev is working correctly:
```python
# test_mangledotdev.py
try:
    from mangledotdev import InputManager, OutputManager
    print("Mangle imported successfully!")
except ImportError as e:
    print(f"Import error: {e}")
```
Run the test:
```bash
python test_mangledotdev.py
```
| text/markdown | null | "Wass B." <wass.be@proton.me> | null | null | Apache-2.0 | null | [] | [] | null | null | >=3.8 | [] | [] | [] | [] | [] | [] | [] | [
"Homepage, https://github.com/WassBe/mangle.dev"
] | twine/6.2.0 CPython/3.13.1 | 2026-02-19T05:17:29.210248 | mangledotdev-0.1.2.tar.gz | 9,747 | 80/d2/2fd9789cb9e85c9837789a0879f36dab2bd2c6ec5ab117797f378075ea0c/mangledotdev-0.1.2.tar.gz | source | sdist | null | false | cd1bd876ee100a43a37059c15c6bad5f | 51698ac68a40e8ef79b44ff6afdf3d390f516a9bde070e6160434b513ec85e88 | 80d22fd9789cb9e85c9837789a0879f36dab2bd2c6ec5ab117797f378075ea0c | null | [
"LICENSE"
] | 290 |
2.4 | nxlook-client | 0.1.1 | NxLook Inference library for Epistemic AI models. Non-reverse-engineerable, inference-only. | # NxLook Epistemic AI Inference Library
> **Inference-only** Python library for Epistemic AI trained models. No training, no reverse engineering.
## Installation
```bash
pip install nxlook-client
```
## Usage
## Non-interactive Inference
```python
from nxlook_client import non_interactive_inference
response = non_interactive_inference(
model_path="path/to/your/model.pth",
prompt="What is the meaning of life?"
)
print(response)
```
## Interactive Chat Mode
```python
from nxlook_client import interactive_inference
interactive_inference("path/to/your/model.pth", temperature=0.6)
```
## Unified inference() Function
```python
from nxlook_client import inference
# Non-interactive
response = inference("model.pth", prompt="Hello world")
# Interactive
inference("model.pth", interactive=True)
```
## Requirements
- Python 3.8+
- PyTorch 2.0+
## License
MIT | text/markdown | null | "A.Fatykhov" <a.fatykhov@nxlook.com> | null | null | MIT | null | [
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3"
] | [] | null | null | >=3.8 | [] | [] | [] | [
"numpy",
"torch>=2.0",
"black; extra == \"dev\"",
"pytest; extra == \"dev\"",
"ruff; extra == \"dev\""
] | [] | [] | [] | [
"Homepage, https://github.com/afatykhov-ai/nxlook-client/",
"Repository, https://github.com/afatykhov-ai/nxlook-client/"
] | twine/6.2.0 CPython/3.14.2 | 2026-02-19T05:15:25.632878 | nxlook_client-0.1.1.tar.gz | 6,010 | 5b/07/9a8db76c35d505b5d077e6a8b994918bb898ae807798debf8768a164ac23/nxlook_client-0.1.1.tar.gz | source | sdist | null | false | 83879c2e36ebfcb0b624fb728825f9ff | 1cf447ef32640e2f2a19a9c0e906f1cff268def66deb0dea9b13a43c2064dd63 | 5b079a8db76c35d505b5d077e6a8b994918bb898ae807798debf8768a164ac23 | null | [
"LICENSE"
] | 288 |
2.4 | auraview | 0.1.0 | A fast, distraction-free photo viewer. | # AuraView
A minimal, elegant image viewer inspired by the art of melody.
---
## ✨ About
AuraView is a minimalist photo viewer designed to stay out of your way.
No clutter. No unnecessary controls. Just your images — fast and clear.
It focuses on:
- ⚡ Speed
- 🧼 Clean interface
- 🖼️ Smooth image viewing
- 🪶 Lightweight footprint
---
## 🐍 Requirements
- Python 3.14.2 (Tested)
- pandas
- pillow
- pillow_heif
---
## 🚀 Features
- View photos smoothly and instantly
- Supports Apple image formats (including HEIF/HEIC)
- Keyboard-based image navigation
- Image rotation support
- In-place rotation (changes persist)
- Copy images to another location
- Move images between directories
---
## 📦 Installation
Install AuraView using pip:
```bash
pip install auraview
```
## 🖥️ Usage
Launch AuraView:
```bash
auraview
```
Open a specific directory:
```bash
auraview /path/to/folder
```
Open a specific image file:
```bash
auraview /path/to/image.jpg
```
---
### 📌 Command Line Options
Show version:
```bash
auraview --version
auraview -v
```
Show help:
```bash
auraview --help
auraview -h
```
Show author:
```bash
auraview --author
auraview -a
```
Show author email:
```bash
auraview --email
auraview -e
```
Show release date:
```bash
auraview --date
auraview -d
```
# Changelog
## Version 0.1.0 - 19-02-2026
- Initial release
- View photos
- Supports Apple image formats (HEIF/HEIC)
- Keyboard-based navigation
- Image rotation, applied in place (changes persist)
- Copy and move functions
| text/markdown | null | Benevant Mathew <benevantmathewv@gmail.com> | null | null | MIT | null | [
"Programming Language :: Python :: 3",
"License :: OSI Approved :: MIT License",
"Natural Language :: English",
"Operating System :: OS Independent",
"Topic :: Software Development :: Libraries :: Python Modules",
"Topic :: Utilities"
] | [] | null | null | >=3.7 | [] | [] | [] | [
"pandas",
"pillow",
"pillow_heif"
] | [] | [] | [] | [
"Homepage, https://github.com/benevantmathew/auraview"
] | twine/6.2.0 CPython/3.14.2 | 2026-02-19T05:11:52.105347 | auraview-0.1.0.tar.gz | 11,050 | 0c/59/f080531d10f56e9bd6ac6c5af5c27cc7e740aa8ed7e375f1b92ede92dab3/auraview-0.1.0.tar.gz | source | sdist | null | false | 1e5cb02c95b2eb0d309e34f5c4172e49 | 47be6d46cbbee36ed1d64010335b53a222f8e57110244db032895efad034df7c | 0c59f080531d10f56e9bd6ac6c5af5c27cc7e740aa8ed7e375f1b92ede92dab3 | null | [
"LICENSE"
] | 313 |
2.4 | crawlerverse | 0.2.0 | Python SDK for the Crawler Agent API | # crawlerverse
Python SDK for the [Crawler Agent API](https://crawlerver.se/docs/agent-api). Build AI agents that play the Crawler roguelike game.
## Installation
```bash
pip install crawlerverse
```
## Quick Start
```python
from crawlerverse import CrawlerClient, run_game, Attack, Wait, Direction, Observation, Action
# Map (dx, dy) offsets to Direction values
OFFSET_TO_DIR = {
(0, -1): Direction.NORTH, (0, 1): Direction.SOUTH,
(1, 0): Direction.EAST, (-1, 0): Direction.WEST,
(1, -1): Direction.NORTHEAST, (-1, -1): Direction.NORTHWEST,
(1, 1): Direction.SOUTHEAST, (-1, 1): Direction.SOUTHWEST,
}
def my_agent(observation: Observation) -> Action:
# Attack any adjacent monster
monster = observation.nearest_monster()
if monster:
tile, _ = monster
dx = tile.x - observation.player.position[0]
dy = tile.y - observation.player.position[1]
direction = OFFSET_TO_DIR.get((dx, dy))
if direction is not None:
return Attack(direction=direction)
# Otherwise just wait
return Wait()
with CrawlerClient(api_key="cra_...") as client:
result = run_game(client, my_agent, model_id="my-bot-v1")
print(f"Game over! Floor {result.outcome.floor}, result: {result.outcome.status}")
```
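The `OFFSET_TO_DIR` table above only covers adjacent tiles. When the nearest monster is farther away, a small helper (hypothetical, not part of the SDK) can clamp the full offset to a unit step before the lookup:

```python
# Hypothetical helper, not part of the crawlerverse SDK: clamp an arbitrary
# (dx, dy) offset to one of the eight unit steps so it can be fed through
# the OFFSET_TO_DIR lookup from the Quick Start.

def sign(n: int) -> int:
    return (n > 0) - (n < 0)

def step_toward(dx: int, dy: int) -> tuple[int, int]:
    """Reduce a target offset to a single-tile step (or (0, 0) if on target)."""
    return (sign(dx), sign(dy))

# A monster 3 tiles east and 2 tiles north clamps to the NORTHEAST step:
print(step_toward(3, -2))  # → (1, -1)
```

Combined with `Move(direction=OFFSET_TO_DIR[step])`, this gives a crude but serviceable "walk toward the target" behavior for open maps (it does not route around walls).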
## Authentication
Set your API key via parameter or environment variable:
```python
# Option 1: Pass directly
client = CrawlerClient(api_key="cra_...")
# Option 2: Environment variable
# export CRAWLERVERSE_API_KEY=cra_...
client = CrawlerClient()
```
## Async Support
```python
from crawlerverse import AsyncCrawlerClient, async_run_game
async with AsyncCrawlerClient() as client:
result = await async_run_game(client, my_agent)
```
## API Reference
### Client Methods
```python
client.games.create(model_id="gpt-4o") # Start a new game
client.games.list(status="completed") # List your games
client.games.get(game_id) # Get game state
client.games.action(game_id, Move(...)) # Submit action
client.games.abandon(game_id) # Abandon game
client.health() # Health check
```
### Actions
```python
Move(direction=Direction.NORTH)
Attack(direction=Direction.EAST)
Wait()
Pickup()
Drop(item_type="health-potion")
Use(item_type="health-potion")
Equip(item_type="iron-sword")
EnterPortal()
RangedAttack(direction=Direction.SOUTH, distance=5)
```
### Observation Helpers
```python
obs.tile_at(x, y) # Look up tile by coordinates
obs.monsters() # All visible monsters
obs.nearest_monster() # Closest monster
obs.items_at_feet() # Items at player's position
obs.has_item("sword") # Check inventory
obs.can_move(Direction.NORTH) # Check if direction is walkable
```
## Logging
The SDK uses Python's standard `logging` module under the `crawlerverse` logger. To see game progress and debug info:
```python
import logging
logging.basicConfig(level=logging.INFO)
```
For more detail (e.g., retry attempts, action payloads):
```python
logging.getLogger("crawlerverse").setLevel(logging.DEBUG)
```
## Examples
All examples default to a local API at `http://localhost:3000/api/agent`. Set `CRAWLERVERSE_BASE_URL` to point at production.
### OpenAI
See [`examples/openai_agent.py`](examples/openai_agent.py):
```bash
pip install openai
export CRAWLERVERSE_API_KEY=cra_...
export OPENAI_API_KEY=sk-...
python examples/openai_agent.py
```
Works with any OpenAI-compatible provider (Ollama, LMStudio, Azure, etc.) via `OPENAI_BASE_URL`.
### Anthropic (Claude)
See [`examples/anthropic_agent.py`](examples/anthropic_agent.py):
```bash
pip install anthropic
export CRAWLERVERSE_API_KEY=cra_...
export ANTHROPIC_API_KEY=sk-ant-...
python examples/anthropic_agent.py
```
Uses Claude Haiku 4.5 by default. Override with `ANTHROPIC_MODEL=claude-sonnet-4-5`.
### Local LLM (Ollama / LMStudio)
See [`examples/local_llm_agent.py`](examples/local_llm_agent.py) for a script with configurable turn limits and error recovery:
```bash
pip install openai
export CRAWLERVERSE_API_KEY=cra_...
export OPENAI_BASE_URL=http://localhost:11434/v1
export OPENAI_MODEL=llama3
python examples/local_llm_agent.py
```
Supports `MAX_TURNS` (default 25) and `MODEL_ID` env vars.
## License
MIT
| text/markdown | null | null | null | null | MIT | null | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: ... | [] | null | null | >=3.10 | [] | [] | [] | [
"httpx>=0.27",
"pydantic>=2.0"
] | [] | [] | [] | [
"Homepage, https://crawlerver.se",
"Documentation, https://crawlerver.se/docs/agent-api",
"Repository, https://github.com/crawlerverse/crawlerverse-sdks"
] | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true} | 2026-02-19T05:11:10.160203 | crawlerverse-0.2.0-py3-none-any.whl | 13,169 | 63/63/679258773f5d2cb25a1e6592f0d8b0dd436c96eb9d1ac0c2638f917a9dbd/crawlerverse-0.2.0-py3-none-any.whl | py3 | bdist_wheel | null | false | a6ce03ab57419291d09aaad0e727121d | 71c53f40b1297a463a66bfe6f2df5f521ded781544c46c349b7d918ae9262aab | 6363679258773f5d2cb25a1e6592f0d8b0dd436c96eb9d1ac0c2638f917a9dbd | null | [] | 305 |
2.4 | pyextremes | 2.5.0 | Extreme Value Analysis (EVA) in Python | <p align="center" style="font-size:40px; margin:0px 10px 0px 10px">
<em>pyextremes</em>
</p>
<p align="center">
<em>Extreme Value Analysis (EVA) in Python</em>
</p>
<p align="center">
<a href="https://github.com/georgebv/pyextremes/actions/workflows/test.yml" target="_blank">
<img src="https://github.com/georgebv/pyextremes/actions/workflows/test.yml/badge.svg?event=pull_request" alt="Test">
</a>
<a href="https://codecov.io/gh/georgebv/pyextremes" target="_blank">
<img src="https://codecov.io/gh/georgebv/pyextremes/branch/master/graph/badge.svg" alt="Coverage">
</a>
<a href="https://pypi.org/project/pyextremes" target="_blank">
<img src="https://badge.fury.io/py/pyextremes.svg" alt="PyPI Package">
</a>
<a href="https://anaconda.org/conda-forge/pyextremes" target="_blank">
<img src="https://img.shields.io/conda/vn/conda-forge/pyextremes.svg" alt="Anaconda Package">
</a>
</p>
# About
**Documentation:** https://georgebv.github.io/pyextremes/
**License:** [MIT](https://opensource.org/licenses/MIT)
**Support:** [ask a question](https://github.com/georgebv/pyextremes/discussions)
or [create an issue](https://github.com/georgebv/pyextremes/issues/new/choose),
any input is appreciated and would help develop the project
**pyextremes** is a Python library aimed at performing univariate
[Extreme Value Analysis (EVA)](https://en.wikipedia.org/wiki/Extreme_value_theory).
It provides tools necessary to perform a wide range of tasks required to
perform EVA, such as:
- extraction of extreme events from time series using methods such as
Block Maxima (BM) or Peaks Over Threshold (POT)
- fitting continuous distributions, such as GEVD, GPD, or user-specified
continuous distributions to the extracted extreme events
- visualization of model inputs, results, and goodness-of-fit statistics
- estimation of extreme events of given probability or return period
(e.g. 100-year event) and of corresponding confidence intervals
- tools assisting with model selection and tuning, such as selection of
block size in BM and threshold in POT
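To make the Block Maxima and return-period ideas concrete, here is a dependency-free sketch using a Gumbel fit by the method of moments. This is illustrative only: pyextremes itself fits GEVD/GPD models with more robust estimators, and the simulated data below is synthetic.

```python
import math
import random

# --- Block Maxima extraction: one maximum per "year" of simulated daily data ---
random.seed(42)
daily = [[random.gauss(0, 1) for _ in range(365)] for _ in range(50)]
annual_maxima = [max(year) for year in daily]

# --- Fit a Gumbel distribution by the method of moments ---
# Gumbel: mean = loc + gamma * scale, variance = (pi^2 / 6) * scale^2
EULER_GAMMA = 0.5772156649015329
mean = sum(annual_maxima) / len(annual_maxima)
var = sum((x - mean) ** 2 for x in annual_maxima) / (len(annual_maxima) - 1)
scale = math.sqrt(6 * var) / math.pi
loc = mean - EULER_GAMMA * scale

def return_level(T: float) -> float:
    """Value exceeded on average once every T blocks (here: years)."""
    return loc - scale * math.log(-math.log(1 - 1 / T))

print(f"100-year return level: {return_level(100):.2f}")
```

The same workflow in pyextremes replaces the hand-rolled fit with a proper GEVD/GPD model, confidence intervals, and diagnostic plots.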
Check out [this repository](https://github.com/georgebv/pyextremes-notebooks)
with Jupyter notebooks used to produce figures for this readme
and for the official documentation.
# Installation
Get latest version from PyPI:
```shell
pip install pyextremes
```
Install with optional dependencies:
```shell
pip install pyextremes[full]
```
Get latest experimental build from GitHub:
```shell
pip install "git+https://github.com/georgebv/pyextremes.git#egg=pyextremes"
```
Get pyextremes for the Anaconda Python distribution:
```shell
conda install -c conda-forge pyextremes
```
# Illustrations
<p align="center" style="font-size:20px; margin:10px 10px 0px 10px">
<em>Model diagnostic</em>
</p>
<p align="center" style="font-size:20px; margin:10px 10px 40px 10px">
<img src="https://raw.githubusercontent.com/georgebv/pyextremes-notebooks/master/notebooks/documentation/readme%20figures/diagnostic.png" alt="Diagnostic plot" width="600px">
</p>
<p align="center" style="font-size:20px; margin:10px 10px 0px 10px">
<em>Extreme value extraction</em>
</p>
<p align="center" style="font-size:20px; margin:10px 10px 40px 10px">
  <img src="https://raw.githubusercontent.com/georgebv/pyextremes-notebooks/master/notebooks/documentation/readme%20figures/extremes.png" alt="Extreme value extraction plot" width="600px">
</p>
<p align="center" style="font-size:20px; margin:10px 10px 0px 10px">
<em>Trace plot</em>
</p>
<p align="center" style="font-size:20px; margin:10px 10px 40px 10px">
  <img src="https://raw.githubusercontent.com/georgebv/pyextremes-notebooks/master/notebooks/documentation/readme%20figures/trace.png" alt="Trace plot" width="600px">
</p>
<p align="center" style="font-size:20px; margin:10px 10px 0px 10px">
<em>Corner plot</em>
</p>
<p align="center" style="font-size:20px; margin:10px 10px 40px 10px">
  <img src="https://raw.githubusercontent.com/georgebv/pyextremes-notebooks/master/notebooks/documentation/readme%20figures/corner.png" alt="Corner plot" width="600px">
</p>
# Acknowledgements
I want to thank Max Larson, who inspired me to start this project
and taught me a lot about extreme value theory.
| text/markdown | George Bocharov | George Bocharov <bocharovgeorgii@gmail.com> | null | null | null | statistics, extreme, extreme value analysis, eva, coastal, ocean, oceanography, marine, environmental, engineering | [
"Development Status :: 5 - Production/Stable",
"Intended Audience :: Science/Research",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
"Programming Language :: Python",
"Programming Language :: Python :: 3",
"Programming Language :: P... | [] | null | null | >=3.9 | [] | [] | [] | [
"numpy<3.0.0,>=1.19.0",
"scipy<2.0.0,>=1.5.0",
"pandas<3.0.0,>=1.0.0",
"emcee<4.0.0,>=3.0.3",
"matplotlib!=3.9.1,<4.0.0,>=3.3.0",
"lmoments3>=1.0.8; extra == \"full\"",
"tqdm<5.0.0,>=4.0.0; extra == \"full\"",
"lmoments3>=1.0.8; extra == \"lmoments\""
] | [] | [] | [] | [
"github, https://github.com/georgebv/pyextremes",
"homepage, https://georgebv.github.io/pyextremes"
] | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true} | 2026-02-19T05:10:16.361862 | pyextremes-2.5.0-py3-none-any.whl | 59,242 | 78/84/9e80a364631b956046202828e28c8ba9f33d4762ad8eae36655554b378da/pyextremes-2.5.0-py3-none-any.whl | py3 | bdist_wheel | null | false | cddacf72bb2d49bf6d39734313df8428 | 293049da0bc4656859a5336d6eebfab34d6356969191ef871a8c20134643cfde | 78849e80a364631b956046202828e28c8ba9f33d4762ad8eae36655554b378da | MIT | [
"LICENSE"
] | 966 |
2.1 | databricks-vectorsearch | 0.66 | Databricks Vector Search Client | **DB license**
Copyright (2022) Databricks, Inc.
This library (the "Software") may not be used except in connection with the Licensee's use of the Databricks Platform Services
pursuant to an Agreement (defined below) between Licensee (defined below) and Databricks, Inc. ("Databricks"). This Software
shall be deemed part of the Downloadable Services under the Agreement, or if the Agreement does not define Downloadable Services,
Subscription Services, or if neither are defined then the term in such Agreement that refers to the applicable Databricks Platform
Services (as defined below) shall be substituted herein for "Downloadable Services". Licensee's use of the Software must comply at
all times with any restrictions applicable to the Downloadable Services and Subscription Services, generally, and must be used in
accordance with any applicable documentation.
Additionally, and notwithstanding anything in the Agreement to the contrary:
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES
OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE
LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR
IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
If you have not agreed to an Agreement or otherwise do not agree to these terms, you may not use the Software.
This license terminates automatically upon the termination of the Agreement or Licensee's breach of these terms.
Agreement: the agreement between Databricks and Licensee governing the use of the Databricks Platform Services, which shall be, with
respect to Databricks, the Databricks Terms of Service located at www.databricks.com/termsofservice, and with respect to Databricks
Community Edition, the Community Edition Terms of Service located at www.databricks.com/ce-termsofuse, in each case unless Licensee
has entered into a separate written agreement with Databricks governing the use of the applicable Databricks Platform Services.
Databricks Platform Services: the Databricks services or the Databricks Community Edition services, according to where the Software is used.
Licensee: the user of the Software, or, if the Software is being used on behalf of a company, the company.
| text/markdown | Databricks | feedback@databricks.com | null | null | null | null | [] | [] | null | null | >=3.7 | [] | [] | [] | [
"deprecation>=2",
"mlflow-skinny<4,>=2.11.3",
"protobuf<7,>=3.12.0",
"requests>=2"
] | [] | [] | [] | [] | twine/3.8.0 pkginfo/1.10.0 readme-renderer/34.0 requests/2.22.0 requests-toolbelt/1.0.0 urllib3/1.26.20 tqdm/4.64.1 importlib-metadata/4.8.3 keyring/18.0.1 rfc3986/1.5.0 colorama/0.4.3 CPython/3.6.15 | 2026-02-19T05:10:07.121328 | databricks_vectorsearch-0.66-py3-none-any.whl | 21,979 | 40/43/da072250ebbc1898a92fe2384177e99a40540b35045752da48db9c2a61dd/databricks_vectorsearch-0.66-py3-none-any.whl | py3 | bdist_wheel | null | false | 9472cb67c2ef96282bd114dfc145182a | ca71ecb29687c0444fd3c1f2053dd250c2296b8a3e277f87231d66c7e85533bd | 4043da072250ebbc1898a92fe2384177e99a40540b35045752da48db9c2a61dd | null | [] | 155,144 |