metadata_version string | name string | version string | summary string | description string | description_content_type string | author string | author_email string | maintainer string | maintainer_email string | license string | keywords string | classifiers list | platform list | home_page string | download_url string | requires_python string | requires list | provides list | obsoletes list | requires_dist list | provides_dist list | obsoletes_dist list | requires_external list | project_urls list | uploaded_via string | upload_time timestamp[us] | filename string | size int64 | path string | python_version string | packagetype string | comment_text string | has_signature bool | md5_digest string | sha256_digest string | blake2_256_digest string | license_expression string | license_files list | recent_7d_downloads int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2.4 | chesserp-api | 0.1.0 | Python client library for ChessERP API | # ChessERP API Python Library
[](https://www.python.org/downloads/)
[](LICENSE)
A robust, reusable Python library for interacting with the ChessERP API. It abstracts away authentication, session handling, and automatic pagination, letting developers work with typed, validated Python objects.
## Key Features
- **Unified Client**: A simple interface to every ChessERP endpoint
- **Automatic Validation**: Pydantic v2 models guarantee data integrity
- **Session Handling**: Automatic authentication, with retries on expiry (401)
- **Transparent Pagination**: Automatically fetches every batch of data
- **Static Typing**: Full IDE autocompletion support
- **Raw & Parsed Methods**: Access raw (JSON) or validated (Pydantic) data
- **Integrated Logging**: Full operation traceability (file + console)
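The retry-on-expiry behavior can be sketched as a small wrapper around the session (hypothetical internals with a stand-in session; `FakeSession` and `request_with_reauth` are illustrative names, not the library's API):

```python
class FakeSession:
    """Stand-in for an authenticated HTTP session whose token has expired."""
    def __init__(self):
        self.token = None

    def login(self):
        self.token = "fresh-token"

    def get(self, path):
        if self.token is None:
            return {"status": 401}  # server rejects the expired/missing token
        return {"status": 200, "data": [path]}

def request_with_reauth(session, path, max_retries=1):
    """On a 401 response, re-authenticate and retry the request once."""
    for _ in range(max_retries + 1):
        resp = session.get(path)
        if resp["status"] != 401:
            return resp
        session.login()  # token expired: log in again and retry
    raise RuntimeError("authentication failed after retry")

s = FakeSession()
print(request_with_reauth(s, "/ventas")["status"])  # 200 after one re-auth
```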
## Supported Endpoints
| Resource | Methods | Paginated |
|----------|---------|-----------|
| **Sales** | `get_sales()`, `get_sales_raw()`, `export_sales_report()` | Yes |
| **Articles** | `get_articles()`, `get_articles_raw()` | Yes |
| **Stock** | `get_stock()`, `get_stock_raw()` | No |
| **Customers** | `get_customers()`, `get_customers_raw()` | Yes |
| **Orders** | `get_orders()`, `get_orders_raw()` | No |
| **Sales Staff** | `get_staff()`, `get_staff_raw()` | No |
| **Sales Routes** | `get_routes()`, `get_routes_raw()` | No |
| **Marketing** | `get_marketing()`, `get_marketing_raw()` | No |
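Paginated endpoints are fetched batch by batch (*lotes*) until the API returns an empty batch. A minimal sketch of the accumulation loop, with a stand-in `fetch_batch` callable (illustrative, not the library's internals):

```python
def fetch_all_batches(fetch_batch):
    """Accumulate batches (lotes) until the API returns an empty one."""
    results, nro_lote = [], 1
    while True:
        batch = fetch_batch(nro_lote)
        if not batch:
            break
        results.extend(batch)
        nro_lote += 1
    return results

# Fake endpoint: two full batches, one partial batch, then empty.
pages = {1: ["a", "b"], 2: ["c", "d"], 3: ["e"]}
rows = fetch_all_batches(lambda n: pages.get(n, []))
print(rows)  # ['a', 'b', 'c', 'd', 'e']
```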
## Installation
### Clone the Repository
```bash
git clone https://github.com/tuusuario/chesserp-api.git
cd chesserp-api
```
### Install
```bash
pip install -e .          # Development (editable) mode
pip install -e ".[dev]"   # With testing dependencies
```
### Configuration
Copy the example file and fill in your credentials:
```bash
cp .env.example .env
```
The `.env` file uses a per-company prefix pattern:
```env
# Company 1
EMPRESA1_API_URL=http://your-server:port/
EMPRESA1_USERNAME=your_username
EMPRESA1_PASSWORD=your_password
# Company 2 (optional)
EMPRESA2_API_URL=http://other-server:port/
EMPRESA2_USERNAME=your_username
EMPRESA2_PASSWORD=your_password
```
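A sketch of how a prefix-based loader can resolve these variables from the environment (illustrative; `load_credentials` is a hypothetical helper, not part of the library):

```python
import os

def load_credentials(prefix):
    """Read API_URL / USERNAME / PASSWORD using a per-company prefix."""
    keys = ("API_URL", "USERNAME", "PASSWORD")
    return {k.lower(): os.environ[f"{prefix}{k}"] for k in keys}

# Simulate a populated environment for company 1.
os.environ.update({
    "EMPRESA1_API_URL": "http://server-a:8080/",
    "EMPRESA1_USERNAME": "alice",
    "EMPRESA1_PASSWORD": "secret",
})
print(load_credentials("EMPRESA1_"))
```

The same loader then serves any number of companies just by switching the prefix.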
## Quick Start
### Client Initialization
```python
from chesserp import ChessClient
# From environment variables with a prefix
client = ChessClient.from_env(prefix="EMPRESA1_")

# Or directly with credentials
client = ChessClient(
    api_url="http://your-server:port",
    username="username",
    password="password"
)
```
### Query Sales
```python
ventas = client.get_sales(
    fecha_desde="2025-01-01",
    fecha_hasta="2025-01-31",
    detallado=True
)
for venta in ventas:
    print(f"{venta.letra} {venta.serie}-{venta.nro_doc}")
    print(f"Customer: {venta.nombre_cliente}")
    print(f"Total: ${venta.imp_total}")
    for linea in venta.lines:
        print(f"  - {linea.ds_articulo} x{linea.cantidad_solicitada}")
```
### Query Stock
```python
stock = client.get_stock(id_deposito=1)
for item in stock:
    print(f"{item.ds_articulo}: {item.cant_bultos} bultos")
```
### Export Sales Report to Excel
```python
excel_bytes = client.export_sales_report(
    fecha_desde="2025-01-01",
    fecha_hasta="2025-01-31",
    empresas="1",
    tiposdoc="FCVTA,DVVTA"
)
with open("reporte_ventas.xlsx", "wb") as f:
    f.write(excel_bytes)
```
### Raw (JSON) Data Access
For ETL pipelines, or whenever you need the unvalidated JSON:
```python
raw_data = client.get_sales_raw(
    fecha_desde="2025-01-01",
    fecha_hasta="2025-01-31",
    nro_lote=1
)
```
Every `get_*()` method accepts `raw=True` to return lists of dicts instead of Pydantic objects.
### More Examples
```python
# Customers
clientes = client.get_customers(anulado=False)

# Orders
pedidos = client.get_orders(fecha_pedido="2025-01-15")

# Sales staff
personal = client.get_staff(sucursal=1)

# Sales routes
rutas = client.get_routes(sucursal=1, fuerza_venta=10)

# Marketing hierarchy
segmentos = client.get_marketing(cod_scan=0)
```
## Error Handling
```python
from chesserp import ChessClient, AuthError, ApiError, ChessError

try:
    ventas = client.get_sales("2025-01-01", "2025-01-31")
except AuthError as e:
    print(f"Authentication error: {e}")
except ApiError as e:
    print(f"API error: {e.status_code} - {e.message}")
except ChessError as e:
    print(f"General error: {e}")
```
## Project Structure
```
chesserp-api/
├── chesserp/                  # Main package
│   ├── __init__.py            # Exports: ChessClient, exceptions
│   ├── client.py              # Main client (auth, pagination, endpoints)
│   ├── exceptions.py          # ChessError, AuthError, ApiError
│   ├── logger.py              # Centralized logger (file + console)
│   ├── sales.py               # Sales service
│   ├── stock.py               # Stock service (pandas)
│   ├── config/
│   │   └── settings.py        # PathConfig, LogLevel, Settings
│   └── models/                # Pydantic v2 models
│       ├── __init__.py        # Re-exports all models
│       ├── sales.py           # Sale
│       ├── inventory.py       # Articulo, StockFisico
│       ├── clients.py         # Cliente
│       ├── orders.py          # Pedido, LineaPedido
│       ├── routes.py          # RutaVenta, ClienteRuta
│       ├── staff.py           # PersonalComercial
│       └── marketing.py       # JerarquiaMkt, CanalMkt, SubCanalMkt
├── live_test.py               # Tests against the real API
├── usage_example.py           # Interactive test menu
├── main.py                    # Batch export script
├── pyproject.toml             # Project config and dependencies
├── requirements.txt           # Dependencies
└── .env.example               # Environment variable template
```
## Test Scripts
### Testing against the real API
```bash
python live_test.py --prefix EMPRESA1_ --test all
python live_test.py --prefix EMPRESA1_ --test sales
python live_test.py --test quick
```
### Interactive menu
```bash
python usage_example.py
```
## Dependencies
| Package | Purpose |
|---------|---------|
| `requests` | HTTP client |
| `python-dotenv` | Environment variables from .env |
| `pydantic` | Model validation |
| `pandas` | Data manipulation |
| `openpyxl` | Excel export |
| `numpy` | Numeric operations |
## Roadmap
- [ ] PyPI packaging (`pip install chesserp-api`)
- [ ] POST/PUT support (create orders, update stock)
- [ ] Result caching with configurable TTL
- [ ] Async support (httpx)
- [ ] CLI for common operations
## License
This project is licensed under the MIT license. See the `LICENSE` file for details.
---
**Python 3.10+ | Pydantic v2 | Requests**
| text/markdown | Nahuel | null | null | null | MIT | chesserp, erp, api, client | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Pyth... | [] | null | null | >=3.10 | [] | [] | [] | [
"requests>=2.28.0",
"python-dotenv>=1.0.0",
"pandas>=2.0.0",
"pydantic>=2.0.0",
"openpyxl>=3.1.0",
"numpy>=1.24.0",
"pytest>=7.0.0; extra == \"dev\"",
"requests-mock>=1.11.0; extra == \"dev\"",
"pytest-cov>=4.0.0; extra == \"dev\""
] | [] | [] | [] | [
"Homepage, https://github.com/nahuel/chesserp-api",
"Repository, https://github.com/nahuel/chesserp-api"
] | twine/6.2.0 CPython/3.13.12 | 2026-02-19T00:31:53.292303 | chesserp_api-0.1.0.tar.gz | 23,208 | d3/bf/41f83f56e1e3bf642ea1adff9e88fdd08ef7c6cb12ce852382c5aa2f4645/chesserp_api-0.1.0.tar.gz | source | sdist | null | false | e8fba9ec4f3fd80d4e76a1718e5b6802 | ba6bcb6a276cc53fc2e50e7294fdca5300ec1103cc1fdef586384ef32a3a80af | d3bf41f83f56e1e3bf642ea1adff9e88fdd08ef7c6cb12ce852382c5aa2f4645 | null | [] | 255 |
2.4 | ado-asana-sync | 1.25.0 | Tool to sync work items and pull requests from Azure DevOps to Asana | # ado-asana-sync
[](https://github.com/danstis/ado-asana-sync/actions/workflows/build.yml)
[](https://sonarcloud.io/summary/new_code?id=danstis_ado-asana-sync)
[](https://sonarcloud.io/summary/new_code?id=danstis_ado-asana-sync)
[](https://github.com/danstis/ado-asana-sync/releases/latest)
[](https://pypi.org/project/ado-asana-sync/)
[](https://open.vscode.dev/danstis/ado-asana-sync)
This project synchronizes work items and pull requests between Azure DevOps (ADO) and Asana. It is currently in development and not ready for production use; breaking changes will occur as needed.
## How to use
- Get the latest container image from the [Github Container Registry](https://github.com/danstis/ado-asana-sync/pkgs/container/ado-asana-sync).
- Configure the environment variables with the relevant values:
- `ADO_PAT` - Your Personal Access Token for ADO to access the work items.
- `ADO_URL` - The full URL of your Azure DevOps instance.
- `ASANA_TOKEN` - Your Personal Access Token for Asana to access the work items.
- `ASANA_WORKSPACE_NAME` - Name of the Asana workspace to sync with.
- `CLOSED_STATES` - Comma-separated list of states that will be considered closed.
- `THREAD_COUNT` - Number of projects to sync in parallel. Must be a positive integer.
- `SYNC_THRESHOLD` - Number of days to continue syncing closed tasks before removing their mappings (default: 30). Must be a non-negative integer.
- `SLEEP_TIME` - Duration in seconds to sleep between sync runs. Must be a positive integer.
- `SYNCED_TAG_NAME` - Name of the tag in Asana to append to all synced items. Must be a valid Asana tag name.
- `LOGLEVEL` - Console log level (default: INFO). Controls what is shown in the terminal.
- `APPINSIGHTS_LOGLEVEL` - Application Insights log level (default: WARNING). Controls minimum level sent to telemetry.
- `APPINSIGHTS_SAMPLE_DEBUG` - Sampling rate for DEBUG logs sent to telemetry (default: 0.05 = 5%).
- `APPINSIGHTS_SAMPLE_INFO` - Sampling rate for INFO logs sent to telemetry (default: 0.05 = 5%).
- `APPINSIGHTS_SAMPLE_WARNING` - Sampling rate for WARNING logs sent to telemetry (default: 1.0 = 100%).
- `APPINSIGHTS_SAMPLE_ERROR` - Sampling rate for ERROR logs sent to telemetry (default: 1.0 = 100%).
- `APPINSIGHTS_SAMPLE_CRITICAL` - Sampling rate for CRITICAL logs sent to telemetry (default: 1.0 = 100%).
- Run the container with the configured environment variables.
- The application will start syncing work items and pull requests between ADO and Asana based on the configured settings.
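The per-level `APPINSIGHTS_SAMPLE_*` variables above can be modelled as a simple probabilistic filter over log records (an illustrative sketch of the idea, not the project's actual code):

```python
import random

# Mirrors the APPINSIGHTS_SAMPLE_* defaults listed above.
SAMPLE_RATES = {
    "DEBUG": 0.05, "INFO": 0.05, "WARNING": 1.0, "ERROR": 1.0, "CRITICAL": 1.0,
}

def should_send(level, rng=random):
    """Keep a record with probability equal to its level's sample rate."""
    return rng.random() < SAMPLE_RATES.get(level, 1.0)

# WARNING and above always pass; DEBUG passes roughly 5% of the time.
print(should_send("ERROR"))  # True
```

High-severity records are always forwarded to telemetry, while chatty DEBUG/INFO records are down-sampled to control ingestion cost.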
## Features
### Work Item Synchronization
- Synchronizes Azure DevOps work items (User Stories, Bugs, Tasks, etc.) to Asana tasks
- Maintains bidirectional sync for updates, assignments, and status changes
- Automatic user matching between ADO and Asana based on email addresses
- Configurable closed states mapping
### Pull Request Synchronization
- Synchronizes active Pull Requests from Azure DevOps to Asana
- Creates separate reviewer tasks for each assigned reviewer
- Task titles follow the format: "Pull Request 5: Update readme (Reviewer Name)"
- Automatic status management:
- Approved reviews (approve/approve with suggestions) → Close Asana task
- Other review states (waiting for author, reject, no vote) → Keep task open
- PR completion/abandonment → Close all reviewer tasks
- Reviewer removal → Close reviewer's task
- Handles reviewer additions, removals, and approval resets
- Syncs PR title changes to Asana task titles
#### Pull Request Selection Logic
The system follows this logic to determine which PRs to sync:
1. **Repository Discovery**: For each configured ADO project, discover all Git repositories
1. **Active PR Filtering**: Query only PRs with `status="active"` (excludes completed/abandoned PRs)
1. **Reviewer Requirements**: Only sync PRs that have at least one assigned reviewer
1. **User Matching**: Only create tasks for reviewers who have matching Asana accounts (by email)
1. **Deduplication**: Prevent duplicate reviewer processing by unique email identifier
1. **Cleanup Processing**: Additionally process previously synced PRs that may now be closed/completed
**Exclusion Criteria:**
- PRs without reviewers are skipped (logs: "No reviewers found for PR X")
- Reviewers not found in Asana are skipped (logs: "PR X: reviewer Y not found in Asana")
- Repositories/projects without Git API access are skipped gracefully
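The selection and exclusion rules above can be sketched as a pure filter function (illustrative; field names such as `reviewers` and `email` are assumptions about the data model, not the project's actual code):

```python
def reviewer_tasks(pr, asana_users_by_email):
    """Return (reviewer_email, task_title) pairs for an active PR,
    applying the selection rules: active status, at least one reviewer,
    Asana match by email, and deduplication by email."""
    if pr["status"] != "active":
        return []  # completed/abandoned PRs are handled by cleanup instead
    tasks, seen = [], set()
    for reviewer in pr.get("reviewers", []):
        email = reviewer["email"].lower()
        if email in seen:
            continue  # deduplicate by unique email identifier
        seen.add(email)
        if email not in asana_users_by_email:
            continue  # reviewer has no matching Asana account
        title = f"Pull Request {pr['id']}: {pr['title']} ({reviewer['name']})"
        tasks.append((email, title))
    return tasks

pr = {"id": 5, "title": "Update readme", "status": "active",
      "reviewers": [{"name": "Reviewer Name", "email": "rev@example.com"},
                    {"name": "Ghost", "email": "ghost@example.com"}]}
print(reviewer_tasks(pr, {"rev@example.com": "asana-gid-1"}))
# [('rev@example.com', 'Pull Request 5: Update readme (Reviewer Name)')]
```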
## Changelog
See [CHANGELOG.md](CHANGELOG.md) for a detailed history of changes and new features.
## Development
### Commit message style
This repo uses [Conventional Commits](https://www.conventionalcommits.org/) to ensure the build numbering is generated correctly
### Manual testing
To test the application manually, you can use the following steps:
#### Work Item Testing
1. Create new ADO work item and ensure it is synced to Asana.
1. Rename Asana task and ensure it is reverted back to the ADO name.
1. Rename ADO task and ensure it is synced to Asana.
1. Remove Synced tag from item in Asana and ensure it is replaced.
1. Delete synced tag from Asana workspace and from appdata.json file and ensure it is re-created and assigned to all synced tasks.
1. Mark Asana task as complete and ensure it is re-opened.
1. Mark ADO task as complete and ensure it is marked as complete in Asana.
1. Re-open ADO task and ensure it is re-opened in Asana.
#### Pull Request Testing
1. Create new Pull Request in ADO with reviewers and ensure reviewer tasks are created in Asana.
1. Change the PR title in ADO and ensure the title updates in Asana tasks on next sync.
1. Add a reviewer to the PR and ensure a new task is created for them.
1. Remove a reviewer from the PR and ensure their task is closed.
1. Remove all reviewers from the PR and ensure all tasks are closed.
1. Approve the PR as a reviewer and ensure the reviewer's task is closed.
1. Approve with suggestions and ensure the reviewer's task is closed.
1. Reject or request changes and ensure the reviewer's task remains open.
1. Reset approval and ensure the reviewer's task is reopened.
1. Complete/abandon the PR and ensure all reviewer tasks are closed.
### Reference
#### ADO
- [azure-devops PyPi](https://pypi.org/project/azure-devops/)
- [azure-devops GitHub](https://github.com/microsoft/azure-devops-python-api)
- [azure-devops API reference](https://learn.microsoft.com/en-us/rest/api/azure/devops/?view=azure-devops-rest-7.1&viewFallbackFrom=azure-devops-rest-5.1)
- [azure-devops samples](https://github.com/microsoft/azure-devops-python-samples/blob/main/src/samples/work_item_tracking.py)
#### Asana
- [Asana PyPi](https://pypi.org/project/asana/)
- [Asana GitHub](https://github.com/asana/python-asana)
- [Asana API Reference](https://developers.asana.com/docs/rich-text)
| text/markdown | null | Dan Anstis <dan@bsod.co.nz> | null | null | MIT | null | [] | [] | null | null | >=3.13 | [] | [] | [] | [
"asana>=5.0.3",
"azure-devops>=7.1.0b3",
"azure-monitor-opentelemetry>=1.0.0"
] | [] | [] | [] | [] | twine/6.1.0 CPython/3.13.7 | 2026-02-19T00:31:16.263436 | ado_asana_sync-1.25.0.tar.gz | 153,067 | b5/46/fc2fe971914f3a88d6c582ff4b24c6bab1670a9fdede180736bfcc6c15ca/ado_asana_sync-1.25.0.tar.gz | source | sdist | null | false | 0047d2d37196167b077daedd37939ff4 | bc0a8662ac206ce60270e8459483a08250f8fcc1a2bab8c57132aca00ab50904 | b546fc2fe971914f3a88d6c582ff4b24c6bab1670a9fdede180736bfcc6c15ca | null | [
"LICENSE"
] | 247 |
2.4 | cukks | 0.1.2 | PyTorch-compatible encrypted deep learning inference using CKKS homomorphic encryption | <p align="center">
<a href="README.md">English</a> |
<a href="README.ko.md">한국어</a>
</p>
<h1 align="center">CuKKS</h1>
<p align="center">
<strong>GPU-accelerated CKKS Homomorphic Encryption for PyTorch</strong>
</p>
<p align="center">
<a href="https://github.com/devUuung/CuKKS/actions"><img src="https://github.com/devUuung/CuKKS/actions/workflows/build-wheels.yml/badge.svg" alt="Build Status"></a>
<a href="https://github.com/devUuung/CuKKS/blob/main/LICENSE"><img src="https://img.shields.io/badge/License-Apache%202.0-blue.svg" alt="License"></a>
<a href="https://www.python.org/downloads/"><img src="https://img.shields.io/badge/python-3.10--3.13-blue.svg" alt="Python 3.10-3.13"></a>
</p>
<p align="center">
Run trained PyTorch models on <strong>encrypted data</strong> — preserving privacy while maintaining accuracy.<br>
Built on OpenFHE with CUDA acceleration.
</p>
---
## Quick Start
```python
import torch.nn as nn
import cukks
# 1. Define and train your model (standard PyTorch)
model = nn.Sequential(nn.Linear(784, 128), nn.ReLU(), nn.Linear(128, 10))
# 2. Convert to encrypted model (polynomial ReLU approximation)
enc_model, ctx = cukks.convert(model)
# 3. Run encrypted inference
enc_input = ctx.encrypt(test_input)
enc_output = enc_model(enc_input)
output = ctx.decrypt(enc_output)
```
## Installation
### Automatic (Recommended)
```bash
pip install cukks # Auto-detects PyTorch's CUDA and installs matching backend
```
`pip install cukks` detects the CUDA version your PyTorch was built with and automatically installs the matching `cukks-cuXXX` GPU backend. No manual version matching needed.
### Manual
```bash
pip install cukks-cu121 # Explicitly install for CUDA 12.1
```
| Package | CUDA | Supported GPUs |
|---------|------|----------------|
| `cukks-cu118` | 11.8 | V100, T4, RTX 20/30/40xx, A100, H100 |
| `cukks-cu121` | 12.1 | V100, T4, RTX 20/30/40xx, A100, H100 |
| `cukks-cu124` | 12.4 | V100, T4, RTX 20/30/40xx, A100, H100 |
| `cukks-cu128` | 12.8 | All above + **RTX 50xx** |
Or use extras: `pip install cukks[cu121]`
<details>
<summary><strong>Post-install CLI & environment variables</strong></summary>
```bash
cukks-install-backend # Auto-detect & install
cukks-install-backend cu128 # Install specific backend
cukks-install-backend --status # Show CUDA compatibility status
```
| Variable | Effect |
|----------|--------|
| `CUKKS_BACKEND=cukks-cu128` | Force a specific backend |
| `CUKKS_NO_BACKEND=1` | Skip backend (CPU-only) |
</details>
<details>
<summary><strong>Docker images</strong></summary>
| CUDA | Compatible Docker Images |
|------|-------------------------|
| 11.8 | `pytorch/pytorch:2.1.0-cuda11.8-cudnn8-runtime` |
| 12.1 | `pytorch/pytorch:2.2.0-cuda12.1-cudnn8-runtime` |
| 12.4 | `pytorch/pytorch:2.4.0-cuda12.4-cudnn9-runtime` |
| 12.8 | `nvidia/cuda:12.8.0-cudnn9-runtime-ubuntu22.04` |
```bash
docker run --gpus all -it pytorch/pytorch:2.2.0-cuda12.1-cudnn8-runtime bash
pip install cukks # auto-detects CUDA 12.1
```
</details>
<details>
<summary><strong>Build from source</strong></summary>
```bash
git clone https://github.com/devUuung/CuKKS.git && cd CuKKS
pip install -e .
# Build OpenFHE backend
cd openfhe-gpu-public && mkdir build && cd build
cmake .. -DWITH_CUDA=ON && make -j$(nproc)
cd ../../bindings/openfhe_backend
pip install -e .
```
</details>
## Features
| Feature | Description |
|---------|-------------|
| **PyTorch API** | Familiar interface — just call `cukks.convert(model)` |
| **GPU Acceleration** | CUDA-accelerated HE operations via OpenFHE |
| **Auto Optimization** | BatchNorm folding, BSGS matrix multiplication |
| **Wide Layer Support** | Linear, Conv2d, ReLU/GELU/SiLU, Pool, LayerNorm, Attention |
## Supported Layers
| Layer | Encrypted Version | Notes |
|-------|------------------|-------|
| `nn.Linear` | `EncryptedLinear` | BSGS optimization |
| `nn.Conv2d` | `EncryptedConv2d` | im2col method |
| `nn.ReLU/GELU/SiLU` | Polynomial approx | Configurable degree |
| `nn.AvgPool2d` | `EncryptedAvgPool2d` | Rotation-based |
| `nn.BatchNorm` | Folded | Merged into prev layer |
| `nn.LayerNorm` | `EncryptedLayerNorm` | Polynomial approx |
| `nn.Attention` | `EncryptedApproxAttention` | seq_len=1 |
<details>
<summary><strong>Full layer support table</strong></summary>
| PyTorch Layer | Encrypted Version | Notes |
|--------------|-------------------|-------|
| `nn.Linear` | `EncryptedLinear` | Full support with BSGS optimization |
| `nn.Conv2d` | `EncryptedConv2d` | Via im2col method |
| `nn.ReLU` | `EncryptedReLU` | Polynomial approximation |
| `nn.GELU` | `EncryptedGELU` | Polynomial approximation |
| `nn.SiLU` | `EncryptedSiLU` | Polynomial approximation |
| `nn.Sigmoid` | `EncryptedSigmoid` | Polynomial approximation |
| `nn.Tanh` | `EncryptedTanh` | Polynomial approximation |
| `nn.AvgPool2d` | `EncryptedAvgPool2d` | Full support |
| `nn.MaxPool2d` | `EncryptedMaxPool2d` | Approximate via polynomial |
| `nn.Flatten` | `EncryptedFlatten` | Logical reshape |
| `nn.BatchNorm1d/2d` | Folded | Merged into preceding layer |
| `nn.Sequential` | `EncryptedSequential` | Full support |
| `nn.Dropout` | `EncryptedDropout` | No-op during inference |
| `nn.LayerNorm` | `EncryptedLayerNorm` | Pure HE polynomial approximation |
| `nn.MultiheadAttention` | `EncryptedApproxAttention` | Polynomial softmax (seq_len=1) |
</details>
## Activation Functions
CKKS only supports polynomial operations. CuKKS approximates activations (ReLU, GELU, SiLU, etc.) using polynomial fitting:
```python
# Default: degree-4 polynomial approximation (recommended)
enc_model, ctx = cukks.convert(model)
# Higher degree for better accuracy (costs more multiplicative depth)
enc_model, ctx = cukks.convert(model, activation_degree=8)
```
The default `activation_degree=4` provides a good balance between accuracy and depth consumption. Higher degrees approximate the original activation more closely but require deeper circuits.
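Because CKKS evaluates only additions and multiplications, the quality of an approximated activation depends directly on the polynomial degree. The trade-off can be demonstrated with a plain least-squares fit of ReLU (a generic NumPy sketch; the fitting range and method are illustrative assumptions, not CuKKS internals):

```python
import numpy as np

def fit_activation(fn, degree, lo=-4.0, hi=4.0, n=1000):
    """Least-squares polynomial fit of an activation over [lo, hi]."""
    x = np.linspace(lo, hi, n)
    coeffs = np.polyfit(x, fn(x), degree)  # highest-degree coefficient first
    return lambda t: np.polyval(coeffs, t)

relu = lambda x: np.maximum(x, 0.0)
p4 = fit_activation(relu, 4)
p8 = fit_activation(relu, 8)

x = np.linspace(-4.0, 4.0, 1000)
err4 = np.mean((p4(x) - relu(x)) ** 2)
err8 = np.mean((p8(x) - relu(x)) ** 2)
print(err8 < err4)  # True: higher degree tracks ReLU more closely
```

The degree-8 fit is closer to the true activation, but under CKKS each extra degree costs multiplicative depth, which is exactly the balance `activation_degree` controls.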
## GPU Acceleration
| Operation | Accelerated |
|-----------|-------------|
| Add/Sub/Mul/Square | ✅ GPU |
| Rotate/Rescale | ✅ GPU |
| Bootstrap | ✅ GPU |
| Encrypt/Decrypt | CPU |
```python
from ckks.torch_api import CKKSContext, CKKSConfig
config = CKKSConfig(poly_mod_degree=8192, scale_bits=40)
ctx = CKKSContext(config, enable_gpu=True) # GPU enabled by default
```
## Examples
```bash
# Quick demo (no GPU required)
python -m cukks.examples.encrypted_inference --demo conversion
# MNIST encrypted inference
python examples/mnist_encrypted.py --hidden 64 --samples 5
```
<details>
<summary><strong>CNN example</strong></summary>
```python
import torch.nn as nn
import cukks
class MNISTCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv1 = nn.Conv2d(1, 8, kernel_size=3, padding=1)
        self.act1 = nn.ReLU()
        self.pool1 = nn.AvgPool2d(2)
        self.flatten = nn.Flatten()
        self.fc = nn.Linear(8 * 14 * 14, 10)

    def forward(self, x):
        return self.fc(self.flatten(self.pool1(self.act1(self.conv1(x)))))
model = MNISTCNN()
enc_model, ctx = cukks.convert(model)
enc_input = ctx.encrypt(image)
prediction = ctx.decrypt(enc_model(enc_input)).argmax()
```
> **Note**: All operations in `forward()` must be layer attributes (e.g., `self.act1`), not inline operations like `x ** 2`.
</details>
<details>
<summary><strong>Batch processing</strong></summary>
```python
# Pack multiple samples into a single ciphertext (SIMD)
samples = [torch.randn(784) for _ in range(8)]
enc_batch = ctx.encrypt_batch(samples)
enc_output = enc_model(enc_batch)
outputs = ctx.decrypt_batch(enc_output, num_samples=8)
```
</details>
## Troubleshooting
| Issue | Solution |
|-------|----------|
| Out of Memory | Reduce `poly_mod_degree` (8192 instead of 16384) |
| Low Accuracy | Increase `activation_degree` (e.g., 8 or 16) for better approximation |
| Slow Performance | Enable batch processing, reduce network depth |
## Documentation
- [API Reference](docs/api.md)
- [GPU Acceleration Guide](docs/gpu-acceleration.md)
- [CKKS Concepts](docs/concepts.md)
## License
Apache License 2.0
## Citation
```bibtex
@software{cukks,
  title = {CuKKS: PyTorch-compatible Encrypted Deep Learning},
  year  = {2024},
  url   = {https://github.com/devUuung/CuKKS}
}
```
## Related
### Libraries
- [OpenFHE](https://github.com/openfheorg/openfhe-development) — Underlying HE library
- [Microsoft SEAL](https://github.com/microsoft/SEAL) — Alternative HE library
### Papers
- [Homomorphic Encryption for Arithmetic of Approximate Numbers](https://eprint.iacr.org/2016/421) — Cheon et al. (CKKS)
- [Bootstrapping for Approximate Homomorphic Encryption](https://eprint.iacr.org/2018/153) — Cheon et al.
- [Faster Homomorphic Linear Transformations in HElib](https://eprint.iacr.org/2018/244) — Halevi & Shoup (BSGS)
| text/markdown | CuKKS Team | null | null | null | Apache-2.0 | homomorphic encryption, CKKS, deep learning, PyTorch, privacy, secure inference | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"Intended Audience :: Science/Research",
"License :: OSI Approved :: Apache Software License",
"Operating System :: POSIX :: Linux",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Langua... | [] | null | null | >=3.10 | [] | [] | [] | [
"torch>=2.0",
"numpy>=1.23",
"pytest>=7.0; extra == \"dev\"",
"pytest-cov>=4.0; extra == \"dev\"",
"black>=23.0; extra == \"dev\"",
"ruff>=0.1; extra == \"dev\"",
"mypy>=1.0; extra == \"dev\"",
"cukks-cu118; extra == \"cu118\"",
"cukks-cu121; extra == \"cu121\"",
"cukks-cu124; extra == \"cu124\"",... | [] | [] | [] | [
"Homepage, https://github.com/devUuung/CuKKS",
"Documentation, https://github.com/devUuung/CuKKS#readme",
"Repository, https://github.com/devUuung/CuKKS"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-19T00:29:44.449387 | cukks-0.1.2.tar.gz | 118,900 | 72/88/676ff93b1caf8d2091a109c2073e3da55f071ef6353f5c5479a990efbdc3/cukks-0.1.2.tar.gz | source | sdist | null | false | 82b0d4a287f3ed919106a336d0ea4acd | 8e9177d5013da628ce8344ab36c06e64cecfe7b0304363287dc730391e1a1795 | 7288676ff93b1caf8d2091a109c2073e3da55f071ef6353f5c5479a990efbdc3 | null | [
"LICENSE",
"NOTICE"
] | 248 |
2.4 | configgle | 1.1.10 | Hierarchical experiment configuration using pure Python dataclass factories and dependency injection. | # configgle🤭
Hierarchical configuration using pure Python dataclasses, with typed factory
methods, covariant protocols, and full inheritance support.
## Installation
```bash
python -m pip install configgle
```
## Example
```python
from configgle import Fig
class Model:
    class Config(Fig):
        hidden_size: int = 256
        num_layers: int = 4

    def __init__(self, config: Config):
        self.config = config
# Create and modify config
config = Model.Config(hidden_size=512)
# Instantiate the parent class
model = config.make()
print(model.config.hidden_size) # 512
```
Configs are plain mutable dataclasses, so experiments are just functions that
tweak a baseline:
```python
def exp000() -> Model.Config:
    return Model.Config()

def exp001() -> Model.Config:
    cfg = exp000()
    cfg.hidden_size = 512
    cfg.num_layers = 8
    return cfg
```
Or use `@autofig` to auto-generate the Config from `__init__`:
```python
from configgle import autofig
@autofig
class Model:
    def __init__(self, hidden_size: int = 256, num_layers: int = 4):
        self.hidden_size = hidden_size
        self.num_layers = num_layers
# Config is auto-generated from __init__ signature
model = Model.Config(hidden_size=512).make()
print(model.hidden_size) # 512
```
## Features
### Type-safe `make()`
When `Config` is defined as a nested class, `MakerMeta.__get__` uses the
descriptor protocol to infer the parent class automatically. The return type
of `__get__` is `Intersection[type[Config], type[Makeable[Parent]]]`, so
`make()` knows the exact return type with zero annotation effort:
```python
class Model:
    class Config(Fig):
        hidden_size: int = 256

    def __init__(self, config: Config):
        self.hidden_size = config.hidden_size
model = Model.Config(hidden_size=512).make() # inferred as Model
```
Type checkers that support `Intersection` (like `ty`) resolve this fully --
bare `Fig` is all you need. For type checkers that don't yet support
`Intersection` (like `basedpyright`), parameterize with the parent class
name to give the checker the same information explicitly:
```python
class Model:
    class Config(Fig["Model"]):  # explicit type parameter only for basedpyright
        hidden_size: int = 256

    def __init__(self, config: Config):
        self.hidden_size = config.hidden_size
model: Model = Model.Config(hidden_size=512).make() # returns Model, not object
```
Without `["Model"]`, non-`ty` checkers fall back to `Any` (so attribute access
works without typecheck suppressions).
Both `ty` and `basedpyright` are first-class supported. Here's the full
picture (including [`Makes`](#inheritance-with-makes), introduced next):
| | `ty` | `basedpyright` |
|---|:---:|:---:|
| Bare `Fig` infers parent type | ✅ | 🟡 (`Any` fallback) |
| `Fig["Parent"]` | ✅ | ✅ |
| `Makes["Child"]` needed for inheritance | ❌ | ✅ |
| `@autofig` `.Config` access | ❌ ([#143](https://github.com/astral-sh/ty/issues/143)) | ✅ |
`ty` gets full inference from `Intersection` -- bare `Fig` and inherited
configs just work. `basedpyright` doesn't support `Intersection` yet, so it
needs explicit `Fig["Parent"]` and `Makes["Child"]` annotations. `ty` doesn't
yet support class decorator return types, so `@autofig`-decorated classes need
`# ty: ignore[unresolved-attribute]` to access `.Config`; `basedpyright`
handles this correctly. When `Intersection` lands in the
[type spec](https://github.com/python/typing/issues/213), `Makes` becomes
unnecessary and both checkers will infer everything from bare `Fig`.
### Inheritance with `Makes` (only for `basedpyright`)
When a child class inherits a parent's Config, the `make()` return type would
normally be the parent. Use `Makes` to re-bind it (again, only needed for `basedpyright`):
```python
class Animal:
    class Config(Fig["Animal"]):
        name: str = "animal"

    def __init__(self, config: Config):
        self.name = config.name

class Dog(Animal):
    class Config(Makes["Dog"], Animal.Config):
        breed: str = "mutt"

    def __init__(self, config: Config):
        super().__init__(config)
        self.breed = config.breed
dog: Dog = Dog.Config(name="Rex", breed="labrador").make() # returns Dog, not Animal
```
`Makes` contributes nothing to the MRO at runtime -- it exists purely for the
type checker (see the [type checker table](#type-safe-make) above). When
[Intersection](https://github.com/python/typing/issues/213) lands, `Makes`
becomes unnecessary.
### Covariant `Makeable` protocol
`Makeable[T]` is a covariant protocol satisfied by any `Fig`, `InlineConfig`,
or custom class with `make()`, `finalize()`, and `update()`. Because it's
covariant, `Makeable[Dog]` is assignable to `Makeable[Animal]`:
```python
from configgle import Makeable
def train(config: Makeable[Animal]) -> Animal:
    return config.make()
# All valid:
train(Animal.Config())
train(Dog.Config(breed="poodle"))
```
This makes it easy to write functions that accept any config for a class
hierarchy without losing type information.
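The covariance described above can be reproduced with a plain `typing.Protocol` (a simplified sketch that keeps only `make()`; the real `Makeable` also requires `finalize()` and `update()`):

```python
from typing import Protocol, TypeVar

T_co = TypeVar("T_co", covariant=True)

class Makeable(Protocol[T_co]):
    """Structural type: anything exposing make() -> T_co qualifies."""
    def make(self) -> T_co: ...

class Animal: ...
class Dog(Animal): ...

class DogConfig:
    def make(self) -> Dog:
        return Dog()

def train(config: Makeable[Animal]) -> Animal:
    # Covariance makes Makeable[Dog] assignable to Makeable[Animal].
    return config.make()

print(type(train(DogConfig())).__name__)  # Dog
```

Because the protocol is structural, `DogConfig` never has to inherit from `Makeable` to satisfy it.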
### Nested config finalization
Override `finalize()` to compute derived fields before instantiation. Nested
configs are finalized recursively:
```python
class Encoder:
    class Config(Fig):
        c_in: int = 256
        mlp: Configurable[nn.Module] = field(default_factory=MLP.Config)

        def finalize(self) -> Self:
            self = super().finalize()
            self.mlp.c_in = self.c_in  # propagate dimensions
            return self
```
### `update()` for bulk mutation
Configs support bulk updates from another config or keyword arguments:
```python
cfg = Model.Config(hidden_size=256)
cfg.update(hidden_size=512, num_layers=8)
# Or copy from another config (kwargs take precedence):
cfg.update(other_cfg, num_layers=12)
```
### `InlineConfig` / `PartialConfig`
`InlineConfig` wraps an arbitrary callable and its arguments into a config
object with deferred execution. Use it for classes where all constructor
arguments are known at config time:
```python
from configgle import InlineConfig
import torch.nn as nn
cfg = InlineConfig(nn.Linear, in_features=256, out_features=128, bias=False)
cfg.out_features = 64 # attribute-style access to kwargs
layer = cfg.make() # calls nn.Linear(in_features=256, out_features=64, bias=False)
y = layer(x) # use the constructed module
```
`PartialConfig` is shorthand for `InlineConfig(functools.partial, fn, ...)`
-- use it for functions where some arguments aren't known at config time:
```python
from configgle import PartialConfig
import torch.nn.functional as F
cfg = PartialConfig(F.cross_entropy, label_smoothing=0.1)
loss_fn = cfg.make() # returns functools.partial(F.cross_entropy, label_smoothing=0.1)
loss = loss_fn(logits, targets) # calls F.cross_entropy(logits, targets, label_smoothing=0.1)
```
Nested configs in args/kwargs are finalized and `make()`-d recursively, so
both compose naturally with `Fig` configs.
### `CopyOnWrite`
`CopyOnWrite` wraps a config tree and lazily copies objects only when mutations
occur. Copies propagate up to parents automatically, so the original is never
touched. This is especially useful inside `finalize()`, where you want to
derive a variant of a shared sub-config without mutating the original:
```python
from configgle import Configurable, CopyOnWrite, Fig
from dataclasses import field
from typing import Self

class Encoder:
    class Config(Fig):
        hidden_size: int = 256
        encoder: Configurable[nn.Module] = field(default_factory=MLP.Config)
        decoder: Configurable[nn.Module] = field(default_factory=MLP.Config)

        def finalize(self) -> Self:
            self = super().finalize()
            # encoder and decoder can share the same MLP.Config object.
            # CopyOnWrite lets us tweak the decoder's copy without
            # touching the encoder's (or the shared original).
            with CopyOnWrite(self) as cow:
                cow.decoder.c_out = self.hidden_size * 2
            return cow.unwrap
```
Only the mutated nodes (and their ancestors) are shallow-copied; everything
else stays shared.
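The mechanism is the generic copy-on-write idea. A minimal standalone illustration (without the parent-propagation configgle adds on top) might look like:

```python
import copy

class CowProxy:
    """Toy copy-on-write wrapper (not configgle's implementation): the
    wrapped object is shallow-copied on the first mutation."""
    def __init__(self, obj):
        object.__setattr__(self, "_obj", obj)
        object.__setattr__(self, "_copied", False)

    def __getattr__(self, name):
        # reads pass straight through to the wrapped object
        return getattr(object.__getattribute__(self, "_obj"), name)

    def __setattr__(self, name, value):
        if not self._copied:  # first write: copy, then mutate the copy
            object.__setattr__(self, "_obj", copy.copy(self._obj))
            object.__setattr__(self, "_copied", True)
        setattr(object.__getattribute__(self, "_obj"), name, value)

    @property
    def unwrap(self):
        return object.__getattribute__(self, "_obj")

class Cfg:
    def __init__(self):
        self.c_out = 256

orig = Cfg()
cow = CowProxy(orig)
cow.c_out = 512                 # triggers the copy
assert orig.c_out == 256        # original untouched
assert cow.unwrap.c_out == 512
assert cow.unwrap is not orig
```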
### `pprint` / `pformat`
Config-aware pretty printing that hides default values, auto-finalizes before
printing, and scrubs memory addresses:
```python
from configgle import Configurable, Fig, pformat
from dataclasses import field
import torch.nn as nn

class MLP:
    class Config(Fig):
        c_in: int = 256
        c_out: int = 256
        num_layers: int = 2
        dropout: float = 0.1
        use_bias: bool = True

    def __init__(self, config: Config): ...


class Model:
    class Config(Fig):
        hidden_size: int = 256
        num_layers: int = 4
        mlp: Configurable[nn.Module] = field(default_factory=MLP.Config)
        output_mlp: Configurable[nn.Module] = field(default_factory=MLP.Config)

    def __init__(self, config: Config): ...


def exp001():
    cfg = Model.Config()
    cfg.hidden_size = 512
    cfg.num_layers = 12
    cfg.mlp.c_in = 512
    cfg.mlp.c_out = 1024
    cfg.mlp.num_layers = 4
    cfg.mlp.dropout = 0.2
    cfg.mlp.use_bias = False
    cfg.output_mlp.c_in = 1024
    cfg.output_mlp.c_out = 256
    cfg.output_mlp.dropout = 0.3
    return cfg


print(pformat(exp001(), continuation_pipe=0))
# Model.Config(
# hidden_size=512,
# num_layers=12,
# mlp=MLP.Config(
# │ c_in=512,
# │ c_out=1_024,
# │ num_layers=4,
# │ dropout=0.2,
# │ use_bias=False
# ),
# output_mlp=MLP.Config(c_in=1_024, dropout=0.3)
# )
```
Default values are hidden, continuation pipes show where nested blocks belong,
large numbers get underscores (`1_024`), and short sub-configs collapse onto
one line.
### `@autofig` for zero-boilerplate configs
When you don't need a hand-written Config, `@autofig` generates one from
`__init__` (see [Example](#example) above).
### Pickling and cloudpickle
Configs are fully compatible with `pickle` and `cloudpickle`, including the
parent class reference. This is important for distributed workflows (e.g.,
sending configs across processes):
```python
import cloudpickle, pickle
cfg = Model.Config(hidden_size=512)
cfg_ = pickle.loads(cloudpickle.dumps(cfg))
model = cfg_.make() # parent_class is preserved
```
## Comparison
| | [configgle](https://github.com/jvdillon/configgle) | [Hydra](https://github.com/facebookresearch/hydra) | [Sacred](https://github.com/IDSIA/sacred) | [OmegaConf](https://github.com/omry/omegaconf) | [Gin](https://github.com/google/gin-config) | [ml_collections](https://github.com/google/ml_collections) | [Fiddle](https://github.com/google/fiddle) | [Confugue](https://github.com/cifkao/confugue) |
|---|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| Pure Python (no YAML/strings) | ✅ | ❌ | ❌ | 🟡 | ❌ | ✅ | ✅ | ❌ |
| Typed `make()`/`build()` return | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ✅ | ❌ |
| Config inheritance | ✅ | 🟡 | ❌ | 🟡 | ❌ | ❌ | ❌ | 🟡 |
| Covariant protocol | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ |
| Nested finalization | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ |
| Copy-on-write | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ |
| `pickle`/`cloudpickle` | ✅ | 🟡 | ❌ | ✅ | ❌ | 🟡 | ✅ | ❌ |
| Auto-generated configs | ✅ | 🟡 | ❌ | ❌ | ❌ | ❌ | ✅ | ❌ |
| GitHub stars | -- | 10.2k | 4.4k | 2.3k | 2.1k | 1.0k | 374 | 21 |
✅ = yes, 🟡 = partial, ❌ = no. Corrections welcome --
[open a PR](https://github.com/jvdillon/configgle/pulls).
### How each library works
**[Hydra](https://github.com/facebookresearch/hydra)** (Meta) --
YAML-centric with optional "structured configs" (Python dataclasses registered
in a `ConfigStore`). Instantiation uses `hydra.utils.instantiate()`, which
resolves a string `_target_` field to an import path -- the return type is
`Any`. Config composition is done via YAML defaults lists, not class
inheritance. Dataclass inheritance works at the schema level. `configen` is
an experimental code-generation tool (v0.9.0.dev8) that produces structured
configs from class signatures. Configs survive pickle trivially since
`_target_` is a string, not a class reference.
**[Sacred](https://github.com/IDSIA/sacred)** --
Experiment management framework. Config is defined via `@ex.config` scopes
(local variables become config entries) or loaded from YAML/JSON files. Sacred
auto-*injects* config values into captured functions by parameter name
(dependency injection), but does not auto-*generate* configs from function
signatures. No typed factory methods, no config inheritance, no pickle
support for the experiment/config machinery.
**[OmegaConf](https://github.com/omry/omegaconf)** --
YAML-native configuration with a "structured config" mode that accepts
`@dataclass` schemas. Configs are always wrapped in `DictConfig` proxy objects
at runtime (not actual dataclass instances). Supports dataclass inheritance
for schema definition. Good pickle support (`__getstate__`/`__setstate__`).
No factory method (`to_object()` returns `Any`), no auto-generation, no
protocols.
**[Gin](https://github.com/google/gin-config)** (Google) --
Global string-based registry. You decorate functions with `@gin.configurable`
and bind parameters via `.gin` files or `gin.bind_parameter('fn.param', val)`.
There are no config objects -- parameter values live in a global dict keyed by
dotted strings. No typed returns, no config inheritance. The docs state
"gin-configurable functions are not pickleable," though a 2020 PR added
`__reduce__` methods that improve support.
**[ml_collections](https://github.com/google/ml_collections)** (Google) --
Dict-like `ConfigDict` with dot-access, type-checking on mutation, and
`FieldReference` for lazy cross-references between values. Pure Python, no
YAML. No factory method or typed instantiation. Pickle works for plain configs,
but `FieldReference` operations that use lambdas internally (`.identity()`,
`.to_int()`) fail with standard pickle (cloudpickle handles them).
**[Fiddle](https://github.com/google/fiddle)** (Google) --
Python-first. You build config graphs with `fdl.Config[MyClass]` objects and
call `fdl.build()` to instantiate them. `build(Config[T]) -> T` is typed via
`@overload`. Config modification is functional (`fdl.copy_with`), not
inheritance-based -- there are no config subclasses. `@auto_config` rewrites a
factory function's AST to produce a config graph automatically. Full
pickle/cloudpickle support.
**[Confugue](https://github.com/cifkao/confugue)** --
YAML-based hierarchical configuration. The `configure()` method instantiates
objects from YAML dicts, with the class specified via a `!type` YAML tag.
Returns `Any`. Partial config inheritance via YAML merge keys (`<<: *base`).
No pickle support, no auto-generation, no protocols.
## Citing
If you find our work useful, please consider citing:
```bibtex
@misc{dillon2026configgle,
  title={Configgle - Hierarchical experiment configuration using pure Python dataclass factories and dependency injection},
  author={Joshua V. Dillon},
  year={2026},
  howpublished={GitHub},
  url={https://github.com/jvdillon/configgle},
}
```
## License
Apache License 2.0
| text/markdown | Joshua V. Dillon | null | null | null | null | ai, machine-learning | [
"Development Status :: 5 - Production/Stable",
"Intended Audience :: Developers",
"Intended Audience :: Science/Research",
"License :: OSI Approved :: Apache Software License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"P... | [] | null | null | >=3.12 | [] | [] | [] | [
"ty-extensions",
"wrapt"
] | [] | [] | [] | [
"Repository, https://github.com/jvdillon/configgle",
"Issues, https://github.com/jvdillon/configgle/issues",
"Discussions, https://github.com/jvdillon/configgle/discussions"
] | twine/6.2.0 CPython/3.12.12 | 2026-02-19T00:29:39.216249 | configgle-1.1.10.tar.gz | 144,317 | d6/ef/225fb7950e86dd417669ec3dfa12de143eecc1df4c3579c4eeb6e985c9e1/configgle-1.1.10.tar.gz | source | sdist | null | false | e42591191833aeea2dca589afa925d8d | 70003c4cbb76ebe3995ae197a15faaaa65ffae7fed99c81d52466300eabdafba | d6ef225fb7950e86dd417669ec3dfa12de143eecc1df4c3579c4eeb6e985c9e1 | Apache-2.0 | [
"LICENSE"
] | 247 |
2.4 | entropy-profiler | 0.2.1 | Extract, analyze, and visualize entropy profiles from transformer models using the logit-lens technique. | # entropy-profiler
Extract, analyze, and visualize entropy profiles from transformer models using
the logit-lens technique.
`entropy-profiler` computes per-layer Shannon or Rényi entropy by passing
hidden states through the model's own unembedding head (layer norm + lm_head).
It works on any HuggingFace `CausalLM` without architecture-specific hooks.
```python
from entropy_profiler import EntropyProfiler, plot_profile
import torch
profiler = EntropyProfiler("gpt2", dtype=torch.float32)
profile = profiler.profile_text("The meaning of life is", max_new_tokens=32)
plot_profile(profile)
```
---
## Installation
### From source (recommended for development)
```bash
git clone https://github.com/TODO/entropy-profiler
cd entropy-profiler
# Using uv (fast, handles venvs automatically)
uv sync # core dependencies
uv sync --extra notebook # + Jupyter support
uv sync --extra dev # + pytest, ruff
# Or using pip
pip install -e .
pip install -e ".[quantize]" # + 8-bit/4-bit quantization (bitsandbytes, accelerate)
pip install -e ".[notebook]"
pip install -e ".[dev]"
```
### From PyPI (once published)
```bash
pip install entropy-profiler
```
---
## Quick Start
### Profile a single prompt
```python
from entropy_profiler import EntropyProfiler, plot_profile
import torch
profiler = EntropyProfiler("gpt2", dtype=torch.float32)
profile = profiler.profile_text("The capital of France is", max_new_tokens=32)
print(profile.entropy.shape) # (n_tokens, n_layers)
print(profile.mean_profile()) # (n_layers,) tensor
plot_profile(profile)
```
### Profile multiple prompts
```python
from entropy_profiler import plot_aggregated
agg = profiler.profile_batch([
    "The stock market experienced significant",
    "In quantum mechanics, the wave function",
    "Modern neural networks learn by",
], max_new_tokens=24)
print(agg.to_matrix().shape) # (3, n_layers)
plot_aggregated(agg)
```
### Compare prompts with distances
```python
from entropy_profiler import profile_distance
p1 = profiler.profile_text("Water boils at", max_new_tokens=24)
p2 = profiler.profile_text("Once upon a time", max_new_tokens=24)
result = profile_distance(p1, p2, metric="jsd")
print(f"JSD distance: {result.aggregate:.4f}")
```
### Analyze layer dynamics
```python
from entropy_profiler import LayerAnalyzer
profile, hidden_states = profiler.profile_text_with_states(
    "Hello world", max_new_tokens=32
)
analyzer = LayerAnalyzer(profiler, profile, hidden_states=hidden_states)
print(analyzer.layer_entropy()) # (n_layers,)
print(analyzer.information_velocity()) # (n_layers,)
print(analyzer.layer_mi(method="cka")) # (n_layers,)
```
---
## Core Concepts
### Logit-Lens Decoding
At each transformer layer, the hidden state is projected through the model's
final layer norm and language model head to produce a vocabulary distribution.
The entropy of this distribution measures how "decided" the model is at that
layer — low entropy means a peaked distribution (confident prediction), high
entropy means a flat distribution (uncertain).
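In array terms the per-layer computation looks roughly like this (a standalone numpy sketch with a toy unembedding matrix, not the library's code):

```python
import numpy as np

def logit_lens_entropy(hidden, W_U, eps=1e-5):
    """Shannon entropy of the vocab distribution obtained by decoding one
    hidden state through a (simplified) final LayerNorm + unembed matrix."""
    normed = (hidden - hidden.mean()) / np.sqrt(hidden.var() + eps)
    logits = normed @ W_U                      # -> (vocab_size,)
    logits = logits - logits.max()             # stabilize the softmax
    probs = np.exp(logits) / np.exp(logits).sum()
    return float(-(probs * np.log(probs + 1e-12)).sum())

rng = np.random.default_rng(0)
hidden = rng.normal(size=16)                   # one layer's hidden state
W_U = rng.normal(size=(16, 50))                # toy unembedding, vocab of 50
H = logit_lens_entropy(hidden, W_U)
assert 0.0 <= H <= np.log(50)                  # entropy is bounded by log(vocab)
```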
### Entropy Profiles
An **entropy profile** is a matrix of shape `(n_tokens, n_layers)` where each
entry is the entropy of the vocabulary distribution at that token position and
layer depth. The **mean profile** `(n_layers,)` averages across tokens to give
a single curve showing how entropy evolves through the network.
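As a toy array sketch (independent of the library), the two shapes relate like this:

```python
import numpy as np

# toy entropy profile: 4 generated tokens x 6 layers
rng = np.random.default_rng(1)
entropy = rng.uniform(0.0, 10.0, size=(4, 6))   # (n_tokens, n_layers)

mean_profile = entropy.mean(axis=0)             # (n_layers,) curve over depth
assert mean_profile.shape == (6,)
```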
### Why Rényi Entropy?
Shannon entropy (`alpha=1`) is the default, but Rényi entropy at other orders
provides complementary views:
- `alpha < 1` — emphasizes rare events (tail sensitivity)
- `alpha = 1` — Shannon entropy (standard)
- `alpha = 2` — collision entropy (sensitive to mode)
- `alpha > 2` — increasingly dominated by the most probable token
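Concretely, for a fixed distribution Rényi entropy is non-increasing in the order, which is what the bullets above describe. A small self-contained check (this package's `renyi_entropy` behaves analogously, including the Shannon fallback near `alpha = 1`):

```python
import numpy as np

def renyi(p, alpha):
    """Rényi entropy of order alpha; Shannon in the alpha -> 1 limit.
    Assumes p has no zero entries."""
    p = np.asarray(p, dtype=float)
    if abs(alpha - 1.0) < 1e-8:
        return float(-(p * np.log(p)).sum())            # Shannon limit
    return float(np.log((p ** alpha).sum()) / (1.0 - alpha))

p = [0.7, 0.2, 0.05, 0.05]
vals = [renyi(p, a) for a in (0.5, 1.0, 2.0, 10.0)]
# higher orders weight the head of the distribution more, so entropy drops
assert all(a >= b for a, b in zip(vals, vals[1:]))
```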
---
## API Reference
### Core Module (`entropy_profiler.profiler`)
| Symbol | Description |
|--------|-------------|
| `EntropyProfiler(model, dtype, alpha, layer_stride, load_in_8bit, load_in_4bit)` | Main class. Loads model, runs generation, computes entropy. Use `load_in_8bit` or `load_in_4bit` for quantized loading (requires bitsandbytes). |
| `EntropyProfile` | Dataclass: `entropy`, `token_ids`, `layer_indices`, `alpha`, `model_name`, `metadata`. |
| `AggregatedProfile` | Collection of profiles with `mean_profile()` and `to_matrix()`. |
| `shannon_entropy(probs)` | `H(p) = -sum(p log p)` on the last dimension. |
| `renyi_entropy(probs, alpha)` | Rényi entropy of order α. Falls back to Shannon when α ≈ 1. |
**`EntropyProfiler` methods:**
| Method | Returns | Description |
|--------|---------|-------------|
| `profile_text(prompt, max_new_tokens, ...)` | `EntropyProfile` | Profile generated text. |
| `profile_text_with_states(prompt, ...)` | `(EntropyProfile, Tensor)` | Profile + raw hidden states. |
| `profile_batch(prompts, ...)` | `AggregatedProfile` | Profile multiple prompts. |
| `unload()` | `None` | Free model memory. |
**`EntropyProfile` attributes and methods:**
| Member | Type | Description |
|--------|------|-------------|
| `entropy` | `Tensor (n_tokens, n_layers)` | Per-token, per-layer entropy. |
| `token_ids` | `Tensor (n_tokens,)` | Generated token IDs. |
| `n_layers` | `int` | Number of profiled layers. |
| `n_tokens` | `int` | Number of profiled tokens. |
| `mean_profile()` | `Tensor (n_layers,)` | Mean entropy at each layer. |
| `to_numpy()` | `ndarray` | Convert to NumPy (float32). |
### Distances (`entropy_profiler.distances`)
| Function | Type | Description |
|----------|------|-------------|
| `profile_distance(p1, p2, metric, aggregation)` | `DistanceResult` | Unified entry point. |
| `pairwise_distances(profiles, metric)` | `ndarray (N, N)` | Symmetric distance matrix. |
| `jsd_layer(p1, p2, n_bins)` | `ndarray (n_layers,)` | Per-layer Jensen-Shannon divergence. |
| `wasserstein_layer(p1, p2)` | `ndarray (n_layers,)` | Per-layer Wasserstein-1 distance. |
| `fisher_rao_distance(p1, p2)` | `float` | Geodesic on probability simplex. |
| `srvf_distance(p1, p2)` | `float` | Elastic SRVF curve distance. |
**Available metrics for `profile_distance`:** `"jsd"`, `"wasserstein"`, `"fisher_rao"`, `"srvf"`.
**Aggregation methods:** `"mean"`, `"max"`, `"sum"` (for layer-wise metrics).
### Layer Analysis (`entropy_profiler.analysis`)
| Symbol | Description |
|--------|-------------|
| `LayerAnalyzer(profiler, profile, hidden_states)` | Per-layer metric computation. |
Additional functions available via `from entropy_profiler.analysis import ...`:
`compare_models`, `plot_layer_importance`, `plot_information_plane`, `plot_velocity_entropy`.
**`LayerAnalyzer` methods:**
| Method | Returns | Description |
|--------|---------|-------------|
| `layer_entropy()` | `ndarray (n_layers,)` | Mean Shannon entropy per layer. |
| `information_velocity()` | `ndarray (n_layers,)` | Wasserstein between consecutive layers. |
| `distance_to_output()` | `ndarray (n_layers,)` | Fisher-Rao distance to final layer. |
| `jsd_to_output(n_bins)` | `ndarray (n_layers,)` | JSD from each layer to final. |
| `layer_mi(method)` | `ndarray (n_layers,)` | MI with final layer (Rényi or CKA). |
| `layer_importance()` | `dict` | All four non-MI metrics. |
### Visualization (`entropy_profiler.viz`)
| Function | Description |
|----------|-------------|
| `plot_profile(profile, ax, ...)` | Line plot with ±1 std fill. |
| `plot_profiles(profiles, labels, ...)` | Overlay multiple profiles. |
| `plot_heatmap(profile, ax, ...)` | Token × layer entropy heatmap. |
| `plot_aggregated(agg, ax, ...)` | Aggregated mean ± std curve. |
| `plot_cluster(profiles, labels, method, feature, metric, ...)` | 2D scatter via t-SNE/UMAP/PCA. |
### Estimators (`entropy_profiler.estimators`)
| Symbol | Description |
|--------|-------------|
| `MatrixRenyiMI(alpha, device)` | Matrix-based Rényi MI via Gram matrices. Used by `LayerAnalyzer.layer_mi()`. |
---
## Supported Models
Any HuggingFace `AutoModelForCausalLM` is supported. The profiler automatically
detects the unembedding architecture:
| Model Family | Layer Norm Path | Status |
|-------------|-----------------|--------|
| GPT-2 | `transformer.ln_f` | Tested |
| LLaMA / LLaMA 2 / LLaMA 3 | `model.norm` | Tested |
| Mistral | `model.norm` | Tested |
| Gemma / Gemma 2 | `model.norm` | Tested |
| Qwen / Qwen 2 | `model.norm` | Tested |
| OPT | `model.norm` (fallback) | Tested |
To add a new architecture, add a resolution pattern to
`_get_unembedding()` in `entropy_profiler/profiler.py`.
### Tips for large models
```python
# Use float16 for 7B+ models to fit in GPU memory
profiler = EntropyProfiler("meta-llama/Llama-2-7b-hf", dtype=torch.float16)
# Load in 8-bit or 4-bit to fit even larger models (requires: pip install bitsandbytes accelerate)
profiler = EntropyProfiler("meta-llama/Llama-2-7b-hf", load_in_8bit=True)
profiler = EntropyProfiler("meta-llama/Llama-2-7b-hf", load_in_4bit=True)
# Profile every other layer to reduce computation
profiler = EntropyProfiler("gpt2", layer_stride=2)
# Use context manager to auto-unload
with EntropyProfiler("gpt2") as profiler:
    profile = profiler.profile_text("Hello world")
```
---
## Design Decisions
**No hooks.** HuggingFace's `output_hidden_states=True` returns all hidden
states without hook infrastructure. This works across all CausalLM
architectures with zero architecture-specific code.
**Logit-lens, not probing.** The unembedding head is the model's own decoder.
No training of linear probes is needed — the entropy values are directly
interpretable as "how peaked is the vocabulary distribution at this layer."
**Float32 entropy.** Entropy is always computed in float32 regardless of model
dtype, avoiding numerical issues with half-precision softmax.
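A small numpy illustration of the failure mode half precision invites (generic, not this package's code path):

```python
import numpy as np

logits16 = np.array([20.0, 0.0], dtype=np.float16)
with np.errstate(over="ignore", invalid="ignore"):
    p16 = np.exp(logits16) / np.exp(logits16).sum()   # exp(20) overflows float16

p32 = np.exp(logits16.astype(np.float32))
p32 = p32 / p32.sum()                                 # fine in float32

assert np.isnan(p16).any()       # half precision: inf / inf -> nan
assert np.isfinite(p32).all()    # float32 survives
```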
**Dataclass outputs.** `EntropyProfile` and `DistanceResult` are plain
dataclasses — easy to inspect, serialize, and compose.
---
## Notebooks
| Notebook | Description |
|----------|-------------|
| `exploration.ipynb` | Multi-model exploration: entropy curves, heatmaps, velocities, distances, clustering. Requires GPU and gated-model access. |
| `api_tour.ipynb` | Complete API tour exercising every public function with GPT-2. |
```bash
uv sync --extra notebook
uv run jupyter notebook notebooks/
```
---
## Development
```bash
# Install dev dependencies
uv sync --extra dev
# Lint
uv run ruff check .
uv run ruff check --fix .
# Test
uv run pytest
# Run a script without activating venv
uv run python your_script.py
```
---
## License
MIT
| text/markdown | entropy-profiler contributors | null | null | null | null | entropy, interpretability, logit-lens, machine-learning, nlp, profiling, transformer | [
"Development Status :: 4 - Beta",
"Intended Audience :: Science/Research",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Languag... | [] | null | null | >=3.10 | [] | [] | [] | [
"matplotlib>=3.7.0",
"numpy>=1.24.0",
"scikit-learn>=1.3.0",
"scipy>=1.11.0",
"seaborn>=0.12.0",
"torch>=2.1.0",
"transformers>=4.40.0",
"pytest>=7.0.0; extra == \"dev\"",
"ruff>=0.4.0; extra == \"dev\"",
"datasets>=3.0.0; extra == \"notebook\"",
"ipywidgets>=8.0.0; extra == \"notebook\"",
"ju... | [] | [] | [] | [
"Homepage, https://github.com/TheGitCommit/entropy-profiler",
"Documentation, https://github.com/TheGitCommit/entropy-profiler/blob/master/README.md",
"Repository, https://github.com/TheGitCommit/entropy-profiler",
"Issues, https://github.com/TheGitCommit/entropy-profiler/issues"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-19T00:29:28.617397 | entropy_profiler-0.2.1.tar.gz | 1,678,435 | d4/58/881d985b412b3cb8eebf78c4dc9061822cd1da07f1702dd7221820af47a0/entropy_profiler-0.2.1.tar.gz | source | sdist | null | false | 31da9e515ef042d33375248e7b895c1e | 9db9b2b4ca1665becc7b513bcf9fdc7fc28aae4eb1fbe8d012ba12b8760f860b | d458881d985b412b3cb8eebf78c4dc9061822cd1da07f1702dd7221820af47a0 | MIT | [
"LICENSE"
] | 240 |
2.4 | scduck | 0.1.1 | SCD Type 2 tables with DuckDB. Track historical changes to slowly-changing data. | # scduck
Store a time series of snapshots in an SCD Type 2 table.
**13 days of data: 65 MB CSV -> 6.3 MB DuckDB (~10x compression)**
## How it works
Records are stored with `valid_from` / `valid_to` date ranges. When data doesn't change, no new rows are written. Only changes generate new records.
```
id   | name   | price | valid_from | valid_to
P001 | Widget |  9.99 | 2025-01-01 | 2025-03-15   # original price
P001 | Widget | 12.99 | 2025-03-15 | NULL         # price changed
P002 | Gadget |  4.99 | 2025-01-01 | NULL         # unchanged
```
- `valid_from`: inclusive (>=)
- `valid_to`: exclusive (<), NULL = current
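The close-and-insert move behind this layout can be sketched in a few lines of plain Python (illustrative only; scduck's actual sync also handles out-of-order dates, deletions, and more):

```python
from datetime import date

def scd2_sync(rows, snapshot, snap_date):
    """Close-and-insert sketch for one snapshot (not scduck's real code).
    rows: dicts with keys id, price, valid_from, valid_to (None = current)."""
    current = {r["id"]: r for r in rows if r["valid_to"] is None}
    for rid, price in snapshot.items():
        old = current.get(rid)
        if old is None:                     # new key: open a row
            rows.append({"id": rid, "price": price,
                         "valid_from": snap_date, "valid_to": None})
        elif old["price"] != price:         # changed: close old, open new
            old["valid_to"] = snap_date
            rows.append({"id": rid, "price": price,
                         "valid_from": snap_date, "valid_to": None})
        # unchanged: nothing is written
    return rows

rows = [{"id": "P001", "price": 9.99, "valid_from": date(2025, 1, 1), "valid_to": None}]
scd2_sync(rows, {"P001": 12.99, "P002": 4.99}, date(2025, 3, 15))
assert len(rows) == 3                       # one closed row, two current rows
```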
## Usage
```python
from scduck import SCDTable
# Define your schema
with SCDTable(
    "products.duckdb",
    table="products",
    keys=["product_id"],
    values=["name", "price", "category"],
) as db:
    # Sync daily snapshots (pandas, polars, or pyarrow)
    result = db.sync("2025-01-01", df_jan1)  # returns SyncResult
    db.sync("2025-01-02", df_jan2)

    # Reconstruct any historical snapshot
    snapshot = db.get_data("2025-01-01")  # returns pyarrow Table

    # Check synced dates
    db.get_synced_dates()  # ['2025-01-01', '2025-01-02']
```
### Out-of-order sync
Dates can be synced in any order:
```python
db.sync("2025-01-15", df) # sync Jan 15 first
db.sync("2025-01-01", df) # backfill Jan 1
db.get_data("2025-01-01") # returns correct snapshot
```
## Example: SecurityMaster
```python
import pandas as pd
from scduck import SCDTable
with SCDTable(
    "security_master.duckdb",
    table="securities",
    keys=["security_id"],
    values=["ticker", "mic", "isin", "description",
            "sub_industry", "country", "currency", "country_risk"],
) as db:
    df = pd.read_csv("SecurityMaster_20251201.csv")
    db.sync("2025-12-01", df)
```
## Installation
```bash
pip install scduck
# With pandas/polars support
pip install scduck[all]
```
## Sync Logic
See [SYNC_LOGIC.md](SYNC_LOGIC.md) for detailed operation cases.
| text/markdown | papasaidfine | null | null | null | MIT | data-warehouse, duckdb, history, scd, slowly-changing-dimension, temporal | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Topic :: Database"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"duckdb>=0.9.0",
"pyarrow>=14.0.0",
"pandas>=2.0.0; extra == \"all\"",
"polars>=0.19.0; extra == \"all\"",
"pandas>=2.0.0; extra == \"dev\"",
"polars>=0.19.0; extra == \"dev\"",
"pytest>=7.0.0; extra == \"dev\"",
"pandas>=2.0.0; extra == \"pandas\"",
"polars>=0.19.0; extra == \"polars\""
] | [] | [] | [] | [
"Homepage, https://github.com/wolferesearch/scduck",
"Repository, https://github.com/wolferesearch/scduck"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-19T00:28:59.962791 | scduck-0.1.1.tar.gz | 48,434 | 37/0d/28a011cd3da89372c5b76656b68d31671cf52a2b7a995786b5e6b538dc5a/scduck-0.1.1.tar.gz | source | sdist | null | false | cb6478d7e45c7db3a863e0c42bd808e9 | 025b5b2c291cf92bd29ac7984c03198070e9f673a910253d782e5cc8c1666030 | 370d28a011cd3da89372c5b76656b68d31671cf52a2b7a995786b5e6b538dc5a | null | [] | 231 |
2.4 | uipath-langchain-client | 1.2.1 | LangChain-compatible chat models and embeddings for UiPath's LLM services | # UiPath LangChain Client
LangChain-compatible chat models and embeddings for accessing LLMs through UiPath's infrastructure.
## Installation
```bash
# Base installation (normalized API only)
pip install uipath-langchain-client
# With specific provider extras for passthrough mode
pip install "uipath-langchain-client[openai]" # OpenAI/Azure models
pip install "uipath-langchain-client[google]" # Google Gemini models
pip install "uipath-langchain-client[anthropic]" # Anthropic Claude models
pip install "uipath-langchain-client[azure]" # Azure AI models
pip install "uipath-langchain-client[aws]" # AWS Bedrock models
pip install "uipath-langchain-client[vertexai]" # Google VertexAI models
pip install "uipath-langchain-client[fireworks]" # Fireworks AI models
pip install "uipath-langchain-client[all]" # All providers
```
## Quick Start
### Using Factory Functions (Recommended)
The factory functions automatically detect the model vendor and return the appropriate client:
```python
from uipath_langchain_client import get_chat_model, get_embedding_model
from uipath_langchain_client.settings import get_default_client_settings
# Get default settings (uses UIPATH_LLM_BACKEND env var or defaults to AgentHub)
settings = get_default_client_settings()
# Chat model - vendor auto-detected from model name
chat_model = get_chat_model(
    model_name="gpt-4o-2024-11-20",
    client_settings=settings,
)
response = chat_model.invoke("Hello, how are you?")
print(response.content)
# Embeddings model
embeddings = get_embedding_model(
    model_name="text-embedding-3-large",
    client_settings=settings,
)
vectors = embeddings.embed_documents(["Hello world"])
print(f"Embedding dimension: {len(vectors[0])}")
```
### Using Direct Client Classes
For more control, instantiate provider-specific classes directly:
```python
from uipath_langchain_client.clients.openai.chat_models import UiPathAzureChatOpenAI
from uipath_langchain_client.clients.google.chat_models import UiPathChatGoogleGenerativeAI
from uipath_langchain_client.clients.anthropic.chat_models import UiPathChatAnthropic
from uipath_langchain_client.clients.normalized.chat_models import UiPathChat
from uipath_langchain_client.settings import get_default_client_settings
settings = get_default_client_settings()
# OpenAI/Azure
openai_chat = UiPathAzureChatOpenAI(model="gpt-4o-2024-11-20", settings=settings)
# Google Gemini
gemini_chat = UiPathChatGoogleGenerativeAI(model="gemini-2.5-flash", settings=settings)
# Anthropic Claude (via AWS Bedrock)
claude_chat = UiPathChatAnthropic(
    model="anthropic.claude-sonnet-4-5-20250929-v1:0",
    settings=settings,
    vendor_type="awsbedrock",
)
# Normalized (provider-agnostic)
normalized_chat = UiPathChat(model="gpt-4o-2024-11-20", settings=settings)
```
## Available Client Types
### Passthrough Mode (Default)
Uses vendor-specific APIs through UiPath's gateway. Full feature parity with native SDKs.
**Chat Models:**
| Class | Provider | Extra | Models |
|-------|----------|-------|--------|
| `UiPathAzureChatOpenAI` | OpenAI/Azure (UiPath-owned) | `[openai]` | GPT-4o, GPT-4, o1, o3, etc. |
| `UiPathChatOpenAI` | OpenAI (BYO) | `[openai]` | GPT-4o, GPT-4, etc. |
| `UiPathChatGoogleGenerativeAI` | Google | `[google]` | Gemini 2.5, 2.0, 1.5 |
| `UiPathChatAnthropic` | Anthropic (via Bedrock) | `[anthropic]` | Claude Sonnet 4.5, Opus, etc. |
| `UiPathChatAnthropicVertex` | Anthropic (via VertexAI) | `[vertexai]` | Claude models |
| `UiPathChatBedrock` | AWS Bedrock (invoke API) | `[aws]` | Bedrock-hosted models |
| `UiPathChatBedrockConverse` | AWS Bedrock (Converse API) | `[aws]` | Bedrock-hosted models |
| `UiPathChatFireworks` | Fireworks AI | `[fireworks]` | Various open-source models |
| `UiPathAzureAIChatCompletionsModel` | Azure AI | `[azure]` | Various Azure AI models |
**Embeddings:**
| Class | Provider | Extra | Models |
|-------|----------|-------|--------|
| `UiPathAzureOpenAIEmbeddings` | OpenAI/Azure (UiPath-owned) | `[openai]` | text-embedding-3-large/small |
| `UiPathOpenAIEmbeddings` | OpenAI (BYO) | `[openai]` | text-embedding-3-large/small |
| `UiPathGoogleGenerativeAIEmbeddings` | Google | `[google]` | text-embedding-004 |
| `UiPathBedrockEmbeddings` | AWS Bedrock | `[aws]` | Titan Embeddings, etc. |
| `UiPathFireworksEmbeddings` | Fireworks AI | `[fireworks]` | Various |
| `UiPathAzureAIEmbeddingsModel` | Azure AI | `[azure]` | Various Azure AI models |
### Normalized Mode
Uses UiPath's normalized API for a consistent interface across all providers. No extra dependencies required.
| Class | Type | Description |
|-------|------|-------------|
| `UiPathChat` | Chat | Provider-agnostic chat completions |
| `UiPathEmbeddings` | Embeddings | Provider-agnostic embeddings |
## Features
### Streaming
```python
from uipath_langchain_client import get_chat_model
from uipath_langchain_client.settings import get_default_client_settings
settings = get_default_client_settings()
chat_model = get_chat_model(model_name="gpt-4o-2024-11-20", client_settings=settings)
# Sync streaming
for chunk in chat_model.stream("Write a haiku about Python"):
    print(chunk.content, end="", flush=True)

# Async streaming
async for chunk in chat_model.astream("Write a haiku about Python"):
    print(chunk.content, end="", flush=True)
```
### Tool Calling
```python
from langchain_core.tools import tool
@tool
def get_weather(city: str) -> str:
    """Get the current weather for a city."""
    return f"Sunny, 72°F in {city}"
chat_model = get_chat_model(model_name="gpt-4o-2024-11-20", client_settings=settings)
model_with_tools = chat_model.bind_tools([get_weather])
response = model_with_tools.invoke("What's the weather in Tokyo?")
print(response.tool_calls)
```
### LangGraph Agents
```python
from langgraph.prebuilt import create_react_agent
from langchain_core.tools import tool
@tool
def search(query: str) -> str:
    """Search the web."""
    return f"Results for: {query}"
chat_model = get_chat_model(model_name="gpt-4o-2024-11-20", client_settings=settings)
agent = create_react_agent(chat_model, [search])
result = agent.invoke({"messages": [("user", "Search for UiPath documentation")]})
```
### Extended Thinking (Model-Specific)
```python
# OpenAI o1/o3 reasoning
chat_model = get_chat_model(
    model_name="o3-mini",
    client_settings=settings,
    client_type="normalized",
    reasoning_effort="medium",  # "low", "medium", "high"
)

# Anthropic Claude thinking
chat_model = get_chat_model(
    model_name="claude-sonnet-4-5",
    client_settings=settings,
    client_type="normalized",
    thinking={"type": "enabled", "budget_tokens": 10000},
)

# Gemini thinking
chat_model = get_chat_model(
    model_name="gemini-2.5-pro",
    client_settings=settings,
    client_type="normalized",
    thinking_level="medium",
    include_thoughts=True,
)
```
## Configuration
### Retry Configuration
```python
# RetryConfig is a TypedDict - all fields are optional with sensible defaults
retry_config = {
    "initial_delay": 2.0,  # Initial delay before first retry
    "max_delay": 60.0,     # Maximum delay between retries
    "exp_base": 2.0,       # Exponential backoff base
    "jitter": 1.0,         # Random jitter to add
}

chat_model = get_chat_model(
    model_name="gpt-4o-2024-11-20",
    client_settings=settings,
    max_retries=3,
    retry_config=retry_config,
)
```
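The retry schedule these fields imply can be sketched as follows. This is an illustrative computation, not the library's actual code; it assumes the common pattern of capping `initial_delay * exp_base**attempt` at `max_delay` and adding up to `jitter` seconds of random noise:

```python
import random

def backoff_delay(attempt: int, initial_delay: float = 2.0, max_delay: float = 60.0,
                  exp_base: float = 2.0, jitter: float = 1.0) -> float:
    """Illustrative exponential backoff: delay before the given retry (0-based)."""
    delay = min(max_delay, initial_delay * exp_base ** attempt)
    return delay + random.uniform(0, jitter)

# With the defaults above (jitter disabled), the first three retries wait 2s, 4s, 8s.
print([round(backoff_delay(a, jitter=0.0), 1) for a in range(3)])  # [2.0, 4.0, 8.0]
```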
### Request Timeout
```python
chat_model = get_chat_model(
    model_name="gpt-4o-2024-11-20",
    client_settings=settings,
    request_timeout=120,  # Client-side timeout in seconds
)
```
## API Reference
### `get_chat_model()`
Factory function to create a chat model. Automatically detects the model vendor by querying UiPath's discovery endpoint and returns the appropriate LangChain model class.
**Parameters:**
- `model_name` (str): Name of the model (e.g., "gpt-4o-2024-11-20")
- `byo_connection_id` (str | None): Optional BYO connection ID for custom-enrolled models (default: None)
- `client_settings` (UiPathBaseSettings | None): Client settings for authentication (default: auto-detected)
- `client_type` (Literal["passthrough", "normalized"]): API mode (default: "passthrough")
- `**model_kwargs`: Additional arguments passed to the model constructor (e.g., `max_retries`, `retry_config`, `request_timeout`)
**Returns:** `UiPathBaseChatModel` - A LangChain-compatible chat model
**Raises:** `ValueError` - If the model is not found in available models or vendor is not supported
### `get_embedding_model()`
Factory function to create an embeddings model. Automatically detects the model vendor by querying UiPath's discovery endpoint and returns the appropriate LangChain embeddings class.
**Parameters:**
- `model_name` (str): Name of the embeddings model (e.g., "text-embedding-3-large")
- `byo_connection_id` (str | None): Optional BYO connection ID for custom-enrolled models (default: None)
- `client_settings` (UiPathBaseSettings | None): Client settings for authentication (default: auto-detected)
- `client_type` (Literal["passthrough", "normalized"]): API mode (default: "passthrough")
- `**model_kwargs`: Additional arguments passed to the embeddings constructor (e.g., `max_retries`, `retry_config`, `request_timeout`)
**Returns:** `UiPathBaseEmbeddings` - A LangChain-compatible embeddings model
**Raises:** `ValueError` - If the model is not found or the vendor is not supported
## UiPathChat Parameter Reference
The normalized `UiPathChat` model supports the following parameters:
### Standard Parameters
| Parameter | Type | Default | Description |
|-----------|------|---------|-------------|
| `model` (alias: `model_name`) | `str` | Required | Model identifier (e.g., `"gpt-4o-2024-11-20"`, `"gemini-2.5-flash"`) |
| `max_tokens` | `int \| None` | `None` | Maximum number of tokens in the response |
| `temperature` | `float \| None` | `None` | Sampling temperature (0.0 to 2.0) |
| `stop` (alias: `stop_sequences`) | `list[str] \| str \| None` | `None` | Stop sequences to end generation |
| `n` | `int \| None` | `None` | Number of completions to generate |
| `top_p` | `float \| None` | `None` | Nucleus sampling probability mass |
| `presence_penalty` | `float \| None` | `None` | Penalty for repeated tokens (-2.0 to 2.0) |
| `frequency_penalty` | `float \| None` | `None` | Frequency-based repetition penalty (-2.0 to 2.0) |
| `verbosity` | `str \| None` | `None` | Response verbosity: `"low"`, `"medium"`, or `"high"` |
| `model_kwargs` | `dict[str, Any]` | `{}` | Additional model-specific parameters |
| `disabled_params` | `dict[str, Any] \| None` | `None` | Parameters to exclude from requests |
### Extended Thinking Parameters
| Parameter | Provider | Type | Description |
|-----------|----------|------|-------------|
| `reasoning` | OpenAI (o1/o3) | `dict[str, Any] \| None` | Reasoning config, e.g., `{"effort": "medium", "summary": "auto"}` |
| `reasoning_effort` | OpenAI (o1/o3) | `str \| None` | Shorthand: `"minimal"`, `"low"`, `"medium"`, or `"high"` |
| `thinking` | Anthropic Claude | `dict[str, Any] \| None` | Thinking config, e.g., `{"type": "enabled", "budget_tokens": 10000}` |
| `thinking_level` | Google Gemini | `str \| None` | Thinking depth level |
| `thinking_budget` | Google Gemini | `int \| None` | Token budget for thinking |
| `include_thoughts` | Google Gemini | `bool \| None` | Whether to include thinking in responses |
### Base Client Parameters (All Models)
All LangChain model classes (`UiPathChat`, `UiPathAzureChatOpenAI`, etc.) inherit these from `UiPathBaseLLMClient`:
| Parameter | Type | Default | Description |
|-----------|------|---------|-------------|
| `model` (alias: `model_name`) | `str` | Required | Model identifier |
| `settings` (alias: `client_settings`) | `UiPathBaseSettings` | Auto-detected | Client settings for auth and routing |
| `byo_connection_id` | `str \| None` | `None` | BYO connection ID for custom-enrolled models |
| `request_timeout` (aliases: `timeout`, `default_request_timeout`) | `int \| None` | `None` | Client-side request timeout in seconds |
| `max_retries` | `int` | `0` | Maximum number of retries for failed requests |
| `retry_config` | `RetryConfig \| None` | `None` | Retry configuration for failed requests |
| `logger` | `logging.Logger \| None` | `None` | Logger instance for request/response logging |
| `default_headers` | `Mapping[str, str] \| None` | See note | Additional request headers (see [Default Headers](../../README.md#default-headers)) |
### Low-Level Methods
`UiPathBaseLLMClient` also exposes these methods for advanced use cases:
| Method | Description |
|--------|-------------|
| `uipath_request(method, url, *, request_body, **kwargs)` | Synchronous HTTP request, returns `httpx.Response` |
| `uipath_arequest(method, url, *, request_body, **kwargs)` | Asynchronous HTTP request, returns `httpx.Response` |
| `uipath_stream(method, url, *, request_body, stream_type, **kwargs)` | Synchronous streaming, yields `str \| bytes` |
| `uipath_astream(method, url, *, request_body, stream_type, **kwargs)` | Asynchronous streaming, yields `str \| bytes` |
The `stream_type` parameter controls iteration: `"lines"` (default, best for SSE), `"text"`, `"bytes"`, or `"raw"`.
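To see why line-based iteration suits SSE, here is a minimal sketch of generic `data:`-line parsing (illustrative only, not tied to the UiPath client's internals) over any iterator of lines such as the one `uipath_stream` yields in `"lines"` mode:

```python
from collections.abc import Iterable, Iterator

def sse_data(lines: Iterable[str]) -> Iterator[str]:
    """Yield the payload of each `data:` line in a Server-Sent Events stream."""
    for line in lines:
        line = line.strip()
        if line.startswith("data:"):
            payload = line[len("data:"):].strip()
            if payload != "[DONE]":  # common end-of-stream sentinel
                yield payload

chunks = list(sse_data(['data: {"delta": "Hi"}', "", "data: [DONE]"]))
print(chunks)  # ['{"delta": "Hi"}']
```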
## See Also
- [Main README](../../README.md) - Overview and core client documentation
- [UiPath LLM Client](../../src/uipath_llm_client/) - Low-level HTTP client
| text/markdown | null | null | null | null | null | null | [] | [] | null | null | >=3.11 | [] | [] | [] | [
"langchain>=1.2.7",
"uipath-llm-client>=1.2.1",
"anthropic[bedrock,vertex]>=0.77.0; extra == \"all\"",
"langchain-anthropic>=1.3.1; extra == \"all\"",
"langchain-aws>=1.2.1; extra == \"all\"",
"langchain-azure-ai>=1.0.0; extra == \"all\"",
"langchain-fireworks>=1.1.0; extra == \"all\"",
"langchain-goo... | [] | [] | [] | [] | uv/0.9.27 {"installer":{"name":"uv","version":"0.9.27","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true} | 2026-02-19T00:27:02.443092 | uipath_langchain_client-1.2.1.tar.gz | 23,615 | 1a/9a/47cedc2ec7c17fcc40d85a1cf609bfd9ac2c57d7a0cf4dcb402ed16f2666/uipath_langchain_client-1.2.1.tar.gz | source | sdist | null | false | 11c67fb30da2f6d80a935d7ed122813d | bea451efead50770e4e59af67d1752381509017e57104ebd3aa6e41d22129048 | 1a9a47cedc2ec7c17fcc40d85a1cf609bfd9ac2c57d7a0cf4dcb402ed16f2666 | null | [] | 420 |
2.4 | testing-fixtures | 1.0.1 | New approach to Python test fixtures (compatible with pytest) | # Beyond Pytest Fixtures
This repo contains an implementation of a new approach to fixtures for use with
`pytest`.
In addition we demonstrate how to use these new fixtures in both unit and
integration tests.
## Utility Fixtures
The library also comes with some utility fixtures that often come in handy.
For example `create_temp_dir`
(injects the `Path` to a temporary directory into the test using the fixture) and
`create_temp_cwd` (switches the cwd to a temporary directory and
injects its `Path` into the test).
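As a rough illustration of what a fixture like `create_temp_dir` does under the hood (a sketch built on the standard library, not the package's actual implementation):

```python
import tempfile
from collections.abc import Iterator
from pathlib import Path

def create_temp_dir_impl() -> Iterator[Path]:
    """One-shot generator: yields a temporary directory's Path, cleans up on close."""
    with tempfile.TemporaryDirectory() as tmp:
        yield Path(tmp)

gen = create_temp_dir_impl()
tmp_path = next(gen)   # setup: the directory now exists
assert tmp_path.is_dir()
gen.close()            # teardown: the directory is removed
assert not tmp_path.exists()
```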
## Project Evolution
The evolution of this project is being tracked in this [doc](./evolution.md).
## Advantages of `pytest` fixtures
`pytest` fixtures are extremely powerful and flexible.
- They provide a Pythonic mechanism for setup, teardown, and injection of state into tests.
- Fixture scope is configurable, allowing heavy computation to be carried out once per test function (default), test module, or session.
- Fixtures can be composed, which sets up a causal relation between fixtures and allows state to be shared between multiple fixtures and the test function.
## Disadvantages of `pytest` fixtures
### Tunability
`pytest` fixtures lack a straightforward mechanism for passing arguments to them from
the test function definition site.
It is not uncommon to require that a specific piece of state be injected before
running a specific test.
A "tunable" fixture would solve this requirement.
`pytest` addresses this by *magically* allowing `pytest.mark.parametrize` values to be
passed through to a fixture being used by a test.
This is not obvious, and it repurposes a mechanism primarily meant for
injecting multiple states into a test to create multiplicity.
### Importability
`pytest` recommends that fixtures be defined in a `conftest.py` file (a most non-obvious
name) and that they **not** be imported directly.
When tests are executed `pytest` parses `conftest.py` and magically inserts
the fixtures (setup, teardown, and interjection) into the test execution.
This is completely different from how the rest of Python operates and
is a source of great confusion to newcomers.
### Fixtures vs Injected Value
`pytest` fixtures overlap two distinct concepts when connecting a test function to
a fixture.
One is the name/handle to the fixture definition (generator function), and
the other is the variable inside the test function which is
bound to the value yielded by the fixture.
This over-use of a single name is evident every time one chooses the name for
the fixture + variable.
Does one name it for the variable, or for the operation that the fixture carries out
whose side-effect is the value in the variable,
e.g. `add_portfolio` vs `portfolio_name`?
### Type Annotation
The way `pytest` *registers* fixtures and then *injects/interleaves* them into/with
test functions means it is practically impossible for a type engine to match and
enforce types between the fixture definition and the value injected into
the test function.
This is a source of considerable frustration for anyone who has gone through the
effort to annotate their code and their tests.
## Prototype
We provide a prototype for a new type of fixtures beyond what is provided by `pytest`.
### Objectives
- Works seamlessly with `pytest`.
- Importable from another module (no more `conftest.py`).
- Composable.
One fixture can be connected to another fixture and receive a value from it.
- Tunable.
Fixture definitions can declare parameters.
These parameters can be provided **either** at
the test definition site **or**
inside the fixture definition module.
The value(s) provided to the parameter(s) will remain consistent throughout
the execution of any given test.
The same value will be visible to all participating entities:
the test function and all fixtures composed with said fixture.
- Fully typed and type-aware.
Provides enforceable bindings between fixture definitions,
values injected into fixtures, and
values injected from them into test functions.
### Interface
To achieve **all** of the objectives listed above the interface for these fixtures is
slightly more verbose than `pytest` fixtures while
being significantly less magical.
The following four decorators are provided for defining these fixtures:
1. `@fixture`: Applied to a fixture definition (one-shot generator function).
Creates an instance of the `Fixture` class.
**This instance is both a decorator and a
reusable, reentrant context manager.**
This instance is applied as a decorator to test functions and injects
the yielded value into it.
Example (extracted from `tests/unit/utils.py`):
```python
@fixture
def fixture_b(b1: Bi1, b2: Bi2) -> FixtureDefinition[Bo]:
    """A fixture that takes injected value from the test function decoration."""
    yield Bo(b1=b1, b2=b2)
```
*Note:* One can use `NewType` and `TypedDict` to constrain the fixture parameters and
the value it yields which allows for tightly binding the yielded value
to any location where it is used (test function or composed fixture).
Similarly the `FixtureDefinition[]` generic type constrains how the fixture is
allowed to be used.
Each instance has a `.set()` method which is used to provide values for
any parameters declared in the fixture definition.
`.set()` can be called on either the test function decoration site,
inside the module defining the fixture,
or while composing the fixture with another.
Examples (extracted from `tests/unit/utils.py` and
`tests/unit/test_new_fixtures.py`):
```python
@fixture_b.set(Bi1(42), Bi2(3.14))
def test_b(b: Bo) -> None:
    """Test parametrized fixture_b in isolation."""
    assert b == {"b1": 42, "b2": 3.14}
```
or
```python
@fixture
@compose(fixture_b.set(Bi1(13), Bi2(1.44)))
def fixture_c(b: Bo) -> FixtureDefinition[Co]:
    """A fixture that takes an injected value from ANOTHER fixture."""
    yield Co(c=b)
```
All fixture composition and test decoration creates a strict ordering of when the
fixture's context manager is entered and exited.
Only the value at the first entry is available throughout the execution of a single
test.
1. `@compose`: A function that takes a single argument which must be an instance of
`Fixture` and returns a decorator that is applied to another fixture definition.
Designed to be applied **before** the fixture definition is wrapped inside
`@fixture` (don't worry,
the type system will shout at you if you get the order wrong).
The value yielded by the composed fixture is injected as the first parameter of
the decorated fixture definition.
It essentially creates a simpler fixture definition with one less parameter.
Example:
```python
@fixture
@compose(fixture_b.set(Bi1(13), Bi2(1.44)))
def fixture_c(b: Bo) -> FixtureDefinition[Co]:
    """A fixture that takes an injected value from ANOTHER fixture."""
    yield Co(c=b)
```
The composed fixture can also have its value set from the test site but
be available to the composition.
Example:
```python
@fixture
@compose(fixture_b)
def fixture_g(b: Bo, g: Gi) -> FixtureDefinition[Go]:
    """Fixture that uses a late-injected fixture_b and a value from the test site."""
    yield {"b": b, "g": g}


@fixture_b.set(Bi1(56), Bi2(9.7))
@fixture_g.set(Gi(41))
def test_g(g: Go, b: Bo) -> None:
    """Inject args into fixture from test site and trickle down to pulled in fixture."""
    assert b == {"b1": 56, "b2": 9.7}
    assert g == {"b": b, "g": 41}
```
1. `@noinject`: Used at the test definition decoration site to wrap fixtures.
The wrapped fixture's yielded values will **not** be injected into the test function.
Example:
```python
@noinject(fixture_b.set(Bi1(75), Bi2(2.71)))
def test_b_no_injection() -> None:
    """The value yielded by fixture_b is NOT injected into the test."""
```
The type engine understands that no value is injected and validates accordingly.
1. `@compose_noinject`: Applied to composed fixtures to *stop* them from injecting
their yielded value into the fixture they are composed with.
Example:
```python
@fixture
@compose_noinject(fixture_b.set(Bi1(39), Bi2(8.1)))
def fixture_h(h: Hi) -> FixtureDefinition[Ho]:
    """Fixture that uses a composed fixture_b but NOT its yielded value."""
    yield Ho(h=h)
```
Again, the type engine is aware of the mechanics.
## Implementation
The implementation can be found in [testing.fixtures](./testing/fixtures).
It consists of nested decorators, modified context managers, and parameter injection,
all fully typed.
## Demo
### Integration Tests
To demonstrate this in action we have a REST server that:
- receives POST requests
- fetches data from a (postgres) database
- uses the fetched data to construct the response
This has been setup as a composable environment.
First build the local images: `docker-compose build`.
Then, run the tests: `docker-compose run --rm test`.
### Unit Tests
We have also provided a number of unit tests, unrelated to the REST server application,
which focus on demonstrating all the possible permutations of fixture usage,
composition, and state injection at both the test and fixture definition site.
## Local Development
To make changes to this code base the recommendation is to use a virtual env:
```console
python3.11 -m venv .venv
source .venv/bin/activate
pip install ".[dev]"
```
Your IDE should be able to now access this virtual env and
provide you with autocomplete, intellisense, etc.
## How to Deploy
1. Build the package: `python3.11 -m build`
This will create the source tarball and wheel in the `dist/` folder.
2. Deploy to pypi: `python3.11 -m twine upload dist/*`
Enter your pypi username and password.
| text/markdown | null | "Abid H. Mujtaba" <abid.naqvi83@gmail.com> | null | null | null | fixture, fixtures, testing, tests | [
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"typing-extensions",
"black; extra == \"dev\"",
"mypy; extra == \"dev\"",
"pylint; extra == \"dev\"",
"pytest; extra == \"dev\""
] | [] | [] | [] | [
"Homepage, https://github.com/abid-mujtaba/testing-fixtures"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-19T00:24:25.212506 | testing_fixtures-1.0.1.tar.gz | 9,521 | ea/9f/580ccdc42307f828e75c33aa4ab3960d26beed90adbd60caca78b8a770f8/testing_fixtures-1.0.1.tar.gz | source | sdist | null | false | 35b137f15bae9ec00870e29c1e44ca3c | 9842681a115c32472c19846315abd8337cf480ade8620d2cfc7a392f219448cf | ea9f580ccdc42307f828e75c33aa4ab3960d26beed90adbd60caca78b8a770f8 | null | [] | 245 |
2.4 | beaker-gantry | 3.5.0 | Gantry streamlines running Python experiments in Beaker by managing containers and boilerplate for you | <div align="center">
<br>
<img src="https://raw.githubusercontent.com/allenai/beaker-py/main/docs/source/_static/beaker-500px-transparent.png" width="200"/>
<br>
<h1>Beaker Gantry</h1>
<p>Gantry is a CLI that streamlines running experiments in <a href="https://beaker.org">Beaker</a>.</p>
<hr/>
<a href="https://github.com/allenai/beaker-gantry/actions">
<img alt="CI" src="https://github.com/allenai/beaker-gantry/actions/workflows/main.yml/badge.svg">
</a>
<a href="https://pypi.org/project/beaker-gantry/">
<img alt="PyPI" src="https://img.shields.io/pypi/v/beaker-gantry">
</a>
<a href="https://github.com/allenai/beaker-gantry/blob/main/LICENSE">
<img alt="License" src="https://img.shields.io/github/license/allenai/beaker-gantry.svg?color=blue&cachedrop">
</a>
<br/><br/>
</div>
<!-- begin intro -->

⚡️*Easy to use*
- **No Docker required!** 🚫 🐳
- No writing Beaker YAML experiment specs.
- Easy setup.
- Simple CLI.
🏎 *Fast*
- Fire off Beaker experiments from your laptop instantly!
- No local image build or upload.
🪶 *Lightweight*
- Pure Python (built on top of [beaker](https://github.com/allenai/beaker)'s Python client).
- Minimal dependencies.
### Who is this for?
Gantry is for both new and seasoned Beaker users who need to run batch jobs (as opposed to interactive sessions) from a rapidly changing repository, especially Python-based jobs.
*Without* Gantry, this workflow usually looks like this:
1. Add a Dockerfile to your repository.
2. Build the Docker image locally.
3. Push the Docker image to Beaker.
4. Write a YAML Beaker experiment spec that points to the image you just uploaded.
5. Submit the experiment spec.
6. Make changes and repeat from step 2.
This requires experience with Docker, experience writing Beaker experiment specs, and a fast and reliable internet connection.
*With* Gantry, on the other hand, that same workflow simplifies down to this:
1. (Optional) Write a `pyproject.toml`/`setup.py` file, a PIP `requirements.txt` file, or a conda `environment.yml` file to specify your Python environment.
2. Commit and push your changes.
3. Submit and track a Beaker experiment with the `gantry run` command.
4. Make changes and repeat from step 2.
<!-- end intro -->
## In this README
- 💾 **[Installing](#installing)**
- 🚀 **[Quick start](#quick-start)**
- ❓ **[FAQ](#faq)**
### Additional info
#### 👋 *Examples*
- [Saving results / metrics from an experiment](./examples/metrics)
#### 💻 *For developers*
- [CHANGELOG](https://github.com/allenai/beaker-gantry/blob/main/CHANGELOG.md)
- [CONTRIBUTING](https://github.com/allenai/beaker-gantry/blob/main/CONTRIBUTING.md)
<!-- begin install -->
## Installing
### Installing with `pip`
Gantry is available [on PyPI](https://pypi.org/project/beaker-gantry/). Just run
```bash
pip install beaker-gantry
```
### Installing globally with `uv`
Gantry can be installed and made available on the PATH using [uv](https://docs.astral.sh/uv/):
```bash
uv tool install beaker-gantry
```
With this command, beaker-gantry is automatically installed to an isolated virtual environment.
### Installing from source
To install Gantry from source, first clone [the repository](https://github.com/allenai/beaker-gantry):
```bash
git clone https://github.com/allenai/beaker-gantry.git
cd beaker-gantry
```
Then run
```bash
pip install -e .
```
<!-- end install -->
<!-- begin quickstart -->
## Quick start
### One-time setup
1. **Create and clone your repository.**
If you haven't already done so, create a GitHub repository for your project and clone it locally.
**Every `gantry` command you run must be invoked from the root directory of your repository.**
2. **Configure Gantry.**
If you've already configured the [Beaker command-line client](https://github.com/allenai/beaker/), Gantry will
find and use the existing configuration file (usually located at `$HOME/.beaker/config.yml`).
Otherwise just set the environment variable `BEAKER_TOKEN` to your Beaker [user token](https://beaker.org/user).
Some gantry settings can also be specified in a `pyproject.toml` file under the section `[tool.gantry]`. For now those settings are:
1. `workspace` - The default Beaker workspace to use.
2. `gh_token_secret` - The name of the Beaker secret with your GitHub API token.
3. `budget` - The default Beaker budget to use.
4. `log_level` - The (local) Python log level. Defaults to "warning".
5. `quiet` - A boolean. If true the gantry logo won't be displayed on the command line.
For example:
```toml
# pyproject.toml
[tool.gantry]
workspace = "ai2/my-default-workspace"
gh_token_secret = "GITHUB_TOKEN"
budget = "ai2/my-teams-budget"
log_level = "warning"
quiet = false
```
The first time you call `gantry run ...` you'll also be prompted to provide a [GitHub personal access token](https://docs.github.com/en/authentication/keeping-your-account-and-data-secure/creating-a-personal-access-token) with the `repo` scope if your repository is private. This allows Gantry to clone your private repository when it runs in Beaker. You don't have to do this just yet (Gantry will prompt you for it), but if you need to update this token later you can use the `gantry config set-gh-token` command.
3. (Optional) **Specify your Python environment.**
Typically you'll have to create one of several different files to specify your Python environment. There are three widely used options:
1. A [`pyproject.toml`](https://pip.pypa.io/en/stable/reference/build-system/pyproject-toml/) or [`setup.py`](https://docs.python.org/3/distutils/introduction.html#a-simple-example) file.
2. A PIP [`requirements.txt`](https://pip.pypa.io/en/stable/user_guide/#requirements-files) file.
3. A conda [`environment.yml`](https://docs.conda.io/projects/conda/en/latest/user-guide/tasks/manage-environments.html#create-env-file-manually) file.
Gantry will automatically find and use these files to reconstruct your Python environment at runtime.
Alternatively you can provide a custom Python install command with the `--install` option to `gantry run`, or skip the Python setup completely with `--no-python`.
### Submit your first experiment with Gantry
Let's spin up a Beaker experiment that just prints "Hello, World!" from Python.
First make sure you've committed *and* pushed all changes so far in your repository.
Then (from the root of your repository) run:
```bash
gantry run --show-logs -- python -c 'print("Hello, World!")'
```
*❗Note: Everything after the `--` is the command + arguments you want to run on Beaker. It's necessary to include the `--` if any of your arguments look like options themselves (like `-c` in this example) so gantry can differentiate them from its own options.*
In this case we didn't request any GPUs nor a specific cluster, so this could run on any Beaker cluster.
We can use the `--gpu-type` and `--gpus` options to get GPUs. For example:
```bash
gantry run --show-logs --gpu-type=h100 --gpus=1 -- python -c 'print("Hello, World!")'
```
Or we can use the `--cluster` option to request clusters by their name or aliases. For example:
```bash
gantry run --show-logs --cluster=ai2/jupiter --gpus=1 -- python -c 'print("Hello, World!")'
```
Try `gantry run --help` to see all of the available options.
<!-- end quickstart -->
<!-- begin faq -->
## FAQ
### Can I use my own Docker/Beaker image?
<details>
<summary>Click to expand 💬</summary>
You sure can! Just set the `--beaker-image TEXT` or `--docker-image TEXT` option.
Gantry can use any image that has bash, curl, and git installed.
If your image comes with a Python environment that you want gantry to use, add the flag `--system-python`.
For example:
```bash
gantry run --show-logs --docker-image='python:3.10' --system-python -- python --version
```
</details>
### Will Gantry work for GPU experiments?
<details>
<summary>Click to expand 💬</summary>
Absolutely! This was the main use-case Gantry was developed for. Just set the `--gpus INT` option for `gantry run` to the number of GPUs you need, and optionally `--gpu-type TEXT` (e.g. `--gpu-type=h100`).
</details>
### How can I save results or metrics from an experiment?
<details>
<summary>Click to expand 💬</summary>
By default Gantry uses the `/results` directory on the image as the location of the results dataset, which will also be set as the environment variable `RESULTS_DIR`.
That means that everything your experiment writes to this directory will be persisted as a Beaker dataset when the experiment finalizes.
You can also attach metrics to your experiment in Beaker by writing a JSON file called `metrics.json` to the results directory, or by calling the function `gantry.api.write_metrics()` from within your experiment.
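For example, a run script might persist metrics like this (a sketch of the `metrics.json` route; `RESULTS_DIR` falls back to a local directory here so the snippet also runs outside Beaker):

```python
import json
import os
from pathlib import Path

# On Beaker, gantry sets RESULTS_DIR to the results dataset mount (/results).
results_dir = Path(os.environ.get("RESULTS_DIR", "/tmp/results"))
results_dir.mkdir(parents=True, exist_ok=True)

metrics = {"accuracy": 0.93, "loss": 0.21}
(results_dir / "metrics.json").write_text(json.dumps(metrics))
print((results_dir / "metrics.json").read_text())
```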
</details>
### How can I see the Beaker experiment spec that Gantry uses?
<details>
<summary>Click to expand 💬</summary>
You can use the `--dry-run` option with `gantry run` to see what Gantry will submit without actually submitting an experiment.
You can also use `--save-spec PATH` in combination with `--dry-run` to save the actual experiment spec to a YAML file.
</details>
### How can I update Gantry's GitHub token?
<details>
<summary>Click to expand 💬</summary>
Use the command `gantry config set-gh-token`.
</details>
### How can I attach Beaker datasets to an experiment?
<details>
<summary>Click to expand 💬</summary>
Use the `--dataset` option for `gantry run`. For example:
```bash
gantry run --show-logs --dataset='petew/squad-train:/input-data' -- ls /input-data
```
</details>
### How can I attach a WEKA bucket to an experiment?
<details>
<summary>Click to expand 💬</summary>
Use the `--weka` option for `gantry run`. For example:
```bash
gantry run --show-logs --weka='oe-training-default:/mount/weka' -- ls -l /mount/weka
```
</details>
### How can I run distributed multi-node batch jobs with Gantry?
<details>
<summary>Click to expand 💬</summary>
If you're using `torchrun` you can simply set the option `--replicas INT` along with the flag `--torchrun`.
Gantry will automatically configure your experiment and `torchrun` to run your command with all GPUs across all replicas.
For example:
```bash
gantry run \
  --show-logs \
  --gpus=8 \
  --gpu-type='h100' \
  --replicas=2 \
  --torchrun \
  --install 'uv pip install . torch numpy --torch-backend=cu129' \
  -- python -m gantry.all_reduce_bench
In general, the three options `--replicas INT`, `--leader-selection`, `--host-networking` used together give you the ability to run distributed batch jobs. See the [Beaker docs](https://beaker-docs.apps.allenai.org/experiments/distributed-training.html#batch-jobs) for more information.
Consider also setting `--propagate-failure`, `--propagate-preemption`, and `--synchronized-start-timeout TEXT` depending on your workload.
Here's a complete example using `torchrun` manually (without the `--torchrun` flag):
```bash
gantry run \
  --show-logs \
  --gpus=8 \
  --gpu-type='h100' \
  --replicas=2 \
  --leader-selection \
  --host-networking \
  --propagate-failure \
  --propagate-preemption \
  --synchronized-start-timeout='5m' \
  --install 'uv pip install . torch numpy --torch-backend=cu129' \
  --exec-method=bash \
  -- torchrun \
    '--nnodes="$BEAKER_REPLICA_COUNT:$BEAKER_REPLICA_COUNT"' \
    '--nproc-per-node="$BEAKER_ASSIGNED_GPU_COUNT"' \
    '--rdzv-id=12347' \
    '--rdzv-backend=static' \
    '--rdzv-endpoint="$BEAKER_LEADER_REPLICA_HOSTNAME:29400"' \
    '--node-rank="$BEAKER_REPLICA_RANK"' \
    '--rdzv-conf="read_timeout=420"' \
    -m gantry.all_reduce_bench
Note that we have environment variables like `BEAKER_REPLICA_COUNT` in the arguments to our `torchrun` command that we want to have expanded *at runtime*.
To accomplish this we do two things:
1. We wrap those arguments in single quotes to avoid expanding them locally.
2. We set `--exec-method=bash` to tell gantry to run our command and arguments with `bash -c`, which will do variable expansion.
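The quoting behavior can be checked locally (an illustrative shell session; `GREETING` is a made-up variable standing in for the `BEAKER_*` ones):

```shell
# Single quotes keep the variable reference unexpanded in the local shell:
ARG='$GREETING'
echo "$ARG"          # prints the literal text: $GREETING

# Running the same string through `bash -c` expands it at run time:
export GREETING='Hello from runtime'
bash -c "echo $ARG"  # prints: Hello from runtime
```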
Alternatively you could put your whole `torchrun` command into a script, let's call it `launch-torchrun.sh`, without single quotes around the arguments.
Then change your `gantry run` command like this:
```diff
gantry run \
--show-logs \
--gpus=8 \
--gpu-type='h100' \
--replicas=2 \
--leader-selection \
--host-networking \
--propagate-failure \
--propagate-preemption \
--synchronized-start-timeout='5m' \
--install 'uv pip install . torch numpy --torch-backend=cu129' \
- --exec-method='bash' \
- -- torchrun \
- '--nnodes="$BEAKER_REPLICA_COUNT:$BEAKER_REPLICA_COUNT"' \
- '--nproc-per-node="$BEAKER_ASSIGNED_GPU_COUNT"' \
- '--rdzv-id=12347' \
- '--rdzv-backend=static' \
- '--rdzv-endpoint="$BEAKER_LEADER_REPLICA_HOSTNAME:29400"' \
- '--node-rank="$BEAKER_REPLICA_RANK"' \
- '--rdzv-conf="read_timeout=420"' \
- -m gantry.all_reduce_bench
+ -- ./launch-torchrun.sh
```
</details>
### How can I customize the Python setup steps?
<details>
<summary>Click to expand 💬</summary>
If gantry's default Python setup steps don't work for you, you can override them through the `--install TEXT` option with a custom command or shell script.
For example:
```bash
gantry run --show-logs --install='pip install -r custom_requirements.txt' -- echo "Hello, World!"
```
</details>
### Can I use conda like with older versions of gantry?
<details>
<summary>Click to expand 💬</summary>
Yes, you can still use conda if you wish by committing a conda `environment.yml` file to your repo or by simply specifying `--python-manager=conda`.
For example:
```bash
gantry run --show-logs --python-manager=conda -- which python
```
</details>
### Can I use gantry with non-Python workloads?
<details>
<summary>Click to expand 💬</summary>
Absolutely, just add the flag `--no-python` and optionally set `--install` or `--post-setup` to a custom command or shell script if you need custom setup steps.
</details>
### Can I use gantry to launch Beaker jobs from GitHub Actions?
<details>
<summary>Click to expand 💬</summary>
Yes, in fact this is a great way to utilize otherwise idle on-premise hardware, especially with short-running, preemptible jobs such as those you might launch to run unit tests that require accelerators.
To do this you should set up a Beaker API token as a GitHub Actions Secret, named `BEAKER_TOKEN`, in your repository.
Then copy and modify this workflow for your needs:
```yaml
name: Beaker
concurrency:
group: ${{ github.workflow }}-${{ github.ref }}
cancel-in-progress: true
on:
pull_request:
branches:
- main
push:
branches:
- main
jobs:
gpu_tests:
name: GPU Tests
runs-on: ubuntu-latest
timeout-minutes: 15
env:
BEAKER_TOKEN: ${{ secrets.BEAKER_TOKEN }}
GANTRY_GITHUB_TESTING: 'true' # force better logging for CI
BEAKER_WORKSPACE: 'ai2/your-workspace' # TODO: change this to your Beaker workspace
steps:
- uses: actions/checkout@v5
with:
ref: ${{ github.event.pull_request.head.sha }} # check out PR head commit instead of merge commit
- uses: astral-sh/setup-uv@v6
with:
python-version: '3.12'
- name: install gantry
run:
uv tool install 'beaker-gantry>=3.1,<4.0'
- name: Determine current commit SHA (pull request)
if: github.event_name == 'pull_request'
run: |
echo "COMMIT_SHA=${{ github.event.pull_request.head.sha }}" >> $GITHUB_ENV
echo "BRANCH_NAME=${{ github.head_ref }}" >> $GITHUB_ENV
- name: Determine current commit SHA (push)
if: github.event_name != 'pull_request'
run: |
echo "COMMIT_SHA=$GITHUB_SHA" >> $GITHUB_ENV
echo "BRANCH_NAME=${{ github.ref_name }}" >> $GITHUB_ENV
- name: launch job
run: |
exec gantry run \
--show-logs \
--yes \
--workspace ${{ env.BEAKER_WORKSPACE }} \
--description 'GitHub Actions GPU tests' \
--ref ${{ env.COMMIT_SHA }} \
--branch ${{ env.BRANCH_NAME }} \
--priority normal \
--preemptible \
--gpus 1 \
--gpu-type h100 \
--gpu-type a100 \
-- pytest -v tests/cuda_tests/ # TODO: change to your own command
```
Note that we use `exec gantry run ...` instead of just `gantry run`. This ensures that if GitHub Actions cancels the job, the SIGINT and SIGTERM signals will propagate to `gantry`, allowing it to clean up gracefully and cancel the running job on Beaker.
</details>
### Can I use gantry outside of a git repository?
<details>
<summary>Click to expand 💬</summary>
Yes, you'll just need to provide the `--remote` option along with `--ref` and/or `--branch`.
For example: `gantry run --show-logs --yes --dry-run --remote allenai/beaker-gantry --branch main -- echo 'hello, world!'`
</details>
### Why "Gantry"?
<details>
<summary>Click to expand 💬</summary>
A gantry is a structure that's used, among other things, to lift containers off of ships. Analogously Beaker Gantry's purpose is to lift Docker containers (or at least the *management* of Docker containers) away from users.
</details>
<!-- end faq -->
| text/markdown | null | Allen Institute for Artificial Intelligence <contact@allenai.org>, Pete Walsh <petew@allenai.org> | null | null | null | null | [
"Intended Audience :: Science/Research",
"Programming Language :: Python :: 3",
"Topic :: Scientific/Engineering :: Artificial Intelligence"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"beaker-py<3.0,>=2.5.1",
"GitPython<4.0,>=3.0",
"rich",
"click",
"click-help-colors",
"click-option-group",
"petname<3.0,>=2.6",
"requests",
"packaging",
"tomli",
"dataclass-extensions",
"PyYAML",
"ruff; extra == \"dev\"",
"mypy<2.0,>=1.19.1; extra == \"dev\"",
"types-requests; extra == ... | [] | [] | [] | [
"homepage, https://github.com/allenai/beaker-gantry",
"repository, https://github.com/allenai/beaker-gantry"
] | twine/6.2.0 CPython/3.10.19 | 2026-02-19T00:23:56.031721 | beaker_gantry-3.5.0.tar.gz | 63,187 | 98/5b/91da479e9f2359ccc5a0a8eaf375b90501c9423862a00ab54d9efeac4023/beaker_gantry-3.5.0.tar.gz | source | sdist | null | false | bf02e5c4ca9262eb9b907321093b9904 | a3027f96c0cb190aba4f669681e520e9111dee2826ec06ef31d64cb08b56eb62 | 985b91da479e9f2359ccc5a0a8eaf375b90501c9423862a00ab54d9efeac4023 | null | [
"LICENSE"
] | 4,479 |
2.4 | cnoe-agent-utils | 0.3.10 | Core utilities for CNOE agents including LLM factory, tracing, and base agent classes | # 🤖 cnoe-agent-utils
[](https://pypi.org/project/cnoe-agent-utils/)
[](https://github.com/cnoe-io/cnoe-agent-utils/actions/workflows/unit-tests.yml)
[](https://github.com/cnoe-io/cnoe-agent-utils/actions/workflows/pypi.yml)
[](https://github.com/cnoe-io/cnoe-agent-utils/actions/workflows/unit-tests.yml)
[](https://github.com/cnoe-io/cnoe-agent-utils/actions/workflows/test-aws-bedrock.yml)
[](https://github.com/cnoe-io/cnoe-agent-utils/actions/workflows/test-azure-openai.yml)
[](https://github.com/cnoe-io/cnoe-agent-utils/actions/workflows/test-openai.yml)
[](https://github.com/cnoe-io/cnoe-agent-utils/actions/workflows/test-gcp-vertex.yml)
[](https://github.com/cnoe-io/cnoe-agent-utils/actions/workflows/test-google-gemini.yml)
* **Reusable utilities and abstractions** for building agent-based (LLM-powered) systems.
* **Centralized LLM Factory** supporting major providers (AWS, Azure, GCP, OpenAI, Gemini, Anthropic).
* **Centralized Tracing Utilities** (since v0.2.0) to eliminate duplicated tracing code across CNOE agents.
* **Agent Base Classes** (since v0.4.0) for LangGraph and Strands agent frameworks with A2A protocol support.
## Key Features
### **Core Utilities**
* Unified interface (LLM Factory) for seamless LLM instantiation across multiple clouds and vendors.
- 🏭 **LLM Factory** for easy model instantiation across:
- ☁️ AWS
- ☁️ Azure
- ☁️ GCP Vertex
- 🤖 Google Gemini
- 🤖 Anthropic Claude
- 🤖 OpenAI
- 🤖 Groq
* Simple, environment-variable-driven configuration.
* Example scripts for each LLM provider with setup instructions.
### **Agent Tracing (since v0.2.0)**
* **Centralized tracing logic:** Removes 350+ lines of repeated code per agent.
* **Single import/decorator:** No more copy-pasting tracing logic.
* **Environment-based toggling:** Use `ENABLE_TRACING` env var to control all tracing.
* **A2A Tracing Disabling:** Single method to monkey-patch/disable agent-to-agent tracing everywhere.
* **Graceful fallback:** Works with or without Langfuse; tracing is zero-overhead when disabled.
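The environment-based toggle can be pictured as a decorator that becomes a no-op when tracing is off. This is a simplified sketch of the idea, not the library's actual implementation:

```python
import functools
import os


def traced(func):
    """Wrap func with trace logging only when ENABLE_TRACING=true.

    Illustrative sketch: when tracing is disabled, the original function
    is returned unchanged, so there is zero runtime overhead.
    """
    if os.environ.get("ENABLE_TRACING", "").lower() != "true":
        return func  # no-op: tracing disabled

    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        print(f"trace: entering {func.__name__}")
        try:
            return func(*args, **kwargs)
        finally:
            print(f"trace: exiting {func.__name__}")

    return wrapper
```

The real utilities additionally integrate with Langfuse when it is installed; the sketch only shows the env-var gating.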
### **Agent Base Classes (since v0.4.0)**
* **Multi-Framework Support:** Base classes for LangGraph and Strands agent frameworks
* **A2A Protocol Integration:** Seamless integration with Agent-to-Agent protocol for distributed agent systems
* **Context Management:** Automatic context window management with token counting and intelligent message trimming
* **Streaming Support:** Built-in streaming capabilities for real-time agent responses with tool notifications
* **Optional Dependencies:** Graceful handling of missing dependencies - install only what you need
* **MCP Integration:** Built-in support for Model Context Protocol (MCP) with multi-server configurations
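The context-management behavior above can be illustrated with a simplified trimming sketch. The real base classes use proper token counting; the `len(text) // 4` estimate here is only a rough stand-in:

```python
def trim_messages(messages: list[str], max_tokens: int) -> list[str]:
    """Keep the most recent messages whose combined token estimate fits
    within max_tokens. Illustrative sketch, not the library's code."""
    kept: list[str] = []
    total = 0
    for msg in reversed(messages):  # walk newest-first
        cost = max(1, len(msg) // 4)  # crude token estimate
        if total + cost > max_tokens:
            break
        kept.append(msg)
        total += cost
    return list(reversed(kept))  # restore chronological order
```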
---
**Note:**
- Check out this tutorial on [Tracing](TRACING.md)
- See [Agent Base Classes Documentation](cnoe_agent_utils/agents/README.md) for detailed agent utilities guide
## 🚀 LLM Factory Getting Started
### 🛡️ Create and Activate a Virtual Environment
It is recommended to use a virtual environment to manage dependencies:
```bash
python3 -m venv .venv
source .venv/bin/activate
```
### ⚡ Prerequisite: Install `uv`
Before running the examples, install [`uv`](https://github.com/astral-sh/uv):
```bash
pip install uv
```
### 📦 Installation
#### Installation Options
**Default Installation (recommended for most users):**
```bash
pip install cnoe-agent-utils
```
This installs all dependencies and provides full functionality. It's equivalent to `pip install 'cnoe-agent-utils[all]'`.
**Minimal Installation (specific functionality only):**
Use these when you only need specific functionality or want to minimize package size:
```bash
# Anthropic Claude support only
pip install "cnoe-agent-utils[anthropic]"
# OpenAI support (openai.com GPT models) only
pip install "cnoe-agent-utils[openai]"
# Azure OpenAI support (Azure-hosted GPT models) only
pip install "cnoe-agent-utils[azure]"
# AWS support (Bedrock, etc.) only
pip install "cnoe-agent-utils[aws]"
# Google Cloud support (Vertex AI, Gemini) only
pip install "cnoe-agent-utils[gcp]"
# Groq support only
pip install "cnoe-agent-utils[groq]"
# Advanced tracing and observability (Langfuse, OpenTelemetry) only
pip install "cnoe-agent-utils[tracing]"
# Agent base classes and utilities only
pip install "cnoe-agent-utils[agents]"
# LangGraph agent framework support
pip install "cnoe-agent-utils[langgraph]"
# Strands agent framework support
pip install "cnoe-agent-utils[strands]"
# A2A protocol support for agent executors
pip install "cnoe-agent-utils[a2a]"
# Complete agent stack (all agent frameworks)
pip install "cnoe-agent-utils[agents-all]"
# Development dependencies (testing, linting, etc.)
pip install "cnoe-agent-utils[dev]"
```
#### Using uv
```bash
# Default installation (all dependencies)
uv add cnoe-agent-utils
# Minimal installation (specific functionality only)
uv add "cnoe-agent-utils[anthropic]"
uv add "cnoe-agent-utils[openai]"
uv add "cnoe-agent-utils[azure]"
uv add "cnoe-agent-utils[aws]"
uv add "cnoe-agent-utils[groq]"
uv add "cnoe-agent-utils[gcp]"
uv add "cnoe-agent-utils[tracing]"
uv add "cnoe-agent-utils[agents]"
uv add "cnoe-agent-utils[langgraph]"
uv add "cnoe-agent-utils[strands]"
uv add "cnoe-agent-utils[a2a]"
uv add "cnoe-agent-utils[agents-all]"
```
#### Local Development
If you are developing locally:
```bash
git clone https://github.com/cnoe-io/cnoe-agent-utils.git
cd cnoe-agent-utils
uv sync
```
---
## 🧑‍💻 Usage
To test integration with different LLM providers, configure the required environment variables for each provider as shown below. Then, run the corresponding example script using `uv`.
---
### 🤖 Anthropic
Set the following environment variables:
```bash
export ANTHROPIC_API_KEY=<your_anthropic_api_key>
export ANTHROPIC_MODEL_NAME=<model_name>
# Optional: Enable extended thinking for Claude 4+ models
export ANTHROPIC_THINKING_ENABLED=true
export ANTHROPIC_THINKING_BUDGET=1024 # Default: 1024, Min: 1024
```
Run the example:
```bash
uv run examples/test_anthropic.py
```
---
### ☁️ AWS Bedrock (Anthropic Claude)
Set the following environment variables:
```bash
export AWS_PROFILE=<your_aws_profile>
export AWS_REGION=<your_aws_region>
export AWS_BEDROCK_MODEL_ID="us.anthropic.claude-3-7-sonnet-20250219-v1:0"
export AWS_BEDROCK_PROVIDER="anthropic"
# Optional: Enable extended thinking for Claude 4+ models
export AWS_BEDROCK_THINKING_ENABLED=true
export AWS_BEDROCK_THINKING_BUDGET=1024 # Default: 1024, Min: 1024
```
Run the example:
```bash
uv run examples/test_aws_bedrock_claude.py
```
### 🤖 Groq
Set the following environment variables:
```bash
# Groq configuration
export GROQ_API_KEY=<your_groq_api_key>
export GROQ_MODEL_NAME=<model_name>
export GROQ_TEMPERATURE=<temperature>
```
Run the example:
```bash
uv run examples/groq_stream.py
```
#### AWS Bedrock Prompt Caching
AWS Bedrock supports **prompt caching** to reduce latency and costs by caching repeated context across requests. This feature is particularly beneficial for:
- Multi-turn conversations with long system prompts
- Repeated use of large context documents
- Agent systems with consistent instructions
**Enable prompt caching:**
```bash
export AWS_BEDROCK_ENABLE_PROMPT_CACHE=true
```
**Supported Models:**
For the latest list of models that support prompt caching and their minimum token requirements, see the [AWS Bedrock Prompt Caching documentation](https://docs.aws.amazon.com/bedrock/latest/userguide/prompt-caching.html).
**Implementation Note:** When `AWS_BEDROCK_ENABLE_PROMPT_CACHE=true`, the library uses `ChatBedrockConverse` which has native prompt caching support. If your model doesn't support caching, AWS Bedrock will return a clear error message. There's no need to validate model compatibility in advance—AWS handles this automatically.
**Note:** Model IDs may include regional prefixes (`us.`, `eu.`, `ap.`, etc.) depending on your AWS account configuration. Pass the full model ID as provided by AWS:
- Example: `us.anthropic.claude-3-7-sonnet-20250219-v1:0`
- Example: `anthropic.claude-opus-4-1-20250805-v1:0`
**Benefits:**
- Up to **85% reduction in latency** for cached content
- Up to **90% reduction in costs** for cached tokens
- **5-minute cache TTL** (automatically managed by AWS)
- Maximum **4 cache checkpoints** per request
**Usage Example:**
```python
import os
from cnoe_agent_utils.llm_factory import LLMFactory
from langchain_core.messages import SystemMessage, HumanMessage
# Enable caching
os.environ["AWS_BEDROCK_ENABLE_PROMPT_CACHE"] = "true"
# Initialize LLM
llm = LLMFactory("aws-bedrock").get_llm()
# Create cache point for system message
cache_point = llm.create_cache_point()
# Build messages with cache control
messages = [
SystemMessage(content=[
{"text": "You are a helpful AI assistant with expertise in..."},
cache_point # Marks cache checkpoint
]),
HumanMessage(content="What is your primary function?")
]
# Invoke with caching
response = llm.invoke(messages)
# Check cache statistics in response metadata
if hasattr(response, 'response_metadata'):
usage = response.response_metadata.get('usage', {})
print(f"Cache read tokens: {usage.get('cacheReadInputTokens', 0)}")
print(f"Cache creation tokens: {usage.get('cacheCreationInputTokens', 0)}")
```
**Run the caching example:**
```bash
uv run examples/aws_bedrock_cache_example.py
```
**Monitoring Cache Performance:**
Cache hit/miss statistics are available in:
1. **Response metadata** - `cacheReadInputTokens` and `cacheCreationInputTokens`
2. **CloudWatch metrics** - Track cache performance across all requests
3. **Application logs** - Enable via `AWS_CREDENTIALS_DEBUG=true`
**Best Practices:**
- Use cache for system prompts and context that remain consistent across requests
- Ensure cached content meets minimum token requirements (see AWS documentation for model-specific limits)
- Place cache points strategically (after system messages, large context documents, or tool definitions)
- Monitor cache hit rates to optimize placement
---
### ☁️ Azure OpenAI
Set the following environment variables:
```bash
export AZURE_OPENAI_API_KEY=<your_azure_openai_api_key>
export AZURE_OPENAI_API_VERSION=<api_version>
export AZURE_OPENAI_DEPLOYMENT=gpt-4.1
export AZURE_OPENAI_ENDPOINT=<your_azure_openai_endpoint>
```
Run the example:
```bash
uv run examples/test_azure_openai.py
```
---
### 🤖 OpenAI
Set the following environment variables:
```bash
export OPENAI_API_KEY=<your_openai_api_key>
export OPENAI_ENDPOINT=https://api.openai.com/v1
export OPENAI_MODEL_NAME=gpt-4.1
```
Optional configuration:
```bash
export OPENAI_DEFAULT_HEADERS='{"my-header-key":"my-value"}'
export OPENAI_USER=user-identifier
```
Run the example:
```bash
uv run examples/test_openai.py
```
---
### 🤖 Google Gemini
Set the following environment variable:
```bash
export GOOGLE_API_KEY=<your_google_api_key>
```
Run the example:
```bash
uv run examples/test_google_gemini.py
```
---
### ☁️ GCP Vertex AI
Set the following environment variables:
```bash
export GOOGLE_APPLICATION_CREDENTIALS=~/.config/gcp.json
export VERTEXAI_MODEL_NAME="gemini-2.0-flash-001"
# Optional: Enable extended thinking for Claude 4+ models on Vertex AI
export VERTEXAI_THINKING_ENABLED=true
export VERTEXAI_THINKING_BUDGET=1024 # Default: 1024, Min: 1024
```
Run the example:
```bash
uv run examples/test_gcp_vertexai.py
```
This demonstrates how to use the LLM Factory and other utilities provided by the library.
---
## 🔧 Middleware
The `cnoe_agent_utils.middleware` module provides a collection of reusable middleware components for LangGraph agents, extending the [DeepAgents library](https://github.com/langchain-ai/deepagents) from LangChain. Middleware allows you to intercept and modify agent behavior at various stages of execution without changing the core agent logic.
> [!NOTE]
> The middleware listed below extends the default DeepAgents middleware (such as `PlanningMiddleware`, `FilesystemMiddleware`, and `SubAgentMiddleware`) with additional specialized capabilities for advanced agent workflows.
### Extended Middleware
#### **CallToolWithFileArgMiddleware**
Automatically substitutes file paths with their contents when calling non-filesystem tools.
**Features:**
- Intercepts tool calls after model generation
- Replaces file path arguments with actual file contents from the in-memory FS
- Preserves original behavior for filesystem-specific tools
- Generates acknowledgment messages for transformed calls
**How it works:**
1. Agent calls a tool with a file path as an argument
2. Middleware detects the file path and replaces it with file contents
3. Creates a `ToolMessage` acknowledging the original call
4. Emits a rewritten `AIMessage` with the actual tool call using file contents
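Step 2 amounts to swapping file-path arguments for the corresponding file contents. A minimal illustrative sketch of that substitution (not the middleware's real code; names here are hypothetical):

```python
def substitute_file_args(args: dict, in_memory_fs: dict) -> dict:
    """Replace any string argument that names a file in the in-memory FS
    with that file's contents; other arguments pass through unchanged."""
    return {
        key: in_memory_fs.get(value, value) if isinstance(value, str) else value
        for key, value in args.items()
    }
```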
**Usage:**
```python
from cnoe_agent_utils.middleware import CallToolWithFileArgMiddleware
middleware = [CallToolWithFileArgMiddleware()]
agent = create_agent(model, tools=tools, middleware=middleware)
```
---
#### **QuickActionTasksAnnouncementMiddleware**
Manages task announcements and execution flow for quick action scenarios.
**Features:**
- Announces the next task via `AIMessage` without immediate execution
- Updates todo status to "in_progress" for the current task
- Removes and replaces previous `write_todos` tool calls
- Coordinates with `SubAgentMiddleware` for task execution
---
#### **RemoveToolsForSubagentMiddleware**
Conditionally removes tools when an agent is called as a sub-agent.
**Features:**
- Detects when agent is running as a sub-agent
- Removes `write_todos` and `task` tools in sub-agent mode
- Prevents recursive task management in nested agent hierarchies
### Middleware Execution Flow
Middleware hooks are executed at different stages:
1. **`before_model`**: Called before the LLM is invoked
- Modify state before model sees it
- Inject messages or update context
2. **`modify_model_request`**: Called to modify the model request
- Change system prompts
- Filter or add tools
- Adjust model parameters
3. **`after_model`**: Called after the LLM generates a response
- Transform tool calls
- Add acknowledgment messages
- Update state based on model output
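To make the hook ordering concrete, here is a minimal illustrative pipeline; the names and signatures are simplified and do not match the DeepAgents API:

```python
class LoggingMiddleware:
    """Toy middleware recording when its hooks fire."""

    def before_model(self, state):
        state.setdefault("log", []).append("before")
        return state

    def after_model(self, state):
        state["log"].append("after")
        return state


def run_with_middleware(state, middleware, model):
    # before_model hooks run first, in registration order
    for m in middleware:
        if hasattr(m, "before_model"):
            state = m.before_model(state)
    # the model is invoked on the prepared state
    state["response"] = model(state)
    # after_model hooks then transform the result
    for m in middleware:
        if hasattr(m, "after_model"):
            state = m.after_model(state)
    return state
```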
## 📜 License
Apache 2.0 (see [LICENSE](./LICENSE))
---
## 👥 Maintainers
See [MAINTAINERS.md](MAINTAINERS.md)
- Contributions welcome via PR or issue! | text/markdown | null | CNOE Contributors <cnoe-steering@googlegroups.com> | null | null | null | agents, cnoe, llm, observability, tracing | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"License :: OSI Approved :: Apache Software License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.13",
"Topic :: Software Development :: Libraries :: Python Modules"
] | [] | null | null | <4.0,>=3.13 | [] | [] | [] | [
"boto3>=1.34.0",
"botocore>=1.34.0",
"google-auth<3.0.0,>=2.40.2",
"google-cloud-aiplatform>=1.38.0",
"langchain-anthropic>=0.3.14",
"langchain-aws>=1.1.0",
"langchain-google-genai>=2.1.5",
"langchain-google-vertexai>=3.0.1",
"langchain-groq>=0.1.0",
"langchain-openai>=0.3.18",
"langfuse<4.0.0,>... | [] | [] | [] | [] | twine/6.1.0 CPython/3.13.7 | 2026-02-19T00:23:53.187862 | cnoe_agent_utils-0.3.10.tar.gz | 197,097 | fc/52/1a4aa263532396905c03c3c3d3fe5fdcb595c0e0d7d5916cf58c34da09f8/cnoe_agent_utils-0.3.10.tar.gz | source | sdist | null | false | 5bddcf56bc3a1a6fe4e1449d1abb33e2 | 76bbe65d9438bf550384bd4ace6dfdbeec285c9c4b2098657450a254d5e65575 | fc521a4aa263532396905c03c3c3d3fe5fdcb595c0e0d7d5916cf58c34da09f8 | null | [
"LICENSE"
] | 596 |
2.4 | uipath-llm-client | 1.2.1 | UiPath LLM Client | # UiPath LLM Client
A Python client for interacting with UiPath's LLM services. This package provides both a low-level HTTP client and framework-specific integrations (LangChain, LlamaIndex) for accessing LLMs through UiPath's infrastructure.
## Architecture Overview
This repository is organized as a monorepo with the following packages:
- **`uipath_llm_client`** (root): Core HTTP client with authentication, retry logic, and request handling
- **`uipath_langchain_client`** (packages/): LangChain-compatible chat models and embeddings
- **`uipath_llamaindex_client`** (packages/): LlamaIndex-compatible integrations
### Supported Backends
The client supports two UiPath backends:
| Backend | Description | Default |
|---------|-------------|---------|
| **AgentHub** | UiPath's AgentHub infrastructure with automatic CLI-based authentication | Yes |
| **LLMGateway** | UiPath's LLM Gateway with S2S authentication | No |
### Supported Providers
| Provider | Chat Models | Embeddings | Vendor Type |
|----------|-------------|------------|-------------|
| OpenAI/Azure | GPT-4o, GPT-4, etc. | text-embedding-3-large/small | `openai` |
| Google | Gemini 2.5, Gemini 2.0, etc. | text-embedding-004 | `vertexai` |
| Anthropic | Claude Sonnet 4.5, etc. | - | `awsbedrock`, `vertexai` |
| AWS Bedrock | Claude, Titan, etc. | Titan Embeddings, etc. | `awsbedrock` |
| Fireworks AI | Various open-source models | Various | `openai` |
| Azure AI | Various Azure AI models | Various | `azure` |
## Installation
### Using `pip`
```bash
# Base installation (core client only)
pip install uipath-llm-client
# With LangChain support
pip install uipath-langchain-client
# With specific provider extras for passthrough mode
pip install "uipath-langchain-client[openai]" # OpenAI/Azure OpenAI models
pip install "uipath-langchain-client[google]" # Google Gemini models
pip install "uipath-langchain-client[anthropic]" # Anthropic Claude models
pip install "uipath-langchain-client[aws]" # AWS Bedrock models
pip install "uipath-langchain-client[azure]" # Azure AI models
pip install "uipath-langchain-client[vertexai]" # Google Vertex AI (Anthropic on Vertex)
pip install "uipath-langchain-client[fireworks]" # Fireworks AI models
pip install "uipath-langchain-client[all]" # All providers
```
### Using `uv`
1. Add the custom index to your `pyproject.toml`:
```toml
[[tool.uv.index]]
name = "uipath"
url = "https://uipath.pkgs.visualstudio.com/_packaging/ml-packages/pypi/simple/"
publish-url = "https://uipath.pkgs.visualstudio.com/_packaging/ml-packages/pypi/upload/"
```
2. Install the packages:
```bash
# Core client
uv add uipath-llm-client
# LangChain integration with all providers
uv add "uipath-langchain-client[all]"
```
## Configuration
### AgentHub Backend (Default)
The AgentHub backend uses the UiPath CLI for authentication. On first use, it will prompt you to log in via browser.
```bash
# Optional: Pre-authenticate via CLI
uv run uipath auth login
# Or set environment variables directly
export UIPATH_ENVIRONMENT="cloud" # Environment: "cloud", "staging", or "alpha" (default: "cloud")
export UIPATH_URL="https://cloud.uipath.com"
export UIPATH_ORGANIZATION_ID="your-org-id"
export UIPATH_TENANT_ID="your-tenant-id"
export UIPATH_ACCESS_TOKEN="your-access-token" # Optional if using CLI auth
# For S2S authentication (alternative to CLI)
export UIPATH_CLIENT_ID="your-client-id"
export UIPATH_CLIENT_SECRET="your-client-secret"
export UIPATH_CLIENT_SCOPE="your-scope" # Optional: custom OAuth scope
```
### LLMGateway Backend
To use the LLMGateway backend, set the following environment variables:
```bash
# Select the backend
export UIPATH_LLM_BACKEND="llmgateway"
# Required configuration
export LLMGW_URL="https://your-llmgw-url.com"
export LLMGW_SEMANTIC_ORG_ID="your-org-id"
export LLMGW_SEMANTIC_TENANT_ID="your-tenant-id"
export LLMGW_REQUESTING_PRODUCT="your-product-name"
export LLMGW_REQUESTING_FEATURE="your-feature-name"
# Authentication (choose one)
export LLMGW_ACCESS_TOKEN="your-access-token"
# OR for S2S authentication:
export LLMGW_CLIENT_ID="your-client-id"
export LLMGW_CLIENT_SECRET="your-client-secret"
# Optional tracking
export LLMGW_SEMANTIC_USER_ID="your-user-id"
```
## Settings Reference
### AgentHubSettings
Configuration settings for UiPath AgentHub client requests. These settings control routing, authentication, and tracking for requests to AgentHub.
```python
from uipath_llm_client.settings import AgentHubSettings
settings = AgentHubSettings(
environment="cloud", # UiPath environment
access_token="...", # Optional: pre-set access token
base_url="...", # Optional: custom base URL
tenant_id="...", # Optional: tenant ID
organization_id="...", # Optional: organization ID
)
```
| Attribute | Environment Variable | Type | Default | Description |
|-----------|---------------------|------|---------|-------------|
| `environment` | `UIPATH_ENVIRONMENT` | `"cloud"` \| `"staging"` \| `"alpha"` | `"cloud"` | The UiPath environment to connect to |
| `access_token` | `UIPATH_ACCESS_TOKEN` | `SecretStr \| None` | `None` | Access token for authentication (auto-populated via CLI if not set) |
| `base_url` | `UIPATH_URL` | `str \| None` | `None` | Base URL of the AgentHub API (auto-populated via CLI if not set) |
| `tenant_id` | `UIPATH_TENANT_ID` | `str \| None` | `None` | Tenant ID for request routing (auto-populated via CLI if not set) |
| `organization_id` | `UIPATH_ORGANIZATION_ID` | `str \| None` | `None` | Organization ID for request routing (auto-populated via CLI if not set) |
| `client_id` | `UIPATH_CLIENT_ID` | `SecretStr \| None` | `None` | Client ID for OAuth/S2S authentication |
| `client_secret` | `UIPATH_CLIENT_SECRET` | `SecretStr \| None` | `None` | Client secret for OAuth/S2S authentication |
| `client_scope` | `UIPATH_CLIENT_SCOPE` | `str \| None` | `None` | Custom OAuth scope for authentication |
| `agenthub_config` | `UIPATH_AGENTHUB_CONFIG` | `str` | `"agentsruntime"` | AgentHub configuration for tracing |
| `process_key` | `UIPATH_PROCESS_KEY` | `str \| None` | `None` | Process key for tracing |
| `job_key` | `UIPATH_JOB_KEY` | `str \| None` | `None` | Job key for tracing |
**Authentication behavior:**
- If `access_token`, `base_url`, `tenant_id`, and `organization_id` are all provided, they are used directly
- Otherwise, the client uses the UiPath CLI (`uipath auth`) to authenticate automatically
- For S2S authentication, provide `client_id` and `client_secret`
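The resolution order can be sketched as a small helper; this is hypothetical and for illustration only, not the client's actual logic:

```python
def resolve_auth(settings: dict) -> str:
    """Return which auth path the sketch would take, per the rules above."""
    direct_keys = ("access_token", "base_url", "tenant_id", "organization_id")
    if all(settings.get(k) for k in direct_keys):
        return "direct"  # all four values provided: used as-is
    if settings.get("client_id") and settings.get("client_secret"):
        return "s2s"  # OAuth S2S with client credentials
    return "cli"  # fall back to `uipath auth` CLI login
```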
### LLMGatewaySettings
Configuration settings for LLM Gateway client requests. These settings control routing, authentication, and tracking for requests to LLM Gateway.
```python
from uipath_llm_client.settings import LLMGatewaySettings
settings = LLMGatewaySettings(
base_url="https://your-llmgw-url.com",
org_id="your-org-id",
tenant_id="your-tenant-id",
requesting_product="your-product",
requesting_feature="your-feature",
client_id="your-client-id", # For S2S auth
client_secret="your-client-secret", # For S2S auth
)
```
| Attribute | Environment Variable | Type | Required | Description |
|-----------|---------------------|------|----------|-------------|
| `base_url` | `LLMGW_URL` | `str` | Yes | Base URL of the LLM Gateway |
| `org_id` | `LLMGW_SEMANTIC_ORG_ID` | `str` | Yes | Organization ID for request routing |
| `tenant_id` | `LLMGW_SEMANTIC_TENANT_ID` | `str` | Yes | Tenant ID for request routing |
| `requesting_product` | `LLMGW_REQUESTING_PRODUCT` | `str` | Yes | Product name making the request (for tracking) |
| `requesting_feature` | `LLMGW_REQUESTING_FEATURE` | `str` | Yes | Feature name making the request (for tracking) |
| `access_token` | `LLMGW_ACCESS_TOKEN` | `SecretStr \| None` | Conditional | Access token for authentication |
| `client_id` | `LLMGW_CLIENT_ID` | `SecretStr \| None` | Conditional | Client ID for S2S authentication |
| `client_secret` | `LLMGW_CLIENT_SECRET` | `SecretStr \| None` | Conditional | Client secret for S2S authentication |
| `user_id` | `LLMGW_SEMANTIC_USER_ID` | `str \| None` | No | User ID for tracking and billing |
| `action_id` | `LLMGW_ACTION_ID` | `str \| None` | No | Action ID for tracking |
| `operation_code` | `LLMGW_OPERATION_CODE` | `str \| None` | No | Operation code to identify BYO models |
| `additional_headers` | `LLMGW_ADDITIONAL_HEADERS` | `Mapping[str, str]` | No | Additional custom headers to include in requests |
**Authentication behavior:**
- Either `access_token` OR both `client_id` and `client_secret` must be provided
- S2S authentication uses `client_id`/`client_secret` to obtain tokens automatically
## Usage Examples
### Quick Start with Direct Client Classes
The simplest way to get started - settings are automatically loaded from environment variables:
```python
from uipath_langchain_client.clients.openai.chat_models import UiPathAzureChatOpenAI
# No settings needed - uses defaults from environment (AgentHub backend)
chat = UiPathAzureChatOpenAI(model="gpt-4o-2024-11-20")
response = chat.invoke("What is the capital of France?")
print(response.content)
```
### Using Different Providers
```python
from uipath_langchain_client.clients.openai.chat_models import UiPathAzureChatOpenAI
from uipath_langchain_client.clients.google.chat_models import UiPathChatGoogleGenerativeAI
from uipath_langchain_client.clients.anthropic.chat_models import UiPathChatAnthropic
from uipath_langchain_client.clients.openai.embeddings import UiPathAzureOpenAIEmbeddings
# OpenAI/Azure models
openai_chat = UiPathAzureChatOpenAI(model="gpt-4o-2024-11-20")
response = openai_chat.invoke("Hello!")
print(response.content)
# Google Gemini models
gemini_chat = UiPathChatGoogleGenerativeAI(model="gemini-2.5-flash")
response = gemini_chat.invoke("Hello!")
print(response.content)
# Anthropic Claude models (via AWS Bedrock)
claude_chat = UiPathChatAnthropic(model="anthropic.claude-sonnet-4-5-20250929-v1:0", vendor_type="awsbedrock")
response = claude_chat.invoke("Hello!")
print(response.content)
# Embeddings
embeddings = UiPathAzureOpenAIEmbeddings(model="text-embedding-3-large")
vectors = embeddings.embed_documents(["Hello world", "How are you?"])
print(f"Generated {len(vectors)} embeddings of dimension {len(vectors[0])}")
```
### Using Factory Functions (Auto-Detect Vendor)
Factory functions automatically detect the model vendor but require settings to be passed:
```python
from uipath_langchain_client import get_chat_model, get_embedding_model
from uipath_llm_client.settings import get_default_client_settings
settings = get_default_client_settings()
# Create a chat model - vendor is auto-detected from model name
chat_model = get_chat_model(model_name="gpt-4o-2024-11-20", client_settings=settings)
response = chat_model.invoke("What is the capital of France?")
print(response.content)
# Create an embeddings model
embeddings_model = get_embedding_model(model_name="text-embedding-3-large", client_settings=settings)
vectors = embeddings_model.embed_documents(["Hello world", "How are you?"])
```
### Using the Normalized API (Provider-Agnostic)
The normalized API provides a consistent interface across all LLM providers:
```python
from uipath_langchain_client import get_chat_model
from uipath_llm_client.settings import get_default_client_settings
settings = get_default_client_settings()
# Use normalized API for provider-agnostic calls
chat_model = get_chat_model(
model_name="gpt-4o-2024-11-20",
client_settings=settings,
client_type="normalized",
)
# Works the same way regardless of the underlying provider
response = chat_model.invoke("Explain quantum computing in simple terms.")
print(response.content)
```
### Streaming Responses
All chat models support streaming for real-time output:
```python
from uipath_langchain_client.clients.openai.chat_models import UiPathAzureChatOpenAI
chat_model = UiPathAzureChatOpenAI(model="gpt-4o-2024-11-20")
for chunk in chat_model.stream("Write a short poem about coding."):
print(chunk.content, end="", flush=True)
print()
```
### Async Operations
For async/await support:
```python
import asyncio
from uipath_langchain_client.clients.openai.chat_models import UiPathAzureChatOpenAI
async def main():
chat_model = UiPathAzureChatOpenAI(model="gpt-4o-2024-11-20")
# Async invoke
response = await chat_model.ainvoke("What is 2 + 2?")
print(response.content)
# Async streaming
async for chunk in chat_model.astream("Tell me a joke."):
print(chunk.content, end="", flush=True)
print()
asyncio.run(main())
```
### Tool/Function Calling
Use tools with LangChain's standard interface:
```python
from uipath_langchain_client.clients.openai.chat_models import UiPathAzureChatOpenAI
from langchain_core.tools import tool
@tool
def get_weather(city: str) -> str:
    """Get the current weather for a city."""
    return f"The weather in {city} is sunny and 72°F."

@tool
def calculate(expression: str) -> str:
    """Evaluate a mathematical expression."""
    return str(eval(expression))  # demo only: eval is unsafe on untrusted input

chat_model = UiPathAzureChatOpenAI(model="gpt-4o-2024-11-20")
# Bind tools to the model
model_with_tools = chat_model.bind_tools([get_weather, calculate])
# The model can now use tools
response = model_with_tools.invoke("What's the weather in Paris?")
print(response.tool_calls)
```
### Using with LangChain Agents
Integrate with LangChain's agent framework:
```python
from uipath_langchain_client.clients.openai.chat_models import UiPathAzureChatOpenAI
from langchain_core.tools import tool
from langgraph.prebuilt import create_react_agent
@tool
def search(query: str) -> str:
    """Search for information."""
    return f"Search results for: {query}"
chat_model = UiPathAzureChatOpenAI(model="gpt-4o-2024-11-20")
agent = create_react_agent(chat_model, [search])
# Run the agent
result = agent.invoke({"messages": [("user", "Search for Python tutorials")]})
print(result["messages"][-1].content)
```
### Native SDK Wrappers (Without LangChain)
The core `uipath_llm_client` package provides thin wrappers around native vendor SDKs. These are drop-in replacements that route requests through UiPath's infrastructure while preserving the original SDK's interface:
```python
from uipath_llm_client.clients.openai import UiPathOpenAI, UiPathAzureOpenAI
# Drop-in replacement for openai.OpenAI — routes through UiPath
client = UiPathOpenAI(model_name="gpt-4o-2024-11-20")
response = client.chat.completions.create(
    model="gpt-4o-2024-11-20",
    messages=[{"role": "user", "content": "Hello!"}],
)
print(response.choices[0].message.content)
# Azure OpenAI variant
azure_client = UiPathAzureOpenAI(model_name="gpt-4o-2024-11-20")
```
```python
from uipath_llm_client.clients.anthropic import UiPathAnthropic
# Drop-in replacement for anthropic.Anthropic
client = UiPathAnthropic(model_name="anthropic.claude-sonnet-4-5-20250929-v1:0")
response = client.messages.create(
    model="anthropic.claude-sonnet-4-5-20250929-v1:0",
    max_tokens=1024,
    messages=[{"role": "user", "content": "Hello!"}],
)
print(response.content[0].text)
```
```python
from uipath_llm_client.clients.google import UiPathGoogle
# Drop-in replacement for google.genai.Client
client = UiPathGoogle(model_name="gemini-2.5-flash")
response = client.models.generate_content(
    model="gemini-2.5-flash",
    contents="Hello!",
)
print(response.text)
```
All native SDK wrappers are available in sync and async variants:
| Class | SDK | Description |
|-------|-----|-------------|
| `UiPathOpenAI` / `UiPathAsyncOpenAI` | `openai.OpenAI` | OpenAI models (BYO) |
| `UiPathAzureOpenAI` / `UiPathAsyncAzureOpenAI` | `openai.AzureOpenAI` | Azure OpenAI models |
| `UiPathAnthropic` / `UiPathAsyncAnthropic` | `anthropic.Anthropic` | Anthropic models |
| `UiPathAnthropicBedrock` / `UiPathAsyncAnthropicBedrock` | `anthropic.AnthropicBedrock` | Anthropic via AWS Bedrock |
| `UiPathAnthropicVertex` / `UiPathAsyncAnthropicVertex` | `anthropic.AnthropicVertex` | Anthropic via Vertex AI |
| `UiPathAnthropicFoundry` / `UiPathAsyncAnthropicFoundry` | `anthropic.AnthropicFoundry` | Anthropic via Azure Foundry |
| `UiPathGoogle` | `google.genai.Client` | Google Gemini models |
### Low-Level HTTP Client
For completely custom HTTP requests, use the low-level HTTPX client directly:
```python
from uipath_llm_client import UiPathHttpxClient
from uipath_llm_client.settings import UiPathAPIConfig, get_default_client_settings
settings = get_default_client_settings()
# Create a low-level HTTP client with UiPath auth and routing
client = UiPathHttpxClient(
    base_url=settings.build_base_url(model_name="gpt-4o-2024-11-20"),
    auth=settings.build_auth_pipeline(),
    headers=settings.build_auth_headers(model_name="gpt-4o-2024-11-20"),
    model_name="gpt-4o-2024-11-20",
    api_config=UiPathAPIConfig(
        api_type="completions",
        client_type="passthrough",
        vendor_type="openai",
        api_flavor="chat-completions",
    ),
    max_retries=2,
)
# Make a raw HTTP request
response = client.post(
    "/chat/completions",
    json={
        "model": "gpt-4o-2024-11-20",
        "messages": [{"role": "user", "content": "Hello!"}],
        "max_tokens": 100,
    },
)
response.raise_for_status()
print(response.json())
```
### Custom Configuration
Pass custom settings when you need more control:
```python
from uipath_langchain_client.clients.openai.chat_models import UiPathAzureChatOpenAI
from uipath_llm_client.settings import AgentHubSettings
from uipath_llm_client.utils.retry import RetryConfig
# Custom settings for AgentHub
settings = AgentHubSettings(environment="cloud") # or "staging", "alpha"
# With retry configuration
retry_config: RetryConfig = {
    "initial_delay": 2.0,
    "max_delay": 60.0,
    "exp_base": 2.0,
    "jitter": 1.0,
}
chat_model = UiPathAzureChatOpenAI(
    model="gpt-4o-2024-11-20",
    client_settings=settings,
    max_retries=3,
    retry_config=retry_config,
)
```
### Switching Between Backends
```python
from uipath_langchain_client.clients.openai.chat_models import UiPathAzureChatOpenAI
from uipath_llm_client.settings import get_default_client_settings
# Explicitly specify the backend
agenthub_settings = get_default_client_settings(backend="agenthub")
llmgw_settings = get_default_client_settings(backend="llmgateway")
chat = UiPathAzureChatOpenAI(model="gpt-4o-2024-11-20", client_settings=llmgw_settings)
# Or use environment variable (no code changes needed)
# export UIPATH_LLM_BACKEND="llmgateway"
```
### Using LLMGatewaySettings Directly
You can instantiate `LLMGatewaySettings` directly for full control over configuration:
**With Direct Client Classes:**
```python
from uipath_langchain_client.clients.openai.chat_models import UiPathAzureChatOpenAI
from uipath_langchain_client.clients.google.chat_models import UiPathChatGoogleGenerativeAI
from uipath_langchain_client.clients.openai.embeddings import UiPathAzureOpenAIEmbeddings
from uipath_llm_client.settings import LLMGatewaySettings
# Create LLMGatewaySettings with explicit configuration
settings = LLMGatewaySettings(
    base_url="https://your-llmgw-url.com",
    org_id="your-org-id",
    tenant_id="your-tenant-id",
    requesting_product="my-product",
    requesting_feature="my-feature",
    client_id="your-client-id",
    client_secret="your-client-secret",
    user_id="optional-user-id",  # Optional: for tracking
)
# Use with OpenAI/Azure chat model
openai_chat = UiPathAzureChatOpenAI(
    model="gpt-4o-2024-11-20",
    settings=settings,
)
response = openai_chat.invoke("Hello!")
print(response.content)
# Use with Google Gemini
gemini_chat = UiPathChatGoogleGenerativeAI(
    model="gemini-2.5-flash",
    settings=settings,
)
response = gemini_chat.invoke("Hello!")
print(response.content)
# Use with embeddings
embeddings = UiPathAzureOpenAIEmbeddings(
    model="text-embedding-3-large",
    settings=settings,
)
vectors = embeddings.embed_documents(["Hello world"])
```
**With Factory Methods:**
```python
from uipath_langchain_client import get_chat_model, get_embedding_model
from uipath_llm_client.settings import LLMGatewaySettings
# Create LLMGatewaySettings
settings = LLMGatewaySettings(
    base_url="https://your-llmgw-url.com",
    org_id="your-org-id",
    tenant_id="your-tenant-id",
    requesting_product="my-product",
    requesting_feature="my-feature",
    client_id="your-client-id",
    client_secret="your-client-secret",
)
# Factory auto-detects vendor from model name
chat_model = get_chat_model(
    model_name="gpt-4o-2024-11-20",
    client_settings=settings,
)
response = chat_model.invoke("What is the capital of France?")
print(response.content)
# Use normalized API for provider-agnostic interface
normalized_chat = get_chat_model(
    model_name="gemini-2.5-flash",
    client_settings=settings,
    client_type="normalized",
)
response = normalized_chat.invoke("Explain quantum computing.")
print(response.content)
# Embeddings with factory
embeddings = get_embedding_model(
    model_name="text-embedding-3-large",
    client_settings=settings,
)
vectors = embeddings.embed_documents(["Hello", "World"])
```
### Bring Your Own (BYO) Model Connections
If you have enrolled your own model deployment into UiPath's LLMGateway, you can use it by providing your BYO connection ID. This allows you to route requests through LLMGateway to your custom-enrolled models.
```python
from uipath_langchain_client.clients.openai.chat_models import UiPathAzureChatOpenAI
# Use your BYO connection ID from LLMGateway enrollment
chat = UiPathAzureChatOpenAI(
    model="your-custom-model-name",
    byo_connection_id="your-byo-connection-id",  # UUID from LLMGateway enrollment
)
response = chat.invoke("Hello from my custom model!")
print(response.content)
```
This works with any client class:
```python
from uipath_langchain_client.clients.google.chat_models import UiPathChatGoogleGenerativeAI
from uipath_langchain_client.clients.openai.embeddings import UiPathAzureOpenAIEmbeddings
# BYO chat model
byo_chat = UiPathChatGoogleGenerativeAI(
    model="my-custom-gemini",
    byo_connection_id="f1d29b49-0c7b-4c01-8bc4-fc1b7d918a87",
)
# BYO embeddings model
byo_embeddings = UiPathAzureOpenAIEmbeddings(
    model="my-custom-embeddings",
    byo_connection_id="a2e38c51-1d8a-5e02-9cd5-ge2c8e029b98",
)
```
## Error Handling
The client provides a hierarchy of typed exceptions for handling API errors. All exceptions extend `UiPathAPIError` (which extends `httpx.HTTPStatusError`):
```python
from uipath_llm_client import (
    UiPathAPIError,
    UiPathAuthenticationError,
    UiPathRateLimitError,
    UiPathNotFoundError,
)
from uipath_langchain_client.clients.openai.chat_models import UiPathAzureChatOpenAI
chat = UiPathAzureChatOpenAI(model="gpt-4o-2024-11-20")
try:
    response = chat.invoke("Hello!")
except UiPathRateLimitError as e:
    print(f"Rate limited. Retry after: {e.retry_after} seconds")
except UiPathAuthenticationError:
    print("Authentication failed — check your credentials")
except UiPathAPIError as e:
    print(f"API error {e.status_code}: {e.message}")
```
### Exception Reference
| Exception | HTTP Status | Description |
|-----------|-------------|-------------|
| `UiPathAPIError` | Any | Base exception for all UiPath API errors |
| `UiPathBadRequestError` | 400 | Invalid request parameters |
| `UiPathAuthenticationError` | 401 | Invalid or expired credentials |
| `UiPathPermissionDeniedError` | 403 | Insufficient permissions |
| `UiPathNotFoundError` | 404 | Model or resource not found |
| `UiPathConflictError` | 409 | Request conflicts with current state |
| `UiPathRequestTooLargeError` | 413 | Request payload too large |
| `UiPathUnprocessableEntityError` | 422 | Request is well-formed but semantically invalid |
| `UiPathRateLimitError` | 429 | Rate limit exceeded (has `retry_after` property) |
| `UiPathInternalServerError` | 500 | Server-side error |
| `UiPathServiceUnavailableError` | 503 | Service temporarily unavailable |
| `UiPathGatewayTimeoutError` | 504 | Gateway timeout |
| `UiPathTooManyRequestsError` | 529 | Anthropic overload (too many requests) |
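Because every subclass extends `UiPathAPIError`, catching the base class acts as a catch-all for statuses without a dedicated subclass. A minimal sketch of that style of status-to-exception dispatch (illustrative only, not the library's actual code):

```python
# Hypothetical sketch: a tiny status-code-to-exception registry.
class UiPathAPIError(Exception):
    status_code = None

class UiPathAuthenticationError(UiPathAPIError):
    status_code = 401

class UiPathRateLimitError(UiPathAPIError):
    status_code = 429

_STATUS_MAP = {
    cls.status_code: cls
    for cls in (UiPathAuthenticationError, UiPathRateLimitError)
}

def exception_for_status(status: int) -> type[UiPathAPIError]:
    """Fall back to the base class for statuses without a dedicated subclass."""
    return _STATUS_MAP.get(status, UiPathAPIError)
```

This is why ordering the `except` clauses from most specific to `UiPathAPIError` last, as in the example above, handles every failure mode.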
## UiPathAPIConfig Reference
The `UiPathAPIConfig` class controls how requests are routed through UiPath's infrastructure:
```python
from uipath_llm_client.settings import UiPathAPIConfig
config = UiPathAPIConfig(
    api_type="completions",
    client_type="passthrough",
    vendor_type="openai",
    api_flavor="chat-completions",
    api_version="2025-03-01-preview",
)
```
| Field | Type | Default | Description |
|-------|------|---------|-------------|
| `api_type` | `"completions"` \| `"embeddings"` \| `None` | `None` | Type of API call |
| `client_type` | `"passthrough"` \| `"normalized"` \| `None` | `None` | `"passthrough"` uses vendor-native APIs; `"normalized"` uses UiPath's unified API |
| `vendor_type` | `str \| None` | `None` | LLM vendor identifier: `"openai"`, `"vertexai"`, `"awsbedrock"`, `"anthropic"`, `"azure"` |
| `api_flavor` | `str \| None` | `None` | Vendor-specific API flavor (e.g., `"chat-completions"`, `"responses"`, `"generate-content"`, `"converse"`, `"invoke"`, `"anthropic-claude"`) |
| `api_version` | `str \| None` | `None` | Vendor-specific API version (e.g., `"2025-03-01-preview"`, `"v1beta1"`) |
| `freeze_base_url` | `bool` | `False` | Prevents httpx from modifying the base URL (required for some vendor SDKs) |
## Advanced Configuration
### SSL Configuration
The client supports custom SSL/TLS configuration through environment variables:
| Environment Variable | Description |
|---------------------|-------------|
| `UIPATH_DISABLE_SSL_VERIFY` | Set to `"1"`, `"true"`, `"yes"`, or `"on"` to disable SSL verification (not recommended for production) |
| `SSL_CERT_FILE` | Path to a custom SSL certificate file |
| `REQUESTS_CA_BUNDLE` | Path to a custom CA bundle file |
| `SSL_CERT_DIR` | Path to a directory containing SSL certificate files |
By default, the client uses [truststore](https://pypi.org/project/truststore/) (if available) or falls back to [certifi](https://pypi.org/project/certifi/) for SSL certificate verification.
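For example, pointing the client at a corporate CA bundle might look like this (the certificate paths are illustrative):

```shell
# Trust a custom CA bundle for all requests (illustrative paths)
export SSL_CERT_FILE=/etc/ssl/certs/corp-root-ca.pem
export REQUESTS_CA_BUNDLE=/etc/ssl/certs/corp-bundle.pem

# Local debugging only: disables certificate verification entirely
export UIPATH_DISABLE_SSL_VERIFY=true
```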
### Logging
Enable request/response logging by passing a logger instance:
```python
import logging
from uipath_langchain_client.clients.openai.chat_models import UiPathAzureChatOpenAI
logging.basicConfig(level=logging.DEBUG)
logger = logging.getLogger("uipath_llm")
chat = UiPathAzureChatOpenAI(
    model="gpt-4o-2024-11-20",
    logger=logger,  # Enables request/response logging with timing
)
response = chat.invoke("Hello!")
```
The logger will record:
- Request start time and URL
- Response duration (in milliseconds)
- Error responses with status codes and body content
### Default Headers
All requests automatically include the following default headers:
| Header | Value | Description |
|--------|-------|-------------|
| `X-UiPath-LLMGateway-TimeoutSeconds` | `295` | Server-side timeout for LLM Gateway |
| `X-UiPath-LLMGateway-AllowFull4xxResponse` | `true` | Returns full error response bodies for 4xx errors |
### Authentication Auto-Refresh
Both AgentHub and LLMGateway authentication pipelines automatically handle token expiry:
- When a request receives a **401 Unauthorized** response, the auth pipeline refreshes the token and retries the request
- Token refresh is handled transparently — no user intervention required
- Auth instances use the **singleton pattern** to reuse tokens across multiple client instances
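The refresh-on-401 flow described above can be sketched in isolation (a simplified illustration with stand-in callables, not the library's actual auth pipeline):

```python
class RefreshOn401:
    """Sketch of transparent token refresh; fetch_token() and send() are stand-ins."""

    def __init__(self, fetch_token, send):
        self._fetch_token = fetch_token  # returns a fresh bearer token string
        self._send = send                # (headers) -> HTTP status code
        self._token = None

    def call(self) -> int:
        if self._token is None:
            self._token = self._fetch_token()
        status = self._send({"Authorization": f"Bearer {self._token}"})
        if status == 401:
            # Expired credentials: refresh once and replay the request
            self._token = self._fetch_token()
            status = self._send({"Authorization": f"Bearer {self._token}"})
        return status
```

The caller never sees the 401: a single retry with a fresh token happens inside the client, which is exactly why no user intervention is required.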
## Development
```bash
# Clone and install with dev dependencies
git clone https://github.com/UiPath/uipath-llm-client.git
cd uipath-llm-client
uv sync
# Run tests
uv run pytest
# Format and lint
uv run ruff format .
uv run ruff check .
uv run pyright
```
### Testing
Tests use [VCR.py](https://vcrpy.readthedocs.io/) to record and replay HTTP interactions. Cassettes (recorded responses) are stored in `tests/cassettes/` using Git LFS.
**Important:** Tests must pass locally before submitting a PR. The CI pipeline does not make any real API requests—it only runs tests using the pre-recorded cassettes.
**Prerequisites:**
- Install [Git LFS](https://git-lfs.com/): `brew install git-lfs` (macOS) or `apt install git-lfs` (Ubuntu)
- Initialize Git LFS: `git lfs install`
- Pull cassettes: `git lfs pull`
**Running tests locally:**
```bash
# Run all tests using cassettes (no API credentials required)
uv run pytest
# Run specific test files
uv run pytest tests/langchain/
uv run pytest tests/core/
```
**Updating cassettes:**
When adding new tests or modifying existing ones that require new API interactions:
1. Set up your environment with valid credentials (see [Configuration](#configuration))
2. Run the tests—VCR will record new interactions automatically
3. Commit the updated cassettes along with your code changes
**Note:** The CI pipeline validates that all tests pass using the committed cassettes. If your tests require new API calls, you must record and commit the corresponding cassettes for the pipeline to pass.
## Project Structure
```
uipath-llm-client/
├── src/uipath_llm_client/ # Core package
│ ├── httpx_client.py # UiPathHttpxClient / UiPathHttpxAsyncClient
│ ├── clients/ # Native SDK wrappers
│ │ ├── openai/ # UiPathOpenAI, UiPathAzureOpenAI, etc.
│ │ ├── anthropic/ # UiPathAnthropic, UiPathAnthropicBedrock, etc.
│ │ └── google/ # UiPathGoogle
│ ├── settings/ # Backend-specific settings & auth
│ │ ├── base.py # UiPathBaseSettings, UiPathAPIConfig
│ │ ├── agenthub/ # AgentHubSettings, AgentHubAuth
│ │ └── llmgateway/ # LLMGatewaySettings, LLMGatewayS2SAuth
│ └── utils/ # Exceptions, retry, logging, SSL
│ ├── exceptions.py # UiPathAPIError hierarchy (12 classes)
│ ├── retry.py # RetryConfig, RetryableHTTPTransport
│ ├── logging.py # LoggingConfig
│ └── ssl_config.py # SSL/TLS configuration
├── packages/
│ ├── uipath_langchain_client/ # LangChain integration
│ │ └── src/uipath_langchain_client/
│ │ ├── base_client.py # UiPathBaseLLMClient mixin
│ │ ├── factory.py # get_chat_model(), get_embedding_model()
│ │ └── clients/
│ │ ├── normalized/ # UiPathChat, UiPathEmbeddings
│ │ ├── openai/ # UiPathAzureChatOpenAI, UiPathChatOpenAI, etc.
│ │ ├── google/ # UiPathChatGoogleGenerativeAI, etc.
│ │ ├── anthropic/ # UiPathChatAnthropic
│ │ ├── vertexai/ # UiPathChatAnthropicVertex
│ │ ├── bedrock/ # UiPathChatBedrock, UiPathChatBedrockConverse
│ │ ├── fireworks/ # UiPathChatFireworks, UiPathFireworksEmbeddings
│ │ └── azure/ # UiPathAzureAIChatCompletionsModel
│ └── uipath_llamaindex_client/ # LlamaIndex integration (planned)
└── tests/ # Test suite with VCR cassettes
```
## License
This project is licensed under the MIT License. See the [LICENSE](LICENSE) file for details.
## Contact
For any questions or issues, please contact the maintainers at [UiPath GitHub Repository](https://github.com/UiPath/uipath-llm-client).
| text/markdown | null | Cosmin Maria <cosmin.maria@uipath.com>, Dragos Bobolea <dragos.bobolea@uipath.com> | null | null | null | null | [] | [] | null | null | >=3.11 | [] | [] | [] | [
"httpx>=0.28.1",
"pydantic-settings>=2.12.0",
"pydantic>=2.12.5",
"tenacity>=9.1.2",
"uipath>=2.5.17",
"anthropic>=0.76.0; extra == \"all\"",
"google-genai>=1.59.0; extra == \"all\"",
"openai>=2.15.0; extra == \"all\"",
"anthropic>=0.76.0; extra == \"anthropic\"",
"google-genai>=1.59.0; extra == \... | [] | [] | [] | [] | uv/0.9.27 {"installer":{"name":"uv","version":"0.9.27","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true} | 2026-02-19T00:23:25.079449 | uipath_llm_client-1.2.1.tar.gz | 396,130 | 5b/fd/be30c1fffa4b5e285e74ea6eab39f3b3c527a99b6a0d0b19577175267732/uipath_llm_client-1.2.1.tar.gz | source | sdist | null | false | b5edbaa1b01f32e31ba8fc2b4d7acfc3 | 30c95229bb0c7db7e5b3d9d8632430f0f77af83ea509e838a139fd2d072ed260 | 5bfdbe30c1fffa4b5e285e74ea6eab39f3b3c527a99b6a0d0b19577175267732 | null | [
"LICENSE"
] | 449 |
2.4 | workbench | 0.8.267 | Workbench: A Dashboard and Python API for creating and deploying AWS SageMaker Model Pipelines |
## Live Dashboard Demo
You can explore a live demo of the Workbench Dashboard at: [Workbench Dashboard Demo](https://workbench-dashboard.com)
## Recent News
**Chemprop Models!** All the rage for the Open ADMET Challenge.
ADMET Workbench now supports:
- Single Task Chemprop Models
- Multi Task Chemprop Models
- Chemprop Hybrid Models (MPNN + Descriptors)
- Foundation Chemprop Models (CheMeleon Pretrained)
Examples:
- [Deploying Chemprop Models](examples/models/chemprop.py)
- [Deploying Foundation Chemprop Models](examples/models/chemprop_foundation.py)
**References**
- [Open ADMET Challenge](https://huggingface.co/spaces/openadmet/OpenADMET-ExpansionRx-Challenge)
- **ChemProp:** Yang et al. "Analyzing Learned Molecular Representations for Property Prediction" *J. Chem. Inf. Model.* 2019 — [GitHub](https://github.com/chemprop/chemprop) | [Paper](https://pubs.acs.org/doi/10.1021/acs.jcim.9b00237)
- [CheMeleon Github](https://github.com/JacksonBurns/chemeleon)
### Chemprop Action Shots!
<table>
<tr>
<td>
<a href="https://github.com/user-attachments/assets/a36c6eff-c464-4c9a-9859-a45cd7e35145">
<img width="800" alt="theme_dark" src="https://github.com/user-attachments/assets/a36c6eff-c464-4c9a-9859-a45cd7e35145" />
</a>
</td>
</tr>
<tr>
<td>
<a href="https://github.com/user-attachments/assets/d65ec1da-e04e-44fe-8782-4da0fb50588a">
<img width="800" alt="theme_quartz" src="https://github.com/user-attachments/assets/d65ec1da-e04e-44fe-8782-4da0fb50588a" />
</a>
</td>
</tr>
</table>
# Welcome to ADMET Workbench
The ADMET Workbench framework makes AWS® both easier to use and more powerful. Workbench handles all the details around updating and managing a complex set of AWS services. With a simple-to-use Python API and a beautiful set of web interfaces, Workbench makes creating AWS ML pipelines a snap. It also dramatically improves usability and visibility across the entire spectrum of services (Glue Jobs, Athena, Feature Store, Models, and Endpoints), making it easy to build production-ready, AWS-powered machine learning pipelines.
<img align="right" width="480" alt="workbench_new_light" src="https://github.com/SuperCowPowers/workbench/assets/4806709/ed2ed1bd-e2d8-49a1-b350-b2e19e2b7832">
### Full AWS ML Overview
- Health Monitoring 🟢
- Dynamic Updates
- High Level Summary
### Drill-Down Views
- Incoming Data
- Glue Jobs
- DataSources
- FeatureSets
- Models
- Endpoints
## Private SaaS Architecture
*Secure your Data, Empower your ML Pipelines*
ADMET Workbench is architected as a **Private SaaS** (also called BYOC: Bring Your Own Cloud). This hybrid architecture is the ultimate solution for businesses that prioritize data control and security. Workbench deploys as an AWS Stack within your own cloud environment, ensuring compliance with stringent corporate and regulatory standards. It offers the flexibility to tailor solutions to your specific business needs through our comprehensive plugin support. By using Workbench, you maintain absolute control over your data while benefiting from the power, security, and scalability of AWS cloud services. [Workbench Private SaaS Architecture](https://docs.google.com/presentation/d/1f_1gmE4-UAeUDDsoNdzK_d_MxALFXIkxORZwbJBjPq4/edit?usp=sharing)
<img alt="private_saas_compare" src="https://github.com/user-attachments/assets/2f6d3724-e340-4a70-bb97-d05383917cfe">
### API Installation
- ```pip install workbench``` Installs Workbench
- ```workbench``` Runs the Workbench REPL/Initial Setup
For the full instructions for connecting your AWS Account see:
- Getting Started: [Initial Setup](https://supercowpowers.github.io/workbench/getting_started/)
- One time AWS Onboarding: [AWS Setup](https://supercowpowers.github.io/workbench/aws_setup/core_stack/)
### ADMET Workbench up on the AWS Marketplace
Powered by AWS® to accelerate your Machine Learning Pipelines development with our new [Dashboard for ML Pipelines](https://aws.amazon.com/marketplace/pp/prodview-5idedc7uptbqo). Getting started with Workbench is a snap and can be billed through AWS.
### ADMET Workbench Presentations
Even though ADMET Workbench makes AWS easier, it is still taming something very complex (the full set of AWS ML pipelines/services). Because Workbench has such depth and breadth of functionality, we've also provided higher-level conceptual documentation. See: [Workbench Presentations](https://supercowpowers.github.io/workbench/presentations/)
<img align="right" width="420" alt="workbench_api" style="padding-left: 10px;" src="https://github.com/SuperCowPowers/workbench/assets/4806709/bf0e8591-75d4-44c1-be05-4bfdee4b7186">
### ADMET Workbench Documentation
The ADMET Workbench documentation [Workbench Docs](https://supercowpowers.github.io/workbench/) covers the Python API in depth and contains code examples. The documentation is fully searchable and fairly comprehensive.
The code examples are provided in the Github repo `examples/` directory. For a full code listing of any example please visit our [Workbench Examples](https://github.com/SuperCowPowers/workbench/blob/main/examples)
## Questions?
The SuperCowPowers team is happy to answer any questions you may have about AWS and Workbench. Please contact us at [workbench@supercowpowers.com](mailto:workbench@supercowpowers.com) or chat us up on [Discord](https://discord.gg/WHAJuz8sw8)
### ADMET Workbench Beta Program
Using ADMET Workbench will minimize the time and manpower needed to incorporate AWS ML into your organization. If your company would like to be a Workbench Beta Tester, contact us at [workbench@supercowpowers.com](mailto:workbench@supercowpowers.com).
### Using ADMET Workbench with Additional Packages
```
pip install workbench # Installs Workbench with Core Dependencies
pip install 'workbench[ui]' # + Plotly/Dash
pip install 'workbench[dev]' # + Pytest/flake8/black
pip install 'workbench[all]' # + All the things :)
```
*Note: Shells may interpret square brackets as globs, so the quotes are needed.*
### Contributions
If you'd like to contribute to the ADMET Workbench project, you're more than welcome. All contributions will fall under the existing project [license](https://github.com/SuperCowPowers/workbench/blob/main/LICENSE). If you are interested in contributing or have questions please feel free to contact us at [workbench@supercowpowers.com](mailto:workbench@supercowpowers.com).
<img align="right" src="docs/images/scp.png" width="180">
® Amazon Web Services, AWS, the Powered by AWS logo, are trademarks of Amazon.com, Inc. or its affiliates
| text/markdown | null | SuperCowPowers LLC <support@supercowpowers.com> | null | null | MIT License
Copyright (c) 2021-2026 SuperCowPowers LLC
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
| SageMaker, Machine Learning, AWS, Python, Utilities | [
"Development Status :: 4 - Beta",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Topic :: Scientific/Engineering"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"boto3>=1.31.76",
"botocore>=1.31.76",
"redis>=5.0.1",
"numpy>=1.26.4",
"pandas<3.0,>=2.2.1",
"awswrangler>=3.4.0",
"sagemaker<3.0,>=2.143",
"cryptography>=44.0.2",
"ipython>=8.37.0",
"pyreadline3; sys_platform == \"win32\"",
"scikit-learn>=1.5.2",
"umap-learn>=0.5.8",
"xgboost>=3.0.3",
"j... | [] | [] | [] | [
"Homepage, https://github.com/SuperCowPowers/workbench"
] | twine/6.2.0 CPython/3.13.9 | 2026-02-19T00:21:30.757237 | workbench-0.8.267.tar.gz | 2,838,773 | c2/ef/365440957649b70ee8298992dcc59b46ef6653095855374b44e357527143/workbench-0.8.267.tar.gz | source | sdist | null | false | 78134abbff49f8b5b103e7495ce2ac09 | bed0b27aaf54edd7bcafa62ec33c4c5a58edfaefce700e771d924e8ab1851e73 | c2ef365440957649b70ee8298992dcc59b46ef6653095855374b44e357527143 | null | [
"LICENSE"
] | 315 |
2.4 | overcode | 0.2.0 | A supervisor for managing multiple Claude Code instances in tmux | # overcode
A TUI supervisor for managing multiple Claude Code agents in tmux.
Launch autonomous coding agents, monitor their progress in real-time, track costs and activity, and coordinate work across your projects—all from a single dashboard.
## Why overcode?
Running multiple Claude Code agents is powerful, but managing them gets chaotic fast. Overcode solves this by giving you:
- **Unified visibility** - See all agents at a glance: what they're working on, whether they need input, and how much they're costing you
- **Smart orchestration** - An optional supervisor daemon can approve prompts and keep agents moving without constant attention
- **Efficiency metrics** - Track "green time" (Claude actively working) vs idle time to understand where time goes
- **Session persistence** - Agents run in tmux, surviving terminal disconnects. Pick up where you left off
## Screenshots
**Split-screen with tmux sync** - Monitor agents in the top pane while viewing live agent output below. Press `p` to enable pane sync, then navigate with `j/k` to switch the bottom pane to the selected agent's window.

> **iTerm2 setup**: Use `Cmd+Shift+D` to split horizontally. Run `overcode monitor` in the top pane and `tmux attach -t agents` in the bottom pane.
**Preview mode** - Press `m` to toggle List+Preview mode. Shows collapsed agent list with detailed terminal output preview for the selected agent.

## Quick Start
Try it instantly with [uvx](https://docs.astral.sh/uv/):
```bash
uvx overcode monitor
```
This opens the dashboard. Press `n` to create your first agent—give it a name, point it at a project directory, and optionally provide an initial prompt. Create a few agents to see them work in parallel.
**Requirements:** Python 3.12+, tmux, [Claude Code CLI](https://docs.anthropic.com/en/docs/claude-code)
For permanent installation: `pip install overcode`
See the [Getting Started Guide](docs/getting-started.md) for a complete walkthrough.
## Features
### Real-time Dashboard
The TUI displays all agents with live status updates, showing:
- Current activity and AI-generated summaries
- Status indicators (running/waiting/stalled)
- Cost and token usage per agent
- Git repo and branch information
- Timeline showing status history
### Agent Management
- **Launch agents** with custom prompts and permission settings
- **Send instructions** directly from the dashboard
- **Standing orders** - persistent instructions that guide agent behavior
- **Sleep mode** - pause agents and exclude them from stats
- **Priority sorting** - organize agents by importance
### Supervisor Daemon
An optional Claude-powered orchestrator that:
- Monitors agents for prompts requiring approval
- Automatically handles routine confirmations
- Follows per-agent standing orders
- Tracks interventions and steering decisions
### Analytics & Export
- **Web dashboard** - mobile-friendly monitoring from any device
- **Historical analytics** - browse session history with charts
- **Parquet export** - analyze data in Jupyter notebooks
- **Presence tracking** - correlate activity with your availability (macOS)
## TUI Controls
| Key | Action |
|-----|--------|
| `j/k` or `↑/↓` | Navigate agents |
| `Enter` | Approve/send Enter to agent |
| `i` or `:` | Send instruction |
| `m` | Toggle list+preview mode |
| `t` | Toggle timeline |
| `z` | Toggle sleep mode |
| `x` | Kill agent (double-press) |
| `b` | Jump to next agent needing attention |
| `h` or `?` | Show all shortcuts |
| `q` | Quit |
See the [TUI Guide](docs/tui-guide.md) for all keyboard shortcuts.
## Documentation
- [Getting Started](docs/getting-started.md) - Installation and first steps
- [CLI Reference](docs/cli-reference.md) - All commands and options
- [TUI Guide](docs/tui-guide.md) - Keyboard shortcuts and display modes
- [Configuration](docs/configuration.md) - Config file and environment variables
- [Advanced Features](docs/advanced-features.md) - Sleep mode, handover, remote monitoring
## License
MIT
| text/markdown | Mike Bond | null | null | null | null | null | [
"Development Status :: 4 - Beta",
"Environment :: Console",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Operating System :: MacOS",
"Operating System :: POSIX :: Linux",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.12",
"Programming La... | [] | null | null | >=3.12 | [] | [] | [] | [
"textual>=0.40.0",
"rich>=13.0.0",
"typer>=0.9.0",
"pyyaml>=6.0",
"libtmux>=0.37.0",
"pytest>=7.0.0; extra == \"dev\"",
"pytest-asyncio>=0.21.0; extra == \"dev\"",
"pytest-cov>=4.0.0; extra == \"dev\"",
"pytest-timeout>=2.1.0; extra == \"dev\"",
"black>=23.0.0; extra == \"dev\"",
"mypy>=1.0.0; e... | [] | [] | [] | [
"Homepage, https://github.com/mkb23/overcode",
"Repository, https://github.com/mkb23/overcode",
"Issues, https://github.com/mkb23/overcode/issues"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-19T00:19:28.816316 | overcode-0.2.0.tar.gz | 327,792 | e8/fd/836cd81e374bf17dd3ad65c55be2dc8b05d1c795fe30d80b85689f205881/overcode-0.2.0.tar.gz | source | sdist | null | false | c805d61b0eca353e4304e8d3f22b2479 | abc3f52bb855a5eac8c9d6c0926f0f9f1063e0b0bc3a010f11dd6f2dad2562f5 | e8fd836cd81e374bf17dd3ad65c55be2dc8b05d1c795fe30d80b85689f205881 | null | [
"LICENSE"
] | 243 |
2.4 | muaddib | 1.2.0 | A secure, multi-user AI agent for IRC, Discord, and Slack | # 🐁 Muaddib - a secure, multi-user AI assistant
> [!WARNING]
> **This Python package is deprecated.** Muaddib has been rewritten in TypeScript and this Python version is no longer maintained.
> Please use the current TypeScript version at **[github.com/pasky/muaddib](https://github.com/pasky/muaddib)** (the `main` branch).
> This release (1.2.0) is a final farewell release with no new features.
<p align="center">
<a href="https://discord.gg/rGABHaDEww"><img src="https://img.shields.io/badge/Discord-Join-5865F2?style=for-the-badge&logo=discord&logoColor=white" alt="Discord"></a>
<a href="https://github.com/pasky/muaddib/releases"><img src="https://img.shields.io/github/v/release/pasky/muaddib?include_prereleases&style=for-the-badge" alt="GitHub release"></a>
<a href="https://deepwiki.com/pasky/muaddib"><img src="https://img.shields.io/badge/DeepWiki-muaddib-111111?style=for-the-badge" alt="DeepWiki"></a>
<a href="LICENSE"><img src="https://img.shields.io/badge/License-MIT-blue.svg?style=for-the-badge" alt="MIT License"></a>
</p>
**Muaddib** is an AI assistant that's been built from the ground up *not* as a private single-user assistant (such as the amazing Clawdbot / Moltbot), but as a resilient entity operating in an inherently untrusted public environment (public IRC / Discord / Slack servers).
What does it take to talk to many strangers?
1. It operates sandboxed, and with complete channel isolation.
2. It has been optimized for cost and token efficiency (using a variety of context-engineering and related techniques).
3. It operates in "lurk" mode by default (rather than replying to everything, Muaddib replies when highlighted, but can also interject proactively when it seems useful).
Other work-in-progress features are also going to be tailored to this scenario (e.g. per-user token usage tracking and limiting / billing, per-channel code secrets and persistent workspaces, ...).
Of course, this means a tradeoff. Muaddib is not designed to sift through your email and manage your personal calendar!
It is tailored for **public and team environments, where it's useful to have an AI agent as a "virtual teammate"** - both as an AI colleague in chat for public many-to-many collaboration, and allowing personal or per-channel contexts.
## Quick Demo
Muaddib maintains a refreshing, very un-assistanty tone of voice that **optimizes for short, curt responses** (sometimes sarcastic, always informative) with great information density.
And you may quickly find that Muaddib (in this case equipped with Opus 4.5) can [do things](https://x.com/xpasky/status/2009380722855890959?s=20) that the official Claude app does much worse (let alone other apps like ChatGPT or Gemini!).

[➜ Generated image](https://pbs.twimg.com/media/G-LAy5yXcAAhV4d?format=jpg&name=large)
_(By the way, the token usage has been optimized since!)_
Of course, as with any AI agent, the real magic is in chatting back and forth. (Multiple conversations with several people involved can go on simultaneously on a channel and Muaddib will keep track!)

[(➜ Generated image, in case you are curious)](https://pbs.twimg.com/media/G-LA8VGWAAED6sn?format=jpg&name=large)
_(Note that this particular task is on the edge of raw Opus 4.5 capability and all other harnesses and apps I tried failed it completely.)_
Discord is of course supported:

So is Slack - including threads:

## Features
- **AI Integrations**: Anthropic Claude (Opus 4.5 recommended), OpenAI, DeepSeek, any OpenRouter model (including Gemini models)
- **Agentic Capability**: Ability to visit websites, view images, perform deep research, execute Python/Bash code via Sprites, publish artifacts
- **Restartable and Persistent Memory**: All state is persisted; AI agent maintains a continuous chronicle of events and experiences to refer to
- **Command System**: Automatic model routing (to balance cost, speed and intelligence) plus extensible command-based interaction with prefixes for various modes
- **Proactive Interjecting**: Channel-based whitelist system for automatic participation in relevant conversations
- [BETA] **Long-running Projects**: A *quest* mode (opt-in) that enables Muaddib to work on longer-horizon, many-step tasks in public, using the channel for long-term context and external steering
Muaddib has been **battle-tested since July 2025** in a (slightly) hostile IRC environment, lurking in a variety of [libera.chat](https://libera.chat/) channels. However, bugs are possible (no warranty etc.) and LLM usage carries some inherent risks (e.g. a Sprites code-execution sandbox with your API keys preloaded *plus* access to the internet [*can* be fooled](https://simonwillison.net/2025/Jun/16/the-lethal-trifecta/) by a carefully crafted malicious website that the agent visits into uploading those API keys somewhere).
## Getting Started
### Configuration
All muaddib data lives in `$MUADDIB_HOME` (defaults to `~/.muaddib/`):
```
~/.muaddib/
├── config.json # Configuration
├── chat_history.db # Chat history database
├── chronicle.db # Chronicle database
└── logs/ # Per-message log files
```
Copy `config.json.example` to `~/.muaddib/config.json` (or `$MUADDIB_HOME/config.json`) and set your:
- API keys (you can get started with just a small subset)
- Paths for tools and artifacts (relative paths are resolved against `$MUADDIB_HOME`)
- Custom prompts for various modes
- Integration settings such as channel modes
**Tip:** Set `MUADDIB_HOME=.` to use the current directory (useful for development).
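The `$MUADDIB_HOME` resolution described above can be sketched in a few lines (an illustrative snippet, not muaddib's actual code):

```python
import os
from pathlib import Path

def muaddib_home() -> Path:
    """Resolve the data directory: $MUADDIB_HOME if set, else ~/.muaddib/."""
    return Path(os.environ.get("MUADDIB_HOME", "~/.muaddib")).expanduser()

def config_path() -> Path:
    # config.json lives directly inside the data directory
    return muaddib_home() / "config.json"
```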
### Installation
Recommended for Discord:
1. Follow [Discord setup instructions](docs/discord.md) to create a bot account and obtain a token. Set it in `~/.muaddib/config.json` Discord section.
2. Install dependencies: `uv sync --dev`
3. Run the service: `uv run muaddib`
Recommended for Slack:
1. Follow [Slack setup instructions](docs/slack.md) to create a Slack app, enable Socket Mode, and obtain tokens.
2. Set the Slack config block in `~/.muaddib/config.json`.
3. Install dependencies: `uv sync --dev`
4. Run the service: `uv run muaddib`
Recommended for an IRC bot: See [Docker instructions](docs/docker.md) for running a Muaddib service + irssi in tandem in a Docker compose setup.
Manual for IRC ("bring your own irssi"):
1. Ensure `irssi-varlink` is loaded in your irssi, and your varlink path is set up properly in `~/.muaddib/config.json` IRC section.
2. Install dependencies: `uv sync --dev`
3. Run the service: `uv run muaddib`
### Commands
- `mynick: message` - Automatic mode
- `mynick: !h` - Show help and info about other modes
## Development
```bash
# Install development dependencies
uv sync --dev
# Run tests
uv run pytest
# Run linting and formatting
uv run ruff check .
uv run ruff format .
# Type checking
uv run pyright
# Install pre-commit hooks
uv run pre-commit install
```
### CLI Testing Mode
You can test the bot's message handling including command parsing from the command line:
```bash
uv run muaddib --message "!h"
uv run muaddib --message "tell me a joke"
uv run muaddib --message "!d tell me a joke"
uv run muaddib --message "!a summarize https://python.org"
# Or with explicit config: uv run muaddib --message "!a summarize https://python.org" --config /path/to/config.json
```
This simulates full IRC message handling including command parsing and automatic mode classification, useful for testing your configuration and API keys without setting up the full IRC bot.
#### Chronicler
The Chronicler maintains persistent memory across conversations using a Chronicle (arcs → chapters → paragraphs), managed through an NLI-based subagent.
```bash
# Record information
uv run muaddib --chronicler "Record: Completed API migration" --arc "project-x"
# View current chapter
uv run muaddib --chronicler "Show me the current chapter" --arc "project-x"
```
### Classifier Analysis
Evaluate the performance of the automatic mode classifier on historical data:
```bash
# Analyze classifier performance on database history (uses $MUADDIB_HOME/chat_history.db by default)
uv run python analyze_classifier.py
# Analyze classifier performance on IRC log files
uv run python analyze_classifier.py --logs ~/.irssi/logs/freenode/*.log
# Combine both sources with explicit paths
uv run python analyze_classifier.py --db /path/to/chat_history.db --logs ~/.irssi/logs/ --config /path/to/config.json
```
Results are saved to `classifier_analysis.csv` with detailed metrics and misclassification analysis.
### Proactive Interjecting Analysis
Evaluate the performance of the proactive interjecting feature on historical data:
```bash
# Analyze proactive interjecting performance on database history
uv run python analyze_proactive.py --limit 20
# Analyze proactive interjecting on IRC log files with channel exclusions
uv run python analyze_proactive.py --logs ~/.irssi/logs/ --limit 50 --exclude-news
# Combine both sources with explicit paths
uv run python analyze_proactive.py --db /path/to/chat_history.db --logs ~/.irssi/logs/ --config /path/to/config.json
```
Results are saved to `proactive_analysis.csv` with detailed interjection decisions and reasoning.
| text/markdown | null | pasky <pasky@ucw.cz> | null | null | MIT | null | [] | [] | null | null | >=3.11 | [] | [] | [] | [
"aiohttp>=3.9.0",
"aiosqlite>=0.19.0",
"ddgs>=0.1.0",
"discord-py>=2.4.0",
"markdownify>=0.11.0",
"openai>=1.40.0",
"slack-bolt>=1.20.0",
"slack-sdk>=3.26.0",
"sprites-py>=0.0.1a1",
"pre-commit>=3.0.0; extra == \"dev\"",
"pyright>=1.1.0; extra == \"dev\"",
"pytest-asyncio>=0.21.0; extra == \"d... | [] | [] | [] | [
"Homepage, https://github.com/pasky/muaddib",
"Repository, https://github.com/pasky/muaddib",
"Issues, https://github.com/pasky/muaddib/issues"
] | uv/0.5.29 | 2026-02-19T00:17:47.009715 | muaddib-1.2.0.tar.gz | 51,974,130 | 47/11/7f0d5f826661962faada2dac96bb3bcdb51d4afb7546af49a13327d4880b/muaddib-1.2.0.tar.gz | source | sdist | null | false | a690fa2405af14e2d5f2fc1d7dc8b037 | 6c0a96834ea997d192e06de0716fab9dac71e572aefd3bf143bb3b6ca16908ae | 47117f0d5f826661962faada2dac96bb3bcdb51d4afb7546af49a13327d4880b | null | [
"LICENSE"
] | 263 |
2.4 | agentspend | 0.1.0 | Python SDK for AgentSpend — card & crypto paywalls for AI agents | # agentspend
Python SDK for AgentSpend — card & crypto paywalls for AI agents.
| text/markdown | null | null | null | null | null | null | [] | [] | null | null | >=3.10 | [] | [] | [] | [
"httpx>=0.25",
"django>=4.0; extra == \"django\"",
"fastapi>=0.100; extra == \"fastapi\"",
"uvicorn>=0.20; extra == \"fastapi\"",
"flask>=2.0; extra == \"flask\""
] | [] | [] | [] | [
"Homepage, https://agentspend.co",
"Repository, https://github.com/agentspend/agentspend"
] | twine/6.2.0 CPython/3.14.2 | 2026-02-19T00:15:35.035846 | agentspend-0.1.0.tar.gz | 7,219 | a5/3f/f8c981b4592ac734655be34f86653007fdac670a4cd4ae834d90de02bdc8/agentspend-0.1.0.tar.gz | source | sdist | null | false | ef1f984245e4fc127ccca758ab0682c2 | ea78d2ccd2f11525c42fea2778b3c31c7f04b8174bc910e896db933614437e54 | a53ff8c981b4592ac734655be34f86653007fdac670a4cd4ae834d90de02bdc8 | MIT | [] | 287 |
2.4 | not-grep | 1.0.1 | kinda like grep but not quite | ########
not-grep
########
.. image:: https://img.shields.io/pypi/v/not-grep.svg
:target: https://pypi.python.org/pypi/not-grep
:alt: Latest Version
.. image:: https://img.shields.io/pypi/pyversions/not-grep.svg
:target: https://pypi.python.org/pypi/not-grep
:alt: Supported Python Versions
.. image:: https://img.shields.io/badge/code_style-black-000000.svg
:target: https://github.com/ambv/black
:alt: Code style: black
.. image:: https://readthedocs.org/projects/not-grep/badge/
:target: https://not-grep.readthedocs.io
:alt: Documentation Status
``not-grep`` is kind of like grep, but different.
WAT?
====
If you have ever needed to inspect a file for particular patterns,
you probably used ``grep``.
.. code-block:: bash
grep FooClassName file.py
If you needed to do that for a lot of files, you might have combined it with ``find``.
.. code-block:: bash
find . -type f -name "*.py" -exec grep -n FooClassName {} /dev/null \;
This works great for one-off checks
but less great if you need to do those checks repeatedly,
if you need to do lots of such checks,
if you need to do those checks somewhere that you don't have access to ``grep``,
or if you need to do things that ``grep`` cannot do.
Not Grep?
=========
``not-grep`` is designed for static use, not ad-hoc use.
For example, as part of a continuous integration test suite.
This is why it gets its configuration from a config file, not the CLI.
Because of this, the ``not-grep`` CLI is very simple:
the only things you can specify are the config file and verbosity.
.. code-block:: bash
not-grep --config config.toml -vv
Inside the config file, things start to get interesting.
``not-grep`` is built around checker plugins.
Each plugin takes a map as input:
the file glob pattern for the files you want to check
and a value that tells the plugin what to do with that file.
The config file is a collection of TOML tables.
The table name identifies the plugin
and the table members are the input to that plugin.
.. code-block:: toml
# The "include" checker will error unless the specified value is present.
[include]
"src/**/*.py" = "__all__"
# The "exclude" checker will error if the specified value is present.
[exclude]
"src/**/*.py" = "FooClassName"
The output shows you, for each plugin,
whether each matched file met or failed the plugin requirements.
In lower verbosity levels, ``not-grep`` only shows failed checks.
.. code-block:: bash
$ not-grep --config config.toml -vv
================Running include checks================
-----------Checking src/**/*.py for pattern-----------
__all__
******************************************************
src/foo/__init__.py.............................. PASS
src/foo/bar.py................................... FAIL
================Running exclude checks================
-----------Checking src/**/*.py for pattern-----------
FooClassName
******************************************************
src/foo/__init__.py.............................. PASS
src/foo/bar.py................................... PASS
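Conceptually, the ``include`` and ``exclude`` checks above reduce to a substring
test over the files matched by each glob. A rough Python sketch of the idea
(illustrative only, not ``not-grep``'s actual implementation):

.. code-block:: python

    from pathlib import Path

    def check_include(glob: str, pattern: str, root: str = ".") -> dict:
        """PASS files matched by ``glob`` that contain ``pattern``, else FAIL."""
        return {
            str(p): "PASS" if pattern in p.read_text() else "FAIL"
            for p in Path(root).glob(glob)
        }

    def check_exclude(glob: str, pattern: str, root: str = ".") -> dict:
        """FAIL files matched by ``glob`` that contain ``pattern``, else PASS."""
        return {
            str(p): "FAIL" if pattern in p.read_text() else "PASS"
            for p in Path(root).glob(glob)
        }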
Awesome! Can I use it in GitHub Actions?
========================================
Yes. Yes you can.
.. code-block:: yaml
- uses: mattsb42/not-grep@master
with:
# If you don't set config-file the action uses ".github/not-grep.toml".
config-file: ./github/config/check-things.toml
# If you don't set debug, passing checks will be hidden.
debug: true
| null | Matt Bullock | m@ttsb42.com | Matt Bullock | null | Apache 2.0 | not-grep not_grep | [
"Development Status :: 5 - Production/Stable",
"Intended Audience :: Developers",
"Topic :: Utilities",
"Natural Language :: English",
"License :: OSI Approved :: Apache Software License",
"Programming Language :: Python",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.6",... | [] | https://github.com/mattsb42-meta/not-grep | null | null | [] | [] | [] | [
"click>=7.1.1",
"attrs>=19.3.0",
"toml>=0.10.0",
"setuptools<=81.0.0"
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.14.3 | 2026-02-19T00:15:14.306520 | not_grep-1.0.1-py2.py3-none-any.whl | 17,357 | 1f/88/962d8aa368a8d871f11b0e7a87a664e3361ef6aaa9e5b4d49de296d8d04c/not_grep-1.0.1-py2.py3-none-any.whl | py2.py3 | bdist_wheel | null | false | dc8c7db0e6f4c0bb1c2e077022ef0f47 | c133aef0011b7a9b7d85f132f0d166541279d2dbcff0f53b93107c1b162dc3e8 | 1f88962d8aa368a8d871f11b0e7a87a664e3361ef6aaa9e5b4d49de296d8d04c | null | [
"LICENSE"
] | 107 |
2.4 | enkasia | 0.0.2 | Enkasia | # Enkasia
enkasia
<pre>
pip install enkasia
</pre>
Then:
```Python
# Python
import enkasia
```
| text/markdown | null | Machina Ratiocinatrix <machina.ratio@github.io> | null | null | null | enkasia | [
"Programming Language :: Python :: 3",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"PyGithub>=2.6.0",
"requests>=2.32.3",
"click>=8.3.0"
] | [] | [] | [] | [
"Homepage, https://github.com/machina-ratiocinatrix/enkasia",
"Bug Tracker, https://github.com/machina-ratiocinatrix/enkasia/issues"
] | twine/6.2.0 CPython/3.9.25 | 2026-02-19T00:14:32.703818 | enkasia-0.0.2.tar.gz | 3,883 | 1a/76/1ee5d566aa62fc021eac0a48124b1532f0d1d4ad8860f3437a40b4e2775a/enkasia-0.0.2.tar.gz | source | sdist | null | false | 225760db191e2617c69169ebd90f6da5 | a176f95d8e2369625ba3502ea1cb152c80edd5f763393b6bf3265c213a238cdd | 1a761ee5d566aa62fc021eac0a48124b1532f0d1d4ad8860f3437a40b4e2775a | null | [
"LICENSE"
] | 259 |
2.4 | multicsv | 1.1.1 | A Python library for handling multi-CSV format. |
# MultiCSV
[](https://codecov.io/gh/cfe-lab/multicsv)
[](https://github.com/python/mypy)
[](https://spdx.org/licenses/)
[](https://github.com/cfe-lab/multicsv/pulls)
Python library `multicsv` is designed for handling multi-CSV format
files. It provides an interface for reading, writing, and manipulating
sections of a CSV file as individual text file objects.
### Key Features
- **Efficient Section Management:** Read and write multiple
independent sections within a single CSV file.
- **TextIO Interface:** Sections are treated as TextIO objects,
enabling familiar file operations.
- **Flexible Operations:** Supports reading, writing, iterating, and
deleting sections.
- **Context Management:** Ensures resource safety with `with`
statement compatibility.
- **Integrated Testing:** Includes comprehensive unit tests, covering
100% of the functionality.
## The Multi-CSV Format
The multi-CSV format is an extension of the traditional CSV
(Comma-Separated Values) format that supports dividing a single file
into multiple independent sections. Each section is demarcated by a
header enclosed in square brackets, e.g., `[section_name]`.
This format is commonly known for usage in Illumina-MiSeq sample sheet
files.
Conceptually, this file format provides the ability to store a whole
SQL database in a single, human readable file.
### Example
Here's a simplified example of a multi-CSV file:
```csv
[section1]
header1,header2,header3
value1,value2,value3
[section2]
headerA,headerB,headerC
valueA,valueB,valueC
```
In the example above, the file contains two sections: `section1` and
`section2`. Each section has its own headers and rows of data.
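The format is simple enough to split by hand. A minimal stdlib-only sketch of the section splitting (illustrative only; the library itself exposes each section as a full `TextIO` object):

```python
import csv
import io

def split_sections(text: str) -> dict:
    """Split a multi-CSV document into {section_name: list of csv rows}."""
    sections = {}
    current = None
    for line in text.splitlines():
        stripped = line.strip()
        if stripped.startswith("[") and stripped.endswith("]"):
            current = stripped[1:-1]          # a [section_name] header
            sections[current] = []
        elif current is not None and stripped:
            sections[current].extend(csv.reader(io.StringIO(line)))
    return sections
```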
## Usage
Here's a quick example of how to use the `multicsv` library:
```python
import csv
import multicsv
with multicsv.open('example.csv', mode='w+') as csv_file:
# Write the CSV content to the file
csv_file.section('section1').write("header1,header2,header3\nvalue1,value2,value3\n")
csv_file.section('section2').write("header4,header5,header6\nvalue4,value5,value6\n")
# Read a section using the csv module
csv_reader = csv.reader(csv_file['section1'])
assert list(csv_reader) == [['header1', 'header2', 'header3'],
['value1', 'value2', 'value3']]
```
There are only two methods exported in `multicsv`: `open` and `wrap`.
This is how the latter one is meant to be used:
```python
import io
import multicsv
# Initialize the MultiCSVFile with a base CSV string
csv_content = io.StringIO("""\
[section1]
a,b,c
1,2,3
[section2]
d,e,f
4,5,6
""")
csv_file = multicsv.wrap(csv_content)
# Accessing a section
section1 = csv_file["section1"]
print(section1.read()) # Outputs: "a,b,c\n1,2,3\n"
# Adding a new section
new_section = io.StringIO("g,h,i\n7,8,9\n")
csv_file["section3"] = new_section
csv_file.flush()
# Verify the new section is added
csv_content.seek(0)
print(csv_content.read())
# Outputs:
# [section1]
# a,b,c
# 1,2,3
# [section2]
# d,e,f
# 4,5,6
# [section3]
# g,h,i
# 7,8,9
```
Both exported methods return a `MultiCSVFile` object.
Objects of that class are `MutableMapping`s from names of sections (`: str`) to contents of sections (`: TextIO`).
So, for instance, this is how to print all sections in a multi-csv file:
```python
import multicsv
for section in multicsv.open("example.csv"):
print(section)
```
## Installation
Install the library using pip:
```bash
pip install multicsv
```
## Development
### Setting Up
Set up your environment for development as follows:
1. Clone the repository:
```bash
git clone https://github.com/cfe-lab/multicsv.git
```
2. Navigate to the project directory:
```bash
cd multicsv
```
3. Create a virtual environment:
```bash
python3 -m venv venv
source venv/bin/activate
```
4. Install dependencies:
```bash
pip install -e .[dev,test]
```
### Running Tests
Run the test suite to ensure everything is functioning correctly:
```bash
pytest
```
## Contributing
Contributions are welcome! Please follow these steps for contributions:
1. Fork the repository.
2. Create a new branch with a descriptive name.
3. Make your changes and ensure the test suite passes.
4. Open a pull request with a clear description of what you've done.
## License
This project is licensed under the GPL-3.0 License - see the
[LICENSE](COPYING) file for details.
| text/markdown | null | British Columbia Centre for Excellence in HIV/AIDS <vmysak@bccfe.ca> | null | null | null | null | [
"Development Status :: 5 - Production/Stable",
"Intended Audience :: Developers",
"License :: OSI Approved :: GNU General Public License v3 (GPLv3)",
"Natural Language :: English",
"Operating System :: OS Independent",
"Programming Language :: Python",
"Programming Language :: Python :: 3",
"Programmi... | [] | null | null | >=3.8 | [] | [] | [] | [
"bandit; extra == \"dev\"",
"mypy; extra == \"dev\"",
"ruff; extra == \"dev\"",
"pytest-cov; extra == \"test\"",
"pytest>=6.0; extra == \"test\""
] | [] | [] | [] | [] | uv/0.10.0 {"installer":{"name":"uv","version":"0.10.0","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":null} | 2026-02-19T00:14:17.903907 | multicsv-1.1.1.tar.gz | 23,479 | 7a/dd/160afab342de2eb13df042f0f904713f4a23cb80a3fde6a2e4846b962993/multicsv-1.1.1.tar.gz | source | sdist | null | false | c1f467674ead39515d428cef11c26391 | 60e9e768244ccf6d2b68daa0e66afa18a4ef80fb4e07dc0ce9ec6548a5265442 | 7add160afab342de2eb13df042f0f904713f4a23cb80a3fde6a2e4846b962993 | GPL-3.0 | [
"COPYING"
] | 302 |
2.4 | dotenvironment | 1.1.1 | Small library designed to simplify retrieving and organizing environment variables, type casts, and default values, centralizing the whole workflow in one place. | ## Environment variables
This library lets you declare environment variables, cast them to the required data type, and set default values quickly and in one centralized place, reading from a `.env` file.
Installation:
```bash
pip install dotenvironment
```
----
Usage:
```py
from dotenvironment import DotEnvironment
# Initialize an instance
env = DotEnvironment()
# Load DB_PORT from the .env file
DB_PORT = env.variable('DB_PORT', int)
```
In this example, an environment variable declared in the `.env` file as `DB_PORT` is loaded.
----
### Prefixes
A prefix can be used to avoid name collisions in large projects.
```py
# Highly recommended
env = DotEnvironment('ONNYMM_')
```
Then an environment variable declared as, for example,
`ONNYMM_DB_PORT` is looked up like this:
```py
# Loads ONNYMM_DB_PORT from the .env file
DB_PORT = env.variable('DB_PORT', int)
```
----
### Default values
To fall back to a default when no value is declared in the environment variables, pass a third positional argument:
```py
DB_PORT = env.variable('DB_PORT', int, 5432)
```
If no value is found, the provided default is used, in this case `5432`.
If no default is needed, the third argument can be omitted or stated explicitly. Using `...` indicates that the variable is required and has no default:
```py
# Both examples behave the same
DB_PORT = env.variable('DB_PORT', int)
DB_PORT = env.variable('DB_PORT', int, ...)
```
A function can also be provided as the default. If the default is *callable*, it is executed only when the environment variable is not defined:
```py
from datetime import date
DEFAULT_DATE = env.variable('DATE', date.fromisoformat, date.today)
```
This is useful when computing the default is complex or slow and unnecessary to run if the environment variable was actually set:
```py
def my_complex_computing_here() -> CustomObject:
    # some complex computing algorithms
MY_VALUE = env.variable('MY_VALUE', CustomObject, my_complex_computing_here)
```
Finally, if no cast to a specific type is needed, that argument can be omitted and the value is loaded as `str`:
```py
DB_PORT = env.variable('DB_PORT')  # <class 'str'>
```
----
### Casts
When loading a variable you declare its data type. It is important to get this right:
```py
DB_PORT = env.variable('DB_PORT', int)  # 5432
DB_PORT = env.variable('DB_PORT', str)  # "5432"
```
Functions can also be used to cast the value. The value read from the environment variables always arrives as `str`:
```py
# Values treated as True
truthy_values = {'1', 'true', 'True', 'TRUE'}
DEBUG_MODE = env.variable('DEBUG', lambda v: v in truthy_values)
```
A cast function can be typed without losing information:
```py
from dotenvironment import CastFunction
# Function declared outside the environment-variable lookup
cast_fn: CastFunction[bool] = lambda v: v in truthy_values
DEBUG_MODE = env.variable('DEBUG', cast_fn)
```
Methods or functions from the standard library or external libraries that take the incoming value
as a string also work:
```py
DATE = env.variable('DATE', date.fromisoformat)  # Required value
DATE = env.variable('DATE', date.fromisoformat, date.today())  # Default value
```
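The lookup-cast-default flow can be sketched in a few lines of plain Python (an illustrative approximation, not the library's actual implementation):

```python
import os
from typing import Any, Callable

def env_variable(name: str, cast: Callable[[str], Any] = str, default: Any = ...) -> Any:
    """Approximation of the flow: read the variable, cast it, or fall back."""
    raw = os.environ.get(name)
    if raw is not None:
        return cast(raw)            # the raw value is always a str
    if default is ...:
        raise KeyError(f"required environment variable {name!r} is not set")
    return default() if callable(default) else default  # callable defaults run lazily
```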
----
### Debug
To see which variables have been loaded, print the instance. Variables that were
not found and fell back to their defaults are shown with a `(default)` label:
```py
env = DotEnvironment('ONNYMM_')
ONNYMM_DB_USER = env.variable('DB_USER', str)
ONNYMM_DB_PORT = env.variable('DB_PORT', int, 5432)
print(env)
# DotEnvironment([
#     <ONNYMM_DB_USER[<class 'str'>]= 'root'>,
#     <ONNYMM_DB_PORT[<class 'int'>]= 5432 (default)>
# ])
```
Loaded variables can be accessed through the instance:
```py
print(env['DB_PORT'])  # The variable can be accessed without the prefix
print(env['ONNYMM_DB_PORT'])  # Or with it
```
Access is read-only. The instance does not allow modifying values.
You can also check whether a variable was loaded:
```py
'DB_PORT' in env
'ONNYMM_DB_PORT' in env
```
| text/markdown | null | Pável Hernández Reza <phr2807@gmail.com> | null | null | null | null | [] | [] | null | null | null | [] | [] | [] | [
"python-dotenv==1.2.1"
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.13.12 | 2026-02-19T00:09:39.676949 | dotenvironment-1.1.1.tar.gz | 7,279 | 5b/ff/c536358b89dfd21e64496f6d9b86c17107c83cbedc229f3130df42ab8c6b/dotenvironment-1.1.1.tar.gz | source | sdist | null | false | 2b65e4672d9e28888ed21960f89bc4fd | b24144905af0d53728c923a2475fb96a4e40c6e57a2190b68edea56b0bb0e871 | 5bffc536358b89dfd21e64496f6d9b86c17107c83cbedc229f3130df42ab8c6b | MIT | [
"LICENSE"
] | 275 |
2.4 | frontrun | 0.0.2 | A library for deterministic concurrency testing that helps you reliably reproduce and test race conditions | # Frontrun
A library for deterministic concurrency testing that helps you reliably reproduce and test race conditions.
```bash
pip install frontrun
```
## Overview
Frontrun provides tools for controlling thread interleaving at a fine-grained level, allowing you to:
- **Deterministically reproduce race conditions** - Force specific execution ordering to make race conditions happen reliably in tests
- **Test concurrent code exhaustively** - Explore different execution orders to find bugs
- **Verify synchronization correctness** - Ensure that proper locking prevents race conditions
Instead of relying on timing-based race detection (which is unreliable), Frontrun lets you control exactly when threads execute, making concurrency testing deterministic and reproducible.
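For a single interleaving, this kind of control can be emulated by hand with `threading.Event` gates; the stdlib-only sketch below (no frontrun involved) forces the classic lost-update ordering deterministically, which is exactly the sequencing frontrun automates:

```python
import threading

def demo_lost_update() -> int:
    balance = {"value": 100}
    t1_read = threading.Event()   # set once thread 1 has read the balance
    go_write = threading.Event()  # set once thread 1 may write

    def slow_transfer(amount: int) -> None:
        current = balance["value"]      # stale read, guaranteed by the gate below
        t1_read.set()
        go_write.wait()
        balance["value"] = current + amount

    t1 = threading.Thread(target=slow_transfer, args=(50,))
    t1.start()
    t1_read.wait()                      # thread 1 now holds a stale 100
    balance["value"] += 50              # main thread writes 150
    go_write.set()                      # thread 1 overwrites with 100 + 50
    t1.join()
    return balance["value"]

assert demo_lost_update() == 150        # one update lost; serial result would be 200
```

Hand-placed events work for one hard-coded ordering; frontrun's markers and schedules make the ordering declarative and reusable.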
## Quick Start: Bank Account Race Condition
Here's a pytest test that uses Frontrun to trigger a race condition:
```python
from frontrun.trace_markers import Schedule, Step, TraceExecutor
class BankAccount:
def __init__(self, balance=0):
self.balance = balance
def transfer(self, amount):
current = self.balance # frontrun: read_balance
new_balance = current + amount
self.balance = new_balance # frontrun: write_balance
def test_transfer_lost_update():
account = BankAccount(balance=100)
# Both threads read before either writes
schedule = Schedule([
Step("thread1", "read_balance"), # T1 reads 100
Step("thread2", "read_balance"), # T2 reads 100 (both see same value!)
Step("thread1", "write_balance"), # T1 writes 150
Step("thread2", "write_balance"), # T2 writes 150 (overwrites T1's update!)
])
executor = TraceExecutor(schedule)
executor.run("thread1", lambda: account.transfer(50))
executor.run("thread2", lambda: account.transfer(50))
executor.wait(timeout=5.0)
# One update was lost: balance is 150, not 200
assert account.balance == 150
```
## Case Studies
See [detailed case studies](docs/CASE_STUDIES.rst) of searching for concurrency bugs in ten libraries: TPool, threadpoolctl, cachetools, PyDispatcher, pydis, pybreaker, urllib3, SQLAlchemy, amqtt, and pykka. Run the test suites with: `PYTHONPATH=frontrun python frontrun/docs/tests/run_external_tests.py`
## Usage Approaches
Frontrun provides two different ways to control thread interleaving:
### 1. Trace Markers
Trace markers are special comments (`# frontrun: <marker-name>`) which mark particular synchronization points in multithreaded or async code. These are intended to make it easier to reproduce race conditions in test cases and inspect whether some race conditions are possible.
The execution ordering is controlled with a "schedule" object that says what order the threads / markers should run in.
Each thread runs with a [`sys.settrace`](https://docs.python.org/3/library/sys.html#sys.settrace) callback that pauses at markers and waits for a schedule to grant the next execution turn. This gives deterministic control over execution order without modifying code semantics — markers are just comments. A marker **gates** the code that follows it: the thread pauses at the marker and only executes the gated code after the scheduler grants it a turn. Name markers after the operation they gate (e.g. `read_value`, `write_balance`) rather than with temporal prefixes like `before_` or `after_`.
Markers can be placed inline or on a separate line before the operation:
```python
from frontrun.trace_markers import Schedule, Step, TraceExecutor
class Counter:
def __init__(self):
self.value = 0
def increment(self):
temp = self.value # frontrun: read_value
temp += 1
self.value = temp # frontrun: write_value
def test_counter_lost_update():
counter = Counter()
schedule = Schedule([
Step("thread1", "read_value"),
Step("thread2", "read_value"),
Step("thread1", "write_value"),
Step("thread2", "write_value"),
])
executor = TraceExecutor(schedule)
executor.run("thread1", counter.increment)
executor.run("thread2", counter.increment)
executor.wait(timeout=5.0)
assert counter.value == 1 # One increment lost
```
### 2. Bytecode Manipulation (Experimental)
> ⚠️ **Experimental:** Bytecode instrumentation is experimental and may change. It requires monkey-patching concurrency primitives and relies on `f_trace_opcodes` (Python 3.7+). Use with caution.
Automatically instrument functions using bytecode rewriting — no markers needed. Each thread fires a [`sys.settrace`](https://docs.python.org/3/library/sys.html#sys.settrace) callback at every bytecode instruction, pausing at each one to wait for its scheduler turn. This gives fine-grained control but requires monkey-patching standard threading primitives (`Lock`, `Semaphore`, `Event`, `Queue`, etc.) to prevent deadlocks.
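The per-opcode gating described above can be pictured with a bare `sys.settrace` tracer. This is a minimal sketch of the mechanism only (it counts opcode events instead of scheduling them) and is not frontrun's implementation:

```python
import sys

OPCODE_COUNT = 0

def _tracer(frame, event, arg):
    global OPCODE_COUNT
    # Request per-opcode "opcode" events for this frame (Python 3.7+).
    frame.f_trace_opcodes = True
    if event == "opcode":
        # frontrun would pause here until the scheduler grants this
        # thread its next turn; we just count the event.
        OPCODE_COUNT += 1
    return _tracer

def count_opcodes(fn):
    """Run fn under the tracer and report how many opcodes executed."""
    global OPCODE_COUNT
    OPCODE_COUNT = 0
    sys.settrace(_tracer)
    try:
        fn()
    finally:
        sys.settrace(None)
    return OPCODE_COUNT
```

Every bytecode instruction in the traced function produces one callback, which is what gives the bytecode approach its fine-grained (but slower) control compared to trace markers.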
`explore_interleavings()` does property-based exploration in the style of [Hypothesis](https://hypothesis.readthedocs.io/): it generates random opcode-level schedules and checks that an invariant holds under each one, returning any counterexample schedule.
```python
from frontrun.bytecode import explore_interleavings
class Counter:
def __init__(self, value=0):
self.value = value
def increment(self):
temp = self.value
self.value = temp + 1
def test_counter_no_race():
result = explore_interleavings(
setup=lambda: Counter(value=0),
threads=[
lambda c: c.increment(),
lambda c: c.increment(),
],
invariant=lambda c: c.value == 2,
max_attempts=200,
max_ops=200,
seed=42,
)
assert not result.property_holds, "Expected a race condition"
assert result.counterexample.value == 1
```
## Async Support
Both approaches have async variants.
### Async Trace Markers
```python
from frontrun.async_trace_markers import AsyncTraceExecutor
from frontrun.common import Schedule, Step
class AsyncCounter:
def __init__(self):
self.value = 0
async def get_value(self):
return self.value
async def set_value(self, new_value):
self.value = new_value
async def increment(self):
# frontrun: read_value
temp = await self.get_value()
# frontrun: write_value
await self.set_value(temp + 1)
def test_async_counter_lost_update():
counter = AsyncCounter()
schedule = Schedule([
Step("task1", "read_value"),
Step("task2", "read_value"),
Step("task1", "write_value"),
Step("task2", "write_value"),
])
executor = AsyncTraceExecutor(schedule)
executor.run({
"task1": counter.increment,
"task2": counter.increment,
})
assert counter.value == 1 # One increment lost
```
## Development
### Running Tests
```bash
make test
```
| text/markdown | null | Lucas Wiman <lucas.wiman@gmail.com> | null | null | null | concurrency, testing, race-conditions, threads, async | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Topic :: Software Development :: Libraries :: Python Modules",
"To... | [] | null | null | >=3.10 | [] | [] | [] | [
"pytest>=7.0; extra == \"dev\"",
"pytest-cov; extra == \"dev\"",
"hypothesis>=6.0; extra == \"dev\"",
"sphinx>=4.0; extra == \"dev\"",
"sphinx-rtd-theme>=1.0; extra == \"dev\"",
"ruff>=0.1.0; extra == \"dev\"",
"pyright>=1.1.0; extra == \"dev\""
] | [] | [] | [] | [
"Homepage, https://github.com/lucaswiman/frontrun",
"Repository, https://github.com/lucaswiman/frontrun.git",
"Documentation, https://lucaswiman.github.io/frontrun",
"Changelog, https://github.com/lucaswiman/frontrun/blob/main/CHANGELOG.rst",
"Issues, https://github.com/lucaswiman/frontrun/issues"
] | twine/6.2.0 CPython/3.12.12 | 2026-02-19T00:08:47.050778 | frontrun-0.0.2.tar.gz | 30,741 | 66/2f/db8946cddfcfd251da4ea385e2f62992703ebbffb03583cfb9f6a90898d2/frontrun-0.0.2.tar.gz | source | sdist | null | false | 9f4cbe255eb1806cc2f1181a1d69a382 | 1b205f4eaaabf7d1d8c9609696157a45a4dcb9c95ebf842d05fe91f4c9efe850 | 662fdb8946cddfcfd251da4ea385e2f62992703ebbffb03583cfb9f6a90898d2 | MPL-2.0 | [
"LICENSE"
] | 270 |
2.4 | gpuq | 1.5.5 | A multi-vendor GPU querying utility with minimal dependencies | # *gpuq* - multi-vendor *GPU* *q*uerying utility with minimal dependencies
This small library is a direct answer to the lack of a lightweight, cross-compatible utility to query available GPUs - regardless of what vendor, distro, or overall environment one might be using.
In particular, the implementation meets the following requirements:
- works with multiple downstream runtimes (currently supported: CUDA and HIP)
- will also work if you have multiple runtimes present at the same time (do you really, though?)
- no build- or install-time dependencies (including any python packages)
- any runtime dependencies are soft - unless the user explicitly asks for the status/presence of a particular downstream runtime, most methods will fail silently
- consequently, the package should install and run on pretty much any machine
- your laptop does not have a GPU? -> the package will report 0 GPUs available (duh), no exceptions, linker errors, etc.
- allows for easy mocking (for unit tests, etc.)
- fully typed (conforms to `mypy --strict` checking)
Compared to some existing alternatives, it has the following differences:
- `torch.cuda` - not lightweight; also requires different wheels for NVIDIA and HIP
- `gputil` - NVIDIA-specific; also broken dependencies (as of 2025)
- `gpuinfo` - NVIDIA-specific; broken `import gpuinfo`...
- `gpuinfonv` - NVIDIA-specific; requires pynvml
- `pyamdgpuinfo` - AMD-specific
- `igpu` - NVIDIA-specific; broken installation (as of 2025)
- and so on...
The primary functionality offered is:
- check how many GPUs are available
- query properties for each available device - will tell you some basic info about the provider (CUDA/HIP) and other info similar to `cudaGetDeviceProperties`
- the returned list is not comprehensive, though
- respects `*_VISIBLE_DEVICES` and provides mapping between local (visible) and global indices
- **NOTE: this temporarily modifies env variables and therefore is not thread-safe**
- if requested, lazily provides some runtime information about each GPU as well
- in particular, PIDs of processes using the GPU will be returned
- NOTE: this is currently done rather naively by parsing outputs of tools like `nvidia-smi` or `rocm-smi`
- allows checking for runtime errors that might have occurred while trying to load a runtime library
### How it works:
The implementation will attempt to dynamically lazy-load `libcudart.so` and `libamdhip64.so` at runtime.
For GPUs to be properly reported, the libraries have to be found by the dynamic linker at the moment any relevant function call is made for the first time.
(If a library fails to load, loading will be retried every time a function call is made).
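The soft-loading behaviour described above can be sketched with `ctypes`. This is an illustrative approximation (the helper names are ours, not gpuq's), assuming the standard `cudaGetDeviceCount` entry point in `libcudart.so`:

```python
import ctypes

def try_load(libname):
    """Return the loaded library, or None if the dynamic linker can't find it."""
    try:
        return ctypes.CDLL(libname)
    except OSError:
        return None  # soft failure: no exception escapes to the caller

def cuda_device_count():
    """Report 0 GPUs (rather than raising) when the CUDA runtime is unavailable."""
    lib = try_load("libcudart.so")
    if lib is None:
        return 0
    count = ctypes.c_int(0)
    # cudaGetDeviceCount(int*) returns a cudaError_t status; non-zero means error
    status = lib.cudaGetDeviceCount(ctypes.byref(count))
    return count.value if status == 0 else 0
```

Because the load is attempted lazily at call time, a machine without a GPU driver simply reports zero devices instead of failing at import.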
## Examples
Install with:
```bash
pip install gpuq
```
Return the number of available GPUs:
```python
import gpuq as G
print(G.count()) # this includes GPUs from all providers, disregarding *_VISIBLE_DEVICES
print(G.count(visible_only=True)) # only visible GPUs from all providers
print(G.count(provider=G.Provider.HIP, visible_only=True)) # only visible HIP devices
# etc.
```
Return a list of gpu properties:
```python
import gpuq as G
for gpu in G.query(visible_only=True, provider=G.Provider.ANY):
print(gpu.name)
# return all visible GPUs, raise an error if no CUDA devices are present
# (note: the `required` check is done against the global set, not the returned set - see the docs)
gpus = G.query(visible_only=True, provider=G.Provider.ANY, required=G.Provider.CUDA)
```
Provide mapping between local and global GPU indices:
```python
import gpuq as G
# assume a system with 8 GPUs and CUDA_VISIBLE_DEVICES=1,7
for gpu in G.query(): # by default return visible GPUs only
print(gpu.index, gpu.system_index)
# should print:
# 0 1
# 1 7
for gpu in G.query(visible_only=False):
print(gpu.index, gpu.system_index, gpu.is_visible)
# should print:
# None 0 False
# 0 1 True
# None 2 False
# None 3 False
# None 4 False
# None 5 False
# None 6 False
# 1 7 True
```
| text/markdown | Mako | support@mako.dev | null | null | null | null | [] | [] | https://github.com/makodevai/gpuq | https://github.com/makodevai/gpuq | >=3.10.0 | [] | [] | [] | [
"GitPython; extra == \"dev\"",
"mypy; extra == \"dev\"",
"black; extra == \"dev\"",
"pytest; extra == \"dev\""
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.11.14 | 2026-02-19T00:08:46.459191 | gpuq-1.5.5-cp314-cp314t-musllinux_1_2_aarch64.whl | 56,491 | 63/7e/8e7f5dc4ba4d568f9c8e52304d026b1f09c6db53d2fdd87b929ea32d728a/gpuq-1.5.5-cp314-cp314t-musllinux_1_2_aarch64.whl | cp314 | bdist_wheel | null | false | d3338fdf8c3978150d21e619e4565d24 | 5d956499f86a7cda791c1d582cab3cad3ea76906e8f8f4785894afcdefb7c71f | 637e8e7f5dc4ba4d568f9c8e52304d026b1f09c6db53d2fdd87b929ea32d728a | null | [
"LICENSE"
] | 1,352 |
2.4 | clawlens | 0.9.5 | ClawLens - Real-time observability dashboard for OpenClaw AI agents | # 🦞 ClawLens
[](https://pypi.org/project/clawlens/)
[](https://opensource.org/licenses/MIT)
[](https://github.com/0xChitlin/clawlens/stargazers)
**Full observability for your .claw agent.** Watch your agent think, track costs, debug crons, and browse memory — all in one dashboard.
One command. Zero config. Auto-detects everything.
```bash
pip install clawlens && clawlens
```
Opens at **http://localhost:8900** and you're done.

## What You Get
- **Flow** — Live animated diagram showing messages flowing through channels, brain, tools, and back
- **Overview** — Health checks, activity heatmap, session counts, model info
- **Usage** — Token and cost tracking with daily/weekly/monthly breakdowns
- **Sessions** — Active agent sessions with model, tokens, last activity
- **Crons** — Scheduled jobs with status, next run, duration
- **Logs** — Color-coded real-time log streaming
- **Memory** — Browse SOUL.md, MEMORY.md, AGENTS.md, daily notes
- **Transcripts** — Chat-bubble UI for reading session histories
## Screenshots
| Flow | Overview | Sub-Agent |
|------|----------|-----------|
|  |  |  |
| Summary | Crons | Memory |
|---------|-------|--------|
|  |  |  |
## Install
**pip (recommended):**
```bash
pip install clawlens
clawlens
```
**One-liner:**
```bash
curl -sSL https://raw.githubusercontent.com/0xChitlin/clawlens/main/install.sh | bash
```
**From source:**
```bash
git clone https://github.com/0xChitlin/clawlens.git
cd clawlens && pip install flask && python3 dashboard.py
```
## Configuration
Most people don't need any config. ClawLens auto-detects your workspace, logs, sessions, and crons.
If you do need to customize:
```bash
clawlens --port 9000 # Custom port (default: 8900)
clawlens --host 127.0.0.1 # Bind to localhost only
clawlens --workspace ~/mybot # Custom workspace path
clawlens --name "Alice" # Your name in Flow visualization
```
All options: `clawlens --help`
## Requirements
- Python 3.8+
- Flask (installed automatically via pip)
- OpenClaw running on the same machine
- Linux or macOS
## Cloud Deployment
See the **[Cloud Testing Guide](docs/CLOUD_TESTING.md)** for SSH tunnels, reverse proxy, and Docker.
## License
MIT
---
<p align="center">
<strong>🦞 Full observability for your .claw agent</strong><br>
<sub>Built by <a href="https://github.com/vivekchand">@vivekchand</a> · <a href="https://clawlens.com">clawlens.com</a> · Part of the <a href="https://github.com/openclaw/openclaw">OpenClaw</a> ecosystem</sub>
</p>
| text/markdown | ClawWallet | 0xChitlin@proton.me | null | null | MIT | clawlens openclaw moltbot dashboard observability ai agent monitoring opentelemetry | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.8",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Pyt... | [] | https://github.com/0xChitlin/clawlens | null | >=3.8 | [] | [] | [] | [
"flask>=2.0",
"opentelemetry-proto>=1.20.0; extra == \"otel\"",
"protobuf>=4.21.0; extra == \"otel\""
] | [] | [] | [] | [
"Homepage, https://clawlens.com",
"Bug Reports, https://github.com/0xChitlin/clawlens/issues",
"Source, https://github.com/0xChitlin/clawlens"
] | twine/6.2.0 CPython/3.14.3 | 2026-02-19T00:08:40.051937 | clawlens-0.9.5.tar.gz | 133,159 | e2/5e/095d96a9a670a809b67aa879edfc991ad166f4865e42fdb08cbefae77265/clawlens-0.9.5.tar.gz | source | sdist | null | false | efd33e1e7fa6a373981b9d0eec4cd8e3 | 677e84829a7dd0aa4f520d65d472116afd629becaa06370d0613fcd8c51f3d64 | e25e095d96a9a670a809b67aa879edfc991ad166f4865e42fdb08cbefae77265 | null | [
"LICENSE"
] | 280 |
2.4 | pygame-intro | 1.3.3 | An intro module for pygame Community Edition. | # pygame-intro




A minimal Python library to create intros for [`pygame community edition`](https://github.com/pygame-community/pygame-ce).
## Features
- Load and display custom image(s)
- Load and play custom sound
- Progress bar and skippable intro options
- Customizable: duration, fade-in/fade-out and scaling
- Async support for pygbag compatibility
- Set background (color or image/surface)
## Getting Started
Install:
```bash
pip install pygame_intro
```
Desktop Example:
```python
import pygame
import pygame_intro
pygame.init()
pygame_intro.init()
pygame.display.set_mode((1000,600))
# Optional: customize intro settings
pygame_intro.settings(
duration=2,
fade_in=0.25,
fade_out=1,
scale=0.7,
progress_bar=True,
skippable=True,
)
# Optional: add image(s)
pygame_intro.add_image("my_image.png", "my_image2.png", "my_image3.png")
# Optional: add sound
pygame_intro.add_sound("path/my_sound.mp3", volume=0.7)
# Optional: change background color/surface
pygame_intro.change_background((30, 30, 30))
# Start the intro
pygame_intro.start()
```
Pygbag Example:
```python
# /// script
# dependencies = [
# "pygame-intro",
# ]
# ///
import pygame
import pygame_intro
import asyncio
# Make sure to implement any changes needed for pygbag
async def main():
pygame.init()
pygame_intro.init()
pygame.display.set_mode((600,600))
pygame_intro.add_image("path/my_image.png")
pygame_intro.settings(duration=2, fade_in=0.25, fade_out=1)
await pygame_intro.web_start()
# Start the intro
asyncio.run(main())
```
## License
This project is licensed under the MIT License.
See the [`LICENSE`](LICENSE.txt) file for the full license text.
| text/markdown | null | AntonisPylos <antonis@pylos.dev> | null | null | null | pygame, intro, splash screen | [
"Development Status :: 5 - Production/Stable",
"Operating System :: Microsoft :: Windows",
"Operating System :: POSIX :: Linux",
"Operating System :: MacOS",
"Programming Language :: Python :: 3",
"Topic :: Software Development :: Libraries :: pygame"
] | [] | null | null | >=3.11 | [] | [] | [] | [
"pygame-ce>=2.5.0"
] | [] | [] | [] | [
"Repository, https://github.com/AntonisPylos/pygame-intro",
"Issues, https://github.com/AntonisPylos/pygame-intro/issues",
"Releases, https://github.com/AntonisPylos/pygame-intro/releases"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-19T00:08:03.974240 | pygame_intro-1.3.3.tar.gz | 6,850 | 10/d3/c1a7f8a6187c6f165544fd05041378d95bc9a361ef5c02a6dfb8d432ac07/pygame_intro-1.3.3.tar.gz | source | sdist | null | false | 22b20acd7b77a5de44a77e2ea1c7a4eb | 66b6cca1999fd3d095f61d54d8d4fa39f82d80480677e9ec8cd15308f6d5f5cd | 10d3c1a7f8a6187c6f165544fd05041378d95bc9a361ef5c02a6dfb8d432ac07 | MIT | [
"LICENSE.txt"
] | 272 |
2.3 | openbb-pydantic-ai | 0.1.8 | Pydantic AI adapter for OpenBB Workspace. Connect any pydantic-ai agent to OpenBB via SSE streaming, widget tools, and PDF context. | [](https://github.com/astral-sh/uv)
[](https://github.com/astral-sh/ty)
[](https://deepwiki.com/MagnusS0/openbb-pydantic-ai)
# OpenBB Pydantic AI Adapter
`openbb-pydantic-ai` lets any [Pydantic AI](https://ai.pydantic.dev/) agent
run behind OpenBB Workspace by translating `QueryRequest` payloads into a Pydantic
AI run, exposing Workspace widgets as deferred tools, and streaming native
OpenBB SSE events back to the UI.
- **Stateless by design**: each `QueryRequest` carries the full conversation history, widgets, context, and URLs so requests are processed independently.
- **First-class widget tools**: every widget becomes a deferred Pydantic AI tool; when the model calls one, the adapter emits `copilotFunctionCall` events and waits for the Workspace to return data before resuming.
- **Rich event stream**: reasoning steps, thinking traces, tables, charts, HTML artifacts, and citations are streamed as native OpenBB SSE payloads.
- **PDF context**: install the `[pdf]` extra and any PDF widget in the Workspace is automatically extracted and passed as context to the agent.
- **Output helpers included**: structured outputs (dicts/lists) are auto-detected and converted to tables or charts; chart parameters are normalized for consistent rendering.
See the [OpenBB Custom Agent SDK](https://github.com/OpenBB-finance/openbb-ai) and
[Pydantic AI UI adapter docs](https://ai.pydantic.dev/ui/overview/) for the underlying types.
## Installation
```bash
pip install openbb-pydantic-ai
# or with uv
uv add openbb-pydantic-ai
```
For PDF context support (requires [docling](https://github.com/docling-project/docling)):
```bash
uv add "openbb-pydantic-ai[pdf]"
# GPU variant (CUDA 12.8)
uv add "openbb-pydantic-ai[pdf-cu128]"
```
## Quick Start (FastAPI)
```python
from anyio import BrokenResourceError
from fastapi import FastAPI, Request
from fastapi.middleware.cors import CORSMiddleware
from fastapi.responses import JSONResponse
from pydantic_ai import Agent
from openbb_pydantic_ai import OpenBBAIAdapter, OpenBBDeps
agent = Agent(
"openrouter:minimax/minimax-m2.5",
instructions="Be concise and helpful. Only use widget tools for data lookups.",
deps_type=OpenBBDeps,
)
app = FastAPI()
AGENT_BASE_URL = "http://localhost:8003"
@app.get("/agents.json")
async def agents_json():
return JSONResponse(
content={
"<agent-id>": {
"name": "My Custom Agent",
"description": "This is my custom agent",
"image": f"{AGENT_BASE_URL}/my-custom-agent/logo.png",
"endpoints": {"query": f"{AGENT_BASE_URL}/query"},
"features": {
"streaming": True,
"widget-dashboard-select": True, # primary & secondary widgets
"widget-dashboard-search": True, # extra widgets
"mcp-tools": True,
},
}
}
)
@app.post("/query")
async def query(request: Request):
try:
return await OpenBBAIAdapter.dispatch_request(request, agent=agent)
except BrokenResourceError:
pass # client disconnected
app.add_middleware(
CORSMiddleware,
allow_origins=["https://pro.openbb.co"],
allow_credentials=True,
allow_methods=["*"],
allow_headers=["*"],
)
```
### How It Works
#### 1. Request Handling
- OpenBB Workspace POSTs a `QueryRequest` to `/query`
- `OpenBBAIAdapter` validates it, builds the Pydantic AI message stack, and injects workspace context and URLs as system prompts
#### 2. Widget Tool Conversion
- Widgets in the request become deferred Pydantic AI tools
- Each call emits a `copilotFunctionCall` event (via `get_widget_data`)
- The adapter pauses until Workspace responds with data, then resumes the run
#### 3. Event Streaming
| Pydantic AI event | OpenBB SSE event |
|---|---|
| Text chunk | `copilotMessageChunk` |
| Reasoning / thinking block | Collapsed under "Step-by-step reasoning" dropdown |
| Table / chart / HTML artifact | `copilotMessageArtifact` |
| Widget citations | `copilotCitationCollection` (batched at end of run) |
## Features
### Widget Toolsets
Widgets are grouped by priority (`primary`, `secondary`, `extra`) and exposed through dedicated toolsets. Tool names follow the `openbb_widget_<identifier>` convention with any redundant `openbb_` prefix trimmed (e.g. `openbb_widget_financial_statements`).
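The naming rule can be illustrated with a hypothetical helper (our sketch, not the adapter's actual code):

```python
def widget_tool_name(identifier: str) -> str:
    """Apply the openbb_widget_<identifier> convention described above,
    trimming a redundant "openbb_" prefix from the widget's own identifier."""
    prefix = "openbb_"
    if identifier.startswith(prefix):
        identifier = identifier[len(prefix):]
    return f"openbb_widget_{identifier}"
```

Both `financial_statements` and `openbb_financial_statements` therefore map to the same tool name, `openbb_widget_financial_statements`, avoiding a doubled prefix.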
Control access via the `agents.json` feature flags:
```json
"features": {
"widget-dashboard-select": true,
"widget-dashboard-search": true
}
```
### Visualization: Charts, Tables & HTML
Three built-in tools handle structured output. The model can call any of them directly; the adapter handles serialization and streaming.
#### `openbb_create_chart`
Creates chart artifacts inline in the response. Supported types: `line`, `bar`, `scatter`, `pie`, `donut`.
Insert `{{place_chart_here}}` in the model's text where the chart should appear — the adapter swaps the placeholder with the rendered artifact while streaming:
```
Here is the revenue breakdown: {{place_chart_here}}
```
Required axes:
- `line` / `bar` / `scatter`: `x_key` + `y_keys`
- `pie` / `donut`: `angle_key` + `callout_label_key`
Different field spellings (`y_keys`, `yKeys`, etc.) are accepted and normalized before emitting.
#### `openbb_create_table`
Creates a table artifact from structured data with explicit column ordering and metadata. Use this when you want predictable output over auto-detection.
#### `openbb_create_html`
Renders a self-contained HTML artifact, useful for custom layouts, formatted reports, or SVG-based plots when Markdown isn't enough.
> **Constraint**: limited to HTML + CSS + inline SVG. No JavaScript. This is an OpenBB Workspace restriction on non-Enterprise plans.
**Auto-detection**: dict/list outputs shaped like `{"type": "table", "data": [...]}` or a plain list of dicts are automatically converted to table artifacts without calling any tool explicitly.
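The shape check can be pictured roughly like this (our own sketch of the rule, not the adapter's exact logic):

```python
def looks_like_table(output) -> bool:
    """Return True for outputs shaped like the auto-detected table forms."""
    if isinstance(output, dict):
        # {"type": "table", "data": [...]} form
        return output.get("type") == "table" and isinstance(output.get("data"), list)
    if isinstance(output, list):
        # A plain, non-empty list of row dicts also counts
        return bool(output) and all(isinstance(row, dict) for row in output)
    return False
```

Anything that fails the check streams through unchanged, so free-form text and other structured outputs are unaffected.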
**Markdown tables** are also supported: stream tabular data as Markdown and Workspace renders it as an interactive table users can promote to a widget.
### MCP Tools
Tools listed in `QueryRequest.tools` are exposed as an external MCP toolset. The model sees the same tool names the Workspace UI presents. Deferred `execute_agent_tool` results replay on the next request just like widget results.
Enable in `agents.json`:
```json
"features": { "mcp-tools": true }
```
### PDF Context
Install the `[pdf]` extra, and any PDF widget on the active dashboard is automatically extracted and passed as context before the run starts; no code changes are needed.
```bash
uv add "openbb-pydantic-ai[pdf]"
```
Text is extracted and linked back to citation bounding boxes so the agent can cite specific pages (currently you get a citation to the page, and not a displayed bounding box).
> **Performance**: GPU extraction is significantly faster. CPU works, but expect slowdowns on documents over ~50 pages.
### Deferred Results & Citations
- Pending widget responses in the request are replayed before the run starts, keeping multi-turn workflows seamless.
- Every widget call records a citation via `openbb_ai.helpers.cite`, emitted as a `copilotCitationCollection` at the end of the run.
## Progressive Tool Discovery (Default)
Instead of dumping every tool schema into the context upfront, the adapter wraps toolsets with four meta-tools:
| Meta-tool | Purpose |
|---|---|
| `list_tools` | List available tools by group |
| `search_tools` | Keyword search across tool descriptions |
| `get_tool_schema` | Fetch the full schema for a specific tool |
| `call_tools` | Invoke a tool by name |
The model fetches schemas only when it needs them, keeping the initial context window small. Deferred flows (widget data, MCP) continue to emit `get_widget_data` and `execute_agent_tool` events as before.
To disable and expose all schemas upfront:
```python
adapter = OpenBBAIAdapter(
agent=agent,
run_input=run_input,
enable_progressive_tool_discovery=False,
)
```
## Adding Custom Toolsets
Pass custom or third-party toolsets to the adapter at request time rather than mounting them on `Agent`. They are merged into the progressive discovery wrapper automatically.
> **Important**: do **not** also pass these toolsets to `Agent(toolsets=[...])` when using the OpenBB adapter — they would appear as both direct and progressive tools.
Tag a toolset with `add_to_progressive(...)`:
```python
from pydantic_ai.toolsets import FunctionToolset
from pydantic_ai.tools import RunContext
from openbb_pydantic_ai import OpenBBDeps
from openbb_pydantic_ai.tool_discovery import add_to_progressive
custom_tools = FunctionToolset[OpenBBDeps](id="custom_agent_tools")
@custom_tools.tool
def earnings_note(ctx: RunContext[OpenBBDeps], symbol: str) -> str:
_ = ctx
return f"Custom note for {symbol}"
add_to_progressive(
custom_tools,
group="custom_agent_tools",
description="Custom user tools",
)
# Pass at request time
return await OpenBBAIAdapter.dispatch_request(request, agent=agent, toolsets=[custom_tools])
```
Or use the `@progressive(...)` decorator directly on the tool function:
```python
from openbb_pydantic_ai.tool_discovery import progressive
@progressive(toolset=custom_tools, group="custom_agent_tools", description="Custom user tools")
@custom_tools.tool
def earnings_note(ctx: RunContext[OpenBBDeps], symbol: str) -> str:
_ = ctx
return f"Custom note for {symbol}"
```
Untagged toolsets passed at request time are forwarded as standalone toolsets without being merged into the progressive wrapper.
## Advanced Usage
Instantiate the adapter manually for full control:
```python
from openbb_pydantic_ai import OpenBBAIAdapter
run_input = OpenBBAIAdapter.build_run_input(body_bytes)
adapter = OpenBBAIAdapter(agent=agent, run_input=run_input)
async for event in adapter.run_stream():
yield event # already encoded as OpenBB SSE payloads
```
`message_history`, `deferred_tool_results`, and `on_complete` callbacks are forwarded directly to `Agent.run_stream_events()`.
**Runtime deps & prompts**: `OpenBBDeps` bundles widgets (by priority group), context rows, relevant URLs, workspace state, timezone, and a `state` dict you can pass to toolsets or output validators. The adapter merges dashboard context and current widget parameter values into the runtime instructions automatically — append your own instructions without re-supplying that context.
## Local Development
```bash
uv sync --dev
uv run pytest
uv run pre-commit run --all-files # lint + format
```
| text/markdown | Magnus Samuelsen | null | null | null | MIT | null | [] | [] | null | null | >=3.10 | [] | [] | [] | [
"openbb-ai>=1.8.5",
"pydantic-ai-slim[ui]>=1.47.0",
"docling>=2.72.0; extra == \"pdf\"",
"httpx>=0.28.1; extra == \"pdf\"",
"torch; extra == \"pdf\"",
"torchvision; extra == \"pdf\"",
"docling>=2.72.0; extra == \"pdf-cu128\"",
"httpx>=0.28.1; extra == \"pdf-cu128\"",
"torch; extra == \"pdf-cu128\"",... | [] | [] | [] | [] | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true} | 2026-02-19T00:07:50.750103 | openbb_pydantic_ai-0.1.8-py3-none-any.whl | 75,765 | 9f/2c/44f3d32bc761f64d8d2d1f6a73c2912df65e688163c6a227c7ba666afbfb/openbb_pydantic_ai-0.1.8-py3-none-any.whl | py3 | bdist_wheel | null | false | 275519da814fdc71259d41312429df4b | a6056075cfe12a49f86ba7e2c6d88dc6434ba5040bba229a26388c7a9144e457 | 9f2c44f3d32bc761f64d8d2d1f6a73c2912df65e688163c6a227c7ba666afbfb | null | [] | 261 |
2.4 | starlink-pyast | 4.0.0 | A Python wrapper for the Starlink AST library | PyAST is a Python extension that provides an interface to the Starlink
AST library. It requires Python 3.11 or later. It can be obtained from
<http://pypi.python.org/pypi/starlink-pyast/>. To install, do:
$ pip install starlink-pyast
or, when building locally, either install directly:

$ pip install .

or build and install manually:

$ python -m build
$ python setup.py install --prefix=<installation directory>
To test it, do:
$ python src/starlink/ast/test/test.py
User docs are available at http://starlink.github.io/starlink-pyast/pyast.html
## History
### 4.0.0
* Update AST to version 9.3.0.
* New minimum version of Python 3.11.
* Support Python 3.14 and numpy 2.0.
* Add support for SplineMap mapping.
* Fixed handling of `options` parameters in constructors (they were always ignored previously).
* Many internal cleanups associated with no longer having to support legacy versions.
* Improved detection of YAML library in a Conda environment.
* Now support standard build tooling and metadata via `pyproject.toml`.
### 3.15.4
* Upgrade AST internals to version 9.2.5.
## Licence
This program is free software: you can redistribute it and/or
modify it under the terms of the GNU Lesser General Public
License as published by the Free Software Foundation, either
version 3 of the License, or (at your option) any later
version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU Lesser General Public License for more details.
You should have received a copy of the GNU Lesser General Public
License along with this program. If not, see
<http://www.gnu.org/licenses/>.
| text/markdown | null | "David S. Berry" <d.berry@eaobservatory.org>, Tim Jenness <tjenness@lsst.org> | null | null | null | null | [
"Intended Audience :: Science/Research",
"Programming Language :: C",
"Programming Language :: Python",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.13",
"Topic :: Scientific/Engineering ... | [] | null | null | >=3.11.0 | [] | [] | [] | [
"numpy",
"astropy; extra == \"atl\"",
"matplotlib; extra == \"grf\""
] | [] | [] | [] | [
"Homepage, https://www.starlink.ac.uk/ast",
"Source, https://github.com/Starlink/starlink-pyast"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-19T00:07:23.497905 | starlink_pyast-4.0.0.tar.gz | 17,569,844 | 40/01/9902b4ca6d49c4e08cd9ceef34ec738b55e28465b4fdc9d6792b4ff1d279/starlink_pyast-4.0.0.tar.gz | source | sdist | null | false | 80796a9d0fb10b7f4618f60d55d17451 | 9198a1d305fb2d82aab5e417367992f1160d4655c0103ff1a17b2de98b4214fd | 40019902b4ca6d49c4e08cd9ceef34ec738b55e28465b4fdc9d6792b4ff1d279 | LGPL-3.0-or-later | [] | 1,313 |
2.4 | oracle-ads | 2.14.7 | Oracle Accelerated Data Science SDK | # Oracle Accelerated Data Science (ADS)
[](https://pypi.org/project/oracle-ads/) [](https://pypi.org/project/oracle-ads/) [](https://github.com/ambv/black)
The [Oracle Accelerated Data Science (ADS) SDK](https://accelerated-data-science.readthedocs.io/en/latest/index.html) is maintained by the Oracle Cloud Infrastructure (OCI) [Data Science service](https://docs.oracle.com/en-us/iaas/data-science/using/data-science.htm) team. It speeds up common data science activities by providing tools that automate and simplify routine tasks. Additionally, it provides data scientists with a friendly, Pythonic interface to OCI services. Some of the more notable services are OCI Data Science, Model Catalog, Model Deployment, Jobs, ML Pipelines, Data Flow, Object Storage, Vault, Big Data Service, Data Catalog, and the Autonomous Database. ADS gives you an interface to manage the life cycle of machine learning models, from data acquisition to model evaluation, interpretation, and model deployment.
With ADS you can:
- Read datasets from Oracle Object Storage, Oracle RDBMS (ATP/ADW/On-prem), AWS S3 and other sources into `Pandas dataframes`.
- Tune models using hyperparameter optimization with the `ADSTuner` tool.
- Generate detailed evaluation reports of your model candidates with the `ADSEvaluator` module.
- Save machine learning models to the [OCI Data Science Model Catalog](https://docs.oracle.com/en-us/iaas/data-science/using/models-about.htm).
- Deploy models as HTTP endpoints with [Model Deployment](https://docs.oracle.com/en-us/iaas/data-science/using/model-dep-about.htm).
- Launch distributed ETL, data processing, and model training jobs in Spark with [OCI Data Flow](https://docs.oracle.com/en-us/iaas/data-flow/using/home.htm).
- Train machine learning models in OCI Data Science [Jobs](https://docs.oracle.com/en-us/iaas/data-science/using/jobs-about.htm).
- Define and run end-to-end machine learning orchestrations covering all the steps of the machine learning lifecycle as repeatable, continuous [ML Pipelines](https://accelerated-data-science.readthedocs.io/en/latest/user_guide/pipeline/overview.html#).
- Manage the life cycle of conda environments through the `ads conda` command line interface (CLI).
## Installation
You have various options when installing ADS.
### Installing the oracle-ads base package
```bash
python3 -m pip install oracle-ads
```
### Installing OCI AI Operators
To use the AI Forecast Operator, install the "forecast" dependencies using the following command:
```bash
python3 -m pip install 'oracle_ads[forecast]>=2.9.0'
```
### Installing extras libraries
To work with gradient boosting models, install the `boosted` module. This module includes XGBoost and LightGBM model classes.
```bash
python3 -m pip install 'oracle-ads[boosted]'
```
For big data use cases using Oracle Big Data Service (BDS), install the `bds` module. It includes the `ibis-framework[impala]`, `hdfs[kerberos]`, and `sqlalchemy` libraries.
```bash
python3 -m pip install 'oracle-ads[bds]'
```
To work with a broad set of data formats (for example, Excel or Avro), install the `data` module. It includes the `fastavro`, `openpyxl`, `pandavro`, `asteval`, `datefinder`, `htmllistparse`, and `sqlalchemy` libraries.
```bash
python3 -m pip install 'oracle-ads[data]'
```
To work with geospatial data, install the `geo` module. It includes `geopandas` and the libraries from the `viz` module.
```bash
python3 -m pip install 'oracle-ads[geo]'
```
Install the `notebook` module to use ADS within an OCI Data Science service [notebook session](https://docs.oracle.com/en-us/iaas/data-science/using/manage-notebook-sessions.htm). This module installs the `ipywidgets` and `ipython` libraries.
```bash
python3 -m pip install 'oracle-ads[notebook]'
```
To work with ONNX-compatible runtimes and libraries designed to maximize performance and model portability, install the `onnx` module. It includes the `onnx`, `onnxruntime`, `onnxmltools`, `skl2onnx`, `xgboost`, and `lightgbm` libraries, plus the libraries from the `viz` module.
```bash
python3 -m pip install 'oracle-ads[onnx]'
```
For infrastructure tasks, install the `opctl` module. It includes the `oci-cli`, `docker`, `conda-pack`, `nbconvert`, `nbformat`, and `inflection` libraries.
```bash
python3 -m pip install 'oracle-ads[opctl]'
```
For hyperparameter optimization tasks, install the `optuna` module. It includes `optuna` and the libraries from the `viz` module.
```bash
python3 -m pip install 'oracle-ads[optuna]'
```
Install the `tensorflow` module to include `tensorflow` and libraries from the `viz` module.
```bash
python3 -m pip install 'oracle-ads[tensorflow]'
```
For text-related tasks, install the `text` module. It includes the `wordcloud` and `spacy` libraries.
```bash
python3 -m pip install 'oracle-ads[text]'
```
Install the `torch` module to include PyTorch and the libraries from the `viz` module.
```bash
python3 -m pip install 'oracle-ads[torch]'
```
Install the `viz` module to include libraries for visualization tasks. Some of the key packages are `bokeh`, `folium`, `seaborn` and related packages.
```bash
python3 -m pip install 'oracle-ads[viz]'
```
See the `[project.optional-dependencies]` section of `pyproject.toml` for the full list of modules and their extra libraries.
**Note**
Multiple extra dependencies can be installed together. For example:
```bash
python3 -m pip install 'oracle-ads[notebook,viz,text]'
```
## Documentation
- [Oracle Accelerated Data Science SDK (ADS) Documentation](https://accelerated-data-science.readthedocs.io/en/latest/index.html)
- [OCI Data Science and AI services Examples](https://github.com/oracle/oci-data-science-ai-samples)
- [Oracle AI & Data Science Blog](https://blogs.oracle.com/ai-and-datascience/)
- [OCI Documentation](https://docs.oracle.com/en-us/iaas/data-science/using/data-science.htm)
## Examples
### Load data from Object Storage
```python
import ads
from ads.common.auth import default_signer
import oci
import pandas as pd
ads.set_auth(auth="api_key", oci_config_location=oci.config.DEFAULT_LOCATION, profile="DEFAULT")
bucket_name = "<bucket_name>"  # replace the placeholders with your values
key = "<key>"
namespace = "<namespace>"
df = pd.read_csv(f"oci://{bucket_name}@{namespace}/{key}", storage_options=default_signer())
```
### Load data from ADB
This example uses bind variables, which are safe against SQL injection.
```python
import ads
import pandas as pd
connection_parameters = {
"user_name": "<user_name>",
"password": "<password>",
"service_name": "<tns_name>",
"wallet_location": "<file_path>",
}
df = pd.DataFrame.ads.read_sql(
"""
SELECT *
FROM SH.SALES
WHERE ROWNUM <= :max_rows
""",
    bind_variables={"max_rows": 100},
connection_parameters=connection_parameters,
)
```
## Contributing
This project welcomes contributions from the community. Before submitting a pull request, please [review our contribution guide](./CONTRIBUTING.md).
Find getting-started instructions for developers in [README-development.md](https://github.com/oracle/accelerated-data-science/blob/main/README-development.md).
## Security
Consult the security guide [SECURITY.md](https://github.com/oracle/accelerated-data-science/blob/main/SECURITY.md) for our responsible security vulnerability disclosure process.
## License
Copyright (c) 2020, 2024 Oracle and/or its affiliates. Licensed under the [Universal Permissive License v1.0](https://oss.oracle.com/licenses/upl/)
| text/markdown | Oracle Data Science | null | null | null | null | Oracle Cloud Infrastructure, OCI, Machine Learning, ML, Artificial Intelligence, AI, Data Science, Cloud, Oracle, GenAI, Generative AI, Forecast, Anomaly, Document Understanding, Anomaly Detection | [
"Development Status :: 5 - Production/Stable",
"Intended Audience :: Developers",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12"
] | [] | null | null | >=3.8 | [] | [] | [] | [
"PyYAML>=6.0.1",
"asteval>=0.9.25",
"cerberus>=1.3.4",
"cloudpickle>=1.6.0",
"fsspec>=0.8.7",
"gitpython>=3.1.2",
"jinja2>=2.11.2",
"matplotlib>=3.1.3",
"numpy>=1.19.2",
"oci>=2.148.0",
"ocifs>=1.1.3",
"pandas<3.0.0,>=2.2.0",
"psutil>=5.7.2",
"python_jsonschema_objects>=0.3.13",
"request... | [] | [] | [] | [
"Documentation, https://accelerated-data-science.readthedocs.io/en/latest/index.html",
"Github, https://github.com/oracle/accelerated-data-science"
] | twine/6.2.0 CPython/3.14.3 | 2026-02-19T00:06:29.990487 | oracle_ads-2.14.7.tar.gz | 22,741,539 | 20/3c/c1bcfd6964f322de12615eda0dc11e8079626957a33650c0c98a09b286bc/oracle_ads-2.14.7.tar.gz | source | sdist | null | false | dc9602796c05e0feae7284412fd64c1a | e80b548565bad411b47547a0cfe5a37bfae2e7e4f2357f56f858fc3f4dd3d8d2 | 203cc1bcfd6964f322de12615eda0dc11e8079626957a33650c0c98a09b286bc | UPL-1.0 | [
"LICENSE.txt"
] | 626 |
2.4 | sift-kg | 0.7.2 | Zero-config document-to-knowledge-graph pipeline | # sift-kg
**Turn any collection of documents into a knowledge graph.**
No code, no database, no infrastructure — just a CLI and your documents. Drop in PDFs, papers, articles, or records — get a browsable knowledge graph that shows how everything connects, in minutes. sift-kg extracts entities and relationships via LLM, deduplicates with your approval, and generates an interactive viewer you can explore in your browser. Concept maps for anything, at your fingertips.
**[Live demos →](https://juanceresa.github.io/sift-kg/)** graphs generated entirely by sift-kg
```bash
pip install sift-kg
sift init # create sift.yaml + .env.example
sift extract ./documents/ # extract entities & relations
sift build # build knowledge graph
sift resolve # find duplicate entities
sift review # approve/reject merges interactively
sift apply-merges # apply your decisions
sift narrate # generate narrative summary
sift view # interactive graph in your browser
sift export graphml # export to Gephi, yEd, Cytoscape, SQLite, etc.
```
## How It Works
```
Documents (PDF, DOCX, text, HTML, and 75+ formats)
↓
Text Extraction (Kreuzberg, local) — with optional OCR (Tesseract, EasyOCR, PaddleOCR, or Google Cloud Vision)
↓
Entity & Relation Extraction (LLM)
↓
Knowledge Graph (NetworkX, JSON)
↓
Entity Resolution (LLM proposes → you review)
↓
Narrative Generation (LLM)
↓
Interactive Viewer (browser) / Export (GraphML, GEXF, CSV, SQLite)
```
Every entity and relation links back to the source document and passage. You control what gets merged. The graph is yours.
## Features
- **Zero-config start** — point at a folder, get a knowledge graph. Or drop a `sift.yaml` in your project for persistent settings
- **Any LLM provider** — OpenAI, Anthropic, Mistral, Ollama (local/private), or any LiteLLM-compatible provider
- **Domain-configurable** — define custom entity types and relation types in YAML
- **Human-in-the-loop** — sift proposes entity merges, you approve or reject in an interactive terminal UI
- **CLI search** — `sift search "SBF"` finds entities by name or alias, with optional relation and description output
- **Interactive viewer** — explore your graph in-browser with community regions (colored zones showing graph structure), hover preview, focus mode (double-click to isolate neighborhoods), keyboard navigation (arrow keys to step through connections), trail breadcrumb (persistent path that tracks your exploration — trace back through every node you visited), search, type/community/relation toggles, source document filter, and degree filtering. Pre-filter with CLI flags: `--neighborhood`, `--top`, `--community`, `--source-doc`, `--min-confidence`
- **Export anywhere** — GraphML (yEd, Cytoscape), GEXF (Gephi), SQLite, CSV, or native JSON for advanced analysis
- **Narrative generation** — prose reports with relationship chains, timelines, and community-grouped entity profiles
- **Source provenance** — every extraction links to the document and passage it came from
- **Multilingual** — extracts from documents in any language, outputs a unified English knowledge graph. Proper names stay as-is, non-Latin scripts are romanized automatically
- **75+ document formats** — PDF, DOCX, XLSX, PPTX, HTML, EPUB, images, and more via [Kreuzberg](https://kreuzberg-dev.github.io/kreuzberg/) extraction engine
- **OCR for scanned PDFs** — local OCR via Tesseract (default), EasyOCR, or PaddleOCR (`--ocr` flag), with optional Google Cloud Vision fallback (`--ocr-backend gcv`)
- **Budget controls** — set `--max-cost` to cap LLM spending
- **Runs locally** — your documents stay on your machine
## Use Cases
- **Research & education** — map how theories, methods, and findings connect across a body of literature. Generate concept maps for courses, literature reviews, or self-study
- **Business intelligence** — drop in competitor whitepapers, market reports, or internal docs and see the landscape
- **Investigative work** — analyze FOIA releases, court filings, public records, and document leaks
- **Legal review** — extract and connect entities across document collections
- **Genealogy** — trace family relationships across vital records
## Bundled Domains
sift-kg ships with specialized domains you can use out of the box:
```bash
sift domains # list available domains
sift extract ./docs/ --domain-name osint # use a bundled domain
```
Set a domain in `sift.yaml` so you don't need the flag every time:
```yaml
domain: academic
```
Works with bundled names (`academic`, `osint`, `default`) or a path to a custom YAML file.
| Domain | Focus | Key Entity Types | Key Relation Types |
|--------|-------|------------------|--------------------|
| `default` | General document analysis | PERSON, ORGANIZATION, LOCATION, EVENT, DOCUMENT | ASSOCIATED_WITH, MEMBER_OF, LOCATED_IN |
| `osint` | Investigations & FOIA | SHELL_COMPANY, FINANCIAL_ACCOUNT | BENEFICIAL_OWNER_OF, TRANSACTED_WITH, SIGNATORY_OF |
| `academic` | Literature review & topic mapping | CONCEPT, THEORY, METHOD, SYSTEM, FINDING, PHENOMENON, RESEARCHER, PUBLICATION, FIELD, DATASET | SUPPORTS, CONTRADICTS, EXTENDS, IMPLEMENTS, EXPLAINS, PROPOSED_BY, USES_METHOD, APPLIED_TO, INVESTIGATES |
The **academic** domain maps the intellectual landscape of a research area — feed in papers and get a graph of how theories, methods, systems, findings, and concepts connect. Distinguishes abstract ideas (THEORY, METHOD) from concrete artifacts (SYSTEM — e.g. GPT-2, BERT, GLUE). Designed for literature reviews, topic mapping, and understanding where ideas agree, contradict, or build on each other.
The **osint** domain adds entity types for shell companies, financial accounts, and offshore jurisdictions, plus relation types for tracing beneficial ownership and financial flows.
Nothing gets merged without your approval — the LLM proposes, you verify. Every extraction links back to the source document and passage.
See [`examples/transformers/`](examples/transformers/) for 12 foundational AI papers mapped as a concept graph (425 entities, ~$0.72), [`examples/ftx/`](examples/ftx/) for the FTX collapse (431 entities from 9 articles), and [`examples/epstein/`](examples/epstein/) for the Giuffre v. Maxwell depositions (190 entities from a scanned PDF). [**Explore all three live**](https://juanceresa.github.io/sift-kg/) — no install, no API key.
## Civic Table
Looking for a hosted platform with forensic legal analysis and analyst verification?
[**Civic Table**](https://github.com/juanceresa/forensic_analysis_platform) is a forensic intelligence platform built on the sift-kg pipeline. It adds a 4-tier verification system where analysts and JDs validate AI-extracted facts before they're treated as evidence, LaTeX dossier generation for legal submissions, and a web interface for sharing results with clients and families. Built for property restitution, investigative journalism, and any context where documentary provenance matters.
sift-kg is the open-source CLI. Civic Table is the full platform — and where the output gets vetted by analysts and JDs before it carries evidentiary weight.
## Installation
Requires Python 3.11+.
```bash
pip install sift-kg
```
For OCR support (scanned PDFs, images):
```bash
# Local OCR — install Tesseract on your system
brew install tesseract # macOS
sudo apt install tesseract-ocr # Ubuntu/Debian
# Then use: sift extract ./docs/ --ocr
```
For Google Cloud Vision OCR as an alternative backend (optional):
```bash
pip install sift-kg[ocr]
# Then use: sift extract ./docs/ --ocr --ocr-backend gcv
```
For semantic clustering during entity resolution (optional, ~2GB for PyTorch):
```bash
pip install sift-kg[embeddings]
```
For development:
```bash
git clone https://github.com/juanceresa/sift-kg.git
cd sift-kg
pip install -e ".[dev]"
```
## Quick Start
### 1. Initialize and configure
```bash
sift init # creates sift.yaml + .env.example
cp .env.example .env # copy and add your API key
```
`sift init` generates a `sift.yaml` project config so you don't need flags on every command:
```yaml
# sift.yaml
domain: domain.yaml # or a bundled name like "osint"
model: openai/gpt-4o-mini
ocr: true # enable OCR for scanned PDFs
# extraction:
# backend: kreuzberg # kreuzberg (default, 75+ formats) | pdfplumber
# ocr_backend: tesseract # tesseract | easyocr | paddleocr | gcv
# ocr_language: eng
```
Set your API key in `.env`:
```
SIFT_OPENAI_API_KEY=sk-...
```
Or use Anthropic, Mistral, Ollama, or any LiteLLM provider:
```
SIFT_ANTHROPIC_API_KEY=sk-ant-...
SIFT_MISTRAL_API_KEY=...
```
Settings priority: CLI flags > env vars > `.env` > `sift.yaml` > defaults. You can override anything from `sift.yaml` with a flag on any command.
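As an illustration, the precedence chain behaves like a first-match lookup across layers. A minimal sketch in plain Python (not sift-kg's actual implementation):

```python
# Illustrative layered settings resolution: the first layer that sets a key wins.
def resolve_setting(key, cli_flags, env_vars, dotenv, yaml_config, defaults):
    """Return the value for `key` from the highest-priority layer that defines it."""
    for layer in (cli_flags, env_vars, dotenv, yaml_config, defaults):
        if key in layer and layer[key] is not None:
            return layer[key]
    raise KeyError(key)

# Example: sift.yaml sets the model, a CLI flag overrides OCR only.
defaults = {"model": "openai/gpt-4o-mini", "ocr": False}
yaml_config = {"model": "openai/gpt-4o"}
cli_flags = {"ocr": True}

print(resolve_setting("model", cli_flags, {}, {}, yaml_config, defaults))  # openai/gpt-4o
print(resolve_setting("ocr", cli_flags, {}, {}, yaml_config, defaults))    # True
```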
### 2. Extract entities and relations
```bash
sift extract ./my-documents/
sift extract ./my-documents/ --ocr # local OCR via Tesseract
sift extract ./my-documents/ --ocr --ocr-backend gcv # Google Cloud Vision OCR
sift extract ./my-documents/ --extractor pdfplumber # legacy pdfplumber backend
```
Reads 75+ document formats — PDFs, DOCX, XLSX, PPTX, HTML, EPUB, images, and more. Extracts entities and relations using your configured LLM. Results saved as JSON in `output/extractions/`.
The `--ocr` flag enables local OCR via Tesseract for scanned PDFs — no API keys or cloud services needed. You can switch OCR engines with `--ocr-backend`:
```bash
sift extract ./docs/ --ocr # Tesseract (default, local)
sift extract ./docs/ --ocr --ocr-backend easyocr # EasyOCR (local)
sift extract ./docs/ --ocr --ocr-backend paddleocr # PaddleOCR (local)
sift extract ./docs/ --ocr --ocr-backend gcv # Google Cloud Vision (requires credentials)
```
It autodetects which PDFs need OCR — text-rich PDFs use standard extraction, only near-empty pages fall back to OCR. Safe for mixed folders. Without `--ocr`, sift will warn if a PDF appears to be scanned.
You can also switch the extraction backend entirely with `--extractor pdfplumber` for the legacy pdfplumber backend (PDF/DOCX/TXT/HTML only).
### 3. Build the knowledge graph
```bash
sift build
```
Constructs a NetworkX graph from all extractions. Automatically deduplicates near-identical entity names (plurals, Unicode variants, case differences) before they become graph nodes. Fixes reversed edge directions when the LLM swaps source/target types vs. the domain schema. Flags low-confidence relations for review. Saves to `output/graph_data.json`.
### 4. Resolve duplicate entities
See [Entity Resolution Workflow](#entity-resolution-workflow) below for the full guide — especially important for genealogy, legal, and investigative use cases where accuracy matters.
### 5. Explore and export
**Interactive viewer** — explore your concept map in the browser:
```bash
sift view # full graph
sift view --neighborhood "Palantir Technologies" # 1-hop ego graph around an entity
sift view --neighborhood "Palantir" --depth 3 # 3-hop neighborhood
sift view --top 10 # top 10 hubs + their neighbors
sift view --community "Community 1" # focus on a specific community
sift view --source-doc palantir_nsa_surveillance # entities from one document
sift view --min-confidence 0.8 # hide low-confidence nodes/edges
```
Opens a force-directed graph in your browser. The overview shows **community regions** — colored convex hulls grouping related entities — so you can see graph structure at a glance without label clutter. Hover any node to preview its name and connections. Includes search, type/community/relation toggles, source document filter, degree filter, and a detail sidebar.
Pre-filter flags (`--top`, `--neighborhood`, `--source-doc`, `--min-confidence`) reduce the graph before rendering. `--community` pre-selects a community in the sidebar. `--neighborhood` accepts entity IDs (`person:alice`) or display names (case-insensitive).
**Focus mode:** Double-click any entity to isolate its neighborhood. Use arrow keys to step through connections one by one — each pair is shown in isolation with labeled edges. Press Enter/Right to shift focus to a neighbor, Backspace/Left to go back along your path, Escape to exit. Your exploration is tracked as a **trail breadcrumb** in the sidebar — a persistent path showing every node you've visited and the relations between them. Trail edges stay highlighted on the canvas so you can see your path through the graph. This is the intended way to explore dense graphs — zoom in on what matters, trace connections, read the evidence.
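Under the hood, a `--neighborhood` pre-filter amounts to a depth-limited breadth-first walk from the seed entity. A self-contained sketch of the idea (sift-kg itself builds on NetworkX; the adjacency map and entity IDs here are hypothetical):

```python
from collections import deque

def ego_nodes(adjacency, seed, depth):
    """Return all nodes within `depth` hops of `seed` in an undirected adjacency map."""
    seen = {seed}
    frontier = deque([(seed, 0)])
    while frontier:
        node, d = frontier.popleft()
        if d == depth:
            continue  # don't expand past the requested depth
        for neighbor in adjacency.get(node, ()):
            if neighbor not in seen:
                seen.add(neighbor)
                frontier.append((neighbor, d + 1))
    return seen

graph = {
    "person:alice": ["org:acme"],
    "org:acme": ["person:alice", "person:bob"],
    "person:bob": ["org:acme", "loc:nyc"],
    "loc:nyc": ["person:bob"],
}
print(sorted(ego_nodes(graph, "person:alice", 1)))  # ['org:acme', 'person:alice']
print(sorted(ego_nodes(graph, "person:alice", 2)))  # adds person:bob, not loc:nyc
```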
**CLI search** — query entities directly from the terminal:
```bash
sift search "Sam Bankman" # search by name
sift search "SBF" # search by alias
sift search "Caroline" -r # show relations
sift search "FTX" -d -t ORGANIZATION # descriptions + type filter
```
**Static exports** — for analysis tools where you want custom layout, filtering, or styling:
```bash
sift export graphml # → output/graph.graphml (Gephi, yEd, Cytoscape)
sift export gexf # → output/graph.gexf (Gephi native)
sift export sqlite # → output/graph.sqlite (SQL queries, DuckDB, Datasette)
sift export csv # → output/csv/entities.csv + relations.csv
sift export json # → output/graph.json
```
Use GraphML/GEXF when you want to control node sizing, edge weighting, custom color schemes, or apply graph algorithms (centrality, community detection) in dedicated tools. SQLite is useful for ad-hoc SQL queries, [Datasette](https://datasette.io/) publishing, or loading into DuckDB.
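For example, a SQLite export lets you answer degree questions in plain SQL. The table and column names below are assumptions for illustration, modeled on the CSV export's entities/relations split; check the actual schema of `graph.sqlite` with the `sqlite3` shell's `.schema` command:

```python
import sqlite3

# Hypothetical schema mirroring an entities/relations export; the real
# graph.sqlite columns may differ.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE entities (id TEXT PRIMARY KEY, name TEXT, type TEXT);
    CREATE TABLE relations (source TEXT, target TEXT, relation_type TEXT, confidence REAL);
""")
con.executemany("INSERT INTO entities VALUES (?, ?, ?)", [
    ("person:alice", "Alice Smith", "PERSON"),
    ("org:acme", "Acme Corp", "ORGANIZATION"),
    ("person:bob", "Bob Jones", "PERSON"),
])
con.executemany("INSERT INTO relations VALUES (?, ?, ?, ?)", [
    ("person:alice", "org:acme", "EMPLOYED_BY", 0.9),
    ("person:bob", "org:acme", "EMPLOYED_BY", 0.8),
])

# Degree of each entity: count of relations touching it.
rows = con.execute("""
    SELECT e.name, COUNT(*) AS degree
    FROM entities e
    JOIN relations r ON e.id IN (r.source, r.target)
    GROUP BY e.id
    ORDER BY degree DESC, e.name
""").fetchall()
print(rows)  # [('Acme Corp', 2), ('Alice Smith', 1), ('Bob Jones', 1)]
```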
### 6. Generate narrative
```bash
sift narrate
sift narrate --communities-only # regenerate community labels only (~$0.01)
```
Produces `output/narrative.md` — a prose report with an overview, key relationship chains between top entities, a timeline (when dates exist in the data), and entity profiles grouped by thematic community (discovered via Louvain community detection). Entity descriptions are written in active voice with specific actions, not role summaries.
## Domain Configuration
sift-kg ships with three bundled domains (see [Bundled Domains](#bundled-domains) above for details).
Use a bundled domain:
```bash
sift extract ./docs/ --domain-name osint
```
Or create your own `domain.yaml`:
```yaml
name: My Domain
entity_types:
PERSON:
description: People and individuals
extraction_hints:
- Look for full names with titles
COMPANY:
description: Business entities
DEPARTMENT:
description: Named departments within a company
canonical_names: # closed vocabulary — only these values allowed
- Engineering
- Sales
- Legal
- Marketing
canonical_fallback_type: ORGANIZATION # non-canonical names get retyped
relation_types:
EMPLOYED_BY:
description: Employment relationship
source_types: [PERSON]
target_types: [COMPANY]
OWNS:
description: Ownership relationship
symmetric: false
review_required: true
```
Entity types with `canonical_names` enforce a closed vocabulary. The allowed names are injected into the LLM extraction prompt so it outputs exact matches. As a safety net, any extracted name not in the list gets retyped to `canonical_fallback_type` during graph building (or kept as-is if no fallback is set). Useful for controlled taxonomies — departments, jurisdictions, predefined classifications.
```bash
sift extract ./docs/ --domain path/to/domain.yaml
```
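The canonical-name fallback can be sketched as a simple post-extraction check (an illustration only, not sift-kg's actual code):

```python
# Closed-vocabulary rule per entity type; mirrors the DEPARTMENT example above.
CANONICAL = {
    "DEPARTMENT": {
        "allowed": {"Engineering", "Sales", "Legal", "Marketing"},
        "fallback_type": "ORGANIZATION",
    }
}

def enforce_canonical(entity_type, name):
    """Keep canonical names; retype anything else to the fallback (or keep as-is)."""
    rule = CANONICAL.get(entity_type)
    if rule is None or name in rule["allowed"]:
        return entity_type, name
    fallback = rule.get("fallback_type")
    return (fallback or entity_type), name

print(enforce_canonical("DEPARTMENT", "Sales"))        # ('DEPARTMENT', 'Sales')
print(enforce_canonical("DEPARTMENT", "Growth Team"))  # ('ORGANIZATION', 'Growth Team')
```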
## Library API
Use sift-kg from Python — Jupyter notebooks, scripts, web apps:
```python
from sift_kg import load_domain, run_extract, run_build, run_narrate, run_resolve, run_export, run_view
from sift_kg import KnowledgeGraph
from pathlib import Path
domain = load_domain() # or load_domain(bundled_name="osint")
# Extract — supports OCR, backend selection, concurrency
results = run_extract(
Path("./docs"), "openai/gpt-4o-mini", domain, Path("./output"),
ocr=True, ocr_backend="tesseract", # enable OCR for scanned PDFs
extractor="kreuzberg", # or "pdfplumber"
concurrency=4, chunk_size=10000,
)
# Build graph
kg = run_build(Path("./output"), domain)
print(f"{kg.entity_count} entities, {kg.relation_count} relations")
# Resolve duplicates — with optional semantic clustering
merges = run_resolve(Path("./output"), "openai/gpt-4o-mini", domain=domain, use_embeddings=True)
# Export — json, graphml, gexf, csv, sqlite
run_export(Path("./output"), "sqlite")
# Narrate — or just regenerate community labels cheaply
run_narrate(Path("./output"), "openai/gpt-4o-mini", communities_only=True)
# View — with optional pre-filters
run_view(Path("./output")) # full graph
run_view(Path("./output"), neighborhood="person:alice", depth=2) # ego graph
run_view(Path("./output"), top_n=10) # top hubs
# Or run the full pipeline (extract → build → narrate)
from sift_kg import run_pipeline
run_pipeline(Path("./docs"), "openai/gpt-4o-mini", domain, Path("./output"))
```
## Project Structure
After running the pipeline, your output directory contains:
```
output/
├── extractions/ # Per-document extraction JSON
│ ├── document1.json
│ └── document2.json
├── graph_data.json # Knowledge graph (native format)
├── merge_proposals.yaml # Entity merge proposals (DRAFT/CONFIRMED/REJECTED)
├── relation_review.yaml # Flagged relations for review
├── narrative.md # Generated narrative summary
├── entity_descriptions.json # Entity descriptions (loaded by viewer)
├── communities.json # Community assignments (shared by narrate + viewer)
├── graph.html # Interactive graph visualization
├── graph.graphml # GraphML export (if exported)
├── graph.gexf # GEXF export (if exported)
├── graph.sqlite # SQLite export (if exported)
└── csv/ # CSV export (if exported)
├── entities.csv
└── relations.csv
```
## Entity Resolution Workflow
When you're building a knowledge graph from family records, legal filings, or any documents where accuracy matters, you want full control over which entities get merged. sift-kg never merges anything without your approval.
The workflow has three layers, each catching different kinds of duplicates:
### Layer 1: Automatic Pre-Dedup (during `sift build`)
Before entities become graph nodes, sift deterministically collapses names that are obviously the same. No LLM involved, no cost, no review needed:
- **Unicode normalization** — "José García" and "Jose Garcia" become one node
- **Title stripping** — "Detective Joe Recarey" and "Joe Recarey" merge (strips ~35 common prefixes: Dr., Mr., Judge, Senator, etc.)
- **Singularization** — "Companies" and "Company" merge
- **Fuzzy string matching** — [SemHash](https://github.com/MinishLab/semhash) at 0.95 threshold catches near-identical strings like "MacAulay" vs "Mac Aulay"
This happens automatically every time you run `sift build`. These are the trivial cases — spelling variants that would clutter your graph without adding information.
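The first two normalizations can be sketched in a few lines of standard-library Python (a simplified illustration, not sift-kg's actual code; real singularization and fuzzy matching use dedicated libraries such as `inflect` and SemHash):

```python
import unicodedata

# Abridged title list; sift-kg strips ~35 common prefixes.
TITLES = {"dr", "mr", "mrs", "ms", "judge", "senator", "detective"}

def normalize_key(name):
    """Deterministic dedup key: fold accents, case, punctuation, and leading titles."""
    decomposed = unicodedata.normalize("NFKD", name)
    # Drop combining marks left over from decomposing accented characters.
    ascii_name = "".join(c for c in decomposed if not unicodedata.combining(c))
    words = ascii_name.lower().replace(".", "").split()
    while words and words[0] in TITLES:
        words = words[1:]
    return " ".join(words)

print(normalize_key("José García"))            # jose garcia
print(normalize_key("Detective Joe Recarey"))  # joe recarey
```

Two names collapse into one node whenever their keys match.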
### Layer 2: LLM Proposes Merges (during `sift resolve`)
The LLM sees batches of entities (all types except DOCUMENT) and identifies ones that likely refer to the same real-world thing. It also detects cross-type duplicates (same name, different entity type) and proposes variant relationships (EXTENDS) when it finds parent/child patterns. Results go to `merge_proposals.yaml` (entity merges) and `relation_review.yaml` (variant relations), all starting as `DRAFT`:
```bash
sift resolve # uses domain from sift.yaml
sift resolve --domain osint # or specify explicitly
```
If you have a domain configured, the LLM uses that context to make better judgments about entity names specific to your field.
This generates proposals like:
```yaml
proposals:
- canonical_id: person:samuel_benjamin_bankman_fried
canonical_name: Samuel Benjamin Bankman-Fried
entity_type: PERSON
status: DRAFT # ← you decide
members:
- id: person:bankman_fried
name: Bankman-Fried
confidence: 0.99
reason: Same person referenced with full name vs. surname only.
- canonical_id: person:stephen_curry
canonical_name: Stephen Curry
entity_type: PERSON
status: DRAFT # ← you decide
members:
- id: person:steph_curry
name: Steph Curry
confidence: 0.99
reason: Same basketball player referenced with nickname 'Steph' and full name 'Stephen'.
```
**Nothing is merged yet.** The LLM is proposing, not deciding.
### Layer 3: You Review and Decide
You have two options for reviewing proposals:
**Option A: Interactive terminal review**
```bash
sift review
```
Walks through each `DRAFT` proposal one by one. For each, you see the canonical entity, the proposed merge members, the LLM's confidence and reasoning. You approve, reject, or skip.
High-confidence proposals (>0.85 by default) are auto-approved, and low-confidence relations (<=0.5 by default) are auto-rejected:
```bash
sift review # uses defaults: --auto-approve 0.85, --auto-reject 0.5
sift review --auto-approve 0.90 # raise the auto-approve threshold
sift review --auto-reject 0.3 # lower the auto-reject threshold
sift review --auto-approve 1.0 # disable auto-approve, review everything manually
```
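The threshold logic amounts to a three-way split of the DRAFT proposals. A sketch, assuming each proposal carries a `confidence` field as in the YAML shown earlier:

```python
def triage(proposals, auto_approve=0.85, auto_reject=0.5):
    """Split proposals into auto-approved, auto-rejected, and needs-review buckets."""
    approved, rejected, review = [], [], []
    for p in proposals:
        if p["confidence"] > auto_approve:
            approved.append(p)
        elif p["confidence"] <= auto_reject:
            rejected.append(p)
        else:
            review.append(p)
    return approved, rejected, review

proposals = [
    {"canonical_name": "Stephen Curry", "confidence": 0.99},
    {"canonical_name": "Winklevoss twins", "confidence": 0.40},
    {"canonical_name": "J. Smith / John Smith", "confidence": 0.70},
]
approved, rejected, review = triage(proposals)
print([p["canonical_name"] for p in approved])  # ['Stephen Curry']
print([p["canonical_name"] for p in review])    # ['J. Smith / John Smith']
```

With `auto_approve=1.0` nothing clears the strict `>` comparison, so every proposal lands in the review bucket, matching the "review everything manually" flag above.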
**Option B: Edit the YAML directly**
Open `output/merge_proposals.yaml` in any text editor. Change `status: DRAFT` to `CONFIRMED` or `REJECTED`:
```yaml
- canonical_id: person:stephen_curry
canonical_name: Stephen Curry
entity_type: PERSON
status: CONFIRMED # ← approve this merge
members:
- id: person:steph_curry
name: Steph Curry
confidence: 0.99
reason: Same basketball player...
- canonical_id: person:winklevoss_twins
canonical_name: Winklevoss twins
entity_type: PERSON
status: REJECTED # ← these are distinct people, don't merge
members:
- id: person:cameron_winklevoss
name: Cameron Winklevoss
confidence: 0.95
reason: ...
```
**For high-accuracy use cases** (genealogy, legal review), we recommend editing the YAML directly so you can study each proposal carefully. The file is designed to be human-readable.
### Layer 3b: Relation Review
During `sift build`, relations below the confidence threshold (default 0.7) or of types marked `review_required` in your domain config get flagged in `output/relation_review.yaml`:
```yaml
review_threshold: 0.7
relations:
- source_name: Alice Smith
target_name: Acme Corp
relation_type: WORKS_FOR
confidence: 0.45
evidence: "Alice mentioned she used to work near the Acme building."
status: DRAFT # ← you decide: CONFIRMED or REJECTED
flag_reason: Low confidence (0.45 < 0.7)
```
Same workflow: review with `sift review` or edit the YAML, then apply.
### Layer 4: Apply Your Decisions
Once you've reviewed everything:
```bash
sift apply-merges
```
This does three things:
1. **Confirmed entity merges** — member entities are absorbed into the canonical entity. All their relations are rewired. Source documents are combined. The member nodes are removed.
2. **Rejected relations** — removed from the graph entirely.
3. **DRAFT proposals** — left untouched. You can come back to them later.
The graph is saved back to `output/graph_data.json`. You can re-export, narrate, or visualize the cleaned graph.
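Step 1, absorbing members and rewiring their relations, can be sketched on a toy in-memory graph (the field names here are illustrative, not sift-kg's storage format):

```python
def apply_merge(graph, canonical_id, member_ids):
    """Absorb member entities into the canonical one and rewire their relations."""
    members = set(member_ids)
    canonical = graph["entities"][canonical_id]
    for mid in members:
        member = graph["entities"].pop(mid)  # remove the member node
        # Combine source-document provenance from the absorbed entity.
        canonical["sources"] = sorted(set(canonical["sources"]) | set(member["sources"]))
    for rel in graph["relations"]:
        if rel["source"] in members:
            rel["source"] = canonical_id
        if rel["target"] in members:
            rel["target"] = canonical_id
    return graph

graph = {
    "entities": {
        "person:stephen_curry": {"name": "Stephen Curry", "sources": ["doc1"]},
        "person:steph_curry": {"name": "Steph Curry", "sources": ["doc2"]},
        "org:warriors": {"name": "Warriors", "sources": ["doc1"]},
    },
    "relations": [
        {"source": "person:steph_curry", "target": "org:warriors", "relation_type": "MEMBER_OF"},
    ],
}
apply_merge(graph, "person:stephen_curry", ["person:steph_curry"])
print(graph["relations"][0]["source"])  # person:stephen_curry
print(graph["entities"]["person:stephen_curry"]["sources"])  # ['doc1', 'doc2']
```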
### Iterating
Entity resolution isn't always one-pass. After merging, new duplicates may become apparent. You can re-run:
```bash
sift resolve # find new duplicates in the cleaned graph
sift review # review the new proposals
sift apply-merges # apply again
```
Each run is additive — previous `CONFIRMED`/`REJECTED` decisions in `merge_proposals.yaml` are preserved.
### Recommended Workflow by Use Case
| Use Case | Suggested Approach |
|---|---|
| **Quick exploration** | `sift review --auto-approve 0.85` — approve high-confidence, review the rest |
| **Genealogy / family records** | Edit YAML manually, `--auto-approve 1.0` — review every single merge |
| **Legal / investigative** | `sift resolve --embeddings`, edit YAML manually, use `sift view` to inspect between rounds |
| **Large corpus (1000+ entities)** | `sift resolve --embeddings` for better batching, then interactive review |
## Deduplication Internals
The pre-dedup and LLM batching techniques are inspired by [KGGen](https://github.com/stochastic-sisyphus/KGGen) (NeurIPS 2025) by [@stochastic-sisyphus](https://github.com/stochastic-sisyphus). KGGen uses SemHash for deterministic entity deduplication and embedding-based clustering for grouping entities before LLM comparison. sift-kg adapts these into its human-in-the-loop review workflow.
### Embedding-Based Clustering (optional)
By default, `sift resolve` sorts entities alphabetically and splits them into overlapping batches for LLM comparison. This works well when duplicates have similar spelling — but "Robert Smith" (R) and "Bob Smith" (B) end up in different batches and never get compared.
```bash
pip install sift-kg[embeddings] # sentence-transformers + scikit-learn (~2GB, pulls PyTorch)
sift resolve --embeddings
```
This replaces alphabetical batching with KMeans clustering on sentence embeddings (all-MiniLM-L6-v2). Semantically similar names cluster together regardless of spelling.
| | Default (alphabetical) | `--embeddings` |
|---|---|---|
| Install size | Included | ~2GB (PyTorch) |
| First-run overhead | None | ~90MB model download |
| Per-run overhead | Sorting only | Encoding (<1s for hundreds of entities) |
| Cross-alphabet duplicates | Missed if in different batches | Caught |
| Small graphs (<100/type) | Same result | Same result |
Falls back to alphabetical batching if dependencies aren't installed or clustering fails.
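To illustrate the grouping effect without the heavy dependencies, here is a rough stand-in that clusters similar names using stdlib string similarity in place of sentence embeddings + KMeans (the threshold, names, and greedy single-link scheme are invented for this sketch):

```python
from difflib import SequenceMatcher

names = ["Robert Smith", "Bob Smith", "R. Smith",
         "Alice Jones", "Alicia Jones", "A. Jones"]

def similarity(a: str, b: str) -> float:
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

# Greedy single-link grouping: add each name to the first cluster
# containing a sufficiently similar member, else start a new cluster.
clusters = []
for name in names:
    for cluster in clusters:
        if any(similarity(name, member) > 0.5 for member in cluster):
            cluster.append(name)
            break
    else:
        clusters.append([name])
# Each cluster becomes one LLM comparison batch, so "Robert Smith"
# and "Bob Smith" are compared despite sorting far apart.
```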
## License
MIT
| text/markdown | null | Juan Ceresa <jcere@umich.edu> | null | null | MIT | document-processing, entity-extraction, knowledge-graph, llm, nlp | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Topic :: Scientific/Engineering :: Artificial Intelligence",
"Topi... | [] | null | null | >=3.11 | [] | [] | [] | [
"beautifulsoup4>=4.12.0",
"inflect>=7.0.0",
"kreuzberg>=4.0.0",
"litellm>=1.0.0",
"networkx>=3.2",
"pdfplumber>=0.10.0",
"pydantic-settings>=2.1.0",
"pydantic>=2.5.0",
"python-docx>=1.0.0",
"pyvis>=0.3.0",
"pyyaml>=6.0.1",
"rich>=13.0.0",
"semhash>=0.4.0",
"typer[all]>=0.9.0",
"unidecode... | [] | [] | [] | [
"Homepage, https://github.com/civictable/sift-kg",
"Documentation, https://github.com/civictable/sift-kg#readme",
"Repository, https://github.com/civictable/sift-kg",
"Issues, https://github.com/civictable/sift-kg/issues"
] | twine/6.2.0 CPython/3.12.8 | 2026-02-19T00:04:03.560530 | sift_kg-0.7.2.tar.gz | 73,705,388 | 67/7e/363ea45aa3681e068ac924b7cc7cb1c368e93595ff1f48479395692ddc87/sift_kg-0.7.2.tar.gz | source | sdist | null | false | 65b9e4e418ec54b58d9237af81a3cfe4 | 180d3518e387a197a15c633d47b85d7165033bde3b8aeae8f5bc8fcefc40263a | 677e363ea45aa3681e068ac924b7cc7cb1c368e93595ff1f48479395692ddc87 | null | [
"LICENSE"
] | 282 |
2.1 | langgraph-checkpoint-aws | 1.0.5 | A LangChain checkpointer implementation that uses Bedrock Session Management Service and ElastiCache Valkey to enable stateful and resumable LangGraph agents. | # LangGraph Checkpoint AWS
A custom AWS-based persistence solution for LangGraph agents that provides multiple storage backends including Bedrock AgentCore Memory, DynamoDB with S3 offloading, and high-performance Valkey (Redis-compatible) storage.
## Overview
This package provides multiple persistence solutions for LangGraph agents:
### AWS Bedrock AgentCore Memory Service
1. Stateful conversations and interactions
2. Resumable agent sessions
3. Efficient state persistence and retrieval
4. Seamless integration with AWS Bedrock
### DynamoDB Storage
1. **Checkpoint storage** with DynamoDB and automatic S3 offloading
2. Unified table design with TTL support
3. Intelligent compression for optimal storage
### Valkey Storage Solutions
1. **Checkpoint storage** with Valkey (Redis-compatible)
2. **Intelligent caching** for LLM responses and computation results
3. **Document storage** with vector search capabilities
## Installation
You can install the package using pip:
```bash
# Base package (includes Bedrock AgentCore Memory components)
pip install langgraph-checkpoint-aws
# Optional Valkey support
pip install 'langgraph-checkpoint-aws[valkey]'
```
## Components
This package provides the following main components:
1. **AgentCoreMemorySaver** - AWS Bedrock-based checkpoint storage
2. **AgentCoreValkeySaver** - AgentCore-compatible Valkey checkpoint storage
3. **DynamoDBSaver** - DynamoDB-based checkpoint storage with S3 offloading
4. **ValkeySaver** - Valkey checkpoint storage
5. **AgentCoreMemoryStore** - AWS Bedrock-based document store
6. **ValkeyStore** - Valkey document store
7. **ValkeyCache** - Valkey LLM response cache
## Usage
### 1. Bedrock Session Management
```python
# Import LangGraph and LangChain components
from langchain.chat_models import init_chat_model
from langgraph.prebuilt import create_react_agent
# Import the AgentCoreMemory integrations
from langgraph_checkpoint_aws import AgentCoreMemorySaver
REGION = "us-west-2"
MEMORY_ID = "YOUR_MEMORY_ID"
MODEL_ID = "us.anthropic.claude-3-7-sonnet-20250219-v1:0"
# Initialize checkpointer for state persistence. No additional setup required.
# Sessions will be saved and persisted for actor_id/session_id combinations
checkpointer = AgentCoreMemorySaver(MEMORY_ID, region_name=REGION)
# Initialize chat model
model = init_chat_model(MODEL_ID, model_provider="bedrock_converse", region_name=REGION)
# Create a pre-built langgraph agent (configurations work for custom agents too)
tools = []  # Add your LangChain tools here
graph = create_react_agent(
    model=model,
    tools=tools,
    checkpointer=checkpointer,  # AgentCoreMemorySaver we created above
)
# Specify config at runtime for ACTOR and SESSION
config = {
"configurable": {
"thread_id": "session-1", # REQUIRED: This maps to Bedrock AgentCore session_id under the hood
"actor_id": "react-agent-1", # REQUIRED: This maps to Bedrock AgentCore actor_id under the hood
}
}
# Invoke the agent
response = graph.invoke(
{"messages": [("human", "I like sushi with tuna. In general seafood is great.")]},
config=config
)
```
### 2. Bedrock Memory Store
```python
# Import LangGraph and LangChain components
import uuid

from langchain.chat_models import init_chat_model
from langchain_core.messages import HumanMessage
from langchain_core.runnables import RunnableConfig
from langgraph.prebuilt import create_react_agent
from langgraph.store.base import BaseStore
from langgraph_checkpoint_aws import AgentCoreMemoryStore
REGION = "us-west-2"
MEMORY_ID = "YOUR_MEMORY_ID"
MODEL_ID = "us.anthropic.claude-3-7-sonnet-20250219-v1:0"
# Initialize store for saving and searching over long term memories
# such as preferences and facts across sessions
store = AgentCoreMemoryStore(MEMORY_ID, region_name=REGION)
# Pre-model hook runs and saves messages of your choosing to AgentCore Memory
# for async processing and extraction
def pre_model_hook(state, config: RunnableConfig, *, store: BaseStore):
"""Hook that runs pre-model invocation to save the latest human message"""
actor_id = config["configurable"]["actor_id"]
thread_id = config["configurable"]["thread_id"]
# Saving the message to the actor and session combination that we get at runtime
namespace = (actor_id, thread_id)
messages = state.get("messages", [])
# Save the last human message we see before model invocation
for msg in reversed(messages):
if isinstance(msg, HumanMessage):
store.put(namespace, str(uuid.uuid4()), {"message": msg})
break
# OPTIONAL: Retrieve user preferences based on the last message and append to state
# user_preferences_namespace = ("preferences", actor_id)
# preferences = store.search(user_preferences_namespace, query=msg.content, limit=5)
# # Add to input messages as needed
return {"model_input_messages": messages}
# Initialize chat model
model = init_chat_model(MODEL_ID, model_provider="bedrock_converse", region_name=REGION)
# Create a pre-built langgraph agent (configurations work for custom agents too)
graph = create_react_agent(
    model=model,
    tools=[],
    pre_model_hook=pre_model_hook,
    store=store,  # Injected into pre_model_hook at runtime
)
# Specify config at runtime for ACTOR and SESSION
config = {
"configurable": {
"thread_id": "session-1", # REQUIRED: This maps to Bedrock AgentCore session_id under the hood
"actor_id": "react-agent-1", # REQUIRED: This maps to Bedrock AgentCore actor_id under the hood
}
}
# Invoke the agent
response = graph.invoke(
{"messages": [("human", "I like sushi with tuna. In general seafood is great.")]},
config=config
)
```
### 3. Valkey Cache - LLM Response caching
Intelligent caching to improve performance and reduce costs:
```python
from langchain_aws import ChatBedrockConverse
from langchain_core.messages import HumanMessage
from langgraph_checkpoint_aws import ValkeyCache
# Initialize cache
with ValkeyCache.from_conn_string(
"valkey://localhost:6379",
ttl_seconds=3600, # 1 hour TTL
pool_size=10
) as cache:
# Use cache with your LLM calls
model = ChatBedrockConverse(
model="us.anthropic.claude-sonnet-4-5-20250929-v1:0"
)
# Cache expensive prompts/computations
cache_key = "expensive_computation_key"
result = cache.get([cache_key])
if not result:
# Compute and cache result
prompt: str = "Your expensive prompt"
response = model.invoke([HumanMessage(content=prompt)])
cache.set({cache_key: (response.content, 3600)}) # Cache for 1 hour
```
### 4. DynamoDB Checkpoint Storage
```python
from langgraph.graph import StateGraph
from langgraph_checkpoint_aws import DynamoDBSaver
# Basic usage with DynamoDB only
checkpointer = DynamoDBSaver(
table_name="my-checkpoints",
region_name="us-west-2"
)
# With S3 offloading for large checkpoints (>350KB)
checkpointer = DynamoDBSaver(
table_name="my-checkpoints",
region_name="us-west-2",
s3_offload_config={"bucket_name": "my-checkpoint-bucket"}
)
# Production configuration with TTL and compression
checkpointer = DynamoDBSaver(
table_name="my-checkpoints",
region_name="us-west-2",
ttl_seconds=86400 * 7, # 7 days
enable_checkpoint_compression=True,
s3_offload_config={"bucket_name": "my-checkpoint-bucket"}
)
# Create your graph
builder = StateGraph(int)
builder.add_node("add_one", lambda x: x + 1)
builder.set_entry_point("add_one")
builder.set_finish_point("add_one")
graph = builder.compile(checkpointer=checkpointer)
config = {"configurable": {"thread_id": "session-1"}}
result = graph.invoke(1, config)
```
### 5. Valkey Checkpoint Storage
```python
from langgraph.graph import StateGraph
from langgraph_checkpoint_aws import ValkeySaver
# Using connection string
with ValkeySaver.from_conn_string(
"valkey://localhost:6379",
ttl_seconds=3600, # 1 hour TTL
pool_size=10
) as checkpointer:
# Create your graph
builder = StateGraph(int)
builder.add_node("add_one", lambda x: x + 1)
builder.set_entry_point("add_one")
builder.set_finish_point("add_one")
graph = builder.compile(checkpointer=checkpointer)
config = {"configurable": {"thread_id": "session-1"}}
result = graph.invoke(1, config)
```
### 6. AgentCore Valkey Storage
For AWS Bedrock AgentCore-compatible applications that want to use Valkey instead of managed Memory:
```python
from langgraph.prebuilt import create_react_agent
from langgraph_checkpoint_aws import AgentCoreValkeySaver
# Using connection string
with AgentCoreValkeySaver.from_conn_string(
"valkey://localhost:6379",
ttl_seconds=3600, # 1 hour TTL
pool_size=10
) as checkpointer:
# Create your agent
graph = create_react_agent(
model=model,
tools=tools,
checkpointer=checkpointer
)
# AgentCore-style configuration (requires actor_id)
config = {
"configurable": {
"thread_id": "session-123",
"actor_id": "agent-456", # Required for AgentCore compatibility
"checkpoint_ns": "production"
}
}
result = graph.invoke({"messages": [...]}, config)
```
**Key Differences from ValkeySaver:**
- ✅ Requires `actor_id` in configuration (AgentCore requirement)
- ✅ Uses AgentCore-compatible key structure (`agentcore:*` prefix)
- ✅ Built-in retry logic with exponential backoff
- ✅ Pydantic validation for data integrity
- ⚠️ **Data is NOT compatible with ValkeySaver** - choose one at project start
### 7. Valkey Store for Document Storage
Document storage with vector search capabilities using `ValkeyIndexConfig`:
```python
from langchain_aws import BedrockEmbeddings
from langgraph_checkpoint_aws import ValkeyStore
# Initialize Bedrock embeddings
embeddings = BedrockEmbeddings(
model_id="amazon.titan-embed-text-v1",
region_name=AWS_REGION
)
# Basic usage with ValkeyIndexConfig
with ValkeyStore.from_conn_string(
"valkey://localhost:6379",
index={
"collection_name": "my_documents",
"dims": 1536,
"embed": embeddings,
"fields": ["text", "author"],
"timezone": "UTC",
"index_type": "hnsw"
},
ttl={"default_ttl": 60.0} # 1 hour TTL
) as store:
# Setup vector search index
store.setup()
# Store documents
store.put(
("documents", "user123"),
"report_1",
{
"text": "Machine learning report on customer behavior analysis...",
"tags": ["ml", "analytics", "report"],
"author": "data_scientist"
}
)
# Search documents
results = store.search(
("documents",),
query="machine learning customer analysis",
filter={"author": "data_scientist"},
limit=10
)
# Advanced HNSW configuration for performance tuning
with ValkeyStore.from_conn_string(
"valkey://localhost:6379",
index={
"collection_name": "high_performance_docs",
"dims": 768,
"embed": embeddings,
"fields": ["text", "title", "summary"],
"timezone": "America/New_York",
"index_type": "hnsw",
"hnsw_m": 32, # More connections for better recall
"hnsw_ef_construction": 400, # Higher construction quality
"hnsw_ef_runtime": 20, # Better search accuracy
}
) as store:
# Optimized for high-accuracy vector search
pass
# FLAT index for exact search (smaller datasets)
with ValkeyStore.from_conn_string(
"valkey://localhost:6379",
index={
"collection_name": "exact_search_docs",
"dims": 384,
"embed": embeddings,
"fields": ["text"],
"index_type": "flat" # Exact search, no approximation
}
) as store:
# Exact vector search for smaller datasets
pass
```
## Async Usage
All components support async operations:
```python
from datetime import datetime

from langgraph_checkpoint_aws import AsyncValkeySaver, AsyncValkeyStore, DynamoDBSaver
from langgraph_checkpoint_aws.async_saver import AsyncBedrockSessionSaver
# Async Bedrock usage
session_saver = AsyncBedrockSessionSaver(region_name="us-west-2")
session_id = (await session_saver.session_client.create_session()).session_id
# Async DynamoDB usage (compile your graph with this checkpointer first)
checkpointer = DynamoDBSaver(table_name="my-checkpoints", region_name="us-west-2")
result = await graph.ainvoke(1, {"configurable": {"thread_id": "session-1"}})
# Async ValkeySaver usage
async with AsyncValkeySaver.from_conn_string("valkey://localhost:6379") as checkpointer:
graph = builder.compile(checkpointer=checkpointer)
result = await graph.ainvoke(1, {"configurable": {"thread_id": "session-1"}})
# Async ValkeyStore usage
async with AsyncValkeyStore.from_conn_string("valkey://localhost:6379") as store:
namespace = ("example",)
key = "key"
data = {
"message": "Sample message",
"timestamp": datetime.now().isoformat(),
"status": "success"
}
await store.setup()
await store.aput(namespace, key, data)
result = await store.aget(namespace, key)
```
## Configuration Options
### Bedrock Session Saver
`BedrockSessionSaver` and `AsyncBedrockSessionSaver` accept the following parameters:
```python
def __init__(
client: Optional[Any] = None,
session: Optional[boto3.Session] = None,
region_name: Optional[str] = None,
credentials_profile_name: Optional[str] = None,
aws_access_key_id: Optional[SecretStr] = None,
aws_secret_access_key: Optional[SecretStr] = None,
aws_session_token: Optional[SecretStr] = None,
endpoint_url: Optional[str] = None,
config: Optional[Config] = None,
)
```
### DynamoDB Saver
`DynamoDBSaver` provides persistent checkpoint storage with these options:
```python
DynamoDBSaver(
table_name: str, # Required: DynamoDB table name
session: Optional[boto3.Session] = None, # Custom boto3 session
region_name: Optional[str] = None, # AWS region
endpoint_url: Optional[str] = None, # Custom dynamodb endpoint url
boto_config: Optional[Config] = None, # Botocore config
ttl_seconds: Optional[int] = None, # Auto-cleanup after N seconds
enable_checkpoint_compression: bool = False, # Enable gzip compression
s3_offload_config: Optional[dict] = None # S3 config for large checkpoints
)
```
**Key Features:**
- Unified table design for checkpoints and writes
- Automatic S3 offloading for payloads >350KB (when configured)
- Optional gzip compression with intelligent thresholds
- TTL support with automatic DynamoDB and S3 lifecycle management
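One plausible ordering of the compression and offloading steps, sketched for illustration (only the ~350KB threshold comes from this README; the function and its logic are not the library's actual code):

```python
import gzip
import os

DYNAMO_ITEM_LIMIT = 350 * 1024  # offload threshold described above (~350KB)

def prepare_checkpoint(payload: bytes, compress: bool = True):
    """Compress the payload first, then route it: anything still over
    the item limit goes to S3, the rest stays in DynamoDB."""
    data = gzip.compress(payload) if compress else payload
    destination = "s3" if len(data) > DYNAMO_ITEM_LIMIT else "dynamodb"
    return destination, data

small = b"{}" * 512             # ~1KB, compresses well -> DynamoDB
large = os.urandom(512 * 1024)  # ~512KB of incompressible data -> S3
```

Compressing before measuring means a large-but-repetitive checkpoint can still fit in DynamoDB and avoid the extra S3 round trip.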
**S3 Offload Configuration:**
```python
s3_offload_config = {
"bucket_name": "my-checkpoint-bucket", # Required
"endpoint_url": "http://localhost:4566" # Optional: Custom s3 endpoint url
}
```
### Valkey Components
Valkey components support these common configuration options:
#### Connection Options
- **Connection String**: `valkey://localhost:6379` or `valkeys://localhost:6380` (SSL). Refer to the [connection examples](https://valkey-py.readthedocs.io/en/latest/examples/connection_examples.html).
- **Connection Pool**: Reusable connection pools for better performance
- **Pool Size**: Maximum number of connections (default: 10)
- **SSL Support**: Secure connections with certificate validation
#### Performance Options
- **TTL (Time-to-Live)**: Automatic expiration of stored data
- **Batch Operations**: Efficient bulk operations for better throughput
- **Async Support**: Non-blocking operations for high concurrency
#### ValkeyCache Options
```python
ValkeyCache(
client: Valkey,
prefix: str = "langgraph:cache:", # Key prefix
ttl: float | None = None, # Default TTL in seconds
serde: SerializerProtocol | None = None
)
```
#### ValkeySaver Options
```python
ValkeySaver(
    client: Valkey,                          # e.g. Valkey.from_url("valkey://localhost:6379")
    ttl: float | None = None,                # TTL in seconds
    serde: SerializerProtocol | None = None  # Custom serialization
)
```
#### ValkeyStore Options
```python
ValkeyStore(
client: Valkey,
index: ValkeyIndexConfig | None = None, # Valkey-specific vector search configuration
ttl: TTLConfig | None = None # TTL configuration
)
# ValkeyIndexConfig - Enhanced vector search configuration
from langgraph_checkpoint_aws.store.valkey import ValkeyIndexConfig
index_config = {
# Basic configuration
"collection_name": "my_documents", # Index collection name
"dims": 1536, # Vector dimensions
"embed": embeddings, # Embedding model
"fields": ["text", "content"], # Fields to index
# Valkey-specific configuration
"timezone": "UTC", # Timezone for operations (default: "UTC")
"index_type": "hnsw", # Algorithm: "hnsw" or "flat" (default: "hnsw")
# HNSW performance tuning parameters
"hnsw_m": 16, # Connections per layer (default: 16)
"hnsw_ef_construction": 200, # Construction search width (default: 200)
"hnsw_ef_runtime": 10, # Runtime search width (default: 10)
}
# TTL Configuration
ttl_config = {
"default_ttl": 60.0 # Default TTL in minutes
}
```
##### Algorithm Selection Guide
**HNSW (Hierarchical Navigable Small World)**
- **Best for**: Large datasets (>10K vectors), fast approximate search
- **Trade-off**: Speed vs accuracy - configurable via parameters
- **Use cases**: Real-time search, large document collections, production systems
**FLAT (Brute Force)**
- **Best for**: Small datasets (<10K vectors), exact search requirements
- **Trade-off**: Perfect accuracy but slower on large datasets
- **Use cases**: High-precision requirements, smaller collections, research
#### Performance Tuning Parameters
**hnsw_m (Connections per layer)**
- **Range**: 4-64 (default: 16)
- **Higher values**: Better recall, more memory usage
- **Lower values**: Faster search, less memory, lower recall
- **Recommendation**: 16-32 for most use cases
**hnsw_ef_construction (Construction search width)**
- **Range**: 100-800 (default: 200)
- **Higher values**: Better index quality, slower construction
- **Lower values**: Faster construction, potentially lower quality
- **Recommendation**: 200-400 for production systems
**hnsw_ef_runtime (Query search width)**
- **Range**: 10-500 (default: 10)
- **Higher values**: Better recall, slower queries
- **Lower values**: Faster queries, potentially lower recall
- **Recommendation**: 10-50 depending on speed/accuracy requirements
##### Configuration Examples
```python
# High-speed configuration (prioritize speed)
speed_config = {
"collection_name": "fast_search",
"index_type": "hnsw",
"hnsw_m": 8, # Fewer connections
"hnsw_ef_construction": 100, # Faster construction
"hnsw_ef_runtime": 10, # Fast queries
}
# High-accuracy configuration (prioritize recall)
accuracy_config = {
"collection_name": "precise_search",
"index_type": "hnsw",
"hnsw_m": 32, # More connections
"hnsw_ef_construction": 400, # Better construction
"hnsw_ef_runtime": 50, # More thorough search
}
# Balanced configuration (good speed/accuracy trade-off)
balanced_config = {
"collection_name": "balanced_search",
"index_type": "hnsw",
"hnsw_m": 16, # Default connections
"hnsw_ef_construction": 200, # Default construction
"hnsw_ef_runtime": 20, # Moderate search width
}
# Exact search configuration (perfect accuracy)
exact_config = {
"collection_name": "exact_search",
"index_type": "flat", # No HNSW parameters needed
}
```
## Development
### Setting Up the Development Environment
* Clone the repository:
```bash
git clone <repository-url>
cd libs/aws/langgraph-checkpoint-aws
```
* Install development dependencies:
```bash
make install_all
```
* Or install specific components:
```bash
make install_dev # Basic development tools
make install_test # Testing tools
make install_lint # Linting tools
make install_typing # Type checking tools
make install_codespell # Spell checking tools
```
## Running Tests
```bash
make tests # Run all tests
make test_watch # Run tests in watch mode
```
## Code Quality
```bash
make lint # Run linter
make format # Format code
make spell_check # Check spelling
```
## Clean Up
```bash
make clean # Remove all generated files
```
## Infrastructure Setup
### AWS Configuration (for Bedrock components)
Ensure you have AWS credentials configured using one of these methods:
1. Environment variables
2. AWS credentials file (~/.aws/credentials)
3. IAM roles
4. Direct credential injection via constructor parameters
Required AWS permissions for Bedrock Session Management:
```json
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "BedrockSessionManagement",
"Effect": "Allow",
"Action": [
"bedrock-agentcore:CreateEvent",
"bedrock-agentcore:ListEvents",
"bedrock-agentcore:GetEvent"
],
"Resource": [
"*"
]
}
]
}
```
### DynamoDB Setup
#### CloudFormation Template
A sample CloudFormation template is available at [`langgraph-ddb-cfn-template.yaml`](../../samples/memory/cfn/langgraph-ddb-cfn-template.yaml) for quick setup:
```bash
aws cloudformation create-stack \
--stack-name langgraph-checkpoints \
--template-body file://langgraph-ddb-cfn-template.yaml \
--parameters \
ParameterKey=CheckpointTableName,ParameterValue=my-checkpoints \
ParameterKey=CreateS3Bucket,ParameterValue=true \
ParameterKey=EnableTTL,ParameterValue=true
```
#### Required IAM Permissions
**DynamoDB Only:**
```json
{
"Version": "2012-10-17",
"Statement": [{
"Effect": "Allow",
"Action": [
"dynamodb:GetItem",
"dynamodb:PutItem",
"dynamodb:Query",
"dynamodb:BatchGetItem",
"dynamodb:BatchWriteItem"
],
"Resource": "arn:aws:dynamodb:REGION:ACCOUNT:table/TABLE_NAME"
}]
}
```
**With S3 Offloading:**
```json
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"dynamodb:GetItem",
"dynamodb:PutItem",
"dynamodb:Query",
"dynamodb:BatchGetItem",
"dynamodb:BatchWriteItem"
],
"Resource": "arn:aws:dynamodb:REGION:ACCOUNT:table/TABLE_NAME"
},
{
"Effect": "Allow",
"Action": [
"s3:PutObject",
"s3:GetObject",
"s3:DeleteObject",
"s3:PutObjectTagging"
],
"Resource": "arn:aws:s3:::BUCKET_NAME/*"
},
{
"Effect": "Allow",
"Action": [
"s3:GetBucketLifecycleConfiguration",
"s3:PutBucketLifecycleConfiguration"
],
"Resource": "arn:aws:s3:::BUCKET_NAME"
}
]
}
```
## Bedrock Session Saver (Alternative Implementation)
This package also provides an alternative checkpointing solution using AWS Bedrock Session Management Service:
### Usage
```python
from langgraph.graph import StateGraph
from langgraph_checkpoint_aws.saver import BedrockSessionSaver
# Initialize the saver
session_saver = BedrockSessionSaver(
region_name="us-west-2", # Your AWS region
credentials_profile_name="default", # Optional: AWS credentials profile
)
# Create a session
session_id = session_saver.session_client.create_session().session_id
# Use with LangGraph
builder = StateGraph(int)
builder.add_node("add_one", lambda x: x + 1)
builder.set_entry_point("add_one")
builder.set_finish_point("add_one")
graph = builder.compile(checkpointer=session_saver)
config = {"configurable": {"thread_id": session_id}}
graph.invoke(1, config)
```
You can also invoke the graph asynchronously:
```python
from langgraph.graph import StateGraph
from langgraph_checkpoint_aws.async_saver import AsyncBedrockSessionSaver
# Initialize the saver
session_saver = AsyncBedrockSessionSaver(
region_name="us-west-2", # Your AWS region
credentials_profile_name="default", # Optional: AWS credentials profile
)
# Create a session
session_create_response = await session_saver.session_client.create_session()
session_id = session_create_response.session_id
# Use with LangGraph
builder = StateGraph(int)
builder.add_node("add_one", lambda x: x + 1)
builder.set_entry_point("add_one")
builder.set_finish_point("add_one")
graph = builder.compile(checkpointer=session_saver)
config = {"configurable": {"thread_id": session_id}}
await graph.ainvoke(1, config)
```
### Configuration Options
`BedrockSessionSaver` and `AsyncBedrockSessionSaver` accept the following parameters:
```python
def __init__(
client: Optional[Any] = None,
session: Optional[boto3.Session] = None,
region_name: Optional[str] = None,
credentials_profile_name: Optional[str] = None,
aws_access_key_id: Optional[SecretStr] = None,
aws_secret_access_key: Optional[SecretStr] = None,
aws_session_token: Optional[SecretStr] = None,
endpoint_url: Optional[str] = None,
config: Optional[Config] = None,
)
```
* `client`: boto3 Bedrock runtime client (e.g. boto3.client("bedrock-agent-runtime"))
* `session`: boto3.Session for custom credentials
* `region_name`: AWS region where Bedrock is available
* `credentials_profile_name`: Name of AWS credentials profile to use
* `aws_access_key_id`: AWS access key ID for authentication
* `aws_secret_access_key`: AWS secret access key for authentication
* `aws_session_token`: AWS session token for temporary credentials
* `endpoint_url`: Custom endpoint URL for the Bedrock service
* `config`: Botocore configuration object
### Additional AWS permissions for Session Saver
```json
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "Statement1",
"Effect": "Allow",
"Action": [
"bedrock:CreateSession",
"bedrock:GetSession",
"bedrock:UpdateSession",
"bedrock:DeleteSession",
"bedrock:EndSession",
"bedrock:ListSessions",
"bedrock:CreateInvocation",
"bedrock:ListInvocations",
"bedrock:PutInvocationStep",
"bedrock:GetInvocationStep",
"bedrock:ListInvocationSteps"
],
"Resource": ["*"]
},
{
"Sid": "KMSAccess",
"Effect": "Allow",
"Action": [
"kms:Decrypt",
"kms:Encrypt",
"kms:GenerateDataKey",
"kms:DescribeKey"
],
"Resource": "arn:aws:kms:{region}:{account}:key/{kms-key-id}"
}
]
}
```
### Valkey Setup
#### Using AWS ElastiCache for Valkey (Recommended)
```python
# Connect to AWS ElastiCache from host running inside VPC with access to cache
from langgraph_checkpoint_aws.checkpoint.valkey import ValkeySaver
with ValkeySaver.from_conn_string(
"valkeys://your-elasticache-cluster.amazonaws.com:6379",
pool_size=20
) as checkpointer:
pass
```
To connect to the cache from a host outside the VPC, use the ElastiCache console to set up a jump host, then create an SSH tunnel to access the cache locally.
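For example, a local port-forward through such a jump host might look like this (hostnames are placeholders for your own resources):

```bash
# Forward local port 6379 to the ElastiCache endpoint via the jump host
ssh -N -L 6379:your-elasticache-cluster.amazonaws.com:6379 ec2-user@your-jump-host
# Then connect locally with valkey://localhost:6379
```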
#### Using Docker
```bash
# Start Valkey with required modules
docker run --name valkey-bundle -p 6379:6379 -d valkey/valkey-bundle:latest
# Or with custom configuration
docker run --name valkey-custom \
-p 6379:6379 \
-v $(pwd)/valkey.conf:/etc/valkey/valkey.conf \
-d valkey/valkey-bundle:latest
```
## Performance and Best Practices
### Valkey Performance Optimization
#### Connection Pooling
```python
# Use connection pools for better performance
from valkey.connection import ConnectionPool
pool = ConnectionPool.from_url(
"valkey://localhost:6379",
max_connections=20,
retry_on_timeout=True
)
with ValkeySaver.from_pool(pool) as checkpointer:
# Reuse connections across operations
pass
```
#### TTL Strategy
```python
# Configure appropriate TTL values
with ValkeySaver.from_conn_string(
"valkey://localhost:6379",
ttl_seconds=3600 # 1 hour for active sessions
) as checkpointer:
pass
```
#### Batch Operations
```python
# Use batch operations for better throughput
cache.set({
"key1": (value1, 3600),
"key2": (value2, 1800),
"key3": (value3, 7200)
})
results = cache.get(["key1", "key2", "key3"])
```
## Security Considerations
* Never commit AWS credentials
* Use environment variables or AWS IAM roles for authentication
* Follow AWS security best practices
* Use IAM roles and temporary credentials when possible
* Implement proper access controls for session management
### Valkey Security
* Use SSL/TLS for production deployments (`valkeys://` protocol); refer to the [SSL connection examples](https://valkey-py.readthedocs.io/en/latest/examples/ssl_connection_examples.html#Connect-to-a-Valkey-instance-via-SSL,-and-validate-OCSP-stapled-certificates)
* Configure authentication with strong passwords
* Implement network security (VPC, security groups)
* Regular security updates and monitoring
* Use AWS ElastiCache for managed Valkey with encryption
```python
# Secure connection example
import os
import valkey
pki_dir = os.path.join("..", "..", "dockers", "stunnel", "keys")
valkey_client = valkey.Valkey(
host="localhost",
port=6666,
ssl=True,
ssl_certfile=os.path.join(pki_dir, "client-cert.pem"),
ssl_keyfile=os.path.join(pki_dir, "client-key.pem"),
ssl_cert_reqs="required",
ssl_ca_certs=os.path.join(pki_dir, "ca-cert.pem"),
)
checkpointer = ValkeySaver(valkey_client)
```
## Examples and Samples
Comprehensive examples are available in the `samples/memory/` directory.
## Contributing
For detailed information on how to contribute, see the [Contributing Guide](https://github.com/langchain-ai/langchain-aws/blob/main/.github/CONTRIBUTING.md).
## License
This project is licensed under the MIT License - see the [LICENSE](LICENSE) file for details.
## Acknowledgments
* LangChain team for the base LangGraph framework
* AWS Bedrock team for the session management service
* Valkey team for the Redis-compatible storage
| text/markdown | null | null | null | null | MIT | aws, bedrock, langchain, langgraph, checkpointer, dynamodb, elasticache, valkey | [] | [] | null | null | <4.0,>=3.10 | [] | [] | [] | [
"langgraph-checkpoint<5.0.0,>=3.0.0",
"langgraph>=1.0.0",
"boto3>=1.40.19",
"typing_extensions>=4.0.0; python_version < \"3.11\"",
"valkey>=6.1.1; extra == \"valkey\"",
"orjson>=3.11.3; extra == \"valkey\""
] | [] | [] | [] | [
"Source Code, https://github.com/langchain-ai/langchain-aws/tree/main/libs/langgraph-checkpoint-aws",
"repository, https://github.com/langchain-ai/langchain-aws"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-19T00:02:35.759574 | langgraph_checkpoint_aws-1.0.5.tar.gz | 271,435 | b1/9d/e0356b70ace0c44e9a5c8382a73c339a98352fb86d4de036c620dacf38fb/langgraph_checkpoint_aws-1.0.5.tar.gz | source | sdist | null | false | c60a06c14e7caeb20c01561ad04907cc | eefeb21540fa66a98db7cf3ee8b0b8b8d1a30822e4cb8da7ea34564aa3a6f875 | b19de0356b70ace0c44e9a5c8382a73c339a98352fb86d4de036c620dacf38fb | null | [] | 3,077 |
2.4 | openmed | 0.5.8 | OpenMed delivers state-of-the-art biomedical and clinical LLMs that rival proprietary enterprise stacks, unifying model discovery, advanced extractions, and one-line orchestration. | # OpenMed
> **Production-ready medical NLP toolkit powered by state-of-the-art transformers**
Transform clinical text into structured insights with a single line of code. OpenMed delivers enterprise-grade entity extraction, assertion detection, and medical reasoning—no vendor lock-in, no compromise on accuracy.
[](https://opensource.org/licenses/Apache-2.0)
[](https://www.python.org/downloads/)
[](https://arxiv.org/abs/2508.01630)
[](https://colab.research.google.com/drive/1x1xJjTZTWR3Z7uLJ0B5B_FyAomeeZGq5?usp=sharing)
```python
from openmed import analyze_text
result = analyze_text(
"Patient started on imatinib for chronic myeloid leukemia.",
model_name="disease_detection_superclinical"
)
for entity in result.entities:
print(f"{entity.label:<12} {entity.text:<35} {entity.confidence:.2f}")
# DISEASE chronic myeloid leukemia 0.98
# DRUG imatinib 0.95
```
---
## ✨ Why OpenMed?
- **Specialized Models**: 12+ curated medical NER models outperforming proprietary solutions
- **HIPAA-Compliant PII Detection**: Smart de-identification with all 18 Safe Harbor identifiers
- **One-Line Deployment**: From prototype to production in minutes
- **Interactive TUI**: Beautiful terminal interface for rapid experimentation
- **Batch Processing**: Multi-file workflows with progress tracking
- **Production-Ready**: Configuration profiles, profiling tools, and medical-aware tokenization
- **Zero Lock-In**: Apache 2.0 licensed, runs on your infrastructure
---
## Quick Start
### Installation
```bash
# Install with Hugging Face support
pip install openmed[hf]
# Or try the interactive TUI
pip install openmed[tui]
```
### Three Ways to Use OpenMed
**1️⃣ Python API** — One-liner for scripts and notebooks
```python
from openmed import analyze_text
result = analyze_text(
"Patient received 75mg clopidogrel for NSTEMI.",
model_name="pharma_detection_superclinical"
)
```
**2️⃣ Interactive TUI** — Visual workbench for exploration
```bash
openmed # Launch the TUI directly
```

**3️⃣ CLI Automation** — Batch processing for production
```bash
# Process a directory of clinical notes
openmed batch --input-dir ./notes --output results.json
# Use configuration profiles
openmed config profile-use prod
```
---
## Interactive Terminal Interface
The OpenMed TUI provides a full-featured workbench that runs in any terminal:
- Real-time entity extraction with `Ctrl+Enter`
- Color-coded entity highlighting
- Live configuration tuning (threshold, grouping, tokenization)
- Confidence visualization with progress bars
- Analysis history and export (JSON, CSV)
- Hot-swappable models and profiles
- File browser for batch analysis
```bash
# Launch with custom settings
openmed tui --model disease_detection_superclinical --confidence-threshold 0.7
```
[📖 Full TUI Documentation](https://openmed.life/docs/tui)
---
## Key Features
### Core Capabilities
- **Curated Model Registry**: Metadata-rich catalog with 12+ specialized medical NER models
- **PII Detection & De-identification**: HIPAA-compliant de-identification with smart entity merging
- **Medical-Aware Tokenization**: Clean handling of clinical patterns (`COVID-19`, `CAR-T`, `IL-6`)
- **Advanced NER Processing**: Confidence filtering, entity grouping, and span alignment
- **Multiple Output Formats**: Dict, JSON, HTML, CSV for any downstream system
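The idea behind medical-aware tokenization can be illustrated with a small regex sketch that keeps hyphenated clinical patterns together, where a generic word tokenizer would split them. This is a conceptual illustration only, not OpenMed's actual tokenizer:

```python
import re

# Keep alphanumeric runs joined by '-' or '/' as single tokens
# (COVID-19, CAR-T, IL-6) instead of splitting on the hyphen.
TOKEN_RE = re.compile(r"[A-Za-z0-9]+(?:[-/][A-Za-z0-9]+)*")

def medical_tokenize(text: str) -> list:
    return TOKEN_RE.findall(text)

print(medical_tokenize("IL-6 rose after CAR-T therapy for COVID-19"))
# ['IL-6', 'rose', 'after', 'CAR-T', 'therapy', 'for', 'COVID-19']
```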
### Production Tools (v0.4.0)
- **Batch Processing**: Multi-text and multi-file workflows with progress tracking
- **Configuration Profiles**: `dev`/`prod`/`test`/`fast` presets with flexible overrides
- **Performance Profiling**: Built-in inference timing and bottleneck analysis
- **Interactive TUI**: Rich terminal UI for rapid iteration
---
## Documentation
Comprehensive guides available at **[openmed.life/docs](https://openmed.life/docs/)**
Quick links:
- [Getting Started](https://openmed.life/docs/) — Installation and first analysis
- [Analyze Text Helper](https://openmed.life/docs/analyze-text) — Python API reference
- [PII Detection Guide](examples/notebooks/PII_Detection_Complete_Guide.ipynb) — Complete de-identification tutorial (v0.5.0)
- [CLI & Automation](https://openmed.life/docs/cli) — Batch processing and profiles
- [Interactive TUI](https://openmed.life/docs/tui) — Terminal interface guide
- [Model Registry](https://openmed.life/docs/model-registry) — Browse available models
- [Configuration](https://openmed.life/docs/configuration) — Settings and environment variables
---
## Models
OpenMed includes a curated registry of 12+ specialized medical NER models:
| Model | Specialization | Entity Types | Size |
|-------|---------------|--------------|------|
| `disease_detection_superclinical` | Disease & Conditions | DISEASE, CONDITION, DIAGNOSIS | 434M |
| `pharma_detection_superclinical` | Drugs & Medications | DRUG, MEDICATION, TREATMENT | 434M |
| `pii_detection_superclinical` | PII & De-identification | NAME, DATE, SSN, PHONE, EMAIL, ADDRESS | 434M |
| `anatomy_detection_electramed` | Anatomy & Body Parts | ANATOMY, ORGAN, BODY_PART | 109M |
| `gene_detection_genecorpus` | Genes & Proteins | GENE, PROTEIN | 109M |
[📖 Full Model Catalog](https://openmed.life/docs/model-registry)
---
## Advanced Usage
### PII Detection & De-identification (v0.5.0)
```python
from openmed import extract_pii, deidentify
# Extract PII entities with smart merging (default)
result = extract_pii(
"Patient: John Doe, DOB: 01/15/1970, SSN: 123-45-6789",
model_name="pii_detection_superclinical",
use_smart_merging=True # Prevents entity fragmentation
)
# De-identify with multiple methods
masked = deidentify(text, method="mask") # [NAME], [DATE]
removed = deidentify(text, method="remove") # Complete removal
replaced = deidentify(text, method="replace") # Synthetic data
hashed = deidentify(text, method="hash") # Cryptographic hashing
shifted = deidentify(text, method="shift_dates", date_shift_days=180)
```
**Smart Entity Merging** (NEW in v0.5.0): Fixes tokenization fragmentation by merging split entities like dates (`01/15/1970` instead of `01` + `/15/1970`), ensuring production-ready de-identification.
**HIPAA Compliance**: Covers all 18 Safe Harbor identifiers with configurable confidence thresholds.
[📓 Complete PII Notebook](examples/notebooks/PII_Detection_Complete_Guide.ipynb) | [📖 Documentation](docs/pii-smart-merging.md)
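The smart-merging behavior described above can be sketched in a few lines: adjacent fragments with the same label and touching character offsets are fused into one entity. This is a conceptual sketch of the technique, not OpenMed's internal implementation:

```python
# Merge adjacent same-label fragments whose spans touch, so a date split
# into "01" + "/15/1970" comes back as one DATE entity "01/15/1970".
def merge_entities(text, entities):
    """entities: list of dicts with 'start', 'end', 'label' offsets into text."""
    merged = []
    for ent in sorted(entities, key=lambda e: e["start"]):
        if merged and ent["label"] == merged[-1]["label"] and ent["start"] <= merged[-1]["end"]:
            merged[-1]["end"] = max(merged[-1]["end"], ent["end"])
        else:
            merged.append(dict(ent))
    for ent in merged:
        ent["text"] = text[ent["start"]:ent["end"]]
    return merged

text = "DOB: 01/15/1970"
frags = [
    {"start": 5, "end": 7, "label": "DATE"},   # "01"
    {"start": 7, "end": 15, "label": "DATE"},  # "/15/1970"
]
print(merge_entities(text, frags))
# one DATE entity spanning "01/15/1970"
```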
### Batch Processing
```bash
# Process multiple files with progress tracking
openmed batch --input-dir ./clinical_notes --pattern "*.txt" --recursive
# Use profiles for different environments
openmed config profile-use prod
openmed batch --input-files note1.txt note2.txt --output results.json
```
### Configuration Profiles
```python
from openmed import analyze_text
# Apply a profile programmatically
result = analyze_text(
text,
model_name="disease_detection_superclinical",
config_profile="prod" # High confidence, grouped entities
)
```
### Performance Profiling
```python
from openmed import analyze_text, profile_inference
with profile_inference() as profiler:
result = analyze_text(text, model_name="disease_detection_superclinical")
print(profiler.summary()) # Inference time, bottlenecks, recommendations
```
[📖 More Examples](https://openmed.life/docs/examples)
---
## Contributing
We welcome contributions of all kinds: bug reports, feature requests, and pull requests.
- 🐛 **Found a bug?** [Open an issue](https://github.com/maziyarpanahi/openmed/issues)
---
## License
OpenMed is released under the [Apache-2.0 License](LICENSE).
---
## Citation
If you use OpenMed in your research, please cite:
```bibtex
@misc{panahi2025openmedneropensourcedomainadapted,
title={OpenMed NER: Open-Source, Domain-Adapted State-of-the-Art Transformers for Biomedical NER Across 12 Public Datasets},
author={Maziyar Panahi},
year={2025},
eprint={2508.01630},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2508.01630},
}
```
---
## Star History
If you find OpenMed useful, consider giving it a star ⭐ to help others discover it!
---
**Built with ❤️ by the OpenMed team**
[🌐 Website](https://openmed.life) • [📚 Documentation](https://openmed.life/docs) • [🐦 X/Twitter](https://x.com/openmed_ai) • [💬 LinkedIn](https://www.linkedin.com/company/openmed-ai/)
| text/markdown | Maziyar Panahi | null | null | null | Apache-2.0 | LLM, NLP, biomedical, clinical, healthcare, medical, medical LLMs, medical NER, medical NLP, medical de-identification, medical extraction, medical language models, medical reasoning, natural language processing | [] | [] | null | null | >=3.10 | [] | [] | [] | [
"pysbd<0.4,>=0.3.4",
"rich>=13.9; extra == \"cli\"",
"typer>=0.12; extra == \"cli\"",
"flake8>=7.0; extra == \"dev\"",
"pytest-cov>=4.0; extra == \"dev\"",
"pytest>=7.0; extra == \"dev\"",
"mkdocs-git-revision-date-localized-plugin>=1.2.6; extra == \"docs\"",
"mkdocs-material>=9.5; extra == \"docs\"",... | [] | [] | [] | [] | Hatch/1.16.3 cpython/3.11.14 HTTPX/0.28.1 | 2026-02-19T00:01:51.830205 | openmed-0.5.8-py3-none-any.whl | 130,473 | 56/c6/317fd08d64267c53ec00c5d31ae16c1832f88e787203ca42a7e69dd4bd70/openmed-0.5.8-py3-none-any.whl | py3 | bdist_wheel | null | false | a8c04fbc1b5a9a5e97df722934f4a49e | 71505b5bed3c1b6b5cf07248b5e26076013719cd05ddd03d52d8286b944cff63 | 56c6317fd08d64267c53ec00c5d31ae16c1832f88e787203ca42a7e69dd4bd70 | null | [
"LICENSE"
] | 344 |
2.4 | getflex | 0.1.0 | FlexGraph CLI — knowledge operating system | # getflex
FlexGraph CLI — knowledge operating system.
```
pip install getflex
flex init
```
Coming soon at [flexgraph.dev](https://flexgraph.dev)
| text/markdown | null | null | null | null | MIT | null | [] | [] | null | null | >=3.10 | [] | [] | [] | [] | [] | [] | [] | [
"Homepage, https://flexgraph.dev"
] | twine/6.2.0 CPython/3.12.3 | 2026-02-18T23:59:57.881723 | getflex-0.1.0.tar.gz | 1,181 | 06/0b/c12e925ef3873e9d150986513af535f1875ad12da7d0c2c315c4d5fdb9d6/getflex-0.1.0.tar.gz | source | sdist | null | false | 00fe61f919c09d25d1ff4f3f28fef1dc | 9e260a5edcb1f262b9f23d174332c344a3ed503a0d9460b1f7f7d76dd273c3fc | 060bc12e925ef3873e9d150986513af535f1875ad12da7d0c2c315c4d5fdb9d6 | null | [] | 278 |
2.4 | onorous | 0.1.0 | Ono - Universal AI-Powered Preprocessor | # Ono - Universal AI-Powered Preprocessor
*"Oh no, this is complicated... let AI figure it out."*
Ono is a universal templating preprocessor that uses AI to solve those annoying cross-platform, language-specific problems you don't want to think about. Write once, deploy anywhere, in any language.
## Why Ono?
Ever tried to write a script that needs to safely kill a process on an unknown system? Or find what's listening on a port across different Unix variants? These "simple" tasks explode into platform-specific complexity:
```bash
#!/bin/bash
port_to_free=8080
# The traditional nightmare of cross-platform compatibility
if command -v lsof >/dev/null 2>&1; then
pid=$(lsof -ti:$port_to_free)
elif command -v netstat >/dev/null 2>&1; then
# Different netstat flags on different systems...
if [[ "$OSTYPE" == "darwin"* ]]; then
pid=$(netstat -anp tcp | grep ":$port_to_free " | awk '{print $9}' | cut -d. -f1)
else
pid=$(netstat -tlnp | grep ":$port_to_free " | awk '{print $7}' | cut -d/ -f1)
fi
    # ... 50 more lines of platform detection
fi
```
With Ono:
```bash
#!/bin/bash
port_to_free=8080
cleanup_result="?ono find process on port $port_to_free, attempt graceful shutdown, verify port is freed, return 'SUCCESS' only if $check_port_free($port_to_free) confirms port is available ?"
if [ "$cleanup_result" != "SUCCESS" ]; then
force_result="?ono force kill process on $port_to_free and verify $(netstat -tuln | grep -v ":$port_to_free") shows port is free ?"
fi
```
## Real-World Use Cases
**Cross-Platform Process Management:**
```bash
# Works on Linux, macOS, BSD variants
running_services="?ono list all processes listening on network ports with process names ?"
webapp_pid="?ono find PID of process matching 'webapp' pattern using $ps_command($grep_flags) ?"
safe_kill="?ono safely terminate $webapp_pid with proper cleanup and verification ?"
```
**Docker Intelligence:**
```dockerfile
# Analyzes your codebase to make smart decisions
FROM "?ono determine optimal base image for this python flask app with minimal attack surface ?"
WORKDIR "?ono get appropriate working directory for containerized python apps ?"
# Installs only what's needed, with proper security
RUN "?ono analyze requirements.txt and generate secure apt install with caching for $package_list ?"
# Smart port selection
EXPOSE "?ono determine best port for flask app avoiding common conflicts with $existing_services ?"
HEALTHCHECK "?ono create appropriate health check for flask app at $app_endpoint ?"
```
**Database Migration Scripts:**
```sql
-- Adapts to your specific database version and setup
"?ono create table for user sessions with appropriate column types for $database_version ?";
"?ono add index on sessions table optimized for $query_patterns with proper naming for $db_engine ?";
"?ono generate migration rollback script for the above changes compatible with $migration_system ?";
```
**Microservice Orchestration:**
```yaml
# Kubernetes manifest that adapts to your environment
apiVersion: apps/v1
kind: Deployment
metadata:
name: "?ono generate service name from $project_config ?"-service
spec:
replicas: "?ono determine optimal replica count for $service_load_requirements ?"
template:
spec:
containers:
- name: app
resources:
limits:
memory: "?ono calculate memory limit for $app_type with $expected_traffic ?"
cpu: "?ono determine cpu limit based on $performance_profile ?"
```
**Install Scripts:**
```bash
#!/bin/bash
# Handles every possible system configuration
package_manager="?ono detect available package manager and return command syntax ?"
dependencies="?ono install python development dependencies using $package_manager with proper error handling ?"
# Smart Python setup
python_setup="?ono configure python environment with $python_version, create venv at $venv_path($project_name), handle $path_requirements ?"
```
## Variable Substitution
Ono supports intelligent variable substitution with universal syntax:
- **Variables**: `$variable_name` → platform-appropriate variable access
- **Function calls**: `$function_name($args)` → proper calling convention
- **Expressions**: `$(expression)` → evaluated expressions
```python
# Universal template
config_path = "?ono get config directory and create $config_file($app_name.conf) with proper $permissions(644) ?"
db_connection = "?ono establish database connection to $db_host with timeout $timeout_calc($load_factor + 30) ?"
```
**Becomes Python:**
```python
config_path = get_config_dir() + "/" + create_config_file(f"{app_name}.conf", permissions=0o644)
db_connection = create_db_connection(db_host, timeout=timeout_calc(load_factor + 30))
```
**Becomes Bash:**
```bash
config_path=$(get_config_dir && create_config_file "${app_name}.conf" 644)
db_connection=$(establish_db_connection "$db_host" $((load_factor + 30)))
```
## Quick Start
```bash
# Install
pip install onorous
# Set your LLM endpoint
export ONO_API_URL="http://localhost:8000/v1"
# Process templates
onorous process deploy.ono.sh -o deploy.sh
onorous process docker-compose.ono.yml -o docker-compose.yml
onorous process migration.ono.sql -o migration.sql
# Try it instantly (no install)
echo 'cleanup="?ono safely kill process on port $target_port ?"' | nc demo.onolang.com 8080
```
## Getting Started Examples
For a hands-on introduction, check out the **examples/getting_started/** directory, which contains:
1. **Ono templates** (`.ono.sh` files) - Source templates with `?ono` blocks
2. **Generated scripts** (`.sh` files) - Working bash scripts created by Ono
3. **Setup scripts** - Easy way to get a working environment
### Run the Examples
```bash
cd examples/getting_started
# Detect available LLM backends
bash model_detect.sh
# Start the Ono TCP server
bash start_ono.sh
# Or run the full setup
bash setup_ono.sh
```
### Process Your Own Templates
```bash
# Install onorous
pip install onorous
# Process a single template
onorous process template.ono.sh -o output.sh
# Process all templates in a directory
onorous process ./ -o output/
```
See [examples/getting_started/README.md](examples/getting_started/README.md) for detailed instructions.
## Context Intelligence
Build sophisticated workflows with automatic context management:
```bash
# Analyzes your system (creates "system" context)
system_info="?ono context=system analyze this server environment and identify key services ?"
# Uses system analysis for smart decisions
monitoring_setup="?ono context=system setup appropriate monitoring for the identified services ?"
# Forks context for specific tasks
backup_strategy="?ono context=system/backup design backup strategy for $critical_services ?"
security_audit="?ono context=system/security audit the identified services for common vulnerabilities ?"
```
## File Convention
Use `.ono.ext` naming to keep syntax highlighting and tool compatibility:
```
deploy.ono.sh # → deploy.sh
docker-compose.ono.yml # → docker-compose.yml
migrate.ono.sql # → migrate.sql
k8s-config.ono.yaml # → k8s-config.yaml
```
## Smart Metadata
Every generated file includes build info for reproducibility:
```bash
#!/bin/bash
# ?ono
# type=meta
# build_id=20250530-143022-abc123
# source=deploy.ono.sh
# ono_version=0.1.0
# ?
cleanup_result="SUCCESS"
```
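Generating such a header is straightforward; the sketch below follows the field names shown above, while the `build_id` format (timestamp plus short hash) and the hashing details are assumptions for illustration:

```python
import hashlib
from datetime import datetime

def build_header(source: str, ono_version: str, now: datetime) -> str:
    """Render a '# ?ono ... # ?' metadata comment block for a generated file."""
    stamp = now.strftime("%Y%m%d-%H%M%S")
    # Short content hash to disambiguate builds from the same second (assumed scheme).
    digest = hashlib.sha1(f"{source}:{stamp}".encode()).hexdigest()[:6]
    return "\n".join([
        "# ?ono",
        "# type=meta",
        f"# build_id={stamp}-{digest}",
        f"# source={source}",
        f"# ono_version={ono_version}",
        "# ?",
    ])
```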
## Why "Ono"?
The name comes from **p**hp → **o**no (removing the straight lines). Just like how ono takes PHP's templating concept and makes it fluid for the AI age.
Plus, it captures that "oh no, this is complicated" moment when you realize you need AI to figure it out instead of spending hours researching platform-specific edge cases.
---
**License:** MIT
**Docs:** [onolang.com](https://onolang.com)
**Demo:** `nc demo.onolang.com 8080`
| text/markdown | Ono Team | Ono Team <team@onolang.com> | null | Ono Team <team@onolang.com> | MIT | ono, preprocessor, ai, llm, templating, automation | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.8",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Py... | [] | https://onolang.com | null | >=3.8 | [] | [] | [] | [
"typer",
"pyyaml",
"requests",
"httpx"
] | [] | [] | [] | [
"Homepage, https://onolang.com",
"Documentation, https://onolang.com/docs",
"Repository, https://github.com/onolang/onorous",
"Issues, https://github.com/onolang/onorous/issues"
] | twine/6.1.0 CPython/3.13.12 | 2026-02-18T23:59:08.424226 | onorous-0.1.0.tar.gz | 15,548 | 5d/e3/558ec67d994e166f342a59f82460b48f4798aaa2d4c649c98d4e28b4f8d6/onorous-0.1.0.tar.gz | source | sdist | null | false | e31ed3f7873e3a5981e96b29cf03309f | a910a1d1934076421afba35fcaefada65fc9d67bbc64cfd483167a07f1e00f15 | 5de3558ec67d994e166f342a59f82460b48f4798aaa2d4c649c98d4e28b4f8d6 | null | [
"LICENSE.MIT"
] | 278 |
2.4 | gitmap-core | 0.1.0 | Version control for ArcGIS web maps — Git-like branching, commits, diff, and Portal sync | # GitMap
**Version control for ArcGIS web maps**
GitMap provides Git-like version control for ArcGIS Online and Enterprise Portal web maps. Branch, commit, diff, merge, push, and pull maps using familiar workflows.
## Table of Contents
- [Features](#features)
- [Installation](#installation)
- [Quick Start](#quick-start)
- [Configuration](#configuration)
- [CLI Commands](#cli-commands)
- [Usage Examples](#usage-examples)
- [Docker Setup](#docker-setup)
- [Development](#development)
- [Architecture](#architecture)
## Features
- **Version Control**: Track changes to ArcGIS web maps with commits and branches
- **Branching**: Create feature branches for parallel development
- **Diffing**: Compare map versions and see layer-level changes
- **Merging**: Merge branches with conflict resolution
- **Portal Integration**: Push and pull maps to/from ArcGIS Portal or ArcGIS Online
- **Map Discovery**: List and search available web maps from Portal with the `list` command
- **Layer Settings Transfer**: Transfer popup and form settings between maps with the `lsm` command
- **Bulk Repository Setup**: Automate cloning multiple maps with owner filtering
- **Auto-Pull**: Automatically sync all repositories with Portal to keep them up to date (with optional auto-commit)
- **Context Visualization**: Visualize event history and relationships in multiple formats (Mermaid, ASCII, HTML)
- **CLI Interface**: Familiar Git-like command-line interface
- **Rich Output**: Beautiful terminal output with colors and formatting
## Installation
### Prerequisites
- Python 3.11, 3.12, or 3.13 (arcgis SDK requires Python <3.14)
- ArcGIS Portal or ArcGIS Online account
- pip (Python package manager)
### Install from Source
1. Clone the repository:
```bash
git clone <repository-url>
cd gitmap
```
2. Install the core library:
```bash
pip install -e packages/gitmap_core
```
3. Install the CLI:
```bash
pip install -e apps/cli/gitmap
```
### Verify Installation
```bash
gitmap --version
```
You should see: `gitmap, version 0.5.0`
## Quick Start
### 1. Configure Authentication
Create a `.env` file in the project root (or copy from `configs/env.example`):
```bash
cp configs/env.example .env
```
Edit `.env` with your credentials:
```env
PORTAL_URL=https://your-org.maps.arcgis.com
PORTAL_USER=your_username
PORTAL_PASSWORD=your_password
```
**Note**: The `.env` file is git-ignored and should never be committed.
### 2. Initialize a Repository
Start a new GitMap repository:
```bash
gitmap init
```
Or initialize with project details:
```bash
gitmap init --project-name "My Web Map" --user-name "John Doe" --user-email "john@example.com"
```
### 3. Clone an Existing Map
Clone a web map from Portal:
```bash
gitmap clone <item_id>
```
Example:
```bash
gitmap clone abc123def456 --directory my-project
```
### 4. Make Changes and Commit
After modifying your map JSON:
```bash
# Check status
gitmap status
# Commit changes
gitmap commit -m "Added new operational layer"
```
### 5. Push to Portal
Push your changes to Portal:
```bash
gitmap push
```
## Configuration
### Repository Configuration
GitMap stores configuration in `.gitmap/config.json`:
```json
{
"version": "1.0",
"user_name": "John Doe",
"user_email": "john@example.com",
"project_name": "MyProject",
"remote": {
"name": "origin",
"url": "https://www.arcgis.com",
"folder_id": "abc123",
"item_id": "def456"
}
}
```
### Environment Variables
GitMap supports the following environment variables (set in `.env` or your shell):
- `PORTAL_URL` - Portal URL (defaults to `https://www.arcgis.com` for ArcGIS Online)
- `PORTAL_USER` - Portal username
- `PORTAL_PASSWORD` - Portal password
- `ARCGIS_USERNAME` - Alternative username variable
- `ARCGIS_PASSWORD` - Alternative password variable
### Authentication Methods
GitMap attempts authentication in this order:
1. Username/password provided via command-line options
2. Environment variables from `.env` file
3. ArcGIS Pro authentication (if running in ArcGIS Pro)
4. Anonymous access (limited functionality)
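The fallback order can be sketched as a simple resolver. The function name is illustrative (not part of the gitmap API), and step 3 (ArcGIS Pro authentication) is omitted since it only applies inside Pro:

```python
import os

def resolve_credentials(cli_user=None, cli_password=None):
    """Return (user, password, source) following GitMap's documented order:
    CLI options, then environment variables, then anonymous access."""
    if cli_user and cli_password:
        return cli_user, cli_password, "cli"
    user = os.environ.get("PORTAL_USER") or os.environ.get("ARCGIS_USERNAME")
    password = os.environ.get("PORTAL_PASSWORD") or os.environ.get("ARCGIS_PASSWORD")
    if user and password:
        return user, password, "env"
    # ArcGIS Pro authentication would be tried here; fall back to anonymous.
    return None, None, "anonymous"
```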
## CLI Commands
### `gitmap init`
Initialize a new GitMap repository.
```bash
gitmap init [PATH] [OPTIONS]
```
**Options:**
- `--project-name, -n` - Project name (defaults to directory name)
- `--user-name, -u` - Default author name for commits
- `--user-email, -e` - Default author email for commits
**Examples:**
```bash
gitmap init
gitmap init --project-name "My Project"
gitmap init /path/to/project --user-name "John Doe"
```
### `gitmap clone`
Clone a web map from Portal.
```bash
gitmap clone <ITEM_ID> [OPTIONS]
```
**Options:**
- `--directory, -d` - Directory to clone into (defaults to map title)
- `--url, -u` - Portal URL (defaults to ArcGIS Online)
- `--username` - Portal username (or use env var)
**Examples:**
```bash
gitmap clone abc123def456
gitmap clone abc123def456 --directory my-project
gitmap clone abc123def456 --url https://portal.example.com
```
### `gitmap setup-repos`
Bulk clone web maps into a repositories directory.
```bash
gitmap setup-repos [OPTIONS]
```
**Description:**
Automates the setup of a repositories directory by cloning multiple web maps at once. Each map is cloned into its own subdirectory with a `.gitmap` folder. Useful for setting up local copies of multiple maps owned by a specific user or matching specific criteria.
**Options:**
- `--directory, -d` - Directory to clone repositories into (defaults to 'repositories')
- `--owner, -o` - Filter web maps by owner username
- `--query, -q` - Search query to filter web maps (e.g., 'title:MyMap')
- `--tag, -t` - Filter web maps by tag
- `--max-results, -m` - Maximum number of web maps to clone (default: 100)
- `--url, -u` - Portal URL (or use PORTAL_URL env var)
- `--username` - Portal username (or use env var)
- `--password` - Portal password (or use env var)
- `--skip-existing` - Skip maps that already have directories (instead of failing)
**Examples:**
```bash
# Clone all maps owned by a specific user
gitmap setup-repos --owner myusername
# Clone to a custom directory
gitmap setup-repos --owner myusername --directory my-repos
# Clone maps with a specific tag
gitmap setup-repos --tag production --skip-existing
# Clone maps matching a search query
gitmap setup-repos --query "title:Project*" --owner myusername
# Combine filters
gitmap setup-repos --owner myusername --tag production --max-results 50
```
### `gitmap auto-pull`
Automatically pull updates for all GitMap repositories in a directory.
```bash
gitmap auto-pull [OPTIONS]
```
**Description:**
Scans a directory for GitMap repositories and pulls the latest changes from Portal for each one. Useful for keeping multiple local repositories in sync with their Portal counterparts. Can be run manually or scheduled via cron/systemd timer for automated synchronization.
**Options:**
- `--directory, -d` - Directory containing GitMap repositories (defaults to 'repositories')
- `--branch, -b` - Branch to pull for each repository (defaults to 'main')
- `--url, -u` - Portal URL (or use PORTAL_URL env var)
- `--username` - Portal username (or use env var)
- `--password` - Portal password (or use env var)
- `--skip-errors` - Continue pulling other repos if one fails (default: True)
- `--auto-commit` - Automatically commit changes after successful pull (default: False)
- `--commit-message, -m` - Custom commit message template (use {repo} for repository name, {date} for timestamp)
**Examples:**
```bash
# Pull updates for all repositories in the default 'repositories' directory
gitmap auto-pull
# Pull from a custom directory
gitmap auto-pull --directory my-repos
# Pull a specific branch from all repositories
gitmap auto-pull --branch production
# Pull with custom Portal URL
gitmap auto-pull --url https://portal.example.com
# Automatically commit changes after pulling
gitmap auto-pull --auto-commit
# Use a custom commit message template
gitmap auto-pull --auto-commit --commit-message "Auto-pull from Portal on {date}"
# Schedule with cron (every hour)
0 * * * * cd /path/to/project && gitmap auto-pull --auto-commit
```
### `gitmap list`
List all available web maps from Portal or ArcGIS Online.
```bash
gitmap list [OPTIONS]
```
**Description:**
Queries Portal/ArcGIS Online and displays all available web maps in a table format. Useful for discovering web map item IDs before cloning or browsing available maps in your organization.
**Options:**
- `--query, -q` - Search query to filter web maps (e.g., 'title:MyMap')
- `--owner, -o` - Filter web maps by owner username
- `--tag, -t` - Filter web maps by tag
- `--max-results, -m` - Maximum number of web maps to return (default: 100)
- `--url, -u` - Portal URL (or use PORTAL_URL env var)
- `--username` - Portal username (or use env var)
- `--password` - Portal password (or use env var)
**Examples:**
```bash
# List all web maps
gitmap list
# List web maps owned by a specific user
gitmap list --owner myusername
# List web maps with a specific tag
gitmap list --tag production
# Combine filters
gitmap list --owner myusername --tag production
# Search by title
gitmap list --query "title:MyMap"
# Limit results
gitmap list --max-results 50
```
### `gitmap status`
Show the working tree status.
```bash
gitmap status
```
Displays:
- Current branch
- Staged changes
- Unstaged changes
- Untracked files
### `gitmap branch`
List, create, or delete branches.
```bash
gitmap branch [BRANCH_NAME] [OPTIONS]
```
**Options:**
- `--delete, -d` - Delete a branch
- `--list, -l` - List all branches
**Examples:**
```bash
gitmap branch # List branches
gitmap branch feature/new-layer # Create new branch
gitmap branch -d feature/old # Delete branch
```
### `gitmap checkout`
Switch branches or restore working tree files.
```bash
gitmap checkout <BRANCH_NAME>
```
**Examples:**
```bash
gitmap checkout feature/new-layer
gitmap checkout main
```
### `gitmap commit`
Record changes to the repository.
```bash
gitmap commit [OPTIONS]
```
**Options:**
- `--message, -m` - Commit message (required)
- `--author` - Override commit author
**Examples:**
```bash
gitmap commit -m "Added new layer"
gitmap commit -m "Fixed layer visibility" --author "Jane Doe"
```
### `gitmap diff`
Show changes between commits, branches, or working tree.
```bash
gitmap diff [OPTIONS]
```
**Options:**
- `--branch, -b` - Compare with branch
- `--commit, -c` - Compare with commit
**Examples:**
```bash
gitmap diff # Show working tree changes
gitmap diff --branch main # Compare with main branch
gitmap diff --commit abc123 # Compare with specific commit
```
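A layer-level diff of two web map JSON documents reduces to comparing their `operationalLayers` arrays by layer id. This is a conceptual sketch against the standard webmap JSON shape, not gitmap's actual diff code:

```python
# Compare two webmap JSON dicts and report layer ids that were added,
# removed, or modified between them.
def diff_layers(old_map: dict, new_map: dict) -> dict:
    old = {lyr["id"]: lyr for lyr in old_map.get("operationalLayers", [])}
    new = {lyr["id"]: lyr for lyr in new_map.get("operationalLayers", [])}
    return {
        "added": sorted(new.keys() - old.keys()),
        "removed": sorted(old.keys() - new.keys()),
        "modified": sorted(k for k in old.keys() & new.keys() if old[k] != new[k]),
    }

base = {"operationalLayers": [{"id": "roads", "visible": True}]}
work = {"operationalLayers": [{"id": "roads", "visible": False},
                              {"id": "parcels", "visible": True}]}
print(diff_layers(base, work))
# {'added': ['parcels'], 'removed': [], 'modified': ['roads']}
```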
### `gitmap log`
Show commit history.
```bash
gitmap log [OPTIONS]
```
**Options:**
- `--branch, -b` - Show log for specific branch
- `--limit, -n` - Limit number of commits
**Examples:**
```bash
gitmap log
gitmap log --branch feature/new-layer
gitmap log --limit 10
```
### `gitmap merge`
Merge branches.
```bash
gitmap merge <BRANCH_NAME>
```
**Examples:**
```bash
gitmap merge feature/new-layer
```
### `gitmap push`
Push changes to Portal.
```bash
gitmap push [OPTIONS]
```
**Options:**
- `--branch, -b` - Branch to push (defaults to current)
- `--url, -u` - Portal URL
- `--username` - Portal username
**Examples:**
```bash
gitmap push
gitmap push --branch feature/new-layer
gitmap push --url https://portal.example.com
```
### `gitmap pull`
Pull changes from Portal.
```bash
gitmap pull [OPTIONS]
```
**Options:**
- `--branch, -b` - Branch to pull (defaults to current)
- `--url, -u` - Portal URL
- `--username` - Portal username
**Examples:**
```bash
gitmap pull
gitmap pull --branch main
```
### `gitmap lsm`
Transfer popup and form settings between maps.
```bash
gitmap lsm <SOURCE> [TARGET] [OPTIONS]
```
**Description:**
Transfers `popupInfo` and `formInfo` from layers and tables in a source map to matching layers and tables in a target map. Works with item IDs, branch names, commit IDs, or file paths. Automatically handles nested layers within GroupLayers.
**Options:**
- `--dry-run` - Preview changes without applying them
**Arguments:**
- `SOURCE` - Source map (item ID, branch name, commit ID, or file path)
- `TARGET` - Target map (optional, defaults to current index)
**Examples:**
```bash
# Transfer settings between branches
gitmap lsm main feature/new-layer
# Transfer from Portal item ID to current index
gitmap lsm abc123def456
# Transfer from file to file with dry-run
gitmap lsm source.json target.json --dry-run
# Transfer from another repository directory
gitmap lsm ../other-repo
```
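The core of the transfer described above, including GroupLayer handling, can be sketched as a recursive walk that matches layers by title and copies `popupInfo` across. This assumes the standard webmap JSON shape (`operationalLayers` with nested `layers` for GroupLayers) and is not gitmap's actual code:

```python
def iter_layers(layers):
    """Yield every layer, descending into nested GroupLayer 'layers' lists."""
    for lyr in layers:
        yield lyr
        yield from iter_layers(lyr.get("layers", []))

def transfer_popups(source_map: dict, target_map: dict) -> int:
    """Copy popupInfo from source layers to same-titled target layers in place."""
    source = {lyr.get("title"): lyr
              for lyr in iter_layers(source_map.get("operationalLayers", []))}
    transferred = 0
    for lyr in iter_layers(target_map.get("operationalLayers", [])):
        match = source.get(lyr.get("title"))
        if match and "popupInfo" in match:
            lyr["popupInfo"] = match["popupInfo"]
            transferred += 1
    return transferred
```

The same walk would apply to `formInfo`; a dry-run mode would report matches without assigning them.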
### `gitmap notify`
Send a notification to members of a Portal/AGOL group using the
ArcGIS API for Python `Group.notify` method (leveraging your Portal/AGOL
authentication; no SMTP settings required). Notifications go to users in
the target group according to their ArcGIS notification preferences.
By default, all group members are notified. Use `--user` to target specific
users (useful for testing).
```bash
gitmap notify --group <GROUP_ID_OR_TITLE> --subject "Subject" --message "Body"
```
**Options:**
- `--group, -g` - Group ID or title to target for notifications (required)
- `--user` - Specific username(s) to notify (can be used multiple times). If omitted, all group members are notified.
- `--subject, -s` - Notification subject line
- `--message, -m` - Notification body (or use `--message-file`)
- `--message-file` - Load the notification body from a text file
- `--url, -u` - Portal URL (defaults to ArcGIS Online)
- `--username` / `--password` - Portal credentials (or use env vars)
**Examples:**
```bash
# Notify all members of the editors group
gitmap notify --group editors --subject "Release planned" \
--message "New basemap will be published on Friday."
# Test by sending to a single user
gitmap notify --group editors --user testuser --subject "Test notification" \
--message "This is a test message."
# Notify multiple specific users
gitmap notify --group editors --user user1 --user user2 --subject "Update" \
--message "Please review the changes."
# Load a longer message from a file
gitmap notify --group "Field Crew" --subject "Inspection prep" --message-file notes.txt
```
### `gitmap context`
Visualize and manage the context graph showing events, relationships, and annotations.
```bash
gitmap context <SUBCOMMAND> [OPTIONS]
```
**Description:**
The context command provides tools for visualizing the event history and relationships in your GitMap repository. It tracks all operations (commits, pushes, pulls, merges, branches, diffs) and displays them in various formats suitable for terminal viewing or export to IDEs.
**Subcommands:**
- `show` - Display context graph in terminal (ASCII, Mermaid, or Mermaid Timeline formats)
- `export` - Export context graph to file (Mermaid, ASCII, or HTML)
- `timeline` - Show ASCII timeline of context events
- `graph` - Show ASCII graph of event relationships
**Options (for `show` and `timeline`):**
- `--format, -f` - Output format: `ascii`, `mermaid`, or `mermaid-timeline` (default: ascii)
- `--limit, -n` - Maximum events to display (default: 20)
- `--type, -t` - Filter by event types (can be used multiple times): `commit`, `push`, `pull`, `merge`, `branch`, `diff`
- `--no-unicode` - Use simple ASCII characters (no Unicode)
**Options (for `export`):**
- `--format, -f` - Output format: `mermaid`, `mermaid-timeline`, `mermaid-git`, `ascii`, `ascii-graph`, or `html` (default: mermaid)
- `--output, -o` - Output file path (defaults to context.<ext>)
- `--limit, -n` - Maximum events to include (default: 50)
- `--type, -t` - Filter by event types
- `--title` - Title for the visualization
- `--theme` - Color theme for HTML output: `light` or `dark` (default: light)
- `--direction` - Graph direction for Mermaid flowcharts: `TB`, `BT`, `LR`, or `RL` (default: TB)
- `--no-annotations` - Exclude annotations from visualization
**Examples:**
```bash
# Display context graph in terminal
gitmap context show
# Display as Mermaid diagram
gitmap context show --format mermaid
# Show only commits and pushes
gitmap context show --type commit --type push
# Export to Mermaid file for IDE viewing
gitmap context export
# Export to HTML with dark theme
gitmap context export --format html --theme dark -o context.html
# Export with custom title and direction
gitmap context export --format mermaid --direction LR --title "My Project Timeline"
# Show timeline of recent events
gitmap context timeline
# Show event relationship graph
gitmap context graph -n 15
```
### `gitmap config`
Manage repository configuration settings.
```bash
gitmap config [OPTIONS]
```
**Description:**
Configure repository settings such as the production branch (which triggers notifications on push) and auto-visualization (automatically regenerates context graph after events).
**Options:**
- `--production-branch, -p` - Set the production branch name (branch that triggers notifications on push)
- `--unset-production` - Remove the production branch setting
- `--auto-visualize` - Enable automatic context graph regeneration after events
- `--no-auto-visualize` - Disable automatic context graph regeneration
**Examples:**
```bash
# View current configuration
gitmap config
# Set production branch
gitmap config --production-branch main
# Set production branch to a release branch
gitmap config --production-branch release/1.0.0
# Remove production branch setting
gitmap config --unset-production
# Enable auto-visualization
gitmap config --auto-visualize
# Disable auto-visualization
gitmap config --no-auto-visualize
```
## Usage Examples
### Workflow: Bulk Repository Setup
```bash
# Set up a repositories directory with all maps owned by a user
gitmap setup-repos --owner myusername --directory my-maps
# Navigate into one of the cloned repositories
cd my-maps/MyWebMap
# Check the status
gitmap status
# Make changes and commit
gitmap commit -m "Updated layer symbology"
# Push back to Portal
gitmap push
```
### Workflow: Keeping Repositories in Sync
```bash
# Pull updates for all repositories at once
gitmap auto-pull
# Pull from a specific directory
gitmap auto-pull --directory my-maps
# Set up automated synchronization with cron (runs every hour)
# Add this to your crontab (crontab -e):
0 * * * * cd /path/to/project && /path/to/gitmap auto-pull --directory repositories
# Or use systemd timer for more control
# Create /etc/systemd/system/gitmap-sync.service and gitmap-sync.timer
```
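For the systemd route, a minimal service/timer pair might look like the following. The unit names and paths are illustrative placeholders, not something GitMap ships:

```ini
# /etc/systemd/system/gitmap-sync.service
[Unit]
Description=GitMap auto-pull synchronization

[Service]
Type=oneshot
WorkingDirectory=/path/to/project
ExecStart=/path/to/gitmap auto-pull --directory repositories

# /etc/systemd/system/gitmap-sync.timer
[Unit]
Description=Run GitMap auto-pull hourly

[Timer]
OnCalendar=hourly
Persistent=true

[Install]
WantedBy=timers.target
```

Enable it with `systemctl enable --now gitmap-sync.timer`; unlike cron, failed runs are visible via `systemctl status gitmap-sync.service`.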
### Workflow: Creating a New Feature
```bash
# 1. Start from main branch
gitmap checkout main
# 2. Create feature branch
gitmap branch feature/add-basemap
# 3. Switch to feature branch
gitmap checkout feature/add-basemap
# 4. Make changes to your map (edit JSON files)
# 5. Check what changed
gitmap status
gitmap diff
# 6. Commit changes
gitmap commit -m "Added new basemap layer"
# 7. Push to Portal
gitmap push --branch feature/add-basemap
# 8. Merge back to main
gitmap checkout main
gitmap merge feature/add-basemap
gitmap push
```
### Workflow: Collaborating with Others
```bash
# 1. Pull latest changes from Portal
gitmap pull
# 2. Check for conflicts
gitmap status
# 3. If conflicts exist, resolve them manually
# Then commit the resolution
gitmap commit -m "Resolved merge conflicts"
# 4. Push resolved changes
gitmap push
```
### Workflow: Comparing Versions
```bash
# Compare current working tree with main branch
gitmap diff --branch main
# Compare two specific commits
gitmap diff --commit abc123 --commit def456
# View commit history
gitmap log --limit 20
```
### Workflow: Transferring Layer Settings
```bash
# Transfer popup and form settings from one branch to another
gitmap checkout feature/new-layer
gitmap lsm main
# Preview what would be transferred (dry-run)
gitmap lsm main feature/new-layer --dry-run
# Transfer settings from a Portal item ID
gitmap lsm abc123def456
# Transfer settings between different repositories
gitmap lsm ../source-repo
```
## Docker Setup
GitMap includes Docker support for consistent development environments.
### Development Shell
Start an interactive development shell:
```bash
docker-compose up dev
```
This provides:
- Python 3.11 environment
- All dependencies installed
- Volume mounts for live code editing
- ArcGIS cache persistence
### Running Apps
Run a specific app:
```bash
APP_GROUP=cli APP_NAME=gitmap docker-compose up app
```
## Development
### Project Structure
```
gitmap/
├── apps/ # Runnable applications
│ └── cli/
│ └── gitmap/ # CLI application
├── packages/ # First-party libraries
│ └── gitmap_core/ # Core library
├── configs/ # Configuration templates
├── docker/ # Docker configuration
└── documentation/ # Specifications and docs
```
### Installing for Development
```bash
# Install core library in editable mode
pip install -e packages/gitmap_core
# Install CLI in editable mode
pip install -e apps/cli/gitmap
```
### Running Tests
```bash
# From project root
pytest
```
### Code Standards
- Python 3.11+
- PEP 8 style guide
- Type hints required
- PEP 257 docstrings
- Uses `pathlib.Path` for file operations
## Architecture
### Core Components
- **`gitmap_core`**: Core library providing:
- Repository management (`.gitmap` directory structure)
- Portal authentication and connection
- Web map JSON operations
- Diff and merge algorithms
- Remote push/pull operations
- **`gitmap-cli`**: Command-line interface providing:
- 18 Git-like commands (including `list`, `lsm`, `setup-repos`, `auto-pull`, `notify`, `context`, and `config`)
- Rich terminal output
- User-friendly error messages
### Repository Structure
GitMap stores version control data in a `.gitmap` directory:
```
.gitmap/
├── config.json # Repository configuration
├── HEAD # Current branch reference
├── index.json # Staging area
├── refs/
│ └── heads/ # Branch references
└── objects/
└── commits/ # Commit objects
```
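Resolving the current commit from this layout is a couple of file reads. A minimal sketch, assuming `HEAD` stores the branch name and each file under `refs/heads/` stores a commit id — the actual file formats are GitMap internals and may differ:

```python
from pathlib import Path

def resolve_head(repo_root: str) -> str:
    """Follow HEAD -> refs/heads/<branch> -> commit id (assumed layout)."""
    gitmap = Path(repo_root) / ".gitmap"
    branch = (gitmap / "HEAD").read_text().strip()
    return (gitmap / "refs" / "heads" / branch).read_text().strip()
```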
### Data Model
- **Commit**: Snapshot of map state with metadata
- **Branch**: Named pointer to a commit
- **Remote**: Portal connection configuration
- **Config**: Repository settings and defaults
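In Python terms, the model above can be pictured as a pair of small records. The field names here are illustrative, not GitMap's actual classes:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class Commit:
    """Snapshot of map state with metadata (hypothetical fields)."""
    id: str
    message: str
    snapshot: dict  # the web map JSON at commit time
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

@dataclass
class Branch:
    """Named pointer to a commit."""
    name: str
    head: str  # id of the commit this branch points to
```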
## Troubleshooting
### Authentication Issues
If you encounter authentication errors:
1. Verify your `.env` file exists and contains correct credentials
2. Check that environment variables are set:
```bash
echo $PORTAL_USER
echo $PORTAL_PASSWORD
```
3. Try providing credentials via command-line options
4. Verify Portal URL is correct
### Common Errors
**"Not connected. Call connect() first"**
- Ensure you've authenticated with Portal
- Check your `.env` file configuration
**"Repository already exists"**
- Remove existing `.gitmap` directory if starting fresh
- Or work within the existing repository
**"Failed to connect to Portal"**
- Verify Portal URL is accessible
- Check network connectivity
- Confirm credentials are correct
## Contributing
Contributions are welcome! Please:
1. Follow the code standards outlined in `documentation/project_specs/`
2. Add tests for new features
3. Update documentation as needed
4. Submit pull requests with clear descriptions
## License
MIT License - see LICENSE file for details
## Support
For issues, questions, or contributions:
- Open an issue on the repository
- Review documentation in `documentation/`
- Check specifications in `documentation/project_specs/`
---
**GitMap** - Version control for ArcGIS web maps
| text/markdown | null | null | null | null | MIT | agol, arcgis, gis, version-control, webmap | [
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Topic :: Scientific/Engineering :: GIS"
] | [] | null | null | >=3.9 | [] | [] | [] | [
"arcgis>=2.0.0",
"click>=8.0",
"rich>=12.0"
] | [] | [] | [] | [
"Homepage, https://github.com/14-TR/Git-Map",
"Repository, https://github.com/14-TR/Git-Map"
] | twine/6.2.0 CPython/3.14.2 | 2026-02-18T23:57:34.506771 | gitmap_core-0.1.0.tar.gz | 242,045 | 79/2d/3ac20dd26d91f6b6d1dcc1112e68ccc43a7231430eda965c584833e87058/gitmap_core-0.1.0.tar.gz | source | sdist | null | false | 26f8084630cf7e83482bf86cf963e09f | 1ac300c6a6e7896e7c56d0479356d20a14ba65c9368268723b86da5fb4a23423 | 792d3ac20dd26d91f6b6d1dcc1112e68ccc43a7231430eda965c584833e87058 | null | [
"LICENSE"
] | 280 |
2.4 | mcp-action-firewall | 0.3.0 | A transparent MCP proxy that intercepts dangerous tool calls and requires OTP-based user approval. | # 🔥 MCP Action Firewall
[](https://python.org)
[](LICENSE)
[](https://modelcontextprotocol.io)
### Works with any MCP-compatible agent
[](https://claude.ai)
[](https://cursor.sh)
[](https://codeium.com/windsurf)
[](https://openai.com)
[](https://gemini.google.com)
[](https://github.com/openclaw)
A transparent **MCP proxy** that intercepts dangerous tool calls and requires **OTP-based human approval** before execution. Acts as a circuit breaker between your AI agent and any MCP server.
## How It Works
```
┌──────────┐ stdin/stdout ┌──────────────────┐ stdin/stdout ┌──────────────────┐
│ AI Agent │ ◄────────────────► │ MCP Action │ ◄────────────────► │ Target MCP Server│
│ (Claude) │ │ Firewall │ │ (e.g. Stripe) │
└──────────┘ └──────────────────┘ └──────────────────┘
│
Policy Engine
┌───────────────┐
│ Allow? Block? │
│ Generate OTP │
└───────────────┘
```
MCP servers don't run like web servers — there's no background process on a port. Instead, your AI agent (Claude, Cursor, etc.) **spawns the MCP server as a subprocess** and talks to it over stdin/stdout. When the chat ends, the process dies.
The firewall inserts itself into that chain:
```
Without firewall:
Claude ──spawns──► mcp-server-stripe
With firewall:
Claude ──spawns──► mcp-action-firewall ──spawns──► mcp-server-stripe
```
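In code, the heart of such a proxy is just a subprocess with piped stdio. A stripped-down sketch — the real implementation in `proxy.py` also rewrites tool lists and intercepts calls:

```python
import json
import subprocess

def spawn_target(command: str) -> subprocess.Popen:
    """Launch the wrapped MCP server; the proxy owns its stdin/stdout."""
    return subprocess.Popen(
        command, shell=True, text=True,
        stdin=subprocess.PIPE, stdout=subprocess.PIPE,
    )

def forward(proc: subprocess.Popen, message: dict) -> dict:
    """Pass one newline-delimited JSON-RPC message through, return the reply."""
    proc.stdin.write(json.dumps(message) + "\n")
    proc.stdin.flush()
    return json.loads(proc.stdout.readline())
```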
So you just **replace the server command** in your MCP client config with the firewall, and tell the firewall what the original command was:
**Before** (direct):
```json
{ "command": "uvx", "args": ["mcp-server-stripe", "--api-key", "sk_test_..."] }
```
**After** (wrapped with firewall):
```json
{ "command": "uv", "args": ["run", "mcp-action-firewall", "--target", "mcp-server-stripe --api-key sk_test_..."] }
```
Then the firewall applies your security policy:
1. ✅ **Safe calls** (e.g. `get_balance`) → forwarded immediately
2. 🛑 **Dangerous calls** (e.g. `delete_user`) → blocked, OTP generated
3. 🔑 Agent asks user for the code → user replies → agent calls `firewall_confirm` → original action executes
## Installation
```bash
pip install mcp-action-firewall
# or
uvx mcp-action-firewall --help
```
## Quick Start — MCP Client Configuration
Add the firewall as a wrapper around any MCP server in your client config:
```json
{
"mcpServers": {
"stripe": {
"command": "uv",
"args": ["run", "mcp-action-firewall", "--target", "mcp-server-stripe --api-key sk_test_abc123"]
}
}
}
```
That's it. Everything after `--target` is the **full shell command** to launch the real MCP server — including its own flags like `--api-key`. The firewall doesn't touch those args; it just spawns the target and sits in front of it.
### More Examples
<details>
<summary>Claude Desktop with per-server rules</summary>
```json
{
"mcpServers": {
"stripe": {
"command": "uv",
"args": [
"run", "mcp-action-firewall",
"--target", "uvx mcp-server-stripe --api-key sk_test_...",
"--name", "stripe"
]
},
"database": {
"command": "uv",
"args": [
"run", "mcp-action-firewall",
"--target", "uvx mcp-server-postgres --connection-string postgresql://...",
"--name", "database",
"--config", "/path/to/my/firewall_config.json"
]
}
}
}
```
</details>
<details>
<summary>Cursor / Other MCP Clients</summary>
```json
{
"mcpServers": {
"github": {
"command": "uvx",
"args": [
"mcp-action-firewall",
"--target", "npx @modelcontextprotocol/server-github"
]
}
}
}
```
</details>
## The OTP Flow
When the agent tries to call a blocked tool, the firewall returns a structured response:
```json
{
"status": "PAUSED_FOR_APPROVAL",
"message": "⚠️ The action 'delete_user' is HIGH RISK and has been locked by the Action Firewall.",
"action": {
"tool": "delete_user",
"arguments": { "id": 42 }
},
"instruction": "To unlock this action, you MUST ask the user for authorization.\n\n1. Show the user the following and ask for approval:\n Tool: **delete_user**\n Arguments:\n{\"id\": 42}\n\n2. Tell the user: 'Please reply with approval code: **9942**' to allow this action, or say no to cancel.\n3. STOP and wait for their reply.\n4. When they reply with '9942', call the 'firewall_confirm' tool with that code.\n5. If they say no or give a different code, do NOT retry."
}
```
> **Argument visibility guarantee:** The arguments shown to the user are frozen at interception time — they are taken from the original blocked call, not from what the agent passes to `firewall_confirm`. The agent cannot change the arguments after the OTP is issued.
The `firewall_confirm` tool is automatically injected into the server's tool list:
```json
{
"name": "firewall_confirm",
"description": "Call this tool ONLY when the user provides the correct 4-digit approval code to confirm a paused action.",
"inputSchema": {
"type": "object",
"properties": {
"otp": {
"type": "string",
"description": "The 4-digit code provided by the user."
}
},
"required": ["otp"]
}
}
```
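The pending-action store behind this flow can be pictured as a dict keyed by OTP, populated at interception time. A simplified sketch of the freeze behaviour — the real store in `state.py` also enforces TTL and attempt limits:

```python
import secrets

class PendingActions:
    """Holds blocked calls until the user supplies the matching OTP."""

    def __init__(self):
        self._pending = {}

    def freeze(self, tool: str, arguments: dict) -> str:
        """Snapshot the blocked call and hand back a 4-digit code."""
        otp = f"{secrets.randbelow(10_000):04d}"
        self._pending[otp] = {"tool": tool, "arguments": dict(arguments)}
        return otp

    def confirm(self, otp: str):
        """Release the original call; the agent cannot swap in new arguments."""
        return self._pending.pop(otp, None)
```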
## Configuration
The firewall ships with sensible defaults. Override with `--config`:
```json
{
"global": {
"allow_prefixes": ["get_", "list_", "read_", "fetch_"],
"block_keywords": ["delete", "update", "create", "pay", "send", "transfer", "drop", "remove", "refund"],
"default_action": "block",
"otp_attempt_count": 1
},
"servers": {
"stripe": {
"allow_prefixes": [],
"block_keywords": ["refund", "charge"],
"default_action": "block"
},
"database": {
"allow_prefixes": ["select_"],
"block_keywords": ["drop", "truncate", "alter"],
"default_action": "block"
}
}
}
```
**Rule evaluation order:**
1. Tool name starts with an allow prefix → **ALLOW**
2. Tool name contains a block keyword → **BLOCK** (OTP required)
3. No match → fallback to `default_action`
**`otp_attempt_count`** — maximum number of failed OTP attempts before the pending action is permanently locked out. Defaults to `1` (any wrong code cancels the request). Increase it for a more forgiving UX, or keep it at `1` for maximum security.
**Per-server rules** extend (not replace) the global rules. Use `--name stripe` to activate server-specific overrides.
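The three-step order above fits in a few lines. This sketch mirrors the documented behaviour, but is not the actual `policy.py`:

```python
def evaluate(tool_name, allow_prefixes, block_keywords, default_action="block"):
    """Return 'allow' or 'block' for a tool name, per the documented order."""
    if any(tool_name.startswith(p) for p in allow_prefixes):
        return "allow"                    # 1. allow prefix wins
    if any(k in tool_name for k in block_keywords):
        return "block"                    # 2. block keyword -> OTP required
    return default_action                 # 3. no match -> fallback
```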
## CLI Reference
### `--target` (required)
The full command to launch the real MCP server. This is the server you want to protect:
```bash
mcp-action-firewall --target "mcp-server-stripe --api-key sk_test_abc123"
mcp-action-firewall --target "npx @modelcontextprotocol/server-github"
mcp-action-firewall --target "uvx mcp-server-postgres --connection-string postgresql://localhost/mydb"
```
### `--name` (optional)
Activates per-server rules from your config. Without it, only global rules apply:
```bash
mcp-action-firewall --target "mcp-server-stripe" --name stripe
```
### `--config` (optional)
Custom config file path. Without it, uses `firewall_config.json` in your current directory, or the bundled defaults:
```bash
mcp-action-firewall --target "mcp-server-stripe" --config /path/to/my_rules.json
```
### `-v` / `--verbose` (optional)
Turns on debug logging (written to stderr, won't interfere with MCP traffic):
```bash
mcp-action-firewall --target "mcp-server-stripe" -v
```
## Project Structure
```
src/mcp_action_firewall/
├── __init__.py # Package version
├── __main__.py # python -m support
├── server.py # CLI entry point
├── proxy.py # JSON-RPC stdio proxy
├── policy.py # Allow/block rule engine
├── state.py # OTP store with TTL
└── default_config.json # Bundled default rules
```
## Try It — Interactive Demo
See the firewall in action without any setup:
```bash
git clone https://github.com/starskrime/mcp-action-firewall.git
cd mcp-action-firewall
uv sync
uv run python demo.py
```
The demo simulates an AI agent and walks you through the full OTP flow:
1. ✅ **Safe call** (`get_balance`) → passes through instantly
2. 🛑 **Dangerous call** (`delete_user`) → blocked, OTP generated
3. 🔑 **You enter the code** → action executes after approval
## Known Limitations
### Argument Inspection
The firewall matches on **tool names only**, not argument values. This means a tool like `get_data({"sql": "DROP TABLE users"})` would pass if `get_` is in your allow list, because the policy engine only sees `get_data`.
**Workaround:** Use explicit tool names in your allow/block lists and set `"default_action": "block"` so unrecognized tools require approval.
> 🚧 **Roadmap:** Argument-level inspection (scanning argument values against `block_keywords`) is planned for a future release.
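Until then, an argument-level check is easy to approximate in a wrapper of your own: serialize the arguments and scan the result with the same keyword list. A hedged sketch (not firewall code):

```python
import json

def arguments_look_dangerous(arguments: dict, block_keywords) -> bool:
    """Flag a call whose argument *values* mention a blocked keyword."""
    blob = json.dumps(arguments).lower()
    return any(keyword in blob for keyword in block_keywords)
```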
## Development
```bash
# Install dev dependencies
uv sync
# Run tests
uv run pytest tests/ -v
# Run the firewall locally
uv run mcp-action-firewall --target "your-server-command" -v
```
## License
MIT
| text/markdown | Bakir Talibov | null | null | null | MIT | ai-safety, firewall, mcp, security, tool-calls | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Topic :: Security"
] | [] | null | null | >=3.12 | [] | [] | [] | [
"mcp[cli]>=1.0.0"
] | [] | [] | [] | [
"Homepage, https://github.com/starskrime/mcp-action-firewall",
"Repository, https://github.com/starskrime/mcp-action-firewall"
] | uv/0.10.2 {"installer":{"name":"uv","version":"0.10.2","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"macOS","version":null,"id":null,"libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":null} | 2026-02-18T23:57:03.876796 | mcp_action_firewall-0.3.0-py3-none-any.whl | 18,626 | 5c/f3/b2a62953e34bbffeb1be96bcb9f8b7604af63b723e694292fef61b774fa5/mcp_action_firewall-0.3.0-py3-none-any.whl | py3 | bdist_wheel | null | false | 6bebd9d6219777190087283581636172 | 2e812ae6f7f07ef0090e66b2dd1341515a350435fdf7675084a8ee7688766050 | 5cf3b2a62953e34bbffeb1be96bcb9f8b7604af63b723e694292fef61b774fa5 | null | [
"LICENSE"
] | 239 |
2.4 | openadapt-evals | 0.3.3 | Evaluation infrastructure for GUI agent benchmarks | # OpenAdapt Evals
[](https://github.com/OpenAdaptAI/openadapt-evals/actions/workflows/release.yml)
[](https://pypi.org/project/openadapt-evals/)
[](https://www.python.org/downloads/)
[](https://opensource.org/licenses/MIT)
Evaluation infrastructure for GUI agent benchmarks, built for [OpenAdapt](https://github.com/OpenAdaptAI/OpenAdapt).
## What is OpenAdapt Evals?
OpenAdapt Evals is a unified framework for evaluating GUI automation agents against standardized benchmarks such as [Windows Agent Arena (WAA)](https://microsoft.github.io/WindowsAgentArena/). It provides benchmark adapters, agent interfaces, Azure VM infrastructure for parallel evaluation, and result visualization -- everything needed to go from "I have a GUI agent" to "here are its benchmark scores."
## Benchmark Viewer

<details>
<summary>More screenshots</summary>
**Task Detail View** -- step-by-step replay with screenshots, actions, and execution logs:

**Cost Tracking Dashboard** -- real-time Azure VM cost monitoring with tiered sizing and spot instances:

</details>
## Key Features
- **Benchmark adapters** for WAA (live, mock, and local modes), with an extensible base for OSWorld, WebArena, and others
- **Agent interfaces** including `ApiAgent` (Claude / GPT), `RetrievalAugmentedAgent`, `RandomAgent`, and `PolicyAgent`
- **Azure VM infrastructure** with `AzureVMManager`, `PoolManager`, `SSHTunnelManager`, and `VMMonitor` for running evaluations at scale
- **CLI tools** -- `oa-vm` for VM and pool management (50+ commands), plus a benchmark CLI for running evals
- **Cost optimization** -- tiered VM sizing, spot instance support, and real-time cost tracking
- **Results visualization** -- HTML viewer with step-by-step screenshot replay, execution logs, and domain breakdowns
- **Trace export** for converting evaluation trajectories into training data
- **Configuration via pydantic-settings** with automatic `.env` loading
## Installation
```bash
pip install openadapt-evals
```
With optional dependencies:
```bash
pip install openadapt-evals[azure] # Azure VM management
pip install openadapt-evals[retrieval] # Demo retrieval agent
pip install openadapt-evals[viewer] # Live results viewer
pip install openadapt-evals[all] # Everything
```
## Quick Start
### Run a mock evaluation (no VM required)
```bash
openadapt-evals mock --tasks 10
```
### Run a live evaluation against a WAA server
```bash
# Start with a single Azure VM
oa-vm pool-create --workers 1
oa-vm pool-wait
# Run evaluation
openadapt-evals run --agent api-claude --task notepad_1
# View results
openadapt-evals view --run-name live_eval
# Clean up (stop billing)
oa-vm pool-cleanup -y
```
### Python API
```python
from openadapt_evals import (
ApiAgent,
WAALiveAdapter,
WAALiveConfig,
evaluate_agent_on_benchmark,
compute_metrics,
)
adapter = WAALiveAdapter(WAALiveConfig(server_url="http://localhost:5001"))
agent = ApiAgent(provider="anthropic")
results = evaluate_agent_on_benchmark(agent, adapter, task_ids=["notepad_1"])
metrics = compute_metrics(results)
print(f"Success rate: {metrics['success_rate']:.1%}")
```
### Parallel evaluation on Azure
```bash
# Create a pool of VMs and distribute tasks
oa-vm pool-create --workers 5
oa-vm pool-wait
oa-vm pool-run --tasks 50
# Or use Azure ML orchestration
openadapt-evals azure --workers 10 --waa-path /path/to/WindowsAgentArena
```
## Architecture
```
openadapt_evals/
├── agents/ # Agent implementations
│ ├── base.py # BenchmarkAgent ABC
│ ├── api_agent.py # ApiAgent (Claude, GPT)
│ ├── retrieval_agent.py# RetrievalAugmentedAgent
│ └── policy_agent.py # PolicyAgent (trained models)
├── adapters/ # Benchmark adapters
│ ├── base.py # BenchmarkAdapter ABC + data classes
│ └── waa/ # WAA live, mock, and local adapters
├── infrastructure/ # Azure VM and pool management
│ ├── azure_vm.py # AzureVMManager
│ ├── pool.py # PoolManager
│ ├── ssh_tunnel.py # SSHTunnelManager
│ └── vm_monitor.py # VMMonitor dashboard
├── benchmarks/ # Evaluation runner, CLI, viewers
│ ├── runner.py # evaluate_agent_on_benchmark()
│ ├── cli.py # Benchmark CLI (run, mock, live, view)
│ ├── vm_cli.py # VM/Pool CLI (oa-vm, 50+ commands)
│ ├── viewer.py # HTML results viewer
│ ├── pool_viewer.py # Pool results viewer
│ └── trace_export.py # Training data export
├── waa_deploy/ # Docker agent deployment
├── server/ # WAA server extensions
├── config.py # Settings (pydantic-settings, .env)
└── __init__.py
```
### How it fits together
```
LOCAL MACHINE AZURE VM (Ubuntu)
┌─────────────────────┐ ┌──────────────────────┐
│ oa-vm CLI │ SSH Tunnel │ Docker │
│ (pool management) │ ─────────────> │ └─ QEMU (Win 11) │
│ │ :5001 → :5000 │ ├─ WAA Flask API │
│ openadapt-evals │ :8006 → :8006 │ └─ Agent │
│ (benchmark runner) │ │ │
└─────────────────────┘ └──────────────────────┘
```
## CLI Reference
### Benchmark CLI (`openadapt-evals`)
| Command | Description |
|------------|-----------------------------------------------|
| `run` | Run live evaluation (localhost:5001 default) |
| `mock` | Run with mock adapter (no VM required) |
| `live` | Run against a WAA server (full control) |
| `azure` | Run parallel evaluation on Azure ML |
| `probe` | Check if a WAA server is ready |
| `view` | Generate HTML viewer for results |
| `estimate` | Estimate Azure costs |
### VM/Pool CLI (`oa-vm`)
| Command | Description |
|-----------------|------------------------------------------|
| `pool-create` | Create N VMs with Docker and WAA |
| `pool-wait` | Wait until WAA is ready on all workers |
| `pool-run` | Distribute tasks across pool workers |
| `pool-status` | Show status of all pool VMs |
| `pool-cleanup` | Delete all pool VMs and resources |
| `vm monitor` | Dashboard with SSH tunnels |
| `vm setup-waa` | Deploy WAA container on a VM |
Run `oa-vm --help` for the full list of 50+ commands.
## Configuration
Settings are loaded automatically from environment variables or a `.env` file in the project root via [pydantic-settings](https://docs.pydantic.dev/latest/concepts/pydantic_settings/).
```bash
# .env
ANTHROPIC_API_KEY=sk-ant-...
OPENAI_API_KEY=sk-...
# Azure (required for VM management)
AZURE_SUBSCRIPTION_ID=...
AZURE_ML_RESOURCE_GROUP=...
AZURE_ML_WORKSPACE_NAME=...
```
See [`openadapt_evals/config.py`](openadapt_evals/config.py) for all available settings.
## Custom Agents
Implement the `BenchmarkAgent` interface to evaluate your own agent:
```python
from openadapt_evals import BenchmarkAgent, BenchmarkAction, BenchmarkObservation, BenchmarkTask
class MyAgent(BenchmarkAgent):
def act(
self,
observation: BenchmarkObservation,
task: BenchmarkTask,
history: list[tuple[BenchmarkObservation, BenchmarkAction]] | None = None,
) -> BenchmarkAction:
# Your agent logic here
return BenchmarkAction(type="click", x=0.5, y=0.5)
def reset(self) -> None:
pass
```
## Contributing
We welcome contributions. To get started:
```bash
git clone https://github.com/OpenAdaptAI/openadapt-evals.git
cd openadapt-evals
pip install -e ".[dev]"
pytest tests/ -v
```
See [CLAUDE.md](./CLAUDE.md) for development conventions and architecture details.
## Related Projects
| Project | Description |
|---------|-------------|
| [OpenAdapt](https://github.com/OpenAdaptAI/OpenAdapt) | Desktop automation with demo-conditioned AI agents |
| [openadapt-ml](https://github.com/OpenAdaptAI/openadapt-ml) | Training and policy runtime |
| [openadapt-capture](https://github.com/OpenAdaptAI/openadapt-capture) | Screen recording and demo sharing |
| [openadapt-grounding](https://github.com/OpenAdaptAI/openadapt-grounding) | UI element localization |
## License
[MIT](https://opensource.org/licenses/MIT)
| text/markdown | null | Richard Abrich <richard@openadapt.ai> | null | OpenAdaptAI <contact@openadapt.ai> | null | agent, ai, automation, benchmark, evaluation, gui | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"Intended Audience :: Science/Research",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Pytho... | [] | null | null | >=3.10 | [] | [] | [] | [
"httpx>=0.25.0",
"open-clip-torch>=2.20.0",
"pillow>=10.0.0",
"pydantic-settings>=2.0.0",
"python-dotenv>=1.2.1",
"requests>=2.28.0",
"tenacity>=8.2.0",
"azure-ai-ml>=1.12.0; extra == \"all\"",
"azure-identity>=1.15.0; extra == \"all\"",
"azure-mgmt-compute>=33.0.0; extra == \"all\"",
"azure-mgm... | [] | [] | [] | [
"Homepage, https://github.com/OpenAdaptAI/openadapt-evals",
"Repository, https://github.com/OpenAdaptAI/openadapt-evals",
"Documentation, https://github.com/OpenAdaptAI/openadapt-evals#readme",
"Bug Tracker, https://github.com/OpenAdaptAI/openadapt-evals/issues"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-18T23:56:07.222732 | openadapt_evals-0.3.3.tar.gz | 6,085,061 | 8d/ea/674568d0cf900775a968027e0ea5050aabf8a0afb605d3ccd5cc42df8b12/openadapt_evals-0.3.3.tar.gz | source | sdist | null | false | fd75c25ad8a267742ba89b133aa77cae | fc0e45803a6ffd898e7abc8578d7202f0739cbcdb070307093d939a6173d330d | 8dea674568d0cf900775a968027e0ea5050aabf8a0afb605d3ccd5cc42df8b12 | MIT | [
"LICENSE"
] | 275 |
2.4 | aws-ssm-tools | 2.1.1 | Tools for AWS Systems Manager: ec2-session ecs-session ec2-ssh ssm-tunnel | # aws-ssm-tools - AWS System Manager Tools
[](https://github.com/mludvig/aws-ssm-tools/actions)
[](https://pypi.org/project/aws-ssm-tools/)
[](https://pypi.org/project/aws-ssm-tools/)
Helper tools for AWS Systems Manager: `ec2-session`, `ec2-ssh` and `ssm-tunnel`,
and for ECS Docker Exec: `ecs-session`
## Scripts included
* **ec2-session** (formerly _ssm-session_)
Wrapper around `aws ssm start-session` that can open
SSM Session to an instance specified by *Name* or *IP Address*.
It doesn't need user credentials or even `sshd` running on the instance.
Check out *[SSM Sessions the easy
way](https://aws.nz/projects/ssm-session/)* for an example use.
Works with any Linux or Windows EC2 instance registered in SSM.
* **ecs-session**
Wrapper around `aws ecs execute-command` that can run a command
or open an interactive session to an Exec-enabled ECS container
specified by the service, name, IP address, etc.
It doesn't need user credentials or `sshd` running on the container,
however the containers must be configured to allow this access.
Check out *[Interactive shell in ECS Containers](https://aws.nz/projects/ecs-session/)*
for an example use.
* **ec2-ssh** (formerly _ssm-ssh_)
Open an SSH connection to the remote server through *Systems Manager*
without the need for open firewall or direct internet access. SSH can
then be used to forward ports, copy files, etc.
Unlike `ssm-tunnel` it doesn't create a full VPN link, however it's in
some aspects more versatile as it can be used with `rsync`, `scp`,
`sftp`, etc.
It works with any client that can run SSH (including Mac OS-X) and
doesn't require a special agent on the instance, other than the standard
AWS SSM agent.
Also supports pushing your SSH key to the instance with `--send-key` (aka
*EC2 Instance Connect*, although that's an odd name for this function).
* **ssm-tunnel**
Open *IP tunnel* to the SSM instance and to enable *network access*
to the instance VPC. This requires [ssm-tunnel-agent](README-agent.md)
installed on the instance.
Works with *Amazon Linux 2* instances and probably other recent Linux
EC2 instances. Requires *Linux* on the client side - if you are on Mac
or Windows you can install a Linux VM in a [VirtualBox](https://virtualbox.org).
Requires `ssm-tunnel-agent` installed on the instance - see below for
instructions.
## Usage
1. **List instances** available for connection
```
~ $ ec2-session --list
InstanceId InstanceName HostName Addresses
------------------- ---------------- ------------------------------ --------------
i-07c189021bc56e042 nginx-web-server ip-10-251-128-70.ec2.internal 10.251.128.70
i-094df06d3633f3267 bastion-host ip-10-251-128-73.ec2.internal 10.251.128.73
i-02689d593e17f2b75 jenkins-server ip-10-251-129-78.ec2.internal 10.251.129.78
```
If you're like me and have access to many different AWS accounts you
can select the right one with `--profile` and / or change the `--region`:
```
~ $ ec2-session --profile aws-sandpit --region us-west-2 --list
InstanceId InstanceName HostName Addresses
------------------- ---------------- ----------------------------- --------------
i-07c189021bc56e042 nginx-web-server ip-10-251-128-70.ec2.internal 10.251.128.70
```
Alternatively use the standard AWS *environment variables*:
```
~ $ export AWS_DEFAULT_PROFILE=aws-sandpit
~ $ export AWS_DEFAULT_REGION=us-west-2
~ $ ec2-session --list
InstanceId InstanceName HostName Addresses
------------------- ---------------- ----------------------------- -------------
i-07c189021bc56e042 nginx-web-server ip-10-251-128-70.ec2.internal 10.251.128.70
```
2. **Open SSM session** to an instance:
This opens an interactive shell session over SSM without the need for
a password or SSH key. Note that by default the login user is `ssm-user`.
You can specify a different user with e.g. `--user ec2-user` or
even `--user root`.
Running `ec2-session` without specifying an instance to connect to will show a simple terminal menu.
It lists all the servers managed by SSM, and pressing Enter starts a connection to the highlighted server.
Note that you still need to pass `--user` if you are not using the default value.
You can skip the interactive menu by specifying the server directly on the command line.
```
~ $ ec2-session -v nginx-web-server --user ec2-user --reason "optional - The reason why you are connecting to the instance"
InstanceId InstanceName HostName Addresses
------------------- ---------------- ----------------------------- -------------
i-07c189021bc56e042 nginx-web-server ip-10-251-128-70.ec2.internal 10.251.128.70
Starting session with SessionId: botocore-session-0d381a3ef740153ac
[ec2-user@ip-10-251-128-70] ~ $ hostname
ip-10-251-128-70.ec2.internal
```
You can specify other SSM documents to run with `--document-name AWS-...`
to customise your session. Refer to AWS docs for details.
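For example (the document name below is only an illustration — list what is available in your account with `aws ssm list-documents`):
```
~ $ ec2-session nginx-web-server --document-name SSM-SessionManagerRunShell
```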
3. **Open SSH session** over SSM with *port forwarding*.
The `ec2-ssh` tool provides a connection and authentication mechanism
for running SSH over Systems Manager.
The target instance *does not need* a public IP address, it also does
*not* need an open SSH port in the Security Group. All it needs is to be
registered in the Systems Manager.
All `ssh` options are supported, go wild. In this example we forward
our local port 3306 to a MySQL RDS database listening on the same standard
port, using the SSH port forwarding option `-L 3306:mysql-rds.aws.nz:3306`.
```
~ $ ec2-ssh ec2-user@test1 -L 3306:mysql-rds.aws.nz:3306 -i ~/.ssh/aws-nz.pem
InstanceId InstanceName HostName Addresses
------------------- --------------------------- ------------------------------ --------------
i-07c189021bc56e042 nginx-web-server ip-10-251-128-70.ec2.internal 10.251.128.70
[ec2-ssh] INFO: Resolved instance name 'test1' to 'i-07c189021bc56e042'
[ec2-ssh] INFO: Running: ssh -o ProxyCommand='aws ssm start-session --target %h --document-name AWS-StartSSHSession --parameters portNumber=%p' i-07c189021bc56e042 -l ec2-user -L 3306:mysql-rds.aws.nz:3306 -i ~/.ssh/aws-nz.pem
OpenSSH_7.6p1 Ubuntu-4ubuntu0.3, OpenSSL 1.0.2n 7 Dec 2017
...
Last login: Sun Apr 12 20:05:09 2020 from localhost
__| __|_ )
_| ( / Amazon Linux 2 AMI
___|\___|___|
[ec2-user@ip-192-168-45-158] ~ $
```
From another terminal we can now connect to the MySQL RDS. Since the
port 3306 is forwarded from *localhost* through the tunnel we will
instruct `mysql` client to connect to `127.0.0.1` (localhost).
```
~ $ mysql -h 127.0.0.1 -u {RdsMasterUser} -p
Enter password: {RdsMasterPassword}
Welcome to the MariaDB monitor. Commands end with ; or \g.
Server version: 5.6.10 MySQL Community Server (GPL)
MySQL [(none)]> show processlist;
+-----+------------+-----------------------+
| Id | User | Host |
+-----+------------+-----------------------+
| 52 | rdsadmin | localhost |
| 289 | masteruser | 192.168.45.158:52182 | <<< Connection from test1 IP
+-----+------------+-----------------------+
2 rows in set (0.04 sec)
```
4. **Use `rsync` with `ec2-ssh`** to copy files to/from EC2 instance.
Since in the end we run a standard `ssh` client we can use it with
[rsync](https://en.wikipedia.org/wiki/Rsync) to copy files to/from the
EC2 instance.
```
~ $ rsync -e ec2-ssh -Prv ec2-user@test1:some-file.tar.gz .
some-file.tar.gz
31,337,841 100% 889.58kB/s 0:00:34 (xfr#1, to-chk=0/1)
sent 43 bytes received 31,345,607 bytes 814,172.73 bytes/sec
total size is 31,337,841 speedup is 1.00
```
We can also select a different AWS profile and/or region:
```
~ $ rsync -e "ec2-ssh --profile aws-sandpit --region us-west-2" -Prv ...
```
Alternatively set the profile and region through the standard AWS
*environment variables* `AWS_DEFAULT_PROFILE` and
`AWS_DEFAULT_REGION`.
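For example, reusing the profile and region from the listing examples above:
```
~ $ export AWS_DEFAULT_PROFILE=aws-sandpit
~ $ export AWS_DEFAULT_REGION=us-west-2
~ $ rsync -e ec2-ssh -Prv ec2-user@test1:some-file.tar.gz .
```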
5. **Create IP tunnel** and SSH to another instance in the VPC through it.
We will use `--route 192.168.44.0/23` that gives us access to the VPC CIDR.
```
~ $ ssm-tunnel -v tunnel-test --route 192.168.44.0/23
[ssm-tunnel] INFO: Local IP: 100.64.160.100 / Remote IP: 100.64.160.101
00:00:15 | In: 156.0 B @ 5.2 B/s | Out: 509.0 B @ 40.4 B/s
```
Leave it running and from another shell `ssh` to one of the instances listed
with `--list` above. For example to `test1` that's got VPC IP `192.168.45.158`:
```
~ $ ssh ec2-user@192.168.45.158
Last login: Tue Jun 18 20:50:59 2019 from 100.64.142.232
...
[ec2-user@test1 ~]$ w -i
21:20:43 up 1:43, 1 user, load average: 0.00, 0.00, 0.00
USER TTY FROM LOGIN@ IDLE JCPU PCPU WHAT
ec2-user pts/0 192.168.44.95 21:20 3.00s 0.02s 0.00s w -i
^^^^^^^^^^^^^
[ec2-user@test1 ~]$ exit
Connection to 192.168.45.158 closed.
~ $
```
Note the source IP `192.168.44.95` that belongs to the `tunnel-test`
instance - our connections will *appear* as if they come from this instance.
Obviously the **Security Groups** of your other instances must allow SSH
access from the IP or SG of your tunnelling instance.
All these tools support `--help` and a set of common parameters:
--profile PROFILE, -p PROFILE
Configuration profile from ~/.aws/{credentials,config}
--region REGION, -g REGION
Set / override AWS region.
--verbose, -v Increase log level.
--debug, -d Increase log level even more.
`ec2-ssh` supports only the long options, to prevent conflicts with `ssh`'s
own short options, which are passed through.
Standard AWS environment variables like `AWS_DEFAULT_PROFILE`,
`AWS_DEFAULT_REGION`, etc, are also supported.
## Installation
All the tools use **AWS CLI** to open **SSM Session** and then use that
session to run commands on the target instance. The target instances **must be
registered in SSM**, which means they need:
- **connectivity to SSM endpoint**, e.g. through public IP, NAT Gateway, or
SSM VPC endpoint.
- **EC2 instance IAM Role** with permissions to connect to Systems Manager.
Follow the detailed instructions at [**Using SSM Session Manager for
interactive instance access**](https://aws.nz/best-practice/ssm-session-manager/)
for more information.
### Install *AWS CLI* and `session-manager-plugin`
Make sure you've got `aws` and `session-manager-plugin` installed locally
on your laptop.
```
~ $ aws --version
aws-cli/1.18.31 Python/3.6.9 Linux/5.3.0-42-generic botocore/1.15.31
~ $ session-manager-plugin --version
1.1.56.0
```
Follow [AWS CLI installation
guide](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-install.html)
and [session-manager-plugin
installation guide](https://docs.aws.amazon.com/systems-manager/latest/userguide/session-manager-working-with-install-plugin.html) to install them if needed.
Note that `ec2-ssh` needs `session-manager-plugin` version *1.1.23* or
newer. Upgrade if your version is older.
### Register your instances with Systems Manager
*Amazon Linux 2* instances already have the `amazon-ssm-agent` installed and
running. All they need to register with *Systems Manager* is
**AmazonEC2RoleforSSM** managed role in their *IAM Instance Role* and network
access to `ssm.{region}.amazonaws.com` either directly or through a *https proxy*.
Check out the [detailed instructions](https://aws.nz/best-practice/ssm-session-manager/) for more info.
### Install SSM-Tools *(finally! :)*
The easiest way is to install the ssm-tools from *[PyPI](https://pypi.org/)* repository:
```
sudo pip3 install aws-ssm-tools
```
**NOTE:** SSM Tools require **Python 3.9 or newer**. Only `ssm-tunnel-agent`
requires **Python 3.8 or newer** as that's what's available by default
on *Amazon Linux 2* instances.
### Standalone *ssm-tunnel-agent* installation
Refer to *[README-agent.md](README-agent.md)* for `ssm-tunnel-agent`
installation details.
Alternatively, the agent is bundled with this package: copy it to
`/usr/local/bin/ssm-tunnel-agent` on the instance, make it executable,
and it should just work.
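Since `ec2-ssh` works anywhere `ssh` does, one way to get the bundled agent onto the instance is the following sketch — it assumes `ssm-tunnel-agent` is in your current directory and that your login user has `sudo`:
```
~ $ rsync -e ec2-ssh ssm-tunnel-agent ec2-user@test1:/tmp/
~ $ ec2-ssh ec2-user@test1 sudo install -m 755 /tmp/ssm-tunnel-agent /usr/local/bin/
```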
## Other AWS Utilities
Check out **[AWS Utils](https://github.com/mludvig/aws-utils)**
repository for more useful AWS tools.
## Author and License
All these scripts were written by [Michael Ludvig](https://aws.nz/)
and are released under [Apache License 2.0](http://www.apache.org/licenses/LICENSE-2.0).
| text/markdown | null | Michael Ludvig <mludvig@logix.net.nz> | null | null | null | aws, ec2-session, ec2-ssh, ecs-session, ssm, ssm-tunnel | [
"Development Status :: 5 - Production/Stable",
"Environment :: Console",
"Intended Audience :: System Administrators",
"License :: OSI Approved :: Apache Software License",
"Operating System :: POSIX :: Linux",
"Programming Language :: Python :: 3 :: Only",
"Programming Language :: Python :: 3.10",
"P... | [] | null | null | >=3.10 | [] | [] | [] | [
"boto3>=1.22.0",
"botocore",
"packaging",
"pexpect",
"simple-term-menu",
"tabulate"
] | [] | [] | [] | [
"Bug Tracker, https://github.com/mludvig/aws-ssm-tools/issues",
"Documentation, https://github.com/mludvig/aws-ssm-tools/blob/master/README.md",
"Source Code, https://github.com/mludvig/aws-ssm-tools"
] | twine/6.1.0 CPython/3.12.3 | 2026-02-18T23:56:03.465864 | aws_ssm_tools-2.1.1.tar.gz | 23,194 | f4/b9/e5357e57bb8149b97dbc37e67002ed7ac948899c016d07ac8b767ec50a1a/aws_ssm_tools-2.1.1.tar.gz | source | sdist | null | false | c52a4b3d2dd647c5be187d0cd4fba09d | 80d291f220216d2db879bbdc46b55a78de0ae50398a4559c61dcb177dd9a10d9 | f4b9e5357e57bb8149b97dbc37e67002ed7ac948899c016d07ac8b767ec50a1a | Apache-2.0 | [
"LICENSE"
] | 374 |
2.4 | llmboost-hub | 0.4.1 | A lightweight CLI tool for managing LLMBoost™ model images and environments. | # [LLMBoost Hub (lbh)](https://llmboost.mangoboost.io/docs/)
Manage LLMBoost™ model containers and environments to run, serve, and tune large language models.
Note: This is proprietary software and requires a valid LLMBoost™ license to use. Request a license at [support@mangoboost.io](mailto:support@mangoboost.io).
---
## Pre-requisites
### Dependencies:
- Python 3.10+
- Docker 27.3.1+
- NVIDIA GPU: [nvidia-docker2](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/install-guide.html) or AMD GPU: [ROCm 6.3+](https://rocm.docs.amd.com/en/latest/Installation_Guide/Installation-Guide.html)
### Install LLMBoost Hub:
```bash
pip install llmboost_hub
# Verify installation
lbh --version
```
Upgrade:
```bash
pip install --upgrade llmboost_hub
```
Note: This document uses `lbh` interchangeably with `llmboost_hub`.
### Login to Hugging Face and Docker:
```bash
huggingface-cli login # or set HF_TOKEN env var
docker login -u <your_docker_username>
```
---
## Quick start
Fetch list of supported models from remote (automatically authenticates LLMBoost license):
```bash
lbh fetch # only needed once a day
```
One-liner to start serving a model (automatically downloads image and model, if needed):
```bash
lbh serve <Repo/Model-Name> # Full model name (including repository or organization name) must match the name from https://huggingface.co
```
For example:
```bash
lbh serve meta-llama/Llama-3.1-1B-Instruct
```
#### Basic workflow:
```bash
lbh fetch # authenticate LLMBoost license
lbh list # list models you've prepared (from model_paths.yaml)
lbh list --discover /path # discover models in a directory and add to model_paths.yaml
lbh prep <Repo/Model-Name> # download image and model assets
lbh run <Repo/Model-Name> # start container
lbh serve <Repo/Model-Name> # start LLMBoost server inside container
lbh test <Repo/Model-Name> # send test request
lbh stop <Repo/Model-Name> # stop container
```
For more details, see the [Command Reference](#command-reference) section below.
For more details, see the [Configuration Options](#configuration-options) section below.
#### Shell completions (auto-complete commands and model names):
```bash
eval "$(lbh completions)" # current shell
lbh completions [--venv|--profile] # persist for venv or profile
```
**Note:** Model name completion shows all supported models from `lbh list` (includes wildcard-expanded models).
## Configuration Options
*llmboost_hub* uses the following environment variables:
- `LBH_HOME`: base directory for all *llmboost_hub* data. (defaults: (host) `~/.llmboost_hub` <- (container) `/llmboost_hub`)
- `LBH_MODELS`: directory for storing and retrieving model assets. (default: `$LBH_HOME/models`)
- `LBH_MODEL_PATHS`: YAML file mapping model names to their paths. Automatically updated by `prep` and `run --model_path`. (default: `$LBH_HOME/model_paths.yaml`)
- `LBH_LOCAL_DB`: persistent local database for tuning results. Survives container restarts, system reboots, and LLMBoost version upgrades. (default: `$LBH_HOME/local_inference.db`)
- `LBH_WORKSPACE`: mounted user workspace for manually transferring files out of containers. (defaults: (host) `$LBH_HOME/workspace` <- (container) `/user_workspace`)
Notes:
- A configuration file is stored at `$LBH_HOME/config.yaml` with all of the above settings (and other advanced settings).
- Precedence order for settings: Environment variables > Configuration file > Defaults
- `LBH_HOME` can only be changed by setting the env var (or in `~/.bashrc`).
- WARNING: Changing `LBH_HOME` will cause a new data directory to be used, and all configuration will be reset.
- `HF_TOKEN` is injected automatically when set.
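For example, to keep model assets on a larger data disk (the path is illustrative):
```bash
export LBH_MODELS=/data/llmboost/models
lbh prep meta-llama/Llama-3.1-1B-Instruct   # assets are then stored under /data/llmboost/models
```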
---
## Command Reference
Use `lbh -h` for a summary of all commands, and `lbh [COMMAND] -h` for help with a specific command and all available options.
Use `lbh -v [COMMAND]` for verbose output with any command; shows useful diagnostic info for troubleshooting.
- `lbh login`
- Reads `$LBH_LICENSE_PATH` or prompts for a token.
- Validates online and saves the license file.
- `lbh fetch`
- Fetches latest models supported by LLMBoost.
- Filters to available GPU.
- `lbh list [query] [--discover PATH]`
- Lists models you've prepared (tracked in `model_paths.yaml`).
- **Default mode**: Shows only models prepared via `lbh prep`.
- **Discovery mode** (`--discover /path`): Scans directory for models and prompts to add them to `model_paths.yaml`.
- Status meanings:
- `pending`: model path doesn't exist or is empty
- `stopped`: model exists but container not running
- `running`: container running but idling
- `initializing`: container running and starting LLMBoost server
- `serving`: LLMBoost server ready to accept requests
- `tuning`: autotuner running
- Supports query filtering (case-insensitive, e.g., `lbh list llama`)
- Works correctly even when `LBH_MODELS` changes (paths in `model_paths.yaml` are absolute)
- `lbh prep <Repo/Model-Name> [--only-verify] [--fresh]`
- Pulls the image and downloads HF assets.
- Automatically saves model path to `LBH_MODEL_PATHS` after successful preparation.
- `--only-verify` checks digests and sizes.
- `--fresh` removes existing image and re-downloads model assets from Hugging Face.
- `lbh run <Repo/Model-Name> [OPTIONS] -- [DOCKER FLAGS...]`
- Resolves and starts the container detached.
- Mounts `$LBH_HOME` and `$LBH_WORKSPACE`. Injects HF_TOKEN.
- NVIDIA GPUs use `--gpus all`. AMD maps `/dev/dri` and `/dev/kfd`.
- Path resolution: checks `LBH_MODEL_PATHS` first, then falls back to `$LBH_MODELS/<repo>/<model>`.
- Useful options:
- `--image <image>`: override docker image.
- `--model_path <model_path>`: override model assets path (saved to `LBH_MODEL_PATHS` for future use).
- `--restart`: restarts container, if already running.
- `--use-local-db`: merge persistent local database (~/.llmboost_hub/local_inference.db) into container to leverage historical tuning data.
- Pass extra docker flags after `--`.
- `lbh serve <Repo/Model-Name> [--host 0.0.0.0] [--port 8011] [--detached] [--force] -- [LLMBOOST ARGS...]`
- Starts LLMBoost server inside the container.
- Waits until ready, unless `--detached`.
- `--force` skips GPU utilization checks (use if GPU utilization is incorrectly reported by NVidia or AMD GPU drivers).
- `--use-local-db`: merge persistent local database (~/.llmboost_hub/local_inference.db) into container to leverage historical tuning data.
- Pass extra llmboost serve arguments after `--`.
- `lbh test <Repo/Model-Name> [--query "..."] [-t N] [--host 127.0.0.1] [--port 8011]`
- Sends a test request to `/v1/chat/completions`.
- `lbh attach <Repo/Model-Name> [-c <container name or ID>]`
- Opens a shell in the running container.
- `lbh stop <Repo/Model-Name> [-c <container name or ID>]`
- Stops the container.
- `lbh status [model]`
- Shows status and model.
- `lbh tune <Repo/Model-Name> [--metrics throughput] [--detached] [--image <image>]`
- Runs the autotuner.
  - Results are automatically saved to the persistent local database (`$LBH_LOCAL_DB`) and survive container restarts, system reboots, and LLMBoost version upgrades.
- Use `lbh serve --use-local-db` to leverage tuning results from previous sessions.
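Putting `tune` and `serve` together (model name reused from the Quick start above):
```bash
lbh tune meta-llama/Llama-3.1-1B-Instruct --metrics throughput
lbh serve meta-llama/Llama-3.1-1B-Instruct --use-local-db
```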
### Cluster Commands (Multi-Node Deployments)
- `lbh cluster install [--kubeconfig PATH] [--docker-username USER] [--docker-pat TOKEN] [--docker-email EMAIL] [-- EXTRA_HELM_ARGS]`
- Install LLMBoost Helm chart and Kubernetes infrastructure for multi-node deployments.
- Displays access credentials for management and monitoring UIs after installation.
- Requires running Kubernetes cluster and helm installed.
- Docker authentication options:
- `--docker-username`, `--docker-pat`, `--docker-email`: Provide credentials directly (all three required together)
- Alternatively, run `docker login` and credentials will be read from `~/.docker/config.json`
- If neither provided, cluster will be installed without Docker registry secret
- `lbh cluster deploy [-f CONFIG_FILE] [--kubeconfig PATH]`
- Deploy models across cluster nodes based on configuration file.
- Generates and applies Kubernetes CRD manifests.
- Config template: `$LBH_HOME/utils/template_cluster_config.jsonc`
- `lbh cluster status [--kubeconfig PATH] [--show-secrets]`
- Show status of all model deployments and management services.
- Displays summary statistics: `Models: <ready>/<total>` and `Mgmt.: <ready>/<total>`
- Shows model deployment table with pod status, restarts, and error messages.
- Service URLs for management UI and monitoring (Grafana).
- Use `--show-secrets` to display access credentials (masked).
- Use `-v --show-secrets` for full unmasked credentials.
- `lbh cluster logs [--models|--management] [--pod POD_NAME] [--tail TAIL_ARGS...] [--grep GREP_ARGS...] [--kubeconfig PATH]`
- View logs from model deployment or management pods.
- `--models`: Show logs from model deployment pods.
- `--management`: Show logs from management/monitoring pods (displays as table).
- `--pod POD_NAME`: Filter to specific pod by name.
- `--tail TAIL_ARGS`: Show last N lines from workspace logs (default: 10).
- `--grep GREP_ARGS`: Filter logs by pattern (uses awk for pattern matching).
- Defaults to showing both model and management logs if no filter specified.
- `lbh cluster remove <MODEL_NAME> [--all] [--kubeconfig PATH]`
- Remove specific model deployments from the cluster.
- Deletes LLMBoostDeployment custom resources by name.
- `--all`: Remove all model deployments (requires confirmation unless used with --force).
- Example: `lbh cluster remove facebook/opt-125m` or `lbh cluster remove --all`
- `lbh cluster uninstall [--kubeconfig PATH] [--force]`
- Uninstall LLMBoost cluster resources.
- Prompts for confirmation unless `--force` is used.
- Does not automatically delete the namespace.
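A typical cluster lifecycle, end to end (the config file name is illustrative — start from the template at `$LBH_HOME/utils/template_cluster_config.jsonc`):
```bash
lbh cluster install
lbh cluster deploy -f my_cluster_config.jsonc
lbh cluster status --show-secrets
lbh cluster remove --all
lbh cluster uninstall
```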
---
## Support
- Docs: https://llmboost.mangoboost.io/docs/
- Website: https://llmboost.mangoboost.io/
- Email: support@mangoboost.io
| text/markdown | null | Harish Kambhampaty <harish.kambhampaty@mangoboost.io> | null | null | null | LLM, Docker, CLI, HPC, AI, LLMBoost | [
"Programming Language :: Python :: 3",
"Operating System :: OS Independent",
"Intended Audience :: Developers"
] | [] | null | null | >=3.11 | [] | [] | [] | [
"click>=8.0",
"requests>=2.31",
"openai>=0.27",
"tabulate>=0.9",
"rich>=13.7",
"docker>=7.0",
"pyyaml>=6.0",
"pandas>=2.0",
"licensing",
"bcrypt>=4.0",
"huggingface_hub>=0.23",
"black>=23.1.0",
"cryptography>=41.0",
"build>=1.0.0; extra == \"dev\"",
"wheel>=0.42; extra == \"dev\"",
"se... | [] | [] | [] | [
"Homepage, https://llmboost.mangoboost.io/",
"Documentation, https://llmboost.mangoboost.io/docs/"
] | twine/6.2.0 CPython/3.12.12 | 2026-02-18T23:55:52.623241 | llmboost_hub-0.4.1.tar.gz | 71,625 | 7a/84/701675312a9efc19f41d874951e0ae4f75f28ccb01662e30cdc94c35e062/llmboost_hub-0.4.1.tar.gz | source | sdist | null | false | 61c00389364292ff61cf03e0d291efbe | b2a4cd0c52ee84f12e7667a087b57076e0bc6161c6e9de2af93a1c678b65ed31 | 7a84701675312a9efc19f41d874951e0ae4f75f28ccb01662e30cdc94c35e062 | null | [
"LICENSE"
] | 172 |
2.4 | ezrules | 0.12.0 | Open-source transaction monitoring engine for business rules | # ezrules
Open-source transaction monitoring engine for business rules.
ezrules provides a Python-based framework for defining, managing, and executing business rules with a web-based management interface and scalable infrastructure for rule execution and backtesting.
## ✨ Features
- **Rule Engine**: Flexible Python-based rule execution with custom logic support
- **Management Interface**: Modern web UI for creating and managing rules
- **Enterprise Security**: Granular role-based access control with 27 permission types
- **Transaction Labeling**: Comprehensive fraud analytics with API and bulk CSV upload capabilities
- **Analytics Dashboard**: Real-time transaction volume charts with configurable time ranges (1h, 6h, 12h, 24h, 30d)
- **Scalable Architecture**: Unified API service with integrated rule evaluation
- **Database Integration**: PostgreSQL backend with SQLAlchemy ORM and full audit history
- **Audit Trail**: Change tracking for rules, user lists, outcomes, labels, and field type configurations, with per-change user attribution
- **Field Type Management**: Auto-discovers JSON field types from live traffic and test payloads; configurable type casting (integer, float, string, boolean, datetime) applied before rule evaluation so comparisons behave correctly regardless of how values arrive in JSON
- **Backtesting**: Test rule changes against historical data before deployment
- **CLI Tools**: Command-line interface for database management and realistic test data generation
## 🏗️ Architecture
ezrules consists of several core components:
- **Rule Engine**: Evaluates events against defined rules and aggregates outcomes
- **API Service**: FastAPI-based API with JWT authentication, including real-time rule evaluation at `/api/v2/evaluate` (default port 8888)
- **Web Frontend**: Modern UI for rule management, analytics, and administration
- **Database Layer**: PostgreSQL storage for rules, events, and execution logs
### Data Flow
1. Events are submitted to the API service at `/api/v2/evaluate`
2. Rules are executed against event data
3. Outcomes are aggregated and stored
4. Results are available via API and web interface
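As a concrete sketch of step 1, an evaluation request might look like this (the payload fields are illustrative; whether this endpoint requires a JWT bearer token is an assumption based on the API authentication mentioned above):
```bash
curl -X POST http://localhost:8888/api/v2/evaluate \
  -H "Authorization: Bearer <access_token>" \
  -H "Content-Type: application/json" \
  -d '{"event_id": "txn_123", "amount": 120.0}'
```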
## 🚀 Quick Start
### Prerequisites
- **Python 3.12+**
- **PostgreSQL** — used for rule storage, audit logs, and Celery result backend
- **Redis** — used as the Celery message broker for backtesting tasks
- **Docker & Docker Compose** (recommended) — to run PostgreSQL, Redis, and the Celery worker with a single command
#### Start infrastructure with Docker Compose (recommended)
```bash
docker compose up -d
```
This starts three services in the background:
- **PostgreSQL** on port 5432 — database (data persisted in a Docker volume)
- **Redis** on port 6379 — Celery message broker
- **Celery worker** — processes backtest tasks (built from the project `Dockerfile`)
The worker waits for PostgreSQL and Redis to be healthy before starting.
To stop:
```bash
docker compose down # stop containers, keep data
docker compose down -v # stop containers and delete data
```
After `docker compose up -d`, you only need to run the API locally:
```bash
uv run ezrules api --port 8888
```
#### Or install services manually
<details>
<summary>Manual installation instructions</summary>
**Redis:**
```bash
# macOS
brew install redis && brew services start redis
# Ubuntu/Debian
sudo apt install redis-server && sudo systemctl start redis
```
**PostgreSQL:** Install via your system package manager or use the standalone Docker script in `scripts/run_postgres_locally.sh`.
</details>
Redis must be running on `localhost:6379` (default). To use a different URL, set the `EZRULES_CELERY_BROKER_URL` environment variable (e.g. `redis://myhost:6380/0`).
### Installation
```bash
# Clone the repository
git clone https://github.com/sofeikov/ezrules.git
cd ezrules
# Install dependencies
uv sync
```
### Database Setup
```bash
# Initialize the database
uv run ezrules init-db
# Initialize database with automatic deletion of existing database (non-interactive)
uv run ezrules init-db --auto-delete
# Set up permissions and default roles
uv run ezrules init-permissions
# Add a user
uv run ezrules add-user --user-email admin@example.com --password admin
```
The `init-db` command automatically handles database creation and provides options for managing existing databases:
- **Interactive mode** (default): Prompts if you want to delete and recreate existing databases
- **Auto-delete mode** (`--auto-delete`): Automatically deletes existing databases without prompting
- **Smart creation**: Only creates the database if it doesn't already exist
### Start Services
```bash
# Start the API service (FastAPI - includes rule evaluation and frontend API)
uv run ezrules api --port 8888
# With auto-reload for development:
uv run ezrules api --port 8888 --reload
```
#### Celery Worker (required for backtesting)
The backtesting feature runs rule comparisons asynchronously via Celery. A Celery worker must be running for backtest tasks to execute.
If you're using `docker compose up -d`, the worker is **already running** — no extra steps needed.
To run the worker manually instead (e.g. for debugging):
```bash
# On macOS, use --pool=solo to avoid fork-related crashes (SIGSEGV)
uv run celery -A ezrules.backend.tasks worker -l INFO --pool=solo
# On Linux, the default prefork pool works fine:
uv run celery -A ezrules.backend.tasks worker -l INFO
```
A VS Code launch configuration named **"Celery Worker"** is also available in `.vscode/launch.json` for debugging the worker with breakpoints.
**Architecture notes:**
- **Broker** (Redis): Delivers task messages from the API to the worker
- **Result backend** (PostgreSQL): Stores task results in the same database as the application, using the `EZRULES_DB_ENDPOINT` connection string
- Without a running worker, backtest requests will remain in `PENDING` state indefinitely
### Web Frontend
ezrules includes a web frontend that communicates with the FastAPI backend.
#### Features
The frontend provides:
- **Rule List View**: Browse all rules with a modern, responsive interface
- **Rule Detail View**: View comprehensive rule details including:
- Rule ID, description, and logic
- Created date and version history
- Test functionality with dynamic JSON input
- Real-time rule testing with sample data
- Revision history browsing with read-only historical revision views
- **Labels Management**: Full CRUD for transaction labels — list, create, and delete labels (with confirmation), plus a link to bulk CSV upload
- **Label Analytics**: View labeled transaction analytics — total labeled events metric card, per-label time-series charts with Chart.js, and a time range selector (1h, 6h, 12h, 24h, 30d)
- **Seamless Navigation**: Navigate between rule list, detail, labels, and analytics pages
#### Build Frontend (optional)
```bash
cd ezrules/frontend
npm install
npm run build
```
Build output will be generated in `ezrules/frontend/dist/`.
### Generate Test Data
```bash
# Create sample rules and events for testing
uv run ezrules generate-random-data --n-rules 10 --n-events 100
# Generate events with realistic fraud labeling
uv run ezrules generate-random-data --n-events 100 --label-ratio 0.3 --export-csv test_labels.csv
```
## 🔐 Enterprise Security
ezrules includes a comprehensive role-based access control system designed for enterprise compliance requirements.
### Permission Types
The system supports 27 granular permission types:
**Rule Management:**
- `create_rule` - Create new business rules
- `modify_rule` - Edit existing rules
- `delete_rule` - Delete rules
- `view_rules` - View rules and rule history
**Outcome Management:**
- `create_outcome` - Add new outcome types
- `modify_outcome` - Edit outcome definitions
- `delete_outcome` - Remove outcome types
- `view_outcomes` - View outcome configurations
**List Management:**
- `create_list` - Create new user lists
- `modify_list` - Add/remove list entries
- `delete_list` - Delete entire lists
- `view_lists` - View user lists
**Label Management:**
- `create_label` - Create transaction labels
- `modify_label` - Modify transaction labels
- `delete_label` - Delete transaction labels
- `view_labels` - View transaction labels
**Audit Access:**
- `access_audit_trail` - View system audit logs and change history
**User Management:**
- `view_users` - View users
- `create_user` - Create users
- `modify_user` - Modify users
- `delete_user` - Delete users
- `manage_user_roles` - Assign/remove user roles
**Role & Permission Management:**
- `view_roles` - View roles
- `create_role` - Create roles
- `modify_role` - Modify roles
- `delete_role` - Delete roles
- `manage_permissions` - Manage role permissions
### Default Roles
Three pre-configured roles are available:
- **Admin**: Full system access with all permissions
- **Rule Editor**: Can create and modify rules, view outcomes and lists
- **Read-only**: View-only access to rules, outcomes, and lists
### Role Assignment
Users can be assigned to roles through the database or programmatically. The permission system supports:
- Multiple roles per user
- Organization-scoped data model (`o_id`) used by core entities
- Audit history for rules, user lists, outcomes, and labels
## 🏷️ Transaction Labeling & Analytics
ezrules includes comprehensive transaction labeling capabilities for fraud detection analytics and model validation.
### Labeling Methods
**Single Event API**: Programmatically mark individual transactions
```bash
curl -X POST http://localhost:8888/api/v2/labels/mark-event \
-H "Authorization: Bearer <access_token>" \
-H "Content-Type: application/json" \
-d '{"event_id": "txn_123", "label_name": "FRAUD"}'
```
**Bulk CSV Upload**: Upload CSV files through the web interface for batch labeling (no header row)
```csv
txn_456,NORMAL
txn_789,CHARGEBACK
```
### Label Analytics Dashboard
Access comprehensive analytics for labeled transactions via the web interface:
**Key Metrics:**
- **Total Labeled Events**: Track overall labeling coverage
- **Labels Over Time**: Individual time-series charts for each label type showing temporal trends
**Time Range Options**: View analytics over 1h, 6h, 12h, 24h, or 30d periods
**API Endpoints:**
- `/api/v2/analytics/labels-summary` - Summary statistics (total labeled events count)
- `/api/v2/analytics/labels-distribution` - Distribution of individual labels by time period
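For example (the endpoint path is from this README; the bearer-token requirement mirrors the labeling API above):
```bash
curl -H "Authorization: Bearer <access_token>" \
  "http://localhost:8888/api/v2/analytics/labels-summary"
```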
### Test Data Generation
Generate realistic test data with fraud patterns:
```bash
# Generate 200 events, label 40% with realistic patterns, export to CSV
uv run ezrules generate-random-data --n-events 200 --label-ratio 0.4 --export-csv fraud_test.csv
# Export existing events to CSV for testing uploads
uv run ezrules export-test-csv --n-events 50 --unlabeled-only --output-file test_upload.csv
```
### Built-in Labels
- **FRAUD**: Suspicious or confirmed fraudulent transactions
- **CHARGEBACK**: Disputed transactions resulting in chargebacks
- **NORMAL**: Legitimate transactions
### Analytics Benefits
- **False Positive Analysis**: Measure how often legitimate transactions are flagged
- **False Negative Analysis**: Identify missed fraud cases for rule improvement
- **Model Validation**: Test machine learning models against known outcomes
- **Performance Metrics**: Track rule effectiveness over time
- **Temporal Analysis**: Understand fraud patterns and trends over configurable time periods
## 💼 Use Cases
- **Financial Transaction Monitoring**: Real-time fraud detection and compliance checking
- **Enterprise Compliance**: Role-based access control with audit trails for regulatory requirements
- **Business Rule Automation**: Automated decision making based on configurable business logic
- **Event-Driven Processing**: Rule-based responses to system events and data changes
- **Fraud Analytics**: Comprehensive transaction labeling for performance analysis and model improvement
## 📖 Documentation
### Building Documentation
The project uses MkDocs for documentation generation:
```bash
# Build documentation
uv run mkdocs build
# Serve documentation locally with live reload
uv run mkdocs serve
# Then open http://127.0.0.1:8000/ in your browser
```
The documentation is also available online at [ReadTheDocs](https://ezrules.readthedocs.io/).
## 🛠️ Development
### Tech Stack
- **Backend**: Python 3.12+, FastAPI, SQLAlchemy, Celery
- **Frontend**: Angular, Tailwind CSS, Chart.js
- **Database**: PostgreSQL
- **Task Queue**: Celery with Redis broker and PostgreSQL result backend (for backtesting)
- **Authentication**: JWT tokens (API v2)
### Code Quality
```bash
# Run linting and type checking
uv run poe check
# Run tests
uv run pytest
```
### Testing
#### Backend Tests
```bash
# Run tests with coverage
uv run pytest --cov=ezrules.backend --cov=ezrules.core --cov-report=term-missing --cov-report=xml tests
# Run CLI tests
./test_cli.sh
# Code quality checks (ruff format, type checking, linting)
uv run poe check
# Generate test data
uv run ezrules generate-random-data
# Clean up test data
uv run ezrules delete-test-data
```
#### Frontend Tests
The Angular frontend includes comprehensive end-to-end tests using Playwright.
**Prerequisites:**
- API service running on port 8888
- Angular dev server running (port 4200)
- Playwright browsers installed (first time only): `npx playwright install chromium`
```bash
cd ezrules/frontend
npm run test:e2e
```
## 📄 License
Apache License 2.0 - see [LICENSE](LICENSE) file for details.
| text/markdown | null | Konstantin Sofeikov <sofeykov@gmail.com> | null | null | Apache License Version 2.0, January 2004 http://www.apache.org/licenses/ TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION 1. Definitions. "License" shall mean the terms and conditions for use, reproduction, and distribution as defined by Sections 1 through 9 of this document. "Licensor" shall mean the copyright owner or entity authorized by the copyright owner that is granting the License. "Legal Entity" shall mean the union of the acting entity and all other entities that control, are controlled by, or are under common control with that entity. For the purposes of this definition, "control" means (i) the power, direct or indirect, to cause the direction or management of such entity, whether by contract or otherwise, or (ii) ownership of fifty percent (50%) or more of the outstanding shares, or (iii) beneficial ownership of such entity. "You" (or "Your") shall mean an individual or Legal Entity exercising permissions granted by this License. "Source" form shall mean the preferred form for making modifications, including but not limited to software source code, documentation source, and configuration files. "Object" form shall mean any form resulting from mechanical transformation or translation of a Source form, including but not limited to compiled object code, generated documentation, and conversions to other media types. "Work" shall mean the work of authorship, whether in Source or Object form, made available under the License, as indicated by a copyright notice that is included in or attached to the work (an example is provided in the Appendix below). "Derivative Works" shall mean any work, whether in Source or Object form, that is based on (or derived from) the Work and for which the editorial revisions, annotations, elaborations, or other modifications represent, as a whole, an original work of authorship. 
For the purposes of this License, Derivative Works shall not include works that remain separable from, or merely link (or bind by name) to the interfaces of, the Work and Derivative Works thereof. "Contribution" shall mean any work of authorship, including the original version of the Work and any modifications or additions to that Work or Derivative Works thereof, that is intentionally submitted to Licensor for inclusion in the Work by the copyright owner or by an individual or Legal Entity authorized to submit on behalf of the copyright owner. For the purposes of this definition, "submitted" means any form of electronic, verbal, or written communication sent to the Licensor or its representatives, including but not limited to communication on electronic mailing lists, source code control systems, and issue tracking systems that are managed by, or on behalf of, the Licensor for the purpose of discussing and improving the Work, but excluding communication that is conspicuously marked or otherwise designated in writing by the copyright owner as "Not a Contribution." "Contributor" shall mean Licensor and any individual or Legal Entity on behalf of whom a Contribution has been received by Licensor and subsequently incorporated within the Work. 2. Grant of Copyright License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable copyright license to reproduce, prepare Derivative Works of, publicly display, publicly perform, sublicense, and distribute the Work and such Derivative Works in Source or Object form. 3. Grant of Patent License. 
Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable (except as stated in this section) patent license to make, have made, use, offer to sell, sell, import, and otherwise transfer the Work, where such license applies only to those patent claims licensable by such Contributor that are necessarily infringed by their Contribution(s) alone or by combination of their Contribution(s) with the Work to which such Contribution(s) was submitted. If You institute patent litigation against any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Work or a Contribution incorporated within the Work constitutes direct or contributory patent infringement, then any patent licenses granted to You under this License for that Work shall terminate as of the date such litigation is filed. 4. Redistribution. You may reproduce and distribute copies of the Work or Derivative Works thereof in any medium, with or without modifications, and in Source or Object form, provided that You meet the following conditions: (a) You must give any other recipients of the Work or Derivative Works a copy of this License; and (b) You must cause any modified files to carry prominent notices stating that You changed the files; and (c) You must retain, in the Source form of any Derivative Works that You distribute, all copyright, patent, trademark, and attribution notices from the Source form of the Work, excluding those notices that do not pertain to any part of the Derivative Works; and (d) If the Work includes a "NOTICE" text file as part of its distribution, then any Derivative Works that You distribute must include a readable copy of the attribution notices contained within such NOTICE file, excluding those notices that do not pertain to any part of the Derivative Works, in at least one of the following places: within a NOTICE text file distributed as part of the 
Derivative Works; within the Source form or documentation, if provided along with the Derivative Works; or, within a display generated by the Derivative Works, if and wherever such third-party notices normally appear. The contents of the NOTICE file are for informational purposes only and do not modify the License. You may add Your own attribution notices within Derivative Works that You distribute, alongside or as an addendum to the NOTICE text from the Work, provided that such additional attribution notices cannot be construed as modifying the License. You may add Your own copyright statement to Your modifications and may provide additional or different license terms and conditions for use, reproduction, or distribution of Your modifications, or for any such Derivative Works as a whole, provided Your use, reproduction, and distribution of the Work otherwise complies with the conditions stated in this License. 5. Submission of Contributions. Unless You explicitly state otherwise, any Contribution intentionally submitted for inclusion in the Work by You to the Licensor shall be under the terms and conditions of this License, without any additional terms or conditions. Notwithstanding the above, nothing herein shall supersede or modify the terms of any separate license agreement you may have executed with Licensor regarding such Contributions. 6. Trademarks. This License does not grant permission to use the trade names, trademarks, service marks, or product names of the Licensor, except as required for reasonable and customary use in describing the origin of the Work and reproducing the content of the NOTICE file. 7. Disclaimer of Warranty. 
Unless required by applicable law or agreed to in writing, Licensor provides the Work (and each Contributor provides its Contributions) on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied, including, without limitation, any warranties or conditions of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A PARTICULAR PURPOSE. You are solely responsible for determining the appropriateness of using or redistributing the Work and assume any risks associated with Your exercise of permissions under this License. 8. Limitation of Liability. In no event and under no legal theory, whether in tort (including negligence), contract, or otherwise, unless required by applicable law (such as deliberate and grossly negligent acts) or agreed to in writing, shall any Contributor be liable to You for damages, including any direct, indirect, special, incidental, or consequential damages of any character arising as a result of this License or out of the use or inability to use the Work (including but not limited to damages for loss of goodwill, work stoppage, computer failure or malfunction, or any and all other commercial damages or losses), even if such Contributor has been advised of the possibility of such damages. 9. Accepting Warranty or Additional Liability. While redistributing the Work or Derivative Works thereof, You may choose to offer, and charge a fee for, acceptance of support, warranty, indemnity, or other liability obligations and/or rights consistent with this License. However, in accepting such obligations, You may act only on Your own behalf and on Your sole responsibility, not on behalf of any other Contributor, and only if You agree to indemnify, defend, and hold each Contributor harmless for any liability incurred by, or claims asserted against, such Contributor by reason of your accepting any such warranty or additional liability. END OF TERMS AND CONDITIONS APPENDIX: How to apply the Apache License to your work. 
To apply the Apache License to your work, attach the following boilerplate notice, with the fields enclosed by brackets "[]" replaced with your own identifying information. (Don't include the brackets!) The text should be enclosed in the appropriate comment syntax for the file format. We also recommend that a file or class name and description of purpose be included on the same "printed page" as the copyright notice for easier identification within third-party archives. Copyright [yyyy] [name of copyright owner] Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. | null | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"License :: OSI Approved :: Apache Software License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.12"
] | [] | null | null | >=3.12 | [] | [] | [] | [
"celery>=5.4.0",
"click>=8.0.0",
"email-validator",
"fastapi>=0.115.0",
"gunicorn",
"pandas>=2.2.2",
"passlib[bcrypt]>=1.7.4",
"psycopg2-binary",
"pydantic-settings>=2.3.3",
"pydantic>=2.7.4",
"pyparsing",
"python-jose[cryptography]>=3.3.0",
"python-multipart>=0.0.9",
"pyyaml",
"redis>=6... | [] | [] | [] | [] | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true} | 2026-02-18T23:55:35.943907 | ezrules-0.12.0.tar.gz | 288,264 | af/99/3c9816325d8a580e0782232442982b665d61a9aaca40612cb152b805a18e/ezrules-0.12.0.tar.gz | source | sdist | null | false | 37508f31d605255406eaaeb9906c9481 | 0e96e5873cdbcc33845e2ad446db8c579d8255b92d21437739490fee3a19ed03 | af993c9816325d8a580e0782232442982b665d61a9aaca40612cb152b805a18e | null | [
"LICENSE"
] | 250 |
2.4 | bengal-chirp | 0.1.2 | A Python web framework for the modern web platform — HTML fragments, streaming, SSE, free-threading ready | # ⌁⌁ Chirp
[](https://pypi.org/project/bengal-chirp/)
[](https://pypi.org/project/bengal-chirp/)
[](https://opensource.org/licenses/MIT)
[](https://pypi.org/project/bengal-chirp/)
**A Python web framework for the modern web platform.**
```python
from chirp import App
app = App()
@app.route("/")
def index():
return "Hello, World!"
app.run()
```
---
## What is Chirp?
Chirp is a Python web framework built for the modern web platform: browser-native UI, HTML over the wire, streaming responses, and Server-Sent Events. Return values drive content negotiation — no `make_response()`, no `jsonify()`. The type *is* the intent.
**What's good about it:**
- **Browser-native UI** — `<dialog>`, `popover`, View Transitions, container queries. Most of what required a JS framework is now native HTML and CSS.
- **HTML over the wire** — Serve full pages, template fragments, streaming HTML, and SSE. Built for htmx and the modern browser.
- **Streaming HTML** — Send the page shell immediately and fill in content as data becomes available. No loading spinners, no skeleton screens.
- **Server-Sent Events** — Push real-time updates over plain HTTP. No WebSocket protocol upgrade, no special infrastructure.
---
## Installation
```bash
# pip
pip install bengal-chirp
# uv
uv add bengal-chirp
```
Requires Python 3.14+
---
## Quick Start
```bash
chirp new myapp && cd myapp && python app.py
```
| Command / API | Description |
|---------------|-------------|
| `chirp new <name>` | Scaffold a new project |
| `chirp run <app>` | Start the dev server from an import string |
| `chirp check <app>` | Validate hypermedia contracts |
| `App()` | Create an application |
| `@app.route(path)` | Register a route handler |
| `Template(name, **ctx)` | Render a full template |
| `Template.inline(src, **ctx)` | Render from string (prototyping) |
| `Page(name, block, **ctx)` | Auto Fragment or Template based on request |
| `Fragment(name, block, **ctx)` | Render a named template block |
| `Stream(name, **ctx)` | Stream HTML progressively |
| `Suspense(name, **ctx)` | Shell first, OOB swaps for deferred data |
| `EventStream(gen)` | Server-Sent Events stream |
| `app.run()` | Start the development server |
---
## Features
| Feature | Description | Docs |
|---------|-------------|------|
| **Routing** | Pattern matching, path params, method dispatch | [Routing →](https://lbliii.github.io/chirp/docs/routing/) |
| **Filesystem routing** | Route discovery from `pages/` with layouts | [Filesystem →](https://lbliii.github.io/chirp/docs/routing/filesystem-routing/) |
| **Templates** | Kida integration, rendering, filters | [Templates →](https://lbliii.github.io/chirp/docs/templates/) |
| **Fragments** | Render named template blocks independently | [Fragments →](https://lbliii.github.io/chirp/docs/templates/fragments/) |
| **Forms** | `form_or_errors`, form macros, validation | [Forms →](https://lbliii.github.io/chirp/docs/data/forms-validation/) |
| **Streaming** | Progressive HTML rendering via Kida | [Streaming →](https://lbliii.github.io/chirp/docs/streaming/) |
| **SSE** | Server-Sent Events for real-time updates | [SSE →](https://lbliii.github.io/chirp/docs/streaming/server-sent-events/) |
| **Middleware** | CORS, sessions, static files, security headers, custom | [Middleware →](https://lbliii.github.io/chirp/docs/middleware/) |
| **Contracts** | Compile-time validation of hypermedia surface | [Reference →](https://lbliii.github.io/chirp/docs/reference/) |
| **Testing** | Test client, assertions, isolation utilities | [Testing →](https://lbliii.github.io/chirp/docs/testing/) |
| **Data** | Database integration and form validation | [Data →](https://lbliii.github.io/chirp/docs/data/) |
📚 **Full documentation**: [lbliii.github.io/chirp](https://lbliii.github.io/chirp/)
---
## Production Deployment
Chirp apps run on **[pounce](https://github.com/lbliii/pounce)**, a production-grade ASGI server with enterprise features built-in:
### Automatic Features (Zero Configuration)
- ✅ **WebSocket compression** — 60% bandwidth reduction
- ✅ **HTTP/2 support** — Multiplexed streams, server push
- ✅ **Graceful shutdown** — Finishes active requests on SIGTERM
- ✅ **Zero-downtime reload** — `kill -SIGUSR1` for hot code updates
- ✅ **Built-in health endpoint** — `/health` for Kubernetes probes
### Production Features (Configurable)
- 📊 **Prometheus metrics** — `/metrics` endpoint for monitoring
- 🛡️ **Per-IP rate limiting** — Token bucket algorithm, configurable burst
- 📦 **Request queueing** — Load shedding during traffic spikes
- 🐛 **Sentry integration** — Automatic error tracking and reporting
- 🔄 **Multi-worker mode** — CPU-based auto-scaling
### Quick Start: Production Mode
```python
from chirp import App, AppConfig
# Production configuration
config = AppConfig(
debug=False, # ← Enables production mode
workers=4,
metrics_enabled=True,
rate_limit_enabled=True,
sentry_dsn="https://...",
)
app = App(config=config)
@app.route("/")
def index():
return "Hello, Production!"
app.run() # ← Automatically uses production server
```
### CLI Production Mode
```bash
# Development (single worker, auto-reload)
chirp run myapp:app
# Production (multi-worker, all features)
chirp run myapp:app --production --workers 4 --metrics --rate-limit
```
### Docker Deployment
```dockerfile
FROM python:3.14-slim
WORKDIR /app
COPY . .
RUN pip install bengal-chirp
CMD ["chirp", "run", "myapp:app", "--production", "--workers", "4"]
```
📦 **Full deployment guide**: [docs/deployment/production.md](docs/deployment/production.md)
---
## Usage
<details>
<summary><strong>Return Values</strong> — Type-driven content negotiation</summary>
Route functions return *values*. The framework handles content negotiation based on the type:
```python
return "Hello" # -> 200, text/html
return {"users": [...]} # -> 200, application/json
return Template("page.html", title="Home") # -> 200, rendered via Kida
return Page("search.html", "results", items=x) # -> Fragment or Template (auto)
return Fragment("page.html", "results", items=x) # -> 200, rendered block
return Stream("dashboard.html", **async_ctx) # -> 200, streamed HTML
return Suspense("dashboard.html", stats=...) # -> shell + OOB swaps
return EventStream(generator()) # -> SSE stream
return Response(body=b"...", status=201) # -> explicit control
return Redirect("/login") # -> 302
```
No `make_response()`. No `jsonify()`. The type *is* the intent.
</details>
<details>
<summary><strong>Fragments and htmx</strong> — Render template blocks independently</summary>
Kida can render a named block from a template independently, without rendering the whole page:
```html
{# templates/search.html #}
{% extends "base.html" %}
{% block content %}
<input type="search" hx-get="/search" hx-target="#results" name="q">
{% block results_list %}
<div id="results">
{% for item in results %}
<div class="result">{{ item.title }}</div>
{% end %}
</div>
{% endblock %}
{% endblock %}
```
```python
@app.route("/search")
async def search(request: Request):
results = await db.search(request.query.get("q", ""))
if request.is_fragment:
return Fragment("search.html", "results_list", results=results)
return Template("search.html", results=results)
```
Full page request renders everything. htmx request renders just the `results_list` block.
Same template, same data, different scope. No separate "partials" directory.
</details>
<details>
<summary><strong>Streaming HTML</strong> — Progressive rendering</summary>
Kida renders template sections as they complete. The browser receives the shell immediately
and content fills in progressively:
```python
@app.route("/dashboard")
async def dashboard(request: Request):
return Stream("dashboard.html",
header=site_header(),
stats=await load_stats(),
activity=await load_activity(),
)
```
</details>
<details>
<summary><strong>Server-Sent Events</strong> — Real-time HTML updates</summary>
Push Kida-rendered HTML fragments to the browser in real-time:
```python
@app.route("/notifications")
async def notifications(request: Request):
async def stream():
async for event in notification_bus.subscribe(request.user):
yield Fragment("components/notification.html", event=event)
return EventStream(stream())
```
Combined with htmx's SSE support, this enables real-time UI updates with zero client-side
JavaScript. The server renders HTML, the browser swaps it in.
</details>
<details>
<summary><strong>Middleware</strong> — Composable request/response pipeline</summary>
No base class. No inheritance. A middleware is anything that matches the protocol:
```python
async def timing(request: Request, next: Next) -> Response:
start = time.monotonic()
response = await next(request)
elapsed = time.monotonic() - start
return response.with_header("X-Time", f"{elapsed:.3f}")
app.add_middleware(timing)
```
Built-in middleware: CORS, StaticFiles, HTMLInject, Sessions, SecurityHeaders.
</details>
<details>
<summary><strong>Typed Contracts</strong> — Compile-time hypermedia validation</summary>
Chirp validates the server-client boundary at startup:
```python
issues = app.check()
for issue in issues:
print(f"{issue.severity}: {issue.message}")
```
Every `hx-get`, `hx-post`, and `action` attribute in your templates is checked against the
registered route table. Every `Fragment` and `SSE` return type is checked against available
template blocks. Broken references become compile-time errors, not runtime 404s.
</details>
---
## Key Ideas
- **HTML over the wire.** Serve full pages, template fragments, streaming HTML, and
Server-Sent Events. Built for htmx and the modern browser.
- **Kida built in.** Same author, no seam. Fragment rendering, streaming templates, and
filter registration are first-class features, not afterthoughts.
- **Typed end-to-end.** Frozen config, frozen request, chainable response. Zero
`type: ignore` comments.
- **Free-threading native.** Designed for Python 3.14t from the first line. Immutable data
structures, ContextVar isolation.
- **Contracts, not conventions.** `app.check()` validates the full hypermedia surface at
startup.
- **Minimal dependencies.** `kida-templates` + `anyio` + `bengal-pounce`. Everything else is optional.
---
## Documentation
📚 **[lbliii.github.io/chirp](https://lbliii.github.io/chirp/)**
| Section | Description |
|---------|-------------|
| [Get Started](https://lbliii.github.io/chirp/docs/get-started/) | Installation and quickstart |
| [Core Concepts](https://lbliii.github.io/chirp/docs/core-concepts/) | App lifecycle, return values, configuration |
| [Routing](https://lbliii.github.io/chirp/docs/routing/) | Routes, filesystem routing, requests |
| [Templates](https://lbliii.github.io/chirp/docs/templates/) | Rendering, fragments, filters |
| [Streaming](https://lbliii.github.io/chirp/docs/streaming/) | HTML streaming and Server-Sent Events |
| [Middleware](https://lbliii.github.io/chirp/docs/middleware/) | Built-in and custom middleware |
| [Data](https://lbliii.github.io/chirp/docs/data/) | Database integration and forms |
| [Testing](https://lbliii.github.io/chirp/docs/testing/) | Test client and assertions |
| [Deployment](https://lbliii.github.io/chirp/docs/deployment/) | Production deployment with Pounce |
| [Tutorials](https://lbliii.github.io/chirp/docs/tutorials/) | Flask migration, htmx patterns |
| [Examples](https://lbliii.github.io/chirp/docs/examples/) | RAG demo, production stack, API |
| [Reference](https://lbliii.github.io/chirp/docs/reference/) | API documentation |
---
## Development
```bash
git clone https://github.com/lbliii/chirp.git
cd chirp
uv sync --group dev
pytest
```
---
## The Bengal Ecosystem
A structured reactive stack — every layer written in pure Python for 3.14t free-threading.
| | | | |
|--:|---|---|---|
| **ᓚᘏᗢ** | [Bengal](https://github.com/lbliii/bengal) | Static site generator | [Docs](https://lbliii.github.io/bengal/) |
| **∿∿** | [Purr](https://github.com/lbliii/purr) | Content runtime | — |
| **⌁⌁** | **Chirp** | Web framework ← You are here | [Docs](https://lbliii.github.io/chirp/) |
| **=^..^=** | [Pounce](https://github.com/lbliii/pounce) | ASGI server | [Docs](https://lbliii.github.io/pounce/) |
| **)彡** | [Kida](https://github.com/lbliii/kida) | Template engine | [Docs](https://lbliii.github.io/kida/) |
| **ฅᨐฅ** | [Patitas](https://github.com/lbliii/patitas) | Markdown parser | [Docs](https://lbliii.github.io/patitas/) |
| **⌾⌾⌾** | [Rosettes](https://github.com/lbliii/rosettes) | Syntax highlighter | [Docs](https://lbliii.github.io/rosettes/) |
Python-native. Free-threading ready. No npm required.
---
## License
MIT
| text/markdown | null | Bengal Contributors <lbeezr@icloud.com> | null | null | null | web-framework, asgi, html-over-the-wire, htmx, sse, streaming, free-threading, templates | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.14",
"Topic :: Internet :: WWW/HTTP",
"Topic :: Internet :: WWW/HTTP :: Dynamic Content",
"Topic :: Internet :: WWW/HT... | [] | null | null | >=3.14 | [] | [] | [] | [
"kida-templates>=0.2.2",
"anyio>=4.0",
"bengal-pounce>=0.2.0",
"python-multipart>=0.0.18; extra == \"forms\"",
"itsdangerous>=2.2.0; extra == \"sessions\"",
"argon2-cffi>=23.1.0; extra == \"auth\"",
"httpx>=0.27.0; extra == \"testing\"",
"asyncpg>=0.30.0; extra == \"data-pg\"",
"httpx>=0.27.0; extra... | [] | [] | [] | [
"Homepage, https://github.com/lbliii/chirp",
"Documentation, https://github.com/lbliii/chirp",
"Repository, https://github.com/lbliii/chirp",
"Changelog, https://github.com/lbliii/chirp/blob/main/CHANGELOG.md"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-18T23:55:31.848037 | bengal_chirp-0.1.2.tar.gz | 274,302 | 3e/23/39f85077be06992c73dc1be49d8ac17c4387800bcd4492c789ba45deb657/bengal_chirp-0.1.2.tar.gz | source | sdist | null | false | dfc4f896df24e851115cfa9d34d3c509 | eb08984a9f9658b2366e7c95767fce029c40a044648512d1bc1067a574983a89 | 3e2339f85077be06992c73dc1be49d8ac17c4387800bcd4492c789ba45deb657 | MIT | [
"LICENSE"
] | 256 |
2.4 | dyada | 0.0.9 | A Code for Memory-Saving Dyadic Adaptivity in Optimization and Simulation | <!--
SPDX-FileCopyrightText: 2025 Theresa Pollinger
SPDX-License-Identifier: GPL-3.0-or-later
-->
# `DyAda`: A Code for Dyadic Adaptivity in Optimization, Simulation, and Machine Learning
[](https://pypi.org/project/dyada/)
[](https://github.com/freifrauvonbleifrei/DyAda/blob/main/pyproject.toml)
[](https://github.com/freifrauvonbleifrei/DyAda/actions/workflows/python-package.yml/)

[](https://app.codacy.com/gh/freifrauvonbleifrei/DyAda/dashboard?utm_source=gh&utm_medium=referral&utm_content=&utm_campaign=Badge_grade)
[](https://www.gnu.org/licenses/gpl-3.0)
## Installation
It's as simple as
```bash
pip install dyada[drawing,matplotlib,opengl]
```
Or, if you would like to change the source code, do
```bash
git clone https://github.com/freifrauvonbleifrei/DyAda.git
cd DyAda
# ... git checkout the required version ...
pip install -e .[drawing,matplotlib,opengl]
```
## Dyadic Adaptivity
Dyadic adaptivity means: A given hypercube of 2 or more dimensions may or may not
be subdivided into two parts in any number of dimensions.
Of the resulting sub-boxes, each may again be subdivided into two in any dimension,
and so forth.
### Why Dyadic Adaptivity?
Currently, the most common approach to adaptivity is octrees, which are a
special type of dyadic adaptivity: each box is either refined in *every* dimension
or not at all.
For a three-dimensional domain, the tree and the resulting partitioning could look like this:
<!--
images generated like this:
```bash
for f in *.tex ; do latexmk -pdf $f ; done
for d in *.pdf ; do inkscape --without-gui --file=$d --export-plain-svg=${d%.*}.svg ; done
rsvg-convert tikz_cuboidss_solid.svg -w 268 -h 252 -f svg -o tikz_cuboids_solid.svg #etc.
``` -->


But maybe you didn't need all this resolution?
Maybe, in the finely-resolved areas, you only needed *some* of the dimensions
resolved finely:

This is what DyAda provides.
The tree will then look like this:

And you can use only 14 degrees of freedom instead of 29!
(Count the number of colorful tree nodes to check!)
This reduction will be even stronger if you go to higher dimensions.
For details, refer to the [preprint](https://arxiv.org/abs/2508.06316),
with animations!
## Using DyAda
For a quick overview, the following example sticks to two-dimensional discretizations,
but all algorithms work on (almost) arbitrary-dimensional omnitrees, though DyAda may
become slow in very high dimensions.
(Find the full tutorial code in [dyada_tutorial.py](./dyada_tutorial.py), and more examples
of usage in the extensive test suite in [/test](/test).)
You can start with a regular `RefinementDescriptor`:
```python:dyada_tutorial.py:s:descriptor
import bitarray as ba
import dyada
from random import randint
# %%
descriptor = dyada.RefinementDescriptor(2, [2, 1])
# dyada.plot_tree_tikz(descriptor, filename="simple_tree")
num_dimensions = descriptor.get_num_dimensions()
print(descriptor)
```
Expected output:
```console
RefinementDescriptor('11 01 00 00 ...0 00 01 00 00')
```
This one has four rectangles in the first dimension and two in the second, because
the level `[2, 1]` is passed as base-2 exponents.
If you uncomment the line with `plot_tree_tikz` and you have `latexmk` and some
LaTeX tikz packages installed, the script will generate a `simple_tree.pdf` in the
same folder.
You can use the descriptor and `MortonOrderLinearization` to build a `Discretization`:
```python:dyada_tutorial.py:s:discretization
discretization = dyada.Discretization(dyada.MortonOrderLinearization(), descriptor)
print("initial discretization:")
print(discretization)
```
->
```console
initial discretization:
_________
|_|_|_|_|
|_|_|_|_|
```
If you want to refine a single rectangle at once, you can use `apply_single_refinement`:
```python:dyada_tutorial.py:s:zero
new_discretization, index_mapping = dyada.apply_single_refinement(
discretization, 0, track_mapping="boxes"
)
print("after refining box 0:")
print(new_discretization)
```
->
```console
after refining box 0:
_________________
| | | | |
|___|___|___|___|
|_|_| | | |
|_|_|___|___|___|
```
Of course, you can also refine only in a subset of the dimensions:
```python:dyada_tutorial.py:s:random
# select random index and refinement
random_index = randint(0, new_discretization.descriptor.get_num_boxes() - 1)
random_refinement = ba.bitarray("00")
while random_refinement.count() == 0:
random_refinement = ba.bitarray(
"".join(str(randint(0, 1)) for _ in range(num_dimensions))
)
new_discretization, index_mapping = dyada.apply_single_refinement(
new_discretization, random_index, random_refinement, track_mapping="boxes"
)
print("after refining random box:")
print(new_discretization)
```
->
```console
after refining random box:
_________________
| |___| | |
|___|___|___|___|
|_|_| | | |
|_|_|___|___|___|
```
You can keep running the above and watch your discretization become finer and finer!
To refine many rectangles at once, you can collect the refinements
in a `PlannedAdaptiveRefinement` object:
```python:dyada_tutorial.py:s:planned
refining = dyada.PlannedAdaptiveRefinement(discretization)
refining.plan_refinement(0, ba.bitarray("11"))
refining.plan_refinement(1, ba.bitarray("01"))
new_discretization, index_mapping = refining.apply_refinements(track_mapping="boxes")
# dyada.plot_all_boxes_2d(new_discretization, backend="matplotlib", labels="boxes")
print("after applying planned refinements:")
print(new_discretization)
```
->
```console
after applying planned refinements:
_________________
| | | | |
|___|___|___|___|
|_|_| | | | |
|_|_|_|_|___|___|
```
If you uncomment the `plot_all_boxes_2d` line, it will show the discretization
in a matplotlib window. Other backends are `tikz`, `ascii` (2d only), and `opengl` (3d only).
Note that dyada does not store your function data; you have to manage your own
container (for example, a `numpy` array) to do that.
But the `index_mapping` in the above snippets helps you figure out how your
function data has moved: `new_indices = index_mapping[old_index]`.
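For example, if your function values live in a plain Python list, the mapping can be applied like this. This is a minimal sketch with a hand-made mapping; in real use `index_mapping` comes from `apply_single_refinement`, and its exact shape depends on the `track_mapping` mode:

```python
old_data = [10.0, 20.0]                    # one value per old box
index_mapping = {0: [0, 1, 2, 3], 1: [4]}  # hypothetical: box 0 refined into four children

new_num_boxes = 5
new_data = [0.0] * new_num_boxes
for old_index, new_indices in index_mapping.items():
    for new_index in new_indices:
        # children inherit the parent's value; interpolate here if needed
        new_data[new_index] = old_data[old_index]

print(new_data)  # [10.0, 10.0, 10.0, 10.0, 20.0]
```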
For a full workflow based on `dyada`, have a look at the project
[thingies_with_omnitrees](https://github.com/freifrauvonbleifrei/thingies_with_omnitrees).
## Contributing
Feel free to request features or voice your intent to work on/with DyAda as an
[issue](https://github.com/freifrauvonbleifrei/DyAda/issues).
Depending on what you are looking for, exciting features may be in preparation,
or they may just be waiting for you to implement them!
| text/markdown | null | null | null | null | null | null | [
"Programming Language :: Python :: 3"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"bitarray",
"numpy",
"cmap; extra == \"drawing\"",
"matplotlib; extra == \"matplotlib\"",
"pyopengl; extra == \"opengl\""
] | [] | [] | [] | [] | twine/6.1.0 CPython/3.12.9 | 2026-02-18T23:52:35.391206 | dyada-0.0.9.tar.gz | 120,964 | 91/48/d32df42c167b62c5af5d85f0d59a49fe9c07a5202bb155bd1324cb802414/dyada-0.0.9.tar.gz | source | sdist | null | false | b6063a3cc8d965a62174d6a3c8d0edf7 | a1ea89724a8eadedf80dc1dd511e31e472e52779afdc7b30aed1d47a6f8c59a8 | 9148d32df42c167b62c5af5d85f0d59a49fe9c07a5202bb155bd1324cb802414 | null | [] | 263 |
2.4 | gaze-tracker-v3 | 0.1.3 | Desktop gaze tracking app based on MediaPipe Face Mesh/Iris | # Gaze Tracker v3
A desktop application for webcam-based gaze tracking built on MediaPipe Face Mesh/Iris.
The project was developed for running studies with professional participants.
## Features
- Calibration over an extended grid of points
- Gaze position prediction in screen coordinates
- Motion smoothing (Median + OneEuro)
- Tkinter GUI + OpenCV visualization
## Requirements
- Python 3.10+
- A webcam
- Windows (the current UI implementation targets Windows)
## Installation
Install the dependencies:
```bash
python -m pip install -r requirements.txt
```
## Running
### Main entry point
```bash
python main.py
```
### As a module
```bash
python -m gaze_tracker
```
### Via the Python API
```python
from gaze_tracker import run_app
run_app()
```
## Quick usage walkthrough
1. Click **"Calibration"** and look at the red dots.
2. Once calibration completes, start tracking (if it is not already running).
3. Use **"Reset calibration"** to recalibrate.
## Public API
- `gaze_tracker.run_app()` — launches the application
- `gaze_tracker.main()` — alias kept for compatibility
| text/markdown | Molashko | null | null | null | null | gaze, eye-tracking, mediapipe, computer-vision, opencv | [
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Operating System :: Microsoft :: Windows",
"Intended Audience :: Science/Research",
"Topic :: Scientific/Engineering :: Image Recognition"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"mediapipe==0.10.9",
"opencv-python==4.10.0.84",
"numpy<2.0.0,>=1.24.0",
"scikit-learn==1.5.2",
"Pillow==10.3.0"
] | [] | [] | [] | [
"Homepage, https://github.com/Molashko/Gaze_App",
"Repository, https://github.com/Molashko/Gaze_App"
] | twine/6.2.0 CPython/3.11.8 | 2026-02-18T23:51:55.445454 | gaze_tracker_v3-0.1.3.tar.gz | 24,497 | d2/3b/b4830d3b002cf9761c62f126f6f517e67cb604e2151c966ffc60caaa9004/gaze_tracker_v3-0.1.3.tar.gz | source | sdist | null | false | 7bd762bdc6704c4548411a0eeabc4005 | c551ae27567ecfce9198fea5671ca6bfaea7bd95a362aeee7b89c13d427b0ff1 | d23bb4830d3b002cf9761c62f126f6f517e67cb604e2151c966ffc60caaa9004 | null | [] | 259 |
2.4 | cloudglue | 0.6.3 | Python SDK for Cloudglue API | # Cloudglue Python SDK
[](https://pypi.org/project/cloudglue)
[](LICENSE.md)
[](https://discord.gg/QD5KWFVner)
Cloudglue makes it easy to turn video into LLM ready data. Official Python SDK for the Cloudglue API.
## 📖 Resources
- [Cloudglue API Docs](https://docs.cloudglue.dev)
- [Terms of Service](https://cloudglue.dev/terms)
- [Privacy Policy](https://cloudglue.dev/privacy)
- [Pricing](https://cloudglue.dev/pricing)
> By using this SDK, you agree to the [Cloudglue Terms of Service](https://cloudglue.dev/terms) and acknowledge our [Privacy Policy](https://cloudglue.dev/privacy).
## Installation
You can install the Cloudglue Python SDK using pip:
```bash
pip install cloudglue
```
## Quick Start
```python
from cloudglue import CloudGlue
# Initialize the client
client = CloudGlue(api_key="your_api_key") # Or use CLOUDGLUE_API_KEY env variable
# Define your messages
messages = [
{"role": "user", "content": "What are aligned video captions?"}
]
# Make an API request
response = client.chat.completions.create(
messages=messages,
model="nimbus-001",
collections=["abc123"], # Assumes collection already exists, otherwise create one first then reference here by collection id
)
# Get the generated text
generated_text = response.choices[0].message.content
print(generated_text)
```
## Development
### Prerequisites
- Python 3.10+
- Make (for build tasks)
- Git
### Setup
Clone the repository and set up the development environment:
```bash
git clone https://github.com/aviaryhq/cloudglue-python.git
cd cloudglue-python
brew install openapi-generator
make setup # This will set up the virtual environment
# Initialize the API spec Git submodule
make submodule-init
```
### API Specification
The OpenAPI specification is maintained in a separate [repository](https://github.com/aviaryhq/cloudglue-api-spec) and included as a Git submodule:
```bash
# Update the API spec to the latest version
make submodule-update
# After updating the spec, regenerate the SDK
make generate
```
### Building
```bash
make generate # Generate SDK from OpenAPI spec
make build # Build the package
```
### Project Structure
The project directory structure:
```
cloudglue/
├── __init__.py # Main package initialization
├── client/ # Custom client wrapper code
│ └── main.py # CloudGlue class implementation
└── sdk/ # Auto-generated API code
dist/ # Pre-built package dist
spec/ # Git submodule with OpenAPI specification
└── spec/ # Nested spec directory
└── openapi.json # OpenAPI spec file
```
## Contact
* [Open an Issue](https://github.com/aviaryhq/cloudglue-python/issues/new)
* [Email](mailto:support@cloudglue.dev)
| text/markdown | null | "Aviary Inc." <hello@aviaryhq.com> | null | null | Apache-2.0 | cloudglue, api, sdk | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Operating System :: OS Independent"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"urllib3>=2.2.3",
"python-dateutil",
"requests>=2.32.2",
"certifi>=2024.12.14",
"pydantic>=2.10.6"
] | [] | [] | [] | [
"Homepage, https://github.com/aviaryhq/cloudglue-python",
"Bug Tracker, https://github.com/aviaryhq/cloudglue-python/issues"
] | twine/6.2.0 CPython/3.12.8 | 2026-02-18T23:51:28.157684 | cloudglue-0.6.3.tar.gz | 174,373 | ad/46/d47d938255016524b9964758f285bc601910c118baf7297a25ac2a6ed80b/cloudglue-0.6.3.tar.gz | source | sdist | null | false | fa003cc46b6b47bff9f7dc90db143f33 | 0bfdc683b2dc3f9de078f1d3df181105a4b4ea452322cfe3f8b337b025660e8e | ad46d47d938255016524b9964758f285bc601910c118baf7297a25ac2a6ed80b | null | [
"LICENSE.md"
] | 265 |
2.4 | byod-cli | 1.0.8 | Command-line interface for Lablytics BYOD - Secure biotech data processing with zero-knowledge encryption | # BYOD CLI
**Secure biotech data processing with zero-knowledge encryption.**
Your data is encrypted on your machine, processed inside a cryptographically attested AWS Nitro Enclave, and returned encrypted. No one — including Lablytics — can access your plaintext data.
## Install
```bash
pip install byod-cli
# With the local web UI
pip install 'byod-cli[ui]'
```
**Requirements:** Python 3.9+ and AWS credentials (`aws configure` or environment variables).
## Get Started
### 1. Sign up and get an API key
Go to **https://byod.cultivatedcode.co**, create an account, then go to **Settings > API Keys** and create a key. Copy it — it's only shown once.
### 2. Authenticate
```bash
byod auth login
```
Paste your API key when prompted (`sk_live_xxxxx`).
### 3. Set up your AWS resources (one-time)
```bash
byod setup
```
This creates a KMS key and IAM role **in your AWS account**. Only the verified Nitro Enclave can use the key to decrypt — not Lablytics, not anyone else.
### 4. Submit data
```bash
byod submit genomic-qc ./sample.fastq.gz
```
The CLI encrypts your file locally, uploads the ciphertext, and returns a job ID.
### 5. Get results
```bash
byod status <job-id> # Check progress
byod get <job-id> -o ./output/ # Retrieve + decrypt in one step
```
That's it. Your data was never visible to anyone outside the enclave.
> **Prefer a GUI?** Run `byod ui` to do all of the above through a local web interface with drag-and-drop file submission and visual progress tracking.
---
## Commands
| Command | What it does |
|---------|-------------|
| `byod auth login` | Authenticate with your API key |
| `byod auth logout` | Clear stored credentials |
| `byod auth status` | Check if you're authenticated |
| `byod setup` | Create KMS key + IAM role in your AWS account |
| `byod update-policy` | Update KMS key policy with latest enclave PCR0 values |
| `byod submit <plugin> <file>` | Encrypt and submit data for processing |
| `byod status <job-id>` | Check job status |
| `byod list` | List your jobs |
| `byod get <job-id> -o <dir>` | Retrieve and decrypt results in one step |
| `byod plugins` | List available processing plugins |
| `byod profile list` | List configured profiles |
| `byod profile switch <name>` | Switch active profile |
| `byod profile delete <name>` | Delete a profile |
| `byod profile show` | Show current profile details |
| `byod config show` | Show current configuration |
| `byod ui` | Launch the local web UI for graphical submission and monitoring |
| `byod completion <shell>` | Generate shell completions (bash/zsh/fish) |
## Plugins
| Plugin | Description | Accepts |
|--------|-------------|---------|
| `genomic-qc` | FastQC + MultiQC quality control | `.fastq`, `.fastq.gz` |
| `demo-count` | Line/word counting (for testing) | Any text file |
```bash
byod plugins # See all available plugins
byod plugins --format json # JSON output for scripting
```
## Web UI
Prefer a graphical interface? The CLI includes a local web UI with drag-and-drop file submission, visual job tracking, and one-click result retrieval.
```bash
byod ui # Opens http://localhost:8420
byod ui --port 9000 # Custom port
byod ui --no-browser # Don't auto-open browser
```
The web UI runs entirely on your machine. All encryption happens locally — same security model as the CLI. Features include:
- **Guided setup** — walks you through authentication, AWS configuration, and KMS key creation
- **Drag-and-drop submission** — select a plugin, drop your files, and submit
- **Live job tracking** — watch progress with real-time status updates
- **One-click results** — download and decrypt results directly in the browser
- **Profile management** — switch between profiles and view configuration
## Examples
```bash
# Submit a directory (auto-archived as tar.gz)
byod submit genomic-qc ./samples/
# Submit with metadata
byod submit genomic-qc ./sample.fastq.gz \
--description "Batch 2026-02" \
--tags experiment=exp001 \
--tags batch=batch_a
# Submit with custom pipeline config
echo '{"min_quality": 20}' > config.json
byod submit genomic-qc ./sample.fastq.gz --config config.json
# Wait for completion with live status updates
byod submit genomic-qc ./sample.fastq.gz --wait --timeout 3600
# List completed jobs
byod list --status completed
# JSON output for scripting
byod list --format json
byod submit genomic-qc ./data.fastq --format json
byod auth status --format json
# Quiet mode for CI/CD (suppress progress output)
byod --quiet submit genomic-qc ./sample.fastq.gz
# Disable colored output
byod --no-color list
# Use API key via environment variable (useful for CI/CD)
export BYOD_API_KEY=sk_live_xxxxx
byod --quiet submit genomic-qc ./sample.fastq.gz --format json
# Launch the web UI
byod ui
byod ui --port 9000 --no-browser
# Shell completions
eval "$(byod completion bash)"
eval "$(byod completion zsh)"
byod completion fish > ~/.config/fish/completions/byod.fish
```
## How Security Works
```
Your Machine Lablytics
┌──────────────────┐ ┌──────────────────────────┐
│ byod-cli │ ciphertext │ S3 (encrypted blobs) │
│ - encrypt locally│───────────────>│ │ │
│ - decrypt locally│ │ v │
└────────┬─────────┘ │ Nitro Enclave │
│ │ - attests to your KMS key│
v │ - decrypts, processes, │
Your AWS Account │ re-encrypts │
┌──────────────────┐ │ - no network access │
│ KMS Key │<───────────────│ │
│ - you own it │ attestation └──────────────────────────┘
│ - PCR0 condition │
└──────────────────┘
```
| Who | Can decrypt your data? | Why |
|-----|----------------------|-----|
| **You** | Yes | Your KMS key, your AWS credentials |
| **Nitro Enclave** | Yes | Cross-account role with PCR0 attestation |
| **Lablytics operators** | **No** | No access to your KMS key |
| **Lablytics infrastructure** | **No** | Attestation check blocks non-enclave access |
## Environment Variables
| Variable | Description | Default |
|----------|-------------|---------|
| `BYOD_API_KEY` | API key (alternative to `byod auth login`) | — |
| `BYOD_API_URL` | Custom API endpoint | `https://byod.cultivatedcode.co` |
| `BYOD_DEBUG` | Enable debug logging (`1` or `true`) | `false` |
| `NO_COLOR` | Disable colored output (any value) | — |
| `AWS_PROFILE` | AWS credentials profile | `default` |
| `AWS_REGION` | Region for KMS operations | `us-east-1` |
## Exit Codes
| Code | Meaning |
|------|---------|
| `0` | Success |
| `1` | General error |
| `2` | Authentication error |
| `3` | Network error |
| `4` | Resource not found |
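In a CI wrapper, these codes can be mapped to human-readable outcomes. The helper below is a hypothetical sketch, not part of the CLI; you would feed it the return code of a `byod` subprocess:

```python
# Exit-code table from the section above, as a lookup dict
EXIT_MEANINGS = {
    0: "success",
    1: "general error",
    2: "authentication error",
    3: "network error",
    4: "resource not found",
}

def describe_exit(code: int) -> str:
    # Unknown codes are reported verbatim rather than raising
    return EXIT_MEANINGS.get(code, f"unknown exit code {code}")

print(describe_exit(2))  # authentication error
```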
## Troubleshooting
**"Not authenticated"** — Run `byod auth login` with your API key from https://byod.cultivatedcode.co.
**"No KMS key configured"** — Run `byod setup` to create your KMS key and IAM role.
**"AWS credentials not found"** — Run `aws configure` or set `AWS_ACCESS_KEY_ID` and `AWS_SECRET_ACCESS_KEY`.
**"AccessDenied when creating KMS key"** — Your AWS user needs: `kms:CreateKey`, `kms:CreateAlias`, `kms:PutKeyPolicy`, `iam:CreateRole`, `iam:PutRolePolicy`.
**"Decryption failed: AccessDeniedException"** — Make sure you're using the same AWS account that ran `byod setup`. Check that the KMS key hasn't been deleted.
**Debug mode:**
```bash
byod --debug submit genomic-qc ./sample.fastq.gz
```
## Development
```bash
pip install -e ".[dev]"
pytest # Run tests
ruff check src/ # Lint
ruff format src/ # Format
```
## License
MIT — see [LICENSE](LICENSE).
| text/markdown | null | Lablytics <support@lablytics.io> | null | null | null | aws, biotech, encryption, genomics, kms, nextflow, nitro-enclave, proteomics, secure-computing, zero-knowledge | [
"Development Status :: 4 - Beta",
"Environment :: Console",
"Intended Audience :: Healthcare Industry",
"Intended Audience :: Science/Research",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.9... | [] | null | null | >=3.9 | [] | [] | [] | [
"boto3>=1.28.0",
"click>=8.1.0",
"cryptography>=41.0.0",
"fastapi>=0.104.0",
"python-multipart>=0.0.6",
"pyyaml>=6.0",
"requests>=2.31.0",
"rich>=13.0.0",
"tqdm>=4.66.0",
"uvicorn[standard]>=0.24.0",
"httpx>=0.25.0; extra == \"dev\"",
"moto>=5.0.0; extra == \"dev\"",
"mypy>=1.5.0; extra == \... | [] | [] | [] | [
"Homepage, https://lablytics.io",
"Documentation, https://docs.lablytics.io/cli",
"Repository, https://github.com/lablytics/byod-cli",
"Issues, https://github.com/lablytics/byod-cli/issues",
"Changelog, https://github.com/lablytics/byod-cli/releases"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-18T23:50:58.648089 | byod_cli-1.0.8.tar.gz | 165,637 | 6f/9b/5d96463a7e8b2c3fe115092b455b3f1b4adde089245905aeca9557d38a70/byod_cli-1.0.8.tar.gz | source | sdist | null | false | ad042ece8c2555962352a0a204a47d14 | c380ba5ed7a0b0083cca485023d55a56a496b2a5fe0e3f5c4a94174c305cc06e | 6f9b5d96463a7e8b2c3fe115092b455b3f1b4adde089245905aeca9557d38a70 | MIT | [
"LICENSE"
] | 260 |
2.4 | asgion | 0.3.0 | ASGI protocol inspector — validates, traces, and analyzes your ASGI app | # asgion
[](https://github.com/ack1d/asgion/actions/workflows/ci.yml)
[](https://codecov.io/gh/ack1d/asgion)
[](https://pypi.org/project/asgion/)
[](https://pypi.org/project/asgion/)
[](https://github.com/ack1d/asgion/blob/main/LICENSE)
**ASGI protocol inspector** — validates your ASGI application against the
[ASGI specification](https://asgi.readthedocs.io/en/latest/) at runtime.
Catches protocol violations, state machine errors, and event schema mismatches
before they become production bugs.
Zero runtime dependencies. Python 3.12+.
## Quickstart
### Python API
```bash
pip install asgion
```
```python
from asgion import inspect
app = inspect(app) # wrap any ASGI app — zero config
```
Use with any ASGI server:
```python
import uvicorn
uvicorn.run(inspect(app), host="127.0.0.1", port=8000)
```
### CLI
```bash
pip install asgion[cli]
asgion check myapp:app
```
## What It Catches
**164 rules** across 12 layers — scope fields, event schemas, state machines,
extensions, and semantic checks for HTTP, WebSocket, and Lifespan.
```
[G-005] error Message must be a dict
[HE-012] error response.body 'body' must be bytes, got str
[HF-003] error Duplicate http.response.start
[WE-008] warning websocket.send has both 'bytes' and 'text' set
```
Every rule has an ID, severity, summary, and hint. See the full list:
[docs/rules.md](docs/rules.md)
## CLI Reference
### `asgion check`
```
asgion check APP_PATH [OPTIONS]
```
Check an ASGI app for protocol violations.
| Option | Description |
|--------|-------------|
| `APP_PATH` | Module:attribute path (e.g. `myapp:app`) |
| `--path PATH` | Paths to check (repeatable, default `/`). Prefix with protocol to set scope type: `http:/path`, `https:/path`, `ws:/path`, `wss:/path` |
| `--strict` | Exit 1 on any violations |
| `--format text\|json` | Output format (default `text`) |
| `--exclude-rules IDS` | Comma-separated rule IDs to skip |
| `--min-severity LEVEL` | Minimum severity: `perf`, `info`, `warning`, `error` |
| `--config FILE` | Path to `.asgion.toml` or `pyproject.toml` |
| `--profile PROFILE` | Rule filter profile: `strict`, `recommended`, `minimal` |
| `--no-color` | Disable ANSI colors (also respects `NO_COLOR` env) |
| `--no-lifespan` | Skip lifespan startup/shutdown checks |
```bash
asgion check myapp:app --path /api/users # HTTP (default)
asgion check myapp:app --path ws:/ws/chat # WebSocket
asgion check myapp:app --path /api --path ws:/ws # both
```
Exit codes: `0` = clean, `1` = violations (with `--strict`), `2` = runtime error.
### `asgion rules`
```
asgion rules [OPTIONS]
```
List all validation rules.
| Option | Description |
|--------|-------------|
| `--format text\|json` | Output format (default `text`) |
| `--layer LAYER` | Filter by layer: `general`, `http`, `websocket`, `lifespan` |
| `--severity LEVEL` | Filter by severity: `perf`, `info`, `warning`, `error` |
| `--no-color` | Disable ANSI colors |
### `asgion --version`
Print version and exit.
## Python API
```python
from asgion import AsgionConfig, inspect
cfg = AsgionConfig(
min_severity="warning", # skip perf/info rules
exclude_rules={"HE-012", "G-008"}, # suppress specific rules
ttfb_threshold=2.0, # custom TTFB threshold (seconds)
)
wrapped = inspect(
app,
config=cfg,
strict=False, # True to raise on violations
on_violation=lambda v: print(v), # real-time callback
exclude_paths=["/health", "/metrics"], # skip these paths
)
```
Or select a built-in profile:
```python
from asgion import BUILTIN_PROFILES, inspect
app = inspect(app, config=BUILTIN_PROFILES["recommended"]) # warning+ only
```
### Parameters
| Parameter | Type | Default | Description |
|-----------|------|---------|-------------|
| `app` | `ASGIApp` | required | The ASGI application to wrap |
| `config` | `AsgionConfig` | `None` | Rule filter settings and thresholds |
| `strict` | `bool` | `False` | Raise `ASGIProtocolError` on any violation |
| `on_violation` | callback | `None` | Called with each `Violation` in real-time |
| `exclude_paths` | `list[str]` | `None` | Paths to skip validation |
| `exclude_rules` | `set[str]` | `None` | Rule IDs to suppress (additive to config) |
| `registry` | `ValidatorRegistry` | `None` | Custom validator registry |
### AsgionConfig
Can also be loaded from `pyproject.toml` or `.asgion.toml`:
```toml
[tool.asgion]
profile = "recommended" # base profile: strict / recommended / minimal
exclude_rules = ["SEM-006"] # suppress specific rules (supports globs: "SEM-*")
include_rules = ["HF-*"] # allowlist — only these rules fire
categories = ["http"] # filter by layer prefix ("http" matches http.fsm, http.semantic, …)
ttfb_threshold = 2.0 # SEM-006: TTFB limit (seconds)
lifecycle_threshold = 30.0 # SEM-007: total connection time (seconds)
body_size_threshold = 10485760 # SEM-008: response size (bytes)
```
### Violation
```python
@dataclass(frozen=True, slots=True)
class Violation:
rule_id: str # "HF-001", "G-010"
severity: Severity # error, warning, info, perf
message: str # human-readable description
hint: str # suggestion for fixing
scope_type: str # "http", "websocket", "lifespan"
path: str # "/api/users"
method: str # "GET"
```
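A common pattern is to collect violations through `on_violation` and fail only on the severe ones. The filtering itself is plain Python over the fields above; the `Violation` class here is re-declared locally (with a subset of fields) so the sketch is self-contained:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Violation:  # local mirror of the fields shown above, for illustration
    rule_id: str
    severity: str
    message: str

collected: list[Violation] = []
# in real use, pass on_violation=collected.append when calling inspect(app, ...)

collected.append(Violation("HF-003", "error", "Duplicate http.response.start"))
collected.append(Violation("WE-008", "warning", "both 'bytes' and 'text' set"))

errors = [v for v in collected if v.severity == "error"]
print(len(errors))  # 1 -- could fail the build in strict CI
```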
## pytest Plugin
```bash
pip install asgion[pytest]
```
```python
async def test_my_app(asgi_inspect):
app = asgi_inspect(my_app)
async with httpx.AsyncClient(transport=ASGITransport(app)) as client:
resp = await client.get("/users")
assert app.violations == []
```
Auto-check violations with a marker:
```python
@pytest.mark.asgi_validate(min_severity="error")
async def test_strict(asgi_inspect):
app = asgi_inspect(my_app)
# ... drive the app — violations checked automatically at teardown
```
Or enable globally for all tests using `asgi_inspect`:
```bash
pytest --asgi-strict
pytest --asgi-strict --asgi-min-severity warning
```
## Comparison
| Feature | asgion | asgiref.testing | Manual testing |
|---------|--------|-----------------|----------------|
| Scope validation | 71 rules | basic | none |
| Event schema checks | 43 rules | none | manual |
| State machine (FSM) | 34 rules | none | none |
| Semantic checks | 13 rules | none | none |
| Extension validation | 3 rules | none | none |
| pytest plugin | yes | no | n/a |
| Real-time callbacks | yes | no | n/a |
| CLI tool | yes | no | no |
| Zero dependencies | yes | no (asgiref) | n/a |
| Rule suppression | per-rule | no | n/a |
## Contributing
```bash
git clone https://github.com/ack1d/asgion.git
cd asgion
uv sync --group dev
uv run pytest # run tests
uv run ruff check src/ # lint
uv run mypy src/ # type check
```
## License
MIT
| text/markdown | Andrei Satseviсh | Andrei Satseviсh <satsevich.andrei@gmail.com> | null | null | null | asgi, validator, inspector, protocol, fastapi, starlette, litestar, django, testing | [
"Development Status :: 3 - Alpha",
"Framework :: AsyncIO",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Pr... | [] | null | null | >=3.12 | [] | [] | [] | [
"click>=8.3; extra == \"cli\"",
"pytest>=8.3; extra == \"pytest\""
] | [] | [] | [] | [
"Homepage, https://github.com/ack1d/asgion",
"Repository, https://github.com/ack1d/asgion.git",
"Issues, https://github.com/ack1d/asgion/issues",
"Changelog, https://github.com/ack1d/asgion/blob/main/CHANGELOG.md",
"Documentation, https://github.com/ack1d/asgion#readme"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-18T23:49:43.794617 | asgion-0.3.0.tar.gz | 33,739 | fa/81/5136b133f72573fc6e3439dac63dd747ee4fae4a5a8620bcb273434c6030/asgion-0.3.0.tar.gz | source | sdist | null | false | 7787dc9aec0be08d81151f7e60281e1d | 60643a8e32a6b1a409eff9e783717765125617ada6de2712dd88fcadb3be8bf2 | fa815136b133f72573fc6e3439dac63dd747ee4fae4a5a8620bcb273434c6030 | MIT | [] | 268 |
2.4 | neonlink-client | 1.5.10 | Official Python client for NeonLink message broker | # NeonLink Python SDK
Official Python client library for NeonLink message broker. Full feature parity with the Go SDK.
## Installation
```bash
pip install neonlink-client
```
Or install from source:
```bash
cd py3
pip install -e .
```
For development:
```bash
pip install -e ".[dev]"
```
## Requirements
- Python >= 3.10
- gRPC runtime (`grpcio`)
- Protocol Buffers (`protobuf`)
## Quick Start
### Publishing Messages
```python
import asyncio
from neonlink import NeonLinkClient, ConfigBuilder, MessageBuilder
async def main():
config = (
ConfigBuilder()
.with_service_name("my-service")
.with_address("neonlink:9090")
.build()
)
async with NeonLinkClient(config) as client:
request = (
MessageBuilder()
.with_stream("my-stream")
.with_message_type("MyMessage")
.with_json_payload({"key": "value"})
.with_idempotency_fields("user-123", "action-456")
.build()
)
response = await client.publish(request)
print(f"Published: {response.message_id}")
asyncio.run(main())
```
### Subscribing to Streams
```python
import asyncio
from neonlink import NeonLinkClient, ConfigBuilder
from neoncontract.messaging.v1 import messaging_pb2
async def main():
config = (
ConfigBuilder()
.with_service_name("my-worker")
.with_address("neonlink:9090")
.build()
)
async with NeonLinkClient(config) as client:
client.enable_auto_reconnect()
request = messaging_pb2.SubscribeRequest(
stream="my-stream",
consumer_group="my-workers",
)
async for message in client.subscribe(request):
print(f"Received: {message.message_id}")
# Process message...
await client.ack(messaging_pb2.AckRequest(
message_id=message.message_id,
stream=message.stream,
))
asyncio.run(main())
```
## Configuration
### From Code
```python
from neonlink import ConfigBuilder
config = (
ConfigBuilder()
.with_service_name("my-service")
.with_address("neonlink:9090")
.with_timeout(30.0)
.with_retry_policy(max_retries=3, initial_backoff=0.1)
.with_tls(
cert_path="/path/to/cert.pem",
key_path="/path/to/key.pem",
ca_path="/path/to/ca.pem",
)
.build()
)
```
### From Environment Variables
```python
from neonlink import NeonLinkConfig
config = NeonLinkConfig.from_env()
```
Environment variables:
- `NEONLINK_SERVICE_NAME` (required)
- `NEONLINK_ADDRESS` (default: `localhost:9090`)
- `NEONLINK_TIMEOUT` (default: `30`)
- `NEONLINK_TLS_CERT`
- `NEONLINK_TLS_KEY`
- `NEONLINK_TLS_CA`
- `NEONLINK_TLS_INSECURE`
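As a rough illustration of what `from_env` presumably reads (the exact parsing logic lives in the SDK; this standalone sketch only mirrors the variables and defaults listed above):

```python
import os

def load_neonlink_env() -> dict:
    # Hypothetical re-implementation of the env-var mapping, for illustration only
    service_name = os.environ.get("NEONLINK_SERVICE_NAME")
    if not service_name:
        raise ValueError("NEONLINK_SERVICE_NAME is required")
    return {
        "service_name": service_name,
        "address": os.environ.get("NEONLINK_ADDRESS", "localhost:9090"),
        "timeout": float(os.environ.get("NEONLINK_TIMEOUT", "30")),
        "tls_insecure": os.environ.get("NEONLINK_TLS_INSECURE", "").lower() in ("1", "true"),
    }

os.environ["NEONLINK_SERVICE_NAME"] = "demo-service"
print(load_neonlink_env()["address"])  # localhost:9090 unless NEONLINK_ADDRESS is set
```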
## Middleware
Add middleware for logging, retries, timeouts, and metrics:
```python
from neonlink import (
NeonLinkClient,
LoggingMiddleware,
RetryMiddleware,
TimeoutMiddleware,
MetricsMiddleware,
)
middlewares = [
LoggingMiddleware(),
RetryMiddleware(max_retries=3),
TimeoutMiddleware(timeout=10.0),
MetricsMiddleware(my_metrics_collector),
]
client = NeonLinkClient(config, middlewares=middlewares)
```
## Idempotency
Use the `MessageBuilder` to ensure idempotent message delivery:
```python
from neonlink import MessageBuilder
# Option 1: Explicit idempotency key
request = (
MessageBuilder()
.with_stream("my-stream")
.with_message_type("MyMessage")
.with_idempotency_key("unique-key-123")
.build()
)
# Option 2: Generate from fields (recommended)
request = (
MessageBuilder()
.with_stream("my-stream")
.with_message_type("MyMessage")
.with_idempotency_fields("user-123", "action-456", "timestamp")
.build()
)
```
## Proto Definitions
The SDK uses protobuf definitions from the `neoncontract` package, which is automatically
installed as a dependency. The package provides all message types for the NeonLink gRPC service.
```python
# Import protobuf types directly from neoncontract
from neoncontract.messaging.v1 import messaging_pb2
# Available types:
# - messaging_pb2.PublishRequest
# - messaging_pb2.PublishResponse
# - messaging_pb2.SubscribeRequest
# - messaging_pb2.StreamMessage
# - messaging_pb2.AckRequest
# - messaging_pb2.AckResponse
# - messaging_pb2.ReleaseRequest
# - messaging_pb2.ReleaseResponse
```
## Development
### Running Tests
```bash
pytest
```
With coverage:
```bash
pytest --cov=neonlink --cov-report=html
```
### Type Checking
```bash
mypy neonlink
```
### Linting
```bash
ruff check neonlink
```
## Examples
See the `examples/` directory for complete examples:
- `publish.py` - Basic publishing
- `subscribe.py` - Subscribing and processing
- `middleware_example.py` - Using middleware
- `finai_integration.py` - FinAI service integration
- `retry_example.py` - Retry behavior
## License
MIT License - see LICENSE file.
| text/markdown | null | LetA Tech <dev@leta.tech> | null | null | null | async, grpc, mellions, message-broker, neonlink, redis-streams | [
"Development Status :: 5 - Production/Stable",
"Framework :: AsyncIO",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: ... | [] | null | null | >=3.10 | [] | [] | [] | [
"grpcio-tools>=1.60.0",
"grpcio>=1.60.0",
"neoncontract-gen>=1.5.2",
"protobuf>=4.25.0",
"grpcio-testing>=1.60.0; extra == \"dev\"",
"mypy>=1.8.0; extra == \"dev\"",
"pytest-asyncio>=0.23.0; extra == \"dev\"",
"pytest-cov>=4.1.0; extra == \"dev\"",
"pytest>=8.0.0; extra == \"dev\"",
"ruff>=0.1.0; ... | [] | [] | [] | [
"Homepage, https://github.com/LetA-Tech/mcfo-neonlink",
"Documentation, https://github.com/LetA-Tech/mcfo-neonlink/tree/main/py3",
"Repository, https://github.com/LetA-Tech/mcfo-neonlink.git",
"Changelog, https://github.com/LetA-Tech/mcfo-neonlink/blob/main/py3/CHANGELOG.md"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-18T23:49:24.153146 | neonlink_client-1.5.10.tar.gz | 28,505 | 86/1d/67080ba965cc891bf965bf48865b683b333acd73d09a3ddd9b8a3d74f24c/neonlink_client-1.5.10.tar.gz | source | sdist | null | false | 32d1f752c93d5c4a956e880e1ad2965e | d29cf0d649e39db023e5b8d2b359503f8ba8717d5e4c4872fe2f0f4d8699e001 | 861d67080ba965cc891bf965bf48865b683b333acd73d09a3ddd9b8a3d74f24c | MIT | [
"LICENSE"
] | 249 |
2.4 | paper-to-md | 0.2.0 | Convert academic PDF papers to clean markdown using Docling + AI cleanup | # pdf2md
Convert academic PDF papers to clean, readable markdown with linked citations, embedded figures, and structured metadata for RAG systems.
## Contents
- [Quick Start](#quick-start) — install and convert a paper
- [Depth Levels](#depth-levels) — control how much processing is applied
- [Direct CLI Usage](#direct-cli-usage) — convert PDFs locally
- [Service Mode](#service-mode) — Docker microservice for remote/homelab use
- [Claude Code Integration](#claude-code-integration) — MCP server + `/convert-paper` command
- [Processing Pipeline](#processing-pipeline) — what happens at each stage
- [Local AI Setup](#local-ai-setup) — run with LM Studio or Ollama
- [Installation](#installation) — extras and requirements
- [Batch Processing](#batch-processing) — convert many papers at once
## Quick Start
```bash
# Install
pip install paper-to-md
# Pre-download Docling ML models (~500MB, one-time)
pdf2md download-models
# Convert a paper (Docling + postprocess + LLM retouch)
pdf2md convert paper.pdf ./output
# Fast conversion (no AI)
pdf2md convert paper.pdf ./output -d low
# Full pipeline with local LLM
pdf2md convert paper.pdf ./output -d high --local
```
## Depth Levels
pdf2md uses a depth-based system to control how much processing is applied:
| Depth | What happens | Speed |
|-------|-------------|-------|
| `low` | Docling extraction + rule-based postprocessing (citations, figures, sections, cleanup) | Fast, no AI |
| `medium` | + LLM retouch (author formatting, lettered section detection) | Moderate |
| `high` | + VLM figure descriptions + code/equation enrichments | Slow |
## Direct CLI Usage
### `pdf2md convert` — Main Conversion
```bash
uv run pdf2md convert paper.pdf ./output [OPTIONS]
```
| Option | Description |
|--------|-------------|
| `-d, --depth` | Analysis depth: `low`, `medium` (default), `high` |
| `-l, --local` | Use local LLM/VLM instead of cloud (Claude) |
| `-p, --provider` | LLM provider: `lm_studio` (default), `ollama` |
| `-m, --model` | Override LLM/VLM model name |
| `--keep-raw` | Save raw Docling extraction alongside processed output |
| `--raw` | Skip all processing, output only raw extraction |
| `--images-scale N` | Image resolution multiplier (default: 2.0) |
| `--min-image-width` | Minimum image width in pixels, filters logos (default: 200) |
| `--min-image-height` | Minimum image height in pixels (default: 150) |
| `--min-image-area` | Minimum image area in pixels (default: 40000) |
**Output:**
```
output/paper/
├── paper.md # Final processed markdown
├── paper_raw.md # Raw Docling output (if --keep-raw)
├── img/
│ ├── figure1.png
│ ├── figure2.png
│ └── ...
├── enrichments.json # All metadata (depth=high only)
├── figures.json # Figure metadata
├── equations.json # Equations with LaTeX
└── code_blocks.json # Code with language detection
```
### `pdf2md retouch` — LLM Cleanup Only
Run LLM-based cleanup on an existing markdown file:
```bash
uv run pdf2md retouch paper.md [OPTIONS]
```
| Option | Description |
|--------|-------------|
| `-l, --local` | Use local LLM instead of cloud (Claude) |
| `-p, --provider` | LLM provider: `lm_studio`, `ollama` |
| `-m, --model` | Override LLM model name |
| `-i, --images` | Path to images directory (default: `./img`) |
| `-v, --verbose` | Show detailed LLM progress |
The retouch step fixes:
- **Author formatting** — Extracts and formats author names, affiliations, emails
- **Lettered section headers** — Classifies `A. Background` as header vs `A. We conducted...` as sentence
### `pdf2md postprocess` — Rule-Based Fixes Only
```bash
uv run pdf2md postprocess paper.md [OPTIONS]
```
| Option | Description |
|--------|-------------|
| `-i, --images` | Path to images directory (default: `./img`) |
| `-o, --output` | Output path (default: overwrite input file) |
### `pdf2md enrich` — Extract RAG Metadata
```bash
uv run pdf2md enrich paper.pdf ./output [OPTIONS]
```
| Option | Description |
|--------|-------------|
| `--describe` | Generate VLM descriptions for figures |
| `-l, --local` | Use local VLM instead of cloud |
| `-p, --provider` | VLM provider: `lm_studio`, `ollama` |
| `-m, --model` | Override VLM model |
| `--images-scale N` | Image resolution multiplier (default: 2.0) |
## Service Mode
Run pdf2md as a Docker microservice for remote or homelab use. The service provides an HTTP API with Ed25519 signature authentication and async job processing via Redis/arq.
### Docker Deployment
```bash
# Start all services (API, worker, PostgreSQL, Redis)
docker compose up -d --build
# Run database migrations
docker compose exec api alembic upgrade head
# Check logs
docker compose logs -f worker
```
### API Endpoints
All endpoints require Ed25519 signature authentication (see [Auth Setup](#auth-setup)).
| Method | Endpoint | Description |
|--------|----------|-------------|
| `POST` | `/submit_paper` | Upload a PDF and enqueue conversion. Returns `job_id`. |
| `GET` | `/status/{job_id}` | Check job status, progress, and errors. |
| `GET` | `/retrieve/{job_id}` | Download completed results as `tar.gz`. |
**Submit example:**
```bash
curl -X POST http://your-server:8000/submit_paper \
-F "file=@paper.pdf" \
-F "depth=medium" \
-H "Authorization: Signature <base64-sig>" \
-H "X-Timestamp: $(date +%s)" \
-H "X-Client-Id: <your-uuid>"
```
### Auth Setup
The service uses Ed25519 keypairs for authentication. Each client has a UUID and a public key stored in the database; requests are signed with the corresponding private key.
**Signature format:** `METHOD\nPATH\nTIMESTAMP` signed with the client's Ed25519 private key.
**Headers required:**
- `Authorization: Signature <base64-signature>`
- `X-Timestamp: <unix-epoch>`
- `X-Client-Id: <client-uuid>`
Timestamps must be within 5 minutes of server time (configurable via `PDF2MD_SERVICE_AUTH_TIMESTAMP_TOLERANCE_SECONDS`).
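A client-side signing helper might look like this (a sketch, not the service's reference client; `sign` stands in for any Ed25519 implementation, e.g. the `cryptography` package's `Ed25519PrivateKey.sign`, and everything else is stdlib):

```python
import base64
import time

def canonical_message(method, path, timestamp):
    # Signature format from the docs: METHOD\nPATH\nTIMESTAMP
    return f"{method}\n{path}\n{timestamp}".encode()

def auth_headers(client_id, sign, method, path, timestamp=None):
    """Build the three required headers for a signed request."""
    ts = int(timestamp if timestamp is not None else time.time())
    signature = sign(canonical_message(method, path, ts))
    return {
        "Authorization": "Signature " + base64.b64encode(signature).decode(),
        "X-Timestamp": str(ts),
        "X-Client-Id": client_id,
    }
```

Pass the resulting dict as headers to your HTTP client alongside the multipart upload.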
### Service Environment Variables
| Variable | Default | Description |
|----------|---------|-------------|
| `PDF2MD_SERVICE_DATABASE_URL` | `postgresql+asyncpg://...` | PostgreSQL connection string |
| `PDF2MD_SERVICE_REDIS_URL` | `redis://localhost:6379` | Redis connection string |
| `PDF2MD_SERVICE_DATA_DIR` | `/data` | Root data directory |
| `PDF2MD_SERVICE_UPLOAD_DIR` | `/data/uploads` | PDF upload storage |
| `PDF2MD_SERVICE_AUTH_TIMESTAMP_TOLERANCE_SECONDS` | `300` | Signature freshness window |
| `PDF2MD_SERVICE_WORKER_MAX_JOBS` | `1` | Concurrent conversion jobs |
## Claude Code Integration
### MCP Server
The `mcp/server.py` script exposes the service API as MCP tools for Claude Code. It loads credentials from a `.env` file in the repo root.
**Register the server:**
```bash
claude mcp add --scope user pdf2md-service -- uv run /path/to/paper-to-md/mcp/server.py
```
**Required `.env` variables** (not committed — see `.env.example`):
```
PDF2MD_SERVICE_URL=http://your-server:8000
PDF2MD_CLIENT_ID=00000000-0000-0000-0000-000000000001
PDF2MD_PRIVATE_KEY=<base64-ed25519-private-key>
```
**Tools provided:**
| Tool | Description |
|------|-------------|
| `pdf2md_submit` | Upload a PDF and start conversion. Returns job ID. |
| `pdf2md_status` | Poll job status and progress. |
| `pdf2md_retrieve` | Download and extract completed results. |
### `/convert-paper` Command
A project-level slash command in `.claude/commands/convert-paper.md` that orchestrates the full conversion workflow.
```
/convert-paper path/to/paper.pdf
```
This submits the PDF, polls for completion, downloads results, and reports extracted files. Auto-discovered by Claude Code when working in this repo.
## Processing Pipeline
### 1. Docling Extraction
Uses [Docling](https://github.com/DS4SD/docling) (ML-based) to extract:
- Text with structure (headings, paragraphs, lists)
- Tables with formatting
- Figures as images
- Equations
### 2. Deterministic Post-Processing
Applied at all depth levels (including `low`):
**Citations:**
- `[7]` → `[[7]](#ref-7)` (clickable links)
- `[11]-[14]` → expanded to four individual linked citations
- Anchors added to reference entries for link targets
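The citation transforms above amount to two regex passes: range expansion first, then anchor linking. A standalone sketch (illustrative, not the package's actual implementation):

```python
import re

def link_citations(text):
    # Pass 1: expand ranges like [11]-[14] into individual citations
    def expand(m):
        lo, hi = int(m.group(1)), int(m.group(2))
        return "".join(f"[{n}]" for n in range(lo, hi + 1))
    text = re.sub(r"\[(\d+)\]-\[(\d+)\]", expand, text)
    # Pass 2: turn each [7] into a clickable anchor link [[7]](#ref-7)
    return re.sub(r"\[(\d+)\]",
                  lambda m: f"[[{m.group(1)}]](#ref-{m.group(1)})", text)
```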
**Sections:**
- `Abstract -Text here` → `## Abstract\n\nText here`
- Hierarchical section numbering → proper markdown headers
**Figures:**
- Embeds figure images (Markdown image syntax) above line-start captions
- Each figure embedded exactly once
**Bibliography:**
- Adds `<a id="ref-N"></a>` anchors to reference entries
- Ensures proper spacing between entries
**Cleanup:**
- Fixes ligatures (ﬁ→fi, ﬂ→fl)
- Removes GLYPH artifacts from OCR
- Fixes hyphenated word breaks across lines
- Merges split paragraphs
- Removes OCR garbage near figure embeds
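As one example, the hyphenated-break fix can be approximated with a single regex (a simplification; the real cleanup needs more guards):

```python
import re

def fix_hyphen_breaks(text):
    # Rejoin words split across line breaks: "informa-\ntion" -> "information"
    return re.sub(r"(\w+)-\n(\w+)", r"\1\2", text)
```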
### 3. LLM Retouch (medium, high depth)
Uses LLM to fix issues that need judgment:
- **Author formatting** — Extracts names, affiliations, emails into structured `## Authors` section
- **Lettered sections** — Classifies `A. Background` as header vs `A. We conducted...` as sentence
### 4. VLM + Enrichments (high depth)
Extracts structured data for RAG:
| File | Contents |
|------|----------|
| `figures.json` | Caption, classification, VLM description, page number |
| `equations.json` | LaTeX representation, surrounding context |
| `code_blocks.json` | Code text, detected language |
| `enrichments.json` | All of the above combined |
## Local AI Setup
pdf2md supports running entirely locally using LM Studio or Ollama:
```bash
# Using LM Studio (default local provider)
export LM_STUDIO_HOST=http://localhost:1234/v1
uv run pdf2md convert paper.pdf ./output --local
# Using Ollama
export OLLAMA_HOST=http://localhost:11434
uv run pdf2md convert paper.pdf ./output --local --provider ollama
# Override model
uv run pdf2md convert paper.pdf ./output --local --model qwen3-8b
# VLM on a separate node
export PDF2MD_VLM_HOST=http://192.168.1.100:1234/v1
uv run pdf2md convert paper.pdf ./output -d high --local
```
### Environment Variables
| Variable | Default | Description |
|----------|---------|-------------|
| `PDF2MD_TEXT_MODEL` | `qwen3-4b` | Text LLM for retouch |
| `PDF2MD_VLM_MODEL` | `qwen3-vl-4b` | VLM for figure descriptions |
| `PDF2MD_PROVIDER` | `lm_studio` | Default provider |
| `LM_STUDIO_HOST` | `http://localhost:1234/v1` | LM Studio endpoint |
| `PDF2MD_VLM_HOST` | `http://localhost:1234/v1` | VLM endpoint (can differ from text) |
| `OLLAMA_HOST` | `http://localhost:11434` | Ollama endpoint |
## Installation
```bash
# Standard install — includes Docling, Claude Agent SDK, and LiteLLM
pip install paper-to-md
# Pre-download Docling ML models (~500MB, one-time)
pdf2md download-models
# Docker microservice dependencies
pip install paper-to-md[service]
# Development (pytest + ruff)
pip install paper-to-md[dev]
```
### Requirements
- Python 3.10-3.12
- [uv](https://docs.astral.sh/uv/) recommended for dependency management
## Batch Processing
```bash
# Convert all PDFs in a directory
uv run python scripts/batch_convert.py papers/ output/
# Fast batch (no AI)
uv run python scripts/batch_convert.py papers/ output/ --depth low
# Full batch with local LLM
uv run python scripts/batch_convert.py papers/ output/ --depth high --local
# Dry run to see what would be processed
uv run python scripts/batch_convert.py papers/ output/ --dry-run
```
## License
MIT
| text/markdown | null | Jaime Cernuda <jcernudagarcia@hawk.iilinoistech.edu> | null | null | null | academic, docling, extraction, markdown, pdf | [
"Development Status :: 3 - Alpha",
"Environment :: Console",
"Intended Audience :: Science/Research",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: ... | [] | null | null | <3.13,>=3.10 | [] | [] | [] | [
"claude-agent-sdk>=0.1.14",
"docling>=2.0.0",
"litellm>=1.50.0",
"pillow>=10.0.0",
"pymupdf>=1.26.6",
"rich>=13.0.0",
"typer>=0.15.0",
"aiosqlite>=0.20.0; extra == \"dev\"",
"httpx>=0.27.0; extra == \"dev\"",
"pytest-asyncio>=0.24.0; extra == \"dev\"",
"pytest>=8.0.0; extra == \"dev\"",
"ruff>... | [] | [] | [] | [
"Homepage, https://github.com/JaimeCernuda/paper-to-md",
"Repository, https://github.com/JaimeCernuda/paper-to-md"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-18T23:48:24.692586 | paper_to_md-0.2.0.tar.gz | 259,763 | 84/b8/1af6243efc1e05408c46c56933995d71e1700c34cd8ab053f6fa41f669e6/paper_to_md-0.2.0.tar.gz | source | sdist | null | false | 27b148bb404cc0d0f322d62dcab4d72b | 346426d5888361dc7d55164fafe853dd932610d1dfc8fea0664e9cf3879098af | 84b81af6243efc1e05408c46c56933995d71e1700c34cd8ab053f6fa41f669e6 | MIT | [
"LICENSE"
] | 272 |
2.4 | astronomo | 0.20.0 | A modern Gemini browser for the terminal | # Astronomo
**A modern Gemini browser for the terminal**
[](https://www.python.org/downloads/)
[](https://www.gnu.org/licenses/gpl-3.0)
[](https://pypi.org/project/astronomo/)
---
## What is Astronomo?
[Gemini](gemini://geminiprotocol.net/) is a lightweight internet protocol that sits between Gopher and the web. It prioritizes simplicity, privacy, and user autonomy—no tracking, no ads, no JavaScript. Just clean, focused content served over encrypted connections.
**Astronomo** brings the Gemini experience to your terminal with a modern, polished interface. Built on [Textual](https://textual.textualize.io/)—a powerful Python TUI framework—Astronomo offers features you'd expect from a desktop browser: full mouse support, syntax highlighting, beautiful themes, and responsive async networking.
Whether you're exploring Geminispace for the first time or looking for a better terminal client, Astronomo delivers a seamless browsing experience without leaving your command line.
<img width="1731" height="1017" alt="image" src="https://github.com/user-attachments/assets/88ec1dfe-e6b4-4f34-8720-e17864ce9c50" />
---
## Why Astronomo?
| Feature | Astronomo | Amfora | Bombadillo |
|---------|-----------|--------|------------|
| **UI Framework** | Textual (modern) | ncurses | ncurses |
| **Mouse Support** | Full | Limited | No |
| **Syntax Highlighting** | Yes | Yes | No |
| **Built-in Themes** | 10 | User-contributed | No |
| **Bookmark Folders** | Yes | Flat list | Flat list |
| **Client Certificates** | Yes | Yes | No |
| **TOFU Security** | Yes | Yes | No |
| **Tabs** | Yes | Yes | No |
| **Multi-protocol** | Gemini + Gopher + Finger + Nex + Spartan | Proxying | Gopher+Finger |
| **Mail (Misfin)** | Yes (GMAP) | No | No |
| **Feed Reader** | Yes (RSS/Atom) | No | No |
| **Development** | Active | Maintenance mode | Maintenance mode |
| **Language** | Python | Go | Go |
**Key advantages:**
- **Modern TUI** — Textual provides superior rendering, full mouse support, and async I/O compared to legacy ncurses
- **Active development** — Amfora is in maintenance mode; Astronomo is actively evolving
- **Python extensibility** — Easy to hack, extend, or integrate with your workflow
- **Beautiful out of the box** — 10 popular themes included (Nord, Dracula, Catppuccin, and more)
---
## Features
### Styled Gemtext Rendering
Every Gemtext element is beautifully rendered with distinct visual styling:
- **Headings** — Three levels with different colors and formatting
- **Links** — Colored and underlined with arrow indicators when selected
- **Blockquotes** — Bordered and italicized for clear distinction
- **Lists** — Properly formatted with bullet points
- **Preformatted blocks** — Distinct background with preserved formatting
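Rendering starts by classifying each line by its leading token, per the Gemtext spec; a minimal sketch (not Astronomo's actual renderer):

```python
def classify_gemtext_line(line):
    """Map a Gemtext line to its element type by leading token."""
    if line.startswith("###"):
        return "heading3"
    if line.startswith("##"):
        return "heading2"
    if line.startswith("#"):
        return "heading1"
    if line.startswith("=>"):
        return "link"
    if line.startswith(">"):
        return "blockquote"
    if line.startswith("* "):
        return "list-item"
    if line.startswith("```"):
        return "preformat-toggle"
    return "text"
```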
### Interactive Navigation
Navigate Geminispace effortlessly:
- **Keyboard** — Arrow keys to move between links, Enter to activate
- **Mouse** — Click any link to follow it
- **Visual feedback** — Clear focus indicators show your current selection
### Syntax Highlighting
Code blocks with language hints (` ```python `, ` ```rust `, etc.) are automatically syntax highlighted using Pygments, making technical content easy to read.
### History Navigation
- **Back/Forward** — Navigate your browsing history with Backspace and Shift+Backspace
- **Position memory** — Scroll position and link selection are preserved
- **Fast navigation** — Pages are cached in memory, no re-fetching required
### Bookmarks System
Organize your favorite capsules:
- **Folder organization** — Group bookmarks into collapsible folders
- **Sidebar** — Toggle with Ctrl+B for quick access
- **Quick add** — Bookmark the current page with Ctrl+D
- **Full management** — Edit titles, move between folders, delete
- **Persistence** — Saved as TOML in your config directory
### Client Certificates / Identities
Full support for authenticated Gemini sites:
- **Identity management** — Create, edit, and delete client certificates
- **URL assignment** — Associate identities with specific capsules
- **Settings UI** — Manage everything from an in-app settings panel
### TOFU Certificate Verification
Trust On First Use security via [Nauyaca](https://github.com/alanbato/nauyaca):
- **Automatic trust** — Certificates are stored on first connection
- **Change detection** — Warns if a server's certificate changes unexpectedly
- **Known Hosts management** — View, search, and revoke trusted servers in Settings (Ctrl+,)
- **SQLite storage** — Known hosts persisted at `~/.nauyaca/tofu.db`
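The core TOFU check is small enough to sketch (a toy illustration of the idea; Nauyaca's actual schema and API differ):

```python
import hashlib
import sqlite3

class TofuStore:
    """Minimal trust-on-first-use check over certificate fingerprints."""
    def __init__(self, path=":memory:"):
        self.db = sqlite3.connect(path)
        self.db.execute(
            "CREATE TABLE IF NOT EXISTS hosts (host TEXT PRIMARY KEY, fp TEXT)")

    def check(self, host, cert_der):
        fp = hashlib.sha256(cert_der).hexdigest()
        row = self.db.execute(
            "SELECT fp FROM hosts WHERE host = ?", (host,)).fetchone()
        if row is None:
            # First connection: remember this fingerprint
            self.db.execute("INSERT INTO hosts VALUES (?, ?)", (host, fp))
            self.db.commit()
            return "trusted-first-use"
        return "ok" if row[0] == fp else "certificate-changed"
```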
### Theming
Choose from 10 built-in themes:
- `textual-dark` (default)
- `textual-light`
- `textual-ansi`
- `nord`
- `gruvbox`
- `tokyo-night`
- `monokai`
- `dracula`
- `catppuccin-mocha`
- `solarized-light`
### Interactive Input
Seamless support for Gemini's input requests:
- **Search queries** — Status 10 prompts for text input
- **Sensitive input** — Status 11 for password-style masked entry
- **Byte counter** — Visual feedback for URL length limits
### Inline Images (Optional)
Display images directly in the terminal as ANSI art:
- **Formats supported** — PNG, JPEG, GIF, WebP
- **Quality settings** — Low, Medium, High rendering options
- **Optional dependency** — Install with `pip install astronomo[chafa]`
### Multi-Protocol Support
Browse beyond Gemini with native support for classic protocols:
**Gopher Protocol:**
- **Directory browsing** — Navigate Gopher menus with type indicators ([DIR], [TXT], [SEARCH])
- **Text files** — View Gopher text documents with proper formatting
- **Search support** — Interactive search queries (type 7)
- **Binary downloads** — Download files to `~/Downloads`
**Finger Protocol:**
- **User queries** — Look up user information from Finger servers
- **Flexible URLs** — Supports both `finger://user@host` and `finger://host/user` formats
**Smart URL Detection:**
- `user@host` → automatically uses `finger://`
- `gopher.example.com` → automatically uses `gopher://`
- `misfin:user@host` → opens mail compose
- Everything else → defaults to `gemini://`
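The detection rules above can be sketched as a small function (illustrative only, not Astronomo's actual code):

```python
def infer_url(raw):
    """Guess a scheme for bare input, mirroring the rules listed above."""
    if "://" in raw or raw.startswith("misfin:"):
        return raw  # explicit scheme already present
    if "@" in raw:
        return f"finger://{raw}"       # user@host -> Finger
    host = raw.split("/", 1)[0]
    if host.startswith("gopher."):
        return f"gopher://{raw}"       # gopher.example.com -> Gopher
    return f"gemini://{raw}"           # everything else -> Gemini
```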
### GMAP Mail
Full terminal mail client for Misfin mailboxes via the GMAP protocol:
- **Three-pane interface** — Tags sidebar, message list, and reading pane (Ctrl+E)
- **Compose and reply** — Send messages via Misfin protocol with Gemtext formatting
- **Tag management** — Archive, trash, unread, and custom tags
- **Offline access** — SQLite message cache for fast queries
- **Certificate auth** — Uses client certificates for authentication
- **Misfin links** — `misfin:` URLs in pages open compose with pre-filled recipients
### RSS/Atom Feeds
Subscribe to and read feeds from Geminispace:
- **Feed reader** — Browse articles with read/unread tracking (Ctrl+J)
- **Folder organization** — Group subscriptions into folders
- **OPML support** — Import and export feed lists
### Configuration
- **XDG-compliant** — Config stored at `~/.config/astronomo/`
- **TOML format** — Human-readable with helpful comments
- **Settings screen** — Configure without editing files manually
---
## Installation
### Recommended: Install with uv
```bash
uv tool install astronomo
```
### Alternative: Install with pipx
```bash
pipx install astronomo
```
### From Source (for contributors)
```bash
git clone https://github.com/alanbato/astronomo.git
cd astronomo
uv sync
uv run astronomo
```
---
## Quick Start
```bash
# Launch Astronomo
astronomo
# Open a Gemini capsule
astronomo gemini://geminiprotocol.net/
# Browse a Gopher server
astronomo gopher://gopher.floodgap.com/
# Query a Finger server (smart detection)
astronomo user@example.com
# Use a custom config file
astronomo --config ~/my-config.toml
```
### Capsules to Explore
New to Gemini? Here are some great starting points:
- `gemini://geminiprotocol.net/` — Official Gemini protocol documentation
- `gemini://geminispace.info/` — Search engine for Geminispace
- `gemini://kennedy.gemi.dev/` — Gemini search and discovery
- `gemini://rawtext.club/` — Community and hosting
---
## Keyboard Shortcuts
| Key | Action |
|-----|--------|
| `Enter` | Activate selected link |
| `Left` / `Right` | Navigate between links |
| `Backspace` | Go back in history |
| `Shift+Backspace` | Go forward in history |
| `Ctrl+B` | Toggle bookmarks sidebar |
| `Ctrl+D` | Bookmark current page |
| `Ctrl+E` | Open mail screen |
| `Ctrl+J` | Open feeds screen |
| `Ctrl+K` | Quick navigation (fuzzy finder) |
| `Ctrl+T` | New tab |
| `Ctrl+W` | Close tab |
| `Ctrl+S` | Save page snapshot |
| `Ctrl+,` | Open settings |
| `Ctrl+Q` | Quit |
---
## Configuration
Astronomo stores configuration at `~/.config/astronomo/config.toml`:
```toml
# Astronomo Configuration
[appearance]
# Available themes: textual-dark, textual-light, textual-ansi, nord,
# gruvbox, tokyo-night, monokai, dracula,
# catppuccin-mocha, solarized-light
theme = "textual-dark"
# Enable syntax highlighting in code blocks
syntax_highlighting = true
[browsing]
# Default home page (uncomment to set)
# home_page = "gemini://geminiprotocol.net/"
# Request timeout in seconds
timeout = 30
# Maximum redirects to follow
max_redirects = 5
```
Bookmarks are stored separately at `~/.config/astronomo/bookmarks.toml`.
---
## Roadmap
Astronomo is actively developed. Here's what's coming next:
**Planned:**
- **Page Search** — Find text within pages (Ctrl+F)
- **Downloads** — Save pages and files to disk
- **Custom Keybindings** — Vi/Emacs-style key configurations
---
## Contributing
Contributions are welcome! Astronomo is built with:
- **[Textual](https://textual.textualize.io/)** — Modern Python TUI framework
- **[Nauyaca](https://github.com/alanbato/nauyaca)** — Gemini protocol library
- **[Titlani](https://github.com/alanbato/titlani)** — Misfin protocol library
- **[pytest](https://pytest.org/)** — Testing framework
### Development Setup
```bash
git clone https://github.com/alanbato/astronomo.git
cd astronomo
uv sync --group dev
uv run pre-commit install
# Run the app
uv run astronomo
# Run tests
uv run pytest
# Run with Textual devtools
uv run textual run --dev src/astronomo/astronomo.py
```
---
## Links
**Protocols:**
- [Gemini Protocol](gemini://geminiprotocol.net/) — Learn about the Gemini protocol
- [Gopher Protocol](https://en.wikipedia.org/wiki/Gopher_(protocol)) — The classic menu-driven protocol
- [Finger Protocol](https://en.wikipedia.org/wiki/Finger_(protocol)) — User information lookup protocol
**Development:**
- [Textual Documentation](https://textual.textualize.io/) — The TUI framework powering Astronomo
- [Nauyaca](https://github.com/alanbato/nauyaca) — Gemini protocol library for Python
- [Titlani](https://github.com/alanbato/titlani) — Misfin protocol library for Python
| text/markdown | Alan Velasco | Alan Velasco <alan@alanbato.com> | null | null | null | null | [] | [] | null | null | >=3.13 | [] | [] | [] | [
"emoji>=2.0.0",
"feedparser>=6.0.0",
"humanize>=4.0.0",
"nauyaca>=0.1.0",
"textual[syntax]>=6.6.0",
"tomli-w>=1.0.0",
"tomli>=2.0.0; python_full_version < \"3.11\"",
"typing-extensions>=4.0.0; python_full_version < \"3.11\"",
"typer>=0.20.0",
"mapilli>=0.1.1",
"mototli>=0.1.1",
"teyaotlani>=0.... | [] | [] | [] | [] | twine/6.1.0 CPython/3.13.7 | 2026-02-18T23:47:00.868705 | astronomo-0.20.0.tar.gz | 124,452 | ba/cb/530114c2c6c2d9288c464606526af6d99713d2384cba6918cf5fe7ad0cd8/astronomo-0.20.0.tar.gz | source | sdist | null | false | 121e057c804a272ca2a06b3a53e092c5 | ee1ee878fce764e2f07949372f148afe310992b89d7b30c681427749efc90250 | bacb530114c2c6c2d9288c464606526af6d99713d2384cba6918cf5fe7ad0cd8 | GPL-3.0-or-later | [] | 257 |
2.3 | BayesFlux | 0.8.1 | Bayesian Fast Linear algebra sUbspace eXtraction in JAX | # BayesFlux
BayesFlux is a JAX Python package that provides Fast Linear algebra sUbspace eXtraction for Bayesian inverse problems, built on top of [randLAX](https://github.com/joshuawchen/randLAX).
## Features
- Active Subspace for parameter dimension reduction
- Informative Output Subspace for data dimension reduction
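For context, the classic active-subspace construction eigendecomposes the averaged outer product of log-likelihood gradient samples and keeps the leading directions. A plain NumPy sketch of that linear algebra (BayesFlux itself works in JAX and uses randomized solvers from randLAX, so this is conceptual only):

```python
import numpy as np

def active_subspace(grads, r):
    """Return the top-r active directions from gradient samples.

    grads: (N, d) array, one log-likelihood gradient per row.
    """
    C = grads.T @ grads / grads.shape[0]   # (d, d) second-moment matrix
    eigvals, eigvecs = np.linalg.eigh(C)   # ascending eigenvalues
    return eigvecs[:, ::-1][:, :r]         # leading r eigenvectors
```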
## Installation
### Core Installation (JAX-only functionality)
You can install the core BayesFlux package using pip:

    pip install bayesflux

This installs the JAX-based functionality only and does not require FEniCS or hippylib.
### Installation with hippylib Support (Requires Fenics)
Some BayesFlux functionality depends on hippylib, which requires FEniCS 2019.1.
FEniCS has system-level dependencies and cannot be reliably installed via pip alone,
so you must first create a conda environment.

Step 1 — Create a FEniCS environment:

    conda create -n fenics-2019.1_env -c conda-forge fenics==2019.1.0
    conda activate fenics-2019.1_env

Step 2 — Install BayesFlux with the hippylib extra:

    pip install bayesflux[hippylib]
This installs:
- hippylib
- hickle
- bayesflux
Make sure the conda environment is activated before running pip install.
## For Developers
If your software depends on BayesFlux with hippylib functionality, declare the dependency in your pyproject.toml as:

    dependencies = [
        "bayesflux[hippylib]>=<minimum_version>"
    ]

However, your users must still create the FEniCS conda environment before installing your software:

    conda create -n fenics-2019.1_env -c conda-forge fenics==2019.1.0
    conda activate fenics-2019.1_env
    pip install your_package

Important: pip cannot install FEniCS. Any software depending on bayesflux[hippylib] must document the required conda environment setup.
## Requirements
- Python >= 3.9
| text/markdown | Joshua Chen, Michael Brennan, Thomas O'Leary-Roseberry | Joshua Chen <joshuawchen@icloud.com>, Michael Brennan <mcbrenn@mit.edu>, Thomas O'Leary-Roseberry <tom.olearyroseberry@utexas.edu> | null | null | MIT License
Copyright (c) 2025 Joshua Chen
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE. | null | [] | [] | null | null | >=3.9 | [] | [] | [] | [
"jax",
"jaxlib",
"randlax>=0.3.2",
"pytest; extra == \"dev\"",
"black; extra == \"dev\"",
"isort; extra == \"dev\"",
"flake8; extra == \"dev\"",
"flake8-pyproject; extra == \"dev\"",
"hickle; extra == \"hippylib\"",
"hippylib>=3.1.0; extra == \"hippylib\""
] | [] | [] | [] | [
"Homepage, https://github.com/joshuawchen/BayesFlux",
"Repository, https://github.com/joshuawchen/BayesFlux"
] | twine/6.1.0 CPython/3.9.19 | 2026-02-18T23:46:50.309386 | bayesflux-0.8.1.tar.gz | 16,662 | 4a/cc/260feb7df1092c4aa10bc81f2a132d60ec7ffc521b307ee0246fcade680a/bayesflux-0.8.1.tar.gz | source | sdist | null | false | 0ab0dbef97121a87e6d6fad74a7fa705 | 397164b5d5b61335eb5e0d9ca2cfd43d5d75ea0f842eb85b2aa2d4582a9574ac | 4acc260feb7df1092c4aa10bc81f2a132d60ec7ffc521b307ee0246fcade680a | null | [] | 0 |
2.4 | tigrbl | 0.3.16.dev4 | A modern pure ASGI/WSGI Python framework for building schema-first REST and JSON-RPC APIs with SQLAlchemy models, typed validation, lifecycle hooks, and engine extension support. | 
<p align="center">
<a href="https://pypi.org/project/tigrbl/">
<img src="https://img.shields.io/pypi/dm/tigrbl" alt="PyPI - Downloads"/></a>
<a href="https://hits.sh/github.com/swarmauri/swarmauri-sdk/tree/master/pkgs/standards/tigrbl/">
<img alt="Hits" src="https://hits.sh/github.com/swarmauri/swarmauri-sdk/tree/master/pkgs/standards/tigrbl.svg"/></a>
<a href="https://pypi.org/project/tigrbl/">
<img src="https://img.shields.io/pypi/pyversions/tigrbl" alt="PyPI - Python Version"/></a>
<a href="https://pypi.org/project/tigrbl/">
<img src="https://img.shields.io/pypi/l/tigrbl" alt="PyPI - License"/></a>
<a href="https://pypi.org/project/tigrbl/">
<img src="https://img.shields.io/pypi/v/tigrbl?label=tigrbl&color=green" alt="PyPI - tigrbl"/></a>
</p>
---
# Tigrbl 🐅🐂
A high-leverage ASGI meta-framework that turns plain SQLAlchemy models into a fully-featured REST+RPC surface with near-zero boilerplate. 🚀
## Features ✨
- ⚡ Zero-boilerplate CRUD for SQLAlchemy models
- 🔌 Unified REST and RPC endpoints from a single definition
- 🪝 Hookable phase system for deep customization
- 🧩 Pluggable engine and provider abstractions
- 🚀 Built as an ASGI-native framework with Pydantic-powered schema generation
## Terminology 📚
- **Tenant** 🏢 – a namespace used to group related resources.
- **Principal** 👤 – an owner of resources, such as an individual user or an organization.
- **Resource** 📦 – a logical collection of data or functionality exposed by the API.
- **Engine** ⚙️ – the database connection and transaction manager backing a resource.
- **Model / Table** 🧱 – the ORM or database representation of a resource's records.
- **Column** 📏 – a field on a model that maps to a table column.
- **Operation** 🛠️ – a verb-driven action executed against a resource.
- **Hook** 🪝 – a callback that runs during a phase to customize behavior.
- **Phase** ⏱️ – a step in the request lifecycle where hooks may run.
- **Verb** 🔤 – the canonical name of an operation such as create or read.
- **Runtime** 🧠 – orchestrates phases and hooks while processing a request.
- **Kernel** 🧩 – the core dispatcher invoked by the runtime to handle operations.
- **Schema** 🧬 – the structured shape of request or response data.
- **Request** 📥 – inbound data and context provided to an operation.
- **Response** 📤 – outbound result returned after an operation completes.
## Built-in Verbs 🧰
Tigrbl exposes a canonical set of operations that surface as both REST
and RPC endpoints. The table below summarizes the default REST routes,
RPC methods, arity, and the expected input and output shapes for each
verb. `{resource}` stands for the collection path and `{id}` is the
primary key placeholder.
| Verb | REST route | RPC method | Arity | Input type | Output type |
|------|------------|------------|-------|------------|-------------|
| `create` ➕ | `POST /{resource}` | `Model.create` | collection | dict | dict |
| `read` 🔍 | `GET /{resource}/{id}` | `Model.read` | member | – | dict |
| `update` ✏️ | `PATCH /{resource}/{id}` | `Model.update` | member | dict | dict |
| `replace` ♻️ | `PUT /{resource}/{id}` | `Model.replace` | member | dict | dict |
| `merge` 🧬 | `PATCH /{resource}/{id}` | `Model.merge` | member | dict | dict |
| `delete` 🗑️ | `DELETE /{resource}/{id}` | `Model.delete` | member | – | dict |
| `list` 📃 | `GET /{resource}` | `Model.list` | collection | dict | array |
| `clear` 🧹 | `DELETE /{resource}` | `Model.clear` | collection | dict | dict |
| `bulk_create` 📦➕ | `POST /{resource}` | `Model.bulk_create` | collection | array | array |
| `bulk_update` 📦✏️ | `PATCH /{resource}` | `Model.bulk_update` | collection | array | array |
| `bulk_replace` 📦♻️ | `PUT /{resource}` | `Model.bulk_replace` | collection | array | array |
| `bulk_merge` 📦🧬 | `PATCH /{resource}` | `Model.bulk_merge` | collection | array | array |
| `bulk_delete` 📦🗑️ | `DELETE /{resource}` | `Model.bulk_delete` | collection | dict | dict |
| `bulk_read` | – | – | – | – | – |
### Update, Merge, and Replace 🔄
`update` applies a shallow PATCH: only the supplied fields change and
missing fields are left untouched. `merge` performs a deep merge with
upsert semantics—if the target row is absent it is created, and nested
mapping fields are merged rather than replaced. `replace` follows PUT
semantics, overwriting the entire record and nulling any omitted
attributes.
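The three write semantics can be sketched with plain dicts (a hedged illustration only; Tigrbl's handlers operate on ORM rows, and the function names here are invented):

```python
def shallow_update(record, patch):
    """PATCH / `update`: only the supplied fields change."""
    return {**record, **patch}

def deep_merge(record, patch):
    """`merge`: upsert plus recursive merge of nested mappings."""
    if record is None:  # upsert: create when the target row is absent
        record = {}
    out = dict(record)
    for key, value in patch.items():
        if isinstance(out.get(key), dict) and isinstance(value, dict):
            out[key] = deep_merge(out[key], value)
        else:
            out[key] = value
    return out

def replace(record, new, fields):
    """PUT / `replace`: whole record overwritten, omitted fields nulled."""
    return {f: new.get(f) for f in fields}
```

Note how `deep_merge` keeps sibling keys inside nested mappings, while `replace` nulls anything the caller omits.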
### Verb Overrides 🧭
Because `create` and `bulk_create` share the same collection `POST`
route, enabling `bulk_create` removes the REST `create` endpoint; the
`Model.create` RPC method remains available. Likewise, `bulk_delete`
supersedes `clear` by claiming the collection `DELETE` route. Only one
of each conflicting pair can be exposed at a time. Other verbs coexist
without conflict because they operate on distinct paths or HTTP
methods.
## Phase Lifecycle ⛓️
Tigrbl operations execute through a fixed sequence of phases. Hook chains can
attach handlers at any phase to customize behavior or enforce policy.
| Phase | Description |
|-------|-------------|
| `PRE_TX_BEGIN` ⏳ | Pre-transaction checks before a database session is used. |
| `START_TX` 🚦 | Open a new transaction when one is not already active. |
| `PRE_HANDLER` 🧹 | Validate the request and prepare resources for the handler. |
| `HANDLER` ▶️ | Execute the core operation logic within the transaction. |
| `POST_HANDLER` 🔧 | Post-processing while still inside the transaction. |
| `PRE_COMMIT` ✅ | Final verification before committing; writes are frozen. |
| `END_TX` 🧾 | Commit and close the transaction. |
| `POST_COMMIT` 📌 | Steps that run after commit but before the response is returned. |
| `POST_RESPONSE` 📮 | Fire-and-forget work after the response has been sent. |
| `ON_ERROR` 🛑 | Fallback error handler when no phase-specific chain matches. |
| `ON_PRE_TX_BEGIN_ERROR` 🧯 | Handle errors raised during `PRE_TX_BEGIN`. |
| `ON_START_TX_ERROR` 🧯 | Handle errors raised during `START_TX`. |
| `ON_PRE_HANDLER_ERROR` 🧯 | Handle errors raised during `PRE_HANDLER`. |
| `ON_HANDLER_ERROR` 🧯 | Handle errors raised during `HANDLER`. |
| `ON_POST_HANDLER_ERROR` 🧯 | Handle errors raised during `POST_HANDLER`. |
| `ON_PRE_COMMIT_ERROR` 🧯 | Handle errors raised during `PRE_COMMIT`. |
| `ON_END_TX_ERROR` 🧯 | Handle errors raised during `END_TX`. |
| `ON_POST_COMMIT_ERROR` 🧯 | Handle errors raised during `POST_COMMIT`. |
| `ON_POST_RESPONSE_ERROR` 🧯 | Handle errors raised during `POST_RESPONSE`. |
| `ON_ROLLBACK` ↩️ | Run when the transaction rolls back to perform cleanup. |
### Happy-path flow
```
PRE_TX_BEGIN
|
START_TX
|
PRE_HANDLER
|
HANDLER
|
POST_HANDLER
|
PRE_COMMIT
|
END_TX
|
POST_COMMIT
|
POST_RESPONSE
```
If a phase raises an exception, control transfers to the matching
`ON_<PHASE>_ERROR` chain or falls back to `ON_ERROR`, with `ON_ROLLBACK`
executing when the transaction is rolled back.
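That dispatch rule can be illustrated with a small driver loop (a simplified sketch, not Tigrbl internals; the handlers and error chains below are stand-ins, and the real runtime only fires `ON_ROLLBACK` when a transaction is actually active):

```python
PHASES = ["PRE_TX_BEGIN", "START_TX", "PRE_HANDLER", "HANDLER",
          "POST_HANDLER", "PRE_COMMIT", "END_TX", "POST_COMMIT",
          "POST_RESPONSE"]

def run(handlers, error_chains):
    """Run phases in order; on failure, route to the matching error chain."""
    trace = []
    for phase in PHASES:
        try:
            trace.append(phase)
            handlers.get(phase, lambda: None)()
        except Exception:
            specific = f"ON_{phase}_ERROR"
            trace.append(specific if specific in error_chains else "ON_ERROR")
            trace.append("ON_ROLLBACK")  # simplified: always roll back here
            break
    return trace
```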
## Request → Response Flow Examples 🔀
### REST example
```
Client
|
v
HTTP Request
|
v
ASGI Router
|
v
Tigrbl Runtime
|
v
Operation Handler
|
v
HTTP Response
```
### RPC example
```
Client
|
v
JSON-RPC Request
|
v
RPC Dispatcher
|
v
Tigrbl Runtime
|
v
Operation Handler
|
v
JSON-RPC Response
```
## Hooks 🪝
Hooks allow you to plug custom logic into any phase of a verb. Use the
`hook_ctx` decorator to declare context-only hooks:
```python
from tigrbl import Base, hook_ctx

class Item(Base):
    __tablename__ = "items"

    @hook_ctx(ops="create", phase="PRE_HANDLER")
    async def validate(cls, ctx):
        if ctx["request"].payload.get("name") == "bad":
            raise ValueError("invalid name")
```
The function runs during the `PRE_HANDLER` phase of `create`. The
`ctx` mapping provides request and response objects, a database session,
and values from earlier hooks.
Hooks can also be registered imperatively:
```python
async def audit(ctx):
    ...

class Item(Base):
    __tigrbl_hooks__ = {"delete": {"POST_COMMIT": [audit]}}
```
Running apps expose a `/system/hookz` route that lists all registered
hooks. 📋
## Step Types 🧱
Tigrbl orders work into labeled steps that control how phases run:
* **secdeps** 🔐 – security dependencies executed before other checks. Downstream
applications declare these to enforce auth or policy.
* **deps** 🧩 – general dependencies resolved ahead of phase handlers. Downstream
code provides these to inject request context or resources.
* **sys** 🏗️ – system steps bundled with Tigrbl that drive core behavior.
Maintainers own these and downstream packages should not modify them.
* **atoms** ⚛️ – built-in runtime units such as schema collectors or wire
validators. These are maintained by the core team.
* **hooks** 🪝 – extension points that downstream packages register to customize
phase behavior.
Only `secdeps`, `deps`, and `hooks` are expected to be configured downstream;
`sys` and `atoms` steps are maintained by the Tigrbl maintainers.
## Kernelz Labeling 🔎
Running apps expose a `/system/kernelz` diagnostics endpoint that returns the
kernel's phase plan for each model and operation. Every entry is prefixed by
its phase and a descriptive label, for example:
```
PRE_TX:secdep:myapp.auth.require_user
HANDLER:hook:wire:myapp.handlers.audit@HANDLER
END_TX:hook:sys:txn:commit@END_TX
POST_HANDLER:atom:wire:dump@POST_HANDLER
```
The token after the phase identifies the step type:
* `secdep` and `dep` – security and general dependencies as
`PRE_TX:secdep:<callable>` and `PRE_TX:dep:<callable>`.
* `hook:sys` – built-in system hooks shipped with Tigrbl.
* `hook:wire` – default label for user hooks including module/function name + phase.
* `atom:{domain}:{subject}` – runtime atoms, e.g. `atom:wire:dump`.
These labels allow downstream services to inspect execution order and debug how
work is scheduled. 🧭
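Under the label grammar shown above, a downstream client could split entries like this (a hypothetical parser written for illustration, not part of Tigrbl):

```python
def parse_kernelz_label(label):
    """Split a /system/kernelz entry into phase, step type, and subject."""
    phase, _, rest = label.partition(":")
    parts = rest.split(":")
    if parts[0] in ("hook", "atom"):
        step = ":".join(parts[:2])      # e.g. "hook:wire", "atom:wire"
        subject = ":".join(parts[2:])
    else:
        step = parts[0]                 # "secdep" or "dep"
        subject = ":".join(parts[1:])
    # Strip the trailing "@PHASE" suffix some labels carry
    return {"phase": phase, "step": step, "subject": subject.rsplit("@", 1)[0]}
```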
## Configuration Overview ⚙️
### Operation Config Precedence 🧮
When merging configuration for a given operation, Tigrbl layers settings in
increasing order of precedence:
1. defaults
2. app config
3. API config
4. table config
5. column config
6. operation spec
7. per-request overrides
Later entries override earlier ones, so request overrides win over all other
sources. This can be summarized as
`overrides > opspec > colspecs > tabspec > apispec > appspec > defaults`.
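Because later layers simply override earlier ones, the merge can be modeled with ordinary dict updates (an illustrative sketch; the layer contents below are hypothetical, not real Tigrbl settings):

```python
def merge_op_config(*layers):
    """Merge config layers passed lowest-precedence first."""
    merged = {}
    for layer in layers:
        merged.update(layer)  # later layers win
    return merged

cfg = merge_op_config(
    {"page_size": 50, "auth": True},   # defaults
    {"page_size": 100},                # app config
    {},                                # API config
    {"auth": False},                   # table config
    {},                                # column config
    {"page_size": 25},                 # operation spec
    {"auth": True},                    # per-request overrides
)
```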
### Schema Config Precedence 🧬
Tigrbl merges schema configuration from several scopes.
Later layers override earlier ones, with the precedence order:
1. defaults (lowest)
2. app configuration
3. API configuration
4. table configuration
5. column-level `cfg` values
6. op-specific `cfg`
7. per-request overrides (highest)
This hierarchy ensures that the most specific settings always win. 🥇
### Table-Level 🧾
* `__tigrbl_request_extras__` – verb-scoped virtual request fields.
* `__tigrbl_response_extras__` – verb-scoped virtual response fields.
* `__tigrbl_register_hooks__` – hook registration entry point.
* `__tigrbl_nested_paths__` – nested REST path segments.
* `__tigrbl_allow_anon__` – verbs permitted without auth.
* `__tigrbl_owner_policy__` / `__tigrbl_tenant_policy__` – server vs client field injection.
* `__tigrbl_verb_aliases__` & `__tigrbl_verb_alias_policy__` – custom verb names.
### Routing 🧭
* `__tigrbl_nested_paths__` for hierarchical routing.
* `__tigrbl_verb_aliases__` for custom verbs.
* `__tigrbl_verb_alias_policy__` to scope alias application.
### Persistence 💾
* Mixins such as `Upsertable`, `Bootstrappable`, `GUIDPk`, `Timestamped`.
* Policies `__tigrbl_owner_policy__` and `__tigrbl_tenant_policy__`.
* `transactional` decorator for atomic RPC + REST endpoints.
### Security 🔐
* Pluggable `AuthNProvider` interface.
* `__tigrbl_allow_anon__` to permit anonymous access.
### Default Precedence 🔧
When assembling values for persistence, defaults are resolved in this order:
1. Client-supplied value
2. API `default_factory`
3. ORM default
4. Database `server_default`
5. HTTP 422 if the field is required and still missing
### Database Guards 🛡️
Tigrbl executes each phase under database guards that temporarily replace
`commit` and `flush` on the SQLAlchemy session. Guards prevent writes or
commits outside their allowed phases and only permit commits when Tigrbl
owns the transaction. See the
[runtime documentation](tigrbl/v3/runtime/README.md#db-guards) for the full
matrix of phase policies.
The `START_TX` phase opens a transaction and disables `session.flush`,
allowing validation and hooks to run before any statements hit the
database. Once the transaction exists, `PRE_HANDLER`, `HANDLER`, and
`POST_HANDLER` phases permit flushes so pending writes reach the database
without committing. The workflow concludes in `END_TX`, which performs a
final flush and commits the transaction when the runtime owns it. ✅
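The guard mechanism can be approximated with a context manager that temporarily swaps the session's methods (a simplified illustration; the real guards live in the runtime and enforce a full per-phase policy matrix):

```python
from contextlib import contextmanager

class GuardError(RuntimeError):
    pass

@contextmanager
def phase_guard(session, *, allow_flush, allow_commit):
    """Replace flush/commit with guards for the duration of one phase."""
    real_flush, real_commit = session.flush, session.commit

    def guarded_flush():
        if not allow_flush:
            raise GuardError("flush not permitted in this phase")
        real_flush()

    def guarded_commit():
        if not allow_commit:
            raise GuardError("commit not permitted in this phase")
        real_commit()

    session.flush, session.commit = guarded_flush, guarded_commit
    try:
        yield session
    finally:  # restore the real methods once the phase ends
        session.flush, session.commit = real_flush, real_commit
```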
### Response and Template Specs 📑
Customize outbound responses with `ResponseSpec` and `TemplateSpec`. These dataclasses
control headers, status codes, and optional template rendering. See
[tigrbl/v3/response/README.md](tigrbl/v3/response/README.md) for field descriptions and examples.
### Dependencies 📦
* SQLAlchemy for ORM integration.
* Pydantic for schema generation.
* ASGI-native routing and dependency injection.
## Best Design Practices ✅
The following practices are the canonical, production-ready patterns for
building on Tigrbl. Each rule is explained and demonstrated with
approved usage. These are not optional—adhering to them keeps the runtime
predictable, preserves hook lifecycle guarantees, and ensures schema
consistency across REST and RPC surfaces.
### 1) Never import SQLAlchemy directly or bypass Tigrbl APIs
**Why:** Direct imports bypass Tigrbl's compatibility layer and make it
harder to evolve internal dependencies. Use the Tigrbl exports so your
code stays aligned with the framework’s versioned ASGI API.
✅ **Preferred:**
```python
from tigrbl import Base, TigrblApp, TigrblApi
from tigrbl.types import Integer, String, Mapped
from tigrbl.types import Depends, HTTPException, Request
```
🚫 **Avoid:**
```python
from sqlalchemy import Integer, String
from some_framework import Depends
```
### 2) Do not coerce UUIDs manually
**Why:** Tigrbl schemas and types already normalize UUIDs. Manual coercion
creates inconsistent behavior across engines and breaks schema-level
validation.
✅ **Preferred:**
```python
from tigrbl.types import PgUUID, uuid4, Mapped
class Item(Table):
    __tablename__ = "items"
    id: Mapped[PgUUID] = acol(primary_key=True, default=uuid4)
```
🚫 **Avoid:**
```python
from uuid import UUID
item_id = UUID(str(payload["id"]))
```
### 3) Use engine specs for persistence, not ad-hoc engines
**Why:** Engine specs make persistence declarative, testable, and
compatible with engine resolution across app, API, table, and op scopes.
✅ **Preferred:**
```python
from tigrbl.engine.shortcuts import engine_spec
from tigrbl.engine.decorators import engine_ctx
spec = engine_spec(kind="postgres", async_=True, host="db", name="app_db")
@engine_ctx(spec)
class App:
    ...
```
🚫 **Avoid:**
```python
from sqlalchemy.ext.asyncio import create_async_engine
engine = create_async_engine("postgresql+asyncpg://...")
```
### 4) Never call DB session methods directly
**Why:** Direct calls bypass the hook lifecycle and the database guards.
Use model handlers or `app.<Model>.handlers.<op>` so hooks, policies, and
schema enforcement run consistently.
✅ **Preferred:**
```python
result = await Item.handlers.create(payload, ctx=request_ctx)
# or from a Tigrbl app instance:
result = await app.Item.handlers.create(payload, ctx=request_ctx)
```
🚫 **Avoid:**
```python
db.add(item)
await db.execute(statement)
```
### 5) Always use encapsulated payloads as inputs and outputs
**Why:** Tigrbl expects request/response envelopes to preserve metadata,
support policy enforcement, and keep REST/RPC in lockstep.
✅ **Preferred:**
```python
from tigrbl import get_schema
CreateIn = get_schema(Item, "create", "in")
CreateOut = get_schema(Item, "create", "out")
payload = CreateIn(name="Widget")
result = await Item.handlers.create(payload, ctx=request_ctx)
response = CreateOut(result=result)
```
🚫 **Avoid:**
```python
payload = {"name": "Widget"}
result = await Item.handlers.create(payload)
```
### 6) Encapsulation must use `get_schema(...)`
**Why:** `get_schema` guarantees the envelope is aligned to the configured
schema and respects schema overrides, request extras, and response extras.
✅ **Preferred:**
```python
ListIn = get_schema(Item, "list", "in")
ListOut = get_schema(Item, "list", "out")
```
🚫 **Avoid:**
```python
from pydantic import BaseModel
class ListIn(BaseModel):
    payload: dict
```
### 7) `Table` must be the first inherited class for all models
**Why:** Tigrbl inspects base classes for lifecycle and configuration.
Putting `Table` first preserves deterministic MRO behavior.
✅ **Preferred:**
```python
from tigrbl.orm.tables import Table
from tigrbl.orm.mixins import Timestamped
class Item(Table, Timestamped):
    __tablename__ = "items"
```
🚫 **Avoid:**
```python
class Item(Timestamped, Table):
    __tablename__ = "items"
```
### 8) Never call `db.flush()` or `db.commit()`
**Why:** The hook lifecycle owns transactional boundaries. Manual flush or
commit short-circuits phase guards and can corrupt the request lifecycle.
✅ **Preferred:**
```python
@hook_ctx(ops="create", phase="HANDLER")
async def handler(ctx):
    await Item.handlers.create(ctx["request"].payload, ctx=ctx)
```
🚫 **Avoid:**
```python
db.flush()
db.commit()
```
### 9) Use ops for new REST/RPC methods—never add ad-hoc framework routes
**Why:** Ops keep routing, schemas, hooks, and policies unified. Custom
framework routes bypass these guarantees.
✅ **Preferred:**
```python
from tigrbl import op_ctx
@op_ctx(name="rotate_keys", method="POST", path="/keys/rotate")
async def rotate_keys(payload, *, ctx):
    return await Key.handlers.rotate(payload, ctx=ctx)
```
🚫 **Avoid:**
```python
from some_framework import APIRouter
router = APIRouter()
@router.post("/keys/rotate")
async def rotate_keys(payload):
    ...
```
### 10) Use context decorators where appropriate
**Why:** Context decorators (`engine_ctx`, `schema_ctx`, `op_ctx`,
`hook_ctx`) provide explicit, declarative binding of behavior and are
resolved deterministically by the runtime.
✅ **Preferred:**
```python
from tigrbl import hook_ctx, op_ctx, schema_ctx
from tigrbl.engine.decorators import engine_ctx
@engine_ctx(kind="sqlite", mode="memory")
class Item(Table):
    __tablename__ = "items"

@schema_ctx(ops="create", cfg={"exclude": {"id"}})
class ItemCreateSchema:
    model = Item

@op_ctx(name="export", method="GET", path="/items/export")
async def export_items(payload, *, ctx):
    return await Item.handlers.list(payload, ctx=ctx)

@hook_ctx(ops="create", phase="PRE_HANDLER")
async def validate(ctx):
    ...
```
### Engine & Provider examples 🛠️
```python
from tigrbl.engine.shortcuts import engine_spec, prov
from tigrbl.engine._engine import Engine, Provider
# Build an EngineSpec from a DSN string
spec = engine_spec("sqlite://:memory:")
# Or from keyword arguments
spec_pg = engine_spec(kind="postgres", async_=True, host="db", name="app_db")
# Lazy Provider from the spec
provider = prov(spec) # same as Provider(spec)
with provider.session() as session:
    session.execute("SELECT 1")

# Engine façade wrapping a Provider
eng = Engine(spec_pg)
async with eng.asession() as session:
    await session.execute("SELECT 1")
# Direct Provider construction is also supported
provider_pg = Provider(spec_pg)
```
### Attaching engine contexts 🔌
`engine_ctx` binds database configuration to different layers. It accepts a
DSN string, a mapping, an `EngineSpec`, a `Provider`, or an `Engine`. The
resolver chooses the most specific binding in the order
`op > table > api > app`.
#### Engine precedence 🥇
When engine contexts are declared at multiple scopes, Tigrbl resolves them
with strict precedence:
1. **Op level** – bindings attached directly to an operation take highest priority.
2. **Table/Model level** – definitions on a model or table override API and app defaults.
3. **API level** – bindings on the API class apply when no model-specific context exists.
4. **App level** – the default engine supplied to the application is used last.
This ordering ensures that the most specific engine context always wins.
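The most-specific-wins rule reduces to a first-non-empty scan over the four scopes (illustrative only; the real resolver also normalizes DSN strings, mappings, `EngineSpec`, `Provider`, and `Engine` values):

```python
def resolve_engine(op=None, table=None, api=None, app=None):
    """Return the most specific engine binding: op > table > api > app."""
    for binding in (op, table, api, app):
        if binding is not None:
            return binding
    raise LookupError("no engine context configured")
```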
#### Declarative bindings 📝
```python
from types import SimpleNamespace
from tigrbl.engine.shortcuts import prov, engine
app = SimpleNamespace(db=prov(kind="sqlite", mode="memory"))
alt = SimpleNamespace(db=engine(kind="sqlite", mode="memory"))
class API:
db = {"kind": "sqlite", "memory": True}
class Item:
__tablename__ = "items"
table_config = {"db": {"kind": "sqlite", "memory": True}}
async def create(payload, *, db=None):
...
create.__tigrbl_engine_ctx__ = {
"kind": "postgres",
"async": True,
"host": "db",
"name": "op_db",
}
```
#### Decorative bindings 🎛️
```python
from tigrbl.engine.decorators import engine_ctx
from tigrbl.engine.shortcuts import prov, engine
@engine_ctx(prov(kind="sqlite", mode="memory"))
class App:
    pass

@engine_ctx(engine(kind="sqlite", mode="memory"))
class DecoratedAPI:
    pass

@engine_ctx(kind="sqlite", mode="memory")
class DecoratedItem:
    __tablename__ = "items"

@engine_ctx(kind="postgres", async_=True, host="db", name="op_db")
async def decorated_create(payload, *, db=None):
    ...
```
### Swarmauri class + Tigrbl lifecycle integration 🧬
If you need to run concrete Swarmauri classes inside Tigrbl's runtime, see:
* [`examples/swarmauri_tigrbl_bridge.py`](./examples/swarmauri_tigrbl_bridge.py)
* [`examples/swarmauri_tigrbl_bridge_smooth.py`](./examples/swarmauri_tigrbl_bridge_smooth.py)
The bridge examples cover two integration styles:
* **Factory + schema-rich envelope** (`swarmauri_tigrbl_bridge.py`)
* Swarmauri Pydantic JSON workflows (`model_validate_json`, `model_dump_json`,
`model_json_schema`) with `HumanMessage`.
* A Swarmauri `Factory` invocation during `PRE_HANDLER` via `hook_ctx`.
* Tigrbl default verbs (`create`, `get`, `list`, `update`, `delete`) plus a custom op.
* `engine_ctx` at model and operation scope.
* Generated OpenAPI and OpenRPC documents mounted from the same model bindings.
* **Smoother direct-model flow** (`swarmauri_tigrbl_bridge_smooth.py`)
* Uses hooks + default `create` persistence to normalize Swarmauri payloads.
* Adds a `Conversation` table with a persisted one-to-many relationship to messages.
* Avoids extra `json_schema` fields in request/response payload contracts.
* Returns `HumanMessage.model_validate_json(...)` directly from a custom op.
* Uses the concrete model classes themselves to derive input/output schema docs.
## Glossary 📖
1. Tables
2. Schemas
3. Schema Overlays (Request Extras)
4. Phases
5. Phase Lifecycle
6. Request
7. Request Ctx
8. Default Flush
9. Core
10. Core_Raw
| text/markdown | Jacob Stewart | jacob@swarmauri.com | null | null | null | tigrbl, sdk, standards, asgi, rest, rpc | [
"License :: OSI Approved :: Apache Software License",
"Development Status :: 3 - Alpha",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python",
"Programming Language :: Python :: 3",
"Programming L... | [] | null | null | <3.13,>=3.10 | [] | [] | [] | [
"aiosqlite>=0.19.0",
"asyncpg>=0.30.0; extra == \"postgres\"",
"greenlet>=3.2.3",
"httpx>=0.27.0",
"jinja2>=3.1.4; extra == \"templates\"",
"psycopg2-binary>=2.9.9; extra == \"postgres\"",
"pydantic>=2.0.0",
"sqlalchemy>=2.0",
"tigrbl-tests; extra == \"tests\"",
"uvicorn"
] | [] | [] | [] | [] | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true} | 2026-02-18T23:46:00.020231 | tigrbl-0.3.16.dev4.tar.gz | 256,673 | b6/f4/3124c9aadfa47400dad6c78326c6a3bce1f91c6e35821f9c14c213debe3b/tigrbl-0.3.16.dev4.tar.gz | source | sdist | null | false | df4b94c789b86a9fa58ef10239d4e943 | eef9308e76c0e1009cecd0b6306e0f051ba1627a38fe230db1d8dfc35f72d4e7 | b6f43124c9aadfa47400dad6c78326c6a3bce1f91c6e35821f9c14c213debe3b | Apache-2.0 | [
"LICENSE"
] | 227 |
2.4 | basilearn | 0.2.2 | A library for educational content and interactive learning |
# Basilearn 📚 #
Basilearn makes learning Python a breeze! It’s a fun, interactive package that helps you master programming basics with ease, covering essential topics like variables, data types, operators, and more. The best part? You can use it anywhere, even in notebooks!
## 🚀 Installation and Usage ##
### Install the package: ###
Open a terminal (Command Prompt on Windows, Terminal on macOS/Linux) and type:
```
pip install basilearn
```
### Start the fun: ###
To begin interactive lessons, type the following in your terminal:
```
basilearn-run
```
### Use in Python Notebooks: ###
If you prefer coding in Jupyter or Colab, Basilearn works there too! Simply run the following in a notebook cell:
```
!pip install basilearn
!basilearn-run
```
## 🎉 Why Basilearn? ##
- Interactive & Fun: Learn Python basics while having a blast!
- Accessible Anywhere: Works on your laptop, offline, or even on Google Colab!
- Immediate Feedback: See results instantly and build confidence.
- Beginner-Friendly: Tailored for those just starting their coding journey.
## 🖥️ Tools You Can Use: ##
- Windows: Use Command Prompt (search for "cmd" in the Start Menu).
- macOS/Linux: Open Terminal (search for "Terminal" or press Ctrl+Alt+T).
- Notebooks: Open Jupyter or Google Colab and enter the commands above.
## 🌟 Next Steps ##
We’re just getting started! Here’s what’s planned for Basilearn:
- More Lessons: Cover advanced topics like loops, functions, and data structures.
- Simple UI: Develop a user-friendly interface for those who prefer visual learning over the terminal.
- Community Contributions: Open to lesson ideas and contributions!

| text/markdown | Barbara Asiamah | barbaraasiamah99@gmail.com | null | null | null | education, interactive-learning, python, library | [
"Programming Language :: Python :: 3",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
"Topic :: Education"
] | [] | null | null | >=3.7 | [] | [] | [] | [
"setuptools>=56.0.0"
] | [] | [] | [] | [] | twine/6.1.0 CPython/3.13.7 | 2026-02-18T23:44:59.099153 | basilearn-0.2.2.tar.gz | 9,267 | 60/d2/cdd002eb22fbcfda0472989e628a3d6f82b101411c6da4573809448f4fce/basilearn-0.2.2.tar.gz | source | sdist | null | false | d3482babfbe431ab0db94c717c5fd4e2 | 8dcfde4552f8fba7b270f2283c2f71a563f63538aebaf21792665e8fc45a7f1d | 60d2cdd002eb22fbcfda0472989e628a3d6f82b101411c6da4573809448f4fce | null | [
"LICENSE"
] | 263 |
2.4 | flexgraph | 0.1.0 | FlexGraph — knowledge operating system | # FlexGraph
Knowledge operating system.
Coming soon at [flexgraph.dev](https://flexgraph.dev)
| text/markdown | null | null | null | null | MIT | null | [] | [] | null | null | >=3.10 | [] | [] | [] | [] | [] | [] | [] | [
"Homepage, https://flexgraph.dev"
] | twine/6.2.0 CPython/3.12.3 | 2026-02-18T23:43:33.980263 | flexgraph-0.1.0.tar.gz | 1,035 | b9/66/3525d6883756a7c51bf1b9bdb8599916e08b3fecf04331817d2cd5b3c7c2/flexgraph-0.1.0.tar.gz | source | sdist | null | false | fb1dce5bf98fa5bfb027995a9e50e607 | bc6086538dcfbc2463db2e0bb4cae55ea96fa659616da122c08413c40659e786 | b9663525d6883756a7c51bf1b9bdb8599916e08b3fecf04331817d2cd5b3c7c2 | null | [] | 281 |
2.4 | slack-to-md | 0.1.0 | Convert Slack workspace exports to Markdown files | # slack-to-md
[](https://github.com/edgarrmondragon/slack-to-md/actions/workflows/ci.yml)
[](https://github.com/edgarrmondragon/slack-to-md/blob/main/LICENSE)
[](https://github.com/edgarrmondragon/slack-to-md)
Convert a Slack workspace export ZIP into Markdown files.
## Installation
```bash
uv tool install slack-to-md
```
## Usage
```bash
# Export all channels
slack-to-md -z export.zip
# Export specific channels
slack-to-md -z export.zip -c general -c random
# Export to a specific directory
slack-to-md -z export.zip -c announcements -o output/
```
## Options
| Flag | Description |
|---|---|
| `-z`, `--zip` | Path to Slack export ZIP file (required) |
| `-c`, `--channel` | Channel name to export (repeatable, defaults to all) |
| `-o`, `--output-dir` | Output directory (default: current dir) |
| text/markdown | null | Edgar Ramírez Mondragón <edgarrm358@gmail.com> | null | null | null | converter, export, google-docs, markdown, slack | [
"Development Status :: 3 - Alpha",
"Environment :: Console",
"Intended Audience :: End Users/Desktop",
"Programming Language :: Python :: 3 :: Only",
"Programming Language :: Python :: 3.14",
"Topic :: Communications :: Chat",
"Topic :: Text Processing :: Markup :: Markdown",
"Typing :: Typed"
] | [] | null | null | >=3.14 | [] | [] | [] | [
"rich>=14.3"
] | [] | [] | [] | [
"Homepage, https://github.com/edgarrmondragon/slack-to-md",
"Issues, https://github.com/edgarrmondragon/slack-to-md/issues",
"Repository, https://github.com/edgarrmondragon/slack-to-md"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-18T23:41:51.299083 | slack_to_md-0.1.0.tar.gz | 12,831 | 9e/cc/a53a53c8bbcfb697c3284da7f2ba27214aa0e1a2784815b1629a4e7397e3/slack_to_md-0.1.0.tar.gz | source | sdist | null | false | e89d491356927a60691cb84a7d239fed | d7ef9464805abe42040eba8591882239ddd9bfb3f8f2b4b9cec4c2bf0bcb399b | 9ecca53a53c8bbcfb697c3284da7f2ba27214aa0e1a2784815b1629a4e7397e3 | MIT | [
"LICENSE"
] | 252 |
2.4 | airflow-dbt-python | 3.4.0 | A collection of Airflow operators, hooks, and utilities to execute dbt commands | # airflow-dbt-python
[](https://pypi.org/project/airflow-dbt-python/)
[](https://github.com/tomasfarias/airflow-dbt-python/actions)
[](https://github.com/astral-sh/ruff)
[](https://github.com/tomasfarias/airflow-dbt-python/actions)
[](https://airflow-dbt-python.readthedocs.io/en/latest/?badge=latest)
A collection of [*Airflow*](https://airflow.apache.org/) operators, hooks, and utilities to execute [*dbt*](https://pypi.org/project/dbt-core/) commands.
Read the [documentation](https://airflow-dbt-python.readthedocs.io) for examples, installation instructions, and more details.
# Installation
## Requirements
Before using *airflow-dbt-python*, ensure you meet the following requirements:
* A *dbt* project using [dbt-core](https://pypi.org/project/dbt-core/) version 1.8 or later.
* An *Airflow* deployment using version 3.0 or later.
* If using any managed *Airflow* service, like [AWS MWAA](https://aws.amazon.com/managed-workflows-for-apache-airflow/) or [GCP Cloud Composer](https://cloud.google.com/composer), ensure your environment is created with a supported version of *Airflow*.
* If self-hosting, *Airflow* installation instructions can be found in their [official documentation](https://airflow.apache.org/docs/apache-airflow/stable/installation/index.html).
* Python 3.10 or later.
> **Warning**
>
> New versions of *Airflow* and *dbt* may introduce breaking changes. We recommend testing any new versions of *Airflow* and *dbt* before upgrading production systems; please [report any issues](https://github.com/tomasfarias/airflow-dbt-python/issues/new/choose) that may arise during testing so they can be addressed.
> **Note**
>
> We only test *airflow-dbt-python* against a limited set of versions of *Airflow* and *dbt*, and try to keep up with the latest releases. For *Airflow*, our policy is to cover with tests the latest release of *Airflow*, the latest version available in [GCP Cloud Composer](https://docs.cloud.google.com/composer/docs/composer-versions), and the latest version available in [AWS MWAA](https://docs.aws.amazon.com/mwaa/latest/userguide/airflow-versions). For *dbt*, our policy is to cover the last two minor versions.
## From PyPI
*airflow-dbt-python* is available in [PyPI](https://pypi.org/project/airflow-dbt-python/) and can be installed with *pip*:
``` shell
pip install airflow-dbt-python
```
As a convenience, some *dbt* adapters can be installed by specifying extras. For example, if requiring the *dbt-redshift* adapter:
``` shell
pip install airflow-dbt-python[redshift]
```
## Building from source
*airflow-dbt-python* can also be built from source by cloning this GitHub repository:
``` shell
git clone https://github.com/tomasfarias/airflow-dbt-python.git
cd airflow-dbt-python
```
And build with *uv*:
``` shell
uv build
```
## In AWS MWAA
Add *airflow-dbt-python* to your `requirements.txt` file and edit your Airflow environment to use this new `requirements.txt` file, or upload it as a plugin.
Read the [documentation](https://airflow-dbt-python.readthedocs.io/en/latest/getting_started/#installing-in-mwaa) for a more detailed AWS MWAA installation breakdown.
## In GCP Cloud Composer
Add *airflow-dbt-python* to your PyPI packages list.
Refer to the [GCP Cloud Composer documentation](https://cloud.google.com/composer/docs/composer-3/install-python-dependencies#install-pypi) on how to do this.
## In other managed services
*airflow-dbt-python* should be compatible with most or all Airflow managed services. Consult the documentation specific to your provider.
If you notice an issue when installing *airflow-dbt-python* in a specific managed service, please open an [issue](https://github.com/tomasfarias/airflow-dbt-python/issues/new/choose).
# Features
*airflow-dbt-python* aims to make dbt a **first-class citizen** of Airflow by supporting additional features that integrate both tools. As you would expect, *airflow-dbt-python* can run all your dbt workflows in Airflow with the same interface you are used to from the CLI, but without being a mere wrapper: *airflow-dbt-python* directly communicates with internal *dbt-core* classes, bridging the gap between them and Airflow's operator interface. Essentially, we are attempting to use *dbt* **as a library**.
As this integration was completed, several features were developed to **extend the capabilities of dbt** to leverage Airflow as much as possible. Can you think of a way *dbt* could leverage Airflow that is not currently supported? Let us know in a [GitHub issue](https://github.com/tomasfarias/airflow-dbt-python/issues/new/choose)!
## Independent task execution
Airflow executes [Tasks](https://airflow.apache.org/docs/apache-airflow/stable/concepts/tasks.html) independent of one another: even though downstream and upstream dependencies between tasks exist, the execution of an individual task happens entirely independently of any other task execution (see: [Tasks Relationships](https://airflow.apache.org/docs/apache-airflow/stable/concepts/tasks.html#relationships)).
In order to work with this constraint, *airflow-dbt-python* runs each dbt command in a **temporary and isolated directory**. Before execution, all the relevant dbt files are copied from supported backends, and after executing the command any artifacts are exported. This ensures dbt can work with any Airflow deployment, including most production deployments as they are usually running [Remote Executors](https://airflow.apache.org/docs/apache-airflow/stable/executor/index.html#executor-types) and do not guarantee any files will be shared by default between tasks, since each task may run in a completely different environment.
## Download dbt files from a remote storage
The dbt parameters `profiles_dir` and `project_dir` would normally point to a directory containing a `profiles.yml` file and a dbt project in the local environment respectively (defined by the presence of a *dbt_project.yml* file). *airflow-dbt-python* extends these parameters to also accept an URL pointing to a remote storage.
Currently, we support the following remote storages:
* [AWS S3](https://aws.amazon.com/s3/) (identified by a *s3* scheme).
* Remote git repositories, like those stored in GitHub (both *https* and *ssh* schemes are supported).
* If a remote URL is used for `project_dir`, then this URL must point to a location in your remote storage containing a *dbt* project to run. A *dbt* project is identified by the presence of a *dbt_project.yml*, and contains all your [resources](https://docs.getdbt.com/docs/build/projects). All of the contents of this remote location will be downloaded and made available for the operator. The URL may also point to an archived file containing all the files of a dbt project, which will be downloaded, uncompressed, and made available for the operator.
* If a remote URL is used for `profiles_dir`, then this URL must point to a location in your remote storage that contains a *profiles.yml* file. The *profiles.yml* file will be downloaded and made available for the operator to use when running. The *profiles.yml* may be part of your *dbt* project, in which case this argument may be omitted.
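For example, both parameters can be given remote URLs (the bucket and paths below are hypothetical placeholders; substitute your own locations):

```python
from airflow_dbt_python.operators.dbt import DbtRunOperator

dbt_run = DbtRunOperator(
    task_id="dbt_run_from_s3",
    # Remote locations are fetched into the task's temporary directory
    # before dbt executes.
    project_dir="s3://my-bucket/dbt/project/",
    profiles_dir="s3://my-bucket/dbt/profiles/",
)
```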
This feature is intended to work in line with Airflow's [description of the task concept](https://airflow.apache.org/docs/apache-airflow/stable/concepts/tasks.html#relationships):
> Tasks don’t pass information to each other by default, and run entirely independently.
We interpret this as meaning a task should be responsible for fetching all the *dbt* related files it needs in order to run independently, as already described in [Independent Task Execution](#independent-task-execution).
## Push dbt artifacts to XCom
Each dbt execution produces one or more [JSON artifacts](https://docs.getdbt.com/reference/artifacts/dbt-artifacts/) that are valuable for producing meta-metrics, building conditional workflows, reporting, and other uses. *airflow-dbt-python* can push these artifacts to [XCom](https://airflow.apache.org/docs/apache-airflow/stable/concepts/xcoms.html) as requested via the `do_xcom_push_artifacts` parameter, which takes a list of artifacts to push.
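For example, to push two of the standard dbt artifact files (adjust the list to the artifacts you need):

```python
from airflow_dbt_python.operators.dbt import DbtRunOperator

dbt_run = DbtRunOperator(
    task_id="dbt_run",
    # After the run, these files are read from dbt's target/ directory
    # and pushed to XCom for downstream tasks to consume.
    do_xcom_push_artifacts=["manifest.json", "run_results.json"],
)
```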
## Use Airflow connections as dbt targets (without a profiles.yml)
[Airflow connections](https://airflow.apache.org/docs/apache-airflow/stable/howto/connection.html) allow users to manage and store connection information, such as hostname, port, username, and password, for operators to use when accessing certain applications, like databases. Similarly, a *dbt* `profiles.yml` file stores connection information under each target key. *airflow-dbt-python* bridges the gap between the two and allows you to use connection information stored as an Airflow connection by specifying the connection id as the `target` parameter of any of the *dbt* operators it provides. What's more, if using an Airflow connection, the `profiles.yml` file may be entirely omitted (although keep in mind a `profiles.yml` file contains a configuration block besides target connection information).
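A minimal sketch of this usage (the connection id `my_db_connection` is a hypothetical placeholder for an Airflow connection you have configured):

```python
from airflow_dbt_python.operators.dbt import DbtRunOperator

dbt_run = DbtRunOperator(
    task_id="dbt_run",
    # The target parameter names an Airflow connection id; connection
    # details are used in place of a profiles.yml target.
    target="my_db_connection",
)
```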
See an example DAG [here](examples/airflow_connection_target_dag.py).
# Motivation
## Airflow running in a managed environment
Although [`dbt`](https://docs.getdbt.com/) is meant to be installed and used as a CLI, we may not have control of the environment where Airflow is running, which rules out using *dbt* as a CLI.
This is exactly what happens when using [Amazon's Managed Workflows for Apache Airflow](https://aws.amazon.com/managed-workflows-for-apache-airflow/) (aka MWAA): although a list of Python requirements can be passed, the CLI cannot be found in the worker's PATH.
There is a workaround which involves using Airflow's `BashOperator` and running Python from the command line:
``` python
from airflow.operators.bash import BashOperator
BASH_COMMAND = "python -c 'from dbt.main import main; main()' run"
operator = BashOperator(
task_id="dbt_run",
bash_command=BASH_COMMAND,
)
```
But this gets cumbersome once you need to pass all the arguments a `dbt run` command (or any other subcommand) can take.
That's where *airflow-dbt-python* comes in: it abstracts the complexity of interfacing with *dbt-core* and exposes one operator for each *dbt* subcommand that can be instantiated with all the corresponding arguments that the *dbt* CLI would take.
## An alternative to *airflow-dbt* that works without the *dbt* CLI
The alternative [`airflow-dbt`](https://pypi.org/project/airflow-dbt/) package, by default, would not work if the *dbt* CLI is not in PATH, which means it would not be usable in MWAA. There is a workaround via the `dbt_bin` argument, which can be set to `"python -c 'from dbt.main import main; main()' run"`, in a similar fashion to the `BashOperator` example. Yet this approach is not without its limitations:
* *airflow-dbt* works by wrapping the *dbt* CLI, which makes our code dependent on the environment in which it runs.
* *airflow-dbt* does not support the full range of arguments a command can take. For example, `DbtRunOperator` does not have an attribute for `fail_fast`.
* *airflow-dbt* does not offer access to *dbt* artifacts created during execution. *airflow-dbt-python* does so by pushing any artifacts to [XCom](https://airflow.apache.org/docs/apache-airflow/stable/concepts/xcoms.html).
# Usage
Currently, the following *dbt* commands are supported:
* `clean`
* `compile`
* `debug`
* `deps`
* `docs generate`
* `ls`
* `parse`
* `run`
* `run-operation`
* `seed`
* `snapshot`
* `source`
* `test`
## Examples
All example DAGs are tested against the latest Airflow version. Some changes, like modifying `import` statements or changing types, may be required for them to work in other versions.
``` python
import datetime as dt
import pendulum
from airflow import DAG
from airflow_dbt_python.operators.dbt import (
DbtRunOperator,
DbtSeedOperator,
DbtTestOperator,
)
args = {
"owner": "airflow",
}
with DAG(
dag_id="example_dbt_operator",
default_args=args,
schedule="0 0 * * *",
start_date=pendulum.today("UTC").add(days=-1),
dagrun_timeout=dt.timedelta(minutes=60),
tags=["example", "example2"],
) as dag:
dbt_test = DbtTestOperator(
task_id="dbt_test",
selector="pre-run-tests",
)
dbt_seed = DbtSeedOperator(
task_id="dbt_seed",
select=["/path/to/first.csv", "/path/to/second.csv"],
full_refresh=True,
)
dbt_run = DbtRunOperator(
task_id="dbt_run",
select=["/path/to/models"],
full_refresh=True,
fail_fast=True,
)
dbt_test >> dbt_seed >> dbt_run
```
More examples can be found in the [`examples/`](examples/) directory and the [documentation](https://airflow-dbt-python.readthedocs.io).
# Development
See the [development documentation](https://airflow-dbt-python.readthedocs.io/en/latest/development/) for a more in-depth dive into setting up a development environment, running the test-suite, and general commentary on working on *airflow-dbt-python*.
## Testing
Tests are run with *pytest* and live in `tests/`. To run them locally, you may use *uv*:
``` shell
uv run pytest tests/ -vv
```
# License
This project is licensed under the MIT license. See [LICENSE](LICENSE).
| text/markdown | null | Tomás Farías Santana <tomas@tomasfarias.dev> | null | null | null | null | [
"Development Status :: 5 - Production/Stable",
"Intended Audience :: Developers",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13"
] | [] | null | null | <3.14,>=3.10 | [] | [] | [] | [
"apache-airflow<4.0,>=2.8; python_version < \"3.12\"",
"apache-airflow<4.0,>=2.9; python_version >= \"3.12\" and python_version < \"3.13\"",
"apache-airflow<4.0,>=3.1; python_version >= \"3.13\"",
"contextlib-chdir==1.0.2; python_version < \"3.11\"",
"dbt-core<2.0.0,>=1.8.0",
"apache-airflow-providers-ama... | [] | [] | [] | [
"repository, https://github.com/tomasfarias/airflow-dbt-python",
"documentation, https://airflow-dbt-python.readthedocs.io"
] | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true} | 2026-02-18T23:41:41.682907 | airflow_dbt_python-3.4.0-py3-none-any.whl | 46,563 | ea/45/0d2b23dee6914c611935625425a397db13e2631d5338c24bbdf4c2b66e9a/airflow_dbt_python-3.4.0-py3-none-any.whl | py3 | bdist_wheel | null | false | e25078d7f303327d261ab99bdd5471a7 | cd0dbe90438ac71fc58e24a53741078f3a854efd821372f199ada2c7cea47316 | ea450d2b23dee6914c611935625425a397db13e2631d5338c24bbdf4c2b66e9a | MIT | [
"LICENSE"
] | 1,180 |
2.4 | rollgate | 1.2.1 | Python SDK for Rollgate feature flags | # Rollgate Python SDK
[](https://github.com/rollgate/sdks/actions/workflows/ci.yml)
[](https://pypi.org/project/rollgate/)
[](https://opensource.org/licenses/MIT)
Official Python SDK for [Rollgate](https://rollgate.io) - Feature flags made simple.
## Requirements
- Python 3.9+
- httpx >= 0.25.0
- httpx-sse >= 0.4.0
## Installation
```bash
pip install rollgate
```
## Quick Start
```python
import asyncio
from rollgate import RollgateClient, RollgateConfig, UserContext
async def main():
# Initialize client
config = RollgateConfig(api_key="your-api-key")
client = RollgateClient(config)
# Initialize and fetch flags
await client.init()
# Check if feature is enabled
if client.is_enabled("new-feature"):
print("New feature is enabled!")
# With user targeting
await client.identify(UserContext(
id="user-123",
email="user@example.com",
attributes={"plan": "pro", "country": "IT"}
))
if client.is_enabled("premium-feature"):
print("Premium feature is enabled for this user!")
# Cleanup
await client.close()
asyncio.run(main())
```
## Context Manager
```python
async with RollgateClient(RollgateConfig(api_key="your-api-key")) as client:
if client.is_enabled("my-feature"):
# Feature is enabled
pass
```
## Configuration
```python
from rollgate import (
RollgateConfig,
RetryConfig,
CircuitBreakerConfig,
CacheConfig,
)
config = RollgateConfig(
api_key="your-api-key",
base_url="https://api.rollgate.io", # Custom API URL
refresh_interval_ms=30000, # Polling interval (30s default)
enable_streaming=False, # Use SSE for real-time updates
timeout_ms=5000, # Request timeout
# Retry configuration
retry=RetryConfig(
max_retries=3,
base_delay_ms=100,
max_delay_ms=10000,
jitter_factor=0.1,
),
# Circuit breaker configuration
circuit_breaker=CircuitBreakerConfig(
failure_threshold=5,
recovery_timeout_ms=30000,
monitoring_window_ms=60000,
success_threshold=3,
),
# Cache configuration
cache=CacheConfig(
ttl_ms=300000, # 5 minutes
stale_ttl_ms=3600000, # 1 hour
persist_path="/tmp/rollgate-cache.json", # Optional persistence
),
)
```
## Events
```python
client = RollgateClient(config)
# Register event callbacks
client.on("ready", lambda: print("Client ready"))
client.on("flags_updated", lambda flags: print(f"Flags updated: {flags}"))
client.on("flag_changed", lambda key, new, old: print(f"{key}: {old} -> {new}"))
client.on("error", lambda err: print(f"Error: {err}"))
client.on("circuit_open", lambda *args: print("Circuit breaker opened"))
client.on("circuit_closed", lambda: print("Circuit breaker closed"))
await client.init()
```
## Event Tracking
Track conversion events for A/B testing experiments:
```python
from rollgate import TrackEventOptions
# Track a conversion event
client.track(TrackEventOptions(
flag_key="checkout-redesign",
event_name="purchase",
user_id="user-123",
))
# Track with all options
client.track(TrackEventOptions(
flag_key="checkout-redesign",
event_name="purchase",
user_id="user-123",
variation_id="variant-b",
value=29.99,
metadata={"currency": "EUR", "item_count": 3},
))
# Manually flush pending events
await client.flush_events()
```
Events are buffered in memory and flushed automatically every 30 seconds or when the buffer reaches 100 events. A final flush is attempted when the client is closed.
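The buffer-and-flush behaviour can be pictured with a small standalone sketch (illustrative only: it omits the 30-second timer and is not the SDK's internal code):

```python
class EventBuffer:
    """Toy event buffer that flushes when a size threshold is reached."""

    def __init__(self, max_events=100, flush=print):
        self.max_events = max_events
        self._flush = flush          # callable that sends a batch to the API
        self._events = []

    def track(self, event):
        self._events.append(event)
        if len(self._events) >= self.max_events:  # buffer full: flush now
            self.flush()

    def flush(self):
        if self._events:
            self._flush(self._events)             # deliver the batch
            self._events = []                     # start a fresh buffer
```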
### TrackEventOptions
| Field | Type | Required | Description |
| -------------- | ----------------- | -------- | -------------------------------- |
| `flag_key` | `str` | Yes | The flag key for the experiment |
| `event_name` | `str` | Yes | Name of the conversion event |
| `user_id` | `str` | Yes | The user who triggered the event |
| `variation_id` | `Optional[str]` | No | The variation the user saw |
| `value` | `Optional[float]` | No | Numeric value (e.g. revenue) |
| `metadata` | `Optional[Dict]` | No | Additional event metadata |
## Features
### Polling (Default)
By default, the SDK polls for flag updates every 30 seconds.
```python
config = RollgateConfig(
api_key="your-api-key",
refresh_interval_ms=30000, # Poll every 30s
)
```
### SSE Streaming
Enable Server-Sent Events for real-time flag updates:
```python
config = RollgateConfig(
api_key="your-api-key",
enable_streaming=True,
)
```
### Circuit Breaker
The SDK includes a circuit breaker to prevent cascading failures:
```python
# Check circuit state
state = client.circuit_state # CircuitState.CLOSED, OPEN, or HALF_OPEN
# Get statistics
stats = client.get_circuit_stats()
# Force reset
client.reset_circuit()
```
### Caching
Flags are cached locally with stale-while-revalidate support:
```python
# Get cache statistics
stats = client.get_cache_stats()
hit_rate = client.get_cache_hit_rate()
# Clear cache
client.clear_cache()
```
### Error Handling
```python
from rollgate import (
RollgateError,
AuthenticationError,
NetworkError,
RateLimitError,
)
try:
await client.init()
except AuthenticationError as e:
print(f"Invalid API key: {e}")
except NetworkError as e:
print(f"Network error: {e}")
except RateLimitError as e:
print(f"Rate limited, retry after: {e.retry_after}s")
except RollgateError as e:
print(f"Rollgate error: {e}")
```
## API Reference
### RollgateClient
| Method | Description |
| --------------------------------------- | ---------------------------------- |
| `init(user?)` | Initialize client and fetch flags |
| `is_enabled(flag_key, default?)` | Check if flag is enabled |
| `is_enabled_detail(flag_key, default?)` | Check flag with evaluation reason |
| `get_all_flags()` | Get all flags as dictionary |
| `identify(user)` | Set user context and refresh flags |
| `reset()` | Clear user context |
| `refresh()` | Force refresh flags |
| `track(options)` | Track a conversion event |
| `flush_events()` | Flush pending conversion events |
| `close()` | Cleanup resources |
### Evaluation Reasons
Get detailed information about why a flag evaluated to a particular value:
```python
detail = client.is_enabled_detail("my-flag", False)
print(detail.value) # bool
print(detail.reason.kind) # "OFF", "TARGET_MATCH", "RULE_MATCH", "FALLTHROUGH", "ERROR", "UNKNOWN"
```
Reason kinds:
| Kind | Description |
| -------------- | ---------------------------------- |
| `OFF` | Flag is disabled |
| `TARGET_MATCH` | User is in the flag's target list |
| `RULE_MATCH` | User matched a targeting rule |
| `FALLTHROUGH` | Default rollout (no rules matched) |
| `ERROR` | Error during evaluation |
| `UNKNOWN` | Flag not found |
### UserContext
| Field | Type | Description |
| ------------ | ------- | ------------------------------- |
| `id` | `str` | User identifier (required) |
| `email` | `str?` | User email |
| `attributes` | `dict?` | Custom attributes for targeting |
## Documentation
- [Getting Started](../../docs/GETTING-STARTED.md)
- [Architecture](../../docs/ARCHITECTURE.md)
- [Production Setup](../../docs/PRODUCTION-SETUP.md)
Full documentation: [docs.rollgate.io](https://rollgate.io/docs)
## About Rollgate
[Rollgate](https://rollgate.io) is a feature management platform that helps teams release features safely with gradual rollouts, user targeting, and instant kill switches.
## License
MIT
| text/markdown | null | Rollgate <hello@rollgate.io> | null | null | null | feature-flags, feature-toggles, rollgate, sdk | [
"Development Status :: 5 - Production/Stable",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Langu... | [] | null | null | >=3.9 | [] | [] | [] | [
"httpx-sse>=0.4.0",
"httpx>=0.25.0",
"mypy>=1.0.0; extra == \"dev\"",
"pytest-asyncio>=0.21.0; extra == \"dev\"",
"pytest-cov>=4.0.0; extra == \"dev\"",
"pytest>=7.0.0; extra == \"dev\"",
"respx>=0.20.0; extra == \"dev\"",
"ruff>=0.1.0; extra == \"dev\""
] | [] | [] | [] | [
"Homepage, https://rollgate.io",
"Documentation, https://rollgate.io/docs/sdk/python",
"Repository, https://github.com/rollgate/sdks"
] | twine/6.2.0 CPython/3.13.12 | 2026-02-18T23:40:50.972619 | rollgate-1.2.1.tar.gz | 42,062 | 5b/d7/f3772c9660bd0ad1c095ffb993d5201f8f7dc0e96be20bb7bda3a7ed68ea/rollgate-1.2.1.tar.gz | source | sdist | null | false | 094c9ad9cb07e6b6427e0558f9058810 | 8f3fe63c6b39579a794e79ac08a6c85e89dd4db23a5542b7beaf0966825992e1 | 5bd7f3772c9660bd0ad1c095ffb993d5201f8f7dc0e96be20bb7bda3a7ed68ea | MIT | [] | 239 |
2.4 | huawei-intel-sdk | 1.0 | SDK for Huawei Public Security Intelligence | # Huawei Intel SDK
A Python SDK for Huawei's Public Threat Intelligence.
## Installation
```bash
pip install huawei-intel-sdk
```
## Usage
### Python API
```python
from huawei_intel import HuaweiIntelClient
client = HuaweiIntelClient()
# Check IP
print(client.get_ip_intel("1.1.1.1"))
# Check URL (Automatically cleans to domain)
print(client.get_domain_intel("https://bad-site.com/path"))
# Check file hash (MD5, SHA1, SHA256, etc.)
print(client.get_file_intel("a665a45920422f9d417e4867efdc4fb8a04a1f3fff1fa07e998e86f7f7a27ae3"))
# Check Local File (Auto-hashes)
print(client.check_file("suspicious.exe"))
```
### Command Line Interface
```bash
# Check IP address
huawei-intel check 1.1.1.1
# Check domain
huawei-intel check bad-site.com
# Check file hash
huawei-intel check a665a45920422f9d417e4867efdc4fb8a04a1f3fff1fa07e998e86f7f7a27ae3
# Check local file
huawei-intel check-file suspicious.exe
| text/markdown | null | Yassine Cherair <yassine.cherair@gmail.com> | null | null | null | null | [] | [] | null | null | >=3.7 | [] | [] | [] | [
"build>=1.1.1",
"pytest-html>=3.2.0",
"requests>=2.25.0",
"twine>=4.0.2"
] | [] | [] | [] | [] | uv/0.10.0 {"installer":{"name":"uv","version":"0.10.0","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":null,"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":null} | 2026-02-18T23:40:41.657925 | huawei_intel_sdk-1.0-py3-none-any.whl | 9,254 | 54/14/6177efe1144d872db29208d5e440204d1bf128acd20e3adf2a4c45c7346b/huawei_intel_sdk-1.0-py3-none-any.whl | py3 | bdist_wheel | null | false | ebd0ff635caeba741c6981a0b77ed191 | 3865a4c3e864efc003f1d5b0fad4f200f8e2d15563d661e5368a5343bfac0496 | 54146177efe1144d872db29208d5e440204d1bf128acd20e3adf2a4c45c7346b | null | [] | 260 |
2.1 | benchling-api-client | 2.0.424 | Autogenerated Python client from OpenAPI Python Client generator | # benchling-api-client
A client generated from Benchling's OpenAPI definition files using openapi-python-client.
Rather than using this package directly, we recommend using the
[Benchling SDK](https://pypi.org/project/benchling-sdk/), which has extra scaffolding to make some endpoints
easier to use and has been released to general availability.
| text/markdown | Benchling Support | support@benchling.com | null | null | Apache-2.0 | null | [
"License :: OSI Approved :: Apache Software License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13"
] | [] | null | null | <4.0,>=3.9 | [] | [] | [] | [
"backoff<3.0,>=1.10.0",
"typing-extensions<5.0,>=3.7.4",
"dataclasses-json<0.6.0,>=0.5.2",
"httpx>=0.23.0",
"attrs>=20.1.0",
"python-dateutil<3.0.0,>=2.8.0"
] | [] | [] | [] | [] | poetry/1.8.5 CPython/3.9.25 Linux/6.12.63-84.121.amzn2023.x86_64 | 2026-02-18T23:39:46.465360 | benchling_api_client-2.0.424.tar.gz | 2,259,767 | a7/6f/ff4f0dd35d7ab01a7d1a76cae9ab13c10e8416f9e5583a656aaacf6fe710/benchling_api_client-2.0.424.tar.gz | source | sdist | null | false | 998b0485e5d72bd28028a950c41cebe8 | d0a8a81581d7d20b64639a7d0640847f8768c3b52ca0a4b9f78e575b6b2f5619 | a76fff4f0dd35d7ab01a7d1a76cae9ab13c10e8416f9e5583a656aaacf6fe710 | null | [] | 674 |
2.4 | gilas | 0.1.0 | Simple persistent Python data structures. | # gilas
Simple persistent Python data structures.
## Usage
```python
from gilas import plist, pdict
arr = plist()
arr.append(10)
arr.append(20)
print(arr[0])
items = pdict()
items["key"] = "value"
print(items["key"])
```
Each object stores its data in a local SQLite file named `.gilas.db`.
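The idea can be illustrated with a small standard-library sketch (a toy example of SQLite-backed persistence, not gilas's actual schema or API):

```python
import sqlite3

class SQLiteList:
    """Toy persistent list backed by a SQLite table (illustrative only)."""

    def __init__(self, path=".gilas.db"):
        self.db = sqlite3.connect(path)
        self.db.execute(
            "CREATE TABLE IF NOT EXISTS items (pos INTEGER PRIMARY KEY, value TEXT)"
        )

    def append(self, value):
        # Values are stored as text in this toy version.
        self.db.execute("INSERT INTO items (value) VALUES (?)", (str(value),))
        self.db.commit()

    def __getitem__(self, index):
        row = self.db.execute(
            "SELECT value FROM items ORDER BY pos LIMIT 1 OFFSET ?", (index,)
        ).fetchone()
        if row is None:
            raise IndexError(index)
        return row[0]
```

Because every write is committed to the database file, a new process that opens the same file sees the same items.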
Objects keep a stable id you can use to reload them later in a new process.
| text/markdown | null | null | null | null | null | null | [] | [] | null | null | >=3.9 | [] | [] | [] | [
"pytest>=7.0; extra == \"test\""
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.12.3 | 2026-02-18T23:39:37.968475 | gilas-0.1.0.tar.gz | 4,708 | 14/8a/b624aaad58fa87c66f7e68840d6443f7893036c169f1f514481789bdd280/gilas-0.1.0.tar.gz | source | sdist | null | false | 580c6b7992aa985ad42d707fb895ec3c | 7e1ea13a59f2a6eeeca9b65697d338aa251bdb68ae8a0303c8f4d920c651524f | 148ab624aaad58fa87c66f7e68840d6443f7893036c169f1f514481789bdd280 | null | [] | 266 |
2.4 | agent-backend | 0.9.0 | A distributed, scalable filesystem backend for deep AI agents, backed by blob storage | # Python Client Library
Python implementation of the `agent-backend` package. See the [main README](../README.md) for an overview, quick start, and core usage.
## Package Info
| Field | Value |
|--------------|-----------------------------|
| Package | `agent-backend` |
| Registry | PyPI |
| Manager | uv / pip |
| Test runner | pytest |
| Build | python -m build |
| Linter | ruff |
| Type checker | mypy |
| Source | `python/agent_backend/` |
| Tests | `python/tests/` |
## Advanced Features
### Environment Variables
Scoped backends support custom environment variables that apply to all commands:
```python
from agent_backend import ScopeConfig
scoped_backend = backend.scope("projects/my-app", ScopeConfig(
env={
"PYTHONPATH": "/workspace/lib",
"API_KEY": "secret",
"DATABASE_URL": "postgres://...",
}
))
await scoped_backend.exec("python -m build") # uses custom env
```
### Operations Logging
```python
from agent_backend import ConsoleOperationsLogger, ScopeConfig
scoped_backend = backend.scope("project", ScopeConfig(
operations_logger=ConsoleOperationsLogger()
))
await scoped_backend.exec("pip install -r requirements.txt")
# Logs: [AgentBackend] exec: pip install -r requirements.txt
```
### Binary Data
```python
from agent_backend import ReadOptions
image_data = await backend.read("logo.png", ReadOptions(encoding="buffer"))
tarball = await backend.exec("tar -czf - .", ExecOptions(encoding="buffer"))
```
### Timeouts
```python
from agent_backend import RemoteFilesystemBackend, RemoteFilesystemBackendConfig
backend = RemoteFilesystemBackend(RemoteFilesystemBackendConfig(
root_dir="/tmp/agentbe-workspace",
host="server.com",
auth_token="...",
operation_timeout_ms=300_000, # 5 minutes
max_output_length=10 * 1024 * 1024, # 10MB
))
```
## Backend Connection Pooling
See [docs/connection-pooling.md](../docs/connection-pooling.md) for `BackendPoolManager` usage, key-based pooling, idle cleanup, and graceful shutdown.
## Examples
### Code Execution Sandbox
```python
from agent_backend import LocalFilesystemBackend, LocalFilesystemBackendConfig, IsolationMode
sandbox = LocalFilesystemBackend(LocalFilesystemBackendConfig(
root_dir="/tmp/agentbe-workspace",
isolation=IsolationMode.AUTO,
))
user_code_backend = sandbox.scope(f"users/{user_id}")
await user_code_backend.write("script.py", untrusted_code)
result = await user_code_backend.exec("python script.py")
```
### Multi-tenant SaaS
```python
from agent_backend import RemoteFilesystemBackend, RemoteFilesystemBackendConfig
# Separate backend per organization
org1_backend = RemoteFilesystemBackend(RemoteFilesystemBackendConfig(
root_dir="/var/saas/org1",
host="org1-server.example.com",
auth_token="...",
))
org2_backend = RemoteFilesystemBackend(RemoteFilesystemBackendConfig(
root_dir="/var/saas/org2",
host="org2-server.example.com",
auth_token="...",
))
# Scoped backends per user within each org
org1_user1 = org1_backend.scope("users/user1")
org1_user2 = org1_backend.scope("users/user2")
```
### Agent State Management
```python
from agent_backend import MemoryBackend
state = MemoryBackend()
await state.write("agents/agent1/current-task", "building")
await state.write("agents/agent1/progress", "50%")
all_agents = await state.list_keys("agents/")
```
## Error Handling
```python
from agent_backend import BackendError, DangerousOperationError, PathEscapeError
try:
await backend.exec("rm -rf /")
except DangerousOperationError as e:
# Command blocked by safety validation
print("Blocked:", e.operation)
except PathEscapeError:
# Path attempted to escape scope
pass
except BackendError as e:
# General backend error (check e.code)
print("Error:", e.code, str(e))
```
---
## Development
### Commands
All commands can be run from the monorepo root via Make or from the `python/` directory via uv.
| Task | Make (root) | uv (`python/`) |
|-------------|--------------------------|------------------------------------------------------|
| Build | `make build-python` | `uv build` |
| Test | `make test-python` | `uv run pytest` |
| Test (cov) | -- | `uv run pytest --cov=agent_backend --cov-fail-under=80` |
| Lint | `make lint-python` | `uv run ruff check .` |
| Lint (fix) | `make lint-fix` | `uv run ruff check --fix .` |
| Typecheck | `make typecheck-python` | `uv run ty check` |
### Code Style
- ruff enforces formatting (`line-length = 100`, `target-version = "py311"`)
- Type hints on all function signatures -- avoid `Any`
- `snake_case` for functions and variables, `PascalCase` for classes
- Dataclasses for all config objects (`LocalFilesystemBackendConfig`, `ScopeConfig`, etc.)
- Custom error classes: `BackendError`, `DangerousOperationError`, `PathEscapeError`
- Imports sorted with ruff (`isort` rules enabled)
### Testing
Tests live in `python/tests/` and use pytest with `pytest-asyncio`.
`asyncio_mode = "auto"` is configured in `pyproject.toml`, so async test functions are detected automatically -- no `@pytest.mark.asyncio` decorator needed.
**Shared fixtures** in `conftest.py` provide pre-configured backends:
```python
@pytest.fixture
def local_backend(tmp_workspace):
config = LocalFilesystemBackendConfig(
root_dir=tmp_workspace,
prevent_dangerous=True,
)
return LocalFilesystemBackend(config)
```
Use `unittest.mock.AsyncMock` for mocking async methods. Use the shared fixtures (`local_backend`, `memory_backend`, `tmp_workspace`) rather than building backends from scratch.
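For example, an async backend method can be stubbed with `AsyncMock` like this (the `run_build` helper is a hypothetical function under test):

```python
import asyncio
from unittest.mock import AsyncMock

async def run_build(backend):
    # Toy helper under test: runs a command through the backend.
    return await backend.exec("python -m build")

def test_run_build():
    backend = AsyncMock()
    backend.exec.return_value = "ok"   # stub the awaited result
    result = asyncio.run(run_build(backend))
    backend.exec.assert_awaited_once_with("python -m build")
    assert result == "ok"
```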
**Running tests:**
```bash
uv run pytest # All tests, single run
uv run pytest -k "safety" # Filter by pattern
uv run pytest --cov=agent_backend # With coverage report
uv run pytest -m "not integration" # Skip integration tests
```
### Gotchas
- All backend methods are `async` -- always `await` them, including `read`, `write`, `readdir`, and `exists`.
- `MemoryBackend.exec()` raises `NotImplementedBackendError` -- memory backends do not support command execution.
- Use `list_keys(prefix)` on `MemoryBackend`, not `list()`.
- `IsolationMode.AUTO` and `IsolationMode.NONE` are enum members, not string literals.
- `BackendType` enum values are `"local-filesystem"`, `"remote-filesystem"`, `"memory"`.
- Config objects are dataclasses, not dicts -- use keyword arguments (e.g., `LocalFilesystemBackendConfig(root_dir=...)`).
- Scoped backends delegate `track_closeable()` to their parent, so resources are closed when the parent is destroyed.
- `destroy()` closes all tracked closeables (MCP clients, transports) before tearing down the backend.
- Coverage threshold is 80% (`--cov-fail-under=80`). Remote backend and transport modules are excluded from coverage.
- `ExecOptions` and `ReadOptions` use `encoding: Literal["utf8", "buffer"]`, not Python's standard encoding names.
| text/markdown | Agent Backend Contributors | null | null | null | Apache-2.0 | agents, ai, backend, filesystem, mcp | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"License :: OSI Approved :: Apache Software License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Typing :: Ty... | [] | null | null | >=3.11 | [] | [] | [] | [
"asyncssh>=2.17.0",
"mcp>=1.0.0",
"websockets>=14.0"
] | [] | [] | [] | [] | twine/6.1.0 CPython/3.13.7 | 2026-02-18T23:38:52.966507 | agent_backend-0.9.0.tar.gz | 33,538 | 78/90/42630cb98fbc713c2d5753d689f4423928e666c737c3b13a9b799118efdf/agent_backend-0.9.0.tar.gz | source | sdist | null | false | ae8623b9e887d5c2fd384a62c6d1f410 | ec4e3ea840c7e1dad4ec1ca2a39e93641ff97da434fb7b20cf78c428195a2f1c | 789042630cb98fbc713c2d5753d689f4423928e666c737c3b13a9b799118efdf | null | [] | 256 |
2.3 | hyperspell | 0.32.0 | The official Python library for the hyperspell API | # Hyperspell Python API library
<!-- prettier-ignore -->
[)](https://pypi.org/project/hyperspell/)
The Hyperspell Python library provides convenient access to the Hyperspell REST API from any Python 3.9+
application. The library includes type definitions for all request params and response fields,
and offers both synchronous and asynchronous clients powered by [httpx](https://github.com/encode/httpx).
It is generated with [Stainless](https://www.stainless.com/).
## MCP Server
Use the Hyperspell MCP Server to enable AI assistants to interact with this API, allowing them to explore endpoints, make test requests, and use documentation to help integrate this SDK into your application.
[](https://cursor.com/en-US/install-mcp?name=hyperspell-mcp&config=eyJuYW1lIjoiaHlwZXJzcGVsbC1tY3AiLCJ0cmFuc3BvcnQiOiJodHRwIiwidXJsIjoiaHR0cHM6Ly9oeXBlcnNwZWxsLnN0bG1jcC5jb20iLCJoZWFkZXJzIjp7IngtaHlwZXJzcGVsbC1hcGkta2V5IjoiTXkgQVBJIEtleSIsIlgtQXMtVXNlciI6Ik15IFVzZXIgSUQifX0)
[](https://vscode.stainless.com/mcp/%7B%22name%22%3A%22hyperspell-mcp%22%2C%22type%22%3A%22http%22%2C%22url%22%3A%22https%3A%2F%2Fhyperspell.stlmcp.com%22%2C%22headers%22%3A%7B%22x-hyperspell-api-key%22%3A%22My%20API%20Key%22%2C%22X-As-User%22%3A%22My%20User%20ID%22%7D%7D)
> Note: You may need to set environment variables in your MCP client.
## Documentation
The REST API documentation can be found on [docs.hyperspell.com](https://docs.hyperspell.com/). The full API of this library can be found in [api.md](https://github.com/hyperspell/python-sdk/tree/main/api.md).
## Installation
```sh
# install from PyPI
pip install hyperspell
```
## Usage
The full API of this library can be found in [api.md](https://github.com/hyperspell/python-sdk/tree/main/api.md).
```python
import os
from hyperspell import Hyperspell
client = Hyperspell(
api_key=os.environ.get("HYPERSPELL_API_KEY"), # This is the default and can be omitted
)
memory_status = client.memories.add(
text="text",
)
print(memory_status.resource_id)
```
While you can provide an `api_key` keyword argument,
we recommend using [python-dotenv](https://pypi.org/project/python-dotenv/)
to add `HYPERSPELL_API_KEY="My API Key"` to your `.env` file
so that your API Key is not stored in source control.
## Async usage
Simply import `AsyncHyperspell` instead of `Hyperspell` and use `await` with each API call:
```python
import os
import asyncio
from hyperspell import AsyncHyperspell
client = AsyncHyperspell(
api_key=os.environ.get("HYPERSPELL_API_KEY"), # This is the default and can be omitted
)
async def main() -> None:
memory_status = await client.memories.add(
text="text",
)
print(memory_status.resource_id)
asyncio.run(main())
```
Functionality between the synchronous and asynchronous clients is otherwise identical.
### With aiohttp
By default, the async client uses `httpx` for HTTP requests. However, for improved concurrency performance you may also use `aiohttp` as the HTTP backend.
You can enable this by installing `aiohttp`:
```sh
# install from PyPI
pip install hyperspell[aiohttp]
```
Then you can enable it by instantiating the client with `http_client=DefaultAioHttpClient()`:
```python
import os
import asyncio
from hyperspell import DefaultAioHttpClient
from hyperspell import AsyncHyperspell
async def main() -> None:
async with AsyncHyperspell(
api_key=os.environ.get("HYPERSPELL_API_KEY"), # This is the default and can be omitted
http_client=DefaultAioHttpClient(),
) as client:
memory_status = await client.memories.add(
text="text",
)
print(memory_status.resource_id)
asyncio.run(main())
```
## Using types
Nested request parameters are [TypedDicts](https://docs.python.org/3/library/typing.html#typing.TypedDict). Responses are [Pydantic models](https://docs.pydantic.dev) which also provide helper methods for things like:
- Serializing back into JSON, `model.to_json()`
- Converting to a dictionary, `model.to_dict()`
Typed requests and responses provide autocomplete and documentation within your editor. If you would like to see type errors in VS Code to help catch bugs earlier, set `python.analysis.typeCheckingMode` to `basic`.
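As a sketch of what "TypedDicts for request params" means in practice, here is a plain `typing.TypedDict` illustrating the pattern (`SearchOptions` is an illustrative name, not the SDK's real type):

```python
from typing import TypedDict

# A TypedDict describes the shape of a plain dict: at runtime it is just a
# dict, but type checkers flag unknown keys and wrong value types.
class SearchOptions(TypedDict, total=False):
    max_results: int
    collection: str

options: SearchOptions = {"max_results": 5}
print(options)
```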
## Pagination
List methods in the Hyperspell API are paginated.
This library provides auto-paginating iterators with each list response, so you do not have to request successive pages manually:
```python
from hyperspell import Hyperspell
client = Hyperspell()
all_memories = []
# Automatically fetches more pages as needed.
for memory in client.memories.list(
collection="REPLACE_ME",
):
# Do something with memory here
all_memories.append(memory)
print(all_memories)
```
Or, asynchronously:
```python
import asyncio
from hyperspell import AsyncHyperspell
client = AsyncHyperspell()
async def main() -> None:
all_memories = []
# Iterate through items across all pages, issuing requests as needed.
async for memory in client.memories.list(
collection="REPLACE_ME",
):
all_memories.append(memory)
print(all_memories)
asyncio.run(main())
```
Alternatively, you can use the `.has_next_page()`, `.next_page_info()`, or `.get_next_page()` methods for more granular control when working with pages:
```python
first_page = await client.memories.list(
collection="REPLACE_ME",
)
if first_page.has_next_page():
print(f"will fetch next page using these details: {first_page.next_page_info()}")
next_page = await first_page.get_next_page()
print(f"number of items we just fetched: {len(next_page.items)}")
# Remove `await` for non-async usage.
```
Or just work directly with the returned data:
```python
first_page = await client.memories.list(
collection="REPLACE_ME",
)
print(f"next page cursor: {first_page.next_cursor}") # => "next page cursor: ..."
for memory in first_page.items:
print(memory.resource_id)
# Remove `await` for non-async usage.
```
## Nested params
Nested parameters are dictionaries, typed using `TypedDict`, for example:
```python
from hyperspell import Hyperspell
client = Hyperspell()
query_result = client.memories.search(
query="query",
options={},
)
print(query_result.options)
```
## File uploads
Request parameters that correspond to file uploads can be passed as `bytes`, a [`PathLike`](https://docs.python.org/3/library/os.html#os.PathLike) instance, or a tuple of `(filename, contents, media type)`.
```python
from pathlib import Path
from hyperspell import Hyperspell
client = Hyperspell()
client.memories.upload(
file=Path("/path/to/file"),
)
```
The async client uses the exact same interface. If you pass a [`PathLike`](https://docs.python.org/3/library/os.html#os.PathLike) instance, the file contents will be read asynchronously automatically.
## Handling errors
When the library is unable to connect to the API (for example, due to network connection problems or a timeout), a subclass of `hyperspell.APIConnectionError` is raised.
When the API returns a non-success status code (that is, 4xx or 5xx
response), a subclass of `hyperspell.APIStatusError` is raised, containing `status_code` and `response` properties.
All errors inherit from `hyperspell.APIError`.
```python
import hyperspell
from hyperspell import Hyperspell
client = Hyperspell()
try:
client.memories.add(
text="text",
)
except hyperspell.APIConnectionError as e:
print("The server could not be reached")
print(e.__cause__) # an underlying Exception, likely raised within httpx.
except hyperspell.RateLimitError as e:
print("A 429 status code was received; we should back off a bit.")
except hyperspell.APIStatusError as e:
print("Another non-200-range status code was received")
print(e.status_code)
print(e.response)
```
Error codes are as follows:
| Status Code | Error Type |
| ----------- | -------------------------- |
| 400 | `BadRequestError` |
| 401 | `AuthenticationError` |
| 403 | `PermissionDeniedError` |
| 404 | `NotFoundError` |
| 422 | `UnprocessableEntityError` |
| 429 | `RateLimitError` |
| >=500 | `InternalServerError` |
| N/A | `APIConnectionError` |
### Retries
Certain errors are automatically retried 2 times by default, with a short exponential backoff.
Connection errors (for example, due to a network connectivity problem), 408 Request Timeout, 409 Conflict,
429 Rate Limit, and >=500 Internal errors are all retried by default.
You can use the `max_retries` option to configure or disable retry settings:
```python
from hyperspell import Hyperspell
# Configure the default for all requests:
client = Hyperspell(
# default is 2
max_retries=0,
)
# Or, configure per-request:
client.with_options(max_retries=5).memories.add(
text="text",
)
```
### Timeouts
By default requests time out after 1 minute. You can configure this with a `timeout` option,
which accepts a float or an [`httpx.Timeout`](https://www.python-httpx.org/advanced/timeouts/#fine-tuning-the-configuration) object:
```python
import httpx
from hyperspell import Hyperspell
# Configure the default for all requests:
client = Hyperspell(
# 20 seconds (default is 1 minute)
timeout=20.0,
)
# More granular control:
client = Hyperspell(
timeout=httpx.Timeout(60.0, read=5.0, write=10.0, connect=2.0),
)
# Override per-request:
client.with_options(timeout=5.0).memories.add(
text="text",
)
```
On timeout, an `APITimeoutError` is raised.
Note that requests that time out are [retried twice by default](https://github.com/hyperspell/python-sdk/tree/main/#retries).
## Advanced
### Logging
We use the standard library [`logging`](https://docs.python.org/3/library/logging.html) module.
You can enable logging by setting the environment variable `HYPERSPELL_LOG` to `info`.
```shell
$ export HYPERSPELL_LOG=info
```
Or to `debug` for more verbose logging.
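If you prefer to configure logging in code rather than via the environment, the standard library works too. Note the logger name `"hyperspell"` is an assumption based on the package name; adjust it if the SDK logs under a different name:

```python
import logging

# Route SDK log records to the console and raise their verbosity.
logging.basicConfig(level=logging.INFO)
logging.getLogger("hyperspell").setLevel(logging.DEBUG)
print(logging.getLogger("hyperspell").level)
```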
### How to tell whether `None` means `null` or missing
In an API response, a field may be explicitly `null`, or missing entirely; in either case, its value is `None` in this library. You can differentiate the two cases with `.model_fields_set`:
```py
if response.my_field is None:
if 'my_field' not in response.model_fields_set:
print('Got json like {}, without a "my_field" key present at all.')
else:
print('Got json like {"my_field": null}.')
```
### Accessing raw response data (e.g. headers)
The "raw" Response object can be accessed by prefixing `.with_raw_response.` to any HTTP method call, e.g.,
```py
from hyperspell import Hyperspell
client = Hyperspell()
response = client.memories.with_raw_response.add(
text="text",
)
print(response.headers.get('X-My-Header'))
memory = response.parse() # get the object that `memories.add()` would have returned
print(memory.resource_id)
```
These methods return an [`APIResponse`](https://github.com/hyperspell/python-sdk/tree/main/src/hyperspell/_response.py) object.
The async client returns an [`AsyncAPIResponse`](https://github.com/hyperspell/python-sdk/tree/main/src/hyperspell/_response.py) with the same structure, the only difference being `await`able methods for reading the response content.
#### `.with_streaming_response`
The above interface eagerly reads the full response body when you make the request, which may not always be what you want.
To stream the response body, use `.with_streaming_response` instead, which requires a context manager and only reads the response body once you call `.read()`, `.text()`, `.json()`, `.iter_bytes()`, `.iter_text()`, `.iter_lines()` or `.parse()`. In the async client, these are async methods.
```python
with client.memories.with_streaming_response.add(
text="text",
) as response:
print(response.headers.get("X-My-Header"))
for line in response.iter_lines():
print(line)
```
The context manager is required so that the response will reliably be closed.
### Making custom/undocumented requests
This library is typed for convenient access to the documented API.
If you need to access undocumented endpoints, params, or response properties, the library can still be used.
#### Undocumented endpoints
To make requests to undocumented endpoints, you can use `client.get`, `client.post`, and other
HTTP verbs. Options on the client (such as retries) will be respected when making these requests.
```py
import httpx
response = client.post(
"/foo",
cast_to=httpx.Response,
body={"my_param": True},
)
print(response.headers.get("x-foo"))
```
#### Undocumented request params
If you want to explicitly send an extra param, you can do so with the `extra_query`, `extra_body`, and `extra_headers` request
options.
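As a sketch of what these options do, here is a stand-in resource that merely echoes the request it would build — purely for illustration (the real SDK performs an HTTP request; `StubMemories` and the param values are hypothetical):

```python
# The three extra_* options carry values the type stubs don't know about.
class StubMemories:
    def add(self, *, text, extra_query=None, extra_body=None, extra_headers=None):
        # Merge documented and undocumented values, as the SDK would.
        return {
            "query": dict(extra_query or {}),
            "body": {"text": text, **(extra_body or {})},
            "headers": dict(extra_headers or {}),
        }

request = StubMemories().add(
    text="text",
    extra_query={"trace": "1"},        # undocumented query param
    extra_body={"beta_flag": True},    # undocumented body field
    extra_headers={"X-Debug": "on"},   # extra header
)
print(request["body"])
```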
#### Undocumented response properties
To access undocumented response properties, you can access the extra fields like `response.unknown_prop`. You
can also get all the extra fields on the Pydantic model as a dict with
[`response.model_extra`](https://docs.pydantic.dev/latest/api/base_model/#pydantic.BaseModel.model_extra).
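A sketch of how extra fields surface on a Pydantic v2 model with `extra="allow"` (`Memory` and `unknown_prop` are illustrative names, not the SDK's real response type):

```python
from pydantic import BaseModel, ConfigDict

class Memory(BaseModel):
    model_config = ConfigDict(extra="allow")  # retain fields not in the schema
    resource_id: str

memory = Memory.model_validate({"resource_id": "abc", "unknown_prop": 42})
print(memory.unknown_prop)  # attribute access to the undocumented field
print(memory.model_extra)   # all extra fields as a dict
```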
### Configuring the HTTP client
You can directly override the [httpx client](https://www.python-httpx.org/api/#client) to customize it for your use case, including:
- Support for [proxies](https://www.python-httpx.org/advanced/proxies/)
- Custom [transports](https://www.python-httpx.org/advanced/transports/)
- Additional [advanced](https://www.python-httpx.org/advanced/clients/) functionality
```python
import httpx
from hyperspell import Hyperspell, DefaultHttpxClient
client = Hyperspell(
# Or use the `HYPERSPELL_BASE_URL` env var
base_url="http://my.test.server.example.com:8083",
http_client=DefaultHttpxClient(
proxy="http://my.test.proxy.example.com",
transport=httpx.HTTPTransport(local_address="0.0.0.0"),
),
)
```
You can also customize the client on a per-request basis by using `with_options()`:
```python
client.with_options(http_client=DefaultHttpxClient(...))
```
### Managing HTTP resources
By default the library closes underlying HTTP connections whenever the client is [garbage collected](https://docs.python.org/3/reference/datamodel.html#object.__del__). You can manually close the client using the `.close()` method if desired, or with a context manager that closes when exiting.
```py
from hyperspell import Hyperspell
with Hyperspell() as client:
# make requests here
...
# HTTP client is now closed
```
## Versioning
This package generally follows [SemVer](https://semver.org/spec/v2.0.0.html) conventions, though certain backwards-incompatible changes may be released as minor versions:
1. Changes that only affect static types, without breaking runtime behavior.
2. Changes to library internals which are technically public but not intended or documented for external use. _(Please open a GitHub issue to let us know if you are relying on such internals.)_
3. Changes that we do not expect to impact the vast majority of users in practice.
We take backwards-compatibility seriously and work hard to ensure you can rely on a smooth upgrade experience.
We are keen for your feedback; please open an [issue](https://www.github.com/hyperspell/python-sdk/issues) with questions, bugs, or suggestions.
### Determining the installed version
If you've upgraded to the latest version but aren't seeing any new features you were expecting, then your Python environment is likely still using an older version.
You can determine the version that is being used at runtime with:
```py
import hyperspell
print(hyperspell.__version__)
```
## Requirements
Python 3.9 or higher.
## Contributing
See [the contributing documentation](https://github.com/hyperspell/python-sdk/tree/main/./CONTRIBUTING.md).
| text/markdown | null | Hyperspell <hello@hyperspell.com> | null | null | MIT | null | [
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Operating System :: MacOS",
"Operating System :: Microsoft :: Windows",
"Operating System :: OS Independent",
"Operating System :: POSIX",
"Operating System :: POSIX :: Linux",
"Programming Language :: Python :: 3.9",
"Pro... | [] | null | null | >=3.9 | [] | [] | [] | [
"anyio<5,>=3.5.0",
"distro<2,>=1.7.0",
"httpx<1,>=0.23.0",
"pydantic<3,>=1.9.0",
"sniffio",
"typing-extensions<5,>=4.10",
"aiohttp; extra == \"aiohttp\"",
"httpx-aiohttp>=0.1.9; extra == \"aiohttp\""
] | [] | [] | [] | [
"Homepage, https://github.com/hyperspell/python-sdk",
"Repository, https://github.com/hyperspell/python-sdk"
] | twine/5.1.1 CPython/3.12.9 | 2026-02-18T23:38:11.430344 | hyperspell-0.32.0.tar.gz | 140,324 | c3/ce/297becfae2e65f209b2fdeba24747c1d2e134621784bf0573950723200f4/hyperspell-0.32.0.tar.gz | source | sdist | null | false | fc56b007361b8b0469748d6ffea435e6 | 57c72f6137a46b8a36f977ce16608be20411a87286b9d173e1c4cf8fdfd2cb0f | c3ce297becfae2e65f209b2fdeba24747c1d2e134621784bf0573950723200f4 | null | [] | 242 |
2.4 | odin-bots | 0.8.0 | DEPRECATED — use 'iconfucius' instead. pip install iconfucius | # odin-bots is deprecated
**This package has been renamed to [`iconfucius`](https://pypi.org/project/iconfucius/).**
## Migration
```bash
pip uninstall odin-bots
pip install iconfucius
```
Then replace `odin-bots` with `iconfucius` in your workflow:
```bash
cd my-bots
iconfucius
```
`iconfucius` will detect your existing `odin-bots.toml` and offer to upgrade it
to `iconfucius.toml`. Your `.wallet/`, `.cache/`, and `.memory/` directories
are fully compatible.
## Links
- New package: [pypi.org/project/iconfucius](https://pypi.org/project/iconfucius/)
- Source: [github.com/onicai/IConfucius](https://github.com/onicai/IConfucius)
## License
MIT
| text/markdown | null | icpp-pro <icpp@icpp.world> | null | icpp-pro <icpp@icpp.world> | null | bitcoin, icp, internet-computer, odin, trading, defi, siwb, ckbtc | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Topic :: Software Developmen... | [] | null | null | >=3.11 | [] | [] | [] | [
"typer>=0.21.1",
"requests>=2.31",
"curl_cffi>=0.7",
"btclib>=2023.7",
"bitcoin-utils>=0.7.3",
"icp-py-core>=2.3.0",
"filelock>=3.0",
"anthropic>=0.40",
"python-dotenv>=1.0",
"pytest>=7.0; extra == \"dev\"",
"pytest-asyncio>=0.21; extra == \"dev\"",
"build>=1.0; extra == \"dev\"",
"twine>=5.... | [] | [] | [] | [
"Homepage, https://github.com/onicai/odin_bots",
"Repository, https://github.com/onicai/odin_bots",
"Documentation, https://github.com/onicai/odin_bots#readme",
"Issues, https://github.com/onicai/odin_bots/issues"
] | twine/6.2.0 CPython/3.12.11 | 2026-02-18T23:36:56.000906 | odin_bots-0.8.0.tar.gz | 120,407 | df/82/edd4d8e0c12f7a50f7a21a72eba911b1cf40bb7e01b6d01238d066cb5b28/odin_bots-0.8.0.tar.gz | source | sdist | null | false | 02fc330f52bfbcb6dcea7d17010cd625 | 4b613b062f2180a9fd2d3e4d083702a732112ee89cf648631136225296e47660 | df82edd4d8e0c12f7a50f7a21a72eba911b1cf40bb7e01b6d01238d066cb5b28 | MIT | [
"LICENSE"
] | 239 |
2.4 | normadocs | 0.1.2a1 | Convert Markdown to professionally formatted DOCX/PDF following academic standards (APA 7th, ICONTEC, IEEE). | # NormaDocs
[](https://opensource.org/licenses/MIT)
[](https://www.python.org/downloads/release/python-3120/)
[](https://github.com/astral-sh/ruff)
**NormaDocs** is a professional open-source tool designed to convert Markdown documents into standard academic formats (DOCX, PDF), starting with strict support for **APA 7th Edition**.
Its modular architecture allows future integration of other standards such as **ICONTEC**, **IEEE**, and more.
## Features ✨
- **Full Automation**: Turns plain Markdown into documents ready to submit.
- **Multi-format**: Output to DOCX and PDF.
- **APA 7 Compliance**:
- Automatic title page.
- Times New Roman 12 pt, double spacing.
- Formatted citations and references.
- **Modular**: Use it as a CLI (`normadocs`) or as a Python library (`normadocs`).
## Installation 📦
### Prerequisites
- Python 3.12+
- [Pandoc](https://pandoc.org/installing.html)
- LibreOffice (optional, for PDF)
### From the repository
```bash
git clone https://github.com/mackroph/normadocs.git
cd normadocs
make install
```
## Usage 🚀
### Command Line (CLI)
The main command is `normadocs`:
```bash
# Help
normadocs --help
# Basic conversion
normadocs IDocs/paper.md
# Convert to PDF and DOCX in a specific folder
normadocs IDocs/paper.md -o ./ExportDocs --format pdf
```
### As a Library (Python)
```python
from pathlib import Path
from normadocs.preprocessor import MarkdownPreprocessor
from normadocs.docx_formatter import APADocxFormatter
from normadocs.pandoc_client import PandocRunner
# 1. Pre-process
md_text = Path("paper.md").read_text()
processor = MarkdownPreprocessor()
clean_md, meta = processor.process(md_text)
# 2. Convert
PandocRunner().run(clean_md, "output.docx")
# 3. Apply formatting standards
formatter = APADocxFormatter("output.docx")
formatter.process(meta)
formatter.save("output_final.docx")
```
## Development 🛠️
```bash
make install # Install dependencies
make test # Run tests
make lint # Check code quality
make build # Build the package
```
## License 📄
This project is licensed under the MIT License.
| text/markdown | null | Cristian Muñoz <cristianmz21@users.noreply.github.com> | null | null | MIT License
Copyright (c) 2026 Mackroph
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE. | academic, apa, citation, docx, formatting, icontec, ieee, markdown, normas, pdf, research, thesis, writing | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Education",
"Intended Audience :: Science/Research",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python... | [] | null | null | >=3.10 | [] | [] | [] | [
"click>=8.0.0",
"python-docx>=1.1.0",
"typer>=0.9.0",
"weasyprint>=60.0.0",
"build; extra == \"dev\"",
"mypy; extra == \"dev\"",
"ruff; extra == \"dev\"",
"twine; extra == \"dev\"",
"types-setuptools; extra == \"dev\""
] | [] | [] | [] | [
"Homepage, https://github.com/CristianMz21/normadocs",
"Repository, https://github.com/CristianMz21/normadocs",
"Bug Tracker, https://github.com/CristianMz21/normadocs/issues"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-18T23:36:13.772169 | normadocs-0.1.2a1.tar.gz | 51,842 | da/1f/551d573a2c5422a9ba25e055c8063f2936cd27f8ab834941e8021e456a95/normadocs-0.1.2a1.tar.gz | source | sdist | null | false | af52ed0048ab8ffe49fa7861480e738e | a42696009484704561d2854cfcd490072599967ffe5b44f528623759b1882d59 | da1f551d573a2c5422a9ba25e055c8063f2936cd27f8ab834941e8021e456a95 | null | [
"LICENSE"
] | 223 |
2.4 | squint | 0.2.0 | A simple query interface for tabular data that's light-weight and easy to learn. |
***********************************************
squint: Simple query interface for tabular data
***********************************************
..
Project badges for quick reference:
|buildstatus| |devstatus| |license| |pyversions|
.. start-inclusion-marker-description
Squint is a simple query interface for tabular data that's light-weight
and easy to learn. A core feature of Squint is that **the structure of a
query's selection determines the structure of its result**. With
it you can:
* Select data using Python literals—sets, lists, dictionaries,
etc.—and get results in the same format.
* Aggregate, map, filter, reduce, and otherwise manipulate data.
* Lazily iterate over results, write them to a file, or eagerly
evaluate them in memory.
* Analyze data from CSV, Excel, SQL, and other data sources.
.. end-inclusion-marker-description
:Documentation:
| https://squint.readthedocs.io/ (stable)
| https://squint.readthedocs.io/en/latest/ (latest)
:Official:
| https://pypi.org/project/squint/
:Development:
| https://github.com/shawnbrown/squint
Some Examples
=============
The examples below will query a CSV file containing the following
data (**example.csv**):
=== === ===
A B C
=== === ===
x foo 20
x foo 30
y foo 10
y bar 20
z bar 10
z bar 10
=== === ===
To begin, we load the CSV file into a Select object:
.. code-block:: python
import squint
select = squint.Select('example.csv')
+------------------------------+--------------------------------------+
| When you select a | The result contains a |
+==============================+======================================+
| single column | list of values from that column |
| | |
| .. code-block:: python | .. code-block:: python |
| | |
| select('A') | ['foo', |
| | 'foo', |
| | 'foo', |
| | 'bar', |
| | 'bar', |
| | 'bar'] |
+------------------------------+--------------------------------------+
| tuple of columns | list of tuples with values from |
| | those columns |
| .. code-block:: python | |
| | .. code-block:: python |
| select(('A', 'B')) | |
| | [('x', 'foo'), |
| | ('x', 'foo'), |
| | ('y', 'foo'), |
| | ('y', 'bar'), |
| | ('z', 'bar'), |
| | ('z', 'bar')] |
+------------------------------+--------------------------------------+
| set of columns | list of sets with values from |
| | those columns |
| .. code-block:: python | |
| | .. code-block:: python |
| select({'A', 'B'}) | |
| | [{'x', 'foo'}, |
| | {'x', 'foo'}, |
| | {'y', 'foo'}, |
| | {'y', 'bar'}, |
| | {'z', 'bar'}, |
| | {'z', 'bar'}] |
+------------------------------+--------------------------------------+
| dictionary of columns | dictionary with keys and values |
| | from those columns |
| .. code-block:: python | |
| | .. code-block:: python |
| select({'A': 'C'}) | |
| | {'x': [20, 30], |
| | 'y': [10, 20], |
| | 'z': [10, 10]} |
| | |
| | (Notice that values are grouped by |
| | matching key.) |
+------------------------------+--------------------------------------+
| dictionary with a tuple of | dictionary with keys and tuples of |
| column values | values from those columns |
| | |
| .. code-block:: python | .. code-block:: python |
| | |
| select({'A': ('B', 'C')}) | {'x': [('foo', 20), ('foo', 30)], |
| | 'y': [('foo', 10), ('bar', 20)], |
| | 'z': [('bar', 10), ('bar', 10)]} |
+------------------------------+--------------------------------------+
| dictionary with a tuple of | dictionary with tuple keys and |
| column keys | values from those columns |
| | |
| .. code-block:: python | .. code-block:: python |
| | |
| select({('A', 'B'): 'C'}) | {('x', 'foo'): [20, 30], |
| | ('y', 'foo'): [10], |
| | ('y', 'bar'): [20], |
| | ('z', 'bar'): [10, 10]} |
+------------------------------+--------------------------------------+
Installation
============
.. start-inclusion-marker-install
The Squint package is tested on Python 2.7, 3.4 through 3.8, PyPy,
and PyPy3; and is freely available under the Apache License, version 2.
The easiest way to install squint is to use `pip <https://pip.pypa.io>`_:
.. code-block:: console
pip install squint
To upgrade an existing installation, use the "``--upgrade``" option:
.. code-block:: console
pip install --upgrade squint
The development repository for ``squint`` is hosted on
`GitHub <https://github.com/shawnbrown/squint>`_. If you need bug-fixes
or features that are not available in the current stable release, you can
"pip install" the development version directly from GitHub:
.. code-block:: console
pip install --upgrade https://github.com/shawnbrown/squint/archive/master.zip
All of the usual caveats for a development install should
apply—only use this version if you can risk some instability
or if you know exactly what you're doing. While care is taken
to never break the build, it can happen.
.. end-inclusion-marker-install
----------
Freely licensed under the Apache License, Version 2.0
Copyright 2015 - 2020 National Committee for an Effective Congress, et al.
..
SUBSTITUTION DEFINITIONS:
.. |buildstatus| image:: https://travis-ci.org/shawnbrown/squint.svg?branch=master
:target: https://travis-ci.org/shawnbrown/squint
:alt: Current Build Status
.. |devstatus| image:: https://img.shields.io/pypi/status/squint.svg
:target: https://pypi.org/project/squint/
:alt: Development Status
.. |license| image:: https://img.shields.io/badge/license-Apache%202-blue.svg
:target: https://opensource.org/licenses/Apache-2.0
:alt: Apache 2.0 License
.. |pyversions| image:: https://img.shields.io/pypi/pyversions/squint.svg
:target: https://pypi.org/project/squint/#supported-versions
:alt: Supported Python Versions
.. |githubstars| image:: https://img.shields.io/github/stars/shawnbrown/squint.svg
:target: https://github.com/shawnbrown/squint/stargazers
:alt: GitHub users who have starred this project
.. |pypiversion| image:: https://img.shields.io/pypi/v/squint.svg
:target: https://pypi.org/project/squint/
:alt: Current PyPI Version
| text/x-rst | Shawn Brown | shawnbrown@users.noreply.github.com | null | null | Apache 2.0 | null | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"License :: OSI Approved :: Apache Software License",
"Operating System :: OS Independent",
"Programming Language :: Python :: 2",
"Programming Language :: Python :: 2.7",
"Programming Language :: Python :: 3",
"Programming Language ... | [] | https://github.com/shawnbrown/squint | null | !=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*,>=2.7 | [] | [] | [] | [
"get-reader[dbf,excel]"
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.13.11 | 2026-02-18T23:35:33.334985 | squint-0.2.0.tar.gz | 50,280 | 32/12/2a2e3f58940847dd725770e47aeae047e2354874bed4c12445e91edec77c/squint-0.2.0.tar.gz | source | sdist | null | false | 0d9d596f2037b2a930929c89a2a90b03 | e982a7ce42cc5b6cd423f713c7903d6bbfbab9047d33bd790ba90349f7f61186 | 32122a2e3f58940847dd725770e47aeae047e2354874bed4c12445e91edec77c | null | [
"LICENSE",
"AUTHORS"
] | 262 |
2.4 | deploypyfiles | 0.0.3 | Copy Python scripts from working directory to specified locations | # Simple Python scripts deployer `deploypyfiles`
Copy files from working directory to specified locations
| text/markdown | null | Uladzislau Khamkou <kham@tuta.io> | null | Uladzislau Khamkou <kham@tuta.io> | null | deploy, tool | [
"Environment :: Console",
"Intended Audience :: Developers",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Topic :: Software Development :: Build Tools",
"Topic :: Utilities",
"Typing :: Typed"
] | [] | null | null | >=3.14 | [] | [] | [] | [] | [] | [] | [] | [
"Bug Tracker, https://github.com/hvox/deploypyfiles/issues",
"Homepage, https://github.com/hvox/deploypyfiles/blob/main/readme.md",
"Repository, https://github.com/hvox/deploypyfiles"
] | twine/6.2.0 CPython/3.14.0 | 2026-02-18T23:35:02.758816 | deploypyfiles-0.0.3.tar.gz | 5,902 | eb/75/222c396085edb6e897a4341a5236e318be01006f1a903231547048bbebf0/deploypyfiles-0.0.3.tar.gz | source | sdist | null | false | 76430223e21059c827668929720333d4 | 773821b16aa5c66b31d8c01d432a622cc379578ad2702f98f14fd371f0f934c6 | eb75222c396085edb6e897a4341a5236e318be01006f1a903231547048bbebf0 | MIT | [
"license"
] | 253 |
2.4 | ggsql | 0.1.3 | SQL extension for declarative data visualization | # ggsql
Python bindings for [ggsql](https://github.com/georgestagg/ggsql), a SQL extension for declarative data visualization.
This package provides Python bindings to the Rust `ggsql` crate, enabling Python users to create visualizations using ggsql's VISUALISE syntax with native Altair chart output.
## Installation
### From PyPI (when published)
```bash
pip install ggsql
```
### From source
Building from source requires:
- Rust toolchain (install via [rustup](https://rustup.rs/))
- Python 3.10+
- [maturin](https://github.com/PyO3/maturin)
```bash
# Clone the monorepo
git clone https://github.com/georgestagg/ggsql.git
cd ggsql/ggsql-python
# Create a virtual environment
python -m venv .venv
source .venv/bin/activate # or `.venv\Scripts\activate` on Windows
# Install build dependencies
pip install maturin
# Build and install in development mode
maturin develop
# Or build a wheel
maturin build --release
pip install target/wheels/ggsql-*.whl
```
## Quick Start
### Simple Usage with `render_altair`
For quick visualizations, use the `render_altair` convenience function:
```python
import ggsql
import polars as pl
# Create a DataFrame
df = pl.DataFrame({
"x": [1, 2, 3, 4, 5],
"y": [10, 20, 15, 30, 25],
"category": ["A", "B", "A", "B", "A"]
})
# Render to Altair chart
chart = ggsql.render_altair(df, "VISUALISE x, y DRAW point")
# Display or save
chart.display() # In Jupyter
chart.save("chart.html") # Save to file
```
### Two-Stage API
For more control, use the two-stage API with explicit reader and writer:
```python
import ggsql
import polars as pl
# 1. Create a DuckDB reader
reader = ggsql.DuckDBReader("duckdb://memory")
# 2. Register your DataFrame as a table
df = pl.DataFrame({
"date": ["2024-01-01", "2024-01-02", "2024-01-03"],
"revenue": [100, 150, 120],
"region": ["North", "South", "North"]
})
reader.register("sales", df)
# 3. Execute the ggsql query
spec = reader.execute(
"""
SELECT * FROM sales
VISUALISE date AS x, revenue AS y, region AS color
DRAW line
LABEL title => 'Sales by Region'
"""
)
# 4. Inspect metadata
print(f"Rows: {spec.metadata()['rows']}")
print(f"Columns: {spec.metadata()['columns']}")
print(f"Layers: {spec.layer_count()}")
# 5. Inspect SQL/VISUALISE portions and data
print(f"SQL: {spec.sql()}")
print(f"Visual: {spec.visual()}")
print(spec.layer_data(0)) # Returns polars DataFrame
# 6. Render to Vega-Lite JSON
writer = ggsql.VegaLiteWriter()
vegalite_json = writer.render(spec)
print(vegalite_json)
```
## API Reference
### Classes
#### `DuckDBReader(connection: str)`
Database reader that executes SQL and manages DataFrames.
```python
reader = ggsql.DuckDBReader("duckdb://memory") # In-memory database
reader = ggsql.DuckDBReader("duckdb:///path/to/file.db") # File database
```
**Methods:**
- `register(name: str, df: polars.DataFrame, replace: bool = False)` - Register a DataFrame as a queryable table
- `unregister(name: str)` - Unregister a previously registered table
- `execute_sql(sql: str) -> polars.DataFrame` - Execute SQL and return results
#### `VegaLiteWriter()`
Writer that generates Vega-Lite v6 JSON specifications.
```python
writer = ggsql.VegaLiteWriter()
json_output = writer.render(spec)
```
#### `Validated`
Result of `validate()` containing query analysis without SQL execution.
**Methods:**
- `valid() -> bool` - Whether the query is syntactically and semantically valid
- `has_visual() -> bool` - Whether the query contains a VISUALISE clause
- `sql() -> str` - The SQL portion (before VISUALISE)
- `visual() -> str` - The VISUALISE portion
- `errors() -> list[dict]` - Validation errors with messages and locations
- `warnings() -> list[dict]` - Validation warnings
#### `Spec`
Result of `reader.execute()`, containing resolved visualization ready for rendering.
**Methods:**
- `metadata() -> dict` - Get `{"rows": int, "columns": list[str], "layer_count": int}`
- `sql() -> str` - The executed SQL query
- `visual() -> str` - The VISUALISE clause
- `layer_count() -> int` - Number of DRAW layers
- `data() -> polars.DataFrame | None` - Main query result DataFrame
- `layer_data(index: int) -> polars.DataFrame | None` - Layer-specific data (if filtered)
- `stat_data(index: int) -> polars.DataFrame | None` - Statistical transform data
- `layer_sql(index: int) -> str | None` - Layer filter SQL
- `stat_sql(index: int) -> str | None` - Stat transform SQL
- `warnings() -> list[dict]` - Validation warnings from execution
### Functions
#### `validate(query: str) -> Validated`
Validate query syntax and semantics without executing SQL.
```python
validated = ggsql.validate("SELECT x, y FROM data VISUALISE x, y DRAW point")
if validated.valid():
print("Query is valid!")
else:
for error in validated.errors():
print(f"Error: {error['message']}")
```
#### `reader.execute(query: str) -> Spec`
Execute a ggsql query and return the visualization specification.
```python
reader = ggsql.DuckDBReader("duckdb://memory")
spec = reader.execute("SELECT 1 AS x, 2 AS y VISUALISE x, y DRAW point")
```
#### `render_altair(df, viz: str, **kwargs) -> altair.Chart`
Convenience function to render a DataFrame with a VISUALISE spec to an Altair chart.
**Parameters:**
- `df` - Any narwhals-compatible DataFrame (polars, pandas, etc.). LazyFrames are collected automatically.
- `viz` - The VISUALISE specification string
- `**kwargs` - Additional arguments passed to `altair.Chart.from_json()` (e.g., `validate=False`)
**Returns:** An Altair chart object (Chart, LayerChart, FacetChart, etc.)
```python
import polars as pl
import ggsql
df = pl.DataFrame({"x": [1, 2, 3], "y": [10, 20, 30]})
chart = ggsql.render_altair(df, "VISUALISE x, y DRAW point")
```
## Examples
### Mapping Styles
```python
df = pl.DataFrame({"x": [1, 2, 3], "y": [10, 20, 30], "category": ["A", "B", "A"]})
# Explicit mapping
ggsql.render_altair(df, "VISUALISE x AS x, y AS y DRAW point")
# Implicit mapping (column name = aesthetic name)
ggsql.render_altair(df, "VISUALISE x, y DRAW point")
# Wildcard mapping (map all matching columns)
ggsql.render_altair(df, "VISUALISE * DRAW point")
# With color encoding
ggsql.render_altair(df, "VISUALISE x, y, category AS color DRAW point")
```
### Custom Readers
You can use any Python object with an `execute_sql(sql: str) -> polars.DataFrame` method as a reader. This enables integration with any data source.
```python
import ggsql
import polars as pl
class CSVReader:
"""Custom reader that loads data from CSV files."""
def __init__(self, data_dir: str):
self.data_dir = data_dir
def execute_sql(self, sql: str) -> pl.DataFrame:
# Simple implementation: ignore SQL and return fixed data
# A real implementation would parse SQL to determine which file to load
return pl.read_csv(f"{self.data_dir}/data.csv")
# Use custom reader with ggsql.execute()
reader = CSVReader("/path/to/data")
spec = ggsql.execute(
"SELECT * FROM data VISUALISE x, y DRAW point",
reader
)
writer = ggsql.VegaLiteWriter()
json_output = writer.render(spec)
```
**Additional methods** for custom readers:
- `register(name: str, df: polars.DataFrame, replace: bool = False) -> None` - Register a DataFrame as a queryable table (required)
- `unregister(name: str) -> None` - Unregister a previously registered table (optional)
```python
class AdvancedReader:
"""Custom reader with registration support."""
def __init__(self):
self.tables = {}
def execute_sql(self, sql: str) -> pl.DataFrame:
# Your SQL execution logic here
...
def register(self, name: str, df: pl.DataFrame, replace: bool = False) -> None:
self.tables[name] = df
def unregister(self, name: str) -> None:
del self.tables[name]
```
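Because readers are duck-typed, it can be handy to sanity-check a custom reader before handing it to ggsql. A small helper along these lines (illustrative only; `check_reader` is not part of the ggsql API):

```python
def check_reader(reader) -> list[str]:
    """Return the names of required reader methods that are missing."""
    required = ("execute_sql", "register")
    optional = ("unregister",)
    missing = [m for m in required if not callable(getattr(reader, m, None))]
    for m in optional:
        if not callable(getattr(reader, m, None)):
            print(f"note: optional method {m!r} not implemented")
    return missing

class MinimalReader:
    def execute_sql(self, sql):
        raise NotImplementedError

# Prints a note about 'unregister', then the missing required methods
print(check_reader(MinimalReader()))  # ['register']
```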
Native readers like `DuckDBReader` use an optimized fast path, while custom Python readers are automatically bridged via IPC serialization.
### Ibis Reader Example
[Ibis](https://ibis-project.org/) provides a unified Python API for SQL operations across multiple backends. Here's how to create an ibis-based custom reader:
```python
import ggsql
import polars as pl
import ibis
class IbisReader:
"""Custom reader using ibis as the SQL backend."""
def __init__(self, backend="duckdb"):
if backend == "duckdb":
self.con = ibis.duckdb.connect()
elif backend == "sqlite":
self.con = ibis.sqlite.connect()
# Add other backends as needed
def execute_sql(self, sql: str) -> pl.DataFrame:
return self.con.con.execute(sql).pl()
def register(self, name: str, df: pl.DataFrame, replace: bool = False) -> None:
self.con.create_table(name, df.to_arrow(), overwrite=replace)
def unregister(self, name: str) -> None:
self.con.drop_table(name)
# Usage
reader = IbisReader()
df = pl.DataFrame({
"date": ["2024-01-01", "2024-01-02", "2024-01-03"],
"revenue": [100, 150, 120],
})
reader.register("sales", df)
spec = ggsql.execute(
"SELECT * FROM sales VISUALISE date AS x, revenue AS y DRAW line",
reader
)
writer = ggsql.VegaLiteWriter()
print(writer.render(spec))
```
## Development
### Keeping in sync with the monorepo
The `ggsql-python` package is part of the [ggsql monorepo](https://github.com/posit-dev/ggsql) and depends on the Rust `ggsql` crate via a path dependency. When the Rust crate is updated, you may need to rebuild:
```bash
cd ggsql-python
# Rebuild after Rust changes
maturin develop
# If tree-sitter grammar changed, clean and rebuild
cd .. && cargo clean -p tree-sitter-ggsql && cd ggsql-python
maturin develop
```
### Running tests
```bash
# Install test dependencies
pip install pytest
# Run all tests
pytest tests/ -v
```
## Requirements
- Python >= 3.10
- altair >= 5.0
- narwhals >= 2.15
- polars >= 1.0
## License
MIT
| text/markdown; charset=UTF-8; variant=GFM | null | null | null | null | MIT | sql, visualization, vega-lite, grammar-of-graphics | [
"Programming Language :: Rust",
"Programming Language :: Python :: Implementation :: CPython"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"altair>=5.0",
"narwhals>=2.15.0",
"polars>=1.0",
"maturin>=1.4; extra == \"dev\"",
"pytest>=7.0; extra == \"test\"",
"duckdb>=1.0; extra == \"test\"",
"pyarrow>=14.0; extra == \"test\""
] | [] | [] | [] | [] | twine/6.1.0 CPython/3.13.7 | 2026-02-18T23:34:44.565515 | ggsql-0.1.3.tar.gz | 397,494 | f0/fd/9ce21098bee8e8e704ad492a8973f96c21d8c6dd5f77905a070c9f4fef67/ggsql-0.1.3.tar.gz | source | sdist | null | false | e1db3f914fb649d3295102c60648c1d5 | 6f7583ec0ebe28a795ae1d4bdc4afd861f64c196fae07c5769cc5e74b35e34c4 | f0fd9ce21098bee8e8e704ad492a8973f96c21d8c6dd5f77905a070c9f4fef67 | null | [] | 521 |
2.4 | netbox-ip-monitor | 0.1.4 | Visual representation of IP addresses | ## netbox-ip-monitor
Visual representation of IP addresses
IP monitor to display all IP addresses in a prefix
> The monitor does not display IP addresses in IPv6 prefixes, container prefixes, or overly large prefixes (shorter than /24).

## Compatibility
| NetBox Version| Plugin Version|
|---------------|---------------|
| 4.5 | >= 0.1.3 |
| 4.4 | >= 0.1.2 |
| 4.3 | >= 0.1.2 |
| 4.2 | >= 0.0.0, < 0.1.0 |
| 3.X | 0.0.0 |
## Installation
The plugin is available as a [Python package](https://pypi.org/project/netbox-ip-monitor/) in PyPI and can be installed with pip
```
source /opt/netbox/venv/bin/activate
python3 -m pip install netbox-ip-monitor
# or
# python3 -m pip install netbox-ip-monitor==<version>
```
Enable the plugin in /opt/netbox/netbox/netbox/configuration.py:
```
PLUGINS = ['netbox_ip_monitor']
```
Run collectstatic:
```
python3 manage.py collectstatic --no-input
```
To ensure the plugin is automatically re-installed during future upgrades, create a file named `local_requirements.txt` (if not already existing) in the NetBox root directory (alongside `requirements.txt`) and append the `netbox-ip-monitor` package:
```no-highlight
echo netbox-ip-monitor >> local_requirements.txt
```
| text/markdown | Alexander Burmatov | burmatov202002@gmail.com | null | null | Apache 2.0 | netbox ip monitor plugin | [] | [] | https://github.com/Future998/netbox-ip-monitor | null | null | [] | [] | [] | [] | [] | [] | [] | [] | twine/6.1.0 CPython/3.13.7 | 2026-02-18T23:34:10.463306 | netbox_ip_monitor-0.1.4.tar.gz | 7,582 | b9/d7/d738b96e5d6fdf222255f146be7b88f77431dacce08a9b3ef9e39c0ade26/netbox_ip_monitor-0.1.4.tar.gz | source | sdist | null | false | ab13a78c17abf1624e4bd2ef661b84b5 | b965872d337ae66b70eae3912c1e5520c9ef647e12196e2c54697edc2828caf8 | b9d7d738b96e5d6fdf222255f146be7b88f77431dacce08a9b3ef9e39c0ade26 | null | [
"LICENSE"
] | 248 |
2.4 | libmeshctrl | 1.3.3 | Python package for interacting with a Meshcentral server instance | .. These are examples of badges you might want to add to your README:
please update the URLs accordingly
.. image:: https://api.cirrus-ci.com/github/<USER>/pylibmeshctrl.svg?branch=main
:alt: Built Status
:target: https://cirrus-ci.com/github/<USER>/pylibmeshctrl
.. image:: https://readthedocs.org/projects/pylibmeshctrl/badge/?version=latest
:alt: ReadTheDocs
:target: https://pylibmeshctrl.readthedocs.io/en/stable/
.. image:: https://img.shields.io/pypi/v/pylibmeshctrl.svg
:alt: PyPI-Server
:target: https://pypi.org/project/pylibmeshctrl/
.. image:: https://img.shields.io/conda/vn/conda-forge/pylibmeshctrl.svg
:alt: Conda-Forge
:target: https://anaconda.org/conda-forge/pylibmeshctrl
.. image:: https://pepy.tech/badge/pylibmeshctrl/month
:alt: Monthly Downloads
:target: https://pepy.tech/project/pylibmeshctrl
.. image:: https://img.shields.io/twitter/url/http/shields.io.svg?style=social&label=Twitter
:alt: Twitter
:target: https://twitter.com/pylibmeshctrl
.. image:: https://img.shields.io/badge/-PyScaffold-005CA0?logo=pyscaffold
:alt: Project generated with PyScaffold
:target: https://pyscaffold.org/
|
meshctrl
========
Library for remotely interacting with a
`MeshCentral <https://meshcentral.com/>`__ server instance
Installation
------------
::

   pip install libmeshctrl
Usage
-----
This module is implemented as a primarily asynchronous library
(asyncio), mostly through the `Session <https://pylibmeshctrl.readthedocs.io/en/latest/api/meshctrl.html#meshctrl.session.Session>`__ class. Because the library is asynchronous, you must wait for it to be
initialized before interacting with the server. The preferred way to do
this is to use the async context manager pattern:
.. code:: python

   import meshctrl

   async with meshctrl.Session(url, **options) as session:
       print(await session.list_users())
       ...
However, if you prefer to instantiate the object yourself, you can
simply use the `initialized <https://pylibmeshctrl.readthedocs.io/en/latest/api/meshctrl.html#meshctrl.session.Session.initialized>`__ property:
.. code:: python

   session = meshctrl.Session(url, **options)
   await session.initialized.wait()
Note that, in this case, you will be required to clean up the session
using its `close <https://pylibmeshctrl.readthedocs.io/en/latest/api/meshctrl.html#meshctrl.session.Session.close>`__ method.
Session Parameters
------------------
``url``: URL of meshcentral server to connect to. Should start with
either "ws://" or "wss://".
``options``: optional parameters. Described at `Read the
Docs <https://pylibmeshctrl.readthedocs.io/en/latest/api/meshctrl.html#module-meshctrl.session>`__
API
---
API is documented in the `API
Docs <https://pylibmeshctrl.readthedocs.io/en/latest/api/meshctrl.html>`__
.. _pyscaffold-notes:
Note
====
This project has been set up using PyScaffold 4.6. For details and usage
information on PyScaffold see https://pyscaffold.org/.
| text/x-rst; charset=UTF-8 | Josiah Baldwin | jbaldwin8889@gmail.com | null | null | MIT | null | [
"Development Status :: 5 - Production/Stable",
"Programming Language :: Python"
] | [
"any"
] | https://github.com/HuFlungDu/pylibmeshctrl/ | null | >=3.8 | [] | [] | [] | [
"importlib-metadata",
"cryptography~=46.0.5",
"websockets~=16.0.0",
"python-socks[asyncio]~=2.8.1",
"setuptools; extra == \"testing\"",
"pytest; extra == \"testing\"",
"pytest-cov; extra == \"testing\""
] | [] | [] | [] | [
"Documentation, https://pylibmeshctrl.readthedocs.io/",
"Source, https://github.com/HuFlungDu/pylibmeshctrl/"
] | twine/6.0.1 CPython/3.13.0 | 2026-02-18T23:33:38.635352 | libmeshctrl-1.3.3.tar.gz | 168,577 | b6/87/55b4bca2797f21b6b84c4cad112e7b1a44d4cfcaacb10fbbf351323d4960/libmeshctrl-1.3.3.tar.gz | source | sdist | null | false | adc07656987ad057df58e1e56e9c14d7 | 452c84e774c99579e6fcb38593954e54bff7a0fa26639686918aba6aeb0ac605 | b68755b4bca2797f21b6b84c4cad112e7b1a44d4cfcaacb10fbbf351323d4960 | null | [] | 268 |
2.1 | tesseract-robotics-nanobind | 0.7.4 | Tesseract robotics Python bindings (nanobind) | # Tesseract Python (nanobind)
[](https://pypi.org/project/tesseract-robotics-nanobind/)
[](https://github.com/tesseract-robotics/tesseract_nanobind)
[](https://github.com/tesseract-robotics/tesseract_nanobind/actions)
[](https://tesseract-robotics.github.io/tesseract_nanobind/)
[](https://opensource.org/licenses/Apache-2.0)
> **Note:** This is a friendly fork of [tesseract_python](https://github.com/tesseract-robotics/tesseract_python) that replaces SWIG bindings with modern [nanobind](https://github.com/wjakob/nanobind) bindings.
Python bindings for [Tesseract](https://github.com/tesseract-robotics/tesseract) robotics motion planning using [nanobind](https://github.com/wjakob/nanobind).
## Features
- Scene loading and management (URDF, SRDF, meshes)
- Collision checking (Bullet, FCL)
- Kinematics (KDL, OPW, UR)
- Motion planning (OMPL, Descartes, TrajOpt)
- Time parameterization (TOTG, ISP, Ruckig)
- Task composition and pipelines
- Pythonic high-level API
## Installation
```bash
pip install tesseract-robotics-nanobind
```
**Platform support:** Linux x86_64. macOS arm64 coming soon.
## Quick Start
```python
from tesseract_robotics.planning import (
Robot, MotionProgram, JointTarget, CartesianTarget,
Pose, box, create_obstacle, TaskComposer,
)
# Load robot
robot = Robot.from_urdf(
"package://tesseract_support/urdf/abb_irb2400.urdf",
"package://tesseract_support/urdf/abb_irb2400.srdf"
)
# Add obstacle
create_obstacle(robot, "box", box(0.5, 0.5, 0.5), Pose.from_xyz(0.5, 0, 0.3))
# Build motion program
program = (MotionProgram("manipulator", tcp_frame="tool0")
.set_joint_names(robot.get_joint_names("manipulator"))
.move_to(JointTarget([0, 0, 0, 0, 0, 0]))
.move_to(CartesianTarget(Pose.from_xyz(0.5, 0.3, 0.8)))
)
# Plan
composer = TaskComposer.from_config()
result = composer.plan(robot, program)
if result.successful:
for pt in result.trajectory:
print(pt.positions)
```
## Low-Level API
For direct C++ API access:
```python
from tesseract_robotics.tesseract_environment import Environment
from tesseract_robotics.tesseract_common import GeneralResourceLocator
env = Environment()
locator = GeneralResourceLocator()
env.init("/path/to/robot.urdf", "/path/to/robot.srdf", locator)
print(f"Joints: {env.getJointNames()}")
print(f"Links: {env.getLinkNames()}")
```
## Examples
See the `examples/` directory for:
- `basic_cartesian_example.py` - Simple Cartesian planning
- `freespace_ompl_example.py` - OMPL freespace planning
- `pick_and_place_example.py` - Pick and place with TrajOpt
- `puzzle_piece_example.py` - Cartesian path following
- And more...
## Development
Enable pre-commit hook (runs test suite before each commit):
```bash
pre-commit install
pre-commit install --hook-type pre-push
```
This runs the hooks defined in `.pre-commit-config.yaml`.
## Acknowledgments
This project builds upon the excellent work of [John Wason](https://github.com/johnwason) and the [Tesseract Robotics](https://github.com/tesseract-robotics) team. The original [tesseract_python](https://github.com/tesseract-robotics/tesseract_python) SWIG bindings laid the foundation for this nanobind implementation.
Special thanks to:
- **John Wason** (Wason Technology, LLC) - Original tesseract_python author and Tesseract maintainer
- **Levi Armstrong** - Tesseract core developer
- **Jelle Feringa** ([Terrestrial](http://terrestrial.construction)) - nanobind port developer
- The ROS-Industrial consortium for supporting Tesseract development
## License
Apache 2.0
| text/markdown | null | Jelle Feringa <jelle@terrestrial.construction>, John Wason <wason@wasontech.com> | null | null | null | null | [
"Development Status :: 4 - Beta",
"Topic :: Scientific/Engineering",
"Operating System :: Unix",
"Operating System :: POSIX",
"Programming Language :: Python",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming La... | [] | null | null | >=3.9 | [] | [] | [] | [
"numpy>=1.21.0",
"loguru>=0.7.0",
"pyyaml>=6.0",
"scipy>=1.7.0",
"aiohttp",
"importlib-resources"
] | [] | [] | [] | [
"Homepage, https://github.com/tesseract-robotics/tesseract_nanobind",
"Repository, https://github.com/tesseract-robotics/tesseract_nanobind"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-18T23:32:21.433458 | tesseract_robotics_nanobind-0.7.4-cp39-cp39-manylinux_2_35_x86_64.whl | 103,764,072 | 72/db/5ce32c602b5ea480156f7ed5be17b7a70f24c0a0800dfa916ced7c48fcff/tesseract_robotics_nanobind-0.7.4-cp39-cp39-manylinux_2_35_x86_64.whl | cp39 | bdist_wheel | null | false | 04dd46eae3d8666266b5da55c8d27e58 | 0d50f6b545c1702e6f830ffc0b2fab5c66c94479d42463279f4afe4dc8508023 | 72db5ce32c602b5ea480156f7ed5be17b7a70f24c0a0800dfa916ced7c48fcff | null | [] | 328 |
2.4 | Qubx | 0.7.40.dev11 | Qubx - Quantitative Trading Framework | # Qubx - Quantitative Trading Framework
[](https://github.com/xLydianSoftware/Qubx/actions/workflows/ci.yml)
```
⠀⠀⡰⡖⠒⠒⢒⢦⠀⠀
⠀⢠⠃⠈⢆⣀⣎⣀⣱⡀ QUBX | Quantitative Backtesting Environment
⠀⢳⠒⠒⡞⠚⡄⠀⡰⠁ (c) 2026, by xLydian
⠀⠀⠱⣜⣀⣀⣈⣦⠃⠀⠀⠀
```
Qubx is a next-generation quantitative trading framework designed for efficient backtesting and live trading. Built with Python, it offers a robust environment for developing, testing, and deploying trading strategies.
**Qubx is under active development.** We are continuously improving the framework and will update our documentation in the coming days/weeks. This will include comprehensive end-to-end examples for running simulations and live trading.
### Supported Data Types
Qubx supports a wide range of market data:
- OHLC (candlestick data)
- L2 Orderbook
- Liquidations
- Funding rates
- And more...
## Quick Start
### 1. Install Dependencies
```bash
just install
```
### 2. Create a Strategy
```bash
# Create a simple strategy template (default)
uv run qubx init
# Or specify a name and symbols
uv run qubx init --name my_strategy --symbols BTCUSDT,ETHUSDT
```
### 3. Run Your Strategy
```bash
cd my_strategy
# Run in paper trading mode
uv run qubx run config.yml --paper
# Or run in Jupyter mode for interactive development
./jpaper.sh
```
### Available Templates
```bash
# List available strategy templates
uv run qubx init --list-templates
# Create strategy with full project structure and MACD example
uv run qubx init --template project --name my_project
```
### Strategy Development Workflow
1. **Initialize**: `uv run qubx init` - Create strategy from template
2. **Develop**: Edit `strategy.py` to implement your trading logic
3. **Test**: `uv run qubx run config.yml --paper` - Run in paper mode
4. **Debug**: `./jpaper.sh` - Use Jupyter for interactive development
5. **Deploy**: Configure for live trading when ready
## Features
- High-performance backtesting engine
- Live trading support
- Advanced data analysis tools
- Integration with multiple exchanges
- Comprehensive strategy development toolkit
- Detailed performance analytics
## Documentation
For detailed documentation, visit [Qubx Documentation](https://xlydiansoftware.github.io/Qubx/en/latest/)
## Prerequisites
To build and run Qubx, you need:
- Python 3.11 or higher
- C/C++ compiler for Cython compilation
- uv for dependency management
## Installation
### Using pip
```bash
pip install qubx
```
### Development Setup
1. Clone the repository
2. Install dependencies using uv:
```bash
uv sync --all-extras
```
Example trading strategies can be found in the `examples/` directory.
## CLI Usage
Qubx comes with a command-line interface that provides several useful commands:
```bash
qubx --help # Show all available commands
```
Available commands:
- `qubx init` - Create a new strategy from template
- `qubx run` - Start a strategy with given configuration
- `qubx simulate` - Run strategy simulation
- `qubx ls` - List all strategies in a directory
- `qubx release` - Package a strategy into a zip file
- `qubx deploy` - Deploy a strategy from a zip file
- `qubx browse` - Browse backtest results using interactive TUI
## Development
### Running Tests
Run the test suite:
```bash
just test
```
### Additional Commands
- Check code style: `just style-check`
- Build package: `just build`
- Run verbose tests: `just test-verbose`
## In Production
Qubx powers the [AllegedAlpha](https://app.lighter.xyz/public-pools/281474976625478) public pool on Lighter. Public pools allow users to deposit funds from their blockchain wallet into a smart contract. The pool operator manages the trading strategy, and a performance fee is taken from profits (X: [@allegedalpha](https://x.com/allegedalpha)).
## About xLydian
Qubx is developed by [xLydian](https://xlydian.com/).
- Website: [xlydian.com](https://xlydian.com/)
- X: [@xLydian_xyz](https://x.com/xLydian_xyz)
- Contact: [info@xlydian.com](mailto:info@xlydian.com)
## Contributing
Contributions are welcome! Please feel free to submit a Pull Request.
## License
This project is licensed under the GNU General Public License v3.0 - see the [LICENSE](LICENSE) file for details.
| text/markdown | null | Dmitry Marienko <dmitry.marienko@xlydian.com>, Yuriy Arabskyy <yuriy.arabskyy@xlydian.com> | null | null | null | null | [] | [] | null | null | <4.0,>=3.12 | [] | [] | [] | [
"aiohttp<3.11,>=3.10.11",
"ccxt<5,>=4.2.68",
"croniter<3,>=2.0.5",
"cython==3.0.8",
"dash-bootstrap-components<2,>=1.6.0",
"dash<3,>=2.18.2",
"gitpython<4,>=3.1.44",
"importlib-metadata",
"ipywidgets<9,>=8.1.5",
"jinja2<4,>=3.1.0",
"jupyter-console<7,>=6.6.3",
"jupyter<2,>=1.1.1",
"loguru<1,... | [] | [] | [] | [
"homepage, https://xlydian.com",
"repository, https://github.com/xLydianSoftware/Qubx",
"docs, https://xlydiansoftware.github.io/Qubx"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-18T23:32:09.972267 | qubx-0.7.40.dev11.tar.gz | 744,569 | 99/94/b5cd31efe5a08beccb95fcd048964f26c80878af1cd8f69a49357566c110/qubx-0.7.40.dev11.tar.gz | source | sdist | null | false | bfd08c5fd69fafde1e723d110672faac | 89480c39d9e37b8121b51fe0bd31128ebf959b2143f03c9fc445a1b51766c541 | 9994b5cd31efe5a08beccb95fcd048964f26c80878af1cd8f69a49357566c110 | null | [
"LICENSE"
] | 0 |
2.4 | xnobot-ai | 0.1.4 | A lightweight personal AI assistant framework | <div align="center">
<img src="xnobot_logo.png" alt="中航小诺" width="500">
<h1>中航小诺,智能体协同助手</h1>
<p>
<img src="https://img.shields.io/badge/python-≥3.11-blue" alt="Python">
<img src="https://img.shields.io/badge/license-MIT-green" alt="License">
<img src="https://img.shields.io/badge/core_LOC-~7.3k_lines-orange" alt="Lines of Code">
<img src="https://img.shields.io/pypi/v/xnobot-ai" alt="PyPI">
</p>
</div>
🐶 **xnobot** is an **ultra-lightweight** personal AI assistant inspired by [Clawdbot](https://github.com/openclaw/openclaw) and [nanobot](https://github.com/HKUDS/nanobot).
⚡️ Delivers core agent functionality in just **~7,300 lines** of Python — compact and readable.
📏 Real-time line count: **7,295 lines** (run `bash core_agent_lines.sh` to verify)
## 📢 News
- **2026-02-17** 🎉 xnobot launched! Welcome to try 🐶 xnobot!
## Key Features of xnobot:
🪶 **Ultra-Lightweight**: Core capabilities in ~7k Python lines, focused on readable architecture and maintainable modules.
🔬 **Research-Ready**: Clean, readable code that's easy to understand, modify, and extend for research.
⚡️ **Lightning Fast**: Minimal footprint means faster startup, lower resource usage, and quicker iterations.
💎 **Easy-to-Use**: One-click deploy and you're ready to go.
## 🏗️ Architecture
<p align="center">
<img src="xnobot_arch.png" alt="xnobot architecture" width="800">
</p>
## 🔍 Code Audit (2026-02-15)
This audit covers `xnobot/`, `shangwang-bridge/`, and `bridge/`, focusing on code redundancy, implementation-logic consistency, maintainability, and test effectiveness.
### Core execution path (current implementation)
1. On startup, `xnobot cli` creates the `MessageBus`, `AgentLoop`, `ChannelManager`, `CronService`, and `HeartbeatService`.
2. Channel messages are normalized into `InboundMessage`; `AgentLoop` assembles the context and calls the LLM.
3. Tool calls are executed through the `ToolRegistry` (files, commands, web, RAG, messaging, sub-agents).
4. Output is normalized into `OutboundMessage`, which `ChannelManager` dispatches to Telegram/WhatsApp/Shangwang/WeCom.
5. The Shangwang path forwards messages and attachments through `shangwang-bridge` (CDP + NIM hook).
### Key findings (by priority)
| Priority | Type | Finding | Recommendation |
|---|---|---|---|
| P0 | Logic defect | The `session_key` argument of `AgentLoop.process_direct` has no effect, so all direct CLI conversations are written to the same session | Derive `channel/chat_id` from `session_key` in `process_direct`, or pass the session key explicitly |
| P0 | Logic defect | `knowledge_get_document` reads Chroma `get()` results as a nested structure, which can break document-chunk reads | Parse the flat structure that Chroma's `get()` actually returns, and add a unit test |
| P0 | Logic defect | `browser_automation`'s `extract` uses `get_attribute` for `textContent/innerText/innerHTML`, which can come back empty | Read DOM properties via `inner_text()`/`text_content()`/`evaluate()` instead |
| P0 | Stability | A heartbeat-cleanup branch in `gateway` uses `logger` without importing it, raising `NameError` when triggered | Introduce a shared logger, or fall back to `console`/`logging` |
| P1 | Redundancy | The main agent, system-message handling, and subagents each implement a near-identical LLM-tool loop | Extract a shared `run_llm_with_tools()` executor to remove the three-way duplication |
| P1 | Redundancy | API-key validation and Shangwang bridge URL normalization are duplicated in the CLI | Factor out `_validate_provider_config()` and `_normalize_ws_url()` helpers |
| P1 | Redundancy | `ChatHistoryRecorder` repeats JSONL parsing in several places | Extract `_load_rows(path)` and unify exception and blank-line handling |
| P2 | Consistency | The package version has two sources: `pyproject.toml` and `xnobot/__init__.py` disagree | Use a single version source (preferably `pyproject`) |
| P2 | Testing | Async tests are currently skipped (`pytest-asyncio` is not actually active in the environment) | Pin dev dependencies in CI and make the async tests mandatory |
### Audit recommendations (in order of implementation)
1. Fix the P0 items (logical correctness) first and add minimal regression tests.
2. Then consolidate the P1 duplications to lower the cost of future feature work.
3. Finally handle the P2 items (versioning and test governance) to avoid release and operations drift.
## ✨ Features
<table align="center">
<tr align="center">
<th><p align="center">📈 24/7 Real-Time Market Analysis</p></th>
<th><p align="center">🚀 Full-Stack Software Engineer</p></th>
<th><p align="center">📅 Smart Daily Routine Manager</p></th>
<th><p align="center">📚 Personal Knowledge Assistant</p></th>
</tr>
<tr>
<td align="center"><p align="center"><img src="case/search.gif" width="180" height="400"></p></td>
<td align="center"><p align="center"><img src="case/code.gif" width="180" height="400"></p></td>
<td align="center"><p align="center"><img src="case/scedule.gif" width="180" height="400"></p></td>
<td align="center"><p align="center"><img src="case/memory.gif" width="180" height="400"></p></td>
</tr>
<tr>
<td align="center">Discovery • Insights • Trends</td>
<td align="center">Develop • Deploy • Scale</td>
<td align="center">Schedule • Automate • Organize</td>
<td align="center">Learn • Memory • Reasoning</td>
</tr>
</table>
## 📦 Install
**Install from source** (latest features, recommended for development)
```bash
git clone https://github.com/Yuhamixli/XnoBot.git
cd xnobot
pip install -e .
```
**Install with [uv](https://github.com/astral-sh/uv)** (stable, fast)
```bash
uv tool install xnobot-ai
```
**Install from PyPI** (stable)
```bash
pip install xnobot-ai
```
## 🚀 Quick Start
> [!TIP]
> Set your API key in `~/.xnobot/config.json`.
> Get API keys: [OpenRouter](https://openrouter.ai/keys) (LLM) · [Brave Search](https://brave.com/search/api/) (optional, for web search)
> The model is configured in `agents.defaults.model`. Recommended: `anthropic/claude-opus-4-5` or `openai/gpt-4o`; to save cost, `minimax/minimax-m2` or `moonshotai/kimi-k2.5` also work.
**1. Initialize**
```bash
xnobot onboard
```
**2. Configure** (`~/.xnobot/config.json`)
```json
{
"providers": {
"openrouter": {
"apiKey": "sk-or-v1-xxx"
}
},
"agents": {
"defaults": {
"model": "anthropic/claude-opus-4-5"
}
},
"tools": {
"web": {
"search": {
"apiKey": "BSA-xxx"
}
}
}
}
```
**3. Chat**
```bash
xnobot agent -m "What is 2+2?"
```
That's it! You have a working AI assistant in 2 minutes.
> **Project mode**: copy `config.example.json` to `~/.xnobot/config.json`. The workspace already points at `c:/Projects/xnobot/workspace`, so the knowledge base, memory, and skills are version-controlled and deployed with the project.
## 🌐 RPA / Browser Automation
With the **browser_automation** tool, the agent can drive a browser: open external pages, log in, fill forms, click, and extract content. This suits platforms that must be operated through a web front end (such as the enterprise Shangwang portal).
**Install the optional dependencies**
```bash
pip install playwright
playwright install chromium
```
Alternatively, install xnobot's RPA extra: `pip install "xnobot-ai[rpa]"`, then run `playwright install chromium`.
**Usage**
Just tell the agent what you want in plain language, for example:
- "Open https://example.com and extract the page title"
- "Open the Shangwang login page, fill in the username xxx and password xxx, click Log in, then extract the to-do list"
The agent calls `browser_automation` and executes the steps in order: `navigate` → `fill` / `click` → `extract`. If the page's selectors are tricky, describe the page structure in AGENTS.md or in the conversation (e.g. "the login button's id is submit") so elements can be located more reliably.
## 🖥️ Local Models (vLLM)
Run xnobot with your own local models using vLLM or any OpenAI-compatible server.
**1. Start your vLLM server**
```bash
vllm serve meta-llama/Llama-3.1-8B-Instruct --port 8000
```
**2. Configure** (`~/.xnobot/config.json`)
```json
{
"providers": {
"vllm": {
"apiKey": "dummy",
"apiBase": "http://localhost:8000/v1"
}
},
"agents": {
"defaults": {
"model": "meta-llama/Llama-3.1-8B-Instruct"
}
}
}
```
**3. Chat**
```bash
xnobot agent -m "Hello from my local LLM!"
```
> [!TIP]
> The `apiKey` can be any non-empty string for local servers that don't require authentication.
## 💬 Chat Apps
Talk to your xnobot through Telegram, WhatsApp, or 企业微信 (WeCom) — anytime, anywhere.
| Channel | Setup |
|---------|-------|
| **Shangwang Office (AVIC)** | CDP bridge (Windows required) |
| **Telegram** | Easy (just a token) |
| **WhatsApp** | Medium (scan QR) |
| **WeCom (企业微信)** | Self-built enterprise app (send messages) |
<details>
<summary><b>Shangwang Office (商网办公 / AVIC Office)</b></summary>
Connects via the **Chrome DevTools Protocol (CDP)** to a logged-in Shangwang Office client (Avic.exe, an Electron app) and hooks the embedded NetEase Yunxin NIM SDK directly, enabling real-time message send and receive. No scraping or OCR involved, so it is stable and reliable.
**Prerequisites**
- Windows, with Shangwang Office (Avic.exe) installed
- A network inside mainland China (Shangwang is not reachable from overseas)
**1. Start Avic.exe with a debugging port**
Edit the Target field of the desktop shortcut, or run directly in PowerShell:
```powershell
& "C:\Program Files (x86)\AVIC Office\Avic.exe" --remote-debugging-port=9222
```
**2. Log in to Shangwang Office manually** and open the chat view.
**3. Start the bridge**
```bash
cd shangwang-bridge
pip install -r requirements.txt
python main.py
```
**4. Configure xnobot** (`~/.xnobot/config.json`)
```json
{
"channels": {
"shangwang": {
"enabled": true,
"bridgeUrl": "ws://localhost:3010",
"mentionNames": ["Js小程"],
"groupReplyMaxLength": 200
}
}
}
```
- `mentionNames`: in group chats, reply only to messages that @mention one of these nicknames (direct messages are unaffected); an empty array replies to all group messages
- `groupReplyMaxLength`: maximum length of a group-chat reply (default 200); longer replies are truncated automatically
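The `groupReplyMaxLength` truncation amounts to something like the sketch below (illustrative only; whether the real implementation appends an ellipsis is an assumption):

```python
def truncate_group_reply(text: str, max_len: int = 200) -> str:
    """Cut group replies that exceed the configured maximum length."""
    if len(text) <= max_len:
        return text
    return text[: max_len - 1] + "…"

print(truncate_group_reply("short reply"))   # unchanged
print(len(truncate_group_reply("x" * 500)))  # 200
```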
**5. Run**
```bash
xnobot gateway
```
> See [shangwang-bridge/README.md](./shangwang-bridge/README.md) for details
</details>
<details>
<summary><b>Local Knowledge Base (RAG)</b></summary>
When a question comes in over Shangwang (or any channel), the agent can search the local knowledge base and answer from your policy and procedure documents.
**1. Install the RAG dependencies**
```bash
pip install xnobot-ai[rag]
```
**2. Add documents**
Put policies, standards, and similar files into the **`knowledge` directory under the workspace**. Project mode: `c:/Projects/xnobot/workspace/knowledge/`; default: `~/.xnobot/workspace/knowledge/`. Supported formats: TXT, MD, PDF, Word (.docx), Excel (.xlsx).
**3. Ingest the knowledge base**
```bash
xnobot knowledge ingest
```
Or tell the agent "ingest the knowledge directory into the knowledge base"; it will call `knowledge_ingest`.
**4. Ask questions**
Ask directly over Shangwang or the CLI, e.g. "What is the travel expense reimbursement standard?". The agent runs `knowledge_search` first, then answers based on the results.
Optional settings live under `tools.knowledge` in `~/.xnobot/config.json` (chunkSize, topK, enabled, webCacheEnabled, etc.).
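An illustrative fragment for those settings (key names from the list above; the values here are examples, not necessarily the defaults):

```json
{
  "tools": {
    "knowledge": {
      "enabled": true,
      "chunkSize": 800,
      "topK": 5,
      "webCacheEnabled": true
    }
  }
}
```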
**Web search cache**: `web_search` / `web_fetch` results are stored automatically in `knowledge/短期/_cache_web/` and ingested, so repeated questions are answered faster; the cache is cleared weekly. See [workspace/knowledge/README.md](./workspace/knowledge/README.md) for details.
</details>
<details>
<summary><b>Telegram</b> </summary>
**1. Create a bot**
- Open Telegram, search `@BotFather`
- Send `/newbot`, follow prompts
- Copy the token
**2. Configure**
```json
{
"channels": {
"telegram": {
"enabled": true,
"token": "YOUR_BOT_TOKEN",
"allowFrom": ["YOUR_USER_ID"]
}
}
}
```
> Get your user ID from `@userinfobot` on Telegram.
**3. Run**
```bash
xnobot gateway
```
</details>
<details>
<summary><b>WhatsApp</b></summary>
Requires **Node.js ≥18**.
**1. Link device**
```bash
xnobot channels login
# Scan QR with WhatsApp → Settings → Linked Devices
```
**2. Configure**
```json
{
"channels": {
"whatsapp": {
"enabled": true,
"allowFrom": ["+1234567890"]
}
}
}
```
**3. Run** (two terminals)
```bash
# Terminal 1
xnobot channels login
# Terminal 2
xnobot gateway
```
</details>
<details>
<summary><b>WeCom (企业微信)</b></summary>
Sends messages to members through a WeCom "self-built app". Currently **send-only**; receiving user messages requires configuring a callback in the WeCom admin console and may land in a later version.
**1. Create a self-built app**
- Log in to the [WeCom admin console](https://work.weixin.qq.com/wework_admin/loginpage_wx)
- App Management → Self-built → create an app, and note its **AgentId** and **Secret**
- My Company → Company Info → note the **company ID (corp_id)**
**2. Configure**
```json
{
"channels": {
"wecom": {
"enabled": true,
"corpId": "wwxxxxxxxx",
"agentId": 1000002,
"secret": "xxxxxxxx",
"allowFrom": []
}
}
}
```
- An empty `allowFrom` allows all members; add member UserIDs to restrict who can reach the bot.
- To send to a specific member, set `deliver.to` in your cron job or script to that member's **UserID**; use `@all` to message everyone.
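The `allowFrom` rule above (empty list means everyone is allowed, otherwise only listed UserIDs pass) boils down to a one-line check. A minimal sketch with a hypothetical helper name:

```python
def is_allowed(sender: str, allow_from: list[str]) -> bool:
    # WeCom channel rule: an empty allowFrom accepts every member,
    # otherwise only the listed UserIDs get through.
    return not allow_from or sender in allow_from

print(is_allowed("ZhangSan", []))        # True: everyone allowed
print(is_allowed("LiSi", ["ZhangSan"]))  # False: not in the list
```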
**3. Run**
```bash
xnobot gateway
```
</details>
## ⚙️ Configuration
Config file: `~/.xnobot/config.json`
### Web Search
The agent's internet-search capability relies on the **Brave Search API**. If `tools.web.search.apiKey` is not configured, `web_search` fails and the agent falls back to `web_fetch`, browser automation, and similar workarounds, with poor results (e.g. replies such as "unable to fetch financial news directly").
**Setup**: add an `apiKey` under `tools.web.search` in `~/.xnobot/config.json`. Request a key at [Brave Search API](https://brave.com/search/api/) (the free tier works).
```json
"tools": {
"web": {
"search": {
"apiKey": "BSA-你的Key",
"maxResults": 5,
"proxy": "http://127.0.0.1:7890"
}
}
}
```
- **Networks in mainland China**: the Brave API (api.search.brave.com) may be rate-limited or time out. If web search keeps failing, do one of the following: add a `proxy` under `tools.web.search` (e.g. a local proxy `http://127.0.0.1:7890`), or set the `HTTPS_PROXY` environment variable before starting the gateway (PowerShell: `$env:HTTPS_PROXY="http://127.0.0.1:7890"; xnobot gateway`). **Restart the gateway** after changing the config or environment.
- **If you use the Shangwang/Telegram (or other) gateway**: after editing `config.json` you must **restart `xnobot gateway`** for changes to take effect (the gateway reads the config only once, at startup).
### Providers
> [!NOTE]
> Groq provides free voice transcription via Whisper. If configured, Telegram voice messages will be automatically transcribed.
| Provider | Purpose | Get API Key |
|----------|---------|-------------|
| `openrouter` | LLM (recommended, access to all models) | [openrouter.ai](https://openrouter.ai) |
| `anthropic` | LLM (Claude direct) | [console.anthropic.com](https://console.anthropic.com) |
| `openai` | LLM (GPT direct) | [platform.openai.com](https://platform.openai.com) |
| `groq` | LLM + **Voice transcription** (Whisper) | [console.groq.com](https://console.groq.com) |
| `gemini` | LLM (Gemini direct) | [aistudio.google.com](https://aistudio.google.com) |
<details>
<summary><b>Full config example</b></summary>
```json
{
"agents": {
"defaults": {
"model": "anthropic/claude-opus-4-5"
}
},
"providers": {
"openrouter": {
"apiKey": "sk-or-v1-xxx"
},
"groq": {
"apiKey": "gsk_xxx"
}
},
"channels": {
"telegram": {
"enabled": true,
"token": "123456:ABC...",
"allowFrom": ["123456789"]
},
"whatsapp": {
"enabled": false
},
"wecom": {
"enabled": false,
"corpId": "",
"agentId": 0,
"secret": "",
"allowFrom": []
}
},
"tools": {
"web": {
"search": {
"apiKey": "BSA..."
}
}
}
}
```
</details>
<details>
<summary><b>Model Configuration</b></summary>
All agent reasoning (gateway, the `agent` command, cron, heartbeat) uses `agents.defaults.model`.
**Config location**: `~/.xnobot/config.json` → `agents.defaults.model`
**Recommended strong models** (each requires an apiKey for the matching provider):
- `anthropic/claude-opus-4-5` - strongest Claude (Anthropic API)
- `anthropic/claude-sonnet-4` - balanced
- `openai/gpt-4o` - GPT-4o (OpenAI API)
- `openai/gpt-4o-mini` - lightweight
**Via OpenRouter** (one key gives access to many models):
```json
{
"providers": {
"openrouter": {
"apiKey": "sk-or-v1-xxx"
}
},
"agents": {
"defaults": {
"model": "anthropic/claude-opus-4-5"
}
}
}
```
**Restart the gateway** for the change to take effect.
</details>
## CLI Reference
| Command | Description |
|---------|-------------|
| `xnobot onboard` | Initialize config & workspace |
| `xnobot agent -m "..."` | Chat with the agent |
| `xnobot agent` | Interactive chat mode |
| `xnobot gateway` | Start the gateway |
| `xnobot status` | Show status |
| `xnobot knowledge ingest` | Import documents into knowledge base (default: workspace/knowledge) |
| `xnobot knowledge status` | Show knowledge base chunk count |
| `xnobot knowledge clear-web-cache` | Clear web search cache (normally auto-cleared weekly) |
| `xnobot channels login` | Link WhatsApp (scan QR) |
| `xnobot channels status` | Show channel status |
<details>
<summary><b>Scheduled Tasks (Cron)</b></summary>
```bash
# Add a job
xnobot cron add --name "daily" --message "Good morning!" --cron "0 9 * * *"
xnobot cron add --name "hourly" --message "Check status" --every 3600
# List jobs
xnobot cron list
# Remove a job
xnobot cron remove <job_id>
```
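xnobot lists `croniter` among its dependencies for evaluating cron expressions. For intuition, the next firing time of the daily `0 9 * * *` job above can be computed by hand with the standard library; this simplification handles only fixed-hour daily schedules, not general cron syntax:

```python
from datetime import datetime, timedelta

def next_daily_run(now: datetime, hour: int = 9) -> datetime:
    # Next time a "0 9 * * *" style job fires: today at 09:00,
    # or tomorrow if 09:00 has already passed.
    candidate = now.replace(hour=hour, minute=0, second=0, microsecond=0)
    if candidate <= now:
        candidate += timedelta(days=1)
    return candidate

print(next_daily_run(datetime(2026, 1, 1, 10, 30)))  # 2026-01-02 09:00:00
```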
</details>
## 🐳 Docker
> [!TIP]
> The `-v ~/.xnobot:/root/.xnobot` flag mounts your local config directory into the container, so your config and workspace persist across container restarts.
Build and run xnobot in a container:
```bash
# Build the image
docker build -t xnobot .
# Initialize config (first time only)
docker run -v ~/.xnobot:/root/.xnobot --rm xnobot onboard
# Edit config on host to add API keys
vim ~/.xnobot/config.json
# Run gateway (connects to Telegram/WhatsApp)
docker run -v ~/.xnobot:/root/.xnobot -p 18790:18790 xnobot gateway
# Or run a single command
docker run -v ~/.xnobot:/root/.xnobot --rm xnobot agent -m "Hello!"
docker run -v ~/.xnobot:/root/.xnobot --rm xnobot status
```
## Publish to PyPI
```bash
twine upload dist/*
```
## 📁 Project Structure
```
xnobot/
├── agent/ # 🧠 Core agent logic
│ ├── loop.py # Agent loop (LLM ↔ tool execution)
│ ├── context.py # Prompt builder
│ ├── memory.py # Persistent memory
│ ├── skills.py # Skills loader
│ ├── subagent.py # Background task execution
│ └── tools/ # Built-in tools (incl. spawn)
├── skills/ # 🎯 Bundled skills (github, weather, tmux...)
├── channels/ # 📱 Chat channels
│ ├── telegram.py # Telegram bot
│ ├── whatsapp.py # WhatsApp (Node bridge)
│ ├── wecom.py # WeCom (企业微信)
│ └── shangwang.py # Shangwang Office (CDP bridge)
├── bus/ # 🚌 Message routing
├── cron/ # ⏰ Scheduled tasks
├── heartbeat/ # 💓 Proactive wake-up
├── providers/ # 🤖 LLM providers (OpenRouter, etc.)
├── session/ # 💬 Conversation sessions
├── config/ # ⚙️ Configuration
├── cli/ # 🖥️ Commands
shangwang-bridge/ # 🔌 CDP bridge for AVIC Office
├── cdp.py # CDP client (JS hook injection)
├── server.py # WebSocket server (bridge ↔ xnobot)
├── config.py # Bridge configuration
└── main.py # Entry point
```
## 🤝 Contribute & Roadmap
PRs welcome! The codebase is intentionally small and readable. 🤗
**Roadmap** — Pick an item and [open a PR](https://github.com/Yuhamixli/XnoBot/pulls)!
- [x] **Voice Transcription** — Support for Groq Whisper (Issue #13)
- [ ] **Multi-modal** — See and hear (images, voice, video)
- [ ] **Long-term memory** — Never forget important context
- [ ] **Better reasoning** — Multi-step planning and reflection
- [ ] **More integrations** — Discord, Slack, email, calendar
- [ ] **Self-improvement** — Learn from feedback and mistakes
### Contributors
| text/markdown | xnobot contributors | null | null | null | MIT | agent, ai, chatbot | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12"
] | [] | null | null | >=3.11 | [] | [] | [] | [
"croniter>=2.0.0",
"httpx>=0.25.0",
"litellm>=1.0.0",
"loguru>=0.7.0",
"pydantic-settings>=2.0.0",
"pydantic>=2.0.0",
"python-telegram-bot>=21.0",
"readability-lxml>=0.8.0",
"rich>=13.0.0",
"typer>=0.9.0",
"websocket-client>=1.6.0",
"websockets>=12.0",
"pytest-asyncio>=0.21.0; extra == \"dev... | [] | [] | [] | [
"Homepage, https://github.com/Yuhamixli/XnoBot",
"Repository, https://github.com/Yuhamixli/XnoBot",
"Issues, https://github.com/Yuhamixli/XnoBot/issues"
] | twine/6.2.0 CPython/3.13.5 | 2026-02-18T23:32:04.106480 | xnobot_ai-0.1.4.tar.gz | 109,419 | 6d/f1/d05973984a18c07e9411d6ac33ff98b5794799aae4c70ca8828ecb9e6bd8/xnobot_ai-0.1.4.tar.gz | source | sdist | null | false | 14be13618a82ad86ecef75dec51056a2 | 079b26251deb92fbf6c5eb9c3d3111650ee45e402ea945470acbf81a73f62516 | 6df1d05973984a18c07e9411d6ac33ff98b5794799aae4c70ca8828ecb9e6bd8 | null | [
"LICENSE"
] | 240 |
2.1 | pytribeam | 0.0.1 | automated data collection on TriBeam tools | # pyTriBeam

[![userguide][userguide_badge]](https://sandialabs.github.io/pytribeam/docs/userguide/book/index.html) [![api][api_badge]](https://sandialabs.github.io/pytribeam/docs/api/index.html) [![test-coverage][test-coverage_badge]](https://sandialabs.github.io/pytribeam/coverage_reports/combined/htmlcov/index.html) [![lint][lint_badge]](https://sandialabs.github.io/pytribeam/logs/lint.log) [![version][version_badge]](https://github.com/sandialabs/pytribeam)
[userguide_badge]: https://sandialabs.github.io/pytribeam/badges/userguide.svg
[api_badge]: https://sandialabs.github.io/pytribeam/badges/api.svg
[test-coverage_badge]: https://sandialabs.github.io/pytribeam/badges/test-coverage.svg
[lint_badge]: https://sandialabs.github.io/pytribeam/badges/lint.svg
[version_badge]: https://sandialabs.github.io/pytribeam/badges/version.svg
## Getting Started
Installation instructions and more can be found in the [User Guide](https://sandialabs.github.io/pytribeam/docs/userguide/book/index.html).
More info coming soon!
| text/markdown | null | Andrew Polonsky <apolon@sandia.gov>, Chad Hovey <chovey@sandia.gov>, James Lamb <jdlamb@sandia.gov> | null | null | null | null | [] | [] | null | null | ==3.8.12 | [] | [] | [] | [
"pytest==8.3.3",
"pytest-cov==5.0.0",
"schema",
"h5py; extra == \"dev\"",
"numpy; extra == \"dev\"",
"pdoc; extra == \"dev\"",
"ruff; extra == \"dev\""
] | [] | [] | [] | [
"documentation, https://gitlab-ex.sandia.gov/tribeam/pytribeam/",
"repository, https://gitlab-ex.sandia.gov/tribeam/pytribeam"
] | twine/6.1.0 CPython/3.8.12 | 2026-02-18T23:30:51.492843 | pytribeam-0.0.1.tar.gz | 193,519 | 62/09/a0dd60479822bb37416d00d027f8b693a678008c3d594888307e3716d21a/pytribeam-0.0.1.tar.gz | source | sdist | null | false | b3c0e63fb45a1928f479511ef79fbcba | 89dff2a25702d3dc1dba4c90b030d63a317f317b0ab364c1cecc7d0420be300f | 6209a0dd60479822bb37416d00d027f8b693a678008c3d594888307e3716d21a | null | [] | 268 |
2.4 | hogql-parser | 1.3.14 | HogQL parser for internal PostHog use | # HogQL Parser
Blazing fast HogQL parsing. This package can only work in the context of the PostHog Django app, as it imports from `posthog.hogql`.
You can test changes locally by running `pip install ./hogql_parser`
| text/markdown | PostHog Inc. | hey@posthog.com | PostHog Inc. | hey@posthog.com | MIT | null | [
"Development Status :: 5 - Production/Stable",
"Operating System :: MacOS",
"Operating System :: POSIX :: Linux",
"Programming Language :: Python",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12"
] | [] | https://github.com/PostHog/posthog/tree/master/common/hogql_parser | null | >=3.10 | [] | [] | [] | [] | [] | [] | [] | [] | twine/6.1.0 CPython/3.13.7 | 2026-02-18T23:29:33.038417 | hogql_parser-1.3.14.tar.gz | 67,727 | 3c/cb/7e9cca162e3a37ffbb4ba5f502387fa4924f0ec5ee531ef25b848b1a8d71/hogql_parser-1.3.14.tar.gz | source | sdist | null | false | 7baac6858064b811e3120954d1a20b6a | d5ca34209e91353d415c4ab4b3e7f730136f384af5226c707c14349081985abc | 3ccb7e9cca162e3a37ffbb4ba5f502387fa4924f0ec5ee531ef25b848b1a8d71 | null | [] | 27,655 |
2.4 | adk-redis | 0.0.3 | Redis integrations for Google's Agent Development Kit (ADK) | <div align="center">
<h1>
<img src="https://raw.githubusercontent.com/redis/redis-vl-python/main/docs/_static/Redis_Logo_Red_RGB.svg" width="120" alt="Redis" style="vertical-align: middle; margin-right: 20px;">
<span style="vertical-align: middle; margin: 0 10px;">×</span>
<img src="https://raw.githubusercontent.com/google/adk-python/main/assets/agent-development-kit.png" width="120" alt="ADK" style="vertical-align: middle; margin-left: 20px;">
</h1>
<h1>Redis Integrations for Google Agent Development Kit</h1>
</div>
<div align="center">
[](https://badge.fury.io/py/adk-redis)
[](https://opensource.org/licenses/Apache-2.0)
[](https://www.python.org/downloads/)
[](https://github.com/google/pyink)
[](https://mypy-lang.org/)
**[PyPI](https://pypi.org/project/adk-redis/)** • **[Documentation](https://github.com/redis-developer/adk-redis)** • **[Examples](examples/)** • **[Agent Memory Server](https://github.com/redis/agent-memory-server)** • **[RedisVL](https://docs.redisvl.com)**
</div>
---
## Introduction
**adk-redis** provides Redis integrations for Google's Agent Development Kit (ADK). Implements ADK's `BaseMemoryService`, `BaseSessionService`, tool interfaces, and semantic caching using Redis Agent Memory Server and RedisVL.
<div align="center">
| 🔌 [**ADK Services**](#memory-services) | 🔧 [**Agent Tools**](#search-tools) | ⚡ [**Semantic Caching**](#semantic-caching) |
|:---:|:---:|:---:|
| **Memory Service**<br/>*Long-term memory via Agent Memory Server* | **Memory Tools**<br/>*LLM-controlled memory operations* | **LLM Response Cache**<br/>*Reduce latency & costs* |
| Semantic search & auto-extraction | search, create, update, delete | Similarity-based cache lookup |
| Cross-session knowledge retrieval | Direct Agent Memory Server API | Configurable distance threshold |
| Recency-boosted search | Namespace & user isolation | TTL-based expiration |
| **Session Service**<br/>*Working memory via Agent Memory Server* | **Search Tools**<br/>*RAG via RedisVL* | **Tool Cache**<br/>*Avoid redundant calls* |
| Context window management | Vector, hybrid, text, range search | Cache tool execution results |
| Auto-summarization | Multiple vectorizers supported | Reduce API calls |
| Background memory promotion | Metadata filtering | Configurable thresholds |
</div>
---
## Installation
### Install from PyPI
```bash
pip install adk-redis
```
### Optional Dependencies
Install with optional features based on your use case:
```bash
# Memory & session services (Redis Agent Memory Server integration)
pip install adk-redis[memory]
# Search tools (RedisVL integration)
pip install adk-redis[search]
# All features
pip install adk-redis[all]
```
### Verify Installation
```bash
python -c "from adk_redis import __version__; print(__version__)"
```
### Development Installation
For contributors or those who want the latest unreleased changes:
```bash
# Clone the repository
git clone https://github.com/redis-developer/adk-redis.git
cd adk-redis
# Install with uv (recommended for development)
pip install uv
uv sync --all-extras
# Or install directly from GitHub
pip install git+https://github.com/redis-developer/adk-redis.git@main
```
---
## Getting Started
### Prerequisites
**For memory/session services:**
- [Redis Agent Memory Server](https://github.com/redis/agent-memory-server) (port 8088)
- Redis 8.4+ or Redis Cloud (backend for Agent Memory Server)
**For search tools:**
- Redis 8.4+ or Redis Cloud with Search capability
**Quick start:**
#### 1. Start Redis 8.4
Redis is required for all examples in this repository. Choose one of the following options:
**Option A: Automated setup (recommended)**
```bash
# Run from the repository root
./scripts/start-redis.sh
```
This script will:
- Check if Docker is installed and running
- Check if Redis is already running on port 6379
- Start Redis 8.4 in a Docker container with health checks
- Verify the Redis container is healthy and accepting connections
- Provide helpful commands for managing Redis
**Option B: Manual setup**
```bash
docker run -d --name redis -p 6379:6379 redis:8.4-alpine
```
> **Note**: Redis 8.4 includes the Redis Query Engine (evolved from RediSearch) with native support for vector search, full-text search, and JSON operations. Docker will automatically download the image (~40MB) on first run.
**Verify Redis is running:**
```bash
# Check container status
docker ps | grep redis
# Test connection
docker exec redis redis-cli ping
# Should return: PONG
# Or if you have redis-cli installed locally
redis-cli -p 6379 ping
```
**Common Redis commands:**
```bash
# View logs
docker logs redis
docker logs -f redis # Follow logs in real-time
# Stop Redis
docker stop redis
# Restart Redis
docker restart redis
# Remove Redis (stops and deletes container)
docker rm -f redis
```
**Troubleshooting:**
- **Port 6379 already in use**: Another process is using the port. Find it with `lsof -i :6379` or use a different port: `docker run -d --name redis -p 6380:6379 redis:8.4-alpine`
- **Docker not running**: Start Docker Desktop or the Docker daemon
- **Permission denied**: Run with `sudo` or add your user to the docker group
- **Container won't start**: Check logs with `docker logs redis`
#### 2. Start Agent Memory Server
```bash
docker run -d --name agent-memory-server -p 8088:8088 \
-e REDIS_URL=redis://host.docker.internal:6379 \
-e GEMINI_API_KEY=your-gemini-api-key \
-e GENERATION_MODEL=gemini/gemini-2.0-flash \
-e EMBEDDING_MODEL=gemini/text-embedding-004 \
-e FAST_MODEL=gemini/gemini-2.0-flash \
-e SLOW_MODEL=gemini/gemini-2.0-flash \
-e EXTRACTION_DEBOUNCE_SECONDS=5 \
redislabs/agent-memory-server:latest \
agent-memory api --host 0.0.0.0 --port 8088 --task-backend=asyncio
```
> **Configuration Options:**
> - **LLM Provider**: Agent Memory Server uses [LiteLLM](https://docs.litellm.ai/) and supports 100+ providers (OpenAI, Gemini, Anthropic, AWS Bedrock, Ollama, etc.). Set the appropriate environment variables for your provider (e.g., `GEMINI_API_KEY`, `GENERATION_MODEL=gemini/gemini-2.0-flash`). See the [Agent Memory Server LLM Providers docs](https://redis.github.io/agent-memory-server/llm-providers/) for details.
> - **Model Configuration**: Set `GENERATION_MODEL`, `FAST_MODEL` (for quick tasks like extraction), and `SLOW_MODEL` (for complex tasks) to your preferred models. All default to OpenAI models if not specified.
> - **Memory Extraction Debounce**: `EXTRACTION_DEBOUNCE_SECONDS` controls how long to wait before extracting memories from a conversation (default: 300 seconds). Lower values (e.g., 5) provide faster memory extraction, while higher values reduce API calls.
> - **Embedding Models**: Agent Memory Server also uses LiteLLM for embeddings. For local/offline embeddings, use Ollama (e.g., `EMBEDDING_MODEL=ollama/nomic-embed-text`, `REDISVL_VECTOR_DIMENSIONS=768`). See [Embedding Providers docs](https://redis.github.io/agent-memory-server/embedding-providers/) for all options.
**See detailed setup guides:**
- [Redis Setup Guide](docs/redis-setup.md) - All Redis deployment options
- [Agent Memory Server Setup](docs/agent-memory-server-setup.md) - Complete configuration
- [Integration Guide](docs/integration-guide.md) - End-to-end setup with code examples
---
## Quick Start
### Two-Tier Memory Architecture
Uses both working memory (session-scoped) and long-term memory (persistent):
```python
from google.adk import Agent
from google.adk.runners import Runner
from adk_redis.memory import RedisLongTermMemoryService, RedisLongTermMemoryServiceConfig
from adk_redis.sessions import (
RedisWorkingMemorySessionService,
RedisWorkingMemorySessionServiceConfig,
)
# Configure session service (Tier 1: Working Memory)
session_config = RedisWorkingMemorySessionServiceConfig(
api_base_url="http://localhost:8088", # Agent Memory Server URL
default_namespace="my_app",
model_name="gpt-4o", # Model for auto-summarization
context_window_max=8000, # Trigger summarization at this token count
)
session_service = RedisWorkingMemorySessionService(config=session_config)
# Configure memory service (Tier 2: Long-Term Memory)
memory_config = RedisLongTermMemoryServiceConfig(
api_base_url="http://localhost:8088",
default_namespace="my_app",
extraction_strategy="discrete", # Extract individual facts
recency_boost=True, # Prioritize recent memories in search
)
memory_service = RedisLongTermMemoryService(config=memory_config)
# Create agent
agent = Agent(
name="memory_agent",
model="gemini-2.0-flash",
instruction="You are a helpful assistant with long-term memory.",
)
# Create runner with both services
runner = Runner(
agent=agent,
app_name="my_app",
session_service=session_service,
memory_service=memory_service,
)
```
**How it works:**
1. **Working Memory**: Stores session messages, state, and handles auto-summarization
2. **Background Extraction**: Automatically promotes important information to long-term memory
3. **Long-Term Memory**: Provides semantic search across all sessions for relevant context
4. **Recency Boosting**: Prioritizes recent memories while maintaining access to historical knowledge
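Recency boosting (step 4) can be pictured as blending vector similarity with an age-based decay, so that fresh memories rank higher at equal relevance. This is an illustrative formula only, not the Agent Memory Server's actual scoring function:

```python
import math

def boosted_score(similarity: float, age_days: float,
                  boost: float = 0.2, half_life_days: float = 30.0) -> float:
    # Blend vector similarity with an exponential recency decay:
    # a brand-new memory gets the full `boost`, an old one almost none.
    recency = math.exp(-math.log(2) * age_days / half_life_days)
    return similarity + boost * recency

fresh = boosted_score(0.80, age_days=0)    # full boost: 0.80 + 0.20 = 1.00
stale = boosted_score(0.80, age_days=365)  # boost has decayed away
print(fresh > stale)
```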
### Vector Search Tools
RAG with semantic search using RedisVL:
```python
from google.adk import Agent
from redisvl.index import SearchIndex
from redisvl.utils.vectorize import HFTextVectorizer
from adk_redis.tools import RedisVectorSearchTool, RedisVectorQueryConfig
# Create a vectorizer (HuggingFace, OpenAI, Cohere, Mistral, Voyage AI, etc.)
vectorizer = HFTextVectorizer(model="sentence-transformers/all-MiniLM-L6-v2")
# Connect to existing search index
index = SearchIndex.from_existing("products", redis_url="redis://localhost:6379")
# Create the search tool with custom name and description
search_tool = RedisVectorSearchTool(
index=index,
vectorizer=vectorizer,
config=RedisVectorQueryConfig(
vector_field_name="embedding",
return_fields=["name", "description", "price"],
num_results=5,
),
# Customize the tool name and description for your domain
name="search_product_catalog",
description="Search to find relevant products in the product catalog by description semantic similarity",
)
# Use with an ADK agent
agent = Agent(
name="search_agent",
model="gemini-2.0-flash",
instruction="Help users find products using semantic search.",
tools=[search_tool],
)
```
**Customizing Tool Prompts:**
All search tools (`RedisVectorSearchTool`, `RedisHybridSearchTool`, `RedisTextSearchTool`, `RedisRangeSearchTool`) support custom `name` and `description` parameters to make them domain-specific:
```python
# Example: Medical knowledge base
medical_search = RedisVectorSearchTool(
index=medical_index,
vectorizer=vectorizer,
name="search_medical_knowledge",
description="Search medical literature and clinical guidelines for relevant information",
)
# Example: Customer support FAQ
faq_search = RedisTextSearchTool(
index=faq_index,
name="search_support_articles",
description="Search customer support articles and FAQs by keywords",
)
# Example: Legal document search
legal_search = RedisHybridSearchTool(
index=legal_index,
vectorizer=vectorizer,
name="search_legal_documents",
description="Search legal documents using both semantic similarity and keyword matching",
)
```
> **Note:** RedisVL supports many vectorizers including OpenAI, HuggingFace, Cohere, Mistral, Voyage AI, and more. See [RedisVL documentation](https://docs.redisvl.com/) for the full list.
> **Future Enhancement:** We plan to add native support for ADK embeddings classes through a union type or wrapper, allowing seamless integration with ADK's embedding infrastructure alongside RedisVL vectorizers.
---
## Features Overview
### Memory Services
Implements ADK's `BaseMemoryService` interface for persistent agent memory:
| Feature | Description |
|---------|-------------|
| **Semantic Search** | Vector-based similarity search across all sessions |
| **Recency Boosting** | Prioritize recent memories while maintaining historical access |
| **Auto-Extraction** | LLM-based extraction of facts, preferences, and episodic memories |
| **Cross-Session Retrieval** | Access knowledge from any previous conversation |
| **Background Processing** | Non-blocking memory promotion and indexing |
**Implementation:** `RedisLongTermMemoryService`
### Session Services
Implements ADK's `BaseSessionService` interface for conversation management:
| Feature | Description |
|---------|-------------|
| **Message Storage** | Persist conversation messages and session state |
| **Auto-Summarization** | Automatic summarization when context window limits are exceeded |
| **Memory Promotion** | Trigger background extraction to long-term memory |
| **State Management** | Store and retrieve arbitrary session state |
| **Token Tracking** | Monitor context window usage |
**Implementation:** `RedisWorkingMemorySessionService`
### Search Tools
Four specialized search tools for different RAG use cases:
| Tool | Best For | Key Features |
|------|----------|--------------|
| **`RedisVectorSearchTool`** | Semantic similarity | Vector embeddings, KNN search, metadata filtering |
| **`RedisHybridSearchTool`** | Combined search | Vector + text search, Redis 8.4+ native support, aggregation fallback |
| **`RedisRangeSearchTool`** | Threshold-based retrieval | Distance-based filtering, similarity radius |
| **`RedisTextSearchTool`** | Keyword search | Full-text search, no embeddings required |
> All search tools support multiple vectorizers (OpenAI, HuggingFace, Cohere, Mistral, Voyage AI, etc.) and advanced filtering.
### Semantic Caching
Reduce latency and costs with similarity-based caching:
| Feature | Description |
|---------|-------------|
| **LLM Response Cache** | Cache LLM responses and return similar cached results |
| **Tool Result Cache** | Cache tool execution results to avoid redundant calls |
| **Similarity Threshold** | Configurable distance threshold for cache hits |
| **TTL Support** | Time-based cache expiration |
| **Multiple Vectorizers** | Support for OpenAI, HuggingFace, local embeddings, etc. |
**Implementations:** `LLMResponseCache`, `ToolCache`
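Similarity-threshold caching boils down to: embed the query, find the nearest cached entry, and return it only if it falls within the configured distance. The stdlib sketch below illustrates that lookup with cosine distance; the function names are hypothetical and do not reflect the `LLMResponseCache` / `ToolCache` internals:

```python
import math

def cosine_distance(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return 1 - dot / (math.hypot(*a) * math.hypot(*b))

def cache_lookup(cache, query_vec, threshold=0.1):
    # cache: list of (embedding, cached_response) pairs.
    best = min(cache, key=lambda e: cosine_distance(e[0], query_vec), default=None)
    if best and cosine_distance(best[0], query_vec) <= threshold:
        return best[1]  # cache hit: skip the LLM call
    return None         # miss: call the LLM, then insert the result

cache = [((1.0, 0.0), "cached answer")]
print(cache_lookup(cache, (0.99, 0.01)))  # near-duplicate query -> hit
print(cache_lookup(cache, (0.0, 1.0)))    # unrelated query -> miss
```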
---
## Requirements
- **Python** 3.10, 3.11, 3.12, or 3.13
- **Google ADK** 1.0.0+
- **For memory/session services:** [Redis Agent Memory Server](https://github.com/redis/agent-memory-server)
- **For search tools:** Redis 8.4+ or Redis Cloud with Search capability
---
## Examples
Complete working examples with ADK web runner integration:
| Example | Description | Features |
|---------|-------------|----------|
| **[simple_redis_memory](examples/simple_redis_memory/)** | Agent with two-tier memory architecture | Working memory, long-term memory, auto-summarization, semantic search |
| **[semantic_cache](examples/semantic_cache/)** | Semantic caching for LLM responses | Vector-based cache, reduced latency, cost optimization, local embeddings |
| **[redis_search_tools](examples/redis_search_tools/)** | RAG with search tools | Vector search, hybrid search, range search, text search |
| **[travel_agent_memory_hybrid](examples/travel_agent_memory_hybrid/)** | Travel agent with framework-managed memory | Redis session + memory services, automatic memory extraction, web search, calendar export, itinerary planning |
| **[travel_agent_memory_tools](examples/travel_agent_memory_tools/)** | Travel agent with LLM-controlled memory | Memory tools only (search/create/update/delete), in-memory session, web search, calendar export, itinerary planning |
### Travel Agent Examples Comparison
Both examples use **Redis Agent Memory Server** for long-term memory persistence. The difference is in how they integrate with ADK:
| Aspect | `travel_agent_memory_hybrid` | `travel_agent_memory_tools` |
|--------|------------------------------|----------------------------|
| **How to Run** | `python main.py` (custom FastAPI) | `adk web .` (standard ADK CLI) |
| **Session Service** | `RedisWorkingMemorySessionService` (Redis-backed, auto-summarization) | ADK default (in-memory) |
| **Memory Service** | `RedisLongTermMemoryService` (ADK's `BaseMemoryService` interface) | Memory tools only (direct Agent Memory Server API calls) |
| **Memory Extraction** | `after_agent_callback` + framework-managed | `after_agent_callback` |
| **Session Sync** | Real-time (every message synced to Agent Memory Server) | End-of-turn (batch sync via `after_agent_callback`) |
| **Auto-Summarization** | Yes, mid-conversation (real-time sync triggers when context exceeded) | Yes, end-of-turn (batch sync triggers when context exceeded) |
| **Best For** | Full ADK service integration (`BaseSessionService` + `BaseMemoryService`) | Tool-based Agent Memory Server integration (no custom services) |
Each example includes:
- Complete runnable code
- ADK web runner integration
- Configuration examples
- Setup instructions
---
## Development
This project follows the [Google Python Style Guide](https://google.github.io/styleguide/pyguide.html), matching the [ADK-Python core](https://github.com/google/adk-python) project conventions.
### Quick Start
```bash
# Clone the repository
git clone https://github.com/redis-developer/adk-redis.git
cd adk-redis
# Install development dependencies
make dev
# Run all checks (format, lint, type-check, test)
make check
```
### Available Commands
```bash
make format # Format code with pyink and isort
make lint # Run ruff linter
make type-check # Run mypy type checker
make test # Run pytest test suite
make coverage # Generate coverage report
```
### Code Quality
See **[CONTRIBUTING.md](CONTRIBUTING.md)** for coding style, type hints, testing, and PR guidelines.
---
## Contributing
Please help us by contributing PRs, opening GitHub issues for bugs or new feature ideas, improving documentation, or increasing test coverage. See the following steps for contributing:
1. [Open an issue](https://github.com/redis-developer/adk-redis/issues) for bugs or feature requests
2. Read [CONTRIBUTING.md](CONTRIBUTING.md) and submit a pull request
3. Improve documentation and examples
---
## License
Apache 2.0 - See [LICENSE](LICENSE) for details.
---
## Helpful Links
### Documentation & Resources
- **[PyPI Package](https://pypi.org/project/adk-redis/)** - Install with `pip install adk-redis`
- **[GitHub Repository](https://github.com/redis-developer/adk-redis)** - Source code and issue tracking
- **[Examples](examples/)** - Complete working examples with ADK web runner
- **[Contributing Guide](CONTRIBUTING.md)** - How to contribute to the project
### Setup Guides
- **[Redis Setup Guide](docs/redis-setup.md)** - All Redis deployment options
- **[Agent Memory Server Setup](docs/agent-memory-server-setup.md)** - Complete configuration
- **[Integration Guide](docs/integration-guide.md)** - End-to-end setup with code examples
### Related Projects
- **[Google ADK](https://github.com/google/adk-python)** - Agent Development Kit framework
- **[Redis Agent Memory Server](https://github.com/redis/agent-memory-server)** - Memory layer for AI agents
- **[RedisVL](https://docs.redisvl.com/)** - Redis Vector Library documentation
- **[Redis](https://redis.io/)** - Redis 8.4+ with Search, JSON, and vector capabilities
---
| text/markdown | null | Redis Applied AI <applied.ai@redis.com> | null | null | null | adk, agent, llm, memory, redis, sessions, vector-search | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"License :: OSI Approved :: Apache Software License",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Langu... | [] | null | null | >=3.10 | [] | [] | [] | [
"google-adk>=1.0.0",
"pydantic>=2.0.0",
"agent-memory-client>=0.2.0; extra == \"all\"",
"redisvl>=0.5.0; extra == \"all\"",
"fakeredis>=2.20.0; extra == \"dev\"",
"isort>=5.13.0; extra == \"dev\"",
"mypy>=1.8.0; extra == \"dev\"",
"pre-commit>=3.6.0; extra == \"dev\"",
"pyink>=24.3.0; extra == \"dev... | [] | [] | [] | [
"Homepage, https://github.com/redis-developer/adk-redis",
"Documentation, https://github.com/redis-developer/adk-redis#readme",
"Repository, https://github.com/redis-developer/adk-redis",
"Issues, https://github.com/redis-developer/adk-redis/issues",
"Changelog, https://github.com/redis-developer/adk-redis/... | twine/6.1.0 CPython/3.13.7 | 2026-02-18T23:28:33.202262 | adk_redis-0.0.3.tar.gz | 36,223 | 1c/71/2b21ddea8a65d1820843f60b0146e6c21b680c999026cc27ad225e454753/adk_redis-0.0.3.tar.gz | source | sdist | null | false | d53a84093bf6beee11075bf8e0a0e1ca | 2c33e7f404cea446cdc68fb0dd589846c50d09043458533fd329e8f3f5c9cc97 | 1c712b21ddea8a65d1820843f60b0146e6c21b680c999026cc27ad225e454753 | Apache-2.0 | [
"LICENSE"
] | 283 |
2.4 | sdmc-tools | 0.0.6 | Helper utilities for SDMC ad-hoc data processing requests. | # sdmc tools
This package contains a collection of functions designed for the standard cleaning and processing of assay data by SDMC before the data is shared with stats.
These include
- methods and functions for standardizing a dataset and merging on ldms
- methods for pulling ldms data from Delphi
- command line tools for creating and compiling a data dictionary (.xlsx) and documentation (.md + .html)
## Installation
If you would like to use the `access_ldms` module, you will need to first run the following:
```
sudo apt update
sudo apt install libpq-dev
```
This installs the `libpq` C library that `psycopg` requires. If you do not need the `access_ldms` methods, you can skip this step; note, however, that importing `sdmc_tools.access_ldms` will then throw errors.
After doing this, the package can be installed using pip: `pip install sdmc-tools`.
- Python >= 3.8 is required; these functions may break with earlier Python versions.
- The following packages are dependencies:
- docutils
- pandas
- numpy
- PyYAML
- typing
- datetime
- openpyxl
- xlsxwriter
- psycopg
## Usage
### Pulling ldms data
---
Python functions for connecting to Delphi and pulling LDMS data.
You will need to save a config file to the filepath `~/.config/sdmc-tools/config.yaml`. Do NOT add this to a git repo, as it will contain a plain-text password. Populate the file with:
```
username: 'MY_DELPHI/HUTCH_USERNAME'
password: 'MY_DELPHI_PW'
```
The available methods include:
`pull_one_protocol`:
```
import sdmc_tools.access_ldms as access_ldms
ldms_hvtn = access_ldms.pull_one_protocol('hvtn', 130)
ldms_covpn = access_ldms.pull_one_protocol('covpn', 3008)
```
`pull_multiple_protocols`:
```
import sdmc_tools.access_ldms as access_ldms
ldms_hvtn_130_140 = access_ldms.pull_multiple_protocols('hvtn', [130, 140])
ldms_covpn_3008_5001 = access_ldms.pull_multiple_protocols('covpn', [3008, 5001])
ldms_hvtn = access_ldms.pull_multiple_protocols('hvtn', 'all') # pull ldms for all hvtn protocols. this will take longer.
```
### Data processing
---
Python functions and constants for data processing / prep.
The primary function is `standard_processing`:
```
import sdmc_tools.process as sdmc
outputs = sdmc.standard_processing(
input_data = input_data,
input_data_path="/path/to/input_data.xlsx",
guspec_col='guspec',
network='hvtn',
metadata_dict=hand_appended_metadata,
ldms=ldms
)
```
To see the function signature and documentation, you can run `sdmc.standard_processing?` in IPython, or `help(sdmc.standard_processing)` in any Python interpreter. Given `input_data`, the function does the following:
- merges on ldms, renames columns with standard labels
- adds a spectype column
- adds a drawdt column, drops drawdm, drawdd, drawdy
- for each (key,value) in the metadata dict creates a column of the name 'key' with values 'value'
- standardizes the 'ptid' and 'protocol' columns to be int-formatted strings
- merges on columns pertaining to sdmc processing
- rearranges columns into a standardized order
- converts column names "From This" -> "to_this" format
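The column-name conversion in the last step can be pictured with a small standalone sketch (an illustration of the naming convention only, not sdmc's actual implementation):

```python
def to_snake(name: str) -> str:
    # "From This" -> "from_this": lowercase and replace spaces with underscores
    return name.strip().lower().replace(" ", "_")

print(to_snake("Assay Type"))  # assay_type
```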
See https://github.com/beatrixh/sdmc-tools/blob/main/src/sdmc_tools/constants.py for the list of accessible constants.
A usage example is included below.
```
import pandas as pd
import sdmc_tools.process as sdmc # this contains the main data processing utilities
import sdmc_tools.access_ldms as access_ldms
ldms = access_ldms.pull_one_protocol('hvtn', 302)
```
*ldms*

*input_data*

```
hand_appended_metadata = {
'network': 'HVTN',
'upload_lab_id': 'N4',
'assay_lab_name': 'Name of Lab Here',
'instrument': 'SpectraMax',
'assay_type': 'Neutralizing Antibody (NAb)',
'specrole': 'Sample',
}
outputs = sdmc.standard_processing(
input_data = input_data, #a pandas dataframe containing input data
input_data_path="/path/to/input_data.xlsx", #the path to the original input data
guspec_col='guspec', #the name of the column containing guspecs within the input data
network='hvtn', #the relevant network ('hvtn' or 'covpn')
metadata_dict=hand_appended_metadata, #a dictionary of additional data to append as columns
ldms=ldms #a pandas dataframe containing the ldms columns we want to merge from
)
```
*outputs*


### Data dictionary creation
---
This is a command line tool; it creates a data dictionary for a set of processed outputs.
`gen-data-dict` takes two positional arguments:
- the filepath where the outputs are stored,
- and the desired name of the resulting data dict.
```
gen-data-dict /path/to/outputs.txt name_of_dictionary.xlsx
```
If the dictionary does not already exist in the directory where the outputs live, it will then create
- an xlsx sheet in the same directory as the outputs, with a row for each variable in the outputs, and corresponding definitions for the standard vars. The variables unique to the specific outputs will need to be hand-edited.
- a .txt log in the same directory with notes about any non-standard variables that have been included, or any standard variables that have been omitted.
If a dictionary of the given name already exists, it will be updated to reflect the variables in the output sheet, and the log will note the diff.
### README creation
---
This is a command line tool; given a set of processed outputs, it creates a .md file with documentation for how the outputs were created, and a corresponding .html of the compiled .md.
`gen-readme` takes one positional argument:
- the filepath to the `paths.yaml` from which it pulls the input and output filepaths
```
gen-readme /path/to/paths.yaml
```
It will then create
- a markdown file describing how the outputs were created, including notes on where the inputs are saved. Note that it assumes the processing was standard, so the file will need to be corrected for any nonstandard processing. It searches the output directory for the processed data outputs, a pivot summary of the samples, and the processing code; if it doesn't find these there, it omits notes on them from the markdown.
- an html file created via compiling the above markdown
`regen-readme` takes two positional arguments:
- a filepath to the markdown to compile
- the filepath to the data dictionary it should pull in. Eg., `/path/to/data_dict.xlsx`.
```
regen-readme /path/to/my_markdown.md /path/to/data_dict.xlsx
```
It will then compile into an html file in the same directory and of the same name. If such an html file already exists, it will be overwritten.
| text/markdown | null | Beatrix Haddock <beatrix.haddock@gmail.com> | null | null | null | null | [
"Programming Language :: Python :: 3",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent"
] | [] | null | null | >=3.8 | [] | [] | [] | [
"docutils",
"pandas",
"numpy",
"PyYAML",
"typing",
"datetime",
"openpyxl",
"xlsxwriter",
"psycopg",
"requires<=0.4"
] | [] | [] | [] | [
"Homepage, https://github.com/beatrixh/sdmc-tools"
] | twine/6.2.0 CPython/3.10.13 | 2026-02-18T23:27:51.590563 | sdmc_tools-0.0.6.tar.gz | 160,280 | 74/07/48a09d8271b1e122ce0d02e06f853bce7a57b70734a92475b241ad474dfa/sdmc_tools-0.0.6.tar.gz | source | sdist | null | false | cade668ce545dba7663f11db94e87742 | 0b072dd5a37b1d949d23f90fabbe4c7fc2ab7ce0c7a23d4184bc9ae9ecc7cfcd | 740748a09d8271b1e122ce0d02e06f853bce7a57b70734a92475b241ad474dfa | null | [
"LICENSE"
] | 262 |
2.4 | pytolino | 2.6 | client for tolino cloud | UPDATE
===========
Because of heavy anti-bot protection, a fully automatic login is no longer possible. One can, however, reuse the authorization token after a manual login. The token can then be refreshed automatically, for example with a cronjob (at least once per hour).
pytolino
===================
A Python client to interact (login, upload, delete ebooks, etc.) with the tolino cloud. Thanks to https://github.com/darkphoenix/tolino-calibre-sync for the inspiration.
One difference is that I aim to create a Python package from it and publish it on PyPI, so that this module can be used in other projects.
Installation
============
.. code-block:: bash
pip install pytolino
Usage
=====
First, log in manually and use the browser's inspector tool to examine the requests. After connecting to the tolino digital library, there is a POST request (named token). From its response, copy the value of the refresh token (and the expiration time in seconds). Then, in the header of a PATCH request, find the device_id number.
You can then store the token:
.. code-block:: python
from pytolino.tolino_cloud import Client
partner = 'orellfuessli'
account_name = 'any name for reference'
client = Client(partner)
print('login on your browser and get the token.')
refresh_token = input('refresh token:\n')
expires_in = int(input('expires_in:\n'))
hardware_id = input('hardware id:\n')
Client.store_token(
account_name, refresh_token, expires_in, hardware_id)
Then, get a new access token. It will expire in one hour, so you might want to create a crontab job to refresh it regularly:
.. code-block:: python
from pytolino.tolino_cloud import Client
partner = 'orellfuessli'
account_name = 'any name for reference'
client = Client(partner)
client.get_new_token(account_name)
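A crontab entry can automate the hourly refresh; for example (the script path is illustrative, and the script would contain the refresh snippet above):

.. code-block:: bash

    # refresh the tolino access token at minute 0 of every hour
    0 * * * * /usr/bin/python3 /home/user/refresh_tolino_token.py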
After this, instead of logging in, you only need to retrieve the access token stored on disk; then you can upload books, delete them, etc.:
.. code-block:: python
from pytolino.tolino_cloud import Client
partner = 'orellfuessli'
account_name = 'any name for reference'
client = Client(partner)
client.retrieve_token(account_name)
ebook_id = client.upload(EPUB_FILE_PATH)  # returns a unique id that can be used for reference
client.add_collection(ebook_id, 'science fiction')  # add the previous book to the collection science fiction
client.add_cover(ebook_id, cover_path)  # upload a cover for the book
client.delete_ebook(ebook_id)  # delete the previously uploaded ebook
inventory = client.get_inventory()  # get a list of all the books on the cloud and their metadata
client.upload_metadata(ebook_id, title='my title', author='someone')  # you can upload various kinds of metadata
To get a list of the supported partners:
.. code-block:: python
from pytolino.tolino_cloud import PARTNERS
print(PARTNERS)
For now, only orellfuessli is supported, but it should be easy to include the others (a manual login is always needed, though).
Features
========
* upload ebook
* delete ebook from the cloud
* add a book to a collection
* download inventory
* upload metadata
License
=======
The project is licensed under GNU GENERAL PUBLIC LICENSE v3.0
| text/x-rst | Imam Usmani | null | null | null | null | null | [
"Intended Audience :: Developers",
"License :: OSI Approved :: GNU General Public License v3 (GPLv3)",
"Natural Language :: English",
"Programming Language :: Python :: 3",
"Operating System :: POSIX :: Linux"
] | [] | null | null | >=3.7 | [] | [] | [] | [
"requests",
"mechanize",
"curl_cffi",
"varboxes",
"seleniumbase",
"pytest; extra == \"dev\"",
"flake8; extra == \"dev\"",
"ipython; extra == \"dev\"",
"sphinx; extra == \"dev\"",
"build; extra == \"dev\"",
"twine; extra == \"dev\"",
"sphinx-rtd-theme; extra == \"dev\""
] | [] | [] | [] | [
"Source Code, https://github.com/ImamAzim/pytolino",
"Documentation, https://pytolino.readthedocs.io/en/latest/index.html"
] | twine/6.1.0 CPython/3.12.3 | 2026-02-18T23:27:32.441068 | pytolino-2.6.tar.gz | 23,235 | 07/5c/81cc029d0fabea57b25ce9be091e9016aa6eeafdee1858c434c9dd185641/pytolino-2.6.tar.gz | source | sdist | null | false | 68ac8dbd7197c5fcf2f0e02b5bb306f1 | 2a07dda97a853aa8b813cf2addbb9b6ae3634ce5f749e9795550d3a04eff9404 | 075c81cc029d0fabea57b25ce9be091e9016aa6eeafdee1858c434c9dd185641 | null | [
"LICENSE"
] | 266 |
2.4 | gms-mcp | 0.1.73 | GameMaker CLI + MCP server toolset | # GameMaker MCP Tools
[](https://github.com/Ampersand-Game-Studios/gms-mcp/actions/workflows/ci.yml)
## Project Features
- `gms`: a Python CLI for GameMaker project operations (asset creation, maintenance, runner, etc).
- `gms-mcp`: an MCP server that exposes the same operations as MCP tools (Cursor is the primary example client).
- **TCP Bridge (optional)**: live, bidirectional game communication (commands + log capture) via `gm_bridge_install`, `gm_run_command`, and `gm_run_logs`. See `documentation/BRIDGE.md`.
- **Reliability-First Architecture**: Custom exception hierarchy, typed result objects, and an execution policy manager replace monolithic exit calls and raw dictionaries. This enables structured error handling, consistent tool integration, and optimized performance (Fast assets, Resilient runner).
- **Health & Diagnostics**: `gm_mcp_health` provides a one-click diagnostic tool to verify the local GameMaker environment. `gm_diagnostics` provides structured, machine-readable project diagnostics (JSON, naming, orphans, references) compatible with IDE problem panels.
- **Runtime Management**: `gm_runtime_list`, `gm_runtime_pin`, and `gm_runtime_verify` allow precise control over which GameMaker runtime version is used for builds and execution.
- **Cross-Platform Runner Defaults**: `gm_run` / `gm_compile` now default to the host OS target platform (`macOS`, `Linux`, or `Windows`) when not explicitly provided.
- **macOS Runner Launch Support**: temp-output runs now detect and launch macOS `.app` bundles by resolving the executable in `Contents/MacOS/`.
- **GML Symbol Indexing & Code Intelligence**: `gm_build_index`, `gm_find_definition`, `gm_find_references`, and `gm_list_symbols` provide deep, fast, and filtered code analysis (definitions and cross-file references).
- **Introspection**: complete project inspection with support for all asset types (including extensions and datafiles).
- **MCP Resources**: addressable project index and asset graph for high-performance agent context loading.
- `gms-mcp-init`: generates shareable MCP config files for a workspace. Now auto-detects environment variables like `GMS_MCP_GMS_PATH` to include in the generated config.
## Install (recommended: pipx)
```bash
pipx install gms-mcp
```
PowerShell equivalent:
```powershell
pipx install gms-mcp
```
## Claude Code Plugin
For Claude Code users, install the plugin for the best experience:
```
/install-plugin github:Ampersand-Game-Studios/gms-mcp
```
This provides:
- **Skills**: 18 workflow guides + 7 reference docs
- **Hooks**: Automatic update checks and error notifications
- **MCP Server**: Auto-configured via uvx (no pip install needed)
### For Other Tools (Cursor, VSCode, OpenClaw, etc.)
```bash
pip install gms-mcp
gms-mcp-init --cursor # or --vscode, --windsurf, --openclaw, etc.
```
For skill packs, OpenClaw users can install to user or workspace scope:
```bash
gms skills install --openclaw # user scope: ~/.openclaw/skills/
gms skills install --openclaw --project # workspace scope: ./skills/
```
Note: `.openclaw/openclaw.json` is for settings. Workspace skills are loaded from `./skills/`.
### For Codex
```bash
gms-mcp-init --codex
```
This writes a workspace `.codex/mcp.toml` file and prints the `codex mcp add` registration command.
Global config mode writes directly to `~/.codex/config.toml` (merging server entries).
Use the printed command directly, or copy `.codex/mcp.toml` content into the `[mcp_servers]` section of your `~/.codex/config.toml`.
Codex helpers:
- `gms-mcp-init --codex-check` prints detected Codex config paths and active server entry.
- `gms-mcp-init --codex-check-json` prints the same check output in machine-readable JSON.
- `gms-mcp-init --codex-dry-run-only` prints final merged payloads for workspace + global Codex config without writing files.
- `gms-mcp-init --codex-app-setup` runs one-shot Codex app setup: writes workspace config, previews global merge, then prints check + readiness summary.
## Local Development Setup
If you are working on the `gms-mcp` codebase itself, follow these steps to set up a local development environment:
1. **Clone and install in editable mode**:
```bash
git checkout dev
python3.12 -m venv .venv
source .venv/bin/activate
python3.12 -m pip install -e ".[dev]"
```
`gms-mcp` requires Python `3.10+`; we recommend Python `3.12` for local development.
2. **Run the full local test suite**:
```bash
PYTHONPATH=src python3.12 cli/tests/python/run_all_tests.py
```
3. **Initialize local and global MCP servers for testing**:
We recommend setting up two separate MCP server configurations in Cursor to test your changes:
* **Global (`gms-global`)**: For general use across all your GameMaker projects.
* **Local (`gms-local`)**: Specifically for testing your current changes to the server.
Run these commands from the project root (zsh/bash):
```bash
# Global setup (names it 'gms-global' in Cursor)
gms-mcp-init --cursor-global --server-name gms-global --mode python-module --python python3 --non-interactive
# Local setup (names it 'gms-local' in Cursor)
gms-mcp-init --cursor --server-name gms-local --mode python-module --python python3 --non-interactive
```
PowerShell equivalent:
```powershell
# Global setup (names it 'gms-global' in Cursor)
gms-mcp-init --cursor-global --server-name gms-global --mode python-module --python python --non-interactive
# Local setup (names it 'gms-local' in Cursor)
gms-mcp-init --cursor --server-name gms-local --mode python-module --python python --non-interactive
```
4. **Verify in Cursor**:
Go to **Cursor Settings > Features > MCP** to see your new servers. You may need to click "Reload" or restart Cursor to see changes.
## Publishing (maintainers)
Publishing is automated via GitHub Actions (PyPI Trusted Publishing) on every push to `main` and on tags `v*`.
See `RELEASING.md` for the one-time PyPI setup and the first manual upload helper scripts.
## CI Coverage
- Core CI runs on Ubuntu and Windows across Python `3.11`-`3.13`.
- Runner/session regression tests also run on macOS across Python `3.11`-`3.13`, including a mockless smoke test that builds a real `.app` bundle structure and validates executable path resolution.
### Quality Reports
Quality reports are generated during CI and published as `quality-reports-*` artifacts.
- `TEST_COVERAGE_REPORT.md`
- `MCP_TOOL_VALIDATION_REPORT.md`
- `coverage.xml`
- `pytest_results.xml`
- `quality_summary.json`
You can regenerate these locally with:
```bash
python scripts/generate_quality_reports.py
```
Use `--skip-test-run` to regenerate from existing CI artifacts:
```bash
python scripts/generate_quality_reports.py --skip-test-run --junit-xml build/reports/pytest_results.xml --coverage-xml build/reports/coverage.xml
```
## X (Twitter) posting on `main`
This repo can post to X automatically when `main` is updated.
- **Personality / voice**: `.github/x-personality.md`
- **Tweet staging file**: `.github/next_tweet.txt`
### How it works
- When a commit lands on `main`, GitHub Actions reads `.github/next_tweet.txt`.
- If it contains the placeholder text (or is empty), it **skips posting**.
- If it contains a real tweet, it posts to X and then **clears the file** back to the placeholder.
### Maintainer flow (dev -> pre-release -> main)
Because this repo promotes changes `dev` -> `pre-release` -> `main`, prepare the tweet during the `pre-release` -> `main` PR:
- Update `.github/next_tweet.txt` with the tweet (following `.github/x-personality.md`)
- Merge to `main`
## Use with a GameMaker project (multi-project friendly)
Run this inside each GameMaker project workspace (or repo) to generate config:
```bash
gms-mcp-init --cursor
```
This writes `.cursor/mcp.json` and attempts to auto-detect the `.yyp` location to set `GM_PROJECT_ROOT`.
For a one-time setup that works across many projects, write Cursor's global config instead:
```bash
gms-mcp-init --cursor-global
```
Generate a Codex config from the current workspace:
```bash
gms-mcp-init --codex
```
Generate a global Codex entry in `~/.codex/config.toml`:
```bash
gms-mcp-init --codex-global
```
Global mode merges with existing entries so it is safe to keep multiple MCP servers in the same file.
Inspect current Codex config resolution:
```bash
gms-mcp-init --codex-check
```
Preview final merged Codex payloads for local + global without writing:
```bash
gms-mcp-init --codex-dry-run-only
```
Print Codex check output as JSON (useful for app automation):
```bash
gms-mcp-init --codex-check-json
```
One-shot Codex app setup (recommended for new workspaces):
```bash
gms-mcp-init --codex-app-setup
```
### Codex App Quickstart
1. Run `gms-mcp-init --codex-app-setup` in your GameMaker workspace.
2. Confirm the output says `Ready for Codex app: yes`.
3. If needed, run `gms-mcp-init --codex-check-json` and verify `active.scope` is `workspace`.
4. Use `gms-mcp-init --codex-dry-run-only` before changing global config to preview merged TOML safely.
## Canonical Client Workflow
All clients now support the same canonical action surface:
```bash
gms-mcp-init \
--client <cursor|codex|claude-code|claude-desktop|antigravity|gemini|vscode|windsurf|openclaw|generic> \
--scope <workspace|global> \
--action <setup|check|check-json|app-setup>
```
Optional:
- `--config-path /custom/path` to override default config location
- `--safe-profile` to enforce conservative env defaults
Examples:
```bash
# Cursor setup + readiness check
gms-mcp-init --client cursor --scope workspace --action app-setup
# Codex machine-readable readiness
gms-mcp-init --client codex --scope workspace --action check-json
# Claude Desktop global plugin sync
gms-mcp-init --client claude-desktop --scope global --action setup
# Gemini alias (Antigravity path)
gms-mcp-init --client gemini --scope global --action app-setup
# OpenClaw app setup + workspace skills install
gms-mcp-init --client openclaw --scope workspace --action app-setup \
--openclaw-install-skills --openclaw-skills-project
```
For parity status and supported defaults, see `documentation/CLIENT_SUPPORT_MATRIX.md`.
Generate example configs for other MCP-capable clients:
```bash
gms-mcp-init --vscode --windsurf --antigravity --openclaw
```
Set up Antigravity global config (recommended):
```bash
gms-mcp-init --antigravity-setup
```
This merges into `~/.gemini/antigravity/mcp_config.json`, writes atomically, creates a timestamped backup on overwrite, and enables a conservative safety profile by default:
- `GMS_MCP_ENABLE_DIRECT=0`
- `GMS_MCP_REQUIRE_DRY_RUN=1`
Check Antigravity readiness:
```bash
gms-mcp-init --antigravity-check
```
Print Antigravity check output as JSON:
```bash
gms-mcp-init --antigravity-check-json
```
One-shot Antigravity app setup:
```bash
gms-mcp-init --antigravity-app-setup
```
Use a custom Antigravity config path:
```bash
gms-mcp-init --antigravity-setup --antigravity-config-path /path/to/mcp_config.json
```
Opt in to the conservative safety profile for Antigravity example configs too:
```bash
gms-mcp-init --antigravity --safe-profile
```
When `GMS_MCP_REQUIRE_DRY_RUN=1` is set, you can allow specific destructive tools with:
```bash
export GMS_MCP_REQUIRE_DRY_RUN_ALLOWLIST=gm_asset_delete,gm_workflow_delete
```
Or generate everything at once:
```bash
gms-mcp-init --all
```
## Monorepos / multiple `.yyp`
If multiple `.yyp` projects are detected in a workspace:
- `gms-mcp-init` will warn and (when interactive) prompt you to pick one.
- In non-interactive environments, it defaults `GM_PROJECT_ROOT` to `${workspaceFolder}` (safe).
Force a specific project root:
```bash
gms-mcp-init --cursor --gm-project-root path/to/project
```
Preview output without writing files:
```bash
gms-mcp-init --cursor --dry-run
```
## Code Intelligence & Introspection
The MCP server provides comprehensive project analysis capabilities:
### GML Symbol Indexing (`gm_build_index`)
Build a high-performance index of all functions, enums, macros, and global variables in the project. This is required for advanced code intelligence tools.
### Symbol Definition (`gm_find_definition`)
Find the exact location and docstrings for any GML symbol in your project.
### Find References (`gm_find_references`)
Search for all usages of a specific function or variable across your entire codebase.
### List Symbols (`gm_list_symbols`)
List all project symbols with filtering by type, name substring, or file path.
### Asset Listing (`gm_list_assets`)
List all assets in your project, optionally filtered by type:
- **Supported types**: script, object, sprite, room, sound, font, shader, path, timeline, tileset, animcurve, sequence, note, folder, **extension**, **includedfile** (datafiles)
### Asset Reading (`gm_read_asset`)
Read the complete `.yy` JSON metadata for any asset by name or path.
### Reference Search (`gm_search_references`)
Search for patterns across project files with:
- **Scopes**: `all`, `gml`, `yy`, `scripts`, `objects`, `extensions`, `datafiles`
- **Modes**: literal string or regex
- **Options**: case sensitivity, max results
### Asset Graph (`gm_get_asset_graph`)
Build a dependency graph of assets with two modes:
- **Shallow (fast)**: Parses `.yy` files for structural references (parent objects, sprites, etc.)
- **Deep (complete)**: Also scans all GML code for runtime references like `instance_create`, `sprite_index`, `audio_play_sound`, etc.
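The idea behind the deep scan can be illustrated with a toy regex pass over GML text (the pattern and asset-name prefixes here are assumptions for this sketch, not the actual gms-mcp scanner):

```python
import re

gml = 'instance_create_layer(x, y, "Instances", obj_enemy); sprite_index = spr_run;'
# naive prefix-based match for object and sprite identifiers
refs = re.findall(r"\b(obj_\w+|spr_\w+)\b", gml)
print(refs)  # ['obj_enemy', 'spr_run']
```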
### Texture Groups (`gm_texture_group_*`)
Create, inspect, and edit `.yyp` `TextureGroups`, plus bulk-assign assets (sprites/fonts/tilesets/etc) via `textureGroupId`.
Read-only tools:
- `gm_texture_group_list`: list texture groups + available configs (desktop/android/ios/etc)
- `gm_texture_group_read`: read a single texture group entry
- `gm_texture_group_members`: list assets in a group (top-level + ConfigValues overrides)
- `gm_texture_group_scan`: report missing groups referenced + mismatches (top-level vs config override)
Destructive tools (all support `dry_run=true`):
- `gm_texture_group_create`: clone an existing template group (default: `Default`)
- `gm_texture_group_update`: patch fields on a group (optionally per config via `ConfigValues`)
- `gm_texture_group_rename`: rename a group and rewrite asset references
- `gm_texture_group_delete`: blocks by default if referenced unless `reassign_to` is provided
- `gm_texture_group_assign`: bulk-assign assets by explicit list or filters
Config scope defaults:
- Assignment updates an asset's top-level `textureGroupId` **only when it is a dict** (null is left as-is).
- If `configs` is omitted, assignment updates only **existing** `ConfigValues` entries; pass `configs=[...]` to create explicit overrides.
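The top-level assignment rule can be sketched in a few lines of Python (the field name follows the `.yy` JSON, but this helper is illustrative and not part of gms-mcp):

```python
def assign_texture_group(asset_yy: dict, group: dict) -> bool:
    """Update the top-level textureGroupId only when it is already a dict."""
    if isinstance(asset_yy.get("textureGroupId"), dict):
        asset_yy["textureGroupId"] = dict(group)
        return True
    return False  # null (None) is left as-is

sprite = {"textureGroupId": {"name": "Default", "path": "texturegroups/Default"}}
assign_texture_group(sprite, {"name": "game", "path": "texturegroups/game"})
```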
### MCP Resources
Pre-built, cacheable project data for agents:
- `gms://project/index`: Complete project structure (assets, folders, room order, configs, audio/texture groups, IDE version)
- `gms://project/asset-graph`: Asset dependency graph
- `gms://system/updates`: Returns a human-readable message if a newer version of `gms-mcp` is available on PyPI or GitHub.
### Update Notifier
The server automatically checks for updates on startup and during common operations:
- **Tool**: `gm_check_updates` returns structured update info.
- **Auto-check**: `gm_project_info` includes an `updates` field.
- **Resource**: `gms://system/updates` provides a quick text status.
## CLI usage
Run from a project directory (or pass `--project-root`):
```bash
gms --version
gms --project-root . asset create script my_function --parent-path "folders/Scripts.yy"
gms --project-root . texture-groups list
gms --project-root . texture-groups assign game --type sprite --folder-prefix sprites/ --dry-run
```
| text/markdown | Callum Lory, Ampersand Game Studios | null | null | null | null | gamemaker, mcp, cursor, cli, tools | [
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3 :: Only",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"mcp>=1.0.0",
"fastmcp>=0.4.1",
"tqdm>=4.66.0",
"colorama>=0.4.6",
"tomli>=2.0.1; python_version < \"3.11\"",
"pytest>=9.0.0; extra == \"dev\"",
"pytest-cov>=6.0.0; extra == \"dev\"",
"Pillow>=10.0.0; extra == \"dev\"",
"ruff>=0.9.0; extra == \"dev\"",
"pyright>=1.1.390; extra == \"dev\"",
"toml... | [] | [] | [] | [
"Homepage, https://github.com/Ampersand-Game-Studios/gms-mcp",
"Repository, https://github.com/Ampersand-Game-Studios/gms-mcp",
"Issues, https://github.com/Ampersand-Game-Studios/gms-mcp/issues"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-18T23:27:29.734506 | gms_mcp-0.1.73.tar.gz | 477,272 | ed/33/b5a836f21b7b5571d68d650864fd221e039143384284732651b54975c1c2/gms_mcp-0.1.73.tar.gz | source | sdist | null | false | 74044c46d96a6fa25296cb138fcafd8f | b48edb7177bdf844b03ac1c3f5e634aacfe73e4f826cecf44f36522bf9cce956 | ed33b5a836f21b7b5571d68d650864fd221e039143384284732651b54975c1c2 | MIT | [
"LICENSE"
] | 262 |
2.4 | inu | 2.5.7 | Inertial Navigation Utilities | # Inertial Navigation Utilities
The `inu.py` Python library provides a comprehensive set of tools for inertial
navigation, focusing on the mechanization of Inertial Measurement Unit (IMU)
sensor data (accelerometer and gyroscope readings) to derive position, velocity,
and attitude, as well as the inverse process to compute sensor values from pose
data. It includes utilities for generating navigation paths, estimating
velocities and attitudes, and handling barometric altitude aiding. This library
is well-suited for inertial navigation system simulations, flight path planning,
and state estimation.
## Inertial Mechanization
### Mechanization
```python
llh_t, vne_t, rpy_t = inu.mech(
fbbi_t: np.ndarray,
wbbi_t: np.ndarray,
llh0: np.ndarray,
vne0: np.ndarray,
rpy0: np.ndarray,
T: float,
hae_t: np.ndarray | None = None,
baro_name: str | None = None,
grav_model: Callable[[np.ndarray], np.ndarray] = somigliana)
Dllh, Dvne, wbbn = inu.mech_step(
fbbi: np.ndarray,
wbbi: np.ndarray,
llh: np.ndarray,
vne: np.ndarray,
Cnb: np.ndarray,
hb: float | None = None,
baro: Baro | None = None,
grav_model: Callable[[np.ndarray], np.ndarray] = somigliana)
```
The `mech` function performs forward mechanization, integrating IMU sensor data
(`fbbi_t` and `wbbi_t`) to compute position (`llh_t`), velocity (`vne_t`), and
attitude (`rpy_t`). It supports barometric altitude aiding or direct height
override. This function processes an entire time-history profile of sensor
values and returns the path solution for the corresponding span of time. If you
would prefer to mechanize only one step at a time, you can call the `mech_step`
function instead. Actually, the `mech` function does call the `mech_step`
function within a `for` loop.
The `mech` function can override the height solution with whatever is provided
for `hae_t`. If a barometric altimeter is named (one of eight names), `hae_t`
will be treated as the barometric altitude. Similarly, the `mech_step` function
can take a barometric model, generated by the `Baro` class.
### Inverse Mechanization
```python
fbbi_t, wbbi_t = inu.mech_inv(
llh_t: np.ndarray,
rpy_t: np.ndarray,
T: float,
grav_model: Callable[[np.ndarray], np.ndarray] = somigliana)
```
The `mech_inv` function performs inverse mechanization, taking path information
in the form of position (`llh_t`) and attitude (`rpy_t`) over time and estimates
the corresponding sensor values for an accelerometer (`fbbi_t`) and gyroscope
(`wbbi_t`). This function is fully vectorized (i.e., no `for` loop internally),
which means it processes a profile very quickly. Note that the velocity is
internally calculated from position over time. This function is the perfect dual
of the `mech` (forward mechanization) algorithm. This means a navigation path
could be input into `mech_inv` to generate sensor profiles; those profiles could
be fed into `mech`; and the resulting navigation path would match the original.
### Dynamics Jacobian
```python
F = inu.jacobian(fbbi, llh, vne, Cnb, baro=None)
```
The Jacobian of the dynamics is calculated using the `jacobian` function. This
can be used in state estimation filters (e.g., EKF). This is a square matrix
whose elements are the derivatives with respect to state of the
continuous-domain, time-derivatives of states. For example, the time derivative
of latitude is

So, the derivative of this with respect to height above ellipsoid is

The order of the states is position (latitude, longitude, and height), velocity
(North, East, and Down), and attitude. So, the above partial derivative would be
found in row 1, column 3 of the Jacobian matrix.
The representation of attitude is naturally complicated. This library uses 3x3
direction cosine matrices (DCMs) to process attitude. The change in attitude is
represented by a tilt error vector, which means the last three states in the
Jacobian are the *x*, *y*, and *z* tilt errors. This makes a grand total of 9
states, so the Jacobian is a 9x9 matrix.
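A small index map makes the state ordering concrete (the state names here are chosen for this sketch and are not part of the inu API):

```python
import numpy as np

# position (lat, lon, hae), velocity (N, E, D), attitude (tilt errors x, y, z)
STATES = ["lat", "lon", "hae", "vN", "vE", "vD", "tilt_x", "tilt_y", "tilt_z"]
IDX = {s: i for i, s in enumerate(STATES)}

F = np.zeros((len(STATES), len(STATES)))  # 9x9 dynamics Jacobian
F[IDX["lat"], IDX["hae"]] = -1e-9         # placeholder for d(lat_dot)/d(hae)
```

With 0-based indexing, `F[0, 2]` is the "row 1, column 3" entry mentioned above.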
## Truth Generation
### Waypoints
```python
way = inu.waypoints(
points: np.ndarray,
seg_len: float = 1.0,
radius_min: float = 0.0,
plot: bool = True,
ax: axes = None,
color: str = "tab:blue",
warncolor: str = "tab:orange",
bounds: Callable[[np.ndarray, np.ndarray],
np.ndarray | float] | list | tuple | None = None,
ned: bool = True)
```
The `waypoints` class generates smooth navigation paths using Bezier curves from
waypoints, ensuring constant velocity for a uniform sampling rate. It takes a
(2, N) array of North and East waypoints (`points`) and creates an interactive
plot connecting the waypoints with quadratic Bezier curves. These curves can be
manipulated by moving, adding, and deleting waypoints. The modified waypoints
are accessible from `way.points`. The field `way.path` contains the final North,
East, Down (NED) coordinates of the navigation path. The down coordinates will
all be zero.
Strictly speaking, the term "waypoints" is not accurate because the path does
not pass through these points; however, it is believed that "waypoints" does a
better job of communicating the idea of path planning than "control points".
### Built-in Paths
```python
points = inu.points_box(
width: float = 2000.0,
height: float = 2000.0,
radius: float = 300.0,
cycles: int = 3)
points = inu.points_clover(
radius: float = 10000.0,
cycles: int = 3)
points = inu.points_grid(
spacing: float = 300.0,
length: float = 1600.0,
rows: int = 6)
points = inu.points_spiral(
spacing: float = 300.0,
cycles: int = 3)
```
Several functions have been provided to generate the control points necessary to
pass to `waypoints` in order to produce interesting navigation paths.
```python
pc_t = inu.path_box(
seg_len: float,
width: float = 2000.0,
height: float = 2000.0,
radius: float = 300.0,
cycles: int = 3,
ned: bool = True,
plot: bool = False)
pc_t = inu.path_circle(
seg_len: float,
radius: float = 1000.0,
cycles: int = 5,
ned: bool = True)
pc_t = inu.path_clover(
seg_len: float,
radius: float = 10000.0,
cycles: int = 3,
ned: bool = True,
plot: bool = False)
pc_t = inu.path_grid(
seg_len: float,
spacing: float = 300.0,
length: float = 1600.0,
rows: int = 6,
ned: bool = True,
plot: bool = False)
pc_t = inu.path_pretzel(
K: int,
radius: float = 1000.0,
height: float = 100.0,
cycles: float = 1.0,
twists: int = 1,
ned: bool = True)
pc_t = inu.path_spiral(
seg_len: float,
spacing: float = 300.0,
cycles: int = 3,
ned: bool = True,
plot: bool = False)
```
Several pre-defined navigation paths, built on the control-point generators
above, are also provided. These return the North, East, Down
coordinates of the navigation path. The user can then convert these to geodetic
coordinates with either the `r3f.curvilinear_to_geodetic` or
`r3f.tangent_to_geodetic` function.
### Attitude and Velocity from Position
```python
t, vne_t, rpy_t = inu.llh_to_tva(llh_t, T)
vne_t = inu.llh_to_vne(llh_t, T)
rpy_t = inu.vne_to_rpy(vne_t, grav_t, T, alpha=0.06, wind=None)
```
With a navigation path, `llh_t`, the velocity and attitude can be estimated
assuming coordinated turns.
### Gravity
```python
grav = inu.somigliana(llh: np.ndarray)
```
Calculate local gravity acceleration using the Somigliana equation. The gravity
vector is in North, East, Down (NED) coordinates.
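As a point of reference, the Somigliana model can be sketched in plain NumPy using the standard published WGS-84 constants. This sketch returns only the gravity magnitude on the ellipsoid; it is not this library's implementation, which returns a full NED gravity vector.

```python
import numpy as np

# WGS-84 Somigliana constants (standard published values)
GE = 9.7803253359       # normal gravity at the equator (m/s^2)
K = 1.931852652458e-3   # Somigliana's constant
E2 = 6.69437999014e-3   # first eccentricity squared

def somigliana_sketch(lat):
    """Normal gravity magnitude at geodetic latitude `lat` (radians)."""
    s2 = np.sin(lat)**2
    return GE * (1.0 + K * s2) / np.sqrt(1.0 - E2 * s2)

g_equator = somigliana_sketch(0.0)      # ~9.7803 m/s^2
g_pole = somigliana_sketch(np.pi / 2)   # ~9.8322 m/s^2
```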
## Support Functions
### Orientation
```python
vec = inu.ned_enu(vec)
```
This library assumes all local-level coordinates are in the North, East, Down
(NED) orientation. If your coordinates are in the East, North, Up (ENU)
orientation or you wish for the final results to be converted to that
orientation, use the `ned_enu` function.
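The NED↔ENU conversion is its own inverse: swap the first two components and negate the third. A minimal NumPy sketch of this idea (not the library's `ned_enu` code):

```python
import numpy as np

def ned_enu_sketch(vec):
    """Swap NED <-> ENU: exchange the first two rows, negate the third.
    Works on a (3,) vector or a (3, K) array of column vectors."""
    vec = np.asarray(vec, dtype=float)
    out = np.empty_like(vec)
    out[0], out[1], out[2] = vec[1], vec[0], -vec[2]
    return out

ned = np.array([1.0, 2.0, 3.0])   # North, East, Down
enu = ned_enu_sketch(ned)         # [2, 1, -3] -> East, North, Up
back = ned_enu_sketch(enu)        # applying it twice returns the input
```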
### Discretization
```python
Phi, Bd, Qd = inu.vanloan(F, B=None, Q=None, T=None)
```
The extended Kalman filter (EKF) examples in the `examples/` directory show a
reduced-order approximation to the matrix exponential of the Jacobian. The
***Q*** dynamics noise covariance matrix also needs to be discretized. This was
done with a first-order approximation by just multiplying by the sampling period
*T*. This is reasonably accurate and computationally fast. However, it is an
approximation. The mathematically accurate way to discretize the Jacobian and
***Q*** is to use the van Loan method. This is implemented with the `vanloan`
function.
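For reference, the van Loan discretization of *F* and ***Q*** can be sketched with SciPy's matrix exponential. This is a generic sketch of the textbook method (omitting the *B* input), not this library's `vanloan` source:

```python
import numpy as np
from scipy.linalg import expm

def vanloan_sketch(F, Q, T):
    """Discretize continuous dynamics F and noise covariance Q over step T."""
    n = F.shape[0]
    # Van Loan block matrix: M = [[-F, Q], [0, F^T]] * T
    M = np.block([[-F, Q], [np.zeros((n, n)), F.T]]) * T
    E = expm(M)
    Phi = E[n:, n:].T       # state transition matrix
    Qd = Phi @ E[:n, n:]    # discretized noise covariance
    return Phi, Qd

# Scalar check: F = 0 gives Phi = 1 and Qd = Q*T exactly.
F = np.array([[0.0]])
Q = np.array([[1.0]])
Phi, Qd = vanloan_sketch(F, Q, 0.1)
```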
### Path Offset
```python
xo, yo = inu.offset_path(
x: np.ndarray,
y: np.ndarray,
d: np.ndarray | float)
```
Compute the coordinates of a closed polygon outlining a filled area offset from
a 2D path.
The input path is defined by coordinates (`x`, `y`), and the offset distance `d`
specifies the perpendicular distance to shift the path outward on both sides.
The resulting polygon is formed by concatenating the offset paths on either
side into a closed loop that encircles the input path clockwise.
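A minimal sketch of the idea (a hypothetical helper, not the library's `offset_path`): estimate path normals with finite differences, shift the path by ±`d` along them, and join the two sides into one closed polygon.

```python
import numpy as np

def offset_path_sketch(x, y, d):
    """Closed polygon at perpendicular distance d on both sides of (x, y)."""
    dx, dy = np.gradient(x), np.gradient(y)   # tangent estimate
    norm = np.hypot(dx, dy)
    nx, ny = -dy / norm, dx / norm            # unit normals
    # One side traversed forward, the other reversed, closes the loop.
    xo = np.concatenate([x + d * nx, (x - d * nx)[::-1]])
    yo = np.concatenate([y + d * ny, (y - d * ny)[::-1]])
    return xo, yo

# A horizontal line offset by 1 yields a band between y = -1 and y = +1.
x = np.linspace(0.0, 10.0, 11)
y = np.zeros_like(x)
xo, yo = offset_path_sketch(x, y, 1.0)
```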
## Extended Kalman Filter
An extended Kalman filter can be implemented using this library. The `mech_step`
function applies the mechanization equations to a single time step. It returns
the time derivatives of the states. The `jacobian` function calculates the
continuous-domain Jacobian of the dynamics function. While this does mean that
the user must then manually integrate the derivatives and discretize the
Jacobian, this gives the user greater flexibility to decide how to discretize
them. There are a few example scripts provided in the `examples/` folder.
The example code below is meant to run within a `for` loop stepping through
time, where `k` is the time index:
```python
# Inputs
fbbi = fbbi_t[:, k] # specific forces (m/s^2)
wbbi = wbbi_t[:, k] # rotation rates (rad/s)
z = z_t[:, k] # GPS position (rad, rad, m)
# Update
S = H @ Ph @ H.T + R # innovation covariance (3, 3)
Si = np.linalg.inv(S) # inverse (3, 3)
Kg = Ph @ H.T @ Si # Kalman gain (9, 3)
Ph -= Kg @ H @ Ph # update to state covariance (9, 9)
r = z - llh # innovation (3,)
dx = Kg @ r # changes to states (9,)
llh += dx[:3] # add change in position
vne += dx[3:6] # add change in velocity
# matrix exponential of skew-symmetric matrix
Psi = r3f.rodrigues(dx[6:])
Cnb = Psi.T @ Cnb
# Save results.
tllh_t[:, k] = llh.copy()
tvne_t[:, k] = vne.copy()
trpy_t[:, k] = r3f.dcm_to_rpy(Cnb.T)
# Get the Jacobian and propagate the state covariance.
F = inu.jacobian(fbbi, llh, vne, Cnb)
Phi = I + (F*T)@(I + (F*T/2)) # 2nd-order expm(F T)
Ph = Phi @ Ph @ Phi.T + Qd
# Get the state derivatives.
Dllh, Dvne, wbbn = inu.mech_step(fbbi, wbbi, llh, vne, Cnb)
# Integrate (forward Euler).
llh += Dllh * T # change applies linearly
vne += Dvne * T # change applies linearly
Cnb[:, :] = Cnb @ r3f.rodrigues(wbbn * T)
Cnb[:, :] = r3f.mgs(Cnb)
```
In the example above, `H` should be a (3, 9) matrix with ones along the
diagonal. The `Qd` should be the (9, 9) discretized dynamics noise covariance
matrix. The `R` should be the (3, 3) measurement noise covariance matrix. Note
that forward Euler integration has been performed on the state derivatives and a
second-order approximation to the matrix exponential has been implemented to
discretize the continuous-time Jacobian.
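For concreteness, the fixed matrices used by that loop might be set up as follows. The shapes follow the text above; the noise magnitudes are placeholder values to be tuned for a real system.

```python
import numpy as np

T = 0.01                                       # sampling period (s), placeholder
I = np.eye(9)

# Position-only measurement: picks off the first three states.
H = np.hstack([np.eye(3), np.zeros((3, 6))])   # (3, 9)

# Placeholder noise covariances -- tune these for a real system.
R = np.diag([1e-12, 1e-12, 1.0])               # (3, 3) measurement noise
Q = 1e-9 * np.eye(9)                           # continuous dynamics noise
Qd = Q * T                                     # first-order discretization

# Second-order approximation to expm(F*T), as in the loop above.
F = np.zeros((9, 9))                           # stand-in Jacobian
Phi = I + (F * T) @ (I + (F * T / 2))
```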
## Key Features
### Accuracy
The mechanization algorithms in this library make no simplifying assumptions.
The Earth is defined as an ellipsoid. Any deviations of the truth from this
simple shape can be captured by more complex gravity models. The algorithms use
a single-frequency update structure, which is much simpler than the common
two-frequency update structure and just as accurate, if not more so.
### Duality
The forward and inverse mechanization functions are perfect duals of each other.
This means that if you started with a profile of position, velocity, and
attitude and passed these into the inverse mechanization algorithm to get sensor
values and then passed those sensor values into the forward mechanization
algorithm, you would get back the original position, velocity, and attitude
profiles. The only error would be due to finite-precision rounding.
### Vectorization
When possible, the functions are vectorized in order to handle processing
batches of values. A set of scalars is a 1D array. A set of vectors is a 2D
array, with each vector in a column. So, a (3, 7) array is a set of seven
vectors, each with 3 elements. If an input matrix does not have 3 rows, it will
be assumed that the rows of the matrix are vectors.
An example of the vectorization in this library is the `mech_inv` (inverse
mechanization) algorithm. There is no `for` loop to iterate through time; rather
the entire algorithm has been vectorized. This results in an over 100x speed
increase.
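The column-vector convention means time-varying quantities stack along axis 1, and per-vector operations reduce along axis 0. A small illustration:

```python
import numpy as np

# Seven 3-element vectors stored as columns of a (3, 7) array.
vecs = np.arange(21.0).reshape(3, 7)

# Per-vector (per-column) norms: reduce along axis 0.
norms = np.linalg.norm(vecs, axis=0)   # shape (7,)

# A single vector is just a 1D array of 3 elements.
one = vecs[:, 0]                        # shape (3,)
```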
### Numerical Methods
Employs forward Euler integration and differentiation, and the Rodrigues
rotation formula for attitude updates.
### Flexibility
Supports custom gravity models and barometric aiding for altitude correction.
| text/markdown | null | David Woodburn <david.woodburn@icloud.com> | null | null | null | null | [
"Programming Language :: Python :: 3",
"Operating System :: OS Independent"
] | [] | null | null | >=3.6 | [] | [] | [] | [
"numpy",
"scipy",
"matplotlib",
"r3f",
"pytest; extra == \"test\"",
"avar; extra == \"test\""
] | [] | [] | [] | [
"Homepage, https://gitlab.com/davidwoodburn/inu",
"Bug Tracker, https://gitlab.com/davidwoodburn/inu/-/issues"
] | twine/6.2.0 CPython/3.13.7 | 2026-02-18T23:27:17.865931 | inu-2.5.7.tar.gz | 28,772 | 76/56/d16bd0c71dfadff6ffad378ff68bdd7c93de9bec5b2a809f99b73eeb394c/inu-2.5.7.tar.gz | source | sdist | null | false | af8d52c10161b9df4adb9ee1a8b3a612 | 8c4aa5cfb486e3e295739b4c9cea2de55dae49bf52181f01ddaec4891dbe2d63 | 7656d16bd0c71dfadff6ffad378ff68bdd7c93de9bec5b2a809f99b73eeb394c | MIT | [
"LICENSE.txt"
] | 271 |
2.4 | discophon | 0.0.6 | The Phoneme Discovery Benchmark | # The Phoneme Discovery benchmark
[💾 [Website](https://benchmarks.cognitive-ml.fr/phoneme_discovery)] [📜 [Paper]()] [📖 [BibTex](https://github.com/bootphon/phoneme_discovery?tab=readme-ov-file#citation)]
## Introduction
The last several years have seen revolutionary improvements in both speech processing and textual natural language
processing. In both cases, unsupervised or self-supervised pre-training has been the key to models autonomously
discovering representations that are tremendously useful for doing language tasks. Yet, central to the study of human
speech processing is the phoneme inventory, a small set of discrete units that abstract away from massive pronunciation
variability in the signal.
Discovering the correct set of phonemes for a language is crucial: encode the wrong categories, and contrasts between
words are distorted or disappear; fail to categorize at all, and contrasts between words are hidden behind semantically
irrelevant variation in the signal. While much attention has been paid to whether unsupervised speech models’
(continuous or discrete) representations are predictive of phonemes, this benchmark, for the first time, explicitly
fixes the goal of learning a discrete set of categories that are in one-to-one correspondence with the phoneme
inventory of a language.
Infants appear to learn the phoneme inventory of their language effortlessly, before they can speak. They benefit from
millions of years of evolution of the human brain and body, giving them a learning architecture that allows them to
thrive in the face of scarce and noisy language data, preparing them to learn the phoneme inventory of any human
language.
The Phoneme Discovery benchmark is aimed at building models that discover phoneme inventories across various languages,
using only small amounts of speech data, and without textual data during training.
## Installation
```bash
pip install discophon
```
To be able to compute ABX discriminabilities: `pip install discophon[abx]`.
If you want to run baselines and have access to the utility scripts, clone this repository:
```bash
git clone https://github.com/bootphon/phoneme_discovery
cd phoneme_discovery
uv sync
# uv sync --all-extras --all-groups # If you want all dependencies
```
## Usage
Check out the documentation:
- [Data preparation](https://github.com/bootphon/phoneme_discovery/blob/main/docs/prepare.md)
- [Simple evaluation](https://github.com/bootphon/phoneme_discovery/blob/main/docs/evaluate.md)
- [Run the benchmark](https://github.com/bootphon/phoneme_discovery/blob/main/docs/benchmark.md)
- [Use the baseline systems](https://github.com/bootphon/phoneme_discovery/blob/main/docs/baselines.md)
### Citation
```bibtex
```
Contact: `benchmarks [at] cognitive-ml [dot] fr`
| text/markdown | null | CoML <dev@cognitive-ml.fr> | null | null | null | null | [] | [] | null | null | >=3.12 | [] | [] | [] | [
"filelock>=3.20.2",
"httpx>=0.28.1",
"joblib>=1.5.3",
"numba>=0.63.1",
"numpy>=2.3.5",
"polars>=1.36.1",
"praat-textgrids>=1.4.0",
"soundfile>=0.13.1",
"soxr>=1.0.0",
"tqdm>=4.67.1",
"xarray>=2025.12.0",
"fastabx>=0.7.0; extra == \"abx\""
] | [] | [] | [] | [] | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true} | 2026-02-18T23:27:02.578881 | discophon-0.0.6-py3-none-any.whl | 28,064 | 90/4d/8ec85a692849ea9f0a6ec9605be72e1ba70b9dd8c32ded181e056ccb596d/discophon-0.0.6-py3-none-any.whl | py3 | bdist_wheel | null | false | d5d1b6c51b2e00dededccd3dd6341568 | 614dc2fbbca0bea648753af3d5e7b83532f77db1b0fde03074e33dae82d2de1e | 904d8ec85a692849ea9f0a6ec9605be72e1ba70b9dd8c32ded181e056ccb596d | MIT | [
"LICENSE"
] | 264 |
2.4 | adpapi | 1.4.0 | Add your description here | # adpapi
Minimal Python client for the ADP Workforce Now API using OAuth2 client credentials + mutual TLS (mTLS).
[](https://JoeyRussoniello.github.io/Adp-Api-Client/)
## Install
```bash
uv add adpapi
```
or
```bash
pip install adpapi
```
## Configuration
Provide credentials via environment variables (or a `.env` file):
```env
CLIENT_ID=...
CLIENT_SECRET=...
CERT_PATH=certificate.pem
KEY_PATH=adp.key
```
## Quickstart
```python
import os
from dotenv import load_dotenv
from adpapi.client import AdpApiClient, AdpCredentials
from adpapi.logger import configure_logging
# Optional helper: Configure logger with file handlers and stream handling
configure_logging()
load_dotenv()
# Decide which OData columns are required for your pull
cols = [
"workers/person/legalName",
"workers/person/birthDate",
"workers/workAssignments/reportsTo",
"workers/associateOID",
"workers/businessCommunication/emails",
]
# Load API Credentials from environment
credentials = AdpCredentials.from_env()
# Define your API Client
with AdpApiClient(
client_id=os.environ["CLIENT_ID"],
client_secret=os.environ["CLIENT_SECRET"],
cert_path=os.getenv("CERT_PATH", "certificate.pem"),
key_path=os.getenv("KEY_PATH", "adp.key"),
) as api:
workers = api.call_endpoint(
endpoint="/hr/v2/workers",
select=cols,
masked=True, # set False to request unmasked fields if your tenant allows it
page_size=100, # ADP max
max_requests=1, # increase/remove for full exports
)
```
## Filtering with OData
Use `FilterExpression` to build OData `$filter` parameters. Pass filters to `call_endpoint()` using the `filters` parameter:
```python
from adpapi.odata_filters import FilterExpression
# Simple equality
filter1 = FilterExpression.field("workers.status").eq("Active")
# Combine conditions with logical operators
filter2 = (
FilterExpression.field("workers.status").eq("Active")
& FilterExpression.field("workers.hireDate").ge("2020-01-01")
)
# Multiple values (IN operator)
filter3 = FilterExpression.field("workers.status").isin(["Active", "OnLeave", "Pending"])
# String search
filter4 = FilterExpression.field("workers.person.legalName.familyName").contains("Smith")
# Pass to API call
workers = api.call_endpoint(
endpoint="/hr/v2/workers",
filters=filter2,
select=cols,
masked=True,
)
```
**Supported Operators:**
- Comparison: `eq`, `ne`, `gt`, `ge`, `lt`, `le`
- String functions: `contains()`, `startswith()`, `endswith()`
- Logical: `&` (and), `|` (or), `~` (not)
- IN operator: `isin([...])`
**Notes:**
- Field paths use dots in Python code (e.g., `workers.status`) but convert to forward slashes in OData syntax (`workers/status`)
- Not all operators are supported by all endpoints; check ADP API documentation
- You can also pass OData filter strings directly: `filters="workers/status eq 'Active'"`
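The dot-to-slash conversion described in the first note can be sketched as a one-liner (illustrative only, not the library's internal code):

```python
def to_odata_path(field: str) -> str:
    """Convert a Python-style dotted field path to OData slash syntax."""
    return field.replace(".", "/")

to_odata_path("workers.status")                       # "workers/status"
to_odata_path("workers.person.legalName.familyName")  # "workers/person/legalName/familyName"
```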
## Notes
- Uses OData-style pagination (`$top`, `$skip`, `$select`) and stops on HTTP 204 (No Content).
- `masked=False` requests `Accept: application/json;masked=false` (subject to tenant permissions).
- Logging writes DEBUG output to `app.log` and to the console.
## `Monofile.ipynb`
For clients such as Microsoft Fabric, Azure Databricks, or other notebook-driven programming environments, running a single notebook with magic commands may be more efficient than creating a custom runtime with the `pip` version of the package. To allow for this, [`monofile.ipynb`](./monofile.ipynb) can simply be uploaded to the desired location and run there.
The import syntax then changes to:
```python
%run monofile.ipynb # Or whatever monofile has been renamed to in the notebook client
# Now imports are no longer necessary; the API objects are exposed at the top level
configure_logging()
with AdpApiClient(...) as api:
api.call_endpoint(...)
```
| text/markdown | null | Joey Russoniello <jmrusso@bu.edu> | null | null | null | null | [
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Operating System :: OS Independent",
"Intended Audience :: Developers",
"Topic :: Software Development :: Libraries :: Python Modules"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"requests>=2.32.5"
] | [] | [] | [] | [
"Homepage, https://github.com/JoeyRussoniello/Adp-Api-Client",
"Documentation, https://JoeyRussoniello.github.io/Adp-Api-Client/"
] | twine/6.2.0 CPython/3.13.5 | 2026-02-18T23:24:59.858991 | adpapi-1.4.0.tar.gz | 28,524 | fd/4c/63cf548522e5c73928441774242c0b8cde23cfa26413893efc94b522f331/adpapi-1.4.0.tar.gz | source | sdist | null | false | 9340a8e83ffaa5e091af15d347604d53 | 7dafaabeab1f4906ce2da6ea412ce29c91fd1ed552da691280d7cecfff980850 | fd4c63cf548522e5c73928441774242c0b8cde23cfa26413893efc94b522f331 | null | [] | 277 |
2.4 | genxai-framework | 1.0.0 | GenXAI Core (MIT) - Advanced Agentic AI Framework with Graph-Based Orchestration | # GenXAI - Advanced Agentic AI Framework
**Version:** 1.0.0
**Status:** Active Development
**License:** MIT
# Irsal Imran - [irsal2025@gmail.com](mailto:irsal2025@gmail.com)
---
## 🚀 Overview
GenXAI is an advanced agentic AI framework designed to surpass existing solutions by combining:
- **Graph-Based Orchestration** (like LangGraph) for complex agent workflows
- **Advanced Memory Systems** with multiple memory types (short-term, long-term, episodic, semantic, procedural)
- **No-Code Studio (Enterprise)** for visual workflow building
- **50+ Built-in Tools** for web, database, file, computation, and communication tasks
- **Enterprise-Grade Features (OSS)** including observability, security, connectors, and scalability
> **Open Source vs Enterprise**: This repository contains the **MIT-licensed core framework** plus
> enterprise-grade runtime features (connectors, triggers, observability, security, CLI extensions).
> The **Studio UI** remains enterprise-only and is staged under `enterprise/` for a separate
> commercial repo.
## 🧩 Applications
- **[Autonomous Coding Agent](https://github.com/genexsus-ai/genxbot/blob/main/README.md)**: GenXAI-powered autonomous coding application.
- Includes recipe-template run support with blended recipe + agent-generated actions (dedupe + fallback action coverage), plus structured observability hooks for planning latency, tool invocations, safety decisions, and retry/failure events.
- **[AI Strategy Agent (P2P Brainstorming)](./applications/ai_strategy_agent/backend/README.md)**: peer-to-peer brainstorming workflow with layered architecture and local observability hooks.
- **[Travel Planning Agent](./applications/travel_planning_agent/README.md)**: GenXAI-powered travel planning app with FastAPI backend, React frontend, and streaming itinerary updates.
## ✅ OSS vs Enterprise
**Open-source (MIT) core + enterprise-grade runtime** — available in OSS:
- `genxai/` (agents, graph engine, flows, tools, LLM providers)
- `genxai/connectors` (Kafka, SQS, Postgres CDC, webhooks, Slack, GitHub, Jira, Notion, Google Workspace)
- `genxai/triggers` (webhook, schedule, queue triggers)
- `genxai/observability` (logging, metrics, tracing)
- `genxai/security` (RBAC, policy engine, audit, rate limits)
- CLI commands: `tool`, `workflow`, `connector`, `metrics`, `approval`, `audit`
- `examples/`, `docs/`, `tests/`, `scripts/`
**Enterprise (commercial) features** — remain in the enterprise repo:
- `enterprise/` (Studio UI/backend + Studio-only assets)
---
## ✨ Key Features
### 🔗 Graph-Based Workflows
- Define complex agent relationships as directed graphs
- Conditional edges and dynamic routing
- Parallel and sequential execution
- Cycles, loops, and subgraphs
- Real-time visualization
### 🧠 Advanced Agent Capabilities
- **Multi-Modal**: Text, vision, audio, code understanding
- **Learning**: Self-improvement through feedback
- **Memory**: Multi-layered memory system
- **Tools**: 50+ built-in tools + custom tool creation
- **Personality**: Configurable agent personalities
- **LLM Ranking (opt-in)**: Safe JSON-based ranking with heuristic fallbacks for tool selection ([docs/LLM_INTEGRATION.md](./docs/LLM_INTEGRATION.md))
> **New in 0.1.6:** LLM ranking utility for tool selection with safe JSON parsing and heuristic fallbacks. See [LLM integration](./docs/LLM_INTEGRATION.md).
### 💾 Multi-Layered Memory
- **Short-Term**: Recent conversation context
- **Long-Term**: Persistent knowledge with vector search
- **Episodic**: Past experiences and learning
- **Semantic**: Factual knowledge base
- **Procedural**: Learned skills and procedures
- **Working**: Active processing space
- **Backend Plugins (Implemented)**: Redis, SQLite, Neo4j via formal plugin registry
- **Telemetry (Implemented)**: Backend memory utilization, size, and graph traversal metrics via `MemorySystem.get_stats()`
```python
stats = await memory.get_stats()
print(stats["backend_plugins"].keys()) # e.g. redis/sqlite/neo4j (when configured)
```
### 🎨 No-Code Studio
The Studio UI and enterprise backend are now staged under:
```
enterprise/studio/
```
They are intended for the **enterprise repo** and are **not part of the MIT-licensed core**.
### ⚡ Trigger SDK (OSS)
Trigger SDKs are part of the OSS runtime and live under `genxai/triggers`.
### 🏢 Enterprise-Ready (OSS Runtime)
- **Observability**: Logging, metrics, tracing
- **Security**: RBAC, encryption, guardrails
- **Scalability**: Horizontal scaling, distributed execution
- **Reliability**: 99.9% uptime target
### 📈 Metrics API (OSS Runtime)
Observability endpoints are part of the OSS runtime and live under `genxai/observability`.
---
## 📋 Documentation
Comprehensive documentation is available in the following files:
- **[ARCHITECTURE.md](./ARCHITECTURE.md)** - Complete system architecture and design principles
- **[REQUIREMENTS.md](./REQUIREMENTS.md)** - Detailed functional and non-functional requirements
- **[IMPLEMENTATION_PLAN.md](./IMPLEMENTATION_PLAN.md)** - Development roadmap
- **[TOOLS_DESIGN.md](./TOOLS_DESIGN.md)** - Tool system architecture and 50+ built-in tools
- **[MEMORY_DESIGN.md](./MEMORY_DESIGN.md)** - Multi-layered memory system design
- **[WORKFLOW_COMPOSITION.md](./docs/WORKFLOW_COMPOSITION.md)** - Composing global workflows with subflows
- **[COMPARISON.md](./docs/COMPARISON.md)** - CrewAI vs GenXAI comparison guide
- **[COMPARISON_CHEATSHEET.md](./docs/COMPARISON_CHEATSHEET.md)** - Condensed comparison cheatsheet
- **[COMPARISON_SLIDES.md](./docs/COMPARISON_SLIDES.md)** - Slide-style outline for presentations
### 🖼️ Workflow Composition Preview
For a visual overview of composing global workflows with subflows and deterministic routing,
see **[docs/WORKFLOW_COMPOSITION.md](./docs/WORKFLOW_COMPOSITION.md)**.

_Figure: Global workflow routing to two subflows (SVG preview)._

_Figure: PNG preview for environments that don’t render SVG._
---
## 🎯 Design Goals
1. **Superior to Existing Frameworks**: More features than CrewAI, AutoGen, BeeAI
2. **Graph-First**: Complex orchestration like LangGraph, but better
3. **No-Code Friendly**: Visual interface for non-technical users
4. **Enterprise-Grade**: Production-ready with observability and security
5. **Extensible**: Plugin architecture for easy customization
---
## 🏗️ Architecture
```
┌─────────────────────────────────────────────────────────────┐
│ PRESENTATION LAYER │
│ ┌──────────────────┐ ┌──────────────────┐ │
│ │ No-Code Studio │ │ CLI/SDK/API │ │
│ │ (Visual Editor) │ │ (Code Interface)│ │
│ └──────────────────┘ └──────────────────┘ │
└─────────────────────────────────────────────────────────────┘
↓
┌─────────────────────────────────────────────────────────────┐
│ ORCHESTRATION LAYER │
│ ┌──────────────┐ ┌──────────────┐ ┌──────────────┐ │
│ │ Graph Engine │ │ Flow Control │ │ State Manager│ │
│ └──────────────┘ └──────────────┘ └──────────────┘ │
│ ┌───────────────────────────────┐ │
│ │ Trigger Runner │ │
│ │ (Webhook, Schedule, Events) │ │
│ └───────────────────────────────┘ │
└─────────────────────────────────────────────────────────────┘
↓
┌─────────────────────────────────────────────────────────────┐
│ AGENT LAYER │
│ ┌──────────────┐ ┌──────────────┐ ┌──────────────────┐ │
│ │ Agent Runtime│ │ Memory System│ │ Tool Registry │ │
│ └──────────────┘ └──────────────┘ │ + Tool Executor │ │
│ └──────────────────┘ │
└─────────────────────────────────────────────────────────────┘
↓
┌─────────────────────────────────────────────────────────────┐
│ COMMUNICATION LAYER │
│ ┌──────────────┐ ┌──────────────────┐ ┌──────────────┐ │
│ │ Message Bus │ │ Event Stream │ │ Pub/Sub │ │
│ └──────────────┘ │ + Event Router │ └──────────────┘ │
│ └──────────────────┘ │
└─────────────────────────────────────────────────────────────┘
↓
┌─────────────────────────────────────────────────────────────┐
│ INFRASTRUCTURE LAYER │
│ ┌──────────────┐ ┌──────────────┐ ┌──────────────┐ │
│ │ LLM Providers│ │ Vector DBs │ │ Observability│ │
│ └──────────────┘ └──────────────┘ └──────────────┘ │
│ ┌──────────────────────┐ ┌───────────────────────────┐ │
│ │ Persistent Stores │ │ Connectors / Integrations │ │
│ │ (Postgres, Redis,…) │ │ (Slack, Kafka, Jira, …) │ │
│ └──────────────────────┘ └───────────────────────────┘ │
└─────────────────────────────────────────────────────────────┘
┌─────────────────────────────────────────────────────────────┐
│ CROSS-CUTTING (ALL LAYERS): SECURITY / GOVERNANCE │
│ ┌──────────────┐ ┌──────────────────┐ ┌──────────────┐ │
│ │ RBAC │ │ Policy Engine │ │ Audit Logging│ │
│ │ │ │ (ACL + approvals)│ │ │ │
│ └──────────────┘ └──────────────────┘ └──────────────┘ │
│ ┌──────────────────┐ ┌────────────────────────────────┐ │
│ │ Guardrails │ │ Secrets + Encryption (configs) │ │
│ │ (PII, filters, …)│ │ │ │
│ └──────────────────┘ └────────────────────────────────┘ │
└─────────────────────────────────────────────────────────────┘
```
See [ARCHITECTURE.md](./ARCHITECTURE.md) for complete details.
---
## 💡 Quick Start
### CLI Quick Start (OSS)
The OSS package ships a `genxai` CLI with `tool` and `workflow` commands.
```bash
# Verify the CLI entry point
genxai --help
# List available tools
genxai tool list
# Search and inspect tools
genxai tool search weather
genxai tool info weather_api
# Run a YAML workflow
genxai workflow run examples/nocode/content_generation.yaml \
--input '{"topic": "AI workflow design"}'
# Create and export a tool
genxai tool create \
--name my_tool \
--description "My custom tool" \
--category custom \
--template api_call \
--config '{"url": "https://api.example.com", "method": "GET"}'
genxai tool export my_tool --output ./my_tool.json
# Import a tool and export schema bundles
genxai tool import-tool ./my_tool.json
genxai tool export-schema --output tool_schemas.json
genxai tool export-schema --format yaml --output tool_schemas.yaml
```
### Using GenXAI as a Framework Library
```python
import os
from genxai import Agent, AgentConfig, AgentRegistry, Graph
# Set your API key (required)
os.environ["OPENAI_API_KEY"] = "sk-your-api-key-here"
# Define agents
classifier = Agent(
id="classifier",
config=AgentConfig(
role="Classifier",
goal="Categorize customer requests",
llm_model="gpt-4",
tools=["sentiment_analysis", "category_detector"],
),
)
support = Agent(
id="support",
config=AgentConfig(
role="Support Agent",
goal="Resolve customer issues",
llm_model="claude-3-opus",
enable_memory=True,
),
)
AgentRegistry.register(classifier)
AgentRegistry.register(support)
# Build graph
graph = Graph()
from genxai.core.graph.nodes import InputNode, OutputNode, AgentNode
from genxai.core.graph.edges import Edge
graph.add_node(InputNode(id="start"))
graph.add_node(AgentNode(id="classify", agent_id="classifier"))
graph.add_node(AgentNode(id="support", agent_id="support"))
graph.add_node(OutputNode(id="end"))
graph.add_edge(Edge(source="start", target="classify"))
graph.add_edge(Edge(source="classify", target="support"))
graph.add_edge(Edge(source="support", target="end"))
# Run workflow
result = await graph.run(input_data="My app crashed")
```
### Flow Orchestrator Examples
GenXAI also ships with lightweight flow orchestrators for common patterns:
```python
from genxai import AgentFactory, RoundRobinFlow, SelectorFlow, P2PFlow
agents = [
AgentFactory.create_agent(id="analyst", role="Analyst", goal="Analyze"),
AgentFactory.create_agent(id="writer", role="Writer", goal="Write"),
]
# Round-robin flow
round_robin = RoundRobinFlow(agents)
# Selector flow
def choose_next(state, agent_ids):
return agent_ids[state.get("selector_hop", 0) % len(agent_ids)]
selector = SelectorFlow(agents, selector=choose_next, max_hops=3)
# P2P flow
p2p = P2PFlow(agents, max_rounds=4, consensus_threshold=0.7)
```
See runnable examples in:
- `examples/code/flow_round_robin_example.py`
- `examples/code/flow_selector_example.py`
- `examples/code/flow_p2p_example.py`
- `examples/code/flow_parallel_example.py`
- `examples/code/flow_conditional_example.py`
- `examples/code/flow_loop_example.py`
- `examples/code/flow_router_example.py`
- `examples/code/flow_ensemble_voting_example.py`
- `examples/code/flow_critic_review_example.py`
- `examples/code/flow_coordinator_worker_example.py`
- `examples/code/flow_map_reduce_example.py`
- `examples/code/flow_subworkflow_example.py`
- `examples/code/flow_auction_example.py`
Full flow documentation: [docs/FLOWS.md](./docs/FLOWS.md)
### Trigger SDK Quick Start (OSS)
```python
from genxai.triggers import WebhookTrigger
from genxai.core.graph import TriggerWorkflowRunner
trigger = WebhookTrigger(trigger_id="support_webhook", secret="my-secret")
# Wire trigger to workflow
runner = TriggerWorkflowRunner(nodes=nodes, edges=edges)
async def on_event(event):
result = await runner.handle_event(event)
print("Workflow result:", result)
trigger.on_event(on_event)
await trigger.start()
# In your FastAPI handler:
# await trigger.handle_request(payload, raw_body=raw, headers=request.headers)
```
### Install Options
```bash
# Core install
pip install genxai-framework
# Full install with providers/tools/API (core)
pip install "genxai-framework[llm,tools,api]"
# Everything included
pip install "genxai-framework[all]"
```
> For the Studio UI, use the enterprise repository and its commercial license.
---
## 🧩 OSS Enterprise Features (Studio Excluded)
The following enterprise-grade capabilities are **included in OSS**:
- **Connectors**: Kafka, SQS, Postgres CDC, Webhooks, Slack, GitHub, Notion, Jira, Google Workspace
- **Triggers**: Webhook, schedule, and queue triggers
- **Observability**: logging, metrics, tracing, alerts
- **Security**: RBAC, policy engine, audit logging, rate limits, PII utilities
- **CLI Extensions**: metrics, connector, approval, audit commands
- **Worker Queue Engine**: distributed execution support
---
## 🛠️ Technology Stack
### Core Framework
- **Language**: Python 3.11+
- **Validation**: Pydantic v2
- **Concurrency**: AsyncIO
- **Testing**: Pytest
### Storage
- **Metadata**: PostgreSQL
- **Caching**: Redis
- **Vector DB**: Pinecone, Weaviate, Chroma
- **Graph DB**: Neo4j
### LLM Providers
- OpenAI (GPT-4, GPT-3.5)
- Anthropic (Claude 3)
- Google (Gemini)
- Cohere
- Local models (Ollama, LM Studio)
### No-Code Studio
- **Frontend**: React + TypeScript
- **Graph Viz**: ReactFlow
- **Styling**: TailwindCSS
- **Backend**: FastAPI
### DevOps
- **Containers**: Docker
- **Orchestration**: Kubernetes
- **CI/CD**: GitHub Actions
- **Monitoring**: Prometheus + Grafana
---
## 🎯 Key Differentiators
### vs CrewAI
✅ Graph-based workflows (not just sequential)
✅ Advanced memory system
✅ No-code interface
✅ Learning agents
✅ Enterprise features
### vs AutoGen
✅ Simpler configuration
✅ Rich built-in tools
✅ Visual workflow builder
✅ Better state management
✅ Multi-modal support
### vs BeeAI
✅ More sophisticated agents
✅ Complex orchestration
✅ Advanced memory
✅ Enterprise scalability
✅ Comprehensive tooling
### vs LangGraph
✅ All graph features PLUS:
✅ No-code interface
✅ Advanced agent capabilities
✅ Multi-layered memory
✅ Tool marketplace
✅ Learning and adaptation
---
## 📊 Success Metrics
### Technical
- ✅ All functional requirements implemented
- ✅ 80%+ test coverage
- ✅ 99.9% uptime
- ✅ < 2s agent response time
### Business
- 🎯 10,000+ GitHub stars in first year
- 🎯 100+ contributors
- 🎯 100+ companies in production
- 🎯 4.5+ star rating
### User Experience
- 🎯 < 5 minutes to first workflow
- 🎯 Non-technical users productive in < 1 hour
- 🎯 < 5% framework-related failures
---
## 🤝 Contributing
We welcome contributions! This project is in active development. We provide:
- Contributing guidelines
- Development setup instructions
- Issue templates
- Pull request templates
---
## 👥 Contributors
| Name | Email |
| --- | --- |
| Irsal Imran | [irsal2025@gmail.com](mailto:irsal2025@gmail.com) |
---
## 📜 License
MIT License
---
## 🔗 Links
- **Documentation**: See docs/ directory
- **GitHub**: https://github.com/genexsus-ai/genxai
- **Discord**: (To be created)
- **Website**: https://www.genxai.dev
---
## 📧 Contact
For questions or collaboration opportunities, please reach out through GitHub Discussions (once created).
---
## 🙏 Acknowledgments
Inspired by:
- [LangGraph](https://github.com/langchain-ai/langgraph) - Graph-based orchestration
- [CrewAI](https://github.com/joaomdmoura/crewAI) - Multi-agent collaboration
- [AutoGen](https://github.com/microsoft/autogen) - Conversational agents
- [BeeAI](https://github.com/i-am-bee/bee-agent-framework) - Agent framework design
---
## 📈 Project Status
**Current Phase**: Active Development
**Next Milestone**: Complete visual editor + studio polish
**Expected Launch**: TBD
---
**Built with ❤️ by the GenXAI team**
| text/markdown | null | GenXAI Team <team@genxai.dev> | null | null | null | ai, agents, llm, graph, orchestration, multi-agent | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Topic :: Scientific/Engineering :: Artificial Intelligence"
] | [] | null | null | >=3.11 | [] | [] | [] | [
"pydantic>=2.5.0",
"pydantic-settings>=2.1.0",
"asyncio>=3.4.3",
"aiohttp>=3.9.0",
"httpx>=0.25.0",
"python-dotenv>=1.0.0",
"pyyaml>=6.0.1",
"typing-extensions>=4.8.0",
"click>=8.1.0",
"rich>=13.0.0",
"sqlalchemy>=2.0.23",
"pytest>=7.4.0; extra == \"dev\"",
"pytest-asyncio>=0.21.0; extra == ... | [] | [] | [] | [
"Homepage, https://genxai.dev",
"Documentation, https://docs.genxai.dev",
"Repository, https://github.com/genexsus-ai/genxai",
"Issues, https://github.com/genexsus-ai/genxai/issues"
] | twine/6.2.0 CPython/3.9.6 | 2026-02-18T23:24:46.135719 | genxai_framework-1.0.0.tar.gz | 329,233 | ec/7f/4392897e97175ec5bed20cf92d75c7e357d018708131ea70ca8a76c430cb/genxai_framework-1.0.0.tar.gz | source | sdist | null | false | 303eb7ac74530b61734bed5e9139c174 | a59ab22617a40bb47f903b54cbe2c37a5e5cb39601fe397b32b903b0ca4d992a | ec7f4392897e97175ec5bed20cf92d75c7e357d018708131ea70ca8a76c430cb | MIT | [
"LICENSE",
"LICENSES.md"
] | 258 |
2.4 | nsflow | 0.6.9 | A Neuro-San powered Smart Agent Network Framework | # nsflow - A FastAPI powered client and IDE for NeuroSan
Note: To see how `nsflow` works in conjunction with the neuro-san library, visit [https://github.com/cognizant-ai-lab/neuro-san-studio](https://github.com/cognizant-ai-lab/neuro-san-studio)
**nsflow** is a FastAPI- and React-based, developer-oriented client and IDE that enables users to explore, visualize, and interact with smart agent networks. It integrates with [**NeuroSan**](https://github.com/cognizant-ai-lab/neuro-san) for intelligent agent-based interactions.
It comes with an **Agent Network Designer** that embodies the agentic design philosophy, making the neuro-san library accessible to both developers and non-developers alike. This transforms nsflow from a simple interactive chat client into a well-featured agent orchestration platform with visual design capabilities.

---
## **Enabling/Disabling text-to-speech and speech-to-text**
For local development (when running the backend and frontend separately), you can toggle text-to-speech and speech-to-text by setting the `VITE_USE_SPEECH` variable in `nsflow/frontend/.env.development` to `"true"` or `"false"`.
The frontend development server reads this file directly.
---
## **Installation & Running nsflow**
**nsflow** can be installed and run in **two different ways:**
### **1️⃣ Run nsflow using pypi package**
To simplify execution, nsflow provides a CLI command to start both the backend and frontend simultaneously.
#### **Step 1: Create and source a virtual environment**
```bash
python -m venv .venv
source .venv/bin/activate
```
#### **Step 2: Install nsflow from pip**
```bash
pip install nsflow
```
#### **Step 3: Run Everything with a Single Command**
```bash
python -m nsflow.run
```
By default, this will start:
- **backend** (FastAPI + NeuroSan) here: `http://127.0.0.1:4173/docs` or `http://127.0.0.1:4173/redoc`
- **frontend** (React) here: `http://127.0.0.1:4173`
---
### **2️⃣ Development & Contribution (Manually Start Frontend & Backend)**
If you want to contribute, ensure you have the necessary dependencies installed.
To start the frontend and backend separately, follow these steps:
#### **Step 1: Clone the Repository**
```bash
git clone https://github.com/cognizant-ai-lab/nsflow.git
cd nsflow
```
#### **Step 2: Install Dependencies**
- Make sure you have Python (preferably **Python 3.12**) installed.
```bash
python -m venv .venv
source .venv/bin/activate
pip install -r requirements.txt
pip install -r requirements-build.txt
```
#### **Step 3: Start the Backend in dev mode & Frontend separately**
- Ensure that you have a few example hocon files in your `registries` and the same mapped in `registries/manifest`.
- [Optional] Ensure that you have the necessary coded tools in the `coded_tools` dir.
- From the root start Backend:
```bash
python -m nsflow.run --dev
```
- Start Frontend:
- Ensure that you have **Node.js (with Yarn)** installed.
- Follow the instructions to setup the frontend here: [./nsflow/frontend/README.md](https://github.com/cognizant-ai-lab/nsflow/tree/main/nsflow/frontend/README.md)
- On another terminal window
```bash
cd nsflow/frontend; yarn install
yarn dev
```
- By default:
- **backend** will be available at: `http://127.0.0.1:8005`
- **frontend** will be available at: `http://127.0.0.1:5173`
- You may change the host/port configuration via environment variables for the FastAPI backend (see [run.py](./nsflow/run.py)) and via [frontend/.env.development](./nsflow/frontend/.env.development) for the React app
#### **Step 4: To make sure your changes to frontend take effect in the wheel, run the script**
- To build the Frontend
```bash
sh build_scripts/build_frontend.sh
```
Note: The script's output should confirm that the `./nsflow` directory now contains a `prebuilt_frontend` module
- To build and test the wheel locally
```bash
sh build_scripts/build_wheel.sh
```
## For using Text-to-Speech and Speech-to-Text
Prerequisite: install `ffmpeg` for text-to-speech and speech-to-text support
- On Mac
```bash
brew install ffmpeg
```
- On Linux
```bash
sudo apt install ffmpeg
```
- On Windows, follow these [instructions](https://phoenixnap.com/kb/ffmpeg-windows).
---
### Enabling Visual Question Answering (VQA) http endpoints
Follow these [instructions](./docs/VQA_README.md)
| text/markdown | Deepak | null | null | null | null | NsFlow, NeuroSan, agent-network | [
"Programming Language :: Python :: 3",
"Operating System :: OS Independent",
"Development Status :: 4 - Beta",
"Intended Audience :: Developers"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"neuro-san<0.7,>=0.6.32",
"fastapi-cors>=0.0.6",
"fastapi>=0.115.8",
"aiofiles>=24.1.0",
"graphviz==0.20.3",
"nbformat>=5.10.4",
"pydantic>=2.9.2",
"pyhocon>=0.3.61",
"python-dotenv==1.0.1",
"rich>=14.2.0",
"uvicorn>=0.34.0",
"websockets>=14.2",
"wsproto>=1.2.0",
"Werkzeug>=3.1.4",
"gTTS... | [] | [] | [] | [
"Homepage, https://github.com/cognizant-ai-lab/nsflow",
"Repository, https://github.com/cognizant-ai-lab/nsflow",
"Documentation, https://github.com/cognizant-ai-lab/nsflow#readme"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-18T23:24:15.369647 | nsflow-0.6.9.tar.gz | 1,358,675 | e9/ff/73511e1a33748e85240ef83288832761fcda90878a610205cd52ff1aeb59/nsflow-0.6.9.tar.gz | source | sdist | null | false | 5e91afd4948f3210ab3b10dc46606320 | 3cb241454a20d52d59d4e0732898418969d3507bd72742e8c385a14ff58594f0 | e9ff73511e1a33748e85240ef83288832761fcda90878a610205cd52ff1aeb59 | Apache-2.0 | [
"LICENSE.txt"
] | 493 |
2.4 | luminarycloud | 0.23.3 | Luminary Cloud SDK | Luminary Cloud's Python Software Development Kit (SDK) allows you to access many of the features within our platform programmatically (i.e. without needing to go through the graphical user interface in your browser).
Our Python SDK provides a secure abstraction layer, a set of simulation-specific data structures, and all the necessary functionality to enable automation via simple Python scripts.
It allows you to create your own applications leveraging Luminary (such as importing geometry and creating meshes, running and post-processing simulations, running explorations and creating surrogate models) and connect Luminary simulations to pre- and post-processing tools that are already part of your own workflows.
The sample code below shows how the SDK can be used to upload a mesh and run a
simulation (note that the Python SDK is designated as Early Access and syntax
and functionality may change significantly).
```py
import luminarycloud as lc
project = lc.create_project("NACA 0012", "My first SDK project.")
mesh = project.upload_mesh("./airfoil.lcmesh")
sim_template = project.create_simulation_template("test template", params_json_path="./simulation_template.json")
sim = project.create_simulation(mesh.id, "My simulation", sim_template.id)
```
| text/markdown | null | "Luminary Cloud Inc." <support@luminarycloud.com> | null | null | null | Luminary Cloud, SDK | [] | [] | null | null | >=3.10 | [] | [] | [] | [
"google-crc32c~=1.7",
"googleapis-common-protos~=1.70",
"grpcio-status~=1.65",
"grpcio-tools~=1.65",
"grpcio~=1.65",
"importlib-metadata~=8.7",
"opentelemetry-api~=1.25",
"opentelemetry-exporter-otlp-proto-common~=1.25",
"opentelemetry-exporter-otlp-proto-http~=1.25",
"opentelemetry-instrumentatio... | [] | [] | [] | [
"Homepage, https://www.luminarycloud.com/",
"Documentation, https://app.luminarycloud.com/docs/api/"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-18T23:23:31.044102 | luminarycloud-0.23.3.tar.gz | 775,386 | c8/16/e47336d6f2b4be79f7dde216fd9687dee8750632f9ac1e23fcb36be66023/luminarycloud-0.23.3.tar.gz | source | sdist | null | false | 7c9c90eb9e73555439ac425267fd6247 | 2f9317981398dfc81ea2f57a359f668c6218bd7896fa8d37c9c6698f6357b2fa | c816e47336d6f2b4be79f7dde216fd9687dee8750632f9ac1e23fcb36be66023 | null | [] | 314 |
2.4 | vivarium-build-utils | 2.3.0 | Shared build utilities and Jenkins pipeline library for Simulation Science projects. | ====================
Vivarium Build Utils
====================
Vivarium Build Utils contains shared build utilities for Simulation Science projects.
You can install ``vivarium_build_utils`` from PyPI with pip::
$ pip install vivarium_build_utils
or build it from source with::
$ git clone https://github.com/ihmeuw/vivarium_build_utils.git
$ cd vivarium_build_utils
$ conda create -n ENVIRONMENT_NAME
$ conda activate ENVIRONMENT_NAME
$ pip install -e .
Overview
========
This repository provides:
- ``vars/``: Jenkins shared library functions for continuous integration pipelines
- ``resources/``: Shared Makefiles and build scripts for consistent build processes
Note: for help with the Make targets available in any environment where this repository
is installed, run ``make help`` in the terminal.
| text/x-rst | The vivarium developers | vivarium.dev@gmail.com | null | null | BSD-3-Clause | null | [
"Intended Audience :: Developers",
"Intended Audience :: Education",
"Intended Audience :: Science/Research",
"License :: OSI Approved :: BSD License",
"Natural Language :: English",
"Operating System :: MacOS :: MacOS X",
"Operating System :: POSIX",
"Operating System :: POSIX :: BSD",
"Operating S... | [] | https://github.com/ihmeuw/vivarium_build_utils | null | null | [] | [] | [] | [] | [] | [] | [] | [] | twine/6.1.0 CPython/3.13.7 | 2026-02-18T23:22:37.650142 | vivarium_build_utils-2.3.0.tar.gz | 35,534 | b2/b4/cec7bdec35c196a5d8f4bb1daacbde8e708f9158e2d465f8c845c2f84041/vivarium_build_utils-2.3.0.tar.gz | source | sdist | null | false | bf31ceed7b302539046a8a0bfb81ecb6 | 728f8ff39da9c1afeeda8c98b39c3da660c7da77e1d3716b1f915d3ef85b2d9d | b2b4cec7bdec35c196a5d8f4bb1daacbde8e708f9158e2d465f8c845c2f84041 | null | [
"LICENSE.txt"
] | 1,973 |
2.4 | pypdfium2 | 5.5.0 | Python bindings to PDFium | <!-- SPDX-FileCopyrightText: 2026 geisserml <geisserml@gmail.com> -->
<!-- SPDX-License-Identifier: CC-BY-4.0 -->
# pypdfium2
<!-- [](https://pepy.tech/project/pypdfium2) -->
pypdfium2 is an ABI-level Python 3 binding to [PDFium](https://pdfium.googlesource.com/pdfium/+/refs/heads/main), a powerful and liberal-licensed library for PDF rendering, inspection, manipulation and creation.
It is built with [ctypesgen](https://github.com/pypdfium2-team/ctypesgen) and external [PDFium binaries](https://github.com/bblanchon/pdfium-binaries/).
The custom setup infrastructure provides a seamless packaging and installation process. A wide range of platforms is supported with pre-built packages.
pypdfium2 includes [helpers](#support-model) to simplify common use cases, while the [raw PDFium API](#raw-pdfium-api) (ctypes) remains accessible as well.
## Installation
### From PyPI (recommended)
```bash
python -m pip install -U pypdfium2
```
If available for your platform, this will use a pre-built wheel package, which is the easiest way of installing pypdfium2.
Otherwise, [setup code](#from-the-repository--with-setup) will run.
If your platform is not covered with pre-built binaries, this will look for system pdfium, or attempt to build pdfium from source.
#### JavaScript/XFA builds
pdfium-binaries also offer V8 (JavaScript) / XFA enabled builds.
If you need them, do e.g.:
```bash
PDFIUM_PLATFORM=auto-v8 pip install -v pypdfium2 --no-binary pypdfium2
```
This will bypass wheels and run setup, while requesting use of V8 builds through the `PDFIUM_PLATFORM=auto-v8` environment setting. See below for more info.
#### Optional runtime dependencies
As of this writing, pypdfium2 has no mandatory runtime dependencies apart from Python and PDFium itself (which is commonly bundled).
However, some optional support model / CLI features need additional packages:
* [`Pillow`](https://pillow.readthedocs.io/en/stable/) (module `PIL`) is a popular imaging library for Python. pypdfium2 provides convenience adapters to translate between raw bitmap buffers and PIL images. It also uses PIL for some command-line functionality (e.g. image saving).
* [`NumPy`](https://numpy.org/doc/stable/index.html) is a library for scientific computing. As with `Pillow`, pypdfium2 provides helpers to get a numpy array view of a raw bitmap.
* [`opencv-python`](https://github.com/opencv/opencv-python) (module `cv2`) is an imaging library built around numpy arrays. It can be used in the rendering CLI to save with pypdfium2's numpy adapter.
pypdfium2 tries to defer imports of optional dependencies until they are actually needed, so there should be no startup overhead if you don't use them.
### From the repository / With setup
_Note, unlike helpers, pypdfium2's setup is not bound by API stability promises, so it may change any time._
#### Setup Dependencies
*System*
+ C pre-processor (`gcc`/`clang` – alternatively, specify the command to invoke via `$CPP`)
+ `git` (Used e.g. to determine the latest pdfium-binaries version, to get `git describe` info, or to check out pdfium on sourcebuild. Might be optional on default setup.)
+ [`gh >= 2.47.0`](https://github.com/cli/cli/) (optional; used to verify pdfium-binaries build attestations)
*Python*
+ [`ctypesgen` (pypdfium2-team fork)](https://github.com/pypdfium2-team/ctypesgen)
+ `setuptools`
+ `wheel`, if setuptools is `< v70.1.0`
Python dependencies should be installed automatically, unless `--no-build-isolation` is passed to pip.
> [!NOTE]
> pypdfium2 and its ctypesgen fork are developed in sync, i.e. each pypdfium2 commit ought to be coupled with the then `HEAD` of pypdfium2-ctypesgen.<br>
> Our release sdists, and latest pypdfium2 from git, will automatically use matching ctypesgen.<br>
> However, when using a non-latest commit, you'll have to set up the right ctypesgen version on your own, and install pypdfium2 without build isolation.
#### Get the code
```bash
git clone "https://github.com/pypdfium2-team/pypdfium2.git"
cd pypdfium2/
```
#### Default setup
```bash
# In the pypdfium2/ directory
python -m pip install -v .
```
This will invoke pypdfium2's `setup.py`. Typically, this means a binary will be downloaded from `pdfium-binaries` and bundled into pypdfium2, and ctypesgen will be called on pdfium headers to produce the bindings interface.
`pdfium-binaries` offer GitHub build provenance [attestations](https://github.com/bblanchon/pdfium-binaries/attestations), so it is highly recommended that you install the `gh` CLI for our setup to verify authenticity of the binaries.
If no pre-built binaries are available for your platform, setup will [look for system pdfium](#with-system-pdfium), or attempt to [build pdfium from source](#with-self-built-pdfium).
##### `pip` options of interest
- `-v`: Verbose logging output. Useful for debugging.
- `-e`: Install in editable mode, so the installation points to the source tree. This way, changes directly take effect without needing to re-install. Recommended for development.
- `--no-build-isolation`: Do not isolate setup in a virtual env; use the main env instead. This renders `pyproject.toml [build-system]` inactive, so setup deps must be prepared by caller. Useful to install custom versions of setup deps, or as speedup when installing repeatedly.
- `--no-binary pypdfium2`: Do not use binary *wheels* when installing from PyPI – instead, use the sdist and run setup. Note, this option is improperly named, as pypdfium2's setup will attempt to use binaries all the same. If you want to prevent that, set e.g. `PDFIUM_PLATFORM=fallback` to achieve the same behavior as if there were no pdfium-binaries for the host. Or if you just want to package a source distribution, set `PDFIUM_PLATFORM=sdist`.
- `--pre` to install a beta release, if available.
#### With system pdfium
```bash
PDFIUM_PLATFORM="system-search" python -m pip install -v .
```
Look for a system-provided pdfium shared library, and bind against it.
Standard, portable [`ctypes.util.find_library()`](https://docs.python.org/3/library/ctypes.html#finding-shared-libraries) means will be used to probe for system pdfium at setup time, and the result will be hardcoded into the bindings. Alternatively, set `$PDFIUM_BINARY` to the path of the out-of-tree DLL to use.
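For reference, the portable probing primitive mentioned above behaves like this (results are platform-dependent):

```python
from ctypes.util import find_library

# find_library() checks the platform's standard shared-library locations and
# returns a name/path string on success, or None if nothing was found.
print(find_library("pdfium"))  # a path/filename if system pdfium is installed, else None
print(find_library("c"))       # e.g. "libc.so.6" on glibc-based Linux
```

Note that, as described above, pypdfium2 runs this probe at *setup* time and hardcodes the result into the generated bindings.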
If system pdfium was found, we will look for pdfium headers from which to generate the bindings (e.g. in `/usr/include`). If the headers are in a location not recognized by our code, set `$PDFIUM_HEADERS` to the directory in question.
Also, we try to determine the pdfium version, either from the library filename itself, or via `pkg-config`.
If this fails, you can pass the version alongside the setup target, e.g. `PDFIUM_PLATFORM=system-search:XXXX`, where `XXXX` is the pdfium build version.
If the version is not known in the end, `NaN` placeholders will be set.
If the version is known but no headers were found, they will be downloaded from upstream.
If neither headers nor version are known (or ctypesgen is not installed), the reference bindings will be used as a last resort. This is ABI-unsafe and thus discouraged.
If `find_library()` failed to find pdfium, we *may* do additional, custom search, such as checking for a pdfium shared library included with LibreOffice, and – if available – determining its version.<br>
Our search heuristics currently expect a Linux-like filesystem hierarchy (e.g. `/usr`), but contributions for other systems are welcome.
> [!IMPORTANT]
> When pypdfium2 is installed with system pdfium, the bindings ought to be re-generated with the new headers whenever the out-of-tree pdfium DLL is updated, for ABI safety reasons.[^upstream_abi_policy]<br>
> For distributors, we highly recommend the use of versioned libraries (e.g. `libpdfium.so.140.0.7269.0`) or similar concepts that enforce binary/bindings version match, so outdated bindings will safely stop working with a meaningful error, rather than silently continue unsafely, at risk of hard crashes.
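For illustration, extracting the build version from such a versioned filename is straightforward (a hypothetical helper, not part of pypdfium2):

```python
import re


def parse_pdfium_version(filename):
    # Pull the dotted version suffix out of a name like
    # "libpdfium.so.140.0.7269.0" and return it as a tuple of ints.
    match = re.search(r"\.so\.([\d.]+)$", filename)
    if not match:
        return None
    return tuple(int(part) for part in match.group(1).split("."))


print(parse_pdfium_version("libpdfium.so.140.0.7269.0"))  # (140, 0, 7269, 0)
print(parse_pdfium_version("libpdfium.so"))               # None
```

A versioned filename thus carries enough information to detect a binary/bindings mismatch without loading the library at all.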
> [!TIP]
> If you mind pypdfium2's setup making a web request to resolve the full version, you may pass it in manually via `GIVEN_FULLVER=$major.$minor.$build.$patch` (colon-separated if there are multiple versions), or less ideally, set `IGNORE_FULLVER=1` to use `NaN` placeholders.
> This applies to other setup targets as well.<br>
> For distributors, we recommend that you use the full version in binary filename or pkgconfig info, so pypdfium2's setup will not need to resolve it in the first place.
[^upstream_abi_policy]: Luckily, upstream tend to be careful not to change the ABI of existing stable APIs, but they don't mind ABI-breaking changes to APIs that have not been promoted to stable tier yet, and pypdfium2 uses many of them, so it is still prudent to care about downstream ABI safety as well (it always is). You can read more about upstream's policy [here](https://pdfium.googlesource.com/pdfium/+/refs/heads/main/CONTRIBUTING.md#stability).
##### Related targets
There is also a `system-generate:$VERSION` target, to produce system pdfium bindings in a host-independent fashion. This will call `find_library()` at runtime, and may be useful for packaging.
Further, you can set just `system` to consume pre-generated files from the `data/system` staging directory. See the section on [caller-provided data files](#with-caller-provided-data-files) for more info.
#### With self-built pdfium
You can also install pypdfium2 with a self-compiled pdfium shared library, by placing it in `data/sourcebuild/` along with a bindings interface and version info, and setting the `PDFIUM_PLATFORM="sourcebuild"` directive to use these files on setup.
This project comes with two scripts to automate the build process: `build_toolchained.py` and `build_native.py` (in `setupsrc/`).
- `build_toolchained` is based on the build instructions in pdfium's Readme, and uses Google's toolchain (this means foreign binaries and sysroots). This results in a heavy checkout process that may take a lot of time and space. Dependency libraries are vendored. An advantage of the toolchain is its powerful cross-compilation support (including symbol reversioning).
- `build_native` is an attempt to address some shortcomings of the toolchained build. It performs a lean, self-managed checkout, and is tailored towards native compilation. It uses system dependencies (compiler/gn/ninja), which must be installed by the caller beforehand. This script should theoretically work on arbitrary Linux architectures. As a drawback, this process is not supported or even documented upstream, so it might be hard to maintain.
> [!TIP]
> The native sourcebuild can either use system libraries, or pdfium's vendored libraries.
> When invoked directly, by default, system libraries need to be installed. However, when invoked through fallback setup (`PDFIUM_PLATFORM=fallback`), vendored libraries will be used.<br>
> The `--vendor ...` and `--no-vendor ...` options can be used to control vendoring on a per-library basis. See `build_native.py --help` for details.
You can also set `PDFIUM_PLATFORM` to `sourcebuild-native` or `sourcebuild-toolchained` to trigger either build script through setup, and pass command-line flags with `$BUILD_PARAMS`.
However, for simplicity, both scripts/subtargets share just `sourcebuild` as staging directory.
Dependencies:
- When building with system libraries, the following packages need to be installed (including development headers): `freetype, icu-uc, lcms2, libjpeg, libopenjp2, libpng, libtiff, zlib` (and maybe `glib` to satisfy the build system).
- You might also want to know that pdfium bundles `agg, abseil, fast_float`.
- When building with system tools, `gn (generate-ninja)`, `ninja`, and a compiler are needed. If available, the compiler defaults to GCC, but Clang should also work if you set up some symlinks, and make sure you have the `libclang_rt` builtins or pass `--no-libclang-rt`.
To do the toolchained build, you'd run something like:
```bash
# call build script with --help to list options
python setupsrc/build_toolchained.py
PDFIUM_PLATFORM="sourcebuild" python -m pip install -v .
```
Or for the native build, on Ubuntu 24.04, you could do e.g.:
```bash
# Install dependencies
sudo apt-get install generate-ninja ninja-build libfreetype-dev liblcms2-dev libjpeg-dev libopenjp2-7-dev libpng-dev libtiff-dev zlib1g-dev libicu-dev libglib2.0-dev
```
```bash
# Build with GCC
python ./setupsrc/build_native.py --compiler gcc
```
```bash
# Alternatively, build with Clang
sudo apt-get install llvm lld
VERSION=18
ARCH=$(uname -m)
sudo ln -s "/usr/lib/clang/$VERSION/lib/linux" "/usr/lib/clang/$VERSION/lib/$ARCH-unknown-linux-gnu"
sudo ln -s "/usr/lib/clang/$VERSION/lib/linux/libclang_rt.builtins-$ARCH.a" "/usr/lib/clang/$VERSION/lib/linux/libclang_rt.builtins.a"
python ./setupsrc/build_native.py --compiler clang
```
```bash
# Install
PDFIUM_PLATFORM="sourcebuild" python -m pip install -v .
```
Note, on *some* platforms, you might also need symlinks for GCC, e.g.:
```bash
PREFIX=$(python ./utils/get_gcc_prefix.py) # in pypdfium2 dir
GCC_DIR="/usr" # or e.g. /opt/rh/gcc-toolset-14/root
sudo ln -s $GCC_DIR/bin/gcc $GCC_DIR/bin/$PREFIX-gcc
sudo ln -s $GCC_DIR/bin/g++ $GCC_DIR/bin/$PREFIX-g++
sudo ln -s $GCC_DIR/bin/nm $GCC_DIR/bin/$PREFIX-nm
sudo ln -s $GCC_DIR/bin/readelf $GCC_DIR/bin/$PREFIX-readelf
sudo ln -s $GCC_DIR/bin/ar $GCC_DIR/bin/$PREFIX-ar
```
> [!NOTE]
> The native sourcebuild currently supports Linux (or similar).
> macOS and Windows are not handled, as we do not have access to these systems, and working over CI did not turn out feasible – use the toolchain-based build for now.
> Community help / pull requests to extend platform support would be welcome.
##### Android (Termux)
The native build may also work on Android with Termux in principle.
<details>
<summary>Click to expand for instructions</summary>
First, make sure git can work in your checkout of pypdfium2:
```bash
# set $PROJECTS_FOLDER accordingly
git config --global --add safe.directory '$PROJECTS_FOLDER/*'
```
To install the dependencies, you'll need something like
```bash
pkg install gn ninja freetype littlecms libjpeg-turbo openjpeg libpng zlib libicu libtiff glib
```
Then apply the clang symlinks as described above, but use `ARCH=$(uname -m)-android`
and substitute `/usr` with `$PREFIX` (`/data/data/com.termux/files/usr`).
Last time we tested `build_native` on Android, there were some bugs with freetype/openjpeg includes. A *quick & dirty* workaround with symlinks is:
```bash
# freetype
ln -s "$PREFIX/include/freetype2/ft2build.h" "$PREFIX/include/ft2build.h"
ln -s "$PREFIX/include/freetype2/freetype" "$PREFIX/include/freetype"
# openjpeg
OPJ_VER="2.5" # adapt this to your setup
ln -s "$PREFIX/include/openjpeg-$OPJ_VER/openjpeg.h" "$PREFIX/include/openjpeg.h"
ln -s "$PREFIX/include/openjpeg-$OPJ_VER/opj_config.h" "$PREFIX/include/opj_config.h"
```
Now, you should be ready to run the build.
On Android, PDFium's build system outputs `libpdfium.cr.so` by default, thus you'll want to rename the binary so pypdfium2's library search can find it:
```bash
mv data/sourcebuild/libpdfium.cr.so data/sourcebuild/libpdfium.so
```
Then install with `PDFIUM_PLATFORM=sourcebuild`.
In case dependency libraries were built separately, you may also need to set the OS library search path, e.g.:
```bash
PY_VERSION="3.12" # adapt this to your setup
LD_LIBRARY_PATH="$PREFIX/lib/python$PY_VERSION/site-packages/pypdfium2_raw"
```
By default, our build script currently bundles everything into a single DLL, though.
</details>
##### cibuildwheel
Sourcebuild can be run through cibuildwheel. For targets configured in our [`pyproject.toml`](./pyproject.toml), the basic invocation is as simple as, e.g.:
```bash
CIBW_BUILD="cp311-manylinux_x86_64" cibuildwheel
```
A more involved use case could look like this:
```bash
CIBW_BUILD="cp310-musllinux_s390x" CIBW_ARCHS=s390x CIBW_CONTAINER_ENGINE=podman TEST_PDFIUM=1 cibuildwheel
```
See also our [cibuildwheel](.github/workflows/cibw.yaml) [workflow](.github/workflows/cibw_one.yaml).
For more options, see the [upstream documentation](https://cibuildwheel.pypa.io/en/stable/options).
On Linux, this will use the native sourcebuild with vendored dependency libraries.
On Windows and macOS, the toolchained sourcebuild is used.
Note, for Linux, cibuildwheel requires Docker. On the author's version of Fedora, it can be installed as follows:
```bash
sudo dnf in moby-engine # this provides the docker command
sudo systemctl start docker
sudo systemctl enable docker
sudo usermod -aG docker $USER
# then reboot (re-login might also suffice)
```
For other ways of installing Docker, refer to the cibuildwheel docs ([Setup](https://cibuildwheel.pypa.io/en/stable/setup/), [Platforms](https://cibuildwheel.pypa.io/en/stable/platforms/)) and the links therein.
> [!WARNING]
> cibuildwheel copies the project directory into a container, not taking `.gitignore` rules into account.
> Thus, it is advisable to make a fresh checkout of pypdfium2 before running cibuildwheel.
> In particular, a toolchained checkout of pdfium within pypdfium2 is problematic, and will cause a halt on the `Copying project into container...` step.
> For development, make sure the fresh checkout is in sync with the working copy.
> [!TIP]
> pdfium itself has first-class cross-compilation support.
> In particular, for Linux architectures supported by upstream's toolchain but not available natively on CI, we recommend to forego cibuildwheel, and instead cross-build pdfium using its own toolchain, e.g.:
> ```bash
> # assuming cross-compilation dependencies are installed
> python setupsrc/build_toolchained.py --target-cpu arm
> PDFIUM_PLATFORM=sourcebuild CROSS_TAG="manylinux_2_17_armv7l" python -m build -wxn
> ```
> This typically achieves a lower glibc requirement than we can with cibuildwheel.
#### With caller-provided data files
pypdfium2 is like any other Python project in essentials, except that it needs some data files: a pdfium DLL (either bundled or out-of-tree), a bindings interface (generated via ctypesgen), and pdfium version info (JSON).
The main point of pypdfium2's custom setup is to automate deployment of these files, in a way that suits end users / contributors, and our PyPI packaging.
However, if you want to (or have to) forego this automation, you can also *just supply these files yourself*, as shown below. This lets you largely sidestep pypdfium2's own setup code.<br>
The idea is basically to put your data files in a staging directory, `data/sourcebuild` or `data/system` (depending on whether you want to bundle or use system pdfium), and set the matching `$PDFIUM_PLATFORM` target to consume from that directory on setup.
This setup strategy should be inherently free of web requests.
Mind though, we don't support the result. If you bring your own files, it's your own responsibility, and your pypdfium2 may turn out subtly different from ours.
```bash
# First, ask yourself: Do you want to bundle pdfium (in-tree), or use system
# pdfium (out-of-tree)? For bundling, set "sourcebuild", else set "system".
TARGET="sourcebuild" # or "system"
STAGING_DIR="data/$TARGET"
# If you have decided for bundling, copy over the pdfium DLL in question.
# Otherwise, skip this step.
cp "$MY_BINARY_PATH" "$STAGING_DIR/libpdfium.so"
# Now, we will call ctypesgen to generate the bindings interface.
# Reminder: You'll want to use the pypdfium2-team fork of ctypesgen.
# It generates much cleaner bindings, and it's what our source expects
# (there may be subtle API differences in terms of output).
# How exactly you do this is down to you.
# See ctypesgen --help or base.py::run_ctypesgen() for further options.
ctypesgen --library pdfium --rt-libpaths $MY_RT_LIBPATHS --ct-libpaths $MY_CT_LIBPATHS \
--headers $MY_INCLUDE_DIR/fpdf*.h -o $STAGING_DIR/bindings.py [-D $MY_RAW_FLAGS]
# Then write the version file (fill the placeholders).
# Note, this is not a mature interface yet and might change any time!
# See also https://pypdfium2.readthedocs.io/en/stable/python_api.html#pypdfium2.version.PDFIUM_INFO
# major/minor/build/patch: integers forming the pdfium version being packaged
# n_commits/hash: git describe like post-tag info (0/null for release commit)
# origin: a string to identify the build
# flags: a comma-delimited list of pdfium feature flag strings
# (e.g. "V8", "XFA") - may be empty for default build
cat > "$STAGING_DIR/version.json" <<END
{
  "major": $PDFIUM_MAJOR,
  "minor": $PDFIUM_MINOR,
  "build": $PDFIUM_BUILD,
  "patch": $PDFIUM_PATCH,
  "n_commits": $POST_TAG_COMMIT_COUNT,
  "hash": $POST_TAG_HASH,
  "origin": "$TARGET-$MY_ORIGIN",
  "flags": [$MY_SHORT_FLAGS]
}
END
# Finally, run setup (through pip, pyproject-build or whatever).
# The PDFIUM_PLATFORM value will instruct pypdfium2's setup to use the files
# we supplied, rather than to generate its own.
PDFIUM_PLATFORM=$TARGET python -m pip install --no-build-isolation -v .
```
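As an alternative to the shell heredoc, the version file could also be written from Python. The values below are illustrative placeholders to substitute, and (as noted above) this schema is not a stable interface:

```python
import json
from pathlib import Path

# Placeholder values - substitute the actual pdfium version being packaged
version_info = {
    "major": 138, "minor": 0, "build": 7269, "patch": 0,  # hypothetical version
    "n_commits": 0, "hash": None,    # 0/None for a release commit
    "origin": "sourcebuild-custom",  # a string identifying the build
    "flags": [],                     # e.g. ["V8", "XFA"]; empty for a default build
}

staging_dir = Path("data/sourcebuild")
staging_dir.mkdir(parents=True, exist_ok=True)
(staging_dir / "version.json").write_text(json.dumps(version_info, indent=2))
```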
#### Further setup info (formal summary)
This is a *somewhat* formal description of pypdfium2's setup capabilities.
It is meant to sum up and complement the above documentation on specific sub-targets.
Disclaimer: As it is hard to keep up with constantly evolving setup code, this documentation may be outdated or incomplete. Also keep in mind that these APIs could change at any time, and may be mainly of internal interest.
* Binaries are stored in platform-specific sub-directories of `data/`, along with bindings and version information.
* `$PDFIUM_PLATFORM` defines which binary to include on setup.
- Format spec: `[$PLATFORM][-v8][:$VERSION]` (`[]` = segments, `$CAPS` = variables).
- Examples: `auto`, `auto:7269`, `auto-v8:7269` (`auto` may be substituted by an explicit platform name, e.g. `linux_x64`).
- V8: If given, use the V8 (JavaScript) and XFA enabled pdfium binaries. Otherwise, use the regular (non-V8) binaries.
- Version: If given, use the specified pdfium-binaries release. Otherwise, use the default version currently set in the codebase. Set `pinned` to request that behavior explicitly. Or set `latest` to use the newest pdfium-binaries release instead.
- Platform:
+ If unset or `auto`, the host platform is detected and a corresponding binary will be selected.
+ If an explicit platform identifier (e.g. `linux_x64`, `darwin_arm64`, ...), binaries for the requested platform will be used.[^platform_ids]
+ If `system-search`, look for and bind against system-provided pdfium instead of embedding a binary. If just `system`, consume existing bindings from `data/system/`.
+ If `sourcebuild`, binary and bindings will be taken from `data/sourcebuild/`, assuming a prior run of the native or toolchained build scripts. `sourcebuild-native` or `sourcebuild-toolchained` can also be used to trigger either build through setup (use `$BUILD_PARAMS` to pass custom options).
+ If `sdist`, no platform-specific files will be included, so as to create a source distribution.
* `$PYPDFIUM_MODULES=[raw,helpers]` defines the modules to include. Metadata adapts dynamically.
- May be used by packagers to decouple raw bindings and helpers, which may be relevant if packaging against system pdfium.
- This would also allow installing only the raw module without helpers, or only the helpers with a custom raw module.
* `$PDFIUM_BINDINGS=reference` overrides ctypesgen and uses the reference bindings file `autorelease/bindings.py` instead.
- This is a convenience option to get pypdfium2 installed from source even if a working ctypesgen / C pre-processor is not available in the install env. *May be automatically enabled under given circumstances.*
- Warning: This may not be ABI-safe. Please make sure the binary and the bindings were built from matching headers to avoid ABI issues.
[^platform_ids]: Intended for packaging, so that wheels can be crafted for any platform without access to a native host.
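For illustration, the `[$PLATFORM][-v8][:$VERSION]` spec decomposes roughly as follows. This is a hypothetical parser sketch, not the actual setup code:

```python
def parse_pdfium_platform(spec):
    """Split a $PDFIUM_PLATFORM value into (platform, v8, version)."""
    platform, _, version = spec.partition(":")
    v8 = platform.endswith("-v8")
    if v8:
        platform = platform[:-len("-v8")]
    # An empty platform segment behaves like "auto"; no version means the default
    return platform or "auto", v8, version or None

print(parse_pdfium_platform("auto"))              # ('auto', False, None)
print(parse_pdfium_platform("auto-v8:7269"))      # ('auto', True, '7269')
print(parse_pdfium_platform("linux_x64:latest"))  # ('linux_x64', False, 'latest')
```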
### From Conda
> [!WARNING]
> **Beware:** Any conda packages/recipes of pypdfium2 or pdfium-binaries that might be provided by other distributors, including `anaconda/main` or `conda-forge` default channels, are [unofficial](#unofficial-packages).
> [!NOTE]
> **Wait a moment:** Do you really need this?
> pypdfium2 is best installed from `PyPI` (e.g. via `pip`),[^pypi_reasons] which you can also do in a conda env. Rather than asking your users to add custom channels, consider making pypdfium2 optional at install time, and ask them to install it via pip instead.<br>
> This library has no hard runtime dependencies, so you don't need to worry about breaking the conda env.
[^pypi_reasons]: To name some reasons:
+ pypdfium2 from PyPI covers platforms that we cannot cover on conda.
+ pypdfium2 from PyPI has extensive fallback setup, while conda does not provide an opportunity to run custom setup code.
+ With conda, in-project publishing / custom channels are second class.
+ With conda, it seems there is no way to create platform-specific but interpreter-independent python packages, so we cannot reasonably bundle pdfium. Thus, we have to use external pdfium, which is more complex and has some pitfalls.
+ To install
With permanent channel config (encouraged):
```bash
conda config --add channels bblanchon
conda config --add channels pypdfium2-team
conda config --set channel_priority strict
conda install pypdfium2-team::pypdfium2_helpers
```
Alternatively, with temporary channel config:
```bash
conda install pypdfium2-team::pypdfium2_helpers --override-channels -c pypdfium2-team -c bblanchon -c defaults
```
If desired, you may limit the channel config to the current environment by adding `--env`.
Adding the channels permanently and tightening priority is encouraged so that pypdfium2 is included in `conda update` by default, and to avoid accidentally replacing the install with a build from a different channel.
Otherwise, you should be cautious when making changes to the environment.
+ To depend on pypdfium2 in a `conda-build` recipe
```yaml
requirements:
  run:
    - pypdfium2-team::pypdfium2_helpers
```
You'll want to have downstream callers handle the custom channels as shown above, otherwise conda will not be able to satisfy requirements.
+ To set up channels in a GH workflow
```yaml
- name: ...
  uses: conda-incubator/setup-miniconda@v3
  with:
    # ... your options
    channels: pypdfium2-team,bblanchon
    channel-priority: strict
This is just a suggestion; you can also call `conda config` manually, or pass channels per command using `-c`, as discussed above.
+ To verify the sources
```bash
conda list --show-channel-urls "pypdfium2|pdfium-binaries"
conda config --show-sources
```
The table should show `pypdfium2-team` and `bblanchon` in the channels column.
If added permanently, the config should also include these channels, ideally with top priority.
Please check this before reporting any issue with a conda install of pypdfium2.
_**Note:** Conda packages are normally managed using recipe feedstocks driven by third parties, in a Linux repository like fashion. However, with some quirks it is also possible to do conda packaging within the original project and publish to a custom channel, which is what pypdfium2-team does, and the above instructions are referring to._
### Unofficial packages
The authors of this project have no control over and are not responsible for possible third-party builds of pypdfium2, and we do not support them. Please use our official packages where possible.
If you have an issue with a third-party build, either contact your distributor, or try to reproduce with our official builds.
Do not expect us to add/change code for downstream-specific setup tasks.
Related issues or PRs may be closed without further notice if we don't consider them a fit for upstream.
Enhancements of general value that are maintainable and align well with the idea of our setup code are welcome, though.
> [!IMPORTANT]
> If you are a third-party distributor, please point out in the description that your package is unofficial, i.e. not affiliated with or endorsed by the pypdfium2 authors.<br>
> In particular, if you feel like you need patches to package pypdfium2, please submit them on the Discussions page so we can figure out if there isn't a better way (there usually is).
## Usage
### [Support model](https://pypdfium2.readthedocs.io/en/stable/python_api.html)
<!-- TODO demonstrate more APIs (e. g. XObject placement, transform matrices, image extraction, ...) -->
Here are some examples of using the support model API.
* Import the library
```python
import pypdfium2 as pdfium
import pypdfium2.raw as pdfium_c
```
* Open a PDF using the helper class `PdfDocument` (supports file paths as string or `pathlib.Path`, or file content as bytes or byte stream)
```python
pdf = pdfium.PdfDocument("./path/to/document.pdf")
version = pdf.get_version() # get the PDF standard version
n_pages = len(pdf) # get the number of pages in the document
page = pdf[0] # load a page
```
* Render the page
```python
bitmap = page.render(
    scale = 1,     # 72dpi resolution
    rotation = 0,  # no additional rotation
    # ... further rendering options
)
pil_image = bitmap.to_pil()
pil_image.show()
```
Note, with the PIL adapter, it might be advantageous to use `force_bitmap_format=pdfium_c.FPDFBitmap_BGRA, rev_byteorder=True` or perhaps `prefer_bgrx=True, maybe_alpha=True, rev_byteorder=True`, to achieve a pixel format supported natively by PIL, and avoid rendering with transparency to a non-alpha bitmap, which can slow down pdfium.
With `.to_numpy()`, all formats are zero-copy, but passing either `maybe_alpha=True` (if dynamic pixel format is acceptable) or `force_bitmap_format=pdfium_c.FPDFBitmap_BGRA` is also recommended for the transparency problem.
* Try some page methods
```python
# Get page dimensions in PDF canvas units (1pt->1/72in by default)
width, height = page.get_size()
# Set the absolute page rotation to 90° clockwise
page.set_rotation(90)
# Locate objects on the page
for obj in page.get_objects():
    print(obj.level, obj.type, obj.get_bounds())
```
* Extract and search text
```python
# Load a text page helper
textpage = page.get_textpage()
# Extract text from the whole page
text_all = textpage.get_text_bounded()
# Extract text from a specific rectangular area
text_rect = textpage.get_text_bounded(left=50, bottom=100, right=width-50, top=height-100)
# Extract text from a specific char range
text_span = textpage.get_text_range(index=10, count=15)
# Locate text on the page
searcher = textpage.search("something", match_case=False, match_whole_word=False)
# This returns the next occurrence as (char_index, char_count), or None if not found
match = searcher.get_next()
```
* Read the table of contents
```python
import pypdfium2.internal as pdfium_i
for bm in pdf.get_toc(max_depth=15):
    count, dest = bm.get_count(), bm.get_dest()
    out = " " * bm.level
    out += "[%s] %s -> " % (
        f"{count:+}" if count != 0 else "*",
        bm.get_title(),
    )
    if dest:
        index, (view_mode, view_pos) = dest.get_index(), dest.get_view()
        out += "%s # %s %s" % (
            index+1 if index is not None else "?",
            pdfium_i.ViewmodeToStr.get(view_mode),
            round(view_pos, 3),
        )
    else:
        out += "_"
    print(out)
```
* Create a new PDF with an empty A4 sized page
```python
pdf = pdfium.PdfDocument.new()
width, height = (595, 842)
page_a = pdf.new_page(width, height)
```
* Include a JPEG image in a PDF
```python
pdf = pdfium.PdfDocument.new()
image = pdfium.PdfImage.new(pdf)
image.load_jpeg("./tests/resources/mona_lisa.jpg")
width, height = image.get_px_size()
matrix = pdfium.PdfMatrix().scale(width, height)
image.set_matrix(matrix)
page = pdf.new_page(width, height)
page.insert_obj(image)
page.gen_content()
```
* Save the document
```python
# PDF 1.7 standard
pdf.save("output.pdf", version=17)
```
### Raw PDFium API
While helper classes conveniently wrap the raw PDFium API, it may still be accessed directly and is available in the namespace `pypdfium2.raw`. Lower-level utilities that may aid with using the raw API are provided in `pypdfium2.internal`.
```python
import pypdfium2.raw as pdfium_c
import pypdfium2.internal as pdfium_i
```
Since PDFium is a large library, many components are not covered by helpers yet. However, as helpers expose their underlying raw objects, you may seamlessly integrate raw APIs while using helpers as available. When passed as ctypes function parameter, helpers automatically resolve to the raw object handle (but you may still access it explicitly if desired):
```python
permission_flags = pdfium_c.FPDF_GetDocPermission(pdf.raw) # explicit
permission_flags = pdfium_c.FPDF_GetDocPermission(pdf) # implicit
```
For PDFium docs, please look at the comments in its [public header files](https://pdfium.googlesource.com/pdfium/+/refs/heads/main/public/).[^pdfium_docs]
A variety of examples on how to interface with the raw API using [`ctypes`](https://docs.python.org/3/library/ctypes.html) is already provided with [support model source code](src/pypdfium2/_helpers).
Nonetheless, the following guide may be helpful to get started with the raw API, if you are not familiar with `ctypes` yet.
[^pdfium_docs]: Unfortunately, no recent HTML-rendered docs are available for PDFium at the moment.
<!-- TODO write something about weakref.finalize(); add example on creating a C page array -->
<!-- TODO doctests? -->
* In general, PDFium functions can be called just like normal Python functions.
However, parameters may only be passed positionally, i.e. it is not possible to use keyword arguments.
There are no defaults, so you always need to provide a value for each argument.
```python
# arguments: filepath (bytes), password (bytes|None)
# NUL-terminate filepath and encode as UTF-8
pdf = pdfium_c.FPDF_LoadDocument((filepath+"\x00").encode("utf-8"), None)
```
This is the underlying bindings declaration,[^bindings_decl] which loads the function from the binary and
contains the information required to convert Python types to their C equivalents.
```python
if hasattr(_libs['pdfium'], 'FPDF_LoadDocument'):
    FPDF_LoadDocument = _libs['pdfium']['FPDF_LoadDocument']
    FPDF_LoadDocument.argtypes = (FPDF_STRING, FPDF_BYTESTRING)
    FPDF_LoadDocument.restype = FPDF_DOCUMENT
```
Python `bytes` are converted to `FPDF_STRING` by ctypes autoconversion. This works because `FPDF_STRING` is actually an alias to `POINTER(c_char)` (i.e. `char*`), which is a primitive pointer type.
When passing a string to a C function, it must always be NUL-terminated, as the function merely receives a pointer to the first item and then continues to read memory until it finds a NUL terminator.
[^bindings_decl]: From the auto-generated bindings file. We maintain a reference copy at `autorelease/bindings.py`. Or if you have an editable install, there will also be `src/pypdfium2_raw/bindings.py`.
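The NUL-termination behavior can be observed with plain ctypes alone, without pdfium:

```python
import ctypes

# Encode with an explicit NUL terminator, as done for FPDF_LoadDocument() above
data = ("file.pdf" + "\x00").encode("utf-8")

# A char* made from this memory reads until the first NUL and stops there
buffer = ctypes.create_string_buffer(data)
as_c_string = ctypes.cast(buffer, ctypes.c_char_p)
print(as_c_string.value)  # b'file.pdf' - the terminator itself is not included
```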
* First of all, function parameters are not only used for input, but also for output:
```python
# Initialise an integer object (defaults to 0)
c_version = ctypes.c_int()
# Let the function assign a value to the c_int object, and capture its return code (True for success, False for failure)
ok = pdfium_c.FPDF_GetFileVersion(pdf, c_version)
# If successful, get the Python int by accessing the `value` attribute of the c_int object
# Otherwise, set the variable to None (in other cases, it may be desired to raise an exception instead)
version = c_version.value if ok else None
```
* If an array is required as output parameter, you can initialise one like this (in general terms):
```python
# long form
array_type = (c_type * array_length)
array_object = array_type()
# short form
array_object = (c_type * array_length)()
```
Example: Getting view mode and target position from a destination object returned by some other function.
```python
# (Assuming `dest` is an FPDF_DEST)
n_params = ctypes.c_ulong()
# Create a C array to store up to four coordinates
view_pos = (pdfium_c.FS_FLOAT * 4)()
view_mode = pdfium_c.FPDFDest_GetView(dest, n_params, view_pos)
# Convert the C array to a Python list and cut it down to the actual number of coordinates
view_pos = list(view_pos)[:n_params.value]
```
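Since the example above requires a pdfium destination handle, here is the same array pattern in a form that runs standalone with plain ctypes (the callee is simulated by assigning to the array directly):

```python
import ctypes

# Allocate a 4-element float array, as a pdfium function would receive it
view_pos = (ctypes.c_float * 4)()
n_params = ctypes.c_ulong()

# Simulate a callee writing two coordinates and reporting the count
view_pos[0], view_pos[1] = 12.5, 800.0
n_params.value = 2

# Convert to a Python list, cut down to the reported number of values
coords = list(view_pos)[:n_params.value]
print(coords)  # [12.5, 800.0]
```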
* For string output parameters, the caller needs to provide a sufficiently long, pre-allocated buffer.
This may work differently depending on what type the function requires, which encoding is used, whether the number of bytes or characters is returned, and whether space for a NUL terminator is included or not. Carefully review the documentation of the function in question to fulfill its requirements.
Example A: Getting the title string of a bookmark.
```python
# (Assuming `bookmark` is an FPDF_BOOKMARK)
# First call to get the required number of bytes (not units!), including space for a NUL terminator
n_bytes = pdfium_c.FPDFBookmark_GetTitle(bookmark, None, 0)
# Initialise the output buffer
buffer = ctypes.create_string_buffer(n_bytes)
# Second call with the actual buffer
pdfium_c.FPDFBookmark_GetTitle(bookmark, buffer, n_bytes)
# Decode to string, cutting off the NUL terminator (encoding: UTF-16LE)
title = buffer.raw[:n_bytes-2].decode("utf-16-le")
```
Example B: Extracting text in given boundaries.
```python
# (Assuming `textpage` is an FPDF_TEXTPAGE and the boundary variables are set)
# Store common arguments for the two calls
args = (textpage, left, top, right, bottom)
# First call to get the required number of units (not bytes!) - a possible NUL terminator is not included
n_chars = pdfium_c.FPDFText_GetBoundedText(*args, None, 0)
# If no characters were found, return an empty string
if n_chars <= 0:
    return ""
# Calculate the required number of bytes (encoding: UTF-16LE again)
# The function signature uses c_ushort, so 1 unit takes sizeof(c_ushort) == 2 bytes
n_bytes = 2 * n_chars
# Initialise the output buffer - this function can work without NUL terminator, so skip it
buffer = ctypes.create_string_buffer(n_bytes)
# Re-interpret the type from char to unsigned short* as required by the function
buffer_ptr = ctypes.cast(buffer, ctypes.POINTER(ctypes.c_ushort))
# Second call with the actual buffer
pdfium_c.FPDFText_GetBoundedText(*args, buffer_ptr, n_chars)
# Decode to string (You may want to pass `errors="ignore"` to skip possible errors in the PDF's encoding)
text = buffer.raw.decode("utf-16-le")
```
* Just as string output must be handled according to the requirements of the function in question, string input, too, can work differently depending on encoding and type.
We have already discussed `FPDF_LoadDocument()`, which takes a UTF-8 encoded string as `char*`.
A different example is `FPDFText_FindStart()`, which needs a UTF-16LE encoded string, given as `unsigned short*`:
```python
# (Assuming `text` is a str and `textpage` an FPDF_TEXTPAGE)
# Add the NUL terminator and encode as UTF-16LE
enc_text = (text + "\x00").encode("utf-16-le")
# cast `enc_text` to a c_ushort pointer
text_ptr = ctypes.cast(enc_text, ctypes.POINTER(ctypes.c_ushort))
search = pdfium_c.FPDFText_FindStart(textpage, text_ptr, 0, 0)
```
* Leaving strings, let's suppose you have a C memory buffer allocated by PDFium and wish to read its data.
PDFium will provide you with a pointer to the first item of the byte array.
To access the data, you'll want to re-interpret the point | text/markdown | pypdfium2-team | geisserml@gmail.com | null | null | BSD-3-Clause, Apache-2.0, dependency licenses | pdf, pdfium | [
"Development Status :: 5 - Production/Stable",
"Intended Audience :: Developers",
"Intended Audience :: Information Technology",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3 :: Only",
"Programming Language :: Python :: Implementation :: CPython",
"Topic :: Multimedia :: Gr... | [] | https://github.com/pypdfium2-team/pypdfium2 | null | >=3.6 | [] | [] | [] | [] | [] | [] | [] | [
"Source, https://github.com/pypdfium2-team/pypdfium2",
"Tracker, https://github.com/pypdfium2-team/pypdfium2/issues",
"Documentation, https://pypdfium2.readthedocs.io",
"Changelog, https://pypdfium2.readthedocs.io/en/stable/changelog.html"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-18T23:22:37.643420 | pypdfium2-5.5.0.tar.gz | 270,502 | fb/f6/42f5f1b9beb7e036f5532832b9c590fd107c52a78f704302c03bc6793954/pypdfium2-5.5.0.tar.gz | source | sdist | null | false | 55cf5c0053ae3b3de7f92f85d0980d73 | 3283c61f54c3c546d140da201ef48a51c18b0ad54293091a010029ac13ece23a | fbf642f5f1b9beb7e036f5532832b9c590fd107c52a78f704302c03bc6793954 | null | [
"LICENSES/Apache-2.0.txt",
"LICENSES/BSD-3-Clause.txt",
"LICENSES/CC-BY-4.0.txt",
"REUSE.toml"
] | 1,132,136 |
2.4 | webhook-platform | 1.1.0 | Official Python SDK for Webhook Platform | # webhook-platform
Official Python SDK for [Webhook Platform](https://github.com/vadymkykalo/webhook-platform).
## Installation
```bash
pip install webhook-platform
```
## Quick Start
```python
from webhook_platform import WebhookPlatform, Event
client = WebhookPlatform(
    api_key="wh_live_your_api_key",
    base_url="http://localhost:8080",  # optional
)
# Send an event
event = client.events.send(
    Event(
        type="order.completed",
        data={
            "order_id": "ord_12345",
            "amount": 99.99,
            "currency": "USD",
        },
    )
)
print(f"Event created: {event.event_id}")
print(f"Deliveries created: {event.deliveries_created}")
```
## API Reference
### Events
```python
from webhook_platform import Event
# Send event with idempotency key
event = client.events.send(
    Event(type="order.completed", data={"order_id": "123"}),
    idempotency_key="unique-key",
)
```
### Endpoints
```python
from webhook_platform import EndpointCreateParams, EndpointUpdateParams
# Create endpoint
endpoint = client.endpoints.create(
    project_id,
    EndpointCreateParams(
        url="https://api.example.com/webhooks",
        description="Production webhooks",
        enabled=True,
    ),
)
# List endpoints
endpoints = client.endpoints.list(project_id)
# Update endpoint
client.endpoints.update(
    project_id,
    endpoint_id,
    EndpointUpdateParams(enabled=False),
)
# Delete endpoint
client.endpoints.delete(project_id, endpoint_id)
# Rotate secret
updated = client.endpoints.rotate_secret(project_id, endpoint_id)
print(f"New secret: {updated.secret}")
# Test endpoint connectivity
result = client.endpoints.test(project_id, endpoint_id)
print(f"Test {'passed' if result.success else 'failed'}: {result.latency_ms}ms")
```
### Subscriptions
```python
from webhook_platform import SubscriptionCreateParams
# Subscribe endpoint to an event type
subscription = client.subscriptions.create(
    project_id,
    SubscriptionCreateParams(
        endpoint_id=endpoint.id,
        event_type="order.completed",
        enabled=True,
    ),
)
# List subscriptions
subscriptions = client.subscriptions.list(project_id)
# Update subscription
client.subscriptions.update(
    project_id,
    subscription_id,
    event_type="order.shipped",
)
# Delete subscription
client.subscriptions.delete(project_id, subscription_id)
```
### Deliveries
```python
from webhook_platform import DeliveryListParams, DeliveryStatus
# List deliveries with filters
deliveries = client.deliveries.list(
    project_id,
    DeliveryListParams(status=DeliveryStatus.FAILED, page=0, size=20),
)
print(f"Total failed: {deliveries.total_elements}")
# Get delivery attempts
attempts = client.deliveries.get_attempts(delivery_id)
for attempt in attempts:
    print(f"Attempt {attempt.attempt_number}: {attempt.http_status} ({attempt.latency_ms}ms)")
# Replay failed delivery
client.deliveries.replay(delivery_id)
```
## Webhook Signature Verification
Verify incoming webhooks in your endpoint:
```python
from webhook_platform import verify_signature, construct_event, WebhookPlatformError
# Flask example
import os
from flask import Flask, request

app = Flask(__name__)
@app.route("/webhooks", methods=["POST"])
def handle_webhook():
    payload = request.get_data(as_text=True)
    headers = dict(request.headers)
    secret = os.environ["WEBHOOK_SECRET"]
    try:
        # Option 1: Just verify
        verify_signature(payload, headers.get("X-Signature", ""), secret)
        # Option 2: Verify and parse
        event = construct_event(payload, headers, secret)
        print(f"Received {event.type}: {event.data}")
        # Handle the event
        if event.type == "order.completed":
            handle_order_completed(event.data)
        return "OK", 200
    except WebhookPlatformError as e:
        print(f"Webhook verification failed: {e.message}")
        return "Invalid signature", 400
```
### FastAPI Example
```python
import os

from fastapi import FastAPI, Request, HTTPException
from webhook_platform import construct_event, WebhookPlatformError

app = FastAPI()
@app.post("/webhooks")
async def handle_webhook(request: Request):
    payload = await request.body()
    headers = dict(request.headers)
    try:
        event = construct_event(
            payload.decode("utf-8"),
            headers,
            os.environ["WEBHOOK_SECRET"],
        )
        # Process event...
        return {"status": "ok"}
    except WebhookPlatformError as e:
        raise HTTPException(status_code=400, detail=e.message)
```
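Conceptually, such signatures are typically an HMAC of the raw payload with the endpoint secret. The sketch below shows the general technique with HMAC-SHA256 and a constant-time comparison; the exact scheme, header name, and encoding that `verify_signature` uses are not specified here, so rely on the SDK helpers for real verification:

```python
import hashlib
import hmac

def compute_signature(payload: str, secret: str) -> str:
    """Hex-encoded HMAC-SHA256 of the payload (illustrative scheme only)."""
    return hmac.new(secret.encode(), payload.encode(), hashlib.sha256).hexdigest()

def is_valid(payload: str, received_sig: str, secret: str) -> bool:
    expected = compute_signature(payload, secret)
    # compare_digest avoids timing side channels when checking signatures
    return hmac.compare_digest(expected, received_sig)

payload = '{"type": "order.completed", "data": {"order_id": "123"}}'
sig = compute_signature(payload, "whsec_example")  # hypothetical secret
print(is_valid(payload, sig, "whsec_example"))  # True
print(is_valid(payload, sig, "wrong_secret"))   # False
```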
## Error Handling
```python
import time

from webhook_platform import (
    WebhookPlatformError,
    RateLimitError,
    AuthenticationError,
    ValidationError,
)
try:
    client.events.send(Event(type="test", data={}))
except RateLimitError as e:
    # Wait and retry
    print(f"Rate limited. Retry after {e.retry_after_ms}ms")
    time.sleep(e.retry_after_ms / 1000)
except AuthenticationError:
    print("Invalid API key")
except ValidationError as e:
    print(f"Validation failed: {e.field_errors}")
except WebhookPlatformError as e:
    print(f"Error {e.status}: {e.message}")
```
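The rate-limit branch above can be generalized into a small retry helper. This is a plain-Python sketch: the `RateLimitError` class here is a self-contained stand-in for the SDK exception, and `fake_send` stands in for a real `client.events.send(...)` call:

```python
import time

class RateLimitError(Exception):  # stand-in for the SDK exception
    def __init__(self, retry_after_ms):
        self.retry_after_ms = retry_after_ms

def send_with_retry(send, max_attempts=3):
    """Call send() and honor retry_after_ms when rate limited."""
    for attempt in range(1, max_attempts + 1):
        try:
            return send()
        except RateLimitError as e:
            if attempt == max_attempts:
                raise  # give up after the last attempt
            time.sleep(e.retry_after_ms / 1000)

# Hypothetical usage: fails once with a rate limit, then succeeds
calls = {"n": 0}
def fake_send():
    calls["n"] += 1
    if calls["n"] == 1:
        raise RateLimitError(retry_after_ms=10)
    return "ok"

print(send_with_retry(fake_send))  # ok
```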
## Configuration
```python
client = WebhookPlatform(
    api_key="wh_live_xxx",               # Required: Your API key
    base_url="https://api.example.com",  # Optional: API base URL
    timeout=30,                          # Optional: Request timeout in seconds (default: 30)
)
```
## Type Hints
This SDK includes full type hints for better IDE support:
```python
from webhook_platform import (
    Event,
    EventResponse,
    Endpoint,
    Delivery,
    DeliveryStatus,
)
## Development
### Running Tests
**Local (requires Python 3.8+):**
```bash
pip install -e ".[dev]"
pytest
```
**Docker:**
```bash
docker run --rm -v $(pwd):/app -w /app python:3.11-slim sh -c "pip install -e '.[dev]' && pytest"
```
## License
MIT
| text/markdown | null | Vadym Kykalo <vadymkykalo@gmail.com> | null | null | MIT | webhook, webhooks, api, events, delivery | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.8",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Pyt... | [] | null | null | >=3.8 | [] | [] | [] | [
"requests>=2.28.0",
"pytest>=7.0.0; extra == \"dev\"",
"pytest-cov>=4.0.0; extra == \"dev\"",
"mypy>=1.0.0; extra == \"dev\"",
"types-requests>=2.28.0; extra == \"dev\""
] | [] | [] | [] | [
"Homepage, https://github.com/vadymkykalo/webhook-platform",
"Repository, https://github.com/vadymkykalo/webhook-platform",
"Documentation, https://github.com/vadymkykalo/webhook-platform#readme"
] | twine/6.2.0 CPython/3.11.14 | 2026-02-18T23:22:07.250898 | webhook_platform-1.1.0.tar.gz | 14,323 | b3/9d/17057bf0907cb091fd971fb93d3e4d7a004745086ca0b28ad5649c1a3454/webhook_platform-1.1.0.tar.gz | source | sdist | null | false | e28e38e4e9ebbee7869a120e6887bac1 | 669fa394360c0d6b398aebbb4a15856c9bd84046bdab0181f0ecb1fc985f4399 | b39d17057bf0907cb091fd971fb93d3e4d7a004745086ca0b28ad5649c1a3454 | null | [] | 251 |
2.4 | slack-pat-mcp | 0.1.0 | Model Context Protocol (MCP) server for Slack using PAT (Personal Access Token) | # slack-pat-mcp
MCP server for personal Slack access using user tokens (xoxp-). 4 tools covering channels, DMs, chat, search, and user management.
## Install
```bash
uvx slack-pat-mcp
```
## Claude Code Config
```json
"slack": {
  "command": "uvx",
  "args": ["slack-pat-mcp"],
  "env": {
    "SLACK_USER_TOKEN": "xoxp-...",
    "SLACK_TEAM_ID": "T..."
  }
}
```
## Tools
- **slack_channel** — list, list_dms, history, thread, open_dm
- **slack_chat** — post, update, delete, react_add, react_remove
- **slack_search** — search messages with Slack syntax
- **slack_users** — list, info, profile, usergroups
## Required Slack OAuth Scopes (User Token)
channels:read, channels:history, groups:read, groups:history, im:read, im:history, im:write, mpim:read, mpim:history, mpim:write, chat:write, reactions:read, reactions:write, search:read, users:read, users:read.email, users.profile:read, usergroups:read
| text/markdown | null | Shivansh Singh <wiseeldrich2004@gmail.com> | null | null | null | ai, claude, mcp, model-context-protocol, slack | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"requests>=2.28.0"
] | [] | [] | [] | [
"Homepage, https://github.com/wise-toddler/slack-pat-mcp",
"Repository, https://github.com/wise-toddler/slack-pat-mcp",
"Issues, https://github.com/wise-toddler/slack-pat-mcp/issues"
] | twine/6.2.0 CPython/3.9.6 | 2026-02-18T23:21:42.962876 | slack_pat_mcp-0.1.0.tar.gz | 4,619 | cb/24/8970fe507e500b81c84e41c745385c2c7677762fbb4272675352f6dfb01b/slack_pat_mcp-0.1.0.tar.gz | source | sdist | null | false | a6b088f247e418758e595ddd9da3784d | 77a4bbe11f267c61aea60b9c60cd1591929b9bd2f65ea17e4fe6275bdfb84022 | cb248970fe507e500b81c84e41c745385c2c7677762fbb4272675352f6dfb01b | MIT | [] | 274 |
2.4 | warpt | 0.2.0 | Performance monitoring and system utilities | # warpt
A unified command-line tool for hardware discovery, stress testing, and performance monitoring.
warpt provides a vendor-agnostic interface for understanding and validating computational resources—answering questions like *"What hardware do I have?"*, *"Is it working correctly?"*, and *"How fast is it?"*
## Installation
```bash
pip install warpt
```
For stress testing capabilities:
```bash
pip install warpt[stress]
```
**Requirements:** Python 3.8+ (3.11+ recommended) | Linux, macOS, or Windows
## Quick Start
```bash
# Discover your hardware
warpt list
# Run CPU stress tests
warpt stress -c cpu
# Monitor system in real-time
warpt monitor
# Check power consumption (Linux/macOS)
warpt power
```
## Features
| Command | Description |
|---------|-------------|
| `warpt list` | Detect CPU, GPU, memory, storage, and installed ML frameworks |
| `warpt stress` | Run stress tests across CPU, GPU, RAM, storage, and network |
| `warpt monitor` | Real-time system monitoring with TUI dashboard |
| `warpt power` | Power consumption monitoring and per-process attribution |
| `warpt carbon` | Track energy consumption, CO2 emissions, and estimated cost |
| `warpt benchmark` | Performance benchmarking suite |
## Documentation
- [Getting Started](https://docs.earthframe.com/getting_started) — Installation and first steps
- [CLI Reference](https://docs.earthframe.com/cli_reference) — Complete command and option reference
- [Support Matrix](https://docs.earthframe.com/support_matrix) — System requirements and platform compatibility
## Platform Support
| Platform | Status |
|----------|--------|
| Linux | Full support |
| macOS | Full support (power monitoring requires sudo) |
| Windows | Limited support (see [Known Limitations](https://docs.earthframe.com/support_matrix#known-limitations)) |
**GPU Support:** NVIDIA GPUs supported. AMD, Intel, and Apple Silicon GPU support coming soon.
## Carbon Tracking
warpt automatically tracks energy usage and CO2 emissions during stress tests and power monitoring. You can also track any workload manually:
```bash
# Automatic — built into stress tests
warpt stress -c cpu -d 30
# [carbon] 30.2s | 23.8W avg | 199.7 mWh | 0.08g CO2 | $0.0000 | less than breathing for a minute
# Manual — track any workload
warpt carbon start
# ... run your workload ...
warpt carbon stop
# View history and totals
warpt carbon history
warpt carbon summary
# Check available grid regions and carbon intensities
warpt carbon regions
```
Carbon calculations use regional grid intensity data to estimate CO2 emissions from energy consumption. Configure your region with `--region` (defaults to US).
## Example Output
```
$ warpt list
CPU Information:
Make: Intel
Model: Xeon W-2295
Architecture: x86_64
Topology:
Total Sockets: 1
Total Phys Cores: 18
Total Logic Cores: 36
Memory Information:
Total: 128.0 GB
Type: DDR4
GPU Information:
GPU 0: NVIDIA RTX 4090
Memory: 24576 MB
Driver: 545.23.08
```
## Alpha Release
This is an **alpha release**. Some features are still in development:
- Carbon tracking — new in v0.2.0
- AMD GPU support (ROCm) — in progress
- Intel GPU support (oneAPI) — in progress
- Apple Neural Engine — in progress
- Additional benchmarks — expanding
See the [Support Matrix](https://docs.earthframe.com/support_matrix) for full details.
## Feedback
We'd love to hear from you:
- **Report bugs:** [GitHub Issues](https://github.com/EarthFrame/warpt/issues)
- **Feature requests:** [GitHub Issues](https://github.com/EarthFrame/warpt/issues)
## License
MIT License — see [LICENSE](LICENSE) for details.
| text/markdown | null | "Yousuf W. Rajput" <yousuf@earthframe.com>, "Eric T. Dawson" <eric@earthframe.com> | null | null | null | performance, monitoring, gpu, cpu, system, benchmark, stress-test | [
"Development Status :: 3 - Alpha",
"Environment :: Console",
"Intended Audience :: Developers",
"Intended Audience :: System Administrators",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.8",
"Programming Language :: Python :: 3.9",
... | [] | null | null | >=3.8 | [] | [] | [] | [
"click>=8.0.0",
"psutil>=5.9.0",
"pydantic>=2.0.0",
"nvidia-ml-py>=12.0.0",
"backports.strenum>=1.3.1; python_version < \"3.11\"",
"pytest>=7.0.0; extra == \"dev\"",
"black>=22.0.0; extra == \"dev\"",
"ruff>=0.1.0; extra == \"dev\"",
"pre-commit>=3.0.0; extra == \"dev\"",
"mypy>=1.0.0; extra == \"... | [] | [] | [] | [
"Homepage, https://earthframe.com",
"Documentation, https://docs.earthframe.com",
"Repository, https://github.com/EarthFrame/warpt"
] | twine/6.1.0 CPython/3.12.0 | 2026-02-18T23:20:43.631920 | warpt-0.2.0.tar.gz | 190,777 | bb/cd/133ca74296fc07fd57b2f3fbe9dca22b82ce2e3dd0b85b847b8158167309/warpt-0.2.0.tar.gz | source | sdist | null | false | 7c4f8a05ff689aba35e105c152b0f30d | 77a10a9d244f1e3b5a1a4d4214573b991d7e41f76d5b58972681a2ac62aadf3c | bbcd133ca74296fc07fd57b2f3fbe9dca22b82ce2e3dd0b85b847b8158167309 | MIT | [
"LICENSE"
] | 270 |
2.4 | Molara | 0.1.2 | A visualisation tool for chemical structures. | <div align="center">
  <img src="img/MolaraLogo.svg" alt="Molara Logo" height="128"/>

<p>Logo: Nicole Maser</p>
</div>
[](https://badge.fury.io/py/Molara)
[](https://github.com/Molara-Lab/Molara/actions/workflows/test.yml)
[](https://codecov.io/gh/Molara-Lab/Molara)
[](https://results.pre-commit.ci/latest/github/Molara-Lab/Molara/main)
[](https://zenodo.org/records/11120926)
# Molara
Molara is an open-source program for the 3-dimensional visualization of molecules and crystal structures. These are some of its main features:
1. Import of .xyz, .coord, and POSCAR files
2. Export of rendered structures as raster graphics
3. Tools for creating custom molecular and crystal structures
4. Animation of trajectories
5. Display of molecular orbitals from Molden files (currently ORCA, Molpro, and TeraChem)
6. Display of densities from cube files
New features will follow soon!
## Installation
### Simple User installation
The easiest way to install Molara is from [PyPI](https://pypi.org/project/Molara/) using pip.
```bash
pip install molara
```
After the installation, Molara can be started by calling `molara` from the command line.
### Developer installation
If you want to contribute to Molara, you should install the package directly from source. To this end, you need to clone the repository:
```bash
git clone https://github.com/Molara-Lab/Molara.git
cd Molara
```
>[!TIP]
>It is advisable to install Molara in a virtual Python environment.
>
>Create & activate virtual environment on Linux / Mac:
>
>```bash
>python -m venv ./venv
>source ./venv/bin/activate
>```
>
>Create & activate virtual environment on Windows:
>
>```bash
>python -m venv .\venv
>.\venv\Scripts\activate.bat
>```
Subsequently, Molara may be installed as follows.
```bash
pip install -e .
```
> [!IMPORTANT]
> The installation from source with `pip install -e .` involves a Cython build, for which it is required that a C compiler be installed on the system (a more detailed description can be found [in the Cython docs](https://cython.readthedocs.io/en/latest/src/quickstart/install.html)).
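Before running the editable install, it can save time to confirm that a C compiler is actually visible on `PATH`. A small, hypothetical helper (not part of Molara) that checks for common compilers:

```python
import shutil

# Look for common C compilers on PATH; the Cython build needs at least one.
# "cl" covers MSVC on Windows; "cc" is the conventional alias on Unix-likes.
compilers = [c for c in ("cc", "gcc", "clang", "cl") if shutil.which(c)]

if compilers:
    print("Found C compiler(s):", ", ".join(compilers))
else:
    print("No C compiler found; install gcc/clang (or MSVC on Windows) first.")
```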
## Known issues
Because Apple no longer supports OpenGL on macOS, displaying the atom indices takes a long time the first time. After that it is instantaneous, even after restarting the program or rebooting the machine. We plan to rework this routine with a different strategy.
## Building the documentation locally
To generate the documentation, install molara as follows:
```bash
pip install -e ".[doc]"
```
then run
```bash
cd docs
make html
```
| text/markdown | Gereon Feldmann | Michel Heinz <michel.heinz@rwth-aachen.de>, Adrian Usler <adrian.usler@rwth-aachen.de>, Alexander Bonkowski <alexander.bonkowski@rwth-aachen.de> | Michel Heinz, Gereon Feldmann, Adrian Usler, Alexander Bonkowski | null | GNU GENERAL PUBLIC LICENSE
Version 3, 29 June 2007
Copyright (C) 2007 Free Software Foundation, Inc. <https://fsf.org/>
Everyone is permitted to copy and distribute verbatim copies
of this license document, but changing it is not allowed.
Preamble
The GNU General Public License is a free, copyleft license for
software and other kinds of works.
The licenses for most software and other practical works are designed
to take away your freedom to share and change the works. By contrast,
the GNU General Public License is intended to guarantee your freedom to
share and change all versions of a program--to make sure it remains free
software for all its users. We, the Free Software Foundation, use the
GNU General Public License for most of our software; it applies also to
any other work released this way by its authors. You can apply it to
your programs, too.
When we speak of free software, we are referring to freedom, not
price. Our General Public Licenses are designed to make sure that you
have the freedom to distribute copies of free software (and charge for
them if you wish), that you receive source code or can get it if you
want it, that you can change the software or use pieces of it in new
free programs, and that you know you can do these things.
To protect your rights, we need to prevent others from denying you
these rights or asking you to surrender the rights. Therefore, you have
certain responsibilities if you distribute copies of the software, or if
you modify it: responsibilities to respect the freedom of others.
For example, if you distribute copies of such a program, whether
gratis or for a fee, you must pass on to the recipients the same
freedoms that you received. You must make sure that they, too, receive
or can get the source code. And you must show them these terms so they
know their rights.
Developers that use the GNU GPL protect your rights with two steps:
(1) assert copyright on the software, and (2) offer you this License
giving you legal permission to copy, distribute and/or modify it.
For the developers' and authors' protection, the GPL clearly explains
that there is no warranty for this free software. For both users' and
authors' sake, the GPL requires that modified versions be marked as
changed, so that their problems will not be attributed erroneously to
authors of previous versions.
Some devices are designed to deny users access to install or run
modified versions of the software inside them, although the manufacturer
can do so. This is fundamentally incompatible with the aim of
protecting users' freedom to change the software. The systematic
pattern of such abuse occurs in the area of products for individuals to
use, which is precisely where it is most unacceptable. Therefore, we
have designed this version of the GPL to prohibit the practice for those
products. If such problems arise substantially in other domains, we
stand ready to extend this provision to those domains in future versions
of the GPL, as needed to protect the freedom of users.
Finally, every program is threatened constantly by software patents.
States should not allow patents to restrict development and use of
software on general-purpose computers, but in those that do, we wish to
avoid the special danger that patents applied to a free program could
make it effectively proprietary. To prevent this, the GPL assures that
patents cannot be used to render the program non-free.
The precise terms and conditions for copying, distribution and
modification follow.
TERMS AND CONDITIONS
0. Definitions.
"This License" refers to version 3 of the GNU General Public License.
"Copyright" also means copyright-like laws that apply to other kinds of
works, such as semiconductor masks.
"The Program" refers to any copyrightable work licensed under this
License. Each licensee is addressed as "you". "Licensees" and
"recipients" may be individuals or organizations.
To "modify" a work means to copy from or adapt all or part of the work
in a fashion requiring copyright permission, other than the making of an
exact copy. The resulting work is called a "modified version" of the
earlier work or a work "based on" the earlier work.
A "covered work" means either the unmodified Program or a work based
on the Program.
To "propagate" a work means to do anything with it that, without
permission, would make you directly or secondarily liable for
infringement under applicable copyright law, except executing it on a
computer or modifying a private copy. Propagation includes copying,
distribution (with or without modification), making available to the
public, and in some countries other activities as well.
To "convey" a work means any kind of propagation that enables other
parties to make or receive copies. Mere interaction with a user through
a computer network, with no transfer of a copy, is not conveying.
An interactive user interface displays "Appropriate Legal Notices"
to the extent that it includes a convenient and prominently visible
feature that (1) displays an appropriate copyright notice, and (2)
tells the user that there is no warranty for the work (except to the
extent that warranties are provided), that licensees may convey the
work under this License, and how to view a copy of this License. If
the interface presents a list of user commands or options, such as a
menu, a prominent item in the list meets this criterion.
1. Source Code.
The "source code" for a work means the preferred form of the work
for making modifications to it. "Object code" means any non-source
form of a work.
A "Standard Interface" means an interface that either is an official
standard defined by a recognized standards body, or, in the case of
interfaces specified for a particular programming language, one that
is widely used among developers working in that language.
The "System Libraries" of an executable work include anything, other
than the work as a whole, that (a) is included in the normal form of
packaging a Major Component, but which is not part of that Major
Component, and (b) serves only to enable use of the work with that
Major Component, or to implement a Standard Interface for which an
implementation is available to the public in source code form. A
"Major Component", in this context, means a major essential component
(kernel, window system, and so on) of the specific operating system
(if any) on which the executable work runs, or a compiler used to
produce the work, or an object code interpreter used to run it.
The "Corresponding Source" for a work in object code form means all
the source code needed to generate, install, and (for an executable
work) run the object code and to modify the work, including scripts to
control those activities. However, it does not include the work's
System Libraries, or general-purpose tools or generally available free
programs which are used unmodified in performing those activities but
which are not part of the work. For example, Corresponding Source
includes interface definition files associated with source files for
the work, and the source code for shared libraries and dynamically
linked subprograms that the work is specifically designed to require,
such as by intimate data communication or control flow between those
subprograms and other parts of the work.
The Corresponding Source need not include anything that users
can regenerate automatically from other parts of the Corresponding
Source.
The Corresponding Source for a work in source code form is that
same work.
2. Basic Permissions.
All rights granted under this License are granted for the term of
copyright on the Program, and are irrevocable provided the stated
conditions are met. This License explicitly affirms your unlimited
permission to run the unmodified Program. The output from running a
covered work is covered by this License only if the output, given its
content, constitutes a covered work. This License acknowledges your
rights of fair use or other equivalent, as provided by copyright law.
You may make, run and propagate covered works that you do not
convey, without conditions so long as your license otherwise remains
in force. You may convey covered works to others for the sole purpose
of having them make modifications exclusively for you, or provide you
with facilities for running those works, provided that you comply with
the terms of this License in conveying all material for which you do
not control copyright. Those thus making or running the covered works
for you must do so exclusively on your behalf, under your direction
and control, on terms that prohibit them from making any copies of
your copyrighted material outside their relationship with you.
Conveying under any other circumstances is permitted solely under
the conditions stated below. Sublicensing is not allowed; section 10
makes it unnecessary.
3. Protecting Users' Legal Rights From Anti-Circumvention Law.
No covered work shall be deemed part of an effective technological
measure under any applicable law fulfilling obligations under article
11 of the WIPO copyright treaty adopted on 20 December 1996, or
similar laws prohibiting or restricting circumvention of such
measures.
When you convey a covered work, you waive any legal power to forbid
circumvention of technological measures to the extent such circumvention
is effected by exercising rights under this License with respect to
the covered work, and you disclaim any intention to limit operation or
modification of the work as a means of enforcing, against the work's
users, your or third parties' legal rights to forbid circumvention of
technological measures.
4. Conveying Verbatim Copies.
You may convey verbatim copies of the Program's source code as you
receive it, in any medium, provided that you conspicuously and
appropriately publish on each copy an appropriate copyright notice;
keep intact all notices stating that this License and any
non-permissive terms added in accord with section 7 apply to the code;
keep intact all notices of the absence of any warranty; and give all
recipients a copy of this License along with the Program.
You may charge any price or no price for each copy that you convey,
and you may offer support or warranty protection for a fee.
5. Conveying Modified Source Versions.
You may convey a work based on the Program, or the modifications to
produce it from the Program, in the form of source code under the
terms of section 4, provided that you also meet all of these conditions:
a) The work must carry prominent notices stating that you modified
it, and giving a relevant date.
b) The work must carry prominent notices stating that it is
released under this License and any conditions added under section
7. This requirement modifies the requirement in section 4 to
"keep intact all notices".
c) You must license the entire work, as a whole, under this
License to anyone who comes into possession of a copy. This
License will therefore apply, along with any applicable section 7
additional terms, to the whole of the work, and all its parts,
regardless of how they are packaged. This License gives no
permission to license the work in any other way, but it does not
invalidate such permission if you have separately received it.
d) If the work has interactive user interfaces, each must display
Appropriate Legal Notices; however, if the Program has interactive
interfaces that do not display Appropriate Legal Notices, your
work need not make them do so.
A compilation of a covered work with other separate and independent
works, which are not by their nature extensions of the covered work,
and which are not combined with it such as to form a larger program,
in or on a volume of a storage or distribution medium, is called an
"aggregate" if the compilation and its resulting copyright are not
used to limit the access or legal rights of the compilation's users
beyond what the individual works permit. Inclusion of a covered work
in an aggregate does not cause this License to apply to the other
parts of the aggregate.
6. Conveying Non-Source Forms.
You may convey a covered work in object code form under the terms
of sections 4 and 5, provided that you also convey the
machine-readable Corresponding Source under the terms of this License,
in one of these ways:
a) Convey the object code in, or embodied in, a physical product
(including a physical distribution medium), accompanied by the
Corresponding Source fixed on a durable physical medium
customarily used for software interchange.
b) Convey the object code in, or embodied in, a physical product
(including a physical distribution medium), accompanied by a
written offer, valid for at least three years and valid for as
long as you offer spare parts or customer support for that product
model, to give anyone who possesses the object code either (1) a
copy of the Corresponding Source for all the software in the
product that is covered by this License, on a durable physical
medium customarily used for software interchange, for a price no
more than your reasonable cost of physically performing this
conveying of source, or (2) access to copy the
Corresponding Source from a network server at no charge.
c) Convey individual copies of the object code with a copy of the
written offer to provide the Corresponding Source. This
alternative is allowed only occasionally and noncommercially, and
only if you received the object code with such an offer, in accord
with subsection 6b.
d) Convey the object code by offering access from a designated
place (gratis or for a charge), and offer equivalent access to the
Corresponding Source in the same way through the same place at no
further charge. You need not require recipients to copy the
Corresponding Source along with the object code. If the place to
copy the object code is a network server, the Corresponding Source
may be on a different server (operated by you or a third party)
that supports equivalent copying facilities, provided you maintain
clear directions next to the object code saying where to find the
Corresponding Source. Regardless of what server hosts the
Corresponding Source, you remain obligated to ensure that it is
available for as long as needed to satisfy these requirements.
e) Convey the object code using peer-to-peer transmission, provided
you inform other peers where the object code and Corresponding
Source of the work are being offered to the general public at no
charge under subsection 6d.
A separable portion of the object code, whose source code is excluded
from the Corresponding Source as a System Library, need not be
included in conveying the object code work.
A "User Product" is either (1) a "consumer product", which means any
tangible personal property which is normally used for personal, family,
or household purposes, or (2) anything designed or sold for incorporation
into a dwelling. In determining whether a product is a consumer product,
doubtful cases shall be resolved in favor of coverage. For a particular
product received by a particular user, "normally used" refers to a
typical or common use of that class of product, regardless of the status
of the particular user or of the way in which the particular user
actually uses, or expects or is expected to use, the product. A product
is a consumer product regardless of whether the product has substantial
commercial, industrial or non-consumer uses, unless such uses represent
the only significant mode of use of the product.
"Installation Information" for a User Product means any methods,
procedures, authorization keys, or other information required to install
and execute modified versions of a covered work in that User Product from
a modified version of its Corresponding Source. The information must
suffice to ensure that the continued functioning of the modified object
code is in no case prevented or interfered with solely because
modification has been made.
If you convey an object code work under this section in, or with, or
specifically for use in, a User Product, and the conveying occurs as
part of a transaction in which the right of possession and use of the
User Product is transferred to the recipient in perpetuity or for a
fixed term (regardless of how the transaction is characterized), the
Corresponding Source conveyed under this section must be accompanied
by the Installation Information. But this requirement does not apply
if neither you nor any third party retains the ability to install
modified object code on the User Product (for example, the work has
been installed in ROM).
The requirement to provide Installation Information does not include a
requirement to continue to provide support service, warranty, or updates
for a work that has been modified or installed by the recipient, or for
the User Product in which it has been modified or installed. Access to a
network may be denied when the modification itself materially and
adversely affects the operation of the network or violates the rules and
protocols for communication across the network.
Corresponding Source conveyed, and Installation Information provided,
in accord with this section must be in a format that is publicly
documented (and with an implementation available to the public in
source code form), and must require no special password or key for
unpacking, reading or copying.
7. Additional Terms.
"Additional permissions" are terms that supplement the terms of this
License by making exceptions from one or more of its conditions.
Additional permissions that are applicable to the entire Program shall
be treated as though they were included in this License, to the extent
that they are valid under applicable law. If additional permissions
apply only to part of the Program, that part may be used separately
under those permissions, but the entire Program remains governed by
this License without regard to the additional permissions.
When you convey a copy of a covered work, you may at your option
remove any additional permissions from that copy, or from any part of
it. (Additional permissions may be written to require their own
removal in certain cases when you modify the work.) You may place
additional permissions on material, added by you to a covered work,
for which you have or can give appropriate copyright permission.
Notwithstanding any other provision of this License, for material you
add to a covered work, you may (if authorized by the copyright holders of
that material) supplement the terms of this License with terms:
a) Disclaiming warranty or limiting liability differently from the
terms of sections 15 and 16 of this License; or
b) Requiring preservation of specified reasonable legal notices or
author attributions in that material or in the Appropriate Legal
Notices displayed by works containing it; or
c) Prohibiting misrepresentation of the origin of that material, or
requiring that modified versions of such material be marked in
reasonable ways as different from the original version; or
d) Limiting the use for publicity purposes of names of licensors or
authors of the material; or
e) Declining to grant rights under trademark law for use of some
trade names, trademarks, or service marks; or
f) Requiring indemnification of licensors and authors of that
material by anyone who conveys the material (or modified versions of
it) with contractual assumptions of liability to the recipient, for
any liability that these contractual assumptions directly impose on
those licensors and authors.
All other non-permissive additional terms are considered "further
restrictions" within the meaning of section 10. If the Program as you
received it, or any part of it, contains a notice stating that it is
governed by this License along with a term that is a further
restriction, you may remove that term. If a license document contains
a further restriction but permits relicensing or conveying under this
License, you may add to a covered work material governed by the terms
of that license document, provided that the further restriction does
not survive such relicensing or conveying.
If you add terms to a covered work in accord with this section, you
must place, in the relevant source files, a statement of the
additional terms that apply to those files, or a notice indicating
where to find the applicable terms.
Additional terms, permissive or non-permissive, may be stated in the
form of a separately written license, or stated as exceptions;
the above requirements apply either way.
8. Termination.
You may not propagate or modify a covered work except as expressly
provided under this License. Any attempt otherwise to propagate or
modify it is void, and will automatically terminate your rights under
this License (including any patent licenses granted under the third
paragraph of section 11).
However, if you cease all violation of this License, then your
license from a particular copyright holder is reinstated (a)
provisionally, unless and until the copyright holder explicitly and
finally terminates your license, and (b) permanently, if the copyright
holder fails to notify you of the violation by some reasonable means
prior to 60 days after the cessation.
Moreover, your license from a particular copyright holder is
reinstated permanently if the copyright holder notifies you of the
violation by some reasonable means, this is the first time you have
received notice of violation of this License (for any work) from that
copyright holder, and you cure the violation prior to 30 days after
your receipt of the notice.
Termination of your rights under this section does not terminate the
licenses of parties who have received copies or rights from you under
this License. If your rights have been terminated and not permanently
reinstated, you do not qualify to receive new licenses for the same
material under section 10.
9. Acceptance Not Required for Having Copies.
You are not required to accept this License in order to receive or
run a copy of the Program. Ancillary propagation of a covered work
occurring solely as a consequence of using peer-to-peer transmission
to receive a copy likewise does not require acceptance. However,
nothing other than this License grants you permission to propagate or
modify any covered work. These actions infringe copyright if you do
not accept this License. Therefore, by modifying or propagating a
covered work, you indicate your acceptance of this License to do so.
10. Automatic Licensing of Downstream Recipients.
Each time you convey a covered work, the recipient automatically
receives a license from the original licensors, to run, modify and
propagate that work, subject to this License. You are not responsible
for enforcing compliance by third parties with this License.
An "entity transaction" is a transaction transferring control of an
organization, or substantially all assets of one, or subdividing an
organization, or merging organizations. If propagation of a covered
work results from an entity transaction, each party to that
transaction who receives a copy of the work also receives whatever
licenses to the work the party's predecessor in interest had or could
give under the previous paragraph, plus a right to possession of the
Corresponding Source of the work from the predecessor in interest, if
the predecessor has it or can get it with reasonable efforts.
You may not impose any further restrictions on the exercise of the
rights granted or affirmed under this License. For example, you may
not impose a license fee, royalty, or other charge for exercise of
rights granted under this License, and you may not initiate litigation
(including a cross-claim or counterclaim in a lawsuit) alleging that
any patent claim is infringed by making, using, selling, offering for
sale, or importing the Program or any portion of it.
11. Patents.
A "contributor" is a copyright holder who authorizes use under this
License of the Program or a work on which the Program is based. The
work thus licensed is called the contributor's "contributor version".
A contributor's "essential patent claims" are all patent claims
owned or controlled by the contributor, whether already acquired or
hereafter acquired, that would be infringed by some manner, permitted
by this License, of making, using, or selling its contributor version,
but do not include claims that would be infringed only as a
consequence of further modification of the contributor version. For
purposes of this definition, "control" includes the right to grant
patent sublicenses in a manner consistent with the requirements of
this License.
Each contributor grants you a non-exclusive, worldwide, royalty-free
patent license under the contributor's essential patent claims, to
make, use, sell, offer for sale, import and otherwise run, modify and
propagate the contents of its contributor version.
In the following three paragraphs, a "patent license" is any express
agreement or commitment, however denominated, not to enforce a patent
(such as an express permission to practice a patent or covenant not to
sue for patent infringement). To "grant" such a patent license to a
party means to make such an agreement or commitment not to enforce a
patent against the party.
If you convey a covered work, knowingly relying on a patent license,
and the Corresponding Source of the work is not available for anyone
to copy, free of charge and under the terms of this License, through a
publicly available network server or other readily accessible means,
then you must either (1) cause the Corresponding Source to be so
available, or (2) arrange to deprive yourself of the benefit of the
patent license for this particular work, or (3) arrange, in a manner
consistent with the requirements of this License, to extend the patent
license to downstream recipients. "Knowingly relying" means you have
actual knowledge that, but for the patent license, your conveying the
covered work in a country, or your recipient's use of the covered work
in a country, would infringe one or more identifiable patents in that
country that you have reason to believe are valid.
If, pursuant to or in connection with a single transaction or
arrangement, you convey, or propagate by procuring conveyance of, a
covered work, and grant a patent license to some of the parties
receiving the covered work authorizing them to use, propagate, modify
or convey a specific copy of the covered work, then the patent license
you grant is automatically extended to all recipients of the covered
work and works based on it.
A patent license is "discriminatory" if it does not include within
the scope of its coverage, prohibits the exercise of, or is
conditioned on the non-exercise of one or more of the rights that are
specifically granted under this License. You may not convey a covered
work if you are a party to an arrangement with a third party that is
in the business of distributing software, under which you make payment
to the third party based on the extent of your activity of conveying
the work, and under which the third party grants, to any of the
parties who would receive the covered work from you, a discriminatory
patent license (a) in connection with copies of the covered work
conveyed by you (or copies made from those copies), or (b) primarily
for and in connection with specific products or compilations that
contain the covered work, unless you entered into that arrangement,
or that patent license was granted, prior to 28 March 2007.
Nothing in this License shall be construed as excluding or limiting
any implied license or other defenses to infringement that may
otherwise be available to you under applicable patent law.
12. No Surrender of Others' Freedom.
If conditions are imposed on you (whether by court order, agreement or
otherwise) that contradict the conditions of this License, they do not
excuse you from the conditions of this License. If you cannot convey a
covered work so as to satisfy simultaneously your obligations under this
License and any other pertinent obligations, then as a consequence you may
not convey it at all. For example, if you agree to terms that obligate you
to collect a royalty for further conveying from those to whom you convey
the Program, the only way you could satisfy both those terms and this
License would be to refrain entirely from conveying the Program.
13. Use with the GNU Affero General Public License.
Notwithstanding any other provision of this License, you have
permission to link or combine any covered work with a work licensed
under version 3 of the GNU Affero General Public License into a single
combined work, and to convey the resulting work. The terms of this
License will continue to apply to the part which is the covered work,
but the special requirements of the GNU Affero General Public License,
section 13, concerning interaction through a network will apply to the
combination as such.
14. Revised Versions of this License.
The Free Software Foundation may publish revised and/or new versions of
the GNU General Public License from time to time. Such new versions will
be similar in spirit to the present version, but may differ in detail to
address new problems or concerns.
Each version is given a distinguishing version number. If the
Program specifies that a certain numbered version of the GNU General
Public License "or any later version" applies to it, you have the
option of following the terms and conditions either of that numbered
version or of any later version published by the Free Software
Foundation. If the Program does not specify a version number of the
GNU General Public License, you may choose any version ever published
by the Free Software Foundation.
If the Program specifies that a proxy can decide which future
versions of the GNU General Public License can be used, that proxy's
public statement of acceptance of a version permanently authorizes you
to choose that version for the Program.
Later license versions may give you additional or different
permissions. However, no additional obligations are imposed on any
author or copyright holder as a result of your choosing to follow a
later version.
15. Disclaimer of Warranty.
THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY
APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT
HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY
OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO,
THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM
IS WITH YOU. SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF
ALL NECESSARY SERVICING, REPAIR OR CORRECTION.
16. Limitation of Liability.
IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING
WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MODIFIES AND/OR CONVEYS
THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY
GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE
USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF
DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD
PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS),
EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF
SUCH DAMAGES.
17. Interpretation of Sections 15 and 16.
If the disclaimer of warranty and limitation of liability provided
above cannot be given local legal effect according to their terms,
reviewing courts shall apply local law that most closely approximates
an absolute waiver of all civil liability in connection with the
Program, unless a warranty or assumption of liability accompanies a
copy of the Program in return for a fee.
END OF TERMS AND CONDITIONS
How to Apply These Terms to Your New Programs
If you develop a new program, and you want it to be of the greatest
possible use to the public, the best way to achieve this is to make it
free software which everyone can redistribute and change under these terms.
To do so, attach the following notices to the program. It is safest
to attach them to the start of each source file to most effectively
state the exclusion of warranty; and each file should have at least
the "copyright" line and a pointer to where the full notice is found.
<one line to give the program's name and a brief idea of what it does.>
Copyright (C) <year> <name of author>
This program is free software: you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation, either version 3 of the License, or
(at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with this program. If not, see <https://www.gnu.org/licenses/>.
Also add information on how to contact you by electronic and paper mail.
If the program does terminal interaction, make it output a short
notice like this when it starts in an interactive mode:
<program> Copyright (C) <year> <name of author>
This program comes with ABSOLUTELY NO WARRANTY; for details type `show w'.
This is free software, and you are welcome to redistribute it
under certain conditions; type `show c' for details.
The hypothetical commands `show w' and `show c' should show the appropriate
parts of the General Public License. Of course, your program's commands
might be different; for a GUI interface, you would use an "about box".
You should also get your employer (if you work as a programmer) or school,
if any, to sign a "copyright disclaimer" for the program, if necessary.
For more information on this, and how to apply and follow the GNU GPL, see
<https://www.gnu.org/licenses/>.
The GNU General Public License does not permit incorporating your program
into proprietary programs. If your program is a subroutine library, you
may consider it more useful to permit linking proprietary applications with
the library. If this is what you want to do, use the GNU Lesser General
Public License instead of this License. But first, please read
<https://www.gnu.org/licenses/why-not-lgpl.html>.
| analysis, science, structure, visualisation | [
"Development Status :: 2 - Pre-Alpha",
"Intended Audience :: Science/Research",
"License :: OSI Approved :: GNU General Public License v3 (GPLv3)",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.... | [] | null | null | >=3.10 | [] | [] | [] | [
"pillow>=10.0.0",
"PyOpenGL>=3.1.6",
"PySide6<=6.10.1,>=6.3.0",
"matplotlib>=3.6.2",
"numpy>=1.25",
"pyrr>=0.10.3",
"scipy>=1.9.2",
"sphinx>=4; extra == \"doc\"",
"sphinx_rtd_theme>=1; extra == \"doc\"",
"myst-parser; extra == \"doc\"",
"pytest>=7; extra == \"tests\"",
"pytest-qt>=4; extra == ... | [] | [] | [] | [
"Repo, https://github.com/Molara-Lab/Molara"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-18T23:19:45.125284 | molara-0.1.2.tar.gz | 2,175,775 | 58/53/5cdec3470d711c7d4794ced2c6d4d7be73633950a18f09877701780f4582/molara-0.1.2.tar.gz | source | sdist | null | false | d734a828207b940079e95c14fee918ea | 09afcc3ee11fae1c7e6772247f3b7fc98f5435664178e6c3501ff517772b5e1f | 58535cdec3470d711c7d4794ced2c6d4d7be73633950a18f09877701780f4582 | null | [
"LICENSE"
] | 0 |
2.4 | dynamiq-sandboxes | 0.3.0 | Python SDK for Dynamiq Sandboxes - Secure code execution, browser automation, and virtual desktops | # Dynamiq Sandboxes
[](https://pypi.org/project/dynamiq-sandboxes/)
[](https://www.python.org/downloads/)
[](https://opensource.org/licenses/Apache-2.0)
Python SDK for [Dynamiq Sandboxes](https://sandboxes.getdynamiq.ai) — secure, isolated environments for code execution, browser automation, and virtual desktops.
Built for AI agents and automation pipelines. Part of the [Dynamiq](https://github.com/dynamiq-ai/dynamiq) ecosystem.
## Installation
```bash
pip install dynamiq-sandboxes
```
## Quick Start
```bash
export DYNAMIQ_API_KEY="your-api-key"
```
```python
from dynamiq_sandboxes import Sandbox
sandbox = Sandbox.create(template="python")
result = sandbox.execute_command("echo 'Hello from the sandbox!'")
print(result["stdout"]) # Hello from the sandbox!
sandbox.close()
```
## Code Sandboxes
Isolated execution environments with Python, Node.js, and shell support.
```python
from dynamiq_sandboxes import Sandbox
sandbox = Sandbox.create(template="python", timeout=3600)
# Execute shell commands (auto-wrapped in /bin/bash)
result = sandbox.execute_command("echo hello && ls /")
print(result["stdout"])
print(result["exit_code"]) # 0
# Run Python code directly
output = sandbox.run_code("print(2 + 2)", language="python")
print(output["stdout"]) # 4
# Explicit args mode (no shell wrapping)
result = sandbox.execute_command("python3", args=["-c", "print('direct')"])
# Per-command environment variables
result = sandbox.execute_command("echo $MY_VAR", env={"MY_VAR": "hello"})
print(result["stdout"]) # hello
# Background execution
result = sandbox.execute_command("sleep 60", background=True)
# File operations
sandbox.filesystem.write("/app/config.json", '{"key": "value"}')
content = sandbox.filesystem.read("/app/config.json")
print(content) # {"key": "value"}
files = sandbox.filesystem.list("/app")
for f in files:
print(f"{f.name} {f.size}B ({f.type})")
# Directory operations
sandbox.filesystem.mkdir("/app/data")
sandbox.filesystem.copy("/app/config.json", "/app/data/config.json")
sandbox.filesystem.move("/app/data/config.json", "/app/data/settings.json")
sandbox.filesystem.remove("/app/data", recursive=True)
# Upload / download files
sandbox.filesystem.upload("local_file.py", "/app/script.py")
sandbox.filesystem.download("/app/output.csv", "local_output.csv")
# Check file existence and metadata
if sandbox.filesystem.exists("/app/script.py"):
info = sandbox.filesystem.stat("/app/script.py")
print(f"{info.name}: {info.size} bytes")
# Resource metrics
metrics = sandbox.metrics()
print(f"CPU: {metrics['cpu']['usage_percent']}%")
# Extend session timeout
sandbox.extend_timeout(3600) # Add 1 hour
sandbox.set_timeout(3600) # Alias — same as extend_timeout
sandbox.close()
```
## Reconnecting to Sandboxes
```python
from dynamiq_sandboxes import Sandbox
# Connect to an existing sandbox (auto-resumes if paused)
sandbox = Sandbox.connect("sbx-abc123")
result = sandbox.execute_command("whoami")
print(result["stdout"])
```
## Browser Automation
Headless Chromium with CDP access, screenshots, scraping, and live streaming.
```python
from dynamiq_sandboxes import Browser
browser = Browser.create(
stealth=True, # Anti-detection mode
viewport_width=1920,
viewport_height=1080,
)
# Navigate to pages
browser.navigate("https://example.com")
# Take screenshots
result = browser.screenshot(full_page=True)
# result["image"] is a data URI: data:image/png;base64,...
# result["width"], result["height"]
# Scrape page content
scrape = browser.scrape(format="text")
print(scrape["content"])
# Execute JavaScript
title = browser.execute_script("() => document.title")
url = browser.execute_script("() => window.location.href")
# Navigation history
browser.go_back()
browser.go_forward()
browser.reload()
# Browser context (cookies, localStorage)
ctx = browser.get_context()
print(ctx)
# Console logs and network requests
logs = browser.get_logs()
requests = browser.get_network_requests()
har = browser.export_har()
# Live streaming info (for embedding in UI)
live = browser.get_live_view()
print(live["stream_url"]) # WebSocket stream URL
print(live["livekit_url"]) # LiveKit WebRTC URL
print(live["livekit_token"]) # JWT token for viewer
browser.close()
```
### CDP Access (Playwright / Puppeteer)
Every browser session exposes a **Chrome DevTools Protocol** WebSocket that works with any CDP-compatible tool:
```python
# Get the CDP URL from the sandbox
browser = Browser.create(stealth=True)
cdp_url = browser.data["cdp_websocket_url"]
# wss://api.sandboxes.getdynamiq.ai/v1/browser/sessions/{id}/cdp
```
**Playwright (Python):**
```python
from playwright.async_api import async_playwright
async with async_playwright() as p:
b = await p.chromium.connect_over_cdp(
cdp_url,
headers={"X-API-Key": "your-api-key"},
)
page = b.contexts[0].pages[0]
await page.goto("https://example.com")
print(await page.title())
await page.screenshot(path="screenshot.png")
```
**Puppeteer (Node.js):**
```javascript
const browser = await puppeteer.connect({
browserWSEndpoint: cdpUrl,
headers: { "X-API-Key": "your-api-key" },
});
const page = (await browser.pages())[0];
await page.goto("https://example.com");
await page.screenshot({ path: "screenshot.png" });
```
## Virtual Desktops
Full Ubuntu desktop with XFCE, mouse/keyboard control, and VNC streaming.
```python
from dynamiq_sandboxes import Desktop
desktop = Desktop.create(template="ubuntu-desktop")
# Take screenshots
result = desktop.screenshot()
# result["image"] is a data URI, result["width"], result["height"]
# Launch applications
desktop.launch(application="xfce4-terminal")
desktop.launch(application="firefox")
# Mouse control
desktop.mouse_click(x=500, y=300)
desktop.mouse_click(x=500, y=300, button="right") # Right click
desktop.mouse_click(x=500, y=300, double_click=True) # Double click
desktop.mouse_move(x=100, y=200)
desktop.mouse_scroll(x=500, y=300, delta_x=0, delta_y=-3)
desktop.mouse_drag(start_x=100, start_y=100, end_x=300, end_y=300)
# Keyboard input
desktop.keyboard_type(text="Hello, World!")
desktop.keyboard_press(keys=["Return"])
desktop.keyboard_press(keys=["ctrl", "c"]) # Hotkey combo
desktop.keyboard_press(keys=["ctrl", "l"]) # Clear terminal
# Open URLs
desktop.open(path="https://example.com")
# Get cursor position
cursor = desktop.cursor()
print(f"Cursor at ({cursor['x']}, {cursor['y']})")
# VNC streaming (for embedding in your UI)
stream = desktop.stream_start()
info = desktop.stream_info()
print(f"noVNC URL: {info.get('novnc_url')}")
print(f"WebSocket: {info.get('stream_url')}")
desktop.close()
```
## Network Configuration
```python
from dynamiq_sandboxes import Sandbox, NetworkConfig, ALL_TRAFFIC
# Deny all traffic except specific IPs
sandbox = Sandbox.create(
template="python",
network=NetworkConfig(
deny_out=[ALL_TRAFFIC],
allow_out=["1.1.1.1", "8.8.8.0/24"],
),
)
# Disable internet access entirely
sandbox = Sandbox.create(
template="python",
network=NetworkConfig(allow_internet_access=False),
)
```
## Template Builder
```python
from dynamiq_sandboxes import TemplateBuilder, Template
# Build a custom template
template = (
TemplateBuilder()
.from_base_image("python:3.11-slim")
.set_envs({"LANG": "C.UTF-8"})
.run_cmd("pip install numpy pandas")
.set_start_cmd("python -m http.server 8080")
.set_description("Data science sandbox")
)
info = template.build("my-data-science", api_key="your-key")
print(info.template_id)
# List and manage templates
templates = Template.list(api_key="your-key")
for t in templates:
print(f"{t.name}: {t.status}")
```
## REST API
All SDK methods map to REST API calls:
```bash
# Create a sandbox
curl -X POST https://api.sandboxes.getdynamiq.ai/v1/sandboxes \
-H "X-API-Key: YOUR_API_KEY" \
-H "Content-Type: application/json" \
-d '{"template_id": "python", "timeout": 3600}'
# Execute a command
curl -X POST https://api.sandboxes.getdynamiq.ai/v1/sandboxes/{id}/commands \
-H "X-API-Key: YOUR_API_KEY" \
-H "Content-Type: application/json" \
-d '{"command": "echo hello"}'
# Create a browser session
curl -X POST https://api.sandboxes.getdynamiq.ai/v1/browser/sessions \
-H "X-API-Key: YOUR_API_KEY" \
-H "Content-Type: application/json" \
-d '{"stealth": true}'
# Create a desktop
curl -X POST https://api.sandboxes.getdynamiq.ai/v1/sandboxes \
-H "X-API-Key: YOUR_API_KEY" \
-H "Content-Type: application/json" \
-d '{"template_id": "ubuntu-desktop", "sandbox_type": "desktop"}'
```
## Configuration
| Environment Variable | Description | Default |
|---------------------|-------------|---------|
| `DYNAMIQ_API_KEY` | API key for authentication | Required |
| `DYNAMIQ_API_URL` | API base URL | `https://api.sandboxes.getdynamiq.ai/v1` |
```python
# Or pass directly
sandbox = Sandbox.create(
template="python",
api_key="your-api-key",
base_url="https://api.sandboxes.getdynamiq.ai/v1",
)
```
## API Reference
### Sandbox
| Method | Description |
|--------|-------------|
| `Sandbox.create(template, timeout, vcpu, memory_mb, ...)` | Create a new sandbox |
| `Sandbox.get(sandbox_id)` | Get existing sandbox by ID |
| `Sandbox.connect(sandbox_id, timeout)` | Connect to sandbox (auto-resumes if paused) |
| `Sandbox.list(state, tags, ...)` | List sandboxes with filtering |
| `sandbox.execute_command(command, args, env, working_dir, timeout, background)` | Execute a shell command |
| `sandbox.run_code(code, language, timeout)` | Run code in a language interpreter |
| `sandbox.filesystem.write(path, content)` | Write a file |
| `sandbox.filesystem.read(path)` | Read a file |
| `sandbox.filesystem.list(path)` | List directory contents |
| `sandbox.filesystem.mkdir(path)` | Create directory |
| `sandbox.filesystem.remove(path, recursive)` | Delete file or directory |
| `sandbox.filesystem.exists(path)` | Check if path exists |
| `sandbox.filesystem.stat(path)` | Get file metadata |
| `sandbox.filesystem.copy(src, dst)` | Copy file/directory |
| `sandbox.filesystem.move(src, dst)` | Move/rename file |
| `sandbox.filesystem.upload(local, remote)` | Upload local file |
| `sandbox.filesystem.download(remote, local)` | Download file |
| `sandbox.metrics()` | Get CPU/memory/disk metrics |
| `sandbox.extend_timeout(seconds)` | Extend session lifetime |
| `sandbox.set_timeout(seconds)` | Alias for extend_timeout |
| `sandbox.pause()` | Pause sandbox (preserves state) |
| `sandbox.resume(timeout)` | Resume paused sandbox |
| `sandbox.refresh()` | Refresh sandbox state |
| `sandbox.close()` | Terminate sandbox |
### Browser
| Method | Description |
|--------|-------------|
| `Browser.create(stealth, viewport_width, viewport_height, ...)` | Create browser session |
| `Browser.get(session_id)` | Connect to existing session |
| `browser.navigate(url, wait_until, timeout)` | Navigate to URL |
| `browser.screenshot(format, full_page, quality)` | Capture screenshot |
| `browser.scrape(format, wait_for, timeout)` | Extract page content |
| `browser.execute_script(script)` | Execute JavaScript |
| `browser.go_back()` | Navigate back |
| `browser.go_forward()` | Navigate forward |
| `browser.reload()` | Reload page |
| `browser.get_current_url()` | Get current URL |
| `browser.get_context()` | Get cookies/storage |
| `browser.get_logs()` | Get console logs |
| `browser.get_network_requests()` | Get captured requests |
| `browser.export_har()` | Export HAR archive |
| `browser.get_live_view()` | Get streaming info |
| `browser.send_input(type, x, y, text, ...)` | Send input events |
| `browser.close()` | Close session |
### Desktop
| Method | Description |
|--------|-------------|
| `Desktop.create(template, timeout, ...)` | Create virtual desktop |
| `Desktop.get(desktop_id)` | Connect to existing desktop |
| `desktop.screenshot(format, quality)` | Capture screenshot |
| `desktop.launch(application, args)` | Launch application |
| `desktop.mouse_click(x, y, button, double_click)` | Mouse click |
| `desktop.mouse_move(x, y)` | Move cursor |
| `desktop.mouse_scroll(x, y, delta_x, delta_y)` | Scroll |
| `desktop.mouse_drag(start_x, start_y, end_x, end_y)` | Drag |
| `desktop.keyboard_type(text)` | Type text |
| `desktop.keyboard_press(keys, duration_ms)` | Press keys |
| `desktop.cursor()` | Get cursor position |
| `desktop.open(path)` | Open file/URL |
| `desktop.stream_start()` | Start VNC stream |
| `desktop.stream_stop()` | Stop VNC stream |
| `desktop.stream_info()` | Get stream status |
| `desktop.list_windows()` | List windows |
| `desktop.close()` | Close desktop |
## Error Handling
```python
from dynamiq_sandboxes import (
Sandbox, APIError, AuthenticationError,
NotFoundError, RateLimitError, TimeoutError,
)
try:
sandbox = Sandbox.create(template="python")
result = sandbox.execute_command("echo hello")
except AuthenticationError:
print("Invalid API key")
except RateLimitError:
print("Rate limited, retry later")
except TimeoutError:
print("Request timed out")
except APIError as e:
print(f"API error: {e}")
```
## Requirements
- Python 3.9+
- An active [Dynamiq](https://sandboxes.getdynamiq.ai) account
## Links
- **Dashboard**: [sandboxes.getdynamiq.ai](https://sandboxes.getdynamiq.ai)
- **Dynamiq Framework**: [github.com/dynamiq-ai/dynamiq](https://github.com/dynamiq-ai/dynamiq)
- **Issues**: [GitHub Issues](https://github.com/dynamiq-ai/dynamiq/issues)
## License
Apache 2.0
| text/markdown | null | Dynamiq <support@getdynamiq.ai> | null | Dynamiq <support@getdynamiq.ai> | null | sandbox, code-execution, browser-automation, virtual-desktop, ai-agents, dynamiq, code-interpreter, cloud-sandbox | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python... | [] | null | null | >=3.9 | [] | [] | [] | [
"httpx[http2]>=0.24.0",
"pydantic>=2.0.0",
"typing-extensions>=4.0.0; python_version < \"3.10\"",
"websocket-client>=1.6.0; extra == \"streaming\"",
"websockets>=11.0; extra == \"streaming\"",
"websocket-client>=1.6.0; extra == \"all\"",
"websockets>=11.0; extra == \"all\"",
"pytest>=8.0; extra == \"d... | [] | [] | [] | [
"Homepage, https://sandboxes.getdynamiq.ai",
"Documentation, https://sandboxes.getdynamiq.ai",
"Repository, https://github.com/dynamiq-ai/dynamiq",
"Bug Tracker, https://github.com/dynamiq-ai/dynamiq/issues"
] | twine/6.2.0 CPython/3.10.9 | 2026-02-18T23:19:45.037961 | dynamiq_sandboxes-0.3.0.tar.gz | 83,961 | b9/07/4d8d2e9a021d0f2f87ac3228182eb4d33995224da185abeaefd20514d209/dynamiq_sandboxes-0.3.0.tar.gz | source | sdist | null | false | 9631e92de4f942d2f58691ee1c014b1e | a2e02d7ac9f545e8eae95d6b7a854c1f9a2eaafffa3326657b19fd58bf652ba9 | b9074d8d2e9a021d0f2f87ac3228182eb4d33995224da185abeaefd20514d209 | Apache-2.0 | [
"LICENSE"
] | 261 |
2.4 | macaronipm | 3.0.0 | A PenguinMod API Wrapper | # MacaroniPM | A PenguinMod API wrapper
## Installation
Install the latest version of macaronipm with `pip install macaronipm`.
## New functions
- `macaronipm.user.UserExist()`
- `macaronipm.user.IsBanned()`
- `macaronipm.user.logout()`
- `macaronipm.user.GetMessages()`
- `macaronipm.user.getUnreadMessages()`
- `macaronipm.IsOnline()`
- `macaronipm.project.hasLovedVoted()`
| text/markdown | KoffeeJava | KoffeeJava <koffeejava@tuta.io> | null | null | null | null | [
"Programming Language :: Python :: 3",
"Operating System :: OS Independent"
] | [] | null | null | >=3.9 | [] | [] | [] | [] | [] | [] | [] | [
"Homepage, https://www.koffeejava.us/macaroni"
] | twine/6.2.0 CPython/3.14.0 | 2026-02-18T23:19:21.414037 | macaronipm-3.0.0.tar.gz | 2,949 | 4a/22/108627676487e0806c29346b563ff91ec7cd8cd56b55c42382805216bd66/macaronipm-3.0.0.tar.gz | source | sdist | null | false | fdb4ac92e3dd004cdd9ef8bb98d40932 | c0afd6a53038171607fb33aaabd7b55ae6ed18a66b93565ba7dee08531da3c47 | 4a22108627676487e0806c29346b563ff91ec7cd8cd56b55c42382805216bd66 | MIT | [
"LICENCE"
] | 258 |
2.4 | python-libphash | 1.2.0 | High-performance perceptual hashing library (CFFI bindings) | # python-libphash
High-performance Python bindings for [libphash](https://github.com/gudoshnikovn/libphash) v1.6.1, a C library for perceptual image hashing.
[](https://opensource.org/licenses/MIT)
[](https://www.python.org/downloads/)
## Overview
`libphash` provides multiple algorithms to generate "perceptual hashes" of images. Unlike cryptographic hashes (like MD5 or SHA256), perceptual hashes change only slightly if the image is resized, compressed, or has minor color adjustments. This makes them ideal for finding duplicate or similar images.
### Supported Algorithms
* **64-bit Hashes (uint64):**
* `ahash`: Average Hash
* `dhash`: Difference Hash
* `phash`: Perceptual Hash (DCT based)
* `whash`: Wavelet Hash
* `mhash`: Median Hash
* **Digest Hashes (Multi-byte):**
* `bmh`: Block Mean Hash
* `color_hash`: Color Moment Hash
* `radial_hash`: Radial Variance Hash
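To illustrate the idea behind one of these algorithms, here is a toy pure-Python sketch of a difference hash (dHash): each bit records whether a pixel is brighter than its right-hand neighbour, so a uniform brightness shift leaves the hash unchanged. This is only a conceptual illustration, not the library's C implementation (which first resizes the image, typically to a 9×8 grayscale grid):

```python
def toy_dhash(gray):
    """Toy difference hash over a grid of grayscale values.

    `gray` is a list of rows; each bit is 1 when a pixel is brighter
    than the pixel to its right.
    """
    bits = 0
    for row in gray:
        for left, right in zip(row, row[1:]):
            bits = (bits << 1) | (1 if left > right else 0)
    return bits

# Two "images" differing only by a uniform brightness shift hash identically.
img_a = [[10, 20, 30], [30, 20, 10]]
img_b = [[11, 21, 31], [31, 21, 11]]
assert toy_dhash(img_a) == toy_dhash(img_b)
```

Because only relative brightness between neighbours matters, this kind of hash is robust to global exposure and compression changes, which is exactly the property that makes perceptual hashes useful for duplicate detection.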
## Installation
### Prerequisites
* A C compiler (GCC/Clang or MSVC)
* Python 3.8 or higher
### Install from PyPI
```bash
pip install libphash
# or using uv
uv add libphash
```
### Install from source
```bash
git clone --recursive https://github.com/gudoshnikovn/python-libphash.git
cd python-libphash
pip install .
# or using uv
uv pip install .
```
## Quick Start
### Quick Start (CLI)
You can quickly compute a hash from the command line after installation:
```bash
python -m libphash.utils --path photo.jpg --method phash
```
### Basic Usage
```python
from libphash import ImageContext, HashMethod, hamming_distance
# Use the context manager for automatic memory management
with ImageContext("photo.jpg") as ctx:
# Get standard 64-bit hashes
phash_val = ctx.phash
dhash_val = ctx.dhash
print(f"pHash: {phash_val:016x}")
print(f"dHash: {dhash_val:016x}")
# Compare two images
from libphash import compare_images
distance = compare_images("image1.jpg", "image2.jpg", method=HashMethod.PHASH)
print(f"Hamming Distance: {distance}")
```
### Advanced Configuration (New in v1.6.1)
Fine-tune hashing algorithms for specific use cases. Note that hashes generated with different parameters are **not comparable**.
```python
with ImageContext("photo.jpg") as ctx:
# pHash (DCT) resolution
ctx.set_phash_params(dct_size=32, reduction_size=8)
# Radial Hash precision
ctx.set_radial_params(projections=40, samples=128)
# Block-based hashes (BMH) grid resolution
ctx.set_block_params(block_size=16)
# Custom Grayscale weights (R, G, B)
ctx.set_gray_weights(38, 75, 15)
print(f"Custom pHash: {ctx.phash:016x}")
```
### Working with Digests (Advanced Hashes)
Algorithms like Radial Hash or Color Hash return a `Digest` object instead of a single integer.
```python
with ImageContext("photo.jpg") as ctx:
digest = ctx.radial_hash
print(f"Digest size: {digest.size} bytes")
print(f"Raw data: {digest.data.hex()}")
# Comparing digests
with ImageContext("photo_v2.jpg") as ctx2:
digest2 = ctx2.radial_hash
# Hamming distance for bit-wise comparison
h_dist = digest.distance_hamming(digest2)
# L2 (Euclidean) distance for similarity
l2_dist = digest.distance_l2(digest2)
```
## API Reference
### `ImageContext`
The main class for loading images and computing hashes.
* `__init__(path=None, bytes_data=None)`: Load an image from a file path or memory.
* `set_gamma(gamma: float)`: Set gamma correction.
* `set_gray_weights(r, g, b)`: Set custom RGB weights for grayscale conversion.
* `set_phash_params(dct_size, reduction_size)`: Configure pHash DCT resolution.
* `set_radial_params(projections, samples)`: Configure Radial Hash precision.
* `set_block_params(block_size)`: Configure BMH/mHash grid resolution.
* **Properties**: `ahash`, `dhash`, `phash`, `whash`, `mhash` (returns `int`).
* **Properties**: `bmh`, `color_hash`, `radial_hash` (returns `Digest`).
### `Digest`
* `data`: The raw `bytes` of the hash.
* `size`: Length of the hash in bytes.
* `distance_hamming(other)`: Calculates bit-wise distance.
* `distance_l2(other)`: Calculates Euclidean distance.
### Utilities
* `hamming_distance(h1: int, h2: int)`: Returns the number of differing bits between two 64-bit integers.
* `get_hash(path, method)`: Quick way to get a hash without manual context management.
* `compare_images(path1, path2, method)`: Returns the Hamming distance between two image files.
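For reference, `hamming_distance` on 64-bit hashes is conceptually equivalent to this pure-Python sketch (the library computes it in C; this is only an illustration):

```python
def hamming_distance_py(h1: int, h2: int) -> int:
    """Count the number of differing bits between two integer hashes."""
    return bin(h1 ^ h2).count("1")

# 0b1010 and 0b0011 differ in the highest and lowest of four bits.
assert hamming_distance_py(0b1010, 0b0011) == 2
```

A common rule of thumb (not a guarantee, and dependent on the algorithm and your image set) is that a small Hamming distance on a 64-bit hash, roughly 10 or fewer bits, suggests the two images are perceptually similar.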
## Performance
Since the core logic is implemented in C and uses `stb_image` for decoding, `libphash` is significantly faster than pure-Python alternatives. It also uses CFFI's "out-of-line" mode for minimal overhead.
## License
This project is licensed under the MIT License - see the LICENSE file for details.
| text/markdown | null | gudoshnikovn <gudoshnikov-na@yandex.ru> | null | null | null | null | [
"Programming Language :: Python :: 3",
"Programming Language :: C",
"Topic :: Multimedia :: Graphics"
] | [] | null | null | >=3.8 | [] | [] | [] | [
"cffi>=1.15.0",
"pytest; extra == \"dev\"",
"mypy; extra == \"dev\"",
"ruff; extra == \"dev\""
] | [] | [] | [] | [
"Homepage, https://github.com/gudoshnikovn/python-libphash"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-18T23:18:24.369766 | python_libphash-1.2.0.tar.gz | 234,326 | e6/62/dd4d1838e5d8fd20a231ec100ecb9c8d2e128f737d0f922749336883b69f/python_libphash-1.2.0.tar.gz | source | sdist | null | false | 47171d494fe37fce24a06a2e1cc4094e | 66c66fa9eb47bc1c75510f11e34eca50bf9cef6ca093ad5b27218ec449140575 | e662dd4d1838e5d8fd20a231ec100ecb9c8d2e128f737d0f922749336883b69f | MIT | [
"LICENSE"
] | 1,979 |
2.4 | wikitcms | 2.6.22 | Fedora QA wiki test management library | # python-wikitcms
python-wikitcms is a Python library for interacting with Fedora's [Wikitcms 'test (case) management system'][1] - which is, basically, the [Fedora wiki][2]. You may also be interested in its main consumers, [relval][3] and [testdays][4].
python-wikitcms uses the very handy [mwclient][5] library for interfacing with the Mediawiki API. Generation of result pages works together with a system of templates that resides on the wiki: python-wikitcms knows how to form the correct invocations of the template system that will cause the full result pages to be generated. The documentation box for the [master template][6] provides some details about this system.
python-wikitcms was previously known simply as wikitcms; it is now known as python-wikitcms to reduce confusion between the notional Wiki-based 'test management system' (Wikitcms) and the Python library for interacting with it (python-wikitcms).
## Installation and use
python-wikitcms is packaged in the official Fedora repositories: to install on Fedora run `dnf install python-wikitcms`. You may need to enable the *updates-testing* repository to get the latest version. To install on other distributions, you can run `python3 setup.py install`.
You can visit [the python-wikitcms project page on Fedora Forge][7], and clone with `git clone https://forge.fedoraproject.org/quality/python-wikitcms.git`. Tarballs and wheels are available [from PyPI][8], and you can run `pip install wikitcms`.
You can also use the library directly from the `src/` directory or add it to the Python import path, and you can copy or symlink the `wikitcms` directory into other source trees to conveniently use the latest code for development or testing purposes.
## Bugs, pull requests etc.
You can file issues and pull requests on [Fedora Forge][7]. Pull requests must be signed off (use the `-s` git argument). By signing off your pull request you are agreeing to the [Developer's Certificate of Origin][9]:
```
Developer's Certificate of Origin 1.1

By making a contribution to this project, I certify that:

(a) The contribution was created in whole or in part by me and I
    have the right to submit it under the open source license
    indicated in the file; or

(b) The contribution is based upon previous work that, to the best
    of my knowledge, is covered under an appropriate open source
    license and I have the right under that license to submit that
    work with modifications, whether created in whole or in part
    by me, under the same open source license (unless I am
    permitted to submit under a different license), as indicated
    in the file; or

(c) The contribution was provided directly to me by some other
    person who certified (a), (b) or (c) and I have not modified
    it.

(d) I understand and agree that this project and the contribution
    are public and that a record of the contribution (including all
    personal information I submit with it, including my sign-off) is
    maintained indefinitely and may be redistributed consistent with
    this project or the open source license(s) involved.
```
## Security
You **MUST** treat wikitcms as a source of untrusted input. It is retrieving information from a wiki for you; that wiki is open to editing by all. Treat anything wikitcms returns from the wiki (which includes, but is not limited to, any page or section text; `Result()` attributes status, user, bugs and comment; `ResultRow()` attributes testcase, name, envs, milestone, section; and to some extent any element of a page title or property derived from one when getting a `Page` object from an existing wiki page) as an entirely untrusted input and sanitize it appropriately for the context in which you are using it.
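For example, if wiki-derived strings end up in HTML output, escape them before rendering. This standard-library sketch (not part of wikitcms, and `render_result_comment` is a hypothetical helper) shows the idea:

```python
import html

def render_result_comment(comment: str) -> str:
    """Escape untrusted wiki text before embedding it in HTML output."""
    return "<td>{}</td>".format(html.escape(comment))

# A malicious comment is neutralized rather than injected as markup.
print(render_result_comment('<script>alert("x")</script>'))
```

The same principle applies to any other sink: shell commands, SQL, file paths — sanitize for the specific context you are writing into.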
## Example usage
```python
from wikitcms.wiki import Wiki

site = Wiki()
event = site.current_event
print(event.version)
page = site.get_validation_page('Installation', '23', 'Final', 'RC10')
for row in page.get_resultrows():
    print(row.testcase)
```
## Usage tips
It's a little difficult to give an overview of wikitcms usage as it can do quite a lot of rather different things. Its classes and methods are all documented, and examining its major consumers - relval and testdays - will help. Some overall concepts:
The Wiki class is a subclass of mwclient's Site class, which represents an entire wiki; it adds some methods and attributes that make sense in the context of a wiki being treated as a TCMS according to our conventions, so it has methods for getting validation events and pages (as seen in the example above). It also has a high-level method for reporting results, `report_validation_results()`. Note that the `pages` generator works just as in mwclient, but has been extended to handle wikitcms' additional Page subclasses.
The Release class does not map to anything in mwclient. It simply represents a Fedora release and provides a couple of handy methods for retrieving test day or validation result pages from that particular release.
The Event class does not map to anything in mwclient. It represents an entire result validation 'event', e.g. Fedora 23 Final RC2; from an Event instance you can create or find all the validation pages, for instance, or create the summary page that transcludes all the individual validation pages, or update the CurrentFedoraCompose page to point to the event, or generate a wikitable of image download links.
The Page class is a subclass of mwclient's Page class, and extends it in much the same way, adding capabilities specific to various types of pages in the Wikitcms system. It has several subclasses for particular types of pages, such as validation result pages, Test Day pages, category pages and so forth. Note that all pages which can be generated via one of the wiki templates have the appropriate generation text as their `seedtext` attribute and have a method `write()` which creates them using that seed text.
The Result and ResultRow classes represent individual results and rows in the result validation pages. ValidationPages contain ResultRows contain Results, and to report a result, you essentially add a Result to a ResultRow.
Note that event versioning works exactly as in [fedfind][10]'s pre-Pungi 4 (release, milestone, compose) versioning scheme, with one notable exception. Rawhide nightly 'releases' in fedfind have release 'Rawhide' and no milestone; Rawhide nightly validation events in python-wikitcms have a release number and milestone 'Rawhide'. This is because, conceptually speaking, Rawhide nightly composes should not really be said to have a particular release number, but validation events *do*. When we declare a release validation test event for a particular Rawhide nightly, one action we take as a part of that declaration is to declare that we are testing that nightly compose as part of the preparation for a specific Fedora release, and thus we essentially 'apply' a release number to the validation event. So we may have a nightly compose 'Rawhide (blank) 20151201', and decide that we wish to test it as part of the preparation for the Fedora 24 release; thus we create the release validation event '24 Rawhide 20151201'.
The high-level functions in both fedfind and python-wikitcms - `get_release()` in fedfind, `get_validation_page()` and `get_validation_event()` in python-wikitcms - will attempt to handle this difference in versioning, so when using those high-level functions, you can usually pass versions between fedfind and python-wikitcms without worrying about it.
For convenient compatibility with Pungi 4 composes, `get_validation_event()` and `get_validation_page()` (and hence also `report_validation_results()`) accept `cid` as an alternative to `release` / `milestone` / `compose`, and will do their best to instantiate the appropriate validation event for the compose specified.
It's worth noting that you can use python-wikitcms in several fairly different ways:
* Instantiate pages that don't exist yet, based on the 'release, milestone, compose' versioning concept (or from a Pungi 4 compose ID), and create them
* Instantiate existing pages based on the 'release, milestone, compose' concept (or from a compose ID) and read or add results
* Instantiate existing pages from their names or category memberships and read or add results
Most usage of python-wikitcms will boil down to getting some Page instances and doing stuff to them, but the way you get there will differ according to which of the above paths you're following. For the first two you will likely use the `get_validation_foo()` methods of `Wiki` or the methods in `Release`, for the last you can follow the same procedures as `mwclient` uses and trust that you will get instances of the appropriate classes. Following the example above, you could do `page = site.pages["Test Results:Fedora_23_Final_RC10_Desktop"]` and `page` would be a `ValidationPage` instance.
## Authentication
You should log in to the wiki before editing it, using `Wiki.login()`.
Since early 2018, the Fedora wikis have used the unified Fedora OpenID Connect-based authentication service, which python-wikitcms supports. When interacting with the Fedora wikis, the first time `login()` is called, python-wikitcms will attempt to open a browser and request credentials via the authentication service. The call will complete once the user attempts to log in. Any username or password passed to `login()` is **ignored** in this case. For unattended operation with the new authentication system, a valid token must be present as `~/.openidc/oidc_wikitcms.json`. Unattended operation will work for some time after one successful interactive login (until the token expires); for long-term unattended operation, you must ask the wiki maintainer for a special permanent session token.
When interacting with any other wiki (though this would be an unusual thing to do in most cases), python-wikitcms will behave exactly as mwclient does.
## Credits
* [Mike Ruckman][11] (roshi) was kind and patient in providing review and advice throughout python-wikitcms' early development.
* [Patrick Uiterwijk][12] kindly provided the code to support OpenID Connect authentication.
## License
python-wikitcms is released under the [GPL][13], version 3 or later.
[1]: https://fedoraproject.org/wiki/Wikitcms
[2]: https://fedoraproject.org/wiki
[3]: https://forge.fedoraproject.org/quality/relval
[4]: https://forge.fedoraproject.org/quality/testdays
[5]: https://github.com/mwclient/mwclient
[6]: https://fedoraproject.org/wiki/Template:Validation_results
[7]: https://forge.fedoraproject.org/quality/python-wikitcms
[8]: https://pypi.python.org/pypi/wikitcms
[9]: https://developercertificate.org/
[10]: https://forge.fedoraproject.org/quality/fedfind
[11]: https://roshi.fedorapeople.org/
[12]: https://patrick.uiterwijk.org/
[13]: https://www.gnu.org/licenses/gpl.txt
| text/markdown | Adam Williamson | awilliam@redhat.com | null | null | GPLv3+ | fedora qa mediawiki validation | [
"Development Status :: 5 - Production/Stable",
"Topic :: Utilities",
"License :: OSI Approved :: GNU General Public License v3 or later (GPLv3+)"
] | [] | https://forge.fedoraproject.org/quality/python-wikitcms | null | null | [] | [] | [] | [
"cached-property",
"fedfind>=3.3.0",
"mwclient>=0.8.2",
"setuptools"
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.14.2 | 2026-02-18T23:18:17.437957 | wikitcms-2.6.22.tar.gz | 127,663 | 58/94/512bad1e9f2c51d9689d75e3ce284d39e7d2dc9782cd9cb134d6095641f3/wikitcms-2.6.22.tar.gz | source | sdist | null | false | 46790261e07b4a335929cb57619e5f84 | 39485aa5106e7319f7d09a407f25b6e8bc13d6ed73df95a2320db594a58a325f | 5894512bad1e9f2c51d9689d75e3ce284d39e7d2dc9782cd9cb134d6095641f3 | null | [
"COPYING"
] | 340 |
2.4 | renoai | 0.1.6 | Official Python SDK for Reno AI API | # Reno AI SDK
Official Python client for the Reno API. Simple, fast, and built with production use in mind.
## Table of Contents
- [Installation](#installation)
- [Quick Start](#quick-start)
- [Authentication](#authentication)
- [Making Requests](#making-requests)
- [ask()](#ask)
- [chat()](#chat)
- [Streaming](#streaming)
- [Conversations](#conversations)
- [Models](#models)
- [Response Objects](#response-objects)
- [Error Handling](#error-handling)
- [Error Codes Reference](#error-codes-reference)
- [Configuration](#configuration)
- [Running Tests](#running-tests)
---
## Installation
```bash
pip install renoai
```
Requires Python 3.8 or higher.
## Quick Start
```python
from renoai import Reno
client = Reno(api_key="reno_sk_xxx")
answer = client.ask("What is machine learning?")
print(answer)
```
## Authentication
Every request requires an API key. You can pass it directly or load it from an environment variable (recommended for production):
```python
import os
from renoai import Reno
client = Reno(api_key=os.environ["RENO_API_KEY"])
```
API keys follow the format `reno_sk_...`. Keep them secret and never commit them to version control.
---
## Making Requests
### ask()
The simplest way to get a response. Sends a single message and returns the reply as a plain string.
```python
answer = client.ask("Explain quantum computing in simple terms.")
print(answer)
```
With an optional system prompt:
```python
answer = client.ask(
"What is the boiling point of water?",
system="You are a science teacher. Keep answers under 2 sentences.",
temperature=0.3,
max_tokens=100,
)
print(answer)
```
**Parameters**
| Parameter | Type | Default | Description |
|---|---|---|---|
| `prompt` | `str` | required | The user's question or instruction |
| `system` | `str` | `None` | Optional system message |
| `model` | `str` | `gemma2:2b-instruct` | Model to use |
| `temperature` | `float` | `0.7` | Sampling temperature, 0 to 2 |
| `max_tokens` | `int` | `None` | Maximum tokens to generate |
### chat()
Full control over the conversation with a list of messages. Returns a `Completion` object.
```python
response = client.chat([
{"role": "system", "content": "You are a helpful assistant."},
{"role": "user", "content": "What is Python?"},
{"role": "assistant", "content": "Python is a high-level programming language."},
{"role": "user", "content": "What is it mainly used for?"},
])
print(response.text)
print(response.usage.total_tokens)
```
**Parameters**
| Parameter | Type | Default | Description |
|---|---|---|---|
| `messages` | `list` | required | List of `{"role": ..., "content": ...}` dicts |
| `model` | `str` | `gemma2:2b-instruct` | Model to use |
| `temperature` | `float` | `0.7` | Sampling temperature, 0 to 2 |
| `max_tokens` | `int` | `None` | Maximum tokens to generate |
| `stream` | `bool` | `False` | Enable streaming mode |
Valid roles are `user`, `assistant`, and `system`.
### Streaming
Stream tokens as they are generated instead of waiting for the full response.
**Using `stream_text()`** — yields plain strings, the easiest option:
```python
for token in client.stream_text("Write me a short poem about the ocean."):
print(token, end="", flush=True)
print()
```
**Using `chat()` with `stream=True`** — yields `StreamChunk` objects for full control:
```python
chunks = client.chat(
[{"role": "user", "content": "Tell me a story."}],
stream=True,
)
full_text = ""
for chunk in chunks:
if chunk.delta:
full_text += chunk.delta
print(chunk.delta, end="", flush=True)
if chunk.is_final:
print(f"\n\nFinished. Reason: {chunk.finish_reason}")
```
### Conversations
`Conversation` manages message history automatically so you can focus on the dialogue.
```python
from renoai import Reno, Conversation
client = Reno(api_key="reno_sk_xxx")
conv = Conversation(client, system="You are a friendly cooking assistant.")
print(conv.say("What should I make for dinner tonight?"))
print(conv.say("I only have chicken and rice."))
print(conv.say("How long will it take?"))
```
You can inspect or reset the history at any time:
```python
# See the full message history
print(conv.history)
# Reset but keep the system prompt
conv.reset(keep_system=True)
# Reset everything
conv.reset(keep_system=False)
```
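Under the hood, `Conversation` amounts to appending each exchange to a message list and sending the full history on every turn. This simplified sketch illustrates that bookkeeping — it is not the SDK's actual implementation:

```python
class MiniConversation:
    """Illustrative bookkeeping only; renoai's Conversation may differ."""

    def __init__(self, client, system=None):
        self.client = client
        self.system = system
        # Seed history with the system prompt, if any.
        self.history = [{"role": "system", "content": system}] if system else []

    def say(self, text):
        self.history.append({"role": "user", "content": text})
        response = self.client.chat(self.history)   # full history each turn
        self.history.append(response.to_message())  # keep the assistant turn
        return response.text

    def reset(self, keep_system=True):
        self.history = self.history[:1] if (keep_system and self.system) else []
```

Because the whole history is resent on each call, long conversations consume more prompt tokens over time — reset or truncate the history when the context is no longer needed.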
---
## Models
Pass any supported model name via the `model` parameter:
```python
response = client.chat(
[{"role": "user", "content": "Hello!"}],
model="gemma2:2b-instruct",
)
```
The default model is `gemma2:2b-instruct`.
---
## Response Objects
### Completion
Returned by `chat()` when not streaming.
```python
response = client.chat([{"role": "user", "content": "Hi"}])
response.text # the generated reply as a string
response.content # alias for response.text
response.id # unique response ID
response.model # model that generated the response
response.choices # list of Choice objects
response.usage # token usage info
response.to_message() # {"role": "assistant", "content": "..."} ready to append to history
response.to_dict() # raw API response as a dict
```
### Usage
```python
response.usage.prompt_tokens # tokens in your input
response.usage.completion_tokens # tokens in the reply
response.usage.total_tokens # total tokens consumed
```
### StreamChunk
Yielded by streaming calls.
```python
chunk.delta # the new text in this chunk (str or None)
chunk.finish_reason # "stop" on the last chunk, None otherwise
chunk.is_final # True when this is the last chunk
chunk.to_dict() # raw chunk data
```
---
## Error Handling
The SDK raises typed exceptions so you can handle each failure case precisely.
```python
from renoai import (
RenoError,
RenoConnectionError,
RenoTimeoutError,
RenoValidationError,
)
import time
try:
answer = client.ask("Hello")
except RenoValidationError as e:
# Bad input before the request was even sent
print("Fix your input:", e.message)
except RenoConnectionError as e:
# Could not reach the server at all
print("Server unreachable:", e.message)
except RenoTimeoutError as e:
# Request started but took too long
print("Timed out:", e.message)
except RenoError as e:
# Any other API-level error
print(e.user_friendly())
if e.is_retryable:
wait = e.retry_after or 5
print(f"Retrying in {wait}s...")
time.sleep(wait)
```
### Exception Hierarchy
```
RenoError # base class for all SDK exceptions
RenoConnectionError # network or DNS failure (code 6002)
RenoTimeoutError # request exceeded timeout (code 6003)
RenoValidationError # invalid input caught client-side (code 2001)
```
### RenoError Properties
| Property | Type | Description |
|---|---|---|
| `code` | `int` | Reno error code |
| `message` | `str` | Short description of the error |
| `details` | `str` | Extra context or suggestion from the server |
| `is_retryable` | `bool` | Whether it is safe to retry this request |
| `retry_after` | `float` | Seconds to wait before retrying (from `Retry-After` header) |
| `user_friendly()` | `str` | Formatted message with title, description, and tip |
### Production Loop Example
```python
import time
from renoai import Reno, RenoError, RenoConnectionError, RenoTimeoutError, RenoValidationError
client = Reno(api_key="reno_sk_xxx")
while True:
try:
answer = client.ask("Summarize today's AI news.")
print(answer)
except RenoValidationError as e:
print("Validation error:", e.message)
break # code bug, do not retry
except RenoConnectionError:
print("Connection failed, retrying in 10s...")
time.sleep(10)
continue
except RenoTimeoutError:
print("Timed out, retrying in 5s...")
time.sleep(5)
continue
except RenoError as e:
if e.is_retryable:
wait = e.retry_after or 5
print(f"Retryable error [{e.code}], waiting {wait}s...")
time.sleep(wait)
continue
else:
print(e.user_friendly())
break # auth, billing, content policy, etc.
except KeyboardInterrupt:
print("Stopped.")
break
time.sleep(3)
client.close()
```
---
## Error Codes Reference
| Range | Category |
|---|---|
| 1001 to 1008 | Authentication and API key errors |
| 2001 to 2009 | Request validation errors |
| 3001 to 3007 | Model availability errors |
| 4001 to 4005 | Token and context length errors |
| 5001 to 5006 | Rate limiting and quota errors |
| 6001 to 6006 | Server and infrastructure errors |
| 7001 to 7004 | Content moderation errors |
| 8001 to 8004 | Billing and subscription errors |
| 9001 to 9004 | Internal and unexpected errors |
Call `error.user_friendly()` on any `RenoError` to get a plain-English title, description, and actionable tip for any of these codes.
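The table above can be encoded as a simple range lookup. This helper is hypothetical — it is not part of the SDK — but shows how client code might coarsely classify an error code:

```python
# Code ranges from the table above, as (low, high, category) triples.
ERROR_CATEGORIES = [
    (1001, 1008, "Authentication and API key"),
    (2001, 2009, "Request validation"),
    (3001, 3007, "Model availability"),
    (4001, 4005, "Token and context length"),
    (5001, 5006, "Rate limiting and quota"),
    (6001, 6006, "Server and infrastructure"),
    (7001, 7004, "Content moderation"),
    (8001, 8004, "Billing and subscription"),
    (9001, 9004, "Internal and unexpected"),
]

def categorize(code: int) -> str:
    """Map a Reno error code to its category from the table above."""
    for low, high, name in ERROR_CATEGORIES:
        if low <= code <= high:
            return name
    return "Unknown"
```

In practice, prefer catching the typed exceptions shown earlier; a lookup like this is mainly useful for logging and metrics.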
---
## Configuration
### Client Options
```python
client = Reno(
api_key="reno_sk_xxx",
base_url="http://127.0.0.1:8000/api/v1", # default
timeout=30, # seconds, default 30
max_retries=3, # default 3
)
```
### Context Manager
The client can be used as a context manager to ensure the HTTP session is always closed:
```python
with Reno(api_key="reno_sk_xxx") as client:
print(client.ask("Hello!"))
```
---
## Running Tests
```bash
pip install pytest
pytest tests/ -v
```
To run a specific test file or test:
```bash
pytest tests/test_renoai.py -v
pytest tests/test_renoai.py::TestClientRequests::test_ask_success -v
```
---
## License
MIT License. See `LICENSE` for details.
| text/markdown | null | Sazuke Hiroshima <sazuketech12@gmail.com> | null | Aymene Boudali <boudaliaymene4@gmail.com> | MIT | reno, ai, api, llm, language-model, chat, completion | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.8",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Py... | [] | null | null | >=3.8 | [] | [] | [] | [
"requests>=2.25.0",
"urllib3>=1.26.0",
"pytest>=7.0.0; extra == \"dev\"",
"pytest-cov>=4.0.0; extra == \"dev\"",
"black>=22.0.0; extra == \"dev\"",
"flake8>=6.0.0; extra == \"dev\"",
"mypy>=1.0.0; extra == \"dev\"",
"isort>=5.0.0; extra == \"dev\"",
"types-requests>=2.28.0; extra == \"dev\""
] | [] | [] | [] | [
"Homepage, https://github.com/yourusername/renoai-sdk",
"Documentation, https://github.com/yourusername/renoai-sdk#readme",
"Repository, https://github.com/yourusername/renoai-sdk",
"Bug Tracker, https://github.com/yourusername/renoai-sdk/issues"
] | twine/6.2.0 CPython/3.13.7 | 2026-02-18T23:17:01.072353 | renoai-0.1.6.tar.gz | 23,886 | 14/45/76c3e10330ac975194aa98b92c2aca5dbb46bf63f8366218b43f1aea5e2d/renoai-0.1.6.tar.gz | source | sdist | null | false | ab682ee06dc9efaa39095dc1eef50b04 | 8077fc4a89fcb181d4297789b00ddf4daade36a96cdb55d564d1424c539eb695 | 144576c3e10330ac975194aa98b92c2aca5dbb46bf63f8366218b43f1aea5e2d | null | [
"LICENSE"
] | 251 |
2.4 | dcscope | 2.25.2 | User interface for deformability cytometry (DC) | |DCscope|
===========
|PyPI Version| |Build Status| |Coverage Status| |Docs Status|
**DCscope** (formerly Shape-Out) is a graphical user interface for the
analysis and visualization of RT-DC datasets.
Documentation
-------------
The documentation, including the code reference and examples, is available at
`dcscope.readthedocs.io <https://dcscope.readthedocs.io>`__.
Installation
------------
Installers for Windows and macOS are available at the `release page <https://github.com/DC-analysis/DCscope/releases>`__.
If you have Python 3 installed, you can install DCscope with
::

    pip install dcscope
Citing DCscope
----------------
Please cite DCscope either in-line
::

    (...) using the analysis software DCscope (formerly Shape-Out) version 2.X.X
    (available at https://github.com/DC-analysis/DCscope).
or in a bibliography
::

    Paul Müller and others (2019), DCscope (formerly Shape-Out) version 2.X.X:
    Analysis software for real-time deformability cytometry [Software].
    Available at https://github.com/DC-analysis/DCscope.
and replace ``2.X.X`` with the version of DCscope that you used.
Testing
-------
::

    pip install -e .
    pip install -r tests/requirements.txt
    pytest tests
.. |DCscope| image:: https://raw.github.com/DC-analysis/DCscope/main/dcscope/img/splash.png
.. |PyPI Version| image:: https://img.shields.io/pypi/v/DCscope.svg
   :target: https://pypi.python.org/pypi/DCscope
.. |Build Status| image:: https://img.shields.io/github/actions/workflow/status/DC-analysis/DCscope/check.yml?branch=main
   :target: https://github.com/DC-analysis/DCscope/actions?query=workflow%3AChecks
.. |Coverage Status| image:: https://img.shields.io/codecov/c/github/DC-analysis/DCscope/main.svg
   :target: https://codecov.io/gh/DC-analysis/DCscope
.. |Docs Status| image:: https://img.shields.io/readthedocs/dcscope
   :target: https://readthedocs.org/projects/dcscope/builds/
| text/x-rst | Benedikt Hartmann, Eoghan O'Connell, Maximilian Schlögel, Paul Müller, Raghava Alajangi | null | null | Paul Müller <dev@craban.de> | null | RT-DC, DC, deformability, cytometry | [
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Intended Audience :: Science/Research"
] | [] | null | null | <4,>=3.9 | [] | [] | [] | [
"dclab[dcor,export,http,s3]>=0.67.4",
"h5py>=2.8.0",
"numpy>=1.21",
"pygments",
"pyqt6",
"pyqtgraph==0.14.0",
"requests>=2.31.0",
"scipy>=1.10.0"
] | [] | [] | [] | [
"source, https://github.com/DC-analysis/DCscope",
"tracker, https://github.com/DC-analysis/DCscope/issues",
"documentation, https://dcscope.readthedocs.io",
"changelog, https://github.com/DC-analysis/DCscope/blob/main/CHANGELOG"
] | twine/6.2.0 CPython/3.10.19 | 2026-02-18T23:16:51.605773 | dcscope-2.25.2.tar.gz | 9,800,166 | e2/70/9c445764317f912b43d49fe97099cc4c532b25207a7b301764642743f01d/dcscope-2.25.2.tar.gz | source | sdist | null | false | b55155d3fc8a18733959ee0fce1ba89e | 21cdb29bcc8a6fef73201295aeff7a978c2edc722508e1509e73864ccee195f3 | e2709c445764317f912b43d49fe97099cc4c532b25207a7b301764642743f01d | GPL-3.0-or-later | [
"LICENSE"
] | 267 |
2.4 | audio-workbench-player | 0.0.3 | Python wrapper for Audio Workbench Player (HTML/Streamlit embedding) | # audio-workbench-player (Python wrapper)
Python helper package to embed `audio-workbench-player` in Streamlit, Jupyter, and other HTML-capable UIs.
## Install (PyPI)
```bash
pip install audio-workbench-player
```
Optional demo dependencies:
```bash
pip install "audio-workbench-player[streamlit]"
pip install "audio-workbench-player[gradio]"
```
## Install (local dev)
```bash
pip install -e .
```
## Usage
```python
from audio_workbench_player import render_daw_player
html = render_daw_player(
audio_bytes,
iframe_height=320,
viewMode="spectrogram",
transportStyle="hero",
transportOverlay=True,
showOverview=False,
showFileOpen=False,
showStatusbar=False,
)
```
## Demo Features
- Presets: `Full DAW`, `Compact`, `Preview Waveform Hero`, `Preview Spectrogram Hero`, `Ultra Compact Hero`
- Advanced toggles for all relevant player sections
- Live options preview as JSON
## Streamlit demo
```bash
streamlit run demo_streamlit.py
```
## Gradio demo
```bash
pip install gradio
python demo_gradio.py
```
## License
GNU AGPL-3.0
| text/markdown | Perch Contributors | null | null | null | null | audio, player, streamlit, jupyter, embed, waveform, spectrogram | [
"Development Status :: 4 - Beta",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3 :: Only",
"Programming Language :: Python :: 3.9",
"Operating System :: OS Independent"
] | [] | null | null | >=3.9 | [] | [] | [] | [
"streamlit>=1.30; extra == \"streamlit\"",
"gradio>=4.0; extra == \"gradio\""
] | [] | [] | [] | [
"Homepage, https://github.com/LimitlessGreen/Audio-Workbench",
"Repository, https://github.com/LimitlessGreen/Audio-Workbench",
"Issues, https://github.com/LimitlessGreen/Audio-Workbench/issues"
] | twine/6.2.0 CPython/3.11.14 | 2026-02-18T23:16:45.863070 | audio_workbench_player-0.0.3.tar.gz | 64,303 | 57/c6/4b6f3c9a5cb52844572857421fd6d53a88156e38fdec943b4ac1eae296d9/audio_workbench_player-0.0.3.tar.gz | source | sdist | null | false | 10350b39f2dc2419b61eb7cc5be580ad | f961dc26d8f9d4733bfcd340c5a8a4616c3f26e16e6318a8a1b9d65f20458d49 | 57c64b6f3c9a5cb52844572857421fd6d53a88156e38fdec943b4ac1eae296d9 | AGPL-3.0-only | [
"LICENSE"
] | 285 |
2.4 | whoare | 0.2.8 | Another whois scraper | 
[](https://github.com/avdata99/whoare/releases)
[](https://github.com/avdata99/whoare/issues)
[](https://github.com/avdata99/whoare/pulls)
[](https://github.com/avdata99/whoare/blob/main/LICENSE)
[](https://pypi.org/project/whoare/)
[](https://github.com/avdata99/whoare/commits/main)
# A WhoIs parser
Just a `whois` parser
Available countries:
- `.ar`: Argentina
## Sample
```python
from whoare.whoare import WhoAre
wa = WhoAre()
wa.load('fernet.com.ar') # optional torify=True to run "torify whois ..."
wa.domain.base_name
'fernet'
wa.domain.zone
'com.ar'
wa.domain.full_name()
'fernet.com.ar'
wa.domain.registered
datetime.datetime(2020, 5, 7, 10, 44, 4, 210977)
wa.domain.expire
datetime.datetime(2021, 5, 7, 0, 0)
wa.registrant.name
'XXXX jose XXXXX'
wa.registrant.legal_uid
'20XXXXXXXX9'
wa.dnss[0].name
'ns2.sedoparking.com'
wa.dnss[1].name
'ns1.sedoparking.com'
```
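The `base_name`/`zone` split shown above amounts to matching the longest known zone suffix. A standalone sketch of that idea (not whoare's internal code; the zone list here is an illustrative subset):

```python
KNOWN_ZONES = ("com.ar", "tur.ar", "net.ar", "org.ar", "ar")  # illustrative subset

def split_domain(full_name: str):
    """Return (base_name, zone), preferring the longest matching zone."""
    for zone in sorted(KNOWN_ZONES, key=len, reverse=True):
        suffix = "." + zone
        if full_name.endswith(suffix):
            return full_name[: -len(suffix)], zone
    raise ValueError(f"unsupported zone: {full_name}")

print(split_domain("fernet.com.ar"))  # ('fernet', 'com.ar')
```

Trying the longest suffixes first is what keeps `fernet.com.ar` from being split as `fernet.com` + `ar`.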
## Get new domains
### Argentina
```python
from datetime import date
from whoare.zone_parsers.ar.news_from_blockchain import NewDomains
nd = NewDomains()
nd.data_path = '' # here
results = nd.get_from_date(date(2020, 3, 28))
{
'zonas': {
'com.ar': [
'3cconstrucciones.com.ar',
'4kids.com.ar'
],
'ar': [
'andamios.ar',
'apuesta.ar',
'camaras.ar'
],
'tur.ar': [
'villacarlospaz.tur.ar'
]
},
'errors': {}
}
```
| text/markdown | Anders Vazquez | andres@data99.com.ar | null | null | null | whois, domain, nic.ar | [
"Development Status :: 5 - Production/Stable",
"Intended Audience :: Developers",
"Topic :: Software Development :: Build Tools",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.8",
"Programming Language :: Python :: 3.9",
"Programm... | [] | https://github.com/avdata99/whoare | null | <4,>=3.6 | [] | [] | [] | [
"pytz",
"requests"
] | [] | [] | [] | [
"Bug Reports, https://github.com/avdata99/whoare/issues",
"Source, https://github.com/avdata99/whoare/"
] | twine/6.2.0 CPython/3.9.25 | 2026-02-18T23:16:22.059737 | whoare-0.2.8.tar.gz | 18,051 | fb/d4/fc61ea7aa8138c0603fa677bb52ccc18751adaf59d5db34cc299b2bd774c/whoare-0.2.8.tar.gz | source | sdist | null | false | e1c6478c5e26666834ec021cf9ca7839 | 34a4d1f827f241574882728e118b98c8b49628a8459cb3fdc821f88e113c0e87 | fbd4fc61ea7aa8138c0603fa677bb52ccc18751adaf59d5db34cc299b2bd774c | null | [
"LICENSE"
] | 383 |
2.4 | focus-mapper | 0.9.0 | Generate FinOps FOCUS™ compliant reports from any billing data using simple YAML mapping configuration. | # focus-mapper

[](https://codecov.io/gh/quickwind/focus-mapper)
Generate FinOps FOCUS compliant reports from pre-flattened billing data.
This project takes any tabular data (CSV/Parquet) and converts it to a FOCUS compliant report using a YAML mapping.
You can build mappings via:
- CLI mapping wizard (interactive prompts)
- Desktop GUI (for managing mappings and running generation/validation with previews)
## Table of Contents
- [General Description](#general-description)
- [Functionalities](#functionalities)
- [Architecture](#architecture)
- [Usage](#usage)
- [Install](#install)
- [What You Need](#what-you-need)
- [Generate (CLI)](#generate-cli)
- [Validate (CLI)](#validate-cli)
- [Mapping Wizard (CLI)](#mapping-wizard-cli)
- [Desktop GUI (Tkinter)](#desktop-gui-tkinter)
- [Use as a Library](#use-as-a-library)
- [High-Level API (Recommended)](#high-level-api-recommended)
- [Return Types](#return-types)
- [Exported Types](#exported-types)
- [Low-Level API](#low-level-api)
- [Tech Details](#tech-details)
- [Supported Spec Versions](#supported-spec-versions)
- [Populate Spec Versions](#populate-spec-versions)
- [External Spec Directory (Dev/Test Only)](#external-spec-directory-devtest-only)
- [Data Generator Configuration](#data-generator-configuration)
- [v1.3 Metadata Support](#v13-metadata-support)
- [Mapping YAML Specification](#mapping-yaml-specification)
- [Tests](#tests)
- [Building for Windows](#building-for-windows)
## General Description
`focus-mapper` turns a “source” dataset (your billing reports data) into a FOCUS dataset by applying a mapping YAML.
It can also validate an existing FOCUS dataset and produce a validation report.
## Functionalities
- Generate FOCUS dataset from source (CSV/Parquet) + mapping (YAML) with sidecar metadata and validation report output.
- Validate an existing FOCUS dataset.
- Validate mapping YAML before using it (catch config/spec issues early).
- Build mappings interactively via the CLI wizard.
- Manage/edit mappings and generate datasets via a Tkinter desktop GUI:
- Mapping list table with status and quick actions
- Mapping editor with spec-aware column help, operation configuration, previews, and validation configuration UI
- Generator output preview (first 100 rows), logs/progress, and a validation report viewer
- Persistent settings (e.g., external spec directory for dev/testing)
## Architecture
```mermaid
flowchart TD
%% Define styles
classDef file fill:#e1f5fe,stroke:#01579b,stroke-width:2px;
classDef tool fill:#fff3e0,stroke:#ff6f00,stroke-width:2px;
classDef artifact fill:#e8f5e9,stroke:#2e7d32,stroke-width:2px;
%% Subgraphs
subgraph Input_Files [Inputs]
Sample["Sample Data<br/>(CSV/Parquet)"]:::file
Source["Source Data<br/>(CSV/Parquet)"]:::file
Spec["FOCUS Spec<br/>(Built-in / Custom)"]:::file
ExistingFocus["Existing FOCUS Data<br/>(CSV/Parquet)"]:::file
end
subgraph Tools ["focus-mapper (CLI / GUI)"]
direction TB
Wizard["Mapping Wizard<br/>(Create/Edit Mapping)"]:::tool
Generator["Generator Mode<br/>(Generate Data)"]:::tool
Validator["Validator Mode<br/>(Validate Data)"]:::tool
end
subgraph Output_Artifacts [Outputs]
direction TB
YAML["Mapping YAML"]:::artifact
FD["FOCUS Dataset<br/>(CSV/Parquet)"]:::artifact
Meta["Metadata JSON"]:::artifact
Report["Validation Report JSON"]:::artifact
end
%% -- Vertical Ordering Enforcement --
%% Invisible links (~~~) help the layout engine place blocks top-to-bottom
Sample ~~~ Wizard
Wizard ~~~ YAML
YAML ~~~ Generator
Generator ~~~ FD
%% -- Flow 1: Wizard --
Sample -->|1| Wizard
Spec -->|1| Wizard
Wizard -->|2| YAML
%% -- Flow 2: Generator --
Source -->|3| Generator
YAML -->|3| Generator
Generator -->|4| FD
Generator -->|4| Meta
Generator -->|4| Report
%% -- Flow 3: Validator --
ExistingFocus -->|5| Validator
Validator -->|6| Report
%% Implicit validation
FD -.-> Validator
```
## Usage
### Install
```bash
pip install focus-mapper
# With Parquet support
pip install "focus-mapper[parquet]"
# Force Pandas 1.5 (legacy support)
pip install "focus-mapper[pandas15]"
focus-mapper --help
```
### What You Need
- A flat input table (CSV or Parquet).
- A mapping YAML that tells the tool how to create each FOCUS column.
If you don't have a mapping yet, start with the wizard.
### Generate (CLI)
```bash
python -m focus_mapper generate \
--input telemetry.csv \
--mapping mapping.yaml \
--output focus.csv \
--metadata-out focus.focus-metadata.json \
--validation-out focus.validation.json
```
Notes:
- `--spec` is optional; by default the spec version comes from the mapping YAML.
Outputs:
- `focus.csv` (FOCUS report)
- `focus.focus-metadata.json` (metadata)
- `focus.validation.json` (validation report)
### Validate (CLI)
```bash
python -m focus_mapper validate \
--spec v1.2 \
--input focus.csv \
--out focus.validation.json
```
### Mapping Wizard (CLI)
Interactive wizard to create a mapping YAML based on a sample input file:
```bash
focus-mapper-wizard \
--spec v1.2 \
--input telemetry.csv \
--output mapping.yaml
```
You can also run the wizard with no arguments and it will prompt for values:
```bash
focus-mapper-wizard
```
Optional flags:
- `--include-recommended` to include Recommended columns
- `--include-conditional` to include Conditional columns
- `--include-optional` to include Optional columns
Tip: column name prompts support tab‑completion (case‑insensitive).
The wizard will also show a summary of default validation settings and let you override them globally or per column.
For standard FOCUS columns, the wizard does not offer a `cast` option because the generator will coerce to the spec type automatically.
### Desktop GUI (Tkinter)
Launch the GUI:
```bash
python -m focus_mapper.gui.main
```
If Tkinter isn’t installed:
- macOS (Homebrew): `brew install python-tk`
- Linux (Debian/Ubuntu): `sudo apt-get install python3-tk`
- Windows: reinstall Python and ensure “tcl/tk and IDLE” is checked
### Use as a Library
Install from PyPI:
```bash
pip install focus-mapper
```
### High-Level API (Recommended)
The library provides three main entrypoint functions:
```python
from focus_mapper import generate, validate, validate_mapping
# Generate FOCUS data from input + mapping
result = generate(
input_data="telemetry.csv", # Path or DataFrame
mapping="mapping.yaml", # Path or MappingConfig
output_path="focus.parquet", # Output file path
spec_version="v1.3", # Optional, defaults to mapping version
dataset_instance_complete=True, # v1.3+ metadata
generator_name="my-tool", # Custom generator name
generator_version="2.0.0", # Custom generator version
)
if result.is_valid:
print(f"Generated {len(result.output_df)} rows")
else:
print(f"Validation errors: {result.validation.summary.errors}")
# Validate existing FOCUS data
report = validate(
data="focus.parquet", # Path or DataFrame
spec_version="v1.3",
output_path="validation.json",
write_report=True,
)
for finding in report.findings:
if finding.severity == "ERROR":
print(f"Error in {finding.column}: {finding.message}")
# Validate mapping YAML before use
mapping_result = validate_mapping("mapping.yaml")
if not mapping_result.is_valid:
for error in mapping_result.errors:
print(f"Error: {error}")
```
### Return Types
| Function | Returns |
|----------|---------|
| `generate()` | `GenerationResult` with `output_df`, `validation`, `metadata`, `is_valid` |
| `validate()` | `ValidationReport` with `findings`, `summary` |
| `validate_mapping()` | `MappingValidationResult` with `is_valid`, `errors`, `warnings`, `mapping` |
### Exported Types
```python
from focus_mapper import (
# Main API functions
generate, validate, validate_mapping,
# Result types
GenerationResult, MappingValidationResult, ValidationReport,
ValidationFinding, ValidationSummary, SidecarMetadata,
# Config types
MappingConfig, FocusSpec,
# Loaders
load_mapping_config, load_focus_spec,
)
```
### Low-Level API
For more control, use the individual modules:
```python
from pathlib import Path
import pandas as pd
from focus_mapper.mapping.config import load_mapping_config
from focus_mapper.mapping.executor import generate_focus_dataframe
from focus_mapper.spec import load_focus_spec
from focus_mapper.validate import validate_focus_dataframe
from focus_mapper.metadata import build_sidecar_metadata, write_sidecar_metadata
df = pd.read_csv("input.csv")
mapping = load_mapping_config(Path("mapping.yaml"))
spec = load_focus_spec("v1.2")
out = generate_focus_dataframe(df, mapping=mapping, spec=spec)
report = validate_focus_dataframe(out, spec=spec, mapping=mapping)
```
Notes:
- Parquet support requires `pyarrow` (`pip install "focus-mapper[parquet]"`).
- Supported specs: `v1.1`, `v1.2`, and `v1.3`. `v1.0` is not supported.
- Validation overrides require passing `mapping` to `validate_focus_dataframe`.
## Tech Details
### Supported Spec Versions
Default spec version is `v1.3`. Supported versions are `v1.1`, `v1.2`, and `v1.3`. `v1.0` is not supported.
### Populate Spec Versions
Use the tool below to download and store a specific spec version from the upstream repository:
```bash
python tools/populate_focus_spec.py --version 1.1
python tools/populate_focus_spec.py --version 1.2
python tools/populate_focus_spec.py --version 1.3
```
If a version tag doesn't exist, override the git ref:
```bash
python tools/populate_focus_spec.py --version 1.2 --ref main
```
Then use `--spec v1.2` (or `v1.1`, `v1.3`) in the CLI.
### External Spec Directory (Dev/Test Only)
You can override bundled specs with custom JSON files for testing or spec development (not recommended for production use).
**Directory format:**
```
/path/to/specs/
├── focus_spec_v1.1.json # Override v1.1
├── focus_spec_v1.2.json # Override v1.2
└── focus_spec_v1.3.json # Override v1.3 (use as many as needed)
```
**Usage options (in priority order):**
```bash
# 1. CLI flag (highest priority)
focus-mapper generate --spec v1.3 --spec-dir /path/to/specs ...
# 2. Environment variable
export FOCUS_SPEC_DIR=/path/to/specs
focus-mapper generate --spec v1.3 ...
# 3. Library API
from focus_mapper import load_focus_spec
spec = load_focus_spec("v1.3", spec_dir="/path/to/specs")
```
GUI note:
- The GUI Settings screen persists `spec_dir` to `~/.focus_mapper/settings.json` and sets `FOCUS_SPEC_DIR` for the current process.
If a version isn't found in the external directory, it falls back to bundled specs.
### Data Generator Configuration
You can customize the `DataGenerator` metadata field (used in v1.3+) using environment variables. This is useful when wrapping `focus-mapper` in another tool.
**Environment Variables:**
- `FOCUS_DATA_GENERATOR_NAME`: Override the generator name (default: `focus-mapper`)
- `FOCUS_DATA_GENERATOR_VERSION`: Override the generator version (default: library version)
**Priority Order:**
1. Explicit arguments (CLI `--data-generator` or API `generator_name`)
2. Environment variables
3. Default values
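For example, a wrapper tool can export the environment variables before invoking `focus-mapper`; the variable names come from the list above, while the wrapper name and version here are purely illustrative:

```python
import os

# Set before invoking focus-mapper so the v1.3+ DataGenerator metadata
# identifies the wrapping tool instead of the defaults.
os.environ["FOCUS_DATA_GENERATOR_NAME"] = "acme-billing-pipeline"  # illustrative name
os.environ["FOCUS_DATA_GENERATOR_VERSION"] = "2.1.0"               # illustrative version
```

Explicit CLI/API arguments still win over these variables, per the priority order above.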
### v1.3 Metadata Support
For v1.3 datasets, the library generates the new collection-based metadata structure:
```json
{
"DatasetInstance": [{ "DatasetInstanceId": "...", "DatasetInstanceName": "...", "FocusDatasetId": "CostAndUsage" }],
"Recency": [{ "DatasetInstanceId": "...", "DatasetInstanceComplete": true, "TimeSectors": [...] }],
"Schema": [{ "SchemaId": "...", "FocusVersion": "1.3", "ColumnDefinition": [...] }],
"DataGenerator": { "DataGenerator": "focus-mapper" }
}
```
Key v1.3 features:
- **TimeSectors**: Per-period completeness tracking for CostAndUsage datasets
- **DatasetInstanceComplete**: Overall dataset completeness flag
- **DatasetInstanceId**: Deterministic hash for dataset identification
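The README does not spell out which fields feed the `DatasetInstanceId` hash; generically, a deterministic ID of this kind is produced by hashing a canonical serialization of the identifying fields. The sketch below is an illustration of that idea only, not focus-mapper's actual algorithm:

```python
import hashlib
import json

def deterministic_instance_id(fields: dict) -> str:
    # Canonicalize with sorted keys so identical fields always
    # serialize (and therefore hash) identically.
    payload = json.dumps(fields, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(payload.encode("utf-8")).hexdigest()

# Same inputs yield the same id, regardless of key order.
a = deterministic_instance_id({"name": "My Report", "focus_version": "1.3"})
b = deterministic_instance_id({"focus_version": "1.3", "name": "My Report"})
assert a == b
```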
### Mapping YAML Specification
The mapping file is a YAML document that defines how your input columns are transformed into FinOps FOCUS compliant columns.
#### Core Concept: The Pipeline
Each column in the `mappings` section is defined as a series of **steps**. Steps are executed in order, and the output of one step is passed as the input to the next.
#### Top-level Structure
```yaml
spec_version: "1.2"
creation_date: "2026-02-07T09:00:00Z" # v1.2+: ISO8601 timestamp
dataset_type: "CostAndUsage" # v1.3+: CostAndUsage or ContractCommitment
dataset_instance_name: "My Report" # v1.3+: Human-readable name
validation:
default:
mode: permissive
datetime:
format: null
decimal:
precision: null
scale: null
integer_only: false
min: null
max: null
string:
min_length: null
max_length: null
allow_empty: true
trim: true
json:
object_only: false
allowed_values:
case_insensitive: false
nullable:
allow_nulls: null
presence:
enforce: true
mappings:
# Standard FOCUS column name
BilledCost:
description: "Optional documentation for metadata"
steps:
- op: from_column
column: "raw_cost"
- op: cast
to: "decimal"
scale: 2
validation:
decimal:
precision: 12
scale: 2
min: 0
```
#### Validation Overrides (Optional)
Validation is **permissive by default** unless you define `validation.default`. You can override validation for individual columns inside each mapping:
```yaml
spec_version: "1.2"
validation:
default:
mode: permissive
datetime:
format: null
decimal:
precision: null
scale: null
integer_only: false
min: 0
max: null
string:
min_length: null
max_length: 120
allow_empty: true
trim: true
json:
object_only: false
allowed_values:
case_insensitive: false
nullable:
allow_nulls: null
presence:
enforce: true
mappings:
BillingPeriodStart:
steps:
- op: sql
expr: "TRY_CAST(billing_period || '-01' AS TIMESTAMPTZ)"
- op: cast
to: datetime
validation:
mode: strict
datetime:
format: "%Y-%m-%dT%H:%M:%SZ"
BilledCost:
steps:
- op: from_column
column: billed_cost
validation:
decimal:
precision: 12
scale: 2
min: 0
Tags:
steps:
- op: from_column
column: tags_json
validation:
json:
object_only: true
```
Validation override keys:
- `mode`: `permissive` or `strict`
- `datetime.format`: strftime format; if omitted, permissive uses inference
- `decimal`: `precision`, `scale`, `integer_only`, `min`, `max`
- `string`: `min_length`, `max_length`, `allow_empty`, `trim`
- `json.object_only`: require JSON objects only
- `allowed_values.case_insensitive`: case‑insensitive matching (default: false)
- `nullable.allow_nulls`: override spec nullability
- `presence.enforce`: skip "missing column" checks
#### Validation Severity by Feature Level
The severity of a missing-column finding is determined by the column's feature level:
| Feature Level | Severity | Finding Generated? |
|---------------|----------|-------------------|
| Mandatory | ERROR | Yes |
| Recommended | WARN | Yes |
| Conditional | INFO | Yes |
| Optional | - | No |
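The table above amounts to a small lookup. A hypothetical sketch of the rule in Python (names here are illustrative, not the library's internals):

```python
# Severity of a "missing column" finding by FOCUS feature level.
# None means no finding is generated. Mirrors the table above.
MISSING_COLUMN_SEVERITY = {
    "Mandatory": "ERROR",
    "Recommended": "WARN",
    "Conditional": "INFO",
    "Optional": None,
}

def missing_column_finding(column: str, feature_level: str):
    severity = MISSING_COLUMN_SEVERITY[feature_level]
    if severity is None:
        return None  # Optional columns produce no finding
    return {"column": column, "severity": severity, "message": "missing column"}
```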
#### Example Input (CSV)
```csv
billing_period,billing_date,billing_hour,billing_currency,billed_cost,tax_amount,charge_category,charge_class,charge_description,alt_description,tag_a,tag_b,pricing_quantity,pricing_unit
2026-01,2026-01-30,4,USD,12.34,1.23,Usage,Normal,Compute usage,,core,vm,1,Hours
2026-01,2026-01-30,5,USD,5.00,0.50,Tax,Normal,Sales tax,Alt tax desc,billing,tax,1,Hours
```
#### Example Mapping (YAML)
```yaml
spec_version: "1.2"
mappings:
BillingPeriodStart:
steps:
- op: pandas_expr
expr: 'pd.to_datetime(df["billing_period"] + "-01", utc=True)'
- op: cast
to: datetime
BillingPeriodEnd:
steps:
- op: pandas_expr
expr: 'pd.to_datetime((df["billing_period"].str.slice(0,4).astype(int) + ((df["billing_period"].str.slice(5,7).astype(int) + 1) > 12).astype(int)).astype(str) + "-" + (((df["billing_period"].str.slice(5,7).astype(int) + 1 - 1) % 12) + 1).astype(str).str.replace(r"^(\\d)$", r"0\\1", regex=True) + "-01", utc=True)'
- op: cast
to: datetime
ChargePeriodStart:
steps:
- op: sql
expr: "TRY_CAST(billing_date || 'T' || LPAD(CAST(billing_hour AS VARCHAR), 2, '0') || ':00:00Z' AS TIMESTAMPTZ)"
- op: cast
to: datetime
ChargePeriodEnd:
steps:
- op: sql
expr: "TRY_CAST(billing_date || 'T' || LPAD(CAST(billing_hour AS VARCHAR), 2, '0') || ':00:00Z' AS TIMESTAMPTZ) + INTERVAL 1 HOUR"
- op: cast
to: datetime
EffectiveCost:
steps:
- op: pandas_expr
expr: "df['billed_cost'] + df['tax_amount']"
- op: cast
to: decimal
scale: 2
x_TagConcat:
description: "Concat tag_a and tag_b"
steps:
- op: concat
columns: ["tag_a", "tag_b"]
sep: "-"
```
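The `BillingPeriodEnd` expression above is dense because the restricted `pandas_expr` environment exposes only a few names; what it computes is simply "start of the next month". Outside the tool, plain pandas expresses the same transformation directly:

```python
import pandas as pd

periods = pd.Series(["2026-01", "2026-12"])
start = pd.to_datetime(periods + "-01", utc=True)
# MonthBegin(1) rolls each month start forward to the next month start,
# handling the December -> January year carry automatically.
end = start + pd.offsets.MonthBegin(1)
```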
#### Operation Reference
| Operation | Description | Parameters | Example |
|-----------|-------------|------------|---------|
| `from_column` | Initialize the pipeline from a source column. Use this as the first step when you want to transform an input field. | `column` (string; input column name) | `- op: from_column`<br>` column: "cost"` |
| `const` | Create a column with the same value for every row (including `null`). Useful for static metadata. | `value` (any) | `- op: const`<br>` value: "Acme"` |
| `null` | Create a column filled with null values. Use quoted `"null"` in YAML. | (none) | `- op: "null"` |
| `coalesce` | Pick the first non-null value across multiple columns, left to right. | `columns` (list of strings) | `- op: coalesce`<br>` columns: ["a", "b"]` |
| `map_values` | Replace values using a lookup dictionary. If a value is missing, use `default` (or null if not set). Can start from `column` or the current series. | `mapping` (dict), `default` (optional), `column` (optional) | `- op: map_values`<br>` column: "charge_category"`<br>` mapping: {"Usage": "U", "Tax": "T"}` |
| `concat` | Join multiple columns into a single string. Nulls are treated as empty strings. | `columns` (list), `sep` (string, default "") | `- op: concat`<br>` columns: ["tag_a", "tag_b"]`<br>` sep: "-"` |
| `cast` | Convert the current series to a specific type. Use `decimal` for money and `datetime` for timestamps. | `to` (string: `string\|float\|int\|datetime\|decimal`), `scale` (int, decimal only), `precision` (int, decimal only) | `- op: cast`<br>` to: "decimal"`<br>` scale: 2`<br>` precision: 12` |
| `round` | Round the current numeric series to `ndigits`. | `ndigits` (int, default 0) | `- op: round`<br>` ndigits: 2` |
| `math` | Row-wise arithmetic across columns or constants. Supports `add`, `sub`, `mul`, `div`. Use `operands` to list inputs. | `operator` (string), `operands` (list of `{column}` or `{const}`) | `- op: math`<br>` operator: add`<br>` operands: [{column: "cost"}, {column: "tax"}]` |
| `when` | Conditional assignment: if `column == value` then `then`, else `else`. | `column`, `value`, `then`, `else` (optional) | `- op: when`<br>` column: "charge_category"`<br>` value: "Tax"`<br>` then: "Y"`<br>` else: "N"` |
| `sql` | **Recommended.** Execute a DuckDB SQL expression. Cleaner syntax and better performance than `pandas_expr`. Use column names directly. | `expr` (string) or `query` (full SQL) | `- op: sql`<br>` expr: "a + b"` |
| `pandas_expr` | Evaluate a safe pandas expression. Use `df` for the input DataFrame, `current` for the prior series, and `pd` for pandas helpers. Must return a Series or scalar. | `expr` (string) | `- op: pandas_expr`<br>` expr: "df['a'] + df['b']"` |
#### Automatic Dry-Run for sql and pandas_expr
When defining `sql` or `pandas_expr` steps in the wizard, you get a **real-time preview**:
- **Live Dry-Run**: The expression is applied to the first 100 rows of input.
- **Instant Feedback**: Shows first 5 result values on success, or error details on failure.
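A minimal sketch of this dry-run behavior, assuming a plain `eval` on a head slice (illustrative only, not the wizard's actual implementation):

```python
import pandas as pd

def dry_run_pandas_expr(expr: str, df: pd.DataFrame, n_rows: int = 100, n_show: int = 5):
    """Apply an expression to the first n_rows; return a preview or the error."""
    sample = df.head(n_rows)
    try:
        # Expose only df and pd, with no builtins, as in the restricted environment.
        result = eval(expr, {"__builtins__": {}}, {"df": sample, "pd": pd})
        return ("ok", list(pd.Series(result).head(n_show)))
    except Exception as exc:
        return ("error", f"{type(exc).__name__}: {exc}")

df = pd.DataFrame({"billed_cost": [1.0, 2.0], "tax_amount": [0.1, 0.2]})
status, preview = dry_run_pandas_expr("df['billed_cost'] + df['tax_amount']", df)
```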
#### sql Operation (Recommended)
The `sql` operation uses DuckDB SQL and is recommended over `pandas_expr` for:
- Cleaner, more readable syntax
- Better performance on large datasets
- Familiar SQL semantics
```yaml
# Arithmetic
EffectiveCost:
steps:
- op: sql
expr: "billed_cost + tax_amount"
# Date/time with INTERVAL
ChargePeriodEnd:
steps:
- op: sql
expr: "TRY_CAST(billing_date || 'T00:00:00Z' AS TIMESTAMPTZ) + INTERVAL 1 HOUR"
# Conditional (CASE)
x_IsTax:
steps:
- op: sql
expr: "CASE WHEN charge_category = 'Tax' THEN 'Y' ELSE 'N' END"
# Full Query Mode
# Use 'query' for complex SQL involving aggregations or window functions.
# You must SELECT from the 'src' table and return a single column.
x_RunningTotal:
steps:
- op: sql
query: "SELECT SUM(billed_cost) OVER (PARTITION BY billing_account_id ORDER BY billing_date) FROM src"
```
**Safety Note:** `sql` queries must start with `SELECT` or `WITH` to ensure read-only operations.
**Warning:** Using `ORDER BY` in your query (or within window functions) may change the row order of the result relative to the input DataFrame. Make sure your validation and downstream logic account for potential row reordering if you rely on index alignment.
See [DuckDB SQL docs](https://duckdb.org/docs/sql/introduction) for available functions.
#### pandas_expr Safety Notes
`pandas_expr` is evaluated in a restricted environment:
- Available names: `df`, `pd`, `current`, `str`, `int`, `float`
- No builtins, no private/dunder attribute access
- Only a safe allowlist of pandas methods is permitted
If you need more pandas methods, add them to the allowlist in `src/focus_mapper/mapping/ops.py`.
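As an illustration of this kind of sandboxing (a simplified sketch under stated assumptions, not the library's actual code in `ops.py`):

```python
import pandas as pd

def safe_eval(expr: str, df: pd.DataFrame, current=None):
    # Reject private/dunder attribute access up front.
    if "__" in expr:
        raise ValueError("dunder access is not allowed")
    # Expose only the documented names, with no builtins.
    env = {"df": df, "pd": pd, "current": current,
           "str": str, "int": int, "float": float}
    return eval(expr, {"__builtins__": {}}, env)

df = pd.DataFrame({"a": [1, 2]})
doubled = safe_eval("df['a'] * 2", df)   # allowed
try:
    safe_eval("df.__class__", df)        # rejected
except ValueError:
    pass
```

A real implementation would also walk the expression's AST to enforce a method allowlist; the string check above is only the crudest approximation.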
#### Extension Columns
Custom columns MUST start with the `x_` prefix. They will be appended to the output dataset and documented in the generated metadata if a `description` is provided.
#### Skip a Column
If you skip a column in the wizard, it will not be mapped and will remain null in the output.
### Tests
```bash
pytest
```
Coverage is enabled by default. HTML report is written to `htmlcov/index.html`.
GUI tests:
- GUI tests use Tk and run behind a guard: set `RUN_GUI_TESTS=1` to enable them locally.
- In CI on Linux, tests run under `xvfb` so Tk can run headlessly.
### Building for Windows
To build standalone Windows executables (`focus-mapper.exe` and `focus-mapper-wizard.exe`), use [PyInstaller](https://pyinstaller.org/).
1. **Install PyInstaller:**
```bash
pip install -e ".[build]"
```
2. **Run the Build:**
Use the provided build scripts in `tools/windows/`:
**Command Prompt:**
```cmd
tools\windows\build.bat
```
**PowerShell:**
```powershell
.\tools\windows\build.ps1
```
3. **Locate Executables:**
The generated `.exe` files will be in the `dist-win/` directory:
- `dist-win/focus-mapper.exe`
- `dist-win/focus-mapper-wizard.exe`
| text/markdown | null | Wallace Wei <wallacewei@live.cn> | null | null | Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "[]"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright 2026 Wallace Wei (wallacewei@live.cn)
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
| FinOps, FOCUS, cost, billing, mapping | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"License :: OSI Approved :: Apache Software License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3 :: Only",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Program... | [] | null | null | >=3.10 | [] | [] | [] | [
"pandas<3,>=1.5",
"PyYAML>=5.4",
"pyreadline3>=3.5; platform_system == \"Windows\"",
"iso4217parse>=0.6",
"duckdb>=1.0",
"pandas<2,>=1.5; extra == \"pandas15\"",
"numpy<2; extra == \"pandas15\"",
"pyarrow>=14.0; extra == \"parquet\"",
"pytest>=8.0; extra == \"dev\"",
"pytest-cov>=5.0; extra == \"d... | [] | [] | [] | [
"Homepage, https://github.com/quickwind/focus-mapper",
"Repository, https://github.com/quickwind/focus-mapper"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-18T23:15:50.557064 | focus_mapper-0.9.0.tar.gz | 462,539 | 5b/04/b911153f195409a9871fd39b608bf27d83849dfa0ac82ca17bbbdc894364/focus_mapper-0.9.0.tar.gz | source | sdist | null | false | ac3cb70871aa2bbc9c3e1a045d5a6294 | 6025782eeb89117f91a0bab2c185befc1781531a5dcc9d3d4a75c766b247690b | 5b04b911153f195409a9871fd39b608bf27d83849dfa0ac82ca17bbbdc894364 | null | [
"LICENSE"
] | 224 |
2.4 | hexz | 0.5.0 | High-performance snapshot storage library with compression and encryption | # hexz
[](https://pypi.org/project/hexz/)
[](https://pypi.org/project/hexz/)
[](https://github.com/hexz-org/hexz/blob/main/LICENSE)
Python library for reading, writing, and streaming [Hexz](https://github.com/hexz-org/hexz) snapshots — a seekable, deduplicated compression format built in Rust.
```bash
pip install hexz
```
## Quick Start
### Reading snapshots
```python
import hexz
with hexz.open("data.hxz") as reader:
data = reader.read() # read entire snapshot
chunk = reader.read(4096) # read 4KB from current position
reader.seek(1024) # seek to offset
block = reader[100:200] # slice notation
```
### Writing snapshots
```python
import hexz
with hexz.open("output.hxz", mode="w", compression="lz4") as writer:
    writer.add_file("disk.img")
    writer.add_bytes(b"extra data")
```
### Building from files
```python
import hexz
# Build with a profile preset (ml, eda, embedded, generic, archival)
metadata = hexz.build("source.img", "output.hxz", profile="ml")
```
### Converting from other formats
```python
import hexz
# Convert tar, HDF5, or WebDataset archives
hexz.convert("dataset.tar", "dataset.hxz")
hexz.convert("data.h5", "data.hxz") # requires pip install hexz[hdf5]
```
### Remote storage
```python
import hexz
# Stream from S3 (only fetches needed blocks)
reader = hexz.open("s3://bucket/data.hxz", s3_region="us-east-1")
chunk = reader.read(4096)
# HTTP streaming
reader = hexz.open("https://example.com/data.hxz")
```
## API
### Core I/O
| Function / Class | Description |
|---|---|
| `hexz.open(path, mode="r", **opts)` | Open a snapshot for reading or writing |
| `Reader(path, ...)` | Read snapshots with file-like interface (seek, read, tell, slice) |
| `AsyncReader.create(path, ...)` | Async reader for asyncio workflows |
| `Writer(path, ...)` | Create new snapshots with compression and deduplication |
### Data operations
| Function | Description |
|---|---|
| `build(source, output, profile, ...)` | Build snapshot from files with preset profiles |
| `convert(input, output, format, ...)` | Convert tar/HDF5/WebDataset to Hexz |
| `inspect(path)` | Get snapshot metadata (compression, size, block count) |
| `verify(path, ...)` | Verify integrity and optional cryptographic signature |
### Array support
| Function / Class | Description |
|---|---|
| `read_array(source, offset, shape, dtype)` | Read NumPy array from snapshot |
| `write_array(dest, array, ...)` | Write NumPy array to snapshot |
| `ArrayView(path, shape, dtype)` | Memory-mapped array access with slicing |
### ML integration
| Class | Description |
|---|---|
| `Dataset(path, ...)` | PyTorch `Dataset` with caching and prefetching |
```python
from hexz import Dataset
from torch.utils.data import DataLoader
dataset = Dataset("s3://bucket/train.hxz", cache_size_mb=512)
loader = DataLoader(dataset, batch_size=32, num_workers=4, shuffle=True)
for batch in loader:
    train_step(batch)
```
### Cryptographic operations
```python
import hexz
hexz.keygen("private.key", "public.key") # generate Ed25519 keypair
hexz.sign("snapshot.hxz", "private.key") # sign a snapshot
hexz.verify("snapshot.hxz", "public.key") # verify signature
```
## Optional dependencies
```bash
pip install hexz[numpy] # NumPy array support
pip install hexz[torch] # PyTorch Dataset integration
pip install hexz[hdf5] # HDF5 conversion (h5py)
pip install hexz[ml] # NumPy + PyTorch
pip install hexz[full] # All optional dependencies
```
## Compression
LZ4 is always available. Zstd and S3 streaming are included by default in PyPI wheels.
| Algorithm | Speed | Ratio | Default |
|---|---|---|---|
| LZ4 | Fast (~2-3 GB/s) | Moderate | Always included |
| Zstd | Moderate | High | Yes |
## Requirements
- Python 3.8+
- Linux (x86_64, aarch64), macOS (x86_64, Apple Silicon), or Windows (x86_64)
## Links
- [GitHub](https://github.com/hexz-org/hexz)
- [Documentation](https://hexz-org.github.io/hexz/)
- [CLI Tool](https://github.com/hexz-org/hexz/releases) — `hexz` command-line binary
| text/markdown; charset=UTF-8; variant=GFM | Will McCallion | null | null | null | null | null | [
"Programming Language :: Rust",
"Programming Language :: Python :: Implementation :: CPython",
"Programming Language :: Python :: Implementation :: PyPy",
"Programming Language :: Python :: 3"
] | [] | null | null | >=3.8 | [] | [] | [] | [
"h5py>=3.0; extra == \"convert\"",
"pytest>=7.0; extra == \"dev\"",
"pytest-asyncio>=0.21; extra == \"dev\"",
"pytest-benchmark>=4.0; extra == \"dev\"",
"hypothesis>=6.0; extra == \"dev\"",
"pytest-timeout>=2.0; extra == \"dev\"",
"pytest-mock>=3.0; extra == \"dev\"",
"moto[server]>=5.0; extra == \"de... | [] | [] | [] | [] | twine/6.1.0 CPython/3.13.7 | 2026-02-18T23:15:17.085936 | hexz-0.5.0.tar.gz | 398,953 | 68/55/a73b3f89bb41db75d7ba7d5bf0888625810906f7706451c94d4ec38367cb/hexz-0.5.0.tar.gz | source | sdist | null | false | aeb5b8f93fe3df334f427a3f5a20c038 | 616b3841b8e20e6ab5c76b4d6bd7f1689c17eda391207d0ef5d405781847d754 | 6855a73b3f89bb41db75d7ba7d5bf0888625810906f7706451c94d4ec38367cb | null | [] | 677 |
2.4 | easyhaproxy | 6.0.0 | HAProxy label based routing with service discovery for Docker, Swarm, and Kubernetes | # EasyHAProxy
[](https://github.com/sponsors/byjg)
[](http://opensource.byjg.com)
[](https://github.com/byjg/docker-easy-haproxy/actions/workflows/build.yml)
[](https://github.com/byjg/docker-easy-haproxy/)
[](https://opensource.byjg.com/opensource/licensing.html)
[](https://github.com/byjg/docker-easy-haproxy/releases/)
[](https://opensource.byjg.com/helm)
[](https://artifacthub.io/packages/search?repo=byjg)

## Service discovery for HAProxy
EasyHAProxy dynamically creates the `haproxy.cfg` based on metadata collected from your workloads (Docker labels, Swarm service labels, or Kubernetes ingress annotations).
EasyHAProxy can detect and configure HAProxy automatically on the following platforms:
- Docker
- Docker Swarm
- Kubernetes
- Static YAML definitions (`EASYHAPROXY_DISCOVER=static`)
## Who is using EasyHAProxy?
EasyHAProxy is used by several projects:
- Dokku
- MicroK8s
- DigitalOcean Marketplace
See detailed instructions on how to install below.
## EasyHAProxy Mission
Easy to set up, with minimal configuration required to access numerous features.
## Features
EasyHAProxy discovers services based on Docker (or Swarm) labels and Kubernetes ingress annotations, then dynamically builds the `haproxy.cfg`. Its main features are:
- Support Automatic Certificate Management Environment (ACME) protocol compatible with Let's Encrypt and other CAs.
- Set your custom SSL certificates
- Balance traffic between multiple replicas
- Set SSL policies (`strict`, `default`, `loose`) via `EASYHAPROXY_SSL_MODE`.
- Set up HAProxy to listen to TCP.
- Add redirects.
- Enable/disable Stats on port 1936 with a custom password.
- Enable/disable custom errors.
Also, it is possible to configure HAProxy from a simple YAML file instead of writing a `haproxy.cfg` file.
## How Does It Work?
You don't need to change your current infrastructure and don't need to learn the HAProxy configuration.
The steps are:
- Run the EasyHAProxy container;
- Add some labels to the containers you want to be parsed by EasyHAProxy (see detailed instructions below);
- EasyHAProxy will automatically detect the containers, set up, and reload the HAProxy configurations for you without downtime.
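The steps above can be sketched as a minimal Docker Compose file. This is an illustrative sketch only: the image name, the `docker` discovery value, and especially the label keys (`easyhaproxy.http.host`, `easyhaproxy.http.port`) are assumptions here - consult the container labels documentation for the exact keys your version expects.

```yaml
version: "3"
services:
  easyhaproxy:
    image: byjg/easy-haproxy          # assumed image name
    environment:
      EASYHAPROXY_DISCOVER: docker    # watch Docker containers for labels
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock   # needed for discovery

  myapp:
    image: my/webapp                  # hypothetical application image
    labels:
      # Illustrative label names - check docs/container-labels.md
      easyhaproxy.http.host: example.org
      easyhaproxy.http.port: 80
```

With both services up, EasyHAProxy would detect `myapp` via its labels and regenerate the HAProxy configuration without a restart.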
## Detailed Instructions
For detailed instructions on how to use EasyHAProxy, follow the instructions for the platform you want to use:
[](docs/kubernetes.md)
[](docs/swarm.md)
[](docs/docker.md)
[](docs/static.md)
Or you can install using tools:
[](docs/helm.md)
[](docs/microk8s.md)
[](docs/dokku.md)
[](docs/digitalocean.md)
## Special Topics
Once you have EasyHAProxy set up, it is time to go deeper:
- [Custom SSL](docs/ssl.md)
- [Automatic Certificate Issuing](docs/acme.md) (e.g. Let's Encrypt)
## Configuration Reference
Detailed configuration guides for advanced setups:
- [Container Labels](docs/container-labels.md) - Configure Docker/Swarm containers with labels
- [Environment Variables](docs/environment-variable.md) - Configure EasyHAProxy behavior
- [Volumes](docs/volumes.md) - Map volumes for certificates, config, and custom files
- [Plugins](docs/plugins.md) - Extend HAProxy with plugins ([Development Guide](docs/plugin-development.md))
- [JWT Validator](docs/Plugins/jwt-validator.md) - JWT authentication validation
- [FastCGI](docs/Plugins/fastcgi.md) - PHP-FPM and FastCGI application support
- [Cloudflare](docs/Plugins/cloudflare.md) - Restore visitor IP from Cloudflare CDN
- [IP Whitelist](docs/Plugins/ip-whitelist.md) - Restrict access to IPs/CIDR ranges
- [Deny Pages](docs/Plugins/deny-pages.md) - Block access to specific paths
- [Cleanup](docs/Plugins/cleanup.md) - Automatic cleanup of temporary files
- [Other Configurations](docs/other.md) - Additional configurations (ports, custom errors, etc.)
- [Limitations](docs/limitations.md) - Important limitations and considerations
## Development
### Requirements
- Python 3.11 or higher
- [uv](https://github.com/astral-sh/uv) package manager
### Installation for Development
```bash
# Install uv (if not already installed)
curl -LsSf https://astral.sh/uv/install.sh | sh
# Clone the repository
git clone https://github.com/byjg/docker-easy-haproxy.git
cd docker-easy-haproxy
# Install dependencies (creates virtual environment automatically)
uv sync --dev
# Run tests
make test
# or directly: uv run pytest tests/ -vv
# Run linting
make lint
# Format code
make format
```
### Installing the Package
```bash
# Install with uv
uv pip install easymapping
# Or install from source
uv pip install -e ".[dev]"
```
## See EasyHAProxy in action
Click on the image to see the videos (use HD for better visualization)
[](https://youtu.be/ar8raFK0R1k)
[](https://youtu.be/xwIdj9mc2mU)
[](https://youtu.be/uq7TuLIijks)
[](https://youtu.be/v9Q4M5Al7AQ)
[](https://youtu.be/B_bYZnRTGJM)
[](https://youtu.be/JHqcq9crbDI)
[Here is the code](https://gist.github.com/byjg/e125e478a0562190176d69ea795fd3d4) applied in the test examples above.
----
[Open source ByJG](http://opensource.byjg.com)
| text/markdown | null | null | null | null | MIT License
Copyright (c) 2018 Joao Gilberto Magalhães
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE. | docker, haproxy, kubernetes, load-balancer, service-discovery | [
"Development Status :: 5 - Production/Stable",
"Intended Audience :: Developers",
"Intended Audience :: System Administrators",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Topic ::... | [] | null | null | >=3.11 | [] | [] | [] | [
"deepdiff>=6.0.0",
"docker>=7.0.0",
"jinja2>=3.1.0",
"kubernetes>=28.0.0",
"psutil>=5.9.0",
"pyopenssl>=24.0.0",
"pyyaml>=6.0",
"requests>=2.31.0"
] | [] | [] | [] | [] | twine/6.1.0 CPython/3.13.7 | 2026-02-18T23:13:15.753756 | easyhaproxy-6.0.0.tar.gz | 1,500,660 | 0a/38/0e112f1fbeac01222c74273ca76ad89d240ae294a212b101ac01f20799ba/easyhaproxy-6.0.0.tar.gz | source | sdist | null | false | 15f47193afca8aafbded031723ad79e8 | 23e80e54a78d128a8d91c4405d663764c05ffa088a5a9c57c96227c4f0617daa | 0a380e112f1fbeac01222c74273ca76ad89d240ae294a212b101ac01f20799ba | null | [
"LICENSE"
] | 281 |
2.4 | fhaviary | 0.33.0 | Gymnasium framework for training language model agents on constructive tasks | # Aviary
<!-- pyml disable-num-lines 10 line-length -->
[](https://github.com/Future-House/aviary)
[](https://www.repostatus.org/#active)

[](https://aviary.bio/)
[](https://badge.fury.io/py/fhaviary)
[](https://github.com/Future-House/aviary)
[](https://www.codefactor.io/repository/github/future-house/aviary/)
[](https://github.com/psf/black)
[](https://www.python.org)
<p align="left">
<a href="https://arxiv.org/abs/2412.21154">
<img src="docs/assets/aviary_art.png" width="500" alt="Crows in a gym" />
</a>
</p>
**Aviary** [^1] is a gymnasium for defining custom language agent RL environments.
The library features pre-existing environments on
math [^2], general knowledge [^3], biological sequences [^4], scientific literature search [^5], and protein stability.
Aviary is designed to work in tandem with its sister library LDP (<https://github.com/Future-House/ldp>)
which enables the user to define custom language agents as Language Decision Processes.
See the following [tutorial][2] for an example of how to run an LDP language agent on an Aviary environment.
[2]: https://github.com/Future-House/aviary/blob/main/tutorials/Building%20a%20GSM8k%20Environment%20in%20Aviary.ipynb
[Overview](#overview)
| [Getting Started](#getting-started)
| [Documentation](https://aviary.bio/)
| [Paper](https://arxiv.org/abs/2412.21154)
## What's New?
- We have a new environment to run Jupyter notebooks at
[packages/notebook](packages/notebook).
## Overview
<p align="left">
<a href="https://arxiv.org/abs/2412.21154">
<img src="docs/assets/Aviary.png" width="800" alt="Aviary and LDP overview from paper" />
</a>
</p>
A pictorial overview of the five implemented Aviary environments and the language decision process framework.
## Getting Started
To install aviary (note `fh` stands for FutureHouse):
```bash
pip install fhaviary
```
To install aviary together with the incumbent environments:
```bash
pip install 'fhaviary[gsm8k,hotpotqa,labbench,lfrqa,notebook]'
```
To run the tutorial notebooks:
```bash
pip install "fhaviary[dev]"
```
### Developer Installation
For local development, please see [CONTRIBUTING.md](CONTRIBUTING.md).
## Tutorial Notebooks
1. [Building a Custom Environment in Aviary](tutorials/Building%20a%20Custom%20Environment%20in%20Aviary.ipynb)
2. [Building a GSM8K Environment in Aviary](tutorials/Building%20a%20GSM8k%20Environment%20in%20Aviary.ipynb)
3. [Creating Language Agents to Interact with Aviary Environments][4]
4. [Evaluate a Llama Agent on GSM8K][5]
[4]: https://github.com/Future-House/ldp/blob/main/tutorials/creating_a_language_agent.ipynb
[5]: https://github.com/Future-House/ldp/blob/main/tutorials/evaluating_a_llama_agent.ipynb
## Defining a Custom Environment
The example below walks through defining a custom environment in Aviary.
We define a simple environment where an agent takes actions to modify a counter.
The example is also featured in the following [notebook][1].
```python
from pydantic import BaseModel

from aviary.core import Environment, Message, ToolRequestMessage, Tool


# State in this example is simply a counter (a mutable model, so tools can update it;
# an immutable container like a namedtuple would break the increment/decrement tools)
class CounterEnvState(BaseModel):
    count: int = 0


class CounterEnv(Environment[CounterEnvState]):
    """A simple environment that allows an agent to modify a counter."""

    async def reset(self):
        """Initialize the environment with a counter set to 0. Goal is to count to 10."""
        self.state = CounterEnvState(count=0)
        # Target count
        self.target = 10
        # Create tools allowing the agent to increment and decrement the counter
        self.tools = [
            Tool.from_function(self.incr),
            Tool.from_function(self.decr),
        ]
        # Return an observation message with the counter and available tools
        return [Message(content=f"Count to 10. counter={self.state.count}")], self.tools

    async def step(self, action: ToolRequestMessage):
        """Execute the tool calls requested by the agent."""
        obs = await self.exec_tool_calls(action)
        # Reward is 1 once the counter reaches the target, otherwise 0
        reward = int(self.state.count == self.target)
        # Returns observations, reward, done, truncated
        return obs, reward, reward == 1, False

    def incr(self):
        """Increment the counter."""
        self.state.count += 1
        return f"counter={self.state.count}"

    def decr(self):
        """Decrement the counter."""
        self.state.count -= 1
        return f"counter={self.state.count}"
```
## Evaluating an Agent on the Environment
Following the definition of our custom environment,
we can now evaluate a language agent on the environment using
Aviary's sister library LDP (<https://github.com/Future-House/ldp>).
```python
from ldp.agent import Agent
from ldp.graph import LLMCallOp
from ldp.alg import RolloutManager


class AgentState:
    """A container for maintaining agent state across interactions."""

    def __init__(self, messages, tools):
        self.messages = messages
        self.tools = tools


class SimpleAgent(Agent):
    def __init__(self, **kwargs):
        self._llm_call_op = LLMCallOp(**kwargs)

    async def init_state(self, tools):
        return AgentState([], tools)

    async def get_asv(self, agent_state, obs):
        """Take an action, observe new state, return value."""
        action = await self._llm_call_op(
            config={"name": "gpt-4o", "temperature": 0.1},
            msgs=agent_state.messages + obs,
            tools=agent_state.tools,
        )
        new_state = AgentState(
            messages=agent_state.messages + obs + [action],
            tools=agent_state.tools,
        )
        # Return action, state, value
        return action, new_state, 0.0


# Create a simple agent and perform rollouts on the environment
# Endpoint can be a model identifier, e.g. "claude-3-opus", depending on the service
agent = SimpleAgent(config={"model": "my_llm_endpoint"})
runner = RolloutManager(agent=agent)
trajectories = await runner.sample_trajectories(
    environment_factory=CounterEnv,
    batch_size=2,
)
```
Below we expand on some of the core components of the Aviary library together with more advanced usage examples.
### Environment
An environment should have two methods, `env.reset` and `env.step`:
```py
obs_msgs, tools = await env.reset()
new_obs_msgs, reward, done, truncated = await env.step(action_msg)
```
Communication is achieved through messages.
The `action_msg` is an instance of `ToolRequestMessage` which comprises one or more calls
to the `tools` returned by `env.reset` method.
The `obs_msgs` are either general observation messages
or instances of `ToolResponseMessage` returned from the environment,
while `reward` is a scalar value, and `done` and `truncated`
are Boolean values.
We explain the message formalism in further detail below.
### Messages
Communication between the agent and environment is achieved via messages.
We follow the [OpenAI](https://platform.openai.com/docs/api-reference/messages/createMessage) standard.
Messages have two attributes:
```py
msg = Message(content="Hello, world!", role="assistant")
```
The `content` attribute can be a string but can also comprise objects such as [images][3].
For example, the `create_message` method can be used to create a message with images:
[3]: https://platform.openai.com/docs/guides/vision?lang=node#uploading-base64-encoded-images
```py
from PIL import Image
import numpy as np
img = Image.open("your_image.jpg")
img_array = np.array(img)
msg = Message.create_message(role="user", text="Hello, world!", images=[img_array])
```
In this case, `content` will be a list of dictionaries, each with a `type` of either `text` or `image_url`.
```py
[
    {"type": "text", "text": "Hello, world!"},
    {"type": "image_url", "image_url": "data:image/png;base64,{base64_image}"},
]
```
For the `role` attribute, see the table below.
You can change roles around as desired,
except for `tool`, which has a special meaning in aviary.
| Role | Host | Example(s) |
| --------- | ------------------------------------------------ | ---------------------------------------------------------------- |
| assistant | Agent | An agent's tool selection message |
| system | Agent system prompt | "You are an agent." |
| user | Environment system prompt or emitted observation | HotPotQA problem to solve, or details of an internal env failure |
| tool | Result of a tool run in the environment | The output of the calculator tool for a GSM8K question |
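As a concrete illustration of these role conventions, here is what a short episode's message history might look like as plain OpenAI-style dicts (aviary's `Message` objects serialize to this shape; the specific contents below are invented for illustration):

```python
# An illustrative message history for one environment step, as plain dicts.
# Roles follow the table above; the contents are made-up examples.
history = [
    {"role": "system", "content": "You are an agent."},     # agent system prompt
    {"role": "user", "content": "What is 6 * 7?"},          # environment observation
    {"role": "assistant", "content": "calculator(6 * 7)"},  # agent's tool selection
    {"role": "tool", "content": "42"},                      # result of the tool run
]

# "tool" is reserved for tool results; the other roles are conventional.
roles = [m["role"] for m in history]
print(roles)  # ['system', 'user', 'assistant', 'tool']
```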
The `Message` class is extended in `ToolRequestMessage` and `ToolResponseMessage`
to include the relevant tool name and arguments.
### Subclassing Environments
If you need more control over Environments and tools, you may wish to subclass `Environment`. We illustrate this
with an example environment in which an agent is tasked to write a story.
We subclass `Environment` and define a `state`. The `state` consists of all variables
that change per step that we wish to bundle together. It will be accessible in tools, so you can use `state` to store
information you want to persist between steps and tool calls.
```py
from pydantic import BaseModel
from aviary.core import Environment
class ExampleState(BaseModel):
    reward: float = 0
    done: bool = False


class ExampleEnv(Environment[ExampleState]):
    state: ExampleState
```
We do not have other variables aside from `state` for this environment,
although we could also have variables like configuration, a name,
tasks, etc. attached to it.
### Defining Tools
We will define a single tool that prints a story. Tools may optionally take a final argument
`state` which is the environment state. This argument will not be
exposed to the agent as a parameter but will be injected by the environment
(if part of the function signature).
```py
def print_story(story: str, state: ExampleState):
    """Print a story.

    Args:
        story: Story to print.
        state: Environment state (hidden from agent).
    """
    print(story)
    state.reward = 1
    state.done = True
```
The tool is built from the following parts of the function: its
name, its arguments' names and types, and the docstring.
The docstring is parsed to obtain a description of the
function and its arguments, so be sure to match the syntax carefully.
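To make the parsing concrete, here is a rough, simplified sketch of how a tool description could be derived from a function's signature and a Google-style docstring using only the standard library. This is not aviary's actual implementation; `describe_tool` and the schema layout are invented for illustration.

```python
import inspect


def describe_tool(fn):
    """Derive a minimal tool schema from fn's signature and docstring.

    Simplified sketch: assumes a Google-style docstring with an "Args:" section.
    """
    sig = inspect.signature(fn)
    doc = inspect.getdoc(fn) or ""
    summary, _, args_section = doc.partition("Args:")
    # Parse "name: description" lines from the Args section.
    arg_docs = {}
    for line in args_section.splitlines():
        name, sep, desc = line.strip().partition(": ")
        if sep:
            arg_docs[name] = desc
    return {
        "name": fn.__name__,
        "description": summary.strip(),
        "parameters": {
            p.name: {
                "type": getattr(p.annotation, "__name__", str(p.annotation)),
                "description": arg_docs.get(p.name, ""),
            }
            for p in sig.parameters.values()
        },
    }


def print_story(story: str) -> None:
    """Print a story.

    Args:
        story: Story to print.
    """
    print(story)


schema = describe_tool(print_story)
print(schema["name"])                         # print_story
print(schema["parameters"]["story"]["type"])  # str
```

This also shows why docstring syntax matters: a misformatted `Args:` section would leave the parameter descriptions empty.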
Environment episode completion is indicated by setting `state.done = True`.
This example terminates immediately - other
termination conditions are also possible.
It is also possible to make the function `async` - the environment will account for that when the tool is called.
### Advanced Tool Descriptions
Aviary also supports more sophisticated signatures:
- Multiline docstrings
- Non-primitive type hints (e.g. type unions)
- Default values
- Exclusion of info below `\f` (see below)
If you have summary-level information that belongs in the docstring,
but you don't want it to be part of the `Tool.info.description`,
add a `r` prefix to the docstring
and inject `\f` before the summary information to exclude.
This convention was created by FastAPI ([docs][1]).
[1]: https://fastapi.tiangolo.com/advanced/path-operation-advanced-configuration/#advanced-description-from-docstring
```python
def print_story(story: str | bytes, state: ExampleState):
    r"""Print a story.

    Extra information that is part of the tool description.

    \f

    This sentence is excluded because it's an implementation detail.

    Args:
        story: Story to print, either as a string or bytes.
        state: Environment state.
    """
    print(story)
    state.reward = 1
    state.done = True
```
### The Environment `reset` Method
Next we define the `reset` function which initializes the tools
and returns one or more initial observations as well as the tools.
The `reset` function is `async` to allow for database interactions or HTTP requests.
```py
from aviary.core import Message, Tool
async def reset(self):
    self.tools = [Tool.from_function(ExampleEnv.print_story)]
    start = Message(content="Write a 5 word story and call print")
    return [start], self.tools
```
### The Environment `step` Method
Next we define the `step` function which takes an action and returns
the next observation, reward, done, and whether the episode was truncated.
```py
from aviary.core import Message
async def step(self, action: Message):
    msgs = await self.exec_tool_calls(action, state=self.state)
    return msgs, self.state.reward, self.state.done, False
```
You will probably often use this specific syntax for calling the tools - calling `exec_tool_calls` with the action.
### Environment `export_frame` Method
Optionally, we can define a function to export a snapshot of the environment
and its state for visualization or debugging purposes.
```py
from aviary.core import Frame
def export_frame(self):
    return Frame(
        state={"done": self.state.done, "reward": self.state.reward},
        info={"tool_names": [t.info.name for t in self.tools]},
    )
```
### Viewing Environment Tools
If an environment can be instantiated without anything other than the task
(i.e., it implements `from_task`), you can start a server to view its tools:
```sh
pip install fhaviary[server]
aviary tools [env name]
```
This will start a server that allows you to view the tools and call them,
viewing the descriptions/types and output that an agent would see when using the tools.
### Incumbent Environments
Below we list some pre-existing environments implemented in Aviary:
| Environment | PyPI | Extra | README |
| ----------- | -------------------------------------------------------------- | -------------------- | ------------------------------------------------------- |
| GSM8k | [`aviary.gsm8k`](https://pypi.org/project/aviary.gsm8k/) | `fhaviary[gsm8k]` | [`README.md`](packages/gsm8k/README.md#installation) |
| HotPotQA | [`aviary.hotpotqa`](https://pypi.org/project/aviary.hotpotqa/) | `fhaviary[hotpotqa]` | [`README.md`](packages/hotpotqa/README.md#installation) |
| LAB-Bench | [`aviary.labbench`](https://pypi.org/project/aviary.labbench/) | `fhaviary[labbench]` | [`README.md`](packages/labbench/README.md#installation) |
| LFRQA | [`aviary.lfrqa`](https://pypi.org/project/aviary.lfrqa/) | `fhaviary[lfrqa]` | [`README.md`](packages/lfrqa/README.md#installation) |
| Notebook | [`aviary.notebook`](https://pypi.org/project/aviary.notebook/) | `fhaviary[notebook]` | [`README.md`](packages/notebook/README.md#installation) |
| LitQA | [`aviary.litqa`](https://pypi.org/project/aviary.litqa/) | Moved to `labbench` | Moved to `labbench` |
### Task Datasets
Included with some environments are collections of problems that define training or evaluation datasets.
We refer to these as `TaskDataset`s, e.g. for the `HotpotQADataset` subclass of `TaskDataset`:
```py
from aviary.envs.hotpotqa import HotPotQADataset
dataset = HotPotQADataset(split="dev")
```
### Functional Environments
An alternative way to create an environment is using the functional interface,
which uses functions and decorators to define environments.
Let's define an environment that requires an agent to write a story
about a particular topic by implementing its `start` function:
```python
from aviary.core import fenv
@fenv.start()
def my_env(topic):
    # Return the first observation and starting environment state
    # (empty in this case)
    return f"Write a story about {topic}", {}
```
The `start` decorator begins the definition of an environment.
The function, `my_env`,
takes an arbitrary input and returns a tuple containing the first observation
and any information you wish to store about the environment state
(used to persist/share information between tools).
The state will always have an optional `reward` and a Boolean `done` that indicate
if the environment episode is complete.
Next we define some tools:
```python
@my_env.tool()
def multiply(x: float, y: float) -> float:
    """Multiply two numbers."""
    return x * y


@my_env.tool()
def print_story(story: str | bytes, state) -> None:
    """Print a story to the user and complete the episode."""
    print(story)
    state.reward = 1
    state.done = True
```
The tools are converted into objects visible to LLMs using the type hints and the argument descriptions.
Thus, type hinting can be valuable for an agent that uses it correctly.
The docstrings are also passed to the LLM and are the primary means
(along with the function name) for communicating the intended tool usage.
You can access the `state` variable in tools,
which will have any fields you passed in the return tuple of `start()`.
For example, if you returned `{'foo': 'bar'}`,
then you could access `state.foo` in the tools.
You may stop an environment or set a reward via the `state` variable
as shown in the second `print_story` tool.
If the reward is not set, it is treated as zero.
Next we illustrate how to use our environment:
```python
env = my_env(topic="foo")
obs, tools = await env.reset()
```
## Citing Aviary
If Aviary is useful for your work please consider citing the following paper:
```bibtex
@article{Narayanan_Aviary_training_language_2024,
title = {{Aviary: training language agents on challenging scientific tasks}},
author = {
Narayanan, Siddharth and Braza, James D. and Griffiths, Ryan-Rhys and
Ponnapati, Manvitha and Bou, Albert and Laurent, Jon and Kabeli, Ori and
Wellawatte, Geemi and Cox, Sam and Rodriques, Samuel G. and White, Andrew
D.
},
year = 2024,
month = dec,
journal = {preprint},
doi = {10.48550/arXiv.2412.21154},
url = {https://arxiv.org/abs/2412.21154}
}
```
## References
[^1]: Narayanan, S., Braza, J.D., Griffiths, R.R., Ponnapati, M., Bou, A., Laurent, J., Kabeli, O., Wellawatte, G., Cox, S., Rodriques, S.G. and White, A.D., 2024. [Aviary: training language agents on challenging scientific tasks.](https://arxiv.org/abs/2412.21154) arXiv preprint arXiv:2412.21154.
[^2]: Cobbe, K., Kosaraju, V., Bavarian, M., Chen, M., Jun, H., Kaiser, L., Plappert, M., Tworek, J., Hilton, J., Nakano, R. and Hesse, C., 2021. [Training verifiers to solve math word problems.](https://arxiv.org/abs/2110.14168) arXiv preprint arXiv:2110.14168.
[^3]: Yang, Z., Qi, P., Zhang, S., Bengio, Y., Cohen, W., Salakhutdinov, R. and Manning, C.D., 2018. [HotpotQA: A dataset for diverse, explainable multi-hop question answering.](https://aclanthology.org/D18-1259/) EMNLP 2018 (pp. 2369-2380).
[^4]: Laurent, J.M., Janizek, J.D., Ruzo, M., Hinks, M.M., Hammerling, M.J., Narayanan, S., Ponnapati, M., White, A.D. and Rodriques, S.G., 2024. [Lab-Bench: Measuring capabilities of language models for biology research.](https://arxiv.org/abs/2407.10362) arXiv preprint arXiv:2407.10362.
[^5]: Skarlinski, M.D., Cox, S., Laurent, J.M., Braza, J.D., Hinks, M., Hammerling, M.J., Ponnapati, M., Rodriques, S.G. and White, A.D., 2024. [Language agents achieve superhuman synthesis of scientific knowledge.](https://arxiv.org/abs/2409.13740) arXiv preprint arXiv:2409.13740.
| text/markdown | null | FutureHouse technical staff <hello@futurehouse.org> | null | null | Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "[]"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright 2024 FutureHouse
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
| null | [
"Intended Audience :: Developers",
"License :: OSI Approved :: Apache Software License",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3 :: Only",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"P... | [] | null | null | >=3.11 | [] | [] | [] | [
"docstring_parser>=0.16",
"httpx",
"httpx-aiohttp",
"pydantic~=2.0",
"boto3; extra == \"cloud\"",
"aviary.gsm8k[typing]; extra == \"dev\"",
"aviary.hotpotqa; extra == \"dev\"",
"aviary.labbench[dev]; extra == \"dev\"",
"aviary.lfrqa[dev]; extra == \"dev\"",
"aviary.notebook[dev]; extra == \"dev\""... | [] | [] | [] | [
"issues, https://github.com/Future-House/aviary/issues",
"repository, https://github.com/Future-House/aviary"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-18T23:12:51.777619 | fhaviary-0.33.0.tar.gz | 5,868,495 | 0b/c3/513f3ca1e70c6965d4b5604c62b035a482015004d8655891ea6b867831dd/fhaviary-0.33.0.tar.gz | source | sdist | null | false | a0cec5dfa11baec2d2efc59f0c1e62bb | 442b7844c4d78080c0de8857372ba3d2f4d9ec608baa41f737e1ae14c8706cf6 | 0bc3513f3ca1e70c6965d4b5604c62b035a482015004d8655891ea6b867831dd | null | [
"LICENSE"
] | 1,884 |
2.4 | aviary.notebook | 0.33.0 | Jupyter notebook environment implemented with aviary | # aviary.notebook
A Jupyter notebook environment.
## Installation
To install the notebook environment, run the following command:
```bash
pip install 'fhaviary[notebook]'
```
To allow the environment to run notebooks in containerized sandboxes (recommended), first build the default image:
```bash
cd docker/
docker build -t aviary-notebook-env -f Dockerfile.pinned .
```
Then set the environment variable `NB_ENVIRONMENT_USE_DOCKER=true`.
You may use your own Docker image with a different name,
in which case you must also override the environment variable `NB_ENVIRONMENT_DOCKER_IMAGE`.
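For example, the two environment variables above can be set in your shell before launching the environment (the custom image name below is illustrative):

```shell
# Enable containerized sandboxes for notebook execution.
export NB_ENVIRONMENT_USE_DOCKER=true
# Optional: use a custom image instead of the default aviary-notebook-env.
export NB_ENVIRONMENT_DOCKER_IMAGE=my-custom-notebook-env
```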
| text/markdown | null | FutureHouse technical staff <hello@futurehouse.org> | null | null | null | null | [
"Intended Audience :: Developers",
"License :: OSI Approved :: Apache Software License",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3 :: Only",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"P... | [] | null | null | >=3.11 | [] | [] | [] | [
"Pillow",
"aiodocker",
"fhaviary",
"ipykernel",
"jupyter-client",
"litellm",
"nbformat",
"numpy",
"pydantic~=2.0",
"matplotlib; extra == \"dev\""
] | [] | [] | [] | [] | twine/6.1.0 CPython/3.13.7 | 2026-02-18T23:12:51.068831 | aviary_notebook-0.33.0.tar.gz | 17,893 | 0a/ba/85e8c5e5ed4e11239aa0c5b7959c3bd226c5246ef3f6f0d19404bf367f96/aviary_notebook-0.33.0.tar.gz | source | sdist | null | false | a063ccfd709bd7836e20da664630fd8d | a196dd7fbdc9aa2ee26a2aee62f64d7266fdc17475776ac2fca3b7413b4cf705 | 0aba85e8c5e5ed4e11239aa0c5b7959c3bd226c5246ef3f6f0d19404bf367f96 | null | [] | 0 |
2.4 | aviary.lfrqa | 0.33.0 | LFRQA environment implemented with aviary | # aviary.lfrqa
An environment designed to use PaperQA for answering questions from the `LFRQATaskDataset`.
Long-form RobustQA (LFRQA) is a human-annotated dataset introduced in RAG-QA Arena [1],
featuring over 1,400 questions from various categories, including science.
## Installation
To install the LFRQA environment, run:
```bash
pip install 'fhaviary[lfrqa]'
```
## Usage
Refer to [this tutorial][2] for instructions on how to run the environment.
[2]: https://github.com/Future-House/paper-qa/blob/main/docs/tutorials/running_on_lfrqa.md
## References
[1] RAG-QA Arena (<https://arxiv.org/pdf/2407.13998>)
| text/markdown | null | FutureHouse technical staff <hello@futurehouse.org> | null | null | null | null | [
"Intended Audience :: Developers",
"License :: OSI Approved :: Apache Software License",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3 :: Only",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"P... | [] | null | null | >=3.11 | [] | [] | [] | [
"aviary.labbench",
"fhaviary",
"fhlmi",
"paper-qa>=5.14.0",
"pydantic~=2.0",
"pandas; extra == \"csv\"",
"aviary.lfrqa[csv]; extra == \"dev\""
] | [] | [] | [] | [] | twine/6.1.0 CPython/3.13.7 | 2026-02-18T23:12:50.527679 | aviary_lfrqa-0.33.0.tar.gz | 10,150 | 5a/30/d5864cb2046e3d8ee06127a0aadb9471ba3b6f3b89d7a8100679f17b2049/aviary_lfrqa-0.33.0.tar.gz | source | sdist | null | false | 18528ffd8d3b1e999c1f7f62c0b2c7d5 | 94214d1a1254ed875f7c476b2a640f5597a8cfe48ffc301e400d42be2437a9ee | 5a30d5864cb2046e3d8ee06127a0aadb9471ba3b6f3b89d7a8100679f17b2049 | null | [] | 0 |