metadata_version string | name string | version string | summary string | description string | description_content_type string | author string | author_email string | maintainer string | maintainer_email string | license string | keywords string | classifiers list | platform list | home_page string | download_url string | requires_python string | requires list | provides list | obsoletes list | requires_dist list | provides_dist list | obsoletes_dist list | requires_external list | project_urls list | uploaded_via string | upload_time timestamp[us] | filename string | size int64 | path string | python_version string | packagetype string | comment_text string | has_signature bool | md5_digest string | sha256_digest string | blake2_256_digest string | license_expression string | license_files list | recent_7d_downloads int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2.4 | urich | 0.1.3 | Async DDD framework for microservices on Starlette | # Urich
Async DDD framework for microservices on Starlette.
**Documentation:** [kashn9sh.github.io/urich](https://kashn9sh.github.io/urich) · **Contributing:** [CONTRIBUTING.md](CONTRIBUTING.md) · **Community & promotion:** [docs/community.md](https://kashn9sh.github.io/urich/community/)
The application is composed of module objects via `app.register(module)` — similar to FastAPI with routers, but one consistent style for domain, events, RPC and discovery.
## Idea
- **One object = one building block:** DomainModule, EventBusModule, OutboxModule, DiscoveryModule, RpcModule. All configured via fluent API and attached with `app.register(module)`.
- **DDD:** Bounded context as DomainModule with `.aggregate()`, `.repository()`, `.command()`, `.query()`, `.on_event()`. Commands and queries get HTTP routes automatically.
- **No lock-in:** Protocols (EventBus, ServiceDiscovery, RpcTransport) in core; implementations (Redis, Consul, HTTP+JSON) supplied by the user or optional out-of-the-box adapters.
## Install
```bash
pip install urich
# CLI for generating skeletons:
pip install "urich[cli]"
```
## Quick start
```python
from urich import Application
# One object = a full bounded context (built with urich.ddd.DomainModule)
from orders.module import orders_module
app = Application()
app.register(orders_module)
# Run: python -m uvicorn main:app --reload (or: pip install uvicorn && uvicorn main:app --reload)
```
Routes by convention: `POST /orders/commands/create_order`, `GET /orders/queries/get_order`.
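The convention above is purely mechanical; a sketch of how a context and handler name map to a route (a hypothetical helper for illustration only — urich generates these routes itself):

```python
# Hypothetical illustration of the route convention described above;
# this helper is NOT part of urich's API.
def route_for(context: str, kind: str, handler: str) -> str:
    """Map a bounded context + handler name to its conventional route."""
    assert kind in ("commands", "queries")
    return f"/{context}/{kind}/{handler}"

print(route_for("orders", "commands", "create_order"))  # /orders/commands/create_order
print(route_for("orders", "queries", "get_order"))      # /orders/queries/get_order
```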
## OpenAPI / Swagger
After registering all modules, call `app.openapi(title="My API", version="0.1.0")`. Then:
- **GET /openapi.json** — OpenAPI 3.0 spec
- **GET /docs** — Swagger UI
```python
app = Application()
# ... app.register(module) ...
app.openapi(title="My API", version="0.1.0")
```
## CLI
```bash
urich create-app myapp
cd myapp
urich add-context orders --dir .
urich add-aggregate orders Order --dir .
# In main.py: from orders.module import orders_module; app.register(orders_module)
```
## Module structure (DomainModule)
- **domain** — aggregate (AggregateRoot), domain events (DomainEvent).
- **application** — commands/queries (Command/Query), handlers (one per command/query).
- **infrastructure** — repository interface and implementation (e.g. in-memory for prototypes).
- **module.py** — one object `DomainModule("orders").aggregate(...).repository(...).command(...).query(...).on_event(...)`; register in the app with `app.register(orders_module)`.
## Other modules
- **EventBusModule** — `.adapter(impl)` or `.in_memory()`; in container as EventBus.
- **OutboxModule** — `.storage(...)` and `.publisher(...)`; contracts in core.
- **DiscoveryModule** — `.static({"svc": "http://..."})` or `.adapter(impl)`; ServiceDiscovery protocol.
- **RpcModule** — `.server(path="/rpc")` and `.client(discovery=..., transport=...)`; optional JsonHttpRpcTransport (requires httpx).
Full composition example: `examples/ecommerce/main.py`.
| text/markdown | null | null | null | null | null | asgi, ddd, microservices, starlette, cqrs | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Topic :: Internet :: WWW/HTTP :: HTTP Servers"
] | [] | null | null | >=3.12 | [] | [] | [] | [
"starlette>=0.41.0",
"pydantic>=2.0",
"pytest; extra == \"dev\"",
"pytest-asyncio; extra == \"dev\"",
"httpx; extra == \"dev\"",
"uvicorn; extra == \"dev\"",
"typer>=0.9.0; extra == \"cli\"",
"mkdocs<2,>=1.5; extra == \"docs\"",
"mkdocs-material>=9.0; extra == \"docs\"",
"pymdown-extensions; extra == \"docs\""
] | [] | [] | [] | [
"Homepage, https://kashn9sh.github.io/urich",
"Documentation, https://kashn9sh.github.io/urich",
"Repository, https://github.com/KashN9sh/urich"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-21T10:27:24.647656 | urich-0.1.3.tar.gz | 18,175 | 0a/a0/8b5dfe1d4b4253d267f41bfa096540c0a298684a1ead8edd8a29a3799ca4/urich-0.1.3.tar.gz | source | sdist | null | false | f709c36b5384a60d510ff0ebbed94abb | ca0d89dac901ce982eeeefa19b5d20b8f98ec3903293e7de1caed588575430f1 | 0aa08b5dfe1d4b4253d267f41bfa096540c0a298684a1ead8edd8a29a3799ca4 | null | [
"LICENSE"
] | 237 |
2.4 | qmllib | 1.2.0 | Python/Fortran toolkit for representation of molecules and solids for machine learning of properties of molecules and solids. | [](https://github.com/qmlcode/qmllib/actions/workflows/test.ubuntu.yml)
[](https://github.com/qmlcode/qmllib/actions/workflows/test.macos.yml)
[](https://pypi.org/project/qmllib/)
[](https://pypi.org/project/qmllib/)
[](https://github.com/qmlcode/qmllib)
[](https://opensource.org/licenses/MIT)
## What is qmllib?
`qmllib` is a Python/Fortran toolkit of representations of molecules and solids for machine learning of their properties. The library is not a high-level framework where you can do `model.train()`; instead, it supplies the building blocks to carry out efficient and accurate machine learning. The goal is to provide usable and efficient implementations of concepts such as representations and kernels.
## QML or qmllib?
`qmllib` represents the core library functionality carried over from the original QML package: a toolkit for quantum machine learning applications, without high-level abstractions such as those found in scikit-learn.
This package is, and should remain, free-function oriented by design.
If you are moving from `qml` to `qmllib`, note that there are breaking changes to the interface to make it more consistent with both argument orders and function naming.
## How to install
Pre-built wheels for Linux and macOS are available on PyPI, compiled with OpenMP support. For most users, installing with pip is enough:
```bash
pip install qmllib
```
This installs pre-compiled wheels with optimized BLAS libraries:
- **Linux**: OpenBLAS
- **macOS**: Apple Accelerate framework
## Installing from source
If you are installing from source (e.g. directly from GitHub), you will need a Fortran compiler, OpenMP and a BLAS library. On Linux:
```bash
sudo apt install gfortran libomp-dev libopenblas-dev
```
On macOS via Homebrew:
```bash
brew install gcc libomp llvm
```
Or install directly from GitHub:
```bash
pip install git+https://github.com/qmlcode/qmllib
```
Or a specific branch:
```bash
pip install git+https://github.com/qmlcode/qmllib@feature_branch
```
## How to contribute
[uv](https://docs.astral.sh/uv/) is required for the development workflow.
Fork and clone the repo, then set up the environment and run the tests:
```bash
git clone <url-of-your-fork> qmllib.git
cd qmllib.git
make install-dev
make test
```
Fork it, clone it, make it, test it!
## How to use
Notebook examples are coming. For now, see test files in `tests/*`.
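Until then, a minimal pure-Python sketch of one of the representations cited below — the Coulomb matrix, with M_ii = 0.5 · Z_i^2.4 and M_ij = Z_i·Z_j / |R_i − R_j| — illustrates the kind of building block the library provides. This is a naive reference version for intuition only; qmllib's own `generate_coulomb_matrix_*` functions are the optimized Fortran-backed implementations.

```python
import math

def coulomb_matrix(charges, coords):
    """Naive Coulomb matrix (Rupp et al., PRL 108, 058301):
    M[i][i] = 0.5 * Z_i**2.4, M[i][j] = Z_i * Z_j / |R_i - R_j|.
    Distances assumed to be in atomic units (Bohr)."""
    n = len(charges)
    m = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if i == j:
                m[i][j] = 0.5 * charges[i] ** 2.4
            else:
                m[i][j] = charges[i] * charges[j] / math.dist(coords[i], coords[j])
    return m

# H2 with a ~1.4 Bohr bond length
m = coulomb_matrix([1, 1], [(0.0, 0.0, 0.0), (0.0, 0.0, 1.4)])
```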
## How to cite
Please cite the representation that you are using accordingly.
- **Implementation**
qmllib: A Python Toolkit for Quantum Chemistry Machine Learning,
https://github.com/qmlcode/qmllib, \<version or git commit\>
- **FCHL19** `generate_fchl19`
FCHL revisited: Faster and more accurate quantum machine learning,
Christensen, Bratholm, Faber, Lilienfeld,
J. Chem. Phys. 152, 044107 (2020),
https://doi.org/10.1063/1.5126701
- **FCHL18** `generate_fchl18`
Alchemical and structural distribution based representation for universal quantum machine learning,
Faber, Christensen, Huang, Lilienfeld,
J. Chem. Phys. 148, 241717 (2018),
https://doi.org/10.1063/1.5020710
- **Coulomb Matrix** `generate_coulomb_matrix_*`
Fast and Accurate Modeling of Molecular Atomization Energies with Machine Learning,
Rupp, Tkatchenko, Müller, Lilienfeld,
Phys. Rev. Lett. 108, 058301 (2012)
DOI: https://doi.org/10.1103/PhysRevLett.108.058301
- **Bag of Bonds (BoB)** `generate_bob`
Assessment and Validation of Machine Learning Methods for Predicting Molecular Atomization Energies,
Hansen, Montavon, Biegler, Fazli, Rupp, Scheffler, Lilienfeld, Tkatchenko, Müller,
J. Chem. Theory Comput. 2013, 9, 8, 3404–3419
https://doi.org/10.1021/ct400195d
- **SLATM** `generate_slatm`
Understanding molecular representations in machine learning: The role of uniqueness and target similarity,
Huang, Lilienfeld,
J. Chem. Phys. 145, 161102 (2016)
https://doi.org/10.1063/1.4964627
- **ACSF** `generate_acsf`
Atom-centered symmetry functions for constructing high-dimensional neural network potentials,
Behler,
J Chem Phys 21;134(7):074106 (2011)
https://doi.org/10.1063/1.3553717
- **AARAD** `generate_aarad`
Alchemical and structural distribution based representation for universal quantum machine learning,
Faber, Christensen, Huang, Lilienfeld,
J. Chem. Phys. 148, 241717 (2018),
https://doi.org/10.1063/1.5020710
| text/markdown | Jimmy Kromann, Anders S. Christensen | null | null | null | null | qml, quantum chemistry, machine learning, representations, kernels | [
"Development Status :: 5 - Production/Stable",
"Intended Audience :: Developers",
"Intended Audience :: Science/Research",
"Operating System :: MacOS",
"Operating System :: POSIX :: Linux",
"Programming Language :: Fortran",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: 3.14",
"Programming Language :: Python :: Implementation :: CPython",
"Topic :: Scientific/Engineering :: Artificial Intelligence",
"Topic :: Scientific/Engineering :: Chemistry",
"Topic :: Scientific/Engineering :: Physics"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"numpy>=2.0",
"pytest>=8; extra == \"test\"",
"pytest-xdist; extra == \"test\"",
"pytest-cov; extra == \"test\"",
"pytest-timeout; extra == \"test\"",
"pandas; extra == \"test\"",
"scipy; extra == \"test\"",
"ruff>=0.8.0; extra == \"dev\"",
"ty>=0.0.1; extra == \"dev\"",
"pre-commit>=3.6.0; extra == \"dev\"",
"monkeytype>=23.3.0; extra == \"dev\"",
"twine>=6.0.0; extra == \"dev\""
] | [] | [] | [] | [
"Homepage, https://qmlcode.org",
"Issues, https://github.com/qmlcode/qmllib/issues"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-21T10:26:07.239542 | qmllib-1.2.0.tar.gz | 12,107,854 | 13/47/f7b298ddcbf11eb416309aaa9a8ab0d134de3f29230b884acf88bc49f0a1/qmllib-1.2.0.tar.gz | source | sdist | null | false | 18cd707e3e472314c36c210906f55fb0 | 7c3d975ed9776b8abec3b7278207c38041731264f734a04f906e6f8ebd106363 | 1347f7b298ddcbf11eb416309aaa9a8ab0d134de3f29230b884acf88bc49f0a1 | MIT | [] | 734 |
2.4 | trcc-linux | 6.1.4 | Linux implementation of Thermalright LCD Control Center | # TRCC Linux
[](https://github.com/Lexonight1/thermalright-trcc-linux/actions/workflows/tests.yml)
[](https://github.com/Lexonight1/thermalright-trcc-linux/actions/workflows/tests.yml)
[](https://pypi.org/project/trcc-linux/)
[](https://python.org)
[](LICENSE)
[](https://buymeacoffee.com/Lexonight1)
> If this helped you, could you **[buy me a nice frosty cold one](https://buymeacoffee.com/Lexonight1)**?
Native Linux port of the Thermalright LCD Control Center (Windows TRCC 2.0.3). Control and customize the LCD displays and LED segment displays on Thermalright CPU coolers, AIO pump heads, and fan hubs — entirely from Linux.
> **This project wouldn't exist without our testers.** I only own one device. Every supported device in this list works because someone plugged it in, ran `trcc report`, and told me what broke. 20 testers across 6 countries helped us go from "SCSI only" to full C# feature parity with 4 USB protocols, 16 FBL resolutions, and 12 LED styles. Open source at its best — see [Contributors](#contributors) below.
> Unofficial community project, not affiliated with Thermalright. Built with [Claude](https://claude.ai) (AI) for protocol reverse engineering and code generation, guided by human architecture decisions and logical assessment. If something doesn't work on your distro, please [open an issue](https://github.com/Lexonight1/thermalright-trcc-linux/issues).
### Have an untested device?
Run `trcc report` and [paste the output in an issue](https://github.com/Lexonight1/thermalright-trcc-linux/issues/new) — takes 30 seconds. See the **[full list of devices that need testers](doc/TESTERS_WANTED.md)**.

## Features
- **Themes** — Local, cloud, masks, carousel mode, export/import as `.tr` files
- **Media** — Video/GIF playback, video trimmer, image cropper, screen cast (X11 + Wayland)
- **Editor** — Overlay text/sensors/date/time, font picker, dynamic scaling, color picker
- **Hardware** — 77+ sensors, customizable dashboard, multi-device with per-device config, RGB LED control
- **Display** — 15 resolutions (240x240 to 1920x462), 0/90/180/270 rotation, 3 brightness levels
- **Extras** — 5 starter themes + 120 masks per resolution, on-demand download, system tray, auto-start
## Supported Devices
Run `lsusb` to find your USB ID (`xxxx:xxxx` after `ID`), then match it below.
**SCSI devices** — fully supported:
| USB ID | Devices |
|--------|---------|
| `87CD:70DB` | FROZEN HORIZON PRO, FROZEN MAGIC PRO, FROZEN VISION V2, CORE VISION, ELITE VISION, AK120, AX120, PA120 DIGITAL, Wonder Vision |
| `0416:5406` | LC1, LC2, LC3, LC5 (AIO pump heads) |
| `0402:3922` | FROZEN WARFRAME, FROZEN WARFRAME 360, FROZEN WARFRAME SE |
**Bulk USB devices** — raw USB protocol:
| USB ID | Devices |
|--------|---------|
| `87AD:70DB` | GrandVision 360 AIO, Mjolnir Vision 360 |
**HID LCD devices** — auto-detected, needs hardware testers:
| USB ID | Devices |
|--------|---------|
| `0416:5302` | Trofeo Vision LCD, Assassin Spirit 120 Vision ARGB, AS120 VISION, BA120 VISION, FROZEN WARFRAME, FROZEN WARFRAME 360, FROZEN WARFRAME SE, FROZEN WARFRAME PRO, ELITE VISION, LC5 |
| `0418:5303` | TARAN ARMS |
| `0418:5304` | TARAN ARMS |
**HID LED devices** — RGB LED control:
| USB ID | Devices |
|--------|---------|
| `0416:8001` | AX120 DIGITAL, PA120 DIGITAL, Peerless Assassin 120 DIGITAL, Assassin X 120R Digital ARGB, Phantom Spirit 120 Digital EVO, and others (model auto-detected via handshake) |
> HID devices are auto-detected. See the [Device Testing Guide](doc/DEVICE_TESTING.md) if you have one — I need testers.
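The `lsusb` matching step above can be sketched in Python. This is a hypothetical helper for illustration, not part of trcc (which auto-detects devices on its own); the sample line is made up:

```python
import re

# USB IDs taken from the device tables above
SUPPORTED = {"87cd:70db", "0416:5406", "0402:3922", "87ad:70db",
             "0416:5302", "0418:5303", "0418:5304", "0416:8001"}

def supported_ids(lsusb_output: str) -> list[str]:
    """Extract 'vvvv:pppp' IDs from lsusb output, keeping only supported ones."""
    found = re.findall(r"ID ([0-9a-fA-F]{4}:[0-9a-fA-F]{4})", lsusb_output)
    return [usb_id.lower() for usb_id in found if usb_id.lower() in SUPPORTED]

sample = "Bus 001 Device 004: ID 87ad:70db Generic Display Device"
print(supported_ids(sample))  # ['87ad:70db']
```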
## Install
### Quick install (PyPI)
```bash
pip install trcc-linux
trcc setup # interactive wizard — deps, udev, desktop entry
```
Then **unplug and replug the USB cable** and run `trcc gui`.
### One-line bootstrap
Download and run — installs trcc-linux, then launches the setup wizard (GUI if you have a display, CLI otherwise):
```bash
bash <(curl -sSL https://raw.githubusercontent.com/Lexonight1/thermalright-trcc-linux/main/setup.sh)
```
### Setup wizard
After installing, run the setup wizard to configure everything:
```bash
trcc setup # interactive CLI wizard
trcc setup-gui # GUI wizard with Install buttons
```
The wizard checks system dependencies, GPU packages, udev rules, and desktop integration — and offers to install anything missing.
### Automatic (recommended for full setup)
```bash
git clone https://github.com/Lexonight1/thermalright-trcc-linux.git
cd thermalright-trcc-linux
sudo ./install.sh
```
Detects your distro, installs system packages, Python deps, udev rules, and desktop shortcut. On PEP 668 distros (Ubuntu 24.04+, Fedora 41+) it auto-falls back to a virtual environment if `pip` refuses direct install.
After it finishes: **unplug and replug the USB cable**, then run `trcc gui`.
### Supported distros
Fedora, Nobara, Ubuntu, Debian, Mint, Pop!_OS, Zorin, elementary OS, Arch, Manjaro, EndeavourOS, CachyOS, Garuda, openSUSE, Void, Gentoo, Alpine, NixOS, Bazzite, Aurora, Bluefin, SteamOS (Steam Deck).
> **`trcc: command not found`?** Open a new terminal — pip installs to `~/.local/bin` which needs a new shell session to appear on PATH.
> See the **[Install Guide](doc/INSTALL_GUIDE.md)** for distro-specific instructions, troubleshooting, and optional deps.
## Usage
```bash
trcc gui # Launch GUI
trcc detect # Show connected devices
trcc send image.png # Send image to LCD
trcc color "#ff0000" # Fill LCD with solid color
trcc brightness 2 # Set brightness (1=25%, 2=50%, 3=100%)
trcc rotation 90 # Rotate display (0/90/180/270)
trcc theme-list # List available themes
trcc theme-load NAME # Load a theme by name
trcc overlay # Render and send overlay
trcc screencast # Live screen capture to LCD
trcc video clip.mp4 # Play video on LCD
trcc led-color "#00ff00" # Set LED color
trcc led-mode breathing # Set LED effect mode
trcc serve # Start REST API server
trcc setup # Interactive setup wizard (CLI)
trcc setup-gui # Setup wizard (GUI)
trcc setup-selinux # Install SELinux USB policy (Bazzite/Silverblue)
trcc doctor # Check system dependencies
trcc report # Generate diagnostic report
trcc setup-udev # Install udev rules
trcc install-desktop # Install app menu entry and icon
trcc uninstall # Remove TRCC completely
```
See the **[CLI Reference](doc/CLI_REFERENCE.md)** for all 39 commands, options, and troubleshooting.
## Documentation
| Document | Description |
|----------|-------------|
| [Install Guide](doc/INSTALL_GUIDE.md) | Installation for all major distros |
| [Troubleshooting](doc/TROUBLESHOOTING.md) | Common issues and fixes |
| [CLI Reference](doc/CLI_REFERENCE.md) | All commands, options, and troubleshooting |
| [Changelog](doc/CHANGELOG.md) | Version history |
| [Architecture](doc/ARCHITECTURE.md) | Project layout and design |
| [Technical Reference](doc/TECHNICAL_REFERENCE.md) | SCSI protocol and file formats |
| [USBLCD Protocol](doc/audit/USBLCD_PROTOCOL.md) | SCSI protocol reverse-engineered from USBLCD.exe |
| [USBLCDNEW Protocol](doc/audit/USBLCDNEW_PROTOCOL.md) | USB bulk protocol reverse-engineered from USBLCDNEW.exe |
| [USBLED Protocol](doc/audit/USBLED_PROTOCOL.md) | HID LED protocol reverse-engineered from FormLED.cs |
| [Testers Wanted](doc/TESTERS_WANTED.md) | Devices that need hardware validation |
| [Device Testing Guide](doc/DEVICE_TESTING.md) | Device support and troubleshooting |
| [Supported Devices](doc/SUPPORTED_DEVICES.md) | Full device list with USB IDs |
## Contributors
A big thanks to everyone who has contributed invaluable reports to this project:
- **[Zeltergiest](https://github.com/Zeltergiest)** — Trofeo Vision 360 HID Type 2 testing, detailed bug reports & enhancement suggestions
- **[Xentrino](https://github.com/Xentrino)** — Peerless Assassin 120 Digital ARGB White LED testing across 15+ versions
- **[hexskrew](https://github.com/hexskrew)** — Assassin X 120R Digital ARGB HID testing & GUI layout feedback
- **[javisaman](https://github.com/javisaman)** — Phantom Spirit 120 Digital EVO LED testing & GPU phase validation
- **[Pikarz](https://github.com/Pikarz)** — Mjolnir Vision 360 bulk protocol testing
- **[michael-spinelli](https://github.com/michael-spinelli)** — Assassin Spirit 120 Vision ARGB HID testing & font style bug report
- **[Rizzzolo](https://github.com/Rizzzolo)** — Phantom Spirit 120 Digital EVO hardware testing
- **[N8ghtz](https://github.com/N8ghtz)** — Trofeo Vision HID testing
- **[Lcstyle](https://github.com/Lcstyle)** — HR10 2280 PRO Digital testing
- **[PantherX12max](https://github.com/PantherX12max)** — Trofeo Vision LCD hardware testing
- **[shadowepaxeor-glitch](https://github.com/shadowepaxeor-glitch)** — AX120 Digital hardware testing & USB descriptor dumps
- **[bipobuilt](https://github.com/bipobuilt)** — GrandVision 360 AIO bulk protocol testing
- **[cadeon](https://github.com/cadeon)** — GrandVision 360 AIO bulk protocol testing
- **[gizbo](https://github.com/gizbo)** — FROZEN WARFRAME SCSI color bug report
- **[apj202-ops](https://github.com/apj202-ops)** — Frozen Warframe SE HID testing
- **[Edoardo-Rossi-EOS](https://github.com/Edoardo-Rossi-EOS)** — Frozen Warframe 360 HID testing
- **[edoargo1996](https://github.com/edoargo1996)** — Frozen Warframe 360 HID testing
- **[stephendesmond1-cmd](https://github.com/stephendesmond1-cmd)** — Frozen Warframe 360 HID Type 2 testing
- **[acioannina-wq](https://github.com/acioannina-wq)** — Assassin Spirit 120 Vision HID testing
- **[Civilgrain](https://github.com/Civilgrain)** — Wonder Vision Pro 360 bulk protocol testing
## License
GPL-3.0
| text/markdown | TRCC Linux Contributors | null | null | null | null | cooling, hardware, lcd, monitor, thermalright | [
"Development Status :: 4 - Beta",
"Environment :: X11 Applications",
"Intended Audience :: End Users/Desktop",
"License :: OSI Approved :: GNU General Public License v3 or later (GPLv3+)",
"Operating System :: POSIX :: Linux",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Topic :: System :: Hardware",
"Topic :: System :: Monitoring"
] | [] | null | null | >=3.9 | [] | [] | [] | [
"numpy>=1.24.0",
"pillow>=10.0.0",
"psutil>=5.9.0",
"pyside6>=6.5.0",
"pyusb>=1.2.0",
"typer>=0.9.0",
"dbus-python>=1.3.0; extra == \"all\"",
"fastapi>=0.100; extra == \"all\"",
"hidapi>=0.14.0; extra == \"all\"",
"httpx>=0.24.0; extra == \"all\"",
"nvidia-ml-py>=11.0.0; extra == \"all\"",
"pygobject>=3.42.0; extra == \"all\"",
"pytest-cov>=4.0.0; extra == \"all\"",
"pytest>=7.0.0; extra == \"all\"",
"python-multipart>=0.0.6; extra == \"all\"",
"ruff>=0.4.0; extra == \"all\"",
"uvicorn[standard]>=0.20; extra == \"all\"",
"fastapi>=0.100; extra == \"api\"",
"uvicorn[standard]>=0.20; extra == \"api\"",
"fastapi>=0.100; extra == \"dev\"",
"httpx>=0.24.0; extra == \"dev\"",
"pytest-cov>=4.0.0; extra == \"dev\"",
"pytest>=7.0.0; extra == \"dev\"",
"python-multipart>=0.0.6; extra == \"dev\"",
"ruff>=0.4.0; extra == \"dev\"",
"hidapi>=0.14.0; extra == \"hid\"",
"nvidia-ml-py>=11.0.0; extra == \"nvidia\"",
"dbus-python>=1.3.0; extra == \"wayland\"",
"pygobject>=3.42.0; extra == \"wayland\""
] | [] | [] | [] | [
"Homepage, https://github.com/Lexonight1/thermalright-trcc-linux",
"Documentation, https://github.com/Lexonight1/thermalright-trcc-linux#readme",
"Repository, https://github.com/Lexonight1/thermalright-trcc-linux",
"Issues, https://github.com/Lexonight1/thermalright-trcc-linux/issues"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-21T10:24:45.334573 | trcc_linux-6.1.4.tar.gz | 31,157,141 | 02/ab/6192b76bd4c5e2dc4cfa7f4b66b8482d95cdcc4136566a432b27783c9833/trcc_linux-6.1.4.tar.gz | source | sdist | null | false | 6e1ecdbc70269b6aec0547fc70c13b4d | fc505fa1a41447cb53c99f54b6d5f103b98af69a583d5a1f04ca6ec74ce22fe6 | 02ab6192b76bd4c5e2dc4cfa7f4b66b8482d95cdcc4136566a432b27783c9833 | GPL-3.0-or-later | [
"LICENSE"
] | 250 |
2.4 | maturin | 1.12.4 | Build and publish crates with pyo3, cffi and uniffi bindings as well as rust binaries as python packages | # Maturin
_formerly pyo3-pack_
[](https://maturin.rs)
[](https://crates.io/crates/maturin)
[](https://pypi.org/project/maturin)
[](https://discord.gg/33kcChzH7f)
Build and publish crates with [pyo3, cffi and uniffi bindings](https://maturin.rs/bindings) as well as rust binaries as python packages with minimal configuration.
It supports building wheels for python 3.8+ on Windows, Linux, macOS and FreeBSD, can upload them to [pypi](https://pypi.org/) and has basic PyPy and GraalPy support.
Check out the [User Guide](https://maturin.rs/)!
## Usage
You can either download binaries from the [latest release](https://github.com/PyO3/maturin/releases/latest) or install it with [pipx](https://pypa.github.io/pipx/) or [uv](https://github.com/astral-sh/uv):
```shell
# pipx
pipx install maturin
# uv
uv tool install maturin
```
> [!NOTE]
>
> `pip install maturin` should also work if you don't want to use pipx.
There are three main commands:
- `maturin new` creates a new cargo project with maturin configured.
- `maturin build` builds the wheels and stores them in a folder (`target/wheels` by default), but doesn't upload them. It's recommended to publish packages with [uv](https://github.com/astral-sh/uv) using `uv publish`.
- `maturin develop` builds the crate and installs it as a python module directly in the current virtualenv. Note that while `maturin develop` is faster, it doesn't support all the features that running `pip install` after `maturin build` supports.
maturin doesn't need extra configuration files and doesn't clash with an existing setuptools-rust configuration.
You can even integrate it with testing tools such as [tox](https://tox.readthedocs.io/en/latest/).
There are examples for the different bindings in the `test-crates` folder.
The name of the package will be the name of the cargo project, i.e. the name field in the `[package]` section of `Cargo.toml`.
The name of the module, which you are using when importing, will be the `name` value in the `[lib]` section (which defaults to the name of the package). For binaries, it's simply the name of the binary generated by cargo.
When using `maturin build` and `maturin develop` commands, you can compile a performance-optimized program by adding the `-r` or `--release` flag.
## Python packaging basics
Python packages come in two formats:
a built form called a wheel, and source distributions (sdists), both of which are archives.
A wheel can be compatible with any python version, interpreter (mainly cpython and pypy), operating system and hardware architecture (pure python wheels);
it can be limited to a specific platform and architecture (e.g. when using ctypes or cffi), or to a specific python interpreter and version on a specific architecture and operating system (e.g. with pyo3).
When using `pip install` on a package, pip tries to find a matching wheel and install that. If it doesn't find one, it downloads the source distribution and builds a wheel for the current platform,
which requires the right compilers to be installed. Installing a wheel is much faster than installing a source distribution as building wheels is generally slow.
When you publish a package to be installable with `pip install`, you upload it to [pypi](https://pypi.org/), the official package repository.
For testing, you can use [test pypi](https://test.pypi.org/) instead, which you can use with `pip install --index-url https://test.pypi.org/simple/`.
Note that for [publishing for linux](#manylinux-and-auditwheel), you need to use the manylinux docker container or zig, while for publishing from your repository you can use the [PyO3/maturin-action](https://github.com/PyO3/maturin-action) github action.
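The compatibility described above is encoded directly in the wheel filename (`{name}-{version}-{python tag}-{abi tag}-{platform tag}.whl`, per the wheel binary distribution format). A quick sketch of pulling the tags apart, assuming no optional build tag; the example filename is illustrative:

```python
def wheel_tags(filename: str) -> dict:
    """Split a wheel filename into its compatibility tags (no build tag assumed)."""
    stem = filename.removesuffix(".whl")
    name, version, python_tag, abi_tag, platform_tag = stem.split("-")
    return {"name": name, "version": version, "python": python_tag,
            "abi": abi_tag, "platform": platform_tag}

tags = wheel_tags("my_project-0.1.0-cp312-cp312-manylinux2014_x86_64.whl")
print(tags["platform"])  # manylinux2014_x86_64
```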
## Mixed rust/python projects
To create a mixed rust/python project, create a folder with your module name (i.e. `lib.name` in Cargo.toml) next to your Cargo.toml and add your python sources there:
```
my-project
├── Cargo.toml
├── my_project
│ ├── __init__.py
│ └── bar.py
├── pyproject.toml
├── README.md
└── src
└── lib.rs
```
You can specify a different python source directory in `pyproject.toml` by setting `tool.maturin.python-source`, for example
**pyproject.toml**
```toml
[tool.maturin]
python-source = "python"
module-name = "my_project._lib_name"
```
then the project structure would look like this:
```
my-project
├── Cargo.toml
├── python
│ └── my_project
│ ├── __init__.py
│ └── bar.py
├── pyproject.toml
├── README.md
└── src
└── lib.rs
```
> [!NOTE]
>
> This structure is recommended to avoid [a common `ImportError` pitfall](https://github.com/PyO3/maturin/issues/490)
maturin will add the native extension as a module in your python folder. When using `maturin develop`, maturin copies the native library (and, for cffi, the glue code as well) into your python folder. You should add those files to your gitignore.
With cffi you can do `from .my_project import lib` and then use `lib.my_native_function`, with pyo3 you can directly `from .my_project import my_native_function`.
Example layout with pyo3 after `maturin develop`:
```
my-project
├── Cargo.toml
├── my_project
│ ├── __init__.py
│ ├── bar.py
│ └── _lib_name.cpython-36m-x86_64-linux-gnu.so
├── README.md
└── src
└── lib.rs
```
When doing this also be sure to set the module name in your code to match the last part of `module-name` (don't include the package path):
```rust
#[pymodule]
#[pyo3(name="_lib_name")]
fn my_lib_name(m: &Bound<'_, PyModule>) -> PyResult<()> {
m.add_class::<MyPythonRustClass>()?;
Ok(())
}
```
## Python metadata
maturin supports [PEP 621](https://www.python.org/dev/peps/pep-0621/), you can specify python package metadata in `pyproject.toml`.
maturin merges metadata from `Cargo.toml` and `pyproject.toml`, `pyproject.toml` takes precedence over `Cargo.toml`.
To specify python dependencies, add a list `dependencies` in a `[project]` section in the `pyproject.toml`. This list is equivalent to `install_requires` in setuptools:
```toml
[project]
name = "my-project"
dependencies = ["flask~=1.1.0", "toml>=0.10.2,<0.11.0"]
```
You can add so-called console scripts, shell commands that execute some function in your program, in the `[project.scripts]` section.
The keys are the script names, while the values are the path to the function in the format `some.module.path:class.function` (the `class` part is optional). The function is called with no arguments. Example:
```toml
[project.scripts]
get_42 = "my_project:DummyClass.get_42"
```
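On the Python side, the entry point resolves the attribute chain on the module. A minimal sketch of what `my_project/__init__.py` could contain to satisfy the hypothetical entry above:

```python
# Sketch of my_project/__init__.py matching the [project.scripts] entry:
# the callable is looked up as my_project -> DummyClass -> get_42 and
# invoked with no arguments when the user runs `get_42` in their shell.
class DummyClass:
    @staticmethod
    def get_42() -> None:
        # The return value becomes the process exit status (None -> 0),
        # so print the result rather than returning it.
        print(42)

if __name__ == "__main__":
    DummyClass.get_42()
```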
You can also specify [trove classifiers](https://pypi.org/classifiers/) in your `pyproject.toml` under `project.classifiers`:
```toml
[project]
name = "my-project"
classifiers = ["Programming Language :: Python"]
```
## Source distribution
maturin supports building through `pyproject.toml`. To use it, create a `pyproject.toml` next to your `Cargo.toml` with the following content:
```toml
[build-system]
requires = ["maturin>=1.0,<2.0"]
build-backend = "maturin"
```
If a `pyproject.toml` with a `[build-system]` entry is present, maturin can build a source distribution of your package when `--sdist` is specified.
The source distribution will contain the same files as `cargo package`. To only build a source distribution, pass `--interpreter` without any values.
You can then e.g. install your package with `pip install .`. With `pip install . -v` you can see the output of cargo and maturin.
You can use the options `compatibility`, `skip-auditwheel`, `bindings`, `strip` and common Cargo build options such as `features` under `[tool.maturin]` the same way you would when running maturin directly.
The `bindings` key is required for cffi and bin projects as those can't be automatically detected. Currently, all builds are in release mode (see [this thread](https://discuss.python.org/t/pep-517-debug-vs-release-builds/1924) for details).
For a non-manylinux build with cffi bindings you could use the following:
```toml
[build-system]
requires = ["maturin>=1.0,<2.0"]
build-backend = "maturin"
[tool.maturin]
bindings = "cffi"
compatibility = "linux"
```
The `manylinux` option is also accepted as an alias for `compatibility`, for backwards compatibility with older versions of maturin.
To include arbitrary files in the sdist for use during compilation specify `include` as an array of `path` globs with `format` set to `sdist`:
```toml
[tool.maturin]
include = [{ path = "path/**/*", format = "sdist" }]
```
There's a `maturin sdist` command for building only a source distribution, as a workaround for [pypa/pip#6041](https://github.com/pypa/pip/issues/6041).
## Manylinux and auditwheel
For portability reasons, native Python modules on Linux must only dynamically link a small set of libraries that are installed basically everywhere, hence the name manylinux.
The PyPA offers special docker images and a tool called [auditwheel](https://github.com/pypa/auditwheel/) to ensure compliance with the [manylinux rules](https://peps.python.org/pep-0599/#the-manylinux2014-policy).
If you want to publish widely usable wheels for Linux to PyPI, **you need to use a manylinux docker image or build with zig**.
The Rust compiler since version 1.64 [requires at least glibc 2.17](https://blog.rust-lang.org/2022/08/01/Increasing-glibc-kernel-requirements.html), so you need to use at least manylinux2014.
For publishing, we recommend enforcing the same manylinux version as the image with the manylinux flag, e.g. use `--manylinux 2014` if you are building in `quay.io/pypa/manylinux2014_x86_64`.
The [PyO3/maturin-action](https://github.com/PyO3/maturin-action) github action already takes care of this if you set e.g. `manylinux: 2014`.
maturin contains a reimplementation of auditwheel that automatically checks the generated library and gives the wheel the proper platform tag.
If your system's glibc is too new or you link other shared libraries, it will assign the `linux` tag.
You can also manually disable those checks and directly use the native linux target with `--manylinux off`.
For full manylinux compliance you need to compile in a CentOS docker container. The [pyo3/maturin](https://ghcr.io/pyo3/maturin) image is based on the manylinux2014 image,
and passes arguments to the `maturin` binary. You can use it like this:
```
docker run --rm -v $(pwd):/io ghcr.io/pyo3/maturin build --release # or other maturin arguments
```
Note that this image is very basic and only contains python, maturin and stable rust. If you need additional tools, you can run commands inside the manylinux container.
See [konstin/complex-manylinux-maturin-docker](https://github.com/konstin/complex-manylinux-maturin-docker) for a small educational example or [nanoporetech/fast-ctc-decode](https://github.com/nanoporetech/fast-ctc-decode/blob/b226ea0f2b2f4f474eff47349703d57d2ea4801b/.github/workflows/publish.yml) for a real world setup.
maturin itself is manylinux compliant when compiled for the musl target.
## Examples
- [agg-python-bindings](https://pypi.org/project/agg-python-bindings) - A Python Library that binds to Asciinema Agg terminal record renderer and Avt terminal emulator
- [ballista-python](https://github.com/apache/arrow-ballista-python) - A Python library that binds to Apache Arrow distributed query engine Ballista
- [bleuscore](https://github.com/shenxiangzhuang/bleuscore) - A BLEU score calculation library, written in pure Rust
- [chardetng-py](https://github.com/john-parton/chardetng-py) - Python binding for the chardetng character encoding detector.
- [connector-x](https://github.com/sfu-db/connector-x/tree/main/connectorx-python) - ConnectorX enables you to load data from databases into Python in the fastest and most memory efficient way
- [datafusion-python](https://github.com/apache/arrow-datafusion-python) - a Python library that binds to Apache Arrow in-memory query engine DataFusion
- [deltalake-python](https://github.com/delta-io/delta-rs/tree/main/python) - Native Delta Lake Python binding based on delta-rs with Pandas integration
- [opendal](https://github.com/apache/incubator-opendal/tree/main/bindings/python) - OpenDAL Python Binding to access data freely
- [orjson](https://github.com/ijl/orjson) - A fast, correct JSON library for Python
- [polars](https://github.com/pola-rs/polars/tree/master/py-polars) - Fast multi-threaded DataFrame library in Rust | Python | Node.js
- [pydantic-core](https://github.com/pydantic/pydantic-core) - Core validation logic for pydantic written in Rust
- [pyrus-cramjam](https://github.com/milesgranger/pyrus-cramjam) - Thin Python wrapper to de/compression algorithms in Rust
- [pyxel](https://github.com/kitao/pyxel) - A retro game engine for Python
- [roapi](https://github.com/roapi/roapi) - ROAPI automatically spins up read-only APIs for static datasets without requiring you to write a single line of code
- [robyn](https://github.com/sansyrox/robyn) - A fast and extensible async python web server with a Rust runtime
- [ruff](https://github.com/charliermarsh/ruff) - An extremely fast Python linter, written in Rust
- [rnet](https://github.com/0x676e67/rnet) - Asynchronous Python HTTP Client with Black Magic
- [rustpy-xlsxwriter](https://github.com/rahmadafandi/rustpy-xlsxwriter): A high-performance Python library for generating Excel files, utilizing the [rust_xlsxwriter](https://github.com/jmcnamara/rust_xlsxwriter) crate for efficient data handling.
- [tantivy-py](https://github.com/quickwit-oss/tantivy-py) - Python bindings for Tantivy
- [tpchgen-cli](https://github.com/clflushopt/tpchgen-rs/tree/main/tpchgen-cli) - Python CLI binding for `tpchgen`, a blazing fast TPC-H benchmark data generator built in pure Rust with zero dependencies.
- [watchfiles](https://github.com/samuelcolvin/watchfiles) - Simple, modern and high performance file watching and code reload in python
- [wonnx](https://github.com/webonnx/wonnx/tree/master/wonnx-py) - Wonnx is a GPU-accelerated ONNX inference run-time written 100% in Rust
## Contributing
Everyone is welcome to contribute to maturin! There are many ways to support the project, such as:
- help maturin users with issues on GitHub and Gitter
- improve documentation
- write features and bugfixes
- publish blogs and examples of how to use maturin
Our [contributing notes](https://github.com/PyO3/maturin/blob/main/guide/src/contributing.md) have more resources if you wish to volunteer time for maturin and are looking for a place to start.
If you don't have time to contribute yourself but still wish to support the project's future success, some of our maintainers have GitHub sponsorship pages:
- [messense](https://github.com/sponsors/messense)
## License
Licensed under either of:
- Apache License, Version 2.0, ([LICENSE-APACHE](https://github.com/PyO3/maturin/blob/main/license-apache) or http://www.apache.org/licenses/LICENSE-2.0)
- MIT license ([LICENSE-MIT](https://github.com/PyO3/maturin/blob/main/license-mit) or http://opensource.org/licenses/MIT)
at your option.
| text/markdown | null | konstin <konstin@mailbox.org> | null | null | null | null | [
"Topic :: Software Development :: Build Tools",
"Programming Language :: Rust",
"Programming Language :: Python :: Implementation :: CPython",
"Programming Language :: Python :: Implementation :: PyPy",
"Programming Language :: Python :: Implementation :: GraalPy"
] | [] | https://github.com/pyo3/maturin | null | >=3.7 | [] | [] | [] | [
"tomli>=1.1.0; python_full_version < \"3.11\"",
"patchelf; extra == \"patchelf\"",
"ziglang>=0.10.0; extra == \"zig\""
] | [] | [] | [] | [
"Changelog, https://maturin.rs/changelog.html",
"Documentation, https://maturin.rs",
"Issues, https://github.com/PyO3/maturin/issues",
"Source Code, https://github.com/PyO3/maturin"
] | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true} | 2026-02-21T10:24:35.382771 | maturin-1.12.4-py3-none-linux_armv6l.whl | 9,758,449 | e1/cd/8285f37bf968b8485e3c7eb43349a5adbccfddfc487cd4327fb9104578cc/maturin-1.12.4-py3-none-linux_armv6l.whl | py3 | bdist_wheel | null | false | 0e26eebec487046303708c225c32f442 | cf8a0eddef9ab8773bc823c77aed3de9a5c85fb760c86448048a79ef89794c81 | e1cd8285f37bf968b8485e3c7eb43349a5adbccfddfc487cd4327fb9104578cc | MIT OR Apache-2.0 | [
"license-mit",
"license-apache"
] | 64,032 |
2.4 | bruin-sdk | 0.1.0 | Python SDK for Bruin CLI — query databases, parse context, and access connections with zero boilerplate. | # Bruin Python SDK
The official Python SDK for [Bruin CLI](https://github.com/bruin-data/bruin). Query databases, access connections, and read pipeline context — all with zero boilerplate.
```python
from bruin import query, get_connection, context
# One-liner: query any database Bruin manages
df = query("SELECT * FROM users WHERE created_at > '{{start_date}}'")
# Access pipeline context
print(context.start_date) # datetime.date(2024, 6, 1)
print(context.pipeline) # "my_pipeline"
print(context.asset_name) # "my_asset"
# Get a typed database client
conn = get_connection("my_bigquery")
client = conn.client # google.cloud.bigquery.Client, ready to use
```
## Installation
Add `bruin-sdk` to the `requirements.txt` that sits next to your Python assets:
```
bruin-sdk
pandas
```
For specific database connections, install the corresponding extras:
```
bruin-sdk[bigquery] # Google BigQuery
bruin-sdk[snowflake] # Snowflake
bruin-sdk[postgres] # PostgreSQL / Redshift
bruin-sdk[redshift] # Redshift (alias for postgres extra)
bruin-sdk[mssql] # Microsoft SQL Server
bruin-sdk[mysql] # MySQL
bruin-sdk[duckdb] # DuckDB
bruin-sdk[sheets] # Google Sheets (for GCP connections)
bruin-sdk[all] # Everything
```
## Quick Start
### Before (manual boilerplate)
```python
""" @bruin
name: my_asset
secrets:
- key: bigquery_conn
@bruin """
import os
import json
from google.cloud import bigquery
# Parse connection JSON from env var
raw = json.loads(os.environ["bigquery_conn"])
sa_info = json.loads(raw["service_account_json"])
# Create client manually
client = bigquery.Client.from_service_account_info(
sa_info, project=raw["project_id"]
)
# Execute query
start = os.environ["BRUIN_START_DATE"]
df = client.query(f"SELECT * FROM users WHERE dt >= '{start}'").to_dataframe()
```
### After (with SDK)
```python
""" @bruin
name: my_asset
connection: bigquery_conn
@bruin """
from bruin import query, context
df = query(f"SELECT * FROM users WHERE dt >= '{context.start_date}'")
```
---
## API Reference
### `context`
A module-level object that provides access to all `BRUIN_*` environment variables as properly typed Python values. Each property reads the env var fresh on every access — no caching, no stale values.
```python
from bruin import context
```
| Property | Type | Env Var | Description |
|----------|------|---------|-------------|
| `context.start_date` | `date \| None` | `BRUIN_START_DATE` | Pipeline run start date |
| `context.end_date` | `date \| None` | `BRUIN_END_DATE` | Pipeline run end date |
| `context.start_datetime` | `datetime \| None` | `BRUIN_START_DATETIME` | Start date with time |
| `context.end_datetime` | `datetime \| None` | `BRUIN_END_DATETIME` | End date with time |
| `context.execution_date` | `date \| None` | `BRUIN_EXECUTION_DATE` | Execution date |
| `context.run_id` | `str \| None` | `BRUIN_RUN_ID` | Unique run identifier |
| `context.pipeline` | `str \| None` | `BRUIN_PIPELINE` | Pipeline name |
| `context.asset_name` | `str \| None` | `BRUIN_ASSET` | Current asset name |
| `context.connection` | `str \| None` | `BRUIN_CONNECTION` | Asset's default connection |
| `context.is_full_refresh` | `bool` | `BRUIN_FULL_REFRESH` | `True` when `--full-refresh` flag is set |
| `context.vars` | `dict` | `BRUIN_VARS` | Pipeline variables (types preserved from JSON Schema) |
All properties return `None` when the corresponding env var is missing (except `is_full_refresh` which returns `False`, and `vars` which returns `{}`).
```python
from bruin import context
# Dates
print(context.start_date) # datetime.date(2024, 6, 1)
print(context.end_date) # datetime.date(2024, 6, 2)
# Pipeline variables (types preserved from pipeline.yml JSON Schema)
segment = context.vars["segment"] # str: "enterprise"
horizon = context.vars["horizon"] # int: 30
cohorts = context.vars["cohorts"] # list[dict]
# Conditional logic
if context.is_full_refresh:
df = query("SELECT * FROM users")
else:
df = query(f"SELECT * FROM users WHERE dt >= '{context.start_date}'")
```
---
### `query(sql, connection=None)`
Execute SQL and return results.
```python
from bruin import query
```
**Parameters:**
| Parameter | Type | Default | Description |
|-----------|------|---------|-------------|
| `sql` | `str` | *(required)* | SQL statement to execute |
| `connection` | `str \| None` | `None` | Connection name. When `None`, uses the asset's default connection (`BRUIN_CONNECTION`) |
**Returns:** `pandas.DataFrame` for data-returning statements (`SELECT`, `WITH`, `SHOW`, `DESCRIBE`, `EXPLAIN`), `None` for DDL/DML (`CREATE`, `INSERT`, `UPDATE`, `DELETE`, `DROP`, etc.).
```python
# Uses the asset's default connection (from the `connection:` field in asset definition)
df = query("SELECT * FROM users")
# Explicit connection name
df = query("SELECT * FROM users", connection="my_bigquery")
# DDL/DML returns None
query("CREATE TABLE temp_users AS SELECT * FROM users")
query("INSERT INTO audit_log VALUES ('ran_asset', NOW())")
# Works with any supported database
df_bq = query("SELECT * FROM users", connection="my_bigquery")
df_sf = query("SELECT * FROM users", connection="my_snowflake")
df_pg = query("SELECT * FROM users", connection="my_postgres")
```
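The DataFrame-vs-`None` rule amounts to a prefix check on the statement. A sketch of that dispatch (simplified; not the SDK's actual implementation):

```python
# Statements that produce a result set, per the table above
DATA_RETURNING = ("SELECT", "WITH", "SHOW", "DESCRIBE", "EXPLAIN")

def returns_data(sql: str) -> bool:
    # True -> query() yields a DataFrame; False -> DDL/DML, query() yields None
    return sql.lstrip().upper().startswith(DATA_RETURNING)

print(returns_data("  select * from users"))             # True
print(returns_data("INSERT INTO audit_log VALUES (1)"))  # False
```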
Every query is automatically annotated with `@bruin.config` metadata for observability and cost tracking.
---
### `get_connection(name)`
Get a typed connection object with a lazy database client.
```python
from bruin import get_connection
```
**Parameters:**
| Parameter | Type | Description |
|-----------|------|-------------|
| `name` | `str` | Connection name as defined in `.bruin.yml` and injected via `secrets` |
**Returns:** `Connection` or `GCPConnection` depending on the connection type.
```python
conn = get_connection("my_bigquery")
conn.name # "my_bigquery"
conn.type # "google_cloud_platform"
conn.raw # dict — the parsed connection JSON
conn.client # Lazy-initialized database client
```
#### Connection types
| Type | `.client` returns | Install extra |
|------|-------------------|---------------|
| `google_cloud_platform` | `bigquery.Client` | `bruin-sdk[bigquery]` |
| `snowflake` | `snowflake.connector.Connection` | `bruin-sdk[snowflake]` |
| `postgres` | `psycopg2.connection` | `bruin-sdk[postgres]` |
| `redshift` | `psycopg2.connection` | `bruin-sdk[redshift]` |
| `mssql` | `pymssql.Connection` | `bruin-sdk[mssql]` |
| `mysql` | `mysql.connector.Connection` | `bruin-sdk[mysql]` |
| `duckdb` | `duckdb.DuckDBPyConnection` | `bruin-sdk[duckdb]` |
| `generic` | N/A (raises error) | — |
Client creation is **lazy** — the actual database connection is only established when `.client` is first accessed.
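This lazy behavior is the standard cached-property pattern. Roughly (a simplified sketch; `_create_client` stands in for the driver-specific constructor):

```python
class LazyConnection:
    """Simplified sketch of a connection with a lazily created client."""

    def __init__(self, name: str, raw: dict):
        self.name = name
        self.raw = raw
        self._client = None

    @property
    def client(self):
        # No database connection is made until .client is first accessed;
        # subsequent accesses reuse the same client object.
        if self._client is None:
            self._client = self._create_client()
        return self._client

    def _create_client(self):
        return object()  # stand-in for e.g. bigquery.Client(...)

conn = LazyConnection("my_bigquery", {})
assert conn._client is None          # nothing created yet
assert conn.client is conn.client    # created once, then reused
```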
#### GCP connections
GCP connections have extra methods since one connection can access multiple Google services:
```python
conn = get_connection("my_gcp")
# BigQuery (most common — also available as .client)
bq_client = conn.bigquery()
df = bq_client.query("SELECT 1").to_dataframe()
# Google Sheets
sheets_client = conn.sheets() # requires bruin-sdk[sheets]
# Cloud Storage
gcs_client = conn.storage() # requires google-cloud-storage
# Raw credentials for any Google API
creds = conn.credentials # google.oauth2.Credentials
```
#### Generic connections
Generic connections hold a raw string value (like an API key or webhook URL). They don't have a database client:
```python
conn = get_connection("slack_webhook")
conn.type # "generic"
conn.raw # "https://hooks.slack.com/services/T00/B00/xxx"
conn.client # raises ConnectionTypeError
```
---
### `Connection.query(sql)`
Connections also have a `.query()` method — an alternative to the top-level `query()`:
```python
conn = get_connection("my_bigquery")
# These are equivalent:
df = conn.query("SELECT * FROM users")
df = query("SELECT * FROM users", connection="my_bigquery")
```
Same return behavior: `DataFrame` for SELECT, `None` for DDL/DML.
---
## Exceptions
All SDK exceptions inherit from `BruinError`:
```python
from bruin.exceptions import (
BruinError, # Base class
ConnectionNotFoundError, # Connection name not found or env var missing
ConnectionParseError, # Invalid JSON in connection env var
ConnectionTypeError, # Unsupported or generic connection type
QueryError, # SQL execution failed
)
```
```python
try:
df = query("SELECT * FROM users", connection="missing")
except ConnectionNotFoundError as e:
print(e)
# Connection 'missing' not found. Available connections: my_bigquery, my_snowflake.
```
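Because every exception derives from `BruinError`, a single handler covers all failure modes. A minimal sketch with stand-in classes mirroring the hierarchy above:

```python
class BruinError(Exception):
    """Base class (stand-in for bruin.exceptions.BruinError)."""

class ConnectionNotFoundError(BruinError):
    pass

class QueryError(BruinError):
    pass

def run_asset():
    raise ConnectionNotFoundError("Connection 'missing' not found")

try:
    run_asset()
except BruinError as e:
    # Catches ConnectionNotFoundError, QueryError, and any other SDK error
    print(f"asset failed: {e}")
```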
Missing optional dependencies give clear install instructions:
```python
conn = get_connection("my_snowflake")
conn.client
# ImportError: Install bruin-sdk[snowflake] to use Snowflake connections:
# pip install 'bruin-sdk[snowflake]'
```
---
## Asset Setup
The SDK reads connection data from environment variables that Bruin injects. To make a connection available in your Python asset, use the `secrets` key in your asset definition:
```python
""" @bruin
name: my_asset
secrets:
- key: my_bigquery
@bruin """
from bruin import get_connection
conn = get_connection("my_bigquery")
```
For the default connection (used by `query()` when no `connection` argument is given), set the `connection` field:
```python
""" @bruin
name: my_asset
connection: my_bigquery
secrets:
- key: my_bigquery
@bruin """
from bruin import query
# Uses my_bigquery automatically
df = query("SELECT * FROM users")
```
---
## Examples
### Incremental load with date filtering
```python
""" @bruin
name: analytics.daily_events
connection: my_bigquery
secrets:
- key: my_bigquery
@bruin """
from bruin import query, context
if context.is_full_refresh:
df = query("SELECT * FROM raw.events")
else:
df = query(f"""
SELECT * FROM raw.events
WHERE event_date BETWEEN '{context.start_date}' AND '{context.end_date}'
""")
print(f"Loaded {len(df)} events")
```
### Cross-database ETL
```python
""" @bruin
name: sync.postgres_to_bigquery
secrets:
- key: my_postgres
- key: my_bigquery
@bruin """
from bruin import query, get_connection
# Read from Postgres
df = query("SELECT * FROM users WHERE active = true", connection="my_postgres")
# Write to BigQuery
bq = get_connection("my_bigquery")
df.to_gbq(
"staging.active_users",
project_id=bq.raw["project_id"],
credentials=bq.credentials,
if_exists="replace",
)
```
### Using pipeline variables
```yaml
# pipeline.yml
name: marketing
variables:
segment:
type: string
default: "enterprise"
lookback_days:
type: integer
default: 30
```
```python
""" @bruin
name: marketing.segment_report
connection: my_snowflake
secrets:
- key: my_snowflake
@bruin """
from bruin import query, context
segment = context.vars["segment"]
lookback = context.vars["lookback_days"]
df = query(f"""
SELECT * FROM customers
WHERE segment = '{segment}'
AND created_at >= DATEADD(day, -{lookback}, CURRENT_DATE())
""")
print(f"Found {len(df)} {segment} customers in last {lookback} days")
```
### DDL operations
```python
""" @bruin
name: setup.create_tables
connection: my_postgres
secrets:
- key: my_postgres
@bruin """
from bruin import query
# DDL returns None
query("CREATE TABLE IF NOT EXISTS audit_log (event TEXT, ts TIMESTAMP)")
query("INSERT INTO audit_log VALUES ('setup_complete', NOW())")
# SELECT returns DataFrame
df = query("SELECT COUNT(*) as cnt FROM audit_log")
print(f"Audit log has {df['cnt'][0]} entries")
```
| text/markdown | Bruin Team | null | null | null | null | null | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13"
] | [] | null | null | >=3.9 | [] | [] | [] | [
"pandas",
"db-dtypes; extra == \"all\"",
"duckdb; extra == \"all\"",
"google-auth; extra == \"all\"",
"google-cloud-bigquery; extra == \"all\"",
"mysql-connector-python; extra == \"all\"",
"psycopg2-binary; extra == \"all\"",
"pygsheets; extra == \"all\"",
"pymssql; extra == \"all\"",
"snowflake-connector-python; extra == \"all\"",
"db-dtypes; extra == \"bigquery\"",
"google-auth; extra == \"bigquery\"",
"google-cloud-bigquery; extra == \"bigquery\"",
"pytest; extra == \"dev\"",
"pytest-cov; extra == \"dev\"",
"duckdb; extra == \"duckdb\"",
"pymssql; extra == \"mssql\"",
"mysql-connector-python; extra == \"mysql\"",
"psycopg2-binary; extra == \"postgres\"",
"psycopg2-binary; extra == \"redshift\"",
"google-auth; extra == \"sheets\"",
"pygsheets; extra == \"sheets\"",
"snowflake-connector-python; extra == \"snowflake\""
] | [] | [] | [] | [
"Homepage, https://github.com/bruin-data/bruin",
"Repository, https://github.com/bruin-data/bruin-python",
"Documentation, https://docs.getbruin.com"
] | twine/6.2.0 CPython/3.13.1 | 2026-02-21T10:23:29.784094 | bruin_sdk-0.1.0.tar.gz | 165,364 | 77/4f/b72cddd15c4ca21299f18ad3fdebffa3cf89abb44aa66688c55d873dad00/bruin_sdk-0.1.0.tar.gz | source | sdist | null | false | ae1a02c58adf3aaa11bfd74cf5a63276 | 4c84834c451e643c0291e6b999c6f3b31d302ebe562ab88aa8f016a7d288ca4e | 774fb72cddd15c4ca21299f18ad3fdebffa3cf89abb44aa66688c55d873dad00 | Apache-2.0 | [] | 270 |
2.4 | simple-memo | 0.1.0 | A simple, MIT-licensed CLI for Apple Notes & Reminders on macOS | # simple-memo
A simple, MIT-licensed CLI for Apple Notes & Reminders on macOS.
Use it from the terminal, scripts, or AI agents — no restrictions.
## Install
```bash
# pip / pipx
pip install simple-memo
# or
pipx install simple-memo
# Homebrew (coming soon)
# brew install inkolin/tap/simple-memo
```
## Notes
```bash
simple-memo list # List all notes
simple-memo list -f Work # List notes in a folder
simple-memo folders # List all folders
simple-memo read "Meeting Notes" # Read a note (Markdown)
simple-memo create "Title" "Body text" # Create a note
simple-memo create -i # Create in $EDITOR
simple-memo create "Title" -f Work # Create in specific folder
echo "piped" | simple-memo create "Title" # Create from stdin
simple-memo edit "Meeting Notes" # Edit in $EDITOR
simple-memo append "Title" "More text" # Append to a note
simple-memo move "Title" "Archive" # Move to folder (creates if needed)
simple-memo search "keyword" # Search by content/title
simple-memo search --fzf # Interactive fuzzy search (requires fzf)
simple-memo delete "Old Note" # Delete a note
simple-memo count # Count total notes
simple-memo export # Export all to ~/Desktop/simple-memo-export/
simple-memo export -o ./backup # Export to custom directory
simple-memo export --html # Export as HTML instead of Markdown
simple-memo mkfolder "Projects" # Create a folder
simple-memo rmfolder "Old Stuff" # Delete a folder
```
## Reminders
```bash
simple-memo rem list # List non-completed reminders
simple-memo rem list -a # List all (including completed)
simple-memo rem add "Buy milk" # Create reminder (no due date)
simple-memo rem add "Meeting" -d 2025-03-01 -t 14:00 # With due date
simple-memo rem done "Buy milk" # Mark as completed
simple-memo rem edit "Meeting" --new-title "Team sync" # Rename
simple-memo rem edit "Meeting" --new-date "2025-03-05 10:00" # Reschedule
simple-memo rem delete "Old reminder" # Delete
```
## Why?
Existing Apple Notes CLI tools use restrictive licenses that prohibit AI usage.
`simple-memo` is MIT-licensed — use it however you want, including with AI agents, commercial products, and automation scripts.
## Requirements
- macOS (uses AppleScript to talk to Apple Notes & Reminders)
- Python 3.9+
- Optional: [fzf](https://github.com/junegunn/fzf) for interactive search
## License
MIT — see [LICENSE](LICENSE)
| text/markdown | Nenad Nikolin | null | null | null | null | apple-notes, cli, macos, productivity, reminders | [
"Development Status :: 4 - Beta",
"Environment :: Console",
"Intended Audience :: Developers",
"Intended Audience :: End Users/Desktop",
"License :: OSI Approved :: MIT License",
"Operating System :: MacOS",
"Programming Language :: Python :: 3",
"Topic :: Utilities"
] | [] | null | null | >=3.9 | [] | [] | [] | [
"click>=8.0",
"html2text>=2024.2.26",
"mistune>=3.0"
] | [] | [] | [] | [
"Homepage, https://github.com/inkolin/simple-memo",
"Repository, https://github.com/inkolin/simple-memo",
"Issues, https://github.com/inkolin/simple-memo/issues"
] | twine/6.2.0 CPython/3.13.11 | 2026-02-21T10:23:11.634915 | simple_memo-0.1.0.tar.gz | 10,019 | b0/b1/b9d3f80138f04e39e7f5cdba5a6f1249191a692267852245c0f885ad66a4/simple_memo-0.1.0.tar.gz | source | sdist | null | false | e270e223abf7c7e779df829ca66d3c3d | 2c3dc4e10c27c08633ddf2b7265410c4940f6073fd4602126139e8820700e647 | b0b1b9d3f80138f04e39e7f5cdba5a6f1249191a692267852245c0f885ad66a4 | MIT | [
"LICENSE"
] | 269 |
2.4 | supreme-max | 1.1.1 | Supreme 2 MAX - AI-first security scanner with 74 analyzers, intelligent false positive reduction, and 180+ AI agent security rules | # Supreme 2 MAX - Multi-Language Security Scanner
[](https://pypi.org/project/supreme-max/)
[](https://pypi.org/project/supreme-max/)
[](https://www.python.org/downloads/)
[](https://www.gnu.org/licenses/agpl-3.0)
[](https://github.com/Zeinullahh/Supreme-2-MAX/actions/workflows/test.yml)
[](https://github.com/Zeinullahh/Supreme-2-MAX)
[](https://github.com/Zeinullahh/Supreme-2-MAX)
[](https://github.com/Zeinullahh/Supreme-2-MAX)
**AI-first security scanner** | 74 analyzers | Intelligent FP reduction | 180+ AI agent security rules | Sandbox compatible
---
## What is Supreme 2 MAX?
Supreme 2 MAX is a comprehensive Static Application Security Testing (SAST) tool with **74 specialized scanners** covering all major languages and platforms. It features intelligent false positive reduction and 180+ AI agent security rules for the agentic era.
### ✨ Key Features
- 🔍 **74 Specialized Scanners** - Most comprehensive coverage available with intelligent selection
- 🎯 **Intelligent FP Filter** - Reduces false positives by 40-60% using context-aware analysis
- 🚨 **CVE Detection** - React2Shell (CVE-2025-55182), Next.js vulnerabilities, supply chain risks
- 🤖 **AI Agent Security** - 180+ rules for MCP, RAG, prompt injection, tool poisoning & more
- 🏖️ **Sandbox Compatible** - Works in Codex, restricted environments, and CI/CD pipelines
- ⚡ **Parallel Processing** - Multi-core scanning (10-40× faster than sequential)
- 🎨 **Beautiful CLI** - Rich terminal output with progress bars
- 🧠 **IDE Integration** - Claude Code, Cursor, VS Code, Gemini CLI, OpenAI Codex support
- 📦 **Auto-Installer** - One-command installation of all security tools (Windows, macOS, Linux)
- 🔄 **Smart Caching** - Skip unchanged files for lightning-fast rescans
- ⚙️ **Configurable** - `.supreme-max.yml` for project-specific settings
- 🌍 **Cross-Platform** - Native Windows, macOS, and Linux support
- 📊 **Multiple Reports** - JSON, HTML, Markdown, SARIF exports for any workflow
- 🎯 **Zero Config** - Works out of the box with sensible defaults
---
## 🚀 Quick Start
### Installation
**Windows (Recommended - Virtual Environment):**
```powershell
# Create and activate virtual environment (security best practice)
py -m venv supreme-max-env
supreme-max-env\Scripts\activate
# Install Supreme 2 MAX
pip install supreme-max
# Verify installation
sm --version
```
**Windows (System-wide - Not Recommended):**
```powershell
# Install Supreme 2 MAX system-wide (not recommended)
py -m pip install supreme-max --no-warn-script-location
# Verify installation
py -m supreme-max --version
```
> **Note for Windows users**: Virtual environments provide better isolation and avoid PATH warnings. If using system-wide install, use `py -m supreme-max` for all commands.
**macOS/Linux (Recommended - Virtual Environment):**
```bash
# Create and activate virtual environment (security best practice)
python3 -m venv supreme-max-env
source supreme-max-env/bin/activate
# Install Supreme 2 MAX
pip install supreme-max
# Verify installation
sm --version
```
**macOS/Linux (System-wide - Not Recommended):**
```bash
# Only use if you understand the implications
pip install supreme-max --user
# Verify installation
sm --version
```
**Install from source (all platforms):**
```bash
git clone https://github.com/Zeinullahh/Supreme-2-MAX.git
cd Supreme-2-MAX
# Use virtual environment (recommended)
python3 -m venv venv
source venv/bin/activate # On Windows: venv\Scripts\activate
pip install -e .
```
**Platform-Specific Notes:**
- **Windows**: Use `py -m supreme-max` instead of `sm` if the command is not found
- **macOS**: If the `sm` command is not found, run `python3 -m supreme-max setup_path` or use `python3 -m supreme-max`
- **Linux**: Should work out of the box with `sm` command
> **✅ Windows Support**: Supreme 2 MAX now has full native Windows support with automatic tool installation via winget, chocolatey, and npm!
### 5-Minute Setup
**Windows:**
```powershell
# 1. Initialize in your project
cd your-project
py -m supreme-max init
# 2. Install security tools (auto-detected for your platform)
py -m supreme-max install --all
# 3. Run your first scan
py -m supreme-max scan .
```
**macOS/Linux:**
```bash
# 1. Initialize in your project
cd your-project
sm init
# 2. Install security tools (auto-detected for your platform)
sm install --all
# 3. Run your first scan
sm scan .
```
### Example Output
```
Supreme 2 MAX v2025.9.0 - Security Guardian
🎯 Target: .
🔧 Mode: Full
📁 Found 145 scannable files
📊 Scanning 145 files with 6 workers...
✅ Scanned 145 files
🎯 PARALLEL SCAN COMPLETE
📂 Files scanned: 145
⚡ Files cached: 0
🔍 Issues found: 114
⏱️ Total time: 47.28s
📈 Cache hit rate: 0.0%
🔧 Scanners used: bandit, eslint, shellcheck, yamllint
📊 Reports generated:
JSON → .supreme-max/reports/supreme-max-scan-20250119-083045.json
HTML → .supreme-max/reports/supreme-max-scan-20250119-083045.html
Markdown → .supreme-max/reports/supreme-max-scan-20250119-083045.md
✅ Scan complete!
```
### 📊 Report Formats
Supreme 2 MAX generates beautiful reports in multiple formats:
**JSON** - Machine-readable for CI/CD integration
```bash
sm scan . --format json
```
**HTML** - Stunning glassmorphism UI with interactive charts
```bash
sm scan . --format html
```
**Markdown** - Documentation-friendly for GitHub/wikis
```bash
sm scan . --format markdown
```
**All Formats** - Generate everything at once
```bash
sm scan . --format all
```
---
## 📚 Language Support
Supreme 2 MAX supports **42 different scanner types** covering all major programming languages and file formats:
### Backend Languages (9)
| Language | Scanner | Extensions |
|----------|---------|------------|
| Python | Bandit | `.py` |
| JavaScript/TypeScript | ESLint | `.js`, `.jsx`, `.ts`, `.tsx` |
| Go | golangci-lint | `.go` |
| Ruby | RuboCop | `.rb`, `.rake`, `.gemspec` |
| PHP | PHPStan | `.php` |
| Rust | Clippy | `.rs` |
| Java | Checkstyle | `.java` |
| C/C++ | cppcheck | `.c`, `.cpp`, `.cc`, `.cxx`, `.h`, `.hpp` |
| C# | Roslynator | `.cs` |
### JVM Languages (3)
| Language | Scanner | Extensions |
|----------|---------|------------|
| Kotlin | ktlint | `.kt`, `.kts` |
| Scala | Scalastyle | `.scala` |
| Groovy | CodeNarc | `.groovy`, `.gradle` |
### Functional Languages (5)
| Language | Scanner | Extensions |
|----------|---------|------------|
| Haskell | HLint | `.hs`, `.lhs` |
| Elixir | Credo | `.ex`, `.exs` |
| Erlang | Elvis | `.erl`, `.hrl` |
| F# | FSharpLint | `.fs`, `.fsx` |
| Clojure | clj-kondo | `.clj`, `.cljs`, `.cljc` |
### Mobile Development (2)
| Language | Scanner | Extensions |
|----------|---------|------------|
| Swift | SwiftLint | `.swift` |
| Objective-C | OCLint | `.m`, `.mm` |
### Frontend & Styling (3)
| Language | Scanner | Extensions |
|----------|---------|------------|
| CSS/SCSS/Sass/Less | Stylelint | `.css`, `.scss`, `.sass`, `.less` |
| HTML | HTMLHint | `.html`, `.htm` |
| Vue.js | ESLint | `.vue` |
### Infrastructure as Code (4)
| Language | Scanner | Extensions |
|----------|---------|------------|
| Terraform | tflint | `.tf`, `.tfvars` |
| Ansible | ansible-lint | `.yml` (playbooks) |
| Kubernetes | kubeval | `.yml`, `.yaml` (manifests) |
| CloudFormation | cfn-lint | `.yml`, `.yaml`, `.json` (templates) |
### Configuration Files (5)
| Language | Scanner | Extensions |
|----------|---------|------------|
| YAML | yamllint | `.yml`, `.yaml` |
| JSON | built-in | `.json` |
| TOML | taplo | `.toml` |
| XML | xmllint | `.xml` |
| Protobuf | buf lint | `.proto` |
### Shell & Scripts (4)
| Language | Scanner | Extensions |
|----------|---------|------------|
| Bash/Shell | ShellCheck | `.sh`, `.bash` |
| PowerShell | PSScriptAnalyzer | `.ps1`, `.psm1` |
| Lua | luacheck | `.lua` |
| Perl | perlcritic | `.pl`, `.pm` |
### Documentation (2)
| Language | Scanner | Extensions |
|----------|---------|------------|
| Markdown | markdownlint | `.md` |
| reStructuredText | rst-lint | `.rst` |
### Other Languages (5)
| Language | Scanner | Extensions |
|----------|---------|------------|
| SQL | SQLFluff | `.sql` |
| R | lintr | `.r`, `.R` |
| Dart | dart analyze | `.dart` |
| Solidity | solhint | `.sol` |
| Docker | hadolint | `Dockerfile*` |
**Total: 42 scanner types covering 100+ file extensions**
---
## 🚨 React2Shell CVE Detection (NEW in v2025.8)
Supreme 2 MAX now detects **CVE-2025-55182 "React2Shell"** - a CVSS 10.0 RCE vulnerability affecting React Server Components and Next.js.
```bash
# Check if your project is vulnerable
sm scan .
# Vulnerable versions detected:
# - React 19.0.0 - 19.2.0 (Server Components)
# - Next.js 15.0.0 - 15.0.4 (App Router)
# - Various canary/rc releases
```
**Scans**: `package.json`, `package-lock.json`, `yarn.lock`, `pnpm-lock.yaml`
**Fix**: Upgrade to React 19.0.1+ and Next.js 15.0.5+
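A version-range check along these lines is easy to reproduce. This sketch implements only the affected ranges listed above (prerelease tags like `-canary.1` are stripped rather than compared) and is not the scanner's actual detection logic:

```python
def _version_tuple(version: str) -> tuple:
    # Strip prerelease suffixes such as "-canary.1" for this rough check
    core = version.split("-")[0]
    return tuple(int(part) for part in core.split("."))

def react2shell_vulnerable(react: str = None, nextjs: str = None) -> bool:
    """Check versions against the affected ranges listed above."""
    if react and (19, 0, 0) <= _version_tuple(react) <= (19, 2, 0):
        return True
    if nextjs and (15, 0, 0) <= _version_tuple(nextjs) <= (15, 0, 4):
        return True
    return False
```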
---
## 🤖 AI Agent Security (v2025.7+)
Supreme 2 MAX provides **industry-leading AI security scanning** with **22 specialized scanners** and **180+ detection rules** for the agentic AI era. Updated for **OWASP Top 10 for LLM Applications 2025** and includes detection for **CVE-2025-6514** (mcp-remote RCE).
**[Full AI Security Documentation](docs/AI_SECURITY.md)**
### AI Security Scanners
| Scanner | Rules | Detects |
|---------|-------|---------|
| **OWASPLLMScanner** | LLM01-10 | OWASP Top 10 2025: Prompt injection, system prompt leakage, unbounded consumption |
| **MCPServerScanner** | MCP101-118 | Tool poisoning, CVE-2025-6514, confused deputy, command injection |
| **MCPConfigScanner** | MCP001-013 | Secrets, dangerous paths, HTTP without TLS, untrusted sources |
| **AIContextScanner** | AIC001-030 | Prompt injection, memory manipulation, HITL bypass |
| **RAGSecurityScanner** | RAG001-010 | Vector injection, document poisoning, tenant isolation |
| **VectorDBScanner** | VD001-010 | Unencrypted storage, PII in embeddings, exposed endpoints |
| **LLMOpsScanner** | LO001-010 | Insecure model loading, checkpoint exposure, drift detection |
| + 9 more | 60+ rules | Multi-agent, planning, reflection, A2A, model attacks |
### AI Attack Coverage
<table>
<tr><td>
**Context & Input Attacks**
- Prompt injection patterns
- Role/persona manipulation
- Hidden instructions
- Obfuscation tricks
**Memory & State Attacks**
- Memory poisoning
- Context manipulation
- Checkpoint tampering
- Cross-session exposure
**Tool & Action Attacks**
- Tool poisoning (CVE-2025-6514)
- Command injection
- Tool name spoofing
- Confused deputy patterns
</td><td>
**Workflow & Routing Attacks**
- Router manipulation
- Agent impersonation
- Workflow hijacking
- Delegation abuse
**RAG & Knowledge Attacks**
- Knowledge base poisoning
- Embedding pipeline attacks
- Source confusion
- Retrieval manipulation
**Advanced Attacks**
- HITL bypass techniques
- Semantic manipulation
- Evaluation poisoning
- Training data attacks
</td></tr>
</table>
### Supported AI Files
```
.cursorrules # Cursor AI instructions
CLAUDE.md # Claude Code context
.claude/ # Claude configuration directory
copilot-instructions.md # GitHub Copilot
AGENTS.md # Multi-agent definitions
mcp.json / mcp-config.json # MCP server configs
*.mcp.ts / *.mcp.py # MCP server code
rag.json / knowledge.json # RAG configurations
memory.json # Agent memory configs
```
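To illustrate how such files are checked, here is a toy line-based matcher modeled on the AIC rules above. The rule IDs and regexes are illustrative only, not the patterns Supreme 2 MAX actually ships:

```python
import re

# Illustrative rules only; the shipped rule set is far larger
INJECTION_PATTERNS = {
    "AIC001": re.compile(r"ignore\s+(?:all\s+)?previous\s+instructions", re.I),
    "AIC011": re.compile(r"override\s+default\s+tools", re.I),
}

def scan_ai_file(text: str) -> list:
    """Return (rule_id, line_number) pairs for matching lines."""
    findings = []
    for lineno, line in enumerate(text.splitlines(), start=1):
        for rule_id, pattern in INJECTION_PATTERNS.items():
            if pattern.search(line):
                findings.append((rule_id, lineno))
    return findings
```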
### Quick AI Security Scan
```bash
# Scan AI configuration files
sm scan . --ai-only
# Example output:
# 🔍 AI Security Scan Results
# ├── .cursorrules: 3 issues (1 CRITICAL, 2 HIGH)
# │ └── AIC001: Prompt injection - ignore previous instructions (line 15)
# │ └── AIC011: Tool shadowing - override default tools (line 23)
# ├── mcp-config.json: 2 issues (2 HIGH)
# │ └── MCP003: Dangerous path - home directory access (line 8)
# └── rag_config.json: 1 issue (1 CRITICAL)
# └── AIR010: Knowledge base injection pattern detected (line 45)
```
---
## 🎮 Usage
### Basic Commands
```bash
# Initialize configuration
sm init
# Scan current directory
sm scan .
# Scan specific directory
sm scan /path/to/project
# Quick scan (changed files only)
sm scan . --quick
# Force full scan (ignore cache)
sm scan . --force
# Use specific number of workers
sm scan . --workers 4
# Fail on HIGH severity or above
sm scan . --fail-on high
# Custom output directory
sm scan . -o /tmp/reports
```
### Install Commands
```bash
# Check which tools are installed
sm install --check
# Install all missing tools (interactive)
sm install --all
# Install specific tool
sm install bandit
# Auto-yes to all prompts (non-interactive)
sm install --all --yes
# Auto-yes to first prompt, then auto-yes all remaining
# When prompted: type 'a' for auto-yes-all
sm install --all
Install all 39 missing tools? [Y/n/a]: a
# Show detailed installation output
sm install --all --debug
# Use latest versions (bypass version pinning)
sm install --all --use-latest
```
### Init Commands
```bash
# Interactive initialization wizard
sm init
# Initialize with specific IDE
sm init --ide claude-code
# Initialize with multiple IDEs
sm init --ide claude-code --ide gemini-cli --ide cursor
# Initialize with all supported IDEs
sm init --ide all
# Force overwrite existing config
sm init --force
# Initialize and install tools
sm init --install
```
### Additional Commands
```bash
# Uninstall specific tool
sm uninstall bandit
# Uninstall all Supreme 2 MAX tools
sm uninstall --all --yes
# Check for updates
sm version --check-updates
# Show current configuration
sm config
# Override scanner for specific file
sm override path/to/file.yaml YAMLScanner
# List available scanners
sm override --list
# Show current overrides
sm override --show
# Remove override
sm override path/to/file.yaml --remove
```
### Scan Options Reference
| Option | Description |
|--------|-------------|
| `TARGET` | Directory or file to scan (default: `.`) |
| `-w, --workers N` | Number of parallel workers (default: auto-detect) |
| `--quick` | Quick scan (changed files only, requires git) |
| `--force` | Force full scan (ignore cache) |
| `--no-cache` | Disable result caching |
| `--fail-on LEVEL` | Exit with error on severity: `critical`, `high`, `medium`, `low` |
| `-o, --output PATH` | Custom output directory for reports |
| `--format FORMAT` | Output format: `json`, `html`, `sarif`, `junit`, `text` (can specify multiple) |
| `--no-report` | Skip generating HTML report |
| `--install-mode MODE` | Tool installation: `batch`, `progressive`, `never` |
| `--auto-install` | Automatically install missing tools without prompting |
| `--no-install` | Never attempt to install missing tools |
### Install Options Reference
| Option | Description |
|--------|-------------|
| `TOOL` | Specific tool to install (e.g., `bandit`, `eslint`) |
| `--check` | Check which tools are installed |
| `--all` | Install all missing tools |
| `-y, --yes` | Skip all confirmation prompts (auto-yes) |
| `--debug` | Show detailed debug output |
| `--use-latest` | Install latest versions instead of pinned versions |
**Interactive Prompts:**
- `[Y/n/a]` - Type `Y` for yes, `n` for no, `a` for auto-yes-all remaining prompts
### Windows Auto-Installation
**✅ Fully Supported!** Supreme 2 MAX automatically installs tools on Windows using winget/Chocolatey.
```powershell
# One-command installation (auto-installs everything)
sm install --all
# When prompted, type 'a' for auto-yes-all:
Install all 39 missing tools? [Y/n/a]: a
Auto-yes enabled for all remaining prompts
# Supreme 2 MAX will automatically:
# - Install Chocolatey (if needed)
# - Install Node.js (if needed)
# - Install Ruby (if needed)
# - Install PHP (if needed)
# - Install all 36+ scanner tools
# - No terminal restart required!
```
**What Gets Installed:**
- **86%** of tools install automatically (36/42 scanners)
- Winget (priority), Chocolatey, npm, pip, gem installers
- PowerShell scripts for specialized tools (phpstan, ktlint, checkstyle, taplo, clj-kondo)
- Runtime dependencies (Node.js, Ruby, PHP) auto-installed
**Manual Installation (Optional):**
Only 3 tools require manual installation:
- `swiftlint` - macOS only
- `checkmake` - Requires Go: `go install github.com/mrtazz/checkmake/cmd/checkmake@latest`
- `cppcheck` - Download from https://cppcheck.sourceforge.io/
---
## ⚙️ Configuration
### `.supreme-max.yml`
Supreme 2 MAX uses a YAML configuration file for project-specific settings:
```yaml
# Supreme 2 MAX Configuration File
version: 2025.9.0
# Scanner control
scanners:
enabled: [] # Empty = all scanners enabled
disabled: [] # List scanners to disable
# Example: disabled: ['bandit', 'eslint']
# Build failure settings
fail_on: high # critical | high | medium | low
# Exclusion patterns
exclude:
paths:
- node_modules/
- venv/
- .venv/
- env/
- .git/
- .svn/
- __pycache__/
- "*.egg-info/"
- dist/
- build/
- .tox/
- .pytest_cache/
- .mypy_cache/
files:
- "*.min.js"
- "*.min.css"
- "*.bundle.js"
- "*.map"
# IDE integration
ide:
claude_code:
enabled: true
auto_scan: true # Scan on file save
inline_annotations: true # Show issues inline
cursor:
enabled: false
vscode:
enabled: false
gemini_cli:
enabled: false
# Scan settings
workers: null # null = auto-detect (cpu_count - 2)
cache_enabled: true # Enable file caching for speed
```
### Generate Default Config
```bash
sm init
```
This creates `.supreme-max.yml` with sensible defaults and auto-detects your IDE.
---
## 🤖 IDE Integration
Supreme 2 MAX supports **5 major AI coding assistants** with native integrations. Initialize with `sm init --ide all` or select specific platforms.
### Supported Platforms
| IDE | Context File | Commands | Status |
|-----|-------------|----------|--------|
| **Claude Code** | `CLAUDE.md` | `/sm-scan`, `/sm-install` | ✅ Full Support |
| **Gemini CLI** | `GEMINI.md` | `/scan`, `/install` | ✅ Full Support |
| **OpenAI Codex** | `AGENTS.md` | Native slash commands | ✅ Full Support |
| **GitHub Copilot** | `.github/copilot-instructions.md` | Code suggestions | ✅ Full Support |
| **Cursor** | Reuses `CLAUDE.md` | MCP + Claude commands | ✅ Full Support |
### Quick Setup
```bash
# Setup for all IDEs (recommended)
sm init --ide all
# Or select specific platforms
sm init --ide claude-code --ide gemini-cli
```
### Claude Code
**What it creates:**
- `CLAUDE.md` - Project context file
- `.claude/agents/supreme-max/agent.json` - Agent configuration
- `.claude/commands/sm-scan.md` - Scan slash command
- `.claude/commands/sm-install.md` - Install slash command
**Usage:**
```
Type: /sm-scan
Claude: *runs security scan*
Results: Displayed in terminal + chat
```
### Gemini CLI
**What it creates:**
- `GEMINI.md` - Project context file
- `.gemini/commands/scan.toml` - Scan command config
- `.gemini/commands/install.toml` - Install command config
**Usage:**
```bash
gemini /scan # Full scan
gemini /scan --quick # Quick scan
gemini /install --check # Check tools
```
### OpenAI Codex
**What it creates:**
- `AGENTS.md` - Project context (root level)
**Usage:**
```
Ask: "Run a security scan"
Codex: *executes sm scan .*
```
### GitHub Copilot
**What it creates:**
- `.github/copilot-instructions.md` - Security standards and best practices
**How it helps:**
- Knows project security standards
- Suggests secure code patterns
- Recommends running scans after changes
- Helps fix security issues
### Cursor
**What it creates:**
- `.cursor/mcp-config.json` - MCP server configuration
- Reuses `.claude/` structure (Cursor is a VS Code fork)
**Usage:**
- Works like Claude Code integration
- MCP-native for future deeper integration
---
## 🎯 False Positive Filter (NEW)
Supreme 2 MAX includes an **intelligent false positive filter** that automatically reduces scan noise by identifying findings that are likely safe.
### How It Works
```bash
# Run scan - FP filter is automatic
sm scan .
# Example output showing FP analysis:
🔍 Issues found: 34
- Likely FPs filtered: 12 (35%)
- Remaining issues: 22
```
### What Gets Filtered
| Pattern Type | Description | Confidence |
|--------------|-------------|------------|
| **Security Wrappers** | Credentials passed to SecureString, Fernet, AESGCM | 95% |
| **Docstrings/Comments** | Keywords in documentation, not code | 95% |
| **Test Files** | Findings in test/, spec/, mock/ directories | 70-90% |
| **Template Files** | .env.example, .env.template with placeholders | 90% |
| **Cache Key Hashes** | MD5/SHA1 used for caching, not crypto | 90% |
| **Security Modules** | Files implementing credential protection | 85% |
### FP Analysis in Reports
Each finding includes FP analysis metadata:
```json
{
"issue": "Hardcoded credential detected",
"severity": "HIGH",
"fp_analysis": {
"is_likely_fp": true,
"confidence": 0.95,
"reason": "security_wrapper",
"explanation": "Credential is wrapped in security class 'SecureString' for protection"
},
"adjusted_severity": "LOW"
}
```
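Downstream tooling can use this metadata to decide which severity to act on. A sketch using the field names from the excerpt above; the 0.9 confidence threshold is an illustrative choice, not a built-in default:

```python
def effective_severity(finding: dict, fp_threshold: float = 0.9) -> str:
    """Pick the adjusted severity when a finding is a high-confidence FP."""
    fp = finding.get("fp_analysis", {})
    if fp.get("is_likely_fp") and fp.get("confidence", 0.0) >= fp_threshold:
        return finding.get("adjusted_severity", finding["severity"])
    return finding["severity"]
```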
### Supported Languages
FP patterns are currently tuned for:
- **Python** - Security wrappers, docstrings, subprocess patterns
- **TypeScript/JavaScript** - JSDoc, test placeholders, secure constructors
- **Go** - Cache key hashes, mock files, checksum functions
- **Docker** - Test Dockerfiles with :latest tag
- **Java** - Test files, example configs (expanding)
---
## 🔧 Advanced Features
### System Load Monitoring
Supreme 2 MAX automatically monitors system load and adjusts worker count:
```python
# Auto-detects optimal workers based on:
# - CPU usage
# - Memory usage
# - Load average
# - Available cores
# Warns when system is overloaded:
⚠️ High CPU usage: 85.3%
Using 2 workers (reduced due to system load)
```
### Sandbox/Codex Compatibility (NEW)
Supreme 2 MAX now works in restricted sandbox environments like OpenAI Codex:
```bash
# In sandbox environments, Supreme 2 MAX auto-detects and adjusts:
🏖️ Sandbox mode detected
Falling back to sequential scanning...
📊 Scanning 145 files (sequential mode)...
✅ Scan complete!
```
**What gets adjusted:**
- Multiprocessing → Sequential scanning when semaphores unavailable
- Worker pool → Single-threaded execution
- No manual configuration needed - fully automatic
**Works in:**
- OpenAI Codex sandbox
- CI/CD containers with restricted permissions
- Docker containers without SHM access
- Any environment where `multiprocessing.Pool()` fails
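The probe-and-fallback idea can be sketched as follows; this is a simplified illustration of the behavior described above, not the scanner's internal code:

```python
import multiprocessing

def multiprocessing_available() -> bool:
    """Probe for semaphore support, which multiprocessing requires;
    sandboxes such as the Codex environment often block it."""
    try:
        multiprocessing.Semaphore()
        return True
    except (OSError, PermissionError, ImportError):
        return False

def scan_all(files, scan_one, workers: int = 4):
    """Scan in parallel when possible, sequentially otherwise."""
    if workers > 1 and multiprocessing_available():
        with multiprocessing.Pool(workers) as pool:
            return pool.map(scan_one, files)
    # Sandbox mode: fall back to sequential scanning
    return [scan_one(f) for f in files]
```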
### Smart Caching
Hash-based caching skips unchanged files:
```bash
# First scan
📂 Files scanned: 145
⏱️ Total time: 47.28s
# Second scan (no changes)
📂 Files scanned: 0
⚡ Files cached: 145
⏱️ Total time: 2.15s # 22× faster!
```
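The idea behind the cache can be sketched with content hashing. A minimal version, assuming an in-memory dict rather than the scanner's actual on-disk cache format:

```python
import hashlib
from pathlib import Path

def file_digest(path: Path) -> str:
    return hashlib.sha256(path.read_bytes()).hexdigest()

def scan_with_cache(files, scan_one, cache: dict) -> dict:
    """Re-scan only files whose content hash changed.

    `cache` maps str(path) -> (digest, result); a real implementation
    would persist it to disk between runs.
    """
    results = {}
    for path in files:
        digest = file_digest(path)
        entry = cache.get(str(path))
        if entry is not None and entry[0] == digest:
            results[str(path)] = entry[1]     # cache hit: skip scan
        else:
            result = scan_one(path)           # cache miss: run the scan
            cache[str(path)] = (digest, result)
            results[str(path)] = result
    return results
```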
### Parallel Processing
Multi-core scanning for massive speedups:
```
Single-threaded: 417.5 seconds
6 workers: 47.3 seconds # 8.8× faster
24 workers: ~18 seconds # 23× faster
```
---
## 📊 Example Workflow
### New Project Setup
```bash
# 1. Initialize
cd my-awesome-project
sm init
Supreme 2 MAX Initialization Wizard
✅ Step 1: Project Analysis
Found 15 language types
Primary: PythonScanner (44 files)
✅ Step 2: Scanner Availability
Available: 6/42 scanners
Missing: 36 tools
✅ Step 3: Configuration
Created .supreme-max.yml
Auto-detected IDE: Claude Code
✅ Step 4: IDE Integration
Created .claude/agents/supreme-max/agent.json
Created .claude/commands/sm-scan.md
✅ Supreme 2 MAX Initialized Successfully!
# 2. Install tools
sm install --all
📦 Installing 36 missing tools...
✅ bandit installed (pip)
✅ eslint installed (npm)
✅ shellcheck installed (apt)
...
✅ All tools installed!
# 3. First scan
sm scan .
🔍 Issues found: 23
CRITICAL: 0
HIGH: 2
MEDIUM: 18
LOW: 3
# 4. Fix issues and rescan
sm scan . --quick
⚡ Files cached: 142
🔍 Issues found: 12 # Progress!
```
### CI/CD Integration
```yaml
# .github/workflows/security.yml
name: Security Scan
on: [push, pull_request]
jobs:
supreme-max:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
- name: Set up Python
uses: actions/setup-python@v4
with:
python-version: '3.11'
- name: Install Supreme 2 MAX
run: pip install supreme-max
- name: Install security tools
run: sm install --all --yes
- name: Run security scan
run: sm scan . --fail-on high
```
---
## 🏗️ Architecture
### Scanner Pattern
All scanners follow a consistent pattern:
```python
from pathlib import Path
from typing import List

# BaseScanner and ScannerResult are provided by the supreme-max core
class PythonScanner(BaseScanner):
    """Scanner for Python files using Bandit"""

    def get_tool_name(self) -> str:
        return "bandit"

    def get_file_extensions(self) -> List[str]:
        return [".py"]

    def scan_file(self, file_path: Path) -> ScannerResult:
        # Run bandit on the file, parse its JSON output,
        # map severity levels, and return structured issues
        return ScannerResult(...)
```
### Auto-Registration
Scanners automatically register themselves:
```python
# supreme-max/scanners/__init__.py
registry = ScannerRegistry()
registry.register(PythonScanner())
registry.register(JavaScriptScanner())
# ... all 42 scanners
```
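A minimal registry that dispatches by file extension might look like this; a sketch assuming the `get_file_extensions()` method from the scanner pattern above, with a toy scanner standing in for the real classes:

```python
from pathlib import Path

class ScannerRegistry:
    """Map file extensions to the scanners that handle them."""

    def __init__(self):
        self._by_extension = {}

    def register(self, scanner):
        for ext in scanner.get_file_extensions():
            self._by_extension.setdefault(ext, []).append(scanner)

    def scanners_for(self, filename: str) -> list:
        return self._by_extension.get(Path(filename).suffix, [])

# Toy scanner to demonstrate registration
class ToyPythonScanner:
    def get_file_extensions(self):
        return [".py"]
```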
### Severity Mapping
Unified severity levels across all tools:
- **CRITICAL** - Security vulnerabilities, fatal errors
- **HIGH** - Errors, security warnings
- **MEDIUM** - Warnings, code quality issues
- **LOW** - Style issues, conventions
- **INFO** - Suggestions, refactoring opportunities
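Unified levels also make thresholding straightforward. A sketch of how `--fail-on` semantics could be implemented over these levels; the per-tool mapping shown is illustrative, since each scanner keeps its own table:

```python
from enum import IntEnum

class Severity(IntEnum):
    INFO = 0
    LOW = 1
    MEDIUM = 2
    HIGH = 3
    CRITICAL = 4

# Illustrative per-tool mapping; each scanner maintains its own
BANDIT_TO_UNIFIED = {
    "LOW": Severity.LOW,
    "MEDIUM": Severity.MEDIUM,
    "HIGH": Severity.HIGH,
}

def should_fail(severities, fail_on: str) -> bool:
    """True when any finding meets or exceeds the --fail-on threshold."""
    threshold = Severity[fail_on.upper()]
    return any(s >= threshold for s in severities)
```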
---
## 🧪 Testing & Quality
### Dogfooding Results
Supreme 2 MAX scans itself daily:
```
✅ Files scanned: 85
✅ CRITICAL issues: 0
✅ HIGH issues: 0
✅ MEDIUM issues: 113
✅ LOW issues: 1
Status: Production Ready ✅
```
### Performance Benchmarks
| Project Size | Files | Time (6 workers) | Speed |
|--------------|-------|------------------|-------|
| Small | 50 | ~15s | 3.3 files/s |
| Medium | 145 | ~47s | 3.1 files/s |
| Large | 500+ | ~3min | 2.8 files/s |
---
## 🗺️ Roadmap
### ✅ Completed (v2025.8)
- **73 Specialized Scanners** - Comprehensive language and platform coverage
- **AI Agent Security** - 20+ scanners, 180+ rules, OWASP LLM 2025 compliant
- **CVE Detection** - React2Shell (CVE-2025-55182), Next.js vulnerabilities
- **Cross-Platform** - Native Windows, macOS, Linux with auto-installation
- **IDE Integration** - Claude Code, Cursor, Gemini CLI, GitHub Copilot
- **Multi-Format Reports** - JSON, HTML, Markdown, SARIF, JUnit
- **Parallel Processing** - 10-40× faster with smart caching
### 🚧 In Progress (v2025.9)
- **Supply Chain Protection** - `sm protect` for install-time scanning
- **Malicious Package Database** - Known bad packages blocked before install
- **Preinstall Script Analysis** - Detect env harvesting, backdoors
### 🔮 Upcoming
- **Web Dashboard** - Cloud-hosted security insights
- **GitHub App** - Automatic PR scanning
- **VS Code Extension** - Native IDE integration
- **Enterprise Features** - SSO, audit logs, team management
---
## 🤝 Contributing
We welcome contributions! Here's how to get started:
```bash
# 1. Fork and clone
git clone https://github.com/yourusername/Supreme-2-MAX.git
cd Supreme-2-MAX
# 2. Create virtual environment
python -m venv .venv
source .venv/bin/activate # or `.venv\Scripts\activate` on Windows
# 3. Install in editable mode
pip install -e ".[dev]"
# 4. Run tests
pytest
# 5. Create feature branch
git checkout -b feature/my-awesome-feature
# 6. Make changes and test
sm scan . # Dogfood your changes!
# 7. Submit PR
git push origin feature/my-awesome-feature
```
### Adding New Scanners
See `docs/development/adding-scanners.md` for a guide on adding new language support.
---
## 📜 License
AGPL-3.0-or-later - See [LICENSE](LICENSE) file
Supreme 2 MAX is free and open source software. You can use, modify, and distribute it freely, but any modifications or derivative works (including SaaS deployments) must also be released under AGPL-3.0.
For commercial licensing options, contact: support@silenceai.net
---
## 🙏 Credits
**Development:**
- Silence AI
- Claude AI (Anthropic) - AI-assisted development
**Built With:**
- Python 3.10+
- Click - CLI framework
- Rich - Terminal formatting
- Bandit, ESLint, ShellCheck, and 39+ other open-source security tools
**Inspired By:**
- Bandit (Python security)
- SonarQube (multi-language analysis)
- Semgrep (pattern-based security)
- Mega-Linter (comprehensive linting)
---
## 📖 Guides
- **[Quick Start](docs/guides/quick-start.md)** - Get running in 5 minutes
- **[AI Security Scanning](docs/AI_SECURITY.md)** - Complete guide to AI/LLM security (OWASP 2025, MCP, RAG)
- **[False Positive Filter](docs/guides/handling-false-positives.md)** - Intelligent FP detection and noise reduction
- **[IDE Integration](docs/guides/ide-integration.md)** - Setup Claude Code, Gemini, Copilot, Codex
- **[Sandbox/CI Mode](docs/guides/sandbox-mode.md)** - Using Supreme 2 MAX in restricted environments
---
## 📞 Support
- **GitHub Issues**: [Report bugs or request features](https://github.com/Zeinullahh/Supreme-2-MAX/issues)
- **Email**: support@silenceai.net
- **Documentation**: https://docs.silenceai.net
- **Discord**: https://discord.gg/supreme-max (coming soon)
---
## 🌟 Why Supreme 2 MAX?
### vs. Bandit
- ✅ Supports 74 scanners (not just Python)
- ✅ Parallel processing (10-40× faster)
- ✅ **Intelligent FP filter** reduces noise
- ✅ Auto-installer for all tools
- ✅ IDE integration
### vs. SonarQube
- ✅ Simpler setup (one command)
- ✅ No server required
- ✅ **Works in sandboxed environments**
- ✅ Faster scans (local processing)
- ✅ Free and open source
### vs. Semgrep
- ✅ More language support (74 vs ~30 scanners)
- ✅ **Built-in FP analysis** per finding
- ✅ Uses established tools (Bandit, ESLint, etc.)
- ✅ Better IDE integration
- ✅ Easier configuration
### vs. Mega-Linter
- ✅ Faster (parallel + sequential fallback)
- ✅ **Context-aware FP filtering**
- ✅ Smarter caching
- ✅ Better error handling
- ✅ AI/LLM security focus
---
**Supreme 2 MAX - Multi-Language Security Scanner**
**One Command. Complete Security.**
```bash
sm init && sm scan .
```
| text/markdown | null | Silence AI <support@silenceai.net> | null | Silence AI <support@silenceai.net> | AGPL-3.0-or-later | security, scanner, sast, ai-security, llm-security, mcp, agent-security, prompt-injection, false-positive-reduction, rag-security, cybersecurity, devsecops | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"Topic :: Security",
"Topic :: Software Development :: Quality Assurance",
"License :: OSI Approved :: GNU Affero General Public License v3 or later (AGPLv3+)",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Operating System :: OS Independent",
"Environment :: Console"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"click>=8.1.0",
"rich>=13.0.0",
"bandit>=1.9.0",
"yamllint>=1.28.0",
"tqdm>=4.60.0",
"requests>=2.28.0",
"urllib3>=2.6.0",
"pyyaml>=6.0.0",
"psutil>=5.9.0",
"defusedxml>=0.7.0",
"tomli-w>=1.0.0",
"toml>=0.10.2",
"Blinter>=1.0.0",
"pytest>=7.0.0; extra == \"dev\"",
"pytest-cov>=4.0.0; extra == \"dev\"",
"black>=23.0.0; extra == \"dev\"",
"ruff>=0.1.0; extra == \"dev\"",
"mypy>=1.0.0; extra == \"dev\"",
"mkdocs>=1.5.0; extra == \"docs\"",
"mkdocs-material>=9.0.0; extra == \"docs\""
] | [] | [] | [] | [
"Homepage, https://silenceai.net",
"Documentation, https://github.com/Zeinullahh/Supreme-2-MAX",
"Repository, https://github.com/Zeinullahh/Supreme-2-MAX",
"Bug Tracker, https://github.com/Zeinullahh/Supreme-2-MAX/issues"
] | twine/6.2.0 CPython/3.14.2 | 2026-02-21T10:21:40.116596 | supreme_max-1.1.1.tar.gz | 335,822 | 3a/57/9878d1d23ff9adac01a02b9d19ea8fbbc19f475d8753d6c9fb75d8173c04/supreme_max-1.1.1.tar.gz | source | sdist | null | false | 57be4310675e6e641f1c8452ec3cc2a1 | a79963be4a3d3967fb1f0872ef31db5746429662562beba85ed9bc340e757ee9 | 3a579878d1d23ff9adac01a02b9d19ea8fbbc19f475d8753d6c9fb75d8173c04 | null | [
"LICENSE"
] | 254 |
2.4 | backends | 1.9.0 | A generic interface for linear algebra backends | # [LAB](http://github.com/wesselb/lab)
[](https://github.com/wesselb/lab/actions/workflows/ci.yml)
[](https://coveralls.io/github/wesselb/lab?branch=master)
[](https://wesselb.github.io/lab)
[](https://github.com/psf/black)
A generic interface for linear algebra backends: code it once, run it on any
backend
* [Requirements and Installation](#requirements-and-installation)
* [Basic Usage](#basic-usage)
* [List of Types](#list-of-types)
- [General](#general)
- [NumPy](#numpy)
- [AutoGrad](#autograd)
- [TensorFlow](#tensorflow)
- [PyTorch](#pytorch)
- [JAX](#jax)
* [List of Methods](#list-of-methods)
- [Constants](#constants)
- [Generic](#generic)
- [Linear Algebra](#linear-algebra)
- [Random](#random)
- [Shaping](#shaping)
* [Devices](#devices)
* [Lazy Shapes](#lazy-shapes)
* [Random Numbers](#random-numbers)
* [Control Flow Cache](#control-flow-cache)
## Requirements and Installation
```
pip install backends
```
## Basic Usage
The basic use case for the package is to write code that automatically
determines the backend to use depending on the types of its arguments.
Example:
```python
import lab as B
import lab.autograd # Load the AutoGrad extension.
import lab.torch # Load the PyTorch extension.
import lab.tensorflow # Load the TensorFlow extension.
import lab.jax # Load the JAX extension.
def objective(matrix):
outer_product = B.matmul(matrix, matrix, tr_b=True)
return B.mean(outer_product)
```
The AutoGrad, PyTorch, TensorFlow, and JAX extensions are not loaded
automatically, so the package does not force a dependency on all four frameworks.
An extension can alternatively be loaded via `import lab.autograd as B`.
Run it with NumPy and AutoGrad:
```python
>>> import autograd.numpy as np
>>> objective(B.randn(np.float64, 2, 2))
0.15772589216756833
>>> grad(objective)(B.randn(np.float64, 2, 2))
array([[ 0.23519042, -1.06282928],
[ 0.23519042, -1.06282928]])
```
Run it with TensorFlow:
```python
>>> import tensorflow as tf
>>> objective(B.randn(tf.float64, 2, 2))
<tf.Tensor 'Mean:0' shape=() dtype=float64>
```
Run it with PyTorch:
```python
>>> import torch
>>> objective(B.randn(torch.float64, 2, 2))
tensor(1.9557, dtype=torch.float64)
```
Run it with JAX:
```python
>>> import jax
>>> import jax.numpy as jnp
>>> jax.jit(objective)(B.randn(jnp.float32, 2, 2))
DeviceArray(0.3109299, dtype=float32)
>>> jax.jit(jax.grad(objective))(B.randn(jnp.float32, 2, 2))
DeviceArray([[ 0.2525182, -1.26065 ],
[ 0.2525182, -1.26065 ]], dtype=float32)
```
## List of Types
This section lists all available types, which can be used to check types of
objects or extend functions.
### General
```
Int # Integers
Float # Floating-point numbers
Complex # Complex numbers
Bool # Booleans
Number # Numbers
Numeric # Numerical objects, including booleans
DType # Data type
Framework # Anything accepted by supported frameworks
Device # Any device type
```
### NumPy
```
NPNumeric
NPDType
NPRandomState
NP # Anything NumPy
```
### AutoGrad
```
AGNumeric
AGDType
AGRandomState
AG # Anything AutoGrad
```
### TensorFlow
```
TFNumeric
TFDType
TFRandomState
TFDevice
TF # Anything TensorFlow
```
### PyTorch
```
TorchNumeric
TorchDType
TorchDevice
TorchRandomState
Torch # Anything PyTorch
```
### JAX
```
JAXNumeric
JAXDType
JAXDevice
JAXRandomState
JAX # Anything JAX
```
## List of Methods
This section lists all available constants and methods.
* Positional arguments *must* be given positionally, and keyword arguments
  *must* be given as keyword arguments.
  For example, `sum(tensor, axis=1)` is valid, but `sum(tensor, 1)` is not.
* The names of arguments are indicative of their function:
    - `a`, `b`, and `c` indicate general tensors.
    - `dtype` indicates a data type. E.g., `np.float32` or `tf.float64`;
      `rand(np.float32)` creates a NumPy random number, whereas
      `rand(tf.float64)` creates a TensorFlow random number.
      Data types are always given as the first argument.
    - `shape` indicates a shape.
      The dimensions of a shape are always given as separate arguments to
      the function.
      E.g., `reshape(tensor, 2, 2)` is valid, but `reshape(tensor, (2, 2))`
      is not.
    - `axis` indicates an axis over which the function may perform its
      action. An axis is always given as a keyword argument.
    - `device` refers to a device on which a tensor can be placed, which
      can either be a framework-specific type or a string, e.g. `"cpu"`.
    - `ref` indicates a *reference tensor* from which properties, like its
      shape and data type, will be used. E.g., `zeros(tensor)` creates a
      tensor full of zeros of the same shape and data type as `tensor`.
See the documentation for more detailed descriptions of each function.
### Special Variables
```
default_dtype # Default data type.
epsilon # Magnitude of diagonal to regularise matrices with.
cholesky_retry_factor # Retry the Cholesky, increasing `epsilon` by a factor at most this.
```
### Constants
```
nan
pi
log_2_pi
```
### Data Types
```
dtype(a)
dtype_float(dtype)
dtype_float(a)
dtype_int(dtype)
dtype_int(a)
promote_dtypes(*dtype)
issubdtype(dtype1, dtype2)
```
### Generic
```
isabstract(a)
jit(f, **kw_args)
isnan(a)
real(a)
imag(a)
device(a)
on_device(device)
on_device(a)
set_global_device(device)
to_active_device(a)
zeros(dtype, *shape)
zeros(*shape)
zeros(ref)
ones(dtype, *shape)
ones(*shape)
ones(ref)
zero(dtype)
zero(*refs)
one(dtype)
one(*refs)
eye(dtype, *shape)
eye(*shape)
eye(ref)
linspace(dtype, a, b, num)
linspace(a, b, num)
range(dtype, start, stop, step)
range(dtype, stop)
range(dtype, start, stop)
range(start, stop, step)
range(start, stop)
range(stop)
cast(dtype, a)
identity(a)
round(a)
floor(a)
ceil(a)
negative(a)
abs(a)
sign(a)
sqrt(a)
exp(a)
log(a)
log1p(a)
sin(a)
arcsin(a)
cos(a)
arccos(a)
tan(a)
arctan(a)
tanh(a)
arctanh(a)
loggamma(a)
logbeta(a)
erf(a)
sigmoid(a)
softplus(a)
relu(a)
add(a, b)
subtract(a, b)
multiply(a, b)
divide(a, b)
power(a, b)
minimum(a, b)
maximum(a, b)
leaky_relu(a, alpha)
softmax(a, axis=None)
min(a, axis=None, squeeze=True)
max(a, axis=None, squeeze=True)
sum(a, axis=None, squeeze=True)
prod(a, axis=None, squeeze=True)
mean(a, axis=None, squeeze=True)
std(a, axis=None, squeeze=True)
logsumexp(a, axis=None, squeeze=True)
all(a, axis=None, squeeze=True)
any(a, axis=None, squeeze=True)
nansum(a, axis=None, squeeze=True)
nanprod(a, axis=None, squeeze=True)
nanmean(a, axis=None, squeeze=True)
nanstd(a, axis=None, squeeze=True)
argmin(a, axis=None)
argmax(a, axis=None)
lt(a, b)
le(a, b)
gt(a, b)
ge(a, b)
eq(a, b)
ne(a, b)
bvn_cdf(a, b, c)
cond(condition, f_true, f_false, *xs)
where(condition, a, b)
scan(f, xs, *init_state)
sort(a, axis=-1, descending=False)
argsort(a, axis=-1, descending=False)
quantile(a, q, axis=None)
to_numpy(a)
jit_to_numpy(a) # Caches results for `B.jit`.
```
### Linear Algebra
```
transpose(a, perm=None) (alias: t, T)
matmul(a, b, tr_a=False, tr_b=False) (alias: mm, dot)
einsum(equation, *elements)
trace(a, axis1=0, axis2=1)
kron(a, b)
svd(a, compute_uv=True)
eig(a, compute_eigvecs=True)
solve(a, b)
inv(a)
pinv(a)
det(a)
logdet(a)
expm(a)
logm(a)
cholesky(a) (alias: chol)
cholesky_solve(a, b) (alias: cholsolve)
triangular_solve(a, b, lower_a=True) (alias: trisolve)
toeplitz_solve(a, b, c) (alias: toepsolve)
toeplitz_solve(a, c)
outer(a, b)
reg(a, diag=None, clip=True)
pw_dists2(a, b)
pw_dists2(a)
pw_dists(a, b)
pw_dists(a)
ew_dists2(a, b)
ew_dists2(a)
ew_dists(a, b)
ew_dists(a)
pw_sums2(a, b)
pw_sums2(a)
pw_sums(a, b)
pw_sums(a)
ew_sums2(a, b)
ew_sums2(a)
ew_sums(a, b)
ew_sums(a)
```
### Random
```
set_random_seed(seed)
create_random_state(dtype, seed=0)
global_random_state(dtype)
global_random_state(a)
set_global_random_state(state)
rand(state, dtype, *shape)
rand(dtype, *shape)
rand(*shape)
rand(state, ref)
rand(ref)
randn(state, dtype, *shape)
randn(dtype, *shape)
randn(*shape)
randn(state, ref)
randn(ref)
randcat(state, p, *shape)
randcat(p, *shape)
choice(state, a, *shape, p=None)
choice(a, *shape, p=None)
randint(state, dtype, *shape, lower=0, upper)
randint(dtype, *shape, lower=0, upper)
randint(*shape, lower=0, upper)
randint(state, ref, lower=0, upper)
randint(ref, lower=0, upper)
randperm(state, dtype, n)
randperm(dtype, n)
randperm(n)
randgamma(state, dtype, *shape, alpha, scale)
randgamma(dtype, *shape, alpha, scale)
randgamma(*shape, alpha, scale)
randgamma(state, ref, *, alpha, scale)
randgamma(ref, *, alpha, scale)
randbeta(state, dtype, *shape, alpha, beta)
randbeta(dtype, *shape, alpha, beta)
randbeta(*shape, alpha, beta)
randbeta(state, ref, *, alpha, beta)
randbeta(ref, *, alpha, beta)
```
### Shaping
```
shape(a, *dims)
rank(a)
length(a) (alias: size)
is_scalar(a)
expand_dims(a, axis=0, times=1)
squeeze(a, axis=None)
uprank(a, rank=2)
downrank(a, rank=2, preserve=False)
broadcast_to(a, *shape)
diag(a)
diag_extract(a)
diag_construct(a)
flatten(a)
vec_to_tril(a, offset=0)
tril_to_vec(a, offset=0)
stack(*elements, axis=0)
unstack(a, axis=0, squeeze=True)
reshape(a, *shape)
concat(*elements, axis=0)
concat2d(*rows)
tile(a, *repeats)
take(a, indices_or_mask, axis=0)
submatrix(a, indices_or_mask)
```
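`tril_to_vec` packs the lower-triangular part of a matrix into a vector, and `vec_to_tril` inverts it. A plain-Python sketch of the packing, assuming row-major traversal of the entries with column index `j <= i + offset` (the traversal order is an assumption here, not taken from the source):

```python
def tril_to_vec(a, offset=0):
    # Collect lower-triangular entries (j <= i + offset) in row-major order.
    return [a[i][j]
            for i in range(len(a))
            for j in range(len(a[i]))
            if j <= i + offset]

m = [[1, 2, 3],
     [4, 5, 6],
     [7, 8, 9]]
print(tril_to_vec(m))  # [1, 4, 5, 7, 8, 9]
```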
## Devices
You can get the device of a tensor with `B.device(a)`,
and you can execute a computation on a device by entering `B.on_device(device)`
as a context:
```python
with B.on_device("gpu:0"):
a = B.randn(tf.float32, 2, 2)
b = B.randn(tf.float32, 2, 2)
c = a @ b
```
Within such a context, a tensor that is not on the active device can be moved to the
active device with `B.to_active_device(a)`.
You can also globally set the active device with `B.set_global_device("gpu:0")`.
## Lazy Shapes
If a function is evaluated abstractly, then elements of the shape of a tensor, e.g.
`B.shape(a)[0]`, may also be tensors, which can break dispatch.
By entering `B.lazy_shapes()`, shapes and elements of shapes will be wrapped in a custom
type to fix this issue.
```python
with B.lazy_shapes():
a = B.eye(2)
print(type(B.shape(a)))
# <class 'lab.shape.Shape'>
print(type(B.shape(a)[0]))
# <class 'lab.shape.Dimension'>
```
## Random Numbers
If you call a random number generator without providing a random state, e.g.
`B.randn(np.float32, 2)`, the global random state from the corresponding
backend is used.
For JAX, since there is no global random state, LAB provides a JAX global
random state accessible through `B.jax_global_random_state` once `lab.jax`
is loaded.
If you do not want to use a global random state but rather explicitly maintain
one, you can create a random state with `B.create_random_state` and then
pass this as the first argument to the random number generators.
The random number generators will then return a tuple containing the updated
random state and the random result.
```python
# Create random state.
state = B.create_random_state(tf.float32, seed=0)
# Generate two random arrays.
state, x = B.randn(state, tf.float32, 2)
state, y = B.randn(state, tf.float32, 2)
```
## Control Flow Cache
Coming soon!
| text/markdown | Wessel Bruinsma | wessel.p.bruinsma@gmail.com | null | null | MIT | null | [] | [] | https://github.com/wesselb/lab | null | >=3.10 | [] | [] | [] | [
"numpy>=1.16",
"scipy>=1.3",
"plum-dispatch>=2.7.1",
"opt-einsum"
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.14.2 | 2026-02-21T10:21:24.819149 | backends-1.9.0.tar.gz | 91,710 | 24/3a/42c6901afd39efd8997b70d623913675f57132f0c4f36ab02f3966d326c4/backends-1.9.0.tar.gz | source | sdist | null | false | 95f6b59695179f02db8a41bb7233ba79 | 94cfaad023f54f98d8999a1a561ca65451cf9d4af0d148f77e6cac1112f4c5e3 | 243a42c6901afd39efd8997b70d623913675f57132f0c4f36ab02f3966d326c4 | null | [
"LICENCE.txt"
] | 226 |
2.4 | silicon | 1.1.8 | A Python CLI hello world template | # silicon
[](https://github.com/alkalescent/silicon/actions/workflows/release.yml)
[](https://pypi.org/project/silicon/)
[](https://www.python.org/downloads/)
[](https://opensource.org/licenses/MIT)
A Python CLI hello world template with best practices for packaging, testing, and CI/CD.
## ✨ Features
- **Modern CLI**: Built with [Typer](https://typer.tiangolo.com/) for a clean command-line interface
- **Well-Structured**: Source layout with `src/` directory and proper packaging
- **Comprehensive Testing**: pytest with coverage requirements and parallel execution
- **CI/CD Ready**: GitHub Actions workflows for testing, versioning, and releases
- **Multi-Platform**: PyPI, Homebrew, and pre-built binary distribution
## 💖 Support
Love this tool? Your support means the world! ❤️
<table align="center">
<tr>
<th>Currency</th>
<th>Address</th>
<th>QR</th>
</tr>
<tr>
<td><strong>₿ BTC</strong></td>
<td><code>bc1qwn7ea6s8wqx66hl5rr2supk4kv7qtcxnlqcqfk</code></td>
<td><img src="assets/qr_btc.png" width="80" /></td>
</tr>
<tr>
<td><strong>Ξ ETH</strong></td>
<td><code>0x7cdB1861AC1B4385521a6e16dF198e7bc43fDE5f</code></td>
<td><img src="assets/qr_eth.png" width="80" /></td>
</tr>
<tr>
<td><strong>ɱ XMR</strong></td>
<td><code>463fMSWyDrk9DVQ8QCiAir8TQd4h3aRAiDGA8CKKjknGaip7cnHGmS7bQmxSiS2aYtE9tT31Zf7dSbK1wyVARNgA9pkzVxX</code></td>
<td><img src="assets/qr_xmr.png" width="80" /></td>
</tr>
<tr>
<td><strong>◈ BNB</strong></td>
<td><code>0x7cdB1861AC1B4385521a6e16dF198e7bc43fDE5f</code></td>
<td><img src="assets/qr_bnb.png" width="80" /></td>
</tr>
</table>
## 📦 Installation
### Homebrew (macOS/Linux)
```bash
brew tap alkalescent/tap
brew install silicon
```
### PyPI (Recommended)
```bash
uv pip install silicon
```
After installation, you can run the command directly or invoke it as a Python module:
```bash
# Direct command
silicon --help
# As Python module (if direct command not in PATH)
uv run python -m silicon --help
```
### From Source
Clone the repository and install in development mode:
```bash
git clone https://github.com/alkalescent/silicon.git
cd silicon
make install DEV=1 # Install with dev dependencies
```
### Pre-built Binaries
Download from [GitHub Releases](https://github.com/alkalescent/silicon/releases):
| Variant | Description | Startup | Format |
|---------|-------------|---------|--------|
| **Portable** | Single file, no installation needed | ~10 sec | `silicon-{os}-portable` |
| **Fast** | Optimized for speed | ~1 sec | `silicon-{os}-fast.tar.gz` |
> **Note**: In the filenames and commands, replace `{os}` with your operating system (e.g., `linux`, `macos`). The examples below use `linux`. For Windows, you may need to use a tool like 7-Zip to extract `.tar.gz` archives.
For **Portable**, download and run directly:
```bash
chmod +x silicon-linux-portable
./silicon-linux-portable --help
```
For **Fast**, extract the archive and run from within:
```bash
tar -xzf silicon-linux-fast.tar.gz
./cli.dist/silicon --help
```
### Build from Source
Build your own binaries using [Nuitka](https://nuitka.net/):
```bash
git clone https://github.com/alkalescent/silicon.git
cd silicon
# Build portable (single file, slower startup)
MODE=onefile make build
# Build fast (directory, faster startup)
MODE=standalone make build
```
## 🚀 Usage
The CLI provides simple `hello` and `goodbye` commands.
### Hello Command
Say hello to someone:
```bash
silicon hello # Hello, World!
silicon hello --name Developer # Hello, Developer!
silicon hello -n "Your Name" # Hello, Your Name!
```
### Goodbye Command
Say goodbye to someone:
```bash
silicon goodbye # Goodbye, World!
silicon goodbye --name Developer # Goodbye, Developer!
silicon goodbye -n "Your Name" # Goodbye, Your Name!
```
### Version
Check the installed version:
```bash
silicon version # v1.0.0
silicon --version # v1.0.0
silicon -v # v1.0.0
```
## 🧪 Testing
Run the test suite:
```bash
make test
```
Run with coverage reporting (requires 90% coverage):
```bash
make cov
```
Run smoke tests:
```bash
make smoke
```
## 🏗️ Architecture
The CLI consists of the following modules:
- **`tools.py`**: Core utility classes
- `Greeter` class: Simple greeting functionality
- **`cli.py`**: Command-line interface using Typer
- `hello`: Say hello to someone
- `goodbye`: Say goodbye to someone
- `version`: Display version
- **`test_tools.py`** / **`test_cli.py`**: Comprehensive test suites
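Based on the commands above, the `Greeter` class in `tools.py` might look roughly like this (a sketch; the method names are illustrative, not taken from the source):

```python
class Greeter:
    """Minimal greeting helper mirroring the CLI's hello/goodbye commands."""

    def hello(self, name: str = "World") -> str:
        return f"Hello, {name}!"

    def goodbye(self, name: str = "World") -> str:
        return f"Goodbye, {name}!"

print(Greeter().hello("Developer"))  # Hello, Developer!
```

The `cli.py` commands would then be thin Typer wrappers that call these methods and echo the result.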
## 📖 Using as a Template
1. Fork or clone this repository
2. Replace `silicon` with your project name in:
- `pyproject.toml` (name, scripts, URLs)
- `src/silicon/` directory name
- Imports in source files
- README.md
3. Add your own functionality in `tools.py` and `cli.py`
4. Update tests accordingly
## 📚 Dependencies
- `typer`: Modern CLI framework
## 📄 License
MIT License - see [LICENSE](LICENSE) for details. | text/markdown | Krish Suchak | null | null | null | null | cli, hello-world, python, template | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.13",
"Topic :: Software Development :: Libraries :: Python Modules"
] | [] | null | null | >=3.13 | [] | [] | [] | [
"typer>=0.20.1"
] | [] | [] | [] | [
"Homepage, https://github.com/alkalescent/silicon",
"Repository, https://github.com/alkalescent/silicon",
"Issues, https://github.com/alkalescent/silicon/issues"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-21T10:21:22.938830 | silicon-1.1.8.tar.gz | 44,724 | 1a/8c/f59496175c174f457eac7324c32783544a3b5f8f36b2823a9f845ad575ed/silicon-1.1.8.tar.gz | source | sdist | null | false | 81fe417d06cc7655c2256150297acbd5 | a56c963ba3696a2ffc7390f9ebfd21fc56ab08926d872d6b98d912845fc7448d | 1a8cf59496175c174f457eac7324c32783544a3b5f8f36b2823a9f845ad575ed | MIT | [
"LICENSE"
] | 244 |
2.4 | cymongoose | 0.1.10 | High-performance Python bindings for the Mongoose embedded networking library | # cymongoose
Python bindings for the Mongoose embedded networking library, built with Cython.
## Overview
**cymongoose** provides Pythonic access to [Mongoose](https://github.com/cesanta/mongoose), a lightweight embedded networking library written in C. It supports HTTP servers, WebSocket, TCP/UDP sockets, and more through a clean, event-driven API.
## Features
### Core Protocols
- **HTTP/HTTPS**: Server and client with TLS support, chunked transfer encoding, SSE
- **WebSocket/WSS**: Full WebSocket support with text/binary frames over TLS
- **MQTT/MQTTS**: Publish/subscribe messaging with QoS support
- **TCP/UDP**: Raw socket support with custom protocols
- **DNS**: Asynchronous hostname resolution
- **SNTP**: Network time synchronization
### Advanced Features
- **TLS/SSL**: Certificate-based encryption with custom CA support
- **Timers**: Periodic callbacks with precise timing control
- **Flow Control**: Backpressure handling and buffer management
- **Authentication**: HTTP Basic Auth, MQTT credentials
- **JSON Parsing**: Built-in JSON extraction utilities
- **URL Encoding**: Safe URL parameter encoding
### Technical
- **Event-driven**: Non-blocking I/O with a simple event loop
- **Low overhead**: Thin Cython wrapper over native C library
- **Python 3.10+**: Modern Python with type hints
- **Comprehensive**: 244 tests, 100% pass rate
- **Production Examples**: 17 complete examples from Mongoose tutorials
- **TLS Support**: Built-in TLS/SSL encryption (MG_TLS_BUILTIN)
- **GIL Optimization**: 21 methods release GIL for true parallel execution
- **High Performance**: 60k+ req/sec (6-37x faster than pure Python frameworks)
## Installation
### From PyPI
```sh
pip install cymongoose
```
### From source
```sh
# Clone the repository
git clone https://github.com/shakfu/cymongoose
cd cymongoose
make
```
Running `make help` lists all available commands.
### Requirements
- Python 3.10 or higher
- CMake 3.15+
- Cython 3.0+
- C compiler (gcc, clang, or MSVC)
## Quick Start
> **Note:** These examples bind to `127.0.0.1` (localhost only). For production, use `0.0.0.0` to listen on all interfaces.
### Simple HTTP Server
```python
from cymongoose import Manager, MG_EV_HTTP_MSG
def handler(conn, event, data):
if event == MG_EV_HTTP_MSG:
conn.reply(200, "Hello, World!")
mgr = Manager(handler)
mgr.listen("http://127.0.0.1:8000", http=True)
print("Server running on http://localhost:8000. Press Ctrl+C to stop.")
mgr.run()
```
### Serve Static Files
```python
from cymongoose import Manager, MG_EV_HTTP_MSG
def handler(conn, event, data):
if event == MG_EV_HTTP_MSG:
conn.serve_dir(data, root_dir="./public")
mgr = Manager(handler)
mgr.listen("http://127.0.0.1:8000", http=True)
mgr.run()
```
### WebSocket Echo Server
```python
from cymongoose import Manager, MG_EV_HTTP_MSG, MG_EV_WS_MSG
def handler(conn, event, data):
if event == MG_EV_HTTP_MSG:
conn.ws_upgrade(data) # Upgrade HTTP to WebSocket
elif event == MG_EV_WS_MSG:
conn.ws_send(data.text) # Echo back
mgr = Manager(handler)
mgr.listen("http://127.0.0.1:8000", http=True)
mgr.run()
```
### Per-Listener Handlers (new)
Run different handlers on different ports. Accepted connections automatically inherit the handler from their listener:
```python
from cymongoose import Manager, MG_EV_HTTP_MSG
def api_handler(conn, event, data):
if event == MG_EV_HTTP_MSG:
conn.reply(200, '{"status":"ok"}', {"Content-Type": "application/json\r\n"})
def web_handler(conn, event, data):
if event == MG_EV_HTTP_MSG:
conn.serve_dir(data, root_dir="./public")
mgr = Manager() # no default handler needed
mgr.listen("http://127.0.0.1:8080", handler=api_handler, http=True)
mgr.listen("http://127.0.0.1:8090", handler=web_handler, http=True)
mgr.run()
```
## Examples
The project includes several complete examples translated from Mongoose C tutorials:
### Core HTTP/WebSocket
- **HTTP Server** - Static files, TLS, multipart uploads, REST API
- **HTTP Client** - GET/POST, TLS, timeouts, custom headers
- **WebSocket Server** - Echo, mixed HTTP+WS, client tracking
- **WebSocket Broadcasting** - Timer-based broadcasts to multiple clients
### MQTT
- **MQTT Client** - Pub/sub, QoS, reconnection, keepalive
- **MQTT Broker** - Message routing, topic matching, subscriptions
### Specialized HTTP
- **HTTP Streaming** - Chunked transfer encoding, large responses
- **HTTP File Upload** - Disk streaming, multipart forms
- **RESTful Server** - JSON API, CRUD operations, routing
- **Server-Sent Events** - Real-time push updates
### Network Protocols
- **SNTP Client** - Network time sync over UDP
- **DNS Client** - Async hostname resolution
- **TCP Echo Server** - Raw TCP sockets, custom protocols
- **UDP Echo Server** - Connectionless datagrams
### Advanced Features
- **TLS HTTPS Server** - Certificate-based encryption, SNI
- **HTTP Proxy Client** - CONNECT method tunneling
- **Multi-threaded Server** - Background workers, `Manager.wakeup()`
**All examples include:**
- Production-ready patterns (signal handlers, graceful shutdown)
- Command-line arguments for flexibility
- Comprehensive test coverage (42 tests)
- Detailed documentation with C tutorial references
See `tests/examples/README.md` for usage instructions and `tests/examples/` for source code.
## API Reference
### Manager
The main event loop manager.
```python
mgr = Manager(handler=None, enable_wakeup=False)
```
**Core Methods:**
- `poll(timeout_ms=0)` - Run one iteration of the event loop
- `run(poll_ms=100)` - Run the event loop until SIGINT/SIGTERM, then close
- `listen(url, handler=None)` - Create a listening socket (handler is inherited by accepted children)
- `connect(url, handler=None)` - Create an outbound connection
- `close()` - Free resources
**Protocol-Specific:**
- `http_listen(url, handler=None)` - Create HTTP server
- `http_connect(url, handler=None)` - Create HTTP client
- `ws_connect(url, handler=None)` - WebSocket client
- `mqtt_connect(url, handler=None, client_id, username, password, ...)` - MQTT client
- `mqtt_listen(url, handler=None)` - MQTT broker
- `sntp_connect(url, handler=None)` - SNTP time client
- `timer_add(milliseconds, callback, repeat=False, run_now=False)` - Add periodic timer
- `wakeup(connection_id, data)` - Wake connection from another thread
### Connection
Represents a network connection.
```python
# Send data
conn.send(data) # Raw bytes
conn.reply(status, body, headers) # HTTP response
conn.ws_upgrade(message) # Upgrade HTTP to WebSocket
conn.ws_send(data, op) # WebSocket frame
# HTTP
conn.serve_dir(message, root_dir) # Serve static files
conn.serve_file(message, path) # Serve single file
conn.http_chunk(data) # Send chunked data
conn.http_sse(event_type, data) # Server-Sent Events
conn.http_basic_auth(user, pass_) # HTTP Basic Auth
# MQTT
conn.mqtt_pub(topic, message, ...) # Publish an MQTT message
conn.mqtt_sub(topic, qos=0) # Subscribe to an MQTT topic
conn.mqtt_ping() # Send MQTT ping
conn.mqtt_pong() # Send MQTT pong
conn.mqtt_disconnect() # Send MQTT disconnect message
# SNTP
conn.sntp_request() # Request time
# TLS
conn.tls_init(TlsOpts(...)) # Initialize TLS
conn.tls_free() # Free TLS resources
# DNS
conn.resolve(url) # Async DNS lookup
conn.resolve_cancel() # Cancel DNS lookup
# Connection management
conn.drain() # Graceful close (flush buffer first)
conn.close() # Immediate close
conn.error(message) # Trigger error event
# Properties
conn.is_listening # Listener socket?
conn.is_websocket # WebSocket connection?
conn.is_tls # TLS/SSL enabled?
conn.is_udp # UDP socket?
conn.is_readable # Data available?
conn.is_writable # Can write?
conn.is_full # Buffer full? (backpressure)
conn.is_draining # Draining before close?
conn.id # Connection ID
conn.handler # Current handler
conn.set_handler(fn) # Set handler (propagates to children if listener)
conn.userdata # Custom Python object
conn.local_addr # (ip, port) tuple
conn.remote_addr # (ip, port) tuple
# Buffer access
conn.recv_len # Bytes in receive buffer
conn.send_len # Bytes in send buffer
conn.recv_size # Receive buffer capacity
conn.send_size # Send buffer capacity
conn.recv_data(n) # Read from receive buffer
conn.send_data(n) # Read from send buffer
```
### TlsOpts
TLS/SSL configuration.
```python
opts = TlsOpts(
ca=None, # CA certificate (PEM)
cert=None, # Server/client certificate (PEM)
key=None, # Private key (PEM)
name=None, # Server name (SNI)
skip_verification=False # Skip cert validation (dev only!)
)
```
### HttpMessage
HTTP request/response view.
```python
msg.method # "GET", "POST", etc.
msg.uri # "/path"
msg.query # "?key=value"
msg.proto # "HTTP/1.1"
msg.body_text # Body as string
msg.body_bytes # Body as bytes
msg.header("Name") # Get header value
msg.headers() # All headers as list of tuples
msg.query_var("key") # Extract query parameter
msg.status() # HTTP status code
msg.header_var(header, var) # Extract variable from header
```
### WsMessage
WebSocket frame data.
```python
ws.text # Frame data as string
ws.data # Frame data as bytes
ws.flags # WebSocket flags
```
### MqttMessage
MQTT message data.
```python
mqtt.topic # Topic as string
mqtt.data # Payload as bytes
mqtt.id # Message ID
mqtt.cmd # MQTT command
mqtt.qos # Quality of Service (0-2)
mqtt.ack # Acknowledgment flag
```
### Event Constants
```python
# Core events
MG_EV_ERROR # Error occurred
MG_EV_OPEN # Connection created
MG_EV_POLL # Poll iteration
MG_EV_RESOLVE # DNS resolution complete
MG_EV_CONNECT # Outbound connection established
MG_EV_ACCEPT # Inbound connection accepted
MG_EV_TLS_HS # TLS handshake complete
MG_EV_READ # Data available to read
MG_EV_WRITE # Data written
MG_EV_CLOSE # Connection closed
# Protocol events
MG_EV_HTTP_MSG # HTTP message received
MG_EV_WS_OPEN # WebSocket handshake complete
MG_EV_WS_MSG # WebSocket message received
MG_EV_MQTT_CMD # MQTT command received
MG_EV_MQTT_MSG # MQTT message received
MG_EV_MQTT_OPEN # MQTT connection established
MG_EV_SNTP_TIME # SNTP time received
MG_EV_WAKEUP # Wakeup notification
```
### Utility Functions
```python
# JSON parsing
json_get(json_str, "$.path") # Get JSON value
json_get_num(json_str, "$.number") # Get as number
json_get_bool(json_str, "$.bool") # Get as boolean
json_get_long(json_str, "$.int", default=0) # Get as long
json_get_str(json_str, "$.string") # Get as string
# URL encoding
url_encode(data) # Encode for URL
# Multipart forms
http_parse_multipart(body, offset=0) # Parse multipart data
```
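For intuition, here is a rough stdlib analog of what `json_get` does for simple dotted paths (an assumption-laden sketch: the real function is backed by Mongoose's C JSON parser and also handles array indexing and typed variants):

```python
import json

def json_get(doc: str, path: str):
    # Walk a "$.a.b" style path through a parsed JSON document.
    # Simplified analog: dotted object keys only, no array indices.
    node = json.loads(doc)
    for key in path.lstrip("$.").split("."):
        node = node[key]
    return node

print(json_get('{"user": {"age": 42}}', "$.user.age"))  # 42
```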
## Testing
The project includes a comprehensive test suite with **244 tests** (100% passing):
### Test Coverage by Feature
**Core Functionality (168 tests):**
- **HTTP/HTTPS**: Server, client, headers, query params, chunked encoding, SSE (40 tests)
- **WebSocket**: Handshake, text/binary frames, opcodes (10 tests)
- **MQTT**: Connect, publish, subscribe, ping/pong, disconnect (11 tests)
- **TLS/SSL**: Configuration, initialization, properties (12 tests)
- **Timers**: Single-shot, repeating, callbacks, cleanup (10 tests)
- **DNS**: Resolution, cancellation (4 tests)
- **SNTP**: Time requests, format validation (5 tests)
- **JSON**: Parsing, type conversion, nested access (9 tests)
- **Buffer Access**: Direct buffer inspection, flow control (10 tests)
- **Connection State**: Lifecycle, properties, events (15+ tests)
- **Security**: HTTP Basic Auth, TLS properties (6 tests)
- **Utilities**: URL encoding, multipart forms, wakeup (10 tests)
- **Flow Control**: Drain, backpressure (4 tests)
**Example Tests:**
- HTTP/WebSocket examples
- MQTT examples
- Specialized HTTP examples
- Network protocols
- Advanced features
- README example validation
- WebSocket broadcast examples
### Running Tests
```sh
make test # Run all tests (244 tests)
uv run python -m pytest tests/ -v # Verbose output
uv run python -m pytest tests/test_http_server.py # Run specific file
uv run python -m pytest tests/ -k "test_timer" # Run matching tests
uv run python -m pytest tests/examples/ # Run example tests only
```
### Test Infrastructure
- Dynamic port allocation prevents conflicts
- Background polling threads for async operations
- Proper cleanup in finally blocks
- 100% pass rate (244/244 tests passing)
- WebSocket tests require `websocket-client` (`uv add --dev websocket-client`)
### Memory Safety Testing
AddressSanitizer (ASAN) support is available for detecting memory errors:
```sh
make build-asan # Build with ASAN enabled
make test-asan # Run tests with memory error detection
```
This detects use-after-free, buffer overflows, and other memory bugs at runtime.
> **macOS note:** `build-asan` compiles a small helper (`build/run_asan`) that
> injects the ASAN runtime via `DYLD_INSERT_LIBRARIES` before exec'ing Python.
> This is necessary because macOS SIP strips `DYLD_INSERT_LIBRARIES` from
> processes spawned by system binaries (`/usr/bin/make`, `/bin/sh`).
## Development
The project uses [scikit-build-core](https://scikit-build-core.readthedocs.io/) with CMake to build the Cython extension, and [uv](https://docs.astral.sh/uv/) for environment and dependency management.
```sh
make build # Rebuild the Cython extension
make test # Run all tests
make clean # Remove build artifacts
make help # Show all available targets
```
## Architecture
- **CMake build** (`CMakeLists.txt`): Cythonizes `.pyx` and compiles the extension via scikit-build-core
- **Cython bindings** (`src/cymongoose/_mongoose.pyx`): Python wrapper classes
- **C declarations** (`src/cymongoose/mongoose.pxd`): Cython interface to Mongoose C API
- **Vendored Mongoose** (`thirdparty/mongoose/`): Embedded C library
### Performance Optimization
The wrapper achieves **C-level performance** through aggressive optimization:
**GIL Release (`nogil`):**
- **21 critical methods release GIL** for true parallel execution
- Network: `send()`, `close()`, `resolve()`, `resolve_cancel()`
- WebSocket: `ws_send()`, `ws_upgrade()`
- MQTT: `mqtt_pub()`, `mqtt_sub()`, `mqtt_ping()`, `mqtt_pong()`, `mqtt_disconnect()`
- HTTP: `reply()`, `serve_dir()`, `serve_file()`, `http_chunk()`, `http_sse()`
- TLS: `tls_init()`, `tls_free()`
- Utilities: `sntp_request()`, `http_basic_auth()`, `error()`
- Properties: `local_addr`, `remote_addr`
- Thread-safe: `Manager.wakeup()`
**TLS Compatibility:**
- TLS and `nogil` work together safely
- Mongoose's built-in TLS is event-loop based (no internal locks)
- Both optimizations enabled by default
**Benchmark Results** (Apple Silicon, `wrk -t4 -c100 -d10s`):
- **cymongoose**: 60,973 req/sec (1.67ms latency)
- aiohttp: 42,452 req/sec (1.44x slower)
- FastAPI/uvicorn: 9,989 req/sec (6.1x slower)
- Flask: 1,627 req/sec (37.5x slower)
See `docs/nogil_optimization_summary.md` and `benchmarks/RESULTS.md` for details.
## License
This project is licensed under the **GNU General Public License v2.0 or later** (GPL-2.0-or-later), matching the [Mongoose C library](https://github.com/cesanta/mongoose) license. See [LICENSE](LICENSE) for details.
For use in proprietary/closed-source projects, a [commercial Mongoose license](https://mongoose.ws/licensing/) from Cesanta is required.
## Links
- [Mongoose Documentation](https://mongoose.ws/)
- [GitHub Repository](https://github.com/shakfu/cymongoose)
| text/markdown | null | Shakeeb Alireza <shakfu@users.noreply.github.com> | null | null | null | mongoose, networking, http, https, websocket, mqtt, server, client, embedded, async, high-performance, tls, ssl | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"Intended Audience :: System Administrators",
"Intended Audience :: Telecommunications Industry",
"Natural Language :: English",
"Operating System :: OS Independent",
"Operating System :: POSIX :: Linux",
"Operating System :: MacOS :: MacOS X",
"Operating System :: Microsoft :: Windows",
"Programming Language :: C",
"Programming Language :: Cython",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3 :: Only",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: 3.14",
"Programming Language :: Python :: Implementation :: CPython",
"Topic :: Communications",
"Topic :: Internet",
"Topic :: Internet :: WWW/HTTP",
"Topic :: Internet :: WWW/HTTP :: HTTP Servers",
"Topic :: Internet :: WWW/HTTP :: Dynamic Content",
"Topic :: Software Development :: Libraries",
"Topic :: Software Development :: Libraries :: Python Modules",
"Topic :: System :: Networking",
"Typing :: Typed"
] | [] | null | null | >=3.10 | [] | [] | [] | [] | [] | [] | [] | [
"Homepage, https://github.com/shakfu/cymongoose",
"Repository, https://github.com/shakfu/cymongoose",
"Documentation, https://github.com/shakfu/cymongoose"
] | twine/6.2.0 CPython/3.13.2 | 2026-02-21T10:20:41.050393 | cymongoose-0.1.10.tar.gz | 573,962 | 5a/4c/50c2b0fd3345a7cd32c9f76928d23c4ce4d75bd93b8876d74a58ab5f9341/cymongoose-0.1.10.tar.gz | source | sdist | null | false | 06b8c9f80aa34b49e01b9fd015fd71ea | 493105335f20b91f45ffe12fb025d175f51afb1afd822d72f1e3a21564f2ce13 | 5a4c50c2b0fd3345a7cd32c9f76928d23c4ce4d75bd93b8876d74a58ab5f9341 | GPL-2.0-or-later | [] | 1,184 |
2.4 | tars-robot | 0.4.2 | TARS robot control library with daemon, dashboard, and SDK | # TARS
> **⚠️ Note to Visitors**
>
> This repository is a **personal fork** for experimenting with a new distributed architecture.
> If you're looking for the **main TARS-AI project**, please visit:
>
> 👉 **https://github.com/TARS-AI-Community/TARS-AI**
>
> This fork splits TARS into a dual-machine setup:
> - **Host Computer (macOS/Windows/Linux)**: Handles all AI processing (STT, TTS, LLM, Vision)
> - **Raspberry Pi 5**: Handles all hardware I/O (servos, camera, audio)
---
## Architecture Overview
**New Architecture:** RPi is self-contained and runs a WebRTC server + gRPC server. The host computer connects to it as a client.
```
RPi 5 (tars) - Standalone Robot Host Computer (tars-conversation-app) - AI Brain
┌──────────────────────────────┐ ┌─────────────────────────────┐
│ WEBRTC + gRPC SERVERS │ │ WEBRTC CLIENT + AI │
│ │ │ │
│ tars_daemon.py │ │ tars_bot.py │
│ │ │ │
│ On boot: │ │ Connects to RPi: │
│ - Starts WebRTC server │ WebRTC │ - aiortc client │
│ - Starts gRPC server │◄───────┤ - POST /api/offer │
│ - POST /api/offer endpoint │ P2P │ │
│ │ │ Audio Pipeline: │
│ Audio Routing: │ │ ┌─────────────────────┐ │
│ - Mic → WebRTC track ────────┼────────┼►│ VAD → STT → LLM │ │
│ - WebRTC track → Speaker ◄───┼────────┼─┤ → TTS → Audio Out │ │
│ │ │ └─────────────────────┘ │
│ DataChannel State Sync: │ │ │
│ - Receives eye states │ │ Services: │
│ - Sends battery status │ │ - Deepgram STT │
│ │ │ - GPT LLM + Tools │
│ gRPC API (port 50051): │ │ - ElevenLabs TTS │
│ - Move(movement, speed) │◄───────┤ - Vision (tool calls) │
│ - CaptureCamera(w, h, q) │ gRPC │ │
│ - SetEmotion(emotion) │ │ Tools call RPi via gRPC │
│ - SetEyeState(state) │ │ │
│ - GetStatus() │ │ │
└──────────────────────────────┘ └─────────────────────────────┘
│
│ I2C + USB + CSI
▼
┌──────────────────┐
│ Hardware │
│ - Servos │
│ - USB Soundcard │
│ - Pi Camera │
│ - Display │
│ - Battery │
└──────────────────┘
```
**Key Principle:** The robot is self-contained. It boots up and waits for an AI brain to connect, not the other way around.
---
## 📦 Installation
### SDK Only (App Development)
For controlling TARS from your computer (Mac/Windows/Linux):
```bash
pip install tars-robot
```
This installs only the lightweight SDK (~3 dependencies) needed to connect to the robot via gRPC.
### Full Daemon (Raspberry Pi)
For running the robot daemon on the Pi:
```bash
pip install tars-robot[daemon]
```
This installs all dependencies including FastAPI, pygame, Adafruit libraries, etc.
📖 **[Full Installation Guide](./INSTALLATION.md)** - Detailed instructions, usage examples, and troubleshooting
---
## What This Repo Contains
- **gRPC-based control system** for Raspberry Pi 5 (low-latency hardware control)
- **WebRTC server** for bidirectional audio streaming
- **19 pre-programmed movements** for servo control
- **Camera capture via gRPC** (Pi Camera or USB webcam)
- **Real-time state synchronization** via WebRTC DataChannel
## Quick Start
### Web Dashboard
**First Boot WiFi Setup:**
On first boot, TARS starts a WiFi hotspot:
```
SSID: TARS-Setup
Password: tars1234
Setup URL: http://10.42.0.1:8080/setup
```
After WiFi is configured, access the dashboard at:
```
# Local network (home WiFi)
http://tars.local:8080
# Tailscale (dorm/corporate networks)
http://100.x.x.x:8080
```
📖 **[WiFi Setup Guide](./docs/WIFI_SETUP.md)** - Complete WiFi configuration instructions
**Dashboard Features:**
- Monitor robot status (battery, CPU, network)
- Control movements via joystick
- Install and manage apps from the App Store
- Configure WiFi and system settings
**Tabs:**
- **Status**: System metrics, battery, connections
- **Control**: Movement controls with joystick interface
- **Apps**: Official and community apps marketplace
- **Settings**: WiFi management, enterprise WiFi support, updates
### Command Line
Start the RPi daemon (waits for AI brain to connect):
```bash
# Start WebRTC + gRPC servers (default)
python tars_daemon.py
# Or using start script
./start.sh
# WebRTC only (no gRPC)
python tars_daemon.py --no-grpc
# Specify custom gRPC port
python tars_daemon.py --grpc-port 50052
```
The RPi will:
1. Start the WebRTC server on port 8001
2. Start the gRPC server on port 50051 (default)
3. Start the web dashboard on port 8080
4. Wait for the host computer to connect via POST /api/offer
5. Once connected, audio flows bidirectionally and gRPC handles hardware control
## Documentation
**User Guides:**
- **[DAEMON.md](./docs/DAEMON.md)** - Getting started with unified daemon
- **[DASHBOARD.md](./docs/DASHBOARD.md)** - Web dashboard guide
- **[WIFI_SETUP.md](./docs/WIFI_SETUP.md)** - WiFi configuration and troubleshooting
**API Reference:**
- **[MOVEMENTS.md](./docs/MOVEMENTS.md)** - Servo control and movement API
- **[HARDWARE_IO.md](./docs/HARDWARE_IO.md)** - Camera and audio API
**Architecture & Design:**
- **[ARCHITECTURE.md](./docs/ARCHITECTURE.md)** - System architecture
---
## 🤝 Contributing
- Join the community on Discord:
👉 https://discord.gg/AmE2Gv9EUt
---
## 📜 License
This project is licensed under **Creative Commons Attribution-NonCommercial 4.0 International (CC-BY-NC 4.0)**.
You may:
- Build and modify your own TARS robot
- Share improvements and derivatives
- Use the project for personal, educational, and research purposes
You may **not** use this project for commercial purposes without explicit permission from the authors.
Commercial use includes, but is not limited to:
- Selling 3D printed parts, kits, or complete robots
- Selling or distributing STL / CAD files for money
- Offering paid assembly, customization, or installation services
- Monetized YouTube, Social Media, Patreon, or subscription content that distributes project files or derivatives
- Using this project in paid products, commercial research, or corporate projects
- Integrating this project into commercial software or hardware products
- Selling derivatives or modified versions of the hardware or software
If you are unsure whether your use case is commercial, assume it is and request permission from the authors.
See the [LICENSE](./LICENSE) file for details.
---
## 🧾 Attribution
Please follow the attribution guidelines when sharing or publishing derivative work:
👉 [ATTRIBUTION.md](./ATTRIBUTION.md)
---
| text/markdown | TARS Team | null | null | null | CC-BY-NC-4.0 | robotics, grpc, tars, raspberry-pi, robot | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Topic :: Scientific/Engineering",
"Operating System :: OS Independent"
] | [] | null | null | >=3.9 | [] | [] | [] | [
"grpcio>=1.60.0",
"protobuf>=4.25.0",
"loguru>=0.7.0",
"grpcio-tools>=1.60.0; extra == \"daemon\"",
"fastapi>=0.104.0; extra == \"daemon\"",
"uvicorn>=0.24.0; extra == \"daemon\"",
"aiortc>=1.6.0; extra == \"daemon\"",
"opencv-python>=4.8.0; extra == \"daemon\"",
"numpy>=1.24.0; extra == \"daemon\"",
"pygame>=2.5.0; extra == \"daemon\"",
"pyserial>=3.5; extra == \"daemon\"",
"adafruit-circuitpython-pca9685>=3.4.0; extra == \"daemon\"",
"adafruit-circuitpython-ina260>=1.3.0; extra == \"daemon\"",
"psutil>=5.9.0; extra == \"daemon\"",
"aiofiles>=23.0.0; extra == \"daemon\"",
"grpcio-tools>=1.60.0; extra == \"all\"",
"fastapi>=0.104.0; extra == \"all\"",
"uvicorn>=0.24.0; extra == \"all\"",
"aiortc>=1.6.0; extra == \"all\"",
"opencv-python>=4.8.0; extra == \"all\"",
"numpy>=1.24.0; extra == \"all\"",
"pygame>=2.5.0; extra == \"all\"",
"pyserial>=3.5; extra == \"all\"",
"adafruit-circuitpython-pca9685>=3.4.0; extra == \"all\"",
"adafruit-circuitpython-ina260>=1.3.0; extra == \"all\"",
"psutil>=5.9.0; extra == \"all\"",
"aiofiles>=23.0.0; extra == \"all\""
] | [] | [] | [] | [
"Homepage, https://github.com/latishab/tars",
"Repository, https://github.com/latishab/tars",
"Documentation, https://github.com/latishab/tars#readme",
"Issues, https://github.com/latishab/tars/issues"
] | twine/6.2.0 CPython/3.13.5 | 2026-02-21T10:20:11.046966 | tars_robot-0.4.2.tar.gz | 27,366 | d5/73/b5dfd4cd5b131c401ea30f566e8ab464f80b5539e8eb391a679956b93d39/tars_robot-0.4.2.tar.gz | source | sdist | null | false | 0a2add8ed23f92fa23bdeec73395b47d | 83de17574048e6f64d569a13f896a3d3a8f51505f8f70ab4afec8d65ab588414 | d573b5dfd4cd5b131c401ea30f566e8ab464f80b5539e8eb391a679956b93d39 | null | [
"LICENSE"
] | 250 |
2.4 | waveassist | 0.6.1 | WaveAssist Python SDK for storing and retrieving structured data, LLM integration, and credit management | # WaveAssist SDK & CLI 🌊
WaveAssist SDK makes it simple to store and retrieve data in your automation workflows. Access your projects through our Python SDK or CLI.
---
## ✨ Features
- 🔐 One-line `init()` to connect with your [WaveAssist](https://waveassist.io) project
- ⚙️ Automatically works on local and [WaveAssist Cloud](https://waveassist.io) (worker) environments
- 📦 Store and retrieve data (DataFrames, JSON, strings)
- 🧠 LLM-friendly function names (`init`, `store_data`, `fetch_data`)
- 📁 Auto-serialization for common Python objects
- 🤖 LLM integration with structured outputs via Instructor and OpenRouter
- 💳 Credit management and automatic email notifications
- 🖥️ Command-line interface for project management
- ✅ Built for automation workflows, cron jobs, and AI pipelines
---
## 🚀 Getting Started
### 1. Install
```bash
pip install waveassist
```
---
### 2. Initialize the SDK
```python
import waveassist
# Option 1: Use no arguments (recommended)
waveassist.init()
# Option 2: With explicit parameters
waveassist.init(
token="your-user-id",
project_key="your-project-key",
environment_key="your-env-key", # optional
run_id="run-123", # optional
check_credits=True # optional: raises RuntimeError if credits_available is "0"
)
# When check_credits=True, a missing credits_available key is treated as credits available (default "1").
# Will auto-resolve from:
# 1. Explicit args (if passed)
# 2. .env file (uid, project_key, environment_key)
# 3. Worker-injected credentials (on [WaveAssist Cloud](https://waveassist.io))
```
#### 🛠 Setting up `.env` (for local runs)
```env
uid=your-user-id
project_key=your-project-key
# optional
environment_key=your-env-key
```
This file will be ignored by Git if you use our default `.gitignore`.
---
### 3. Store Data
Data is serialized by type. You can rely on type-safe storage and retrieval.
#### 🧾 Store a string
```python
waveassist.store_data("welcome_message", "Hello, world!")
```
#### 📊 Store a DataFrame
```python
import pandas as pd
df = pd.DataFrame({"name": ["Alice", "Bob"], "score": [95, 88]})
waveassist.store_data("user_scores", df)
```
#### 🧠 Store JSON/dict/array
```python
profile = {"name": "Alice", "age": 30}
waveassist.store_data("profile_data", profile)
```
#### 📌 Optional: Force storage type
Store data as a specific type regardless of input:
```python
# Store a dict as a string
waveassist.store_data("config", {"a": 1}, data_type="string")
# Store a string as JSON (wraps as {"value": "..."})
waveassist.store_data("greeting", "Hello", data_type="json")
```
**Parameters:** `store_data(key, data, run_based=False, data_type=None)`. Use `data_type="string"`, `"json"`, or `"dataframe"` to force that storage type.
---
### 4. Fetch Data
```python
result = waveassist.fetch_data("user_scores")
# Returns the correct type:
# - pd.DataFrame (if stored as dataframe)
# - dict or list (if stored as JSON)
# - str (if stored as string)
```
**Use a default when the key might be missing:**
```python
# Return a default if key is missing or API fails
count = waveassist.fetch_data("failure_count", default=0)
greeting = waveassist.fetch_data("welcome", default="Hello")
df = waveassist.fetch_data("results", default=pd.DataFrame()) # empty DataFrame
```
**Parameters:** `fetch_data(key, run_based=False, default=None)`.
---
### 5. Send Email
Send emails (e.g. for notifications) via the WaveAssist backend:
```python
waveassist.send_email(
subject="Daily report",
html_content="<p>Summary: ...</p>",
attachment_file=open("report.pdf", "rb"), # optional
raise_on_failure=True # default: raise WaveAssistEmailError on validation or API failure
)
```
- **Validation:** Subject and HTML body must be non-empty (and within length limits). Attachments must be file-like with a `.read()` method.
- **Retry:** The SDK retries the send once on transient failure.
- **Errors:** By default, validation failures raise `ValueError` and API failures raise `RuntimeError`. Pass `raise_on_failure=False` to return `False` instead.
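The retry-once behavior can be sketched as a small wrapper. This is only an illustration of the pattern, not the SDK's internal code; `send_with_retry` and `flaky_send` are hypothetical names:

```python
import time

def send_with_retry(send_fn, *args, retries=1, delay=1.0, **kwargs):
    """Call send_fn; on a transient failure, retry up to `retries` more times."""
    last_exc = None
    for attempt in range(retries + 1):
        try:
            return send_fn(*args, **kwargs)
        except ConnectionError as exc:  # treat connection errors as transient
            last_exc = exc
            if attempt < retries:
                time.sleep(delay)
    raise last_exc

# A flaky sender that fails once, then succeeds on the retry
calls = {"n": 0}
def flaky_send(subject):
    calls["n"] += 1
    if calls["n"] == 1:
        raise ConnectionError("transient network error")
    return True

assert send_with_retry(flaky_send, "Daily report", delay=0) is True
```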
---
### 6. Check Credits and Notify
Check your OpenRouter credit balance and automatically send an email notification when credits are insufficient:
```python
# Check if you have enough credits for an operation
has_credits = waveassist.check_credits_and_notify(
required_credits=10.5,
assistant_name="WavePredict"
)
if has_credits:
# Proceed with your operation
print("Credits available, proceeding...")
else:
# Credits insufficient - email notification sent (max 3 times)
print("Insufficient credits, operation skipped")
```
**Features:**
- Automatically checks OpenRouter credit balance
- Sends email notification if credits are insufficient (max 3 times)
- Resets notification count when credits become sufficient
- Stores credit availability status for workflow control
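The notify-at-most-3-times behavior above can be sketched roughly like this. The function and field names are hypothetical, and `state` stands in for data the real SDK would persist via `store_data`/`fetch_data`:

```python
def check_and_notify(balance, required, state, send_email, max_notifications=3):
    """Sketch of the pattern: notify at most 3 times, reset when credits recover."""
    if balance >= required:
        state["notify_count"] = 0          # reset once credits are sufficient
        return True
    if state.get("notify_count", 0) < max_notifications:
        send_email(f"Insufficient credits: need {required}, have {balance}")
        state["notify_count"] = state.get("notify_count", 0) + 1
    return False

sent = []
state = {}
for _ in range(5):                         # repeated failures: only 3 emails go out
    check_and_notify(1.0, 10.5, state, sent.append)
check_and_notify(20.0, 10.5, state, sent.append)  # credits restored -> counter resets
```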
---
### 7. Call LLM with Structured Outputs
Get structured responses from LLMs via OpenRouter with Pydantic models:
```python
from pydantic import BaseModel
# Define your response structure
class UserInfo(BaseModel):
name: str
age: int
email: str
# Call LLM with structured output
result = waveassist.call_llm(
model="gpt-4o",
prompt="Extract user info: John Doe, 30, john@example.com",
response_model=UserInfo
)
print(result.name) # "John Doe"
print(result.age) # 30
print(result.email) # "john@example.com"
```
**Setup:**
1. Store your OpenRouter API key:
```python
waveassist.store_data('open_router_key', 'your_openrouter_api_key')
```
2. Use `call_llm()` with any Pydantic model for structured outputs
**Advanced Usage:**
```python
result = waveassist.call_llm(
model="anthropic/claude-3-opus",
prompt="Analyze this data...",
response_model=MyModel,
should_retry=True, # retry once on JSON/format errors
max_tokens=3000,
extra_body={"web_search_options": {"search_context_size": "medium"}}
)
```
**Errors:** `RuntimeError` (API/network failure), `ValueError` (invalid or non-JSON response). Transport errors are retried once automatically.
---
## 🖥️ Command Line Interface
WaveAssist CLI comes bundled with the Python package. After installation, you can use the following commands:
### 🔑 Authentication
```bash
waveassist login
```
This will open your browser for authentication and store the token locally.
### 📤 Push Code
```bash
waveassist push PROJECT_KEY [--force]
```
Push your local Python code to a WaveAssist project.
### 📥 Pull Code
```bash
waveassist pull PROJECT_KEY [--force]
```
Pull Python code from a WaveAssist project to your local machine.
### ℹ️ Version Info
```bash
waveassist version
```
Display CLI version and environment information.
---
## 🧪 Running Tests
Run with pytest (recommended) or the test scripts:
```bash
# All tests (use project Python if you have a venv)
python -m pytest tests/ -v
# Or run modules directly
python tests/test_core.py
python tests/test_json_generate.py
python tests/test_json_extract.py
```
✅ Includes tests for:
- String, JSON, and DataFrame roundtrips; `store_data` with explicit `data_type`; `fetch_data` with `default`
- `send_email` validation, attachments, and `raise_on_failure`
- Error handling when `init()` is not called (`RuntimeError`)
- Environment variable and `.env` file resolution
- JSON template generation for Pydantic models
- JSON extraction from various formats (pure JSON, markdown code blocks, embedded text)
- Soft parsing with missing required fields (safety fallback for LLM responses)
- Type coercion and nested model handling
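The extraction cases in the bullets above can be illustrated with a simplified parser. This sketch handles the three named formats; the SDK's real parser (which uses `json-repair`) covers many more edge cases:

```python
import json
import re

def extract_json(text):
    """Pull the first JSON object out of pure JSON, a ```json fence, or prose."""
    # 1. Markdown code block
    m = re.search(r"```(?:json)?\s*(\{.*?\})\s*```", text, re.DOTALL)
    if m:
        return json.loads(m.group(1))
    # 2. Pure JSON, or JSON embedded in surrounding text
    m = re.search(r"\{.*\}", text, re.DOTALL)
    if m:
        return json.loads(m.group(0))
    raise ValueError("no JSON object found")

assert extract_json('{"a": 1}') == {"a": 1}
assert extract_json('Here you go:\n```json\n{"a": 2}\n```') == {"a": 2}
assert extract_json('The result is {"a": 3}, as requested.') == {"a": 3}
```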
---
## 🛠 Project Structure
```
WaveAssist/
├── waveassist/
│ ├── __init__.py # Public API: init(), store_data(), fetch_data(), send_email(),
│ │ # check_credits_and_notify(), call_llm()
│ ├── _config.py # Global config and version
│ ├── constants.py # API_BASE_URL, OpenRouter, dashboard URLs
│ ├── utils.py # API helpers, JSON parsing, soft_parse, exception classes
│ ├── core.py # CLI: login, pull, push (uses constants for URLs)
│ └── cli.py # Command-line entry (waveassist login/push/pull/version)
├── tests/
│ ├── test_core.py # Core SDK + send_email tests
│ ├── test_json_generate.py # JSON template generation tests
│ ├── test_json_extract.py # JSON extraction/parsing tests
│ └── test_llm_call.py # call_llm integration tests (skipped without API key)
```
---
## 📌 Notes
- Data is stored in your [WaveAssist backend](https://waveassist.io) (e.g. MongoDB) as serialized content
- `store_data()` auto-detects the object type and serializes it (dataframe/JSON/string), or use `data_type` to force a type
- `fetch_data()` returns the correct Python type and supports a `default` when the key is missing or the API fails
- **Logging:** The SDK uses the standard library `logging` module (logger name `"waveassist"`). Configure level or handlers to control or suppress SDK messages (e.g. `logging.getLogger("waveassist").setLevel(logging.WARNING)`).
- **Errors:** The SDK raises standard exceptions: **`ValueError`** for bad or missing input (e.g. missing uid/project_key in init, email validation, OpenRouter key not found, LLM JSON/format failure) and **`RuntimeError`** for invalid state or API failures (e.g. not initialized, credits not available, send email failed, LLM API/network failure). Catch `ValueError` or `RuntimeError` (or `Exception`) as needed.
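As a rough sketch of how the type auto-detection in `store_data()` could work (illustrative only, not the SDK's actual logic):

```python
def detect_data_type(data):
    """One way the auto-detection could work -- a sketch, not the SDK's code."""
    if type(data).__name__ == "DataFrame":  # matches pandas.DataFrame without a hard import
        return "dataframe"
    if isinstance(data, (dict, list)):
        return "json"
    return "string"  # everything else is serialized to its string form

assert detect_data_type({"name": "Alice"}) == "json"
assert detect_data_type("Hello, world!") == "string"
```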
---
## 🧠 Example Use Cases
### Basic Data Storage
```python
import waveassist
waveassist.init() # Auto-initialized from .env or worker
# Store GitHub PR data
waveassist.store_data("latest_pr", {
"title": "Fix bug in auth",
"author": "alice",
"status": "open"
})
# Later, fetch it for further processing
pr = waveassist.fetch_data("latest_pr")
print(pr["title"])
```
### LLM Integration with Credit Management
```python
import waveassist
from pydantic import BaseModel
waveassist.init()
# Store OpenRouter API key
waveassist.store_data('open_router_key', 'your_api_key')
# Check credits before expensive operation
required_credits = 5.0
if waveassist.check_credits_and_notify(required_credits, "MyAssistant"):
# Use LLM with structured output
class AnalysisResult(BaseModel):
summary: str
confidence: float
recommendations: list[str]
result = waveassist.call_llm(
model="gpt-4o",
prompt="Analyze this data and provide recommendations...",
response_model=AnalysisResult
)
# Store the structured result
    waveassist.store_data("analysis_result", result.model_dump())
```
---
## 🤝 Contributing
Want to add formats, features, or cloud extensions? PRs welcome!
---
## 📬 Contact
Need help or have feedback? Reach out at [connect@waveassist.io](mailto:connect@waveassist.io), visit [WaveAssist.io](https://waveassist.io), or open an issue.
---
© 2025 [WaveAssist](https://waveassist.io)
| text/markdown | WaveAssist | kakshil.shah@waveassist.io | null | null | null | null | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"Topic :: Software Development :: Libraries",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Operating System :: OS Independent"
] | [] | https://github.com/waveassist/waveassist | null | >=3.10 | [] | [] | [] | [
"pandas>=1.0.0",
"requests>=2.32.4",
"python-dotenv>=1.1.1",
"pydantic>=2.0.0",
"openai>=2.11.0",
"json-repair>=0.57.1"
] | [] | [] | [] | [] | twine/6.1.0 CPython/3.13.7 | 2026-02-21T10:19:24.114162 | waveassist-0.6.1.tar.gz | 37,052 | 3b/b5/108afaacf714ab5b63e2ff3a7bee7a07eb06c74dea46e7cb1f636330956f/waveassist-0.6.1.tar.gz | source | sdist | null | false | fdaa6ade8835adf594a5245ad8dd85eb | f484f04ad0b64237ebe91b68d6c611db1d7f1c72017dc2df876c072938f139a7 | 3bb5108afaacf714ab5b63e2ff3a7bee7a07eb06c74dea46e7cb1f636330956f | null | [
"LICENSE"
] | 250 |
2.4 | sso-zenna | 0.1.1 | Python client for the MS Auth Service API | # SSO Zenna Client
A Python client for the MS Auth Service API. It provides convenient classes for the OAuth 2.0 Authorization Code Flow with PKCE (for users) and the Client Credentials Grant (for microservices).
## Installation
```bash
pip install -e .
```
Or, if the package has been published:
```bash
pip install sso_zenna
```
## Quick Start
### For users (UserClient)
```python
import asyncio
from sso_client import UserClient
async def main():
    # Create the client
client = UserClient(
base_url="http://localhost:8000",
client_id="your_client_id",
redirect_uri="http://localhost:3000/callback"
)
    # Full authorization flow
token_response = await client.full_auth_flow(
login="user@example.com",
password="password123",
scope="sso.admin.read sso.admin.create"
)
print(f"Access token: {token_response.access_token}")
print(f"Refresh token: {token_response.refresh_token}")
    # Get the current user's info
    user_info = await client.get_current_user()
    print(f"User: {user_info.name} {user_info.surname}")
    # Refresh the token
    new_token = await client.refresh_access_token()
    print(f"New access token: {new_token.access_token}")
await client.close()
asyncio.run(main())
```
### Step-by-step authorization
```python
import asyncio
from sso_client import UserClient
async def main():
client = UserClient(
base_url="http://localhost:8000",
client_id="your_client_id"
)
    # 1. Get the PKCE parameters
    pkce_params = await client.get_pkce_params()
    # 2. Initiate authorization
    auth_response = await client.authorize(
        scope="sso.admin.read",
        pkce_params=pkce_params
    )
    # 3. Log in
    login_response = await client.login(
        login="user@example.com",
        password="password123",
        session_id=auth_response.session_id
    )
    # 4. Exchange the code for tokens
    token_response = await client.exchange_code_for_tokens(
        authorization_code=login_response.authorization_code,
        pkce_params=pkce_params
    )
    print(f"Tokens received: {token_response.access_token}")
await client.close()
asyncio.run(main())
```
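Here the PKCE parameters come from the server via `get_pkce_params()`. For reference, the standard S256 verifier/challenge pair (RFC 7636) is derived like this:

```python
import base64
import hashlib
import secrets

# Standard RFC 7636 S256 derivation, shown for reference -- this client
# obtains its PKCE parameters from the server instead.
code_verifier = base64.urlsafe_b64encode(secrets.token_bytes(32)).rstrip(b"=").decode("ascii")
digest = hashlib.sha256(code_verifier.encode("ascii")).digest()
code_challenge = base64.urlsafe_b64encode(digest).rstrip(b"=").decode("ascii")
# code_challenge goes out with the authorize step; code_verifier is revealed at token exchange
```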
### For microservices (ServiceClient)
```python
import asyncio
from sso_client import ServiceClient
async def main():
    # Create a client for a microservice
client = ServiceClient(
base_url="http://localhost:8000",
client_id="service_id",
client_secret="service_secret"
)
    # Get an access token
token_response = await client.get_access_token(
scope="system.client.read system.client.edit"
)
print(f"Access token: {token_response.access_token}")
    # Make authorized requests
    user_info = await client.get_current_user()
    # Or use request_with_auth for any endpoint
data = await client.request_with_auth(
method="GET",
endpoint="admin/users",
params={"skip": 0, "limit": 10}
)
await client.close()
asyncio.run(main())
```
## Usage with an async context manager
```python
import asyncio
from sso_client import UserClient
async def main():
async with UserClient(
base_url="http://localhost:8000",
client_id="your_client_id"
) as client:
token_response = await client.full_auth_flow(
login="user@example.com",
password="password123"
)
        # The session closes automatically on exit
asyncio.run(main())
```
Example: resolving the current user from a token, for use in a FastAPI `Depends` dependency:
```python
from typing import Optional
from fastapi import Depends, FastAPI, Header, HTTPException
from sso_client import AuthenticationError, ServiceClient
app = FastAPI()
sso_client = ServiceClient(
base_url="http://localhost:8000",
client_id="your_service_id",
client_secret="your_service_secret"
)
async def get_current_user(authorization: Optional[str] = Header(None)):
    """
    Dependency that resolves the current user from the token
    """
    if not authorization:
        raise HTTPException(status_code=401, detail="Token not provided")
    # Extract the token from the "Bearer <token>" header
    try:
        token = authorization.split(" ")[1]
    except IndexError:
        raise HTTPException(status_code=401, detail="Invalid token format")
    try:
        user_info = await sso_client.get_current_user(access_token=token)
        return user_info
    except AuthenticationError:
        raise HTTPException(status_code=401, detail="Invalid token")
    except Exception as e:
        raise HTTPException(status_code=500, detail=f"Token verification error: {e}")
@app.get("/protected")
async def protected_endpoint(current_user = Depends(get_current_user)):
    """
    Protected endpoint that requires a valid user token
    """
    return {
        "message": f"Hello, {current_user.name} {current_user.surname}!",
"user_id": current_user.id,
"email": current_user.email,
"scopes": current_user.scopes
}
```
## API Reference
### UserClient
A class for user-facing interaction via the OAuth 2.0 Authorization Code Flow with PKCE.
#### Methods
- `get_pkce_params()` - Get PKCE parameters from the server
- `authorize(scope, redirect_uri, pkce_params)` - Initiate the OAuth 2.0 flow
- `login(login, password, session_id)` - Authenticate the user
- `exchange_code_for_tokens(authorization_code, redirect_uri, pkce_params)` - Exchange the authorization code for tokens
- `refresh_access_token(refresh_token)` - Refresh the access token
- `get_current_user(access_token)` - Get the current user's info
- `logout(refresh_token)` - Log out
- `get_available_services()` - List the available microservices
- `full_auth_flow(login, password, scope, redirect_uri)` - Run the full authorization flow
### ServiceClient
A class for service-to-service interaction via the Client Credentials Grant.
#### Methods
- `get_access_token(scope)` - Get an access token
- `get_current_user(access_token)` - Get info about the current user/service
- `request_with_auth(method, endpoint, json_data, params, auto_refresh)` - Make an authorized request
## Exceptions
- `SSOClientError` - Base exception
- `AuthenticationError` - Authentication error (401)
- `AuthorizationError` - Authorization error (403)
- `APIError` - API error (4xx, 5xx)
- `TokenError` - Token handling error
## License
MIT
| text/markdown | Артем Костюченко | kostyuchenko.work@gmail.com | null | null | LICENSE | null | [
"License :: Other/Proprietary License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: 3.14"
] | [] | null | null | >=3.12 | [] | [] | [] | [
"aiohttp<4.0.0,>=3.13.2",
"pydantic<3.0.0,>=2.12.5",
"pytest>=8.0.0; extra == \"dev\"",
"pytest-asyncio>=0.24.0; extra == \"dev\"",
"python-dotenv>=1.0.0; extra == \"dev\""
] | [] | [] | [] | [] | poetry/2.3.1 CPython/3.13.11 Windows/10 | 2026-02-21T10:19:08.773398 | sso_zenna-0.1.1-py3-none-any.whl | 24,202 | 16/0f/59ff62986685641e5420f1487d3285b986b3245f2727c6faf737753bd56f/sso_zenna-0.1.1-py3-none-any.whl | py3 | bdist_wheel | null | false | c93732ce0c0f707b5be1d255946681ce | 828f829bda8a8557ec98ea28748082c9c74f9d5a72030e5cb866d4278010a213 | 160f59ff62986685641e5420f1487d3285b986b3245f2727c6faf737753bd56f | null | [
"LICENSE"
] | 243 |
2.4 | kevros | 0.2.2 | Python SDK for the Kevros A2A Governance Gateway — cryptographic action verification, hash-chained provenance, and compliance packaging for AI agents. | <div align="center">
# kevros
**Agentic identity trust for AI agents.**
Issue and verify trust certificates. Check peer reputation scores. Verify release tokens. Generate tamper-evident audit trails with post-quantum cryptographic signatures.
[](https://pypi.org/project/kevros/)
[](https://pypi.org/project/kevros/)
[](https://opensource.org/licenses/MIT)
[](https://governance.taskhawktech.com)
[](https://smithery.ai/server/kevros)
<!-- mcp-name: io.github.ndl-systems/kevros -->
</div>
---
```bash
pip install kevros
# Add to Claude Code as an MCP plugin (one command):
pip install kevros && claude mcp add kevros -- kevros-mcp
```
```python
from kevros_governance import GovernanceClient
client = GovernanceClient() # auto-provisions identity, no signup
# Verify an action, get a cryptographic release token
result = client.verify(
action_type="trade",
action_payload={"symbol": "AAPL", "shares": 100},
agent_id="my-agent",
)
print(result.decision) # ALLOW | CLAMP | DENY
print(result.release_token) # cryptographic proof of authorization
# Check another agent's trust score
peer = client.verify_peer("other-agent-id")
print(peer["trust_score"]) # reputation across the network
```
Your agent now has a verifiable identity. Every action is cryptographically signed, hash-chained, and independently verifiable by any peer.
---
## Why
Agents are talking to other agents now. Trading on behalf of users. Deploying infrastructure. Operating hardware. Calling APIs they've never seen before.
The question isn't "can my agent do this?" It's:
- **Can I trust this agent?** Does it have a verifiable identity and reputation?
- **Was this action authorized?** Is there a cryptographic release token proving it?
- **Can I verify what happened?** Is there a tamper-evident, hash-chained audit trail?
- **Will this hold up post-quantum?** Are the signatures PQC-resilient?
Kevros answers all four.
---
## What Kevros Gives Your Agent
### Identity and Trust
Every agent gets a verifiable identity on first use. No forms, no OAuth dance, no waiting for approval. Call the API and your agent exists in the trust network.
```python
client = GovernanceClient() # identity auto-provisioned
```
Other agents can look you up:
```python
# Any agent can check any other agent's reputation
peer = client.verify_peer("trading-bot-001")
print(peer["trust_score"]) # 0.0 - 1.0
print(peer["chain_length"]) # total verified actions
print(peer["last_active"]) # last attestation timestamp
```
### Release Tokens
Every `verify()` call returns a cryptographic release token — proof that the action was evaluated against policy and authorized. Peers can verify these tokens independently.
```python
result = client.verify(
action_type="trade",
action_payload={"symbol": "AAPL", "shares": 100, "price": 185.50},
policy_context={"max_values": {"shares": 500, "price": 200.0}},
agent_id="trading-bot-001",
)
# The release token is an HMAC signature over the decision
print(result.release_token) # verifiable by any peer
print(result.provenance_hash) # position in the hash chain
# Another agent can verify this token
verified = client.verify_peer_token(
release_token=result.release_token,
token_preimage=result.token_preimage, # provided alongside the release token
)
```
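Independent verification of an HMAC token works like the sketch below. The gateway's key handling and exact preimage layout are internal details; this only illustrates the constant-time comparison a verifier performs, using demo values:

```python
import hashlib
import hmac

def hmac_matches(key: bytes, preimage: bytes, token_hex: str) -> bool:
    """Recompute the HMAC over the preimage and compare in constant time."""
    expected = hmac.new(key, preimage, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, token_hex)

# Demo values -- the real key and preimage format are gateway internals
key = b"demo-key"
preimage = b'{"decision": "ALLOW", "agent_id": "trading-bot-001"}'
token = hmac.new(key, preimage, hashlib.sha256).hexdigest()

assert hmac_matches(key, preimage, token)
assert not hmac_matches(key, b"tampered-preimage", token)
```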
### Trust Certificates
Bundle your agent's entire provenance into an auditor-grade trust certificate. PQC-signed. Independently verifiable. No Kevros access required to validate.
```python
bundle = client.bundle(
agent_id="trading-bot-001",
time_range_start="2026-02-01T00:00:00Z",
time_range_end="2026-02-28T23:59:59Z",
)
print(bundle.chain_integrity) # True -- hash chain intact
print(bundle.record_count) # total attested actions
print(bundle.bundle_hash) # covers the entire evidence set
# PQC signatures included -- resilient against quantum attacks
print(bundle.pqc_signatures) # post-quantum attestation references
```
Hand this to an auditor, a regulator, a counterparty, or another agent. The math verifies itself.
### Hash-Chained Provenance
Every action your agent attests is SHA-256 hash-chained to every prior action. Tamper with one record and the entire chain breaks.
```python
proof = client.attest(
agent_id="trading-bot-001",
action_description="Bought 100 AAPL at $185.42",
action_payload={"symbol": "AAPL", "shares": 100, "price": 185.42},
)
print(proof.hash_prev) # links to previous record
print(proof.hash_curr) # this record's hash
print(proof.chain_length) # total records in the ledger
print(proof.pqc_block_ref) # PQC attestation reference
```
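The tamper-evidence property can be demonstrated locally in a few lines. The exact record serialization is a gateway internal; this sketch only shows the SHA-256 chaining idea:

```python
import hashlib
import json

def record_hash(prev_hash: str, payload: dict) -> str:
    """Chain a record to its predecessor by hashing prev_hash + payload."""
    body = prev_hash + json.dumps(payload, sort_keys=True)
    return hashlib.sha256(body.encode("utf-8")).hexdigest()

genesis = "0" * 64
records = [{"action": "buy", "shares": 100}, {"action": "sell", "shares": 50}]
chain, prev = [], genesis
for payload in records:
    prev = record_hash(prev, payload)
    chain.append(prev)

# Tampering with the first record invalidates its hash -- and everything after it
tampered = dict(records[0], shares=999)
assert record_hash(genesis, tampered) != chain[0]
```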
### Intent Binding
Cryptographically prove that a command was issued in service of a declared intent. Close the loop: intent -> command -> action -> outcome -> verification.
```python
from kevros_governance import IntentType
# Declare intent, bind to command
binding = client.bind(
agent_id="nav-agent-001",
intent_type=IntentType.NAVIGATION,
intent_description="Navigate to waypoint Alpha",
command_payload={"lat": 38.8977, "lon": -77.0365, "alt": 100},
goal_state={"lat": 38.8977, "lon": -77.0365},
)
# After execution: did the action achieve the intent?
outcome = client.verify_outcome(
agent_id="nav-agent-001",
intent_id=binding.intent_id,
binding_id=binding.binding_id,
actual_state={"lat": 38.8978, "lon": -77.0364},
tolerance=0.01,
)
print(outcome.status) # ACHIEVED | PARTIALLY_ACHIEVED | FAILED
print(outcome.achieved_percentage) # 99.2
```
---
## The Trust Primitives
| Primitive | What it does | Cost |
|-----------|-------------|------|
| `verify` | Pre-flight action check. Returns ALLOW/CLAMP/DENY + release token. | $0.01 |
| `attest` | Hash-chained provenance record with PQC block reference. | $0.02 |
| `bind` | Cryptographic intent-to-command binding. | $0.02 |
| `verify_outcome` | Did the action achieve the declared intent? | free |
| `bundle` | PQC-signed trust certificate. Auditor-grade. | $0.25 |
| `verify_peer` | Look up any agent's trust score and reputation. | free |
| `verify_peer_token` | Independently verify another agent's release token. | free |
| `health` | Gateway health + chain integrity check. | free |
---
## Zero-Friction Identity
```python
client = GovernanceClient()
```
No signup form. No API key dance. No email confirmation.
Call `GovernanceClient()` with zero arguments and your agent has an identity immediately. The SDK auto-provisions a key on first use and caches it at `~/.kevros/api_key`.
Resolution order:
1. `KEVROS_API_KEY` environment variable
2. `~/.kevros/api_key` cached key
3. Auto-signup (free tier, 100 calls/month)
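The resolution order can be sketched as follows (`auto_signup` here is a hypothetical stand-in for the SDK's free-tier provisioning call, not its actual API):

```python
import os
from pathlib import Path

def resolve_api_key(auto_signup, cache_path=Path.home() / ".kevros" / "api_key"):
    """Sketch of the three-step key resolution described above."""
    key = os.environ.get("KEVROS_API_KEY")  # 1. environment variable
    if key:
        return key
    if cache_path.exists():                 # 2. cached key on disk
        return cache_path.read_text().strip()
    return auto_signup()                    # 3. provision a fresh key

os.environ["KEVROS_API_KEY"] = "kvrs_demo"
assert resolve_api_key(lambda: "kvrs_new") == "kvrs_demo"
del os.environ["KEVROS_API_KEY"]
```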
---
## Framework Integrations
Kevros ships adapters for the frameworks your agents already use.
### Python SDK
```python
from kevros_governance import GovernanceClient
client = GovernanceClient()
result = client.verify(action_type="deploy", action_payload={"service": "api"}, agent_id="cd-bot")
peer = client.verify_peer("other-agent")
```
### LangChain
```python
from kevros_tools import get_governance_tools
tools = get_governance_tools(api_key="kvrs_...")
agent = initialize_agent(llm, tools, agent=AgentType.OPENAI_FUNCTIONS)
```
### CrewAI
```python
from crewai_tools import get_governance_tools
tools = get_governance_tools(api_key="kvrs_...")
agent = Agent(role="Trusted Trader", tools=tools, goal="Execute trades with verifiable identity")
```
### OpenAI Function Calling
```python
import json
with open("openai_tools.json") as f:
governance_tools = json.load(f)
response = openai.chat.completions.create(
model="gpt-4o",
messages=messages,
tools=governance_tools,
)
```
### MCP Server (Claude Code / Cursor / Windsurf)
```bash
# Recommended: pip install + CLI entry point
pip install kevros
claude mcp add kevros -- kevros-mcp
```
Or configure manually:
```json
{
"mcpServers": {
"kevros": {
"command": "kevros-mcp"
}
}
}
```
The MCP server exposes 9 tools (`governance_verify`, `governance_attest`, `governance_bind`, `governance_verify_outcome`, `governance_bundle`, `governance_check_peer`, `governance_verify_token`, `governance_health`, `kevros_status`), 2 resources (`kevros://agent-card`, `kevros://trust-status`), and 2 prompts (`verify-before-act`, `governance-audit`).
### curl
```bash
# Verify an action
curl -X POST https://governance.taskhawktech.com/governance/verify \
-H "X-API-Key: kvrs_..." \
-H "Content-Type: application/json" \
-d '{"action_type":"trade","action_payload":{"symbol":"AAPL","shares":100},"agent_id":"my-agent"}'
# Check a peer's trust score
curl https://governance.taskhawktech.com/governance/reputation/other-agent-id \
-H "X-API-Key: kvrs_..."
```
---
## Async
Every method has an async twin. Prefix with `a`.
```python
async with GovernanceClient() as client:
result = await client.averify(...)
proof = await client.aattest(...)
peer = await client.averify_peer("other-agent")
bundle = await client.abundle(agent_id="my-agent")
```
---
## Trust Gate and Middleware
Protect your own APIs by requiring Kevros trust credentials from callers.
### FastAPI Trust Gate
```python
from kevros_verifier import KevrosTrustGate
gate = KevrosTrustGate(gateway_url="https://governance.taskhawktech.com")
@app.get("/protected")
async def protected_endpoint(trust=Depends(gate)):
# trust.verified is True, trust.agent_id is the caller
return {"msg": f"Hello, {trust.agent_id}"}
```
### ASGI Middleware
```python
from kevros_verifier import KevrosTrustMiddleware
app.add_middleware(KevrosTrustMiddleware, protected_paths=["/api/"])
# All /api/* routes now require X-Kevros-Release-Token headers
```
---
## Pricing
Start free. Scale when you need to.
| Tier | Price | Calls/mo | Best for |
|------|-------|----------|----------|
| **Free** | $0 | 100 | Prototyping, evaluation |
| **Scout** | $29/mo | 5,000 | Single-agent production |
| **Sentinel** | $149/mo | 50,000 | Multi-agent fleets |
| **Sovereign** | $499/mo | 500,000 | Enterprise, regulated industries |
Every tier includes all trust primitives, PQC signatures, peer verification, async support, and the full hash-chained evidence ledger.
---
## How It Works
```
Agent A               Kevros Gateway                   Agent B
  |                          |                            |
  |-- verify(action) ------->|                            |
  |<-- ALLOW + release_token |                            |
  |                          |--- provenance record ----->|
  |    [execute action]      |    (hash-chained, PQC)     |
  |                          |                            |
  |-- attest(outcome) ------>|                            |
  |<-- hash_curr + chain ----|                            |
  |                          |                            |
  |                          |<-- verify_peer(A) ---------|
  |                          |--- trust_score: 0.94 ----->|
  |                          |                            |
  |-- bundle(time_range) --->|                            |
  |<-- trust certificate ----|                            |
  |   (PQC-signed, portable) |                            |
```
Every record is hash-chained (SHA-256). Trust certificates include PQC signatures for post-quantum resilience. The chain is independently verifiable — export your evidence and validate it anywhere.
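Because the chain uses plain SHA-256, it can be checked with nothing but the standard library. A minimal sketch of the idea (the record fields and chaining rule here are illustrative assumptions, not the gateway's actual wire format):

```python
import hashlib
import json

def chain_hash(prev_hash: str, record: dict) -> str:
    """Hash the previous link together with the record's canonical JSON."""
    payload = prev_hash + json.dumps(record, sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()

def verify_chain(records: list) -> bool:
    """Each record carries hash_prev and hash_curr; recompute and compare."""
    for rec in records:
        body = {k: v for k, v in rec.items() if k not in ("hash_prev", "hash_curr")}
        if chain_hash(rec["hash_prev"], body) != rec["hash_curr"]:
            return False
    return True
```

Tampering with any record body breaks its own hash, and tampering with a hash breaks the next link, which is what makes the exported ledger independently verifiable.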
---
## A2A Agent Card
The gateway publishes a standard [A2A Agent Card](https://governance.taskhawktech.com/.well-known/agent.json) for agent-to-agent discovery. Any A2A-compatible agent can discover and interact with the trust network programmatically.
---
## Links
- **Gateway**: [governance.taskhawktech.com](https://governance.taskhawktech.com)
- **OpenAPI Spec**: [governance.taskhawktech.com/openapi.json](https://governance.taskhawktech.com/openapi.json)
- **Agent Card**: [governance.taskhawktech.com/.well-known/agent.json](https://governance.taskhawktech.com/.well-known/agent.json)
- **PyPI**: [pypi.org/project/kevros](https://pypi.org/project/kevros/)
- **Source**: [github.com/ndl-systems/kevros-sdk](https://github.com/ndl-systems/kevros-sdk)
---
<div align="center">
**Built by [TaskHawk Systems](https://taskhawktech.com)**
Your agents act. Kevros proves it.
MIT License
</div>
| text/markdown | null | TaskHawk Systems <governance@taskhawktech.com> | null | null | MIT | a2a, agent-to-agent, ai-governance, ai-safety, audit-trail, compliance, langchain, mcp, openai, provenance | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Topic :: Security",
"Topic :: Software Development :: Libraries :: Python Modules"
] | [] | null | null | >=3.9 | [] | [] | [] | [
"httpx>=0.24.0",
"langchain-core>=0.2.0; extra == \"langchain\"",
"mcp>=1.0.0; extra == \"mcp\""
] | [] | [] | [] | [
"Homepage, https://governance.taskhawktech.com",
"Documentation, https://docs.taskhawktech.com",
"Repository, https://github.com/ndl-systems/kevros-sdk",
"Agent Card, https://governance.taskhawktech.com/.well-known/agent.json"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-21T10:18:55.457820 | kevros-0.2.2.tar.gz | 22,106 | 5d/47/e2705de97390f21d641805bc51b35de54b7196005d05c0c227d72b54e70f/kevros-0.2.2.tar.gz | source | sdist | null | false | 26b05eae84dc776c9e362d92bfd6bb42 | 555cf17fe0350a5b727dfa9ea125b11edc4598c4a5874ccd7e9a87cb7eb7da12 | 5d47e2705de97390f21d641805bc51b35de54b7196005d05c0c227d72b54e70f | null | [] | 244 |
2.4 | tbp-nightly | 2.22.6a20260221 | XProf Profiler Plugin | # XProf (+ Tensorboard Profiler Plugin)
XProf offers a number of tools to analyse and visualize the
performance of your model across multiple devices. Some of the tools include:
* **Overview**: A high-level overview of the performance of your model. This
is an aggregated overview for your host and all devices. It includes:
* Performance summary and breakdown of step times.
* A graph of individual step times.
* High level details of the run environment.
* **Trace Viewer**: Displays a timeline of the execution of your model that shows:
* The duration of each op.
* Which part of the system (host or device) executed an op.
* The communication between devices.
* **Memory Profile Viewer**: Monitors the memory usage of your model.
* **Graph Viewer**: A visualization of the graph structure of HLOs of your model.
To learn more about the various XProf tools, check out the [XProf documentation](https://openxla.org/xprof)
## Demo
First-time user? Check out this [Colab Demo](https://docs.jaxstack.ai/en/latest/JAX_for_LLM_pretraining.html).
## Quick Start
### Prerequisites
* xprof >= 2.20.0
* (optional) TensorBoard >= 2.20.0
Note: XProf requires access to the Internet to load the [Google Chart library](https://developers.google.com/chart/interactive/docs/basic_load_libs#basic-library-loading).
Some charts and tables may be missing if you run XProf entirely offline on
your local machine, behind a corporate firewall, or in a datacenter.
If you use Google Cloud to run your workloads, we recommend the
[xprofiler tool](https://github.com/AI-Hypercomputer/cloud-diagnostics-xprof).
It provides a streamlined profile collection and viewing experience using VMs
running XProf.
### Installation
To get the most recent release version of XProf, install it via pip:
```
$ pip install xprof
```
## Running XProf
XProf can be launched as a standalone server or used as a plugin within
TensorBoard. For large-scale use, it can be deployed in a distributed mode with
separate aggregator and worker instances (see [Distributed Profiling](#distributed-profiling) below).
### Command-Line Arguments
When launching XProf from the command line, you can use the following arguments:
* **`logdir`** (optional): The directory containing XProf profile data (files
ending in `.xplane.pb`). This can be provided as a positional argument or
with `-l` or `--logdir`. If provided, XProf will load and display profiles
from this directory. If omitted, XProf will start without loading any
profiles, and you can dynamically load profiles using `session_path` or
`run_path` URL parameters, as described in the [Log Directory
Structure](#log-directory-structure) section.
* **`-p <port>`**, **`--port <port>`**: The port for the XProf web server.
Defaults to `8791`.
* **`-gp <grpc_port>`**, **`--grpc_port <grpc_port>`**: The port for the gRPC
server used for distributed processing. Defaults to `50051`. This must be
different from `--port`.
* **`-wsa <addresses>`**, **`--worker_service_address <addresses>`**: A
comma-separated list of worker addresses (e.g., `host1:50051,host2:50051`)
for distributed processing. Defaults to `0.0.0.0:<grpc_port>`.
* **`-hcpb`**, **`--hide_capture_profile_button`**: If set, hides the 'Capture
Profile' button in the UI.
### Standalone
If you have profile data in a directory (e.g., `profiler/demo`), you can view it
by running:
```
$ xprof profiler/demo --port=6006
```
Or with the optional flag:
```
$ xprof --logdir=profiler/demo --port=6006
```
### With TensorBoard
If you have TensorBoard installed, you can run:
```
$ tensorboard --logdir=profiler/demo
```
If you are behind a corporate firewall, you may need to include the `--bind_all`
TensorBoard flag.
Go to `localhost:6006/#profile` in your browser; you should now see the demo
overview page.
Congratulations! You're now ready to capture a profile.
### Log Directory Structure
When using XProf, profile data must be placed in a specific directory structure.
XProf expects `.xplane.pb` files to be in the following path:
```
<log_dir>/plugins/profile/<session_name>/
```
* `<log_dir>`: This is the root directory that you supply to `tensorboard
--logdir`.
* `plugins/profile/`: This is a required subdirectory.
* `<session_name>/`: Each subdirectory inside `plugins/profile/` represents a
single profiling session. The name of this directory will appear in the
TensorBoard UI dropdown to select the session.
**Example:**
If your log directory is structured like this:
```
/path/to/your/log_dir/
└── plugins/
    └── profile/
        ├── my_experiment_run_1/
        │   └── host0.xplane.pb
        └── benchmark_20251107/
            └── host1.xplane.pb
```
You would launch TensorBoard with:
```bash
tensorboard --logdir /path/to/your/log_dir/
```
The runs `my_experiment_run_1` and `benchmark_20251107` will be available in the
"Sessions" tab of the UI.
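Tooling that writes profiles can create this layout with a few lines of Python (the paths and session name below are illustrative):

```python
from pathlib import Path

def session_dir(log_dir: str, session_name: str) -> Path:
    """Return <log_dir>/plugins/profile/<session_name>/, creating it if needed."""
    d = Path(log_dir) / "plugins" / "profile" / session_name
    d.mkdir(parents=True, exist_ok=True)
    return d

# e.g. write host profiles into session_dir("/tmp/logs", "my_experiment_run_1")
```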
You can also dynamically load sessions from a GCS bucket or local filesystem by
passing URL parameters when loading XProf in your browser. This method works
whether or not you provided a `logdir` at startup and is useful for viewing
profiles from various locations without restarting XProf.
For example, if you start XProf with no log directory:
```bash
xprof
```
You can load sessions using the following URL parameters.
Assume you have profile data stored on GCS or locally, structured like this:
```
gs://your-bucket/profile_runs/
├── my_experiment_run_1/
│   ├── host0.xplane.pb
│   └── host1.xplane.pb
└── benchmark_20251107/
    └── host0.xplane.pb
```
There are two URL parameters you can use:
* **`session_path`**: Use this to load a *single* session directly. The path
should point to a directory containing `.xplane.pb` files for one session.
* GCS Example:
`http://localhost:8791/?session_path=gs://your-bucket/profile_runs/my_experiment_run_1`
* Local Path Example:
`http://localhost:8791/?session_path=/path/to/profile_runs/my_experiment_run_1`
* Result: XProf will load the `my_experiment_run_1`
session, and you will see its data in the UI.
* **`run_path`**: Use this to point to a directory that contains *multiple*
session directories.
* GCS Example:
`http://localhost:8791/?run_path=gs://your-bucket/profile_runs/`
* Local Path Example:
`http://localhost:8791/?run_path=/path/to/profile_runs/`
* Result: XProf will list all session directories found under `run_path`
(i.e., `my_experiment_run_1` and `benchmark_20251107`) in the "Sessions"
dropdown in the UI, allowing you to switch between them.
**Loading Precedence**
If multiple sources are provided, XProf uses the following order of precedence
to determine which profiles to load:
1. **`session_path`** URL parameter
2. **`run_path`** URL parameter
3. **`logdir`** command-line argument
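The precedence above amounts to a first-non-empty choice; a sketch of the selection logic (illustrative only, not XProf's actual implementation):

```python
def pick_profile_source(session_path=None, run_path=None, logdir=None):
    """Return (kind, path) for the highest-precedence source provided."""
    if session_path:
        return ("session", session_path)  # single session, loaded directly
    if run_path:
        return ("run", run_path)          # directory of session directories
    if logdir:
        return ("logdir", logdir)         # command-line fallback
    return (None, None)                   # start with no profiles loaded
```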
### Distributed Profiling
XProf supports distributed profile processing by using an aggregator that
distributes work to multiple XProf workers. This is useful for processing large
profiles or handling multiple users.
**Note**: Currently, distributed processing only benefits the following tools:
`overview_page`, `framework_op_stats`, `input_pipeline`, and `pod_viewer`.
**Note**: The ports used in these examples (`6006` for the aggregator HTTP
server, `9999` for the worker HTTP server, and `50051` for the worker gRPC
server) are suggestions and can be customized.
**Worker Node**
Each worker node should run XProf with a gRPC port exposed so it can receive
processing requests. You should also hide the capture button as workers are not
meant to be interacted with directly.
```
$ xprof --grpc_port=50051 --port=9999 --hide_capture_profile_button
```
**Aggregator Node**
The aggregator node runs XProf with the `--worker_service_address` flag pointing
to all available workers. Users will interact with aggregator node's UI.
```
$ xprof --worker_service_address=<worker1_ip>:50051,<worker2_ip>:50051 --port=6006 --logdir=profiler/demo
```
Replace `<worker1_ip>` and `<worker2_ip>` with the addresses of your worker machines.
Requests sent to the aggregator on port 6006 will be distributed among the
workers for processing.
For deploying a distributed XProf setup in a Kubernetes environment, see
[Kubernetes Deployment Guide](docs/kubernetes_deployment.md).
## Nightlies
Every night, a nightly version of the package is released under the name of
`xprof-nightly`. This package contains the latest changes made by the XProf
developers.
To install the nightly version of profiler:
```
$ pip uninstall xprof tensorboard-plugin-profile
$ pip install xprof-nightly
```
## Next Steps
* [JAX Profiling Guide](https://jax.readthedocs.io/en/latest/profiling.html#xprof-tensorboard-profiling)
* [PyTorch/XLA Profiling Guide](https://cloud.google.com/tpu/docs/pytorch-xla-performance-profiling-tpu-vm)
* [TensorFlow Profiling Guide](https://tensorflow.org/guide/profiler)
* [Cloud TPU Profiling Guide](https://cloud.google.com/tpu/docs/cloud-tpu-tools)
* [Colab Tutorial](https://www.tensorflow.org/tensorboard/tensorboard_profiling_keras)
| text/markdown | Google Inc. | packages@tensorflow.org | null | null | Apache 2.0 | jax pytorch xla tensorflow tensorboard xprof-nightly profile plugin | [
"Intended Audience :: Developers",
"Intended Audience :: Education",
"Intended Audience :: Science/Research",
"License :: OSI Approved :: Apache Software License",
"Programming Language :: Python :: 3",
"Topic :: Scientific/Engineering :: Mathematics",
"Topic :: Software Development :: Libraries :: Python Modules",
"Topic :: Software Development :: Libraries"
] | [] | https://github.com/openxla/xprof-nightly | null | !=3.0.*,!=3.1.*,>=2.7 | [] | [] | [] | [
"xprof-nightly==2.22.6a20260221"
] | [] | [] | [] | [] | twine/6.1.0 CPython/3.12.12 | 2026-02-21T10:18:42.251525 | tbp_nightly-2.22.6a20260221-cp310-none-any.whl | 5,760 | e0/10/52e8370cf75ccd85c5448cdb5f95ff712ee66ceb19436e585a35e56da518/tbp_nightly-2.22.6a20260221-cp310-none-any.whl | cp310 | bdist_wheel | null | false | b736abc8400d765a449e582d67a1172f | 1f42d77f45ac1aa59b52d0f4eb9014935ae1c64948ba3cf8f1ce6cd84fe23ffe | e01052e8370cf75ccd85c5448cdb5f95ff712ee66ceb19436e585a35e56da518 | null | [] | 194 |
2.4 | toolssecret | 0.1.2 | Tiny helper to fetch Google Secret Manager secrets with optional service account keyfile support. | # toolssecret
Tiny helper to fetch secrets from **Google Secret Manager**, with optional support for:
- **ADC (Application Default Credentials)** _(default)_
- A **service account JSON keyfile** (`path_keyfile`)
- A **service account info dict** (`keyfile_json`) — useful for CI/CD where you inject JSON via env/secret manager
- A **pre-resolved credentials object** (`credentials`) — reuse credentials from e.g. `toolsbq`
- A **pre-built SM client** (`client`) — fastest option for repeated calls
It's designed so you can simply:
```python
from toolssecret import get_secret
```
## Install
### From pypi
```text
pip install toolssecret
```
## Usage
### 1) Using ADC (Application Default Credentials)
```python
from toolssecret import get_secret
value = get_secret(secret_name="api_key_test", project_id="myproject")
print(value)
```
### 2) Using a service account keyfile
```python
from toolssecret import get_secret
value = get_secret(
    secret_name="api_key_test",
    project_id="myproject",
    path_keyfile="~/.config/gcloud/sa-keys/myserviceaccount.json",
)
print(value)
```
Notes:
- `path_keyfile` supports `~` and environment variable expansion like `$HOME/...` (expanded by Python).
### 3) Using a service account info dict
This is useful when you keep the service account JSON in an environment variable or secret.
```python
import json
import os
from toolssecret import get_secret
sa_info = json.loads(os.environ["GCP_SA_JSON"])
value = get_secret(
    secret_name="api_key_test",
    project_id="myproject",
    keyfile_json=sa_info,
)
print(value)
```
### 4) Reusing credentials from toolsbq
If you already have a BigQuery client from `toolsbq`, you can reuse its credentials to avoid re-reading the SA file:
```python
from toolsbq import bq_get_client
from toolssecret import get_secret
bq = bq_get_client(path_keyfile="~/key.json")
value = get_secret("my-secret", credentials=bq._credentials)
```
### 5) Pre-building an SM client for repeated calls
For hot loops or many secrets, pre-build the client once to avoid repeated gRPC channel setup:
```python
from toolssecret import sm_get_client, get_secret
client = sm_get_client(path_keyfile="~/key.json")
s1 = get_secret("secret-a", client=client)
s2 = get_secret("secret-b", client=client)
s3 = get_secret("secret-c", client=client)
```
### Project ID detection
If you omit `project_id`, `toolssecret` will try to detect it in this order:
1. `GOOGLE_CLOUD_PROJECT`, `GCLOUD_PROJECT`, `GCP_PROJECT` env vars
2. `keyfile_json["project_id"]` (if provided)
3. `path_keyfile`'s embedded project_id (if provided)
4. `GOOGLE_APPLICATION_CREDENTIALS` file's project_id
5. ADC project detection
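The lookup order reduces to a first-hit scan; a simplified sketch of steps 1, 2, and 4 (the real package also reads the keyfile at `path_keyfile` and falls back to ADC detection):

```python
import json
import os

def detect_project_id(keyfile_json=None):
    """Mirror the documented lookup order, stopping at the first hit."""
    # 1. Well-known environment variables
    for var in ("GOOGLE_CLOUD_PROJECT", "GCLOUD_PROJECT", "GCP_PROJECT"):
        if os.environ.get(var):
            return os.environ[var]
    # 2. Explicit service-account info dict
    if keyfile_json and keyfile_json.get("project_id"):
        return keyfile_json["project_id"]
    # 4. GOOGLE_APPLICATION_CREDENTIALS file
    adc = os.environ.get("GOOGLE_APPLICATION_CREDENTIALS")
    if adc and os.path.exists(adc):
        with open(adc) as f:
            return json.load(f).get("project_id")
    return None
```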
## API
### `sm_get_client`
```python
sm_get_client(
    *,
    path_keyfile: str | None = None,
    keyfile_json: dict | None = None,
    credentials: Any | None = None,
) -> SecretManagerServiceClient
```
Build a reusable Secret Manager client. Credential priority:
1. `credentials` — pre-resolved credentials object (e.g. from `bq_client._credentials`)
2. `keyfile_json` / `path_keyfile` — explicit SA credentials
3. RAM-ADC / env / ADC fallback
### `get_secret`
```python
get_secret(
    secret_name: str,
    *,
    project_id: str | None = None,
    version_id: str = "latest",
    client: SecretManagerServiceClient | None = None,
    credentials: Any | None = None,
    path_keyfile: str | None = None,
    keyfile_json: dict | None = None,
) -> str
```
Credential priority:
1. `client` — pre-built SM client (fastest for repeated calls)
2. `credentials` — pre-resolved credentials object
3. `keyfile_json` — SA key as dict
4. `path_keyfile` — path to SA JSON file
5. RAM-ADC / env / ADC fallback
Secrets are cached **in-memory per process** (cache key includes project, secret name, version, and credential fingerprint).
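A per-process cache of that shape is just a dict keyed on the fields listed above; a conceptual sketch (the credential-fingerprint detail is an assumption about the cache key, not the package's exact internals):

```python
_cache = {}

def cached_secret(project_id, secret_name, version_id, cred_fingerprint, fetch):
    """Memoize fetch() per (project, secret, version, credential) tuple."""
    key = (project_id, secret_name, version_id, cred_fingerprint)
    if key not in _cache:
        _cache[key] = fetch()  # only hit Secret Manager on a cache miss
    return _cache[key]
```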
## Security notes
- Avoid committing service account keyfiles to git.
- Prefer `keyfile_json` sourced from a secure secret store (CI secrets, vault, etc.).
- `toolssecret` does not log secret values.
| text/markdown | MH | null | null | null | null | null | [] | [] | null | null | >=3.9 | [] | [] | [] | [
"google-cloud-secret-manager>=2.16.0",
"google-auth>=2.0.0"
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.14.2 | 2026-02-21T10:18:30.861758 | toolssecret-0.1.2.tar.gz | 6,555 | e3/ab/a0eaeb4d7aa37431e856983474dc1326645d5c354662aea7e6c4190658dd/toolssecret-0.1.2.tar.gz | source | sdist | null | false | 69a6a86f921319f0e1870e7e781dc205 | 2c1c0b616f882a90748b17b73aa4fe870e850851da4f4b236a5d3cde57f1f14c | e3aba0eaeb4d7aa37431e856983474dc1326645d5c354662aea7e6c4190658dd | MIT | [
"LICENSE"
] | 242 |
2.4 | rajkumar-ml-contracts | 0.1.0 | Design-by-Contract for ML pipelines: fail-fast data/features/models | # ml-contracts
[](https://pypi.org/project/ml-contracts/)
Design-by-Contract for ML: Enforce data/features/models pre-deployment.
## Install
```bash
pip install ml-contracts
```
## Quickstart
```python
from ml_contracts import DataContract
contract = DataContract(
    name="input-data",
    schema={"age": int},
    ranges={"age": (18, 75)},
    distribution={"age": "norm"}
)
```
Fits production pipelines: lightweight, with no extra infrastructure required.
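Conceptually, the `ranges` clause reduces to a per-row bounds check; a plain-Python sketch of the idea (not the library's API):

```python
def check_ranges(rows, ranges):
    """Return a list of (row_index, column, value) range violations."""
    violations = []
    for i, row in enumerate(rows):
        for col, (lo, hi) in ranges.items():
            if not (lo <= row[col] <= hi):
                violations.append((i, col, row[col]))
    return violations
```

A contract fails fast when this list is non-empty, surfacing bad data before it reaches training or serving.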
| text/markdown | null | Rajkumar BR <rajkumar@example.com> | null | null | MIT | contracts, ml, pipeline, validation | [] | [] | null | null | >=3.9 | [] | [] | [] | [
"numpy",
"pandas>=1.5",
"scipy>=1.10"
] | [] | [] | [] | [
"Homepage, https://github.com/RajkumarBR9789/ml-contracts",
"Repository, https://github.com/RajkumarBR9789/ml-contracts"
] | twine/6.2.0 CPython/3.12.10 | 2026-02-21T10:17:53.292137 | rajkumar_ml_contracts-0.1.0.tar.gz | 3,603 | e9/18/685df0c47d895317fb9fa0336c19653dedd7b41546ce86d18b1599758f73/rajkumar_ml_contracts-0.1.0.tar.gz | source | sdist | null | false | 128481335b2d5bf885f1ed810212e31a | 2116cc1516709faf7a336a62d018df049c921891644fee6ba78c22c790add40b | e918685df0c47d895317fb9fa0336c19653dedd7b41546ce86d18b1599758f73 | null | [
"LICENSE"
] | 265 |
2.2 | vegamdb | 0.1.1 | A high-performance vector database written in C++ with Python bindings | # VegamDB
A high-performance vector database written in C++ with Python bindings. VegamDB provides fast nearest neighbor search with pluggable index types, zero-copy NumPy integration, and built-in persistence.
## Features
- **Multiple Index Types** -- Flat (exact brute-force), IVF (inverted file with K-Means), and Annoy (random projection trees)
- **C++ Core** -- All indexing and search logic runs in optimized C++17 with `-O3` and `-march=native`
- **Zero-Copy NumPy** -- Vectors pass directly from NumPy arrays to C++ via pointer, with no intermediate copies
- **Persistence** -- Save and load the entire database (vectors + index) to a single binary file
- **Pluggable Architecture** -- Switch index types at runtime without changing application code
- **Type-Safe Python API** -- Full type stubs (`.pyi`) for IDE autocomplete and static analysis
## Installation
### From PyPI
```bash
pip install vegamdb
```
> **Note:** Since VegamDB ships as a source distribution, `pip install` will compile the C++ code on your machine. You'll need:
> - A **C++17 compiler** (GCC 7+, Clang 5+, MSVC 2017+)
> - **CMake** >= 3.15
> - **Python** >= 3.8
### From Source
```bash
git clone https://github.com/LuciAkirami/vegamdb.git
cd vegamdb
pip install .
```
### Development Install
For development, use an editable install so changes to Python files take effect immediately:
```bash
pip install scikit-build-core pybind11 numpy
pip install -e . --no-build-isolation
```
## Quick Start
```python
import numpy as np
from vegamdb import VegamDB
# Create a database
db = VegamDB()
# Add vectors (batch — pass a 2D NumPy array)
data = np.random.random((10000, 128)).astype(np.float32)
db.add_vector_numpy(data)
# Search (defaults to exact flat search)
query = np.random.random(128).astype(np.float32)
results = db.search(query, k=5)
print(results.ids) # [4823, 1092, 7744, 331, 5619]
print(results.distances) # [4.12, 4.15, 4.18, 4.21, 4.23]
```
## Index Types
VegamDB supports three index types, each offering a different trade-off between speed and accuracy.
### Flat Index (Default)
Exact brute-force search. Computes the Euclidean distance between the query and every stored vector. Always returns the true nearest neighbors.
```python
db.use_flat_index()
results = db.search(query, k=10)
```
| Metric | Value |
| ------------ | ------------ |
| Accuracy | 100% |
| Build Time | None |
| Best For | Small datasets (< 50K vectors), ground truth validation |
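For reference, exact flat search is short enough to write directly in NumPy, which makes it a handy ground-truth check when tuning the approximate indexes (a standalone sketch, independent of VegamDB's C++ implementation):

```python
import numpy as np

def flat_search(data: np.ndarray, query: np.ndarray, k: int):
    """Exact k-NN: Euclidean distance from the query to every stored vector."""
    dists = np.linalg.norm(data - query, axis=1)
    ids = np.argsort(dists)[:k]
    return ids, dists[ids]
```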
### IVF Index (Inverted File)
Partitions vectors into clusters using K-Means. At query time, only the closest clusters are searched, trading some accuracy for a large speedup.
```python
db.use_ivf_index(n_clusters=100, max_iters=20, n_probe=1)
db.build_index()
# Search with custom probe count
from vegamdb import IVFSearchParams
params = IVFSearchParams()
params.n_probe = 10 # Search 10 of 100 clusters
results = db.search(query, k=10, params=params)
```
| Parameter | Description | Default |
| ------------- | ------------------------------------------------ | ------- |
| `n_clusters` | Number of Voronoi cells (partitions) | -- |
| `max_iters` | Maximum K-Means training iterations | 50 |
| `n_probe` | Clusters to search at query time | 1 |
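The IVF trade-off can be illustrated independently of VegamDB in a few lines of NumPy: given trained centroids and cluster assignments, search only the vectors in the `n_probe` cells nearest the query (a conceptual sketch, not VegamDB's implementation):

```python
import numpy as np

def ivf_search(data, centroids, assignments, query, k, n_probe=1):
    """Approximate k-NN over only the n_probe closest clusters."""
    cdists = np.linalg.norm(centroids - query, axis=1)
    probe = np.argsort(cdists)[:n_probe]          # clusters to visit
    cand_ids = np.flatnonzero(np.isin(assignments, probe))
    dists = np.linalg.norm(data[cand_ids] - query, axis=1)
    order = np.argsort(dists)[:k]
    return cand_ids[order], dists[order]
```

Raising `n_probe` widens the candidate set, which is exactly the recall-vs-latency knob `IVFSearchParams.n_probe` exposes.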
### Annoy Index (Approximate Nearest Neighbors)
Builds a forest of random projection trees. Each tree recursively splits the vector space with random hyperplanes. At query time, multiple trees are traversed to collect candidate neighbors.
```python
db.use_annoy_index(num_trees=10, k_leaf=50)
db.build_index()
results = db.search(query, k=10)
```
| Parameter | Description | Default |
| ---------------- | ---------------------------------------------- | ------- |
| `num_trees` | Number of random projection trees | -- |
| `k_leaf` | Maximum points per leaf node | -- |
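The core trick behind each tree, a random hyperplane split, fits in a few lines of NumPy (conceptual only; the real index recurses until leaves hold at most `k_leaf` points):

```python
import numpy as np

def hyperplane_split(data: np.ndarray, rng: np.random.Generator):
    """Partition points by the sign of their projection onto a random direction."""
    normal = rng.standard_normal(data.shape[1])
    offset = data.mean(axis=0) @ normal  # plane passes through the centroid
    side = (data @ normal - offset) > 0
    return np.flatnonzero(side), np.flatnonzero(~side)
```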
### Choosing an Index
| Use Case | Recommended Index | Why |
| ---------------------------- | ----------------- | ------------------------------------- |
| Small dataset (< 50K) | Flat | Exact results, no training overhead |
| Medium dataset (50K - 1M) | IVF | Good speed/accuracy with tunable probe|
| Large dataset (1M+) | Annoy | Fast tree traversal, low memory |
| Ground truth / benchmarking | Flat | Guaranteed correct results |
## Persistence
Save and load the entire database state, including vectors and the trained index:
```python
# Save
db.save("my_database.bin")
# Load into a fresh instance
db2 = VegamDB()
db2.load("my_database.bin")
assert db2.size() == db.size()
```
The index type and its trained state are serialized automatically. After loading, the index is ready to search without rebuilding.
## API Reference
### VegamDB
| Method | Description |
| ---------------------- | ----------------------------------------------------------------- |
| `VegamDB()` | Create a new empty database instance |
| `add_vector(vec)` | Add a vector from a Python list of floats |
| `add_vector_numpy(arr)`| Add vectors from a 1D `(dim,)` or 2D `(n, dim)` NumPy array |
| `size()` | Return the number of stored vectors |
| `dimension()` | Return the dimensionality of stored vectors (0 if empty) |
| `use_flat_index()` | Set index to brute-force flat search |
| `use_ivf_index(...)` | Set index to IVF with specified cluster configuration |
| `use_annoy_index(...)` | Set index to Annoy with specified tree configuration |
| `build_index()` | Explicitly build/train the current index |
| `search(query, k, params=None)` | Search for k nearest neighbors, returns `SearchResults` |
| `save(filename)` | Save database and index to a binary file |
| `load(filename)` | Load database and index from a binary file |
### SearchResults
| Attribute | Type | Description |
| ------------ | ------------- | ------------------------------------------------ |
| `ids` | `list[int]` | Indices of nearest neighbors (insertion order) |
| `distances` | `list[float]` | Euclidean distances to the query vector |
### Search Parameters
**IVFSearchParams** -- Override the default probe count for IVF search:
- `n_probe` (int): Number of clusters to search. Higher values improve recall at the cost of latency.
## Architecture
```
              VegamDB (Orchestrator)
                /              \
        VectorStore          IndexBase
    (raw float vectors)  (search strategy)
                          /     |     \
                       Flat    IVF    Annoy
                     (exact) (K-Means) (trees)
```
- **VegamDB** -- Main entry point. Manages the vector store and delegates search to the active index.
- **VectorStore** -- Stores raw vectors in a `vector<vector<float>>`. Handles serialization.
- **IndexBase** -- Abstract interface that all index types implement (`build`, `search`, `save`, `load`).
- **FlatIndex** -- Iterates over all vectors, computing Euclidean distance. O(n) per query.
- **IVFIndex** -- Trains K-Means centroids, assigns vectors to clusters, searches only nearby clusters.
- **AnnoyIndex** -- Builds a forest of binary trees using random hyperplane splits for fast traversal.
## Project Structure
```
vegamdb/
├── include/              # C++ headers
│   ├── VegamDB.hpp
│   ├── indexes/          # IndexBase, FlatIndex, IVFIndex, AnnoyIndex, KMeans
│   ├── storage/          # VectorStore
│   └── utils/            # Math utilities (Euclidean distance, dot product)
├── src/                  # C++ implementation
│   ├── VegamDB.cpp
│   ├── bindings.cpp      # pybind11 Python bindings
│   ├── indexes/
│   ├── storage/
│   └── utils/
├── vegamdb/              # Python package
│   ├── __init__.py       # Public API re-exports
│   └── _vegamdb.pyi      # Type stubs for IDE support
├── benchmarks/           # Performance benchmarks
├── CMakeLists.txt        # C++ build configuration
└── pyproject.toml        # Python packaging (scikit-build-core)
```
## Benchmarks
Run the included benchmarks to evaluate performance on your hardware:
```bash
# Stress test (Flat index, varying dataset sizes)
python benchmarks/stress_test.py
# IVF benchmark (accuracy vs speed trade-off across probe counts)
python benchmarks/ivf_benchmarks.py
# Annoy benchmark (accuracy vs speed trade-off across tree counts)
python benchmarks/annoy_benchmark.py
```
## License
MIT
| text/markdown | Naredla Ajay Kumar Reddy | null | null | null | MIT | null | [] | [] | null | null | >=3.8 | [] | [] | [] | [
"numpy"
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.10.12 | 2026-02-21T10:17:47.877106 | vegamdb-0.1.1.tar.gz | 25,114 | 37/4e/d33d3b3505b7b30246229ab55ea6cbde0dd33ffa222147a6fb63b91366cc/vegamdb-0.1.1.tar.gz | source | sdist | null | false | c1b1cde09a21d45e946b022968b6caa8 | 7a5628335a536edcbedbde546cd44c879228579e6becd9bb897376c64c7a17d7 | 374ed33d3b3505b7b30246229ab55ea6cbde0dd33ffa222147a6fb63b91366cc | null | [] | 198 |
2.4 | faran-visualizer | 0.2.3 | Interactive visualization tool for faran trajectory planning simulations | # faran-visualizer
Interactive visualization tool for [faran](https://gitlab.com/risk-metrics/faran) trajectory planning simulations.
## Installation
```bash
pip install faran-visualizer
```
Or with uv:
```bash
uv add faran-visualizer
```
## Requirements
- **Python 3.13+**
- **Node.js 18+** (required at runtime for generating HTML visualizations)
## Usage
### Basic Usage
```python
from faran_visualizer import visualizer, MpccSimulationResult
# Create a visualizer instance
mpcc_viz = visualizer.mpcc()
# After running your simulation, create a result object
result = MpccSimulationResult(
    reference=trajectory,
    states=states,
    optimal_trajectories=optimal_trajectories,
    nominal_trajectories=nominal_trajectories,
    contouring_errors=contouring_errors,
    lag_errors=lag_errors,
    wheelbase=wheelbase,
    max_contouring_error=max_contouring_error,
    max_lag_error=max_lag_error,
)
# Generate visualization (await requires an async context)
await mpcc_viz(result, key="my-simulation")
```
## Output
Visualizations are saved as:
- `<key>.json` - Raw simulation data
- `<key>.html` - Interactive HTML visualization with Plotly
## Development
The bundled CLI (`visualizer/faran_visualizer/assets/cli.js`) is included in the package distribution. See `visualizer/core/README.md` for build instructions.
To keep local modifications to the CLI bundle from being tracked by Git, run:
```bash
git update-index --skip-worktree visualizer/faran_visualizer/assets/cli.js
```
| text/markdown | null | Zurab Mujirishvili <zurab.mujirishvili@fau.de> | null | null | null | robotics, simulation, trajectory planning, visualization | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Science/Research",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.13",
"Topic :: Scientific/Engineering :: Visualization",
"Typing :: Typed"
] | [] | null | null | >=3.13 | [] | [] | [] | [
"aiofiles>=25.1.0",
"aiopath>=0.7.7",
"faran",
"msgspec>=0.20.0",
"numpy>=2.4.2",
"numtypes>=0.5.1"
] | [] | [] | [] | [] | uv/0.9.18 {"installer":{"name":"uv","version":"0.9.18","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Debian GNU/Linux","version":"12","id":"bookworm","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true} | 2026-02-21T10:16:20.418307 | faran_visualizer-0.2.3.tar.gz | 1,871,938 | 59/6f/3963f0a2619861a3f0e9fee6717195c98119f23cd3f4f2da33cdcb318ce9/faran_visualizer-0.2.3.tar.gz | source | sdist | null | false | 3b133997fad75a2e5d842de758a7db83 | 640a6bd7095334736b1f6c76945474033f7b6fb7af9c22a5eb90f66d7fd00d6b | 596f3963f0a2619861a3f0e9fee6717195c98119f23cd3f4f2da33cdcb318ce9 | null | [] | 260 |
2.3 | aiinbx | 0.470.0 | The official Python library for the AIInbx API | # AI Inbx Python API library
<!-- prettier-ignore -->
[)](https://pypi.org/project/aiinbx/)
The AI Inbx Python library provides convenient access to the AI Inbx REST API from any Python 3.9+
application. The library includes type definitions for all request params and response fields,
and offers both synchronous and asynchronous clients powered by [httpx](https://github.com/encode/httpx).
It is generated with [Stainless](https://www.stainless.com/).
## MCP Server
Use the AI Inbx MCP Server to enable AI assistants to interact with this API, allowing them to explore endpoints, make test requests, and use documentation to help integrate this SDK into your application.
[](https://cursor.com/en-US/install-mcp?name=aiinbx-mcp&config=eyJjb21tYW5kIjoibnB4IiwiYXJncyI6WyIteSIsImFpaW5ieC1tY3AiXSwiZW52Ijp7IkFJX0lOQlhfQVBJX0tFWSI6Ik15IEFQSSBLZXkifX0)
[](https://vscode.stainless.com/mcp/%7B%22name%22%3A%22aiinbx-mcp%22%2C%22command%22%3A%22npx%22%2C%22args%22%3A%5B%22-y%22%2C%22aiinbx-mcp%22%5D%2C%22env%22%3A%7B%22AI_INBX_API_KEY%22%3A%22My%20API%20Key%22%7D%7D)
> Note: You may need to set environment variables in your MCP client.
## Documentation
The full API of this library can be found in [api.md](https://github.com/aiinbx/aiinbx-py/tree/main/api.md).
## Installation
```sh
# install from PyPI
pip install aiinbx
```
## Usage
The full API of this library can be found in [api.md](https://github.com/aiinbx/aiinbx-py/tree/main/api.md).
```python
import os
from aiinbx import AIInbx
client = AIInbx(
api_key=os.environ.get("AI_INBX_API_KEY"), # This is the default and can be omitted
)
response = client.threads.search()
print(response.pagination)
```
While you can provide an `api_key` keyword argument,
we recommend using [python-dotenv](https://pypi.org/project/python-dotenv/)
to add `AI_INBX_API_KEY="My API Key"` to your `.env` file
so that your API Key is not stored in source control.
## Async usage
Simply import `AsyncAIInbx` instead of `AIInbx` and use `await` with each API call:
```python
import os
import asyncio
from aiinbx import AsyncAIInbx
client = AsyncAIInbx(
api_key=os.environ.get("AI_INBX_API_KEY"), # This is the default and can be omitted
)
async def main() -> None:
response = await client.threads.search()
print(response.pagination)
asyncio.run(main())
```
Functionality between the synchronous and asynchronous clients is otherwise identical.
### With aiohttp
By default, the async client uses `httpx` for HTTP requests. However, for improved concurrency performance you may also use `aiohttp` as the HTTP backend.
You can enable this by installing `aiohttp`:
```sh
# install from PyPI
pip install aiinbx[aiohttp]
```
Then you can enable it by instantiating the client with `http_client=DefaultAioHttpClient()`:
```python
import os
import asyncio
from aiinbx import DefaultAioHttpClient
from aiinbx import AsyncAIInbx
async def main() -> None:
async with AsyncAIInbx(
api_key=os.environ.get("AI_INBX_API_KEY"), # This is the default and can be omitted
http_client=DefaultAioHttpClient(),
) as client:
response = await client.threads.search()
print(response.pagination)
asyncio.run(main())
```
## Using types
Nested request parameters are [TypedDicts](https://docs.python.org/3/library/typing.html#typing.TypedDict). Responses are [Pydantic models](https://docs.pydantic.dev) which also provide helper methods for things like:
- Serializing back into JSON, `model.to_json()`
- Converting to a dictionary, `model.to_dict()`
Typed requests and responses provide autocomplete and documentation within your editor. If you would like to see type errors in VS Code to help catch bugs earlier, set `python.analysis.typeCheckingMode` to `basic`.
## Handling errors
When the library is unable to connect to the API (for example, due to network connection problems or a timeout), a subclass of `aiinbx.APIConnectionError` is raised.
When the API returns a non-success status code (that is, 4xx or 5xx
response), a subclass of `aiinbx.APIStatusError` is raised, containing `status_code` and `response` properties.
All errors inherit from `aiinbx.APIError`.
```python
import aiinbx
from aiinbx import AIInbx
client = AIInbx()
try:
client.threads.search()
except aiinbx.APIConnectionError as e:
print("The server could not be reached")
print(e.__cause__) # an underlying Exception, likely raised within httpx.
except aiinbx.RateLimitError as e:
print("A 429 status code was received; we should back off a bit.")
except aiinbx.APIStatusError as e:
print("Another non-200-range status code was received")
print(e.status_code)
print(e.response)
```
Error codes are as follows:
| Status Code | Error Type |
| ----------- | -------------------------- |
| 400 | `BadRequestError` |
| 401 | `AuthenticationError` |
| 403 | `PermissionDeniedError` |
| 404 | `NotFoundError` |
| 422 | `UnprocessableEntityError` |
| 429 | `RateLimitError` |
| >=500 | `InternalServerError` |
| N/A | `APIConnectionError` |
### Retries
Certain errors are automatically retried 2 times by default, with a short exponential backoff.
Connection errors (for example, due to a network connectivity problem), 408 Request Timeout, 409 Conflict,
429 Rate Limit, and >=500 Internal errors are all retried by default.
You can use the `max_retries` option to configure or disable retry settings:
```python
from aiinbx import AIInbx
# Configure the default for all requests:
client = AIInbx(
# default is 2
max_retries=0,
)
# Or, configure per-request:
client.with_options(max_retries=5).threads.search()
```
### Timeouts
By default requests time out after 1 minute. You can configure this with a `timeout` option,
which accepts a float or an [`httpx.Timeout`](https://www.python-httpx.org/advanced/timeouts/#fine-tuning-the-configuration) object:
```python
import httpx

from aiinbx import AIInbx
# Configure the default for all requests:
client = AIInbx(
# 20 seconds (default is 1 minute)
timeout=20.0,
)
# More granular control:
client = AIInbx(
timeout=httpx.Timeout(60.0, read=5.0, write=10.0, connect=2.0),
)
# Override per-request:
client.with_options(timeout=5.0).threads.search()
```
On timeout, an `APITimeoutError` is thrown.
Note that requests that time out are [retried twice by default](https://github.com/aiinbx/aiinbx-py/tree/main/#retries).
## Advanced
### Logging
We use the standard library [`logging`](https://docs.python.org/3/library/logging.html) module.
You can enable logging by setting the environment variable `AI_INBX_LOG` to `info`.
```shell
$ export AI_INBX_LOG=info
```
Or to `debug` for more verbose logging.
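Since the SDK uses the standard library `logging` module, verbosity can also be raised programmatically; the snippet below assumes the logger is named after the package (`aiinbx`), which is an assumption, not something this README states:

```python
import logging

# Show INFO-level output from everything, and DEBUG output from the SDK.
# The "aiinbx" logger name is assumed to match the package name.
logging.basicConfig(level=logging.INFO)
logging.getLogger("aiinbx").setLevel(logging.DEBUG)
```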
### How to tell whether `None` means `null` or missing
In an API response, a field may be explicitly `null`, or missing entirely; in either case, its value is `None` in this library. You can differentiate the two cases with `.model_fields_set`:
```py
if response.my_field is None:
if 'my_field' not in response.model_fields_set:
print('Got json like {}, without a "my_field" key present at all.')
else:
print('Got json like {"my_field": null}.')
```
### Accessing raw response data (e.g. headers)
The "raw" Response object can be accessed by prefixing `.with_raw_response.` to any HTTP method call, e.g.,
```py
from aiinbx import AIInbx
client = AIInbx()
response = client.threads.with_raw_response.search()
print(response.headers.get('X-My-Header'))
thread = response.parse() # get the object that `threads.search()` would have returned
print(thread.pagination)
```
These methods return an [`APIResponse`](https://github.com/aiinbx/aiinbx-py/tree/main/src/aiinbx/_response.py) object.
The async client returns an [`AsyncAPIResponse`](https://github.com/aiinbx/aiinbx-py/tree/main/src/aiinbx/_response.py) with the same structure, the only difference being `await`able methods for reading the response content.
#### `.with_streaming_response`
The above interface eagerly reads the full response body when you make the request, which may not always be what you want.
To stream the response body, use `.with_streaming_response` instead, which requires a context manager and only reads the response body once you call `.read()`, `.text()`, `.json()`, `.iter_bytes()`, `.iter_text()`, `.iter_lines()` or `.parse()`. In the async client, these are async methods.
```python
with client.threads.with_streaming_response.search() as response:
print(response.headers.get("X-My-Header"))
for line in response.iter_lines():
print(line)
```
The context manager is required so that the response will reliably be closed.
### Making custom/undocumented requests
This library is typed for convenient access to the documented API.
If you need to access undocumented endpoints, params, or response properties, the library can still be used.
#### Undocumented endpoints
To make requests to undocumented endpoints, you can make requests using `client.get`, `client.post`, and other
http verbs. Options on the client will be respected (such as retries) when making this request.
```py
import httpx
response = client.post(
"/foo",
cast_to=httpx.Response,
body={"my_param": True},
)
print(response.headers.get("x-foo"))
```
#### Undocumented request params
If you want to explicitly send an extra param, you can do so with the `extra_query`, `extra_body`, and `extra_headers` request
options.
#### Undocumented response properties
To access undocumented response properties, you can access the extra fields like `response.unknown_prop`. You
can also get all the extra fields on the Pydantic model as a dict with
[`response.model_extra`](https://docs.pydantic.dev/latest/api/base_model/#pydantic.BaseModel.model_extra).
### Configuring the HTTP client
You can directly override the [httpx client](https://www.python-httpx.org/api/#client) to customize it for your use case, including:
- Support for [proxies](https://www.python-httpx.org/advanced/proxies/)
- Custom [transports](https://www.python-httpx.org/advanced/transports/)
- Additional [advanced](https://www.python-httpx.org/advanced/clients/) functionality
```python
import httpx
from aiinbx import AIInbx, DefaultHttpxClient
client = AIInbx(
# Or use the `AI_INBX_BASE_URL` env var
base_url="http://my.test.server.example.com:8083",
http_client=DefaultHttpxClient(
proxy="http://my.test.proxy.example.com",
transport=httpx.HTTPTransport(local_address="0.0.0.0"),
),
)
```
You can also customize the client on a per-request basis by using `with_options()`:
```python
client.with_options(http_client=DefaultHttpxClient(...))
```
### Managing HTTP resources
By default the library closes underlying HTTP connections whenever the client is [garbage collected](https://docs.python.org/3/reference/datamodel.html#object.__del__). You can manually close the client using the `.close()` method if desired, or with a context manager that closes when exiting.
```py
from aiinbx import AIInbx
with AIInbx() as client:
# make requests here
...
# HTTP client is now closed
```
## Versioning
This package generally follows [SemVer](https://semver.org/spec/v2.0.0.html) conventions, though certain backwards-incompatible changes may be released as minor versions:
1. Changes that only affect static types, without breaking runtime behavior.
2. Changes to library internals which are technically public but not intended or documented for external use. _(Please open a GitHub issue to let us know if you are relying on such internals.)_
3. Changes that we do not expect to impact the vast majority of users in practice.
We take backwards-compatibility seriously and work hard to ensure you can rely on a smooth upgrade experience.
We are keen for your feedback; please open an [issue](https://www.github.com/aiinbx/aiinbx-py/issues) with questions, bugs, or suggestions.
### Determining the installed version
If you've upgraded to the latest version but aren't seeing any new features you were expecting, your Python environment is likely still using an older version.
You can determine the version that is being used at runtime with:
```py
import aiinbx
print(aiinbx.__version__)
```
## Requirements
Python 3.9 or higher.
## Contributing
See [the contributing documentation](https://github.com/aiinbx/aiinbx-py/tree/main/CONTRIBUTING.md).
| text/markdown | AI Inbx | null | null | null | Apache-2.0 | null | [
"Intended Audience :: Developers",
"License :: OSI Approved :: Apache Software License",
"Operating System :: MacOS",
"Operating System :: Microsoft :: Windows",
"Operating System :: OS Independent",
"Operating System :: POSIX",
"Operating System :: POSIX :: Linux",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: 3.14",
"Topic :: Software Development :: Libraries :: Python Modules",
"Typing :: Typed"
] | [] | null | null | >=3.9 | [] | [] | [] | [
"anyio<5,>=3.5.0",
"distro<2,>=1.7.0",
"httpx<1,>=0.23.0",
"pydantic<3,>=1.9.0",
"sniffio",
"typing-extensions<5,>=4.10",
"aiohttp; extra == \"aiohttp\"",
"httpx-aiohttp>=0.1.9; extra == \"aiohttp\""
] | [] | [] | [] | [
"Homepage, https://github.com/aiinbx/aiinbx-py",
"Repository, https://github.com/aiinbx/aiinbx-py"
] | twine/5.1.1 CPython/3.12.9 | 2026-02-21T10:16:10.681105 | aiinbx-0.470.0.tar.gz | 140,761 | 04/ad/0eadd4cf2618193fafd3f38831c6ca61063b5c36f5dd78bb3da02b366fc8/aiinbx-0.470.0.tar.gz | source | sdist | null | false | 36bcb8dd8efcbec6f02ffc67b637b244 | ea522b8f05728c1823c2395e381d4882347d8563d603d0dfcfc3c29720c39146 | 04ad0eadd4cf2618193fafd3f38831c6ca61063b5c36f5dd78bb3da02b366fc8 | null | [] | 256 |
2.4 | ctpelvimetry | 1.1.0 | Automated pelvimetry and body composition analysis from CT segmentations | # ctpelvimetry
Automated CT pelvimetry and body composition analysis from CT segmentations.
## Description and Features
**ctpelvimetry** is a Python package for automated pelvimetric measurement and body composition analysis from CT images. It integrates with [TotalSegmentator](https://github.com/wasserth/TotalSegmentator) for segmentation and provides a complete DICOM-to-results pipeline.
### Pelvimetry Measurements
| Metric | Description |
|---|---|
| ISD (mm) | Inter-Spinous Distance |
| Inlet AP (mm) | Promontory → Upper Symphysis |
| Outlet AP (mm) | Apex → Lower Symphysis |
| Outlet Transverse (mm) | Intertuberous diameter |
| Outlet Area (cm²) | Ellipse approx: π/4 × AP × Transverse |
| Sacral Length (mm) | Promontory → Apex |
| Sacral Depth (mm) | Max anterior concavity |
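The Outlet Area row uses an ellipse approximation; as a quick check of that formula (with hypothetical AP and transverse diameters, and the mm² → cm² conversion made explicit):

```python
import math

def outlet_area_cm2(outlet_ap_mm: float, outlet_transverse_mm: float) -> float:
    """Ellipse approximation from the table: pi/4 * AP * Transverse.

    Inputs are in mm, so the product is mm^2; divide by 100 for cm^2.
    """
    return math.pi / 4 * outlet_ap_mm * outlet_transverse_mm / 100

# Hypothetical example values: 110 mm AP, 115 mm transverse -> ~99.4 cm^2
print(round(outlet_area_cm2(110, 115), 1))
```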
### Body Composition Measurements
| Metric | Description |
|---|---|
| VAT (cm²) | Visceral Adipose Tissue area |
| SAT (cm²) | Subcutaneous Adipose Tissue area |
| V/S ratio | VAT / SAT ratio |
| SMA (cm²) | Skeletal Muscle Area |
Measured at the L3 vertebral level and at the ISD (mid-pelvis) level.
### Key Features
- **Per-metric error isolation** — failure in one metric does not affect the others
- **Quality gates** — automatic detection of pelvic rotation, tilt, and sacrum offset
- **Batch processing** — process hundreds of patients with progress tracking and failure summaries
- **QC figures** — sagittal combined, extended 3-panel, and body composition overlays
- **Modular design** — use the full pipeline or individual analysis functions
### Package Structure
```
ctpelvimetry/
├── __init__.py # Public API
├── config.py # PelvicConfig, constants
├── io.py # Mask loading, coordinate transforms
├── conversion.py # DICOM → NIfTI (dcm2niix)
├── segmentation.py # TotalSegmentator execution
├── landmarks.py # Midline, symphysis, sacral landmarks
├── metrics.py # ISD, ITD, sacral depth
├── body_composition.py # VAT/SAT/SMA analysis
├── qc.py # QC figure generation
├── pipeline.py # run_combined_pelvimetry, run_full_pipeline
├── batch.py # Batch orchestration
└── cli.py # Unified CLI entry point
```
## Installation
```bash
# Basic install (analyse existing segmentations)
pip install ctpelvimetry
# Full install (includes TotalSegmentator for segmentation)
pip install "ctpelvimetry[seg]"
```
> **Note:** The full install pulls in TotalSegmentator and its PyTorch dependencies.
> If you only need to analyse pre-existing segmentations, the basic install is sufficient.
### Dependencies
| Package | Minimum Version |
|---|---|
| numpy | ≥ 1.24 |
| nibabel | ≥ 5.0 |
| pandas | ≥ 2.0 |
| scipy | ≥ 1.11 |
| matplotlib | ≥ 3.7 |
| tqdm | ≥ 4.60 |
| TotalSegmentator | ≥ 2.0 *(optional, `pip install ".[seg]"`)* |
## Usage Examples
### CLI — Pelvimetry (from existing segmentation)
```bash
ctpelvimetry pelv \
--seg_folder /path/to/segmentations \
--nifti_path /path/to/ct.nii.gz \
--patient Patient_001 \
--output_root ./output --qc
```
### CLI — Full Pipeline (DICOM → NIfTI → Seg → Measurements)
```bash
ctpelvimetry pelv \
--dicom_dir /path/to/Patient_001 \
--output_root ./output \
--patient Patient_001
```
### CLI — Body Composition
```bash
ctpelvimetry body-comp \
--patient Patient_001 \
--seg_root ./batch_output \
--nifti_root ./batch_output \
--pelvimetry_csv ./batch_output/combined_pelvimetry_report.csv \
--output body_comp.csv --qc
```
### CLI — Batch Processing
```bash
# Pelvimetry batch
ctpelvimetry pelv \
--dicom_root /path/to/DICOMs \
--output_root ./output \
--start 1 --end 250
# Body composition batch
ctpelvimetry body-comp \
--seg_root ./batch_output \
--nifti_root ./batch_output \
--pelvimetry_csv ./report.csv \
--output body_comp.csv \
--start 1 --end 210 --qc_root ./qc
```
### Python API
```python
from ctpelvimetry import run_combined_pelvimetry, process_single_patient
# Pelvimetry
result = run_combined_pelvimetry(
"Patient_001", "/path/to/seg", "/path/to/ct.nii.gz"
)
# Body composition
result = process_single_patient(
"Patient_001", "/path/to/seg_root",
"/path/to/ct.nii.gz", "/path/to/report.csv"
)
```
## Contributing
Contributions are welcome! Please follow these steps:
1. Fork the repository
2. Create a feature branch (`git checkout -b feature/your-feature`)
3. Commit your changes (`git commit -m "Add your feature"`)
4. Push to the branch (`git push origin feature/your-feature`)
5. Open a Pull Request
## Citation
If you use **ctpelvimetry** in your research, please cite:
> *Manuscript in preparation.* Citation details will be updated upon publication.
## License
This project is licensed under the [Apache License 2.0](LICENSE).
| text/markdown | null | Shih-Feng Huang <odafeng@hotmail.com> | null | null | Apache-2.0 | pelvimetry, CT, body-composition, medical-imaging, TotalSegmentator | [
"Development Status :: 4 - Beta",
"Intended Audience :: Science/Research",
"Intended Audience :: Healthcare Industry",
"License :: OSI Approved :: Apache Software License",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Topic :: Scientific/Engineering :: Medical Science Apps."
] | [] | null | null | >=3.10 | [] | [] | [] | [
"numpy>=1.24",
"nibabel>=5.0",
"pandas>=2.0",
"scipy>=1.11",
"matplotlib>=3.7",
"tqdm>=4.60",
"TotalSegmentator>=2.0; extra == \"seg\"",
"pytest>=8.0; extra == \"dev\"",
"ruff; extra == \"dev\"",
"mypy; extra == \"dev\"",
"flake8; extra == \"dev\""
] | [] | [] | [] | [
"Homepage, https://github.com/odafeng/ctpelvimetry",
"Repository, https://github.com/odafeng/ctpelvimetry",
"Issues, https://github.com/odafeng/ctpelvimetry/issues",
"Changelog, https://github.com/odafeng/ctpelvimetry/blob/main/CHANGELOG.md"
] | twine/6.2.0 CPython/3.13.5 | 2026-02-21T10:16:09.901387 | ctpelvimetry-1.1.0.tar.gz | 46,399 | 20/63/222ccfc5d94e44adde636048a039d0e944641048384ea4e544c6baef4cba/ctpelvimetry-1.1.0.tar.gz | source | sdist | null | false | 8a303aa028f618b73903fc03476d9f31 | 6b8ac3bff2fac14660b41d41d876a5b49b14a3437ed0bb26fbcdb4c7bfebbc0d | 2063222ccfc5d94e44adde636048a039d0e944641048384ea4e544c6baef4cba | null | [
"LICENSE"
] | 256 |
2.4 | pyoccam | 0.9.5 | OCCAM Reconstructability Analysis Tools | # PyOccam
**Python bindings for OCCAM Reconstructability Analysis**
[](https://github.com/occam-ra/occam/actions/workflows/build-wheels.yml)
## What is PyOccam?
PyOccam wraps the OCCAM Reconstructability Analysis tools in a modern Python package. It provides information-theoretic modeling for categorical data, useful for:
- Environmental modeling (landslides, wildfires)
- Biomedical research (disease risk factors)
- Social science and survey analysis
- Any domain with discrete/categorical variables
OCCAM discovers significant variable relationships while emphasizing **interpretability** over black-box prediction.
## Installation
```bash
pip install pyoccam
```
## Quick Start
```python
import pyoccam
# Load built-in dataset
data = pyoccam.load_dementia()
print(f"Loaded {data.n_samples} samples, {data.n_features} features")
# Run search to find best model
manager = data.manager
report = manager.generate_search_report("loopless-up", levels=5, width=3)
print(report)
# Get best model by BIC
best = manager.get_best_model_by_bic()
print(f"Best model: {best}")
# Generate fit report with confusion matrix
fit_report = manager.generate_fit_report(best, target_state="0")
print(fit_report)
# Get confusion matrix as dictionary
cm = manager.get_confusion_matrix(best, target_state="0")
print(f"Accuracy: {cm['train_accuracy']:.1%}")
```
## Convert Your Own CSV Data
```python
import pyoccam
# Convert CSV to OCCAM format with train/test split
output_file, data = pyoccam.make_occam_input_from_csv(
"mydata.csv",
test_split=0.2, # 20% held out for validation
random_state=42, # Reproducible split
max_cardinality=20, # Exclude columns with >20 unique values
dv_column="target", # Specify dependent variable
exclude_columns=["ID", "Name"] # Always exclude these
)
# Analyze - now with train AND test metrics!
best = data.quick_search()
cm = data.manager.get_confusion_matrix(best, target_state="0")
print(f"Train accuracy: {cm['train_accuracy']:.1%}")
print(f"Test accuracy: {cm['test_accuracy']:.1%}")
```
Or from the command line:
```bash
# With 20% test split
python -m pyoccam csv2occam mydata.csv --test-split 0.2 --exclude ID,Name
```
## Model Selection Methods
```python
# Different criteria for "best" model:
best_bic = manager.get_best_model_by_bic() # Most parsimonious (recommended)
best_aic = manager.get_best_model_by_aic() # Less penalty for complexity
best_info = manager.get_best_model_by_information() # Highest info, alpha < 0.05
```
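For orientation on why BIC is "most parsimonious", here are the generic textbook definitions of the two criteria (OCCAM's exact convention may differ; this is background, not the library's implementation):

```python
import math

def aic(k: int, log_likelihood: float) -> float:
    # Akaike information criterion: 2k - 2 ln L
    return 2 * k - 2 * log_likelihood

def bic(k: int, n: int, log_likelihood: float) -> float:
    # Bayesian information criterion: k ln(n) - 2 ln L
    return k * math.log(n) - 2 * log_likelihood

# With the same fit, BIC penalizes each extra parameter by ln(n) instead of 2,
# so for n > e^2 (~7.4 samples) it favors smaller models than AIC does.
print(aic(5, -100.0), bic(5, 1000, -100.0))
```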
## Built-in Datasets
```python
data = pyoccam.load_dementia() # Alzheimer's disease risk factors
data = pyoccam.load_landslides() # Geological hazard data
data = pyoccam.load_data("file.txt") # Any OCCAM format file
```
## Search Types
- `"loopless-up"` - Bottom-up search, no feedback loops (recommended for directed systems)
- `"loopless-down"` - Top-down search, no loops
- `"full-up"` - Bottom-up with loops allowed
- `"full-down"` - Top-down with loops allowed
## Links
- **Practical Guide**: [PRACTICAL_GUIDE.md](https://github.com/occam-ra/occam/blob/pyoccam-port/PRACTICAL_GUIDE.md) - Tips from real projects
- **OCCAM Manual**: [PDF](https://pdxscholar.library.pdx.edu/sysc_fac/145/) - Complete theory & reference
- **Source Code**: [github.com/occam-ra/occam](https://github.com/occam-ra/occam)
- **Web Interface**: [occam.hsd.pdx.edu](https://occam.hsd.pdx.edu/)
## License
GPL v3 - See LICENSE file
## Credits
OCCAM was developed at Portland State University by Prof. Martin Zwick and contributors including Ken Willett, Joe Fusion, and H. Forrest Alexander. Python bindings by David Percy.
| text/markdown | David Percy | percyd@pdx.edu | null | null | null | reconstructability analysis, information theory, categorical data, discrete multivariate | [
"Development Status :: 4 - Beta",
"Intended Audience :: Science/Research",
"License :: OSI Approved :: GNU General Public License v3 (GPLv3)",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: C++",
"Topic :: Scientific/Engineering :: Information Analysis"
] | [] | https://github.com/occam-ra/occam | null | >=3.9 | [] | [] | [] | [] | [] | [] | [] | [] | twine/6.1.0 CPython/3.13.7 | 2026-02-21T10:15:54.410535 | pyoccam-0.9.5.tar.gz | 1,137,434 | b3/6d/dda0e8c74a77e02bafe18876d69f362b75c8b05f514446cfdd43a0e6f70d/pyoccam-0.9.5.tar.gz | source | sdist | null | false | 1fa428e2e733fdde56e45dfa43701cf3 | b33e16df5e0423b78e1490f8966fecb55dffc917e5b24f779ebcf81e8febc0a8 | b36ddda0e8c74a77e02bafe18876d69f362b75c8b05f514446cfdd43a0e6f70d | null | [
"LICENSE"
] | 855 |
2.4 | manhattan-mcp | 0.2.2 | Manhattan MCP Server - Token-Efficient Codebase Navigation (GitMem) for AI Agents | # Manhattan MCP (GitMem)
**Token-Efficient Codebase Navigation** - MCP Server for the Manhattan Memory System.
[](https://www.python.org/downloads/)
[](https://opensource.org/licenses/MIT)
Manhattan MCP is a local [Model Context Protocol (MCP)](https://modelcontextprotocol.io/) server that provides AI agents (Claude Desktop, Cursor, Windsurf, etc.) with a **Virtual File System (VFS)** backed by compressed, cached code context. It allows agents to understand large codebases while saving 50-80% on tokens.
## Features
- 🏗️ **GitMem Context** - Compressed semantic code skeletons (signatures, summaries, hierarchies).
- 🔍 **Hybrid Search** - Semantic and keyword search across the entire codebase.
- 📂 **VFS Navigation** - Browse and read files via token-efficient outlines and contexts.
- 📝 **Auto-Indexing** - Automatically keeps the code index fresh after edits.
- 📊 **Token Analytics** - Track token savings and repository indexing status.
## Installation
```bash
pip install manhattan-mcp
```
## Quick Start
### 1. Install Manhattan MCP
```bash
pip install manhattan-mcp
```
### 2. Configure Your AI Client
Setting up Manhattan MCP is a two-step process:
#### Step A: Register the Server (One-time)
Add Manhattan MCP to your AI tool's global settings.
**Claude Desktop**
Add to `claude_desktop_config.json`:
```json
{
"mcpServers": {
"manhattan": {
"command": "manhattan-mcp",
"args": ["start"]
}
}
}
```
**Cursor**
Add to Cursor Settings > MCP:
- Name: `manhattan`
- Type: `command`
- Command: `manhattan-mcp`
**GitHub Copilot (VS Code)**
Add to your Copilot MCP settings:
```json
{
"servers": {
"manhattan": {
"command": "manhattan-mcp",
"args": ["start"]
}
}
}
```
#### Step B: Apply Project Rules (Per Project)
Run the setup command in your project root to ensure the agent follows the mandatory indexing policy.
```bash
# For Cursor
manhattan-mcp setup cursor
# For Claude
manhattan-mcp setup claude
# For Gemini (Antigravity)
manhattan-mcp setup gemini
# For GitHub Copilot
manhattan-mcp setup copilot
# For Windsurf
manhattan-mcp setup windsurf
# For all supported clients
manhattan-mcp setup all
```
### 3. Start Navigating!
Once configured, your AI agent can use Manhattan MCP to understand your codebase efficiently.
#### Example Usage
**Searching for code:**
```
User: How does the authentication flow work?
AI: *calls search_codebase "authentication flow"*
I found the authentication logic in `auth.py`.
Let me read the context for you...
```
**Understanding a file:**
```
User: Summarize the main functions in server.py
AI: *calls get_file_outline "src/server.py"*
The server.py file contains:
- `start_server()`: Initializes the FastMCP instance...
- `api_usage()`: Returns usage statistics...
```
**Saving tokens:**
```
User: Read the implementation of the memory builder.
AI: *calls read_file_context "src/builder.py"*
(Returns a compressed semantic skeleton, saving 70% tokens)
The memory builder uses a two-phase ingestion process...
```
## Configuration Options
| Environment Variable | Description | Default |
|---------------------|-------------|---------|
| `MANHATTAN_API_KEY` | Your API key (if using cloud embeddings) | - |
| `MANHATTAN_API_URL` | Custom API URL (optional) | Gradio Endpoint |
| `MANHATTAN_MEM_PATH` | Storage path for memory/index | `~/.manhattan-mcp/data` |
| `MANHATTAN_TIMEOUT` | Request timeout (seconds) | `120` |
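The variables in the table are ordinary environment variables, so they can be set in the shell before launching the server (values below are illustrative, echoing the defaults from the table):

```shell
# Point the index store somewhere explicit and bump the request timeout,
# then launch the server in that environment.
export MANHATTAN_MEM_PATH="$HOME/.manhattan-mcp/data"
export MANHATTAN_TIMEOUT=120   # seconds

# manhattan-mcp start
```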
## CLI Commands
```bash
# Start the MCP server (default)
manhattan-mcp start
# Set up client rules (Cursor, Claude, etc.)
manhattan-mcp setup [client]
# Show version
manhattan-mcp --version
# Show help
manhattan-mcp --help
```
## License
MIT License - see [LICENSE](LICENSE) for details.
## Links
- 🌐 [Website](https://themanhattanproject.ai)
- 📖 [Documentation](https://themanhattanproject.ai/mcp-docs)
- 🐛 [Issues](https://github.com/agent-architects/manhattan-mcp/issues)
- 💬 [Discord](https://discord.gg/manhattan)
| text/markdown | Agent Architects Studio | null | null | null | null | agent, ai, claude, llm, manhattan, mcp, memory | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Topic :: Software Development :: Libraries :: Python Modules"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"httpx>=0.25.0",
"mcp>=1.0.0",
"python-dotenv>=1.0.0",
"pytest-asyncio>=0.21.0; extra == \"dev\"",
"pytest>=7.0.0; extra == \"dev\""
] | [] | [] | [] | [
"Homepage, https://themanhattanproject.ai",
"Documentation, https://themanhattanproject.ai/mcp-docs",
"Repository, https://github.com/agent-architects/manhattan-mcp"
] | twine/6.2.0 CPython/3.14.2 | 2026-02-21T10:15:46.250071 | manhattan_mcp-0.2.2.tar.gz | 183,909 | 01/05/139af0fabd3b6203783d9482c94292112bdbc56807b23587a31f3b7e879f/manhattan_mcp-0.2.2.tar.gz | source | sdist | null | false | 553e7726becdd40a1d64db5f5937e1d8 | df7d8b7cd80a421809759fba43d1401abe2d6a0cd86308a0a9d80c81fc262076 | 0105139af0fabd3b6203783d9482c94292112bdbc56807b23587a31f3b7e879f | MIT | [
"LICENSE"
] | 250 |
2.4 | litert-torch-nightly | 0.9.0.dev20260221 | Support PyTorch model conversion with LiteRT. | Library that supports converting PyTorch models into a .tflite format, which can
then be run with LiteRT. This enables applications for
Android, iOS and IOT that can run models completely on-device.
[Install steps](https://github.com/google-ai-edge/litert-torch#installation)
and additional details are in the LiteRT Torch
[GitHub repository](https://github.com/google-ai-edge/litert-torch).
| text/markdown | null | null | null | null | null | On-Device ML, AI, Google, TFLite, LiteRT, PyTorch, LLMs, GenAI | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"Intended Audience :: Education",
"Intended Audience :: Science/Research",
"License :: OSI Approved :: Apache Software License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3 :: Only",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Topic :: Scientific/Engineering",
"Topic :: Scientific/Engineering :: Mathematics",
"Topic :: Scientific/Engineering :: Artificial Intelligence",
"Topic :: Software Development",
"Topic :: Software Development :: Libraries",
"Topic :: Software Development :: Libraries :: Python Modules"
] | [] | https://github.com/google-ai-edge/litert-torch | null | >=3.10 | [] | [] | [] | [
"absl-py",
"numpy",
"scipy",
"safetensors",
"multipledispatch",
"transformers",
"kagglehub",
"tabulate",
"torch<2.10.0,>=2.4.0",
"ai-edge-litert-nightly==2.2.0.dev20260210",
"ai-edge-quantizer-nightly",
"torchao",
"jax",
"torch-xla2[odml]>=0.0.1.dev20241201",
"jaxtyping",
"fire",
"sentencepiece",
"torch_xla>=2.4.0; extra == \"torch-xla\""
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.11.14 | 2026-02-21T10:15:29.175710 | litert_torch_nightly-0.9.0.dev20260221-py3-none-any.whl | 532,672 | c4/f7/f8cf14f3d20114c843f367d0e7a8fe78d8c0c3a1ad56e00bdac15d0f6e07/litert_torch_nightly-0.9.0.dev20260221-py3-none-any.whl | py3 | bdist_wheel | null | false | 3bb47872d9b5584269aa84187ec46bc1 | 46b558458a0255dd58ede6b6cd02acafa58457ee91e4e32f9245691765f6c03e | c4f7f8cf14f3d20114c843f367d0e7a8fe78d8c0c3a1ad56e00bdac15d0f6e07 | null | [
"LICENSE"
] | 82 |
2.4 | gsv-tts-lite | 0.2.2 | A high-performance inference engine specifically designed for the GPT-SoVITS text-to-speech model | <div align="center">
<a href="Project_Link_Placeholder">
<img src="huiyeji.gif" alt="Logo" width="240" height="254">
</a>
<h1>GSV-TTS-Lite</h1>
<p>
A high-performance inference engine specifically designed for the GPT-SoVITS text-to-speech model
</p>
<p align="center">
<a href="LICENSE">
<img src="https://img.shields.io/badge/License-MIT-green.svg?style=for-the-badge" alt="License">
</a>
<a href="https://www.python.org/">
<img src="https://img.shields.io/badge/Python-3.10+-blue.svg?style=for-the-badge&logo=python&logoColor=white" alt="Python Version">
</a>
<a href="https://github.com/chinokikiss/GSV-TTS-Lite/stargazers">
<img src="https://img.shields.io/github/stars/chinokikiss/GSV-TTS-Lite?style=for-the-badge&color=yellow&logo=github" alt="GitHub stars">
</a>
</p>
<p>
<a href="README_EN.md">
<img src="https://img.shields.io/badge/English-66ccff?style=flat-square&logo=github&logoColor=white" alt="English">
</a>
<a href="README.md">
<img src="https://img.shields.io/badge/简体中文-ff99cc?style=flat-square&logo=github&logoColor=white" alt="Chinese">
</a>
</p>
</div>
<div align="center">
<img src="https://user-images.githubusercontent.com/73097560/115834477-dbab4500-a447-11eb-908a-139a6edaec5c.gif">
</div>
## About
The original motivation for this project was the pursuit of ultimate performance. While using the original GPT-SoVITS, I found that the inference latency often struggled to meet the demands of real-time interaction due to the computing power bottlenecks of the RTX 3050 (Laptop).
To break through these limitations, **GSV-TTS-Lite** was developed as an inference backend based on **GPT-SoVITS V2Pro**. Through deep optimization techniques, this project successfully achieves millisecond-level real-time response in low-VRAM environments.
Beyond the leap in performance, **GSV-TTS-Lite** implements the **decoupling of timbre and style**, supporting independent control over the speaker's voice and emotion. It also features **subtitle timestamp alignment** and **voice conversion (timbre transfer)**.
To facilitate integration for developers, **GSV-TTS-Lite** features a significantly streamlined code architecture and is available on PyPI as the `gsv-tts-lite` library, supporting one-click installation via `pip`.
The currently supported languages are Chinese, Japanese, and English. The available models include v2pro and v2proplus.
## Performance Comparison
> [!NOTE]
> **Test Environment**: NVIDIA GeForce RTX 3050 (Laptop)
| Backend | Settings | TTFT (First Packet) | RTF (Real-time Factor) | VRAM | Speedup |
| :--- | :--- | :---: | :---: | :---: | :--- |
| **Original** | `streaming_mode=3` | 436 ms | 0.381 | 1.6 GB | - |
| **Lite Version** | `Flash_Attn=Off` | 150 ms | 0.125 | **0.8 GB** | ⚡ **2.9x** Speed |
| **Lite Version** | `Flash_Attn=On` | **133 ms** | **0.108** | **0.8 GB** | 🔥 **3.3x** Speed |
As shown, **GSV-TTS-Lite** achieves roughly **3x** speed improvements while **halving** the VRAM usage! 🚀
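For readers unfamiliar with the RTF column: the real-time factor is synthesis time divided by the duration of the audio produced, so values below 1 mean faster-than-real-time synthesis. A tiny illustrative helper (not part of gsv-tts-lite):

```python
def rtf(synthesis_seconds: float, audio_seconds: float) -> float:
    """Real-time factor: processing time divided by audio duration.

    RTF < 1 means the engine synthesizes faster than real time.
    Illustrative helper only, not a gsv-tts-lite API.
    """
    return synthesis_seconds / audio_seconds

# e.g. taking 1.25 s to synthesize 10 s of audio gives RTF 0.125
print(rtf(1.25, 10.0))  # 0.125
```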
<br>
## One-click Download (Pre-configured)
> [!TIP]
> If you are a beginner looking for a quick start, you can download the pre-configured integrated package.
- **Hardware Requirements**:
- **OS**: Windows only.
- **GPU**: NVIDIA GPU with at least **4GB** VRAM.
- **VRAM Note**: The `Qwen3-ASR` model is integrated by default. If VRAM is insufficient, you can disable the ASR module via parameters in `go-webui.bat` to save space.
- **Download Link**: [Placeholder]
- **Instructions**:
1. Download and extract the package (ensure the path contains no Chinese characters).
2. Double-click `go-webui.bat` and wait for the web UI to launch.
3. Start experiencing high-speed voice synthesis!
## Deployment (For Developers)
### Prerequisites
- **Anaconda**
- **CUDA Toolkit**
- **Microsoft Visual C++**
### Installation Steps
#### 1. Environment Configuration
It is recommended to create a virtual environment using Python >=3.10 and install the necessary system dependency `ffmpeg`.
```bash
conda create -n gsv-tts python=3.11
conda activate gsv-tts
conda install ffmpeg
pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu128
```
#### 2. Install GSV-TTS-Lite
If you have prepared the above basic environment, you can directly execute the following command to complete the integration:
```bash
pip install gsv-tts-lite --prefer-binary
```
### Quick Start
> [!TIP]
> The program will automatically download the required pre-trained models upon the first run.
#### 1. Basic Inference
```python
from gsv_tts import TTS
tts = TTS()
# Load GPT model weights from the specified path into memory; here, the default model is loaded.
tts.load_gpt_model()
# Load SoVITS model weights from the specified path into memory; here, the default model is loaded.
tts.load_sovits_model()
# infer is the simplest and most primitive inference method, suitable for short text inference.
audio = tts.infer(
spk_audio_path="examples/laffey.mp3", # Speaker reference audio
prompt_audio_path="examples/AnAn.ogg", # Prompt reference audio
prompt_audio_text="ちが……ちがう。レイア、貴様は間違っている。", # Text corresponding to the prompt audio
text="へぇー、ここまでしてくれるんですね。", # Target text to generate
)
audio.play()
tts.audio_queue.wait()
```
#### 2. Stream Inference / Subtitle Synchronization
```python
import time
import queue
import threading
from gsv_tts import TTS
class SubtitlesQueue:
def __init__(self):
self.q = queue.Queue()
self.t = None
def process(self):
last_i = 0
last_t = time.time()
while True:
subtitles, text = self.q.get()
if subtitles is None:
break
for subtitle in subtitles:
if subtitle["start_s"] > time.time() - last_t:
while time.time() - last_t <= subtitle["start_s"]:
time.sleep(0.01)
if subtitle["end_s"] and subtitle["end_s"] > time.time() - last_t:
if subtitle["orig_idx_end"] > last_i:
print(text[last_i:subtitle["orig_idx_end"]], end="", flush=True)
last_i = subtitle["orig_idx_end"]
while time.time() - last_t <= subtitle["end_s"]:
time.sleep(0.01)
self.t = None
def add(self, subtitles, text):
self.q.put((subtitles, text))
if self.t is None:
self.t = threading.Thread(target=self.process, daemon=True)
self.t.start()
tts = TTS()
# infer, infer_stream, and infer_batched all support returning subtitle timestamps; infer_stream is used here just as an example.
subtitlesqueue = SubtitlesQueue()
# infer_stream implements token-level streaming output, significantly reducing first-token latency and enabling an ultra-low latency real-time feedback experience.
generator = tts.infer_stream(
spk_audio_path="examples/laffey.mp3",
prompt_audio_path="examples/AnAn.ogg",
prompt_audio_text="ちが……ちがう。レイア、貴様は間違っている。",
text="へぇー、ここまでしてくれるんですね。",
debug=False,
)
for audio in generator:
audio.play()
subtitlesqueue.add(audio.subtitles, audio.orig_text)
tts.audio_queue.wait()
subtitlesqueue.add(None, None)
print()
```
#### 3. Batched Inference
```python
from gsv_tts import TTS
tts = TTS()
# infer_batched is optimized for long-form text and multi-sentence synthesis.
# Beyond its efficiency advantages, it supports assigning different reference
# audios to different sentences within the same batch, offering high synthesis
# freedom and flexibility.
audios = tts.infer_batched(
spk_audio_paths="examples/laffey.mp3",
prompt_audio_paths="examples/AnAn.ogg",
prompt_audio_texts="ちが……ちがう。レイア、貴様は間違っている。",
texts=["へぇー、ここまでしてくれるんですね。", "The old map crinkled in Leo’s trembling hands."],
)
for i, audio in enumerate(audios):
audio.save(f"audio{i}.wav")
```
#### 4. Voice Conversion
```python
from gsv_tts import TTS
tts = TTS()
# Although infer_vc supports few-shot voice conversion and offers convenience, its conversion quality still has room for improvement compared to specialized voice conversion models like RVC or SVC.
audio = tts.infer_vc(
spk_audio_path="examples/laffey.mp3",
prompt_audio_path="examples/AnAn.ogg",
prompt_audio_text="ちが……ちがう。レイア、貴様は間違っている。",
)
audio.play()
tts.audio_queue.wait()
```
#### 5. Speaker Verification
```python
from gsv_tts import TTS
tts = TTS(always_load_sv=True)
# verify_speaker is used to compare the speaker characteristics of two audio clips to determine if they are the same person.
similarity = tts.verify_speaker("examples/laffey.mp3", "examples/AnAn.ogg")
print("Speaker Similarity:", similarity)
```
<details>
<summary><strong>6. Other Function Interfaces</strong></summary>
### 1. Model Management
#### `init_language_module(languages)`
Preload necessary language processing modules.
#### `load_gpt_model(model_paths)`
Load GPT model weights from specified paths into memory.
#### `load_sovits_model(model_paths)`
Load SoVITS model weights from specified paths into memory.
#### `unload_gpt_model(model_paths)` / `unload_sovits_model(model_paths)`
Unload models from memory to free up resources.
#### `get_gpt_list()` / `get_sovits_list()`
Get the list of currently loaded models.
#### `to_safetensors(checkpoint_path)`
Converts PyTorch checkpoint files (.pth or .ckpt) into the safetensors format.
### 2. Audio Cache Management
#### `cache_spk_audio(spk_audio_paths)`
Preprocess and cache speaker reference audio data.
#### `cache_prompt_audio(prompt_audio_paths, prompt_audio_texts, prompt_audio_languages)`
Preprocess and cache prompt reference audio data.
#### `del_spk_audio(spk_audio_list)` / `del_prompt_audio(prompt_audio_paths)`
Remove audio data from the cache.
#### `get_spk_audio_list()` / `get_prompt_audio_list()`
Get the list of audio data in the cache.
</details>
## Flash Attn
If you are looking for **lower latency** and **higher throughput**, it is highly recommended to enable `Flash Attention` support.
Since this library has specific compilation requirements, please install it manually based on your system:
* **🐧 Linux / Build from Source**
* Official Repo: [Dao-AILab/flash-attention](https://github.com/Dao-AILab/flash-attention)
* **🪟 Windows Users**
* Pre-compiled Wheels: [lldacing/flash-attention-windows-wheel](https://huggingface.co/lldacing/flash-attention-windows-wheel/tree/main)
> [!TIP]
> After installation, set `use_flash_attn=True` in your TTS configuration to enjoy the acceleration! 🚀
## Credits
Special thanks to the following projects:
- [RVC-Boss/GPT-SoVITS](https://github.com/RVC-Boss/GPT-SoVITS)
## ⭐ Star History
[](https://star-history.com/#chinokikiss/GSV-TTS-Lite&Date)
| text/markdown | null | mahir0 <2370950583@qq.com> | null | null | null | null | [
"Programming Language :: Python :: 3",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"einops",
"tqdm",
"torchcodec",
"transformers",
"sounddevice",
"safetensors",
"wordsegment",
"g2p_en",
"cn2an",
"pypinyin",
"jieba_fast",
"pyopenjtalk>=0.4.1"
] | [] | [] | [] | [
"Homepage, https://github.com/chinokikiss/GSV-TTS-Lite"
] | twine/6.2.0 CPython/3.11.14 | 2026-02-21T10:15:20.082444 | gsv_tts_lite-0.2.2.tar.gz | 80,808 | e9/b9/0cdfdfc66999051b7e173914ed17ba7e37953cea62933432f39d575d73ed/gsv_tts_lite-0.2.2.tar.gz | source | sdist | null | false | 0a6e4a1529fd0346240f18657848c03c | 501338222edf6537a5193ee2c051bd3206d53d5285efb286de7e0e1b8210e8b6 | e9b90cdfdfc66999051b7e173914ed17ba7e37953cea62933432f39d575d73ed | null | [
"LICENSE"
] | 251 |
2.4 | aboba | 1.0.3 | AB tests library with simplicity in mind | # ABOBA
AB tests library with simplicity in mind
## 📚 Documentation
- **[Full Documentation](https://ab-alexroar-8e192a9a66171262804e0b0f9c942db31bc9c224aebd3d3d415.gitlab.io)** - Complete guide and reference
- **[Tutorial](https://ab-alexroar-8e192a9a66171262804e0b0f9c942db31bc9c224aebd3d3d415.gitlab.io/tutorial/)** - Step-by-step learning guide
- **[API Reference](https://ab-alexroar-8e192a9a66171262804e0b0f9c942db31bc9c224aebd3d3d415.gitlab.io/api/tests/)** - Detailed API documentation
## ✨ Features
- **Simple & Intuitive API** - Easy to learn and use for both beginners and experts
- **Multiple Statistical Tests** - t-tests, ANOVA, Kruskal-Wallis, and more
- **Variance Reduction** - Built-in CUPED, stratification, and regression adjustments
- **Power Analysis** - Simulate synthetic effects to estimate required sample sizes
- **Flexible Pipelines** - Chain data processors and samplers for complex workflows
- **Experiment Orchestration** - Run and visualize multiple test scenarios simultaneously
- **Extensible Architecture** - Easy to create custom tests, samplers, and processors
- **Production Ready** - Type hints, comprehensive tests, and detailed documentation
## 🚀 Quick Start
### Installation
```bash
pip install aboba
```
## 📖 Quick Example
To conduct a test, you need several entities:
- data
- data processing
- data sampling technique
- the test strategy itself
Data can be a simple pandas dataframe or custom data generator.
### General use case
```python
import numpy as np
import pandas as pd
import scipy.stats as sps
from aboba import (
tests,
samplers,
effect_modifiers,
experiment,
)
from aboba.pipeline import Pipeline
# Create dataset with two groups
data = pd.DataFrame({
'value' : np.concatenate([
sps.norm.rvs(size=1000, loc=0, scale=1),
sps.norm.rvs(size=1000, loc=0, scale=1),
]),
'is_b_group': np.concatenate([
np.repeat(0, 1000),
np.repeat(1, 1000),
]),
})
# Configure test
test = tests.AbsoluteIndependentTTest(
value_column='value',
)
# Create pipeline with sampler
sampler = samplers.GroupSampler(
column='is_b_group',
size=100,
)
pipeline = Pipeline([
('sampler', sampler),
])
# Run experiment
n_iter = 500
exp = experiment.AbobaExperiment(draw_cols=1)
group_aa = exp.group(
name="AA, regular",
test=test,
data=data,
data_pipeline=pipeline,
n_iter=n_iter
)
group_aa.run()
effect = effect_modifiers.GroupModifier(
effects={1: 0.3},
value_column='value',
group_column='is_b_group',
)
group_ab = exp.group(
name="AB, regular, effect=0.3",
test=test,
data=data,
data_pipeline=pipeline,
synthetic_effect=effect,
n_iter=n_iter
)
group_ab.run()
# Draw results
fig, axes = exp.draw()
fig.savefig('results.png')
```
## 🎯 Key Components
- **Tests** - Statistical tests for hypothesis testing (t-tests, ANOVA, etc.)
- **Samplers** - Control how data is split into groups (random, stratified, grouped)
- **Processors** - Transform data before testing (CUPED, bucketing, normalization)
- **Pipelines** - Chain multiple processors and samplers together
- **Effect Modifiers** - Simulate synthetic effects for power analysis
- **Experiments** - Orchestrate multiple test runs and visualize results
## 📊 Use Cases
- **A/B Testing** - Compare two variants to determine which performs better
- **Multivariate Testing** - Test multiple variants simultaneously
- **Power Analysis** - Determine required sample sizes for detecting effects
- **Variance Reduction** - Use CUPED or stratification to improve test sensitivity
- **Custom Tests** - Implement domain-specific statistical tests
## 🤝 Contributing
Contributions are welcome! Please feel free to submit a Pull Request.
## 📄 License
MIT License - see LICENSE file for details
| text/markdown | null | Max Gorishniy <gorishniy@alexdremov.me>, Alex Dremov <alex@alexdremov.me> | null | null | null | null | [
"Programming Language :: Python :: 3",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent"
] | [] | null | null | >=3.9 | [] | [] | [] | [
"pandas>=1.5.0",
"joblib>=1.0.0",
"tqdm>=4.60.0",
"numpy>=1.23.0",
"scipy>=1.10.0",
"matplotlib>=3.7.0",
"statsmodels>=0.13.0",
"seaborn>=0.12.0",
"scikit-posthocs>=0.9.0",
"pydantic",
"pytest>=7.4.0; extra == \"testing\"",
"pytest-cov>=4.1.0; extra == \"testing\"",
"pytest-xdist>=3.3.0; extra == \"testing\"",
"sphinx; extra == \"docs\"",
"myst-parser; extra == \"docs\"",
"furo; extra == \"docs\"",
"sphinx-autodoc-typehints; extra == \"docs\"",
"linkify-it-py; extra == \"docs\"",
"mkdocs-material; extra == \"docs\"",
"mkdocs-static-i18n; extra == \"docs\"",
"mkdocstrings[python]; extra == \"docs\""
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.13.7 | 2026-02-21T10:14:44.670793 | aboba-1.0.3.tar.gz | 52,725 | 4b/92/6ed8149c48c7f82008f531976275e48f0e9670ce271b633bbe37036578b1/aboba-1.0.3.tar.gz | source | sdist | null | false | 81ba2ad681aa09a3ebed75b1e0db1cd6 | f91c66d6400fded384be26abb2fe2fb8473a3a6036b50372bb64b053332e3386 | 4b926ed8149c48c7f82008f531976275e48f0e9670ce271b633bbe37036578b1 | null | [] | 260 |
2.4 | interstellar | 1.2.5 | A command-line tool for managing cryptocurrency mnemonics using BIP39 and SLIP39 standards | # interstellar
[](https://github.com/alkalescent/interstellar/actions/workflows/release.yml)
[](https://pypi.org/project/interstellar/)
[](https://www.python.org/downloads/)
[](https://opensource.org/licenses/MIT)
A command-line tool for managing cryptocurrency mnemonics using BIP39 and SLIP39 standards. This tool allows you to split, combine, and convert mnemonic phrases for secure key management.
## ✨ Features
- **BIP39 Support**: Generate, validate, and split BIP39 mnemonic phrases
- **SLIP39 Support**: Create Shamir Secret Sharing (SLIP39) shares from mnemonics
- **Flexible Splitting**: Deconstruct 24-word mnemonics into multiple 12-word parts
- **Share Reconstruction**: Reconstruct mnemonics from SLIP39 shares with threshold requirements
- **Digit Mode**: Convert mnemonics to/from numeric format for easier backup (1-indexed: BIP39 uses 1-2048, SLIP39 uses 1-1024)
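The 1-indexed digit mapping can be sketched as follows. This uses a toy five-word list purely to illustrate the indexing convention; a real implementation would use the official 2048-word BIP39 wordlist, and this is not interstellar's actual code:

```python
# Toy wordlist for illustration only -- NOT the real BIP39 wordlist.
TOY_WORDLIST = ["abandon", "ability", "able", "about", "above"]

def words_to_digits(mnemonic: str, wordlist: list[str]) -> list[int]:
    """Map each word to its 1-indexed position (BIP39: 1-2048)."""
    return [wordlist.index(w) + 1 for w in mnemonic.split()]

def digits_to_words(digits: list[int], wordlist: list[str]) -> str:
    """Inverse mapping: 1-indexed positions back to words."""
    return " ".join(wordlist[d - 1] for d in digits)

digits = words_to_digits("abandon able above", TOY_WORDLIST)
print(digits)  # [1, 3, 5]
print(digits_to_words(digits, TOY_WORDLIST))  # abandon able above
```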
## 💖 Support
Love this tool? Your support means the world! ❤️
<table align="center">
<tr>
<th>Currency</th>
<th>Address</th>
<th>QR</th>
</tr>
<tr>
<td><strong>₿ BTC</strong></td>
<td><code>bc1qwn7ea6s8wqx66hl5rr2supk4kv7qtcxnlqcqfk</code></td>
<td><img src="assets/qr_btc.png" width="80" /></td>
</tr>
<tr>
<td><strong>Ξ ETH</strong></td>
<td><code>0x7cdB1861AC1B4385521a6e16dF198e7bc43fDE5f</code></td>
<td><img src="assets/qr_eth.png" width="80" /></td>
</tr>
<tr>
<td><strong>ɱ XMR</strong></td>
<td><code>463fMSWyDrk9DVQ8QCiAir8TQd4h3aRAiDGA8CKKjknGaip7cnHGmS7bQmxSiS2aYtE9tT31Zf7dSbK1wyVARNgA9pkzVxX</code></td>
<td><img src="assets/qr_xmr.png" width="80" /></td>
</tr>
<tr>
<td><strong>◈ BNB</strong></td>
<td><code>0x7cdB1861AC1B4385521a6e16dF198e7bc43fDE5f</code></td>
<td><img src="assets/qr_bnb.png" width="80" /></td>
</tr>
</table>
## 📦 Installation
### Homebrew (macOS/Linux)
```bash
brew tap alkalescent/tap
brew install interstellar
```
### PyPI (Recommended)
```bash
uv pip install interstellar
```
After installation, use either the command directly or as a Python module:
```bash
# Direct command
interstellar --help
# As Python module (if direct command not in PATH)
uv run python -m interstellar --help
```
### From Source
Clone the repository and install in development mode:
```bash
git clone https://github.com/alkalescent/interstellar.git
cd interstellar
make install DEV=1 # Install with dev dependencies
```
### Pre-built Binaries
Download from [GitHub Releases](https://github.com/alkalescent/interstellar/releases):
| Variant | Description | Startup | Format |
|---------|-------------|---------|--------|
| **Portable** | Single file, no installation needed | ~10 sec | `interstellar-{os}-portable` |
| **Fast** | Optimized for speed | ~1 sec | `interstellar-{os}-fast.tar.gz` |
> **Note**: In the filenames and commands, replace `{os}` with your operating system (e.g., `linux`, `macos`). The examples below use `linux`. For Windows, you may need to use a tool like 7-Zip to extract `.tar.gz` archives.
For **Portable**, download and run directly:
```bash
chmod +x interstellar-linux-portable
./interstellar-linux-portable --help
```
For **Fast**, extract the archive and run from within:
```bash
tar -xzf interstellar-linux-fast.tar.gz
./cli.dist/interstellar --help
```
### Build from Source
Build your own binaries using [Nuitka](https://nuitka.net/):
```bash
git clone https://github.com/alkalescent/interstellar.git
cd interstellar
# Build portable (single file, slower startup)
MODE=onefile make build
# Build fast (directory, faster startup)
MODE=standalone make build
```
## 🚀 Usage
The CLI provides two main commands: `deconstruct` and `reconstruct`.
### Deconstruct Command
Split a BIP39 mnemonic into multiple parts or SLIP39 shares.
**From command line:**
```bash
interstellar deconstruct --mnemonic "your 24 word mnemonic phrase here..."
```
**From file:**
```bash
interstellar deconstruct --filename seed.txt
```
**Options:**
- `--mnemonic`: BIP39 mnemonic to deconstruct (default: empty, reads from file)
- `--filename`: File containing the BIP39 mnemonic (default: empty)
- `--standard`: Output format: `BIP39` or `SLIP39` (default: `SLIP39`)
- `--required`: Required shares for SLIP39 reconstruction (default: `2`)
- `--total`: Total SLIP39 shares to generate (default: `3`)
- `--digits`: Output numeric format instead of words (default: `false`)
**Output Format (JSON):**
For BIP39:
```json
[
{"standard": "BIP39", "mnemonic": "first part words..."},
{"standard": "BIP39", "mnemonic": "second part words..."}
]
```
For SLIP39:
```json
{
"standard": "SLIP39",
"shares": [
["share1 group1", "share2 group1", "share3 group1"],
["share1 group2", "share2 group2", "share3 group2"]
]
}
```
**Example: Create SLIP39 shares**
```bash
interstellar deconstruct \
--mnemonic "word1 word2 ... word24" \
--standard SLIP39 \
--required 2 \
--total 3
```
**Example: Split into BIP39 parts**
```bash
interstellar deconstruct \
--mnemonic "word1 word2 ... word24" \
--standard BIP39
```
### Reconstruct Command
Reconstruct a BIP39 mnemonic from shares or parts.
**From command line (semicolon and comma delimited):**
```bash
interstellar reconstruct --shares "group1_share1,group1_share2;group2_share1,group2_share2"
```
**From file:**
```bash
interstellar reconstruct --filename shares.txt
```
**Options:**
- `--shares`: Shares to reconstruct, formatted as semicolon-separated groups with comma-separated shares (default: empty, reads from file)
- `--filename`: File containing shares (default: empty)
- `--standard`: Input format: `BIP39` or `SLIP39` (default: `SLIP39`)
- `--digits`: Input is in numeric format (default: `false`)
**Output Format (JSON):**
```json
{
"standard": "BIP39",
"mnemonic": "reconstructed 24 word mnemonic phrase..."
}
```
**Example: Reconstruct from SLIP39 shares (CLI)**
```bash
interstellar reconstruct \
--shares "group1_share1,group1_share2;group2_share1,group2_share2" \
--standard SLIP39
```
**Example: Reconstruct from file**
```bash
interstellar reconstruct --filename shares.txt --standard SLIP39
```
## 📁 Files
### Input Files
**For deconstruct command:**
The file should contain the mnemonic phrase:
```
word1 word2 word3 ... word24
```
**For reconstruct command:**
Shares should be grouped by line, with comma-separated shares within each group:
```
group1_share1,group1_share2
group2_share1,group2_share2
```
For example, with a 2-of-3 SLIP39 scheme split into 2 BIP39 parts:
```
academic acid ... (20 words),academic always ... (20 words)
academic arcade ... (20 words),academic axes ... (20 words)
```
### Command-Line Format
When using `--shares` on the command line:
- Use commas (`,`) to separate shares within a group
- Use semicolons (`;`) to separate groups
- Example: `"group1_share1,group1_share2;group2_share1,group2_share2"`
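The delimiter convention above can be sketched in a few lines; this is an illustrative parser, not interstellar's actual implementation:

```python
def parse_shares(arg: str) -> list[list[str]]:
    """Split a --shares argument into groups of shares.

    Semicolons separate groups; commas separate shares within a group.
    Illustrative sketch only -- not interstellar's real parser.
    """
    return [group.split(",") for group in arg.split(";")]

print(parse_shares("group1_share1,group1_share2;group2_share1,group2_share2"))
# [['group1_share1', 'group1_share2'], ['group2_share1', 'group2_share2']]
```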
### JSON Output
All commands output JSON for easy parsing and piping:
```bash
# Extract just the shares
interstellar deconstruct --filename seed.txt | jq -r '.shares'
# Extract the reconstructed mnemonic
interstellar reconstruct --filename shares.txt | jq -r '.mnemonic'
# Save output to file
interstellar deconstruct --mnemonic "word1 ..." > output.json
```
## 🧪 Testing
Run the test suite:
```bash
uv run python -m pytest -v
```
Run with coverage reporting (requires 90% coverage):
```bash
uv run python -m pytest --cov --cov-report=term-missing --cov-fail-under=90
```
## 🔐 Security
⚠️ **Important Security Considerations:**
- **Run on an airgapped machine or a fresh [Tails](https://tails.net) installation via USB with networking disabled**
- Never share your seed phrase or private keys
- Store mnemonic backups securely in multiple physical locations
- SLIP39 shares should be distributed to different secure locations
- Use the digit format for metal plate backups or other durable storage
- Always verify reconstructed mnemonics match the original
- This tool is for educational and personal use only
## 🏗️ Architecture
The CLI consists of the following modules:
- **`tools.py`**: Core BIP39 and SLIP39 implementation
- `BIP39` class: Mnemonic generation, validation, splitting
- `SLIP39` class: Shamir Secret Sharing implementation
- **`cli.py`**: Command-line interface using Typer
- `deconstruct`: Split mnemonics into parts/shares
- `reconstruct`: Rebuild mnemonics from parts/shares
- **`test_tools.py`** / **`test_cli.py`**: Comprehensive test suites
- BIP39 generation and roundtrip tests
- SLIP39 share creation and reconstruction
- CLI integration tests
## 📖 Examples
### Secure Backup Strategy
1. Generate a 24-word BIP39 mnemonic
2. Split it into 2 parts (two 12-word mnemonics)
3. Convert each part to SLIP39 shares (2-of-3)
4. Distribute 6 total shares across secure locations
5. To recover, you need 2 shares from each group (4 shares total)
```bash
# Deconstruct (outputs JSON)
interstellar deconstruct \
--mnemonic "abandon abandon ... art" \
--standard SLIP39 \
--required 2 \
--total 3 > shares.json
# Extract shares with jq
cat shares.json | jq -r '.shares'
# Reconstruct from file
interstellar reconstruct --filename backup_shares.txt --standard SLIP39
# Or from command line
interstellar reconstruct \
--shares "share1,share2;share3,share4" \
--standard SLIP39
```
## 🔒 Cryptotag Odin 7 Backup Guide
The [Cryptotag Odin 7](https://cryptotag.io/products/odin/) is a titanium hexagonal prism system designed for SLIP39 backups. This guide explains how to use interstellar with Odin 7 for secure 24-word BIP39 seed storage.
### Overview
A 24-word BIP39 seed is split into **2 wallet parts** (12 words each), and each part is converted to **3 SLIP39 shares** (20 words each). This creates **6 total shares** that fit perfectly on 6 Odin hexagons, with 1 test hexagon for verification.
| Wallet Part | Shares | Hexagons |
|-------------|--------|----------|
| Wallet 1 (words 1-12) | Share 1, 2, 3 | Hexagons 1-3 |
| Wallet 2 (words 13-24) | Share 1, 2, 3 | Hexagons 4-6 |
| Test | For verification | Hexagon 7 |
### Store: Creating Shares with --digits
```bash
# Generate SLIP39 shares in digit format for metal engraving
interstellar deconstruct \
--mnemonic "your 24 word seed phrase here..." \
--required 2 \
--total 3 \
--digits
```
Output structure:
```json
{
"shares": [
["wallet1_share1 (20 digits)", "wallet1_share2", "wallet1_share3"],
["wallet2_share1 (20 digits)", "wallet2_share2", "wallet2_share3"]
]
}
```
### Hexagon Face Layout
Each Odin hexagon has 6 faces. Here's how to engrave them:
```
┌───────────────────────────────────────────────────┐
│ FACE 1: METADATA │
├─────────────────────────┬─────────────────────────┤
│ Total: 3 Share: 1 │
│ Thresh: 2 Wallet: 1 │
└─────────────────────────┴─────────────────────────┘
┌───────────────────────────────────────────────────┐
│ FACE 2: WORDS 1-4 │
├────────────┬────────────┬────────────┬────────────┤
│ │
│ 1: [----] 2: [----] 3: [----] 4: [----] │
└────────────┴────────────┴────────────┴────────────┘
┌───────────────────────────────────────────────────┐
│ FACE 3: WORDS 5-8 │
├────────────┬────────────┬────────────┬────────────┤
│ │
│ 5: [----] 6: [----] 7: [----] 8: [----] │
└────────────┴────────────┴────────────┴────────────┘
... (Faces 4-6 continue pattern through word 20)
```
### Restore: Reconstructing from Shares
To recover your seed, you need **2 shares from each wallet part** (4 shares total).
1. Create a file `shares.txt` with your digit shares (comma-separated within each wallet group, one group per line):
```
123 456 789 ... 20 digits, 234 567 890 ... 20 digits
345 678 901 ... 20 digits, 456 789 012 ... 20 digits
```
Line 1 contains 2 shares from wallet 1, line 2 contains 2 shares from wallet 2.
2. Reconstruct:
```bash
interstellar reconstruct --filename shares.txt --digits
```
### Security Notes
- Store each hexagon in a **different physical location**
- With 2-of-3 threshold, losing 1 hexagon per wallet part is recoverable
- The test hexagon can verify your engraving is correct before distributing
## 📚 Dependencies
- `hdwallet`: HD wallet generation and derivation (subdependency of slip39)
- `mnemonic`: BIP39 mnemonic implementation
- `slip39`: SLIP39 Shamir Secret Sharing
- `typer`: Modern CLI framework
## 📄 License
MIT License - see [LICENSE](LICENSE) for details. | text/markdown | Krish Suchak | null | null | null | null | bip39, cryptocurrency, hdwallet, mnemonic, shamir, slip39 | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.13",
"Topic :: Security :: Cryptography"
] | [] | null | null | >=3.13 | [] | [] | [] | [
"mnemonic>=0.21",
"slip39>=14.0.2",
"typer>=0.20.1"
] | [] | [] | [] | [
"Homepage, https://github.com/alkalescent/interstellar",
"Repository, https://github.com/alkalescent/interstellar",
"Issues, https://github.com/alkalescent/interstellar/issues"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-21T10:14:30.183845 | interstellar-1.2.5.tar.gz | 76,314 | af/7c/2cc3299e95336025eeace068f64c2962787d9b9241a89a2c94fbf68f164c/interstellar-1.2.5.tar.gz | source | sdist | null | false | 8b12961ebcb39ef411000ff5bfbbb0f3 | 83668d35a573b68ab100f28d1e6e841c3092e7292ea3d801f17e86f2f7241541 | af7c2cc3299e95336025eeace068f64c2962787d9b9241a89a2c94fbf68f164c | MIT | [
"LICENSE"
] | 248 |
2.4 | edictum | 0.8.1 | Runtime safety for AI agents. Stop agents before they break things. | # Edictum
[](https://pypi.org/project/edictum/)
[](LICENSE)
[](https://pypi.org/project/edictum/)
**Runtime contract enforcement for AI agent tool calls.**
AI agents call tools with real-world side effects -- reading files, querying databases, executing commands. The standard defense is prompt engineering, but prompts are suggestions the LLM can ignore. Edictum enforces contracts at the decision-to-action seam: before a tool call executes, Edictum checks it against YAML contracts and denies it if it violates policy. The agent cannot bypass it.
This is not feature flags. This is not prompt guardrails. Edictum is a deterministic enforcement point for tool calls -- preconditions before execution, postconditions after, session limits across turns, and a full audit trail.
## Show Me
**contracts.yaml**
```yaml
apiVersion: edictum/v1
kind: ContractBundle
metadata:
name: my-policy
defaults:
mode: enforce
contracts:
- id: block-sensitive-reads
type: pre
tool: read_file
when:
args.path:
contains_any: [".env", ".secret", "credentials", ".pem", "id_rsa"]
then:
effect: deny
message: "Sensitive file '{args.path}' denied."
tags: [secrets, dlp]
```
**Python**
```python
import asyncio
from edictum import Edictum, EdictumDenied
async def read_file_fn(path):
return open(path).read()
async def main():
guard = Edictum.from_yaml("contracts.yaml")
try:
result = await guard.run("read_file", {"path": "/app/config.json"}, read_file_fn)
print(result)
except EdictumDenied as e:
print(f"Denied: {e}")
asyncio.run(main())
```
**CLI**
```bash
$ edictum validate contracts.yaml
contracts.yaml -- 1 contract (1 pre)
$ edictum check contracts.yaml --tool read_file --args '{"path": ".env"}'
DENIED by block-sensitive-reads
Message: Sensitive file '.env' denied.
Tags: secrets, dlp
Contracts evaluated: 1
```
**Framework integration (one adapter, same contracts)**
```python
from edictum import Edictum, Principal
from edictum.adapters.langchain import LangChainAdapter
guard = Edictum.from_yaml("contracts.yaml")
adapter = LangChainAdapter(guard, principal=Principal(role="analyst"))
wrapper = adapter.as_tool_wrapper()
# Wraps any LangChain tool -- preconditions, audit, and session limits apply automatically
```
## How It Works
1. **Write contracts in YAML.** Preconditions deny dangerous calls before execution. Postconditions check tool output after. Session limits cap total calls and retries.
2. **Attach to your agent framework.** One adapter line. Same contracts across all six frameworks.
3. **Compose and layer bundles.** Split contracts across files by concern. `from_yaml()` accepts multiple paths with deterministic merge semantics. Shadow-test updates with `observe_alongside` before promoting.
4. **Every tool call passes through the pipeline.** Preconditions, session limits, and principal context are evaluated. If any contract fails, the call is denied and never executes.
5. **Full audit trail.** Every evaluation produces a structured event with automatic secret redaction.
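Following the same schema as the `read_file` contract shown earlier, another precondition might gate a shell tool. The tool and argument names here (`run_command`, `args.command`) are illustrative, not part of Edictum:

```yaml
contracts:
  - id: block-destructive-commands
    type: pre
    tool: run_command          # hypothetical tool name
    when:
      args.command:            # hypothetical argument name
        contains_any: ["rm -rf", "mkfs", "dd if="]
    then:
      effect: deny
      message: "Destructive command '{args.command}' denied."
    tags: [shell, safety]
```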
## How It Compares
| Approach | Scope | Runtime enforcement | Audit trail |
|---|---|---|---|
| Prompt/output guardrails | Input/output text | No -- advisory only | No |
| API gateways / MCP proxies | Network transport | Yes -- at the proxy | Partial |
| Security scanners | Post-hoc analysis | No -- detection only | Yes |
| Manual if-statements | Per-tool, ad hoc | Yes -- scattered logic | No |
| **Edictum** | **Tool call contracts** | **Yes -- deterministic pipeline** | **Yes -- structured + redacted** |
## Framework Support
Edictum integrates with six agent frameworks. Same YAML contracts, same enforcement, different adapter patterns:
| Framework | Integration | PII Redaction | Complexity |
|-----------|------------|---------------|------------|
| LangChain + LangGraph | `as_tool_wrapper()` | Full interception | Low |
| OpenAI Agents SDK | `as_guardrails()` | Logged only | Medium |
| Agno | `as_tool_hook()` | Full interception | Low |
| Semantic Kernel | `register()` | Full interception | Medium-High |
| CrewAI | `register()` | Partial | High |
| Claude Agent SDK | `to_hook_callables()` | Logged only | Low |
See [Adapter Docs](https://acartag7.github.io/edictum/adapters/overview/) for setup, known limitations, and recommendations.
## Install
Requires Python 3.11+. Current version: **v0.8.0**.
```bash
pip install edictum # core (zero deps)
pip install edictum[yaml] # + YAML contract engine
pip install edictum[otel] # + OpenTelemetry span emission
pip install edictum[cli] # + validate/check/diff/replay CLI
pip install edictum[all] # everything
```
## Built-in Templates
```python
guard = Edictum.from_template("file-agent") # secret file protection, destructive cmd blocking
guard = Edictum.from_template("research-agent") # output PII detection, session limits
guard = Edictum.from_template("devops-agent") # role gates, ticket requirements, bash safety
```
## Demos & Examples
- **[edictum-demo](https://github.com/acartag7/edictum-demo)** -- Full scenario demos, adversarial tests, benchmarks, and Grafana observability
- **[Contract Patterns](https://acartag7.github.io/edictum/contracts/patterns/)** -- Real-world contract recipes by concern
- **[Framework Adapters](https://acartag7.github.io/edictum/adapters/overview/)** -- Integration guides for six frameworks
## Links
- [Documentation](https://acartag7.github.io/edictum/)
- [GitHub](https://github.com/acartag7/edictum)
- [PyPI](https://pypi.org/project/edictum/)
- [Changelog](CHANGELOG.md)
- [License](LICENSE) (MIT)
| text/markdown | null | Arnold Cartagena <cartagena.arnold@gmail.com> | null | null | null | null | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Topic :: Software Development :: Libraries"
] | [] | null | null | >=3.11 | [] | [] | [] | [
"agno>=1.0; extra == \"agno\"",
"agno>=1.0; extra == \"all\"",
"click>=8.0; extra == \"all\"",
"crewai>=0.80; extra == \"all\"",
"jsonschema>=4.20; extra == \"all\"",
"langchain-core>=0.3; extra == \"all\"",
"openai-agents>=0.1; extra == \"all\"",
"opentelemetry-api>=1.20.0; extra == \"all\"",
"opentelemetry-exporter-otlp>=1.20.0; extra == \"all\"",
"opentelemetry-sdk>=1.20.0; extra == \"all\"",
"pyyaml>=6.0; extra == \"all\"",
"rich>=13.0; extra == \"all\"",
"semantic-kernel>=1.0; extra == \"all\"",
"click>=8.0; extra == \"cli\"",
"jsonschema>=4.20; extra == \"cli\"",
"pyyaml>=6.0; extra == \"cli\"",
"rich>=13.0; extra == \"cli\"",
"crewai>=0.80; extra == \"crewai\"",
"coverage>=7.0; extra == \"dev\"",
"pytest-asyncio>=0.23; extra == \"dev\"",
"pytest>=8.0; extra == \"dev\"",
"ruff>=0.4; extra == \"dev\"",
"langchain-core>=0.3; extra == \"langchain\"",
"openai-agents>=0.1; extra == \"openai-agents\"",
"opentelemetry-api>=1.20.0; extra == \"otel\"",
"opentelemetry-exporter-otlp>=1.20.0; extra == \"otel\"",
"opentelemetry-sdk>=1.20.0; extra == \"otel\"",
"semantic-kernel>=1.0; extra == \"semantic-kernel\"",
"jsonschema>=4.20; extra == \"yaml\"",
"pyyaml>=6.0; extra == \"yaml\""
] | [] | [] | [] | [
"Homepage, https://docs.edictum.dev",
"Documentation, https://docs.edictum.dev",
"Repository, https://github.com/acartag7/edictum",
"Changelog, https://github.com/acartag7/edictum/releases"
] | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true} | 2026-02-21T10:13:59.715874 | edictum-0.8.1-py3-none-any.whl | 71,541 | 8d/5d/449f89c6fcf0489fb0c89d67505fa5d9bd113d710e6d900763a99bb3e4c3/edictum-0.8.1-py3-none-any.whl | py3 | bdist_wheel | null | false | cf4b5664534b41e32e96d91c6ff88b9e | 5bf528379eb07481c2230b654366e87298d28a5ac979414d1de6a7ac709d7b24 | 8d5d449f89c6fcf0489fb0c89d67505fa5d9bd113d710e6d900763a99bb3e4c3 | MIT | [
"LICENSE"
] | 254 |
2.4 | scanpng | 0.1.0 | PNG file analyzer (chunks, recalculated CRC, offsets, appended data) | # scanpng
Forensic PNG file analyzer:
- chunks
- CRC (recalculated)
- offsets
- appended data
Usage:
scanpng -path fichier.png
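The chunk walk and CRC recalculation can be reproduced with the standard library alone. The following is an independent sketch of the same idea, not scanpng's actual code:

```python
import struct
import zlib

PNG_SIG = b"\x89PNG\r\n\x1a\n"

def scan_png(data: bytes):
    """Walk PNG chunks; return (chunks, appended) where each chunk is
    (offset, type, length, crc_ok) and appended is any data after IEND."""
    assert data[:8] == PNG_SIG, "not a PNG file"
    chunks, pos = [], 8
    while pos < len(data):
        length, ctype = struct.unpack(">I4s", data[pos:pos + 8])
        payload = data[pos + 8:pos + 8 + length]
        (stored,) = struct.unpack(">I", data[pos + 8 + length:pos + 12 + length])
        # CRC-32 covers the chunk type plus the payload
        crc_ok = (zlib.crc32(ctype + payload) & 0xFFFFFFFF) == stored
        chunks.append((pos, ctype.decode("ascii"), length, crc_ok))
        pos += 12 + length
        if ctype == b"IEND":
            break
    return chunks, data[pos:]  # bytes after IEND = appended data

def chunk(ctype: bytes, payload: bytes) -> bytes:
    crc = zlib.crc32(ctype + payload) & 0xFFFFFFFF
    return struct.pack(">I", len(payload)) + ctype + payload + struct.pack(">I", crc)

# Build a minimal 1x1 grayscale PNG in memory, with bytes appended after IEND
ihdr = struct.pack(">IIBBBBB", 1, 1, 8, 0, 0, 0, 0)
png = (PNG_SIG + chunk(b"IHDR", ihdr) + chunk(b"IDAT", zlib.compress(b"\x00\x00"))
       + chunk(b"IEND", b"") + b"hidden payload")

chunks, appended = scan_png(png)
for off, ctype, length, ok in chunks:
    print(f"offset={off:3d}  type={ctype}  len={length}  crc={'OK' if ok else 'BAD'}")
print(f"appended data: {len(appended)} bytes")
```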
| text/markdown | DjCode | null | null | null | MIT | null | [] | [] | null | null | >=3.8 | [] | [] | [] | [
"colorama"
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.12.6 | 2026-02-21T10:13:52.788180 | scanpng-0.1.0.tar.gz | 2,972 | 49/9a/f439714654fd2f66798ee8ce6cbb821a0bd30765c598a24e32eb392e0bb4/scanpng-0.1.0.tar.gz | source | sdist | null | false | b9eba896e424de8375ce3eec6571bf4e | 89756d8fd9f6fa071f73386d3d7b94a9ced1669fcf4f72abffc90c3f144a4dfc | 499af439714654fd2f66798ee8ce6cbb821a0bd30765c598a24e32eb392e0bb4 | null | [] | 278 |
2.4 | locstat | 1.2.0 | CLI tool to count lines of code | # locstat
## Get source code line statistics quickly
* [Installation](#installation)
* [Usage](#usage)
* [Primary Action](#primary-action)
* [Parsing Filters](#parsing-filters)
* [Finer Parsing Controls](#finer-parsing-controls)
* [Emitting Results](#emitting-results)
* [Examples](#examples)
* [Customizations](#customizations)
* [License](#license)
* [Compatibility Notes](#compatibility-notes)
* [Acknowledgements](#acknowledgements)
## Installation
```bash
$ pip install locstat
```
## Usage
locstat is designed to be a CLI tool, invocable as the package name itself.
```bash
$ locstat [-h] (-v VERSION | -c CONFIG | -f FILE | -d DIR) [options]
```
### Primary Action
---
A primary action must be specified when invoking the tool through the command line, namely:
* **-v/--version**: Display the installed version and exit
* **-h/--help**: Display the help message and exit
* **-c/--config**: Display and optionally edit the configuration settings and exit
* **-rc/--restore-config**: Restore configuration settings
* **-f/--file**: Filepath to parse
* **-d/--dir**: Directory to parse
Note: These options are **mutually exclusive**
### Parsing Filters
---
#### Directories:
**-xd/--exclude-dir**: Directory paths following this flag will be ignored.
**-id/--include-dir**: Parse only the directories following this flag.
#### Files:
**-xf/--exclude-file**: Filepaths following this flag will be ignored.
**-if/--include-file**: Only filepaths following this flag will be parsed.
**-xt/--exclude-type**: File extensions following this flag will be ignored.
**-it/--include-type**: Only file extensions following this flag will be parsed.
### Finer Parsing Controls
---
**-pm/--parsing-mode**: Override default file parsing behaviour. Available options: MMAP, BUF, COMP.
1) **BUF**: Default parsing mode. Allocates a buffer of 4MB and reads files in chunks into this buffer.
2) **MMAP**: Map files into the virtual memory of the locstat process to reduce the number of syscalls. May improve performance with hot page caches or for larger source files. Uses `mmap` and `madvise` on Linux and Mac systems, or `CreateFileMapping` and `MapViewOfFile` on Windows.
3) **COMP**: Read the entire file at once without any buffering.
**-vb/--verbosity**: Amount of statistics to include in the final report. Available modes:
1) **BARE**: Default mode, count only total lines and lines of code.
2) **REPORT**: Additionally include language metadata, i.e. number of files, total lines, and lines of code per file extension parsed.
3) **DETAILED**: Additionally include language metadata and per directory and per file line statistics.
**-md/--max-depth**: Recursively scan sub-directories up to the given level. Negative values are treated as infinite depth. Defaults to -1.
**-mc/--min-chars**: Specify the minimum number of non-whitespace characters a line should have to be considered an LOC. Defaults to 1.
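The `--min-chars` rule is easy to state precisely in code. The sketch below is illustrative only, not locstat's implementation: a line counts as an LOC when it has at least `min_chars` non-whitespace characters.

```python
def count_lines(text: str, min_chars: int = 1) -> tuple[int, int]:
    """Return (total, loc): a line is an LOC if it has at least
    min_chars non-whitespace characters."""
    lines = text.splitlines()
    loc = sum(1 for line in lines
              if sum(1 for c in line if not c.isspace()) >= min_chars)
    return len(lines), loc

src = "x = 1\n\n   \n}\nprint(x)\n"
print(count_lines(src))               # whitespace-only lines are excluded
print(count_lines(src, min_chars=2))  # one-character lines like "}" excluded too
```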
### Emitting Results
---
**-o/--output**: Specify an output file to write counts to. If not specified, output is written to `stdout`. If the output file is a JSON file, the results are formatted as JSON.
## Examples
Let's run locstat against a cloned repository of `cpython-main`
```bash
$ locstat -d /home/tcn/targets/cpython-main/
GENERAL:
total : 2155349
loc : 1722995
time : 0.913s
scanned_at : 19/02/26, at 16:05:11
platform : Linux
```
Additionally, we can fetch per-extension metadata using the `REPORT` verbosity mode.
```bash
$ locstat -d /home/tcn/targets/cpython-main/ -vb REPORT
GENERAL:
total : 2155349
loc : 1722995
time : 0.204s
scanned_at : 19/02/26, at 16:06:15
platform : Linux
LANGUAGE METADATA
Extension Files Total LOC
---------------------------------
py 2211 1088097 851618
bat 32 2327 1959
ps1 6 571 451
sh 14 918 582
css 6 3248 2570
c 485 652306 498587
h 637 356061 319201
js 10 2431 1775
html 18 10327 9132
xml 119 31796 31615
m 10 1029 807
xsl 2 33 16
cpp 7 4989 3796
vbs 1 1 0
pyi 1 4 2
asm 1 46 18
lisp 1 692 502
ts 2 37 32
kts 3 307 236
kt 2 129 96
```
**Note**: The drop in scanning time in the second example is thanks to page caching following the first example.
## Customizations
locstat allows for default behaviour to be overridden per invocation, such as:
```bash
locstat -f foo.py --min-chars 2
```
This overrides the default minimum-characters threshold of 1.
Furthermore, changes to default values can be saved permanently using the `--config` (shorthand: `-c`) flag.
When invoked without any arguments, `--config` displays the current default configurations for locstat.
```bash
$ locstat --config
max_depth : -1
minimum_characters : 1
parsing_mode : BUF
verbosity : BARE
```
To update any value, append the flag with the option name and its new value as a space-separated pair.
```bash
$ locstat --config max_depth 5 parsing_mode MMAP verbosity report
$ locstat --config
max_depth : 5
minimum_characters : 1
parsing_mode : MMAP
verbosity : REPORT
```
**Note**: String enums are case-insensitive.
## License
locstat is licensed under the MIT license.
## Compatibility Notes
locstat ships via `cibuildwheel`, providing platform-specific wheels for Linux, Windows, and Mac across several architectures.
However, some platforms have been excluded due to cross-compatibility pains or extremely long queue times on workflow runners (such as Mac systems using Intel chips).
This does not mean locstat is incompatible with the remaining systems: the source distribution is also shipped alongside the wheels.
## Acknowledgements
locstat (formerly named pycloc, but that name was taken 2 weeks before I got to publishing :P) started as a one-day project to allow a good friend to count how many lines of C++ code were present in his UE5 project.
Since then, many features and improvements have been added to this still simple tool, all of which would have been impossible without the collective effort of thousands of open-source contributors, spanning across many projects such as CPython and Perl Cloc (the gold standard!) to even make programming possible for me.
Furthermore, the online help provided by thousands of contributors on StackOverflow, HackerNews and countless personal technical blogs has made it possible for me to polish and refine this tool.
My deepest gratitude goes to everyone, however indirectly involved, in the sphere of computer programming. Even for the smallest steps I take, I stand nonetheless on the shoulders of countless giants.
| text/markdown | null | Parth Acharya <parthacharya@protonmail.com> | null | null | The MIT License (MIT)
Copyright © 2026 Parth Acharya
Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the “Software”), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED “AS IS”, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
| locstat, py-locstat, loc, cloc, parsing, py-cloc, pycloc | [
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Operating System :: OS Independent",
"License :: OSI Approved :: MIT License"
] | [] | null | null | >=3.11 | [] | [] | [] | [
"pytest; extra == \"dev\"",
"build; extra == \"dev\"",
"pre-commit; extra == \"dev\""
] | [] | [] | [] | [
"Repository, https://github.com/parthacharyaaaaa/locstat",
"Issues, https://github.com/parthacharyaaaaa/locstat/issues"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-21T10:13:48.197841 | locstat-1.2.0.tar.gz | 24,779 | 55/46/69f4e2bfc10c62372864f06ef58bf5e714749aa205a337e742db446cc624/locstat-1.2.0.tar.gz | source | sdist | null | false | a1e0f3e6e7534a6dbb2fd0384e20b145 | 4ca3812b91ec41cc187fb7c8fc6ca3cb5c0db3cd906434d9d7e4d7af0943462e | 554669f4e2bfc10c62372864f06ef58bf5e714749aa205a337e742db446cc624 | null | [
"LICENSE"
] | 2,396 |
2.4 | rusket | 0.1.19 | Blazing-fast FP-Growth and Association Rules — pure Rust via PyO3 | <p align="center">
<img src="docs/assets/logo.svg" alt="rusket logo" width="200" height="200" />
</p>
<p align="center">
<strong>Blazing-fast Market Basket Analysis and Recommender Engines (ALS, BPR, FP-Growth, PrefixSpan) for Python, powered by Rust.</strong>
</p>
<p align="center">
<a href="https://pypi.org/project/rusket/"><img src="https://img.shields.io/pypi/v/rusket?color=%2334D058&logo=pypi&logoColor=white" alt="PyPI"></a>
<a href="https://www.python.org/"><img src="https://img.shields.io/badge/python-3.10%2B-blue?logo=python&logoColor=white" alt="Python"></a>
<a href="https://www.rust-lang.org/"><img src="https://img.shields.io/badge/rust-1.83%2B-orange?logo=rust" alt="Rust"></a>
<a href="LICENSE"><img src="https://img.shields.io/badge/license-BSD--3--Clause-green" alt="License"></a>
<a href="https://bmsuisse.github.io/rusket/"><img src="https://img.shields.io/badge/docs-MkDocs-blue" alt="Docs"></a>
</p>
---
`rusket` is a high-performance library for **Market Basket Analysis**, **Graph Analytics**, and **Recommender Engines**, backed by a **Rust core** (via [PyO3](https://pyo3.rs/)) that delivers **2–15× speed-ups** and dramatically lower memory usage.
It features **Alternating Least Squares (ALS)** and **Bayesian Personalized Ranking (BPR)** for collaborative filtering, as well as **FP-Growth** (parallel via Rayon), **Eclat** (vertical bitset mining), and **PrefixSpan** (sequential pattern mining). It serves as a **drop-in replacement** for [`mlxtend`](https://rasbt.github.io/mlxtend/)'s APIs, natively supporting **Pandas** (including Arrow backend), **Polars**, and **sparse DataFrames** out of the box.
---
## ✨ Highlights
| | `rusket` | `mlxtend` |
|---|---|---|
| **Core language** | Rust (PyO3) | Pure Python |
| **Algorithms** | ALS, BPR, PrefixSpan, FP-Growth, Eclat | FP-Growth only |
| **Recommender API** | ✅ Hybrid Engine + i2i Similarity | ❌ |
| **Graph & Embeddings** | ✅ NetworkX Export, Vector DB Export | ❌ |
| **Pandas dense input** | ✅ C-contiguous `np.uint8` | ✅ |
| **Pandas Arrow backend** | ✅ Arrow zero-copy (pandas 2.0+) | ❌ Not supported |
| **Pandas sparse input** | ✅ Zero-copy CSR → Rust | ❌ Densifies first |
| **Polars input** | ✅ Arrow zero-copy | ❌ Not supported |
| **Parallel mining** | ✅ Rayon work-stealing | ❌ Single-threaded |
| **Memory** | Low (native Rust buffers) | High (Python objects) |
| **API compatibility** | ✅ Drop-in replacement | — |
| **Metrics** | 12 built-in metrics | 9 |
---
## 📦 Installation
```bash
pip install rusket
# or with uv:
uv add rusket
```
**Optional extras:**
```bash
# Polars support
pip install "rusket[polars]"
# Pandas/NumPy support (usually already installed)
pip install "rusket[pandas]"
```
---
## 🚀 Quick Start
### Basic — Pandas
```python
import pandas as pd
from rusket import mine, association_rules
# One-hot encoded boolean DataFrame
data = {
"bread": [1, 1, 0, 1, 1],
"butter": [1, 0, 1, 1, 0],
"milk": [1, 1, 1, 0, 1],
"eggs": [0, 1, 1, 0, 1],
"cheese": [0, 0, 1, 0, 0],
}
df = pd.DataFrame(data).astype(bool)
# 1. Mine frequent itemsets
# method="auto" automatically selects FP-Growth or Eclat based on dataset density
freq = mine(df, min_support=0.4, use_colnames=True)
# 2. Generate association rules
rules = association_rules(
freq,
num_itemsets=len(df),
metric="confidence",
min_threshold=0.6,
)
print(rules[["antecedents", "consequents", "support", "confidence", "lift"]]
.sort_values("lift", ascending=False))
```
---
### 🛒 Transaction Data (Long Format)
Real-world data comes as `(transaction_id, item)` rows — not one-hot matrices. Use the built-in helpers to convert:
```python
import pandas as pd
from rusket import from_transactions, mine
# Long-format transactional data
df = pd.DataFrame({
"order_id": [1, 1, 1, 2, 2, 3],
"item": [3, 4, 5, 3, 5, 8],
})
# Convert to one-hot boolean matrix
ohe = from_transactions(df)
# Mine!
freq = mine(ohe, min_support=0.3, use_colnames=True)
print(freq)
```
Or use the explicit helpers for type clarity:
```python
from rusket import from_pandas, from_polars
ohe = from_pandas(df) # Pandas DataFrame
ohe = from_polars(pl_df) # Polars DataFrame
ohe = from_transactions([[3, 4], [3, 5]]) # list of lists
```
> **Spark** is also supported: `from_spark(spark_df)` calls `.toPandas()` internally.
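Conceptually, `from_transactions` performs a one-hot pivot from `(transaction_id, item)` pairs to a boolean matrix. A stdlib sketch of that transformation (illustrative only — rusket's real path is zero-copy and far faster):

```python
rows = [(1, "bread"), (1, "milk"), (2, "bread"), (3, "eggs")]

# Stable, sorted row and column orderings
txns = sorted({t for t, _ in rows})
items = sorted({i for _, i in rows})
col = {item: j for j, item in enumerate(items)}
idx = {txn: i for i, txn in enumerate(txns)}

# Fill the one-hot boolean matrix
matrix = [[False] * len(items) for _ in txns]
for t, item in rows:
    matrix[idx[t]][col[item]] = True

print(items)                 # column order
for t, r in zip(txns, matrix):
    print(t, r)
```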
---
### ⚡ Eclat — Vertical Mining
`eclat` uses vertical bitset representation + hardware `popcnt` for fast support counting. Ideal for **sparse retail basket** data.
```python
import pandas as pd
from rusket import eclat, association_rules
df = pd.DataFrame({
"bread": [True, True, False, True, True],
"butter": [True, False, True, True, False],
"milk": [True, True, True, False, True],
"eggs": [False, True, True, False, True],
})
# Eclat — same API as fpgrowth
freq = eclat(df, min_support=0.4, use_colnames=True)
rules = association_rules(freq, num_itemsets=len(df), min_threshold=0.6)
print(rules)
```
#### When to use which?
You almost always want to use `rusket.mine(method="auto")`. It evaluates the density of your dataset, `nnz / (rows * cols)`, using the [Borgelt heuristic (2003)](https://borgelt.net/doc/eclat/eclat.html) to pick the best algorithm under the hood:
| Scenario | Algorithm chosen by `method="auto"` |
|---|---|
| Very sparse data (density < 0.15) | `eclat` (bitset/SIMD intersections) |
| Dense data (density > 0.15) | `fpgrowth` (FP-tree traversals) |
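The density check itself is a one-liner to reproduce. A plain-Python sketch (not rusket's internal code; the 0.15 threshold is the one documented above):

```python
def choose_method(matrix, threshold: float = 0.15) -> str:
    """Pick a mining algorithm from the nnz / (rows * cols) density."""
    rows = len(matrix)
    cols = len(matrix[0]) if rows else 0
    nnz = sum(sum(1 for v in row if v) for row in matrix)
    density = nnz / (rows * cols) if rows and cols else 0.0
    return "eclat" if density < threshold else "fpgrowth"

sparse = [[1, 0, 0, 0, 0, 0, 0, 0]] * 4   # density 0.125
dense = [[1, 1, 0, 1]] * 4                # density 0.75
print(choose_method(sparse))  # eclat
print(choose_method(dense))   # fpgrowth
```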
---
### 🐻‍❄️ Polars Input
`rusket` natively accepts [Polars](https://pola.rs/) DataFrames. Data is transferred via Arrow zero-copy buffers — **no conversion overhead**.
```python
import polars as pl
import numpy as np
from rusket import fpgrowth, association_rules
# ── 1. Create a Polars DataFrame ────────────────────────────────────
rng = np.random.default_rng(0)
n_rows, n_cols = 20_000, 150
products = [f"product_{i:03d}" for i in range(n_cols)]
# Power-law popularity: top products appear often, tail products are rare
support = np.clip(0.5 / np.arange(1, n_cols + 1, dtype=float) ** 0.5, 0.005, 0.5)
matrix = rng.random((n_rows, n_cols)) < support
df_pl = pl.DataFrame({p: matrix[:, i].tolist() for i, p in enumerate(products)})
print(f"Polars DataFrame: {df_pl.shape[0]:,} rows × {df_pl.shape[1]} columns")
# ── 2. fpgrowth — same API as pandas ────────────────────────────────
freq = fpgrowth(df_pl, min_support=0.05, use_colnames=True)
print(f"Frequent itemsets: {len(freq):,}")
print(freq.sort_values("support", ascending=False).head(8))
# ── 3. Association rules ────────────────────────────────────────────
rules = association_rules(freq, num_itemsets=n_rows, metric="lift", min_threshold=1.1)
print(f"Rules: {len(rules):,}")
print(
rules[["antecedents", "consequents", "confidence", "lift"]]
.sort_values("lift", ascending=False)
.head(6)
)
```
Or more concisely — just read a Parquet file:
```python
import polars as pl
from rusket import mine
df = pl.read_parquet("transactions.parquet")
freq = mine(df, min_support=0.05, use_colnames=True)
```
> **How it works under the hood:**
> Polars → Arrow buffer → `np.uint8` (zero-copy) → Rust `fpgrowth_from_dense`
---
### 📊 Sparse Pandas Input
For very sparse datasets (e.g. e-commerce with thousands of SKUs), use Pandas `SparseDtype` to minimize memory. `rusket` passes the raw CSR arrays straight to Rust — **no densification ever happens**.
```python
import pandas as pd
import numpy as np
from rusket import mine
rng = np.random.default_rng(7)
n_rows, n_cols = 30_000, 500
# Very sparse: average basket size ≈ 3 items out of 500
p_buy = 3 / n_cols
matrix = rng.random((n_rows, n_cols)) < p_buy
products = [f"sku_{i:04d}" for i in range(n_cols)]
df_dense = pd.DataFrame(matrix.astype(bool), columns=products)
df_sparse = df_dense.astype(pd.SparseDtype("bool", fill_value=False))
dense_mb = df_dense.memory_usage(deep=True).sum() / 1e6
sparse_mb = df_sparse.memory_usage(deep=True).sum() / 1e6
print(f"Dense memory: {dense_mb:.1f} MB")
print(f"Sparse memory: {sparse_mb:.1f} MB ({dense_mb / sparse_mb:.1f}× smaller)")
# Same API, same results — just faster and lighter
freq = mine(df_sparse, min_support=0.01, use_colnames=True)
print(f"Frequent itemsets: {len(freq):,}")
```
> **How it works under the hood:**
> Sparse DataFrame → COO → CSR → `(indptr, indices)` → Rust `fpgrowth_from_csr`
---
### 🌊 Out-of-Core Processing (FPMiner Streaming)
For datasets scaling to **billion-row** sizes that don't fit in memory, use the `FPMiner` accumulator. It accepts chunks of `(txn_id, item_id)` pairs, sorts each chunk in place as it arrives, and uses a memory-safe **k-way merge** across all chunks to build the CSR matrix on the fly, avoiding massive memory spikes.
```python
import numpy as np
from rusket import FPMiner
n_items = 5_000
miner = FPMiner(n_items=n_items)
# Feed chunks incrementally (e.g. from Parquet/CSV/SQL)
for chunk in dataset:
txn_ids = chunk["txn_id"].to_numpy(dtype=np.int64)
item_ids = chunk["item_id"].to_numpy(dtype=np.int32)
# Fast O(k log k) per-chunk sort
miner.add_chunk(txn_ids, item_ids)
# Stream k-way merge and mine in one pass!
# Returns a DataFrame with 'support' and 'itemsets' just like fpgrowth()
freq = miner.mine(min_support=0.001, max_len=3)
```
**Memory efficiency:** The peak memory overhead at `mine()` time is just $O(k)$ for the cursors (where $k$ is the number of chunks), plus the final compressed CSR allocation.
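The k-way merge can be pictured with `heapq.merge`: one cursor per sorted chunk, with rows for the same transaction grouped as they stream past. This is a conceptual stdlib sketch, not FPMiner's Rust implementation:

```python
import heapq
from itertools import groupby

# Three pre-sorted chunks of (txn_id, item_id) pairs, as add_chunk would hold them
chunks = [
    [(0, 3), (1, 5), (2, 8)],
    [(0, 4), (2, 3)],
    [(1, 3), (1, 9)],
]

# Stream-merge all chunks; memory overhead is one cursor per chunk
merged = heapq.merge(*chunks)

# Build CSR-style structures on the fly: one run of column indices per transaction
indptr, indices = [0], []
for txn_id, pairs in groupby(merged, key=lambda p: p[0]):
    indices.extend(item for _, item in pairs)
    indptr.append(len(indices))

print(indptr)   # row offsets
print(indices)  # column indices, transaction by transaction
```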
---
### 🔄 Migrating from mlxtend
`rusket` is a **drop-in replacement**. The only API difference is `num_itemsets`:
```diff
- from mlxtend.frequent_patterns import fpgrowth, association_rules
+ from rusket import mine, association_rules
- freq = fpgrowth(df, min_support=0.05, use_colnames=True)
+ freq = mine(df, min_support=0.05, use_colnames=True)
- rules = association_rules(freq, metric="lift", min_threshold=1.2)
+ rules = association_rules(freq, num_itemsets=len(df), # ← add this
+ metric="lift", min_threshold=1.2)
```
> **Why `num_itemsets`?** This makes support calculation explicit and avoids a hidden internal pandas join that `mlxtend` performs.
**Gotchas:**
1. Input must be `bool` or `0/1` integers — `rusket` warns if you pass floats
2. Polars is supported natively — just pass the DataFrame directly
3. Sparse pandas DataFrames work too — and use much less RAM
---
## 📖 API Reference
### `mine`
```python
rusket.mine(
df,
min_support: float = 0.5,
null_values: bool = False,
use_colnames: bool = False,
max_len: int | None = None,
method: str = "auto",
verbose: int = 0,
) -> pd.DataFrame
```
Dynamically selects the optimal mining algorithm using a dataset-density heuristic. It is highly recommended to use this instead of calling `fpgrowth` or `eclat` directly.
| Parameter | Type | Description |
|-----------|------|-------------|
| `df` | `pd.DataFrame` \| `pl.DataFrame` \| `np.ndarray` | One-hot encoded input (bool / 0-1). Dense, sparse, or Polars. |
| `min_support` | `float` | Minimum support threshold in `(0, 1]`. |
| `null_values` | `bool` | Allow NaN values in `df` (pandas only). |
| `use_colnames` | `bool` | Return column names instead of integer indices in itemsets. |
| `max_len` | `int \| None` | Maximum itemset length. `None` = unlimited. |
| `method` | `"auto" \| "fpgrowth" \| "eclat"` | Algorithm to use. `"auto"` selects Eclat when density < 0.15, otherwise FP-Growth. |
| `verbose` | `int` | Verbosity level. |
**Returns** a `pd.DataFrame` with columns `['support', 'itemsets']`.
---
### `fpgrowth`
```python
rusket.fpgrowth(
df,
min_support: float = 0.5,
null_values: bool = False,
use_colnames: bool = False,
max_len: int | None = None,
verbose: int = 0,
) -> pd.DataFrame
```
| Parameter | Type | Description |
|-----------|------|-------------|
| `df` | `pd.DataFrame` \| `pl.DataFrame` \| `np.ndarray` | One-hot encoded input (bool / 0-1). Dense, sparse, or Polars. |
| `min_support` | `float` | Minimum support threshold in `(0, 1]`. |
| `null_values` | `bool` | Allow NaN values in `df` (pandas only). |
| `use_colnames` | `bool` | Return column names instead of integer indices in itemsets. |
| `max_len` | `int \| None` | Maximum itemset length. `None` = unlimited. |
| `verbose` | `int` | Verbosity level (kept for API compatibility with mlxtend). |
**Returns** a `pd.DataFrame` with columns `['support', 'itemsets']`.
---
### `eclat`
```python
rusket.eclat(
df,
min_support: float = 0.5,
null_values: bool = False,
use_colnames: bool = False,
max_len: int | None = None,
verbose: int = 0,
) -> pd.DataFrame
```
Same parameters as `fpgrowth`. Uses vertical bitset representation (Eclat algorithm) instead of FP-Tree.
**Returns** a `pd.DataFrame` with columns `['support', 'itemsets']`.
---
### `association_rules`
```python
rusket.association_rules(
df,
num_itemsets: int,
metric: str = "confidence",
min_threshold: float = 0.8,
support_only: bool = False,
return_metrics: list[str] = [...], # all 12 metrics by default
) -> pd.DataFrame
```
| Parameter | Type | Description |
|-----------|------|-------------|
| `df` | `pd.DataFrame` | Output from `fpgrowth()`. |
| `num_itemsets` | `int` | Number of transactions in the original dataset (`len(df_original)`). |
| `metric` | `str` | Metric to filter rules on (see table below). |
| `min_threshold` | `float` | Minimum value of `metric` to include a rule. |
| `support_only` | `bool` | Only compute support; fill other columns with `NaN`. |
| `return_metrics` | `list[str]` | Subset of metrics to include in the result. |
**Returns** a `pd.DataFrame` with columns `antecedents`, `consequents`, plus all requested metric columns.
#### Available Metrics
| Metric | Formula / Description |
|--------|----------------------|
| `support` | P(A ∪ B) |
| `confidence` | P(B \| A) |
| `lift` | confidence / P(B) |
| `leverage` | support − P(A)·P(B) |
| `conviction` | (1 − P(B)) / (1 − confidence) |
| `zhangs_metric` | Symmetrical correlation measure |
| `jaccard` | Jaccard similarity between A and B |
| `certainty` | Certainty factor |
| `kulczynski` | Average of P(B\|A) and P(A\|B) |
| `representativity` | Rule coverage across transactions |
| `antecedent support` | P(A) |
| `consequent support` | P(B) |
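To make the first four metrics concrete, here they are computed by hand for a single rule A → B over five toy transactions, in plain Python following the formulas in the table:

```python
transactions = [
    {"bread", "butter", "milk"},
    {"bread", "milk"},
    {"butter", "milk"},
    {"bread", "butter"},
    {"bread", "milk"},
]
n = len(transactions)

def support(items: set) -> float:
    """Fraction of transactions containing all the given items."""
    return sum(1 for t in transactions if items <= t) / n

A, B = {"bread"}, {"milk"}
s_ab = support(A | B)                      # support = P(A ∪ B)
conf = s_ab / support(A)                   # confidence = P(B | A)
lift = conf / support(B)                   # confidence / P(B)
leverage = s_ab - support(A) * support(B)  # support − P(A)·P(B)
print(f"support={s_ab:.2f} confidence={conf:.2f} lift={lift:.4f} leverage={leverage:.3f}")
```

This also shows why `association_rules` needs `num_itemsets`: every metric above is normalized by the original transaction count `n`.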
---
### `from_transactions`
```python
rusket.from_transactions(
data,
transaction_col: str | None = None,
item_col: str | None = None,
) -> pd.DataFrame
```
Converts long-format transactional data to a one-hot boolean matrix. Accepts Pandas DataFrames, Polars DataFrames, Spark DataFrames, or `list[list[...]]`.
### `from_pandas` / `from_polars` / `from_spark`
Explicit typed variants of `from_transactions` for specific DataFrame types:
```python
rusket.from_pandas(df, transaction_col=None, item_col=None) -> pd.DataFrame
rusket.from_polars(df, transaction_col=None, item_col=None) -> pd.DataFrame
rusket.from_spark(df, transaction_col=None, item_col=None) -> pd.DataFrame
```
---
## 🧠 Advanced Pattern & Recommendation Algorithms
`rusket` provides more than just basic market basket analysis. It includes an entire suite of modern algorithms and a high-level Business Recommender API.
### 🎯 Hybrid Recommender API
Combine the serendipity of **Collaborative Filtering** (ALS/BPR) with the strict, deterministic logic of **Frequent Pattern Mining**.
```python
from rusket import ALS, Recommender, mine, association_rules
# 1. Train your Collaborative Filtering model
als = ALS(factors=64).fit(user_item_matrix)
# 2. Mine your Association Rules
rules = association_rules(mine(user_item_matrix))
# 3. Create the Hybrid Engine
rec = Recommender(als_model=als, rules_df=rules)
# Personalized recommendations for a user (ALS)
items, scores = rec.recommend_for_user(user_id=42, n=5)
# Next Best Action for an active shopping cart (Association Rules)
cross_sell = rec.recommend_for_cart([14, 7], n=3)
```
### 📈 BPR & Sequential Patterns
- **BPR (Bayesian Personalized Ranking):** Optimize for implicit feedback (clicks, views, purchases) directly by learning the ranking order of items instead of minimizing error.
- **Sequential Pattern Mining (PrefixSpan):** Look at purchases over time instead of just single transactions (e.g., "Customer bought a Camera -> 1 month later bought a Lens").
### 🕸️ Graph Analytics & Embeddings
Integrate natively with the modern GenAI/LLM stack:
- **Vector Export:** Export user/item factors to a Pandas `DataFrame` ready for FAISS/Qdrant using `rusket.export_item_factors`.
- **Item-to-Item Similarity:** Fast Cosine Similarity on embeddings using `rusket.similar_items(als_model, item_id)`.
- **Graph Generation:** Automatically convert association rules into a `networkx` directed Graph for community detection using `rusket.viz.to_networkx(rules)`.
---
## ⚡ Benchmarks
### Scale Benchmarks (1M → 200M rows)
| Scale | `from_transactions` → fpgrowth | Direct CSR → Rust | **Speedup** |
|---|:---:|:---:|:---:|
| 1M rows | 5.0s | **0.1s** | **50×** |
| 10M rows | 24.4s | **1.7s** | **14×** |
| 50M rows | 63.1s | **10.9s** | **6×** |
| 100M rows (20M txns × 200k items) | 134.2s | **25.9s** | **5×** |
| 200M rows (40M txns × 200k items) | 246.8s | **73.1s** | **3×** |
#### Power-user path: Direct CSR → Rust
```python
import numpy as np
from scipy import sparse as sp
from rusket import mine
# Build CSR directly from integer IDs (no pandas!)
csr = sp.csr_matrix(
(np.ones(len(txn_ids), dtype=np.int8), (txn_ids, item_ids)),
shape=(n_transactions, n_items),
)
freq = mine(csr, min_support=0.001, max_len=3,
use_colnames=True, column_names=item_names)
```
> At 100M rows, the mining step takes **1.3 seconds** — the bottleneck is entirely the CSR build.
### Real-World Datasets
| Dataset | Transactions | Items | `rusket` | `mlxtend` | Speedup |
|---------|:----------:|:-----:|:--------:|:---------:|:-------:|
| [andi_data.txt](https://github.com/andi611/Apriori-and-Eclat-Frequent-Itemset-Mining) | 8,416 | 119 | **9.7 s** (22.8M itemsets) | **TIMEOUT** 💥 | ∞ |
| [andi_data2.txt](https://github.com/andi611/Apriori-and-Eclat-Frequent-Itemset-Mining) | 540,455 | 2,603 | **7.9 s** | 16.2 s | **2×** |
Run benchmarks yourself:
```bash
uv run python benchmarks/bench_scale.py # Scale benchmark + Plotly chart
uv run python benchmarks/bench_realworld.py # Real-world datasets
uv run pytest tests/test_benchmark.py -v -s # pytest-benchmark
```
---
## 🏗 Architecture
### Data Flow
```
pandas dense ──► np.uint8 array (C-contiguous) ──► Rust fpgrowth_from_dense
pandas Arrow backend ──► Arrow → np.uint8 (zero-copy) ──► Rust fpgrowth_from_dense
pandas sparse ──► CSR int32 arrays ──► Rust fpgrowth_from_csr
polars ──► Arrow → np.uint8 (zero-copy) ──► Rust fpgrowth_from_dense
numpy ndarray ──► np.uint8 (C-contiguous) ──► Rust fpgrowth_from_dense
```
All mining and rule generation happens **inside Rust**. No Python loops, no round-trips.
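For the dense paths, the diagram boils down to getting a C-contiguous `np.uint8` array. A rough sketch of that normalization in plain NumPy/pandas (an illustration of the required layout, not rusket's actual internals):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({"milk": [True, False, True],
                   "bread": [True, True, False]})

# What the dense path needs: uint8 values in a C-contiguous buffer.
dense = np.ascontiguousarray(df.to_numpy(dtype=np.uint8))
print(dense.dtype, dense.flags["C_CONTIGUOUS"])  # uint8 True
```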
### Project Structure
```
├── src/ # Rust core (PyO3)
│ ├── lib.rs # Module root & Python bindings
│ ├── fpgrowth.rs # FP-Tree construction + FP-Growth mining (Rayon parallel)
│ ├── eclat.rs # Eclat vertical mining (bitset intersection + popcnt)
│ └── association_rules.rs # Rule generation + 12 metrics (Rayon parallel)
│
├── rusket/ # Python wrappers & validation
│ ├── __init__.py # Package root
│ ├── fpgrowth.py # FP-Growth input dispatch (dense / sparse / Polars)
│ ├── eclat.py # Eclat input dispatch (dense / sparse / Polars)
│ ├── association_rules.py # Label mapping + Rust call + result assembly
│ ├── transactions.py # from_transactions / from_pandas / from_polars / from_spark
│ ├── _validation.py # Input validation
│ └── _rusket.pyi # Type stubs for Rust extension
│
├── tests/ # Comprehensive test suite
├── benchmarks/ # Real-world benchmark scripts
├── docs/ # MkDocs documentation
└── pyproject.toml # Build config (maturin)
```
---
## 🧑💻 Development
### Prerequisites
- **Rust** 1.83+ (`rustup update`)
- **Python** 3.10+
- [**uv**](https://docs.astral.sh/uv/) (recommended package manager)
### Getting Started
```bash
# Clone
git clone https://github.com/bmsuisse/rusket.git
cd rusket
# Build Rust extension in dev mode
uv run maturin develop --release
# Run the full test suite
uv run pytest tests/ -x -q
# Type-check the Python layer
uv run pyright rusket/
# Cargo check (Rust)
cargo check
```
### Run Examples
```bash
# Getting started
uv run python examples/01_getting_started.py
# Market basket analysis with Faker
uv run python examples/02_market_basket_faker.py
# Polars input
uv run python examples/03_polars_input.py
# Sparse input
uv run python examples/04_sparse_input.py
# Large-scale mining (100k+ rows)
uv run python examples/05_large_scale.py
# mlxtend migration guide
uv run python examples/06_mlxtend_migration.py
```
---
## 📜 License
[BSD 3-Clause](LICENSE)
| text/markdown; charset=UTF-8; variant=GFM | null | null | null | null | null | null | [] | [] | null | null | >=3.10 | [] | [] | [] | [
"mkdocs-material>=9.5; extra == \"docs\"",
"mkdocs<2,>=1.5; extra == \"docs\"",
"numpy>=1.24; extra == \"pandas\"",
"pandas>=2.0; extra == \"pandas\"",
"polars>=0.20; extra == \"polars\""
] | [] | [] | [] | [
"Homepage, https://github.com/bmsuisse/rusket",
"Repository, https://github.com/bmsuisse/rusket.git"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-21T10:13:12.665570 | rusket-0.1.19.tar.gz | 354,829 | 00/2c/2251ad9bae3c297a914cac52ea2bee1a93c2db12d25dbbf54d073004dacf/rusket-0.1.19.tar.gz | source | sdist | null | false | ee1b4176d50e6782b228b64629fe5ffb | 83e3ff16ea29b7c8c7027ff274427f6b58424f4b326b3a5492638a00ae683716 | 002c2251ad9bae3c297a914cac52ea2bee1a93c2db12d25dbbf54d073004dacf | MIT | [
"LICENSE"
] | 2,377 |
2.4 | kubera-core | 0.1.1 | Personal asset management backend | # Kubera Core
Personal asset management backend. All financial data stays on your machine.
## Install & Run
```bash
pip install kubera-core
kubera-core start
```
- API docs: http://localhost:8000/docs
- Auth token is auto-generated and printed on first start
## Options
```bash
kubera-core start --host 127.0.0.1 --port 3000
kubera-core token # show current token
kubera-core token --refresh # generate new token
```
## Credential Management
API keys for financial services are managed via the CLI only (never through the web).
```bash
kubera-core credential add upbit # interactive masked input
kubera-core credential list # show status
kubera-core credential remove upbit
```
Supported providers: upbit, kis, binance, codef
## Tech Stack
FastAPI, SQLAlchemy (SQLite), Pydantic, cryptography (Fernet)
## Related
- [kubera-web](https://github.com/oh-my-kubera/kubera-web) — Next.js dashboard (PWA)
## License
MIT
| text/markdown | null | SanGyuk-Raccoon <dmstjkim80@gmail.com> | null | null | MIT | null | [
"Development Status :: 3 - Alpha",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13"
] | [] | null | null | >=3.11 | [] | [] | [] | [
"alembic>=1.13.0",
"cryptography>=43.0",
"fastapi>=0.115.0",
"openpyxl>=3.1.5",
"pydantic-settings>=2.0",
"pydantic>=2.0",
"python-multipart>=0.0.9",
"pyyaml>=6.0",
"sqlalchemy>=2.0",
"uvicorn[standard]>=0.30.0"
] | [] | [] | [] | [
"Homepage, https://github.com/oh-my-kubera/kubera-core",
"Repository, https://github.com/oh-my-kubera/kubera-core",
"Issues, https://github.com/oh-my-kubera/kubera-core/issues"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-21T10:12:36.600488 | kubera_core-0.1.1.tar.gz | 106,235 | 7f/9c/1218c4bfb1e12c3d99fe687ee411378d09eef2bd9a5f4862be68acbc3406/kubera_core-0.1.1.tar.gz | source | sdist | null | false | c247d399c623ad67637552fca71975d6 | 22d73aa6b4487c037f6e80a83cfa208f451f01863a56d36904543521e90a20fa | 7f9c1218c4bfb1e12c3d99fe687ee411378d09eef2bd9a5f4862be68acbc3406 | null | [
"LICENSE"
] | 271 |
2.4 | faran | 0.2.3 | A library providing performant NumPy & JAX implementations of an MPPI planner, along with implementation of related algorithms/tools. | # faran
> **Primary repository:** [gitlab.com/risk-metrics/faran](https://gitlab.com/risk-metrics/faran) — the [GitHub mirror](https://github.com/zuka011/faran) exists for Colab notebook support.
[](https://gitlab.com/risk-metrics/faran/-/pipelines) [](https://codecov.io/gl/risk-metrics/faran) [](https://bencher.dev/perf/faran) [](https://pypi.org/project/faran/) [](https://pypi.org/project/faran/) [](https://gitlab.com/risk-metrics/faran/-/blob/main/LICENSE)
Sampling-based trajectory planning for autonomous systems. Provides composable building blocks — dynamics models, cost functions, samplers, and risk metrics — so you can assemble a complete MPPI planner in a few lines and iterate on the parts that matter for your problem.
## Installation
```bash
pip install faran # NumPy + JAX (CPU)
pip install faran[cuda] # JAX with GPU support (Linux)
```
Requires Python ≥ 3.13.
## Quick Start
MPPI planner with MPCC (Model Predictive Contouring Control) for path tracking, using a kinematic bicycle model:
```python
from faran.numpy import mppi, model, sampler, trajectory, types, extract
from numtypes import array
def position(states):
return types.positions(x=states.positions.x(), y=states.positions.y())
reference = trajectory.waypoints(
points=array([[0, 0], [10, 0], [20, 5], [30, 5]], shape=(4, 2)),
path_length=35.0,
)
planner, augmented_model, _, _ = mppi.mpcc(
model=model.bicycle.dynamical(
time_step_size=0.1, wheelbase=2.5,
speed_limits=(0.0, 15.0), steering_limits=(-0.5, 0.5),
acceleration_limits=(-3.0, 3.0),
),
sampler=sampler.gaussian(
standard_deviation=array([0.5, 0.2], shape=(2,)),
rollout_count=256,
to_batch=types.bicycle.control_input_batch.create, seed=42,
),
reference=reference,
position_extractor=extract.from_physical(position),
config={
"weights": {"contouring": 50.0, "lag": 100.0, "progress": 1000.0},
"virtual": {"velocity_limits": (0.0, 15.0)},
},
)
state = types.augmented.state.of(
physical=types.bicycle.state.create(x=0.0, y=0.0, heading=0.0, speed=0.0),
virtual=types.simple.state.zeroes(dimension=1),
)
nominal = types.augmented.control_input_sequence.of(
physical=types.bicycle.control_input_sequence.zeroes(horizon=30),
virtual=types.simple.control_input_sequence.zeroes(horizon=30, dimension=1),
)
for _ in range(200):
control = planner.step(temperature=50.0, nominal_input=nominal, initial_state=state)
state = augmented_model.step(inputs=control.optimal, state=state)
nominal = control.nominal
```
<!-- TODO: Replace with simulation GIF -->
To use JAX (GPU), change `from faran.numpy` to `from faran.jax`. The API is identical.
## Features
See the [feature overview](https://risk-metrics.gitlab.io/faran/guide/features/) for the full list of supported components, backend coverage, and roadmap.
## Documentation
| | |
|---|---|
| [Getting Started](https://risk-metrics.gitlab.io/faran/guide/getting-started/) | Installation, first planner, simulation loop |
| [User Guide](https://risk-metrics.gitlab.io/faran/guide/concepts/) | MPPI concepts, cost design, obstacles, boundaries, risk metrics |
| [Examples](https://risk-metrics.gitlab.io/faran/guide/examples/) | Interactive visualizations of MPCC scenarios |
| [API Reference](https://risk-metrics.gitlab.io/faran/api/) | Factory functions and protocol documentation |
## Contributing
See [CONTRIBUTING.md](CONTRIBUTING.md).
## License
MIT — see [LICENSE](LICENSE).
| text/markdown | null | Zurab Mujirishvili <zurab.mujirishvili@fau.de> | null | null | null | autonomous systems, jax, mppi, robotics, safety, trajectory planning | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Science/Research",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.13",
"Topic :: Scientific/Engineering :: Artificial Intelligence",
"Typing :: Typed"
] | [] | null | null | >=3.13 | [] | [] | [] | [
"beartype>=0.22.9",
"deepmerge>=2.0",
"jax>=0.9.0.1",
"jaxtyping>=0.3.9",
"numtypes>=0.5.1",
"riskit>=0.3.0",
"scipy>=1.17.0",
"jax[cuda]>=0.9.0.1; extra == \"cuda\""
] | [] | [] | [] | [] | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Debian GNU/Linux","version":"13","id":"trixie","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true} | 2026-02-21T10:12:11.811423 | faran-0.2.3-py3-none-any.whl | 211,167 | 18/31/a1c72e2bab8a75ab375dbfabf9be0d5721c51d45328ea40649f0e3560424/faran-0.2.3-py3-none-any.whl | py3 | bdist_wheel | null | false | ad31b1ce0b7f3614226076146b78716a | ce9fdc4898694e97acc114569511f166238ed47ccfe6ea66d5d40fad3af7c21d | 1831a1c72e2bab8a75ab375dbfabf9be0d5721c51d45328ea40649f0e3560424 | null | [
"LICENSE"
] | 261 |
2.4 | uromyces | 0.0.4 | uromyces implements parts of Beancount's plain text accounting functionality in Rust | # uro(myces)
uromyces is a toy Rust re-implementation of parts of Beancount's functionality.
## How to use / run
You can use the provided Makefile to set up a virtualenv (at `.venv`) and
install uromyces in it with `make dev` and then try out e.g.
`uro check -v $BEANCOUNT_FILE` to run a bean-check-like command that will do a
full parse and print out any errors. There is also a `uro compare` command to
print out differences between Beancount and uromyces.
For more elaborate playing around it's probably best to write a Python script
that uses the `uromyces.load_file` function.
## Components
Just like Beancount, this tries to go from an input file to a usable list of
entries.
It does so in the following series of steps.
1. Parse single files with a tree-sitter grammar to obtain abstract syntax
trees.
1. Convert the syntax tree to produce Rust data structures.
1. Combine the parsed results from multiple files and run initial validations.
1. Booking
1. Plugins
1. Validation
## Performance
On my personal ledger, this is faster than Beancount as follows:
- parsing is about 2x faster
- booking is about 40x faster
- plugins for documents, implicit prices and validations each about 10x faster
### Parsing
There is surely some room for improvement in the parser; it has not been the
focus so far. Since the parser does more work per file, parallelisation with
rayon could give nice speed-ups for users who have multiple includes. There are
a couple of other Rust parser implementations for Beancount already; maybe one
of them could be adapted as well. With caching, parsing performance should not
matter much, since only files that changed need to be reparsed.
### Incremental computation
In the context of Fava, using salsa-rs ("A generic framework for on-demand,
incrementalized computation.") seems like a good fit to speed up incremental
re-parsing and re-processing of a modified ledger.
## Differences to Beancount (V3)
- Not a lot of attention has been placed on generating the same kinds of
errors. So, e.g., for a transaction that does not balance, the error messages
from Beancount might be quite different.
- The automatic filling of missing currencies is stricter (less powerful) than
the one by Beancount and only takes the account balances into account to
infer cost currencies. IMHO leaving out currencies should be discouraged and
making this depend on the previous account balance seems error-prone.
- Likewise, the interpolation is less powerful. For example it won't be able to
interpolate a missing total cost. Interpolating total cost seems to be rather
an edge case anyway.
- The (deprecated) total cost syntax (`{{}}`) is not supported.
- Deprecated options are not supported.
- The options `account_rounding`, `infer_tolerance_from_cost`, and
`plugin_processing_mode` are not supported.
## Etymology
The name is derived from the genus of rust fungi that can befall bean plants.
| text/markdown; charset=UTF-8; variant=GFM | null | Jakob Schnitzer <mail@jakobschnitzer.de> | null | null | null | null | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Education",
"Intended Audience :: End Users/Desktop",
"Intended Audience :: Financial and Insurance Industry",
"Intended Audience :: Information Technology",
"Programming Language :: Rust",
"Programming Language :: Python :: Implementation :: CPython",
"Programming Language :: Python :: Implementation :: PyPy",
"Topic :: Office/Business :: Financial :: Accounting",
"Topic :: Office/Business :: Financial :: Investment"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"click<9,>=7",
"beancount>=3",
"fava>=1.30; extra == \"fava\""
] | [] | [] | [] | [
"Issues, https://github.com/yagebu/uromyces/issues/",
"Source, https://github.com/yagebu/uromyces/"
] | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true} | 2026-02-21T10:12:09.378245 | uromyces-0.0.4-cp310-abi3-macosx_11_0_arm64.whl | 1,450,524 | 74/b5/35db7ae6b56f86d2bb22a91facbf6866349ab5bc409757b3025d03f0f2c8/uromyces-0.0.4-cp310-abi3-macosx_11_0_arm64.whl | cp310 | bdist_wheel | null | false | b1500442fd3143d53a63abedfbb52b91 | 4a617673d69145687a53237bc6f1425b5a6250c88149cf90aeff4f06547a6001 | 74b535db7ae6b56f86d2bb22a91facbf6866349ab5bc409757b3025d03f0f2c8 | GPL-2.0-only | [] | 542 |
2.4 | blnetwork | 0.1.0 | Behavior Learning (BL): Learning Hierarchical Optimization Structures from Data | # Behavior Learning (BL)
Behavior Learning (BL) is a general-purpose machine learning framework grounded in behavioral science. It unifies predictive performance and intrinsic interpretability within a single modeling paradigm. BL learns explicit optimization structures from data by parameterizing a compositional utility function built from interpretable modular blocks. Each block represents a Utility Maximization Problem (UMP), a foundational framework of decision-making and optimization. BL supports architectures ranging from a single UMP to hierarchical compositions, enabling expressive yet structurally transparent models. Unlike post-hoc explanation methods, BL provides interpretability by design while maintaining strong empirical performance on high-dimensional tasks.
## Installation
blnetwork can be installed via PyPI or directly from GitHub.
**Prerequisites:**
```
Python 3.10.9 or higher
pip
```
**For developers:**
```
git clone https://github.com/YueLiang-hye/Behavior-Learning.git
cd Behavior-Learning
pip install -e .
```
**Installation via GitHub:**
```
pip install git+https://github.com/YueLiang-hye/Behavior-Learning.git
```
**Installation via PyPI:**
```
pip install blnetwork
```
**Requirements:**
```text
# python==3.10.9
torch>=2.2
numpy>=1.26
pandas>=2.0
scikit-learn>=1.3
```
After activating your virtual environment, you can install the package requirements as follows:
```bash
pip install -r requirements.txt
```
**Optional: Conda Environment Setup**
For those who prefer using Conda:
```
conda create --name blnetwork-env python=3.10.9
conda activate blnetwork-env
pip install git+https://github.com/YueLiang-hye/Behavior-Learning.git # For GitHub installation
# or
pip install blnetwork # For PyPI installation
```
## Computation Requirements
BL is implemented in PyTorch and supports both CPU and GPU training.
- Small-scale tabular examples run on a single CPU within a few minutes.
- High-dimensional settings may benefit from GPU acceleration (e.g., NVIDIA L40).
For most tabular tasks, CPU training is sufficient.
## Examples
Start with the notebooks in [`examples/`](./examples/):
- [Example 1: Boston Housing (continuous)](./examples/Example_1_boston_housing.ipynb)
- [Example 2: Breast Cancer (classification)](./examples/Example_2_breast_cancer.ipynb)
## Advice on hyperparameter tuning
In many cases, BL can achieve comparable (or slightly better) performance than an MLP baseline using roughly one third of the hidden width.
Other hyperparameters can be initialized based on standard MLP tuning, and then refined for the specific task.
## Contact
If you have any questions, please contact yue.liang@student.uni-tuebingen.de
| text/markdown | null | Yue Liang <yue.liang@student.uni-tuebingen.de> | null | null | MIT License
Copyright (c) 2026 Yue Liang
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE. | null | [
"Programming Language :: Python :: 3",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent"
] | [] | null | null | >=3.10.9 | [] | [] | [] | [
"torch>=2.2",
"numpy>=1.26",
"pandas>=2.1",
"scikit-learn>=1.3"
] | [] | [] | [] | [
"Homepage, https://github.com/YueLiang-hye/Behavior-Learning",
"Issues, https://github.com/YueLiang-hye/Behavior-Learning/issues"
] | twine/6.2.0 CPython/3.10.9 | 2026-02-21T10:11:30.183905 | blnetwork-0.1.0.tar.gz | 17,702 | 78/e0/3055902b836004490d1803ed1f034de3c50eb3b97355d74dcba38e1d3e12/blnetwork-0.1.0.tar.gz | source | sdist | null | false | 5b333f6d3641734a8ad350db527e4283 | d3e4833ed0db56de920b1b154bf698ebd95c35cc3835f4bb84e14fb084919cdf | 78e03055902b836004490d1803ed1f034de3c50eb3b97355d74dcba38e1d3e12 | null | [
"LICENSE"
] | 267 |
2.1 | robhan-cdk-lib.utils | 0.0.178 | @robhan-cdk-lib/utils | robhan_cdk_lib.utils
====================
| text/markdown | Robert Hanuschke<robhan-cdk-lib@hanuschke.eu> | null | null | null | MIT | null | [
"Intended Audience :: Developers",
"Operating System :: OS Independent",
"Programming Language :: JavaScript",
"Programming Language :: Python :: 3 :: Only",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Typing :: Typed",
"Development Status :: 5 - Production/Stable",
"License :: OSI Approved"
] | [] | https://github.com/robert-hanuschke/cdk-utils | null | ~=3.9 | [] | [] | [] | [
"jsii<2.0.0,>=1.126.0",
"publication>=0.0.3",
"typeguard==2.13.3"
] | [] | [] | [] | [
"Source, https://github.com/robert-hanuschke/cdk-utils"
] | twine/6.1.0 CPython/3.14.2 | 2026-02-21T10:11:26.661603 | robhan_cdk_lib_utils-0.0.178.tar.gz | 12,233 | e1/4f/4948e408f49f5a16055d18ccd96e51c54d177a6020aa038f53de4b5035ad/robhan_cdk_lib_utils-0.0.178.tar.gz | source | sdist | null | false | 311e0dfb211c9e8093da7634afbbdbdf | 897b9d32b0444bc618d247e8df2340b089600e135c66ead37c12b102637bc78f | e14f4948e408f49f5a16055d18ccd96e51c54d177a6020aa038f53de4b5035ad | null | [] | 0 |
2.4 | duckdb-sqlalchemy | 1.4.4.3 | DuckDB SQLAlchemy dialect for DuckDB and MotherDuck | # duckdb-sqlalchemy
[](https://pypi.org/project/duckdb-sqlalchemy)
[](https://pypi.org/project/duckdb-sqlalchemy/)
[](https://codecov.io/gh/leonardovida/duckdb-sqlalchemy)
duckdb-sqlalchemy is a SQLAlchemy dialect for DuckDB and MotherDuck. It supports SQLAlchemy Core and ORM APIs for local DuckDB and MotherDuck connections.
For new projects, this repository is the recommended dialect when you want production-oriented defaults, explicit MotherDuck guidance, and a clear migration path from older package names.
The dialect handles pooling defaults, bulk inserts, type mappings, and cloud-specific configuration.
## Why choose duckdb-sqlalchemy today
- **SQLAlchemy compatibility**: Core, ORM, Alembic, and reflection.
- **MotherDuck support**: Token handling, attach modes, session hints, and read scaling helpers.
- **Operational defaults**: Pooling defaults, transient retry for reads, and bulk insert optimization via Arrow/DataFrame registration.
- **Active release cadence**: Tracks current DuckDB releases with a long-term support posture.
| Area | `duckdb-sqlalchemy` (this repo) | `duckdb_engine` |
| --- | --- | --- |
| Package/module name | `duckdb-sqlalchemy` / `duckdb_sqlalchemy` | `duckdb-engine` / `duckdb_engine` |
| SQLAlchemy driver URL | `duckdb://` | `duckdb://` |
| MotherDuck workflow coverage | Dedicated URL helper (`MotherDuckURL`), connection guidance, and examples | No dedicated MotherDuck usage section in the upstream README |
| Operational guidance | Documented pooling defaults, read-scaling helpers, and bulk insert patterns | Basic configuration guidance in upstream README |
| Migration path | Explicit migration guide from older package names | Migration to this package is documented in this repo |
| Project direction | Release policy, changelog, roadmap, and docs site are maintained here | Upstream README focuses on the core driver usage |
## Coming from duckdb_engine?
If you already use `duckdb-engine`, migration is straightforward:
- keep the SQLAlchemy URL scheme (`duckdb://`)
- install `duckdb-sqlalchemy`
- switch imports to `duckdb_sqlalchemy`
See the full guide: [docs/migration-from-duckdb-engine.md](docs/migration-from-duckdb-engine.md).
## Project lineage
This project is a heavily modified fork of `Mause/duckdb_engine` and continues to preserve upstream history in `CHANGELOG.md`.
Current direction in this repository:
- package and module rename to `duckdb-sqlalchemy` / `duckdb_sqlalchemy`
- production-oriented defaults for local DuckDB and MotherDuck deployments
- docs-first maintenance with versioned release notes and a published docs site
## Compatibility
| Component | Supported versions |
| --- | --- |
| Python | 3.9+ |
| SQLAlchemy | 1.3.22+ (2.x recommended) |
| DuckDB | 1.3.0+ (1.4.4 recommended) |
## Install
```sh
pip install duckdb-sqlalchemy
```
## Quick start (DuckDB)
```python
from sqlalchemy import Column, Integer, String, create_engine
from sqlalchemy.orm import declarative_base, Session
Base = declarative_base()
class User(Base):
__tablename__ = "users"
id = Column(Integer, primary_key=True)
name = Column(String)
engine = create_engine("duckdb:///:memory:")
Base.metadata.create_all(engine)
with Session(engine) as session:
session.add(User(name="Ada"))
session.commit()
assert session.query(User).one().name == "Ada"
```
## Quick start (MotherDuck)
```bash
export MOTHERDUCK_TOKEN="..."
```
```python
from sqlalchemy import create_engine
engine = create_engine("duckdb:///md:my_db")
```
MotherDuck uses the `md:` database prefix. Tokens are picked up from `MOTHERDUCK_TOKEN` (or `motherduck_token`) automatically. If your token has special characters, URL-escape it or pass it via `connect_args`.
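If you take the escaping route, Python's standard library covers it. The token value below is made up, and where the escaped token goes (query string vs. `connect_args`) depends on your setup:

```python
from urllib.parse import quote

token = "abc+def/123=="  # hypothetical token containing URL-special characters
escaped = quote(token, safe="")
print(escaped)  # abc%2Bdef%2F123%3D%3D
```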
## Connection URLs
DuckDB URLs follow the standard SQLAlchemy shape:
```
duckdb:///<database>?<config>
```
Examples:
```
duckdb:///:memory:
duckdb:///analytics.db
duckdb:////absolute/path/to/analytics.db
duckdb:///md:my_db?attach_mode=single&access_mode=read_only&session_hint=team-a
```
Use the URL helpers to build connection strings safely:
```python
from duckdb_sqlalchemy import URL, MotherDuckURL
local_url = URL(database=":memory:", memory_limit="1GB")
md_url = MotherDuckURL(database="md:my_db", attach_mode="single")
```
## Configuration and pooling
This dialect defaults to `NullPool` for file/MotherDuck connections and `SingletonThreadPool` for `:memory:`. You can override pooling explicitly. For long-lived MotherDuck pools, use the performance helper or configure `QueuePool`, `pool_pre_ping`, and `pool_recycle`.
See `docs/configuration.md` and `docs/motherduck.md` for detailed guidance.
## Documentation
- `docs/index.md` - GitHub Pages entrypoint
- `docs/README.md` - Docs index
- `docs/overview.md` - Overview and quick start
- `docs/getting-started.md` - Minimal install + setup walkthrough
- `docs/migration-from-duckdb-engine.md` - Migration guide from older dialects
- `docs/connection-urls.md` - URL formats and helpers
- `docs/motherduck.md` - MotherDuck setup and options
- `docs/configuration.md` - Connection configuration, extensions, filesystems
- `docs/olap.md` - Parquet/CSV scans and ATTACH workflows
- `docs/pandas-jupyter.md` - DataFrame registration and notebook usage
- `docs/types-and-caveats.md` - Type support and known caveats
- `docs/alembic.md` - Alembic integration
Docs site (GitHub Pages):
```
https://leonardovida.github.io/duckdb-sqlalchemy/
```
## Examples
- `examples/sqlalchemy_example.py` - end-to-end example
- `examples/motherduck_read_scaling_per_user.py` - per-user read scaling pattern
- `examples/motherduck_queuepool_high_concurrency.py` - QueuePool tuning
- `examples/motherduck_multi_instance_pool.py` - multi-instance pool rotation
- `examples/motherduck_arrow_reads.py` - Arrow results + streaming
- `examples/motherduck_attach_modes.py` - workspace vs single attach mode
## Release and support policy
- Long-term maintenance: intended to remain supported.
- Compatibility: track current DuckDB and SQLAlchemy releases while preserving SQLAlchemy semantics.
- Breaking changes: only in major/minor releases with explicit notes in `CHANGELOG.md`.
- Security: open an issue with details; fixes are prioritized.
## Changelog and roadmap
- `CHANGELOG.md` - release notes
- `ROADMAP.md` - upcoming work and priorities
## Contributing
See `AGENTS.md` for repo-specific workflow, tooling, and PR expectations. We welcome issues, bug reports, and high-quality pull requests.
## License
MIT. See `LICENSE.txt`.
| text/markdown | null | Leonardo Vida <lleonardovida@gmail.com> | null | null | null | analytics, database, dialect, duckdb, motherduck, olap, sqlalchemy | [
"Development Status :: 5 - Production/Stable",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3 :: Only",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: 3.14",
"Topic :: Database",
"Topic :: Database :: Front-Ends"
] | [] | null | null | <4,>=3.9 | [] | [] | [] | [
"duckdb>=0.5.0",
"packaging>=21",
"sqlalchemy>=1.3.22",
"fsspec<2026.0.0,>=2025.2.0; extra == \"dev\"",
"github-action-utils<2.0.0,>=1.1.0; extra == \"dev\"",
"hypothesis<7.0.0,>=6.75.2; extra == \"dev\"",
"jupysql<0.12.0,>=0.11.1; extra == \"dev\"",
"numpy<2.0,>=1.24; python_version < \"3.12\" and extra == \"dev\"",
"numpy<3.0,>=1.26; (python_version >= \"3.12\" and python_version < \"3.13\") and extra == \"dev\"",
"numpy<3.0,>=2.0; python_version >= \"3.13\" and extra == \"dev\"",
"pandas<2.0,>=1; python_version < \"3.12\" and extra == \"dev\"",
"pandas<3.0,>=2.2; python_version >= \"3.12\" and extra == \"dev\"",
"pyarrow>=22.0.0; python_version >= \"3.10\" and extra == \"dev\"",
"pytest-cov<6.0.0,>=5.0.0; extra == \"dev\"",
"pytest-remotedata<0.5.0,>=0.4.0; extra == \"dev\"",
"pytest-snapshot<1.0.0,>=0.9.0; extra == \"dev\"",
"pytest<9.0.0,>=8.0.0; extra == \"dev\"",
"toml<0.11.0,>=0.10.2; extra == \"dev\"",
"ty; extra == \"dev\"",
"pdbpp<0.12.0,>=0.11.0; extra == \"devtools\"",
"pre-commit>=4.0.0; python_version >= \"3.9\" and extra == \"devtools\""
] | [] | [] | [] | [
"Bug Tracker, https://github.com/leonardovida/duckdb-sqlalchemy/issues",
"Changelog, https://github.com/leonardovida/duckdb-sqlalchemy/releases",
"Documentation, https://leonardovida.github.io/duckdb-sqlalchemy/",
"repository, https://github.com/leonardovida/duckdb-sqlalchemy",
"Upstream, https://github.com/Mause/duckdb_engine"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-21T10:10:36.715422 | duckdb_sqlalchemy-1.4.4.3.tar.gz | 175,223 | 74/49/a610eb4c453419a8972b523d4af014419d2ccd38fa167a4ebc9a491a35c0/duckdb_sqlalchemy-1.4.4.3.tar.gz | source | sdist | null | false | c938af9ee91a6a3f6b8451a06bd17b08 | 43643e4f59dbff91f87b0757a2a316e1ff09980e8a045122ea872f07911dcdba | 7449a610eb4c453419a8972b523d4af014419d2ccd38fa167a4ebc9a491a35c0 | MIT | [
"LICENSE.txt"
] | 252 |
2.4 | vectl | 0.5.3 | Execution control plane for AI agents — structure agents, save tokens. | # vectl — execution control plane for AI agents
[中文文档](README_zh.md) | [**Read the Introduction**](https://tefx.one/posts/vectl-intro/)
**Structure agents. Save tokens.**
[](https://pypi.org/project/vectl/)
[](https://www.gnu.org/licenses/agpl-3.0)
```bash
uvx vectl --help
```
## Your Markdown Plan Is Wasting Tokens
A 50-step markdown plan, 40 steps done:
- The agent still **re-reads all 50 lines**. 40 completed steps are pure noise — eating context window, burning attention, costing you money.
- `vectl next` **returns only 3 actionable steps**. Completed steps vanish. Blocked steps are invisible.
The more steps you have, the worse it gets. 100 steps, 90 done? Markdown forces the agent to read 100 lines to find 10 useful ones. vectl gives it just those 10.
And Markdown is linear. Three agents online at once? They queue up — because nothing tells them which steps can run in parallel.
vectl's DAG makes parallelism possible: dependencies are explicit, `next` serves up **all** unblocked steps, three agents each claim one, zero conflicts.
Token waste and serialization are just symptoms. The root defect is that **Markdown doesn't express dependencies**:
| Markdown Plans | vectl |
| :--- | :--- |
| ❌ **Full re-read every time**: agent reads all steps regardless of completion | ✅ Returns only actionable steps — done steps vanish |
| ❌ **Implicit dependencies**: "Deploy DB" before "Config App" — agent can only guess if they're related | ✅ `depends_on: [db.deploy]` — explicit, no guessing |
| ❌ **No safe parallelism**: without dependency info, multiple agents queue up or gamble | ✅ DAG makes parallelism computable — `next` returns all conflict-free steps |
| ❌ **Manual dispatch**: "DB is done, go work on App now" | ✅ `next` automatically surfaces all unblocked steps |
| ❌ **Silent overwrites**: two agents write the same file simultaneously | ✅ CAS optimistic locking — conflicts error out, never silently lost |
| ❌ **Self-declared completion**: agent says "Done" and it's Done | ✅ Evidence required: what command, what output, where's the PR |
| ❌ **Context amnesia**: new session = start from scratch | ✅ `checkpoint` generates a state snapshot — inject into new session, instant recovery |
> TODO.md can't say no. vectl can.
## Control Plane, Not a Framework
Agent frameworks manage how agents think. vectl manages **what agents see, when they see it, and what they must prove**.
| Capability | Problem Solved | Mechanism |
| :--- | :--- | :--- |
| **DAG Enforcement** | Agents skip dependencies, guess ordering | Blocked steps are invisible — agents literally *cannot* claim them |
| **Safe Parallelism** | Multiple agents step on each other | `claim` locking + CAS atomic writes |
| **Auto-Dispatch** | Someone must watch and assign tasks | `next` computes all unblocked steps and sorts them; rejected steps float to top |
| **Token Budget** | Agent re-reads hundreds of completed lines | Hard limits across the board: next ≤3, context ≤120 chars, evidence ≤900 chars |
| **Anti-Hallucination** | Agent says "Fixed" and moves on | `evidence_template` forces fill-in-the-blank proof: command, output, PR link |
| **Context Compaction** | Long conversations cause agent amnesia | `checkpoint` generates a deterministic JSON snapshot — inject into new session for instant recovery |
| **Handoff Notes** | Agents lose state between hosts/sessions | `clipboard-write/read/clear` stores short notes in `plan.yaml` (with TTL) |
| **Agent Affinity** | Different agents are good at different tasks | Steps can suggest an agent; `next` sorts by affinity |
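The CAS writes in the table are the standard optimistic-locking pattern: read a version, write only if nobody bumped it in between. A generic sketch of the pattern (vectl's actual storage layer may differ):

```python
class PlanStore:
    """Toy in-memory store with compare-and-swap writes."""

    def __init__(self):
        self.version = 0
        self.data: dict = {}

    def read(self) -> tuple[int, dict]:
        return self.version, dict(self.data)

    def cas_write(self, expected_version: int, new_data: dict) -> bool:
        # Reject the write if another writer committed first.
        if self.version != expected_version:
            return False
        self.data = new_data
        self.version += 1
        return True

store = PlanStore()
v, plan = store.read()
assert store.cas_write(v, {"step": "claimed"})      # first writer wins
assert not store.cas_write(v, {"step": "claimed"})  # stale writer errors out; nothing is silently lost
```

The losing writer gets an explicit failure and must re-read before retrying, which is exactly how a silent overwrite becomes a visible conflict.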
## Quick Start
### 1. Initialize
```bash
uvx vectl init --project my-project
```
Creates `plan.yaml` and auto-configures agent instructions (writes `CLAUDE.md` when `.claude/` directory is detected, otherwise `AGENTS.md`).
> Commit `plan.yaml` + `AGENTS.md`/`CLAUDE.md` together. The plan is the state machine; the instructions file is the agent entry point.
### 2. Connect Your Agent
<details>
<summary>⚡ Claude Desktop / Cursor</summary>
```json
{
"mcpServers": {
"vectl": {
"command": "uvx",
"args": ["vectl", "mcp"],
"env": { "VECTL_PLAN_PATH": "/absolute/path/to/plan.yaml" }
}
}
}
```
</details>
<details>
<summary>⚡ OpenCode</summary>
Add to your `opencode.jsonc`:
```jsonc
{
"mcp": {
"vectl": {
"type": "local",
"command": ["uvx", "vectl", "mcp"],
"environment": { "VECTL_PLAN_PATH": "/absolute/path/to/plan.yaml" }
}
}
}
```
See [OpenCode MCP docs](https://opencode.ai/docs/mcp-servers/) for details.
</details>
<details>
<summary>⌨️ CLI Only (no MCP)</summary>
No setup needed — agents call `uvx vectl ...` directly.
> `uvx vectl init` already creates/updates the agent instructions file.
> To update later: `uvx vectl agents-md` (use `--target claude` if needed).
</details>
#### Agent Instruction Files
`vectl init` and `vectl agents-md` manage the agent instruction file in your repo.
That file is the *entry point* for agents: it points to `uvx vectl guide` topics and sets the rules (one claimed step at a time, evidence required, don't guess specs).
```bash
uvx vectl agents-md # Update AGENTS.md / CLAUDE.md with vectl section
uvx vectl agents-md --target claude # Force CLAUDE.md
```
### 3. Migrate (Optional)
If your project already tracks work in a markdown file, issue tracker, or spreadsheet, tell your agent:
```
Read the migration guide (via `uvx vectl guide --on migration` or `vectl_guide` MCP tool).
Migrate our existing plan to plan.yaml.
Prefer MCP tools (`vectl_mutate`, `vectl_guide`) over CLI if available.
```
### 4. The Workflow
Keep it simple:
```bash
uvx vectl status # Where are we?
uvx vectl next # What can run now?
uvx vectl claim <step-id> --agent <name> # Get spec + pinned refs + evidence template
uvx vectl complete <step-id> --evidence "..." # Prove it (paste filled template)
```
### 5. Dashboard (Static HTML)
Generate a single-file HTML dashboard (Overview + DAG) for quick visual inspection:
```bash
uvx vectl dashboard --open
# Or write to a custom path
uvx vectl dashboard --out /tmp/plan-dashboard.html
```

Notes:
- Output is a local HTML file (no server).
- The DAG view loads Mermaid.js from a CDN (network required for that tab).
Everything else is in the guide:
- Architect protocol: `uvx vectl guide --on planning`
- Getting unstuck: `uvx vectl guide --on stuck`
- Review / validation: `uvx vectl guide --on review`
- Migration: `uvx vectl guide --on migration`
## Handoffs: Clipboard (Notes) vs Checkpoint (State)
If you're switching agent hosts (Claude Code ↔ Cursor ↔ OpenCode) or handing work between agents, use both:
- **Clipboard**: short, human-readable notes that live in `plan.yaml` (with TTL).
- **Checkpoint**: compact, machine-readable state snapshot for context injection.
### Clipboard (recommended for handoffs)
Use this when you want to pass **actionable notes** between agent hosts/sessions without creating extra files.
Example: Claude Code did a detailed code review, found a few small issues, and you want OpenCode (GLM-5) to patch them.
Drop the review notes into the clipboard — the other agent reads and applies.
```bash
uvx vectl clipboard-write \
--author "claude-code" \
--summary "Code review: small fixes" \
--content "
Target: src/foo.py
Issues:
- Rename X to Y (see comment in function bar)
- Add missing test for edge case Z
- Run: uv run pytest tests/test_foo.py
"
uvx vectl clipboard-read
uvx vectl clipboard-clear
```
MCP equivalent:
```python
vectl_clipboard(action="write", author="claude-code", summary="Code review: small fixes", content="...")
vectl_clipboard(action="read")
vectl_clipboard(action="clear")
```
### Checkpoint
```bash
uvx vectl checkpoint
```
Paste the JSON into the next session's system prompt.
## Data Model (`plan.yaml`)
```yaml
version: 1
project: my-project
phases:
- id: auth
name: Auth Module
context: |
All auth steps must follow OWASP guidelines.
Test with both valid and malformed JWTs.
depends_on: [core]
steps:
- id: auth.user-model
name: User Model
status: claimed
claimed_by: engineer-1
```
A YAML file. In your git repo.
No database. No SaaS. `git blame` it. Review it in PRs. `git diff` it.
**Phase Context**: Set `context` on a phase to give agents guidance that applies to all steps within it. When an agent runs `vectl show <step>` or `vectl claim`, phase context appears automatically in the output.
Full schema, ID rules, and ordering semantics: [docs/DESIGN.md](docs/DESIGN.md).
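Because the plan is plain structured data, summarizing progress is a simple walk over phases and steps. A sketch using the snippet above as a Python dict (parse the real file with any YAML library first):

```python
# Mirror of the plan.yaml snippet above, as plain Python data.
plan = {
    "version": 1,
    "project": "my-project",
    "phases": [
        {
            "id": "auth",
            "name": "Auth Module",
            "depends_on": ["core"],
            "steps": [
                {"id": "auth.user-model", "name": "User Model",
                 "status": "claimed", "claimed_by": "engineer-1"},
            ],
        },
    ],
}

# Group step IDs by status -- the kind of summary a status report gives.
by_status: dict[str, list[str]] = {}
for phase in plan["phases"]:
    for step in phase["steps"]:
        by_status.setdefault(step["status"], []).append(step["id"])
print(by_status)  # {'claimed': ['auth.user-model']}
```

Any dashboard, script, or CI check can build on the same traversal; no vectl-specific API is required to read the file.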
## Lock Consistency
Lock status is automatically maintained — agents do not need to manage it. After any write operation (`claim`, `complete`, `mutate`, etc.), vectl recalculates lock status automatically. When a recalculation changes a phase's lock state, vectl emits an informational message:
```
[vectl] Lock status updated: phase-a (pending)
```
If you edit `plan.yaml` directly (outside of vectl commands), run `uvx vectl recalc-lock` to manually diagnose and repair any lock inconsistencies.
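The recalculation is conceptually simple: a phase stays locked while any phase it depends on is incomplete. A sketch of that rule (illustrative, not vectl's actual code):

```python
def recalc_locks(phases: dict[str, dict]) -> dict[str, bool]:
    """A phase is locked unless every phase in its depends_on is done."""
    return {
        pid: any(phases[dep]["status"] != "done" for dep in p.get("depends_on", []))
        for pid, p in phases.items()
    }

phases = {
    "core": {"status": "done"},
    "auth": {"status": "pending", "depends_on": ["core"]},
    "ui":   {"status": "pending", "depends_on": ["auth"]},
}
print(recalc_locks(phases))  # {'core': False, 'auth': False, 'ui': True}
```

Running the rule over the whole file after every write is what keeps lock state consistent without any agent managing it by hand.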
## Technical Details
Architecture, CAS safety, and test coverage (658 tests, Hypothesis state machine verification): [docs/DESIGN.md](docs/DESIGN.md).
excuse you from the conditions of this License. If you cannot convey a
covered work so as to satisfy simultaneously your obligations under this
License and any other pertinent obligations, then as a consequence you may
not convey it at all. For example, if you agree to terms that obligate you
to collect a royalty for further conveying from those to whom you convey
the Program, the only way you could satisfy both those terms and this
License would be to refrain entirely from conveying the Program.
13. Remote Network Interaction; Use with the GNU General Public License.
Notwithstanding any other provision of this License, if you modify the
Program, your modified version must prominently offer all users
interacting with it remotely through a computer network (if your version
supports such interaction) an opportunity to receive the Corresponding
Source of your version by providing access to the Corresponding Source
from a network server at no charge, through some standard or customary
means of facilitating copying of software. This Corresponding Source
shall include the Corresponding Source for any work covered by version 3
of the GNU General Public License that is incorporated pursuant to the
following paragraph.
Notwithstanding any other provision of this License, you have
permission to link or combine any covered work with a work licensed
under version 3 of the GNU General Public License into a single
combined work, and to convey the resulting work. The terms of this
License will continue to apply to the part which is the covered work,
but the work with which it is combined will remain governed by version
3 of the GNU General Public License.
14. Revised Versions of this License.
The Free Software Foundation may publish revised and/or new versions of
the GNU Affero General Public License from time to time. Such new versions
will be similar in spirit to the present version, but may differ in detail to
address new problems or concerns.
Each version is given a distinguishing version number. If the
Program specifies that a certain numbered version of the GNU Affero General
Public License "or any later version" applies to it, you have the
option of following the terms and conditions either of that numbered
version or of any later version published by the Free Software
Foundation. If the Program does not specify a version number of the
GNU Affero General Public License, you may choose any version ever published
by the Free Software Foundation.
If the Program specifies that a proxy can decide which future
versions of the GNU Affero General Public License can be used, that proxy's
public statement of acceptance of a version permanently authorizes you
to choose that version for the Program.
Later license versions may give you additional or different
permissions. However, no additional obligations are imposed on any
author or copyright holder as a result of your choosing to follow a
later version.
15. Disclaimer of Warranty.
THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY
APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT
HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY
OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO,
THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM
IS WITH YOU. SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF
ALL NECESSARY SERVICING, REPAIR OR CORRECTION.
16. Limitation of Liability.
IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING
WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MODIFIES AND/OR CONVEYS
THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY
GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE
USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF
DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD
PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS),
EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF
SUCH DAMAGES.
17. Interpretation of Sections 15 and 16.
If the disclaimer of warranty and limitation of liability provided
above cannot be given local legal effect according to their terms,
reviewing courts shall apply local law that most closely approximates
an absolute waiver of all civil liability in connection with the
Program, unless a warranty or assumption of liability accompanies a
copy of the Program in return for a fee.
END OF TERMS AND CONDITIONS
How to Apply These Terms to Your New Programs
If you develop a new program, and you want it to be of the greatest
possible use to the public, the best way to achieve this is to make it
free software which everyone can redistribute and change under these terms.
To do so, attach the following notices to the program. It is safest
to attach them to the start of each source file to most effectively
state the exclusion of warranty; and each file should have at least
the "copyright" line and a pointer to where the full notice is found.
<one line to give the program's name and a brief idea of what it does.>
Copyright (C) <year> <name of author>
This program is free software: you can redistribute it and/or modify
it under the terms of the GNU Affero General Public License as published by
the Free Software Foundation, either version 3 of the License, or
(at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU Affero General Public License for more details.
You should have received a copy of the GNU Affero General Public License
along with this program. If not, see <https://www.gnu.org/licenses/>.
Also add information on how to contact you by electronic and paper mail.
If your software can interact with users remotely through a computer
network, you should also make sure that it provides a way for users to
get its source. For example, if your program is a web application, its
interface could display a "Source" link that leads users to an archive
of the code. There are many ways you could offer source, and different
solutions will be better for different programs; see section 13 for the
specific requirements.
You should also get your employer (if you work as a programmer) or school,
if any, to sign a "copyright disclaimer" for the program, if necessary.
For more information on this, and how to apply and follow the GNU AGPL, see
<https://www.gnu.org/licenses/>. | null | [
"Development Status :: 4 - Beta",
"Environment :: Console",
"Intended Audience :: Developers",
"License :: OSI Approved :: GNU Affero General Public License v3 or later (AGPLv3+)",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Topic :: Software Development :: Build Tools",
"Typing :: Typed"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"fastmcp>=2.0.0",
"pydantic>=2.0.0",
"pyyaml>=6.0",
"rich>=13.0.0",
"typer>=0.9.0"
] | [] | [] | [] | [
"Homepage, https://github.com/Tefx/vectl",
"Repository, https://github.com/Tefx/vectl",
"Issues, https://github.com/Tefx/vectl/issues"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-21T10:10:16.390639 | vectl-0.5.3.tar.gz | 525,065 | 40/23/c1a7817084405e78cbafe67fa292d5c091a59bc9136f2f8f7bdeff88cc01/vectl-0.5.3.tar.gz | source | sdist | null | false | 0c23030063c5456a921621eef2105413 | dda96dd679f95663fab47c6c7685c2c8e5e90a6ca866a7747bdce655d5bd7022 | 4023c1a7817084405e78cbafe67fa292d5c091a59bc9136f2f8f7bdeff88cc01 | null | [
"LICENSE"
] | 257 |
2.4 | nautobot-app-bitwarden-password-manager-secrets | 0.1.1 | Nautobot app providing a secrets provider for Bitwarden Password Manager via the bw serve REST API. | # Nautobot Bitwarden Password Manager Secrets Provider
A [Nautobot](https://nautobot.com/) app that provides a secrets provider for [Bitwarden Password Manager](https://bitwarden.com/), using the REST API exposed by `bw serve`.
## Installation
```bash
pip install nautobot-app-bitwarden-password-manager-secrets
```
## Configuration
Add the app to your `nautobot_config.py`:
```python
PLUGINS = ["nautobot_bitwarden_password_manager_secrets"]
PLUGINS_CONFIG = {
"nautobot_bitwarden_password_manager_secrets": {
"base_url": "http://localhost:8087", # URL of your bw serve instance
},
}
```
## Prerequisites
You must have the [Bitwarden CLI](https://bitwarden.com/help/cli/) installed and running `bw serve` with the vault unlocked:
```bash
bw serve --port 8087
```
## Usage
1. Navigate to **Secrets > Secrets** in Nautobot
2. Create a new Secret and select **Bitwarden Password Manager** as the provider
3. Enter the vault item ID (GUID) and select the field to retrieve
### Supported Fields
| Field | Description |
|-------|-------------|
| Username | Login username |
| Password | Login password |
| TOTP (Current Code) | Current TOTP code generated by Bitwarden |
| URI (first) | First URI associated with the login |
| Notes | Item notes |
| Custom Field | A named custom field (specify the field name) |
## License
This project is licensed under the [Mozilla Public License 2.0](LICENSE).
| text/markdown | Gary T. Giesen | ggiesen@giesen.me | null | null | MPL-2.0 | nautobot, nautobot-app, nautobot-plugin, bitwarden | [
"Development Status :: 3 - Alpha",
"Framework :: Django",
"License :: OSI Approved",
"License :: OSI Approved :: Mozilla Public License 2.0 (MPL 2.0)",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13"
] | [] | null | null | <3.14,>=3.10 | [] | [] | [] | [
"nautobot<3.0.0,>=2.4.0",
"requests>=2.20.0"
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.12.12 | 2026-02-21T10:09:54.066395 | nautobot_app_bitwarden_password_manager_secrets-0.1.1.tar.gz | 11,150 | e9/69/666d005040c26a7f94753461992a49efe2b03d2ae2c221524922773bb3b1/nautobot_app_bitwarden_password_manager_secrets-0.1.1.tar.gz | source | sdist | null | false | 8534cc63d6c2acbe534a4ef7e0bdafef | 2f3d70fadc46d8452a8e144d117086ef02ae2f56d1beda5d49810e89ab4e3cb6 | e969666d005040c26a7f94753461992a49efe2b03d2ae2c221524922773bb3b1 | null | [
"LICENSE"
] | 248 |
2.4 | geniable | 2.6.0 | Hybrid Local-Cloud QA Pipeline for LangSmith Thread Analysis | # Geniable
**Hybrid Local-Cloud QA Pipeline for LangSmith Thread Analysis**
Geniable analyzes your LangSmith conversation threads for quality issues, performance problems, and errors—then creates tickets in Jira or Notion automatically.
## Features
- **Thread Analysis** - Fetch and analyze threads from your LangSmith annotation queue
- **Issue Detection** - Identify performance, quality, security, and UX issues
- **Ticket Creation** - Create standardized issue tickets in Jira or Notion
- **Claude Code Integration** - Interactive analysis via `/analyze-latest` command
- **CI/CD Support** - Automated analysis for pipelines using Anthropic API
- **State Tracking** - Avoids reprocessing already-analyzed threads
---
## Installation
```bash
pip install geniable
```
**Requirements:** Python 3.11+
---
## Quick Start
### 1. Login
```bash
geni login
```
Authenticate with your email and password. Tokens are stored securely.
### 2. Initialize
```bash
geni init
```
Interactive wizard that configures:
- LangSmith API credentials and annotation queue
- Issue tracker (Jira, Notion, or none)
- Claude Code integration (optional)
### 3. Analyze
```bash
geni analyze-latest
```
Fetches unanalyzed threads and launches the Geni Analyzer for interactive analysis.
---
## Commands
### Authentication
| Command | Description |
|---------|-------------|
| `geni login` | Login to Geniable |
| `geni logout` | Logout and clear tokens |
| `geni whoami` | Show current user |
### Configuration
| Command | Description |
|---------|-------------|
| `geni init` | Interactive setup wizard |
| `geni configure --show` | Display current configuration |
| `geni configure --validate` | Test all service connections |
| `geni configure --sync-secrets` | Sync credentials to cloud |
### Analysis
| Command | Description |
|---------|-------------|
| `geni analyze latest` | Analyze latest threads from queue |
| `geni analyze latest --limit 10` | Analyze up to 10 threads |
| `geni analyze latest --dry-run` | Analyze without creating tickets |
| `geni analyze latest --ci` | Automated mode for CI/CD pipelines |
### Tickets
| Command | Description |
|---------|-------------|
| `geni ticket create '<json>'` | Create ticket from IssueCard JSON |
### Utilities
| Command | Description |
|---------|-------------|
| `geni status` | Show connection status |
| `geni stats` | Show processing history |
| `geni clear-state` | Reset processing state |
| `geni --version` | Show version |
---
## Configuration
Configuration is stored in `~/.geniable.yaml`:
```yaml
langsmith:
api_key: "ls_..."
project: "my-project"
queue: "quality-review"
provider: "jira" # or "notion" or "none"
jira:
base_url: "https://company.atlassian.net"
email: "user@company.com"
api_token: "..."
project_key: "PROJ"
issue_type: "Bug"
defaults:
report_dir: "./reports"
log_level: "INFO"
```
### Environment Variables
Override any config value with environment variables:
```bash
export LANGSMITH_API_KEY="ls_..."
export JIRA_API_TOKEN="..."
export ANTHROPIC_API_KEY="sk-ant-..." # Required for --ci mode
```
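The precedence rule (environment variable wins over the YAML value) can be sketched as follows; the dotted-path lookup and key names here are illustrative assumptions, not Geniable's internal API:

```python
import os

# Hedged sketch of env-over-YAML precedence; key names are assumptions.
def resolve(config: dict, path: str, env_var: str):
    """Return the env var if set, else walk the dotted path in the config."""
    value = os.environ.get(env_var)
    if value is not None:
        return value
    node = config
    for part in path.split("."):
        if not isinstance(node, dict) or part not in node:
            return None
        node = node[part]
    return node

config = {"langsmith": {"api_key": "ls_from_yaml"}}
print(resolve(config, "langsmith.api_key", "LANGSMITH_API_KEY"))  # ls_from_yaml
os.environ["LANGSMITH_API_KEY"] = "ls_from_env"
print(resolve(config, "langsmith.api_key", "LANGSMITH_API_KEY"))  # ls_from_env
```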
---
## Claude Code Integration
Geniable integrates with Claude Code for interactive analysis.
### Setup
During `geni init`, Geniable installs:
- **Skill**: `.claude/commands/analyze-latest.md`
- **Agent**: `.claude/agents/Geni Analyzer.md`
- **Permissions**: `.claude/settings.local.json`
### Usage
In Claude Code, run:
```
/analyze-latest
```
The Geni Analyzer agent will:
1. Fetch unanalyzed threads from your LangSmith queue
2. Analyze each thread for issues (security, quality, performance, UX)
3. Generate potential solutions for each issue
4. Present findings and ask for ticket creation confirmation
5. Create tickets in Jira/Notion for approved issues
---
## Usage Modes
### Interactive Mode (Default)
```bash
geni analyze-latest
```
Launches Claude Code for real-time, interactive analysis. Best for development and debugging.
### CI/CD Mode
```bash
export ANTHROPIC_API_KEY="sk-ant-..."
geni analyze-latest --ci
```
Automated analysis using Anthropic API directly. Best for scheduled pipelines.
### Report-Only Mode
```bash
geni analyze-latest --dry-run
```
Generates reports without creating tickets. Best for previewing analysis.
---
## Reports
Analysis reports are saved to `./reports/` (configurable):
```
reports/
├── processing_state.json # Tracks processed threads
├── Thread-ProjectName-abc123.md # Individual thread reports
└── analysis_report_20250125.md # Batch analysis reports
```
---
## Issue Detection
The analyzer identifies these issue types:
| Category | Examples | Priority |
|----------|----------|----------|
| **Security** | Data exposure, leaked internals, auth issues | Critical/High |
| **Quality** | Incomplete responses, hallucinations, poor UX | High |
| **Performance** | Slow response (>30s), high tokens (>50K) | High/Medium |
| **Bug** | Errors, exceptions, failures | High |
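The performance thresholds in the table can be expressed as a simple check. This is illustrative only: the thresholds come from the table above, but the real analyzer's logic and field names are not part of this sketch:

```python
# Illustrative only: thresholds taken from the issue-detection table above.
def classify_performance(latency_s: float, tokens: int) -> list[str]:
    issues = []
    if latency_s > 30:
        issues.append("performance: slow response (>30s)")
    if tokens > 50_000:
        issues.append("performance: high token usage (>50K)")
    return issues

print(classify_performance(45.0, 12_000))  # ['performance: slow response (>30s)']
```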
---
## Troubleshooting
### "Authentication required"
```bash
geni login
```
### "Configuration file not found"
```bash
geni init
```
### Service validation failures
```bash
geni configure --validate
```
### Reset processing state
To reprocess all threads:
```bash
geni clear-state -y
```
### Debug mode
```bash
geni analyze-latest --verbose
```
---
## Development
```bash
# Install dev dependencies
pip install -e ".[dev]"
# Quality checks
make lint # Ruff linting
make format # Black + isort
make typecheck # Mypy
# Testing
make test # All tests with coverage
make test-unit # Unit tests only
```
---
## Architecture
```
┌─────────────────────────────────────────────────────────────┐
│ LOCAL (geniable/) │
│ CLI → Agent → API Clients → AWS Cloud Services │
└─────────────────────┬───────────────────────────────────────┘
│ REST API + Cognito Auth
┌─────────────────────▼───────────────────────────────────────┐
│ AWS CLOUD │
│ ┌─────────────────────────┐ ┌──────────────────────────┐ │
│ │ Integration Service │ │ Evaluation Service │ │
│ │ • /threads/annotated │ │ • /evaluations/discovery │ │
│ │ • /threads/{id}/details │ │ • /evaluations/execute │ │
│ │ • /integrations/ticket │ │ │ │
│ └─────────────────────────┘ └──────────────────────────┘ │
│ │
│ DynamoDB (state) + Secrets Manager (credentials) │
└──────────────────────────────────────────────────────────────┘
```
---
## License
MIT
---
## Links
- **Repository**: https://github.com/mnedelko/geniable
- **Issues**: https://github.com/mnedelko/geniable/issues
- **PyPI**: https://pypi.org/project/geniable/
| text/markdown | null | Mike Nedelko <mnedelko@users.noreply.github.com> | null | null | null | langsmith, llm, qa, testing, analysis, mcp, jira, notion | [
"Development Status :: 4 - Beta",
"Environment :: Console",
"Intended Audience :: Developers",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Topic :: Software Development :: Quality Assurance",
"Topic :: Software Development :: Testing",
"Typing :: Typed"
] | [] | null | null | >=3.11 | [] | [] | [] | [
"pydantic>=2.0.0",
"pyyaml>=6.0.0",
"typer[all]>=0.9.0",
"rich>=13.0.0",
"questionary>=2.0.0",
"requests>=2.31.0",
"httpx>=0.25.0",
"langsmith>=0.1.0",
"boto3>=1.34.0",
"anthropic>=0.40.0; extra == \"llm\"",
"pytest>=7.0.0; extra == \"dev\"",
"pytest-cov>=4.0.0; extra == \"dev\"",
"pytest-mock>=3.0.0; extra == \"dev\"",
"pytest-asyncio>=0.21.0; extra == \"dev\"",
"moto[dynamodb]>=5.0.0; extra == \"dev\"",
"responses>=0.23.0; extra == \"dev\"",
"mypy>=1.0.0; extra == \"dev\"",
"boto3-stubs[dynamodb]>=1.34.0; extra == \"dev\"",
"types-requests>=2.31.0; extra == \"dev\"",
"types-PyYAML>=6.0.0; extra == \"dev\"",
"black>=24.0.0; extra == \"dev\"",
"isort>=5.0.0; extra == \"dev\"",
"ruff>=0.1.0; extra == \"dev\"",
"pre-commit>=3.0.0; extra == \"dev\"",
"anthropic>=0.40.0; extra == \"dev\""
] | [] | [] | [] | [
"Homepage, https://github.com/mnedelko/geniable",
"Documentation, https://github.com/mnedelko/geniable#readme",
"Repository, https://github.com/mnedelko/geniable",
"Issues, https://github.com/mnedelko/geniable/issues"
] | twine/6.2.0 CPython/3.12.2 | 2026-02-21T10:09:40.220721 | geniable-2.6.0.tar.gz | 118,492 | e0/06/4bd6e5664452002bf0dd76adb00c3ac0cad8196c554d81b622baf3476759/geniable-2.6.0.tar.gz | source | sdist | null | false | 2fb43d0141ddd7c74ec2404e9e619c6f | 02a18a491efb7b9c603cc17ec18aea738822ba0d12f77f6f9dc663138d269237 | e0064bd6e5664452002bf0dd76adb00c3ac0cad8196c554d81b622baf3476759 | MIT | [
"LICENSE"
] | 236 |
2.4 | comfy-3d-viewers | 0.2.32 | Reusable 3D viewer infrastructure for ComfyUI nodes | # comfy-3d-viewers
Reusable 3D viewer infrastructure for ComfyUI nodes.
Provides VTK.js and Gaussian splatting viewers, shared utilities, and HTML templates for 3D mesh visualization in ComfyUI.
## Installation
```bash
pip install -e .
```
## Usage
This package is used by ComfyUI-GeometryPack. The `prestartup_script.py` copies the viewer files to the extension's web directory at runtime.
```python
from comfy_3d_viewers import get_js_dir, get_html_dir, get_utils_dir
# Get paths to viewer files
js_dir = get_js_dir() # JS bundles and viewer source
html_dir = get_html_dir() # HTML viewer templates
utils_dir = get_utils_dir() # Shared JS utilities
```
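A rough sketch of what a prestartup copy step might look like with these paths; the directory names are placeholders, not the package's actual layout:

```python
import shutil
import tempfile
from pathlib import Path

# Rough sketch of a prestartup copy step; directory names are placeholders.
def sync_viewers(src: Path, dst: Path) -> None:
    """Mirror the viewer assets into the extension's web directory."""
    dst.mkdir(parents=True, exist_ok=True)
    shutil.copytree(src, dst, dirs_exist_ok=True)

# Demo with throwaway directories
with tempfile.TemporaryDirectory() as tmp:
    src = Path(tmp) / "viewers"
    src.mkdir()
    (src / "vtk_viewer.html").write_text("<html></html>")
    web = Path(tmp) / "web" / "viewers"
    sync_viewers(src, web)
    print((web / "vtk_viewer.html").exists())  # True
```

`dirs_exist_ok=True` lets the copy run on every startup without failing when the target already exists.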
## Contents
- **JS Bundles**: VTK.js, Gaussian splatting, modular viewer bundle
- **Viewer Modules**: Modular viewer architecture (core, loaders, viewers, ui, features, utils)
- **Shared Utilities**: Extension folder detection, screenshot handling, UI components, formatting, analysis panels, postMessage helpers
- **HTML Templates**: Viewer pages for VTK, textured, dual, gaussian, UV, etc.
## Building
To rebuild the VTK.js bundle:
```bash
cd build_vtk_bundle
npm install
npm run build
```
## License
GPL-3.0-or-later
| text/markdown | ComfyUI-GeometryPack Contributors | null | null | null | null | 3d, comfyui, gaussian-splatting, mesh-visualization, viewer, vtk | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"License :: OSI Approved :: GNU General Public License v3 or later (GPLv3+)",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Topic :: Multimedia :: Graphics :: 3D Modeling",
"Topic :: Scientific/Engineering :: Visualization"
] | [] | null | null | >=3.10 | [] | [] | [] | [] | [] | [] | [] | [
"Homepage, https://github.com/PozzettiAndrea/comfy-3d-viewers",
"Repository, https://github.com/PozzettiAndrea/comfy-3d-viewers"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-21T10:07:49.089376 | comfy_3d_viewers-0.2.32.tar.gz | 1,147,647 | 60/ea/afccde43fb52eab1051949b717f58739a8d8d992099fbaaa231491b75f06/comfy_3d_viewers-0.2.32.tar.gz | source | sdist | null | false | 176febc371f7d2818625b2946f49ed24 | ce370c5de539c025fa7cbf8016484b35b76d5c0b5ebd646e94746085c3338bae | 60eaafccde43fb52eab1051949b717f58739a8d8d992099fbaaa231491b75f06 | GPL-3.0-or-later | [] | 314 |
2.4 | nvidia-nat-redis | 1.5.0a20260221 | Subpackage for Redis integration in NeMo Agent Toolkit | <!--
SPDX-FileCopyrightText: Copyright (c) 2025-2026, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
SPDX-License-Identifier: Apache-2.0
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
-->

# NVIDIA NeMo Agent Toolkit Subpackage
This is a subpackage for Redis memory integration in NeMo Agent Toolkit.
For more information about NeMo Agent Toolkit, please visit the [NeMo Agent Toolkit package](https://pypi.org/project/nvidia-nat/).
| text/markdown | NVIDIA Corporation | null | NVIDIA Corporation | null | Apache-2.0 | ai, agents, memory | [
"Programming Language :: Python",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13"
] | [] | null | null | <3.14,>=3.11 | [] | [] | [] | [
"nvidia-nat-core==v1.5.0a20260221",
"redis<5.0.0,>=4.3.4",
"nvidia-nat-test==v1.5.0a20260221; extra == \"test\""
] | [] | [] | [] | [
"documentation, https://docs.nvidia.com/nemo/agent-toolkit/latest/",
"source, https://github.com/NVIDIA/NeMo-Agent-Toolkit"
] | twine/5.0.0 CPython/3.10.19 | 2026-02-21T10:07:19.596102 | nvidia_nat_redis-1.5.0a20260221-py3-none-any.whl | 58,919 | 1f/34/5e053ab5de12cb080a4663dad135a9e55b2ae8da93874e7a71c31ee28bce/nvidia_nat_redis-1.5.0a20260221-py3-none-any.whl | py3 | bdist_wheel | null | false | ef1f841d7b0f32090e3736ca42f08cbc | f154123e310a5424d0aed0d9654e6b962a4e37c130f76ff94350ca4201bfbfb4 | 1f345e053ab5de12cb080a4663dad135a9e55b2ae8da93874e7a71c31ee28bce | null | [] | 72 |
2.4 | nvidia-nat-data-flywheel | 1.5.0a20260221 | Subpackage for NVIDIA Data Flywheel Blueprint integration in NeMo Agent Toolkit | <!--
SPDX-FileCopyrightText: Copyright (c) 2025-2026, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
SPDX-License-Identifier: Apache-2.0
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
-->

# NVIDIA NeMo Agent Toolkit Subpackage
This is a subpackage for NVIDIA Data Flywheel Blueprint integration in NeMo Agent Toolkit, supporting continuous model improvement.
For more information about the NVIDIA NeMo Agent Toolkit, please visit the [NeMo Agent Toolkit GitHub Repo](https://github.com/NVIDIA/NeMo-Agent-Toolkit).
| text/markdown | NVIDIA Corporation | null | NVIDIA Corporation | null | Apache-2.0 | ai, observability, nemo, data flywheel | [
"Programming Language :: Python",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13"
] | [] | null | null | <3.14,>=3.11 | [] | [] | [] | [
"nvidia-nat-core==v1.5.0a20260221",
"elasticsearch~=8.1",
"nvidia-nat-test==v1.5.0a20260221; extra == \"test\""
] | [] | [] | [] | [
"documentation, https://docs.nvidia.com/nemo/agent-toolkit/latest/",
"source, https://github.com/NVIDIA/NeMo-Agent-Toolkit"
] | twine/5.0.0 CPython/3.10.19 | 2026-02-21T10:07:02.685474 | nvidia_nat_data_flywheel-1.5.0a20260221-py3-none-any.whl | 90,494 | 98/b0/ff2930de8b1a05a721e2b5352d2aa225f11ad0bbcdeded25772d52f4e469/nvidia_nat_data_flywheel-1.5.0a20260221-py3-none-any.whl | py3 | bdist_wheel | null | false | d8b2e4b1f930338451ecbe15c554224d | 297d898e082925036c21a825e9122612ed946e6a8b8494960110f68d8f7ecdfa | 98b0ff2930de8b1a05a721e2b5352d2aa225f11ad0bbcdeded25772d52f4e469 | null | [] | 71 |
2.4 | nvidia-nat-semantic-kernel | 1.5.0a20260221 | Subpackage for Semantic-Kernel integration in NeMo Agent Toolkit | <!--
SPDX-FileCopyrightText: Copyright (c) 2025-2026, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
SPDX-License-Identifier: Apache-2.0
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
-->

# NVIDIA NeMo Agent Toolkit Subpackage
This is a subpackage for Semantic-Kernel integration in NeMo Agent Toolkit.
For more information about the NVIDIA NeMo Agent Toolkit, please visit the [NeMo Agent Toolkit GitHub Repo](https://github.com/NVIDIA/NeMo-Agent-Toolkit).
| text/markdown | NVIDIA Corporation | null | NVIDIA Corporation | null | Apache-2.0 | ai, rag, agents | [
"Programming Language :: Python",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13"
] | [] | null | null | <3.14,>=3.11 | [] | [] | [] | [
"nvidia-nat-core==v1.5.0a20260221",
"ruamel-yaml-clibz==0.3.5",
"semantic-kernel~=1.36",
"werkzeug>=3.1.5",
"nvidia-nat-test==v1.5.0a20260221; extra == \"test\""
] | [] | [] | [] | [
"documentation, https://docs.nvidia.com/nemo/agent-toolkit/latest/",
"source, https://github.com/NVIDIA/NeMo-Agent-Toolkit"
] | twine/5.0.0 CPython/3.10.19 | 2026-02-21T10:06:45.486164 | nvidia_nat_semantic_kernel-1.5.0a20260221-py3-none-any.whl | 56,466 | 3b/a1/9b4523b352cc86e39032c173195a466f83578bb0d4d6fffa56f134440115/nvidia_nat_semantic_kernel-1.5.0a20260221-py3-none-any.whl | py3 | bdist_wheel | null | false | 6fa992556de8e6ddd5dfc3ea6f6f40a9 | e5f3445a610a08c279a79ea60bee29c0dd3f31c4724b990b7422d41c3fb4204a | 3ba19b4523b352cc86e39032c173195a466f83578bb0d4d6fffa56f134440115 | null | [] | 72 |
2.4 | substr8 | 0.7.1 | Substr8 Platform CLI - Verifiable AI Infrastructure | # Substr8 CLI
[](https://pypi.org/project/substr8/)
[](https://opensource.org/licenses/MIT)
**Verifiable AI Infrastructure** — The command-line interface for the Substr8 platform.
Substr8 provides provable, auditable, and deterministic infrastructure for AI agents. This CLI bundles our core tools:
- **GAM** — Git-Native Agent Memory (cryptographically verifiable memory)
- **FDAA** — File-Driven Agent Architecture (coming soon)
- **ACC** — Agent Capability Control (coming soon)
## Installation
```bash
pip install substr8
```
### Optional Dependencies
```bash
# With cryptographic signing (agent DIDs, GPG integration)
pip install substr8[crypto]
# With semantic search (embeddings-based recall)
pip install substr8[retrieval]
# Everything
pip install substr8[full]
```
## Quick Start
```bash
# Initialize GAM in your workspace
cd your-project
substr8 gam init
# Store a memory
substr8 gam remember "Raza exercises 4-5x per week" --tag health
# Search memories (semantic search)
substr8 gam recall "fitness routine"
# Verify provenance
substr8 gam verify mem_1234567890_abcd
# Show status
substr8 gam status
```
## GAM — Git-Native Agent Memory
GAM uses git's 20-year-old version control primitives to provide:
| Feature | How |
|---------|-----|
| **Cryptographic provenance** | Every memory has a commit SHA |
| **Tamper-evident history** | Merkle tree — change anything, hash breaks |
| **Human-auditable** | Plain Markdown files, `git blame` works |
| **Temporal awareness** | Decay scoring, point-in-time queries |
| **W^X permissions** | Path-based access control with HITL gates |
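The tamper-evidence property rests on git's content addressing: a blob's object ID is the SHA-1 of a small header plus the file bytes, so editing a stored memory changes its hash and every commit hash above it in the Merkle tree. A minimal demonstration (this reproduces git's documented blob hashing, not GAM's own code):

```python
import hashlib

# Git's content addressing: a blob ID is SHA-1("blob {size}\0" + content),
# so any edit to a stored memory yields a different object ID, and every
# commit above it in the Merkle tree changes too.
def git_blob_sha(content: bytes) -> str:
    header = f"blob {len(content)}\0".encode()
    return hashlib.sha1(header + content).hexdigest()

a = git_blob_sha(b"Raza exercises 4-5x/week\n")
b = git_blob_sha(b"Raza exercises 2x/week\n")
print(a != b)  # True: a one-word edit yields a different object ID
```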
### Commands
```bash
# Core operations
substr8 gam init # Initialize repository
substr8 gam remember <text> # Store a memory
substr8 gam recall <query> # Search memories
substr8 gam verify <id> # Verify provenance
substr8 gam forget <id> # Delete a memory
substr8 gam status # Show repository status
# Identity management
substr8 gam identity create-agent <name> # Create agent DID
substr8 gam identity list # List identities
# Permissions (W^X)
substr8 gam permissions list # Show all path policies
substr8 gam permissions check # Check a specific path
substr8 gam permissions hitl # Show human-required paths
# Maintenance
substr8 gam import <path> # Import existing .md files
substr8 gam reindex # Rebuild indexes
```
### Memory File Format
Memories are stored as Markdown with YAML frontmatter:
```markdown
---
gam_version: 1
id: mem_2026021908001234
created: 2026-02-19T08:00:00Z
source: conversation
confidence: high
tags: [health, fitness]
---
# Raza's Fitness Goals
Raza exercises 4-5x/week and has quit alcohol.
```
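Because memories are plain Markdown with YAML frontmatter, they are readable without GAM at all. A stdlib-only sketch that splits the flat `key: value` subset shown above (a real reader would use PyYAML, which substr8 already depends on):

```python
import re

MEMORY = """\
---
gam_version: 1
id: mem_2026021908001234
created: 2026-02-19T08:00:00Z
source: conversation
confidence: high
tags: [health, fitness]
---
# Raza's Fitness Goals
Raza exercises 4-5x/week and has quit alcohol.
"""

def split_frontmatter(text):
    """Split a memory file into (metadata dict, markdown body).
    Handles only flat 'key: value' frontmatter, as in the example above."""
    match = re.match(r"---\n(.*?)\n---\n(.*)", text, re.S)
    meta = {}
    for line in match.group(1).splitlines():
        key, _, value = line.partition(": ")
        meta[key] = value
    return meta, match.group(2)

meta, body = split_frontmatter(MEMORY)
print(meta["id"], meta["confidence"])
```

The same property is what makes `git blame` and ordinary text tooling work on the memory store.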
### W^X Permissions
| Path | Permission | Signature Required |
|------|------------|-------------------|
| `SOUL.md` | HUMAN_SIGN | Human GPG |
| `AGENTS.md` | HUMAN_SIGN | Human GPG |
| `MEMORY.md` | AGENT_SIGN | Agent DID |
| `memory/daily/*` | OPEN | None |
| `memory/archive/*` | READONLY | N/A |
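The policy table above can be thought of as an ordered list of glob patterns checked first-match-wins. A hypothetical stdlib sketch of that lookup (the default for unmatched paths is an assumption, not documented behavior):

```python
from fnmatch import fnmatch

# Policies from the table above, most specific first (illustrative only).
POLICIES = [
    ("SOUL.md", "HUMAN_SIGN"),
    ("AGENTS.md", "HUMAN_SIGN"),
    ("MEMORY.md", "AGENT_SIGN"),
    ("memory/daily/*", "OPEN"),
    ("memory/archive/*", "READONLY"),
]

def policy_for(path):
    """Return the permission for a path: first matching glob wins."""
    for pattern, permission in POLICIES:
        if fnmatch(path, pattern):
            return permission
    return "READONLY"  # assumed default: deny writes to unknown paths

print(policy_for("SOUL.md"))
print(policy_for("memory/daily/2026-02-19.md"))
```

`substr8 gam permissions check` presumably answers the same question for a live repository.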
## Platform Status
```bash
substr8 info
```
```
┌─────────────────────────────────────┐
│ Substr8 Platform v1.0.0 │
├───────────┬──────────┬──────────────┤
│ Component │ Status │ Description │
├───────────┼──────────┼──────────────┤
│ GAM │ ✅ v1.0.0│ Git Memory │
│ FDAA │ 🔜 │ File Agents │
│ ACC │ 🔜 │ Capabilities │
└───────────┴──────────┴──────────────┘
```
## Research
| Paper | DOI |
|-------|-----|
| GAM: Git-Native Agent Memory | [`10.5281/zenodo.18704573`](https://doi.org/10.5281/zenodo.18704573) |
| ACC: Agent Capability Control | [`10.5281/zenodo.18704577`](https://doi.org/10.5281/zenodo.18704577) |
| FDAA: File-Driven Agent Architecture | [`10.5281/zenodo.18675147`](https://doi.org/10.5281/zenodo.18675147) |
## Links
- **Website:** [substr8labs.com](https://substr8labs.com)
- **Substack:** [substr8labs.substack.com](https://substr8labs.substack.com)
- **GitHub:** [github.com/Substr8-Labs](https://github.com/Substr8-Labs)
- **Twitter:** [@substr8labs](https://twitter.com/substr8labs)
## License
MIT — [Substr8 Labs](https://substr8labs.com)
---
*AI systems should be provable, not just probable.*
| text/markdown | null | Substr8 Labs <hello@substr8labs.com> | null | null | null | acc, agents, ai, capability, fdaa, gam, memory, verifiable | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"click>=8.0",
"cryptography>=41.0",
"gitpython>=3.1.0",
"pyyaml>=6.0",
"rich>=13.0",
"pytest-cov>=4.0; extra == \"dev\"",
"pytest>=7.0; extra == \"dev\"",
"ruff>=0.1.0; extra == \"dev\"",
"anthropic>=0.18.0; extra == \"fdaa\"",
"openai>=1.0.0; extra == \"fdaa\"",
"opentelemetry-api>=1.20.0; extra == \"fdaa\"",
"opentelemetry-exporter-jaeger>=1.20.0; extra == \"fdaa\"",
"opentelemetry-sdk>=1.20.0; extra == \"fdaa\"",
"fastapi>=0.100.0; extra == \"fdaa-server\"",
"motor>=3.0.0; extra == \"fdaa-server\"",
"uvicorn>=0.23.0; extra == \"fdaa-server\"",
"anthropic>=0.18.0; extra == \"full\"",
"chromadb>=0.4.0; extra == \"full\"",
"cryptography>=41.0; extra == \"full\"",
"openai>=1.0.0; extra == \"full\"",
"opentelemetry-api>=1.20.0; extra == \"full\"",
"opentelemetry-sdk>=1.20.0; extra == \"full\"",
"sentence-transformers>=2.0; extra == \"full\"",
"httpx>=0.24.0; extra == \"gam-client\"",
"fastapi>=0.100.0; extra == \"gam-server\"",
"uvicorn>=0.23.0; extra == \"gam-server\"",
"numpy>=1.24.0; extra == \"retrieval\"",
"sentence-transformers>=2.0; extra == \"retrieval\"",
"chromadb>=0.4.0; extra == \"search\"",
"openai>=1.0.0; extra == \"search\"",
"chromadb>=0.4.0; extra == \"search-local\"",
"sentence-transformers>=2.0; extra == \"search-local\""
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.14.3 | 2026-02-21T10:06:38.040964 | substr8-0.7.1.tar.gz | 103,272 | 0c/18/52991cad6fde690dc41b52720e656b528becdb4bcc2c7d0b8826c837d23c/substr8-0.7.1.tar.gz | source | sdist | null | false | f1deb0e609f87cd9feb0d54d9251e216 | f1b2cd46c9c555cf6eaef655b815c05e8d9f6ad9ad27c0dbb22f09e89a97ac0f | 0c1852991cad6fde690dc41b52720e656b528becdb4bcc2c7d0b8826c837d23c | MIT | [] | 243 |
2.4 | nvidia-nat-opentelemetry | 1.5.0a20260221 | Subpackage for OpenTelemetry integration in NeMo Agent Toolkit | <!--
SPDX-FileCopyrightText: Copyright (c) 2025-2026, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
SPDX-License-Identifier: Apache-2.0
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
-->
.
| text/markdown | NVIDIA Corporation | null | NVIDIA Corporation | null | Apache-2.0 | ai, observability, opentelemetry | [
"Programming Language :: Python",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13"
] | [] | null | null | <3.14,>=3.11 | [] | [] | [] | [
"nvidia-nat-core==v1.5.0a20260221",
"opentelemetry-api~=1.2",
"opentelemetry-exporter-otlp~=1.3",
"opentelemetry-sdk~=1.3",
"nvidia-nat-test==v1.5.0a20260221; extra == \"test\""
] | [] | [] | [] | [
"documentation, https://docs.nvidia.com/nemo/agent-toolkit/latest/",
"source, https://github.com/NVIDIA/NeMo-Agent-Toolkit"
] | twine/5.0.0 CPython/3.10.19 | 2026-02-21T10:06:28.716452 | nvidia_nat_opentelemetry-1.5.0a20260221-py3-none-any.whl | 67,573 | e4/ff/aff6a054e50f073d57876f5a1134e84753e04fb7f7e6091157b66423e8de/nvidia_nat_opentelemetry-1.5.0a20260221-py3-none-any.whl | py3 | bdist_wheel | null | false | c20ac083e3c693fe6f7dedd2efb279c0 | d8ff9d2199bdaa8cb953a30200df754f59beaff2b47e72a67b512e900ac0da7c | e4ffaff6a054e50f073d57876f5a1134e84753e04fb7f7e6091157b66423e8de | null | [] | 71 |
2.4 | nvidia-nat-openpipe-art | 1.5.0a20260221 | Subpackage for OpenPipe ART integration in NeMo Agent Toolkit | <!--
SPDX-FileCopyrightText: Copyright (c) 2025-2026, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
SPDX-License-Identifier: Apache-2.0
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
-->

# NVIDIA NeMo Agent Toolkit Subpackage
This is a subpackage for OpenPipe ART integration in NeMo Agent Toolkit.
For more information about the NVIDIA NeMo Agent Toolkit, please visit the [NeMo Agent Toolkit GitHub Repo](https://github.com/NVIDIA/NeMo-Agent-Toolkit).
| text/markdown | NVIDIA Corporation | null | NVIDIA Corporation | null | Apache-2.0 | ai, finetuning | [
"Programming Language :: Python",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13"
] | [] | null | null | <3.14,>=3.11 | [] | [] | [] | [
"nvidia-nat-core==v1.5.0a20260221",
"matplotlib~=3.9",
"openpipe-art==0.5.4",
"nvidia-nat-test==v1.5.0a20260221; extra == \"test\""
] | [] | [] | [] | [
"documentation, https://docs.nvidia.com/nemo/agent-toolkit/latest/",
"source, https://github.com/NVIDIA/NeMo-Agent-Toolkit"
] | twine/5.0.0 CPython/3.10.19 | 2026-02-21T10:06:11.824700 | nvidia_nat_openpipe_art-1.5.0a20260221-py3-none-any.whl | 65,987 | 0e/f7/5533be0096dd7a1306d5ad7b4554bb97795c5d056a3aa6ff86b807ed4a49/nvidia_nat_openpipe_art-1.5.0a20260221-py3-none-any.whl | py3 | bdist_wheel | null | false | 022d8d9330568df7e5aeba1099f3269d | fd278a00df49704e80fd653fdb3a6c4e7940f1c7f38fc0f03afeb036722556b1 | 0ef75533be0096dd7a1306d5ad7b4554bb97795c5d056a3aa6ff86b807ed4a49 | null | [] | 70 |
2.4 | nvidia-nat-llama-index | 1.5.0a20260221 | Subpackage for Llama-Index integration in NeMo Agent Toolkit | <!--
SPDX-FileCopyrightText: Copyright (c) 2025-2026, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
SPDX-License-Identifier: Apache-2.0
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
-->

# NVIDIA NeMo Agent Toolkit Subpackage
This is a subpackage for Llama-Index integration in NeMo Agent Toolkit.
For more information about the NVIDIA NeMo Agent Toolkit, please visit the [NeMo Agent Toolkit GitHub Repo](https://github.com/NVIDIA/NeMo-Agent-Toolkit).
| text/markdown | NVIDIA Corporation | null | NVIDIA Corporation | null | Apache-2.0 | ai, rag, agents | [
"Programming Language :: Python",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13"
] | [] | null | null | <3.14,>=3.11 | [] | [] | [] | [
"nvidia-nat-core==v1.5.0a20260221",
"llama-index-core<1.0.0,>=0.14.12",
"llama-index-embeddings-azure-openai<1.0.0,>=0.4.1",
"llama-index-embeddings-nvidia<1.0.0,>=0.4.2",
"llama-index-embeddings-openai<1.0.0,>=0.5.1",
"llama-index-llms-azure-openai<1.0.0,>=0.4.2",
"llama-index-llms-bedrock<1.0.0,>=0.4.2",
"llama-index-llms-litellm<1.0.0,>=0.6.3",
"llama-index-llms-nvidia<1.0.0,>=0.4.4",
"llama-index-llms-openai<1.0.0,>=0.6.12",
"llama-index-readers-file<1.0.0,>=0.5.6",
"llama-index<1.0.0,>=0.14.12",
"nvidia-nat-test==v1.5.0a20260221; extra == \"test\""
] | [] | [] | [] | [
"documentation, https://docs.nvidia.com/nemo/agent-toolkit/latest/",
"source, https://github.com/NVIDIA/NeMo-Agent-Toolkit"
] | twine/5.0.0 CPython/3.10.19 | 2026-02-21T10:05:54.733579 | nvidia_nat_llama_index-1.5.0a20260221-py3-none-any.whl | 59,147 | b2/9c/9142bd60895c13cc458d32afd757457f84cc19424816e2eb52939ad18060/nvidia_nat_llama_index-1.5.0a20260221-py3-none-any.whl | py3 | bdist_wheel | null | false | 0606bfc864c81fb84c769f38f73830c0 | e123be2aaab23463265243843ce59c3e3ac863d94bbd51bd1a3fc56d9ffe90e0 | b29c9142bd60895c13cc458d32afd757457f84cc19424816e2eb52939ad18060 | null | [] | 71 |
2.4 | nvidia-nat-mem0ai | 1.5.0a20260221 | Subpackage for Mem0 integration in NeMo Agent Toolkit | <!--
SPDX-FileCopyrightText: Copyright (c) 2025-2026, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
SPDX-License-Identifier: Apache-2.0
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
-->

# NVIDIA NeMo Agent Toolkit Subpackage
This is a subpackage for Mem0 memory integration in NeMo Agent Toolkit.
For more information about the NVIDIA NeMo Agent Toolkit, please visit the [NeMo Agent Toolkit GitHub Repo](https://github.com/NVIDIA/NeMo-Agent-Toolkit).
| text/markdown | NVIDIA Corporation | null | NVIDIA Corporation | null | Apache-2.0 | ai, agents, memory | [
"Programming Language :: Python",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13"
] | [] | null | null | <3.14,>=3.11 | [] | [] | [] | [
"nvidia-nat-core==v1.5.0a20260221",
"mem0ai<1.0.0,>=0.1.30",
"nvidia-nat-test==v1.5.0a20260221; extra == \"test\""
] | [] | [] | [] | [
"documentation, https://docs.nvidia.com/nemo/agent-toolkit/latest/",
"source, https://github.com/NVIDIA/NeMo-Agent-Toolkit"
] | twine/5.0.0 CPython/3.10.19 | 2026-02-21T10:05:37.610201 | nvidia_nat_mem0ai-1.5.0a20260221-py3-none-any.whl | 51,948 | 3b/6b/d7a9e41674da215a1fc2e25793e5f6ff33bd1d5e7708869dbfd38dfbb8f9/nvidia_nat_mem0ai-1.5.0a20260221-py3-none-any.whl | py3 | bdist_wheel | null | false | 7d9663cf90e76d1f58be33a589ddf47f | f5a211926d62c51b6d4ba0a79c17b668da5caa48d5b6e2dbf951bc2f194fc062 | 3b6bd7a9e41674da215a1fc2e25793e5f6ff33bd1d5e7708869dbfd38dfbb8f9 | null | [] | 69 |
2.4 | nvidia-nat-fastmcp | 1.5.0a20260221 | Subpackage for FastMCP server integration in NeMo Agent Toolkit | <!--
SPDX-FileCopyrightText: Copyright (c) 2026, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
SPDX-License-Identifier: Apache-2.0
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
-->

# NVIDIA NeMo Agent Toolkit — FastMCP Subpackage
Subpackage providing FastMCP integration for the NVIDIA NeMo Agent Toolkit.
| text/markdown | NVIDIA Corporation | null | NVIDIA Corporation | null | Apache-2.0 | ai, rag, agents, mcp, fastmcp | [
"Programming Language :: Python",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13"
] | [] | null | null | <3.14,>=3.11 | [] | [] | [] | [
"watchfiles~=1.1",
"nvidia-nat-core==v1.5.0a20260221",
"fastmcp>=3.0.0b1",
"nvidia-nat-test==v1.5.0a20260221; extra == \"test\""
] | [] | [] | [] | [
"documentation, https://docs.nvidia.com/nemo/agent-toolkit/latest/",
"source, https://github.com/NVIDIA/NeMo-Agent-Toolkit"
] | twine/5.0.0 CPython/3.10.19 | 2026-02-21T10:05:20.683030 | nvidia_nat_fastmcp-1.5.0a20260221-py3-none-any.whl | 20,410 | 38/0a/48e7b7c119fd83bf813b41eafb61216ff54691e1ce98160ff4bf12c24891/nvidia_nat_fastmcp-1.5.0a20260221-py3-none-any.whl | py3 | bdist_wheel | null | false | 5a646cafbfc3357c99a74434b232afde | ddd4a56196a7b9594ee0dd76394d786f6b63b7b78f336b7894242b9882e9117a | 380a48e7b7c119fd83bf813b41eafb61216ff54691e1ce98160ff4bf12c24891 | null | [] | 68 |
2.4 | nvidia-nat-langchain | 1.5.0a20260221 | Subpackage for LangChain/LangGraph integration in NeMo Agent Toolkit | <!--
SPDX-FileCopyrightText: Copyright (c) 2025-2026, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
SPDX-License-Identifier: Apache-2.0
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
-->

# NVIDIA NeMo Agent Toolkit Subpackage
This is a subpackage for LangChain/LangGraph integration in NeMo Agent Toolkit.
For more information about the NVIDIA NeMo Agent Toolkit, please visit the [NeMo Agent Toolkit GitHub Repo](https://github.com/NVIDIA/NeMo-Agent-Toolkit).
| text/markdown | NVIDIA Corporation | null | NVIDIA Corporation | null | Apache-2.0 | ai, rag, agents | [
"Programming Language :: Python",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13"
] | [] | null | null | <3.14,>=3.11 | [] | [] | [] | [
"nvidia-nat-core==v1.5.0a20260221",
"langchain<2.0.0,>=1.2.3",
"langchain-aws<2.0.0,>=1.1.0",
"langchain-classic<2.0.0,>=1.0.1",
"langchain-community~=0.3",
"langchain-core<2.0.0,>=1.2.6",
"langchain-huggingface<2.0.0,>=1.2.0",
"langchain-litellm<1.0.0,>=0.3.5",
"langchain-milvus<1.0.0,>=0.3.3",
"langchain-nvidia-ai-endpoints<2.0.0,>=1.0.2",
"langchain-openai<2.0.0,>=1.1.6",
"langchain-tavily<1.0.0,>=0.2.16",
"langgraph<2.0.0,>=1.0.5",
"openevals<1.0.0,>=0.1.3",
"nvidia-nat-eval[profiling]==v1.5.0a20260221; extra == \"test\"",
"nvidia-nat-test==v1.5.0a20260221; extra == \"test\""
] | [] | [] | [] | [
"documentation, https://docs.nvidia.com/nemo/agent-toolkit/latest/",
"source, https://github.com/NVIDIA/NeMo-Agent-Toolkit"
] | twine/5.0.0 CPython/3.10.19 | 2026-02-21T10:05:03.740569 | nvidia_nat_langchain-1.5.0a20260221-py3-none-any.whl | 160,386 | 65/26/b292ef1f702e2e408111506a4dff31849138198da36139b08d6927ef93c7/nvidia_nat_langchain-1.5.0a20260221-py3-none-any.whl | py3 | bdist_wheel | null | false | c4018b5448c2a0513751815ef26e42d3 | 930a8ea2489089b33822e66ffa434e2ddcfdd8ce69561781edb864f6c5c68abc | 6526b292ef1f702e2e408111506a4dff31849138198da36139b08d6927ef93c7 | null | [] | 69 |
2.4 | nvidia-nat-phoenix | 1.5.0a20260221 | Subpackage for Arize Phoenix integration in NeMo Agent Toolkit | <!--
SPDX-FileCopyrightText: Copyright (c) 2025-2026, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
SPDX-License-Identifier: Apache-2.0
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
-->
.
| text/markdown | NVIDIA Corporation | null | NVIDIA Corporation | null | Apache-2.0 | ai, observability, phoenix, arize | [
"Programming Language :: Python",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13"
] | [] | null | null | <3.14,>=3.11 | [] | [] | [] | [
"nvidia-nat-core==v1.5.0a20260221",
"nvidia-nat-opentelemetry==v1.5.0a20260221",
"arize-phoenix-otel<1.0.0,>=0.13.1",
"openinference-instrumentation",
"nvidia-nat-test==v1.5.0a20260221; extra == \"test\""
] | [] | [] | [] | [
"documentation, https://docs.nvidia.com/nemo/agent-toolkit/latest/",
"source, https://github.com/NVIDIA/NeMo-Agent-Toolkit"
] | twine/5.0.0 CPython/3.10.19 | 2026-02-21T10:04:46.636811 | nvidia_nat_phoenix-1.5.0a20260221-py3-none-any.whl | 53,567 | d3/ea/3dd12360cc5813dff5d53a97ce40a4eaf726361b485f095caef5b11f361f/nvidia_nat_phoenix-1.5.0a20260221-py3-none-any.whl | py3 | bdist_wheel | null | false | 3830918f33e6aadc7b32bf0e59e7f947 | 2b66e0dcb5de031ccc1f787f1b5b09b35123af3e0f6b21f8d93f32ddf5641eb5 | d3ea3dd12360cc5813dff5d53a97ce40a4eaf726361b485f095caef5b11f361f | null | [] | 69 |
2.4 | nvidia-nat-crewai | 1.5.0a20260221 | Subpackage for CrewAI integration in NeMo Agent Toolkit | <!--
SPDX-FileCopyrightText: Copyright (c) 2025-2026, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
SPDX-License-Identifier: Apache-2.0
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
-->

# NVIDIA NeMo Agent Toolkit Subpackage
This is a subpackage for CrewAI integration in NeMo Agent Toolkit.
For more information about the NVIDIA NeMo Agent Toolkit, please visit the [NeMo Agent Toolkit GitHub Repo](https://github.com/NVIDIA/NeMo-Agent-Toolkit).
| text/markdown | NVIDIA Corporation | null | NVIDIA Corporation | null | Apache-2.0 | ai, rag, agents | [
"Programming Language :: Python",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13"
] | [] | null | null | <3.14,>=3.11 | [] | [] | [] | [
"nvidia-nat-core==v1.5.0a20260221",
"crewai<1.0.0,>=0.193.2",
"litellm~=1.74",
"nvidia-nat-test==v1.5.0a20260221; extra == \"test\""
] | [] | [] | [] | [
"documentation, https://docs.nvidia.com/nemo/agent-toolkit/latest/",
"source, https://github.com/NVIDIA/NeMo-Agent-Toolkit"
] | twine/5.0.0 CPython/3.10.19 | 2026-02-21T10:04:29.781485 | nvidia_nat_crewai-1.5.0a20260221-py3-none-any.whl | 54,780 | eb/39/e52206c2c738ab57696e767ef0e78e0a8a3f274ab24a915bd3a65efbf0e3/nvidia_nat_crewai-1.5.0a20260221-py3-none-any.whl | py3 | bdist_wheel | null | false | ed2f5c396daf4b8185dc8b28bd04d5c7 | 3d5a8d9067dcb5feda36efce3704f2dbf71b2cf5b0f7c952aaa764367cf4dffd | eb39e52206c2c738ab57696e767ef0e78e0a8a3f274ab24a915bd3a65efbf0e3 | null | [] | 69 |
2.4 | nvidia-nat-test | 1.5.0a20260221 | Testing utilities for NeMo Agent Toolkit | <!--
SPDX-FileCopyrightText: Copyright (c) 2025-2026, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
SPDX-License-Identifier: Apache-2.0
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
-->

# NVIDIA NeMo Agent Toolkit Subpackage
This is a subpackage for NeMo Agent Toolkit test utilities.
For more information about the NVIDIA NeMo Agent Toolkit, please visit the [NeMo Agent Toolkit GitHub Repo](https://github.com/NVIDIA/NeMo-Agent-Toolkit).
| text/markdown | NVIDIA Corporation | null | NVIDIA Corporation | null | Apache-2.0 | ai, rag, agents | [
"Programming Language :: Python",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13"
] | [] | null | null | <3.14,>=3.11 | [] | [] | [] | [
"nvidia-nat-core==v1.5.0a20260221",
"langchain-community~=0.3",
"pytest~=8.3",
"pytest-asyncio==0.24.*",
"pytest-cov~=6.1",
"pytest_httpserver==1.1.*",
"pytest-timeout~=2.4",
"asgi-lifespan~=2.1"
] | [] | [] | [] | [
"documentation, https://docs.nvidia.com/nemo/agent-toolkit/latest/",
"source, https://github.com/NVIDIA/NeMo-Agent-Toolkit"
] | twine/5.0.0 CPython/3.10.19 | 2026-02-21T10:04:12.719989 | nvidia_nat_test-1.5.0a20260221-py3-none-any.whl | 74,756 | 78/37/b22282736a546e4eb576be62f5f02fcddaf4e8bcc4c370601a0d1ebebd9c/nvidia_nat_test-1.5.0a20260221-py3-none-any.whl | py3 | bdist_wheel | null | false | 855abb94a6eab36bbc541f3fe845ec41 | 84156c5744306ca50e9483ccf08bb6991c76381c2dee3f889dcd07f5ac42b537 | 7837b22282736a546e4eb576be62f5f02fcddaf4e8bcc4c370601a0d1ebebd9c | null | [] | 71 |
2.4 | nvidia-nat | 1.5.0a20260221 | NVIDIA NeMo Agent Toolkit | <!--
SPDX-FileCopyrightText: Copyright (c) 2024-2026, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
SPDX-License-Identifier: Apache-2.0
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
-->

# NVIDIA NeMo Agent Toolkit
<!-- vale off (due to hyperlinks) -->
[](https://opensource.org/licenses/Apache-2.0)
[](https://github.com/NVIDIA/NeMo-Agent-Toolkit/releases)
[](https://pypi.org/project/nvidia-nat/)
[](https://github.com/NVIDIA/NeMo-Agent-Toolkit/issues)
[](https://github.com/NVIDIA/NeMo-Agent-Toolkit/pulls)
[](https://github.com/NVIDIA/NeMo-Agent-Toolkit)
[](https://github.com/NVIDIA/NeMo-Agent-Toolkit/network/members)
[](https://deepwiki.com/NVIDIA/NeMo-Agent-Toolkit)
[](https://colab.research.google.com/github/NVIDIA/NeMo-Agent-Toolkit/)
<!-- vale on -->
<div align="center">
*NVIDIA NeMo Agent Toolkit adds intelligence to AI agents across any framework—enhancing speed, accuracy, and decision-making through enterprise-grade instrumentation, observability, and continuous learning.*
</div>
## 🔥 New Features
- [**LangGraph Agent Automatic Wrapper:**](./examples/frameworks/auto_wrapper/langchain_deep_research/README.md) Easily onboard existing LangGraph agents to NeMo Agent Toolkit. Use the automatic wrapper to access the toolkit's advanced features with minimal modification to your agents.
- [**Automatic Reinforcement Learning (RL):**](./docs/source/improve-workflows/finetuning/index.md) Improve your agent quality by fine-tuning open LLMs to better understand your agent's workflows, tools, and prompts. Perform GRPO with [OpenPipe ART](./docs/source/improve-workflows/finetuning/rl_with_openpipe.md) or DPO with [NeMo Customizer](./docs/source/improve-workflows/finetuning/dpo_with_nemo_customizer.md) using NeMo Agent Toolkit built-in evaluation system as a verifier.
- [**Initial NVIDIA Dynamo Integration:**](./examples/dynamo_integration/README.md) Accelerate end-to-end deployment of agentic workflows with initial Dynamo support. Utilize the new agent-aware router to improve worker latency by predicting future agent behavior.
- [**A2A Support:**](./docs/source/components/integrations/a2a.md) Build teams of distributed agents using the A2A protocol.
- [**Safety and Security Engine:**](./examples/safety_and_security/retail_agent/README.md) Strengthen safety and security workflows by simulating scenario-based attacks, profiling risk, running guardrail-ready evaluations, and applying red-teaming defenses. Validate defenses, monitor behavior, and harden agents across any framework.
- [**Amazon Bedrock AgentCore and Strands Agents Support:**](./docs/source/components/integrations/frameworks.md#strands) Build agents using Strands Agents framework and deploy them securely on Amazon Bedrock AgentCore runtime.
- [**Microsoft AutoGen Support:**](./docs/source/components/integrations/frameworks.md#autogen) Build agents using the Microsoft AutoGen framework.
- [**Per-User Functions:**](./docs/source/extend/custom-components/custom-functions/per-user-functions.md) Use per-user functions for deferred instantiation, enabling per-user stateful functions, per-user resources, and other features.
## ✨ Key Features
- 🛠️ **Building Agents**: Accelerate your agent development with tools that make it easier to get your agent into production.
- 🧩 [**Framework Agnostic:**](./docs/source/components/integrations/frameworks.md) Work side-by-side with agentic frameworks to add the instrumentation necessary for observing, profiling, and optimizing your agents. Use the toolkit with popular frameworks such as [LangChain](https://www.langchain.com/), [LlamaIndex](https://www.llamaindex.ai/), [CrewAI](https://www.crewai.com/), [Microsoft Semantic Kernel](https://learn.microsoft.com/en-us/semantic-kernel/), and [Google ADK](https://google.github.io/adk-docs/), as well as custom enterprise agentic frameworks and simple Python agents.
- 🔁 [**Reusability:**](./docs/source/components/sharing-components.md) Build components once and use them multiple times to maximize the value from development effort.
- ⚡ [**Customization:**](docs/source/get-started/tutorials/customize-a-workflow.md) Start with a pre-built agent, tool, or workflow, and customize it to your needs.
- 💬 [**Built-In User Interface:**](./docs/source/run-workflows/launching-ui.md) Use the NeMo Agent Toolkit UI chat interface to interact with your agents, visualize output, and debug workflows.
- 📈 **Agent Insights:** Utilize NeMo Agent Toolkit instrumentation to better understand how your agents function at runtime.
- 📊 [**Profiling:**](./docs/source/improve-workflows/profiler.md) Profile entire workflows from the agent level all the way down to individual tokens to identify bottlenecks, analyze token efficiency, and guide developers in optimizing their agents.
- 🔎 [**Observability:**](./docs/source/run-workflows/observe/observe.md) Track performance, trace execution flows, and gain insights into your agent behaviors in production.
- 🚀 **Agent Optimization:** Improve your agent's quality, accuracy, and performance with a suite of tools for all phases of the agent lifecycle.
- 🧪 [**Evaluation System:**](./docs/source/improve-workflows/evaluate.md) Validate and maintain accuracy of agentic workflows with a suite of tools for offline evaluation.
- 🎯 [**Hyper-Parameter and Prompt Optimizer:**](./docs/source/improve-workflows/optimizer.md) Automatically identify the best configuration and prompts to ensure you are getting the most out of your agent.
- 🧠 [**Fine-tuning with Reinforcement Learning:**](./docs/source/improve-workflows/finetuning/index.md) Fine-tune LLMs specifically for your agent and train intrinsic information about your workflow directly into the model.
- ⚡ [**NVIDIA Dynamo Integration:**](./examples/dynamo_integration/README.md) Use Dynamo and NeMo Agent Toolkit together to improve agent performance at scale.
- 🔌 **Protocol Support:** Integrate with common protocols used to build agents.
- 🔗 [**Model Context Protocol (MCP):**](./docs/source/build-workflows/mcp-client.md) Integrate [MCP tools](./docs/source/build-workflows/mcp-client.md) into your agents or serve your tools and agents as an [MCP server](./docs/source/run-workflows/mcp-server.md) for others to consume.
- 🤝 [**Agent-to-Agent (A2A) Protocol:**](./docs/source/components/integrations/a2a.md) Build teams of distributed agents with full support for authentication.
With NeMo Agent Toolkit, you can move quickly, experiment freely, and ensure reliability across all your agent-driven projects.
## 🚀 Installation
Before you begin using NeMo Agent Toolkit, ensure that you have Python 3.11, 3.12, or 3.13 installed on your system.
> [!NOTE]
> To run the examples, clone the repository and install from source; the examples depend on files that are only present in the source tree. Please refer to the [Examples](./examples/README.md) documentation for more information.
To install the latest stable version of NeMo Agent Toolkit from PyPI, run the following command:
```bash
pip install nvidia-nat
```
NeMo Agent Toolkit has many optional dependencies that can be installed with the core package. Optional dependencies are grouped by framework. For example, to install the LangChain/LangGraph plugin, run the following:
```bash
pip install "nvidia-nat[langchain]"
```
Detailed installation instructions, including the full list of optional dependencies and their conflicts, can be found in the [Installation Guide](./docs/source/get-started/installation.md).
## 🌟 Hello World Example
You can run this simple workflow and many other examples in Google Colab with no setup. Click here to open the introduction notebook: [](https://colab.research.google.com/github/NVIDIA/NeMo-Agent-Toolkit/).
1. Ensure you have set the `NVIDIA_API_KEY` environment variable to allow the example to use NVIDIA NIMs. An API key can be obtained by visiting [`build.nvidia.com`](https://build.nvidia.com/) and creating an account.
```bash
export NVIDIA_API_KEY=<your_api_key>
```
2. Create the NeMo Agent Toolkit workflow configuration file. This file will define the agents, tools, and workflows that will be used in the example. Save the following as `workflow.yml`:
```yaml
functions:
# Add a tool to search wikipedia
wikipedia_search:
_type: wiki_search
max_results: 2
llms:
# Tell NeMo Agent Toolkit which LLM to use for the agent
nim_llm:
_type: nim
model_name: nvidia/nemotron-3-nano-30b-a3b
temperature: 0.0
chat_template_kwargs:
enable_thinking: false
workflow:
# Use an agent that 'reasons' and 'acts'
_type: react_agent
# Give it access to our wikipedia search tool
tool_names: [wikipedia_search]
# Tell it which LLM to use
llm_name: nim_llm
# Make it verbose
verbose: true
# Retry up to 3 times
parse_agent_response_max_retries: 3
```
3. Run the Hello World example using the `nat` CLI and the `workflow.yml` file.
```bash
nat run --config_file workflow.yml --input "List five subspecies of Aardvarks"
```
This will run the workflow and output the results to the console.
```console
Workflow Result:
['Here are five subspecies of Aardvarks:\n\n1. Orycteropus afer afer (Southern aardvark)\n2. O. a. adametzi Grote, 1921 (Western aardvark)\n3. O. a. aethiopicus Sundevall, 1843\n4. O. a. angolensis Zukowsky & Haltenorth, 1957\n5. O. a. erikssoni Lönnberg, 1906']
```
## 📚 Additional Resources
* 📖 [Documentation](https://docs.nvidia.com/nemo/agent-toolkit/latest): Explore the full documentation for NeMo Agent Toolkit.
* 🧭 [Get Started Guide](./docs/source/get-started/installation.md): Set up your environment and start building with NeMo Agent Toolkit.
* 🤝 [Contributing](./docs/source/resources/contributing/index.md): Learn how to contribute to NeMo Agent Toolkit and set up your development environment.
* 🧪 [Examples](./examples/README.md): Explore examples of NeMo Agent Toolkit workflows located in the [`examples`](./examples) directory of the source repository.
* 🛠️ [Create and Customize NeMo Agent Toolkit Workflows](docs/source/get-started/tutorials/customize-a-workflow.md): Learn how to create and customize NeMo Agent Toolkit workflows.
* 🎯 [Evaluate with NeMo Agent Toolkit](./docs/source/improve-workflows/evaluate.md): Learn how to evaluate your NeMo Agent Toolkit workflows.
* 🆘 [Troubleshooting](./docs/source/resources/troubleshooting.md): Get help with common issues.
## 🛣️ Roadmap
- [x] Automatic Reinforcement Learning (RL) to fine-tune LLMs for a specific agent.
- [x] Integration with [NVIDIA Dynamo](https://github.com/ai-dynamo/dynamo) to reduce LLM latency at scale.
- [ ] Improve agent throughput with KV-Cache optimization.
- [ ] Integration with [NeMo Guardrails](https://github.com/NVIDIA/NeMo-Guardrails) to improve agent safety and security.
- [ ] Improved memory interface to support self-improving agents.
## 💬 Feedback
We would love to hear from you! Please file an issue on [GitHub](https://github.com/NVIDIA/NeMo-Agent-Toolkit/issues) if you have any feedback or feature requests.
## 🤝 Acknowledgements
We would like to thank the following groups for their contribution to the toolkit:
- [Synopsys](https://www.synopsys.com/)
  - Google ADK framework support.
  - Microsoft AutoGen framework support.
- [W&B Weave Team](https://wandb.ai/site/weave/)
  - Contributions to the evaluation and telemetry system.
In addition, we would like to thank the following open source projects that made NeMo Agent Toolkit possible:
- [Agent2Agent (A2A) Protocol](https://github.com/a2aproject/A2A)
- [CrewAI](https://github.com/crewAIInc/crewAI)
- [Dynamo](https://github.com/ai-dynamo/dynamo)
- [FastAPI](https://github.com/tiangolo/fastapi)
- [Google Agent Development Kit (ADK)](https://github.com/google/adk-python)
- [LangChain](https://github.com/langchain-ai/langchain)
- [Llama-Index](https://github.com/run-llama/llama_index)
- [Mem0ai](https://github.com/mem0ai/mem0)
- [Microsoft AutoGen](https://github.com/microsoft/autogen)
- [MinIO](https://github.com/minio/minio)
- [Model Context Protocol (MCP)](https://github.com/modelcontextprotocol/modelcontextprotocol)
- [OpenTelemetry](https://github.com/open-telemetry/opentelemetry-python)
- [Phoenix](https://github.com/arize-ai/phoenix)
- [Ragas](https://github.com/explodinggradients/ragas)
- [Redis](https://github.com/redis/redis-py)
- [Semantic Kernel](https://github.com/microsoft/semantic-kernel)
- [Strands](https://github.com/strands-agents/sdk-python)
- [uv](https://github.com/astral-sh/uv)
- [Weave](https://github.com/wandb/weave)
| text/markdown | NVIDIA Corporation | null | NVIDIA Corporation | null | Apache-2.0 | ai, rag, agents | [
"Programming Language :: Python",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13"
] | [] | null | null | <3.14,>=3.11 | [] | [] | [] | [
"nvidia-nat-core==v1.5.0a20260221",
"nvidia-nat-a2a==v1.5.0a20260221; extra == \"a2a\"",
"nvidia-nat-adk==v1.5.0a20260221; extra == \"adk\"",
"nvidia-nat-agno==v1.5.0a20260221; extra == \"agno\"",
"nvidia-nat-autogen==v1.5.0a20260221; extra == \"autogen\"",
"nvidia-nat-core==v1.5.0a20260221; extra == \"core\"",
"nvidia-nat-crewai==v1.5.0a20260221; extra == \"crewai\"",
"nvidia-nat-eval==v1.5.0a20260221; extra == \"eval\"",
"nvidia-nat-data-flywheel==v1.5.0a20260221; extra == \"data-flywheel\"",
"nvidia-nat-fastmcp==v1.5.0a20260221; extra == \"fastmcp\"",
"nvidia-nat-langchain==v1.5.0a20260221; extra == \"langchain\"",
"nvidia-nat-llama-index==v1.5.0a20260221; extra == \"llama-index\"",
"nvidia-nat-mcp==v1.5.0a20260221; extra == \"mcp\"",
"nvidia-nat-mem0ai==v1.5.0a20260221; extra == \"mem0ai\"",
"nvidia-nat-nemo-customizer==v1.5.0a20260221; extra == \"nemo-customizer\"",
"nvidia-nat-openpipe-art==v1.5.0a20260221; extra == \"openpipe-art\"",
"nvidia-nat-opentelemetry==v1.5.0a20260221; extra == \"opentelemetry\"",
"nvidia-nat-phoenix==v1.5.0a20260221; extra == \"phoenix\"",
"nvidia-nat-rag==v1.5.0a20260221; extra == \"rag\"",
"nvidia-nat-ragaai==v1.5.0a20260221; extra == \"ragaai\"",
"nvidia-nat-mysql==v1.5.0a20260221; extra == \"mysql\"",
"nvidia-nat-redis==v1.5.0a20260221; extra == \"redis\"",
"nvidia-nat-s3==v1.5.0a20260221; extra == \"s3\"",
"nvidia-nat-semantic-kernel==v1.5.0a20260221; extra == \"semantic-kernel\"",
"nvidia-nat-strands==v1.5.0a20260221; extra == \"strands\"",
"nvidia-nat-test==v1.5.0a20260221; extra == \"test\"",
"nvidia-nat-vanna==v1.5.0a20260221; extra == \"vanna\"",
"nvidia-nat-weave==v1.5.0a20260221; extra == \"weave\"",
"nvidia-nat-zep-cloud==v1.5.0a20260221; extra == \"zep-cloud\"",
"nvidia-nat-core[async_endpoints]==v1.5.0a20260221; extra == \"async-endpoints\"",
"nvidia-nat-core[gunicorn]==v1.5.0a20260221; extra == \"gunicorn\"",
"nvidia-nat-core[pii-defense]==v1.5.0a20260221; extra == \"pii-defense\"",
"nvidia-nat-eval[profiling]==v1.5.0a20260221; extra == \"profiling\"",
"nvidia-nat-a2a==v1.5.0a20260221; extra == \"most\"",
"nvidia-nat-adk==v1.5.0a20260221; extra == \"most\"",
"nvidia-nat-agno==v1.5.0a20260221; extra == \"most\"",
"nvidia-nat-autogen==v1.5.0a20260221; extra == \"most\"",
"nvidia-nat-core[async_endpoints,gunicorn,pii-defense]==v1.5.0a20260221; extra == \"most\"",
"nvidia-nat-crewai==v1.5.0a20260221; extra == \"most\"",
"nvidia-nat-data-flywheel==v1.5.0a20260221; extra == \"most\"",
"nvidia-nat-eval[profiling]==v1.5.0a20260221; extra == \"most\"",
"nvidia-nat-fastmcp==v1.5.0a20260221; extra == \"most\"",
"nvidia-nat-langchain==v1.5.0a20260221; extra == \"most\"",
"nvidia-nat-llama-index==v1.5.0a20260221; extra == \"most\"",
"nvidia-nat-mcp==v1.5.0a20260221; extra == \"most\"",
"nvidia-nat-mem0ai==v1.5.0a20260221; extra == \"most\"",
"nvidia-nat-mysql==v1.5.0a20260221; extra == \"most\"",
"nvidia-nat-nemo-customizer==v1.5.0a20260221; extra == \"most\"",
"nvidia-nat-opentelemetry==v1.5.0a20260221; extra == \"most\"",
"nvidia-nat-phoenix==v1.5.0a20260221; extra == \"most\"",
"nvidia-nat-redis==v1.5.0a20260221; extra == \"most\"",
"nvidia-nat-s3==v1.5.0a20260221; extra == \"most\"",
"nvidia-nat-semantic-kernel==v1.5.0a20260221; extra == \"most\"",
"nvidia-nat-strands==v1.5.0a20260221; extra == \"most\"",
"nvidia-nat-test==v1.5.0a20260221; extra == \"most\"",
"nvidia-nat-vanna==v1.5.0a20260221; extra == \"most\"",
"nvidia-nat-weave==v1.5.0a20260221; extra == \"most\"",
"nvidia-nat-zep-cloud==v1.5.0a20260221; extra == \"most\"",
"nat_adk_demo; extra == \"examples\"",
"nat_agno_personal_finance; extra == \"examples\"",
"nat_agents_examples; extra == \"examples\"",
"nat_alert_triage_agent; extra == \"examples\"",
"nat_autogen_demo; extra == \"examples\"",
"nat_automated_description_generation; extra == \"examples\"",
"nat_currency_agent_a2a; extra == \"examples\"",
"nat_dpo_tic_tac_toe; extra == \"examples\"",
"nat_documentation_guides; extra == \"examples\"",
"nat_email_phishing_analyzer; extra == \"examples\"",
"nat_haystack_deep_research_agent; extra == \"examples\"",
"nat_hybrid_control_flow; extra == \"examples\"",
"nat_kaggle_mcp; extra == \"examples\"",
"nat_math_assistant_a2a; extra == \"examples\"",
"nat_math_assistant_a2a_protected; extra == \"examples\"",
"nat_multi_frameworks; extra == \"examples\"",
"nat_notebooks; extra == \"examples\"",
"nat_per_user_workflow; extra == \"examples\"",
"nat_plot_charts; extra == \"examples\"",
"nat_por_to_jiratickets; extra == \"examples\"",
"nat_prompt_from_file; extra == \"examples\"",
"nat_profiler_agent; extra == \"examples\"",
"nat_react_benchmark_agent; extra == \"examples\"",
"nat_redis_example; extra == \"examples\"",
"nat_retail_agent; extra == \"examples\"",
"nat_rl_with_openpipe_art; extra == \"examples\"",
"nat_router_agent; extra == \"examples\"",
"nat_semantic_kernel_demo; extra == \"examples\"",
"nat_sequential_executor; extra == \"examples\"",
"nat_service_account_auth_mcp; extra == \"examples\"",
"nat_simple_auth; extra == \"examples\"",
"nat_simple_auth_mcp; extra == \"examples\"",
"nat_simple_calculator; extra == \"examples\"",
"nat_simple_calculator_custom_routes; extra == \"examples\"",
"nat_simple_calculator_eval; extra == \"examples\"",
"nat_simple_calculator_fastmcp; extra == \"examples\"",
"nat_simple_calculator_fastmcp_protected; extra == \"examples\"",
"nat_simple_calculator_hitl; extra == \"examples\"",
"nat_simple_calculator_mcp; extra == \"examples\"",
"nat_simple_calculator_mcp_protected; extra == \"examples\"",
"nat_simple_calculator_observability; extra == \"examples\"",
"nat_simple_rag; extra == \"examples\"",
"nat_simple_web_query; extra == \"examples\"",
"nat_simple_web_query_eval; extra == \"examples\"",
"nat_strands_demo; extra == \"examples\"",
"nat_swe_bench; extra == \"examples\"",
"nat_user_report; extra == \"examples\"",
"text_file_ingest; extra == \"examples\""
] | [] | [] | [] | [
"documentation, https://docs.nvidia.com/nemo/agent-toolkit/latest/",
"source, https://github.com/NVIDIA/NeMo-Agent-Toolkit"
] | twine/5.0.0 CPython/3.10.19 | 2026-02-21T10:03:55.352216 | nvidia_nat-1.5.0a20260221-py3-none-any.whl | 52,701 | ae/02/5f5410cd53e576364f4f1b51bc8f41c443e7bf90079a5ef401becbafcc23/nvidia_nat-1.5.0a20260221-py3-none-any.whl | py3 | bdist_wheel | null | false | 4a02800f76e02ec4f6a4257d92a0acaf | 9165c2e47a0d31345581172ad7a39b9907b7b1adb08b66a755c7f32892e472f8 | ae025f5410cd53e576364f4f1b51bc8f41c443e7bf90079a5ef401becbafcc23 | null | [] | 69 |
2.4 | nvidia-nat-a2a | 1.5.0a20260221 | Subpackage for A2A Protocol integration in NeMo Agent Toolkit | <!--
SPDX-FileCopyrightText: Copyright (c) 2025-2026, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
SPDX-License-Identifier: Apache-2.0
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
-->

# NVIDIA NeMo Agent Toolkit A2A Subpackage
Subpackage for A2A Protocol integration in NeMo Agent Toolkit.
This package provides A2A (Agent-to-Agent) Protocol functionality, allowing NeMo Agent Toolkit workflows to connect to remote A2A agents and invoke their skills as functions. This package includes both the client and server components of the A2A protocol.
## Features
### Client
- Connect to remote A2A agents via HTTP with JSON-RPC transport
- Discover agent capabilities through Agent Cards
- Submit tasks to remote agents with async execution
### Server
- Serve A2A agents via HTTP with JSON-RPC transport
- Support for A2A agent executor pattern
For more information about the NVIDIA NeMo Agent Toolkit, please visit the [NeMo Agent Toolkit GitHub Repo](https://github.com/NVIDIA/NeMo-Agent-Toolkit).
| text/markdown | NVIDIA Corporation | null | NVIDIA Corporation | null | Apache-2.0 | ai, rag, agents, a2a | [
"Programming Language :: Python",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13"
] | [] | null | null | <3.14,>=3.11 | [] | [] | [] | [
"nvidia-nat-core==v1.5.0a20260221",
"a2a-sdk[http-server]<1.0.0,>=0.3.20",
"nvidia-nat-test==v1.5.0a20260221; extra == \"test\""
] | [] | [] | [] | [
"documentation, https://docs.nvidia.com/nemo/agent-toolkit/latest/",
"source, https://github.com/NVIDIA/NeMo-Agent-Toolkit"
] | twine/5.0.0 CPython/3.10.19 | 2026-02-21T10:03:38.867121 | nvidia_nat_a2a-1.5.0a20260221-py3-none-any.whl | 44,114 | 35/7a/eacaaefd19e9d0bdab6434431eb32e8336e10f600eaf2d23bf41981c39f4/nvidia_nat_a2a-1.5.0a20260221-py3-none-any.whl | py3 | bdist_wheel | null | false | 49988004b43ec81bfacce24a17ea62e6 | cc8d545db0d51cc91d2fc8edc8b33a6d7da481abeaca3addf07af0fd55b18450 | 357aeacaaefd19e9d0bdab6434431eb32e8336e10f600eaf2d23bf41981c39f4 | null | [] | 74 |
2.4 | nvidia-nat-zep-cloud | 1.5.0a20260221 | Subpackage for Zep integration in NeMo Agent Toolkit | <!--
SPDX-FileCopyrightText: Copyright (c) 2025-2026, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
SPDX-License-Identifier: Apache-2.0
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
-->

# NVIDIA NeMo Agent Toolkit Subpackage
This is a subpackage for Zep memory integration in NeMo Agent Toolkit.
For more information about the NVIDIA NeMo Agent Toolkit, please visit the [NeMo Agent Toolkit GitHub Repo](https://github.com/NVIDIA/NeMo-Agent-Toolkit).
| text/markdown | NVIDIA Corporation | null | NVIDIA Corporation | null | Apache-2.0 | ai, agents, memory | [
"Programming Language :: Python",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13"
] | [] | null | null | <3.14,>=3.11 | [] | [] | [] | [
"nvidia-nat-core==v1.5.0a20260221",
"zep-cloud~=3.0",
"nvidia-nat-test==v1.5.0a20260221; extra == \"test\""
] | [] | [] | [] | [
"documentation, https://docs.nvidia.com/nemo/agent-toolkit/latest/",
"source, https://github.com/NVIDIA/NeMo-Agent-Toolkit"
] | twine/5.0.0 CPython/3.10.19 | 2026-02-21T10:03:22.066805 | nvidia_nat_zep_cloud-1.5.0a20260221-py3-none-any.whl | 54,044 | 7d/f4/079b4298fee2300344fd32f122f8f833cb42aeddcb209324e58bcdae93e8/nvidia_nat_zep_cloud-1.5.0a20260221-py3-none-any.whl | py3 | bdist_wheel | null | false | d9cd540f7e2b4c65d6e1de3f83e4ce13 | acfd2dcc5a616de27f9a907a2178414b25417e5f8ba7e454dfd0f5a977d16159 | 7df4079b4298fee2300344fd32f122f8f833cb42aeddcb209324e58bcdae93e8 | null | [] | 71 |
2.4 | nvidia-nat-agno | 1.5.0a20260221 | Subpackage for Agno integration in NeMo Agent Toolkit | <!--
SPDX-FileCopyrightText: Copyright (c) 2025-2026, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
SPDX-License-Identifier: Apache-2.0
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
-->

# NVIDIA NeMo Agent Toolkit Subpackage
<!-- Note: "Agno" is the official product name despite Vale spelling checker warnings -->
This is a subpackage for `Agno` integration in NeMo Agent Toolkit.
For more information about the NVIDIA NeMo Agent Toolkit, please visit the [NeMo Agent Toolkit GitHub Repo](https://github.com/NVIDIA/NeMo-Agent-Toolkit).
| text/markdown | NVIDIA Corporation | null | NVIDIA Corporation | null | Apache-2.0 | ai, rag, agents | [
"Programming Language :: Python",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13"
] | [] | null | null | <3.14,>=3.11 | [] | [] | [] | [
"nvidia-nat-core==v1.5.0a20260221",
"agno<2.0.0,>=1.2.3",
"google-search-results<3.0.0,>=2.4.2",
"litellm~=1.74",
"openai~=1.106",
"nvidia-nat-test==v1.5.0a20260221; extra == \"test\""
] | [] | [] | [] | [
"documentation, https://docs.nvidia.com/nemo/agent-toolkit/latest/",
"source, https://github.com/NVIDIA/NeMo-Agent-Toolkit"
] | twine/5.0.0 CPython/3.10.19 | 2026-02-21T10:03:04.989098 | nvidia_nat_agno-1.5.0a20260221-py3-none-any.whl | 62,690 | 45/1e/2bf57e50fa0ac1017f6624808eb86efb37669ef6e0726b548d686fc28f0f/nvidia_nat_agno-1.5.0a20260221-py3-none-any.whl | py3 | bdist_wheel | null | false | 4d80ec805f4993fb2b23f5e714744ec7 | 7b3a546b685fe1c36f9ead3314d5b5da6a26d3188f586398a3573183b4566dab | 451e2bf57e50fa0ac1017f6624808eb86efb37669ef6e0726b548d686fc28f0f | null | [] | 68 |
2.4 | nvidia-nat-strands | 1.5.0a20260221 | Subpackage for AWS Strands integration in NeMo Agent Toolkit | <!--
SPDX-FileCopyrightText: Copyright (c) 2025-2026, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
SPDX-License-Identifier: Apache-2.0
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
-->

# NVIDIA NeMo Agent Toolkit Subpackage
This is a subpackage for AWS Strands integration in NeMo Agent Toolkit.
For more information about the NVIDIA NeMo Agent Toolkit, please visit the [NeMo Agent Toolkit GitHub Repo](https://github.com/NVIDIA/NeMo-Agent-Toolkit).
| text/markdown | NVIDIA Corporation | null | NVIDIA Corporation | null | Apache-2.0 | ai, rag, agents | [
"Programming Language :: Python",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13"
] | [] | null | null | <3.14,>=3.11 | [] | [] | [] | [
"nvidia-nat-core==v1.5.0a20260221",
"strands-agents[openai]~=1.17",
"strands-agents-tools~=0.2",
"nvidia-nat-test==v1.5.0a20260221; extra == \"test\""
] | [] | [] | [] | [
"documentation, https://docs.nvidia.com/nemo/agent-toolkit/latest/",
"source, https://github.com/NVIDIA/NeMo-Agent-Toolkit"
] | twine/5.0.0 CPython/3.10.19 | 2026-02-21T10:02:47.400672 | nvidia_nat_strands-1.5.0a20260221-py3-none-any.whl | 17,198 | dd/53/1bf1969243e014e3fc95582fe95faf54ca3077f0a0bf4fc284bb799c376a/nvidia_nat_strands-1.5.0a20260221-py3-none-any.whl | py3 | bdist_wheel | null | false | fa123901a7f370e253c6bab577f48e58 | 7bf4de2761d88748ce90d7508d9e6ddf4ea54977eddcf934d66215b3baae5a52 | dd531bf1969243e014e3fc95582fe95faf54ca3077f0a0bf4fc284bb799c376a | null | [] | 71 |
2.4 | mb-netwatch | 0.0.1 | macOS internet connection monitor | # mb-netwatch
macOS internet connection monitor. Tracks latency, VPN status, and public IP at a glance via a menu bar icon.
> **Status:** Under active development.
## What it monitors
Three types of checks run continuously in the background:
- **Latency** — HTTP probe every 2 seconds
- **VPN status** — tunnel detection every 10 seconds
- **Public IP** — IP address and country every 60 seconds
### Latency
Latency is measured via HTTP/HTTPS requests, not ICMP ping — many VPN tunnels don't route ICMP traffic, making ping unreliable. HTTP requests work over any TCP-capable connection regardless of VPN configuration.
Probe targets are **captive portal detection endpoints** — lightweight URLs that OS and browser vendors operate specifically for connectivity checking:
- `https://connectivitycheck.gstatic.com/generate_204` — Google, HTTPS, 204 No Content
- `https://www.apple.com/library/test/success.html` — Apple, HTTPS, tiny HTML
- `http://detectportal.firefox.com/success.txt` — Mozilla, HTTP, "success"
- `http://www.msftconnecttest.com/connecttest.txt` — Microsoft, HTTP, "Microsoft Connect Test"
**Why these endpoints:**
- **Purpose-built** — designed for automated connectivity checks, not general web pages
- **Minimal payload** — empty body or a few bytes, negligible bandwidth
- **Global CDN** — low latency from virtually any location
- **High uptime** — operated by Google, Apple, Mozilla, Microsoft
- **No rate limiting** — billions of devices hit them daily; our requests are invisible
- **Never blocked by ISPs** — blocking would break captive portal detection on every phone, laptop, and tablet
- **Multiple providers** — if one company's infrastructure has issues, the others still work
**How probing works:**
1. Requests are sent to all endpoints simultaneously
2. The first successful response wins — all remaining requests are cancelled immediately
3. If no response arrives within 5 seconds — status is "Down"
4. Connections are reused between checks (keep-alive) — lower baseline latency makes network degradation more visible, and eliminates measurement noise from TLS handshake variance. If sustained failures are detected, the HTTP session is automatically recreated to recover from stale connections
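The first-success-wins race described above can be sketched with `asyncio`. This is a minimal illustration, not the project's actual implementation: the probe coroutines here are stand-ins for real HTTP requests, and all names are hypothetical.

```python
import asyncio
import time

async def race_first_success(probes, timeout=5.0):
    """Run all probe coroutines concurrently; return the first successful
    result and cancel the rest. Returns None (status "Down") if nothing
    succeeds within `timeout` seconds."""
    tasks = [asyncio.ensure_future(p) for p in probes]
    try:
        start = time.monotonic()
        pending = set(tasks)
        while pending:
            remaining = timeout - (time.monotonic() - start)
            if remaining <= 0:
                return None  # nothing answered in time
            done, pending = await asyncio.wait(
                pending, timeout=remaining,
                return_when=asyncio.FIRST_COMPLETED)
            for t in done:
                if t.exception() is None:
                    return t.result()  # first winner; losers cancelled below
        return None  # every probe failed
    finally:
        for t in tasks:
            t.cancel()

# Stand-ins for real HTTP probes (name returned on success):
async def fake_probe(name, delay, ok=True):
    await asyncio.sleep(delay)
    if not ok:
        raise RuntimeError("probe failed")
    return name

winner = asyncio.run(race_first_success([
    fake_probe("gstatic", 0.05),
    fake_probe("apple", 0.01, ok=False),  # fails first; race continues
    fake_probe("firefox", 0.02),
]))
print(winner)  # → firefox
```

A failed probe does not end the race; only the first *successful* response wins, which is why a single flaky endpoint never produces a false "Down".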
**Polling:**
- A probe runs every 2 seconds
- Each measurement is stored as a raw value in the database
### VPN status
Detects VPN state every 10 seconds and stores only information that is directly useful for end users:
- **Active/inactive** — whether traffic is currently routed through a tunnel interface
- **Tunnel mode** — full tunnel (all traffic via VPN) vs split tunnel (only part of traffic via VPN)
- **Provider (best effort)** — VPN app name when it can be identified with sufficient confidence; otherwise `NULL`
#### How VPN detection works
The detector uses a simple priority-based pipeline:
1. **Detect tunnel presence**
- Find active `tun*`/`utun*` interface with IPv4 address.
- If no tunnel interface is found, VPN is considered inactive.
2. **Detect tunnel mode**
- Parse `netstat -rn -f inet`.
- Full tunnel if default route is via tunnel, or if OpenVPN-style `0/1` + `128.0/1` routes are via tunnel.
- Otherwise split tunnel.
- If routing cannot be parsed, mode is `unknown`.
3. **Detect provider**
- Parse `scutil --nc list`.
- If a service with `(Connected)` status is found, use its name as provider.
- Otherwise `NULL`.
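The first two pipeline stages reduce to two pure functions, sketched below. This is an illustrative reconstruction from the description above, not the package's code; the input shapes (an interface-to-addresses mapping, raw `netstat -rn -f inet` text) and all function names are assumptions.

```python
import re

def find_tunnel_interface(ipv4s_by_iface):
    """Return the name of an active tun*/utun* interface that has an IPv4
    address, or None (VPN inactive). `ipv4s_by_iface` maps interface name
    to a list of IPv4 address strings, e.g. built from psutil.net_if_addrs()."""
    for name, ipv4s in ipv4s_by_iface.items():
        if re.fullmatch(r"u?tun\d+", name) and ipv4s:
            return name
    return None

def tunnel_mode(netstat_inet_output, tunnel_iface):
    """Classify full vs split tunnel from `netstat -rn -f inet` output.
    Full tunnel if the default route goes via the tunnel interface, or if
    the OpenVPN-style 0/1 + 128.0/1 pair does; otherwise split."""
    routes = {}
    for line in netstat_inet_output.splitlines():
        parts = line.split()
        if len(parts) >= 4:
            routes[parts[0]] = parts[3]  # Destination -> Netif column
    if routes.get("default") == tunnel_iface:
        return "full"
    if routes.get("0/1") == tunnel_iface and routes.get("128.0/1") == tunnel_iface:
        return "full"
    return "split" if routes else "unknown"

print(find_tunnel_interface({"en0": ["192.168.1.2"], "utun4": ["10.8.0.2"]}))  # → utun4
print(tunnel_mode("default 10.8.0.1 UGScg utun4\n", "utun4"))  # → full
```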
### Public IP
Detects the public IP address and its country every 60 seconds. Useful for verifying which exit point your traffic uses — especially after toggling a VPN.
**IP detection services** (plain-text responses):
- `https://api.ipify.org` — ipify
- `https://icanhazip.com` — icanhazip
- `https://checkip.amazonaws.com` — Amazon
- `https://ifconfig.me/ip` — ifconfig.me
- `https://ipinfo.io/ip` — ipinfo
- `https://v4.ident.me` — ident.me
**Country resolution services** (2-letter ISO code):
- `https://ipinfo.io/{ip}/country` — ipinfo
- `https://ipapi.co/{ip}/country/` — ipapi
**How it works:**
1. Two random services are picked from the IP list and raced — first valid IPv4 response wins
2. If the IP is the same as the previous check, the country code is reused (saves API quota)
3. If the IP changed, two country services are raced for the new IP
4. Responses are validated: IP must be a valid IPv4 address, country must be exactly 2 uppercase ASCII letters
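The validation and service-selection rules above are small enough to sketch directly. These helpers are illustrative only (the function names are not part of the package's API):

```python
import ipaddress
import random
import re

def valid_ipv4(text):
    """Accept only a bare IPv4 address, as the plain-text services return."""
    try:
        ipaddress.IPv4Address(text.strip())
        return True
    except ValueError:
        return False

def valid_country(text):
    """Country code must be exactly two uppercase ASCII letters."""
    return re.fullmatch(r"[A-Z]{2}", text.strip()) is not None

def pick_two(services):
    """Choose two random services to race against each other."""
    return random.sample(services, 2)

print(valid_ipv4("203.0.113.9\n"))  # → True
print(valid_country("US"))          # → True
```

Strict validation matters here because these are third-party endpoints: a service that starts returning an error page instead of plain text is silently ignored rather than recorded as a bogus IP.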
## CLI commands
- `mb-netwatch probe` — one-shot connectivity probe, print result
- `mb-netwatch probed` — run continuous background measurements
- `mb-netwatch tray` — run menu bar UI process
- `mb-netwatch watch` — live terminal view of measurements
- `mb-netwatch start [probed|tray]` — start processes in the background (no argument = both)
- `mb-netwatch stop [probed|tray]` — stop background processes (no argument = both)
## Architecture
Two long-running processes in normal operation:
- **probed** (`mb-netwatch probed`) — source of truth; measures latency every 2 seconds, VPN status every 10 seconds, and public IP every 60 seconds; writes results to SQLite.
- **tray** (`mb-netwatch tray`) — UI only; reads latest samples from SQLite, updates menu bar icon and dropdown.
The tray must not perform network probing directly. This separation keeps UI responsive and simplifies debugging.
## Storage
SQLite database at `~/.local/mb-netwatch/netwatch.db`.
Journal mode: WAL (concurrent reads while probed writes).
### Schema
```sql
CREATE TABLE latency_checks (
ts REAL NOT NULL, -- UTC Unix timestamp (seconds since epoch)
latency_ms REAL, -- winning request latency; NULL when all endpoints failed
winner_endpoint TEXT -- URL that responded first; NULL when down
);
CREATE INDEX idx_latency_checks_ts ON latency_checks(ts);
CREATE TABLE vpn_checks (
ts REAL NOT NULL, -- UTC Unix timestamp (seconds since epoch)
is_active INTEGER NOT NULL, -- 1 = VPN active, 0 = inactive
tunnel_mode TEXT NOT NULL, -- "full", "split", or "unknown"
provider TEXT -- VPN app name, NULL when not identified reliably
);
CREATE INDEX idx_vpn_checks_ts ON vpn_checks(ts);
CREATE TABLE ip_checks (
ts REAL NOT NULL, -- UTC Unix timestamp (seconds since epoch)
ip TEXT, -- public IPv4 address; NULL when all lookups failed
country_code TEXT -- 2-letter ISO country code; NULL when lookup failed
);
CREATE INDEX idx_ip_checks_ts ON ip_checks(ts);
```
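With Python's stdlib `sqlite3`, opening the database in WAL mode and recording a sample looks roughly like this (a sketch: an in-memory database stands in for the real file path, and error handling is omitted):

```python
import sqlite3
import time

# Real path: ~/.local/mb-netwatch/netwatch.db
conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA journal_mode=WAL")  # concurrent reads while probed writes
conn.execute("""CREATE TABLE IF NOT EXISTS latency_checks (
    ts REAL NOT NULL, latency_ms REAL, winner_endpoint TEXT)""")

# probed inserts one raw row per probe...
conn.execute(
    "INSERT INTO latency_checks VALUES (?, ?, ?)",
    (time.time(), 42.7, "https://connectivitycheck.gstatic.com/generate_204"),
)
conn.commit()

# ...and the tray reads only the latest sample.
row = conn.execute(
    "SELECT latency_ms, winner_endpoint FROM latency_checks "
    "ORDER BY ts DESC LIMIT 1"
).fetchone()
print(row[0])  # → 42.7
```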
Retention: raw rows kept for 30 days, older rows purged periodically by probed.
## Configuration
Optional TOML config at `~/.local/mb-netwatch/config.toml`. The file is not created automatically — create it only if you want to override defaults. All keys are optional — only specify what you want to change.
```toml
[probed]
latency_interval = 2.0 # seconds between latency probes (default: 2.0)
vpn_interval = 10.0 # seconds between VPN status checks (default: 10.0)
ip_interval = 60.0 # seconds between public IP lookups (default: 60.0)
purge_interval = 3600.0 # seconds between old-data purge runs (default: 3600.0)
latency_timeout = 5.0 # HTTP timeout for latency probes (default: 5.0)
ip_timeout = 5.0 # HTTP timeout for IP/country lookups (default: 5.0)
retention_days = 30 # days to keep raw rows before purging (default: 30)
[tray]
poll_interval = 2.0 # seconds between tray DB polls (default: 2.0)
ok_threshold_ms = 300 # latency below this → OK (default: 300)
slow_threshold_ms = 800 # latency below this → SLOW, at or above → BAD (default: 800)
stale_threshold = 10.0 # seconds before data is considered stale (default: 10.0)
[watch]
poll_interval = 0.5 # seconds between terminal view DB polls (default: 0.5)
```
The menu bar shows a fixed-width 3-character title: 2-letter country code + status symbol (`●` OK / `◐` SLOW / `○` BAD / `✕` DOWN), e.g. `US●`. Click the menu bar icon to see the exact latency in the dropdown. If probed stops writing data, the symbol changes to `–` (en dash) after `stale_threshold` seconds (default 10). While waiting for the first data, a middle dot `·` is displayed.
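The title logic above can be expressed as a small pure function. This is a simplified sketch of the rules as described (the function name and the handling of a missing country code are assumptions, not the package's API):

```python
def tray_title(country, latency_ms, ok_ms=300, slow_ms=800,
               stale=False, waiting=False):
    """Compose the fixed-width menu bar title: country code + status symbol."""
    if waiting:
        symbol = "·"   # no data yet
    elif stale:
        symbol = "–"   # probed stopped writing
    elif latency_ms is None:
        symbol = "✕"   # DOWN: all endpoints failed
    elif latency_ms < ok_ms:
        symbol = "●"   # OK
    elif latency_ms < slow_ms:
        symbol = "◐"   # SLOW
    else:
        symbol = "○"   # BAD (at or above slow threshold)
    return (country or "??") + symbol

print(tray_title("US", 42.0))  # → US●
```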
## Installation
```
uv tool install mb-netwatch
mb-netwatch start
```
## Tech stack
- Python 3.14
- [aiohttp](https://docs.aiohttp.org/) — HTTP probes
- [psutil](https://github.com/giampaolo/psutil) — network interface inspection for VPN detection
- [mm-pymac](https://github.com/mcbarinov/mm-pymac) — macOS menu bar app
| text/markdown | mcbarinov | null | null | null | null | latency, macos, menubar, monitor, network, vpn | [
"Operating System :: MacOS",
"Topic :: System :: Networking :: Monitoring",
"Topic :: Utilities"
] | [] | null | null | >=3.14 | [] | [] | [] | [
"aiohttp~=3.11",
"mm-pymac~=0.0.1",
"psutil~=7.0",
"pydantic~=2.12",
"typer~=0.24.0"
] | [] | [] | [] | [
"Homepage, https://github.com/mcbarinov/mb-netwatch",
"Repository, https://github.com/mcbarinov/mb-netwatch"
] | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"macOS","version":null,"id":null,"libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":null} | 2026-02-21T10:02:33.065864 | mb_netwatch-0.0.1.tar.gz | 18,589 | 85/18/466554a94c3a0cee4cd1ec09dd010fa9c8621be23ee3c29f7afe293de6ff/mb_netwatch-0.0.1.tar.gz | source | sdist | null | false | ef5fac9c05f95335f7992f40ba091f02 | a816bc2ef9b20cdac6ac2c9601e944af2a8eae5393a71cc061940db5a3508521 | 8518466554a94c3a0cee4cd1ec09dd010fa9c8621be23ee3c29f7afe293de6ff | MIT | [
"LICENSE"
] | 236 |
2.4 | nvidia-nat-eval | 1.5.0a20260221 | Subpackage for evaluation in NVIDIA NeMo Agent Toolkit | <!--
SPDX-FileCopyrightText: Copyright (c) 2026, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
SPDX-License-Identifier: Apache-2.0
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
-->

# NVIDIA NeMo Agent Toolkit Eval Subpackage
Subpackage for evaluation support in NeMo Agent Toolkit.
This package provides evaluation-specific components and CLI commands under `nat.plugins.eval`.
For more information about the NVIDIA NeMo Agent Toolkit, please visit the [NeMo Agent Toolkit GitHub Repo](https://github.com/NVIDIA/NeMo-Agent-Toolkit).
| text/markdown | NVIDIA Corporation | null | NVIDIA Corporation | null | Apache-2.0 | ai, rag, agents, evaluation | [
"Programming Language :: Python",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13"
] | [] | null | null | <3.14,>=3.11 | [] | [] | [] | [
"datasets~=4.4",
"nvidia-nat-core==v1.5.0a20260221",
"openpyxl~=3.1",
"ragas~=0.2.14",
"matplotlib~=3.9; extra == \"profiling\"",
"prefixspan~=0.5.2; extra == \"profiling\"",
"scikit-learn~=1.6; extra == \"profiling\"",
"nvidia-nat-core[async_endpoints]==v1.5.0a20260221; extra == \"test\"",
"nvidia-nat-test==v1.5.0a20260221; extra == \"test\""
] | [] | [] | [] | [
"documentation, https://docs.nvidia.com/nemo/agent-toolkit/latest/",
"source, https://github.com/NVIDIA/NeMo-Agent-Toolkit"
] | twine/5.0.0 CPython/3.10.19 | 2026-02-21T10:02:30.655924 | nvidia_nat_eval-1.5.0a20260221-py3-none-any.whl | 196,232 | 2f/d4/9eac4dc871ec3ea8a4d0964f02eed13f70cbc99ed2d3473584a6cbce6c1e/nvidia_nat_eval-1.5.0a20260221-py3-none-any.whl | py3 | bdist_wheel | null | false | a275edaa2c4404b1655383521b835c44 | 9fd07eb5f62a32b9d182239e0a2e5f03950b8b7dc2642315c0778e07a5842d8e | 2fd49eac4dc871ec3ea8a4d0964f02eed13f70cbc99ed2d3473584a6cbce6c1e | null | [] | 72 |
2.4 | mypypipkg | 0.1.1 | Short description of my package | # Hello PyPI
A simple example Python package.
| text/markdown | null | ethankonopeksi <ilicepolom371@gmail.com> | null | null | null | null | [
"Programming Language :: Python :: 3",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent"
] | [] | null | null | >=3.8 | [] | [] | [] | [] | [] | [] | [] | [] | twine/6.2.0 CPython/3.12.3 | 2026-02-21T10:02:16.891259 | mypypipkg-0.1.1.tar.gz | 2,560 | 1b/97/1f08afc2564b77f79053bd29e430818ea5ac931a75836f215459bcdcfd0e/mypypipkg-0.1.1.tar.gz | source | sdist | null | false | ff68bd4455a793923e598b087d68a298 | 8fa7ad67dd76949820fc64f5c288bb06bee98f0f2fe54557d38917cde5690824 | 1b971f08afc2564b77f79053bd29e430818ea5ac931a75836f215459bcdcfd0e | null | [] | 224 |
2.4 | nvidia-nat-vanna | 1.5.0a20260221 | Vanna-based Text-to-SQL integration for NeMo Agent Toolkit with Databricks support | <!--
SPDX-FileCopyrightText: Copyright (c) 2025-2026, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
SPDX-License-Identifier: Apache-2.0
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
-->
# NVIDIA NeMo Agent Toolkit Vanna
Vanna-based Text-to-SQL integration for NeMo Agent Toolkit.
## Overview
This package provides production-ready text-to-SQL capabilities using the Vanna framework with Databricks support.
## Features
- **AI-Powered SQL Generation**: Convert natural language to SQL using LLMs
- **Databricks Support**: Optimized for Databricks SQL warehouses
- **Vector-Based Similarity Search**: Milvus integration for few-shot learning
- **Streaming Support**: Real-time progress updates
- **Query Execution**: Optional database execution with formatted results
- **Highly Configurable**: Customizable prompts, examples, and connections
## Quick Start
Install the package:
```bash
pip install nvidia-nat-vanna
```
Create a workflow configuration:
```yaml
functions:
  text2sql:
    _type: text2sql
    llm_name: my_llm
    embedder_name: my_embedder
    milvus_retriever: my_retriever
    database_type: databricks
    connection_url: "${CONNECTION_URL}"
    execute_sql: false
  execute_db_query:
    _type: execute_db_query
    database_type: databricks
    connection_url: "${CONNECTION_URL}"
    max_rows: 100

llms:
  my_llm:
    _type: nim
    model_name: meta/llama-3.1-70b-instruct
    api_key: "${NVIDIA_API_KEY}"

embedders:
  my_embedder:
    _type: nim
    model_name: nvidia/llama-3.2-nv-embedqa-1b-v2
    api_key: "${NVIDIA_API_KEY}"

retrievers:
  my_retriever:
    _type: milvus_retriever
    uri: "${MILVUS_URI}"
    connection_args:
      user: "developer"
      password: "${MILVUS_PASSWORD}"
      db_name: "default"
    embedding_model: my_embedder
    content_field: text
    use_async_client: true

workflow:
  _type: rewoo_agent
  tool_names: [text2sql, execute_db_query]
  llm_name: my_llm
```
Run the workflow:
```bash
nat run --config config.yml --input "How many customers do we have?"
```
## Components
### `text2sql` Function
Generates SQL queries from natural language using:
- Few-shot learning with similar examples
- DDL (schema) information
- Custom documentation
- LLM-powered query generation
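The few-shot assembly above can be pictured as building one prompt from DDL, documentation, and retrieved question/SQL pairs. The following is an illustrative sketch only; `build_text2sql_prompt` and its prompt layout are hypothetical, not the actual internals of `nvidia-nat-vanna`:

```python
def build_text2sql_prompt(question, ddl, docs, examples):
    """Assemble an LLM prompt from schema info and similar Q/SQL pairs.

    Toy stand-in for the real function's internal prompt construction.
    """
    parts = ["You are a SQL generator for Databricks.", "-- Schema --", ddl]
    if docs:
        parts += ["-- Documentation --", docs]
    # Few-shot pairs retrieved via vector similarity search
    for ex_q, ex_sql in examples:
        parts += [f"Q: {ex_q}", f"SQL: {ex_sql}"]
    parts += [f"Q: {question}", "SQL:"]
    return "\n".join(parts)

prompt = build_text2sql_prompt(
    "How many customers do we have?",
    "CREATE TABLE customers (id BIGINT, name STRING)",
    "",
    [("Count orders", "SELECT COUNT(*) FROM orders")],
)
print(prompt)
```

The retrieved examples anchor the LLM's output style, which is the core of the few-shot learning approach described above.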
### `execute_db_query` Function
Executes SQL queries and returns formatted results:
- Databricks SQL execution
- Result limiting and pagination
- Structured output format
- SQLAlchemy ORM-based connection management
## Use Cases
- **Business Intelligence**: Enable non-technical users to query data
- **Data Exploration**: Rapid prototyping and analysis
- **Conversational Analytics**: Multi-turn Q&A about your data
- **SQL Assistance**: Help analysts write complex queries
## Documentation
Full documentation: <https://docs.nvidia.com/nemo/agent-toolkit/latest/>
## License
Part of NVIDIA NeMo Agent Toolkit. See repository for license details.
| text/markdown | null | null | null | null | null | ai, agents, text2sql, vanna, sql, database | [
"Programming Language :: Python",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13"
] | [] | null | null | <3.14,>=3.11 | [] | [] | [] | [
"nvidia-nat-core==v1.5.0a20260221",
"nvidia-nat-langchain==v1.5.0a20260221",
"databricks-sql-connector<5.0.0,>=4.1.4",
"databricks-sqlalchemy<3.0.0,>=2.0.8",
"pandas~=2.0",
"pymilvus[model]~=2.6",
"sqlglot~=26.33",
"vanna[chromadb]<3.0.0,>=2.0.1",
"nvidia-nat-test==v1.5.0a20260221; extra == \"test\""
] | [] | [] | [] | [
"documentation, https://docs.nvidia.com/nemo/agent-toolkit/latest/",
"source, https://github.com/NVIDIA/NeMo-Agent-Toolkit"
] | twine/5.0.0 CPython/3.10.19 | 2026-02-21T10:01:56.846171 | nvidia_nat_vanna-1.5.0a20260221-py3-none-any.whl | 25,659 | 7b/04/9b0dd4d337973348516a313f3dd8674964ff0c05277e8f71a49199bbd939/nvidia_nat_vanna-1.5.0a20260221-py3-none-any.whl | py3 | bdist_wheel | null | false | 6f1d70c9defbd4d69c23318586001336 | b8915f7b80450f2358c3cebf0053006a72bc216530234ecabcbbed697375bab6 | 7b049b0dd4d337973348516a313f3dd8674964ff0c05277e8f71a49199bbd939 | null | [] | 68 |
2.4 | schemaxpy | 0.2.1 | Python SDK and CLI for Databricks Unity Catalog schema management | # SchemaX Python SDK & CLI
**Declarative schema management** for modern data catalogs. Version control your schemas, generate SQL migrations, and deploy with confidence across multiple environments.
## Features
- **Multi-Provider Architecture**: Unity Catalog (Databricks), Hive, PostgreSQL, and more
- **Version-Controlled Schemas**: Git-based workflow with snapshots and changelogs
- **SQL Migration Generation**: Generate idempotent SQL DDL from schema changes
- **Environment Management**: Dev, test, prod with catalog name mapping
- **Deployment Tracking**: Know what's deployed where with database-backed tracking
- **Auto-Rollback**: Automatically rollback failed deployments with data loss detection (NEW!)
- **Safety Validation**: Analyze data impact before rollback operations
- **Type-Safe**: Full type annotations, validated with mypy
- **CI/CD Ready**: Designed for GitHub Actions, GitLab CI, and other pipelines
- **Extensible**: Plugin architecture for custom catalog providers
## Why SchemaX?
**Provider-agnostic design**: Write your schema once, deploy to any catalog system. Start with Unity Catalog (Databricks) and easily extend to Hive, PostgreSQL, Snowflake, or custom providers.
**Git-based workflow**: Your schemas are code. Version them, review them, and deploy them with confidence using familiar Git workflows.
**Environment-aware**: Manage dev, test, and prod environments with automatic catalog name mapping. No more hardcoded catalog names in SQL.
**Type-safe and tested**: Built with Python 3.11+ type hints, validated with mypy, and covered by 138+ tests. Production-ready from day one.
## Installation
```bash
pip install schemaxpy
```
### Development Install
```bash
git clone https://github.com/vb-dbrks/schemax-vscode.git
cd schemax-vscode/packages/python-sdk
pip install -e ".[dev]"
```
## Quick Start
### 1. Initialize a New Project
```bash
# Unity Catalog (Databricks) - default
schemax init
# PostgreSQL
schemax init --provider postgres
# Hive Metastore
schemax init --provider hive
```
This creates a `.schemax/` directory with your project configuration.
### 2. Validate Your Schema
```bash
schemax validate
```
Validates project structure, provider compatibility, and schema correctness.
### 3. Generate SQL Migration
```bash
# Generate SQL from changelog
schemax sql --output migration.sql
# Generate for specific environment (with catalog mapping)
schemax sql --target dev --output dev-migration.sql
```
### 4. Apply Changes (Unity Catalog)
```bash
# Preview changes
schemax apply --target dev --profile my-databricks --warehouse-id abc123 --dry-run
# Apply with automatic rollback on failure (MVP feature!)
schemax apply --target dev --profile my-databricks --warehouse-id abc123 --auto-rollback
# Apply to environment
schemax apply --target dev --profile my-databricks --warehouse-id abc123
```
### 5. Track Deployments
```bash
# Record deployment (works for all providers)
schemax record-deployment --environment prod --version v1.0.0 --mark-deployed
```
## CLI Commands
### `schemax sql`
Generate SQL DDL migration scripts from schema changes.
**Options:**
- `--output, -o`: Output file path (default: stdout)
- `--target, -t`: Target environment (applies catalog name mapping)
**Examples:**
```bash
# Output to stdout
schemax sql
# Save to file
schemax sql --output migration.sql
# Generate for specific environment
schemax sql --target prod --output prod-migration.sql
```
### `schemax apply` (Unity Catalog only)
Execute SQL migrations against a Databricks Unity Catalog environment with automatic deployment tracking and optional rollback.
**Options:**
- `--target, -t`: Target environment (required)
- `--profile, -p`: Databricks CLI profile (required)
- `--warehouse-id, -w`: SQL Warehouse ID (required)
- `--sql`: SQL file to execute (optional, generates from changelog if not provided)
- `--dry-run`: Preview changes without executing
- `--no-interaction`: Skip confirmation prompts (for CI/CD)
- `--auto-rollback`: Automatically rollback on failure (NEW!)
**Features:**
- Interactive snapshot prompts (create snapshot before deployment)
- SQL preview with statement-by-statement display
- Database-backed deployment tracking in `{catalog}.schemax`
- Automatic rollback on partial failures (with `--auto-rollback`)
**Examples:**
```bash
# Preview changes
schemax apply --target dev --profile default --warehouse-id abc123 --dry-run
# Apply with automatic rollback on failure
schemax apply --target dev --profile default --warehouse-id abc123 --auto-rollback
# Apply with confirmation
schemax apply --target prod --profile prod --warehouse-id xyz789
# Non-interactive (CI/CD)
schemax apply --target prod --profile prod --warehouse-id xyz789 --no-interaction
```
### `schemax rollback` (Unity Catalog only)
Rollback failed or unwanted deployments with safety validation. Idempotent design prevents redundant operations by checking database state.
**Partial Rollback** - Revert successful operations from a failed deployment:
```bash
schemax rollback --partial --deployment <id> --target dev --profile DEFAULT --warehouse-id <id>
# With dry-run
schemax rollback --partial --deployment <id> --target dev --profile DEFAULT --warehouse-id <id> --dry-run
# Only safe operations
schemax rollback --partial --deployment <id> --target dev --profile DEFAULT --warehouse-id <id> --safe-only
```
**Complete Rollback** - Rollback to a previous snapshot:
```bash
schemax rollback --to-snapshot v0.2.0 --target dev --profile DEFAULT --warehouse-id <id>
# With dry-run
schemax rollback --to-snapshot v0.2.0 --target dev --profile DEFAULT --warehouse-id <id> --dry-run
```
**Options:**
- `--partial`: Rollback successful operations from a failed deployment
- `--deployment, -d`: Deployment ID to rollback (required for partial)
- `--to-snapshot`: Snapshot version to rollback to (required for complete)
- `--target, -t`: Target environment (required)
- `--profile, -p`: Databricks CLI profile (required)
- `--warehouse-id, -w`: SQL Warehouse ID (required)
- `--dry-run`: Preview rollback SQL without executing
- `--safe-only`: Only execute SAFE operations (skip RISKY/DESTRUCTIVE)
**Safety Levels:**
- **SAFE**: No data loss (e.g., DROP empty table)
- **RISKY**: Potential data loss (e.g., ALTER COLUMN TYPE)
- **DESTRUCTIVE**: Certain data loss (e.g., DROP table with data)
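The safety levels above can be sketched as a simple classification-and-filter step. This is a hypothetical illustration of the idea (the `classify` rules and operation tuples are made up; schemax's real classifier is internal to the package):

```python
from enum import Enum

class Safety(Enum):
    SAFE = 1         # no data loss
    RISKY = 2        # potential data loss
    DESTRUCTIVE = 3  # certain data loss

def classify(op_type: str, row_count: int) -> Safety:
    """Toy rules mirroring the README's examples."""
    if op_type == "drop_table":
        # Dropping an empty table is SAFE; with data it is DESTRUCTIVE.
        return Safety.SAFE if row_count == 0 else Safety.DESTRUCTIVE
    if op_type == "alter_column_type":
        return Safety.RISKY
    return Safety.SAFE

ops = [("drop_table", 0), ("alter_column_type", 10), ("drop_table", 500)]
# A flag like --safe-only keeps just the SAFE operations:
safe_ops = [op for op in ops if classify(*op) is Safety.SAFE]
print(safe_ops)  # [('drop_table', 0)]
```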
**Features:**
- **Idempotent**: Checks database deployment state to prevent redundant rollbacks
- **SQL Preview**: Shows exact SQL statements before execution (matches `apply` UX)
- **Database as Source of Truth**: Queries deployment tracking table for accurate state
### `schemax snapshot`
Manage schema snapshots with lifecycle commands.
**Create Snapshot:**
```bash
# Auto-generate version
schemax snapshot create --name "Initial schema"
# Specify version manually
schemax snapshot create --name "Production release" --version v1.0.0
# With tags
schemax snapshot create --name "Hotfix" --version v0.2.1 --tags hotfix,urgent
```
**Validate Snapshots:**
```bash
# Detect stale snapshots after git rebase
schemax snapshot validate
```
**Rebase Snapshot:**
```bash
# Rebase a stale snapshot onto new base
schemax snapshot rebase v0.3.0
```
**Features:**
- Semantic versioning (MAJOR.MINOR.PATCH)
- Detects stale snapshots after Git rebases
- Unpacks and replays operations on new base
- Conflict detection with manual UI resolution
- Validates snapshot lineage
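Semantic versioning for snapshots follows the usual MAJOR.MINOR.PATCH bump rules. A minimal sketch, assuming schemax applies conventional bump semantics (the `bump` helper below is hypothetical, not part of the SDK):

```python
def bump(version: str, part: str = "patch") -> str:
    """Return the next version string after bumping the given part."""
    major, minor, patch = (int(x) for x in version.lstrip("v").split("."))
    if part == "major":
        major, minor, patch = major + 1, 0, 0
    elif part == "minor":
        minor, patch = minor + 1, 0
    else:  # default: patch bump, as for auto-generated versions
        patch += 1
    return f"v{major}.{minor}.{patch}"

print(bump("v0.2.0"))            # v0.2.1
print(bump("v0.2.1", "minor"))   # v0.3.0
```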
### `schemax validate`
Validate `.schemax/` project files for correctness and provider compatibility.
**Examples:**
```bash
# Validate current directory
schemax validate
# Validate specific directory
schemax validate /path/to/project
```
### `schemax record-deployment`
Manually record deployment metadata (useful for non-Unity Catalog providers).
**Options:**
- `--environment, -e`: Environment name (required)
- `--version, -v`: Version deployed (default: latest snapshot)
- `--mark-deployed`: Mark as successfully deployed
**Examples:**
```bash
# Record successful deployment
schemax record-deployment --environment prod --version v1.0.0 --mark-deployed
```
### `schemax diff`
Compare two schema versions and show the operations needed to transform one into the other.
**Examples:**
```bash
# Basic diff
schemax diff --from v0.1.0 --to v0.2.0
# Show generated SQL with logical catalog names
schemax diff --from v0.1.0 --to v0.2.0 --show-sql
# Show SQL with environment-specific catalog names
schemax diff --from v0.1.0 --to v0.2.0 --show-sql --target dev
# Show detailed operation payloads
schemax diff --from v0.1.0 --to v0.2.0 --show-details
```
## Python API
### Generate SQL Programmatically
```python
from pathlib import Path
from schemax.core.storage import load_current_state, read_project, get_environment_config
from schemax.providers.base.operations import Operation
# Load schema with provider
workspace = Path.cwd()
state, changelog, provider = load_current_state(workspace)
print(f"Provider: {provider.info.name} v{provider.info.version}")
# Convert ops to Operation objects
operations = [Operation(**op) for op in changelog["ops"]]
# Generate SQL using provider's SQL generator
generator = provider.get_sql_generator(state)
sql = generator.generate_sql(operations)
print(sql)
```
### Environment-Specific SQL Generation
```python
from pathlib import Path
from schemax.core.storage import load_current_state, read_project, get_environment_config
workspace = Path.cwd()
state, changelog, provider = load_current_state(workspace)
# Get environment configuration
project = read_project(workspace)
env_config = get_environment_config(project, "prod")
# Build catalog name mapping (logical -> physical)
catalog_mapping = {}
for catalog in state.get("catalogs", []):
    logical_name = catalog.get("name", "__implicit__")
    physical_name = env_config.get("catalog", logical_name)
    catalog_mapping[logical_name] = physical_name
# Generate SQL with environment-specific catalog names
generator = provider.get_sql_generator(state)
generator.catalog_name_mapping = catalog_mapping # For Unity provider
operations = [Operation(**op) for op in changelog["ops"]]
sql = generator.generate_sql(operations)
print(sql) # Contains prod catalog names
```
### Working with Multiple Providers
```python
from schemax.providers import ProviderRegistry
# List available providers
providers = ProviderRegistry.get_all_ids()
print(f"Available providers: {providers}")
# Get specific provider
unity_provider = ProviderRegistry.get("unity")
if unity_provider:
    print(f"Name: {unity_provider.info.name}")
    print(f"Version: {unity_provider.info.version}")
    print(f"Operations: {len(unity_provider.info.capabilities.supported_operations)}")
```
### Validate Schema
```python
from pathlib import Path
from schemax.core.storage import read_project, load_current_state
try:
    workspace = Path.cwd()
    project = read_project(workspace)
    state, changelog, provider = load_current_state(workspace)
    # Validate with provider
    validation = provider.validate_state(state)
    if validation.valid:
        print("✓ Schema is valid")
    else:
        print("✗ Validation failed:")
        for error in validation.errors:
            print(f"  - {error.field}: {error.message}")
except Exception as e:
    print(f"✗ Error: {e}")
```
## CI/CD Integration
### GitHub Actions (Generic)
```yaml
name: Schema Management
on:
  pull_request:
  push:
    branches: [main]
jobs:
  validate:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - uses: actions/setup-python@v4
        with:
          python-version: '3.11'
      - name: Install SchemaX
        run: pip install schemaxpy
      - name: Validate Schema
        run: schemax validate
      - name: Generate SQL Preview
        run: schemax sql --target prod --output migration.sql
      - name: Upload SQL
        uses: actions/upload-artifact@v3
        with:
          name: migration-sql
          path: migration.sql
```
### GitHub Actions (Unity Catalog - Automated Deployment)
```yaml
name: Deploy to Unity Catalog
on:
  push:
    branches: [main]
jobs:
  deploy:
    runs-on: ubuntu-latest
    environment: production
    steps:
      - uses: actions/checkout@v3
      - uses: actions/setup-python@v4
        with:
          python-version: '3.11'
      - name: Install SchemaX
        run: pip install schemaxpy
      - name: Validate Schema
        run: schemax validate
      - name: Apply to Production
        env:
          DATABRICKS_HOST: ${{ secrets.DATABRICKS_HOST }}
          DATABRICKS_TOKEN: ${{ secrets.DATABRICKS_TOKEN }}
        run: |
          schemax apply \
            --target prod \
            --profile default \
            --warehouse-id ${{ secrets.WAREHOUSE_ID }} \
            --no-interaction
```
### GitLab CI
```yaml
validate-schema:
  stage: test
  image: python:3.11
  script:
    - pip install schemaxpy
    - schemax validate
    - schemax sql --target prod --output migration.sql
  artifacts:
    paths:
      - migration.sql
    expire_in: 1 week
```
## Supported Providers
| Provider | Status | Operations | Apply Command | Notes |
|----------|--------|------------|---------------|-------|
| **Unity Catalog** | ✅ Stable | 29 | ✅ `schemax apply` | Full Databricks integration |
| **Hive Metastore** | 🚧 Planned | TBD | Manual | SQL generation only |
| **PostgreSQL** | 🚧 Planned | TBD | Manual | SQL generation only |
Want to add a provider? See the [documentation site](https://github.com/vb-dbrks/schemax-vscode/tree/main/docs/schemax) — **For Contributors** → Provider contract.
## Requirements
- **Python 3.11+**
- A SchemaX project (`.schemax/` directory)
- For Unity Catalog: Databricks workspace with SQL Warehouse access
## Documentation
All guides and reference live in the **Docusaurus site** (`docs/schemax/`):
- **For Users:** Quickstart, Architecture, Workflows, CLI, Environments & scope, Unity Catalog grants
- **For Contributors:** Development, Testing, Provider contract, Contributing
Run `cd docs/schemax && npm run start` to browse locally. See also [SETUP.md](SETUP.md) for SDK-specific setup.
## Development
See [SETUP.md](https://github.com/vb-dbrks/schemax-vscode/blob/main/packages/python-sdk/SETUP.md) for complete development setup instructions.
**Quick setup:**
```bash
cd packages/python-sdk
uv pip install -e ".[dev]" # Or use pip
pre-commit install
make all # Run all quality checks
```
**Commands:**
```bash
make format # Format code
make lint # Lint code
make typecheck # Type check
make test # Run tests
make all # Run all checks
```
## License
Apache License 2.0 - see [LICENSE](https://github.com/vb-dbrks/schemax-vscode/blob/main/LICENSE) for details.
## Links
- **Repository**: https://github.com/vb-dbrks/schemax-vscode
- **Issues**: https://github.com/vb-dbrks/schemax-vscode/issues
- **VS Code Extension**: [schemax-vscode](https://github.com/vb-dbrks/schemax-vscode/tree/main/packages/vscode-extension)
- **PyPI**: https://pypi.org/project/schemaxpy/
| text/markdown | null | null | Hari Gopinath, Varun Bhandary | null | Apache-2.0 | cli, databricks, migration, schema, sql, unity-catalog | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"License :: OSI Approved :: Apache Software License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12"
] | [] | null | null | >=3.11 | [] | [] | [] | [
"click>=8.0.0",
"databricks-sdk>=0.18.0",
"networkx>=3.0",
"pydantic>=2.0.0",
"pyyaml>=6.0.0",
"rich>=13.0.0",
"sqlglot>=20.0.0",
"mypy>=1.0.0; extra == \"dev\"",
"pre-commit>=3.0.0; extra == \"dev\"",
"pytest-cov>=4.0.0; extra == \"dev\"",
"pytest>=7.0.0; extra == \"dev\"",
"ruff>=0.1.0; extra == \"dev\"",
"sqlglot>=20.0.0; extra == \"dev\"",
"types-networkx>=3.0.0; extra == \"dev\"",
"sqlglot>=20.0.0; extra == \"validation\""
] | [] | [] | [] | [
"Homepage, https://github.com/vb-dbrks/schemax-vscode",
"Documentation, https://github.com/vb-dbrks/schemax-vscode/tree/main/docs",
"Repository, https://github.com/vb-dbrks/schemax-vscode",
"Issues, https://github.com/vb-dbrks/schemax-vscode/issues",
"Contributors, https://github.com/vb-dbrks/schemax/graphs/contributors"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-21T10:01:44.442609 | schemaxpy-0.2.1.tar.gz | 392,649 | a4/5a/b1a8114fc8408f52771cb335b5e973cae57a48cbec5e376281b9b9efe202/schemaxpy-0.2.1.tar.gz | source | sdist | null | false | 7778c6bcd5ffb73b986ee89c7eb35b6f | c357ce4ec6167bfbc27bcffe1db369554b88584e3801ece2e697c1a15d52cfba | a45ab1a8114fc8408f52771cb335b5e973cae57a48cbec5e376281b9b9efe202 | null | [] | 213 |
2.4 | torchax | 0.0.12.dev20260221 | torchax is a library for running Jax and PyTorch together | # torchax: Running PyTorch on TPU via JAX
Docs page: https://google.github.io/torchax/
Discord Discussion Channel: https://discord.gg/JqeJqGPyzC

**torchax** is a backend for PyTorch that allows users to run
PyTorch programs on Google Cloud TPUs. It also provides graph-level
interoperability between PyTorch and JAX.
With **torchax**, you can:
* Run PyTorch code on TPUs with minimal code changes.
* Call JAX functions from PyTorch, passing in `jax.Array`s.
* Call PyTorch functions from JAX, passing in `torch.Tensor`s.
* Use JAX features like `jax.grad`, `optax`, and `GSPMD` to train PyTorch
models.
* Use a PyTorch model as a feature extractor with a JAX model.
## Install
First, install the CPU version of PyTorch:
```bash
# On Linux
pip install torch --index-url https://download.pytorch.org/whl/cpu
# On Mac
pip install torch
```
Next, install JAX for your desired accelerator:
```bash
# On Google Cloud TPU
pip install -U jax[tpu]
# On GPU machines
pip install -U jax[cuda12]
# On Linux CPU machines or Macs (see the note below)
pip install -U jax
```
Note: For Apple devices, you can install the [Metal version](https://developer.apple.com/metal/jax/) of JAX for
hardware acceleration.
Finally, install torchax:
```bash
# Install from PyPI
pip install torchax
# Or, install torchax from source.
pip install git+https://github.com/google/torchax
```
## Running a Model
To execute a model with torchax, start with any `torch.nn.Module`.
Here’s an example with a simple 2-layer model:
```python
import torch
import torch.nn as nn
import torch.nn.functional as F
class MyModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc1 = nn.Linear(28 * 28, 120)
        self.fc2 = nn.Linear(120, 84)
        self.fc3 = nn.Linear(84, 10)

    def forward(self, x):
        x = x.view(-1, 28 * 28)
        x = F.relu(self.fc1(x))
        x = F.relu(self.fc2(x))
        x = self.fc3(x)
        return x

m = MyModel()

# Execute this model using torch.
inputs = torch.randn(3, 3, 28, 28)
print(m(inputs))
```
To execute this model with `torchax`, we need to enable torchax to capture PyTorch ops:
```python
import torchax
torchax.enable_globally()
```
Then, we can use a `jax` device:
```python
inputs = torch.randn(3, 3, 28, 28, device='jax')
m = MyModel().to('jax')
res = m(inputs)
print(type(res)) # outputs torchax.tensor.Tensor
print(res.jax()) # print the underlying Jax Array
```
`torchax.tensor.Tensor` is a `torch.Tensor` subclass that holds
a `jax.Array`. You can inspect that JAX array with `res.jax()`.
Although the code appears to be standard PyTorch, it's actually running on JAX.
## How It Works
torchax uses a `torch.Tensor` subclass, `torchax.tensor.Tensor`, which holds a
`jax.Array` and overrides the `__torch_dispatch__` method. When a PyTorch operation
is executed within the torchax environment (enabled by `torchax.enable_globally()`),
the implementation of that operation is swapped with its JAX equivalent.
When a model is instantiated, tensor constructors like `torch.rand` create
`torchax.tensor.Tensor` objects containing `jax.Arrays`. Subsequent operations
extract the `jax.Array`, call the corresponding JAX implementation, and wrap the
result back into a `torchax.tensor.Tensor`.
For more details, see the [How It Works](docs/docs/user_guide/how-it-works.md) and
[Ops Registry](docs/ops_registry.md) documentation.
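The unwrap/call/re-wrap pattern described above can be sketched in plain Python. This toy stand-in shows the dispatch idea only; torchax actually overrides `__torch_dispatch__` on a `torch.Tensor` subclass and calls real JAX ops:

```python
class Boxed:
    """Toy stand-in for torchax.tensor.Tensor holding a jax.Array."""
    def __init__(self, value):
        self.value = value  # stands in for the underlying jax.Array

# Stand-in for the registry of JAX implementations of torch ops
BACKEND_IMPLS = {"add": lambda a, b: a + b, "mul": lambda a, b: a * b}

def dispatch(op_name, *args):
    # 1. Unwrap Boxed arguments to their raw values
    raw = [a.value if isinstance(a, Boxed) else a for a in args]
    # 2. Call the backend implementation, 3. wrap the result back
    return Boxed(BACKEND_IMPLS[op_name](*raw))

x, y = Boxed(2.0), Boxed(3.0)
res = dispatch("mul", x, y)
print(res.value)  # 6.0
```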
### Executing with `jax.jit`
While torchax can run models in eager mode, JIT compilation gives much better performance. torchax's `jax_jit` (from `torchax.interop`) wraps `jax.jit` so it can compile a function that takes and returns `torch.Tensor`s into a faster, JAX-compiled version.
To use it, you first need a functional version of your model where parameters
are passed as inputs:
```python
def model_func(param, inputs):
    return torch.func.functional_call(m, param, inputs)
```
Here we use [torch.func.functional_call](https://pytorch.org/docs/stable/generated/torch.func.functional_call.html)
from PyTorch to replace the model weights with `param` and then call the
model. This is roughly equivalent to:
```python
def model_func(param, inputs):
    m.load_state_dict(param)
    return m(*inputs)
```
Now, we can apply `jax_jit` to `model_func`:
```python
from torchax.interop import jax_jit
model_func_jitted = jax_jit(model_func)
print(model_func_jitted(m.state_dict(), inputs))
```
See more examples at [eager_mode.py](examples/eager_mode.py) and the
[examples folder](examples/).
To simplify the idiom of creating a functional model and calling it with its
parameters, torchax also provides the `JittableModule` helper class, which lets
us rewrite the above as:
```python
from torchax.interop import JittableModule
m_jitted = JittableModule(m)
res = m_jitted(...)
```
The first time `m_jitted` is called, `jax.jit` compiles the function for the
given input shapes. Subsequent calls with the same input shapes are fast
because the compilation is cached.
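The shape-keyed caching behavior can be sketched in plain Python. This is a toy stand-in for illustration; `JittableModule`'s actual cache is internal to torchax:

```python
compiled_cache = {}
compile_count = 0

def fake_compile(shape_key):
    """Pretend to be an expensive jax.jit compilation."""
    global compile_count
    compile_count += 1
    return lambda xs: sum(xs)  # stand-in for the compiled function

def run(xs):
    key = len(xs)  # stand-in for the full input shape signature
    if key not in compiled_cache:
        compiled_cache[key] = fake_compile(key)  # compile once per shape
    return compiled_cache[key](xs)

run([1, 2, 3]); run([4, 5, 6])  # same "shape": compiled only once
run([1, 2])                     # new "shape": triggers another compile
print(compile_count)  # 2
```

This is why keeping input shapes stable across calls matters for performance: each new shape pays the compilation cost again.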
## Saving and Loading Checkpoints
You can save and load your training state using `torchax.save_checkpoint` and `torchax.load_checkpoint`.
The state can be a dictionary containing the model's weights, optimizer state, and any other relevant
information.
```python
import torchax
import torch
import optax
# Assume model, optimizer, and other states are defined
model = MyModel()
optimizer = optax.adam(1e-3)
weights = dict(model.named_parameters())
buffers = dict(model.named_buffers())
opt_state = optimizer.init(weights)
epoch = 10
state = {
    'weights': weights,
    'buffers': buffers,
    'opt_state': opt_state,
    'epoch': epoch,
}
# Save checkpoint
torchax.save_checkpoint(state, '/path/to/checkpoint.pt')
# Load checkpoint
loaded_state = torchax.load_checkpoint('/path/to/checkpoint.pt')
# Restore state
model.load_state_dict(loaded_state['weights'])
opt_state = loaded_state['opt_state']
epoch = loaded_state['epoch']
```
## Citation
```
@software{torchax,
  author = {Han Qi and Chun-nien Chan and Will Cromar and Manfei Bai and Kevin Gleason},
  title = {torchax: PyTorch on TPU and JAX interoperability},
  url = {https://github.com/pytorch/xla/tree/master/torchax},
  version = {0.0.4},
  date = {2025-02-24},
}
```
## Maintainers & Contributors
This library is maintained by a team within Google Cloud. It has benefited from
many contributions from both inside and outside the team.
Thank you to recent contributors.
```
Han Qi (qihqi), PyTorch/XLA
Manfei Bai (manfeibai), PyTorch/XLA
Will Cromar (will-cromar), Meta
Milad Mohammadi (miladm), PyTorch/XLA
Siyuan Liu (lsy323), PyTorch/XLA
Bhavya Bahl (bhavya01), PyTorch/XLA
Pei Zhang (zpcore), PyTorch/XLA
Yifei Teng (tengyifei), PyTorch/XLA
Chunnien Chan (chunnienc), Google, ODML
Alban Desmaison (albanD), Meta, PyTorch
Simon Teo (simonteozw), Google (20%)
David Huang (dvhg), Google (20%)
Barni Seetharaman (barney-s), Google (20%)
Anish Karthik (anishfish2), Google (20%)
Yao Gu (guyao), Google (20%)
Yenkai Wang (yenkwang), Google (20%)
Greg Shikhman (commander), Google (20%)
Matin Akhlaghinia (matinehAkhlaghinia), Google (20%)
Tracy Chen (tracych477), Google (20%)
Matthias Guenther (mrguenther), Google (20%)
WenXin Dong (wenxindongwork), Google (20%)
Kevin Gleason (GleasonK), Google, StableHLO
Nupur Baghel (nupurbaghel), Google (20%)
Gwen Mittertreiner (gmittert), Google (20%)
Zeev Melumian (zmelumian), Lightricks
Vyom Sharma (vyom1611), Google (20%)
Shitong Wang (ShitongWang), Adobe
Rémi Doreau (ayshiff), Google (20%)
Lance Wang (wang2yn84), Google, CoreML
Hossein Sarshar (hosseinsarshar), Google (20%)
Daniel Vega-Myhre (danielvegamyhre), Google (20%)
Tianqi Fan (tqfan28), Google (20%)
Jim Lin (jimlinntu), Google (20%)
Fanhai Lu (FanhaiLu1), Google Cloud
DeWitt Clinton (dewitt), Google PyTorch
Aman Gupta (aman2930), Google (20%)
```
A special thank you to @albanD for the [initial inspiration](https://github.com/albanD/subclass_zoo/blob/main/new_device.py)
for torchax.
| text/markdown | null | Han Qi <qihan.dev@gmail.com>, Google Cloud Inference Team <cmcs-inference-eng@google.com> | null | null | Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "[]"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright [yyyy] [name of copyright owner]
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License. | null | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"Intended Audience :: Education",
"Intended Audience :: Science/Research",
"License :: OSI Approved :: Apache Software License",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Topic :: Scientific/Engineering",
"Topic :: Scientific/Engineering :: Artificial Intelligence",
"Topic :: Scientific/Engineering :: Mathematics",
"Topic :: Software Development",
"Topic :: Software Development :: Libraries",
"Topic :: Software Development :: Libraries :: Python Modules"
] | [] | null | null | >=3.11 | [] | [] | [] | [
"jax[cpu]; extra == \"cpu\"",
"jax[cpu]>=0.6.2; extra == \"cpu\"",
"jax[cpu]>=0.6.2; extra == \"cuda\"",
"jax[cuda12]; extra == \"cuda\"",
"jax[cpu]; extra == \"odml\"",
"jax[cpu]>=0.6.2; extra == \"odml\"",
"jax[cpu]>=0.6.2; extra == \"tpu\"",
"jax[tpu]; extra == \"tpu\""
] | [] | [] | [] | [
"Homepage, https://github.com/google/torchax"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-21T10:01:40.396627 | torchax-0.0.12.dev20260221-py3-none-any.whl | 115,515 | 95/c4/9a0f0a5a01c7dd7dbee249f078d2aff9efcd7eab432c46efa493bc7a6540/torchax-0.0.12.dev20260221-py3-none-any.whl | py3 | bdist_wheel | null | false | bf227f8e753e87abcae6cd6f0fc0799b | 49f31db5534e7c463b6296cd554aa5388bc3111cdce5b6edb2c015ab6111faf9 | 95c49a0f0a5a01c7dd7dbee249f078d2aff9efcd7eab432c46efa493bc7a6540 | null | [
"LICENSE"
] | 70 |
2.4 | nvidia-nat-rag | 1.5.0a20260221 | Subpackage for NVIDIA RAG in NeMo Agent Toolkit | <!--
SPDX-FileCopyrightText: Copyright (c) 2025-2026, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
SPDX-License-Identifier: Apache-2.0
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
-->

# NVIDIA NeMo Agent Toolkit RAG Library Subpackage
Subpackage for NVIDIA RAG library integration in NeMo Agent toolkit.
This package provides integration with the NVIDIA RAG Blueprint library, allowing NeMo Agent toolkit workflows to use retrieval-augmented generation capabilities with flexible configuration.
## Features
- RAG generation and semantic search over vector stores
- Query rewriting and query decomposition for improved retrieval
- Reranking for higher quality results
- Filter expression generation for metadata filtering
- Multimodal support with VLM inference
- Citation generation and guardrails
For more information about the NVIDIA NeMo Agent toolkit, please visit the [NeMo Agent toolkit GitHub Repo](https://github.com/NVIDIA/NeMo-Agent-Toolkit).
| text/markdown | NVIDIA Corporation | null | NVIDIA Corporation | null | Apache-2.0 | ai, rag, agents, retrieval | [
"Programming Language :: Python",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13"
] | [] | null | null | <3.14,>=3.11 | [] | [] | [] | [
"nvidia-nat-core==v1.5.0a20260221",
"langchain-openai<2.0.0,>=1.1.6",
"nvidia-rag>=2.4.0",
"opentelemetry-api~=1.2",
"opentelemetry-sdk~=1.3",
"nvidia-nat-eval[profiling]==v1.5.0a20260221; extra == \"test\"",
"nvidia-nat-test==v1.5.0a20260221; extra == \"test\""
] | [] | [] | [] | [
"documentation, https://docs.nvidia.com/nemo/agent-toolkit/latest/",
"source, https://github.com/NVIDIA/NeMo-Agent-Toolkit"
] | twine/5.0.0 CPython/3.10.19 | 2026-02-21T10:01:40.190309 | nvidia_nat_rag-1.5.0a20260221-py3-none-any.whl | 55,672 | d9/bc/9be8c54b80571b1c746335f89d49bcca1ce8b89f823b144f6a90983c7298/nvidia_nat_rag-1.5.0a20260221-py3-none-any.whl | py3 | bdist_wheel | null | false | c2ad923fff3dac2c13b3cf455a230489 | b807491035dd56422192b4ea6d4ef03eef6f836b288f2411e77fc563c3a483da | d9bc9be8c54b80571b1c746335f89d49bcca1ce8b89f823b144f6a90983c7298 | null | [] | 73 |
2.4 | edupsyadmin | 8.1.1 | edupsyadmin provides tools to help school psychologists with their documentation | # edupsyadmin
edupsyadmin provides tools to help school psychologists with their
documentation
## Basic Setup
You can install the CLI using pip or
[uv](https://docs.astral.sh/uv/getting-started/installation).
Install with uv:
uv tool install edupsyadmin
You may get a warning that the `bin` directory is not on your environment path.
If that is the case, copy the path from the warning and add that directory to
your **environment path**, either permanently or just for the current session.
Run the application:
edupsyadmin --help
## Getting started
### keyring backend
edupsyadmin uses `keyring` to store the encryption credentials. `keyring` has
several backends.
- On Windows the default is the Windows Credential Manager (German:
Anmeldeinformationsverwaltung).
- On macOS, the default is Keychain (German: Schlüsselbund).
Those default keyring backends unlock when you login to your machine. You may
want to install a backend that requires separate unlocking:
<https://keyring.readthedocs.io/en/latest/#third-party-backends>
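As a quick sanity check (this snippet is not part of edupsyadmin itself, just plain `keyring` usage), you can ask `keyring` which backend it has resolved on your machine before storing any credentials:

```python
import keyring

# Print the backend keyring has resolved on this machine, e.g. the
# Windows Credential Manager on Windows or Keychain on macOS.
backend = keyring.get_keyring()
print(f"Active keyring backend: {backend!r}")
```

If the printed backend is not the one you expect (for example after installing a third-party backend), consult the keyring documentation linked above for how to configure backend priority.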
### Modify the config file
First, you have to update the config file with your data using the config
editor TUI:
edupsyadmin edit-config
## The database
The information you enter is stored in an SQLite database with the fields
described [in the documentation for
edupsyadmin](https://edupsyadmin.readthedocs.io/en/latest/clients_model.html#)
## Examples
Get information about the path to the config file and the path to the database:
edupsyadmin info
Add a client interactively:
edupsyadmin new-client
Add a client to the database from a Webuntis csv export:
edupsyadmin new-client --csv ./path/to/your/file.csv --name "short_name_of_client"
Change values for the database entry with `client_id=42` interactively:
edupsyadmin set-client 42
Change values for the database entry with `client_id=42` from the commandline:
```
$ edupsyadmin set-client 42 \
--key_value_pairs \
"nta_font=1" \
"nta_zeitv_vieltext=20" \
"nos_rs=0" \
"lrst_diagnosis_encr=iLst"
```
See an overview of all clients in the database:
edupsyadmin get-clients
Fill a PDF form for the database entry with `client_id=42`:
edupsyadmin create-documentation 42 --form_paths ./path/to/your/file.pdf
Fill all files that belong to the form_set `lrst` (as defined in the
config.yml) with the data for `client_id=42`:
edupsyadmin create-documentation 42 --form_set lrst
## Development
Create the development environment:
    uv venv
uv pip install -e .
Run the test suite:
.venv/bin/python -m pytest -v -n auto --cov=src test/
Build documentation:
.venv/bin/python -m sphinx -M html docs docs/_build
## License
This project is licensed under the terms of the MIT License. Portions of this
project are derived from the python application project cookiecutter template
by Michael Klatt, which is also licensed under the MIT license. See the
LICENSE.txt file for details.
| text/markdown | Lukas Liebermann | null | null | null | MIT | null | [] | [] | null | null | >=3.14 | [] | [] | [] | [
"alembic>=1.14.1",
"cryptography>=43.0.3",
"fillpdf>=0.7.3",
"keyring>=25.5.0",
"pandas>=2.2.3",
"platformdirs>=4.3.6",
"pydantic>=2.12.3",
"pypdf>=5.1.0",
"python-liquid>=1.12.1",
"pyyaml>=6.0.2",
"scipy>=1.14.1",
"sqlalchemy>=2.0.36",
"textual>=2.1.2",
"bitwarden-keyring>=0.3.1; extra == \"bwbackend\"",
"pdf2image>=1.17.0; extra == \"flattenpdf\"",
"dataframe-image>=0.2.6; extra == \"reportsandtaetigkeitsber\"",
"fpdf2>=2.8.3; extra == \"reportsandtaetigkeitsber\"",
"matplotlib>=3.9.2; extra == \"reportsandtaetigkeitsber\""
] | [] | [] | [] | [] | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true} | 2026-02-21T10:01:39.173628 | edupsyadmin-8.1.1.tar.gz | 412,964 | 2f/64/c2c511da379342918cc163ad522a1b598415e0862a19305cacad20463c03/edupsyadmin-8.1.1.tar.gz | source | sdist | null | false | 318c818d1cec225f8e258204e5a5f7e3 | 36e270e7056e241173cbd9927cd3bc08007a2ce26e21e6beb104dc3dcb9f02c8 | 2f64c2c511da379342918cc163ad522a1b598415e0862a19305cacad20463c03 | null | [
"LICENSE.txt"
] | 224 |
2.4 | nvidia-nat-mcp | 1.5.0a20260221 | Subpackage for MCP client integration in NeMo Agent Toolkit | <!--
SPDX-FileCopyrightText: Copyright (c) 2025-2026, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
SPDX-License-Identifier: Apache-2.0
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
-->

# NVIDIA NeMo Agent Toolkit MCP Subpackage
Subpackage for MCP integration in NeMo Agent Toolkit.
This package provides MCP (Model Context Protocol) functionality, allowing NeMo Agent Toolkit workflows to connect to external MCP servers and use their tools as functions.
## Features
- Connect to MCP servers via streamable-http, SSE, or stdio transports
- Wrap individual MCP tools as NeMo Agent Toolkit functions
- Connect to MCP servers and dynamically discover available tools
For more information about the NVIDIA NeMo Agent Toolkit, please visit the [NeMo Agent Toolkit GitHub Repo](https://github.com/NVIDIA/NeMo-Agent-Toolkit).
| text/markdown | NVIDIA Corporation | null | NVIDIA Corporation | null | Apache-2.0 | ai, rag, agents, mcp | [
"Programming Language :: Python",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13"
] | [] | null | null | <3.14,>=3.11 | [] | [] | [] | [
"nvidia-nat-core==v1.5.0a20260221",
"aiorwlock~=1.5",
"mcp~=1.25",
"nvidia-nat-core[async_endpoints]==v1.5.0a20260221; extra == \"test\"",
"nvidia-nat-test==v1.5.0a20260221; extra == \"test\""
] | [] | [] | [] | [
"documentation, https://docs.nvidia.com/nemo/agent-toolkit/latest/",
"source, https://github.com/NVIDIA/NeMo-Agent-Toolkit"
] | twine/5.0.0 CPython/3.10.19 | 2026-02-21T10:01:22.908733 | nvidia_nat_mcp-1.5.0a20260221-py3-none-any.whl | 128,697 | a1/fa/b5f622284ec312e8359ba1906e42a6e096cb33a511412ceead36e0ee85eb/nvidia_nat_mcp-1.5.0a20260221-py3-none-any.whl | py3 | bdist_wheel | null | false | a81fa088f74ec80a569a26c7e77aacb9 | b5f5c3bd810efdd0b4aa9ba688feefecca1a082d54ae97ecfb9d89dedb68b6c6 | a1fab5f622284ec312e8359ba1906e42a6e096cb33a511412ceead36e0ee85eb | null | [] | 70 |
2.4 | nvidia-nat-ragaai | 1.5.0a20260221 | Subpackage for RagaAI Catalyst integration in NeMo Agent Toolkit | <!--
SPDX-FileCopyrightText: Copyright (c) 2025-2026, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
SPDX-License-Identifier: Apache-2.0
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
-->

# NVIDIA NeMo Agent Toolkit Subpackage
This is a subpackage for RagaAI Catalyst integration for observability.
For more information about the NVIDIA NeMo Agent Toolkit, please visit the [NeMo Agent Toolkit GitHub Repo](https://github.com/NVIDIA/NeMo-Agent-Toolkit).
| text/markdown | NVIDIA Corporation | null | NVIDIA Corporation | null | Apache-2.0 | ai, observability, ragaai catalyst | [
"Programming Language :: Python",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13"
] | [] | null | null | <3.14,>=3.11 | [] | [] | [] | [
"nvidia-nat-core==v1.5.0a20260221",
"nvidia-nat-opentelemetry==v1.5.0a20260221",
"ragaai-catalyst~=2.2",
"nvidia-nat-test==v1.5.0a20260221; extra == \"test\""
] | [] | [] | [] | [
"documentation, https://docs.nvidia.com/nemo/agent-toolkit/latest/",
"source, https://github.com/NVIDIA/NeMo-Agent-Toolkit"
] | twine/5.0.0 CPython/3.10.19 | 2026-02-21T10:01:06.020734 | nvidia_nat_ragaai-1.5.0a20260221-py3-none-any.whl | 55,710 | 18/44/97fa930d6456d52a65dc758a3a4708a117bf396a86a825c6f89cef7f7f8f/nvidia_nat_ragaai-1.5.0a20260221-py3-none-any.whl | py3 | bdist_wheel | null | false | 488ecb019b33dc6419eafdc093509d52 | 10a042e042ae89b3175dca3863fd223dfc72c3cb6db9e407eae8f12d01195ca3 | 184497fa930d6456d52a65dc758a3a4708a117bf396a86a825c6f89cef7f7f8f | null | [] | 73 |
2.4 | nvidia-nat-core | 1.5.0a20260221 | Core library for NVIDIA NeMo Agent Toolkit | <!--
SPDX-FileCopyrightText: Copyright (c) 2024-2026, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
SPDX-License-Identifier: Apache-2.0
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
-->

# NVIDIA NeMo Agent Toolkit
NeMo Agent Toolkit is a flexible library designed to seamlessly integrate your enterprise agents—regardless of framework—with various data sources and tools. By treating agents, tools, and agentic workflows as simple function calls, NeMo Agent Toolkit enables true composability: build once and reuse anywhere.
## Key Features
- [**Framework Agnostic:**](https://docs.nvidia.com/nemo/agent-toolkit/1.3/extend/plugins.html) Works with any agentic framework, so you can use your current technology stack without replatforming.
- [**Reusability:**](https://docs.nvidia.com/nemo/agent-toolkit/1.3/extend/sharing-components.html) Every agent, tool, or workflow can be combined and repurposed, allowing developers to leverage existing work in new scenarios.
- [**Rapid Development:**](https://docs.nvidia.com/nemo/agent-toolkit/1.3/tutorials/index.html) Start with a pre-built agent, tool, or workflow, and customize it to your needs.
- [**Profiling:**](https://docs.nvidia.com/nemo/agent-toolkit/1.3/workflows/profiler.html) Profile entire workflows down to the tool and agent level, track input/output tokens and timings, and identify bottlenecks.
- [**Observability:**](https://docs.nvidia.com/nemo/agent-toolkit/1.3/workflows/observe/observe-workflow-with-phoenix.html) Monitor and debug your workflows with any OpenTelemetry-compatible observability tool, with examples using [Phoenix](https://docs.nvidia.com/nemo/agent-toolkit/1.3/workflows/observe/observe-workflow-with-phoenix.html) and [W&B Weave](https://docs.nvidia.com/nemo/agent-toolkit/1.3/workflows/observe/observe-workflow-with-weave.html).
- [**Evaluation System:**](https://docs.nvidia.com/nemo/agent-toolkit/1.3/workflows/evaluate.html) Validate and maintain accuracy of agentic workflows with built-in evaluation tools.
- [**User Interface:**](https://docs.nvidia.com/nemo/agent-toolkit/1.3/quick-start/launching-ui.html) Use the NeMo Agent Toolkit UI chat interface to interact with your agents, visualize output, and debug workflows.
- [**MCP Compatibility:**](https://docs.nvidia.com/nemo/agent-toolkit/1.3/workflows/mcp/mcp-client.html) Compatible with Model Context Protocol (MCP), allowing tools served by MCP Servers to be used as NeMo Agent Toolkit functions.
With NeMo Agent Toolkit, you can move quickly, experiment freely, and ensure reliability across all your agent-driven projects.
## Links
* [Documentation](https://docs.nvidia.com/nemo/agent-toolkit/1.3/index.html): Explore the full documentation for NeMo Agent Toolkit.
## First time user?
If this is your first time using NeMo Agent Toolkit, it is recommended to install the latest version from the [source repository](https://github.com/NVIDIA/NeMo-Agent-Toolkit?tab=readme-ov-file#quick-start) on GitHub. This package is intended for users who are familiar with NeMo Agent Toolkit applications and need to add NeMo Agent Toolkit as a dependency to their project.
## Feedback
We would love to hear from you! Please file an issue on [GitHub](https://github.com/NVIDIA/NeMo-Agent-Toolkit/issues) if you have any feedback or feature requests.
## Acknowledgements
We would like to thank the following open source projects that made NeMo Agent Toolkit possible:
- [CrewAI](https://github.com/crewAIInc/crewAI)
- [FastAPI](https://github.com/tiangolo/fastapi)
- [LangChain](https://github.com/langchain-ai/langchain)
- [Llama-Index](https://github.com/run-llama/llama_index)
- [Mem0ai](https://github.com/mem0ai/mem0)
- [Ragas](https://github.com/explodinggradients/ragas)
- [Semantic Kernel](https://github.com/microsoft/semantic-kernel)
- [uv](https://github.com/astral-sh/uv)
| text/markdown | NVIDIA Corporation | null | NVIDIA Corporation | null | Apache-2.0 | ai, rag, agents | [
"Programming Language :: Python",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13"
] | [] | null | null | <3.14,>=3.11 | [] | [] | [] | [
"aioboto3>=11.0.0",
"authlib<2.0.0,>=1.6.5",
"click~=8.1",
"colorama<1.0.0,>=0.4.6",
"expandvars~=1.0",
"fastapi~=0.119",
"flask>=3.0.0",
"httpx~=0.27",
"jinja2~=3.1",
"jsonpath-ng~=1.7",
"nest-asyncio2~=1.7",
"networkx~=3.4",
"numpy~=2.3",
"openinference-semantic-conventions<1.0.0,>=0.1.14",
"optuna~=4.4",
"pandas~=2.2",
"pip>=24.3.1",
"pkce==1.0.3",
"pkginfo~=1.12",
"platformdirs~=4.3",
"plotly~=6.0",
"pydantic~=2.11",
"pyjwt~=2.11",
"pymilvus~=2.6",
"python-dotenv<2.0.0,>=1.1.1",
"python-multipart>=0.0.21",
"PyYAML~=6.0",
"rich~=14.0",
"tabulate~=0.9",
"uvicorn[standard]~=0.38",
"wikipedia~=1.4",
"urllib3<3.0.0,>=2.6.3",
"aiosqlite~=0.21; extra == \"async-endpoints\"",
"dask~=2026.1; extra == \"async-endpoints\"",
"distributed~=2026.1; extra == \"async-endpoints\"",
"sqlalchemy[asyncio]~=2.0; extra == \"async-endpoints\"",
"gunicorn~=23.0; extra == \"gunicorn\"",
"presidio-analyzer; extra == \"pii-defense\"",
"presidio-anonymizer; extra == \"pii-defense\"",
"nvidia-nat-eval[profiling]==v1.5.0a20260221; extra == \"test\"",
"nvidia-nat-test==v1.5.0a20260221; extra == \"test\""
] | [] | [] | [] | [
"documentation, https://docs.nvidia.com/nemo/agent-toolkit/latest/",
"source, https://github.com/NVIDIA/NeMo-Agent-Toolkit"
] | twine/5.0.0 CPython/3.10.19 | 2026-02-21T10:00:48.615004 | nvidia_nat_core-1.5.0a20260221-py3-none-any.whl | 762,188 | de/8f/fac2cdbef11efed51285fa974c5244164dd1fdfbfce6d0923040aac9ea1a/nvidia_nat_core-1.5.0a20260221-py3-none-any.whl | py3 | bdist_wheel | null | false | 76b739ad478bd4e2f9791e0be3df46c4 | 412f48add391e3f0840cffac79d5890cd51d305083780c416e9ec79787454a3d | de8ffac2cdbef11efed51285fa974c5244164dd1fdfbfce6d0923040aac9ea1a | null | [] | 77 |
2.4 | fluxqueue | 0.2.1 | A lightweight, resource-efficient, high-throughput task queue for Python, written in Rust. | <p align="center">
<img src="https://fluxqueue.ccxlv.dev/images/logo_full.png" alt="FluxQueue" />
</p>
<div align="center">
**A lightweight, resource-efficient, high-throughput task queue for Python, written in Rust.**
[](https://github.com/ccxlv/fluxqueue/actions/workflows/tests.yml)

[Documentation](https://fluxqueue.ccxlv.dev) · [Benchmarks](https://github.com/CCXLV/fluxqueue-benchmarks)
</div>
---
## Overview
FluxQueue is a task queue for Python that gets out of your way. The Rust core keeps overhead low, dependencies minimal, and, most importantly, memory usage small. Tasks are managed through Redis.
## Key Features
- **Lightweight**: Minimal dependencies, low memory footprint, and low CPU usage even at high concurrency
- **High Throughput**: Rust-powered core for efficient task enqueueing and processing
- **Redis-Backed**: Reliable task persistence and distribution
- **Async & Sync**: Support for both synchronous and asynchronous Python functions
- **Retry Mechanism**: Built-in automatic retry with configurable limits
- **Multiple Queues**: Organize tasks across different queues
- **Simple API**: Decorator-based interface that feels natural in Python
- **Type Safe**: Full type hints support
## Requirements
- Python 3.11, 3.12, 3.13 or 3.14
- Redis server
## Installation
```bash
pip install fluxqueue[cli]
```
## How to use FluxQueue
FluxQueue can be used very easily. For example, here's what `myapp/tasks.py` could look like:
```py
from fluxqueue import FluxQueue
fluxqueue = FluxQueue()
@fluxqueue.task()
def send_email(to: str, subject: str, body: str):
print(f"Sending email to {to}")
print(f"Subject: {subject}")
print(f"Body: {body}")
```
### Enqueue Tasks
Call the decorated function to enqueue it. The function returns immediately; the actual work happens in the background:
```python
send_email("user@example.com", "Hello", "This is a test email")
```
The task is now in the queue, waiting to be processed by a worker.
### Async Tasks
FluxQueue supports async functions too. Just define an async function and use the same decorator:
```python
@fluxqueue.task()
async def process_data(data: dict):
await some_async_operation(data)
```
Running the async function in an async context will also enqueue the task.
## Installing the worker
For the tasks to be executed, you need to run a FluxQueue worker. You need to install the worker on your system; the recommended way of doing that is using `fluxqueue-worker`:
```bash
fluxqueue worker install
```
It picks the latest released worker for your Python version and installs it. You can also pass the `--version` argument to specify the version you want to install.
## Running the worker
Running the worker is straightforward:
```bash
fluxqueue start --tasks-module-path myapp/tasks
```
For the worker to discover your tasks, you need to pass the `--tasks-module-path` argument with the path to the tasks module. For more information, please view the [defining and exposing tasks](https://fluxqueue.ccxlv.dev/tutorial/defininig_and_exposing_tasks) documentation.
## License
FluxQueue is licensed under the Apache-2.0 license. See [LICENSE](LICENSE) for details.
| text/markdown; charset=UTF-8; variant=GFM | null | Giorgi Merebashvili <mereba2627@gmail.com> | null | null | null | task-queue, queue, redis, async, background-tasks, worker, rust, fluxqueue | [
"Development Status :: 5 - Production/Stable",
"Intended Audience :: Developers",
"Programming Language :: Python",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3 :: Only",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: 3.14",
"Programming Language :: Rust",
"Operating System :: OS Independent",
"Topic :: Software Development :: Libraries",
"Topic :: System :: Distributed Computing"
] | [] | https://fluxqueue.ccxlv.dev | null | <3.15,>=3.11 | [] | [] | [] | [
"maturin; extra == \"build\"",
"fluxqueue-cli>=0.1.0b4; extra == \"cli\"",
"ruff; extra == \"dev\"",
"mkdocs-material; extra == \"docs\"",
"mkdocstrings; extra == \"docs\"",
"mkdocstrings-python; extra == \"docs\"",
"pytest-asyncio; extra == \"tests\"",
"python-dotenv; extra == \"tests\"",
"redis; extra == \"tests\""
] | [] | [] | [] | [
"Changelog, https://github.com/CCXLV/fluxqueue/releases",
"Documentation, https://fluxqueue.ccxlv.dev",
"Homepage, https://fluxqueue.ccxlv.dev",
"Issues, https://github.com/CCXLV/fluxqueue/issues",
"Repository, https://github.com/CCXLV/fluxqueue"
] | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true} | 2026-02-21T10:00:46.503191 | fluxqueue-0.2.1-cp312-cp312-win_amd64.whl | 753,430 | bf/1d/f11ab6025173c0487a9b08858e002c9019d68285b5958ff9876b49c9b37a/fluxqueue-0.2.1-cp312-cp312-win_amd64.whl | cp312 | bdist_wheel | null | false | 2cbb291fb76a005e833a921d34e5363a | 77e37841af9f0a594967fdf0e72faaf7adabd7d47809b28a795871507bdbb653 | bf1df11ab6025173c0487a9b08858e002c9019d68285b5958ff9876b49c9b37a | Apache-2.0 | [
"LICENSE"
] | 993 |
2.4 | nvidia-nat-adk | 1.5.0a20260221 | Subpackage for Google ADK integration in NeMo Agent Toolkit | <!--
SPDX-FileCopyrightText: Copyright (c) 2025-2026, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
SPDX-License-Identifier: Apache-2.0
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
-->

# NVIDIA NeMo Agent Toolkit — Google ADK Subpackage
Subpackage providing Google ADK integration for the NVIDIA NeMo Agent Toolkit.
| text/markdown | NVIDIA Corporation | null | NVIDIA Corporation | null | Apache-2.0 | ai, rag, agents | [
"Programming Language :: Python",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13"
] | [] | null | null | <3.14,>=3.11 | [] | [] | [] | [
"nvidia-nat-core==v1.5.0a20260221",
"google-adk~=1.18",
"litellm~=1.74",
"nvidia-nat-test==v1.5.0a20260221; extra == \"test\""
] | [] | [] | [] | [
"documentation, https://docs.nvidia.com/nemo/agent-toolkit/latest/",
"source, https://github.com/NVIDIA/NeMo-Agent-Toolkit"
] | twine/5.0.0 CPython/3.10.19 | 2026-02-21T10:00:31.371741 | nvidia_nat_adk-1.5.0a20260221-py3-none-any.whl | 59,085 | 1f/f2/52c4caea5991cc5c09e566764185a592cfb167ac9a0d8e036c9c61b6e10b/nvidia_nat_adk-1.5.0a20260221-py3-none-any.whl | py3 | bdist_wheel | null | false | 33a09d1f60f495a67559079a87ec4782 | 163aada30fe087dbbb55f869e2be8d14586a81b633f7da9b1a37a0ba7e6d4458 | 1ff252c4caea5991cc5c09e566764185a592cfb167ac9a0d8e036c9c61b6e10b | null | [] | 69 |
2.4 | nvidia-nat-autogen | 1.5.0a20260221 | Subpackage for AutoGen integration in NeMo Agent Toolkit | <!--
SPDX-FileCopyrightText: Copyright (c) 2025-2026, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
SPDX-License-Identifier: Apache-2.0
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
-->

# NVIDIA NeMo Agent Toolkit Subpackage
This is a subpackage for [`Microsoft AutoGen`](https://github.com/microsoft/autogen) integration in NeMo Agent Toolkit.
For more information about the NVIDIA NeMo Agent Toolkit, please visit the [NeMo Agent Toolkit GitHub Repo](https://github.com/NVIDIA/NeMo-Agent-Toolkit).
| text/markdown | null | null | null | null | null | ai, agents, autogen, multi-agent | [
"Programming Language :: Python",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13"
] | [] | null | null | <3.14,>=3.11 | [] | [] | [] | [
"nvidia-nat-core==v1.5.0a20260221",
"autogen-agentchat~=0.7",
"autogen-core~=0.7",
"autogen-ext[anthropic,openai]~=0.7",
"nvidia-nat-test==v1.5.0a20260221; extra == \"test\""
] | [] | [] | [] | [
"documentation, https://docs.nvidia.com/nemo/agent-toolkit/latest/",
"source, https://github.com/NVIDIA/NeMo-Agent-Toolkit"
] | twine/5.0.0 CPython/3.10.19 | 2026-02-21T10:00:14.161841 | nvidia_nat_autogen-1.5.0a20260221-py3-none-any.whl | 61,559 | d2/23/48fe0533e68e93b8923c48d9ac93dba9c2a6361415adb9b060afbf98b8bd/nvidia_nat_autogen-1.5.0a20260221-py3-none-any.whl | py3 | bdist_wheel | null | false | 32ca05fc80a83c65d770b35bf05f4b55 | d0f43870798cf1398cfdb34d6486ff5910a0b4936b9f324013c9a13fa91389f2 | d22348fe0533e68e93b8923c48d9ac93dba9c2a6361415adb9b060afbf98b8bd | null | [] | 66 |
2.4 | nvidia-nat-nemo-customizer | 1.5.0a20260221 | Subpackage for NeMo Customizer integration in NeMo Agent Toolkit | <!--
SPDX-FileCopyrightText: Copyright (c) 2025-2026, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
SPDX-License-Identifier: Apache-2.0
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
-->

# NVIDIA NeMo Agent Toolkit Subpackage
This is a subpackage for NeMo Customizer integration in NeMo Agent Toolkit.
For more information about the NVIDIA NeMo Agent Toolkit, please visit the [NeMo Agent Toolkit GitHub Repo](https://github.com/NVIDIA/NeMo-Agent-Toolkit).
| text/markdown | NVIDIA Corporation | null | NVIDIA Corporation | null | Apache-2.0 | ai, finetuning | [
"Programming Language :: Python",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13"
] | [] | null | null | <3.14,>=3.11 | [] | [] | [] | [
"nvidia-nat-core==v1.5.0a20260221",
"huggingface-hub~=0.36",
"nemo-microservices~=1.4",
"nvidia-nat-test==v1.5.0a20260221; extra == \"test\""
] | [] | [] | [] | [
"documentation, https://docs.nvidia.com/nemo/agent-toolkit/latest/",
"source, https://github.com/NVIDIA/NeMo-Agent-Toolkit"
] | twine/5.0.0 CPython/3.10.19 | 2026-02-21T09:59:56.798825 | nvidia_nat_nemo_customizer-1.5.0a20260221-py3-none-any.whl | 75,240 | e6/98/2085a02e0041b9da1408dea034a0e64c68cac57dbd9519fa69bb97a8b7a2/nvidia_nat_nemo_customizer-1.5.0a20260221-py3-none-any.whl | py3 | bdist_wheel | null | false | c0d699782c19280d9b762e53883fc8df | d826c00eb1e0694849f33778cd676b76d7ed4f5a897560ca8faba5b2400d8c57 | e6982085a02e0041b9da1408dea034a0e64c68cac57dbd9519fa69bb97a8b7a2 | null | [] | 69 |
2.4 | nvidia-nat-weave | 1.5.0a20260221 | Subpackage for Weave integration in NeMo Agent Toolkit | <!--
SPDX-FileCopyrightText: Copyright (c) 2025-2026, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
SPDX-License-Identifier: Apache-2.0
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
-->

# NVIDIA NeMo Agent Toolkit Subpackage
This is a subpackage for Weights and Biases Weave integration for observability.
For more information about the NVIDIA NeMo Agent Toolkit, please visit the [NeMo Agent Toolkit GitHub Repo](https://github.com/NVIDIA/NeMo-Agent-Toolkit).
| text/markdown | NVIDIA Corporation | null | NVIDIA Corporation | null | Apache-2.0 | ai, observability, wandb, pii | [
"Programming Language :: Python",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13"
] | [] | null | null | <3.14,>=3.11 | [] | [] | [] | [
"nvidia-nat-core==v1.5.0a20260221",
"presidio-analyzer~=2.2",
"presidio-anonymizer~=2.2",
"weave==0.52.22",
"blis~=1.3",
"fickling<1.0.0,>=0.1.7",
"nvidia-nat-core[async_endpoints]==v1.5.0a20260221; extra == \"test\"",
"nvidia-nat-test==v1.5.0a20260221; extra == \"test\""
] | [] | [] | [] | [
"documentation, https://docs.nvidia.com/nemo/agent-toolkit/latest/",
"source, https://github.com/NVIDIA/NeMo-Agent-Toolkit"
] | twine/5.0.0 CPython/3.10.19 | 2026-02-21T09:59:21.967338 | nvidia_nat_weave-1.5.0a20260221-py3-none-any.whl | 56,566 | e3/b7/e2d4814b14766b38f2465e1e006bae211a4e52bf2db4f420285ca78b399f/nvidia_nat_weave-1.5.0a20260221-py3-none-any.whl | py3 | bdist_wheel | null | false | d3f2e4ba5788501411b4c189e10d7a5a | 4a0a6c19db7e0e7a41c33154ec4cd9ee7e6a53fbbaad5e02a5403178696173dd | e3b7e2d4814b14766b38f2465e1e006bae211a4e52bf2db4f420285ca78b399f | null | [] | 74 |
2.4 | gsheetstables | 3.0.1 | Access the Tables of your Google Sheets as Pandas Dataframes and write them to a database | # Google Sheets Tables extraction
Use Google Spreadsheet Tables (only Tables) as Pandas Dataframes or save them to a SQL database.
```shell
pip install gsheetstables
```
PyPi page: https://pypi.org/project/gsheetstables/
## Command Line tool
The tool does one thing and does it well: it makes database tables of all the
Google Sheets Tables (only Tables) found in the spreadsheet.
It works with any database; tested with **SQLite**, **MariaDB** and **PostgreSQL**.
Just make sure you have the correct SQLAlchemy driver installed. Simplest
example with SQLite:
```shell
gsheetstables2db -s 1zYR...tT8
```
This will create the SQLite database on file `tables.sqlite` with all tables
from GSheet `1zYR...tT8`.
Execute some SQL queries after (or before, with `--sql-pre`) the tables were loaded/created:
```shell
gsheetstables2db -s 1zYR...tT8 \
--table-prefix _raw_tables_ \
--sql-split-char § \
--sql-post "{% for table in tables %}create index if not exists idx_snapshot_{{table}} on _raw_tables_{{table}} (_gsheet_utc_timestamp) § create view if not exists {{table}} as select * from _raw_tables_{{table}} where _gsheet_utc_timestamp=(select max(_gsheet_utc_timestamp) from _raw_tables_{{table}}) § {% endfor %}"
```
Prepend “`mysheet_`” to all table names in DB, keep up to 6 snapshots of each table (after running it multiple times) and save a column with the row numbers that users see in GSpread:
```shell
gsheetstables2db -s 1zYR...tT8 \
--table-prefix mysheet_ \
--append \
--keep-snapshots 6 \
--row-numbers
```
Write it to a MariaDB/MySQL database accessible through local socket:
```shell
pip install mysql-connector-python
gsheetstables2db -s 1zYR...tT8 --db mariadb://localhost/marketing_db
```
### Run it regularly via `cron` or a `systemd` timer
I have the following in my personal `~/.config/systemd/user/gsheetstables.service`:
```shell
[Unit]
Description=Sync tables from Investment Dashboard Google Sheet into MariaDB
[Service]
Type=oneshot
ExecStart=%h/.local/bin/gsheetstables2db \
--sheet 1i…so \
--db mariadb://localhost/my_db \
--table-prefix raw__ \
--append \
--keep-snapshots 100 \
--sql-split-char § \
--sql-post "\
{% for table in tables %} \
CREATE INDEX IF NOT EXISTS idx_snapshot_{{table}} \
ON raw__{{table}} (_gsheet_utc_timestamp) § \
DROP VIEW IF EXISTS {{table}} § \
CREATE VIEW {{table}} AS \
SELECT * FROM raw__{{table}} \
WHERE _gsheet_utc_timestamp=( \
SELECT max(_gsheet_utc_timestamp) FROM raw__{{table}} \
) § \
{% endfor %} \
" \
--rename ' \
{ \
"table_1": { \
"Original column name": "new_name1", \
"Other original column name/tag": "new_name2" \
}, \
"table_2": { \
"Once more an original column name": "another_new_name1", \
"And again another original/nice column name": "new_name2" \
} \
} \
' \
--service-account gsheets-access@my-app.iam.gserviceaccount.com \
--service-account-private-key "MII…F4c="
```
And my personal `~/.config/systemd/user/gsheetstables.timer` contains:
```shell
[Unit]
Description=Run Google Sheets sync frequently
[Timer]
OnCalendar=*:0/20
[Install]
WantedBy=timers.target
```
Every 20 minutes, it will try to sync and update my DB tables with the Google Sheets Tables.
Database tables will only be updated if their corresponding GSheets Table has been modified.
I’m connecting to a local MariaDB server configured to accept authenticated connections via Unix socket (`mariadb://localhost/…`), which eliminates the need for passwords.
This single command contains everything that is needed for a successful run; no external config file is required.
The `…very long private key…` is computed and displayed to you when you run `gsheetstables2db -i SERVICE_ACCOUNT_FILE -vv`.
Grab that long string, use it in the command line, along with `--service-account`, and then you can discard the service account JSON file.
This command writes up to 100 versions of my GSheets Tables into my DB with prefix `raw__` and then creates views, without that prefix, exposing only the latest snapshot of each table.
Data consumers should use the views, not the raw tables. The raw tables contain the time-tagged history of changes to each table.
Test it before scheduling it:
```shell
systemctl --user daemon-reload;
systemctl --user start gsheetstables.service;
systemctl --user status gsheetstables.service
```
When you are happy with results in your DB, enable the scheduler:
```shell
systemctl --user enable --now gsheetstables.timer
```
## SQLAlchemy Drivers Reference
Here are SQLAlchemy URL examples along with drivers required for connectors (table provided by ChatGPT and then edited a bit):
| Database | Example SQLAlchemy URL | Driver / Package to install | Notes |
|--------|------------------------|-----------------------------|------|
| **MariaDB via local socket** | `mariadb://localhost/sales_db` | `dnf install python3-mysqlclient` or `pip install mysqlclient` | [Unix user must match a MariaDB user configured with unix_socket](https://mariadb.com/docs/server/security/user-account-management/authentication-from-mariadb-10-4) |
| **MariaDB with regular user** | `mariadb://dbuser:dbpass@mariadb.example.com:3306/sales_db` | `dnf install python3-mysqlclient` or `pip install mysqlclient` | Native MariaDB driver |
| **MariaDB (alt)** | `mysql+pymysql://dbuser:dbpass@mariadb.example.com:3306/sales_db?charset=utf8mb4` | `pip install pymysql` | Pure Python |
| **PostgreSQL via local socket** | `postgresql+psycopg:///analytics_db` (note the 3 slashes) | `dnf install python3-psycopg3 python3-sqlalchemy+postgresql` or `pip install psycopg[binary]` | Recommended |
| **PostgreSQL** | `postgresql+psycopg://dbuser:dbpass@postgres.example.com:5432/analytics_db` | `dnf install python3-psycopg3 python3-sqlalchemy+postgresql` or `pip install psycopg[binary]` | Recommended |
| **PostgreSQL (legacy)** | `postgresql+psycopg2://dbuser:dbpass@postgres.example.com:5432/analytics_db` | `pip install psycopg2-binary` | Legacy |
| **Oracle** | `oracle+oracledb://dbuser:dbpass@oracle.example.com:1521/?service_name=ORCLPDB1` | `pip install oracledb` | Thin mode (no Oracle Client) |
| **AWS Athena** | `awsathena+rest://AWS_ACCESS_KEY_ID:AWS_SECRET_ACCESS_KEY@athena.us-east-1.amazonaws.com:443/my_schema?s3_staging_dir=s3://my-athena-results/&work_group=primary` | `pip install sqlalchemy-athena` | Uses REST API |
| **Databricks SQL** | `databricks+connector://token:dapiXXXXXXXXXXXXXXXX@adb-123456789012.3.azuredatabricks.net:443/default?http_path=/sql/1.0/warehouses/abc123` | `pip install databricks-sql-connector sqlalchemy-databricks` | Token-based auth |
## API Usage
Initialize and bring all tables (only tables) from a Google Sheet:
```python
import gsheetstables
account_file = "account.json"
gsheetid = "1zYR7Hlo7EtmY6...tT8"
tables = gsheetstables.GSheetsTables(
gsheetid = gsheetid,
service_account_file = account_file,
slugify = True
)
```
This is done very efficiently, with exactly 2 calls to Google’s API: one for table discovery and a second to retrieve all table data at once.
See below for how to get the service account file.
Tables retrieved:
```python
>>> tables.tables
[
'products',
'clients',
'sales'
]
```
Use the tables as Pandas Dataframes.
```python
tables.t('products')
```
| ID | Name | Price |
|----|-------------|-------|
| 1 | Laptop | 999.99 |
| 2 | Smartphone | 699.00 |
| 3 | Headphones | 149.50 |
| 4 | Keyboard | 89.90 |
Sheet rows that are completely empty will be removed from the resulting dataframe.
But the index will always match the Google Sheet row number as seen by
spreadsheet users. So you can use the `loc` method to get a specific sheet row
number:
```python
tables.t('products').loc[1034]
```
Another example using date and time columns:
```python
tables.t('clients')
```
| ID | Name | birthdate | affiliated |
|----|---------------|------------------------------------|----------------------------------------|
| 1 | Alice Silva | 1990-05-12T00:00:00-03:00 | 2021-03-15T10:45:00-03:00 |
| 2 | Bruno Costa | 1985-11-23T00:00:00-03:00 | 2019-08-02T14:20:00-03:00 |
| 3 | Carla Mendes | 1998-02-07T00:00:00-03:00 | 2022-01-10T09:00:00-03:00 |
| 4 | Daniel Rocha | 1976-09-30T00:00:00-03:00 | 2015-06-25T16:35:00-03:00 |
Notice that Google Sheets Table columns of type `DATE` (which may also contain time) will be converted to `pandas.Timestamp`s with the spreadsheet timezone attached, aiming at minimal data loss.
If you want just naive dates, as they are probably formatted in your sheets, use Pandas like this:
```python
(
tables.t('clients')
.assign(
birthdate = lambda table: table.birthdate.dt.normalize().dt.tz_localize(None),
affiliated = lambda table: table.affiliated.dt.normalize().dt.tz_localize(None),
)
)
```
| ID | Name | birthdate | affiliated |
|----|---------------|------------|-----------------|
| 1 | Alice Silva | 1990-05-12 | 2021-03-15 |
| 2 | Bruno Costa | 1985-11-23 | 2019-08-02 |
| 3 | Carla Mendes | 1998-02-07 | 2022-01-10 |
| 4 | Daniel Rocha | 1976-09-30 | 2015-06-25 |
Remember that the complete concept of universal and portable Time always includes date, time and timezone.
Displaying just the date is an abbreviation that assumes interpretation by the reader.
Information that seems to contain just a date is actually stored as the starting midnight of that day, in the timezone of the spreadsheet.
If that date describes a business transaction, it probably didn't happen at that moment, but more likely closer to the middle of the day.
Your spreadsheet should display timestamps as date and time to reduce ambiguity.
An example of ambiguity is Alice's birthday as it is actually stored by your spreadsheet: **1990-05-12T00:00:00-03:00** (not just `1990-05-12` as the spreadsheet formatting shows).
This timestamp falls on a different day in other timezones; for example, it is the same moment in Time as **1990-05-11T23:00:00-04:00** (late night of the previous day in another timezone).
If you hide time and timezone from users, especially the ones who input data, you increase the chance of ambiguity.
Data processing must always, ALWAYS, consider and handle time and timezone.
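The same-moment equivalence described above can be checked with the standard library alone (a minimal sketch using `datetime`, independent of this package):

```python
from datetime import datetime, timedelta, timezone

# Alice's birthdate as the spreadsheet actually stores it: midnight, UTC-3
stored = datetime(1990, 5, 12, 0, 0, tzinfo=timezone(timedelta(hours=-3)))

# The very same moment viewed from UTC-4 falls on the previous day
same_moment = stored.astimezone(timezone(timedelta(hours=-4)))
print(same_moment.isoformat())  # 1990-05-11T23:00:00-04:00
print(stored == same_moment)    # True: equal as moments in Time
```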
## Column names normalization
People that edit spreadsheets can get creative when naming columns.
Pass `slugify=True` (the default) to:
- transliterate accents and international characters with unidecode
- convert spaces, `/`, `:` to `_`
- lowercase all characters
So a column named `Column with strange chars/letters` will become `column_with_strange_chars_letters`.
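A rough approximation of this normalization in plain Python (illustrative only; the library additionally transliterates accents with unidecode, which this sketch omits):

```python
import re

def slugify_column(name: str) -> str:
    # Lowercase, then collapse spaces, '/' and ':' into underscores
    return re.sub(r"[ /:]+", "_", name.lower())

print(slugify_column("Column with strange chars/letters"))
# column_with_strange_chars_letters
```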
Still pretty long and annoying for your later SQL, so in addition, you can pass a dict for custom column renaming as:
```python
tables = gsheetstables.GSheetsTables(
...
column_rename_map = {
"table_1": {
"Column with strange chars/letters": "short_name",
"Other crazy column name": "other_short_name",
},
"table_2": {
"Column with strange chars/letters": "short_name",
"Other crazy column name": "other_short_name",
}
},
...
)
```
Pass only the columns you want to rename.
Combine with `slugify=True` to have a complete service.
Your `column_rename_map` dict will have priority over slugification.
## What are Google Sheets Tables
[Tables feature was introduced in 2024-05](https://workspaceupdates.googleblog.com/2024/05/tables-in-google-sheets.html) and they look like this:

More than looks, Tables have structure:
- table names are unique
- columns have names
- columns have types as number, date, text, dropdown (kind of categories)
- cells have validation and can reference data in other tables and sheets
These are features that make data entry by humans less susceptible to errors, yet as easy and well known as editing a spreadsheet.
This Python module closes the gap, bringing all that nice and structured human-generated data back to the database or into your app.
## Get a Service Account file for authorization
1. Go to https://console.cloud.google.com/projectcreate, make sure you are under correct Google account and create a project named **My Project** (or reuse a previously existing project)
1. On the same page, **edit the Project ID to make it smaller and more meaningful** (or leave defaults); this will be part of an e-mail address that we’ll use later
1. **Activate** [Sheets API](https://console.cloud.google.com/apis/library/sheets.googleapis.com) and [Drive API](https://console.cloud.google.com/apis/library/drive.googleapis.com) (Drive is optional, just to get file modification time)
1. Go to https://console.cloud.google.com/apis/credentials, make sure you are in the correct project and select **Create Credentials → Service account**. This is like creating an operator user that will access your Google Spreadsheet; and as a user, it has an e-mail address that appears on the screen. Copy this e-mail address.
1. After the service account is created, go into its details and **create a keypair** (or upload the public part of an existing keypair).
1. **Download the JSON file** generated for this keypair, it contains the private part of the key, required to identify the program as your service account.
1. Go to the Google Sheet your program needs to extract tables from, **hit the Share button** on top right and **add the virtual e-mail address** of the service account you just created and copied. This is an e-mail address that looks like
_operator-one@my-project.iam.gserviceaccount.com_
| text/markdown | null | Avi Alkalay <avi@unix.sh> | null | null | null | null | [
"Programming Language :: Python :: 3",
"Operating System :: OS Independent",
"Development Status :: 5 - Production/Stable",
"Environment :: MacOS X",
"Environment :: Win32 (MS Windows)",
"Environment :: Console",
"Intended Audience :: System Administrators",
"Intended Audience :: End Users/Desktop",
"License :: OSI Approved :: GNU Library or Lesser General Public License (LGPL)",
"Operating System :: MacOS",
"Operating System :: Microsoft :: Windows",
"Operating System :: POSIX",
"Operating System :: Unix",
"Topic :: System :: Archiving :: Backup",
"Topic :: System :: Recovery Tools"
] | [] | null | null | >=3.7 | [] | [] | [] | [
"Unidecode",
"dotmap",
"pandas",
"google-auth-oauthlib",
"google-api-python-client",
"Jinja2",
"SQLAlchemy"
] | [] | [] | [] | [
"Homepage, https://github.com/avibrazil/autorsync",
"Source, https://github.com/avibrazil/autorsync",
"Issues, https://github.com/avibrazil/autorsync/issues/new/choose",
"Pypi, https://pypi.org/project/auto-remote-sync"
] | twine/6.2.0 CPython/3.14.2 | 2026-02-21T09:59:17.331668 | gsheetstables-3.0.1.tar.gz | 33,020 | 42/be/047331617469cfbb0b44b1a125fba3cc3cdb339f6fdfe85dd8d80617090a/gsheetstables-3.0.1.tar.gz | source | sdist | null | false | 378970d0e72fa6696ec3b0c4c22b8bf4 | ae836cb985bdefdab38f6c37cbc3d30a12bbe1ad5a6bbda4f0eb14f81e792aea | 42be047331617469cfbb0b44b1a125fba3cc3cdb339f6fdfe85dd8d80617090a | null | [
"LICENSE"
] | 233 |
2.4 | gabay | 0.1.7 | A self-hosted, Telegram-first AI assistant for productivity. | # Gabay AI Assistant
Gabay is a self-hosted, Docker-based AI assistant designed as a Telegram-first productivity tool. It acts as a unified gateway allowing users to manage Gmail, Google Drive, Notion, and Meta (Facebook/Instagram) properties autonomously.
## Features
- **Daily Briefing (`/brief`)**: Scans unread emails and Facebook/Instagram notifications to provide a prioritized, LLM-summarized update.
- **Content Reading (`/read`)**: Fetches unread emails and recent Notion pages (try "read my emails" or "what's new in Notion?").
- **Universal Search (`/search`)**: Deep-search across Google Drive and Notion directly from Telegram.
- **Seamless Saving**: Reply to any file or PDF with `/save` to upload it to Drive or save to Notion.
- **Agentic Intent**: Leverages Groq (Llama 3) to understand natural language requests.
## Quick Install
Gabay is now available as a Python package. You can install it directly via pip:
```bash
pip install gabay
```
Then run the setup wizard:
```bash
gabay config
```
## Setup Instructions (Docker)
Gabay uses both a **Bot** (commands) and a **Userbot** (acting as you).
### 1. Telegram Setup
1. **Bot**: Create a bot via [@BotFather](https://t.me/botfather) to get your `TELEGRAM_BOT_TOKEN`.
2. **API Credentials**: Go to [my.telegram.org](https://my.telegram.org/auth), log in, and create an "App" to get your `TELEGRAM_API_ID` and `TELEGRAM_API_HASH`.
3. **Phone Number**: Your `TELEGRAM_PHONE` is required to authenticate the Userbot session.
### 2. Google Cloud Setup (Gmail/Drive)
1. Go to the [Google Cloud Console](https://console.cloud.google.com/).
2. **Enable APIs**: Enable both the **Gmail API** and **Google Drive API**.
3. **OAuth Consent Screen**:
- Set up an "External" consent screen.
- Under **Test Users**, add your own email address.
4. **Create Credentials**:
- Create an **OAuth client ID** (Web application).
- Add `http://localhost:8000/auth/google/callback` to **Authorized redirect URIs**.
5. Get your **Client ID** and **Client Secret**.
### 3. Notion Setup
1. Create an **Internal Integration**: Go to [Notion My Integrations](https://www.notion.com/my-integrations).
2. Get your `NOTION_API_KEY`.
3. **Share Database**: Open your Notion database, click "..." -> **Connect to**, and find your integration.
4. Copy the **Database ID** from the URL (the string after the `/` and before the `?`).
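If you prefer to extract the ID programmatically, a small hypothetical helper (not part of Gabay) could do it; the URL below is made up for illustration:

```python
from urllib.parse import urlparse

def notion_db_id(url: str) -> str:
    # The Database ID is the last path segment, before the '?'
    return urlparse(url).path.rsplit("/", 1)[-1]

print(notion_db_id(
    "https://www.notion.so/myworkspace/8935f9d140a04f95a872520c4f123456?v=abc"
))  # 8935f9d140a04f95a872520c4f123456
```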
### 4. Environment Variables
Create a `.env` file in the root directory:
```env
# Telegram
TELEGRAM_BOT_TOKEN="your_token"
TELEGRAM_API_ID=12345
TELEGRAM_API_HASH="your_hash"
TELEGRAM_PHONE="+123456789"
# LLM
GROQ_API_KEY="your_groq_key"
# Google OAuth
GOOGLE_CLIENT_ID="your_google_id"
GOOGLE_CLIENT_SECRET="your_google_secret"
# Notion
NOTION_API_KEY="your_notion_key"
NOTION_DATABASE_ID="your_database_id"
```
### 5. Running and Connecting
1. **Start**: `docker-compose up -d --build`
2. **Dashboard**: Send `/auth` to your bot on Telegram.
3. **Login**: Open the link and click **Connect Account** for each service.
## Use Cases
- "Give me a briefing on my emails today."
- "Read my latest Notion notes."
- "Search for 'Project Alpha' on Drive."
- "Send an email to john@example.com saying I'll be late."
| text/markdown | null | Gabay Team <hello@example.com> | null | null | MIT | telegram, assistant, ai, productivity, self-hosted | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Topic :: Communications :: Chat",
"Topic :: Scientific/Engineering :: Artificial Intelligence"
] | [] | null | null | >=3.9 | [] | [] | [] | [
"fastapi>=0.100.0",
"uvicorn>=0.23.0",
"pydantic>=2.0.0",
"pydantic-settings>=2.0.0",
"jinja2>=3.1.0",
"python-multipart>=0.0.6",
"python-telegram-bot>=20.0",
"telethon>=1.30.0",
"celery>=5.3.0",
"redis>=5.0.0",
"groq>=0.5.0",
"google-api-python-client>=2.0.0",
"google-auth-httplib2>=0.1.0",
"google-auth-oauthlib>=1.0.0",
"notion-client>=2.2.0",
"requests>=2.31.0",
"python-dotenv>=1.0.0",
"click>=8.0.0"
] | [] | [] | [] | [
"Homepage, https://github.com/yourusername/gabay",
"Documentation, https://github.com/yourusername/gabay#readme",
"Repository, https://github.com/yourusername/gabay.git",
"Issues, https://github.com/yourusername/gabay/issues"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-21T09:58:53.345300 | gabay-0.1.7.tar.gz | 46,129 | d5/79/c76b2c5b82e00b95d4604666faf5048facbb5bc44265f65dc5a8599ca2c1/gabay-0.1.7.tar.gz | source | sdist | null | false | 3b70982a6d939856f7912476d4e5ce43 | 2e8c727f68f469de1cabf0e58a98690c81b779f6ed657d1a85887ed56fade68a | d579c76b2c5b82e00b95d4604666faf5048facbb5bc44265f65dc5a8599ca2c1 | null | [] | 231 |
2.4 | vncalendar | 1.3.0 | Thư viện Python tính lịch Âm – Dương, Can Chi và ngày tốt xấu theo lịch Việt Nam | # vncalendar
---
## I. Introduction
**vncalendar** is a Python library designed for computing and querying the Vietnamese *Vạn Sự* (Traditional Almanac), including Gregorian–Lunar calendar conversion, *Can Chi*, auspicious / inauspicious days, traditional forbidden days, and the 24 Solar Terms (*Tiết Khí*).
The library is built based on:
- Traditional Vietnamese calendar
- Vietnamese lunar calendar algorithms by Hồ Ngọc Đức
---
## II. Main Features
### 2.1. Date and Time Processing (`Date`)
- Leap year detection
```python
from main import Date
print(Date.isLeap(2024)) # True
```
- Number of days in a month / day index in a year
```python
print(Date.dayMonth(2, 2026)) # 28
print(Date.dayYear(6, 2, 2026)) # 37 → 06/02/2026 is the 37th day of 2026
print(Date.weekYear(6, 2, 2026)) # ISO week number
```
- Date addition and subtraction
```python
print(Date.addDays(9, 2, 2026, 51)) # (1, 4, 2026)
print(Date.subtDays(1, 4, 2026, 51)) # (9, 2, 2026)
print(Date.dateDiff(9, 2, 2026, 1, 4, 2026)) # 51
```
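As a cross-check, the same arithmetic with the standard library's `datetime` module (independent of vncalendar):

```python
from datetime import date, timedelta

print(date(2026, 2, 9) + timedelta(days=51))       # 2026-04-01
print((date(2026, 4, 1) - date(2026, 2, 9)).days)  # 51
```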
- Day-of-week calculation
```python
print(Date.dayWeek(13, 4, 2025)) # Chủ nhật
```
- Exact age calculation
```python
print(Date.exactAge(3, 6, 1936)) # (89, 8, 6) → 89 years, 8 months, 6 days
```
---
### 2.2. Calendar Conversion (`SolarAndLunar`)
- Gregorian → Lunar
```python
from main import SolarAndLunar
print(SolarAndLunar.convertSolar2Lunar(20, 12, 2025)) # (1, 11, 2025, 0)
```
- Lunar → Gregorian
```python
print(SolarAndLunar.convertLunar2Solar(1, 11, 2025, 1)) # (20, 12, 2025)
# The 4th argument: 1 if the lunar month is a leap month, otherwise 0.
```
---
### 2.3. Heavenly Stems and Earthly Branches (`CanChi`)
- Year Can Chi
```python
from main import CanChi
print(CanChi.nam(2026)) # Bính Ngọ
```
- Month Can Chi
```python
print(CanChi.thang(11, 2025)) # Mậu Tý
```
- Day Can Chi (Gregorian calendar only)
```python
print(CanChi.ngay(19, 1, 2026)) # Quý Tị
```
---
### 2.4. Auspicious – Inauspicious Days (`TotXau`)
- Hoàng Đạo / Hắc Đạo determination
```python
from main import TotXau
print(TotXau.getHoangHacDao('Tị', 12)) # ('Ngọc Đường', 'Hoàng Đạo')
# Arg 1: Earthly Branch of the day → CanChi.ngay(d, m, y).split()[1]
# Arg 2: Lunar month of the date
```
- Traditional inauspicious days (Lunar calendar input)
```python
print(TotXau.isTamNuong(22, 12, 2025)) # True – Tam Nương
print(TotXau.isNguyetPha(27, 12, 2025)) # True – Nguyệt Phá
print(TotXau.isSatChu(8, 1, 2026)) # True – Sát Chủ
print(TotXau.isThoTu(1, 1, 2026)) # True – Thọ Tử
print(TotXau.isVangVong(17, 1, 2026)) # True – Vãng Vong
print(TotXau.isNguyetKy(23, 1, 2026)) # True – Nguyệt Kỵ
print(TotXau.isDaiBai(21, 3, 2026)) # True – Đại Bại
```
- Auspicious hours (Lunar calendar input)
```python
print(TotXau.getGioHoangDao(21, 3, 2026))
# ('Sửu', 'Thìn', 'Ngọ', 'Mùi', 'Tuất', 'Hợi')
```
- Conflicting ages for a given Gregorian date
```python
print(TotXau.getXung(7, 5, 2026))
# List of Can Chi combinations considered conflicting with the day
```
- Earthly Branch ↔ Solar time conversion
```python
print(TotXau.quyHoi('Sửu')) # (1, 3) → 1 a.m. – 3 a.m.
print(TotXau.gioAm(2)) # 'Sửu' → 2 a.m. belongs to Sửu hour
```
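Each Earthly Branch covers a fixed two-hour block, starting with Tý at 23:00, so the mapping behind `quyHoi` and `gioAm` can be sketched as follows (an illustrative reimplementation, not the library's code):

```python
# Each branch spans two hours; Tý covers 23h-1h, Sửu 1h-3h, and so on.
BRANCHES = ['Tý', 'Sửu', 'Dần', 'Mão', 'Thìn', 'Tị', 'Ngọ', 'Mùi', 'Thân', 'Dậu', 'Tuất', 'Hợi']

def hour_to_branch(hour: int) -> str:
    # Shift by one so 23h and 0h both land on Tý (index 0).
    return BRANCHES[((hour + 1) // 2) % 12]

def branch_to_hours(branch: str) -> tuple:
    start = (BRANCHES.index(branch) * 2 - 1) % 24
    return (start, (start + 2) % 24)

print(hour_to_branch(2))       # Sửu
print(branch_to_hours('Sửu'))  # (1, 3)
```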
- Cửu Diệu (Nine-Star) for a person
```python
print(TotXau.getCuuDieu(1990, 'm')) # Nine-Star name for a male born in lunar year 1990
```
---
### 2.5. Solar Terms (`TietKhi`)
- Solar term of a given date (Gregorian)
```python
from main import TietKhi
print(TietKhi.getTerm(7, 5, 2026)) # Lập Hạ
```
- Exact start time of a solar term
```python
print(TietKhi.getTermDate('Lập Hạ', 2026)) # 2026-05-05 18:36:00
```
- All 24 solar terms in a year
```python
print(TietKhi.getAllTerms(2026))
# {
# 'Lập Xuân': 'Ngày 04/02/2026, vào 02:57:00 (giờ Sửu)',
# 'Vũ Thủy': 'Ngày 18/02/2026, vào 22:46:00 (giờ Hợi)',
# ...
# }
```
---
### 2.6. Aggregated Vạn Sự Information (`VanSu`)
- Full almanac summary for a given date
```python
from main import VanSu
print(VanSu.getInfo(7, 5, 2026, 's'))
# 7/5/2026 THỨ NĂM Ngày 21/3/2026 ÂL
# Ngày Tân Tị - Tháng Nhâm Thìn - Năm Bính Ngọ
# Hành Kim - Sao Tỉnh
# Minh Đường Hoàng Đạo
# Đại Bại
# - Giờ tốt: Sửu (1h - 3h), Thìn (7h - 9h), ...
# - Thuộc tiết Lập Hạ.
```
Use `'s'` for Gregorian input, `'l'` for Lunar input.
- 12 Trực destiny reading based on lunar birth year
```python
print(VanSu.getPredict12Truc(1990)) # Four-line destiny poem in Vietnamese
```
- Cửu Diệu prediction text
```python
print(VanSu.getPredictCuuDieu(1990, 'm')) # Prediction string in Vietnamese
```
---
### 2.7. Person (`Person`)
Represents an individual and provides personal almanac readings based on their birth date.
```python
from main import Person
p = Person(15, 1, 1990, 'm')
# birth_day, birth_month, birth_year (Gregorian), gender ('m' / 'f')
```
| Method | Description |
|---|---|
| `p.truc12()` | 12 Trực destiny poem based on lunar birth year |
| `p.cuuDieu()` | Cửu Diệu prediction based on lunar birth year and gender |
| `p.age()` | Exact age as `(years, months, days, hours, minutes, seconds)` |
```python
print(p.truc12())
print(p.cuuDieu())
print(p.age()) # (36, 1, 6, ...)
```
---
## III. Library Structure
```
main.py
│
├── Date
│ ├── isLeap
│ ├── dayMonth
│ ├── dayYear
│ ├── weekYear
│ ├── convertDate2jdn
│ ├── convertjdn2Date
│ ├── addDays
│ ├── subtDays
│ ├── dateDiff
│ ├── exactAge
│ └── dayWeek
│
├── SolarAndLunar
│ ├── getNewMoonDay
│ ├── getSunLongitude
│ ├── getLunarMonth11
│ ├── getLeapMonthOffset
│ ├── convertSolar2Lunar
│ └── convertLunar2Solar
│
├── CanChi
│ ├── nam
│ ├── thang
│ └── ngay
│
├── TotXau
│ ├── getHoangHacDao
│ ├── isTamNuong
│ ├── isNguyetPha
│ ├── isSatChu
│ ├── isThoTu
│ ├── isVangVong
│ ├── isNguyetKy
│ ├── isDaiBai
│ ├── getCuuDieu
│ ├── getGioHoangDao
│ ├── getXung
│ ├── quyHoi
│ └── gioAm
│
├── TietKhi
│ ├── TERMS
│ ├── TERMS_LIST
│ ├── jdate
│ ├── getSunLongitude
│ ├── getDay
│ ├── getExactTime
│ ├── getTermDate
│ ├── getTerm
│ └── getAllTerms
│
├── VanSu
│ ├── getSao
│ ├── getHanh
│ ├── get28_Hanh
│ ├── getInfo
│ ├── getPredict12Truc
│ └── getPredictCuuDieu
│
└── Person
├── __init__(bday, bmon, byr, gen)
├── truc12
├── cuuDieu
└── age
```
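The `Date` helpers above revolve around Julian Day Numbers. A conversion of the kind `convertDate2jdn` performs can be sketched with the standard Fliegel–Van Flandern formula (illustrative only; the library's own implementation and signatures may differ):

```python
def date_to_jdn(day: int, month: int, year: int) -> int:
    # Standard Gregorian-date-to-JDN conversion (Fliegel & Van Flandern).
    a = (14 - month) // 12
    y = year + 4800 - a
    m = month + 12 * a - 3
    return day + (153 * m + 2) // 5 + 365 * y + y // 4 - y // 100 + y // 400 - 32045

jdn = date_to_jdn(1, 1, 2000)
print(jdn)            # 2451545
print((jdn + 1) % 7)  # 6 -> Saturday (0 = Sunday)
```

With JDNs in hand, date differences and weekday lookups reduce to integer arithmetic, which is why the `Date` class builds `addDays`, `dateDiff`, and `dayWeek` on top of them.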
---
## IV. Accuracy and Scope
- Default time zone: UTC+7 (Vietnam)
- Algorithms are based on astronomical calculations
- Suitable for:
- Calendar applications
- Vạn Sự lookup tools
- Traditional calendar research
---
## V. Notes
- The library has no third-party dependencies
- Results are intended for reference and follow traditional Vietnamese calendar practices
---
## VI. License and References
- Traditional Vạn Sự calendar (printed editions in Ho Chi Minh City, Vietnam)
- Lunar calendar algorithms by **Hồ Ngọc Đức**
https://lyso.vn/co-ban/thuat-toan-tinh-am-lich-ho-ngoc-duc-t2093/
*(Note: the linked page is in Vietnamese.)*
---
## VII. Author
Full name: Hoàng Đức Tùng
Email: hoangdtung2021@gmail.com
Facebook: https://www.facebook.com/hoangductung.keocon/
| text/markdown | null | Hoang Duc Tung <hoangdtung2021@gmail.com> | null | null | MIT | vietnamese-calendar, chinese-calendar, lunar-calendar, solar-calendar, can-chi, vn-calendar, am-duong | [
"Programming Language :: Python :: 3",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent"
] | [] | null | null | >=3.7 | [] | [] | [] | [] | [] | [] | [] | [
"Homepage, https://github.com/hoangdtung-2013/vncalendar",
"Repository, https://github.com/hoangdtung-2013/vncalendar",
"Issues, https://github.com/hoangdtung-2013/vncalendar/issues"
] | twine/6.2.0 CPython/3.13.1 | 2026-02-21T09:57:02.640559 | vncalendar-1.3.0.tar.gz | 22,286 | 9a/1b/ca6ad19c19d3a14791d56687935deb6e1c9edf2a82efb682c7dd6106a663/vncalendar-1.3.0.tar.gz | source | sdist | null | false | 4f9b1c5456bfec8df86f1ba05ab361d0 | 18773b0b05f8dbfb2cf1a1529847d877ab10ab953b4b9cd48ca3db85dd829d6e | 9a1bca6ad19c19d3a14791d56687935deb6e1c9edf2a82efb682c7dd6106a663 | null | [
"LICENSE"
] | 233 |
2.1 | semfind | 0.1.3 | Semantic grep for the terminal — search files by meaning, not pattern | # semfind
Semantic grep for the terminal. Search files by meaning, not pattern.
`grep` finds exact text matches. `semfind` finds lines that mean the same thing. Search your logs, notes, docs, or any text file using natural language — no regex needed.
## Why semfind?
Traditional grep fails when you don't know the exact wording. If your log says "container build failed due to missing environment variables" but you search for "deployment issue", grep finds nothing. semfind finds it instantly because it understands meaning.
**Built for AI agents.** Tools like [OpenClaw](https://github.com/openclaw/openclaw) and other AI agents need lightweight semantic search over local files — searching memory, history, and context without spinning up a full vector database. semfind is a single CLI command with auto-caching that agents can call directly from the shell.
**Also great for humans.** Search your markdown notes, project logs, documentation, or any text files by what you mean, not what you remember typing.
### Key features
- **No API keys** — runs 100% locally using [fastembed](https://github.com/qdrant/fastembed) (BAAI/bge-small-en-v1.5) + FAISS
- **Auto-caching** — indexes files on first search, caches embeddings, auto-invalidates when files change
- **Fast** — ~2s cold start, 14ms cached queries, 252MB RAM
- **Grep-like output** — colored results with file, line number, and similarity score
- **Zero config** — just `pip install semfind` and go
## Install
```bash
pip install semfind
```
## Usage
```bash
# Search a file
semfind "deployment issue" logs.md
# Search multiple files, top 3 results
semfind "permission error" memory/*.md -k 3
# Show 2 lines of context around each match
semfind "database migration" notes.md -n 2
# Force re-index (ignore cache)
semfind "query" file.md --reindex
# Set minimum similarity threshold
semfind "auth bug" *.md -m 0.5
```
### Output
```
memory/HISTORY.md:9: [2026-01-15 10:30] DEPLOYMENT: Fixed docker build... (0.796)
memory/HISTORY.md:3: [2026-01-17 09:15] FILE_PERMS: Agent couldn't... (0.689)
```
## How it works
1. On first search, each file's non-empty lines are embedded and cached in `~/.cache/semfind/`
2. Cache is keyed by file content hash — changes auto-invalidate
3. Your query is embedded and compared via FAISS inner-product search
4. Results are printed grep-style with similarity scores
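The content-hash cache key in step 2 can be sketched like this (a minimal sketch of the idea, not semfind's actual implementation; the key format and file naming here are assumptions):

```python
import hashlib
from pathlib import Path

CACHE_DIR = Path.home() / ".cache" / "semfind"  # cache location from the docs

def cache_key(file_path: str) -> str:
    # Key the cache by file content: any edit changes the digest,
    # which automatically invalidates the old entry.
    digest = hashlib.sha256(Path(file_path).read_bytes()).hexdigest()
    return digest[:16]  # truncated key format is an illustrative assumption

def cache_entry(file_path: str) -> Path:
    return CACHE_DIR / f"{cache_key(file_path)}.npy"
```

Because the key derives from content rather than the path or mtime, renames keep the cache warm while edits invalidate it.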
## Options
| Flag | Description | Default |
|------|-------------|---------|
| `-k, --top-k` | Number of results | 5 |
| `-n, --context` | Context lines before/after | 0 |
| `-m, --max-distance` | Minimum similarity score | none |
| `--reindex` | Force re-embed | false |
| `--model` | Embedding model | BAAI/bge-small-en-v1.5 |
| `--no-cache` | Skip cache | false |
| `--version` | Print version | |
## For OpenClaw and AI agents
semfind is built as a CLI tool specifically so that AI agents like [OpenClaw](https://github.com/openclaw/openclaw) can install and use it on demand.
### When to use semfind
Use `grep` first — it's instant and has zero overhead. But when grep fails because you don't know the exact wording, install semfind and search by meaning instead.
```bash
# grep found nothing useful
grep "deployment issue" memory/*.md # no results
# semfind understands meaning
semfind "deployment issue" memory/*.md -k 3
# memory/HISTORY.md:9: [2026-01-15] Fixed docker build with missing env vars (0.796)
```
### Why a CLI tool instead of a built-in agent tool?
A built-in tool would load the embedding model into memory on every agent session — even when semantic search isn't needed. That's ~250MB of RAM wasted on most runs.
As a CLI tool, semfind only loads the model when you actually call it. The process starts, runs the search, and exits — freeing all memory immediately. Agents that never need semantic search pay zero cost.
### Resource usage
| | First run | Cached runs |
|---|---|---|
| **Model download** | ~65MB download (~10-30s depending on connection) | Skipped (cached in `/tmp/fastembed_cache/`) |
| **Model disk storage** | ~65MB | Same |
| **RAM while running** | ~250MB (model + embeddings) | ~250MB |
| **RAM after exit** | 0 (process ends) | 0 |
| **Query latency** | ~2s (model load + embed + search) | ~14ms (embed + search) |
| **Embedding cache** | Written to `~/.cache/semfind/` | Read from cache, auto-invalidates on file change |
### Example agent use cases
- **Memory search** — searching history/memory files for relevant past context
- **Document retrieval** — finding relevant docs before answering user questions
- **Log analysis** — searching logs by describing the problem rather than knowing exact error strings
```bash
# Agent searching its memory
semfind "user asked about authentication" memory/*.md -k 3
# Searching project docs for context
semfind "how to configure database" docs/*.md -k 5
# Finding relevant logs without knowing exact error strings
semfind "something went wrong with file permissions" logs/*.md -k 3
```
## License
MIT
| text/markdown | PaperBoardOfficial | null | null | null | MIT | semantic-search, grep, embeddings, cli | [
"Development Status :: 3 - Alpha",
"Environment :: Console",
"Intended Audience :: Developers",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Topic :: Text Processing :: General",
"Topic :: Utilities"
] | [] | null | null | >=3.9 | [] | [] | [] | [
"fastembed>=0.7.0",
"faiss-cpu>=1.7.0",
"numpy>=1.24.0"
] | [] | [] | [] | [
"Homepage, https://github.com/PaperBoardOfficial/semfind"
] | twine/5.1.1 CPython/3.12.3 | 2026-02-21T09:56:58.955297 | semfind-0.1.3.tar.gz | 9,288 | 9f/10/0bbc32146e63d939f9830b61c9a2728795f6be613237a03e8daa168bd4ce/semfind-0.1.3.tar.gz | source | sdist | null | false | 28f4b38c0a83dc711397de994b49d66c | b1de21a52d418c4ffd6b5beaba0f12935f44b36d5218659e281a013381cf1836 | 9f100bbc32146e63d939f9830b61c9a2728795f6be613237a03e8daa168bd4ce | null | [] | 243 |
2.4 | jleechanorg-pr-automation | 0.2.153 | GitHub PR automation system with safety limits and actionable counting | # GitHub PR Automation System
**Autonomous PR fixing and code review automation for the jleechanorg organization**
## 🚀 Quick Start
```bash
# 1. Install the automation packages from PyPI
pip install jleechanorg-orchestration jleechanorg-pr-automation
# 2. Install cron entries (sets up automated workflows)
cd automation
./install_cron_entries.sh
```
> **Important:** Ensure `MINIMAX_API_KEY` is set in your environment for MiniMax CLI support.
## Overview
This automation system provides three core workflows:
1. **@codex Comment Agent** - Monitors PRs and posts intelligent automation comments
2. **FixPR Workflow** - Autonomously fixes merge conflicts and failing CI checks
3. **Codex GitHub Mentions** - Processes OpenAI Codex tasks via browser automation
All workflows use safety limits, commit tracking, and orchestrated AI agents to process PRs reliably.
---
## 🤖 Workflow 1: @codex Comment Agent
### What It Does
The @codex comment agent continuously monitors all open PRs across the jleechanorg organization and posts standardized Codex instruction comments when new commits are pushed. This enables AI assistants (@codex, @coderabbitai, @cursor) to review and improve PRs automatically.
### How It Works
```
┌─────────────────────────────────────────────────────────────┐
│ 1. DISCOVERY PHASE │
│ ───────────────────────────────────────────────────────────│
│ • Scan all repositories in jleechanorg organization │
│ • Find open PRs updated in last 24 hours │
│ • Filter to actionable PRs (new commits, not drafts) │
└─────────────────────────────────────────────────────────────┘
↓
┌─────────────────────────────────────────────────────────────┐
│ 2. COMMIT TRACKING │
│ ───────────────────────────────────────────────────────────│
│ • Check if PR has new commits since last processed │
│ • Skip if already commented on this commit SHA │
│ • Prevent duplicate comments on same commit │
└─────────────────────────────────────────────────────────────┘
↓
┌─────────────────────────────────────────────────────────────┐
│ 3. SAFETY CHECKS │
│ ───────────────────────────────────────────────────────────│
│ • Verify PR hasn't exceeded attempt limits (max 10) │
│ • Check global automation limit (max 50 runs) │
│ • Skip if safety limits reached │
└─────────────────────────────────────────────────────────────┘
↓
┌─────────────────────────────────────────────────────────────┐
│ 4. POST COMMENT │
│ ───────────────────────────────────────────────────────────│
│ • Post standardized @codex instruction comment │
│ • Include hidden commit marker: <!-- codex-automation- │
│ commit:abc123def --> │
│ • Record processing in commit history │
└─────────────────────────────────────────────────────────────┘
```
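The safety checks in step 3 amount to a dual counter. A minimal in-memory sketch (the real `AutomationSafetyManager` persists its counters to files; names here are illustrative):

```python
class SafetySketch:
    """Minimal sketch of the dual limits; illustrative only."""

    def __init__(self, pr_limit: int = 10, global_limit: int = 50):
        self.pr_limit = pr_limit          # max attempts per PR
        self.global_limit = global_limit  # max total automation runs
        self.pr_attempts = {}             # PR key -> attempt count
        self.global_runs = 0

    def allow(self, pr_key: str) -> bool:
        # Refuse if either limit is exhausted; otherwise record the run.
        if self.global_runs >= self.global_limit:
            return False
        if self.pr_attempts.get(pr_key, 0) >= self.pr_limit:
            return False
        self.pr_attempts[pr_key] = self.pr_attempts.get(pr_key, 0) + 1
        self.global_runs += 1
        return True
```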
### Comment Template
The agent posts this standardized instruction:
```markdown
@codex @coderabbitai @cursor [AI automation] Codex will implement
the code updates while coderabbitai and cursor focus on review
support. Please make the following changes to this PR.
Use your judgment to fix comments from everyone or explain why it should
not be fixed. Use /commentreply to post ONE consolidated summary with all
responses (avoids GitHub rate limits from individual replies). Address all
comments on this PR. Fix any failing tests and resolve merge conflicts.
Push any commits needed to remote so the PR is updated.
<!-- codex-automation-commit:abc123def456 -->
```
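Duplicate detection hinges on that hidden marker. A sketch of how the commit SHA could be extracted from existing comments (illustrative; `check_codex_comment.py` is the actual implementation):

```python
import re

# Matches the hidden marker embedded in automation comments.
MARKER_RE = re.compile(r"<!--\s*codex-automation-commit:([0-9a-f]+)\s*-->")

def already_commented(comment_bodies: list, head_sha: str) -> bool:
    # Skip the PR if any comment carries a marker for the current head commit.
    for body in comment_bodies:
        match = MARKER_RE.search(body)
        if match and head_sha.startswith(match.group(1)):
            return True
    return False
```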
### Tech Stack
| Component | Technology | Purpose |
|-----------|-----------|---------|
| **PR Discovery** | GitHub GraphQL API | Organization-wide PR search |
| **Commit Detection** | `check_codex_comment.py` | Prevents duplicate comments |
| **Comment Posting** | GitHub REST API (`gh pr comment`) | Posts automation instructions |
| **Safety Manager** | `AutomationSafetyManager` | File-based rate limiting |
| **Scheduling** | launchd/cron | Runs every 10 minutes |
### Usage
#### CLI Commands
```bash
# Monitor all repositories (posts comments to actionable PRs)
jleechanorg-pr-monitor
# Monitor specific repository
jleechanorg-pr-monitor --single-repo worldarchitect.ai
# Process specific PR
jleechanorg-pr-monitor --target-pr 123 --target-repo jleechanorg/worldarchitect.ai
# Dry run (discovery only, no comments)
jleechanorg-pr-monitor --dry-run
# Check safety status
automation-safety-cli status
# Clear safety data (resets limits)
automation-safety-cli clear
```
#### Slash Command Integration
```bash
# From Claude Code
/automation status # View automation state
/automation monitor # Process actionable PRs
/automation safety check # View safety limits
```
### Configuration
```bash
# Required
export GITHUB_TOKEN="your_github_token_here"
# Safety limits (defaults shown). Override via CLI flags (not environment variables):
# - jleechanorg-pr-monitor --pr-limit 10 --global-limit 50 --approval-hours 24
# - jleechanorg-pr-monitor --pr-automation-limit 10 --fix-comment-limit 10 --fixpr-limit 10
# Or persist via `automation-safety-cli` which writes `automation_safety_config.json` in the safety data dir.
# Optional - Email Notifications
export SMTP_SERVER="smtp.gmail.com"
export SMTP_PORT=587
export EMAIL_USER="your-email@gmail.com"
export EMAIL_PASS="your-app-password"
export EMAIL_TO="recipient@example.com"
```
### Key Features
- ✅ **Commit-based tracking** - Only comments when new commits appear
- ✅ **Hidden markers** - Uses HTML comments to track processed commits
- ✅ **Safety limits** - Prevents automation abuse with dual limits
- ✅ **Cross-repo support** - Monitors entire organization
- ✅ **Draft PR filtering** - Skips draft PRs automatically
---
## 🔧 Workflow 2: FixPR (Autonomous PR Fixing)
### What It Does
The FixPR workflow autonomously fixes PRs that have merge conflicts or failing CI checks by spawning AI agents in isolated workspaces. Each agent analyzes the PR, reproduces failures locally, applies fixes, and pushes updates.
### How It Works
```
┌─────────────────────────────────────────────────────────────┐
│ 1. PR DISCOVERY & FILTERING │
│ ───────────────────────────────────────────────────────────│
│ • Query PRs updated in last 24 hours │
│ • Filter to PRs with: │
│ - mergeable: CONFLICTING │
│ - failing CI checks (FAILURE, ERROR, TIMED_OUT) │
│ • Skip PRs without blockers │
└─────────────────────────────────────────────────────────────┘
↓
┌─────────────────────────────────────────────────────────────┐
│ 2. WORKSPACE ISOLATION │
│ ───────────────────────────────────────────────────────────│
│ • Clone base repository to /tmp/pr-orch-bases/ │
│ • Create worktree at /tmp/{repo}/pr-{number}-{branch} │
│ • Checkout PR branch in isolated workspace │
│ • Clean previous tmux sessions with matching names │
└─────────────────────────────────────────────────────────────┘
↓
┌─────────────────────────────────────────────────────────────┐
│ 3. AI AGENT DISPATCH │
│ ───────────────────────────────────────────────────────────│
│ • Create TaskDispatcher with workspace config │
│ • Spawn agent with: │
│ - CLI: claude/codex/gemini (configurable) │
│ - Task: Fix PR #{number} - resolve conflicts & tests │
│ - Workspace: Isolated worktree path │
│ • Agent runs autonomously in tmux session │
└─────────────────────────────────────────────────────────────┘
↓
┌─────────────────────────────────────────────────────────────┐
│ 4. AGENT WORKFLOW (Autonomous) │
│ ───────────────────────────────────────────────────────────│
│ • Checkout PR: gh pr checkout {pr_number} │
│ • Analyze failures: gh pr view --json statusCheckRollup │
│ • Reproduce locally: Run failing tests │
│ • Apply fixes: Code changes to resolve issues │
│ • Verify: Run full test suite │
│ • Commit & Push: git push origin {branch} │
│ • Write report: /tmp/orchestration_results/pr-{num}.json │
└─────────────────────────────────────────────────────────────┘
↓
┌─────────────────────────────────────────────────────────────┐
│ 5. VERIFICATION │
│ ───────────────────────────────────────────────────────────│
│ • Agent monitors GitHub CI for updated status │
│ • Verifies mergeable: MERGEABLE │
│ • Confirms all checks passing │
│ • Logs success/failure to results file │
└─────────────────────────────────────────────────────────────┘
```
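Step 4's result report can be sketched as a small JSON writer (illustrative; the field names are assumptions, not the agent's actual schema):

```python
import json
from pathlib import Path

RESULTS_DIR = Path("/tmp/orchestration_results")  # path from the workflow above

def write_report(pr_number: int, status: str, details: str) -> Path:
    # Field names are illustrative assumptions, not the agent's real schema.
    RESULTS_DIR.mkdir(parents=True, exist_ok=True)
    path = RESULTS_DIR / f"pr-{pr_number}.json"
    path.write_text(json.dumps({
        "pr": pr_number,
        "status": status,     # e.g. "fixed" or "failed"
        "details": details,
    }, indent=2))
    return path
```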
### Tech Stack
| Component | Technology | Purpose |
|-----------|-----------|---------|
| **PR Query** | GitHub GraphQL API | Find PRs with conflicts/failures |
| **CI Checks** | `gh pr checks` JSON output | Detect failing tests |
| **Worktree Isolation** | `git worktree add` | Isolated PR workspaces |
| **Agent Orchestration** | `TaskDispatcher` | Spawn AI agents in tmux |
| **AI CLI** | Claude/Codex/Gemini | Execute fixes autonomously |
| **Workspace Management** | `/tmp/{repo}/{pr-branch}/` | Clean isolated environments |
### Usage
#### CLI Commands
```bash
# Fix PRs with default settings (last 24h, max 5 PRs, Claude CLI)
python3 -m orchestrated_pr_runner
# Custom time window and PR limit
python3 -m orchestrated_pr_runner --cutoff-hours 48 --max-prs 10
# Use different AI CLI
python3 -m jleechanorg_pr_automation.orchestrated_pr_runner --agent-cli codex
python3 -m jleechanorg_pr_automation.orchestrated_pr_runner --agent-cli gemini
# List actionable PRs without fixing
jleechanorg-pr-monitor --fixpr --dry-run
```
#### Slash Command Integration
```bash
# Fix specific PR (from Claude Code)
/fixpr 123
# With auto-apply for safe fixes
/fixpr 123 --auto-apply
# Pattern detection mode (fixes similar issues)
/fixpr 123 --scope=pattern
```
#### Integration with PR Monitor
```bash
# Monitor and fix in one command
jleechanorg-pr-monitor --fixpr --max-prs 5 --cli-agent claude
```
### Agent CLI Options
The FixPR workflow supports multiple AI CLIs for autonomous fixing:
| CLI | Model | Best For | Configuration |
|-----|-------|----------|---------------|
| **claude** | Claude Sonnet 4.5 | Complex refactors, multi-file changes | Default |
| **codex** | OpenAI Codex | Code generation, boilerplate fixes | Requires `codex` binary in PATH |
| **gemini** | Gemini 3 Pro | Large codebases, pattern detection | `pip install google-gemini-cli` + `GOOGLE_API_KEY` |
**Usage:**
```bash
# Explicit CLI selection
python3 -m orchestrated_pr_runner --agent-cli gemini
# Via environment variable
export AGENT_CLI=codex
python3 -m orchestrated_pr_runner
```
### Workspace Structure
```
/tmp/
├── pr-orch-bases/ # Base clones (shared)
│ ├── worldarchitect.ai/
│ └── ai_universe/
└── {repo}/ # PR workspaces (isolated)
├── pr-123-fix-auth/
├── pr-456-merge-conflict/
└── pr-789-test-failures/
```
### Key Features
- ✅ **Autonomous fixing** - AI agents work independently
- ✅ **Worktree isolation** - Each PR gets clean workspace
- ✅ **Multi-CLI support** - Claude, Codex, or Gemini
- ✅ **Tmux sessions** - Long-running agents in background
- ✅ **Result tracking** - JSON reports in `/tmp/orchestration_results/`
- ✅ **Safety limits** - Respects global and per-PR limits
---
## 🤝 Workflow 3: Codex GitHub Mentions Automation
### What It Does
The Codex GitHub Mentions automation processes "GitHub Mention:" tasks from OpenAI's Codex interface via browser automation. When GitHub issues or PRs are mentioned in Codex conversations, they appear as actionable tasks that require manual approval to update the branch. This workflow automates clicking the "Update branch" button for each task.
### How It Works
```
┌─────────────────────────────────────────────────────────────┐
│ 1. AUTHENTICATION │
│ ───────────────────────────────────────────────────────────│
│ • Connect to existing Chrome via CDP (port 9222) │
│ • Load saved auth state from Storage State API │
│ • Skip login if cookies/localStorage already exist │
│ • Auth persisted to ~/.chatgpt_codex_auth_state.json │
└─────────────────────────────────────────────────────────────┘
↓
┌─────────────────────────────────────────────────────────────┐
│ 2. TASK DISCOVERY │
│ ───────────────────────────────────────────────────────────│
│ • Navigate to https://chatgpt.com/codex/tasks │
│ • Find all task links matching "GitHub Mention:" │
│ • Collect task URLs and metadata │
│ • Filter to first N tasks (default: 50) │
└─────────────────────────────────────────────────────────────┘
↓
┌─────────────────────────────────────────────────────────────┐
│ 3. TASK PROCESSING │
│ ───────────────────────────────────────────────────────────│
│ • Navigate to each task page │
│ • Wait for page to fully load │
│ • Search for "Update branch" button │
│ • Click button if present │
│ • Log success/failure for each task │
└─────────────────────────────────────────────────────────────┘
↓
┌─────────────────────────────────────────────────────────────┐
│ 4. STATE PERSISTENCE │
│ ───────────────────────────────────────────────────────────│
│ • Save cookies and localStorage to auth state file │
│ • Auth persists across runs (no manual login required) │
│ • Browser context reusable for future runs │
└─────────────────────────────────────────────────────────────┘
```
### Tech Stack
| Component | Technology | Purpose |
|-----------|-----------|---------|
| **Browser Automation** | Playwright (Python) | Controls Chrome via CDP |
| **CDP Connection** | Chrome DevTools Protocol | Connects to existing browser on port 9222 |
| **Auth Persistence** | Storage State API | Saves/restores cookies and localStorage |
| **Cloudflare Bypass** | Existing browser session | Avoids rate limiting by appearing as a normal user |
| **Task Selection** | CSS selector `a:has-text("GitHub Mention:")` | Finds GitHub PR tasks |
| **Scheduling** | cron | Runs hourly at :15 |
### Usage
#### Prerequisites
**Start Chrome with remote debugging:**
```bash
# Kill existing Chrome instances
killall "Google Chrome" 2>/dev/null
# Start Chrome with CDP enabled (custom profile to avoid conflicts)
/Applications/Google\ Chrome.app/Contents/MacOS/Google\ Chrome \
--remote-debugging-port=9222 \
--user-data-dir="$HOME/.chrome-cdp-debug" \
> /dev/null 2>&1 &
# Verify CDP is accessible
curl -s http://localhost:9222/json/version | python3 -m json.tool
# IMPORTANT: Log in to chatgpt.com manually in the Chrome window
# The automation will save your auth state for future runs
```
#### CLI Commands
```bash
# Run automation (connects to existing Chrome on port 9222)
python3 -m jleechanorg_pr_automation.openai_automation.codex_github_mentions \
--use-existing-browser \
--cdp-port 9222 \
--limit 50
# Debug mode with verbose logging
python3 -m jleechanorg_pr_automation.openai_automation.codex_github_mentions \
--use-existing-browser \
--cdp-port 9222 \
--limit 50 \
--debug
# Process only first 10 tasks
python3 -m jleechanorg_pr_automation.openai_automation.codex_github_mentions \
--use-existing-browser \
--cdp-port 9222 \
--limit 10
```
#### Cron Job Integration
The automation runs via cron hourly at :15, offset from the PR monitor schedule:
```bash
# Cron entry (installed via install_jleechanorg_automation.sh)
15 * * * * jleechanorg-pr-monitor --codex-update >> \
$HOME/Library/Logs/worldarchitect-automation/codex_automation.log 2>&1
```
**Note:** The `--codex-update` flag internally calls:
```bash
python3 -m jleechanorg_pr_automation.openai_automation.codex_github_mentions \
--use-existing-browser --cdp-host 127.0.0.1 --cdp-port 9222 --limit 50
```
**Self-healing:** If Chrome CDP is not reachable, `--codex-update` will auto-start Chrome
using the settings below before retrying.
#### Slash Command Integration
```bash
# From Claude Code (manual run)
python3 -m jleechanorg_pr_automation.openai_automation.codex_github_mentions \
--use-existing-browser --cdp-port 9222 --limit 50
```
### Configuration
```bash
# Required: Chrome with remote debugging on port 9222
# (See "Prerequisites" section above)
# Optional: Customize task limit (used by `jleechanorg-pr-monitor --codex-update`)
# Default: 50 (matches CLI default; max 200). Override to keep evidence/test runs fast.
# Use: `jleechanorg-pr-monitor --codex-update --codex-task-limit 200`
# Optional: Auth state file location
# Default: ~/.chatgpt_codex_auth_state.json
# Optional: CDP self-heal controls (used by jleechanorg-pr-monitor --codex-update)
export CODEX_CDP_AUTO_START=1 # default: 1 (auto-start Chrome if needed)
export CODEX_CDP_HOST=127.0.0.1 # default: 127.0.0.1
export CODEX_CDP_PORT=9222 # default: 9222
export CODEX_CDP_USER_DATA_DIR="$HOME/.chrome-automation-profile"
export CODEX_CDP_START_TIMEOUT=20 # seconds to wait for CDP after start
# Optional: custom launcher (script path). Port is appended as final arg.
export CODEX_CDP_START_SCRIPT="/path/to/start_chrome_debug.sh"
```
### Key Features
- ✅ **CDP-based automation** - Connects to existing Chrome to bypass Cloudflare
- ✅ **Persistent authentication** - Storage State API saves cookies/localStorage
- ✅ **No manual login** - Auth state persists across runs
- ✅ **Cloudflare bypass** - Appears as a normal user browsing, not a bot
- ✅ **Configurable limits** - Process 1-N tasks per run
- ✅ **Robust task detection** - Handles dynamic page loading
### Troubleshooting
**Issue**: Cloudflare rate limiting (0 tasks found)
```bash
# Solution: Use existing browser via CDP instead of launching new instances
# The CDP approach connects to your logged-in Chrome session, avoiding detection
# Verify Chrome is running with CDP enabled
curl -s http://localhost:9222/json/version
# Expected output:
# {
# "Browser": "Chrome/131.0.6778.265",
# "Protocol-Version": "1.3",
# "webSocketDebuggerUrl": "ws://localhost:9222/..."
# }
```
**Issue**: Auth state not persisting
```bash
# Check auth state file exists
ls -lh ~/.chatgpt_codex_auth_state.json
# Expected: ~5-6KB JSON file
# If missing: Log in manually to chatgpt.com in the CDP Chrome window
# The script will save auth state on first successful run
```
**Issue**: "Update branch" button not found
```bash
# Run with debug logging
python3 -m jleechanorg_pr_automation.openai_automation.codex_github_mentions \
--use-existing-browser \
--cdp-port 9222 \
--debug
# Check if tasks are actually "GitHub Mention:" type
# Only GitHub PR tasks have "Update branch" buttons
```
**Issue**: Chrome CDP connection fails
```bash
# Verify Chrome is running with correct flags
ps aux | grep "remote-debugging-port=9222"
# If not running, start Chrome with CDP:
killall "Google Chrome" 2>/dev/null
/Applications/Google\ Chrome.app/Contents/MacOS/Google\ Chrome \
--remote-debugging-port=9222 \
--user-data-dir="$HOME/.chrome-cdp-debug" &
```
**Issue**: Cron job failing with "unrecognized arguments: --codex-update"
```bash
# This happens when installed PyPI package is older than source code
# DO NOT use editable install (pip install -e .) for cron jobs - breaks in multi-worktree setups
# Option 1: Install from git (pin to tag or commit for reproducibility)
pip install "git+https://github.com/jleechanorg/worldarchitect.ai.git@<tag-or-commit>#subdirectory=automation"
# Option 2: Build and install from source (NOT editable - safe for cron)
cd automation && pip install .
# Option 3: Wait for PyPI package update (safest)
# Check your installed version:
pip show jleechanorg-pr-automation
# See latest releases on PyPI: https://pypi.org/project/jleechanorg-pr-automation/
# Verify flag exists
jleechanorg-pr-monitor --help | grep codex-update
```
---
## Installation
### Production / Cron Jobs / Normal Usage (RECOMMENDED)
```bash
# Basic installation from PyPI (stable, production-ready)
pip install jleechanorg-pr-automation
# With email notifications
pip install jleechanorg-pr-automation[email]
```
✅ **Use this for:**
- Cron jobs and scheduled automation
- Production deployments
- Multi-worktree development
- Any system where code stability matters
### Development Only (Active Code Changes)
```bash
# Install in editable mode (links to local source)
cd automation && pip install -e .
# With optional dependencies
cd automation && pip install -e .[email,dev]
```
⚠️ **ONLY use editable installs when:**
- You are actively modifying the package source code
- You need immediate reflection of code changes without reinstalling
- You are working in a single worktree
🚫 **DO NOT use editable installs for:**
- Cron jobs (will break when source directory changes)
- Production deployments (unstable source code)
- Multi-worktree setups (editable install points to single worktree)
- Scheduled automation (no control over which worktree is active)
### When PyPI Package is Outdated
If you need features not yet in PyPI:
```bash
# Option 1: Install from git (pin to tag or commit for reproducibility)
pip install "git+https://github.com/jleechanorg/worldarchitect.ai.git@<tag-or-commit>#subdirectory=automation"
# Option 2: Build and install from source (NOT editable)
cd automation
pip install . # Note: NOT pip install -e .
# Option 3: Wait for PyPI package update (safest)
# Check your installed version: pip show jleechanorg-pr-automation
# See latest releases on PyPI: https://pypi.org/project/jleechanorg-pr-automation/
```
### macOS Automation (Scheduled Monitoring)
```bash
# Install launchd service
./automation/install_jleechanorg_automation.sh
# Verify service
launchctl list | grep jleechanorg
# View logs
tail -f ~/Library/Logs/worldarchitect-automation/jleechanorg_pr_monitor.log
```
### Crontab Management
Use the `restore_crontab.sh` script to manage cron jobs for all three automation workflows:
```bash
# Dry run (preview what will be restored)
cd automation
./restore_crontab.sh --dry-run
# Interactive restore (prompts for confirmation)
./restore_crontab.sh
# Force restore (no prompts)
./restore_crontab.sh --force
# View current crontab
crontab -l
# Restore from backup (if needed)
crontab ~/.crontab_backup_YYYYMMDD_HHMMSS
```
**Standard Cron Jobs:**
| Schedule | Command | Purpose |
|----------|---------|---------|
| Every 2 hours (`:00`) | `jleechanorg-pr-monitor --max-prs 10` | Workflow 1: PR monitoring |
| Every hour (`:45`) | `jleechanorg-pr-monitor --fix-comment --cli-agent minimax,gemini,cursor --max-prs 3` | Workflow 2: Fix-comment automation |
| Every 30 minutes | `jleechanorg-pr-monitor --comment-validation --max-prs 10` | Workflow 3: Comment validation |
| Every hour (`:15`) | `jleechanorg-pr-monitor --codex-update --codex-task-limit 10` | Workflow 4: Codex update automation |
| Every hour (`:30`) | `jleechanorg-pr-monitor --codex-api --codex-apply-and-push --codex-task-limit 10` | Workflow 5: Codex API automation |
| Every 30 minutes | `jleechanorg-pr-monitor --fixpr --max-prs 10 --cli-agent minimax,gemini,cursor` | Workflow 6: Fix PRs autonomously |
---
## Safety System
All workflows use `AutomationSafetyManager` for rate limiting:
### Dual Limits
1. **Per-PR Limit**: Max 10 consecutive attempts per PR (internal safety)
2. **Global Limit**: Max 50 total automation runs per day
3. **Workflow-Specific Comment Limits**: Each workflow has its own per-PR limit on automation comments (limits are reserved even for workflows that do not currently post comments):
- **PR Automation**: 10 comments (default)
- **Fix-Comment**: 10 comments (default)
- **Codex Update**: 10 comments (default; does not currently post PR comments—limit reserved for future compatibility)
- **FixPR**: 10 comments (default)
These limits prevent one workflow from blocking others. Configure via CLI flags:
- `--pr-automation-limit`
- `--fix-comment-limit`
- `--fixpr-limit`
**Note**: Workflow comment counting is marker-based:
- PR automation comments: `codex-automation-commit`
- Fix-comment queued runs: `fix-comment-automation-run` (separate from completion marker)
- Fix-comment completion/review requests: `fix-comment-automation-commit`
- FixPR queued runs: `fixpr-automation-run`
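As a rough illustration, marker-based counting just scans comment bodies for the marker substring. The helper below is hypothetical, not part of the package's API:

```python
# Hypothetical helper illustrating marker-based comment counting;
# not part of the jleechanorg-pr-automation API.
def count_marker_comments(comment_bodies: list[str], marker: str) -> int:
    """Count PR comments whose body contains the given automation marker."""
    return sum(1 for body in comment_bodies if marker in body)

comments = [
    "<!-- codex-automation-commit: abc123 --> Automated update applied.",
    "Looks good to me!",
    "<!-- fixpr-automation-run --> Queued an autonomous fix.",
]
print(count_marker_comments(comments, "codex-automation-commit"))  # 1
```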
### Safety Data Storage
```
~/Library/Application Support/worldarchitect-automation/
├── automation_safety_data.json   # Attempt tracking
└── pr_history/                   # Commit tracking per repo
    ├── worldarchitect.ai/
    │   ├── main.json
    │   └── feature-branch.json
    └── ai_universe/
        └── develop.json
```
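A minimal sketch of how the dual limits could be evaluated against the tracked data. The JSON field names below (`global_runs`, `pr_attempts`) are assumptions inferred from the CLI status output, not the package's documented schema:

```python
import json

# Sketch only: field names are assumptions, not the package's actual schema.
def is_blocked(data: dict, pr_key: str, per_pr_limit: int = 10, global_limit: int = 50) -> bool:
    """Return True if either the global or the per-PR limit is exhausted."""
    if data.get("global_runs", 0) >= global_limit:
        return True
    return data.get("pr_attempts", {}).get(pr_key, 0) >= per_pr_limit

example = json.loads(
    '{"global_runs": 23, "pr_attempts": {"worldarchitect.ai-1634": 2, "ai_universe-42": 10}}'
)
print(is_blocked(example, "worldarchitect.ai-1634"))  # False
print(is_blocked(example, "ai_universe-42"))          # True
```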
### Safety Commands
```bash
# Check current status
automation-safety-cli status
# Example output:
# Global runs: 23/50
# Requires approval: False
# PR attempts:
# worldarchitect.ai-1634: 2/10 (OK)
# ai_universe-42: 10/10 (BLOCKED)
# Clear all data (reset limits)
automation-safety-cli clear
# Check specific PR
automation-safety-cli check-pr 123 --repo worldarchitect.ai
```
---
## Architecture Comparison
| Feature | @codex Comment Agent | FixPR Workflow | Codex GitHub Mentions |
|---------|---------------------|----------------|----------------------|
| **Trigger** | New commits on open PRs | Merge conflicts or failing checks | Codex tasks queue |
| **Action** | Posts instruction comment | Autonomously fixes code | Clicks "Update branch" buttons |
| **Execution** | Quick (API calls only) | Long-running (agent in tmux) | Medium (browser automation) |
| **Workspace** | None (comment-only) | Isolated git worktree | Chrome CDP session |
| **AI CLI** | N/A (GitHub API) | Claude/Codex/Gemini | N/A (Playwright) |
| **Output** | GitHub PR comment | Code commits + JSON report | Browser button clicks |
| **Schedule** | Every hour | Every 30 minutes | Every hour at :15 |
---
## Environment Variables
### Required
```bash
export GITHUB_TOKEN="ghp_xxxxxxxxxxxx"
```
### Optional
```bash
# Workspace configuration
export PR_AUTOMATION_WORKSPACE="/custom/path"
# Email notifications
export SMTP_SERVER="smtp.gmail.com"
export SMTP_PORT=587
export EMAIL_USER="your@email.com"
export EMAIL_PASS="app-password"
export EMAIL_TO="recipient@email.com"
# Agent CLI selection (for FixPR)
export AGENT_CLI="claude" # or "codex" or "gemini"
export GEMINI_MODEL="gemini-3-pro-preview"
```
---
## Development
### Running Tests
```bash
# Run all tests
pytest
# With coverage
pytest --cov=jleechanorg_pr_automation
# Specific test suite
pytest automation/jleechanorg_pr_automation/tests/test_pr_filtering_matrix.py
```
### Code Quality
```bash
# Format code
black .
ruff check .
# Type checking
mypy jleechanorg_pr_automation
```
---
## Troubleshooting
### @codex Comment Agent
**Issue**: No PRs discovered
```bash
# Check GitHub authentication
gh auth status
# Verify organization access
gh repo list jleechanorg --limit 5
```
**Issue**: Duplicate comments on same commit
```bash
# Check commit marker detection
python3 -c "from jleechanorg_pr_automation.check_codex_comment import decide; print(decide('<!-- codex-automation-commit:', '-->'))"
```
### FixPR Workflow
**Issue**: Worktree creation fails
```bash
# Clean stale worktrees
cd ~/worldarchitect.ai
git worktree prune
# Remove old workspace
rm -rf /tmp/worldarchitect.ai/pr-*
```
**Issue**: Agent not spawning
```bash
# Check tmux sessions
tmux ls
# View agent logs
ls -la /tmp/orchestration_results/
```
**Issue**: Wrong AI CLI used
```bash
# Verify CLI availability
which claude codex gemini
# Set explicit CLI
export AGENT_CLI=claude
python3 -m orchestrated_pr_runner
```
---
## Contributing
1. Fork the repository
2. Create a feature branch
3. Add tests for new functionality
4. Ensure all tests pass (`pytest`)
5. Format code (`black . && ruff check .`)
6. Submit a pull request
---
## License
MIT License - see LICENSE file for details.
---
## Changelog
### 0.2.21 (Latest)
- Refined Codex updater logging and update-branch click handling.
### 0.2.20
- Stabilized Codex updater tab reuse and recovery when pages close mid-run.
- Added login verification guard and extra diagnostics for tab switching.
### 0.2.19
- Fixed `cleanup()` indentation so `CodexGitHubMentionsAutomation` can release resources.
- Note: version 0.2.18 was intentionally skipped (no public release).
### 0.2.5
- Enhanced @codex comment detection with actor pattern matching
- Improved commit marker parsing for multiple AI assistants
- Added Gemini CLI support for FixPR workflow
### 0.1.1
- Fixed daily reset of global automation limit
- Added last reset timestamp tracking
### 0.1.0
- Initial release with @codex comment agent and FixPR workflow
- Comprehensive safety system with dual limits
- Cross-organization PR monitoring
| text/markdown | null | jleechan <jlee@jleechan.org> | null | null | null | github, automation, pr, pull-request, monitoring | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Topic :: Software Development :: Libraries :: Python Modules",
"Topic :: Software Development :: Version Control :: Git"
] | [] | null | null | >=3.11 | [] | [] | [] | [
"requests>=2.25.0",
"jleechanorg-orchestration>=0.1.18",
"playwright>=1.40.0",
"playwright-stealth>=1.0.0",
"aiohttp>=3.8.0",
"PyYAML>=6.0",
"keyring>=23.0.0; extra == \"email\"",
"pytest>=7.0.0; extra == \"dev\"",
"pytest-asyncio>=0.23.0; extra == \"dev\"",
"pytest-cov>=4.0.0; extra == \"dev\"",
"black>=22.0.0; extra == \"dev\"",
"ruff>=0.1.0; extra == \"dev\""
] | [] | [] | [] | [
"Homepage, https://github.com/jleechanorg/worldarchitect.ai",
"Repository, https://github.com/jleechanorg/worldarchitect.ai",
"Issues, https://github.com/jleechanorg/worldarchitect.ai/issues"
] | twine/6.2.0 CPython/3.11.10 | 2026-02-21T09:55:20.650431 | jleechanorg_pr_automation-0.2.153.tar.gz | 232,924 | 28/6f/df0f260c1f9bfa267e392a5320e3db00525f855171c773014b01da7094d5/jleechanorg_pr_automation-0.2.153.tar.gz | source | sdist | null | false | 45932bfd268b36c7c675f597339ee670 | 4b6489fb96265b7605696bf13e0817bd4c6d4c791c3f7d310ef86c8daf539497 | 286fdf0f260c1f9bfa267e392a5320e3db00525f855171c773014b01da7094d5 | MIT | [] | 229 |
2.4 | comfy-api-simplified | 1.5.0 | A simple way to schedule ComfyUI prompts with different parameters | # Comfy API Simplified
This is a small Python wrapper over the [ComfyUI](https://github.com/comfyanonymous/ComfyUI) API. It allows you to edit API-format ComfyUI workflows and queue them programmatically on an already running ComfyUI instance.
I use it to iterate over multiple prompts and key workflow parameters, generating hundreds of images overnight to cherry-pick from.
## Limitations
Only Basic auth and no auth (for a local server) are supported.
## Install
`pip3 install comfy_api_simplified`
## Usage prerequisites
### Prepare workflow
Give your nodes unique titles. By default, both the positive and negative prompts have the title "CLIP Text Encode (Prompt)"; rename them to distinct titles so you can change their parameters from Python.
### Enable "dev options"
In ComfyUI settings, check "Enable Dev mode Options":

### Download your workflow in API-format
<img src="misc/download.png" width="150">
### Have a running ComfyUI server
## Use
```python
from comfy_api_simplified import ComfyApiWrapper, ComfyWorkflowWrapper

# create an API wrapper using your ComfyUI URL (add user and password params if needed)
api = ComfyApiWrapper("http://127.0.0.1:8188/")

# create a workflow wrapper from the workflow you downloaded in API format
wf = ComfyWorkflowWrapper("workflow_api.json")

# change anything you like in your workflow
# the syntax is "Node Title", then "Input param name", then value
wf.set_node_param("Empty Latent Image", "batch_size", 2)
wf.set_node_param("negative", "text", "embedding:EasyNegative")

# queue your workflow and wait for completion
results = api.queue_and_wait_images(wf, "Save Image")
for filename, image_data in results.items():
    with open(filename, "wb+") as f:
        f.write(image_data)
```
More examples:
- Queue prompt and get result images [example](examples/queue_with_different_params.py).
- Queue many prompts and do not wait for completion [example](examples/queue_and_wait_result.py).
- Send input image and then call i2i workflow [example](examples/send_input_image.py).
## Additional info
There are some other approaches to use Python with ComfyUI out there.
If you are looking to convert your workflows to backend server code, check out [ComfyUI-to-Python-Extension](https://github.com/pydn/ComfyUI-to-Python-Extension)
If you are looking to use running ComfyUI as backend, but declare workflow in Python imperatively, check out [ComfyScript](https://github.com/Chaoses-Ib/ComfyScript/tree/main).
## Known issues
If you call `queue_and_wait_images` from within an async method, it may raise an error, since it already runs async code internally.
As a workaround, you can use
```python
import nest_asyncio
nest_asyncio.apply()
```
for now.
| text/markdown | null | Deimos Deimos <deimos.double@gmail.com> | null | null | null | null | [
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3"
] | [] | null | null | >=3.8 | [] | [] | [] | [
"requests",
"websockets"
] | [] | [] | [] | [
"Homepage, https://github.com/deimos-deimos/comfy_api_simplified",
"Issues, https://github.com/deimos-deimos/comfy_api_simplified/issues"
] | twine/6.2.0 CPython/3.11.13 | 2026-02-21T09:54:58.595520 | comfy_api_simplified-1.5.0.tar.gz | 1,251,974 | 59/3e/d90a2bd1ce8b2c899fafff38f26960f11aaff504ef4d005088def1465b77/comfy_api_simplified-1.5.0.tar.gz | source | sdist | null | false | 56b42aabceec6a55832c3a21231c6bc2 | 5190c729271bd1406507063001af7160b5bbce47b5764b4c38e457bd19b368b0 | 593ed90a2bd1ce8b2c899fafff38f26960f11aaff504ef4d005088def1465b77 | null | [
"LICENSE"
] | 223 |
2.4 | prlens | 0.1.5 | AI-powered GitHub PR code reviewer for teams | # PR Lens
[](https://github.com/codingdash/prlens/actions/workflows/ci.yml)
[](https://codecov.io/gh/codingdash/prlens)
[](https://pypi.org/project/prlens/)
[](https://pypi.org/project/prlens/)
AI-powered GitHub PR code reviewer for teams. Reviews each changed file against your coding guidelines using Claude or GPT-4o, posts inline comments on GitHub, and keeps a shared history of past reviews.
## Features
- **Codebase-aware** — injects co-change history, directory siblings, and paired test files into every review so the AI understands context beyond the diff
- **Language-agnostic** — context signals are based on git history and filename patterns, not import parsing; works identically for Python, Go, TypeScript, Ruby, Rust, and more
- **Team history** — stores review records in a shared GitHub Gist (zero infra) or local SQLite; query with `prlens history` and `prlens stats`
- **Zero onboarding friction** — `prlens init` creates `.prlens.yml`, provisions the team Gist, and generates the GitHub Actions workflow in one command
- **GitHub CLI fallback** — resolves your GitHub token from an existing `gh auth login` session; no PAT copy-paste required for local runs
- Supports **Anthropic Claude** and **OpenAI GPT-4o** as AI backends
- Bring your own guidelines via a Markdown file
- Posts inline review comments via the GitHub Review API
- Incremental reviews — only reviews files changed since the last review, not the whole PR again
- Prevents duplicate comments across repeated runs
---
## Packages
prlens is structured as a monorepo with three independently installable packages:
| Package | PyPI name | Purpose |
|---|---|---|
| `packages/core` | `prlens-core` | Review engine: providers, context gathering, orchestration |
| `packages/store` | `prlens-store` | Pluggable history: NoOpStore, GistStore, SQLiteStore |
| `packages/cli` | `prlens` | CLI: `review`, `init`, `history`, `stats` |
Installing `prlens` pulls in `prlens-core` and `prlens-store` automatically.
---
## Installation
```bash
pip install 'prlens[anthropic]' # Claude (default)
pip install 'prlens[openai]' # GPT-4o
pip install 'prlens[all]' # both providers
```
---
## Quick Start
### Option A: Zero-friction team setup (recommended)
```bash
pip install 'prlens[anthropic]'
prlens init
```
`prlens init` will:
1. Detect your GitHub repo from `git remote`
2. Create `.prlens.yml` with your chosen provider
3. Create a shared GitHub Gist for team review history (via `gh` CLI)
4. Generate `.github/workflows/prlens.yml` so CI runs automatically on every PR
After `init`, every developer on the team can run:
```bash
prlens review --repo owner/repo --pr 42
```
No PAT setup required if they have `gh` installed and are logged in.
### Option B: Manual quick start
```bash
export GITHUB_TOKEN=ghp_...
export ANTHROPIC_API_KEY=sk-ant-...
prlens review --repo owner/repo --pr 42 --model anthropic
```
Omit `--pr` to list open PRs and pick one interactively.
---
## GitHub Action
Add this workflow to automatically review every pull request:
```yaml
# .github/workflows/code-review.yml
name: Code Review

on:
  pull_request:
    types: [opened, synchronize, reopened]

jobs:
  review:
    runs-on: ubuntu-latest
    steps:
      - uses: codingdash/prlens/.github/actions/review@main
        with:
          model: anthropic
          github-token: ${{ secrets.GITHUB_TOKEN }}
          anthropic-api-key: ${{ secrets.ANTHROPIC_API_KEY }}
```
Or use `prlens init` to generate this file automatically with the correct secrets and permissions.
---
## CLI Reference
```
Usage: prlens [OPTIONS] COMMAND [ARGS]...
AI-powered GitHub PR code reviewer for teams.
Options:
--config TEXT Path to the configuration file. [default: .prlens.yml]
--help Show this message and exit.
Commands:
review Run AI review on a pull request
init Interactive team setup wizard
history Show past review records
stats Aggregated comment statistics
```
### `prlens review`
```
Usage: prlens review [OPTIONS]
Options:
--repo TEXT GitHub repository (owner/name). [required]
--pr INTEGER Pull request number. Omit to pick interactively.
--model [anthropic|openai] AI provider. Overrides config file.
--guidelines PATH Markdown guidelines file. Overrides config file.
--config TEXT Config file path. [default: .prlens.yml]
-y, --yes Skip confirmation prompts.
-s, --shadow Dry-run: print comments without posting to GitHub.
--full-review Review all files even if a previous review exists.
```
### `prlens init`
```
Usage: prlens init [OPTIONS]
Set up prlens for your team.
Creates .prlens.yml, optionally creates a shared GitHub Gist for team
history, and generates a GitHub Actions workflow.
Options:
--repo TEXT GitHub repository (owner/name). Auto-detected from git remote.
```
### `prlens history`
```
Usage: prlens history [OPTIONS]
Show past AI review records for a repository.
Options:
--repo TEXT GitHub repository (owner/name). [required]
--pr INTEGER Filter by PR number.
--limit INTEGER Maximum records to show. [default: 20]
```
### `prlens stats`
```
Usage: prlens stats [OPTIONS]
Show aggregated review statistics for a repository.
Options:
--repo TEXT GitHub repository (owner/name). [required]
--top INTEGER Number of top entries per category. [default: 10]
```
---
## Configuration
`.prlens.yml` in your repository root:
```yaml
# AI provider: anthropic | openai
model: anthropic
# Review history store: noop (default) | gist | sqlite
# store: gist
# gist_id: <gist-id> # created automatically by `prlens init`
# store_path: .prlens.db # only for store: sqlite
max_chars_per_file: 20000
batch_limit: 60
# Path to your team's coding guidelines (Markdown)
# guidelines: ./docs/guidelines.md
# Files/directories to skip — fnmatch globs or directory names
# exclude:
# - migrations/
# - "*.min.js"
# - "*.lock"
# Review draft PRs (skipped by default)
review_draft_prs: false
```
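For intuition, the `exclude` entries behave like `fnmatch` globs plus directory-name prefixes. The sketch below approximates that matching; prlens's exact semantics may differ:

```python
import fnmatch

# Illustrative only: approximates how fnmatch globs and directory entries
# could filter changed files. prlens's real matching logic may differ.
def is_excluded(path: str, patterns: list[str]) -> bool:
    for pattern in patterns:
        if pattern.endswith("/") and path.startswith(pattern):
            return True  # directory prefix, e.g. "migrations/"
        if fnmatch.fnmatch(path, pattern) or fnmatch.fnmatch(path.rsplit("/", 1)[-1], pattern):
            return True  # glob against full path or basename
    return False

excludes = ["migrations/", "*.min.js", "*.lock"]
print(is_excluded("migrations/0001_init.py", excludes))  # True
print(is_excluded("assets/app.min.js", excludes))        # True
print(is_excluded("src/app.py", excludes))               # False
```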
---
## Team Review History
When a store is configured, prlens persists every review result. `prlens init` sets this up automatically using a private GitHub Gist — no server, no DB, no extra credentials.
```bash
# View recent reviews for a repo
prlens history --repo owner/repo
# View stats: severity breakdown + most-flagged files
prlens stats --repo owner/repo
# Filter history to a specific PR
prlens history --repo owner/repo --pr 42
```
---
## Codebase-Aware Reviews
prlens injects three types of context into every file review:
| Signal | Source | Why it helps |
|---|---|---|
| **Repository file tree** | Git tree at PR's head SHA | Lets the AI reason about layer boundaries, naming conventions, and test coverage |
| **Co-changed files** | Git commit history | Catches architectural coupling that isn't expressed via imports |
| **Paired test file** | Filename pattern matching | Avoids flagging already-tested behaviour; spots missing coverage |
All context is fetched from the GitHub API pinned to the PR's exact head SHA — no local filesystem, no stale state.
---
## Custom Guidelines
Point `guidelines` in `.prlens.yml` to any Markdown file with your team's coding standards:
```yaml
guidelines: ./docs/guidelines.md
```
Built-in defaults are in [`packages/core/src/prlens_core/guidelines/`](packages/core/src/prlens_core/guidelines/). Copy and customise them as a starting point.
---
## Environment Variables
| Variable | Required | Description |
|---|---|---|
| `GITHUB_TOKEN` | Yes (or `gh` CLI) | GitHub token with `pull_requests: write` |
| `ANTHROPIC_API_KEY` | When using Claude | Anthropic API key |
| `OPENAI_API_KEY` | When using GPT-4o | OpenAI API key |
If `GITHUB_TOKEN` is not set, prlens falls back to `gh auth token` (the token from your `gh auth login` session).
---
## Contributing
See [CONTRIBUTING.md](CONTRIBUTING.md) for development setup, running tests, and adding new AI providers.
## License
[MIT](LICENSE)
| text/markdown | null | null | null | null | MIT | code-review, github, ai, llm, pull-request | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Topic :: Software Development :: Quality Assurance"
] | [] | null | null | >=3.9 | [] | [] | [] | [
"prlens-core>=0.1.3",
"prlens-store>=0.1.3",
"click>=8.0",
"rich>=13.0",
"pyyaml>=6.0",
"python-dotenv>=1.0",
"prlens-core[anthropic]; extra == \"anthropic\"",
"prlens-core[openai]; extra == \"openai\"",
"prlens-core[all]; extra == \"all\"",
"pytest>=8.0; extra == \"dev\"",
"pytest-cov>=5.0; extra == \"dev\"",
"pytest-mock>=3.12; extra == \"dev\"",
"black>=24.0; extra == \"dev\"",
"flake8>=7.0; extra == \"dev\""
] | [] | [] | [] | [
"Homepage, https://github.com/codingdash/prlens",
"Issues, https://github.com/codingdash/prlens/issues"
] | twine/6.2.0 CPython/3.12.2 | 2026-02-21T09:54:49.412882 | prlens-0.1.5.tar.gz | 13,110 | 51/23/5383c38d6a9fbed10589cd6cdf1d7948b57c3139286262fdde0ef2c7489d/prlens-0.1.5.tar.gz | source | sdist | null | false | 29ca1002ca9d0ee5d1538b2df1a78f7e | 06c2af468f5239f41e338bffd9f9b88b065581a566c78c9bfd50085104c81b83 | 51235383c38d6a9fbed10589cd6cdf1d7948b57c3139286262fdde0ef2c7489d | null | [] | 252 |
2.4 | prlens-store | 0.1.5 | Pluggable review history store for prlens | # prlens-store
Pluggable review history backends for [prlens](https://github.com/codingdash/prlens) — AI-powered GitHub PR code reviewer for teams.
## What's in this package
| Backend | Class | Description |
|---|---|---|
| `noop` | `NoOpStore` | Default — no persistence, zero config |
| `gist` | `GistStore` | Shared GitHub Gist, zero infrastructure |
| `sqlite` | `SQLiteStore` | Local SQLite file |
## Installation
```bash
pip install prlens-store
```
This package is a library dependency of [`prlens`](https://pypi.org/project/prlens/). Install `prlens` directly unless you are embedding history storage in your own tool.
## Usage
```python
from prlens_store.sqlite import SQLiteStore
store = SQLiteStore(".prlens.db")
records = store.list_reviews("owner/repo")
store.close()
```
## Links
- [Documentation & CLI](https://github.com/codingdash/prlens)
- [Changelog](https://github.com/codingdash/prlens/releases)
- [Issues](https://github.com/codingdash/prlens/issues)
| text/markdown | null | null | null | null | MIT | null | [] | [] | null | null | >=3.9 | [] | [] | [] | [
"PyGithub>=2.1",
"pytest>=8.0; extra == \"dev\"",
"pytest-cov>=5.0; extra == \"dev\"",
"pytest-mock>=3.12; extra == \"dev\"",
"black>=24.0; extra == \"dev\"",
"flake8>=7.0; extra == \"dev\""
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.12.2 | 2026-02-21T09:54:47.514448 | prlens_store-0.1.5.tar.gz | 7,456 | 1c/77/3b82620b164295de761883dd760a3ae6144c4280d2153532a0ca8e15771d/prlens_store-0.1.5.tar.gz | source | sdist | null | false | 9e276d3f67a4f49313c31c9c7b3596cd | fca224500bbc2b41e57e87ff2ae5f90d92e70c7d86dde830ea1568c6d9257b7f | 1c773b82620b164295de761883dd760a3ae6144c4280d2153532a0ca8e15771d | null | [] | 282 |
2.4 | prlens-core | 0.1.5 | Core review engine for prlens — AI-powered GitHub PR code reviewer | # prlens-core
Core review engine for [prlens](https://github.com/codingdash/prlens) — AI-powered GitHub PR code reviewer for teams.
## What's in this package
- **AI providers** — `BaseReviewer` + concrete implementations for Anthropic Claude and OpenAI GPT-4o
- **Codebase context** — injects repository file tree, co-change history, and paired test files into every review
- **GitHub API client** — fetches PR diffs, posts inline review comments, all pinned to the PR's head SHA
- **Config loader** — reads `.prlens.yml` and merges environment variables
## Installation
```bash
pip install 'prlens-core[anthropic]' # Claude
pip install 'prlens-core[openai]' # GPT-4o
pip install 'prlens-core[all]' # both
```
This package is a library dependency of [`prlens`](https://pypi.org/project/prlens/). Install `prlens` directly unless you are embedding the review engine in your own tool.
## Usage
```python
from prlens_core.reviewer import run_review
from prlens_core.config import load_config
config = load_config(".prlens.yml")
config["github_token"] = "ghp_..."
summary = run_review(repo="owner/repo", pr_number=42, config=config)
print(summary.total_comments, summary.event)
```
## Links
- [Documentation & CLI](https://github.com/codingdash/prlens)
- [Changelog](https://github.com/codingdash/prlens/releases)
- [Issues](https://github.com/codingdash/prlens/issues)
| text/markdown | null | null | null | null | MIT | null | [] | [] | null | null | >=3.9 | [] | [] | [] | [
"PyGithub>=2.1",
"pyyaml>=6.0",
"python-dotenv>=1.0",
"anthropic>=0.25; extra == \"anthropic\"",
"openai>=1.0; extra == \"openai\"",
"anthropic>=0.25; extra == \"all\"",
"openai>=1.0; extra == \"all\"",
"pytest>=8.0; extra == \"dev\"",
"pytest-cov>=5.0; extra == \"dev\"",
"pytest-mock>=3.12; extra == \"dev\"",
"black>=24.0; extra == \"dev\"",
"flake8>=7.0; extra == \"dev\""
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.12.2 | 2026-02-21T09:54:46.017109 | prlens_core-0.1.5.tar.gz | 32,523 | 85/d4/4829148986b03dce21d70dbb2aa7cc000365d3a698bc3528144d9a27f08c/prlens_core-0.1.5.tar.gz | source | sdist | null | false | 92d43d8bdbeea7389a8190c3c780dfca | 4602a071a0b57563517f616b2cf114462aa822f47c901cc8bd9ec390e2ed3dd2 | 85d44829148986b03dce21d70dbb2aa7cc000365d3a698bc3528144d9a27f08c | null | [] | 283 |
2.4 | pyportainer | 1.0.28 | Asynchronous Python client for the Portainer API | <!-- PROJECT SHIELDS -->
[![GitHub Release][releases-shield]][releases]
[![Python Versions][python-versions-shield]][pypi]
![Project Stage][project-stage-shield]
![Project Maintenance][maintenance-shield]
[![License][license-shield]](LICENSE)
[![GitHub Activity][commits-shield]][commits-url]
[![PyPi Downloads][downloads-shield]][downloads-url]
[![GitHub Last Commit][last-commit-shield]][commits-url]
[![Open in Dev Containers][devcontainer-shield]][devcontainer]
[![Build Status][build-shield]][build-url]
[![Typing Status][typing-shield]][typing-url]
[![Code Coverage][codecov-shield]][codecov-url]
Asynchronous Python client for Python Portainer.
## About
This is an asynchronous Python client for the [Portainer API](https://docs.portainer.io/api-docs/). It is designed to be used with the [Portainer](https://www.portainer.io/) container management tool.
This package is a wrapper around the Portainer API, which allows you to interact with Portainer programmatically.
It is still in an early stage of development, and not all endpoints are implemented yet.
## Installation
```bash
pip install pyportainer
```
### Example
```python
import asyncio

from pyportainer import Portainer


async def main() -> None:
    """Run the example."""
    async with Portainer(
        api_url="http://localhost:9000",
        api_key="YOUR_API_KEY",
    ) as portainer:
        endpoints = await portainer.get_endpoints()
        print("Portainer Endpoints:", endpoints)


if __name__ == "__main__":
    asyncio.run(main())
```
More examples can be found in the [examples folder](./examples/).
## Image Update Watcher
`pyportainer` includes a built-in background watcher that continuously monitors your Docker containers for available image updates. It polls Portainer at a configurable interval, checks each running container's local image digest against the registry, and exposes the results for easy consumption.
### Basic usage
```python
import asyncio
from datetime import timedelta

from pyportainer import Portainer, PortainerImageWatcher


async def main() -> None:
    async with Portainer(
        api_url="http://localhost:9000",
        api_key="YOUR_API_KEY",
    ) as portainer:
        watcher = PortainerImageWatcher(
            portainer,
            interval=timedelta(hours=6),
        )
        watcher.start()

        await asyncio.sleep(30)  # Let the first check complete

        for (endpoint_id, container_id), result in watcher.results.items():
            if result.status and result.status.update_available:
                print(f"Update available for container {container_id} on endpoint {endpoint_id}")

        watcher.stop()


if __name__ == "__main__":
    asyncio.run(main())
```
### Configuration
| Parameter | Type | Default | Description |
| ------------- | ------------- | -------- | ------------------------------------------------- |
| `portainer` | `Portainer` | — | The Portainer client instance |
| `endpoint_id` | `int \| None` | `None` | Endpoint to monitor. `None` watches all endpoints |
| `interval` | `timedelta` | 12 hours | How often to poll for updates |
| `debug` | `bool` | `False` | Enable debug-level logging |
### Results
`watcher.results` returns a dictionary keyed by `(endpoint_id, container_id)` tuples. Each value is a `PortainerImageWatcherResult` containing:
- `endpoint_id` — the endpoint the container belongs to
- `container_id` — the container ID
- `status` — a `PortainerImageUpdateStatus` with:
- `update_available` (`bool`) — whether a newer image is available in the registry
- `local_digest` (`str | None`) — digest of the locally running image
- `registry_digest` (`str | None`) — digest of the latest image in the registry
You can also inspect `watcher.last_check` to get the Unix timestamp of the most recent completed poll, or update `watcher.interval` at runtime to change the polling frequency.
## Documentation
The full documentation, including API reference, can be found at: [https://erwindouna.github.io/pyportainer/](https://erwindouna.github.io/pyportainer/)
## Contributing
This is an active open-source project. We are always open to people who want to
use the code or contribute to it.
We've set up a separate document for our
[contribution guidelines](CONTRIBUTING.md).
Thank you for being involved! :heart_eyes:
## Setting up development environment
The simplest way to begin is by utilizing the [Dev Container][devcontainer]
feature of Visual Studio Code or by opening a CodeSpace directly on GitHub.
By clicking the button below you immediately start a Dev Container in Visual Studio Code.
[![Open in Dev Containers][devcontainer-shield]][devcontainer]
This Python project relies on [UV][uv] as its dependency manager,
providing comprehensive management and control over project dependencies.
### Installation
Install all packages, including all development requirements:
```bash
uv sync --all-groups && pre-commit install
```
_By default, UV creates a virtual environment in which it installs all required packages._
### Pre-commit
This repository uses the [pre-commit][pre-commit] framework, all changes
are linted and tested with each commit. To setup the pre-commit check, run:
```bash
uv run pre-commit install
```
And to run all checks and tests manually, use the following command:
```bash
uv run pre-commit run --all-files
```
### Testing
This project uses [pytest](https://docs.pytest.org/en/stable/) as its test framework. To run the tests:
```bash
uv run pytest
```
To update the [syrupy](https://github.com/tophat/syrupy) snapshot tests:
```bash
uv run pytest --snapshot-update
```
## License
MIT License
Copyright (c) 2025 Erwin Douna
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
<!-- LINKS FROM PLATFORM -->
<!-- MARKDOWN LINKS & IMAGES -->
[build-shield]: https://github.com/erwindouna/pyportainer/actions/workflows/tests.yaml/badge.svg
[build-url]: https://github.com/erwindouna/pyportainer/actions/workflows/tests.yaml
[codecov-shield]: https://codecov.io/gh/erwindouna/pyportainer/branch/main/graph/badge.svg?token=TOKEN
[codecov-url]: https://codecov.io/gh/erwindouna/pyportainer
[commits-shield]: https://img.shields.io/github/commit-activity/y/erwindouna/pyportainer.svg
[commits-url]: https://github.com/erwindouna/pyportainer/commits/main
[devcontainer-shield]: https://img.shields.io/static/v1?label=Dev%20Containers&message=Open&color=blue&logo=visualstudiocode
[devcontainer]: https://vscode.dev/redirect?url=vscode://ms-vscode-remote.remote-containers/cloneInVolume?url=https://github.com/erwindouna/pyportainer
[downloads-shield]: https://img.shields.io/pypi/dm/pyportainer
[downloads-url]: https://pypistats.org/packages/pyportainer
[last-commit-shield]: https://img.shields.io/github/last-commit/erwindouna/pyportainer.svg
[license-shield]: https://img.shields.io/github/license/erwindouna/pyportainer.svg
[project-stage-shield]: https://img.shields.io/badge/project%20stage-experimental-yellow.svg
[maintenance-shield]: https://img.shields.io/maintenance/yes/2026.svg
[pypi]: https://pypi.org/project/pyportainer/
[python-versions-shield]: https://img.shields.io/pypi/pyversions/pyportainer
[releases-shield]: https://img.shields.io/github/release/erwindouna/pyportainer.svg
[releases]: https://github.com/erwindouna/pyportainer/releases
[typing-shield]: https://github.com/erwindouna/pyportainer/actions/workflows/typing.yaml/badge.svg
[typing-url]: https://github.com/erwindouna/pyportainer/actions/workflows/typing.yaml
[uv]: https://docs.astral.sh/uv/
[pre-commit]: https://pre-commit.com
| text/markdown | null | Erwin Douna <e.douna@gmail.com> | null | Erwin Douna <e.douna@gmail.com> | null | api, async, client, python-portainer | [
"Framework :: AsyncIO",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Natural Language :: English",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Topic :: Software Development :: Libraries :: Python Modules"
] | [] | null | null | <4,>=3.11 | [] | [] | [] | [
"aiohttp>=3.0.0",
"mashumaro<4,>=3.17",
"orjson<4,>=3.10.16",
"yarl>=1.6.0"
] | [] | [] | [] | [
"Homepage, https://github.com/erwindouna/pyportainer",
"Repository, https://github.com/erwindouna/pyportainer",
"Documentation, https://github.com/erwindouna/pyportainer",
"Bug Tracker, https://github.com/erwindouna/pyportainer/issues",
"Changelog, https://github.com/erwindouna/pyportainer/releases"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-21T09:54:04.283893 | pyportainer-1.0.28.tar.gz | 20,692 | 16/6b/2014f871508cec9873c4ece1fca7cc42eb9dd7486834f7cb85721606e5f9/pyportainer-1.0.28.tar.gz | source | sdist | null | false | 1d72d9e1c0f99d36b3d3a8e7d2abc8d3 | 2f9ef89f8420e427de32a49e6131d28a4ba5af2b09dcaffa8e9f537e034a9602 | 166b2014f871508cec9873c4ece1fca7cc42eb9dd7486834f7cb85721606e5f9 | MIT | [
"LICENSE"
] | 454 |
2.4 | jleechanorg-orchestration | 0.1.74 | AI Orchestration - tmux-based interactive AI CLI wrapper and multi-agent orchestration system | # Orchestration Library (`jleechanorg-orchestration`)
Python package and CLI for task-driven agent orchestration.
## Table of Contents
- [Install from PyPI](#install-from-pypi)
- [Verify Install and Version](#verify-install-and-version)
- [CLI Entry Points](#cli-entry-points)
- [Primary Usage: Task Dispatching](#primary-usage-task-dispatching)
- [Task Dispatcher Python Interface](#task-dispatcher-python-interface)
- [Legacy Interactive Mode (`live`)](#legacy-interactive-mode-live)
- [Design Summary](#design-summary)
- [Tech Stack (Summary)](#tech-stack-summary)
- [Local Development (Package Source)](#local-development-package-source)
## Install from PyPI
```bash
python3 -m pip install jleechanorg-orchestration
```
Upgrade:
```bash
python3 -m pip install --upgrade jleechanorg-orchestration
```
Install a specific version:
```bash
python3 -m pip install "jleechanorg-orchestration==0.1.40" # omit version for latest
```
## Verify Install and Version
```bash
python3 -m pip show jleechanorg-orchestration
ai_orch --version
```
## CLI Entry Points
Console scripts installed by the package:
- `ai_orch`
- `orch` (alias)
Both map to `orchestration.live_mode:main`.
## Primary Usage: Task Dispatching
### 1. Unified orchestration (`run`)
```bash
ai_orch run --agent-cli gemini,claude "Fix flaky integration tests and open/update PR"
```
Shorthand (defaults to `run`):
```bash
ai_orch --agent-cli claude "Implement task dispatcher retry metrics"
```
Useful flags:
- `--agent-cli`: force CLI or fallback chain (`claude`, `codex`, `gemini`, `cursor`)
- `--context`: inject markdown context file
- `--branch`: force branch checkout
- `--pr`: update an existing PR
- `--no-new-pr`, `--no-new-branch`: hard guardrails
- `--validate`: post-run validation command
- `--model`: model override for supported CLIs
### 2. Task Dispatcher Interface
Analyze task only:
```bash
ai_orch dispatcher analyze --agent-cli codex --json "Refactor auth middleware"
```
Create agents from task:
```bash
ai_orch dispatcher create --agent-cli gemini --model gemini-3-flash-preview "Fix PR #123 review blockers"
```
Dry-run planned agent specs:
```bash
ai_orch dispatcher create --agent-cli claude --dry-run "Investigate failing CI"
```
## Task Dispatcher Python Interface
```python
from orchestration.task_dispatcher import TaskDispatcher
dispatcher = TaskDispatcher()
agent_specs = dispatcher.analyze_task_and_create_agents(
"Fix failing tests in PR #123 and push updates",
forced_cli="claude",
)
for spec in agent_specs:
ok = dispatcher.create_dynamic_agent(spec)
print(spec["name"], ok)
```
### Core methods
- `TaskDispatcher.analyze_task_and_create_agents(task_description: str, forced_cli: str | None = None) -> list[dict]`
- `TaskDispatcher.create_dynamic_agent(agent_spec: dict) -> bool`
### Agent spec fields used by `create_dynamic_agent`
- `name`: unique agent/session name
- `task`: full task instructions
- `cli` or `cli_chain`: selected CLI or fallback chain
- `workspace_config` (optional): workspace placement/settings
- `model` (optional): model override
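The spec fields above can be sketched as a plain dictionary. This is illustrative only: the field names come from the list above, but every value (and the fallback loop) is hypothetical, not taken from the package:

```python
# Illustrative agent spec for create_dynamic_agent().
# Field names follow the documented spec; all values are hypothetical.
agent_spec = {
    "name": "fix-ci-agent-1",                    # unique agent/session name
    "task": "Investigate failing CI and push a fix to PR #123",
    "cli_chain": ["claude", "codex", "gemini"],  # fallback chain; or use "cli": "claude"
    "workspace_config": {"root": "/tmp/agents/fix-ci-agent-1"},  # optional
    "model": "claude-sonnet",                    # optional model override
}

# A dispatcher would try each CLI in the chain until one succeeds.
for cli in agent_spec.get("cli_chain", [agent_spec.get("cli")]):
    print(f"would launch {cli!r} for {agent_spec['name']}")
```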
## Legacy Interactive Mode (`live`)
Interactive tmux mode is still available:
```bash
ai_orch live --cli codex
ai_orch list
ai_orch attach <session>
ai_orch kill <session>
```
Use this when you explicitly want a persistent manual CLI session. For automated task execution, prefer `run` or `dispatcher`.
## Design Summary
Full design details live in [`orchestration/design.md`](./design.md).
High-level design:
- Entry point is `ai_orch`/`orch` (mapped to `orchestration.live_mode:main`).
- Default flow is unified `run` mode, which delegates orchestration to `UnifiedOrchestration`.
- `dispatcher` mode exposes task planning and agent creation directly via `TaskDispatcher`.
- Agent execution is isolated via tmux sessions and workspace-specific execution context.
- Coordination uses file-backed A2A/task state under `/tmp` (no Redis dependency).
## Tech Stack (Summary)
- Runtime: Python 3.11+
- Session/process isolation: `tmux`
- VCS/PR operations: `git`, `gh`
- Agent CLIs: `claude`, `codex`, `gemini`, `cursor-agent`
- Coordination: file-backed A2A/task state under `/tmp` (no Redis requirement)
Detailed architecture and implementation docs:
- `orchestration/design.md`
- `orchestration/A2A_DESIGN.md`
- `orchestration/AGENT_SESSION_CONFIG.md`
## Local Development (Package Source)
```bash
cd orchestration
python3 -m pip install -e .
python3 -m pytest tests -q
```
Use editable installs only for local package development.
| text/markdown | null | jleechan <jlee@jleechan.org> | null | null | MIT | ai, orchestration, tmux, claude, codex, cli, automation, agents, terminal, interactive | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Topic :: Software Development :: Libraries :: Python Modules",
"Topic :: System :: Shells",
"Topic :: Terminals"
] | [] | null | null | >=3.11 | [] | [] | [] | [
"pytest>=7.0.0; extra == \"dev\"",
"pytest-cov>=4.0.0; extra == \"dev\"",
"black>=22.0.0; extra == \"dev\"",
"ruff>=0.1.0; extra == \"dev\""
] | [] | [] | [] | [
"Homepage, https://github.com/jleechanorg/worldarchitect.ai",
"Repository, https://github.com/jleechanorg/worldarchitect.ai",
"Issues, https://github.com/jleechanorg/worldarchitect.ai/issues",
"Documentation, https://github.com/jleechanorg/worldarchitect.ai/tree/main/orchestration"
] | twine/6.2.0 CPython/3.11.10 | 2026-02-21T09:54:02.510525 | jleechanorg_orchestration-0.1.74.tar.gz | 172,803 | ae/a5/48cfc7e14338a2c8b31122d0dade31b4c2efc5b7f1173038f8ad0a3a3b34/jleechanorg_orchestration-0.1.74.tar.gz | source | sdist | null | false | 4e0546eb8f41c46d620ab6c126365b61 | 011640d910ef764f4e8487aceda8692007807689d288f18112e00388cb9f645f | aea548cfc7e14338a2c8b31122d0dade31b4c2efc5b7f1173038f8ad0a3a3b34 | null | [] | 223 |
2.1 | sputnikqa | 0.0.2b2 | Extensible test automation framework — built for QA engineers who value clean architecture and scalability. | # 🛰️ SputnikQA — Test Automation Framework



___
**SputnikQA** is a modern, typed, and extensible framework for writing automated tests in Python.
A great fit for teams that want readable, reliable, and maintainable automated tests.
## Installation
```bash
# Core package
pip install sputnikqa
# With Allure integration (recommended)
pip install sputnikqa[allure]
```
## Key Features
- restapi booster
- [x] **Typed response models** via `Pydantic`
- [x] **Automatic validation** of response status and body
- [x] **Declarative API sections** (`BaseApiSection`), analogous to the Page Object pattern
- [x] **Request builders** for conveniently composing request bodies and multipart uploads
- [x] **Middleware support**: logging, authorization, retry, and more
- [x] **Built-in Allure integration**:
  - steps with `@allure.step`
  - attaches the cURL command, status, and response body
- [x] **Flexible HTTP client**: supports `httpx`, `requests`, and custom implementations
- [x] **async/await support** via `AsyncApiClient`
## Usage Example
### 1. Define a response model
```python
from sputnikqa.boosters.restapi.response_models import JsonResponse
class Pet(JsonResponse):
id: int | None = None
name: str
status: str
```
### 2. Create an API section
```python
from sputnikqa.boosters.restapi.sections.base import BaseApiSection
class PetSection(BaseApiSection):
base_url = "https://petstore.swagger.io/v2"
def get_pet_by_id(self, pet_id: int):
response = self.client.get(self.url_join(f"/pet/{pet_id}"))
return (
self.validator(response)
.validate_status(200)
.validate_response_model({200: Pet})
)
```
### 3. Write a test
```python
def test_get_pet(pet_section):
pet = pet_section.get_pet_by_id(1).get_validated_model()
assert pet.name == "doggie"
assert pet.status == "available"
```
## Architecture
```text
sputnikqa/
├── boosters/
│   └── restapi/
│       ├── clients/           # HTTP clients (httpx, requests, etc.)
│       ├── middleware/        # Middleware (logging, authorization)
│       ├── request_builders/  # Builders (bodies, files)
│       ├── response_models/   # Base response models
│       ├── sections/          # Base section class
│       └── validators/        # Response validators
└── integrations/
    └── allure/                # Allure integration
```
## Integrations
### Allure
Automatically attaches to the report:
- the request as a cURL command
- the HTTP status
- the response body
```python
from sputnikqa.integrations.allure.client import ApiClientAllure
from sputnikqa.integrations.allure.middleware import AllureMiddleware
client = ApiClientAllure(HttpxClient(), middlewares=[AllureMiddleware()])
```
> Requires: `pip install sputnikqa[allure]`
## Documentation and Examples
A complete set of example tests for the [Petstore API](https://petstore.swagger.io/) is available in the repository:
- Example: [`response_models.py`](./examples/src/petstore_api/response_models.py)
- Example: [`sections/pet.py`](./examples/src/petstore_api/sections/pet.py)
- Example: [`test_pet.py`](./examples/tests/petstore_api/test_pet.py)
## For Developers
Want to extend the framework?
- Implement your own `BaseHttpClient` for a custom HTTP stack.
- Write middleware for authorization, retries, or tracing.
- Use `PrimitiveResponse` to wrap primitive values (`str`, `int`, `dict`).
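The middleware idea can be sketched as a chain of callables wrapping an HTTP call. This is an illustrative pattern only, not the actual `sputnikqa` middleware API; the `Handler` type, the dict-shaped request/response, and the stand-in `http_call` are all assumptions:

```python
from typing import Callable

# Hypothetical shape: a handler takes a request dict and returns a response dict.
Handler = Callable[[dict], dict]

def auth_middleware(next_handler: Handler) -> Handler:
    """Adds an Authorization header before delegating to the next handler."""
    def wrapped(request: dict) -> dict:
        headers = dict(request.get("headers", {}))
        headers["Authorization"] = "Bearer <token>"  # placeholder token
        return next_handler({**request, "headers": headers})
    return wrapped

def logging_middleware(next_handler: Handler) -> Handler:
    """Records every request URL before delegating."""
    log = []
    def wrapped(request: dict) -> dict:
        log.append(request["url"])
        return next_handler(request)
    wrapped.log = log
    return wrapped

def http_call(request: dict) -> dict:
    # Stand-in for the real HTTP client call; echoes the headers it received.
    return {"status": 200, "echo_headers": request.get("headers", {})}

# Compose: logging -> auth -> actual call.
handler = logging_middleware(auth_middleware(http_call))
response = handler({"url": "https://petstore.swagger.io/v2/pet/1"})
```

The same layering is what lets logging, authorization, and retry concerns stay out of the section classes.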
## License
Distributed under the MIT License. See [LICENSE](LICENSE) for more information.
___
> SputnikQA is your reliable sputnik (companion) in the world of test automation.
📌 PyPI: https://pypi.org/project/sputnikqa/
📌 GitVerse: https://gitverse.ru/crenom/SputnikQA | text/markdown | Crenom | pytimdev@mail.ru | null | null | MIT | null | [
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3"
] | [] | null | null | <4.0,>=3.13 | [] | [] | [] | [
"httpx<0.29.0,>=0.28.1",
"pydantic<3.0.0,>=2.11.9",
"allure-python-commons<3.0.0,>=2.15.0; extra == \"allure\""
] | [] | [] | [] | [] | poetry/1.8.3 CPython/3.10.9 Windows/10 | 2026-02-21T09:52:51.843552 | sputnikqa-0.0.2b2.tar.gz | 21,271 | 11/f0/85c06cd7e019aa981fe8d64f2b40fcdaade17673b7ea3f3a1251f3155c73/sputnikqa-0.0.2b2.tar.gz | source | sdist | null | false | 738235304183634bde71380b4336c022 | 3a31249bba9f48bec1a8e105382ae3525d9eb04ac2efa0a225c6459022a5add4 | 11f085c06cd7e019aa981fe8d64f2b40fcdaade17673b7ea3f3a1251f3155c73 | null | [] | 186 |
2.4 | ezan | 0.1.1 | Namaz vakitleri ve kıble hesaplama aracı | # Ezan: Adhan
---
## Ezan
Prayer times (Adhan) and Qibla calculation module.
---
## Usage
User manual:
https://github.com/WhiteSymmetry/ezan/blob/main/notebook/ezan.ipynb
---
```python
from ezan import print_prayer_times, get_user_location_and_date
lat, lon, tz, date = get_user_location_and_date()
print_prayer_times(lat, lon, tz, date)
```
---
```python
from ezan import print_prayer_times
import datetime
# Enter your own values
lat = 41.0 # latitude (degrees)
lon = 29.0 # longitude (degrees)
tz = 'Europe/Istanbul' # time zone (Europe/Istanbul, Asia/Damascus, etc.)
tarih = datetime.date(2026, 2, 20) # date: year, month, day
# Run the calculation
print_prayer_times(lat, lon, tz, tarih)
```
---
## tz list
```text
Africa/Abidjan Africa/Accra Africa/Addis_Ababa Africa/Algiers Africa/Asmara Africa/Asmera Africa/Bamako Africa/Bangui Africa/Banjul Africa/Bissau Africa/Blantyre Africa/Brazzaville Africa/Bujumbura Africa/Cairo Africa/Casablanca Africa/Ceuta Africa/Conakry Africa/Dakar Africa/Dar_es_Salaam Africa/Djibouti Africa/Douala Africa/El_Aaiun Africa/Freetown Africa/Gaborone Africa/Harare Africa/Johannesburg Africa/Juba Africa/Kampala Africa/Khartoum Africa/Kigali Africa/Kinshasa Africa/Lagos Africa/Libreville Africa/Lome Africa/Luanda Africa/Lubumbashi Africa/Lusaka Africa/Malabo Africa/Maputo Africa/Maseru Africa/Mbabane Africa/Mogadishu Africa/Monrovia Africa/Nairobi Africa/Ndjamena Africa/Niamey Africa/Nouakchott Africa/Ouagadougou Africa/Porto-Novo Africa/Sao_Tome Africa/Timbuktu Africa/Tripoli Africa/Tunis Africa/Windhoek America/Adak America/Anchorage America/Anguilla America/Antigua America/Araguaina America/Argentina/Buenos_Aires America/Argentina/Catamarca America/Argentina/ComodRivadavia America/Argentina/Cordoba America/Argentina/Jujuy America/Argentina/La_Rioja America/Argentina/Mendoza America/Argentina/Rio_Gallegos America/Argentina/Salta America/Argentina/San_Juan America/Argentina/San_Luis America/Argentina/Tucuman America/Argentina/Ushuaia America/Aruba America/Asuncion America/Atikokan America/Atka America/Bahia America/Bahia_Banderas America/Barbados America/Belem America/Belize America/Blanc-Sablon America/Boa_Vista America/Bogota America/Boise America/Buenos_Aires America/Cambridge_Bay America/Campo_Grande America/Cancun America/Caracas America/Catamarca America/Cayenne America/Cayman America/Chicago America/Chihuahua America/Ciudad_Juarez America/Coral_Harbour America/Cordoba America/Costa_Rica America/Coyhaique America/Creston America/Cuiaba America/Curacao America/Danmarkshavn America/Dawson America/Dawson_Creek America/Denver America/Detroit America/Dominica America/Edmonton America/Eirunepe America/El_Salvador America/Ensenada America/Fort_Nelson 
America/Fort_Wayne America/Fortaleza America/Glace_Bay America/Godthab America/Goose_Bay America/Grand_Turk America/Grenada America/Guadeloupe America/Guatemala America/Guayaquil America/Guyana America/Halifax America/Havana America/Hermosillo America/Indiana/Indianapolis America/Indiana/Knox America/Indiana/Marengo America/Indiana/Petersburg America/Indiana/Tell_City America/Indiana/Vevay America/Indiana/Vincennes America/Indiana/Winamac America/Indianapolis America/Inuvik America/Iqaluit America/Jamaica America/Jujuy America/Juneau America/Kentucky/Louisville America/Kentucky/Monticello America/Knox_IN America/Kralendijk America/La_Paz America/Lima America/Los_Angeles America/Louisville America/Lower_Princes America/Maceio America/Managua America/Manaus America/Marigot America/Martinique America/Matamoros America/Mazatlan America/Mendoza America/Menominee America/Merida America/Metlakatla America/Mexico_City America/Miquelon America/Moncton America/Monterrey America/Montevideo America/Montreal America/Montserrat America/Nassau America/New_York America/Nipigon America/Nome America/Noronha America/North_Dakota/Beulah America/North_Dakota/Center America/North_Dakota/New_Salem America/Nuuk America/Ojinaga America/Panama America/Pangnirtung America/Paramaribo America/Phoenix America/Port-au-Prince America/Port_of_Spain America/Porto_Acre America/Porto_Velho America/Puerto_Rico America/Punta_Arenas America/Rainy_River America/Rankin_Inlet America/Recife America/Regina America/Resolute America/Rio_Branco America/Rosario America/Santa_Isabel America/Santarem America/Santiago America/Santo_Domingo America/Sao_Paulo America/Scoresbysund America/Shiprock America/Sitka America/St_Barthelemy America/St_Johns America/St_Kitts America/St_Lucia America/St_Thomas America/St_Vincent America/Swift_Current America/Tegucigalpa America/Thule America/Thunder_Bay America/Tijuana America/Toronto America/Tortola America/Vancouver America/Virgin America/Whitehorse America/Winnipeg 
America/Yakutat America/Yellowknife Antarctica/Casey Antarctica/Davis Antarctica/DumontDUrville Antarctica/Macquarie Antarctica/Mawson Antarctica/McMurdo Antarctica/Palmer Antarctica/Rothera Antarctica/South_Pole Antarctica/Syowa Antarctica/Troll Antarctica/Vostok Arctic/Longyearbyen Asia/Aden Asia/Almaty Asia/Amman Asia/Anadyr Asia/Aqtau Asia/Aqtobe Asia/Ashgabat Asia/Ashkhabad Asia/Atyrau Asia/Baghdad Asia/Bahrain Asia/Baku Asia/Bangkok Asia/Barnaul Asia/Beirut Asia/Bishkek Asia/Brunei Asia/Calcutta Asia/Chita Asia/Choibalsan Asia/Chongqing Asia/Chungking Asia/Colombo Asia/Dacca Asia/Damascus Asia/Dhaka Asia/Dili Asia/Dubai Asia/Dushanbe Asia/Famagusta Asia/Gaza Asia/Harbin Asia/Hebron Asia/Ho_Chi_Minh Asia/Hong_Kong Asia/Hovd Asia/Irkutsk Asia/Istanbul Asia/Jakarta Asia/Jayapura Asia/Jerusalem Asia/Kabul Asia/Kamchatka Asia/Karachi Asia/Kashgar Asia/Kathmandu Asia/Katmandu Asia/Khandyga Asia/Kolkata Asia/Krasnoyarsk Asia/Kuala_Lumpur Asia/Kuching Asia/Kuwait Asia/Macao Asia/Macau Asia/Magadan Asia/Makassar Asia/Manila Asia/Muscat Asia/Nicosia Asia/Novokuznetsk Asia/Novosibirsk Asia/Omsk Asia/Oral Asia/Phnom_Penh Asia/Pontianak Asia/Pyongyang Asia/Qatar Asia/Qostanay Asia/Qyzylorda Asia/Rangoon Asia/Riyadh Asia/Saigon Asia/Sakhalin Asia/Samarkand Asia/Seoul Asia/Shanghai Asia/Singapore Asia/Srednekolymsk Asia/Taipei Asia/Tashkent Asia/Tbilisi Asia/Tehran Asia/Tel_Aviv Asia/Thimbu Asia/Thimphu Asia/Tokyo Asia/Tomsk Asia/Ujung_Pandang Asia/Ulaanbaatar Asia/Ulan_Bator Asia/Urumqi Asia/Ust-Nera Asia/Vientiane Asia/Vladivostok Asia/Yakutsk Asia/Yangon Asia/Yekaterinburg Asia/Yerevan Atlantic/Azores Atlantic/Bermuda Atlantic/Canary Atlantic/Cape_Verde Atlantic/Faeroe Atlantic/Faroe Atlantic/Jan_Mayen Atlantic/Madeira Atlantic/Reykjavik Atlantic/South_Georgia Atlantic/St_Helena Atlantic/Stanley Australia/ACT Australia/Adelaide Australia/Brisbane Australia/Broken_Hill Australia/Canberra Australia/Currie Australia/Darwin Australia/Eucla Australia/Hobart Australia/LHI 
Australia/Lindeman Australia/Lord_Howe Australia/Melbourne Australia/NSW Australia/North Australia/Perth Australia/Queensland Australia/South Australia/Sydney Australia/Tasmania Australia/Victoria Australia/West Australia/Yancowinna Brazil/Acre Brazil/DeNoronha Brazil/East Brazil/West CET CST6CDT Canada/Atlantic Canada/Central Canada/Eastern Canada/Mountain Canada/Newfoundland Canada/Pacific Canada/Saskatchewan Canada/Yukon Chile/Continental Chile/EasterIsland Cuba EET EST EST5EDT Egypt Eire Etc/GMT Etc/GMT+0 Etc/GMT+1 Etc/GMT+10 Etc/GMT+11 Etc/GMT+12 Etc/GMT+2 Etc/GMT+3 Etc/GMT+4 Etc/GMT+5 Etc/GMT+6 Etc/GMT+7 Etc/GMT+8 Etc/GMT+9 Etc/GMT-0 Etc/GMT-1 Etc/GMT-10 Etc/GMT-11 Etc/GMT-12 Etc/GMT-13 Etc/GMT-14 Etc/GMT-2 Etc/GMT-3 Etc/GMT-4 Etc/GMT-5 Etc/GMT-6 Etc/GMT-7 Etc/GMT-8 Etc/GMT-9 Etc/GMT0 Etc/Greenwich Etc/UCT Etc/UTC Etc/Universal Etc/Zulu Europe/Amsterdam Europe/Andorra Europe/Astrakhan Europe/Athens Europe/Belfast Europe/Belgrade Europe/Berlin Europe/Bratislava Europe/Brussels Europe/Bucharest Europe/Budapest Europe/Busingen Europe/Chisinau Europe/Copenhagen Europe/Dublin Europe/Gibraltar Europe/Guernsey Europe/Helsinki Europe/Isle_of_Man Europe/Istanbul Europe/Jersey Europe/Kaliningrad Europe/Kiev Europe/Kirov Europe/Kyiv Europe/Lisbon Europe/Ljubljana Europe/London Europe/Luxembourg Europe/Madrid Europe/Malta Europe/Mariehamn Europe/Minsk Europe/Monaco Europe/Moscow Europe/Nicosia Europe/Oslo Europe/Paris Europe/Podgorica Europe/Prague Europe/Riga Europe/Rome Europe/Samara Europe/San_Marino Europe/Sarajevo Europe/Saratov Europe/Simferopol Europe/Skopje Europe/Sofia Europe/Stockholm Europe/Tallinn Europe/Tirane Europe/Tiraspol Europe/Ulyanovsk Europe/Uzhgorod Europe/Vaduz Europe/Vatican Europe/Vienna Europe/Vilnius Europe/Volgograd Europe/Warsaw Europe/Zagreb Europe/Zaporozhye Europe/Zurich GB GB-Eire GMT GMT+0 GMT-0 GMT0 Greenwich HST Hongkong Iceland Indian/Antananarivo Indian/Chagos Indian/Christmas Indian/Cocos Indian/Comoro Indian/Kerguelen Indian/Mahe 
Indian/Maldives Indian/Mauritius Indian/Mayotte Indian/Reunion Iran Israel Jamaica Japan Kwajalein Libya MET MST MST7MDT Mexico/BajaNorte Mexico/BajaSur Mexico/General NZ NZ-CHAT Navajo PRC PST8PDT Pacific/Apia Pacific/Auckland Pacific/Bougainville Pacific/Chatham Pacific/Chuuk Pacific/Easter Pacific/Efate Pacific/Enderbury Pacific/Fakaofo Pacific/Fiji Pacific/Funafuti Pacific/Galapagos Pacific/Gambier Pacific/Guadalcanal Pacific/Guam Pacific/Honolulu Pacific/Johnston Pacific/Kanton Pacific/Kiritimati Pacific/Kosrae Pacific/Kwajalein Pacific/Majuro Pacific/Marquesas Pacific/Midway Pacific/Nauru Pacific/Niue Pacific/Norfolk Pacific/Noumea Pacific/Pago_Pago Pacific/Palau Pacific/Pitcairn Pacific/Pohnpei Pacific/Ponape Pacific/Port_Moresby Pacific/Rarotonga Pacific/Saipan Pacific/Samoa Pacific/Tahiti Pacific/Tarawa Pacific/Tongatapu Pacific/Truk Pacific/Wake Pacific/Wallis Pacific/Yap Poland Portugal ROC ROK Singapore Turkey UCT US/Alaska US/Aleutian US/Arizona US/Central US/East-Indiana US/Eastern US/Hawaii US/Indiana-Starke US/Michigan US/Mountain US/Pacific US/Samoa UTC Universal W-SU WET Zulu
```
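Rather than scanning the list above by hand, you can enumerate the same IANA identifiers programmatically. The sketch below uses the standard library's `zoneinfo` module (Python ≥ 3.9); since ezan depends on `pytz`, `pytz.all_timezones` provides an equivalent list:

```python
from zoneinfo import available_timezones

# Set of IANA time zone identifiers known to the system, e.g. "Europe/Istanbul".
tz_names = available_timezones()
print("Europe/Istanbul" in tz_names)  # True on systems with tzdata installed
```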
---
## ⚙️ Requirements
- Python ≥ 3.11
- astropy
- pytz
- requests

These are installed automatically during setup.
## 📄 License
AGPL-3.0 license
| text/markdown | Mehmet Keçeci | Mehmet Keçeci <mkececi@yaani.com> | null | null | AGPL-3.0-or-later | null | [
"Programming Language :: Python :: 3",
"Operating System :: OS Independent"
] | [] | https://github.com/WhiteSymmetry/ezan | null | >=3.11 | [] | [] | [] | [
"astropy>=7.2.0",
"pytz>=2025.2",
"requests>=2.32"
] | [] | [] | [] | [
"Homepage, https://github.com/WhiteSymmetry/ezan",
"Repository, https://github.com/WhiteSymmetry/ezan.git"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-21T09:52:10.628622 | ezan-0.1.1.tar.gz | 28,441 | 7d/3e/6839eea7a91e3d81c0bd51a77d5091d73eb049e4901a182888c83033a608/ezan-0.1.1.tar.gz | source | sdist | null | false | 492e0946b8d50fd784722fbbc8e247a2 | c0013afdd8d242a39c1f34b0cb5af2da9a1bab3f4e67bafce556f50c1e1527ee | 7d3e6839eea7a91e3d81c0bd51a77d5091d73eb049e4901a182888c83033a608 | null | [
"LICENSE"
] | 208 |
2.4 | matter-python-client | 0.4.1 | Python Client for the OHF Matter Server | # Python Client for the OHF Matter Server
A PyPI package (`matter-python-client`) providing Python bindings for the [OHF Matter Server](https://github.com/matter-js/matterjs-server). This is a drop-in replacement for the client portion of [`python-matter-server`](https://github.com/matter-js/python-matter-server), with custom cluster definitions updated to match the Matter.js server.
## Origin
The client and common modules were copied from [`python-matter-server` v8.1.2](https://github.com/matter-js/python-matter-server) and modified:
- **Source**: `matter_server/client/` and `matter_server/common/` from python-matter-server
- **Modified**: `matter_server/common/custom_clusters.py` — updated to match the JS server's cluster definitions
- **Excluded**: `matter_server/server/` (the JS server replaces it), `common/helpers/logger.py` (depends on `coloredlogs`, a server-only dependency)
### Custom Cluster Changes
| Cluster | Change |
|---------|--------|
| EveCluster | +8 attributes (getConfig, setConfig, loggingMetadata, loggingData, lastEventTime, statusFault, childLock, rloc16), wattAccumulatedControlPoint type fixed |
| HeimanCluster | Attribute IDs shortened to match JS server |
| NeoCluster | Types changed from float32 to uint |
| Polling | Removed (JS server handles polling natively) |
## Package
- **PyPI name**: `matter-python-client`
- **Python module**: `matter_server` (same import path as `python-matter-server`)
- **Python**: >= 3.12
- **Dependencies**: `aiohttp`, `orjson`, `home-assistant-chip-clusters`
## npm Scripts (from monorepo root)
```bash
# First-time setup: create venv and install with test dependencies
npm run python:install
# Run unit + mock server tests (fast, ~1s)
npm run python:test
# Run full integration tests against real Matter.js server + test device (~40s)
npm run python:test-integration
# Run all Python tests
npm run python:test-all
# Build the PyPI package (.tar.gz + .whl)
npm run python:build
```
## Usage
The package provides the same `matter_server.client` API as `python-matter-server`:
```python
import asyncio

import aiohttp
from matter_server.client import MatterClient

async def main() -> None:
    async with aiohttp.ClientSession() as session:
        client = MatterClient("ws://localhost:5580/ws", session)
        await client.connect()
        init_ready = asyncio.Event()
        listen_task = asyncio.create_task(client.start_listening(init_ready))
        await init_ready.wait()
        nodes = client.get_nodes()

asyncio.run(main())
```
To use this package instead of `python-matter-server` in the Home Assistant Matter integration, change `manifest.json`:
```diff
- "requirements": ["python-matter-server==8.1.2"],
+ "requirements": ["matter-python-client==1.0.0"],
```
## Testing
The test suite has three layers:
- **Import smoke tests** (8 tests) — verify all HA integration imports resolve
- **Mock server tests** (4 tests) — Python client against JS MockMatterServer subprocess
- **Integration tests** (63 tests) — Python client against real Matter.js server + test device, mirroring the JS `IntegrationTest.ts`
| text/markdown | null | Open Home Foundation <hello@openhomefoundation.io> | null | null | null | null | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Topic :: Home Automation"
] | [
"any"
] | null | null | >=3.12 | [] | [] | [] | [
"aiohttp",
"orjson",
"home-assistant-chip-clusters==2025.7.0",
"pytest>=9.0; extra == \"test\"",
"pytest-asyncio>=0.24; extra == \"test\"",
"pytest-aiohttp>=1.0; extra == \"test\""
] | [] | [] | [] | [
"Homepage, https://github.com/matter-js/matterjs-server",
"Source, https://github.com/matter-js/matterjs-server/tree/main/python_client",
"Bug Tracker, https://github.com/matter-js/matterjs-server/issues",
"Changelog, https://github.com/matter-js/matterjs-server/blob/main/CHANGELOG.md"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-21T09:51:47.701874 | matter_python_client-0.4.1.tar.gz | 39,718 | b9/fd/2d061bd945dae067779e7c96f9af981c9b3f9a81cf772b614f906568d6c6/matter_python_client-0.4.1.tar.gz | source | sdist | null | false | c43799266c1acbaf62d191d0485a2663 | 6c1109463f172e325079714fb33d5248a6850312aa1e70c13d8354df76d7ebb3 | b9fd2d061bd945dae067779e7c96f9af981c9b3f9a81cf772b614f906568d6c6 | Apache-2.0 | [] | 201 |
2.4 | aiokdb | 0.1.36 | Pure Python asyncio connector to KDB |   [](https://badge.fury.io/py/aiokdb)
[](https://app.fossa.com/projects/git%2Bgithub.com%2FTeaEngineering%2Faiokdb?ref=badge_shield)
# aiokdb
Python asyncio connector to KDB. Pure Python, so it does not depend on the `k.h` bindings or kdb shared objects, nor on numpy/pandas. Fully type hinted to comply with `PEP-561`. No non-core dependencies; tested on Python 3.8 - 3.12.
## Peer review & motivation
[qPython](https://github.com/exxeleron/qPython) is a widely used library for this task that maps KDB tables to Pandas Dataframes. Sometimes a dependency on numpy/pandas is not desired, or the dynamic type mapping might be unwanted.
This library takes a different approach and aims to replicate using the KDB C-library functions, ie. being 100% explicit about KDB types. It was built working from the publicly documented [Serialisation Examples](https://code.kx.com/q/kb/serialization/) and [C API for kdb+](https://code.kx.com/q/wp/capi/) pages. Users might also need to be familiar with [k.h](https://github.com/KxSystems/ffi/blob/master/include/k.h).
```python
% python
Python 3.9.22 (main, Apr 8 2025, 15:21:55)
>>> from aiokdb import *
>>> from aiokdb.extras import ktns, ktni, ktnb
>>> x = xt(xd(ktns("hi", "there"), kk(ktni(TypeEnum.KI, 45, 56), ktnb(False, True))))
>>> x # shows repr() output
xt(xd(ktns('hi', 'there'), kk(ktni(TypeEnum.KI, 45, 56), ktnb(False, True))))
>>> from aiokdb.format import AsciiFormatter
>>> print(AsciiFormatter().format(x))
hi there
--------
45 0
56 1
>>> len(x)
2
>>> x[0]
xd(ktns('hi', 'there'), kk(ki(45), kb(False)))
>>> print(AsciiFormatter().format(x[1]))
hi | 56i
there| 1
>>> x[2]
IndexError
```
Basic RPC, using blocking sockets:
```python
# run ./q -p 12345 &
from aiokdb.socket import khpu
h = khpu("localhost", 12345, "kdb:pass")
# if remote returns Exception, it is raised here, unless khpu(..., raise_krr=False)
result = h.k("2.0+3.0")
assert result.aF() == 5.0
result.aJ() # raises ValueError: wrong type KF (-9) for aJ
```
The `result` object is a K-like Python object (a `KObj`), having the usual signed integer type available as `result.type`. Accessors for the primitive types are prefixed with an `a` and check at runtime that the accessor is appropriate for the stored type (`.aI()`, `.aJ()`, `.aH()`, `.aF()` etc.). Atoms store their value to a `bytes` object irrespective of the type, and encode/decode on demand. Atomic values can be set with (`.i(3)`, `.j(12)`, `.ss("hello")`).
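The decode-on-demand design can be illustrated with a minimal sketch. This is not the actual aiokdb implementation, only a toy `Atom` class showing the idea: the value lives as raw bytes, and each typed accessor checks the type code before unpacking (type codes -7 for long and -9 for float match the table below; little-endian layout is an assumption):

```python
import struct

# Toy type codes mirroring aiokdb's signed atom types for long and float.
TYPE_KJ, TYPE_KF = -7, -9

class Atom:
    """Bytes-backed atom: stores raw bytes, decodes on accessor call."""
    def __init__(self, type_code: int, raw: bytes):
        self.type = type_code
        self._raw = raw

    def aJ(self) -> int:
        if self.type != TYPE_KJ:
            raise ValueError(f"wrong type {self.type} for aJ")
        return struct.unpack("<q", self._raw)[0]  # 8-byte little-endian long

    def aF(self) -> float:
        if self.type != TYPE_KF:
            raise ValueError(f"wrong type {self.type} for aF")
        return struct.unpack("<d", self._raw)[0]  # 8-byte IEEE double

def kj(value: int) -> Atom:
    """Create a long atom, encoding the value to bytes up front."""
    return Atom(TYPE_KJ, struct.pack("<q", value))

atom = kj(42)
print(atom.aJ())  # 42
```

Calling `atom.aF()` here raises `ValueError`, mirroring the runtime type checks described above.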
Arrays are implemented with subtypes that use [Python's native `array` module](https://docs.python.org/3/library/array.html) for efficient storage. The `MutableSequence` arrays are returned by the usual accessor functions `.kI()`, `.kB()`, `.kS()` etc.
```
kdb type name python python python python python
n c TypeEnum accessor setter create type
--------------------------------------------------------------------------------------
-19 t time -KT - - - -
-18 v second -KV - - - -
-17 u minute -KU - - - -
-16 n timespan -KN - - - -
-15 z datetime -KZ - - - -
-14 d date -KD - - - -
-13 m month -KM - - - -
-12 p timestamp -KP - - - -
-11 s symbol -KS .aS() .ss("sym") ks() str
-10 c char -KC .aC() .c("c") kc() str (len 1)
-9 f float -KF .aF() .f(6.1) kf() float
-8 e real -KE .aE() .f(6.2) ke() float
-7 j long -KJ .aJ() .j(7) kj() int
-6 i int -KI .aI() .i(6) ki() int
-5 h short -KH .aH() .h(5) kh() int
-4 x byte -KG .aG() .g(4) kg() int
-2 g guid -UU .aU() .uu(UUID()) kuu() uuid.UUID
-1 b boolean -KB .aB() .b(True) kb() bool
0 * list K .kK() - kk() MutableSequence[KObj]
1 b boolean KB .kB() - ktnb() MutableSequence[bool]
2 g guid UU .kU() - ktnu() MutableSequence[uuid.UUID]
4 x byte KG .kG() - ktni() MutableSequence[int]
5 h short KH .kH() - ktni() MutableSequence[int]
6 i int KI .kI() - ktni() MutableSequence[int]
7 j long KJ .kJ() - ktni() MutableSequence[int]
8 e real KE .kE() - ktnf() MutableSequence[float]
9 f float KF .kF() - ktnf() MutableSequence[float]
10 c char KC .kC(),.aS() - cv() array.array, str
11 s symbol KS .kS() - ktns() MutableSequence[str]
12 p timestamp KP - - - -
13 m month KM - - - -
14 d date KD - - - -
15 z datetime KZ - - - -
16 n timespan KN - - - -
17 u minute KU - - - -
18 v second KV - - - -
19 t time KT - - - -
98 flip XT .kkey(), .kvalue() xt() KObj, KObj
99 dict XD .kkey(), .kvalue() xd() KObj, KObj
100 function FN - - KFnAtom -
101 :: nil NIL - - kNil -
127 `s#dict SD .kkey(), .kvalue() - KObj, KObj
-128 ' err KRR .aS() .ss() krr() str
```
Serialisation is handled by the `b9` function, which encodes a `KObj` to a python `bytes`, and the `d9` function which takes a `bytes` and returns a `KObj`.
Calling `repr()` on `KObj` returns a string representation that, when passed to `eval()`, will exactly recreate the `KObj`. This may be an expensive operation for deeply nested or large tables.
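As a concrete illustration of the byte layout that `b9` and `d9` deal in, the long atom `7` can be built by hand with only the stdlib, following the serialisation rules on the KX page linked earlier (the header/payload split is the documented IPC format; the code here is ours, not aiokdb's):

```python
import struct

# IPC encoding of the long atom 7: an 8-byte message header (endianness
# flag, message type, padding, total length), then a signed type byte
# (-7 = long atom) and the 8-byte little-endian value.
msg = struct.pack("<BBHI", 1, 0, 0, 17) + struct.pack("<bq", -7, 7)
print(msg.hex())  # 0100000011000000f90700000000000000

# Decoding reverses the steps:
endian, msgtype, _, length = struct.unpack_from("<BBHI", msg, 0)
ktype, value = struct.unpack_from("<bq", msg, 8)
assert (length, ktype, value) == (17, -7, 7)
```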
* Atoms are created by `ka`, `kb`, `ku`, `kg`, `kh`, `ki`, `kj`, `ke`, `kf`, `kc`, `ks`, `kt`, `kd`, `kz`, `ktj`
* Vectors from python primitives with `ktnu`, `ktni`, `ktnb`, `ktnf`, `ktns`, passing desired `TypeEnum` value as the first argument.
* Mixed-type object lists with `kk`.
* Dictionaries with `xd` and tables with `xt`.
Python manages garbage collection, so none of the reference-counting primitives exist: `k.r` and the functions `r1`, `r0`, `m9`, `setm`.
## Asyncio
Both kdb client and server *protocols* are implemented using asyncio, and can be tested back-to-back.
For instance, running `python -m aiokdb.server` and then `python -m aiokdb.client` will connect together using KDB IPC. However, since there is no _interpreter_ (and the default server does not handle any commands), the server returns an `nyi` error to all queries. To implement a partial protocol for your own application, subclass `aiokdb.server.ServerContext` and implement `on_sync_request()`, `on_async_message()`, and perhaps `check_login()`.
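The back-to-back pairing is the standard asyncio pattern of running a server and a client in one event loop. The sketch below uses a plain line-based exchange of an error string rather than the KDB IPC protocol, purely to show the shape of that pattern:

```python
import asyncio

# Not the KDB IPC protocol -- just the generic asyncio pattern that lets
# a client and server be exercised back-to-back in one event loop, the way
# `python -m aiokdb.server` and `python -m aiokdb.client` pair up.
async def serve(reader, writer):
    query = await reader.readline()
    # A server with no interpreter answers every query with an error,
    # analogous to the default `nyi` response described above.
    writer.write(b"'nyi\n")
    await writer.drain()
    writer.close()

async def main() -> bytes:
    server = await asyncio.start_server(serve, "127.0.0.1", 0)
    port = server.sockets[0].getsockname()[1]
    reader, writer = await asyncio.open_connection("127.0.0.1", port)
    writer.write(b"2.0+3.0\n")
    await writer.drain()
    reply = await reader.readline()
    writer.close()
    server.close()
    await server.wait_closed()
    return reply

print(asyncio.run(main()))  # b"'nyi\n"
```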
## Command Line Interface
A usable command-line client for connecting to a remote KDB instance (using Python `asyncio`, and `prompt_toolkit` for line editing and history) is built into the package:
```bash
$ pip install aiokdb prompt_toolkit
$ ./q -p 12345 &
$ python -m aiokdb.cli --host localhost --port 12345
(eval) > ([s:7 6 0Nj]x:3?0Ng;y:2)
s| x y
-|---------------------------------------
7| 409031f3-b19c-6770-ee84-6e9369c98697 2
6| 52cb20d9-f12c-9963-2829-3c64d8d8cb14 2
| cddeceef-9ee9-3847-9172-3e3d7ab39b26 2
(eval) > 4 5 6!(`abc`def;til 300;(3 4!`a`b))
4| abc def
5| 0 1 2 ... 297 298 299
6| KDict
(eval) > [ctrl-D]
$
```
## Formatting
We implement `repr(KObj)` as a recursively descending, type-explicit formatter, suitable for logging complex or unknown responses, although its output can be very large for large datasets. The implementation of `str(KObj)` is non-descending, and always takes constant time/space irrespective of the payload.
Text formatting (as shown above) is controlled by `aiokdb.format.AsciiFormatter`, which looks inside a `KObj` to render the `XD`, `SD`, `XT` types in tabular form containing atom and vector values. Nested complex types, i.e. dictionaries or tables, render as the literals `KDict` or `KFlip`, so the outer form is preserved. The output can be bounded by passing arguments to the formatter, e.g. `AsciiFormatter(width=120, height=20)`.
Finally `aiokdb.format.HtmlFormatter` is suitable for dashboards etc and can be trivially added to web frameworks as a 'template tag' or similar.
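The bounding behaviour amounts to clipping lines and truncating rows; the sketch below (our own code, not the `AsciiFormatter` implementation) shows the idea:

```python
# Illustrative sketch of bounded tabular output (not AsciiFormatter's
# actual code): clip each line to `width` characters and keep at most
# `height` rows, so huge tables still log in bounded space.
def bounded(rows, width=120, height=20):
    clipped = [r[:width] for r in rows[:height]]
    if len(rows) > height:
        clipped.append("...")  # mark the truncation
    return "\n".join(clipped)

rows = [f"row {i} " + "x" * 200 for i in range(50)]
out = bounded(rows, width=20, height=3)
print(len(out.splitlines()))  # 4 (3 clipped rows plus "...")
```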
## QDB Files
Ordinary `.qdb` files written with `set` can be read by `kfromfile` or written by `ktofile`:
```python
>>> from aiokdb.files import kfromfile, ktofile
>>> k = kfromfile('test_qdb0/test.qdb')
>>> k
<aiokdb.KObjArray object at 0x7559136d8230>
>>> from aiokdb.format import AsciiFormatter
>>> fmt = AsciiFormatter()
>>> print(fmt.format(k))
[5, hello]
```
Ordinarily `k` is a dictionary representing a KDB namespace containing other objects.
There is no support for splayed or partitioned datasets; however, the primitives are (or at least used to be) the same, so this would be possible.
## Char vectors
Char vectors are created with `cv` and symbols with `ks`, both from a string. Both support returning an immutable Python string via the accessor `.aS()`. A char vector can also be viewed as a *mutable* array of characters with `.kC()` (writes go back to the `KObj`), although this is an unusual pattern in Python:
```python
>>> x = cv('hello')
>>> x.kC()
array('u', 'hello')
>>> x.aS()
'hello'
>>> x.kC()[3] = 'u'; x
cv('heluo')
# With symbol
>>> ks("hello").aS()
'hello'
>>> ks("hello").kC()
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "lib/python3.9/site-packages/aiokdb/__init__.py", line 262, in kC
raise self._te()
aiokdb.WrongTypeForOperationError: Not available for KS (-11)
```
## Tests
The library has extensive test coverage; however, de-serialisation of certain (obscure) `KObj` may not be fully supported yet. PRs welcome. All tests are pure Python except those in `test/test_rpc.py`, which test against a real KDB server if you set the `KDB_PYTEST_SERVICE` environment variable (to a URL of the form `kdb://user:password@hostname:port`); otherwise that test is skipped.
* Linting with `ruff check .`
* Formatting with `ruff format .`
* Check type annotations with `mypy --strict .`
* Run `pytest .` in the root directory
## License
[](https://app.fossa.com/projects/git%2Bgithub.com%2FTeaEngineering%2Faiokdb?ref=badge_large)
| text/markdown | null | Chris Shucksmith <chris@shucksmith.co.uk> | null | null | null | null | [
"Programming Language :: Python :: 3.8",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent"
] | [] | null | null | >=3.8 | [] | [] | [] | [] | [] | [] | [] | [
"Homepage, https://github.com/TeaEngineering/aiokdb",
"Issues, https://github.com/TeaEngineering/aiokdb/issues"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-21T09:51:05.069600 | aiokdb-0.1.36.tar.gz | 50,318 | 58/3f/37ff836b8b983a4a260276d02404362bef1fa8762171e775b3eee43652fa/aiokdb-0.1.36.tar.gz | source | sdist | null | false | 22d02653c68865c282e8db231264f96f | 9bfa7760171b4273e04fd8633ce6169b122c8f5e6fd7b43eff667a5e1d1fbb08 | 583f37ff836b8b983a4a260276d02404362bef1fa8762171e775b3eee43652fa | null | [
"LICENSE"
] | 216 |
2.4 | pygtkspellcheck | 5.0.4 | A simple but quite powerful spellchecking library for GTK written in pure Python. | # Python GTK Spellcheck
[](https://pypi.python.org/pypi/pygtkspellcheck)
[](https://pygtkspellcheck.readthedocs.org/en/latest/)
Python GTK Spellcheck is a simple but quite powerful spellchecking library for GTK written in pure Python. Its spellchecking component is based on [Enchant](http://www.abisource.com/projects/enchant/) and it supports both GTK 3 and 4 via [PyGObject](https://live.gnome.org/PyGObject/).
**⚡️ News:** Thanks to [@cheywood](https://github.com/cheywood), Python GTK Spellcheck now supports GTK 4! 🎉
**🟢 Status:** This project is mature, actively maintained, and open to contributions and co-maintainership.
## ✨ Features
- **spellchecking** based on [Enchant](http://www.abisource.com/projects/enchant/) for `GtkTextView`
- support for word, line, and multiline **ignore regular expressions**
- support for both **GTK 3 and 4** via [PyGObject](https://live.gnome.org/PyGObject/) for Python 3
- configurable extra word characters such as `'`
- localized names of the available languages based on [ISO-Codes](http://pkg-isocodes.alioth.debian.org/)
- support for custom ignore tags and hot swap of `GtkTextBuffer`
- support for Hunspell (LibreOffice) and Aspell (GNU) dictionaries
<p align="center">
<img src="https://raw.githubusercontent.com/koehlma/pygtkspellcheck/master/docs/screenshots/screenshot.png" alt="Screenshot" />
</p>
## 🚀 Getting Started
Python GTK Spellcheck is available from the [Python Package Index](https://pypi.python.org/pypi/pygtkspellcheck):
```sh
pip install pygtkspellcheck
```
Depending on your distribution, you may also find Python GTK Spellcheck in your package manager.
For instance, on Debian you may want to install the [`python3-gtkspellcheck`](https://packages.debian.org/bullseye/python3-gtkspellcheck) package.
## 🥳 Showcase
Over time, several projects have used Python GTK Spellcheck or are still using it. Among those are:
- [Nested Editor](http://nestededitor.sourceforge.net/about.html): “Specialized editor for structured documents.”
- [Cherry Tree](http://www.giuspen.com/cherrytree/): “A hierarchical note taking application, […].”
- [Zim](http://zim-wiki.org/): “Zim is a graphical text editor used to maintain a collection of wiki pages.”
- [REMARKABLE](http://remarkableapp.github.io/): “The best markdown editor for Linux and Windows.”
- [RedNotebook](http://rednotebook.sourceforge.net/): “RedNotebook is a modern journal.”
- [Reportbug](https://packages.debian.org/stretch/reportbug): “Reports bugs in the Debian distribution.”
- [UberWriter](http://uberwriter.wolfvollprecht.de/): “UberWriter is a writing application for markdown.”
- [Gourmet](https://github.com/thinkle/gourmet): “Gourmet Recipe Manager is a manager, editor, and organizer for recipes.“
## 🔖 Versions
Version numbers follow [Semantic Versioning](http://semver.org/). However, the update from 3 to 4 pertains only to API-incompatible changes in `oxt_extract`, not the spellchecking component. The update from 4 to 5 removed support for Python 2, GTK 2, `pylocales`, and the `oxt_extract` API. Otherwise, the API is still compatible with version 3.
## 📚 Documentation
The documentation is available at [Read the Docs](http://pygtkspellcheck.readthedocs.org/).
## 🏗 Contributing
We welcome all kinds of contributions! ❤️
For minor changes and bug fixes feel free to simply open a pull request. For major changes impacting the overall design of Python GTK Spellcheck, please first [start a discussion](https://github.com/koehlma/pygtkspellcheck/discussions/new?category=ideas) outlining your idea.
By submitting a PR, you agree to license your contributions under “GPLv3 or later”.
| text/markdown | Maximilian Köhl | mail@koehlma.de | null | null | GPL-3.0-or-later | null | [
"Development Status :: 5 - Production/Stable",
"Environment :: X11 Applications :: GTK",
"Environment :: X11 Applications :: Gnome",
"Intended Audience :: Developers",
"License :: OSI Approved :: GNU General Public License v3 or later (GPLv3+)",
"Operating System :: MacOS",
"Operating System :: Microsoft :: Windows",
"Operating System :: POSIX",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.7",
"Programming Language :: Python :: 3.8",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: 3.14",
"Topic :: Software Development :: Localization"
] | [] | https://github.com/koehlma/pygtkspellcheck | null | <4.0,>=3.7 | [] | [] | [] | [
"pyenchant<4.0,>=3.0",
"PyGObject<4.0.0,>=3.42.1",
"sphinx<5.0.0,>=4.5.0; extra == \"docs\"",
"myst-parser<0.19.0,>=0.18.0; extra == \"docs\""
] | [] | [] | [] | [
"Homepage, https://github.com/koehlma/pygtkspellcheck",
"Repository, https://github.com/koehlma/pygtkspellcheck.git"
] | poetry/2.2.1 CPython/3.13.11 Linux/6.19.2 | 2026-02-21T09:50:59.383161 | pygtkspellcheck-5.0.4.tar.gz | 45,040 | e6/88/ee26f12225d4e8e39bc0190ec7c69076554436e51407379799ca295bd7ca/pygtkspellcheck-5.0.4.tar.gz | source | sdist | null | false | 5d0cadd99db10418c91f21eed470a0c8 | a693defede048360b6b61e7de84f4fc9d6a0421b6d908922ad0513cfd6280ce3 | e688ee26f12225d4e8e39bc0190ec7c69076554436e51407379799ca295bd7ca | null | [] | 233 |
2.4 | medicafe | 0.260221.2 | MediCafe | # MediCafe
MediCafe is a healthcare workflow automation suite for claims, eligibility, remittance, and operations support.
## What It Covers
- `MediBot`: local preprocessing and Medisoft-oriented workflow automation
- `MediLink`: claims flow, payer API integration, eligibility/status, remittance handling
- `cloud/orchestrator`: Cloud Run/PubSub/Firestore pipeline for Gmail ingestion and queueing
- `xp_client`: XP-compatible daemon for cloud queue consumption
## Choose Your Runtime Track
MediCafe intentionally supports two runtime tracks.
| Track | Scope | Python | Primary Use |
|---|---|---|---|
| XP / Local | `MediBot` + `MediLink` legacy-compatible clinic workflows | 3.4.4 | Production-style local runtime |
| Cloud / Orchestrator | `cloud/orchestrator` setup, deployment, runtime operations | 3.11+ | Cloud ingestion and ops tooling |
This split is expected and by design.
## Quick Start
Run the interactive launcher:
```bash
medicafe launcher
```
Or use module form:
```bash
python -m MediCafe launcher
```
Common task commands:
- `medicafe medilink`
- `medicafe claims_status`
- `medicafe deductible`
- `medicafe preflight`
- `medicafe send_error_report`
## CLI Reference
Available commands:
- `medicafe launcher`
- `medicafe medibot [config_file]`
- `medicafe medilink`
- `medicafe claims_status`
- `medicafe deductible`
- `medicafe download_emails`
- `medicafe cloud_daemon [config_file]`
- `medicafe send_error_report`
- `medicafe send_queued_error_reports`
- `medicafe docx_index_rebuild`
- `medicafe preflight`
- `medicafe reconcile`
- `medicafe version`
## Preflight Check
Run before workflows to catch environment issues:
```bash
medicafe preflight
```
Preflight is also available from the launcher, and it runs automatically before local workflows (MediLink, MediBot, download_emails, etc.) when risk signals change (first run of the session, a config/crosswalk file change, or a prior preflight failure). Output: PASS/WARN/FAIL lines with remediation hints. The exit code is 0 if there are no failures, non-zero if any check FAILs. In scripted mode (`--auto-choice`, `--direct-action`), preflight failures abort without prompting.
## Install and Dev Notes
### Legacy package path (XP runtime compatibility)
Package metadata targets Python 3.4.x runtime compatibility.
```bash
pip install medicafe
```
### Active repo workflow (recommended for maintainers)
```bash
git clone https://github.com/katanada2/MediCafe.git
cd MediCafe
```
For cloud tooling, run with Python 3.11+ directly from repo scripts (for example `cloud/orchestrator/validate_and_complete_setup.py`).
## Configuration
Primary local config files:
- `json/config.json`
- `json/crosswalk.json`
- `json/medisoftconfig.json`
## Documentation Map
Start here:
- `docs/README.md`
- `docs/MEDICAFE_MASTER_GUIDE.md`
- `docs/MEDICAFE_API_ARCHITECTURE.md`
- `docs/MEDILINK_CLOUD_MIGRATION.md`
Cloud-specific docs:
- `docs/architecture/MediLink_Gmail_GCP_Implementation.md`
- `docs/runbooks/MediLink_Gmail_Orchestrator_Operations.md`
## Security and Compliance
- Avoid PHI in logs
- Use endpoint-specific OAuth/token flows as documented
- Use `send_error_report` for diagnostics instead of ad hoc data exports
## License
MIT License. See `LICENSE`.
## Author
Daniel Vidaud
daniel@personalizedtransformation.com
| text/markdown | Daniel Vidaud | daniel@personalizedtransformation.com | null | null | MIT | medicafe medibot medilink medisoft automation healthcare claims | [
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.4",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent"
] | [] | https://github.com/katanada2/MediCafe | null | <3.5,>=3.4 | [] | [] | [] | [
"requests==2.21.0",
"openpyxl==2.4.11",
"argparse==1.4.0",
"numpy==1.11.3; platform_python_implementation != \"CPython\" or sys_platform != \"win32\" or python_version > \"3.5\"",
"pandas==0.20.0; platform_python_implementation != \"CPython\" or sys_platform != \"win32\" or python_version > \"3.5\"",
"tqdm==4.14.0",
"lxml==4.2.0; platform_python_implementation != \"CPython\" or sys_platform != \"win32\" or python_version > \"3.5\"",
"python-docx==0.8.11",
"PyYAML==5.2",
"chardet==3.0.4",
"cffi==1.8.2",
"msal==1.26.0",
"numpy==1.11.3; (platform_python_implementation == \"CPython\" and sys_platform == \"win32\" and python_version <= \"3.5\") and extra == \"binary\"",
"pandas==0.20.0; (platform_python_implementation == \"CPython\" and sys_platform == \"win32\" and python_version <= \"3.5\") and extra == \"binary\"",
"lxml==4.2.0; (platform_python_implementation == \"CPython\" and sys_platform == \"win32\" and python_version <= \"3.5\") and extra == \"binary\""
] | [] | [] | [] | [
"Source, https://github.com/katanada2/MediCafe",
"Bug Tracker, https://github.com/katanada2/MediCafe/issues"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-21T09:50:44.343720 | medicafe-0.260221.2.tar.gz | 999,879 | 67/d7/49a6c27f4efa0d95d77242007b31f9a440fb6fee3c71997db0e839b8cb5c/medicafe-0.260221.2.tar.gz | source | sdist | null | false | 1b28287926432d76efbda54adc27282d | bd4c84af27c51a9e4be1a30f7327785cb4f2de7b3367dd873bc80b1676ec3076 | 67d749a6c27f4efa0d95d77242007b31f9a440fb6fee3c71997db0e839b8cb5c | null | [
"LICENSE"
] | 225 |
2.4 | cachekaro | 2.3.1 | Cross-platform storage and cache analyzer & cleaner - Cache Karo! | <div align="center">
# **CacheKaro**
### Cross-Platform Storage & Cache Manager
**CacheKaro** - *Clean It Up!*
[](https://www.python.org/downloads/)
[](LICENSE)
[](#-platform-support)
[](#-development)
⭐ **If you find CacheKaro useful, please consider giving it a star!** ⭐
[Overview](#-overview) · [Installation](#-installation) · [Uninstall](#-uninstall) · [Quick Start](#-quick-start) · [Commands](#-commands) · [Detection](#-what-it-detects) · [Safety](#-safety--risk-levels) · [Export Formats](#-export-formats) · [Config](#-configuration) · [Development](#-development) · [Platform Support](#-platform-support) · [License](#-license)
</div>
---
## ▸ Overview
**CacheKaro** is a cross-platform CLI tool to analyze and clean cache/storage on **macOS**, **Linux** and **Windows**. It automatically discovers caches from all installed applications and games.
### Why CacheKaro?
| # | Feature | Description |
|:-:|---------|-------------|
| 1 | **Auto-Discovery** | Automatically detects 300+ known apps and any new software you install |
| 2 | **Cross-Platform** | One tool for macOS, Linux and Windows |
| 3 | **Developer Friendly** | Cleans npm, pip, Gradle, Maven, Cargo, Go, Docker and more |
| 4 | **Game Support** | Steam, Epic Games, Riot Games, Battle.net, Minecraft and more |
| 5 | **Creative Suite** | Adobe CC, DaVinci Resolve, Blender, Ableton, AutoCAD and more |
| 6 | **Safe by Default** | Risk-based classification prevents accidental data loss |
| 7 | **Beautiful Reports** | Cyberpunk-themed HTML reports with charts |
---
## ▸ Installation
### • Prerequisites
- Python 3.9 or higher
- pip (Python package manager)
### • Install via pip (Recommended)
```bash
pip install cachekaro
```
### • Install from Source
```bash
# 1. Clone the repository
git clone https://github.com/Mohit-Bagri/cachekaro.git
# 2. Navigate to the ROOT folder (not cachekaro/cachekaro)
cd cachekaro
# 3. Create and activate virtual environment (recommended)
python3 -m venv venv
source venv/bin/activate # macOS/Linux
# OR
.\venv\Scripts\activate # Windows
# 4. Install CacheKaro
pip install -e .
```
### • Verify Installation
```bash
cachekaro --version
```
> **Note:** If installed from source, the `cachekaro` command only works when the virtual environment is activated. Always run `source venv/bin/activate` before using CacheKaro.
### • 🚀 Getting Started (Run These After Install!)
```bash
# See what's taking up space
cachekaro analyze
# View system info
cachekaro info
# Generate a detailed HTML report
cachekaro report
# Clean caches safely (interactive)
cachekaro clean
# Get help
cachekaro --help
```
---
## ▸ Uninstall
```bash
pip uninstall cachekaro
```
To also remove configuration files:
| Platform | Command |
|----------|---------|
| macOS/Linux | `rm -rf ~/.config/cachekaro` |
| Windows | `rmdir /s %APPDATA%\cachekaro` |
---
## ▸ Quick Start
```bash
# ► Analyze your storage
cachekaro analyze
# ► Preview what can be cleaned (safe mode)
cachekaro clean --dry-run
# ► Clean caches interactively
cachekaro clean
# ► Auto-clean all safe items without prompts
cachekaro clean --auto
# ► Generate cyberpunk HTML report
cachekaro report --output report.html
```
---
## ▸ Commands
### • `cachekaro analyze`
Scans and displays all cache/storage usage on your system.
```bash
cachekaro analyze # Basic analysis
cachekaro analyze -f json # Output as JSON
cachekaro analyze -f csv -o data.csv # Export to CSV
cachekaro analyze -c browser # Only browser caches
cachekaro analyze --min-size 100MB # Only items > 100MB
cachekaro analyze --stale-days 7 # Mark items older than 7 days as stale
```
| Option | Short | Description | Default |
|--------|-------|-------------|---------|
| `--format` | `-f` | Output format: `text`, `json`, `csv` | `text` |
| `--output` | `-o` | Save output to file | stdout |
| `--category` | `-c` | Filter: `browser`, `development`, `game`, `application`, `system` | all |
| `--min-size` | — | Minimum size filter (e.g., `50MB`, `1GB`) | `0` |
| `--stale-days` | — | Days threshold for stale detection | `30` |
---
### • `cachekaro clean`
Removes cache files based on selected criteria.
```bash
cachekaro clean # Interactive mode
cachekaro clean --dry-run # Preview only, no deletion
cachekaro clean --auto # Auto-clean without prompts
cachekaro clean --auto --risk moderate # Include moderate risk items
cachekaro clean -c browser # Clean only browser caches
cachekaro clean --stale-only # Clean only stale items
```
| Option | Description | Default |
|--------|-------------|---------|
| `--dry-run` | Preview what would be deleted without actually deleting | `false` |
| `--auto` | Automatically clean all items without confirmation prompts | `false` |
| `--category` | Category to clean: `browser`, `development`, `game`, `application`, `system` | all |
| `--risk` | Maximum risk level: `safe`, `moderate`, `caution` | `safe` |
| `--stale-only` | Only clean items older than stale threshold | `false` |
---
### • `cachekaro report`
Generates detailed visual reports with charts.
```bash
cachekaro report # Generate HTML report
cachekaro report -o myreport.html # Custom filename
cachekaro report -f json -o report.json # JSON format
```
| Option | Short | Description | Default |
|--------|-------|-------------|---------|
| `--format` | `-f` | Report format: `html`, `json`, `csv`, `text` | `html` |
| `--output` | `-o` | Output file path | `cachekaro_report_<timestamp>.html` |
---
### • `cachekaro info`
Displays system information and CacheKaro configuration.
```bash
cachekaro info
```
---
### • `cachekaro update`
Check for updates and get upgrade instructions.
```bash
cachekaro update # Check for new versions
```
CacheKaro automatically notifies you when a new version is available each time you run a command.
---
## ▸ What It Detects
### • Automatic Discovery
CacheKaro automatically scans standard cache directories and identifies **any** application by its folder name. It recognizes 300+ known apps with friendly names.
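As a rough illustration (this is our own sketch, not CacheKaro's implementation), a discovery pass of this kind can be reduced to walking a cache root and attributing bytes to each top-level folder:

```python
import os

# Illustrative sketch only (not cachekaro's code): walk a cache root and
# total the bytes under each top-level folder, which is the kind of
# per-application attribution the auto-discovery pass produces.
def cache_usage(root: str) -> dict:
    usage = {}
    for entry in os.scandir(root):
        if not entry.is_dir(follow_symlinks=False):
            continue
        total = 0
        for dirpath, _, filenames in os.walk(entry.path):
            for name in filenames:
                try:
                    total += os.path.getsize(os.path.join(dirpath, name))
                except OSError:
                    pass  # files can vanish mid-scan
        usage[entry.name] = total
    return usage
```

For example, `cache_usage(os.path.expanduser("~/.cache"))` would report a per-folder byte total under a Linux cache root; the friendly-name mapping for 300+ known apps would then be layered on top of folder names like these.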
### • Categories
| # | Category | Examples |
|:-:|----------|----------|
| 1 | **Browser** | Chrome, Firefox, Safari, Edge, Brave, Arc, Vivaldi, Opera |
| 2 | **Development** | npm, pip, Cargo, Gradle, Maven, Docker, VS Code, JetBrains, Xcode |
| 3 | **Games** | Steam, Epic Games, Riot Games, Battle.net, Minecraft, Unity, GOG |
| 4 | **Creative** | Photoshop, Premiere Pro, After Effects, DaVinci Resolve, Final Cut Pro |
| 5 | **3D & Design** | Blender, Cinema 4D, Maya, ZBrush, SketchUp, Figma, Sketch |
| 6 | **Audio** | Ableton Live, FL Studio, Logic Pro, Pro Tools, Cubase, GarageBand |
| 7 | **Engineering** | AutoCAD, SolidWorks, Fusion 360, MATLAB, Simulink, Revit |
| 8 | **Applications** | Spotify, Discord, Slack, Zoom, WhatsApp, Notion, Obsidian |
| 9 | **System** | OS caches, temp files, logs, crash reports, font caches |
### • Platform-Specific Paths
| Platform | Locations Scanned |
|----------|-------------------|
| **macOS** | `~/Library/Caches`, `~/.cache`, `~/Library/Logs`, `~/Library/Application Support` |
| **Linux** | `~/.cache`, `~/.config`, `~/.local/share`, `~/.steam`, `~/.var/app` |
| **Windows** | `%LOCALAPPDATA%`, `%APPDATA%`, `%TEMP%`, `%USERPROFILE%` |
---
## ▸ Safety & Risk Levels
| Level | Icon | Description | Examples |
|-------|------|-------------|----------|
| **Safe** | 🟢 | 100% safe to delete, no data loss | Browser cache, npm cache, pip cache, temp files |
| **Moderate** | 🟡 | Generally safe, may require re-login or re-download | HuggingFace models, Maven repo, Docker images |
| **Caution** | 🔴 | Review before deleting, may contain user data | Downloads folder, application data |
```bash
# ► Only clean safe items (default behavior)
cachekaro clean --risk safe
# ► Include moderate risk items
cachekaro clean --risk moderate
# ► Preview caution-level items before cleaning
cachekaro clean --risk caution --dry-run
```
---
## ▸ Export Formats
| # | Format | Use Case | Command Example |
|:-:|--------|----------|-----------------|
| 1 | **Text** | Terminal output with colors | `cachekaro analyze` |
| 2 | **JSON** | APIs and automation | `cachekaro analyze -f json` |
| 3 | **CSV** | Spreadsheet analysis | `cachekaro analyze -f csv -o data.csv` |
| 4 | **HTML** | Interactive reports with charts | `cachekaro report` |
---
## ▸ Configuration
### • Config File Location
| Platform | Path |
|----------|------|
| macOS/Linux | `~/.config/cachekaro/config.yaml` |
| Windows | `%APPDATA%\cachekaro\config.yaml` |
### • Example Config
```yaml
settings:
stale_threshold_days: 30 # Days before item is considered stale
default_format: text # Default output format
color_output: true # Enable colored terminal output
backup_before_delete: false # Create backup before deletion
custom_paths: # Add your own cache paths
- path: ~/my-app/cache
name: My App Cache
category: custom
risk_level: safe
```
---
## ▸ Development
```bash
# ► Setup development environment
git clone https://github.com/Mohit-Bagri/cachekaro.git
cd cachekaro
python3 -m venv venv
source venv/bin/activate
pip install -e ".[dev]"
# ► Run tests
pytest
# ► Linting & type checking
ruff check .
mypy cachekaro
```
---
## ▸ Platform Support
| OS | Python 3.9 | Python 3.10 | Python 3.11 | Python 3.12 |
|----|:----------:|:-----------:|:-----------:|:-----------:|
| macOS | ✓ | ✓ | ✓ | ✓ |
| Ubuntu | ✓ | ✓ | ✓ | ✓ |
| Windows | ✓ | ✓ | ✓ | ✓ |
---
## ▸ License
MIT License — see [LICENSE](LICENSE)
---
<div align="center">
Made in 🇮🇳 with ❤️ by [MOHIT BAGRI](https://github.com/Mohit-Bagri)
**CacheKaro** - *Clean It Up!*
⭐ **Star this repo if you found it helpful!** ⭐
</div>
| text/markdown | null | MOHIT BAGRI <mailmohitbagri@gmail.com> | null | MOHIT BAGRI <mailmohitbagri@gmail.com> | null | cache, storage, cleanup, disk-space, system-maintenance, cross-platform, macos, linux, windows | [
"Development Status :: 4 - Beta",
"Environment :: Console",
"Intended Audience :: Developers",
"Intended Audience :: System Administrators",
"Intended Audience :: End Users/Desktop",
"Operating System :: OS Independent",
"Operating System :: MacOS :: MacOS X",
"Operating System :: POSIX :: Linux",
"Operating System :: Microsoft :: Windows",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Topic :: System :: Systems Administration",
"Topic :: Utilities"
] | [] | null | null | >=3.9 | [] | [] | [] | [
"pyyaml>=6.0",
"colorama>=0.4.6",
"pytest>=7.0; extra == \"dev\"",
"pytest-cov>=4.0; extra == \"dev\"",
"ruff>=0.1.0; extra == \"dev\"",
"mypy>=1.0; extra == \"dev\"",
"types-PyYAML>=6.0; extra == \"dev\"",
"rich>=13.0; extra == \"rich\""
] | [] | [] | [] | [
"Homepage, https://github.com/Mohit-Bagri/cachekaro",
"Documentation, https://github.com/Mohit-Bagri/cachekaro#readme",
"Repository, https://github.com/Mohit-Bagri/cachekaro.git",
"Issues, https://github.com/Mohit-Bagri/cachekaro/issues",
"Changelog, https://github.com/Mohit-Bagri/cachekaro/blob/main/CHANGELOG.md"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-21T09:49:55.509758 | cachekaro-2.3.1.tar.gz | 61,204 | c9/39/f50c85baa47e9ccc2470545807b35dfeaee9c8b0dc60d754353799c80715/cachekaro-2.3.1.tar.gz | source | sdist | null | false | ff7bcc9e04d83a84f1d42d2b3245d439 | 3ebdb6171312373c4b95aeae346daa4161ff79036e5026d6be2cb64c28150164 | c939f50c85baa47e9ccc2470545807b35dfeaee9c8b0dc60d754353799c80715 | MIT | [
"LICENSE"
] | 201 |
2.4 | unaiverse | 0.1.15 | UNaIVERSE: A Collectionless AI Project. The new web of humans & AI Agents, built on privacy, control, and reduced energy consumption. | <div align="center">
<h1 style="text-align: center;">Welcome to UNaIVERSE ~ https://unaiverse.io</h1>
<img src="./assets/caicat_planets.png" alt="UNaIVERSE Logo" style="width:450px;">
</div>
<br>
<p align="center">
<em>Welcome to a new "UN(a)IVERSE," where humans and artificial agents coexist, interact, learn from each other, grow together, in a privacy and low-energy oriented reality.</em>
</p>
<br>
UNaIVERSE is a project framed in the context of [Collectionless AI](https://collectionless.ai), our perspective on Artificial Intelligence rooted in **privacy**, **low energy consumption**, and, more importantly, a **decentralized** model.
UN(a)IVERSE is a **peer-to-peer network**, aiming to become the new incarnation of the Web, combining (in the long run) the principles of Social Networks and AI under a **privacy** lens—a perspective that is crucial given how the Web, especially Social Networks, and AI are used today by both businesses and individual users.
- Enter UNaIVERSE: [**UNaIVERSE portal (login/register)**](https://unaiverse.io)
- Check our preprint of Collectionless AI & UNaIVERSE, to explore [**UNaIVERSE features**](./UNaIVERSE_techrep.pdf)
- Read more on our ideas: [**Collectionless AI website**](https://collectionless.ai)
---
## 🚀 Features
Check our presentation, starting from Collectionless AI and ending up in [**UNaIVERSE and its features**](./UNaIVERSE.pdf).
UNaIVERSE is a peer-to-peer network where each node is either a **world** or an **agent**. What can you do?
- You can create your own **agents**, based on [PyTorch modules](https://pytorch.org/), and, depending on their capabilities, they are ready to join the existing worlds and interact with others. Feel free to join a world, stay there for a while, leave it and join another one! Agents can also just showcase your technology without joining any world, becoming what we call **lone wolves**.
- You can create your own **worlds** as well. Different worlds are about different topics, tasks, whatever (think about a school, a shop, a chat room, an industrial plant, ...), and you don't have to write any code to let your agent participate in a world! It is the world designer that defines the expected **roles** and corresponding agent **behaviors** (special State Machines): join a world, get a role, and you are ready to behave coherently with your role!
- In UNaIVERSE, you, as a **human**, are an agent like any other. The browser is your interface to UNaIVERSE, and you are already set up! No need to install anything: just jump into the UNaIVERSE portal, log in, and you are a citizen of UNaIVERSE.
Remarks:
- *Are you a researcher?* This is a perfect setting to study models that learn over time (Lifelong/Continual Learning) and the social dynamics of different categories of models! Feel free to propose novel ideas to exploit UNaIVERSE in your research!
- *Are you in the industry or, more generally, business oriented?* **Think about privacy-oriented solutions that we can build over this new UN(a)IVERSE!**
---
## ⚡ Status
- Very first version: we think it will always stay alpha/beta/whatever 😎, but right now there are many features we plan to add and several parts to improve, **thanks to your feedback!**
- Missing features (work in progress): mobile agents running on a dedicated Web App; customizable UIs for human agents in the browser; fully decentralized discovery of new peers; actual social-network features (what is there now is very preliminary and does not really showcase where we want to go)
---
## 📦 Installation
Jump to [https://unaiverse.io](https://unaiverse.io), create a new account (free!) or log in with an existing one. If you have not already done so, click on the top-right icon with "a person" on it:
<img src="./assets/unaiverse8443-me.png" alt="UNaIVERSE Logo" style="width:150px;">
Then click on "Generate a Token":
<img src="./assets/unaiverse8443-token.png" alt="UNaIVERSE Logo" style="width:500px;">
**COPY THE TOKEN**, you won't be able to see it again! Now, let's focus on Python:
```bash
pip install unaiverse
```
That's it. Of course, if you want to dive into the details, you can find the source code here in this repo.
---
## 🛠 Mini Tutorial
The simplest possible usage does not exploit the real features of UNaIVERSE, but it is a good way to get in touch with UNaIVERSE itself.
You can **showcase** your PyTorch networks (actually, any model that subclasses PyTorch's [*torch.nn.Module*](https://docs.pytorch.org/docs/stable/generated/torch.nn.Module.html) class) as follows. Let's focus on ResNet for simplicity.
Alright, let's discuss the code in the [assets/tutorial](./assets/tutorial) folder of this repo, composed of numbered scripts.
### Step A1. Do you know how to set up a network in PyTorch?
Let us set up a ResNet50 in the most basic PyTorch manner. The code is composed of a **generator of tensors** interpreted as pictures (actually, an ugly tensor with randomly colored pixels) and a pretrained **resnet classifier**, which classifies the pictures by generating a probability distribution over 1,000 classes. Try to run [script 1](./assets/tutorial/A_move_to_unaiverse/1_generator_and_resnet.py) from the [assets/tutorial](./assets/tutorial) folder. We report it here; carefully read the comments!
```python
import torch
import torchvision
# Downloading PyTorch module (ResNet)
net = torchvision.models.resnet50(weights="IMAGENET1K_V1").eval()
# Generating a random image (don't care about it, it is just a toy example,
# think it is a nice image!)
inp = torch.rand((1, 3, 224, 224), dtype=torch.float32)
# Inference: expects as input a tensor of type torch.float32, custom width and
# height, but 3 channels and batch dimension must be there; the output is a
# tensor with shape (1, 1000), i.e., a tensor in which batch dimension is
# present and then 1000 elements.
out = net(inp)
# Print shapes
print(f"Input shape: {tuple(inp.shape)}, dtype: {inp.dtype}")
print(f"Output shape: {tuple(out.shape)}, dtype: {out.dtype}")
```
### Step A2. Let's create UNaIVERSE agents!
We are going to create two agents, **independently running and possibly located in different places/machines**.
- One is based on the **resnet classifier**, waiting to be asked (by some other agents) for a prediction about a given image.
- The other is the **generator of tensors**, ready to generate a tensor (representation of a picture) and ask another agent to classify it.
Here is the **resnet classifier** agent, running forever and waiting for somebody to ask for a prediction, taken from [script 2](./assets/tutorial/A_move_to_unaiverse/2_agent_resnet.py) in the [assets/tutorial](./assets/tutorial) folder:
```python
import torch
import torchvision
from unaiverse.agent import Agent
from unaiverse.dataprops import Data4Proc
from unaiverse.networking.node.node import Node
# Downloading PyTorch module (ResNet)
net = torchvision.models.resnet50(weights="IMAGENET1K_V1").eval()
# Agent: we pass the network as "processor".
# Check the input and output properties of the processor, they are coherent with the
# input and output shapes of ResNet; here "None" means "whatever, but this axis must be
# there!". By default, this agent will act as a serving "lone wolf", serving whoever asks for
# a prediction.
agent = Agent(proc=net,
              proc_inputs=[Data4Proc(data_type="tensor", tensor_shape=(None, 3, None, None),
                                     tensor_dtype=torch.float32)],
              proc_outputs=[Data4Proc(data_type="tensor", tensor_shape=(None, 1000),
                                      tensor_dtype=torch.float32)])
# Node hosting the agent: a node with this name will be created in your account, if it
# does not already exist; "hidden" means that only you can see it in UNaIVERSE (since it
# is just a test!); the clock speed can be tuned according to your needs and computing
# power.
node = Node(node_name="Test0", hosted=agent, hidden=True, clock_delta=1. / 5.)
# Running node (forever)
node.run()
```
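The `None` entries in `tensor_shape` act as axis wildcards. As a quick mental model, the matching rule can be sketched like this (`shape_matches` is a purely illustrative helper, not part of the UNaIVERSE API):

```python
def shape_matches(spec, shape):
    # "None" in the spec means "any size, but this axis must exist";
    # concrete integers must match exactly.
    if len(spec) != len(shape):
        return False
    return all(s is None or s == d for s, d in zip(spec, shape))

print(shape_matches((None, 3, None, None), (1, 3, 224, 224)))  # True
print(shape_matches((None, 3, None, None), (1, 1, 224, 224)))  # False: channel axis must be 3
print(shape_matches((None, 1000), (1, 1000)))                  # True
```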
Run it. Now, here is the agent capable of **generating tensors** (let's say images), which is asked to get in touch with the resnet agent, taken from [script 3](./assets/tutorial/A_move_to_unaiverse/3_agent_generator.py) in the [assets/tutorial](./assets/tutorial) folder:
```python
import torch
from unaiverse.agent import Agent
from unaiverse.dataprops import Data4Proc
from unaiverse.networking.node.node import Node
# Custom generator network: a module that simply generates an image with
# "random" pixel intensities; we will use this as processor of our new agent.
class Net(torch.nn.Module):

    def __init__(self):
        super().__init__()

    # The input will be ignored, and a default None value is needed
    def forward(self, x: torch.Tensor | None = None):
        inp = torch.rand((1, 3, 224, 224), dtype=torch.float32)
        print(f"Generated data shape: {tuple(inp.shape)}, dtype: {inp.dtype}")
        return inp
# Agent: we use the generator as processor.
agent = Agent(proc=Net(),
              proc_inputs=[Data4Proc(data_type="all")],  # Able to get every type of data (since it won't use it :))
              proc_outputs=[Data4Proc(data_type="tensor", tensor_shape=(1, 3, 224, 224),
                                      tensor_dtype=torch.float32)])  # These are the properties of the generator output
# To retrieve the result we got from the ResNet agent, we define a hook
# that will be called at the end of every run cycle
def hook(_node: Node):
    # Printing the last received data from the ResNet agent
    _out = _node.agent.get_last_streamed_data('Test0')[0]
    if _out is not None:
        _node.agent.print(f"Received data shape: {tuple(_out.shape)}, dtype: {_out.dtype}")
# Node hosting agent
node = Node(node_name="Test1", hosted=agent, hidden=True, clock_delta=1. / 5., run_hook=hook)
# Running node for 10 seconds
node.run(get_in_touch="Test0", max_time=10.0)
```
Run this script as well: the generator will send its picture through the peer-to-peer network to the resnet agent and get back a prediction.
### Step B1. Embellishment
We can upgrade the **resnet agent** to take real-world images as input, instead of random tensors, and to output class names (text) instead of a probability distribution. All we need to do is redefine the input/output properties of the agent processor and add transformations. Dive into [script 4](./assets/tutorial/B_improve_A_and_use_browser/4_agent_resnet_img_text.py):
```python
import torchvision
import urllib.request
from unaiverse.agent import Agent
from unaiverse.dataprops import Data4Proc
from unaiverse.networking.node.node import Node
# Downloading PyTorch module (ResNet)
net = torchvision.models.resnet50(weights="IMAGENET1K_V1").eval()
# Getting input transforms from PyTorch model
transforms = torchvision.transforms.Compose([
    torchvision.transforms.Lambda(lambda x: x.convert("RGB")),
    torchvision.models.ResNet50_Weights.IMAGENET1K_V1.transforms(),
    torchvision.transforms.Lambda(lambda x: x.unsqueeze(0))
])
# Getting output class names
with urllib.request.urlopen("https://raw.githubusercontent.com/pytorch/hub/master/imagenet_classes.txt") as f:
    c_names = [line.strip().decode('utf-8') for line in f.readlines()]
# Agent: we change the data type to be able to handle a stream of images (instead of tensors).
# We can customize the transformations from the streamed format to the processor inference
# format (any callable is fine!). Similarly, we can customize the way we go from the actual
# output of the processor to what will be streamed (here we go from class probabilities to
# the winning class name).
agent = Agent(proc=net,
              proc_inputs=[Data4Proc(data_type="img", stream_to_proc_transforms=transforms)],
              proc_outputs=[Data4Proc(data_type="text", proc_to_stream_transforms=lambda p: c_names[p.argmax(1)[0]])])
# Node hosting agent
node = Node(node_name="Test0", hosted=agent, hidden=True, clock_delta=1. / 5.)
# Running node
node.run()
```
Now let us promote the **generator** to an agent that downloads and offers a picture of a cat and expects to get back a text description of it (the class name in this case - this is [script 5](./assets/tutorial/B_improve_A_and_use_browser/5_agent_generator_img.py)):
```python
import torch
import urllib.request
from PIL import Image
from io import BytesIO
from unaiverse.agent import Agent
from unaiverse.dataprops import Data4Proc
from unaiverse.networking.node.node import Node
# Image offering network: a module that simply downloads and offers an image as its output
class Net(torch.nn.Module):

    def __init__(self):
        super().__init__()

    def forward(self, x: torch.Tensor | None = None):
        with urllib.request.urlopen("https://cataas.com/cat") as response:
            inp = Image.open(BytesIO(response.read()))
        # inp.show()  # Let's see the pic (watch out: random pic with a cat somewhere)
        print(f"Downloaded image shape {inp.size}, type: {type(inp)}, expected-content: cat")
        return inp
# Agent
agent = Agent(proc=Net(),
              proc_inputs=[Data4Proc(data_type="all")],
              proc_outputs=[Data4Proc(data_type="img")],  # A PIL image is being "generated" here
              behav_lone_wolf="ask")
# To retrieve the result we got from the ResNet agent, we define a hook
# that will be called at the end of every run cycle
def hook(_node: Node):
    # Printing the last received data from the ResNet agent
    out = _node.agent.get_last_streamed_data('Test0')[0]
    _node.agent.print(f"Received response: {out}")  # Now we expect a textual response
    _node.agent.print("")
    _node.agent.print("Notice: instead of using this agent, you can also: search for the ResNet node (ResNetAgent) "
                      "in the UNaIVERSE portal, connect to it using our in-browser agent, select a picture from "
                      "your disk, send it to the agent, get back the text response!")
# Node hosting agent
node = Node(node_name="Test1", hosted=agent, hidden=True, clock_delta=1. / 5., run_hook=hook)
# Running node for 45 seconds
node.run(max_time=45.0, get_in_touch="Test0")
```
### Step B2. Connect to your ResNet agent by means of a browser running agent!
Instead of using the artificial generator agent, **you can become the generator agent**!
Search for the ResNet node (ResNetAgent) in the UNaIVERSE portal, connect to it using the in-browser agent, select a picture from your disk, send it to the agent, get back the text response!
### Step C. Unleash UNaIVERSE!
What you have done so far is just showcasing your model. UNaIVERSE is composed of several **worlds** that you can create and customize. Your agent can enter one world at a time, stay there, leave it, enter another one, and so on.
Agents will behave according to what the world indicates, and you don't have to write any extra code to act in worlds you have never been into!
Alright, there are so many things to say, but examples are always a good thing!
We prepared a repository with examples of many worlds and different lone wolves; go there to continue your journey into UNaIVERSE!
*THE TUTORIAL CONTINUES:* [https://github.com/collectionlessai/unaiverse-examples](https://github.com/collectionlessai/unaiverse-examples)
**See you in our UNaIVERSE!**
---
## 📄 License
This project is licensed under the Apache 2.0 License.
Commercial licenses can be provided.
See the [LICENSE](./LICENSE) file for details (research, etc.).
See the Contributor License Agreement [CLA.md](./CLA.md) if you want to contribute.
This project includes third-party libraries. See [THIRD_PARTY_LICENSES.md](./THIRD_PARTY_LICENSES.md) for details.
---
## 📚 Documentation
You can find the API reference in the file [docs.html](./docs.html), which you can view here:
- [API Reference](https://collectionlessai.github.io/unaiverse-docs.github.io/)
---
## 🤝 Contributing
Contributions are welcome!
Please contact us to suggest changes, report bugs, or propose ideas for novel applications based on UNaIVERSE!
---
## 👨💻 Main Authors
- Stefano Melacci (Project Leader) [stefano.melacci@unisi.it](mailto:stefano.melacci@unisi.it)
- Christian Di Maio [christian.dimaio@phd.unipi.it](mailto:christian.dimaio@phd.unipi.it)
- Tommaso Guidi [tommaso.guidi.1998@gmail.com](mailto:tommaso.guidi.1998@gmail.com)
- Marco Gori (Scientific Advisor) [marco.gori@unisi.it](mailto:marco.gori@unisi.it)
---
| text/markdown | null | Stefano Melacci <stefano.melacci@unisi.it>, Christian Di Maio <christian.dimaio@phd.unipi.it>, Tommaso Guidi <tommaso.guidi.1998@gmail.com> | null | Stefano Melacci <stefano.melacci@unisi.it>, Christian Di Maio <christian.dimaio@phd.unipi.it>, Tommaso Guidi <tommaso.guidi.1998@gmail.com> | null | UNaIVERSE, Collectionless AI, AI, Agentic AI, Agents, Machine Learning, Learning Over Time | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"Intended Audience :: End Users/Desktop",
"Intended Audience :: Science/Research",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Topic :: Scientific/Engineering :: Artificial Intelligence"
] | [] | null | null | >3.10 | [] | [] | [] | [
"opencv-python",
"Flask",
"flask_cors",
"graphviz",
"ntplib",
"numpy",
"Pillow",
"protobuf",
"psutil",
"PyJWT",
"cryptography",
"Requests",
"torch",
"torchvision",
"transformers",
"sortedcontainers",
"plotly"
] | [] | [] | [] | [
"A-Homepage, https://unaiverse.io",
"B-CollectionlessAI, https://collectionless.ai",
"C-Source, https://github.com/collectionlessai/unaiverse-src",
"D-Starting, https://github.com/collectionlessai/unaiverse-src/blob/main/README.md",
"E-Examples, https://github.com/collectionlessai/unaiverse-examples",
"F-Documentation, https://github.com/collectionlessai/unaiverse-src/blob/main/README.md"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-21T09:48:44.590231 | unaiverse-0.1.15.tar.gz | 294,430 | b8/fd/1bcd8a95781be1a39ba15f336b149b19a7f023eebfbd53b68ad88eea63b1/unaiverse-0.1.15.tar.gz | source | sdist | null | false | 60e0e9138837f3030b99cfd8c5550c4f | f851a50cdb88cf0062deb733c8a43c1689c075732bb8c9600e7cd8cd87d4fba6 | b8fd1bcd8a95781be1a39ba15f336b149b19a7f023eebfbd53b68ad88eea63b1 | null | [
"LICENSE"
] | 1,726 |
2.4 | lattice-memory | 0.1.0 | Local-first, bicameral memory system for AI agents — remember, evolve, search | # Lattice — Bicameral Memory for AI Agents
> **Solving the "Goldfish Memory" problem for AI Agents**
Lattice is a **bicameral memory system** that enables AI agents to:
- **Remember** your preferences, conventions, and decisions
- **Evolve** by automatically extracting rules from conversations
- **Search** historical dialogues to find relevant context
```
┌─────────────────────────────────────────────────────────┐
│ System 1 (Instinct) │ System 2 (Memory) │
│ ───────────────────── │ ───────────────────── │
│ Always-on Rules │ Searchable Logs │
│ 0ms latency │ ~100ms latency │
│ Markdown files │ SQLite + Vector Search │
│ "Prefer TDD" │ "Last week we fixed auth" │
└─────────────────────────────────────────────────────────┘
▲
│ Compiler (LLM)
│ Extracts patterns, generates rules
│
┌───────┴───────┐
│ store.db │
│ (chat logs) │
└───────────────┘
```
## Installation
```bash
# Install from GitHub
pip install git+https://github.com/tefx/lattice.git
# Or clone and install locally
git clone https://github.com/tefx/lattice.git
cd lattice
pip install -e .
```
## Quick Start
### 1. Initialize
```bash
lattice init
```
This creates:
- `./.lattice/` — Project-level memory directory
- `~/.config/lattice/` — Global configuration directory
### 2. Configure LLM
Edit `~/.config/lattice/config.toml`:
```toml
[compiler]
# Recommended: GPT-5 Mini (best cost-performance ratio)
model = "openrouter/openai/gpt-5-mini"
api_key_env = "OPENROUTER_API_KEY"
# Or use opencode CLI (zero configuration)
# model = "cli:opencode:openrouter/openai/gpt-5-mini"
[thresholds]
warn_tokens = 3000
alert_tokens = 5000
[safety]
auto_apply = true
backup_keep = 10
```
Set environment variable:
```bash
export OPENROUTER_API_KEY="sk-or-..."
```
### 3. Data Capture
**Option A: OpenCode Plugin (Recommended)**
```bash
# Install plugin (TypeScript, single file)
mkdir -p ~/.config/opencode/plugins
curl -o ~/.config/opencode/plugins/lattice-capture.ts \
https://raw.githubusercontent.com/tefx/lattice/main/plugins/opencode-lattice/index.ts
# Or clone and copy:
git clone https://github.com/tefx/lattice.git
mkdir -p ~/.config/opencode/plugins
cp lattice/plugins/opencode-lattice/index.ts ~/.config/opencode/plugins/lattice-capture.ts
# Restart opencode
opencode
```
OpenCode uses Bun runtime with native TypeScript support — no build step required.
**Alternative: Install from npm (once published)**
```json
// ~/.config/opencode/opencode.json
{
"plugin": ["opencode-lattice"]
}
```
**Option B: MCP Server**
```bash
# Start MCP server
lattice serve
# Configure MCP in Claude Code / opencode
```
**Option C: Python SDK**
```python
import lattice
client = lattice.Client()
# Log conversation
client.log_turn(
user="Fix the auth bug",
assistant="I updated the middleware...",
)
# Search history
results = client.search("auth bug", limit=5)
# Get rules
instincts = client.get_instincts()
```
### 4. Evolve Rules
```bash
# Incremental evolution (only new sessions)
lattice evolve
# Check status
lattice status
# Apply/reject proposals
lattice apply <proposal>
lattice revert
```
## CLI Commands
```bash
# Initialization
lattice init # Initialize project (also creates global config)
# Evolution
lattice evolve # Project-level evolution
lattice evolve --global # Global evolution (cross-project patterns)
lattice status # Check status
# Search
lattice search "query" # Search project memory
lattice search --global "query" # Search global memory
# Rule Management
lattice apply <proposal> # Apply proposal
lattice revert # Rollback
# MCP
lattice serve # Start MCP server
# Configuration
lattice config init --global # Create global config.toml
lattice config show --global # Show current config
```
## Configuration
Lattice creates a comprehensive config file with all options documented.
### Create Config
```bash
# Create global config with all options documented
lattice config init --global
# Or force overwrite existing config
lattice config init --global --force
```
`lattice init` automatically creates the global config if it doesn't exist.
### Config File Location
```
~/.config/lattice/config.toml # Global config
```
### Quick Setup
```bash
# 1. Initialize project (creates config if missing)
lattice init
# 2. Save your API key
lattice auth login openai
# API Key for openai: ********
# ✓ Saved API key for openai
# Key will be used automatically (no config.toml changes needed)
# 3. Edit model in config (optional)
vim ~/.config/lattice/config.toml
# Change model = "openai/gpt-5-mini" to your preferred model
# 4. Start using
lattice evolve
```
## Shell Completion
Lattice supports shell completion for **bash**, **zsh**, **fish**, and **PowerShell**.
### Install Completion
```bash
# Auto-detect shell and install
lattice completion --install
# Or specify shell explicitly
lattice completion --shell bash --install
lattice completion --shell zsh --install
lattice completion --shell fish --install
lattice completion --shell powershell --install
```
After installation, restart your terminal or source the completion file:
```bash
# Bash
source ~/.bash_completions/lattice.sh
# Zsh (add to ~/.zshrc)
fpath+=~/.zfunc
autoload -U compinit && compinit
```
### Show Completion Script
To view or manually install the completion script:
```bash
lattice completion --shell zsh
lattice completion --shell bash
lattice completion --shell fish
```
## API Key Configuration
Lattice supports flexible API key configuration with multiple sources and priority resolution.
### Quick Setup (Recommended)
Just run `lattice auth login` and you're done:
```bash
lattice auth login openai
# API Key for openai: ********
# ✓ Saved API key for openai
# Key will be used automatically (no config.toml changes needed)
```
That's it! No need to edit config.toml. The key is stored securely and used automatically.
### Priority Order
Keys are resolved in this order (highest to lowest priority):
1. **Config `api_key`/`api_key_env`** - Explicit configuration in config.toml
2. **Auth storage** - `~/.config/lattice/auth.json` (automatic fallback)
3. **Environment variable** - Standard env vars (e.g., `OPENAI_API_KEY`)
4. **LiteLLM defaults** - LiteLLM's built-in key detection
**Key insight**: If you use `lattice auth login`, you don't need to configure anything in config.toml.
### When to Use config.toml
Only add `api_key` or `api_key_env` to config.toml if you want to:
1. **Override** the auth storage key for a specific model
2. **Use different keys** for different providers
3. **Use environment variables** in CI/CD
```toml
[compiler]
model = "openai/gpt-4"
# Optional: Override auth storage
# api_key = "{env:MY_CUSTOM_KEY}"
# api_key_env = "MY_CUSTOM_KEY"
```
### Variable Syntax
Use these variable formats in your config (only needed for advanced use cases):
| Syntax | Description | Example |
|--------|-------------|---------|
| `{env:VAR}` | Read from environment variable | `{env:OPENAI_API_KEY}` |
| `{file:/path}` | Read from file | `{file:~/.secrets/openai_key}` |
| `{auth:provider}` | Read from auth storage | `{auth:openai}` |
| Direct key | Plain text (not recommended) | `sk-proj-...` |
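A minimal parser for this syntax might look as follows (purely illustrative; `resolve_value` and its signature are hypothetical, not the real Lattice code):

```python
import os
from pathlib import Path

def resolve_value(raw: str, auth: dict) -> str:
    # {env:VAR} -> read from an environment variable
    if raw.startswith("{env:") and raw.endswith("}"):
        return os.environ.get(raw[5:-1], "")
    # {file:/path} -> read from a file (with ~ expansion)
    if raw.startswith("{file:") and raw.endswith("}"):
        return Path(raw[6:-1]).expanduser().read_text().strip()
    # {auth:provider} -> look up the auth storage
    if raw.startswith("{auth:") and raw.endswith("}"):
        return auth.get(raw[6:-1], "")
    # Anything else is treated as a plain-text key (not recommended)
    return raw
```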
### Auth CLI Commands
Manage your API keys securely:
```bash
# Save an API key (prompts securely with masking)
lattice auth login openai
# Or provide via command line (shown in history, less secure)
lattice auth login openai --key sk-proj-...
# List saved providers (keys are redacted)
lattice auth list
# Test an API key
lattice auth test openai
# Remove an API key
lattice auth logout openai
```
### Security Best Practices
1. **Use auth storage** - Keys are stored with `chmod 0o600` permissions
2. **Avoid `--key` flag** - It shows in shell history; use interactive prompt instead
3. **Avoid direct keys in config** - Never hardcode keys in config files
4. **Use environment variables** - For CI/CD pipelines
### File Location
Auth keys are stored in:
```
~/.config/lattice/auth.json
```
The file is created with `0o600` permissions (owner read/write only).
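On POSIX systems, a file can be created with owner-only permissions from the start, before any secret bytes are written, rather than chmod-ed afterwards. An illustrative sketch (not Lattice's actual code):

```python
import json
import os
import stat
import tempfile
from pathlib import Path

def save_auth(path: Path, keys: dict) -> None:
    # os.open applies the 0o600 mode at creation time, so the file is never
    # world-readable, not even for an instant.
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o600)
    with os.fdopen(fd, "w") as f:
        json.dump(keys, f)

path = Path(tempfile.mkdtemp()) / "auth.json"
save_auth(path, {"openai": "sk-example"})
print(oct(stat.S_IMODE(path.stat().st_mode)))  # 0o600 on POSIX
```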
## Complete Usage Workflow
This section walks through a typical usage cycle from initialization to observing memory effects.
### Week 1: Setup & Data Collection
```bash
# Day 1: Initialize
cd your-project
lattice init
# Configure LLM for Compiler
cat > ~/.config/lattice/config.toml << 'EOF'
[compiler]
model = "openrouter/openai/gpt-5-mini"
api_key_env = "OPENROUTER_API_KEY"
[thresholds]
warn_tokens = 3000
alert_tokens = 5000
[safety]
auto_apply = true
backup_keep = 10
EOF
export OPENROUTER_API_KEY="sk-or-..."
# Verify setup
lattice status
# Output:
# ── Sessions ─────────────────────
# Total: 0
# Pending evolution: 0
#
# ── Rules ───────────────────────
# rules/: 0 files, 0 tokens
```
```bash
# Day 1-7: Use your agent normally
# OpenCode Plugin captures conversations automatically
# Or use MCP: lattice serve
# Check data collection
lattice status
# Output:
# ── Sessions ─────────────────────
# Total: 23
# Pending evolution: 23
```
### Week 1-2: First Evolution
```bash
# Run Compiler to extract patterns
lattice evolve
# Output:
# Processing 23 sessions...
# LLM response received.
# Proposals written to:
# - drift/proposals/20260219_143052_prefer_explicit_imports.md
# - drift/proposals/20260219_143052_use_result_types.md
#
# Applied 2 proposals.
# Updated last_evolved_at.
# Check generated rules
lattice status
# Output:
# ── Sessions ─────────────────────
# Total: 23
# Pending evolution: 0
#
# ── Rules ───────────────────────
# rules/: 2 files, ~450 tokens
#
# ── Proposals ───────────────────
# Pending: 0
# View generated rules
cat .lattice/rules/*.md
```
### Week 2-4: Iteration & Review
```bash
# Sessions accumulate over time
lattice status
# ── Sessions ─────────────────────
# Total: 67
# Pending evolution: 15
# Run evolution again (incremental - only new sessions)
lattice evolve
# If auto_apply=false, review and apply manually
lattice status
# ── Proposals ───────────────────
# Pending: 3 proposals
# Review a proposal
cat .lattice/drift/proposals/20260225_091234_avoid_bare_except.md
# Apply it
lattice apply drift/proposals/20260225_091234_avoid_bare_except.md
# Or reject it (delete the file)
rm .lattice/drift/proposals/20260225_091234_avoid_bare_except.md
# Made a mistake? Revert
lattice revert
# Restored from backup: .lattice/backups/rules_20260219_143052.tar.gz
```
### Month 1+: Search & Cross-Project
```bash
# Search historical conversations
lattice search "authentication bug"
# Output:
# [1] ses_abc123 (user, 2026-02-15)
# "I keep getting 401 errors on the API..."
# [2] ses_def456 (assistant, 2026-02-15)
# "The JWT tokens were expiring. I added refresh logic..."
# After multiple projects, run global evolution
lattice evolve --global
# Scans all registered projects for cross-project patterns
# Promotes rules appearing in ≥3 projects to global rules
# Check global rules
lattice status --global
# ── Global Rules ─────────────────
# rules/: 3 files, ~600 tokens
```
### Measuring Effectiveness
| Signal | How to Verify |
| ------ | -------------- |
| **Data Collection** | `lattice status` shows increasing session count |
| **Pattern Extraction** | `lattice evolve` generates proposals after ~10+ sessions |
| **Rule Accumulation** | `rules/` directory grows with `.md` files |
| **Behavior Change** | Agent starts applying learned preferences automatically |
| **Search Utility** | `lattice search "keyword"` returns relevant history |
### Key Files to Monitor
```bash
# Session logs (System 2)
sqlite3 .lattice/store.db "SELECT COUNT(*) FROM logs;"
# Generated rules (System 1)
ls -la .lattice/rules/
# Pending proposals
ls -la .lattice/drift/proposals/
# Evolution traces (audit log)
ls -la .lattice/drift/traces/
# Backups
ls -la .lattice/backups/
```
## Directory Structure
```
~/.config/lattice/ # Global configuration (XDG compliant)
├── config.toml # LLM settings
├── projects.toml # Project registry
├── rules/ # Global rules
├── store/
│ └── global.db # Cross-project evidence
├── drift/
│ ├── proposals/ # Pending global proposals
│ └── traces/ # Global Compiler audit logs
└── backups/ # Global rule backups
./.lattice/ # Project-level memory
├── rules/ # Project rules (System 1)
├── store.db # Chat logs (System 2)
├── drift/
│ ├── proposals/ # Pending proposals
│ └── traces/ # Compiler reasoning (audit)
└── backups/ # Rule backups for revert
```
## Core Concepts
| Concept | Description |
| ------------ | -------------------------------------------------------- |
| **Instinct** | Always-loaded rules (preferences, conventions, constraints) |
| **Compiler** | LLM process that extracts patterns from logs and generates rules |
| **Store** | Searchable archive of conversation logs |
| **Promotion**| Project rules promoted to global rules (≥3 projects with same pattern) |
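As a toy illustration of the promotion rule (names and data are hypothetical; this is not Lattice's implementation):

```python
from collections import Counter

def promote(project_rules: dict[str, list[str]], threshold: int = 3) -> list[str]:
    # A rule observed in >= threshold distinct projects becomes a global rule.
    counts = Counter(rule for rules in project_rules.values() for rule in set(rules))
    return [rule for rule, n in counts.items() if n >= threshold]

projects = {
    "api": ["prefer TDD", "use Result types"],
    "web": ["prefer TDD"],
    "cli": ["prefer TDD", "use Result types"],
}
print(promote(projects))  # ['prefer TDD']
```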
## Safety
- **Local-First**: All data stored locally
- **Secret Sanitization**: Automatically filters API keys, passwords, and sensitive information
- **Backup**: Automatic backup on every apply, with revert capability
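As a rough illustration of what secret sanitization can look like (the patterns below are hypothetical examples, not the ones Lattice actually uses):

```python
import re

SECRET_PATTERNS = [
    re.compile(r"sk-[A-Za-z0-9_-]{8,}"),                     # OpenAI-style API keys
    re.compile(r"(?i)(password|api[_-]?key)\s*[:=]\s*\S+"),  # key/value assignments
]

def sanitize(text: str) -> str:
    # Redact anything that looks like a secret before it reaches the store.
    for pat in SECRET_PATTERNS:
        text = pat.sub("[REDACTED]", text)
    return text

print(sanitize("my key is sk-proj-abcdef123456"))  # my key is [REDACTED]
```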
## Documentation
- [RFC-002: Bicameral Memory Architecture](docs/RFC-002-Lattice-Bicameral-Memory.md)
- [Architecture Guide](docs/ARCHITECTURE.md) — Core/Shell layers, data flow, configuration
- [API Reference](docs/api-reference.md) — Python SDK usage
- [Session Compression RFC](docs/RFC-002-R1-Session-Compression.md) — Layer 0/1 compression for token efficiency
## License
AGPL-3.0-only | text/markdown | null | Tefx <zhaomeng.zhu@gmail.com> | null | null | AGPL-3.0-only | agent, ai, llm, mcp, memory, persistent, rag | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"License :: OSI Approved :: GNU Affero General Public License v3",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Topic :: Scientific/Engineering :: Artificial Intelligence"
] | [] | null | null | >=3.11 | [] | [] | [] | [
"deal>=4.24.0",
"invar-runtime>=0.1.0",
"litellm>=1.0.0",
"mcp>=1.0.0",
"opentelemetry-api>=1.20.0",
"opentelemetry-sdk>=1.20.0",
"returns>=0.22.0",
"sqlite-vec>=0.1.0",
"tiktoken>=0.5.0",
"tomli>=2.0.0; python_version < \"3.11\"",
"typer>=0.9.0",
"pytest-dotenv>=0.5.2; extra == \"dev\"",
"pytest>=8.0.0; extra == \"dev\""
] | [] | [] | [] | [
"Homepage, https://github.com/tefx/lattice",
"Documentation, https://github.com/tefx/lattice#readme",
"Repository, https://github.com/tefx/lattice",
"Issues, https://github.com/tefx/lattice/issues"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-21T09:47:33.124429 | lattice_memory-0.1.0.tar.gz | 542,221 | bc/e0/0acfd496f98e8b0ab70323db581d884b0ee06d59376d4f97681beea8aee6/lattice_memory-0.1.0.tar.gz | source | sdist | null | false | 70c55151fc117cc434c56f62a02e3ac6 | 270ca7b8bc56c2de8115ac6e67a3a33d30ee56186c3ce68bc645ce0ca32eb182 | bce00acfd496f98e8b0ab70323db581d884b0ee06d59376d4f97681beea8aee6 | null | [
"LICENSE"
] | 210 |
2.4 | framework-m-standard | 0.3.2 | Default adapters for Framework M (SQLAlchemy, Redis, Litestar) | # Framework M Standard
[](https://badge.fury.io/py/framework-m-standard)
[](https://www.python.org/downloads/)
[](https://opensource.org/licenses/MIT)
[](https://gitlab.com/castlecraft/framework-m/-/pipelines)
[](https://github.com/astral-sh/ruff)
Default adapters for Framework M - batteries-included.
## Installation
```bash
pip install framework-m-standard
```
Or with `uv`:
```bash
uv add framework-m-standard
```
## What's Included
### Database Adapters
- **SQLAlchemy Repository** - Generic repository implementation with async support
- **Schema Mapper** - Automatic Pydantic → SQLAlchemy table generation
- **Migration System** - Alembic integration with auto-migration detection
- **Unit of Work** - Transaction management with optimistic concurrency control
### Web Framework
- **Litestar Integration** - REST API with automatic OpenAPI docs
- **Auth Guards** - JWT authentication and permission checking
- **Exception Handlers** - Consistent error responses
### Cache & Events
- **Redis Cache** - Distributed caching with TTL support
- **Redis Event Bus** - Pub/sub for domain events
### Background Jobs
- **Taskiq Integration** - Background job processing with NATS JetStream
- **Outbox Pattern** - Reliable event publishing
## Supported Databases
| Database | Status |
| ---------- | --------------- |
| PostgreSQL | ✅ Full support |
| SQLite | ✅ For testing |
| MySQL | 🔜 Planned |
## Usage
This package is typically used as a dependency of `framework-m`.
For most applications, install the full `framework-m` package instead.
```python
from framework_m_standard.adapters.db import GenericRepository
from framework_m_standard.adapters.cache import RedisCache
from framework_m_standard.adapters.web import create_litestar_app
```
## License
MIT License - see [LICENSE](https://gitlab.com/castlecraft/framework-m/blob/main/LICENSE) for details.
| text/markdown | Framework M Contributors | null | null | null | MIT | null | [] | [] | null | null | >=3.12 | [] | [] | [] | [
"aioboto3>=15.5.0",
"aiofiles>=25.1.0",
"alembic>=1.13.0",
"argon2-cffi>=23.1.0",
"asyncpg>=0.29.0",
"framework-m-core>=0.6.0",
"litestar[standard]>=2.0.0",
"nats-py>=2.0.0",
"pyjwt>=2.8.0",
"redis>=5.0.0",
"sqlalchemy[asyncio]>=2.0.0",
"taskiq-nats>=0.4.0",
"taskiq>=0.11.0"
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.14.3 | 2026-02-21T09:47:26.636021 | framework_m_standard-0.3.2.tar.gz | 299,163 | 90/ee/412a9575186c3b59dce4fec48a107e88e1000002bf13bdff7a7e7ab45816/framework_m_standard-0.3.2.tar.gz | source | sdist | null | false | 7dc91453ef62549b53719b6a7f093adb | cf9cf4fc2bb3f5757a9afe9150d154fe0126e202d1b62a092c705d1be1ccb554 | 90ee412a9575186c3b59dce4fec48a107e88e1000002bf13bdff7a7e7ab45816 | null | [] | 223 |
2.4 | admin-gen-mcp | 0.1.4 | Vue 3 + Element Plus + Spring Boot full-stack code generator MCP Server | # Admin Gen MCP
Full-stack code generator for a Vue 3 + Element Plus frontend and a Spring Boot + MyBatis-Plus backend, built on the MCP protocol.
## Features
- **Frontend generation**: api.ts, options.ts, index.vue, form.vue
- **Backend generation**: Entity, Controller, Service, ServiceImpl, Mapper, Mapper.xml
- **Full-stack generation**: a single JSON config generates both frontend and backend (with automatic type mapping)
- Supports a main table plus multiple child tables
- Supports dictionary configuration
- Generated code is saved directly to disk; a list of file paths is returned
- Uses the Alibaba Cloud qwen3-max model to generate code
## Quick Start
### Prerequisites
- An Alibaba Cloud DashScope API key ([get one here](https://dashscope.console.aliyun.com/))
- Choose one of two installation methods:
  - **uvx (recommended)**: requires [uv](https://docs.astral.sh/uv/); no manual package installation needed
  - **pip**: requires Python >= 3.10
### Install uv (if using uvx)
```bash
# Windows (PowerShell)
powershell -ExecutionPolicy ByPass -c "irm https://astral.sh/uv/install.ps1 | iex"
# macOS / Linux
curl -LsSf https://astral.sh/uv/install.sh | sh
# Or via pip
pip install uv
```
### Install the package (if using pip)
```bash
pip install admin-gen-mcp
```
## IDE Configuration
Every IDE's MCP configuration must set the `DASHSCOPE_API_KEY` environment variable. Replace `your-api-key` with your Alibaba Cloud DashScope API key.
### Trae IDE
Trae main window → AI panel → Settings (top right) → MCP → Add → Manual configuration:
**uvx (recommended):**
```json
{
  "mcpServers": {
    "admin-gen": {
      "command": "uvx",
      "args": ["admin-gen-mcp"],
      "env": {
        "DASHSCOPE_API_KEY": "your-api-key"
      }
    }
  }
}
```
**pip:**
```json
{
  "mcpServers": {
    "admin-gen": {
      "command": "cmd",
      "args": ["/c", "admin-gen-mcp"],
      "env": {
        "DASHSCOPE_API_KEY": "your-api-key"
      }
    }
  }
}
```
> After saving the configuration, select the **Builder with MCP** agent.
>
> See [TRAE_GUIDE.md](./TRAE_GUIDE.md) for detailed usage.
### Claude Code
```bash
# uvx (recommended)
claude mcp add admin-gen -- uvx admin-gen-mcp
# Then set the DASHSCOPE_API_KEY environment variable
# pip
claude mcp add-json admin-gen '{"type":"stdio","command":"admin-gen-mcp","env":{"DASHSCOPE_API_KEY":"your-api-key"}}'
```
### Cursor
Settings → MCP → Add, then paste the same JSON configuration as for Trae.
### VS Code (Copilot)
Create `.vscode/mcp.json` in the project root with the same JSON configuration as above.
### Windsurf
Settings → MCP → Add configuration, with the same JSON as above.
## Command-Line Tool
```bash
# Generate frontend code from a JSON config file
admin-gen --config fields/example.json
# Interactive mode
admin-gen -i
```
## Python API
```python
from admin_gen_mcp.generator import CodeGenerator
gen = CodeGenerator(api_key="your-api-key")
config = gen.load_config("fields/example.json")
# Full-stack generation
results = gen.generate_fullstack(config)
print(results["frontend"])  # 4 frontend files
print(results["backend"])   # 6-9 backend files
```
## MCP Tools
| Tool | Description |
|------|------|
| `generate_admin_page` | Frontend code generation (field parameters passed directly) |
| `generate_from_config` | Frontend code generation (path to a JSON config file) |
| `generate_backend_code` | Backend code generation (field parameters passed directly) |
| `generate_fullstack` | Combined frontend + backend generation (path to a JSON config file) |
## Config File Format
The configuration keys are Chinese and are part of the tool's actual format: `功能名称` = feature name, `模块路径` = module path, `权限前缀` = permission prefix, `输出目录` = output directory, `主表字段` = main-table fields, `子表配置` = child-table config (`名称` = name, `标识` = identifier, `实体名称` = entity name, `字段列表` = field list), `字典配置` = dictionary config.
```json
{
"功能名称": "员工培训申请",
"模块路径": "hr/trainApplication",
"权限前缀": "hr_trainapplication",
"输出目录": "src/views",
"主表字段": [
{
"label": "培训主题",
"key": "trainTitle",
"type": "input",
"required": true,
"show": true,
"alwaysHide": false,
"smart": true,
"width": "200"
}
],
"子表配置": [
{
"名称": "参训人员明细",
"标识": "detail",
"实体名称": "TrainApplicationDetail",
"字段列表": [...]
}
],
"字典配置": [
{ "key": "trainType", "dict": "hr_train_type", "import": "/@/enums/dict" }
]
}
```
## Field Type Mapping
| Frontend type | Frontend component | Java type |
|-----------|---------|-----------|
| input | el-input | String |
| textarea | el-input[textarea] | String |
| select | el-select | String |
| date | el-date-picker | LocalDate |
| datetime | el-date-picker | LocalDateTime |
| number | microme-operator | BigDecimal |
| upload | upload-file | String |
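For reference, the table above as a plain lookup (a sketch only; the generator's real mapping lives inside admin-gen-mcp):

```python
# Frontend field type -> generated Java type, mirroring the table above.
FRONTEND_TO_JAVA = {
    "input": "String",
    "textarea": "String",
    "select": "String",
    "date": "LocalDate",
    "datetime": "LocalDateTime",
    "number": "BigDecimal",
    "upload": "String",
}

def java_type(frontend_type: str) -> str:
    # Defaulting unknown types to String is an assumption of this sketch.
    return FRONTEND_TO_JAVA.get(frontend_type, "String")

print(java_type("datetime"))  # LocalDateTime
```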
## License
MIT
| text/markdown | null | Your Name <your@email.com> | null | null | null | admin, code-generator, element-plus, mcp, spring-boot, vue | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"httpx>=0.25.0",
"mcp>=1.0.0",
"openai>=1.0.0",
"black>=23.0.0; extra == \"dev\"",
"pytest>=7.0.0; extra == \"dev\"",
"ruff>=0.1.0; extra == \"dev\""
] | [] | [] | [] | [
"Homepage, https://github.com/yourname/admin-gen-mcp",
"Repository, https://github.com/yourname/admin-gen-mcp"
] | twine/6.2.0 CPython/3.14.2 | 2026-02-21T09:47:25.554848 | admin_gen_mcp-0.1.4.tar.gz | 38,022 | 1c/4b/1cac4520fb2be0114841118ffd4cf46d6ed88e350f4b44a66713518a8926/admin_gen_mcp-0.1.4.tar.gz | source | sdist | null | false | 085f174bac671440a7fba8da04aa0b32 | 5e3a1cab343395e1ac384c222ca203a5e122dd6faf1d1c6b1f5b7baa7e38524b | 1c4b1cac4520fb2be0114841118ffd4cf46d6ed88e350f4b44a66713518a8926 | MIT | [] | 203 |
2.4 | framework-m | 0.4.15 | A modern, metadata-driven business application framework | # Framework M
The complete metadata-driven business application framework for Python 3.12+.
Official Website: **[frameworkm.dev](https://www.frameworkm.dev)**
[](https://www.python.org/downloads/)
[](https://badge.fury.io/py/framework-m)
[](https://gitlab.com/castlecraft/framework-m/-/pipelines)
[](https://opensource.org/licenses/MIT)
[](https://github.com/astral-sh/ruff)
[](https://mypy-lang.org/)
[](https://test.pypi.org/project/framework-m/)
> **📦 Metapackage**: This is the primary Framework M package. Installing it gives you everything required to build applications. For individual components, see the [Related Packages](#related-packages) section.
## Overview
Framework M is inspired by [Frappe Framework](https://frappeframework.com/) but built with modern Python practices:
- **Hexagonal Architecture**: Clean separation via Ports & Adapters
- **Async-First**: Native asyncio with Litestar and SQLAlchemy
- **Type-Safe**: 100% type hints, mypy strict compatible
- **Stateless**: JWT/Token auth, no server-side sessions
- **Metadata-Driven**: Define DocTypes as Pydantic models
## Installation
```bash
pip install framework-m
```
Or with `uv`:
```bash
uv add framework-m
```
## Quick Start
### 1. Define a DocType
```python
from framework_m import DocType, Field
class Todo(DocType):
"""A simple task document."""
title: str = Field(description="Task title")
description: str | None = Field(default=None, description="Task details")
is_completed: bool = Field(default=False, description="Completion status")
priority: int = Field(default=1, ge=1, le=5, description="Priority (1-5)")
```
### 2. Use the CLI
```bash
# Show version
m --version
# Show framework info
m info
# Start production server
m prod
# Start development server
m dev
```
## Features
### Metadata-Driven DocTypes
DocTypes are Pydantic models with automatic:
- Database table generation
- REST API endpoints
- JSON Schema for frontends
- Validation and serialization
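The metadata-driven idea (one model definition feeding table, API, and schema generation) can be sketched with plain type annotations. This toy derives a JSON-Schema-like dict from a class; framework-m itself relies on Pydantic for the real thing:

```python
import typing

# Map Python annotation types to JSON Schema type names (toy subset).
PY_TO_JSON = {str: "string", int: "integer", bool: "boolean", float: "number"}

def json_schema(cls) -> dict:
    """Derive a minimal JSON-Schema-like dict from a class's annotations."""
    props = {}
    for name, ann in typing.get_type_hints(cls).items():
        props[name] = {"type": PY_TO_JSON.get(ann, "string")}
    return {"title": cls.__name__, "type": "object", "properties": props}

class Todo:  # stand-in for a DocType; fields mirror the Quick Start example
    title: str
    is_completed: bool
    priority: int

schema = json_schema(Todo)
print(schema["properties"]["priority"])  # {'type': 'integer'}
```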
### Hexagonal Architecture
```
┌─────────────────────────────────────────────────────────────┐
│ Primary Adapters │
│ (HTTP API, CLI, WebSocket, Background Jobs) │
└─────────────────────────────────────────────────────────────┘
│
▼
┌─────────────────────────────────────────────────────────────┐
│ Core Domain │
│ (DocTypes, Controllers, Business Logic) │
│ │
│ ┌─────────────┐ ┌───────────────┐ ┌──────────────┐ │
│ │ BaseDocType │ │ BaseController│ │ MetaRegistry │ │
│ └─────────────┘ └───────────────┘ └──────────────┘ │
└─────────────────────────────────────────────────────────────┘
│
▼
┌─────────────────────────────────────────────────────────────┐
│ Ports (Interfaces) │
│ RepositoryProtocol │ EventBusProtocol │ PermissionProtocol │
│ StorageProtocol │ JobQueueProtocol │ CacheProtocol │
└─────────────────────────────────────────────────────────────┘
│
▼
┌─────────────────────────────────────────────────────────────┐
│ Secondary Adapters │
│ (PostgreSQL, Redis, S3, SMTP, External APIs) │
└─────────────────────────────────────────────────────────────┘
```
### Built-in Protocols
| Protocol | Purpose |
| ---------------------- | ----------------------------- |
| `RepositoryProtocol` | CRUD operations for documents |
| `EventBusProtocol` | Publish/subscribe events |
| `PermissionProtocol` | Authorization with RLS |
| `StorageProtocol` | File storage abstraction |
| `JobQueueProtocol` | Background job processing |
| `CacheProtocol` | Caching layer |
| `NotificationProtocol` | Email/SMS notifications |
| `SearchProtocol` | Full-text search |
| `PrintProtocol` | PDF generation |
| `I18nProtocol` | Internationalization |
### Extensibility
Override any adapter via Python entrypoints:
```toml
# pyproject.toml
[project.entry-points."framework_m.overrides"]
repository = "my_app.adapters:CustomRepository"
```
## Technology Stack
- **Web Framework**: [Litestar](https://litestar.dev/) 2.0+
- **ORM**: [SQLAlchemy](https://www.sqlalchemy.org/) 2.0 (Async)
- **Validation**: [Pydantic](https://docs.pydantic.dev/) V2
- **Task Queue**: [Taskiq](https://taskiq-python.github.io/) + NATS JetStream
- **DI Container**: [dependency-injector](https://python-dependency-injector.ets-labs.org/)
- **Database**: PostgreSQL (default), SQLite (testing)
- **Cache/Events**: Redis
## Project Structure
```
libs/framework-m/
├── src/framework_m/
│ ├── core/
│ │ ├── domain/ # DocType, Controller, Mixins
│ │ └── interfaces/ # Protocol definitions (Ports)
│ ├── adapters/ # Infrastructure implementations
│ ├── cli/ # CLI commands
│ └── public/ # Built-in DocTypes
└── tests/
```
## Development
```bash
# Clone and setup
git clone https://gitlab.com/castlecraft/framework-m.git
cd framework-m
# Install dependencies
uv sync --all-extras
# Run tests
uv run pytest
# Type checking
uv run mypy src/framework_m --strict
# Linting
uv run ruff check .
uv run ruff format .
```
## Documentation
- [Architecture Overview](https://gitlab.com/castlecraft/framework-m/blob/main/ARCHITECTURE.md)
- [Contributing Guide](https://gitlab.com/castlecraft/framework-m/blob/main/CONTRIBUTING.md)
- [Phase Checklists](https://gitlab.com/castlecraft/framework-m/tree/main/checklists)
## Related Packages
Framework M is split into modular packages:
| Package | Description |
| ------------------------------------------------------------------------ | --------------------------------------------------------------- |
| [`framework-m`](https://pypi.org/project/framework-m/) | **This package** - Metapackage with CLI, kernel, and re-exports |
| [`framework-m-core`](https://pypi.org/project/framework-m-core/) | Core protocols and dependency injection |
| [`framework-m-standard`](https://pypi.org/project/framework-m-standard/) | Default adapters (SQLAlchemy, Redis, Litestar) |
| [`framework-m-studio`](https://pypi.org/project/framework-m-studio/) | Visual DocType builder UI |
## License
MIT License - see [LICENSE](https://gitlab.com/castlecraft/framework-m/blob/main/LICENSE) for details.
## Acknowledgments
Inspired by [Frappe Framework](https://frappeframework.com/), reimagined with:
- Modern Python (3.12+, async/await, type hints)
- Clean architecture (Hexagonal/Ports & Adapters)
- No global state (Dependency Injection)
- Code-first schemas (Pydantic, not database JSON)
| text/markdown | Framework M Contributors | null | null | null | MIT | async, business, doctype, framework, metadata | [
"Development Status :: 3 - Alpha",
"Framework :: AsyncIO",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Typing :: Typed"
] | [] | null | null | >=3.12 | [] | [] | [] | [
"framework-m-core>=0.6.0",
"framework-m-standard>=0.3.2"
] | [] | [] | [] | [
"Homepage, https://gitlab.com/castlecraft/framework-m",
"Documentation, https://gitlab.com/castlecraft/framework-m#readme",
"Repository, https://gitlab.com/castlecraft/framework-m"
] | twine/6.2.0 CPython/3.14.3 | 2026-02-21T09:47:15.138198 | framework_m-0.4.15.tar.gz | 349,260 | c2/47/1e14af0df60b80d42a1aa2c1b9602d7e230e3422e40d7cbe40d58ebd1d5d/framework_m-0.4.15.tar.gz | source | sdist | null | false | 81bf4a8628dd68b4b94acf395f863b83 | 8e19f1881687e719fbe506a402ecb53d5c23003bebcdf7a9286e3f5d4ae13f6a | c2471e14af0df60b80d42a1aa2c1b9602d7e230e3422e40d7cbe40d58ebd1d5d | null | [] | 219 |
2.4 | currencyapinet | 2.0.0 | Python wrapper for CurrencyApi.net | # CurrencyApi Python wrapper
[](https://pypi.org/project/currencyapinet/) [](https://coveralls.io/github/houseofapis/currencyapi-python?branch=main)
**Note:** API v1 is deprecated and will be retired on **31st July 2026**, at which point all v1 traffic will be redirected to v2. This SDK (v2.0.0+) targets API v2. If you are on an older version of this SDK, please upgrade.
<a href="https://currencyapi.net" title="CurrencyApi">CurrencyApi.net</a> provides live currency rates via a REST API. A live currency feed for over 166 currencies, including physical (USD, GBP, EUR + more) and cryptos (Bitcoin, Litecoin, Ethereum + more). A JSON and XML currency api updated every 60 seconds.
Features:
- Live exchange rates (updated every 60 seconds).
- 166 world currencies.
- Popular cryptocurrencies included: Bitcoin, Litecoin, Ethereum, and more.
- Convert currencies on the fly with the convert endpoint.
- Historical currency rates back to year 2000.
- OHLC (Open, High, Low, Close) data for technical analysis (Tier 3+).
- Easy to follow <a href="https://currencyapi.net/documentation" title="currency-api-documentation">documentation</a>
Signup for a free or paid account <a href="https://currencyapi.net/#pricing-sec" title="currency-api-pricing">here</a>.
## About This Package
A Python wrapper for <a href="https://currencyapi.net" title="CurrencyApi">CurrencyApi.net</a> endpoints (API v2).
## Developer Guide
For an easy-to-follow developer guide, check out our [Python Developer Guide](https://currencyapi.net/sdk/python).
Alternatively keep reading below.
#### Prerequisites
- Minimum Python version 3.10
- Last tested and working on Python 3.14
- Free or Paid account with CurrencyApi.net
#### Test Coverage
- 100% coverage
## Installation
View our <a href="https://currencyapi.net/sdk/python">Python SDK</a> guide
```
pip install currencyapinet
```
## Usage
```python
from currencyapinet import Currency
currency = Currency('YOUR_API_KEY')
```
---
### Rates
Returns live currency rates for all supported currencies. Base currency defaults to USD.
```python
result = currency.rates().get()
# With optional base currency
result = currency.rates().base('EUR').get()
# XML output
result = currency.rates().output('XML').get()
```
---
### Convert
Converts an amount from one currency to another.
```python
result = currency.convert().from_currency('USD').to_currency('EUR').amount(100).get()
```
---
### History
Returns historical currency rates for a specific date.
```python
result = currency.history().date('2023-12-25').get()
# With optional base currency
result = currency.history().base('GBP').date('2023-12-25').get()
```
---
### Timeframe
Returns historical currency rates for a date range (max 365 days).
```python
result = currency.timeframe().start_date('2023-12-01').end_date('2023-12-31').get()
# With optional base currency
result = currency.timeframe().base('GBP').start_date('2023-12-01').end_date('2023-12-31').get()
```
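The 365-day limit can be checked client-side before making the request, saving a round trip. A small sketch (the API enforces the limit server-side; the exact boundary semantics here are an assumption):

```python
from datetime import date

def check_timeframe(start: str, end: str, max_days: int = 365) -> int:
    """Validate a YYYY-MM-DD date range and return its length in days."""
    days = (date.fromisoformat(end) - date.fromisoformat(start)).days
    if not 0 <= days <= max_days:
        raise ValueError(f"range must be 0-{max_days} days, got {days}")
    return days

print(check_timeframe("2023-12-01", "2023-12-31"))  # 30
```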
---
### Currencies
Returns a list of all supported currencies.
```python
result = currency.currencies().get()
```
---
### OHLC
Returns OHLC (Open, High, Low, Close) data for a currency pair on a specific date. Requires a Tier 3 subscription.
**Parameters:**
| Parameter | Required | Description |
|------------|----------|-------------|
| `currency` | Yes | Target currency code (e.g. `EUR`, `GBP`, `BTC`) |
| `date` | Yes | Date in `YYYY-MM-DD` format (must be in the past) |
| `base` | No | Base currency code (defaults to `USD`) |
| `interval` | No | Time interval: `5m`, `15m`, `30m`, `1h`, `4h`, `12h`, `1d` (defaults to `1d`) |
```python
# Basic request (1-day interval)
result = currency.ohlc().currency('EUR').date('2023-12-25').get()
# With custom interval
result = currency.ohlc().currency('GBP').date('2023-12-25').interval('1h').get()
# With custom base currency and interval
result = currency.ohlc().currency('JPY').date('2023-12-25').base('EUR').interval('4h').get()
# XML output
result = currency.ohlc().currency('EUR').date('2023-12-25').output('XML').get()
```
**Example response:**
```json
{
"valid": true,
"base": "USD",
"quote": "EUR",
"date": "2023-12-25",
"interval": "1d",
"ohlc": [
{
"start": "2023-12-25T00:00:00Z",
"open": 0.92000000000000,
"high": 0.92500000000000,
"low": 0.91800000000000,
"close": 0.92200000000000
}
]
}
```
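A response of this shape is straightforward to post-process. The sketch below hard-codes a trimmed copy of the example response (no live API call) and derives each candle's high-low range and close-open change:

```python
# Trimmed, hard-coded copy of the example OHLC response above.
response = {
    "base": "USD",
    "quote": "EUR",
    "interval": "1d",
    "ohlc": [
        {"start": "2023-12-25T00:00:00Z",
         "open": 0.92, "high": 0.925, "low": 0.918, "close": 0.922},
    ],
}

# Per-candle stats; rounding avoids float noise in the differences.
stats = [
    {
        "start": c["start"],
        "range": round(c["high"] - c["low"], 6),
        "change": round(c["close"] - c["open"], 6),
    }
    for c in response["ohlc"]
]
print(stats[0])  # {'start': '2023-12-25T00:00:00Z', 'range': 0.007, 'change': 0.002}
```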
| text/markdown | Oli Girling | support@currencyapi.net | null | null | null | currency feed, currency rates, currencyapi, currency | [
"License :: OSI Approved :: MIT License",
"Intended Audience :: Developers",
"Programming Language :: Python :: 3 :: Only",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: 3.14",
"Topic :: Software Development :: Libraries :: Python Modules"
] | [] | https://currencyapi.net/sdk/python | null | >=3.10 | [] | [] | [] | [
"requests"
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.14.3 | 2026-02-21T09:46:36.761146 | currencyapinet-2.0.0.tar.gz | 9,336 | 78/0e/6a5a10131f9daac2754a3be0c10b5d7caf2d7cc2108c3fe19dba118bf04d/currencyapinet-2.0.0.tar.gz | source | sdist | null | false | 7658551f8637e33427489d0c642f8dd6 | d432fb4f892078207189ebc5ae28ab0bc183b4d9c72e5b3a9443b13e421d8c7a | 780e6a5a10131f9daac2754a3be0c10b5d7caf2d7cc2108c3fe19dba118bf04d | null | [
"LICENSE"
] | 206 |
2.4 | framework-m-studio | 0.4.7 | Framework M Studio - Visual DocType Builder & Developer Tools | # Framework M Studio
Visual DocType builder and developer tools for Framework M.
[](https://badge.fury.io/py/framework-m-studio)
[](https://www.python.org/downloads/)
[](https://opensource.org/licenses/MIT)
[](https://gitlab.com/castlecraft/framework-m/-/pipelines)
[](https://github.com/astral-sh/ruff)
## Overview
`framework-m-studio` provides development-time tools that are NOT included in the production runtime:
- **Studio UI**: Visual DocType builder (React + Vite)
- **Code Generators**: LibCST-based Python code generation
- **DevTools CLI**: `m codegen`, `m docs:generate`
> **Note:** Studio is for developers to build DocTypes. The **Desk** (end-user data management UI) is a separate frontend that connects to the Framework M backend.
## Installation
```bash
# Add to your project's dev dependencies
uv add --dev framework-m-studio
```
## Documentation
Check out the official documentation for Studio at **[www.frameworkm.dev](https://www.frameworkm.dev)**
## Usage
```bash
# Start Studio UI
m studio
# Generate TypeScript client from OpenAPI
m codegen client --lang ts --out ./frontend/src/api
```
## Development
```bash
cd apps/studio
uv sync
uv run pytest
```
| text/markdown | Framework M Contributors | null | null | null | MIT | codegen, developer-tools, doctype, framework, studio | [
"Development Status :: 3 - Alpha",
"Framework :: AsyncIO",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Typing :: Typed"
] | [] | null | null | >=3.12 | [] | [] | [] | [
"framework-m-core>=0.6.0",
"framework-m>=0.4.15",
"honcho>=1.1.0",
"jinja2>=3.1.0",
"libcst>=1.0.0"
] | [] | [] | [] | [
"Homepage, https://gitlab.com/castlecraft/framework-m",
"Documentation, https://gitlab.com/castlecraft/framework-m#readme",
"Repository, https://gitlab.com/castlecraft/framework-m"
] | twine/6.2.0 CPython/3.14.3 | 2026-02-21T09:45:28.880809 | framework_m_studio-0.4.7.tar.gz | 532,849 | 24/7e/817beb55242cf7d502b8e109438439cb5325fe09c05b357e0710e378d9be/framework_m_studio-0.4.7.tar.gz | source | sdist | null | false | 403e431943c9b3894943bbf19145fada | 064fd69ce9af228ac0a98b2d942f251b384da396304f468f1852e476e050af09 | 247e817beb55242cf7d502b8e109438439cb5325fe09c05b357e0710e378d9be | null | [] | 206 |